Product Innovation Toolbox
Product Innovation Toolbox
A Field Guide to Consumer Understanding and Research
Edited by
Jacqueline Beckley, The Understanding & Insight Group LLC, Denville, New Jersey, USA

Dulce Paredes, Ph.D., Takasago International Corporation (USA), Rockleigh, New Jersey, USA

Kannapon Lopetcharat, Ph.D., NuvoCentric, Bangkok, Thailand
A John Wiley & Sons, Ltd., Publication
This edition first published 2012 © 2012 by John Wiley & Sons, Inc.

Wiley-Blackwell is an imprint of John Wiley & Sons, formed by the merger of Wiley’s global Scientific, Technical and Medical business with Blackwell Publishing.

Editorial Offices: 2121 State Avenue, Ames, Iowa 50014-8300, USA; The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, UK; 9600 Garsington Road, Oxford OX4 2DQ, UK

For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com/wiley-blackwell.

Authorization to photocopy items for internal or personal use, or the internal or personal use of specific clients, is granted by Blackwell Publishing, provided that the base fee is paid directly to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923. For those organizations that have been granted a photocopy license by CCC, a separate system of payments has been arranged. The fee codes for users of the Transactional Reporting Service are ISBN-13: 978-0-8138-2397-3/2012.

Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book.

This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

Library of Congress Cataloging-in-Publication Data
Product innovation toolbox : a field guide to consumer understanding and research / edited by Jacqueline Beckley, Dulce Paredes, Kannapon Lopetcharat.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-8138-2397-3 (hard cover : alk. paper)
1. New products. 2. Consumer behavior. 3. Marketing research. I. Beckley, Jacqueline H. II. Paredes, Dulce. III. Lopetcharat, Kannapon.
TS170.P758 2012
658.8′3–dc23
2011037446

A catalogue record for this book is available from the British Library.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Set in 9/12pt Interstate Light by SPi Publisher Services, Pondicherry, India

1 2012
Contents

Contributors
Acknowledgments
Introduction: From Pixel to Picture
Jacqueline Beckley, Dulce Paredes and Kannapon Lopetcharat
   Scoping the innovation landscape
   How this book is organized
   Part I
   Part II
   Part III
   References

PART I  STARTING THE JOURNEY AS A CONSUMER EXPLORER

1  Setting the Direction: First, Know Where You Are
   Howard Moskowitz and Jacqueline Beckley
   1.1  Roles in the corporation – the dance of the knowledge worker
   1.2  Insights leader – learning on the job versus learning in school
   1.3  Being the authentic you
   1.4  What should you read?
   1.5  What else do you need to do to prepare to be an insight leader?
   1.6  Dealing with management and your clients
   1.7  Guidelines to success
   1.8  Reporting results
   1.9  Do not “winstonize”
   1.10 Making it public – helpful hints to grow from student to professional
   1.11 The two types of professionals in the world of evaluating products (and studying consumers)
   1.12 Knowing your limits and inviting others in
   1.13 The bottom line – what’s it all about?
   References

2  The Consumer Explorer: The Key to Delivering the Innovation Strategy
   Dulce Paredes and Kannapon Lopetcharat
   2.1  You as a brand
   2.2  The roles of the Consumer Explorer
   2.3  Taking the lead
   2.4  Practical advice from seasoned Consumer Explorers
   References

3  Invention and Innovation
   Daniel Ennis
   3.1  Invention and innovation
   3.2  The steam engine: Watt and Boulton
   3.3  Nike: Bowerman and Knight
   3.4  The US Navy: Scott and Sims
   3.5  Consumer-perceived benefits: Coffee, beer and cigarettes
   3.6  Extensibility: Is there a limit to it?
   3.7  Innovation in scaling intensities and emotions
   3.8  Scaling intensities
   3.9  Scaling emotions (hedonics)
   3.10 Final remarks
   References

4  Designing the Research Model
   Kannapon Lopetcharat, Dulce Paredes and Jennifer Hanson
   4.1  Factors influencing product innovation
   4.2  Setting up a successful product innovation program
   4.3  Current approach to new product development
   4.4  Iterative qualitative-quantitative research model
   References

5  What You Must Look For: Finding High Potential Insights
   Kannapon Lopetcharat, Jennifer Hanson and Dulce Paredes
   5.1  What is an insight?
   5.2  What is an “ownable” insight?
   5.3  How to develop high potential insights
   5.4  Behavior: The basis for all insights
   5.5  Attitudes and needs: The explanation for behavior
   5.6  Demographics and lifestyles: The personal connection
   5.7  Making insights ownable
   5.8  Summary
   References

PART II  RESEARCH TOOLS OF THE CONSUMER EXPLORER

6  Tools for Up-Front Research on Consumer Triggers and Barriers
   6.1  Understanding Consumer Languages
        Kannapon Lopetcharat
        6.1.1  Consumers do not understand these technical words, so what should we say about our new products?
        6.1.2  How to select a method
        6.1.3  Free elicitation and Zaltman metaphor elicitation technique
        6.1.4  Laddering interview
        6.1.5  Potential problems when applying laddering interview in practice
        6.1.6  Kelly’s repertory grid and flash profiling
        6.1.7  Summary and future
        References
   6.2  Insights Through Immersion
        Donna Sturgess
        6.2.1  The power of immersive experience
        6.2.2  Immerse yourself
        6.2.3  Conductive thinking
        6.2.4  Getting started
        6.2.5  Plunging into illumination
        6.2.6  Taking action
        6.2.7  Summary and future
        References
   6.3  Qualitative Multivariate Analysis
        Kannapon Lopetcharat and Jacqueline Beckley
        6.3.1  Consumers do not know what they want, really. Really?
        6.3.2  Introduction
        6.3.3  Qualitative Multivariate Analysis in practice
        6.3.4  Qualitative Multivariate Analysis in practice: Deeper understanding of cottage cheese consumption
        6.3.5  Consumer perceived values
        6.3.6  Summary and future of Qualitative Multivariate Analysis
        References
   6.4  The Gameboard “Model Building”
        Cornelia Ramsey
        6.4.1  The problem – how to talk to consumers about new products that do not exist
        6.4.2  A new method: Gameboard strategy “Model Building”
        6.4.3  Construction: Creative process model
        6.4.4  Interview guide for model construction methodology
        6.4.5  Ensuring reliability of the outcomes
        6.4.6  Analysis of the outcomes from Gameboard “Model Building”
        6.4.7  Analysis overview
        6.4.8  Consumer-centered products and Gameboard “Model Building”
        6.4.9  Limitations
        6.4.10 Theoretical background of model construction methodology
        6.4.11 Summary and future
        References
   6.5  Quantitative Anthropology
        Jennifer Hanson
        6.5.1  Anthropology: A brief introduction
        6.5.2  The rise of ethnography in marketing
        6.5.3  The elephant in the room
        6.5.4  Quantitative Anthropology (QA)
        6.5.5  Quantitative anthropology in practice
        6.5.6  Under the hood
        6.5.7  Applications of quantitative anthropology
        6.5.8  Future potential
        References
   6.6  Emotion Research as Input for Product Design
        Pieter Desmet and Hendrik Schifferstein
        6.6.1  Putting emotion at the center: emotion-driven design
        6.6.2  New product development and design
        6.6.3  Emotional responses to consumer products
        6.6.4  Methods for emotion research in new product development
        6.6.5  Emotion research in new product development
        6.6.6  Summary and future of emotional research
        References

7  Tools for Up-Front Research on Understanding Consumer Values
   7.1  Kano Satisfaction Model
        Alina Stelick, Kannapon Lopetcharat and Dulce Paredes
        7.1.1  Understanding the fundamental of consumer satisfaction – Kano satisfaction survey
        7.1.2  Kano satisfaction survey step by step
        7.1.3  Comparison with degree of importance surveys
        7.1.4  Philosophy behind the Kano satisfaction model
        7.1.5  Summary and future
        References
   7.2  Conjoint Analysis Plus (Cross Category, Emotions, Pricing and Beyond)
        Daniel Moskowitz and Howard Moskowitz
        7.2.1  Consumer research: Experimentation vs. testing
        7.2.2  Conjoint analysis (aka conjoint measurement)
        7.2.3  Doing the basic conjoint analysis experiment
        7.2.4  The raw material of CA
        7.2.5  Experimental design
        7.2.6  Building models
        7.2.7  Presenting the result – numbers, text, data, talk, move to steps
        7.2.8  Using the results – what do the numbers tell us?
        7.2.9  Beyond individual groups to segments
        7.2.10 New analytic advances in conjoint analysis
        7.2.11 “Next generation” thinking in conjoint analysis
        7.2.12 Discovering the “new” through conjoint analysis – creating an innovation machine
        7.2.13 Dealing with prices
        7.2.14 Mind Genomics™: A new “science of the mind” based upon conjoint analysis
        7.2.15 Four considerations dictating the future use of conjoint analysis
        Acknowledgment
        References
   7.3  Benefit Hierarchy Analysis
        Efim Shvartsburg
        7.3.1  Benefit hierarchy analysis – a new way to identify what drives consumers’ liking, purchase intent or preference
        7.3.2  Hierarchy analysis vs. traditional approaches
        7.3.3  Bounded rationality: the reason behind benefit hierarchy
        7.3.4  How hierarchy analysis ranks the benefits and product attributes
        7.3.5  Identify drivers of liking, purchase intent or preferences
        7.3.6  Consumer segmentation using individual schemas
        7.3.7  Summary and future
        References

8  Tools to Refine and Screen Product Ideas in New Product Development
   8.1  Contemporary Product Research Tools
        Michele Foley
        8.1.1  Introduction
        8.1.2  What is a concept?
        8.1.3  What is a concept test?
        8.1.4  Considerations in conducting a concept test
        8.1.5  Sampling: Who do you test with?
        8.1.6  Contemporary measures
        8.1.7  Conclusion: From winning idea to successful product
        References
   8.2  Insight Teams: An Arena For Discovery
        Stacey Cox
        8.2.1  Insight teams for discovery
        8.2.2  Definition of an insight team
        8.2.3  When to apply the skills of an insight team
        8.2.4  Implementing insight teams for development
        8.2.5  How to use the insight team
        8.2.6  Case study of using the insight team
        8.2.7  The future of insight teams
        References
   8.3  Consumer Advisory Boards: Incorporating Consumers Into Your Product Development Team
        Leah Gruenig
        8.3.1  Introduction
        8.3.2  Conducting consumer advisory boards
        8.3.3  Case study
        8.3.4  Summary
        References
   8.4  Defining the Product Space and Rapid Product Navigation
        Jenny Lewis, Ratapol Teratanavat and Melissa Jeltema
        8.4.1  Listening to understand: Rapid product navigation
        8.4.2  Recommended tools and “how to” implement
        8.4.3  Case study
        8.4.4  Theoretical background of the tools
        8.4.5  Summary and future of the tools
        References
   8.5  Free-Choice in Context Preference Ranking: A New Approach for Portfolio Assessment
        Ratapol Teratanavat, James Mwai and Melissa Jeltema
        8.5.1  Want to offer more but how many is too many?
        8.5.2  Current approaches on product line extension
        8.5.3  Free-choice in context preference ranking
        8.5.4  Theoretical backgrounds of free-choice in context preference ranking
        8.5.5  Summary and future
        References

9  Tools to Validate New Products for Launch
   9.1  Extended Use Product Research for Predicting Market Success
        Ratapol Teratanavat, Melissa Jeltema and Stephanie Plunkett
        9.1.1  Balancing two important acts: Introducing new products and optimizing portfolio
        9.1.2  Shortcomings of traditional approaches
        9.1.3  An alternative: Extended use product research
        9.1.4  Steps in conducting extended use product research
        9.1.5  Understanding consumer segments
        9.1.6  Assessment of sensory performance
        9.1.7  Understanding how consumers make choice decisions
        9.1.8  Using behavioral measures to help assess product viability
        9.1.9  Among users, they were also segmented into situational users and regular users
        9.1.10 Philosophy behind extended use product research
        9.1.11 Summary and future
        References
   9.2  Product Concept Validation Tests
        Jennifer Hanson
        9.2.1  The final verdict: Concept product validation testing
        9.2.2  Type of innovation
        9.2.3  Target market
        9.2.4  Competitive set
        9.2.5  Sales forecast
        9.2.6  Types of validation tests
        9.2.7  Central location test
        9.2.8  Home-use test
        9.2.9  Test market: Small-scale, in-market launch
        9.2.10 Metrics for success

PART III  WORDS OF THE WISE

10  Putting It All Together: Building and Managing Consumer-Centric Innovation
    Michael Murphy
    10.1  Researchers becoming breakthrough facilitators: The stairway to heaven
    10.2  Transformational team experiences 1: Where we observe comedians get naked
    10.3  Transformational team experiences 2: Why everybody who works for me will someday be wearing women’s underwear (or the “why we’re always hiring” model)
    10.4  Building stronger teams 1: Forming the group
    10.5  Building stronger teams 2: Failure equals ownership (or the “you break it, you buy it” model)
    10.6  Avoiding product feature dilution: The barrier to breaking through
    10.7  Researchers becoming breakthrough facilitators: A reprise
    10.8  Summary and future

11  Words of the Wise: The Roles of Experts, Statisticians and Strategic Research Partners
    11.1  Above Averages: Use of Statistics, Design of Experiment and Product Innovation Applications
          Frank Rossi
          11.1.1  Brief history of experimental design
          11.1.2  Summary and future
          References
    11.2  The Role of In-House Technical Experts
          Veronica Symon
          11.2.1  First, look inside for the answer; it may be closer than you think
          11.2.2  In-house experts – magic touch to success
          11.2.3  How to work with in-house experts – advice for sensory professionals
          11.2.4  Some ideas to approach innovation projects
    11.3  How to Leverage Research Partners (Local and International Testing)
          Gigi Ryan, Jerry Stafford and Jim Rook
          11.3.1  Holistic partnership
          11.3.2  Benefits of a client–research agency partnership
          11.3.3  Example of benefits through holistic partnership
          11.3.4  Creating and maintaining a relationship
          11.3.5  Getting the most out of the relationship
          11.3.6  What to watch out for: Possible pitfalls
          11.3.7  Partnering for international research
          11.3.8  Summary and future
    11.4  Best Practices in Global Testing and Multi-Cultural Consumer Research
          Alejandro Camacho
          11.4.1  Introduction
          11.4.2  Step 1: Company’s internal stakeholders input
          11.4.3  Step 2: Secondary research
          11.4.4  Step 3: Country-based subsidiary or office branch
          11.4.5  Step 4: Developing a multi-country product testing checklist
          References

12  Future Trends and Directions
    Jacqueline Beckley, Dulce Paredes and Kannapon Lopetcharat
    12.1  Digital technology will continue to drive mobility, convenience and speed
    12.2  Engaged people (consumers) will continue to drive products and research
    12.3  Play and games will enhance respondent participation
    12.4  Hybrid data and patterns
    12.5  Translational research
    References

Index
Contributors

Jacqueline Beckley, The Understanding & Insight Group LLC, Denville, New Jersey, USA
Alejandro Camacho, Hispanic Senses Marketing, Inc., Cincinnati, Ohio, USA
Stacey Cox, H.J. Heinz Company, Pittsburgh, Pennsylvania, USA
Pieter Desmet, Ph.D., Faculty of Industrial Design Engineering, Delft University of Technology, The Netherlands
Daniel Ennis, Ph.D., The Institute for Perception, Richmond, Virginia, USA
Michele Foley, Nestlé, Fremont, Michigan, USA
Leah Gruenig, General Mills, Minneapolis, Minnesota, USA
Jennifer Hanson, Sequoia Partners, LLC, Canton, Connecticut, USA
Melissa Jeltema, Ph.D., previously with Altria Client Services, Richmond, Virginia; currently with The Understanding & Insight Group LLC, Denville, New Jersey, USA
Jenny Lewis, Altria Client Services, Richmond, Virginia, USA
Kannapon Lopetcharat, Ph.D., NuvoCentric, Bangkok, Thailand
Daniel Moskowitz, Moskowitz Jacobs Inc., White Plains, New York, USA
Howard Moskowitz, Ph.D., Moskowitz Jacobs Inc., White Plains, New York, USA
Michael Murphy, The Hershey Company, Hershey, Pennsylvania, USA
James Mwai, Altria Client Services, Richmond, Virginia, USA
Dulce Paredes, Ph.D., Takasago International Corporation (USA), Rockleigh, New Jersey, USA
Stephanie Plunkett, Ph.D., Altria Client Services, Richmond, Virginia, USA
Cornelia Ramsey, Ph.D., MSPH, Virginia Commonwealth University, Richmond, Virginia, USA
Jim Rook, The Pert Group, Farmington, Connecticut, USA
Frank Rossi, Kraft Foods, Glenview, Illinois, USA
Gigi Ryan, The Pert Group, Farmington, Connecticut, USA
Hendrik Schifferstein, Ph.D., Faculty of Industrial Design Engineering, Delft University of Technology and Studio ZIN, The Netherlands
Efim Shvartsburg, Ph.D., The Pert Group, Farmington, Connecticut, USA
Jerry Stafford, Chianti, Italy
Alina Stelick, Avon Products Inc., Suffern, New York, USA
Donna Sturgess, Buyology Inc., New York, New York, USA
Veronica Symon, Pepperidge Farm, Inc., Norwalk, Connecticut, USA
Ratapol Teratanavat, Ph.D., Altria Client Services, Richmond, Virginia, USA
Acknowledgments

I want to thank each of you for wanting to do such an awesome job to help others. I would also like to thank Leslie, my husband, for his patience with me and my “projects”.
Jacqueline Beckley

Jackie and Kannapon, you are my dream team. I would like to thank my husband, Rollie, and my children, Nathalie and Robert, for cheering me on as I flex my “academic side”.
Dulce Paredes

I want to thank each of you for giving me this opportunity. I also want to thank my parents and my brother (Mrs. Preeya Suwankul, Mr. Somkirt Lopetcharat and Mr. Akaraj Lopetcharat) for their support, and Professor Mina McDaniel and Professor Jae Park, whose support opened the door of opportunity for me to come to the US and meet you and many magnificent colleagues.
Kannapon Lopetcharat

Finally, we would like to thank our book contributors, who are all excellent practitioners willing to share their knowledge with future Consumer Explorers and Product Researchers. We would also like to thank the following individuals who helped us put the finishing touches on the book: Rita Rozenshteyn, John Thomas, Divina Paredes, Nathalie Tadena and Linda Lieberman.
The Editors
Introduction
From Pixel to Picture
Jacqueline Beckley, Dulce Paredes and Kannapon Lopetcharat
Consumer packaged goods companies are constantly challenged to introduce new product innovations that resonate strongly with consumers and set them apart from products currently in the market. An Ipsos global survey showed that consumers rank food and beverage, personal and household products low on the list of innovative products compared to computer equipment and electronics (Palmer, 2009). Apple, Google, Procter & Gamble, Starbucks and Dyson are known as successful companies that stand out from their rivals in the marketplace, not only because they regularly reinvent their products in ways that redefine their competition, but also because their products change consumers’ behavior and make consumers fall in love with the brands.

In Product Innovation Toolbox: A Field Guide to Consumer Understanding and Research, key thought-leaders and seasoned consumer researchers from corporate research and development (R&D), academia and product or marketing research companies share their experiences, cutting-edge consumer research tools and practical tips for successful and sustainable product innovation, implemented both in well-known innovators and in companies that are quieter in their creativity. The field guide is essential for a wide range of people:

● Executives looking to understand whether their current global practices incorporate these newer approaches that address weaknesses of past methodologies. These individuals must be asking: “How much of my benchmarked approaches are inadequate today, and which new approaches should be started, somewhere?”
● Product practitioners like product developers, product researchers, marketers, product designers, marketing researchers and technologists, who want to implement consumer-centric innovation and are responsible for designing product-understanding strategies from up-front innovation to support new product development (NPD). These people will be saying: “What can I do better, smarter, faster to get the same or better results than I have been getting?”
● Educators and their students, who will find the information in this book an asset in training that is more relevant to what is needed today to understand consumers, their behavior and their choices. This group will be thinking: “How do I get the training and experience to educate against these new tools?”
The field guide offers guidelines and best practices for strategizing, designing, planning and executing product research in which the consumer, the person being studied, is viewed as a person, not a “subject”. Our goal is to provide the reader with confidence and high efficiency (faster and better insights). The methods provided are unique in their design and less familiar than legacy approaches, yet they are proven to work in many industrial settings.

The field guide equips the reader to become a “Consumer Explorer” (CE), an insight leader and strategic innovator who can infuse and apply deep consumer understanding throughout the product innovation process. Think of the CE taking a digital picture of the consumer and/or category landscape to identify breakthrough insights. A digital image is composed of many pixels (pixel is short for “picture element”; a pixel is a single point in a graphic image) that are close together so that they appear connected. The number of pixels partly defines the resolution of the image: up to a point, the more pixels, the more exact the image. But a whole image is also the sum of its parts, so the individual pixels in a picture make up what we see as the whole image. To gain a competitive edge, the CE must scope the landscape, capture the aha! moment quicker and connect the dots faster to reveal the breakthrough insight before everyone else. Being well versed in both the pixel and the picture can make all the difference in the strategic direction of a project. Our imagery throughout is to move from pixels, or pieces of the picture, to the picture created by those pieces.

This field guide contains 11 major chapters that will transform a consumer researcher into a Consumer Explorer by providing a step-by-step guide that shows how to design an innovative consumer research program from start to finish (pixel to picture). A CE differs from ordinary consumer researchers because, in addition to understanding consumers deeply, the CE can guide project teams to connect the dots and comprehend the big picture faster in a systematic manner. The ability to have an integrated, iterative process which links one part of the process to the next is an evolutionary step for innovation research. The best practices described in the field guide will enable the CE to select and deploy appropriate and efficient consumer understanding and research tools for their current situation and to guide the project team to successful innovation and new product introductions that have the opportunity of taking insight to income.

Innovation is the engine of business strategy and success. A successful innovation is a product or service that provides comprehensive solutions to consumers’ needs and connects to consumers emotionally, cognitively and economically. However, many companies struggle to deliver truly successful innovation consistently and sustainably in a tough economy. The key hurdle in product design and development is identifying “high potential” product opportunities while strengthening and maintaining core business needs. The key hurdle for a sustainable business is to balance the constant need to introduce game-changing innovations in the marketplace with the need to make a profit. Leaders in innovation do not just figure out “how to read the trend”; they also “implement processes” that allow them to morph opportunities into actual products with real, high-value potential. In short, these leaders connect the dots faster than their competitors (see Figure). The creation and implementation of efficient innovation processes take creativity and discipline. The field guide will provide guidance on how to accomplish this task.
Figure  From pixel to picture, winners are the ones who connect the dots faster.
Scoping the innovation landscape

Before embarking on any innovation journey, a general strategic direction must be understood and kept in mind. At the end of the day the team must “deliver” at least a product idea that supports the company’s strategic vision. Creating the team’s charter to innovate with consumers and business in mind is very important. Failure to incorporate these considerations early on derails the journey and results in delays and failure to deliver (but not failure to innovate). There are five general strategic types of innovation:

(1) Disruptive innovation (aka new market disruption)
(2) Low-end disruption and me-too innovation
(3) Sustainable innovation (aka incremental innovation)
(4) Product rationalization or productivity innovation
(5) Innovation for strategic purpose.

The first three types of innovation focus on “differences in different degrees” and the fourth is focused on “similarity”. The last deals with the strategic advantages of companies more than with introducing products to the marketplace. The research objective drives the methodology, so it is important for the CE to define the innovation objective up-front in order to design the learning agenda effectively and choose the most efficient research tools.
How this book is organized

The field guide is organized so the CE can choose tools from two stages of innovation: up-front innovation and NPD. The tools are lined up to enable the CE to zero in on and amplify the breakthrough insight faster. Up-front innovation covers the various ways to generate deep consumer insights that can be converted to product ideas. The product ideas then enter the NPD process, where they are refined, screened, optimized and validated.

Many companies adopt the Stage Gate® approach (Cooper, 2001) in their NPD process to improve the efficiency with which they commercialize and launch products (Moskowitz et al., 2006). Whether using Cooper’s approach or variations on this process, NPD classically is divided into stages. At the end of each stage is a gate that involves steps and checks from different business functions to assess the viability of the whole product proposition. The ideas must be proven to have potential before entering the stage gate process. It is at this early juncture where things usually go wrong. Many innovations can go through all the gates but still fail or not perform as expected in the marketplace. We will address the flaws in the traditional screening process (a few qualitative studies followed by a big quantitative study) and the biases in the criteria that managers use to qualify an innovation to enter the gate. We propose a new process called the “iterative qual-quant research” model (IQQR) that will enable companies to understand their product category comprehensively through knowledge mapping exercises, hypothesis testing, consumer deep dives, clear action standards, key performance criteria and action-oriented results.

With these processes in mind, the field guide is arranged into three parts designed to provide guidance for the different roles that CEs have to play: insight leader, knowledge expert and project manager.
Part I

This part contains Chapters 1, 2, 3, 4 and 5 and addresses basic principles and managerial topics. These chapters provide a big picture and guidance to anyone who is responsible for setting up and directing a sustainable consumer research program.
Chapter 1 Setting the direction: First, know where you are

It is important first to define the landscape that you will be operating in. Chapter 1 provides an honest discussion on how you can transform yourself into an insight leader through self-education, on-the-job training and “playing” in the company sand box. The authors share their experiences in delivering and communicating impactful research.
Chapter 2 The Consumer Explorer: The key to delivering the innovation strategy

The Consumer Explorer’s roles are three-fold:

● Steadily guide the team through the twists and turns that come with early project work or up-front innovation
● Design efficient testing strategies to refine and further validate product concepts and ideas
● Deliver project results on time to affect business decisions.

This chapter provides practical tips, checklists and best-practice guidelines for setting up a research project plan from start to finish. It covers the roles of the Consumer Explorer in setting overall objectives, defining roles and responsibilities of key team members, understanding key questions, projecting expected deliverables from each stage, and finally presenting and communicating key learnings and diagnostic reviews.
Chapter 3 Invention and innovation

Chapter 3 explains the relationships between inventors and innovators and how to turn an invention into an innovation. Through examples, this chapter highlights the different skill sets required of inventors and innovators, which are rarely found in the same person.
Chapter 4 Designing the research model

Chapter 4 further defines the two stages of consumer research: up-front innovation, to identify product opportunities by understanding consumer wants, needs and pain points; and NPD, to refine, screen and validate new product opportunities grounded in consumer insight. The consumer exploration tools vary depending on whether the Consumer Explorer is looking for consumer insights or validating a product opportunity grounded in consumer insights. This chapter cites the importance of leveraging continuous feedback between qualitative and quantitative consumer research and sets the stage for the different research tools for Consumer Explorers.
Chapter 5 What you must look for: Finding high potential insights

Everyone knows that the long-lasting success of an innovation in the market greatly depends on the insights on which the innovation is based. However, identifying high-value insights is not an easy task, and not knowing what and where to look for these insights contributes to the delay, or even the failure, to innovate. This chapter will outfit Consumer Explorers with the skills to spot high-potential consumer insights. It describes many characteristics of consumer behaviors and situations that allow discovery of high-value insights, with in-depth analyses of the product-related reasons for the success and longevity of these products in the marketplace.

Furthermore, this chapter will demystify the belief that high potential insights can only be found in something or from someone extraordinary. The reader will learn that nothing is “ordinary” about the consumer’s routine and habit; on the contrary, it is rather “irrational”. Providing innovation that changes this “ordinary” behavior will alter the landscape of competition. Successful innovations must connect to consumers at an emotional level. Emotional benefits are everywhere and quite obvious (when people are happy they smile; when they are frustrated, they make faces). These non-verbal cues are often missed in traditional survey research. At the end of this chapter, the audience will gain a new perspective that “a consumer is not one consumer”. This new perspective in redefining consumers allows brands to innovate more accurately and successfully.
Part II

This is the largest part of the field guide, and it is organized by distance from final launch: (1) up-front innovation and (2) new product development. These approaches have been proven by experts to provide high-quality insights. There are instructions on how to identify the right questions, targets and contexts, how to set up the fieldwork, and the common mistakes to avoid. After finishing each chapter, Consumer Explorers will be equipped with the knowledge and understanding that allow them to select the most appropriate and efficient methods and approaches for their projects and to make the action work for their situation. This organization will allow the audience to customize the consumer research tools to fit their own innovation engine.
Chapter 6 Tools for up-front research on consumer triggers and barriers

These research tools will clear the path to up-front innovation that identifies new opportunities stemming from unmet consumer needs and wants. At this stage in product innovation, the most important thing is to discover and capture as many high-value insights as possible, and quickly. Here, the authors introduce research approaches and methodologies that guarantee results when these recommendations are combined with the knowledge gained from the other chapters.

● Qualitative tools (Chapters 6.1, 6.2, 6.3 and 6.4): Qualitative approaches and methods are often used at the very front end to discover insights. However, many standard tools are vulnerable to many factors (moderator skills, composition of consumers, setting, agenda). These four chapters include time-tested contemporary methods (Chapter 6.1) and cutting-edge methods (Chapters 6.2, 6.3 and 6.4).
   ° Chapter 6.1 Understanding consumer languages
   ° Chapter 6.2 Insights through immersion
   ° Chapter 6.3 Qualitative multivariate analysis
   ° Chapter 6.4 The Gameboard “Model Building”
● Quantitative tools (Chapters 6.5 and 6.6): At this stage of innovation, quantitative studies aim for exploration and discovery of insights. Learnings from qualitative studies are used to guide the preparation of quantitative studies, prioritize the objectives and ensure that the important questions will be asked. However, many standard tools are vulnerable to many factors such as questionnaire design, composition of consumers, situations where products are used, and test setting. Chapters 6.5 and 6.6 highlight new methodologies that have been proven to provide new insights into consumers’ product and emotional experiences.
   ° Chapter 6.5 Quantitative anthropology
   ° Chapter 6.6 Emotion research as input for product design.
Chapter 7 Tools for up-front research on understanding consumer values

This chapter covers the “how to” of quantitative research tools for understanding the hierarchy of desired consumer benefits and values. These tools are used to validate the insights with consumers. The information can be used by the innovation team for concept development and business portfolio management.

● Chapter 7.1 Kano satisfaction model
● Chapter 7.2 Conjoint analysis plus (cross category, emotions, pricing and beyond)
● Chapter 7.3 Benefit hierarchy analysis.
Chapter 8 Tools to refine and screen product ideas in new product development

After identifying and validating the potential of the consumer insights found in up-front innovation, the ideas must be transformed into tangible products or concepts. Screening and refining the gathered insights are the hard parts of successful innovation. This chapter provides efficient approaches to refining and screening product ideas so that product developers can prioritize and classify insights in order to strategize their activities accordingly.

● Chapter 8.1 Contemporary product research tools
● Chapter 8.2 Insight teams: An arena for discovery
● Chapter 8.3 Consumer advisory boards: Incorporating consumers into your product development team
● Chapter 8.4 Defining the product space and rapid product navigation
● Chapter 8.5 Free-choice in context preference ranking: A new approach for portfolio assessment.
Chapter 9 Tools to validate new products for launch

Chapter 9 offers tools for product developers to validate new products developed from consumer insights. The tools allow product developers to demonstrate early on how consumers perceive the values/benefits. Perceivable (aka demonstrable) values/benefits help product developers get buy-in from different departments throughout NPD and, ultimately, guarantee the survival and success of the products in the marketplace.

● Chapter 9.1 Extended use product research for predicting market success
● Chapter 9.2 Product concept validation tests.
Part III

Parts I and II address topics to help CEs set direction, prepare and be ready for any product innovation project. Part III covers practical recommendations and steps to bring these learnings into practice.
Chapter 10 Putting it all together: Building and managing consumer-centric innovation

Chapter 10 provides guidance, drawn from practical experience, for building and managing a great innovation team and for putting a consumer-centric philosophy at the center of product innovation. This chapter proposes a teachable model that combines team creativity with personal leadership at the consumer researcher level, with the belief that innovation should be fun to the core.
Chapter 11 Words of the wise: The roles of experts, statisticians and strategic research partners

This chapter provides guidelines and practical tips for working with multifunctional teams and for leveraging external research agencies and technical experts.

● Chapter 11.1 Above averages: Use of statistics, design of experiment and product innovation applications
● Chapter 11.2 The role of in-house technical experts
● Chapter 11.3 How to leverage research partners (local and international testing)
● Chapter 11.4 Best practices in global testing and multi-cultural consumer research.
Chapter 12 Future trends and directions

The final chapter offers future directions for consumer research methodologies as traditional and innovative qualitative and quantitative tools combine and morph to meet the increasing demand to generate consumer and product feedback instantly and efficiently. It summarizes emerging research trends such as the blurring of the lines between qualitative and quantitative research tools, the increasing use of digital technology and the rise of hybrid data.

This book has been designed to be a “how to” for individuals who want or need to get behind some of the leading approaches in new methods of understanding the consumer in today’s marketplace. As a result it will evolve over time. The book has been designed to be a “cookbook”. It is up to you to follow the detailed steps or, better yet, customize and create your own imprint.
References

Cooper, R.G. (2001) Winning at New Products: Accelerating the Process from Idea to Launch (3rd edition). New York, NY: Perseus Publishing.
Moskowitz, H.R., Beckley, J.H. and Resurreccion, A.V.A. (2006) Sensory and Consumer Research in Food Product Design and Development. Ames, IA: Blackwell Publishing Professional.
Palmer, A. (2009) “Consumers Want More Innovative Packaged Goods”, Adweek. Prometheus Global Media LLC, 17 July 2009. Web. 25 July 2011 (http://www.adweek.com/news/advertising-branding/consumers-want-more-innovative-packaged-goods-106131).
Part I
Starting the Journey as a Consumer Explorer
[Part-opener roadmap: the chapters of the field guide]
Chapter 1: Setting the Direction: First, Know Where You Are
Chapter 2: The Consumer Explorer: The Key to Delivering the Innovation Strategy
Chapter 3: Invention and Innovation
Chapter 4: Designing the Research Model
Chapter 5: What You Must Look For: Finding High Potential Insights
Chapter 6: Tools for Up-Front Research on Consumer Triggers and Barriers
Chapter 7: Tools for Up-Front Research on Understanding Consumer Values
Chapter 8: Tools to Refine and Screen Product Ideas in New Product Development
Chapter 9: Tools to Validate New Products for Launch
Chapter 10: Putting It All Together: Building and Managing Consumer-Centric Innovation
Chapter 11: Words of the Wise: The Roles of Experts, Statisticians and Strategic Research Partners
Chapter 12: Future Trends and Directions
“We’re not lucky. We win because we work hard.”
Roger Penske, head of one of the most successful racing team groups for the last 30 years
To think outside the box, you must know where the box is and where you are relative to it. Chapter 1 honestly discusses how you can transform yourself into an “insight leader” through self-education, on-the-job training and “playing” in the company sand box. The authors share their experiences in delivering and communicating impactful research.
Chapter 1
Setting the Direction: First, Know Where You Are
Howard Moskowitz and Jacqueline Beckley
Key learnings
✓ Knowledge worker: What is your job? Does it matter what profession you affiliate with?
✓ Prepare yourself to be an insight leader
✓ Dealing with management and your clients
✓ Two types of heroes in a corporation
✓ Knowing your limits
✓ How to report your data
1.1 Roles in the corporation – the dance of the knowledge worker

The great management theorist and consultant, Peter Drucker (1999), coined a term for a class of professionals in industry whose job was to understand and manipulate symbols. He called them “knowledge workers”, to distinguish them from other workers who produced things. The knowledge worker’s job was to manipulate symbols and, through that manipulation, to create knowledge. In Drucker’s mind, the knowledge worker was the worker of the future, as machines augmented, and even replaced, people. Today’s world of computers, business intelligence, artificial intelligence, algorithms, and the utter connectivity between people and corporations makes the knowledge worker even more important than Drucker imagined.

In corporations a special group of knowledge workers practice the job of understanding the products a company makes and, at the same time, the mind of the customer who uses the product. There is not one group, but rather at least three groups, who share this grand responsibility of knowing the product, the user and the product–user interface.
These groups fall into at least three classes: market, product and sensory professionals. Of the three classes, two (market professionals and sensory professionals) have well-defined responsibilities. The third class, the “product professional”, has eluded a clear definition of who they are and what they do.

The market researcher studies the market, to understand market trends and to discover how the consumer fits into that market. When it comes to products, market researchers focus on how the product and consumer come together for market benefit.

The job of the sensory professional is to understand the product, and specifically the product as perceived by and used by the consumer, generally from an analytical perspective. The consumer is merely one of several instruments used to measure the properties of the product. The sensory field has been fairly clear in its definition of the “sensory” professional. It defines the job by certain analytical methods that many sensory professionals value: descriptive analysis, discrimination testing and product understanding through a vast array of survey research tools. All these tools tend to be heavily anchored in analytics. The consumer represents a tool, much like an instrument.

The evolving job of the product researcher (sometimes called a consumer technical leader) focuses on the product and the consumer’s interaction with the product. The product researcher has evolved out of the confluence of the sensory researcher and the market researcher. The product researcher studies the product, but within the context of the consumer’s world. For the product researcher, heavy reliance on analytics is not always useful nor always required, but in many cases a background in analytics helps one do the job.

It is important to understand the nature of these three groups in a company that sells products or services. We focus on these three professional groups in this chapter for one major reason: relevance. In years gone by, it was enough to label oneself a sensory professional and to focus on the evaluation of the product from the point of view of description, describing each of the facets of the product in agonizing detail. It was also enough for the market researcher to label himself or herself as just that, a market researcher. One didn’t need to be technical. There were consultants and research companies that did this job, as well as a plethora of computer services, so-called tab-houses, ready to run the data and provide pretty tables. One simply needed to evaluate products in the inactionable, evaluative language of the marketer (the product tastes sweet; the product is sophisticated, and so forth).

But what about this new profession, this product researcher? Who is this person? Is the product researcher a marketer or a scientist (sensory professional)? Neither or both? It’s all about what one knows, and the world in which one feels comfortable. It is the focus of specific knowledge for these professionals that ends up deciding what job the person does in the company. And it is the focus of knowledge, your knowledge, that ultimately decides what role you will play, who you really are, and what is expected of you.

Let’s look for a minute at the two foci of knowledge: that of the marketing researcher and that of the sensory professional. The marketing researcher traces the intellectual history back to sociology, with its focus on the behavior of people in groups. To the sociologist, or more properly to the market researcher, it’s not the product per se. Or perhaps only the product in passing. It’s rather the behavior of the person. The product is just one facet of behavior. Other behaviors need not involve the product at all, but might involve what media the individual “consumes”.

Now turn to the sensory professional, where the science of the product, not the sociology of the consumer, takes center stage. The sensory professional traces his or her history back to the scientist or product developer who created the product, to the person who knows the innards of the product and knows what makes the product “tick”. The company asks the sensory professional to link the presumed behavior of the consumer to the physical characteristics of the product. The research specialties include understanding how the person interacts with the product, and how the different characteristics of the product “drive” perceptions.

The focus of the product researcher is to blend the science of product design with the marketing science elements (emotional, kinesthetic, behavioral). The product researcher must allow the product to tell its story as a person uses it, and link that story with the marketer’s job to connect to the product. Of all companies that have all three professions, Procter & Gamble has had the longest-running products research group. How the model came to be is part of myth and folklore, but this early creation of disciplines foreshadowed what must evolve for most product/service companies.
1.2 Insights leader – learning on the job versus learning in school

In recent years there has been a curious shift in the role of many individuals in the worlds of both sensory analysis and market research. The new names have something to do with the word “insight”. Although one might cynically say this is just a name change of a department, the truth of the matter is that something else is going on. The focus is shifting away from what the professional does (studies the product, studies the market) and towards what the professional delivers. That deliverable, the product, is now named “insights”. It’s not completely clear yet whether these insights are simply another way of talking about the same thing (old wine in new bottles), or whether we are witnessing a change in these knowledge workers, from what one did to what one contributed. Interestingly, product research has not experienced or, more cynically, “endured” this change.

In the spirit of the new change, let’s investigate the new requirements of this job called insights manager or insights provider. We will call this person the “insight leader”, because the role goes beyond “managing” insights or “providing” insights to leading the quest to get insights and, more specifically, to define what constitutes an insight. In our new Internet-connected world, definitions of “insight” reflect many different approaches:
The Random House Dictionary (1971) defines “insight” as an instance of apprehending the true nature of a thing… penetrating mental vision or discernment of the underlying truth.

For business, Nigel Bradley (2007) points out a recent development in the world of business: the emergence of new departments in corporations which carry the word “insight” in their titles. We have customer insight departments, insight management units, consumer insight and so on. This extends to the job titles of executives working in those areas. One reason for this development was a realization that the emphasis on results from individual research projects needed to shift to a wider understanding of the dynamics operating in the full marketplace. Another reason was the impact of information technology. Progress in technology gave way to the availability of masses of information found in databases. The advantages of insight management are numerous. By making use of all existing information, there is less need to consult customers, thereby minimizing unnecessary contact and costs.

For marketing, Lee et al. (2009) point out that an insight is a statement based on a deep understanding of your target consumers’ attitudes and beliefs, which connects at an emotional level with your consumer, provoking a clear response (This brand understands me! That is exactly how I feel! – even if they’ve never thought about it quite like that) which, when leveraged, has the power to change consumer behavior. Insights must effect a change in consumer behavior that benefits your brand, leading to the achievement of the marketing objective. Insights can be based on real or perceived weakness to be exploited in competitive product performance or value:

(1) attitudinal or perceived barrier in the minds of consumers, regarding your brand
(2) untapped or compelling belief or practice
(3) insights are most effective when they:
    (a) are unexpected
    (b) create a disequilibrium
    (c) change momentum
    (d) are exploited via a benefit or point of difference that your brand can deliver.

It is, therefore, critical to think carefully about who you are and what path of professionalism you are trying to pursue. As you do, try to answer the question that we just posed: Can the job of insights leader be learned, or must it emerge from education?
1.3 Being the authentic you

We don’t believe we can teach you curiosity, or make you passionate about an idea, or skeptical about what you are taught. Some of these behaviors are mindsets that we believe you start with from birth. But if you aspire to be an insight leader, you must be on a path of continuous learning. Even if the organization you work in is not a learning organization, you have to get into the fray, into the center, where the “action is”. You cannot lead insights from the sidelines. To that end, you must … educate yourself. Now, just how do you go about educating yourself?

(1) Look around you. Look, observe and take note. What is happening, how is it happening, and can you make sense of the happenings?
(2) Be curious about the trappings of the “set”. For example, a good idea is to go into a person’s office, and simply sidle up to the bookcase. (In our new age of non-books, this does take a more clever approach! By book we do not mean only Google searches and Wikipedia references!) Is there a bookcase? What’s in the bookcase? How are the books arranged? Does it look like the books have been read, or are they just there for show? Do you get a sense of connection between the books you see on the shelf and the person to whom you speak in the office?
(3) Ask people what they are reading, right now. The reason for asking about one’s books (or reading) rather than one’s education is simple. It helps you understand what they are thinking about.
(4) Books are different from a résumé. Résumés give you a sense of a magnificent achievement of a life, perhaps a unique life. There may not be any single accomplishment, but the résumé itself is a work of magnificence. Every achievement is highlighted, written, described in glowing terms, perhaps an action that signifies the worthiness of the person. There are companies and services which, for the right price, will polish one’s résumé, making one’s achievements look far more impressive than they actually are. And, since résumés are the coin of the job hunter, the key to the new job, it’s important that the résumé looks, lives, breathes and speaks like the junior partner the résumé owner is destined to be. But those books and the bookshelf? Well, that’s an entirely different world. The books that a person buys are friendlier. There are no services, no organization, whose express desire it is to polish an executive’s bookshelf (in the past this was also an area of prestige, but in business today, not so much).
1.4
What should you read? (1) Reading is an investment in time. But it’s more than time. Reading, at least reading books, gives you a private space, a private conversation with the masters who wrote the books. When you read you get a sense of how people think. When you read good literature, good history, you may be sufficiently fortunate to come across someone who has a felicitous style, who writes well. Relish that writing. Stop for a moment, and look to see how the writer constructs the sentences. Verbalize to yourself what you see. It’s a worthwhile exercise. It helps make you a better thinker. We hope you are seeing how this process is creating person insight. (2) When reading it’s tempting to pick up novels and easy reads, stuff that passes the time, but doesn’t force you into an active contemplation. If it’s novels, then read classic ones, novels by the great authors, which probe topics. Or plays, or poetry.
(3) A more productive read, for insight training, is history, something with a structure, something that happened. Pick up a good history, something well written, such as Gibbon’s Decline and Fall of the Roman Empire. It’s long, yes, three well-written volumes. Look at the Modern Library Edition in three somewhat condensed volumes. Pick a section, and read it. Get a sense of how history unfolds. It’s worth the effort; you’ll end up fascinated, and your mind will be exercised.
1.5 What else do you need to do to prepare to be an insight leader?

(1) Know who you are and where you need to go to be you. Take every opportunity to use those personality tests and surveys (Myers-Briggs, DISC, Personalysis, etc.). Why? Your biases and your mindset shape how you see insights and make sense of them. The less time you spend “faking it”, the faster you get to the authentic person you are. And then you can really dive into insights.

(2) Look at who you are and be honest with yourself. When all is said and done, you are stuck with you. If there are issues you struggle with (you are dyslexic or you have attention deficit hyperactivity disorder, you suffered trauma as a child, you have an addiction to something), you need to own these factors and realize they will shape your view of people, places and things, aka insights. And the meaningful impact you can have as an insight leader will be shaped by you. If you want to be not just another sensory or market research professional, but a unique insight leader, grappling with your demons and angels will make you a strong leader who will find those defining insights we hope you will seek out.

(3) Heraclitus tells us: “Lovers of wisdom must open their minds to very many things. I searched into myself. Knowing many things doesn’t teach insight” (Von Oech, 2001).
1.6 Dealing with management and your clients

(1) As a knowledge professional you inevitably deal with clients. Clients are not the individuals to whom you report. Reporting in that fashion is a line in an organization chart. Everyone in a company has “clients” of one sort or another. Clients are the individuals to whom you owe something as a “professional”. To clients you owe the work efforts of your professionalism. You may owe clients your thoughts (also called inputs) on how to approach a problem, or the design of a study, its execution, analysis and recommendations.

(2) Often we believe that when we report results we have to present them in the same form that we might present results from a graduate or professional study. Read the scientific literature to get an idea of what this means. All too often, these papers in journals are difficult to read, filled with table after table of results, with significance tests, with the actual results reported in a convoluted way, defying everyone but graduate students and perhaps the reviewer. The journals are not particularly reader-friendly. They don’t need to be. They are archival, for the advancement of the profession. You, however, cannot afford that.
1.7 Guidelines to success

When dealing with your management and your clients, it’s important for you to follow a few guidelines. Follow them and the outcome will be more positive than could be imagined:

(1) Clarity: When first meeting to discuss the research issues, make every effort to simplify the discussion, to repeat the goals, and where possible to verbalize potential solutions. Those early times, the start of project discussions, are the best times to ensure clarity. It is inevitable that over time the issues will get muddier, less clear, as details begin to crowd in and practical considerations begin to merge with technical and research issues.

(2) Verbalization: Practical experience shows that, time after time, it helps to verbalize what is going on. Simple, direct sentences are the order of the day here. You might almost consider this to be a running abstract of the topic. You may be tempted to keep quiet, to let things evolve, to summarize at the end. The truth of the matter is that by restating the objectives and strategies in simple, declarative terms throughout the meeting, you make everyone’s job easier. You will be clarifying the goals again and again until the structure is crystal clear to everyone. There’s a far greater likelihood of success after that clarification than before. Everyone knows the ground rules.

(3) Feelings: We often begin work with a combination of insecurity and bravado. In our most private moments, especially as we start out, we realize we don’t profoundly understand what we are going to do. And, at the same time, we feel the focus of an external audience, not our parents, as we do our work. The result can lead to withdrawal and flight, or to bravado. Fleeing doesn’t do any good. We merely remove ourselves from the fray, either momentarily or, more likely, permanently. Bravado doesn’t do much good either. Professionals see through us. One can’t “fluff” one’s way through business issues. Our lack of knowledge catches up. The best thing to do here is simply to admit one is nervous, and move on. We’ll talk in the next section about what to do.

(4) Imagery: What’s the best way to capture the essence of the meetings, of what’s happening, of what you are learning and, of course, to share that essence with others? How can you take the verbalization and the emotion and capture the imagery? Is it a sketch? What about an image? Or maybe it is a creative chart. People like Edward Tufte (1997), Andrew Abela (2008), Tony Buzan (1996), David Byrne (2003) and David Hockney (2001) have all explored meaningful ways to provide a visualization of ideas. The best way to approach this is to expose yourself to a number of approaches and then find the styles that help you communicate more precisely.

(5) Wave 0: Every researcher knows that he or she is engaging in a dialogue with nature. And the smarter ones realize that it’s good to know what one is doing. The pioneering microbiologist Louis Pasteur put it best: “Chance favors the prepared mind”. But how does one prepare in one’s job? One is expected to know what one is doing. Such knowledge comes from years of experience. But what does one do when starting out? The answer to this is really rather simple. Run a small-scale experiment. And do so publicly, not privately. The experiment is not to “crib”, to “cram”, to learn one’s profession quickly. The small-scale
experiment (we call it wave 0) is a public test, to see what the data will bring, before anyone commits. It’s not a shameful thing to do. In fact, most clients are grateful. If they don’t cost a lot, if they are small scale, people welcome pretests and trials. Wave 0s give everyone a sense of relief that before any commitment, we’re going to try it out. The wave 0 is like a biopsy before a major operation: it maps out the territory and checks what’s going on. Wave 0s, pre-tests, pilots, whatever you call them, work. And they work easily and well.
1.8 Reporting results

It may seem a little too much to deal with the notion of reporting results. After all, most companies have either formal or informal guidelines about presenting results to management. For example, Procter & Gamble prescribes a specific form that research uses to present the results. Traditionally, the researchers were told to remain within the confines of the actual results, rather than to speculate. General Foods (now Kraft Foods), in turn, also had specific formats to use. And the list could go on.

The idea is not to create a new form; that would probably not be well appreciated in most companies. But the idea is to present results in the most cogent form. This form should combine tables, words and appropriate imagery. The tables should be simple to read. The language should not be a reiteration of all that the table shows. Rather, the form reporting the results should highlight the key findings, and the sound bites that the audience should take away. And the imagery should help clarify what is meant by the entire summary.

In the archival literature in most fields, there is a certain style. The style varies with the particular journal. For the most part, the journal style is formal, and written to give a sense of gravitas to the results. The truth is most journal articles will go unread. That’s not the case with your reports issued in the privacy of the company, dealing with the results of studies. The work that most readers of this book will do will be used by different groups in the corporation, ranging from bench chemists to vice presidents, and even higher. The work that you do is important. Unlike academia, there is precious little room in a corporation for “exploratory research” for the sake of one’s own interest. (That’s a problem, but it’s still reality.) One’s work typically revolves around answering questions so that others may read and use the results. It’s important to be crisp, clear, succinct and yet provide the necessary detail. As an insight leader, do have a point of view. Be intellectually honest, yet find a way to work within the company veil that most organizations have in place.
1.9 Do not “winstonize”

The term “winstonize” means taking a table of numbers and converting it to simplified text so that it is “boiled down” to its essence, presenting the meaningful story to the client (for marketing and non-math people).

Some years ago, in the early 1990s, one of the co-authors of this chapter (HRM) had the pleasure of working with a well-known food company on the development of a new dairy product. The project involved the use of rule
developing experimentation (RDE), a variation of conjoint analysis (a more detailed description of conjoint analysis can be found in Chapter 7.2). What’s important here is that the test stimuli comprised about 90 phrases, each dealing with an aspect of a dairy product. Respondents evaluated combinations of phrases in short sentences. At the end of the evaluation the computer program deconstructed the combinations to show how each element drove the response. Every one of the 90 phrases generated its own impact or utility value. (A short sketch at the end of this section illustrates how such element utilities can be estimated.) We were novices at the time. We had just expanded the use of IdeaMap.Net (RDE) to many elements. Of course the RDE study itself worked fine.

What was very interesting was the reaction of the client. In the client’s mind (he was a consumer researcher, not a marketer) the most appropriate way to present the results was to give the marketer the “big picture”. There was a sense that the marketers would not appreciate table after table of data. We were told that the client wanted the results pre-digested, with the implications strongly presented, along with clear next steps. It took us a week of hard work, but finally we boiled off all of the numbers, and had the best written prose that we could create to “tell the story”.

The presentation was something else altogether. Perhaps we were not particularly dynamic presenters. We kept noticing something which disturbed us. Our research client was attentive, but the marketers and the product developers seemed to wander. And so we proceeded with the presentation, droning on and on, with what we had first believed to be impeccable, powerful prose, but by now we were seeing it to be verbal drivel. We thought the results were there … but they weren’t. All of a sudden we hit upon a table of data, quite accidentally. We had failed to boil off all of the results. And so a table of data remained. It was as if we had injected our Lazarus audience, bringing them back from the dead. They focused on the data, became animated, discussed the results among themselves and at the end warmly thanked us for what turned out to be a wonderful presentation.

Here are key takeaways from our experience of “winstonizing” data:

(1) We wanted the “approval” of the client: As outside suppliers, that is, not part of the client’s corporate family, we felt that the client’s word was sacrosanct. We were wrong. We didn’t realize that the client was as ignorant as we were about the needs of his own clients. In our quest to get approval because of being outsiders, we sacrificed what really mattered in our role: clarity and direction.

(2) Clients are fallible: Clients are people. Clients are not omniscient. It is we, the outsiders, who give them the power. Clients are nervous (hey, they need their jobs, they want their jobs!). But, at the same time, they are misled. They believe what suppliers tell them. Perhaps not at first, but eventually they do. They are fed a diet of positives from the outside. And this diet makes them fallible.

(3) Avoid presentertainment: Yes, it’s nice to have pretty pictures or words, to avoid charts. It’s so very tempting to believe that the answer is in the presentation. We call it in our earlier works “presentertainment”. The term was coined by those audience members who sat through numerous PowerPoint presentations which employed an overwhelming number of images and flash with very minimal verbal content. The emphasis of those presenters seemed to be entertainment over information.
It’s not the message, but rather the elegance of the presentation that convinces. Or so
we would like to believe. When we did the “winstonizing” we were just at the beginning era of presentertainment. So tables gave way, mistakenly, to words, to prose, to description. Now that same presentertainment would jettison the tables in favor of pretty pictures, of simple visual entertainments. And the “winstonizing” would be complete; no text, no tables, just images which would convey (it is hoped) what we were trying to do. And we’d be dead wrong.

(4) Clients like data: We discovered from the presentation of the dairy product that clients did welcome real data. Pure and simple. But they wanted data in a form that they could use. They did not want polished data in a form that would disguise. They wanted simple tables.

(5) And most important, clients want simple, clear presentation: When it was time to end the presentation, we were to end it. No ifs, ands or buts. No inserted meaningless words such as “… it’s quite interesting that …” and the like. Rather, simply hard-hitting points. We noticed that the clients were responsible for making something happen. They just wanted the results. They did not want a presentation that went on forever, to justify the money that they spent. That was irrelevant. They were paying for answers to move the business forward, and that’s all.
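For readers who want to see the mechanics behind a statement like “every one of the 90 phrases generated its own impact or utility value”, here is a small illustrative sketch. It is not the IdeaMap.Net or RDE software itself; it is a minimal Python example, with made-up element names and simulated ratings, showing the general idea of deconstructing concept ratings into per-element utilities via ordinary least squares on element presence/absence:

# Illustrative sketch only (not the IdeaMap.Net/RDE implementation): recover an
# additive "utility" for each message element from ratings of test concepts.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical elements; real RDE studies may use dozens (e.g. ~90 phrases).
elements = ["creamy texture", "probiotic cultures", "only 90 calories", "real fruit pieces"]
true_impact = np.array([8.0, 3.0, -2.0, 5.0])  # assumed values, used only to simulate data
baseline = 40.0                                # assumed rating of a concept with no elements

# Each row marks which elements appear in one test concept (1 = present, 0 = absent).
n_concepts = 200
X = rng.integers(0, 2, size=(n_concepts, len(elements)))

# Simulated consumer ratings: baseline + impacts of the elements present + noise.
ratings = baseline + X @ true_impact + rng.normal(0.0, 5.0, n_concepts)

# "Deconstruct" the combinations: least squares gives one additive impact per
# element (the utilities) plus an intercept (the baseline).
design = np.column_stack([np.ones(n_concepts), X])
coefs, _, _, _ = np.linalg.lstsq(design, ratings, rcond=None)

print(f"estimated baseline: {coefs[0]:.1f}")
for name, utility in zip(elements, coefs[1:]):
    print(f"  {name:<20s} utility: {utility:+.1f}")

The table of per-element utilities that such a regression produces is exactly the kind of simple data table that, in the story above, brought the audience back to life.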
1.10 Making it public – helpful hints to grow from student to professional

It’s a good idea to understand the corporation in which you work, especially if you are involved in the evaluation of subjective aspects of products or services, whether this is through panel work with experts, community panels, or a range of one or many consumer evaluations.

As a student you may have done your own experiments, all the way from design to analysis, and to some extent reporting. You may have published your results, either yourself or with a professor. And of course, if you’ve been through the publishing world you know all about the back and forth of editorial reviews, the occasional nasty rejection, the seemingly infinite number of revisions and so forth. From these exercises you may have formed an idea of what constitutes a good report of results, and of course what constitutes a skimpy report.

As you begin to work in a corporation, be prepared to abandon your preconceptions, and face what could be called Realpolitik by some, or compromising principles by others. The truth of the matter is that most companies are not in the knowledge business. Companies do studies to find out key information that they want to know. Companies are not run by scientists, and if they happen to be (a rare coincidence) it’s not going to be the science of consumer research, descriptive panels or ethnographic discoveries, for example. But just what does that disturbing information mean, in real terms? Does it mean that no one in the corporation is a real scientist? The answer to that is probably “yes”: no one in the corporation is a scientist in the way that an academic is a scientist. Now that this ugly secret is out, that the company is not a university or an open society, what are you to do? Of course you don’t do slapdash research.
You don’t sacrifice quality. You may have to reduce the sample size. (Or in many cases you actually will end up working with far more people than you would have had you remained an academic. Academics are notoriously impoverished.) Your big differences will be in the way you report the results. We’ve already dealt with this above, at the start of this section, but it always helps to reiterate how to communicate. Like real estate, the answer in business is “communicate, communicate, communicate”.

(1) You’ll have to make your reports readable: No more hiding behind the academic jargon, the tightly written, almost unreadable and certainly mostly forgettable results. You can be sure that when you work on relevant topics, your reports will be read.

(2) You will have to avoid the flight to statistics: You can’t expect your client to wade through a mountain of statistical effects. The company needs to know what you found, and what you did not. Pure and simple. You have to write to be read, not to defend your insecurity about not being worthy of the job. Do the job; it’s yours.

(3) In many instances people are afraid to step up to the plate, to take a chance: This happens in all professional arenas. And unfortunately it happens far more frequently than you realize. For market researchers, the conservative fear manifests itself in a 300-page report, with unbearable numbers of tables. That report, the death of uncountable trees, belies the sheer inability to say what’s really going on, preventing in turn a junior brand manager from having that all-important “aha” experience. This is what could be called bird’s-eye-shot insights, or results by the pound. For a sensory analyst it can be reams of tests, of spider plots for descriptive analysis, cross-referenced to massive quantities of analytical data, along with maps in two- and three-dimensional space which look impressive but don’t say anything.

(4) Avoid number three above like the plague: It will turn you into a corporate drone, and perhaps eventually cost you your job when things get tough. It’s hard to believe that doing one’s job, dutifully reporting results in an objective way, could be counterproductive, but it is. It’s nice when you have a staff position and believe that you merely need to grind out the data in a facsimile of a professional. But you’re just fooling yourself. The real issue you will face is that the company did not hire you to run studies and report data. That’s part of your job, yes, but that’s not really the job. The job is to act as an intelligence source, using the data to identify what the company should do to solve a particular problem. In other words, to be the insight leader.
1.11 The two types of professionals in the world of evaluating products (and studying consumers)

In a recently published book (Moskowitz, 2010), one of the co-authors brought up the point that in the world of knowledge workers, there are at least two types. One type prides himself on being able to solve the problem. This is the hero, not in the denigrated sense, but in the sense of Joseph Campbell
(the esteemed mythologist). The hero moves out to the liminal space, and returns with something, returns transformed. The story is about the hero. It’s about what the hero does. All the focus is on changes in the hero, on the trials and tribulations. And in the end, the hero is the one who helps move society along.

The hero has a counterpart: the individual who works in the corporation, doing the job. The individual fits in. There is no need to prove oneself, at least at a professional level, because the job is not about fulfilling his scientific education. There is a need to do the job, and to establish himself as a professional worthy of advancement and reward, but we’re not seeing the hero’s journey here. We’re seeing the individual in a corporation, not the individual fulfilling their own destiny.

The first person, the individual who is in the “hero” role, may correspond to the professional who has been educated in the world of science. The role of this professional is to bring science to the corporation, to bring the benefits of the scientific education (or training) to the job. This is the typical role of the sensory professional and the career consumer researcher. Advancement in the corporation is a by-product of the job. The real nature of the job is to fulfill one’s educational purpose: to improve the understanding of the product (or the consumer) through science.

The second person, the individual who is doing the “job”, corresponds to many individuals who have the role in the corporation, but the particular job is one role of many. The person is in the role, but the person is not the role. This individual is that corporate employee who is rotated through different roles to give him or her both training and exposure. This second person is not so much doing this job for the sake of the profession as doing this job as part of corporate training. Whereas the first hero is anchored in the profession, this second hero is in trouble as an insight leader when he or she has failed to craft an authentic self prior to engaging in corporate rotation by training. Without that authentic grounding, this type of training can leave the second person an empty, rudderless shell of a person.
1.12 Knowing your limits and inviting others in

In every business of reasonable size it sooner or later becomes necessary to learn how to deal with others, not as adversaries but as collaborators. It’s one thing to run a small, one- or two-person consulting shop and do everything oneself. That’s a perfectly good way to spend one’s business career. There are many thousands of one-person shops, of virtuosi who can do all aspects of their business. They really don’t have a business, per se; they have a practice. There is nothing wrong with a practice, once people realize that’s what it is: an essentially solo operation. The client is buying the expertise of the individual. It’s the standard operating mode of individual experts outside the corporation.

The corporate world, larger or smaller, is a different “kettle of fish”. The very essence of a corporation is built on the division of labor, the assignment to individuals of jobs that they can do better than others. Division of labor moves beyond
the practice. It requires that the individual in the corporation be part of a smoothly running business. It’s no longer about the individual as the star, the cynosure, the lone wolf practitioner. It could be, of course, in one’s mind, but that’s not what makes the company really work. Rather, it is cooperation.

At first it all seems so simple. After all, who doesn’t want their business to succeed? And cooperation is such a wonderful word. There’s only one problem. When people cooperate, they have to share the power and the spotlight with others. It’s here that the problems arise. Let’s look at two types of people in business (to reiterate, in business, not in solo practice).

(1) Those who have come up through the scientific ranks and base their self-esteem on their own personal accomplishments have to share the spotlight of capabilities with others. These individuals who have come up through the ranks on the basis of their own merit soon find that they no longer shine as the younger superstars, the wunderkind, or even the competent. It’s a team effort. That’s hard to swallow sometimes, no matter how often they get bombarded with motivational messages such as “there’s no I in team”. It’s important here to recognize this necessary transition from being a young and rising professional to being a less, perhaps, recognized member of a successful team.

(2) Those who have come up through the ranks in business carry a different burden. They do not carry the burden of accomplishment through one’s own merits, in a world of scholars and professionals. Rather, they are accustomed to carrying out orders, being cogs in a wheel, doing the job within or under budget, and of course doing the job to which they were assigned. Going outside the job assignment is perceived by many to be risky. They are in some ways clerical-minded. Yet when they rise to a certain level they are expected to be independent, to think creatively, strategically and tactically. It’s not enough to bring in the project under budget and on time. It’s important to move the business ahead. Without working on the authentic self, these people can suffer, often silently, but deeply. They may get all the rewards due them, yet since they have not cultivated a point of view, they have nowhere “inside” to enjoy the fruits of this success.
1.13 The bottom line – what’s it all about?

You’re hired on to be a knowledge expert in your corporation. Hooray, you’ve made it. Yet you’re only at the beginning. A lot of your job will be to unlearn the wonderful habits of your undergraduate, graduate or university teaching days. Yes, it’s important to do things correctly, to be a scientist, to be faithful to the discipline from which you sprang. But there’s more:

(1) Your moral character as a professional: What is the journey on which the company has placed you? What is the path you need to travel?

(2) The ethics of the situation: How do you behave with others?

(3) Your work: Just exactly what are you duty-bound to return to your employer?
References

Abela, A. (2008) Advanced Presentations by Design. San Francisco, CA: Pfeiffer, a Wiley Imprint.
Bradley, N. (2007) Marketing Research: Tools and Techniques. Oxford: Oxford University Press.
Buzan, T. (1996) The Mind Map Book: How to Use Radiant Thinking to Maximize your Brain’s Untapped Potential. NYC, NY: Plume.
Byrne, D. (2003) Envisioning Emotional Epistemological Information. Göttingen, Germany: Steidl.
Drucker, P.F. (1999) Management Challenges for the 21st Century. New York, NY: Harper Business.
Freeman, C. (1996) The Book of Stock Car Wisdom. Walnut Grove, CA: Walnut Grove Press.
Hockney, D. (2001) Secret Knowledge. NYC, NY: Viking Studio.
Lee, M.S.W., Motion, J. and Conroy, D. (2009) “Anti-Consumption and Brand Avoidance”. Journal of Business Research, 62 (2), 169–180.
Moskowitz, H. (2010) YOU! What you MUST Know to Start your Career as a Professional. S. Charleston, SC: CreateSpace.
The Random House Dictionary of the English Language (1971) The Unabridged Edition. NYC, NY: Random House.
Tufte, E. (1997) Visual Explanations. Cheshire, CT: Graphics Press.
Von Oech, R. (2001) Expect the Unexpected (or You Won’t Find it). New York, NY: Free Press.
[Chapter 2 opener: roadmap figure listing the book’s chapters, 1–12]
“It’s great that you have all this knowledge, but how can you translate this knowledge into something that is absolutely going to make money for us?”
Kenneth Feld, Chairman and CEO of Feld Entertainment
Chapter 2 will help you to evolve from a consumer researcher into a Consumer Explorer: an insight leader and strategic innovator who brings the voice of the consumer to all stages of the innovation and research process. The authors provide practical tips, checklists and best practice guidelines for setting up a research project plan from start to finish. Finally, this chapter provides guidance on creating a “project dossier” and what should be included in it.
Chapter 2
The Consumer Explorer: The Key to Delivering the Innovation Strategy
Dulce Paredes and Kannapon Lopetcharat
Key learnings
✓ Building your brand as a Consumer Explorer
✓ Becoming a strategic innovator
✓ Having a seat at the innovation table

2.1 You as a brand

Whether you are a knowledge worker in product research, sensory science or marketing research, and/or managing a consumer research team, you are expected to become a valuable contributor to the company’s strategic vision and bottom line. The previous chapter talked about the importance of knowing the mind-set of your authentic self and the principles that guide you before you can contribute to the broader company strategy. In this mobile environment, it is important to “build your brand” (Smith, 2011) and not let the company define who you are. You can play multiple roles, such as strategist, innovator and tactician, and take those skills with you as you move through the organization or move from one company to the next. You usually get hired or promoted for what you know and what you can bring to the party. At the same time, there is a fine balance between pursuing your internal goals, for example your “creative and cognitive brand”, and matching the company’s fiscal objectives and long-term goals.

This chapter introduces the concept of the Consumer Explorer (CE) as key to delivering the innovation strategy. The Consumer Explorer is defined as the insight leader and strategic innovator who brings the consumer focus into the different goals and activities of the organization. The CE accomplishes multiple goals, for both himself or herself and the organization, and is mindful in doing so.
Before learning to lead others and accomplishing goals for the organization you work for, you, as a Consumer Explorer, must first establish a strategic goal for yourself, and then identify your core values, motivations and the purposes of “your brand”. In the process, it is important not to sacrifice one for the other.
2.2 The roles of the Consumer Explorer

The Consumer Explorer (CE) has two important roles in the company’s innovation journey: (1) setting the strategic research agenda and team dynamics; and (2) connecting the consumer learnings from one study to the next to facilitate building the business.

The CE’s most visible role is putting together the consumer and product research strategy from project initiation to completion, in partnership with various functions in the organization like R&D, marketing, marketing research and supply chain. The CE captures the essence or key learnings from the various qualitative and quantitative research studies to enable the research team to achieve faster and bigger wins and to learn from its failures early in development. To be effective, the CE should be a core member of the innovation team from the beginning and should not be brought in along the way for tactical reasons or when a consumer research test needs to get done. Howard Moskowitz (2005) referred to this scenario of becoming solely a testing center as “tests are us”, instead of creating a system of building knowledge for future growth.

To be a valuable member of the innovation team, the CE has to be recognized as the technical expert or knowledge worker who can design, execute and identify actionable consumer learnings that can help build the business. In addition, the CE has to have the passion and enthusiasm to guide the team to continuously infuse the “voice, eye and brain of the consumer” throughout the project cycle. Finally, an effective CE has to have a sense of humility, to admit what he/she doesn’t know or can’t recognize at different parts of the journey, and to be open to new approaches and ideas that originate from other internal and external sources.

As a research agenda strategist, the role of the CE is to design the consumer learning plan from up-front innovation to final-stage consumer and product research validation. This includes identifying the necessary skill sets to drive the research agenda and filling in the gaps through new hires and/or building research partnerships and collaborations with internal and external agencies to complement staff capabilities. The CE leads the back-and-forth discussion between the research team or client and those chartered with designing and executing the research studies. This ensures smooth translation of what is expected and what can be done, so there are no big gaps or unwanted surprises in the end. Although having a strong portfolio of research tools is important, the CE should not be defined solely by the research tools that he or she knows and practices, but by the skill of knowing which research tools are appropriate to use and by the principles of what the organization is trying to accomplish to
further build the business. How often have you seen new leaders push their “tool of choice” without regard to whether it is good for the organization long term?

The role of the consumer researcher has traditionally been that of the gatekeeper of product testing results during the new product development process (Moskowitz and Saguy, 2012). In many companies, the consumer research group also serves as the repository of product knowledge outside of formula development. We have renamed the consumer researcher for innovation the Consumer Explorer to highlight the leadership and strategic nature of the role. The role may not always reside in the consumer insights, product research and/or sensory group. Innovative companies have changed the consumer researcher role from gatekeeper of product testing results to that of a strategic partner in designing and leading the consumer and product research learning plan from up-front idea development to product feasibility and consumer validation. Key enablers for the strategist role are the following:

● Critical and creative thinking, from hypothesis testing to idea validation (Chapters 3, 4 and 5 identify tools and practices for turning inventions into innovations and recognizing insights)
● Technical skills to understand and digest data, balanced with the communication skills to present them clearly and simply (Chapter 1 discusses data presentation)
● Access to an extensive toolbox of research capabilities, and knowing when to use them appropriately (Chapters 6, 7, 8 and 9 highlight cutting-edge tools that have been proven to provide effective and efficient consumer-based decision making)
● Extensive reach of cross-functional internal and external research partners and experts who can be leveraged as needed (Chapter 11 features practical advice from internal and external strategic research partners)
As a connector, the CE needs to bridge and integrate learnings from one study to the next to enable the innovation team to build bigger opportunities from those learnings. Key enablers for the connector role are the CE’s ability to do the following:

● Incorporate internal knowledge mapping and trends analysis for hypothesis testing and model development
● Extract relevant information from syndicated and external data
● Generate relevant consumer understanding and responses to product and communication stimuli
● And most importantly, integrate the vast consumer and product information and a working understanding of the business model and financial inputs to provide actionable business results that could generate future revenues for the company.
The role of the CE is analogous to a ball juggler who is balancing multiple objectives or balls:

● Sustainable business strategy
● Healthy innovation pipeline
● Process that works
● Voice of the consumer at all touch-points.

Figure 2.1 The Consumer Explorer, just like the experienced juggler, knows where all the balls are going and ensures the organization keeps all of them in continuous motion.
Many companies talk about innovation but are grounded on controlled processes that could hinder change and adaptation. Business strategies can be articulated through clear objectives and action standards. The consumer voice can be sensed with qualitative and quantitative research that requires connection from one study to the next to frame the real story. But true innovation can only be experienced.
The definition of a successful innovation can range from success in the marketplace and an increase in company revenue, to revolutionizing the market and opening new competitive frameworks. High-functioning and innovative companies have learned how to keep all the balls in motion without dropping one. A successful CE has two choices for keeping all the balls moving: (1) Do I become the lead and lone juggler for all the balls and hire more people to help keep all of them in motion? (2) Do I get external help so I can concentrate on one or two balls and let others do the rest? The first choice requires company investment in resources and people. The second choice relies on open innovation and shared responsibilities with strategic research partners. Chapter 11 shows how you can leverage internal and external experts to expedite the research process.

Despite well-meaning efforts, the Consumer Explorer can face additional challenges where he or she feels that it is an uphill battle to implement change. Large corporations sometimes outsource large portions of market and consumer research to core suppliers in attempts to standardize processes and/or implement cost savings. Corporate and global testing mandates and protocols are pushed down to local and/or regional operations and can hinder customization and reaction times to new approaches, emerging trends and a changing marketplace.

So what does the Consumer Explorer have to do? The authors in the previous chapter highlighted the need to understand the political landscape in the company and the expectations for project deliverables. As a result, it may take time and perseverance to change established protocols. So the CE will need to know what he or she is working toward and what he or she is not. Another valuable tip is to experiment in small-scale “proof of principle” or “wave 0” studies, which can highlight potential with minimal resources yet provide the needed study to begin to onboard new or different thinking. It is also easier to implement a process change in a smaller category that is seen as less risky. Sustainable successes in a series of smaller categories could lead to universal adoption if the CE is viewed as a credible source of knowledge.

Having others within the organization who support the new approach is necessary if the change is to happen at all. It is important for you to find allies who will champion your ideas and allow you to “pilot” new approaches and methods and learn from failures. Paul Allen (2011), co-founder of Microsoft, recently recalled how his first joint venture with Bill Gates, called Traf-O-Data, was a bust. It remains his favorite mistake because it confirmed to him that “every failure contains the seeds of your next success”.

All this said, in large global organizations it might take colossal losses to a competitor who has a better approach, or the proper timing of an organizational change, or a change in level or company, for a CE to achieve some of the approaches we suggest in this field guide. Time, patience and commitment might be called for. So some CEs may not want to join the change.
2.3 Taking the lead

So how does the Consumer Explorer gain a prominent seat at the innovation table?
(1) Understand the problem: The objective should always drive the research methodology. As C.K. Prahalad, a foremost business strategist, said in one of his last interviews before his death in 2010: “In developing all of these ideas, I learned not to start with the methodology, but with the problem. A lot of times, research tends to start with the methodology. I prefer to start with a problem that’s of interest and apply whatever methodology is appropriate” (Kleiner, 2010). Understanding the overall project objective and what it means for the business is key before embarking on a research journey. The CE could have a big role in meeting the overall project objective, or it could be one of shared responsibilities with other functions in the innovation team. Although the objectives could change as the project progresses, it is important to understand the overall project objective to set you on firm ground. You may need to adjust as information and decisions change, but it is critical to know the overall project objective.

(2) Get the questions right: The CE should be grounded on asking first-order questions, or the questions that really matter (Moskowitz, 2004). The questions could be either strategic or tactical in nature. Strategic questions are those that the business wants answers to in order to keep the project moving, and would be used for key business decisions like investing in additional resources, creating a new line or cutting short an investment. Tactical questions are related to the information needed to generate robust learnings, like identifying core consumer targets, global or regional reach, or key metrics/key product indicators (KPIs).

As researchers, we are grounded on type 1 and type 2 errors to define the research boundaries and manage the risks of generating erroneous conclusions. Technically, a type 1 error is the mistake made when a researcher rejects the null hypothesis and accepts the alternative hypothesis when the null hypothesis is correct. A type 2 error is the mistake made when a researcher accepts the null hypothesis and rejects the alternative hypothesis when the null hypothesis is incorrect.

From the business side, type 1 error is sometimes referred to as “producer risk” or “manufacturer risk”. From the point of view of companies or manufacturers, committing a type 1 error will result in losing market share or consumer confidence. New product launches, line extensions and claim substantiation (especially superiority and differentiating claims) for existing brands are activities that need attention to avoid type 1 error. Launching new products (a new brand, a new product line for an existing brand or an upgrade of an existing product) requires companies to claim what makes their product better than those currently in the market; therefore, superiority or differentiating claims must be made by the companies. If consumers do not agree with those new or differentiating claims, they will lose their interest in
the brand and ultimately choose other brands. This leads to losing market share due to the uncompetitive nature of products and claims.

Type 2 error is often called “consumer risk”. Companies often commit this mistake when they: (1) conduct cost reduction activities (replace the current formulation with a cheaper one), (2) change suppliers (to reduce the cost of raw materials, or because the old supplier is no longer available) and (3) change processes (upgrade a machine in the plant or change from an old process to a more efficient one). All these activities result in “a new formula” where companies do not want core consumers to notice the change. Therefore, the risk of wasting money is on the consumer side. Companies must conduct at least one test to ensure that the changes they made are “unnoticeable” by consumers. If companies fail to do so, consumers could lose their trust in the brand and ultimately switch to other products. (A short simulation sketch at the end of this section illustrates producer risk and consumer risk with concrete numbers.)

To commit either a type 1 or a type 2 error, one must assume that a perfect study was conducted to begin with by: (1) using the right tools, (2) asking the right questions and (3) measuring with the right metrics. However, this is not usually the case. At the 8th Pangborn Sensory Science Symposium in 2009, Cornell University professor Harry Lawless talked about the notion of a type 0 error, or asking the wrong question to begin with (Lawless, 2009). In essence, a type 0 error is the most dangerous mistake that the CE can commit. This means:

(a) Running a study without knowing the real objective(s), or pretty much running blind (not knowing the right questions): The CE may have the most precise measurement or method but does not know the real target.

(b) Applying inappropriate methods or protocols to the wrong situations or questions: This means knowing the right questions, but the CE keeps using the wrong tool versus leveraging an approach that is most suitable for the situation. This could also include using a metric that has historical significance to the company but has outlived its usefulness.

(3) Designing the strategy (learning modules) to answer the questions: Once the CE is grounded on the key questions that need to be answered, he or she is chartered to design a series of learning modules to guide the business and research teams, providing actionable results from project initiation to completion and post-launch review. The CE should be prepared to design efficient testing that can weed out failures early and will enable the team to react quickly and leverage successive consumer and product learnings to build and validate bigger ideas. Chapter 1 discussed the benefits of running “wave 0” or small-scale pilot studies to test hypotheses, fine-tune protocols or serve as a dress rehearsal for larger studies. The research strategy should cover the work needed to identify consumer insights and feedback from up-front idea generation to identifying viable product ideas for new product development (NPD) and final consumer validation for market products. Two important aspects that the CE has to consider in this planning stage are timeline and available resources (i.e. people and testing dollars).

(4) Assessing what you know and what you don’t know: The first step in the learning module is for the team to assess the current situation and the information available to the team from both internal and external resources.
This step has been referred to by various contributors in this book as “knowledge mapping”. Moskowitz et al. (2006) defined knowledge mapping as converting company-owned data and existing information into knowledge that can lead to actionable results. The CE has to guide the team in generating the information from both reported data and the information that resides in the research team’s mind-sets. At this step, the CE has to think about what he/she needs to learn and know. This could include identifying and challenging the “sacred cows”, or pillars of information that are valuable to the company and are held in high esteem and/or treated as “untouchable”. Key questions to ask are: (a) Is there data that exists? (b) What hypotheses have been known and proven? (c) Is it belief or folklore, that is, do so many people believe the information that it has become the norm? After grounding the team in the current collective knowledge and existing data, the CE has to capture the essence of the available information as baseline or background to form the hypothesis for each of the subsequent learning modules.

Critical to the project’s success, and to the CE’s ability to lead the team in identifying key consumer and product insights and their actionability, is having an engaged team up-front. The CE has to require that the team members “have their skin in the game”. This means requiring the team to be highly involved at all stages, starting with delivering project inputs (e.g. knowledge mapping, technology and prototype development) and digesting key learnings throughout the research cycle. Each team member should have key roles and responsibilities identified up-front. The CE has to set ground rules on how they should participate in consumer learning sessions, for example no emails and web surfing during focus group sessions and project debrief sessions.

(5) Establishing the hypothesis: Creating a hypothesis, or the information that needs to be answered or validated, will enable the CE to design a focused and purposeful research plan and to avoid a “fishing expedition” that could lead to conflicting results and/or generate loops of research studies.

(6) Creating and executing the learning plan: After the CE has designed the learning modules or stages of research and has identified the key questions and corresponding hypotheses that need to be validated at each stage, the focus shifts to the tactical aspects of implementing the research. The following information is needed for successful implementation of each learning module and provides the framework for the project dossier:

(a) Overall project objective: the key question that needs to be answered
(b) Research objective: specific to the learning module
(c) Introduction/background: context of the project
(d) Action standard: how results will be used; key product indicators (KPIs) by which research findings will be judged
(e) Methodology: qualitative and quantitative tools, analytics (see Chapters 6, 7, 8, 9, 10 and 11)
(f) Design: tactical information for the research plan, for example consumer screening criteria, sampling scheme, questionnaire flow
(g) Results/key deliverables: how the results will be used by the business
(h) Applications: implications for further research, and research nuggets for building category learning and a deeper understanding of consumers.

During the results analyses, it is important for the CE to understand the reasons behind the numbers so he/she can confidently explain the results and connect the information to the desired outcome. When presenting results in verbal or written form, the onus is on the CE as an insight leader to “connect the key learnings” digested from the data to make them understandable and actionable to the clients or recipients of the information. The ultimate challenge in sharing results is when the audience lacks the ability to understand the information or the story stemming from the data. Another scenario is when the audience is not ready to accept the results. Both could pose a big problem. In many cases, the same information requires several versions depending on the audience. Do not hesitate to tweak and edit as you present the same information from one group of recipients to the next. But as we learned in Chapter 1, it is important to keep the relevant information that makes the story come alive and not just “simplify” the story.

The CE has to constantly adapt the key learnings on consumer insights and product feedback from each learning module to the next. At this stage the CE has to skillfully balance the desired deliverables for each learning module: quality, time and cost. Quality of the research refers to the robustness of the information that will be used to provide consumer-based business decision making. It does not always mean that studies with large sample sizes provide robust data. Quality can also be defined as the strength of the insight and its projectable impact on larger groups of consumers. It is useful to quote two visionaries who define the research boundaries that CEs operate in to deliver quality research on point and on time. Warren Buffett, one of the most successful investors in the world, has said, “I would rather be approximately right than precisely wrong” (Rose, 2008). Jeff Ewald, a leading market research guru, said, “I’d rather have some information that boosts the decision-making process to 80+% but have it FAST, rather than getting it 95% right but too late to be useful” (personal communication with the author (DP), October 2008).

(7) Documentation and project dossier: The work is never done until the paperwork is finished. Finally, the effective CE has to write two reports: a topline executive summary report and a full technical research report. The topline executive summary has to be written simply and concisely, with the highest-level executive as the target audience. It has to be delivered on time to impact a business decision and/or before additional research investments need to be made. It could be as short as a one-pager, or a five-pager that captures the key learnings, relevant data, conclusions, implications and recommendations from the collective research. The full technical report should include the detailed research plans, results and recommendations for each learning module. It should include the protocols and questionnaires, which could by themselves be used in generating best practices for future research. The final analytics should include deep-dive analysis and cross tabulations to support the conclusions and recommendations.
The project dossier is the final compilation of the research. It should include the two versions of the report, the topline/executive summary and the full technical report, along with the outcome of the intended research. The outcome should spell out whether the research objectives were met and how the results were used by the company. The outcome should highlight the action and next steps resulting from the learning plan, and the actual name of the new product launched versus the project code name (this would help with future knowledge mapping). The project dossier should be stored in a secure and confidential company site that has appropriate controls and searchable functions. The project dossier is the intellectual property of the company and should remain in its possession for subsequent researchers. Finally, the CE has to be the “strong closer” and resist the temptation of the “tired” researcher, after a long and tumultuous journey, to just wrap it up and file the information so he/she can move on to the next big project.

A carefully documented project dossier serves three main purposes: (1) to capture the knowledge and learnings from the company research investment; (2) to build the category knowledge and best practices for future research; and (3) to provide a solid document framework in the event of a legal challenge for claims and patent infringement cases. Future researchers and the company will thank you for delivering a solid project dossier.

Many books have been written about innovation and management models, but this book centers the discussion on the Consumer Explorer, the knowledge worker and insight leader, and his/her role in ensuring the consumer is at the center of the innovation process. The editors have invited seasoned Consumer Explorers from different research backgrounds to share their knowledge and research tools.
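As promised under “Get the questions right”, here is a small, purely illustrative sketch of producer risk (type 1) and consumer risk (type 2). It is not drawn from any cited study; the panel size, significance level and the assumed 65% of consumers who truly notice the change are all invented for illustration. The scenario is a stylized difference check on a reformulated product, where a consumer who cannot tell the products apart says “different” half the time:

# Illustrative sketch (assumed numbers only): producer risk vs consumer risk
# in a stylized difference test on a reformulated product.
from math import comb

def upper_tail(n, p, c):
    # P(X >= c) for X ~ Binomial(n, p)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c, n + 1))

n_consumers = 60   # assumed panel size
alpha = 0.05       # assumed significance level
p_guess = 0.5      # a consumer who cannot tell the products apart says "different" half the time
p_notice = 0.65    # assumed share of consumers who truly notice the reformulation

# Smallest count of "different" judgments that lets us reject the null hypothesis
# "the change is unnoticeable" at the chosen alpha.
critical = next(c for c in range(n_consumers + 1)
                if upper_tail(n_consumers, p_guess, c) <= alpha)

producer_risk = upper_tail(n_consumers, p_guess, critical)       # type 1: flagging a change nobody notices
consumer_risk = 1 - upper_tail(n_consumers, p_notice, critical)  # type 2: missing a change consumers notice

print(f"reject the null if {critical} or more of {n_consumers} consumers say 'different'")
print(f"producer risk (type 1 error rate): {producer_risk:.3f}")
print(f"consumer risk (type 2 error rate): {consumer_risk:.3f}")

Increasing the panel size reduces the consumer risk at the same alpha, which is one practical reason sample size and action standards belong in the learning-module design from the start.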
2.4
2
Practical advice from seasoned Consumer Explorers* ●
●
●
●
Be there when it starts All projects have a genesis. New product ideas germinate from innovation teams or core product development teams. Downstream product teams have more defined members. Early innovation teams may not always have a consumer researcher in their midst. Get the team engaged up-front It is critical that you get a multi-functional team with their “skin on the game” that respect each other’s point of view, can make decisions during work sessions and champion the collective outcomes with senior sponsors. Don’t hesitate to take the lead Taking the lead in designing a product testing strategy means being proactive in providing relevant consumer and product information at key decision points instead of being reactive to a time line or to a specific task objective. More is not always better … edit, edit, edit What you take out is as important as what you put in the test design and questionnaire and data report.
● Don't get lost in the data. Stick to your objective and what you want to find out. Identify the point of the study instead of looking at all the different data relationships and looking at the data in many ways.
● Stand your ground. Know your data and be passionate about it. You need to have the conviction to defend your statement and the data and understanding to back it up. You need to balance the rational and explainable parts of the research with the fear of divulging inadequate or misleading information.
● You are a piece of the puzzle. You provide the voice, eyes and brain of the consumer in the research team. Champion the insight but let product development deliver the goods. Let the business solve the problem when consumers do not like the original idea but there is an opportunity to create the demand.

* Personal communications with Jennifer Hanson, Jackie Beckley and Howard Moskowitz (3 July 2011).
References
Allen, P. (2011) "My Favorite Mistake, Paul Allen on How He and Bill Gates Went Bust Before They Went Big". Newsweek (page 56), 2 May 2011.
Feld, K. quoted in Bryant, A. (2010) "Yes, You're Smart But Can You Make Money?" New York Times, Business section (page 2), 24 October 2010.
Kleiner, A. (2010) "The Life's Work of a Thought Leader". Strategy + Business, 9 August 2010. (http://www.strategy-business.com/article/00043).
Lawless, H. (2009) Keynote Speech, Issues in Sensory Science. Presented at 8th Pangborn Sensory Science Symposium, 30 July 2009, Florence, Italy.
Moskowitz, H. (2004) "From Psychophysics The World … Data Acquired, Lessons Learned". Food Quality and Preference, 15 (7–8), 633–644.
Moskowitz, H. (2005) "Commentary: Whither Now the Grand Sensory 'Project' in an Age of Improving Methodology". Journal of Sensory Studies, 20 (1), 93–95.
Moskowitz, H.R. and Saguy, I.S. (2012) Reinventing the Role of Consumer Research in Today's Open Innovation Ecosystem. Critical Reviews in Food Science and Nutrition (in press).
Moskowitz, H.R., Beckley, J.H. and Resurreccion, A.V.A. (2006) Sensory and Consumer Research in Food Product Design and Development. Ames, IA: Blackwell Publishing Professional.
Rose, C. (2008) "An Exclusive Conversation with Warren Buffett". 1 October 2008. (http://www.charlierose.com/view/interview/9284).
Smith, E. (2011) Journalism Convocation Speech at Medill, Northwestern University, 18 June 2011, Evanston, IL.
"Entrepreneurs of course may be inventors just as they may be capitalists, they are inventors not by nature of their function but by coincidence and vice versa."
The economist Schumpeter

This chapter explains the relationships between inventors and innovators, how to turn an invention into an innovation, and the different skill sets required of inventors and innovators. Through its examples, this chapter shows how inventions that had humble beginnings and appeared to be "trivial" advances were transformed in the hands of innovators who applied them to other fields and major innovations. In addition, this chapter describes innovation in consumer research in measuring consumers' responses, especially their hedonic responses.
Chapter 3
Invention and Innovation
Daniel Ennis

Key learnings
✓ Link and foster relationships between inventors and innovators. Invention and innovation require different skills rarely found in the same person
✓ Keep an eye out for "trivial" advances. Major innovations often have small, but profitable, beginnings and then are extended to other fields
✓ Be aware of movements that drive consumer interest due to social change, demographics and contagion
✓ Know every benefit that consumers derive from your products or services, even small, seemingly insignificant ones, and extend these benefits to new advances

3.1 Invention and innovation
The first person to make a novel and prospectively useful product or process is an inventor and the first person or enterprise to exploit that invention in a commercially viable product or service is an innovator. Anyone who has ever conceived a new product or service and then convinced others to purchase or adopt it will recognize the very different skills required for these two phases of a successful introduction. It is quite rare to find the aptitudes required to invent something and to commercialize it in the same person. In order to explore the separate roles that distinguish invention from innovation, it is worth reviewing some significant historical cases.
3.2 The steam engine: Watt and Boulton
According to his own account, in an afternoon in the winter of 1764–1765, James Watt, the mathematical instrument maker to Glasgow University, strolled around the grounds to work out a problem (Scherer, 1984). Earlier he
had been given a Newcomen engine to repair, a type of engine invented in 1712. During his walk he became conscious of a change that would significantly improve the efficiency of the machine by using a separate condensing vessel. Apparently, the time it took for him to come up with this concept was very short – a matter of hours – and it was dwarfed by the time it took before his insight led to a workable machine in 1780. He may not, in that afternoon, have appreciated the future contributions of John Roebuck, who went bankrupt, and then Matthew Boulton, who played the roles of unsuccessful and successful innovators, respectively. In the development of the Watt-Boulton steam engine neither of them could have seen that their machine would be the overture to the Industrial Revolution when large industrial cities were built far from rivers, reducing reliance on hydropower. By way of contrast to the real potential of their technology, Watt and particularly Boulton were focused on solving a contemporary problem with business potential – getting water out of flooded copper mines. Two lessons derive from the steam engine story. The first is that the skills of both participants were required. James Watt provided the technical knowledge and the motivation to create a technical improvement and Matthew Boulton encouraged and funded Watt through years of development and resolved a serious patent roadblock. By 1780 they had a commercial product. The second lesson is that many successful products or services extend far beyond the vision of their creators but they must all have at least one sustainable application to get them started, such as extending the life of flooded mines.
3.3 Nike: Bowerman and Knight
Bill Bowerman, coach of the University of Oregon track team, is known for producing numerous Olympic champions and world record holders, as well as co-founding Nike (Moore, 2006). Bowerman liked to make running shoes for his athletes as he was dissatisfied with shoe design in the 1960s and early 1970s. He founded Blue Ribbon Sports (BRS) with Phil Knight and they had Bowerman's designs for running shoes manufactured by Onitsuka in Japan. Bowerman, in search of a light shoe with traction for his track athletes, invented a "waffle" sole using a waffle iron in his home. This invention became the basis for the waffle trainer, the first really successful shoe sold by BRS before the company became Nike.

When Frank Shorter won the Olympic gold medal for the marathon in Munich in 1972, a running boom was launched in the US. With the running boom came a huge demand from the masses for comfortable shoes suitable for road running. Bowerman's designs were well suited to exploit that demand. In this invention–innovation scenario, Phil Knight played the role of Matthew Boulton from the previous case and managed to successfully steer BRS through a contract dispute involving distribution and trademark issues with Onitsuka that could have destroyed the fledgling company. It is interesting to see once again the expression of the dual aptitudes required in a successful venture. The pattern of initial limited implementation followed by extensibility is seen again as Bowerman's interests were initially focused on the
needs of high-performance athletes. The opportunity for mass marketing and expansion into adjacent businesses was successfully exploited by Nike.
3.4 The US Navy: Scott and Sims
Prior to 1898, gunnery accuracy at sea was dismal. In the space of six years, accuracy was increased by 3000% based on the ingenuity and doggedness of two men, Sir Percy Scott of the British Navy and William Sims, an American naval officer (Tushman and Moore, 1982). Scott provided the basis for a process of continuous aim firing by adjusting guns on ships so that gunners could rapidly alter the positioning of guns to compensate for the roll of the ship. He also made changes to the telescopic sight so that gunners could continually sight their targets. Scott made these improvements in 1898 and began recording remarkable gunnery records.

In 1900, Scott met Sims and showed him his new technique. Before long, Sims began to demonstrate improvements in gunnery accuracy similar to Scott's. Then he set out to educate the US Navy, which would surely welcome this new advance with open arms. On the contrary, the Navy set out to prove that it was physically impossible to produce the results that Sims produced. He was dismissed and regarded as a falsifier of evidence. In a highly unusual move for a naval officer, Sims wrote to President Roosevelt to express his conviction about the value of continuous aim firing and in 1902 he was made Inspector of Target Practice. Scott's method was finally adopted by the US Navy over a period of about six years.

Continuous aim firing was a process made up of components brought together by Percy Scott, none of which he invented individually – guns, gears, telescopic sights – but he put them together in a highly successful way. William Sims, possessed of a desire to revolt against the rigidity of the status quo, provided the commitment and passion to bring Scott's process into use. This case illustrates again the dual aptitudes mentioned already but also demonstrates the role of chance in bringing innovative components together. We also see the resistance to change in any society where the people in it have limited identifications. In the Navy at that time, gunners were not influential due to the ineffectiveness of their craft, and others, such as the naval officers who were responsible for the strategic location of ships in battle, were not ready to step forward and relinquish their power when gunners started to actually hit their targets. In a hopeful attempt to address the issue of limited identifications, Morison suggested: "Any group might begin by defining for itself its grand object and see to it that everyone understands what it is" (as cited in Tushman and Moore, 1982).

If one wants to create an innovative organization of a few people or of thousands, it is worthwhile to consider the aptitudes in staffing that would be required. Innovation is a messy, disruptive business often accompanied by personalities to match these qualities. In many companies, great ideas and concepts may be languishing for the attention of a Boulton, Knight or Sims. Some inventions may not be seen as grand enough to warrant interest. This attitude misses the point that small-scale but profitable implementation may be all that is needed at first before the landslide of another industrial revolution, a worldwide fitness boom, or an upheaval in a structured society, such as the navy.
3.5 Consumer-perceived benefits: Coffee, beer and cigarettes
Inherent in the earlier definitions of invention and innovation is that an innovation provides a benefit to its user, one not obvious in current practice. In the steam engine case, the benefit was extended mining; in the Nike case, the benefit was improved athletic performance and consumer-perceived injury protection; in the continuous aim firing case, the benefit was hitting targets more accurately. In the case of consumer products, some of which have deleterious health effects, an important consideration is that these benefits are consumer-perceived.

By the early 1970s Philip Morris had acquired the Miller Brewing Company. By the middle of that decade two products were introduced that had a major impact on their industries. One was Miller Lite and the other was a cigarette called Merit. Both of these brands were based on remarkably similar consumer-perceived benefits. In the case of Miller Lite, a technical advance in brewing technology allowed the introduction of a product with extremely low carbohydrate content without sacrificing taste. Merit advertising promoted cigarette flavor equal to full flavor rivals at half the tar, made possible through the use of a novel advance in tobacco flavor technology. Tar reduction may imply a health benefit from the consumer's perspective. These two products contributed to new categories that became as important to their companies' revenues as the original categories.

The introduction of the Merit cigarette brand was not the first time that perceived health benefits drove fortunes in the tobacco industry. After Louis Pasteur connected disease to microbes in the 1860s, there was a lag until the general public became aware of the germ theory of disease in the 1880s. The spread of tuberculosis from sputum became a common concern and with it the health implications of chewing tobacco, the dominant form of tobacco use in the US in the 19th century. Smoking forms, such as pipes and cigars, began to increase as chewing tobacco declined, and in 1910, the future of the tobacco industry appeared to be firmly hooked on smoking (or pipe) tobacco. Then in 1913 the whole industry abruptly changed when R.J. Reynolds blended Bright and Burley tobacco to make a suitable inhalation form to create modern cigarettes. Lung absorption of nicotine and delivery to the bloodstream is far more efficient than buccal absorption as occurs with chewing tobacco. Ironically, a consumer health issue ignited consumers to turn to cigarettes and away from chewing tobacco, which they perceived to be an unhealthy alternative. This is the "benefit" that resonated with consumers and led to the creation of a multibillion dollar industry.

The Merit/Miller Lite scenario seemed ripe for a repeat after Philip Morris acquired General Foods in the 1980s. Technology for removing caffeine from coffee using a CO2 extraction process seemed appropriate to take nicotine out of cigarettes to simulate what had been done with decaffeinated coffee. A product with little or no nicotine was test marketed in the late 80s under the brand name "Next". This product was a failure and a valuable lesson for those who supported it in the company because it underscored the importance of
understanding what a company’s products provide to consumers. The analogy with coffee was unfortunate and a better comparison might have been to whiskey where the removal of alcohol would leave behind a straw-colored uninteresting beverage or even lightly flavored water in the case of vodka. At least decaffeinated coffee is still a warm, good tasting beverage with consumer-perceived benefits. In removing nicotine from cigarettes, the main psychoactive substance that drives cigarette consumption was removed and there also may have been important sensory effects due to nicotine that disappeared on extraction.
3.6 Extensibility: Is there a limit to it?
The steam engine extended to many industries and the waffle sole attracted non-elites; does extensibility always follow a successful innovation? Limits to the idea of extensibility can show up in surprising places. Statistical tools such as the general linear model may be properly called innovations as they have reached large-scale successful introductions. A relatively recent development, the generalized linear model (McCullagh and Nelder, 1989), allows the exploitation of the mathematical machinery for fitting the linear model to a broad range of other models through the specification of a link function. Although these statistical innovations have been very successfully employed in many fields, they have limitations in product and concept testing because their assumptions do not account for the psychological processes involved in quantifying features or attributes that differentiate products. Just as the cigarette brand Next challenged the limits of extending an idea from one category (coffee) to another (cigarettes), so, too, there are limits to how far we can exploit models from statistics and apply them to human decision making.

A simple example of the limitations of a classical model of binary choice is when a subject chooses the item of greatest intensity from two alternatives as opposed to choosing one of two alternatives that is most similar to one of the alternatives acting as a reference. Both of these methods involve binary choice and theoretical results from the binomial distribution are often used to conduct hypothesis tests on the data. However, without specifying a psychological process for each of these methods, there is no hope of ever relating them to each other or of finding a common framework for interpreting the results. For this we need a theory that allows us to scale sensory intensities, and thus conventional statistical models are blocked from making progress in understanding observations without considering how people make decisions.
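The contrast can be made concrete with a small numerical sketch. The following is an editorial illustration, not part of the original text, and it assumes the standard Thurstonian expressions for two common binary tasks: 2-AFC ("pick the stronger of the two samples") and duo-trio ("pick the sample that matches a reference"). The point is that the same observed proportion correct, which a plain binomial test treats identically, implies very different underlying sensory differences (delta, or d′) once the decision rule is specified.

    # Sketch (author's illustration, not from the chapter): the same proportion
    # correct maps to different Thurstonian deltas under different decision rules.
    # The formulas used here are the standard Thurstonian results for 2-AFC and
    # duo-trio; treat them as assumptions of this sketch.
    from scipy.stats import norm
    from scipy.optimize import brentq

    def pc_2afc(delta):
        # Proportion correct when the subject picks the stronger of two samples
        return norm.cdf(delta / 2 ** 0.5)

    def pc_duo_trio(delta):
        # Proportion correct when the subject picks the sample matching a reference
        a = norm.cdf(delta / 2 ** 0.5)
        b = norm.cdf(delta / 6 ** 0.5)
        return 1 - a - b + 2 * a * b

    def delta_from_pc(pc, pc_fun):
        # Numerically invert a proportion-correct function to recover delta
        return brentq(lambda d: pc_fun(d) - pc, 1e-6, 20)

    observed_pc = 0.75  # identical binomial summary from both hypothetical studies
    print("delta if the task was 2-AFC:    ", round(delta_from_pc(observed_pc, pc_2afc), 2))
    print("delta if the task was duo-trio: ", round(delta_from_pc(observed_pc, pc_duo_trio), 2))

Without such process assumptions, the two 75% results would look interchangeable even though they reflect quite different product differences.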
3.7 Innovation in scaling intensities and emotions
Consumer-perceived benefits, whether justified or not, can drive major changes in the fortunes of companies and even create new businesses. Next we will examine the product and concept testing field to identify benefits that are linked to current and developing models. Consumers of research on products and concepts have certain basic needs and they judge the benefits provided by the methods and models used according to their ability to satisfy those needs.
These consumers have two general interests. One is to measure features that differentiate among products and the other is to reach an understanding of why people like or choose certain products or brands. These interests are discussed in the next sections on scaling intensities and emotions.
3.8 Scaling intensities
According to legend, Gustav Fechner, a physicist, lay late in bed on the morning of 22 October 1850 contemplating a log-law relationship between physical and mental quantities to explain known data. His conception that morning gave birth to the field of psychophysics, in which theories concerning the relationship between the physical world and its mental representation are nurtured. Every year, 22 October is celebrated as "Fechner Day" around the world by Fechnerian psychophysicists and the festivities include a special conference by the International Society of Psychophysicists. It is doubtful that their excitement will ever create a civil disturbance or compete with the Carnival in Rio de Janeiro. Nevertheless, for this small group of followers, Fechner made a scientific advance in psychological scaling that affected the thinking of all students of mental processes.

Commercial applications of functions linking physical quantities, such as the time it takes for a gallon of water to exit a drain, to their mental representations, such as the perceived elapsed time, abound in consumer product categories. Physicochemical measures validated by psychophysical techniques reduce the cost and time of product development and improve the quality of consumer products. There are items for which a clear physicochemical correlate is not obvious, such as the beauty of art or handwriting specimens, and yet we still would like to produce relative scale values. In some cases the perceptual scale values are multivariate and it is of value to quantify and relate these multiple features, even though we are not aware of the physical or chemical correlates. In the consumer products area, an example might be the quality of a fine fragrance.

In 1927, Louis L. Thurstone published a basis for a "purely" psychological theory for scaling that met this need. His papers from this period led to what are known today as Thurstonian probabilistic models. These models specify two basic ideas – the information and cognitive processes leading to decisions are probabilistic and there is a definable decision rule that depends on task instructions. In many cases the decision rule is deterministic (same information – same response) but some models allow a probabilistic decision rule (the response is known only with a certain probability). Thurstone was mainly concerned with the former type of decision rule, although the latter is a reasonable extension.

The development of Thurstonian models has been extensive and there is now a large family of models that account for the results of many different types of behavioral tasks. Table 3.1 is a partial list of the methods for which Thurstonian models have been developed with associated references. Thurstonian models have very compelling process assumptions regarding the distribution of perceptual intensities and the decision rules applicable to each method to which they have been applied. They provide a theoretical framework for relating the results of product testing methods to one another so that the relative power of the methods can be compared (Ennis, 1993a). They are very well suited to
accommodate multivariate attributes of items with a simple structure to account for different variances and covariances (Ennis and Johnson, 1993).

Table 3.1 List of methods and references to corresponding Thurstonian models.
M-alternative forced choice: Hacher and Ratcliff (1979)
Triadic choice: Ennis and Mullen (1986); Ennis and Mullen (1992); Ennis, Mullen and Frijters (1988); Mullen and Ennis (1987)
Tetradic choice: Ennis, Ennis, Yip and O'Mahony (1998); Rousseau and Ennis (2001); Rousseau and Ennis (2002)
Ranks: Böckenholt (1992)
Motivations: Ennis and Rousseau (2004)
Similarities and proximities: Ennis (1988); Ennis (1992); Ennis and Johnson (1993); Ennis, Palen and Mullen (1988); Nosofsky (1988); Shepard (1988); Zinnes and MacKay (1983)
Preferential choice: De Soete, Carroll and DeSarbo (1986); Ennis (1993b); Ennis and Johnson (1994); MacKay, Easley and Zinnes (1995); Mullen and Ennis (1991); Zinnes and Griggs (1974); Zinnes and MacKay (1987)
Liking: Ashby and Ennis (2002)
Identification and categorization: Ashby and Gott (1988); Ashby and Lee (1991); Ennis and Ashby (1993)

In the cases of invention and innovation discussed earlier, the separate roles of inventor and innovator were connected to individuals. Inevitably, this simplification diminishes the role of many other players in any major innovation. In the case of Thurstonian scaling, it could be thought that Thurstone's inventions were popularized and, in some cases commercialized, not by one innovator but by a community of scientists and programmers who contributed to the dissemination of useful tools. These tools were then used to bring Thurstonian scaling to those who would benefit from them.
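As a concrete illustration of the simplest member of this family, here is a short sketch (an editorial addition, not from the chapter) of Thurstone's law of comparative judgment, Case V, applied to paired-comparison preference data. The proportions are made up; the scale values are obtained as column means of the normal-deviate matrix, the classic least-squares Case V solution under equal, uncorrelated discriminal dispersions.

    # Author's sketch: Thurstonian scaling via Thurstone's law of comparative
    # judgment, Case V, with hypothetical paired-comparison data.
    import numpy as np
    from scipy.stats import norm

    items = ["A", "B", "C"]
    # p[i, j] = proportion of respondents choosing item j over item i (made up)
    p = np.array([
        [0.50, 0.65, 0.80],
        [0.35, 0.50, 0.70],
        [0.20, 0.30, 0.50],
    ])

    z = norm.ppf(p)            # normal deviates of the choice proportions
    scale = z.mean(axis=0)     # Case V least-squares solution; values sum to zero
    for name, value in zip(items, scale):
        print(f"{name}: {value:+.2f}")

The decision rule here is the deterministic one mentioned above (choose the item with the larger momentary value); the richer models listed in Table 3.1 change the decision rule and the covariance structure rather than the basic probabilistic idea.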
3.9 Scaling emotions (hedonics)
Let us turn now from scaling intensities, which was the first interest mentioned in the previous section, to models of hedonicity, including liking and preference, for instance. It is quite natural when thinking of a hedonic response to consider it to be based on a hedonic continuum, as we would a sensory variable such as sweetness. This idea is a direct extension of the previous section. Thinking of liking or preference responses as arising from judgments based on a hedonic or utility scale makes it possible to consider using Thurstonian models. Then to find explanatory variables for this hedonic scale, one could use a linear combination
of explanatory variables. A more imaginative alternative is to consider that scaling emotional responses involves considering the possibility of individual internally generated points which are used to make liking or preference decisions. Sometimes these points are referred to as ideal points or motivation points.

The distribution of perceptual intensities is assumed to be normal in Thurstonian models. The justification for the normality assumption is that perceptual intensities result from the aggregated effect of millions of receptors activating a myriad of neurons. According to the central limit theorem, means arising from such an averaging process will tend to be normally distributed with increasing sample size. If, instead of assuming that perceptual intensities are distributed normally, we assume that they are distributed according to a double exponential distribution, then a simplification occurs. Differences in these random variables follow a logistic distribution, which has a closed form, assuming that the perceptual intensities are independent. This benefit of a simple, more computationally efficient model looms large when choices are made among multiple alternatives, an issue of importance in marketing and economics. A Thurstonian model of this task rapidly becomes computationally expensive compared to a logit model, which remains in closed form. This benefit, notwithstanding other limitations which are overcome in a Thurstonian framework, propelled the logit to become a major innovation in a number of fields including economics, marketing and public health. It sometimes happens in the design of technologies that the design that becomes the generally adopted and celebrated innovation is the one that works best at low cost, efficiently, 24/7. A choice model based on the logit is such an innovation.

Contributors to choice models such as the logit and its applications include Daniel McFadden, who was awarded the Nobel Prize in Economics in 2000, and R. Duncan Luce, who was awarded the 2003 National Medal of Science for work he completed in 1959. The impact and even the source of innovations can take decades or even centuries to be identified. Luce, who was 79 when he received the prize from President Bush, remarked: "This is a great honor for which I am most grateful … I'm also grateful for my genes, which have enabled me to live a long life and enjoy this honor."

An area for future development is the incorporation of ideal point concepts into Thurstonian models. This area offers significant advantages compared to the logit to discover drivers of preference, liking, motivations and other hedonic or emotional responses (Ennis et al., 2011). These benefits are already well recognized and the process of turning Thurstonian ideas in this area into innovations has begun. One example is the use of a closed form Thurstonian similarity model from the previous section to find individual and item locations in a sensory space.

It would be foolish to think that any of the models mentioned in this section, or anywhere else, will not be made utterly irrelevant at some point in the future. All scientific models are fictions, not necessarily extensions of each other, and at any given time the accepted narrative is the one that explains the observables best.
With advances now being made in neuroscience, the field of psychology itself will disappear in its present form as we formulate compelling molecular models to answer the question: “What is the chemistry of choice?” When that happens, and it will, we will have a rather different perspective on the parameters that account for decision making.
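To make the computational contrast between the Thurstonian (probit) and logit formulations concrete, here is a brief editorial sketch, not from the chapter. With Gumbel (double exponential) noise the choice probabilities have a closed softmax form; with normal noise the same probabilities require numerical integration, which is the burden that grows with the number of alternatives.

    # Author's sketch: closed-form logit versus numerically integrated probit
    # choice probabilities for three hypothetical alternatives.
    import numpy as np
    from scipy.stats import norm

    mu = np.array([0.0, 0.5, 1.2])   # hypothetical mean utilities

    # Logit: closed form
    p_logit = np.exp(mu) / np.exp(mu).sum()

    # Thurstonian/probit: P(choose i) = integral of phi(z) * prod_{j != i} Phi(z + mu_i - mu_j) dz
    def p_probit(i, grid=np.linspace(-8, 8, 4001)):
        integrand = norm.pdf(grid)
        for j in range(len(mu)):
            if j != i:
                integrand = integrand * norm.cdf(grid + mu[i] - mu[j])
        return np.trapz(integrand, grid)

    print("logit :", np.round(p_logit, 3))
    print("probit:", np.round([p_probit(i) for i in range(len(mu))], 3))

The two sets of probabilities are similar in this small example; the practical difference the chapter points to is that the logit expression stays in closed form as the number of alternatives grows, while the probit integral must be evaluated numerically.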
3.10 Final remarks
In general, for inventions to blossom into innovations, they usually benefit from the confluence of certain, sometimes chance, characteristics:
● They may coincide with a movement, such as a running boom
● They may be extendable to other fields
● They should have consumer-perceived benefits
● Their commercial value should be recognized and supported by individual or community entrepreneurs
● They should be easily implemented; otherwise, they will be interesting but, as Schumpeter remarked, "economically irrelevant."
References
Ashby, F.G. and Ennis, D.M. (2002) "A Thurstone-Coombs Model of Concurrent Ratings with Sensory and Liking Dimensions". Journal of Sensory Studies, 17, 43–59.
Ashby, F.G. and Gott, R.E. (1988) "Decision Rules in the Perception and Characterization of Multidimensional Stimuli". Journal of Experimental Psychology: Learning, Memory and Cognition, 14, 33–53.
Ashby, F.G. and Lee, W.W. (1991) "Predicting Similarity and Categorization from Identification". Journal of Experimental Psychology: General, 120, 150–172.
Böckenholt, U. (1992) "Thurstonian Models for Partial Ranking Data". British Journal of Mathematical and Statistical Psychology, 43, 31–49.
De Soete, G., Carroll, J.D. and DeSarbo, W.S. (1986) "The Wandering Ideal Point Model: A Probabilistic Multidimensional Unfolding Model for Paired Comparisons Data". Journal of Mathematical Psychology, 30, 28–41.
Ennis, D.M. (1988) "Confusable and Discriminable Stimuli: Comments on Nosofsky (1986) and Shepard (1986)". Journal of Experimental Psychology: General, 117, 408–411.
Ennis, D.M. (1992) "Modeling Similarity and Identification When there are Momentary Fluctuations in Psychological Magnitudes". In F. Gregory Ashby (ed.), Multidimensional Models of Perception and Cognition. Mahwah, NJ: Lawrence Erlbaum Associates.
Ennis, D.M. (1993a) "The Power of Sensory Discrimination Methods". Journal of Sensory Studies, 8, 353–370.
Ennis, D.M. (1993b) "A Single Multidimensional Model for Discrimination, Identification, and Preferential Choice". Acta Psychologica, 84, 17–27.
Ennis, D.M. and Ashby, F.G. (1993) "The Relative Sensitivities of Same-Different and Identification Judgment Models to Perceptual Dependence". Psychometrika, 58, 257–279.
Ennis, D.M. and Johnson, N.L. (1993) "Thurstone-Shepard Similarity Models as Special Cases of Moment Generating Functions". Journal of Mathematical Psychology, 37, 104–110.
Ennis, D.M. and Johnson, N.L. (1994) "A General Model for Preferential and Triadic Choice in Terms of Central F Distribution Functions". Psychometrika, 59, 91–96.
Ennis, D.M. and Mullen, K. (1986) "A Multivariate Model for Discrimination Methods". Journal of Mathematical Psychology, 30, 206–219.
Ennis, D.M. and Mullen, K. (1992) "A General Probabilistic Model for Triad Discrimination, Preferential Choice, and Two-Alternative Identification". In F. Gregory Ashby (ed.), Multidimensional Models of Perception and Cognition. Mahwah, NJ: Lawrence Erlbaum Associates.
Ennis, D.M. and O'Mahony, M. (1995) "Probabilistic Models for Sequential Taste Effects in Triadic Choice". Journal of Experimental Psychology: Human Perception and Performance, 21, 1–10.
Ennis, D.M. and Rousseau, B. (2004) "Motivations for Product Consumption: Application of a Probabilistic Model to Adolescent Smoking". Journal of Sensory Studies, 19, 107–117.
Ennis, D.M., Mullen, K. and Frijters, J.E.R. (1988) "Variants of the Method of Triads: Unidimensional Thurstonian Models". British Journal of Mathematical and Statistical Psychology, 41, 25–36.
Ennis, D.M., Palen, J. and Mullen, K. (1988) "A Multidimensional Stochastic Theory of Similarity". Journal of Mathematical Psychology, 32, 449–465.
Ennis, D.M., Rousseau, B. and Ennis, J.M. (2011) Short Stories in Sensory and Consumer Science. Richmond, VA: The Institute for Perception.
Ennis, J.M., Ennis, D.M., Yip, D. and O'Mahony, M. (1998) "Thurstonian Models for Variants of the Method of Tetrads". British Journal of Mathematical and Statistical Psychology, 51, 205–215.
Hacher, M.J. and Ratcliff, R. (1979) "A Revised Table of d′ for m-alternative Forced Choice". Perception and Psychophysics, 26, 168–170.
MacKay, D.B., Easley, R.F. and Zinnes, J.L. (1995) "A Single Ideal Point Model for Market Structure Analysis". Journal of Marketing Research, 32, 433–443.
McCullagh, P. and Nelder, J.A. (1989) Generalized Linear Models. London: Chapman & Hall.
Moore, K. (2006) Bowerman and the Men of Oregon: The Story of Oregon's Legendary Coach and Nike's Cofounder. Emmaus, PA: Rodale.
Mullen, K. and Ennis, D.M. (1987) "Mathematical Formulation of Multivariate Euclidean Models for Discrimination Methods". Psychometrika, 52 (2), 235–249.
Mullen, K. and Ennis, D.M. (1991) "A Simple Multivariate Probabilistic Model for Preferential and Triadic Choices". Psychometrika, 56, 69–75.
Nosofsky, R.M. (1988) "On Exemplar-based Exemplar Representations: Comment on Ennis (1988)". Journal of Experimental Psychology: General, 117, 412–414.
Rousseau, B. and Ennis, D.M. (2001) "A Thurstonian Model for the Dual Pair (4IAX) Discrimination Method". Perception and Psychophysics, 63, 1083–1090.
Rousseau, B. and Ennis, D.M. (2002) "The Multiple Dual Pair Method". Perception and Psychophysics, 64, 1008–1014.
Scherer, F. (1984) Innovation and Growth: Schumpeterian Perspectives. Cambridge, MA: MIT Press.
Shepard, R.N. (1988) "Time and Distance in Generalization and Discrimination: Comment on Ennis (1988)". Journal of Experimental Psychology: General, 117, 415–416.
Thurstone, L.L. (1927) "A Law of Comparative Judgment". Psychological Review, 34, 273–286.
Tushman, M. and Moore, W.L. (1982) Readings in the Management of Innovation. Boston, MA: Pitman Press.
Zinnes, J.L. and Griggs, R.A. (1974) "Probabilistic Multidimensional Unfolding Analysis". Psychometrika, 39, 327–350.
Zinnes, J.L. and MacKay, D.B. (1983) "Probabilistic Multidimensional Scaling: Complete and Incomplete Data". Psychometrika, 48, 27–48.
Zinnes, J.L. and MacKay, D.B. (1987) "Probabilistic Multidimensional Analysis of Preference Ratio Judgments". Communication and Cognition, 20, 17–44.
"Innovation is an iterative process. You learn, you continue to improve, you stay ahead."
Mehmood Khan, PepsiCo Chief Scientific Officer
This chapter further defines the two stages of consumer research: up-front innovation, which looks for consumer insights, and new product development (NPD), which validates a product opportunity. The Consumer Explorer will learn how to choose an appropriate tool for any objective of consumer exploration and ensure that the outcomes are grounded on consumer insights. This chapter sets the stage for the different research tools for Consumer Explorers and introduces the iterative qualitative-quantitative research (IQQR) model. The chapter will discuss how to jumpstart the research process by engaging the consumer from early concept to final product design.
Chapter 4
Designing the Research Model
Kannapon Lopetcharat, Dulce Paredes and Jennifer Hanson

Key learnings
✓ Factors influencing product innovation
✓ Setting up a successful product innovation research program
✓ Iterative qualitative-quantitative research (IQQR) process

4.1 Factors influencing product innovation
Product innovation comprises two major stages: (1) Up-front innovation to identify product opportunities by understanding consumers' wants, needs and pain points and (2) New product development (NPD) to refine, screen and validate new product opportunities grounded on consumer insights. In practice, many companies lump up-front innovation and NPD into one process, keep the two stages distinctly separate, or fall somewhere in between. Regardless of the organizational structure, the success and efficiency of product innovation depend greatly on two factors: (1) Organization and (2) Execution.
4.1.1 Organizational factors
Organizational factors influence how different departments work with each other and, as a company, how the results from one stage will be utilized in the next step along the process. Organizational factors are dependent on and unique to each company. Learning how the process works within your company and understanding how decisions are made along the way will enhance the efficiency of product innovation (e.g. reducing time, better communication, reducing cost, etc.). Here are four questions to answer to ensure the efficiency of product innovation.
(1) Do the goals of product innovation align with technology strategies and with business strategies and goals? The goals of product innovation must align with and support business strategies and goals; otherwise, the innovations will not be useful for the company. Also, technology strategies allow product innovation to realize its goals more effectively.
(2) Which projects should a company invest its money in? Because there will always be a limited amount of money, senior management must invest adequate resources in the right innovation projects, both tactically and strategically.
(3) Is your product innovation effective and sustainable? Effective innovation means that the process is flexible and learns from idea to launch. The effectiveness of product innovation should not be defined by just the number of products launched. It should be defined as the ratio between the number of products that are launched and the resources invested (i.e. money and time). With this criterion, developing one technology platform that needs five years for ten products will be more efficient than developing ten technologies that need ten years for ten products, assuming both projects need the same amount of money.
(4) Does your organization have a culture that nurtures innovation? A culture that nurtures innovation requires commitment from senior management and true cross-functional teams to succeed. The signs of a successful innovation culture are:
(a) Seamless process from idea to launch
(b) Healthy discussion among the team members
(c) Experimentation is encouraged
(d) Fail early
(e) Iterative.
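A quick arithmetic sketch of the launch-efficiency idea in point (3), using made-up numbers and a simple operationalization of "resources invested" as budget multiplied by years (an editorial illustration, not a definition from the book):

    # Author's sketch of the ratio in point (3): products launched per unit of
    # resource invested, with resources approximated here as budget * years.
    def innovation_effectiveness(products_launched, budget, years):
        return products_launched / (budget * years)

    same_budget = 1.0  # identical hypothetical budget for both programs
    one_platform = innovation_effectiveness(10, same_budget, 5)       # ten products in five years
    ten_technologies = innovation_effectiveness(10, same_budget, 10)  # ten products in ten years
    print(one_platform, ten_technologies)  # 2.0 vs. 1.0: the single platform is twice as efficient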
Organizational factors enhance the overall efficacy of product innovation. These factors should be guided by business strategies and goals and by technology strategies, within a culture that nurtures innovation. Execution factors are equally important because without them, the process cannot reach its full potential.
4.1.2 Execution factors
Execution factors are critical to product innovation because they directly and greatly influence what will get "in" to the NPD process. These factors have a direct influence on the quality of the outcomes, more than just reducing time and cost. Execution factors involve the people who actually conduct the research (e.g. which method they choose to use, how they use the results to persuade upper management). Therefore, the factors are not unique to a company or even a product category. Consumer Explorers (CE) are the ones who select research methods and execute studies; therefore, the ways that they utilize consumer research tools will have an impact on the outcome and ultimately the efficiency of product innovation. In this book we are providing tools, guidelines, pros and cons, limitations and suggestions for conducting consumer research along the product innovation process that will allow CEs to select consumer research tools appropriately based on situations and goals. In addition, the
success and efficiency of product innovation depend greatly on how well organizational factors and execution factors work together.
4.2 Setting up a successful product innovation program
Business strategies and goals provide the strategic focus to direct, define and develop the strategies for product innovation, including which product, market and technology areas to pursue, while technology strategies allow product innovation to realize its goals effectively. Consequently, the NPD goals and roles in the business are clearly defined and communicated. The best way to do this is to develop product innovation strategies as "a part" of business strategies. With that in place, a company can start allocating resources accordingly and developing product roadmaps that ensure long-term success. There are four questions to answer before product innovation strategies become reality (or when a company starts spending money):
(1) What are the goals and roles of new product innovation? Defining and communicating these goals and roles clearly is very important. The goals and roles must impact the overall business goals, such as "25% of the business's sales will come from new products by the end of 2012". This will create a common purpose for everyone involved in the process and will set a good environment for innovation.
(2) What are the strategic boundaries and which directions do we want to pursue? Defining which market, industry sector, applications, product types or technologies your company will focus on is very important because these will help your company decide which direction to pursue. Two factors are usually used to define the boundaries and directions: attractiveness and business strength.
(a) Attractiveness can be evaluated using many metrics, such as growth opportunities (market size and value), competition, margin and new product potential (technology maturity).
(b) Business strength is simply what your company can do better than other competitors within the boundaries of interest. This involves evaluating business core competencies and strengths that can be leveraged.
(3) How do you want to attack and enter? Answering this question is very important for product innovation as it defines the type of innovation itself (disruptive, incremental or cost-reduction) that dictates the speed of new product development. For example, a "first to market" strategy demands more disruptive innovation than incremental innovation, "fast me-too" demands incremental innovation and "value for less" demands cost-reduction innovation.
(4) Do you know the priorities of innovation projects? This stage deals with the actual commitment and resource allocation to the project and it is when strategies become reality as money and resources will be spent. Project priority should be based on the business's strategic importance. The priorities of innovation projects allow senior management to develop a successful strategic product portfolio. To execute this plan successfully and
effectively, the management needs a set of criteria to decide which project should be continued or stopped and an effective product development process. The following are the requirements that each innovation project needs to fulfill at any time during the review:
● Does the project still fit with innovation strategies and business goals?
● Will the new product deliver a compelling value proposition to target consumers?
● Will the new product be easily differentiated from its competition?
● Have the current market conditions (competitive landscape) and projections (volume, market share and margins) changed since the last review? If so, do the changes impact the priority of the innovation?
● Will the company's leverage still be relevant and effective?
● What is the status of technology feasibility to deliver the product?
Does the project still fit with innovation strategies and business goals? Will the new product deliver a compelling value proposition to target consumers? Will the new product be easily differentiated among its competitions? Have the current market conditions (competitive landscape) and projection (volume, market share and margins) changed from the last review? If so, do the changes impact the priority of the innovation? Will company’s leverage still be relevant and effective? What is the status of technology feasibility to deliver the product?
4
Current approach to new product development The questions above should be asked again and again during project review along the product innovation process where a product is developed from idea to actual commercialization. Generally, there are seven steps used by companies in their NPD and there are regular checking points built in where go/no go decisions will be made to products under development (Figure 4.1). There are several NPD processes published and the most notable one is the stage-gate system proposed by Robert G. Cooper in 1990 (Moskowitz et al., 2006). The seven-step process is a robust and flexible process that allows management to review and adjust its tactical planning to reach business goals through product innovation. Many successful companies differ from other companies at the “pre-development” steps (step 0 to step 2) of the process as the successful companies invest time, money and resources to “do their homework” by listening to their consumers to deliver differentiated products that deliver what consumers need. The purpose of this book is not to change the process but
rather to enhance it by providing tools and methodologies to accelerate and increase the success rate in gathering consumer insights to build the pipeline that feeds into the NPD, especially during step 0 to step 2 and step 4. Successful innovation companies reduce resources wasted on weak ideas, while the average company will spend more than 55% of its innovation budget on ideas that should not have made it into the NPD in the first place (Cooper, 2001).

[Figure 4.1 Seven general steps in NPD with five built-in check points (adapted from Moskowitz et al., 2006). Steps: 0 Discovery, 1 Scoping, 2 Building business case, 3 Development, 4 Testing and validation, 5 Launch, 6 Post-launch review; check points 1–5 mark the go/no-go decision reviews.]
4.4 Iterative qualitative-quantitative research model
To enhance product innovation, the authors would like to introduce a new product innovation process that encompasses up-front innovation and new product development in one seamless process, called the iterative qualitative-quantitative research process (IQQR process) (Figure 4.2). The IQQR's goals are three-fold:
(1) Enhance the ability to identify high-potential consumer, market and product insights early
(2) Ensure effective resource allocation by increasing the probability that breakthrough product ideas grounded on consumer insight enter the NPD process and
(3) Build a learning organization through a series of qualitative-quantitative iterative learning cycles throughout innovation.

Interactive innovation is the guiding principle for the IQQR process. The principle encourages an organization to: (1) Experiment often (2) Fail early and (3) Learn from each iteration.

A series of qualitative-quantitative iterative learning cycles at the early steps of the IQQR process (equivalent to step 0 to step 2 in the traditional seven steps in NPD) (Figure 4.3) allows researchers to fail often and fail early and to create prototypes or protocepts for screening quickly. Quickly coming up with prototypes that clearly communicate the insights to consumers will significantly reduce product development time and money and will certainly increase the probability of success in the final validation. Instead of sequentially running small qualitative and large quantitative studies, the IQQR process involves a series of iterative qualitative and quantitative work which allows the project team to continue to learn from each one. In this book, we offer cutting edge tools that provide deeper understanding of consumers and allow the team to form appropriate questions, set relevant hypotheses, and build knowledge quickly as the springboard for new product ideas. For example, the Consumer Explorer can conduct a series of small-scale qualitative research (e.g. in-context interviews and ethnographies) to identify insights on actual product usage and behavior up-front. The insights are then used to decide development paths, leading to new product ideas, targeting
consumers, claims and communication plans that are grounded on relevant consumer wants and needs in real-life situations. Then the product ideas, the claims and the communication plans are developed and refined quickly through screening and rapid optimization studies, such as conjoint and discrete choice modeling. These steps result in the identification of an appropriate number of top candidates for final validation studies.

[Figure 4.2 Iterative qualitative-quantitative research (IQQR) process for deeper insights and meaningful metrics.]

[Figure 4.3 Comparing the iterative qualitative-quantitative research process to the traditional seven-step NPD. The IQQR process encourages a series of iterative qualitative-quantitative learning cycles throughout the process.]
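As a minimal illustration of the discrete choice modeling mentioned above, here is an editorial sketch (not from the book) that fits a multinomial logit to hypothetical choice-task data. The attribute names, levels and "true" part-worths are all invented for the example; real conjoint tools (see Chapter 7.2) add experimental design, segmentation and validation layers on top of this core idea.

    # Author's sketch: a multinomial logit ("discrete choice") model fitted by
    # maximum likelihood to simulated choice tasks with two concept attributes.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    # Hypothetical design: 200 choice tasks, 3 concepts per task, 2 attributes
    # (price in dollars, claim present yes/no); "true" part-worths drive the simulated answers.
    n_tasks, n_alts = 200, 3
    X = np.column_stack([
        rng.choice([2.99, 3.49, 3.99], size=n_tasks * n_alts),
        rng.integers(0, 2, size=n_tasks * n_alts),
    ]).reshape(n_tasks, n_alts, 2)
    true_beta = np.array([-1.2, 0.8])            # dislikes higher price, likes the claim
    util = X @ true_beta + rng.gumbel(size=(n_tasks, n_alts))
    choice = util.argmax(axis=1)                 # simulated respondent choices

    def neg_log_lik(beta):
        v = X @ beta                             # systematic utilities
        v = v - v.max(axis=1, keepdims=True)     # numerical stability
        p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
        return -np.log(p[np.arange(n_tasks), choice]).sum()

    fit = minimize(neg_log_lik, x0=np.zeros(2), method="BFGS")
    print("estimated part-worths (price, claim):", np.round(fit.x, 2))

The signs and relative sizes of the estimated part-worths are what screening and rapid optimization studies read off: how much a price increase costs in preference versus how much the claim adds.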
When you go through the IQQR process, not only are the learnings grounded on the experiences of real people for a particular project, but you also gain additional knowledge for future innovation. The team can convert the additional knowledge to more insights for new product ideas that are then fed into the product development cycle quicker because the knowledge is already there,
ready for the team to pick up. The team generates more knowledge and learns more every time they go through the IQQR process.

Our role as Consumer Explorers is to be strategic partners. We reduce risk and build confidence by guiding the team to navigate the unknowns and uncertainties that occur during front-end innovation. We design efficient research plans that lead to marketing and product development strategies that refine and validate product ideas for successful launch and in-market results. The advancement in digital technology (such as audio and video diaries, social networks, blogs and real-time testing) has enabled Consumer Explorers to get closer to what consumers think, say and do than ever before. In addition, new research tools like quantitative anthropology and network analysis allow Consumer Explorers to convert traditional qualitative information to quantifiable metrics. In this book, the authors offer various new qualitative and quantitative consumer research tools, with detailed procedures, pros and cons, suggestions and guidance so that when readers finish reading each tool section they can apply the tool appropriately. Figure 4.4 shows suggested stages along the IQQR process at which certain research tools will be effective and appropriate. The research tools described in this book, along with new digital technology, enable the Consumer Explorer to pay more attention to strategic questions than to tactical problems (see Chapter 2 for details).

[Figure 4.4 Suggested consumer research tools along the IQQR process. The numbers indicate the chapter and section where a detailed description of each tool can be found: 6.1 Consumer language probes; 6.3 Qualitative multivariate analysis (QMA); 6.1, 6.2 and 6.5 Ethnography and other front-end voice of consumer tools; 6.4 The Gameboard "Model Building"; 8.1 Contemporary product research tools; 8.2 Consumer panels for product understanding; 8.3 Consumer advisory boards; 6.5 Quantitative anthropology; 8.4 Rapid product navigation; 8.5 Product portfolio optimization; 7.1 Kano satisfaction model; 7.2 Conjoint analysis plus; 9.1 Extending product research for predicting market success; 9.2 Product concept validation tests; 6.6 Emotions research; 7.3 Benefit hierarchy analysis.]
References
Cooper, R.G. (2001) Winning at New Products: Accelerating the Process from Idea to Launch (3rd edition). New York, NY: Perseus Publishing.
McWilliams, J. (2011) PepsiCo CEO: "You Learn, You Continue to Improve, You Stay Ahead". Atlanta Business News, 4 June.
Moskowitz, H.R., Beckley, J.H. and Resurreccion, A.V.A. (2006) Sensory and Consumer Research in Food Product Design and Development. Ames, IA: Blackwell Publishing Professional.
"No one ever really sells anything at all – people buy."
Lisa Fortini-Campbell, author of Hitting the Sweet Spot
This chapter outfits Consumer Explorers with the skills to spot high-potential consumer insights and describes many characteristics of consumer behaviors and situations that allow quicker discovery of high-value insights; everyone can be trained to do so. The audience will learn that nothing is "ordinary" about the consumer's routines and habits, and that innovation which changes this "ordinary" behavior will alter the landscape of competition. This chapter provides tips for soliciting insights hidden in actual "natural" behavior and for infusing emotional benefits into products and brands.
Chapter 5
What You Must Look For: Finding High Potential Insights
Kannapon Lopetcharat, Jennifer Hanson and Dulce Paredes
Key learnings
✓ What is an insight?
✓ How to develop an insight
✓ Making an insight ownable

5.1 What is an insight?

Many people have trouble defining an "insight". It is a term that rose in popularity among industry professionals once "market research" departments were renamed "consumer insight" departments. This name change was designed to position market research as more value-added to the businesses it serves, at a time when market research was losing its "seat at the table" when it came to contributing to business decisions. The change of name was meant to elevate the outcomes of research studies from delivering research findings based on facts to delivering insights. Instead of changing the outcomes, however, the term "insight" became synonymous with "research finding". Research departments and suppliers started to refer to themselves as "insights companies" or to what they deliver as "insights", when in fact they did not change what they delivered: research findings, merely facts that summarize data, such as: "A majority of moms state oral care is extremely important for their children." As a Consumer Explorer (CE), if you cannot answer the question "why?" with this statement, you have a research finding, not an insight.
An insight is not a research finding. An insight is a statement that connects the "whys" to consumer decisions. It delivers a very focused, yet deep understanding of consumer decisions. By definition, an insight must address the question "why?" within the statement. For example, the research finding "A majority of moms state oral care is extremely important for their children" can be transformed into an insight by including the "why": "Moms focus on proper oral care so their children develop a great foundation for their adult lives." Good insights have the following characteristics:

● They appear to be qualitative in nature even though they can be based on quantified factual information and supported by knowledge gained from qualitative tools. Researchers sometimes refer to this as "you'll know it when you see it", a "hunch", an "intuition" or a "gut feeling". This feeling is a common revelation that you and others can see without talking to each other.
● They allow you to see connections across research findings. Scrutinizing piece after piece of information allows the CE to develop the experience to see connections and patterns among the sea of data, and this experience allows the CE to spot an insight. An insight is not the connections between different pieces of factual information; it is an understanding of the wants and desires of consumers behind the data that enables CEs to connect the dots among the information.
● They build hypotheses for future exploration. After the CE gains a deeper understanding of the consumers, it is natural to form hypotheses regarding the consumers and the products. This deeper understanding provokes more thoughts and helps researchers see more subtle signals hidden among noise and obvious facts. Here are examples of thought-starter questions:
  ° Why do people behave that way?
  ° Why do they feel this way?
  ° What do they like to do after…?
  Good insights force the research team to think deeper about target consumers and form hypotheses about them based on this new revelation.
● They are simple and easy to understand. An insight is intuitive by itself. It is like a 30-second elevator pitch. If one cannot state an insight in one simple sentence, the statement is most likely a series of facts laced together with justifications and therefore not an insight.
● They seem to be "after the fact". Because an insight is simple, it becomes obvious to audiences after it has been pointed out to them and increases their awareness of it.

5.2 What is an "ownable" insight?

On its own, an insight may not be unique. The insight stated above, "Moms focus on proper oral care so their children develop a great foundation for their adult lives", is not unique; it is something that any oral care product could claim as an insight. It is the CE's job to help make these insights unique. To do this, the CE needs to frame that insight in a way that is actionable for a particular product
based on "how" consumers use the products they choose. If insights don't create the bridge between brands and consumers, they are considered category insights, not ownable insights. It is the application to a product that makes an insight ownable. Only then can an insight drive marketing and product development plans that lead to sales and profitability. A good example of an insight that drove business sales and profitability comes from when Oral-B™ was looking to expand its manual toothbrush business. It hired IDEO™, a well-known consulting and design firm. The IDEO™ team looked at the vast information about the manual toothbrush market. The team knew that the major consumers are adults and that young children do not like to brush their teeth. After many ethnography sessions, they found a great insight that brought Oral-B™ millions of dollars: "Young children cannot manipulate the toothbrush well; hence, they do not like to brush their teeth" (Kelly and Littman, 2005). This insight is clear and simple; it appears obvious and after the fact, and the knowledge forces the project team to think deeper (more questions). Children cannot manipulate small objects because their motor skills have not fully developed. This led to the invention of a "big handle" toothbrush for kids that seemed counterintuitive at the time but was a big hit.
5.3 How to develop high potential insights

There are many tools that CEs can use to create insights. They range from observation, to qualitative and quantitative research. It is the application of these tools that matters, more than the individual tool itself. The CE must always keep in mind the definition of an insight and potential business application when choosing research tools to deliver high potential insights. More often than not, delivering insights will require the application of more than one research tool. The CE must gather these different inputs and build the insights from knowledge gained across all sources (Figure 5.1).
[Figure 5.1 Insight is the result of understanding why consumers behave in certain ways. The figure places INSIGHT at the center, informed by ATTITUDES (what they think), NEEDS (what they need), BEHAVIOR (what they do), DEMOGRAPHICS (who they are) and LIFESTYLES (how they live).]
5.4 Behavior: The basis for all insights

Since an insight must provide an understanding of a consumer decision, it is important to focus on identifying the appropriate behavior to build the foundation to the insight. Therefore, the application of the tools the CE uses must result in an outcome of identifying a behavior that leads to a choice of product by consumers. Depending on the amount of existing research findings, the CE may be able to identify the behavior using previous research, rather than conduct new research studies. Behaviors can be identified from quantitative surveys or through observational techniques, such as ethnographies.
5.5 Attitudes and needs: The explanation for behavior

Once the behavior is identified, the CE must develop a deep understanding of why the behavior exists. The tools chosen for this step must focus on the attitudes or beliefs the consumers have that lead to the behavior as well as the needs they are trying to satisfy. This aspect of insight development is usually gained through consumer discussions, whether one-on-one or in groups.
5.6 Demographics and lifestyles: The personal connection

Finally, the CE must consider the consumer demographics and lifestyle. Demographic information includes the age, gender, life-cycle stage, income and occupation of consumers, while lifestyle refers to the attitudes, interests and opinions of consumers, as well as the differences in behavior that relate to social values (ESOMAR glossary). The insights must be relevant to the consumer target that exhibits the behavior. Understanding why consumers make decisions based on demographic characteristics such as lifestage, ethnicity or gender will allow the CE to frame the insight to be relevant to the behavior target. Many times, the CE can reference syndicated studies that provide a broad understanding of different demographic groups to obtain this knowledge. In addition to general knowledge about these targets, the CE must keep in mind that a consumer is not just one demographic or lifestyle throughout the day. We coin the term "multiple-selves" to describe the many roles consumers play throughout the day or in many situations. For example, a woman can be a mother, a daughter, a girlfriend, a wife and many other roles. Which role a person is in will influence the situation a consumer chooses to experience and vice versa. Situations are important because she has different product expectations and goals based on the situation: she can have a moisturizing lipstick for daily use at work, a shiny plumping lipstick for a night out and a lip balm for nightly treatment. Owning many products from the same category is a sign of the multiple-selves phenomenon. Bowman (2008) also referred to this phenomenon as "split personalities". When approaching the demographic and lifestyle understanding part of insight development, the CE must consider whether to treat the person as if
they experience only one situation for their product, or many situations based on viewing a consumer as having “multiple-selves”.
5.7 Making insights ownable

It is easy to uncover an insight that is a category insight and not an ownable insight. The CE needs to provide ways to apply the insight to a product. Only through this application will an insight provide the foundation to sales and profitability. Many times, we refer to this as the "how" of insights:

● What are all the different ways consumers use my products?
● Are there work-arounds and trade-offs people make when using my product that can provide the team with ways to make it better fit with how consumers actually use it?
● How can I translate these specific behaviors to marketing messages that will deliver sales?
Since behavior is the foundation to all insights, it is important that CEs learn to look for the behavior-based signs that have the most opportunity to lead to an ownable insight. The rest of the chapter gives examples of behavior-based signs to spot insights.
5.7.1 Routine and regular habits while consumers are using products

Every consumer has his or her own routine and ritual for using a product. Observing and understanding consumption routines and rituals allows the CE to spot where and how a new product can improve consumers' lives and differentiate itself from the other products in a "seamless manner". The seamless manner is very important so that consumers do not feel they have to change their routine and habits, or that they have to trade off too much to achieve their goals. A new product that fits into their lives seamlessly and effortlessly is usually identified as more convenient. However, convenience is not the only thing the product should deliver: a new product should also bring a change in habit while still fitting the consumer's routine. Consequently, the product will differentiate itself from the rest of the market offerings.
5.7.2 Mistakes they make

Few consumers really read the instructions that accompany a product they purchase, and even fewer follow the instructions provided. Studying the mistakes that consumers make fuels new innovation through product adaptation. Product adaptation occurs when consumers do not use a product as it is designed and devise a new way to use the product to reach their goals. Dismissing this sign misses an opportunity to improve current products to be more consumer friendly. A great example is Apple's iMac series (starting with the iMac G3 in 1998), which focused on consumer-friendly offerings (e.g. convenient to set up, unique
designs and graphic display). In 1998, one of the iMac G3 selling points was "connecting to the Internet in two steps: (1) plug in the power cord and (2) plug in the telephone line". The two-step process was much easier than that of regular PCs from its rivals (e.g. IBM, Dell, Toshiba and many others).
5.7.3 Consumers combine products

This sign is usually found among avid users of a product category. They combine products because current products do not offer all the desired benefits and values, or because the products do not offer benefits specifically for them. This behavior provides opportunities for creating incremental innovations (e.g. line extensions), breakthrough innovations for unmet needs, or customizable solutions. An example is Pretzel M&M's, which was named the 2011 Product of the Year in the candy and snacks category by Product of the Year USA (MARS, 2011). Pretzel M&M's was created based on the insight that the combination of sweet and savory in a snack is highly appealing to consumers (The Datamonitor Group, 2010).
5.7.4 Home remedies

Understanding home remedies provides insights into consumers' beliefs about how certain things should work. It allows easy associations between product attributes and benefits that lead to many sustainable innovations; line extensions in personal care products and new flavors in food and drinks are good examples. A limitation of using home remedies is that they are mostly region- or culture-specific. Turning a home remedy solution into a global product is difficult, but it is possible through thoughtful translation and communication of benefits. A good example is the use of the term "super fruits" in combination with individual regional names (e.g. acai, goji, mangosteen) to convey strong anti-oxidant properties and the exotic nature of the ingredients. Later in this book, we provide many methodologies that allow CEs to identify the links between product attributes (means) and aspirational goals (ends) in both qualitative and quantitative ways.
5.7.5 Consumers alter packages or use packages in ways we do not expect

The product's package plays an important role in consumers' satisfaction, and it is very important to observe how consumers actually use your products in different real situations. This observation can yield insights for anything from sustainable innovations (e.g. an easy-open cap, individual packs) to disruptive innovations (e.g. devices, applicators). The first thing to keep in mind when observing consumers, especially in the consumer packaged goods sector, is that the product is what consumers pay money for from beginning to end. Assuming that the package is merely a container that houses things that consumers actually consume is a common mistake that researchers make. Expanding the definition of packaging to be an integral feature of the product is a good start toward gaining more insights regarding packages.
Observing how consumers alter the original package provides insights for many innovations. Laundry detergent is an obvious product where consumers often alter the package (in this case, the bottle). Before 2006, US consumers could only buy a 100 oz bottle of liquid laundry detergent (128 oz is a gallon); consumers then had to lift the bottle and pour detergent into the cap provided to measure the amount every time they did laundry (this is the case for Americans who own a washing machine). Those who do not own their own washing machines had four choices:

(1) Carrying the big bottle to the laundromat
(2) Transferring some of the detergent to a smaller container
(3) Buying a more expensive detergent at the laundromat
(4) Hiring someone else to do the laundry.
The first two options were adopted by many Americans. Transporting and lifting a big bottle of liquid detergent is cumbersome for most consumers, especially when they have to travel far from home to a laundromat, and transferring liquid detergent from a big bottle to a smaller one is inconvenient and messy. It was obvious that American consumers needed a smaller package, and there were many, but consumers had to trade off between a smaller size and spending money to buy detergent more often. In 2006, Unilever® provided a solution by introducing a bottle one-third the size (32 oz) containing 3x concentrated laundry detergent. This was hailed as a great sustainable innovation at the time because it benefited consumers, the environment and the company. The smaller bottle of 3x concentrated liquid detergent solved consumers' problems of transporting, lifting and pouring while offering the same monetary value (providing the same number of laundry loads as the big bottle). It is good for the environment because it uses less resin, reduces pollution from transportation as the company can ship more product at a time, reduces the use of paper boxes, and so on. Unilever® benefited by spending less on trucks and transportation, paper cartons, etc. The idea was so impactful that other companies followed, and it created a new subcategory called concentrated liquid detergent that overtook the regular bottle. A recent detergent innovation taking convenience to the next level is Purex Complete 3-in-1® Laundry Sheets, which combine a load's worth of detergent, softener and anti-static in one sheet. Observing consumers' use behavior and habits, and identifying pain points and patterns in that behavior, provides insights for great innovations.
5.7.6 How and where consumers store products

How and where consumers store their products tells CEs more than just location: it shows the relationships between consumers and their products. These relationships are formed by certain mind-sets (the knowledge consumers actually have about products and their expectations of them) and manifest themselves in the form of non-verbal cues. Avon's Pro to Go one-handed lipstick (launched in 2008) was an innovative solution based on the
insight that women store their lipsticks in many different places other than where they normally put on make-up and that they like to apply their lipstick while multi-tasking.
5.7.7 When consumers look for advice or help

It is important to identify situations in which consumers are looking for help in order to use a product, or need reassurance that a product works. This boundary provides opportunities for product and marketing innovations. For example, many personal care products use claims associated with salon or professional results. Another example is single-serve beverage machines, which started with freshly brewed coffee products; these machines provide high-quality coffee and target freshly brewed coffee consumers who are looking for convenient solutions.
5.7.8 Unachievable goals

This sign manifests itself in the form of complaints, pain points, disappointments, odd behaviors, customization attempts and wishes. Digital photography is an example of disruptive innovation that stems from this type of insight. In film photography, taking a good photograph requires skill, equipment and practice that cost a lot of money. In digital photography, consumers can make mistakes, take as many pictures as they want and choose the best photographs without spending much money or waiting for the film to be developed. To develop disruptive innovation based on this type of insight, real advancement in technology, or a highly creative application of existing technology, is usually required.
5.7.9 Changes in moods and emotions

Emotional clues are everywhere, but many researchers miss them because the clues are subtle, mostly behavioral in nature and context dependent. Product categories like fragrances elicit strong emotional cues. An example of successful commercial innovation is Unilever's AXE fragrance brand, which covers a wide range of personal care products. High school and college males are the target consumers, based on the insight that sexual desire is a motivating factor for young men. The best place to find emotional clues is the actual situations in which consumers use products, by studying the changes in moods people have before, during and after product use. Behavioral signs of changes in emotion when observing or interviewing consumers include a change in tone of voice, talking about old experiences, the length of time spent discussing a particular product and experience, increased engagement in a task, repeatedly touching the product, and others. Clorox®'s toilet cleaning wand is a good example for this case, as it was developed through observing consumer habits and behaviors (Zink, 2008). A cross-functional team observed how consumers clean their own toilets. The team already knew that consumers were highly satisfied with the results of their current routine (Figure 5.2).
[Figure 5.2 Comparison of the toilet cleaning routine before and after the invention of the Clorox® ToiletWand™ System: toilet bowl cleaning behavior and the emotional states that the Clorox team identified through observation of consumers' facial expressions, habits and behavior in the actual environment. Old sequence: flush the toilet; put the cleaning agent in the toilet bowl; wait for a while (per directions on the package); pick up a toilet bowl scrubber; scrub the bowl; flush the bowl; quickly lift the scrubber out of the bowl (icky moment); shake excess water out of the scrubber (icky moment); put away the scrubber quickly (icky moment). New sequence: flush the toilet; pick up the Clorox® toilet wand; scrub the bowl; discard the cleaning pad; flush the bowl; put away the clean handle.]
The team noticed an important behavior in this routine: every consumer made faces when they tried to put away the toilet scrubber. They named that behavior the "icky moment". The icky moment is very emotional: consumers want a clean toilet, but they do not really know how to clean a toilet scrubber beyond rinsing it with clean water in the newly cleaned bowl by flushing. There was always a feeling that the scrubber was dirty and that they did not want to store it, but there was no choice unless they really cleaned the scrubber thoroughly, which is another long process (and an uncommon practice). The team quickly brainstormed and came up with prototype ideas that eliminated the scrubber cleaning and storage steps. The prototypes allowed them to assess an optimal solution for the "icky moment", and the final outcome was the Clorox® ToiletWand™ System with a disposable sponge/scrubber. The wand revolutionized the toilet cleaning category as it changed consumers' behavior in a drastic yet seamless way (Figure 5.2). Another similar innovation, P&G's Swiffer, can be analyzed in the same way. "Wow!" and "aha!" moments are special cases of emotional cues to spot. An easy way to observe "wow!" and "aha!" moments is to give consumers opportunities to experience physical prototypes. A "wow!" moment signals superiority of the product delivery for either unmet needs ("Wow, it works that well") or unarticulated needs ("Wow, I didn't even know that it could do that"). "Aha!" moments connote mostly unmet needs ("Aha, now I can do that too"). Presenting prototypes is effective in eliciting "wow!" and "aha!" moments since they arise from spontaneous experiences.
5.8 Summary

The inputs to insights are everywhere, and it is up to Consumer Explorers to spot them. An insight is not a specific story or thing; it is an ability that enables researchers to connect the dots across pools of information and understand consumers better. Keen eyes and creativity are necessary for spotting high potential insights when observing actual consumers experiencing products in their normal environment. Leveraging digital technologies such as portable cameras, cell phones, blogs and social networks brings consumers closer to the research team in unprecedented ways, allowing CEs to inspire and influence their organizations to be empathic towards target consumers by showing proof points where consumers say, "at last someone understands me".
References

Bowman, J. (2008) "Split Personalities". Research World: The Magazine for Marketing Intelligence and Decision Making, 2 (October), 48–51.
Fortini-Campbell, L. (2001) Hitting the Sweet Spot. Chicago: The Copy Workshop.
Kelly, T. and Littman, J. (2005) The Ten Faces of Innovation: IDEO's Strategies for Defeating the Devil's Advocate and Driving Creativity Throughout Your Organization. New York: Doubleday.
MARS (2011) "M&M'S® Pretzel Chocolate Candies Awarded Product of the Year". Mars Incorporated, 7 July 2011, http://www.mars.com/global/news-and-media/press-releases/news-releases.aspx?SiteId=94&Id=2811 (accessed 6 February 2011).
The Datamonitor Group (2010) "Successes and Failures in Consumer Packaged Goods Innovation and Marketing". http://www.datamonitor.com/store/Product/successes_and_failures_in_consumer_packaged_goods_innovation_and_marketing?productid=CM00047-006 (accessed 14 February 2011).
Zink, L. (2008) "Sensory Science & Market Research: A Marriage of the Mind, Heart and Soul". Presented at the 7th Pangborn Sensory Science Symposium, Minneapolis, MN.
Part II
Research Tools of the Consumer Explorer
[Part-opening roadmap figure: the book's twelve chapters, with Chapter 6 (Tools for Up-Front Research on Consumer Triggers and Barriers) highlighted.]
“Listen for meaning, not necessarily the words.”
Bill Moyers

This chapter starts the "how to" discussion of research tools, both qualitative and quantitative, that, when combined with the knowledge gained from the other chapters, identify new opportunities stemming from consumers' unmet needs and wants. It equips the Consumer Explorer with research methodologies and approaches that are cutting-edge, reliable and proven to provide high-value insights.
Chapter 6
Tools for Up-Front Research on Consumer Triggers and Barriers

6.1 Understanding Consumer Languages
Kannapon Lopetcharat

Key learnings
✓ Intrinsic and extrinsic motivations
✓ How to select methods to understand consumers' perception of products
✓ Interview techniques
✓ Analysis of qualitative data

6.1.1 Consumers do not understand these technical words, so what should we say about our new products?

A product is valuable to consumers because it provides benefits or means that fulfill their needs and expectations. Therefore, these benefits constitute a product, differentiate a product from a set of close alternatives and define a product category. These abilities used to belong to companies and their marketing; however, in the 21st century, these abilities belong to consumers due to the rise in consumerism and social networking.
It is very important to know how much consumers value a product's benefits and to identify which benefits are more valuable than others. Two major factors determine a benefit's value: intrinsic motivation and extrinsic motivation (Antonides, 1991). Intrinsic motivation is a consumer's perception of the extent to which a product characteristic or attribute is believed to cause a perceived product benefit; consequently, it is the reason why consumers are satisfied with and eventually love a product. In contrast, extrinsic motivation is a consumer's perception of the availability and acceptability of the characteristic in the market. Therefore, intrinsic motivation is what consumers demand from products, and extrinsic motivation is what the market can supply to consumers. In this section, we introduce a set of techniques that will allow you to identify consumers' intrinsic motivations (values, benefits and attitudes) toward products, to understand how consumers differentiate new products from other products in the market (extrinsic motivation), and to translate product characteristics into consumer benefits.
6.1.2 How to select a method

Many techniques can be used to understand consumers' perceptions of products. It can be a simple product test with a specifically designed questionnaire to understand consumers' choices and motivations. It can be a carefully designed experiment that extracts underlying important product characteristics, such as conjoint analysis (a detailed discussion can be found in Chapter 7.2). It can be a deliberately designed series of "why" questions in the laddering technique, or the use of a carefully crafted interview guide in conjunction with a skillful moderator in a focus group interview (Bech-Larsen and Nielsen, 1999). However, each technique has its own limitations and optimal use. The following discussion centers on a few techniques that have been proven in practice to provide consumer insights effectively.

The expectancy-value theory states that the best predictor of consumer behavior is the belief or the most important association that the consumer has about his/her attitude toward an object (Ajzen and Fishbein, 1980). Based on this framework, the outcome of any technique depends on three factors: (1) the source of information, (2) the task format and (3) the use of stimuli in the task. For example, in word association, a free elicitation technique, the words that first come to the consumer's mind are the associations, benefits and product characteristics that should be most relevant to consumer behavior (e.g. choice, purchase). The consumers articulate the information to the interviewer as quickly as they can (task format) from their own memories (source of information), which depends greatly on familiarity with the product of interest (stimulus). Table 6.1.1 shows these three factors for each recommended technique: free elicitation or the Zaltman metaphor elicitation technique (ZMET), laddering interview, Kelly's repertory grid and flash profiling. The source of information differs the most, while the other two factors, stimulus use and task format, are more similar among the techniques. However, stimulus use and task format have equal or more influence on
outcomes and the applicability of the outcomes from each method than the source of information does, as the stimulus and the format influence the information source as well. Table 6.1.1 is organized based on the source of information used in each technique, from "need driven" to "product driven". The continuum between need-driven and product-driven methods is not distinct from one method to the next. The Zaltman metaphor elicitation technique, free elicitation and laddering interview require only a concept of a product class, not the actual product; meanwhile, Kelly's repertory grid and flash profiling require actual products to be evaluated. The involvement of actual products and familiarity with the product or product concept influence the level of consumers' responses, from strictly differentiating product characteristics obtained from flash profiling to abstract responses such as benefits, values and attitudes from ZMET. These outcomes impact the applicability in new product development (NPD), as shown in Table 6.1.2. To select which method(s) to use, the Consumer Explorer (CE) must know how he/she wants to use the information, as there is no one method that gives all the answers. To get a more comprehensive picture of your consumers, the author recommends using both classes: need-driven and product-driven methods.
Table 6.1.1 Source of information, stimulus use and task format influence the outcomes and the applicability of the outcomes for each technique. (The source of information runs on a continuum from product driven, at the top of the table, to need driven, at the bottom.)

Technique | Source of information | Stimulus use | Task format
Flash profiling | Product driven | Multiple familiar products | Self-articulated and unstructured individual evaluation
Kelly's repertory grid | ↓ | Multiple familiar products | Self-articulated and unstructured one-on-one interview
Laddering interview | ↓ | Multiple familiar products/product class | Self-articulated and unstructured one-on-one interview
Free elicitation or ZMET | Need driven | Single familiar product/product class, or no product evaluation | Self-articulated and unstructured one-on-one interview
Table 6.1.2 Expected outcomes and applicability of the outcomes from the techniques in NPD.

Technique | Expected outcome | Target activity in NPD
Flash profiling | Differentiating product characteristics | Product development
Kelly's repertory grid; Laddering interview; Free elicitation or ZMET | Values, beliefs, attitudes | Marketing
6.1.3 Free elicitation and Zaltman metaphor elicitation technique

Free elicitation is a general class of techniques that is quick and convenient for exploring consumers' perceptions. Free elicitation includes many well-known techniques such as word association, direct elicitation and others. In a nutshell, these techniques are based on the expectancy-value theory (Ajzen and Fishbein, 1980) and the memory schemata concept (Collins and Loftus, 1975). The expectancy-value theory states that the most important association that a consumer has about his/her attitude toward an object is the best predictor of his/her behavior; hence, it will be revealed first when we ask the consumer. The memory schemata concept holds that a consumer has pre-conceived knowledge (memory) that is formed through experience in life, and that this knowledge is organized in certain structures (schemata) whose contents are inter-related. Therefore, free elicitation techniques rely on a consumer's memory about a product or product category (familiarity), a consumer's ability to recall the memory during the interview, and the belief that the most important information is contained in the first responses. Regardless of the name (free elicitation, word association, direct elicitation, etc.), the techniques share the same general seven steps (Figure 6.1.1).

The elicitation techniques require consumers to react quickly to the stimuli (i.e. a product or an idea of a product category). Therefore, it is a stimulus-dependent, or product-driven, technique. If the stimulus is an actual product or a very specific product class, the outcomes will lean toward product attributes and some benefits that consumers think the product delivers (more information about extrinsic product differences). If the stimulus is an idea of a product class, the outcomes will lean toward benefits, values and attitudes (more information about intrinsic motivation).
1. Define the objective of the study.
2. Define the product/product class of interest.
3. Prepare how to explain this to your target consumers.
4. Select target consumers: you will need consumers who are familiar with the product of interest.
5. Conduct a one-on-one interview: ask them to provide the first thoughts or images that come to mind.
6. Collect the data: a computerized system will be useful.
7. Group the responses into themes: use at least three different persons and compare their results.

Figure 6.1.1 Seven steps to conducting a successful free elicitation.
In general, 20–25 consumers per market are sufficient to provide qualitative information, and many related products or product concepts can be tested with the same group of consumers. However, if actual physical products are used, liking of the products, in conjunction with familiarity with the products, will need to be considered. Some enhancements can be applied to the free elicitation techniques to make them more discriminating, to enrich the qualitative information or even to quantify the information generated from the techniques. Below are a few examples:

● Adding a negative/positive rating for each association (e.g. word or image). Consumers can rate each word after writing it down, from 1 (very negative/bad) to 5 (very positive/good). The rating provides the direction of the association and is usually analyzed after each response is grouped across all respondents.
● Using the Zaltman metaphor elicitation technique (ZMET). Consumers select images, from a set provided to them, that they think reflect what they feel toward the stimulus in question; ZMET is therefore a projective technique. Usually, once the images are selected, the consumers are interviewed to explain why they chose the images, using a regular free elicitation. Special care must be taken to select and prepare the set of images for a study: relevancy to the products and objectives, and the quality of the images, can influence the outcomes. The outcomes of ZMET are rich in high-level perceptions about the stimulus, such as benefits, values and attitudes, and less so in product attributes.
To select significant elicited associations (words or images), a cut-off of 10 is usually used for a total of 400 or more occasions (e.g. 16 products × 25 consumers = 400 occasions, or 1 product × 400 consumers = 400 occasions) (Guerrero et al., 2010; Roininen et al., 2006). Free elicitation is an efficient and rapid method to gain information about consumer perceptions of products. It allows researchers to identify high mental level perceptions (e.g. affective responses, benefits, values and attitudes), the less conscious aspects of consumers' minds, better than methods that rely on direct questioning, extensive evaluation of products or highly structured interview techniques (Szalay and Deese, 1978). With good execution, free elicitation can provide a fast and convenient tool to identify and understand the motives behind consumers' behaviors; without good preparation, however, it can produce shallow and confusing outcomes that are difficult to interpret. If your research goal is to identify links between attributes, consequences, values, benefits and attitudes, free elicitation is not appropriate, and other techniques that focus on identifying these links (e.g. laddering interview, qualitative multivariate analysis (QMA)) will be more appropriate.
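As a rough sketch of how the grouping and cut-off steps might be handled once the interviews are done, the snippet below tallies coded associations and keeps only those that clear the cut-off of 10, averaging the optional 1–5 valence rating described above. The theme names and the `responses` data structure are illustrative assumptions, not part of the published method.

```python
from collections import Counter, defaultdict

# One record per elicited association, after coders have grouped raw words
# into themes: (consumer_id, theme, valence rating on the 1-5 scale).
responses = [
    ("c01", "refreshing", 5), ("c01", "artificial", 2),
    ("c02", "refreshing", 4), ("c03", "expensive", 1),
    # ... in practice, hundreds of tuples across consumers and products
]

CUT_OFF = 10  # keep themes elicited at least 10 times (for 400+ total occasions)

counts = Counter(theme for _, theme, _ in responses)
ratings = defaultdict(list)
for _, theme, rating in responses:
    ratings[theme].append(rating)

significant = {theme: n for theme, n in counts.items() if n >= CUT_OFF}
for theme, n in sorted(significant.items(), key=lambda kv: -kv[1]):
    mean_valence = sum(ratings[theme]) / len(ratings[theme])
    print(f"{theme}: elicited {n} times, mean valence {mean_valence:.1f}")
```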
6.1.4 Laddering interview

If the question of interest is to identify the links between high-level perceptions about products (e.g. attitudes, values and benefits) and product attributes, the laddering interview is an appropriate technique for the job.
Table 6.1.3 Guideline to select laddering interview method (the two right-hand columns show the consumer involvement level).

Consumer's familiarity with stimulus | Low involvement | High involvement
Low knowledge or unfamiliar | Soft laddering | Soft laddering
Medium knowledge or somewhat familiar | Hard laddering | Hard laddering
High knowledge or very familiar | Soft laddering | Soft laddering
Based on the means-ends chain (MEC) theory (Gutman, 1982), the laddering interview is designed to discover the links between concrete product attributes and intrinsic motivations (e.g. perceived benefits, values and attitudes) and to explain why one attribute is more important than others. The MEC theory states that the strength and number of cognitive links between a product attribute and its related consequences (perceived benefits and attitudes toward the product) and personal values determine the attribute's importance to consumers' decisions or behavior. Therefore, a perceived benefit (an abstract attribute) that links directly to the values and expectations of consumers will have more impact on final attitudes toward a product, and consequently on the consumer's choice, than physical product attributes that are mediated by an abstract attribute.

There are two general types of laddering interviews: the soft laddering interview and the hard laddering interview. Two factors influence the selection between them: (1) familiarity with the products and (2) the level of consumer involvement with the products (Grunert and Grunert, 1995). Soft laddering is appropriate when either factor is extreme (very low or very high); hard laddering is appropriate for a medium level of either factor (Table 6.1.3). When there is no prior knowledge about the cognitive constructs (links between product attributes, consequences and values) of a product category, soft laddering is recommended. Hard laddering is recommended when there is substantial knowledge about the product of interest.

The steps to conduct soft laddering interviews differ from those of hard laddering interviews. Let's start with the soft laddering interview steps (Figure 6.1.2). The first four steps are common to both soft and hard laddering interviews. Because the soft laddering interview is an "unstructured interview", steps S5 to S8 take some time to finish; this is one of the limitations of soft laddering that we discuss later in this section.

The hard laddering interview begins with the same first four steps as the soft laddering interview and is a "structured interview" that then follows the steps shown in Figure 6.1.3. To conduct a hard laddering interview, CEs must have prior knowledge about the cognitive constructs of the product of interest in order to create lists of words that represent all the cognitive levels (product attributes, consequences and values). Because target consumers are presented with a "defined" list of words for each level, there is a possibility that CEs do not capture everything. We will discuss these limitations later in this section.
1. Define the objective of the study: do you want to know about consumers' preference or perceived product differences? This will dictate the activity of step 5.
2. Define the products/product classes/product concepts of interest.
3. Prepare how to present the actual products/product classes/product concepts (generally called stimuli) to your target consumers.
4. Select target consumers: you will need consumers who are familiar with the product of interest.
S5. Elicit distinctions between products from consumers: preference or similarity.
S6. Ask why he/she selected the first choice (most liked product or a group of products) compared to other products.
S7. Conduct the laddering interview by asking "why is this reason important for you?" Keep asking this question until the consumer cannot answer any further.
S8. Continue steps 6 and 7 until you exhaust all the products or groups.

Figure 6.1.2 Steps to conduct a soft laddering interview. The letter "S" in front of the numbers indicates that these steps are unique to the soft laddering interview.
H5. Collect knowledge about the cognitive constructs of the products and conduct preliminary work such as focus group interviews, etc.
H6. Define the general cognitive constructs; select product attributes, consequences (functional and psychological) and values (instrumental and terminal); ensure that the selected words are understandable by the target consumers.
H7. Present the consumers with a product attributes list and ask them to choose at least one and up to three attributes that are important to their preference or that differentiate the products.
H8. Present the consumers with the words they selected in step 7 and ask "why is it important/differentiating for you?" for each word selected, providing them with the list of words from the consequence level to select the reasons from. The consumers must select at least one and up to three words.
H9. Present the consumers with the words they selected in step 8 and ask "why is it important/differentiating for you?" for each word selected, providing them with the list of words from the value level to select the reasons from. The consumers must select at least one and up to three words.

Figure 6.1.3 Steps to conduct a hard laddering interview. The letter "H" in front of the numbers indicates that these steps are unique to the hard laddering interview.
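Steps H7–H9 lend themselves to a computerized questionnaire. Below is a minimal sketch of how such a script might capture and check one consumer's responses; the word lists, the nested `ladder` structure and the `validate` helper are assumptions made for illustration, with only the "at least one and up to three selections per level" rule taken from the procedure above.

```python
# Pre-defined word lists from step H6 (illustrative examples only).
ATTRIBUTES   = ["low price", "natural ingredients", "strong scent"]
CONSEQUENCES = ["saves money", "feels safe for my family", "feels clean longer"]
VALUES       = ["security", "being a good parent", "self-confidence"]

def validate(selection, allowed, min_n=1, max_n=3):
    """Enforce the 'at least one and up to three' selections required in steps H7-H9."""
    if not (min_n <= len(selection) <= max_n) or not set(selection) <= set(allowed):
        raise ValueError(f"select {min_n}-{max_n} items from the list shown")

# One consumer's answers: each chosen attribute maps to the consequences chosen
# for it (step H8), and each consequence maps to the values chosen for it (step H9).
ladder = {
    "natural ingredients": {"feels safe for my family": ["being a good parent", "security"]},
    "low price": {"saves money": ["security"]},
}

validate(list(ladder), ATTRIBUTES)                      # step H7
for attribute, consequences in ladder.items():
    validate(list(consequences), CONSEQUENCES)          # step H8
    for consequence, values in consequences.items():
        validate(values, VALUES)                        # step H9
        for value in values:
            print(f"{attribute} -> {consequence} -> {value}")   # one completed ladder
```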
Analyzing the results from soft laddering and hard laddering interviews requires extensive coding and categorizing, especially the results from soft laddering interviews. Figure 6.1.4 shows the eight general steps necessary to analyze and present the data from laddering interviews.

Step 1 Reviewing the data begins during the interview, because the intent and meaning of every word generated by consumers should be confirmed by the consumers. Showing the data back to consumers at the end of the interview sessions is recommended to make sure that all the elements and the links are correctly recorded.
1. Thorough review of the verbatim notes of the interview to identify the elements that represent the concept expressed by each person.
2. Content analysis of the elements (each response from each consumer) using the following level keys: (a) product attributes: concrete and abstract; (b) consequences: functional and psychological; and (c) values: instrumental and terminal.
S3. Classify all the elements into several levels according to the data.
H3. Combine the elements within the same predefined levels.
4. Within each level, sort the elements into different themes.
5. Construct sequences by assigning "links" to the words between cognitive levels for each product from each consumer.
6. Create a structural implication matrix (SIM) by summing the "links" from each theme at one level to the next level (e.g. a product attribute → a functional consequence, etc.).
7. Simplify the SIM by selecting only "important" links using a cut-off rule (e.g. at least three or four consumers indicate the link, or the top four most frequently used links connecting any two levels, etc.).
8. Create the hierarchical value map.

Figure 6.1.4 Steps to analyze the results from a laddering interview. The letters "S" and "H" in front of the numbers indicate that these steps are unique to the soft laddering interview and the hard laddering interview, respectively.
Step 2 Content analysis requires two to three researchers with experience in coding qualitative information, but they should not be intimately involved in the study. The information needs to be transcribed and transformed into elements (the words or phrases that each consumer uses) and then arranged from the lowest level of abstraction (concrete product attributes) to the highest level of abstraction (terminal values). Figure 6.1.5 is a typical coding sheet for this step. Each researcher must perform these tasks separately. Once they have finished, the results from each decoder are compared. Any discrepancies among the researchers are usually resolved by discussion and consensus agreement. The discussion involves the assignment of the words to one of the three levels of cognitive structure (product attributes, consequences and values), with two sub-levels within each major level (Figure 6.1.5).

Step 3 (S3 and H3) It is worth noting that this step marks the difference between soft laddering and hard laddering interviews. In a soft laddering interview (step S3), the decoders assign each word to a cognitive level according to their understanding of the results. In a hard laddering interview (step H3), however, the cognitive levels of the words were determined prior to the interview; therefore, the activity is combining the words within a pre-assigned level.
[Figure 6.1.5 A typical coding sheet with results after a content analysis; the results are generated from one consumer for one product by one decoder in a study. The sheet records the consumer ID, date, decoder, project and product, and assigns each elicited word to a cognitive level (product attribute: physical or abstract; consequence: functional or psychological; value: transitional or terminal) within each sequence. In this example, sequence 1 = word 1 (physical attribute) → word 2 (functional consequence) → word 3 (transitional value); sequence 2 = word 6 (psychological consequence) → word 7 (terminal value); sequence 3 = word 8 (physical attribute) → word 6 (psychological consequence); and sequence 4 = word 9 (abstract attribute) → word 3 (transitional value).]
Step 4 This step is the same in both the soft laddering interview and the hard laddering interview. The coders sort the words into different themes within each sub-level (six sub-levels in total).

Step 5 The coders then continue by assigning "links" to the words between the three cognitive levels to construct the sequences (see the right-hand column in Figure 6.1.5) for a product from each consumer. For example, sequence 1 in Figure 6.1.5 comprises word 1 (physical product attribute) → word 2 (functional consequence) → word 3 (transitional value). The same word can be used as many times as dictated by consumers; for example, word 3 (a transitional value) is the end of both sequence 1 and sequence 4, and word 6 (a psychological consequence) is the beginning of sequence 2 and the end of sequence 3.

Step 6 Creating a structural implication matrix (SIM) is a daunting task, especially from soft laddering interview data, and it is the original reason for inventing the hard laddering interview, which aids data collection and SIM creation. Figure 6.1.6 is an example of a SIM based on the results in Figure 6.1.5, assuming that the nine words in Figure 6.1.5 are classified into nine different themes. For example:

Sequence 1: word 1 → word 2 → word 3 translates to theme 1 → theme 2 → theme 3, which is represented in the SIM (Figure 6.1.6) as a "1" between row T1 and column T2 and a "1" between row T2 and column T3.
Sequence 2: word 6 → word 7 translates to theme 6 → theme 7, which is represented in the SIM (Figure 6.1.6) as a "1" between row T6 and column T7.
Sequence 3: word 8 → word 6 translates to theme 8 → theme 6, which is represented in the SIM (Figure 6.1.6) as a "1" between row T8 and column T6.
         To:  T1  T2  T3  T4  T5  T6  T7  T8  T9
From T1:       0   1   0   0   0   0   0   0   0
     T2:       0   0   1   0   0   0   0   0   0
     T3:       0   0   0   0   0   0   0   0   0
     T4:       0   0   0   0   0   0   0   0   0
     T5:       0   0   0   0   0   0   0   0   0
     T6:       0   0   0   0   0   0   1   0   0
     T7:       0   0   0   0   0   0   0   0   0
     T8:       0   0   0   0   0   1   0   0   0
     T9:       0   0   1   0   0   0   0   0   0

Figure 6.1.6 A SIM from one consumer for one product based on the data in Figure 6.1.5 (assuming each word is classified into a theme after discussion and consensus agreement). The matrix is symmetrical in structure but not in the data, as it contains consequential data (from theme X to theme Y). "T" stands for "theme".
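To make step 6 concrete, the sketch below builds one consumer's SIM from coded sequences like those in Figure 6.1.5, after each word has been mapped to its theme number (word 1 → theme 1, and so on). The list-of-lists representation and the use of NumPy are illustrative assumptions rather than part of the published procedure.

```python
import numpy as np

N_THEMES = 9

# Coded sequences for one consumer and one product, expressed as theme numbers
# (sequence 1: theme 1 -> theme 2 -> theme 3, etc., following Figures 6.1.5-6.1.6).
sequences = [
    [1, 2, 3],   # word 1 -> word 2 -> word 3
    [6, 7],      # word 6 -> word 7
    [8, 6],      # word 8 -> word 6
    [9, 3],      # word 9 -> word 3
]

sim = np.zeros((N_THEMES, N_THEMES), dtype=int)
for seq in sequences:
    for src, dst in zip(seq, seq[1:]):   # every consecutive pair of themes is one link
        sim[src - 1, dst - 1] += 1       # "from" theme = row, "to" theme = column

print(sim)   # reproduces the matrix shown in Figure 6.1.6
```

A final SIM for a target group is then obtained by summing these individual matrices over all consumers.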
Once the SIMs from all target consumers are constructed, a final SIM is created by summing the frequencies in each cell. This final SIM is the data used to create the final hierarchical value map (HVM) for a product or target consumer group.

Step 7 Simplifying the final SIM is done by applying one of the cut-off rules below.

Rule 1: A cut-off level of 3 or 4. A cut-off of 3 or 4 means that only links used at least three or at least four times by consumers are kept. This rule is appropriate when the number of consumers within a target group is more than 30 (Lind, 2007).

Rule 2: The top-4 rule. This rule was proposed by Russell et al. (2004). The top-4 rule keeps only the four most frequently used links that connect two levels (e.g. product attributes → consequences, or physical product attributes → abstract product attributes, etc.). This rule is easy to apply to hard laddering interview data but is quite complicated with soft laddering data or with a big data set. Many network programs, such as UCINET (Borgatti et al., 1999) or the MecAnalyst software (Skymax Inc., Italy), can handle the SIM and provide many tools to aid the analysis.

Step 8 Creating a hierarchical value map. After a cut-off level is decided and applied to the SIMs, the final HVMs can be created. Figure 6.1.7 is a simplified version of a typical HVM based on the data in Figure 6.1.6 with some additional information. If the total frequency is 100 and an absolute cut-off level of 3 is chosen, then every link in Figure 6.1.7 will be kept. If the top-4 rule is chosen, two links will be deleted from the data set.
[Figure 6.1.7 An example of an HVM from a laddering interview based on the data in Figure 6.1.6 with some additional information. The map connects physical product attributes (themes 1, 4 and 8) and an abstract product attribute (theme 9), through functional and psychological consequences (themes 2, 5 and 6), to transitional and terminal values (themes 3, 7 and 10); each link is labeled with its percentage of the total link frequency. Links in the same level are noted with the same kind of geometric shapes; "X" denotes links that are not in the top-4 of their own level.]
The first link to be deleted is the link that connects theme 1 to theme 9, with three percent of the total frequency of the first cognitive level (denoted by rectangles); the second link to be deleted is the link that connects theme 2 to theme 3, with five percent of the total frequency of the second cognitive level (denoted by triangles). The absolute number rule (3 or 4) is appropriate for the hard laddering interview because a pre-determined HVM is applied to every target consumer. The top-4 rule is appropriate for the soft laddering interview or for highly complex HVMs because the rule normalizes the selection criterion for all target groups by choosing only the most important four links for each association level.
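The sketch below illustrates how step 7 might be automated once a final SIM has been summed across consumers, applying both cut-off rules described above. The stand-in random matrix, the theme-to-level mapping and the function names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
final_sim = rng.integers(0, 8, size=(9, 9))   # stand-in for a summed SIM over ~30 consumers

def links_by_cutoff(sim, cutoff=3):
    """Rule 1: keep links mentioned by at least `cutoff` consumers."""
    rows, cols = np.nonzero(sim >= cutoff)
    return [(r + 1, c + 1, int(sim[r, c])) for r, c in zip(rows, cols)]

def links_top4(sim, level_of):
    """Rule 2: keep the four most frequent links between each pair of adjacent levels."""
    kept = []
    for lower, upper in [(1, 2), (2, 3)]:      # attributes -> consequences, consequences -> values
        candidates = [(r + 1, c + 1, int(sim[r, c]))
                      for r in range(sim.shape[0]) for c in range(sim.shape[1])
                      if level_of[r + 1] == lower and level_of[c + 1] == upper and sim[r, c] > 0]
        kept += sorted(candidates, key=lambda link: -link[2])[:4]
    return kept

# Illustrative assignment of the nine themes to cognitive levels
# (1 = product attribute, 2 = consequence, 3 = value).
level_of = {1: 1, 4: 1, 8: 1, 9: 1, 2: 2, 5: 2, 6: 2, 3: 3, 7: 3}

print("Cut-off of 3:", links_by_cutoff(final_sim, cutoff=3))
print("Top-4 rule: ", links_top4(final_sim, level_of))
```

The surviving (from-theme, to-theme, frequency) triples are the links drawn in the HVM.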
6.1.5 Potential problems when applying laddering interview in practice

There are several potential issues when applying the laddering interview in practice. The four most commonly encountered problems are:
Beckley_c06.indd 78
It is time-consuming and expensive Consumers make up their answers Biases from researchers Oversimplification of the analyses and outcomes.
2/6/2012 12:26:39 PM
Table 6.1.4 Problems and solutions for the "time-consuming and expensive" issue in applying the laddering interview in practice.

Potential problems:
1. Requires highly skilled interviewers
2. Will be too expensive for a large-scale study
3. Takes too long for a large-scale study
4. Consumers lose interest because the interview takes too long

Potential solutions:
(a) Develop a database of skilled interviewers you can rely on and alternate them from project to project
(b) Use a computerized/on-line system to capture the interview
(c) Depending on research objectives, adapting the hard laddering interview to a group format may save some time (see Chapter 6.3: QMA)
(d) Explain clearly to consumers (i) the procedure and (ii) the length of time needed for the interview
(e) Respect consumers' limitations and never force them to answer any questions that they do not want to
Because the laddering interview is an unstructured one-on-one interview, scheduling time for a project is not easy: Consumer Explorers do not want to rush the interview just to fit the deadline and miss important insights. This is the first issue. It is important to have highly skilled interviewers and consistency, especially for multiple-site projects and international studies. Table 6.1.4 shows potential problems and potential solutions for this time and cost issue. In addition to finding skilled interviewers and managing the total time of a project, the interview can exhaust consumers if the session lasts too long. Making sure that consumers understand the procedure and the time needed for the interview is the best way to avoid this problem. If consumers do not want to continue, it is better to let them go than to coerce any responses from them.

The second common issue, "consumers make up their answers", has many causes (e.g. consumers want to please the interviewer, or they just want to get out as quickly as possible). The key to a successful interview is preventing this scenario from happening and knowing how to notice it when it does. Table 6.1.5 shows recommended potential solutions to this problem.

The third issue comes from the researchers themselves, as personal bias and expectation can blind researchers to new insights that do not fit their expectations. Even though it is human nature to see what we expect to see, preventing this bias from impacting the study is critical. Table 6.1.6 shows several potential solutions to curb this issue in practice.

The fourth issue, "oversimplification of the results", is also rooted in the researchers and is due to expectation and to the complexity of data analysis for the laddering interview.
Table 6.1.5 Problems and solutions for the “consumers made up their answers” issue in applying the laddering interview in practice.

Potential problems:
1. Consumers try to please the interviewer
2. Consumers answer the question for the sake of answering the question
3. Consumers do not want to reveal their personal thoughts when the interview becomes too personal (happens at high abstraction levels)

Potential solutions:
(a) Use highly skilled interviewers with specific training for the laddering interview
(b) Check for consistency of the answers from time to time during the interview
(c) Do not limit the answers to only words or phrases. Using storytelling or metaphor can enhance clarity of response
(d) Use “pausing” to redirect the interview
(e) Try “third person” probing to allow consumers to project their thoughts
(f) Never interrupt the consumers. Ask a follow-up question instead
(g) Make sure that consumers feel comfortable about their confidentiality and anonymity
Table 6.1.6 Problems and solutions for the “biases from researchers” issue in applying the laddering interview in practice.

Potential problems:
1. Personal biases influence the interview process and data analysis
2. Researchers make the content analysis easier by oversimplifying the reality

Potential solutions:
(a) Have peers audit the interpretation of the results in step 5 (decoding the interview); this will improve the quality of the findings
(b) Accept that you have a personal value system that will impact judgment and interpretation of the results, and try to be as neutral as possible
(c) Take advantage of the fact that consumers are there: let consumers validate your interpretation and ensure correct representation of their thoughts
Turning the interview data into HVMs and insights is a very laborious task, especially for soft laddering interviews, and many researchers are intimidated by it. Table 6.1.7 shows a few solutions for these problems. The laddering interview is a powerful tool to uncover consumers’ insights, especially the associations among product characteristics, consequences and the values that are perceived by consumers.
Table 6.1.7 Problems and solutions for the “oversimplifying the analyses and outcomes” issue in applying the laddering interview in practice.

Potential problems:
1. Interesting insights are missed because they do not agree with the preconceived patterns imposed by the technique
2. The questions are usually overtly positive attributes

Potential solutions:
(a) Use more than one data collection method (e.g. computer aided, paper-and-pencil)
(b) Compare the findings with the existing literature and evaluate them thoroughly and critically
(c) Balance negative and positive questions. Converging information in conclusions validates the results; discrepancy in conclusions provides more insights or indicates where you should conduct more research
Hard laddering and soft laddering interviews give consumer researchers some flexibility to cope with different objectives. The ability of the laddering interview to uncover different levels of consumers’ product perception makes it suitable for a wide range of projects in NPD.
6.1.6
Kelly’s repertory grid and flash profiling

This section discusses two methods, Kelly’s repertory grid and flash profiling, as both methods share the following characteristics: (1) They are product-driven methods, as both require multiple comparisons of products. (2) Both require target consumers to be familiar with the products of interest. (3) Physical product characteristics are the results of both methods; do not expect much about benefits or values from these methods, as their purpose is to describe products. (4) Both methods use multiple-comparison tasks to elicit the insights: Kelly’s repertory grid uses triadic sorting and flash profiling uses a ranking task.
6.1.6.1
Kelly’s repertory grid technique
Kelly’s repertory grid (also known as Kelly’s triadic sorting or, in short, Kelly’s grid) was developed by George Kelly in 1955, based on his theory of personal constructs, to map cognitive structures (Kelly, 1955). Applying the method to product research, by using actual products as stimuli and asking consumers to describe what makes them different from each other, uncovers the underlying perceptual constructs that describe the differences (product attributes). The
Kelly’s grid procedure is designed to maximize the differences among products; hence, the method is not appropriate for detecting subtle differences or preference. Figure 6.1.8 describes the general steps to conduct a successful Kelly’s grid. The main objective of Kelly’s grid is to uncover product attributes that differentiate products in a choice set. Therefore, step 1 is very important because the outcomes can be generalized only within the chosen product category. In step 2, stimuli preparation and presentation must be done with care and consideration. Stimuli presentation can be a complex issue because of two commonly forgotten factors: (1) carry-over effect and (2) use context. Products with a long carry-over effect (e.g. chili sauce, butter, lotion, facial cream) need more recovery time, and the recovery period can impact consumers’ memory. In addition, cleansing agents or procedures (e.g. water, cooking, washing) alter the physiology of the sensory modalities, especially the tongue (for food) and the skin (for personal care products), so that they differ from their natural state under real consumption. This memory loss and modality alteration impact the interpretation and usefulness of the results. In step 3, the target consumers must be familiar with the products of interest. If consumers are not familiar with the products, they will not be able to describe the product characteristics well, and their results will be unusable because they switch evaluation criteria during use. This contradicts the underlying hypothesis of the method, namely that consumers use their constructs (a set of systematic criteria) in their evaluation. Steps 4–8 describe the actual procedure of triadic elicitation. Warm-up sessions are very useful to ensure that consumers understand the triadic elicitation. Steps 9–10 are the rating tasks, where each consumer will use their own set of attributes to rate the products and any additional products of interest.
1. Define the products/product classes/product concepts of interest.
2. Prepare how to present the actual products/product classes/the product concepts (generally called stimuli) to your target consumers.
3. Select target consumers: you will need consumers who are familiar with the product of interest.
4. For each individual consumer, present the consumer with a group of three products.
5. Ask the consumer to select a product that is the most different from the other two products.
6. Ask the consumer to explain what makes the product selected in step 5 differ from the other two products.
7. Record the descriptions elicited from the consumers.
8. Continue steps 4 to 7 with different sets of three products.
9. For each individual consumer, combine all product attributes elicited and ask the consumer to review the attributes and eliminate any redundant attributes from the list.
10. Ask each individual consumer to evaluate the products and any additional products using the list created by each individual.
11. Analyze the data using generalized procrustes analysis (GPA).
Figure 6.1.8 Eleven general steps to conduct Kelly’s repertory grid elicitation.
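As a rough illustration of the bookkeeping behind steps 4–8, the sketch below draws triads from a product set and leaves a place to record each elicited construct. It is only an assumed helper script: the product names, the number of triads and the recording format are hypothetical and are not part of the method as published.

```python
from itertools import combinations
from random import sample, seed

# Hypothetical product set for one Kelly's grid session.
products = ["Jelly A", "Jelly B", "Jelly C", "Jelly D", "Jelly E", "Jelly F"]

def draw_triads(products, n_triads=8, rng_seed=42):
    """Pick a subset of all possible three-product groups to present (step 4)."""
    seed(rng_seed)
    all_triads = list(combinations(products, 3))
    return sample(all_triads, min(n_triads, len(all_triads)))

# During the interview (steps 5-7) the researcher records, for each triad,
# which product the consumer picked as most different and the construct offered.
elicited = []
for triad in draw_triads(products):
    print("Present:", triad)
    # Example of what one recorded answer might look like:
    # elicited.append({"triad": triad, "odd_one": "Jelly C", "construct": "firmer bite"})
```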
An intensity scale (line or categorical) or a Likert-type scale (agreement scale) is commonly used for the rating task. The intensity scale provides the differences in magnitude of attributes, and the Likert scale provides the level of agreement that the attributes exist in the products. For product development, the author recommends using an intensity scale (e.g. a line scale or a categorical scale with anchors on both ends, from none to very much) over a Likert-type scale, as the intensity scale provides more product-relevant information (the difference in magnitude of attributes) than the Likert scale. Step 11 is the analysis step; we will discuss it in detail after the flash profiling section, as both techniques use the same analysis, generalized procrustes analysis (GPA).
6.1.6.2
Flash profiling
Flash profiling was invented by Professor Jean-Marc Sieffermann in late 2000 (Sieffermann, 2000). The method combines free elicitation (from another method called free-choice profiling) and a multiple-comparison task (ranking) to uncover the discriminating product attributes of a product set. Because of the use of a ranking task, flash profiling has the same limitations as Kelly’s grid, that is, memory loss and vulnerability to carry-over effects. However, there are many remedies to these limitations; the author has applied this method to facial creams, soap bars and many other high carry-over products. Figure 6.1.9 describes the general steps to conduct flash profiling. The difference between flash profiling and Kelly’s grid technique starts at step 4. In flash profiling, each consumer is presented with all the products at once (not a set of three products). This provides consumers with a holistic picture of the product set and helps them to generate highly discriminating attributes.
1. Define the products/product classes/product concepts of interest.
2. Prepare how to present the actual products/product classes/the product concepts (generally called stimuli) to your target consumers.
3. Select target consumers: you will need consumers who are familiar with the product of interest.
4. For each individual consumer, present the consumer with a group of products of interest.
5. Ask the consumer to generate a list of attributes that he/she thinks describes the product set.
6. Ask the consumer to explain the meaning and evaluation procedure of the attributes and combine redundant attributes, when it is possible.
7. Create a customized ballot based on the individual consumer’s attribute list.
8. Present each consumer with the same product set (additional product is permitted as long as the product is not drastically different from the set).
9. Ask the consumers to rank the products based on each attribute on his/her own ballot.
10. Repeat steps 8 to 9 at least one more time to confirm the reproducibility of each consumer.
11. Analyze the data using generalized procrustes analysis (GPA).
Figure 6.1.9 Eleven steps to conducting a successful flash profiling.
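Because every consumer ends up with his or her own ballot, the raw output of flash profiling is a set of individual rank matrices. The sketch below shows one way such data might be arranged before analysis; the consumers, attributes and ranks are hypothetical, and the layout is simply an assumption about what a GPA routine would accept (one products-by-attributes matrix per consumer).

```python
import numpy as np

# Hypothetical flash-profiling ballots: each consumer has his/her own attributes
# (step 7) and ranks every product on each of them (step 9).
products = ["Jelly 1", "Jelly 2", "Jelly 3"]
ballots = {
    "C2": {"refreshing feel": [1, 3, 2], "mouth puckling": [2, 1, 3]},
    "C4": {"grainy during chew": [3, 2, 1], "cooling": [1, 2, 3]},
}

def ballot_to_matrix(ballot):
    """Turn one consumer's {attribute: ranks} ballot into a products x attributes matrix."""
    attributes = list(ballot)
    matrix = np.column_stack([ballot[a] for a in attributes]).astype(float)
    return attributes, matrix

# One configuration (matrix) per consumer: the input expected by GPA in step 11.
configurations = [ballot_to_matrix(b)[1] for b in ballots.values()]
print([c.shape for c in configurations])   # e.g. [(3, 2), (3, 2)]
```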
In step 5, each consumer can spend as much time as he/she needs to evaluate the product set and come up with a list of attributes that describe and differentiate the products from each other. It is important to instruct the consumers to generate “attributes that can discriminate the product set into at least 2–3 groups”. Steps 4 and 5 differentiate flash profiling from Kelly’s grid elicitation. Flash profiling provides attributes that discriminate big differences (more obvious to consumers and most likely to be the first thing that they notice in the market); meanwhile, Kelly’s grid elicitation provides attributes that discriminate smaller differences (the result of the triadic presentation), but these attributes may not be as obvious to consumers as those from flash profiling. However, neither flash profiling nor Kelly’s grid elicitation can discern very small differences between products (or confusable differences). In step 6, the meaning and evaluation procedure that each consumer used to evaluate the sample for each attribute must be recorded. This is very important for the research team, as it will be very helpful in explaining the results. Many consumers use exactly the same words to describe the same products; however, their rank orders are not the same or, in many cases, are opposite from each other. This phenomenon is usually caused by using different procedures in product evaluation. Steps 7 and 8 are the actual product evaluation steps: each consumer will use their own list of attributes to rank the products. Consumers are allowed to re-evaluate the products as many times as they want and to assign the same rank to products that they believe have a similar level of an attribute. In addition, many improvements can be added to these steps to circumvent the memory loss and carry-over effects, especially when many products are ranked. Dairou and Sieffermann (2002) conducted flash profiling with 14 jam samples, and the author has successfully used flash profiling for more than 20 body lotions. The following practices have been used to improve flash profiling; combining these improvements will enhance the ranking process for consumers:
● Use a warm-up procedure that allows consumers to familiarize themselves with the task. The interviewer can also let them first rank the products according to their preference, so that consumers do not feel the need to express their preference later (it is usually the first attribute in the process) and can familiarize themselves with the ranking process.
● Allow the consumer multiple sessions to rank, which helps prevent fatigue and memory loss. The consumer conducts a partial ranking until he/she needs to rest and recover, then comes back and continues the ranking process.
● Provide consumers with enough physical space to arrange products: the physical arrangement of products helps reduce memory loss because of the visual presentation. It eases the ranking process by allowing consumers to place products physically far from or close to each other according to their perceived relative intensity of each attribute.
● Use a rank-rating procedure: the rank-rating procedure allows consumers to rank products and then assign a rating to all products. The rating works as a memory holder for consumers. It helps to avoid unnecessary comparisons between obviously different products for each attribute and works as a reminder for consumers to compare products that are close in intensity for each attribute.
Figure 6.1.10 Loading plot (A) and correlation plot (B) from a GPA analysis of nine jelly samples (axes F1: 70.60% and F2: 19.62%, together explaining 90.21% of the variance; panel B shows the attribute correlations for consumers C2 and C4). These two plots are important outcomes from the GPA as they provide both validation and the insights into consumers’ language for products (in this case jelly).
The application of flash profiling in practice goes beyond just generating consumer language. Its original purpose was to aid language generation in descriptive analysis; however, with the improvements mentioned above, direct application of flash profiling to naive consumers is possible. The information generated from flash profiling, and also from Kelly’s grid method, comes from each individual consumer. Therefore, the words and phrases generated by these methods are quite complex to understand and rich in insights about how consumers communicate their experiences with the products. A special statistical analysis technique called generalized procrustes analysis (GPA) is necessary to make sense of these rich data from flash profiling and Kelly’s grid method.
6.1.6.3 Analysis and interpretation of Kelly’s grid and flash profiling methods

Generalized procrustes analysis (GPA) is a special class of multi-dimensional analyses whose main goal is to find a common multi-dimensional structure (revealing the underlying constructs, concepts or meanings) among many multi-attribute data sets. GPA was first applied to data from free-choice profiling (Williams and Langron, 1984) and has been used extensively ever since for methods that generate this type of data (e.g. Kelly’s grid, free-choice profiling or flash profiling). There are many statistical programs that can run a GPA, such as Sensetool®, XLSTAT®, SAS®, SPSS®, etc. These programs require different data arrangement formats; therefore, the author recommends that any practitioner consult the manual of the program they wish to use. However, the results from different programs are quite similar, and there are two plots that are very important for consumer researchers: the loading plot and the correlation plot (Figure 6.1.10). The loading plot (Figure 6.1.10A) shows the locations of the products in a multi-dimensional space (in this case, two dimensions). It is prudent to inspect the following: (1) the total variance explained and (2) the grouping of products. First, the total variance explained reveals the representativeness of the GPA outcomes. The more variance explained, the more representative the
outcome. Seventy percent or more is a good amount of variance. There is the usual question about the number of dimensions to select; many programs have built-in algorithms to deal with this problem. In conjunction with the amount of total variance explained, inspecting a drastic shift in variance explained from one dimension to the next is also helpful. Usually a shift of 10–15 percent is sufficient to use as a cut-off point for selecting the number of dimensions. In this case, the total variance explained is more than 90 percent using two dimensions, and the third dimension contains only six percent (a shift of 13 percent from dimension 2, which contains 19 percent of the variance). This helps to confirm the two-dimension solution as an appropriate one. There is one caveat about these rules and techniques for selecting the number of dimensions, namely the meaning and the perceived differences. If researchers know that there are certain subtle differences in product attributes that consumers can perceive consistently, but the intensity of those attributes is low, then using only “ranks” will help to avoid losing those differences, even though researchers may have collected the data with a rank-rating or rating technique. This is an advantage of flash profiling over Kelly’s grid method.

Second is the grouping of the products. It is prudent for researchers to expect certain products to fall in the same or in different groups, and visual inspection of the loading plot (Figure 6.1.10A) helps to confirm that. If the results do not align with the expectation, then there is a problem with the results. In Figure 6.1.10A, the author knew that there were three distinct groups of products within this jelly category; the author intentionally included these three groups and expected the results to show a distinct grouping, as in Figure 6.1.10A. This validates the performance of the consumers. If unexpected outcomes are discovered, it is highly recommended that the data are inspected carefully, because there may be mistakes or very important insights hidden in the data. Checking for consumers who are very different from the others is the first step, as there may be segmentation among the consumers.

After inspecting the loading plot and validating the outcomes to check that there is no problem with the data, studying the correlation plot will reveal the language that consumers use to describe their experience with the products. Figure 6.1.10B shows only two consumers (C2 and C4) to simplify the plot. In this case, we can see that the two consumers are quite different in terms of what they use to describe the nine jellies. Consumer 4 uses texture (fiber during chewing, grainy during chewing and fruit pieces) and mouthfeel (refreshing feel) to describe the jellies, and consumer 2 uses mouthfeel (refreshing feel and mouth puckling). It is interesting but expected to see that both consumers use the word “refreshing” to describe the product in the same way. Also, consumer 2 reveals further that the refreshing quality is about the mouthfeel and not smell or flavor. Other interesting findings are: (1) consumer 4 uses “cooling” and consumer 2 uses “puckling” (what does puckling mean? this insight suggests that the “puckling feel” has some cooling aspect to it), and (2) one consumer relies on texture language more than the other, who uses mouthfeel, to differentiate the nine products.
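For readers who want to see what a GPA routine is doing under the hood, the sketch below is a deliberately simplified, hypothetical implementation: it centers and scales each consumer’s configuration, pads the matrices to a common number of columns, and then iteratively rotates each one onto the running consensus. Commercial packages such as those named above add refinements (for example isotropic scaling within each iteration and significance testing), so treat this only as a conceptual illustration, not as a replacement for them.

```python
import numpy as np

def simple_gpa(configs, n_iter=20):
    """Toy generalized Procrustes analysis: align consumer matrices to a consensus."""
    # Center each configuration and remove overall size differences.
    X = []
    for C in configs:
        C = np.asarray(C, dtype=float)
        C = C - C.mean(axis=0)
        C = C / np.linalg.norm(C)
        X.append(C)
    # Pad with zero columns so every matrix has the same number of columns.
    width = max(C.shape[1] for C in X)
    X = [np.hstack([C, np.zeros((C.shape[0], width - C.shape[1]))]) for C in X]
    consensus = np.mean(X, axis=0)
    for _ in range(n_iter):
        # Rotate each configuration to best match the current consensus (SVD-based Procrustes).
        for i, C in enumerate(X):
            U, _, Vt = np.linalg.svd(C.T @ consensus)
            X[i] = C @ (U @ Vt)
        consensus = np.mean(X, axis=0)
    return X, consensus

# A PCA of the consensus gives the group space used for the loading plot; the share of
# variance on each component is what the "70 percent or more" and "10-15 percent shift"
# guidelines above refer to. Correlating each consumer's original attributes with the
# consensus dimensions produces the correlation plot (Figure 6.1.10B).
```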
If this kind of split happens when an appropriate sample size is used (120 consumers or more), there is a high probability of consumer segments in the market that pay attention to different cues (one segment to texture, the other to mouthfeel) for product differentiation. Kelly’s repertory grid and flash profiling are very versatile methods for researching consumer reactions to products.
Advantages:
● Open and dynamic
● Easy to conduct on a large scale
● Take into account individual differences
● Provide some statistical measures to guide qualitative interpretation
● Allow hypothesis testing for product development

Limitations:
● Too sensitive
● Prone to carry-over effect
● Low ecological validity in certain cases
● Do not provide the consequences of product attributes to perceived benefits and values
● Not appropriate for disruptive innovation
Figure 6.1.11 Advantages and limitations of Kelly’s repertory grid and flash profiling compared to other consumer research methods for language discovery.
Compared to other methods, Kelly’s grid and flash profiling have many advantages (Figure 6.1.11). The methods are very open and dynamic, with some structure to the information, unlike many unstructured interviews such as free elicitation and soft laddering interviews. In addition, both methods are applicable to both qualitative and quantitative studies, as the information generated is self-structured by each consumer and can be captured using a computerized system with minimal or no impact on the outcomes. In conjunction with GPA, both methods provide an overall and an individual picture of product experiences. This allows researchers to anticipate segmentation in the market and to prepare for more costly quantitative studies. Because these methods use multiple products and GPA, statistical measures can be derived from a small number of consumers. These statistical measures, such as the correlation coefficients (see the correlation plot in Figure 6.1.10B) and the p-values from a procrustes analysis of variance for scaling differences and structural differences among consumers, increase the confidence in the interpretation of the results, especially when a qualitative study is conducted. The use of multiple products has another benefit, which is usually ignored by researchers: it allows researchers to test hypotheses about products. Preparing the products used in these studies in a systematic way provides an invaluable opportunity to discover the links between product formulae or attributes and their impact on perceived differences among the products. This insight is valuable for new product development, as the methods are well suited for both qualitative exploration and quantitative confirmation. Even though Kelly’s grid and flash profiling are versatile methods for capturing consumers’ language and eliciting product experiences, they also have limitations. First of all, both methods use multiple-comparison procedures (triadic elicitation and ranking) which are very sensitive and, sometimes, too sensitive for the actual usage situation. Many products are not used side by side, and some are impossible to compare side by side in real life. For
example, nobody has two bathtubs in the same bathroom; therefore, trying to compare bubble bath samples side by side would be overkill. Multiple-comparison procedures are also problematic for products with a high carry-over effect, as mentioned earlier. Products such as chili paste, hot sauce, facial cream, or any other high carry-over-effect products require a long recovery time, and consumers will forget many details of their experiences. Therefore, certain adjustments to the procedure will need to be considered, or alternative methods are recommended. This brings up another issue with these methods: low ecological validity. If the main objective is to figure out product differences, these methods are appropriate. However, if the main objective is to figure out how consumers perceive product differences in real-use situations, these methods are not appropriate. Because Kelly’s grid and flash profiling require physical products and direct product comparison, the methods provide product-level information and guidance but will not provide the associations between product attributes and the perceived benefits and values of using the products. Last but not least, the methods are not appropriate for disruptive innovation, because they require familiar products to work, which contradicts the definition of disruptive innovation as a brand new product.
6.1.7
Summary and future

To have a successful new product, the product must be invented to: (1) answer the needs of consumers, (2) be recognized by consumers as a product that provides benefits for their needs and (3) be different from other products in the market. To do so, identifying the needs, benefits and values, and the associations between these abstract meanings (psychological constructs) and product attributes, is critical. Free elicitation is quick and easy to conduct for identifying these abstract meanings of products. The laddering interview provides the associations among product attributes, benefits and values. And Kelly’s repertory grid and flash profiling provide the product attributes that consumers use to differentiate one product from others in the same category. Applying each method at the right time in NPD, and combining the methods to generate comprehensive insights about product categories, are both important. Combining the advantages of these four methods in a single method is possible, as qualitative multivariate analysis (QMA, see Chapter 6.3) shows many traits of free elicitation and the laddering interview, with multiple product use in a real-use situation. Computerizing the data collection will become easier and more consumer friendly as touch-screen technology becomes less expensive. Incorporating ethnographic studies will enhance the sensitivity and resolution of the methods. Consequently, providing richer and more useful insights is possible with the advent of inexpensive video cameras, cell phones, blogging and social networking technology.
References

Ajzen, I. and Fishbein, M. (1980) Understanding Attitudes and Predicting Social Behavior. Englewood Cliffs, NJ: Prentice-Hall.
Antonides, G. (1991) Psychology in Economics and Business. Dordrecht: Kluwer Academic Publishers.
Ares, G. and Deliza, R. (2010) “Identifying Important Package Features of Milk Desserts Using Free Listing and Word Association”. Food Quality and Preference, 21 (6), 621–628.
Baxter, I.A., Jack, F.R. and Schröder, M.J.A. (1998) “The Use of Repertory Grid Method to Elicit Perceptual Data from Primary School Children”. Food Quality and Preference, 9 (1–2), 73–80.
Bech-Larsen, T. and Nielsen, A. (1999) “A Comparison of Five Elicitation Techniques for Elicitation of Attributes of Low Involvement Products”. Journal of Economic Psychology, 20, 315–341.
Borgatti, S.P., Everett, M.G. and Freeman, L.C. (1999) UCINET 6.0 Version 1.00. Natick: Analytic Technologies.
Chacon, R. and Sepulveda, D.R. (2010) “Development of an Improved Two-Alternative Choice (2AC) Sensory Test Protocol Based on the Application of the Asymmetric Dominance Effect”. Food Quality and Preference, 22 (1), 78–82. Available online 8 August 2010.
Collins, A.M. and Loftus, E.F. (1975) “A Spreading Activation Theory of Semantic Processing”. Psychological Review, 82, 407–428.
Dairou, V. and Sieffermann, J.-M. (2002) “A Comparison of 14 Jams Characterized By Conventional Profile and a Quick Original Method, the Flash Profile”. Journal of Food Science, 67 (2), 826–834.
Faye, P., Brémaud, D., Daubin, M.D., Courcoux, P., Giboreau, A. and Nicod, H. (2004) “Perceptive Free Sorting and Verbalization Tasks with Naive Subjects: An Alternative to Descriptive Mappings”. Food Quality and Preference, 15 (7–8), 781–791.
Grunert, K.G. and Grunert, S.C. (1995) “Measuring Subjective Meaning Structures by the Laddering Method: Theoretical Considerations and Methodological Problems”. International Journal of Research in Marketing, 12, 209–225.
Guerrero, L., Claret, A., Verbeke, W. et al. (2010) “Perception of Traditional Food Products in Six European Regions Using Free Word Association”. Food Quality and Preference, 21, 225–233.
Jack, F.R. and Piggott, J.R. (1991) “Free Choice Profiling in Consumer Research”. Food Quality and Preference, 3 (3), 129–134.
Kelly, G.A. (1955) The Psychology of Personal Constructs. New York, NY: Norton.
Lind, L.W. (2007) “Consumer Involvement and Perceived Differentiation of Different Kinds of Pork – a Means-end Chain Analysis”. Food Quality and Preference, 18, 690–700.
McEwan, J.A. and Thomson, D.M.H. (1989) “The Repertory Grid Method and Preference Mapping in Market Research: A Case Study on Chocolate Confectionery”. Food Quality and Preference, 1 (1), 59–68.
Meilgaard, M., Civille, G.V. and Carr, B.T. (1999) “Descriptive Analysis Techniques”. In: Sensory Evaluation Techniques (3rd edition). Boca Raton, FL: CRC Press, Inc., pp. 161–172.
Mondada, L. (2009) “The Methodical Organization of Talking and Eating: Assessments in Dinner Conversations”. Food Quality and Preference, 20 (8), 558–571.
Moskowitz, H. and Gofman, A. (2007) Selling Blue Elephants. Upper Saddle River, NJ: Wharton School Publishing.
Risvik, E., McEwan, J.A., Colwill, J.S., Rogers, R. and Lyon, D.H. (1994) “Projective Mapping: A Tool for Sensory Analysis and Consumer Research”. Food Quality and Preference, 5 (4), 263–269.
Roininen, K., Arvola, A. and Lähteenmäki, L. (2006) “Exploring Consumers’ Perceptions of Local Food With Two Different Qualitative Techniques: Laddering and Word Association”. Food Quality and Preference, 17 (1–2), 20–30.
Russell, C.G., Flight, I., Leppard, P., van Lawick van Pabst, J. and Cox, D. (2004) “A Comparison of Three Laddering Techniques Applied to an Example of a Complex Food Choice”. Food Quality and Preference, 15, 569–583.
Sieffermann, J.-M. (2000) “Le profil flash – un outil rapide et innovant d’évaluation sensorielle descriptive”. In AGORAL 2000, XIIèmes rencontres “L’innovation: de l’idée au success” (pp. 335–340). Montpellier, France.
Sieffermann, J.-M. (2002) “Flash Profiling. A New Method of Sensory Descriptive Analysis”. In AIFST 35th Convention, 21–24 July, Sidney, Australia.
Stone, H., Sidel, J., Oliver, S., Woosley, A. and Singleton, R.C. (1974) “Sensory Evaluation by Quantitative Descriptive Analysis”. Food Technology, 28, 24–34.
Szalay, L.B. and Deese, J. (1978) Subjective Meaning and Culture: An Assessment Through Word Associations. Hillsdale, NJ: Lawrence Erlbaum Associates.
Veludo-de-Oliveira, T.M., Ikeda, A.A. and Campomar, M.C. (2006) “Commentary: Laddering in the Practice of Marketing Research: Barriers and Solutions”. Qualitative Market Research: An International Journal, 9 (3), 297–306.
Williams, A.A. and Langron, S.P. (1984) “The Use of Free-Choice Profiling for the Evaluation of Commercial Ports”. Journal of the Science of Food and Agriculture, 35, 558–565.
6.2
Insights Through Immersion
Donna Sturgess
Key learnings
✓ The power of an immersive experience to generate novel business ideas
✓ Starting an immersion journey
✓ Taking action from immersion learnings
6.2.1
The power of immersive experience

The power of an immersive experience generates novel business ideas that transcend spreadsheets and ho-hum strategies to make your business thrive. It requires you to get out of the office and push past your operating boundaries to experience new worlds where competitors dare not go. Such experiences are few and far between, and sometimes the wait for that flash of inspiration can seem interminable. But when they come, they can change us forever. I’m talking here about spectacular moments, those very special and unique experiences that can lift us into a new perspective. They might take our breath away. They might send our pulse racing. They might make our head spin. That’s how our body tells us that we’ve stumbled on a truly amazing experience, one that can dramatically shake up our ideas and impressions about the world and leave us with a whole new set of insights, a whole new package of tools with which to achieve our goals and conceive of new ones. These “aha!” moments not only reset what we know, they also unleash new energy – sometimes more new energy than we could ever have imagined. As a veteran marketer from a large multinational corporation, I understand all too well that it takes many, many good ideas to drive a vibrant company and feed the volume of growth expected year after year after year. What leaders need more of are fresh ideas from the outside world. They need more new perspectives from completely different environments than business, where the assumptions and approaches and even the language are usually all too familiar.
They need to venture out into the thrillingly new and thrillingly different. They must dare to immerse themselves – truly and completely – in the potentially transformative experience. These demands have pushed the author to look in some unusual places to continue to find new creative inspiration for fresh ideas and fresh approaches. Put simply: I never stop looking, and neither should you. If a new idea comes in during an elevator ride on a seemingly ordinary day, it is great. If it happens while walking the streets of New York City or Tokyo, so be it. I don’t believe in placing limits on inspiration, neither its timing nor its extent. This chapter is about the enormous power of an immersive experience to open your eyes to new insights and ideas.
6.2.2
Immerse yourself

An immersion is something you undergo, and yes, it takes more than an hour out of your day. An immersion causes you to turn away from the partial-attention world of BlackBerry and iPhone monitoring and multi-tasking to a full-attention world. It is about undergoing an experience, truly giving something up to live it fully, because of the potential pay-off of the new perspective, fresh knowledge or practical wisdom you can obtain. It is like the difference between standing in the ocean looking down at the surface of the water versus putting on a face mask and plunging your head into the water. At the surface, the glare and reflection off the water prevent you from seeing any deeper. You are unaware of what you are missing. Yet when you puncture the surface and go below, the noise and activity from above are immediately shut out. It opens up a whole new world. Let’s consider an example. It is not unusual for business leaders to use spreadsheets of customer demographics or segments to look for insights that will help them understand a mass-market audience. After all, their customer base is a large, complex group made up of a lot of different types of people. But when you are looking down at a spreadsheet, the “glare” of facts prevents you from seeing the customer at a deeper level. Imagine if instead you put on that face mask and plunged head first into the customer’s world by experiencing the second-most popular professional sport in the US: NASCAR (National Association for Stock Car Auto Racing). A NASCAR event is a social aquarium teeming with mass-market consumers who happen to be some of the most devoted sports fans in the world. Spending a day with your team at a race, swimming among those consumers, is an immersion in what it takes to build devoted fans – it’s a hyper-lens for business. The experience is sure to have an impact on your team’s perspective, while the thrill of competition and connection stimulates the rush of dopamine needed to create new ideas for this audience. Your team will see things that would not have been visible through the lens of a spreadsheet. And there’s even more to it than that. For in these experiences the extreme focus of your mental and physical energy magnifies small things that ordinarily would go unnoticed. The noisy environment is shut out, and the intensity of the immersion produces a dramatic view of the new. Your senses are fully engaged.
It’s completely different to be pushed harder than you’ve ever been pushed, right out of all familiarity with your surroundings and how you “should” feel and behave. That is part of the value of an immersion – being pushed beyond your sphere of the familiar and into the role of a beginner. When we see or learn something new, as in the NASCAR example, our thinking slows down and we absorb information differently. The brain has a relaxation response that allows for insight and the emergence of other options. New connections are made during this time, because we are sifting and sorting information differently as a beginner than we do in our everyday work as a professional.
6.2.3
Conductive thinking

Our whole body is involved in the immersive experience – it is not just a mental exercise like the activities at work. In an immersion, you sense the interconnection of body and mind, as well as the connections between your conscious, rational self and your subconscious, emotional self. The new stimulus bumps and rattles our operating assumptions as we seek to put the learning into a context; these new relationships start to rearrange our existing stores of information into novel patterns to produce new thoughts. This is where original ideas float free and opportunities are discovered. It doesn’t have to be quick. The stimulus from the experience will continue to percolate in your subconscious after the event is over. One thing is certain: afterward you will feel the shift, the impact of the immersion. John Boyd, author of the 1976 paper “Destruction and Creation”, saw the nature of creativity as the breaking down of elements, the shattering of the relationship between the parts and the whole. He called this “destructive deduction”. The many assorted parts can be seen independently and then are available to synthesize together in new, creative ways. Boyd was a US Air Force fighter pilot and a great strategic thinker. He thought about creative concepts and decision making as stemming from either analysis or synthesis. Analysis proceeds from the general to the specific, while the opposite happens in synthesis, which proceeds from the specific to the general. Said another way, analysis is deductive thinking and synthesis is inductive thinking. An immersion involves a third mode of processing: conductive thinking. This is a type of inductive thinking that comes from the blend of a physical experience with mental processing. We know there is a link between body and mind; take the concept of embodied cognition. Margaret Wilson at the University of California, Santa Cruz, defines embodied cognition as “the idea that the mind must be understood in the context of its relationship to a physical body that interacts with the world”. In an immersion, you physically engage your body in the experience as a way to stimulate conductive thinking. It is what sets an immersion apart from other approaches to creativity. It is also why you are not likely to develop the next big idea while sitting at your desk in front of a computer screen. You are going to have to regularly get out of the office and step into the world. In a letter to his brother, the artist Vincent Van Gogh described how he got his creative ideas: through direct contact. That is to say, creativity can be found by really immersing yourself in whatever situation you are in, without holding back. When Van Gogh died at the age of 37, he left behind a legacy of energy and
emotion in the form of more than two thousand drawings and paintings. Van Gogh’s efforts reveal that when he worked, he transcended boundaries and deeply focused on the moment. In an immersion, you are engulfed in the experience. Letting go during this experience allows you to move away from your current situation to a new starting point for your imagination: to find new pieces that don’t fit with what you know, to collect them and to see new clusters and combinations that are stimulated by the physical and mental combination of conductive thinking. Have you ever made a business decision based on an insight that came to you after a long walk? This is just an indication of how an immersion experience can affect your entire thought process. The next time you are wrestling with a problem, get up from your desk and explore a new environment as you seek your answers. Are you able to feel the energy of your body and your mind coming together? Perhaps there are other individuals in your organization who can join you on an excursion, and together you can reach a state of conductive thinking. Conductive thinking can shake up the ideas of an entire team. As you journey toward your breakthrough, trying out ways to solve the problem, don’t be afraid of the darkness, of wondering. When your body is active, it allows your mind to recombine fragments of ideas and inspirations, forming entirely novel concepts. Open yourself up to this extreme focus of your mental and physical energy. You will see and understand your thought process in a whole new way. It is a seductive experience to be temporarily in the darker or lighter corners of an immersion, where you might not otherwise have allowed yourself to go. You find yourself truly living in the moment as the experience spontaneously enters your mind, then filtering new thoughts, and then seeing them on their way. The physical space unfolds around you in the experience and merges with your inner space as you move through the immersion. You are constantly processing new learning that is combined physically and mentally. Your sensation of time and space become newly connected as they are at the “edge of physics”. A dense net forms possible references and relationships. You must let go and lean into the situation to maximize what you get out of it.
6.2.4
Getting started

At this point you are probably wondering, “How do I know this won’t be a waste of time?” You don’t. But ask yourself if your team is coming up with brilliant insights and new ideas for the business. If not, why not? Perhaps this is just the “eyeballs out” approach you need. An immersion enables you to open up channels in your mind that are normally closed. We are all trapped inside our own perceptions based on our past experiences and interactions. Too often we stay in our comfort zone, burning time and energy needlessly as we vibrate in place. This only exacerbates the problem. The shorter commercial shelf-life of ideas puts even more pressure on leaders to improve continuously and undertake nothing less than repeatedly reinventing their business.
An immersion offers the opportunity to take a step in a new direction. As with any new adventure, you may not know what you will get out of an immersive experience. It starts with the bold mind-set of the explorer, not the careful mind-set of the planner. You should expect to be stretched by the newness of the situation. If you believe you’re ready to identify a topic for your immersion, here are some simple steps to think through:
● Define areas of your business that would benefit from additional focus
● Identify shifts in the marketplace that will impact your growth potential and may require your team to do some problem solving
● Consider how process thinking and efficiency initiatives may be choking your innovation efforts
● Assess whether your best people are too internally focused and too limited in their external commercial perspective
● Rate your innovation pipeline according to incremental versus step-change growth ideas, and determine where more creativity is needed.
Immersion topics can vary widely, from customer service to technology to team building, depending on the type of innovation you seek. It is helpful to determine whether you are looking to solve a problem, identify an opportunity or gather information. It is useful to take some photographs during the immersion that will stimulate thinking afterwards. Avoid documenting the immersion with video, however, because you need to be absorbed in the moment – attention to filming will dilute your experience. The following section suggests a range of immersion possibilities, but you and your team may create your own unique experiences to accomplish your goals. Keep in mind, an immersion should:
● Stimulate a new way of thinking, by expanding your knowledge base and providing a new experience
● Change your perspective, by offering a view of a subject from a completely different vantage point
● Allow you to zoom in and focus deeply on a problem or opportunity
● Help you step out of the office and connect your business to the real world beyond the corporate fence
● Excite your people and encourage a learning culture.
What you discover during the experience will determine how you execute change in your organization. Your focus during the immersion will determine what you discover.
6.2.5
Plunging into illumination

You can find spectacular moments in your own life and the life of your organization that will reawaken the spirit of adventure and discovery. Immersive experiences will blast away the ordinary that is ingrained in your operations and
unleash the potential energy that will propel your people to unlimited commitment and success. Here are some examples of immersive experiences you might want to explore.
6.2.5.1
Orchid hunting
What? Yes, that’s right: orchid hunting. Would it surprise you to know that the international orchid business brings in $10 billion annually? Orchids are the most highly evolved flowering plants on earth. Even more interesting is the fact that they are ancient plants that have outlived the dinosaurs. You can hunt orchids in many places, from Cuba to Malaysia to New Jersey. Summertime field trips are available in Manitoba, Canada, where participants venture through the wetlands to find fringed orchids. Perhaps you should start by reading The Orchid Thief by Susan Orlean (1998). The book captures the intrigue and politics of the orchid business. Orchid hunting is an interesting intersection between the worlds of nature, gardening and commerce. Science has labeled ideas that are based on solutions created by Mother Nature as “biomimicry”. For example, Velcro was an idea that came from a walk in the woods through burrs. Imagine the ideas that await you as you search for orchids.
6.2.5.2
Letterboxing
Letterboxing (letterboxing.org) is a crowd sourcing activity done outside, mainly in public parks, so get ready to hike as you engage in a mind-body experience. It’s like a treasure hunt: after hiding a small plastic box, letterboxers post clues online so that others can find it. The boxes are given names like Little Bubbles, Classic Robotics, Devil’s Kitchen and Don’t Leaf Me Here. There are more than 20,000 boxes hidden across North America alone. Finders make an imprint of the letterbox’s stamp and, to prove they were there, leave an imprint of their own personal stamp on the notebook in the box. At least six thousand people claim to be involved in letterboxing in the US. Crowd-sourcing is a call for open collaboration, and in this example, it is an invitation for recreation and fun. A day immersed in letterboxing is a chance to think about how adventure and surprise could be added to your product or service to make it more exciting. Novel experiences and surprises heighten customers’ arousal and attention, and can be an effective way to increase engagement with your business. In addition, the first-hand experience you get from participating in a crowd-sourced activity that links the online and offline worlds will open your mind to new, innovative ideas. Spend some time thinking about how a digital connection can be integrated into your product.
6.2.5.3
Feeling like a consumer
If you sell anything to a mainstream, mass-market audience, this experience is a chance to literally put yourself in the shoes of the consumer. Everyone involved in this immersion has to go to a discount store (a large chain such as Walmart, Tesco,
etc.) and purchase an entire outfit, including underwear and shoes, for no more than $50. Next, put the whole outfit on and place your own clothes in a bag. Explore the shopping experience, and observe consumer behavior in different sections of the store as you think about how your business competes for customers. You can also challenge the team to buy a week’s worth of groceries to feed a family of four for $100. Can they do it, or will they be starving by Saturday? Or put spending constraints on products within your own industry or category, and identify how that affects the average consumer’s product choice. Get in touch with how it feels to be the consumer, operating within the financial limits that many households face. Consider how your new perspective changes your view of product differences and what you are willing to pay for under those constraints. The experience will no doubt put the professionals on your team, who typically don’t struggle with budgets at this level, into a new frame of mind regarding innovation and product differentiation. This economic perspective and empathy can be put to good use when thinking about underdeveloped markets as well.
6.2.5.4
Glass-blowing
Glass-blowing is a true process. Your skills build with practice, in as little as one week. The work is done in pairs, and the steps are synchronized moves as you twirl the molten glass that has been gathered from a furnace at 2200ºF. Remarkably, after just one day the beginner can make a basic bowl or vase. Eventually the work evolves to adding color to the glass, and opportunities for creativity increase as the basics are mastered. The Pittsburgh Glass Center is one example of a facility that offers one-day or week-long classes for the novice. Even award-winning glass artists, such as Brayton Furlong in Northern California (BraytonFurlong.com), will arrange for private team instruction. Glass-blowing combines process thinking and mechanics to achieve creativity. The focus is on efficiency in the work and how decisions during action produce something new. It is a world of new language and sensations, and it is amazing to see how the odd wooden tools of the glass-blower produce imaginative products from sand, water and heat. If you develop good ideas while taking a shower or on a long drive, the glass-blowing experience is sure to suit; it is possible to tap more of those subconscious thoughts as you become immersed in a world of artisans and craftsmen. Themes of pride and accomplishment will also be revealed during this experience.
6.2.5.5
A vocation vacation
VocationVacations® is a company that is enriching people’s lives by allowing them to test-drive a career. While taking a vacation of two to three days, you can immerse yourself in a completely new discipline through one of the 125 career choices such as: Be a Farmer, Be a Broadway Director, Be an Architect, or Be a Non-Profit Director. You may or may not be able to convince your employer to foot the bill for the experience, which usually has a price tag of under $1,000.
Even if you take a small team on the experience, it can be less expensive than most innovation projects. Try defining it as an immersion rather than a vacation when you discuss it with your boss! What you get out of the experience will depend on the type of career experience you choose. Think about how the theme may connect you to your customers or offer learning in a strategic area. If you work for Kraft Foods, for example, the Be a Chef experience may not open your eyes to new possibilities as much as the Be a Restaurant Critic or Be a Wine Sommelier experience would. Sink your teeth into something new.
6.2.5.6
A day to yourself
If you are like most people, it has been a long time since you have been alone with your thoughts for an entire day. This immersion is a chance to unclog your brain, to refresh and recalibrate yourself. When the mind is spinning with day-to-day demands and distractions, it is hard to get new ideas to flow. To stimulate the conductive thinking offered by an immersion, you will need to choose a physical activity to undertake for the day. Physically engaging your body in the experience is a way to help unlock your thoughts. Activities such as biking, hiking, painting, skating, gardening or swimming will stimulate juicy subconscious thoughts. The physical activity allows these circulating thoughts to break into your consciousness, where they can be assembled into new patterns and ideas. Spend the day in a place that you find inspiring, and enjoy yourself.
6.2.6
Taking action

Successfully operating a business requires continuous innovation inspired by fresh insights. Immersion experiences reveal those insights in extraordinary ways. An immersion brings out new sights, sounds, smells, tastes and textures that can trigger deep emotional responses and amazing revelations – the fragments of new, original ideas. You should take notes during the immersion, recording what you experience through all five senses. Capture the full spectrum of content beyond simply what you see; note how it felt, how it smelled, and any unique elements in the environment, such as colors or symbols, or even a sense of mystery or energy. Once you complete an immersion, spend time thinking about the experience. How did it differ from your daily work life? What insights came to you during the experience? Did it shake you up? What thoughts still stick in your head and demand that your business change and evolve? Allow the thoughts from the experience to percolate through your mind and assemble into new ideas. A few days after the immersion (don’t wait too long), get together with your team to discuss the learning from the experience and the implications for your business. Share any photographs you may have taken to help stimulate the conversation. Remember, even fragments of ideas at this stage are useful, even if
you can’t immediately put them into action. The list of questions below is a good way to start a discussion:
6.2.7
● What part of the experience was eye-catching enough to get noticed?
● What creative pieces did people collect through the five senses, and why were those of interest?
● What principles were observed during the immersion, and how might those apply to your business?
● How did the experience change your perspective?
● Are there any fully formed ideas that you can apply directly to your business? Are there any idea fragments that you can build upon?
● What spectacular moments of insight and learning did you take away from the experience?
● What other immersive experiences might be useful to stimulate insights and creativity?
Summary and future

Sometimes a new technique like an immersion can produce fresh-squeezed ideas and unlock a team’s creativity. If you’ve been relying on the tried and true tactics of your industry to enact change, it may be time to look for ideas outside of your category or industry. An immersion provides a continual process of self-discovery, and by supporting those discoveries you will excite people to reach for new creative opportunities. This in turn pushes them to redefine the boundaries of excellence and thus gain meaningful advantage over the competition. In the end, it is all about better serving customers by delivering remarkable innovation.
6.3 Qualitative Multivariate Analysis
Kannapon Lopetcharat and Jacqueline Beckley
Key learnings
✓ Generating empathic conversations with consumers
✓ Transforming a moderator to a facilitator
✓ Consumer hierarchy of needs and qualitative Kano map
✓ Value diagram
6.3.1 Consumers do not know what they want, really. Really?
"Consumers do not know what they want." "Consumers have a hard time articulating what they need, want or like." "Consumers are very fuzzy about what they want, specifically." These are familiar expressions that we have often heard from our clients (product developers or marketing colleagues) or from some of us (sensory professionals or designers who want more analytical responses) who did not hear what they had expected from a long day of focus group interviews, one-on-one interviews, a laddering study or other conventional qualitative methods. However, are these statements really true? It is very common for product developers or marketers to ask a Consumer Explorer (CE) to run a consumer study using X, Y or Z method because they are accustomed to those methods and have had familiar and positive experiences (or positive outcomes) with them. But a good CE should not execute a study simply because the clients prefer a certain method over others. A good CE should contribute to his/her team by recommending appropriate methodologies and executing them flawlessly. Professor Mina McDaniel said this to one of the authors (KL) when he was her student at Oregon State University:
“Every time a client calls, he or she will tell you to run such and such test for him or her. It is our responsibility to listen and recommend to our client the best or at least the most appropriate way to answer his/her needs.”
Let's come back to the three statements. There are two possible explanations for those statements:
● Did consumers really not know what they wanted? or
● Were the methods that the researchers used not appropriate?
Most of the time it is the latter, especially when it comes to product innovation, and particularly disruptive innovation. Cammie Dunnaway, CMO at Yahoo! Inc., and Sergio Zyman, former CMO of Coca-Cola, thought that running focus groups was a waste of time, as they never gained any useful information from them, especially for product innovation (Moskowitz and Gofman, 2007). Clancy and Krieg (2000) took aim at most aspects of classic market research in their book Counterintuitive Marketing, but in particular suggested that the traditional focus group was part of "death-wish research". These are just a small sample of the evidence that an inappropriate tool is often being used; the very name focus group already hints that it needs focus. Therefore, if the client wants to know about unarticulated needs or subconscious insights, they will not find them by using a classic focus group format (Zaltman, 2003) or any method that relies on interview guides in which it is assumed that the answer to the question is already known. The idea of getting people together and "forcing" them to think deeply around unarticulated and subconscious behaviors (i.e. behaviors that involve no deep thinking) is rather crazy. How could that ever work? How can one bring out the unarticulated needs and subconscious insights from a consumer's mind and visually present them to the outside world without a lot of fuss and biases from the researcher's and team's agendas? One answer lies in a new protocol called Qualitative Multivariate Analysis (QMA).
6.3.2 Introduction
Conventional consumer and marketing research processes can be quite robust. However, many classic tools (like quick Internet surveys via vendors such as SurveyMonkey and Zoomerang, or many forms of CLTs) can be lacking in some core implementation areas:
(1) Ability to reflect empathy toward target consumers
(2) Tools that appropriately unlock consumers' behavior and needs
(3) Use of relevant questions (questions are often driven by benchmarks/tradition or an incomplete picture of target consumers' views of the product or situation).
These three important factors are inter-related (Figure 6.3.1), since one component is a basis of the others. To gain empathy toward target consumers, one must have the right insights, which can be discovered by using an appropriate consumer research tool. This depends on the objective(s) of the study and on the research team's desired level of empathy toward the target consumers.
Figure 6.3.1 Empathy, asking right questions and choosing appropriate research tools are inter-related and support each other.
Solving these three issues early, at the beginning of new product development (NPD), guides developers to innovate in the right direction quickly and enhances the success of their innovations. QMA can provide solutions to these three critical issues with speed and accuracy. QMA consists of many steps, including home-use testing, a group discussion, and projective mapping (often referred to as tablecloth mapping or napping techniques) with a small number of consumers (10–15). Many published studies have successfully applied some aspects of QMA, especially the projective mapping step (Nestrud and Lawless, 2008; Perrin et al., 2008; Risvik et al., 1994). QMA incorporates all these steps purposefully to uncover different aspects of consumer insights.
6.3.3 Qualitative Multivariate Analysis in practice
QMA is an alternative consumer research protocol that was created to capture insights from consumers and to discover the possible linkages between different and important values of products (Drake et al., 2009). Jacqueline Beckley (one of the editors), from the Understanding & Insight Group, invented QMA to take advantage of conventional qualitative tools (e.g. focus group, one-on-one interview) while avoiding many of the biases from which these methods suffer. Table 6.3.1 summarizes how QMA avoids or takes advantage of these biases. The moderator's skills required for conventional research methods differ from those required for consumer-insight exploration research. The former are suitable for fine-tuning assumptions (the known), while the latter requires a capability for exploration (the less known and unknown). For example, skilled focus-group moderators are trained to probe and focus according to the objectives of the study (a priori determined assumptions); therefore, the moderators need to stay on task and create bases of understanding around what has been asked. This need
Table 6.3.1 Biases in conventional qualitative methods and how QMA avoids and takes advantage of the biases.

Bias: Moderator's skill
Focus group: Too structured and may be oblivious to actually important consumer insights.
QMA: Panelists lead the conversation; moderators are trained to detect insights and to loosen command and control.

Bias: Group composition
Focus group: Cannot handle too much difference (e.g. age, income, social status, etc.).
QMA: QMA takes advantage of the emotional dialogue between different groups of panelists.

Bias: Group dynamic
Focus group: Strictly controlled by heavy probing and group composition; consequently, consumers will over-rationalize their answers.
QMA: The dynamic of the group is part of the outcome, as the topic of discussion is generated by the group, led by stimuli and experience, not the leader.

Bias: Personality of panelists
Focus group: Outcomes are easily influenced by an alpha panelist or too many quiet ones.
QMA: QMA makes sure that all panelists have equal opportunities to express themselves through many activities.

Bias: Moderating guide
Focus group: The moderating guide limits panelists' true opinions as they will respond to the researchers' a priori agenda.
QMA: Panelists guide the topics of discussion by themselves, so there is no a priori agenda to follow; the session is anchored in business-driven hypotheses.
for focus can be a barrier to hearing, seeing and feeling what might be of actual consumer importance on a topic. For QMA, the protocol is designed to be open, and a QMA facilitator is trained to detect what is important to consumers through the use of multiple cues (body, face, language, tone) and stimuli, and through the ability to turn over apparent control of the discussion to the consumers. The term facilitator will be used instead of moderator for the innovative qualitative tools for consumer-insight exploration highlighted in this book. Group composition is another potential problem for conventional methods, especially those that require group activity. The results of the study are easily influenced by the composition of the panelists (e.g. an alpha-panelist (a person who dominates the opinion of the group), a quiet majority (those who do not express their opinion), age differences (young vs. old panelists), social status differences (different income levels), etc.). Without control from moderators, the focus group discussion can turn into an argument, or the participants may misrepresent their thoughts (lie) to save face, thereby leading to lost discussion opportunities around the topic at hand. This is not really a problem for QMA
since it takes advantage of this emotional dialogue between different groups; these considerations are accepted as part of the results from QMA because of the grounded experiences of each panelist. Group composition also influences group dynamic. To control the discussion, conventional methods such as focus group interviews rely heavily on probing activities that are assumed to be relevant, like frequency of use and typical market needs, such as consideration of choice in making a purchase. These specific questions may be detrimental to finding true and important consumer needs and wants because consumers will overly rationalize their answers in an attempt to assist the moderator in achieving the discussion goal. In QMA, the dynamic of the group is a part of the outcome, as the topic of discussion is generated by the group using subtle framing and positioning of a key discussion starter. Personality of panelists is a very important aspect of group composition. As mentioned above, an alpha-subject can ruin the outcomes of the whole group. By the same token, too many quiet panelists can negatively impact the outcome as well. QMA was developed to make sure that every subject remains engaged through the use of many activities (i.e. diary, round-robin discussion, note taking, voting and map building) and prior/recent experience, which enables everyone to participate equally. The moderating guide, or "the guide", influences the results of the study through the moderators and their skills. Moderators have to follow the guide to deliver what their clients need (generally listed in detail in the guide); consequently, there is not much room for exploration or for diverging from the guide when actual important insights are discovered. QMA does not really use any moderating guide. In fact, consumers guide the topics of discussion by themselves – the framing of the research problem (so as not to lead the group toward what a "right answer" might be) and an understanding of the 3–5 key business needs represent the "guide" created for QMA. To avoid, and even take advantage of, these shortcomings of conventional methods, QMA provides eight steps to real insight discovery (Figure 6.3.2).
6.3.3.1 Step 1: Set up the objective(s) of the study
Any good research starts with good objectives (aka right questions or first-order questions), as mentioned earlier in this book. In practice, especially during the insight discovery phase, this is actually hard to achieve because there are many objectives or requests from the research team and all of these needs appear to have equal importance. QMA accommodates this problem easily, as the objectives for a QMA study can be as simple as understanding consumer habits or as refined as understanding which package is good for a specific product in a specific format. QMA handles this by collecting all hypotheses from the team during the planning process. QMA answers the hypotheses through observing consumer behavior and, consequently, QMA provides not just answers but also the reasons behind the answers. For any hypothesis, if consumers do not mention it, the topic is most likely not that important for them or they are not aware of that need/want/value. Because QMA provides both answers and reasons, it helps the team to identify any topic as an opportunity (aka an unarticulated need) or an insignificant need.
1. Setting up the objective(s) of the study.
2. Identifying the product set.
3. Conducting home-use testing (HUT).
4. Capturing consumers’ language.
5. Building value diagram.
6. Conducting Love-it! or Hate-it! voting.
7. Conducting napping exercise.
8. Developing Kano diagram.
Figure 6.3.2 Eight steps in conducting a successful QMA study.
6.3.3.2 Step 2: Identifying the product set
To discover insights about products, comparing many products is the best way for researchers to really understand what drives consumers' judgments and to gain true consumer insights (Moskowitz and Gofman, 2007). Testing one product with many questions (aka monadic testing) will not deliver those insights. QMA unlocks hidden insights through actual product evaluation in real-life situations. However, which products should we give consumers to use? In contrast to many other qualitative methods that utilize a small set of products covering a small portion of a product category, QMA requires the use of products that cover the product category as much as possible (e.g. sensory properties, benefits, packaging). Figure 6.3.3 shows the three criteria that should be considered to select products for any QMA study. First, the most important objectives must define the aspect(s) of the products in your study. For example, if the main objective is to understand the impact of vanilla flavor in ice cream, then the products should cover a very broad range of vanilla ice creams. Different textures (even better if at the same flavor types as the flavor series) should be secondary in this example, since the flavor issue has been positioned to be superior. Second, the selection should be as systematic as possible. In one of the authors' (KL) experiences, a profiling technique (e.g. descriptive analysis, flash profiling) plus experimental designs are used to
Figure 6.3.3 Three critical criteria to select samples for a successful QMA: (1) selecting product aspects for your study – the most important objectives drive the selection; (2) the selection should be systematic – use profiling techniques and apply experimental design principles; (3) the differences between the aspects should be large – consider the prototype space instead of only the current market; sensory characteristics and benefits are important.
guide product selection to accommodate many objectives effectively and to represent any larger quantitative study, such as a category appraisal study. In addition, systematic selection allows researchers to interpret and understand the results easily. Having a mindful rationale for the inclusion or exclusion of each product sample facilitates the listening and understanding function within the session. Third, the differences in the product aspects of the study should cover the whole category from an R&D perspective in addition to marketing/business requirements. In many cases, scent, taste, texture, color and other sensory experiences with products (aka sensory aesthetics in the personal care category, or benefits) drive preference/choice behavior (both liking and disliking) and ultimately make consumers love or hate a product. Even though the main objective may not be about sensory aesthetics, it is recommended to consider these differences in product selection.
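To make the idea of a systematic, broad-coverage product set concrete, here is a minimal sketch of one possible selection heuristic, assuming (hypothetically) that a descriptive-analysis or flash-profile score is available for each candidate product. The products, attributes and scores below are invented for illustration and are not from any actual QMA study.

```python
# A minimal sketch of spreading a QMA product set across a category.
# Assumption: each candidate product already has a sensory profile
# (attribute -> score) from descriptive analysis or flash profiling.
# All names and numbers here are hypothetical.
import math

profiles = {
    "A": {"sweet": 6.0, "creamy": 7.5, "vanilla": 8.0},
    "B": {"sweet": 2.0, "creamy": 3.0, "vanilla": 4.5},
    "C": {"sweet": 5.5, "creamy": 7.0, "vanilla": 7.5},  # very close to A
    "D": {"sweet": 8.5, "creamy": 2.5, "vanilla": 1.0},
    "E": {"sweet": 4.0, "creamy": 5.0, "vanilla": 6.0},
}

def profile_distance(p, q):
    """Euclidean distance between two sensory profiles."""
    return math.sqrt(sum((p[a] - q[a]) ** 2 for a in p))

def spread_selection(profiles, n_products):
    """Greedy maximin selection: repeatedly add the candidate farthest from
    everything already chosen, so the set covers the category broadly."""
    remaining = dict(profiles)
    chosen = [next(iter(remaining))]          # start from an arbitrary product
    remaining.pop(chosen[0])
    while len(chosen) < n_products and remaining:
        farthest = max(
            remaining,
            key=lambda r: min(profile_distance(profiles[r], profiles[c]) for c in chosen),
        )
        chosen.append(farthest)
        remaining.pop(farthest)
    return chosen

print(spread_selection(profiles, 3))  # e.g. ['A', 'D', 'B'] - a broad spread, skipping near-duplicates
```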
6.3.3.3 Step 3: Home-use-testing (HUT) phase
This phase is a critical step, as it influences the outcomes, study timeline and cost. Unlike other conventional methods, QMA requires many products to be used by consumers in real situations for an appropriate period of time. Therefore, the length of a HUT phase depends on the objectives of the study, the nature of the products and the number of products selected from step 2. To estimate the length of a HUT phase for a product category, researchers follow the steps in Figure 6.3.4. First, researchers have to define a use occasion. A use occasion is how a product is used and experienced regularly by people (this should be a realistic assumption, guided by the important objectives). This step is very critical, as it defines the length of time per use of a product. Some product use occasions and experiences can be as short as a few minutes, such as eating an ice cream cone, or as long as a few hours, such as using a lipstick. Then the researchers multiply the use occasion and experience by three or more to get the total
1. Define a use occasion
2. Assume that the consumers will use a product on at least three occasions
3. Multiply the number of products by three occasions
Figure 6.3.4 Steps in determining the length of HUT phase in QMA.
length of time needed to use a product. Finally, multiplying the total length of time to use a product by the number of products in the set gives the total length of time needed for the whole study. To conduct a successful HUT for a QMA study, the checklist of major steps in a HUT is shown in Figure 6.3.5. The details for conducting a successful HUT will be discussed further in Chapter 11.
Prioritize the objectives of the study
Prepare screeners based on the objectives
Calculate the length of HUT
Procure and produce samples
Blind the samples (optional)
Ship samples to distribution sites
Distribute samples to consumers
Prepare use instruction and diary
Make sure that consumers bring all products back to the discussion session
Figure 6.3.5 Important steps to conduct a successful HUT for a QMA study.
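The length-of-HUT arithmetic in Figure 6.3.4 can be written down in a few lines. This is only a minimal sketch; the example numbers (eight products, about two realistic use occasions per day, at least three uses per product) are hypothetical, although they echo the cottage cheese case discussed later in this chapter.

```python
# A minimal sketch of the HUT-length arithmetic from Figure 6.3.4.
# Assumption: the per-day figure reflects how often the product can
# realistically be used; the example numbers are hypothetical.
import math

def hut_length_days(n_products, uses_per_day, min_uses_per_product=3):
    """Days per product = minimum occasions / realistic occasions per day (rounded up);
    total HUT length = days per product x number of products."""
    days_per_product = math.ceil(min_uses_per_product / uses_per_day)
    return n_products * days_per_product

# Eight products, roughly two use occasions per day, three uses per product:
print(hut_length_days(n_products=8, uses_per_day=2))  # 16 days (before weekends and buffers)
```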
The nine steps in Figure 6.3.5 are inter-related, as each step will influence the others in terms of the length of the study and cost. The objectives will guide the screening of people needed for the study and the calculation of the length of the HUT phase. The recruiting should start at least two weeks before the product pick-up date; therefore, the screener will need to be approved by the team/client about a week before recruiting starts. In addition, estimating the length of the HUT and the amount of product needed for the HUT is critical. The amount of product must be in excess (at least twice the estimated amount) because there will always be other ways to use the products. Limited product may limit what is learned through the QMA. Procurement and production of samples usually takes about one month for marketed samples and about two months for created samples with proper clearances. Proper sample packaging will also need to be agreed upon by the team. Blinding the product is another issue, as it impacts the packaging selection. This step is optional, depending on the objectives of the study. Usually, when the study involves a prototype at a very early stage in NPD, the samples should be blinded; however, if the study is at a late stage in NPD and/or the package is part of the benefits of the product, then blinding the sample may be unnecessary or may impact the context of the experience. Shipping samples to the distribution/test site is very important for any product testing, especially for international projects, and it will be covered separately in Chapter 12. Distributing samples to consumers and requiring them to sign a Confidentiality Disclosure Agreement (CDA) are very important. Researchers must make sure that consumers sign a CDA before they touch any sample. At this point, consumer orientation to the HUT and QMA should be done, including how to fill out the diary (Figure 6.3.6). The diary that will accompany the samples should have pictures of the samples with corresponding three-digit random codes and have plenty of space for consumers to write down their experience.
“There are XXX samples in this bag. You have YYY days to use these samples. Use these samples as you regularly use them. You MUST use each sample at least once. If you do not want to continue to use any sample after you use it once, please do so and explain the reasons why you do not want to continue using the sample in the diary provided to you in this bag. If you want to continue to use any samples more than once, please do so and describe your experience and the reasons why you want to continue using the sample. You will use the diary to discuss your experience in a following group discussion, so please record as many details as possible. Again, you MUST use each sample at least once. Please feel free to start with any sample.”
Figure 6.3.6 An example of a typical QMA HUT diary use instruction.
Usually the authors prefer a blank page (half of an A4 sheet) on which consumers write their experience, with the product picture and identifying code prominently displayed in the top-right corner of the page. The preference is to keep the structure of the questionnaire as limited as possible. If more information is needed (and we strongly recommend against this practice, due to a desire to see how the consumer shapes their thinking and experience), a short list of questions (fewer than three questions in total) may be asked to provide a few structured pieces of information. During the orientation for the HUT, researchers need to make sure that consumers bring ALL samples back to the QMA discussion session. It is common for consumers to transfer the samples from the test containers to their own containers before returning the samples to the study coordinator. However, it is critical for QMA that consumers bring all the samples back to the discussion session, as they will have to use the samples during the session to be reminded of their experience. On occasion, following the HUT, some people will not return for the QMA session. This can happen with items that are of high value or have long-lasting qualities (e.g. facial cream, electronics, certain packaged foods). The most efficient way to handle this situation is to link the return of samples to payment and to include possible charges when samples are not returned.
6.3.3.4 Step 4: Capturing consumer language
Qualitative Multivariate Analysis captures the words used to describe a product's characteristics, and those that differentiate products that are loved from those that are hated, in a very natural way; generally the practice is group discussion without any discussion guide (aka live-chatting). The QMA leader/facilitator must listen carefully and intently and, with a listening partner (at least one other person who is the silent leader, tracking all conversation in real time), must record all the details carefully. The discussion can be done in three easy steps (Figure 6.3.7). Start by telling panelists to bring out their products, since this step will start the panelists talking to each other. Usually they will start to talk about their experience right away. The facilitator should ensure that the consumers do not influence each other's opinion at this point. It is helpful to have a small overview or a brief orientation for the QMA participants regarding what their job is and the job of the facilitator and the listening partner. Brief introductions and ice-breakers can be used since the participants will be together for a few hours and will eventually
1. Bring out the products.
2. Introduction, orientation and ice-breakers.
3. Talking about their experience.
Figure 6.3.7 Three steps to start capturing consumer language in QMA.
Why did they like a product and use a product again?
Why didn’t they want to use any product again?
On which occasion did they use the product, and how?
What values did experiencing the product bring to them: functional, quality, monetary, emotional, safety, social, ethical, etc.?
Figure 6.3.8 Four questions that a QMA moderator and listening partner need to capture during the discussion.
get to know each other. The goal of this step is to explain the objectives of the study, to ease any tension among the consumers and to assure consumers that their opinion counts and that there is no right or wrong answer. Making sure that all participants are open-minded and respect others' opinions is also very important, and the facilitator must be very attentive to this throughout the rest of the session. It is common for consumers to think that their experience with a product is a shared experience, and preparing them for the fact that we might each experience the same product differently is essential. Finally, the discussion about their experience with the products begins (generally no more than 15 minutes after starting the session). The diary is very useful here, as the consumers can use it as a reminder of their experience; there may be a length of time between the first product they used and the last product. Figure 6.3.8 shows the key questions for both the facilitator and the listening team to be mindful of. Asking the questions and recording a consumer's expressions (e.g. language, phrases, examples) are only half of the findings. Probing to understand the "relationship" among the expressions is the other important half of the findings. This leads to the next important activity, value diagram building.
6.3.3.5 Step 5: Building the value diagram
Value diagrams show the relationships among different levels of a consumer's values. It is the initiation of linking the casual language of the consumer to a hierarchical or laddered understanding of how they make sense of the product experiences they have had. The value diagram in QMA serves as the representation of the common experiences that everyone participating in the QMA has had. It will be referenced throughout the remainder of the session and helps to identify how many different individual experiences have been encountered with the products (called idiographs). Keep in mind, these values are discovered and captured through consumers' actual experience of product use in real situations. Consumers are asked to provide the tangible anchoring points for the product experience (called elements). These are the factors that are highly tangible yet general to the product experience and can be controlled by the client company.
People/consumers usually express their experiences through the use of sensory descriptors (i.e. taste, scent, sound, appearance and feeling) and middle-level cognitive descriptors, phrases or examples (e.g. fun, easy to use, pleasurable, reminds me of . . ., this feature assures me that it is closed tightly, etc.). The defined words they use to describe their experience are usually the basic or top-of-mind level of their understanding, and they are relevant to the actual product or situation. More complex descriptions (e.g. phrases, examples, pictures, or additional products that they bring to the session) indicate a higher level of experience in their context. The facilitator's function at this stage is to gather enough insights to ensure a clear explanation of the relationships being described. Probing, if it needs to occur, keys off the consumer conversations and is not driven by a prescribed list of questions which assume specific answers (Moskowitz et al., 2006).
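A lightweight way for the listening partner to keep track of the value diagram as it is built is to record it as a simple directed graph from tangible elements, through middle-level descriptors, up to values. The sketch below is only an illustration; the node labels are borrowed from the cottage cheese example later in this chapter, but the specific linkages shown are hypothetical, not the study's actual diagram.

```python
# A minimal sketch of recording a value diagram as lower-level -> higher-level links.
# Node labels echo the cottage cheese example later in the chapter; the links are illustrative.
from collections import defaultdict

value_diagram = defaultdict(set)   # expression -> higher-level expressions it supports

def link(lower, higher):
    """Record that a lower-level expression supports a higher-level one."""
    value_diagram[lower].add(higher)

# element -> middle-level descriptor -> value
link("small curd size", "creamy texture")
link("creamy texture", "quality")
link("versatile usage", "convenience")
link("high in protein", "healthy")

def ladders_from(node, path=None):
    """Walk every ladder upward from a starting expression."""
    path = (path or []) + [node]
    uppers = value_diagram.get(node, set())
    if not uppers:
        yield path
    for upper in uppers:
        yield from ladders_from(upper, path)

for ladder in ladders_from("small curd size"):
    print(" -> ".join(ladder))   # small curd size -> creamy texture -> quality
```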
6.3.3.6 Step 6: Love-it! or Hate-it! voting
Love-it! or Hate-it! voting forces the consumers to make decisions and brings structure to the broad QMA process. The voting is intentionally done right after value diagram building because it enables the consumers to articulate the reasons why they love particular products, instead of the researcher simply getting the answer "I just love it". Remember, the ultimate goal of QMA is to understand the reasons behind the love, not only to find which sample is the most loved. To conduct the voting, researchers follow four simple steps (Figure 6.3.9). Once the panelists declare their preferences toward the products, they will know who agrees and disagrees with them. By asking them to explain the reasons behind their choices through group discussion – for each group of consumers who love the same product and each group who hate the same product – the process focuses on identifying the trade-off of factors that ultimately lead to values. The whole sequence of factors linked to consequences and ultimately values
1. Ask the consumers to choose only one sample that they love the most (the sample that they will use again, want to keep or simply the most loved sample).
2. Ask them again to choose the second and the most hated samples.
3. Record and tally the number of consumers who love each sample.
4. Let consumers explain the reasons behind the loving and hating of the products through group discussion for each group of consumers who love the same product and who hate the same product by focusing on the trade-off of values.
Figure 6.3.9 Four steps to conduct Love-it! Hate-it! voting.
provides prioritization of elements in the value chain for a product or service and ultimately leads to the sequencing of elements that provides a meaningful experience. This is what the product designer or developer requires to link into the marketing promise, and it is what is understood most completely by the consumer of a given offer.
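For record keeping, the tallying in steps 3 and 4 of Figure 6.3.9 can be captured in a few lines; the panelist identifiers and votes below are hypothetical and only show the shape of the data.

```python
# A minimal sketch of recording and tallying Love-it!/Hate-it! votes
# (Figure 6.3.9, steps 3 and 4). All votes below are hypothetical.
from collections import Counter

votes = [
    {"panelist": "P01", "love": "S1", "hate": "S11"},
    {"panelist": "P02", "love": "S1", "hate": "S20"},
    {"panelist": "P03", "love": "S2", "hate": "S11"},
]

love_tally = Counter(v["love"] for v in votes)
hate_tally = Counter(v["hate"] for v in votes)
print(love_tally.most_common())  # [('S1', 2), ('S2', 1)]
print(hate_tally.most_common())  # [('S11', 2), ('S20', 1)]

# Group panelists by the product they love, so each group can explain its reasons (step 4).
love_groups = {}
for v in votes:
    love_groups.setdefault(v["love"], []).append(v["panelist"])
print(love_groups)  # {'S1': ['P01', 'P02'], 'S2': ['P03']}
```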
6.3.3.7 Step 7: Mapping/napping exercise
This activity is like the other side of the coin from Love-it! Hate-it! voting because it focuses on figuring out the order of samples along the most important values or characteristics (mainly the two most important values). While the QMA mapping method has been in place since the late 1990s, articulation of this approach was documented in 2003 by Professor Jérôme Pagès (Pagès, 2003, 2005) as an alternative to other sample classification methods (e.g. free sorting (Lawless and Glatter, 1991), flash profiling (Dairou and Sieffermann, 2002), Spectrum® (Meilgaard et al., 1999) and QDA™ (Stone et al., 1974)). The application of the mapping/napping technique in QMA is not only to reveal the most important values/characteristics of the samples, but also to understand the grouping of the samples according to those values/characteristics. There are seven proven steps that cause the least confusion among the consumers when they conduct this activity (Figure 6.3.10). The first step is allowing the panelists to relax by getting up and helping clear the table. This creates the change of
1. Clear the table and relax.
2. Let the group select a reason to love the samples.
3. Make a physical axis (x-axis) for the reason to love the samples.
4. Let the group allocate the samples along the x-axis from the least to the most.
5. Let the group select another reason to love the samples.
6. Make another axis (y-axis) and put it perpendicularly from the first axis.
7. Move the samples from the first axis up and down the y-axis from the least to the most. Make sure that they do not alter the sample locations on the x-axis.
Figure 6.3.10 Seven steps to conduct mapping/napping exercise.
pace in the process, resets consumers' attention and also lifts the mood of the group. Then the facilitator asks the group to select the key reason to love the samples. This reason can be anything, such as a characteristic, a value or a phrase that describes what will make them love the products. The value diagram and Love-it! Hate-it! voting will be very useful at this stage, as the consumers know how to articulate their opinion or at least give some examples. Then the moderator creates a physical line (x-axis) along one side of a table (masking tape is great for this purpose) and marks one end with the most positive side of the reason and the other end with the most negative side of the reason selected previously (Post-it™ notes are great for this purpose since they are non-permanent and highly flexible). Then the facilitator has the group allocate the locations of the samples along the x-axis from the least (close to the negative end) to the most (close to the positive end). Noticing and recording the interactions between the panelists is as important as the results. If the group has disagreement about the location of a particular product, the group can put the product in two or more places. This indicates a possibility of segmentation in the market. It is critical to probe the why of the differences of opinion and observe the interaction between the consumers as the rationale is provided. If the interaction heats up, it means this is very important for them, or at least for a fraction of them (potential segmentation, a potential factor which has high emotive consequences). The facilitator needs to make sure that the discussion does not turn into an argument, but rather keeps the dialog open to allow for understanding why the differences or the agreements occur. Once the group finishes allocating the samples along the x-axis, they continue with setting up the y-axis following the same steps (identify the second reason, make a line and allocate the samples). One restriction for this step is that the y-axis will be perpendicular to the x-axis and the group can only move the samples (all the samples) up and down the y-axis without altering the samples' locations on the x-axis. (Note: this allows the complex relationship of an intersection of two elements to be understandable to almost all people we have worked with during this activity.)
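If the team wants to carry the tablecloth map into later analysis, one simple option is to digitize each sample's position on the two consumer-chosen axes and compute sample-to-sample distances; published projective-mapping work analyzes coordinates of this kind with methods such as Multiple Factor Analysis (Pagès, 2005). The coordinates below are hypothetical and the distance listing is only a hand-rolled illustration.

```python
# A minimal sketch of digitizing a napping tablecloth: each sample gets an (x, y)
# position on the two "reasons to love" axes, and pairwise distances follow.
# Coordinates are hypothetical; formal analysis (e.g. Multiple Factor Analysis)
# would work from data captured in this shape.
import math

positions = {
    "S1": (0.9, 0.8),
    "S2": (0.8, 0.7),
    "S6": (0.2, -0.6),
    "S20": (-0.5, -0.7),
}

samples = sorted(positions)
for i, a in enumerate(samples):
    for b in samples[i + 1:]:
        d = math.dist(positions[a], positions[b])
        print(f"{a} - {b}: {d:.2f}")
# Samples the group placed close together (e.g. S1 and S2) show small distances.
```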
6.3.3.8 Step 8: Kano diagram development
This exercise is done by the facilitator and the team members who are present during the session, and it is usually conducted after finishing the consumer discussion and organizing all the information collected. (Alternatively, if time allows, it can be another part of the group process, which allows the consumers to apply the rules of product design that they have articulated throughout the understanding experience.) If the leadership approach is taken, consumers will not be involved in this process at this point. This should be done as soon as the session is completed, while the memory is still fresh in the researchers' minds. The goal of developing a Kano diagram is to classify the product benefits, features and attributes, using the consumer language discovered throughout the QMA process, into classes according to Kano's philosophy. The Kano satisfaction model is the brainchild of Professor Noriaki Kano of Tokyo Rika University and his colleagues in Japan, introduced to the world in the 1980s (Kano et al., 1984; Berger et al., 1993). The Kano satisfaction survey
Table 6.3.2 How to find Kano attributes from QMA and how to apply the learning in NPD.

Type of attribute: Must-have
What it is for: The cost of entry of a product category.
Where to find it in QMA: Love-it! Hate-it! voting – focus on the reasons for the most disliked product.

Type of attribute: Driver
What it is for: Used for guiding sustainable innovation of a product category.
Where to find it in QMA: Napping exercise – the descriptions of the dimensions are usually the driver attributes.

Type of attribute: Delighter
What it is for: Unarticulated needs; used to differentiate the products from others and hook consumers.
Where to find it in QMA: Love-it! Hate-it! voting and napping exercise – focus on emotional reaction and dialogue among the panelists.
was used as an alternative to an importance rating task (Berger et al., 1993), since it is difficult for consumers to assign relative importance to product attributes directly (Stelick et al., 2009). Professor Kano believed that the relationship between need fulfillment and consumers' satisfaction and dissatisfaction is not necessarily linear. The Kano satisfaction survey uses a dual-questioning format with a predefined classification rule (aka the Kano classification rule) to unlock the relationship. The Kano survey format, its classification rule and more details can be found in Chapter 7.1. There are three classes that are very important for product designers/developers:
(1) Must-have (that which defines the product and without which dissatisfaction occurs)
(2) Driver/optimizer (those factors that follow the "more is better" philosophy)
(3) Delighter (the "wow!" component that can excite when present but can go unarticulated by people, since its absence is only realized once it has been identified, e.g. the unexpressed want).
To identify which product benefits/features/attributes (we will use attribute from now on) fall into which types, the researcher can find the answer at different stages in the QMA process (Table 6.3.2). Must-have attributes are usually found during value diagram building or Love-it! Hate-it! voting sessions, by focusing on the reasons why the consumers dislike the least liked product; the reasons can be interpreted in depth by understanding the value diagram. Driver/optimizer attributes are generally used for guiding sustainable innovation of a product category, and these attributes can be found during the mapping/napping exercise. The mapping/napping exercise was designed to let consumers articulate the driver attributes by arranging the studied products along important dimensions. The driver/optimizer attributes are usually contained as parts of the axis descriptions.
Delighter attributes can be found through value diagramming, Love-it! Hate-it! voting and the mapping/napping exercise. However, delighter attributes are hidden in the behavioral and emotional aspects of the results (this is the point where those with high levels of expertise in group dynamics, product design and observation separate themselves from the rank-and-file "group moderator"). The most debated and discussed aspects during both exercises are most likely to be delighter attributes. Using the information from the value diagram in conjunction with the observations from the Love-it! Hate-it! voting and the mapping/napping exercise, researchers will gain a deeper understanding of why certain attributes are more important to consumers than others, how they relate to each other and where the trade-offs intersect for different segments of individuals. The qualitative Kano benefits or attributes discovered through QMA are a good starting point for the quantitative Kano survey that will be explained in detail in Chapter 7.1.
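For readers who want a preview of the quantitative step, the sketch below encodes the widely used generic Kano evaluation-table logic for the dual-questioning format (one question asked with the attribute present, one with it absent). This is the textbook rule as commonly published, not necessarily the exact classification rule presented in Chapter 7.1.

```python
# A minimal sketch of the classic Kano dual-question classification.
# This is the generic published evaluation table, not necessarily the
# exact rule given in Chapter 7.1 of this book.
ANSWERS = ["like", "must-be", "neutral", "live with", "dislike"]

# Rows: answer when the attribute IS present (functional question).
# Columns: answer when the attribute is ABSENT (dysfunctional question).
# A = Attractive (delighter), O = One-dimensional (driver), M = Must-be,
# I = Indifferent, R = Reverse, Q = Questionable.
KANO_TABLE = [
    # absent:  like  must-be  neutral  live-with  dislike
    ["Q", "A", "A", "A", "O"],  # present: like
    ["R", "I", "I", "I", "M"],  # present: must-be
    ["R", "I", "I", "I", "M"],  # present: neutral
    ["R", "I", "I", "I", "M"],  # present: live with
    ["R", "R", "R", "R", "Q"],  # present: dislike
]

def kano_class(functional_answer, dysfunctional_answer):
    """Classify one attribute for one respondent from the paired answers."""
    i = ANSWERS.index(functional_answer)
    j = ANSWERS.index(dysfunctional_answer)
    return KANO_TABLE[i][j]

print(kano_class("like", "dislike"))     # 'O' - driver/one-dimensional
print(kano_class("like", "neutral"))     # 'A' - delighter/attractive
print(kano_class("neutral", "dislike"))  # 'M' - must-have
```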
6.3.4 Qualitative Multivariate Analysis in practice: Deeper understanding of cottage cheese consumption
At this point, the readers should have some idea about QMA and its process. In this section, we will discuss in more detail the application of QMA in understanding US consumers' consumption of cottage cheese. In 2008, one of the authors (KL) received an email from Professor Mary Ann Drake about a new project concerning cottage cheese. Cottage cheese is a profitable processed dairy product because of its short production time. However, the market for cottage cheese has been shrinking, due to other dairy products, especially yogurt. In the beginning, Professor Drake had indicated that she wanted to conduct a focus group interview followed by a consumer taste test study; however, the author (KL) suggested trying a QMA approach instead. Since a primary objective of the research was to learn more about cottage cheese consumption behavior, the QMA approach could provide more insight into the behavior. At the beginning of our research, we (KL, Professor Drake and her graduate students) brainstormed and discussed what was known/unknown about the consumption of cottage cheese. We concluded that consumers eat cottage cheese mainly for breakfast or as a snack, eating it as it is or with jam. We thought that the sensory characteristics of plain cottage cheese should be the most important things for consumers and that variety could be manipulated by providing different flavors of jams. We estimated that cottage cheese consumers would consume cottage cheese at most about twice a day (for breakfast and a snack). Hence, four regular cups were deemed adequate for the HUT portion of the QMA. Detailed information on this research can be found in Drake et al. (2009). These assumptions turned out to be quite far off from the actual behavior observed through QMA, as you will see later. There are a few important issues about cottage cheese: (1) cottage cheese flavor changes as the cheese ages and (2) it tastes best when it is eaten fresh. Therefore, the research did not allow packing a big box of cottage cheese
Figure 6.3.11 Cottage cheese value diagram adapted from Drake et al. (2009). The diagram links values such as quality, convenience and health to considerations including availability, versatile usage, price, nutrition and ingredients, packaging and usage occasions (breakfast, snack, dessert, side item, salad dressing, substitute), and down to sensory attributes such as buttery, creamy, sour, whey flavor, curd size, dry, grainy and color.
with 32 cups of cottage cheese inside and expecting that the cheese samples would not change during the HUT period. So the team decided to run the study locally, where consumers could come and pick the samples up at our facility every two days. It is worth noting that, with perishable products, it is very critical to consider this aspect, as it will influence the way the HUT is conducted and the screening of the participants. So the samples were made fresh on that day and we gave the consumers plenty of samples (much more than the four cups that were estimated). With eight samples in the sample set and two days for evaluating each sample, this cottage cheese study's HUT phase should have lasted 16 days. It actually lasted 24 days, as it covered weekends. Every time consumers returned to pick up a new sample, we were amazed that they had actually used up most of the cottage cheese samples that they liked. This was unexpected, as we had provided them with amounts in excess of our calculation. We figured out the reasons during the discussion: their consumption habits were not what we anticipated. The value diagram in Figure 6.3.11 was an important outcome of the cottage cheese study. The hypothesis that cottage cheese was consumed mainly for breakfast or a snack, which suggests that it was consumed with minimal alteration (e.g. adding jam or fruits), was disproved, as consumers consumed cottage cheese as a dessert, side dish and salad dressing in addition to breakfast and snack. The QMA study revealed that convenience and health benefits are the major drivers of cottage cheese consumption (Figure 6.3.11).
Figure 6.3.12 Preference map and locations of cottage cheese samples generated during a QMA session. The horizontal axis ran from inexpensive to expensive; the vertical axis ran from "wouldn't eat: disgusting b/c bland, off-flavor, poor texture, grainy, pasty and too yellow" to "would eat: delicious b/c flavorful, buttery, creamy, good texture and good appearance (white)". Price and sensory properties were the two most important factors for the consumers to purchase and consume cottage cheese.
However, many of the convenience aspects were hurdles that may cause cottage cheese consumers to switch to other dairy products, as seen in the value diagram (Figure 6.3.11). There were two aspects of convenience: availability and versatile usage. Versatile usage was the main positive reason for cottage cheese consumers to keep consuming it, but low availability was the hurdle. There were few stores carrying cottage cheese and, if they did, there were not many choices to choose from compared to other dairy products. In addition to usage, the impacts of the sensory characteristics (e.g. flavor, taste and texture) were situational, because cottage cheese's versatility of use was unexpected. For example, if consumers eat cottage cheese as it is, then a creamy, small-curd and not grainy texture is desired (aka good quality); however, if it is used as salad dressing, no flavor may be more important, as it will get processed (e.g. blended or mixed) with other ingredients, or a bigger curd size may be desirable, as it contributes to the final appearance of the dressing. Figure 6.3.12 shows an important understanding of: (1) which values (e.g. health, convenience, etc.), (2) which product experiences (positive, negative and neutral) and (3) which connections between the values and the experiences matter. These are very important and unique insights that can be discovered effectively by QMA but not through classic methods such as focus groups (in which discussion guides begin with questions you seek to confirm) or structured survey questionnaires (you get answers to questions you believe will reveal the important factors of the product design). In this study, the team decided to modify the Love-it! Hate-it! voting exercise a little by asking the consumers to rank all the samples according to their collective liking (Figure 6.3.13). The research team also conducted a central location test (CLT) for the same samples, and the similarity in the rank order and the
Figure 6.3.13 Rank ordering of the cottage cheese samples by liking in the CLT and in the QMA session. Without major segmentation, QMA provided almost the same results as those found in the subsequent quantitative study among cottage cheese consumers in North Carolina.
grouping of the products was striking (Figure 6.3.13). Further analyses of the CLT data revealed that the liking segmentation was not pronounced and that the ranking of liking can be captured with the small sample size used in a qualitative study such as the cottage cheese QMA (Drake et al., 2009). This shows an application of QMA as an effective screening tool. The mapping/napping exercise was conducted, and the qualitative map shows that there were two important factors driving cottage cheese consumption: (1) price (inexpensive vs. expensive) and (2) edibility (wouldn't eat vs. would eat) (Figure 6.3.12). There were few products that the consumers agreed to be both inexpensive and delicious (S1 and S2); meanwhile, expensive products seemed to be polarizing (S6, S20 and S24 each appear in two locations on the map). Qualitative Multivariate Analysis allowed the consumers to agree to disagree about the locations of samples. If there is disagreement about the final location(s) of a product, then it is recommended to pay attention to the reasons that the consumers use to persuade the others. This discussion reveals discriminating factors for consumer segmentation. In this case, it was not about price but about individual personal liking of sensory characteristics: texture, appearance and flavor.
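The agreement between the QMA ranking and the CLT ranking can also be quantified; a Spearman rank correlation between the two orderings is one simple option. The orderings below are placeholders illustrating the calculation, not the study's actual data.

```python
# A minimal sketch of quantifying QMA-vs-CLT rank agreement with Spearman's rho.
# The two orderings below are hypothetical placeholders.

def spearman_rho(order_a, order_b):
    """Spearman's rho for two rankings given as ordered lists of the same items (no ties)."""
    n = len(order_a)
    pos_b = {item: i for i, item in enumerate(order_b)}
    d_squared = sum((i - pos_b[item]) ** 2 for i, item in enumerate(order_a))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

qma_order = ["Trt1", "Trt2", "Trt22", "Trt24", "Trt18", "Trt6", "Trt20", "Trt11"]
clt_order = ["Trt1", "Trt22", "Trt2", "Trt24", "Trt18", "Trt20", "Trt6", "Trt11"]

print(round(spearman_rho(qma_order, clt_order), 3))  # 0.952 - close to 1 when the orders agree
```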
6.3.5 Consumer perceived values
The value diagram is an important outcome of QMA; therefore, the authors would like to introduce the readers to the meaning and current understanding of consumers' values. The term value refers to a lasting, organized set of preferential standards (aka beliefs) that privileges one mode of conduct or end-state of existence over another (Chryssohoidis and Krystallis, 2005). Values help a person to live and adapt in a society by helping that person to know and understand (aka internalize) how that society works (Grunert and Askegaard, 1997). There are five characteristics of values (Figure 6.3.14).
Figure 6.3.14 Five characteristics of a value: values influence consumers' attitudes and behavior; a person has a limited number of values; most values are universal across cultures; a value is stable over time; and consumers access their values unconsciously.
Figure 6.3.15 The nine values of the list of values (LOV) typology theory (Feather, 1984): security, sense of belonging and being well respected (external/interpersonal values formed through interaction between a person and the person's society); fun and enjoyment (internal and apersonal values); and self-respect, sense of accomplishment, self-fulfillment and excitement (internal and personal values). These basic values are most likely to be the reasons behind the values uncovered by QMA; the figure links them to manifested values from the cottage cheese study (e.g. health, convenience, quality, price, edible, good for family), with a dotted line demonstrating an example of the linkages between security and a few manifested values (Drake et al., 2009).
Values influence consumers’ attitudes and behavior by serving as preferential standards in consumers’ minds; hence, values are more closely related to behavior and attitudes than demographic measures (e.g. gender, age, etc.). Consumers tend to have limited numbers of values and most values are universal across cultures. Values are stable over time and this characteristic differentiates values from attitudes that are not temporally stable. After values
have been internalized, consumers access the values unconsciously (or behave automatically), but the consumers can articulate their values as the reasons behind their actions and attitudes (Grunert and Askegaard, 1997). Figure 6.3.15 shows nine types of values based on the list of values (LOV) typology theory (Feather, 1984). The first three LOV values (security, sense of belonging and being well respected) are relevant to the interaction between a person and the person's society (when a person forms an external/interpersonal value). The next two values (fun and enjoyment) represent internal and apersonal values. The last four values (self-respect, sense of accomplishment, self-fulfillment and excitement) are internal and personal values. QMA allows consumers to articulate their values through the use of multiple products.
6.3.6 Summary and future of Qualitative Multivariate Analysis
Qualitative Multivariate Analysis is a method that allows consumers to voice their needs and wants without any interference from researchers' preconceptions; consequently, it allows researchers to listen, observe and discover real consumer insights. Value diagrams depicting the relationships between product characteristics, a consumer's experience and their values, a consumer's product perceptual map from the mapping/napping exercise and Kano classifications of product qualities are all valuable for front-end research. With advances in digital technologies, the HUT phase can be improved by applying new technologies such as cell phones, handheld camcorders, Internet cameras and blogging to replace the paper-and-pencil diary. Also, with high-speed Internet, it is possible to run the discussion session virtually through inexpensive video conferencing and interactive programs.
References
Berger, C., Blauth, R., Boger, D., et al. (1993) "Kano's Methods for Understanding Customer-Defined Quality". Journal of Center for Quality Management, 2 (4), 3–36.
Chryssohoidis, G.M. and Krystallis, A. (2005) "Organic Consumers' Personal Values Research: Testing and Validating the List of Values (LOV) Scale and Implementing a Value-based Segmentation Task". Food Quality and Preference, 16 (7), 585–599.
Clancy, K.J. and Krieg, P.C. (2000) Counterintuitive Marketing: Achieve Great Results Using Uncommon Sense. New York: The Free Press.
Dairou, V. and Sieffermann, J.-M. (2002) "A Comparison of 14 Jams Characterized by Conventional Profile and a Quick Original Method, the Flash Profile". Journal of Food Science, 67 (2), 826–834.
Drake, S.L., Lopetcharat, K. and Drake, M.A. (2009) "Comparison of Two Methods to Explore Consumer Preferences for Cottage Cheese". Journal of Dairy Science, 92, 5883–5897.
Feather, N.T. (1984) "Protestant Ethic, Conservatism and Values". Journal of Personality and Social Psychology, 46, 1132–1141.
Grunert, C.S. and Askegaard, S. (1997) "Seeing With the Mind's Eye: On the Use of Pictorial Stimuli in Values and Lifestyle Research". In L.R. Kahle and L. Chiagouris (eds), Values, Lifestyles and Psychographics. Mahwah, NJ: Lawrence Erlbaum Associates. pp. 161–181.
Kano, N., Seraku, N., Takahashi, F. and Tsuji, S. (1984) "Attractive Quality and Must-be Quality (in Japanese)". Journal of the Japanese Society for Quality Control, 14 (2), 39–48.
Lawless, H.T. and Glatter, S. (1991) "Consistency of Multidimensional Scaling Models Derived from Odor Sorting". Journal of Sensory Studies, 5, 217–230.
Meilgaard, M., Civille, G.V. and Carr, B.T. (1999) "Descriptive Analysis Techniques". In Sensory Evaluation Techniques (3rd edition). Boca Raton, FL: CRC Press, Inc. pp. 161–172.
Moskowitz, H. and Gofman, A. (2007) Selling Blue Elephants. Upper Saddle River, NJ: Wharton School Publishing.
Moskowitz, H.R., Beckley, J.H. and Resurreccion, A.V.A. (2006) Sensory and Consumer Research in Food Product Design and Development. Ames, IA: Blackwell Publishing Professional.
Nestrud, M.A. and Lawless, H.T. (2008) "Perceptual Mapping of Citrus Juices Using Projective Mapping and Profiling Data From Culinary Professionals and Consumers". Food Quality and Preference, 19 (4), 431–438.
Pagès, J. (2003) "Recueil direct de distances sensorielles: Application à l'évaluation de dix vins blancs du Val-de-Loire". Sciences des Aliments, 23, 679–688.
Pagès, J. (2005) "Collection and Analysis of Perceived Product Interdistances Using Multiple Factor Analysis: Application to the Study of Ten White Wines from the Loire Valley". Food Quality and Preference, 16 (7), 642–649.
Perrin, L., Symoneaux, R., Maître, I., Asselin, C., Jourjon, F. and Pagès, J. (2008) "Napping® Procedure: Case of Ten Wines from Loire Valley". Food Quality and Preference, 19 (1), 1–11.
Risvik, E., McEwan, J.A., Colwill, J.S., Rogers, R. and Lyon, D.H. (1994) "Projective Mapping: A Tool for Sensory Analysis and Consumer Research". Food Quality and Preference, 5 (4), 263–269.
Stelick, A., Paredes, D., Moskowitz, H. and Beckley, J. (2009) "Kano Satisfaction Model in Cosmetics". Presented at the Congress Cosmetic and Sensory: From Neuroscience to Marketing, 24–26 June 2009, Tours, France.
Stone, H., Sidel, J., Oliver, S., Woosley, A. and Singleton, R.C. (1974) "Sensory Evaluation By Quantitative Descriptive Analysis". Food Technology, 28, 24–34.
Wedel, M., ter Hofstede, F. and Steenkamp, J.-B. E.M. (1998) "Mixture Model Analysis of Complex Samples". Journal of Classification, 15, 225–244.
Zaltman, G. (2003) How Customers Think: Essential Insights into the Mind of the Market. Boston, MA: Harvard Business School Press.
6.4 The Gameboard "Model Building"
Cornelia Ramsey
Key learnings
✓ In-context interviewing and model building to assist consumers in describing future desirable products
✓ Analysis and ensuring reliability of model outcomes
✓ Building on theory of mental models (or cognitive maps)
6.4.1 The problem – how to talk to consumers about new products that do not exist
Developing new products that meet consumer needs and expectations involves engaging consumers in product research that explores these needs and expectations. Posing the question, "what do consumers want in new products?" captures the problem for product developers. We simply do not know what consumers want in new products. However, we can explore what they like and do not like in existing products by simply asking them. But how do we, as product developers, take consumers with us into the future to talk about products and product attributes that do not yet exist? Through the technique of in-context interviewing (Beckley and Ramsey, 2009), we have developed conversational approaches to framing the dialogue with consumers that place us in the situation with consumers as they use current products. The consumers are able to describe their experiences to us in the context of product use. We found this discourse valuable and insightful for product developers because consumers could refer to real-life experiences and concrete examples of product uses, benefits and dislikes. However, the challenge of how to talk to consumers about new products remains – there is no "real" frame of reference for consumers or product developers. To facilitate this conversation about new products, we developed and tested new tools (based on empirical psychological and behavioral theory
and practice) to use during interviews in order to assist consumers in describing the unknown, future, desirable products. All product developers know that consumers have different likes and dislikes based on personal preferences, individual experiences with products, culture and traditions, availability of products and personal characteristics (e.g. trying new things vs. sticking with what's known). For product developers, the key to designing successful new products lies in the following:
● Knowing the target audience(s) – groups, people
● Precise screening strategies – demographic characteristics of interest
● Emphasizing development of "consumer-centered" products – tailoring products and attributes to meet consumer needs
● Engaging consumers in product conceptualization early in the innovation process using new methodology beyond focus groups
● Creative "thinking out of the box"
● Being open to whatever the consumer tells you!
In this section, we will introduce a new methodology, Gameboard "Model Building", that effectively solicits consumer participation early in new product development processes.
6.4.2 A new method: Gameboard strategy "Model Building"

This methodology is designed to explore how to actively engage participants beyond the interviewing conversation. Experienced qualitative researchers explore new techniques in working with consumers to design consumer-centered products. The model-building method demonstrates how consumers explore and guide the development of new, innovative products that would satisfy consumers on many levels, including the sensory experience of the products. This new qualitative methodology was developed based upon an understanding of the cognitive processes involved in learning and decision making, and the theories of schema, mental models, creativity and the creative process.
6.4.3 Construction: Creative process model

There are two techniques developed using this methodology: (1) a narrative technique and (2) a graphic technique. First, the narrative technique walks the participant through the process using words, cards and a "gameboard", requiring participants to fill in the blanks with their preferences (Figure 6.4.1). This narrative technique allows participants to talk through, with the interviewers, their preferred product attributes, talk about how these attributes work together for the total product experience and thus combine the attributes into a complete product that they describe in narrative form.
For each numbered slot, select from the corresponding list of choices.

A (1) ____ that you (2) ____ in your mouth. It is shaped like a (3) ____ and tastes like (4) ____. It lasts for (5) ____ but the flavor lasts (6) ____ after the product is gone. It is packaged in a (7) ____ shaped like a (8) ____. The main benefit is (9) ____.

Choices for each slot
Slot 1. Gum, mint, paste, gel, liquid, mist
Slot 2. Chew, spray, savor, crunch, roll, dissolve
Slot 3. Oval, circle, rectangle, bead, capsule, strip, ball, toothpick
Slot 4. Mint, fruit, sweet, chocolate, spice, coffee, tea, cream
Slot 5. 30 seconds, 1 minute, 2 minutes, 3 minutes
Slot 6. 30 seconds, 1 minute, 2 minutes, 3 minutes, 5 minutes
Slot 7. Tin, 6-pack, 12-pack, paper carton, canister
Slot 8. Pack of gum, book of matches, jar with lid
Slot 9. Breath freshening, wets mouth, whitens teeth, cleans teeth, wakes you up, relaxes you
Figure 6.4.1 An example of a “narrative technique” in Gameboard “Model Building” method. The writing in the box is called “schema”, which acts as a skeleton of the product concept of interest.
[Game piece images: a pump that "sprays" and a pump that "mists"]
Figure 6.4.2 Examples of "game pieces" used in the graphic technique. In this case two spray patterns were represented with two images in order to make clear the difference between a "spray" and a "mist" from a pump.
This technique works well with participants who are more verbal by nature and express themselves well with words. For participants who are more visual and comfortable expressing themselves graphically with pictures or physical items, the second technique works well. The graphic technique uses "game pieces" that visually represent the meaning of the attributes or benefits of interest and requires the participants to place the game pieces together on a gameboard to represent their product preferences (Figure 6.4.2).
1. Determining the number of categories of product components the research team wishes to explore.

2. Identifying and listing the choices within each category.

3. Assigning a different color to each category.

4. Printing each choice on a 5" × 8" card in the corresponding color assigned in step 3.

5. Asking each consumer to choose as many choices (represented by the cards) within each category (represented by different colors) as they want to put into the corresponding category slot.

6. Confirming the final construct with the consumer.
Figure 6.4.3 Six simple steps to conduct the Gameboard “Model Building” method.
Although each technique accomplishes the same goals, the two offer the interviewer and participant their preferred way to accomplish those goals. The graphic technique may provide more emotional information as visual stimuli are used, and works especially well for kinesthetic learners (i.e. people who learn through movement and experience – often those who choose hobbies or careers that allow them to work with their hands, such as carpenters or knitters). However, preparing many options for each attribute is recommended as not everyone will interpret an image in the same way, especially for higher-level benefits (e.g. it is "cool" to use, romantic, etc.). From our experience, we found that engaging the participants in actions with the cards and game pieces through this novel process not only places the participants at ease, but is also fun and allows the participant to "drive the bus"; thus the outcome is a consumer-centered product.

There are six simple steps to successfully conducting the Gameboard "Model Building" (Figure 6.4.3). First, the product development research team identifies and lists all product attributes for a new product of interest (referred to as choices in Figures 6.4.1 and 6.4.2) and then prints or pastes each individual attribute/choice/image on a 5" × 8" card according to its category color. A different color is assigned to each category to prevent confusion during the model building. These cards constitute a "deck" that will be distributed to each consumer participating in the interviews. For the breath mint schema shown in Figure 6.4.1, for example, all cards representing potential shapes (e.g. square, oval, round) are grouped together into a "shape" category and printed in the same color. The color-coding of cards is important because the corresponding slot on the gameboard is also color-coded. That way, the consumer knows to put the green-colored cards in the green-colored slot on the gameboard.
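To make the deck preparation and selection steps above concrete, here is a minimal sketch in Python (an illustration only, not part of the published method; the categories, colors and choices are invented stand-ins for whatever the research team defines):

```python
# Illustrative sketch: hypothetical categories, color codes and choices (steps 1-4).
categories = {
    "form":    {"color": "green",  "choices": ["gum", "mint", "gel", "mist"]},
    "flavor":  {"color": "blue",   "choices": ["mint", "fruit", "cream", "chocolate"]},
    "benefit": {"color": "yellow", "choices": ["breath freshening", "whitens teeth"]},
}

# Step 4: one card per choice, each carrying its category's color code.
deck = [
    {"category": name, "color": spec["color"], "choice": choice}
    for name, spec in categories.items()
    for choice in spec["choices"]
]

# Step 5: a participant may place as many cards as they like in each color-coded slot.
def record_selection(slot_choices):
    """slot_choices maps a category name to the list of cards the consumer
    placed in that slot; the resulting model is simply that mapping."""
    return {category: list(cards) for category, cards in slot_choices.items()}

model = record_selection({
    "form": ["mist"],
    "flavor": ["fruit", "cream"],        # more than one card per slot is allowed
    "benefit": ["breath freshening"],
})
print(model)  # Step 6: read the construct back to the consumer to confirm it
```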
The narrative technique uses actual words to describe the attributes/benefits of interest; the graphic technique, however, uses images or physical items with descriptions affixed to them rather than individual words or descriptions alone. The process for the graphic exercise is the same as for the narrative exercise; however, it uses more tangible game pieces that demonstrate the product attributes. Again, the product attributes are pre-determined by the research team and are represented by game pieces of different shapes, sizes and containers that correspond to the meaning of the attribute. For example (see Figure 6.4.1), potential product shapes will be represented by actual circles, squares, triangles, ovals, etc., also of different sizes. All game pieces will also be labeled with words (i.e. "circle", "square") for clarity to ensure participants have an understanding of the meaning of the pieces. For action attributes, such as the "spray" or "mist" examples, prototype game pieces that really spray or mist are used. These game pieces are called "action prototypes". The consumers must try the action prototypes to make sure "spray" represents what they prefer versus a "mist" or "pump".

As with the narrative exercise, the consumers will be asked to select the game pieces that represent their preferences and place the pieces on the gameboard slots. These pieces must be labeled as well. Using these materials, participants will be asked to describe the product that represents their experience or the experience they would like to have with a new product. Additional variable "pieces" should be available to participants to represent any attributes the participants identified that are not adequately represented by existing game pieces. For both techniques, participants are told they are not required to use all of the attribute pieces and they can add other pieces. In addition, if they like more than one game piece for a particular slot on the gameboard, they can use as many as they like. For example, the participant may like more than one flavor for a product or could define more than one purpose for a product.

Through this gameboard exercise, participants can identify product attributes, group attributes together in different combinations, prioritize the "must haves" from the "nice to haves" and talk with the interviewer about a consumer-centered product that meets their desires. The interview is essential to guide the participant through this novel approach but also to ensure that the research team really understands what the consumer is saying, what s/he means about each attribute and why s/he defines his or her consumer-centered product the way s/he does. Figure 6.4.4 and Figure 6.4.5 are sample outputs from the narrative and the graphic techniques, respectively.
A (1) mist that you (2) spray in your mouth. It is shaped like a (3) capsule and tastes like (4) fruit and cream. It lasts for (5) 3 minutes but the flavor lasts (6) 5 minutes after the product is gone. It is packaged in a (7) paper carton and canister shaped like a (8) jar with lid. The main benefit is (9) breath freshening.
Figure 6.4.4 An example of a completed gameboard model by a consumer using the narrative technique.
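As a small illustration of how a completed narrative model such as the one in Figure 6.4.4 can be thought of (a hypothetical sketch, not part of the published method), the schema is a template whose numbered slots are filled from the consumer's card selections, with more than one card allowed per slot:

```python
# Hypothetical sketch: the schema sentence as a template with numbered slots.
TEMPLATE = ("A [1] that you [2] in your mouth. It is shaped like a [3] and "
            "tastes like [4]. It lasts for [5] but the flavor lasts [6] after "
            "the product is gone. It is packaged in a [7] shaped like a [8]. "
            "The main benefit is [9].")

def fill_schema(selections):
    """selections maps slot number -> a choice or list of choices; multiple
    cards placed in one slot are joined with 'and'."""
    text = TEMPLATE
    for slot in range(1, 10):
        value = selections.get(slot, "____")
        if isinstance(value, list):
            value = " and ".join(value)
        text = text.replace(f"[{slot}]", value)
    return text

print(fill_schema({1: "mist", 2: "spray", 3: "capsule", 4: ["fruit", "cream"],
                   5: "3 minutes", 6: "5 minutes", 7: "paper carton and canister",
                   8: "jar with lid", 9: "breath freshening"}))
```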
A [Pump] that [Sprays] or [Mists]. It tastes like [Peppermint extract] or [Chocolate].
Figure 6.4.5 An example of a completed gameboard model by a consumer using the graphic technique.
6.4.4 Interview guide for model construction methodology

The interview component is critical to guide the process and for product developers to understand the outcomes. It is the same for both the narrative and the graphic techniques. If there is one card/piece (attribute) that is more important to the consumers than the others, it can be documented and explored in depth via the interview questions. Why is that attribute more important? In the same way, the "must have" attributes can be distinguished from the "nice to have" attributes via interview questions. These are important to know because the "must haves" can be deal breakers that either encourage consumers to use the product or prohibit the consumers from wanting the product. In the same way, the common attributes that are included among most (or all) of the consumers' models can be recorded in the field notes and compared to those that are seen in fewer consumer models. These are recorded to explore later. For example, which consumers come up with new attributes and what are those attributes? Do certain segments of the population come up with similar attributes? Why? These attribute selections can then be used in the development of a quantitative survey of consumers that is able to determine statistical attribute preferences among segments of the population.

Each interview should be directly related to the known attributes pre-determined by the research team and from the existing consumer products, as well as potential attributes that are not currently represented in products but are under exploration by the product development researchers. These attributes are, in turn, represented on the cards used in the construction of the narrative models or by the shapes or prototype containers used in the construction of the graphic models. There are three critical phases in the interviewing process: (1) Introduction, (2) Model building and (3) Summary.

Phase one of the interview introduces the product attribute descriptions/images/actual objects to the participants through a series of open-ended questions.
This is very important to ensure that all consumers understand the intended meaning of the attribute descriptions/images/objects in the same way and in the way that the researcher defined. Next, the facilitator, who acts as the interviewer/researcher, introduces the model-building concept to the participant. A gameboard mat will be spread out on a table with the concept clearly printed with the open slots (Figure 6.4.1) and the facilitator will read the concept to the participant. For example: "A _____ flavored oral product that _____ and _____. It is _____ and lasts _____." might be filled with the following attributes: "A peppermint flavored oral product that freshens breath and wakes you up. It is strong and lasts 5 minutes." The participant will be asked to read the gameboard concept out loud and explain it in his/her own words. The facilitator then reads each card out loud to the participant or places the shaped game pieces or containers at each color-coded slot and talks about what each one represents in order to demonstrate the process.

Phase two of the interview is when the participant completes the gameboard exercise. The participant is asked to describe in detail what s/he is thinking as s/he arranges the cards or pieces on the mat. Each participant will be given as much time as s/he needs to construct a model. Upon completion of the task, each participant will be asked to describe the overall product concept model. The facilitator will explain to each participant that her or his model will remain on the mat until the end of the interview, at which time participants are told that they will be allowed to review the model and change the model if desired.

Finally, in the third phase, the facilitator will ask the semi-structured summary questions to review the process with the participant, to confirm the selections of the participant and to give the participant the opportunity to think about the model that they constructed and make any changes they desire.
6.4.5 Ensuring reliability of the outcomes

To ensure reliability of this creative methodology, the author searched for an established framework to guide the process. The four-stage creative process model presented by Lubart in 1994 provides a step-by-step example for model construction. Accordingly, the four stages of the creative process are:
● Preparation
● Incubation
● Illumination and
● Verification.

To further develop this qualitative process for consumer-centered product development research, the author worked with Jacqueline Beckley, a senior
executive and expert consumer product researcher, to develop the physical gameboard and pieces for consumers to test.

Preparation is the initial acquisition of knowledge before a task is begun. The preparation stage was paralleled in this research by:
● The initial interview questions and information exchange with the participant about the attributes in the model
● The introduction of the model construction task and orientation to the game pieces to be used and
● Participants' initial attempts to build the model.
The incubation phase consists of successfully manipulating the task and discovering possible solutions. The incubation phase occurs when the participants arrange the pieces in different ways to determine the most accurate representation of the preferred product and combination of attributes. The illumination phase occurs when the solution becomes evident. Illumination is the final arrangement of all of the pieces selected by the participant. Verification occurs in the evaluation and refinement of the task. For example, the verification step can occur after model construction, when summary questions are asked and the participant is given the opportunity to make any changes to their model. It is important to emphasize that the participants are encouraged to "think out loud" during the model construction process so their thoughts are recorded. Researchers must pay attention to the participants' thinking process in order to record any additional attributes and different definitions of existing attributes that participants include in the models, as well as the definitions assigned to all attributes, because these records will provide supplemental data for future interpretation of the results.
6.4.6 Analysis of the outcomes from Gameboard "Model Building"

As with other qualitative techniques, sample size is determined by saturation, meaning that when researchers begin hearing (or seeing) similar themes from the participants and no new data is being discovered, the sample size has been reached. This usually happens between 17 and 20 participants per site. The consumer researcher, also referred to as a Consumer Explorer (CE) in this book, must also look across samples in different sites to see if themes are similar or different, to determine if enough sites have been included. This type of sample size determination is appropriate because it is similar to other qualitative methodologies in that the research focuses on concept development and the goal of the research (gameboard) is to develop consumer-centered product concepts that are grounded in the consumers' real life events and circumstances (Sandelowski, 1995).

It is very important for the product development team to work closely with the facilitator and CE in the preparation, duration and de-briefing of this method. First, in the preparation, the meaning of each attribute under exploration must be clear to the team, the process of allowing the consumer to "drive the bus" must be
clear to product developers, and the facilitator must be able to translate the intent that the product developers have for that attribute to the participant. There must be communication during the interviews (in the form of "check-ins" after each stage of the process) to establish consensus among the team (e.g. facilitator, CE and product developers viewing the interviews) about what the consumer/participant is saying, doing and expressing during the exercise and to identify any confusion or contradictions that the consumers may have, because these contradictions often provide key elements of the product under exploration. For example, a contradiction may be a consumer who wants a product flavor that lasts ten minutes but does not want a strong flavor in a product. Finally, the de-briefing after each interview is critical to review key findings from the interview, review the combinations of attributes that consumers arranged on the gameboard and discuss briefly how each interview contributes to the overall findings of previous interviews or if there are key differences. In a nutshell, total team communication and engagement is the key.

The objective of the analysis of this method is to have a clearly defined consumer-centered product with specific attributes as defined by the consumers. The similarities and differences of models regarding the attribute selection, placements of attributes, relationships among attributes and importance of attributes must be analyzed. Also, constructed models will be compared to each other for analysis. A "triangulation" of data sources (actual models on the gameboard, interview records and field notes) helps to construct and support the reliability of findings. The three data sources allow cross-checking data for accuracy and establish the reliability of findings. In addition, consumer preference segmentation can also be done.
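As a rough illustration of the saturation stopping rule described at the start of this subsection (a sketch only; the theme codes are invented and the three-interview window is an assumption, not a rule from the text), a CE could tally the new themes heard after each interview and stop recruiting once several consecutive interviews add nothing new:

```python
def reached_saturation(coded_interviews, window=3):
    """coded_interviews: list of sets of theme codes, one set per interview,
    in the order conducted. Returns True when the last `window` interviews
    introduced no theme that had not already been seen."""
    seen = set()
    new_counts = []
    for themes in coded_interviews:
        new_counts.append(len(themes - seen))  # themes not heard before this interview
        seen |= themes
    return len(new_counts) >= window and all(n == 0 for n in new_counts[-window:])

interviews = [{"fresh breath", "portability"},
              {"fresh breath", "long-lasting flavor"},
              {"portability"},
              {"fresh breath"},
              {"long-lasting flavor"}]
print(reached_saturation(interviews))  # True: no new themes in the last 3 interviews
```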
6.4.7 Analysis overview

The analysis of the models consists of: (1) a preliminary descriptive analysis and (2) a secondary central theme analysis of the core elements of the product models. This analysis is iterative and data-driven; in other words, each analysis uncovers themes that are explored further in the next analysis.
(1) Descriptive analysis involves describing the process consumers followed and the characteristics of the outcomes. Four common things to observe from the data sources (actual models on the gameboard, interview records and field notes) are listed below:
(a) Which gameboard pieces (attributes, words, descriptions or images) are included? Which slots were filled first? Look for similarities, differences and outliers
(b) Which new pieces are added?
(c) Which pieces are omitted?
(d) The organization of pieces between individual consumers' models – "must haves" versus "nice to haves"
Descriptive analysis answers what happened in the model construction.
(2) Central theme analysis describes why model construction happened the way it did and helps researchers answer the following:
(a) Why do participants select this particular set of attributes per category/slot?
(b) Why do they select the attributes in this order?
(c) Why do they combine these attributes in this certain combination?

Any theme analysis is a search for relationships among the "parts" and relationships of the parts to the whole (Spradley, 1980). The completed gameboard should be examined to:
(1) Determine which existing groups of attributes (e.g. those pre-determined constructs) are included in the models and which grouping is most important, if any
(2) Identify whether new pieces (attributes) are added (those pieces are recorded along with the definitions assigned to them by the participant) and
(3) Identify which attribute or attribute group is omitted, and why.

It is critical that detailed note taking be incorporated into the interview process because this allows researchers to reflect back to the participant and review the selections that were made, and provides quality assurance and rigor during the summary phase.
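To make the descriptive analysis concrete, the sketch below (hypothetical; the models, attributes and pre-determined constructs are invented) tallies how often each piece is included across participants' completed models and flags pieces that were added or never used:

```python
from collections import Counter

# Hypothetical completed models: one dict of slot -> selected pieces per participant.
models = [
    {"form": ["mist"], "flavor": ["fruit", "cream"], "benefit": ["breath freshening"]},
    {"form": ["gum"], "flavor": ["mint"], "benefit": ["breath freshening", "whitens teeth"]},
    {"form": ["mist"], "flavor": ["mint"], "benefit": ["wakes you up"]},
]
predefined = {"mist", "gum", "mint", "fruit", "chocolate",
              "breath freshening", "whitens teeth"}

inclusion = Counter(piece for model in models
                    for pieces in model.values() for piece in pieces)
added = {piece for piece in inclusion if piece not in predefined}  # consumer-invented pieces
omitted = predefined - set(inclusion)                              # predefined pieces never used

print(inclusion.most_common())  # which pieces appear most often across models
print("added:", added)          # e.g. {'cream', 'wakes you up'}
print("omitted:", omitted)      # e.g. {'chocolate'}
```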
6.4.8 Consumer-centered products and Gameboard "Model Building"

We found Gameboard "Model Building" to be both feasible and appropriate for consumer-centric product innovation because the conceptual models of new products were consistently constructed by consumers directly. Although the actual model construction process was open-ended, the methodology was introduced consistently across all interviews using the following procedures:
● Each attribute was introduced through sequential interview questions
● Game pieces were shaped like their meaning (a circle represented round) and actively demonstrated the attribute relevant to its meaning
● Pieces were clearly labeled
● Participants were encouraged to "think out loud" as they composed the models
● All possibilities of attributes were encouraged (there were no "wrong" answers)
● Models could be changed and
● Models were confidential.
From our experience, consumers found this Gameboard "Model Building" to be a "fun exercise" since there was no pressure to identify the "right" answers. This "fun exercise" increases consumers' comfort level and self-confidence in discussing their likes and dislikes in products via building a model. We found the consumers had "relative ease" in constructing models, which is important to observe because it provides more clues for further in-depth analysis. Moreover, consumers could invent even more complex features for the new products by
combining attributes in new ways and assigning multiple attributes to each "slot" in the models. For example, consumers designed a product that could "mist" a light flavor on occasion but could also "squirt" a stronger flavor depending on the consumers' preference at the time. The stepwise process and discussion described here allowed consumers to have ownership of their individual models. In other words, the process of building the models, adding new constructs or omitting constructs demonstrated an understanding of the definitions of the existing attributes, the purpose of the methodology and, again, ownership of the models. This method demonstrates the possibility of developing authentically consumer-centered products that meet consumer expectations from the onset of development.
6.4.9 Limitations

As with any method, there are limitations that must be considered:
● First, the research team, guided by the CE, must have comprehensive knowledge of the existing attributes under exploration, must have clear definitions of the new, innovative attributes being explored and must be able to transfer this knowledge effectively to participants so that participants understand the new, non-existing attributes and can use them consistently across models
● The CE must have a facilitator knowledgeable in this process to assist and guide but not influence the responses of the participant
● The facilitator must have a clear understanding and have pieces, cards and gameboards that accurately reflect attributes and make sense across participants
● The CE must educate other research team members to know what they are observing and to extract the most information out of this process
● This process takes interview time to orient the participants to the process.
With work up-front, these limitations can be managed. First, preparation is key: devoting time and energy to the up-front work (e.g. preparing the gameboard and pieces) is essential. As with any other method, pilot testing with consumers ensures that definitions are clear and the process is understandable.
6.4.10 Theoretical background of model construction methodology

6.4.10.1 Cognitive processes
Interviewing consumers about new product ideas creates opportunities for new ways of thinking for both the CE and the consumer and requires techniques for putting existing knowledge together in new ways to produce new concepts. All of us have existing knowledge that we have learned through day-to-day living, formal education and informal information acquisition such as observations. We have existing knowledge and ways of thinking based on our beliefs and
experiences. These pieces of existing knowledge give us mental models of how life is and what we know. The mental models are cognitive maps that represent real or hypothetical situations and give us starting points in how to learn new information (Goel et al., 2010; Johnson-Laird, 1980). This is important to understand because when we interview consumers, we know they are responding to our cues based upon their existing knowledge and mental models. But this also tells us that because they (consumers) have existing mental models of knowledge based on their product experiences, with guided conversation and tools via interviews they are able to make inferences regarding new information and think about new products and new product attributes that they have not used but that they may be interested in if those new products existed. So how do we help our consumers draw on their existing mental models of products to help them articulate the product attributes that they would want in new products? The answer is that we learn together (researcher and consumer) how to identify new attributes and how to put those attributes together to make a consumer-centered product. Through guided in-context interviews and tools we have developed to facilitate the conversation, we, as researchers, are able to take consumers with us into the future. Through in-context interviewing (Beckley and Ramsey, 2009), we frame the conversation so that the consumer is placed in the experience and in the situation (context) of the product. We do not just talk about the product and its attributes, but we describe the situation, environment, feelings, aromas, thoughts, people and surroundings that were happening to and with the consumer when s/he was using the product. We augment this interviewing with concrete tools (e.g. the gameboard and pieces) that consumers use to identify and define what attributes and combinations of attributes they would like to have in new products.
6.4.10.2 Schema theory

A model is considered an individual's mental picture or graphic that represents specific events or situations. Frederic Charles Bartlett (1886–1969) developed the theory of "schema" to explain how individuals mentally represent knowledge (Brewer, 2000). "A schema is a knowledge structure that captures regularities of objects and events" (McNamara, 1994). This structure is composed of "slots" in which events or categories of things can be specified (McNamara, 1994). These slots in schemata represent "… typical properties of an object or event and contain default values" (McNamara, 1994). For our research, these properties are attributes of products, as shown in Figure 6.4.1.
6.4.10.3 Theory of creativity

The theory of creativity is critical for researchers to understand when developing new products and exploring product attributes that do not yet exist. Creativity is the ability to produce work that is both "novel" and "appropriate" (Lubart, 1994). "Novel" means it is original and not predictable. "Appropriate" means it fulfills a need and/or is useful. This research proposed a "novel" and "appropriate" approach to understanding new consumer products by requiring
participants to construct a graphic representation of their favorite product attributes into new combinations of products while engaging in descriptive conversation with the interviewer.
6.4.11 Summary and future

The gameboard model-building technique enables consumers to identify and graphically present the key elements and combinations of elements (attributes) for new products. By identifying and defining the key elements via the model construction procedures of the methodology, the consumers are able to examine their current product experiences in new ways because they deconstruct the product by individual attribute and, via the interview, talk about each attribute – existing or new. It is plausible that this methodology would be applicable to other research. For example, organizations could conduct "internal inventories" of existing products by constructing models of the existing products and asking consumers to deconstruct and then reconstruct the product with their individual preferences and priorities for attributes and different combinations of attributes. The model construction methodology could also be used to identify which attributes are most important to consumers and which attributes they could manipulate in new products.

The gameboard model-building technique engages the participants/consumers and, once engaged, the participants' true preferences are brought out via the interview. This process adds quality and depth beyond conversation. It is fun, and when people have fun they are relaxed and more open to "out of the box" thinking, which is essential for new product development research with consumers. There are no wrong answers and consumers can design whatever they desire. As one consumer commented upon completing a gameboard exercise: "This is so cool! I'd for sure buy a product that was designed to do what I wanted it to do!"

This new method offers a concrete picture of consumer preferences in a cost-efficient and time-effective manner by using cards or prototypes early in the consumer research process, without having to devote research and development time and money to many different complex prototypes with pre-determined yet untested attributes that may not meet consumer needs or interests.
References
Beckley, J.H. and Ramsey, C.A. (2009) "Observing the Consumer in Context". In H.R. Moskowitz, I. Saguy and T. Straus (eds), An Integrated Approach to New Food Product Development. Boca Raton, FL: CRC Press. pp. 233–245.
Brewer, W.F. (2000) "Bartlett's Concept of the Schema and Its Impact in Theories of Knowledge in Contemporary Cognitive Psychology". In A. Saito (ed.), Bartlett, Culture and Cognition. Guildford: Psychology Press. pp. 69–89.
Goel, L., Johnson, N., Junglas, I. and Ives, B. (2010) "Situated Learning: Conceptualization and Measurement". Decision Sciences Journal of Innovative Education, 8 (1), 215–240.
Johnson-Laird, P.N. (1980) "Mental Models in Cognitive Science". Cognitive Science: A Multidisciplinary Journal, 4 (1), 71–115.
Lubart, T.I. (1994) "Creativity". In R.J. Sternberg (ed.), Thinking and Problem Solving. San Diego, CA: Academic Press. pp. 290–332.
McNamara, T.P. (1994) "Knowledge Representation". In R.J. Sternberg (ed.), Thinking and Problem Solving. San Diego, CA: Academic Press. pp. 81–117.
Sandelowski, M. (1995) "Sample Size in Qualitative Research". Research in Nursing and Health, 18, 179–183.
Spradley, J.P. (1980) Participant Observation. New York: Holt, Rinehart and Winston.
6.5 Quantitative Anthropology
Jennifer Hanson

Key learnings
✓ Ethnography, the main tool of anthropologists
✓ Quantitative anthropology, the emerging tool of Consumer Explorers
✓ How to utilize quantitative anthropology
6.5.1 Anthropology: A brief introduction

Anthropologists have long been focused on the why and how of cultural interactions. Why do certain cultures greet each other with a kiss on both cheeks? Why do other cultures have over 20 different words that translate to different versions of "snow"? Finding answers to these questions lets an anthropologist expand their knowledge and understanding of a specific culture, or sub-culture, and identify similarities and differences in cross-cultural analysis. Traditionally, this seeking has been done through in-field ethnographic research. Ethnography, or observational research, is a hallmark of anthropological research. Even the term "ethnography" can conjure up the image of an anthropologist immersing him/herself in an unknown region of the world, surrounded by perceived natives of a tribe unknown to anyone but themselves. Once back from his/her stay, the anthropologist will publish a book that shares the stories of these people. This romanticized version of anthropology has been a mainstay of the field because it has allowed for a fully developed story of a culture to be researched, lived, transcribed and written about by the anthropologist. This image has captured the imagination of the public and academia alike, both positively and negatively. Because anthropological ethnography allows for a discourse between the researcher and the informants, as well as extended periods of observation and participant observation, it is a natural method to be applied to the fields of marketing and market research (Mariampolski, 2006; Pelto and Pelto, 1978; Pruitt and Aldin, 2006; Spradley, 1979).
6.5.2 The rise of ethnography in marketing

Traditionally, marketers have relied on surveys to decide how to sell and design their products to meet consumers' needs. Surveys, whether a couple of questions or multitudes of them, are popular as they add a level of comfort in decision making for businesses for two reasons: the large-scale samples are designed to predict brand growth opportunities among specific target populations, and the quantitative nature of survey results allows businesses to feel as if they are mitigating risk when investing millions into new and existing products and services.

The challenge with relying only on surveys to make informed business decisions is buried in the underlying methods of survey design. Surveys are designed to be answered by the greatest number of people in the shortest amount of time. This focus helps to limit the cost of executing a survey, which, in turn, dictates the number and type of questions being asked. This limitation forces marketers to decide what types of information they can collect in a single survey, and how to collect it. The trade-off is typically between open-ended and closed-ended questions. Closed-ended questions are often chosen over open-ended ones, as they take less time for respondents to answer and provide the largest return on investment using surveys. The reliance on closed-ended questions requires consumer researchers and marketers to pre-determine the various responses that consumers might give if they were to have an in-person discussion. If those answers are not properly designed, the risk of not addressing the objectives of the survey is significant.

This risk is reduced somewhat when the researcher adds an incremental discussion with respondents. These discussions can either take place as in-person focus groups, one-on-one observations and discussions, or may leverage technology through online chats, bulletin boards or blogs. These research tools have gained popularity in helping to design surveys; however, they still have fallen short in providing the knowledge for researchers and marketers to really understand the wants, needs and desires of consumers. Why? These tools remain rooted in the interview format, are subject to interviewer filters, and are conducted with professional respondents – a combination that can produce incomplete research outcomes.

To overcome these challenges, corporations have begun to turn to ethnography. Ethnography, the main tool of anthropologists, has risen in popularity over the last ten years. At its anthropological roots, ethnography is a series of observations that take place in person and over an extended period of time, typically a year or longer. While ethnography is attractive to researchers and marketers as it furthers the understanding of consumer decision making in an unbiased manner, its usefulness is constrained by the length of time in the field, minimal focus on specific business objectives, and its roots as an art, not the science that defines most marketing research studies. Ethnography was designed to be one of the most unconstrained tools any researcher can leverage, and ethnographers are trained to be open to the possibilities that surround them, not focused on a specific end goal or result. An ethnographer must be able to see the whole picture as a puzzle, whether that be of a people in Latin America or a group of diabetic patients in
New York City. While these skills are crucial to being a good anthropological ethnographer, they are not easily repackaged into other disciplines. Many other fields do not have the luxury, or grounding, to be able to be open to the experience of ethnography. Rather, ethnographic methods are tools that are used as a means to an end. It is the process of ethnography, rather than the experience of ethnography, that is most beneficial to marketers (Mariampolski, 2006; Platz, 2009). As a result, ethnography has been modified to fit the time constraints of marketers and the business objectives they are trying to reach. Ethnography has become an interview that takes place over two to three hours in a home or store. Despite these dramatic changes to its roots of long-term observation without specific objectives, ethnography has become one of the fastest growing tools of consumer researchers and marketers today for three primary reasons: the ability to spend time with consumers, the ability to obtain a deeper understanding of motivations and behavior, and its utility in addressing questions that interview methods alone cannot.

One of the attractions of ethnography to marketers is the ability to spend time with the people that use their products and services. This is especially important for those team members who don't have regular consumer contact or even use the product, such as those in package development, R&D, etc. Getting the team to spend time with the end user or decision maker can be enlightening. It is always interesting to see the clash between what someone says and what they do. For example, a recent heart attack patient was proudly talking about his changes in diet and exercise, but there were pork rinds in the kitchen. Or the asthmatic who said, "I am the healthiest asthmatic you will ever meet," but used her rescue inhaler every minute she was on the treadmill (J. Loving, Brand Insight Team Leader, AstraZeneca, personal communication, 15 August 2009).

Another reason for its attraction is that ethnography can reveal the true whys and hows of product use. For example, a household products company had an internally developed idea for a cleaning product. It was a bleach-based bathroom cleaner for South East Asia, where mold is a very difficult problem. We spent a few days in Indonesia cleaning bathrooms with housewives or their maids. We found that they started at the top of the tile walls, washed their way down, getting the fixtures along the way, and ended with the floor. They then hosed off the bathroom from the top of the tiles down, in their bare feet, with the water swirling down a drain in the middle of the room. Clearly bleach couldn't be an option, but the observation led to a new R&D effort around non-bleach-based mold cleaners.

Finally, the utility of ethnography has extended beyond populating survey questions and has moved into other objectives, such as brainstorming and positioning development. At pharmaceutical companies, many times large innovation teams split up and everyone conducted a few visits with a broad range of people who experienced a certain medical condition. They talked with heavy sufferers, family members, health care professionals, fast-food employees, and a tightrope walker to understand balance. Everyone brought in their respective observations, which then fed into a workshop that ultimately developed themes that led to new product ideation. As with any business application of a research tool, there are pros and cons to ethnography.
On the positive side, you get to really see the differences between attitudes and beliefs vs. behavior and environment. You get to snoop and look
in closets, cupboards, go shopping with them, and see interpersonal interactions. These respondents are giving you a very personal look into their lives. The watch-outs include the usual caveats with any research tool: what you are observing is what you recruited for and may not be representative of your target population; often you don't want them all to be representative, so you can see the outer limits! You are there to ask open-ended questions, let them talk and express themselves, and share their lives (J. Loving, personal communication, 15 August 2009).

One of the most recent trends with the tool is the integration of the execution of ethnography into the corporate consumer researcher's job description. The do-it-yourself (DIY) approach to any research tool becomes popular when it becomes a common tool used by researchers, or when budgets are stretched so much that it becomes a necessary alternative to contracting with a professional anthropologist. Ethnographic theory is intimately tied to anthropological theory and must be studied in order to be well applied. Cultural anthropologists seriously train for ethnography, much like athletes train (Pelto and Pelto, 1978; Spradley, 1979; Wolcott, 2008). This trend has added another major challenge to the founding processes of the tool – the interview being designed to mirror surveys, leading ethnographic study to begin to replace surveys altogether.
6.5.3 The elephant in the room

It is natural for modifications to be made to research tools; without them we would not have innovation in the marketing research industry. However, conventionally, consumer researchers have been trained in tools that require people to tell you answers, rather than show you through their behavior. This is the foundation of surveys, but it has also become the basis of ethnographic study and any other discussion-based tool utilized today. And relying only on what people say, and not what they really do, provides a tiny fraction of reliable information about consumer decisions, adding much more risk to business decisions that involve millions of dollars of shareholders' money than anyone realizes.

Decisions about marketing investments are often made with incomplete information about consumer decisions. This is because nearly all consumer knowledge is based on interviews. Whether you talk to people through ethnography or a survey, researchers still ask questions and consumers provide an answer based on recalling their behaviors or supporting their beliefs. While many will rationalize their behavior or misrepresent it in order to make themselves look better, most often people don't report accurately because they either don't realize what they are doing or they simply forget. The reliance on memory or perception is the reason why we do not have a complete picture of consumer decisions. Up to 95 percent of our behavior is controlled by our unconscious minds, by habitual responses to environmental cues and cultural influencers built up over time. People aren't aware of what they do; their conscious minds rationalize, purposely misrepresent, simply forget or, most likely, are unaware of what they do or the real reasons behind their actions (Zaltman, 2003). In addition, 85 percent of customers who defect report being satisfied or highly satisfied with the
company they are leaving; however, satisfaction explains only eight percent of repurchase (Martin, 2008). The author has seen these results across a variety of product tests. Based on this data, "being satisfied means consumers have found a solution to meet their expectations, even if it requires a system of many inefficient steps to get to the end goal". As a result, what people say isn't nearly as important as what they do.

As consumer researchers, we often fall into the same pattern of asking the same types of questions in focus groups and surveys and of using databases and syndicated data presented in a standardized format as the primary input to understanding consumers' needs and desires. The only way to fill in our knowledge gaps is to stop asking questions and go back to observing how people actually live and make decisions. But observing is not enough. Corporations need insights about consumer decisions that are both measurable and projectable.

Basically, everybody wants a number, which is needed to provide comfort in business decisions for three reasons. First, you need to know quantifiably that this information is real. Second, they need to know the reasons behind these consumer decisions: the qualitative behaviors, expectations and motivations that drive the decisions, or the "whys" and "hows". Third, they need to understand how to apply this knowledge to their business – what levers to pull and when to pull them in order to create stronger bonds with consumers, drive competitive advantage and create better forecasts. Acquiring these three pieces of information transforms a regular consumer researcher into a great Consumer Explorer (CE). We have the opportunity to bridge the gap between consumers and corporations and make it a winning proposition for everyone – consumers' needs are met and their lives are made easier, and companies are successful, profitable and grow.

The key is starting with actual behavior (the "how") to fuel discussions with consumers about their underlying motivations (the "why"). This allows CEs to ensure they do not miss behavior and motivations that consumers cannot easily recall. This wealth of information can then create great insights. The challenge, then, is how to gain that insight. How do we find out what they actually do? Is it possible to combine the depth and breadth of insights in the same study? To be able to address both the quantitative and qualitative sides of the equation, with a hearty helping of behavioral understanding as well?
6.5.4 Quantitative Anthropology (QA)

Market research requires a type of checks and balances that is relatively unknown to other branches in the social sciences. Many marketing disciplines, while traditionally quantitatively focused, now value qualitative research methods as well. Just as surveys can become more qualitative by adding open-ended questions, qualitative approaches can become more quantitative by translating observations into data. As mentioned earlier, the process and discipline of ethnography allows researchers to put consumers at the center of their business decision making and provides them the "whys" and "hows" necessary for mitigating risk, whereas with surveys marketers seek the answers to "why" and "how" after they receive the results. Quantitative-based consumer research does not
allow the informant to become the central aspect of the research, whereas anthropological research almost demands it. For this reason, starting with an anthropological approach to understanding consumer behavior and applying the science that is needed for business decisions is the perfect marriage for overcoming the risk inherent in making decisions based on interview approaches alone.

Technology has advanced rapidly in the past ten years around social communities and the organization of massive amounts of unstructured data. In order to overcome the major challenge that long periods of observation in anthropological study pose for marketers and research teams, technology is applied to the process of ethnography to shorten the duration of data collection from a year (or longer) to weeks. Technology is also the solution to the measurement challenge with limited budgets, which constrict sample sizes. By using technology to collect data each time a consumer uses a product or service, companies can gather thousands of observations of product and service use in respondents' natural environments, allowing the application of structure to this unstructured data in a way that is meaningful to businesses. Technology is also being applied and accepted as a part of ethnographic field work, allowing the academic study and the practice of anthropology to converge rather than diverge in the process.
6.5.5 Quantitative anthropology in practice

The QA realsight® system was developed to allow Consumer Explorers to apply quantitative anthropology in a way that is aligned with the needs and desires of business. It is a patent-pending technology system with a research methodology that marries videos of everyday product usage and shopping behavior with social networking technology, academic frameworks and state-of-the-art analytics. Quite simply, it is observation plus quantification, providing the deepest consumer understanding and the proper validation that Consumer Explorers need to pinpoint where and how to grow businesses by placing more emphasis on what people actually do, not just what they say.

The QA realsight® system puts you inside the daily lives of hundreds of consumers for thousands of product and service usage and purchase situations (Figure 6.5.1). Using a proprietary blend of traditional and non-traditional analytical techniques, the observations are turned into data to allow quantification of the patterns hidden in the thousands of qualitative observations of common everyday behavior – patterns that connect intentions with actions.

But a system is only as good as the process and people that surround it. After developing the system in 2006, the author has found that it can't be completely automated; otherwise you will fall into the pattern of having consumers only report what they can recall. A team of Consumer Explorers must stay close to the respondents and develop meaningful relationships with them over the course of the study. Through careful review of participants' interactive online diaries together with observed behaviors, and using the tools of anthropology, sociology and psychology, they use ongoing dialogs with consumers to develop these relationships, uncover their motivations and bridge the gap between what consumers say and what they do. This important part of the process allows CEs
[Figure: a map with observed data (what people do) versus claimed data (what people say) on one axis and few versus multiple data points on the other. Ethnography sits at observed/few, focus group interviews at claimed/few, surveys at claimed/multiple, and quantitative anthropology (realsight®) at observed/multiple.]
Figure 6.5.1 Quantitative anthropology bridges the gap between ethnography and survey.
to use the QA realsight® system to bridge the information from ethnography and surveys, while maintaining descriptive and projectable results in a single study. Although the discussion in this chapter is about the QA realsight® system, other data design researchers such as Mark Hansen (2011) have developed processes that convert streams of hybrid data (i.e. voice, sound, text and observations) into clusters that tell a story or into groupings of recognizable patterns and visuals.

By the very nature of its data collection, the QA approach is very descriptive. The videos and language provide evidence and support for the findings. In addition, each behavioral pattern is summarized with who, what, when, where, why and how, as well as context and environmental factors. The factors that are most important for each pattern are also uncovered during the analysis, as each pattern will have different levels of importance for influences on behavior. The system uses anthropological and other academic theories and tools to add meaning that cannot be described by consumers during discussions. Given that the system collects hundreds, if not thousands, of behavior events across as few as 50 or as many as 500 people, you know the incidence and frequency of each pattern, allowing for further prioritization. If needed, one can project the results by combining the behavioral data with a broader survey that includes situations of use combined with needs, attitudes, product or service attributes, demographics – basically anything needed for each study – in the language consumers use. Surveys, on their own, can't accurately project true behavior and motivations, as they under-represent environmental factors and over-state personal image-enhancing factors (e.g. "I live a healthy lifestyle").
Consumer Explorers can have a better projection of the marketplace by combining the two pieces using data imputation tools: all the depth that qualitative research yields, plus all the assurance that comes with quantitative analysis.
6.5.6 Under the hood

Quantitative anthropology (QA) is very different from traditional methods of research. Instead of starting with quantitative research and applying qualitative insights, it starts with qualitative research and then applies quantitative insights. The methods used in the QA realsight® process to turn observations into numeric data are more time-consuming than pre-programmed surveys, but they deliver richer insights and metrics that are more closely aligned with true consumer behavior, thoughts and ideas. Figure 6.5.2 shows the steps and activities involved in conducting a successful quantitative anthropology study.

One of the primary reasons for the richer outcomes is that the CE does not just take a product-centric or a "whole self" view of the consumers. Most research tools today assume that people only interact with products or services in one way, or that a snapshot of a person's daily life at one moment in time is a reflection of every moment or every day. The reality is that people's lives consist of many different situations of product or service use. Each situation that a person experiences varies based on his/her moods, feelings, lifestyles, products – almost anything – and can change at a moment's notice. People are individuals, but situations can be as similar as they are different and occur throughout a person's day, not at one point in time. Consumer Explorers need to identify opportunities from a situational perspective, not only from a product or a people view.

Using cameras and real-time data collection tools such as smart phones, CEs can capture more situations of product use than a typical research study would provide. One of the primary benefits of using multimedia data is the ability to collect unarticulated and habitual behavior – our daily actions that are automatic and often not top of mind. As a Consumer Explorer, when you think about your own everyday routines, how many decisions are on autopilot? How many are influenced by your surrounding environment at the time you make decisions? These are the behaviors that are commonly forgotten during interview-based approaches and are best observed, rather than asked about in question format.

Extended observation also allows for analysis of individual usage situations, the most descriptive and accurate way to understand consumer decisions. Situations are multi-dimensional events that encompass the who, what, where, when, why and how of product usage and shopping decisions. Most importantly, situations capture environmental drivers of consumer choice (or context) as well as consumers' unarticulated needs and behaviors – vital information that traditional methods miss. Simply identifying consumer needs for a situation is not enough to support a major investment in any opportunity. Companies need the certainty that discovered opportunities are not an artifact of qualitative interpretation or sample composition. They need rigorous and objective quantification to inform investment decisions.
Recruiting: The recruitment process is like that of a survey; people are screened until the required sample is obtained.
Data collection: During data collection respondents are monitored to ensure relevant information is collected for analysis. The data collection time period depends on frequency of use or purchase; enough data points must be collected for quantitative analysis.
Creating metrics: Metrics, or variables, are created by tagging all the entries and identifying key words and phrases. Metrics are created based on the study objectives. Following this labor-intensive task, a dictionary is created that can be used in future studies for analytical work.
Analytics: Analytical tools can be employed to help make sense of the metrics. The processes for analysis are much like traditional segmentations and market structures, except that open-ended data takes much longer to clean and to uncover relationships in. A combination of text, video and multivariate analytics is used.
Reporting: The unique nature of the data can produce non-traditional outcomes. It is important to link the data to visual outcomes, as this helps explain complex concepts or ideas that are not top of mind.

Figure 6.5.2 Five steps and activities involved in conducting a successful study using the quantitative anthropological approach.
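As a hypothetical illustration of the tagging work in the "creating metrics" phase (the dictionary, tags and diary entries below are invented; this is not the realsight® dictionary), free-text entries can be converted into 0/1 metrics with a reusable keyword dictionary:

# Hypothetical sketch of the "creating metrics" phase: turn free-text diary
# entries into binary variables using a tagging dictionary.
import re
import pandas as pd

tag_dictionary = {
    "on_the_go": [r"\bcommut\w*", r"\bin the car\b", r"\bon the way\b"],
    "shared":    [r"\bwith my (kids|family|partner)\b", r"\bwe all\b"],
    "health":    [r"\bhealthy\b", r"\blow[- ]fat\b", r"\bcalories?\b"],
}

entries = pd.Series([
    "Ate a bar in the car while commuting, needed something low-fat",
    "Sunday breakfast with my kids, we all had pancakes",
])

def tag_entry(text):
    # Return a 0/1 metric for every tag whose patterns match the entry.
    text = text.lower()
    return {tag: int(any(re.search(p, text) for p in patterns))
            for tag, patterns in tag_dictionary.items()}

metrics = pd.DataFrame([tag_entry(t) for t in entries])
print(metrics)   # one row of 0/1 metrics per behavior event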
Quantifying situational usage event "data" cannot be completed using common statistical tools. A mixture of tagging, probabilistic neural network (PNN) analysis, linguistic and multivariate analysis must be used to provide structure and extract meaning from observations. The benefit of PNN is its proven ability to remove the "false positives" from results (Singh et al., 2009) (Figure 6.5.3). In addition, since we capture the before, during and after of a situation, we must look at the connection and underlying motivations and patterns that occur from beginning to end. This allows us to look at the data longitudinally within each situation event (micro level), but also across multiple situation events for the weeks of recording (macro level). It's a top-down and bottom-up approach to what we simply call "pattern analysis". Typically, we see anywhere from 15 to 30 patterns of situations emerge from the analysis. These patterns are presented visually, using evidence collected in video and language, and can reveal several ways to grow brands:

● Minimizing the number of steps to do something
● Combining the features of other items that are competing to be used in the same situation
● Making a behavior-based system easier, or
● Eliminating a behavior altogether.
[Bar charts comparing how often each model type – PNN, tree boost, logistic regression, CCNN, LDA, RBF, SVM, tree forest, linear regression and single tree – was the best and the worst performer.]
Figure 6.5.3 Probabilistic neural networks (PNN) were chosen in the realsight® system as they had been proven to outperform other types of models for the less structured data and many input variables that are common in ethnographic studies (created from data from Singh et al., 2009). However, different types of models work best for different types of data.
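For readers who want a feel for what a PNN does, the minimal sketch below implements the core idea (a Parzen-window density estimate per pattern class, with each new event assigned to the class of highest estimated density). The feature matrix, labels and smoothing value are invented for illustration; this is not the realsight® implementation.

import numpy as np

def pnn_predict(X_train, y_train, X_new, sigma=1.0):
    # Classify each row of X_new by comparing class-conditional kernel densities.
    classes = np.unique(y_train)
    preds = []
    for x in X_new:
        scores = []
        for c in classes:
            members = X_train[y_train == c]
            d2 = np.sum((members - x) ** 2, axis=1)        # squared distances
            scores.append(np.mean(np.exp(-d2 / (2 * sigma ** 2))))
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

# Tagged behavior events (rows) described by a few 0/1 metrics (columns).
X_train = np.array([[1, 0, 1], [1, 0, 0], [0, 1, 0], [0, 1, 1]])
y_train = np.array(["grab-and-go", "grab-and-go", "family-ritual", "family-ritual"])
print(pnn_predict(X_train, y_train, np.array([[1, 0, 1], [0, 1, 1]])))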
It sounds like it would be difficult to execute, but it is not. The phases of the process are similar to other survey and ethnographic studies, but are combined in a unique way to align the process with existing research practices as well as the needs of marketers. Each study is customized to the objectives of the business.
6.5.7 Applications of quantitative anthropology

A few examples of how companies have been leveraging the QA realsight® system include:

● Product optimization. Research and design in a large global consumer packaged goods company asked us to design the next generation of product testing leveraging the realsight® system. The company was dissatisfied with the lack of consumer discrimination from existing tools. The program design had two parallel paths: the QA realsight® approach and traditional sequential monadic product testing.
Overall, the QA realsight® system provided richer learnings than traditional testing. The extended use of products resulted in greater product discrimination with closed-ended key metrics (Figure 6.5.4).
[Chart: overall liking (1 = dislike very much ... 7 = like very much) for products A–E, realsight® Quant Anthro versus traditional HUT approach.]
Figure 6.5.4 Conducting quantitative anthropology prior to a home-use test (HUT) helps consumers differentiate products more than conducting a traditional HUT alone.
● Platform development: food and beverage. A major food brand asked us to determine the best strategic entry point into a $2 billion manufacturing-defined market and the requirements for success. The development of the lead product idea was already underway and the realsight® team was asked to help optimize the product for its launch in 12 months. Using the QA realsight® system, the team restructured the category of the new product idea into a consumer-defined marketplace, transforming a $2 billion fragmented category with a single product format and no clear point of entry into a $41 billion consumer-defined market. The consumer-defined market resulted in discovering a $2 billion market for the existing product idea and an emerging $3 billion market, previously ignored, for the long-term pipeline.
● Consumer-generated ideas: confections category. A well-known confections company asked us to use the realsight® system to identify insights and ideas to renovate their base gum brands in three countries through a consumer-based approach to understanding usage and attitudes. Through a three-phased approach, we delivered a unique approach to developing ideas directly from consumers:
  ° Observe how consumers actually interact with confection products using the realsight® system
  ° Identify insights and opportunities for brand renovation by using original analysis techniques
  ° Help to prioritize these ideas into immediate, short-term and long-term opportunities through the structured application of consumer and client success criteria.
As a result, we uncovered six ways to renovate two global brands. Using the product requirements at usage, we were able to define the path the brands should take to renovate their products using existing manufacturing capabilities. The mixture of capabilities across product lines delivered products that were in line with existing usage, and helped to redefine the portfolio and the roles of the brands in a way that minimized cannibalization within the portfolio and maximized incremental sales across new categories.
6.5.8 Future potential

As with any research method, there are boundaries to where quantitative anthropology can add value to marketers and product developers. It has been successfully used in the front end of innovation, renovating existing brands and developing new brands. There are three common applications of quantitative anthropology in new product development and marketing:

(1) Determine dig areas ripe for growth:
    (a) You are exploring new directions for brand growth.
    (b) You need ideas and boundaries for development.
(2) Align products with the most opportunistic consumer experience:
    (a) You have been handed a new product to launch.
    (b) You need to pinpoint where and how existing products succeed.
(3) Strengthen concept and product propositions for launch:
    (a) You have a concept that is only average, a concept-product gap in fit, or a forecast that doesn't quite hit internal hurdles.
[Figure lists example activities spanning marketing and product development – ethnographies and focus groups, attitude and usage, segmentation and market structure, path to purchase, CLT/HUT, brand renovation, portfolio optimization, channel optimization, sales, innovation, idea generation, early warning system, path to consumption, opportunity identification, product evaluation, product-consumer fit, platform development and dig areas.]
Figure 6.5.5 Examples of activities in which quantitative anthropology is used concurrently with other traditional techniques to ensure the outcomes through converging evidence.
Quantitative anthropology is commonly used to elevate existing methodologies across all functional areas by uncovering insights that lead to growth opportunities and consumer-generated ideas (Figure 6.5.5). As with any new method, the future is bright and the road there is exciting. New technologies are constantly emerging that will allow real-time data collection, analysis and reporting in ways that will further help connect marketers and product developers with the real needs and desires of their consumers. Data scientists such as Mark Hansen (2011) have developed processes that convert large streams of hybrid data (e.g. voice, sound, text and observations) into clusters that tell a story or groupings of recognizable patterns and visuals. The ease of data collection through digital technology (particularly mobile phones), coupled with the consumer interest in personal data tracking (blogs, community groups, personal counting devices), has transformed us into a society of data (Hansen, 2011; Wolf, 2010). Research tools such as quantitative anthropology enable Consumer Explorers not just to summarize the data but to tell a consumer-relevant story or series of stories based on behavior patterns that will be meaningful and actionable to those who care to listen.
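One simple, hypothetical way to form such clusters from a stream of free-text observations (not Hansen's process; the example texts are invented) is to vectorize the entries and group them with a standard clustering algorithm:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

observations = [
    "grabbed a snack bar on the way to the train",
    "family dinner, everyone helped set the table",
    "ate leftovers at the desk between meetings",
    "Sunday pancakes with the kids",
]

# Convert each observation to TF-IDF features, then group into two clusters.
vectors = TfidfVectorizer(stop_words="english").fit_transform(observations)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, text in zip(labels, observations):
    print(label, text)   # observations grouped into candidate behavior patterns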
References

Hansen, M. (2011) "The Intersection of Data and Design". New York Academy of Sciences e-Briefing, 27 June 2011.
Mariampolski, H. (2006) Ethnography for Marketers: A Guide to Consumer Immersion. Thousand Oaks, CA: Sage Publications, Inc.
Martin, N. (2008) Habit: The 95% of Behavior Marketers Ignore. Upper Saddle River, NJ: FT Press.
Pelto, P. and Pelto, G. (1978) Anthropological Research: The Structure of Inquiry. Cambridge, UK: Cambridge University Press.
Platz, K. (2009) Popping My Collar: Applying Anthropology to the Field of Design and Marketing. MA Thesis. University of Waterloo, Waterloo, Ontario.
Pruitt, J. and Adlin, T. (2006) The Persona Lifecycle: Keeping People in Mind Throughout Product Design. San Francisco, CA: Elsevier Inc.
Singh, Y., Kaur, A. and Malhotra, R. (2009) "Comparative Analysis of Regression and Machine Learning Methods for Predicting Fault Proneness Models". International Journal of Computer Applications in Technology, 35 (2–4), 183–193.
Spradley, J. (1979) The Ethnographic Interview. New York, NY: Holt, Rinehart and Winston.
Wolcott, H. (2008) Ethnography: A Way of Seeing. Lanham, MD: AltaMira Press.
Wolf, G. (2010) "The Data Driven Life". New York Times, 28 April 2010.
Zaltman, G. (2003) How Customers Think: Essential Insights into the Mind of the Market. Watertown, MA: Harvard Business School Publishing.
6.6 Emotion Research as Input for Product Design

Pieter Desmet and Hendrik Schifferstein
Key learnings
✓ Roles of emotion in new product development
✓ Measuring emotional responses
✓ Incorporating emotional learnings in product development
6.6.1 Putting emotion at the center: emotion-driven design

The first author recently purchased an electric water kettle. It is made of stainless steel and has a little window that enables me to observe the water while it heats to boiling temperature. When first using it, I unexpectedly experienced a little delight: as soon as the kettle was activated, the water inside turned bright blue. That is when I discovered that the manufacturer had incorporated an LED light in the kettle's interior. Although it has no apparent function, I was delighted by this tiny magical moment of plain water suddenly turning blue. Because of my emotional response, I now appreciate the kettle more than I would have without this little surprise, even sharing the emotion by demonstrating it to friends and dinner guests.
The kettle example, illustrating the impact of emotion during product usage, is actually arbitrary because any product will elicit emotions, whether or not the designer intends this or is even conscious of it (Desmet, 2008). The list of examples is infinite: someone may be irritated by the non-usability of a fancy music player, disappointed by the performance of a new computer, fascinated by a multi-functional espresso machine, inspired by an innovative electric car, etc. The fact that emotion is a key aspect of the user-product relationship is generally accepted and promoted in the (design) industry: with the power to entice customers to select one particular item from a row of similar products, emotions have a considerable influence on our purchase decisions (Pham, 1998). Moreover,
emotions are not only involved in our reasoning about what product to buy, but also have a significant effect on consumer satisfaction (Westbrook and Oliver, 1991), product attachment (Mugge et al., 2005) and general well-being (Desmet and Hassenzahl, 2011). Hence, the emotions of product consumers and users are too important to be ignored in design processes, and the ability to design products with a positive emotional impact is of great relevance to the discipline of design. The blue light hidden in a water kettle is an example of “emotion-driven design” because it was probably intentionally installed to elicit pleasant surprise. Emotion-driven design or “design for emotion” involves a design process that intends to evoke particular user emotions. Although user emotion is taken into consideration in any given design project to some extent, in emotion-driven design the user emotion takes a central role: the design goal typically includes a statement about the intended emotional user effect or “target emotion”. Design for emotion is driven by the aim to better understand the relationship between users and products. Because emotion-driven design requires a thorough understanding of the intended users, measuring the users’ emotional responses, either to existing products or to new product ideas and concepts, is likely to contribute to the success of the resulting designs. Emotion research can serve various functions depending on the product design stage, such as helping in formulating the design goal, inspiring the design team, testing initial ideas, or selecting concepts. The aim of this chapter is to provide ideas of how emotion research can be used as a relevant source of information in new product development projects. First, we provide a brief overview of the design process involved in new product development projects. This overview is used as a framework for sketching the requirements for emotion research in the various stages of the design process. Subsequently, we provide a brief overview of the variety of relevant research methods that are available. By means of example projects, we illustrate possibilities of using emotion research in various stages of design processes.
6.6.2 New product development and design

New product development is the term used for the complete process of conceptualizing and developing new products, materializing them and bringing them to the market. A general overview of new product development processes, and the role of designers in these processes, is visualized in Figure 6.6.1. In general terms, the aim of new product development is to facilitate the movement from an existing towards a preferred state (Eekels and Roozenburg, 1991). The point of departure is the articulation of a gap between the existing and a preferred state (i.e. one determines that the current state is not optimal; top part in Figure 6.6.1). Three main stages can be identified in the new product development process that is aimed at changing the existing into the preferred state (Van Kleef et al., 2004; middle part in Figure 6.6.1). The first step is to initiate the design project. The initiation can originate from all kinds of opportunities (or threats) that are encountered or identified, such as technological developments, emerging markets and customer feedback.
Figure 6.6.1 The role of design in new product development.
The second step is to develop a product (or service) and the third step is to implement the product by bringing it to market. Product implementation involves various activities, such as product planning, technical implementation and product launch. Designers, or design teams, play an important role in the second stage of new product development. Their task is to conceptualize product ideas and develop design specifications that optimize the function, value and form of products for the mutual benefit of both the user and the company. Design is essentially an integrating discipline, combining knowledge from, for example, usability, ergonomics, engineering, marketing, sensory science and aesthetics to create tangible design solutions that optimize the user-product relationship. Design processes typically start with an ill-defined problem and indefinite and incomplete criteria for the intended end result. Consequently, they always involve a high level of uncertainty. To deal with this uncertainty, industrial designers use design methodologies to structure the creative and analytical activities involved in design processes. In line with Roozenburg and Eekels (1995), and Desmet and Schifferstein (2011), we identify five basic steps in design processes: understand, target, envision, conceptualize and evaluate (bottom part in Figure 6.6.1). The process is iterative, and each design project can involve multiple cycles; the cycles are followed until the design result is satisfactory. In the "understand" step, the current situation is analyzed in order to understand this situation and its determinants. In the "target" step, the intended effect of the new product (i.e. the preferred state) is formulated. In the "envision" step, the specifications
for the new product are formulated (e.g. what should be the character and purpose of the product) and in the "conceptualize" step, ideas for new products (or services) are generated (e.g. what the product does, who uses it in which situation, and how it is operated). The final step is to evaluate the new product design to determine whether it generates the intended effect on the consumer or usage situation. The design cycle illustrates that design processes integrate analytical and creative phases: whereas the nature of the "understand" and "evaluate" steps is analytical, the nature of the "target", "envision" and "conceptualize" steps is creative. This integration is important, because it creates requirements that emotion research must meet in order to be instrumental in new product development processes. Empirical emotion research typically takes place in the analytical phases (i.e. understanding emotions in the current state, or in response to new design ideas). In the creative phases, concepts and product ideas are generated. Here, emotion research can serve as a valuable source of inspiration, stimulating creative ideas about possible design qualities that align with the emotional intention. For the creative phases, it is also important to understand the reasons that underlie emotional responses to products. This helps designers to understand how they can design products that have the intended emotional effect. Before we discuss the role of emotion research in emotion-driven design projects, we will, therefore, discuss the basic variables involved in the process that evokes consumer emotions.
6.6.3 Emotional responses to consumer products

Emotions are elusive in the sense that they are subjective: different people will experience different emotions towards the same product. A given product that is admired by some can be experienced as boring or dissatisfying by others. Apparently, the relationship between a product and the emotional responses to this product involves other variables than the product alone – variables that differ between people. As a consequence, it is not sufficient for designers to rely only on their intuition and personal sensitivities. Instead, they need to understand the emotional responses of the target consumer, knowing that these may be different from their own. To facilitate this understanding, Desmet (2002) proposed a "basic" model of product emotions that represents the key variables involved in the process that evokes emotional responses to product design (Figure 6.6.2). The basic model of product emotions is based on contemporary emotion theory, which considers emotions to be mechanisms that signal whether events are favorable or harmful to an individual's personal well-being. The process of signaling the personal significance of an event is called an appraisal: a "direct, non-reflective, non-intellectual automatic judgment of the meaning of a situation" (Arnold, 1960, p. 170). In appraising events, people's personal concerns serve as points of reference. Following Arnold, Frijda (1986) argues that when we appraise a stimulus as beneficial to our concerns, we will experience positive emotions and try to approach this particular stimulus. Likewise, when we appraise a stimulus as colliding with our concerns, we will experience negative emotions and try to avoid it.
Figure 6.6.2 A basic model of product emotions (Desmet, 2002).
Concerns are more or less stable personal preferences for certain states of the world (Frijda, 1986). This explains why people differ with respect to their emotional reaction to a given product. Compare, for example, the response of a sailing enthusiast with the response of an environmentalist aiming to preserve coral wildlife to the same sailing yacht. The first will probably experience a pleasant emotion (e.g. hope) given the concern for enjoying sailing trips, whereas the second will more likely experience an unpleasant emotion (e.g. fear), given the concern of avoiding coral damage caused by sailing yachts. Several design cases have shown that understanding the concerns of the user is key to successful emotion-driven design (e.g. Desmet and Dijkhuis, 2003; Desmet et al., 2007). Ortony et al. (1988) developed a typology of human concerns in which three types of emotional concerns are distinguished: goals, standards and attitudes. Desmet and Hekkert (2007) found these three types of concerns to be particularly valuable for understanding product emotions. Goals are event-related concerns. These represent what one wants to get done and what one wants to see happen. Goals are often assumed to be structured in a hierarchy ranging from abstract goals or aspirations, like the goal to have a successful life, to goals as concrete and immediate as the goal to catch a train. Many goals are directly and indirectly activated in the human–product relationship (see Demir, 2010). For example, one buys, owns and uses products because one believes they can help to achieve things (a digital agenda helps to organize our life), or because they fulfill a need (a bicycle fulfills the need for transportation). Standards are our beliefs, norms or conventions of how we think things should behave. Whereas goals refer to the state of affairs we want to obtain, standards are the states of affairs we believe ought to be (Ortony et al., 1988). For example, many believe that they should respect their parents, and eat more fruit and vegetables. Most standards are socially learned and indicate which moral evaluations are made. Whereas goals are relevant for our personal
well-being, standards are relevant for the preservation of our social structures (and thus indirectly also for our personal well-being). We approve of things that comply with standards and disapprove of things that conflict with them. We not only have standards regarding human (inter)action, but also regarding products. With respect to products, our values represent our beliefs of how a product (or a person associated with the product) should behave or function. For instance, we expect a new car to start without effort, and we expect a vase to be water resistant. Attitudes are our concerns that are related to objects. They represent our dispositional likings or dislikings (taste) for particular (attributes of) objects. We have attitudes towards product types (“I don’t like guns”), aspects or features of products (“I like red cars”), towards style (“I like Italian design”), towards quality of interaction (“I like cars that have a firm drive”), and towards context-related consequences of products (“I like feeling relaxed after drinking a beer”; see Desmet, 2010). Some people like red cars, others like black cars. Some people like Italian furniture style, whereas others prefer the Scandinavian style. Emotional responses related to attitudes are focused on the moment of the experience and not on the (anticipated) consequence of usage or on the (expected) behavior or functioning. In the latter cases, the emotional responses will be related to the expectations that can represent goals or standards.
6.6.4 Methods for emotion research in new product development

Figure 6.6.3 shows three main types of emotion research in new product development projects. The first and second types focus on assessing emotions (either to (a) existing or to (b) new product designs) and the third type (c) focuses on assessing concerns. Measuring emotions can help in understanding the emotional impact of products, and assessing concerns can help in understanding the causes of these emotions. The measurement of emotion and the assessment of concerns require different approaches. For both approaches, various methods are available.
Figure 6.6.3 Emotion research for new product development: (a) emotion measurement to identify consumer emotions; (b) emotion measurement to evaluate new product design; (c) concern assessment to identify consumer concerns.
The next sections give a brief overview of research methods in general emotion research. Only some will be elaborated on further, because some methods are more suitable for application in design-oriented research than others.
6.6.4.1 Emotion measurement
Emotions are often conceptualized as multi-componential phenomena (Scherer, 2005), including behavioral reactions, expressive reactions, physiological reactions and subjective experiences. Behavioral reactions are the actions or behaviors one engages in when experiencing an emotion, such as running or seeking contact. Emotions initiate behavioral tendencies like approach, inaction, avoidance and attack (Frijda, 1986). Fear makes one want to run, love makes one want to approach or caress, and so on. Expressive reactions are the facial, vocal and postural expressions that are part of the emotion. Each emotion is associated with a particular pattern of expressions (Ekman, 1994). Anger, for example, comes with a fixed stare, contracted eyebrows, compressed lips, vigorous and brisk movements, and, usually, a raised voice, almost shouting (Ekman and Friesen, 1978). Physiological reactions are the changes of activity in the autonomic nervous system that are part of the emotion, such as pupil dilatation, increase in heart rate, and sweat production. The subjective experience is the feeling, the conscious awareness of the emotional state one is in, such as feeling happy or angry. Most available instruments focus on the assessment of the expressive, physiological, or the feeling component of the emotion. Traditionally, most attempts to measure emotions have been made in the fields of psychology and sociology. More recently (i.e. the last 20 years), acknowledging the important role of emotions in their field of research, consumer and marketing researchers have developed instruments which measure emotional responses to advertisements and consumption experiences. Even more recently (i.e. the last 10 years), and as a result of the rapid invasion of computers into our daily lives, computer science has also become a player in the field of measurement of emotions. The overview below is partly based on Laurans et al. (2009).
● Emotional expression. The notion that each emotion is characterized by a unique (facial) expression resulted in the development of systematic observation tools such as the facial action coding system (FACS; Ekman and Friesen, 1978), a system that deconstructs facial expressions into visible facial muscle activities. The system can be used for manually coding facial expressions of participants. These tools are, however, seldom used in design-oriented research. One reason is that coding methods demand a considerable investment in time and expertise (the manual is over 1000 pages long), while their usefulness for the measurement of the typically mild emotions elicited by product interactions remains to be established. Software packages are now beginning to appear that are based on techniques for the automatic recognition of expression, often within the framework of affective computing (Den Uyl and van Kuilenburg, 2005). For the time being, these packages mostly rely on posed expressions. Although they are able to detect differences in basic valence, they have difficulties in detecting subtle differences between similarly valenced emotions. Nevertheless, they may become interesting for future design-oriented research.
● Physiological activation. Techniques that measure physiological activation have some practical advantages: they yield continuous measures that do not demand the user's attention. The "objective" nature of the measurement is also appealing, as it avoids some of the biases influencing self-reports. One obstacle in applying these techniques for design-oriented research is the need for specific expertise and complex equipment, and the relative obtrusiveness of sensors, reducing the freedom of movement of the participants. A more fundamental problem is the relatively low coherence between physiological activation and other components of emotion (Bonanno and Keltner, 2004; Mauss et al., 2005), which requires careful consideration when applying these techniques in design-oriented research. As of today, it remains very difficult to use physiological data for assessing users' emotions in response to products (Laurans and Desmet, 2008). As a consequence, the bulk of published research using these techniques is supplemented with self-report data.
● Emotional experience. The techniques that are used most often rely on self-report of the subjective experience component of emotion. These techniques do not require complex or expensive equipment. Various standardized scales and questionnaires are available, reflecting different views on the structure of emotional experience. Reviews of these self-report instruments are reported by Laurans et al. (2009) and Poels and de Witte (2006). A broad distinction can be made between questionnaires based on descriptors of discrete emotions and questionnaires developed to assess the main dimensions of feelings. An example of the first type is the Geneva Emotion Wheel (Scherer, 2005), which is a verbal self-report instrument that measures 20 distinct emotions. An example of the second type is the PANAS scale, which measures two independent dimensions: one for positive affect (PA) and the other for negative affect (NA), each representing a basic dimension of feelings (Tellegen, 1985).
Although most questionnaires are verbal, some do not rely on words. Non-verbal scales use cartoon characters to represent emotions, which makes them easier to use across cultures. One is the self-assessment manikin (SAM, see Bradley and Lang, 1994), which measures three basic dimensions of emotions: valence (positive versus negative), arousal (calm versus excited) and dominance (or feeling of control regarding the situation). With only one item per scale, the SAM is quick and easy to administer for product testing. PrEmo (Desmet, 2003) is a non-verbal measurement tool developed specifically for design-oriented research consisting of 10–14 animations that each represent a different emotion. Each animation shows a cartoon character that expresses an emotion through facial expression, body movement, and sound (see Desmet and Dijkhuis, 2003; Desmet et al., 2005).
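To make the dimensional scoring concrete, a PANAS-style score reduces to two sums of item ratings. The sketch below uses an invented, shortened item set (the published scale has ten items per dimension) and invented responses:

# Minimal sketch of scoring a PANAS-style questionnaire: positive affect (PA)
# and negative affect (NA) are sums of the corresponding item ratings (1-5).
pa_items = ["interested", "excited", "enthusiastic", "inspired", "alert"]
na_items = ["distressed", "upset", "irritable", "nervous", "afraid"]

# One respondent's ratings after using the product (1 = very slightly, 5 = extremely).
ratings = {"interested": 4, "excited": 3, "enthusiastic": 4, "inspired": 5,
           "alert": 3, "distressed": 1, "upset": 1, "irritable": 2,
           "nervous": 1, "afraid": 1}

pa_score = sum(ratings[item] for item in pa_items)
na_score = sum(ratings[item] for item in na_items)
print(f"PA = {pa_score}, NA = {na_score}")   # two independent affect dimensions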
6.6.4.2 Measuring emotions for product design
The main disadvantage of dimensional instruments for design-oriented research (either based on expression, physiology or self-report), especially in the early phases of the design process, is that they tend to lack inspirational value.
Assessments of existing products with self-report instruments based on categorical emotions are usually much more effective in informing further design efforts (Laurans et al., 2009). These instruments are able to uncover rich emotion profiles that provide nuanced information – and for designers these nuances form the sources of inspirational value. However, a drawback of many instruments that are based on categorical emotions is that they have the tendency to over-represent negative emotions, whereas emotional responses to products more often tend to be positive than negative. This was shown for industrial product design (Desmet, 2002) and for food design (Desmet and Schifferstein, 2008). Measurement instruments used in product development processes should therefore not over-represent negative emotions. There are additional characteristics of emotions elicited by product design that should be taken into consideration (see Desmet, 2008). First, most emotions that are experienced in the user-product relationship are felt at low intensities. Products, for example, tend to elicit frustration more readily than fury, and joy more readily than ecstasy. Second, rather than eliciting one single emotion, products may elicit multiple (mixed) emotions simultaneously, with each different product aspect (e.g. general appearance, particular details, implicit and explicit expectations, and associated, remembered and imagined meanings) having an emotional impact. A person can, for example, be proud of a new pair of shoes and happy with the reaction of his or her partner, and at the same time be irritated by the lack of comfort and afraid of damaging the delicate leather. Given the subtle and mixed nature of emotions experienced by product consumers, self-report instruments that measure distinct emotions, such as the Geneva Emotion Wheel and PrEmo, are most appropriate for design-oriented emotion research.
6.6.4.3 Concern assessment
Concerns, values or expectations are generally assessed with various types of questionnaires or interviews. Standardized concern questionnaires are based on general concern taxonomies. Several taxonomies of general life concerns are available, with associated measurement scales. An often used example is the taxonomy and questionnaire developed by Sheldon et al. (2001), which consists of ten motive types with three distinct motives for each type (Table 6.6.1). The Sheldon questionnaire can be used to identify the concerns that were involved in remembered emotional events. First, respondents are asked to imagine a remembered recent life event in which they experienced an emotion. Given this event, respondents use five-point scales to express to what extent each motive was attained in the case of a positive emotion or blocked in the case of a negative emotion. Analogously, respondents may be asked to remember events in which emotions were experienced in response to a particular product. The general concerns thus generated can be used as main directions in the design process (e.g. to design for autonomy, to design for relatedness, etc.). In many cases, using general concern taxonomies may not be feasible, because particular product emotions may be determined by specific product features or situational characteristics. These cases call for open interview techniques.
Table 6.6.1 Sheldon concern taxonomy (created from data from Sheldon et al., 2001).
Autonomy: That my choices were based on my true interests and values. Free to do things my own way. That my choices expressed my "true self."
Competence: That I was successfully completing difficult tasks and projects. That I was taking on and mastering hard challenges. Very capable in what I did.
Relatedness: A sense of contact with people who care for me, and whom I care for. Close and connected with other people who are important to me. A strong sense of intimacy with the people I spent time with.
Self-Actualization / Meaning: That I was "becoming who I really am." A sense of deeper purpose in life. A deeper understanding of myself and my place in the universe.
Physical thriving: That I got enough exercise and was in excellent physical condition. That my body was getting just what it needed. A strong sense of physical well-being.
Pleasure-stimulation: That I was experiencing new sensations and activities. Intense physical pleasure and enjoyment. That I had found new sources and types of stimulation for myself.
Money-luxury: Able to buy most of the things I want. That I had nice things and possessions. That I got plenty of money.
Security: That my life was structured and predictable. Glad that I have a comfortable set of routines and habits. Safe from threats and uncertainties.
Self-esteem: That I had many positive qualities. Quite satisfied with who I am. A strong sense of self-respect.
Popularity-influence: That I was a person whose advice others seek out and follow. That I strongly influenced others' beliefs and behavior. That I had strong impact on what other people did.
Laddering is an interview technique that can be used to determine the concerns that underlie people's (dis)likes for specific product features and qualities (Reynolds and Gutman, 1988). Laddering was originally developed to determine means-end chains. By repeatedly asking the question "Why is this important to you?", the researcher is able to infer which concerns underlie particular product preferences. The session starts with asking the respondent to select the most preferred product from a set of products. Respondents are asked to explain what product qualities or features they base their selection on. They are then asked to explain why the particular quality or feature is important to them. Whatever they mention, the interviewer will ask again why they think this is important to them. This eventually leads to the concerns that underlie their preferences. The session continues until the researcher has a full overview of all concerns that the respondent has in relation to the given product.
6.6.5 Emotion research in new product development

In this section, we discuss opportunities for emotion research in the different steps of the design cycle in emotion-driven design projects. Three design cases are used to illustrate application possibilities: a telephone case, in which a mobile phone was designed with the aim to evoke a "wow!" experience; a breakfast case, in which a tray-served airplane breakfast was designed with the general aim to improve the emotional impact of the current breakfast; and a fabric conditioner case, in which a fragrance for a fabric conditioner was developed with the aim to strengthen the emotional consistency of the product. Details of the case studies (including measurement procedures and data analysis) have been reported by Desmet et al. (2007) and Desmet (2010). Here, we only discuss them to illustrate the design-oriented emotion research.
6.6.5.1 Measuring emotions in the "understand" stage

The first purpose of emotion research is to understand the emotions of consumers in the current situation. The most straightforward approach is to measure emotions that are evoked by existing products. One can, for example, measure the emotional impact of existing products in the portfolio of the client and/or the client's competitors. Alternatively, these can be emotions experienced in a particular domain that is relevant for the client, such as visitors of museums, patients in hospitals, or travelers at airports. The results of these measurements can be used in several ways in the "target" step to formulate a statement about the intended emotional response of the users or consumers to the new product. For instance, the research can be used to identify possibilities to reduce negative emotions, to introduce positive emotions, or to improve emotional consistency in the design.
6.6.5.1.1 Identify possibilities to reduce negative emotions
Emotion measurement can identify unwanted negative emotions. For example, measurements can indicate that users are dissatisfied about particular product features or properties, or irritated by unclear requirements. In those cases, the design goal is to reduce these negative emotions.
An example is a manufacturer of office seats, who discovers that users are frustrated because of a difficult-to-adjust armrest (current state) and starts a product development process to resolve the frustration by improving the ease of use of the armrests (preferred state).

[Chart: mean ratings of ten emotions (satisfaction, pleasant surprise, fascination, amusement, boredom, desire, contempt, dissatisfaction, unpleasant surprise, disgust) for the breakfast tray compared with the overall mean across all meal types.]
Figure 6.6.4 Emotions elicited by all meal types and by a breakfast tray.
Example: Airplane meal
An airline company had decided to invest in redeveloping the meals they served during flights with the aim to improve their emotional impact. It was decided to reduce negative emotions experienced by passengers in response to the existing meals. The first step in the project was to use PrEmo to assess the emotions elicited by the company's current airplane meals during flight. The meals included breakfast trays, breakfast boxes, lunches, dinners and snack services. The graph in Figure 6.6.4 shows the average emotional impact. The black bars represent the emotional impact of the breakfast tray (overall mean ratings for a three-point scale). The grey bars represent the average emotional impact of all five meal types. The graph indicates that the breakfast tray scores unfavorably on the negative emotions boredom and disgust when compared with the other meal types. On the basis of this study, it was decided to target the redesign efforts on the breakfast tray, with the aim to reduce the experience of disgust and boredom.
6.6.5.1.2 Identify possibilities to introduce positive emotions
An emotion measurement can also identify opportunities to introduce desirable, but not yet experienced positive emotions. For example, benchmark research can indicate that there is a market for a product that evokes fascination, or consumer feedback can indicate that there is a possibility for increasing the
joy of use. In those cases, the design goal is to introduce positive emotions. An example is a wine producer who discovers that there is a market for wine that evokes amusement and starts the development of an amusing wine.

[Correspondence map positioning the phone models among the emotions desire, fascination, pleasant surprise, satisfaction, amusement, boredom, contempt, dissatisfaction, disgust and unpleasant surprise.]
Figure 6.6.5 Emotional impact of existing mobile phones.
Example: Wow telephone
The client was a telecom company that had decided to develop a mobile phone that evokes a "wow!" feeling for the target consumers. Emotion measurement was used to better understand this general experience. The main aim of the study was to get an impression of what emotions are involved in a "wow!" experience, and to determine which existing phone designs evoked the intended "wow!" experience. The emotional responses elicited by eight existing mobile phones were measured (with 35 target consumers, each respondent responding to all models). The models were selected to represent substantial design variation and to include those models that the client believed would evoke the intended "wow!" experience (Figure 6.6.5).
Model        B     A     H     D     E     F     C     G
Surprise     1.54  1.23  1.37  .86   .89   .66   .66   .46
Desire       1.03  1.20  .57   .57   .40   .46   .29   .26
Fascination  1.14  1.06  .86   .63   .51   .51   .40   .29
Overall      1.32  1.16  .93   .69   .60   .54   .45   .34

Figure 6.6.6 Measured wow-impact in pre-study.
The graphical representation in Figure 6.6.5 is the result of a correspondence analysis. It visualizes the variance in the data, in which distances between items are based on how often a particular emotion was experienced in response to a particular stimulus. Given this visualization, the design team proposed to define the "wow!" experience as consisting of the three emotions desire, fascination and pleasant surprise (shown in the gray area that was added by the researcher). Figure 6.6.6 shows the mean ratings of all telephone models on these three wow-emotions (on a three-point scale, ranging between 0 and 2). The last row, the "wow index", shows the overall mean rating on these three emotions. The telephones are ordered in accordance with their "wow!" impact. Three models elicit a higher level of "wow!" experience than the other five: models A, B and H. This ranking served as the emotional benchmark for the project, and the project aim was to develop a mobile phone that evoked at least as much "wow!" as these three models.
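The wow index itself is a simple aggregation that is easy to reproduce. The sketch below uses invented per-respondent ratings (the actual study used 35 respondents and a 0-2 scale per emotion) to show how mean emotion scores per model and the overall wow index can be computed and ranked:

import pandas as pd

ratings = pd.DataFrame([
    # respondent, model, desire, fascination, pleasant surprise (0-2 each)
    (1, "A", 2, 1, 2), (1, "B", 1, 2, 2), (1, "H", 1, 1, 1),
    (2, "A", 1, 1, 1), (2, "B", 2, 1, 1), (2, "H", 0, 1, 1),
], columns=["respondent", "model", "desire", "fascination", "p_surprise"])

wow_emotions = ["desire", "fascination", "p_surprise"]
per_model = ratings.groupby("model")[wow_emotions].mean()
per_model["wow_index"] = per_model.mean(axis=1)   # overall mean of the three emotions
print(per_model.sort_values("wow_index", ascending=False))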
6.6.5.1.3 Identify possibilities to strengthen emotional coherence

Emotion measurement can help create coherence between the emotional fingerprint of a brand and the emotional impact of the product design.
Example: Fabric conditioner
The aim of the project was to optimize the emotional consistency of a fabric conditioner product. The client had recently redesigned the product packaging, and wanted to develop a fragrance that, in terms of emotional impact, fitted with the package design. An initial measurement was performed to identify the "emotional fingerprint" of the new package design. The general aim of product design is to evoke positive emotions. There are, however, many different types of positive emotions, and the concept of emotional fingerprint was introduced to denote the specific positive emotion type that is typically elicited by a product. The emotional impact of the old and new package was measured with PrEmo. A third product (a green package) was included in the study to serve as a point of reference. Figure 6.6.7 shows the correspondence map that visualizes the emotional impact of the new design in comparison to the old design.
[Two correspondence maps: one for the packages (current design, new design) and one for the fragrances (A–E), each positioned among the emotions desire, pleasant surprise, admiration, satisfaction, inspiration, amusement, fascination, boredom, disappointment, contempt, dissatisfaction, disgust, indignation and unpleasant surprise.]
Figure 6.6.7 Emotions elicited by fabric conditioner packages and fragrances (p-surprise = pleasant surprise; u-surprise = unpleasant surprise).
On the basis of these results, the client decided to select the emotion "inspiration" as the key emotional target for the product, because it is evoked by the new package design and fits with the brand identity of this particular product. The emotional target for developing an appropriate fragrance was therefore to develop a "blue fabric conditioner fragrance" that evokes inspiration. The right side of Figure 6.6.7 shows a second emotion measurement. Each letter represents a fragrance that was developed by the client's fragrance supplier to fit with a blue fabric conditioner. Note that the configurations of emotions differ between the two maps in Figure 6.6.7, because the analysis aims to make an optimal visualization for each specific stimulus set. On the basis of this measurement, fragrance D was selected and implemented in the product.
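For readers curious about the mechanics behind such maps, a basic correspondence analysis can be computed as a singular value decomposition of the standardized residuals of an emotion-by-stimulus frequency table. The counts below are invented and the sketch is a generic textbook formulation, not the procedure used in these studies:

import numpy as np

# How often each emotion was reported for each stimulus (rows = stimuli,
# columns = emotions); counts are invented for illustration.
counts = np.array([[20.0,  5.0,  3.0],
                   [ 6.0, 18.0,  4.0],
                   [ 5.0,  6.0, 15.0]])

P = counts / counts.sum()                        # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)              # row and column masses
S = np.diag(r ** -0.5) @ (P - np.outer(r, c)) @ np.diag(c ** -0.5)
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

row_coords = np.diag(r ** -0.5) @ U * sv         # stimulus coordinates
col_coords = np.diag(c ** -0.5) @ Vt.T * sv      # emotion coordinates
print(row_coords[:, :2])                         # first two dimensions for plotting
print(col_coords[:, :2])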
6.6.5.2 Assessing concerns in the "understand" stage
The second purpose of emotion research is to identify the concerns that consumers have with respect to given products or with respect to a given situation. Emotion measurement focuses on the “what” question, and concern assessment focuses on the “why” question. Assessing concerns enables the design team to understand why consumers experience particular emotions to a given product or situation.
6.6.5.2.1 Identify concerns given a particular product
Concern assessment can identify the goals, standards and attitudes of consumers that underlie their emotional responses to a given product (type). Laddering interview techniques can be used to investigate existing products, and value taxonomies can be used to structure the results (see Chapter 6.1).
Example: Wow telephone
A concern study was performed to generate insights into the concerns that underlie the wow-experience of the target user with respect to mobile phones. Two group discussions (five respondents each) with target consumers were conducted. The same mobile phones as in the emotion measurement were used, together with the correspondence map in Figure 6.6.5. The group members first discussed the emotional impact of each of the eight mobile telephones. The models were placed randomly on the table, and the moderator invited participants to express their affective responses. Subsequently, the moderator introduced and explained the mobile telephone and emotion map. The group members were stimulated to discuss to what degree they "agreed with" the map, that is, to what degree the map represented their emotional responses. The group was stimulated to discuss the underlying reasons why the models and emotions were placed in that particular configuration. By using this approach, which was loosely based on laddering, the moderator was able to direct the discussion towards the abstract level of underlying concerns. All comments made by any of the participants that referred to concerns were recorded and subsequently categorized in terms of the three concern types: goals, standards and attitudes. Table 6.6.2 gives an overview of the categorized concerns. The first column shows the concern type, the second shows examples of mentioned concerns, and the third column shows concern themes that were formulated to represent the concern examples.
6.6.5.2.2 Identify concerns given a situation
Concerns underlying emotional responses to products are context-dependent. For example, someone has different concerns when using a laptop on the train than when using the laptop in an office environment. It can, therefore, be important to take the context of use into consideration when doing a concern analysis. The analysis should take place in a context that is as realistic as possible.
Example: Airplane meal
In order to identify how the breakfast tray could be redesigned to decrease disgust and boredom, a concern analysis was made by interviewing passengers during intercontinental flights. Respondents were shown the current product and asked to respond, using a laddering technique to enable the interviewer to discuss underlying concerns. Identified concerns were categorized in three classes – concerns with respect to: (1) the meal, (2) the presentation and (3) eating the meal. Four pairs of seemingly conflicting concerns were identified as inspirational for the design team:
(1) Meal: the meal should show variety/the meal should show balance.
(2) Meal: I want to be refreshed/I want to be relaxed.
(3) Presentation: I want to be distracted/I want things that are easy to use.
(4) Eating the meal: I like familiarity/I like surprises.
Table 6.6.2 Concern profile related to wow-experiences.

GOALS (I want a telephone that ...)
Manageable: fits my hand (not too small or too big); is convenient to store; does not have awkward folding mechanisms.
Practical: has clear and unambiguous buttons; has buttons that are easy to operate; can be operated with uncomplicated interaction protocols.
Reliable: does not have parts that can break off; is not sensitive to damage; has a protected screen; has sturdy buttons; is always reliable.

VALUES (A telephone should ...)
Quality: have a solid cover; not be too light; have a clear click if it has a folding mechanism; not have the tactile quality of plastic; not make cracking sounds when held in the hand.
Logical: be recognizable as a telephone; have a design that emphasizes the telephone function; be functional; have a no-nonsense expression; not have 'design-frills'.

ATTITUDES (I like telephones that ...)
Consistent: are consistent in general shape, color and buttons; are shaped geometrically; have balanced shapes; are made of 'real' materials like metal and rubber.
Unique: are not boring; have powerful shapes; show distinctive features; have an innovative design; have an exciting design; are stylish; are elegant; are unique.
Luxurious: have classy design; are made of beautiful materials; are well-detailed and finished; do not look childish; do not look cheap; show perfection in design and fabrication.
6.6.5.3 Stimulating creativity in the "envision" stage
The “target” stage may result in an emotional design direction, either in terms of what emotion to design for, or in terms of what emotion to reduce or prevent. For instance, in the fabric conditioner case, the company wanted to design an inspiring fragrance that fitted the accompanying package. We also described the case of the airline, who wanted to reduce the disgust and boredom evoked
by their breakfast trays. Knowing which emotions to evoke or to avoid provides a target that can be used to determine the appropriateness of design concepts, but it may not provide direct inspiration for generating new designs. Emotion research and emotion theory can be of use for stimulating creativity when envisioning design directions, and for formulating design qualities or characters.
6.6.5.3.1 Envision solution space
General knowledge available in the literature on emotions and concerns can be used as a source of inspiration. Because of its general nature, it can open up interesting opportunities for the creative process. We may use existing lists providing overviews of emotions (Frijda, 1986; Scherer, 2005), appraisals (Lazarus, 1991; Scherer, 2005; Ortony et al., 1988) or concerns (Ford and Nichols, 1987; Sheldon et al., 2001) as inspirational materials for developing innovative ideas. For instance, starting out from Sheldon's list of concerns (Table 6.6.1), we can ask participants of a creative session to pick a concern from the list at random, and to try to relate it to the target product. Imagine a situation with a user who has this concern, and explain how the product helps him or her in reaching this particular goal, meets or conflicts with a certain standard, or relates to this attitude. Analogously, starting out with an elaborate list of positive or negative emotions, the participants may be asked to develop a usage scenario in which the product evokes that particular emotion. The unusual product-emotion combinations that occur will open up the mind for new situations, uses and product concepts.
6.6.5.3.2 Generating product quality or character
Another activity in the "envision" stage of the design process that can help in making the transition from target to concept is to create a product character. A product character represents the designer's vision of how to align the product with the concern profile. A character is a specification of the quality and personality of the product. This can reflect both the quality of the interaction and of the physical product, such as stimulating, inviting, seductive, forceful, natural or colorful. The product character can be used as a reference in all stages of the design process in order to safeguard the emotional fittingness of the final design.
Example: Wow telephone Some of the concerns in Table 6.6.2 appear to be inconsistent or even conflicting. These apparently conflicting concerns proved to be particularly inspiring for generating a product character. For example, the telephone should be innovative, surprising and stimulating, and at the same time it should be no-nonsense and harmonious. Or, the appearance should be simple and balanced, and at the same time distinctive and unique. These paradoxical concerns are interesting, because they represent combinations that are not yet fulfilled by existing products, and therefore stimulate novel design solutions. On the basis of the conflicting concerns, a character triangle was created that represents the eight concern themes. Figure 6.6.8 shows the character triangle in words and pictures.
Figure 6.6.8 Product character triangle representing the eight concern themes, built from three character domains: impetuous/self-willed, sincere/balanced and beneficent/sophisticated.
This product character implies three successive layers: the first impression is impetuous and self-willed; then the character becomes sincere and balanced; and finally it becomes beneficent and sophisticated.
6.6.5.4 Generating concepts in the "conceptualize" stage
The next step in the design process is to conceptualize products that fit with the concern profile and product character. The envision stage does not yet formulate product ideas, but qualities or characters of the product that will be designed. In the conceptualization step, ideas for new products (or services) are generated that intend to have this defined character. This includes determining, for example, what the product does, what it will look like, what technology is used, who uses it in which situation, and how it is operated.
Example: Wow telephone In the process of sketching product ideas, the designer aimed to create a product concept that fitted the three-layered character. To prevent a "wow!" response that is only short-lived, it was decided to focus on the overall (holistic) concept rather than on feature-based concepts. Sketches explored the possibilities of creating a layered response in which each character domain represents one layer of experience: an initial impact at first sight; a second impact; and a long-term impact.
Figure 6.6.9 Design of the Wow telephone.
The final design has three functional layers. Each layer is built from three material layers (Figure 6.6.9). When closed, the product looks like a photo camera, and when opened it looks like a small computer, or like a simple mobile phone, depending on how it is opened. The parts that are touched (by the finger when interacting with the product or by the face when using the telephone function) are made of white soft rubber. The exterior is made out of cool metal (aluminum anodized in the color gold). The design has an impulsive spirit; the basic shape is clear, but the lines are playful. The layered character is applied in all the details of the product in order to create a subtle and sophisticated design experience. The first impression is that the product is impetuous and self-willed because of the contrasts in colors and material between the inside and outside of the phone, and because it can open in two ways (which was a unique feature at that time). When using the phone, the character is sincere and balanced because the way in which it is opened clearly communicates the main function. Moreover, dividing the functions in two enabled the design of a simple and straightforward interface. After a period of using the product, one will experience its sophistication in the details of the design. For example, the buttons are engineered in a way that prevents them from getting dirty; the rubber material gives a smooth and beneficent feeling when touching the ear.
Example: Airplane meal In the airline breakfast project, the overall product character was formulated as “the charger; like a morning walk in the park”. This metaphor, which expresses the dynamics of the breakfast character during breakfast consumption, was the leading theme for the design process (Figure 6.6.10). A morning walk in the park is a refreshing activity in which the person is in control of what path to follow, and of how fast she or he wants to be refreshed. This combination of refreshment and relaxation, and of control and surprises was envisioned to reconcile the seemingly conflicting concerns that were found in the concern analysis. The metaphor was used to formulate three key elements of the product character: engaging (explorative for the food and accessible for the package), dedicated (fresh for the food and bright for the packaging) and invigorating (nutritious for the food and embracing for the packaging).
Figure 6.6.10 Breakfast character, "the charger; like a morning walk in the park": engaging (food explorative, packaging accessible and open), dedicated (food fresh, packaging bright), invigorating (food nutritious, packaging embracing) (by KVD Amsterdam).

Figure 6.6.11 Product concept (design by KVD Amsterdam).
The concept was titled “morning tapas”: a balanced union of distinct breakfasts (Figure 6.6.11). It balances between a heavy and a light breakfast, between a warm and a cold breakfast, and between a sweet and a savory breakfast. A duo of a warm and a cooled beverage take a prominent place in the meal. The cold and warm meal elements are selected for the combination with the beverages (like tapas). The packaging also consists of several contrasting elements. The tray is made of recycled paper, and the lid is made of transparent plastic. This combination creates the intended bright, open and embracing character. Inside, there are several cups with the warm and cold elements. The top left element is warm savory (e.g. omelet); the top right element is cold
savory (e.g. cheeses); the bottom left element is warm sweet (e.g. sweet rice); the bottom right element is cold sweet (e.g. fruit yogurt). The smaller cups in the middle contain "condiments" (such as nuts and honey) that can be used to personalize the other meal elements. This allows for combining elements in a non-ambiguous manner, generating the nutritious, explorative and fresh character. This is enhanced by the cooled herbal tea in the first (transparent) cup that enables passengers to "awaken the taste buds" before opening the package to start the breakfast.
6.6.5.5 Testing products in the "evaluate" stage
In various stages of the design project, the emotional impact of design ideas, concepts and prototypes can be tested and compared with the emotional intentions of the designer. The emotional impact of initial ideas can be measured by using renderings and/or verbal descriptions of the ideas as stimuli in emotion measurements. In a later stage of the design process, prototypes or mockups can be used as stimuli. These studies can be used to investigate the effects of design decisions on the emotional responses of target users.
Example: Airplane meal The emotional impact of the new breakfast concept was tested in a real-life situation, on board the airplane during flight. The breakfast was served on an intercontinental flight, and 26 randomly selected passengers used PrEmo to report their emotional responses after being served the breakfast. Five positive and five negative emotions were measured (see Figure 6.6.4). The data were compared with an identical test that was performed with a conventional breakfast. The study indicated that the new concept elicited significantly less disgust (down from 0.78 to 0.27) and boredom (down from 0.89 to 0.27) than the original concept. It was therefore concluded that the new concept was successful in generating the intended emotional effect, which was to reduce the levels of boredom and disgust. Given this conclusion, the breakfast was implemented and served during all intercontinental flights between Europe and Asia for a period of three years.
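For readers who want to run this kind of before/after comparison on their own data, the sketch below is a minimal, hypothetical example and not the analysis actually used in the project: it compares the ratings of one PrEmo emotion between a conventional and a new concept with a Mann-Whitney U test, since such ratings are ordinal and the two passenger samples are independent. The data values are invented for illustration.

```python
from scipy.stats import mannwhitneyu

# Hypothetical PrEmo "boredom" ratings (0-2 scale) from two independent
# passenger samples; a real study would have about 26 respondents per concept.
conventional = [2, 1, 2, 1, 1, 2, 0, 1, 2, 1]
new_concept = [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]

def compare_emotion(old_ratings, new_ratings):
    """Test whether the new concept elicits lower ratings than the old one."""
    stat, p = mannwhitneyu(old_ratings, new_ratings, alternative="greater")
    return sum(old_ratings) / len(old_ratings), sum(new_ratings) / len(new_ratings), p

mean_old, mean_new, p = compare_emotion(conventional, new_concept)
print(f"conventional mean={mean_old:.2f}, new mean={mean_new:.2f}, p={p:.3f}")
```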
Example: Wow telephone An evaluation study was performed to assess whether the new telephone design elicited the intended "wow!" experience. A prototype was built, which did not function as a telephone, but was functional in terms of basic interactions (opening and closing the telephone and computer function). The first study was repeated with the new prototype as an additional stimulus. Figure 6.6.12 shows the mean ratings of all stimuli on the three "wow!" emotions (on a three-point rating scale). In addition, the last row shows the "wow index," that is, the overall mean "wow!" rating. In Figure 6.6.12, the models are ordered in accordance with their "wow!" impact. The new model in the second column (model G) had the highest "wow!" impact, whereas the model in the ninth column (model E) had the lowest "wow!" impact. Differences between the "wow!" rating of model G and all other
              G      B      D      H      C      F      A      E
Surprise      1.57   1.40   1.07   0.97   0.80   0.57   0.37   0.53
Desire        1.03   0.97   1.17   0.83   0.27   0.50   0.50   0.40
Fascination   0.87   1.17   0.93   0.83   0.37   0.47   0.40   0.37
Overall       1.16   1.15   1.06   0.88   0.48   0.51   0.42   0.43

Figure 6.6.12 Measured wow-impact of existing and new telephone designs.
models except model B were significant, indicating that both models G and B elicited higher levels of “wow!” experience than all other stimuli. On the basis of these findings it was concluded that the new design did evoke the intended “wow!” experience, and it was subsequently used as the basis for the development of a production model.
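A minimal sketch of how the wow index in Figure 6.6.12 can be computed, assuming (as stated above) that it is simply the overall mean of the three "wow!" emotion ratings per model; the input values are the published means, reused only to illustrate the calculation, and the variable names are hypothetical.

```python
# Mean ratings per model (from Figure 6.6.12): surprise, desire, fascination.
ratings = {
    "G": (1.57, 1.03, 0.87), "B": (1.40, 0.97, 1.17),
    "D": (1.07, 1.17, 0.93), "H": (0.97, 0.83, 0.83),
    "C": (0.80, 0.27, 0.37), "F": (0.57, 0.50, 0.47),
    "A": (0.37, 0.50, 0.40), "E": (0.53, 0.40, 0.37),
}

# Wow index = overall mean of the three emotion ratings.
wow_index = {model: sum(vals) / len(vals) for model, vals in ratings.items()}

for model, score in sorted(wow_index.items(), key=lambda kv: -kv[1]):
    print(f"model {model}: wow index = {score:.2f}")
```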
6.6.6 Summary and future of emotional research

In this chapter we have sketched possibilities of emotion research as input for product design, using example cases to illustrate opportunities. In spite of the potential value of emotion research in design projects, we have observed that design teams are often not able to make good use of emotion research in the creative stages of the design process. This could be caused by a general mismatch between the scientific frame of mind of the researcher who studies emotions, and the creative frame of mind of the designer who generates ideas (for a discussion, see Eekels and Roozenburg, 1991). Although research and design are strongly interwoven and mutually dependent on each other, there are three essential differences that should be understood before we can discuss how emotion research can be useful for design processes: (1) whereas design is possibility-driven, research is reality-driven; (2) whereas design aspires to create, research aspires to understand; (3) whereas design focuses on totality, research focuses on aspects.

Scientific research focuses on the existing, real and factual world as it is. The central aim is to bring about a change in the realm of the mind: to generate new knowledge that explains present and past phenomena, and predicts future phenomena. Given this focus, scientific research requires goal-oriented observation, eventually supported by experiments. Moreover, research never focuses on reality in its totality, but on selected aspects of reality.

In contrast, design focuses on worlds that do not (yet) exist, but are, it is hoped, realizable. Whereas there is only one real, factual world, there are limitless non-existing, yet possible worlds. The central aim of design is to bring about a change in the material world: a design that intervenes with or changes the world as it is. Design cannot rely solely on observation (the world to observe does not yet exist), but requires envisioning, imagining and conceiving possible futures.
Moreover, design is a holistic activity that involves the totality of the entity to be designed, simultaneously aiming to optimize all product aspects, such as usability, aesthetic appeal, safety and production costs.

The methods that are available for measuring and understanding emotions are mostly developed within a scientific frame of thought. They are particularly suitable for developing an understanding of the existing world. However, in order to enable design teams to make good use of emotion measurement data, researchers need to be sensitive to the design frame of thought: designers focus on integrated possibilities of various future worlds, in which values like originality and creativity overshadow the typical scientific values like validity and reliability. Hence, data should be represented in a holistic (rather than reductionist) fashion. Although this may be inadequate for scientific purposes, because holistic data do not necessarily enable researchers to determine relationships between design features and emotions, for designers this is inspiring because it is the holistic design, including all details, that determines user emotions. Moreover, data should be represented and communicated in a descriptive instead of prescriptive way. Creativity can be supported better with a visual (or sensory) rather than a numerical data representation. Typically, designers tend to prefer holistic, in-depth descriptions of a few interesting personal cases in the form of personas, interaction scenarios, or story boards (see Sleeswijk Visser, 2009) to an overview of population means, which give general trends but tend to level out all interesting individual variations.

Emotion-driven design projects require emotion research that is inspirational:
● For design-oriented emotion research, the nuances in experienced emotions count. Products typically do not evoke basic emotions such as fear and anger, but more subtle emotions such as boredom and admiration. Emotion research in a design context should therefore use methods that measure these subtle emotions.
● Designers are inspired by combinations of insights in what emotions current products evoke and what the underlying human concerns are that drive these emotions.
● It is not ideal to separate the research activities from the design cycle; the data will be more inspiring when the design team is also involved in the "understand" step of the design cycle.
● Data representations are crucial; rich, holistic, multisensory data presentations stimulate discussion and creativity.
● Emotion measurement can have an important communication function in teamwork: it enables the team to develop a mutual language of and view on the emotional impact of the design they are working on.
There are stages in the design process, where emotion measurement does not play a direct role. These are the creative stages, which rely on the ability of the design team to envision and conceptualize products that are both original and appropriate for the given emotional target. In those stages, general emotion theory can be useful for both stimulating and directing creativity. The universal principles proposed by emotion theorists can be used, for example, to challenge designers to explore non-conventional design directions. It is our experience
that this combination of using emotion measurement in the "understand", "target" and "evaluate" stages, and of using emotion theory in the "envision" and "conceptualize" stages, is most likely to generate successful results.

Emotion is only one aspect of user experience. Other kinds of experiences, such as aesthetic experience and experience of meaning (see Desmet and Hekkert, 2007; Schifferstein and Hekkert, 2008), are also interesting to take into consideration during the design process. Future research can explore the possibilities of incorporating these other kinds of experiences in design-oriented research (e.g. Desmet and Schifferstein, 2011). Another opportunity is to investigate how the dynamics of human–product interaction can be included in the measurement. Emotions experienced by product users unfold in time, depending on the course of the interactions and the events occurring during the interaction. A single outcome measure of the overall experience provides little insight into these dynamics. Continuous measurement of user experience could help designers identify the key episodes in the interaction and the time points they can act on to impact the user experience (Laurans et al., 2009).

Emotion may be only one aspect of user experience, but it is a pivotal one. All the thoughts and experiences that users have in relation to their products affect their emotions. Hence, when a product evokes positive emotions, it fits with the user's concerns on many levels – functional, aesthetic and symbolic. When it does not fit with the concerns on all levels, it will also evoke negative emotions. In that sense, "design for emotion" enables and stimulates designers to do what they do best: to work with a holistic perspective, focusing on the totality of the product to be designed, integrating all aspects into an envisioned possible future – one that evokes emotions like hope, desire, pride and inspiration.
References Arnold, M.B. (1960) Emotion and Personality, volume 1: Psychological Aspects. New York: Colombia University Press. Bonanno, G.A. and Keltner, D. (2004) “The Coherence of Emotion Systems: Comparing ‘On-line’ Measures of Appraisal and Facial Expressions, and Self-report”. Cognition and Emotion, 18 (3), 431–444. Bradley, M.M. and Lang, P.J. (1994) “Measuring Emotion: The Self-assessment Manikin and the Semantic Differential”. Journal of Experimental Psychiatry and Behavior Therapy, 25 (1), 49–59. Chulef, S., Read, S.J. and Walsh, A.A. (2001) “Hierarchical Taxonomy of Human Goals”. Motivation and Emotion, 25 (3), 191–232. Demir, E. (2010) “Understanding and Designing for Emotions”. Unpublished doctoral thesis, Delft University of Technology. Den Uyl, M. and van Kuilenburg, H. (2005) “The FaceReader: Online. Facial Expression Recognition”. In Proceedings of Measuring Behavior, 30 August–2 September 2005, Wageningen, The Netherlands. Desmet, P.M.A. (2002) “Designing Emotions”. Unpublished doctoral thesis, Delft University of Technology. Desmet, P.M.A. (2003) “Measuring Emotion; Development and Application of an Instrument to Measure Emotional Responses to Products”. In M.A. Blythe,
A.F. Monk, K. Overbeeke, and P.C. Wright (eds), Funology: From Usability to Enjoyment. Dordrecht: Kluwer Academic Publishers. pp. 111–123. Desmet, P.M.A. (2008) “Product Emotion”. In H.N.J. Schifferstein and P. Hekkert (eds), Product Experience. Amsterdam: Elsevier. pp. 379–397. Desmet, P.M.A. (2010) “Three Levels of Product Emotion”. In C. Bouchard, A. Aussat, P. Levy and T. Yamanaka (eds), The Proceedings of the Kansei Engineering and Emotion Research (KEER) International Conference 2010, Paris (France), 2–4 March 2010. pp. 238–248. Desmet, P.M.A. and Dijkhuis, E.A. (2003) “Wheelchairs Can Be Fun: A Case of EmotionDriven Design”. Proceedings of the International Conference on Designing Pleasurable Products and Interfaces, 23–26 June 2003. Pittsburgh, Pennsylvania, USA. New York: ACM Publishing. Desmet, P.M.A. and Hassenzahl M. (2011) “Towards Happiness: Possibility-driven Design”. Manuscript submitted for publication. Desmet, P.M.A. and Hekkert, P. (2007) “Framework of Product Experience”. International Journal of Design, 1 (1), 57–66. Desmet, P.M.A. and Schifferstein, H.N.J. (eds) (2011) From Floating Wheelchairs to Mobile Car Parks: a Collection of 35 Experience-Driven Design Projects. Den Haag, NL: Eleven Publishers. Desmet P.M.A., Porcelijn, R. and van Dijk, M. (2007) “Emotional Design: Application of a Research Based Design Approach”. Journal of Knowledge, Technology and Policy, 20 (3), 141–155. Eekels, J. and Roozenburg, N.F.M. (1991) “A Methodological Comparison of the Structures of Scientific Research and Engineering Design”. Design Studies, 12, 197–203. Ekman, P. (1994) “Strong Evidence for Universals in Facial Expressions - a Reply to Russell’s Mistaken Critique”. Psychological Bulletin, 115 (2), 268–287. Ekman, P. and Friesen, W.V. (1978) Facial Action Coding System: A Technique for the Measurement of Facial Movement. Palo Alto, CA: Consulting Psychologists Press. Fallman, D. (2006) “Catching the Interactive Experience: Using the Repertory Grid Technique for Qualitative and Quantitative Insight into User Experience”. In: Proceedings of Engage: Interaction, Art, and Audience Experience. November 2006. Sydney: University of Technology. Ford, M.E. and Nichols, C.W. (1987) “A Taxonomy of Human Goals and Some Possible Applications”. In M.E. Ford and D.H. Ford (eds), Humans as Self-Constructing Living Systems. Hillsdale, NJ: Lawrence Erlbaum Associates. pp. 289–311. Frijda, N.H. (1986) The Emotions. Cambridge: Cambridge University Press. Kelly, G. (1955) The Psychology of Personal Constructs. Vol. I, II. New York: Norton. Kim, J. and Wilemon, D. (2002) “Sources and Assessment of Complexity in NPD Projects”. RandD Management, 33 (1), 16–30. Koen, P., Ajamian, G., Burkart, R., et al. (2001) “Providing Clarity and a Common Language to the ‘Fuzzy Front End’”. Research Technology Management, 44 (2), 46–55. Laurans, G. and Desmet, P.M.A. (2008) “Speaking in Tongues – Assessing User Experience in a Global Economy”. In P.M.A. Desmet, S. Tzvetanova, P. Hekkert and L. Justice (eds), Proceedings of the 6th International Conference on Design and Emotion. Hong Kong: Hong Kong Polytechnic University Press. Laurans, G., Desmet, P.M.A. and Hekkert, P. (2009) “Assessing Emotion in Interaction: Some Problems and a New Approach”. In A. Guenand (ed.), Proceedings of the 4th International Conference on Designing Pleasurable Products and Interfaces, Compiegne (France), 13–16 October 2009. pp. 230–239.
Lazarus, R.S. (1991) Emotion and Adaptation. New York: Oxford University Press. Mauss, I.B., McCarter, L., Levenson, R.W., Wilhelm, F.H. and Gross, J.J. (2005) “The Tie That Binds? Coherence Among Emotion Experience, Behavior and Physiology”. Emotion, 5 (2), 175–190. Mugge, R. Schoormans, J.P.L. and Schifferstein, H.N.J. (2005) “Design Strategies to Postpone Consumer Product Replacement. The Value of a Strong PersonProduct Relationship”. The Design Journal, 8 (2), 38–48. Ortony, A., Clore, G.L. and Collins, A. (1988) The Cognitive Structure of Emotions. Cambridge: Cambridge University Press. Pham, M.T. (1998) “Representativeness, Relevance, and the Use of Feelings in Decision Making”. Journal of Consumer Research, 25, 144–153. Poels, K. and Dewitte, S. (2006) “How to Capture the Heart? Reviewing 20 Years of Emotion Measurement in Advertising”. Journal of Advertising Research, 46 (1) (Mar.), 18–37. Reynolds, T.J. and Gutman, J. (1988) “Laddering Theory, Method, Analysis and Interpretation”. Journal of Advertising Research, 28 (1), 11–31. Roozenburg, N.F.M. and Eekels, J. (1995) Product Design, Fundamentals and Methods. Chichester, UK: John Wiley & Sons. Russell, J.A. (2003) “Core Affect and the Psychological Construction of Emotion”. Psychological Review, 110 (1), 145–172. Scherer, K.R. (2005) “What are Emotions and How Can They be Measured?” Social Science Information, 44 (4), 695–729. Schifferstein, H.N.J. and Hekkert, P. (eds) (2008) Product Experience. New York: Elsevier. Sheldon, K.M., Elliot, A.J., Kim, Y. and Kasser, T. (2001) “What is Satisfying About Satisfying Events? Testing 10 Candidate Psychological Needs”. Journal of Personality and Social Psychology, 80, 325–339. Sleeswijk Visser, F. (2009) “Bringing the Everyday Life of People into Design”. Unpublished doctoral thesis, Delft University of Technology. Tellegen, A. (1985) “Structures of Mood and Personality and Their Relevance to Assessing Anxiety, with an Emphasis on Self-report”. In A.H. Tuma and J.D. Maser (eds), Anxiety and the Anxiety Disorders. Hillsdale, NJ: Erlbaum. pp. 681–706. Van Kleef, E., Van Trijp, H.C.M. and Luning, P. (2004) “Consumer Research in the Early Stages of New Product Development: A Critical Review of Methods and Techniques”. Food Quality and Preference, 16, 181–201. Westbrook, R.A. and Oliver, R.L. (1991) “The Dimensionality of Consumption Emotion Patterns and Consumer Satisfaction”. Journal of Consumer Research, 18, 84–91.
“Apple has maintained premium pricing by improving its products each year.” Rob Cyran, New York Times
This chapter covers quantitative research tools for understanding the reasons behind consumer satisfaction, for selecting the best ideas from among good ones, and for prioritizing consumers' desired benefits and their motivations by understanding their hierarchies. These tools are used to validate insights with consumers.
Chapter 7
Tools for Up-Front Research on Understanding Consumer Values

7.1 Kano Satisfaction Model
Alina Stelick, Kannapon Lopetcharat and Dulce Paredes
Key learnings
✓ Consumers' hierarchy of needs
✓ Value diagram
✓ True consumer languages
✓ Quantitative Kano map
7.1.1 Understanding the fundamentals of consumer satisfaction – Kano satisfaction survey

With many mature economies on shaky ground, steadily rising unemployment, fears of a double-dip recession, and doom and gloom prophesied on every business talk show and newscast, many consumer goods companies find themselves in a tough position. The competition is as fierce as ever. The cost of production continues to climb. Shareholders and Wall Street demand profits. And consumers, squeezed from all sides by the rising cost of living and falling incomes, are reluctant to part with their hard-earned money unless the product offered is an absolute necessity or something they love, love, love and cannot live without.
So what's the key to keeping consumers buying our products? How do we make sure that our customers keep coming back to us time after time for their favorites? How do we make our products stay current, exciting and new? One way is to understand the fundamentals of consumer satisfaction with our products and continuously build upon it. This is how Apple and the like are keeping their customers and bringing new folks into their stores year after year. The Kano product satisfaction model is one of the methodologies that allows the Consumer Explorer to gather insights and prioritize product attributes to optimize the product development process. Kano was originally developed to gauge customer satisfaction with a product or service (Kano, 1979; Kano et al., 1984). However, this chapter will focus on the product optimization and insights cataloging applications of the Kano methodology. The Kano model differentiates the reasons behind consumer satisfaction into four main classes of attributes: must-have, driver, delighter and indifferent. Must-have attributes are product features that require robust quality control and assurance. Driver attributes highlight desired performance, that is, the more the better. Delighter attributes are new features that are not currently in the product but are highly desired if present. Indifferent attributes are those not missed when absent and can be traded off for more desired features.

Here are some of the areas where Kano can be useful:
● Product innovation and development:
  ° What features should we "innovate"?
  ° What technologies should we seek out?
  ° How do I know that a new benefit will be "relevant" to my consumers?
● Marketing:
  ° What claims/product features should we be highlighting in the advertisement? What is motivating product purchase?
● Quality control:
  ° What product attributes must be delivered?
  ° When developing cost-reduced products within the current product category, what attributes can be traded off and what must be maintained?

7.1.2 Kano satisfaction survey step by step

There are six simple steps in conducting a successful Kano satisfaction survey (Table 7.1.1).
7.1.2.1 Step 1: Identification and generation of consumer requirements
This is the single most important step that can determine the success of your Kano research. This is no small endeavor. To ensure success, first determine the scope of your product category. For example, if you were doing a study about toothbrushes, you should be asking: “Are vibrating electric toothbrushes my
Table 7.1.1 Six steps to conduct a successful Kano satisfaction survey.

Step 1: Identify and generate consumer requirements
  Main objective: Gather languages to communicate benefits and describe product characteristics
  Tips: Define the scope of your products clearly and do not use technical words or jargon in your study

Step 2: Refine the element list
  Main objective: Edit for exclusivity between questions
  Tips: Thirty elements are recommended for questionnaire length

Step 3: Create Kano questionnaire
  Main objective: Make sure that the questions are easily understandable in the same way among target consumers
  Tips: Ensure appropriate translation to other languages for both questionnaire and Kano's scale; a pilot test is recommended

Step 4: Define your target consumers
  Main objective: Create inclusion and exclusion criteria to screen consumers for your product category
  Tips: Recommended 200 consumers per target group

Step 5: Field the study
  Main objective: Properly administer the Kano questionnaire to consumers
  Tips: One-on-one interviews and Internet surveys are standard; presentation of questions has to be randomized

Step 6: Analysis and interpretation
  Main objective: Convert the frequencies of each question into indices that are easily understandable
  Tips: Presentation should be simple – table or chart
category? Or am I interested only in the basic toothbrush?" To help you answer these types of questions, the following are recommended steps:
● Examine all the previous research you have available in the product category: historical sensory panels, ethnographies, surveys, focus groups, syndicated data, etc.
● Go to the store and see what the consumer is seeing on the shelves and in your ads and competitive ads.
● Go to competitor websites and consumer blogs — these days anyone with a Facebook® page or Twitter® account can give your product a review which can be eye-opening.
● If you have technical terms in the list of attributes (aka features or requirements), make sure that you can translate the features into consumer friendly language. Conduct qualitative interviews to ensure the meanings are understandable.
● If you are new to the category, consider conducting other forms of consumer research before fielding a Kano study. Here are the three methodologies you can use to generate product descriptors/elements quickly and efficiently:
  ° Free choice or flash profiling or equivalent technique (see Chapter 6.1 for details)
  ° Ethnography (see Chapter 6.2 for details)
  ° Qualitative multivariate analysis (QMA) (see Chapter 6.3 for details).

And second, adding new product attributes that your company would like to know or pursue is recommended. This list gives a few recommendations on how to identify new product attributes:
● Review current consumer habits and attitudes (H&A) and future trends data.
● Have a talk with your innovation or research organization about new and "upcoming" technologies they are interested in.

7.1.2.2 Step 2: Refine the element list
Keep it short, simple and current. The recommended maximum number of attributes to use is 30. This will limit respondent fatigue and boredom while answering the questionnaire as each attribute will be asked twice, that is, in functional and dysfunctional forms. Pick and choose how to word attributes in consumer speak. Review it with a typical category user and if they don’t understand what a word means, a regular Joe or Jane on the street won’t either. Lastly, when running global research, ensure that the words/phrases you choose are easily translated. Verify translation before fielding by “back translation”, so that the meaning of each attribute is communicated correctly and consistently.
7.1.2.3 Step 3: Create questionnaire
Once the element list is finalized, word each attribute in two ways: one as if it is present in the product, another as if it is absent in the product. For example, a lipgloss "feels moist on the lips" ("moist" feel is present in the product), and a lipgloss "does not feel moist on the lips" ("moist" feel is absent in the product). Group all the "present"/"positive" attribute wordings into one list and "absent"/"negative" wordings into another. There are a few differently worded Kano scales. The classic five-point Kano scale is as follows (Kano, 1979; Kano et al., 1984):
● I like it that way
● It must be that way
● I am neutral to this feature
● I can live with it that way
● I dislike it that way.
Several translations from the original exist, as some feel that it is not as easy to interpret by the public (Walden, 1993). Table 7.1.2 shows three examples of the variations. Some researchers have argued that the whole scale wording does not make sense and recommended leaving off some scale choices (Berger et al., 1993). However, our experience shows that keeping the original framework is important to get the full benefit of the Kano methodology. And rather than leaving off
Table 7.1.2 Examples of three variations of the five-point Kano scale (the far left column shows the original).

Original | Alternative 1 | Alternative 2 | Alternative 3
I like it that way | I enjoy it this way | I like it | This would be very helpful for me
It must be that way | I expect it this way or it is the basic necessity | I expect it | This is a basic requirement for me
I am neutral to this feature | I'm neutral | I'm neutral to this feature | This would not affect me
I can live with it that way | I can tolerate it | I dislike it but I can live with it this way | This would be a minor inconvenience
I dislike it that way | I dislike it | I dislike it and cannot accept it | This would be a major problem for me
one of the scale options, we recommend instructing the respondents to read the question and scale choices carefully and choose the most appropriate answer. That is, the same scale choices should be available for both positive and negative lists. Once both the element list and wordings along with the scale are finalized, randomize the attributes for each respondent within the “positive” list, then within the “negative” list. The randomized “positive” attribute list is to be presented first to a respondent, then a randomized “negative” attribute list. The use of computerized survey programs has simplified questionnaire design, randomization and data collection.
7.1.2.4 Step 4: Determine your population
Your study population should reflect your real consumers and their experiences in the marketplace within their particular brand consideration sets. The respondents should be product category users or consumers that have used several versions of the same product type, for example from different brands, forms, situations. You may also consider a quota of respondents that are more "early adopters" who tend to buy the newest products available on the market. This would be especially important if the attribute set in your Kano questionnaire includes many innovative product features. When it comes to the population size, use at least n = 200 per test market. This would allow you to conduct segmentation analysis on the study results post hoc. Also include a few demographic and psychographic questions to be answered by each respondent after the main Kano questionnaire is completed. This will help describe your segments.
7.1.2.5 Step 5: Field the study
This step is very simple and straightforward. There are several agencies that can provide fielding and data analysis services for this type of study. What used to be a painful "paper and pencil" process of cross tabulating results about 5–10 years ago is now as easy as a simple "push of a button". However, we must caution you to screen your fielding agency to make sure that they have all the appropriate randomization and programming capabilities and that they understand the methodology before committing your research monies (see Chapter 11). For fielding a Kano questionnaire, there are two special requirements:
(1) For each respondent, a unique randomization scheme must be created separately for the "present"/positive attribute list and the "absent"/negative attribute list.
(2) The "present"/positive attribute list must always be presented first to each respondent; only then is the "absent"/negative attribute list to be presented.
It is also important that the agency has the capability to include additional or "classification" questions after the main Kano questionnaire (Stelick et al., 2009).
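The sketch below is a minimal, hypothetical illustration of these two fielding requirements; it is not any agency's actual survey software, and the attribute wordings and function name are invented for the example. Each respondent gets an independent random order within the positive list and within the negative list, and the positive list is always served first.

```python
import random

def build_presentation_order(respondent_id, positive_items, negative_items):
    """Return the per-respondent question order for a Kano questionnaire."""
    rng = random.Random(respondent_id)   # unique, reproducible scheme per respondent
    positive = list(positive_items)
    negative = list(negative_items)
    rng.shuffle(positive)                # randomize within the positive list
    rng.shuffle(negative)                # randomize within the negative list
    return positive + negative           # positive list is always presented first

# Hypothetical attribute wordings for a lipgloss study.
pos = ["feels moist on the lips", "has a fruity scent", "lasts all day"]
neg = ["does not feel moist on the lips", "does not have a fruity scent",
       "does not last all day"]

print(build_presentation_order("R-001", pos, neg))
```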
7.1.2.6 Step 6: Analysis and interpretation
Data analysis of Kano satisfaction survey requires the pairing of the responses from the “positive”/present and “negative”/absent forms of a requirement. For each consumer, each response-pair will be assigned a Kano class according to Figure 7.1.1. For example, if a respondent answers that he/she “likes” the attribute when it is “present” in the product and “dislikes” it when it is “absent”, then for that particular person, this attribute is a “driver”. Once this step is completed for each respondent, add all the frequencies across the whole study population together
              Dysfunctional
Functional    Like   Must-be   Neutral   Live with   Dislike
Like          Q      D         D         D           Dr
Must-be       R      I         I         I           M
Neutral       R      I         I         I           M
Live with     R      I         I         I           M
Dislike       R      R         R         R           Q

Figure 7.1.1 Kano classification of a requirement based on a consumer's responses from "positive"/present form and "negative"/absent form using the classic five-point Kano scale. Note: Q = questionable, D = delighter, Dr = driver or one-dimensional, M = must-have or must-be, I = indifferent and R = reverse.
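A minimal sketch of this classification step, assuming the classic five-point scale and the lookup table in Figure 7.1.1; the function and variable names are illustrative and not taken from any particular survey package.

```python
# Kano class for each (functional, dysfunctional) response pair, per Figure 7.1.1.
SCALE = ["like", "must-be", "neutral", "live with", "dislike"]
KANO_TABLE = {
    "like":      ["Q", "D", "D", "D", "Dr"],
    "must-be":   ["R", "I", "I", "I", "M"],
    "neutral":   ["R", "I", "I", "I", "M"],
    "live with": ["R", "I", "I", "I", "M"],
    "dislike":   ["R", "R", "R", "R", "Q"],
}

def kano_class(functional, dysfunctional):
    """Classify one respondent's answer pair for one attribute."""
    return KANO_TABLE[functional][SCALE.index(dysfunctional)]

# Example: likes the attribute when present, dislikes it when absent -> driver.
assert kano_class("like", "dislike") == "Dr"
```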
              Delighter (D)   Driver (Dr)   Must-have (M)   Indifferent (I)   Questionable (Q)   Reverse (R)
Attribute 1   35%             24%           15%             15%               10%                1%
Attribute 2   8%              22%           23%             20%               15%                22%
Attribute 3   38%             17%           3%              35%               5%                 2%
Attribute n   ...             ...           ...             ...               ...                ...

Figure 7.1.2 An example of Kano data sheet. Consumer's requirement (row) and Kano classification (column) form a table of frequencies where each cell contains frequencies of a requirement to be assigned to different Kano classes.
and present them in a table format similar to the one shown in Figure 7.1.2. (Each row represents a particular attribute and each column a Kano classification, e.g. delighter, driver, must-have, indifferent, questionable, reverse.) This simple frequency counting is sufficient and easy for communicating the results of the study to others. However, before moving forward, the following two steps must be performed to ensure correct interpretation of the results:
(1) Check the frequency of the "reverse" and "questionable" categories for each requirement. If the frequencies of a requirement for these classes are more than 20 percent (in combination or alone) (see attribute 2 in Figure 7.1.2), it is recommended to discard this particular attribute, as consumers are signaling to you that they do not understand your question.
(2) Check the distribution of the frequencies of each requirement (this step might be missed when an automated analysis process is used). You might discover that the same or similar frequency is assigned to two or more different attribute classes, for example "delighter" and "indifferent". This is a classic sign of segmentation and requires additional analyses. The same rule applies to other classes (except questionable and reverse).
Once these steps are completed, you may present your data in a graphical format to reflect the original Kano philosophy or show it in a table. If you choose the graphical format, use the following formulas to determine the attribute coordinates (Berger et al., 1993):
Degree of satisfaction (Y axis) = (D + Dr)/(D + Dr + M + I)
Degree of requirement (X axis) = (Dr + M)/(D + Dr + M + I)
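A minimal sketch of how these two coordinates, along with the 20 percent screening rule from step 1, could be computed from a Kano data sheet; the frequencies below are the illustrative values for Attribute 1 in Figure 7.1.2, and the function names are hypothetical.

```python
def kano_coordinates(freq):
    """Map Kano class frequencies to (degree of requirement, degree of satisfaction)."""
    d, dr, m, i = freq["D"], freq["Dr"], freq["M"], freq["I"]
    denom = d + dr + m + i
    satisfaction = (d + dr) / denom      # Y axis
    requirement = (dr + m) / denom       # X axis
    return requirement, satisfaction

def is_questionable(freq, threshold=0.20):
    """Flag attributes whose reverse + questionable share exceeds the 20 percent rule."""
    return (freq["R"] + freq["Q"]) > threshold

attribute_1 = {"D": 0.35, "Dr": 0.24, "M": 0.15, "I": 0.15, "Q": 0.10, "R": 0.01}
print(kano_coordinates(attribute_1))     # -> approximately (0.44, 0.66)
print(is_questionable(attribute_1))      # -> False
```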
Figure 7.1.3 Typical results from a Kano satisfaction survey after transforming the Kano data into degree of satisfaction (Y-axis) and degree of requirement (X-axis) using the formula provided. The plot quadrants are labeled delighter, driver, indifference and must-have.
Figure 7.1.4 Once certain attributes have been introduced to consumers, the attributes can change their Kano classes. The filled and empty dots show the movement of Kano classes of requirement from 2006 to 2010, plotted on the same degree of satisfaction versus degree of requirement axes as Figure 7.1.3.
Figure 7.1.3 shows an example of graphical format from the equation above. The graphical presentation is useful when you need to compare the attribute classification, for example across time. Figure 7.1.4 shows how attributes (different symbols) migrate through time. However, we have found that our clients can digest the information better when the results are presented in a table format, especially when you are trying to organize attributes from different product aspects (Table 7.1.3).
Table 7.1.3 A suggested table format to present the conclusions from a Kano satisfaction survey. Product requirements are grouped into product aspects (columns, e.g. product aspect 1: visual appeal; product aspect 2: sweet) and into Kano classes (rows: must-have, driver, delighter), so that each cell lists which requirements are must-have, driver or delighter for consumers within each product aspect (Prokopchuk et al., 2007).
7.1.3 Comparison with degree of importance surveys

In the past, we used to routinely run a degree of importance survey to determine what attributes were important in a product. For that, a list of product attributes was compiled and presented to respondents (product category users) with a simple importance scale (Figure 7.1.5). The respondent would answer this and all other questions. The top 2 box counts would be calculated to determine whether a significant majority of respondents (>95 percent level of confidence, or LOC) found this particular attribute "important". Based on the top 2 ratings it is possible to "tier" the attributes into different levels of importance (Griffin and Hauser, 1993). As a result the researcher would typically get a "laundry list" of attributes, all or 99.9 percent of which would be considered "important" by the criteria described above. Although this looks like a great result and something that's very easy to do, it complicates things for the product developer. Because now you're telling a product developer that "everything" in their product is "important" to a consumer, there is no way to prioritize. To remedy this we turned to Kano. This method allows us not only to determine what's important and what's not, but also to prioritize within the important and non-important categories (Table 7.1.4). As an example, we ran the same attribute list in a Kano study and in a degree of importance survey with similar groups of women, n = 200 regular users of lipstick from the US. What we found was that in the importance survey 20 attributes were found "important" in the same importance tier,
Thinking about this widget, would you say having this widget be easy to use is...
  Very important to me
  Moderately important to me
  Somewhat important to me
  Not very important to me
  Not at all important to me
Figure 7.1.5 An example of importance rating question.
Table 7.1.4 Comparison of advantages provided by Kano satisfaction survey and degree of importance survey.

Kano satisfaction survey:
● Provides ability to classify attributes quantitatively and differentiate between "Must-Have" and "One-Dimensional" within attributes that are "important" to the consumers
● Provides ability to identify new opportunities ("Attractive" attributes) whereas they may be classified as "not important" in the "Degree of Importance" surveys

Degree of importance survey:
● Is not as restricted in number of attributes asked as Kano; therefore, attributes do not have to be as targeted/hypothesis-driven as in Kano
● Provides opportunity to test hypotheses with target population
Attributes that are NOT IMPORTANT from the degree of importance survey:
● Attributes 32 and 34 – Kano result: delighter (opportunities for innovation)
● Attribute 6 – Kano result: indifferent (not worth development or highlighting in advertisement)
● Attributes 35 and 36 – Kano result: reverse (lack of consumer interest, may be undesirable)

Figure 7.1.6 An example of when degree of importance survey fails to identify opportunities or guide product innovation and development compared to Kano satisfaction survey.
whereas the Kano survey was able to “differentiate” out seven of these 20 as “must-have” attributes and the rest it classified as “drivers”. This provided a clear usable hierarchy of attributes. At the same time, several attributes from the “not at all” important list in the degree of importance test were classified as either “delighter” or “indifferent/reverse” in the Kano study (Figure 7.1.6). The Kano satisfaction survey was able to tease out what’s less important and identify opportunities for innovation and development via attractive attributes.
7.1.4 Philosophy behind the Kano satisfaction model

Until 1984, many researchers believed that consumer satisfaction or liking (collectively called consumer hedonics) was a one-dimensional construct that is driven by the increase or decrease of perceived product attributes (also known as requirements or qualities). However, in 1984, Professor Noriaki Kano of Tokyo Rika University and his colleagues in Japan proposed a new philosophy, the Kano satisfaction model, which has changed the way consumer researchers understand the meaning behind consumer hedonic responses (Kano et al., 1984). The Kano satisfaction model challenges the notion that consumer satisfaction is driven only by an increase or a decrease in a product attribute (one-dimensional attribute); instead, the model introduces three additional reasons behind consumers' satisfaction: must-have attributes, delighter attributes and indifferent attributes (Figure 7.1.7).
Figure 7.1.7 Theoretical arrangement of product attributes classified based on the Kano satisfaction model. The vertical axis runs from customer dissatisfied to customer satisfied and the horizontal axis from requirement not fulfilled to requirement fulfilled. Delighter (D): attributes that are unimportant to consumers; however, they will be very satisfied when there is one in their product. Driver (Dr): attributes that fit with the conventional meaning of drivers of liking, as the higher the attribute, the higher the liking and vice versa. Must-be (M): attributes that consumers find to be very important to them. Indifferent (I): attributes that are unimportant to consumers.
One-dimensional attributes are attributes that fit with the conventional meaning of drivers of likings as the higher the attribute, the higher the liking and vice versa. General characteristics of one-dimensional attributes are:
(1) Most of the time, consumers can articulate these attributes
(2) These attributes are quite specific for consumers
(3) The linear relationship with consumers' liking makes it easy to measure
(4) Most of the time, there are technical definitions of these attributes.
Must-have or must-be attributes are product qualities/requirements that consumers find to be very important to them. Usually, must-have attributes confound with one-dimensional attributes in any importance rating study because the study does not allow consumers to differentiate these two types of attributes. General characteristics of must-have attributes are:
(1) Must-have attributes are needed by consumers.
(2) Because these attributes are needs, consumers usually have a difficult time to articulate any specifics about the attributes but they usually describe the attributes indirectly (e.g. giving examples of situations).
(3) These attributes are usually obvious to consumers. It is so obvious to consumers that they usually forget to mention must-have attributes when they are asked.
(4) Must-have attributes are self-evident. If the attributes are not there, consumers will believe that the product will not work.
Delighter attributes are unimportant attributes to consumers; however, they will be very satisfied when there is one in their product. For example, Westin Hotel chains' Heavenly Beds when they first came out in the late 1990s and engaged people in a greater discussion around the importance of a comfortable bed and its linen, or Kellogg's Corn Flakes as an alternative to tough whole grains around the turn of the 20th century. General characteristics of delighter attributes are:
(1) Consumers cannot articulate delighter attributes
(2) Delighter attributes are tailored attributes for a specific need and want for a particular group of consumers
(3) Consumers' response when they encounter delighter attributes in a product is delight.

Indifferent attributes require special treatment in practice. First, the data must be checked for consumer segmentation (Berger et al., 1993). If there is no major segmentation in the consumer sample, then the second interpretation, which is that consumers do not really care about these attributes, is warranted.

The next two classes of attributes, reverse attributes and questionable attributes, do not have a theoretical definition, as these attributes are found only in practice. Reverse attributes are found when the results for certain attributes are opposite from what is usually expected. Reverse attributes are usually a product of misunderstanding of the questions by consumers.
Therefore, the best way to prevent it from happening is piloting the questionnaire with target consumers. If the questionnaire has been confirmed to be easy for target consumers to understand but there are still reverse attributes in the results, it is possible that there are segments of consumers. This is not a bad finding, as researchers learn that there is at least one group of consumers who do not interpret things the same way as the researchers do. Questionable attributes are indicators of misunderstanding of the questions by consumers, and we recommend discarding these attributes from further interpretation.

The Kano satisfaction survey allows Consumer Explorers to understand the reasons behind a large number of important attributes, derived indirectly by asking questions in two different forms: positive/functional and negative/dysfunctional. By doing so, Consumer Explorers can recommend appropriate actions to the team. The Kano satisfaction survey is a valuable tool to understand consumers' needs, but it is not a tool to discover consumers' needs. Many of the qualitative methods described in Chapter 6 are better tools to discover consumers' needs, which can then be verified by a quantitative Kano satisfaction survey.
7.1.5 Summary and future

Professor Kano's philosophy of understanding the reasons behind consumer satisfaction, and his Kano protocol, have a direct impact on how Consumer Explorers think about consumers' hedonic responses. Lately Kano's philosophy has been adopted in many fields such as economics (Sauerwein et al., 1996) and product testing (Rivière et al., 2005a; 2005b). Rivière et al. (2005a) applied Kano's philosophy to preference mapping, as seen in the DASA preference mapping method, which resulted in more insights behind liking scores from consumers. Kano's philosophy and the Kano satisfaction protocol are gaining popularity in many fields. Future advancements in digital surveys will further enhance the presentation of Kano elements to be more interactive and visual, but the core philosophy of Professor Kano's elegant model must be maintained as a cornerstone of understanding the reasons behind hedonic responses.
References Berger, C., Blauth, R., Boger, D., et al. (1993) “Kano’s Methods for Understanding Customer-defined Quality”. Journal of Center for Quality Management, 2 (4), 3–36. Cyran, R. (2010) “How to Tell if Apple Falters”. New York Times, 6 September 2010. Last accessed on 11 October 2010: http://www.nytimes.com/2010/09/06/business/ economy/06views.html Griffin, A. and Hauser, J.R. (1993) “The Voice of the Customer”. Marketing Science (Winter), 1–27. Kano, N. (1979) “On M-H Property of Quality”. Nippon QC Gakka, 9th Annual Presentation Meeting, Abstracts, pp. 21–26.
Kano, N., Nobuhiku, S., Fumio, T. and Shinichi, T. (1984) “Attractive Quality and Must-be Quality”. Research summary of a presentation given at Nippon QC Gakka: 12th Annual Meeting (1982), 18 January. Prokopchuk, A., Pereira, B., Paredes, D., Katz, R. and Moskowitz, H. (2007) “Lipstick: Kano Satisfaction Study Presentation”. Presented in 7th Pangborn Sensory Science Symposium, 12–16 August, Minneapolis, USA. Rivière, P., Monrozier, R., Pagès, J. and Saporta, G. (2005a) “Dissatisfaction and Satisfaction Analysis for Preference Mapping (DASA - PrefMap): A New Preference Mapping Method Using a Sequential and Adapted Protocol”. Presented in 6th Pangborn Sensory Science Symposium, 7–11 August, Harrogate, UK. Rivière, P., Monrozier, R., Pagès, J. and Saporta, G. (2005b) “Kano’s Satisfaction Model Applied to External Preference Mapping: A New Way to Handle Non-linear Relationships Between Hedonic Evaluations and Product Characteristics”. Presented in 4th International Symposium on PLS and Related Methods, 7–9 September, Barcelona, Spain. Sauerwein, E., Bailom, F., Matzler, K. and Hinterhuber, H. (1996) “The Kano Model: How to Delight your Customers”. In Preprints Volume I of the IX. International Working Seminar on Production Economics, Innsbruck/Igls/Austria, 19–23 February, pp. 313–327. Stelick (Prokopchuk), A., Paredes, D., Moskowitz, H. and Beckley, J. (2009) “Kano Satisfaction Model in Cosmetics”. Presented in Congress Cosmetic and Sensory: From Neuroscience to Marketing. 24–26 June, Tours, France. Walden, D. (1993) “Kano’s Methods for Understanding Customer-defined Quality”. Center for Quality of Management Journal, 2 (4, Fall), 1–37.
7.2 Conjoint Analysis Plus (Cross Category, Emotions, Pricing and Beyond)
Daniel Moskowitz and Howard Moskowitz
Key learnings
✓ Systematic research through design-based experimentation
✓ Building study elements
✓ Identifying appropriate scales and metrics
✓ Data analysis, interpretation and reporting
✓ Mind genomics or science of the mind
7.2.1 Consumer research: Experimentation vs. testing

In consumer research, the notion of experimentation has evolved into two distinct forms: (1) testing and (2) experimentation. Testing uses the responses of the consumer to judge the performance of test stimuli on a number of characteristics. From the ratings, the Consumer Explorer (CE) discovers the attributes that perform well, and the attributes that perform poorly. When testing several unconnected stimuli, the CE must perform additional analyses in order to discover the specific characteristics that are responsible for, or which drive, the strong performances versus the poor performances, respectively. The bottom line is that whether testing single stimuli or an array of unconnected test stimuli, the CE generates a report card about performance, and may even discover some possible causes of acceptance. The key thing to keep in mind is that the discovery of the drivers is left to the intelligence and experience of the CE, rather than being built into the system so that anyone can discover those drivers. In experimentation, the goal is to make these drivers easy to discover, through systematic up-front work. The CE systematically varies the factors that can be
controlled. Experimental design lays out the specific combinations of the variables. Each respondent evaluates one, several, or even all of these combinations, rating each combination on one or more scales. From the ratings, the CE uses statistical tools, especially least-squares regression, in order to estimate the specific contribution attributable to each element. The scientific discipline of experimental design has made it popular among development scientists in industry, as well as among consumer researchers who want to apply it to realms such as the combination of ideas (Box et al., 1978). Furthermore, the widespread use of shrink-wrapped software, for example SYSTAT (2008), has made experimental design accessible to everyone who works with a personal computer. The bottom line is that drivers or causes of responses can be isolated when the CE takes the time to create, test and analyze responses to systematic variations.
7.2.2 Conjoint analysis (aka conjoint measurement)

Conjoint analysis (CA) quantifies the response to mixtures. From that response, CA attempts to identify the underlying impact or utility of each element in the mixture. At its very core, CA is the implementation of experimental design. It has come to mean systematic exploration of combinations of independent variables which are discrete, either appearing or not appearing in a particular combination. Experimental design, in general, and conjoint analysis, in particular, operate under the assumption that people learn the most about independent variables when they are presented in combination, the way they would be experienced in real life. Indeed, people often cannot evaluate single elements (ideas) in a valid way. For example, brands of products are important, and would be rated important when evaluated singly. Yet, when evaluated in combination with product features and health benefits, quite often brands do not perform as well. Conjoint analysis pulls out the effect of the individual elements when they are evaluated in the "ecologically-valid" mixtures that constitute the experience of daily life. Conjoint analysis is a branch of statistical design. The stimuli are typically discrete options, present or absent in the test stimulus. These stimuli or elements are often cognitively "rich", having meaning in and of themselves, such as brand names, benefits and prices.
7.2.3 Doing the basic conjoint analysis experiment

There are seven steps to creating a successful study using conjoint analysis (CA).
7.2.3.1 Step 1: Set the objective(s) of a study
This step is the most critical as it dictates the conditions, details and results of the study. In practice, it is rare to run a study with a single objective. Conjoint analysis studies are no different. Therefore, prioritizing is the key to success in this step. Conjoint analysis is one of the best methods for assessing the magnitude of interest or impact of product attributes in order to select the most suitable
combinations of product offerings that suit both consumers’ needs and business strategy. However, CA is not an appropriate tool for classifying attributes into different kinds or measuring the impact of product attributes that are functionally critical but not emotionally delightful. Thus, as typically implemented by consumer researchers, CA works with stimuli that can be understood when the respondent reads a product or service description.
7.2.3.2 Step 2: Create and/or select elements (attributes) and classify the elements into relevant categories
The rationale for this second step is that categories and their elements are the crucial building blocks of the CA study. They are the fundamental building blocks of the experimental design, the phrases that will be presented in the test stimuli. A category is a collection of similar elements; technically, a category is also known as a "variable" in experimental design language. In practice, a category does not always contain elements of the same kind, and the consequences are usually not severe. However, keeping the same kind of elements in a category allows researchers to assess the overall impact or importance of that category compared to other categories in the same study. An element is an individual attribute or product offering to which researchers want consumers to respond. An element can be as short as "red color" or as long as "made from organic white tea exclusively grown in China with a fair trade agreement". It can be as abstract as "creamy, rich and silky smooth chocolate for decadent sensation in your mouth" or as factual as "$8.99 per pound". The key to a successful CA study is that the elements must fit the objectives for which the team has an action plan and action standard. For example, when the innovation team would like to figure out which product offerings are best for a long-term project, the action standard should be to identify highly motivating elements that have the highest impact regardless of feasibility. However, when the objective is short term, the action standard should be to identify those particular elements which have high impact and can be easily claimed and/or produced.
7.2.3.3 Step 3: Select an experimental design
The experimental design governs the specific combinations of elements that the respondent will see and evaluate. The experimental design can be likened to a recipe book with many recipes; in turn, each test combination constitutes one recipe in this book. The experimental design is a critical part of CA studies, but often the complexity behind the design makes it a hindrance. It is common for CEs to have many questions for which the technically appropriate experimental design is too costly. Therefore, prioritizing the study objectives allows researchers to select the best and most practical designs to use. When the CE is interested only in main effects, or the impact of interaction terms is negligible, then many screening designs can be used, such as Plackett-Burman designs, Simplex designs, 2^k factorial designs and fractions of the 2^k series (Montgomery, 1991; Moskowitz et al., 2006). If the objective is to study interaction effects between the elements from different categories, high resolution experimental designs such as full factorial designs,
D-optimal designs, some fractional factorial designs, central composite designs (for second-order effects), or Box-Behnken designs (for third-order effects) may be used. Many statistical packages have experimental design modules that enable users to design experiments easily, and there are many service providers that integrate experimental design into their conjoint analysis services. In one of the more popular methods today, IdeaMap®.Net, a do-it-yourself conjoint package, the experimental design is set up to provide a specific set of combinations of elements in an efficient way. With IdeaMap®.Net, the user doesn't have to know the principles of experimental design but instead has to select a specific design, with a fixed number of categories and an equal number of elements in each category.
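To make the "recipe book" idea concrete, here is a minimal sketch of how combinations of two to four elements, at most one per silo, might be assembled. The silo and element names are invented placeholders, and the random sampling below only illustrates the combination logic; it is not the IdeaMap®.Net procedure or any of the formal designs named above. A real study would take its combinations from a design module in a statistics package.

```python
import itertools
import random

# Hypothetical silos: four categories, each with a few element phrases.
silos = {
    "A": ["A1", "A2", "A3"],
    "B": ["B1", "B2", "B3"],
    "C": ["C1", "C2", "C3"],
    "D": ["D1", "D2", "D3"],
}

def build_concepts(n_concepts=24, seed=0):
    """Create test concepts of 2-4 elements, with at most one element per silo."""
    rng = random.Random(seed)
    concepts = []
    for _ in range(n_concepts):
        # choose which silos contribute (2-4 of them), then one element from each
        chosen_silos = rng.sample(list(silos), k=rng.randint(2, 4))
        concept = tuple(rng.choice(silos[s]) for s in sorted(chosen_silos))
        concepts.append(concept)
    return concepts

for concept in build_concepts(6):
    print(concept)
```

Each tuple printed is one "recipe" that a respondent would read and rate.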
7.2.3.4 Step 4: Develop the appropriate scale by which to obtain consumer feedback
The scale provides a great deal of information about the consumer's response, so it is important to develop the appropriate scale. It often looks quite easy, and developing a scale without considering what one might learn is, in fact, easy. Yet a bit of thinking will yield other types of scales that tell much more. The most typical scales are fixed-point category scales that "evaluate" the test concepts. Examples include the traditional nine-point hedonic scale first proposed by Peryam and Pilgrim (1957), as well as the five-point purchase intent scale (1 = definitely not purchase → 5 = definitely purchase). Purchase intent scales are often confused with liking scales, but are different. The respondent evaluates the likelihood of buying the product, and buying may call into play other dimensions, such as price and efficacy, not addressed by liking. Appropriateness for a specific end use is also often used. When the respondent evaluates a concept using appropriateness for a specific occasion or end use, the decision calls into play both acceptance of the concept and the context of use. A product can be liked and yet be inappropriate for a specific end use.

Generally these scales are treated in one of two ways. One way is to use the actual numerical rating as the dependent variable when one "models" the data to develop utility values. Thus the actual rating itself is used in the statistical analysis. A second way is to convert the rating scale to a binary value, and then use that value as the dependent variable. Thus, on a nine-point scale, one might convert ratings of 1–6 to 0, and ratings of 7–9 to 100. One would then use regression analysis with the binary values as dependent variables. Other evaluative scales can be used, such as a seven-point scale, or indeed any other type of evaluative scale.

The rating scale need not be limited to evaluation. Ratings of price and selection of feelings/emotions are also valid rating scales. When it comes to price, the rating scale can present the consumer with different price points. The question would be phrased along these lines: "Select the one price that you feel fits the concept (or vignette) that you have just read." The rating scale would then present the consumer with a graded series of prices. The prices have to be monotonic (increasing from low to high), but do not have to be equally spaced. There should be a price point for 0 dollars (would not pay). Best practice dictates that each scale point carry only one price, and that the instruction ask the consumer to pick the scale point whose price is closest to the price he or she would pay for the product or service described by the concept.
Figure 7.2.1 Example of a concept (package design) with an emotion question (source: Moskowitz, 2009).
More recently, CEs have begun to explore the inner experience of emotion and feeling. These terms are not intended to limit the focus to classically defined emotions, but to call attention to attributes that the person feels apply to himself or herself, rather than to the stimulus (Meiselman, personal communication, 2010). The field of emotion and feeling is increasingly being seen as the "new frontier" in consumer research, and has been incorporated successfully into the IdeaMap®.Net system. Emotion has to be treated differently from other scales. We are accustomed to working with conjoint analysis where we rate our feelings when we read the test concept. Typically our ratings fall along some sort of criterion variable, such as purchase interest or liking, or even price. What happens, however, when we select one of several qualitatively different responses? For example, the respondent can be instructed to read the test concept and select the feeling that he most strongly experiences. Figure 7.2.1 shows an example of this type of question, for a package created according to an experimental design. The analysis for such a study differs. When dealing with a set of different emotions, for example five emotions, we assume the respondent feels the selected emotion and does not feel the other four emotions. We code the selected emotion 100, and the remaining emotions 0. Across a large number of concepts we relate the presence/absence of the elements to the emotions felt. Typically, in such cases we run five ordinary least squares regressions, one per emotion. We do not use the additive constant when computing the regression model. The result of the analysis shows the conditional probability of a respondent selecting an emotion when a specific element is presented in a test concept.
7.2.3.5 Step 5: Analyzing the data
We are accustomed to one-at-a-time analyses where the respondents evaluate one stimulus or a set of related stimuli. We then look for patterns using regression modeling, or do tests using inferential statistics to determine whether or not two or more stimuli differ from each other. The focus of conjoint analysis is more towards the side of recognizing patterns and relations, rather than inferential statistics, although there is room for inferential statistics when one wants to establish the degree of confidence that two stimuli differ from each other.

Any conjoint study begins with an underlying experimental design or recipe. A conjoint study comprises many of these recipes, not just one. Furthermore, the elements in the recipe, that is, in the test concepts or vignettes, appear several times, against different backgrounds. Finally, the elements are statistically independent of each other (by the nature of the experimental design). Since the elements are statistically independent of each other, the experimental design leads to regression analysis. The regression analysis can either be logistic (which is appropriate when the dependent variable is binary), or the regression analysis can be ordinary least squares (OLS). Ordinary least squares is a little less appropriate, statistically, but with it one can create a powerful science, and communicate the results in a way management can easily understand. In this book, we will use OLS, fully recognizing there is a somewhat more powerful statistical method, but one that is not easily understood, hard to communicate and does not lead to a science. It's a matter of a value judgment here.

Analyzing overall rating data is done in two ways: first, by using the actual ratings as the dependent variable, and second, by transforming the ratings to a binary scale. When we use the actual ratings as the dependent variable, OLS tells us the number of rating points contributed by each element. When we convert the ratings to a binary scale (e.g. 1 to 6 → 0; 7 to 9 → 100), we are changing our focus from studying how elements "drive intensity of feeling" to "how elements drive membership in a group" (don't accept the concept versus accept the concept). The two dependent variables sound the same, and in fact get to the same end result, namely which individual elements in the conjoint analysis perform well. However, the focus differs. The former, using the actual rating as the dependent variable, focuses on intensity of feelings, and is typically employed by those interested in the respondent himself, that is, how he feels. The latter, using the binary transformed data, focuses on membership in a group, and comes from a long history of sociology and consumer research where the interest is in external, measured behavior that is typically "no/yes".

Typically the OLS analysis uses the additive constant, a measure of basic interest in the product or service. The individual coefficients are the utilities, and show the impact of each element. When the dependent variable is dollars or some other economic indicator that has a numerical value (e.g. years that one would use), the OLS model begins by transforming the ratings to the actual metric. For instance, when the rating scale is a 1–9 scale, with each scale point corresponding to a dollar value, we substitute the dollar value first, and then run the OLS regression. We do not use the additive constant. The result is a dollar and cents value for each element.
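As a minimal sketch of the binary route just described, the toy example below transforms nine-point ratings to 0/100, adds a column of ones for the additive constant, and solves by ordinary least squares. The design matrix and ratings are invented for illustration only; they are not data from the chapter.

```python
import numpy as np

# Toy data: rows = test concepts, columns = presence (1) / absence (0) of each element
X = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 1],
              [1, 0, 0, 1],
              [0, 1, 0, 0]], dtype=float)
ratings = np.array([7, 5, 9, 6, 8, 3], dtype=float)   # nine-point ratings

# Binary transform: ratings 1-6 -> 0, ratings 7-9 -> 100
y = np.where(ratings >= 7, 100.0, 0.0)

# OLS with an additive constant: prepend a column of ones
X1 = np.column_stack([np.ones(len(X)), X])
coefs, *_ = np.linalg.lstsq(X1, y, rcond=None)

print("additive constant:", round(coefs[0], 1))
print("element utilities:", np.round(coefs[1:], 1))
```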
When the dependent variable is emotion or another type of "feeling" or "end use", the respondent is instructed to select the appropriate emotion or end use from a list. For example, there may be five different emotions. The one-of-five response is converted into five new dummy variables, one dummy variable for each possible response. For each test concept, the respondent has had to select one of the five choices. The dummy variable corresponding to that choice is given a value of 100. The four remaining dummy variables, corresponding to the choices not selected, are each given the value 0. Then, one runs the OLS regression for each newly created dummy variable. The additive constant is not used in the OLS regression. The result is a conditional probability or percentage for each concept element, showing the proportion of times in 3–4 element concepts that one can trace a particular emotion to that element. Summing all the coefficients or percents across an element for the five emotions, for example, will generate a total of around 27 percent. The total is not 100 percent because the OLS is working with 3–4 elements per concept, and the OLS is estimating the assignment of emotions to elements in a full concept.
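A minimal sketch of that coding, assuming five hypothetical emotion labels (the labels and data below are invented, not taken from the chapter): the selected emotion becomes a 0/100 dummy variable and one OLS model is run per emotion, without an additive constant.

```python
import numpy as np

emotions = ["calm", "excited", "confident", "curious", "indulgent"]  # hypothetical labels

# Presence/absence of elements in each test concept (toy numbers)
X = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
# Index of the one emotion the respondent selected for each concept
selected = np.array([0, 2, 0, 4, 1])

# Expand each choice into one 0/100 dummy variable per emotion
dummies = np.zeros((len(selected), len(emotions)))
dummies[np.arange(len(selected)), selected] = 100.0

# One OLS model per emotion, with no additive constant (no intercept column)
for j, name in enumerate(emotions):
    coefs, *_ = np.linalg.lstsq(X, dummies[:, j], rcond=None)
    print(name, np.round(coefs, 1))
```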
7.2.3.6 Step 6: Interpret the results (what the utilities tell you and what the insights are)
Regardless of the method of eliciting consumers' responses, the utilities from CA allow CEs to know the impact of product attributes on consumers' decision making about liking (or believability, purchase intent, uniqueness perception, etc.). Consequently, the results enable the research team to create ideas and know their potential by summing the attribute utilities. Knowing the potential of product ideas allows the research team to select, prioritize and manage NPD processes more efficiently.

We will now discuss the specific meaning of the utilities that emerge from the IdeaMap® process. We will talk about the results that emerge when the dependent variable is the binary 0/100 rating, with ratings of 1–6 first transformed to 0, and ratings of 7–9 transformed to 100. Afterwards, OLS is run. The output is coefficients or utilities. These are averages across all the respondents in a key sub-group, for example total panel, males, users of a specific product, and so forth. The typical regression model is expressed as:

Binary result (0/100) = k0 + k1 (element 1) + k2 (element 2) + … + kn (element n)

First, what do the numbers mean? The intercept k0 is the additive constant, and the coefficients k1 … kn are the utilities of the individual elements. So, how big is an impactful utility? There are two answers to this question: (1) using benchmark(s) and (2) comparing against historical data. Using benchmarks allows researchers to ground the findings with a reference product idea(s) that will be exposed to the same conditions as the new product ideas generated later. Benchmark(s) also assure researchers that the results of the study are valid; however, using benchmark(s) means having less space for new attributes. The other way is comparing utility values against a norm. Using norms is a common practice in concept testing. However, it takes time and money to develop enough data to generate norms for each category.
Some companies have generic norms for clients to compare to; however, since companies face different market conditions, the norms are rarely comparable.
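As a small illustration of "summing the attribute utilities" to estimate the potential of a candidate idea, the sketch below reuses the total-panel additive constant (22) and a few element utilities from Table 7.2.3. Treat the result only as a rough screening number, not a validated prediction.

```python
# Total-panel values taken from Table 7.2.3 (constant plus a few elements)
additive_constant = 22
utilities = {"A1": 17, "B2": 14, "C4": 12, "D6": 12, "E6": 7}

def concept_score(elements):
    """Estimated percent of respondents rating the concept 7-9."""
    return additive_constant + sum(utilities[e] for e in elements)

print(concept_score(["A1", "B2", "C4"]))   # 22 + 17 + 14 + 12 = 65
```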
7.2.3.7 Step 7: Report and learn
This step is the most important as it moves the business forward in an efficient fashion. There is no single right way to report the results; however, this list offers some advice generally useful in reporting CA results:

(1) Report data, not just conclusions. There is a tendency by many to abstract the results, to identify "what happened" and then to report the key factoids. As tempting as it is to tell a story and skip the data because there are only so many numbers one can handle, this should be avoided. Those who commission CA studies enjoy looking at the numbers and coming to their own conclusions.
(2) Highlight strong performing elements (high positive utilities) and strong performing disutilities (high negative utilities). Typically, when we look at the model with the binary (0/100) rating as the dependent variable, we highlight elements whose utilities exceed +8 on the positive side, or exceed –5 on the negative side (a small sorting-and-flagging sketch follows this list). At these levels, previous experience has shown that there is some correlating behavior in the external world, for example the marketplace. As tempting as it is to work with inferential statistics, the real information comes from the size of the utility versus the cut-off points of +8 and –5, and not from differences between pairs of elements based on the utility values. In other words, comparisons using statistics may be valid in a technical sense (i.e. the inferential statistics are valid), but there is little to gain from such analysis.
(3) Wherever possible, make the highlighting tell the story. The story becomes even clearer when one sorts the elements by utility value. The sort, along with the highlighting of strong performing elements, allows whoever is looking at the data to formulate a hypothesis. The elements are cognitively "rich", meaning that each element tells a story. Look for what is common, that is, for the story or theme, among the winning elements and among the losing elements, respectively.
(4) Revisit the objectives. Ensure that the stories you are going to tell fit with the objectives and that the objectives are still the same (both content and priority).
(5) Know the audience and what you expect from the audience. When the meeting is a sharing and working session, full details of the results should be discussed in order to reach consensual conclusions. When the meeting is an informing or decision-making session, a short background of the study, the objectives, results and recommendations should be presented concisely. The most common mistakes are presenting too much information and assuming that the audience has the same level of understanding as the presenter.
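A small sketch of the sorting and highlighting described in point (2), using made-up utility values; the +8 and –5 cut-offs are the ones given above.

```python
# Hypothetical element utilities to be reported (values are invented)
utilities = {
    "element A1": 11,
    "element B3": -6,
    "element C2": 9,
    "element D4": 2,
}

# Sort from strongest to weakest and flag elements beyond the cut-offs
for element, u in sorted(utilities.items(), key=lambda kv: kv[1], reverse=True):
    flag = "strong positive" if u >= 8 else "strong negative" if u <= -5 else ""
    print(f"{u:>4}  {element:<12} {flag}")
```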
7.2.4 The raw material of CA

This chapter will classify the stimuli into silos and elements. Silos (also known as variables, buckets, categories) are general groups of related elements. Elements (also known as levels) are the specific phrases that will be combined.
Table 7.2.1 Example of silos and elements appropriate for conjoint measurement. The elements would be those used for an "unspecified food" label. The elements are 'cognitively rich'. Each element can be evaluated by itself, or in a mixture, and be meaningful in either format (source: IT! Ventures LLC, a limited partnership between the Understanding and Insight Group and Moskowitz Jacobs Inc.).

Silo 1: Calories and sweeteners
A1  0 calories per serving
A2  60 calories per serving
A3  160 calories per serving
A4  Sweetened with sucrose
A5  Sweetened with high fructose corn syrup
A6  Sweetened with fruit juice concentrate
A7  Sweetened with Equal (aspartame)
A8  Sweetened with Splenda (sucralose)
A9  Sweetened with Truvia (Stevia extract)

Silo 2: Ingredients for weight and health
B1  Contains caffeine
B2  Contains green tea
B3  Contains soluble fiber
B4  Contains aloe vera
B5  Contains guarana extract
B6  Contains natural polyphenols
B7  Contains antioxidants
B8  Contains plant phytonutrients
B9  A good source of folate

Silo 3: Vitamins and minerals
C1  Contains vitamin C
C2  A good source of vitamin C
C3  An excellent source of vitamin C
C4  Contains L-taurine
C5  A good source of calcium
C6  A good source of potassium
C7  A good source of magnesium
C8  With vitamin B12
C9  With beta carotene

Silo 4: Consumer issues: Taste, price, satiety
D1  Keeps you feeling fuller longer
D2  Prevents food cravings
D3  Can stop snack attacks
D4  Great refreshing taste
D5  Energizing taste
D6  Optimized taste
D7  All the nutrients for one low price
D8  A low-cost source of key nutrients
D9  Affordable nutrient rich choice
Keeps you feeling fuller longer Prevents food cravings Can stop snack attacks Great refreshing taste Energizing taste Optimized taste All the nutrients for one low price A low-cost source of key nutrients Affordable nutrient rich choice
1/31/2012 11:35:13 PM
Conjoint Analysis Plus
201
Table 7.2.1 shows an example of silos and elements for an "unspecified food" label. The same organizing principle, an underlying structure of silos and elements, holds whether the topic is a description of a food, a professional service, or even a topic in public policy such as health care and nutritional compliance. The elements can be short or long, simple words or complete sentences. Ideally, however, the elements should constitute single ideas.
7.2.5 Experimental design

The key to conjoint analysis is the experimental design. The proper experimental design has the following two statistical characteristics:
(1) The elements are statistically independent of each other, so that regression analysis can be used to pull out the effect of each element.
(2) Each element appears against many different backgrounds. This prevents pairs of elements from always appearing together; when two elements always co-occur, their separate contributions cannot be estimated. This is called confounding, and it often happens when the researcher doesn't use a design, but rather "guesses", or puts together so-called "rifle shots".

Experimental designs come in many types. Statisticians may use off-the-shelf statistical programs to create the design so that there are relatively few combinations to be tested. Of course, there must be more combinations to be tested (observations) than there are elements, in order for the modeling to work. Interested readers are referred to classic texts such as Box et al. (1978), which deal with the issue of design. Additionally, programs such as SYSTAT contain modules that create the experimental design for a specific set of test conditions (SYSTAT, 2008).
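Both characteristics can be checked numerically on a candidate design before fielding. The sketch below, on a small invented 0/1 design matrix, confirms that there are more combinations than elements, that the matrix (plus an intercept column) has full column rank, and that no two element columns are perfectly correlated. This is simply a convenient screening check for confounding, not a procedure prescribed by the chapter.

```python
import numpy as np

# Toy design matrix: rows = test concepts, columns = elements (1 = present)
X = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 1, 0, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 1, 1, 0],
              [0, 0, 0, 1]], dtype=float)

# 1) More observations than elements, otherwise the model cannot be fit
n_obs, n_elem = X.shape
assert n_obs > n_elem, "need more test combinations than elements"

# 2) Full column rank: no element is a perfect combination of the others
X1 = np.column_stack([np.ones(n_obs), X])
print("full rank:", np.linalg.matrix_rank(X1) == X1.shape[1])

# 3) Flag pairs of elements that always appear together (perfectly correlated columns)
corr = np.corrcoef(X, rowvar=False)
pairs = [(i, j) for i in range(n_elem) for j in range(i + 1, n_elem)
         if abs(corr[i, j]) > 0.99]
print("confounded pairs:", pairs)
```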
7.2.6 Building models

The essence of the analysis creates a model or equation that relates the presence/absence of the elements to the rating assigned by the respondent.

(1) Using ordinary least-squares modeling. Earlier forms of conjoint analysis used logistic regression, requiring a great deal of computer time. Although technically a slightly better form of regression, logistic regression does not produce results that are user-friendly. The user and, in turn, the management commissioning the study, cannot easily understand the output of logistic regression. They cannot determine what the elements mean, nor can they combine the elements mentally to come up with new combinations. In CA the models can be made much more intuitive and user-friendly by using conventional regression analysis, often called OLS, or ordinary least-squares modeling.
(2) The best modeling generates individual-level equations. One doesn't have to worry about sample balancing, because the data from each respondent
generates a model for that particular respondent. The approach to creating individual models, and ensuring that each individual evaluated a unique set of combinations, was solved in the early 1990s, and optimized for the Internet about ten years later (Moskowitz et al., 2001). The approach, called IdeaMap®, began with a basic experimental design structure.
(3) Each concept was created within that basic structure. The key difference between this basic structure and a so-called "fixed design" was that the structure itself sampled elements from the full set of available elements, checking that there were no previously defined pairs of incompatible elements, and then presenting the test concepts to the respondent. The entire process was done in real time. The key change was that every respondent now evaluated unique combinations.
(4) Every respondent evaluates a different set of test concepts. Traditionally in CA, the researcher would create a set of combinations. The test stimuli would comprise only that set of combinations. Many respondents would evaluate the different combinations, in a randomized order. A high base size evaluating the same set of combinations, coupled with the randomized order, reduced bias. In the IdeaMap® approach presented here, each respondent evaluates a unique set of combinations, rather than having everyone evaluate the same set of combinations. The unique set of combinations eliminates the potential for bias caused by testing a limited number of the same combinations. The unique set of combinations for each respondent also means that the data can be analyzed in terms of both main effects and interactions among elements (Gofman and Moskowitz, 2010; Moskowitz and Gofman, 2004).
(5) Using dummy variables in the input data for modeling. The input data for the modeling is typically in the form of a data matrix that looks like Table 7.2.2.
Table 7.2.2 Example of data used as input from one respondent for modeling. The table shows partial data (12 of 48 test concepts), the presence/absence of 14 of 36 elements, and the two dependent variables: the nine-point rating, and the binary expansion of that nine-point scale. A "1" signifies that the element was present in the test concept; a "0" signifies that the element was absent from the test concept. The column titled "binary" shows the result of transforming the nine-point rating to a binary 0/100 scale.

Conc  A1 A2 A3 A4 A5 A6 B1 E6 F1 F2 F3 F4 F5 F6  Rating  Binary
 1     0  0  0  1  0  0  0  1  0  0  0  0  0  0     7      100
 2     0  0  0  0  0  1  0  0  0  0  0  0  0  0     6        0
 3     0  0  0  0  0  1  0  0  0  0  0  1  0  0     1        0
 4     0  1  0  0  0  0  0  0  0  0  0  1  0  0     3        0
 5     0  1  0  0  0  0  0  0  0  0  0  0  0  0     9      100
 6     0  0  1  0  0  0  0  0  0  1  0  0  0  0     5        0
 7     0  0  0  0  0  0  0  0  0  0  0  0  0  0     6        0
 8     0  0  0  0  0  1  0  0  0  1  0  0  0  0     7      100
 9     0  0  0  0  1  0  0  0  0  1  0  0  0  0     5        0
10     0  0  0  1  0  0  0  0  0  0  0  0  0  0     3        0
11     0  0  1  0  0  0  0  0  1  0  0  0  0  0     5        0
12     0  0  0  0  0  0  0  0  1  0  0  0  0  0     7      100
The matrix is, of course, much larger. Each respondent in the study on food labels generates his own matrix, comprising 36 independent variables, the elements, and a dependent variable, the ratings. In the manner of consumer researchers, we convert the nine-point rating to a binary scale, with ratings of 1–6 converted to 0, and ratings of 7–9 converted to 100. The rationale behind this transformation is the history of CA as a form of consumer research, wherein focus is placed on membership in a category (e.g. acceptor of the concept), rather than on intensity of feeling. The former is sociological in nature (membership in groups); the latter is psychological (intensity of personal experience).
(6) The equations can be run either at the individual respondent level, or at the total respondent level. When the data is run at the individual level, the focus is typically limited to the parameters of the model. These parameters are averaged across all of the respondents to generate sets of averaged coefficients, including the additive constant and values for the different elements. We see the average coefficients (utilities) in Table 7.2.3, for the total panel and for some key sub-groups defined by the classification, such as gender, age, etc.
(7) The utility values that we compute have the property of being "absolute" rather than relative. This is important for the creation of a science using CA. The experimental design comprises incomplete concepts. Rather than forcing every concept to comprise exactly one element from each silo, the concept comprises a limited number of elements (3–5, to make reading easier). In some concepts, a silo is completely absent. Working with absent silos and incomplete concepts allows one to estimate the absolute utility value of each element (vs. being comparable only within the silo). That absolute value, in turn, leads to comparability across studies. It encourages databasing, the foundation of a true science rather than just an analytical method.
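A minimal sketch of running the equation at the individual level and then averaging the parameters, with randomly generated stand-in data (a real study would use each respondent's designed concepts, as in Table 7.2.2):

```python
import numpy as np

def respondent_utilities(X, ratings):
    """Fit one OLS model for a single respondent: binary 0/100 vs element presence."""
    y = np.where(ratings >= 7, 100.0, 0.0)
    X1 = np.column_stack([np.ones(len(X)), X])
    coefs, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coefs  # [additive constant, utility_1, ..., utility_k]

# Toy example: three respondents, each with their own concepts and ratings
rng = np.random.default_rng(0)
panel = []
for _ in range(3):
    X = rng.integers(0, 2, size=(48, 6)).astype(float)   # 48 concepts, 6 elements
    ratings = rng.integers(1, 10, size=48)                # nine-point ratings
    panel.append(respondent_utilities(X, ratings))

# Average the per-respondent parameters to get group-level utilities
print(np.round(np.mean(panel, axis=0), 1))
```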
7.2.7 Presenting the result – numbers, text, data, talk, move to steps

Even before we deal with the actual data, it's good to deal with what we're going to do with the data. Most discussions of research methods focus on what the methods provide, how they are implemented, and, of course, the appropriate statistics. The truth of the matter is that the methods are only as good as the application, but also only as good as the presentation. Sometimes, especially in the world of consumer research, one hears about reams of perfectly valid data sitting in warehouses, unused. It's just that the researcher has not been able to present the results in a sufficiently cogent way. It ought to be that management has the wisdom, but all too often that connection is missed, even though the work has been done. How, then, do we ensure that the results of conjoint analysis reach the eyes of management, and more importantly, drive decisions? What are we to do? Here are a few pointers, some of the Realpolitik of the situation:
● Management likes numbers. It's tempting to fall into the trap of giving management a verbal explanation of what you find, and keeping the numbers to
Table 7.2.3 Sample output from the study on nutritional labeling, showing the results from the total panel, and from pairs of complementary sub-groups (males versus females; younger versus older respondents). Each respondent generated his own model, relating the presence/absence of the elements to the binary 0/100 rating of interest. The table shows the average of the parameter across the respondents. The 36 elements are sorted in descending order, based on the utilities from the total panel.

Code | Element | Total | Male | Female | Age under 45 | Age 45+
     | Base size | 320 | 71 | 249 | 107 | 213
     | Additive constant | 22 | 35 | 18 | 25 | 21
A1 | This product is a good source of protein | 17 | 13 | 18 | 15 | 17
A2 | This product is an excellent source of protein | 16 | 6 | 18 | 14 | 16
B2 | This product is high in fiber | 14 | 15 | 14 | 15 | 14
C4 | This product has no saturated fat | 12 | 12 | 11 | 11 | 12
D6 | This product is without sodium | 12 | 2 | 15 | 14 | 12
A4 | This product is an excellent source of vitamin C | 11 | 7 | 13 | 14 | 10
B4 | This product is rich in calcium | 11 | 6 | 12 | 7 | 13
C6 | This product has zero cholesterol | 11 | 9 | 12 | 9 | 13
C2 | This product is fat free | 10 | 7 | 11 | 6 | 12
D4 | This product is free of added sugar | 10 | 4 | 12 | 5 | 12
A3 | This product is a good source of vitamin C | 9 | 0 | 11 | 11 | 7
A6 | This product is high in vitamin A | 9 | 1 | 11 | 9 | 9
B1 | This product contains fiber | 9 | 3 | 11 | 7 | 11
D2 | This product is sugar free | 9 | 10 | 8 | 6 | 10
D5 | This product is a low source of sodium | 9 | 3 | 11 | 10 | 9
B3 | This product contains calcium | 8 | 5 | 9 | 5 | 10
D1 | This product is low in total sugar | 8 | 2 | 9 | 8 | 8
C1 | This product is low in total fat | 7 | 2 | 8 | 6 | 8
C5 | This product is a low source of cholesterol | 7 | 9 | 6 | 7 | 7
E6 | Puts more nutrient power on your plate | 7 | 6 | 7 | 10 | 5
F5 | Be at your best. Enjoy good taste and good health | 7 | 5 | 8 | 11 | 5
A5 | This product provides vitamin A | 6 | 1 | 8 | 9 | 5
E2 | A good way to balance your diet to keep it nutrient rich | 6 | 2 | 7 | 7 | 5
E4 | Naturally packed with nutrients for better health | 6 | –1 | 8 | 6 | 6
B6 | This product is rich in iron | 5 | 2 | 6 | 1 | 8
E5 | Meets your daily nutrient needs without too many calories | 5 | 1 | 6 | 3 | 6
F4 | You and your family can eat right – for life | 5 | 2 | 6 | 6 | 5
B5 | This product provides iron | 4 | –2 | 6 | 2 | 5
D3 | This product is low in added sugar | 4 | –1 | 5 | 5 | 3
E1 | Wholesome food that gives you more nutrition per bite | 4 | 0 | 5 | 4 | 4
E3 | A total nutrient package with more nutrients than calories | 4 | 0 | 6 | 3 | 5
F1 | A great way to enjoy your healthy lifestyle | 4 | –1 | 6 | 7 | 2
F2 | Takes the stress out of healthful eating | 4 | 0 | 5 | 5 | 3
C3 | This product contains little saturated fat | 3 | –6 | 5 | 2 | 3
F3 | Lets you eat well to live well, starting today | 3 | –3 | 4 | 2 | 3
F6 | You can trust the nutrition label to guide smart eating | 3 | –1 | 4 | 4 | 3
yourself. We've all done that. Indeed, one of the earliest lessons was a presentation author H. Moskowitz gave to a packaged goods company in the Midwest. The instructions were to talk about the results in text. Yet, during the course of the presentation, some numbers and tables "snuck in". Marketing management gravitated to those simple tables of results, not the presenter's verbose explanation. It was a good lesson (see discussion in Chapter 1 on "winstonizing" data).
● Don't bore with statistics. Give a rule of thumb, for example utilities of five or more are significant, utilities of eight or more are worth attending to. And then stop. Conjoint analysis may have an intellectual history in academia and science, but to make sure people use it, give them simple rules. They'll make the data their own.
● Finally, tell a story. With lots of elements as stimuli you are dealing with a cognitively rich environment. You have two things going for you; take advantage of them. The first is that you are dealing with elements that have meaning, whether these are brands, benefits, ingredients and so forth. That type of stuff is interesting. And second, you have numbers which reflect how the mind responds to these cognitively rich elements. The numbers fascinate. The combination is a winner. The combination is a peek into the customer's mind. Management cannot resist that. The peek always fascinates.
7.2.8 Using the results – what do the numbers tell us?

In its early business use, conjoint analysis for messaging never really achieved the wide distribution it deserved, for the reason that many "creatives" in advertising agencies believed strongly that messaging was an art, and that it was virtually impossible to explore messaging by rational, systematic means. The argument was often made that using conjoint analysis to understand messaging and create optimal messaging was akin to "monkeys typing Shakespeare". The criticism was wrong, of course, but effective in holding back the use of conjoint analysis in the work of messaging. However, the force of business is such that improved methods eventually find their "place in the sun", despite the best efforts to dismiss them. Conjoint analysis began to find wider acceptance after exposure to business needs forced it to deal with more mundane, realistic problems of an everyday sort.

With the foregoing in mind, let's look at the results in Table 7.2.3, to see what a researcher might extract from these results.
(1) The additive constant, a measure of readiness to up-rate the concept. The additive constant tells us the proportion or conditional probability that a person will rate the concept 7–9, in the absence of elements. The additive constant, a purely estimated parameter, nonetheless provides us with a sense of the baseline. Here the additive constant for the total panel is 22, meaning that without elements we expect 22 percent, or about one person in five, to rate the concept 7–9. The elements have to do the rest of the work.
The additive constant is higher for males (35) and lower for females (18), suggesting a gender difference.
(2) We've sorted the utilities from high to low, based on the utilities from the total panel. We did this for a simple reason: it's the elements themselves, and not the silos or categories, that are important. The six silos are, in reality, just bookkeeping devices. We learn a lot from the elements that perform well. We should look for patterns. The pattern that we see is that familiarity is important. The elements that score highest are those that we recognize.
(3) By putting the elements in descending order, we quickly get a sense of the range of utilities (+17 to +3). We know what elements work and what don't. For our data, all of the elements drive interest in the label.
(4) There are very few negatives. A negative utility means that the element drives away consumer respondents, and diminishes the number of interested individuals. In this particular study almost none of the elements are negative. When we work with new products, however, rather than with conventional label information, we're likely to see more elements showing negative utilities.
7.2.9 Beyond individual groups to segments

People differ from each other. Table 7.2.3 showed us some of these differences when we divide the respondents in a CA study by the conventional ways of dividing people; who they are, what they do, and even what they say about a topic. The history of CA is also one of dividing people by the pattern of their utilities. Individuals showing different utilities can be considered to differ from each other in the ways their minds work. The typically narrow focus of a CA study, generally dealing with one topic, makes the assessment of individual differences a very instructive exercise. One learns just how people differ in a granular aspect of their lives. We get a sense of this "mind-segmentation" for labels from Table 7.2.4. The specifics of the segmentation are not important; they can be found in most texts on clustering, and in many off-the-shelf or "shrink-wrapped" statistical programs. What's important is that the people in a specific segment show similar response patterns in the CA study. The centroids of the three segments are quite different from each other. Whether the people in the three segments will cluster together when it comes to other topics, such as vitamins or healthful lifestyle, is not known. The granularity of the CA study means that the segmentation is limited to the particular topic area being investigated.
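As a sketch of how such mind-set segments can be derived, the example below clusters respondents on their utility vectors with k-means (scikit-learn is used here purely for convenience; the utility values are simulated, not data from the label study):

```python
import numpy as np
from sklearn.cluster import KMeans

# Rows = respondents, columns = that respondent's element utilities (simulated)
rng = np.random.default_rng(1)
utilities = np.vstack([
    rng.normal(loc=[15, 15, 0, 0], scale=3, size=(20, 4)),
    rng.normal(loc=[0, 0, 15, 15], scale=3, size=(25, 4)),
    rng.normal(loc=[15, 0, 0, 15], scale=3, size=(15, 4)),
])

# Assign each respondent to one of three mind-set segments
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(utilities)

# Centroid = the "average mind" of each segment
for seg in range(3):
    print("segment", seg, np.round(utilities[segments == seg].mean(axis=0), 1))
```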
7.2.10
7
New analytic advances in conjoint analysis
Conjoint analysis is a perfect example of the evolution of a research tool when it is applied to the practical problems of industry. In this section we deal with three of these advances: discovering synergisms among pairs of elements, applying CA to graphics, and finally using CA to mind-type consumers (so-called Mind Genomics™).
Beckley_c07.indd 207
Table 7.2.4 Strongest performing elements for three mind-set segments, emerging from the nutritional label CA study.

Element | Total | Seg. 1 | Seg. 2 | Seg. 3
Base size | 320 | 58 | 157 | 105
Constant | 22 | 43 | 9 | 30

Seg. 1: Fat/cholesterol avoiders
This product has no saturated fat | 12 | 17 | 18 | 0
This product is low in total fat | 7 | 16 | 15 | –10
Be at your best – enjoy good taste and good health | 7 | 16 | 1 | 11
Lets you eat well to live well, starting today | 3 | 15 | –3 | 5
This product contains little saturated fat | 3 | 15 | 11 | –16
This product is a low source of cholesterol | 7 | 15 | 13 | –6

Seg. 2: Fiber seekers/sugar, fat, salt avoiders
This product is high in fiber | 14 | 0 | 22 | 12
This product is sugar free | 9 | –19 | 20 | 7
This product is without sodium | 13 | –7 | 19 | 14
This product is free of added sugar | 10 | –7 | 18 | 7
This product has zero cholesterol | 11 | 12 | 18 | 2
This product has no saturated fat | 12 | 17 | 18 | 0
This product is fat free | 10 | 13 | 17 | –1
This product is a low source of sodium | 9 | –7 | 16 | 9
This product is a good source of protein | 17 | 12 | 16 | 20
This product is low in total fat | 7 | 16 | 15 | –10

Seg. 3: Protein seekers
This product is an excellent source of protein | 16 | 10 | 14 | 21
This product is a good source of protein | 17 | 12 | 16 | 20
7
7.2.10.1 Interactions among elements in a conjoint analysis study

Traditionally, CA has focused on "main effects", wherein each of the elements has been treated as an independent entity. In a CA study the individual element might appear against several different backgrounds. Nonetheless, when it comes time for the analysis, the question is often asked whether or not two elements "interact" with each other, so that the combination is far higher or lower than one might expect, or the performance of an element (e.g. promise or product feature) in the presence of another specific element (e.g. brand) is otherwise strongly affected. Let us look for interactions by presenting the results of a study on food and health. The original CA study comprised four silos or categories, each with nine elements, as we see in Table 7.2.5. The study was run using the IdeaMap® approach, with each respondent evaluating a unique set of 60 combinations. The combinations, that is, test concepts, comprised 2–4 elements. Each test concept contained at most one element from a silo.
Table 7.2.5 The elements from the CA study on food and health.

Silo A: Benefits to the consumer
A1  As part of a low fat diet, this food may reduce the risk of some types of cancers
A2  This food includes calcium and other nutrients that give you bright teeth, shinier hair and smoother skin
A3  Food that contains 20% of your daily requirement for fiber … important for reducing your risk of chronic diseases like heart disease
A4  Good food. Easy to eat on the go!
A5  Meals that require no preparation. Just heat and eat!
A6  One pot. One step to a meal. Start it in the morning, and have it in the evening just as you walk in the door
A7  Fresh juicy slices, slow roasted for added flavor, hot off the rack
A8  Prepared just to your liking … just the way your mom or someone special made it … so close to homemade you can almost smell the meal
A9  Luscious, creamy texture. So rich, so moist … dotted with juicy jewels of fruit, just the right amount of sweet

Silo B: Health and convenience
B1  Just one serving provides important cancer protective benefits
B2  Contains essential omega-3 fatty acids, which may reduce your risk of heart disease
B3  Provides essential vitamins and minerals your body needs, including potassium, magnesium and zinc
B4  Doesn't make a mess while you eat it
B5  It's convenient
B6  Tastes freshly made
B7  Premium quality
B8  Wholesome goodness
B9  Tastes like it was prepared by someone who cared about you

Silo C: Emotions
C1  Calms you …
C2  Better for you than you thought …
C3  Feeling good about feeding your family …
C4  It's good for you and your body, soul and mind …
C5  Looks great, smells great, tastes delicious …
C6  Quick and easy … doesn't have to take a long time to get a good thing …
C7  A joy for your senses … seeing, smelling, tasting
C8  Imagine the taste ….
C9  So irresistible, just thinking about it makes your mouth water …

Silo D: Brand and store
D1  From Quaker Oats
D2  From Newman's Own
D3  From Kellogg's
D4  From Kraft Foods
D5  From Betty Crocker
D6  From Campbell's
D7  From Trader Joe's
D8  From Whole Foods
D9  From Walmart

7.2.10.2 Discovering synergisms and suppressions between pairs of elements
The notion of interactions among elements has continued to fascinate researchers. Among the creative professionals, copywriters and graphic artists, there is a belief that developing good concepts and packages is partly due to talent; namely, one’s ability to spot synergies among words in language or in visual stimuli. Accordingly, it is important to be able to spot and quantify synergisms and suppressions among pairs of elements. Such capability has typically been beyond the ability of standard conjoint procedures, simply because there are too many possible combinations to test directly. For instance, a conjoint study comprising three silos, each with eight elements, has a possibility of 64 interactions for each pair of silos. This makes 192 combinations to test. From the practical point of view, it’s just too much to test 192 combinations simply to find interactions that may or may not even exist. The experimental design of the IdeaMap® form of CA, where each individual evaluates a different set of combinations, makes it possible to assess the statistical significance of all possible pairwise combinations. With IdeaMap®.net each respondent evaluates different combinations of the same elements. A study involving a reasonably large number of respondents (> 200) makes it likely that most pairs of elements will appear several times across the full array of respondents. The statistical analysis to discover interactions requires the full set of test stimuli, represented as 1s and 0s (see Table 7.2.2). With four silos and nine elements per silo, there are 4 × 3/2 or six pairs of silos, each with 81 possible combinations. This totals 6 × 81 or 486 possible pairs. With every 100 respondents participating in the study and evaluating 60 test concepts, we have 6,000 test combinations. That base size suffices to test for and to discover significant interactions: “The method is straightforward. One way to discover interactions forces in the linear terms and observes how much variance is explained. Then, one can assess the additional amount of variance in the data. This reveals which pairs of elements may synergize, over and above their individual ability to explain the variability” (Gofman, 2006). We see the results from the analysis in Table 7.2.6. The table shows the utility values for the model without interactions (column C), and the utility value of those elements, as well as the utility values of the two pairs of elements that are most highly significant (column D).
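The sketch below illustrates the variance-explained logic on simulated data with a built-in synergy between two hypothetical elements labelled A3 and B7: fit the main-effects model first, then see how much additional variance an interaction column explains. It mirrors the idea in the quoted passage, not the exact procedure used by Gofman (2006).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
A3 = rng.integers(0, 2, n).astype(float)     # element present/absent across test concepts
B7 = rng.integers(0, 2, n).astype(float)
# Simulated binary 0/100 response with a built-in synergy between A3 and B7
y = 40 + 6 * A3 + 1 * B7 + 12 * A3 * B7 + rng.normal(0, 10, n)

def r_squared(X, y):
    """Proportion of variance explained by an OLS model with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    coefs, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ coefs
    return 1 - resid.var() / y.var()

main_only = r_squared(np.column_stack([A3, B7]), y)
with_pair = r_squared(np.column_stack([A3, B7, A3 * B7]), y)
print("R2 main effects:", round(main_only, 3))
print("R2 with A3xB7 term:", round(with_pair, 3),
      "-> extra variance explained:", round(with_pair - main_only, 3))
```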
7
7.2.10.3 Scenarios – elements as "directors"
Scenarios or nested analyses assess interactions in a different way. In scenario analysis, one identifies a specific silo, such as brand name, whose elements, in turn, define the different strata. A stratum is thus defined by the particular brand name (or other criterion) in the test concept. Operationally, the researcher first sorts the full set of test combinations into the different strata. By design, each concept comprises only one element from the key variable defining a stratum (e.g. brand name); alternatively, a stratum may comprise those concepts for which the design called for
Table 7.2.6 Utility values for the linear model with no interactions (column C); the linear model with limited interactions that must be highly significant (column D).

A | B (element text) | C: Simple | D: Limited interactions
  | Additive constant | 41 | 41
A1 | As part of a low fat diet, this food may reduce the risk of some types of cancers | 5 | 3
A2 | This food includes calcium and other nutrients that give you bright teeth, shinier hair, and smoother skin | 7 | 7
A3 | Food that contains 20% of your daily requirement for fiber … important for reducing your risk of chronic diseases like heart disease | 7 | 6
A4 | Good food. Easy to eat on the go! | 3 | 3
A5 | Meals that require no preparation. Just heat and eat! | 6 | 6
A6 | One pot. One step to a meal. Start it in the morning, and have it in the evening just as you walk in the door | 11 | 10
A7 | Fresh juicy slices, slow roasted for added flavor, hot off the rack | 7 | 7
A8 | Prepared just to your liking … just the way your mom or someone special made it … so close to homemade you can almost smell the meal | 9 | 9
A9 | Luscious, creamy texture. So rich, so moist … dotted with juicy jewels of fruit, just the right amount of sweet | 3 | 3
B1 | Just one serving provides important cancer protective benefits | 3 | 3
B2 | Contains essential omega-3 fatty acids, which may reduce your risk of heart disease | 5 | 4
B3 | Provides essential vitamins and minerals your body needs, including potassium, magnesium and zinc | 6 | 5
B4 | Doesn't make a mess while you eat it | –1 | –1
B5 | It's convenient | 0 | 1
B6 | Tastes freshly made | 3 | 3
B7 | Premium quality | 1 | 0
B8 | Wholesome goodness | 1 | 1
B9 | Tastes like it was prepared by someone who cared about you | 2 | 1
C1 | Calms you … | –1 | 1
C2 | Better for you than you thought … | 1 | 1
C3 | Feeling good about feeding your family … | 4 | 4
C4 | It's good for you and your body, soul and mind … | 4 | 4
C5 | Looks great, smells great, tastes delicious … | 4 | 3
C6 | Quick and easy … doesn't have to take a long time to get a good thing … | 3 | 4
C7 | A joy for your senses … seeing, smelling, tasting | 3 | 3
C8 | Imagine the taste …. | 0 | –1
C9 | So irresistible, just thinking about it makes your mouth water … | 2 | 3
D1 | From Quaker Oats | 3 | 3
D2 | From Newman's Own | –3 | –3
D3 | From Kellogg's | 1 | 0
D4 | From Kraft Foods | 4 | 5
D5 | From Betty Crocker | 2 | 2
D6 | From Campbell's | 4 | 4
D7 | From Trader Joe's | –7 | –6
D8 | From Whole Foods | –2 | –4
D9 | From Walmart | –7 | –7
B9D5 | Tastes like it was prepared by someone who cared about you … + … From Betty Crocker | 0 | 13
A3B7 | Food that contains 20% of your daily requirement for fiber … + … Premium quality | 0 | 12
that silo to be absent, so that the concepts contain no elements from it. Each stratum ends up having only one constant element (e.g. brand name). For each stratum, the researcher then creates the model from the data. Each model comprises all of the elements from the other silos, however. The utility values for the elements in the stratum show how the elements perform in the presence of that common element (e.g. that brand which defines the stratum). The scenario analysis shows how the stratum-defining element affects the utility values of all elements from the other silos. A side-by-side comparison of the utility values for the different models, one per stratum, ends up showing the effect of the stratum-defining variable (Moskowitz and Gofman, 2007). Table 7.2.7 shows an example of scenario analysis based on brand name and venues. The numbers in the body of Table 7.2.7 display the utility value for eight elements in the particular stratum defined by the column brand or venue.
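A minimal sketch of the stratum logic, with simulated data: keep only the concepts that contain a given brand element, then fit the model for the non-brand elements within that subset. The column layout and numbers below are invented; a real analysis would use the full respondent-level design matrices.

```python
import numpy as np

# Toy data: columns 0-3 are non-brand elements, columns 4-5 are two brand elements
rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(300, 6)).astype(float)
X[:, 5] = np.where(X[:, 4] == 1, 0, X[:, 5])        # at most one brand per concept
y = np.where(rng.integers(1, 10, size=300) >= 7, 100.0, 0.0)

def stratum_model(brand_col):
    """Fit the non-brand elements using only concepts containing that brand."""
    mask = X[:, brand_col] == 1
    Xs = np.column_stack([np.ones(mask.sum()), X[mask][:, :4]])
    coefs, *_ = np.linalg.lstsq(Xs, y[mask], rcond=None)
    return np.round(coefs, 1)

for brand_col in (4, 5):
    print("stratum for brand column", brand_col, "->", stratum_model(brand_col))
```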
7.2.11 "Next generation" thinking in conjoint analysis
Whenever experimental design enters new areas that are a bit “artsy”, and packaging is certainly one of them, its acceptance is slow. Experimental design with words took a while, but did not really encounter resistance. After a decade and a half, experimental design with packages is finally beginning to be accepted by the design community. The statistical robustness of conjoint analysis in research is not necessarily a plus when it comes to dealing with artistic endeavors. Over time, conjoint analysis evolved from words alone to words and pictures and, finally, to pictures alone. When it comes to dealing with packages, there’s absolutely no reason why the test stimuli cannot be superimposed transparencies. Rather than mixing and matching words on a screen, the foregoing metaphor leads to mixing and matching transparencies testing actual package designs. Indeed, just recently an entire book appeared on the use of experimental design in packaging (Moskowitz et al., 2009). Of course, the same thinking holds for websites and page optimization (Gofman et al., 2009). Figure 7.2.2 shows the logic of the approach. Conjoint analysis treats the elements in a test concept as variables, and switches the elements in and out, according to an experimental design. The same can be done with graphics, which are switched in and out, also according to a design. Respondents don’t walk around judging the artistic quality of the combination, just as they don’t judge the artistic quality of test concepts in regular conjoint analysis. Thus, as long as the combinations are reasonably clear and not sloppy looking, respondents have no trouble judging the “gestalt”, even though this time the test stimulus is graphical. The outcome is a set of utilities, one utility value for each graphical element. One other aspect of graphical conjoint analysis deserves mention. Whereas in conjoint analysis with texts the additive constant is high, typically because it measures interest in the general idea, in graphical conjoint analysis the additive constant is usually quite low, near 0. The reason is simple, once we think about it. With graphical stimuli, we are judging what we see, not the idea behind it. The
Table 7.2.7 Brand name and scenarios. Utilities of concept elements in the presence of different brand names and venues. The original results come from one larger IdeaMap study, analyzed in terms of nine different strata. The strata are brand names and store venue (data from IdeaMap.net data 2009, Moskowitz Jacobs Inc.).
[The rows of the table are the additive constant and the elements "As part of a low fat diet, this food may reduce the risk of some types of cancers", "Meals that require no preparation. Just heat and eat!", "Just one serving provides important cancer protective benefits", "Tastes like it was prepared by someone who cared about you", "Feeling good about feeding your family …" and "Looks great, smells great, tastes delicious …". The columns are the strata: Quaker Oats, Kraft Foods, Kellogg's, Campbell's, Newman's Own, Whole Foods, Betty Crocker, Walmart and Trader Joe's. Tinted areas show utility scores (10+), which have a very strong positive impact.]
less we see, the less interesting the stimulus is. There's no intellectual framework of expectations filled in by the mind of the respondent.
Figure 7.2.2 Schematic of the logic behind conjoint analysis, where the elements are visual transparencies, combined by being stacked, one atop the other, to create the test stimulus.
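The mechanics can be sketched quickly. In the toy example below, the silos, elements and the simple full-factorial design are all hypothetical stand-ins for the balanced designs used in practice; the point is only that elements are switched in and out to build the test combinations, whether those elements are text statements or stacked transparencies.

```python
import itertools
import random

# Hypothetical silos; each element could just as well be a transparency layer.
silos = {
    "claim":   ["", "Cleans in half the time", "Dentist-inspired design"],
    "benefit": ["", "Whiter teeth in 7 days", "Gentle on gums"],
    "brand":   ["", "Brand A", "Brand B"],
}

# Full factorial over the three silos; "" means the silo is absent from the
# concept, which is what allows the additive constant to be estimated.
design = list(itertools.product(*silos.values()))
random.seed(1)
random.shuffle(design)

# A real study would show each respondent only a balanced fraction of these.
for combo in design[:5]:
    concept = " / ".join(element for element in combo if element)
    print(concept or "(blank control)")
```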
7.2.12
Discovering the “new” through conjoint analysis – creating an innovation machine
About 20 years ago, in the early 1990s, the opportunity arose to work on a new form of dental cleaning. Jonathan Kalan, then director of market research of the Oral B Company, now part of Gillette, presented a challenge to create new, more powerful methods to clean teeth. The field of alternatives was open. Kalan had previously used conjoint analysis for positioning and advertising, but this time the question on the table was whether one could throw together ideas from different worlds into a "mixmaster" and, by doing so, come up with new-to-the-world technologies for oral care. It became clear from those early experiments that the conjoint analysis silos need not be the simple, well-defined variables of features for a toothbrush or a paste. Rather, as part of the effort, the group took ideas from different types of fields, including industrial abrasives, industrial cleaning machinery, and oral care. The elements were edited and then incorporated into concepts according to experimental design. Consumers reacting to the test stimuli had no idea that the elements in their test concepts came from such diverse worlds. The concepts seemed "reasonable", at least at a superficial level, even if they weren't immediately recognizable.
The outcome, discussed in detail in a book written in 1995 (Moskowitz, 1995), showed how one could use conjoint analysis to create an entirely new category, now called power brushing. Whereas today the notion seems so obvious, it was not in the 1990s. The ability of conjoint analysis to act as this “mixmaster” became a hallmark of innovation in many other product studies afterwards. Furthermore, with the advent of the Internet as an almost infinite source of information, conjoint analysis becomes an ever more powerful tool for innovation by recombination.
7.2.13 Dealing with prices Pricing presents us with an extensive set of topics, often left to economists and marketers, but very addressable by conjoint analysis. Indeed, conjoint analysis presents the pricing issue in terms of the newly emerging area of behavioral economics. With conjoint analysis one can deal with pricing either as an element in an offering, or a response to an offering. The former, price as an element in the mix, creates a set of elements dealing with price or even a whole silo of elements with price. Price is then embedded in the study, as a set of elements to be tested. The impact or utility values for specific items with prices attached show the marginal or part-worth contributions of price for that item. The second approach uses price as a dependent variable. The respondent reads the test concept and picks a price that would be appropriate for the concept. From the analysis, one determines the dollar value of each individual element. Which approach is better, price as an element or price as a rating, remains for the particular experiment. The results of many studies with price evaluated in each way suggest that respondents have little or no problem with price, either as the stimulus or as a response. Recently, one of the authors (HRM) published a jointly authored book on pricing, with a large number of examples showing how conjoint analysis can be used for pricing (Galanter et al., 2011).
7.2.14
Mind Genomics™: A new "science of the mind" based upon conjoint analysis
7.2.14.1
Mind-set segmentation, Mind Genomics™ and Addressable Minds™
The notion that the world of consumers can be divided into various groups is hardly new. A century ago, merchants had already recognized that customers were divided into those who wanted upper scale products and those who wanted economy products having fewer features but at clearly lower cost. One need only look at the evolution of the automobile industry to recognize that manufacturers intuitively recognized these different strata of desire (Wells, 1975). The key problem for marketers is not the existence of these segments, but how to understand the nature of the segments, what to offer the segments and, most importantly, how to communicate to these segments. The problem of assigning a new person to a segment, called “scoring the person”, is equally important.
One can segment consumer respondents by any number of criteria. One of these is division into groups, based upon who the respondent “is” (e.g. male versus female, older versus younger). Another way to segment analyzes their patterns of behavior. Respondents exhibiting a specific pattern of behavior might fall into one segment, whereas respondents exhibiting another pattern of behavior might fall into a different segment. There is no end to the ways that one can segment individuals. Whether done on the basis of who the respondent is, or on the basis of what the respondent does, however, segmentation will not predict easily the specific stimuli to which the respondent will react. For example, when the segmentation is for a credit card, individuals in the same segment, whether defined by spending pattern, payment pattern or geo-demographics, will respond in different ways to the same messages. That is, a segment may be “homogeneous”, but generally people in the same segment respond in different ways to messaging. What is necessary is segmentation based on response patterns to messages. Fortunately, conjoint analysis produces segments which are homogeneous with respect to the tested messages. The conjoint experiment generates a set of utilities for each respondent, showing how that person reacts to the different elements. By segmenting the individuals based upon the patterns of their utilities, it becomes easy to develop segments or homogeneous clusters of individuals showing similar response patterns to the tested messages.
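A rough sketch of that idea appears below, using simulated utilities and an off-the-shelf k-means routine (any clustering method could stand in here): respondents are grouped on the pattern of their element utilities rather than on who they are.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Simulated per-respondent utilities for six elements: one latent mind-set
# responds to the first three (health) messages, the other to the last three
# (convenience) messages.
health = rng.normal([8, 7, 6, 0, 1, 0], 2, size=(60, 6))
convenience = rng.normal([0, 1, 0, 8, 7, 6], 2, size=(40, 6))
utilities = np.vstack([health, convenience])

segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(utilities)

for s in (0, 1):
    profile = utilities[segments == s].mean(axis=0).round(1)
    print(f"segment {s}: n={np.sum(segments == s)}, mean utilities={profile}")
```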
7.2.14.2
Systematized conjoint analysis generates databases and produces a substantive science
Up to now, we have been talking about methodological issues and the change of conjoint analysis from an arcane, difficult-to-understand rubric into a tool that can be used to solve simple problems at a reasonable cost. One of the newer approaches in conjoint analysis is the creation of databases of information about the way people react to their external world. It is an example of a tool by which we can simulate the complexity of nature, and understand the algebra of the consumer mind. For many years conjoint analysis was used primarily for high profile projects. In early 2001, author H. Moskowitz and his colleague Jacqueline Beckley at the Understanding and Insight Group were approached by Maryanne Gillette of McCormick & Company to create a system by which to understand craveability. At that time, in late 2000 and early 2001, the notion of craveability for food and drink was just being promoted as a prospective new way to understand foods (Beckley and Moskowitz, 2002). As part of the solution to the issue of craveability, Beckley and Moskowitz used conjoint analysis to create a set of 30 studies. Each study comprised four silos, with each silo in turn comprising nine elements. The silos were the same and elements were either the same or very similar across a number of studies – therefore it was possible to compare one element across different products, and generate a set of test concepts (see Figure 7.2.3). Thus, across studies of different foods the data comprised responses to 1080 elements. Each element of the 36 had a specific function, whether as a simple description of the food, a more complex description, an emotional reassurance or a brand.
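A sketch of the kind of structure this creates is shown below. The study names, element and utility values are hypothetical, but once the same or similar elements recur across studies, a simple table keyed by study and element lets a single statement be traced across foods.

```python
import pandas as pd

# Hypothetical slice of a cross-study utility database.
records = [
    {"study": "pizza",      "element": "Piled high with toppings", "utility": 9},
    {"study": "pizza",      "element": "Ready in 5 minutes",       "utility": 3},
    {"study": "cheesecake", "element": "Ready in 5 minutes",       "utility": -2},
    {"study": "coffee",     "element": "Ready in 5 minutes",       "utility": 6},
]
db = pd.DataFrame(records)

# How did one element fare across all of the foods in the database?
print(db[db["element"] == "Ready in 5 minutes"]
        .pivot(index="element", columns="study", values="utility"))
```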
Figure 7.2.3. Structure of an It! Concept (source: IT! Ventures, LLC, a limited partnership between the Understanding and Insight Group and Moskowitz Jacobs Inc.).
The craveability studies used conjoint analysis to identify the contribution of the different elements. The results generated a database that retains its usefulness almost a decade later. One can return to the data to discover how a specific term fared across all of the different foods or how different elements fared in a single food.
7.2.14.3
Beyond conjoint analysis and databases to the emerging science of Mind Genomics™
The application of conjoint analysis as the foundation of a new science is called Mind Genomics™. The term was first used by Moskowitz et al. in their 2005a paper and then used extensively by Moskowitz and Gofman (2007) in their book Selling Blue Elephants. The notion of Mind Genomics™ is patterned on the science of genomics. Very simply, we live in a world of concrete experiences that can be divided up into meaningful wholes, of everyday life, such as shopping for a specific item. In turn, the experience can be dimensionalized into various components by extracting its different aspects, putting them into silos, and then creating alternative elements to represent different parts of the experience. Thus, shopping for a car can be deconstructed into the different aspects, ranging from the nature of the store, to that of the interaction with salespeople, to the nature of the purchase. These aspects are the actual experiences, and the feelings about the experience. Once the experience is structured, with multiple elements for each silo, the apparatus of conjoint analysis comes into play, presenting different vignettes to people, acquiring the relevant rating, such as how closely does the vignette match the ideal experience, and then deconstructing the vignettes into the partworth contribution of each element.
In this first step of Mind Genomics™, the work product is a granular “science” of the specific experience. This first exercise produces a database of the mind in terms of that experience, couched in terms of the world of the everyday (Moskowitz et al., 2006). We don’t live in abstractions, but rather in the concrete aspects of everyday experience. To reiterate, it is the science of that concrete experience that is the focus of Mind Genomics™. In the second step in the Mind Genomics™ science, people are segmented based upon the patterns of their responses to these granular aspects of everyday life. This second exercise with the data reveals how different people “think about” the specific experience. Often there may be as few as two or as many as 5–10 mind-set segments. The segments represent individuals with different points of view about the same everyday experience, as well as represent theoretical states that the mind could take (e.g. price orientation vs. convenience orientation). The fewer segments the better when it comes time to apply the science for both understanding and commercial application. Traditional segmentations provide the basis for interesting stories about the way people divide, whether the topic area is foods, health concerns, cars and so forth. Based upon the segmentation, the researcher creates a persona, a small picture and story about the individual. The personas and stories, called heuristics, are interesting and make intuitive sense. However, these heuristics soon take over. Users of the segmentation data begin to believe that the persona must be real, despite that fact that originally the persona was a way to make the segmentation alive. And, in turn, when an individual who looks like the persona shows up, the natural response is to assume that the individual is a member of the segment. In the third step of Mind Genomics™, people are assigned to the different mind-sets, based upon a typing tool. In most cases, it will be virtually impossible to assign a good proportion of people into these mind-set segments when the only information available is previous purchase behavior and/or geodemographics. That is, the assignment is not based upon who the person is, but rather the way the person thinks. The assignment is done by means of a short “intervention test”, comprising 3–5 questions. The rationale for the typing test is the analogy to the way medicine is practiced. Today’s medical community uses “intervention” tests, such as blood tests and urine tests, to identify medical condition. A productive, perhaps “best” way to identify membership is through a short intervention, in the same way that the doctor takes a person’s blood, sends it to a testing laboratory, and receives a report. The doctor may take down a detailed family history, but the information about the patient’s physiological condition is best obtained by the different tests. The key here is that the patient provides the information through the interaction. To identify a person as a member of a mind-set segment requires the same type of thinking; a short intervention or typing test. From the pattern of responses to the test, one easily identifies segment membership. The identification isn’t perfect, being usually 60–70 percent correct. However, that percentage is far higher than guessing. Figure 7.2.4 shows an example of the typing tool to identify a mind-set segment. Figure 7.2.5 shows the output of the typing tool, what to say to the segment, as well as what to avoid.
Figure 7.2.4 Example of a typing tool for mind-set segmentation (source: IT! Ventures LLC, a limited partnership between the Understanding and Insight Group and Moskowitz Jacobs Inc.).
Figure 7.2.5 Example of output (what to say to the segment, what to avoid) (source: IT! Ventures LLC, a limited partnership between the Understanding and Insight Group and Moskowitz Jacobs Inc.).
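The sketch below gives a sense of how such a typing tool can work; the questions, segment profiles and assignment rule are hypothetical stand-ins for whatever the conjoint data actually support.

```python
import numpy as np

# Hypothetical typing questions (answered on a 1-9 scale) and the average
# answer pattern previously observed within each mind-set segment.
questions = ["importance of price", "importance of convenience", "importance of health"]
segment_profiles = {
    "price driven":       np.array([8.5, 4.0, 3.5]),
    "convenience driven": np.array([4.0, 8.0, 4.5]),
    "health driven":      np.array([3.5, 4.5, 8.5]),
}

def assign_segment(answers):
    """Assign a new person to the segment whose profile is closest."""
    distances = {name: np.linalg.norm(np.asarray(answers, dtype=float) - profile)
                 for name, profile in segment_profiles.items()}
    return min(distances, key=distances.get)

print(assign_segment([7, 5, 2]))   # -> price driven
```

Like the blood test in the analogy, the few answers are an intervention: the assignment is based on how the person responds, not on who the person is, and it will be right most of the time rather than always.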
7.2.15
Four considerations dictating the future use of conjoint analysis
In the long run, any method of research with consumers will enjoy success when it can be done efficiently, is transparent, is low in cost and, most importantly, when its contribution is great. These four considerations apply to conjoint analysis as well:
(1) Doing studies efficiently By efficiently, we mean that it is not onerous to run the study. Certainly at the early stage of development in any research technique implementation will be difficult. Easy-to-implement research methods are common. Yet, looking at their history will reveal that these noweasy methods had their beginnings in more tortuous methods, in experimental trials, and in false starts. Over time, as a research method gains traction and acceptance, ways are found by various users to make the process quick and easy. Conjoint analysis is one of those methods that has been simplified. Beginning with arcane methods in the 1960s, conjoint analysis has been simplified into a virtual cut/and/paste system (Moskowitz et al. 2001). Their system is, not of course, the only system, but it does represent a serious attempt to make conjoint analysis efficient and, thus, attractive for general use. (2) Transparency By transparency we mean simple, easy-to-use methods; the opposite of normal business practice which obscures the method to “protect trade secrets”. For conjoint analysis, transparency comes about by using welldefined, easy to replicate methods. One of these methods is ordinary leastsquares regression as the analytic tool, which deconstructs mixtures to the contributions of their components. The happy outcome is the ease with which the utility values can be obtained, and their value communicated to management. (3) Low cost For many years, conjoint analysis was promoted as a high visibility, high effort approach and, thus, expensive method. This approach lent the method and its proponents a business and intellectual cachet, one hard to resist in a competitive world. Beginning with customized experimental designs, researchers made the method increasingly complicated, often by using hard-to-understand analyses. We believe, however, that creating an offthe-shelf version of conjoint analysis, morphing it into a web-based tool, and then expanding the use where possible is a better direction for the future, despite the fact that as a result conjoint analysis will become more ordinary, more pedestrian and less price-prohibitive. (4) Magnifying the contribution Read the books on statistical research methods, and look at the way conjoint analysis is framed. Conjoint analysis is typically presented as a solution to a specific problem. As such, for many years it developed its own literature, typically combining method and data. The papers that reported using conjoint analysis were as much expositions of method to solve a particular problem as they were actual data of relevance to the science. The method was so difficult, time consuming and complex that one had to spend as much time on the method as on the substantive results. With the development of web tools, however, methodological issues receded into the background. Conjoint analysis has now come to the fore as a tool to create a body of knowledge, and not simply as a state-of-the-art technology for oneoff solutions to rare problems. Today we see conjoint analysis joining the mainstream, as a business and science workhorse, rather than merely as a showcase capability, shown, but rarely used.
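The transparency point in particular is easy to illustrate. In the sketch below, which uses simulated ratings and a made-up set of elements, each test concept is coded as a row of 0s and 1s indicating which elements it contains, and ordinary least-squares regression returns the additive constant plus one utility value per element.

```python
import numpy as np

rng = np.random.default_rng(2)
n_concepts, n_elements = 40, 6
true_utilities = np.array([12, 6, 0, -4, 8, 2])
true_constant = 30

# 0/1 design matrix: which elements appear in each test concept.
X = rng.integers(0, 2, size=(n_concepts, n_elements))
ratings = true_constant + X @ true_utilities + rng.normal(0, 3, n_concepts)

# Ordinary least squares with an intercept column deconstructs the concept
# ratings into the additive constant and the part-worth utilities.
X1 = np.column_stack([np.ones(n_concepts), X])
coefs, *_ = np.linalg.lstsq(X1, ratings, rcond=None)
print("additive constant:", round(float(coefs[0]), 1))
print("element utilities:", np.round(coefs[1:], 1))
```

Because the whole deconstruction is a single, well-documented regression, anyone with the raw data can recompute and audit the utilities, which is the sense in which the method is transparent.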
Acknowledgment The authors wish to thank Linda Ettinger Lieberman, Editorial Assistant, at Moskowitz Jacobs Inc., for preparing this chapter for publication.
References
Anderson, N. (1970) “Functional Measurement and Psychophysical Judgment”. Psychological Review, 77, 153–170. Beckley, J.H. and Moskowitz, H.R. (2002) “Databasing the Consumer Mind: The Crave It!, Drink It!, Buy It!, Protect It! and the Healthy You! Databases”. Paper presented at the annual meeting of the Institute of Food Technologists, Anaheim, California, June 2002. Box, G.E.P., Hunter, W.G. and Hunter, J.S. (1978) Statistics for Experimenters. New York: John Wiley & Sons. Galanter, E., Moskowitz, H. and Silcher, M. (2011) People, Products & Prices; Sequencing the Economic Genome of the Customer’s Mind. Sharjah, UAE: BenthamScience Publishers Ltd. Gofman, A. (2006) “Emergent Scenarios, Synergies and Suppressions Uncovered within Conjoint Analysis”. Journal of Sensory Studies, 21, (4), 373–414. Gofman, A. and Moskowitz, H.R. (2010) “Application of Isomorphic Permuted Experimental Designs in Conjoint Analysis”. Journal of Sensory Studies, 25 (1), 127–145 (DOI 10.1111/j.1745–459X.2009.00258.x). Gofman, A., Moskowitz, H.R. and Mets, T. (2009) “Integrating Science into Web Design: Consumer Driven Website Optimization”. The Journal of Consumer Marketing, 26, (4), 286–298. Green, P.E. and Srinivasan, V. (1980) “A General Approach to Product Design Optimization Via Conjoint Analysis”. Journal of Marketing, 45, 17–37. Green, P. & Tull, D. (1978) Research for Marketing Decisions (5th edition). Englewood Cliffs, NJ: Prentice Hall. Luce, R.D. and Tukey, J.W. (1964) “Conjoint Analysis: A New Form of Fundamental Measurement”. Journal of Mathematical Psychology, 1, 1–36. Montgomery, D.C. (1991) Design and Analysis of Experiments. New York: John Wiley & Sons. Moskowitz, H.R. (1994) Food Concepts & Products: Just in Time Development. Trumbull, CT: Food and Nutrition Press. Moskowitz, H.R. (1995) Consumer Evaluation of Personal Care Products. New York: Marcel Dekker. Moskowitz, H. (2009) “Sequencing the Genome of the CUSTOMER Mind: Application to Food and Drink”. Food Technology and Innovation Forum. May, Chicago, IL. Moskowitz, H.R. and Gofman, A. (2004) System and Method for Performing Conjoint Analysis. Provisional Patent Application, 60/538,787, filed 23 January 2004. Moskowitz, H. and Gofman, A. (2007) Selling Blue Elephants. Upper Saddle River, NJ: Wharton School Publishing. (Translated into numerous foreign editions.) Moskowitz, H.R., Gofman, A., Katz, R., Itty, B., Manchaiah, M. and Ma, Z. (2001) “Rapid, Inexpensive, Actionable Concept Generation and Optimization – the Use and Promise of Self-authoring Conjoint Analysis for the Foodservice Industry”. Foodservice Technology, 1, 149–168. Moskowitz, H.R., German, B. and Saguy, I.S. (2005a) “Unveiling Health Attitudes and Creating Good-For-You Foods: The Genomics Metaphor and Consumer Innovative Web-based Technologies”. CRC Critical Reviews in Nutrition and Food Science, 45 (3), 265–291. Moskowitz, H.R., Gofman, A., Beckley, J.H. and Ewald, J. (2005b) “Brand Name Anatomy: Experimental Design Assesses the Value of Retailer Names”. Marketing Research, 17 (3), 14–19.
Moskowitz, H.R., Porretta, S. and Silcher, M. (2005c) Concept Research in Food Product Design and Development. Ames, IA: Blackwell Publishing Professional. Moskowitz, H.R., Gofman, A., Beckley, J.H. and. Ashman, H. (2006) “Founding a New Science: Mind Genomics”. Journal of Sensory Studies, 21 (3), 266–307. Moskowitz, H.R., Reisner, M., Lawlor, J.B. and Deliza, R. (2009) Packages, Ideas and Experience in Food Product Design and Development. Ames, IA: Blackwell Publishing Professional. Page, A.L. and Rosenbaum, H.F. (1989) Redesigning Product Lines With Conjoint Analysis: A Reply to Wittink”. Journal of Product Innovation Management, 5, 293–296. Peryam D.R. and Pilgrim F.J. (1957) “Hedonic Scale Method of Measuring Food Preference”. Food Technology, 11, 9–14. SYSTAT (2008) SYSTAT for Windows, Version 11. The system for statistics. User Manual. Evanston, IL: Systat Corporation, Division of SPSS. Wedel, M. and Kamakura, W.A. (1998) Market Segmentation: Conceptual and Methodological Foundations. Dordrecht, The Netherlands: Kluwer Academic Publishers. Wells, W.D. (1975) “Psychographics. A Critical Review”. Journal of Marketing Research, 12, 196–213. Wittink, D.R., Vriens, M. and Burhenne, W. (1994) “Commercial Use of Conjoint Analysis in Europe: Results and Critical Reflections”. International Journal of Research in Marketing, 11 (3), 41–52.
7.3
Benefit Hierarchy Analysis
Efim Shvartsburg
Key learnings
✓ Consumer behavior model based on concept of bounded rationality and decision schemas
✓ Benefits hierarchy based on consumer choice and preference, either conscious or subconscious
✓ Hierarchy analysis could be used to optimize products' tangible characteristics (sensory attributes) or intangibles (perceived benefits)
7 7.3.1 Benefit hierarchy analysis – a new way to identify what drives consumers’ liking, purchase intent or preference The steps in the new product development process entail defining the product concept, identifying the consumer needs and product benefits, and determining the target consumer demographics. Then, an optimal product formulation (or several alternative formulations) is developed that can satisfy potential consumer needs, at a manufacturing cost that is low enough to justify a reasonable price. In every step of the new product development process, researchers are trying to determine what product benefits, consumer or sensory attributes, ingredients (including their different levels and combinations) drive product liking, purchase intent or preference. Hierarchy analysis is a relatively new data analysis technique that allows researchers to answer these questions by organizing benefits, attributes or different ingredient levels into hierarchies according to their relative impact on consumer choice and preference.
7.3.2 Hierarchy analysis vs. traditional approaches The most noticeable difference between hierarchy analysis and traditional approaches to product optimization is the choice of optimization criterion. Let’s consider a typical study where each respondent tastes several similar products sequentially and uses the following nine-point hedonic overall liking scale to evaluate each product: 9 – Like extremely 8 – Like very much 7 – Like moderately 6 – Like slightly 5 – Neither like nor dislike 4 – Dislike slightly 3 – Dislike moderately 2 – Dislike very much 1 – Dislike extremely Traditional data analysis methodologies will either calculate the mean overall liking score for each product and use it as a criterion for decision making, thus implying that the best product is the one with the highest mean overall liking score; or calculate for each product a percent of respondents who rated the product as like extremely or like very much, the so-called top 2 box score, and use it as a criterion for decision making, thus implying that the best product is the one with the highest top 2 box overall liking score. In contrast, hierarchy analysis uses the criterion that the best product is the most preferred product. Let’s consider the example presented in Table 7.3.1, which shows the results from ten respondents who rated two products using a nine-point overall liking scale.
Table 7.3.1 Product rating.

Respondent        Product A rating    Product B rating    Preferred product
1                 8                   9                   B
2                 8                   9                   B
3                 8                   9                   B
4                 8                   9                   B
5                 8                   9                   B
6                 8                   9                   B
7                 8                   9                   B
8                 8                   9                   B
9                 9                   2                   A
10                9                   1                   A
Mean score        8.2                 7.5
Top 2 box score   100%                80%
Preference        20%                 80%
Using either the mean overall liking score or the top 2 box overall liking score, we would come to the conclusion that product A is better than product B. However, when analyzing individual preferences on a respondent by respondent basis, 80 percent of the respondents preferred product B over product A. Thus, according to the criterion that the best product is the most preferred product, we would infer that product B is better than product A. The main source of discrepancies between the outcomes of different criteria usage comes from the way that the three different methods use the original nine-point hedonic overall liking scale: (1) Mean overall liking score criterion treats the scale as an interval scale, presuming that all differences between numeric tags assigned to each verbal statement are equidistant. (2) Top 2 box overall liking score treats the nine-point hedonic scale as binomial, recognizing only the difference between a “good rating” (like extremely or like very much) and a “bad rating”, but neglecting all the other differences. (3) Preference criterion treats the nine-point hedonic overall liking scale as ordinal, assuming that the rating 9 is better than the rating 8, that the rating 8 is better than the rating 7, etc., without any assumptions regarding distances between verbal statements and without any loss of information resulting from aggregating the statements into a “good” and a “bad” category.
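A small sketch using the ratings in Table 7.3.1 shows how differently the three criteria treat the same data; the preference tally simply compares the two ratings within each respondent.

```python
product_a = [8, 8, 8, 8, 8, 8, 8, 8, 9, 9]
product_b = [9, 9, 9, 9, 9, 9, 9, 9, 2, 1]

mean_a, mean_b = sum(product_a) / len(product_a), sum(product_b) / len(product_b)
top2_a = sum(r >= 8 for r in product_a) / len(product_a)
top2_b = sum(r >= 8 for r in product_b) / len(product_b)
prefer_b = sum(b > a for a, b in zip(product_a, product_b)) / len(product_a)

print(f"mean:       A={mean_a:.1f}   B={mean_b:.1f}")               # 8.2 vs 7.5
print(f"top 2 box:  A={top2_a:.0%}  B={top2_b:.0%}")                # 100% vs 80%
print(f"preference: B preferred by {prefer_b:.0%} of respondents")  # 80%
```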
From the measurement theory viewpoint (Coombs, 1964), the preference criterion is the only correct criterion, corresponding to the nature of the measurement scale used.
7.3.3 Bounded rationality: the reason behind benefit hierarchy
The theoretical behavior background of the hierarchy analysis is based on a model of consumer behavior known as "bounded rationality". The term and the concept were originally introduced by Herbert A. Simon (Simon, 1957), who in 1978 was awarded the Nobel Prize in economics "for his pioneering research into the decision-making process". Ideas of bounded rationality were further expanded by Daniel Kahneman (Kahneman et al., 1982), who in 2002 received the Nobel Prize in economics "for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty". The main distinction of "bounded rationality" from "full rationality" (which is assumed in such a popular method as conjoint analysis) lies in the recognition that consumers have limited cognitive abilities and limited time to make decisions. Therefore, consumers are not able to evaluate all product benefits, attributes or ingredients at once, then immediately construct a utility function and maximize its expected value. There is overwhelming experimental evidence for substantial deviation of actual consumer behavior from what is predicted by traditional rationality models (Kahneman et al., 1982). Some authors call it "irrationality",
but, in our opinion, the problem is not that people behave irrationally, but that elegant and beautiful mathematical rationality models do not adequately explain the consumer’s decisions and choices. According to Gigerenzer et al. (1999, p. 9), “The greatest weakness of unbounded rationality is that it does not describe the way real people think.” According to the bounded rationality concept, consumers employ the use of heuristics or schemas to make decisions rather than strict rigid rules of decision optimization (Gigerenzer and Selten, 2001). A schema is a mental structure we use to organize and simplify our knowledge of the world around us. We have schemas just about everything, including ourselves, other people, cars, phones, food, etc. Schemas affect what we notice, how we interpret things and how we make decisions and act. We use them to classify things, such as when we “pigeon-hole” people. They also help us to forecast and predict what will happen in the future. We even remember and recall things via schemas, using them to “encode” memories. Schemas are often shared within cultures and allow communication to be shortened. Every word is, in fact, a schema, that we can interpret in our own way. We tend to have favorite schemas which we use often. They act like filters, accentuating and downplaying various aspects of the things surrounding us, including different product attributes and benefits. Schemas are also self-sustaining, and persist even in the face of disconfirming evidence. If something does not match the schema, such as evidence against it, the contradictory evidence is often consciously or subconsciously ignored. Some schemas are easier to change than others, and some people are more open to changing their schemas than others. Schemas are also referred to in literature as mental models, mental concepts, mental representations and knowledge structures. The basic proposition of the bounded rationality theory applied to consumer behavior is that consumers are rational, and when they make choices or preferences between products, they have some conscious or subconscious reasons for those choices or preferences that are realized through their individual schemas. Hierarchy analysis presumes that each consumer uses an individual schema for evaluating a particular category of products and makes choices between products within the category based on this schema. Hierarchy analysis represents consumer schema in the form of a hierarchy of benefits, attributes or ingredient levels arranged in the order of likelihood of their impact on consumer decisions. By aggregating schemas among random probability samples of consumers, hierarchy analysis allows us to determine the prevalent schema in a population. On the other hand, hierarchy analysis methodology allows us to group consumers into clusters based on similarities or dissimilarities of their individual schemas to discover market segmentation based on consumer schemas. In addition, hierarchy analysis methodology includes procedures for testing statistical hypotheses related to consumer schemas, for example if a particular product benefit is more important than another benefit, or if a particular product benefit is more important for one consumer group than for another consumer group, or if a particular product benefit is more important for choice of one product than for choice of another product. Bounded rationality concept assumes that consumers evaluate products in three steps (Gigerenzer and Selten, 2001):
(1) First they search for some familiar cues.
(2) When consumers have found enough cues, they stop searching and start evaluating and organizing these cues in some order of importance to them or the magnitude of the differences between products.
(3) Then they make judgments regarding "overall liking", "purchase intent" and the choice of product.
The hierarchy analysis model relies on the assumption that some of the cues recognized by consumers are related directly or indirectly, consciously or subconsciously, to the set of product benefits and attributes that we ask consumers to evaluate (or to the levels and the combinations of the ingredients and the sensory attributes that are associated with the products, evaluated by consumers). Consumers do not necessarily make quantitative choices between alternative options based on their perceived utilities. Instead, they rely on qualitative expectations regarding directional changes. For each pair of products, one product could be evaluated by a consumer as better than or as worse than another, or the differences between two products could be negligible. In this model of consumer behavior, the actual magnitude of the differences between products does not affect the product choice, only the directional differences matter. On other hand, the greater the magnitude of the differences between products, the more consumers will recognize the differences as noticeable and express their preferences. Therefore, the strength of preferences is measured, not in the magnitude of the differences between products or their utilities, as in the case of conjoint analysis, but by the proportion of consumers who evaluated the product as preferred over the alternatives. By considering only the directional differences between products and benefits, this method essentially treats all scales of measurement used in consumer research as ordinal, not interval. This corresponds to the actual nature of the scales and makes this technique conceptually more valid in comparison with traditional statistical methods based on means and correlations that treat all consumer research scales as if they were interval. Another important advantage of this approach over traditional statistical techniques is an acknowledgment of the fact that each respondent has an individual interpretation of the meanings of different values on psycholinguistic scales. Traditional statistical methods compare ratings given by an individual respondent to sample averages. This implies that all respondents interpret scales in the same manner. But, individual interpretations of scales might differ between respondents based on cultural background, education, age, gender, personal experiences, etc. Hierarchy analysis deals with data on a respondent by respondent basis, assuming that each respondent interprets the scales in an individual manner but consistently across various products, benefits, attributes or concepts. There are multitudes of articles in marketing research literature related to the effect of cross-cultural differences on scale item interpretations. This issue taints inferences based on the comparison of mean scores for the same product or benefit across different countries, languages or cultures. By analyzing data on a respondent by respondent basis, hierarchy analysis is free from this problem and allows the direct comparison of results across countries, languages and cultures. Traditional statistical methods usually assume the normal distribution of answers among respondents for all attributes and criterion ratings. Even if this
is not stated explicitly, the mere fact that traditional statistical methods use only means and standard deviations to describe the statistical distribution of answers, characterizes the distribution as normal. Moreover, assuming normality implies that the distributions must be symmetric. In fact, we practically never observe symmetrical normal distribution in marketing research studies; in many cases answers are skewed toward high ratings, limited by range, and do not have a symmetrical normal distribution. Also, as we stated above, a normal distribution could be applied only if we treat all scales as interval, which actually contradicts the ordinal nature of the scales used. Hierarchy analysis methodology does not rely on any assumptions about distributions and accepts all actual distributions “as is”, which makes it a robust statistical method by definition. Most of the traditional statistical techniques are based on linear relationships between criterion and factors (regression and correlation analysis) or linear additive models (conjoint analysis) or polynomial models (response surface analysis). Hierarchy analysis presumes only probabilistic directional relationships between criterion and factors, which makes it independent from the researcher’s assumptions regarding data. The integral part of hierarchy analysis is the philosophy of exploratory data analysis (EDA), which was introduced by John W. Tukey (1977). The exploratory approach to data analysis calls for the exploration of the data with an open mind. According to Tukey, the goal of EDA is to discover patterns in data. He often likened EDA to detective work; Tukey suggested thinking of exploratory analysis as the first step in a two-step process similar to that utilized in criminal investigations. In the first step, the researcher searches for evidence using all of the investigative tools that are available. In the second step, that of confirmatory data analysis, the researcher evaluates the strength of the evidence and judges its merits and applicability. In the classical analysis framework, the data collection is followed by the imposition of a model (normality, linearity, etc.), and then the analysis that follows is focused on the parameters of that model. For EDA, the data collection is followed immediately by an analysis that has the goal of inferring which models are appropriate. Hence, the EDA approach allows the data to suggest models that best fit the data. Following the spirit of EDA, benefit hierarchy analysis evaluates all the possible multimodal relationships between product preferences and benefits and estimates the likelihood that each benefit has an impact on product preference. The result is a hierarchy of benefits, arranged in the order of likelihood of their impact on product choice and preference.
7.3.4 How hierarchy analysis ranks the benefits and product attributes
Another cornerstone of benefit hierarchy analysis is the concept of probabilistic causality. A probabilistic causality approach applied to the analysis of consumer choice and preference data assumes the following:
● The observed choices and preferences are not spontaneous, but are the results of the conscious or subconscious use of schemas by consumers in their decision-making process.
● The actual product characteristics, such as various ingredient levels or sensory attributes, could be related to cues discovered by consumers and used in their schemas.
● The perceived product benefits and attributes could be related to cues discovered by consumers and used in their schemas.
● Consumer schemas represent reasons or causes for their choices.
● Consumers do not use their schemas deterministically and always consistently.
● Consumers do not use their schemas stochastically or completely randomly.
● For each of the possible product benefits, attributes or ingredients, there is an objective probability that consumers use this particular component in determining their choices and preferences.
● This causal probability could be estimated from the data.
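As a loose illustration of that last point, and not the actual estimation procedure used in benefit hierarchy analysis, one can tally, across all within-respondent comparisons where a preference was expressed, how often each benefit's ratings move in the same direction as the preference. The data and the scoring rule below are hypothetical.

```python
from collections import Counter

# Hypothetical data: each respondent rated two products on overall liking and
# on two benefits (taste, price), all on a nine-point scale.
# Tuples are (liking_A, liking_B, taste_A, taste_B, price_A, price_B).
respondents = [
    (8, 6, 8, 5, 4, 4),
    (5, 8, 4, 7, 6, 6),
    (7, 9, 6, 8, 7, 5),
    (6, 6, 5, 5, 7, 3),   # no preference, so it contributes no evidence
    (9, 4, 9, 3, 3, 6),
]

evidence, opportunities = Counter(), Counter()
for la, lb, ta, tb, pa, pb in respondents:
    if la == lb:                       # only directional differences matter
        continue
    preferred_is_a = la > lb
    for name, a, b in (("taste", ta, tb), ("price", pa, pb)):
        if a != b:
            opportunities[name] += 1
            if (a > b) == preferred_is_a:
                evidence[name] += 1

for name in ("taste", "price"):
    share = evidence[name] / opportunities[name] if opportunities[name] else 0.0
    print(f"{name}: aligned with the expressed preference in {share:.0%} of informative pairs")
```

Benefits that are consistently aligned with observed choices rise toward the top of the hierarchy; benefits that are not sink toward the bottom.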
The process of estimating causal probabilities from observed data starts with the assumption that all attributes or benefits are mutually independent and a priori each have an equal chance to be a cause for the consumer’s choices or preferences. Then, by analyzing evidence of all pairwise relationships between benefits from the data, and testing, for each pair of benefits, two alternative hypotheses: (1) That benefit A is more likely to be a cause for the choice and preference between products than benefit B, and (2) That benefit B is more likely to be a cause for the choice and preference between products than benefit A, we can estimate for every benefit, the a posteriori likelihood that the benefit is a cause of choice and preference between products. The result is a hierarchy of benefits, arranged in the order of this a posteriori likelihood of the impact on product choice and preference. The following examples illustrate several practical uses of hierarchy analysis in consumer research. The company wanted to develop a new kind of fresh baked bread to sell in stores nationwide. Their product developers created nine prototypes for the bread using Taguchi experimental design for four three-level (high-medium-low) design factors, as outlined below in Table 7.3.2. To identify which of the nine product prototypes is the most preferred by consumers, a nationally representative sample of 450 consumers were interviewed in 25 locations. Each respondent tasted four of the nine samples of bread (incomplete block design). To avoid order bias, we implemented a random
Table 7.3.2 Design factors.

Product    Factor 1    Factor 2    Factor 3    Factor 4
1          3           1           3           2
2          2           2           3           1
3          2           3           1           2
4          3           2           1           3
5          1           1           1           1
6          1           2           2           2
7          2           1           2           3
8          3           3           2           1
9          1           3           3           3
balanced rotation algorithm. As a result of the random balanced rotations, each respondent tasted a unique set of four products. Each product was tasted an equal number of times in each position balanced by location and each pair of products was tasted an equal number of times on each sequential position. For each product, respondents were asked overall liking, using a nine-point scale, and 13 diagnostic attributes. Figure 7.3.1 shows the results of the hierarchy analysis. In the hierarchy analysis, all products are arranged in the order of their preference and labeled alphabetically, so “A” is a label for the most preferred or best product, while “I” is a label for the least preferred or worst product. The bars for each product represent the likelihood that the product is the most preferred by consumers in comparison to the other products being considered. For product 1, which is labeled with the letter “A”, the 94.6 percent denotes that, based on the evidence in the data, we have a 94.6 percent confidence that product 1 is the most preferred product. The letter “C” after the confidence signifies that this product is more preferred than any product labeled with the letter “C” or below, with at least 95 percent confidence. Product 7, which is labeled with the letter “B” is the second most preferred product. The likelihood that product 7 is the most preferred product is equal to 89.0 percent, which is greater than all the products labeled with the letter “C” or below. Statistically, product 1 and product 7 are at parity, despite the fact that product 1 has a numerically greater likelihood of being the most preferred product. Now, when we know the hierarchy of product preference, we can define the optimal levels of four Taguchi design factors using a procedure called non-parametric response surface analysis. The principal difference of this analysis from the traditional response surface analysis is the fact that we do not restrict a set of possible functions describing the relationships between the design factors and the overall criterion to being the subset of polynomial regression functions,
[Bar chart, likelihood (percent) of being the most preferred product: product 1 (A) 94.6 C; product 7 (B) 89.0 C; product 8 (C) 67.1 E; product 6 (D) 62.6 F; product 2 (E) 51.9 F; product 4 (F) 38.0 H; product 9 (G) 33.9 H; product 3 (H) 7.5; product 5 (I) 5.4.]
Figure 7.3.1 Hierarchy analysis of products.
Table 7.3.3 Non-parametric response surface analysis. Cells with the optimal levels of the corresponding factors are emboldened.

Product         Factor 1    Factor 2    Factor 3    Factor 4
1               3           1           3           2
2               2           2           3           1
3               2           3           1           2
4               3           2           1           3
5               1           1           1           1
6               1           2           2           2
7               2           1           2           3
8               3           3           2           1
9               1           3           3           3
Optimal level   3           1           2           3
Confidence      98.2        91.1        88.4        76.5
[Bar chart, likelihood (percent) that each level of factor 1 is preferred over the other levels: 5.1, 46.7 and 98.2, with the optimal (high) level at 98.2.]
Figure 7.3.2 Non-parametric response surface analysis of factor 1.
but we build the response surface as a multitude of points of interest. Table 7.3.3 shows the results of the non-parametric response surface analysis. Product 1 has the optimal levels for factors 1 and 2, while product 7 has the optimal levels for factors 2, 3 and 4. A product with a high level of factors 1 and 4, a low level of factor 2, and a medium level of factor 3, which was not part of the original design, could potentially be the best product. Figures 7.3.2, 7.3.3, 7.3.4 and 7.3.5 represent the non-parametric response surfaces for the factors. The numbers in the tables represent the likelihood that the corresponding level of the factor is preferred by consumers over the other levels of the same factor.
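The main-effect logic can be sketched in a simplified form; the published method works from the respondent-level preference data, whereas the toy version below simply pools, for each factor, the Figure 7.3.1 likelihoods of the products that share a level.

```python
from collections import defaultdict

# Taguchi design from Table 7.3.2: levels (1=low, 2=medium, 3=high) per product.
design = {
    1: (3, 1, 3, 2), 2: (2, 2, 3, 1), 3: (2, 3, 1, 2),
    4: (3, 2, 1, 3), 5: (1, 1, 1, 1), 6: (1, 2, 2, 2),
    7: (2, 1, 2, 3), 8: (3, 3, 2, 1), 9: (1, 3, 3, 3),
}

# Likelihood that each product is the most preferred (Figure 7.3.1).
preference = {1: 94.6, 2: 51.9, 3: 7.5, 4: 38.0, 5: 5.4,
              6: 62.6, 7: 89.0, 8: 67.1, 9: 33.9}

for factor in range(4):
    by_level = defaultdict(list)
    for product, levels in design.items():
        by_level[levels[factor]].append(preference[product])
    summary = {level: round(sum(vals) / len(vals), 1)
               for level, vals in sorted(by_level.items())}
    print(f"factor {factor + 1}: mean preference likelihood by level {summary}")
```

Even this crude pooling points to the same optimum for factor 1 (level 3) that the full analysis reports, although the likelihood values themselves are not comparable to those in Table 7.3.3.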
[Bar chart, likelihood that each level of factor 2 is preferred: 2.9, 56.0 and 91.1, with the optimal (low) level at 91.1.]
Figure 7.3.3 Non-parametric response surface analysis of factor 2.
[Bar chart, likelihood that each level of factor 3 is preferred: 0.0, 61.6 and 88.4, with the optimal (medium) level at 88.4.]
Figure 7.3.4 Non-parametric response surface analysis of factor 3.
As we can see from the results of the four main effect analyses for the four design factors above, the gradient of differences between the optimal factor level and the second best factor level is 51.6 percent for factor 1; for factor 2 it is 35.2 percent, for factor 3 it is 26.8 percent and for factor 4 it is 9.6 percent. Therefore, by deviation from the optimal factor level, we would be exposed to the highest risk for factor 1, followed by factor 2, and then factor 3, with factor 4 representing the lowest risk.
[Bar chart, likelihood that each level of factor 4 is preferred: 6.7, 66.8 and 76.5, with the optimal (high) level at 76.5.]
Figure 7.3.5 Non-parametric response surface analysis of factor 4.
7
7.3.5 Identify drivers of liking, purchase intent or preferences Another useful application of hierarchy analysis involves linking the consumer preferences to the sensory attributes of the products. This methodology evaluates, for each sensory attribute, the likelihood that consumers can recognize different levels of the attribute for different products and make choices or express preferences between products based on this information. If consumers do not express preferences between two products with different levels of a sensory attribute, then we might conclude that the difference between these two levels of a sensory attribute is not noticeable to the average consumer, but can be discriminated by a trained sensory panel. In the bread optimization project described above, a sensory panel evaluated 55 various sensory attributes for each bread sample: ● ● ●
● Thirty-two attributes are related to the taste of the bread
● Ten attributes are related to the texture of the bread
● Thirteen attributes are related to the aroma of the bread.
As a result of applying the hierarchy analysis methodology to all 55 attributes, we discovered 11 sensory attributes that affect consumer choices with at least an 80 percent likelihood. Each of these 11 attributes has a larger impact on consumer preferences, with at least a 95 percent confidence level, than any of remaining 44 attributes. Figure 7.3.6 illustrates the results of the application of the hierarchy analysis methodology to the sensory attributes. The sensory flavor attribute flavour 27 has the singular highest impact on consumer choice, with a 99.2 percent confidence level. The four attributes, flavour 6,
[Bar chart, likelihood of impact on consumer choice: Flavor 27 (A) 99.2B; Flavor 6 (B) 95.5F; Texture 5 (C) 95.5F; Texture 9 (D) 95.5F; Flavor 26 (E) 93.5F; Flavor 9 (F) 91.0G; Flavor 23 (G) 89.0H; Aroma 8 (H) 86.0J; Aroma 2 (I) 85.5J; Aroma 7 (J) 81.6L; Texture 7 (K) 80.9L; Flavor 4 (L) 77.7Q.]
Figure 7.3.6 Hierarchy analysis of sensory attributes.
texture 5, texture 9 and flavour 26, are statistically at parity on their likelihood to impact consumer preferences, with confidence levels ranging from 95.5 percent to 93.5 percent. Overall, flavor and texture sensory attributes have a greater impact on consumer choices and preferences between the nine samples of bread than the aroma related attributes, because the most impactful of the aroma attributes is ranked only eighth in the hierarchy. The hierarchy analysis for sensory attributes not only identifies which sensory attributes have an impact on consumer choice, but defines the optimal range for each sensory attribute. Figure 7.3.7 illustrates the optimal sensory attribute range for the most impactful sensory attribute, flavour 27. The optimal range for this attribute is below 9.5. Only two the most preferred products, product 1 and product 7, have this sensory attribute in the optimal range. As we can see from Figure 7.3.10, in this case, the application of the standard polynomial regression to the data would give a similar conclusion: products with smaller levels of the sensory attribute are more preferred. However, the hierarchy analysis reveals the two ranges of the attribute that are recognizable by consumers. Products in the optimal range have relatively high average likelihood (91.8 percent) of being the most preferred product, while products with sensory attribute levels higher than 9.5 have a low average likelihood of being the most preferred product (only 38.1 percent). During the product evaluation, respondents were asked the overall liking rating for each product and the ratings of 13 diagnostic attributes, using the same nine-point scale mentioned previously. Figure 7.3.8 demonstrates the application of the hierarchy analysis to the ratings of the 13 bread diagnostic attributes. The liking of the taste of the bread is the singular best predictor of overall product preferences, with 99.8 percent likelihood. The liking of the texture of the bread and the liking of the crust of the bread are statistically at parity, with an 85.6 percent and 82.8 percent likelihood, respectively. These results closely
[Scatter plot: likelihood of being the most preferred product (0 to 100) versus the level of the most impactful sensory attribute (approximately 8 to 16) for products 1 to 9; the polynomial fit has R² = 0.4551.]
Figure 7.3.7 Hierarchy analysis for the most impactful sensory attribute.
[Bar chart, likelihood of impact on overall preference: Taste of bread (A) 99.8B; Texture of bread (B) 85.6D; Crust of bread (C) 82.8D; Appearance of bread (D) 61.2hI; Moistness of bread (E) 60.2hI; Aroma of bread (F) 56.4hJ; Thickness/denseness of bread (G) 55.5hJ; Crispness/crunchiness of crust (H) 48.1J; Color of bread crust (I) 44.8J; Color of bread interior (J) 30.4K; Liking of particulates (K) 13.5L; Amount of crumbs from bread (L) 6.3; Amount of particulates within bread (M) 5.4.]
7.3.6
Consumer segmentation using individual schemas Figure 7.3.8 represents the prevalent schema in a population for choosing between samples of bread. As mentioned above, while assessing the prevalent schema in a population, we calculated the individual schema for each respondent. Now we can
Beckley_c07.indd 236
1/31/2012 11:35:17 PM
Benefit Hierarchy Analysis
95% Significant differences
100.0
Taste of bread
Aroma of bread
Crust of bread
75.0 Segment 1 (52%)
237
Texture of bread Moistness of bread Appearance of bread
Thickness/ denseness of bread
50.0
Color of bread crust Crispness/ crunchiness of crust Color of bread interior 25.0 Liking of particulates Amount of particulates within bread Amount of crumbs from bread
0.0 0
25
50
75
100
Segment 2 (48%)
Figure 7.3.9 Comparative schema analysis.
7 use these results to evaluate the homogeneity of the consumer schemas. Applying traditional Ward’s algorithm of cluster analysis to individual schemas, we discovered two different consumer segments with different schemas. Figure 7.3.9 illustrates the statistical comparative analysis of two schemas. For all consumers, the most impactful product attribute is the taste of the bread. However for 52 percent of consumers (segment 1), the second most impactful attribute is the aroma of the bread. For the other 48 percent of consumers (segment 2), the aroma of the bread is ranked very low on the schema hierarchy; this is why aroma was not placed high on an average consumer schema presented on Figure 7.3.8. We can clearly see that the taste and the crust of the bread are equally important for both consumer segments. However, the aroma of the bread is significantly more impactful for segment 1, while the texture of the bread and the crispiness/crunchiness of the crust are significantly more impactful for segment 2, with at least 95 percent confidence. As a result of applying two different schemas to the product evaluation, consumers belonging to the different segments prefer different products. Figure 7.3.10 illustrates the results of the statistical comparative product choice analysis. Consumers in segment 1 preferred product 8, product 4 and product 9, with a significantly greater likelihood than the consumers in segment 2, with product 8 being the most preferred product in segment 1, with 93.4 percent likelihood. Consumers in segment 2 preferred product 1, product 6 and product 2, with a significantly greater likelihood than consumers in segment 1, with product 1
Beckley_c07.indd 237
1/31/2012 11:35:17 PM
238
Product Innovation Toolbox
95% Significant differences 100.0
Product 8 Product 7
Segment 1 (52%)
75.0 Product 4
Product 1
Product 9 50.0 Product 6
Product 2
25.0 Product 5
Product 3 0.0 0.0
25.0
50.0 Segment 2 (48%)
75.0
100.0
Figure 7.3.10 Comparative choice analysis.
7 being the most proffered product in segment 2, with 98.5 percent likelihood. Interestingly, product 7 is the second most preferred choice for both segments, and should be chosen if the manufacturer decides to introduce just one new product to the market. Alternatively, the introduction of two new products corresponding to product 1 and product 8 will better satisfy both segments. Product optimization, based on experimental design and sensory attributes, illustrated above, could be performed for every segment for more insight.
7.3.7
Summary and future Hierarchy analysis is a versatile and robust statistical methodology that helps to solve many tasks of consumer research. It has more than 15 years of history of usage for hundreds of consumer research projects by leading consumer packaged goods manufacturers. It is based on the bounded rationality consumer behavior theory and treats all consumer research scales as ordinal. The analyses are performed on a respondent by respondent basis, without unjustified assumptions of interval scales, respondent uniformity, linearity and normality. It provides quantifiable recommendations for choosing the best product prototype and the best levels of design factors or sensory attributes. It reveals the reasons for consumer choice and preference between products (known as consumer schemas), provides statistical tests for homogeneity of schemas in the population and discovers consumer segments in the cases of heterogeneous consumer schemas.
References
Coombs, C.H. (1964) A Theory of Data. New York: John Wiley & Sons.
Gigerenzer, G. and Selten, R. (eds) (2001) Bounded Rationality: The Adaptive Toolbox. Cambridge, MA: MIT Press.
Gigerenzer, G., Todd, P.M. and the ABC Research Group (1999) Simple Heuristics That Make Us Smart. New York: Oxford University Press.
Kahneman, D., Slovic, P. and Tversky, A. (eds) (1982) Judgment Under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.
Pearl, J. (2000) Causality: Models, Reasoning, and Inference. New York: Cambridge University Press.
Simon, H.A. (1957) Models of Man: Social and Rational (Mathematical Essays on Rational Human Behavior in a Social Setting). New York: John Wiley & Sons.
Tukey, J.W. (1977) Exploratory Data Analysis. Reading, MA: Addison-Wesley.
“It ain't what you don't know that gets you in trouble. It's what you know for sure that just ain't so.”
Mark Twain
This chapter highlights efficient approaches to refining and screening product ideas, helping product developers prioritize and classify insights so that they can plan their activities accordingly.
Chapter 8
Tools to Refine and Screen Product Ideas in New Product Development
8.1 Contemporary Product Research Tools
Michele Foley
Key learnings
✓ The definition of a concept test
✓ The considerations in designing a concept test
✓ The considerations in interpreting a concept test

8.1.1 Introduction
Selecting which new product ideas to develop and commercialize is one of the most important decisions made by innovation teams. How do you determine which ideas are worth pursuing? Introducing a new product is expensive and risky but it is critical to business growth. In most cases, success of a new product cannot be known until after it is in the market. Although there are many factors that determine a product's success, the strength of the original idea in generating incremental growth is one of the most important. Deciding which ideas will be resourced is one of the most critical and most difficult business decisions in product innovation. Khurana and Rosenthal (1998) report that the greatest success comes to organizations that take a holistic process orientation to innovation.
Most businesses evaluate new ideas based on three broad criteria – technical feasibility, potential profitability and consumer appeal. Scanlon (2009), in a Business Week article on innovation, reports that: “For an idea to be considered for development, it has to meet Whirlpool’s three-pronged definition of innovation: it must meet a consumer need in a fresh way; it must have the breadth to become a platform for related products; and it must lift earnings.” For this discussion, we will consider only the consumer appeal – how relevant the idea is to the target consumer and how broad the appeal might be to a larger population. Concept testing is a key element in successful, productive innovation efforts (Cooper, 2008) because it provides a sense of potential and supports the definition of the product. Concept testing is a common practice in market research as indicated by the quantity of research suppliers found in the Green Book, the guide for buyers of marketing research services. In a recent search, 95 companies are listed for concept testing and 76 companies are listed for concept development and testing (http://www.greenbook.org/market-researchfirms.cfm/concept-testing). Product concept testing is an essential tool to ensure relevance and interest in the idea before significant investment is made in its development and commercialization.
8.1.2 What is a concept?
The Product Development and Management Association (PDMA) defines a concept as "a clearly written and possibly visual description of the new product idea that includes its primary features and consumer benefits, combined with a broad understanding of the technology needed" (http://www.pdma.org/ndp_glossary.cfm). Moskowitz et al. (2005) describe two types of concepts: a product concept and a positioning concept. Most concepts comprise both product and positioning: both a description of a product or service and the benefits associated with the product. A concept provides a blueprint for the product. There are many styles of concepts, from simple one-line descriptions of the product to complex descriptions with positioning, graphics and price. The stage of development usually determines how complex or complete the idea is. Early in concept development, simple single-line ideas might be screened to narrow down options. Ideas can come from a variety of sources – competitive intelligence, brainstorming sessions or suggestion boxes. These ideas need to be put into context (consumer insight, benefit or brand positioning) for testing. Then further refinement is used to develop the best positioning or selling orientation. The more complex concepts are used throughout product development to ensure the product design meets the expectations from the concept statement. Concept–product fit testing is discussed in Chapter 9.2.
8.1.3 What is a concept test?
Concept testing is the process of using quantitative and qualitative methods to evaluate consumer response to a product idea prior to the introduction of a product to the market. The PDMA lists three types of tests in its glossary: concept optimization,
concept screening and concept testing. Concept screening involves evaluating potential new product ideas during the discovery phase of a product development project based on their fit with business strategy, technical feasibility, manufacturability and potential for financial success. Optimization is used to select the most appealing options of benefits and features to construct the best concept to present to consumers in a concept test.

When optimizing ideas, determining tradeoffs can be very useful. Conjoint analysis is a technique that allows you to assess the relative importance of individual components of an idea (see Chapter 7.2 for more information about conjoint analysis). This approach is recommended in the optimization phase to determine which elements are more compelling to consumers. It can involve the evaluation of rational or functional benefits, such as "a detergent that removes stains", non-rational or emotional benefits like "lets you be your best self", as well as descriptions of product features ("available in your favorite flavors"), store location ("you can find it in the dairy section") and price. The concept should be optimized to convey the attributes, benefits and desired positioning adequately. Effective communication is important at this point to ensure a good idea is not eliminated because consumers did not understand it.

Concept testing provides insight into market potential for a product by identifying its potential success before investing in the time and resources to develop the product and marketing strategy. It can identify and prioritize ideas to meet new consumer needs as well as ideas to modify or reformulate existing products and services.
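For readers who want to see the mechanics behind such tradeoff estimates, here is a minimal sketch of recovering main-effect "part-worths" from ratings of concept variants. The element wordings are borrowed from the examples above, but the full-factorial design, the ratings and the ordinary least-squares coding are hypothetical simplifications; real conjoint studies (see Chapter 7.2) use purpose-built designs and often choice-based models.

# Minimal sketch: main-effect "part-worths" from ratings of concept variants.
# Elements, levels and ratings are hypothetical, for illustration only.
import itertools
import numpy as np

benefits = ["removes stains", "lets you be your best self"]
features = ["available in your favorite flavors", "found in the dairy section"]
prices   = ["$2.99", "$3.99"]

# Full factorial of concept variants with made-up mean purchase-intent ratings.
variants = list(itertools.product(benefits, features, prices))
ratings = np.array([7.1, 6.2, 6.8, 5.9, 6.0, 5.1, 5.7, 4.8])

# Dummy-code each element (the first level of each factor is the baseline).
X = np.array([[b == benefits[1], f == features[1], p == prices[1], 1.0]
              for b, f, p in variants], dtype=float)

coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
print("effect of benefit level 2 vs 1:", round(coef[0], 2))
print("effect of feature level 2 vs 1:", round(coef[1], 2))
print("effect of price level 2 vs 1:  ", round(coef[2], 2))

The sign and size of each coefficient show how much switching that element changes the predicted rating, which is the kind of tradeoff information used to assemble the strongest final concept.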
8.1.4 Considerations in conducting a concept test
There are three steps in developing concepts:
(1) Screening
(2) Optimizing and refining and
(3) Testing the final concepts, potentially comparing them to other options and the competition.
In this chapter we will discuss pre-screening and final confirmation. Conjoint analysis is discussed in Chapter 7.2 and is the best method for the optimization step.
8.1.4.1
Step 1: Concept screening tests
Concept screenings identify ideas that are sufficiently promising to merit further consideration. Product ideas are presented to consumers typically in verbal form to measure degrees of relevance, purchase intent, believability, uniqueness and similar indicators of product potential. The concepts tested in this step are typically simple product descriptions and may include context (why this idea exists) and/or claims. Sample sizes should be large enough to conduct segmentation analyses to identify potential consumer targets, typically 300–600. In this stage it is possible to screen several ideas with the same consumers. Many organizations also screen ideas through key internal business groups such as sales. With the ease and low cost of testing on the Internet, surveying both
internal and external constituents provides a rich context for decision making. While the consumers evaluate the ideas on purchase intent and uniqueness, the internal teams might evaluate the ideas on perceived relevance to customers or feasibility in the current systems. The method typically includes testing multiple concepts online and some suppliers describe a tournament format in which top-rated ideas are then matched against each other to force winners.

At the screening phase, the intent is to assess business potential and to decide which ideas to take to the next phase for optimization and further refinement. Questions to be answered prior to quantitative testing include: "What are the key areas of interest for these concepts?" and "What is the best way to score each concept to determine a winner?" Uniqueness and its ability to provoke interest measure a concept's novelty factor. Uniqueness reflects to what extent consumers see it as being different from products already in the market. Unique concepts lend themselves to more successful positioning efforts once they are introduced. The extent to which the concept is interesting measures a similar dimension. If consumers perceive the concept as interesting, they are more likely to be attracted to it.

The analysis approach considers the interest or purchase intent scores and uniqueness. Higher potential products are high in both purchase intent and uniqueness. On the other extreme are low purchase intent/low uniqueness rankings. These ideas are not worth pursuing either because there is low interest or there are other offerings in the marketplace. Ideas that are ranked high in purchase intent but low in uniqueness are not worth exploring further because there are probably many competitive offerings. Ideas that are ranked low in purchase interest but high in uniqueness may require further development in the optimization phase to communicate the idea and benefits more clearly.
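The screening logic described above can be written down as a simple quadrant classification. In the sketch below the concept names, top-two-box scores and cutoff values are hypothetical; in practice the action standards would come from category benchmarks rather than the round numbers used here.

# Minimal sketch: classifying screened concepts by purchase intent x uniqueness.
# Concept names, scores and cutoffs are hypothetical illustrations.
concepts = {
    # name: (top-two-box purchase intent %, top-two-box uniqueness %)
    "Concept A": (62, 55),
    "Concept B": (60, 21),
    "Concept C": (28, 58),
    "Concept D": (25, 18),
}

PI_CUTOFF, UNIQUE_CUTOFF = 45, 40   # action standards set from past benchmarks

def screen(purchase_intent, uniqueness):
    high_pi = purchase_intent >= PI_CUTOFF
    high_unique = uniqueness >= UNIQUE_CUTOFF
    if high_pi and high_unique:
        return "high potential - advance to optimization"
    if high_pi and not high_unique:
        return "likely 'me too' - do not pursue"
    if not high_pi and high_unique:
        return "rework the communication, then re-screen"
    return "drop - low interest and low uniqueness"

for name, (pi, unique) in concepts.items():
    print(f"{name}: {screen(pi, unique)}")

The same two-by-two structure reappears in section 8.1.6, where emotional resonance is substituted for uniqueness.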
8.1.4.2
Step 2: Concept optimization tests
Ideas with higher potential from the screening phase enter into optimization where they are further developed. In this stage, qualitative interviews with the target audience can be helpful to explore usage, benefits and positioning. They can be used to understand current solutions and drive differentiation from competitive offerings. In addition to general likes and dislikes, areas to explore include the consumer need and comparison to their current products or method of doing things. Discussion could also include the message and type of communications that would help them decide to buy the product or service. Following qualitative testing and before the concept test, conjoint analysis is recommended to optimize key elements of the positioning concept – benefits (functional and emotional), product description, section of store and price. This will ensure the final concepts are communicated clearly to consumers. The best combination of elements in the conjoint study is then written into the format for concept testing.
8.1.4.3
Step 3: Concept tests
Because the concept test measures the attractiveness of a new product or service, the differentiated benefit provided to the consumer
Figure 8.1.1 Typical concept elements: Introducing … "name and brief description"; It's important to me that … "accepted consumer belief" (graphic); Now, with "name", I get "benefit"; "Name" is "reason to believe"; Available in "store location" in "packaging" for "price" (call to action).
must be clearly communicated. In general, target consumers will each evaluate one concept in a format shown in Figure 8.1.1 with the following five elements:
(1) Headline – summarizes the main idea of the new product.
(2) Accepted consumer belief – expression of a consumer need.
(3) Benefit – what's in it for the consumer.
(4) Reason to believe – the product description, gives credibility to the promise; could include a graphic or image to help describe the idea.
(5) Call to action – where the consumer can buy it, price, etc.
Before revealing concepts to the target consumer, these basic elements of a good concept should be considered:
● Consistent voice; a first-person account can add personal relevance
● Neutral description of what the product experience would be in simple consumer language, not technical or business/marketing lingo
● Clear communication of relevant differentiated benefit
● Brand promise support.
The key question to be answered prior to quantitative testing is: "What criteria will be used to decide that the idea should go forward?" At this stage, most companies use benchmarks based on historical performance to set action standards. In establishing benchmarks, consider an appropriate context depending on the type of innovation under investigation. Different benchmarks should be set based on brand, target consumer or type of innovation (extension versus new-to-the-world). Similar to the screening phase, interest and uniqueness can be used to make the decision. For more developed categories, volumetric assessments are used to determine whether or not the concept should proceed to product development. The following list includes some of the most typical measures:
● Appeal
● Uniqueness
● Believability
● Purchase intent
● Relevance, ability to meet consumer need
● Fit with brand.
Measures such as frequency of purchase and source of volume can be used to calculate potential based on the predicted size of the business. The outcome of the model is either an index of potential success or an estimated volume or earnings in dollars.
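Volumetric models are supplier-specific, but a deliberately simplified sketch can show how these measures feed a volume estimate. Every input below is a hypothetical placeholder; commercial models convert purchase intent to trial with calibrated weights and treat awareness, distribution and repeat as curves over time rather than flat factors.

# Minimal sketch: a simplified first-year volume estimate from concept-test inputs.
# All numbers are hypothetical; commercial volumetric models are far more detailed.
target_households  = 40_000_000  # households in the target population
awareness          = 0.35        # expected year-one aware fraction (marketing plan)
distribution       = 0.60        # expected distribution (share of stores stocking it)
trial_rate         = 0.18        # trial among aware households with access,
                                 # often derived from weighted purchase intent
repeat_rate        = 0.40        # fraction of triers who repeat
repeat_purchases   = 3.0         # average repeat purchases per repeater per year
units_per_purchase = 1.2

trial_households = target_households * awareness * distribution * trial_rate
trial_volume = trial_households * units_per_purchase
repeat_volume = trial_households * repeat_rate * repeat_purchases * units_per_purchase

print(f"estimated year-one volume: {trial_volume + repeat_volume:,.0f} units")

Multiplying the resulting units by price or margin turns the same calculation into the estimated earnings or the index of potential described above.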
8.1.5 Sampling: Who do you test with?
In concept testing, at least two types of respondents are needed to understand the potential of the idea – a general population of potential users and the target consumer. A large sample of the general population is used early in the screening test to assess the breadth of acceptors and to identify sub-groups who are most likely to be adopters. During optimization, the target consumer, either someone with the particular need or who is loyal to the brand, will be most helpful to guide the refinement of the idea. A common practice is to leverage lead users or consumer experts (Cooper, 2008) in the idea creation and optimization stage. At the concept testing phase, when the results provide an initial estimate of the size of the idea, information about the respondents' adoption orientation is relevant. Klink and Athaide (2006) report that purchase intent is an appropriate response for early adopters; a question such as "Would you purchase this product if a friend recommended it?" might be more appropriate for later adopters.
8.1.6 Contemporary measures
Another recent development in concept testing is the use of neurological techniques to determine subconscious emotional resonance to the idea. Combining neurometrics with traditional approaches can provide additional insights into the potential success of concepts. Respondents in neuromarketing studies are exposed to the same stimuli as in more traditional studies. One example is where the concept is created as a radio ad and the consumer listens to the verbalization of the concept (Pradeep, 2010). Responses are recorded continuously, allowing for detecting which of the elements generate the highest and lowest levels of effectiveness.

Similar to looking at the integration of interest and uniqueness in the screening step, replacing uniqueness with emotional resonance provides compelling results. Higher potential products are high in purchase intent and high resonance. On the other extreme are ideas with low purchase intent and low resonance. These ideas have low potential on both fronts. Ideas that are ranked high in purchase intent but low in emotional resonance, just as with uniqueness, are not worth exploring further because they are "me too" ideas. Ideas that require more optimization are ranked low in purchase interest but high in resonance. There is something in the idea that is compelling to the target but may not have broad appeal.
8.1.7 Conclusion: From winning idea to successful product
The process of concept testing is used to select, refine and develop new ideas to increase their success. The newer the idea, the more refinement and development are needed to communicate it clearly. Communication of the idea to the consumer can be challenging early in the idea stage because the developers may not have a way to describe the idea effectively. In addition, who you test with and how you measure success are critical. Using a blend of target consumers to evaluate the idea against the need or emotional resonance, brand loyalists to assess brand fit and a broader group to evaluate the potential size of the market can provide the most effective insights.
The concept needs to be referenced throughout the development process to ensure the product design meets the expectations and promises described in the original idea. Referencing the concept during bench screening of prototypes provides a target to the project team and keeps them focused on key promises rather than their own personal preferences. The evaluation of the product with the concept by consumers validates that the product experience meets the expectations established by the original winning idea.
References
Cooper, R.G. and Edgett, S.J. (2008) "Maximizing Productivity in Product Innovation". Research Technology Management, 51(1), March–April.
Khurana, A. and Rosenthal, S.R. (1998) "Towards Holistic 'Front Ends' in New Product Development". Journal of Product Innovation Management, 15, 57–75.
Klink, R.P. and Athaide, G.A. (2006) "An Illustration of Potential Sources of Concept-Test Error". Journal of Product Innovation Management, 23, 359–370.
Moskowitz, H.R., Poretta, S. and Silcher, M. (2005) Concept Research in Food Product Design and Development. Ames, Iowa: Blackwell Publishing Professional.
Pradeep, A.K. (2010) The Buying Brain: Secrets for Selling to the Unconscious Mind. Hoboken, NJ: John Wiley & Sons.
Product Development & Management Association (2006) "The PDMA Glossary for New Product Development". http://www.pdma.org/npd_glossary.cfm.
Scanlon, J. (2009) "How Whirlpool Puts New Ideas Through the Wringer". Business Week. http://www.businessweek.com/print/innovate/content/aug2009/id2009083_452757.htm.
8.2
Insight Teams: An Arena For Discovery
Stacey Cox
Key learnings
✓ How to utilize consumers differently for rapid development of innovative products
✓ Introduce a new "panel" format for development
✓ Discuss how insight teams differ from traditional tools used throughout the development process
✓ Identify what kind of information can be provided by insight teams

8.2.1 Insight teams for discovery
In this chapter we will focus on the use of a new type of consumer "panel" or team, as I prefer to call them, which has been used at Heinz since 2000 for product development discovery and direction. The idea of an insight team was formed in 2000. At the time, I was starting a product research department for part of the organization. The need for a descriptive panel as well as cost-effective and rapid methods for more consumer feedback earlier in the development process was identified. Given that this was a small department, the need for creative ways to get all the work done was also identified. At times, as well, I would need another set of eyes, ears and brains to help developers screen products against the consumer's expectations, provide feedback, attend meetings, survey the marketplace and understand category dynamics. In collaboration with the Understanding & Insight Group, we evaluated tools I had used in other companies for driving product understanding, laid out the pros and cons, identified resource gaps and created a new tool, the insight team, to assist the mission of my department. We first took a look at traditional descriptive methods. I had spent a number of years as a panel leader for several types of descriptive panels. Descriptive panels provide good objective feedback on products. However, I felt there was a large gap in communication between the
panel's work and the developer, in addition to the resource drain they could place on the organization. We then looked at other tools I had used at previous companies: narrative panels (a descriptive panel trained to be more descriptive about the story of the product) and POP panels (perception and opinion of people) (Moskowitz, 2006). In addition to descriptive panels, we evaluated the pros and cons of typical focus groups, groups lasting from 1.5 to 2 hours with developers trying to understand what product design parameters were of interest to the consumer. The main hurdle with this process was that several clients indicated they were not getting enough useful direction to design more successful products. The traditional focus group was an insufficient tool to really understand what that consumer expected from the promise of the concept and would truly purchase repeatedly. In an attempt to meet the client's needs, I introduced iterative consumer groups, focus groups where the consumer would return over several days, weeks or months to provide more feedback to the developer as they proceeded through the development process, later titled a guidance panel.
8.2.2 Definition of an insight team
It appears that the term "panel" has many definitions in the consumer product goods industry. A basic definition is a group of consumers who are recruited and screened for participation in research. A panelist, according to Lawless and Heymann (1998), "connotes a participant as a member of a group that is often tested on more than one occasion". Participants on panels are recruited from some sort of source, most often a database of pre-recruited consumers, who typically have been through some sort of screening process where profile information has been gathered (Resurreccion, 1998). The product research (sensory evaluation) field began using panels for research with the creation of trained or expert panels for descriptive analysis, that is, the QDA (quantitative descriptive analysis) method (Stone et al., 1974) and the Spectrum method (Meilgaard et al., 1987). Consumer panels can be divided into two types: those that are focused on answering a product or package design question and those that are set up for a marketing-focused question. The insight team is an objectively trained "consumer panel": a group of consumers who have been recruited and trained to provide iterative and objective, qualitative or quantitative feedback on anything from ideas and concepts to product/package designs. Over the years, the original name, insight panel, became the insight team. The tool functions as more than just a group of participants recruited for a task and providing "one-way" answers; it is a group of consumers trained to function as one team, working together to drive not only development but the business forward. The insight team has been utilized to answer both types of questions, both business and technical. The team has been utilized to ground the business in where the category is going, provide understanding of our competitors' business models, provide language for concepts, provide product and package direction and design feedback, provide preparation insights and screen suppliers. The insight team at Heinz is a group of 10–12 consumers who have been classically trained in the Spectrum method (Meilgaard, 2007), so they function
somewhat as a descriptive panel, in terms of evaluating products with attributes, aligning on ratings using the 15-point universal scale and using references to guide the conversation. But this is where the similarity to a traditional descriptive panel ends. As mentioned previously, I needed this team not only to provide the traditional descriptive output my cross-functional partners needed, but also to act as researchers and panel leaders at the same time. So the group was additionally trained in various aspects of team behavior, leadership, some business acumen and research that would be necessary for them to succeed as a team (or a department of one) as well as individually.
8.2.3 When to apply the skills of an insight team
An insight team can be used anywhere in the development process (Figure 8.2.1). At the beginning stage of the new product development (NPD) process, business assessment and/or ideation, an insight team can be used to understand what is occurring in the marketplace. Heinz has used them to identify white space or gaps in product design in the category prior to further ideation or concept work. An insight team can be used in the concept development and qualification process, or stage 2. Heinz has used the insight team to evaluate initial concepts and to refine wording in the product description. In stage 3, or product qualification, an insight team can be used in all sorts of ways. We have used them to screen prototypes prior to quantitative consumer work; review the category to determine potential product and package drivers prior to qualitative consumer work; support the work of the developer or researcher with product purchase; lay out the potential consumer conversations for qualitative work; recommend stimuli for research; and to prepare as well as develop language for conjoint analysis. Once the product is launched, the insight team can also be used for documenting the profile of the launched product, monitoring it over shelf-life or anything else the cross-functional team might need throughout the process, such as data mining or report generation.
Figure 8.2.1 Utilization of insight teams in the development process (Stages 0–1: understand consumer needs in store and at home, identify consumer insights, understand category engagement, identify concept options, identify the idea; Stage 2: define product and package requirements, understand consumer trade-offs; Stages 3–4: screen initial product and package options, validate and confirm the final business proposition; Stage 5: monitor for consistency; opportunities for insight teams exist throughout).
8.2.4
Implementing insight teams for development
8.2.4.1
Insight team composition
The key to a successful insight team is having consumers who:
(1) Love categories the company has businesses in
(2) Have background leading others, especially in teams if possible
(3) Have creative and innovative minds
(4) Are diverse in backgrounds and approaches to life.
So what do I mean by this? When embarking on an insight team, I would first take a hard look at your current descriptive panel and determine if they could do the work you need them to do. Can they move to the next level of understanding? Can they begin to integrate the business, development process and marketplace using their objective training as a basis? I would also take a hard look at you and your organization. Are you willing to let go of some of the more traditional approaches to consumer panel or descriptive panel usage? Are you willing to let the panel see all the products branded? Are you willing to let them talk to the business units without you present? Are you willing to share all you can about the business with them? Then look at the organization: are your developers willing to let others guide them in product design? Are they willing to give up quantitative data (the numbers!) and make decisions based only on words? Can you give up shelf-life if needed? If the answer is "no" to any or all of these, the insight team may not be a good fit for you and your business.
8.2.4.2
Assessing your current panels for potential insight team
If you have decided to give it a try, then at the start of the recruiting process for the insight team, I would first look at the current members of your descriptive panel or an ongoing consumer panel you may have. I would begin by evaluating their styles of communication, personality styles and learning styles. We have the team participate in the Myers-Briggs test (Myers et al., 1998), as well as communication and learning style assessments. We have, at times, had the team participate in the team assessment from The Five Dysfunctions of a Team by Patrick Lencioni (2002). Through these tools, we determine where the gaps in our team are and what skill sets may be missing. We also talk to the team about their needs and the gaps they see in the team's skills, styles or experiences that would be of value to a successful team. Once the assessment is complete, a job description is written (see example below) and potential team members are recruited.
8.2.4.3
Recruiting and screening an insight team member
My recommendation is not to use employees, if at all possible, for an insight team. This team needs to remain objective and needs to have the time available to do anything that is needed from them. Their job, in my mind, is to help the business with resources, not be a drain on them.
At Heinz, we work through a temporary contractor firm to recruit consumers from various avenues of life. Once candidates are recruited, the firm does an initial prescreening. The initial prescreening consists of questions relating to:
(1) Their ability to provide objective evaluations, such as no dentures, no tongue or nose piercings
(2) Their ability to conduct work on a computer
(3) A certain level of Word, Excel and PowerPoint knowledge and
(4) An ability to think and articulate creatively.
Figure 8.2.2 and Figure 8.2.3 outline some of the skills and attributes we look for when recruiting new members to our insight team. Box 8.2.1 is also an example of the job description for the insight team that is used in recruiting. We recruit from both the public as well as family members/friends of Heinz employees. Once pre-screened, the first of three sessions is set up on site with our current insight team leading the session. For this session, a basic sensory capabilities
Loves exploring, creating and learning about the category
Has some business knowledge (this is a plus)
Conducts himself or herself professionally
Can work in the "grey areas"
Can deal with change and can move rapidly from one area of the job to another – flexibility and adaptability are key
Is interested in assisting the company in developing innovative cutting edge products
Is interested in a long-term part-time temporary position that has flexibility for additional hours if available
Wants to work with a group that continually grows and learns
Creativity is valuable: art, music, poetry, crafts, people-related things
Demonstrates a balance between listening and leading
Has basic writing, verbal and listening skills
Is able to follow instructions well
Computer skills are a plus: PowerPoint basics a must; Word and Excel nice to have
Has their own transportation for shopping/restaurant excursions
Is willing to work additional hours in evening and weekends if needed
Figure 8.2.2 An insight team member should possess the above 15 desired skills.
Mature, well rounded individual with an open mind
Someone who can think "outside the box"
Good leadership and organizational skills
Good personality and able to work in a group atmosphere
Has patience and will take this job seriously
Flexible, critical thinker, and can take constructive criticism
Has excellent communication skills and is creative and articulate
Follows directions and is detail oriented
Not afraid to consume anything (food-wise)
Adventurous
Open to challenges
Figure 8.2.3 An insight team member should possess the above 11 desired attributes.
8
screening (taste, smell, visual, tactile and auditory acuity) is conducted; potential members are brought in for approximately 45 minutes. The insight team members provide a brief introduction and then the recruits are asked to complete several tasks. In the introduction, we cover the purpose of the job, its roles and responsibilities, how the team interacts with the business and development cycles and the plans for the rest of the screening and training. The recruits then go through triangle tests for basic taste thresholds, taste intensity ranking, aroma identification (of various herbs and spices), visual ranking of color intensity, and a visual exercise to determine their ability to estimate magnitudes. Participants have to pass at least four of the activities to move to day 2.

If recruits have made it to day 2, they are given an assignment for the next day to bring in a picture or two (or object) that describes a topic to them without, in our case, using food. For instance, they may be asked to bring in items that describe "genuine" without using food. They are asked to speak on their topic for five minutes in front of the group the next day. In the day 2 session, 8–10 participants are gathered in a group with the current team members watching or leading the discussion. The facilitator of the discussion reviews the day before and gives a little more information about the ins and outs of the insight team, its processes, project types, organizational structure, etc. The recruits are then asked to go around and share about themselves, the items they brought in and what the topic means to them. The purpose of the group is two-fold:
(1) To see how articulate and creative they can be on short notice with an "out of the box" assignment and
(2) To see how they interact in a group setting: do they try to take over, do they only listen without adding, or do they like to engage in the conversation?
Box 8.2.1 An example of the job description for the insight team that is used in recruiting at Heinz
The H.J. Heinz Company is currently recruiting persons interested in providing Heinz exceptional product insight through various product evaluations. This exciting job entails the selection, purchase, preparation, and evaluation of a wide variety of food products, not limited to the Heinz products. The most critical piece of this position is the product evaluation. Products are evaluated daily, individually and as a group. These evaluations are used to guide product development efforts at Heinz. A person in this job has many opportunities for interaction with various members of the business, especially the product developers.
A person applying to this job should be interested in a long-term, part-time commitment to the organization. The person should thoroughly enjoy working with others in a group setting to accomplish small to very large job assignments. The person should be open-minded and have a desire to listen to and share others' ideas and opinions. The job requires a high degree of critical thinking, creativity, flexibility, self-motivation and a drive to get the job done. Leaders will truly succeed in this type of work environment. The person should be willing to try new products and ideas on a daily basis.
Other job requirements:
● Flexibility to complete additional job assignments outside of the 9 am to 1 pm work schedule: may include work in the evening or on the weekends from your home
● Ability to communicate through a wide variety of methods: writing, visual, talking, listening, presentation
● Organizational and time management skills
● Basic computer skills – Microsoft Office including: Word, Excel and PowerPoint
● Transportation: Ability to travel to and from work, grocery stores, restaurants, etc.
● Loves to have fun!
8
Those recruits that pass the group discussion session are then asked back for a shadowing day. One or two recruits are asked to shadow one of the team members for the day. The team spends the day working on a current project and the recruits work right along with them. The recruits are taken to the store to understand how the team shops, prepares and even evaluates products. Through this process, both the recruits and team members can determine who would be a good fit and the recruits can determine if they want to be on the team. The team is also able to provide the recruits even more insight into the team’s
roles and responsibilities in the development process. After day 3, the team determines the members with a final sign off from the manager.
8.2.4.4
Implementing the insight team process
The insight team follows a comprehensive process, utilizing many different techniques to accomplish the required results. The training takes the team through four different sections:
(1) Team roles and responsibilities
(2) Parts of the process and tools
(3) Language (attributes) and
(4) Teaming skills.
For the training, we have been able to get new members contributing to business results within the first week of training. With some teams, due to the business needs and project timelines, we have been able to modify the training and have the new members acting as full contributing team members in three weeks.
8.2.4.5
Insight team jobs
There are eight jobs within the insight team (Figure 8.2.4). The person who leads the process for the project is the process facilitator. Instead of having an outside panel leader or facilitator like many descriptive and consumer panels, the team rotates through the process facilitator position by project. The facilitator's job is to ensure the project makes its deadlines, be the point person for communication of issues with upper management and be the key contact between the team and the developer and/or requestor throughout the process. The other jobs include recorder/mapper, who has responsibility for note taking and documenting all conversations within the team throughout the projects; shopper, who is responsible for coordinating or conducting all shopping needed for the project; data input, who inputs all notes, data, graphs and generates the initial report for review by the team; editor, who is responsible for editing the report either individually or leading the team through the process; reference preparation, who ensures all references for evaluations are available and prepped; product preparation, who ensures all products needed for evaluation are available and prepped; and archivist/photographer, who takes all photos needed for the report as well as ensuring all documents used during the project and the report are filed appropriately for future reference. The team rotates through all jobs for each project.
8
8.2.4.6
Steps in the insight team processes
The process for the insight team contains eight phases (Figure 8.2.5). The first phase, project initiation and planning, provides the team with an opportunity to talk to the client, to understand their objectives and desired outcomes, and to plan how they may approach the project to get the best results and meet the timeline. This phase contains the following steps:
Figure 8.2.4 Insight team roles and responsibilities: process facilitator, recorder/mapper, shopper, data input, editor, reference preparation, product preparation, archivist/photography.
Figure 8.2.5 Phases of the insight team process: Phase I, project initiation and planning; Phase II, experiential stories (history and experiences); Phase III, the marketplace; Phase IV, the big picture; Phase V, the details; Phase VI, report assembly; Phase VII, presentation of the findings; Phase VIII, team debrief.
● ● ● ●
Requestor initiates project with insight team manager to get on the schedule Manager reviews initial test request with the team Research proposal is developed, clarified and agreed upon Process facilitator (with team if desired) develops a tentative agenda for all phases for team evaluation and alignment
●
● ●
Jobs are assigned based on the job grid and job coordinators develop phase agendas for their jobs Phase agenda is reviewed client and finalized by the team Data input creates folder for project information and inputs and files proposal electronically.
Phase 2, the experiential stories, is an opportunity for the team to become grounded on what is already known about the topic. The team spends time discussing past experiences they or their friends and family have had related to the project topic. They spend time researching the topic on the Internet, books, journals, etc. This phase allows the team to refine their project direction and generate hypothesis when exploring the marketplace in phase 3. Phase steps include: ●
●
●
8
Team conducts storytelling session on previous projects, stories and experiences in the category Map of past experiences is developed and refined, providing context for the rest of the process Mapping software used to visually represent the experiences and refine throughout the project.
Phase 3, the marketplace, is designed to get the team grounded in what is occurring in the marketplace that may have impact on the outcome of the project. Sometimes, the project stops here with the team reviewing the marketplace and sharing findings. For instance, the insight team has had several projects where the client needed to understand how food service establishments menu and serve various products. The team went out and evaluated products in over one hundred restaurants and was able to provide a comprehensive understanding of product design across all types of establishments. For the marketplace phase, we spend time teaching the team how to shop differently, how to not only pick up what they need but to see what else is going on in the categories they are familiar with, plus discovering what is happening in other non-company related categories. How are consumers making decisions with all the product designs available to them? We have a shopping pyramid we use for teaching this marketplace phase of the project (Figure 8.2.6). This phase includes the following: ●
●
● ● ●
Team discusses how “deep” to go in the marketplace using aha’s from stories from phase 2 Team refines what is needed for purchase and where the shopping is going to take place Shopper coordinates shopping trips throughout the process All products purchased are documented Marketplace experience is shared and informs the additional phases in the process.
Phase 4, big picture is what I consider a rapid, at times “30,000 foot” review of the products. During this phase, the team becomes well versed in the world of this product experience or the category being reviewed. The team will often spend
Basic observations Other products: new products Supplies
References
Ingredients
Competition: direct and indirect The product to be evaluated
Figure 8.2.6 The shopping pyramid.
time “powertasting” (Moskowitz, 2006) all the products to develop evaluation procedures and ballots, determine references that may be needed, etc. Steps include: ●
● ● ● ● ●
● ● ● ●
●
Team develops a plan, procedures and references (if needed) to be used for rapid product evaluations Team discusses what will be evaluated in the powertasting Product prep coordinates preparation of products Evaluation of the product during preparation is done by the preparation team Photographs are taken of the products Team evaluates all products and identifies key findings and determines if a detailed evaluation is needed Adjustments to the phase agenda may be made Overall category experience is documented electronically Process facilitator leads the discussions If a detailed evaluation is to occur, a ballot is developed including definitions and references Team decides if additional training is needed on specific terms.
8
In phase 5, the details, the team focuses on more detailed evaluations of the product. In this phase, the traditional descriptive scales, references and attributes are utilized in the process. Events which occur in this phase are: ● ●
Beckley_c08.indd 259
Team determines which (if any) key references will be needed for this phase Reference person coordinates shopping list and preparation of references as needed
Title page Background Executive summary Key findings: experiential stories / maps Key findings: the marketplace experience Key findings: product information / maps Appendix
Figure 8.2.7 Insight team report format.
● ●
● ● ● ●
8
●
Reference person refines and documents preparation procedures electronically Product preparation obtains and documents products including preparation procedures electronically Team evaluates the products in context individually Process leader facilitates product evaluation discussion Recorder/mapper documents product discussion Team reviews and refines product evaluations prior to input Data input documents evaluations electronically (including maps/graphing).
During the last three phases of the process, phases 6, 7 and 8: report assembly, presentation of findings and team debrief, the insight team creates a report which contains: ● ● ● ●
Project background Executive summary Key findings for the project and Recommended next steps (Figure 8.2.7).
The insight team then schedules the presentation with the client and other cross-functional team members. The insight team will then present the report often with samples of products they evaluated or recommend products they have created based on key category drivers. It is during the presentation that one can often see the true leadership and innovation of all of the members of the team. The presentations are extremely engaging and insightful for the cross-functional teams. After the presentation, the insight team is then tasked with debriefing the outcome of the project. They review what worked well throughout not only the presentation, but the project in its entirety, how the team dynamics worked and how the team would like to change the process for the future. Key steps in these three final phases include:
● ● ● ●
● ●
8.2.4.7
261
Data input completes electronic documentation Editor coordinates the review of all electronic documentation Process facilitator coordinates team review of the final documentation Team prepares and gives the presentation to the requestor and crossfunctional team Team debriefs on what went well and what could be better Final report and supporting documents filed electronically.
Maintaining the insight team
The key to maintaining a successful team is balance, a balance between creating a feeling of investment in the business among the team while ensuring they remain true to their job of providing objective, insightful feedback. Avenues I have found to keep this balance are two-fold: (1) Including them in as much of the business situation as possible and (2) Ensuring they are a strong functioning team. Motivating people to do their best, remain focused on the task at hand and stay on the team long term is a challenge. This topic has been debated over the years and can be seen still in online chats, at conferences and in training courses. Some people will recommend motivating team members through activities outside of work time. We have what we call a “red light day”, a day to stop what the team is doing and engage in a more fun, light-hearted, team-building activity, such as visiting a museum or going out to breakfast. We try to have them once every other month if possible. Another motivator people have tried is monetary or a similar type of incentive. We do provide those as well, yearly or depending on the success. We have had attendance incentives, thank you incentives and other monetary incentives over the years. There are many, many potential motivators; however, the motivator that I have found most useful is increasing the level of engagement of the team in the business and its strategies. I believe it is almost demotivating if the panel feels they just come in, do their job and don’t really get to see the impact they are having on the business or products that are going to market. They want to feel they are a critical part of what makes the business successful. They want to know their input is of value and is an integral part of what the business teams are using to make important strategic decisions. But be careful! With the opening up of the communications lines with the business, I have encountered the teams displaying characteristics of over estimation of worth and entitlement when provided too much access to the daily business ins and outs. I have come across feelings of: wanting to have involvement in all decisions being made, wanting to be able to have more influence on the direction of the business, and wanting to know more in areas that are not really integral to complete their work. I have seen teams get more caught up in the business or social aspect of the team and move away from the process and what their role is in the organization. For a strong functioning team, I believe it is important that all team members, especially perspective recruits understand that this job position is “the one job you will have in your life time where your performance as an individual doesn’t matter as compared to that of the team. If the team succeeds then you all
Beckley_c08.indd 261
8
2/7/2012 8:39:12 PM
262
Product Innovation Toolbox
succeed, if the team fails, then you all fail.” One of the ways we keep the team a strong one is through constant review of the teaming and individual skills training. The skills we review are: ● ● ● ● ●
Communication, personality and learning styles Listening and hearing Teaming and coaching Problem solving and critical thinking Innovation and creativity.
We use a multitude of resources throughout the team-building process. We have used such books as: ● ● ● ●
8.2.5
The Art of Possibility by Rosamund Stone Zander and Benjamin Zander A Theory of Shopping by Daniel Miller The Springboard by Stephen Denning Coaching for Commitment by Dennis Kinlaw.
How to use the insight team We have been able to utilize the insight team in so many different ways at Heinz to support the business. This list shows just a few examples of our use of this creative team in guiding business decisions and providing more insights into the consumer interaction with a product, package or category:
8
●
● ●
● ● ● ● ● ● ●
● ●
● ●
Reviewing various categories to determine potential product drivers and nuances Being trained observers in the marketplace or with consumer behavior Creating recipes with existing products and new ingredients for white space opportunities Creating the ideal prototype Developing names for new products based on flavor perception Confirming products cover a testable range in a design of experiments Providing concept wording Perceptual mapping of the category and prototypes Package design evaluation and recommendations Reviewing consumer data such as concept tests, central location tests and segmentations to provide data interpretation support to the department Collecting and developing language for conjoint studies Supporting developer needs, such as checking product design at various cook times and ball park drain weights Traveling to customers to present insight team information Purchasing products and setting up category reviews (without evaluations) for developers.
The insight team can really be used to accomplish many different tasks for a research organization. However, I would not use their work to:
(1) Replace consumer acceptance or preference ratings from the target consumer (2) Make any business decisions based on what they like or don’t like or (3) Make any type of volume forecasting prediction based on their findings.
8.2.6
Case study of using the insight team One of the wins we have had when using the insight team was on a product that was not doing well in the marketplace; however, the category was growing. The developer needed to improve the product, but was unsure where to start. The insight team reviewed the category utilizing the entire process. They “powertasted” (Moskowitz, 2006) all of the category, conducted drain weights, analyzed potential ingredients from various marketplace sources and created an ideal prototype that they felt delivered against the concept and its product promise and should beat the competition. The developer recreated the ideal formula utilizing ingredients in the Heinz system plus nine other options and tested them against the current product and lead competitor among the target consumer. The prototype that was created based on the insight team’s ideal received: 73% top 2 box purchase intent, 7.4 in overall liking, and a 67% top 2 box concept/product fit, the highest of all the products evaluated.
8.2.7
The future of insight teams In summary, I believe there will be many more types of opportunities for utilizing groups like the insight team to drive the business. More often than not, resources are limited, yet better insights and bigger innovations are still needed. This team can become an integral arm of the research organization increasing the reach of current resources, delivering true innovation, while utilizing a costeffective approach. This team is able to function as objective researchers, descriptive experts and insight gatherers for any consumer and product/ package interaction. The trend of utilizing creative, highly articulate consumers for more impactful, insightful research has flourished in many organizations over the past ten to fifteen years. In fact, some say consumers today, according to Simon Chadwick, are even expecting to be more involved in the co-creation of products, communications and brands (Chadwick, 2010). I can foresee in the future an increased opportunity for insight teams to interact even more closely with consumers to design and interpret. It is possible that target consumers could engage with the team in a process where consumers and the team are side by side creating a product, package or idea. The idea of a highly trained group of people with the perspective of objectivity and detail versus subjectivity and experience could be a powerhouse in developing relevant ideas rapidly. The future of teams like the insight team will require organizations to look hard at traditional approaches and ask: “Can they do more to drive the business than just provide descriptive profiles or point in time feedback? Can these creative and innovative minds be used to drive the organization faster through the development process and get to a better end result? Is it possible that these team
members can become even more involved in the creation of the development process?" Insight teams should become an even stronger partner with the business team, specifically the development team, leveraging its tools and working side by side on the bench to create products that will sustain in the market.
References
Chadwick, S. (2010) "The Connectivity Revolution". Research World, No. 23, September. ESOMAR magazine for marketing intelligence & decision making.
Denning, S. (2001) The Springboard: How Storytelling Ignites Action in Knowledge-Era Organizations. Woburn, MA: Butterworth-Heinemann.
Kinlaw, D.C. (1999) Coaching for Commitment. San Francisco, CA: Jossey-Bass/Pfeiffer.
Lawless, H. and Heymann, H. (1998) Sensory Evaluation of Food: Principles and Practices. Norwell, MA: Kluwer Academic/Plenum Publishers.
Lencioni, P. (2002) The Five Dysfunctions of a Team. San Francisco, CA: Jossey-Bass.
Meilgaard, M.C., Civille, G.V. and Carr, B.T. (2007) Sensory Evaluation Techniques. Boca Raton, FL: CRC Press, Taylor & Francis Group.
Miller, D. (1998) A Theory of Shopping. Ithaca, NY: Cornell University Press.
Moskowitz, H., Beckley, J. and Resurreccion, A. (2006) Sensory and Consumer Research in Food Product Design and Development. Ames, IA: IFT Press and Blackwell Publishing Professional.
Myers, I.B., McCaulley, M.H., Quenk, N.L. and Hammer, A.L. (1998) MBTI Manual (A Guide to the Development and Use of the Myers Briggs Type Indicator) (3rd edition). Mountain View, CA: Consulting Psychologists Press.
Resurreccion, A. (1998) Consumer Sensory Testing for Product Development. Gaithersburg, MD: Aspen Publishers, Inc.
Stone, H., Sidel, J., Oliver, S., Woolsey, A. and Singleton, R.C. (1974) "Sensory Evaluation by Quantitative Descriptive Analysis". Food Technology, 28 (11), 24–34.
Zander, R.S. and Zander, B. (2002) The Art of Possibility. New York: Penguin Books.
8.3 Consumer Advisory Boards: Incorporating Consumers Into Your Product Development Team

Leah Gruenig

Key learnings
✓ Building a consumer advisory board (CAB)
✓ How to set up, execute and report outcomes
✓ Applications and ideas to expand the use of CABs

8.3.1 Introduction

Often as sensory professionals and product researchers we find ourselves in situations where we don't understand the product experience a consumer desires or expects from a new product idea, or why an existing product is not performing in the marketplace as expected. We need to iterate quickly to get as many prototypes as possible in front of potential consumers to start developing hypotheses on drivers of liking, often under the constraints of short project timelines. The use of a consumer advisory board (CAB) is a qualitative method that can be used in these situations. This section will provide you with information to set up your own CAB and the appropriate uses of the information you generate using this method.

Qualitative methods have been used in sensory research for several years and their usage is expanding with the realization that these methods, in combination with quantitative methods, can build a depth of knowledge for a project team that cannot be achieved by using only quantitative techniques. Chambers and Smith (1991) list three main areas where qualitative research techniques are needed in the early stages of product development:
(1) Which attributes consumers believe are important or relevant in the products they have tested and how those attributes might relate to "liking" or "preference" of the product or product category
(2) What the various words used to describe attributes may mean to consumers
(3) How quantitative questionnaires can be designed for ease of use and reduction of misunderstanding.

Casey and Krueger (1994) list the following as essential for a successful focus group: careful designs, well thought-out questions, careful recruiting, skillful moderating and appropriate analysis. Researchers are often conflicted about using qualitative research since there are no statistics associated with the outcome, and historically facilitators have placed the following notation on all finished reports: "This is qualitative research and as such, the findings cannot be considered conclusive or projected to the wider population". However, if qualitative research is used appropriately, it can gather insights not attainable through quantitative techniques. Taken from Goldman and McDonald (1987): "Qualitative research addresses the nature or structure of attitudes and motivations rather than their frequency and distribution … the underlying goal of qualitative investigations is always the same: to explore in depth the feelings and beliefs people hold, and to learn how these feelings shape overt behavior."

In addition to Goldman and McDonald (1987) and Casey and Krueger (1994) already cited, another worthwhile reference is Stewart et al. (2007). If you have limited or no experience using qualitative techniques or focus groups specifically, I would recommend reading Lawless and Heymann (1998) as a good general overview of appropriate uses and expectations for these techniques and a listing of additional references for further reading.
8.3.2 Conducting consumer advisory boards

The opinions below are my own, not those of General Mills or other colleagues. The following steps are from my experience as a sensory scientist and lead to what I believe to be better insights.

A consumer advisory board (CAB) is a group of 8–10 consumers who represent your specific target market for a new product idea or an existing product in your portfolio. You may choose to have two panels for one project if each panel can represent a different target consumer for your new product idea. Consumer advisory boards are different from typical focus groups. A CAB should be treated as part of your product development team by sharing with it as much history and concept detail as possible. The product development team is required to sit in the room with the CAB during sessions instead of behind a one-way mirror. This builds the environment of one team (consumers and product development) working together. Consumer advisory boards are often iterative and occur over multiple sessions to allow product development to react to consumer feedback and refine the product attributes.

Consumer advisory boards are qualitative panels, and should be designed and conducted as discussed in the introduction of this section. As with any qualitative technique, CABs are most powerful when used in conjunction with quantitative methods for hypothesis confirmation. Any CAB panel should only
be used for a maximum of 6–8 sessions. After this time, the CAB members often lose their consumer perspective and start extrapolating to other potential users instead of their own preferences. The steps to building a consumer advisory board are as follows.
8.3.2.1 Step 1: Define the facilitator role
You need to decide if you will moderate the CAB yourself or if you will hire a professional facilitator to lead the sessions. With either choice, you should ensure that the facilitator has skills in basic moderation techniques in addition to experience in conducting qualitative research with product. Stewart et al. (2007) and Goldman and McDonald (1987) both give recommendations and guidelines defining a good facilitator. If you will be hiring a facilitator for your CAB, you should ask about their level of experience, their training and their basic style for directing questions and conversations with groups they moderate.

You will also play a large role in translating information from, or connecting the facilitator to, your project team. In order for the facilitator to be the most effective at directing conversations during the CAB session, it is essential that they clearly understand your project team's objectives and learning needs. You should coordinate meetings for the objective-setting sessions as well as lead the debrief discussions, bringing viewpoints of both the project team and the facilitator into the conversation for the richest insights. You should also play a liaison role should your project team become frustrated or discouraged with the process, so that they do not wrongly blame the facilitator for typical focus group behavior or hold incorrect assumptions about the type of learning a CAB group will generate.
8.3.2.2 Step 2: Set project objectives with product development

Before you start looking for the right consumers for your CAB, you need to set the project objectives with your product development team. Agreement on the number of prototypes that will be presented to the CAB and the amount of time the research team will need to iterate between sessions is important in setting the session times and days with the CAB panel. A maximum session length of two hours is recommended to maintain the panel's attention and ensure they will remain committed to the panel over all the planned sessions. Often research teams have more questions than can be addressed with a CAB in 6–8 sessions, so you will need to guide your team in identifying questions and prototypes that are most critical to reaching your product development goal.

A short training session with your product development team is also necessary. This training gives your team the confidence and knowledge to interact with the CAB panel while they are sitting in the room during sessions (Figure 8.3.1). Additionally, during this time you should explain the expected types of insights you will generate from qualitative research compared to quantitative research to set appropriate expectations and help set the stage for setting the objectives.
Watch out: Refrain from using a quantitative metric for a qualitative discussion.
1. Don’t discuss the topics among yourselves during the session. The CAB panel can hear you. If you have a question, direct it to the moderator or the CAB panel. 2. If you have a question, be sure to ask it during the session. The moderator is there to keep the discussion on track, but be sure to ask for additional discussion in areas of interest.
DO • Call people by their name • Make eye contact • Ask neutral questions
DON’T • Ask ‘why’ – instead use ‘Tell me more about that’ • Answer questions – gives the impression of authority • Ask two questions at once • Try to force consensus
Figure 8.3.1 Training topics for discussion with the product development team prior to the first CAB session (courtesy General Mills Inc., 2010).
Many product development teams want the members of the CAB panel to use the nine-point hedonic scale to rate their liking of prototypes or use a hand count to understand how many of the panelists preferred each prototype shown that day. This is one request you should not fulfill. You will have limited control of the project team members once they leave that CAB session, and their instinct is to look for numbers instead of insights and intuition. In my experience this preference (based on only eight consumers) was even transformed into percentage data and reported to executives. A better approach is to have the CAB panel use words to describe what they "like" and what they "dislike" or would "change" about a prototype.
8.3.2.3 Step 3: Screen and build the consumer panel
Screening your consumers is critical to having a successful CAB. Ensure that you have the critical demographic and product or category usage questions on your screener, and ensure respondents will be available for the days and times you identified with your research team during the objective setting. Often a product concept or idea is shared with potential panel members to select consumers most interested in the idea the product development team will be working on. Additionally, you may determine that the best approach to use is a psychographic screener from your business team. Whichever approach is taken, the project team should agree on it prior to writing the screener questionnaire.

Often focus group protocol would dictate that you use three independent groups to balance idiosyncrasies among them (Casey and Krueger, 1994). However, with often limited quantities of prototype samples in this early stage of product development, it is most efficient to have one group of 8–12 individuals comprise your CAB. Use one or more articulation questions such as: "Tell me what you've heard about food safety in the news lately." You will evaluate a consumer's response based on their ability to explain beliefs or feelings and
facts relating to the situation. Eliminate panelists who are hesitant to share views or are extremely concise in their response. An example of a screener is provided courtesy of Food Perspectives Inc. as a guide when you create your own (see example in Box 8.3.1).
Box 8.3.1 An example of a screener for CAB (courtesy Food Perspectives Inc., 2010)

Adults – Project Candy Cane CAB Screen-Down
Food Perspectives Inc.
Wednesday, April 7
Recruit 30 for 24 to show

Questions: (When respondents are discontinued, always thank them for their time.)

(1) We would like to talk to you about being a part of our continuing research on granola bars. This opportunity may involve several discussion groups over the next two months. You will be compensated for each discussion group you are invited to attend. Would you be interested in answering a few questions to see if you qualify to participate?
Yes ..... Continue
No ..... Discontinue

(2) Do you or any member of your household work for a food company, an advertising agency, or a market research company?
Yes ..... Discontinue
No ..... Continue

(3) Do you have any food allergies or sensitivities, or any dietary restrictions?
Yes ..... Discontinue
No ..... Continue

(4) Which of the following types of granola bars have you purchased and eaten at least one box of in the past three months?
Special K Bars (any variety) ..... Track
Kashi Bars (any variety) ..... Track
Kellogg's Fiber Plus Bars (any variety) ..... Track
Slim Fast Bars (any variety) ..... Track
South Beach Diet Bars (any variety) ..... Track
General Mills Milk and Cereal Bars (any variety) ..... Track
Curves Bars (any variety) ..... Track
Weight Watchers Bars (any variety) ..... Track
Nutrisystem Nourish Bar (any variety) ..... Track
Fiber One Chewy Bars (any variety) ..... Track
None of the Above ..... Discontinue
(5) Can you please describe your favorite dinner meal? Which foods would it include? Tell me all of the different foods and all of the different flavors in that dinner:
Very articulate ..... Continue to Q6
Somewhat articulate ..... Discontinue
Not articulate ..... Discontinue

(6) You will be in a room with a small group of respondents. Do you feel comfortable sharing your opinions and ideas without a lot of prompting in a setting like this?
Yes ..... Continue to Q7
No ..... Discontinue

(7) There is a possibility that you may be asked to participate in ongoing focus groups. These groups will be held on Tuesdays, starting April 13th through May 25th, will take place in Golden Valley and will last about 60 minutes. The discussion groups will be held at 4:30 and 6:00 on April 13 and then at 4:30 only on the rest of the Tuesdays. If you are invited to attend these sessions you will be compensated $75 after each focus group, plus a $100 bonus will be paid if you attend all the focus groups.
The dates of the focus groups are: April 13, April 20, April 27, May 4, May 11, May 18, and May 25. You will need to be able to attend all of the focus groups. If invited, can you attend all the focus groups?
Yes ..... Continue to Q8
No ..... Discontinue

(8) Would you be willing to come to a 60 minute discussion group on Wednesday, April 7?
Yes ..... Invite to test
No ..... Discontinue

Invite To Test – Screen-Down:

(1) The focus group will take place on Wednesday, April 7, in Golden Valley and will take 1 hour. Please arrive about 15 minutes early as it is important to start on time. Please be aware that you will be re-screened at the test. If your responses do not match the responses you just gave on this phone survey, you will not be allowed to test AND you will not be paid. Also please note you will be required to provide a photo ID during the sign-in process for this taste test. Any photo ID will work, such as: a Drivers License, Passport, Credit Card with photo, School ID, Health Club Card, Work ID, Other ID with photo and respondent name on it. Additionally, if you arrive late and we are unable to rescreen and seat you, you will not be paid.

(2) If you are chosen to participate in the follow-up focus groups, you will be paid $75 for each of the remainder of the groups and if you attend all six follow-up groups you will receive a $100 bonus.
(3) It is important to come to the test as scheduled; however, if you cannot come it is important to call and cancel at least 2 days before your scheduled test so that we can replace your spot. Please call our cancellation line at 763–354–xxxx. Please write this number down.
(4) The focus group is on Wednesday, April 7, in Golden Valley.
(5) We will let you know within a few days following the discussion group if you will need to attend the follow-up groups.
(6) Children are not allowed at the test and we do not have childcare available. Will this be a problem for you? (If yes, do not schedule them for the test.)
(7) We will be requesting that all cellular phones be turned off during the test.
(8) Please refrain from smoking for at least one hour prior to your scheduled time, and please refrain from consuming alcohol for at least two hours prior to your scheduled time.
(9) We also request that you refrain from wearing any perfumes or fragrances to the test.
(10) During this cold and flu season, we ask that you do not attend this test if you are sick or not feeling well on your scheduled test day. Please call our Customer Care line at 763–354–xxxx to cancel. Food Perspectives reserves the right to send home, without payment, individuals who are sick or are showing symptoms of illness.
(11) I will now give you directions. (Be careful giving directions. Make sure they understand them. Insist that they write them down and take the directions with them when they go to the test.) Please do not call the location for directions. If you need, you may always call our main office at 763–354–xxxx.
***** Repeat the Day, Date, and Time to confirm. *****
***** Make sure that they know which day they are scheduled. *****
(12) Remember that you will be re-screened and you will need to provide picture ID when you sign in at the test.
You will need to ensure that CAB participants function as a cohesive group since they will be working together over several weeks. To ensure you have the most effective and efficient panel, recruit double the number of CAB members needed for your final panel and run a “screen down” session to find your final group. This is typically a mini focus group held for 30 minutes to one hour, approximately 1–2 weeks before your first CAB session. Often, your hired facilitator will conduct this session and advise you on potential issues with any consumers that may arise during this screen down session. Look for a potential CAB member’s ability to share the airspace with the other potential panelists by sharing their thoughts and then sitting quietly while others share their views.
You will also need to determine the incentive to pay the CAB members for each session and potentially a bonus amount if they attend all or the majority of the scheduled sessions. You should work with your recruiting agency to determine the most appropriate dollar amount for your area of the country. Often a gift at the end of the session, something that is related to the subject of the project work, is given as a special thank you. An example could be a cookbook, coupons for free product from your business team, or a T-shirt commemorating the project.
8.3.2.4 Step 4: Create the facilitator's guide

Prepare for the first CAB session by creating a facilitator's guide or agenda (see example in Box 8.3.2): this will allow you to stay on time and keep the discussion on target. It can also be used to hold product development team members accountable for maintaining the earlier agreed-to limitations and expectations with the CAB panel. Prepare your facilitator's guide with the product development team and the facilitator so everyone understands the timing of products to be served and when discussion topics need to end so that the agenda can remain on schedule. Each of your remaining CAB sessions can use a similar facilitator's guide format by modifying the topics and eliminating the orientation sections used in the first session.
Box 8.3.2 An example of a CAB moderating guide and agenda (courtesy General Mills Inc., 2010).

CAB – PROJECT CANDY CANE, 01/06/2010
1:05/1:35  Introductions and Orientation
1:35/2:00  Show basic concept for an oven meal; get reactions to four side dishes that would accompany this flavor; probe on expectations (how much time would this take to prepare, the type of preparation methods you would use, would you want to use one appliance to prepare all three components)
2:00/2:30  Chicken preparation: feelings about handling chicken, how to season (rub in seasoning, drizzle on top, etc.), how do they prepare chicken now for their family
2:30/2:45  Prototype 1 – Wild Rice: appearance, likes, dislikes; flavor, likes, dislikes
2:45/2:55  Wrap up, pay respondents, share dates for future sessions and total compensation hand-out
8.3.2.5 Step 5: Conducting the consumer advisory board
In the first session with your new CAB spend at least 20 minutes on introductions with the CAB members and the product development team. Use ice breakers or ask each person to share one thing about themselves. Spend time briefing the new CAB on the project, business and concept background. Sharing as much project information as you can with the CAB panel builds the environment for collaboration. This is also a great time to share "ground rules" with the CAB panel such as: "everyone is expected to speak up" and "everyone should be respectful of other group members' suggestions and comments". "There are no right and no wrong answers in this session" is another great rule to share with the panel.

A quick reminder to the project team immediately prior to the first session on listening skills and appropriate behavior during the CAB session is important. Marlow (1987) lists some great ways your team can listen:

(1) Listen for implications. If respondents are reacting to your idea or product differently than you expected, what are the consequences for your product? Listen also for the level of enthusiasm. If consumers say they like a product but there is no excitement or interest in their voices, I would suspect something unusual is going on in your groups. A grimace, a tone of voice, or a gesture may provide different insights. Don't listen only to the words; watch the body language and tone as well.
(2) Listen quietly to focus groups. Although it is tempting to comment upon every remark, it can be very distracting, so keep a pad in your hand and write down your comments.
(3) Listen objectively. Product development should contain their chagrin or defensiveness when consumers criticize or reject their creations.
The conclusion of your first session is a good time to re-evaluate your group. Do they work together well? Does one person need to be excused from additional sessions due to difficult behavior? Now is the time to modify your group composition before additional sessions. If one member does need to be excused, ask your recruiting agency to handle the communication to the panelist directly in their standard approach. This ensures the future relationship between the excused member and the recruiting agency remains on good terms.

Conduct the remaining sessions with your CAB panel, building on previous session learning where appropriate. Additionally, you can add in other types of work during this time with the CAB panel. Giving the CAB panel product to take home and use for discussion at the next session, preparing a new recipe or participating in a grocery store shopping experience are all examples of homework you can present to your CAB panel for completion outside of the scheduled sessions together. Remember, if you choose to use some of these homework techniques, supply your CAB panelists with any money they will need to purchase products or ingredients to fulfill your request.

Your final CAB session should wrap up any remaining questions for your project. This session can also be used for some celebrating by reserving the last few minutes of the session for recognition of the CAB's accomplishments and giving out any gifts or tokens of thanks you may have assembled.
8.3.2.6 Step 6: Project team debrief
Immediately following each CAB session, debrief with your product development team for at least 30 minutes. This allows you to discuss what you heard immediately, without the bias of memory that comes from trying to recall the discussion the following day or week. This is one of the most critical steps in your CAB process. As a product testing expert, your role will be to help the team translate consumer language into words product developers understand and can use to refine products. Consumers often use language that is not consistent with what they actually desire in a product. In this step you will also be able to eliminate potential variables that don't appear to drive liking for consumers.

One person on the team should be assigned to record the team consensus on outcomes and next steps. This serves as a communication tool for team members who could not be present during the CAB panel session as well as documentation for future product work should a history be needed. The facilitator should be an objective member in the debrief process, clarifying what the CAB panel discussed, which aspects they reached consensus on and which topics the CAB panel was not in agreement on, as well as any deeper thoughts or emotions that arose that may not have been apparent to team members with less qualitative experience.

It is common with qualitative research that individual team members will come to the debrief with different conclusions based on what they heard during the session. This is where you and the facilitator will work together to weave a mixture of insights into a picture for the project team, reflecting the consumers' experiences and conversations. This is a great time to start discussions on what should be shown to the CAB panel during the next session, based on learning from the session just completed. The facilitator's guide for the next CAB session should be finalized and agreed to no less than three to four days prior to the next session, depending on the length of time needed by the product development team to create prototypes and equilibrate before the session.
8.3.3 Case study

Using consumers in the product development process through CABs has been done for several years. The following is one example of how General Mills chose to implement the tool and the outcome.

An idea for a new shelf-stable dinner line was created. A new target consumer was identified and five flavors needed to be developed in less than twelve months. A CAB was determined to be the best approach for quickly identifying drivers of liking in each of the five flavors. The CAB was formed with two groups of eight members. One group comprised empty-nester women who were open to using shelf-stable products. The second group comprised young (20–30-year-old) women, just married, with no children, who were open to using shelf-stable products. We explained our idea for a new line of boxed meal kits that would appeal to people who liked to cook, but didn't want to cook from scratch every night and felt giving their family a good tasting meal was important.
During the second CAB session it was determined that one of the initial flavors would not fit with the concept and thus needed to be dropped from the launch plan. Following the CAB completion, quantitative product guidance research was conducted in a central location test format on a subset of refined prototypes before confirmatory and volumetric testing was completed. The formulas were finalized in less than six months and the product formulas remain on the market today five years later.
8.3.4 Summary

As qualitative tools are used more in practice, we should continue to find new and alternative ways to bring consumers into our product development research processes early and often. Tools that can drive early understanding of undeveloped product categories, uncovering consumers' unmet needs and understanding the preparation and usage of existing marketplace products need to be used in everyday practice. Qualitative methods allow us to build ideas and create insights that are difficult if not impossible to gather through our standardized quantitative testing questionnaires.

As qualitative product development approaches grow in popularity, a caution about misuse of the outcomes should be noted. Often teams with little money and time want a single quick method to get an answer and are tempted to skip more costly quantitative testing. Qualitative methods are best used for hypothesis building and narrowing prototypes early in the new product development process. They are most powerful when combined with quantitative research methods to select final, optimized products for launch.

On the horizon are even more promising product research techniques using social networking sites. Proprietary research is happening now where we can find consumer groups more quickly and work with consumers who have specific concerns or ideas regarding our products. This new area will develop quickly as technology expands and consumers become more comfortable interacting in a virtual space.
References

Casey, M.A. and Krueger, R.A. (1994) "Focus Group Interviewing". In Measurement of Food Preferences. London: Blackie Academic & Professional. pp. 77–96.
Chambers, E. IV and Smith, E. (1991) "The Uses of Qualitative Research in Product Research and Development". In Sensory Science Theory and Applications in Foods. New York: Dekker. pp. 395–412.
Goldman, A.E. and McDonald, S.S. (1987) The Group Depth Interview: Principles and Practice. New York: Prentice-Hall.
Lawless, H.T. and Heymann, H. (1998) "Qualitative Consumer Research Methods". In Sensory Evaluation of Food. New York: Chapman & Hall. pp. 519–547.
Marlow, P. (1987) "Qualitative Research as a Tool for Product Development". Food Technology, 41 (11), 74, 76, 78.
Stewart, D.W., Shamdasani, P.N. and Rook, D.W. (2007) Focus Groups: Theory and Practice (2nd edition). Thousand Oaks, CA: Sage Publications.
8.4 Defining the Product Space and Rapid Product Navigation

Jenny Lewis, Ratapol Teratanavat and Melissa Jeltema

Key learnings
✓ Benefits of the rapid product navigation process
✓ What to consider when designing RPN research
✓ How to develop the qualitative product space (QPS)
✓ How to execute the rapid product navigation process

8.4.1 Listening to understand: Rapid product navigation

What do you do when the marketing department wants to launch a new product or line extension in only a few months, and you have limited resources to do so? That's the situation that Company ABC's product researchers and product developers found themselves in when the marketing department identified an opportunity and wanted to launch a brand Y line extension into the moist smokeless tobacco flavor F segment, which was already dominated by the competitor's product X. This case study will help illustrate the concepts of the rapid product navigation process and how it can be utilized for faster and lower cost product development and optimization.

In times like these, everyone must work under ever-increasing constraints. We are all being asked to do more with less: less money, less time, fewer people. So, how can we most efficiently develop the best product possible? The rapid product navigation method (Lewis et al., 2010) was developed to meet these demands.

There are multiple approaches for developing new products. A product can be developed to match a pre-determined descriptive profile, but this approach is devoid of any consumer feedback. Iterative central location or home-use testing is, by definition, iterative, and may take multiple rounds and a lot of money to
reach an optimum product, if ever. A design of experiments modeling approach may lead to an optimum design that doesn't make sense, especially when there are many interactions between the factors. And when results don't make sense, we don't have any understanding of why the consumers rated the products the way they did. What's missing from all of these approaches is the act of listening to the consumer.

Rapid product navigation (RPN) occurs in a series of qualitative discussion groups. Because of this, RPN is, first and foremost, consumer-driven. By listening to understand, the project team can gain additional insights about the relative positioning of the prototypes, allowing them to make better decisions about product direction, as seen in the example product map (Figure 8.4.1). Rapid product navigation is also, by definition, rapid. Because each product evaluation provides the consumer feedback that drives subsequent product trials, an acceptable product design can be reached quickly.

Figure 8.4.1 Example product map (prototypes A, B and C and a competitor plotted on axes of sweetness and flavor intensity, from low to very high, with an optimum region indicated).

Rapid product navigation is a highly effective, powerful approach. The voice of the consumer is incorporated into all product decisions. Because the product developers are intimately involved with the research, they gain a deeper understanding of consumer wants and receive immediate actionable direction. Compared to current methods, RPN provides better results, faster.
8.4.2 Recommended tools and "how to" implement

8.4.2.1 Step 1: Building foundation through product screening

One of the most important steps in the process is building the foundation for the RPN. Before conducting RPN, three things must be known. First, the starting prototype for the navigation must be known – this could be a new product or an existing one. The product developers must know the design elements (sensory
attributes) that need to be explored and optimized. And finally, there must be some understanding of potential consumer segments. If consumer segments are thought to exist, the business must decide how to move forward: develop the product for only one segment, which meets the product design criteria for one segment, but may alienate another; or develop the product to have the broadest appeal across all segments, which may not be the best product for any of the segments. This information is exactly what you would have coming out of the discovery and scoping stages of the product development process. If you do not know where you need to start, you should first do some preliminary product screening, or scoping. Rapid screening of a broad array of product options can quickly determine which options are most acceptable to the intended audience and if different segments of consumers appear to prefer different product options. A broad array of differentiated prototypes for screening is essential, because casting a wide net will reduce the possibility of missed product opportunities. The method of screening depends on the product category and the number of prototypes to be screened. If the product has a high propensity for carry-over, or if only one product can be evaluated at a time when used as intended, initial screening may need to be based on hedonics around attributes such as aroma or appearance. Research has shown that screening based on aroma is a good surrogate for screening based on taste, because consumers expect products to taste like they smell (Simons et al., 2008). In addition, consumers can evaluate many aromas in a single session before boredom and fatigue set in, so screening by aroma only can expedite the research by very quickly reducing many flavor options down to a few that can be explored more deeply. Also, because the base product itself may have its own aroma, as with products like coffee, tobacco products, etc., aromas should be screened in the presence of the base product. After an initial broad screening based on aroma, the remaining product options must be further narrowed down based on taste. One way to select the best option to optimize is to conduct qualitative discussion groups that incorporate product trial. The objective of these groups is to identify the best product direction to optimize, which product attributes need to be included in the optimization, and any consumer segments that might have different product sensory requirements. While the aroma screening method discussed above works well for smokeless tobacco products, it may not suit all products. Whatever screening method you choose, you should come out of the screening process knowing which product you want to optimize, the product attributes that should be included in the optimization and an understanding of possible consumer segments that may have different sensory preferences.
8.4.2.2 Step 2: Conduct rapid product navigation

The RPN process is a collaborative approach that requires the full participation of both product developers and product researchers in all aspects of the research, including defining the qualitative product space, defining the intended audience for the product, observing the group discussions and deciding which products to present to the groups. The details for each of these areas are discussed below.
Figure 8.4.2 Simple qualitative product space (QPS).
8.4.2.2.1 Area 1: Qualitative product space

During the RPN, the project team will need to be able to easily identify the prototype that should be evaluated next, based on what they hear from the consumers. The way to do this is to translate the technical product design matrix into a sensory product space that is defined by the consumer's language – the qualitative product space (QPS). For example, suppose the optimum balance of overall flavor intensity and sweetness needs to be identified. The QPS would then be the space defined by overall flavor intensity and sweetness (Figure 8.4.2).

Prior to the RPN, the prototypes are arranged within the qualitative product space by the project team. Placement decisions can be made based on input from multiple possible sources: product developers' expert opinions, descriptive panels or prior research. Increasing the number of design elements or the number of levels will quickly increase the number of products in the QPS to an unmanageable level. For some product categories, it may be possible to create a new prototype by blending existing prototypes. If custom blending is not possible for a product category, an additional round of RPN may be required once additional prototypes can be made.
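To make the combinatorics concrete, the sketch below enumerates a small full-factorial product space of the kind a QPS is built from. The factor names and levels are hypothetical illustrations, not the chapter's actual design (the case study later in this section uses two tobacco blends, three flavor-intensity levels, two sweetness levels and two ingredient options, giving 24 prototypes).

```python
# Sketch: enumerating candidate prototypes for a qualitative product space (QPS).
# Factor names and levels are hypothetical.
from itertools import product

design_elements = {
    "flavor_intensity": ["low", "moderate", "high"],
    "sweetness": ["low", "high"],
}

qps = [dict(zip(design_elements, combo)) for combo in product(*design_elements.values())]
print(len(qps), "prototypes in the QPS")          # 3 x 2 = 6
for number, prototype in enumerate(qps, start=1):
    print(f"Prototype {number}: {prototype}")

# Each additional design element multiplies the space; a third two-level element
# already doubles it, which is why the QPS quickly becomes unmanageable.
design_elements["optional_ingredient"] = ["absent", "present"]
print(len(list(product(*design_elements.values()))), "prototypes with a third element")
```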
8.4.2.2.2 Area 2: Participant selection
Participants in the RPN research must be members of the intended audience for the product. Multiple groups of consumers are recruited to participate in RPN discussion groups. If previous product screening or scoping research has been conducted, those same participants may be recruited for the RPN. However, additional new recruits should also be included for a fresh perspective. The number of discussion groups needed is determined based on the number of design elements to be optimized, the number of products that can be evaluated during one group session and the number of possible consumer segments. The number of products per group is based on the normal duration
of product use and on the potential for carry-over. If multiple consumer segments with different product preferences are believed to exist, additional consumers and groups must be recruited. These segments might be based on product usage, such as regular brand or frequency of use. They might also be based on differences in desired product experience – some consumers may desire a stronger flavor experience than others, for example. Groups with 6–8 participants are recommended. Having more than eight consumers in a group is not only more difficult for the moderator to manage, but also limits the depth of discussion because more time must be spent to include all participants in the discussion. Fewer than six per group will require additional groups.

Figure 8.4.3 General discussion flow for rapid product navigation: an introduction (introduce the QPS map by placing their own brand on the map; warm-up sample to eliminate the first-order effect), a stimulus-response phase (product evaluations followed by discussion of liking, key attributes, comparison to other products and improvements needed; place each product in the QPS map) and a summary (rank products and discuss rationale; identify improvement opportunities; complete the QPS map by identifying the "ideal" space).
8.4.2.2.3 Area 3: Discussion flow

The most important thing to know about the RPN discussion groups is that they are actually simultaneous individual assessments. We are listening to understand the individual opinions, not come to a group consensus. The discussion consists of three sections: introduction, stimulus-response (product trial) and summary (Figure 8.4.3).

At the start of the session, perform introductions to let participants get to know each other, and explain the objectives and the process for the discussion. They can get comfortable with the moderator and each other by sharing their opinion of the product they currently use. Introduce the product map by asking the participants to place their own brand on the map. Also, a quick evaluation of a practice sample may be used to familiarize the participants with the process and to minimize the impact of order effects on the subsequent product evaluations.

During the stimulus-response phase of the discussion, ask the participants to try the product and record their opinions on the handout provided before any discussion takes place about that product. For some product categories, this does not take much time. The participants can try the product and evaluate it without much down time. However, some products, such as moist smokeless tobacco (MST), require that the participants use the product for a period of time before doing the evaluation. It is important to give them ample time to think about the product they are evaluating and jot down any notes on their thoughts before beginning the discussion.

Once the evaluation of the first product is complete, start the discussion to understand what they like or dislike about the product. How much do they like
the flavor and how do they describe the flavor? What can we do to improve the product for them? How does the product compare to their own product? Ask the participants to place the prototype on the product map relative to the other products on the map. Repeat the product trial, evaluation, discussion and mapping for each of the products, cleansing the palate between each product. After the evaluations are complete, ask the participants to rank the products from most liked to least liked and discuss the rationale for their decisions. For the highest ranked prototype, discuss what changes still need to be made to make the prototype even better. Complete the product map by asking the participants to define their “ideal” space. The discussion groups may be lengthy in duration to allow for product trial and in-depth discussion about each product. Groups extending beyond three hours should be designed to manage fatigue, boredom and breaks. The number of products presented during the group depends on the product category. For moist smokeless tobacco (MST), for example, only four products are evaluated per session for two reasons. First, the product trial period during the sessions must be long enough for the participants to feel comfortable that they can properly assess the product. Second, tobacco products have high carry-over, so there must be a sufficient waiting period after the product is removed from the mouth before tasting the next product. It is good practice to provide the participants with handouts, so they can rate their overall liking and record their thoughts on likes, dislikes, the product experience or anything else they want to share.
8.4.2.2.4 Area 4: Conducting the discussion groups

During the RPN process, we are listening to the consumers to understand several things. How much consensus is there? Are there groups whose liking is driven by different sensory factors? Are any of the prototypes polarizing? During the RPN discussion groups, the product developers must be present and focused, listening to understand the consumers' reactions to the prototypes, making decisions about the next prototypes to evaluate, and if possible, mixing prototypes on the fly. Without the product developers' full engagement in the initial development of the QPS and in the navigation decisions, the project will not be successful.

There are two approaches to RPN: navigating within a discussion group, or navigating across discussion groups. For a fairly straightforward situation with only two or three design elements being modified, the navigation can occur within each group. The decision of which prototype to present next is driven by the feedback from the previous product. This process can be repeated for multiple groups to build confidence in the results, by coming to consensus on the product design that best meets the product sensory criteria. Because each group, and even each person, navigates to their own final product design, it is possible that this process can result in more than one product design that meets the product sensory criteria. In such a case, both products should be included in the validation test. If consumer segments are thought to exist, and the business wants to develop different products for each segment, the RPN process would need to be conducted separately for each segment to determine the final product design for that segment.

In an elementary example, assume that only two design elements require optimization, sweetness and flavor intensity. Figure 8.4.4 shows a possible navigation path from the baseline product to the product that best meets the product
sensory criteria within a single discussion group. Consumers first evaluate and discuss the initial prototype, identifying the attribute changes needed to make it better, more sweetness for example. The next prototype for evaluation would then be one that offers more sweetness than the initial prototype. The evaluations and discussions continue in this way until the product design that best meets the product sensory criteria is reached.

Figure 8.4.4 Simple two-factor navigation path within a discussion group (prototypes 1–9 on axes of sweetness and flavor intensity; feedback such as "more flavor intensity, too sweet", then "better flavor intensity, not sweet enough", then "right flavor intensity, a little too sweet" steps the group from the starting prototype toward prototype 8).

Figure 8.4.5 Example of a final product map (prototypes 5, 7, 8 and 9 and the competitor on axes of sweetness and flavor intensity, with prototype 8 inside the ideal region identified by the participants).

Figure 8.4.5 shows the final
product map based on the consumer feedback. In this example, prototype 8 was clearly the winner, as it was within the ideal space identified by the participants. However, product research is rarely that simple. When multiple design factors are involved, or when there are complicated interactions between the factors, the navigation must occur across the discussion groups, instead of within each group, because there is not enough time during a single group to evaluate enough prototypes to reach the final product design. Consider the multiple discussion groups as an iterative series of small-scale product trials where the feedback from the consumers in one group drives the focus of the product trials in the next group. In other words, at the conclusion of the first group, the project team determines the products to be presented to the next group based on the feedback from the first group. This decision is best made by the project team as a whole – the product researcher, product developers and the moderator. This adaptive approach allows the researcher to rapidly explore a greater number of factors than could be accomplished using a traditional experimental design methodology. When making the decision about the products to include in the next group, it is important to ensure that product design decisions are based on findings confirmed across multiple groups. For example, if the first group shows that ingredient 1 is better than ingredient 2 because ingredient 2 has an off-taste, then the two ingredients should be evaluated again in a future group, and the findings replicated, before eliminating ingredient 2 from consideration. This repetition across groups also allows the researcher to adjust the presentation orders to reduce the possibility of order effects biasing the results. The method of navigating across groups is illustrated in the case study.
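As a rough illustration of the within-group navigation logic described above, the sketch below steps through a small sweetness-by-flavor grid based on coded feedback. The grid, attribute names and feedback coding are hypothetical; in practice the move to the next prototype is a project-team judgment informed by the discussion, not a mechanical rule.

```python
# Sketch: stepping through a 3 x 3 QPS grid from consumer feedback (illustration only).
# Feedback per attribute is coded as +1 ("needs more"), -1 ("needs less") or 0 ("about right").
LEVELS = ["low", "medium", "high"]
prototypes = {(s, f): f"Prototype {3 * s + f + 1}" for s in range(3) for f in range(3)}

def next_prototype(position, feedback):
    """Move one step in the QPS in the direction the group's feedback indicates."""
    s, f = position
    s = min(max(s + feedback["sweetness"], 0), 2)
    f = min(max(f + feedback["flavor_intensity"], 0), 2)
    return (s, f)

position = (0, 0)  # start at the baseline prototype
session_feedback = [
    {"sweetness": +1, "flavor_intensity": +1},  # "not sweet enough, needs more flavor"
    {"sweetness": 0, "flavor_intensity": +1},   # "sweetness right, still more flavor"
    {"sweetness": 0, "flavor_intensity": 0},    # "about right" -> stop navigating
]
for feedback in session_feedback:
    if all(direction == 0 for direction in feedback.values()):
        break
    position = next_prototype(position, feedback)

print(f"Candidate for validation: {prototypes[position]} "
      f"(sweetness {LEVELS[position[0]]}, flavor intensity {LEVELS[position[1]]})")
```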
8.4.2.3 Step 3: Verify and validate the design

Once a final prototype is selected, the acceptability of the product must be confirmed among members of the intended audience who did not participate in the RPN, ideally in market areas other than the market used during the RPN. The understanding of the key sensory attributes gained during the group discussions can drive the questionnaire development for this stage. However, quantitative consumer tests are expensive, so the sensory performance of the optimized prototype may first be confirmed in a small-scale consumer test. Incorporating in-depth interviews following this test may provide additional information on minor changes needed to the product prior to the quantitative study.
8.4.3 Case study¹

¹ Due to proprietary concerns, actual research data and findings could not be presented in this case study. As a result, the following moist smokeless tobacco case study is fictional, but was devised based on actual research experiences with RPN in order to illustrate the concepts and procedures involved in the RPN process.

Table 8.4.1 Mean aroma liking ratings by cluster (scale: 1 = dislike extremely, 7 = like extremely).

Cluster  n    P1    P2    P3    P4    P5    P6    P7    P8    P9    P10   C1    C2
1        11   4.78  4.33  3.33  5.56  1.89  5.00  5.11  5.11  4.44  5.56  2.33  2.22
2        14   4.77  4.62  4.46  4.00  3.77  4.92  3.92  4.62  3.31  4.08  4.69  5.69
3        11   2.80  4.10  5.20  4.90  1.50  3.60  4.30  3.80  4.40  2.00  4.30  2.90
4         6   3.40  3.80  5.20  2.40  2.40  4.20  3.40  2.80  2.80  6.40  4.20  2.60
5         8   5.29  4.86  6.43  5.86  5.43  6.57  5.86  5.14  4.86  6.71  6.29  3.43

Company ABC's marketing department identified an opportunity and wanted to launch a brand Y line extension into the moist smokeless tobacco flavor F segment. This flavor variety was dominated by the competitor's product X. Because this was
a new segment for the company, some initial product scoping and screening was needed. Product development created ten new prototype flavor variants for the flavor F line extension. However, some screening was required to identify the best flavor direction to take before a product could be developed. Instead of internally selecting which variation was the best to move forward, a central location test (CLT) was conducted among 50 adult MST consumers to evaluate the aroma liking of all ten flavor variants plus two competitive products. Adult consumers were recruited who were users of competitor product X. They were recruited based on willingness and availability to participate in the research rather than for any specific expertise. Samples of MST with the prototype flavors applied were placed in glass jars with screw-top lids, as were samples of the competitor’s MST products. During the test, the participants removed the lid from the jar, sniffed the product, and rated their liking of the aroma on a seven-point hedonic scale (1 = dislike extremely, 7 = like extremely). The presentation order was balanced for order effects and first-order carry-over effects using a 12-product Williams design (Williams, 1949). The participants neutralized the odors in their nose by sniffing their own arms between each sample. Since it was possible that not all of the adult consumers would like the same aromas, potential consumer segments were identified by clustering the consumers based on their aroma liking ratings of the twelve products. A k-means cluster analysis on the consumer/product matrix of mean liking scores indicated there might be five clusters and provided the mean aroma liking scores for each product by cluster (Table 8.4.1). Because of the small sample size, the results of the analysis were treated qualitatively, looking for direction only. Highlighting the two most liked products for each of the clusters showed which products most appealed to each potential consumer segment. This analysis helped to develop hypotheses about possible consumer segments, which were further explored during additional research to select the best option to move forward. To further narrow the product options and select the best product to optimize, four discussion groups with product trial were conducted. The objective was to identify the best product direction to optimize, which product attributes needed to be included in the optimization, and any consumer segments that might have had different product sensory requirements. In the aroma screening, the five potential clusters of consumers appeared to like different products. For this phase, those same consumers were asked to
return to participate in the group discussions. They were assigned to groups based on which cluster they were in – those participants in cluster 1 were in group 1, and so on. Because cluster 4 and cluster 5 were small, they were combined into a single discussion group for efficiency, but the feedback from individuals in both clusters was considered separately. Products 3, 4, 6 and 10 were the highest in aroma liking for the different clusters, and therefore were selected to be evaluated (tasted) and discussed in depth during each of the groups.

Because the feedback on the products was similar across the groups, the project team decided to focus on the best overall option rather than different options for different segments. Products 3 and 6 had issues with the taste not meeting expectations based on the aroma, or with off-taste, and were eliminated from consideration. Products 4 and 10 were both well liked, even though there were opportunities to improve the intensity and balance of the two flavors. Ultimately, product 4 was selected to move forward into the RPN process, because product 10 did not fit the concept of the flavor being developed, even though it was well liked.

Although product 4 was selected as the best starting point to optimize the MST line extension, it still needed improvements to the balance of flavor intensity (ingredient 1) versus sweetness (ingredient 2). The product development team also needed to select the most appropriate tobacco blend that would enhance rather than clash with the flavor and to determine if ingredient 3 was needed for improvements to pinchability and flavor duration. Figure 8.4.6 shows the QPS that was designed for this example. With two tobacco blends, three levels of ingredient 1, two levels of ingredient 2 and two levels of ingredient 3, there were 24 prototypes available. In the QPS, prototype E represents product 4, the product selected in the initial screening.

Because the potential line extension was in a segment dominated by the competitor product X, the objective of the RPN was to meet the product sensory criteria and to beat competitor X among adult consumers of competitor X, the intended audience. Six groups of eight adult competitor X consumers were recruited to participate in the RPN. Because the product screening research showed that product 4 (prototype E) appeared to have broad appeal, all of the groups were recruited from the same audience. Because MST products are typically held in the mouth for an extended period, only four products could be evaluated during a single discussion group. Figure 8.4.7 shows the navigation path across the six discussion groups. The product selection decisions are discussed in detail below.

Group 1 explored the different tobacco blends. Since tobacco blend 2 had an off-taste, tobacco blend 1 appeared to be the best tobacco blend, but this needed to be verified in the next group. In addition, flavor intensity appeared to be too low, and sweetness appeared to be too high. In group 2, the tobacco blend comparison was repeated at a different combination of flavor intensity and sweetness than in the first group, and varying levels of sweetness and flavor intensity were explored. Tobacco blend 2 again presented an off-taste, and because no interactions were expected between blend, the varying levels of sweetness and flavor intensity, and the presence/absence of ingredient 3, tobacco blend 2 was eliminated from further consideration.
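The cluster analysis used in the aroma screening above can be sketched as follows. The code below uses synthetic ratings and scikit-learn's k-means purely for illustration; the screening example clustered 50 consumers' aroma liking ratings of twelve products and then looked at each cluster's two best-liked products, as in Table 8.4.1.

```python
# Sketch (synthetic data): cluster consumers on their aroma liking ratings and
# report each cluster's size and two best-liked products. Requires numpy and scikit-learn.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
products = [f"P{i}" for i in range(1, 11)] + ["C1", "C2"]
ratings = rng.integers(1, 8, size=(50, len(products)))   # 50 consumers, 7-point hedonic scale

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(ratings)

for cluster in range(5):
    members = ratings[kmeans.labels_ == cluster]
    mean_liking = members.mean(axis=0)
    top_two = [products[i] for i in np.argsort(mean_liking)[::-1][:2]]
    print(f"Cluster {cluster + 1}: n = {len(members)}, best-liked products: {top_two}")
```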
Figure 8.4.6 Moist smokeless tobacco line extension qualitative product space (the 24 prototypes arranged by flavor intensity and sweetness, for tobacco blends 1 and 2, with and without ingredient 3).
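As a rough illustration of the qualitative product space above, the sketch below enumerates the 2 x 3 x 2 x 2 = 24 factor combinations. The factor and level names are assumptions for illustration, not the actual formulation variables used in the study.

```python
# Illustrative sketch: enumerate the 24 prototype combinations behind the QPS.
from itertools import product

factors = {
    "tobacco_blend": ["blend 1", "blend 2"],
    "ingredient_1":  ["low", "medium", "high"],   # flavor intensity
    "ingredient_2":  ["low", "high"],             # sweetness
    "ingredient_3":  ["absent", "present"],       # pinchability / flavor duration aid
}

prototypes = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(prototypes))   # 24
print(prototypes[0])     # e.g. the low-flavor, low-sweetness corner of blend 1
```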
For products with tobacco blend 1, a high level of flavor intensity with a low level of sweetness appeared to provide the desired flavor profile. Groups 3–5 confirmed levels of flavor intensity (high) and sweetness (low) and also explored the need for ingredient 3. Since the prototype without ingredient 3 needed improvements to pinchability and flavor duration, it was decided to keep ingredient 3 in the product. The final group discussion was used to verify the need for ingredient 3 and to verify the final prototype’s improved performance over the baseline prototype and competitor X. Prototype B with tobacco blend 1, high flavor intensity, low sweetness and ingredient 3 present was recommended for validation in a quantitative home-use test. In a quantitative blind home-use test, the final prototype (prototype B) was found to be more acceptable to competitor product X adult consumers than their own brand (Figure 8.4.8).
Figure 8.4.7 Moist smokeless tobacco line extension navigation path across discussion groups. Key: B1(2) = tobacco blend 1(2); Ing3-Y(N) = ingredient 3 (not) present; Flv-L(M)(H) = flavor intensity low (moderate) (high); Swt-L(M)(H) = sweetness low (moderate) (high). Outcome: prototype B was recommended for validation in a large-scale HUT.

Figure 8.4.8 Blind home-use test results (overall liking among competitor product X adult consumers, n = 170; the final prototype was rated higher than competitor product X).

8.4.4 Theoretical background of the tools

Over the years, multiple approaches for product optimization have been used; however, each has its own strengths and limitations. For example, a trial and error approach – develop a few prototypes, test for acceptability, reformulate and repeat as necessary – can be very costly and time-consuming due to its iterative nature.
Sometimes it can also be difficult to determine what direction to take from standard hedonic and descriptive measures. For example, a prototype may receive lower liking scores than expected, and the descriptive attributes may not provide enough information to determine what needs to be improved in the product. Product developers then have to make their best guess, and optimization may take several iterations, costing time and money. Product opportunities may also be missed by making assumptions early in the process about which product direction will be most acceptable to the consumer.

Descriptive analysis using trained assessors may be used to understand how different factors impact sensory perception and to modify a product to meet certain sensory criteria. But the trained assessors cannot provide hedonic information (Stone and Sidel, 1993). Without consumer feedback, the product developer must assume that meeting the sensory criteria will result in an improved product.

Experimental design has long been used both for identifying the most important design factors and for modeling those factors to determine the optimum level of each factor (Moskowitz, 1995). Prototypes are produced that vary in certain factors according to the experimental design. The prototypes are then evaluated among consumers to gather hedonic ratings, and the results are analyzed to determine the optimum combination of factors. But because this approach can require a large number of prototypes when there are several factors to be considered, experimental design can be expensive and time-consuming, especially for products with high carry-over, such as tobacco products. Also, modeling sensory responses based on an experimental design might not be feasible when there are extensive interactions between numerous factors.
The complexity of these interactions may lead to results that are not actionable for product development because the models don't make sense.

While not a product optimization method, preference mapping can provide valuable information to the product developer about possible consumer segments, which products appeal to those segments and where a new product could be placed to best appeal to the target audience (Yackinous et al., 1999). To do this, a broad range of products is required to span the sensory space and ensure sufficient differentiation between the products. If the products do not cover the sensory space, product opportunities may be missed. Also, a large number of consumers are required to be able to see potential consumer segments. As with the experimental design approach, this method can be very expensive.

To address the above issues, an adaptable, consumer-driven, rapid product navigation process for developing an optimal product was developed and refined. Consumers (users of the product) participate in qualitative discussion groups, during which prototypes are evaluated and discussed in depth, with the consumers' response to each prototype determining the next prototype to be presented. The results from one group discussion may also be used to determine the products to be evaluated in subsequent groups. Within 4–12 iterative groups (2–4 days), the final product design can be reached. This adaptive, stimulus-response approach is consumer-centric in nature, which lends itself to understanding consumers' product sensory requirements. In addition, the business gains a deeper understanding of key design elements, which then feeds back into future product development projects.
8.4.5 Summary and future of the tools

This rapid product navigation process has demonstrated multiple successes for both product optimization and new product development projects across product categories. In the past, we performed multiple iterative loops of prototype development, screening and consumer testing that took 18–24 months. Rapid product navigation has greatly condensed that cycle time down to six months or less from scoping to commercialization, and thus has also greatly reduced costs. Also, starting with a broad scope and narrowing the product options down may reduce missed opportunities.

The collaboration required for this approach to be successful helps to create a partnership between product researchers and product development. The consumer-centric, qualitative nature of the research lends itself to understanding consumers' wants for the product sensory requirements, and increases knowledge and understanding of the key design elements. The depth of discussion may also lead to more actionable insights than traditional quantitative tests, because not only the "what" is understood, but also the "why". It is possible to identify potential consumer segments that may not be obvious from traditional sensory evaluation and ensure that the resulting product design meets the product sensory requirements for the intended audience. Additionally, the real-time dialog with consumers provides a robust understanding of the consumer language that defines the product space and informs subsequent ballot development for product validation and other survey research.
This consumer-centric, rapid product navigation process not only benefits the current project, but it also adds to existing organizational knowledge to benefit future projects.
References

Lewis, J.N., Teratanavat, R., Beckley, J. and Jeltema, M.A. (2010) "Using a Consumer-driven Rapid Product Navigation Process to Develop an Optimal Product". Food Quality and Preference, 21, 1052–1058.
Moskowitz, H.R. (1995) "One Practitioner's Overview to Applied Product Optimization". Food Quality and Preference, 6, 75–81.
Simons, C.T., Adam, C., Kirkmeyer, S., et al. (2008) "Using Orthonasal Aroma Evaluation to Predict Consumer Liking". Presented at the Society for Sensory Professionals, Kentucky, 5–7 November 2008.
Stone, H. and Sidel, J. (1993) Sensory Evaluation Practices (2nd edition). San Diego, CA: Academic Press, Inc.
Williams, E.J. (1949) "Experimental Designs Balanced for the Estimation of Residual Effects of Treatments". Australian Journal of Scientific Research, Series A: Physical Sciences, 2, 149–168.
Yackinous, C., Wee, C. and Guinard, J.X. (1999) "Internal Preference Mapping of Hedonic Ratings for Ranch Salad Dressings Varying in Fat and Garlic Flavor". Food Quality and Preference, 10, 401–409.
8.5 Free-Choice in Context Preference Ranking: A New Approach for Portfolio Assessment

Ratapol Teratanavat, James Mwai and Melissa Jeltema
Key learnings
✓ A new approach to assess overall product performance and identify optimal product portfolio selection (e.g. what is the optimum assortment of product offerings for a target consumer group)
✓ Rationale and benefits of assessing preference ranking through a free-choice and in-context method
✓ How to apply this approach (research set-up and data analysis) to answer business questions around category/competitive benchmarking and product portfolio optimization
8.5.1 Want to offer more but how many is too many?

Consider a situation when a consumer packaged goods (CPG) company, called company X, is looking to expand its product portfolio. The company currently offers four SKUs and is considering adding some new products into this product line. Within this product category, there exists a competitor – assume the key competitor is company Y, which offers two SKUs (see Figure 8.5.1). Working in the sensory product research group for company X, you are asked to team up with marketing and product development to help answer the following questions:
● With the current offering, from the pure product standpoint, what is company X's reach vs. competitor Y's?
● For company X, do they need all four SKUs? Should they streamline and keep only the two best performing SKUs?
● If company X wants to change its product offerings, what is the best mix of their products? How many do they need to beat the competition?

Figure 8.5.1 The case scenario for the company X (company X's current four SKUs: A, B, C, D; company Y's two SKUs: E, F; company X's potential new products: G, H, I, J).
Driven by the need to remain competitive and relevant, many CPG companies need continuous innovation. The introduction of new products or varieties to a product line raises the need to determine the optimal product with the goal of building product portfolios that are profitable and different enough to generate additional revenue. An understanding of the competitive landscape and the performance of the company’s portfolio against competing lines is also of importance in measuring the impact of innovation and providing information on potential product opportunities. This innovation may take the form of product line extensions or the development of a completely new line of products, for a given target consumer audience.
8.5.2 Current approaches on product line extension

In the case of product line extensions, the company needs to first understand its position as compared to its competition. The questions that marketing and product development teams get asked by the executives are as follows: "How do our products stack up vs. competitors? And are our current products better than the competition?" Then, the team needs to address the scenario question: "What if we add line extensions or replace the existing SKUs with new products? How does this impact the overall portfolio?" For a complete new line of products, the questions become: "What is the optimum mix of products to launch, and how many and which ones should be included?"

From a product performance standpoint, one approach is to rely on a hedonic rating such as liking or purchase interest (Lawless and Heymann, 1998). Generally, comparing mean liking or top-box/top-two-box (TB/T2B) purchase interest indicates whether one product is more or less appealing than another, but it does not provide information on whether one product bundle is more appealing than another bundle. More importantly, with hedonic measures, many companies generally rely on test statistics such as average or TB/T2B ratings (Sambandam and Hausser, 1998).
Figure 8.5.2 Using a hedonic measure (e.g. mean overall liking) to compare how one product performs in comparison to others (mean overall liking scores for the company's own and competitor products all fall between roughly 4.1 and 4.5).

Figure 8.5.3 Preference ranking measure (percentage indicating the most preferred product: own product, competitor, or none).
These statistics only tell us whether the product has broad appeal; they don't necessarily reveal consumer segments, where one product appeals to one group and another product appeals to a different consumer group. Also, the average scores generally tend to be similar, making it hard to differentiate one product from another (see Figure 8.5.2).

Another approach is a paired preference test (O'Mahony, 2007). This approach can be used when we have a small set of products, generally for head-to-head comparison, for example own vs. competitor (see Figure 8.5.3). However, it does not work when we have a large set of products we want to compare. In addition, the preference question is often asked after hedonic rating, which could create a potential consistency bias from respondents.

From the product optimization standpoint, the commonly known approach is total unduplicated reach and frequency, or TURF (Kreiger and Green, 2000; Miaoulis et al., 1990). It identifies the mix of products that maximizes consumer reach by selecting the combination that complements well and gains the highest reach, regardless of individual product appeal. The example in Figure 8.5.4 illustrates how TURF works. In this example, the company wants to select two out of three products to launch. The solution is to go with products A and C. The combination of products A and B is not selected even though each individually has high appeal.
Figure 8.5.4 An example illustrates how the TURF approach works (TB purchase interest: product A 30%, product B 20%, product C 15%; option 1, A+B: total reach = 40%; option 2, A+C: total reach = 45%).
However, they overlap – they appeal to the same group of consumers and, as a result, the total reach is less than when products A and C are offered.

While TURF has been commonly used, its limitations have been discussed in the literature (Conklin and Lipovetsky, 1999; Cui, 2004; Lipovetsky, 2008). It offers multiple solutions that are close to one another. In addition, it shows the potential mix for various scenarios, but it doesn't recommend how many to include. Fayle and Ennis (2010) also indicated that TURF is a flexible technique and easy to understand and explain; nonetheless, its visualization is not possible for larger numbers of concepts and there is no exact solution for large problems – only approximate solutions exist. For this case study, we also find some limitations with TURF. It may work when there is no competition, but when competition exists, it does not indicate how many products are required to be in a superior position. Also, it doesn't tell us where we are relative to the competition.

Given that these current approaches have some limitations, an alternative approach was developed to help deal with the business questions regarding competitive assessment and portfolio optimization. This chapter will explain this new alternative approach in detail – how to execute it and analyze the data, and how to use the information to help identify the optimal product portfolio and determine how well this optimal portfolio mix compares to the relevant competition.
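Before moving to the new approach, the sketch below makes the TURF reach idea from Figure 8.5.4 concrete. The respondent sets, product names and bundle size are invented for illustration and are not taken from the study.

```python
# Illustrative sketch of a TURF-style unduplicated reach calculation.
from itertools import combinations

# Hypothetical respondents accepting each product (top-box purchase interest).
acceptance = {
    "A": {1, 2, 3, 5, 8},      # broad appeal
    "B": {1, 2, 3, 5},         # overlaps heavily with A
    "C": {4, 9, 10},           # smaller but complementary audience
}
n_respondents = 10

def reach(bundle):
    covered = set().union(*(acceptance[p] for p in bundle))
    return len(covered) / n_respondents

best = max(combinations(acceptance, 2), key=reach)
print(best, reach(best))       # ('A', 'C') wins even though B alone beats C alone
```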
8.5.3 Free-choice in context preference ranking

This alternative new approach is called "free-choice in context preference ranking". This method measures preference based on multiple use/free choice by allowing consumers to use products freely in their natural setting. It allows researchers to consider a large set of products simultaneously and to include competitive products whenever possible. This preference ranking information is used to help address the product portfolio comparison and optimization questions.
8.5.3.1 Methodology/set-up

The "free-choice in context preference ranking" method involves three steps (see Figure 8.5.5).
Figure 8.5.5 Free-choice in context preference ranking set-up.
Step 1: Recruit consumers who are the target audience of these products. Then, assign a set of products (competitive products can be included) to everyone. Generally, products should be grouped in small sets of 4–5 products. This can be done randomly or arranged based on product similarity. For instance, one set has products with mint flavors (wintergreen, peppermint and spearmint) and the other set has products with other, non-mint flavors.

Step 2: Within each set, ask the consumers who participate in the research to use the products and indicate, for each product, whether it is in their consideration set. They are asked whether it is something they would consider using in the future or whether they would see themselves regularly purchasing and using the product. Consumers are instructed to go through all sets of products. Instructions are provided to consumers on the order of product evaluation according to a balanced, randomized design across product samples. During this sequential monadic test, product attributes can be gathered as well.

Step 3: Based on the information collected at step 2, these consumers receive a new set of products that includes only products in their consideration set. This set of products varies by consumer, as they have different products in their consideration set. At this step, they can freely use the products however they want to in their own setting. They can go back and forth or use each again to help form their opinion, and at the end they report their preference ranking.

Now, let's go back to the case study to see how this new approach helps address important business questions with regard to competitive assessment and product portfolio optimization. Due to proprietary concerns, actual research data and findings are not presented. However, a hypothetical illustrative case is developed to demonstrate the concepts and procedures involved. This case study is based on a study that was conducted with approximately 200 adult consumers. There were a total of ten products included in the study – four products (A, B, C and D) from company X's current offerings, two products (E and F) from competitor Y's offerings, and four new products (G, H, I and J) that company X considered as line extensions. These products were grouped into two sets (five products each) based on their flavor profile. The study was conducted over three weeks. During the first two weeks, consumers were asked to use each set of the products.
Table 8.5.1 Example of data set obtained from this new approach.

Participant ID | Ranked 1st | Ranked 2nd | Ranked 3rd | Ranked 4th
101            | C          | A          | E          | —
102            | H          | D          | F          | E
103            | E          | F          | B          | C
104            | G          | E          | —          | —
—              | —          | —          | —          | —

Note: Product not shown means it is not in the consideration set for that participant.
Figure 8.5.6 Number of products currently offered by company X (A, B, C, D) and company Y (E, F) in the same market.
At week 3, they received a new set consisting of the products they had indicated during the first two weeks were in their consideration set. Table 8.5.1 illustrates an example of the preference ranking data obtained from this study.
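A minimal sketch of step 3 in code terms is shown below: each participant's personalized week-3 set is simply the subset of products they placed in their consideration set. The participant IDs and answers are invented for illustration.

```python
# Illustrative sketch: build each participant's week-3 set from their
# weeks 1-2 consideration-set answers (hypothetical data).
consideration = {
    101: {"A": True, "B": False, "C": True, "E": True, "F": False},
    102: {"D": True, "E": True, "F": True, "H": True, "J": False},
}

week3_sets = {pid: sorted(p for p, keep in answers.items() if keep)
              for pid, answers in consideration.items()}
print(week3_sets)   # {101: ['A', 'C', 'E'], 102: ['D', 'E', 'F', 'H']}
```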
8.5.3.2 Data analysis and interpretation
8.5.3.2.1 Question 1: With the current offering, what is company X's reach vs. competitor Y?

The first question focused on the assessment of the current offerings – from a pure product standpoint, did company X's product mix beat the competition (Figure 8.5.6)? The first step in analyzing the data was to count the number of times each of these six products was ranked first (note: although ten products were included in the study, to answer this question only six products – A, B, C and D from company X vs. E and F from competitor Y – were included in the analysis). In cases where products other than these six (i.e. G, H, I and J) were ranked first, we went to the next rank until we found one of these six products. For instance, in Table 8.5.2, consumer ID 102 ranked product H first, but this product was not included in the analysis; as a result, product D was considered the first rank for this consumer (the same analysis was done for consumer ID 104). Figure 8.5.7 shows the output from this analysis: company X received approximately 60% reach vs. 30% reach for competitor Y. Based on this analysis, one would conclude that company X was in a good position with these four product offerings, within the context of two product offerings from competitor Y.

Table 8.5.2 Identify order of product preference for each consumer.

Participant ID | Ranked 1st | Ranked 2nd | Ranked 3rd | Ranked 4th
101            | C          | A          | E          | —
102            | H          | D          | F          | E
103            | E          | F          | B          | C
104            | G          | E          | —          | —
—              | —          | —          | —          | —

Figure 8.5.7 Preference ranking data with four SKUs from company X and two SKUs from competitor Y (company X: 59% reach; competitor Y: 31% reach).
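To show the mechanics of this first-choice tally, here is a minimal sketch using made-up rankings shaped like Table 8.5.1. For each participant it finds the highest-ranked product that belongs to the analysis set, then tallies reach for company X versus competitor Y. The data and variable names are illustrative, not the study's.

```python
# Illustrative sketch of the Question 1 analysis (hypothetical rankings).
from collections import Counter

rankings = {                 # participant id -> products ranked 1st, 2nd, ...
    101: ["C", "A", "E"],
    102: ["H", "D", "F", "E"],
    103: ["E", "F", "B", "C"],
    104: ["G", "E"],
}

company_x = {"A", "B", "C", "D"}
competitor_y = {"E", "F"}
analysis_set = company_x | competitor_y

first_choice = Counter()
for pid, ranked in rankings.items():
    top = next((p for p in ranked if p in analysis_set), None)
    first_choice[top] += 1     # None counts participants with no qualifying product

n = len(rankings)
x_reach = sum(c for p, c in first_choice.items() if p in company_x) / n
y_reach = sum(c for p, c in first_choice.items() if p in competitor_y) / n
print(first_choice, f"company X reach={x_reach:.0%}, competitor Y reach={y_reach:.0%}")
```

The same tally can be rerun with any subset of products, which is what allows the later questions to be answered from the existing ranking data without new fieldwork.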
8.5.3.2.2 Question 2: For company X, is there a way to streamline the current offering? Do they need all four SKUs? What if they keep only the two best performing SKUs?

Now, company X wanted to know if they needed to have all four products. Could they streamline the portfolio? What if they offered only two leading products, the same number as the competitor (Figure 8.5.8)? Is this enough? Without having to conduct another study to answer this question, we can simply go back to the ranking data and run a new analysis. To answer this particular question, only four products were included in the analysis – two products from company X (A and B, assuming that these two were the best selling products in the current offerings) and two products from competitor Y (E and F). We went through the same process as we did previously (see Table 8.5.3):

● Identify the frequency with which each product gets ranked first.
● If a product other than these four was ranked first, go to the next rank until one of these four products is reached.
● Identify the number of participants who have none of these four products in their consideration set.

Figure 8.5.8 Depiction if company X offers only its two leading products (A and B) to compete with company Y's products (E and F).

Table 8.5.3 Identify number ranked first on four products (A, B vs. E, F).

Participant ID | Ranked 1st | Ranked 2nd | Ranked 3rd | Ranked 4th
101            | C          | A          | E          | —
102            | H          | D          | F          | E
103            | E          | F          | B          | C
104            | G          | E          | —          | —
—              | —          | —          | —          | —

Note: Product not shown means it is not in the consideration set for that participant.

Figure 8.5.9 Preference ranking data with two leading SKUs from company X and two SKUs from competitor Y (company X: 39% reach; competitor Y: 50% reach).
Figure 8.5.9 shows that if company X decided to keep only two products, they would actually be worse off. In this scenario, competitor Y would have a higher reach, which could be because consumers who liked the other two products (i.e. C and D) from company X switched to the competitor Y as their product was no longer available. Based on this analysis, company X should not streamline its offerings; rather, they should revisit their own products and try to understand why they would need more products in their portfolio than the competition to be in a superior position.
8.5.3.2.3 Question 3: If company X wants to change its product offerings, what is the best mix of their products? How many do they need to beat the competition?

Now, suppose company X wants to rearrange their product line – they want to revisit their current four product offerings (A, B, C and D) along with four new products (G, H, I and J) that product developers have been working on (Figure 8.5.10). The question becomes: "How many products should they offer and what are those products?" To answer these questions, we conducted a scenario analysis, running the analysis for different scenarios based on the number of products the company wanted to offer. For each scenario, the best product mix was identified and company X's reach vs. competitor Y's reach was estimated (see Figure 8.5.11). For instance, if company X were to put out only one product, they should select product B – in this case company X would receive approximately 30% reach, whereas competitor Y would receive 50% reach.

Figure 8.5.10 Depiction if company X offers additional products from four new options (G, H, I and J) to compete with company Y's products (E and F).

Figure 8.5.11 Comparing company X's reach vs. competitor Y's reach for each scenario (company X's reach rises from 30% with one product offered to 60% with all eight, while competitor Y's reach falls from 55% to 30%).
Figure 8.5.12 Identifying the number of products in the mix using an equilibrium approach (% reach for company X by the number of products company X considers, compared against competitor Y's reach with two SKUs).
The analysis was conducted for all scenarios – the last one being when all eight products from company X were offered together. The next step was to plot the percentage reach for each scenario (see Figure 8.5.12). It shows that company X would need at least three products to be well positioned compared to the competition. If this were the option to go forward with, we could identify the best mix of three products. Based on this result, again, company X should strongly reconsider its own product portfolio and try to understand why they would need more products in their portfolio to beat the competition.

It is worth mentioning that this approach can be extended when new products are introduced at a later stage, without having to start everything all over again. For instance, what if competitor Y decided, later on, to launch new products? In this case, there is no need to repeat the whole study. The researchers can simply invite the same group of consumers back and give them the set of new products along with the product they most preferred. The analysis can then be rerun to assess the impact of the new competition on company X's reach. With the new competitive set, what is the defensive product strategy for company X? Should they include some new products? How much will they gain back? etc.
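The scenario analysis itself is mechanical once the first-choice tally exists. The sketch below enumerates, for each portfolio size, the company X subset with the highest reach against a fixed competitor set and prints the resulting reach curve. The data and helper function are illustrative only, not the study's actual code or results.

```python
# Illustrative sketch of the scenario (equilibrium) analysis with hypothetical data.
from itertools import combinations

def reach_split(rankings, own, competitor):
    """Share of participants whose top qualifying product is own vs. competitor."""
    analysis_set = own | competitor
    own_hits = comp_hits = 0
    for ranked in rankings.values():
        top = next((p for p in ranked if p in analysis_set), None)
        own_hits += top in own
        comp_hits += top in competitor
    n = len(rankings)
    return own_hits / n, comp_hits / n

rankings = {
    101: ["C", "A", "E"], 102: ["H", "D", "F", "E"],
    103: ["E", "F", "B", "C"], 104: ["G", "E"],
}
candidates = set("ABCDGHIJ")          # current SKUs plus proposed line extensions
competitor_y = {"E", "F"}

for k in range(1, len(candidates) + 1):
    best = max(combinations(sorted(candidates), k),
               key=lambda mix: reach_split(rankings, set(mix), competitor_y)[0])
    x, y = reach_split(rankings, set(best), competitor_y)
    print(f"{k} products: best mix={best}, company X {x:.0%} vs competitor Y {y:.0%}")
```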
8.5.4 Theoretical backgrounds of free-choice in context preference ranking

In developing a new product line, companies sometimes face the challenge of identifying the right mix of product varieties for a given target consumer audience. For instance, in developing a new sports drink line, the company needs to determine how many flavors should be offered. Generally, offering multiple flavors tends to increase appeal to a broader audience; however, this can be costly for marketers. The company needs to determine the right number of flavors to offer that gain the broadest reach (i.e. do not appeal to the same audience) and the composition of that flavor mix. In addition, the company will want to gain insights on how its optimal flavor mix performs compared to its competition in order to guide its business decision process.
This study presents a new approach called the free-choice in context preference ranking method for optimal product portfolio selection. The traditional approach, such as TURF analysis, relies on purchase interest or liking measures, whereas this approach relies on the placement of products into a consideration set and a comparative measure, that is, preference ranking. First, participants evaluate all products and indicate those that fall into their consideration set. Then, they rank the products in their consideration set based on preference. With this approach, competitive products are also included whenever available to help guide the portfolio selection and allow head-to-head comparisons on overall appeal. Scenario analysis is conducted and the optimal product portfolio is identified as one that has the highest reach and is higher than the competitive set. It has been shown that this new approach provides insightful information to help identify the optimal product portfolio and to determine how well this optimal portfolio mix performs from a product standpoint compared to the competitors.
8.5.5 Summary and future

This alternative approach, the free-choice in context preference ranking method, has been shown to help address complex business questions when it comes to portfolio assessment or optimization. It is based on preference ranking data, with preference based on multiple uses of the products in a natural setting. This method also allows you to include a large set of products, including competitive products. And from what we have experienced, the output is intuitive and easy to understand and communicate to the business.

Note that the approach discussed in this chapter focuses on pure product performance (i.e. a blind product test). It does not take into account the effect of branding and other marketing factors that could impact product choice and purchase decisions. As a result, the estimated reach should be used cautiously; it should not simply be equated to potential market share. Nonetheless, this method can be applied to branded products as well – it is interesting to see how consumers react to products under two different contexts, one in the absence of branding and the other with full branding.

An area one may want to consider as a future study is to compare results from this approach to TURF, which is traditionally used in market research, as well as other approaches that have been introduced, such as SURF – structural unduplicated reach and frequency (Lipovetsky, 2008), Shapley value analysis (Conklin and Lipovetsky, 1999; Cui, 2004) or LSA maximum coverage/portfolio optimization (Fayle and Ennis, 2010).
References

Conklin, M. and Lipovetsky, S. (1999) "A Winning Tool for CPG – The Shapley Value Game Theory Method Brings Advantages to Marketing Managers Evaluating Product Line Flavor Decisions". Marketing Research, 11 (4) (Winter 1999/Spring 2000), 23–27.
Cui, D. (2004) "Appropriate Application of TURF and Shapley Value for Product Line Optimization". A White Paper from the Ipsos Group.
Fayle, C.M. and Ennis, J. (2010) "An Efficient Approach to Solving Complex Market Research Problems". Institute for Perception. Paper presented at the 2010 Sensometrics Conference, Rotterdam, The Netherlands.
Kreiger, A.M. and Green, P.E. (2000) "TURF Revisited: Enhancements to Total Unduplicated Reach and Frequency Analysis". Marketing Research, 12 (4) (Winter), 30–36.
Lawless, H.T. and Heymann, H. (1998) Sensory Evaluation of Food – Principles and Practices. New York: Springer Science.
Lipovetsky, S. (2008) "SURF – Structural Unduplicated Reach and Frequency: Latent Class TURF and Shapley Value Analysis". International Journal of Information and Technology & Decision Making, 7 (2), 203–216.
Miaoulis, G., Free, V. and Parsons, H. (1990) "TURF: A New Planning Approach for Product Line Extensions". Marketing Research, March, 28–40.
O'Mahony, M. (2007) "Conducting Difference Testing and Preference Trials Properly for Food Product Development". In MacFie, H. (ed.), Consumer-Led Food Product Development. Cambridge, UK: Woodhead Publishing in Food Science, Technology, and Nutrition.
Sambandam, R. and Hausser, G. (1998) An Alternative Method of Reporting Customer Satisfaction Scores. White Paper from TRC Research Insight Direction. Available at http://www.trchome.com/white-paper-library/wpl-satisfaction-and-loyalty/144
Chapter 1: Setting the Direction: First, Know Where You Are
Chapter 2: The Consumer Explorer: The Key to Delivering the Innovation Strategy
Chapter 3: Invention and Innovation
Chapter 4: Designing the Research Model
Chapter 5: What You Must Look For: Finding High Potential Insights
Chapter 6: Tools for Up-Front Research on Consumer Triggers and Barriers
Chapter 7: Tools for Up-Front Research on Understanding Consumer Values
Chapter 8: Tools to Refine and Screen Product Ideas in New Product Development
Chapter 9: Tools to Validate New Products for Launch
Chapter 10: Putting It All Together: Building and Managing Consumer-Centric Innovation
Chapter 11: Words of the Wise: The Roles of Experts, Statisticians and Strategic Research Partners
Chapter 12: Future Trends and Directions
“Because where a connoisseur sees the differences, a novice sees the similarities.”
Youngme Moon, author of Different: Escaping the Competitive Herd (2008)
This chapter discusses the final product research tests used to identify the final formula for launch and the importance of identifying how consumers differentiate a new product from other products in the market. In contrast to the earlier chapter, where different product options are refined and screened, this chapter focuses on the tools to identify the final product for launch and the necessary metrics to predict success in the marketplace. The tools employed here depend on the goal of innovation and business activities (incremental innovation, disruptive innovation). This chapter discusses the use of benchmarks, the test designs, the involvement of concepts and product essence in product testing, and key product indicators (overall liking, purchase intent, product-concept fit, emotions, behavior change).
Chapter 9
Tools to Validate New Products for Launch

9.1 Extended Use Product Research for Predicting Market Success

Ratapol Teratanavat, Melissa Jeltema and Stephanie Plunkett
Key learnings
✓ Rationale and benefits of extended product use research over traditional sensory research
✓ How to conduct extended product use – factors, research flow, key measures
✓ How to use insights to gain deeper understanding of consumer product experience to go beyond measuring product performance
✓ How to take insights from extended product use to assess product viability and potential market success
9.1.1 Balancing two important acts: Introducing new products and optimizing portfolio

A leading CPG company wanted to explore new product opportunities to market under their major brand. These products were unlike any currently on the market. The brand team and product development faced two immediate challenges: first, they needed to determine which, if any, of the new products were viable options, and second, to determine the optimal portfolio of products to achieve broad appeal.
Figure 9.1.1 Commonly used sensory research methods, surveyed among sensory professionals in 2005 and 2007 (Beckley, 2007, with permission): consumer (affective) testing, discrimination testing, descriptive testing, qualitative testing, observational testing, human factors testing and others.
For new product ideas, traditional market research concept testing may have suggested that consumers are interested in these ideas; however, while concept testing of novel products provides a good indication of the general consumer need, it often is unable to predict how the consumer will react to the actual product. Additionally, neither concept testing nor single-use concept/product testing can provide an indication of whether consumers will adopt new and novel products that require a change in behavior. Ultimately, companies want to know how the new products will affect their bottom line. Will they gain market share? Through enhancement of the traditional sensory research approaches, we might better understand the consumer in context and, ultimately, the real marketplace potential.

It is known that the majority of new products fail within the first year and subsequently are removed from the shelf (Beckley et al., 2007; Stevens and Burley, 1997). While there are a host of reasons why new products may fail, we contend that the variability in predicting new product success can be improved through a better understanding of two important questions: first, do we have the right product, and second, what is the market potential?

In the conventional approaches, the sensory research group supports product development by utilizing well-known approaches to define overall product performance; see Figure 9.1.1 for the approaches commonly used by sensory professionals (Beckley, 2007; Moskowitz et al., 2006). Discrimination and descriptive measures provide direction for product development to make improvements against the specific attributes measured (Stone and Sidel, 2007). Additionally, affective measures of liking are utilized, sometimes exclusively, as an indicator of product performance (Lawless and Heymann, 1998). Finally, traditional approaches may extend to qualitative testing, which allows for a more in-depth understanding of product performance.
9.1.2 Shortcomings of traditional approaches

Generally, in order to determine whether the product is ready to launch, many product researchers rely on hedonic measures such as liking, purchase interest, or stated preference to assess overall product performance. In general, these hedonic measures have to be at or above a certain level for a product to be considered acceptable. For instance, the average liking score has to be higher than a benchmark, or the preference or liking measure for one product has to be higher than for another product (usually a competitor). Despite their popularity and usefulness, we believe that these methods do not provide adequate information for novel products to answer whether a product is ready to launch.

First, will this product be adopted by consumers over their current products, and why or why not? Next, how do we assess consumer choice and decision making? This includes understanding the trade-offs that a consumer may be willing to make. Consumers may not like the taste of a particular product (this can be reflected by a low hedonic rating), but they may still choose to use or buy the product because it offers other benefits. This understanding builds upon consumer product preferences in general to include their preferences in the context of their lifestyle, attitudes and values. The type of trade-offs and degree to which they are made depend upon what the consumer values most. For instance, a consumer might not like the taste of whole grain or wheat bread and prefer the flavor of white bread. However, they may choose the whole grain bread because it offers more nutrients. In this case, the consumer is prioritizing the benefit of a healthier alternative. However, if a nutrient-enriched white bread were able to deliver the taste and offer the same health benefits as whole grain bread, and was believable to consumers, then this same consumer would likely choose the white bread option. And lastly, in what ways do hedonic measures reflect actual behavior (i.e. in the context of the natural environment, which drivers of liking are most influential to product choice)?

An important element to note in each of these questions is that they consider a specific context. In order to answer the question of whether or not the product is good enough, the context of the marketplace landscape needs to be considered, and within that, how the product measures up against an appropriately determined benchmark. The consumer choice and trade-off question focuses on consumer context, emphasizing individual differences in how consumers make choices as they weigh experience, preferences, attitudes and values, and individual needs. The third question (in what ways do hedonic measures reflect actual behavior?) is really a marriage of the prior contexts discussed. It is the assessment of the consumer/product relationship in the context of everyday life. This question emphasizes understanding the drivers that maintain this relationship in order to achieve product loyalty in consumers.

The concern we have with the traditional sensory research approach is that those methodologies have the same deficits as laboratory research when compared with research conducted in the natural environment. The main deficit is that significant findings observed in laboratory studies, and likewise in traditional approaches, may not generalize to the natural setting or the marketplace (Moskowitz et al., 2006). For example, in laboratory settings, palatability has been shown to be a main driver of intake.
Shortcomings of traditional approaches Generally, in order to determine whether the product is ready to launch, many product researchers rely on hedonic measures such as liking, purchase interest, or stated preference to assess overall product performance. In general, these hedonic measures have to be at or above a certain level for a product to be considered acceptable. For instance, the average liking score has to be higher than a benchmark. Or, preference or liking measure of one product has to be higher than another product (usually competitors). Despite their popularity and usefulness, we believe that these methods do not provide adequate information for novel products to answer whether a product is ready to launch. First, will this product be adopted by consumers over their current products and why or why not? Next, how do we assess consumer choice and decision making? This includes understanding the trade-offs that a consumer may be willing to make. Consumers may not like the taste of a particular product (this can be reflected by low hedonic rating), but they still choose to use or buy the product because it offers other benefits. This understanding builds upon consumer product preferences in general to include their preferences in context of their lifestyle, attitudes and values. The type of trade-offs and degree to which they are made depends upon what the consumer values most. For instance, a consumer might not like the taste of whole grain or wheat bread and prefer the flavor of white bread. However, they may choose the whole grain bread because it offers more nutrients. In this case, the consumer is prioritizing the benefit of a healthier alternative. However, if a nutrient enriched white bread were able to deliver the taste and offer the same health benefits as whole grain bread, and was believable to consumers, then this same consumer would likely choose the white bread option. And lastly, in what ways do hedonic measures reflect actual behavior (i.e. in the context of the natural environment, what drivers of liking are most influential to product choice?). An important element to note in each of these questions is that they consider a specific context. In order to answer the question of whether or not the product is good enough, the context of the marketplace landscape needs to be considered, and within that, how the product measures up against an appropriately determined benchmark. The consumer choice and trade-off question focuses on consumer context, emphasizing individual differences in how consumers make choices as they weigh experience, preferences, attitudes and values, and individual needs. The third question (in what ways do hedonic measures reflect actual behavior?) is really a marriage of the prior contexts discussed. It is the assessment of the consumer/product relationship in the context of everyday life. This question emphasizes the understanding of the drivers that maintain this relationship in order to achieve product loyalty in consumers. The concern we have with the traditional sensory research approach is that those methodologies have the same deficits as laboratory research when compared with research conducted in the natural environment. The main deficit is that significant findings observed in laboratory studies, and likewise, traditional approaches, may not generalize to the natural setting or the marketplace (Moskowitz et al., 2006). For example, in laboratory settings, palatability has
2/7/2012 7:41:39 PM
Extended Use Product Research For Predicting Market Success
307
Additionally, in these same settings, the time of day has not been shown to influence the amount of food consumed. However, in the natural environment, time of day is significantly more influential on the amount of food consumed, with little effect of palatability.

As we have strived to improve our approaches to product research, we realized that we needed to go beyond an understanding of liking or preference for our products. In essence, we needed to understand why consumers want certain products and what benefits they are looking for. Using traditional sensory research approaches and relying on hedonic measures do not help us answer these questions. Therefore, we knew we needed to develop an alternative approach which provides specific information to the business. The foundation from which this alternative approach is developed is simply a thorough understanding of consumer behavior. In order to develop successful products, we need to truly understand consumers (Schieffer, 2005). Sensory attributes and liking are only a small part of that understanding. To be consumer-centric, we must understand the consumer decision-making process and assess product performance in the context of free choice and behavior.

We have been successful internally in demonstrating that go/no go decisions should be rooted in understanding the consumer-product connection in order to be able to determine whether the product is likely to be successful. In other words, it is not just whether and how much consumers like the product. It is important to understand the driving factors – what is important to them, why and how often they will be using the products, and whether and how this product fits into their life versus other products that are available to them.
9.1.3 An alternative: Extended use product research

The approach we propose here is called "extended use product research" (EUPR). This approach combines traditional approaches (i.e. ballot measures) with behavioral measures such as product choice and usage, as well as in-depth, in-context interviews (i.e. consumer decision-making criteria), evaluated over extended periods. It offers insights beyond traditional sensory research approaches (e.g. descriptive, discrimination and affective) to help understand the consumer perspective on products and assess potential market success. It enables us to understand the consumer's interaction with the product (decision-making process and product design criteria) and to obtain behavioral measures that help determine market potential. This approach addresses the following important questions that will ultimately assess market potential:
● What factors influence consumer choice decisions, beyond sensory attributes?
● What are the product design criteria? How important are they?
● Does the product response remain the same or change over time?
● How well does the product fit into consumers' lives?
● What is the usage pattern? How does it change over time?
● Is the product good as it is, or what changes need to be made to be good enough?
Now, we will walk through the steps of this method. At each step we will review how to execute the approach, the type of information provided and how to utilize the information to help make business decisions.
9.1.4 Steps in conducting extended use product research

Note: This method should be used for proof of concept. For this method to be successful, the products used must have been sufficiently optimized for go/no go decisions to be realistic.

Figure 9.1.2 shows the general steps in conducting an EUPR. In this example, the first step was to introduce the product to consumers. This was the first point of contact for consumers with the product – we wanted to let them try it so they could familiarize themselves with the product and any product options (e.g. flavors or forms). For products that are very new and where we anticipate initial barriers to entry, a group setting can be used to encourage interaction and sharing of product experiences. In this research, we found this group interaction very helpful for getting consumers past any reservations they may have had with the product, as well as helping them to understand what the product is about and how to use it. This group setting appears to be particularly useful for a "new to the world" product because consumers are often unfamiliar with the concept. During this beginning stage, we can also assess the initial response to the concept and products.

Then, we offered consumers the option to take home a product(s) to use. They could choose from the products in the choice set provided to them. The number of products to be included in the choice set, as well as the number of products each person is allowed to take home, depends on your research objectives. We tracked each consumer's choice selection and also gave them a journal to track their usage and write any thoughts and comments about each product. In-depth discussions with consumers were used at selected intervals to understand their product experiences and product adoption process. We also collected information to help us understand their usage and product experience and to assess product performance after extended use. This allowed us to understand any changes in their experience, behavior and usage over time. At the end, we conducted in-depth interviews to help us understand their decision-making processes and validate the product performance against the design criteria.
Figure 9.1.2 Steps in conducting an extended use product research (EUPR) study to gauge new products' potential in the marketplace: initial response to concept and product trial; extended use – choice and usage over time; follow-up in-depth interviews (validation of extended use); recommendation (which one(s), if any, has market potential; optimal portfolio offerings; improvement opportunities).
9.1.5 Understanding consumer segments

Often, we learn that there are segments of consumers based on interest and product acceptance. In this example, the in-depth interviews gave us an understanding of the differences between groups and allowed us to evaluate each group separately (Table 9.1.1).

Table 9.1.1 Different responses from different segments – they were identified based on interest/acceptance (high vs. moderate acceptance).

Audience segment | % Acceptance | Rationale based on in-depth interviews
Segment 1        | 60%          | More open/accepting to concepts; more familiar with products; high enjoyment
Segment 2        | 40%          | Busy lifestyle; social interaction; have interest but do not enjoy using these products
Segment 3        | 20%          | Don't really have any interest for these new products
9.1.6 Assessment of sensory performance

Similar to traditional sensory research approaches, for each consumer segment we observe and compare how consumers rate one product vs. others. In Figure 9.1.3, different segments reacted differently to these products (i.e. one product did not fit all), so we needed to assess product performance separately by consumer segment. We also reviewed the ratings on sensory attributes and tried to explain the drivers of product acceptance.

Extended product use testing allows us to compare product performance over time. Figure 9.1.4 shows product response varying at different stages. In this example, some products were well received at the beginning but performed poorly after an extended period, whereas other products did not do well at the beginning but performed better over time. It is important to understand the reasons behind this. It could be because of sensory attributes, but could equally be due to other reasons, such as fit to life. Sensory is only one aspect – there are many other factors that drive product acceptance and rejection at different stages.
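The stage-by-stage comparison in Figures 9.1.3 and 9.1.4 rests on a simple summary statistic. The sketch below shows one way to compute top-2-box purchase interest (% definitely/probably would buy) at each stage; the ratings, scale points and stage labels are invented for illustration.

```python
# Illustrative sketch: top-2-box purchase interest at each stage of an
# extended-use test (hypothetical 5-point ratings, 5 = definitely would buy).
ratings = {
    "concept":            [5, 4, 4, 3, 2, 5, 4, 1],
    "after 1st trial":    [4, 3, 5, 3, 2, 4, 3, 1],
    "after extended use": [5, 5, 4, 4, 3, 5, 4, 2],
}

def top2box(scores):
    return sum(s >= 4 for s in scores) / len(scores)

for stage, scores in ratings.items():
    print(f"{stage}: {top2box(scores):.0%} top-2-box")
```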
9.1.7 Understanding how consumers make choice decisions

In-depth interviews allow us to understand how consumers made their choices. We find that several factors, not just sensory attributes, influence product choice decisions before, during and after experience with the products.
Figure 9.1.3 Product response by consumer segments. Purchase interest (top 2 box) refers to % Definitely/Probably would buy.

Figure 9.1.4 Extended product use research allows us to compare product performance over time. Purchase interest (top 2 box) refers to % Definitely/Probably would buy.
Figure 9.1.5 An example of output from in-depth interviews in extended use product research:
● Understand product: Do I know how to use it? Do I have prior experience?
● Determine whether the product fits them: Will it fit into my life (e.g. as a professional worker)? Are there negative associations?
● Trial – reject products based on sensory attributes: Do I reject it based on sensory attributes (e.g. taste, texture and sensation)?
● Give it a try – use in different occasions to see if it works: Does it fit my life as expected? Is it more for situational use or regular use?
● Assess whether it provides an enjoyable experience: How enjoyable is the experience?
This helps us understand why consumers accept or reject products. Understanding factors that influence their choice decisions allows us to determine what challenges each product faces. Figure 9.1.5 shows how consumers made their choice decisions from this example. In this research example, we provided consumers with three different options (products A, B and C), all of which were developed against the same consumer need. As you can see from Figure 9.1.6, the pain points were varied by product. While the initial reaction to product A was problematic, after use, the product was actually well received. The opposite was true of product C. While the issues with product A might be overcome with adequate marketing and communication,
the issues with product C made it a poor option for commercialization. Product B was actually the best option of the three, as it had no pain points.
Figure 9.1.6 Understanding factors that influence their choice decision allows us to determine what challenges each product faces (products A, B and C mapped against the decision stages from Figure 9.1.5). Note: These findings were based on consumer feedback during the follow-up, in-depth interviews.
9.1.8 Using behavioral measures to help assess product viability

Throughout the extended use period, we determine how well consumers adopt each product. We are able to observe usage patterns through their choice selections. These are important behavioral measures that go beyond liking measures. Tracking product usage helps assess short-term and long-term product performance, and helps us understand what it will take for the new products to be acceptable and adopted. We can also determine the impact of new product introductions on the usage or consumption of existing products. Understanding product experience is critical, particularly for new products, because success is no longer just about product attributes; it is about the value consumers gain from the product. Not only do we want to assess whether the new products are acceptable to consumers, we also need to know why consumers want to use them: why would they choose these products instead of something else, and what benefits do they find from using them? Through listening to consumers' stories about their product experiences, we are able to identify consumer segments.
[Figure 9.1.7 content: consumers plotted by product experience (horizontal axis) and product enjoyment (vertical axis), with three groups: "Potential regular use" (realize higher value, adopt it and it becomes a new routine); "Situational use" (incorporate and use in some occasions); "Non-user" (not interested and not for me).]
Figure 9.1.7 Illustration of consumer segments based on behavioral measures. Through this research we were able to classify each person as a "user" or "non-user" of each product. Among users, they were also segmented into situational users and regular users.
In this example case, we identified a group of consumers who were not interested in the products: once they tried them, they realized these products were not for them, so they were not really using them. We also had another group of consumers who saw benefits in these products and adopted them. There were segments within this group as well: some saw limited benefits and therefore used the products only in certain situations, while others realized more benefit. Those who really loved one or more of the products truly adopted them, and the new product(s) became their new routine. The degree of adoption was related to the amount of benefit realized (how large the need was) as well as the length of the product experience (see Figure 9.1.7). This is particularly true of new products that require a behavioral change.
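As a rough illustration of how the behavioral segments in Figure 9.1.7 might be derived from choice data, the sketch below classifies each panelist from a hypothetical usage log. The thresholds, field names and usage-share rule are our assumptions for illustration, not the rules actually used in the study.

```python
from collections import Counter

# Hypothetical choice log from an extended use panel:
# (panelist_id, week, product_chosen); None means an existing alternative was chosen.
choice_log = [
    ("p01", 1, "A"), ("p01", 2, None), ("p01", 3, "A"), ("p01", 4, "A"),
    ("p02", 1, "B"), ("p02", 2, None), ("p02", 3, None), ("p02", 4, "B"),
    ("p03", 1, None), ("p03", 2, None), ("p03", 3, None), ("p03", 4, None),
]
total_weeks = 4  # length of the extended use period in this toy example

def classify(weeks_chosen, total_weeks, regular_cutoff=0.6, situational_cutoff=0.25):
    """Assumed rule of thumb based on the share of weeks the new product was chosen."""
    share = weeks_chosen / total_weeks
    if share >= regular_cutoff:
        return "potential regular use"
    if share >= situational_cutoff:
        return "situational use"
    return "non-user"

weeks_chosen = Counter(pid for pid, _, product in choice_log if product is not None)
segments = {pid: classify(weeks_chosen.get(pid, 0), total_weeks)
            for pid in sorted({pid for pid, _, _ in choice_log})}
print(segments)
# e.g. {'p01': 'potential regular use', 'p02': 'situational use', 'p03': 'non-user'}
```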
9.1.9
In order to adequately understand how well these products perform, secondary measures in addition to product usage are useful. At the beginning of the research example, we gathered measures to quantify the degree to which each consumer "needed" the benefits that these products were designed to deliver. Toward the end of the research, we then measured how well the products delivered against these needs (Table 9.1.2). We also gathered information on the critical sensory parameters and measured, for each product, how well each delivered against those sensory requirements, using a Kano diagram (Figure 9.1.8). (Note: A more detailed discussion of the Kano method can be found in Chapter 7.1.) These measures, as well as the in-depth interviews, allowed us to understand at a deep level the performance and potential of each product.
Table 9.1.2 Comparison of product benefits consumers gain from using various products (ranking – importance of benefits).

Benefit/gap relevancy of benefit*                                 Product A   Product B   Product C
I like that I am not being judged by other people
  when I use these products                                          15%         65%         45%
It fits better with my lifestyle                                     20%         60%         30%
I don't have to go out of my way to enjoy these products             30%         70%         40%
I can use these products anywhere anytime                            35%         50%         15%
I enjoy the taste and flavor of these products                       50%         85%         65%

* Numbers are % consumers who feel that the product fulfills a benefit.
[Figure 9.1.8 content: a Kano diagram plotting products A, B and C on sensory attributes 1–12, with implementation level (Absent to Fully implemented) on the horizontal axis and consumer response (Dissatisfied to Satisfied) on the vertical axis, and curves for Must-have, Optimizer and Delighter attributes. Legend: Optimal – no change needed; Some changes required; Unmet consumer needs – significant changes required. Note: We hypothesized what attributes could be delighters to adult consumers, but we didn't have the opportunity to measure how well these products perform on these attributes.]
Figure 9.1.8 Using a Kano diagram to help assess how well the product meets the sensory requirements.
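For readers who want to see the mechanics, the snippet below applies the textbook Kano classification rule, using the same category names as the figure (must-have, optimizer, delighter). It is a generic sketch of the method described in Chapter 7.1, not necessarily the exact scoring used to build Figure 9.1.8, and the answer wording is assumed.

```python
# Textbook Kano classification from a paired "functional"/"dysfunctional" question:
# "How would you feel if the product had this attribute?" vs. "...did not have it?"
# Assumed answer scale: "like", "must-be", "neutral", "live-with", "dislike".
def kano_category(functional: str, dysfunctional: str) -> str:
    mild = ("must-be", "neutral", "live-with")
    if functional == "like" and dysfunctional == "dislike":
        return "optimizer"      # one-dimensional / performance attribute
    if functional == "like" and dysfunctional in mild:
        return "delighter"      # attractive attribute
    if functional in mild and dysfunctional == "dislike":
        return "must-have"      # must-be attribute
    if functional == dysfunctional in ("like", "dislike"):
        return "questionable"   # contradictory answers
    return "indifferent"

# Example: classify one attribute from a respondent's modal answers (made-up data).
print(kano_category("like", "neutral"))     # -> delighter
print(kano_category("neutral", "dislike"))  # -> must-have
print(kano_category("like", "dislike"))     # -> optimizer
```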
We then quantified and predicted the market potential to determine whether the product was a viable option and thus should be pursued further. Figure 9.1.9 shows the comparison of data from the extended product use research with market tracking data. The final outcome was an initial estimate of marketplace performance in the form of volume projections.
[Figure 9.1.9 content: two parallel funnels.
Data from extended product use testing: Total sample 100% → Interest in concept V% → Awareness* V% → Acceptance Y% → Regular use Z%.
Data from market tracking study of comparable products: Total population 100% → Initial interest A% → Awareness B% → Purchase C% → Repurchase D%.
* Awareness data were not available from the extended use study; we assume the percentage is the same as in the tracking study.]
Figure 9.1.9 Comparison of data from extended product use research with market tracking data.
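As a purely arithmetic illustration of how the two funnels in Figure 9.1.9 can be chained into a volume projection, consider the sketch below. All numbers are invented, and substituting the tracking-study awareness rate for the unmeasured extended-use awareness simply mirrors the note under the figure; a real projection would use the study's own percentages.

```python
# Hypothetical funnel rates expressed as fractions.
target_population   = 5_000_000  # assumed number of target households
concept_interest    = 0.40       # V%: interested in the concept (extended use study)
awareness           = 0.55       # B%: awareness, borrowed from the tracking study
acceptance          = 0.50       # Y%: accepted the product after trial
regular_use         = 0.30       # Z%: became regular users
units_per_user_year = 24         # assumed annual purchase frequency of a regular user

regular_users = (target_population * awareness * concept_interest
                 * acceptance * regular_use)
annual_volume = regular_users * units_per_user_year

print(f"Projected regular users:      {regular_users:,.0f}")
print(f"Projected annual unit volume: {annual_volume:,.0f}")
```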
9.1.10 Philosophy behind extended use product research
During new product development, sensory and consumer research are generally conducted to determine how new products appeal to consumers and how new products perform compared to competitive products (Moskowitz et al., 2006). Several measures have been used to gauge overall product performance, such as liking, preference and purchase interest. These measures are generally collected from consumers before and after product evaluation (i.e. pre-trial and post-trial) through mall intercepts, central location tests or home-use tests. While these measures have been widely used, how accurately they can predict product success in the marketplace is often questioned. A more behavioral measure of product adoption can eliminate some of the limitations of other methods. One approach that can be used is to develop an extended use consumer panel. Researchers can observe initial responses to the concept and product trial from potential consumers and measure how consumers' reactions to the products change over time. Researchers can also measure long-term behaviors such as product choice selection patterns in different scenarios. These behavioral measures may help identify consumer segments based on product preference. Understanding sensory segments is important for product development and marketing to determine product mix strategies. These behavioral measures may help researchers better understand actual consumer behavior, as compared to stated measures such as overall liking, stated preference or purchase interest. We think this approach is very useful because it not only provides a snapshot of how acceptable the product is at the evaluation point, but also allows an assessment and understanding of product performance in both the short term and the long term. It also allows us to bridge hedonic measures to actual behavior. Additionally, it allows us to gain a deeper understanding of the consumer product experience, not only around product attributes, but also the functional and emotional benefits that are important to consumers.
9.1.11 Summary and future

Extended use product research is an approach that integrates behavioral measures (e.g. product choice selection and usage) in addition to hedonic measures (liking, stated preference or purchase interest). It allows researchers to gain a deeper understanding of the consumer product experience and uncover the "why" behind how all the pieces of the product experience fit together. It offers different views for assessing product acceptance and short-term vs. long-term performance. These behavioral measures provide more solid ground than one-time measures (liking, purchase interest) for assessing the likelihood of product success and product viability, helping businesses make informed decisions.
References
Beckley, J.H. (2007) "A biennial survey conducted by It! Ventures LLC and The Understanding & Insight Group LLC to understand the changing role of the Sensory Professional." Available online from Yahoo Groups.
Beckley, J.H., Foley, M.M., Topp, E.J., Huang, J.C. and Prinyawiwatkul, W. (2007) Accelerating New Food Product Design and Development. Ames, IA: Blackwell Publishing Professional.
Lawless, H.T. and Heymann, H. (1998) Sensory Evaluation of Food – Principles and Practices. New York: Springer Science.
Moskowitz, H.R., Beckley, J.H. and Resurreccion, A.V.A. (2006) Sensory and Consumer Research in Food Product Design and Development. Ames, IA: Blackwell Publishing Professional.
Schieffer, R. (2005) Unlocking the Mind of the Market – Ten Key Customer Insights. Mason, OH: Thomson/South-Western.
Stevens, G.A. and Burley, J. (1997) "3,000 Raw Ideas = 1 Commercial Success!". Research Technology Management, 40 (3) (May/June), 16–27.
Stone, H. and Sidel, J.L. (2007) "Sensory Research and Consumer-Led Food Product Development". In H. MacFie (ed.) Consumer-Led Food Product Development. Boca Raton, FL: CRC Press. pp. 307–320.
9.2 Product Concept Validation Tests

Jennifer Hanson
Key learnings
✓ Methods for evaluating concept and product alignment
✓ How to use these methods for incremental and disruptive innovation
✓ Success metrics for moving into commercialization of the innovation

9.2.1 The final verdict: Concept product validation testing

Validation is the most important step prior to launching a new product. It is usually the final touch point with consumers for new product launches. The validation test is where the Consumer Explorer (CE) connects the dots between the concept messaging and the product delivery. These tests require at least two phases of evaluation to ensure the reactions to the concept and the product are both captured. It is a time-consuming study for the consumers and tends to be more expensive than most other studies in the innovation path. The inherent richness of this test allows the CE to make a very informed decision about whether to move forward and what modifications, if any, need to be made to improve the chance of success for the new product. Many companies do some kind of validation testing, which could consist of multiple tests for parallel product ideas or one final test for a go/no go decision. In most companies, the executive decision for the final product launch is made by one person (usually the CEO/chief executive officer or CMO/chief marketing officer). In instances where executive decisions are made without consumer data, it is assumed that person knows exactly what their target market needs and usually is very involved throughout the process to make sure the company reaches the end goal. Multiple tests or parallel pathing may suggest the company is not as certain of what the market demands, or has a very short development timeline, and therefore needs to iterate along the innovation path until they develop a product and concept that meets the needs of their market. In this scenario, the company
uses the testing to uncover the needs and solution along the way. Their final step is a validation test of some kind, such as a CLT, HUT or test market (discussed in this chapter).
Regardless of the validation testing path chosen, there are four items any CE needs to consider as they are designing and evaluating the market potential of a new product:
● Target market Who is my core consumer and what will meet their needs? Am I focusing on a target that has a low level of incidence in the market but is well defined, or one that has a large incidence and is less defined?
● Competitive set Which product will I compete against in the market? Who do I need to benchmark my performance against, both short and long term?
● Type of innovation Am I launching a potential platform of new products that cross traditional category boundaries and can disrupt or define new categories? Or are we delivering incremental innovation through a set of line extensions modeled from existing products?
● Sales potential What is the best way to forecast the sales of my innovation? Do we rely on a "top-down" approach where we start with the sizes of existing markets and make assumptions based on business requirements, or do we predict sales using a "bottom-up" approach where we estimate demand using trial, repeat and unit estimates generated by consumers in testing?

9.2.2 Type of innovation

Understanding the type of innovation being tested is the most crucial element the CE should know prior to validation testing. There are two general types of innovations:
(1) Incremental Normally a set of line extensions modeled after existing in-market products (such as Pitch Black Mountain Dew), or
(2) Disruptive A platform of new products that cross traditional category boundaries and can disrupt or define new categories (such as Febreze, a line of odor eliminator products from Procter and Gamble).
As the CE, you need to work with your internal cross-functional team to understand what type of innovation you are launching as it will not only determine the type of validation test you execute, but also the target and competition which directly impact sales forecasts, marketing plan and the final business decisions on investment. The most important thing to remember is to determine the type of innovation within and across categories. What may look like a disruptive innovation across categories may be an incremental innovation within your category.
9.2.3 Target market

To properly assess the potential of a new product, the CE needs to determine the target market. The target definition and incidence will depend on the product being validated. Most companies define their target in terms of
demographics; however, there are other considerations for defining the target market, such as attitudes, needs and behaviors. It is acceptable to target something other than demographics, but you do need to define the people that exhibit the target attitude, need or behavior in order to reach them with media. Media today, including TV and print, still relies on demographic definitions of targets; therefore, it is imperative that the CE always delivers the demographic target, in addition to the attitude, need or behavior target. The estimated size of your market will vary greatly and is directly impacted by the innovation you are testing. A key question that the CE needs to address is: “Am I focusing on a target that has a low level of incidence in the market but is well defined, or one that has a large incidence and is less defined?” While you may be testing the viability of your innovation among a smaller, or niche, target, most companies will also test among a broader group in order to see if the innovation will have “legs” beyond the initial target. This will provide the CE with a secondary target to use for media purchasing. Performance among the primary and secondary targets should be tracked once the innovation is in market in order to measure the future potential of the innovation against the secondary target. On the other end of the spectrum is a less focused target, commonly referred to as “general population”. Many well-known brands, such as Coke and Pepsi, have general population targets as they appeal to most people in the population. Within the population, brands such as these will have multiple advertising that reaches different sub-sets of the population. In most cases, it takes many years to develop brands that appeal to the entire population, so some kind of filter will be used to target broadly appealing products, such as existing usage of in-market brands. If the CE refines an innovation that will be mass marketed in this manner, he/she will be able to gain a better understanding of the potential of a product among its most likely users – people that already purchase and use the competitive brands.
9.2.4 Competitive set
The second consideration for validation testing is the competitive set, or the products the innovation is most likely to compete with after launch. It is important to clearly define the competitive set, both short term and long term, as the CE will likely benchmark in-market performance against these brands. In addition, the CE may want to test his/her innovation directly against a key competitor during the validation testing to properly assess how the innovation may perform against competition prior to launch. In order to benchmark the in-market potential, the CE needs to compare his/her test results to internal metrics. These internal metrics, or hurdles, are estimates of how the new product may perform based on the historical performance of previously launched product innovations. These metrics can include trial rates (percentage top box purchase intent – percentage definitely buy) and repeat rates (percentage would purchase again, post-product use). Alternatively, many suppliers develop databases that contain the key measures that help predict in-market performance for all products they have ever tested; such databases are ubiquitous among companies that provide this service. The CE must carefully consider what he/she is comparing
his/her innovation to when benchmarking it against a historical database – many innovations in the database may have been tested at a time that does not mirror consumer spending trends today.
9.2.5 Sales forecast

There are two approaches the CE can take when developing a sales forecast for his/her innovation: a "top-down" and a "bottom-up" approach.
9.2.5.1 Top-down approach
In the "top-down" approach, the CE starts with the annual sales of the existing market, then applies a number of business objectives and consumer trends to the market in order to obtain an estimate of an innovation's potential. Some of the business objectives the CE must consider are as follows:
● Target market share of the innovation that is required to meet profitability expectations
● Growth of the market and consumer target over time
● General consumer demographic, attitude, need and behavior trends among the target.
This “top-down” approach to the forecast is generally referred to as the business forecast, or what the business needs to achieve to meet internal financial hurdles.
9.2.5.2 Bottom-up approach
The "bottom-up" approach is also very common; here, demand estimates are developed based on the trial, repeat and units expected to be purchased by consumers. This information is collected from validation testing. In creating a forecast using this information, the CE must interpret the survey responses from consumers carefully – many people will over- or understate their interest based on various demographic and lifestyle factors. A good research supplier will consider this when developing their forecast of an innovation. Each forecast approach is different and should be calculated regardless of the type of innovation. In most cases, a forecast that blends both answers will be used for the innovation's profit and loss (P&L) tracking and forecasting, but which number you lean toward does ultimately depend on the type of innovation.
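To make the two approaches concrete, here is a compact sketch of both calculations. Every figure, the overstatement deflation factor and the 50/50 blending weight are illustrative assumptions only; actual hurdles, deflation factors and weights come from the business and its research suppliers.

```python
# --- Top-down: start from the existing market and apply business assumptions ---
market_annual_sales = 400_000_000  # assumed annual category sales ($)
category_growth     = 0.02         # assumed annual market growth
target_market_share = 0.03         # share required to meet profitability expectations
top_down = market_annual_sales * (1 + category_growth) * target_market_share

# --- Bottom-up: build demand from trial, repeat and unit estimates in testing ---
target_households  = 20_000_000
stated_trial       = 0.35   # % "definitely/probably would buy" from the concept test
trial_deflation    = 0.70   # assumed correction for survey overstatement
repeat_rate        = 0.40   # % who would purchase again after product use
units_per_repeater = 6      # additional units per repeat buyer per year (stated)
price_per_unit     = 3.49
triers    = target_households * stated_trial * trial_deflation
repeaters = triers * repeat_rate
# Assume one trial unit per trier plus units_per_repeater for each repeat buyer.
bottom_up = (triers + repeaters * units_per_repeater) * price_per_unit

# --- Blend: the weighting is a business judgment tied to the type of innovation ---
blended = 0.5 * top_down + 0.5 * bottom_up
print(f"Top-down:  ${top_down:,.0f}")
print(f"Bottom-up: ${bottom_up:,.0f}")
print(f"Blended:   ${blended:,.0f}")
```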
9.2.6 Types of validation tests

The answers to the four important questions discussed earlier (target market, competitive set, type of innovation, sales potential) will help the CE choose which type of validation testing to use in this final step, create the right input to the test, and evaluate the results appropriately in order to make sound business recommendations.
There are three types of validation tests typically used today to achieve this goal. All methodologies are designed to measure the concept-product fit. The CE should choose the method that best meets the business needs:
(1) CLT: Central location test
(2) HUT: Home-use test
(3) Test market: Small-scale, in-market launch.
9.2.7 Central location test

The process for a CLT starts with recruiting consumers via phone, an existing panel or online and asking them to visit a facility and be in front of a computer to take a test. Many times consumers are pre-screened with a concept statement to ensure that those who show up at the facility are already positive toward the core message of the new product prior to evaluating the concept as a whole. The advantage of pre-screening consumers based on a concept statement is that it minimizes the number of consumers who arrive at the facility negative toward the new product, therefore saving time and money for the CE. Once consumers are at the facility and in front of a computer, they are presented with a concept to read, either on paper or on the computer screen, and fill out a survey addressing key questions similar to those addressed in Chapter 6. Once they answer the survey, and state that they would consider purchasing the new product in the future, they continue to the product evaluation portion of the test. The definition of "would consider purchasing" is set by the CE and typically is when a respondent states they "definitely would buy" or "probably would buy" the product. Sometimes people who say "might or might not buy" are provided with a product to evaluate, as the concept may not help them visualize their behavior or anticipated use. This is typically the case with new products that are disruptive – often referred to as "new to the world" – or when an existing brand enters an entirely new category which may be unexpected to consumers and change their frame of reference for the brand. One example of an unexpected brand extension that may have caused confusion is Baileys Creamers for coffee. Yes, many people have Irish coffee after an evening meal made with Baileys, but having a brand that is strongly rooted in alcohol enter a morning coffee occasion in a non-alcoholic form and deliver the taste expectation of an Irish coffee may cause a bit of hesitation among consumers when reading a concept. In this case, having those people who state "might or might not buy" also evaluate the product would allow a more accurate read of the product's potential sales. The next step in the test is the product evaluation. This is not considered a "blind" product test, as consumers have been educated about the product they are about to try in the concept phase. The goal in the product evaluation is to understand if the product is delivering on consumers' expectations set by the concept, and if not, how the communication or product can be improved to align with each other prior to launch. In order to evaluate the product, consumers are provided with samples of that product to try. The samples are usually smaller than an actual serving size, which allows the CE to minimize the amount of product manufactured for the
test and perhaps run multiple tests simultaneously. The samples are usually transferred to secondary packaging, and the respondent will not see what the package will look like, except on the concept they evaluated. Once they try the samples, a survey follows. Many of the questions in this phase of the test are similar to the concept phase, with a few modifications:
● Asking whether the product met their expectations is added
● Understanding whether the consumer would buy the product again and how often, after they have tasted it, and
● Evaluating the product performance on several key attributes, including taste, value for money and fit with brand, as examples.
Capturing the key measures described in Chapter 6 and comparing the responses with the post-product evaluation allows the CE to understand if the product will meet, exceed or under-perform in-market. If the product is expected to under-perform, the product attributes will allow the CE to make recommendations for modifying the product prior to launch to improve its chance of success.
The CLT is a very popular test because:
● It provides results faster and cheaper than a HUT
● It does not require as much product for the evaluation portion of the test
● The product is in the control of the facility throughout the test, which is especially important in categories where ingredients are sensitive to temperature.

9.2.8 Home-use test

The process of the HUT is very similar to a CLT, but with one important difference: it is executed in a home-use environment, not in a test facility. Executing the test in a consumer's home allows the CE to better simulate how the consumer would be exposed to and use the new product after launch. The downside is less control of the environment and behaviors of the consumers, removing the more "scientific" nature of CLTs. Some of the considerations a CE should think about prior to choosing a HUT over a CLT include:
● Distraction when taking the survey
● People responding to surveys who live in the house and did not qualify for the test
● Misuse or mishandling of product.
Similar to a CLT, consumers are recruited either by phone, existing panel or Internet. Those consumers that qualify for the test are sent a link to a survey that contains the concept portion of the test. They read the concept on their computer screen and answer a survey to address questions specific to the concept they just read. Similar to a CLT, they will proceed to the product phase of the test if they show the proper amount of interest in the concept. Instead of receiving the product immediately, product is mailed home to the consumers that are interested in the concept. In many cases, a copy of the concept is
shipped with the product to ensure they remember the communication provided to them when they reviewed the concept on their computer a few days earlier. In markets which do not have Internet panels or have limited access to Internet recruiting, door-to-door interviewing and drop-offs are still common. Unlike a CLT, the HUT product is shipped in full serving-sized packages. For single-serve products meant to be consumed on multiple occasions, consumers will receive two to four samples of a product to try over the course of a couple of days. Over the past ten years, the number of product samples has decreased. It is important that the CE works with his/her team to determine the appropriate number of servings that will mirror in-market behavior and balance it with available production levels. While the product may be shipped in the final package structure, the package graphics are usually not printed on these samples. The CE should take into consideration that the package experience will not mirror the in-market launch when interpreting the product survey results. The CE should also consider compensating for the incomplete package by showing printed visuals of the final packaging so respondents can get a better feel for what the product will look like when it is launched. When the results from the test are complete, the CE should analyze them similarly to a CLT but take into account the in-home environment and product differences when interpreting and recommending a path for action to the business team. The HUT is also a very popular test for its in-home testing environment, which more closely mirrors how consumers will actually consume the product.
9.2.9 Test market: Small-scale, in-market launch

Despite the benefits of the in-home environment for testing, the HUT design does not accurately mirror how consumers will actually shop and use the products, and has been shown to provide inaccurate repeat purchase estimates, given the short duration of usage and the number of products provided for testing. In cases where innovations are more disruptive than incremental, benchmarks do not exist for properly estimating interest and long-term usage. In these cases, a test market is more appropriate for evaluating and forecasting potential sales. Traditionally, test markets require quite a large investment of money, as they are completed by actually launching the product into a specific region of a country. In this case, the business treats the test market as a "rolling launch", meaning they invest just as if they were launching nationally, which includes full execution of advertising and in-store promotions. Once they receive an appropriate amount of data that meets their internal business assumptions, they will push the innovation into the rest of the country. Sometimes the time between the first market and the national launch can be months; in other cases it can be years. Since many businesses do not have the resources for a test market, there is an alternative option that can provide a bridge between the HUT and the test market. This alternative solution can use the quantitative anthropology (QA) method discussed in Chapter 6.5. By recruiting and providing enough sample for consumers to use over a month or two or more, and determining the right
metrics to be able to predict demand, QA can provide a more realistic estimate of repeat usage over an extended period of time, as well as provide the CE with ongoing information that will allow the business team to make modifications to the innovation prior to a national launch. Instead of focusing on product-concept fit, the CE can focus on product-consumer fit, allowing him/her to understand how the innovation fits into people’s lives, rather than fitting against messaging. In this case, a simple concept test can be a substitute for message evaluation. Finally, the CE can understand the innovation’s real competitive set at usage, in order to do proper benchmarking for the business and provide more ways to extend the innovation into a true cross category product platform.
9.2.10 Metrics for success

The final piece the CE needs to determine in validation testing is the set of metrics that will measure the performance of the innovation during year 1 of its life in-market, and beyond. Like designing and evaluating the validation testing, the metrics are directly linked to the type of innovation the CE is launching. The key performance indicators, or KPIs, for incremental innovations can be based on perception-based data, as most CEs will be able to compare this information to behavior-based metrics that have already been collected for the competition and will be able to develop models to translate perception to behavior. For disruptive innovations, the in-market benchmarks will not exist, so the CE needs to develop behavioral data through a test market, a quantitative anthropology approach, or the first couple of months of the innovation's launch. This will require the CE to be more creative in determining what to measure and how to capture the metrics (Figure 9.2.1).
[Figure 9.2.1 content: KPI ideas by consumer response, contrasting incremental innovation (survey metrics) with disruptive innovation (behavior metrics).]
● Product knowledge. Survey metric: Awareness: Which of these brands have you heard of before today? Behavior metric: Word of mouth or buzz.
● Acquire product. Survey metric: Trial: How likely are you to purchase this product? Behavior metric: Call to get trial or participate in test.
● Willingness to put money on the table. Survey metric: Repeat: (Now that you have tried the product) how likely are you to purchase the product again? Behavior metric: Pay for additional product, even if at a reduced price.
● Multiple uses. Survey metric: Units: How many (bottles, packages, etc.) of this product would you purchase? Behavior metric: Different ways they use the same product.
Figure 9.2.1 Ideas for KPIs based on the type of innovation.
Part III
Words of the Wise
"If opportunity doesn't knock, build a door." Milton Berle
This chapter proposes a teachable model that combines team creativity with the personal leadership of a Consumer Explorer who believes that innovation should be fun to the core. Through real-life experiences, the author provides principles and a new management model that will allow easy assimilation of consumer-centric innovation into the existing tradition and culture in every corporation. The new model puts consumers and product developers at the center of innovation efforts.
Chapter 10
Putting It All Together: Building and Managing Consumer-Centric Innovation

Michael Murphy
Key learnings
✓ Consumer research as the next killer app
✓ Insights into one of the most successful product launches
✓ Turning failure into innovation breakthroughs
In marketing, managers and project leaders throw around phrases like breaking paradigms, thinking out of the box and pushing the envelope, without taking into account how corporate culture can actually limit the application of true change. Sadly, the vast majority of new products are going to under-perform and/or outright fail in the marketplace. Yet, the methods used to bring this high percentage of failure to the marketplace do not change very much. In the first chapters of this book, the consumer insight leader and strategic innovator is referred to as a Consumer Explorer. As a Consumer Explorer in high-tech and consumer packaged goods, I argue for a teachable model that combines team creativity with personal leadership at the consumer researcher level. I also strongly believe innovation should be fun to the core. After working in innovation at HJ Heinz for five years, I joined the Hershey Company in the midst of a cultural shift in marketing where the corporation began thinking about new products not as a change of packaging or addition of ingredients to an existing product, but more as a response to consumer needs in an increasingly competitive business world.
Organizations like Hershey's just can't throw anything out there. They've got to have a good idea whether or not it would be successful in the market. They know that low-quality innovation dilutes the power of their brands. They need to make sure that what they're bringing to the market is truly innovative and brings incremental benefit to the consumer. It can't be "me too". You're going to spend too much time and money to bring something that's "me too" into the marketplace and ultimately that's not how you create long-term meaningful relationships with consumers. Companies can invest up to two years or more and millions of dollars bringing a product from inception to shelf. The investments in time and money have to pay out. But, so many of them do not. Along the way an abundance of methods for gaining deep consumer insights and creating new products are used. All of them promise better ideas. Not all of them deliver. Why? We spend a lot of time waiting for innovation processes to deliver new ideas. In reality, great ideas don't come solely from a process. They come from people. The new model puts people at the center of our innovation efforts – consumers and product developers. There are five key points you're going to need to hold onto if you're going to thrive in this model:
(1) Researchers becoming breakthrough facilitators
(2) Transformational team experiences
(3) Building stronger teams
(4) Making teams product evangelists through failure
(5) Avoiding product feature dilution.
Weaving through these elements will be the story of Hershey’s drops, a product that is looking to break $80 million in gross sales during the first year of release, in a category where $20 million launches are de rigueur (Figure 10.1). But first, you have to embrace that the process now and in the future will hinge on the Consumer Explorer and his or her skills at team leadership.
10.1 Researchers becoming breakthrough facilitators: The stairway to heaven
"Do not try and bend the spoon. That's impossible. Instead, only try to realize the truth … it is not the spoon that bends, it is only yourself." The Oracle, The Matrix

The Consumer Explorer is the next most important component of a manufacturing or any kind of organization that deals with a consumer. Consumer research is the next sort of killer app in the organization, and here's why. Organizations are like big black boxes. You put some inputs in them and they spit out some outputs. Typically those inputs are financial market information and commodity prices and all these other factors. Put them in, digest them and then on the back end you get a product that goes on the shelf, right?
Figure 10.1 Hershey’s drops, a highly successful launch in Hershey’s history as a result of putting consumers in the center of product innovation.
As consumer products and services organizations continue to shift focus toward the consumer, the Consumer Explorer has this really unique position in the organization where they can craft the inputs that go into that box. They can also get people to look in places they haven’t seen before and get them to consider things they’ve never considered before. It’s a double-edged sword because you can also keep people from looking at things that might be relevant. There’s a huge responsibility for Consumer Explorers to be ethical, thorough and smart about what they do. But, as organizations continue to move to more customer/consumer-centric approaches, guess who all the power goes to? For this next part I’d like you to imagine I’m playing a little bluegrass music on my guitar. The guitar and music part of my life helps me deal with the dry side of the business. And there is a major dryness when it comes to baseline competencies for consumer researchers. Statistics, for example. It has been quantitative skills that have defined the best researchers and the application and interpretation of quantitative data has been the hallmark of a market researcher. That’s not going to change. Statistics knowledge alone will not make a great Consumer Explorer in the future of consumer research. However, it is one of the table stakes, or the equivalent of a high school diploma. The undergraduate degree for the new Consumer Explorer is the understanding and use of human psychology and sociological principles to achieve a deeper and more thorough comprehension of what’s being observed out in the marketplace. Learning the complexities of decision making, and how people respond to stimuli on the most fundamental levels is critical. Right now, out there
somewhere, is some poor soul lying in an MRI machine being shown pictures of products and other things while a researcher watches and documents which part of the subject’s brain lights up. What’s the graduate degree for the Consumer Explorer? Consumer Explorers will need to bring techniques to the table that bring teams together. Those techniques aren’t limited to finding the most-innovative qualitative and quantitative techniques. Consumer Explorers must also bring techniques to the game that get more productivity out of the problem solving task that teams have. We’re collectively going to have to get better at this if we are to fulfill the promise of greater impact from consumer insights.
10.2 Transformational team experiences 1: Where we observe comedians get naked

"College isn't the place to go for ideas." Helen Keller

We need to participate. That means getting out from behind the glass wall. Observational techniques have been around forever. For years we've been doing our email and eating candy in focus group facilities. We've occasionally gone to people's homes and looked through their cupboards. (And all with very few restraining orders being filed against us!) But, how do you get to the core of a consumer's lifestyle? For our work on Hershey's drops, we wanted to see how someone with a healthy lifestyle actually achieves those goals. When we talked to those people, we found that keeping a healthy lifestyle was a constant effort for them. It was always very top of mind and they personally identified that if they didn't keep their head in the game, they'd slack off. With health and lifestyle choices like that the problem is that you have to keep doing it over and over and over again in order to be successful in the long term. We tried to find analogies where this pushing yourself out to the edge of your comfort zone and staying there long term was an important aspect for success. So we started saying, "What other human experiences are out there where you have to sort of push yourself to the edge and stay there?" That's how we ended up with stand-up comics. We actually had people interviewing comics because they had to go out and handle stage fright by themselves. They don't even have a team. And they fail often. They're constantly bombing and they get bad feedback but they have to keep doing it over and over and over again. They told us about how they put up a shield around them. It was a mental shield that keeps them from succumbing to their own doubt and unkind words from others. That helped us make a leap to understand that consumers trying to achieve a healthier lifestyle must always have better-for-you options available and at hand.
I always try and find comparable or similar experiences that aren’t completely related or are not related to the product. You can learn things from how people talk about it or when you hear about it in a different context that isn’t about the type of product that you’re working on. More importantly, you get generalized behavioral feedback that can be applied across many projects.
10.3 Transformational team experiences 2: Why everybody who works for me will someday be wearing women's underwear (or the "why we're always hiring" model)

"Men want the same thing from their underwear that they want from women: A little bit of support, and a little bit of freedom." Jerry Seinfeld
Beckley_c10.indd 332
I often think about this in my head: what if I worked on a project where we were developing women’s underwear. I would have everybody on the team wearing women’s underwear, whether they were men or women or otherwise. That last point is intended to be controversial, but somewhat real. Participation and first-hand experience with the consumer’s problems is the key. Yes, it seems to me that we often miss out on developing good, old-fashioned empathy for our consumers. Everyone working on the project needs to understand the consumer’s point of view first-hand. And sometimes, that empathy comes at the price of some discomfort for everyone involved. Again, referring to my exploration into how consumers achieve healthier lifestyles, we were trying to figure out how people who are trying to be healthy make decisions on what they buy or what they eat. Normally we would take people into a focus group and ask them: “How do you decide between this thing and that thing?” Or: “How do you do this?” I would have gotten the most rational answers possible, carefully worded by each respondent to put themselves in the best possible light in front of the other respondents. I didn’t do that. I found a health nut – someone who eats well all the time, exercises four or five times a week. I took her and a couple of my team members to a big-box restaurant that serves perhaps some of the unhealthiest food in the nation. I plopped our newly found healthy friend down and asked her to order food for herself off the menu – a veritable sea of deep fried and cheesy stuff. Sometimes, the cheese itself is deep-fried for good measure. We listened carefully as she talked us through the options. Then, I asked her to order for the entire table. I gave each person at the table a sort of health and wellness-related type of criteria that our respondent had to solve for and made her order the entire dinner. What was really fun was watching her squirm, watching her be uncomfortable and seeing how she reacted to this problem. We asked her to talk through the decision-making process on the menu. Then, we compared what she actually ordered with what she really wanted to order off the menu, if she could have anything she wanted and wellness were not
1/31/2012 7:07:38 PM
Putting It All Together
333
a criteria. I could see her eyes get big from time to time as she paged through the food offerings. That’s when I knew we would get good information from her. I like to have teams work with consumers on problems. Give them all problems, dilemmas or new situations. Then, sit back and watch what happens. To create a new situation for teams to interact with consumers, I hold “speed dating” sessions. This is like a focus group turned upside down and inside out. I’ll rent out a coffee shop, or a small restaurant, and bring six or seven different team members from different functional areas. I assign each of my team members to one specific question or information need. Each person should have one narrow area of focus. So, if there are eight team members, I bring in eight consumers. I put each consumer one-on-one with a team member: R&D, packaging engineer, marketer and other cross-functional team members. The key is to use people that are going to touch a product throughout the development process. At first, there’s a moment of uncomfortable eye contact. (Always wait until everyone is slightly uncomfortable because, quite honestly, you deserve to get some personal enjoyment out of this as well!) Then, my team members ask their question. They start digging deeper and trying to understand that one question. I ring the bell again and the consumers all rotate to talk to a different team member. That R&D person on my team keeps asking that exact same question but keeps hearing different perspectives. That helps them become a real knowledge expert in that one specific aspect of the problem. And then that person’s job is to summarize it themselves and teach it back to the rest of the team. It works on a couple of levels. First, it helps people become an expert in one thing. Folks appreciate the opportunity to develop expertise. Second, it draws on the adage: “to teach is to learn”. My teams have to internalize it in a way that they can express it to other people in a room. Third, it puts their own skin in the game. It gets them involved in it, gets them to know the consumer and it gives them a little bit of passion around some aspect of the consumer behavior. This will also be important when later we start talking about building effective teams and creating product evangelists.
10.4 Building stronger teams 1: Forming the group

"No member of a crew is praised for the rugged individuality of his rowing." Ralph Waldo Emerson
With the old model, team members get together to understand what needs to be done inside of each of their individual functional areas and then they go back to their desk. They then solve the problem and come back with a solution. Or worse, marketing works with R&D first to create the product, then involves packaging, graphic design, sensory and other teams much later in the process. Does this sound like a great way to get breakthroughs to you? My preference is to “form” the group and for it to solve problems together at the exact same time. I always try to bring people together in ways trivial and substantive to solve problems together or work together in ways that they haven’t previously done in the past.
One fun way to do this is to get them to play a game or participate in a competitive, but non-threatening activity. I often choose flip-cup (a non-alcoholic version of an old college fraternity game) as an example. Two teams face off with rows of plastic cups placed upside-down in front of each person. The lip of the cup should be slightly over the edge of the table. The goal is for the first team member to flip their cup upright with their finger and then the second member can take their turn. Eventually one team has flipped all their cups successfully and is declared the winner. The point is not to prove who used their undergraduate college years most wisely. The game gets people laughing and smiling with each other and highfiving. All of a sudden, you’ve got a team. Gone are the disparate cross-functional members. You’ve got a team, and that’s key. You’ve got to gel them. Psychologists call this group formation or forming. Creation and reinforcement of this in-group is incredibly important. The make-up of this newly bonded team is also very critical. These days you should include people from across the product-development spectrum. You might have operations, R&D person, a food scientist, a creative and a packaging engineer all in the room together. Each one of them comes to the table with a unique educational background; with a unique way of thinking about products that is specific to their world. So, a packaging engineer and a food scientist walk into a store (I know, old joke) and they walk down the aisle together. The packaging engineer sees rows of corrugate densities and aqueous coatings and embossing and the number of colors and separations and the closure method and the dye cuts. The R&D scientist is thinking about reactions, process times, churn versus agitate, retort or aseptic. Truthfully, it’s not their fault. They’re wired that way. There are literally thousands of things particular to their specialty going through their heads at that moment. It’s this mental focus on their specialty that also keeps them from thinking holistically about a new product. And, when all of the elements of a product work together to signal to the consumer that the intended benefit is being delivered, we are closer to delivering the breakthroughs we want. But, to be effective as team members, they have to be able to communicate that intricate knowledge with each other. I try to create a mental model or common language for the team to talk about the new product. In my projects, I ask the teams to think about product attributes as belonging to one of three categories: (1) Must-haves (2) Optimizing (3) Delighters. Some folks reading this may notice my categories are extremely similar to Kano analysis categories. That’s no mistake. Dr Kano proposed a simple, yet powerful, way to think about what a product means to a consumer. (A detailed discussion of the Kano satisfaction model can be found in Chapter 7.1). However, my goal is
not to quantify attributes in each of these categories, but to create a thorough debate among the team members about the DNA of their new product. I force people to have ideas and criticisms and comments about things outside of their area. So if John Smith is a packaging engineer, I want the packaging engineer commenting on the flavor of the product. Then I want the R&D person, who normally comes up with the flavor of the product or the product form, commenting about the imagery that we’re going to use on the package. And I want the graphic designer to comment … you get the idea. Interaction equals comfort, shared knowledge and perspective. A graphic designer can then comment on food science topics in a way that both allows him/ her to be intelligent but allows him/her not to step on the toes of the food scientist. It seems really simple. But how do you do it? By failing together, of course.
10.5 Building stronger teams 2: Failure equals ownership (or the "you break it, you buy it" model)

"I didn't fail the test, I just found 100 ways to do it wrong." Benjamin Franklin

Failure is a principle that I constantly reinforce to the group. It's OK to sort of fail. And when I build systems or techniques, I make sure that failure is built into the process. People pay a lot of lip service to this whole thing: "Well, you only learn from your mistakes and you have to fail early, blah, blah, blah". But how many build processes and incentives around failing? Nobody will do anything that they're not incentivized to do, and so risk-taking and failure are avoided at all costs. So if you design a methodology – like I've been talking about the main example here, which is the Hershey's drops – the whole thing should be designed to allow people to come up with tons and tons of different ideas with the explicit understanding that we're going to lose a lot of them. That is the entire point: to cut a ton of them. And all of a sudden, nobody treats any one idea like it's their baby. You know, you never want to tell somebody their baby is ugly, and nobody wants to let go of their baby. You can't have that. Let's talk about music again for a moment. Many bands and musicians practice playing a song over and over so they can play it the same way every time. I've always felt with my band the idea is to rehearse a song to find possibilities within the structure. I rehearse to intentionally make mistakes and discover how to improvise around them. An average musician can play a song the same way every time. Experts can improvise around a structure and turn mistakes into something that actually sounds enlightened. The same thing goes for what I've tried to do in any innovation or product development type of project. Take the way a product's packaging sounds when it closes. The team would take that attribute and try to articulate multiple ways a package could open and close and the way it would sound and feel when you opened and closed it. We then repeat this process for nearly every aspect of a product.
The exercise encourages open thinking; we will end up with somewhere between 150 and 200 different iterations of product attributes. Consumers join us and help us sort through the attributes. I have the teams listen for where we see passion versus indifference. We discuss and think about the meaning of the consumer feedback. Not all of the ideas are going to get into the product. By design, you’re only going to get a couple of these things into the product. Ultimately, each aspect of the product is only going to see one or two specific ideas come into play. From the beginning of the process the entire group realizes only about ten percent of the original ideas. By simple probability, almost everyone working on the project is going to have some idea of theirs thrown out and that creates a team dynamic that isn’t afraid of failing any one particular idea. Everybody owns every aspect of it and so when the finance guy gets a look at the thing and says: “Hey, I want to pull this ingredient out of there because it’s going to save us some money,” everybody on the team – every last person – is going to shout and say: “No, you can’t do that. That’s got to be there.” Create strength in numbers as a design team. They’ve developed it together and no one single person owns that one piece. It’s easy to gang up on one person, get rid of their idea. Congratulations, you’ve created a group of product evangelists.
10.6
Avoiding product feature dilution: The barrier to breaking through

"Clothes make the man. Naked people have little or no influence on society." Mark Twain
So many products get cost-optimized before they ever get to the shelf in the first place, which does not make sound business sense. Organizations will attempt to derive as much margin from a product at the very beginning, which I believe is a mistake. You need to invest in the product, and one of the ways we need to invest in new products is in a little bit of margin loss. Here's the reason why: the extra cost of making this thing stick and stay on shelf is recouped by gaining repeat.

Greed for margin will cause companies to pull down product quality entirely too early. It might have been a great idea, but by the time it ends up on the shelf, it is sub-par on product quality and short on the attributes that delight and differentiate. Then you don't get the repeat intent that you wanted in the first place. Make sure the product you put on the shelf is the real deal. Give it every weapon it needs to have the best fighting chance at becoming a part of the consumer's product repertoire. Once it does, start looking for ways to derive more margin through scale and optimization.

One of the first things that gets optimized too early is packaging. We tend to defer to the cheapest, highest-scale packaging we can apply to our products. Pulling relevant features out of packaging is not smart: products can't live without the packaging. It's like saying that you can live without your skin. People think that it's just fancy clothing but it's not. Packaging is the skin of the
product and the first visual cue to the consumer. Nonetheless, we drive out every bit of cost and the result is a product that doesn't fully meet the consumer proposition. I try to account for this in our team design techniques. I force people to make product feature choices based on whether components or parts of the idea are going to be more expensive to manufacture or not. But we still have to build in elements that delight, while meeting product minimum requirements. It's a new art form for the team to take up. Part of that paintbrush is designing an exercise or a way for the team to come together and make sure they're taking that margin into consideration when they create new ideas.

Back to Hershey's drops … We heavily scrutinized the product design. For instance, we hypothesized that if it was not shiny enough, consumers wouldn't believe it was going to be a neat, clean product experience. We took into account the packaging and the movement of the drops and the way the font looks on the front, among other things. No aspect was left to chance. What's more, we treated the product as a whole from the beginning. For so many projects, you have the idea, you create a product, you evaluate the product and then you send the entire thing over to the design agency and they try and create a package around it. That's not the way we approached it. I would argue that in many cases the packaging is actually even more important than the product itself, and our research has shown it can be.

When we designed the product and package simultaneously, we found the consumer wanted more features (such as a re-closable, stand-up pouch) in order to fully realize the product benefit. We also knew that we would have to prove to the organization that it was worthwhile. So we made two prototype packages with the same product inside: one in the high-scale packaging used as standard across many brands in the category, the other in the re-closable pouch we thought the consumer wanted. When we put them in front of respondents in a central location methodology, consumers told us that the product in the re-closable package tasted better and had better repeat potential. Yet they ate the same thing! The only difference was the packaging and the imagery that they were getting from it. The visual stimuli had a far stronger effect than the taste stimuli.
10.7
Researchers becoming breakthrough facilitators: A reprise

"Leadership and learning are indispensable to each other." John F. Kennedy

You want to be an effective leader and bring a product to market with a minimum of compromise and a maximum of success. You have also now formed a new type of team from a group with vastly different types of expertise. Oh, and you want them to respect you, as well. Good luck! I don't know any other way to solve it other than being a cheerleader and having an infectious personality. You have to get up in front of them and you've
got to entertain them a little bit. You have to be a cheerleader. There are a few keys to doing this successfully.

First, you must develop a curiosity about a lot of different areas. You need to be constantly trying to understand more about a lot of different topics, both technical and non-technical. I've always tried to dig into the sciences and liberal arts as much as possible. It has nothing to do with acquiring any particular skill. It has everything to do with having empathy for the challenges your team faces. You have to be seen as conversant to be a smart leader. Not an expert, but try to be considered unthreateningly conversant in a lot of different areas. It is important to have a substantive but non-threatening conversation with an R&D technologist about what they do to be seen as collaborative. That might mean you need to brush up on your technical skills like chemistry and manufacturing technology. (Suggestion: dig out your old textbooks!) You might need to speak the language of illustration and graphic design with a graphic designer. (Suggestions: pick up Communication Arts magazine or attend a press check with your packaging team.)

Second, you may need to overcome the top fear for most people: stage fright. I get butterflies every time I get in front of a group. To address this, I actually choose hobbies that put me on stages in front of people, like music. You actually have to start enjoying being in front of people in a team setting and not just be a passive participant but an actual leader in a group.

Third, you must be seen as a leader, regardless of your title in the organization. I grew up engaged in experiences like Boy Scouts, marching band and newspaper in high school. In those, I sought out leadership positions and learned what it's like to lead a group in a collaborative way. My own personal path includes student director of the marching band, editor of a school newspaper and president of the fraternity (where I learned to play a mean game of flip-cup, of course). Those were important experiences that allowed me to grow comfortable in this kind of a role. This process is going to be different for everyone. There's no single prescription for this. If you haven't had those experiences, consider pushing yourself outside of your comfort zone and start seeking these experiences out. It's only going to make you better. Volunteer to lead a community group or a church group. Volunteer to get involved and lead some of the more social or diversity groups in your organization. There are usually affinity groups at your workplace: women's affinity groups, African American groups, and so on. You can get involved there.

But don't just get involved; strive to become a cheerleader. Remember, you have to rally the troops. You have to be a personality. You have to not be afraid of being a cheerleader and getting people excited about it and personally having a very upbeat attitude toward what you're doing. You personally have to have a sense of enthusiasm about going after this problem so that other people can feel good about joining you.
10.8
Summary and future

Corporate culture doesn't change from the top down. There's only so much any one executive can do that is going to substantively change the tone of what's
going on. If change and innovation are being called for, it starts with the folks working on the project creating the change. Often, however, people are not incentivized correctly to think outside the box. That permission – to experiment, play, be open-minded, try new things, and dream up breakthroughs – has to come from somewhere else. The Consumer Explorer of tomorrow is in a unique position to give teams that permission. At a very specific point in time, the Consumer Explorer often really is in charge. So if there's ever one of those unique opportunities to lead from the bottom up, this is one of them.

The goal is to move to this new place in consumer research where Consumer Explorers have a bigger hand in both designing new ways to interact with consumers and also solving and understanding consumer problems. To do this, the new researcher's skill set must now include being able to create opportunities for people of varying backgrounds, educational levels and types, and personality types to come together and solve problems. The skill set must also include the ability to excite the team about the challenges that lie ahead. The research role is going to evolve and you're going to play a bigger part in that.

The model works and you will end up with innovative ideas and new products that fully articulate the consumer benefit. In fact, you may start to have fully developed products that are scoring off the charts – but you don't launch them all, because you can't support that many good ideas in the marketplace at once. That's the situation we ended up with at Hershey's. Now that's an innovation problem we should all enjoy having!
11 “If you don’t seek out allies and helpers, then you will be isolated and weak.” Sun Tzu, The Art of War
This chapter will provide guidelines and practical tips in working with multifunctional teams and leveraging external research agencies and technical experts.
11
Product Innovation Toolbox: A Field Guide to Consumer Understanding and Research, First Edition. Edited by Jacqueline Beckley, Dulce Paredes and Kannapon Lopetcharat. © 2012 John Wiley & Sons, Inc. Published 2012 by John Wiley & Sons, Inc.
341
Beckley_c11.indd 341
2/7/2012 9:01:22 PM
Chapter 11
Words of the Wise: The Roles of Experts, Statisticians and Strategic Research Partners 11.1
Above Averages: Use of Statistics, Design of Experiment and Product Innovation Applications Frank Rossi
Key learnings
✓ The statistical design of experiments approach
✓ When and where it can be applied in product innovation studies
✓ The statistician as a valuable member of product innovation teams
Many of you are familiar with product optimization studies where different sets of variables are studied to identify optimal product performance. Published examples of product optimization studies include Griffin and Stauffer (1990) and Moskowitz (1987). In these studies, ingredient levels and/or processing conditions (factors) are systematically varied according to a specific experimental design. Often, experimenters vary just one factor, identify the optimal level and repeat with the remaining factors. This experimental approach is less desirable as it fails to account for the potential interacting effects of the
Table 11.1.1 A design created for the cream level range of 1.75–6.73% and a sugar level range of 5.03–20.53%.

Product  Cream %  Sugar %  Liking
1        4.24     12.78    6.73
2        4.24     20.53    7.20
3        4.24     5.03     4.90
4        2.48     18.26    7.21
5        6.00     18.26    6.82
6        2.48     7.30     5.47
7        4.24     12.78    6.71
8        6.00     7.30     5.86
9        6.73     12.78    6.62
10       1.75     12.78    6.64
11       4.24     12.78    6.78
Figure 11.1.1 The design space for the smoothie, which is in this case a circle. Note how the design points are equidistant from each other on the circle and the center of the circle. [Plot of the 11 design points on the cream % versus sugar % plane; predictions are best within the circle.]
factors, resulting in missed global optimums. It also often requires more experimental runs. For example, in the development of a new cream-based smoothie, factors in the experimental design could be cream level and sugar level. Developers identify ranges for each of these factors, and an experimental design is used to determine prototype products to be created and tested with consumers (Table 11.1.1). An important aspect of this experimental design is that the prototypes define a space and the resulting performance scores will be used to create a mathematical model that can be used to predict performance within the space (Figure 11.1.1).
Figure 11.1.2 A contour plot of the modeling results, with levels of liking depicted as curves. [Contour plot titled "Liking vs. cream and sugar", cream % versus sugar %; predictions are best within the circle.]
The consumer research study at this stage could be conducted in a number of ways. Consumers could evaluate all 11 prototypes across a number of sessions or could evaluate a sub-set of the products. The questionnaire could ask a liking question only or a number of questions in addition to liking. The specifics will depend on the research objectives and aspects particular to the product category, such as fatigue and carryover effects. Additionally, consumers will evaluate products in different orders to minimize the effect of position bias across the study.

For the smoothie study, consumers evaluated all prototypes and the average liking scores were computed. These became the responses in the mathematical modeling (Figure 11.1.2). Predictions outside the circle are not considered reliable since no data was collected beyond the space. The northwestern part of the circle is the area with the highest liking, indicating that higher levels of sugar with lower levels of cream are most liked. Cost information could be overlaid to cost-optimize well-liked sugar and cream combinations. Note that the predicted optimum sugar and cream level combinations may be ones that have never been created or tested and would need some type of verification testing. This plot can be a powerful tool in identifying the "sweet spots" where the ideal level of several responses can be achieved.
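To make the modeling step concrete, here is a minimal sketch, assuming Python and NumPy (neither is used in the chapter), that fits a full second-order response surface to the Table 11.1.1 data and searches for the best-predicted cream and sugar combination inside the circular design region. In a real project the model would be fit and diagnosed in dedicated statistical or DOE software; the variable names and the grid search below are illustrative only.

import numpy as np

# Cream %, sugar % and mean liking from Table 11.1.1 (a central composite design).
cream  = np.array([4.24, 4.24, 4.24, 2.48, 6.00, 2.48, 4.24, 6.00, 6.73, 1.75, 4.24])
sugar  = np.array([12.78, 20.53, 5.03, 18.26, 18.26, 7.30, 12.78, 7.30, 12.78, 12.78, 12.78])
liking = np.array([6.73, 7.20, 4.90, 7.21, 6.82, 5.47, 6.71, 5.86, 6.62, 6.64, 6.78])

# Full second-order (quadratic) model:
# liking ~ b0 + b1*cream + b2*sugar + b3*cream^2 + b4*sugar^2 + b5*cream*sugar
X = np.column_stack([np.ones_like(cream), cream, sugar,
                     cream**2, sugar**2, cream * sugar])
coeffs, *_ = np.linalg.lstsq(X, liking, rcond=None)

def predict(c, s):
    return (coeffs[0] + coeffs[1]*c + coeffs[2]*s +
            coeffs[3]*c**2 + coeffs[4]*s**2 + coeffs[5]*c*s)

# Search a grid for the best combination, but only inside the circular design
# region centred on the centre point, since predictions outside it are unreliable.
cc, ss = np.meshgrid(np.linspace(1.75, 6.73, 200), np.linspace(5.03, 20.53, 200))
radius = np.sqrt(((cc - 4.24) / (6.73 - 4.24))**2 + ((ss - 12.78) / (20.53 - 12.78))**2)
pred = np.where(radius <= 1.0, predict(cc, ss), -np.inf)
best = np.unravel_index(np.argmax(pred), pred.shape)
print(f"Predicted optimum: cream {cc[best]:.2f}%, sugar {ss[best]:.2f}%, liking {pred[best]:.2f}")

Run on the table data, the predicted optimum lands in the high-sugar, lower-cream portion of the circle, consistent with the contour plot described above; any such predicted optimum would still need verification testing.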
Statistical experimental designs fall into several categories, whose use depends on a specific experimental objective. They can be categorized into four key types:

(1) Screening designs are used to collect data to quantify the effect of factors, with a goal of either reducing the number of factors under investigation or identifying the factor ranges for further study. For example, many starch and gum types could be studied to see which ones are most promising in making a low-fat cheese sauce perform most like a full-fat sauce.
(2) Response surface designs are used to collect data so that a detailed model of the space defined by the factor ranges can be developed. The smoothie optimization study described above is an example of a response surface design.
(3) Mixture designs are special forms of screening or response surface designs where the factor levels are constrained to a sum total. For example, in developing a fruit juice blend, the ratios of three fruit juices (cherry, apple and berry) are varied but will always sum to 100%.
(4) Robust (Taguchi) designs are used to determine the factor settings that reduce end-product variability due to difficult- or impossible-to-control noise variation. An example would be an experiment to determine ingredient levels that make a dry-mix pudding's thickness robust to the fat level of the milk used in its preparation (skim, 2% or whole milk).

The factors and factor levels for each of these design types are dictated by the specific research objectives in the innovation process and the nature of the factors studied at this stage.

The smoothie example demonstrates the statistical experimental design approach applied to consumer research late in the innovation process. Product optimization studies are also used in research with existing products. But the statistical experimental design approach can be used much earlier in the innovation process, and most quantitative consumer studies can benefit from it.

An example of the statistical experimental design approach used earlier in the innovation process is a conjoint study. Factors in the statistical design are elements of a product concept or features of a product, and consumers evaluate concepts made up of factorial combinations of those elements. In the development of a concept for mini muffins, three factors under consideration could be the dome height, surface texture cracks and serving size (Table 11.1.2). In the conjoint study, the consumers typically evaluate visual depictions of the factor level combinations. As in the optimization study, the goal is to develop a mathematical model of the performance scores based on the factors to predict desirable factor level combinations. A detailed description of conjoint analysis can be found in Chapter 7.2.

Another example of where the statistical experimental design approach can be used earlier in the innovation process is in Kansei engineering studies (Nagamachi and Lokman, 2010). Through the Kansei engineering methodology, potential product design elements are identified that can represent the factors
Table 11.1.2 A potential experimental design for these three factors, the full set of factorial combinations of the levels of the three factors.

Serial #  Size      Cracks              Dome
001       1.7 inch  Smooth – no cracks  Flat – no dome
002       1.7 inch  Medium cracks       Flat – no dome
003       1.7 inch  Big cracks          Flat – no dome
004       1.7 inch  Smooth – no cracks  Medium dome
005       1.7 inch  Medium cracks       Medium dome
006       1.7 inch  Big cracks          Medium dome
007       1.7 inch  Smooth – no cracks  High dome
008       1.7 inch  Medium cracks       High dome
009       1.7 inch  Big cracks          High dome
010       2.1 inch  Smooth – no cracks  Flat – no dome
011       2.1 inch  Medium cracks       Flat – no dome
012       2.1 inch  Big cracks          Flat – no dome
013       2.1 inch  Smooth – no cracks  Medium dome
014       2.1 inch  Medium cracks       Medium dome
015       2.1 inch  Big cracks          Medium dome
016       2.1 inch  Smooth – no cracks  High dome
017       2.1 inch  Medium cracks       High dome
018       2.1 inch  Big cracks          High dome
019       3.0 inch  Smooth – no cracks  Flat – no dome
020       3.0 inch  Medium cracks       Flat – no dome
021       3.0 inch  Big cracks          Flat – no dome
022       3.0 inch  Smooth – no cracks  Medium dome
023       3.0 inch  Medium cracks       Medium dome
024       3.0 inch  Big cracks          Medium dome
025       3.0 inch  Smooth – no cracks  High dome
026       3.0 inch  Medium cracks       High dome
027       3.0 inch  Big cracks          High dome
in a statistical experimental design. While this approach is not taught as part of the Kansei engineering methodology, it is ideally suited to this step in the methodology. Consumers would then evaluate the object images – real, drawn or computer generated – that represent combinations of design elements determined by the statistical experimental design.
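The full factorial layout of Table 11.1.2, and similar grids of concept elements or Kansei design elements, is straightforward to generate programmatically. A minimal sketch in Python, assuming the three mini-muffin factors and levels from the table (the field names in the dictionaries are my own, not the book's):

from itertools import product

sizes  = ["1.7 inch", "2.1 inch", "3.0 inch"]
domes  = ["Flat - no dome", "Medium dome", "High dome"]
cracks = ["Smooth - no cracks", "Medium cracks", "Big cracks"]

# Full factorial: every combination of the three factors (3 x 3 x 3 = 27 concepts),
# in the same order as Table 11.1.2 (cracks vary fastest, then dome, then size).
design = [
    {"serial": f"{i:03d}", "size": s, "cracks": c, "dome": d}
    for i, (s, d, c) in enumerate(product(sizes, domes, cracks), start=1)
]

for row in design[:5]:
    print(row)
print(f"{len(design)} concepts in total")

When respondents cannot realistically evaluate all 27 concepts, a statistician would typically select a fractional subset of this grid rather than the full factorial.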
11.1.1
Brief history of experimental design

The theoretical background for statistical experimental designs dates to R.A. Fisher's proposed methodology in the 1935 textbook The Design of Experiments. While many of the early applications of the methodology were in agricultural research, advancements of the theory to applications in industry have been documented in work by Box et al. (2005) and Myers et al. (2009). Many statistical software programs have incorporated the techniques into their application offerings, and a number of software packages have been
created specifically for the industrial application of statistically designed experiments. The statistical experimental design approach has gained increasing popularity in recent years in many fields. Availability of software tools to implement the techniques has made the implementation of the techniques easier and more time efficient. Computational advancements have made the design creation more flexible to practical research constraints. Research in the field continues to make the statistical experimental design approach a critical tool in the product development process in a widening list of industries.
11.1.2
Summary and future

The statistical design of experiments approach provides a number of important benefits over less rigorous experimental approaches. The ability to isolate the effects of the different factors makes for unambiguous learning. The focus on model development and prediction identifies opportunities that may not have been considered beforehand. Innovation research would benefit from utilizing the approach in the many areas where it can be applied.

As has been demonstrated, the statistical experimental design approach can be applied throughout many stages of the innovation process. Therefore a statistician can be a valuable member of a product innovation team. The role of a statistician in the innovation process is not limited to data analysis. Inclusion of the statistician on the team ensures that designs of all aspects of the studies are efficient and effective in collecting data to reach the goal at that stage in the process. When the team ultimately needs to make a decision on the introduction of the fruits of the innovation process, the statistician can ensure that the data collected will be sufficient to manage the associated business risks to appropriate levels.
References

Box, G., Hunter, J. and Hunter, W. (2005) Statistics for Experimenters: Design, Innovation and Discovery. Hoboken, NJ: John Wiley & Sons.
Griffin, R. and Stauffer, L. (1990) "Product Optimization in Central Location Testing and Subsequent Validation and Calibration in Home-use Testing". Journal of Sensory Studies, 5, 231–240.
Moskowitz, H. (1987) "Optimizing Consumer Acceptance and Perceived Product Quality". In Kapsalis, J.G. (ed.) Objective Methods in Food Quality Assessment. Boca Raton, FL: CRC Press. pp. 183–229.
Myers, R., Montgomery, D. and Anderson-Cook, C. (2009) Response Surface Methodology: Process and Product Optimization Using Designed Experiments. New York: John Wiley & Sons.
Nagamachi, M. and Lokman, A. (2010) Kansei Engineering (Industrial Innovation). Boca Raton, FL: CRC Press.
11.2
The Role of In-House Technical Experts
Veronica Symon
Key learnings
✓ Benefits of using in-house experts
✓ Think out of the box to find innovation ideas
✓ Provide sensory guidance for in-house experts

11.2.1
First, look inside for the answer; it may be closer than you think

When we think about innovation, don't we sometimes wish we could just hire a consulting company to do the job for us? It sounds good, as we are always tight on time and resources. But it may not be a good idea if we can't replicate bench-top prototypes in our production plant. So before we spend money on external consultants, we should look into in-house expertise.

There are many benefits of using in-house expertise. Obviously the first benefit is cost savings. Second, in-house experts know how to bring an idea to reality. For example, they know plant capability and cost of ingredients better than any external consultants. Another benefit is that in-house experts know company and product history very well, especially those who have been with the company a number of years. They can apply lessons learned from past experiences to any future project.

As a sensory scientist at Pepperidge Farm R&D, I work closely with the product development technical team and bakers who collaborate to create innovative bakery products. I found most of the scientists and engineers were somewhat design-minded. To work with them, the sensory group would schedule a couple of work sessions to talk through the project. First, we need to understand the objectives and background of the project well and let them explain technical challenges to us. Then we propose a plan and negotiate it
with them based on possible constraints, such as processing feasibilities and cost of experiments. It is important for a sensory scientist to ask a lot of questions, as each project is unique. The planning process becomes a learning experience for both technical experts and sensory scientists.

Once I was very pleasantly surprised when a packaging engineer approached me to design an experiment to determine optimal packaging and cooking conditions for a new microwavable bakery item. I seldom worked with packaging projects. It was a very challenging project as it involved elements of packaging, formulation and cooking conditions. The project team could not move forward without choosing a solution. I saw the project as a great opportunity to apply experimental design. Together we identified three factors that would affect the food's sensory quality the most: "type of packaging material", "cooking time" and "type of bread" (example: open-faced like pizza or close-faced like hamburger). We used a response surface design for the experiment. Key sensory attributes, such as bread softness and chewiness, and overall quality of the products were measured by trained employee panelists and used as dependent variables. Internal temperatures of the bread were also measured as constraints required by the regulatory department. After analyzing the data, we identified the factor(s) that affected sensory quality the most and the best combination of the design factors for optimal tasting quality. The results were so convincing that the project team was able to streamline resources and focus on the right direction for future development. This was a great example to show our R&D colleagues how sensory science could help them to achieve success. As sensory scientists, we not only measured the panel responses but also led the design phase of the research.

At Pepperidge Farm, in addition to scientists and engineers, we are fortunate to have many experienced bakers, including master bakers, in the product development team. As people say, "baking is an art". Bakers do have different disciplines from technologists and scientists. I have seen many bakers with a "magic touch" who can turn mission-impossible projects into successful launches.
11.2.2
In-house experts – magic touch to success
I recently interviewed a master baker at Pepperidge Farm on how he has successfully contributed to driving innovation. He cites the following practices:

● Intuition  After years of training and working as a master baker, he is very familiar with many categories of baked goods, from breads to cakes to cookies. Whenever he hears a new idea, he can quickly grasp the concept and develop different forms of new products with the new idea. For example, to create a new cake, he would look for inspiration from ice cream, cereals, cookies and even candies. Then he would orchestrate the cake presentation, taste and mouth feel into a new and longing experience.
● Network with the trade  Bakers form a tight network among themselves. They constantly critique each other's work and give suggestions to each other. Our master baker said he had bonded with fellow bakers from everyday work. If he encounters difficulties, other bakers will lend him a hand without hesitation. He also keeps in contact with culinary schools, private bakers, restaurant owners and suppliers. Sometimes a solution is just a phone call away. For example, when he developed a new cinnamon flavor bread, the cinnamon flavor in the company's list of approved existing ingredients just did not come through in this new form. So he quickly tapped his network of suppliers, requested cinnamon samples from across the world and successfully enhanced the cinnamon flavor.
● Trend sensitive  Bakers have their own organizations, such as the American Bakers Association and the National Bakers Association. The trade conferences and communication channels provide great leads on the most up-to-date trends in the industry. He gets new inspirations from these channels. For example, artisan breads have become very popular in recent years and he has explored opportunities to use different types of artisan breads for our current frozen bread platforms.
● Good knowledge of process and equipment  In-house experts know not only recipes but are also very familiar with the processing and equipment in the plants. They can modify current lines with minimal capital requirements to create new forms of bakery products. With their creative minds and hands-on experience, an innovative idea can speed to market. A good example is the thin sandwich bread, which took a short period of time from prototyping to launch on shelf! Our baker worked closely with processing engineers on modified pita-bread-like formulations based on current equipment and processing. They further optimized the formulas using direction provided by sensory scientists, obtained from competitive benchmarking against market leaders.

11.2.3
How to work with in-house experts – advice for sensory professionals
Baking is an art, and working with bakers is also an art. As sensory professionals, we need to realize that we are trained in different ways and will most likely show different approaches to solving a problem. We are scientists by training, while many bakers learned their techniques from observing and working in their family business or through apprenticeships. In other words, we grew up as scientists by learning from books and lab experiments, while bakers perfected their skills by getting up at 3 am, starting from making the dough, to baking bread in ovens and finishing up with cleaning the whole kitchen.

Having worked in the baking business for a few years, I have realized that some scientific approaches may not always apply well, such as response surface design for product optimization within a reasonable design space. Believe it or not, a good baker could make a bad formula work! There are a lot of factors beyond the formulation that make a good loaf of bread, such as oven temperature, leavening time, baking time, yeast level and gluten level. However, by recognizing the gaps between us, technical researchers and creative bakers can still work alongside each other beautifully. When working with our bakers I usually let them do the magic, but I do push for extremes. If a full-blown response surface design is not realistic, I would recommend a simple design, such as a two-by-two
factorial design. I also think it's very important to benchmark the new product against competitors as early as possible.

Due to the nature of the business and the shelf-life of the product, the bread development cycle is very short. As sensory scientists, we need to synchronize with the speedy innovation process and shorten our test turn-around time to better assist developers. We constantly need to remind the project team to use consumers' responses as a guide for development. After all, consumers are the ones who pay for the products and eat the products, not us or our managers. Also, I believe we add the most value to the development process with our scientific approaches. Data is more powerful than a thousand words. We also need to challenge ourselves to develop better, faster and cheaper ways to do sensory and consumer research all the time.

Having a voice in the innovation team requires building a strong research partnership across multiple functions. Although each company's culture is different, I believe that for sensory scientists, naturally partnering with product developers, statisticians and consumer insight teams is very helpful. To build trust with our research partners, good communication is essential; pre-meetings and joint projects help.
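To illustrate how much even a simple two-by-two factorial can reveal, here is a minimal sketch in Python; the two factors (yeast level and proof time) and the liking means are invented for illustration and are not data from this chapter.

import numpy as np

# Hypothetical mean liking scores (9-point scale) from a 2x2 factorial bake-off:
# rows = yeast level (low, high), columns = proof time (short, long).
liking = np.array([[5.8, 6.4],
                   [6.1, 7.3]])

yeast_effect = liking[1].mean() - liking[0].mean()        # high vs. low yeast, averaged over proof time
proof_effect = liking[:, 1].mean() - liking[:, 0].mean()  # long vs. short proof, averaged over yeast
# How much the benefit of a longer proof changes when yeast level is high vs. low:
interaction = (liking[1, 1] - liking[1, 0]) - (liking[0, 1] - liking[0, 0])

print(f"Yeast main effect:      {yeast_effect:+.2f}")
print(f"Proof time main effect: {proof_effect:+.2f}")
print(f"Interaction:            {interaction:+.2f}")

Even this small design separates the contribution of each factor and flags whether the two factors work together, which a one-change-at-a-time bake-off cannot do.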
11.2.4

Some ideas to approach innovation projects

● Think outside of the box and look outside of your company  Conduct a national and/or international competitive product audit. Our in-house experts are very familiar with our own products, but it may be eye-opening for them to learn from competitors. The audit could cover products as well as packaging features. It would also be very beneficial to sample products across categories. For example, to create a new form of breakfast bread, we could gain inspiration from snacks, pastry appetizers, pizza and sandwich wraps. Watch closely for international trends, as the world is becoming one in the 21st century. A popular food item in Asia today could be a new American trend tomorrow.
● Brainstorm with cross-functional experts  Bring cross-functional expertise together as early as possible in the development process. While we bounce ideas around, we can also clarify the concepts for our developers. One piece of advice I would share is to have an open mind during brainstorming, and not to let cost, timing and processing limitations concern us yet. It's an early stage of development; let's encourage a free and creative spirit. We can always refine an idea later.
● Bring us closer to consumers  Don't wait for our turn; we should proactively look for ways to obtain consumers' feedback as early as possible. I have had success in conducting product-focused qualitative consumer groups at the early stage of the development process. Finding answers at an early stage saves lots of developers' time and resources. The answers we look for could be as simple as: what are the best colors, shapes and flavors for the products? If the facility permits, consider inviting consumers to your facility so the project team can interact with the consumers directly.
● Apply scientific disciplines  Use design of experiments whenever possible to guide product optimization. As we learned in Chapter 8 (Tools to Refine and Screen Product Ideas), product development (PD) experts are chartered to develop prototypes right after successful identification of an idea or concept. It is also the time for us, sensory scientists in collaboration with statisticians, to step in and provide guidance on experimental designs so our experts choose the right design variables and extend design ranges. A question that I often receive is: "Why should I make this prototype as I know for sure this formulation does not work?" I always say, "Without bad, we don't know how good is good". As I mentioned before, for bakery products, response surface designs might not work very well, but simple factorial designs serve us very well. There are many benefits from using design of experiments. The most obvious one is that we enhance the chance of winning as we test a wide range of prototypes with different sensory profiles. Most importantly, we can determine ingredient drivers of liking, which are crucial for product optimization.
● Conduct consumer/sensory tests to optimize and validate the products  After helping developers to choose the right experimental designs, we should use consumer responses (for example via a central location test (CLT)) to optimize the prototypes. Once the optimal formula has been identified, a follow-up confirmatory test (CLT or home-use test (HUT)) is also necessary to verify success by testing against an internal or a competitive benchmark.
● Monitor product quality (shelf-life tests)  In many companies, the sensory scientist has the added responsibility to determine shelf-life and track product quality for a new product. Sometimes a prototype may be well liked by consumers in a CLT, but it may have ingredients prone to rapid oxidization or packaging with poor freshness protection. Timely feedback to technical experts on these issues will further improve the product and packaging features.
There are many other ways to encourage innovation from in-house experts, for example conducting an employee innovation fair, providing an online idea submission channel, or holding an 'Iron Chef' type of event. Do keep a record of new ideas, as sometimes a good idea may simply be ahead of its time. If it is not launched this year, that does not mean it won't be launched in a few years. Believe it or not, you are working with amazing innovators every day!
11.3
How to Leverage Research Partners (Local and International Testing)
Gigi Ryan, Jerry Stafford and Jim Rook
Key learnings
✓ Developing a holistic partnership with external research agencies
✓ Leveraging an external tool box of research tools
✓ Partnering for global research

11.3.1
Holistic partnership

Different types of relationships exist between client and research agency, ranging from executional suppliers to strategic partners. While the former may "get the field work done", it does not optimally leverage an agency's capabilities or the benefits of collaboration. Market researchers and market research agencies are the front lines to the consumer, and when leveraged optimally can help drive the strategic direction of a product. Getting the most out of your research agency requires collaboration, trust and mutual respect. Once you have identified the need to outsource, make it a point to understand a prospect's expertise, its organizational structure and culture. Determine how well it appears to work with the structure and culture of your company. Once you have found a good fit, the foundation for a strong partnership is laid.

A paradigm shift is occurring throughout the research industry. To avoid the "data dump" supplier mentality, it has become increasingly common for clients to partner with their research agency in the quest to uncover consumer insights, drive innovation and build the business. While it requires some initial effort from the client, creating an open dialogue with the people who have their fingers on the pulse of the consumer will ultimately improve
Figure 11.3.1 Holistic partnership between client and research agency improves quality and reduces costs with both parties having a winning relationship. [Diagram of the three stages: 1. Discovery and design; 2. Execution; 3. Reporting.]
your bottom line. The more your research agency knows, the better they will be able to hone the research and extract the learning you need. A client–research agency partnership improves quality and reduces costs with both parties having a winning relationship. This holistic partnership (Figure 11.3.1) begins with "discovery and design" and continues through "execution" and "reporting". It is a back-and-forth exchange of information and ideas that clarifies goals, maximizes learning and drives client direction.
11.3.2
Benefits of a client–research agency partnership
Marketing 101 has taught us that to sell an innovative product or service successfully you have to fill a (perceived) need or want. To determine that need or want you must identify your target consumers and understand who they are (demographic), what they think (psychographic), and how they act (behavior). In essence, build consumer intelligence. Maximizing consumer intelligence will drive innovation and help you build sound business objectives and strategies and grow your business. Market research plays a significant role in driving your business and keeping you ahead of the competition while helping to improve the bottom line. Both client and research agency provide unique yet interdependent perspectives. The client knows about the industry, marketplace and history of their product/
service. The research agency knows how best to fill information gaps utilizing cutting-edge research methods, how to interpret patterns in quantitative data and identify cues in qualitative research. Your internal departments (R&D, product design, packaging) have access to an outside brain trust that, in effect, becomes an extension of your team. Pooling these perspectives leads to new ideas and innovation and a more holistic understanding of your product and its relationship with the consumer.

Partnerships beget innovation. In a partnership, the client has access to an expanded tool box and the chance to be exposed to new ways of thinking and different methodologies. Some examples of the type of innovation to which you could be exposed are:
● Innovative advanced analytic techniques
● Validated methods of sample control
● Proven methods of order bias control
● Creative survey designs.
For example, one of our clients was not satisfied with their current methods of determining how to improve their products during early product testing. We developed a method that was more grounded in statistics and offered far more specific and prioritized directions for improvement. Not only did we show the breadth of our capabilities, but the open discussions also resulted in a better approach to answer the client’s strategic business issue. Open dialogue is a two-way street. Even if they are sought for a certain research method, it behooves a research agency to convey the company’s full complement of capabilities. Clients often pigeonhole their research agency with one type of work, frequently the first type done for a client. However, the agency can show how multiple studies work together to uncover the learning for which a client is searching. For example, along with qualitative and quantitative, the firm may have proprietary techniques, expertise in different methodologies and most importantly, a brain trust which can be accessed to explore the holistic product issues. The brain trust, or internal think tank, is made up of senior staff, statisticians and experts in advanced analytics, and experts in the vertical or methodology being discussed. The main contact at the firm would convene a meeting or call with their client and the brain trust during early discussions. Experts and stakeholders on the client end should be included. Together this group can come up with ideas or discover possible problems or issues that neither the client nor the research firm alone may have seen. The collaboration that takes place at this phase can literally save the research by making sure that the objectives have been adequately delineated, that the design will strategically meet those objectives, and that errors will be avoided. Top-level participation also strengthens the brain trust that can be called upon for any one research project. As the people with the most experience and most knowledge of the company, senior staff can contribute heavily to the discussion, and can give the proposed design a stamp of approval. For example, we had a client that had been experiencing ongoing declines and was looking for an opportunity that would exponentially change the game of its
business. We conducted an extensive opportunity discovery process to help the client develop a strategy for future growth. The opportunity discovery process included a market scan to identify category and consumer trends, a review of competitor growth strategies and a client work session to establish game changers to develop. The opportunity sessions with the client included senior staff from both the client and research organizations. The opportunity discovery determined that historical research had negated some product ideas for growth and the client ultimately walked away from these. Positive product ideas were then prioritized based on current opportunity and capabilities. The client is currently developing some of these products, and in-market testing is proving out their potential. Including senior staff from the client and research agency ensured that research opportunities with the greatest potential were identified for testing and further development. Up-front collaboration also leads to time and cost efficiencies. The more thorough the up-front discussion and early vetting of objectives and possible issues, the less time you will spend dealing with unforeseen problems throughout the project that can add its cost. Clear objectives drive a more efficient survey design process and less time spent with study participants. Because both the client and the agency put thought and effort into the design, it is clear, concise, less bloated, and therefore less costly. Post-field, clarity leads to optimal interpretation of existing data, better tabulation plans, analysis and insights, resulting in a quicker turnaround time between the start of the project and final report.
11.3.3
Example of benefits through holistic partnership
A case which demonstrates the efficiencies garnered when the client and the research agency are aligned is a situation involving a company that had previously conducted two product tests in both the US and China. As a secondary goal for the studies, the company wanted to explore and compile a cumulative viewpoint on the impact of ethnicity and age. The hurdle for this client was to determine how to allocate resources to generate the strongest reach across all groups. We leveraged existing data and conducted new research to develop a cumulative viewpoint on the topic. We analyzed the studies conducted in the US, which had augments of the desired ethnic groups as well as a good mix of women of a variety of ages. On the surface, we discovered that a top tier of benefits applied regardless of ethnicity and age. However, digging deeper into secondary drivers, we uncovered large differences between ethnicities and age groups, providing an actionable means of targeting specific needs with certain benefits. To more accurately portray consumers’ changing needs and product opinions, we recommended that future research be conducted with a global perspective when possible, and/ or with augments of various ethnicities in US studies. In order to accelerate product and market decisions, existing information was leveraged and additional research was only conducted as necessary. Supplemented with previous research, the data to address questions concerning preference, perception, usage and behavior differences by ethnicity and age could be addressed more quickly and efficiently.
Another benefit of the partnership is the opportunity for cross-training. The agency can host training seminars, particularly for the client’s new or junior members, on the various aspects of the vendor side of the business. Likewise, the client can invite the vendor to spend time at their offices, either as an acting member of their department or as a more readily available resource. In any of these instances, the person being trained or spending time on-site learns more about the overall process, resulting in a better relationship overall.
11.3.4
Creating and maintaining a relationship
Establishing a set process early in the relationship helps define expectations and keeps everyone on the same page. Prior to approaching a research agency, think about your needs. There must be agreement on what the expertise of the client and the expertise of the research agency will contribute to a fruitful partnership. Are you looking for a large or small agency? Do you want to work with fewer partners that have skill sets in multiple practice areas, for example qualitative, quantitative and advanced analytics? Or, do you prefer working with separate agencies that specialize in certain areas? Partnering with one agency ensures fluidity to projects and a deep understanding of the business from inception to launch to assessing marketplace success. Working with separate agencies that specialize in certain areas can bring specific expertise. However, a company which has broader expertise can test multiple issues and provide more comprehensive understanding.

An illustration of a situation where multiple methodologies were employed to address multiple issues is a company that wanted to achieve aggressive organic growth by expanding its penetration in the African American market and the Hispanic market. For further growth, it needed to better understand the consumer's interest in the brand and the decision-making process relative to restaurant selection. We used a multi-dimensional research and strategic growth initiative to unearth segment needs, identify new, high-potential menu items and orchestrate the customer experience:
● A market scan identified successful marketing strategies to ethnic-specific groups
● Mining of past primary and syndicated research helped structure the design of quantitative research, and modeling efforts isolated the high-potential menu items that would resonate with these ethnic markets
● On-site interviews and group discussions with restaurant management and workforce, along with observational audits of restaurant personnel interacting with ethnic visitors, highlighted opportunities for change and enhancements
● A management workshop shared outcomes and secured agreement on needed actions.
As a result, changes were made to the store layout, the menu, the menu selection and the training process, resulting in immediate increased sales and improved workforce performance.
Identify your expectations of the research agency. This may seem obvious, but take a step back and consider your company and how your internal teams operate. Do you approach a research partner with a hypothesis already developed and you are looking for execution? (Note: if this is the case you are not using the brain trust of your research agency optimally.) Or, do you find it helpful to brainstorm the issues and objectives and have an open mind to new strategically rooted ideas and possible alternatives? Whatever your criteria are, interview your potential research partners and clarify your expectations to ensure that you are both committed to a common goal.

Characteristics of a good client/research agency relationship begin with open dialogue, clear communication of goals and a consistent process. First and foremost, clients must be willing to share information. Strategic plans, organizational structure and key stakeholder objectives help provide a framework from which the research agency can make relevant recommendations. A category or brand review that includes positioning, current advertising and media plans for the primary category players paints a more complete picture of the marketplace. The research agency should be committed to the success of the client's organization, understand the market and strategic business plan and provide thought leadership. A research agency that can move quickly and has the flexibility to answer your business questions and provide clear recommendations can help you stay ahead of the competition. The research agency should not only evaluate the new products, advertising or promotions but identify opportunities for improvement.

A case in point: in the fast-changing, cluttered world of digital marketing, it is often difficult to stay ahead of the curve. With limited control over respondent recruitment, and various site regulations, our client found it challenging to develop a sound methodology to test online campaigns with respect to site placement, media allocation and type of creative (i.e. rich media vs. flash vs. video). As our client started investing more of their media spend in digital marketing, it was important to evaluate the overall impact of online advertising. To evaluate the success of various campaigns, we employed a variety of strategies:
● A test/control cell methodology, using "cookie" technology, assessed the performance of respondents exposed to the ads versus respondents unexposed to the ads, across a wide variety of key performance indicators, to determine the incremental lifts sourced from the digital advertising.
● Furthermore, analyzing the data by creative (rich media vs. flash vs. video) and by site offered insight on the appropriate media weight and creative rotation.
As a result of the findings, our client determined that they should allocate 20 percent of their advertising budget to digital marketing for the next fiscal year. Additionally, the analyses provided guidance to the creative agencies developing the client’s future digital marketing campaigns and served as an invaluable resource for the client internally.
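For readers who want to see the test/control logic in miniature, the sketch below (Python, with invented counts; it is not data or code from the project described) computes the incremental lift on one key performance indicator and a simple two-proportion z statistic for the exposed versus unexposed difference.

from math import sqrt

# Hypothetical counts of respondents who took the desired action (e.g. a brand site
# visit) in the exposed (test) and unexposed (control) cells. Illustrative only.
exposed_n, exposed_hits     = 1200, 198
unexposed_n, unexposed_hits = 1150, 149

p_exposed   = exposed_hits / exposed_n
p_unexposed = unexposed_hits / unexposed_n
lift = p_exposed - p_unexposed              # absolute incremental lift
relative_lift = lift / p_unexposed          # lift relative to the control cell

# Two-proportion z statistic (pooled) for the exposed vs. unexposed difference.
p_pooled = (exposed_hits + unexposed_hits) / (exposed_n + unexposed_n)
se = sqrt(p_pooled * (1 - p_pooled) * (1 / exposed_n + 1 / unexposed_n))
z = lift / se

print(f"Exposed: {p_exposed:.1%}, unexposed: {p_unexposed:.1%}")
print(f"Incremental lift: {lift:.1%} absolute ({relative_lift:.0%} relative), z = {z:.2f}")

In practice this calculation is repeated for each key performance indicator and broken out by creative type and site, as described above.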
Determining factors in creating the partnership are:
11.3.5
Are the research agency’s principles aligned? ° Is there a willingness to listen and understand your needs and provide solutions? ° Is the focus on actionable recommendations? Who is working together? ° Is there a single point of contact? ° What is the relative position/experience? Is there industry experience and expertise? ° Does the research agency have the capacity and skill to deliver? Is the desired relationship long-term or short-term? What is the amount of time spent together on the following?: ° Number of projects ° Complexity of projects ° Amount of work involved ° Anticipated spending.
Getting the most out of the relationship
To make the partnership stronger and longer term, it is important to get senior buy-in from both the client and agency organizations. A full commitment from both organizations, beginning at the top, results in a partnership that spreads throughout both organizations and survives beyond its originators. Both parties should treat each other with honesty and mutual respect and work collaboratively. In order for the client to feel comfortable providing proprietary information, the research agency needs to assure the client that confidentiality is one of its highest priorities; non-disclosure agreements should be signed.

Communication is important both overall and during a specific research project. It means understanding the client's current goals at both the macro and micro levels, through information shared by the client and through the agency immersing itself in the client's business via discussion and secondary research. The more familiar an agency is with the client's industry and current trends, the more it adds to the research. It is a commitment and an effort, but it is worth it to establish an environment of mutual respect and trust. Both the client and the research agency need to follow through on their promises to maintain credibility, and both should insist on regular reviews to ask questions, review progress and discuss what is working and any areas for improvement.

Clients can avoid a frequent complaint – that research agencies do not consistently deliver reports that fully address their business issues – by offering as much background as possible. The end product can then be less of a data dump. In order for research agencies to deliver a strategically focused report that solves business problems, clients must brief the research agency on the background of the project, clearly define the objectives, provide relevant information and results from previously conducted projects, and keep the agency apprised of internal expectations. We once conducted a flavor screen with multiple variants. One clear winner emerged, and our presentation and recommendation focused on this winner. To our surprise, the client chose the second-place variant. The client had neglected to inform us during the briefing session that one of the variants – the winning one – was considerably more costly to produce, and the likelihood that the client would implement that variant if it won was low. Had we known this, we might have recommended testing a metric that included purchase intent if the product were sold at a higher price, or we might have questioned testing that variant altogether. This is an example of the importance of complete disclosure. Are there considerations not listed in the research brief that the client will use to make a decision? What information can you provide your research agency that will help them turn around a better product for you? Free exchange of information will lead to better analysis and insights, and ultimately to innovative ideas and a growing bottom line.

The research agency should display its commitment by immersing itself in the category and business. Monitoring client and competitive websites, in addition to other secondary data sources, is necessary to keep current with trends and industry happenings. An agency should not get complacent; the same enthusiastic and innovative spirit that was present at the beginning of the relationship should be maintained. Innovation is evident in some research agencies developing new approaches that more effectively address client business issues. All research agencies will typically present the various research methods available and their benefits and drawbacks; some have even developed their own proprietary studies that work better at extracting information and trends, including tools that estimate the impact of changes in the elements of the marketing mix. The client is provided with tools that they can use to conduct "what if" scenarios.

Recently we had a client who wanted to determine the impact of offering a beverage on the menu. Specifically: will the beverage provide a boost to traffic? Which beverage variants are of most interest to consumers? How important is a drive-thru service? What sales volume levels should be expected, based on variables such as advertising spend, beverage variant interest and times of day offered? To test the concept, we had consumers evaluate multiple beverage variants in a study that assessed current usage, interest and future intent to purchase these variants. Using the survey data, we developed a sales volume simulator that illustrated various "what if" scenarios by changing one or more variables (e.g. coffee variants likely to be ordered, time of day ordered, planned advertising support). This tool made it very easy to see the probable effect of the interaction of these variables on sales and profit margins. The client obtained a clear and deep understanding of consumer beverage preferences and drinking habits (both current and future). Additionally, the simulator allowed the client to mitigate a potential investment risk by developing a projectable view of the market opportunity.
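To make the idea of a "what if" simulator concrete, the sketch below shows the general shape of such a tool. Every number, variable name and the simple advertising-response assumption is invented for illustration; the client's actual simulator was calibrated from the survey data described above.

```python
# Toy "what if" sales volume simulator. All inputs below (traffic, interest,
# daypart share, advertising response) are hypothetical placeholders; a real
# simulator would be calibrated from the consumer survey data.

def weekly_units(base_traffic, variant_interest, daypart_share, ad_support):
    """Rough weekly unit estimate for one scenario.

    base_traffic     -- average store visits per week
    variant_interest -- share of visitors likely to order this variant (0-1)
    daypart_share    -- share of traffic occurring in the dayparts offered (0-1)
    ad_support       -- planned advertising support relative to baseline (1.0 = baseline)
    """
    ad_effect = 1.0 + 0.15 * (ad_support - 1.0)  # assumed, not estimated, ad response
    return base_traffic * daypart_share * variant_interest * ad_effect

scenarios = {
    "iced coffee, mornings only, baseline ads": weekly_units(9000, 0.12, 0.45, 1.0),
    "iced coffee, all day, +50% ad support":    weekly_units(9000, 0.12, 1.00, 1.5),
    "fruit smoothie, all day, baseline ads":    weekly_units(9000, 0.07, 1.00, 1.0),
}

for name, units in scenarios.items():
    print(f"{name}: ~{units:,.0f} units per week")
```

Changing any one input (variant interest, dayparts offered, advertising support) immediately shows its effect on projected volume, which is what made the real tool useful for exploring scenarios and sizing the investment risk.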
11.3.6 What to watch out for: Possible pitfalls
Maintaining the partnership requires continuous "management": managing the relationship, the process, the constantly changing marketplace and the changing behaviors of consumers. Budgets can also become a problem. Costs have to stay competitive to avoid disillusionment on the client's part, and few things ruin a relationship more quickly than nickel-and-diming. To avoid this, the research firm should be clear up-front about what the costs are – based on number of interviews, qualifiers, incidence, survey length, etc. – so there are no misunderstandings if costs need to rise.

The research agency cannot assume that it is the only partner. Equally, if the client puts all of their eggs in one basket, they are putting themselves at risk: the dangers are not limited to complacency and costs, because the client also loses the chance to be exposed to multiple firms' ideas and innovations. And while it helps for the relationship to be friendly, it is not always going to be that way. There may be disagreements along the way, but as long as mutual respect, civility and honesty are maintained, the partnership can stay strong.
11.3.7 Partnering for international research
Effectively partnering with a foreign-based research supplier for international research also requires collaboration, trust and mutual respect. However, partnering with the right foreign-based supplier is a more complex endeavor than partnering with a domestic supplier (but equally important). The process is made especially difficult by language differences, cultural differences and long-distance communications. Also, because multiple partners will generally be required in multiple countries for international research, selecting the right partner in each of several countries requires a major investment in both time and effort.

As with domestic research, getting the most out of your foreign-based research supplier partner is essential for effectively serving the end-client's needs. However, both the scope and the depth of collaboration are more limited in nature, and more prone to interruptions. This is due to a combination of factors:
● The contribution of the foreign-based supplier partner is generally limited to execution (data collection).
● The amount of business involved is generally less for any one of the foreign-based supplier partners, and thus of less interest compared with domestic partners.
● The foreign-based partner is usually an independent company, not part of the lead US vendor's organization.
● The relationship with foreign-based partners will be less structured and more prone to ruptures in the day-to-day details of working together. This is a critical factor if the partnership is to survive.
Effective partnering with foreign-based suppliers is a must for international research, and it must be an ongoing process over time. A truly effective partnership requires a different set of skills and priorities from both parties:
● Efficient and effective communications. This is no small order considering language differences and inherently inefficient long-distance communications.
● Highly experienced, knowledgeable and mature international researchers operating for both parties.
● Appreciation for, and an understanding of, cultural differences among the many countries that will be involved in international research.
● The mental flexibility and self-confidence needed to adapt to the many country-by-country differences on the same or similar issues. Expectations of a research partner in, for example, Saudi Arabia may be very different from those of a partner in the United Kingdom, and facilitating a relationship that will successfully meet such expectations will be very different for each country.
● An understanding of the practical aspects of carrying out research in different countries, owing to infrastructure differences, cultural differences and the state of development of the market research industry in each country. One can only imagine the differences in the challenge when, for example, selecting a partner in Germany versus Vietnam.
In addition to trust and mutual respect, effective collaboration requires major efforts by each party. In order to guard against ruptures in the relationship, some basic rules for interaction are worth keeping in mind up-front when establishing any relationship:
● Know your needs! Understand what you need from the partner and make sure he/she is equipped to satisfy those needs.
● Know your partner! Do not attempt to establish a partnership with a foreign-based partner based only on long-distance communications. Without some form of personal contact, you will not know your partner well.
● Make sure each of the partners will benefit from the relationship and will have a vested interest in ensuring its longevity.
● Establish clear, open and fixed lines of communication with the foreign-based partner early on in the relationship.
● Understand that most problems commonly associated with international research result from a combination of factors involving language differences, cultural differences and long-distance communications. If this fact is recognized by both parties throughout the relationship, it is easier to understand the basis for, and to resolve, the many issues and problems that may arise from time to time, ensuring a better chance that the partnership will survive over the long term.
● Major efforts must be made by both parties to ensure at least a minimum of personal contact. Personal contact with foreign-based partners is invaluable for ensuring the success of the partnership; improved face-to-face communication reduces the problems associated with long-distance communications, as well as the impact of cultural and language differences.
One must recognize that establishing the partnership arrangement is just the first step. The relationship must be maintained on an ongoing basis and evaluated regularly. During the initial phases of the relationship:
● Get the foreign-based partner involved in the research process as early as possible. This will give them a more significant stake in the project, prevent them from "buck-passing" at later stages and make them feel more of a full partner in the whole process.
● Encourage the foreign-based partner to recommend changes in the proposed methodology and be willing to revisit specifications if necessary.
● Be specific and consistent with all partners in all countries concerning your needs and expectations.
● Each partner must recognize that there are major differences among cultures in attitudes toward the importance of punctuality and adherence to deadlines. Temper your timing expectations based on a mutual understanding of each partner's viewpoint.
● Do not force a partner to agree to an unrealistic project schedule. It will only come back to haunt you later.
● Recognize each partner's need to make a reasonable profit from the relationship.
● During the job process:
  ° Again, establish clear and open lines of communication with the partner and review the agreement as to how communications will flow for each job.
  ° Clearly state, up-front, who has final decision-making authority and responsibility and how decisions are to be made throughout a project.
  ° When communicating with vendors, be specific and define your terminology well. Do not use abbreviations, jargon or acronyms; not everyone will have a sufficient understanding of American English to understand some of the abbreviations commonly used in the US (demos, ATU, SES, etc.).
  ° Confirm in writing all decisions made face to face or by telephone with foreign vendors throughout all phases of a project. This is critical for confirmation, as well as for clarification of the decision made.
  ° Provide timely feedback to questions from foreign vendors, and provide vendors with a back-up contact for communications and questions. This will ensure the continuous flow of communication so vital to the success of global research projects.
  ° Make sure you obtain prior agreement on all the specific deliverables you will require from your vendors. This includes such things as periodic status reporting, information for establishing incidence rates, translations, English verbatim transcripts, preliminary data, top-line reports, data files, etc.
  ° If you require periodic status reporting, communicate this to the vendors before the project begins. Be very specific about what information is needed, and how, when and in what format it should be communicated. It is highly advisable to provide the vendor with a template outlining the format and the information required.
  ° Do not assume that all vendors have the same or similar policies regarding record keeping and data retention. If you require specific information from the recruiting or interviewing process, communicate this clearly, in writing, during the initial stage of a project.
  ° Recognize the fact that long-distance communication involving multiple time zones shortens the common workday between you and the foreign vendor. Adjust your schedule to minimize delays in communications caused by the shorter workday.
  ° When communicating with foreign vendors, contact them during their normal workday (not necessarily during your normal workday). Respect the division of work and personal time with foreign vendors.
  ° Always obtain a data file from the vendors, even if you are not doing the data processing.

11.3.8 Summary and future
It is important to have a framework to nurture and sustain the partnership; a true partnership results in a win for both parties. By providing comprehensive background and clearly defining the objectives, clients ensure that research agencies can provide insight, solve business problems and drive the organization forward. Clients are overloaded with information, and research agencies are uniquely able to provide insights because they are closest to the data analysis. Research agencies can translate the data into information and provide the "aha moments" – the critical insights – that drive the client's business forward. A solid client and research agency relationship is built on clear, honest and open communication. Both parties have integrity and follow through on their promises. There is mutual respect, and both have a clear understanding of the desired objectives.
11.4 Best Practices in Global Testing and Multi-Cultural Consumer Research
Alejandro Camacho
Key learnings
✓ Practical steps to conduct international product testing
✓ Multi-country product testing checklist
✓ International shipping guidelines

11.4.1 Introduction

Previous chapters have provided you with different elements and direction regarding research design, methodology, multivariate data analyses and modeling techniques, all of which are essential to effective product innovation research. This section is divided into four steps that equip readers with the key practical elements necessary to conduct multi-country or global product testing successfully, in order to establish the customer-perceived value (CPV) of the product prototype. CPV addresses customer interest level, liking, usage, purchase intent and price sensitivity. It is important to mention two things before starting:
(1) These are general guidelines for multi-country product testing that should be applicable to most consumer products.
(2) Product testing comes after other types of research have been conducted, such as concept and positioning testing, in-house (laboratory) and internal (e.g. employee) usage testing. For the purpose of this section, we assume that these types of research were conducted successfully beforehand.
The following are four steps that have helped me in the past to get the “job done” successfully.
11.4.2 Step 1: Company's internal stakeholders' input
This is also called inside information: the process of collecting and summarizing as much inside knowledge as possible from previous research reports (e.g. concept and positioning testing) and from company stakeholders (e.g. marketing, R&D, sales, supply chain management) about the product to be tested. After receiving the initial details and objectives for a multi-country product test, there is more information you will need to learn. Begin by asking stakeholders for previous research reports and review them. Then schedule stakeholder interviews so you can ask questions. These are the key general elements to consider:
(1) Conduct secondary research on similar products, competitors and consumer benefits through the Internet.
(2) Take notes on everything the stakeholders say about the product(s) to be tested, such as:
  (a) History and background
  (b) Requirements and complexities in bringing the test product to life
  (c) Packaging
  (d) Product size(s) and weight(s)
  (e) When, where, why and how to use
  (f) Positioning
  (g) Distribution channel
  (h) Price range
  (i) Which countries and why
  (j) If applicable, current:
    (i) Usage in the US
    (ii) Target consumer in the US
    (iii) Known product strengths and weaknesses.
(3) Compile, summarize and add the new "internal" findings to the original multi-country product testing study, creating a revised version.
(4) Send the revised version to all multi-country product testing stakeholders for confirmation and approval.
The information collected is valuable for you and for outside research suppliers/consultants in designing a multi-country testing study. Some of these key elements are included in the multi-country product testing checklist sample provided at the end of this section.
11.4.3 Step 2: Secondary research
Multi-country testing cannot be conducted successfully without a deep understanding of the different cultures under study: each country's cultural and social organization, language, the role of the family unit, history, political and legal environment, economy, education, aesthetic preferences, religion, traditions, holidays, individual values, beliefs, technology, and transportation and distribution infrastructure. An inexpensive way to gather information about these elements is the Internet, which provides rich and meaningful information for free or for a small fee. A few secondary information sources are provided here:
(1) Central Intelligence Agency – World Fact Book. https://www.cia.gov/library/publications/the-world-factbook/
(2) US Department of Commerce http://www.commerce.gov
  (a) International Trade Administration http://trade.gov
  (b) Industry Sector Analysis http://trade.gov/data.asp
  (c) Market Research http://www.export.gov/mrktresearch
  (d) Trade Information Center http://www.export.gov/exportbasics/eg_main_017483.asp
(3) US Department of State http://www.state.gov
  (a) Countries and Regions http://www.state.gov/p
(4) The National Trade Data Bank (NTDB) http://www.stat-usa.gov/tradtest.nsf
(5) The country's embassy in the US
(6) The country's trade commerce department
(7) The country's external relations minister's office
(8) The country's government.
In addition, there are other sources to explore, including asking local market research suppliers for their knowledge about the consumer–product interaction. Another source of knowledge and intelligence is the company's country-based subsidiary.
11.4.4 Step 3: Country-based subsidiary or office branch
If the company has a subsidiary or branch office in the countries where the product is going to be tested, it can provide past local consumer research studies (e.g. customer profiles, segments) that complement your target consumer knowledge in each country, as well as the current market situation for your client's industry, values, and demographic and psychographic information. Interviewing marketing members in those offices will facilitate your comprehension of each marketplace's dynamics. In the past it could be complicated to schedule a conference call, but recent advances in technology that allow virtual collaboration have made trading documents and scheduling meetings more accessible and easier to manage.

Now that you have learned, understood and gathered all the "inside" information about the product to be tested globally, it is important to develop a country situation analysis brief for the different countries (i.e. cultures) under study. As you would expect, what works well in some countries may not necessarily work in others, for a variety of reasons. Each country's cultural variables will affect the overall study design, including when, where, how and by whom a product test can be conducted. Technical and technology factors also affect the methods used: examples are computer and Internet access, telephone line availability, social and economic differences between interviewer and respondent, and demographic or psychographic information that is not culturally appropriate to ask for. Thus, different countries require individual treatment, but the goal should be to design the study so that the results are comparable.
11.4.5 Step 4: Developing a multi-country product testing checklist
Multi-country testing has many moving parts and requires step-by-step execution. A checklist is a tangible, visual tool that shows where you are in the process and who is responsible for each step. Table 11.4.1 provides a sample multi-country product testing checklist as a guideline.

It is crucial to include in your checklist the shipping requirements of each country in which you are testing a product. Certificates, signatures and packing restrictions can vary dramatically by country and will undoubtedly consume a considerable portion of the overall test schedule. Customs approval in some countries can take one to three months, or even more. As soon as you know you will be testing abroad and need to ship product overseas, investigate the shipping requirements of each country and the time required to obtain certificates and approve products, and incorporate that time into your schedule, including some extra time for delays. The following provides some general guidelines that apply to shipping products internationally, followed by examples of requirements you may come across in specific countries.
Table 11.4.1 Multi-country product testing checklist.
● Prepare or read the Request for Global Product Testing Proposal (RFP).
● Submit the RFP to research supplier partners.
● Get bid estimates and choose a research supplier partner.
● Request and review previous research study reports conducted on the product concept or the product itself.
● Provide copies of the previous research reports to the research supplier.
● Request and provide research suppliers with prototype products to see, learn, try and play with.
● Prepare a set of questions to ask key company stakeholders such as Marketing, Market Research & Insights, Research & Development, Sales, and Supply Chain Management, among others.
● Conduct an Internet country-by-country search of similar and/or competitive products.
● Schedule Q&A meetings with key company stakeholders. Take notes on the following: (1) history and background; (2) requirements and complexities in bringing the test product to life; (3) packaging; (4) product size(s) and weight(s); (5) when, where, why and how to use; (6) positioning; (7) distribution channel; (8) price range; (9) which countries and why; (10) if applicable, current (a) usage in the US, (b) target consumer in the US, (c) product strengths and weaknesses.
● Prepare a set of questions to ask the in-country market research or data collection facility about their market, consumer, category, competitive products and brands.
● Compile, summarize and include "internal" findings, creating a country situation analysis brief.
● Prepare the product to be tested for shipping.
● Ship the product to be tested to the research supplier partner or global data collection facilities.
● Design the research study, including: (1) global markets; (2) research methods: (a) qualitative (e.g. focus groups, in-depth interviews, ethnography), (b) quantitative (e.g. multivariate data analysis, modeling, forecasting); (3) type of data collection: (a) at location (e.g. central location testing, mall, research facility), (b) in-home (e.g. diary, follow-up call, paper survey); (4) product presentation: (a) blind (i.e. without branded label), (b) branded; (5) type of test: (a) monadic (i.e. one single product rating), (b) paired (i.e. more than one to compare); (6) type of analyses: (a) frequency tables (cross tabulations), (b) multivariate data analysis, (c) conjoint; (7) modeling; (8) hypotheses; (9) participant screening criteria; (10) questionnaire(s), translated into each country's language; (11) in-country product testing instructions for the testing facilities and study participants, in each country's language; (12) product handling instructions in each country's language; (13) testing conditions in each country's language; (14) packaging and labeling of the product(s) to be tested; (15) import and export legal issues; (16) research supplier partner deliverables in English; (17) report; (18) data tables; (19) presentation.

11.4.5.1 International shipping – general guidelines
● When possible, have the local client ship the product to its own office abroad; the international office can then forward it to the testing site. Impart to your client that, while it may take some time on their part to box and ship, it elicits less attention from customs agents and may prevent significant delays.
● If you, as the research agency, need to ship, you can send to your client's international offices, which can in turn forward the boxes to the test site.
● Include a full description of the product and use the trade name whenever possible, for example 20 jars of "Acme Face Cream". Label each box "Samples, not for resale". Most countries want to prevent unknown product imports that could potentially be resold, and labeling "samples" can reduce customs delays.
● Smaller boxes with fewer products tend to have a smoother transition through customs (less of a threat for resale). Shipping carriers recommend that the assigned total value per shipment be $20.00 USD or less; indicate the per-unit price as well, for example 20 jars × $0.95 USD = $19.00 USD. While this may require sending more boxes for a product test, it reduces the chance of a red flag compared with sending one box with 250 pieces in it.
● Stagger shipments and send more than you need to avoid the entire batch being held up. Send 4–6 boxes per day over consecutive days until all products are sent; if one day's shipment is held up, the others may get through and your fielding can still start on time.
● If testing multiple products, don't put all of one type of product in the same box. Include all products being tested in each box, so that testing can begin on those that get through.
11.4.5.2 International shipping – country-specific guidelines
Because trade agreements between countries change constantly, it is important to have the company's trade advisory office find the specific regulations for international shipping of products to be tested in other countries. As an example, these are some of the current exporting requirements for products to be tested in the following markets:
● Russia – GSEN Hygiene Certificate, which certifies whether your goods conform to the hygiene norms of the Russian Federation and Belarus; Quality Certificate; Certificate of Manufacturer.
● China – some or all of the following may be required, depending on the products tested: Certificate of Quarantine; Cosmetic Label Inspection (could take up to one month to approve); Sanitation License.
● Japan – Import License; Cosmetic Sales License for fewer than 36 pieces of each item.
● Brazil – Ministry of Health review; Term of Responsibility with notarized signature; Sanitary Authorization to import (cosmetics); Operation Authorization; Certificate of Technician.
● Mexico – Sectorial Importers License; Sanitary Certificate.
● India – Import/Export Code provided by the Directorate General of Foreign Trade.
[Chapter map: Chapter 1: Setting the Direction: First, Know Where You Are; Chapter 2: The Consumer Explorer: The Key to Delivering the Innovation Strategy; Chapter 3: Invention and Innovation; Chapter 4: Designing the Research Model; Chapter 5: What You Must Look For: Finding High Potential Insights; Chapter 6: Tools for Up-Front Research on Consumer Triggers and Barriers; Chapter 7: Tools for Up-Front Research on Understanding Consumer Values; Chapter 8: Tools to Refine and Screen Product Ideas in New Product Development; Chapter 9: Tools to Validate New Products for Launch; Chapter 10: Putting It All Together: Building and Managing Consumer-Centric Innovation; Chapter 11: Words of the Wise: The Roles of Experts, Statisticians and Strategic Research Partners; Chapter 12: Future Trends and Directions]
Chapter 12
Future Trends and Directions
Jacqueline Beckley, Dulce Paredes and Kannapon Lopetcharat

"We become what we behold. We shape our tools, then our tools shape us."
Marshall McLuhan, Canadian communications theorist, as quoted by August de los Reyes, designer with the Artefact Group

The future is happening now. What we see today in terms of consumer and product trends and research tools provides the framework for the future. We interviewed thought leaders, reviewed what our contributors have written and considered other discussions predicting future applications and trends, and what they mean for Consumer Explorers (CEs) as insight leaders and strategic innovators. We see six directions building for the future:
(1) Technology will continue to be a driver.
(2) The people we seek to understand will continue to be more engaged in products that matter to them.
(3) Fun, in the form of playfulness and gaming, will increase in importance as a way to enhance engagement with people for deeper understanding.
(4) Non-traditional "data" (hybrids) will emerge from the intersection of technology and people connections.
(5) Translational research – the more "data" we have, the more we will want to really understand, and we will learn that too deep a love of "data" leads us down paths that lack a future.
(6) When everyone feels they understand data, who really will understand data?
12.1 Digital technology will continue to drive mobility, convenience and speed

We have cited again and again how portable devices, high-speed web connections and cloud applications have changed the way consumers live and communicate, and how we, as Consumer Explorers, have interacted with them. At the unveiling of Apple's iCloud concept, Steve Jobs (2011) described how we will live our digital life:

"About ten years ago we had one of our most important insights and that was the PC was going to become the digital hub of your digital life … and it did. But it has broken down in the last few years. Why? Because devices have changed. They all have music, photo and video … Keeping these devices in synch is driving us crazy … So we have a great solution … The solution is our next big insight. We will demote the PC and the Mac to be just a device and we're going to move the center of the digital hub, the center of your digital life to the cloud … Everything will be in synch … They can talk whenever they want … And it all works."

Following Jobs' announcement, the Wall Street Journal writer Walter Mossberg (2011) described it as the post-PC era. He predicts the future to be mobile, local and social. This means having sleek, powerful devices that can do many things: you can walk around with a mobile device that knows your location, do social things with your friends and engage in e-commerce if you choose to do so.

So what do the post-PC era and new digital technologies mean for us, Consumer Explorers? We have the opportunity to be "ultimate observers". We can participate in consumers' experiences through intimate observations and conversations. It is important, then, to look for quality of
information rather than volume. It may be more meaningful to gather information from a small group of people who represent a myriad of experiences than from large surveys with minimal impact. It is the way to get to the why of an action or interaction. But here is the problem: our humanness loves fast and easy. We are wired for it. How does one balance the desire for quick and simple (some would call it shallow) with the need to drink deeply from this opportunity for more consumer/people intimacy? It starts by being aware of the mixed desires and then being very mindful of the actions taken. Precision listening doesn't just happen; it means learning, practicing and wanting the experience of true connection. As a result, not everyone will be able to enjoy this future. Those who live in the vaults of their piles of quantitative data will be left behind.

We can learn from Bill Moyers, a seasoned journalist and chronicler of people's lives. In a recent interview with Jon Stewart (Moyers, 2011), Moyers said he wants to interview people who "want to" reveal their thinking. He tries to get the meaning behind the words, and that is when an interview becomes a conversation in which both the interviewer and the interviewee become comfortable exploring the topic. That dialogue becomes effective when we experience their life and become empathetic to what the person is feeling, seeing and doing in real time – in experience time. The CE has to be skilled in keen observation and listening, understand what must be done with the information being gathered, and be able to persuade consumers to "want to" participate in your research activities.
12.2 Engaged people (consumers) will continue to drive products and research

Skepticism around being marketed to and being "used" will continue to swirl. We are not sure it will rise, but we are sure it will behave more like the shoreline and the tides: every now and then more people will become aware of an incident and think about modifying their behavior – rising (examples in 2011 were the breaches of private information affecting Xbox consumers, or the awareness among some that their ubiquitous smartphones can track all of their movements and report on them) – or subsiding as some other incident captivates the mind-set for the moment (like a horrific murder story or a terrible natural disaster, such as a drought or hurricane).

What we do know is that the technology described above is allowing people to demonstrate their expertise more than ever. In the health and beauty category, there are YouTube videos by young people training us all on how to apply a type of make-up and making recommendations about the products. In 2011, MakeupAlley, a user-generated website, had as many reported members (1.1 million) as Allure magazine (Saint Louis, 2011). Whether it is this group of women who trust each other for the "truth" or the fans of the favorite snack site Taquitos.net, these engaged and highly focused people should be changing the way anyone builds products for them, the consumer. This will not stop and needs to be embraced.
12.3 Play and games will enhance respondent participation

August de los Reyes, designer with Artefact Group (personal communication with the authors, 15 October 2010), predicts the notion of play will be the big force in the next ten years, in the same way social media was a defining force in the past decade. He cited how LinkedIn increased people's participation by giving them a metric of their percent completion, or "score", based on the amount and range of data they are willing to share in their profile. Two years ago LinkedIn introduced the profile completion bar, an example of a play dynamic that is basically a scorecard; more users completed their profile just to get it to 100%. Social network sites like Facebook encourage instant feedback and Love-it! Hate-it! type behavior (Chapter 6.3) through "thumbs up" and "thumbs down" buttons and counting. Times change, but not so much: gladiators amused, lived and died by a crowd mentality of winning and losing – a thumbs up or down may allow a blogger to succeed or fail today.

Play and games are important elements in the way we collect and look at information. Themes of fun do really well as a motivating force that allows participants within the system to learn and accomplish tasks within the game and derive pleasure from doing the work (it makes consumers "want to participate"), which in the CE world means understanding and data collection. Product and service design that leverages the dynamics of play and games, applied to data collection to motivate people, will continue to grow. Leonard Murphy (on the GreenBook Market Research Blog) reiterated in 2011 the rise of gaming: "I am an adherent of the school of thought that new approaches that incorporate elements of gamification, social networking, and mobile apps are the way to address the issue of respondent engagement in research; after all, 700M people on Facebook can't be wrong, can they?"

Many of the respondent interfaces from innovative market research companies like Brainjuicer are highly interactive and edgy and visual in nature. New market research companies, like Spark, promote themselves as doing focus groups differently, incorporating tools from childhood and corporate ideations to look more like play by encouraging respondents to create large collages with all kinds of materials (Vega, 2011). Innovative consumer and market research companies employ methodologies that used to be considered qualitative (see Chapter 6.1) and enhance the ways the data are collected to be both fun and quantifiable through well-designed computer interfaces. As we learned in Chapter 6.5 (quantitative anthropology, QA), cell phones and portable recording devices have been used effectively to capture the activities of consumers in real time. Video diaries and blogs have become part of the data input. Many of the online community panels also function as a social network of sorts, where members freely share their thoughts and experiences through text and video. This enables respondents to feel less intimidated by the questions and to become more engaged. The ease and speed of capturing data from this new breed of "respondents" will continue to grow, and innovative and experimental research firms will continue to incorporate interactive features and self-checks to balance quality and depth of responses.
12.4 Hybrid data and patterns

The ease of information collection and sharing brings a new challenge: integrating and synthesizing large volumes of hybrid data (text, voice, photo and video). The definition of what is considered "consumer data" has forever changed. New fields in data science will continue to find solutions for summarizing and synthesizing data beyond traditional statistics (Hansen, 2011). Consumer Explorers have to be open to expanding their tools repertoire to include new ways of presenting data and summarizing its essence beyond averages, frequencies and numbers. The challenge will be the legacy approaches, which will become tired and no longer useful, yet remain familiar to executives, particularly the senior executives who grew up with methods that are now dated and worn. This will be an area of struggle. One CE is looking forward to the day that these "dinosaurs of consumer information retire". Decision makers need to be able to be sure, and if a method (drivers of liking, monadic testing or descriptive analysis, to name some) is believed to be part of how they succeeded, it is very difficult to leave favorite approaches behind (Fitzsimons et al., 2002; Hawkins and Hoch, 1992; Lynch and Srull, 1982). Battles will occur. Victory will be tracked by the products that become beloved.

The format for successful hybrid data will be a continuum of data that has a meaningful relationship to each other and is based on solid principles of science. Instead of a "mixed methods" approach, which is really nothing more than sticking together tools that have been around, it will allow for the declaration of qual-quant. Hybrids will:
Acknowledge the need to triangulate understanding so that you have context and are as sure of the data as possible Do this in the most resource preserving fashion With tools that build upon each other, having enough overlap to form a continuum or iterate Create a fast story that allows more rapid deploying of the product and redeploying of the resources that focused on the project.
A big challenge in dealing with hybrid data is the ability to identify the real insight and how you amplify it so others can understand and differentiate the results. New research tools such as idiographic mapping (Moskowitz et al., 2006), behavior archetype (Chapter 6.5) and Mind Genomics™ (Chapter 7.2) are useful approaches to group this “data” and find meaningful patterns from new types of measurements and activities. Neuroimaging and neuroanalytics play a role in this mix, but their usefulness and value are still to be fully understood. Daniel Ennis (Chapter 3) predicts that advances in neuroscience tools will make the field of psychology obsolete and will transform how we measure the “chemistry of choice”. Another field guide contributor, Donna Sturgess (personal communication with DP on 29 July 2011) writes:
12
“The future of market research will come from expanding beyond today’s conscious survey of consumer preference to include deeper, non-conscious
Beckley_c12.indd 377
2/7/2012 7:56:04 PM
378
Product Innovation Toolbox
preference. Conscious, rational decision-making involves a careful, deliberate process one is fully aware of when weighing the costs and benefits of buying. Non-conscious decision making is a complex of emotions, impulses, reflex actions, habits, memories and instincts that occur quickly and automatically, with little awareness or feeling of effort. Much of the brain is constructed to support non-conscious processes, and buying behavior emerges between the interplay of both systems.” Clearly Sturgess is supporting a view that says that consumer research as we know it may have more than a few fatal flaws! The CE’s job is to look at the individual data and understand how many have similar traits and which of these are relevant to the business. And then to find the ways to represent the information in a meaningful, compelling and convincing fashion.
12.5
Translational research The future is the blurring and blending of qualitative and quantitative research tools. More important than the tool itself is the insight that you get that is understandable and can be applied to move the business forward (Paredes et al., 2008). And you will know that you are not listening and you do not get the right information when people leave your brand in droves; the engaged people we talked about a few points ago. So eventually, what is the framework that executives and Consumer Explorers need to do? You need to be an ACE: (1) Apply what was explained in Chapter 1 on how to be an independent voice in a corporation. (2) Champion a learning organization through use of diagnostic tools like knowledge mapping. (3) Evoke a mindset that is driven to create action and produce products that people really need and want through deep understanding of who they are and what creates a happy, sustainable world. When executives and Consumer Explorers can truly walk in the shoes of their consumers and with their consumers they are looking for and totally understand them, then they will be more successful. Good luck brave new explorers.
References 12 de los Reyes, A. (2011) Learning by Design. TEDx Overlake. http://www.youtube.com/ watch?v=nv0dObM5XGk Fitzsimons, G.J., Hutchinson, J.W., Williams, P., et al. (2002) “Non-Conscious Influences on Consumer Choice”. Marketing Letters 13 (3), 269–279. Hansen, M. (2011) The Intersection of Data and Design. New York Academy of Sciences e-Briefing, 27 June 2011.
Beckley_c12.indd 378
2/7/2012 7:56:04 PM
Future Trends and Directions
379
Hawkins, S. and Hoch, S. (1992) “Low-involvement Learning: Memory without Evaluation”. Journal of Consumer Research, 19, (2), 212–225. Jobs, S. (2011) “Apple’s World Wide Developers Conference” (WWDC). San Francisco, CA: 6 June 2011. Keynote Address. Web. 22 July 2011. http://events.apple.com. edgesuite.net/11piubpwiqubf06/event/ Lynch, J. and Srull, T. (1982) “Memory and Attentional Factors in Consumer Choice: Concepts and Research Methods”. Journal of Consumer Research, 9 (1), 18–37. Moskowitz, M., Beckley, J. and Resurreccion, A. (2006) Sensory and Consumer Research in Food Product Design and Development. Ames, IA: Blackwell Publishing Professional. Mossberg, W. Interview by Charlie Rose (2010) Web. 22 July 2011. http://www. charlierose.com/guest/view/746 Moyers, B. (2011) Interview by Jon Stewart, The Daily Show. http://www.thedailyshow. com/watch/wed-june-1-2011/bill-moyers-pt–2 Murphy, L. (2011) GreenBook Market Research Blog (July). Paredes, D., Beckley, J. and Moskowitz, H. (2008) “Bridging Hedonic and Cognitive Performance in Food and Health and Beauty Aide (HBA) Products”. Society of Sensory Professionals Conference, Covington, Kentucky, 5–7 Nov 2008. Saint Louis, C. (2011) “Someone Just like Me Said, ‘Buy It’”. New York Times, 28 July. Thursdays Styles. Vega, T. (2011) “Focus Groups That Look Like Play Groups”. New York Times, 29 May 2011. Web. 22 July 2011.
12
Beckley_c12.indd 379
2/7/2012 7:56:04 PM
Index action standards, 27 Addressable Minds, 216–17 advertising language see language use aesthetic experiences, 173 agency partners, 353–64 benefits, 354–7 developing the relationship, 357–9 holistic approaches, 353–4 international perspectives, 361–4 maximizing the potential, 359–60 possible pitfalls, 361 airline meals, 160, 164, 168–70 anthropological studies, 136–48 background and contexts, 136–40 concept outline, 140–41 methodologies, 141–5 practical applications, 145–7 product developments and future potential, 147–8 Apple, 58–9, 177, 374 aroma screening tests, 278 case study, 283–7 art and craft experiences, 97 Artefact Group, 376 attributes of products see product attributes AXE fragrance (Unilever), 61–2 Bartlett, Frederic Charles, 133 behavior archetype tools, 377 see also quantitative anthropology behavior patterns (consumers), 6, 57, 139–40 emotional cues, 61–2, 155–6 hierarchical approaches, 226–9 information and advice needs, 61 and packaging, 59–60 storage of products, 60–61
vs. consumer survey data, 139–40 see also consumer decision making behavioral measures (overview), 312–13 beliefs about products see consumer attitudes; consumer’s values; perceptions of a product benchmarking, 246–7, 319–20 Benefit Hierarchy Analysis, 224–38 benefits and rationale, 226–9 comparisons with traditional methods, 225–6 ranking criteria and methods, 229–34 sensory attributes and consumer preferences, 234–8 useful applications, 229–34, 234–8 “bird’s eye shot” presentations, 14 Blue Ribbon Sports (BRS), 33–4 Boulton, Matthew, 33 “bounded rationality”, 226–9 Bowerman, Bill, 33–4 Box-Behken designs, 195 Boyd, John, 93 “brain trusts”, 355–6 Brainjuicer, 376 brainstorming sessions, with crossfunctional experts, 335, 351 Brazil, shipping guidelines, 371 CAB panels see consumer advisory boards (CABs) Campbell, Joseph, 14–15 causal probability of consumer choice, 229–30 Central Intelligence Agency, 367 central location tests (CLT), 321–2, 352 change implementation, challenges, 24
Product Innovation Toolbox: A Field Guide to Consumer Understanding and Research, First Edition. Edited by Jacqueline Beckley, Dulce Paredes and Kannapon Lopetcharat. © 2012 John Wiley & Sons, Inc. Published 2012 by John Wiley & Sons, Inc.
381
Beckley_bindex.indd 381
2/4/2012 1:05:30 AM
382
Index characteristics of products see product attributes China, shipping guidelines, 370 choice models, 38–9 see also consumer decision making cigarette products, 35–6 client management, 9–11 guidelines, 10–11 client—research agencies see research partners Clorox®, 61–2 CLT see central location tests Coca-Cola, 101 coffee products, 35–6 cognitive maps, 132–3 comparative choice analysis, 237–8 competition within the marketplace, 319–20 computer technologies see data capture technologies “concept”, definitions, 243 concept tests and measures, 242–8 characteristics and definitions, 243–4 general considerations, 244 key steps, 244–7 new and contemporary measures, 247 optimization stages, 245 target consumers and populations, 247 conceptualization stages of design, 167–70 conductive thinking, 93–4 Conjoint Analysis Plus, 192–221 concepts and characteristics, 192–3 examples, 11–12 experimental design, 201 future directions, 220–21 grouping stimuli, 199–201 methodologies, 193–9 models, 201–3 new developments, 207–13 next generation advancements, 213–15 ratings data analysis, 197–8 reporting and dissemination, 199 results, 198–9 results presentations, 203–6 selection of rating scales, 195–6 using the results, 206–7 conjoint measurement see Conjoint Analysis Plus “connector” roles see Consumer Explorers consumer advisory boards (CABs), 265–75
Beckley_bindex.indd 382
benefits and applications, 266–7 case studies, 274–5 development steps, 267–74 facilitator roles, 267 objective setting, 267–8 preparation of guidelines, 272 selection and recruitment, 268–72 session formats, 273 team debriefing, 274 see also research partners consumer attitudes, 57, 154 see also consumer values; perceptions of a product consumer behaviors, 6, 57, 139–40 emotional cues, 61–2, 155–6 hierarchical approaches, 226–9 information and advice needs, 61 and packaging, 59–60 storage of products, 60–61 vs. consumer survey data, 139–40 see also consumer decision making consumer concerns assessment, 157–9 typologies, 153–4 see also consumer values consumer decision making, 309–12 conscious vs. unconscious drivers, 377–8 Consumer Explorer(s), 20–30 definitions, 20–21 developing self as a “brand”, 20–21 functions and roles, 20–21, 45–6, 50 as “breakthrough facilitator”, 329–31 impact of technology changes, 374–8 key skills and enablers, 22, 338 leadership roles, 25–9, 337–8 practical advice, 29–30 role requirements, 22–4, 337–8, 378 team-working, 331–6 Consumer Explorer teams, 331–6 developing empathy, 332–3 generating interactions, 334–5 group creation, 333–5 importance of active participation, 331–2 learning from failures, 335–6 using immersion techniques, 332–3 using uncomfortable situations, 333 working directly with consumers, 332–3 see also insight teams consumer research tools see research tools and methodologies
2/4/2012 1:05:30 AM
Index “consumer researchers”, 22 consumer risk (type-2 errors), 25–6 consumer schema, models, 236–8 consumer segmentation models, 236–8 consumer technical leaders see Consumer Explorer(s) consumer-focused organizations, 329–31 consumer-generated ideas, 159–63 and quantitative anthropology approaches, 146–7 see also insight(s) consumer values, 118–20 research tools, 178–238 benefit hierarchy analysis, 224–38 conjoint analysis, 192–221 Kano satisfaction surveys, 178–90 and standards, 153–4 see also consumer concerns context in product use, 164–5 Context Preference Ranking, 291–301 current approaches, 292–4 free-choice methodologies, 294–300 future directions, 301 theoretical background, 300–301 corporate life see organizational structures and cultures cost reduction activities, common errors, 26 cottage cheese marketing, 115–18 Counterintuitive Marketing (Clancy and Krieg), 101 “craveability” of products, 217–18 creativity, 93 future research directions, 172–3 stimulation methods, 165–7 theoretical background, 134 crowd-sourced activities, 96 cultural influences characteristics, 118–19 on product use, 59 see also international opportunities; quantitative anthropology culture of organisations see organizational structures and cultures curiosity attributes, 338 D-optimal designs, 195 DASA Preference Mapping, 190 data capture technologies hybrid data trends, 377–8
Beckley_bindex.indd 383
383
new technologies, 148, 376 data presentations (overview), 11–13 debriefing techniques, 274 decaffeinated coffee, 35–6 decision-making and choice see consumer decision making “degree of importance” surveys, cf, Kano Satisfaction Model, 186–8 “Delighter” attributes, 188–9, 334–5 De los Reyes, August, 373, 376 demographic data, 57–8 design cycle, 150–52 see also emotional responses and design; innovation; new product development (NPD) digital technologies, 374–5 Drake, Mary Ann, 115–18, 119 driver/optimizer attributes, 114–15, 188–9 education and training, for insight leadership, 7–9 elicitation techniques (language and meanings), 71–2 Emerson, Ralph Waldo, 333 emotion measurement forms and types, 155–6 in design-orientated research, 156–7, 196 in new product development, 159–63 emotion theories, 152–3 emotional cues, 61–2 emotional responses and design, 149–73 background and development rationale, 149–54 language use, 109–10 polarizations, 111–12 positive impact studies, 160–63 research tools and methodologies emotion measurement, 155–6 generating concepts (conceptualize stage), 167–70 identifying concerns (understanding stage), 163–5 new product development, 159–63 product design assessments, 156–7 scaling techniques, 38–9 stimulating creativity (envision stage), 165–7
384
Index emotional responses and design (cont’d) testing products (evaluation stage), 170–71 values and concerns assessment, 157–9 see also perceptions of a product errors (statistical), 25–6 ethnographic studies, 136–9 characteristics, 137 drawbacks and benefits, 138–9 see also quantitative anthropology EUPR see extended use product research evaluating product designs, 170–71 see also product feedback (consumers) Ewald, Jeff, 28 execution factors, 45 experiences of being a consumer, 96–7, 331–3 see also immersion techniques experiences of using products see perceptions of a product; product testing experimental design, 192–3, 342–3 choice and selection, 194–5 key characteristics, 201 methodologies for analysis, 193–221 multi-factor combinations, 345–6 scale decisions, 195–6 technology and software advances, 347 theoretical background, 346–7 using statistical data and models, 343–7 experimental studies see experimental design; pilot studies “experimentation”, 192–3 expertise, 333 exploratory data analysis (EDA), 229 export arrangements, 368–70 extended use product research, 304–16 common challenges, 304–5 concept outline, 307–8 future directions, 316 problems with traditional approaches, 306–7 processes and stages, 308–9 theoretical basis, 315 traditional decision-making approaches, 292–4 understanding consumer decision making, 309–12
  understanding consumer segments, 309
  using behavioral measures to assess viability, 312–15
extensibility, 36
executive summary reports, 28–9
fabric conditioners, 162–3
Facebook, 376
Facial Action Coding (FAC) system, 155–6
facial expressions, 155–6
facilitator, 103, 109
factorial designs, 194–5
failures and errors (of organizations), 24
  impact on risk taking, 335–6
  see also product failures
fears and anxieties, 338
financial investment in new products, 329
first-order questions, 25, 104
The Five Dysfunctions of a Team (Lencioni), 252
flash profiling, 83–5
  analysis and interpretation, 85–8
focus group interviews, 101
  see also consumer advisory boards; interviews with consumers
foreign embassies, 367
foreign-based supplier partnerships, 361–4
“forming”, 334
“four-stage creative process model” (Lubart), 128–9
Franklin, Benjamin, 335
“free-choice in context preference ranking”, 294–301
  basic methodology, 294–6
  data analysis and interpretation, 296–300
  future directions, 301
  theoretical background, 300–301
free elicitation techniques, 71–2
freight transport, 368–70
gameboard “model building”, 122–34
  basic techniques (narrative and graphic), 123–6
  format stages, 125
  goals of research, 129–30
  interview components, 127–8
  outcomes analysis, 129–31
  product developments and limitations, 131–2
  reliability and validity, 128–9
  theoretical background, 132–4
  types of analysis, 130–31
games for team-building, 334
General Foods (Kraft Foods), 11
General Mills, CAB case study, 274–5
generalized procrustes analysis (GPA), 85–6
Geneva Emotion Wheel (Scherer), 156
Gillette, Maryanne, 217
glass-blowing, 97
global testing, 365–70
  country-based information inputs, 367–8
  implementing product testing, 368–70
  partnership working (foreign-based suppliers), 361–4
goals and goal setting
  definitions, 153–4
  during initial meetings, 10
GPA see generalized procrustes analysis (GPA)
graphic techniques, for product attributes/benefits, 123–7
GreenBook Market Research blog, 376
group compositions, 102–4
group dynamics, 103–4
  uncomfortable situations, 333
group formation, 334
hedonic measures (overview), 38–9, 292–3
Heinz, 250–53
“hero” professionals, 14–15
Hershey Company, 328–39
hierarchical value maps (HVMs), 77–8
hierarchy analysis
  concepts and theory, 226–9
  see also Benefit Hierarchy Analysis
holistic partnerships, 353–4
home remedies, 59
home-use testing (HUT), 322–3
  and qualitative multivariate analysis, 106–9
  and rapid product navigation techniques, 286
hybrid data, 377–8
hypothesis creation, 27
ideal point concepts, 39
ideaMap®, 195, 196, 202–3, 208–10
ideas generation see creativity; insight(s)
ideas presentation
  guidelines for initial stages, 10–11
  reporting results, 11–13
IDEO™, 56
idiographic mapping, 377
idiographs, 110
imagery use, 10
immersion techniques, 91–9
  conductive thinking processes, 93–4
  getting started, 94–5
  processes involved, 95–8
  results and taking action, 98–9
  use by “transformational teams”, 331–3
impact of interaction studies, 194
in-context interviewing, 122
in-house experts, 348–52
India, shipping guidelines, 371
“Indifferent” attributes, 188–9
information on products, 61
innovation
  definitions, 32
  execution factors, 45–6
  goal setting, 24, 45
  cf. “invention”, 32–6
  limits of extensibility, 36
  new processes tools and models, 48–50
  organizational influences, 44–5
  role of the Consumer Explorer, 20–24, 45–6, 50
  scaling intensities and emotions, 36–9
  screening considerations, 244–5
  setting up programs, 46–7
  stages, 44–6
  strategic boundaries and priorities, 46–7
  using in-house expertise, 348–52
insight(s), 54–63
  characteristics, 55
  definitions, 7, 54–5
  development tools and methodologies (overview), 56–8
  ownability, 55–6, 58–62
  understanding behaviors and attitudes, 57
  use of demographic information, 57–8
  see also individual tools; perceptions of a product; research tools and methodologies
insight leaders, 6–7
  attributes and skills, 7–8
  definitions, 6–7
  education and training, 7–9
  see also Consumer Explorer(s)
insight teams, 249–64
  case studies, 263
  composition, 252
  definitions, 250–51
  functions and opportunities for use, 251, 262–3
  future directions, 263–4
  methods of working, 252–62
  motivating members, 261–2
  process implementation, 256–61
  recruitment, 252–6
  selecting members, 252
  see also transformational teams
“instructions for use”, 61
interaction effects studies, 194–5
international opportunities
  creating research partnerships, 361–4
  information sources, 367
  multi-cultural consumer research, 365–70
  product testing practicalities, 368–70
  using country-specific data, 367–8
International Pangborn Sensory Conference (2009), 26
international shipping guidelines, 368–70
interviews with consumers
  choice of techniques and tools, 101–2
  laddering techniques, 72–8
  for model construction, 127–8
  problems and solutions, 78–81
  use of flash profiling, 83–5, 85–8
  use of Kelly’s repertory grid, 81–3, 85–8
  multi-attribute data sets analysis, 85–8
  use of qualitative multivariate analysis, 100–120
“invention”, 32
  cf. “innovation”, 32–6
  see also new product development (NPD)
investment in new products, 329
iterative qualitative-quantitative research process (IQQR process), 48–50
Japan, shipping guidelines, 370
Jobs, Steve, 374
Kalan, Jonathan, 215
Kano Satisfaction Model, 113–15, 178–90, 313–14, 334–5
  applications, 179, 313–14
  basic six steps, 179–86
  concepts and philosophy, 188–90
  cf. “degree of importance” surveys, 186–8
Kansei engineering studies, 345–6
Keller, Helen, 331
Kellogg’s, 189
Kelly’s repertory grid technique, 81–3
  analysis and interpretation, 85–8
Khan, Mehmood, 43
Knight, Phil, 33–4
knowledge mapping, 26–7
  see also mapping exercises
“knowledge workers”
  client management, 9–11
  forms and roles, 4–6
  personality types, 15–16
  presentation skills, 11–13
labelling, international requirements, 368
laddering interviews, 72–8, 78–81, 159
  application problems, 78–81
language use, 68–9
  elicitation of meanings, 71–2
  recording consumer experiences, 109–10
  see also perceptions of products
laundry detergent, packaging, 60
leadership, 337–8
  importance, 24, 29
  practical tips, 25–9
learning opportunities
  from failures and mistakes, 24
  from literature, 8–9
learning theories
  cognitive maps, 132–3
  schema (knowledge structures), 133
Lencioni, Patrick, 252
letterboxing, 96
lifestyle data, 57–8
“liking” performance testing, 343–4
  see also “Love-it / Hate-it” voting
List of Values (LOV) typology theory (Feather), 119–20
listening skills, 273, 375
literature, value of, 8–9
logit models, 39
“Love-it / Hate-it” voting, 111–12
  examples, 117–18
  impact of new technologies, 376
LSA maximum coverage (Fayle and Ennis), 301
Luce, R. Duncan, 39
M-alternative forced choice method, 38–9
McCormick and Company, 217
McDaniel, Mina, 100–101
McFadden, Daniel, 39
MakeUpAlley, 375
mapping exercises
  in qualitative multivariate analysis, 112–13, 117–18
  see also preference mapping
market research, 54
  see also insight(s)
market researcher, 5–6
  see also Consumer Explorer(s)
market size, 318–19
“me too” products, 329
means-end chain (MEC) theory, 73
Mexico, shipping guidelines, 371
Miller Brewing Company, 35–6
Mind Genomics™, 207, 217–20, 377
mistakes (by consumers), 58–9
  see also failures and errors (of organizations)
“mixed methods” approaches, 377–8
mobile phones
  data capture technologies, 148, 376
  new product research, 161–2
  testing and evaluating new designs, 170–71
model construction techniques see gameboard “model building”
models for consumer-centric innovation, 328–39
  key points, 329
moderators, 102–4
Morris, Philip, 35
Mossberg, Walter, 374
motivation
  and product value, 69
  and team-building, 261–2
Moyers, Bill, 375
multi-attribute data sets
  analysis techniques, 85–8, 144–5
  see also hybrid data
multi-cultural consumer research, 365–70
  home country stakeholder input, 366
  market-specific data, 367–8
  practicalities, 368–70
  product testing checklists, 369
“multiple-selves”, 57–8
“must-be” attributes, 188–9
“must-haves/optimizers/delighters” product classifications, 188–9, 334–5
narrative techniques, for product attributes/benefits, 123–6
NASCAR, 92–3
negative responses, 159–60
nervousness, 10
nested analyses, 210–13
neuroimaging technologies, 247, 377–8
new product development (NPD)
  common research errors, 25–6
  current approaches, 47–8
  design stages, 150–52
  emotion research, 159–63
  emotional responses and design implications, 152–4
  generating concepts, 167–70
  international product testing, 365–70
  investment risks, 329, 336
  key processes, 47–8
  margin losses, 336
  positive impact studies, 160–63
  product failures, 48, 305
  studies to understand concerns, 163–5
  success guidelines, 123
  testing and evaluating new designs, 170–71
  tools to refine and screen, 242–301
  tools to understand consumer values, 178–239
  tools to validate products, 304–24
  use of benchmarks, 246–7
Nike, 33–4
normal distributions, 38–9
NPD see new product development (NPD)
null hypothesis, 25–6
observation vs. surveys, 139–40
  see also ethnographic studies; quantitative anthropology
Oral B™, 56, 215–16
orchid hunting, 96
organizational factors, 44–5
organizational structures and cultures
  influence on innovation, 44–5
  politics and compromise, 13–14
  refocusing towards the consumer, 329–31
Orlean, Susan, 96
over-simplification errors (data presentations), 81
“ownability” of ideas, 55–6, 58–62
P & G, 62
packaging
  consumer behaviors and adaptations, 59–60
  feature dilution risks, 336–7
  international requirements, 368–70
Pagès, Jérôme, 112
paired preference testing, 293
PANAS scale, 156
partnership working see research partners
Pasteur, Louis, 10
“pattern analysis”, 144–5
Pepperidge Farm, 348–9
perceived values see consumer values
perceptions of a product
  consumer attitudes, 57
  consumer language use and insights, 69–88
  emotional responses, 152–4
  see also emotional responses and design; satisfaction surveys
perceptual intensities, distribution models, 38–9
performance predictions, 324
personality types
  Consumer Explorers, 16
  within consumer focus groups, 103–4
physiological responses, 156
pilot studies, 10–11, 24
  see also small-scale/in-market launches
Plackett–Burman designs, 194
platform development, and QA approaches, 146
polarized responses, 111–12
population criteria and evaluations, 182–3
predicting performance, 324
preference mapping, 190, 289
preferential choice, 38
PrEmo measuring tool (Desmet), 156
presentations
  providing what clients want, 11–13
  reporting results data, 11–13
“presentertainment”, 12–13
price ratings scales, 195–6
pricing decisions, 216
probabilistic causality, 229–30
probabilistic models, 37–8
probabilistic neural network (PNN) analysis, 144–5
process models, for NPD, 47–8, 49–50
Procter & Gamble, 6, 11
“producer/manufacturer risk” (type-1 errors), 25–6
product adaptations, 58–9
  feature dilution risks, 336–7
  see also extended use product research; product line extensions
product attributes
  classification systems, 188–9
  and consumer motivation, 69
  description techniques, 123–7
  key criteria, 334
  language use, 68–9, 109–10
  selection, 188–9, 194
  sensory criteria and consumer preference, 234–8
  survey classifications and analysis, 181, 183–8
  see also perceptions of a product
product concept validation tests, 317–24
  aims and objectives, 317–18
  consideration of the competitive set, 319–20
  innovation forms, 318
  sales potential, 320
  success metrics, 324
  types of test, 320–24
  understanding target markets, 318–19
product emotions, 152–4
  basic model (Desmet), 152–3
  see also emotional responses and design
product experiences
  emotional responses, 152–4
  importance of context, 164–5
  see also extended use product research; product concept validation tests
product failures, 48, 305, 329, 335–6
product feature dilution, 336–7
product feedback (consumers)
  hierarchical approaches, 227–9
  see also perceptions of a product
product innovation see “innovation”; new product development (NPD)
product line extensions
  decision-making approaches, 292–4
  see also extended use product research
product optimization studies, 342–7
  experimental design constraints, 342–3
  feature dilution risks, 336–7
  and QA approaches, 145–6
  statistical experimental design approaches, 343–7
  see also product line extensions
product researchers
  foci of knowledge, 6
  interaction with other knowledge professionals, 4–6
  personality types, 15–16
  presenting ideas and results, 9–13, 13–14
  understanding the organization’s needs, 13–14
  see also Consumer Explorer(s)
product samples
  classification methods, 112
  international shipping, 365–70
  mapping exercises, 112–13, 117–18
product space
  identifying product set, 105–6
  qualitative, 279
product storage, 60–61
product testing, 170–71
  home-use testing (HUT), 106–9, 286
  international markets, 365–70
product use
  in combination, 59
  common mistakes, 58
  customer behaviors, 59–60
  see also perceptions of a product; product emotions
professional/trade organizations, 350, 367
professional status
  self-knowledge, 15–16
  transitions from student status, 13–14
  types, 14–15
project dossiers, 28–9
project meetings
  guidelines for success, 10–11
  initial sessions, 10, 25, 29
project objectives
  establishing the focus, 26–7
  importance of understanding, 25
“proof of principle” studies see pilot studies
prototype experimentation methods, 288–9, 343–6
purchase intent scales, 195, 292
QA (quantitative anthropology) realsight® system, 141–4
QMA see qualitative multivariate analysis
qualitative multivariate analysis, 100–120
  benefits of use, 103–4
  effectiveness as screening tool, 117–18
  stages, 104–15
  home-use testing, 106–9
  Kano satisfaction survey, 113–15
  mapping exercises, 112–13
  use and practice examples, 115–18
  value perceptions, 118–20
qualitative research
  general characteristics, 265–6
  and quantitative approaches, 378
  see also individual methodologies
quantitative anthropology, 136–48
  background and contexts, 136–40
  concept outline, 140–41
  methodologies, 141–5
  practical applications, 145–7
  product developments and future potential, 147–8
questionnaire design, 181–2
R&D teams see in-house experts
radio ads, 247
rapid product navigation (RPN) techniques, 276–90
  case studies, 283–7
  discussion flow and conduct, 280–83
  future directions, 289–90
  implementation steps, 277–83
  participant selection, 279–80
  theoretical background, 286, 288–9
  validating the findings, 283
RDE see rule developing experimentation (RDE)
reading and insight, 8–9
realsight® system see QA (quantitative anthropology) realsight® system
reporting results, 11–13
  formats, 28–9
  technical data presentations, 11–13, 28
research agencies, 353–64
  advantages, 354–7
  developing international markets, 361–4
  maintaining relations, 357–9
  maximizing investment, 359–60
  problems and pitfalls, 361
  selection decision making, 357
research agendas
  common errors and problems, 25–6
  role of the Consumer Explorer, 21–4, 337–8, 378
research findings, 54–5
  see also insight(s); reporting results
research partners, 353–64
  benefits, 354–6
  developing the relationship, 357–9
  examples, 356–7
  holistic approaches, 353–4
  international perspectives, 361–4
  maximizing the benefits, 359–60
  possible pitfalls, 361
research tools and methodologies, 65–324
  Benefit Hierarchy Analysis, 224–38
  common methods (trends), 305
  concept tests and measures, 242–8
  Conjoint Analysis Plus, 192–221
  consumer advisory boards (CABs), 265–75
  Context Preference Ranking and free-choice, 291–301
  emotion measurement for design, 154–73
  gameboard “model building”, 122–34
  immersion techniques, 91–9
  insight teams, 249–64
  Kano Satisfaction Model, 178–90
  language use and elicitation techniques, 68–88
  qualitative multivariate analysis, 100–120
  quantitative anthropology, 136–48
  rapid product navigation (RPN) techniques, 276–90
response surface designs, 345
results sharing, 28
  see also reporting results
Roebuck, John, 33
“rolling launches”, 323–4
RPN see rapid product navigation (RPN) techniques
rule developing experimentation (RDE), 11–12
running shoes, 33–4
Russia, shipping guidelines, 370
sales forecasts, 320
SAM see Self-Assessment Manikin (SAM) (Bradley and Lang)
sample classification methods, 112
“satisfaction”, 149–50
  see also emotional responses and design
satisfaction surveys, 113–15
  Kano Satisfaction Model, 178–90
  and repurchase, 139–40
scaling emotions, 38–9
scaling intensities, 36–8
schema theory (knowledge structures), 133, 227
Scott, Sir Percy, 34
screening experimental designs, 345
screening tools for new product ideas, 242–301
  contemporary concept tests and measures, 242–8
  Context Preference Ranking with free choice, 291–301
  rapid product navigation techniques, 276–90
  role of consumer advisory boards, 265–75
  role of insight teams, 249–64
Self-Assessment Manikin (SAM) (Bradley and Lang), 156
self-reporting methodologies
  for emotional experiences, 156
  for emotional responses to products, 157
Selling Blue Elephants (Moskowitz and Gofman), 218
sensory attributes of products and purchase intent, 234–6
sensory professionals, 5–6
  in-house expertise, 348–52
  key skills, 349–50
  see also Consumer Explorer(s); in-house experts
sensory research methods, 347–8
  common types, 305, 350–51
seven-step NPD model (Moskowitz et al.), 47–8, 49–50
Shapley value analysis, 301
Sheldon Concern Taxonomy Questionnaire, 157–9
shelf-life tests, 352
shipping guidelines, 368–70
Sieffermann, Jean-Marc, 83
silos and elements (conjoint analysis), 199–201
SIM matrices, 76–7
Simplex designs, 194
skill development, 97
small-scale/in-market launches, 323–4
social network sites, 376
Spark, 376
standards, 153–4
statistical data
  over-use of, 14
  “winstonizing” of, 11–13
statistical experimental designs, 342–7
  key types, 345
steam engine, 32–3
storage of products, 60–61
strategic design, 26
strategic research, role of the Consumer Explorer, 21–4, 337–8
structural unduplicated reach and frequency (SURF), 301
Sturgess, Donna, 377–8
SURF see structural unduplicated reach and frequency (SURF)
surveys (general), 137
  comparisons between types, 186–8
  vs. “observations”, 139–40
  see also ethnographic studies; satisfaction surveys
SYSTAT, 193, 201
tab-houses, 1
Taguchi designs, 345
Taquitos.net, 375
team-building, 261–2, 332–6
  use of games, 334
teams
  member selection decisions, 252
  see also Consumer Explorer teams; insight teams
technical research reports, 28–9
testing products, 170–71
  international markets, 368–70
theory of creativity, 133–4
Thurstonian models, 37–8
toilet cleaning products, 61–2
topline executive summary reports, 28–9
total unduplicated reach and frequency (TURF), 293–4
  limitations, 294
trade organizations, 350
  international markets, 367
Traf-O-Data, 24
transformational teams, 331–6
  developing empathy, 332–3
  generating interactions, 334–5
  group creation, 333–5
  importance of active participation, 331–2
  learning from failures, 335–6
  using immersion techniques, 332–3
  using uncomfortable situations, 333
  working directly with consumers, 332–3
translational research, 378
transport arrangements, international shipping, 368–70
triadic choice, 38–9
TURF see total unduplicated reach and frequency (TURF)
type-0/-1/-2 errors, 25–6
“uncomfortable” situations, 333
unconscious decision-making, 377–8
Unilever®, 60–62
“up-front” innovation, 44
US Department of Commerce, 367
US Navy, 34
user experiences see perceptions of a product; product testing
validation tools for new products, 304–24
  central location tests, 321–2
  extended use product research, 304–16
  home-use tests, 106–9, 286, 322–3
  product concept validation tests, 317–24
value diagrams, 110–11
  examples, 116
  see also consumer values
Van Gogh, Vincent, 93–4
verbalization and communication
  initial stages, 10
  listening skills, 273, 301
  visualizations of ideas, 10
VocationVacations®, 97–8
voting behaviors, 376
Watt, James, 32–3
“wave O” studies, 10–11, 24
Westin Hotels, 189
Wilson, Margaret, 93
“winstonizing” data, 11–13
word association techniques, 71–2
“wow” feelings about products, 62, 161–2
Yahoo! Inc., 101
YouTube, 375
Zaltman metaphor elicitation technique, 71–2