Gestures in Language Development
Benjamins Current Topics Special issues of established journals tend to circulate within the orbit of the subscribers of those journals. For the Benjamins Current Topics series a number of special issues have been selected containing salient topics of research with the aim to widen the readership and to give this interesting material an additional lease of life in book format.
Volume 28
Gestures in Language Development
Edited by Marianne Gullberg and Kees de Bot

These materials were previously published in Gesture 8:2 (2008)
Gestures in Language Development

Edited by
Marianne Gullberg Lund University
Kees de Bot University of Groningen
John Benjamins Publishing Company Amsterdam / Philadelphia
The paper used in this publication meets the minimum requirements of American National Standard for Information Sciences – Permanence of Paper for Printed Library Materials, ansi z39.48-1984.
Library of Congress Cataloging-in-Publication Data Gestures in language development / edited by Marianne Gullberg, Kees de Bot. p. cm. (Benjamins Current Topics, issn 1874-0081 ; v. 28) Includes bibliographical references and index. 1. Communicative competence in children. 2. Language acquisition. 3. Gesture. 4. Semantics. 5. Nonverbal communication. I. Gullberg, Marianne. II. De Bot, Kees. P118.4.G47 2010 401’.93--dc22 2010043360 isbn 978 90 272 2258 9 (Hb ; alk. paper) isbn 978 90 272 8744 1 (Eb)
© 2010 – John Benjamins B.V. No part of this book may be reproduced in any form, by print, photoprint, microfilm, or any other means, without written permission from the publisher. John Benjamins Publishing Co. · P.O. Box 36224 · 1020 me Amsterdam · The Netherlands John Benjamins North America · P.O. Box 27519 · Philadelphia pa 19118-0519 · usa
Table of contents

About the authors  vii

Preface  1

Gestures and some key issues in the study of language development
Marianne Gullberg, Kees de Bot, and Virginia Volterra  3

Before L1: A differentiated perspective on infant gestures
Ulf Liszkowski  35

The relationship between spontaneous gesture production and spoken lexical ability in children with Down syndrome in a naming task
Silvia Stefanini, Martina Recchia, and Maria Cristina Caselli  53

The effect of gestures on second language memorisation by young children
Marion Tellier  75

Gesture and information structure in first and second language
Keiko Yoshioka  93

Gesture viewpoint in Japanese and English: Cross-linguistic interactions between two languages in one speaker
Amanda Brown  113

Author index  135

Subject index  137
About the Authors
Kees de Bot received his PhD in Applied Linguistics from the University of Nijmegen. He is Chair of Applied Linguistics and Director of the Research School for Behavioral and Cognitive Neuroscience at the University of Groningen. His main interest is in the application of Dynamic Systems theory to second language development and bilingual processing.

Amanda Brown received her PhD from the Program in Applied Linguistics at Boston University and the Multilingualism Project at the Max Planck Institute for Psycholinguistics. She is currently Assistant Professor of Linguistics in the Dept. of Languages, Literatures and Linguistics at Syracuse University.

Maria Cristina Caselli, senior researcher at the Italian National Research Council (CNR), currently coordinates the “Language Development and Disorders” Laboratory at the CNR Institute of Cognitive Sciences and Technologies. Her research focuses on communication and language in typical and atypical development, neuropsychological developmental profiles, language assessment, and early identification of children at risk for language development. She is the author or co-author of many national and international publications in psycholinguistics, developmental psychology, and neuropsychology.

Marianne Gullberg received her PhD in Linguistics from Lund University, Sweden. She was a staff member at the Max Planck Institute for Psycholinguistics, the Netherlands, 2000–2009, where she launched and headed the Nijmegen Gesture Centre with Dr A. Özyürek. She is now Professor of Psycholinguistics and Director of the Humanities Lab at Lund University. Her research targets bilingual, second and first language acquisition and use, with particular attention to processing, semantics, discourse, and the production and perception of gestures.

Ulf Liszkowski received his PhD in Psychology from the University of Leipzig, Germany.
He is head of the Max-Planck Independent Junior Research Group Communication Before Language at the MPI for Psycholinguistics in Nijmegen, The Netherlands. His research addresses infants’ prelinguistic communication and their social and cognitive development.

Martina Recchia holds a degree in psychology from the University of Rome and is a doctoral student at the University of Rome “La Sapienza”. She collaborates with the Institute of Cognitive Sciences and Technologies of the Italian National Research Council with the financial support of the Fondation Jerome Lejeune, Project “Lexical abilities in children with Down syndrome: the relationship between gestural and spoken modalities”.

Silvia Stefanini holds a degree in psychology from the University of Padua. She is currently at the University of Parma, Department of Neuroscience, where she obtained her PhD in Neuroscience in 2006. She has collaborated with the Institute of Cognitive Sciences and Technologies of the Italian National Research Council since 2002. Her main interest is first language acquisition in typical and atypical conditions, focusing on the link between motoric and linguistic development, in particular the gesture-speech system.

Marion Tellier received her PhD in Linguistics in 2006 at University Paris 7 – Denis Diderot. She has since conducted research on embodied conversational agents at the IUT de Montreuil, University Paris 8, and is Maître de Conférences at the University of Provence – Aix-Marseille I. Her research interests include ‘teaching gestures’, second language teaching to children, teacher training, and gesture perception and recognition.

Virginia Volterra received her “laurea” in Philosophy from the University of Rome La Sapienza in 1971. She is Research Director of the Italian National Research Council, associated with the Institute of Cognitive Sciences and Technology. Her research focuses on the early stages of language acquisition in children with typical and atypical development. She has also conducted pioneering studies on Italian Sign Language (LIS).

Keiko Yoshioka obtained her PhD in Applied Linguistics from Groningen University, the Netherlands, and currently lectures in Japanese Language and Second Language Acquisition at Leiden University. Her research interests include speech and gesture in second language acquisition and use.
Preface
Perhaps surprisingly, researchers working on language development in children and adults generally consider themselves as working in different disciplines, pursuing different research questions. They do not necessarily publish in the same journals, attend the same conferences, or discuss issues of (cross-linguistic) language development across the disciplinary divide. This state of affairs holds even for those researchers who take a common interest in gestural aspects of communication and language development. The workshop “Gestures in Language Development”, held at Rijksuniversiteit Groningen, the Netherlands, in April 2006, aimed to bring together researchers working on aspects of language development in both traditions, to help establish new networks, and to encourage cross-disciplinary exchange and discussion of the common themes and key issues to be explored in the realm of gesture research. The papers in this volume reflect some of the themes and concerns debated over the two days of the workshop.

We extend our heartfelt thanks to all the participants in the workshop for their stimulating discussions and thought-provoking contributions, and to Marjolijn Verspoor for her hospitality. We also thank the editors of GESTURE and John Benjamins Publishers for giving us an opportunity to share some of the discussions with the wider gesture community through the special issue of GESTURE of which this volume is a re-print, the external reviewers for their time and expertise, and Nienke Hoeven van der Houtzager for help in the preparation of this book version. We gratefully acknowledge funding for the workshop from the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO).
Gestures and some key issues in the study of language development Marianne Gullberg1,2, Kees de Bot3, and Virginia Volterra4
1 Max Planck Institute for Psycholinguistics / 2 Lund University / 3 Rijksuniversiteit Groningen / 4 Istituto di Scienze e Tecnologie della Cognizione, CNR
The purpose of the current paper is to outline how gestures can contribute to the study of some key issues in language development. Specifically, we (1) briefly summarise what is already known about gesture in the domains of first and second language development, and development or changes over the life span more generally; (2) highlight theoretical and empirical issues in these domains where gestures can contribute in important ways to further our understanding; and (3) summarise some common themes in all strands of research on language development that could be the target of concentrated research efforts.

Keywords: first language, second language, development, acquisition, ageing
Introduction

In recent years the scope of studies on language development has broadened from a fairly narrow focus on lexical and syntactic aspects at the sentence level to an interest in structures and processes at higher levels, such as discourse and the interaction with other semiotic systems in communication. In parallel, studies on communication systems across modalities have provided growing empirical evidence supporting the view that gestures are a mode of expression tightly linked to language and speech (e.g. Goldin-Meadow, 2003; Kendon, 2004; McNeill, 1992, 2005). Gestures are spatio-visual phenomena influenced by contextual and socio-psychological factors, and also closely tied to sophisticated speaker-internal, linguistic processes. Under this view of speech and gesture as an inter-connected system, the study of gestures in development and the study of the development of gestures are natural extensions of research on language development, be it phylogenetically,
ontogenetically, or during the lifespan of an adult. Moreover, given their properties and dual role as interactive, other-directed vs. internal, speaker-directed phenomena, gestures allow for a fuller picture of the processes of language acquisition in which the learner’s individual cognition is situated in a social, interactive context. The role of gestures in language development can be studied from various perspectives:

1. Gestures as a medium of language development. We can examine the role gestures play in interaction to mediate the acquisition of spoken language, their general role in communication, in establishing the socio-cognitive prerequisites for the development of language, in conveying and possibly entrenching meaning, and their connection to cognitive capacities such as working memory, etc.

2. Gestures as a reflection of language development. We can further investigate the way in which gestures develop and change in parallel to spoken language development, and the ways in which they shed light on both the product and process of language acquisition.

3. Gestures as language development itself. This approach studies the acquisition of gestures as an expressive system in its own right.

Traditionally the term language development has implicitly focused only on the gradual growth or progression of a first or second language towards the (idealised) stable model of an adult or native system. However, phenomena such as decline or regression in ability are clearly related (see papers in Viberg & Hyltenstam, 1993). For instance, regression as attested in attrition, or language loss, in adoptees, ageing bilinguals, and immigrants who stop using their first language, seems to affect the lexicon and grammar in similar ways as progression does. Not all shifts in ability lead to loss, however. Bilingual speakers may experience a decline in ability in one language when not using it, without this leading to ungrammaticality.
Moreover, they regain the ability when the language is brought back to use. Shifts in language dominance due to usage highlight the dynamic nature of language abilities. Development can thus usefully be seen not only as a linear process of progression, but as a complex, dynamic process that encompasses growth, decline, and any shift in both first and second languages (de Bot, 2007; de Bot, Lowie, & Verspoor, 2005). We will use the term development in this more general sense of change throughout. The purpose of the current paper, then, is to outline how gestures can contribute to the study of some central issues in language development. Specifically, we aim to (1) briefly summarise what is already known about gesture in the domains of first and second language development, and development over the life span more generally; (2) to highlight theoretical and empirical issues in these domains
where gestures can contribute to further our understanding; and (3) to summarise some common themes in all strands of research on language development that could be the target of concentrated research efforts.
Gesture and language

In the contemporary gesture literature, arguments are made for viewing gestures, language, and speech as intimately linked, or as forming an ‘integrated system’, an audiovisual ‘ensemble’, or a ‘composite signal’, depending on the theoretical approach (Clark, 1996; Engle, 1998; Kendon, 2004; McNeill, 1998). The arguments for integration come from studies of both language production and comprehension.

First, in production, gestures have been found to fill linguistic functions like providing referential content to deictic expressions (this wide), filling structural slots in an utterance (“GIVE! [gesture: ‘the book’]”: Slama-Cazacu, 1976, p. 221), and acting as or modifying speech acts (e.g. Bühler, 1934; Slama-Cazacu, 1976; Kendon, 1995, 2004). Second, the observed semantic-pragmatic and temporal coordination between speech and gesture lies at the heart of all theories and models concerning the relationship. Although the precise relationship between the modalities is not entirely straightforward, particularly with regard to meaning and co-expressivity, there is a general consensus that gesture and speech express closely related meanings selected for expression (see de Ruiter, 2007; Kendon, 2004; Holler & Beattie, 2003, for overviews). A third argument for integration is that speakers deliberately distribute information across both modalities depending on spatial and visual properties of interaction (e.g. Bavelas, Kenwood, Johnson, & Phillips, 2002; Holler & Beattie, 2003; Melinger & Levelt, 2004; Özyürek, 2002a). Finally, a fourth frequent argument is that gestures and speech develop together in (first) language acquisition (e.g. Mayberry & Nicoladis, 2000; Volterra, Caselli, Capirci, & Pizzuto, 2005), and that they break down together in disfluency, in aphasia, etc. (e.g. Feyereisen, 1987; Lott, 1999; McNeill, 1985). This last argument is further discussed in the papers in this volume.
In language comprehension, there is considerable evidence that gestures affect perception, interpretation of and memory for speech (Beattie & Shovelton, 1999; Graham & Argyle, 1975; Kelly, Barr, Breckinridge Church, & Lynch, 1999; Riseborough, 1981). Further to this, recent neurocognitive evidence shows that the brain integrates speech and gesture information, processing the two in similar ways to speech alone (e.g. Bates & Dick, 2002; papers in Özyürek & Kelly, 2007; Wu & Coulson, 2005). Overall, then, there is good reason to consider gestures, language, and speech as a closely-knit system.
The models attempting to formalise the relationship between gestures and speech differ in their views of the locus and the nature of the link. As suggested by Kendon (2007), some see speech as primary and gesture as auxiliary; others regard gestures and speech as equal partners. The first set either considers gestures to facilitate lexical retrieval (the Lexical Retrieval Hypothesis, Krauss, Chen, & Gottesman, 2000) or views gestures as instrumental in the process of representing and packaging imagistic thought for verbalisation (the Information Packaging Hypothesis, Alibali, Kita, & Young, 2000; Freedman, 1977). The second set of theories regards gestures as an integral part of an utterance. Beyond this starting point, they differ in focus. Either they concentrate on gestures as a window on (linguistic and non-linguistic) thought (the Growth Point Theory, McNeill, 1992, 2005; McNeill & Duncan, 2000), or they target the interplay between imagistic and linguistic thinking (the Interface Hypothesis, Kita & Özyürek, 2003), or, finally, they centre on the communicative intention driving both modalities to form a deliberately coherent multimodal utterance (de Ruiter, 2000, 2007; Kendon, 1994, 2004; Schegloff, 1984).

All existing accounts model the adult stable system. No theory has yet undertaken to account for development either in children or in adults.
Gesture and first language development

The field of First Language Development (FLD) has a long-standing interest in gestures. Infants’ gestures have traditionally been explored primarily as relevant features of a prelinguistic stage, as behaviours that precede and prepare the emergence of language, identified exclusively with speech. More recently, the view of adult language as a gesture-speech integrated system has prompted the need to understand how the gesture-speech relationship is established in infancy and how it evolves towards the adult system.
The earliest development

Infants begin to communicate intentionally through gestures and vocalisations and later with words (see Liszkowski, this volume; Stefanini et al., this volume). Gestures and speech are equal partners — in the majority of cases the communicative signals produced by children are expressed in both modalities, gestural and vocal. A key question is whether the two modalities are integrated from the very beginning, or initially separate, becoming an integrated system only with development (McNeill, 1992, 2005). Some studies indicate that the gestural and vocal modalities are semantically
and temporally integrated from the earliest stages (Capirci, Contaldo, Caselli, & Volterra, 2005; Iverson & Thelen, 1999; Pizzuto, Capobianco, & Devescovi, 2005), while others report that asynchronous combinations of gestures and words are more frequent than synchronous ones in an initial developmental period (Butcher & Goldin-Meadow, 2000; Goldin-Meadow & Butcher, 2003). Despite these differences, all agree that deictic gestures appear before the end of the first year and that they fulfil the basic function of drawing the interlocutor’s attention to something in the environment. These gestures include requesting (extending the arm toward an object, location or person, sometimes with a repeated opening and closing of the hand), showing (holding up an object in the adult’s line of sight), giving (transferring an object to another person) and pointing (index finger or full hand extended towards an object, location, person, or event). The referents of these gestures can be identified only in the physical context in which communication takes place.

Around 12 months children start to produce other, more content-loaded types of gestures, referring, like first words, to action schemes usually performed at this age with or without objects (e.g. bringing the handset or an empty fist to the ear for telephone/phoning). Some gestures refer to action schemes that are non-object-related (e.g. moving the body rhythmically without music for dancing to request that music be turned on) or to conventional actions (waving the hand for bye-bye) with forms more arbitrarily related to their meaning. The terminology used for these gestures (“conventional”, “referential”, “symbolic”, “iconic”, “characterising”, “representational”) is variable, and has changed considerably over the years, even in the work of the same author(s), reflecting changes both in methodology and theoretical perspectives.
The communicative function of such gestures appears to develop within routines similar to those considered to be fundamental for the emergence of spoken language. Their forms and meanings are established in the context of child–adult interaction. The first gestures and the first words involve the same set of concerns: eating, dressing, exchange games, etc., and they are initially acquired with prototypical objects, in highly stereotyped routines or scripts. At roughly parallel rates, they gradually “decontextualise” or extend out to a wider and more flexible range of objects and events.
The role of input

The remarkable similarities between production in the gestural and the vocal modalities during the first stages of language acquisition raise interesting issues regarding the communicative and linguistic role of early words and gestures. Symbolic actions produced in the gestural modality have often been seen as communicative and referential irrespective of the contexts of use (for a discussion,
see Caselli, 1994). Around 13 months there is a basic equipotentiality between the vocal and the gestural channels (Erting & Volterra, 1990). Differences in the type of input to which children are exposed influence the extent to which the manual or spoken modality is used for representational purposes and assumes linguistic properties. For example, children systematically exposed to sign language input acquire and develop a complete language in the visual-gestural modality (see Schick, Marschark, & Spencer, 2006). Comparisons between deaf and hearing children suggest that all children, regardless of whether their primary linguistic input is spoken or signed, use gestures to communicate, in particular in the transition stage to symbolic communication (Volterra, Iverson, & Castrataro, 2006). Although the relationship between gesture and sign language in general and in development has received little attention to date, recent research suggests that gesture is as essential a part of sign language as it is of spoken communication (Emmorey, 1999; Liddell, 2003).

Typically developing children are clearly encouraged by parents to rely much more on vocal symbols for communication. However, it has been suggested that gestural input may facilitate the acquisition of spoken words, as in the case of “baby signs” or ‘enhanced gestures’ used in conjunction with speech (Goodwyn & Acredolo, 1998; Goodwyn, Acredolo, & Brown, 2000). A possible explanation for this effect, found also in children with developmental disorders, is that exposure to enhanced gesturing provides children with opportunities to master new forms in both the vocal and manual modalities (Abrahamsen, 2000).

Culture and adult input may influence both the form and the frequency of representational gestures. Many studies have reported more frequent production of representational gestures by Italian children, who are immersed in a ‘gesture-rich’ culture (see the discussion in Kendon, 2004, Ch. 16).
In particular, the representational gestures produced by Italian children include numerous object/action gestures (e.g. eating, phoning) and attributive gestures (e.g. big, hot), whereas American children almost exclusively produce conventional gestures (e.g. hi, yes, all gone) (Iverson, Capirci, Volterra, & Goldin-Meadow, 2008). Cross-cultural longitudinal studies of spontaneous interaction should reveal how similarities and differences in the way object/action gestures versus more conventional social gestures develop.
The relationship between speech and gesture

Interesting findings on the relationship between children’s production of action and gestures and early (receptive and expressive) word repertoires have been collected through the MacArthur-Bates Communicative Development Inventory
(MBCDI). This is an instrument designed to explore and assess typically developing children’s early communicative and linguistic development (Fenson et al., 1993). In particular, it has been shown that there is a complex relationship between early lexical development in comprehension and production, and action-gestures (Caselli & Casadio, 1995). Around 11–13 months, the productive repertoire of action-gestures appears to be larger than the vocal repertoire, but in the following months the mean numbers of words and action-gestures are more similar. More interestingly, at this early age there is a significant correlation between words comprehended and action-gestures produced (Fenson et al., 1994). These findings suggest that the link between real actions, actions represented via gestures, and children’s vocal representational skills may be stronger than has been assumed thus far.

Another important finding is that in all cultures investigated to date the first utterances (combinations of two or more meaningful communicative elements) are cross-modal. Various studies highlight that deictic gestures (notably pointing) play a special role in two-element utterances. Combinations of a pointing gesture with a representational word are the most productive types of child utterances. These gesture-speech combinations can refer to a single element or to two distinct elements. Complementary and supplementary gesture-speech combinations reliably predict the onset of two-word combinations, underscoring the robustness of gesture as a harbinger of linguistic development (Butcher & Goldin-Meadow, 2000; Capirci, Iverson, Pizzuto, & Volterra, 1996; Iverson et al., 2008; Iverson & Goldin-Meadow, 2005). Many constructions (e.g. predicate+argument, as in pointing to a chair while saying “mommy” to ask mommy to sit on the chair) appear in supplementary gesture-speech combinations several months before the same construction appears in speech (e.g. “sit mommy” or “mommy sit”).
The production of a supplementary deictic gesture-word combination appears early, whereas supplementary representational gesture-word or two-word combinations, which require the child to retrieve two symbols each conveying a different piece of semantic content, appear later. The production of a single word and identification of another referent in the context through a deictic gesture supposedly places fewer cognitive demands on the child than the combination of two representational elements and presumably fits the child’s current cognitive capacities (Özcaliskan & Goldin-Meadow, 2005). The study of children with atypical input or development can further illustrate how gesture appears to be related to cognitive and linguistic development in infancy. An example of how gesture may compensate for specific impairments of the spoken abilities is children with Down syndrome (DS). The neuropsychological profile of DS children is characterised by a lack of developmental homogeneity between cognitive and linguistic abilities. The linguistic abilities of DS children are
poorer than expected based on their overall cognitive level (e.g. Chapman & Hesketh, 2000). These children appear to compensate for poor productive language abilities through greater production of gestures. There is ample evidence that the gap between cognition and productive language skills becomes progressively wider with development among DS children (Chapman, 1995; Franco & Wishart, 1995). However, with increasing cognitive skills and social experience these children also develop relatively large repertoires of gestures (Caselli et al., 1998; Stefanini, Caselli, & Volterra, 2007; Stefanini, Recchia, & Caselli, this volume). The compensatory use of gesture can be enhanced, particularly if children are encouraged through the provision of signed language input (cf. Abrahamsen, 2000). Higher gesture rates associated with speech difficulties have also been reported for other clinical populations such as children with specific language impairment (Evans, Alibali, & McNeil, 2001; Fex & Månsson, 1998).
Later development

Given that gesture usage appears to be related both to the general cognitive level and to phono-articulatory abilities, it is important to examine children in later childhood and at different stages of linguistic development. The development whereby children’s gestures become organised into the adult speech-gesture system has not been fully described. Very few studies have explored the development of this system after the two-word stage, when other types of gestures, such as ‘rhythmic’ or ‘emphatic’ gestures, start to appear.

Mayberry & Nicoladis (2000) followed 5 French-English bilingual boys longitudinally (from 2 years to 3;6 years), showing that children from age 2 onwards largely gesture like adults with regard to gesture rate and meaning. Interestingly, different gesture types developed differently, such that the use of iconic and beat gestures correlated with language development, whereas the use of pointing gestures did not. Children between 16 and 36 months use gestures and speech in agreement and refusal constructions with their mothers somewhat differently from adults (e.g. Guidetti, 2005). Looking at more sophisticated language use, children from 4 to 5 years productively use idiosyncratic, content-loaded gestures during narratives (McNeill, 1992). Colletta (2004), recording adult-child spontaneous interactions, has described the development of conversational abilities in school-age children. Younger children produce very few metaphoric, abstract deictic gestures and beats, which become more frequent in the production of older children.

Finally, research investigating gesture production in school-aged children in problem-solving tasks, reasoning about balance or mathematical equivalence, indicates that children convey a substantial proportion of their knowledge through
speech-accompanying gestures (Alibali & Goldin-Meadow, 1993; Church & Goldin-Meadow, 1986; Pine, Lufkin, & Messer, 2004). In some cases children’s gesture-speech ‘mismatches’ predict learning. Children whose speech and gestures ‘mismatch’ are more likely to benefit from instruction than children whose speech and gestures match. These studies indicate that gestures can reveal not only what children are thinking about but also their learning potential.

In sum, even if differences in data sets (e.g. ages considered, gesture types described), in methodology, and in terminology make it challenging to compare findings across studies, the available data suggest that the role of gesture in spoken language acquisition and development changes according to different stages and communicative/interactional contexts. Around one year of age gesture plays a crucial role in the construction and expression of meaning. In the following stages gesture production develops together with speech. At later stages still, gesture production appears to decrease in some linguistic contexts (e.g. naming tasks) although it is frequent with speech in others (e.g. narratives). These findings together indicate that any study on the development of language should include and pay particular attention to gestures.
Gesture and second language development

In recent years interest in the relationship between gestures and Second Language Development (SLD or L2D) has grown considerably. Studies suggest that gestures play an important role in SLD and should be seen both as a resource in learning and as a component of language proficiency in its own right (cf. Gullberg, 2006b, 2008; Gullberg & McCafferty, 2008). Again, if gestures and speech are seen as an integrated system, then factors that play a role in SLD in general may also play a role in the development of gesture, and conversely, gestures may provide further information on the effects of such factors. A large part of the SLD research agenda is therefore also relevant for gesture, where a number of traditional topics can fruitfully be addressed by taking gestures into account.
Cross-linguistic influence (CLI) or transfer

One of the most widely studied aspects of SLD is cross-linguistic influence, that is, the impact of existing languages on the acquisition and use of new ones. Traditionally this research has been concerned with the effect of the first language (L1) on later learned languages, but research on lexical processing in bilinguals and research on language attrition and language loss has shown that later learned
Marianne Gullberg, Kees de Bot, and Virginia Volterra
languages may influence the first language (Cook, 2003; Costa, 2005; de Bot & Clyne, 1994; Köpke, Keijzer, & Weilemar, 2004; van Hell & Dijkstra, 2002). Recent studies have also demonstrated an impact of the L2 on the L1 in gestures (e.g. Brown, 2007; Brown, this volume; Brown & Gullberg, 2008; Pika, Nicoladis, & Marentette, 2006). A growing body of work suggests that native speakers of typologically different languages, such as English on the one hand, and Spanish and Turkish on the other, gesture differently, both in terms of gestural form and timing, as a reflection of how these languages encode and express meaning components of motion like path and manner (e.g. Duncan, 1996; Kita & Özyürek, 2003; McNeill, 1997; McNeill & Duncan, 2000; Özyürek, Kita, Allen, Furman, & Brown, 2005). Further studies have also shown that L2 learners of these languages do not necessarily gesture like target language speakers, but display traces of their L1s in their gesture production either in terms of timing, aligning their gestures with different elements in speech compared to native speakers (e.g. Choi & Lantolf, 2008; Kellerman & van Hoof, 2003; Negueruela, Lantolf, Rehn Jordan, & Gelabert, 2004; Stam, 2006), or in terms of gestural forms, expressing different semantic content in gestures compared to native speakers (e.g. Brown, 2007; Brown & Gullberg, 2008; Gullberg, submitted; Negueruela et al., 2004; Özyürek, 2002b; Yoshioka & Kellerman, 2006). Such findings are often discussed in terms of Slobin's notion of 'thinking for speaking' (e.g. Slobin, 1996), that is to say, ways in which linguistic categories influence what information speakers attend to and select for expression when speaking. The argument for L2 is that L1-like gesture patterns may reveal whether L2 speakers continue to think for speaking in the L1 rather than in L2-like ways. A number of questions need to be addressed in this domain.
A crucial issue concerns how to identify and study gestural practices typical of a given language and culture. A real difficulty is that so little is known about language-specific gesture patterns in terms of frequency, gestural forms, use of gesture space, and semantic expression. An absolute prerequisite for the study of CLI in gestures in L2 is therefore a better understanding of gestural practices across languages in native performance. Currently, any study on L2 behaviour is a triple study where the native behaviour in both source and target language needs to be described before learner behaviour can be considered. If gestures and L2 studies are to follow in the steps of general SLD research, effects of other known languages (L3, Ln) should also be taken into account, pushing the boundaries even further. It is equally important to point out that in contrast to the traditional focus on 'errors' in SLD (see papers in Richards, 1974; van Els, Extra, van Os, & Janssen van Dieten, 1984), a different approach is necessary when considering gestures in L2 production. Since there can be no absolute 'grammaticality' of gesture
performance, preferential usage patterns must instead be established with corresponding gradient native scales of appropriateness or acceptability. For instance, Duncan (2005) examined 20 native English speakers retelling a cartoon and found that 64% of the manner gestures coincided with manner verbs, while 33% of the manner gestures were linked to other elements such as ground or path. In contrast, 20 Spanish speakers engaged in the same task aligned only 23% of their manner gestures with manner verbs, while 58% coincided with ground or path elements. The range of variation defines what is 'nativelike' and allows for an equal range of possible behaviours for L2 learners that would still qualify as 'nativelike'. This opens the way for a more gradient and sophisticated view of L2 performance in general beyond the narrow domain of target-like gestures. CLI effects have mainly been studied looking at representational (iconic) gestures. It is unknown whether effects of CLI can be found for other types of gesture practices. For instance, given that gestures supposedly align with speech rhythms and language-specific prosodic patterns, it seems plausible that rhythmic patterns of gesturing will transfer into an L2 along with a foreign accent. Similarly, it is possible that cross-linguistic differences in ways of managing interaction might transfer into an L2 in the use of interactive and 'pragmatic' gestures (e.g. Bavelas, Chovil, Lawrie, & Wade, 1992; Kendon, 2004). To date, no study has examined these issues. Studies of L2 gestures occasionally display a dissociation between surface form and gesture whereby L2 learners say one thing (in L2-like fashion) and gesture another (in L1-like fashion) (e.g. Özyürek, 2002b; Stam, 2006). In most studies gesture is more conservative than speech, such that speech seems to change more readily towards the L2 target than gestures.
This phenomenon is mainly interpreted as indicating transfer of L1 representations, perspectives, or thinking for speaking. However, just as in the study of CLI in spoken language, determining whether a particular phenomenon is caused by CLI/transfer or whether it is a general learner phenomenon requires methodological triangulation (cf. Jarvis, 2000). At the very least, it is necessary to examine learners from two different source languages learning the same target language to tease apart such effects. Further, very few attempts have been made to theoretically account for the fact that L2 speakers do and say different things, an L2-specific form of speech-gesture discrepancy. A question that arises is what representations actually underpin L2 surface forms, especially when these look target-like but gesture does not, and why it should be that speech changes before gesture. Do gestures have a privileged link to conceptual representations relative to speech? How dissociated can speech and gestures be and still be said to reflect the same representation? A different set of questions pertains to how gestures that seem not quite target-like from a native speaker's point of view are perceived by native speakers. The
inclusion of gesture in assessments of L2 speakers expands the number of dimensions along which learners’ production can vary relative to native speakers. In this sense, gesture data raise important questions concerning the ‘native speaker standard’ (cf. Davies, 2003), crucial in many studies of SLD. The discussion of critical periods for language learning and the degree to which adult learners can become nativelike is central to theories of adult L2 acquisition (cf. Birdsong, 2005). Gestures definitely raise the stakes for learners. However, no studies have systematically examined native perception of ‘foreign gesture’, nor its potential interactional consequences. Although a number of studies show that learners’ gesture production affects assessments positively such that learners are deemed more proficient if they gesture than if they do not (Gullberg, 1998; Jenkins & Parra, 2003; Jungheim, 2001; McCafferty, 2002), no studies so far have directly tested for effects of ‘foreign gesture’.
Gesture and learner-general phenomena

SLD research does not restrict explanations of properties of the L2 to effects of the L1 or other languages learned. SLD studies also look at learner behaviour as a systematic and regular variety in its own right, as an interlanguage (Selinker, 1972), with properties determined both by general learning mechanisms and by the specific languages involved. Again, in such a perspective, a number of issues arise where gestures might provide important insights. One such issue concerns how language learners handle different types of difficulties at a given proficiency level, such as managing lexical, grammatical, and discourse-related problems at the same time in real time. The analysis of gestures and speech in conjunction provides a fuller picture of such problem-solving. For instance, studies of Moroccan and Japanese learners of French show how learners move from using mainly representational gestures, complementing the content of speech, towards more emphatic or rhythmic gestures related to discourse (Kida, 2005; Taranger & Coupier, 1984). This suggests a transition from essentially lexical difficulties and lexically based production to more grammatical problems related to discourse. More careful charting of what gestures are produced by learners with particular proficiency profiles has potential pedagogical and diagnostic applications. The acquisition of gestures can and should also be studied in its own right. Just as we need to find out how children come to gesture in adult-like and culture-specific ways, so we need to know whether L2 learners ever come to gesture like native speakers. Although some attention has been given to L2 users' comprehension of conventional or quotable gestures ('emblems') (e.g. Jungheim, 1991; Mohan & Helmer, 1988; Wolfgang & Wolofsky, 1991), nothing is known about whether L2
learners ever produce such culture-specific gestures, which may show the same acquisition difficulties as idiomatic expressions (e.g. Irujo, 1993). For instance, do L2 learners learn to produce appropriate gestural forms such as distinguishing the head toss from the headshake (Morris, Collett, Marsh, & O'Shaughnessy, 1979), do they learn to point in culturally appropriate ways (see papers in Kita, 2003), and do they learn to respect handedness taboos (e.g. Kita, 2001)? Even less is known about whether L2 learners acquire and produce language-specific non-conventionalised gestural practices. If they do, this raises important questions about implicit learning of both form and meaning, crucial to the domain of SLD. If they do not, it raises familiar SLD issues about why learners do not notice or 'take in' certain aspects of the input despite extended exposure (e.g. Robinson, 2003). It is perhaps particularly interesting to consider visual phenomena like gestures since they are often assumed to be inherently 'salient', and to have an attention-directing, enhancing effect in their own right. If they did, they should be easy to acquire. Again, next to nothing is known about this question. A closely related issue is what might be learnable and indeed teachable (and therefore assessable) in terms of gesturing. While it may be possible to teach the forms and meanings of emblems, it is much less clear that other aspects of gestural practices are teachable. Even when gestures are on the classroom agenda, an explicit link is seldom made between language and gesture. Furthermore, research in this domain should consider the possible differences and similarities between spontaneously produced gestures and gestures explicitly deployed for teaching purposes (e.g. Lazaraton & Ishihara, 2005; Tellier, 2006). It is possible that features noted for 'instructional discourse' like child- or foreigner-directed gestures share properties with gestures employed in language classrooms.
A further step is to consider learners’ interpretations of teachers’ gestures rather than examining teachers’ gestures in social isolation (cf. Sime, 2006). Answers to questions concerning learnability and teachability are wide-open.
Gesture across the lifespan

Under the view that language development encompasses all changes in language across the lifespan, a number of further domains become relevant, such as the development of rhetorical styles and registers, but also language attrition in bilinguals and changes in language related to ageing. Changes in language can of course also be related to disease, as in aphasia, split-brain surgery, etc., but we leave those changes aside in this overview (but see e.g. Feyereisen & de Lannoy, 1991; Goodwin, 2002; Lausberg, Zaidel, Cruz, & Ptito, 2007; Lott, 1999; Rose, 2006).
With regard to the development of rhetorical styles and gestures, something is known about the development of narrative skills and concomitant changes to gesture in later childhood. For instance, Cassell (1988) demonstrated that children’s production of beats becomes adult-like only with increasing development of narrative skills, specifically when children can alternate between different narrative levels. Very little is known, however, about the development of other rhetorical skills such as gestures in different registers, sermons, public speeches, etc. Although a small literature explores politicians’ gesture practices (e.g. Calbris, 2003), the focus is typically on the accomplished speaker, not on the development of the speech-gesture repertoire. In the domain of language attrition due to immigration or bilingualism, nothing at all is known about gesture practices. Assuming that gesture and speech are connected, it seems plausible that the gesture practices might also be affected if skills in the spoken first language are lost. However, given that gestures can also be recruited for other purposes, it is an empirical question whether this happens or not.
Gestures and ageing

A recent overview of research on gestures over the lifespan suggests that there is very little research on gestures in older age groups (Tellier, 2009). There is a substantial body of research on non-verbal communication and ageing, and some of these studies have also considered gesture use and interpretation (Montepare & Tucker, 1999). The perspective taken is often a compensatory one. That is, communication problems emerge with age due to a decline in speech-motor skills and hearing, and the assumption is that these problems are compensated for by gesturing (e.g. Cohen & Borsoi, 1996; Feyereisen & Havard, 1999). There are several problems with this approach. First, the decline of speech production in ageing is not well established. Second, any decline seems to be co-affected by variables such as continuous use of the language and level of education. Third, the groups considered are typically fairly young (60s and early 70s) and comparisons between age groups are cross-sectional. Age-related language problems are more likely in the 75+ age group, in particular when there are other health problems and the level of education is low (de Bot & Makoni, 2005). Finally, there is considerable variation within and between age groups, so a simple young/old comparison may not be informative. It is possible that there are specific age-related types of gesturing, probably due more to specific motor patterns than to language issues. For instance, the control of small movements may be reduced, leading to larger movements. It is also possible that with decreasing flexibility of joints, changes in spinal curvature, etc.,
there is a reduction in gesture size, gesture speed, etc. (cf. Laver & Mackenzie Beck, 2001). Both changes may be given (unintended) semiotic importance by onlookers. The field of gestural practices in ageing is desperately under-researched.
Common themes

The preceding sections have briefly outlined some of what is known about gestures and language development, with some emphasis on questions that remain open to investigation in each domain. There are, however, clearly general themes that are common to all studies of language development and gesture.
The role of gestures in the input

In studies on language development the precise role of input, that is, what language users hear and see, is hotly debated. Both in studies of FLD and SLD a familiar debate concerns whether input is simply a trigger of innate knowledge and structures (Pinker, 1989; Wexler & Culicover, 1980; White, 2003), or whether language development is based on detailed properties of the input such as frequencies and on usage (Ellis & Larsen-Freeman, 2006; Tomasello, 2003). In SLD the role of input is debated partly because L2 learners seem not to attend to what is in the input, namely 'correct' pronunciation, grammar, etc., as seen in their tendency to maintain foreign accents and grammatical peculiarities even after many years of teaching and exposure. A well-known hypothesis states that a prerequisite for input to be useful to learning is that it is comprehensible (e.g. the Comprehensible Input Hypothesis, Krashen, 1994).1 In this perspective, gestures seem to play an important role. Interlocutors are known to attend to and make use of gestural information, for instance, to improve comprehension in noise (Rogers, 1978). It is also clear that gestures in the input can improve learning in general, such as the learning of maths and symmetry (Singer & Goldin-Meadow, 2005; Valenzeno, Alibali, & Klatzky, 2002). A natural assumption is therefore that gestures that convey speech-related meaning should improve language learners' comprehension and possibly also their learning of language. Indeed, adults, teachers and other 'competent' speakers seem to think so. All forms of didactic talk or 'instructional communication' studied — whether by adults to children ('motherese') or by adult native speakers to adult L2 users ('foreigner/teacher talk', Ferguson, 1971) — are characterised by an increased use of representational and rhythmic gestures (e.g. Adams, 1998; Allen, 2000; Iverson, Capirci, Longobardi, & Caselli, 1999; Lazaraton, 2004). However, few studies test
actual effects on language learning. There is some evidence that gestures improve the learning of new adjectives in English children (O'Neill, Topolovec, & Stern-Cavalcante, 2002). Very few studies empirically test the connection between gestural input and learning outcomes in SLD (for exceptions, see Allen, 1995; Sueyoshi & Hardison, 2005; Tellier, this volume). Moreover, facilitative effects of gestures may differ depending on the linguistic units tested and may be more evident for lexical than for grammatical material (e.g. Musumeci, 1989). Different types of gesturing may also have different effects. Again, all these issues remain wide open. It is also an empirical question to what extent children and adult learners mirror the gesture input in their own gesture production. A related question is to what extent learners affect their own input by their spoken and gestural practices in interaction. It has been suggested that learners' gestures might help promote positive affect between learner and adult/native speaker, which might ultimately promote learning (e.g. Goldin-Meadow, 2003; McCafferty, 2002). It has also been suggested that adult and native listeners in general tailor their production to learners based on the learners' gestures (e.g. Goldin-Meadow, 2003). This is in line with the well-documented observation that interlocutors synchronise or accommodate to each other in interaction, also as regards gestures (Bavelas, Black, Chovil, Lemery, & Mullett, 1988; Condon & Ogston, 1971; Kimbara, 2006; Wallbott, 1995). It is an open question to what extent such synchronisation might affect language learning (cf. discussions of structural priming as a means of learning, e.g. Bock & Griffin, 2000; Branigan, Pickering, & Cleland, 2000).
The role of gestures in the output

The complementary notion also plays a role in development, namely that production is crucial to acquisition. Bruner (1983) suggested that (first) language is learned through use, and a similar notion is present in the 'output hypothesis' in SLD, which states that new language knowledge only becomes automatised if used for production (Gass & Mackey, 2006; Swain, 2000). In a parallel fashion, it has been shown that the production of gestures promotes the learning of other skills, such that adults and children who gesture while learning about maths and science do better than those who do not (Alibali & DiRusso, 1999). General recall also improves when participants enact events (e.g. Frick-Horbury, 2002). Evidence for an effect of gesturing on the acquisition of language is again much scarcer. Although it has been suggested on theoretical grounds that gesturing might help L2 learners internalise new knowledge (Lee, 2008; McCafferty, 2004; Negueruela et al., 2004), and although teaching methods relying on embodiment exist (e.g. Total Physical Response, Asher, 1977), it remains an empirical question whether any real, long-
term learning effects can be demonstrated for gesture production in L1 or L2 (for short-term effects in L2, see Tellier, 2006).
Variation and individual differences

All language development is characterised by individual variation. First language development is relatively uniform — at least regarding final outcome — in comparison to SLD, which is characterised by highly variable outcomes. In SLD the effects of a range of psycho-social factors have been explored, such as intelligence, language aptitude, memory capacity, attitudes, motivation, personality traits, and cognitive style (e.g. de Bot et al., 2005, pp. 65–75; Dörnyei, 2006; Verspoor, Lowie, & van Dijk, 2008). For instance, intelligence matters more in tutored than in untutored SLD, and more in grammar learning than in other skills. The correlations between language aptitude tests and free oral production and general communicative skills are generally low. Working memory capacity seems to be generally lower in L2 than in L1 (Miyake & Friedman, 1999), etc. No study of such factors in SLD has to date considered gestures either as a co-variable or as a measure of any of the factors, despite the fact that the influence of some of these factors on gestures has been extensively studied. For instance, effects of personality and psychological types (e.g. introvert vs. extrovert) on non-verbal behaviour have received a lot of attention (see Feyereisen & de Lannoy, 1991, for an overview), and verbal vs. spatial fluency (Hostetter & Alibali, 2007), etc., have been documented. However, no studies have combined these perspectives, although a number of possible links can be hypothesised. Recent studies have suggested that gestures help reduce cognitive load (e.g. Goldin-Meadow, Nusbaum, Kelly, & Wagner, 2001; Wagner, Nusbaum, & Goldin-Meadow, 2004). Such an effect would be important in L2 production (cf. Gullberg, 2003, 2006a) where individual differences in working memory and proficiency might conspire to make such effects more important.
A key expansion on the hitherto rather uninformative observations that L2 learners gesture more in the L2 than in the L1 would be to examine the relationship between fluency, processing units, and gesture production more closely in these terms. For instance, at stages where L2 learners are not very fluent and proceed almost word by word, they seem to produce one gesture for every unit/word. Once they start stringing together more material in chunks, the gesture rate also goes down (Gullberg, 1998, 2006a; Nobe, 2001). This suggests a possible link between working memory, fluency and gesture production. Similarly, individual differences in cognitive style and personality affect interaction patterns and thereby the extent to which L1 and L2 learners create situations
of rich input for themselves (cf. Goldin-Meadow, 2003). While this has been examined in FLD, no studies to date have explored such issues in SLD. Finally, there is inter- and intra-individual variation in adult, native gesturing, depending on social setting, degree of formality, shared knowledge, ambiguity, expertise, the content of speech, etc. Many aspects of individual variation in adult, native gesturing are not well understood, such as why some speakers gesture more than others, and why the same speaker sometimes chooses to gesture and sometimes not (Kendon, 1994). To qualify the possible range of behaviours in adult native speakers while allowing for variation is crucial to studies of language development and gesture. Rather than looking at behaviour outside of the ‘typical’ as ‘noise’ in the data, a more productive approach is to look at variation as a meaningful source of information. This is not to say that we need to explain every single instance of a deviation from a general pattern. As in other areas of language development, variation is a reflection of the developmental process resulting from the interaction of many internal variables that cannot be taken apart to study the impact of each individual factor (van Dijk & van Geert, 2005; Verspoor et al., 2008). Studies of gestures and language development will have to be methodologically creative to find ways of taking variation into account.
Gesture as compensation

In many parts of the language development literature, a general and often tacit assumption is that children and adults alike produce gestures mainly to overcome the gap between their communicative intentions and the expressive means at their disposal. That is to say, gestures are viewed as a compensatory mode of expression. However, the theoretical issues underlying such a view are rarely discussed. First, compensation as a notion is often ill- or undefined. For instance, spoken language acquisition research shows that not all learner behaviour is best characterised as strategic problem-solving. Children and adult learners alike over-generalise, not as a means of compensation, but as part of the developmental process. Furthermore, adult learners are often communicatively fluent in an L2 even though their systems do not look like those of native speakers. Conversely, not all difficulties are overt. Learners may avoid difficulties by changing their intention when the expressive means do not match. The general difficulties involved in identifying and defining compensatory behaviour have received attention in SLD studies (see papers in Kasper & Kellerman, 1997), but much less so in studies of FLD, and are virtually absent from studies considering gesture as compensation. A related issue relevant both to acquisition and to gesture studies is the question whether compensation is intended for the speaker or for the addressee. That is,
is it a speaker-internal solution to a problem, an interactional solution, or both? These questions echo familiar debates in the gesture literature regarding gesture production (cf. the input/output distinction above), but they are equally relevant for developmental, compensatory issues (e.g. Gullberg, 1998). A third question concerns what parts of spoken language gestures can compensate for. The focus has traditionally been on lexis and meaning, but lexical access, grammar, discourse, conceptualisation, and problems of linearising global information have all been implicated in gestural compensation (Alibali et al., 2000; Gullberg, 1999, 2006a; Hostetter, Alibali, & Kita, 2007; Pine, Bird, & Kirk, 2007). Finally, of theoretical relevance for gesture studies is the question of how gestures can compensate for linguistic expressions, and how compensatory gestures are defined and function. In adult, 'competent' users, speech-gesture integration is multifaceted and may not be obligatory and automatic. 'Competent' speakers can choose to decouple speech and gesture. This raises important questions about co-expressivity, however that is defined. Gestures that express meaning not redundant with speech are not typically considered 'compensatory' in mature, adult native speakers, whereas such instances are often seen as compensatory in developing speakers. Further, a number of familiar questions in the debate on gesture production could be cast in terms of compensation, such as whether gestures help lexical retrieval (activate word forms) (Krauss et al., 2000), or help with conceptualisation or information packaging (Goldin-Meadow, 2003; Kita, 2000). However, surprisingly, these theoretical notions are rarely touched upon in discussions of 'compensatory' gestures in development (for notable exceptions, see Nicoladis, 2007; Nicoladis, Mayberry, & Genesee, 1999).
Although there are exceptions in the literature on children’s development, notably the literature on ‘mis-matches’ (e.g. Goldin-Meadow, 2003) and on lexical access in children (e.g. Pine et al., 2007), even these studies do not typically discuss explicitly what defines some gestures as compensatory. In studies of adult L2 users’ gestural behaviours, theoretical discussions of gestural compensation are almost entirely absent. The properties that make some gestures compensatory and others not need to be discussed and elucidated if we are to form a better understanding of the role of gesture in language development. In sum, the notion of compensation raises important theoretical issues both for studies of language development and for gesture studies. We need to consider how and when to view the function of gesture as mainly compensatory, to formulate independent defining criteria, etc. (e.g. Goodwin & Goodwin, 1986). Developmental data that raise important issues for compensation are to be seen in the context of theories concerning the relationship between speech and gesture. Conversely, developmental studies may need to be more specific about their view of how gestures can serve compensatory functions.
Conclusions and introduction to this volume

The issues regarding language development and gesture raised in this review are far from exhaustive. A range of other questions can be asked with regard to methodology, interaction, and the relationship between language, gestures, and culture. Are some types of gesture related to characteristics of the language system while others are more cultural (e.g. gesticulation vs. emblems), and if so, what does that mean for the parallel development of the two modalities? Is there anything in culture-specific communication that affects the emergence and use of gestures, such as the presence of semi-conventionalised, recurrent hand shapes (see Kendon, 2004)? How does lack of contact with a language and culture affect gesture use? Are there differences in gesture practices between tutored and untutored learners? What is the gestural behaviour of early simultaneous bilinguals? How might learners use gestures to express group affiliation (e.g. Efron, 1972 [1941])? Can language development and gesture be modelled together? The papers in this volume span both first and second language development. They all exemplify how studies of language development can gain insights from taking gestures into account. The first two papers focus on first language development. Liszkowski's paper examines the gestures of pre-linguistic infants who have not yet developed their first language. He reviews and assesses what is known about pointing and other representational gestures. The paper re-evaluates current findings and takes a new stance, upgrading the role of pointing and downgrading the role of representational gestures in infants, thereby re-assessing the role of such gestures for the emergence of human communication. The second paper by Stefanini, Recchia, & Caselli focuses on the relationship between gesture production and spoken lexical capacity in children with Down syndrome compared to typically developing children.
Drawing on data from a naming task, the authors show that, although children with Down syndrome do not differ quantitatively in gesture production from developmentally-matched controls, they do differ qualitatively in the distribution of information across the modalities. The study sheds important light on the ways in which gestures come into play when cognitive abilities outstrip productive spoken language skills. In the transition between first and second language studies, Tellier's paper investigates the popular assumption that gestures improve the acquisition of a new word in a foreign language by looking at French children who are taught English. The study compares the effect of seeing gestures vs. both seeing and producing them. The results indicate that producing gestures affects the productive retention of new vocabulary. The study thus lends support to the notion that gestures are implicated in learning language specifically, not only in learning in general.
Gestures and some key issues in the study of language development
In the domain of adult second language development, the paper by Yoshioka examines how adult Dutch learners of Japanese construct narrative discourse in speech and gesture. In particular, the paper investigates how learners deal with cross-linguistic differences in how entities are referred to, for instance by lexical means (e.g. the frog, it) or by ellipsis. The results show that learners display both general and target-language-specific means of structuring information in discourse in the two modalities. In this sense, the study adds to the evidence suggesting that gestures reflect language-specific speech patterns. It also contributes to the study of cross-linguistic influence in SLD. Brown investigates the interaction between first and second languages in adult speakers, specifically comparing the use of character- and observer-viewpoint in English and Japanese. Japanese speakers with some knowledge of English gesture differently in their native language from Japanese speakers without any knowledge of English, showing patterns similar to those of monolingual English speakers. Although studies of SLD have traditionally considered only the effect of the L1 on the L2, this paper suggests that the L2 might also affect the L1. This perspective has important implications for what is considered the native standard in studies of language development.
Acknowledgements

We gratefully acknowledge support from the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO) for a grant awarded to Kees de Bot and Marianne Gullberg to fund an International Workshop, “Gesture in Language Development”, held at Rijksuniversiteit Groningen, the Netherlands, April 20–22, 2006. We also thank Adam Kendon for helpful comments and discussions.
Note

1. For an overview of critiques of this hypothesis, see Ellis, 1994, pp. 273–280.
References

Abrahamsen, Adele (2000). Explorations of enhanced gestural input to children in the bimodal period. In Karen Emmorey & Harlan Lane (Eds.), The signs of language revisited: An anthology to honor Ursula Bellugi and Edward Klima (pp. 357–399). Mahwah, NJ: Erlbaum. Adams, Thomas W. (1998). Gesture in foreigner talk. Unpublished PhD diss., University of Pennsylvania.
Alibali, Martha W. & Alyssa A. DiRusso (1999). The function of gestures in learning to count: More than keeping track. Cognitive Development, 14 (1), 37–56. Alibali, Martha W. & Susan Goldin-Meadow (1993). Gesture-speech mismatch and mechanisms of learning: What the hands reveal about a child’s state of mind. Cognitive Psychology, 25, 468–523. Alibali, Martha W., Sotaro Kita, & Amanda J. Young (2000). Gesture and the process of speech production: We think, therefore we gesture. Language and Cognitive Processes, 15 (6), 593–613. Allen, Linda Q. (1995). The effect of emblematic gestures on the development and access of mental representations of French expressions. Modern Language Journal, 79 (4), 521–529. Allen, Linda Q. (2000). Nonverbal accommodations in foreign language teacher talk. Applied Language Learning, 11, 155–176. Asher, James (1977). Learning another language through actions. Los Gatos: Sky Oaks Productions, Inc. Bates, Elizabeth & Frederic Dick (2002). Language, gesture, and the developing brain. Developmental Psychobiology, 40, 293–310. Bavelas, Janet B., Alex Black, Nicole Chovil, Charles R. Lemery, & Jennifer Mullett (1988). Form and function in motor mimicry: Topographic evidence that the primary function is communicative. Human Communication Research, 14 (3), 275–299. Bavelas, Janet B., Nicole Chovil, Douglas A. Lawrie, & Allan Wade (1992). Interactive gestures. Discourse Processes, 15 (4), 469–489. Bavelas, Janet B., Christine Kenwood, Trudy Johnson, & Bruce Phillips (2002). An experimental study of when and how speakers use gestures to communicate. Gesture, 2 (1), 1–17. Beattie, Geoffrey & Heather Shovelton (1999). Mapping the range of information contained in the iconic hand gestures that accompany spontaneous speech. Journal of Language and Social Psychology, 18 (4), 438–462. Birdsong, David (2005). Nativelikeness and non-nativelikeness in L2A research. International Review of Applied Linguistics, 43 (4), 319–328.
Bock, Kathryn & Zenzi Griffin (2000). The persistence of structural priming: Transient activation or implicit learning? Journal of Experimental Psychology: General, 129 (2), 177–192. Branigan, Holly P., Martin J. Pickering, & Alexandra A. Cleland (2000). Syntactic co-ordination in dialogue. Cognition, 75 (2), B13–B25. Brown, Amanda (2007). Crosslinguistic influence in first and second languages: Convergence in speech and gesture. Unpublished PhD diss., Boston University, Boston, and MPI for Psycholinguistics, Nijmegen. Brown, Amanda & Marianne Gullberg (2008). Bidirectional crosslinguistic influence in L1-L2 encoding of manner in speech and gesture: A study of Japanese speakers of English. Studies in Second Language Acquisition, 30 (2), 225–251. Bruner, Jerome (1983). Child’s talk: Learning to use language. Oxford: Oxford University Press. Bühler, Karl (1934). Sprachtheorie. Jena: Fischer. Butcher, Cynthia & Susan Goldin-Meadow (2000). Gesture and the transition from one- to two-word speech: When hand and mouth come together. In David McNeill (Ed.), Language and gesture (pp. 235–257). Cambridge: Cambridge University Press. Calbris, Geneviève (2003). L’expression gestuelle de la pensée d’un homme politique. Paris: CNRS Editions.
Capirci, Olga, Annarita Contaldo, Maria Cristina Caselli, & Virginia Volterra (2005). From action to language through gesture: A longitudinal perspective. Gesture, 5 (1/2), 155–177. Capirci, Olga, Jana M. Iverson, Elena Pizzuto, & Virginia Volterra (1996). Gestures and words during the transition to two-word speech. Journal of Child Language, 23 (3), 645–675. Caselli, Maria Cristina (1990). Communicative gestures and first words. In Virginia Volterra & Carol J. Erting (Eds.), From gesture to language in hearing and deaf children (pp. 56–67). Washington, DC: Gallaudet University Press. Caselli, Maria Cristina & Paola Casadio (1995). Il primo vocabolario del bambino: Guida all’uso del questionario MacArthur per la valutazione della comunicazione e del linguaggio nei primi anni di vita. Milan: Franco Angeli. Caselli, Maria Cristina, Stefano Vicari, Emiddia Longobardi, Laura Lami, Claudia Pizzoli, & Giacomo Stella (1998). Gestures and words in early development of children with Down syndrome. Journal of Speech, Language and Hearing Research, 41, 1125–1135. Cassell, Justine (1988). Metapragmatics in language development: Evidence from speech and gesture. Acta Linguistica Hungarica, 38 (1/4), 3–18. Chapman, Robin S. (1995). Language development in children and adolescents with Down syndrome. In Paul Fletcher & Brian MacWhinney (Eds.), The handbook of child language (pp. 641–663). Oxford: Blackwell Publishers. Chapman, Robin S. & Linda Hesketh (2000). The behavioral phenotype of Down syndrome. Mental Retardation and Developmental Disabilities Research Reviews, 6, 84–95. Choi, Soojung & James P. Lantolf (2008). The representation and embodiment of meaning in L2 communication: Motion events in the speech and gesture of advanced L2 Korean and L2 English speakers. Studies in Second Language Acquisition, 30 (2), 191–224. Church, Ruth B. & Susan Goldin-Meadow (1986). The mismatch between gesture and speech as an index of transitional knowledge. Cognition, 23 (1), 43–71.
Clark, Herbert H. (1996). Using language. Cambridge: Cambridge University Press. Cohen, Ronald L. & Diane Borsoi (1996). The role of gestures in description-communication: A cross-sectional study of ageing. Journal of Nonverbal Behavior, 20 (1), 45–63. Colletta, Jean-Marc (2004). Le développement de la parole chez l’enfant âgé de 6 à 11 ans: Corps, langage et cognition. Sprimont: Mardaga. Condon, William S. & William D. Ogston (1971). Speech and body motion synchrony in the speaker-hearer. In David L. Horton & James J. Jenkins (Eds.), Perception of language (pp. 150–173). Columbus, OH: Merrill. Cook, Vivian (Ed.) (2003). Effects of the second language on the first. Clevedon: Multilingual Matters. Costa, Albert (2005). Lexical access in bilingual production. In Judith F. Kroll & Annette M. De Groot (Eds.), Handbook of bilingualism: Psycholinguistic approaches (pp. 308–325). Oxford: Oxford University Press. Davies, Allan (2003). The native speaker: Myth and reality. Clevedon: Multilingual Matters. De Bot, Kees (2007). Dynamic systems theory, life span development and language attrition. In Barbara Köpke, Monika Schmid, Merel Keijzer, & Susan Dostert (Eds.), Language attrition: Theoretical perspectives (pp. 53–68). Amsterdam & Philadelphia: Benjamins. De Bot, Kees & Michael Clyne (1994). A 16-year longitudinal study of language attrition in Dutch immigrants in Australia. Journal of Multilingual and Multicultural Development, 15 (1), 17–28.
De Bot, Kees, Wander Lowie, & Marjolijn Verspoor (2005). Second language acquisition: An advanced resource book. London: Routledge. De Bot, Kees & Sinfree B. Makoni (2005). Language and ageing in multilingual contexts. Clevedon: Multilingual Matters. De Ruiter, Jan-Peter (2000). The production of gesture and speech. In David McNeill (Ed.), Language and gesture (pp. 284–311). Cambridge: Cambridge University Press. De Ruiter, Jan-Peter (2007). Postcards from the mind: The relationship between speech, gesture and thought. Gesture, 7 (1), 21–38. Dörnyei, Zoltan (2006). Individual differences in second language acquisition. AILA Review, 19, 42–68. Duncan, Susan D. (1996). Grammatical form and ‘thinking-for-speaking’ in Mandarin Chinese and English: An analysis based on speech-accompanying gesture. Unpublished PhD diss., University of Chicago, Chicago. Duncan, Susan D. (2005). Co-expressivity of speech and gesture: Manner of motion in Spanish, English, and Chinese. In Charles Chang et al. (Eds.), Proceedings of the 27th Annual Meeting of the Berkeley Linguistics Society (pp. 353–370). Berkeley, CA: Berkeley Linguistics Society. Efron, David (1972 [1941]). Gestures, race and culture. The Hague: Mouton. (First ed. 1941 as Gesture and environment. New York: King’s Crown Press.) Ellis, Nick C. & Diane Larsen-Freeman (2006). Language emergence: Implications for Applied Linguistics. Applied Linguistics, 27 (4), 558–589. Ellis, Rod (1994). The study of second language acquisition. Oxford: Oxford University Press. Emmorey, Karen (1999). Do signers gesture? In Lynn L. Messing & Ruth Campbell (Eds.), Gesture, speech and sign (pp. 133–159). Oxford: Oxford University Press. Engle, Randi A. (1998). Not channels but composite signals: Speech, gesture, diagrams, and object demonstrations are integrated in multimodal explanations. In Morton Ann Gernsbacher & Sharon J. Derry (Eds.), Proceedings of the 20th Annual Conference of the Cognitive Science Society (pp. 321–326).
Mahwah, NJ: Erlbaum. Erting, Carol J. & Virginia Volterra (1990). Conclusion. In Virginia Volterra & Carol J. Erting (Eds.), From gesture to language in hearing and deaf children (pp. 278–298). Berlin: Springer-Verlag. Evans, Julia L., Martha W. Alibali, & Nicole M. McNeil (2001). Divergence of verbal expression and embodied knowledge: Evidence from speech and gesture in children with specific language impairment. Language and Cognitive Processes, 16 (2), 309–331. Fenson, Larry, Philip S. Dale, J. Steven Reznick, Elizabeth Bates, Donna Thal, & Stephen Pethick (1994). Variability in early communicative development. Monographs of the Society for Research in Child Development, 59 (5). Fenson, Larry, et al. (1993). The MacArthur Communicative Development Inventories: User’s guide and technical manual. San Diego: Singular Publishing Group. Ferguson, Charles A. (1971). Absence of copula and the notion of simplicity: A study of normal speech, baby talk, foreigner talk and pidgins. In Dell Hymes (Ed.), Pidginization and creolization of languages (pp. 141–150). Cambridge: Cambridge University Press. Fex, Barbara & Ann-Christine Månsson (1998). The use of gestures as a compensatory strategy in adults with acquired aphasia compared to children with specific language impairment (SLI). Journal of Neurolinguistics, 11 (1/2), 191–206. Feyereisen, Pierre (1987). Gestures and speech, interactions and separations: A reply to McNeill (1985). Psychological Review, 94 (4), 493–498.
Feyereisen, Pierre & Jacques-Dominique de Lannoy (1991). Gestures and speech: Psychological investigations. Cambridge: Cambridge University Press. Feyereisen, Pierre & Isabelle Havard (1999). Mental imagery and production of hand gestures while speaking in younger and older adults. Journal of Nonverbal Behavior, 23 (2), 153–171. Franco, Fabia & Jennifer Wishart (1995). The use of pointing and other gestures by young children with Down syndrome. American Journal of Mental Retardation, 100 (2), 160–182. Freedman, Norbert (1977). Hands, words, and mind: On the structuralization of body movements during discourse and the capacity for verbal representation. In Norbert Freedman & Stanley Grand (Eds.), Communicative structures and psychic structures: A psychoanalytic approach (pp. 109–132). New York: Plenum Press. Frick-Horbury, Donna (2002). The use of hand gestures as self-generated cues for recall of verbally associated targets. American Journal of Psychology, 115 (1), 1–20. Gass, Susan M. & Alison Mackey (2006). Input, interaction and output: An overview. AILA Review, 19, 3–17. Goldin-Meadow, Susan (2003). Hearing gesture: How our hands help us think. Cambridge, MA: The Belknap Press. Goldin-Meadow, Susan & Cynthia Butcher (2003). Pointing toward two-word speech in young children. In Sotaro Kita (Ed.), Pointing: Where language, culture, and cognition meet (pp. 85–107). Mahwah, NJ: Erlbaum. Goldin-Meadow, Susan, Howard Nusbaum, Spencer D. Kelly, & Susan Wagner (2001). Explaining math: Gesturing lightens the load. Psychological Science, 12 (6), 516–522. Goodwin, Charles (Ed.) (2002). Conversation and brain damage. Oxford: Oxford University Press. Goodwin, Marjorie H. & Charles Goodwin (1986). Gesture and coparticipation in the activity of searching for a word. Semiotica, 62 (1/2), 51–75. Goodwyn, Susan W. & Linda P. Acredolo (1998). Encouraging symbolic gestures: A new perspective on the relationship between gesture and speech. In Jana M. 
Iverson & Susan Goldin-Meadow (Eds.), The nature and functions of gesture in children’s communication (pp. 61–73). San Francisco: Jossey-Bass. Goodwyn, Susan W., Linda P. Acredolo, & Catherine A. Brown (2000). Impact of symbolic gesturing on early language development. Journal of Nonverbal Behavior, 24 (2), 81–103. Graham, Jean Ann & Michael Argyle (1975). A cross-cultural study of the communication of extra-verbal meaning by gestures. International Journal of Psychology, 10 (1), 56–67. Guidetti, Michèle (2005). Yes or no? How young French children combine gestures and speech to agree and refuse. Journal of Child Language, 32 (4), 911–924. Gullberg, Marianne (1998). Gesture as a communication strategy in second language discourse: A study of learners of French and Swedish. Lund: Lund University Press. Gullberg, Marianne (1999). Communication strategies, gestures, and grammar. Acquisition et Interaction en Langue Étrangère, 8 (2), 61–71. Gullberg, Marianne (2003). Gestures, referents, and anaphoric linkage in learner varieties. In Christine Dimroth & Marianne Starren (Eds.), Information structure and the dynamics of language acquisition (pp. 311–328). Amsterdam & Philadelphia: Benjamins. Gullberg, Marianne (2006a). Handling discourse: Gestures, reference tracking, and communication strategies in early L2. Language Learning, 56 (1), 155–196. Gullberg, Marianne (2006b). Some reasons for studying gesture and second language acquisition (Hommage à Adam Kendon). International Review of Applied Linguistics, 44 (2), 103–124.
Gullberg, Marianne (2008). Gestures and second language acquisition. In Peter Robinson & Nick C. Ellis (Eds.), Handbook of cognitive linguistics and second language acquisition (pp. 276–305). London: Routledge. Gullberg, Marianne (forthcoming). What learners mean. What gestures reveal about semantic reorganisation of placement verbs in advanced L2. Gullberg, Marianne & Stephen G. McCafferty (2008). Introduction: Gesture and SLA — Toward an integrated approach. Studies in Second Language Acquisition, 30 (2), 133–146. Holler, Judith & Geoffrey Beattie (2003). Pragmatic aspects of representational gestures. Do speakers use them to clarify verbal ambiguity for the listener? Gesture, 3 (2), 127–154. Hostetter, Autumn B. & Martha W. Alibali (2007). Raise your hand if you’re spatial: Relations between verbal and spatial skills and gesture production. Gesture, 7 (1), 73–95. Hostetter, Autumn B., Martha W. Alibali, & Sotaro Kita (2007). I see it in my hand’s eye: Representational gestures are sensitive to conceptual demands. Language and Cognitive Processes, 22 (3), 313–336. Irujo, Suzanne. (1993). Steering clear: avoidance in the production of idioms. International Review of Applied Linguistics, 31 (3), 205–219. Iverson, Jana M., Olga Capirci, Emiddia Longobardi, & Maria Cristina Caselli (1999). Gesturing in mother–child interactions. Cognitive Development, 14 (1), 57–75. Iverson, Jana M., Olga Capirci, Virginia Volterra, & Susan Goldin-Meadow (2008). Learning to talk in a gesture-rich world: Early communication of Italian vs. American children. First Language, 164–181. Iverson, Jana M. & Susan Goldin-Meadow (2005). Gesture paves the way for language development. Psychological Science, 16, 367–371. Iverson, Jana M. & Esther Thelen (1999). Hand, mouth and brain. The dynamic emergence of speech and gesture. Journal of Consciousness Studies, 6 (11/12), 19–40. Jarvis, Scott (2000). 
Methodological rigor in the study of transfer: Identifying L1 influence in the interlanguage lexicon. Language Learning, 50 (2), 245–309. Jenkins, Susan & Isabel Parra (2003). Multiple layers of meaning in an oral proficiency test: The complementary roles of nonverbal, paralinguistic, and verbal behaviors in assessment decisions. Modern Language Journal, 87 (1), 90–107. Jungheim, Nicholas O. (1991). A study on the classroom acquisition of gestures in Japan. Ryutsukeizaidaigaku Ronshu, 26 (2), 61–68. Jungheim, Nicholas O. (2001). The unspoken element of communicative competence: Evaluating language learners’ nonverbal behavior. In Thom Hudson & James D. Brown (Eds.), A focus on language test development: Expanding the language proficiency construct across a variety of tests (pp. 1–34). Honolulu: University of Hawai’i. Kasper, Gabriele & Eric Kellerman (Eds.) (1997). Communication strategies: Psycholinguistic and sociolinguistic perspectives. London: Longman. Kellerman, Eric & Anne-Marie van Hoof (2003). Manual accents. International Review of Applied Linguistics, 41 (3), 251–269. Kelly, Spencer D., Dale J. Barr, Ruth Breckinridge Church, & Katherine Lynch (1999). Offering a hand to pragmatic understanding: The role of speech and gesture in comprehension and memory. Journal of Memory and Language, 40 (4), 577–592. Kendon, Adam (1994). Do gestures communicate? A review. Research on Language and Social Interaction, 27 (3), 175–200.
Kendon, Adam (1995). Gestures as illocutionary and discourse structure markers in Southern Italian conversation. Journal of Pragmatics, 23 (3), 247–279. Kendon, Adam (2004). Gesture: Visible action as utterance. Cambridge: Cambridge University Press. Kendon, Adam (2007). Some topics in gesture studies. In Anna Esposito, Maja Bratanic, Eric Keller, & Maria Marinaro (Eds.), Fundamentals of verbal and nonverbal communication and the biometric issue (pp. 3–19). Amsterdam: IOS Press. Kida, Tsuyoshi (2005). Appropriation du geste par les étrangers: Le cas d’étudiants japonais apprenant le français. Unpublished PhD diss., Université de Provence, Aix-en-Provence. Kimbara, Irene (2006). On gestural mimicry. Gesture, 6 (1), 39–61. Kita, Sotaro (2000). How representational gestures help speaking. In David McNeill (Ed.), Language and gesture (pp. 162–185). Cambridge: Cambridge University Press. Kita, Sotaro (2001). Pointing left in Ghana: How a taboo on the use of the left hand influences gestural practice. Gesture, 1 (1), 73–95. Kita, Sotaro (Ed.) (2003). Pointing: Where language, culture, and cognition meet. Mahwah, NJ: Erlbaum. Kita, Sotaro & Asli Özyürek (2003). What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language, 48 (1), 16–32. Krashen, Stephen D. (1994). The input hypothesis and its rivals. In Nick C. Ellis (Ed.), Implicit and explicit learning of languages (pp. 45–78). London: Academic Press. Krauss, Robert M., Yihsiu Chen, & Rebecca F. Gottesman (2000). Lexical gestures and lexical access: A process model. In David McNeill (Ed.), Language and gesture (pp. 261–283). Cambridge: Cambridge University Press. Lausberg, Hedda, Eran Zaidel, Robyn F. Cruz, & Alain Ptito (2007). Speech-independent production of communicative gestures: Evidence from patients with complete callosal disconnection. Neuropsychologia, 45 (13), 3092–3104.
Laver, John & Janet Mackenzie Beck (2001). Unifying principles in the description of voice, posture and gesture. In Christian Cavé, Isabella Guaïtella, & Serge Santi (Eds.), Oralité et gestualité (pp. 15–24). Paris: l’Harmattan. Lazaraton, Anne (2004). Gesture and speech in the vocabulary explanations of one ESL teacher: A microanalytic inquiry. Language Learning, 54 (1), 79–117. Lazaraton, Anne & Noriko Ishihara (2005). Understanding second language teacher practice using microanalysis and self-reflection: A collaborative case study. Modern Language Journal, 89 (4), 529–542. Lee, Jina (2008). Gesture and private speech in second language acquisition. Studies in Second Language Acquisition, 30 (2), 169–190. Liddell, Scott (2003). Grammar, gesture and meaning in American Sign Language. Cambridge: Cambridge University Press. Lott, Petra (1999). Gesture and aphasia. Bern: Peter Lang. Mayberry, Rachel I. & Elena Nicoladis (2000). Gesture reflects language development: Evidence from bilingual children. Current Directions in Psychological Science, 9 (6), 192–196. McCafferty, Stephen G. (2002). Gesture and creating zones of proximal development for second language learning. Modern Language Journal, 86 (2), 192–203. McCafferty, Stephen G. (2004). Space for cognition: Gesture and second language learning. International Journal of Applied Linguistics, 14 (1), 148–165.
McNeill, David (1985). So you think gestures are nonverbal? Psychological Review, 92 (3), 271–295. McNeill, David (1992). Hand and mind: What gestures reveal about thought. Chicago: University of Chicago Press. McNeill, David (1997). Growth points cross-linguistically. In Jan Nuyts & Eric Pederson (Eds.), Language and conceptualization (pp. 190–212). Cambridge: Cambridge University Press. McNeill, David (1998). Speech and gesture integration. In Jana M. Iverson & Susan Goldin-Meadow (Eds.), The nature and functions of gesture in children’s communication (pp. 11–27). San Francisco: Jossey-Bass. McNeill, David (2005). Gesture and thought. Chicago: University of Chicago Press. McNeill, David & Susan D. Duncan (2000). Growth points in thinking-for-speaking. In David McNeill (Ed.), Language and gesture (pp. 141–161). Cambridge: Cambridge University Press. Melinger, Alissa & Willem J. M. Levelt (2004). Gesture and the communicative intention of the speaker. Gesture, 4 (2), 119–141. Miyake, Akira & Naomi Friedman (1999). Individual differences in second language proficiency: Working memory as language aptitude. In Alice Healy & Lyle Bourne (Eds.), Foreign language learning: Psycholinguistic experiments on training and retention (pp. 339–362). Mahwah, NJ: Erlbaum. Mohan, Bernard & Sylvia Helmer (1988). Context and second language development: Preschoolers’ comprehension of gestures. Applied Linguistics, 9 (3), 275–292. Montepare, Joann M. & Joan S. Tucker (1999). Aging and non-verbal behavior: Current perspectives and future directions. Journal of Nonverbal Behavior, 23 (2), 105–109. Morris, Desmond, Peter Collett, Peter Marsh, & Marie O’Shaughnessy (1979). Gestures, their origins and distribution. London: Cape. Musumeci, Diane M. (1989). The ability of second language learners to assign tense at the sentence level: A crosslinguistic study. Unpublished PhD diss., University of Illinois at Urbana-Champaign. Negueruela, Eduardo, James P.
Lantolf, Susan Rehn Jordan, & Jaime Gelabert (2004). The “private function” of gesture in second language speaking activity: A study of motion verbs and gesturing in English and Spanish. International Journal of Applied Linguistics, 14 (1), 113–147. Nicoladis, Elena (2007). The effect of bilingualism on the use of manual gestures. Applied Psycholinguistics, 28 (3), 441–454. Nicoladis, Elena, Rachel I. Mayberry, & Fred Genesee (1999). Gesture and early bilingual development. Developmental Psychology, 35 (2), 514–526. Nobe, Shuichi (2001). On gestures of foreign language speakers. In Christian Cavé, Isabella Guaïtella, & Serge Santi (Eds.), Oralité et gestualité (pp. 572–575). Paris: l’Harmattan. O’Neill, Daniela K., Jane Topolovec, & Wilma Stern-Cavalcante (2002). Feeling sponginess: The importance of descriptive gestures in 2- and 3-year-old children’s acquisition of adjectives. Journal of Cognition and Development, 3 (3), 243–277. Özcaliskan, Seyda & Susan Goldin-Meadow (2005). Gesture is at the cutting edge of early language development. Cognition, 96 (3), B101–B113. Özyürek, Asli (2002a). Do speakers design their cospeech gestures for their addressees? The effects of addressee location on representational gestures. Journal of Memory and Language, 46, 688–704.
Özyürek, Asli (2002b). Speech-language relationship across languages and in second language learners: Implications for spatial thinking and speaking. In Barbora Skarabela (Ed.), BUCLD Proceedings (Vol. 26, pp. 500–509). Somerville, MA: Cascadilla Press. Özyürek, Asli & Spencer D. Kelly (2007). Special issue ‘Gesture, brain, and language’. Brain and Language, 101 (3), 181–184. Özyürek, Asli, Sotaro Kita, Shanley E. M. Allen, Reyhan Furman, & Amanda Brown (2005). How does linguistic framing of events influence co-speech gestures? Insights from crosslinguistic variations and similarities. Gesture, 5 (1/2), 219–240. Pika, Simone, Elena Nicoladis, & Paula Marentette (2006). A cross-cultural study on the use of gestures: Evidence for cross-linguistic transfer? Bilingualism, 9 (3), 319–327. Pine, Karen J., Hannah Bird, & Elizabeth Kirk (2007). The effects of prohibiting gestures on children’s lexical retrieval ability. Developmental Science, 10 (6), 747–754. Pine, Karen J., Nicola Lufkin, & David Messer (2004). More gestures than answers: Children learning about balance. Developmental Psychology, 40 (6), 1059–1067. Pinker, Steven (1989). Learnability and cognition: The acquisition of argument structure. Cambridge, MA: MIT Press. Pizzuto, Elena, Micaela Capobianco, & Antonella Devescovi (2005). Gestural-vocal deixis and representational skills in early language development. Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems, 6 (2), 223–252. Richards, Jack C. (Ed.) (1974). Error analysis. London: Longman. Riseborough, Margaret G. (1981). Physiographic gestures as decoding facilitators: Three experiments exploring a neglected facet of communication. Journal of Nonverbal Behavior, 5 (3), 172–183. Robinson, Peter (2003). Attention and memory during SLA. In Catherine J. Doughty & Michael H. Long (Eds.), The handbook of second language acquisition (pp. 631–678). Oxford: Blackwell. Rogers, William T. (1978).
The contribution of kinesic illustrators toward the comprehension of verbal behavior within utterances. Human Communication Research, 5 (1), 54–62. Rose, Miranda L. (2006). The utility of arm and hand gesture in the treatment of aphasia. Advances in Speech-Language Pathology, 8 (2), 92–109. Schegloff, Emanuel A. (1984). On some gestures’ relation to talk. In J. Maxwell Atkinson & John Heritage (Eds.), Structures of social action (pp. 266–296). Cambridge: Cambridge University Press. Schick, Brenda, Marc Marschark, & Patricia E. Spencer (Eds.) (2006). Advances in the sign language development of deaf and hard-of-hearing children. New York: Oxford University Press. Schmid, Monika S., Barbara Köpke, Merel Keijzer, & Lina Weilemar (Eds.) (2004). First language attrition: Interdisciplinary perspectives on methodological issues. Amsterdam: Benjamins. Selinker, Larry (1972). Interlanguage. International Review of Applied Linguistics, 10 (3), 209–231. Sime, Daniela (2006). What do learners make of teachers’ gestures in the language classroom? International Review of Applied Linguistics, 44 (2), 209–228. Singer, Melissa A. & Susan Goldin-Meadow (2005). Children learn when their teacher’s gestures and speech differ. Psychological Science, 16 (2), 85–89. Slama-Cazacu, Tatiana (1976). Nonverbal components in message sequence: “Mixed syntax”. In William C. McCormack & Stephen A. Wurm (Eds.), Language and man: Anthropological issues (pp. 217–227). The Hague: Mouton.
Slobin, Dan I. (1996). From “thought and language” to “thinking for speaking”. In John J. Gumperz & Stephen C. Levinson (Eds.), Rethinking linguistic relativity (pp. 70–96). Cambridge: Cambridge University Press. Stam, Gale (2006). Thinking for Speaking about motion: L1 and L2 speech and gesture. International Review of Applied Linguistics, 44 (2), 143–169. Stefanini, Silvia, Maria Cristina Caselli, & Virginia Volterra (2007). Spoken and gestural production in a naming task by young children with Down syndrome. Brain and Language, 101 (3), 208–221. Sueyoshi, Ayano & Debra M. Hardison (2005). The role of gestures and facial cues in second language listening comprehension. Language Learning, 55 (4), 661–699. Swain, Merrill (2000). The output hypothesis and beyond: Mediating acquisition through collaborative dialogue. In James P. Lantolf (Ed.), Sociocultural theory and second language learning (pp. 97–114). Oxford: Oxford University Press. Taranger, Marie-Claude & Christine Coupier (1984). Recherche sur l’acquisition des langues secondes: Approche du gestuel. In Alain Giacomi & Daniel Véronique (Eds.), Acquisition d’une langue étrangère: Perspectives et recherches (pp. 169–183). Aix-en-Provence: Université de Provence. Tellier, Marion (2006). L’impact du geste pédagogique sur l’enseignement/apprentissage des langues étrangères: Etude sur des enfants de 5 ans. Unpublished PhD diss., Université Paris VII — Denis Diderot, Paris. Tellier, Marion (2009). The development of gesture. In Kees de Bot & Robert Schrauf (Eds.), Language development over the life-span (pp. 191–216). New York: Routledge. Tomasello, Michael (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press. Valenzeno, Laura, Martha W. Alibali, & Roberta Klatzky (2002). Teachers’ gestures facilitate students’ learning: A lesson in symmetry. Contemporary Educational Psychology, 28, 187–204. Van Dijk, Marijn & Paul van Geert (2005).
Disentangling behavior in early child development: Interpretability of early child language and the problem of filler syllables and growing utterance length. Infant Behavior and Development, 28, 99–117. Van Els, Theo, Guus Extra, Charles van Os, & Annemieke Janssen van Dieten (1984). Applied Linguistics and the learning and teaching of foreign languages. London: Edward Arnold. Van Hell, Janet G. & Ton Dijkstra (2002). Foreign language knowledge can influence native language performance in exclusively native contexts. Psychonomic Bulletin & Review, 9 (4), 780–789. Verspoor, Marjolijn, Wander Lowie, & Marijn van Dijk (2008). Variability in L2 development from a dynamic systems perspective. Modern Language Journal, 92 (2), 214–231. Viberg, Åke & Kenneth Hyltenstam (Eds.) (1993). Progression and regression in language. Cambridge: Cambridge University Press. Volterra, Virginia, Maria Cristina Caselli, Olga Capirci, & Elena Pizzuto (2005). Gesture and the emergence and development of language. In Michael Tomasello & Dan I. Slobin (Eds.), Beyond nature-nurture: Essays in honor of Elizabeth Bates (pp. 3–40). Mahwah, NJ: Erlbaum. Volterra, Virginia, Jana M. Iverson, & Marianna Castrataro (2006). The development of gesture in hearing and deaf children. In Brenda Schick, Marc Marschark, & Patricia E. Spencer (Eds.), Advances in the sign language development of deaf children (pp. 46–70). New York: Oxford University Press.
Gestures and some key issues in the study of language development
Wagner, Susan M., Howard Nusbaum, & Susan Goldin-Meadow (2004). Probing the mental representation of gesture: Is handwaving spatial? Journal of Memory and Language, 50 (4), 395–407. Wallbott, Harald G. (1995). Congruence, contagion, and motor mimicry: Mutualities in nonverbal exchange. In Ivana Marková, Carl Graumann, & Klaus Foppa (Eds.), Mutualities in dialogue (pp. 82–98). Cambridge: Cambridge University Press. Wexler, Kenneth & Peter Culicover (1980). Formal principles of language acquisition. Cambridge, MA: MIT Press. White, Lydia (2003). On the nature of interlanguage representation: Universal grammar in the second language. In Catherine J. Doughty & Michael H. Long (Eds.), The handbook of second language acquisition. Oxford. Wolfgang, Aaron & Zella Wolofsky (1991). The ability of new Canadians to decode gestures generated by Canadians of Anglo-Celtic backgrounds. International Journal of Intercultural Relations, 15 (1), 47–64. Wu, Ying Choon & Seana Coulson (2005). Meaningful gestures: Electrophysiological indices of iconic gesture comprehension. Psychophysiology, 42 (6), 654–667. Yoshioka, Keiko & Eric Kellerman (2006). Gestural introduction of Ground reference in L2 narrative discourse. International Review of Applied Linguistics, 44 (2), 171–193.
33
Before L1: A differentiated perspective on infant gestures

Ulf Liszkowski
Max Planck Research Group Communication Before Language, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
This paper investigates the social-cognitive and motivational complexities underlying prelinguistic infants’ gestural communication. With regard to deictic referential gestures, new and recent experimental evidence shows that infant pointing is a complex communicative act based on social-cognitive skills and cooperative motives. With regard to infant representational gestures, findings suggest the need to re-interpret these gestures as initially non-symbolic gestural social acts. Based on the available empirical evidence, the paper argues that deictic referential communication emerges as a foundation of human communication first in gestures, already before language. Representational symbolic communication, in contrast, emerges after deictic communication, first in the vocal modality and, perhaps, in gestures through non-symbolic, socially situated routines.

Keywords: prelinguistic communication, pointing, representational gestures, infant
Before L1

Language is a hallmark of modern human communication, but it is perhaps best understood as an emergent property, over historical time, of more basic, non-linguistic, yet uniquely human forms of communication. Language, whether spoken or signed, is a conventional code, but the code alone is not a sufficient basis for communication. For example, the statement “This is a concert”, addressed to a person whose mobile phone goes off in the middle of a symphony, is meant to communicate much more than what is actually coded linguistically. Further, in the absence of a shared linguistic code, speakers of different languages still communicate successfully with gestures. Even in the absence of conventionalized gestures (i.e., sign language), deaf-born humans who are deprived of spoken and signed linguistic
codes can still communicate successfully with creatively invented ‘home-sign’ gestures (Goldin-Meadow, 2003; Senghas, Kita, & Özyürek, 2004). The main point is that language, albeit special in itself, is only the tip of the iceberg of human communication, with all its complexities below the surface still waiting to be discovered.

Human communication is an inferential process. A recipient tries to understand a sender’s intention, and the sender, in turn, knows this and intends the recipient to understand his intention (Grice, 1957; Sperber & Wilson, 1986). This recursive inferential model of human communication involves two main psychological components. First, social-cognitively, interlocutors need to form and understand intentions toward others’ intentions and to understand epistemic states in order to transmit and infer referential content. Second, motivationally, human communication is at its base cooperative. The sender marks and formulates his utterances so that the receiver understands them, and the receiver tries to understand them as the sender intended her to. Communication takes place in a joint zone (‘common ground’; Clark, 1996). Communicative attempts outside this zone fail, whether they are coded linguistically or not. Further, humans communicate with cooperative motives, for example to freely provide others with relevant information, not only to gain immediate direct benefit. Human communication thus involves complex social-cognitive and cooperative abilities, abilities which must develop somehow.

Animal communication provides an informative contrast to the inferential and cooperative human communication model. Most animal signals are reactions to stimuli, usually with little flexibility, often lacking referential or even communicative intent (e.g., Call & Tomasello, 2007).
Dawkins and Krebs (1978) emphasized that ritualized combats, mating, courtship, and so forth are really individualistic behaviors that maximize the benefit of the individual who emits the signal, by literally using another individual’s muscles from afar. Animal communication thus seems to involve quite different processes from human communication. It mostly lacks a cooperative structure and is instead based on individualistic attempts at manipulating the environment to one’s own benefit, whether that environment is animate or inanimate. How and when cooperative behaviors evolved in phylogeny is currently a topic of heated debate (e.g., Boyd, 2006). It would seem that it is only with the advent of cooperative motives that we first see rudimentary forms of communication proper, forms that go beyond the manipulation of others’ muscles for one’s own direct benefit.

Infant gestural communication is a test case for models of communication. If language rests on more fundamental yet already uniquely human forms of communication, the core infrastructure of human communication should already be present before language, and should already differ from that of other species. This paper
takes an ontogenetic perspective on human communication that is largely independent of language and linguistic code models by investigating infant gestures before the acquisition of a first language (L1). In contrast to adult gesture research, it does not address speech-accompanying gestures as part of or a complement to an existing linguistic system, but instead focuses on the emergence and use of core gestures before any speech or linguistic system has developed. The main question is: To what extent do infant gestures already share a common cognitive and motivational infrastructure with fully fledged adult communication before language has emerged? From a developmental perspective, a related question is: Where do infant gestures originate ontogenetically?

With regard to the gestural origins, two perspectives may broadly be distinguished. Classically, from a language acquisition perspective, infants’ gesturing has been interpreted as a kind of social tool-use, building on infants’ emerging intentionality (Bates, 1979). Infants’ intentionality, in turn, supposedly originates from infants’ individualistic sensorimotor schemes toward the physical environment. Infant gesturing, on this account, is a tool for individualistic problem-solving serving one’s own benefit, resting on the intentionality that emerges from individualistic object-directed action schemes. On this account, however, it is not clear to what extent infant gestures already reflect a psychology of intentions toward others’ intentional states, and cooperative motives to commune with others and achieve mutual understanding involving benefits for the other. Nor is it straightforward how inferential-cooperative communication would simply emerge on the heels of individualistic object-directed action schemes and egocentric motives. Other accounts instead emphasize humans’ ultra-sociality as the origin of infants’ gestures.
Infants are entrenched in rich interactional contexts with competent adult communicators from the beginning, and these contexts provide a strong basis for the ontogenetic origins of cooperative human communication (Bruner, 1983; Werner & Kaplan, 1963). Infants are attuned to people from birth (e.g., to their faces and voices), and an attachment system ensures adults’ interest in interacting with their offspring on a psychological level beyond mere nurturing (Bowlby, 1969). Very young infants are already sensitive to several interaction cues such as ostension and reference (see Csibra & Gergely, 2006), and to the contingencies of turn-taking, as evidenced, for example, by their gaze aversion and re-engagement behavior in the still-face paradigm (in which an adult interrupts an ongoing face-to-face interaction with a still face; see Adamson & Frick, 2003). Cognitively, however, on this account it is not so clear to what extent infant gestures are already under infants’ control and intentionally directed at people.

One perspective on the emergence of gestures thus emphasizes infants’ developing intentionality from individualistic object-directed sensorimotor schemes as the key feature in the emergence of human communication. In that perspective,
the origins are individualistic and the underlying motivation is egocentric, still lacking the cooperative structure of adult communication. Another perspective instead suggests that the root of human communication is a primary motive for social contact within an ultra-social environment. On this account it is less clear, however, whether infants’ behaviors already involve communicative and referential intent, or whether adults merely interpret and construct them as communicative.

Few empirical studies have directly tested the underlying complexities of infants’ gestures with regard to communicative intent, social cognition, and cooperative motives. In what follows, I first present recent findings on infants’ referential deictic gestures, in particular infant pointing. These findings constitute evidence for social-cognitively and motivationally ‘rich’ prelinguistic referential communication. Next, I review relevant findings on infants’ representational gestures, which emerge after pointing. These findings yield little support for a symbolic interpretation of infant gestural communication, in particular not before language. Instead, I propose a leaner re-interpretation of these gestures as non-symbolic gestural social acts.
Infant gestures

In infancy research, infants’ gestures have been operationally defined as intentionally communicative based on infants’ (1) looks to the adult, (2) persistence and flexibility in achieving a goal, and (3) conventionalization of behavioral forms (Bates, 1979). Intentionally communicative gestures have been classified into deictic and representational gestures (Bates, 1979). Deictic gestures show or present a referent in the environment (deixis, Greek “to show”), the most prominent such gesture being pointing. Deictic gestures are thus used to communicate referentially. Representational gestures re-present a referent, either in a conventionalized arbitrary form, with a gesture that is commonly associated with the idea it should trigger (e.g., thumbs up for ‘good’), or in an iconic way, by miming a referent or pretending to act out the content of a message (e.g., raising a half-open fist toward the mouth for ‘drinking’). Representational gestures are thus used to communicate referentially by means of a symbolic vehicle which represents the referent.
Deictic gestures and infant pointing

Classically, infant gestures such as showing, giving, reaching, and pointing have been classified as deictic gestures (Bates, 1979). In fact, infant reaching may better be described as a requesting rather than a showing gesture: it is ritualized from abbreviated grasping attempts, similar to begging gestures in apes (Call & Tomasello, 2007). However, from around 9 months infants also pick up objects and hold them
out with an outstretched arm, usually to the delight of their caregivers, who then comment on them. Although infants are often less ready to let an adult take the object they are holding out (so that such ‘showing’ is not yet an ‘offer’), infants will sometimes also hand over objects, often by placing them in the parent’s lap or hands. Such ‘placing’ is deictic in the sense that the object becomes a referent by virtue of the specific place where it is put (Clark, 2003). In a sense, these gestures are thus referential because they bring specific objects to the attention of others. Further, they seem to be motivated cooperatively, to mutually engage about these objects. Showing and placing are thus good candidates for crediting infants with intentional deictic referential communication and may reflect foundations of uniquely human communication.

However, it is not clear precisely how these gestures work from the infants’ point of view. There are no experiments to my knowledge which have directly tested the referential intent underlying infants’ showing or placing. Since these gestures involve objects at hand, a leaner interpretation is that they originate from individualistic object-directed actions. For example, infants may shake objects as an exploratory activity, while parents interpret this as communicative object exposure. Based on parents’ reactions, the activity then becomes ritualized into a social gesture. Once ritualized, social gestures may be interpreted as intentional communication. However, they need not yet be intentionally deictic or express referential intent on the infants’ part. Instead, showing and placing may simply reflect a way of interacting with others, non-referentially. These gestures are motivationally interesting because they afford and establish social contact. The underlying communicative and cognitive complexities, however, are not yet clear.

Pointing emerges after showing and placing, around 12 months.
Pointing is interesting because it enables reference to things at a distance and does not require physical contact with objects. Its action scheme has no function outside communicative contexts, in particular not for individualistic actions on objects. Pointing is even used to refer to referents beyond the immediate perceptual ‘here and now’ as one can point to a chair to refer to the late grandfather who used to sit in it. Referring to entities displaced from the ‘here and now’ is clearly a distinguishing feature of human communication. Social-cognitively, communicating by pointing requires an understanding that people attend to things and that one can direct their attention to these. It also involves an understanding of the shared background against which the point’s referent must be interpreted. Motivationally, adults point with cooperative motives, for example, to engage about things together, or to help others notice what they need to know. Two core aspects of human communication, that is, social cognition and cooperation, are thus already reflected in the single special act of human pointing. But developmentally, we need to know how this gesture works from the point of view of prelinguistic infants.
In a series of recent experimental studies, my colleagues and I have investigated infant pointing when it has just emerged, around 12 months. These studies were designed to challenge communicatively ‘lean’ accounts. For example, developmental psychologists have proposed that (i) infants initially point non-communicatively (Desrochers, Morissette, & Ricard, 1995); (ii) pointing is non-referential, as it does not involve a social-cognitive understanding of recipients’ attention (Moore & D’Entremont, 2001); and (iii) infants’ motivation is mainly egocentric, to obtain objects or attention to the self (Bates, Camaioni, & Volterra, 1975; Moore & D’Entremont, 2001; Gomez, Sarria, & Tamarit, 1993). We used different procedures to elicit infant pointing: either interesting events happened (a light flashed; a puppet appeared from behind a curtain), or an adult searched for something she needed, or the infant desired something out of reach. What we systematically varied across all these studies was the social context of infant pointing.

First, with regard to communicative intent, we found that already at 12 months infants use their pointing gestures to communicate. For example, when a recipient did not react to their pointing, infants persisted in their communicative goal and augmented the signal, as reflected in repeated pointing and increased vocalizations, compared to a situation in which the adult reacted typically by sharing attention and interest (Liszkowski et al., 2004, 2007a). Even more clearly, before infants initiated a point, they considered whether the recipient was attending to them and so could see their point. When an adult was turned sideways and did not look at the infants (and so could not possibly see a point), infants pointed less than when the adult was turned toward them and so could see and react to their visual gesture (Liszkowski, Albrecht, Carpenter, & Tomasello, 2008).
These experimental results thus establish that 12-month-olds point with communicative intent.

Second, with regard to reference, we found that infants point referentially, making reference even to absent entities. In two studies, a recipient misidentified infants’ referents and attended either solely to the infant’s face or to an irrelevant object near the intended referent. In both these cases of referential misunderstanding, infants attempted to redirect the recipient’s attention by repeating their pointing to the intended referent more often than when the recipient had correctly identified it (Liszkowski et al., 2004, 2007a). Infants thus point to refer to particular entities. In further studies, we found that infants even refer to ceased events and to objects which are not present at the moment of testing. For example, when infants had attended to an interesting event and it had ceased, they then pointed to its previous, now-empty location, depending on how a recipient had reacted to it before (Liszkowski et al., 2007b). Further, to obtain a desirable object that was absent at the moment of request, infants, but not chimpanzees tested in the same study design, pointed to the object’s usual but now-empty location, thus referring to the absent entity (Liszkowski, Schäfer, Carpenter, &
Tomasello, 2009). These studies thus establish that infants point to refer others to specific, and sometimes even absent, referents. In further studies, we tested the epistemic understanding underlying infants’ referential pointing. We found that infants pointed significantly more often to an interesting event when the adult had not yet seen it than when she already had (Liszkowski et al., 2007b). Moreover, we established that infants point to inform an adult who is looking for an object (Liszkowski, Carpenter, Striano, & Tomasello, 2006). In this new search paradigm, an adult lost track of one of two boring objects which she (but not the infant) needed, and then searched around with a quizzical look. Infants readily pointed out the object the adult needed, without requestive accompaniments or personal interest in the objects, and more often when the adult was ignorant rather than knowledgeable of the objects’ locations (Liszkowski, Carpenter, & Tomasello, 2008). These results thus reveal that infants point referentially with an understanding of the attentional and epistemic states of others.

Third, we found that infants point for others with cooperative and prosocial motives. The studies show that infants point at interesting events to share their interest in these with others. For example, when an adult merely oriented to the infant’s referent but did not comment on it (Liszkowski et al., 2004), or when the adult’s comment about a referent was unenthusiastic and therefore did not match the infant’s interest (Liszkowski et al., 2007a), infants were dissatisfied, as reflected in their differential patterns of pointing. Crucially, when infants already shared an attentional focus with the adult, they still pointed if the adult expressed interest in the referent, in order to express their alignment with the adult’s expression of attitude (Liszkowski et al., 2007b).
These findings show that infants not only want to share the visual focus on a referent; they want to express and share their attitudes about the referent, too. Moreover, we demonstrated for the first time that infants also point to help others, which may be interpreted as the ontogenetically earliest evidence of altruistic helping without direct benefit for the self. In these studies, infants pointed to help an adult find things which they themselves had not requested and did not find of particular interest (Liszkowski et al., 2006), and did so more when the adult needed help to find them than when she did not (Liszkowski et al., 2008). The studies thus provide experimental evidence that infants point with cooperative and prosocial motives, i.e., to align with and to help others.

The new experimental findings provide a new look at infant pointing: a human communicative act involving full-fledged reference on a mental level and cooperative motives like sharing and helping, all before language has emerged (see Tomasello, Carpenter, & Liszkowski, 2007). This interpretation is further supported by the fact that infants also comprehend the pointing of others in the same way that they themselves point (Camaioni, Perucchini, Bellagamba, & Colonnesi, 2004; Behne, Liszkowski, Carpenter, & Tomasello, submitted). The exact
process of the emergence of pointing is not well understood at present (see also Lock, Young, Service, & Chandler, 1990). Since pointing is not functional as an object-directed action (unlike, e.g., reaching), and because it is used communicatively with cooperative motives from the beginning, pointing does not seem to simply originate from individualistic object-directed actions, and in particular not from reaching (see also Franco & Butterworth, 1996). Presumably, as already suggested by Werner and Kaplan (1963), the ability to refer originates in interpersonal contexts, from an emerging motive to share objects together as ‘objects-of-regard’. On that account, infants’ showing and placing may enhance object-involved interpersonal contexts and, through social scaffolding, lead to infants’ comprehension and production of referential communication. It is not clear whether imitation or, instead, a more biological basis leads to the particular form of index-finger pointing. Given the communicative complexities of pointing when it has just emerged, however, it should be conceptualized as a developmental accomplishment of — and not a precursor to — referential communication.
Representational gestures and infant gestural social acts

Like words, representational gestures have semantic content and are used for symbolic reference. They thus differ from pointing, which does not in itself represent in a symbolic way or carry meaning independent of its context. Infant representational gestures emerge after pointing: in the case of arbitrary gestures through imitation and, in the case of iconic gestures, also creatively from one’s own action experiences (Bates, 1979; Capirci et al., 2005). Symbolic communication is a transformation of earlier forms of deictic communication which presupposes skills of reference and, in addition, cognitive skills for symbolizing. Representational gestures, especially iconic gestures, require the cognitive ability to decouple an action directed at an object from the communicative act of representing a referent with that action. Iconic gestures thus involve some kind of pretending or miming of an object-related action in order to represent a referent.

Developmentally, however, it is possible that both arbitrary conventional and iconic gestures are initially simply reproduced via imitation. To merely reproduce an iconic gesture, one need not understand its ‘etymological’ relation to the action scheme it derives from, nor decouple action and representation. Indeed, studies show that infants have no advantage in comprehending iconic over arbitrary gestures (Namy, Campbell, & Tomasello, 2004). It is thus possible that representational gestures initially reflect non-symbolic forms of participating in social situations, routines, and game formats. On such an account, infant representational gestures may be re-interpreted as non-symbolic gestural social acts.
Gestural social acts are conceptually and developmentally different from fully symbolic representational gestures and are best understood in terms of their origins. They originate in social routines, games, and contingencies through interactional processes, and they are mainly used to do what one does with others socially. Just as objects afford certain object-directed actions, social situations and persons may afford certain social gestures for infants. For example, a routine in which a mother sings ‘we are birds’ while flapping her arms may lead the infant to eventually flap her arms too, initially just in the game context with the mother, and then perhaps as a way of initiating or maintaining contact with other social partners, out of an interest in the social world and a proclivity to interact. The point is that the infant initially need not know that the gesture is used to symbolically represent the referent ‘bird’ or ‘bird game’. Instead of representing anything symbolically, gestural social acts are about the direct social activity and interaction itself.

It is not entirely clear what convincing evidence for a symbolic understanding of representational gestures would look like, but there are several co-occurring behaviors which could support a symbolic interpretation. Clearly, one would expect generalization of usage and decontextualization (Werner & Kaplan, 1963), since symbols are rather abstract and not bound to specific situations or recipients. Further, good evidence would be skills for creatively producing iconic gestures, which requires the cognitive ability to decouple actions from objects and, instead, use these actions to represent referents. Other supportive evidence for a symbolic interpretation of representational gestures would be their combination with other deictic and representational gestures or with words into symbolically communicated messages (for example, gesture for sleep + word ‘bed’, or gesture for sleep + point to bed).
More independent support would be the cognitive ability to creatively extend pretense acts and to understand symbols more generally, for example maps or scale models. In what follows, I review and discuss findings on infant representational gestures which have been taken to support a symbolic interpretation. In light of these findings, I propose a leaner re-interpretation of these gestures as initially non-symbolic gestural social acts.

Classically, Bates (1979) proposed a transition from deictic to representational communication around 13 months, when infants start to use their first words and representational gestures to name things and engage in symbolic play, such as putting doll shoes on a doll’s feet, or stirring with a spoon to express something about ‘spoonness’. Such pretense or symbolic play has been interpreted as representational gesturing (‘gestural naming’). For example, Caselli (1990), based on a diary study, reported episodes of symbolic play at 9–12 months which she interpreted as communicative semantic acts similar to first words (e.g., holding an empty fist to the ear for ‘telephone’). However, most of these observations concerned narrowly defined, context-bound social acts learned and reproduced in specific social interactions.
Further, these ‘naming’ gestures could be a form of individualistic pretense play rather than being communicative. Moreover, they may not even involve pretense, but may instead only reflect individualistic trying and practicing of object-directed action schemes. In fact, more recent studies suggest that it is around 2 years of age that children differentiate pretense from serious trying (Rakoczy, Tomasello, & Striano, 2004) and creatively extend others’ pretense acts (for example, when an adult pretends to spill some coffee on a table, infants then pretend to clean the table; Harris & Kavanaugh, 1993).

Acredolo and Goodwyn (1988, Study 2) conducted longitudinal interviews with the parents of 16 infants, starting at 11 months, over a weekly assessment period of 9 months. They concluded that infants gesture representationally from around 14 to 15 months onwards, with a possible advantage in onset of representational gestures over words of about 3 weeks (only with gesture training; Goodwyn & Acredolo, 1993). They coded gestures if they occurred more than once (to exclude one-off mimicking) and if they were clearly discernible from other (e.g., vocal) behaviors. They distinguished ‘object gestures’, denoting the presence of specific objects or events (e.g., sniffing for ‘flower’), if they were generalized from a real object to a picture of it or vice versa; ‘request gestures’ (like knob-turning for ‘open door’ or arms-up for ‘pick me up’), which were specific to situations and contexts and, as the authors noted, not generalizable; and ‘attributes’, which were object descriptions, like blowing for ‘hot’ or palms up for ‘all gone’, if they were not instrumental. The main findings were that request, attribute, and object gestures emerged between 14 and 15 months (in that order), and that object gestures were the most frequent (38 types in 75% of the sample during 9 months of weekly observations, thus on average 2 gesture types per infant).
Of these object gestures, 32% originated within interactive routines, either through repeated exposure or through explicit teaching. In contrast, 58% of the object gestures were mimed actions without objects in hand (for example, rubbing the tummy for ‘soap’, panting for ‘dog’, or flapping arms for ‘bird’), which the authors argued had emerged outside interactive routines. Ten percent of the gestures depicted perceptual qualities of the referent (e.g., a cupped hand for ‘moon’).

However, a number of issues challenge the authors’ interpretation of symbolic gesturing before language. Methodologically, weekly parental interviews may be of limited value with regard to the mode of acquisition (e.g., inside vs. outside interactive routines) and the extent of generalization. Operationally, request gestures are context-specific, ritualized, abbreviated action attempts, observable also in non-linguistic apes, rather than symbols (e.g., instead of climbing up the mother’s leg, it becomes sufficient after a while to simply ‘raise arms’; see also Call & Tomasello, 2007). Further, attribute gestures do not involve great generalization either, because their occurrence is constrained to specific situations like feeding or clean-up/
hiding games (e.g., when eating, mum always blows, or when things are gone one always raises palms). They are thus rather prototypical gestural social acts, used to do what others do in specific social situations. With regard to object gestures, infants had a very small repertoire, on average 2 object gestures per infant. This is in stark contrast to the rapid word growth at that age. Further, object gestures were generalized only to similar referents in similar contexts rather than being used flexibly. The degree of generalization was thus fairly small. A third of these gestures emerged inside interactive routines, consistent with the re-interpretation of these gestures as gestural social acts. With regard to the remaining gestures, it is not clear that they really emerged outside a social context, especially not as individually created iconic symbols. On grounds of parsimony, it seems unnecessary to assume that young infants draw an analogy between birds’ wings and their own arms, then on this basis creatively mime birds by ‘flapping arms’, and finally use this pantomime with the intent to communicate something about a bird. Similarly, it is not clear that infants creatively produce gestures to depict perceptual qualities of objects. First, the very low frequency alone does not suggest a general ability to create and communicate flexibly with iconic gestures. Second, if we imagine Acredolo and Goodwyn’s infant with cupped hands, it seems unlikely that the infant would invent this sign outside a social context by looking at the moon (on a sleepless night) and then for the first time communicate about the moon by creatively inventing the sign ‘cupped hands’.

In another study, Iverson, Capirci, and Caselli (1994) collected observational data from 12 infants at 16 and 20 months in 45-minute play sessions to compare the development of gestural and vocal communication with regard to deictic and representational gestures.
They measured ‘showing’, ‘pointing’, and ‘ritualized requesting (reach)’ as deictic gestures, and as representational gestures ‘predicates’ (e.g., hot; tall), ‘conventional gestures’ (e.g., no; bye-bye; all gone), and ‘nominal gestures’ (cf. Acredolo and Goodwyn’s ‘object gestures’; e.g., drinking from a cup, demonstrated with or without the object, or flapping hands for ‘birdie’). They found that most 16-month-olds communicated more frequently with gestures than vocally. However, the vast majority of all gestures at both ages were actually deictic, not representational. Moreover, the deictic gestures increased with age from 68% to 80% of all gestures, while representational gestures decreased correspondingly. Interestingly, the vast majority of deictic gestures at 16 and 20 months consisted of pointing alone (rising from about 60% to 80%; chance = 33%). Further, already at 16 months the total number of representational gestures was much smaller than the number of representational words (25% of the number of representational words). In addition, with regard to the types of representational gestures, of the 14 nominal gestures observed at 16 months, half were performed with an object in hand and thus not clearly distinguishable from object-directed actions.
The study shows quantitatively that infants use representational gestures only seldom compared with pointing and verbal communication. This suggests that representational gestures — in contrast to pointing — play only a small role in infants’ prelinguistic communication. Also, the number of representational gestures is much smaller than that of representational words from the outset, suggesting that infant representational communication is vocal rather than gestural throughout. Further, relative to deictic gestures, infants’ infrequent representational gesturing even decreases during the transition to symbolic communication, which suggests that infants do not build their emerging symbolic verbal communication on skills of symbolic gestural communication. Instead, pointing alone is the most frequent gesture, and it increases relative to all other gestures even until the two-word stage (see also Lock et al., 1990). This suggests that it is actually pointing which leads infant communication throughout the prelinguistic period to the two-word stage, not representational gestures.

To investigate the role of gestures in the transition from one- to two-word utterances, further research has addressed gesture-word combinations in the second year (e.g., Capirci, Iverson, Pizzuto, & Volterra, 1996; Iverson & Goldin-Meadow, 2005). Gesture-word combinations in these studies are, for example, when infants point to a cup and say ‘drink’ or ‘cup’. The main finding in these studies is that the vast majority of gestures that co-occur with words are, in fact, pointing gestures. Further, combinations in the gestural mode, for example, two representational gestures (gesture for cup + gesture for drink) or deictic and representational gestures (point to cup + gesture for drink), are virtually absent (see Pizzuto & Capobianco, 2005).
These findings thus suggest that in the transition to symbolic (verbal) communication, infant representational gestures do not play a significant role. Instead, they are bypassed by pointing and first word utterances. The co-occurrences of point + word may in fact also be interpreted as single holistic utterances (albeit in two modalities) rather than true combinations of two different utterances, especially since a point itself does not carry semantic content independent of the communicative context. In a sense, this may suggest that one-word utterances, too, are a fragile form of verbal representational communication which initially still relies heavily on pointing.

The studies show that infants in their second year of life produce forms of representational gestures. Quantitatively, these gestures are few in number, infrequent, and used much less than deictic gestures (in particular pointing) or vocal communication. Infants thus rarely use representational gestures for their prelinguistic communication. Further, there is no conclusive evidence that infants use these gestures symbolically, in particular not before language. Infants’ representational gestures may thus be reinterpreted as non-symbolic gestural social acts. Initially, infants use these gestures as a form of social activity and a way
of doing things together, mainly in play formats or routines. At this stage, infant representational gestures do not re-present anything other than the gestural activity itself. Rather, they directly present infants’ joint activity: their social acts done with gestures.
A differentiated perspective on infant gestures

Infant gestural communication provides important insights into the emergence and nature of human communication. It is a model for unique forms of human communication independent of linguistic codes. Findings show that infants communicate with gestures already before the acquisition of a first language, in ways that already differ from those of other species. If language is the tip of the iceberg of human communication, infant gestural communication is its base.

Focusing more specifically on the base of infant gestures, findings suggest a differentiated picture. Deictic referential gestures (i.e., pointing) are foundational to human communication. Representational gestures are an emergent property of interactional processes in the transformation of gestural deictic toward verbal symbolic communication. A new look at prelinguistic infants’ pointing has revealed that infants point to communicate referentially — including reference to absent entities — in various and flexible ways, with cooperative motives and a social-cognitive understanding of others’ epistemic states. Infant pointing thus bears core features of the infrastructure of human communication, already before language has emerged. In typically developing infants, pointing is foundational to language and mediates its acquisition. In children with autism, the absence of deictic pointing is both a source and a symptom of their impaired communication (e.g., Baron-Cohen, 1989). Apes, nonhuman primates who do not have language, also do not point for each other (Tomasello, 2006). Gestural deictic communication is thus primary in the emergence of human communication, predating language both ontogenetically and, perhaps, phylogenetically. It would be interesting to know where pointing originates. Given that infants point socially from the beginning, its motivational background is presumably rooted in interpersonal contexts.
It is possible that pointing is based on earlier interactive routines and play formats which involve objects and in which infants actively participate with gestures such as showing and give-take exchanges. The re-interpretation of infant representational gestures questions whether infants gesture symbolically before language, and whether they produce iconic gestures creatively from individualistic object-directed actions or, instead, acquire them socially from interaction. The findings suggest that infants rarely
communicate with such gestures, in particular when compared to pointing and words. Their usage is also still context-bound and not integrated into infants’ emerging combinations of communicative utterances, quite unlike infants’ usage of pointing. There is little support for the claim that such gestures emerge as creatively produced pantomimes from individualistic object-directed action schemes. Cognitively, there is also little evidence that these gestures involve symbolic understanding, which emerges in related areas like pretense and scale-model games only around 2 years of age (DeLoache, 2004). Instead, these gestures may be reinterpreted as gestural social acts which emerge in interactive routines and game formats, mainly through observation and reproduction. They involve a bi-directionality in the sense that both infant and adult know how to react when being addressed, but infants presumably still use these gestures non-symbolically to initiate or maintain social interaction based on interactive routines. Gestural social acts build on infants’ earlier communication skills and social-cooperative motives. However, rather than being used as a symbolic vehicle to represent a referent in conventional or iconic ways, the interaction is about the gestural activity itself. Such types of non-symbolic social gesturing may lead to the accumulation and extension of the common ground necessary for symbol acquisition. The social use suggests that gestural social acts originate in infants’ motivation for social participation and interaction.

It is not entirely clear where representational gestures originate and what role they play in the emergence of language. Phylogenetically, one possibility is that after pointing and before spoken language, there was a phase of creating iconic gestures from action schemes. Ontogenetically, however, this is not the case.
Instead, infants acquire language before they creatively produce iconic gestures, and it is deictic gestures, not representational ones, that play a pivotal role in the acquisition of a first language. It would be interesting to know whether infants’ gestural social acts, like pointing, also mediate the acquisition of language or whether they have a more general social function, for example, in the emergence of conventionality and joint activities. It is an open question whether the absence of infant gestural social acts would hamper the transition from pointing to language in any specific way.

Based on the available ontogenetic evidence, this paper proposes a differentiated perspective on infant gestures before language. Infant pointing is already a complex prelinguistic form of human cooperative referential communication. Infant representational gestures are still a form of non-symbolic gestural social acts which draw on earlier interaction skills and social-cooperative motives. This perspective emphasizes the interactional basis of language acquisition and symbol formation. Neither pointing nor representational gestures seem to simply emerge from individualistic object-directed action schemes. Instead, their emergence is presumably mediated by a primary motive for social contact and interaction. We need to know more about the origins of deictic gestures and about the role of
gestural social acts in the transition to language to better understand the nature and origins of human communication.
Acknowledgements

I thank Malinda Carpenter, Marianne Gullberg, Kees de Bot, Susan Schmidt, and two anonymous reviewers for helpful comments on an earlier draft.
References

Acredolo, Linda & Susan Goodwyn (1988). Symbolic gesturing in normal infants. Child Development, 59, 450–466.
Adamson, Laura & Janet Frick (2003). The still face: A history of a shared experimental paradigm. Infancy, 4 (4), 451–473.
Baron-Cohen, Simon (1989). Perceptual role taking and protodeclarative pointing in autism. British Journal of Developmental Psychology, 7 (2), 113–127.
Bates, Elizabeth (1979). The emergence of symbols: Cognition and communication in infancy. New York: Academic Press.
Bates, Elizabeth, Luigia Camaioni, & Virginia Volterra (1975). The acquisition of performatives prior to speech. Merrill-Palmer Quarterly, 21, 205–226.
Behne, Tanya, Ulf Liszkowski, Malinda Carpenter, & Michael Tomasello (submitted). Twelve-month-old infants comprehend the communicative intent behind others’ pointing gestures.
Bowlby, John (1969). Attachment and loss. Vol. 1: Attachment. New York: Hogarth.
Boyd, Richard (2006). The puzzle of human sociality. Science, 314, 1553.
Bruner, Jerome (1983). Child’s talk. New York: Norton.
Call, Josep & Michael Tomasello (Eds.) (2007). The gestural communication of apes and monkeys. New York: LEA.
Camaioni, Luigia, Paola Perucchini, Francesca Bellagamba, & Cristina Colonnesi (2004). The role of declarative pointing in developing a theory of mind. Infancy, 5 (3), 291–308.
Capirci, Olga, Annarita Contaldo, Maria Cristina Caselli, & Virginia Volterra (2005). From action to language through gesture: A longitudinal perspective. Gesture, 5 (1/2), 155–177.
Capirci, Olga, Jana Iverson, Elena Pizzuto, & Virginia Volterra (1996). Communicative gestures during the transition to two-word speech. Journal of Child Language, 23, 645–673.
Caselli, Maria Cristina (1990). Communicative gestures and first words. In Virginia Volterra & Carol J. Erting (Eds.), From gesture to language in hearing and deaf children (pp. 56–67). Berlin: Springer.
Clark, Herbert H. (1996). Using language. Cambridge: Cambridge University Press.
Clark, Herbert H.
(2003). Pointing and placing. In Sotaro Kita (Ed.), Pointing: Where language, culture, and cognition meet (pp. 243–268). Mahwah, NJ: Lawrence Erlbaum.
Csibra, Gergely & György Gergely (2006). Social learning and social cognition: The case for pedagogy. In Yuko Munakata & Mark H. Johnson (Eds.), Processes of change in brain and cognitive development. Attention and Performance XXI (pp. 249–274). Oxford: Oxford University Press.
Dawkins, Richard & John Krebs (1978). Animal signals: Information or manipulation? In John Krebs & Nicolas Davies (Eds.), Behavioural ecology: An evolutionary approach (pp. 282–309). Oxford: Blackwell.
DeLoache, Judy (2004). Becoming symbol-minded. Trends in Cognitive Sciences, 8, 66–70.
Desrochers, Stephan, Paul Morissette, & Marcelle Ricard (1995). Two perspectives on pointing in infancy. In Chris Moore & Philip J. Dunham (Eds.), Joint attention: Its origins and role in development (pp. 85–101). Hillsdale, NJ: Lawrence Erlbaum.
Franco, Fabia & George Butterworth (1996). Pointing and social awareness: Declaring and requesting in the second year. Journal of Child Language, 23 (2), 307–336.
Goldin-Meadow, Susan (2003). The resilience of language: What gesture creation in deaf children can tell us about how all children learn language. New York: Psychology Press.
Gomez, Juan C., Encarnacion Sarria, & Javier Tamarit (1993). The comparative study of early communication and theories of mind: Ontogeny, phylogeny, and pathology. In Simon Baron-Cohen, Helen Tager-Flusberg, et al. (Eds.), Understanding other minds: Perspectives from autism (pp. 397–426). New York: Oxford University Press.
Goodwyn, Susan & Laura Acredolo (1993). Symbolic gesture versus word: Is there a modality advantage for onset of symbol use? Child Development, 64, 688–701.
Grice, Paul (1957). Meaning. The Philosophical Review, 64, 377–388.
Harris, Paul & Robert Kavanaugh (1993). Young children’s understanding of pretense. Monographs of the Society for Research in Child Development, 58 (1) [231], v–92.
Iverson, Jana, Olga Capirci, & Maria Caselli (1994). From communication to language in two modalities. Cognitive Development, 9, 23–43.
Iverson, Jana & Susan Goldin-Meadow (2005). Gesture paves the way for language development. Psychological Science, 16 (5), 367–371.
Liszkowski, Ulf, Malinda Carpenter, Anne Henning, Tricia Striano, & Michael Tomasello (2004).
Twelve-month-olds point to share attention and interest. Developmental Science, 7 (3), 297–307.
Liszkowski, Ulf, Malinda Carpenter, Tricia Striano, & Michael Tomasello (2006). Twelve- and 18-month-olds point to provide information for others. Journal of Cognition and Development, 7 (2), 173–187.
Liszkowski, Ulf, Malinda Carpenter, & Michael Tomasello (2007a). Reference and attitude in infant pointing. Journal of Child Language, 34 (1), 1–20.
Liszkowski, Ulf, Malinda Carpenter, & Michael Tomasello (2007b). Pointing out new news, old news, and absent referents at 12 months of age. Developmental Science, 10 (2), F1–F7.
Liszkowski, Ulf, Konstanze Albrecht, Malinda Carpenter, & Michael Tomasello (2008). Twelve- and 18-month-olds’ visual and auditory communication when a partner is or is not visually attending. Infant Behavior and Development, 31 (2), 157–167.
Liszkowski, Ulf, Malinda Carpenter, & Michael Tomasello (2008). Twelve-month-olds communicate helpfully, and appropriately for knowledgeable and ignorant partners. Cognition, 108 (3), 732–739.
Liszkowski, Ulf, Marie Schäfer, Malinda Carpenter, & Michael Tomasello (2009). Prelinguistic infants, but not chimpanzees, communicate about absent entities. Psychological Science, 20, 654–660.
Lock, Andrew, Andrew Young, Valerie Service, & Paul Chandler (1990). Some observations on the origins of the pointing gesture. In Virginia Volterra & Carol J. Erting (Eds.), From gesture to language in hearing and deaf children (pp. 42–55). Berlin: Springer.
Moore, Chris & Barbara D’Entremont (2001). Developmental changes in pointing as a function of attentional focus. Journal of Cognition and Development, 2, 109–129.
Namy, Laura, Aimee Campbell, & Michael Tomasello (2004). The changing role of iconicity in non-verbal symbol learning: A U-shaped trajectory in the acquisition of arbitrary gestures. Journal of Cognition and Development, 5 (1), 37–57.
Pizzuto, Elena & Micaela Capobianco (2005). The link and differences between deixis and symbols in children’s early gestural-vocal system. Gesture, 5 (1/2), 179–199.
Rakoczy, Hannes, Michael Tomasello, & Tricia Striano (2004). Young children know that trying is not pretending: A test of the “behaving-as-if” construal of children’s early concept of “pretense”. Developmental Psychology, 40 (3), 388–399.
Senghas, Ann, Sotaro Kita, & Asli Özyürek (2004). Children creating core properties of language: Evidence from an emerging sign language in Nicaragua. Science, 305 (5691), 1779–1782.
Sperber, Dan & Deirdre Wilson (1986/1995). Relevance: Communication and cognition. Oxford: Blackwell.
Tomasello, Michael (2006). Why don’t apes point? In Nick Enfield & Steve Levinson (Eds.), The roots of human sociality: Culture, cognition, and interaction. Oxford: Berg.
Tomasello, Michael, Malinda Carpenter, & Ulf Liszkowski (2007). A new look at infant pointing. Child Development, 78, 705–722.
Werner, Heinz & Bernhard Kaplan (1963). Symbol formation: An organismic-developmental approach to language and the expression of thought. New York: Wiley.
The relationship between spontaneous gesture production and spoken lexical ability in children with Down syndrome in a naming task

Silvia Stefanini (a), Martina Recchia (b, c), and Maria Cristina Caselli (b)

(a) Department of Neurosciences, University of Parma, Italy / (b) Institute of Cognitive Sciences and Technologies, National Research Council, Italy / (c) Department of Development and Socialization Processes, University of Rome “La Sapienza”, Italy
We examined the relationship between spontaneous gesture production and spoken lexical ability in children with Down syndrome (DS) in a naming task. Fifteen children with DS (3;8–8;3 years) were compared to 15 typically developing (TD) children matched for developmental age (DATD) (2;6–4;3 years of chronological age) and 15 matched for lexical ability as identified by the MacArthur-Bates Communicative Development Inventory questionnaire (LATD) (1;9–2;6 years of chronological age). Children in the DATD group gave a larger number of correct spoken answers than the other groups, while the DS and LATD groups showed similar naming accuracy. Compared to both groups of TD children, children with DS produced a higher number of unintelligible answers, indicating that their spoken language is characterized by serious phono-articulatory difficulties. Although children with DS did not differ from DATD and LATD controls in the total number of gestures, they produced a significantly higher percentage of representational gestures. Furthermore, DATD children produced more spoken answers without gestures, LATD children produced more bimodal answers, while children with DS gestured more without speech. Results suggest that representational gestures may serve to express meanings when children’s cognitive abilities outstrip their productive spoken language skills.

Keywords: children with Down syndrome, gestures, lexical abilities
Gesture and spoken language are closely linked in young typically developing (TD) children (Bates & Dick, 2002). At the end of the first year of life, the emergence
of first words is preceded and accompanied by deictic gestures, used to draw attention to objects, locations, or events (Bates, Benigni, Bretherton, Camaioni, & Volterra, 1979; Capone & McGregor, 2004; Volterra, Caselli, Capirci, & Pizzuto, 2005). These gestures are ritualized requests, showing, giving, and pointing. Their referents can only be identified in the physical context in which communication takes place (e.g., reaching for an object, opening and closing the palm, looking alternately at the adult). Many authors have argued that deictic gestures, and specifically pointing, are used not only to communicate but also to influence the mental states of others. Pointing constitutes a pathway through which communication and language develop (Goldin-Meadow, 2007; Tomasello, Carpenter, & Liszkowski, 2007).

At approximately 12 months of age, representational gestures emerge (Caselli, 1990; Goodwyn & Acredolo, 1993). These gestures (also defined as symbolic, characterizing, iconic, and referential) differ from deictic gestures in that they denote a precise referent and their basic semantic content remains relatively stable across different situations (e.g., bringing a fist to the ear for telephone). Several studies have investigated the links between early language development and specific aspects of communicative and symbolic gestures, in an attempt to confirm Piaget’s ideas about the shared sensorimotor origins of linguistic and non-linguistic symbols (Piaget, 1945; Werner & Kaplan, 1963). This research has highlighted the strong relationships between gestural communication, specific language milestones, and specific cognitive events, including the production of actions associated with specific objects and symbolic (pretend) play (Bates & Dick, 2002).
Both deictic and representational gestures originate in action: initially children produce communicative gestures by touching or manipulating objects, e.g., children show and give an object before pointing, or bring a glass to the mouth for drinking before producing an empty-handed gesture (Capirci, Contaldo, Caselli, & Volterra, 2005). Moreover, both types of gestures appear to undergo a similar process of decontextualization: initially gestures are used to identify and recognize objects and events, and they are progressively employed outside of the specific context in which they have been learned (Goodwyn & Acredolo, 1993). According to Goodwyn and Acredolo (1993), “symbolic status” may be ascribed to a gesture when it refers to multiple exemplars, when it is produced in reference to pictures in the absence of the original exemplar, and when the object itself is not involved. This evidence supports the idea that these gestures are an early form of categorizing or naming and that gestures and spoken words tap the same cognitive and linguistic skills (Bates & Dick, 2002).

Gestures not only precede and accompany early language development, but also predict progress in verbal language abilities (Capirci, Iverson, Pizzuto, & Volterra, 1996; Iverson & Goldin-Meadow, 2005). For instance, the onset of pointing is
a reliable predictor of the appearance of first words (Bates et al., 1979). Later, the production of gesture-word combinations that convey two distinct pieces of information predicts the emergence of two-word speech (Butcher & Goldin-Meadow, 2000; Camaioni, Caselli, Longobardi, & Volterra, 1991; Pizzuto & Capobianco, 2005). Moreover, some authors have reported that the semantic relations conveyed in gesture-speech combinations are among the first observed in two-word combinations (Capirci et al., 1996; Özcaliskan & Goldin-Meadow, 2005).

As spoken language ability increases, gestures continue to be produced and become integrated with speech (Jancovic, Devoe, & Wiener, 1975; Nicoladis, Mayberry, & Genesee, 1999). Studies aimed at exploring the development of the gesture-speech system beyond toddlerhood have demonstrated that preschool and school-age children produce gestures in combination with speech across contexts and tasks, i.e., in conversation and narratives and during explanations of concepts or problem solutions (Alibali, Kita, & Young, 2000; Colletta, 2004; Guidetti, 2002; Pine, Lufkin, Kirk, & Messer, 2007). In addition, Capone (2007) showed that toddlers produce more gestures in isolation when adult input is gesture-rich and/or when the task is conducive to gesturing. These gestures may help children to display ideas that they cannot express in the spoken modality, conveying a substantial proportion of their knowledge.

Although many authors have described the development of the gesture-language system in TD children, relatively little is known about children with developmental disorders involving impaired linguistic abilities. These studies highlight that when children are limited in cognitive, linguistic, metalinguistic, and articulatory skills, they may use representational gestures more frequently to express meanings (Bello, Capirci, & Volterra, 2004; Capone & McGregor, 2004; Thal & Tobias, 1992).
Particularly interesting results may come from research on children with Down syndrome (DS), given the lack of developmental homogeneity between their cognitive and linguistic abilities, the latter being more impaired (Chapman & Hesketh, 2000; Vicari, Albertini, & Caltagirone, 1992). Furthermore, several studies have found a specific asynchrony between language domains, i.e., verbal comprehension is better preserved than spoken production and the lexicon is less impaired than grammar, although the two domains are not dissociated (Vicari, Caselli, & Tonucci, 2000). Spontaneous speech is often less intelligible compared to controls (Abbeduto & Murphy, 2004).

Very few studies have investigated the use of gestures in children with DS. Caselli and colleagues (Caselli, Vicari, Longobardi, Lami, Pizzoli, & Stella, 1998) administered the Italian version of the MacArthur-Bates Communicative Development Inventory (MBCDI) (Caselli & Casadio, 1995) to parents of 40 Italian children with DS having a mean age of 28 months. Comparing children’s scores on the Actions and Gestures section of the questionnaire to those of a group of TD children
from the normative sample matched on the basis of comprehension vocabulary size, the authors reported that children with DS had significantly larger action/gesture repertoires. In particular, they produced a greater percentage of representational gestures such as gone or good.

In a second study, Iverson, Longobardi, and Caselli (2003) analyzed the frequency with which children produced gestures in spontaneous mother-child interactions. Five children with DS (mental age of around 22 months and language age of around 18 months) were matched to five TD children for sex, language age, and observed expressive vocabulary size. Relative to the matched TD group, children with DS displayed a significantly smaller repertoire of representational gestures, but produced them with similar frequency; they also exhibited fewer gesture and word combinations than TD children and did not produce any two-word speech combinations. The authors concluded that the relationship between gesture and language in children with DS is very similar to what has been observed in TD children with comparable language production abilities, but children with DS may have a specific delay in making the transition from one- to two-word speech.

Stefanini, Caselli, and Volterra (2007) recently investigated the relationship between gestures and words in a more structured task. A picture-naming task was administered to a sample of children with DS (mean chronological age 6 years; mean mental age 4 years) and to two groups of TD children, one matched for developmental age and one for chronological age. The main finding was that children with DS gave fewer correct spoken answers compared to the TD groups. They gestured and produced bimodal and unimodal gestural responses more often than the chronological age-matched controls, who produced significantly more unimodal spoken responses. Further analysis showed that children with DS produced more representational gestures than both TD groups.
These gestures were semantically related to the meaning represented in the pictures; thus children with DS could convey the correct information in their gestures even if they could not do so in speech.

The results of Stefanini et al. (2007) suggest that cognition and spoken language in children with DS may be partially dissociated. When asked to name pictures, their spoken lexical competence appeared to be more impaired than would be expected on the basis of their cognitive development, but they used more representational gestures than their developmental age controls. The poor lexical repertoire of children with DS may be due to difficulties in phonological processes: many studies have shown that vocabulary growth in TD children depends on phono-articulatory skills (Gathercole, Willis, Emslie, & Baddeley, 1992). Children with DS are particularly interesting because of their specific difficulties with phonological processes, which allow us to study the relationship
between speech and the mental representations that might become visible through representational gestures. What is still unclear is whether this wide use of representational gestures characterizes an early stage of lexical acquisition in both DS and TD children.

The current study is a follow-up to Stefanini et al. (2007). We compare a sample of children with DS with two groups of TD children: one matched for developmental age and one matched for lexical ability (measured by vocabulary size). If children with DS and their lexical age-matched controls show a similar use of representational gestures, we can hypothesize a close link between representational gestures and the spoken lexicon. By contrast, if children with DS use more representational gestures than their lexical age-matched controls, we can conclude that the use of this type of gesture is more related to non-verbal cognition, i.e., that children with DS exploit representational gestures in order to express semantic knowledge and compensate for their limited speech abilities. Our prediction is that this second hypothesis will be confirmed by the behavior of children with DS compared to TD children matched for lexical age.
Method

Participants

Fifteen children with DS (7 females; 8 males) and thirty typically developing (TD) children (14 females; 16 males) participated in this study. The age range of participants with DS was 3;8 to 8;3 (M 6;1, SD 1;3) and their mental age range was 2;6 to 4;3 (M 3;10, SD 0;7). Clinical psychologists assessed the mental age of children with DS with the Leiter International Performance Scale (LIPS; Leiter, 1979) or the Italian version of the L-M form of the Stanford–Binet Intelligence Scale (Bozzo & Mansueto Zecca, 1993). Children exposed to other languages, children with recurrent serious auditory impairment, and children with epilepsy or psychopathological disorders were excluded from this study. The thirty TD children were individually matched to the children with DS, resulting in two different control groups. The first group included 15 children between the ages of 2;6 and 4;4 (M 3;7, SD 0;7); each child in this group was individually matched to a child of the same sex in the DS group whose mental age corresponded to the TD child's chronological age. This group represented the "Developmental Age" control group (DATD). Preliminary analyses confirmed that the chronological age of this first group did not differ from the mental age of the DS group, t(28) = .12, p = .9. The second group, including 15 children between the ages of 1;9
Silvia Stefanini, Martina Recchia, and Maria Cristina Caselli
and 2;6 (M 2;2, SD 0;2), represented the "Lexical Ability" control group (LATD): each child in this group was individually matched to a child of the DS group for sex and vocabulary size (number of words produced), calculated with the parent questionnaire "Il Primo Vocabolario del Bambino" (PVB; Caselli & Casadio, 1995), the Italian version of the MB-CDI (Fenson et al., 1994). We used the "Words and Sentences" Short Form, which allows collection of data on lexical production and early grammatical abilities (Caselli, Pasqualetti, & Stefanini, 2007). The user's manual indicates that it is possible to administer the questionnaire to parents until the child reaches the score corresponding to the 50th percentile of TD 3-year-old children. Vocabulary size was calculated over the 100 words included in the questionnaire, and the level of lexical development was established on the basis of this measure. Preliminary analyses showed that the two groups did not differ in vocabulary size, t(28) = 1.06, p = .3 (raw scores on the questionnaire: DS group: M 74.4, SD 17.9; LATD group: M 66.8, SD 21.1), confirming that the matching was appropriate.
Materials and procedure: Picture Naming Task (PNT)

The Picture Naming Task (PNT) was designed for very young children between the ages of 2 and 3 years. Lexical items were selected from the normative data of the PVB questionnaire on the basis of item frequency. The standardization of the PNT with an Italian population was recently completed (Bello, Caselli, Pettenati, & Stefanini, 2010). The version of the task employed consists of 77 colored pictures divided into two sets: a set of 44 pictures representing objects/tools (e.g., a comb) and a set of 33 pictures representing actions (e.g., eating) and characteristics (e.g., small). Children were assessed in a familiar setting (rehabilitation center, home, or school). The two sets of pictures were presented separately and in random order, but the order of picture presentation within each set was fixed. After a brief period of familiarization, the experimenter placed the pictures in front of the child one at a time. For pictures of body parts, animals, objects/tools, food, and clothing, the child was asked: "Che cos'è questo?" (What is this?). For pictures of actions, children were asked "Cosa sta facendo il bambino?" (What is the child doing?); and for pictures of characteristics, "Com'è/dov'è questo?" (How/where is this?). Two practice trials were given for each subtest. For the elicitation of characteristics, two pictures were put in front of the child: one representing the expected characteristic (e.g., a small ball) and another representing the opposite characteristic (e.g., a big ball). If the child did not provide the expected label as a first answer, the experimenter offered help by saying: "This one is big (pointing to the picture of the big ball), and what is this one like?" (pointing to the picture of
the small ball). Occasionally the experimenter also pointed to the picture in order to help the child maintain focus, but otherwise avoided producing any other kind of gesture. The test was administered in two sessions, one dedicated to objects/tools, the other to actions/characteristics. The mean duration of the task was 25 minutes for the DS and LATD groups and 16 minutes for the DATD group. All sessions were videotaped for later transcription.
Coding

We transcribed the communicative exchanges between child and experimenter from the time a picture was placed in front of the child to when the picture was removed. During these exchanges, children could, in principle, produce multiple spoken utterances and multiple gestures. We examined children's responses in terms of modality of expression, accuracy of the spoken answer, and types of gestures produced.
Modality of expression

All children's responses produced during communicative exchanges were tallied and classified into one of three categories on the basis of modality: (a) unimodal spoken productions included responses produced only in the spoken modality; (b) bimodal productions included all responses in which the child used both the spoken and gestural modalities; (c) unimodal gestural productions included responses produced only in the gestural modality.

Spoken responses

Answers in the naming task were classified as correct, incorrect, or no-response. An answer was coded as correct when the child provided the expected label for the picture. For some pictures, more than one answer was accepted as correct (e.g., "bag" may be called "sacchetto" or "busta" in Italian). Incorrect answers included words different from the target items the pictures were meant to elicit. We classified as incorrect: semantic errors (such as circumlocutions, use of general terms, and semantic replacements), visual errors, off-target responses, and unintelligible answers; for more details see Stefanini et al. (2007). This category also included unintelligible productions (e.g., "enno" instead of "telefono" for the picture of a telephone). Many phonologically-altered productions were found, especially in the DS and LATD groups. These were classified as correct answers (e.g., "lelefono" for "telefono" for the picture of a telephone) or incorrect answers (e.g., "olologio" for "orologio" (clock) given for the picture of a telephone, intended to elicit the Italian word "telefono").
No-responses were coded when children either stated that they did not know the word corresponding to a picture or did not provide a spoken answer. When children gave an incorrect answer or a no-response on their first attempt, they were given a second chance to provide the correct answer, adopting a “best answer” criterion. If neither answer was correct, the first answer was tallied.
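The three-way modality classification described above amounts to a simple decision rule over whether a response contained speech, a gesture, or both. A minimal sketch in Python; the function name and boolean inputs are our own illustration, not part of the study's coding manual (only the category labels come from the text):

```python
# Hypothetical sketch of the modality classification described above.
# The function and its inputs are illustrative; the labels are from the text.

def code_response(has_speech: bool, has_gesture: bool) -> str:
    """Classify one child response by modality of expression."""
    if has_speech and has_gesture:
        return "bimodal"            # speech + gesture
    if has_speech:
        return "unimodal spoken"    # speech only
    if has_gesture:
        return "unimodal gestural"  # gesture only
    return "no-response"            # neither modality produced
```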
Gestural production

All gestures produced by the children as they interacted with the experimenter were transcribed. These included gestures produced with and without speech, and gestures occurring both before and after the accepted spoken answer. This study was primarily limited to manual gestures and movements of the head, although occasional reference will be made to other kinds of non-manual gestures (e.g., posture, body movements, facial expressions). For the isolation and classification of gestures we partially followed recent work conducted with young preschool children (Bello et al., 2004; Butcher & Goldin-Meadow, 2000; Stefanini et al., 2007). Our work differs from that of Butcher and Goldin-Meadow (2000) in that we did not require eye contact between the child and the observer. Given the specific nature of the task (asking children to name pictures), all of the children's productions were considered to be communicative. Each gesture was classified into one of the following three categories.

Deictic gestures included pointing, showing, and giving. Most of the deictic gestures produced were pointing gestures either directed at, touching, or patting the target picture. Instances of pointing with fingers other than the index, or with the palm extended, were included in this category. For the analyses we only took spontaneous pointing gestures into account, excluding cases in which the children's pointing gestures could have been elicited by the adult, i.e., when children produced a pointing gesture immediately following the same gesture performed by the experimenter. Showing was defined as an arm extension while holding an object (often the picture) in the hand. In the case of giving, the object (i.e., the picture) was transferred to another person.

Representational gestures are pictographic representations of the target picture's meaning (or of meanings associated with the object or event represented in the picture).
This category includes action gestures and size-shape gestures. Action gestures depict the action typically performed with the object, by an object, or by a character (e.g., picture of a comb: the child moves his fingers near his head as if combing his hair). Size-shape gestures depict the size, shape or other perceptual characteristics of an object or an event (e.g., picture of a table: the child opens his arms with the palms up, saying “big”).
The category of other gestures included gestures that could not be classified as deictic or representational. This category included conventional interactive gestures (e.g., shaking the head for "no"); beat gestures (e.g., the hand moving in time with the rhythmic pulsation of speech, or in the air while pronouncing a particular word); and Butterworths (e.g., supporting the head with the hand in the act of thinking).
Reliability

Reliability between two independent coders (the first and second authors) was assessed for modality of expression as well as for all spoken and gestural productions. Agreement between coders was 95.3% for response type (unimodal spoken, bimodal, unimodal gestural), 95.3% for accuracy of spoken answers (correct, incorrect, no-response), and 90.5% for gesture type (deictic, representational, other). Instances of disagreement were identified and a third coder was asked to resolve them, choosing one of the two classifications proposed by the first two coders.
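Percentage agreement of the kind reported above is simply the share of items that received the same code from both coders. A small hypothetical sketch (the gesture labels below are invented examples, not the study's data):

```python
# Hypothetical sketch of inter-coder percentage agreement.
# The code sequences below are invented for illustration.

def percent_agreement(coder_a, coder_b):
    """Return the percentage of items given the same code by both coders."""
    if len(coder_a) != len(coder_b):
        raise ValueError("both coders must rate the same items")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

codes_a = ["deictic", "representational", "deictic", "other", "deictic"]
codes_b = ["deictic", "representational", "other", "other", "deictic"]
print(percent_agreement(codes_a, codes_b))  # 4 of 5 items agree -> 80.0
```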
Statistical Analysis

Differences between the DS, DATD, and LATD groups (between-subjects factor) in the task were explored with respect to the following within-subjects factors: modality of expression (spoken, gestural, or bimodal), naming accuracy (number of correct answers), phonological accuracy (proportion of intelligible phonologically-altered answers and proportion of unintelligible answers), and number and type of gestures (proportion of deictic and representational) produced. The program used for statistical analysis was STATISTICA 6.1, and an alpha level of 0.05 was used as the criterion for rejecting the null hypothesis. The primary statistical analyses were based on ANOVA models, and the Duncan test was used for post-hoc analysis.
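The between-group comparisons rest on the standard one-way ANOVA decomposition of variance. The sketch below (pure Python, made-up scores) shows the generic F-ratio computation and why three groups of 15 children yield the degrees of freedom (2, 42) reported throughout the Results; it is an illustration of the textbook formula, not the STATISTICA routines or repeated-measures models actually used:

```python
# Generic one-way ANOVA F ratio; illustrative only (the study used
# STATISTICA 6.1, including repeated-measures models for mixed designs).

def one_way_anova(groups):
    """Return (F, df_between, df_within) for a list of score lists."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    df_between, df_within = k - 1, n - k
    F = (ss_between / df_between) / (ss_within / df_within)
    return F, df_between, df_within

# Three groups of 15 scores each give df_between = 2 and df_within = 42,
# matching the F(2,42) statistics reported in the Results.
```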
Results

Modality of expression

Total responses. Many children produced multiple spoken utterances and multiple gestures for each item during the communicative exchange. Thus, the total number of responses produced (correct and incorrect) was calculated (DS: M 94.7, SD 12.9; LATD: M 92.9, SD 10.4; DATD: M 91.9, SD 8.2). An analysis of variance (ANOVA) with Group as a between-subjects factor (DS, LATD, DATD) and Total Responses as the dependent variable showed that this difference was not statistically significant, F(2,42) = .25, p > .05.

Figure 1. Mean numbers and standard deviations of modality of expression (Unimodal Spoken, Bimodal speech+gesture, Unimodal Gestural) exhibited by the three groups of children (DS: children with Down syndrome; LATD: typically developing children matched for lexical ability; DATD: typically developing children matched for developmental age).

Types of responses. We classified all of the children's productions according to modality of expression: unimodal spoken, bimodal (i.e., speech + gesture), or unimodal gestural. The mean numbers of each modality for the three groups are presented in Figure 1. A repeated measures ANOVA with Group (DS, LATD, DATD) as the between-subjects factor and Modality of Expression Type (unimodal spoken; bimodal; unimodal gestural) as the within-subjects factor was conducted. The difference between Groups was not significant (F(2,42) = .52, p > .05), but a main effect of Modality of Expression Type (F(2,84) = 71.28, p < .001) and a significant interaction between the two variables (F(4,84) = 4.24, p < .05) were found. A follow-up Duncan test indicated that the number of unimodal spoken productions was significantly greater than the number of bimodal productions in the DS and DATD groups (p's < .01) but not in the LATD group (p > .05). Unimodal gestural productions were the least frequent in all groups. Children of the DATD group showed more unimodal spoken productions than children with DS (p < .001), who in turn exhibited more of this production type than LATD children (p < .001). Bimodal productions were performed more frequently by LATD children than by children with DS (p < .001), who used more bimodal productions than DATD
children (p < .01). By contrast, children with DS performed more unimodal gestural productions than the LATD and DATD groups (p's < .001), but no significant difference was found between the two groups of TD children (p > .05). To summarize, LATD children produced a relatively large number of bimodal utterances, even in the context of a classic naming task, where the expected response was a spoken label. In addition, the responses of children with DS were characterized by a significantly greater number of unimodal gestural productions than those of TD children.
Spoken responses

We analyzed the spoken responses provided by all the children within unimodal spoken or bimodal answers to determine whether or not they corresponded to the expected word. Figure 2 displays the mean numbers and standard deviations of correct spoken answers, incorrect spoken answers, and no-responses produced by each group of children. Children of the DATD group gave a greater number of correct spoken answers than children with DS and children in the LATD group. Moreover, the task was clearly difficult for children in these two groups, who produced only around 50% of the expected words, whereas children in the DATD group supplied around 80% correct answers. The percentage of no-responses was quite high in the DS (10%) and LATD (8%) groups, but very limited in the DATD group (1.4%).

Figure 2. Mean numbers and standard deviations of spoken answers produced by children in the three groups (DS: children with Down syndrome; LATD: typically developing children matched for lexical ability; DATD: typically developing children matched for developmental age) on the PNT. The maximum possible score on the task was 77.
We conducted a repeated measures ANOVA with Group as a between-subjects factor (DS, LATD, DATD) and Type of Spoken Answer as a within-subjects factor (two levels: correct and incorrect spoken answers). A Group main effect (F(2,42) = 6.32, p < .05), a Spoken Answer main effect (F(1,42) = 47.03, p < .001), and a significant interaction between the two variables (F(2,42) = 25.62, p < .001) were found. Post-hoc comparisons showed that only the DATD group produced more correct than incorrect spoken answers (p < .001). No significant difference was found between correct and incorrect spoken answers in the other groups (for both DS and LATD, p > .05). In addition, the DATD group gave more correct spoken answers and fewer incorrect spoken answers than the DS group and the LATD group (consistently, p < .001). No significant difference emerged between the DS and LATD groups in either correct or incorrect spoken answers (in both cases, p > .05). Additional qualitative analyses focused on phono-articulatory accuracy in speech production. In the first analysis we took into account the phonologically-altered forms within correct and incorrect answers, excluding the unintelligible productions, which were considered in the second analysis. These analyses showed high and similar percentages of phonologically-altered forms in children with DS (48%) and the LATD group (40%), demonstrating that this phonological pattern characterizes the first stages of lexical development in typically developing children as well as in children with DS. Phono-articulatory accuracy in speech production tends to increase with age in TD children: the percentage of phonologically-altered forms was only 17% in DATD children. We conducted a one-way analysis of variance (ANOVA) with Group as a between-subjects factor (DS, LATD, DATD) and the Proportion of phonologically-altered forms, calculated on the basis of the total number of intelligible spoken answers, as the dependent variable.
The difference between groups was significant: F(2,42) = 12.66, p < .001. Post-hoc analyses revealed that the DS and LATD groups showed a similar proportion of phonologically-altered forms (p > .05) and that both groups produced more of these altered spoken answers than the DATD group (p’s < .001). Within the incorrect spoken answers, the percentage of unintelligible productions was higher in children with DS (12%) than in the LATD group (6%), while only one child gave one unintelligible production in the DATD group. A one-way analysis of variance (ANOVA), with Group as a between-subjects factor (DS, LATD, DATD) and the Proportion of unintelligible productions calculated on the basis of the total number of spoken answers as the dependent variable, showed a main effect of Group (F(2,42) = 11.07, p < .001). The Duncan test showed
that children with DS had a higher proportion of unintelligible productions than the LATD and DATD groups (p < .05 and p < .001, respectively), and LATD children in turn had more unintelligible productions than DATD children (p < .05). In sum, children with DS and LATD children showed similar patterns of correct and incorrect spoken answers, and both groups produced a similar number of phonologically-altered forms, performing worse in these respects than the DATD group. Nevertheless, the spoken answers of children with DS were more often unintelligible to the interlocutor than those of the two groups of TD children.
Gestural production

Total number of gestures. Spontaneous gestures were produced during the naming task by all children, though with large variability (range 3–108). Figure 3 shows the mean numbers and standard deviations of the total number of gestures produced by the three groups of children. A one-way analysis of variance (ANOVA), with Group as a between-subjects factor and Number of Gestures produced during the task as the dependent variable, revealed a significant difference between the groups (F(2,42) = 3.69, p < .05). Post-hoc comparisons indicated that this effect was due only to a difference between the LATD and DATD groups, with LATD children exhibiting more gestures than DATD children (p < .05). We also analyzed the types of gestures produced. Table 1 shows the mean numbers and standard deviations of the different gesture types produced by the three groups of children during the task.

Figure 3. Mean numbers and standard deviations of total gestures produced by children in the three groups (DS: children with Down syndrome; LATD: typically developing children matched for lexical ability; DATD: typically developing children matched for developmental age) during the PNT.
Table 1. Types of gestures

        Deictic gestures    Representational gestures    Other gestures
        Mean (SD)           Mean (SD)                    Mean (SD)
DS      23.9 (19.8)         11.7 (8.4)                   5.7 (3.7)
LATD    34.9 (21.0)         8.7 (11.7)                   3.5 (3.5)
DATD    17.7 (11.4)         5.3 (5.1)                    3.1 (3.6)

Mean numbers and standard deviations of different types of gestures produced by the three groups of children (DS: children with Down syndrome; LATD: typically developing children matched for lexical ability; DATD: typically developing children matched for developmental age) during the PNT.
Gesture types. Figure 4 shows the mean percentages of the different gesture types produced during the task by the three groups of children. We conducted a repeated measures analysis of variance (ANOVA) with Group as a between-subjects factor and Proportion of Gesture Type (deictic and representational), calculated on the basis of the total number of gestures produced, as a within-subjects factor. Group and Gesture Type main effects (F(2,42) = 5.72, p < .05 and F(1,42) = 57.48, p < .001, respectively) were found, as well as a significant interaction between the two variables (F(2,42) = 7.85, p < .01). Post-hoc comparisons indicated that while deictic gestures were more numerous than representational gestures in the TD groups (p's < .001), children with DS produced comparable proportions of both gesture types (p > .05). Pointing was
Figure 4. Mean percentages of the different gesture types (deictic, iconic, or other) produced by the three groups of children (DS: children with Down syndrome; LATD: typically developing children matched for lexical ability; DATD: typically developing children matched for developmental age) during the PNT.
the most frequent deictic gesture used in all groups: 92.7% for children with DS, 95.8% for LATD children, and 99.5% for DATD children. Notable differences in deictic gestures among the groups were also found: their proportion was higher in the LATD group than in the DATD group (p < .05). Children with DS produced a lower proportion of deictic gestures than the two TD groups (p's < .001) and exhibited a higher proportion of representational gestures than both LATD children (p < .01) and DATD children (p < .001). No significant difference in the proportion of representational gestures was found between the two TD groups. In the DS and DATD groups most correct answers were not accompanied by gestures (DS: M 82%; DATD: M 72%). By contrast, in the LATD group correct answers were often accompanied by a gesture, especially pointing (gesture M 53%; pointing M 40%). In the DS group gestures mostly accompanied incorrect, unintelligible, and non-spoken answers (M 48%), while in the LATD group gestures were present with both correct answers and incorrect, unintelligible, and non-spoken answers considered as a whole (correct M 50%; incorrect, unintelligible, non-spoken answers M 50%). Finally, in the DATD group the mean proportion of representational gestures was higher with correct answers (M 65%). In summary, our data thus far indicate interesting differences in the use of gestures between TD children and children with DS. While most gestures produced by TD children were deictic, the production of representational gestures was significantly higher in children with DS and mostly accompanied incorrect, unintelligible, and non-spoken answers.
Discussion and conclusions

This research was designed to examine the relationship between spontaneous gesture production and spoken lexical ability in a naming task in children with Down syndrome (DS) and two groups of typically developing children, one matched for developmental age (DATD) and one matched for lexical ability (LATD). In particular, we analyzed the differences between groups in terms of naming and phonological accuracy (number of correct answers and intelligibility of verbal production), number and type of gestures produced, and modality used in replying (spoken, gestural, or bimodal). Our purpose was to clarify whether gesture is more closely related to cognitive or to spoken linguistic abilities. Our data showed that children with DS and LATD children provided significantly more incorrect answers or no spoken answers than DATD children. In addition, both children with DS and LATD children produced more phonologically-altered answers, but children with DS had more unintelligible spoken productions.
The total number of gestures produced by children with DS and the two groups of TD children did not differ, but LATD children produced more bimodal answers, while children with DS produced more gestures without speech. DATD children produced more unimodal spoken answers than the other two groups of children. Considering that bimodal answers were produced more frequently by the LATD children than by the DATD children, we can argue that gesture plays a crucial role in the transition to spoken language, confirming similar findings reported by previous studies on this topic (Capone & McGregor, 2004; Pizzuto & Capobianco, 2005). Moreover, Stefanini et al. (2007) found interesting differences in the proportion of gesture types produced: both groups of TD children (matched for chronological and developmental age) produced a higher proportion of deictic than representational gestures. By contrast, children with DS produced a similar proportion of deictic and representational gestures. Finally, the proportion of representational gestures was higher in children with DS than in both groups of TD children, who did not differ. These data are in line with our prediction and can be interpreted as indicating that representational gestures are used by children with DS to express meanings that cannot be conveyed in speech. In other words, a stronger link emerges between gesture production and non-verbal cognition in children with DS, whose spoken abilities are limited. Further support for this hypothesis can be found in the differences in accuracy of spoken answers between the TD groups and the DS group (i.e., children with DS produced more unintelligible utterances), as well as in the use of unimodal gestural answers (more frequent in children with DS than in both groups of TD children).
In sum, interesting similarities and differences emerge, in both spoken accuracy and gestural production, when comparing children with DS and TD children with similar lexical production abilities. When the spoken modality develops asynchronously with respect to cognitive abilities (as in children with DS), gestures may serve as a compensatory mechanism. In order to explain these different patterns and to better understand the link with general cognition and/or linguistic competence, it is useful to consider the use of deictic versus representational gestures separately.
Why did children use pointing in the naming task?

Deictic gestures were the most numerous type of gesture produced, and pointing was by far the most frequent deictic gesture in all groups during the picture naming task. This behavior may have several explanations, linked to closely connected communicative and linguistic factors. Considering
its communicative nature, the use of pointing by the children in the present study may be understood as an attempt to participate in a communicative interaction based on a joint attention scheme. Within this frame of reference the child points to the picture in order to catch and hold the adult's attention as well as to show active participation in a well-known situational format. These formats are built during the first phases of development in mother–infant interaction (Bruner, 1975; Butterworth, 2003) and are strictly related to language acquisition and use during infancy. Recently, Tomasello and colleagues reconsidered the 'social' character of pointing, stating that children use pointing to draw attention to objects or events they find interesting enough to communicate about (Tomasello, Carpenter, & Liszkowski, 2007). Given the young age of the participants in our study, the observed number of deictic gestures may be related to an observational setting which preserved the ecology of these situational formats and facilitated an effective joint attention interaction. Pointing may also be considered a support to spoken language. At a stage when vocabulary is still limited and the phono-articulatory system is still developing, the use of pointing may enable the child to clarify the meaning which s/he wishes to convey by allowing the adult to better identify the referent or understand vocal productions. Finally, pointing may help a child focus his or her own attention on the object that must be named and create a link between the perceived object or action and its spoken label. On this issue Butterworth states that "[…] pointing serves not only to individuate the object, but also to authorize the link between the object and speech. […] Pointing allows visual objects to take on auditory qualities […]" (Butterworth, 2003, p. 29).
In our sample, the more frequent use of deictic gestures by LATD children may indicate that pointing triggers semantic knowledge and facilitates the link with naming (Capone, 2007). Further support for this hypothesis comes from the higher number of bimodal productions performed by children with a limited spoken vocabulary, i.e., LATD children and children with DS. The developmental trend emerging from the present study suggests that the use of a bimodal communicative system is frequent not only when spoken words first emerge, but also as the lexical repertoire increases. At this phase of language and cognitive development, children still need to refer to the physical context, performing a gesture when the task requires cognitive effort (Abrahamsen, 2000). We will return to this point in the next section.
Why did children use representational gestures in the naming task?

As reported above, children with DS exhibited similar proportions of deictic and representational gestures, and the proportion of representational gestures was significantly higher in this group than in both groups of TD children. Our study also uncovered difficulties in spoken lexical production in children with DS: this group showed a more limited vocabulary repertoire than DATD children and, within incorrect answers, a higher number of unintelligible answers than both groups of TD children. A positive relationship has been found between semantic representation and successful word retrieval for picture naming in both typical (McGregor, Friedman, Reilly, & Newman, 2002) and atypical populations (McGregor, Newman, Reilly, & Capone, 2002). According to Capone (2007), if a child's meaning representation is intact but poorly linked to the phonological representation, that representation may be expressed more readily in gestures. In fact, while phonological representations are derived mostly from acoustic input, semantic knowledge of objects and actions results from an integration of multi-modal abilities including lexical labeling, perceiving features (shape, function), and proprioceptive information extracted from direct experience. In our study, the more extensive use of representational gestures and the higher number of unintelligible spoken productions (due to greater phono-articulatory difficulties) by children with DS may indicate that the deficit in this population lies in linking meaning with speech rather than in semantic knowledge of the target that needs to be labeled. Representational gestures are a more imagistic form of labeling, which makes visible the child's knowledge of the target to be named. As reported by some authors, representational gestures may exploit different cognitive resources than speech; e.g., meanings that lend themselves to visual representation may be easier to express in gesture than in speech (Iverson & Goldin-Meadow, 2005).
In sum, our findings are in agreement with previous studies, which have reported that spoken linguistic abilities are more impaired than non-verbal cognition in children with DS. Greater use of representational gestures seems to result from a "speech disadvantage" characteristic of children with DS in comparison to TD children. Gestures "interact to co-determine meaning" for both producer and listener (Kelly, 2001, italics in the original) and reinforce the salient semantic content of the spoken words. This result suggests that, at the stage of cognitive and linguistic development explored here, some semantic features of words are still encoded in sensorimotor form (Bates & Dick, 2002). The tight relationship between gesture and word may, indeed, be related to action because of the representational property of the motor system (Rizzolatti & Luppino, 2001). To conclude, different patterns of gesture usage appear to be consistent with both general cognitive level and phono-articulatory abilities. Specifically, when deictic gestures are produced, they may serve to establish and maintain attention for both producer and listener, and to clarify an unclear spoken production. This behavior is more frequent when cognitive and/or language ability is more limited. Further research is needed in order to better understand the role of each of these abilities. Representational gestures appear to be more clearly linked to "lexical competence" broadly defined (i.e., not limited to meanings expressed in the spoken modality) and may serve to bring forth semantic knowledge in a visible representation (Kendon, 2004; McNeill, 2000). Naming objects or events by gesture is a strategy commonly used in certain situations, but it may become particularly efficient when children's cognitive abilities outstrip their productive language skills. The study of gestures may therefore allow us to obtain a better view of semantic knowledge, particularly in children with cognitive and/or language impairment.
Acknowledgments

This work was supported by funds from Fondazione Monte Parma and from the Fondation Lejeune. We are grateful to Virginia Volterra for discussion of the data and comments on an earlier version of the paper. We thank Aaron Shield and Laura Sparaci for their comments on the manuscript. We also want to thank the children who participated in the study and their parents.
Notes

1. Butterworth gestures, named after the British psycholinguist Brian Butterworth, "are made when a speaker is trying to recall a word or another verbal expression" (Kendon, 2004, p. 101).

2. In order to improve variance homogeneity, we excluded the category of no spoken answers from this analysis.

3. We excluded the category of Other gesture from this analysis to improve variance homogeneity.
References

Abbeduto, Leonard & Melissa M. Murphy (2004). Language, social cognition, maladaptive behavior, and communication in Down syndrome and fragile X syndrome. In Mabel L. Rice & Steven F. Warren (Eds.), Developmental language disorders: From phenotypes to etiologies (pp. 77–99). Mahwah, NJ: Lawrence Erlbaum. Abrahamsen, Adele (2000). Explorations of enhanced gestural input to children in the bimodal period. In Karen Emmorey & Harlan Lane (Eds.), The signs of language revisited: An anthology to honor Ursula Bellugi and Edward Klima (pp. 357–399). Mahwah, NJ: Lawrence Erlbaum.
Alibali, Martha W., Sotaro Kita, & Amanda J. Young (2000). Gesture and the process of speech production: We think, therefore we gesture. Language and Cognitive Processes, 15, 593–613. Bates, Elizabeth, Laura Benigni, Inge Bretherton, Luigia Camaioni, & Virginia Volterra (1979). The emergence of symbols: Cognition and communication in infancy. New York: Academic Press. Bates, Elizabeth & Frederic Dick (2002). Language, gesture, and the developing brain. Developmental Psychobiology, 40, 293–310. Bello, Arianna, Olga Capirci, & Virginia Volterra (2004). Lexical production in children with Williams syndrome: Spontaneous use of gesture in a naming task. Neuropsychologia, 42, 201–213. Bello, Arianna, Maria Cristina Caselli, Paola Pettenati, & Silvia Stefanini (2010). PinG. Parole in Gioco. Una prova di comprensione e produzione lessicale per la prima infanzia. Firenze: Giunti, Organizzazioni Speciali. Bozzo, Maria T. & Graziella Mansueto Zecca (1993). Adattamento italiano della scala d'intelligenza Stanford-Binet, forma L-M nella revisione Terman-Merrill. Firenze: Giunti, Organizzazioni Speciali. Bruner, Jerome (1975). The ontogenesis of speech acts. Journal of Child Language, 2, 1–19. Butcher, Cynthia & Susan Goldin-Meadow (2000). Gesture and the transition from one- to two-word speech: When hand and mouth come together. In David McNeill (Ed.), Language and gesture (pp. 235–258). Cambridge: Cambridge University Press. Butterworth, George (2003). Pointing is the royal road to language for babies. In Sotaro Kita (Ed.), Pointing: Where language, culture, and cognition meet (pp. 9–33). Hillsdale, NJ: Lawrence Erlbaum. Camaioni, Luigia, Maria Cristina Caselli, Emiddia Longobardi, & Virginia Volterra (1991). A parent report instrument for early language assessment. First Language, 11, 345–359. Capirci, Olga, Annarita Contaldo, Maria Cristina Caselli, & Virginia Volterra (2005). From action to language through gesture. Gesture, 5 (1/2), 155–177. Capirci, Olga, Jana M.
Iverson, Elena Pizzuto, & Virginia Volterra (1996). Communicative gestures and words during the transition to two-word speech. Journal of Child Language, 23, 645–673. Capone, Nina C. (2007). Tapping toddlers' evolving semantic representation via gesture. Journal of Speech, Language, and Hearing Research, 50, 732–745. Capone, Nina C. & Karla K. McGregor (2004). Gesture development: A review for clinical and research practices. Journal of Speech, Language, and Hearing Research, 47, 173–186. Caselli, Maria Cristina (1990). Communicative gestures and first words. In Virginia Volterra & Carol J. Erting (Eds.), From gesture to language in hearing and deaf children (pp. 56–67). Berlin & New York: Springer Verlag. (1994, 2nd edition, Washington, DC: Gallaudet University Press). Caselli, Maria Cristina & Paola Casadio (1995). Il primo vocabolario del bambino. Guida all'uso del questionario MacArthur per la valutazione della comunicazione e del linguaggio nei primi anni di vita. Milano: Franco Angeli. Caselli, Maria Cristina, Patrizio Pasqualetti, & Silvia Stefanini (2007). Parole e frasi nel "Primo vocabolario del bambino". Nuovi dati normativi fra 18 e 36 mesi e forma breve del questionario. Milano: Franco Angeli.
Spontaneous gestures and spoken lexicon ability in children with Down syndrome
Caselli, Maria Cristina, Stefano Vicari, Emiddia Longobardi, Laura Lami, Claudia Pizzoli, & Giacomo Stella (1998). Gestures and words in early development of children with Down syndrome. Journal of Speech, Language and Hearing Research, 41, 1125–1135. Chapman, Robin S. & Linda J. Hesketh (2000). Behavioural phenotype of individuals with Down syndrome. Mental Retardation and Developmental Disabilities Research Reviews, 6, 84–95. Colletta, Jean M. (2004). Le développement de la parole chez l'enfant âgé de 6 à 11 ans. Corps, langage et cognition. Sprimont: Mardaga. Fenson, Larry, Philip S. Dale, J. Steven Reznick, Elizabeth Bates, Donna Thal, & Stephen Pethick (1994). Variability in early communicative development. Monographs of the Society for Research in Child Development, 59 (5, Serial No. 242). Gathercole, Susan E., Catherine S. Willis, Hazel Emslie, & Alan D. Baddeley (1992). Phonological memory and vocabulary development during the preschool years: A longitudinal study. Developmental Psychology, 28, 887–898. Goldin-Meadow, Susan (2007). Pointing sets the stage for learning language — and creating language. Child Development, 78 (3), 741–745. Goodwyn, Susan & Linda Acredolo (1993). Symbolic gesture versus word: Is there a modality advantage for onset of symbol use? Child Development, 64, 688–701. Guidetti, Michèle (2002). The emergence of pragmatics: Forms and functions of conventional gestures in young French children. First Language, 22, 265–285. Iverson, Jana M. & Susan Goldin-Meadow (2005). Gesture paves the way for language development. Psychological Science, 16 (5), 367–371. Iverson, Jana M., Emiddia Longobardi, & Maria Cristina Caselli (2003). Relationship between gestures and words in children with Down's syndrome and typically developing children in the early stages of communicative development. International Journal of Language & Communication Disorders, 38, 179–197. Jancovic, MerryAnn, Shannon Devoe, & Morton Wiener (1975).
Age-related changes in hand and arm movements as nonverbal communication: Some conceptualizations and an empirical exploration. Child Development, 46 (4), 922–928. Kelly, Spencer D. (2001). Broadening the units of analyses in communication: Speech and nonverbal behaviours in pragmatic comprehension. Journal of Child Language, 28, 325–349. Kendon, Adam (2004). Gesture: Visible action as utterance. Cambridge: Cambridge University Press. Leiter, R. G. (1979). Leiter international performance scale (LIPS). Los Angeles: Western Psychological Service. McGregor, Karla K., Rena M. Friedman, Renée M. Reilly, & Robyn M. Newman (2002). Semantic representation and naming in young children. Journal of Speech, Language and Hearing Research, 45, 332–346. McGregor, Karla K., Robyn M. Newman, Renée M. Reilly, & Nina C. Capone (2002). Semantic representation and naming in children with specific language impairment. Journal of Speech, Language and Hearing Research, 45, 998–1014. McNeill, David (1992). Hand and mind: What gestures reveal about thought. Chicago: University of Chicago Press. McNeill, David (Ed.) (2000). Language and gesture. Cambridge: Cambridge University Press. Nicoladis, Elena, Rachel I. Mayberry, & Fred Genesee (1999). Gesture and early bilingual development. Developmental Psychology, 35 (2), 514–526.
Özçalışkan, Şeyda & Susan Goldin-Meadow (2005). Gesture is at the cutting edge of early language development. Cognition, 96 (3), B101–B113. Piaget, Jean (1945). La formation du symbole chez l'enfant. Neuchâtel: Delachaux et Niestlé. Pine, Karen J., Nicola Lufkin, Elizabeth Kirk, & David Messer (2007). A microgenetic analysis of the relationship between speech and gesture in children: Evidence for semantic and temporal asynchrony. Language and Cognitive Processes, 22 (2), 234–246. Pizzuto, Elena & Micaela Capobianco (2005). The link and differences between deixis and symbols in children's early gestural-vocal system. Gesture, 5 (1/2), 179–199. Rizzolatti, Giacomo & Giuseppe Luppino (2001). The cortical motor system. Neuron, 31, 889–901. Stefanini, Silvia, Maria Cristina Caselli, & Virginia Volterra (2007). Spoken and gestural production in a naming task by young children with Down syndrome. Brain and Language, 101 (3), 208–221. Thal, Donna & Stacy Tobias (1992). Communicative gestures in children with delayed onset of oral expressive vocabulary. Journal of Speech and Hearing Research, 35, 1281–1289. Tomasello, Michael, Malinda Carpenter, & Ulf Liszkowski (2007). A new look at infant pointing. Child Development, 78 (3), 705–722. Vicari, Stefano, Giorgio Albertini, & Carlo Caltagirone (1992). Cognitive profiles in adolescents with mental retardation. Journal of Intellectual Disability Research, 36, 415–423. Vicari, Stefano, Maria Cristina Caselli, & Francesca Tonucci (2000). Asynchrony of lexical and morphosyntactic development in children with Down syndrome. Neuropsychologia, 38, 634–644. Volterra, Virginia, Maria Cristina Caselli, Olga Capirci, & Elena Pizzuto (2005). Gesture and the emergence and development of language. In Michael Tomasello & Dan Slobin (Eds.), Beyond nature-nurture. Essays in honor of Elizabeth Bates (pp. 3–40). Mahwah, NJ: Lawrence Erlbaum. Werner, Heinz & Bernard Kaplan (1963).
Symbol formation: An organismic developmental approach to language and the expression of thought. New York: John Wiley.
The effect of gestures on second language memorisation by young children

Marion Tellier
Université de Provence – Aix-Marseille I, France
This article examines the impact of gesture on second language memorisation when teaching very young learners. Twenty French children (mean age 5;5) took part in an experiment in which they had to learn eight words in a foreign language (English). One group of children (N = 10) was taught the words with pictures and the other group (N = 10) with accompanying gestures; children in the gesture group had to reproduce the gestures while repeating the words. Results show that gestures, and especially their reproduction, significantly influence the memorisation of second language (L2) lexical items as far as active knowledge of the vocabulary is concerned (being able to produce words and not only understand them). This finding is consistent with theories of multimodal storage in memory. When reproduced, gestures act not only as a visual modality but also as a motor modality and thus leave a richer trace in memory.

Keywords: teaching gestures, memorisation, second language acquisition, children
Several studies have emphasized the role of gestures in second language (L2) acquisition (for an overview, see Gullberg, 2008). Teachers tend to gesture a lot (Sime, 2001; Hauge, 1999), especially when addressing young learners and/or beginners. It is commonly acknowledged that 'teaching gestures' (i.e., gestures used deliberately by teachers to help their students) capture attention and make the lesson more dynamic. Using analyses of video recordings of English lessons to French students, Tellier (2006) determined three main roles for teaching gestures: management of the class (to start/end an activity, question students, request silence, etc.), evaluation (to show a mistake, correct, congratulate, etc.), and explanation (to give indications on syntax, underline specific prosody, explain new vocabulary, etc.). Teaching gestures appear in various shapes: hand gestures, facial expressions, pantomime, body movements, etc. They can either mime or symbolise something and
they help learners to infer the meaning of a spoken word or expression, provided that they are unambiguous and easy to understand. This teaching strategy is thus relevant for comprehension (Tellier, 2006). However, its utility may depend on the kind of gesture used by the teacher. It has been emphasized that foreign emblems, for instance, may lead to misunderstandings when they are not known by the learners (Hauge, 1999; Sime, 2001). In addition to supporting comprehension, teaching gestures may also be relevant for learners' memorisation processes. Indeed, many second language teachers who use gestures as a teaching strategy declare that they help learners in the process of memorising the second language lexicon. Many of them have noticed that learners can retrieve a word easily when the teacher produces in front of them the gesture associated with that lexical item during the lesson. Others have seen learners (especially young ones) spontaneously reproducing the gesture when saying the word. The effect of gestures on memorisation is thus something witnessed by many but hardly explored on a systematic and empirical basis.
Multimodality and memorisation

One of the universally acknowledged facts about memory in cognitive psychology is that it can be divided into three stores: the sensory, the short-term, and the long-term. Differences between these types of memory are based both on storage capacity and on how long a piece of information is retained (from a millisecond up to many years). Short-term memory, now more commonly referred to as working memory, has been considered to be a very dynamic system. Baddeley (1990) identifies three components of working memory: (1) the Articulatory Loop, which consists of a speech sound based storage system that can hold a limited quantity of phonological items; (2) the Visuo-Spatial Sketchpad, which is related to visual imagery and which serves to encode non-verbal visual and spatial information; and (3) the Central Executive Device, which controls the two other components and allocates attention to incoming stimuli. It is also responsible for retrieving information from long-term memory. As far as learning is concerned, several researchers have been interested in how multimodality (the co-occurrence of several modalities) can reinforce memorisation. Clark and Paivio's Dual Coding Theory (1991) suggests that learning is reinforced when both verbal and non-verbal modalities co-occur. Baddeley (1990) also argues that coding a piece of information through different modalities has an impact on memorisation because it leaves more traces in the memory system. Moreno and Mayer (2000) argue that multimedia learning can be efficient because
it conveys both auditory and visual information. Their cognitive theory of multimedia learning is based on the assumptions that working memory includes independent auditory and visual working memories and that humans have separate systems for representing verbal and non-verbal information, consistent with the Dual Coding Theory. Furthermore, research in cognitive psychology has highlighted the effect of enactment and of the motor modality on memorisation. Recall of enacted action phrases has been found to be superior to recall of action phrases without enactment (Engelkamp & Cohen, 1991; Cohen & Otterbein, 1992). Engelkamp and Zimmer (1985) demonstrated that the free recall of enacted sentences is superior to the recall of spoken sentences and to the recall of visually imaged sentences. Thus, the enactment effect is not a mere visual effect. Engelkamp and Zimmer (1985) explain the enactment effect on memorisation by postulating a motor system above the visual and the verbal memory systems. It seems that the encoding of enacted events involves a verbal modality, a visual modality, and a motor modality. Thus, enactment adds something to the memory trace of the event: it makes the trace richer, or more distinctive, and consequently easier to find at recall. Nyberg, Persson, and Nilsson (2002) have demonstrated the positive effect of enactment encoding on memorisation for different populations (including demented patients and patients with frontal-lobe dysfunction) and for different age groups ranging from 35 to 80 years of age. Recent neuroimaging studies have also brought evidence that retrieval following enactment encoding is associated with motor brain regions (Nyberg et al., 2002). Brain activity is higher during cued recall after enacted encoding compared to cued recall after verbal encoding.
The activated regions (contralateral somatosensory and motor cortex) are also more active during enacted encoding compared to verbal encoding, suggesting that some of the motor areas that are engaged during enactment are subsequently reactivated during retrieval.
The effect of gestures on short-term memorisation in the first language (L1)

There has been very little work on the impact of gestures on short-term and long-term memorisation in general. Experiments by Cohen and Otterbein (1992) have demonstrated that adult subjects exposed to sentences illustrated by pantomimic gestures1 remembered significantly more sentences than subjects who did not see the gestures and subjects who saw non-pantomimic gestures. They worked with three groups of adult subjects. The subjects had to watch a video containing several different sentences in their L1 and then to write down as many sentences as they could remember in a free recall task. Each group received the same verbal input
but the videos were slightly different: one just presented the sentences, the second showed somebody illustrating each sentence with pantomimic gestures, and in the last video, sentences were accompanied by non-pantomimic (i.e., meaningless) gestures. A similar experiment set up by Feyereisen (1998) confirms Cohen and Otterbein's results. Feyereisen hypothesised that a sentence accompanied by a gesture is better remembered either because the gesture constitutes a distinctive effect (the gesture adds some particularity to the sentence) or because the gesture conveys significant information related to the meaning of the sentence in a visual modality which is added to the verbal information (Dual Coding Theory). As in Cohen and Otterbein's study, Feyereisen exposed his subjects to three kinds of sentences: without gestures, with iconic gestures, and with iconic gestures that did not match the content of the sentences and thus looked meaningless. Feyereisen found that in the recall task, facilitation only occurred for the sentences that were presented with iconic gestures that matched the content. He thus inferred that higher recall scores do not depend on the increased distinctiveness of sentences presented together with meaningless gestures. The results highlight the effect of meaningful gestures and favour the hypothesis of the Dual Coding Theory and the impact of multimodality on the memorisation of sentences in the L1 by adult subjects. Both studies (Cohen & Otterbein, 1992; Feyereisen, 1998) dealt only with adult subjects; in the experiments reported here, younger subjects were tested. There has been no work on the effect of gestures on memorisation in children, whether in the first or a second language, or on short-term or long-term memorisation.
In a series of studies (Tellier, 2005, 2006, 2007), the impact of gestures on memorisation has been explored to examine whether gestures improve children's memory for words in the L1, taking into account the difference in mnemonic span between adults and children. The notion of mnemonic span is used here to refer to the number of items a subject can memorise from a list heard once. The average score for an adult is 7 items plus or minus 2 (Miller, 1956; Baddeley, 1990). However, it is lower for children and increases with age and cognitive development (Cowan et al., 1991). Span for digits (the most frequent stimuli) exhibits roughly a threefold increase from the age of 2 to young adulthood (college). The mnemonic span is thus about 2 items at the age of 2, 4 items at the age of 5, 5 at the age of 7, 6 items at the age of 9, and 7 at the age of 12 (Dempster, 1981). A first study (Tellier, 2005) involved 32 French children (age range 4;11 to 5;10, M 5;5) who were divided into 2 groups (control and experimental). They had to watch 3 videos (each contained a list of 10 words in the L1). The children watched the videos alone with the experimenter and had to do a free recall task immediately afterwards. The three videos watched by the control group only presented them with words pronounced by a person on the screen. The first video watched by the experimental group was the same as for the control group, the second video
was illustrated with gestures and the third with pictures. The experimental group had significantly better results with video 2 and 3. This suggests that the use of visual modalities (pictures and gestures) improves short-term memorisation in a free recall task, consistent with the Dual Coding Theory. The significant effect of pictures on young children’s memory span is also consistent with previous findings (Cowan et al., 1991, who worked with 4-year olds). Importantly, there was no statistical difference between the effect of the picture and of the gesture on memorisation. In this case, gestures acted as a mere visual modality since they were only looked at. A second study (Tellier, 2007) examined whether reproducing gestures has a greater impact on children’s memory span than merely looking at them. 42 French children (age range 5;3 to 6;3, M 5;9) performed a very similar task to the previous experiment except that images were not used and that children were asked to repeat the words out loud in their first language after listening to them. There were three groups for the study. A control group listened to the words and repeated them. A first experimental group (EG1) listened to the words and repeated them as well but also looked at illustrative gestures with each word. A second experimental (EG2) group was told to listen to the words, repeat them, look at the gestures and reproduce them. They were then given a free recall task. Results show that the second experimental group (EG2) did significantly better than the two other groups (control and EG1). This points to an effect of the reproduction of gestures on short-term memorisation in the L1.
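The developmental span values cited above (Dempster, 1981) can be gathered into a small lookup. The following sketch estimates expected list-recall capacity by age; the linear interpolation between the cited reference ages is our own assumption, not part of the original findings.

```python
# Mnemonic span by age, from the values cited in the text (Dempster, 1981):
# about 2 items at age 2, 4 at age 5, 5 at age 7, 6 at age 9, 7 at age 12.
SPAN_BY_AGE = {2: 2, 5: 4, 7: 5, 9: 6, 12: 7}


def estimated_span(age: float) -> float:
    """Estimate mnemonic span, interpolating linearly between cited ages."""
    ages = sorted(SPAN_BY_AGE)
    if age <= ages[0]:
        return SPAN_BY_AGE[ages[0]]
    if age >= ages[-1]:
        return SPAN_BY_AGE[ages[-1]]
    for lo, hi in zip(ages, ages[1:]):
        if lo <= age <= hi:
            frac = (age - lo) / (hi - lo)
            return SPAN_BY_AGE[lo] + frac * (SPAN_BY_AGE[hi] - SPAN_BY_AGE[lo])
```

On this toy estimate, the 5-year-olds tested in the studies above would be expected to recall about 4 items from a list heard once, roughly half the adult span.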
The effect of gestures on memorisation in a second language (L2)

As far as the effect of gestures on the memorisation of items in a second language is concerned, there have been very few studies. Allen (1995) worked with 112 American university students of French. A control group and a comparison group were shown 10 French sentences and their English equivalents on a screen, and they also heard a teacher pronouncing them 3 times. The students were told to repeat them. The experimental group's procedure differed only in that the students were also provided with an illustrative gesture for each sentence, which they saw three times (with the three repetitions of the sentence) and had to reproduce. However, they did not repeat the sentences, only the gestures. Then, immediately after all 10 sequences, a post-test was given in which the teacher produced the 10 French sentences in a different order and, during the pause after each sentence, the subjects had to write down the English equivalent. The comparison group and the experimental group were given the gestures as well. There were 5 sessions of this kind with different groups of 10 French expressions. The results show that the students
presented with illustrative gestures recalled more sentences than the others. The experimental group, who reproduced the gestures, did significantly better than the comparison group, who only saw them during the post-test. The effect of reproducing gestures on memorisation in the L2 by adult learners was thus confirmed. Allen's pioneering experiment (1995) seems to be the only study on the impact of gestures on the memorisation of L2 sentences. However, it has two limitations. First, the L2 sentences were always given to the subjects with their L1 translation, but the sentences to be memorised were French idiomatic expressions, which are not always directly translatable. Second, subjects were asked during the post-test to give the L1 equivalent of the L2 sentences, which were only used as stimuli. The study thus does not assess how many L2 expressions subjects remembered with the help of gestures, but rather how many expressions they can translate. The experiment therefore dealt mainly with passive knowledge of the vocabulary, that is, the ability to recognise and translate the L2 items but not to produce them. It is therefore not clear whether gestures affect active knowledge of L2 vocabulary. It is also not known whether gestures affect the memorisation of lexical items in the L2 by child learners. The current study examines precisely these issues.
This study

Hypothesis

This experiment builds on the findings of the experiments mentioned earlier. First, it is assumed that gestures and pictures, when used as visual modalities, have a similar impact on memorisation (Tellier, 2005). Second, as demonstrated by Engelkamp and Cohen (1991), Allen (1995), and Tellier (2007), the reproduction of gestures has a stronger impact on short-term memorisation than only viewing gestures. The aim of this study is to examine whether this also holds for the learning of second language items and for long-term memorisation. The assumption is that the combined use of a spoken modality, a visual modality, and a motor modality leaves a richer memory trace (Engelkamp & Cohen, 1991; Cohen & Otterbein, 1992; Nyberg et al., 2002). Thus, seeing and reproducing gestures (visual and motor modalities) should have a stronger impact on the memorisation of items than simply seeing pictures (visual modality). This study also aims to assess active knowledge of new vocabulary.
Method

Participants

Twenty French children took part in this experiment on long-term memorisation (age range 4;11–5;10, M 5;5, SD 3 months). They were divided randomly into two groups of 10: one picture group and one gesture group. Every child was a monolingual French native speaker; none of them knew English.
Materials

Eight English words were selected and associated with a picture and an illustrative gesture: 'house', 'swim', 'cry', 'snake', 'book', 'rabbit', 'scissors' and 'finger'. Since a child of 5 has a mnemonic span of about 4 items (Dempster, 1981), and the memory span for a foreign language is reduced (Gaonac'h, 1995), we included only a small number of words. As we are dealing with long-term memorisation and as the procedure of the experiment requires several sessions, we can expect children to remember more and more items after each session. To avoid floor or ceiling effects, eight items were chosen: not so many words to learn that the child could be discouraged, but enough to enable progression to be observed. The lexical items chosen for this experiment were very common words for children, likely to be taught in a second language course in accordance with French official instructions (Ministère de la Jeunesse, de l'Education Nationale et de la Recherche, 2002). They were also selected because they are easy to illustrate both with pictures and gestures. Two sets of experimental videos were developed with a presentation of the words and their visual equivalents. One video showed the words only with gestures and the other only with pictures. In each video, the lexical items were pronounced clearly and were followed by a blank of two seconds to let the children repeat the words. The presentation of the corresponding pictures or gestures slightly preceded the pronunciation of the words. The experimental videos were also used in the assessments. All items were presented in the same order during training, but in a different order during the assessments. The gestures selected appeared as typical gestures used in teaching contexts to young children.
For instance, the gesture that represented 'book' was made by opening and closing hands, palms facing up; the gesture for 'swim' was a mime of the action of swimming (breaststroke); and the gesture for 'cry' consisted in drawing tears with a finger down the cheeks of a sad face. These gestures had been selected from recordings made by the author of two English classes for young French children in 2004 and 2005. The first class involved children aged 4 to 5 and the second one children of 5 to 6 years of age. In these recordings both teachers were French and experienced in teaching English to children. The classes recorded, each an hour in length, were held weekly. They were organized by a French association called Mini-Schools and not connected to school or any other national curriculum. Figures 1 to 6 show some of the gestures and pictures used as materials.

Figures 1 and 2. Snake

Figures 3 and 4. Book
Figures 5 and 6. Rabbit
Procedure The study lasted 4 weeks with one session per week during which children watched the videos according to their group (picture or gesture). The videos were displayed on a laptop. Children were told that it was a game to learn English words. Children were tested individually. Initially, every word was presented once with both the picture and the gesture, to make sure that the meaning was understood and that there was no ambiguity. For some children, especially the younger ones, the meaning of certain gestures and pictures may not be easy to detect and it seemed best to use several modalities to clarify the meaning of some words in the second language classroom (Tellier, 2006). Following that and for the rest of the experiment, the words were only presented with one additional modality (gesture or picture, depending on the group). During the first three sessions, the subjects were told to repeat the English words they heard 5 times (once during the first session and twice during Session 2 and 3). The children of the gesture group also had to reproduce the gestures while repeating the words. Every subject heard and repeated each word exactly the same number of times (i.e., 5 times) so that the children received the same input. Session 2 and Session 3 included an assessment. In Session 4, there was no warm-up of the items but only two assessments. In the first assessment (Session 2), the passive knowledge of the vocabulary was evaluated. Children heard the English words in a different order and had to show the associated picture or gesture depending on their group. In the second assessment (Session 3), they were shown the pictures or the gestures and had to produce the corresponding English word, which gave
Marion Tellier
Table 1. Procedure

Session 1
  Picture group: 1. Double presentation: watch in silence. 2. Listen and repeat each word with video.
  Gesture group: 1. Double presentation: watch in silence. 2. Listen and repeat each word and each gesture with video.
  Assessment: none.

Session 2
  Picture group: 1. Listen and repeat each word with video. 2. Listen and repeat each word with video.
  Gesture group: 1. Listen and repeat each word and each gesture with video. 2. Listen and repeat each word and each gesture with video.
  Assessment: show the appropriate picture or gesture.

Session 3
  Picture group: 1. Listen and repeat each word with video. 2. Listen and repeat each word with video.
  Gesture group: 1. Listen and repeat each word and each gesture with video. 2. Listen and repeat each word and each gesture with video.
  Assessment: produce the appropriate word.

Session 4
  Picture group: no rehearsal.
  Gesture group: no rehearsal.
  Assessments: a. as in Session 3; b. as in Session 2.
an evaluation of the active knowledge of the vocabulary. In the last assessment (Session 4), both previous assessments were conducted: first the evaluation of the active knowledge, then that of the passive knowledge of the lexical items. Table 1 sums up the procedure for each group.
Results

The mean numbers of correctly memorised words per group and assessment session are summarised in Table 2.
First assessment (Session 2)

In this assessment, the aim was to measure the passive knowledge of the vocabulary, i.e., whether children were able to show the visual equivalents of the words. The subjects of the picture group were shown the pictures and asked to point to the corresponding one when they heard an English word. The subjects of the gesture group had to produce the appropriate gesture. Note that the task was more difficult for the children of the gesture group: the subjects of the picture group had to choose among a limited number of pictures in front of them, whereas children of the gesture group had to remember the gestures they had learnt.
Table 2. Means of memorised words for each group per assessment

                          Picture group   Gesture group   Difference (t-tests)
Assessment 1 (passive)    3               3.1             —
Assessment 2 (active)     2.6             3.7             t(18) = −2.108, p < .0493
Assessment 3a (active)    2.8             3.8             t(18) = −2.433, p < .0256
Assessment 3b (passive)   4.3             4.9             t(18) = −1.579, p < .1318
Despite the asymmetry in the difficulty of the task, children of both groups performed equally well (cf. Table 2). The subjects of the picture group gave a mean of 3 correct answers (range 1–6, SD 1.3) and those of the gesture group a mean of 3.1 (range 1–5, SD 1.4). Four words were memorised better than the others: ‘scissors’ (19/20 children), ‘rabbit’ (10/20), ‘cry’ (10/20) and ‘finger’ (9/20). It is noticeable that three of these four words are disyllabic.
Second assessment (Session 3)

In the second assessment, the aim was to measure the active knowledge of the vocabulary, that is, whether children were able to produce the English words. For this assessment, they saw the pictures or gestures in a different order from that of the repetitions and had to name them. The picture group gave a mean of 2.6 correct words (range 1–5, SD 1.17) and the gesture group 3.7 (range 2–5, SD 1.16), a difference of 1.1 words. An independent samples t-test (cf. Table 2) confirmed that this difference was significant, revealing an effect of the reproduction of gestures on memorisation.
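The group comparisons reported here are independent samples t-tests with pooled variance: two groups of 10 children give df = 10 + 10 − 2 = 18. As a minimal sketch of how such a statistic is computed — the per-child scores below are invented for illustration only, since the paper reports only group means and SDs:

```python
from math import sqrt

def independent_t(a, b):
    """Student's independent-samples t-test with pooled variance.
    Returns the t statistic and degrees of freedom (len(a) + len(b) - 2)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # Sums of squared deviations from each group mean
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    pooled_var = (ssa + ssb) / (na + nb - 2)
    se = sqrt(pooled_var * (1 / na + 1 / nb))  # standard error of the mean difference
    return (ma - mb) / se, na + nb - 2

# Invented scores (words correct out of 8) for two hypothetical groups of 10:
picture = [2, 3, 2, 1, 4, 3, 2, 3, 4, 2]
gesture = [4, 3, 5, 4, 2, 4, 3, 5, 4, 3]
t, df = independent_t(picture, gesture)  # negative t: the gesture group scored higher
```

A negative t simply reflects subtracting the (higher) gesture-group mean from the picture-group mean, matching the sign convention in Table 2.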
Third assessment (Session 4)

This last assessment was concerned with long-term memorisation, since no rehearsal of the items was provided during the session: children had not heard the words for a whole week when they were assessed. The picture group gave a mean of 2.8 correct answers (range 2–4, SD 0.919) and the gesture group a mean of 3.8 (range 2–5, SD 0.919), a difference of 1 word. An independent samples t-test showed that this difference was significant (cf. Table 2). There was therefore an effect of the reproduction of gestures on long-term memorisation. The best memorised words were ‘scissors’ (20/20 children), ‘book’ (16/20), ‘rabbit’ (10/20), and ‘cry’ and ‘finger’ (7/20 each).
As far as the passive knowledge of the vocabulary was concerned in this last assessment, there was no real difference in performance. Children in the picture group correctly paired 4.3 words (range 3–5, SD 0.823) with the appropriate picture, and children in the gesture group correctly paired 4.9 words (range 4–6, SD 0.876) with their gestures. An independent samples t-test showed that the difference was not significant (cf. Table 2).
Memorisation of the items

Across all the assessments, 5 items were successfully memorised more often than the others: ‘finger’, ‘cry’, ‘rabbit’, ‘book’ and ‘scissors’. Figures 7 and 8 show the frequency of correct answers given for each word for the passive and for the active knowledge assessments. How can we explain that some L2 lexical items are easier to memorise than others, such as ‘house’ and ‘swim’? One possibility is that syllabic structure matters: among those 5 words are the only three disyllabic words of the list. Regarding the item ‘scissors’, this word may have sounded familiar to some children, since the French equivalent is ‘ciseaux’, a phonologically similar form. Furthermore, during the repetitions of the items, ‘scissors’ was always the last one of the list, which may have led to a recency effect (Baddeley, 1990).
Discussion

The present study aimed to examine the effect of gesture reproduction on the long-term memorisation of L2 vocabulary in children. As hypothesised, the gesture group did significantly better than the picture group, at least in the assessments measuring the active knowledge of the vocabulary. It appears that when gestures are reproduced and act as a motor modality, they have a stronger impact on memorisation than pictures (a visual modality). This result is consistent with previous studies (Engelkamp & Cohen, 1991; Cohen & Otterbein, 1992; Nyberg et al., 2002) which have shown that enactment makes the trace in memory richer and facilitates recall. This is an important fact for teachers who want to help their young learners to acquire a second language: involving the body in the learning process is therefore relevant in the classroom. However, the results of the current study have to be treated with caution due to the limited number of subjects involved (N = 20, split into two groups of 10). The experiment should be replicated with a larger sample of children. It could also be relevant to examine children of different age groups to investigate whether the effect of the reproduction of gestures holds for learners of all ages.
Figures 7 and 8. Frequency of correct answers by word (top: active vocabulary, Eval 2 vs. Eval 3a; bottom: passive vocabulary, Eval 1 vs. Eval 3b). The x-axis lists the words (House, Swim, Snake, Cry, Finger, Rabbit, Book, Scissors); the y-axis shows the number of correct answers (0–20).
In addition, the scores of the learners are rather low; the gesture group only memorised 3.8 words out of 8 after the last session. This can be explained by the fact that the words were repeated only 5 times and were learnt in a controlled experimental setting. Learning vocabulary in a classroom is obviously very different: the input in class is much richer, and so is the number of repetitions. Also, unlike in the experiment, children in class have opportunities to use the new vocabulary in various ways, in stories, games, songs and other activities.
For future work it seems relevant to study different kinds of words, since memorisation may differ by word class. It has been suggested that verbs are harder to learn than nouns (see Gentner, 1981, for an overview). However, Choi and Gopnik (1995) investigated children’s early lexical development in English and Korean, and compared caregivers’ linguistic input in both languages. They found that very young Korean children use verbs productively with appropriate inflections and that, for most of them, the verb spurt occurs before the noun spurt. Unlike in English, both verbs and nouns are dominant categories in Korean from the single-word stage. By comparing the verbal input received by children of both linguistic groups, Choi and Gopnik (1995) found that Korean caregivers used more verbs and fewer nouns than the American mothers. The study suggests that verbs are accessible to children from the beginning, and that they may be acquired early by children whose language-specific grammar and input encourage them to do so. Even if the findings in this domain are somewhat contradictory, it would nevertheless be interesting to assess the impact of gestures on the memorisation of nouns and verbs in second language acquisition.
For instance, one may wonder whether action verbs are easier to memorise with gestures than nouns are. Moreover, it would be interesting to further examine the syllable as a variable by testing monosyllabic as well as di- or even polysyllabic words. Indeed, the three disyllabic words in our study were among the best memorised items, although it is difficult to know whether this is due to a word-duration effect. The literature on memory span generally suggests that lists made up of long words are harder to recall than lists of short words. Baddeley, Thomson, & Buchanan (1975) found a significant effect of word length on immediate serial recall. They showed that this effect was due to articulatory duration by selecting two sets of disyllabic words which were matched for frequency and number of phonemes but which differed in articulatory duration: in tests of immediate serial recall, the lists made from words with a short articulatory duration were better recalled than those made from long words. Similarly, Lovatt et al. (2000) also conducted a
series of experiments to compare immediate serial recall of disyllabic words that differed in spoken duration. They first found that long words were better recalled than short words. However, in a second experiment using another set of items, they found no difference between long and short disyllabic words. Finally, in a third experiment using the word set originally selected by Baddeley, Thomson, and Buchanan (1975), they confirmed the large advantage for short-duration words. Lovatt et al. (2000) suggest that there is no reliable advantage for short-duration disyllables in span tasks, and that previous accounts of a word-length effect in disyllables are based on accidental differences between list items. It seems that more data on this topic are needed.
Another explanation for why some words are better memorised than others may be that they sound more distinctive or more pleasant to the children. In a study on the effect of multimodality (especially pictures) on the memorisation of L2 lexical items, Dat (2006) found that a long item such as ‘coconut tree’ was among the better memorised words. The effect of the distinctiveness of some words on memorisation should be investigated in the SLA field. In a more global perspective, the impact of prosody on second language learning requires more attention, particularly in the L2 classroom.
To conclude, this study has shown that gesturing enables children to memorise L2 vocabulary better, as they get physically involved in their learning. The findings support Paivio’s Dual Coding Theory, which argues that encoding information in both a verbal and a visual modality reinforces memorisation. This study goes one step further, showing that gestures — a motor modality — leave an even richer trace in memory.
Note

1. The words “pantomimic” and “non-pantomimic” are used by Cohen and Otterbein (1992). Pantomimic gestures are mime-like gestures that represent speech, whereas non-pantomimic gestures have no semantic connection with the speech they accompany: they are meaningless movements.
References

Allen, Linda Q. (1995). The effects of emblematic gestures on the development and access of mental representations of French expressions. The Modern Language Journal, 79, 521–529.
Baddeley, Alan (1990). Human memory: Theory and practice. East Sussex: Lawrence Erlbaum.
Baddeley, Alan, Neil Thomson, & Mary Buchanan (1975). Word length and the structure of short-term memory. Journal of Verbal Learning and Verbal Behavior, 14, 575–589.
Choi, Soonja & Alison Gopnik (1995). Early acquisition of verbs in Korean: A cross-linguistic study. Journal of Child Language, 22 (3), 497–529.
Clark, James M. & Allan Paivio (1991). Dual coding theory and education. Educational Psychology Review, 3 (3), 149–210.
Cohen, Ronald L. & Nicola Otterbein (1992). The mnemonic effect of speech gestures: Pantomimic and non-pantomimic gestures compared. European Journal of Cognitive Psychology, 4 (2), 113–139.
Cowan, Nelson, J. Scott Saults, Carrie Winterowd, & Molly Sherk (1991). Enhancement of 4-year-old children’s memory span for phonologically similar and dissimilar word lists. Journal of Experimental Child Psychology, 51, 30–52.
Dat, Marie-Ange (2006). Didactique présecondaire des langues étrangères: l’influence de la présentation multimodale du lexique sur la mémorisation chez des enfants de 8 à 11 ans. Unpublished doctoral dissertation, University of Toulouse 2 — Le Mirail.
Dempster, Frank N. (1981). Memory span: Sources of individual and developmental differences. Psychological Bulletin, 89, 63–100.
Engelkamp, Johannes & Hubert D. Zimmer (1985). Motor programs and their relation to semantic memory. German Journal of Psychology, 9, 239–254.
Engelkamp, Johannes & Ronald L. Cohen (1991). Current issues in memory of action events. Psychological Research, 53, 175–182.
Feyereisen, Pierre (1998). Le rôle des gestes dans la mémorisation d’énoncés oraux. In Serge Santi, Isabelle Guaïtella, Christian Cavé, & Gabrielle Konopczynski (Eds.), Oralité et gestualité. Communication multimodale, interaction. Actes du colloque Orage 98 (pp. 355–360). Paris: L’Harmattan.
Gaonac’h, Daniel (1995). La mémoire dans l’apprentissage des langues vivantes: fonctionnement, acquisitions, utilisation. Les langues modernes, 2, 9–24.
Gentner, Dedre (1981). Some interesting differences between verbs and nouns. Cognition and Brain Theory, 4 (2), 161–178.
Gullberg, Marianne (2008). Gestures and second language acquisition. In Nick C. Ellis & Peter Robinson (Eds.), Handbook of cognitive linguistics and second language acquisition (pp. 276–305). London: Routledge.
Hauge, Elizabeth (1999). Some common emblems used by British English teachers in EFL classes. In David Killick & Margaret Parry (Eds.), Cross-cultural capability — Promoting the discipline: Marking boundaries and crossing borders. Proceedings of the conference at Leeds Metropolitan University Dec. 1998 (pp. 405–420).
Lovatt, Peter, Steve E. Avons, & Jackie Masterson (2000). The word-length effect and disyllabic words. The Quarterly Journal of Experimental Psychology, 53A (1), 1–22.
Miller, George A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81–97.
Ministère de la Jeunesse, de l’Education Nationale et de la Recherche (2002). Programme des langues étrangères et régionales à l’école primaire. B.O.E.N. hors série n° 4 du 29 août 2002.
Moreno, Roxanna & Richard E. Mayer (2000). A learner-centered approach to multimedia explanations: Deriving instructional design principles from cognitive theory. Interactive Multimedia Electronic Journal of Computer Enhanced Learning, 2 (2). http://imej.wfu.edu.
Nyberg, Lars, Jonas Persson, & Lars-Göran Nilsson (2002). Individual differences in memory enhancement by encoding enactment: Relationships to adult age and biological factors. Neuroscience and Biobehavioral Reviews, 26, 835–839.
Sime, Daniela (2001). The use and perception of illustrators in the foreign language classroom. In Christian Cavé, Isabelle Guaïtella, & Serge Santi (Eds.), Oralité et gestualité. Interactions et comportements multimodaux dans la communication (pp. 582–585). Paris: L’Harmattan.
Tellier, Marion (2005). L’utilisation des gestes en classe de langue: comment évaluer leur effet sur la mémorisation du lexique? In Michel Billières, Pascal Gaillard, & Nathalie Spanghero-Gaillard (Eds.), Actes du Premier colloque international de Didactique Cognitive: DIDCOG 2005. Université de Toulouse — Le Mirail, 26–28 Janvier 2005. Proceedings on CD-ROM.
Tellier, Marion (2006). L’impact du geste pédagogique sur l’enseignement-apprentissage des langues étrangères: Etude sur des enfants de 5 ans. Unpublished doctoral dissertation. University Paris 7 — Denis Diderot, Paris.
Tellier, Marion (2007). How do teacher’s gestures help young children in second language acquisition? Proceedings of the meeting of the International Society of Gesture Studies, ISGS 2005: Interacting Bodies, June 15–18, ENS Lyon, France. http://gesture-lyon2005.ens-lsh.fr/IMG/pdf/TellierFINAL.pdf
Gesture and information structure in first and second language*

Keiko Yoshioka
Leiden University
This study investigates the frequency of gestural marking of pre-introduced referents in discourse by Dutch learners of Japanese, with native data as a baseline. Of interest is whether learners’ over-explicit marking of referents in speech and gesture reported in the literature is a phenomenon related specifically to the acquisition process of pronominal systems, or a general phenomenon related to learning to structure information in a target-like manner irrespective of the target language. The data were analyzed in terms of the referential expressions used (lexical NP, pronoun, zero-anaphora) and of the rate of gestures accompanying mentions of referents. The results reveal that even when the target language does not have an active use of pronouns, learners overtly specify referents in two modalities. Cross-linguistic variations in discourse-related gestures and possible accounts of the frequent gestural marking of referents in L2 are discussed.

Keywords: gesture, second language learners, discourse, information structure
Introduction

Reference tracking in speech

In order to construct intelligible discourse, it is essential that the identities of referents are made clear at all times. To achieve this, speakers can use a range of referring expressions, and they use them strategically so that the information represented by referents is conveyed to the listener efficiently and effectively. However, acquiring the skills required to create anaphoric linkages between mentions of the same referent, and to track referents in a manner appropriate to a target language, poses a challenge to learners. Various studies reveal that learner speech is characteristically over-explicit; that is, learners use lexical noun phrases (henceforth lexical NPs) when attenuated forms such as pronouns and
zero-anaphora are more appropriate (e.g., Hendriks, 2003; Jung, 2004; Muñoz, 1995; Polio, 1995). The following is an example.

(1) kono otoko no hito ga inu chiisai inu ga imasu
    this male GEN person NOM dog small dog NOM exist:NONPAST
    chiisai inu wa otoko no ko o tetsudatte-agemasu
    small dog TOP male GEN child ACC help-give:NONPAST
    chiisai inu wa mado kara ochimasu
    small dog TOP window from fall:NONPAST1
    ‘This boy has a dog, a small dog. The small dog helps the boy. The small dog falls from a window.’
In (1), the speaker first introduces a new referent with a lexical NP (chiisai inu ‘a small dog’). In the following co-referential context, the speaker does not switch to attenuated forms of reference but repeatedly uses the same lexical NP to mark the same referent. L2 learners’ co-referentially over-explicit reference tracking, as seen above, seems to contradict the principles governing the use of referring expressions established for a number of languages (e.g., Ariel, 1990; Chafe, 1994; Givón, 1985; Lambrecht, 1994). These studies reveal that the choice of linguistic means used for tracking referents interacts with the information status of the entity being referred to. For instance, Givón’s Principle of Quantity states that “The less predictable/accessible/continuous a topic is, the more coding material is used to represent it in language” (1985, p. 197). Although languages may vary in the specific forms of referring expressions available to their speakers, the basic relationship between the choice of referring expressions and the information status of a referent, that is, ‘less attenuated (form) for less accessible (information)’, has been found to apply to all languages in native use. Thus, the repetitive use of lexical NPs in L2 causes the information status (‘new’ vs. ‘given’) of referents to be unclear and may, as a result, hinder fast retrieval of information by the listener (Cloitre & Bever, 1988).
Given that adult L2 learners know from their first language (L1) how languages work (including the principles underlying the use of referring expressions), one may wonder why they do not apply this L1 knowledge in L2 use. Various explanations have been put forward to account for the over-explicit reference tracking observed in L2 discourse. One is that the use of lexical NPs may be driven by learners’ preference to be ‘hyper-clear’ (Williams, 1989).
Using complex attenuated forms (e.g., pronouns) makes learners prone to error, resulting in important information about referents such as gender, number and case becoming ambiguous. Thus, in order to avoid such ambiguity, learners may resort
to an increased use of lexical NPs. Another explanation views the use of lexical NPs from the perspective of acquisition order, arguing that learners may prefer to use lexical means (such as nouns) before using grammaticised linguistic means (such as pronouns) for tracking referents (Hendriks, 2003). Other researchers suggest that learners may avoid using pronouns because the proper use of these forms requires complex planning (Carroll, Murcia-Serra, Watorek, & Bendiscoli, 2000). Attending both to the grammar of on-going utterances and to the flow of information in discourse (i.e., whether the information status of a referent is ‘new’ or ‘given’) is a cognitively demanding task; resorting to lexical NPs may be one way to avoid cognitive overload.
Cross-linguistic transfer of reference tracking strategies has been suggested as another explanation for the over-use of lexical NPs in place of zero-anaphora (Ø) (Polio, 1995; Jung, 2004). Languages vary with respect to the use of zero-anaphora (Li & Thompson, 1976). Studies reveal that learners whose L1s restrict the use of zero-anaphora may limit their use of this attenuated form in L2 to L1-like usage levels even if the target language allows a wider use (Jung, 2004; Nakahama, 2003; Polio, 1995; Yanagimachi, 1997).
One phenomenon discussed in the literature with respect to reference strategies concerns the fixation of viewpoint in organizing a narrative (Nakahama & Kurihara, 2007; Yanagimachi, 1997). It has been reported that native speakers of Japanese, a language with various forms which reflect speaker viewpoint (e.g., forms of giving-receiving, ‘come-go’ auxiliaries), prefer to project themselves onto the protagonist and narrate a story from that viewpoint (Nakahama & Kurihara, 2007; Yanagimachi, 1997; see Mizutani, 1985, for theoretical discussion).
Japanese speakers’ tendency to focus on the protagonist is reflected in their narrative style, where the main character occupies a subject/topic position over a stretch of discourse. Even when a different character is re-introduced into the narrative, the use of viewpoint-related forms allows the speaker to continuously fixate the viewpoint on the same character. Here is an example:

(2) Ø samishii omoi o shite
    lonely feeling ACC do:TE
    Ø sooji toka shokkiarai toka hajimerun dakedo
    cleaning so forth washing dishes so forth begin:NONPAST but
    Ø kamatte-kurenai mon dakara Ø ie o dete-shimatte
    pay attention-give:NEG SE COP so house ACC exit-ASP:TE
    “Ø (= dog) felt lonely, and Ø (= dog) started vacuuming and washing the dishes and so on, but Ø (= couple) did not pay attention for the sake of the dog (from the viewpoint of the dog), so Ø (= dog) left the house and…”
    (based on Yanagimachi, 1997)
In (2), the speaker describes events focusing on the protagonist (the dog), which occupies the topic role in the first two clauses and is consecutively marked by zero-anaphora (Ø). In the next clause, different referents (a couple) are re-introduced into the narrative, also marked by zero-anaphora. At this point, instead of using the viewpoint-neutral verb kamau (‘pay attention’), the speaker chooses to add kureru (‘give’) to the verb, an auxiliary which describes an action from the viewpoint of the receiver.2 The choice of kureru (kurenai in the negative) allows the speaker to keep the viewpoint of the narrative fixated on the protagonist, so that the zero-marked re-introduction of the protagonist in the following clause is not likely to cause any ambiguity regarding its identity. The overall effect of viewpoint fixation is thus the creation of zero-anaphoric chains. Studies show that target-like viewpoint fixation is difficult for learners to acquire: learners frequently shift their viewpoint from one character to another and mark the re-introduction of referents much more overtly with lexical NPs than their native counterparts (Nakahama & Kurihara, 2007; Yanagimachi, 1997).
Reference tracking in gesture

In a series of studies, Gullberg (1998, 2003, 2006) has shown that over-explicit reference maintenance in L2 speech is also found in the use of gesture. Based on data from Swedish learners of French and French learners of Swedish, Gullberg (1998, 2003) reports that when learners over-use lexical NPs to mark referents in co-referential contexts, such reference is frequently accompanied by gestures, whereby each referent is associated with a gesture occurring in a different location in the gesture space. These gestural anaphoric linkages may seem to be motivated by a desire to clarify the referential ambiguity that may be caused by the overuse of NPs in speech. However, Gullberg’s later study (2006) reveals that these gestures remain even when listeners are not visible, which suggests that they may perform certain functions for the speaker.
Gullberg’s L2 studies are an extension of findings on gesture production in L1. Research on native speakers has shown that gestures are more likely to occur in association with unpredictable than predictable information. For instance, a number of studies show that the first mentions (i.e., introductions) of referents in narratives are more likely to be accompanied by gestures than subsequent mentions (Levy & Fowler, 2000; Levy & McNeill, 1992; McNeill, 1992). The rationale for this phenomenon is provided by McNeill (1992), who maintains that the occurrence of gesture can be predicted according to Givón’s principle of grammatical coding, which states: “The less predictable/accessible/continuous a
topic is, the more coding material is used to represent it in language” (Givón, 1985, p. 197). Drawing on Givón’s principle, McNeill (1992, p. 208) suggests that gesture as a form of coding is more likely to occur at a place where the information provided is unpredictable, inaccessible and/or discontinuous. According to this view, the production of gesture depends on the degree of accessibility of the information carried by a referent at a given time in the discourse.
If we now return to L2 gestures, Gullberg (2003) reports that as learners become capable of using pronouns in an appropriate manner, the observed dual over-explicitness ceases, suggesting that the production of gesture is related to the acquisition of the target-like use of referring expressions. Given that the over-use of lexical NPs in L2 speech is observed irrespective of the characteristics of the target language, we might ask whether over-explicit reference marking in speech and gesture by learners is a phenomenon related to the acquisition of complex pronominal systems or whether it is a more general phenomenon related to learning how to structure information in discourse in a target-like manner. One way to answer this question is to examine learners of a target language which does not have a complex pronominal system. Consequently, we will examine Dutch learners of Japanese, since Dutch has a pronominal system, whereas Japanese does not.
The present study addresses the following questions: Is there any difference in bimodal (i.e., oral and gestural) reference tracking between native speakers of Japanese and Dutch? Do Dutch learners of Japanese mark pre-introduced referents more overtly and frequently by lexical NPs and gestures in comparison to their native counterparts?
Characteristics of Dutch and Japanese

There are some essential differences between Dutch and Japanese which influence the way information about referents is structured in narratives. They pertain to the availability of articles and pronouns, and to the way zero-anaphora are used in narratives.
The use of articles is obligatory in Dutch. As the articles encode (in)definiteness, they allow speakers to distinguish whether the information carried by a particular referent is new or given (e.g., ‘a boy’ vs. ‘the boy’). Dutch also has a complex pronominal system, and Dutch pronouns encode number, gender and case. Thus, Dutch speakers are equipped with a relatively wide range of linguistic forms to distinguish the information status of referents. In contrast, Japanese has neither an article system nor authentic third-person pronouns (Kuno, 1973). Although forms such as kare (‘he’) and kanojo (‘she’) exist, they are rarely used in narratives as the equivalents of he or she (Clancy, 1980). Thus, Japanese basically
offers two referring forms to distinguish the information status of referents: lexical NPs and zero-anaphora.3 Because Japanese verbs do not encode number, the identity of an intended referent must be inferred from the preceding discourse when zero-marking is used. In Dutch, the use of zero-anaphora is restricted to finite coordinate clauses, as shown in the example below.

(3) De kikker die sprong uit z’n pot en verdween.
    ART frog he jump-PAST out his pot and disappear-PAST
    ‘The frog, he jumped out of his pot and Ø (= frog) disappeared’
In (3), the zero-marking occurs in a clause which is connected to the preceding clause by the conjunction en (‘and’). In contrast, such restrictions do not apply to the use of zero-anaphora in Japanese. Japanese speakers may use chains of zero-anaphora in which the same referent is sustained as subject/topic over a number of clauses. Furthermore, zero-anaphora may be used to re-introduce referents in Japanese as long as the identity of the intended referent can be inferred from prior discourse, as discussed in detail in the previous section. Although zero-anaphora can also be used for entities in object position, we focus only on entities in subject or topic position in the present study.
The present study

Data

Data for the present study consist of 15 L1 Japanese, 12 L1 Dutch and 15 L2 Japanese narratives. The same speakers provided the L1 Dutch and L2 Japanese narratives. All L2 speakers had the same number of hours of instruction and the same length of residence in the country where the target language is spoken. The proficiency of the learners was assessed via the Japanese Language Proficiency Test (Level 3) and an oral examination administered by their language instructor. The learners’ proficiency was established as low intermediate. The task used for data elicitation was the retelling of a story from a wordless picture book, ‘Frog, where are you?’ (Mayer, 1969). The story was chosen because it contains various animate characters involved in a number of activities, providing ample opportunities for narrators to identify different referents. The literature suggests that the choice of referring expressions is influenced by how important a role a particular referent assumes in a story (Chafe, 1994). Although various characteristic features may distinguish major from peripheral characters (McGann & Schwartz, 1988), ‘referential importance’ in the present study is measured by the number of appearances in the story and by whether the
Gesture and information structure in first and second language
first mention of the referent is likely to be accompanied by a proper name. According to these criteria, the story contains one main character (‘boy’), two secondary characters (‘frog’ and ‘dog’) and several peripheral characters.
Procedure

Participants were tested individually and their performance was video-recorded. They were given a printed copy of the story and asked to memorize the storyline as thoroughly as possible so that they could retell it to a third person who did not know the story. No time constraint was placed on memorizing the story. When participants decided that they were ready, they retold the story to a native listener. The subjects were not told that the focus of the study was on the production of gesture. Since the L1 Dutch and L2 Japanese narrative data were provided by the same speakers, the two data sets were collected ten months apart in order to reduce the effect of the language used in the first data collection.
Analytical frame

The following figure illustrates how reference to certain entities is to be distinguished in this paper. How referents are introduced into discourse will not be discussed (see Yoshioka, 2006).

(Decision tree: specific reference in the previous text? If no, the mention counts as an introduction; if yes, it counts as tracking, which divides into maintenance and re-introduction.)
Figure 1. Analytical framework adopted in the present study.
As the figure shows, we differentiate two kinds of reference tracking: maintenance and re-introduction. In the current analysis, a referent in subject/topic position counts as maintained if it is the same as the subject/topic of the immediately preceding clause, or if it was introduced somewhere in the immediately preceding clause. In (4), the referent in subject position in the second clause (‘he’) is maintained.
(4) The boy looked in the hole, but he did not find the frog there.
A referent is re-introduced if it has already been introduced in the preceding discourse and is the subject of the current clause but differs from the subject of the immediately
preceding clause. In (5), the referent (‘the boy’) is re-introduced into the narrative in the third clause.
(5) The boy kept a frog in a jar, but the frog escaped during the night. The next morning when the boy woke up…
For each clause, animate referent(s) in subject or topic position were coded according to the referential form used (lexical NP, pronoun, zero-anaphora). We followed Berman & Slobin (1994, p. 657) for the definition of ‘clause’ in this study: “By clause is meant a unit that contains a unified predicate.” The preceding discourse context was used to infer the intended identity of zero-anaphora. Restricting the analysis to animate referents in subject/topic position is a deliberate choice that allows the results of the present study to be compared with previous research findings. We have opted for ‘maintained’ and ‘re-introduced’ as operational terms following Gullberg (2003, 2006), although other terms have been used in the literature (see Hendriks, 2003; Muñoz, 1995).
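As a rough illustration of the coding scheme just described, the tracking categories can be sketched as a simple procedure over an ordered list of clause subjects. Note that this is a simplification of the actual coding (for instance, it does not treat a referent introduced anywhere in the immediately preceding clause as maintained); the function name and the toy data are our own, not part of the study.

```python
def classify_tracking(subjects):
    """Label each clause subject as 'introduction', 'maintained', or
    're-introduced'. Simplified: 'maintained' here means identical to
    the subject of the immediately preceding clause."""
    labels = []
    seen = set()   # referents already introduced in the discourse
    prev = None    # subject of the immediately preceding clause
    for ref in subjects:
        if ref not in seen:
            labels.append("introduction")
        elif ref == prev:
            labels.append("maintained")
        else:
            labels.append("re-introduced")
        seen.add(ref)
        prev = ref
    return labels
```

For the clause subjects of example (5), ["boy", "frog", "boy"], the sketch yields introduction, introduction, re-introduced, mirroring the coding of ‘the boy’ in the third clause.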
Coding of gesture

As for gesture coding, we coded any gesture that co-occurred with referring expressions either exactly or closely, including the adjacent word (cf. Levy & Fowler, 2000). In the Dutch data, the ‘adjacent word’ included the definite articles de and het. In Japanese, the ‘adjacent word’ included the subject marker ga or the topic marker wa. Figure 2 is an example of the gestural marking of referents identified in the data. Co-occurrence was determined by whether the gesture stroke, the post-stroke hold, or the combination of the stroke and post-stroke hold coincided with the articulation of referential expressions.

[otokonoko wa] ano ki ni notte nobotte
boy TOP INJ tree on ride climb
‘The boy rides, climbs up on a tree’

Figure 2. Gesture marking the mention of a referent.

This definition thus excludes any gestures that co-occur with verb phrases in clauses with zero-marked subjects/topics. Because of the difficulties associated with establishing the correspondence between gestures and zero-anaphora, only gestures accompanying lexical NPs and, in some rare cases, pronouns were identified and included in the analyses. It is important to note that we only examined whether a referent was marked by gesture in a ‘yes’ or ‘no’ manner; we did not count how many gestures accompanied a single mention of a referent. Thus, when a superimposed beat accompanied an iconic gesture, it was still counted as a single case of gestural marking of a referent. Furthermore, although small in number, there were cases where a gesture accompanying a newly introduced referent in the preceding utterance was held in the same position until reference to the same referent was made in the following clause (cf. Duncan, 1996).4 These were treated as cases of gestural marking of referents. Some deictic gestures, in the Japanese data in particular, varied with respect to the gesture phase that co-occurred with the mention of a referent. In some cases, the mention of a referent was accompanied by the stroke phase of a gesture; in others, the static phase following the stroke co-occurred with the mention of a referent. Given the purpose of the study, we did not differentiate these cases. For the same reason, we did not perform analyses of the forms of the gestures. After the gestures were identified, the ratio of gestures to the total number of references to entities was calculated. Around one-tenth of the data (200 clauses and 65 gestures) was re-analyzed by an independent coder, yielding inter-rater agreement of 92.5% for speech and 92% for gesture.
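Two quantities used in the analyses, the rate of gestural marking of referents and simple percent agreement between coders, are plain proportions. A minimal sketch of the arithmetic (the function names are our own, not from the study):

```python
def gesture_rate(gestured, total_mentions):
    """Proportion of referent mentions accompanied by a gesture."""
    return gestured / total_mentions

def percent_agreement(coder1, coder2):
    """Simple percent agreement between two coders' yes/no decisions."""
    assert len(coder1) == len(coder2)
    matches = sum(1 for a, b in zip(coder1, coder2) if a == b)
    return 100 * matches / len(coder1)
```

For instance, 26 gestures over 407 maintained referents give a rate of about 6.4%, and two coders agreeing on 3 of 4 decisions give 75% agreement.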
Results

The results are presented in two sections. First, the results for reference maintenance are presented, and second, those for reference re-introduction.
Reference maintenance

A total of 823 instances of maintained referents were found in the narratives: 245 in L1 Dutch, 407 in L1 Japanese and 171 in L2 Japanese. The referring expressions used to maintain and re-introduce referents are largely divided into lexical NPs and attenuated forms. Lexical NPs consist of bare nouns, nouns with demonstratives (e.g., sono kaeru ‘that frog’) or nouns with a definite article (e.g., het jongetje ‘the boy’). Attenuated forms consist of pronouns and zero-anaphora.
(Stacked-bar chart showing, for each group, the percentages of zero-anaphora (Ø), pronouns (PRO) and lexical NPs (NP).)
Figure 3. Distribution of referring expressions used to maintain referents in L1 Dutch (L1D), L1 Japanese (L1J) and L2 Japanese (L2J) narratives.
The distribution of the forms used in the three groups of narratives is provided in Figure 3. In L1 Dutch narratives, referents are maintained mostly by pronouns (72%); the use of zero-anaphora and lexical NPs is less frequent, at 15% and 13% respectively. In L1 Japanese narratives, the distribution of referring expressions is different. L1 Japanese speakers rarely use pronouns as referring expressions. Instead, zero-anaphora are frequently used (75%), in accordance with findings in the literature (e.g., Clancy, 1980; Kuno, 1973), and 24% of the maintained referents are denoted by lexical NPs. The Dutch learners of Japanese choose zero-anaphora most frequently to denote maintained referents (63%). The use of pronouns is minimal, suggesting that the learners know that pronouns are rarely used in Japanese narratives. 33% of the maintained referents in the L2 narratives are denoted by lexical NPs.

There were a total of 52 gestures that accompanied the mentions of maintained referents: 4 in L1 Dutch, 26 in L1 Japanese and 22 in L2 Japanese. Figure 4 shows the results for the three groups.

Figure 4. Frequency of gestures accompanying mentions of maintained referents in L1 Dutch (L1D), L1 Japanese (L1J) and L2 Japanese (L2J) narratives.

Results of two independent samples t-tests found a significant difference in the proportion of instances of maintained reference encoded by lexical NPs in L1 Dutch vs. L1 Japanese: t(25) = −2.909; p = 0.008 ((M .135; SD .13) vs. (M .256; SD .09)), and in the frequency of gestural marking of maintained referents between these two native groups: t(25) = −2.500; p = 0.019 ((M .014; SD .02) vs. (M .064; SD .068)). Turning to the learners, two independent samples t-tests found a significant difference in the use of lexical NPs between L1 and L2 Japanese: t(25) = −2.459, p = 0.020 ((M .253; SD .09) vs. (M .367; SD .16)). However, no difference was found in the frequency of gestural marking of maintained referents between these groups: t(28) = −1.177, p = 0.087 ((M .065; SD .07) vs. (M .134; SD .14)). Learners’ more frequent use of lexical NPs for maintaining referents, compared to their native counterparts, resembles other findings in the literature (e.g., Hendriks, 2003; Jung, 2004). However, this higher frequency of over-marking in speech is not matched in the relative use of gesture. These findings suggest that L1 Japanese speakers gesturally mark referents representing accessible information as frequently as L2 speakers do, which runs counter to previous findings in the literature (Levy & Fowler, 2000; Levy & McNeill, 1992; McNeill, 1992). In order to find a possible explanation for these unexpected findings, further analyses were performed with a specific focus on the discourse context in which the gestures co-occurred, and a comparison was made between L1 and L2 Japanese.
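The independent samples t statistics above compare per-speaker proportions between groups. Assuming a standard pooled-variance (Student's) formulation, the computation can be sketched as follows; this is a reconstruction of the textbook formula, not the authors' analysis code.

```python
from statistics import mean, variance

def independent_t(group_a, group_b):
    """Pooled-variance independent samples t statistic and degrees of
    freedom for two lists of per-speaker proportions."""
    na, nb = len(group_a), len(group_b)
    df = na + nb - 2
    # pooled sample variance across the two groups
    sp2 = ((na - 1) * variance(group_a) + (nb - 1) * variance(group_b)) / df
    t = (mean(group_a) - mean(group_b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5
    return t, df
```

With 12 Dutch and 15 Japanese speakers, df = 12 + 15 - 2 = 25, matching the t(25) values reported for the native-speaker comparisons.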
Discourse context where gestures occurred

In this post-hoc analysis, maintained referents referred to by lexical NPs and marked by gestures were coded according to the form of the co-referent in the preceding clause. Of interest were the conditions under which speakers gesturally marked explicit referential expressions (lexical NPs) despite strong topic continuity or accessibility. Two contexts were identified: the co-referent in the preceding clause was expressed either by a lexical NP or by zero-anaphora.5 Table 1 shows the conditions under which gestures marking maintained referents occurred in L1 and L2 Japanese narratives.

Table 1. Contexts where gestures accompanying maintained referents occur.

Form of co-referent    L1 Japanese      L2 Japanese
NP                     12/26 (47%)      17/20 (85%)
Ø                      14/26 (53%)       3/20 (15%)
total                  26 (100%)        20 (100%)

Table 1 shows that 47% of the maintained referents accompanied by gestures in L1 Japanese are co-referential with entities previously expressed as lexical NPs; 53% of the gestural marking occurs after the co-referent has been denoted by zero-anaphora. In contrast, in L2 Japanese, 85% of the gestures occur after clauses where co-referential entities are explicitly identified by lexical NPs; only 15% occur after the co-referent is represented by zero-anaphora. Due to the small numbers in each cell, no statistical analysis was performed on the data, so any interpretation of the results requires caution. At the very least, however, they suggest that around half of the instances of gestural marking of maintained referents in L1 Japanese occur where the identity of the intended referent is not clear from the preceding linguistic context. In contrast, L2 speakers tend to produce gestures accompanying maintained referents where the identity of the intended referent is already explicitly marked in the preceding linguistic context.
Reference re-introduction

There were a total of 763 instances of re-introduced referents in the data: 169 in L1 Dutch, 435 in L1 Japanese and 159 in L2 Japanese. The distribution of referring expressions used to denote re-introduced referents in the three groups is provided in Figure 5. In L1 Dutch narratives, referents are re-introduced mostly by lexical NPs (62%); pronouns are also used (36%), and the use of zero-anaphora for reference re-introduction is rare. Similar to the results in L1 Dutch, L1 Japanese speakers use lexical NPs to re-introduce referents (55%). However, they use zero-anaphora to re-introduce referents much more frequently than their Dutch counterparts (44%). In the L2 narratives, 86% of the referents are re-introduced by lexical NPs; the use of zero-anaphora is less frequent (13%).

(Stacked-bar chart showing, for each group, the percentages of zero-anaphora (Ø), pronouns (PRO) and lexical NPs (NP).)

Figure 5. Distribution of referring expressions used to re-introduce referents in L1 Dutch (L1D), L1 Japanese (L1J) and L2 Japanese (L2J) narratives.

It is worth noting that the relatively high frequency of zero-anaphora in L1 Japanese involved mostly re-introductions of the main and secondary characters (91%); only 9% occurred with the peripheral characters. The main characters appear to be deeply anchored in L1 Japanese narratives, and consequently their topicality seems to be less affected by a change of subject or by competing referents than in L1 Dutch narratives, allowing zero-anaphora to be used frequently in referent re-introduction. These results are in accordance with previous findings (Nakahama & Kurihara, 2007; Yanagimachi, 1997). There were a total of 116 gestures accompanying the mentions of re-introduced referents: 16 in L1 Dutch, 58 in L1 Japanese and 42 in L2 Japanese. The rates of gestural marking of re-introduced referents are provided in Figure 6. In L1 Dutch narratives, 9% of the re-introduced referents are accompanied by gesture; the corresponding figure for L1 Japanese is 13%. In contrast, in L2 Japanese, 30% of the re-introductions are accompanied by gesture.
Figure 6. Frequency of gestures accompanying mentions of re-introduced referents in L1 Dutch (L1D), L1 Japanese (L1J) and L2 Japanese (L2J) narratives.
Independent samples t-tests comparing L1 Dutch and L1 Japanese found no significant difference in the use of lexical NPs, t(25) = .851; p = 0.403 ((M .62; SD .170) vs. (M .539; SD .093)), or in the frequency of gestural marking of re-introduced referents: t(25) = −1.568; p = 0.130 ((M .09; SD .09) vs. (M .134; SD .093)). In contrast, independent samples t-tests found a significant difference in the use of lexical NPs between L1 Japanese and L2: t(25) = −8.211; p = 0.004 ((M .539; SD .093) vs. (M .863; SD .112)), and also in the frequency of gestural marking of re-introduced referents between these two groups: t(28) = −2.543, p = 0.017 ((M .134; SD .093) vs. (M .297; SD .229)). Thus, the analyses reveal no cross-linguistic variation in the native bimodal re-introduction of referents. In contrast, the learners in the present study used lexical NPs for re-introducing referents more frequently than native speakers of Japanese, and these NPs were frequently accompanied by gesture. Thus, learners were more overt and precise in marking re-introduced referents in both speech and gesture than L1 Japanese speakers.
Discussion

The present study examined whether adult L2 learners’ frequent production of gestures accompanying explicit marking of maintained referents, as reported in the literature, is related specifically to the acquisition of complex pronominal forms or reflects a more general L2 phenomenon, namely the acquisition of how to structure information in discourse in a target-like manner. For this purpose, the present study investigated narratives by Dutch learners of Japanese, a language without an active use of pronouns. The analyses were performed on both speech and gesture, distinguishing maintenance and re-introduction of referents, with L1 data as the baseline. The results of the cross-linguistic analyses of native speakers revealed a difference in the overall gesture rate, with more gestures produced in L1 Japanese than in L1 Dutch narratives. Furthermore, the native speakers of Dutch and Japanese in the present study showed variation in the frequency of gestural marking of maintained referents. Although few in number, there were cases in the L1 Japanese narratives where gestures marked referents representing accessible information, a finding not in agreement with what was reported previously (Levy & Fowler, 2000; Levy & McNeill, 1992; McNeill, 1992). Because of these unexpected results, a post-hoc analysis was performed which examined the conditions in which gestural marking of maintained referents occurred in L1 Japanese. The results showed that in about half of the cases, speakers gesturally marked maintained referents when the co-referent in the preceding clause had been denoted by zero-anaphora. The two L1 groups showed no difference with respect to the frequency of gestural marking of re-introduced referents. The results further revealed that learners were more overtly explicit in their use of lexical NPs in maintaining referents than L1 Japanese speakers. This is in line with previous findings.
However, this higher frequency of lexical NPs was not mirrored in the use of gesture. Learners were like their native counterparts with respect to the frequency of gestural marking of maintained referents, although, unlike L1 speakers, they marked maintained referents whose identity was already explicitly established in the preceding linguistic context. Learners were much more precise and overt than their native counterparts in marking re-introduced referents in both speech and gesture. How can we explain these results? Let us begin with the cross-linguistic variation. Dutch and Japanese differ in the availability of linguistic forms which can be used to distinguish the accessibility of the information (‘new’ vs. ‘given’) represented by a referent. With articles, various pronouns and zero-anaphora, Dutch speakers are equipped with a relatively wide range of linguistic means to mark and structure
information in discourse. In contrast, Japanese speakers, lacking articles and actively used pronouns, have fewer linguistic options for performing the same task than their Dutch counterparts. It is plausible that the high overall gesture frequency in L1 Japanese is designed to complement speech for the purpose of an effective construction of information in discourse where speech differentiates the information status of referents rather poorly. Japanese speakers basically use two referring forms: lexical NPs and zero-anaphora. Gestural marking, however, provides an extra level of distinction, i.e., lexical NP (±gesture) vs. zero-anaphora. Speakers may use gestures accompanying lexical NPs when extra highlighting of referents is deemed necessary. This may account for the gestural marking of maintained referents in L1 Japanese. If gestures were produced solely to complement speech where the language does not provide multiple linguistic means to differentiate the information status of referents, we would expect L1 and L2 speakers to gesturally mark referents in a similar manner, since speakers in both groups need to find a way to differentiate information with limited linguistic resources. However, as the analyses show, L2 learners do not look like L1 speakers in the way they gesturally mark re-introduced referents. It has been shown that while the form of anaphoric gestures may be motivated by the presence of an addressee, the presence of gesture itself may perform certain functions specific to the speaker (Gullberg, 2006). The higher frequency of gestural marking in L2 may reflect some speaker-oriented aspect of gesture use. We will discuss this point later. Let us now focus on the results for the L2 narratives. To reiterate, the present study was motivated by the question of whether the over-explicitness in two modalities is a phenomenon specifically related to the difficulties of acquiring pronouns.
The results show that with respect to reference maintenance, Dutch learners of Japanese were overtly explicit in speech in comparison to their native counterparts, resembling other findings in the literature (e.g., Hendriks, 2003; Jung, 2004). However, unlike what was reported in Gullberg (1998, 2003), this higher frequency of lexical NP use for reference maintenance was not matched in the relative use of gesture. We speculate that this somewhat puzzling result comes about because Japanese native speakers themselves frequently produced gestures to maintain reference, as discussed above, thus masking the learners’ apparent over-explicitness. In fact, the results show that Dutch learners of Japanese were overly explicit, if not over-explicit, in re-introducing referents in both modalities. Thus, the phenomenon of bimodal explicit reference to track referents is observed, although the results suggest that where the phenomenon occurs (maintenance or re-introduction) may vary depending on the characteristics of the target language.
One might wonder why acquiring nothing (i.e., zero-anaphora) is so difficult. What seems problematic for Dutch learners of Japanese to acquire is the L1-like narrative style which allows speakers to use this attenuated form. L1 Japanese speakers fix the viewpoint of their narratives on the main characters and keep them in subject/topic position using various viewpoint-related expressions (Nakahama & Kurihara, 2007). Acquiring both the forms and the functions of these expressions poses challenges to learners: learners’ organization of narratives differs from the target norm, with referents in subject/topic position switching frequently (Nakahama & Kurihara, 2007; Yanagimachi, 1997). Although no systematic analyses were performed on the use of viewpoint-related expressions in the present study, the high frequency of lexical NPs for the re-introduction of referents in L2 suggests that the learners in the present study have not fully acquired the target-like use of these expressions or the target-like organization of information. It is also plausible that the ambiguity which zero-anaphora may cause leads learners to avoid zero-marking, especially in referent re-introduction. As in L1 discourse, the crucial factor in deciding which referring expression to use is how L2 learners, as speakers, assess the knowledge state of their interlocutors. Faced with the extreme choices of zero-anaphora or lexical NPs, learners may choose the latter to ensure clarity. Of interest is why the Dutch learners in the present study produced gestures accompanying the mentions of re-introduced referents much more frequently in their L2 than in their L1, and more frequently than their native counterparts. What seems to be correlated with the gestural marking in L2 is not just the availability of linguistic means to mark the information status of referents but the high frequency of lexical NPs in tracking referents.
There are several possible explanations for this co-occurrence of lexical NPs and gesture observed in L2. First, the high frequency of gestural marking of lexical NPs may reflect learners’ tendency to be insecure and therefore hyper-clear when making reference to entities. It is plausible that gestures are used as an additional tool for confirmation and clarification, for both the listener and the speaker. Second, given that learners have knowledge of how information is mapped onto forms from their experience with their L1, they know that not all re-introduced referents are marked by lexical NPs. They may also know that protagonists and peripheral characters are marked differently by referring expressions. Thus, gestural marking on lexical NPs may constitute an attempt to distinguish referential importance (main vs. peripheral) among re-introduced entities. At the same time, gestures may perform some functions for the speaker while they structure information at the macro-level. It is beyond the scope of the present study to examine these possible functions of gestural marking of referents, but they warrant further examination.
Lastly, let us discuss the frequency of gesture observed in the present study in relation to proficiency. It is plausible that as their proficiency develops, learners acquire a target-like use of zero-anaphora, including for reference re-introduction. Gullberg (2003) found that as learners’ proficiency progressed, gestures marking the over-explicit use of lexical NPs diminished. Further studies are needed to see whether gestures accompanying re-introduced referents also gradually diminish as learners’ proficiency develops. Some limitations of the study should be acknowledged. First, our discussion is based on observations of retellings of a story with clear protagonists. It has been shown that the number of main characters affects gesture production in story-retellings (Furuyama, 2001). Thus, the generalizability of the findings may be limited until further investigations with various types of stories are conducted. Second, the present findings are based on bimodal reference marking by L2 speakers with one source (Dutch) and one target (Japanese) language. In order to fully establish the L2-specific aspects of the phenomena, it is necessary to study subjects with various source and target languages. As a final note, the results of the present exploratory work reveal the intricate nature of the relationship between gesture and the characteristics of the language spoken. The value of studying the role of gesture for understanding how speakers (both L1 and L2) structure meaning in discourse should thus not be underestimated.
Notes

* In this article, ‘second language’ is used as an umbrella term that refers to any language(s) that speakers use in addition to their mother tongue.

1. The abbreviations are: ACC = accusative case marker; ASP = aspect marker; DAT = dative case marker; GEN = genitive case marker; INJ = interjection; NOM = nominal marker; TE = te (conjunctive) form; TOP = topic marker.

2. In Japanese, two kinds of verbs are used to describe a situation where someone gives something or performs an action for someone: ageru/~te ageru and kureru/~te kureru. The former describes the exchange of an entity or action from the giver’s viewpoint, while the latter describes the same exchange from the receiver’s viewpoint.

3. Lexical NPs are sometimes used together with the demonstrative sono (‘that’). However, its use is not obligatory. In addition, as ga is multi-functional, the use of the subject marker ga and the topic marker wa does not clearly correspond to the manner in which (in)definite articles distinguish the information status of referents.

4. See Yoshioka (2006) for cross-linguistic analyses and analyses of L2 on the topic of gestural marking of newly introduced referents.
5. There were two cases in the L2 narratives where the co-referent in the preceding clauses was expressed by a pronoun kare (‘he’). Pronouns are not actively used in L1 Japanese, and this use of kare was most likely a case of transfer from L1 Dutch. Given the difficulties associated with their treatment, they were excluded from the analysis.
References

Ariel, Mira (1991). The function of accessibility in a theory of grammar. Journal of Pragmatics, 16, 443–463.
Berman, Ruth A. & Dan Slobin (1994). Relating events in narratives: A crosslinguistic developmental study. Hillsdale, NJ: Lawrence Erlbaum.
Carroll, Mary, Jorge Murcia-Serra, Marzena Watorek, & Alessandra Bendiscoli (2000). The relevance of information organization to second language acquisition studies: The descriptive discourse of advanced adult learners of German. Studies in Second Language Acquisition, 22 (3), 441–466.
Chafe, Wallace (1994). Discourse, consciousness, and time. Chicago: University of Chicago Press.
Clancy, Patricia (1980). Referential choice in English and Japanese. In Wallace Chafe (Ed.), The pear stories: Cognitive, cultural, and linguistic aspects of narrative production (pp. 127–202). Norwood, NJ: Ablex.
Cloitre, Marylene & Thomas G. Bever (1988). Linguistic anaphors, levels of representation, and discourse. Language and Cognitive Processes, 3 (4), 293–322.
Duncan, Susan D. (1996). Grammatical form and ‘thinking-for-speaking’ in Mandarin Chinese and English: An analysis based on speech-accompanying gestures. Unpublished PhD dissertation. University of Chicago.
Furuyama, Nobuhiro (2001). De-syntacticizing the theories of reference maintenance from the viewpoint of poetic function of language and gesture: A case of Japanese discourse. Unpublished PhD dissertation. University of Chicago.
Givón, Talmy (1985). Iconicity, isomorphism and non-arbitrary coding in syntax. In John Haiman (Ed.), Iconicity in syntax (pp. 187–219). Amsterdam: Benjamins.
Gullberg, Marianne (1998). Gesture as a communication strategy in second language discourse: A study of learners of French and Swedish. Lund: Lund University Press.
Gullberg, Marianne (2003). Gestures, referents, and anaphoric linkage in learner varieties.
In Christine Dimroth & Marianne Starren (Eds.), Information structure, linguistic structure and the dynamics of language acquisition. Amsterdam: Benjamins.
Gullberg, Marianne (2006). Handling discourse: Gestures, reference tracking, and communication strategies in early L2. Language Learning, 56 (1), 155–196.
Hendriks, Henriette (2003). Using nouns for reference maintenance: A seeming contradiction in L2 discourse. In Anna G. Ramat (Ed.), Typology and second language acquisition (pp. 291–326). Berlin: Mouton de Gruyter.
Jung, Euen Hyuk (2004). Topic and subject prominence in interlanguage development. Language Learning, 54 (4), 713–738.
Kuno, Susumu (1973). The structure of the Japanese language. Cambridge, MA: MIT Press.
Lambrecht, Knud (1994). Information structure and sentence form: Topic, focus, and the mental representation of discourse referents. Cambridge: Cambridge University Press.
Levy, Elena T. & Carol A. Fowler (2000). The role of gestures and other graded language forms in the grounding of reference in perception. In David McNeill (Ed.), Language and gesture (pp. 215–234). Cambridge: Cambridge University Press.
Levy, Elena T. & David McNeill (1992). Speech, gesture and discourse. Discourse Processes, 15 (3), 277–301.
Li, Charles N. & Sandra A. Thompson (1976). Subject and topic: A new typology of language. In Charles N. Li (Ed.), Subject and topic (pp. 457–489). New York: Academic Press.
Mayer, Mercer (1969). Frog, where are you? New York: Dial Press.
McGann, William & Arthur Schwartz (1988). Main character in children’s narratives. Linguistics, 26, 215–233.
McNeill, David (1992). Hand and mind. Chicago: University of Chicago Press.
Mizutani, Nobuko (1985). Nichiei hikaku hanashikotoba no bunpo [Comparison of discourse grammar between Japanese and English]. Tokyo: Kuroshio.
Muñoz, Carmen (1995). Markedness and the acquisition of referential forms: The case of zero anaphora. Studies in Second Language Acquisition, 17 (4), 517–527.
Nakahama, Yuko & Yuka Kurihara (2007). Viewpoint setting in L1 and L2 Japanese narratives. Studies in Language Sciences, 6, 170–194.
Polio, Charlene (1995). Acquiring nothing? The use of zero pronouns by nonnative speakers of Chinese and the implications for the acquisition of nominal reference. Studies in Second Language Acquisition, 17, 353–377.
Williams, Jessica (1989). Pronoun copies, pronominal anaphora and zero anaphora in second language production. In Susan Gass, Carolyn Madden, Dennis Preston, & Larry Selinker (Eds.), Variation in second language acquisition: Discourse and pragmatics (pp. 153–189). Clevedon: Multilingual Matters.
Yanagimachi, Tomoharu (1997). The acquisition of referential form use in L2 oral narrative discourse by adult English-speaking learners of Japanese. Unpublished PhD dissertation. University of Minnesota.
Yoshioka, Keiko (2006).
Manual introduction of animate referents in L2 narrative discourse. In Asako Yoshitomi, Tae Umino, & Masashi Negishi (Eds.), Readings in second language pedagogy and second language acquisition (pp. 179–199). Amsterdam: Benjamins.
Gesture viewpoint in Japanese and English
Cross-linguistic interactions between two languages in one speaker

Amanda Brown
Syracuse University and Max Planck Institute for Psycholinguistics, Nijmegen
Abundant evidence across languages, structures, proficiencies, and modalities shows that properties of first languages influence performance in second languages. This paper presents an alternative perspective on the interaction between established and emerging languages within second language speakers by arguing that an L2 can influence an L1, even at relatively low proficiency levels. Analyses of the gesture viewpoint employed in English and Japanese descriptions of motion events revealed systematic between-language and within-language differences. Monolingual Japanese speakers used significantly more character viewpoint than monolingual English speakers, who predominantly employed observer viewpoint. In their L1 and their L2, however, native Japanese speakers with intermediate knowledge of English patterned more like the monolingual English speakers than their monolingual Japanese counterparts. After controlling for effects of cultural exposure, these results offer valuable insights into both the nature of cross-linguistic interactions within individuals and potential factors underlying gesture viewpoint.

Keywords: bi-directional cross-linguistic influence, gesture viewpoint, motion events, second language acquisition, Japanese
The existence of interactions between languages within the multilingual mind is relatively uncontroversial. With abundant evidence across language pairings, across linguistic domains, and across proficiency levels, we know that properties of a first language (L1) influence performance in a second language (L2). Moreover, very recent research shows how effects of an L1 can be observed across modalities. Yet after substantial research, our understanding of the L1-L2 relationship is still largely one-sided.
This paper presents an alternative perspective on the relationship between established and emerging languages within the mind of a second language learner by showing that not only does a developed L1 influence a developing L2, but that the presence of the developing L2 may exert its own influence on the L1, even at relatively low proficiency levels. Controlling for effects of culture, we investigate the viewpoint adopted in gesture production among monolingual Japanese and monolingual English speakers as compared to native Japanese speakers with knowledge of English in their L1 and L2. Results offer valuable insights into both the nature of cross-linguistic interactions within individuals and potential factors underlying gesture viewpoint.
Background

Perspectives on cross-linguistic interactions

In one guise or another, “cross-linguistic influence”, defined as “the interplay between earlier and later acquired languages” (Kellerman & Sharwood Smith, 1986, p. 1), has benefited from a long research tradition in the fields of second language acquisition and bilingualism, as well as other areas of linguistics such as language contact (see Odlin, 1989, for a historical overview). However, the phenomenon has typically been synonymous with the unidirectional “transfer” of features from a first language to a second language. One of the most obvious manifestations of this phenomenon is foreign accent, but effects of the L1 have been discovered in almost every aspect of L2 performance (see overviews in Gass & Selinker, 1992; Kellerman & Sharwood Smith, 1986; Odlin, 1989, 2003).

Yet a crucial component in the definition of cross-linguistic influence is the word “interplay”, which assumes that relationships between first and second languages are bi-directional and that the systems interact. While this is fully acknowledged in the bilingualism literature and recent studies in second language acquisition have begun to investigate the effects of an L2 on the L1, sometimes called “borrowing transfer” (Odlin, 1989), many gaps in our knowledge remain. In the few studies that have found a variety of linguistic effects of a second language on a first language in adult second language learners (e.g., Cook, 2003; Dussias & Sagarra, 2007; Pavlenko & Jarvis, 2002, inter alia), the populations investigated have typically been functional bilinguals, i.e., those with very advanced functional proficiency in the second language. Furthermore, as much of the research focuses on errors in the L1 and participants are frequently resident in the second language community, effects of the L2 are often interpreted as contributing to loss of the L1. We do not know, therefore, whether the presence of an L2
genuinely still in development can influence an L1, and if so, whether errors are uniquely part of the process.
Cross-linguistic interactions in co-speech gesture

Given the tight semantic and temporal co-ordination between speech and co-speech gesture (cf. Kendon, 1993; McNeill, 1992; Schegloff, 1984), it is not surprising that signs of cross-linguistic influence have surfaced in the manual modality. Several studies have found evidence of a “manual accent” (Kellerman & van Hoof, 2003) in L2 production. These include studies of gesture placement within the L2 utterance (Kellerman & van Hoof, 2003; Negueruela, Lantolf, Jordan, & Gelabert, 2004; Stam, 2006) and prominent marking of specific concepts in L2 gestures such as movement over location (Yoshioka & Kellerman, 2006) and manner of motion (Brown & Gullberg, 2008). Moreover, in some cases, gesture analyses uniquely reveal L1 conceptualizations masked in otherwise proficient L2 speech (Gullberg, forthcoming).

In contrast to the handful of studies of L1 effects on L2 gesture, almost no research exists concerning the reverse direction of influence, i.e., whether effects of an L2 can be observed in L1 gesture. Pika, Nicoladis, and Marentette (2006) found that, at least for functional bilinguals, the frequency of gesturing in the L2 community may affect the frequency of gesturing in L1 production, seemingly an effect of cultural exposure. However, there is some evidence to suggest that even with lower proficiency in a second language, the distribution of semantic information across modalities in the L1, for example, depiction of manner of motion in L1 speech and/or gesture, may exhibit properties of the L2 (Brown & Gullberg, 2008).

As far as our understanding of cross-linguistic interactions between languages in the mind of a second language learner goes, novel methodologies such as gesture analyses have much to contribute. All that remains is to outline a suitable domain in which this methodological tool may be exploited. In doing so, we make use of bilingual data to address current issues in gesture studies.
Gesture viewpoint in descriptions of motion

The domain of motion has seen an enormous amount of cross-linguistic work over the last two decades. Cross-linguistic differences have been discovered in the way languages map semantic elements such as manner and path of motion onto morphosyntactic devices (Talmy, 1985), in the frequency and specificity with which these semantic elements are encoded in spoken discourse (Slobin, 1996, 2004), and in the composition of co-speech gestures depicting these semantic elements (Kita & Özyürek, 2003; McNeill, 2001; Özyürek, Kita, Allen, Furman, & Brown, 2005).
Within this domain, the issue of gesture viewpoint, though as yet under-investigated, offers some potential for addressing whether and how languages interact in second language acquisition. Gesture viewpoint describes the perspective from which a gesture is deployed. According to McNeill (1992, 2005), gestures typically display either character viewpoint (C-VPT) or observer viewpoint (O-VPT), although these categories are not mutually exclusive. In C-VPT, the event is depicted in first person, as it was experienced by the protagonist, and the hands represent the hands of the protagonist. In O-VPT, the event is depicted in third person, as it was observed by the speaker, and the hands represent whole entities. McNeill provides two examples of a climbing gesture illustrating the difference: one involving the speaker enacting the climbing motion by adopting a clutched hand-shape and moving his/her hands up and down (C-VPT), and the other a simple upward movement depicting the character’s ascension (O-VPT) (McNeill, 1992, p. 119).

McNeill notes that C-VPT, which minimizes the distance between the narrator and the event, is more likely to occur with transitive verbs and single clause sentences, which also serve to minimize the narrator-event distance. It is also most common in depictions of central events in the story line. O-VPT, on the other hand, occurs more with intransitive or stative verbs as well as multi-clause sentences, all devices that introduce distance between the narrator and the story line. O-VPT, then, can be found more often in depictions of events peripheral to the story line. These linguistic factors are predicted to be universal across languages; however, recent work suggests that there may also be additional cross-linguistic differences in use of gesture viewpoint.
In a cross-linguistic study of motion event descriptions, Kita and Özyürek (2003) noted that while O-VPT (in their terminology “event-external perspective”) was the most common perspective, Turkish speakers produced twice as many C-VPT (“event-internal perspective”) gestures as English and Japanese speakers. Furthermore, cross-linguistic differences in viewpoint do not seem to be restricted to gesture. In a comparison of German Sign Language and Turkish Sign Language (Perniss & Özyürek, 2008), although C-VPT was the preferred option, Turkish signers used O-VPT with handling classifiers. In sum, perspective taking in the manual modality, be it gesture or sign, seems to vary cross-linguistically.

It is these cross-linguistic differences that constitute an ideal environment in which to investigate cross-linguistic interactions in second language acquisition. Furthermore, we do not yet have a clear understanding of what motivates particular gesture viewpoints within and across languages, for example, the role of culture versus linguistics. Therefore, a comparison of monolingual and bilingual data, while holding the effects of one variable constant, may shed some light on factors underlying the perspective taken in gesture production.
This study

The aim of the present study is to present an alternative perspective on the relationship between languages in the multilingual mind. In addition to the many known effects of the L1 on the L2 in second language acquisition, this paper examines whether an established L1 can also be influenced by an L2 still in development. On the assumption that gesture is fully part of the linguistic system, gesture analysis is proposed here as a novel methodological window on such cross-linguistic interactions.

Interactions between an L1 and an L2 are investigated in the realm of gesture viewpoint in motion event descriptions. Although factors motivating gesture viewpoint are still unclear, there is evidence to suggest cross-linguistic differences. To confirm this difference, gesture patterns are observed in two typologically different languages, Japanese and English, in order to establish a monolingual baseline. To investigate the issue of cross-linguistic interactions, monolingual baseline results are compared to L1 and L2 production from native Japanese speakers with intermediate knowledge of English as a second language. After controlling for effects of cultural exposure, differences between monolingual and bilingual gesture production are discussed with respect to the nature of bi-directional cross-linguistic influence and to the source of gesture viewpoint.
Methodology

Participants

A total of fifty adults aged between 18 and 48 participated in this study, distributed across four groups: monolingual Japanese speakers resident in Japan (11 speakers), monolingual English speakers resident in the USA (11 speakers), and native Japanese speakers with knowledge of English resident in Japan (15 speakers) or the USA (13 speakers). Biographical information and information on general language usage was gathered using a detailed questionnaire developed by the Multilingualism Project at the Max Planck Institute for Psycholinguistics (Gullberg & Indefrey, 2003). The native Japanese speakers with knowledge of English declared that they were engaged in active use of their L2, whereas the functionally monolingual speakers of each language stated that they had had minimal exposure to an L2, they were not engaged in active study of an L2, and they did not use an L2 in their everyday lives.

The choice of two learner groups living in different language environments was designed to test for the impact of culture on gesture viewpoint. The second language speakers in Japan had never lived in an English-speaking country, while
those in the USA had been residents for between one and two years. Effects seen only in the gestures of second language speakers in the USA, then, would suggest an influence of culture, whereas comparable gesture patterns between both groups would render culture less likely as a factor underlying cross-linguistic interactions and gesture viewpoint.

Knowledge of English as a second language was measured in three ways. All Japanese-speaking participants, including the functional monolinguals, rated their own English proficiency in speaking, listening, writing, reading, grammar, and pronunciation. Learner groups also completed the first grammar section of the Oxford Placement Test (Allan, 1992), and their oral proficiency was evaluated using the University of Cambridge Local Examinations Syndicate (UCLES) oral testing criteria for the First Certificate in English (FCE).1 Grammar and vocabulary, discourse management, pronunciation, and global skills were scored by consensus judgment of two Cambridge-certified examiners. Both the Oxford and the FCE proficiency measures descriptively placed the learners within intermediate range. Second language speakers resident in Japan versus the USA did not significantly differ in proficiency as measured by the Oxford Placement Test, t (25) = .795, p = .434, and only marginally differed in proficiency as measured by the Cambridge FCE criteria, t (26) = 1.982, p = .058, with those in Japan scoring slightly higher than those in the USA. Learner groups were thus matched on formal proficiency in English. Participants’ biographical and language usage data as well as English proficiency data are summarized in Table 1.

Table 1. Summary of biographical and language usage/proficiency data

Language background         | Monolingual Japanese (n = 16) | Learners in Japan (n = 15) | Learners in USA (n = 13) | Monolingual English (n = 13)
Mean AoE:a English          | 12.3 (range 7–14)             | 11.9 (range 9–13)          | 12.8 (range 12–14)       | Birth
Mean usage:b English        | NA                            | 3 hrs (range .5–8.5)       | 6 hrs (range 1–12)       | NA
Mean self-rating:c English  | 1.35 (range 1–2.5)            | 2.97 (range 2–4.17)        | 3.27 (range 1.8–4.3)     | NA
Mean Oxford Score           | NA                            | 78% (range 60–88%)         | 75% (range 58–85%)       | NA
Mean FCEd Score             | NA                            | 4.27 / 5 (range 2–5)       | 3.69 / 5 (range 2.3–5)   | NA

a Age of first exposure; b Hours of usage per day; c A composite score of individual skill scores; d Cambridge First Certificate in English
Stimuli

Data were obtained through a narrative retelling task. Short narrative descriptions were elicited based on the six-minute, animated Sylvester and Tweety Bird cartoon, “Canary Row” (Freleng, 1950), commonly used in gesture research on motion events. The cartoon was divided into scenes following McNeill (1992), and two different orders of scenes were systematically varied in the presentation of the stimulus across all groups. Each scene contains numerous motion events, and narrative description of the scenes typically elicits abundant gestures (cf. Kita & Özyürek, 2003; McNeill, 1992, 2001, inter alia). From the stimulus material, four motion events consistently described by participants were selected for coding and analysis: (1) Sylvester climbs through a pipe, (2) Sylvester rolls down a hill, (3) Sylvester clambers up a pipe, and (4) Sylvester swings across the street on a rope.
Procedure

All participants narrated in their L1. The native Japanese speakers who knew English also produced narratives in their L2. Note, however, that the language order in which the second language speakers gave descriptions was counter-balanced across participants with a minimum of three days between appointments. This minimized the likelihood of both L1 and L2 being fully active at the same time, i.e., controlling for the effects of “language mode” (Grosjean, 1998).

Depending on the language of the experiment, participants were tested individually by either a native English- or native Japanese-speaking confederate. The participant and experimenter first engaged in a brief warm-up, consisting of small talk in the target language, in order to relax participants, increasing the likelihood of gesturing, and to put participants in “monolingual mode”. Next, the experimenter told participants that they would be watching a series of animated scenes from a cartoon on a computer screen and should retell what they had seen to the experimenter in as much detail as they could remember. The experimenter was trained to appear fully engaged in the participants’ narratives, but to avoid asking questions or prompting answers.
Data treatment

All narratives were first transcribed from digital video by a native speaker of the relevant language. Then, narratives were divided into clauses, defined as “any unit that contains a unified predicate … (expressing) a single situation (activity, event, state)”, following procedures laid out in Berman and Slobin (1994, p. 660). Next, clauses describing the four target motion events were identified.
Gesture segmentation and coding

Representational gesture strokes (iconic, metaphoric, and deictic) (Kita, 2000), hereafter simply gestures, which depicted target motion events and which co-occurred with clauses containing target motion event speech, were identified and coded for gesture viewpoint. Elan (Wittenburg, Brugman, Russel, Klassmann, & Sloetjes, 2006), a digital video tagging software program developed at the Max Planck Institute for Psycholinguistics, was used for gesture coding.2 Elan enables a frame-by-frame analysis (at 40 ms intervals) of movement as well as sound.

Gestures were coded for viewpoint, i.e., depiction of the protagonist’s movement as experienced (Character Viewpoint) or as observed (Observer Viewpoint). Broadly in line with Gullberg (1998), viewpoint was operationalized along three dimensions: direction, hand-shape, and handedness. With respect to direction, gestures on a sagittal axis, i.e., originating at and moving away from the body, depicted movement as experienced, while gestures on a lateral axis, i.e., originating to the right or left and moving across the body, depicted movement as observed. With respect to hand-shape, gesture forms enacting the protagonist’s movement, i.e., where the hands resembled the hands of the protagonist, depicted movement as experienced, while gesture forms with a non-enactment hand-shape, i.e., where the hands represented objects, depicted movement as observed. Finally, gestures involving more of the body, defined here as both hands, depicted movement as experienced, while gestures involving only one articulator, defined here as one hand, depicted movement as observed. A mimetic combination, then, of sagittal direction, with an enactment hand-shape employing both hands was considered Character Viewpoint. In contrast, a combination of lateral direction, with no enactment hand-shape, employing only one hand was considered Observer Viewpoint. Analyses of gesture consisted of identifying the frequency of C-VPT and O-VPT.3

Figures 1 and 2 show stills of typical motion event gestures produced in descriptions of the swinging across event. Along the dimensions of viewpoint, the gesture in Figure 1 displays sagittal direction, enactment hand-shape, and bi-manual handedness — a C-VPT gesture. The gesture in Figure 2, on the other hand, displays lateral direction, non-enactment hand-shape, and one-handed handedness — an O-VPT gesture.

Figure 1. Stills from a C-VPT gesture (sagittal, enactment, and bi-manual) in a Japanese description of the swinging across event.

Figure 2. Stills from an O-VPT gesture (lateral, non-enactment, and one-handed) in an English description of the swinging across event.
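The three coding dimensions used for viewpoint (direction, hand-shape, and handedness) amount to a simple decision rule. The Python sketch below illustrates it; the function name, the string labels, and the treatment of non-canonical combinations as "mixed" are illustrative assumptions, not the study's exact coding protocol (and for some events, such as the clambering up event, the direction dimension was not applied at all).

```python
def code_viewpoint(direction, hand_shape, handedness):
    """Classify a gesture stroke's viewpoint from three coding dimensions.

    direction:  'sagittal' (away from the body) or 'lateral' (across the body)
    hand_shape: 'enactment' (hands depict the protagonist's hands) or
                'non-enactment' (hands depict whole entities)
    handedness: 'both' or 'one'
    """
    profile = (direction, hand_shape, handedness)
    if profile == ('sagittal', 'enactment', 'both'):
        return 'C-VPT'  # character viewpoint: movement as experienced
    if profile == ('lateral', 'non-enactment', 'one'):
        return 'O-VPT'  # observer viewpoint: movement as observed
    return 'mixed'      # combinations outside the two canonical profiles

# The two strokes from the swinging across event (cf. Figures 1 and 2)
print(code_viewpoint('sagittal', 'enactment', 'both'))    # C-VPT
print(code_viewpoint('lateral', 'non-enactment', 'one'))  # O-VPT
```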
Reliability of speech and gesture data coding

To establish reliability of data coding, 15% of the entire data set was segmented and coded by an independent second coder. 88% agreement was reached on identification of a relevant representational gesture depicting a target motion event, 80% agreement on identification of the stroke, and of the strokes that both coders identified as relevant, there was 94% agreement on viewpoint code. In cases of disagreement, the coding of the initial coder was adopted.
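The agreement figures reported above are simple percent agreement between two coders. A minimal sketch of that computation follows; the function name and the example codes are hypothetical, not drawn from the study's data.

```python
def percent_agreement(codes_a, codes_b):
    """Percentage of items to which two coders assigned identical codes."""
    if len(codes_a) != len(codes_b):
        raise ValueError("both coders must rate the same items")
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100 * matches / len(codes_a)

# Hypothetical viewpoint codes for four strokes
coder1 = ['C-VPT', 'O-VPT', 'O-VPT', 'C-VPT']
coder2 = ['C-VPT', 'O-VPT', 'C-VPT', 'C-VPT']
print(percent_agreement(coder1, coder2))  # 75.0
```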
Results

Results are presented in three parts. First, gesture viewpoint among L1 groups is compared. As not all participants gestured in their L1, only a subset is included in gesture analyses (sample numbers are indicated in each figure). Second, gesture viewpoint in L2 and monolingual groups is compared. Finally, gesture viewpoint within the same participants in L1 and L2 is compared. Before these analyses, the native Japanese speakers with knowledge of English resident in Japan were
compared to their counterparts resident in the USA. As no differences were found between them, the data were collapsed to form a single group of second language speakers. Non-parametric statistical tests were employed throughout, specifically Kruskal-Wallis for multiple group analyses and Mann-Whitney for between group analyses.
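For the between-group comparisons, the rank-based Mann-Whitney U statistic can be computed directly from the raw scores. The sketch below shows only the core arithmetic (no tie correction or p-value; in practice a statistics package such as scipy.stats.mannwhitneyu would be used), and the per-speaker proportions in the example are hypothetical.

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x against sample y.

    Counts, over all cross-sample pairs, how often a value in x exceeds
    a value in y, with ties counting one half. Illustration only.
    """
    return sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)

# Hypothetical per-speaker C-VPT proportions in two groups
group_j = [0.6, 0.7, 0.8]
group_e = [0.1, 0.2, 0.3]
print(mann_whitney_u(group_j, group_e))  # 9.0
```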
Gesture viewpoint in L1

The first analysis concerns gesture viewpoint in monolingual and bilingual L1. Examples from the monolingual data were given in Figures 1 and 2 above. These showed a C-VPT gesture from a monolingual Japanese speaker (J) and an O-VPT gesture from a monolingual English speaker (E). However, alternative viewpoints were employed by speakers in both monolingual groups, as can be seen from the following figures. Figure 3 shows the same monolingual Japanese speaker as in Figure 1, this time producing an O-VPT gesture, while Figure 4 shows the same monolingual English speaker as in Figure 2, this time producing a C-VPT gesture. Similarly, in their L1, native Japanese speakers with knowledge of English (J (E)) produced gestures of both types. Figures 5 and 6 show the same speaker producing a C-VPT and O-VPT gesture in an L1 description of the swinging across event.

Figure 3. Stills from an O-VPT gesture (non-enactment and one-handed) in a monolingual Japanese (J) description of the clambering up event (the dimension of direction was not applied to coding of gestures for this event).

Figure 4. Stills from a C-VPT gesture (enactment and bi-manual) in a monolingual English (E) description of the clambering up event (the dimension of direction was not applied to coding of gestures for this event).

Figure 5. Stills from a C-VPT gesture (sagittal, enactment and bi-manual) in a learner L1 (J (E)) description of the swinging across event.

Figure 6. Stills from an O-VPT gesture (lateral, non-enactment and one-handed) in a learner L1 (J (E)) description of the swinging across event.
[Figure: bar chart of mean proportion of full C-VPT gestures (y-axis “Prop. Full C-VPT Gesture”, 0.0–1.0); N = 11 (J), 21 (J (E)), 11 (E).]
Figure 7. Mean proportion of C-VPT gestures out of all motion gestures in L1 groups: J (monolingual Japanese speakers), J (E) (native Japanese speakers with knowledge of English), and E (monolingual English speakers).
A quantitative analysis of all speakers, however, revealed differing preferences for gesture viewpoint. As preliminary analyses showed no significant difference between the L1 of the native Japanese speakers with knowledge of English resident in Japan versus the USA (z = −.323, p = .747), the data were collapsed to form one group. Figure 7, then, shows the mean proportion of C-VPT gestures out of the total number of motion event gestures in each language group.4

There was a significant difference between the groups in their tendency to employ C-VPT in motion event gestures (χ2 (2, N = 43) = 9.294, p = .01). Specifically, monolingual Japanese speakers produced significantly more C-VPT gestures than both monolingual English speakers (z = −2.485, p = .013) and native Japanese speakers with knowledge of English in their L1 (z = −2.663, p = .008), who did not significantly differ from each other (z = −.609, p = .542). Note that, although the data were rather variable, there was no evidence of a bimodal distribution in any group; hence, means did not conceal underlyingly different patterns. In other words, speakers in each group behaved in comparable ways, and it was not the case, for example, that some monolingual Japanese speakers always produced C-VPT and others never did.

In sum, L1 results reveal between- and within-language differences. First, there is a clear baseline difference in gesture viewpoint such that monolingual Japanese
speakers used many more C-VPT gestures than monolingual English speakers did. More striking, however, is that native Japanese speakers with knowledge of English patterned more similarly to monolingual English speakers in their L1, Japanese, than to their monolingual Japanese counterparts, that is with predominant use of O-VPT. Crucially, non-monolingual L1 patterns were not affected by the contrast in residence between Japan and the USA.
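The dependent measure behind these group comparisons, the per-speaker proportion of C-VPT gestures out of all motion event gestures (as plotted in Figure 7), reduces to a simple ratio. A sketch with hypothetical data (the function name and the coded stroke list are illustrative):

```python
def c_vpt_proportion(viewpoints):
    """Proportion of C-VPT strokes out of all viewpointed motion gestures."""
    motion = [v for v in viewpoints if v in ('C-VPT', 'O-VPT')]
    return sum(v == 'C-VPT' for v in motion) / len(motion)

# Hypothetical coded strokes for one speaker
speaker = ['C-VPT', 'O-VPT', 'C-VPT', 'C-VPT']
print(c_vpt_proportion(speaker))  # 0.75
```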
Figure 8. Stills from a C-VPT gesture (sagittal, enactment and bi-manual) in a learner L2 (E (J)) description of the swinging across event.
Figure 9. Stills from an O-VPT gesture (lateral, non-enactment and one-handed) in a learner L2 (E (J)) description of the swinging across event.
[Figure: bar chart of mean proportion of full character perspective gestures (y-axis “Prop. Full Character Perspective Gesture”, 0.0–1.0); N = 11 (J), 28 (E (J)), 11 (E).]
Figure 10. Mean proportion of C-VPT gestures out of all motion gestures in L2 English and monolingual groups: J (monolingual Japanese speakers), E (J) (native Japanese speakers with knowledge of English), and E (monolingual English speakers).
Gesture viewpoint in L2 and monolingual groups

The second analysis concerns gesture viewpoint in monolingual L1 and learner L2. As in the L1, native Japanese speakers with knowledge of English (E (J)) employed both C-VPT and O-VPT in their L2 gestures, as shown in Figures 8 and 9. However, a quantitative analysis of all speakers again revealed viewpoint preferences. Figure 10 shows the mean proportion of C-VPT gestures out of the total number of motion event gestures in each language group. Again, there was no significant difference between the L2 of the second language speakers resident in Japan versus the USA (z = −.936, p = .349); therefore, the data were collapsed to form one group. There was a significant difference between the groups in their tendency to employ C-VPT in motion event gestures (χ2 (2, N = 50) = 8.185, p = .017). Specifically, monolingual Japanese speakers produced significantly more C-VPT gestures than both monolingual English speakers (z = −2.485, p = .013) and native Japanese speakers with knowledge of English in their L2 (z = −2.299, p = .022), who did not significantly differ from each other (z = −1.206, p = .228).

In sum, rather surprisingly, L2 results showed only between-language differences. Despite merely an intermediate level of proficiency in L2 English, native Japanese speakers with knowledge of English in their L2, English, looked remarkably target-like, patterning more similarly to monolingual English speakers than to monolingual Japanese speakers, that is with predominant use of O-VPT.

Within-subject comparison of gesture viewpoint in L1 and L2

The final analysis concerns the relationship between gesture viewpoint in L1 and L2 production within the same individuals. The following figures show the same speaker producing both gesture types in his L1, Japanese, and L2, English.

Figure 11. Stills from an O-VPT gesture (non-enactment and one-handed) and a C-VPT gesture (enactment and bi-manual) in L1 Japanese descriptions of the clambering up and climbing through events (the dimension of direction was not applied to coding of gestures for these events).

Figure 12. Stills from an O-VPT gesture (non-enactment and one-handed) and a C-VPT gesture (enactment and bi-manual) in L2 English descriptions of the clambering up and climbing through events (the dimension of direction was not applied to coding of gestures for these events).
A Wilcoxon repeated-measures analysis showed no significant within-subject difference in L1 and L2 production (z = −.848, p = .396). In other words, despite the existence of both gesture viewpoints within the learner data, native Japanese speakers with knowledge of English displayed the same preferences for O-VPT in their L1, Japanese, and L2, English.
Discussion

The aim of this study was to investigate interactions between first and second languages, namely effects of a first language on a developing second language and effects of relatively low proficiency in a second language on an ostensibly mature first language, in the domain of gesture viewpoint. The variable of residence was manipulated in order to enable preliminary testing of the nature of cross-linguistic interactions as well as the factors underlying gesture viewpoint with respect to effects of culture.

Analyses of the gesture viewpoint adopted in motion event descriptions by monolingual Japanese speakers, monolingual English speakers and native Japanese speakers with knowledge of English revealed systematic between-language and within-language differences. In line with previous findings (cf. Kita & Özyürek, 2003), monolingual English speakers predominantly used observer viewpoint. These gestures were lateral to the body, produced with a non-enactment hand-shape, and only employed one hand. Monolingual Japanese speakers, in contrast to previous findings (cf. Kita & Özyürek, 2003), used a significant number of character viewpoint gestures that were bi-manual with sagittal direction and enactment hand-shape. Most striking was the observation that monolingual Japanese speakers significantly differed from native Japanese speakers with knowledge of English in use of gesture viewpoint. In both their L1 and their L2, Japanese speakers with knowledge of English more closely resembled monolingual English speakers.

These results suggest the existence of cross-linguistic interactions between languages within the minds of second language learners. Remarkably, however, this interaction was more evident in L1 Japanese production than in L2 English production. While robust evidence typically supports effects of the L1 on the L2 in numerous domains, these effects were not apparent in gesture viewpoint.
Instead, given the similarities between monolingual English speakers and native Japanese speakers with knowledge of English in their L1 and L2, there appears to be an effect of the L2 on the L1 in this particular domain. With respect to the nature of the cross-linguistic interaction observed here, one possibility is an effect of cultural knowledge such as that seen in Pika et al. (2006). Under this account, we would have expected effects only in the group of
second language speakers resident in the second language community, i.e., those in the USA. However, this was not observed. Instead, the native Japanese speakers with knowledge of English resident in the USA patterned similarly to those resident in Japan. Although the second language speakers living in Japan did have some exposure to American culture through television and the like, that exposure was quite different in quantity and quality from the immersion experienced by those living in the culture. Of course, residence in a country alone does not ensure immersion in the culture, but according to self-reported usage of English, the participants living in the USA were at least speaking English, and not Japanese, for a large part of their day. Thus, it is tentatively proposed that the differences in gesture viewpoint between monolingual Japanese speakers and native Japanese speakers with knowledge of English are not the result of cultural exposure.

An alternative possibility warranting further investigation is that cross-linguistic interactions in the domain of gesture viewpoint arise from parallel cross-linguistic interactions in underlying linguistic domains such as semantics or syntax, a process commonly known in the acquisition literature as "cross-linguistic influence". Here, "cross-linguistic influence" would be distinguished from "cross-cultural influence", a distinction that accommodates various existing empirical findings in the gesture literature: for example, preferential marking of movement over location in the gestures of second language speakers as a result of typological differences in the mapping of semantics onto morphosyntactic resources (cf. Yoshioka & Kellerman, 2006), versus distinctive gesture frequencies in second language speakers as a result of cultural differences in rates of gesture production (cf. Pika et al., 2006).
As no analyses of the relationship between linguistic variables in speech and viewpoint in gesture were undertaken here, the precise nature of such cross-linguistic influence on gesture viewpoint, if it exists, remains to be identified. Previous claims about purportedly universal linguistic relationships between transitivity, clause complexity, event saliency, and gesture viewpoint (McNeill, 1992) may account for the variation observed within speakers; however, they cannot easily account for cross-linguistic differences, since, for example, an event salient for English speakers should also be salient for Japanese speakers. Alternative explanations may relate to specific cross-linguistic differences between English and Japanese in the expression of motion, for example, the frequent use of mimetic (onomatopoetic) constructions in Japanese but not in English, or to more general system-wide differences between the languages, for example, frequent pragmatically licensed argument omission in Japanese but not in English.5

Leaving identification of the causal factors motivating gesture viewpoint to future research, the current findings, such as they are, have several implications. From the perspective of second language acquisition, these results
suggest that the relationship between an established first language and an emerging second language is bidirectional: not only does an L1 influence an L2, but an L2 can also influence an L1. Moreover, these influences may be considered a normal part of the process of acquiring a second language, and not merely the result of a shift in language dominance leading to grammatical errors and loss of the L1. This in turn has further implications for the so-called "native speaker standard" (Davies, 2003). This standard, used both in research on second language acquisition and in language testing, is typically regarded as a stable benchmark. However, if an L2 can affect an L1 even at relatively low proficiency levels, there is reason to suspect that "native speaker" performance may actually be rather variable, depending on the language experience of each individual. Indeed, this may even explain the differences between the native Japanese speakers in Kita and Özyürek (2003) and the monolingual Japanese speakers here. There is therefore a need to describe fully the potentially wide parameters and contexts within which speakers of a language can operate, particularly in investigations of ultimate attainment in an L2 (Birdsong, 2005) and in language assessment.

From the perspective of gesture studies, in addition to evidence that the viewpoint from which gestures are deployed varies within individual speakers, we now have empirical support for the notion that gesture viewpoint varies systematically across languages. Moreover, data from multilingual speakers can inform our understanding of this phenomenon. Although some gesture phenomena, for instance rate of gesturing, may be culturally motivated, gesture viewpoint appears not to be one of them.
Finally, more data are needed on other language pairings, as these would distinguish between patterns arising from the convergence of knowledge of particular languages and those arising from general effects of bilingualism. In addition, one study has shown that three or more years of residence in the L2 community are required before effects of the L2 on object categorization in the L1 become visible (Cook, Bassetti, Kasai, Sasaki, & Takahashi, 2006). Therefore, although this risks confounding exposure with proficiency, participants with longer residence in the L2 community might also be tested.

In conclusion, this study investigated the relationship between languages in second language acquisition. Effects of the presence of a second language were found in the gesture viewpoint employed in first and second language production, even at intermediate levels of L2 proficiency. These effects did not appear to arise from cross-cultural influence, which leaves cross-linguistic influence as a more likely possibility. Although the crucial linguistic constituents of the accompanying speech remain unspecified at this point, gesture analyses are proposed as a unique window through which to observe the online interaction between languages in the multilingual mind.
Cross-linguistic interactions in gesture viewpoint
Acknowledgements

This research received technical and financial support from the Max Planck Institute for Psycholinguistics and the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO; MPI 56–384, The Dynamics of Multilingual Processing, awarded to M. Gullberg and P. Indefrey). Previous versions of this paper were discussed with audiences at the 2nd International Society for Gesture Studies Conference (2005), the University of Groningen Gesture Workshop (2006), the Max Planck Institute for Psycholinguistics, and Syracuse University. Marianne Gullberg, Kees de Bot, and two anonymous reviewers offered many helpful comments and suggestions. All of these contributions are acknowledged with grateful thanks.
Notes

1. More information can be found at http://www.cambridgeesol.org.

2. See http://www.lat-mpi.eu/tools/tools/elan.

3. The dimensions of direction and handshape did not apply to all events. Direction was not applied to the climbing through or clambering up events, since the upward movement was neither sagittal nor lateral. Handshape was not applied to descriptions of the rolling down event because the criteria for an enactment handshape would involve the speaker actually rolling, which was considered highly unlikely in adult data.

4. As not all of the dimensions were appropriate for all event descriptions, C-VPT in all analyses describes gestures exhibiting properties of character viewpoint in the maximum number of dimensions appropriate for a given event. For example, a gesture depicting the swinging across scene that was sagittal and bi-manual with an enactment handshape was coded as C-VPT, and a gesture depicting the climbing through scene that was only bi-manual with an enactment handshape was also coded as C-VPT.

5. Many thanks to one of the anonymous reviewers as well as an audience at the Syracuse University Linguistics Symposium for this latter suggestion.
References

Allan, David (1992). Oxford placement test. Oxford: Oxford University Press.
Berman, Ruth & Dan I. Slobin (1994). Relating events in narrative: A cross-linguistic developmental study. Mahwah, NJ: Lawrence Erlbaum.
Birdsong, David (2005). Nativelikeness and non-nativelikeness in L2A research. International Review of Applied Linguistics, 43 (4), 319–328.
Brown, Amanda & Marianne Gullberg (2008). Bidirectional cross-linguistic influence in L1-L2 encoding of Manner in speech and gesture: A study of Japanese speakers of English. Studies in Second Language Acquisition, 30 (2), 225–251.
Cook, Vivian (2003). Effects of the second language on the first. Clevedon, UK: Multilingual Matters.
132 Amanda Brown
Cook, Vivian, Benedetta Bassetti, Chise Kasai, Miho Sasaki, & Jun Arata Takahashi (2006). Do bilinguals have different concepts? The case of shape and material in Japanese L2 users of English. International Journal of Bilingualism, 10 (2), 137–152.
Davies, Alan (2003). The native speaker: Myth and reality. Clevedon: Multilingual Matters.
Dussias, Paola & Nuria Sagarra (2007). The effect of exposure on syntactic parsing in Spanish–English bilinguals. Bilingualism: Language and Cognition, 10 (1), 101–116.
Freleng, Friz (1950). Canary Row. Film, animated cartoon. New York: Time Warner.
Gass, Susan & Larry Selinker (1992). Language transfer in language learning. Amsterdam: John Benjamins.
Grosjean, Francois (1998). Studying bilinguals: Methodological and conceptual issues. Bilingualism: Language and Cognition, 1 (2), 131–149.
Gullberg, Marianne (1998). Gesture as a communication strategy in second language discourse: A study of learners of French and Swedish. Lund: Lund University Press.
Gullberg, Marianne (forthcoming). What learners mean: What gestures reveal about semantic reorganisation of placement in advanced L2.
Gullberg, Marianne & Peter Indefrey (2003). Language background questionnaire. The Dynamics of Multilingual Processing. Nijmegen: Max Planck Institute for Psycholinguistics. http://www.mpi.nl/research/projects/Multilingualism/Questionnaire.pdf.
Kellerman, Eric & Michael Sharwood Smith (1986). Cross-linguistic influence in second language acquisition. New York: Pergamon.
Kellerman, Eric & Anne-Marie van Hoof (2003). Manual accents. International Review of Applied Linguistics, 41 (3), 251–269.
Kendon, Adam (1993). Human gesture. In Kathleen R. Gibson & Tim Ingold (Eds.), Tools, language and cognition in human evolution (pp. 43–62). Cambridge: Cambridge University Press.
Kita, Sotaro (1997). Two-dimensional semantic analysis of Japanese mimetics. Linguistics, 35 (2), 379–415.
Kita, Sotaro (2000). How representational gestures help speaking. In David McNeill (Ed.), Gesture and language: Window into thought and action (pp. 162–185). Cambridge: Cambridge University Press.
Kita, Sotaro & Asli Özyürek (2003). What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language, 48 (1), 16–32.
McNeill, David (1992). Hand and mind: What gestures reveal about thought. Chicago: University of Chicago Press.
McNeill, David (2001). Imagery in motion event descriptions: Gestures as part of thinking-for-speaking in three languages. Proceedings of the Twenty-Third Annual Meeting of the Berkeley Linguistics Society, 255–267.
McNeill, David (2005). Gesture and thought. Chicago: University of Chicago Press.
Negueruela, Eduardo, James P. Lantolf, Stefanie R. Jordan, & Jaime Gelabert (2004). The "private function" of gesture in second language speaking activity: A study of motion verbs and gesturing in English and Spanish. International Journal of Applied Linguistics, 14 (1), 113–147.
Odlin, Terence (1989). Language transfer: Cross-linguistic influence in language learning. Cambridge: Cambridge University Press.
Odlin, Terence (2003). Cross-linguistic influence. In Catherine J. Doughty & Michael H. Long (Eds.), The handbook of second language acquisition (pp. 436–486). Oxford: Blackwell.
Özyürek, Asli, Sotaro Kita, Shanley E. M. Allen, Reyhan Furman, & Amanda Brown (2005). How does linguistic framing of events influence co-speech gestures? Insights from cross-linguistic variations and similarities. Gesture, 5 (1/2), 219–240.
Pavlenko, Aneta & Scott Jarvis (2002). Bidirectional transfer. Applied Linguistics, 23 (2), 190–214.
Perniss, Pamela M. & Asli Özyürek (in press). Representations of action, motion, and location in sign space: A comparison of German (DGS) and Turkish (TID) Sign Language narratives. In J. Quer (Ed.), Signs of the time: Selected papers from TISLR 8 (2004) (pp. 353–376). Seedorf: Signum Press.
Pika, Simone, Elena Nicoladis, & Paula Marentette (2006). A cross-cultural study on the use of gestures: Evidence for cross-linguistic transfer? Bilingualism: Language and Cognition, 9 (3), 319–327.
Schegloff, Emanuel A. (1984). On some gestures' relation to talk. In J. Maxwell Atkinson & John Heritage (Eds.), Structures of social action (pp. 266–296). Cambridge: Cambridge University Press.
Slobin, Dan I. (1996). Two ways to travel: Verbs of motion in English and Spanish. In Masayoshi Shibatani & Sandra A. Thompson (Eds.), Grammatical constructions: Their form and meaning (pp. 195–219). Oxford: Oxford University Press.
Slobin, Dan I. (1997). Mind, code and text. In Joan Bybee, John Haiman, & Sandra A. Thompson (Eds.), Essays on language function and language type: Dedicated to T. Givón (pp. 437–476). Philadelphia: John Benjamins.
Slobin, Dan I. (2004). The many ways to search for a frog: Linguistic typology and the expression of motion events. In Sven Strömqvist & Ludo Verhoeven (Eds.), Relating events in narrative: Typological and contextual perspectives (pp. 219–257). Mahwah, NJ: Lawrence Erlbaum.
Stam, Gale (2006). Thinking for Speaking about motion: L1 and L2 speech and gesture. International Review of Applied Linguistics, 44 (2), 143–169.
Talmy, Leonard (1985). Lexicalization patterns: Semantic structure in lexical forms. In Tim Shopen (Ed.), Language typology and syntactic description, Vol. 3 (pp. 57–149). Cambridge: Cambridge University Press.
Weingold, Götz (1995). Lexical and conceptual structures in expressions for movement and space: With reference to Japanese, Korean, Thai and Indonesian as compared to English and German. In Urs Egli, Peter E. Pause, Christoph Schwarze, Arnim von Stechow, & Götz Weingold (Eds.), Lexical knowledge in the organization of language (pp. 301–340). Amsterdam & Philadelphia: John Benjamins.
Wittenburg, Peter, Hennie Brugman, Albert Russel, Alex Klassmann, & Han Sloetjes (2006). ELAN: A professional framework for multimodality research. Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC). Genoa, Italy.
Yoshioka, Keiko & Eric Kellerman (2006). Gestural introduction of Ground reference in L2 narrative discourse. International Review of Applied Linguistics, 44 (2), 171–193.
Author index

A Abbeduto, Leonard, 71 Abrahamsen, Adele, 23, 71 Adams, Thomas W., 23 Adamson, Laura, 49 Alibali, Martha W., 24, 72 Allan, David, 131 Allen, Linda Q., 24, 89 Ariel, Mira, 111 Asher, James, 24 B Baron-Cohen, Simon, 49 Bates, Elizabeth, 24, 49, 72 Bavelas, Janet B., 24 Beattie, Geoffrey, 24 Behne, Tanya, 49 Bello, Arianna, 72 Berman, Ruth A., 111 Birdsong, David, 24, 131 Bock, Katherine, 24 Bowlby, John, 49 Boyd, Richard, 49 Bozzo, Maria T., 72 Branigan, Holly P., 24 Brown, Amanda, 24, 131 Bruner, Jerome, 24, 49, 72 Bühler, Karl, 24 Butcher, Cynthia, 24, 72 Butterworth, George, 72 C Calbris, Geneviève, 24 Call, Josep, 49 Camaioni, Luigia, 49, 72 Capirci, Olga, 25, 49, 72 Capone, Nina C., 72 Carroll, Mary, 111
Caselli, Maria Cristina, 25, 49, 72, 73 Cassell, Justine, 25 Chafe, Wallace, 111 Chapman, Robin S., 25, 73 Choi, Soojung, 25 Choi, Soonja, 90 Church, Ruth B., 25 Clancy, Patricia, 111 Clark, Herbert H., 25, 49 Clark, James M., 90 Cloitre, Marylene, 111 Cohen, Ronald L., 25, 90 Collett, Peter, 30 Colletta, Jean-Marc, 25 Condon, William S., 25 Cook, Vivian, 25, 131, 132 Costa, Albert, 25 Cowan, Nelson, 90 Csibra, Gergely, 49 D Dat, Marie-Ange, 90 Davies, Alan, 132 Dawkins, Richard, 50 De Ruiter, Jan-Peter, 26 DeLoache, Judy, 50 Dempster, Franck N., 90 Desrochers, Stephan, 50 Dörnyei, Zoltan, 26 Duncan, Susan D., 26, 111 Dussias, Paola, 132 E Efron, David, 26 Ellis, Nick C., 26 Ellis, Rod, 26 Emmorey, Karen, 26
Engelkamp, Johannes, 90 Engle, Randi A., 26 Erting, Carol J., 26 Evans, Julia L., 26 F Fenson, Larry, 26, 73 Ferguson, Charles A., 26 Fex, Barbara, 26 Feyereisen, Pierre, 26, 27, 90 Franco, Fabia, 27, 50 Freedman, Norbert, 27 Freleng, Friz, 132 Frick-Horbury, Donna, 27 Furuyama, Nobuhiro, 111 G Gass, Susan M., 27 Gathercole, Susan E., 73 Gentner, Dedre, 90 Givón, Talmy, 111 Goldin-Meadow, Susan, 27, 50, 73 Gomez, Juan C., 50 Goodwin, Charles, 27 Goodwin, Marjorie H., 27 Goodwyn, Susan W., 27 Graham, Jean Ann, 27 Grice, Paul, 50 Grosjean, Francois, 132 Guidetti, Michèle, 27, 73 Gullberg, Marianne, 27, 28, 90, 111, 132 H Harris, Paul, 50 Hauge, Elizabeth, 90 Holler, Judith, 28
Hostetter, Autumn B., 28 I Irujo, Suzanne, 28 Iverson, Jana M., 28, 73 J Jancovic, MerryAnn, 73 Jarvis, Scott, 28 Jenkins, Susan, 28 Jung, Euen Hyuk, 111 Jungheim, Nicholas O., 28 K Kasper, Gabriele, 28 Kellerman, Eric, 28, 132 Kelly, Spencer D., 28, 73 Kendon, Adam, 28, 29, 73, 132 Kida, Tsuyoshi, 29 Kimbara, Irene, 29 Kita, Sotaro, 29, 132 Krashen, Stephen D., 29 Krauss, Robert K, 29 Kuno, Susumu, 111 L Lambrecht, Knud, 111 Lausberg, Hedda, 29 Laver, John, 29 Lazaraton, Anne, 29 Lee, Jina, 29 Levy, Elena T., 112 Li, Charles N., 112 Liddell, Scott, 29 Liszkowski, Ulf, 50 Lock, Andrew, 50 Lott, Petra, 29 Lovatt, Peter, 90 M Mayberry, Rachel I., 29 Mayer, Mercer, 112 McCafferty, Stephen G., 29 McGann, William, 112 McGregor, Karla K., 73 McNeill, David, 30, 73, 112, 132
Melinger, Alissa, 30 Miller, George A., 90 Miyake, Akira, 30 Mizutani, Nobuko, 112 Mohan, Bernard, 30 Montepare, Joann M., 30 Moore, Chris, 51 Moreno, Roxanna, 90 Morris, Desmond, 30 Muñoz, Carmen, 112 Musumeci, Diane M., 30
N Nakahama, Yuko, 112 Namy, Laura, 51 Negueruela, Eduardo, 30, 132 Nicoladis, Elena, 30, 73 Nobe, Shuichi, 30 Nyberg, Lars, 90
O Odlin, Terence, 132 Özcaliskan, Seyda, 30, 74 Özyürek, Asli, 30, 31, 133
P Pavlenko, Aneta, 133 Perniss, Pamela M., 133 Piaget, Jean, 74 Pika, Simone, 31, 133 Pine, Karen J., 31, 74 Pinker, Stephen, 31 Pizzuto, Elena, 31, 51, 74 Polio, Charlene, 112
R Rakoczy, Hannes, 51 Richards, Jack C., 31 Riseborough, Margaret G., 31 Rizzolatti, Giacomo, 74 Robinson, Peter, 31 Rogers, William T., 31 Rose, Miranda L., 31
S Schegloff, Emanuel A., 31, 133 Schick, Brenda, 31 Schmid, Monika S., 31 Selinker, Larry, 31 Senghas, Ann, 51 Sime, Daniela, 31, 91 Singer, Melissa A., 31 Slama-Cazacu, Tatiana, 31 Slobin, Dan I., 32, 133 Sperber, Dan, 51 Stam, Gale, 32, 133 Stefanini, Silvia, 32, 74 Sueyoshi, Azano, 32 Swain, Merrill, 32
T Talmy, Leonard, 133 Taranger, Marie-Claude, 32 Tellier, Marion, 32, 91 Thal, Donna, 74 Tomasello, Michael, 32, 51, 74
V Valenzeno, Laura, 32 Verspoor, Marjolijn, 32 Viberg, Åke, 32 Vicari, Stefano, 74 Volterra, Virginia, 32, 74
W Wagner, Susan M., 33 Wallbott, Harald G., 33 Weingold, Götz, 133 Werner, Heinz, 51, 74 Wexler, Kenneth, 33 White, Lydia, 33 Williams, Jessica, 112 Wittenburg, Peter, 133 Wolfgang, Aaron, 33 Wu, Ying Choon, 33
Y Yanagimachi, Tomoharu, 112 Yoshioka, Keiko, 33, 112, 133
Subject index

A action, 7, 8, 25, 29, 31, 39, 42, 44, 49, 54, 56, 61, 69, 70, 72, 73, 77, 81, 88, 90, 96, 110, 132, 133 action gestures, 8, 60 action schemes, 7, 37, 44, 48 ageing, 3, 4, 15, 16, 17, 25, 26 ambiguity, 20, 28, 83, 94, 96, 109 anaphora, 93, 94, 95, 96, 97, 98, 100, 101, 102, 104, 105, 107, 108, 109, 110, 112 anaphoric gestures, 108 aphasia, 5, 15, 26, 29, 31 aptitude, 19, 30 arbitrary gestures, 42, 51 articles, 97, 100, 107, 110 attrition, 4, 11, 15, 16, 25, 31 autism, 47, 49, 50
B baby signs, 8 beat gestures, 10, 61 bi-directionality, 48 bilingualism, 16, 25, 30, 114, 130 bimodal, 23, 53, 56, 59, 61, 62, 63, 67, 68, 69, 71, 97, 106, 108, 110, 124 Butterworth gestures, 71
C child–adult interaction, 7 classroom, 15, 28, 31, 83, 86, 88, 89, 91 clause, 96, 98, 99, 100, 101, 104, 107, 116, 129 co-expressivity, 5, 21 cognition, 4, 10, 25, 27, 29, 31, 38, 39, 49, 51, 56, 57, 68, 70, 71, 72, 73, 132 cognitive abilities, 22, 53, 68, 71 cognitive development, 56, 69, 78 cognitive load, 19 cognitive style, 19 common ground, 36, 48 communication, 1, 3, 4, 7, 8, 16, 17, 22, 25, 27, 28, 29, 30, 31, 35, 36, 37, 39, 42, 43, 45, 46, 47, 48, 49, 50, 54, 71, 72, 73, 91, 111, 132 communicative function, 7 communicative intent, 6, 20, 30, 36, 38, 40, 49 compensation, 20, 21 composite signal, 5, 26 Comprehensible Input Hypothesis, 17 conceptual representations, 13 convergence, 130 co-ordination, 5, 24, 115 cross-cultural influence, 129, 130 cross-linguistic, 1, 11, 13, 23, 29, 30, 31, 90, 106, 107, 110, 113, 114, 115, 116, 117, 118, 128, 129, 130, 131, 132, 133 cross-modal, 9 culture, 8, 12, 14, 22, 26, 27, 29, 49, 72, 114, 116, 117, 128, 129
D deaf, 8, 25, 26, 31, 32, 35, 49, 50, 72
deictic expressions, 5 deictic gestures, 7, 9, 10, 38, 45, 46, 48, 54, 60, 66, 69, 70, 101 developmental disorders, 8, 55 discourse, 3, 14, 15, 21, 23, 27, 29, 33, 93, 94, 95, 97, 98, 99, 100, 103, 107, 108, 109, 110, 111, 112, 115, 118, 132, 133 disfluency, 5 Dual Coding Theory, 76, 78, 79, 89 Dutch, 23, 25, 93, 97, 98, 99, 100, 101, 102, 104, 105, 106, 107, 108, 109, 110, 111 E ellipsis, 23 emblems, 14, 15, 22, 76, 90 enactment, 77, 86, 90, 120, 121, 128, 131 English, 10, 12, 13, 18, 22, 23, 24, 25, 26, 30, 75, 79, 81, 83, 84, 85, 88, 90, 111, 112, 113, 114, 116, 117, 118, 119, 121, 123, 124, 125, 126, 127, 128, 129, 131, 132, 133 event, 7, 40, 41, 60, 77, 116, 117, 119, 120, 121, 123, 124, 126, 128, 129, 131, 132 expertise, 1, 20 F fluency, 19 foreigner talk, 23, 26 formality, 20 French, 10, 14, 22, 24, 27, 73, 75, 78, 79, 80, 81, 86, 89, 96, 111, 132
frequency, 8, 12, 45, 56, 58, 86, 88, 93, 102, 103, 105, 106, 107, 108, 109, 110, 115, 121
G German Sign Language, 116 gesticulation, 22 gestural form, 12, 15 gestural social acts, 35, 38, 42, 43, 45, 46, 48 gesture space, 12, 96 gesture type, 10, 11, 44, 61, 65, 66, 68, 127 gesture-speech combinations, 9, 55 Growth Point Theory, 6
H hand shape, 22 handedness, 15, 120, 121 hold, 38, 69, 76, 100 home-sign, 36
I iconic gestures, 42, 43, 45, 47, 48, 78 idiomatic expressions, 15, 80 imitation, 42 immersion, 129 individual variation, 19, 20 infant gestures, 35, 37, 38, 47, 48 infant pointing, 35, 38, 40, 41, 50, 51, 74 Information Packaging Hypothesis, 6 information status, 94, 95, 97, 108, 109, 110 input, 7, 8, 9, 15, 17, 18, 20, 21, 23, 29, 55, 70, 71, 77, 83, 88 integrated system, 5, 6, 11 intelligence, 19 intentionality, 37 interaction, 3, 4, 5, 8, 13, 18, 19, 20, 22, 23, 27, 37, 43, 47, 48, 51, 62, 64, 66, 69, 90, 113, 128, 130 Interface Hypothesis, 6 interlanguage, 14, 28, 33, 111 Italian, 8, 28, 29, 55, 57, 58, 59
J Japanese, 14, 23, 24, 93, 95, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 116, 117, 118, 119, 121, 123, 124, 126, 127, 128, 129, 130, 131, 132, 133
K Korean, 25, 88, 90, 133 L language dominance, 4, 130 language loss, 4, 11 language mode, 119 lexical, 3, 6, 9, 11, 14, 18, 21, 22, 23, 29, 31, 53, 56, 57, 58, 67, 68, 69, 70, 71, 74, 75, 76, 80, 81, 84, 86, 88, 89, 93, 94, 95, 96, 97, 98, 100, 101, 102, 103, 104, 106, 107, 108, 109, 110, 133 Lexical Retrieval Hypothesis, 6 lexicon, 4, 28, 55, 57, 76 long-term memory, 76
M manner of motion, 115 manual accent, 115 memory, 5, 19, 28, 30, 31, 73, 75, 76, 77, 78, 79, 81, 86, 88, 89, 90 mimetic, 120, 129 mis-matches, 11, 21 modality, 7, 35, 50, 55, 59, 61, 62, 67, 71, 73, 77, 80, 83, 115, 116 models, 5, 6, 36, 43, 61 monolingual, 23, 81, 113, 114, 116, 117, 119, 121, 122, 123, 124, 126, 127, 128, 129, 130 motion events, 113, 119, 120, 133 motivation, 19, 36, 38, 39, 40, 48 motor modality, 75, 77, 80, 86, 89 multimodal, 6, 26, 75
N naming tasks, 11 narrative, 16, 23, 33, 95, 96, 99, 100, 109, 111, 112, 119, 131, 133 native speaker, 12, 13, 14, 17, 18, 20, 21, 25, 81, 95, 96, 97, 106, 107, 108, 119, 130, 132 native standard, 23 neuroimaging, 77 nominal gestures, 45 non-manual gestures, 60 non-verbal, 16, 19, 30, 51, 57, 68, 70, 76
O object gestures, 44, 45 other-directed, 4 output, 18, 21, 27, 32 Output Hypothesis, 18, 32
P pantomime, 45, 75 passive knowledge, 80, 83, 84, 86 path, 12, 13, 115 perception, 5, 14, 91, 112 personality, 19 perspective, 14, 16, 17, 23, 25, 27, 32, 35, 37, 47, 48, 49, 89, 95, 113, 114, 116, 117, 129, 130 phonological processes, 56 picture naming task, 68 pointing, 7, 9, 10, 22, 27, 35, 38, 39, 40, 41, 42, 45, 46, 47, 48, 49, 50, 51, 54, 58, 60, 67, 68, 69 prelinguistic, 6, 35, 38, 39, 46, 47, 48 priming, 18, 24 processing, 5, 11, 19, 90
production, 5, 7, 8, 9, 10, 11, 12, 14, 16, 18, 19, 21, 22, 24, 25, 26, 27, 28, 29, 32, 42, 53, 54, 55, 56, 58, 60, 62, 64, 65, 67, 68, 70, 72, 74, 96, 97, 99, 107, 110, 111, 112, 114, 115, 116, 117, 127, 128, 129, 130 proficiency, 11, 14, 19, 28, 30, 98, 110, 113, 114, 115, 118, 126, 128, 130 pronouns, 93, 94, 97, 101, 102, 107, 108, 112 prosodic patterns, 13 Q quotable gestures, 14 R reasoning, 10 recall, 18, 27, 71, 77, 78, 79, 86, 88 reference tracking, 27, 94, 95, 97, 99, 111 referential gestures, 35, 47 repertoire, 9, 16, 45, 56, 58, 69, 70 representational gestures, 8, 14, 22, 28, 29, 30, 35, 38, 42, 43, 44, 45, 46, 47, 48, 53, 54, 55, 56, 57, 66, 67, 68, 69, 70, 132 requesting, 7, 45, 50 retrieval, 6, 21, 31, 70, 77, 94 rhythmic gestures, 14, 17 routines, 7, 35, 42, 43, 44, 45, 47, 48 S saliency, 129 second language acquisition, 26, 27, 28, 29, 31, 33, 75, 88, 90, 91, 111, 112, 113, 114, 116, 117, 129, 130, 132 sign language, 8, 31, 32, 35, 51 size-shape gestures, 60
skills, 9, 10, 16, 18, 19, 22, 28, 31, 35, 42, 43, 46, 48, 53, 54, 55, 56, 71, 93, 118 social gestures, 8, 39, 43 sociality, 37, 51 Spanish, 12, 13, 26, 30, 132, 133 speaker-directed, 4 speaker-internal, 3, 21 speech-accompanying gestures, 11, 37, 111 split-brain, 15 stroke, 100, 101, 121 Swedish, 27, 96, 111, 132 symbolic communication, 8, 35, 46, 47 symbolic gestures, 27, 54 synchrony, 25 T target language, 12, 13, 23, 93, 95, 97, 98, 108, 110, 119 teacher talk, 17, 24 teaching gestures, 75, 76 thinking for speaking, 12, 13, 32 timing, 12 Total Physical Response, 18 transfer, 11, 13, 28, 31, 95, 111, 114, 132, 133 Turkish, 12, 116, 133 Turkish Sign Language, 116 turn-taking, 37 U ultimate attainment, 130 usage patterns, 13 V variation, 13, 16, 20, 29, 106, 107, 132 viewpoint, 23, 95, 96, 109, 110, 111, 113, 114, 115, 116, 117, 120, 121, 122, 124, 126, 127, 128, 129, 130, 131 visual modality, 75, 77, 78, 79, 80, 86, 89
vocabulary, 22, 29, 56, 57, 58, 69, 70, 73, 74, 75, 80, 83, 84, 85, 86, 88, 89, 118 W working memory, 4, 19, 76, 77
In the series Benjamins Current Topics (BCT) the following titles have been published thus far or are scheduled for publication: 28 GULLBERG, Marianne and Kees de BOT (eds.): Gestures in Language Development. 2010. viii, 139 pp. 27 DROR, Itiel E. (ed.): Technology Enhanced Learning and Cognition. xi, 263 pp. + index. Expected January 2011 26 SHLESINGER, Miriam and Franz PÖCHHACKER (eds.): Doing Justice to Court Interpreting. 2010. viii, 246 pp. 25 ANSALDO, Umberto, Jan DON and Roland PFAU (eds.): Parts of Speech. Empirical and theoretical advances. 2010. vi, 291 pp. 24 ARBIB, Michael A. and Derek BICKERTON (eds.): The Emergence of Protolanguage. Holophrasis vs compositionality. 2010. xi, 181 pp. 23 AUGER, Alain and Caroline BARRIÈRE (eds.): Probing Semantic Relations. Exploration and identification in specialized texts. 2010. ix, 156 pp. 22 RÖMER, Ute and Rainer SCHULZE (eds.): Patterns, Meaningful Units and Specialized Discourses. 2010. v, 124 pp. 21 BELPAEME, Tony, Stephen J. COWLEY and Karl F. MACDORMAN (eds.): Symbol Grounding. 2009. v, 167 pp. 20 GAMBIER, Yves and Luc van DOORSLAER (eds.): The Metalanguage of Translation. 2009. vi, 192 pp. 19 SEKINE, Satoshi and Elisabete RANCHHOD (eds.): Named Entities. Recognition, classification and use. 2009. v, 168 pp. 18 MOON, Rosamund (ed.): Words, Grammar, Text. Revisiting the work of John Sinclair. 2009. viii, 124 pp. 17 FLOWERDEW, John and Michaela MAHLBERG (eds.): Lexical Cohesion and Corpus Linguistics. 2009. vi, 124 pp. 16 DROR, Itiel E. and Stevan HARNAD (eds.): Cognition Distributed. How cognitive technology extends our minds. 2008. xiii, 258 pp. 15 STEKELER-WEITHOFER, Pirmin (ed.): The Pragmatics of Making it Explicit. 2008. viii, 237 pp. 14 BAKER, Anne and Bencie WOLL (eds.): Sign Language Acquisition. 2009. xi, 167 pp. 13 ABRY, Christian, Anne VILAIN and Jean-Luc SCHWARTZ (eds.): Vocalize to Localize. 2009. x, 311 pp. 12 DROR, Itiel E. (ed.): Cognitive Technologies and the Pragmatics of Cognition. 2007. 
xii, 186 pp. 11 PAYNE, Thomas E. and David J. WEBER (eds.): Perspectives on Grammar Writing. 2007. viii, 218 pp. 10 LIEBAL, Katja, Cornelia MÜLLER and Simone PIKA (eds.): Gestural Communication in Nonhuman and Human Primates. 2007. xiv, 284 pp. 9 PÖCHHACKER, Franz and Miriam SHLESINGER (eds.): Healthcare Interpreting. Discourse and Interaction. 2007. viii, 155 pp. 8 TEUBERT, Wolfgang (ed.): Text Corpora and Multilingual Lexicography. 2007. x, 162 pp. 7 PENKE, Martina and Anette ROSENBACH (eds.): What Counts as Evidence in Linguistics. The case of innateness. 2007. x, 297 pp. 6 BAMBERG, Michael (ed.): Narrative – State of the Art. 2007. vi, 271 pp. 5 ANTHONISSEN, Christine and Jan BLOMMAERT (eds.): Discourse and Human Rights Violations. 2007. x, 142 pp. 4 HAUF, Petra and Friedrich FÖRSTERLING (eds.): Making Minds. The shaping of human minds through social context. 2007. ix, 275 pp. 3 CHOULIARAKI, Lilie (ed.): The Soft Power of War. 2007. x, 148 pp. 2 IBEKWE-SANJUAN, Fidelia, Anne CONDAMINES and M. Teresa CABRÉ CASTELLVÍ (eds.): Application-Driven Terminology Engineering. 2007. vii, 203 pp. 1 NEVALAINEN, Terttu and Sanna-Kaisa TANSKANEN (eds.): Letter Writing. 2007. viii, 160 pp.