Dictionary Use in Foreign Language Writing Exams
Language Learning & Language Teaching (LL&LT)

The LL&LT monograph series publishes monographs, edited volumes and textbooks on applied and methodological issues in the field of language pedagogy. The focus of the series is on subjects such as classroom discourse and interaction; language diversity in educational settings; bilingual education; language testing and language assessment; teaching methods and teaching performance; learning trajectories in second language acquisition; and written language learning in educational settings.
Editors
Jan H. Hulstijn, Department of Second Language Acquisition, University of Amsterdam
Nina Spada, Ontario Institute for Studies in Education, University of Toronto
Volume 22 Dictionary Use in Foreign Language Writing Exams. Impact and implications by Martin East
Dictionary Use in Foreign Language Writing Exams Impact and implications
Martin East Unitec New Zealand / The University of Auckland
John Benjamins Publishing Company Amsterdam / Philadelphia
The paper used in this publication meets the minimum requirements of American National Standard for Information Sciences – Permanence of Paper for Printed Library Materials, ANSI Z39.48-1984.
Library of Congress Cataloging-in-Publication Data

East, Martin.
Dictionary use in foreign language writing exams : impact and implications / Martin East.
p. cm. (Language Learning & Language Teaching, ISSN 1569-9471 ; v. 22)
Includes bibliographical references and index.
1. Composition (Language arts)--Ability testing. 2. Rhetoric--Ability testing. 3. Encyclopedias and dictionaries--Use studies. I. Title.
P53.27.E17 2008
418.0071--dc22
2008010429
ISBN 978 90 272 1983 1 (Hb; alk. paper)
© 2008 – John Benjamins B.V. No part of this book may be reproduced in any form, by print, photoprint, microfilm, or any other means, without written permission from the publisher.

John Benjamins Publishing Co. · P.O. Box 36224 · 1020 ME Amsterdam · The Netherlands
John Benjamins North America · P.O. Box 27519 · Philadelphia PA 19118-0519 · USA
Table of contents

Preface  vii
Acknowledgments  xi
List of key acronyms  xiii
Chapter 1. What is the problem with dictionaries?  1
Chapter 2. On dictionaries and writing  13
Chapter 3. Does the dictionary really make a difference?  37
Chapter 4. How do test takers use dictionaries?  65
Chapter 5. When the dictionary becomes a liability  95
Chapter 6. What do the test takers think of having a dictionary?  125
Chapter 7. Some more test taker perspectives  147
Chapter 8. Having a dictionary in writing exams – is it useful and is it fair?  169
Chapter 9. Maximizing the opportunity and minimizing the liability  187
References  203
Appendix 1. A note on intermediate level students  211
Appendix 2. A note on data and procedures to establish reliability  213
Appendix 3. The writing tasks used in the first two studies  217
Appendix 4. A note on inferential statistics  221
Index  227
Preface
Dictionaries – friend or foe?

On my latest visit to Berlin, back in 1998, I was asked to make a speech, in German, to the mayor and other dignitaries of the district of Wilmersdorf. A colleague and I from the UK had been regularly accompanying groups of 18-year-old high school students of German who were undertaking two weeks of work experience throughout the region – an excellent opportunity to practice their German in the real world. This was part of an exchange program that also saw groups of German students come to England for similar purposes. German is a foreign language to me, although I consider myself communicatively competent in it. Nevertheless I wanted to make sure that my speech was as accurate and meaningful as possible, and so in my preparation I turned to my bilingual English/German dictionary. I had been used to using a dictionary since I first started learning German, way back in school. I had come to rely on it as a trusted ally and effective tool – one that, in the words of Canale (1983), would help me to make up for gaps in my knowledge and enhance the rhetorical effect of what I wanted to say. With the help of the dictionary I was able to make the speech what I wanted it to be, and, so far as I could tell, it was appropriate and well received.

Back home in England, the whole issue of allowing students to use bilingual dictionaries was very much at the forefront of my thinking. As the head of a large modern languages department where almost every student in my school was expected to study a foreign language up to the General Certificate of Secondary Education (GCSE) – the first high stakes examination for 15- to 16-year-olds – I was grappling, with my team, with the knowledge that, for the first time, these students were going to be allowed to take a bilingual dictionary with them into their written examinations.
We were also preparing to allow our students at Advanced (A) level – the higher-level high stakes examination for 17- to 18-year-olds – to use dictionaries. At A level we believed we could recommend a more comprehensive and sophisticated dictionary. After all, these were more advanced students who already had a fair degree of language proficiency. For our GCSE candidates, many of
whom were quite weak in the language they were studying, it was a more complex decision. This was a totally new experience for us, but one that I, as a frequent user of bilingual dictionaries, believed in. We put a lot of thought into the type of dictionary we would recommend, and into how we were going to help our students to get the best out of it. We decided on a dictionary and we asked all the students to buy the same type so that we could train them in its use.

So far so good. But I remember what happened when it came to final examination time. I was scheduled to supervise the reading test. All the students turned up with their shiny new dictionaries and, when the announcement to start the exam was made, settled down to work. I remember my heart sinking as I witnessed several students with their noses seemingly buried in the dictionary, apparently oblivious to the passing of time and either unaware of, or unable to deal effectively with, the need to use their dictionaries sparingly and judiciously. Oh, how we had warned them to take care. How we thought we had prepared them for this eventuality. But in the heat of the examination context several of the students seemed to forget our advice. It is impossible to know how much this experience may have impacted on their performance, whether for good or ill, but I remember it to this day. I regarded the dictionary as a friend. Could it be that, for some students of a foreign language, it was more like a foe?

But I was soon to leave England for New Zealand, and I put the experience behind me. Dictionaries were not allowed in New Zealand school exams. I was also teaching German at a tertiary level. My students, almost without exception mature students with high levels of motivation to learn German, were keen to find ways to improve their learning. And I was keen to share with them my belief that, used effectively, the bilingual dictionary was a valid and useful tool of the trade.
I made sure that my students had ample opportunity to use dictionaries in class. But I had to draw the line at allowing them in exams.

At the end of 2000 I took a trip back to England. I was chatting one day with an ex-colleague. In the course of our conversation on the comparative state of foreign languages teaching in the UK and in New Zealand, she remarked to me: "By the way, did you know that they've changed their minds about dictionaries? Very soon now they're going to be banned from the exams." I must admit to being surprised and a little disappointed by what she had to say. My colleague also admitted to her own bewilderment about the decision.

Her remark got me thinking. Why the sudden change of heart? On what basis had the policy makers made this decision? To my mind using a dictionary was quite a reasonable thing to do. Why take away from the students a legitimate resource? What signal were we giving to the students about where dictionaries should fit in as they attempted to use a foreign language? On the other hand, I had witnessed first hand a potential problem with allowing the dictionary in
an examination. Could it be that similar experiences had been reported, and the testing authorities had genuine concerns? Did they conclude, with hindsight, that the experiment wasn't working? I wondered whether anybody had carried out any research to find out what students actually did when they had a dictionary in an exam. In other words, was there any evidence to back up the policy change of heart?

I tracked down several studies into what teachers and students thought about using dictionaries in the exams, but I wanted something that would show me what actually happened in a testing context. I managed to locate one study into bilingual dictionaries in GCSE writing. Two researchers had worked with groups of French students who were being prepared for the new 'with dictionary' exams. They asked them to take two GCSE-type writing tests, one with and one without a dictionary. Then they compared their scores. The researchers concluded that using a dictionary led to improved scores. From my reading it seemed that dictionaries were subsequently banned on the basis of this conclusion. But there was also a concern about the mistakes the test takers made when using dictionaries, and the amount of time it seemed to take the test takers to use them – something I had witnessed myself.

My biggest concern as I read the research and as I looked at what was happening in the UK was this: the research had focused only on the GCSE – and yet, when the announcement was made to remove dictionaries from the GCSE, they were removed from the A level at the same time. I thought about the very first question of the GCSE writing examination which required students to make a list of ten items – things you might take on a picnic, for example, or things you might need on holiday. Surely if you had a dictionary it would not be surprising if you could improve your score on this sort of question. On the other hand, I knew that many GCSE candidates were weak in their language of study.
Perhaps dictionaries at GCSE level were more of a liability than a supportive tool for them, and in this case outlawing the dictionary made sense. But the research had not taken account of more proficient learners, those who might be working towards the A level, and who perhaps needed a dictionary all the more because of the higher levels of authentic language they were dealing with. I wondered whether any of these more proficient students, faced with having to write a speech for a set of dignitaries in Germany, and told 'no, you can't use a dictionary', would be able to accomplish this task with the sense of confidence I had enjoyed. Was there any justification for banning the dictionary at A level?

This background formed the impetus for the studies I describe in this book. I wanted to investigate what would happen if you allowed higher-level, more proficient users of a foreign language to take a dictionary with them into a writing exam. I made a deliberate choice to investigate German, in contrast to the study into French I had read about. I also made a deliberate choice to investigate
those who had reached at least the intermediate level in German and who possibly needed a dictionary all the more, or who should theoretically be able to use a dictionary more successfully.

The three studies I describe were designed as case studies that would enable me to look at dictionary use from a variety of angles. In the first two studies I worked with my own students, all of whom were taking intermediate (or in some cases advanced) level courses in German language in a New Zealand tertiary institution. For the third study I recruited high school students from a number of local schools, all of whom were studying German for an A level equivalent examination.

One focus of my studies was to compare students' performance in two writing tests, one with and one without a dictionary, to find out whether they did better or worse in the 'with dictionary' test, or whether there was no difference. I looked at the marks they were given. I looked at the quality of their writing and the types of words they had used. Another focus was to find out what the test takers had to say about the two tests. What did they like? What did they dislike? Which test did they prefer? Which did they think was a fairer test of their writing skills? The multi-faceted nature of the studies meant that I was able to get a qualitative picture of what a whole range of students thought and did. I was also able to quantify what I found out, so that I could look at trends.

One aim of this book is to describe the studies in as clear a way as possible. I hope you will enjoy reading about the test takers' experiences and what all of this means for language testing. I am also aware that the writing of many students is now assessed through coursework options that allow dictionaries. Another aim of this book is to help language students, at whatever level, and whatever language they are studying, to get the best out of bilingual dictionaries when writing.
It is also to help language teachers to help their students to use this resource to full effect. Ultimately, my goals are that what I present here will contribute to ongoing debates around using or not using dictionaries in writing, whatever the context, and that what I have to say will keep these debates going.

Martin East
Auckland, July 2008
Acknowledgments
No research project occurs in isolation; on the contrary, a project arises out of engagement with the academic community. Many individuals have helped to shape the work that I have presented in this book.

Three key people first set me on the research road – Cathie Elder, Melbourne University, Roger Peddie, Auckland University, and Shawn Loewen, Michigan State University. I am indebted to them for their input as research mentors. Three representatives of the Applied Linguistics Association of New Zealand (ALANZ) – John Bitchener, AUT University, Gary Barkhuizen, Auckland University, and Cynthia White, Massey University – judged the research on which this book is based to be of sufficient quality for an ALANZ award. As a consequence I was inspired to consider releasing the research in book form. Ute Knoch, Melbourne University, and Lynn-Elizabeth Hill and Anna Beltowski, Unitec New Zealand, kindly read the first draft of the manuscript and offered me invaluable suggestions for improvement. I am particularly grateful to Ute for giving me her input on the many examples of German I use. On behalf of John Benjamins Publishers, Kees Vaes, Jan Hulstijn, and Nina Spada provided positive and enthusiastic support for and comments on the manuscript, which has led to it becoming part of the Language Learning and Language Teaching series. While all of those mentioned have contributed to an improved final manuscript, none is responsible for any remaining shortcomings, errors, or omissions, for which I accept responsibility.

It would be impossible to name all the other individuals who supported me in one way or another as I carried out the research, but I would particularly like to mention the following: Nick Shackleford, Jürg Brönnimann, Sue Walter, Janet von Randow, Sheena von Bassewitz, Elena Kolesova, Alan Kirkness, Judith Geare, and Margaret Hardiman.
My particular thanks go to the research participants, my own students and other students, for consenting to take part in the project in the first place, and without whom the research would definitely not have been possible. Several individual facets of the studies have been published in other fora, and I would like to thank those who have given me permission to include aspects of previously published material, each of which is acknowledged in the reference list. A number of these publications were primarily aimed at local audiences and local contexts, particularly in New Zealand and Australia. I am, however, also grateful
for permission to include material published more widely, including extracts from Elsevier Publications (Assessing Writing, 2006) and SAGE Publications Limited (Language Testing, 2007). I acknowledge HarperCollins Publishers for allowing me to reproduce dictionary entries from the Collins German Dictionary (Terrell, Schnorr, Morris, & Breitsprecher, 1997), in Chapters 5 and 9, and the Collins Pocket German Dictionary (Collins, 2001), in Chapter 5, and Edexcel Limited for permission to reproduce, in some cases in adapted form, writing examination questions previously used in the United Kingdom's Advanced level German examinations (Appendix 3).

Finally, and on a more personal note, I would like to acknowledge the significant contribution to my research endeavors which my wife, Julie, has made over the years – as mentor, critic, and supporter. My grateful thanks go to her.
List of key acronyms
A level – Advanced level
ARG – Assessment Reform Group
AS level – Advanced Supplementary level (up to 2000); Advanced Subsidiary level (from 2001)
BA – Bachelor of Arts
CEF – Common European Framework
CLT – Communicative Language Teaching
GCSE – General Certificate of Secondary Education
L1 – First language
L2 – Second / foreign language
LEP – Limited English Proficient
LFP – Lexical Frequency Profile
OU – Open University
PED – Portable electronic dictionary
QCA – Qualifications and Curriculum Authority
TLU – Target Language Use
Chapter 1
What is the problem with dictionaries?
The dictionary, according to Kirkness (2004), is well established as "an essential source, if not indeed the principal source, of information on language for all members of literate societies who might have questions on any aspect of the form, meaning, and/or use of a word or words in their own or in another language" (p. 54). Kirkness goes on to talk of second or foreign language (L2) learners, and claims that for these learners bilingual dictionaries continue to be the most-used reference book at all levels. Ilson (1985) describes the dictionary as "the most successful and significant book about language." For Ilson, its significance is "shown by the fact that … its authority is invoked, rightly or wrongly, to settle disputes" (p. 1). If this is so, Carter and McCarthy (1988) make a curious observation when they suggest that dictionaries for language learning have been largely ignored in the plethora of books and articles that have been published by those who have a stake in language learning and teaching. But recent events in language testing practices, particularly in the United Kingdom, highlight the more central position that bilingual dictionaries are now taking, especially in debates about their use or non-use by L2 students. As Horsfall (1997) puts it, it might almost have seemed that a dictionary was redundant in the foreign languages classroom. That is, however, until fairly recently.

In the late 1990s the UK's Qualifications and Curriculum Authority (QCA) – the governing body that develops and maintains the National Curriculum for UK schools and regulates its associated examinations and assessments – made a bold and radical decision with regard to secondary school L2 examinations. It was decided that, from June 1998, students taking the General Certificate of Secondary Education (GCSE) in an L2 would be allowed to take bilingual dictionaries into the examination with them.
(The GCSE is the first major ‘high stakes’ examination in UK schools, taken by 15- to 16-year-olds after five years of secondary schooling.) The different examining boards made slightly different arrangements about exactly which parts of the exam would allow dictionaries, but there was general consensus that they should be allowed in the two separate components testing reading comprehension and writing skills. A new option for assessing writing – coursework – was also introduced. By its very nature the coursework option (which could be taken instead of the writing examination) allowed the use of support resources such as dictionaries. These
radical changes mirrored what had already become established in some Advanced Supplementary (AS) and Advanced (A) level L2 examinations (high stakes examinations taken by 17- to 18-year-olds after one or two years of study post-GCSE). In fact, access to a bilingual dictionary had already been introduced into one A level syllabus as early as 1978 (Bishop, 2000). The changes also reflected an early expectation of the UK's National Curriculum – that students' independent learning should be developed through the effective use of reference materials like dictionaries (DES, 1990).

In 1998, therefore, the first groups of 16-year-old GCSE candidates made their way into the examination rooms armed with bilingual dictionaries to support them in their L2 tests. But less than five years later, in what might have appeared to many as a fin de siècle fit of returning to more 'traditional' testing practices, the decision was reversed. From 2003, dictionaries would no longer be allowed in the examination. This decision impacted not only on the GCSE, but also on AS and A level examinations, and a component that had become established in several of these higher level examinations was forcibly removed. At the same time, the coursework option for GCSE remained (as it did for A level), and with it the continued opportunity to use dictionaries.

What had happened? Why was it that an initial policy decision to introduce dictionaries into the examination room was so swiftly and decisively reversed? And what effect did all of this have on the teachers and students?
Teacher and student perspectives

When the initial decision to introduce bilingual dictionaries into the GCSE was made there was a flurry of research activity which took place before the first cohort of students had sat any examinations with a dictionary. The research arose out of an attempt to find out what teachers and students thought about the initiative, and what impact it was likely to have on the students as the principal stakeholders.

Barnes, Hunt, and Powell (1999) sought the opinions of a range of language teachers on issues surrounding the use of dictionaries. Asher (1999) undertook two school surveys. A 1997 survey looked at dictionary policy in 27 schools. A 1998 survey, with 36 schools, focused on teachers' classroom practice, their attitudes towards dictionary use and some of the effects teachers perceived dictionary access was having on their students' patterns of learning. Chambers (1999) aimed to uncover students' views about dictionary availability in the GCSE. In total, 267 students from three schools filled in a questionnaire relating to their experience of bilingual dictionary use and their perceptions of the experience. Just under half the participants were subsequently interviewed.
If we consider what arose from this range of research we find evidence that, on the positive side, many teachers seemed to support having dictionaries in the examination: they were authentic and helpful, and contributed to the development of learner autonomy, greater self-reliance, and increased learner confidence. One typical comment supporting the positive views was "long overdue, very helpful in independent study" (Barnes et al., 1999, p. 25). Three quarters of respondents in the Barnes et al. study agreed that using dictionaries in an examination was an authentic task. Using a dictionary was true to life; its use was also seen as a transferable skill which could encourage more detailed awareness of language and how it fits together. Asher's (1999) research also showed that a good number of teachers valued the authenticity that using dictionaries brought to language learning and language use. They saw this as a valuable life-skill which mirrored the situation of those who, when visiting the target language country, might occasionally need to use a dictionary. In both research projects, the observation was made that teachers also needed to refer to dictionaries from time to time. Why not the students?

On the negative side, there was concern about students' inability to use dictionaries appropriately and comments highlighted the need for training in dictionary skills. Teachers were worried that students might overuse the dictionary, thereby wasting time, or look up items word by word, thereby either misunderstanding or producing inaccurate language. They were also concerned that students might be less likely to commit words to memory if they knew they had access to a dictionary in the examination. For one teacher in the Barnes et al. (1999) study, allowing a dictionary in the exam room was "making it increasingly difficult to get lazy pupils to do any learning homework properly!
I feel sure that some pupils will never finish an exam – they'll be spending so much time looking up words in a dictionary" (p. 26). Another considered that the majority would "use them as an excuse not to learn vocabulary and spend too much time looking up words they ought to know" (p. 24).

The students themselves thought the dictionary would be helpful with understanding rubrics and translating ideas into the target language more easily. They thought they would be helped to face the challenge of the examination with greater confidence. Like the teachers, they were worried that they might spend too long looking for words or become dependent on the dictionary. They were concerned about not being able to find the word in the first place, or being unable to choose the appropriate word for the context.

Interestingly, Asher and Barnes et al. appeared to come to different conclusions about the principal teacher perspective on 'with dictionary' exams. Asher (1999) observed that "[v]iews of the dictionary reform amongst languages teachers seem predominantly, though not overwhelmingly, negative" (my emphasis).
He conceded that in some cases this may have been "due less to the principle of dictionary use itself than to … the need to include another element of teaching [i.e. dictionary skills] into an already crowded syllabus" (p. 65). Barnes et al. (1999) by contrast concluded that "teachers are on the whole positive about the use of dictionaries … [and appear] to welcome the introduction of dictionaries and to have considered the surrounding issues very carefully" (p. 27, my emphasis).

At the same time that the survey research was happening, Hurman and Tall (1998) undertook a very different kind of investigation. They compared actual test taker performances in GCSE French writing examinations, one taken with, and one without, a bilingual dictionary. The major findings and implications of this crucial study will be discussed later. For now, I present some of the comments teachers provided in a questionnaire Hurman and Tall used, which serve to underline the perspectives of the more concerned teachers:

    I am very much of the belief that in the written exam using a dictionary can often do more harm than good. Pupil use of a dictionary is not sophisticated enough (other than maybe perhaps with the brightest pupils) to stop them making considerable errors… (pp. 27–28)

    At KS4 [Key Stage Four, or school years 10 and 11] we find particularly less able pupils unable to find their way around a dictionary. (p. 28)
Whatever the views of the teachers and students at the time of its introduction, the dictionary use policy was put into place in 1998, only to be reversed some five years later. Such a swift policy volte-face would surely have seemed bewildering and bizarre. What did teachers and students think about this u-turn?

At the end of 2000, at the time the policy reversal had been announced, I undertook a series of interviews among interested stakeholders from which I concluded that there might well be a case for retaining the dictionary in the examination (East, 2006a), or at least for seeing that there is more than one valid way of assessing students' language ability (East, 2008). Admittedly this was a very small-scale survey, but its findings, summarized below, were telling in terms of highlighting stakeholder opinion at this crucial juncture. (Here and elsewhere I use pseudonyms when referring to any research participants.)

I interviewed twelve people individually, in interviews ranging from about half an hour to (in one case) over two hours. There were three 'language professionals' (Mark – a chief examiner for GCSE and A level for a major examining board, Sue – the head of a foreign languages department in a large and popular suburban girls' school, and Tim – a junior teacher of French and German in a large, successful suburban boys' school). I also talked to nine students who were being prepared for, or who had recently taken, the A level in German. The nine students
had had different experiences with dictionaries in examinations in a variety of 'with dictionary'/'without dictionary' combinations:

• Two had taken A level in 1996. They had therefore sat both GCSE and A level examinations under the 'old' system – no dictionaries in either exam.
• Two had taken A level in 1999. They had taken GCSE without a dictionary, but A level with a dictionary.
• Two were planning to take A level in 2001. They would have taken both GCSE and A level with a dictionary.
• Three were working towards A level in 2002. They had taken GCSE with a dictionary, but would take A (and AS) level without.
With regard to QCA’s dictionary u-turn, all three language professionals expressed a sense of bewilderment and frustration. Tim argued: To give something two or three years and then say ‘well, it’s not working’ or ‘let’s change it now’ is just typical of the problems within education … you know, ‘move the goalposts’, ‘no, move them again’, ‘oh no, let’s move them back’ …
Sue’s perspective revealed a similar concern. The dictionary appeared to have been withdrawn “for the same reason it was introduced in the first place” – someone in government saying “oh, I could have passed all these exams … if I’d had a dictionary to look up the odd word, so I think they ought to have dictionaries” and then somebody else in government saying “oh, well, I mean, if they want dictionaries, it’s all done for them, isn’t it?” She concluded that “it seems to me completely arbitrary.” Mark was concerned that a resource had been taken away without careful thought into its legitimate value: I don’t understand the government, or QCA’s, reasoning for [the removal of dictionaries], because dictionary skills, to me, are very important. … Ministers have been saying … it’s non-linguists I think … ‘if you have an exam with a dictionary, oh, you can do it all’.
For Mark this appeared to demonstrate on the part of QCA "a lack of understanding of what the subject is, and I think also the lack of understanding it from the pupils' perspective …"

Tim, Sue, and Mark all noted, however, that having a dictionary would certainly help the test takers to feel more secure and confident. The removal of dictionaries at A level was, said Tim, "a rude shock" for those who had got used to it at GCSE – something which, asserted Sue, had "put the wind up them quite a lot." According to Tim it had been "quite a relief" for the students to have the dictionary "on tap." There was, said Mark, "an enormous amount of value" in
having the dictionary in the examination because it “gave students an enormous sense of security.” Furthermore, surely allowing a dictionary in the examination was authentic. Sue argued, “when you and I go abroad, we take a dictionary with us, it’s not a big deal … and people do do that.” Tim concurred: “It’s realistic. We would not consider that using a dictionary was a taboo for us … why should we not be allowed to use a dictionary in whatever circumstances are necessary?” There was, according to Mark, “an enormous amount of worry” among teachers about the lack of dictionaries. Mark was also concerned about what he saw as negative washback into classrooms – a consequent ‘decontextualization’ or ‘disauthentication’ of language: “I know some teachers are resorting to the old algebraic equivalent, you know, giving a vocab list, twenty words a lesson, and they’re having to learn off by heart.” He perceived that the policy reversal was “bound to have an effect on the teaching, without any doubt … I think we might find (and this is pure conjecture) that kids have been taught lists, and learnt stereotypes, instead of being immersed in language, where they can actually use their own resources.” Among the students themselves, there appeared to be a very clear understanding both of the potential benefits of having a dictionary in the examination, and of the potential pitfalls and problems, both with and without a dictionary. The rote-learning of decontextualized vocabulary had indeed become a problem, and something of a disincentive. Three students who would face the A level without a dictionary all independently agreed about this:

There’s a hell of a lot of words to learn every week … I hate it – but it’s necessary.

We have a lot of vocab to learn each week, which I know is useful, but it’s just the amount of words we’ve had to learn … and we have a test each week.

There’s, like, loads of words you’re meant to learn, and I just can’t do all of them.
For one student, the prospect of not having a dictionary in the A level examination “seems really, really daunting right now … it is a big help to have the dictionary, it’s a burden not to.” This student believed that having the dictionary was useful “because otherwise some [students] might get stressed up totally, and it puts you at a bit of a dead end if you have no inspiration or you think ‘oh, what’s that word?’ and you can never remember it in German … if the dictionary’s there you can just sort of quickly flick through it …” Not having the dictionary was “going to make it more difficult, definitely.” Certainly those who had been allowed the dictionary in their examinations were generally positive about it. One student saw it as a “life-saver”, a useful resource for checking a few key words without which he “would have been lost.” The dictionary might slow you down a bit, but “that’s outweighed really by the advantages.”
Chapter 1. What is the problem with dictionaries?
Another commented that the exam would have been “really hard” without the dictionary because “if there’s a word you don’t know, and you don’t know it, then there’s not a lot really you can do about it.” Certainly for the essay questions “it makes it so much better to have a dictionary … I wouldn’t be able to express myself in the depth that they want without having a dictionary.” On the other hand, some students were aware of the limitations of dictionary use. One who had been allowed a dictionary at A level commented, “you think ‘okay, I’ll look up that word and I’ll put it in’, but at the same time … you actually don’t really know what you’re writing down, you’re just picking everything up from the dictionary [and it might not] make sense to a German person.” A similar perspective was offered by another student who had not been allowed a dictionary at either level of examination. He argued that you might be able to “use more complicated words” when writing, but you “often have four or five different German words meaning the one English word you’re looking for, and it’s using the right ones … [and] you might … waste too much time.” Where, then, did things stand in the minds of teachers and students at the time of the dictionary volte-face? The issues raised were similar to those uncovered before dictionary use became established in the examination. Several themes emerge from all the stakeholder perspectives I have described. Dictionaries were helpful for checking individual words that might otherwise cause problems or for broadening vocabulary in writing. Their use was authentic, and they helped with the continued use of authentic texts, both in class and in the examinations. Not allowing them appeared to provoke some stress and to lead to a much greater emphasis on the decontextualized rote-learning of lists, something that students definitely appeared to dislike.
Nevertheless, the learning of vocabulary was seen as important, and students could see the drawback of having a dictionary if they did not know how to use it successfully. On the plus side the stakeholders appeared to support the use of dictionaries in exams because they promoted:

• authenticity;
• flexibility to respond to the questions;
• increased learner independence;
• potentially increased test taker confidence.

On the minus side there was concern over:

• taking too long to use the dictionary in the examination;
• over-reliance on the dictionary;
• not taking sufficient time to learn vocabulary;
• misuse of the dictionary.
On this last point, Asher (1999) was particularly emphatic:

In terms of enhancing pupils’ performance in reading and writing, the areas of the GCSE where candidates have access to dictionaries during the examination, a far from reassuring picture emerges. It is apparent that many pupils are ill-equipped with some of the most rudimentary skills required to make effective use of their dictionary, resulting in wasted time and inaccurate language production, especially in the target language. In such cases, dictionary use is at present counterproductive, diminishing rather than improving pupils’ performance. (p. 65)
Indeed, the prevalence of so-called ‘dictionary howlers’ would seem to be one of the strongest arguments against their use. Asher himself (p. 61) cites the example of the candidate who rendered ‘I will not’ in German as Ich Testament nicht. One example recently anecdotally reported to me was the candidate who wanted to say ‘I’m a tall boy’ and ended up writing Ich bin eine Kommode. The British Broadcasting Corporation has an interesting website (BBC, 2007) that reveals the more amusing consequences of inappropriate word choices in a range of languages. In an examination context such howlers may raise a chuckle from examiners, but the consequence for the test takers may be disastrous – in a real sense ‘diminishing rather than improving their performance.’ On this basis a ban on dictionaries in writing examinations may be warranted. But outlawing dictionaries from the exam room, and yet continuing to allow students to use them in coursework, still leaves these students open to making errors. If dictionaries continue to be available to students in coursework elements, they are surely potentially just as much a liability here as in an examination and may also contribute to diminished performance. Nevertheless, and perhaps most importantly in terms of the interviews I carried out in 2000, both teachers and students appeared to be generally in favor of allowing dictionaries in the examinations. This was particularly so for those who had had experience with using a dictionary, either at GCSE or at A level. And there was a definite sense that the ‘dictionaries are in’ policy had not been given sufficient time, with a consequent lack of any real evaluation of its effectiveness, strengths, and limitations as perceived by key stakeholders. We therefore need to be very clear about exactly how using a dictionary affects students in assessments so that we can at the very least help our students to minimize the liabilities of dictionary use.
Why, then, was there a change in policy?
The bigger picture

The use of dictionaries in L2 assessment is not just an issue for the UK. Australia and New Zealand, for example, have both faced the question of allowing support resources in L2 assessments, and have come to different conclusions about where they might best fit (East, 2005a, 2008). Beyond language learning and teaching, debate in the United States over allowing access to support resources and accommodations in standardized tests – for example for test takers whose first language is not English – has provoked considerable research activity (Sireci, Li, & Scarpati, 2003). At a more fundamental level the debates surrounding use or non-use of support resources like dictionaries in exams come down to different understandings of the purposes of testing and assessment. Influential in these debates have been on-going discussions beyond the language learning context about what constitutes ‘best practice’ in assessment. The discussions illustrate an apparent conflict between two opposing value systems. Gipps and Murphy (1994) bring out this tension when they suggest that, broadly speaking, assessment is for ‘professional and learning’ purposes or for ‘managerial and accountability’ purposes. The first type of assessment focuses on the learning and the second type emphasizes the measurement. The UK’s Assessment Reform Group (ARG) distinguishes between these two intentions or types by using the descriptions ‘assessment for learning’ and the ‘assessment of learning’ (ARG, 2002a, 2002b, 2002c, my emphasis). Assessment for learning sits within a constructivist process-oriented approach to the curriculum which favors ‘dynamic’ or on-going assessment. This type of assessment is concerned with bringing out the test takers’ best performance by using testing procedures that ‘bias for best’ and ‘work for washback’ (Swain, 1985). L2 coursework options fit within this understanding of assessment.
The assessment of learning, by contrast, is traditionally carried out through ‘static’ one-time tests. It is rooted in a traditional behaviorist product-oriented and knowledge-based approach. This approach focuses on the discriminatory power of tests to identify different levels of test taker ability and also to predict future academic performance. The big dilemma is that the two assessment paradigms are not mutually exclusive. We cannot say that either one is ‘right’ or ‘wrong’, ‘better’ or ‘worse’. They are just different, and based on different assumptions about what we want to measure. As a result, there is tension between them and often an attempt to ‘mix and match’, with assessment for learning sometimes taking the dominant position in the arguments, and with the assessment of learning staking its claim when there is a feeling that its influence is being watered down. The UK’s ‘dictionaries
are in / dictionaries are out’ policy decisions are symptomatic of this tension. The tension gives rise to a situation in which evaluation and assessment policies have bounced back and forth between an emphasis on more traditional testing practices and a focus on skills development. Taking an historical view of the UK’s GCSE as an example, Gipps (1994) argues that the more traditional type of summative school examinations had had a negative impact on the secondary school system, an effect she describes as “stultifying” (p. 3). This had led to a move towards a more progressive form of assessment, exemplified in the GCSE, that emphasized a broader range of skills and placed less focus on timed examinations. Gipps notes, however, that these moves were introduced and supported by a government that subsequently returned to a system placing more emphasis on traditional formal assessment. In 2004, however, proposals were made (forming part of “the biggest planned shake-up of English education for 60 years” (Ross, Attwood, & Moynihan, 2004)) to dismantle the dominance of external examinations at GCSE level and place more focus on internal school-based elements. This bouncing back and forth between more traditional testing practices and skills development is driven by conflicting beliefs among those who devise or advise on the assessment policies about what assessment should be about – the assessment of learning or assessment for learning. As Gipps (1994) puts it:

Government intervention in the UK has sometimes initiated, sometimes reinforced the move towards a more practical and vocationally oriented curriculum and thus the move towards more practical, school-based assessment. But government has also been concerned with issues of accountability and with what it sees as the maintenance of traditional academic standards through the use of externally set tests. (p. viii)
Gipps concludes that the situation that arises is one of ‘complexity and confusion’. The use or non-use of dictionaries in components of L2 assessment is a case in point, complicated by the two types of assessment available – the ‘traditional’ examination in which (currently at least) “[t]he use of dictionaries will not be permitted in any external assessment” (QCA, 2000), and the writing coursework option in which, as one examination board puts it, candidates may “have access to the task stimulus and a dictionary (which may be on-line)” (Edexcel, 2001, p. 7, my emphasis). There is, however, another concern in the UK which has implications for the use or non-use of dictionaries. Recent research (Coe, 2006) has indicated that GCSEs in foreign languages are considered to be among the hardest subjects, with students being awarded grades relative to other subjects that are sometimes at least one lower. The research, together with the finding of a government inquiry into take-up of L2 study at GCSE level, received considerable media attention.
The government inquiry revealed that the number of students entered for GCSE L2 examinations had dropped from 80% in 2000, when studying an L2 at GCSE level was compulsory, to 51% after 2004, when a GCSE in an L2 was made optional (Blair, 2007). One report noted the view that students were opting out “because the exams are harder and are seen to be harder” (Garner, 2007). A subsequent report (Cassidy, 2007) highlighted a continued “dramatic decline” in the face of not only perceived difficulty but also perceived lack of value. None of these reports suggests that no longer having dictionaries is to blame (and indeed the lack of compulsion at GCSE level is a strong contributing factor). Nevertheless, in my interview, Mark (the chief examiner) expressed the view that, given the dictionary ban, students would no longer opt to study an L2 – they would compare the subject with something like mathematics, which allows the use of a calculator in certain examination components, and would “vote with their feet” because they “don’t understand the logic.” Bearing in mind the issues I have raised in this chapter – the advantages and disadvantages of dictionary use, the wider issues around assessment practices, the use or otherwise of dictionaries in assessment, and the perception that GCSEs in foreign languages are harder than other subjects – several questions come to mind, specifically with regard to writing in an L2 (the focus of this book):

• Is there really a case for banning dictionaries from examinations?
• What difference, if any, do dictionaries actually make to test takers, not only in terms of their performance, but also in terms of their perceptions of the test?
• What pedagogical and training issues arise, especially when, in other assessment contexts (like coursework), students can still use dictionaries?
This book explores each of these questions in depth in an attempt to draw some conclusions about the dictionary use question, and to offer practical advice to teachers and students about how to get the best out of a support resource like a dictionary. The illustrations used in this book are predominantly German, but the principles I draw have relevance for L2 teaching and testing, whatever the language.
chapter 2
On dictionaries and writing
In Chapter 1 I suggested that two essential tensions are inherent in the conflict over the use or non-use of bilingual dictionaries in assessments. Firstly, there are conflicting opinions, particularly among teachers, about the added value that dictionaries give. On the one hand, they are seen as authentic; on the other, there is genuine worry about test takers’ ability to use them effectively. Secondly, there is conflict and debate, particularly at a theoretical level among the assessment experts, over two opposing value systems that underpin different assessment practices. Perspectives concerning what makes a fair test differ according to whether the underlying value system favors process-oriented assessment for learning or product-focused assessment of learning. Assessments that fit into the first paradigm focus on the learning, whereas those that are part of the second emphasize the measurement. The two assessment paradigms are not mutually exclusive, however. Neither one is ‘right’ or ‘wrong’, ‘better’ or ‘worse’. They are simply different, and based on different assumptions about what we want to find out. The ‘dictionaries are in / dictionaries are out’ policy decisions are symptomatic of this tension, with dictionaries arguably acceptable in the first paradigm and not acceptable in the second. The situation is compounded by a perception that, at least in the UK, foreign languages are among the hardest of school subjects, and consequently students are ‘voting with their feet’ and abandoning their study of them. It is important to consider whether allowing or disallowing a dictionary might make a difference to students’ subject choice. However, student use and misuse of the dictionary and different assessment paradigms make the question of bilingual dictionaries in assessment, particularly of the skill of writing, a far from straightforward one to address. This chapter presents some of the background to the difficulty. 
I begin with an overview of what is currently accepted as the predominant way of teaching languages – the communicative approach – and explain how dictionaries naturally fit within this way of teaching. I go on to consider the two main types of dictionary available to the L2 learner – monolingual and bilingual dictionaries – and weigh up their relative merits and demerits. The chapter proceeds with a consideration of the two most common forms of assessing L2 students’ writing proficiency: coursework options and timed tests. These two methods have been chosen because they
represent contrasting conceptualizations of writing assessment. I consider the issue of where dictionaries fit into these two types of assessment.
Teaching our students within a framework of communicative competence

If we were to ask a group of language teachers what they thought the main purpose of teaching languages was, they would almost certainly reply with something along these lines: ‘we teach our students a language so that they can communicate effectively in that language.’ Current understanding about language teaching is bound up with the concept of ‘communication’. The teaching approach that has come to be known as Communicative Language Teaching, or CLT, is now firmly embedded in the culture of language teaching and learning practices in many contexts and is underpinned by theories of communicative competence. Various theoretical frameworks of communicative competence have been expounded (Bachman, 1990; Canale, 1983; Canale & Swain, 1980; Hymes, 1971, 1972, 1982; Savignon, 1983, 1997, 2002; Widdowson, 1978), with each model building on, adapting, and developing the framework over time. The distinguishing feature of CLT approaches is that they have led to a distinct move away from the artificiality of language, with its emphasis on frequently decontextualized grammatical structures and vocabulary learning, common within earlier frameworks such as ‘grammar-translation’. Instead there has been a move towards an understanding that language exists for purposes of real communication with real individuals in real contexts. According to Bachman (2000) the communicative paradigm regards language use as “the creation of discourse, or the situated negotiation of meaning” and language ability as “multicomponential and dynamic” (p. 3). The ‘negotiation of meaning’ has been central to L2 teaching since the 1970s (Kramsch & Thorne, 2002). In addition, notions such as authenticity and learner-centeredness are now well established as central concepts of language teaching, which “cluster around a ‘communicative’ core” (Benson & Voller, 1997, p. 10).
These understandings of teaching approaches drive the view that dictionaries may be seen as legitimate ‘tools of the trade’. This is because dictionaries are frequently used for purposes of communication, discourse creation, and negotiation of meaning in real-world contexts. Also, if bilingual dictionaries are the most frequently owned and most frequently consulted reference works among L2 learners (Kirkness, 2004; Midlane, 2005), then the bilingual dictionary has, more often than not, become the principal tool for which language learners reach, whether their teachers like it or not. There are some good reasons why language teachers do not like it: if, for example, students do not seem able to use bilingual dictionaries
effectively, encouraging their use to support the ‘situated negotiation of meaning’ is surely, as Asher (1999) puts it, counterproductive. This leads to several important questions. The bilingual dictionary may be the most used reference tool, but does that make it the best choice of dictionary for L2 learners, given the number of errors students seem to make? Or would students be better served with a different type of dictionary, or no dictionary at all? If we can answer these questions, we shall be part of the way towards understanding how, if at all, bilingual dictionaries fit into the bigger picture of L2 learning and assessment. I turn now to this first set of questions.
Do dictionaries have value in foreign language learning?

Dictionaries as reference tools can be divided into two broad groups – print dictionaries and electronic dictionaries. These groups of dictionaries are also available in three types – monolingual, bilingual, and bilingualized. (The bilingualized dictionary, as I explain in more detail later, is a kind of hybrid of monolingual and bilingual dictionaries.) The monolingual dictionary may be further sub-divided into two: the standard monolingual designed for the native (L1) speaker, and the monolingual learner’s dictionary, designed for the L2 learner (Bogaards, 1999). Of these different types, survey research (Hartmann, 1999) has suggested that the most commonly owned and consulted dictionaries are monolingual and bilingual print dictionaries. These two types of dictionary are therefore the well established tools of the trade, both in general and in L2 language learning contexts (so much so that Bogaards refers to them as ‘traditional’). Ownership of an electronic dictionary, whether on a computer or in portable form, was (at least at the time and in the context in which Hartmann’s survey was conducted) considerably limited. The availability and use of portable electronic dictionaries (PEDs) has grown substantially over time, especially as their cost has fallen, even though to an extent they are still something of a novelty (Midlane, 2005, p. 28). Whether available in print or electronically, it might seem unnecessary to ask which of the two main types of dictionary – monolingual or bilingual – is better for the L2 learner. After all, these dictionary types are quite distinctive and it would surely be easy to conclude that the better tool is the one that fits the particular purpose. But we face a dilemma when it comes to deciding which of these two types L2 learners should be encouraged to use in particular circumstances, and this dilemma is underpinned by various questions. What is the language level of the L2 learner?
What information does that learner need to know? What is the best means of providing that information?
Most significantly, one of the major drivers in the decision-making has to do with how teachers interpret CLT approaches. One major interpretation is that CLT should emphasize the virtually exclusive use of the target language. As a consequence it is often perceived that monolingual dictionaries are the more appropriate pedagogical tools regardless of the circumstances. The argument among language teachers runs something like this: ‘we want you to use a monolingual dictionary because by doing so you are being exposed to the target language and you will learn it more effectively.’ Monolingual dictionary use by L2 learners therefore appears to have become the “accepted orthodoxy” (Thompson, 1987, p. 282). As such, it is important to consider in some detail exactly why this is the case. Underhill (1985) suggests that an advantage of the monolingual dictionary is that it forces its user to use the target language in order to understand it. This apparently helps with the internalization of L2 without the barrier of L1 (Atkins, 1985). In acknowledging (although not necessarily agreeing with) the established viewpoint, Thompson (1987) notes the apparently accepted drawbacks of bilingual dictionaries which would appear to make teachers shy away from them as useful pedagogical tools:

• They strengthen users’ inclination to translate from their L1 rather than helping them to think directly in the L2.
• They strengthen users’ belief in a one-to-one association between two languages at an individual word level.
• They do not adequately describe how individual words behave in a sentence.
• They may order meanings in a way that is not based on frequency or common usage – an ordering that would perhaps be more helpful for users.
Thompson (1987) acknowledges the claims that monolingual dictionaries avoid these drawbacks, particularly with regard to how words fit into sentences in contextually appropriate ways. To these objections we might add the following:

• The tiny pocket bilingual dictionaries that students often buy commonly provide only one-word equivalents with one or two ‘frequent’ meanings (Rossner, 1985). This might be sufficient when students use the dictionary to access the meanings of unknown words when reading (which is, after all, what they are often used for). When writing, however, these types of dictionary might give dictionary users insufficient or inadequate information from which to choose an appropriate word.
• What are apparently nearest translations in a bilingual dictionary may in fact have quite different meaning associations in the two languages. Being able to
find suitable translations in a bilingual dictionary therefore depends on how closely related the two languages are (Laufer & Hadar, 1997; Underhill, 1985). Each of these constraints theoretically leads L2 learners to make a myriad of avoidable mistakes when using a bilingual dictionary. Teachers’ practical experience is that this is often the case. Indeed, Tomaszczyk (1983) warns of the inadequacy of some bilingual dictionaries because in their attempt to meet the diverse needs of a broad range of prospective users they try to do too much. These needs are, in his view, so wide-ranging that they are effectively irreconcilable. He perceives the danger of L2 learners being led astray by entries in the bilingual dictionary because these learners are not always proficient enough to recognize the limitations, and so take on board the information given to them without question. As a result Tomaszczyk concludes that what they produce can often reveal the limitations of bilingual dictionaries quite dramatically. Baxter (1980) suggests that the difficulty is made worse by the narrow focus on lexis (rather than a broader focus on discourse). The bilingual dictionary user gets used to matching lexical items and does not necessarily learn about the wider uses of words in context. Those in favor of the monolingual dictionary may argue on this basis that, when a bilingual dictionary is used to produce language, there is no opportunity to look at definitions that might help the learner to choose the appropriate word for the context, and to see how words work in contexts. These are all very convincing arguments in favor of promoting monolingual dictionaries: they fit into an accepted language teaching approach (CLT); they promote the internalization of L2; and they coax students away from a belief in a one-to-one lexical relationship between words in different languages that often leads to errors. But these perspectives fail to recognize several benefits of the bilingual dictionary.
Thompson (1987) acknowledges two essential differences between monolingual and bilingual dictionaries: monolingual dictionaries have headwords in the target language, and monolingual dictionaries define words in the target language. Apart from this, Thompson asserts that the other features of monolingual dictionaries can equally be found in bilingual ones. Furthermore, the two differentiating aspects of monolingual dictionaries can cause problems in the language-learning process, especially for students below the most advanced levels. If, for example, learners do not know the target language word for the concept they wish to communicate, they will not know where to locate the word in a monolingual dictionary and will be obliged to look it up in a bilingual dictionary, at least as a starting point (Nesi, 1992). It may well be that after the initial enquiry students will need a monolingual dictionary to see how to use, or not to use, the word – but in practice students may simply take one of the choices given in the bilingual dictionary
without making the effort to check in the monolingual one. Teachers may not see this as a good thing, and it may lead to error, but it is a reality. A second problem lies with understanding the monolingual dictionary definitions, given that these are provided in the target language. If L2 students do know the word they are looking up, the definition is not required (although the grammatical and other information may be useful). If they want a definition, the grammatical structures used in defining examples may be complex, and may be beyond the competence of anybody other than an advanced learner. Amritavalli (1999) suggests that although learners go to a great deal of trouble to decode complex dictionary explanations in monolingual learners’ dictionaries of English, their efforts still often lead to inaccuracy. Entries may involve decoding complex structures such as conditionals, indirect questions, relative clauses, and passives. Furthermore, illustrative examples can be idiomatic, or specific to a particular domain or culture. Students may simply not have learnt enough to be able to understand the definitions they find. Thompson (1987) puts it like this:

One claim implicit in monolingual dictionaries is that the users will benefit from being exposed to the foreign language via the definitions. But one has only to look at the majority of the definitions to see that they are, of necessity, ungainly and complicated, and that they employ a special register which is not necessarily the most useful or rewarding for learners to be exposed to. (p. 284)
With regard to the use of English monolingual dictionaries, Amritavalli (1999) claims:

In my experience, complex structures in explanations are more likely to be understood if they are spoken by a proficient user of English, who in effect ‘parses’ them for the learner by using appropriate pause and intonation patterns, and by drawing attention to the more complex parts of the definition. Typically, however, dictionary reference is a solitary and silent activity, in which students faced with unduly complex or otherwise unsatisfactory explanations too often lose their way. (p. 264)
The situation remains similar whether the monolingual dictionary is designed for L1 or L2 use. Worsch (1999) notes that monolingual learners’ dictionaries can be just as off-putting as monolingual L1 dictionaries due to users’ frustration over not understanding the definition.

These drawbacks highlight several potential benefits of bilingual dictionaries. As has already been pointed out, monolingual dictionary users are particularly disadvantaged if they want to use the L2 productively, because the dictionary is only beneficial if L2 users know or can recall the words they want to use. A strong practical argument in favor of bilingual dictionaries appears to be that they overcome the difficulty inherent in not knowing the
Chapter 2. On dictionaries and writing
word in the first place, and, secondly, not fully understanding the definition when the word is looked up. When learners are given a word in their own language they can understand its scope and the contextual limitations. For example, the English word ‘scale’ carries several different meanings depending on the context. Direct translations into contextually appropriate equivalents may be more successfully understood by an L2 learner than illustrative definitions provided in the target language. Furthermore, one major perceived drawback of bilingual dictionaries – locating an inappropriately matched lexical item – is not necessarily worse than not using a dictionary at all. Ard (1982) argues, “the fact that students often fail to find acceptable words is no reason to condemn the use of bilingual dictionaries. In most cases, the students would presumably fail to find an acceptable word in the absence of a bilingual dictionary also” (p. 206).

It appears, then, that methodological purists prefer the monolingual dictionary because it seems to fit better with an understanding of what CLT is all about. But monolingual dictionaries can be hard to use. On a practical level, therefore, the bilingual dictionary may prove to be the more effective tool, particularly for developing L2 learners’ more active expressive abilities (Ard, 1982).

Are there, however, any methodological arguments that might support the use of bilingual dictionaries? We might argue that encouraging the use of bilingual dictionaries contributes to an important dimension of CLT – that of helping learners to become more autonomous and independent of the teacher by developing effective communicative strategies (East, 2005b). Indeed, Canale and Swain (1980), whose work has been influential in framing contemporary understandings of communicative competence, speak of one important dimension of communication which they label ‘strategic competence’.
By their definition of the construct this competence is the ability to use strategies effectively to make up for gaps in knowledge and thereby to compensate for breakdowns in communication. Canale (1983) went on to develop this dimension by claiming that it might also be useful to “enhance the rhetorical effect of utterances” (p. 339). Surely the bilingual dictionary is useful in both respects. Oxford (1990) notes that language learning strategies are used “because there is a problem to solve, a task to accomplish, an objective to meet, or a goal to obtain” (p. 11). Bilingual dictionary use may help L2 learners to make up for what Oxford calls “an inadequate repertoire of vocabulary” (p. 47) as they aim, for example, to complete a written task. Learners can do this in a way that cannot be achieved with a monolingual dictionary, especially if they do not know the word they are looking for in the first place. Horsfall (1997) suggests that students who are able to use a bilingual dictionary confidently are “well on the way to becoming much more independent learners” who are able to problem-solve. The
dictionary provides the opportunity for students to check up on particular words and clarify misunderstandings. In turn, this may become “a positive motivator and confidence-builder, showing the learner that he/she can proceed without the teacher” (p. 7). This, he claims, makes the bilingual dictionary an aid to both teaching and learning.

In summary, confident and effective use of a bilingual dictionary promotes learner autonomy. It also means that language learners can operate from their L1 understanding of a word to find its direct L2 equivalent (L1 → L2), or can check the L1 meaning of a word when presented with its L2 equivalent (L2 → L1). When considering the production of language, for example when writing, it may be that such use of the bilingual dictionary will frequently be advantageous.

Furthermore, if authenticity is considered another important dimension of CLT there is a further argument in favor of the use of bilingual dictionaries by L2 learners. Certainly the use of authentic texts, rather than made-up and structured language, carries with it the additional responsibility to make their meaning accessible. The language used is no longer regulated by the teacher or the textbook and will contain vocabulary that may need to be looked up in a bilingual dictionary (Horsfall, 1997). Bilingual dictionaries may be necessary to check items in a task that is being used to generate a written response, and also necessary to complete the response adequately. After all, is this not what users of an L2 often do in the real world? They read something and respond to it with the help of a dictionary. Using a bilingual dictionary is “an authentic activity” (Chambers, 1999, p. 78). There would therefore seem to be strong and convincing methodological arguments in favor of the bilingual dictionary, particularly when writing.
These arguments may not, however, be enough to convince the methodological purists that many L2 learners (apart from possibly the most advanced) probably derive more benefit from bilingual dictionaries than from monolingual ones.

Part of the response to reservations around bilingual dictionaries may be to encourage the use of a different kind of resource such as a thesaurus. The benefit of thesauri is that L2 users can continue to operate in the target language, but are not required to understand definitions of words. They can, for example, establish the meaning of an unknown word via a synonym which they may understand, or use the information available to broaden or extend their writing.

There are, however, two major limitations. The first of these is that users are provided with neither L1 support nor definitions. Secondly, synonyms of words do not necessarily convey exactly the same meaning, and L2 users may be led astray by choosing a synonym that is not appropriate for a particular context (although the Collins School Thesaurus (Collins, 2006) is an example of an English thesaurus that aims to help with this by providing short examples of appropriate use).
Providing synonymous information may prove to be insufficient for many users of a foreign language without the additional benefit of an L1 equivalent.

Another response to reservations around bilingual dictionaries may be to develop a new kind of dictionary – one that enables learners to find the L2 equivalent of the L1 word, and then provides contextually appropriate definitions. Indeed, in recent years there has been a move towards developing such a ‘hybrid’ – the bilingualized dictionary. A bilingualized dictionary – “at the intersection of monolingual, bilingual and pedagogical lexicography” (Hartmann, 1994, p. 206) – is defined by Laufer and Hadar (1997) as a combination of a learner’s monolingual dictionary, with the same number of entries and meanings for each entry, with a translation of the entry into the learner’s first language. If the target word has several meanings, each meaning is translated. Worsch (1999) argues that the bilingualized dictionary can be regarded as “a kind of spin-off of existing monolingual dictionaries” (p. 103). Laufer and Hadar (1997) suggest that these types of resource are “a step in the right direction” with regard to the creation of user-friendly dictionaries, because they “provide more information than monolingual or bilingual dictionaries and allow the user to choose explanations in the one language with which he or she is more comfortable, or in both languages for reassurance and reinforcement” (p. 196). When writing, L2 students may find that this type of additional information helps them to use the word they have looked up even more effectively than if they had just been presented with word equivalents.

Bilingualized dictionaries may be ‘the way to go’ in the future. Their availability and use are not yet widespread, however. In the meantime the majority of learners are faced with the choice between monolingual and bilingual dictionaries, whether in print or electronic form.
And as far as these two are concerned, Thompson (1987) argues:

… monolingual learners’ dictionaries have a very important role to play at the most advanced levels. Basically, though, for learners below this level the bilingual dictionary can do all the useful things that the monolingual dictionary can do; and it can do several of the things in a more efficient and more motivating way. All that is needed is the recognition that this is so. (p. 286)
What about learner choice?

Although there are constraints, there are some convincing theoretical, practical, and methodological arguments in favor of bilingual dictionaries. Further evidence in support of bilinguals can be found from the results of surveys into dictionary
preferences. Piotrowski (1989) concludes from a survey of studies of dictionary use that L2 learners and dictionary users turn to their bilingual dictionaries as long as they use dictionaries at all, regardless of their level of competence. Tomaszczyk (1983) concludes from 449 respondents to a questionnaire survey that beginning and intermediate L2 learners rely on bilingual dictionaries almost exclusively. More surprisingly, language teachers in secondary schools and universities used bilingual dictionaries more than L2 and other monolingual dictionaries, even though they had access to these. This finding mirrors Laufer and Hadar’s (1997) observation that even L2 learners who have reached a good level of L2 proficiency and have been trained in skills such as dictionary use still favor a bilingual dictionary, although some may use a monolingual and a bilingual dictionary together. Tomaszczyk argues that it can surely be assumed that the teachers were able to use the L2 quite successfully and had all been “brought up in the tradition that discouraged excessive reliance on the mother tongue” (p. 46). Nevertheless, practically all of them preferred to use bilingual dictionaries, even though they regarded them as considerably less valuable than monolingual ones – an interesting reflection of the disjunction between ideological perspectives and actual use. Even those who did not find monolingual dictionaries too difficult to use felt more comfortable with bilingual dictionaries and would prefer them when given the choice.

Thompson (1987) makes his own informal observation that only very few students in his classes in several countries claimed regularly to use a monolingual dictionary for production of language. Most admitted that they hardly ever used them for production or comprehension, even after having been trained in their use. Thompson concedes that when learning his students’ languages, he has made far more use of bilingual than monolingual dictionaries.
He therefore argues that monolingual dictionaries are “simply not cost-effective for many learners in terms of rewards (correct choice of word) versus effort” (p. 284). Tomaszczyk (1983) puts it like this: “[p]eople use bilingual dictionaries whether language teaching methodologists like it or not. Indeed, to many people the word ‘dictionary’ means ‘a bilingual dictionary’” (p. 45). Ownership of bilingual dictionaries by learners far outweighs that of monolingual dictionaries (Midlane, 2005). In the absence, at present, of widespread bilingualized dictionaries this is therefore the “consumer reality” (Laufer & Hadar, 1997, p. 190) that reflects what students “notoriously prefer” (Atkins, 1985, p. 19). Atkins puts the dilemma well when she reframes the traditional belief in the superior value of monolingual dictionaries in these terms: “Monolinguals are good for you (like wholemeal bread and green vegetables); bilinguals (like alcohol, sugar and fatty foods) are not, though you may like them better” (p. 22).
Do dictionaries have a place in foreign language testing?

The evidence so far presented reveals some convincing arguments for the inclusion of bilingual dictionaries in the L2 classroom in a way that is fully commensurate with CLT approaches – their use is authentic, their use promotes learner autonomy, and, most importantly, their use allows students to make up for gaps in their knowledge and potentially enhance the quality of what they produce. Furthermore, if using bilingual dictionaries is what the students ‘notoriously prefer’, and if they are going to use them regardless of what their teachers might say, there would surely be little point in insisting on monolingual dictionaries, and it would surely be beneficial to help language learners to get the best out of bilingual dictionaries.

It is also certainly beyond question that L2 learners’ language proficiency has to be assessed in some way if we are to gain a measure of these learners’ communicative competence. This leads to the second major issue we need to consider: given the two assessment paradigms articulated by the ARG – assessment for learning and the assessment of learning – and the central importance of CLT as a teaching approach, where might the bilingual dictionary fit into assessment?

Without doubt CLT has compelled language testers to move away from an emphasis on the testing of language ability as an isolated ‘trait’, and has required them to “take into consideration the discoursal and socio-linguistic aspects of language use, as well as the context in which it takes place” (Bachman, 2000, p. 3).
In other words, in response to a view of language ability that moves beyond the limitations associated with discrete point or integrative tests (Morrow, 1981; Oller, 1979), language testers have been required to make efforts to develop tests that not only measure linguistic ability considered more widely, for example in terms of the aspects of communicative competence identified by Canale and Swain (1980) and Canale (1983), but are also seen as ‘authentic’ or related to how language is used in the real world. This type of testing (where the underlying construct is based on a theoretical framework of communicative competence) may be considered more valid as a mirror of language in actual use than earlier testing models. The quality of authenticity arguably has central importance (Morrow, 1991; Wood, 1993).

With regard to authenticity there is, suggest Bachman and Palmer (1996), the need for a correspondence between language test performance and language use. For a given test to be useful for its intended purposes, performance on the test needs to correspond in demonstrable ways to language use in non-test situations or capture in a representative way the complexities and demands found in the real world (Wiggins, 1989). Bachman and Palmer refer to these as ‘target language use’ (TLU) domains – the actual real-world situations that the test tries to mirror. Indeed, for Bachman and Palmer authenticity is “a critical quality of
language tests” because “it relates the test task to the domain of generalization to which we want our score interpretations to generalize” (pp. 23–24). In other words, if we want to find out how our students are likely to perform in real-world language use tasks beyond the classroom, we need to create assessment opportunities that allow them to use the type of language they are likely to encounter beyond the classroom. This would surely be a persuasive reason, in theory at least, to allow bilingual dictionaries, whatever the type of assessment. Furthermore, Bachman (1990) argues that communicative language tests should provide “the greatest opportunity for test takers to exhibit their ‘best’ performance” so that they are “better and fairer measures of the language abilities of interest” (p. 156). Bilingual dictionaries may be helpful to some test takers, and may therefore have a supportive role to play.

It is, however, harder to justify the availability of dictionaries in tests if language learners cannot necessarily use them successfully. There is also a tension between what Weir (1990) notes as two genuine concerns of communicative language testing – that of eliciting the test taker’s best performance (for which a dictionary may or may not be useful) and that of ensuring that the measurement properties of tests, such as reliability, are upheld (for which a dictionary might be problematic). This tension leads to the challenges around assessment practices outlined in Chapter 1 and the conclusion that the role of bilingual dictionaries in tests is controversial (East, 2005c, 2007).

We therefore face a problem when we consider where dictionaries fit into assessments. Are they beneficial? Or do they distort the information about language proficiency available from the assessment? We need a framework whereby we can evaluate a given assessment or testing procedure in a meaningful way and draw constructive conclusions about dictionaries.
One way of doing this is to consider different assessment opportunities in the light of two questions: is the assessment opportunity useful? And is it fair?
Designing useful tests

Bachman and Palmer (1996) argue that “[t]he most important consideration in designing and developing a language test is the use for which it is intended, so that the most important quality of a test is its usefulness” (p. 17). Test usefulness is, they suggest, an overriding consideration for quality control throughout the process of designing, developing, and using a particular language test. After all, what use is a language test if we cannot have confidence that the information it gives us about the test takers’ language proficiency is meaningful? An important
consideration when thinking about dictionaries in assessment is therefore whether the test or the assessment gives us this kind of useful information.

Bachman and Palmer (1996) drew up a kind of ‘checklist’ which they suggest would help us when thinking about the usefulness of tests. They argue that evaluating a test against the six qualities on the list would help test-setters to weigh up the relative usefulness of the test. The qualities are also useful in evaluating alternatives such as coursework. These qualities are:
1. Reliability
2. Construct validity
3. Interactiveness
4. Impact
5. Practicality
6. Authenticity.
Reliability and construct validity are the two fundamental measurement characteristics of tests. They are concerned with the meaningfulness and accuracy of the scores awarded, in relation to the measurement of an underlying construct, where the scores are indicators of the ability or construct. Construct validity has to do with the assessment task: it relates to whether the task adequately and fairly reflects the construct that the test is trying to measure (and therefore the extent to which the scores are adequate reflections of test takers’ abilities). Reliability has to do with the scores: it relates to the way scores are awarded (and therefore the extent to which the process of awarding the scores is adequate) or to whether the test (or a different version of the test) yields comparable results across different administrations. Thus validity focuses on the tasks themselves, and reliability is concerned with how consistently performances on those tasks are measured.

The four qualities of interactiveness, impact, practicality, and authenticity are distinct from the measurement qualities. Interactiveness is a quality of the interface between the test taker and the test – how effectively the test taker is able to engage with the task. Impact at the microlevel is concerned with the effects of taking a particular test on the test takers. Impact at the macrolevel is concerned with the wider implications of the test for the stakeholders, including test takers, teachers, parents, educational programs, employers, and gatekeepers. Practicality is a quality of how the test is run – whether the test can be administered efficiently. Authenticity is, as has already been stated, a quality of the test and its relationship to a TLU domain.

Bachman and Palmer (1996, p. 18) do not suggest that all six qualities must be traceable in equal measure if we are to conclude that a test is useful. Three principles guide the application of the six qualities to any given test:
1. It is important to maximize the overall usefulness of the test, rather than the individual qualities.
2. The qualities should not be considered in isolation, but rather in terms of how each quality contributes to the overall usefulness of the test.
3. The appropriate balance among the qualities should be determined for each specific testing situation.

It is therefore inevitable, and acceptable, that some tests will treat as vital qualities of test usefulness that other tests might consider relatively minor. A formative ‘in class’ test, for example, may be highly interactive in the sense that the task is engaging and stimulating for the test takers, drawing on their experience and knowledge of the world. At the same time it may have low reliability. A summative ‘high stakes’ test may be considered to be highly reliable, but it may lead to a low level of interactiveness for test takers who are unable to engage fully with the task due to the pressurized demands of time constraint.

Timed tests and coursework options have different balances of the qualities that contribute to their ‘usefulness’. This differential balance gives rise to questions about dictionaries in these assessments. It also leads to a second consideration which needs to be laid alongside questions of usefulness – the extent to which a given testing or assessment procedure is fair.
Designing fair tests

Establishing test usefulness is not enough. We also need to establish test fairness. As Kunnan (2000) argues, there is little or no value in a test being useful if it is not fair. Fairness is an important consideration. But what exactly do we mean by fair? Hamp-Lyons (2000) suggests that fairness is a difficult concept because “there is no one standpoint from which a test can be viewed as ‘fair’ or ‘not fair’” (p. 32).

There are several implications of this for dictionary use. Using a dictionary might distort the measurement of test takers’ language proficiency. Some test takers may do better with a dictionary, and some may do worse. Some may do better with one type of dictionary rather than another. These factors may lead to unfairness for a whole range of test takers. Taking into consideration Bachman’s (1990) comments about communicative language testing – that tests should provide adequate opportunity for all test takers to demonstrate their best performance – one way of looking at fairness is this: did the test takers have the greatest opportunity to display in the test what they know and can do? Was there part of the test procedure that may have hindered this? What may be the consequences, for the test takers, of this?
In the light of considerations of both usefulness and fairness, different ways of assessing writing have been developed, and questions around allowing or disallowing dictionaries in writing assessment have arisen. Where, then, might dictionaries fit into these different ways of assessing writing?
Dictionaries and coursework

Coursework and portfolio options, whereby students are able to produce an extended piece of work or collect samples of writing over a period of time, are well suited to assessing communicative competence in a way that provides the opportunity for those being assessed to demonstrate the full extent of their proficiency. This is one reason why this assessment method, which is quite distinct from a ‘test’, is gaining in popularity. Hamp-Lyons and Condon (2000) suggest that the portfolio assessment method arguably gives us a better representation of writers’ efforts and range of writing than a one-off time-constrained summative test. The coursework approach views writing as process, and this has enabled us to rethink our responses to students’ writing.

There certainly seems to be a lot to commend coursework and portfolio options for assessing writing. In the view of Grabe and Kaplan (1996), the portfolio system has particular strengths:

1. A range of writing samples, drawing on different topics and task types, can be assessed.
2. Students are able to reflect on their writing and the progress they are making.
3. Students are given responsibility for choosing the writing on which they want to be assessed.
4. A more realistic audience for students’ writing is created.
5. There is a stronger link between teaching and assessment.

Perhaps most importantly, Grabe and Kaplan (1996) assert that writing assessment should match its assessment criteria to the type of writing that students will be expected to carry out beyond the learning context. Portfolios and coursework options arguably contribute positively in this respect. When dictionary use is being considered, the assessment available through coursework or the writing portfolio clearly reflects the ways in which those being assessed will carry out writing outside the classroom.
Portfolio assessment may therefore be seen as highly authentic in that it mirrors the process of writing through which
people go as they write in the real world, it is not subject to such rigorous time constraints as the timed test, and it factors in the use of resources. Also, coursework or portfolio assessment may promote greater positive interactiveness on the part of, and greater positive impact on, those being assessed. This is because it is removed from the artificial constraints of the timed test and because it often relies on multiple samples of writing covering different genres and themes from which students can choose their best work. In the words of Swain (1985) it ‘biases for best’ and ‘works for washback’ in ways that the timed test apparently cannot. Elbow (1991), for example, argues:

Testing is a process of “checking up on” students to see who is smart and who is dumb, who has been working hard and who has been goofing off. … Portfolio assessment takes the stance of an invitation: “Can you show us your best work, so that we can see what you know and can do – not just what you do not know and cannot do?” (pp. xv–xvi)
Allowing bilingual dictionaries for portfolio work authentically mirrors real-life writing activity, and such dictionaries therefore arguably have a legitimate role to play in the writing process. Furthermore, portfolio or coursework assessment of writing is one means of providing a broader range of evidence about what learners can do, commensurate with an understanding that assessment for learning is “far more than just a summative collection of information about learners’ achievements” but rather “a vital part of the learning process [that] impacts on the course of pupils’ learning” (Harlen & Winter, 2004, p. 391). Dictionaries make their contribution here as mediators of a valid communicative strategy which L2 students are learning to use.

As a result, access to resources like dictionaries appears quite acceptable and useful in coursework and portfolio options, and it is relatively easy to justify the use of dictionaries in such options. The positive benefits of these types of assessment outlined above surely make them more useful and fairer as measures of writing proficiency than time-constrained writing examinations.
Dictionaries and timed tests

When it comes to questions of usefulness and fairness, however, portfolio and coursework options are not without their challenges. Allowing dictionaries in coursework also raises a fundamental issue. More ‘traditional’ or ‘direct’ tests of writing, variously referred to as the “timed impromptu writing test” (Weigle, 2002, p. 59) or the “snapshot approach” to writing (Hamp-Lyons & Kroll, 1997, p. 18), remain a popular method of assessing writing proficiency and they
continue to be found in many large-scale testing situations internationally. Given the centrality of timed tests to the assessment of writing proficiency, we need to take their usefulness and fairness as assessment tools seriously, and it is very important to think about whether the bilingual dictionary also has a role to play in these types of test. More importantly, we need to consider why allowing dictionaries in such writing tests is more controversial.

The first question to ask, of course, is why the timed test format continues to be so popular and widespread given what appear to be the distinct advantages of coursework. There are without doubt benefits to the timed test method. Wolcott and Legg (1998) suggest that, particularly from the perspective of measurement experts, timed tests offer several advantages:

• A number of variables that may influence the test takers’ writing performance, such as the topic, availability of resources, time allowed to complete the task, and scoring methods, are controlled and kept as identical as possible.
• Using controlled conditions means that the writing performance of any one test taker can be evaluated relative to the performances of all other test takers.
• Using controlled conditions means that there is no opportunity for the test takers to pass off somebody else’s work as their own, and questions over the extent of feedback or support do not arise. The certainty that we are assessing the test takers’ own work is particularly important when individual competences are being considered.
The problem of plagiarism has been identified as a major weakness of coursework (Smith, 1991): how can we know for certain that students’ work in portfolios is their own? One solution to the plagiarism problem might be to allow those being assessed to complete coursework only in class, under supervision, and with some time constraints (Green, 1985). If this limitation is placed on coursework, however, why not just use a timed test?

Indeed, the benefits of timed tests highlight an important aspect of their usefulness. They are reliable in so far as they control for variations in test task, test condition, and scoring method, standardizing these across all test takers. This also contributes to the construct validity of timed tests when the construct being measured relates to writing performance which is being tested uniformly and in a controlled way. Certainly, Grabe and Kaplan (1996) admit that reliability can be seriously challenged in portfolio assessments. Additional benefits of timed tests are that they are practical and relatively cost-effective to administer and score (Wolcott & Legg, 1998). They require a set time period for completion and the deployment of test supervisors for this set, relatively short, period. They therefore enable short turnaround times, and they can be used in almost any setting (White, 1995). Finally, although timed tests place the focus on
the writing product rather than the writing process, it has been suggested (Gorrell, 1988; Lederman, 1986; Wolcott & Legg, 1998) that products are ultimately what we get to see, and therefore what we are concerned with evaluating. Where, then, might dictionaries fit into timed writing tests? On the face of it there would appear to be no reason why dictionaries should not be allowed in timed tests of writing proficiency. After all, the construct of writing proficiency based on a framework of communicative competence aims to allow language learners to use language to create meaningful communication in ‘authentic’ and real-life contexts. Timed tests based on this framework frequently aim to reflect such real-world contexts by asking the test takers to write a postcard or a letter, possibly in response to some culturally authentic stimulus material (a picture or another letter in the target language). If we are asking the test takers to compose a letter we must at least acknowledge that when they respond to and write letters in an L2 in real life they might very well use a bilingual dictionary to help them with this task. If we disallow dictionaries from timed tests there is a problem about the extent to which performance on the test can be generalized to performance in the real world and can therefore be ‘authentic’. One way round the problem of lack of authenticity might be to test L2 learners through tasks that mirror TLU domains that are not so dependent on time or resources. An e-mail, for example, might be potentially highly authentic as a writing task set within time constraints and without resources because it is often a product “written … in haste, without much collaboration, and without much chance to “go back in” to erase or correct” (Wolcott & Legg, 1998, p. 19). 
But anything beyond the limited confines of a short e-mail or a postcard, and certainly something requiring extended discourse, can often not be replicated authentically in the timed test (especially if no support resources are allowed), and we frequently require longer responses so that we have a reasonable sample of language on which we can make judgments about writing proficiency. According to Bachman and Palmer (1996), authenticity is also important for the test takers, because it contributes to test takers’ perceived relevance. This perceived relevance helps to promote a positive affective response to the test task, thereby helping test takers to perform at their best. The test takers’ interaction with the task may therefore be negatively affected if there is perceived lack of authenticity in the task. If using a bilingual dictionary is an authentic part of writing in the real world – what L2 users do when, for example, writing a letter in a real context – and if its use potentially supports the test takers, why should the dictionary be excluded from a timed test that measures test takers’ ability to carry out an authentic task like writing a letter? The answer to this question reveals the complexity of the issue where time-constrained tests are concerned.
Chapter 2. On dictionaries and writing
The problem of dictionaries in timed tests

In Chapter 1 I argued that policy decisions with regard to allowing or outlawing dictionaries in time-constrained examinations are symptomatic of a tension between two opposing value systems and two different understandings of constructs. This tension gives rise to the essential difficulty. Where the emphasis of the test is on the measurement of test takers’ prior learning it is important to establish that the test can measure this adequately. If “the primary function performed by tests is that of a request for information about the test taker’s language ability” (Bachman, 1990, p. 321), tests are required to measure such ability against particular benchmarks, and this measurement function relies, for the accuracy of information it provides, on not allowing anything that will get in the way of test takers’ performances. By this argument, allowing a bilingual dictionary may potentially compromise the fairness of the test as a measure of the underlying construct by ‘giving the answers away’. Indeed, this is how dictionaries in writing tests have often been viewed. As Bishop (1998) explains, “[d]ictionaries were put in the same category as cheating or were accused of making the assessment either impossible to set or the task too easy to achieve. This attitude is deeply rooted even today” (p. 3). However, one counter-argument to the ‘dictionaries make it all too easy’ perspective is that some test takers may simply not be able to use a dictionary effectively. In these cases including it in the test may make the test more difficult. Some test takers may actually do worse than we might have expected, and any ‘advantage’ the dictionary was supposed to give is negated.
Another counter-argument is that if dictionary use is a valid part of the construct of writing proficiency which the timed test should aim to measure (namely because it enhances authenticity and provides legitimate support) outlawing its use in the test may mean that an important part of the construct has been excluded. In this case some students may do worse than they otherwise would have done and we do not get a true measure of their writing ability. These different arguments also impact on considerations of fairness. Cumming (2002) argues that in one key respect the timed test is fair: The assessment of writing in high stakes, large scale, international settings is obliged to provide a uniform context for examinees’ performance … so that examinees all start from a basis of equal opportunity. The constructs to be assessed cannot be biased for or against any particular population, nor can they seem to privilege any people with specific abilities or knowledge. (p. 74)
Indeed, for Cumming (2002) the fundamental requirement for fairness in such tests is that the test takers have an equal opportunity to perform on the test.
Cumming goes on to suggest that fairness is also built into the requirement of comparability of performance “from one place to another place, from one time to another time, or from one version of the test to another version” (p. 74). In his view, such ‘equitability’ considerations present a challenge for alternative assessment procedures, such as portfolios. One equity challenge for portfolios is that they are highly localized to specific contexts. There is also often no way of controlling the amount of help the writers are getting or the range of support resources on which they can draw. This is surely unfair on those who receive less help, who have less access to resources, who have access to different types of resource, or who cannot use these resources very well. If we were to allow dictionaries in timed tests, we would also have to face these ‘unfair’ considerations. Whatever way we look at it, it would seem that dictionaries are problematic in timed tests. It is perhaps easy to see why Bishop (1998) concludes his assessment on dictionaries by asserting that “[a]rguments still rage over the role of the dictionary and what effect it has on assessment in particular” (p. 3). Depending on the stance taken in the ‘dictionaries are in / dictionaries are out’ debates we would have to conclude that tests are, in the words of Messick (1989), “imperfect measures of constructs because they either leave out something that should be included according to the construct theory or else include something that should be left out, or both” (p. 34). Messick (1989) develops this perspective by articulating the two major problems that, in his view, might threaten the validity and fairness of a test:

1. Construct under-representation: the test tasks exclude important dimensions or facets of the construct we are aiming to measure. Test results may therefore not reveal the test takers’ true abilities in relation to the construct.

2. Construct irrelevant variance: the test measures variables that are not relevant to the construct:
   a. ‘Construct irrelevant easiness’ occurs when additional information, unrelated to the construct we are aiming to measure, is provided in or with the task, and this information makes the task easier for some test takers. They thereby score higher than they might otherwise have done.
   b. ‘Construct irrelevant difficulty’ occurs when additional unrelated information is provided, and this information makes the task more difficult for some test takers. They thereby score lower than they might otherwise have done.

What does this have to do with dictionaries? Regardless of how the construct is understood, it is important for us to eliminate anything that may threaten a test’s
validity. And depending on how the construct is understood, a valid score on the test is one that has not been unduly affected by either construct under-representation (where something has been left out and the scores may be invalidly low or high), or by construct irrelevant difficulty (where something extraneous has been included and the scores may be invalidly low), or by construct irrelevant easiness (where something extraneous has been included and the scores may be invalidly high). If we are concerned that a timed writing test should measure students’ prior learning and should help us to discriminate between test takers of different proficiency levels on the basis of their test scores, we need to make sure that we take account of anything that might interfere with this measurement – otherwise the interpretations we make on the basis of the scores may be spurious. This might mean saying ‘no’ to dictionaries. If, however, we are concerned that test takers should be given the best opportunity to display what they can do in a timed test we need to make sure that we take account of anything that might support this – otherwise score interpretations may also be spurious. This might mean saying ‘yes’ to dictionaries. And so, when it comes to considering the timed writing test, the most popular and most widespread means of assessing L2 student writing, we come up against huge challenges when we consider allowing dictionaries. On the one hand, the enormous benefits of timed writing tests – practicality, cost-effectiveness, ease of administration – have contributed to their popularity. Furthermore, central to the concept of the timed test are the measurement properties of reliability and validity. Where the construct is perceived more narrowly as ‘writing performance’ or the ‘assessment of learning’, and where discriminating among test takers is key, the timed writing test arguably has high construct validity. 
On the other hand, the lack of authenticity found in the unnatural time-constraints and the outlawing of support resources works against them. However, if construct validity is central to the usefulness of a timed writing test, and if a major consideration has to be reducing threats to the test’s validity, we must consider where, if at all, dictionaries fit in. We require evidence on the basis of which we can make sound decisions.
Where to from here?

This chapter has dealt with a number of complex but important issues surrounding language learning, language use, and language testing. It would be useful, in concluding, to relate these issues back to the central theme of this book – dictionary use in writing. I have suggested the following:
• Bilingual dictionaries are potentially supportive tools for L2 learners – they help them to make up for gaps in their knowledge.
• Bilingual dictionary use is authentic – L2 users use them in the real world if they do not know or cannot understand something.
• The ideal assessment of writing proficiency will aim to mirror real-world practice as authentically as possible.

Relating these to issues of construct validity I also suggest the following:

• If writing is assessed through coursework or portfolios it is arguably easier to justify including dictionaries as legitimate tools.
• Allowing dictionaries in timed writing tests is more problematic. If, however, we outlaw the use of dictionaries in such tests there is an immediate disjunction between language testing and language use in the classroom and beyond.
• If dictionaries are used in the classroom and beyond, students will have got used to them and it is likely that they will see the relevance of them. If students are denied them in the test this may have negative consequences for them (they may not understand why a resource they have come to see as relevant has been taken away from them, and this may impact on their performance).
• On the other hand, there is clearly a need to preserve the measurement qualities of the test (its ability to measure writing proficiency and to discriminate between different levels of test taker ability).
• We also need to be mindful of fairness. Having a dictionary in a writing test may be beneficial for some and not for others. Some test takers may use it well; others may find that their performance is compromised. This consideration also impacts on allowing dictionaries in coursework.
A whole range of issues influences decisions on the usefulness of dictionaries in writing assessments and the consequent fairness of the assessment for all stakeholders. Given the centrality of timed writing tests in many contexts, there would appear to be sufficient grounds on the basis of how they are currently carried out – lack of authenticity, time constraints, impact on the test takers – to argue that we need to look closely at the timed test as a testing procedure and to consider its construct validity. Rather than simply accepting the ‘dictionaries drive up marks and make the test too easy’ argument, or the ‘students do not know how to use them and this undermines their performance’ argument, we need to take a proper look at the difference that bilingual dictionaries actually do make to writing performance.
However, the evidence provided by performance alone is insufficient to claim that a particular writing test is construct valid. We need, as Bachman (1990, 2000) and Messick (1989) suggest, to move beyond a conceptualization of construct validity that focuses solely on the test and interpretations of scores. Arguments surrounding what does and does not constitute a fair test have consequences for the stakeholders, whether good or bad, that have implications for that test’s validity. Construct validity must also consider the value implications and social consequences of the test – the impact of the scores on the test takers – and therefore whether the scores were affected by facets of the test procedure that biased the test against some test takers. We need to consider the differential impact of dictionaries on a range of test takers. Also, we need to find out what the test takers themselves think. Weigle (2002) suggests that an evaluation of test usefulness should not be separated from a consideration of fairness and equity. She relates issues of fairness to a canvassing of the views of stakeholders. Rea-Dickins (1997) suggests that perhaps the most important stakeholders who might be consulted are the test takers themselves. This view is in accord with Bachman and Palmer’s (1996) assertion that “one way to promote the potential for positive impact is through involving test takers in the design and development of the test, as well as collecting information from them about their perceptions of the test and test tasks” (p. 32). It also lines up with Messick’s (1989) recommendation that test-taker perceptions should be included as a crucial source of evidence for construct validity. It may be argued (East, 2005c) that such an investigation may just result in face validity – that is, the test is perceived to be reasonable, fair, and appropriate, but the perception may be of little value in evaluating test validity (Savignon, 1983). 
Test-taker perceptions are therefore often not seen as central to the test validation process because of this (Elder, Iwashita, & McNamara, 2002). But if test taker perceptions are laid alongside all the other evidence in a broader construct validation enquiry, such an enquiry may help to establish whether a given testing procedure is both useful and fair, particularly with regard to the test takers as major stakeholders. The questions we need to look at are these.

1. Does bilingual dictionary use affect test performance? There are two dimensions here. Do test takers actually get higher or lower scores when using a dictionary? Or does using a dictionary make no difference to test scores? Secondarily, does use of a bilingual dictionary enhance or diminish writing quality? There would seem to be little point in allowing a dictionary if it did not actually help students to improve the quality of their work (or, in the words of Ard (1982), to develop more active expressive abilities), or if it had a detrimental
effect on the quality of their work (or, in Asher’s (1999) words, if inaccurate language production diminished rather than improved students’ performance).

2. What about the test takers? How do they use dictionaries? Do they think they can use them effectively to improve their writing? What do they think about having dictionaries in the test? How can we help the test takers to improve their use of dictionaries?

Answers to these questions have relevance not only for timed tests, but also for coursework and portfolio options. In Chapter 3 I begin to explore how we might answer these questions effectively and in a way that will provide us with a comprehensive and multi-faceted picture. I look at the type of research studies we could set up to make a fair evaluation, and I include a summary of the research work of Hurman and Tall (1998), which was pivotal in the UK decision to remove dictionaries from L2 examinations.
Chapter 3

Does the dictionary really make a difference?
There are several very good reasons why we should consider allowing students to take bilingual dictionaries with them into assessments of their L2 writing proficiency. Bilingual dictionaries are potentially supportive tools for L2 learners. Their use is authentic, that is, consistent with what L2 users do in the real world if they do not know or understand something. In Chapter 2 I suggested that the ideal assessment of writing proficiency would surely aim to mirror such realworld practice as authentically as possible. Coursework and portfolio options would appear to do this quite well. The issue of allowing dictionaries in timed writing tests is less straightforward. Based on the need to preserve the measurement qualities of the test (the ability of that test to measure writing proficiency and to give us information about differential levels of test taker ability), there has been pressure to disallow dictionaries. They might, for example, make the test too easy. If test takers were allowed to use them we might get an inaccurate picture of their writing ability. Alternatively, dictionaries could potentially make the test more difficult for those who did not know how to use them effectively. Weigle (2002, p. 106) brings out several of the tensions around dictionaries in timed tests quite well. She argues that in traditional language tests vocabulary knowledge is regarded as part of the construct being tested, and on this basis dictionaries would usually be outlawed. She suggests, however, that if writing ability were defined more broadly in a way that included the use of any available resources, using dictionaries might become a possibility. Indeed, it could be argued that good writers know how to use dictionaries efficiently and in a way that enables them to choose the appropriate words for the concepts they wish to convey. As anecdotal evidence demonstrates, however, students often do not know how to use dictionaries effectively. 
Also, using dictionaries may impact negatively on the time available to complete a written task. If test takers cannot use dictionaries well this might cause them to underperform in writing tests. A range of issues therefore impacts on the use of dictionaries in writing tests and the consequent fairness of the test for everybody. The notion of what constitutes a ‘fair’ test lies at the heart of discussions around different types of testing. Certainly in virtually all types of testing students receive some kind of grade or mark which differentiates their performance
from that of others. Tests clearly differ in the extent to which scores are used to provide evidence of performance, but wherever they are used a central concern of all the stakeholders is that those scores need to be reliable and fair, or construct valid. As explained in Chapter 2, reliability and construct validity have to do with the meaningfulness and accuracy of the scores awarded, in relation to the measurement of an underlying construct. The scores provide an indication of the ability or construct. Both reliability and validity are connected with the concept of fairness, with validity focusing on the task itself and reliability concerned with the consistency with which task performances are measured. Weir (2005) sees the two qualities as so inter-dependent that he describes reliability as ‘scoring validity’. The central importance of test score evidence means that if we are genuinely concerned with the difference a dictionary might make, it would be useful to find out whether test takers get higher or lower scores when using a dictionary, or whether using a dictionary actually makes no difference to scores. Test score evidence is crucial in helping us to begin to make some meaningful decisions around the usefulness and fairness of having a dictionary in writing tests. If, however, we want to carry out a thorough investigation into the difference, if any, that having a bilingual dictionary in writing is actually going to make, we ideally need to set up a study or studies which can draw on several sources of evidence – not just test scores. This chapter looks at the type of investigation we could set up if we wanted to get the fullest picture possible of dictionary use in writing tests. I explore the fundamental concept of a ‘repeated measures’ study, bringing out the types of consideration that are important to make this type of study effective. 
I go on to give an overview of Hurman and Tall’s 1998 repeated measures study into French dictionary use – one which was crucial in the recent debates around dictionaries in the United Kingdom. The studies which form the focus of the remainder of this book are then described, and I conclude with my own findings with regard to test scores and what we can learn from these. The first two sections of this chapter will be helpful for those who would like to get a fuller understanding of the methodological rationale for the studies I carried out. Readers who are experienced in the research process may want to skip these two sections and pick things up again on p. 41.
Setting up a comparative study

If we are interested in seeing whether a class of L2 students actually performs differently in a writing examination when the students have access to a dictionary we
could set up a comparative study. We could divide the class into two groups, and then give the groups the same or a similar writing task, but in two different conditions – one group would take the test with a dictionary, and one would take it without. If we subsequently compared students’ performance on the two tests (as measured by their writing scores) we would be able to judge whether there were any differences, and, if so, the extent of those differences. We would look not only at the average performance in the groups, but could employ statistical measures to determine whether any observed score differences were most likely down to chance, or whether they were indicating the existence of some real factor (like the dictionary) which was helping to make the difference. The smaller the probability that the differences were down to chance, the more confident we could be that some intervening factor was having an effect. We could argue for a real (statistically significant) difference if the probability that score variations were random was no more than one in twenty (p ≤ .05). This type of independent groups study can provide us with adequate and valid comparative evidence, particularly when dealing with larger sample sizes. However, if sample sizes are small it is important to make sure that the two groups are comparable. Otherwise it would be difficult to isolate the intervention of the variable under consideration from any differences inherent in the groups.
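The chance-versus-real-difference judgment described above can be illustrated with a simple permutation test: scores are repeatedly reshuffled between the two groups, and we count how often a purely random split produces a mean difference as large as the one observed. The sketch below uses entirely hypothetical scores (marks out of 20) and is an illustration of the statistical reasoning only, not data or a procedure from any study discussed in this book.

```python
import random

def permutation_test(group_a, group_b, n_perm=10_000, seed=1):
    """Estimate the probability (p-value) that the observed difference
    in mean scores between two independent groups arose by chance."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # reassign scores to the two groups at random
        diff = abs(sum(pooled[:n_a]) / n_a -
                   sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

# Hypothetical writing scores for the two test conditions
with_dictionary = [14, 16, 15, 17, 13, 18, 15, 16]
without_dictionary = [12, 13, 14, 12, 11, 15, 13, 12]

p = permutation_test(with_dictionary, without_dictionary)
print(f"p = {p:.4f}  (significant at the .05 level: {p <= 0.05})")
```

If p ≤ .05, we would conclude that random variation alone is unlikely to explain the score gap between the groups.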
Setting up a repeated measures study

Another useful approach would be to work with only one group of students. In a repeated measures dictionary study we could give this one group of students two similar writing tasks in two different conditions – once with the dictionary, and once without. We could then compare each student’s performance in both testing conditions. We would also need to consider ways of isolating the effect of the dictionary from any other potentially impacting factors. In the actual test administration, three considerations are important. Firstly, if you allowed, say, a week’s gap between testing you would not know whether students’ performances improved or became worse due to some factors outside dictionary use – the students might have studied extra hard in that intervening week and thereby improved their performance in a way that was irrelevant to having a dictionary. We would need to control for this potential practice effect. A second type of practice effect may become apparent if the participants always took tasks in a particular order – for example, the ‘without dictionary’ task followed by the ‘with dictionary’ task. Participants might perform better on the second task because it was second rather than because they had a dictionary. We would need to take account of this order effect. Thirdly, one test task might
be somewhat easier or more difficult for a number of participants in a way that was completely unrelated to dictionary use, and this might influence their performance. We would need to take account of this task effect. Practice effect would be minimized if the participants took the two tests one after the other with a minimal time-lag between each test administration. Order effect and task effect could be mitigated through counterbalancing. That is, some participants would take Task 1 followed by Task 2, and others would take them in the reverse order (two groups). Also, some would take Task 1 with a dictionary and Task 2 without, and others would take them the other way round (two more groups). This type of counterbalancing would thereby contribute to the elimination of biases which might invalidate any conclusions we draw (Keppel, 1991). Two further considerations are important. Firstly, we would need to ensure that the scoring procedures could be shown to be highly reliable and construct valid. As Weigle (2002) suggests, “scoring procedures are critical because the score is ultimately what will be used in making decisions and inferences about writers” (p. 108). This would include:

• drawing up sufficiently detailed scoring criteria which adequately reflected the construct being measured and which would be sensitive enough to differentiate between different levels of test taker performance;
• having at least two independent raters who received training in the use of the scoring criteria;
• putting measures in place to determine how consistently the raters used the scoring relative to each other; and
• making sure that the raters did not know which test script had been written in which test condition – with or without a dictionary – because knowing this might somehow influence their marking, albeit at a subconscious level.
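The counterbalancing scheme described above – crossing task order with dictionary condition to give four groups – might be sketched as follows. Participant identifiers and group labels are hypothetical; the point is simply that rotating participants through all four order/condition combinations spreads order and task effects evenly across conditions.

```python
from itertools import cycle

# Four counterbalanced groups: task order crossed with dictionary condition
GROUPS = [
    (("Task 1", "with dictionary"), ("Task 2", "without dictionary")),
    (("Task 1", "without dictionary"), ("Task 2", "with dictionary")),
    (("Task 2", "with dictionary"), ("Task 1", "without dictionary")),
    (("Task 2", "without dictionary"), ("Task 1", "with dictionary")),
]

def counterbalance(participants):
    """Rotate participants through the four groups so that order and
    task effects are spread evenly across the two conditions."""
    return {p: g for p, g in zip(participants, cycle(GROUPS))}

assignment = counterbalance(["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"])
for person, (first, second) in assignment.items():
    print(person, "->", first, "then", second)
```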
In a study to determine the effect of using a dictionary this last point is critical. Spolsky (2001), for example, suggests that “it is believed that dictionary use improves performance on the examination” (p. 4, my emphasis) – a perception also mentioned by Bensoussan, Sim, and Weiss (1984). If, for example, the raters think that ‘with dictionary’ answers should be better they might be tempted to be harsher in assigning marks to ‘with dictionary’ scripts. Alternatively raters might be a little more generous in the belief that the test scores should indicate an improvement in writing. A second important consideration is to employ measures other than test scores to see if we could locate and explain more thoroughly what might be accounting for any differences. We could analyze the quality of the writing in other ways. We could use questionnaires, interviews, or focus groups as means of homing in on
participants’ use or perceptions of the dictionary. This type of ‘mixed method’ approach – drawing on both quantitative and qualitative data – provides greater breadth and depth, and arguably adds to the reliability and validity of any empirical investigation, because it is able to take into account multiple and distinctive forms of evidence (Greene, Caracelli, & Graham, 1989; Schutz, Chambless, & DeCuir, 2004). Comparative test score data are crucial, but other data add significantly to the available evidence.
Findings of an independent groups study

Rivera and Stansfield (1998) argue for the importance of generating comparative test score evidence. These two researchers have been particularly concerned with allowing the use of test accommodations – dictionaries, glossaries, extra time allowance – when students with limited proficiency in English (so-called LEP students) are required to take mainstream tests in English alongside English L1 speakers. Rivera and Stansfield assert that accommodations aim to ‘level the playing field’ by eliminating irrelevant obstacles that may affect some test takers’ performance. Such accommodations are therefore designed to provide equality of opportunity for all students. They support weaker LEP students in assessments so that they are not hindered in displaying what they know and can do by a linguistic barrier – not being able to understand fully what the question is asking. There are, however, recognized problems with accommodations (Butler & Stevens, 1997):

• It may be difficult to identify the subset requiring the accommodation and the needs of the subset.
• Unfair advantage or disadvantage may be given to test takers who either have or do not have access to a particular accommodation (indeed, test takers with the accommodation may be advantaged or disadvantaged by it, just as test takers without the accommodation may be).
• This may bring into question whether all test takers do approach the test with a ‘level playing field’, and whether the test is therefore valid.

Several of Butler and Stevens’ concerns are of direct relevance to the dictionary use question. It may be that some dictionaries would provide extra help or support that others do not. If students are allowed a free choice in the type of dictionary they take into the examination with them, some may have an unfair advantage. Also, test takers who are more proficient at dictionary use might be at an unfair advantage over those who cannot use a dictionary well.
Unfair advantage or disadvantage is a threat to the construct validity of the test. Rivera and Stansfield (1998) go on to argue, however, that if score comparability between tests with and without an accommodation has been established (no significant difference in scores), the accommodation can be rightfully endorsed. By their argument, no significant difference in scores would mean that the accommodation is fair in that it does not compromise the measurement of the construct.

To investigate whether this was likely to be true for LEP test takers, Rivera and Stansfield (2001) looked at the performance of over 11,000 non-LEP students on two sets of tests, some presented in standard non-simplified English, and some that had been linguistically simplified. Participants took one of the two types of test. The researchers found that there was no significant difference in scores between the two tests. Stansfield (2002) describes this finding as important because it demonstrates that linguistic simplification can be used for non-LEP test takers without concerns that it provides unfair advantage and thereby affects the comparability of scores across all test takers. This is because, from the point of view of non-LEP test takers (that is, those who did not require the accommodation), linguistic simplification did not make the test ‘too easy’ and did not therefore distort the scores as a measure of ability.

Bearing in mind that this finding relates to a construct other than the L2 ability of L2 learners, the finding does have implications for the wholesale use of dictionaries in L2 tests. If test takers took tests in two conditions, with a dictionary and without a dictionary, and a non-significant result were to be found when test scores were compared, this would indicate that there was no crucial threat to the construct validity of the test as a snapshot measure of test takers’ writing performance – at least as far as we could tell from score evidence. That is, the L2 test takers did not ‘require’ the dictionary to demonstrate their ability and its inclusion was not a confounding variable that distorted the ability being tested – making the test seem ‘too easy’ or ‘too hard’.
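Rivera and Stansfield’s score-comparability logic lends itself to a paired-samples design: each test taker writes once with and once without the dictionary, and the mean of the paired differences is tested against zero. A minimal sketch, with wholly hypothetical scores (the function name and data are illustrative, not taken from the studies discussed here):

```python
import math
from statistics import mean, stdev

def paired_t(with_dict, without_dict):
    """Paired-samples t statistic: each test taker contributes a score
    in both conditions, so we test the mean of the paired differences."""
    diffs = [w - wo for w, wo in zip(with_dict, without_dict)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Hypothetical scores out of 35 for six test takers under both conditions.
with_dict = [24, 21, 28, 19, 31, 25]
without_dict = [23, 22, 27, 20, 30, 24]

t = paired_t(with_dict, without_dict)
print(round(t, 2))  # → 0.79
```

A small t statistic (here well below the two-tailed critical value of about 2.57 for df = 5) would be read, on Rivera and Stansfield’s argument, as evidence of score comparability.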
Findings of a repeated measures study

Hurman and Tall (1998) were also interested in obtaining comparative test score evidence. These researchers were commissioned by the UK’s Qualifications and Curriculum Authority to investigate the difference that bilingual dictionaries made in tests. In a study conducted with over 1,000 participants they utilized a repeated measures design to look at the effect of using a bilingual English/French dictionary on scores in GCSE French writing at the two levels of the examination – Foundation and Higher Tiers. They were also interested in the comparative impact of different dictionaries on the scores awarded.
Chapter 3. Does the dictionary really make a difference?
The study incorporated a variety of quantitative and qualitative elements so that it could go beyond test score evidence and paint a broader brush-stroke picture. These included two examination papers at both Foundation and Higher Tiers, designed to be as similar as possible, and taken by each participant (one with, the other without a dictionary); a participant questionnaire after writing the ‘with dictionary’ paper; a qualitative analysis of dictionary look-ups made by participants of different ability ranges across a subset of papers; and an observation schedule to record frequency and duration of the look-ups.

Three basic types of dictionary were selected, based on the discovery that these three were in common use in schools (Tall & Hurman, 2000):

a. Collins Pocket French Dictionary (Collins, 1995a) – a more ‘traditional’ type.
b. Collins Easy Learning French Dictionary (Collins, 1996) – a newer type that aimed to make student use as easy as possible.
c. Your French Dictionary (Malvern, 1996) – tailor-made for the GCSE.

Some other dictionaries were allowed to some test takers, although the majority used one of the three specified dictionaries. It was considered that a useful comparison could be made between the three because they differed in content and presentation. The test papers used contained the types of task illustrated in Figure 3.1.

Foundation Tier: Form-filling (ten one-word answers); Postcard writing (maximum 40 words); Letter writing (approx. 70 words)
Higher Tier: Letter writing (approx. 70 words); Extended writing (approx. 150 words)

Figure 3.1 Test task types across the two tiers of the GCSE.

Hurman and Tall discovered that Foundation Tier candidates gained an average increase of two marks (9%) on the ‘with dictionary’ paper in comparison to their scores on the ‘without dictionary’ paper. Higher Tier candidates gained an average increase of between two and three marks (9%). A couple of marks’ difference might not seem very much, but the differences in scores were found to be statistically significant (Tall & Hurman, 2002).

The use of different dictionaries appeared to have different effects on different types of question. It was demonstrated, for example, that at Foundation Tier the Collins Pocket might give a slight advantage, whereas at Higher Tier test takers using it were disadvantaged – the overlapping question was answered more effectively by those using Your French Dictionary and the extended piece of writing was answered more effectively by those using the Collins Easy Learning dictionary. In the light of Butler and Stevens’ (1997) argument around unfair advantage and disadvantage, these differences reveal important considerations: it is unfair if one group of test takers, using one type of dictionary, can thereby do better than another group with a different dictionary.

Hurman and Tall also found that a good deal of examination time was spent using the dictionary. On average, participants accessed a dictionary at least once in every two-minute period, with the majority of look-ups taking between 20 and 40 seconds. Although only 10% of look-ups took more than 60 seconds, there was clearly frequent and constant dictionary use throughout the tests, which would have detracted somewhat from the time available for task completion.

Several instances of actual dictionary use by candidates were recorded (Hurman & Tall, 1998, p. 17). There were examples of phrases being taken directly, and successfully, from the dictionary:

French response: la semaine dernière
Meaning: last week

French response: c’est tellement gênant
Meaning: that’s really embarrassing

On the other hand, there was evidence of considerable confusion about the differences between, for example, verbs and nouns, and lack of understanding about the dictionary’s metalanguage. This resulted in many English expressions being directly transliterated into meaningless French:

Error: J’ai propré / propu ma chambre
Intended meaning: I cleaned my room
Nature of error: An inappropriate word was selected (‘clean’ as an adjective rather than a verb).

Error: Je s’éteindre à 8h.
Intended meaning: I went out at 8 o’clock
Nature of error: An inappropriate verb was selected (s’éteindre = to extinguish) and was used in its infinitive form.

Error: Puis semaine
Intended meaning: Next week
Nature of error: An inappropriate word was selected (‘next’ as an adverb when an adjective was required).

Error: Je gauche
Intended meaning: I left
Nature of error: An inappropriate word was selected (‘left’ as an adjective and not as the past participle of ‘leave’).
Error: Je suis deborder des sur un madame
Intended meaning: I spilt some on a lady
Nature of error: déborder (spill) and des (some) were incorrect in the context, and the gender of madame was also incorrect.

Error: Une fille abattre duvet
Intended meaning: A girl fell down
Nature of error: Inappropriate words were selected (‘fell’ as in ‘chop’ and ‘down’ as in ‘duvet’).
In addition to errors such as these a large majority of participants failed to use the dictionary to check genders or spellings of words. These types of inaccurate dictionary use pointed clearly to a distinct weakness of dictionary availability in writing tests, and one which would bring into question whether dictionaries should be allowed.

The questionnaires revealed that 62% of participants had been trained to use a dictionary. This is a high number in comparison with findings of other research (Atkins & Varantola, 1998a; Hartmann, 1983, 1999), although in setting up the project teachers had been asked to ensure that participants had received some instruction in dictionary use. Only 42%, however, claimed to have been trained to use the dictionary they had in the examination. Nevertheless 95% felt able to find the French equivalent of an English word and 94% said they could do the converse. Given the instances of error, this is a self-confidence not necessarily matched with proficiency. Some test takers may have thought that they could use a dictionary better than they in fact could (as has already been stated, if students do not know how to use dictionaries effectively, use of a dictionary during a language test may be counterproductive). Nevertheless, 88% of participants commented that being able to use a dictionary in an examination made them feel more confident, and 3% admitted to using it more frequently than necessary “because it was there” (Hurman & Tall, 1998, p. 26). This extra boost to confidence in the examination is an important consideration because it potentially leads to a more positive feeling about the test and more positive engagement with the task.

In view of the finding that participants scored higher marks when using a dictionary, Hurman and Tall (1998) recommended that GCSE boards should take potential increases in scores into account when setting questions and establishing grade boundaries.
Their recommendations underscored the steps they perceived as necessary to ensure that ‘with dictionary’ tests were as useful and fair as possible, particularly noting the following (p. 48):

• The content of what was written about could be expanded.
• There could be less limitation on the language used in the test task or expected in the response.
• The mark scheme could put more emphasis on range and quality of language.
• The mark scheme could require a higher level of accuracy.
• Time allowed to complete the paper may need to be extended to allow more time for dictionary use.
Consequences of the Hurman and Tall study

There is a good deal that we can learn from Hurman and Tall’s study, at least for beginner and pre-intermediate test takers. Their findings, especially the uses to which the dictionaries were put, raise legitimate questions of concern about the dictionary as a potential liability. This research stands us in good stead as we consider the whole issue of dictionaries in writing, whether in testing or non-testing contexts.

Hurman and Tall’s study also had huge ramifications in the UK. A bulletin from Birmingham University (2000) reported what were seen as the undeniable consequences of their findings. Under the heading ‘Dictionaries Fail the Test’, it was stated:

Some may regard John Hurman and Graham Tall in the School of Education as spoilsports. The Government’s Qualifications and Curriculum Authority, however, fully endorses their research into the effect of taking dictionaries into modern languages examinations. Dictionaries are now banned.
The bulletin went on to underscore the particular role that the test score evidence – that test takers seemed to do better on ‘with dictionary’ tests – appeared to play in this decision:

Their findings unequivocally showed that using dictionaries, especially the newer types of language dictionaries now on the market, drives up marks. … The researchers did not go on to expressly recommend that dictionaries should not be allowed in the exam room, but the QCA had no hesitation in outlawing them. It makes languages examinations harder, of course, but it also makes them a better test of ability.
These somewhat emphatic observations appeared to support a perspective that the standard of examinations was being compromised by dictionary access. The observations are reminiscent of Bishop’s (1998) commentary that having dictionaries was somehow like cheating or that they made the assessment impossible to set or the task too easy to achieve. Hence, a decision to remove dictionaries from the examinations would make these examinations fairer and more useful measures of test taker ability.
Building on past research

No doubt Hurman and Tall’s test score evidence was one of several factors influencing the policy decision of the QCA (see, for example, Poulet, 1999). Nevertheless, given that they were part of the evidence available, Hurman and Tall’s test score findings are a concern. Firstly, there were several limitations to their score-gathering process (East, 2007). Also, Hurman and Tall had focused only on the GCSE, and yet the QCA dictionary ban also included higher level examinations. There is a world of difference between the type of language required for what Barnes et al. (1999) refer to as ‘single item recall’ tests – test takers giving, for example, a list of ten food items – and the extended discourse required in a higher-level discursive essay. As Barnes et al. conclude, “[t]he degree of ease or difficulty of some tasks is altered considerably with the use of a dictionary” (p. 24). This may well make it more appropriate to exclude a dictionary from some types of writing task, but it does not necessarily follow that dictionaries should be outlawed at all levels. Finally, in the light of Rivera and Stansfield’s (1998) argument that it is only when no significant difference in test scores is found that we can begin to consider seriously whether to allow a particular support resource, there is a need to gather further evidence, especially with higher levels of writing.

In the light of all this I carried out the studies which are explored in the remainder of this book. The three studies I present were formulated so that they could build on Hurman and Tall’s research. For comparative purposes the studies investigated a different foreign language (German in contrast to French) and higher levels of test taker ability (intermediate and beyond in contrast to beginners and pre-intermediate).
Given the unequivocal stance of the University of Birmingham concerning Hurman and Tall’s test score evidence, it was important to take all the necessary steps to ensure the reliability of my own scoring evidence. Particular attention was therefore paid to test scores. I also wanted, as Hurman and Tall had done, to go beyond test score evidence to several other dimensions of the test taking experience – including a qualitative analysis of test taker performance and a focus on test taker perspectives.

The first two investigations were small-scale exploratory case studies of students taking German courses at the intermediate level or above in a New Zealand tertiary institution, all of whom had been my own students (see Appendix 1 for an explanation of how I understand the term ‘intermediate’). At the time of the studies the institution offered six 14-week courses (two per year), forming part of a three-year Bachelor of Arts degree. The courses were also open to non-degree students on an ad hoc basis. Both studies drew on a small sample (n = 6 and n = 5). The data provided a rich picture of dictionary use from a variety of angles.
The third study built on the findings and conclusions of the first two, but was broader in scope. It involved 47 high-school students, all 17 to 18 years of age, taking German courses in schools in the Auckland and Northland regions of New Zealand in the academic year 2003. These students were preparing for an external high stakes examination in German (equivalent to A level) known as ‘Bursary’, which they were due to take two months after taking part in the study. This is an intermediate level examination which can be benchmarked to levels B1/B2 on the Common European Framework or CEF (Appendix 1). A range of different schools was targeted, including state and independent schools, and single sex and co-educational schools, so that a good cross-section of students could be included. The students were recruited via a letter of invitation, sent to German teachers, who were asked to discuss the project with their students and enlist the students’ involvement.

The final number of students who took part (39 girls and 8 boys from 11 schools) was quite small. Statistics from New Zealand’s Ministry of Education indicated, however, that on a national level 276 girls and 90 boys in 88 schools were preparing for the Bursary in German in 2003. The sample for the third study therefore represented 14% of girls taking Bursary German, 9% of boys, and overall just under 13% of the total cohort in 12.5% of schools. It was considered that the sample could be regarded as largely representative of the population to which any results might be generalized. It also allowed for a more quantitative investigation using statistical analyses.

Generalizability of findings is an important consideration. That is, it is important that the lessons we learn from a particular study can be applied more generally beyond the study to other situations.
Johnson and Christensen (2004) suggest that “[p]erhaps the most reasonable stance toward the issue of generalizing is that we can generalize to other people, settings, times, and treatments to the degree to which they are similar to the people, settings, times, and treatments in the original study” (p. 256). Bearing this in mind I leave the extent of the transferability of the findings to other contexts to the reader’s judgment.

A similar design was used in all three studies:

1. All participants took a placement test (Oxford University Language Centre, 2003). This test was used to provide a measure of the participants’ abilities for benchmarking purposes. The test categorizes students into one of six bands, from beginner to advanced.

2. They took two timed writing tests, one after the other. In the ‘with dictionary’ test they were asked to indicate, by underlining or highlighting, the items they had used as a result of dictionary look-up. The test scripts were subsequently analyzed in a variety of ways, both quantitative (for example, using lexical frequency profiling (Laufer & Nation, 1995)) and qualitative (for example, looking at actual instances of dictionary use).

3. They completed two short questionnaires which compared test takers’ preferences and strategies across the two test conditions. Attitudinal questions also focused on the participants’ personal opinions about the relative ease or difficulty of the tasks across the two conditions, and the impact of this on their sense of confidence.

4. They filled in a longer questionnaire with both open- and closed-ended questions, with the open-ended questions providing opportunity for exploration, probing, and elaboration (Peterson, 2000). Questions explored such aspects as how the dictionary was used, and perceived advantages and disadvantages of having a dictionary in the examination.

All three studies followed a similar schedule. Participants were allowed 50 minutes to complete the first timed writing test, whether with or without a dictionary. They then filled in a short questionnaire on that test task (5–10 minutes). This procedure was repeated for the second test. After both tests had been completed, participants were asked to fill in the longer questionnaire. Throughout the process of data gathering and analysis, procedures were put in place to ensure that the results derived from data were reliable (see Appendix 2).

The small-scale case study nature of the first two studies also enabled two additional sources of data collection that were not used in the final study – participant observation and interviews. The particular focus of the observations was on participants’ use of time – that is, time spent accessing the dictionary and number of dictionary look-ups. After the completion of the tasks, the participants were interviewed in groups, with the interviews designed to provide opportunity for the participants to explore any aspects of the study that particularly interested them.
Following on from the longer questionnaire, questions included how test takers had used the dictionary, and what they thought about using the dictionary. The interviews were audio-taped and subsequently transcribed so that I could revisit them later with a view to making a closer analysis.
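Lexical frequency profiling, mentioned above as one of the quantitative analyses, classifies the word tokens of a script into frequency bands and reports what percentage of the text each band covers. A minimal sketch in the spirit of Laufer and Nation (1995), with tiny invented band lists standing in for the published frequency wordlists:

```python
def lexical_frequency_profile(text, bands):
    """Return the percentage of word tokens falling into each frequency band.
    `bands` maps band name -> set of words; unmatched tokens are 'off-list'."""
    tokens = [w.strip(".,!?;:\"'()").lower() for w in text.split()]
    tokens = [t for t in tokens if t]
    counts = {name: 0 for name in bands}
    counts["off-list"] = 0
    for tok in tokens:
        for name, wordlist in bands.items():
            if tok in wordlist:
                counts[name] += 1
                break
        else:
            counts["off-list"] += 1
    total = len(tokens)
    return {name: round(100 * c / total, 1) for name, c in counts.items()}

# Miniature, purely illustrative band lists (real profiles use 1,000-word lists).
bands = {
    "first 1000": {"the", "i", "went", "to", "my", "room", "last", "week"},
    "second 1000": {"embarrassing"},
}
profile = lexical_frequency_profile("Last week I went to my room. It was embarrassing!", bands)
print(profile)  # → {'first 1000': 70.0, 'second 1000': 10.0, 'off-list': 20.0}
```

A higher proportion of tokens beyond the first bands is usually read as richer vocabulary, which is why the profile is useful for comparing ‘with dictionary’ and ‘without dictionary’ scripts.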
The first two studies

The six participants in the first investigation were drawn from a range of BA classes. The placement test indicated that three could be classified as ‘intermediate’ (Simon, Patricia, and Jenny), and three as ‘advanced’ (Sharon, Rachel, and Janet). Of the five participants in the second study, three were ‘intermediate’ (Mary, Jessica, and Peter) and two ‘upper intermediate’ (John and Sandra). These participants could be differentiated from those in the first study because they were members of a particular class, following a 14-week German course (Deutsch 3). This was the third of the six available courses, and was set at a level of language proficiency broadly commensurate with the intermediate level B1 of the CEF (see Appendix 1).

All the participants in both these small-scale studies brought with them significant background knowledge and prior learning in German. The majority were ‘mature’ students. In the first study no level of prior specific learning in writing skills or use of a dictionary was assumed. In the second study, however, the development of writing and dictionary skills was a major component of the course (East, 2005b, 2006c). To help with this participants were issued with two study guides at the start of the course:

1. A ‘Writing Study Guide’ dealt with key issues relevant to writing communicative texts. The guide provided practice in using devices such as adjectives, adverbs, conjunctions, and formulaic sequences (Wray, 2002) that would help students to develop both coherence and cohesion and enhance the quality of their writing. Participants were encouraged to learn a good number of the sequences suggested in the guide. Also included were examples of test tasks similar to those the participants would be carrying out at the end of their course, data from which would be used as part of the study. The guide also suggested ways of answering the tasks.

2. A ‘Dictionary Skills Guide’ contained 14 exercises designed to help participants to find their way around a bilingual dictionary. Most exercises were available for the participants to do in their own time, and answers were provided for self-checking. The participants were given ownership of practicing the skills themselves, thereby developing their own autonomy in the learning process.
In both studies, students were required in the tests to use the same one-volume bilingual dictionary – the Collins German Dictionary (Terrell, Schnorr, Morris, & Breitsprecher, 1997) – and participants were issued with a copy to take with them into the examination room. I selected this dictionary because it is among the most comprehensive published by HarperCollins. Also, one early survey into bilingual English/German dictionaries (Hartmann, 1983) had established that the original Collins German Dictionary, on which this dictionary was based, was considered the most popular of the more than 25 dictionaries represented: of those who knew the Collins German Dictionary, 96% saw it as ‘excellent’ or ‘good’, 17% ahead of the next most popular. As such, Hartmann observes that this dictionary did “surprisingly well” as a “relative newcomer” (p. 197), first published in 1980. There was therefore some evidence to suggest that the 1997 edition of the Collins German Dictionary would serve its users well. The dictionary contained over 280,000 references and 460,000 translations. Participants in the second study were also given practice opportunities in class time in using this dictionary.

The edition of the dictionary used was not fully compliant with the neue Rechtschreibung – the new spelling rules introduced in 1996 which were designed to simplify and rationalize German spellings (Heller, 2000). The dictionary did alert its users to the new rules through a detailed appendix section, although the rules only became fully mandatory in schools in Germany in 2005, and participants in the studies were not penalized if they did not follow them.

In addition to its extensive vocabulary, the dictionary also has a large central bilingual section containing some 70 pages of formulaic sequences and other items – Language in Use / Sprache Aktiv. This is a distinguishing feature of this dictionary. The intended aim of the section is “to help non-native speakers find fluent, natural ways of expressing themselves in the foreign language, without risk of the native-language distortion that sometimes results from literal translation” (Terrell et al., 1997, p. 819). I drew my students’ attention to this section, and explained to them that the phrases and examples, used effectively, would enable them to enhance the quality of what they were writing.

In the first two studies three types of writing examination task were set, which were labeled A, B and C. Two papers were provided for each task type (Task 1 and Task 2), so that one could be completed with a dictionary, and the other without. In the first study each participant carried out one of these tasks, and in the second each participant took all three tasks over a three-week period at the end of the course. The tasks have been reproduced in Appendix 3. The first two sets of tasks required test takers to write a letter in response to a stimulus presented in German.
In the first task type (A1 and A2) test takers were given a series of bullet points in English to guide their responses. In the second task type (B1 and B2) the bullet points were in German. In the third set of tasks (C1 and C2) students were presented with three discursive essay titles, in German, and were asked to respond to one of the tasks. Counterbalancing of task was partial (due to the small sample size), and focused on test condition rather than task order. That is, although all participants took Task 1 followed by Task 2, some took Task 1 with a dictionary, and some without.
The third study

The third study was distinct from the first two in three respects: number and type of participants (47 participants, 17 to 18 years of age); type of test task used (two discursive essays); and types of dictionary used (free choice). Participants were issued with a version of the Writing Study Guide about six months prior to taking the test tasks, and they and their teachers were encouraged to make use of it in class and at home. In practice many students did use it, although it was not mandated as a requirement to take part in the study, and did not form such a central part of test preparation as it had done in the second study.

The range of placement test scores (15/50 to 47/50) indicated a broadly normal distribution (Figure 3.2).

Figure 3.2 Fit of placement test scores to normal distribution. Note. The Anderson-Darling test revealed no reason to reject the null hypothesis of normality (AD = .405, p = .34).

Participants were placed into four ability levels based on the suggested bands: lower intermediate (n = 5), intermediate (n = 22), upper intermediate (n = 16) and advanced (n = 4).

The two writing tasks used in this study were similar to the types of essay these high school students would be expected to complete, without a dictionary, in their final Bursary examination. The main difference was that in the Bursary examination students would have a choice of titles. In the study no choice of title was given so as to contribute to controlling for task-related differentials in performance:

Task 1 – Sprachen in Neuseeland: “Alle Schüler sollen heutzutage mindestens EINE Fremdsprache in der Schule lernen”. Sind Sie auch dieser Meinung?
Translation: Languages in New Zealand: “These days all school students should learn at least ONE foreign language in school”. Are you also of this opinion?
Task 2 – Der Massentourismus: “Viele Touristen aus vielen Ländern besuchen heutzutage Neuseeland, und das ist gut für das Land”. Was meinen Sie?

Translation: Mass Tourism: “These days many tourists from many countries visit New Zealand, and that is good for the country”. What do you think?
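The note to Figure 3.2 reports an Anderson-Darling test of normality on the placement scores (AD = .405, p = .34). As a rough sketch of how that statistic is computed – with invented data, sample-estimated parameters, and no small-sample correction or p-value lookup:

```python
import math

def anderson_darling_normal(xs):
    """Anderson-Darling A-squared statistic for normality, with mean and
    standard deviation estimated from the sample; smaller = closer fit."""
    n = len(xs)
    m = sum(xs) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    z = sorted((x - m) / sd for x in xs)

    def phi(t):
        # Standard normal cumulative distribution function.
        return 0.5 * (1.0 + math.erf(t / math.sqrt(2)))

    s = sum((2 * i + 1) * (math.log(phi(z[i])) + math.log(1.0 - phi(z[n - 1 - i])))
            for i in range(n))
    return -n - s / n

# Invented placement-style scores out of 50 (not the study's actual data).
scores = [15, 20, 24, 27, 29, 31, 33, 35, 38, 41, 44, 47]
a2 = anderson_darling_normal(scores)
print(round(a2, 3))
```

A statistic below the tabulated critical value (around 0.75 at the 5% level for this case) gives no reason to reject normality, which is how the figure’s note should be read.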
Because participants were free to select their own bilingual dictionary they were encouraged to use one that they had used before. In all, five different types of dictionary were used (Table 3.1). Among the most popular was the Collins Pocket German Dictionary (Collins, 2001). If participants did not have access to a dictionary on the day of the test they were provided with a copy of this dictionary. This enabled an investigation of the Collins Pocket users as a subset of the whole sample, with the subset using only one dictionary type. (In Hurman and Tall’s (1998) study the Collins Pocket French Dictionary had fared badly in comparison with the other two they had investigated, particularly at the higher GCSE level. Using the Collins Pocket German in this study allowed for some small-scale comparison between the Collins Pocket and the other dictionaries.)

I was also interested in knowing how often the participants had used the dictionary before taking the tests. On the longer questionnaire I asked them about this. Participants’ responses to this question were translated into four levels of prior experience with a dictionary:

1. I have never used [this dictionary] before today → very inexperienced
2. I have used it once or twice in class or at home → quite inexperienced
3. I have used it fairly often in class or at home → quite experienced
4. I have used it quite regularly in class or at home → very experienced
It was therefore possible to place the participants in this study into four ability levels and four levels of prior experience with the dictionary, and this information was taken into account in subsequent statistical analyses.

To help to take account of task effect and order effect participants in the third study were placed into one of four groups with roughly equal numbers. The distribution of participants resulted in the following:

• Just under half the participants took Task 1 followed by Task 2, and the remainder took these tasks in the reverse order.
• Just under half the participants wrote Task 1 with a dictionary and Task 2 without, and the remainder wrote Task 1 without and Task 2 with.
• Roughly half the participants were able to use a dictionary in the first task, and the other half in the second task.
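A four-group counterbalanced allocation of this kind can be sketched as follows; the participant labels, the seeded shuffle, and the round-robin dealing are illustrative assumptions, not the study’s actual procedure:

```python
import random

# The four counterbalanced conditions (task order x dictionary condition).
CONDITIONS = [
    ("Task 1 first", "dictionary on Task 1"),
    ("Task 1 first", "dictionary on Task 2"),
    ("Task 2 first", "dictionary on Task 1"),
    ("Task 2 first", "dictionary on Task 2"),
]

def assign_groups(participants, seed=0):
    """Deal shuffled participants round-robin into the four conditions,
    so that group sizes differ by at most one."""
    shuffled = participants[:]
    random.Random(seed).shuffle(shuffled)
    groups = {c: [] for c in CONDITIONS}
    for i, p in enumerate(shuffled):
        groups[CONDITIONS[i % 4]].append(p)
    return groups

# 47 hypothetical participant labels, matching the third study's sample size.
groups = assign_groups([f"P{i:02d}" for i in range(1, 48)])
print([len(g) for g in groups.values()])  # → [12, 12, 12, 11]
```

With 47 participants the groups cannot be exactly equal, which is why the study reports “just under half” and “roughly half” rather than exact halves.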
Table 3.1 Dictionary types used in the third study.

Collins Pocket German Dictionary (Collins, 2001)
References: information not available
Distinguishing features: All headwords are printed in blue.
Participants using this dictionary: 31

Collins German College Dictionary (Collins, 1995b)
References: over 83,000 references and 120,000 translations
Distinguishing features: Some references include cultural information.
Participants using this dictionary: 5

Collins German Dictionary plus Grammar (Collins, 1995c)
References: over 83,000 references and 120,000 translations
Distinguishing features: An additional grammar section of 255 pages. (Reference section identical to the Collins German College Dictionary.)
Participants using this dictionary: 5

Oxford Duden Paperback Dictionary (OUP, 1997)
References: over 50,000 references and 75,000 translations
Distinguishing features: Covers “the essential vocabulary of everyday life” and is “specially designed to meet the needs of students, tourists, and travelers.”
Participants using this dictionary: 3

BBC German Learners Dictionary (BBC/Larousse, 1997)
References: 42,000 references
Distinguishing features: Includes cultural notes and detailed coverage of GCSE word lists.
Participants using this dictionary: 3
Chapter 3. Does the dictionary really make a difference?

The test score evidence from the three studies

In order to ensure the accuracy of scoring I designed a thorough and detailed diagnostic scoring rubric. The rubric drew on Canale and Swain’s (1980) theoretical framework of communicative competence. It was also influenced by two other rubrics that were already in use in different contexts (Edexcel, 2000; Jacobs, Zinkgraf, Wormuth, Hartfiel, & Hughey, 1981). The scoring rubric enabled raters to differentiate between eight levels of performance (from 0 to 7) across five separate facets of the writing (Figure 3.3). Each facet was given equal weight, meaning that test takers could receive a potential maximum total score of 35 (5 × 7). Although it was recognized that applying such a thorough scoring procedure would take the raters some time, I considered that this extra time was worth the effort required to provide accurate marks that could differentiate clearly across different aspects of writing and different levels of performance.

In the first study I marked the scripts myself so that I could see how the rubric might work in practice. To provide some measure of reliability, however, I asked a colleague who did not know any of the participants to moderate my scores. In the second and third studies the scoring was carried out by two raters who were asked to rate the scripts independently. Two different sets of raters were used in the second and third studies. The raters were trained in the process of scoring and were then given some practice in applying the rubric using examples from the first study (or, in the case of the third study, from the second study). The raters marked anonymous word-processed copies of the essays so that they would not know in which test condition essays had been written (because the test takers had been asked to underline instances of dictionary use in their ‘with dictionary’ answers, the hand-written versions could not be used). Considerable effort was therefore made to ensure that the final scores awarded captured, as reliably as possible, the actual performance of the writers under consideration compared against the rubric (East, 2007), and high levels of both inter- and intra-rater reliability were reached (see Appendix 2).

Table 3.2 presents the final mean scores awarded in all three studies, together with the range of scores awarded and the standard deviations from the means. In all cases the mean scores across the two test conditions were roughly comparable. Average scores between the two testing conditions, considered on a study by study basis, differed by a maximum of 1 or 2 marks. A closer look at the scores awarded across all three studies indicates the following:

• In the ‘with dictionary’ tests scores ranged from 13/35 to 35/35 with a modal (most common) score of 21/35 (awarded 10% of the time).
• In the ‘without dictionary’ tests the range was 13.5/35 to 34.5/35, with a modal score of 20 (again awarded 10% of the time).
• Although some test takers did do better on the ‘with dictionary’ tests, and some did worse, the average difference in scores between the two conditions, when calculated from the raw scores across all studies, was only ±0.25 of a mark. There was a whole range of scores, but no meaningful difference at any level.
Figure 3.3 The scoring rubric: eight bands of performance (from 0, ‘no rewardable response’, to 7) across five facets – cohesion, coherence and rhetorical organization; grammatical competence (syntax, sentence-grammar, semantics); knowledge of lexis, idiomatic expressions, and functional knowledge; knowledge of register and varieties of language, and of cultural references (where appropriate); and ‘mechanics’ (spelling and punctuation).

Table 3.2 Final mean scores across all studies.

                    ‘With dictionary’           ‘Without dictionary’
Study               M (SD)        Range         M (SD)        Range
1 (n = 6)           28.5 (5.01)   26–34         28 (5.33)     26–34
2 task A (n = 5)    26.6 (5.5)    18–33         24.6 (5)      19–30
2 task B (n = 5)    25.6 (3.8)    20–28         24.8 (2.8)    22–28
2 task C (n = 5)    23.2 (4)      20–28         24.4 (4.9)    20–31
3 (n = 47)          22.7 (6.1)    13–35         22.6 (6.1)    13.5–34.5

I also took the opportunity to look more closely at different sets of results to determine if any particular factors – like language proficiency, prior experience with using a dictionary, and type of dictionary – appeared to have any effect on test scores. In the first study I was able to compare three intermediate level students with three advanced students, all using the Collins German Dictionary. The three intermediate students’ scores differed across the test conditions by less than half a mark (‘with dictionary’ M = 24.7, SD = 3.2; ‘without dictionary’ M = 24.3, SD = 3.8). The scores for the advanced students likewise showed minimal difference (‘with dictionary’ M = 32.3, SD = 2.9; ‘without dictionary’ M = 31.7, SD = 4). The comparison did show, however, that the advanced students performed better on average regardless of test condition, even though the dictionary made no difference at either level.

In the second study I was able to look at differences in performance by the same set of test takers, each using the same dictionary, across three different tasks. Once more no meaningful differences were noted. The type of test task did not appear to be a factor in influencing the extent to which having or not having a dictionary might make a difference.

The third study provided the opportunity to consider several factors – level of ability, level of prior experience with a dictionary, and the number of look-ups made. Table 3.3 presents score differences, first for all test takers, and then for the Collins Pocket dictionary users, considered against the four levels of ability (as determined by the placement test). These descriptive statistics indicate the following:

• For all dictionary users the dictionary appeared to help the lower intermediate participants to improve their overall score by just over one mark, whereas advanced participants appeared to do slightly worse by just under one mark. There was no apparent difference to overall test scores for the other levels of ability.
• For the Collins Pocket users there was a slight improvement in the ‘with dictionary’ test (but in all cases by no more than around one mark) for lower and upper intermediate participants, and a slight improvement in the ‘without dictionary’ test for intermediate and advanced participants.
Table 3.3 Overall scores by level of ability – third study.

          All users                          Collins Pocket users
Level     ‘With’ M (SD)   ‘Without’ M (SD)   ‘With’ M (SD)   ‘Without’ M (SD)
LI (n = 5)   16.2 (2.2)      14.9 (1.8)         16.3 (2.9)      15.5 (2.3)
I (n = 22)   20.1 (3.1)      20 (3.1)           19.9 (3.5)      20.3 (3.2)
UI (n = 16)  25.9 (5.8)      25.8 (5.4)         27.4 (5.9)      26.6 (5.1)
A (n = 4)    32.3 (5.2)      33 (1.1)           31.3 (5.9)      32.5 (0.5)

Note. LI: lower intermediate; I: intermediate; UI: upper intermediate; A: advanced. Adapted from East (2007, p. 344).
It also seemed to be the case that score and ability level were related. As ability level went up, so too did the scores awarded. It certainly seemed that the scoring rubric was able to distinguish clearly and accurately between levels of performance commensurate with test taker proficiency levels. Nevertheless there was no evidence that the dictionary either enhanced or diminished performance at any ability level.

Table 3.4 records the mean scores according to the four levels of prior experience with the dictionary (as determined by the ‘prior experience’ question on the longer questionnaire). There appeared to be no evidence to suggest that experience with the dictionary, viewed independently of level of ability, made any positive difference to participants’ performance in the ‘with dictionary’ condition. That is, very inexperienced and quite experienced participants improved their scores when writing with the dictionary, whereas quite inexperienced and very experienced dictionary users’ scores were lower. These two distinct trends (that is, the differential impact of level of ability and of prior experience with the dictionary on the mean test scores) are illustrated for the entire sample in Figure 3.4.

The relationship between the number of look-ups participants self-reported that they had used in their response (that is, the number of words they underlined) and final ‘with dictionary’ scores is illustrated in Figure 3.5. Evidence from the scatterplot suggests that there was no relationship between the number of look-ups made and scores on the ‘with dictionary’ test.
Table 3.4 Overall scores by level of experience – third study.

                             All users                          Collins Pocket users
Experience                   ‘With’ M (SD)   ‘Without’ M (SD)   ‘With’ M (SD)   ‘Without’ M (SD)
Very inexperienced (n = 10)     23.8 (5.4)      22.9 (5.1)         24.1 (6.1)      22.8 (5.7)
Quite inexperienced (n = 15)    22 (5.4)        23.8 (5.4)         22.3 (6.5)      24.6 (5.5)
Quite experienced (n = 13)      23.5 (7.5)      21.7 (7.4)         25.3 (8.4)      23.8 (7.8)
Very experienced (n = 9)        21.5 (6.1)      21.4 (6.7)         20.6 (3.7)      21.3 (4.7)

Adapted from East (2007, p. 345).
The crucial question to ask with regard to the test score evidence was whether any observed differences were significant. Various inferential tests were carried out (see Appendix 4). In all three studies paired samples t-tests indicated that none of the observed differences in scores was statistically significant. In other words, according to the test score evidence from these sets of test takers, having a dictionary in the writing test was apparently making no difference whatsoever. What little variation there was between scores was most likely down to chance or random factors.

In the third study I was also able to take account of prior experience with the dictionary, level of ability, and the number of times the test takers reported that they had used a dictionary. Neither prior experience with a dictionary nor the number of look-ups made was a significant predictor of ‘with dictionary’ scores. It did not seem to matter how ‘dictionary-wise’ the participants were, nor how often they used a dictionary – their performances as judged by their scores were not influenced by either of these factors. Level of ability did make a significant difference to performance regardless of test condition – the higher the ability level, the higher the score. Although these differences did not tell me anything about the difference that the dictionary made, they did confirm that the test scores were able to differentiate quite sensitively between different levels of ability.
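The paired samples comparison can be sketched in a few lines. The following is an illustrative reimplementation, not the analysis actually run for the studies, and the score lists are invented for the example:

```python
# A minimal sketch of a paired-samples t-test comparing 'with dictionary'
# and 'without dictionary' scores. Illustrative only: the score lists below
# are invented, and a full analysis would also look up the p-value for the
# obtained t.
from math import sqrt
from statistics import mean, stdev

def paired_t(scores_a, scores_b):
    """Paired-samples t statistic: mean difference over its standard error."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

with_dict = [24, 28, 21, 33, 19, 26]     # hypothetical 'with dictionary' scores
without_dict = [23, 27, 22, 31, 20, 25]  # hypothetical 'without dictionary' scores

t = paired_t(with_dict, without_dict)
print(round(t, 2))
# With n = 6 (df = 5), |t| must exceed about 2.571 for significance at
# p < .05 (two-tailed), so a t of this size would be read as 'no difference'.
```

Because each participant wrote in both conditions, the paired version of the test is the appropriate one: it operates on the per-person score differences rather than on the two groups of scores independently.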
Figure 3.4 Impact of ability and prior experience on average total scores. Note. Trends for both ‘with dictionary’ and ‘without dictionary’ scores are recorded for the purpose of comparison, although prior dictionary experience is not relevant to scores in the latter test condition.
Figure 3.5 Relationship between ‘with dictionary’ scores and number of look-ups made.
What does the test score evidence mean?

All of this test score evidence raises several interesting questions. Spolsky (2001) has suggested that dictionary use is believed to improve performance on examinations, and Hurman and Tall (1998) found a significant improvement in students’ writing (as measured by scores) in ‘with dictionary’ tests. I had found no differences.

When I asked myself whether the steps I had taken in these three studies were adequate to provide meaningful scoring evidence I concluded that they were. I was careful to develop comprehensive scoring criteria that could differentiate sensitively between different levels of test taker performance across several individual facets of writing. In all cases the raters would not have known which scripts had been written in which test condition, and counterbalancing (with some students taking a given test with a dictionary, and some taking the same test without) would have ensured that they could not guess the test condition with any accuracy. Two potential sources of rater subjectivity were thereby reduced. When reliability checks were carried out on the scores (see Appendix 2) it was found not only that inter-rater and intra-rater reliability more than adequately reached acceptable thresholds but also that the scoring rubric was able to differentiate across a range of levels of performance with a high degree of precision. I was left with what appeared to be a clear and unambiguous result.

Each study had its own unique features. Across all three studies there was a range of student type (school or university level, high school age or mature learner), student ability (lower intermediate to advanced), course type (one that focused specifically on developing writing and dictionary skills, and ones that did not), test task type (letter writing presented in different ways and a range of discursive essays), dictionary type (large comprehensive dictionary, smaller compact dictionaries), and level of prior experience with the dictionary (modest to extensive).
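As a rough illustration of the kind of inter-rater reliability check referred to here (the coefficients actually obtained are reported in Appendix 2), Pearson’s r between two raters’ marks can be computed as follows; the score lists are hypothetical:

```python
# A sketch of an inter-rater reliability check: Pearson's product-moment
# correlation between two raters' marks for the same scripts. The marks
# below are hypothetical, not the study's data.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

rater_1 = [21, 28, 17, 33, 25, 19]  # hypothetical marks out of 35
rater_2 = [20, 29, 18, 32, 26, 21]

print(round(pearson_r(rater_1, rater_2), 3))  # values near 1 = strong agreement
```

Intra-rater reliability can be checked the same way, by correlating a rater’s first and second marking of the same set of scripts.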
There was, however, no evidence to suggest, across any of the studies, that the dictionary made any difference to test takers’ performance, either positive or negative, as measured by test scores. If we were looking for a straightforward answer to the question ‘does the dictionary really make a difference?’ we could conclude on the basis of this evidence that it does not.

For those who believe that the dictionary is extraneous to the test construct, this finding of no significant differences in performance across the two test conditions might be reassuring, because it could be interpreted as meaning that the dictionary was not a source of construct irrelevant variance – the use of the dictionary was not masking the measurement of the test takers’ abilities as reflected in the scores.

This is not, however, the end of the story. No difference in scores does raise questions for those who see the dictionary as a supportive part of the construct being tested, and who therefore believe that dictionary use should improve performance. It is certainly curious that no differences in scores attributable to dictionary use are evident anywhere in the data, despite the very real differences between the three studies. Are we to conclude that prior training with using a dictionary, or prior experience with a dictionary, really makes no difference? Are we to assume that no students, at any level of ability, can exploit the dictionary to real advantage and find it to be a supportive tool? Or that task type is irrelevant? Test score evidence alone is insufficient to make a judgment. We need to consider other sources of evidence. How did the students use the dictionary? How successful was this use? The next two chapters consider these questions.
chapter 4
How do test takers use dictionaries?
In Chapter 3 I argued that test score evidence is crucial in determining the construct validity of a particular test. If we are concerned about issues of fairness, and especially the difference that any modification to a test procedure might make, comparing test scores across tests delivered in different testing conditions will help us to look at differences. If there is no statistically significant difference in scores between two different testing conditions, we can, as Rivera and Stansfield (1998) suggest, draw a conclusion that the different conditions do not compromise the ability of the test to measure test taker performance fairly.

With regard to the three studies that are the focus of this book, the test score evidence presented in Chapter 3 (that is, the lack of significant difference between ‘with dictionary’ and ‘without dictionary’ scores) appeared to suggest that having or using a bilingual dictionary simply did not make any difference at all to writing performance. This seemed to be so regardless of all other factors – test taker type or level of ability, task type, dictionary type, level of prior experience with the dictionary, or extent of prior training in writing and dictionary skills.

In terms of the relative construct validity of tests in the different test conditions this might be seen as a reassuring finding. If we believe that dictionary use is to be encouraged, we could, on this basis, argue that students should be allowed dictionaries in writing tests. We could argue that there is no need to worry that having a dictionary will make the task invalidly easy. Perhaps just as importantly, we would also not need to worry too much about tests that did not allow dictionaries – because it would seem on the test score evidence that not having a dictionary will not make the task invalidly difficult. It is, however, a little concerning if it seems (from the test scores) that no other factors are making a difference.
If, for example, advanced students with a good deal of prior experience with using a dictionary cannot perform any better in ‘with dictionary’ tests than in other tests, when compared with intermediate level students with minimal dictionary experience, it would seem that nobody benefits in terms of improved or more sophisticated writing. For those who see the dictionary as a supportive tool, this finding is at best disappointing and at worst alarming.

It would therefore be advantageous to dig a little deeper – to move beyond test score evidence to other evidence available from the studies. In this chapter I focus on an additional strand of evidence – how the test takers across all three studies used the dictionaries, and whether this dictionary use led to any improvement in the writing quality of these test takers, even if this was not showing up in test scores. This chapter presents findings related to two additional sources of evidence – the Lexical Frequency Profiles (LFPs) of test takers’ writing samples and an analysis of how dictionary entries were actually used to enhance writing quality.
The Lexical Frequency Profiles

Previous research into the uses to which L2 learners put dictionaries has suggested, perhaps not surprisingly, that learners want to access the meanings of individual items of lexis more than any other information. According to two small-scale studies involving translation between languages and undertaken by Atkins and Varantola (1998b), the primary use to which students put their dictionaries was to locate the meaning of words, firstly from L1 to L2, and secondly from L2 to L1. This finding is supported by survey research into bilingual dictionary use which concluded that accessing meanings came well ahead of any other uses in both French and German as an L2 (Bishop, 1998; Hartmann, 1983).

Given that one of the essential purposes of a bilingual dictionary is to give L2 students access to vocabulary they might otherwise not know or use, or, as Oxford (1990) puts it, to make up for “an inadequate repertoire of vocabulary” (p. 47), I considered that it would be useful to find a way to objectify whether students could use a dictionary in a way that broadened their choice of lexis. This is a particularly important consideration in assessing the quality of L2 students’ writing. Grobe (1981) suggests that a judgment on the quality of writing is “closely associated with vocabulary diversity” (p. 85), and Nation (2001) asserts that “vocabulary plays a significant role in the assessment of the quality of written work” (p. 178). A well-written composition will, suggest Laufer and Nation (1995), make effective use of vocabulary.

I decided to examine lexical range by calculating these test takers’ LFPs across the different writing samples, because LFPs are a very useful means of quantifying the extent to which writers are using a varied and large vocabulary when writing with or without a dictionary (East, 2004, 2006b).
Calculating the LFPs, and comparing results across essays written in both test conditions, will help to determine whether the availability of the dictionary makes a difference (positive or negative) to test takers’ lexical sophistication, and therefore to the quality of the text and the extent to which word choice is effective. The calculation is done by a computer program called Range (Heatley, Nation, & Coxhead, 2002) that compares texts prepared in ASCII format in terms of the range of vocabulary writers use. The program benchmarks individual items of lexis against pre-determined frequency lists, and generates a report giving the number of words used at each level of the lists. The LFPs provide information about range of lexis in three ways, which Laufer (2005) describes as follows:

1. Tokens: that is, all words in the composition.
2. Types: that is, different words in the composition.
3. Families: that is, the base word, its inflections, and its most common derivations.

For these studies into German dictionary use the English benchmark lists originally used by Laufer and Nation were replaced with two German lists which were prepared following Nation’s guidelines (Nation, 2002). The first list contained what might be described as ‘basic’ vocabulary – the kind of vocabulary all students at an intermediate level should be actively familiar with, and the vocabulary prescribed in New Zealand for the ‘School Certificate’ examination (equivalent to the GCSE examination taken in the UK by 15- to 16-year-olds). The second list contained higher level vocabulary of the type specifically prescribed for New Zealand’s ‘Bursary’ examination – what students might reasonably be expected to know at the end of study for that or a similar intermediate-level examination, such as the A level (taken two years post-GCSE). Any words the LFPs showed as being outside these two lists could reasonably be considered to be words students at the intermediate level might not be expected to know – words that perhaps only a dictionary would give them.

Despite its claim to quantify vocabulary use in written texts, the LFP is in fact an imprecise measure. It cannot, for example, measure correct lexical use; it only measures lexical range. Errors in word choice are handled by removing items from the calculation. Errors in spelling are corrected before the analysis. Errors in word form are ignored if the word itself is correct.
In the case, for example, of an incorrect verb ending the software would recognize the word as belonging to a particular family. The word would then be considered as known to the writer even if its form, in the context, was not. This is a particular problem for German (East, 2006b): the wide use of and variation in derived and inflected forms in the German language will increase the likelihood that errors may occur with a particular item of lexis that will not be picked up in the LFP (which regards inflected forms as part of the same family as the base form) – a student may have used a particular word wrongly which the LFP will still record. Take, for example, the English verb ‘sleep’ and its German equivalent schlafen. In English there are only three derived forms: sleeping, sleeps, slept. For schlafen there are at least eleven, not counting subjunctive forms:

1. schlafend
2. schlafe
3. schläfst
4. schläft
5. schlaft
6. schlaf
7. schlief
8. schliefst
9. schliefen
10. schlieft
11. geschlafen

This wider variation in derived forms means that a writer in German is potentially more likely to choose a wrong form of schlafen than if that writer were writing in English and using ‘sleep’. The LFP would not reveal this error in word use. Nevertheless, the LFP is a useful measure of lexical range, provided that profile results are interpreted in the light of these limitations.

Laufer and Nation (1998) argue that it is also a valid measure. They go on to assert that one way of determining the validity of the LFP as a measure of lexical sophistication is to see whether it can reveal differences between proficiency levels. Since it was evident from my first study that there were two distinct groups of students (intermediate and advanced), whose placement test and writing test scores indicated clear differences, I anticipated that, if the LFP were working well, the profiles would reveal a similar differential picture with regard to lexical sophistication. The data available from the first study enabled me to pilot the LFP so that the effectiveness of the measure could be determined (East, 2004).
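The core of the LFP idea – benchmarking each word against graded frequency lists and reporting the proportions at each level – can be sketched as follows. This is a toy illustration, not the Range program: real profiles match at the family level (so an inflected form such as geschlafen counts with schlafen), whereas this sketch matches exact forms only, and the miniature word lists are invented:

```python
# A toy illustration of the Lexical Frequency Profile idea (NOT the Range
# program): each word is benchmarked against graded frequency lists and the
# profile reports what proportion of tokens and types falls at each level.
# Unlike the real LFP, this matches exact word forms rather than families,
# and the word lists are invented stand-ins for the real benchmark lists.

def lfp(text, lists):
    """Return percentage of tokens and types at each list level."""
    tokens = text.lower().split()
    types = set(tokens)

    def level(word):
        for name, words in lists:
            if word in words:
                return name
        return "not in the lists"  # lexis beyond both benchmark lists

    profile = {}
    for name in [name for name, _ in lists] + ["not in the lists"]:
        profile[name] = {
            "tokens %": round(100 * sum(level(w) == name for w in tokens) / len(tokens), 1),
            "types %": round(100 * sum(level(w) == name for w in types) / len(types), 1),
        }
    return profile

basic = {"ich", "habe", "sehr", "gut", "geschlafen"}  # stand-in 'first list'
higher = {"erholsam"}                                 # stand-in 'second list'

profile = lfp("ich habe sehr gut geschlafen ich habe erholsam getraeumt",
              [("first list", basic), ("second list", higher)])
print(profile["first list"])  # bulk of the writing comes from basic lexis
```

In a profile like this, a high percentage of tokens in the first list signals reliance on basic vocabulary, while tokens falling outside both lists correspond to the ‘words that perhaps only a dictionary would give them’ discussed above.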
Piloting the LFP – the first study

As I reported in the previous chapter, the intermediate test takers in the first study achieved average ‘with dictionary’ and ‘without dictionary’ scores of 24.7 (SD = 3.2) and 24.3 (SD = 3.8) respectively. For the advanced test takers, by contrast, the average ‘with dictionary’ and ‘without dictionary’ scores were 32.3 (SD = 2.9) and 31.7 (SD = 4). In both cases the advanced scores were some 7 marks higher, out of a total of 35 (a positive difference of 20%). Also, the scoring rubric (reproduced in Chapter 3) enabled a specific consideration of the test takers’ range of lexis, which included its sophistication – the measure the LFP was designed to record. Scores awarded in the individual facet of lexical range were around 1½ marks higher out of 7 (again a 20% difference). These differences were not necessarily unanticipated. As Laufer and Nation (1998) argue, it is only to be expected that more advanced writers will use richer lexis. Both the overall scores and the scores for lexical range indicated a clearly higher level of performance, both in general and, specifically, in terms of lexical quality. The question then became, what would a calculation of the LFPs reveal with regard to differential lexical sophistication? I surmised that if the LFPs were working satisfactorily they should reveal clear differences. The results of the calculations, for tokens and families, are given in Table 4.1.

Table 4.1 Lexical Frequency Profiles from the first study.

                            First list                 Second list               Not in the lists
Level     Profile           With         Without       With        Without       With        Without
Int.      Tokens M (SD)     92.3 (3.2)   93.7 (0.6)    3.3 (0.6)   2.7 (0.6)     5 (2.6)     3 (1)
(n = 3)   Families M (SD)   86.7 (3.8)   90.3 (1.2)    4 (1.7)     3.7 (1.5)     9.7 (4.9)   6 (1.7)
Adv.      Tokens M (SD)     83.2 (2.6)   79.6 (4)      7.1 (1)     8.8 (2.6)     9.7 (1.2)   11.7 (2.1)
(n = 3)   Families M (SD)   72 (1.7)     66.3 (2.5)    12 (1.7)    13.3 (2.1)    16 (1.2)    20.6 (2.5)

It was found that, regardless of test condition, the intermediate participants drew on considerably more basic lexis than the advanced participants. They used noticeably more tokens and families from the first list (‘basic’ vocabulary), and noticeably fewer tokens and families from outside the lists (words not known), than their advanced counterparts. That is, the intermediate test takers used about 90 words in every 100 from the basic list, whereas the advanced participants used around 80 – a reasonably small difference, but a conspicuous one nonetheless. The advanced test takers therefore appeared to use a broader and more sophisticated range of vocabulary overall than the intermediate test takers. The basic question with regard to the validity of the LFP as a measure of lexical richness – differentiation between ability levels – had been sufficiently answered (East, 2004). It was therefore considered viable to use the LFPs more comprehensively with the data available from the two subsequent studies, and to carry out more thorough analyses to determine the difference, if any, that the bilingual dictionary was making.
Using the LFP – the second study

In the first of these analyses LFPs were calculated for tokens, types, and families across all three tasks completed by the five participants in the second study. Results of these calculations, for each individual task, are given in Table 4.2.
Table 4.2 Average LFP results across all three tasks – second study

                         First list        Second list       Not in the lists
Task  Profile            With    Without   With    Without   With    Without
A     Tokens    M        92.6    93.2       3.2     4         4       3
                SD        1.7     3.6       0.8     3.0       1.6     1.2
      Types     M        88.4    89         5.4     6.2       6.6     5.4
                SD        3.3     4.2       1.5     3.7       2.3     1.7
      Families  M        86.2    86.6       6.2     7.2       7.4     6.2
                SD        4.1     4.6       1.9     4.4       3.0     2.4
B     Tokens    M        88.2    89.4       6.4     5.8       5       4.8
                SD        2.2     3.1       1.1     2.6       1.6     1.1
      Types     M        84.8    85.2       8.2     8         7       7
                SD        2.2     5.6       1.3     4.7       1.9     1.6
      Families  M        82.4    82.8       9.4     9         8       8.2
                SD        2.3     6.5       1.3     5.4       1.6     1.8
C     Tokens    M        82.8    89.6      10.6     6.6       6.6     4.2
                SD        4.5     4.5       3.6     3.6       4.5     1.8
      Types     M        79.4    84.6      10.4     9.2      10.2     6.4
                SD        6.5     4.9       2.6     2.8       6.3     2.9
      Families  M        76.8    83        12      10.4      11.6     6.8
                SD        7.3     5.5       3.0     3.0       7.4     3.4
The profiles suggest that, considered on a task-by-task basis, the writing (whether with or without the dictionary) appeared to become more sophisticated as participants progressed, week by week, through each task. Also, for Task C, there was a greater apparent difference between ‘with dictionary’ and ‘without dictionary’ profiles than was evident in Tasks A and B, with participants moving somewhat away from basic lexis and correspondingly experimenting a little more with more sophisticated lexis (whether from the second list or outside the lists) when the dictionary was available (‘basic’ tokens – 90% (Tasks A and B) → 83% (Task C); ‘unknown’ tokens – 4% → 7%). This suggested that test takers required somewhat more sophisticated lexis in the discursive essay task than in the letter-writing tasks – not necessarily surprising given the nature of the tasks set – and that the dictionary was able to help them out.

It was also important to find out whether the dictionary look-ups themselves were making this (albeit very slight) improvement, and whether the differences were statistically significant. Since the participants had been asked to highlight all instances of dictionary use, it was possible to identify the look-ups. Although a limitation of this approach is that such self-reporting can be unreliable (test takers might under-report instances of dictionary use for a variety of reasons, such as forgetfulness or not wishing to acknowledge that they used a dictionary), it was the most feasible method of gathering specific data on actual dictionary use, and the evidence was considered the best that was available. To investigate whether the look-ups were making the difference to the quality of lexis, all words accessed from the dictionary were removed and the adapted texts were re-analyzed. Table 4.3 shows the results of this analysis, considered overall across all three tasks, and comparing ‘with dictionary’ profiles (with look-ups included and then removed) and ‘without dictionary’ profiles.

Table 4.3 Average LFP results (including look-ups removed) – second study

                First list           Second list          Not in the lists
Profile         +     +*    –        +     +*    –        +     +*    –
Tokens    M     87.9  89.8  90.7     6.7   6.2   5.5      5.2   4.3   4
          SD     5.0   4.8   3.9     3.8   4.0   3.1      2.9   2.7   1.5
Types     M     84.2  86.7  86.3     8     7.1   7.8      7.9   6.2   6.3
          SD     5.6   5.6   5.0     2.8   2.7   3.8      4.1   4.1   2.1
Families  M     81.8  84.6  84.1     9.2   8.1   8.9      9     7.3   7.1
          SD     6.1   6.2   5.5     3.2   2.9   4.3      4.8   4.6   2.6

Note. +: with dictionary (look-ups included); +*: with dictionary (look-ups removed); –: without dictionary.

This analysis reveals several interesting trends:

• Considered overall, there appeared to be a slight improvement in participants’ lexical sophistication in the ‘with dictionary’ condition with regard to varying the tokens, types, and families. This was judged by an average decrease in, for example, tokens taken from the first list (91% → 88%), and an average increase in tokens from the second list (5.5% → 7%) and outside the lists (4% → 5%). In real terms, however, this represented small shifts in lexis of around 3 words in every 100.
• The mean figures between ‘with’ and ‘without’ texts when look-ups were included differed from each other by as much as 2.9%, with an average variation of ±1.5%.
• The mean figures between ‘with’ and ‘without’ texts when look-ups were taken out differed from each other by, on average, ±0.5%.
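The re-analysis that produced the ‘look-ups removed’ profiles can be pictured as two steps: delete the highlighted look-ups from each text, then recompute the profile over the words that remain. A minimal sketch, using invented data and a tiny stand-in for the basic-vocabulary list:

```python
# Sketch: recompute the share of 'basic' lexis after removing look-ups.
# basic_list, text, and lookups are invented illustrative data.

def basic_share(tokens, basic_list):
    """Percentage of tokens drawn from the basic (first) list."""
    hits = sum(1 for t in tokens if t in basic_list)
    return round(100 * hits / len(tokens), 1)

def without_lookups(tokens, lookups):
    """Drop every token the writer reported locating in the dictionary."""
    return [t for t in tokens if t not in lookups]

basic_list = {"ich", "habe", "einen", "freund", "getroffen"}
text = ["ich", "habe", "einen", "einheimischen", "freund", "getroffen"]
lookups = {"einheimischen"}

with_lookups = basic_share(text, basic_list)
no_lookups = basic_share(without_lookups(text, lookups), basic_list)
# If the two conditions converge once look-ups are removed, the extra
# sophistication in the 'with dictionary' texts came from the look-ups.
```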
When dictionary look-ups were included in the calculation, there appeared to be greater lexical sophistication in the writing. When they were taken out, there was, on average, negligible differentiation in the profiles – the test takers wrote with comparable levels of lexical sophistication if words located in the dictionary were
not factored into the analysis. There was some evidence to suggest that the look-ups themselves were making a positive difference. However, as has already been pointed out, the difference between the profiles was very slight (3 words in every 100) and the sample size (2 × 15 pieces of work) was small, making it hard to go beyond mere speculation that this was the case. The results of several paired samples t-tests (see Appendix 4) indicated no significant differences between the profiles at any level. Taking the limitation of sample size into account, these results suggest that any improvement in writing with the dictionary available was just as likely due to chance or random factors as to the intervention of the dictionary. The findings from the second study were intriguing nonetheless. The observed differences may well have been down to chance, but a slight improvement in lexis in the ‘with dictionary’ texts was observed which disappeared when the dictionary look-ups were removed. The evidence in these data, although inconclusive, was enough to justify further investigation of the LFP with the larger set of test takers available from the third study.
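The paired samples t-test used for these comparisons is a standard calculation: for each writer, take the difference between the ‘with dictionary’ and ‘without dictionary’ values of a profile level, then divide the mean difference by its standard error. A sketch with invented data (the actual analyses are reported in Appendix 4):

```python
import math

def paired_t(x, y):
    """Paired samples t statistic: t = mean(d) / (sd(d) / sqrt(n))."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n)

# Invented per-writer percentages of tokens from outside the lists.
with_dict = [5.0, 6.2, 4.8, 7.1, 5.5]
without_dict = [3.9, 4.1, 4.4, 5.0, 4.2]

t = paired_t(with_dict, without_dict)
# |t| is then compared against the critical value of the t distribution
# with n - 1 degrees of freedom for the chosen significance level.
```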
Using the LFP – the third study

In the third study LFPs were calculated on a subset of 39 test takers. This subset was extracted from the data available from all 47 participants according to two criteria:

1. Essays across both test conditions contained more than 100 words.
2. Students had indicated their use of the dictionary through underlining.

Where a participant did not meet either or both of these criteria, both of that participant’s essays were removed from the calculation. A similar procedure was followed with this data set as with that from the second study. ‘Without dictionary’ profiles were compared with ‘with dictionary’ ones, first with look-ups included, and then with look-ups removed. Results of the profiles – ‘with dictionary’ look-ups included, ‘with dictionary’ look-ups removed, and ‘without dictionary’ – are presented in Table 4.4.

Table 4.4 Impact of look-ups on the LFPs – third study

                First List           Second List          Not in the lists
Profile         +     +*    –        +     +*    –        +     +*    –
Tokens    M     89.2  92.1  91.9     6.5   5.2   5.4      4.4   2.7   2.7
          SD     4.5   3.9   4.4     2.8   2.5   2.5      2.3   2.0   2.2
Types     M     82.5  87    86.9    10.3   8.6   9        7.2   4.5   4.2
          SD     5.1   4.8   5.2     3.2   3.4   3.5      3.2   2.9   2.9
Families  M     80.1  85    85.1    11.6   9.9  10.2      8.3   5.1   4.7
          SD     5.7   5.5   5.7     3.5   3.9   3.8      3.7   3.3   3.2

Note. +: with dictionary (look-ups included); +*: with dictionary (look-ups removed); –: without dictionary. Taken from East (2006b, p. 189).

The profiles indicate the following trends:

• When look-ups were included there was an improvement in lexical sophistication in the ‘with dictionary’ condition, compared to ‘without dictionary’ texts. This was indicated by less reliance on words from the first list, and greater use of words from the second and outside the lists. This trend was noticeable across all levels of word type, token, and family. It was most noticeable with word families, where 5% fewer word families were drawn on from the first list when writing with the dictionary available, and 3.6% more word families were drawn on from outside the lists.
• When look-ups were taken out, and comparison was made with the ‘without dictionary’ profiles, there was considerably less variability, in all cases not more than ±0.4%. This indicated a far closer parallel between the profiles.
These differences can be illustrated diagrammatically (Figure 4.1). This bar graph brings out more clearly three major findings from these LFP data:

1. The ‘with dictionary’ texts used more sophisticated lexis than the ‘without dictionary’ texts.
2. When the look-ups were removed the difference between ‘with’ and ‘without dictionary’ texts disappeared. When the two sets of texts were treated as if they had been written in similar circumstances (that is, both as if no dictionary had been available) the samples of writing showed similar and comparable levels of lexical quality.
3. The improvement in lexical sophistication in ‘with dictionary’ texts appeared to be down to the use of words located in the dictionary. When the test takers used a dictionary they increased the richness of their lexis.

To determine whether the look-ups were making a statistically significant difference to the profiles, two further comparisons were made. The first was between ‘with dictionary’ and ‘without dictionary’ profiles. The second was between ‘with dictionary’ profiles with the look-ups removed, and ‘without dictionary’ profiles.
Figure 4.1 A comparison of profiles.
When the look-ups were removed and the ‘with dictionary’ and ‘without dictionary’ profiles were compared, it was found that the LFPs were not significantly different from each other at any level. When, however, the same comparison was made, this time including the look-ups, the LFPs for words taken from the first list and words outside the lists (whether tokens, types, or families) were significantly different from each other across the conditions at p ≤ .005 (although words taken from the second list were not). That is, the probability of this difference happening by chance alone was equal to or less than 1 in 200 (Appendix 4). When the test takers used a dictionary they significantly increased the richness of their lexis. They used significantly fewer words from the first list, and correspondingly significantly more words from outside the lists. I also decided to take a look at whether the test takers’ level of ability made any difference to the quality of their lexis across the two conditions. In other words, was there any evidence to suggest that one group of test takers could use the dictionary more effectively than another to enhance the quality of their writing? The profiles were placed into two groups: lower ability (n = 20) – that is, the pre-intermediate and intermediate participants; and upper ability (n = 19) – that is, the upper-intermediate and advanced participants. Results are presented in Table 4.5. It was apparent that, when writing with the dictionary, the participants did not differ greatly in terms of the sophistication of their lexis, regardless of ability.
Table 4.5 Impact of look-ups according to ability – third study

                                 First list     Second list    Not in the lists
Ability level     Profile        +      –       +      –       +      –
Lower (n = 20)    Tokens    M    89.7   93.7    6.1    4.4     4.2    1.9
                            SD    4.0    2.5    2.7    1.7     2.2    1.6
                  Types     M    82.5   89.0   10.1    7.8     7.4    3.2
                            SD    3.4    4.0    3.0    3.0     5.0    2.7
                  Families  M    80.1   87.5   11.5    8.8     8.4    3.7
                            SD    5.0    4.3    3.8    3.3     3.4    3.0
Upper (n = 19)    Tokens    M    88.6   90.1    6.9    6.4     4.5    3.5
                            SD    5.0    5.2    2.9    2.8     2.5    2.5
                  Types     M    82.5   84.6   10.4   10.3     7.1    5.1
                            SD    5.8    5.6    3.0    3.6     3.5    2.8
                  Families  M    80.1   82.6   11.8   11.6     8.1    5.8
                            SD    6.5    6.0    3.2    3.8     4.1    3.0

Note. +: with dictionary; –: without dictionary. Adapted from East (2006b, p. 187).
The profiles were more or less identical. When writing without the dictionary two trends emerged: first, it appeared that both ability groups wrote with marginally less lexical sophistication; second, it seemed that the upper ability participants were able to write with more sophisticated lexis in comparison with the lower ability participants. Again, these trends can be illustrated diagrammatically (Figures 4.2 and 4.3). Once more the question was whether any of the observed differences in the profiles were significant. Independent samples t-tests revealed no significant difference between the two groups across any of the ‘with dictionary’ profiles. However, the ‘without dictionary’ profiles were significantly different from each other at p < .05 across all levels – first list, second list, and outside the lists (see Appendix 4). This difference suggests that when the two groups did not have the dictionary the upper ability participants wrote with significantly more lexical sophistication than the lower ability participants. However, ‘with dictionary’ performances were not significantly differentiated between the two ability groups: the lower ability participants were able to increase their range of more sophisticated lexis in the ‘with dictionary’ tasks proportionately more than the upper ability participants, so that they wrote more comparably, and no significant differences were found between the groups. The dictionary appeared to ‘level the playing field’ in terms of ability to access and use lexis at a variety of levels. Both groups wrote with more lexical sophistication with a dictionary. The previous analysis of profiles had indicated that this increase in sophistication was due to dictionary look-ups.
Figure 4.2 A comparison of ‘with dictionary’ profiles according to ability.
Figure 4.3 A comparison of ‘without dictionary’ profiles according to ability.
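The between-group comparisons just described rest on the independent samples t statistic. A pooled-variance sketch with invented data (the published analyses are in Appendix 4):

```python
import math

def independent_t(x, y):
    """Pooled-variance independent samples t statistic."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    return (mx - my) / math.sqrt(pooled * (1 / nx + 1 / ny))

# Invented 'without dictionary' percentages of tokens from outside the lists.
lower_ability = [2.0, 1.5, 2.5, 2.0]
upper_ability = [3.5, 4.0, 3.0, 3.5]

t = independent_t(lower_ability, upper_ability)
# A large negative t here would mean the lower ability group drew on
# significantly fewer words from outside the lists; df = nx + ny - 2.
```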
Implications for test scores

In the light of this improvement in lexical quality I found it curious that the test scores did not reveal any improvement in performance in ‘with dictionary’ tests. This raised a fascinating question for me. If the dictionary did make a positive difference to test takers (especially, it would seem, lower ability – pre-intermediate to intermediate level – test takers) in terms of an increased range of more sophisticated lexis, why was this difference not showing up in scores? There are several possibilities.

This interesting anomaly is similar to that described by Wigglesworth (1997), who explored the effects of planning time on the discourse produced by test candidates in an oral examination, comparing results across two conditions (‘with planning time’ and ‘without planning time’). Her findings suggest that where planning time was provided, complexity and accuracy (at least on some measures) may improve for high-proficiency candidates on some tasks. The test scores revealed, however, no significant differences across the groups as a result of the planning time variable. Wigglesworth argues that it is curious that raters do not perceive differences manifest at the discourse level when assigning test scores. She concludes:

    It may be because raters assess the discourse at a macrolevel in terms of communicative effectiveness while the differences are manifest at the microlevel. … It may be that there is a point at which there needs to be a larger increase in sophistication of language use by the candidate in order to obtain a better mark from the raters. (p. 103)
If we were to apply this argument to writing we might suggest that raters, unsurprisingly, will assess a piece of written work and the sophistication of its lexis more globally, and are less likely to notice nuances and subtleties of levels of lexical sophistication (which is where the LFP comes in as a useful measure). By this argument the raters in these studies would have assessed the discourse at a macrolevel in terms of communicative effectiveness and range of lexis, whereas differences in lexical sophistication are only manifest at the microlevel available through the LFP. A more dramatic gain in lexical sophistication would probably be required before this would lead to better scores from raters. (The average increase in lexical sophistication was, as has already been stated, quite small and would presumably have made little impact on lexical quality considered more globally.) There is another possibility, which relates to the particular weakness of the LFP (East, 2006b). That is, the LFP is a limited calculation. It focuses on lexical range, but not lexical accuracy. It may be a very good tool for measuring word
choice, but the LFP data tell us very little, if anything, about the accurate use of lexis, and accuracy of lexis may have had an influence (consciously or unconsciously) when it came to the raters assigning scores. Even in cases where there was a marginal increase in lexical sophistication instances of lexical error may have influenced the raters, leading to no improvement in scores. (The issue of lexical accuracy is the focus of Chapter 5.) Should the limitations therefore lead us to discount the LFP data? Not really. If we are interested in lexical range and lexical quality (the two dimensions of writing which are most likely to be affected by dictionary use), the LFPs allow us to dig a little deeper than test scores. Provided that the limitations of the measure are taken into account, it is possible to draw two important conclusions. On one measure (the LFPs) the evidence suggests that, on the whole, participants did improve their lexical sophistication when using a dictionary, albeit marginally, and this difference (in the third study) was statistically significant. On another measure (test scores) it was found that using a dictionary did not make any significant difference. For the reasons just outlined neither of these findings is incompatible with the other. The difference in the findings between the LFPs and test scores is intriguing nonetheless. It indicated the need to continue digging even deeper into the data. It was also important to take a closer look at the types of words the test takers actually looked up in the dictionary, and to scrutinize their levels of success with using them appropriately. This would enable a clearer picture to emerge not only of the extent of lexical sophistication achieved with the dictionary but also of the extent of dictionary misuse. In the remainder of this chapter I explore the first of these aspects – the extent of positive dictionary use.
How did the test takers use the dictionary?

I have already indicated that previous research has established that L2 learners want to access meanings from their dictionaries more than any other information. Hartmann (1983) investigated the uses to which teachers (n = 67) and students (n = 118) put bilingual English/German dictionaries. Just under two thirds of the students were studying German in school. Bishop’s (1998) survey into bilingual English/French dictionary use spanned both secondary and tertiary level, and involved two cohorts of participants – school students in the first year of studying for the A level in French (n = 25), and second level French students (n = 25) at the UK’s Open University (OU). The school students were at an earlier stage in learning French than many of the OU students. The two surveys provided evidence that students used their dictionaries first and foremost to locate meanings.
Table 4.6 Rank order of dictionary uses

Hartmann (1983)     Bishop (1998) (French)
(German)            A level students     OU students
meaning             meaning              meaning
grammar             gender               spelling
use in context      spelling             gender
spelling            feminine forms       verb forms
synonyms            verb forms           feminine forms
pronunciation       synonyms             plurals
etymology           plurals              register
other               pronunciation        synonyms
                    register             pronunciation

Adapted from East (2006c, p. 3).
Table 4.6 illustrates, in rank order from most to least, how dictionary users most often used their dictionary. Hartmann (1983) discovered that 97% of the German dictionary users frequently accessed meaning. Grammar, the next most frequent use, was 15 percentage points behind this, at 82%. Bishop (1998) observed that 68% of the A level students and 80% of the OU students most often used the French dictionaries to find meanings. The next most frequent use for the A level students was to check gender (52%), and for the OU students to check spelling (64%). Thus, in both cases, checking meanings ranked 16 percentage points above the next most popular option. I utilized these rank-ordered lists to draw up a series of ‘dictionary use’ statements that were then placed on the longer questionnaire that the participants in my studies would complete. Participants in the first study were asked to indicate from the list of statements the uses to which they put the dictionary (no ranking was required). There was also a space for participants to list any other uses to which they put the dictionary. Their responses were used to confirm the statements that were included in the questionnaires for the two subsequent studies. One further use was added on the basis of a comment received in the first study. In this way a ‘dictionary use checklist’ was presented to the test takers in the second and third studies, on which participants were asked to rank their uses of the dictionary (Figure 4.4). Although the rankings in the second and third studies differed somewhat, a comparison of responses from all three studies helps to paint a picture of the uses to which these test takers put the dictionary, and their importance for the test takers relative to each other. This overall ranking, in three bands, is given in Figure 4.5.
For each statement please tick the box that most accurately reflects the ways in which you USED THE DICTIONARY for this task. It is important to tick ONE box per statement. (Each statement was rated on a five-point scale: every time / most times / sometimes / rarely / never.)

When I used the dictionary, I used it . . .
– to find out the meanings of individual GERMAN words I DEFINITELY did not know
– to check the meanings of individual GERMAN words I WASN’T SURE about
– to find out the meanings of GERMAN PHRASES/EXPRESSIONS I did not know
– to find out or check the genders of individual GERMAN NOUNS
– to find out or check the plural endings of individual GERMAN NOUNS
– to find out or check the spelling of individual GERMAN WORDS
– to find out or check if a GERMAN VERB was irregular
– to find out or check the correct preposition to use with a GERMAN VERB
– to find the German equivalent of individual ENGLISH words
– to find the German equivalent of ENGLISH PHRASES/EXPRESSIONS
– to find GERMAN PHRASES that I could use in my essay
Figure 4.4 Dictionary use checklist.
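A composite ranking of checklist responses like those gathered with this instrument can be derived mechanically. The sketch below assumes a numeric coding of the five-point scale (4 = every time down to 0 = never) and ranks uses by mean rating; both the coding and the data are hypothetical.

```python
# Sketch: rank dictionary uses by mean checklist rating.
# Scale coding (hypothetical): 4 = every time ... 0 = never.

responses = {
    "find German equivalent of English words": [4, 4, 3, 4],
    "check spelling of German words": [3, 2, 3, 2],
    "check if a German verb was irregular": [1, 0, 1, 1],
}

def mean_rating(use):
    scores = responses[use]
    return sum(scores) / len(scores)

ranked = sorted(responses, key=mean_rating, reverse=True)
# ranked[0] is the use reported most often.
```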
This composite picture supports the findings of previous research and indicates that these test takers most wanted to use the dictionary to access the meanings of individual lexical items, first from L1 to L2, and then from L2 to L1. Using the dictionary to find phrases tended to be of less importance, although checking spelling and gender also featured quite highly. The principal use to which the dictionary was clearly put by all, regardless of the mode of writing, was to find the German equivalent of English words. This raised two questions for which I hoped the comparative shorter questionnaires would provide answers: what did the test takers do when they did not have the dictionary with them, and was using the dictionary the most preferred strategy when it was available? As I argue in Chapter 2, language learning strategies are used because, in the words of Oxford (1990), there is “a problem to solve, a task to accomplish, an objective to meet, or a goal to obtain” (p. 11). Strategy use is also an important dimension of the theoretical framework of communicative competence outlined by Canale and Swain (1980) and Canale (1983). Within their outline strategic competence is called into play “to compensate for breakdowns in communication due to performance variables or to insufficient competence” (p. 30). When the dictionary is not available test takers need to draw on other communicative strategies. The shorter questionnaires presented participants with several statements adapted from Part C of Oxford’s (1990) ‘Strategy Inventory for Language Learning’
Chapter 4. How do test takers use dictionaries?
Information sought     Strategy

Primary uses
Meaning, L1 → L2       To find the German equivalent of individual English words
Meaning, L2 → L1       To find out the meanings of individual German words I definitely did not know
Meaning, L2 → L1       To check the meanings of individual German words I wasn’t sure about

Secondary uses
Spelling               To find out or check the spelling of individual German words
Gender                 To find out or check the genders of individual German nouns
Meaning, L1 → L2       To find the German equivalent of English phrases or expressions

Other uses
Meaning, L2 → L1       To find out the meanings of German phrases or expressions I did not know
Plurals                To find out or check the plural endings of individual German nouns
Meaning                To find German phrases that I could use in my essay
Grammar                To find out or check the correct preposition to use with a German verb
Grammar                To find out or check if a German verb was irregular

Figure 4.5 Overall ranking of dictionary use strategies.
(SILL), which focuses on ‘compensating for missing knowledge’. In the second study the most popular strategies were as follows:

1. Preferred strategy in the ‘with dictionary’ tests:
   • In the input: I guessed the general meaning by using clues.
   • In the response: I looked up the word or expression in the dictionary.
2. Preferred strategy in the ‘without dictionary’ tests:
   • In the input: I guessed the general meaning by using clues.
   • In the response: I kept to the same idea, but found another way to say it.
In the third study, the larger number of participants made it possible to provide a rank order of a number of strategies.

1. Rank order of strategies used in the ‘with dictionary’ test:
   • In the input:
     1. I looked up the word I did not know in the dictionary.
     2. I guessed the general meaning by using clues.
     3. I chose to ignore the word.
   • In the response:
     1. I looked up the word or expression in the dictionary.
     2. I kept to the same idea, but found another way to say it.
     3. I made a guess based on the words I know.
     4. I dropped the idea.
     5. I invented a word.
2. Rank order of strategies used in the ‘without dictionary’ test:
   • In the input:
     1. I chose to ignore the word.
     2. I guessed the general meaning by using clues.
   • In the response:
     1. I kept to the same idea, but found another way to say it.
     2. I made a guess based on the words I know.
     3. I dropped the idea.
     4. I invented a word.

The evidence across all three studies suggests that, when it came to accessing unfamiliar words from the test tasks, guessing was often used regardless of whether the dictionary was available. When writing a response to the tasks set, however, using the dictionary, when available, was preferred over any other strategy to make up for gaps in knowledge.
The look-ups themselves

Given that accessing the meanings of individual items of lexis was the predominant use to which the participants claimed that they put the dictionary, the types of words the test takers looked up were analyzed closely. Data on this were drawn from the highlighted or underlined instances of dictionary use. Figure 4.6 records the available information on the categories of item participants in each of the studies looked up, together with the relative popularity of these different categories.
Percentage of look-ups, by type of word:

               Nouns    Verbs    Adjectives   Adverbs   Phrases   Other
First study    60%      13.3%    17.8%        2.2%      0         6.7%
Second study   50%      19.8%    17%          5.7%      3.8%      3.8%
Third study    46.1%    21.9%    17.4%        7.1%      3.5%      4%

Figure 4.6 Types of word looked up. Adapted from East (2006c, p. 9).
These data reveal a level of similarity across the three studies in the types of items test takers wanted to locate from the dictionary. It was evident that the self-reporting of dictionary uses confirmed the questionnaire responses: individual items of lexis constituted the vast majority of look-ups, with the location of phrases accounting for less than 4% of look-up activity. In total the look-up of nouns made up around half the number of look-ups, with verbs (at one in five of all look-ups) coming second.
The sophistication of the look-ups

The exercise in lexical frequency profiling had revealed that, at least in the third study, the dictionary look-ups were making a positive difference to the lexical sophistication of the students’ writing. This suggested that the look-ups the test takers were making were of items they might not reasonably have known, items they were subsequently able to use to improve the quality of their lexis. An important question to ask was therefore: how many of the items looked up by the test takers could be placed into this category of ‘unknown’ words? In other words, to what extent was the dictionary helping the students to find new or more sophisticated ways of expressing themselves, and to what extent was the dictionary being used to check up on words the test takers might reasonably be expected to know? These are important questions because they help to identify
the extent to which the bilingual dictionary was a useful tool to facilitate more sophisticated language use, and the extent to which it was being used when perhaps it did not need to be used – that is, as a check for items with which the test takers really ought to have been actively familiar. This is not to suggest that checking items which should be known is not a legitimate use to which a dictionary can be put – although those who might argue against allowing dictionaries in writing tests might suggest on this basis that using a dictionary in this way leads to laziness on the part of the test takers (not taking the time or effort to learn vocabulary) and subsequent reliance on the dictionary at the expense of other strategies. It is to suggest that if a dictionary is being used to extend knowledge and to help test takers to become more adventurous in their choice of words, it is fulfilling a useful function and is being used as a valid tool – which would support the arguments of those who are in favor of dictionary use in tests. I considered that calculating an LFP of the look-ups as individual items of lexis would help to differentiate between different levels of look-up. One of the limitations of this type of comparison is that it aims to draw a distinction between words that the test takers might realistically be expected to know (words in the lists) and words they might not know (words not in the lists). In reality, language acquisition and knowledge of lexis are somewhat fluid, and test takers cannot necessarily be expected to be actively familiar with every word on a given list (which is why using a dictionary to check an item may be seen as a legitimate activity). Despite this limitation, however, an analysis of the words looked up provides some picture of the range of items. When I analyzed the individual items of lexis using Range, I found that around half the look-ups were of items not found in either of the two benchmark lists.
In other words, half the look-ups were of words that (as judged against the benchmark lists) the test takers could not reasonably be expected to know at the intermediate level. A further one in four items could be located in the second list (more sophisticated lexis). These figures indicate that only around a quarter of the look-ups were of ‘basic’ lexis (items from the first list). Based on this evidence, and taken across the data available from all three studies, the following conclusion can be drawn: three out of four occasions of dictionary use were to locate ‘higher order’ items of vocabulary, and half of all look-ups were of items the test takers were not expected to know. Just as the LFPs of the entire texts had indicated, the test takers were using the dictionaries to increase their range of lexis. Examples of the range of vocabulary accessed successfully from dictionaries at each of the three levels measured by the LFP are given on the next page.
Chapter 4. How do test takers use dictionaries?
First List

Type of word   Example      Meaning
Noun           Beruf        profession
Noun           Fehler       mistake
Noun           Grund        reason
Noun           Landschaft   countryside
Noun           Mannschaft   team
Verb           bauen        to build
Verb           bezahlen     to pay
Verb           reisen       to travel
Adjective      einfach      simple
Adjective      höflich      polite
Adjective      nützlich     useful
Adverb         besonders    especially
Adverb         vielleicht   perhaps
Adverb         wirklich     really
Second List

Type of word   Example         Meaning
Noun           Einfluss        influence
Noun           Erinnerung      memory / reminder / memento
Noun           Erlebnis        experience
Noun           Kenntnis        knowledge
Noun           Verständnis     understanding
Verb           beschliessen    to decide
Verb           entwickeln      to develop
Verb           schätzen        to treasure / value
Adjective      einheimisch     indigenous
Adjective      neugierig       curious
Adjective      zweisprachig    bilingual
Adverb         eindrucksvoll   impressive
Adverb         sorgfältig      carefully
Adverb         überraschend    surprising
Not in the lists

Type of word   Example          Meaning
Noun           Aufmerksamkeit   attentiveness
Noun           Bereitschaft     readiness
Noun           Entwicklung      development
Noun           Tätigkeit        activity / occupation
Noun           Zunahme          increase
Verb           beeindrucken     to impress
Verb           befreien         to set free
Verb           erfordern        to require
Adjective      einmalig         unique
Adjective      geistig          spiritual
Adjective      herausfordernd   provocative / inviting
Adverb         offensichtlich   obvious / blatant
Adverb         unerlässlich     essential / imperative
Adverb         unflätig         offensive
These lists provide evidence that the test takers were often able to use the dictionary successfully to locate a wide assortment of words, at different levels of sophistication, to meet their needs in a variety of contexts.
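The three-band classification used above – checking each looked-up item first against the basic list, then against the more sophisticated list, and otherwise counting it as off-list – can be sketched in a few lines. This is an illustrative reconstruction, not the actual Range program; the function name `classify_lookups` and the sample word sets (drawn from the tables above) are mine, not the real benchmark lists.

```python
def classify_lookups(lookups, first_list, second_list):
    """Assign each looked-up word to one of three LFP-style bands:
    'first' (basic lexis), 'second' (more sophisticated lexis),
    or 'off-list' (in neither benchmark list)."""
    counts = {"first": 0, "second": 0, "off-list": 0}
    for word in lookups:
        w = word.lower()
        if w in first_list:
            counts["first"] += 1
        elif w in second_list:
            counts["second"] += 1
        else:
            counts["off-list"] += 1
    total = len(lookups) or 1  # avoid division by zero for empty input
    return {band: n / total for band, n in counts.items()}

# Tiny illustrative word sets, not the actual Range base lists.
first = {"beruf", "fehler", "bauen"}
second = {"einfluss", "erlebnis", "entwickeln"}
lookups = ["Beruf", "Einfluss", "Aufmerksamkeit", "Zunahme"]
print(classify_lookups(lookups, first, second))
# → {'first': 0.25, 'second': 0.25, 'off-list': 0.5}
```

A profile like the one printed here – a quarter basic, a quarter second-list, half off-list – mirrors the overall pattern reported for the look-up data.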
Phrasal use

The location of phrases (or what Wray (2002) calls formulaic sequences) constituted only a very minor part of the look-ups. It was evident, however, that several test takers were able to locate from the dictionary, and then use successfully, fixed phrases and expressions that enhanced the quality of their response. Examples located in the Collins German Dictionary included:

German phrase                        Meaning
Ohne Rücksicht auf                   Without consideration for
Ich möchte Sie bitten ...            I would like to ask you ...
... was Sie beabsichtigen zu tun     ... what you intend to do
Other phrases taken from different dictionaries included the following:

German phrase                            Meaning
Bis zu einem gewissen Grade              To a certain extent
… gegenüber im Vorteil sein              To have the advantage over …
[Wir müssen auch] dafür sorgen, dass     [We must also] be concerned that ...
Auf der anderen Seite …                  On the other hand …
Hinzu kommt noch …                       There is also the fact that …
On the other hand, a qualitative analysis of texts from the second study revealed the successful use of formulaic sequences that had been committed to memory rather than items that had been looked up in the dictionary, at least according to
the test takers’ self-reporting (East, 2005b, 2006c). Across all three tasks, these sequences included:

German phrase                            Meaning
Ich bin der Meinung, dass (7 times)      I am of the opinion that
Ich glaube, dass (4 times)               I believe that
Es tut mir sehr Leid, dass (3 times)     I am really sorry that
Ich bin sicher, dass (twice)             I am certain that
Ich schlage vor, dass                    I suggest that
Man kann mit Sicherheit sagen, dass      It can be said with certainty that
Es kommt darauf an, ob                   It depends whether
Es wird viel gesagt, dass                It is often said that
Ich habe verstanden, dass                I understood that
Für mich hat es die Bedeutung, dass      For me it has the significance that
Zuerst muss ich mich fragen, was         First of all I must ask what
In the discursive essays (Task C), several sequences were drawn on (again not located in the dictionary), although their use indicated some instances of error:

German phrase: Wenn man das Thema Mütter und ihre kleine Kinder betrachtet ...
Meaning: When you consider the theme of mothers and their small children ...

German phrase: Wenn man das Thema, Drogen betrachtet, kann man sicher sein, dass ...
Meaning: When you consider the theme of drugs you can be certain that ...

German phrase: Über das Thema „Sollten Mütter arbeiten oder [sich um] ihre Kinder kümmern“ wird sehr viel geschreiben und vielleicht eben mehr gesprochen
Meaning: A lot has been written, and perhaps even more spoken, about the theme “should mothers work or look after their children?”

This sometimes relatively sophisticated use of sequences without recourse to the dictionary hints at the extent to which test takers were able to enhance the quality of their writing without having to look things up, and the extent to which thorough preparation was likely to pay dividends (an issue which I explore more fully later).
Looking at lexis in context

Bearing in mind the extensive use of dictionary look-ups, I also considered it important to look at how individual items of lexis were used in specific contexts
to add to the quality of the writing. A qualitative analysis of the texts of a number of Collins Pocket users in the third study revealed ways in which quality was improved by correct application of dictionary look-ups. This is illustrated in the following examples. One upper intermediate participant used the expression ‘benefits from’ effectively and correctly when she wrote:
Obwohl einige Schüler jetzt keine Interesse für die Fremdsprachen haben, werden sie in Zukunft aus Fremdsprachen Nutzen ziehen.
Although a few students now have no interest for foreign languages, they will benefit in the future from foreign languages.
One advanced participant indicated that she only used the dictionary twice, but on both occasions the entry was used successfully and correctly when she wrote:
Das ist kaum überraschend
That is hardly surprising
and
Für diese Leute gibt es auch viel mehr Arbeitsgelegenheiten.
For these people there are also more work opportunities.
Two other advanced participants wrote essays that contained some errors of grammar and spelling, but dictionary look-ups were often correctly applied (relevant sections of these essays are presented verbatim and uncorrected below, although for ease of readability paragraph breaks have been removed). In the first example look-ups were used to good effect on 11 out of 13 occasions. German text
Jedes Jahr kommen viele Touristen, um Neuseelands Abenteuer-, Entspannungs- und Kulturmoglichkeiten zu erleben. Aber wie wirken sie aufs Land? Erstens überlegen wir die wirtschaftlichen Wirkungen. Für neuseeländischen Gesellschaften sind also ausländische Gäste sehr wichtig. Zweitens hat der Zustrom der Touristen positive und negative Konsequenze für die Umwelt. Wenn zu viele Touristen in ein Gebiet reisen dürfen, wird es ohne Zweifel beinflusst. Zum Beispiel in der Sudinsel werden viele abgelegene Einlassen wegen der Emissionen der Reiseboote verschmutzt. ... es könnte für die Neuseeländer vorteilhaft sein, so viele andere Kulture kennen zu lernen können. Alles in allem bin ich überzeugt, dass der Massentourismus viele Nutzen für Neuseeland haben kann, vorausgesetzt dass er sorgfältig kontrolliert wird.
Direct translation
Each year many tourists come in order to experience New Zealand’s adventure, relaxation and culture possibilities. But how do they affect the land? First of all let us think about the economic effects. For New Zealand businesses foreign visitors are therefore very important. Secondly the influx of tourists has positive and negative consequences for the environment. When too many tourists are allowed to travel into a region it is influenced without doubt. For example in the South Island many remote inlets are polluted because of the emissions of the travel boats. … it might be advantageous for the New Zealanders to be able to get to know so many other cultures. All in all I am convinced that mass tourism can have many uses for New Zealand, provided that it is carefully controlled.
As for the two occasions of error, the word Einlassen (inlets) was inappropriate in the context, and also (if it had been the correct word) was misspelt – it should have been Einlässe. The contextually appropriate word was Meeresarme. Also Reiseboote (travel boats) was a good attempt to form an appropriate compound, but a more fitting word might have been Touristenschiffe (tourist ships). In the second example another advanced participant used the Collins Pocket successfully four times. With the exception of the basic word Geld (money), I considered that the other words added to the discourse quality of the essay. German text
Touristen bringen nämlich sehr viel Geld in Neuseeland rein. ... Ohne dieses Geld würde Neuseeland sich nicht weiter entwickeln. Es ist schlecht für die Umwelt weil sie mehr verschmutzt wird mit Mühl, Luftverschmutug und Lärmbelästigung. Die Strände überall in Neuseeland werden überbevölkert mit Touristen ...
Direct translation
Indeed tourists bring a lot of money into New Zealand. … Without this money New Zealand would not develop itself further. It is bad for the environment because it becomes more polluted with rubbish, air pollution and noise pollution. The beaches everywhere in New Zealand are becoming over-populated with tourists …
In the case of überbevölkert, a native German-speaking colleague who looked at the data had expressed a preference for überhäuft. I considered that überbevölkert communicated the concept of ‘overcrowded beaches’ sufficiently accurately to be considered effective use of the dictionary.
The adequacy of the dictionaries used

One important remaining question in which I was interested related to the extent to which the test takers found the dictionaries useful for what they had to do. That is, they may often have been able to locate lexis appropriate to their contexts, but to what extent was the dictionary they used always helpful with this? Participants were asked about this in the longer questionnaires (Figure 4.7).

Did you find the dictionary you used adequate for the task?
☐ yes    ☐ no    ☐ sometimes
Why do you say this?
Figure 4.7 The ‘adequacy’ question on the longer questionnaire.
The evidence from the questionnaires suggests that the Collins German Dictionary users (those in the first and second studies) were, on the whole, satisfied with the adequacy of this dictionary. In the first study all six participants recorded an overall high level of satisfaction. They commented that they were mostly or always able to find the meaning of a German word, and all but one said that they were always able to find the German equivalent of an English word. In the second study, all participants noted that they found the dictionary adequate when used, either sometimes or always. Those who found it adequate, at least on one occasion, commented on the following:

•	A wide range of words and definitions for a wide variety of contexts was available.
•	Important grammatical information, most particularly genders and plural forms, could be found.
•	Occasionally important expressions or phrases could be located.
•	The dictionary was also useful for checking spelling.
Participants in the third study were also asked about ease of dictionary use (Figure 4.8).
How EASY did you find this dictionary to use? It is important to tick ONE box ONLY.
I found it very easy        ☐
I found it quite easy       ☐
I found it quite difficult  ☐
I found it very difficult   ☐
Figure 4.8 The ‘ease of use’ question on the longer questionnaire.
Figure 4.9 presents the results of the questions focusing on adequacy of the dictionary and ease of use in the third study. Participants’ responses indicate the following:

•	Virtually all the participants using the Collins Pocket (97%) found it at least sometimes adequate, with a similar result for the remaining participants using the other dictionaries (88%).
•	A good number expressed a high level of satisfaction with the dictionary, judged by the number of responses that indicated ‘yes’ to the adequacy question (42% of the Collins Pocket users, and 44% of the other dictionary users).
•	Virtually all the participants found the dictionary they had at least quite easy to use (94% of the Collins Pocket users and 97% of the users of other dictionaries).
Evidence on the adequacy of the various dictionaries used across the three studies appears to reveal, therefore, that no dictionary stood out as being any more or less adequate than any other. Nevertheless, these findings do hint at some potential problems with the dictionaries. For example, a good number of users in the third study recorded that their own expectations of adequacy were only sometimes met – a finding suggesting that the dictionaries they had access to had some definite limitations. This would require further investigation.
Where to next?

Putting together all of the dictionary use evidence presented in this chapter, a number of important strands emerge:
Figure 4.9 Opinions about the dictionary used – third study. Note. No participants reported that they found the dictionary they had very difficult to use.
•	When the dictionary was available to the test takers, its use was the most preferred strategy when it came to tackling the response to the questions – the writing of their essays.
•	The dictionary was primarily used to locate meanings, firstly from L1 to L2, and then from L2 to L1.
•	Individual items of lexis made up 97% of look-ups, and half of these were nouns.
•	The test takers were able to increase the sophistication of their lexis when using a dictionary when compared to the range of lexis in the ‘without dictionary’ essays. In the third study this increase was statistically significant.
•	Around 75% of all look-ups could be classed as ‘more sophisticated’ lexis when benchmarked against the vocabulary lists used by Range.
•	According to evidence from the second study, test takers were often able to use formulaic sequences successfully in their writing without having to use a dictionary.
•	Most participants were satisfied that the dictionary they had with them was at least sometimes adequate for the uses to which they wanted to put it.
All in all, this evidence paints quite a positive picture of the benefits of having a dictionary in a writing examination. There was some indication, however, that the dictionaries were not always adequate for the task, and the two instances of dictionary error I recorded hint at potential problems with using a dictionary. Chapter 5 explores some of the definite limitations to having dictionaries in writing tests which these studies revealed.
chapter 5
When the dictionary becomes a liability
In Chapter 3 I presented the evidence available from the three reported studies about the impact of bilingual dictionaries on writing test scores. If we were to look solely at this evidence we could conclude that utilizing a dictionary makes no difference to test taker performance. In Chapter 4 I began to delve a little deeper than test scores and considered the positive uses to which the test takers put the dictionaries. On the basis of the evidence presented there it might be concluded that dictionary use actually makes a positive difference. It certainly seemed as if the test takers were able to draw on a broader and more sophisticated range of lexis when they wrote with the dictionary, and this more sophisticated lexis seemed to be attributable to their use of the dictionary.

Nevertheless, it cannot necessarily be assumed from this positive increase in lexical sophistication that the test takers were always able to exploit their dictionaries to maximum effect. There was also some indication that the test takers were not entirely satisfied with the dictionaries they had and could not always use them successfully. Another important strand of evidence which must be taken into consideration concerns those occasions when the dictionary became a liability. I turn to the liability dimension of dictionary use in this chapter.

As pointed out in Chapter 4, Weigle (2002) suggests that two potential liabilities to having a dictionary in a writing examination are the time taken to use a dictionary and some test takers’ inability to use the dictionary effectively. In this chapter I consider first of all the observational evidence from the first two studies. This evidence is used to explore the influence of dictionary use on time. I go on to look at the number of look-ups test takers reported that they made and examine how this potentially impacts on task completion.
Finally I explore the extent to which the test takers made errors with dictionary look-ups and look closely at some of these errors.
The number of look-ups test takers made

In the first two studies I gathered observational evidence on the test takers’ use of the dictionary. Participants consented to being video-taped so that the evidence could be scrutinized after the completion of the tasks. All the participants took
the tests in the same classroom. The video camera was placed at the front of the room approximately 4 meters from the participants, as illustrated in Figure 5.1 (not drawn to scale). In the first study, up to three participants took the tests at any one time. In the second study participants numbered ‘1’ in Figure 5.1 used the dictionary at the same time for one task (whether first or second), and participants numbered ‘2’ used it at the same time for the other. The recorder was switched on before the participants began the first task, and was allowed to run until both tasks were completed so that all participants were recorded unobtrusively. In practice, it was noted that once participants got underway with the tasks, the video-camera was largely ignored. Observation sheets (Figure 5.2) were completed retrospectively from the video evidence. Look-ups were noted for all 10-minute blocks in the entire 50-minute examination, and stop-watch timings were recorded for each of these. Each look-up was treated as an individual event. That is, if it was apparent that a participant returned to the same dictionary entry on a separate, later occasion this was recorded as a separate look-up. An exception was made whenever it appeared that participants were moving from a particular look-up to their writing and back again in a very short space of time. These occasions were deemed to be single look-ups. On a few occasions it was unclear whether participants were looking at a dictionary entry or re-reading aspects of the test task or their response. Periods of time that were ambiguous were omitted from the schedule of observations because they could not be regarded as definite look-ups. In both studies I also compared the observational evidence with look-ups identified through participants’ self-reporting in the texts. This was done to determine the extent of reliability of the test takers’ self-reports of dictionary use.
Figure 5.1 Placement of video camera for the first and second studies.
[Figure 5.2 shows the observation sheet: a grid headed ‘Name’, with columns numbered 1 to 10 and rows labelled 1st to 5th.]
Figure 5.2 Observation sheets for the first and second studies.
The first study

Figure 5.3 illustrates the number of look-ups each participant in the first study made in each 10-minute block. With the exception of Simon, who only chose to use the dictionary on three occasions, all the participants made use of the dictionary at different points throughout most of the examination. Indeed, it was evident that three participants (one intermediate and two advanced) accessed the dictionary more or less evenly throughout the whole examination (that is, in all five 10-minute segments), thereby indicating an on-going reliance on the dictionary to support their writing.

A summary of the lengths of time taken to use the dictionary is given in Table 5.1. It was evident from these data that the advanced participants consulted the dictionary more quickly than their intermediate counterparts (average look-ups were 30 seconds long compared to 47 seconds). Also, although the advanced participants consulted the dictionary more often than the intermediate writers (on average, 18 times rather than 13), as a consequence of speed they spent less time overall in dictionary look-ups (8 minutes rather than 10½ minutes). These data suggest that level of test taker ability might be a factor in the length of time it takes to use a dictionary, although both groups in this study appeared to want to use the dictionary, on average, a similar number of times. These test takers were also choosing, on average, to spend around 20% of the available examination time on dictionary consultation. I considered this a considerable amount of time which, if typical, indicated a potential distraction when it came to completing a time-constrained examination.
[Figure 5.3 comprises six bar charts, one per participant (intermediate participants: Patricia, Simon, Jenny; advanced participants: Sharon, Rachel, Janet), each plotting the number of look-ups against 10-minute block.]
Figure 5.3 Number of look-ups made – first study.
Table 5.1 Summary of dictionary use trends – first study.

Participant   Level of      Number of   Average duration   Total duration   Percentage of total time
              ability       look-ups    of each look-up    of look-ups      available (50 minutes)
1             Intermediate  3           00:42              2:32             5%
2             Intermediate  11          00:50              7:51             16%
3             Intermediate  24          00:51              21:35            43%
              M             13          00:47              10:39            21%
              SD            10.6        00:05              9:49             19.55
4             Advanced      10          00:31              5:44             12%
5             Advanced      26          00:24              10:25            21%
6             Advanced      17          00:26              7:34             15%
              M             18          00:30              7:54             16%
              SD            8.02        00:06              2:21             4.58
Table 5.2 Summary of look-up timings by task – second study.

                  1st 10 minutes   2nd 10 minutes   3rd 10 minutes   4th 10 minutes   5th 10 minutes
Task              L      T         L      T         L      T         L      T         L      T
A           M     3.6    2:25      3.6    2:34      2.8    1:45      2.6    1:34      1.6    0:55
            SD    1.8    1:01      2.5    1:08      1.3    0:46      2.2    1:11      1.2    0:51
B           M     1.6    1:17      2      1:17      2.4    1:51      3      1:38      2.6    3:00
            SD    1.5    1:06      2.0    1:06      1.8    1:06      3.1    1:42      2.1    2:42
C           M     2.6    1:31      2.6    2:11      2      1:39      3      1:39      1.6    1:00
            SD    2.2    1:09      1.3    2:03      1.9    1:18      1.6    1:16      1.1    0:48
All tasks   M     2.6    1:51      2.7    2:00      2.4    1:45      2.9    1:37      1.9    1:38
            SD    1.9    1:16      2.0    1:29      1.6    1:00      2.2    1:18      1.5    1:52

Note. L = number of look-ups; T = total time.
A closer inspection of the look-up time of the two participants who used the dictionary the most (intermediate participant 3 (Jenny) and advanced participant 5 (Sharon)) makes the potential liability of the dictionary more apparent. Sharon made 26 look-ups, and spent just over 10 minutes (21% of the examination time) on this. Jenny made 24 look-ups, but the longer time each took meant that she spent over 20 minutes, or 43% of her available time, using the dictionary – a serious distraction from the time available to complete the task.
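The summaries in Table 5.1 reduce to simple arithmetic over the stop-watch records. The sketch below is a hypothetical helper of my own (not part of the studies' actual analysis) showing how a participant's share of examination time spent in the dictionary can be derived from a list of individual look-up durations:

```python
def lookup_summary(durations_sec, exam_minutes=50):
    """Summarize dictionary look-up behaviour for one test taker:
    number of look-ups, mean duration, total time in look-ups, and
    the share of the available examination time this represents."""
    total = sum(durations_sec)
    return {
        "lookups": len(durations_sec),
        "mean_sec": total / len(durations_sec),
        "total_sec": total,
        "pct_of_exam": 100 * total / (exam_minutes * 60),
    }

# Hypothetical figures in the spirit of Table 5.1: 24 look-ups of
# around 51 seconds each approach Jenny's 43% of a 50-minute exam.
summary = lookup_summary([51] * 24)
print(summary["pct_of_exam"])  # → 40.8
```

Run against Sharon's profile (more look-ups, but faster ones), the same calculation yields a markedly lower percentage, which is the contrast the paragraph above draws.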
The second study

The second study provided an opportunity to further investigate frequency and duration of look-ups, but this time across three different tasks. Table 5.2 records the average number and duration of look-ups in each 10-minute block, for each individual task and then overall. Findings reveal the following dictionary use trends:

•	In Task A, the dictionary was used most in the first 20 minutes (three to four times in each 10-minute block). In Tasks B and C average look-ups generally ranged between one and three in each 10-minute block. More reliance was therefore placed on consulting the dictionary in the first task than in the next two.
•	On average participants spent between 1½ and 2 minutes in each 10-minute block consulting the dictionary, and on those occasions made between two and three look-ups, with a tendency to make more look-ups in the first 40 minutes.
•	Overall, the dictionary was used fairly consistently throughout the examination, indicating its usefulness not only to access the task but also to support the response through the whole process of writing. This was so across all three tasks.
Table 5.3 records the total number and duration of all look-ups for each individual task. It was found that, on average, 14 look-ups were made in Task A, over a period of 9½ minutes. This had reduced to between 11 and 12 look-ups, over a period of between 8 and 9 minutes, in Tasks B and C. Look-ups took participants around 45 seconds each. The findings of the second study regarding the total amount of time used to access the dictionary mirrored those of the first – that on average test takers used the dictionary for around 20% of the available time. Using the dictionary over three consecutive weeks did seem, however, to help the participants to use the dictionary marginally more efficiently. It was not possible to determine, from this evidence at least, if this was due to a kind of ‘practice effect’, or was a reflection of the types of task the test takers had been asked to do (although the test taker perceptions explored in the next chapter help to throw some light on which one of these may have made the difference). Nevertheless, if practice with the dictionary was making the difference, however slight, there was some indication that giving students opportunity to practice dictionary use in time-constrained contexts might be helpful in alleviating somewhat the potential distraction of using a dictionary in such contexts – an issue that therefore warranted further reflection.
Table 5.3 Summary of look-up timings by participant – second study.

              Total number of look-ups    Total duration                Average duration
Participant   Task A   Task B   Task C    Task A   Task B   Task C     Task A   Task B   Task C
1             21       15       5         12:06    7:49     04:53      00:34    00:31    00:58
2             12       14       12        9:25     10:55    11:47      00:47    00:46    00:58
3             11       8        18        7:03     9:16     10:10      00:38    01:09    00:33
4             20       18       16        14:51    13:49    10:00      00:44    00:46    00:37
5             7        3        8         4:23     3:33     3:36       00:37    01:11    00:27
M             14.2     11.6     11.8      9:33     9:04     8:05       00:40    00:52    00:43
SD            6.1      6.0      5.4       4:06     3:48     3:36       00:05    00:17    00:14
An additional question which I wanted to answer was the extent to which the observational evidence from the first two studies matched with the self-reporting evidence of the participants – their highlighting or underlining in the texts. This was important in establishing the extent to which the self-reports could be considered to be giving reliable information about dictionary use. Participants in the first study self-reported that on average they made eight look-ups, whereas the observational evidence indicated an average of 15 look-ups, or about double the number reported. In the second study participants again self-reported an average of around eight look-ups, whereas on average 13 were observed. On both occasions, therefore, the observational evidence appeared to suggest that the dictionary was being used more often than the participants were claiming. Was this indicating that self-reporting of look-ups was an unreliable measure? As mentioned in Chapter 4, there is indeed some unreliability in it – for a variety of reasons the test takers may have under-reported their actual dictionary use. One important factor needed to be taken into account: although the observational data did not discriminate between test taker use of the two different main sections of the dictionary (German/English and English/German) it was clear that on several occasions test takers were moving between both sections before settling on a word. On these occasions the observational evidence would record two look-ups, whereas the self-reporting would indicate one – the dictionary was used twice to solve one look-up problem. Sharon (first study) illustrates this trend. The video evidence indicated that, of a total of 26 observed look-ups, 14 were from the German/English section and 12 were from the English/ German section – Sharon was moving back and forth between entries. 
Sharon self-reported nine look-ups, and it is feasible to conclude that she may only have used nine items as a result of her look-up activity. Although the self-reporting evidence may need to be treated with some caution, the case of Sharon indicates that the differential between observed and self-reported look-ups was not necessarily as great as it first appeared. This was quite important when it came to interpreting the look-up evidence in the third study, and the extent to which self-reporting gave an informative picture of how participants used their dictionaries.
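Sharon's case suggests one simple way of reconciling the two measures: treat observed section consultations that follow closely on one another as a single look-up 'problem'. The sketch below is a hypothetical illustration of such a merging rule; the function name, the 20-second threshold, and the event format are my own assumptions, not a procedure used in the studies.

```python
def merge_consultations(events, max_gap=20):
    """Collapse a stream of observed section consultations into
    look-up 'problems': consecutive consultations separated by no
    more than `max_gap` seconds are treated as one attempt to solve
    a single problem (e.g. checking both the German-English and
    English-German sections before settling on a word).
    Each event is a (start_time_in_seconds, section) tuple."""
    problems = 0
    last_start = None
    for start, _section in events:
        if last_start is None or start - last_start > max_gap:
            problems += 1  # a new look-up problem begins
        last_start = start
    return problems

# Hypothetical observation: the first two consultations are close
# together (both sections checked for one word), the third is later.
events = [(100, "DE-EN"), (110, "EN-DE"), (400, "EN-DE")]
print(merge_consultations(events))  # → 2 problems from 3 consultations
```

Under a rule like this, Sharon's 26 observed consultations could plausibly collapse toward the nine look-up problems she self-reported.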
The third study

In the third study participants’ underlining of look-ups, together with a question on the longer questionnaire about frequency of dictionary use, enabled some comparative data on dictionary use trends to be gathered. The questionnaire
Figure 5.4 Frequency of dictionary use – third study.
asked participants to indicate on a four-point scale how often they judged that they had used the dictionary. Results are given in Figure 5.4. It appeared that the vast majority of participants claimed that they made up to ten look-ups, indicating fewer look-ups than those in the first two studies had actually made, but similar to the number of times participants in the first two studies had self-reported. When answers to this question were compared, on a case-by-case basis, with the number of occasions participants indicated dictionary use through underlining in the texts, it was noted that the vast majority (31/44 or 70%) reported accurately the number of times they had used the dictionary. All but one of the remainder under-reported in response to the question, indicating that they thought they had used the dictionary less often than the underlining would indicate. Of three participants who had not underlined anything in the texts, questionnaire responses revealed that two (lower intermediate) participants claimed to have used the dictionary more than 11 times, and one (intermediate) participant 3–10 times. It was possible to conclude that the self-report evidence of look-ups (that is, the underlining) gave a fairly accurate picture of look-up activity. Table 5.4 presents the number of look-ups made by participants at each ability level, together with the average number of times participants at each level, and overall, self-reported dictionary use. Participants’ underlining indicated that on average nine look-ups were made, but there did seem to be a difference between ability groups. It appeared that the higher the ability level, the more sparing the use of the dictionary.
Table 5.4 Range of look-ups based on underlining in the response – third study.

Level                          Range of look-ups   M     SD
Lower intermediate (n = 3)a    4–15                9     5.6
Intermediate (n = 21)b         0–32                9.9   6.7
Upper intermediate (n = 16)    2–20                8.4   4.7
Advanced (n = 4)               2–13                6.8   4.9
Overall (n = 44)               0–32                9     5.7
Note. a Two lower intermediate participants did not follow the instruction to underline, although it was clear from the questionnaire that they had used the dictionary. These participants were not included in the figures. b One intermediate participant also did not follow the instruction to underline and was not included in the figures. One intermediate participant did not use the dictionary at all in the response, and this is included.
Dictionary look-ups – what do all three studies tell us?

A comparison of the duration of look-ups between the first two studies and Hurman and Tall’s study (1998, p. 15), in which the researchers timed 27 test takers, is represented diagrammatically in Figure 5.5. In both contexts look-ups most commonly took between 20 and 40 seconds. However, there did appear to be a tendency for look-ups to take a little longer in the two reported studies (over a third were above 40 seconds) than in Hurman and Tall’s study (about a quarter were above 40 seconds). Another interesting comparison (Figure 5.6) arises from placing the total of look-ups (noted from all three studies) against the number for writing recorded by Bishop (2000), who used questionnaires to investigate the actual experiences
Figure 5.5 A comparison of look-ups by duration.
104 Dictionary Use in Foreign Language Writing Exams
Figure 5.6 A comparison of look-ups by frequency. Note. The percentages for all three studies are derived from the sum of participants’ underlining or highlighting of words used in the responses.
and post hoc perceptions of OU students who had taken examinations in French in two conditions – with and without dictionaries. Bishop (2000) found that two out of three test takers reported that they used the dictionary no more than five times to locate words when writing. Just over one in 10 used the dictionary more than eleven times. There appeared to be somewhat more judicious use of the dictionary in Bishop’s (2000) study than in those reported here, where two out of three participants made at least six look-ups.

With regard to look-ups and ability level, I compared the third study data with those generated from Atkins and Varantola (1998a). These two researchers had undertaken a detailed large-scale investigation with groups of learners of English as a foreign language (EFL) drawn from a range of educational institutions across Europe. They wanted to discover what dictionary users actually did when they used a dictionary to complete a set of test tasks. Although, strictly speaking, findings between their study and the one reported here are not directly comparable, essentially because the nature of the tasks in Atkins and Varantola’s study was different, there is an interesting point of comparison (Table 5.5). Lower ability participants (groups D and C in Atkins and Varantola’s study, and the lower intermediates and intermediates in the third study) made more use of the dictionary than higher ability participants. The most able participants (group A and the advanced participants) used the dictionary the least.

Comparison with previous research raises a number of questions. Why did the participants seem to take longer with look-ups than those observed by Hurman
Table 5.5 Number of look-ups – a comparison by ability.

Atkins and Varantola’s (1998a) study                    The third study
Level of ability     Number of    Percentage of        Level of    Range of look-ups   Average number
(least to highest)   look-ups     look-upsa            abilityb    in the response     of look-ups
D                    634          17.5                 LI          4–15                9
C                    743          19                   I           0–32                9.9
B                    600          14                   UI          2–20                8.4
A                    301          11.5                 A           2–13                6.8

Note. a The percentage of look-ups is a ratio of the total number of possible look-ups. b LI: lower intermediate; I: intermediate; UI: upper intermediate; A: advanced.
and Tall (1998)? Why did the participants seem to make more look-ups in these tasks than had been identified by Bishop (2000)? Why was there a discrepancy between the first and third studies with regard to the advanced participants’ dictionary use? It may be speculated that the duration and range of look-ups made by participants in these studies, in comparison to evidence derived from elsewhere, are affected by a number of factors.
Duration of look-ups

In the first two studies:

1. Higher level writing tasks may demand more sophisticated vocabulary than lower level tasks, meaning that participants took slightly longer to locate words in the first two studies than in Hurman and Tall’s (1998) research, which had focused on writing at less complex levels.
2. The size of the dictionary (around six times larger than those used in the Hurman and Tall study) may have made it somewhat more difficult to locate the required information than was the case for Hurman and Tall’s participants.
3. The maturity and higher proficiency level of the participants may have meant that they were more confident with exploring dictionary entries than the younger and less proficient participants in Hurman and Tall’s study, and were willing to invest slightly more time in doing so. (As Bachman and Palmer (1996) suggest, age may be influential in how test takers perform on a given task.)
Look-ups and ability level

It is difficult to account for the differential findings between the first and third studies with regard to dictionary use by advanced learners (although the small sample size in the first study must be taken into consideration). Again, one possible explanation is the difference in maturity between the two groups, with those in the first study being more mature, and also more experienced language users. The advanced participants in the first study may have chosen to use the dictionary more because they felt more comfortable doing so in comparison to the younger upper ability participants in the third study, who were also less experienced language users.
Number of look-ups

In terms of the number of look-ups made across all three studies, Bishop (2000) suggests the following: “If we define overuse as more than 10 times in the 45 minutes allocated to [writing] then it is clear that students in the main did not overuse their dictionaries [in the OU study] and thereby lose time” (p. 61). Although setting the benchmark for overuse at ten look-ups in 45 minutes appears to be somewhat arbitrary, the tendency for a number of participants across all three studies, in comparison to Bishop’s finding, was to ‘overuse’ the dictionary, with a consequent loss of time. The marginally longer time taken to make the look-ups in comparison with the observed evidence in Hurman and Tall’s (1998) study would have accounted for an even greater negative impact on time. It is not possible to draw any conclusions about why the dictionaries were used more often in the studies reported here. However, Bishop relied on retrospective questionnaire evidence gathered some time after tests had been completed, and this may have influenced the participants’ recall of their actual dictionary use. The important conclusion that can be drawn from the observational and self-report evidence is this: the dictionary is potentially a serious liability when it comes to its use in time-constrained tests, and the impact on time is one that test setters and test takers must consider seriously. (As to the perceived level of seriousness of this liability, we need to take into account the voices of the test takers themselves – but this will be the subject of the next two chapters.)
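Bishop’s ten-look-up benchmark can be translated into time terms using the 20–40 second look-up durations observed across the studies. The following back-of-envelope sketch (Python; the function name and the specific look-up counts are illustrative, not from the source) shows how quickly consultation time accumulates against a 45-minute allocation:

```python
# Rough cost of dictionary consultation in a 45-minute writing test,
# based on the 20-40 second look-up durations observed in the studies.
EXAM_MINUTES = 45

def time_cost(lookups, seconds_each, exam_minutes=EXAM_MINUTES):
    """Minutes spent in the dictionary, and the share of the exam consumed."""
    minutes = lookups * seconds_each / 60
    return minutes, minutes / exam_minutes

# Bishop's 'overuse' benchmark is 10 look-ups in the 45 minutes.
for n in (5, 10, 20):
    low, _ = time_cost(n, 20)
    high, _ = time_cost(n, 40)
    print(f"{n:2d} look-ups: {low:.1f}-{high:.1f} minutes of writing time")
```

Twenty look-ups at the longer durations would consume roughly 13 minutes, almost a third of the allocation, which makes the ‘loss of time’ concern concrete.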
The number of errors the test takers made

The second major source of dictionary liability which Weigle (2002) identifies is the inability of some test takers to use dictionaries effectively. Their inappropriate use can lead to the types of error exemplified in Chapters 1 and 4. It is this potential liability of dictionary availability that arguably causes the most concern among test setters and teachers (and if we add to this the time-constrained nature of a writing task, the problem may well be compounded). It is important to take a look at some of the errors the test takers in these three studies made and thereby determine how serious dictionary misuse is as a potential liability.

I turn now to the evidence regarding dictionary misuse. In Chapter 4 I indicated that, taken across all three studies, there was a level of symmetry in the types of look-up participants made. The major use to which dictionaries were put was to find individual items of lexis, with nouns making up around half of all look-ups. Participants overall had made 548 look-ups. Of these, 247 items (45%) were found to be in error. This did not appear to be a particularly high level of success, although it should be noted that the majority of look-ups were made in the third study, and this influences the overall impression. Indeed, when these figures were examined on a study-by-study basis an interesting anomaly became apparent. Considered in percentage terms, participants in the first and second studies made errors with look-ups around 15% of the time, and were therefore able to use look-ups correctly on 85% of all occasions. This indicated a very high success rate with the dictionary. By contrast, those in the third study appeared to make errors with look-ups 50% of the time – one error for every two look-ups. Figure 5.7 illustrates the percentages of errors made in relation to the percentages of items used in each of the six look-up categories. The areas above the lines indicate correct look-ups, with the areas below showing look-up errors.

Figure 5.7 Comparison of percentages of look-up errors across all three studies.
Noticeably more errors were made in all categories of look-up in the third study compared to the first two. These differential findings raise several questions. Why did there appear to be such a difference in look-up errors? Could it be related to the type of task? Or the type of student? Or the type of dictionary? Before considering these questions, it is helpful to take a closer look at the nature of the errors participants made, and then to go on to see if these important questions can be answered satisfactorily.
The first two studies

It was evident from the first study that three of the six participants made no errors at all with look-ups, and a further two made errors based on dictionary look-ups on two occasions only. Patricia, however, was led astray with three out of the five look-ups she used. In the second study look-up errors were quite evenly distributed among the test takers, with no one test taker making considerably more or fewer errors than any other. Examples of look-up error from the first two studies include the following:

Error              Intended meaning            Should be           Nature of error
kennen             know                        wissen              an inappropriate word was selected (‘be acquainted with’ rather than ‘know [a fact]’)
Hafens             ports                       Häfen               an incorrect plural was used
hilfsbereit        helpful                     hilfreich/nützlich  an inappropriate word was selected (‘ready to help’ rather than ‘useful’)
Kriegsgefangener   prisoner of war             Kriegsgefangene     the correct word was located, but was taken directly from the dictionary, using a contextually incorrect declension
Abschluss          conclusion                  Schlussfolgerung    a contextually incorrect word was chosen
gebäude            building                    Gebäude             wrong capitalization
Geschichte Romane  history [historical] novel  Geschichtsromane    failure to form a correct compound
In die Schweiz     In Switzerland              In der Schweiz      failure to recognize that a change in case was needed
informieren        information                 Informationen       using a verb when a noun would have been more appropriate
vollig             completely                  völlig              misspelling
Materiell          material                    aus Kunststoff      using an adjective when a noun was required
Two types of error are indicated: an inability to choose the appropriate word from the list of options provided in the dictionary – word choice error; and an inability to decline the word correctly (that is, apply the relevant grammar) – word form error. Although it is not possible to determine whether participants accessed information from which they went astray from the German-English or the English-German section of the dictionary, a look at the relevant English-German entries for a number of other misapplied words in the second study would suggest that these entries (all located in the Collins German Dictionary) were the starting points. It would seem that the participants did not have, or did not apply, knowledge about how to use the entries correctly.

Peter, writing a letter about his experiences in a German language course, wanted to speak about having more conversation practice (or chats) in class. He had presumably looked up ‘chat’ in the dictionary:
He had chosen the first entry and had gone on to use the wrong plural form – Unterhaltungs rather than Unterhaltungen. No plural was suggested in the entry. Should an intermediate level student of German know that all nouns ending in -ung are feminine, and make their plural with -en? Or should the dictionary have provided this information? Also, arguably the more appropriate entry for his context was illustrated in the example ‘could we have a ~ about it?’ – that is, he should perhaps have aimed to write ‘[the opportunity to] have chats with each other in class’ rather than the more simple ‘more chats in class’. If he had chosen this phrase he would have needed to restructure his whole sentence, which may
have been beyond his linguistic capability. It would seem that the dictionary entry did not necessarily contain sufficient information for Peter to know exactly how to use it. John wanted to talk about the reasons for writing a letter. He had most likely looked up ‘reason’ in the dictionary:
Like Peter, he had used an incorrect plural form – Grunden rather than Gründe. Again, no plural was suggested in the entry. Should John have taken the time to check the entry in the German-English section (where the plural was provided)? Or should the dictionary have provided the appropriate plural ending in the English/German section? Sandra wanted to speak of food poisoning. It is likely she located the following entry:
She needed to use the past participle form, ‘poisoned’. She located the correct entry – the transitive verb form given in entry (2) – but she did not conjugate it correctly, choosing simply to write the infinitive form (vergiften) as given in the dictionary entry. Mary wanted to write ‘I would like to reply to your letter’ and most likely looked up this entry:
She located the entry ‘in ~ to your letter/remarks’, in Beantwortung Ihres Briefes. She successfully switched the formal Ihres into an informal deines, but did not recognize that Beantwortung was a noun, and therefore inappropriate for her context. The appropriate entry for her context was (3) – antworten (auf). Jessica spoke of misusing information:
She located the noun entry Mißbrauch (1). The appropriate entry for her context was the transitive verb mißbrauchen, entry (2).
Elsewhere in the same essay Jessica wanted to talk about how drugs can affect health:
She needed the entry ‘affect’ in the sense of ‘has an effect on’. She had used the first entry under (a), and had applied the information regarding prepositional use (auf + accusative) correctly. Her choice was logical in the circumstances, but the more contextually appropriate verb would have been schaden (last entry under (a)). The dictionary entry did indicate that schaden was an appropriate word to use when talking about health, but Jessica’s use of both groups of entries (‘misuse’ and ‘affect’) suggests that she may simply have gone for the first entry listed in both cases and not looked through the remaining information. In the discursive essay task, John spoke of the state of mind of ‘stay at home’ mums – they might suffer as a result of lack of social contact:
He also selected the first entry – the transitive verb erleiden. In the context he actually needed the intransitive form – leiden (unter) in entry (2). These types of error raise a particularly serious problem with dictionaries. Tomaszczyk (1983), as pointed out in Chapter 1, is inclined to lay the blame at the feet of the dictionaries themselves. He argues that learners of an L2 are “not sophisticated enough to realise that they are poorly served and take the information supplied at face value” with the result that what they produce reveals “the deficiencies of such dictionaries in a most dramatic way” (p. 47). We might conclude that in each of the cases above the dictionary entry was deficient. Perhaps more examples or more contextual and usage information could have been provided. However, the types of errors illustrated here also possibly indicate a more complex interaction. It is not just the dictionary (as Tomaszczyk suggests) and not just the user, but a combination of the two. That is, some users may have coped differently with these entries, or the same users may have coped differently if additional
or other information had been provided. In fact, Nesi and Meara (1991), who investigated the use of dictionaries in reading tests, speak of a three-way interaction. They suggest that it is reasonable to assume that, out of the test, the dictionary or the user, at least one factor is failing in some way if dictionary use cannot improve performance. In their view, the responsibility lies with all three factors. If we believe that dictionary use should make a supportive difference to students when writing, Nesi and Meara’s conclusion raises the possibility that errors arise from this three-way interaction. If there were items relevant to task completion that the test takers did not know and wanted to use in their writing, either the dictionary did not provide sufficient information, or the dictionary users lacked sufficient skill in using the dictionary and could not locate the relevant information, or they did not know how to use the information correctly when found. The dictionary becomes a liability if this three-way interaction interferes with an improved performance.

All of this leads Spolsky (2001), for example, to speculate that although using dictionaries may have beneficial washback into the teaching situation, the use of dictionaries in examinations is a more complex matter. In his view, allowing dictionary use introduces an additional source of testing error into the process, leading to lower reliability and weakened validity. The usefulness of the test is therefore diminished in these respects.

Whether we consider that dictionaries are supportive tools or that dictionaries should not be allowed in examinations, evidence of misuse is a considerable concern, especially if we cannot identify the source of error. Is it the dictionary? The user? The task? Or all three? Despite these problems, however, the reported success rate with using the dictionary in the first two studies (at least 85%) is very high.
The third study

In the third study, the test takers reported 397 instances of dictionary look-up. On 200 of these occasions (50% of look-ups) the entry was applied incorrectly. The following errors were noted:

• Wrong word – 97 out of 200 occasions (24% of all look-ups)
• Wrong form – 80 occasions (20% of all look-ups)
• Wrong gender – 6 occasions (1.5%)
• Misspelling – 17 occasions (4%).
The 200 errors were analyzed for each of the main categories of look-up. Results of this analysis are recorded in Table 5.6. Principal errors for the different types of words were noted as follows:
Table 5.6 Errors according to types of word – third study.

Type of word   Total   Error   Type of error
                               wrong word   wrong form   wrong gender   misspelling
Nouns          183     95      49           28           6              12
Verbs           87     49      21           28           0               0
Adjectives      69     35      12           19           0               4
Adverbs         28      9       8            0           0               1
Phrases         14      9       5            4           0               0
Other items     16      3       2            1           0               0
Total          397     200     97           80           6              17

Adapted from East (2006b, p. 191).
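The counts in Table 5.6 can be cross-checked for internal consistency – each row’s four error types should sum to its Error column, and the columns should sum to the totals cited in the text. A quick sketch (Python; the counts are transcribed from the table, the dictionary structure is my own):

```python
# Cross-checking Table 5.6. Tuple order per row:
# (total look-ups, errors, wrong word, wrong form, wrong gender, misspelling)
table = {
    "Nouns":       (183, 95, 49, 28, 6, 12),
    "Verbs":       ( 87, 49, 21, 28, 0,  0),
    "Adjectives":  ( 69, 35, 12, 19, 0,  4),
    "Adverbs":     ( 28,  9,  8,  0, 0,  1),
    "Phrases":     ( 14,  9,  5,  4, 0,  0),
    "Other items": ( 16,  3,  2,  1, 0,  0),
}

# Each row's four error types should sum to its Error column ...
for word_type, (total, errors, *error_types) in table.items():
    assert sum(error_types) == errors, word_type

# ... and the column sums should match the totals reported in the text.
totals = [sum(row[i] for row in table.values()) for i in range(6)]
print(totals)  # [397, 200, 97, 80, 6, 17]
```

The column sums reproduce the figures given earlier: 397 look-ups, 200 errors (50%), of which 97 wrong word (24%), 80 wrong form (20%), 6 wrong gender (1.5%) and 17 misspelling (4%).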
• Nouns: wrong forms were caused mainly by errors of plural (16 occasions). The most common error of spelling was failure to capitalize the noun (eight occasions). On two occasions errors with failure to form a correct compound were also noted.
• Verbs: main errors of form were use of an inappropriate conjugation (seven occasions), failure to apply the appropriate grammatical rules with regard to the use of separable verbs (eight occasions), and use of the infinitive when a participle was required (five occasions).
• Adjectives: most errors of form were accounted for by failure to add the correct adjective ending (13 occasions). There were four spelling errors, which included three occasions when the adjective was capitalized (as if it were a noun). In addition, failure to form the comparative correctly was observed on two occasions.
• Adverbs: apart from one spelling error, all errors were accounted for by the choice of a wrong or inappropriate adverb. There was a high incidence of success with using adverbs (possibly because adverbs do not require any inflection and can therefore be used ‘as is’ from the dictionary). Adverbs were used correctly or appropriately 67% of the time.
• Phrases: apart from incorrectly chosen phrases, errors in the use of phrases focused on problems with word order (three occasions) – that is, using the phrase exactly as found in the dictionary without reference to relevant word order rules – or problems with case (one occasion) – that is, using the phrase correctly but failing to take account of a case requirement that was dependent on the phrase.
Several errors mirrored the types observed in the first two studies. Examples include the following:
Word(s) looked up             Should be                    Meaning                  Category of error   Type of error
schiessen                     schliessen                   shut                     wrong word          may have been a spelling error
[es wird] Verfall             stagnieren                   stagnate                 wrong word          noun instead of verb; word choice error
Verschwendung                 verschwenden                 disappear                wrong word          noun instead of verb
Tourist aktivität             Touristenaktivitäten         tourist activities       wrong form          no compound (also spelling error)
Orts                          Orte                         places                   wrong form          incorrect plural
viele Grunden                 Gründe                       many reasons             wrong form          incorrect plural
wir will nicht                wir wollen nicht             we do not want to        wrong form          wrong conjugation
mehr informiert               informierter                 more informed            wrong form          wrong form of comparative
es kann beschädigen werden    beschädigt werden            it can become damaged    wrong form          infinitive instead of past participle
zu ausgeben                   auszugeben                   give out / spend         wrong form          incorrect separable verb use
ankommen jeden Tag            kommen jeden Tag an          arrive each day          wrong form          incorrect separable verb use
Ich einverstanden mit das     Ich bin damit einverstanden  I agree with that        wrong form          incorrect syntax
das Struktur                  die Struktur                 the structure            wrong gender
die Naturliches Attraktionen  natürliche                   the natural attractions  misspelling         also wrong form – adjective treated as noun
wirtschaft                    Wirtschaft                   economy                  misspelling         no capitalization
A further analysis of entries found in the Collins Pocket was also made. This analysis revealed a number of ways in which entries were inappropriately applied. There were, for example, several instances where participants attempted to translate
directly, and on a word-for-word basis, from English into German, and as a result came up with virtually meaningless German (East, 2005a). On other occasions, the incorrect word for the context was chosen. One lower intermediate participant wanted to communicate the notion that people might ‘take advantage’ of tourists in New Zealand. It seems likely that this candidate looked up ‘take’ and ‘advantage’ as two independent words, and as a consequence wrote nehmen Vorteil. In fact the expression she should have used was andere Leute ausnutzen. The dictionary entry for ‘advantage’ provided some clue to this, but may not have supplied sufficient information:
An upper intermediate participant wrote about the ‘double standards’ of people in New Zealand. She suggested [wir haben] Verdoppeln Massstäbe, not realizing that the correct German expression is wir messen mit zweierlei Mass. The dictionary entries for neither ‘double’ nor ‘standards’ would have helped here. An intermediate participant, presumably having looked up ‘way’, translated the expression ‘the best way to’ directly into German by writing Die beste Weg um. It appeared she had simply gone for the first entry, assuming that it would be correct:
She had not only used the wrong gender for Weg (even though the dictionary signaled the correct gender), but had also not realized that Weg carries the sense of ‘pathway’. The dictionary entry did not make this explicit. In fact the concept this participant was trying to communicate would have been expressed better in German by using an adverb (am besten), although the second entry (Art und Weise) could have been utilized in some way. Another intermediate participant successfully located the correct word to render the expression ‘headstart’ – Vorsprung – but had located this as ‘start’ and had added the word Kopf (head) to it, thereby coming up with eine gute Kopf Vorsprung im Leben (a good head headstart in life). This also involved a gender error. There was no information in the dictionary to help this writer to know that Vorsprung itself would have communicated adequately. The most serious error by an intermediate participant led to a completely meaningless German sentence. It seems most likely that this participant had accessed two words in the English-German section of the dictionary, but had
completely failed to apply the words correctly. Attempting to write ‘that can lead to the possibility’ she had written ... das Büchse Vorbild zu moglichkeit ...
when she should have written … das kann zur Möglichkeit führen ...
This participant had probably looked up the entry ‘can1’, and chosen the noun Büchse (tin can) instead of the modal verb können, which was in the second separate entry:
She had also presumably looked up the word ‘lead’, and chosen the noun Vorbild (‘lead’ in the sense of ‘example’) instead of the verb führen:
The dictionary entries would not have helped this participant to locate the more appropriate words without prior knowledge of what she was looking for (two verbs). The entry ‘can2’ did supply information that should have been useful to this participant, but it would appear that she went for the first listed entry and did not take the trouble to check other entries. It appeared she had even overlooked the ‘keyword’ alert – the “special status” given to some frequently occurring or otherwise significant words (Collins, 2001, p. viii). On two occasions the word ‘around’ was looked up and the entry was applied incorrectly:
One lower intermediate participant, talking of ‘around the world’, wrote ungefähr die Welt. The dictionary entry did indicate that ungefähr means ‘around’ in the sense of ‘approximately’ or ‘almost’, but this participant did not realize that this
was the inappropriate word for her context. An intermediate participant similarly wanted to use ‘around’ in the expression ‘around New Zealand’. The dictionary entry did not give sufficient contextual information, and this participant thereby wrote ringsherum Neuseeland (indicating ‘around’ in the sense of circumnavigation) whereas in correct German in the context this should have been durch Neuseeland (‘around’ in the sense of ‘throughout’). Another instance of wrong or inappropriate word choice was ‘economy’ rendered as Sparsamkeit (means ‘economy’ in the sense of ‘thrift’) instead of the contextually more appropriate Wirtschaft. This occurred on two separate occasions, and the error was most likely attributable to the following entry:
The phrase Nutzen ziehen aus (‘to benefit from’) proved a popular expression from the Collins Pocket, used by four participants. The dictionary entry gave the following information:
Three participants (two intermediate and one upper intermediate) located the right part of the entry, but went on to use the expression exactly as found in the dictionary, resulting in incorrect syntax.

In summary, errors with regard to look-ups in this study were of the two main types identified in the first two studies. The wrong word or phrase was taken from the dictionary. Alternatively, the word or phrase, in and of itself correct, was applied incorrectly. Similarly, dictionary entries may have led participants astray in two ways. There was insufficient information in the entry about how to use the word or which word was contextually the more appropriate. Also, the participants did not know how to use the word. Herein lies further evidence of what I have earlier described as the complex interaction between the dictionary, the user, and the task, and it is not possible to say with any certainty where among these three variables the source of an error lies.

Given that the test takers in the third study could be classified according to level of ability and level of prior experience with the dictionary, there was also the opportunity to investigate whether either of these factors made any difference to success rate with the dictionary. For this analysis the four participants who either did not underline any words, or (in one case) did not use any words from the dictionary in the response, were omitted, and the group size was reduced from 47 to 43. The levels were collapsed into two: lower ability (lower intermediate / intermediate, n = 23) and upper ability (upper intermediate / advanced, n = 20), and inexperienced (no experience / quite inexperienced, n = 24) and experienced (quite / very experienced, n = 19).

Figure 5.8 records the level of success with look-ups, according to level of ability and level of experience with the dictionary, against two factors: firstly, how often the look-ups were correct (that is, no error of word choice); secondly, if correct, how often look-ups were subsequently used correctly (that is, no error of word form). The upper ability participants were able, on average, to select words correctly from the dictionary eight times out of ten (M = 81.9%, SD = 16.3), compared to seven out of ten for the lower ability group (M = 71.8%, SD = 16). Furthermore, the upper ability group was able to use words correctly two times out of three (M = 61.5%, SD = 25.6), compared to less than half the time for the lower group (M = 45.1%, SD = 21.1). The wide standard deviations indicate a very broad range of success with look-ups. Independent samples t-tests revealed that, as far as ability was concerned, the two groups were significantly different from each other with regard to both correct look-ups (t(41) = –2.044, p = .047) and correct use of look-ups (t(41) = –2.287, p = .027). The upper ability participants were significantly more successful with both correctly locating and correctly using look-ups. It seemed that ability was a factor in determining the success with which test takers could locate and use look-ups correctly.

Figure 5.8 Success with look-ups according to ability and experience – third study.
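The t statistic reported for correct look-ups can be recovered from the summary statistics alone using the pooled-variance formula. A minimal sketch (Python, standard library only; the function name is my own, and the small discrepancy from the published −2.044 reflects rounding of the reported means and standard deviations):

```python
import math

def pooled_t(m1, s1, n1, m2, s2, n2):
    """Independent-samples t statistic with pooled variance, plus df."""
    df = n1 + n2 - 2
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df  # pooled variance
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))           # SE of the mean difference
    return (m1 - m2) / se, df

# Correct look-ups: lower ability (M = 71.8, SD = 16.0, n = 23)
# vs upper ability (M = 81.9, SD = 16.3, n = 20)
t, df = pooled_t(71.8, 16.0, 23, 81.9, 16.3, 20)
print(round(t, 2), df)  # close to the reported t(41) = -2.044
```

The degrees of freedom (23 + 20 − 2 = 41) match the reported t(41), confirming that an equal-variance (pooled) test, rather than a Welch correction, was used.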
Experience with the dictionary appeared at first sight to make some difference to whether participants could correctly access the word. The more experienced group was able to locate the correct word almost three times out of every four (M = 71.8%, SD = 20.3), whereas those with less experience seemed to fare better (M = 80.1%, SD = 12.5). Experience did not appear to make a difference to success with using look-ups, however (experienced users, M = 53.5%, SD = 22.1; inexperienced users M = 52.1%, SD = 26.5). The wide standard deviations again indicated the broad range of success. It is curious that those with less prior dictionary experience seemed to be more successful with correctly locating look-ups. However, no significant differences between the groups were found (correct look-ups – t(41) = 1.554, p = .109; correct use of look-ups – t(41) = –.173, p = .864); observed differences may therefore be attributed to chance. Prior experience was apparently not a factor in helping these test takers to use the dictionary more successfully.
The evidence across all three studies

Taking a look across all three studies, and bearing in mind the marked differences in success rates with using the dictionary, what can we conclude? It is difficult to explain why success with using look-ups was considerably more marked in the first two studies than in the third. Three factors differentiate the third study from the first two, however, each of which (as has already been discussed) also potentially impacts on the frequency and duration of dictionary use:

1. The type of dictionary. In the first two studies, using a much more substantial dictionary may have made a positive difference to success with using look-ups.
2. The maturity/experience of the participants. In the first two studies, participants were on the whole more mature and more experienced with language learning than those in the third study. It may be that maturity and/or greater experience with using language were factors determining success with look-ups.
3. The sample size. The third study had considerably more participants than the first two. It may be that, in the greater sample, there was greater potential for variability in successful dictionary use to show up (also given 2 above).

A number of other noteworthy issues arise. Certainly, it would seem that the size of the dictionary makes very little difference with regard to some errors. Errors of interpretation were made both with the Collins German Dictionary (the largest of all dictionaries used and arguably one that provided the most comprehensive information) and with the other dictionaries. Similar errors were noted across all three studies. These included:

• failure to capitalize the noun (a frequent but fundamental error in German);
• incorrect plural forms;
• nouns used instead of verbs;
• verbs incorrectly conjugated;
• incorrect compound nouns.
Experience with using a dictionary also did not appear to be a factor. In the third study prior experience with a dictionary was found to make no statistically significant difference to success rates. In the first and second studies there was a similarly high level of success with using the dictionary, even though, in the second study, considerable time and attention had been given to helping the students to learn how to negotiate their way around the dictionary, and in the first no prior experience with the dictionary was assumed. Given the comparably high success rate in both studies it is difficult to determine the extent to which prior training was or was not of benefit. In the second study, however, there did appear to be a small ‘practice effect’ whereby test takers came to use the dictionary more judiciously week by week – suggesting that perhaps experience in time-constrained contexts would have some positive benefit. (This issue will be explored further in the next chapter.) The third study did illustrate that ability makes a difference to success – the more able the participants, the more successful they were with the dictionary. Nevertheless, success with using look-ups in the third study was noticeably poorer than in the first two. If the third study is taken as being more representative of the population to which any findings might be generalized (although it is still limited by sample size), this raises an important final question for this chapter.
Assessing the extent of the liability

In the context of allowing English as a second language (ESL) learners to use bilingual dictionaries when writing, Ard (1982) argues that if the immediate goal of a writing assessment were to minimize the absolute number of errors, then even a nominal increase in errors in ‘with dictionary’ writing would be sufficient to argue for the outlawing of dictionaries. Should we therefore conclude that using dictionaries was leading the participants astray to the extent that their wholesale use in tests should not be encouraged? There is no clear-cut answer here. Indeed, Ard goes on to suggest:
It has not been proven that the use of a bilingual dictionary leads to errors where no errors would otherwise occur. … The word that a student obtains from the dictionary might be quite different from the word that the student would choose if the dictionary were not available, but the alternative is not necessarily any more acceptable. (p. 17, my emphasis)
Given that, as outlined in Chapter 4, it certainly seemed as if the dictionary was able to contribute to the use of richer and more sophisticated lexis (potentially a good reason to include it in writing tests), this issue is particularly pertinent. To investigate the extent to which comparable errors were occurring whether or not a dictionary was used, all essays in the first study, regardless of test condition, were analyzed for instances of lexical error. To provide a suitable framework for this error analysis, errors were noted and classified based on criteria suggested by Engber (1995):

1. Lexical choice
   a. Individual lexical items
      i. Incorrect – does not exist:
         1. Ein sehr langzeitiges Problem (should be langfristiges)
      ii. Incorrect – semantically unrelated or semantically close:
         1. Die treue Antwort (should be wahre)
         2. Der Kurs war hilfsbereit (should be hilfreich)
         3. Es ist hart zu sagen (should be schwer)
         4. Ich weiss ihn nicht (should be kenne)
   b. Combinations
      i. Two lexical items or phrases:
         1. Ich sehe gern auf das Gespräch (should be freue mich auf)
         2. Ich machte teil aus diesem Kurs (should be nahm teil an)
2. Lexical form
   a. Derivational errors:
      i. Es bedarf einer erklären (should be Erklärung)
   b. Phonetically similar, semantically unrelated:
      i. Das Glas ist grün (should be Gras)
   c. Word distorted – spelling error:
      i. Sie müssen ein Visa haben (should be Visum)
      ii. Das Mäddchen (should be Mädchen)
      iii. Das Water war kalt (should be Wasser)

In addition, for the purpose of this study two other categories (2d and 2e) were added to those Engber suggests:
   d. Gender incorrect:
      i. Das Antwort (should be Die)
      ii. Diese Brief (should be Dieser)
   e. Verb form incorrect:
      i. Was ich erleben habe (should be erlebt).
Although it is debatable whether these last two categories should be classed as grammatical rather than lexical errors, they were added because it was considered that both errors – inaccuracy of gender and verb form – could potentially be influenced by access or non-access to a bilingual dictionary. Results of the error analysis are given in Table 5.7. These results indicate the following:

• On average, the intermediate participants tended to make more lexical errors than the advanced participants regardless of test condition.
• On average, there was negligible or no difference between the number of lexical errors made across the two conditions, regardless of the participants’ level (4.6% and 4.8% for intermediate participants, and 1.9% for advanced participants).
• For these participants, having a dictionary to hand did not therefore appear to be a factor in changing the general quality of the text, measured in terms of lexical error.
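The percentage figures in Table 5.7 are simply each essay's error count divided by its length in words. As a quick arithmetic check (an illustrative sketch, not code from the study), the intermediate ‘with dictionary’ column can be reproduced like this:

```python
# (essay length in words, number of lexical errors) for the three
# intermediate 'with dictionary' essays in the first study
essays = [(312, 11), (187, 5), (243, 20)]

rates = [100 * errors / length for length, errors in essays]
mean_rate = sum(rates) / len(rates)

print([round(r, 1) for r in rates])  # per-essay percentages: 3.5, 2.7, 8.2
print(round(mean_rate, 1))           # group mean: 4.8
```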
Table 5.7 Percentages of lexical error – first study.

                             With dictionary                        Without dictionary
Participant  Level of        Essay length  Number     % of whole   Essay length  Number     % of whole
             ability         (words)       of errors  text         (words)       of errors  text
1            Intermediate    312           11         3.5          296           8          2.7
2            Intermediate    187           5          2.7          193           9          4.7
3            Intermediate    243           20         8.2          375           24         6.4
             M               247           12.0       4.8          288           13.7       4.6
             SD              62.6          7.5        3.0          91.3          9          1.9
4            Advanced        327           9          2.8          278           3          1.1
5            Advanced        389           2          0.5          466           9          1.9
6            Advanced        366           9          2.5          307           8          2.6
             M               361           6.7        1.9          350           6.7        1.9
             SD              31.3          4.0        1.25         101.2         3.2        0.8

The data indicate (notwithstanding the small sample size) that neither the intermediate nor the advanced test takers made any more (or fewer) errors with the dictionary than without it – although the intermediates (perhaps not surprisingly) made considerably more errors in both tests than their advanced counterparts. This being so, dismissing the dictionary from writing examinations on the basis that test takers cannot use it successfully might be inappropriate – especially given the evidence that on many occasions test takers could use dictionaries successfully. However, given the higher failure rate with look-ups in the third study it was also important to consider whether these test takers may have made more or fewer errors when writing without the dictionary. Although the primary focus of analysis in the third study was on dictionary use errors, I did carry out a smaller scale analysis of a sample of ‘without dictionary’ essays in order to identify errors. The following are typical of those that were noted:

Error                              Transcription                   Type of error
ich denke es ist sehr important …  I think it is very important …  wrong word
Die Schüler kann erlebnis …        The students can experience …   wrong verb form + wrong word – noun instead of verb
ein gut berüf                      a good profession               wrong form of the adjective (no appropriate ending provided) + wrong spelling of the noun (two errors)
Es ist sehr helfen                 It is very help                 wrong word – verb instead of adverb
Immer mere                         More and more                   wrong spelling
je mehr Sprachen ich wissen        the more languages I know       wrong word (kennen would be more appropriate); wrong verb form (infinitive used)
mit äuslanders kommunikationen     communication with foreigners   wrong spelling of ‘foreigners’ + wrong word (noun instead of verb – ‘communicate’)
These examples reveal several errors that parallel those made with dictionary look-ups, and appear to support Ard’s (1982) assertion that, even where using a bilingual dictionary leads to errors, it is possible that the desired concept would not have been expressed accurately in the target language if a dictionary had not been used. A case in point, from the third study, was the participant who wrote ‘beneficts vom’ in her ‘without dictionary’ essay. This was a ‘wrong word’ error that was avoided by the four participants who correctly looked up the German equivalent Nutzen ziehen aus (although only one of the four in fact used the German phrase absolutely correctly). Nevertheless, errors in lexis did occur in ‘with dictionary’ texts which could potentially have been avoided. It is important to use the evidence of look-up error to help inform ways in which we can help dictionary users to maximize the successful use of a bilingual dictionary, and minimize the potential liability. This is the substance of Chapter 9. Before we take a look at what we can do to help L2 learners, however, there is one more vitally important strand of evidence to look at – the evidence from the test takers themselves. What did they think about having and not having dictionaries with them in the examination? Chapters 6 and 7 tell their story.
chapter 6
What do the test takers think of having a dictionary?
So far in this book I have considered in some detail some of the potential benefits and drawbacks of using bilingual dictionaries in L2 writing as exemplified in three specific studies into German timed writing tests. The focus has been on the writing samples. There is, however, as I argued in Chapter 2, another strand of evidence that is important. If we are genuinely concerned to get the fullest picture possible of the impact, positive and negative, of dictionary availability in writing assessments, it would be useful to gain evidence from arguably the most important stakeholders – the test takers themselves – about what they think. This evidence is valuable in informing any conclusion we want to reach about construct validity, or whether a given test and testing process can be considered to be valid or fair (Bachman & Palmer, 1996; Messick, 1989). In the three studies described in this book test taker opinion was sought through questionnaires and interviews (see Chapter 3). Not unexpectedly, this investigation uncovered a range of participant opinions, both positive and negative, about dictionary availability. In this chapter I explore the perspectives from the first two studies. The small-scale nature of these studies provided ample opportunity for the collection of more fine-grained qualitative data from the open-ended questions and the follow-up group interviews. Considering these studies together also enables participants’ viewpoints to be evaluated in the context of their exclusive use of the Collins German Dictionary. I begin by providing a review of the participants in these two studies. I go on to present the positive opinions expressed about having the dictionary in writing tests, and then move on to the negatives. Particular focus is given to the data available from the interviews, in which the participants frequently took the opportunity to underscore or expand on comments they had noted in the questionnaires. 
The chapter concludes by bringing together several of the key issues these participants raised.
The participants

In the first study there were six participants – Simon, Patricia, and Jenny (the three intermediate students), and Janet, Sharon, and Rachel (the three advanced students). All but one of these participants (Jenny, who had joined the program straight from school) would be classed as ‘mature’, and each had several years of study behind them. In the second study there were five participants – Mary, Jessica, and Peter (intermediate), and John and Sandra (upper intermediate). All were mature students, all were my own students, and all were taking an intermediate level 14-week first semester German course, Deutsch 3.
The benefits of dictionaries in writing tests

Finding the right word

As explored in the last two chapters, an analysis of the actual texts which these test takers had written revealed a very high success rate with using dictionary look-ups. The test takers were able to locate the correct word for their context, and then go on to use it correctly, almost nine times out of every ten. One important benefit to having the dictionary in the writing test that was evident from these test takers’ opinions was this: the dictionary helped them to find the right words for their context. It might simply have been the opportunity to check items like gender, plurals, or spelling, or to check up on the meaning of a word. Having the dictionary also provided the opportunity to find a word that was not known, and to be able to write more complex sentences and express more complex ideas – not being limited in the range of words that could be used. These aspects of dictionary availability were seen as distinct advantages. Sharon, the advanced participant in the first study who had used the dictionary the most frequently throughout the writing, underscored this perspective:

It was very useful to have the dictionary for checking if the choice of word was correct or not, for plurals etc. (i.e. for accuracy) … I felt I could be more accurate in spelling/gender choice/plural choice + be ‘closer’ to the word or idea I wanted to express. (Sharon, questionnaire)

[Having the dictionary] enables you to be more accurate … in some ways it helped me be able to say something better. (Sharon, interview)
This sense that the writers could perhaps ‘say something better’ with the dictionary was reiterated by several others:

If I didn’t know a word or what the question was asking I looked it up. I was able to use a wider range of vocabulary, without having to worry whether I knew a word. (Jenny, questionnaire)

You can elaborate a little bit more with a dictionary … Maybe I used better words, better vocabulary. (Jenny, interview)

I can use it if I need it. If I really want to use a word that’s quite difficult, well, I will find it anyway. (Janet, interview)

I was able to write about topics that I would not have written about if I didn’t have the dictionary and therefore was able to write about something new. (Jessica, second questionnaire)
These participants therefore saw the dictionary as being helpful on those occasions when they were specifically looking for a particular word, especially a somewhat higher order word that was probably not actively known. The test takers were also clear on this: if you did not have a dictionary you might think of the word you wanted to use in a given context, but if you simply did not know the German equivalent you would be forced to abandon your idea and try to express yourself in some other way instead. You would be limited in the range of vocabulary you could use. Janet put it like this: If you want to use a specific word that pops up in your mind and do not find a replacement, you might have to change the whole phrase. (Janet, questionnaire)
This was occasionally seen as a real disadvantage. There was also a sense in which being restricted to what you already knew meant that you were not extending your knowledge. At least if you had the dictionary to hand you might be able to experiment with something new, and learn something in the process. This was certainly Jenny’s perspective:

If I didn’t know a word I had to re-think about what I was going to write, and put it in another form. … by changing it, I mean, you’re not actually learning what you meant to, you’re changing it to what you do know, so you’re not actually learning anything by doing it. (Jenny, interview)
There were times when Sharon saw not having the dictionary as a distinct limitation – one that potentially hampered her writing: [I] sometimes didn’t bring up a topic because it was in the ‘too hard’ basket. … I didn’t get the exact point across or couldn’t develop an idea as extensively as I wanted. (Sharon, questionnaire)
This sense of constraint when the dictionary was not available also made Sharon a little anxious that her essay would not be as good. She felt that she ran the risk of getting fewer marks because of mistakes which presumably the dictionary would have helped her to avoid. In her thinking, the greater accuracy of word choice that was made possible with the dictionary, and the sense that this led to improvement in the writing and potentially a higher score, led her to feel more confident in the ‘with dictionary’ test: If I was striving to get a good mark I thought it was a good thing, and I liked it for confidence reasons and for … just feeling that it was perhaps a little bit better writing. (Sharon, interview)
This increase in confidence as a result of being able to check something out in the dictionary was also important to Janet: When I had it I felt confident, that I could look it up (e.g. gender). (Janet, interview)
Some participants in the second study, however, expressed a belief, contrary to Sharon’s, that using the dictionary would actually make no difference to their marks. This did not lead them to think that having the dictionary was of no value, but they were realistic about its potential limitations: I just felt it probably didn’t matter whether I used the dictionary or not, the results would still pretty much be the same. … Really it’s just the way you use that word that you’ve looked up, it doesn’t necessarily mean that, you know, it’s going to make everything better once you’ve used that word. (Mary, third interview)
I certainly don’t think … [having the dictionary] will enhance my mark a jot, 2% perhaps at most. (Peter, third interview)
Thus for several participants one clear advantage of having a dictionary – finding an appropriate word for the context – was tempered by a belief that, at the end of the day, dictionary use probably would not make any difference to test scores – a viewpoint that was in fact substantiated.
The psychological benefit

Both Sharon and Janet had commented on an increase in confidence when they had the dictionary. This was related to the writers’ opportunity to make up for gaps in knowledge or enhance rhetorical effect. Enhanced confidence was noted by several others as another important positive dimension to having the dictionary in the test. Having the dictionary to hand helped some test takers to approach the task more calmly, and therefore to feel more positive about their performance, regardless of whether they used it or not:

It gives one more confidence. (Simon, questionnaire)

It can be a bit of a comfort knowing the dictionary is actually there … available if you do get stuck. (Mary, second interview)

I’d probably like to have the dictionary for all tasks … only as, sort of like moral support if you really need to look it up … which doesn’t necessarily mean that you are going to look it up as much as you think you will – just knowing it’s there. (Mary, third interview)
After the first week, Peter stated that “just knowing it was there” was “sort of a security blanket.” Later on he commented: It does make a psychological effect on you, it’s like, say, going into a maths exam, you know, able to take essential formulae or your calculator in – the calculator just speeds up the process, it doesn’t help you actually solve the problem, and to my way of thinking it’s an aid to assist you. … I think the dictionary has the advantage of de-stressing the people who are sitting the test. Whether you use it or not I think is immaterial … what it does say is it’s there if you require it, and it puts you in a more relaxed frame of mind. (Peter, third interview)
Indeed, Peter felt strongly that, whether it is a dictionary or something else, having some kind of resource with you can be a definite advantage when taking a time-constrained test. In his view, having the resource to hand might help you at least to approach the test more calmly, which would be of distinct psychological benefit, especially for those at the lower levels of proficiency: I think a glossary, particularly for beginners, would be a blessing because one of the problems with exams, people that are relaxed that’s okay, but there’s a huge number of people – me being one of them – that get stressed out of their minds in some sort of exams, and so you’re actually not getting an accurate feedback from what they’ve produced, you know, they may totally fail because of sheer terror, and all they need is that mind to be unlocked – being given a glossary is like wearing a safety glove. (Peter, second interview)
Peter here identified the stress of the examination as being potentially harmful, especially if this leads to negative impact on the test takers and negative interaction with the test task. This might well obscure the potential of the test to give us an accurate reflection of test taker ability. Dictionaries might be particularly beneficial in this regard – and the help in diminishing test takers’ stress levels alone would certainly, in the viewpoint of several of the participants, be one valid reason for advocating the dictionary – comfort, moral support, de-stressor, just knowing it’s there whether you use it or not.
The benefit of experience

In the second study it was possible to gauge the cumulative effect of having the dictionary over three consecutive weeks. The observational evidence, discussed in the last chapter, suggested that the test takers’ use of the dictionary (measured by the number of times they looked things up) became marginally less over time – an average of 14 look-ups in 9½ minutes in the first task, subsequently reduced to between 11 and 12 look-ups in 8 to 9 minutes. This was admittedly a small decrease in dictionary use. Nevertheless, the test takers themselves seemed to think that they became less reliant on the dictionary as each week progressed. It became apparent that over the weeks there was not only an increasing confidence but also, for some participants, more discerning use of the dictionary. Mary’s use of the dictionary became markedly less frequent week by week. According to the video evidence Mary used the dictionary 21 times in the first task, and spent a total of 12 minutes doing so. This had reduced to 15 times in the second task (8 minutes), and five times in the third (5 minutes). Mary perceived a definite connection between this decrease in dictionary use and the benefit of cumulative experience:

In the first week you don’t really know what to expect, really, like you’re forewarned about the task … and then you come in and you’re in a stressful mode anyway, and then you start looking up everything, but after the first task you know what to expect and you’re not so reliant on the dictionary because you pretty much know what it’s all about. (Mary, third interview)
Sometimes, however, the increasing confidence week by week led to a perception that the dictionary was being used less which was not substantiated by the observational evidence. Jessica, for example, whose use of the dictionary remained quite constant across all three tasks (12–14 look-ups each time), commented: I think last week I was probably just being a bit paranoid and looking up every second word … whereas this time I was actually looking up words that I definitely didn’t know the meaning for – words that I definitely didn’t know in German. There were a few cases where I did actually look up just to double-check on gender or spelling, but not quite as much as last week. (Jessica, second interview)
In this case, Jessica appeared to become more positive about using the dictionary even though in fact she used it just as frequently in each task. Nevertheless, this subjective sense of being more in control of its use seemed to be important to her, and would no doubt have enhanced her perception of its benefit even if her actual use of the dictionary did not change. Peter likewise went on to express an increase in confidence week by week. After the second task he commented: I think I used [the dictionary] more last week, I certainly didn’t feel as capable last week. (Peter, second interview)
In his experience, however, it was the cumulative effect of engaging with the task, rather than having the dictionary, that appeared to make the difference. He suggested: I think the test situation actually builds up confidence in your own ability. … I personally found each week got easier … I certainly felt more relaxed. (Peter, third interview)
This was because: The first task brought out all sorts of vocab, so the second one was easier because suddenly you’ve got more vocab floating around in your head. … I think it brings the vocab to a conscious level … (Peter, third interview)
It was, however, curious to note from the observational evidence that Peter used the dictionary considerably more in the final week than in the previous two (18 times in contrast to 11 times and 8 times). What had happened to his new found sense of capability, and drawing on vocabulary he already knew? He provided the following explanation: I have to admit that I got fascinated with a group of words with different meanings … so I spent it must have been about 5 minutes reading through this page, and I thought ‘this is really neat’, and it suddenly dawned on me, whoops, I do have an essay to finish here. (Peter, third interview)
There was therefore an acknowledgment that cumulative practice helped Peter to tap into material he had previously learnt … so much so that he felt he did not need the dictionary as much as each week progressed. There was also, however, a potentially negative factor: in Peter’s case, the dictionary was distracting (although admittedly this was not related to the task itself) and could possibly have jeopardized his ability to finish the task on time. Indeed, if Peter’s experience – that taking the tests actually brings known vocabulary up to a more conscious level – is typical, it might be that having the dictionary leads some test takers away from using what they already know. Perhaps it would be better not to allow the dictionary and to let the test takers tap into this knowledge. This raises one of several perceived disadvantages to having the dictionary in the test.
The drawbacks of dictionaries in writing tests

Interference with trains of thought

The potential distraction of having the dictionary in the examination was one of the distinct disadvantages expressed about its availability. The dictionary sometimes led participants away from what they were trying to say:

I felt it was better [writing] without the dictionary at all … because your mind goes off track when you’ve got a whole lot of ideas in your head, like you’re ‘mind-mapping’, and then when you’ve got the dictionary you have a look and you think ‘oh yeah, I can use that, but then again … you know, this is what I had in my head, but down below I’ve found something else’, I mean that takes up a lot of time, and you’ve got to change your sentence. (Mary, second interview)
Peter observed that he wrote the first ‘without dictionary’ task “much more instinctively.” When writing with the dictionary he noticed: When I reached the dictionary and I looked something up, by the time I’d actually found it, it wrecked my whole train of thought. (Peter, first interview)
There would seem to be little purpose in having a dictionary if, when you went to use it, you actually lost your way in the writing. For Peter, the interruption to his train of thought was also related to a sense that the dictionary negatively influenced his ability to think in German: When I did look up the dictionary I went straight back into English and German, when I didn’t have the dictionary I wasn’t thinking in English at all, managing to put everything together – no matter how shambolic my German thoughts may have been, I wasn’t bothering with much English at all. (Peter, first interview)
Peter therefore saw that his ability to stay on track with the task, even if the German he used was not correct, was more important to him than the potential greater accuracy that using the dictionary might lead to. Certainly for Peter, tasks that enabled him to focus on thinking in German were preferable to those that led him to have to switch between languages. He expressed a preference for the second letter-writing task in which both the stimulus material and the bullet points were in the target language: I think the bullet points in German tend to focus you straight away … it’s like warming up before a match or something – you’ve warmed up … and then you go straight on the field, whereas if they’re in English you tend to warm up, cool down, then go on the field, and I think you’re slower to get started. (Peter, second interview)
For Peter, then, the bilingual dictionary was potentially disadvantageous precisely because of one of its essential characteristics – the facility to move between L1 and L2. Peter concluded:
Surely the aim is to get us to use and think in the language … I just don’t think in English when I do it without a dictionary. (Peter, second interview)
Measuring what test takers ‘know’

Another potential disadvantage of the dictionary, expressed by several participants, was that dictionary availability somehow obscured the measurement of a test taker’s ‘real’ knowledge, and this was on the whole not seen as a good thing. Thus a tension was set up between, on the one hand, greater accuracy and improvement in writing, for which having a dictionary might be advantageous, and, on the other hand, measuring what test takers ‘really’ knew, which looking words up in the dictionary might confound. This tension led to some reflection about the different functions of writing tests, and whether one type of test was fairer than another. Opinion was divided about which test condition led to a fairer measure of the test takers’ writing ability. In cases where participants thought that both types of test were equally fair, several saw this as being because the tests were testing different things:

In general, I think no dictionary better assesses the level of competency, vocabulary etc. in language acquisition and usage, and better reflects a candidate’s ability to use the language in actual situations. (Rachel, questionnaire)

If you’re trying to look at how well they write, then I’d say give them the dictionary, but if you want to know how much they know, then don’t. (Jenny, interview)

If it is a test on grammar or vocabulary, I don’t think it should be allowed because you have to learn these things … but if you write an essay I think in these circumstances it should be allowed. (Janet, interview)

The task with the dictionary can better measure your writing ability in German because you are not limited as to what you can write just because you don’t have a wide vocabulary range. However, the task without the dictionary is a good way of testing students’ knowledge and vocabulary. (Jessica, second questionnaire)
Chapter 6. What do the test takers think of having a dictionary? 135
My writing ability would have been better tested in the task where I used a dictionary, and my knowledge of German words and expressions would have been better tested in the task with no dictionary. (Jessica, third questionnaire)
These perspectives revealed some very pertinent insights into the different uses to which writing tests might be put, and into where the dictionary might or might not fit in. If the test was aiming to assess writing quality then the dictionary was often perceived as being useful. On the other hand, tests without dictionaries were perceived as being more useful if the aim of the test was to measure the words the test taker actually knew. There was in these perspectives an implicit understanding of the purposes of tests described in Chapter 1: the test without the dictionary was testing vocabulary knowledge (the assessment of learning), and the test with the dictionary was testing writing ability (assessment for learning), with the strategic competence to make up for gaps in knowledge and enhance rhetorical effect that this ability entails.

In the thinking of several participants the assessment of learning was seen as important. When participants regarded the test without the dictionary as fairer this was often because it was perceived as giving the test takers the opportunity to demonstrate what they knew or had previously learned. Even in cases where both tests were perceived as being equally fair there was a definite sense that using the dictionary masks or distorts the test’s ability to capture what the test takers know and can do unaided. Jenny expressed this viewpoint on two occasions:

With the dictionary I got to widen my vocab and grammar skills, but without is what I actually know, with no help, and it shows much better what I do know. (Jenny, questionnaire)

I thought without the dictionary it was showing more of what I knew. … If you’re using the dictionary to look up a word, anyone can do that, but at least when you’re writing without the dictionary you can actually see what I do know, as in what vocab, what grammar, instead of just using the dictionary, which obviously isn’t that hard. (Jenny, interview)
Jenny therefore appeared to see using a dictionary as a simple matter, something that anybody can do, with the implication that its use might be making the test too easy. This is an interesting perspective seen in the light of the many ways, explored in the last chapter, in which the test takers could not use their dictionaries successfully – a disjunction I discuss in Chapter 7.
136 Dictionary Use in Foreign Language Writing Exams
Two participants reiterated Peter’s point about vocabulary stored in test takers’ memories, and in this connection saw having the dictionary as potentially hindering the test taker’s ability, or even willingness, to tap into this prior learning. Both framed this limitation negatively in terms of test taker laziness:

It makes you lazy, you know, using a dictionary. … You can become lazy, and you do not train your mind … we have stored what we have learned in our memory … and when you think for a while it might come back. (Janet, interview)

I think you can get lazy, having a dictionary there, knowing, you know, oh there’s a translation for English and German in there, so, I know the English, but I want to say it in German, so I’ll just quickly look it up. I know I know it, but, you know, I’m going to actually make sure that it’s what I think it was. (Mary, first interview)
Thus, two interesting and related perspectives on the dictionary were set up. Firstly, when test takers use a dictionary we may not get a clear idea of what they know and have learnt. This led to a perception that using a dictionary is like cheating. Secondly, when test takers use a dictionary they may become lazy and over-reliant on it. This led to the viewpoint that using a dictionary may be a hindrance to effective engagement with the task.
Impact on other strategies

Indeed, when it came to writing without the dictionary, all the participants agreed that this compels test takers to rely on other communicative strategies such as circumlocution, paraphrase, word avoidance, or word substitution, and sometimes this was seen as a good thing. Rachel, for example, actually felt that the ability to use the dictionary to extend a test taker’s lexical range was a distinct disadvantage. She believed that this might lead to an expectation of greater accuracy, and she viewed this negatively – as if it were related to a harsher external judging of her work. Surely, she thought, if she had the dictionary, there would be an expectation that she would write better. This led her to express a preference for the ‘without dictionary’ task, simply because she could rely on her current knowledge and did not feel compelled to step outside her ‘safe’ boundaries:

My approach to each test felt a bit different – but obviously a higher level of accuracy could be achieved with dictionary. This created a sense of higher expectations. … I found that having the dictionary tended to make me feel I needed to express ideas in a more sophisticated way when I might have used simpler forms
of expression without the dictionary. I also felt I needed to use it to make sure I was correct, i.e. a higher standard. … Perhaps the dictionary tended to make me more ‘anxious’ about what I wrote … it was a bit more ‘distracting’ than I expected – though obviously helpful in terms of accuracy in some cases. (Rachel, questionnaire)
So for Rachel, in the ‘without dictionary’ task:

I expressed myself in ways I knew I could within my existing knowledge of German language. (Rachel, questionnaire)
As has already been stated, one experience of the testing process for Peter was that he became more consciously aware of vocabulary he had previously learnt. Reflecting on an earlier conversation he had had with John, Peter noted:

If this exercise has done anything it’s made us realize that we’ve got a lot of vocab that we didn’t know that we actually have absorbed … that surprised me because it made me realize that there must be quite a fund of words and … it’s just a matter of persevering and keep on working away, you know. (Peter, third interview)
Persevering with an attempt to tap into this fund of knowledge might therefore be more beneficial than simply reaching for a dictionary. Indeed, for some participants the extent of their existing knowledge was sufficient to reassure them that they could proceed relatively confidently without the dictionary. As Peter commented after the first task, he may have appreciated having the dictionary alongside him, but:

I certainly think not having [the dictionary] brought out vocab that I thought I’d forgotten or I didn’t even know that I knew. (Peter, first interview)
At the end of the third week Peter gave an example of how not having the dictionary forced him to find an appropriate word:

There was a word that I wanted to use – divorced – geschieden, isn’t it? – and I couldn’t remember it, so I sat there for about, I don’t know, 2 minutes … and words bounced off all over the place, and then all of a sudden it just came out. Had I a dictionary at that time I probably would have just reached for it, but by going through my own drawing out of my own words it brought all sorts of other words to the surface that I probably wouldn’t have found. (Peter, third interview)
Peter’s comment raises an important issue. It might have been more expedient to look up the item he wanted in the dictionary. This may have suited his short-term goal, to make up for a gap in his knowledge, but not having the dictionary compelled him to think around the problem. This was a reflection of the other side of the perceived problem with over-reliance. On the other hand, and especially in a time-constrained context, it might have been more beneficial simply to locate the precise word for his meaning and to move on with the task. Herein lies the potential ‘double-edged sword’ of the dictionary. In what circumstances is it more beneficial to quickly look something up? And in what circumstances would it be better to think around the problem? There is a tension here. The participants went on to speculate that part of the answer to this tension was related to the level of skill the test taker had with using the dictionary and applying the look-ups.
Test taker ability to use the dictionary effectively

It was noted that the dictionary, in itself, may not make the test fairer because a writer has to know how to use it. John put it like this:

The dictionary is helpful to look up a gender or a word but you already have to know a certain amount of German to even begin to express your ideas. If you have insufficient German the dictionary would be of no use anyway. (John, second questionnaire)
John here expressed a potentially serious drawback to the dictionary. It really cannot be used profitably if there is insufficient prior knowledge of the language. Indeed, John suggested that writers may be better served by using other strategies. He argued that although occasionally it was useful to be able to look up words he did not understand in the test task – if he had not been able to do this, he felt that he “might have gone off at a tangent” – writing a response should ideally be confined to what the writer knows. He commented:

The more German you know, the more useful the dictionary will be, the less you know, the more of a hindrance it is … and you have to write according to what you know – the dictionary tends to lead you on to try and do something better, whereas you’d be better off sticking to what you’d thought of in the first place perhaps. (John, second interview)
For several participants, the ability to use the dictionary successfully went beyond the location of individual words to a recognition that you also needed to have a good grasp of appropriate grammar:

The exam is to test your writing skills in the German language. Although the dictionary does have phrases where apparently the German grammar is written you would still need to know word order. (Sandra, first questionnaire)

The dictionary aided me with my German use of genders, plurals, and definitions but as far as grammar was concerned, I had to rely / relied on my knowledge for both tasks. (Jessica, first questionnaire)

You can’t actually use … the dictionary, unless you know what you’re looking up anyway, can you? … You’ve got to have the basic knowledge of what you’re doing to be able to utilize the tool. (John, second interview)
Sharon acknowledged, however, that the skills required to circumvent lack of knowledge of a particular word (an important dimension of strategic competence) were related to the test taker’s proficiency. Her own reflections brought out very clearly the tension between test-taker skill and knowing when to use and not to use the dictionary, and illustrated that allowing or disallowing dictionaries is not a straightforward matter:

I actually quite enjoyed the non-dictionary task because I was making up all these new words, which I think is not a bad thing anyway … but I think if you were someone who didn’t or couldn’t do that that easily, couldn’t re-word something or have the confidence to make up something and go ‘oh, give it a go’ or wasn’t very, I guess, trusting themselves, then I think they’d find it very hard without the dictionary … Generally, at my [advanced] level you can talk around the point or you can re-phrase it or re-word it, whereas someone at a beginners’ level would be more dependent on it, which has a whole lot of factors as well, doesn’t it – time, whether they’ve got it right or not, how they use a word in the sentence, lots of factors. (Sharon, interview)
Whether you have got it right or not. How you use a word in the sentence. A clear perspective was emerging from the data that the ability to use the dictionary successfully was linked with test taker ability and language proficiency.
And time. The observational evidence presented in Chapter 5 revealed that, on average, the test takers in the first two studies were spending around 20% of the available examination time on dictionary look-ups. This was a considerable amount of time, which pointed to a potential distraction when completing a time-constrained examination. What, then, did the participants think about the influence of dictionary use on time?
Impact on time

The impact on time within a time-constrained task was one strongly felt drawback of dictionary use. Jenny is a case in point. The observational evidence revealed that Jenny made 24 look-ups. She spent over 20 minutes, or 43% of the available time, using the dictionary. This was a serious distraction from the time available to complete the task:

It wasted quite a bit of time, I only wrote half of what I did without the dictionary. (Jenny, questionnaire)

I didn’t write as much because I was too busy flicking through the dictionary. … you waste more time with the dictionary, you’re looking for things. (Jenny, interview)
It seemed that when participants had the dictionary the sense of ‘writing against the clock’, already present in examination conditions, was made worse:

When the dictionary is there you write away, and it’s slower … and when it’s not there you write faster. (Mary, third interview)

I was like ‘how do you, how does the alphabet go again? No, it’s back there, it’s back there, it’s right there’, and I’m thinking, oh, the time’s going … and I hear in the background going tick-tock, tick-tock, and I’m going ‘oh no’. (Sandra, third interview)
The impact on time was in fact one reason why Peter chose to rely on other strategies:

If I’d had the time perhaps, like if it had been an assignment, I would have taken much longer, but I could find an alternative word [without the dictionary] so I did that … because I knew I was racing against the clock. (Peter, first interview)
The time factor was not always viewed as negative, however. Two advanced participants, Janet and Sharon, noted that, in their experience, time spent accessing the dictionary was time wisely spent. Certainly, the observational evidence confirmed that they could use the dictionary somewhat faster than the intermediate test takers:

It allows you to work faster too, in fact … when you go straight away for the word, you look it up and there you have it, and you have gender, plural. (Janet, interview)

I’m relatively quick with a dictionary anyway. … [but it] could be a downfall if you’re not very quick with it. (Sharon, interview)
This preference for the dictionary was despite the fact that, as Sharon noted in her questionnaire, sometimes there was “too big a choice of words to use.” These different perspectives on time exemplify what I earlier described as the ‘double-edged sword’ of the dictionary – when do you look something up, and when is it better to think around the problem? It seemed that a more positive perspective on time was clearly related to both the level of skill the test takers had with using the dictionary efficiently and the level of language proficiency of the user. Overall, however, time taken to use the dictionary was generally seen as a clear disadvantage. Furthermore, some negative perspectives on time appeared to be directly related to the particular dictionary being used. In what ways was the Collins German Dictionary seen as exacerbating the time problem?
Perspectives on the dictionary used

In Chapter 4 I noted that all the participants in the second study had reported that they found the Collins German Dictionary adequate when used, either sometimes or always. On the positive side the dictionary was considered useful because it contained a wide range of words and definitions for a wide variety of contexts. There was also important grammatical information, such as genders and plural forms. When, however, participants found the dictionary only sometimes adequate, several limitations were identified:

• Not always being able to locate the precise word.
• Too many choices available and occasional difficulty in finding the particular word in the form required.
• The time taken to locate the word, given the size of the dictionary and some lack of familiarity with it.
John commented:

It’s too big … too many choices, and it’s too hard to find a particular word, or if you’re looking for a noun, first you’re looking at verbs and things … and suddenly you’re lost in the dictionary … you feel you’re wasting time. (John, second interview)
Participants also noted the prohibitively large size of the middle Sprache Aktiv section when writing in test conditions. As mentioned in Chapter 3, this section is a distinguishing feature of the dictionary, with its intended aim being to help L2 users to find fluent and natural ways of expressing themselves using a variety of different phrases and word sequences. Participants had been made aware that effective use of ideas in this section might enable them to enhance the quality of their writing. Mary argued, however:

If you are struggling to start a sentence or to expand on a sentence, and you go in and look through there [the middle section] you’ve got too many choices, and so you’re stalling, wondering which one would actually fit in with what you want to write. (Mary, first interview)
Jessica noted:

I was trying to find an expression in there and I gave up after a while because I couldn’t find the one I was looking for. (Jessica, second interview)
It appeared, then, that in a time-constrained context the Collins German Dictionary was not really serving the needs of several participants.
Drawing conclusions

The picture painted thus far provides somewhat conflicting evidence about the perceived value of having a bilingual dictionary in a writing examination. On the one hand, the test takers expressed the view that the dictionary was definitely valued because they could use it to find words they did not necessarily know to help to make up for gaps in knowledge and to enhance the rhetorical effect of the writing. Several test takers also noted that having the dictionary made them feel
more confident in the examination, and this appeared to lead to a more positive interaction with the test. Just having the dictionary there, whether it was actually used or not, contributed to this reassuring sense of security and comfort. It was also evident from the second study that the more experience the test takers gained with the dictionary in the examination context the easier it became, in their perception, to use it, and the more confident it appeared to make them feel.

On the other hand, several participants struggled with allowing a dictionary because of a belief that the test was really there to measure their vocabulary knowledge. This being the case, having a dictionary potentially made the test unfair. Also, the dictionary was sometimes seen as a distraction from the task – it might interfere with a test taker’s train of thought, or the test takers might become reliant on it to the exclusion of other equally valuable strategies. There was a recognition by several that ability to use the dictionary successfully was linked to the level of language proficiency, and this set up a tension regarding when it might, and when it might not, be appropriate to use it. Finally, the dictionary was often seen to impact negatively on the time available to complete the task, and the sheer size of the Collins German Dictionary seemed to be a factor in this.

Nevertheless, despite the drawback of the size of the dictionary used, there was recognition that having access to some kind of resource when writing was authentic, and mirrored real-world practice. Peter argued in the second interview that in the outside world, in contrast to academic settings, “if you don’t know something you can look it up.” His perspective raises two important questions: if a large and comprehensive dictionary, such as the Collins German Dictionary, is simply too big in an examination context, should test takers be encouraged to use smaller dictionaries?
Or should they be allowed a different kind of resource instead?
The use of other resources

With reference to his work in law enforcement, John acknowledged the reliance that was placed on having the appropriate resource to fulfill a given task. He argued:

You had to constantly look up law and stuff to be able to do the job, because you couldn’t retain it all – you still had a basic knowledge, but to fine-tune it you still had to go to the law often, to see exactly what it said and whether the circumstances fitted it. (John, second interview)
The ‘basic knowledge’ had to be there. Nevertheless, that fundamental knowledge-base often required some checking or referencing to ensure that you were going along the right track. Peter used the analogy of an aircraft mechanic. Aircraft mechanics would presumably not be expected to retain all their knowledge of every aircraft model without having recourse to the appropriate manuals. Why, then, should those taking a test be placed in different conditions – not having access to suitable support resources? Reflecting on the prohibitive size of the Collins German Dictionary, Peter commented that it was “like looking up every aircraft model.” He went on to suggest:

If I was just working on the Boeing 767 I only want that, I don’t want the rest of the rubbish because I don’t need it. (Peter, second interview)
In other words, in his view the information supplied in a comprehensive dictionary was simply too broad to be of value in a timed test, and much of the information found in it may appear superfluous and unnecessary. In this connection, John suggested that a different type of dictionary might be more practical:

Perhaps with a smaller dictionary it would be far better than with such a big dictionary where there are so many variables … and it’s too hard perhaps to actually look up something … with a smaller dictionary there’s less choice, you can look it up more quickly and carry on with the task. (John, first interview)
In contrast to John, however, Peter considered that a smaller dictionary was not the answer. What was needed was a more focused resource, so that there was less potential to waste time and more opportunity to home in on the exact information needed to complete the task. He referred back to the lists of words and phrases in the Writing Study Guide that he and his fellow students had been encouraged to use throughout the course. He argued that the Writing Guide was “great from the point of view that it’s tailor-made.” He went on to explain why this was so:

It’s not so much the vocab that you’re grasping for, it’s how to put key phrases, like for example ‘on the other hand’ or ‘besides’ that might shoot out of your head, whereas if they’re there in the list it’s no different from, say, an engineer that’s looking up the specifications for a Boeing refit, something like that … you’re using it just to check your knowledge and to give you a little bit of directional push. (Peter, second interview)
Thus, from Peter’s perspective a list of words and phrases was ideally suited to the task of writing, helpful not only to check your knowledge but also to set you on the right path. Peter therefore saw a difference between dictionaries and some kind of writing guide in tests:

If it was a more articulate task using much more articulate expressionisms, I think the booklet would be better because it wouldn’t actually be cheating in as much, because you’d still have to have the vocab … a mechanic is not cheating when he picks up a car manual, he’s checking something which is peculiar to the task in hand, and I think that would just enhance a person’s natural ability. … I think the booklet has got key expressions which are easy to find, they’re all concentrated … and if you’re trying to do sort of well co-ordinated German I think that using a book like that is quite handy. (Peter, second interview)
Peter here made an interesting comment which reflected what had already been articulated in the discourse surrounding the potential of the dictionary to obscure the measurement of the test takers’ knowledge – that using a dictionary is tantamount to cheating because when it is used we cannot find out accurately what a test taker knows and can do unaided. In Peter’s view, however, mechanics are not cheating when they access a car manual; they are simply checking something necessary for the task in hand. In a similar way, test takers are not cheating if they use something like the lists in the Writing Study Guide because they would still need to know the vocabulary.
Conclusion

It certainly seemed from the variety of data collected in the first two studies that there was no clear preference for or against the dictionary in writing tests. There was, rather, a range of opinions which highlighted perceived strengths and weaknesses of allowing or disallowing dictionaries, and considerable reflection on the potential positive and negative impact of dictionary availability. Having said that, the weight of evidence did seem to be tipping the scales somewhat in favor of outlawing dictionaries in tests, even if access to some kind of resource was seen as legitimate and authentic.

Also, there appeared to be a connection in the minds of some test takers between dictionary availability and cheating. This raises a question about the extent to which this might override or negate the positive advantages the test takers perceived. It also raises a question about why using a list of words and phrases –
which, after all, gives the students a lot of information about how to structure their writing – should not be viewed as cheating.

The next chapter considers the opinions of the wider range of participants in the third study, who also used smaller dictionaries. It seeks to answer several questions. To what extent did this larger group agree with the opinions of the smaller, more focused groups? Did these test takers share their concerns? Or were they on the whole more positive about their experiences? Did having a smaller dictionary make a difference to their perceptions? Or did they appear to prefer another kind of resource altogether? I now turn to a consideration of test taker perspectives in the third study.
Chapter 7

Some more test taker perspectives
Whether we regard it as advantageous or disadvantageous for test takers to be allowed bilingual dictionaries in tests of their L2 writing proficiency depends on many factors. In the last few chapters a whole range of evidence has been presented which might help us to consider the issues surrounding including or excluding dictionaries in a more informed way. The last chapter focused on the test takers in the first two studies and presented a good deal of fine-grained qualitative data on test taker perspectives. These data served to build up a detailed picture of the perceived strengths and weaknesses of dictionary availability. The third study presented an opportunity not only to collect a variety of qualitative perspectives from a wider range of participants but also to quantify some of these perspectives and to lay them alongside the qualitative picture that was emerging. This chapter presents the opinions of this larger set of test takers, and considers their perspectives in the light of the first two studies and the findings of previous research.
The participants

The 47 participants in the third study (eight boys and 39 girls) were drawn from 11 secondary schools in New Zealand. The students came from a variety of school types. Common to all the participants were their age (all were 17 to 18 years old) and their German course. All were being prepared for the intermediate-level Bursary German examination which they were set to take two months after taking part in the study.
Preferences for task type

In this study the participants were not interviewed, but, in common with the two previous studies, were issued with two types of questionnaire. Two short questionnaires were completed, one after each task. These questionnaires were designed to provide quantifiable comparative perspectives across both tasks, and students were required to respond on a five-point attitudinal scale (Figure 7.1).
Figure 7.1 Questions on the shorter questionnaire.

*Note. These statements were presented separately on two different versions of the questionnaire.
Table 7.1 Participants’ perceptions.

                                                      With dictionary   Without dictionary
Questionnaire statement                               M     SD          M     SD            Opinion

Understanding the question
1. I understood the German used in the question       4.5   0.7         4.5   0.7           strongly agree
2. I understood what I had to do to answer
   the question                                       4.4   0.6         4.3   0.7           agree
3. When first reading the question I felt
   confident that I could write a good answer         3.4   0.9         3.3   0.9           neutral
4. The dictionary helped / would have helped me
   to understand the German in the question better    2.9   1.2         2.5   1.1           disagree

Reflecting on the answer
5. I felt happy with my answer because I was able
   to use a dictionary / I would have felt happier
   with my answer if I had been able to use a
   dictionary                                         3.4   1.0         4.0   0.8           neutral → agree
6. I found writing the answer to this task hard       3.4   1.0         3.4   1.0           neutral
7. I think writing the answer to this task would
   have been harder without the dictionary /
   easier with the dictionary                         3.6   1.0         3.9   1.0           neutral → agree

Responses were recorded according to the following scale:
5 = strongly agree
4 = agree
3 = neither agree nor disagree
2 = disagree
1 = strongly disagree
Overall results for the two questionnaires were then calculated (Table 7.1). These results suggest the following perceptions:

• Participants appeared to find the test tasks equally straightforward to understand, regardless of the test condition, and it was very clear to participants what they had to do in response to the task, regardless of condition. This was reassuring given that one aim of the study was to ensure that the two tasks were comparable in difficulty.
• Participants were generally quite neutral about how confident they felt, at the start, about being able to respond to the task.
• It was generally not perceived as being necessary to have the dictionary to help participants to understand the task.
• Having the dictionary did not appear to affect participants’ sense of happiness or satisfaction with their response, judging by the overall neutral response to statement 5. There was, however, agreement that, in the task where participants were denied the dictionary, they would have felt happier if they had been allowed to have the dictionary.
• Participants were also equally neutral overall about whether or not they found writing the response difficult, with slightly more agreement that it would have been easier with a dictionary.
In other words, these descriptive statistics suggest that having or not having a dictionary to hand in the writing test really did not appear to be making any difference to attitudes, either towards the task or towards how the test takers felt about their response. Having said that, it was apparent from two sets of statements that perhaps the availability of the dictionary was making some positive difference, albeit slight. There was no other evidence here to suggest a preference either for or against the dictionary.
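As an illustrative aside, the descriptive statistics reported above (a mean and standard deviation per statement on the five-point scale) can be reproduced with a short Python sketch. The response values below are invented for demonstration only; the study's actual raw data are not reproduced here.

```python
from statistics import mean, stdev

# Hypothetical responses to one questionnaire statement under each
# condition, on the 5-point scale (5 = strongly agree ... 1 = strongly
# disagree). These values are illustrative, not the study's data.
with_dict = [5, 4, 5, 4, 3, 5, 4, 4, 5, 5]
without_dict = [5, 5, 4, 4, 5, 4, 5, 4, 5, 4]

def summarise(responses):
    """Return (M, SD) rounded to one decimal place, as in Table 7.1."""
    return round(mean(responses), 1), round(stdev(responses), 1)

m_with, sd_with = summarise(with_dict)
m_without, sd_without = summarise(without_dict)
print(f"With dictionary:    M = {m_with}, SD = {sd_with}")
print(f"Without dictionary: M = {m_without}, SD = {sd_without}")
```

Note that this sketch uses the sample standard deviation; whether the original analysis used the sample or population formula is not stated in this chapter.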
Perceptions about having the dictionary

Several questions on the longer questionnaire focused on across-condition comparisons. In three questions participants were asked to identify one statement that most accurately reflected their opinion concerning which type of test (‘with dictionary’ or ‘without dictionary’) they preferred, thought was fairer, or led to a sense of greater confidence. After each set of statements, space was given for participants to give a reason for their choice. Figure 7.2 records the statements as found on the questionnaire and Figure 7.3 presents the responses.

There was no clear-cut preference either for or against having the dictionary. That is, although more participants preferred having the dictionary to not having it, around four out of ten participants were equally happy with or without. This finding mirrored the evidence from the shorter questionnaires. The other two sets of responses presented more definite opinions. Although just over two in every three participants thought it was fairer to be assessed without the dictionary, just under two in every three felt more confident when the dictionary was available.
Chapter 7. Some more test taker perspectives 151
Either: Overall, I preferred HAVING the dictionary
Or: Overall, I preferred NOT HAVING the dictionary
Or: I was equally happy with or without the dictionary

Either: I think it is fairer to assess my writing skills WITH the dictionary than without
Or: I think it is fairer to assess my writing skills WITHOUT the dictionary than with
Or: I think both types of assessment are equally fair

Either: I felt more confident when I HAD the dictionary with me
Or: I felt more confident when I DID NOT HAVE the dictionary with me
Or: I felt equally confident in both tasks
Figure 7.2 Test taker options in the third study.
Figure 7.3 Opinions about dictionary availability.
A further comparison was undertaken to explore whether test taker ability or prior experience with the dictionary made a difference to participants’ perceptions across the three areas under consideration – preference, fairness, and confidence. The percentages of participants expressing differential preferences are illustrated in Figures 7.4 and 7.5.

Figure 7.4 Participant opinions by level of ability.

Figure 7.5 Participant opinions by prior experience.

Chi-square (χ2) analyses were used to determine if the actual numbers of participants expressing different preferences were different to what might have been expected if all things were equal and if there were no potentially influencing factors (East, 2005c). When considered in relation to level of ability, it was found that the choices made did not differ significantly from expected frequencies. When considered in relation to level of experience, actual preferences did not differ significantly from expectations in terms of fairness. There was, however, a significant difference for the other two categories:

• Less experienced dictionary users preferred not having the dictionary significantly more often than might have been expected – with the probability of this difference happening by chance alone being less than 1 in 30 (χ2 (2, n = 47) = 6.933, p = .031 (two-tailed)).
• More experienced dictionary users felt significantly more confident with the dictionary, with those with less experience feeling more confident correspondingly less often. The probability of this difference happening by chance alone was about 1 in 100 (p = .011 (two-tailed), Fisher’s exact test).
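The chi-square logic just described compares observed cross-tabulated counts with the frequencies expected ‘if all things were equal’. The sketch below illustrates the computation only: the counts are invented (the book reports the test statistics, not the raw cross-tabulation), so the resulting statistic approximates rather than reproduces the published χ2(2, n = 47) = 6.933:

```python
# Hypothetical counts (invented for illustration):
# rows = less/more prior dictionary experience,
# cols = preferred 'with' / 'without' / 'either'.
observed = [
    [5, 12, 7],   # less experienced
    [12, 4, 7],   # more experienced
]

n = sum(sum(row) for row in observed)
row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]

# Expected frequencies "if all things were equal": row total x column total / n
expected = [[r * c / n for c in col_totals] for r in row_totals]

# Chi-square statistic: sum of (observed - expected)^2 / expected over all cells
chi2 = sum(
    (observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
    for i in range(len(observed))
    for j in range(len(observed[0]))
)
dof = (len(observed) - 1) * (len(observed[0]) - 1)
print(f"chi2({dof}, n = {n}) = {chi2:.3f}")  # prints: chi2(2, n = 47) = 6.864
```

For the 2×2 confidence comparison, where small expected counts rule out chi-square, a statistics package’s Fisher’s exact test (e.g. `scipy.stats.fisher_exact`) would be used on the cross-tabulation instead.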
In summary, level of ability did not significantly affect participants’ perceptions about the test condition they preferred or found fairer, or about how confident they felt. However, the more prior experience test takers had had with a dictionary, the more confident they felt in the ‘with dictionary’ tasks, and those with less experience had a greater preference for ‘without dictionary’ tests. Dictionary experience could therefore be regarded as a factor in determining how participants felt about having the dictionary available, whereas level of ability could not.

What about the reasons behind participants’ preferences? Qualitative opinions from the test takers were available from the open-ended sections of the three questions above, and also from the sections that asked the participants to consider the advantages and disadvantages of the two testing conditions. In order to analyze these opinions effectively a categorization was carried out, leading to a coding system and a resultant taxonomy of opinions on dictionary use (see Appendix 2). This allowed for a meaningful and thick description of the data and a summarization of participant perspectives (Hays, 2004). The taxonomy covered four areas:
1. Opinions about the dictionary used
2. Advantages of dictionary availability
3. Disadvantages of dictionary availability
4. Miscellaneous comments.
Once the final coding had been done it was possible to determine the frequency with which particular opinions were held together with the opinions themselves.
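The frequency determination described here can be sketched as a simple tally: each participant contributes at most one count per taxonomy category, and percentages are taken against the number of participants. The category codes and data below are invented shorthand for illustration; they are not the actual taxonomy in Appendix 2:

```python
# Hypothetical sketch of the per-category frequency count behind Tables 7.2
# and 7.3 (invented codes and participants, for illustration only).
from collections import Counter

# coded[participant] = set of taxonomy codes assigned to that participant's
# open-ended comments; using a set ensures one count per participant per code
coded = {
    "P01": {"ADV: check/find unknown words", "ADV: grammar/spelling"},
    "P02": {"ADV: check/find unknown words", "DIS: time-consuming"},
    "P03": {"DIS: time-consuming", "DIS: over-reliance"},
}

n = len(coded)
counts = Counter(code for codes in coded.values() for code in codes)
for code, freq in counts.most_common():
    print(f"{code:35s} {freq:2d}  {freq / n:.0%}")
```

Ranking by frequency in this way yields the rank-ordered tables that follow.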
(Footnote: Because more than 20% of cells had an expected count less than five, the ‘equally confident’ count was removed and Fisher’s exact test (Sheskin, 2000) was calculated using a 2×2 cross-tabulation with level of experience (two levels) and more confidence with or without the dictionary.)

The benefits of dictionaries in writing tests

The final taxonomy identified several perceived advantages to having the dictionary in the tests, and the numbers of times each of these responses was made are recorded, in rank order, in Table 7.2.
Table 7.2 Advantages to having the dictionary.

Category                                                        Frequency (n = 47)   Percentage
1. Ability to check or find words not known or unsure about            46               98%
2. Ability to check related grammar information or spelling            22               46%
3. Sense of security and confidence                                    14               30%
4. Helps with understanding the question                               11               23%
5. Ability to write and think freely when dictionary is there           7               15%
6. A source of inspiration and ideas                                    6               13%
The most commonly identified advantage of dictionary availability, cited by virtually all of the participants, was the ability to check or to find words they either did not know or were not sure about. Among the reasons given for this was that this was seen as advantageous in comparison with other communicative strategies such as circumlocution or paraphrase. Several viewpoints brought out the benefit of being able to say exactly what you wanted to say without having to resort to other words or change your ideas:

• “I could look up words I didn’t know instead of having to rephrase things.”
• “I didn’t have to spend time trying to think of alternative phrasings.”
• “When you get stuck on a word that you can’t think of the German for, it is good to be able to look it up without having to change your ideas.”
• “You could look up a word instead of trying to use other words to explain the same thing.”
One participant commented that the availability of the dictionary supported an achievement rather than an avoidance strategy – “You don’t have to leave gaps in your essay when you can’t think of the German vocab.” The dictionary was also seen as advantageous as a way to broaden and enhance the writing – using richer or more sophisticated lexis. You could, for example:

• “use a wider vocabulary”
• “use more vocab in your essay”
• “use a greater range of words (words I hadn’t used in class)”
• “impress the marker with a word from the dictionary”
• “use more in-depth sentences”
• “give a more in-depth answer” or “find ‘fancy’ big words to improve standard of writing.”
As one participant put it, using the dictionary “[e]xtended the range of words available to use in my essay, making it a little more complex than it would’ve been.” These perspectives on the benefit of the dictionary were certainly supported in the evidence presented in Chapter 4 concerning both the LFP and the uses to which dictionaries were put. This perceived potential benefit of the dictionary was therefore reflected in test takers’ actual dictionary use. The support of the dictionary to help keep the writing going or to help with inspiration was also noted. Just over one in ten found the dictionary helpful as a source of information or ideas, and for 15% of participants the availability of the dictionary contributed to the ability to write and think more creatively. Comments included:

• “Able to think more freely because I could look up the equivalent German words.”
• “It can give you more inspiration of things to say.”
• “If I saw a word in the dictionary relevant to the topic I could use it as an idea to base a paragraph on.”
• “Look up suitable phrases to begin a paragraph – makes essay written with more flair.”
• “If you are stuck on what to say you can consult the dictionary for an answer, it’s easier.”
• “Using the dictionary enabled me to increase my vocabulary and express the ideas I had.”
• “You could stay on the same train of thought as vocabulary was no longer a barrier.”
• You “could stay closer to planned ideas.”
• “I was able to carry on with what I was trying to say and use the dictionary for finding the word I didn’t know.”
For several participants, therefore, a distinct benefit of having the dictionary was that you did not have to abandon an idea or a train of thought just because you did not know a word. Just under half commented that it was also an advantage for them to be able to check related grammar information or spelling. This included checking for genders, word endings, plurals, and prepositions, with genders and plurals being the two most sought after items of grammar.
Table 7.3 Disadvantages of having the dictionary.

Disadvantages of Dictionary Availability                                Frequency (n = 47)   Percentage
1. Time-consuming – takes too long to use                                      32               68%
2. Its use distracts from a student’s ‘real’ knowledge                         24               51%
3. Over-reliance                                                               13               28%
4. Disincentive to learn vocabulary or to use known vocabulary                  9               19%
5. Over-stretching: writers try to say more than they are capable of            7               15%
6. Requirement to know how to apply the knowledge found                         7               15%
7. Concern about a greater expectation on accuracy                              5               11%
8. Confusion and self-questioning after locating a word                         5               11%
9. Ability to write freely & unencumbered when dictionary not there             4                9%
10. Negative impact on the perceived quality of the writing                     2                4%
11. Its use distracts from using other strategies                               2                4%
The drawbacks of dictionaries in writing tests

The taxonomy also identified several perceived disadvantages to having the dictionary. Again, the numbers of times each of these responses was made are recorded in rank order in Table 7.3. The most identified disadvantage of dictionary availability, cited by over two-thirds of participants, was that using the dictionary takes too long in an examination. This was often related to other disadvantages. It was evident, for example, that several participants used the dictionary to reassure themselves about the accuracy of what they were writing, and they saw this as time-wasting. For some this checking led to confusion and self-doubt about words they thought they knew:

• “I’d waste time checking words I already knew but wasn’t sure about.”
• “It used up time looking things up only to find I already knew it.”
• “I spent a lot of time looking up words which I actually knew, just to check, which wasted time.”
• “Slowed me down, checking small details – tempting to check everything.”
• “It slowed me down. I felt I needed to check all the uncertainties of noun genders and plural endings. This took time, thus less time to write the essay.”
• “I felt it was too time-consuming because I started to doubt the words I thought I knew.”
• “Used up a lot of time, confused you sometimes.”
For others, time was seen as a disadvantage when wanting to broaden vocabulary or increase the quality of the discourse:

• “Took too long with the dictionary, trying to think of impressive sentences which were more confusing.”
• “It made me want to use bigger words all the time – looked in the dictionary lots – took lots of time – when I should have been writing.”
Indeed, 15% of participants commented that having the dictionary led them to try to say more than they felt they were in fact capable of, and 11% felt that because the dictionary was there they were somehow being expected to write more accurately:

• “You perhaps try to be too tricky in some respects.”
• “I used difficult word orders which were probably wrong because of new words.”
• “Tried too complex sentences.”
• “It made me feel I had to / should use vocabulary I hadn’t used before.”
• “It was very time-consuming and I became more worried about accuracy than communication.”
As one participant commented, the dictionary brought something negative to the writing that its non-availability avoided: “I used words I was not familiar with and therefore meant that I was not too sure if the sentence structure was correct, but if I had used familiar words and phrases I would know the right sentence structure and cases.”

Related to this last point, one in five participants saw the availability of the dictionary as a potential disincentive either to learn vocabulary in the run-up to the examination or to use known vocabulary when writing in the examination. Just over one in four admitted to a sense of over-reliance on the dictionary when it was available. One participant noted that it “[d]iscourages students from using vocabulary that they already know since they head straight for the dictionary.” For this participant this meant that “[s]tudents may rely totally on the dictionary.” It is as if the writer becomes “[m]ore dependent on dictionary to provide vocabulary rather than background knowledge” – it “sort of defeats the purpose of learning vocab.” For another participant, “Once you started using it you didn’t want to stop.” Another suggested, “I like to see what I can do without dictionary and use words I’ve bothered to learn. It’s more rewarding for me personally.”

Allied to the viewpoint that having a dictionary was a potential disincentive to learning vocabulary was the even stronger opinion (noted by half the participants) that being allowed to use a dictionary in an examination distracted from, and therefore failed to measure, a student’s ‘real’ knowledge. There was a sense in which this created an ‘unfair’ dimension:

• “With the dictionary it doesn’t really reflect what you know but what the dictionary tells you.”
• “This method [without a dictionary] tests your knowledge not your ability to read a dictionary.”
• “Your writing will be based on your knowledge, not what a dictionary has given you.”
• “This gives a fairer assessment of a student’s vocabulary and grammar and knowledge of the language.”
• “It is better to assess the general knowledge of the person and not the book they use.”
• “[Without the dictionary] you get a more accurate view on my writing skills and range of vocabulary.”
Thus, for these test takers a tension was apparent between only using knowledge which had been gained through prior study, and using new information which the dictionary might provide. As was evident from some comments from the previous two studies a discourse was set up which framed using the dictionary as being as good as cheating. Indeed, two participants made this direct association. For one, “[i]t’s supposed to be a test of what you actually know and by looking up words it’s similar to cheating.” For the other using the dictionary “[c]ould be deemed as cheating” because if you use it you “don’t need to remember anything.” Another participant put it like this: “Using a dictionary is like using an answer booklet – it defeats the purpose of a test which is to evaluate how much and what you know.” Another potential drawback to the availability of dictionaries, alluded to by 15% and again in evidence in the previous studies, was that unless you knew how to use the knowledge found in a dictionary, the information was of little perceived value. Successful use of the dictionary “depends on whether the person uses the word in the right order or context of an essay.” Even though with the dictionary one participant “could look up words”, this participant then “had to think about how to use them correctly.” Another noted that “you would have to understand how to put the word into context otherwise it was irrelevant.” A related drawback,
for one participant, was that “sometimes [with the dictionary] I was translating almost word for word from English”, suggesting that in this case the information was also of no great value. One participant reflected on potential unfairness if certain dictionary users were more familiar with how to apply the knowledge they found in the dictionary than others. Her answer to the problem revealed a particularly pertinent issue: “if dictionaries were used in tests, maybe have lessons on how to use dictionary properly and to full advantage.”
Miscellaneous comments

For a number of participants, there were neither perceived advantages nor disadvantages to having the dictionary, and its availability or otherwise in a writing examination was immaterial to them. Comments of this sort were classified as ‘miscellaneous’. One participant noted, for example, that for him there were “no disadvantages” to dictionary use because “[t]he questions were easy enough to understand” and he therefore “knew what the question was asking.” Six participants (13%) felt that it made no difference to the quality of their writing or to the test either way. One commented that she felt using the dictionary can be helpful but, in her opinion, it is “not necessary for writing tasks.” Another noted, “I found I could write a good essay without a dictionary as well as with one.” Another stated that “[b]oth types show different aspects of my writing ability.”

It was also observed that tests in either condition were fair “as long as whether a dictionary was used is known by the person doing the assessing.” Presumably this test taker was concerned that no unfair advantage should be gained by or disadvantage be experienced for the test takers because the raters did not take dictionary availability into account when marking. (As I argue in Chapter 3, however, it may be that more reliable test scores are generated from rating sessions in which the raters are not aware if dictionaries have been allowed.)
The dictionaries used

The taxonomy also indicated a variety of comments regarding the various dictionaries the participants used. These comments, which focused on overall satisfaction with the dictionary, were analysed for the subset of Collins Pocket users (n = 31) and for the remaining users (n = 16), so that some small-scale comparison between different types of dictionary could be made. Table 7.4 presents the data for this section.
Table 7.4 Satisfaction with the dictionary – third study.

                                                        Collins Pocket (n = 31)    All other users (n = 16)
Category                                                Frequency    Percentage    Frequency    Percentage
1. Unable to locate all words (words not listed)            13           42%            8           50%
2. Satisfied that all required information was there        10           32%            7           44%
3. Insufficient grammatical information                      7           23%            4           25%
4. Insufficient information about words in contexts          4           13%            4           25%
Bearing in mind that the interaction between the dictionary and the dictionary user (mentioned in Chapter 5) might be confounding the success with which the dictionaries could be used (that is, perceptions of dictionary inadequacy are not necessarily down to limitations in the dictionaries alone), this comparative information reveals the following:

• The most common perspective on the dictionaries was the limitation that the test takers were unable to locate all the words they wanted. At least four out of ten experienced this sometimes, although the Collins Pocket did appear to fare somewhat better than the other dictionaries with regard to the information it did give.
• However, fewer participants reported that they were satisfied that the Collins Pocket gave them the required information than those using the other dictionaries (32% compared to 44%).
With regard to those who used dictionaries other than the Collins Pocket, the inadequacies expressed forced the test takers to have to find alternative means of solving the problems for which they wanted to use the dictionary. Typical comments included:

• “Some of the words I looked up were not in the dictionary so I had to find other ways of saying it” (BBC German Learners’ Dictionary).
• “There was one word I looked up but couldn’t find it and so I had to rethink the sentence” (Collins German College Dictionary).
• “I found it frustrating that there were often no indications of plurals” (BBC German Learners’ Dictionary).
With regard to the Collins Pocket, negative opinions focused on inadequacy of information:
• “Sometimes it doesn’t have all of the words.”
• “Sometimes I could not find the expression I was looking for.”
• “I would usually find the German equivalent to English words but it wasn’t always clear that these words were the right words to use in a certain context.”
• “I was unable to find most nouns’ plural endings, or some idiomatic phrases I looked for equivalents for.”
Certainly when these comments are considered alongside the instances of dictionary misuse illustrated in Chapter 5, there is some evidence to suggest that, for some test takers, the Collins Pocket Dictionary was ‘found wanting’ in terms of the information it gave. On the other hand, when satisfaction was expressed about the Collins Pocket, comments were positive and unequivocal:

• “I was able to find all the words I was looking for and it was easy to tell what the genders and plural forms of the words were.”
• “User Friendly Dictionary, I am used to using this specific one.”
• “I found it quick and easy to find the words needed.”
• “Because the dictionary is set up so that it can easily be followed.”
• “I always found the word I was looking for quite easily.”
• “The dictionary was very simple to use, basic, the layout was very easy to follow when finding a word.”
• “Because the words were there with all the relevant endings.”
Nevertheless, one expressed limitation, possibly a consequence of the dictionary’s small size, was this: “I found most words I was looking for but I needed sentence starters and phrases for which the dictionary wasn’t helpful.” It was evident, therefore, that some found the Collins Pocket frustrating, and others found it ‘user friendly’, simple and quick to use, and informative – at least if all you wanted to find were items of lexis. There was, however, no clear evidence to suggest that the Collins Pocket did either considerably better or considerably worse than the other dictionaries used in this study.
Putting it all together

The participants in this third study clearly had a lot to say, both positive and negative, about having a dictionary in a timed writing test. I conclude this chapter by placing their perspectives into a wider context, comparing and contrasting their viewpoints both with the opinions of those in the first two studies and with the research that others have done into test taker perspectives. This comparison
reveals that each perceived strength or weakness of having the dictionary was in fact two-sided: there were advantages and disadvantages inherent in each aspect. I consider below four key perspectives and bring out the positives and negatives of each one:

1. Access to words forgotten or not known
2. Increase in confidence
3. Time taken to complete the test
4. Impact on the measurement of test takers’ knowledge.
It was evident across all three studies that one of the primary perceived advantages to having the dictionary with you in an examination was that you could check up on words, or find new words to enhance the quality of the writing. This was a definite bonus for many. Nevertheless, participants in all the studies recognized the ‘double-edged’ nature of having the dictionary. For some, there was an expectation that they needed to use more complex vocabulary simply because the dictionary was available to find it. This pushed several out of their comfort zones, and they did not always have the confidence to know that they were choosing the best or most appropriate options. It was recognized that the more you knew how to use the dictionary effectively, the more likely you were to be successful with it. As one participant put it, the dictionary was “only helpful if you are basically confident in what you are saying.” The suggestion here is that the dictionary becomes a liability when you do not know how to use it and it therefore potentially leads you into unfamiliar territory, which might have unanticipated landmines.

Nevertheless, two out of three participants in the third study felt more confident when they had the dictionary. Taken at face value, an increase in confidence would be a definite positive reason for promoting dictionary availability in timed examinations. This increased sense of confidence was also expressed in the first two studies. It is also apparent from previous studies into dictionaries. Idstein (2003) investigated the difference that a bilingualized dictionary might make in reading comprehension tests. Idstein gathered questionnaire evidence on dictionary use and availability from 114 students from which she concluded that:

• 92% of students were satisfied with the information the dictionary provided;
• 90% believed that their test score would be higher if they used a dictionary;
• 96% believed that dictionaries should be allowed in tests.
On this last finding, Idstein (2003) comments that “[t]he very strong results … show student confidence in the dictionary’s effectiveness, and at the very least, suggest they are more comfortable working with this resource at hand” (p. 73).
Bishop’s (2000) questionnaire investigation among 200 OU students who had taken both ‘with dictionary’ and ‘without dictionary’ French examinations revealed the following:

• 71% of respondents thought that the use of dictionaries in language examinations or tests was helpful.
• 46% commented on the “reassurance factor” of knowing that the dictionary was available to them even if they chose not to use it. This contributed to a “reduction of stress, pressure and nerves”, and “a calming effect” both before and during the examination. They felt “less flustered, more confident” (p. 63).
• Over 40% recognized that when using the dictionary good management of time was essential.
• 79% stated that the knowledge that they could use a dictionary in the examination did not mean that they did not memorize as much when they were preparing.
• The respondents were in agreement with the OU policy to introduce dictionaries into the examinations.
Bishop (2000) notes the importance of the psychological impact of the dictionary. He suggests that “[the] effect alone of reducing stress in examinations might be deemed to justify the permitting of dictionaries in examinations” (p. 63). There was certainly clear evidence that several of the factors raised by both Idstein (2003) and Bishop (2000) were also of importance to test takers in the first two studies. In the third study, however, the sense of greater confidence with the dictionary needed to be interpreted against another important finding. Experienced dictionary users expressed more confidence in ‘with dictionary’ tests more frequently than those with less prior experience. However, as pointed out in Chapter 5, experience actually played no part in increasing the level of success with which the dictionary could be used. Ability level was the deciding factor. It would seem that those who thought, by virtue of experience, that they might be better served by having a dictionary, were in fact no better off when it came to actual use than those who had less experience. This contrast is reminiscent of a concern raised by Hartmann (1999) from survey data available from 710 students from a range of disciplines (not just language learners) at Exeter University. Hartmann observed that 35% of respondents claimed that they had never been taught how to use a dictionary, and 43% had received a little training. Respondents did, however, consider themselves able to use dictionaries. Nine out of ten said they were satisfied with their ability to use a dictionary, and 58% found it easy to do so. Only 8% felt that inability to locate
information was down to their lack of dictionary skills. Hartmann suggests on this evidence that this might reveal “an exaggerated feeling of self-confidence” (p. 47) – students think they can use a dictionary when sometimes they cannot.

By implication, test takers who think, by virtue of prior experience, that they are able to use the dictionary successfully and thereby approach a ‘with dictionary’ test task with greater confidence, might be over-estimating what they can actually do with it. In similar vein, Jenny’s comment (first study) that using a dictionary was ‘obviously not that hard’ because ‘anybody can do it’ belies the extent to which test takers frequently misused dictionaries. On the other hand, the more able participants in the third study clearly were able to use the dictionary more successfully, even if this did not translate into greater confidence in ‘with dictionary’ tests. Overall, therefore, increased confidence in ‘with dictionary’ tests had both positive and negative dimensions when considered in terms of actual performance.

There is also another interesting contrast between Idstein (2003) and Bishop (2000), and the studies reported here. In both previous studies the test takers had expressed support for having the dictionary in the test. For many in the studies reported here, having the dictionary in the test was often seen as unfair. The discourse surrounding unfairness and obscuring knowledge, and a relationship between the two, was strong in all three studies. One participant in the third study, for example, commented that taking the tests without the dictionary “tests what I know, not what I can figure out,” with the implication being that, without a dictionary, what he had learnt through previous study was being tested, not what he could discover from the dictionary if he did not know it. There was a very real sense, across all three studies, that using a dictionary was practically the same as cheating.
Having tests whose primary purpose was the assessment of learning was therefore a major concern for a good number of these participants – with the dictionary being a distinct hindrance in this respect. (It should be noted that Idstein (2003) had investigated reading tests, and it may be that test takers perceive dictionary availability differentially depending on the skill being tested.) Furthermore, some participants in the third study commented that if test takers were allowed dictionaries in an examination there was a potential disincentive to learn vocabulary or to use known vocabulary in the examination – a danger that had also been recognized in other studies (for example Barnes et al., 1999). This also has potentially negative washback into the teaching situation, although Bishop’s (2000) participants had denied this. Indeed, Bishop comments:

    [i]t is clear from [the] results that the dictionary should not be cited by tutors as a main cause of any lack of revision and commitment to memory by students. Students, with few exceptions, will not use its presence as an excuse for not trying
to memorise or learn grammar and vocabulary, although a few claimed they may feel tempted to do so. (p. 59)
It was also noticeable that for some participants there was a tendency to use the dictionary to check words they already knew, suggesting that they were relying on it more than they needed to. This finding mirrors a concern expressed by Asher (1999), whose two surveys indicate that dictionary use often appeared to be a first rather than a last resort, “to confirm words which pupils already know but which they want to check ‘just to be sure’, rather than to find words which are totally unknown” (p. 63). This is surely wasting time that could be used more profitably elsewhere.

Indeed, the most strongly voiced disadvantage of having the dictionary, noted across all studies, was the time taken to use it. This was, for many, a definite distraction. In the first two studies, however, the negative impact on time was often related to the size of the dictionary – the Collins German Dictionary was simply perceived as being too big, and as containing a lot of information that, in an examination context, was irrelevant. Two perspectives encapsulate the mood of the test takers in the first two studies. Patricia (first study) asked, “is a really large dictionary necessary for this type of test?” For John (second study), “it’s the old sledge-hammer to crack a nut, isn’t it?” However, in the third study, when all dictionaries used were between roughly one third and one sixth the size of the Collins German Dictionary, the negative impact of dictionary use on time was still a major factor – a limitation for two out of three test takers. There was a sense that it slowed them down because it took time to locate words. There was no evidence to suggest that smaller dictionaries necessarily speeded up the process.
Once more, however, the time issue had two sides – with two advanced participants in the first study commenting that time spent accessing the dictionary was time spent wisely: they were able to use the dictionary effectively and efficiently to find the precise word for the context. One thing is evident from the test taker opinions available from all three studies: there was no clear preference for or against the dictionary – perspectives were balanced. It was curious that, in contrast to at least two other studies (Idstein, 2003; Bishop, 2000), many test takers did not favor having dictionaries on the basis that it interfered with what they saw as the purpose of the test. There was, however, some discussion in the second study around alternative resources that might be better or fairer. What did the test takers in the third study have to say about this?
Other resources

At the end of the longer questionnaire, participants in the third study were asked to indicate which one resource they would like to have with them in a writing test, using a set of statements and an opportunity to provide a reason for a particular choice. The statements are given in Figure 7.5, and test taker preferences are recorded in Figure 7.6. It was clear that the most popular preference, noted by 22 participants (46%), was for a list of useful words and phrases. The next most popular option, a small bilingual dictionary, was cited by only 7 participants (15%).

Comments given to support the choice of useful words and phrases focused mainly on the issue of scaffolding. The participants felt the need for some type of support that went beyond the ability to check individual words, and that would give them the building blocks with which to construct their writing. It was felt, for example, that such a list, if it included words “that you can fit into a passage”, would give writers “a better overall idea of how to structure [the essay] and write it.” Other comments included “It gives you more than just words and meanings” and “It’s not really the words that get me stuck, it’s how to put them into a sentence.” Scaffolding was perceived by several as the means to a good beginning to their response which would help some to develop their ideas:

• “It starts me off …”
• “Then you have starters for sentences and your essay would flow better.”
• “Having the vocabs is not enough to get started. It’s much better when I can join my ideas together fluently – using phrases.”
• A list “[s]tarts phrases and paragraphs off to a good start and I could think of good ideas and I have my beginning.”
• “That would help get you started and give you plenty of ideas.”
If I had the choice about which resources (like a dictionary) I would like when I am taking a writing exam in German, I would prefer to have (please tick ONE box ONLY):

☐ no extra resources with me
☐ a small bilingual dictionary (roughly the size of a video case)
☐ a larger bilingual dictionary (roughly the size of a large telephone book or ‘Yellow Pages’)
☐ a list of useful words and phrases (like in the Writing Study Guide)
☐ a monolingual dictionary (just German words, no English)
☐ an electronic (bilingual) pocket dictionary
☐ my textbook
Figure 7.5 Question regarding preferences for different resources – third study.
[Pie chart showing the proportions choosing each resource: list of useful words and phrases; small bilingual dictionary; electronic dictionary; textbook; no resources; larger bilingual dictionary; monolingual dictionary]
Figure 7.6 Choice of resources in writing tests – third study.
It would give others more confidence:

• “There are helpful guidelines to start writing. … It makes me feel more confident.”
• “I am usually all right with vocab, but I would feel safer with a list of useful words and phrases in case I get stuck.”
• “I feel more confident when I have the start of a good phrase which I just need to complete. It helps to keep me thinking in German, rather than thinking in English and then translating.”
Time-saving (also the reason given by three respondents for their choice of the electronic dictionary) was mentioned by one respondent: “[Lists] are helpful in making writing of a better standard and a list is not as time consuming as a dictionary.” Several of these perspectives on having a list of words and phrases were also mirrored in interview comments from the second study. In that study the view was expressed that the Writing Study Guide was more ‘tailor-made’ for finding phrases that would help to structure the writing. Preference for something like a list of phrases is an interesting finding when seen in the light of previous studies. As suggested in Chapter 2, other studies (Atkins, 1985; Atkins & Varantola, 1998a; Bishop, 2000; Hartmann, 1994; Idstein, 2003; Thompson, 1987; Tomaszczyk, 1983) draw the conclusion that students overwhelmingly prefer the bilingual dictionary. Indeed, Bishop’s (2000) study revealed that, if the choice were between a bilingual dictionary, a monolingual dictionary, both, or an additional grammar book, 78% would choose the bilingual dictionary. Findings from the third study, corroborated somewhat by those in the second, lead to the conclusion that, when bilingual dictionaries are compared to resources that provide more
scaffolding, participants prefer the scaffolding. There was also no perception that a list of words and phrases provided unfair advantage. Such a resource was therefore sometimes perceived as being fairer than having a dictionary. This perception is curious in that the types of phrases included in the Writing Study Guide were typical of any which might be found in a good bilingual dictionary. Nevertheless, it was seen that knowledge of words, and a demonstration of such knowledge, was somehow being compromised by dictionary availability, whereas use of phrases and formulaic sequences was apparently perceived merely as providing the building blocks on which test takers could place their ‘real knowledge’.

There is an interesting progression here. Students prefer bilingual to monolingual dictionaries because, as noted for example by Atkins (1985) and Thompson (1987), they are easier and swifter to use, and they provide more security. Participants in studies two and three would prefer a list of phrases to bilingual dictionaries because use of a list would make it easier and swifter to create sentences. Such lists thereby provided more security without necessarily compromising a display of ‘word knowledge’. It would seem that participants were looking for easy-to-use resources that helped them with their writing without ‘cheating’, and the more it was perceived that a resource could do that, the more it was preferred.
Conclusion

Over the last five chapters I have presented a range of evidence around bilingual dictionary use in writing exams – all involving learners of German as an L2 of at least an intermediate level of proficiency. I have highlighted, from samples of test takers’ writing, how these test takers both profited from and were hindered by having a dictionary in the examination. I have also explored the range of opinions these test takers expressed about their experiences, both positive and negative.

We are left with two important final issues. Firstly, in the light of all the evidence that has been presented – does allowing students access to a bilingual dictionary lead to writing tests that are useful and fair? This is an issue for language testing practices. Secondly, in the light of the picture that has emerged about the ways in which these test takers actually used, and misused, dictionaries – what steps can we take to minimize the liability of having the dictionary to hand, and to maximize the opportunity? This is a concern for language teaching practices. In the next chapter I focus on the first of these two questions – the usefulness and fairness of allowing dictionaries in writing examinations.
Chapter 8
Having a dictionary in writing exams – is it useful and is it fair?
What is the use in having a dictionary in a writing examination if it does not lead to better writing? And if it does help test takers to improve their writing, doesn’t that make it a useful tool? But if some test takers actually do worse with a dictionary because they can’t use it properly, isn’t that unfair? Findings from the three studies reported in this book have several important implications for both test usefulness and test fairness. This chapter considers the usefulness and fairness of writing tests that allow bilingual dictionaries. As explained in Chapter 2, according to Bachman and Palmer (1996) the most important issue to consider when designing and developing a language test is the use for which the test is intended. By this argument it is important for language testers to articulate clearly what they understand the purpose of a particular test to be. That is, we need to be clear about exactly what type of information we want to find out from the test. Bachman and Palmer go on to suggest that test usefulness is an overriding consideration for quality control throughout the process of designing, developing, and using a particular language test. Bachman (2000) asserts that six qualities of test usefulness – authenticity, interactiveness, impact, practicality, reliability, and validity – provide “a mechanism for evaluating overall test usefulness, as determined by the test developer’s prioritization of the different individual qualities, as appropriate to a given testing situation” (p. 24). Kunnan (2000) argues, however, that test usefulness is of minimal value if the test is not fair. This chapter begins by considering test usefulness in the light of the empirical evidence I have presented. It goes on to consider test fairness and concludes with a consideration of further avenues for research.
Implications for test usefulness

Reliability and validity

In Chapter 2 I suggested that the two qualities of reliability and construct validity are concerned with the meaningfulness and accuracy of test scores in relation to
the measurement of a particular construct. A reliable writing test is one which yields comparable scores at different administrations, commensurate with different levels of test taker proficiency, or where performances are comparable across different forms of the same test. It is also a test in which we can know with a fair degree of certainty that the process of awarding the scores is adequate. In a writing examination we can put measures in place to enhance the reliability of the scores – such as designing a carefully worded and sufficiently detailed scoring rubric, training raters in its use, and measuring the extent to which raters agree about awarded scores. A construct valid writing test is one in which we can know with a fair degree of certainty that the awarded scores are adequate reflections of test takers’ writing proficiency. When it comes to considering whether or not dictionaries should be allowed in writing tests, there is a requirement to define the construct underlying the test. Given that a principal focus of much of our language teaching is to help students to communicate effectively in their language of study, and that effective communication is most commonly understood within a theoretical framework of communicative competence, the ‘ideal’ or most useful test of writing would aim to measure students’ writing proficiency within this framework. How, then, are we to define the construct of writing proficiency within a theoretical framework of communicative competence? The construct of writing proficiency may be defined quite simply as ‘writing performance’, or what the UK’s Assessment Reform Group refers to as the ‘assessment of learning’ – a snapshot of test takers’ writing proficiency at a particular point in time that aims to measure test takers’ knowledge of key components of the writing construct such as vocabulary and grammar. The ‘traditional’ timed writing test fits this understanding of the construct. 
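The rater-agreement measure mentioned above can be illustrated with a minimal sketch. The studies themselves do not specify which statistic was used; Cohen’s kappa, chosen here, is simply one common chance-corrected index of agreement between two raters, and the band scores below are invented for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' score lists.

    kappa = (observed agreement - expected agreement) / (1 - expected agreement),
    where expected agreement comes from the raters' marginal score distributions.
    """
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented band scores awarded by two raters to four scripts
print(cohens_kappa([1, 1, 2, 2], [1, 1, 2, 1]))  # 0.5
```

Values near 1 indicate strong agreement beyond chance; values near 0 indicate agreement no better than chance. Rater training of the kind described above aims to push this figure upwards.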
The construct may also be defined more broadly as ‘assessment for learning’, or as an authentic reflection of writing in the ‘real world’, including the strategies normally adopted to enhance the communicative effectiveness of the messages. Certainly coursework or portfolio options for assessing writing would fit more comfortably within this understanding of the construct, but the timed test is not necessarily precluded. By the more traditional definition of the construct we would probably outlaw the dictionary, but we do not necessarily have to. By the broader definition of the construct, advocating the use of bilingual dictionaries in assessments would be more acceptable. This would mean that writing tests that factor in or factor out the use of bilingual dictionaries are potentially equally valid. Their validity depends on the value system and the understanding of the construct that underlies the test. Nevertheless the value systems are in tension. As explained in Chapter 3, one way of determining the impact that allowing or disallowing a dictionary is having in a writing test is to set up a repeated measures
study whereby the same sets of test takers take two tests, one with a dictionary, and one without. We would then be able to compare test performance across the two conditions and use the findings to draw some conclusions about dictionaries in tests. This is exactly what the studies reported in this book aimed to do. One of the most important findings of these studies was that the bilingual dictionary made no statistically significant difference to test scores across the two conditions. This was so regardless of all other factors, including the level of test taker ability, the type of task, or the type of dictionary. There are two important consequences to this finding. Firstly, the dictionary did not appear to make any difference to the tests as reliable measures of the construct of writing proficiency as measured by the scoring rubric. There was consistency of measurement across the two conditions. Secondly, bearing in mind that, as Messick (1989) notes, all tests are subject to threats to their validity to some extent, the dictionary was not found to contribute substantially to the two threats to construct validity Messick identifies – construct under-representation or construct irrelevant ease or difficulty. That is, participants were able to perform equally well with or without the dictionary. In the test where they did not have the dictionary the construct (if it were understood to include the use of support resources) was not necessarily under-represented; in the test where they had the dictionary its availability did not lead to variances in performance that were irrelevant to the construct being tested (if, that is, the construct were understood to focus on language knowledge) – at least as indicated by test scores. Test score evidence would suggest that having the dictionary in the test did not lead to any unfair advantage. 
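The logic of this repeated-measures comparison can be sketched in code. The sketch below is purely illustrative: the scores, the number of test takers, and the hand-coded paired t-test are my own assumptions, not data or procedures from the studies reported in this book.

```python
import math
from statistics import mean, stdev

def paired_t(scores_a, scores_b):
    """Paired t-statistic for two score lists from the same test takers."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    sd = stdev(diffs)
    if sd == 0:
        return 0.0  # identical differences throughout: no variation to test
    return mean(diffs) / (sd / math.sqrt(len(diffs)))

# Hypothetical scores for six test takers under the two conditions
with_dict = [14, 12, 15, 13, 16, 14]
without_dict = [13, 13, 14, 14, 15, 15]

t = paired_t(with_dict, without_dict)
# Two-tailed critical value for df = 5 at alpha = .05 is 2.571
significant = abs(t) > 2.571
```

With these invented scores the t-statistic falls well below the critical value, so the difference between conditions would not be judged statistically significant, which mirrors the pattern of results described above.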
In addition to the test scores, there was also no evidence that errors substantially increased or decreased in one condition compared to the other. On this evidence it might be suggested that having the dictionary did not lead to any unfair disadvantage with regard to the number of errors: test takers were not necessarily being led astray by the dictionary any more than they may have been without it if they had mistakenly thought that what they were writing was correct. Therefore, for those in the assessment debates who are concerned that tests should exist for both the assessment of prior learning and to discriminate between test takers, and for those who as a consequence view the construct underlying the test as essentially the ‘snapshot’ measurement of writing performance, it can be demonstrated from these studies that the bilingual dictionary did not appear to interfere with these goals for these sets of test takers. The findings with regard to test scores suggest the following: tests that factor in the use of the bilingual dictionary are just as reliable as tests that disallow them, and the construct validity of the test, no matter how the construct is defined, is not threatened. The findings with regard to errors suggest that test takers are not
necessarily going to make more or fewer errors because they use a dictionary than the number or type of errors they might potentially make without one. The test score and error evidence are useful in hypothesizing that we can allow test takers to take dictionaries into the exam room, or alternatively outlaw them, and still expect the test takers to be able to perform comparably in terms of both scores and frequency of errors. (This is not to say that the extent of errors in writing is something about which we should not be concerned, but this is an issue for Chapter 9.) There are, however, four other qualities of test usefulness according to Bachman and Palmer’s (1996) framework, and it would be beneficial to consider the implications for ‘with dictionary’ tests when considered against these four qualities.
Authenticity

Authenticity, as noted in Chapter 2, is an important quality in language tests. It is a quality of the test’s relationship to what Bachman and Palmer (1996) refer to as ‘target language use’ or TLU domains – the spheres in which the test takers will actually need to use the language they have learnt in the real world. If in real world situations people often use bilingual dictionaries when operating in an L2, it would make sense to ensure that any tests of these users’ language proficiency include the dictionary as a relevant and authentic reflection of such TLU domains. Bachman and Palmer also suggest that authenticity may contribute to test takers’ perceptions of test relevance, and that this perceived relevance may help to promote a positive affective response to the test task, thereby helping test takers to perform at their best. The implication here is that if test takers do not perceive tests that allow the use of dictionaries as ‘authentic’, they may not be performing at their best in tests that allow their use. Conversely, if test takers see dictionary use as an authentic activity they may not perform at their best in tests that disallow dictionaries.

In contrast to Bachman and Palmer (1996), however, Lewkowicz (2000) argues that “[i]t is not known … how test takers perceive authenticity. It may be that authenticity is variably defined by the different stakeholders. It is also unclear whether the presence or absence of authenticity will affect test takers’ performance” (p. 44, my emphasis). Lewkowicz found, from a study among 72 Cantonese speakers of English involving two different test types and follow-up questionnaires, that few participants viewed authenticity as important. On this basis it seemed reasonable to conclude that authenticity was not a priority for the majority of respondents. Lewkowicz deduces that “[a]uthenticity may be of theoretical importance for language testers needing to ensure that they can generalize
from test to non-test situations, but not so important for other stakeholders in the testing process” (p. 60). There was little evidence in the three studies I have reported to suggest that authenticity was an important consideration for these participants. In the first study, where participants were asked to give their opinions about the use of dictionaries by responding to a series of statements, four out of six saw the dictionary as a useful and legitimate tool, but only two saw its use in a test as reflecting reallife language use. In the second study, authenticity was discussed in the interviews with participants. Although there was recognition that having some kind of resource while writing reflected real-world practice, it was not necessarily concluded that the bilingual dictionary was the best instrument to contribute to this. In the third study, there was no evidence at all to suggest that any of the participants had considered the use of the dictionary as a means of enhancing the authenticity of the test. In fact, across all the studies several participants thought that having a dictionary ‘disauthenticated’ the task as a test task. As a consequence, at least in terms of enhancing authenticity in the perception of the test takers, the dictionary was really making no difference. Its use in tests could surely therefore not be recommended on the basis of any argument around perceived authenticity in the eyes of the test takers, and the lack of perception of the authenticity of tests that allow dictionaries may not be adversely affecting performance. The consideration that dictionaries should be allowed because they make the tests more authentic as a reflection of real life is in fact more a theoretical consideration that may be more important to other stakeholders. 
It is noteworthy in this connection that research into allowing bilingual dictionaries into the GCSE in the UK (Asher, 1999; Barnes, Hunt, & Powell, 1999) revealed that strong arguments for dictionary use as an authentic practice were expressed by the teachers. This finding was confirmed in the interviews I carried out in 2000, reported in Chapter 1. If stakeholders other than the test takers conclude that dictionary use is authentic we should not necessarily be discouraged from advocating it in tests – but there may be little reason for doing so if the test takers do not perceive that having a dictionary enhances authenticity.
Interactiveness

Interactiveness was defined in Chapter 2 as a quality of the interface between the test taker and the test. For Bachman and Palmer (1996), it is “the extent and type of involvement of the test taker’s individual characteristics in accomplishing a test task” (p. 25). Thus it has to do with ‘accomplishment’ and it is influenced by three things:
1. the test taker’s language ability
2. topical knowledge
3. ‘affective schemata’ – “the affective or emotional correlates of topical knowledge” (Bachman & Palmer, 1996, p. 65).

With regard to language ability, or, more specifically, writing ability, we would need initially to refer back to the test scores, because the scores are arguably the clearest external measure of the test takers’ writing proficiency as defined by the model of communicative competence reflected in the scoring rubric. There was no evidence here to suggest that the dictionary made a difference to test takers’ ability to use the language. If, however, we were to view writing ability more broadly as the test takers’ ability to enhance the quality of their writing through appropriately chosen lexis, the evidence available from the Lexical Frequency Profiles (Chapter 4) would suggest that the task was somewhat more successfully accomplished with the dictionary. The evidence from the third study suggests that this was particularly so for the lower ability participants. The overall difference in lexical sophistication, although statistically significant in the third study, was, however, quite small. The extent of enhancement might not therefore be considered large enough to merit including the dictionary given other perceived limitations.

If topical knowledge is taken to be what Bachman (1990) defines as knowledge structures in long-term memory, the dictionary was not seen as helpful as an aid to tapping into that knowledge, but rather as a distraction from the participants’ ‘real’ knowledge, or as potentially making the learner lazy. There was also, from the second study, evidence to suggest that test takers were able to draw successfully on words, phrases, and formulaic sequences which they had learnt in the context of the course and for which they did not require the dictionary. It seemed that they were perfectly able to enhance the quality of their writing without having to use a dictionary.
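The idea behind the Lexical Frequency Profiles referred to above can be sketched as follows. The frequency bands and the sample text below are invented for illustration: real profiles use full 1,000-word frequency band lists, not the tiny sets shown here.

```python
def lexical_frequency_profile(tokens, bands):
    """Percentage of word tokens falling into each frequency band.

    bands: list of (label, set_of_words) pairs, most frequent band first;
    tokens matching no band are counted as 'off-list'.
    """
    counts = {label: 0 for label, _ in bands}
    counts["off-list"] = 0
    for tok in tokens:
        for label, words in bands:
            if tok in words:
                counts[label] += 1
                break
        else:
            counts["off-list"] += 1
    total = len(tokens)
    return {label: 100.0 * n / total for label, n in counts.items()}

# Tiny invented frequency bands and a four-word sample text
bands = [
    ("first 1000", {"the", "is", "good", "and"}),
    ("second 1000", {"essay", "topic"}),
]
profile = lexical_frequency_profile("the essay is good".split(), bands)
# profile: {"first 1000": 75.0, "second 1000": 25.0, "off-list": 0.0}
```

A higher share of tokens in the less frequent bands (or off-list) indicates greater lexical sophistication, which is the dimension on which the ‘with dictionary’ scripts showed a small advantage.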
If, however, topical knowledge also includes getting the information necessary to be able to write more successfully about a given topic – that is, the dictionary provides one potential source of ‘lexical knowledge’ on the topic – it is clear that the dictionary did indeed help a good number of participants to make up for gaps in knowledge, and this was seen as a clear advantage – the test takers were able on occasions to find the precise word for their contexts, which may have escaped them if they had not had the dictionary. They would then have been forced to rethink what they wanted to say. The increase in lexical sophistication measured by the LFPs might be relatively small, but the potential to make up for gaps in knowledge was considered as being important.
If affective schemata are thought of as the affective or emotional correlates of topical knowledge, and if topical knowledge does include getting the information needed to write more successfully about the topic, then there was some evidence to suggest that dictionary availability led to a more positive affective response for a good number of participants. Several test takers were positive about the benefit of being able to find the exact word for their meaning, so that they did not have to abandon an idea. This led to a feeling of greater confidence in the ‘with dictionary’ tasks because having the dictionary could “facilitate … the flexibility with which [the test taker] responds in a given context” (Bachman & Palmer, 1996, p. 65). In the second study, for example, one participant saw having the dictionary as a “comfort”, “moral support”, “just knowing it’s there.” On the other hand, some participants, particularly in the third study, appeared to worry that, because the dictionary was there, they should be checking everything (even what they thought they knew) to make sure it was right, or they should be using it to ensure that they used more sophisticated words. One advanced participant in the first study commented that the dictionary was a distraction that made her feel “more anxious” about what she wrote. Also, participants were not entirely successful in using the dictionary to find items relevant to the topic. There were often instances when they could not find the words they wanted, or where they chose a contextually inappropriate word. They may have felt better about having the dictionary, but in terms of actual use it was not always making a positive difference. Thus, in terms of positive affective response for the test takers in the testing situation, there was a definite leaning towards appreciating the dictionary as a check on or support for knowledge, but for some its availability was viewed negatively. 
Positive affect was also influenced by prior experience with the dictionary, suggesting that test takers may have a more positive emotional response to tasks with a dictionary if they are familiar with the dictionary – the ‘security blanket’ noted by Peter in the second study becomes more appreciated if it is well worn. The actual use of a dictionary may make little or no difference to test performance, but if it makes test takers feel better, especially when they gain experience with using it, there is some argument to suggest that its use in timed tests should be encouraged.

Bachman and Palmer (1996) also suggest that authenticity and interactiveness are linked. For Bachman and Palmer, a positive affective response to a test task is enhanced through a perception of its authenticity. And yet it has already been pointed out that for these test takers authenticity was not really an issue. Bearing in mind Lewkowicz’s (2000) suggestion that we simply do not know whether the presence or absence of authenticity will affect performance, there is no reason to conclude that the test takers’ interaction with the test would necessarily also be enhanced by dictionary access from the point of view of perceptions of authenticity.
Impact

In Chapter 2 I suggested that impact has to do with the wider implications of the test for the stakeholders, including test takers, teachers, parents, programs, and gatekeepers. As Bachman and Palmer (1996) note, impact can be seen as both a microlevel and a macrolevel quality. At the macrolevel we may be concerned with the positive or negative consequences for the stakeholders of a given test procedure (for example, being allowed entry into or being denied access to higher level courses and programs, or a particular career, on the basis of test performance). At the microlevel we may be concerned with the positive or negative consequences for the test takers as they actually carry out a given test.

In terms of the microlevel consequences of allowing the dictionary into the test, it was evident that there was both positive and negative impact on the participants. Impact was positive in terms of the increase in confidence and decrease in stress expressed by many, and this also contributed to more positive interactiveness with the task. This was not everyone’s experience, however. Some felt more stressed when the dictionary was there because of a perceived expectation that examiners would be looking for greater accuracy in the response. On balance, however, it would seem that the impact of having the dictionary in the examination was positive for more test takers than it was negative for others. In the third study, for example, two out of three participants stated that they felt more confident in the test when they had the dictionary. One clear dimension of negative impact was, however, a sense of time pressure. This brings us to the final quality articulated by Bachman and Palmer (1996) – it is important to consider whether dictionaries are practical in a time-constrained examination.
Practicality
The quality of practicality is concerned with the design and administration of a given test. It focuses on questions such as ‘are there sufficient resources – people, time, materials – to enable this test to be implemented successfully and efficiently?’ and ‘is this test likely to be more or less efficient than another type of test?’ The test takers in these tests provided clear evidence, across all three studies, that the greatest drawback to having a dictionary was the time taken to look up words. There was a sense, expressed in the second study, of ‘writing against the clock’, which appeared, for some, to make the testing process with the dictionary somewhat
more stressful. (In view of this constraint, several participants commented that something like a list of words and phrases as found in the Writing Study Guide might be swifter). To a very large extent the dictionary was seen as an impractical tool, given the constraints of time imposed by the test. This was apparently so regardless of the dictionary’s size. In terms of practicality, therefore, ‘with dictionary’ tests were often viewed as being less useful than tests without the dictionary. However, as suggested in Chapter 5, the week-by-week practice effect observed in the second study led, on average, to somewhat less use of the dictionary, potentially making the dictionary more practical to use as experience was gained. Also, two advanced participants in the first study actually stated that being able to use a dictionary helped them with the speed with which they were able to complete the task – impact on time was not seen negatively by all. This raises another practical dimension. When test takers are faced with a word they wish to use but do not know, and the dictionary is available, is it more expedient to look the word up, or is it more efficient to try to think around the problem? There is no easy answer here. As suggested in Chapter 6, whether or not to use a dictionary to solve a particular problem is dependent on factors such as test taker proficiency and ability to use the dictionary successfully (issues I shall discuss later in this chapter). A tension also emerges between impact (in terms of a greater sense of security and confidence) and practicality (the time taken to use the dictionary). In the first study, Jenny – one of the participants who used the dictionary the most – faced the conflict between a very real sense of increased confidence and an equally real sense of running out of time. 
In terms of confidence, her perception (as recorded in the two comparative short questionnaires) moved from ‘quite unconfident’ in the ‘without dictionary’ task to ‘extremely confident’ in the ‘with dictionary’ task. She noted, however, that using the dictionary wasted a lot of time. In the ‘with dictionary’ task she spent almost half the available time using the dictionary, and she ended up writing less. Having a dictionary raises some serious issues around test takers’ time management. Indeed, although Bishop (2000) had observed that almost half his respondents commented on the ‘reassurance factor’ of having the dictionary to hand, even if they did not use it, four out of ten recognized the need for good time management. As noted in Chapter 7, Bishop concludes that the effect of reducing stress may well justify allowing dictionaries in tests. He goes on, however, to add the proviso that this must be balanced by using the time available effectively.
Are ‘with dictionary’ tests useful?
If we weigh the evidence gleaned from these studies against the six qualities of test usefulness suggested by Bachman and Palmer (1996), we would have to conclude that we can make no clear decision for or against allowing dictionaries in writing tests. Having a dictionary arguably adds to the authenticity of the task when considered in relation to TLU domains, but authenticity was not a concern for the majority of these test takers. Positive interactiveness with the task, and positive impact on the test takers, were clear benefits of having the dictionary for many – they could often locate the words they wanted, and this made them feel more positive about being able to complete the task successfully. This perception was not universal, however. For some the dictionary was a distraction; for others it added to a sense of pressure that they would be expected to write with more sophistication. Furthermore, evidence from the texts themselves revealed many instances of positive dictionary use, but also many instances of poor word choice or inaccurate word use. Some test takers were able to enhance the quality of their writing without using the dictionary. For many the dictionary was an impractical tool – it simply took too long to use – and yet for some it helped them to complete the task efficiently. In terms of the measurement qualities of the test – reliability and validity – the dictionary did not appear to be a confounding variable. In the light of this range of evidence, we are faced with a dilemma if we want to reach a definitive conclusion about the usefulness of ‘with dictionary’ writing tests. Before we consider a conclusion, however, we also need to look at another dimension of the testing process.
Implications for test equity
These studies have suggested that test scores may remain stable whether writing with or without a dictionary. This is an important finding. However, in the context of a wider validation enquiry that takes into consideration the social consequences of the testing process, or the impact on the test takers within and beyond the test, and the implications of this for test usefulness, it has also been necessary to take into consideration several other findings of the studies. These other findings must inform a decision about whether or not to allow dictionaries. The empirical evidence also has implications for fairness – for whether writing tests that allow bilingual dictionaries are equitable for all test takers. I turn now to consider whether tests that allow dictionaries are fair, and the extent to which, based on the evidence, we might conclude that they are fairer than, or less fair than, non-dictionary tests.
What about issues of equity within the test? Do the actual uses the test takers made of the dictionaries tell us anything about fairness? With regard to issues of equity and impact within the test, Spolsky (2001) argues that “any test of general language proficiency and achievement that allows dictionary use is inevitably more unfair than one that does not” (p. 4, my emphasis). His argument is based in part on a recognition of the complex interaction between test takers, tasks, and dictionaries, alluded to in Chapter 5. What, then, led Spolsky to assert that ‘with dictionary’ tests are inevitably more unfair? In supporting his assertion, Spolsky makes three claims:
1. Test takers who are linguistically more proficient probably do not need the dictionary, although they may use it to confirm what they already know.
2. Those with less linguistic proficiency are harmed, because dictionary use impacts on time, and because they are misled by or misinterpret dictionary look-ups.
3. Any sense of greater confidence that such test takers feel when they are permitted to have a dictionary in an examination is a “misguided confidence” (p. 4).
A number of participant perceptions from the studies reported in this book, taken together with evidence of dictionary use from the test takers’ responses, support these assumptions. In the third study it appeared that a sense of confidence when the dictionary was available was related to the amount of prior experience test takers had had with the dictionary, with those with more experience feeling more confident more frequently than those with less experience. This level of confidence was not matched by being more successful with using the dictionary simply by virtue of having had some experience with a dictionary. Confidence was matched with experience, but not with success.
Also, it was evident, again from the third study, that stronger, more proficient test takers were able to use a dictionary more effectively than weaker ones – the upper ability participants were generally able to find correct words in the dictionary more successfully than their less able counterparts and were able to use the correct words accurately considerably more often. Upper ability participants were therefore less likely to be ‘misled’, either by making wrong word choices, or by failing to apply the look-up correctly. Success was matched with ability. Finally, although this difference was not statistically significant, 70% of the lower ability participants in the third study felt more confident with the dictionary, compared to 50% of the upper ability participants. To an extent the confidence expressed by the lower ability group was ‘misguided’ in that, in reality, their ability to use the dictionary was not as good as that of the more able group. Confidence was not matched with ability.
Spolsky’s (2001) assumptions are only supported to an extent, however. The following were also found:
• Many participants (regardless of ability) were able to increase the range and sophistication of their lexis through dictionary use. Indeed, the finding in the third study, discussed in Chapter 4, that ‘with dictionary’ LFPs were not significantly different from each other across ability levels, whereas ‘without dictionary’ profiles were, suggests that the dictionary appeared to help performances between the groups to become more evenly matched, with the lower ability group drawing on lexis from the dictionary that helped their lexical range to be more comparable to the range of the upper ability group. Based on this measure, the less proficient test takers did derive some benefit from having a dictionary.
• The LFP is, however, a means of measuring range of lexis, but not always accurate use of lexis. That is, errors in word use are either ignored or absorbed by the calculation. It is beyond dispute that the test takers did make errors with look-ups, and the extent of these errors was related to level of ability. The less proficient test takers were arguably ‘harmed’ because they were ‘misled’ more often than the more able test takers.
• However, just because errors occurred when the dictionary was used does not mean that errors would not have occurred in its absence, or that more errors were made with the dictionary than without it. Certainly, if the purpose of the test is to measure vocabulary acquisition in writing that is as error-free as possible, then the dictionary does indeed appear to be masking such a measure. There was, however, some evidence to suggest that a similar number and similar types of error were made regardless of test condition – on this basis errors made with look-ups would not necessarily be a reason to outlaw the bilingual dictionary. The test takers were no more ‘misled’ with a dictionary than they were without.
• Spolsky asserts that more proficient test takers probably do not need a dictionary. There was, however, evidence to suggest that the more able participants actually made good use of the dictionary to enhance the discourse quality of their writing. They may not have needed it, but they were able to use it to good effect.
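The point that the LFP measures range but not accuracy can be illustrated in outline. The following is a minimal, hypothetical sketch of how a Lexical Frequency Profile assigns the word types of a text to frequency bands (the tiny band lists here are invented stand-ins for the real 1,000- and 2,000-word frequency lists, and this is not the instrument used in the studies reported here). Note that a word looked up and then misused still counts towards its band, which is why errors are ‘absorbed by the calculation’.

```python
# Minimal sketch of a Lexical Frequency Profile.
# The band word lists are tiny, hypothetical stand-ins for the real
# frequency lists used in LFP research.

BANDS = {
    "first_1000": {"the", "house", "go", "good", "friend"},
    "second_1000": {"journey", "improve", "arrange"},
}

def lexical_frequency_profile(text: str) -> dict:
    """Return the proportion of word types falling into each band.

    Accuracy of use is deliberately ignored: a misused dictionary
    look-up still counts towards the profile of its band.
    """
    # Crude tokenization: strip common punctuation, lowercase, deduplicate.
    types = {w.strip(".,!?;:").lower() for w in text.split() if w.strip(".,!?;:")}
    profile = {}
    remaining = set(types)
    for band, words in BANDS.items():
        in_band = remaining & words
        profile[band] = len(in_band) / len(types)
        remaining -= in_band
    # Types outside all lists (often proper nouns or rarer lexis).
    profile["not_in_lists"] = len(remaining) / len(types)
    return profile

profile = lexical_frequency_profile("The friend and I arrange a good journey.")
# → {'first_1000': 0.375, 'second_1000': 0.25, 'not_in_lists': 0.375}
```

On a measure like this, a lower ability writer who pulls rarer words from a dictionary shifts weight out of the first band regardless of whether those words are used correctly – the behaviour the bullet points above describe.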
Test taker perceptions of fairness
In considering equity and impact beyond the test, Weigle (2002) argues that stakeholder perspectives would help in deciding whether a test or assessment served its function well and could be accepted as a useful and equitable social tool. With
regard to participant perceptions of fairness there was no evidence, across all studies, to suggest that ‘with dictionary’ tests were frequently perceived as being fairer than ‘without dictionary’ tests. One strongly held opinion that was apparent from all three studies was the concern that using a dictionary somehow obscured the ability of the test to measure what the test takers really knew. This revealed an understanding that the essential purpose of the writing test, at least as far as these groups of test takers were concerned, was ‘the assessment of learning’. This was connected in the minds of many with the notion of what constituted a ‘fair’ test. Indeed, in the third study two out of three participants were inclined to believe that writing tests that did not allow dictionaries were fairer. It is curious, however, that this finding does appear to contrast with that of other studies (Bishop, 2000; Idstein, 2003), or even with the test taker perceptions recorded in Chapter 1 from the interviews I conducted in 2000. The students I interviewed who had been allowed to use dictionaries in the examinations commented positively on the benefit of having the dictionary – it was “a big help”, a “life-saver”, “so much better” to have the dictionary. It was as if the participants in each of these earlier studies perceived an ‘official’ recognition of dictionary access that may have meant that they were more inclined to see ‘with dictionary’ tests as more helpful. By contrast, all of the participants in the three studies reported here came from backgrounds in which dictionaries had traditionally been outlawed from the tests. This was what they were familiar with. It was as if those who were used to taking examinations without dictionaries, and for whom this was the norm, had a tendency to see examinations with dictionaries as more unfair (because, for example, they obscured ‘real’ knowledge). 
The more conservative perspective of the participants in these studies, particularly in the third study, may therefore arise from an understanding that testing exists to measure their subject-matter acquisition and retention (Glaser, 1990) and dictionaries in such tests are not normally recognized. It is possible that more conservative perceptions towards dictionaries in examinations might change if dictionary use in tests became more common and accepted. It would seem that, in this respect, participants’ perceptions were to some extent shaped by their environment and past experiences – as was shown from the third study, those with less prior experience with the dictionary were more inclined to prefer ‘without dictionary’ examinations. Bearing in mind, however, that those with greater prior experience felt more confident with the dictionary more often than their less experienced counterparts, and placing this finding alongside the evidence from other studies, it may be hypothesized that the more test takers gain experience with using dictionaries in tests the more they will perceive dictionaries as a valid part of the test and the more positive their attitude towards their use is likely to be.
Are ‘with dictionary’ tests fair?
In response to the question of whether tests that allow dictionaries are fairer or less fair than other tests we are again left with an inconclusive answer. The weight of evidence from these test takers suggests that ‘with dictionary’ writing tests are more unfair, but this opinion was not held by everyone, and the wider body of evidence (including some perspectives from the first and second studies) suggests that perceptions of fairness are impacted by how the test takers view the test – what they understand the test is trying to measure. In light of the fact that dictionaries do not significantly interfere with test scores in intermediate tests of L2 writing (at least as found in these studies), we are really left with the following conclusion: the decision about whether bilingual dictionary use should be encouraged or discouraged in ‘snapshot’ writing tests really comes down to considering, in any given test situation, the advantages and disadvantages of each test condition in the light of the particular priorities that test setters wish to set. That is, where the positive aspects of dictionary availability (for example, psychological benefits to test takers or ability to make up for gaps in knowledge) are considered to be paramount, test setters may choose to include the bilingual dictionary. Where it is seen to be important to control for the negative aspects (for example, impracticality in timed test conditions), test setters may choose to exclude it. On the basis of the findings of the three studies reported here a definitive decision cannot be prescribed. This is probably a good thing.
It means that test setters, test designers, and those responsible for making decisions about test procedures (such as the Qualifications and Curriculum Authority in the UK) are free to weigh up the evidence for themselves, and to decide on a case-by-case basis whether, and when, to allow bilingual dictionaries, or, for that matter, other resources.
Avenues for further research
No piece of research, no matter how thorough or painstakingly executed, can be the end of the story, however. There are always limitations to any research, and, as Bogaards (1999) makes clear, there are always further avenues to explore. Bearing in mind the range of findings that these three studies revealed, where might we need to go from here in terms of research? With regard specifically to bilingual dictionary use in intermediate level timed tests, the findings raise several issues that require more research. The studies reported in this book focused on one genre of dictionary – a bilingual English/German dictionary in print form – and small sample sizes. As a start, research
with larger sample sizes that investigates the differential effect of different types of dictionary across different ability levels and on several task types would make up for a major limitation of the studies reported here. In addition, research across different skills (for example, reading) and different languages would help to build up a better picture of how different types of dictionary may affect outcomes differently, helping us to consider usefulness and fairness across a combination of task types in various contexts. It would also be valuable to investigate the effect of monolingual dictionaries. Although evidence suggests that monolingual dictionaries are nowhere near as popular as bilingual ones from the point of view of L2 learners, they are nevertheless an established resource, particularly for the most advanced learners. It would certainly be useful to take a close look at how they work, and do not work, in an examination context. Apart from the monolingual, two other main types of dictionary could be investigated in further research. These are bilingualized dictionaries and electronic dictionaries. Both these types of dictionary are relative newcomers as reference tools. As such their use is not yet widespread (Hartmann, 1994, 1999; Laufer & Hadar, 1997). As was pointed out in Chapter 2, a bilingualized dictionary may be defined as a combination of a learner’s monolingual dictionary, with the same number of entries and meanings for each entry, with a translation of the entry into the learner’s first language (Laufer & Hadar, 1997). Where the target word has several meanings each meaning is translated. Electronic dictionaries of various types are currently available either for use with personal computer based programs, in which case they can be accessed via the Internet or from specific dictionary software, or in the form of small hand-held portable devices – PEDs. PEDs frequently offer monolingual, bilingual, and bilingualized options.
They are gaining in popularity, particularly among East Asian students (Midlane, 2005; Nesi, forthcoming).
The bilingualized dictionary
Some research into bilingualized dictionaries has already taken place. One study (Laufer & Hadar, 1997) used three different types of dictionary – learners’ monolingual, bilingual, and bilingualized – and aimed to examine differences in the effectiveness of the three types in tests of both the comprehension of unknown words and the production of original sentences with these words. Results for the entire sample (n = 123) revealed that significantly higher scores across the two tests were almost always obtained when consulting the bilingualized dictionary. This dictionary type was significantly better than the
other two for comprehension, and significantly better than the monolingual for production. The researchers argue that the combination of the monolingual information, which provides a definition and examples, along with a translation into the learner’s first language, tends to produce the best results. Bilingualized dictionaries do provide more information than either monolingual or bilingual dictionaries. Users are also able to access explanations in the language with which they feel more comfortable, or in both languages if they wish. Such dictionaries may therefore provide contextual information that the bilingual lacks, and may help to lessen some context-related errors observed in the studies reported here. Further research into the use of bilingualized dictionaries while writing may be beneficial.
The electronic dictionary
Hartmann’s (1999) study revealed that although virtually all respondents owned a general (monolingual) dictionary, and more than three quarters owned a bilingual dictionary, just over a third owned an electronic dictionary, and of these, two out of three (22% of the total) owned such a dictionary on a personal computer, thereby making the dictionary impractical for anything other than personal computer-based work. Fewer than one in ten (7%) owned an electronic dictionary in the form of a PED. As noted in Chapter 2, however, the popularity of PEDs is growing substantially. Nesi (forthcoming) reports a small-scale survey among Chinese and Japanese students which revealed a strong preference for the use of bilingual PEDs. Portability was considered an important factor. Speed and ease of use have also been cited as reasons for their popularity (Stirling, 2005). It would therefore be useful to investigate the influence of PEDs on writing (or indeed other skills). If a PED is swifter to use than a paper dictionary, this may help to lessen the negative impact on time noted in the studies here.
Two other avenues of investigation
The studies also revealed two other dimensions that would benefit from research. Many participants expressed a preference for a more tailor-made resource, such as a list of useful words and phrases, over a dictionary. This list might become something like the ‘formula card’ which is frequently made available to those taking mathematics examinations and which provides essential formulae for carrying out mathematical calculations. Research into the extent
to which using the phrases provided in this type of resource helps test takers to enhance the quality of their writing could be carried out in a comparative study, similar to those described in this book. Comparison could be made between ‘with formulae’ and ‘without formulae’ tests. ‘With dictionary’ tests could be added as a third point of comparison. A final issue for further research is the impact of various types of resource on time taken to complete the test. It would be useful to investigate whether increasing the time available in the test makes a difference to the quality of test takers’ writing, the test scores, and the test takers’ perceptions. Through these types of research, we would be able to build up a better picture of the relative influence of different factors – dictionaries, lists, extra time – on writing, and thereby be in a better position to determine the relative impact of these different factors on the usefulness and equitability of the tests, including influence on the tests’ reliability and construct validity as accurate measures of the underlying construct, and the impact of different conditions on the test takers. Many investigations could be set up as ‘repeated measures’ studies similar to the ones reported here. Some may benefit from an ‘independent groups’ design, especially when more than two factors are being investigated together.
Conclusion
The studies presented in this book have focused on bilingual print dictionaries in timed writing tests. In terms of usefulness and equity a somewhat ambiguous picture has emerged. Future research may help to lessen this ambiguity somewhat. Nevertheless, the studies have raised several important issues around how test takers use and potentially misuse dictionaries. The fact also remains that users of an L2 do regularly use bilingual dictionaries in domains beyond tests. Furthermore, within several current systems for assessing L2 proficiency coursework or portfolio options exist which allow the use of dictionaries or other resources. Considering the evidence around the uses to which these test takers put dictionaries raises an important final consideration – one that impacts on language teaching: in what ways can we help our students to get the best out of bilingual dictionaries, to maximize their opportunities and to minimize the liabilities? This is the focus of the final chapter.
Chapter 9
Maximizing the opportunity and minimizing the liability
The opening chapter of this book posed a very important question – what is the problem with dictionaries? In that chapter I pointed out that dictionaries were considered by many people to be very important in the process of both learning and using a foreign language. Kirkness (2004) describes the dictionary as an essential and principal source of information on language, and Ilson (1985) portrays it as the most successful and significant book about language. For learners of an L2 several benefits of the bilingual dictionary in particular were identified. Having access to bilingual dictionaries is authentic because people in real world situations frequently use them to find out information. Bilingual dictionaries help learners if they get stuck because they can find information to help them. Nevertheless, these dictionaries have sometimes been framed in villainous terms. L2 learners do not always know how to use them correctly. Dictionary misuse leads them astray and causes them to make errors which can sometimes be quite major – so-called ‘dictionary howlers’. The question of whether students of an L2 should be allowed to use dictionaries in assessments of their language proficiency has, for a good many years, been a hotly debated and controversial topic (Bensoussan, 1983). The on-going controversies have sometimes had serious repercussions for language learners, such as the removal of bilingual dictionaries from high stakes examinations in the UK only five years after they had been introduced. The issue of allowing or disallowing bilingual dictionaries in writing tests is, in my view, a very important one for all those involved in language teaching and testing. The studies I carried out were designed to extend our knowledge of what actually happens when students use these dictionaries when writing. The findings have relevance not only for reliability and validity of measurement, but also for the wider social consequences for the test takers. 
As Bachman (2000) argues: Validity and fairness are issues that are at the heart of how we define ourselves as professionals, not only as language testers, but also as applied linguists. … Our increased sensitivity to ethical issues and the diversity and sophistication of our approaches to research mean that we can now research more than test scores, and we can go beyond speculation about their meaning and use. (p. 25)
If we are concerned with fairness we need to take a close look at any aspects of a testing procedure that might bias the test against some test takers. The use of bilingual dictionaries in tests is a case in point. This book has focused on the timed writing test because it is a well established and widely used means of assessing L2 learners’ writing, despite objections that may be raised regarding, for example, its minimal authenticity or limited view of a given test taker’s writing skills. The timed writing test, due chiefly to its practicality and largely accepted reliability and validity, is no doubt here to stay for the foreseeable future. It is therefore incumbent on test setters to ensure that its use is as fair as possible to all test takers – that it aims, as far as possible, to ‘bias for best’, as Swain (1985) puts it, in a way that also allows us to get an accurate measure of the test takers’ writing abilities. Also, for those students whose L2 writing skills are assessed through coursework or portfolio options, dictionaries are frequently allowed, and it is therefore necessary to investigate how test takers actually use dictionaries, and to find ways, based on such empirical evidence, to help students to get the best out of dictionaries. This is another way of ‘biasing for best’. The evidence presented from the three empirical studies would suggest that timed writing tests can continue to be used without dictionaries, and that this does not necessarily compromise them as valid and fair tests of language learners’ writing proficiency. Having said that, the use of coursework and portfolio options indicates that in many learning contexts today there have been attempts to include several different types of assessment, using assessment opportunities that capitalize on the benefits of both ‘the assessment of learning’ and ‘assessment for learning’.
In contexts where both assessment that measures learning and assessment that supports learning are perceived as being important the studies have shown that allowing a bilingual dictionary into static assessments such as timed tests may be a means of supporting the test takers without compromising the measurement. Broadfoot and Black (2004) argue that “[i]f formative assessment [assessment for learning] is to prosper, initiatives aimed at supporting a positive link between formative and summative work [assessment of learning] are sorely needed” (p. 17). I suggest that allowing bilingual dictionaries into timed tests potentially offers a positive link between the summative assessment of learning and several dimensions of assessment for learning. In contexts where allowing dictionaries in timed writing tests is being seriously considered, there are important factors to bear in mind. This chapter focuses on the practical and pedagogical implications of the research findings. It begins with the implications for test administration, and concludes with the implications for the classroom.
What does all this mean for timed writing tests?
In the last chapter I considered what the evidence available from the three studies might tell us about the usefulness and fairness of allowing dictionaries into writing tests. The following conclusions were drawn:
1. The authenticity of allowing dictionaries into tests was not an important consideration for the majority of the participants. Any argument that dictionaries should therefore be included in such tests on the basis that this makes them more authentic reflections of a TLU domain may certainly be lost on the vast majority of test takers and therefore does not provide a convincing reason to include them. This is not to say that, from this perspective, dictionary use itself becomes less authentic; it is rather that authenticity appears not to be so essential for the test takers.
2. Interactiveness with the test may be enhanced if the dictionary is seen as a means of supporting topical knowledge (by, for example, providing items of lexis that test takers had forgotten, were unsure about, or did not know), and this in turn may lead to a more positive affective response. On the other hand, the look-ups of a number of participants did not in practice make up for gaps in knowledge because the look-ups were incorrect, and some participants actually found having the dictionary with them to be a source of additional stress. Greater positive interactiveness in ‘with dictionary’ tests was not, therefore, universal.
3. Impact on the participants within the test was again not universally positive. Many, but by no means all, participants felt more confident when the dictionary was there. Some felt less confident or more under stress, both because they perceived that they were expected to ‘do better’ with the dictionary, and because it often took too long to use.
4. Time taken to use the dictionary was the greatest perceived hindrance to the practicality of the dictionary in the timed test.
5. With regard to the two measurement qualities of reliability and validity, the test scores revealed that writing performance at this level was comparable across the two test conditions. This finding has significant implications. Regardless of how the construct underlying the test is viewed, it appeared that the dictionary did not threaten the reliability or the construct validity of the test, at least as judged by test scores.
For those involved in designing tests or implementing test procedures whether at the local level (in individual schools) or more globally (for example, as part
of centralized assessment policies), these considerations for test usefulness, and their implications for fairness, are important. Based on the findings from all three studies, what are some of the practical lessons that may help to minimize the potential negative impact of dictionary availability, and maximize the benefits, in testing contexts that allow them?
The test task

All test tasks should be made as clear as possible (as generally appeared to be the case in these studies), so that the test takers do not need to spend unnecessary time having to look up words. If it is clear from the outset what the test takers are being expected to do, one burden of the testing situation is lessened.

Time to write the response

There are two ways in which the negative impact of dictionary use on time may be taken into account. First of all we might consider extending the length of the examination. On average, dictionary look-ups took up around 10 minutes of the total examination time of 50 minutes – or 20% of the available time. Extending the examination by this amount may therefore give the test takers a less pressured opportunity to use the dictionary. This has minimal implications for the practicality of test administration, but may make a positive difference both to test takers’ perceptions and to their ability to use the dictionary effectively. A recommendation of extra time has also been made by others (Hurman & Tall, 1998; Rivera & Stansfield, 1998). In the third study, the number of look-ups was used in a statistical calculation to determine whether there was any relationship between frequency of dictionary use and test scores (East, 2007). No relationship was found: students who made more look-ups did not thereby improve their performance. There may therefore be no added value in allowing test takers more than an additional 20%; test takers are unlikely to benefit from looking up too much. But will the extra time make a difference? Parkinson’s (1958) somewhat satirical observation that work expands to fill the time available for its completion may mean that test takers do not perceive the benefit of the additional time and are equally likely to find that the dictionary impacts negatively on their time. As noted in Chapter 8, impact on time is an area where further research is needed.
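The kind of check reported in East (2007) can be illustrated in outline. The following sketch is purely hypothetical: the data are invented and a plain Pearson correlation is assumed for illustration, not taken from the study's actual statistical procedure.

```python
# Illustrative sketch only: hypothetical look-up counts and test scores.
# A correlation near zero would mirror the finding that more look-ups
# did not translate into higher scores.

def pearson_r(xs, ys):
    """Compute Pearson's r for two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

lookups = [3, 12, 7, 20, 5, 15, 9, 11]      # look-ups per test taker (invented)
scores = [14, 15, 13, 14, 16, 13, 15, 14]   # corresponding test scores (invented)

r = pearson_r(lookups, scores)
print(round(r, 2))
```

The point of the sketch is simply the shape of the analysis: pair each test taker's look-up frequency with their score and ask whether the two covary.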
Because for most participants the dictionary was not required to understand the test task, we might consider an alternative modification. One observed problem with having the dictionary was that it was often seen as a distraction. Bearing this in mind, the length of the examination may be maintained at, say,
50 minutes, during which time dictionaries are not available, and a subsequent 10 minutes may be allowed at the end for checking the response with a dictionary. There are, again, minimal practical considerations, and this approach controls time for both writing and checking. It would also have the advantage of enabling test takers to focus on the writing without the potential distraction of the dictionary, while knowing that they will have an opportunity to check their work at the end of the test. This may also mean that test takers would be less inclined to view the dictionary as ‘cheating’ or obscuring their ‘real’ knowledge, and more inclined to see it as a final check on their work. An objection that dictionary use in the final 10 minutes may be seen as less authentic can be countered because the test takers, in general, did not perceive authenticity as an issue. This modification may, however, create a more pressured and frenetic final 10 minutes, with some test takers feeling, for example, that they want to check everything. It may also interrupt the flow of the writing because participants may be tempted to leave gaps they plan to fill in later and would have to keep a record of what they wanted to check.
The type of dictionary

Although these studies do not enable us to draw any definitive conclusions about the effect of different dictionaries, it was evident from the first two studies that the size of the Collins German Dictionary was off-putting for a number of participants, and it was consequently seen negatively in terms of the time taken to use it. It was also sometimes viewed negatively because some items participants tried to look up could not be found. In many of these cases it would seem unlikely, given the comprehensiveness of the dictionary, that the word was not there; more likely, the dictionary’s size made the item hard to locate. On the other hand, evidence from the third study suggests that smaller dictionaries were sometimes perceived as limiting in terms of the items they contained, and the time taken to use these smaller dictionaries was also noted as a clear disadvantage. There was some evidence that some test takers could use dictionaries faster than others (as was the case between the intermediate and advanced participants in the first study). That some test takers may be swifter than others is not necessarily unfair: some test takers work more swiftly anyway, and this is a factor that cannot be controlled. There was, however, no clear evidence to suggest that it was more efficient to use smaller dictionaries rather than the larger one, or that (in comparison with a larger dictionary) the smaller dictionaries frequently lacked information that was seen as necessary for the completion of the tasks. We might conclude on this basis that different dictionaries do not make any meaningful difference to the test takers in terms of actual use. On the other hand,
Hurman and Tall (1998) suggest that different dictionaries do have differential impacts. If this is genuinely the case, the size and comprehensiveness of different dictionaries have implications for equal opportunities. Allowing different dictionaries may distort a ‘level playing field’ and may thereby lead to other unfair advantages or disadvantages (Butler & Stevens, 1997). This may be particularly so if some dictionaries contain specific information relevant to the construct being tested – such as how to set out a letter – that is being denied to those without this dictionary. That dictionaries are different may lead to inequality of opportunity, and therefore potential unfairness. This is an issue that requires careful consideration. To legislate, however, that only one dictionary should be allowed in a high stakes large scale test has considerable practicality and cost implications: who would make the decision about the ‘best’ dictionary, and on what evidence would the decision be made? Who would pay for the dictionary? How would this be governed or enforced? Furthermore, any decisions about who used which dictionary would need to be defended rigorously to limit unfairness.
Other resources

Given the constraints around different types of dictionary, there may be merit in considering alternative resources. With regard to choice of resource in an examination context, many participants expressed a preference for something like a list of useful words and phrases. In the light of this, careful thought needs to be given to whether such a resource should be given to test takers in assessments instead of a dictionary. Examples of the types of phrases included in the Writing Study Guide, several of which had been taken or adapted from the Collins German Dictionary, are illustrated in Figure 9.1. In high stakes large scale tests, this type of resource may be relatively easy and cost effective to produce. Also, as pointed out in Chapter 8, these resources are analogous to formula cards used in some mathematics examinations, and they would be relatively easy to control so that all test takers have the same resource. This would potentially add to the validity and fairness of the test in terms of ensuring equal opportunity.

Encouraging other strategies

The findings of the studies indicate that, on the whole, dictionary use was the preferred strategy for making up for gaps in knowledge when it was available. However, there was also evidence to suggest that, given thorough preparation, dictionaries or other resources were not necessarily required, and that it was possible to enhance the rhetorical effect of writing with the use of formulaic sequences that had been committed to memory. This strategy may save test takers time. It may be beneficial to make test takers more explicitly aware of other strategies to
PHRASES WITH ‘DASS’ AND ‘OB’
Lots of useful expressions go with dass (or sometimes with ob). Remember that dass and ob will send the next verb to the end of the clause . . .

Ich schlage vor, dass – I suggest that
Ich würde vorschlagen, dass – I would suggest that
Ich nehme an, dass – I expect that
Ich erinnere mich daran, dass – I remember that
Mir scheint, dass – It seems to me that
Ich bin davon überzeugt, dass – I am convinced (by the fact) that
Ich kann mir gut vorstellen, dass – I can well imagine that
Ich habe den Eindruck, dass – I have the impression that
Wir können mit Sicherheit sagen, dass – We can say for certain that
Ich bezweifle, dass – I doubt that
Ich bin nicht überzeugt, dass – I’m not convinced that
Es ist höchst unwahrscheinlich, dass – It is highly improbable that
Das Problem ist, dass – The problem is that
Ich frage mich, ob – I wonder if
Remember also that some of these can be useful in the past tense (particularly in letter-writing or story-telling). Note that in these examples the perfect tense has been used where appropriate, because the perfect tense is normally used to communicate ideas in the past in letter-writing:

Ich habe vorgeschlagen, dass – I suggested that
Ich habe angenommen, dass – I expected that
Ich habe mich daran erinnert, dass – I remembered that
Ich war davon überzeugt, dass – I was convinced (by the fact) that
Ich konnte mir gut vorstellen, dass – I could well imagine that
Ich hatte den Eindruck, dass – I had the impression that
Ich war nicht überzeugt, dass – I wasn’t convinced that
Es war höchst unwahrscheinlich, dass – It was highly improbable that
Ich habe mich gefragt, ob – I wondered if
EXPRESSING YOUR OPINION

Ich denke, – I think
Ich meine, – I think (in my opinion)
Ich glaube, – I think (I believe)
Ich bin der Meinung, dass – I am of the opinion that
Meiner Meinung nach – In my opinion
Figure 9.1 Examples of phrases from the Writing Study Guide.
circumvent difficulties in expression. As Hurman and Tall (1998) observe, test takers should be encouraged to build up a substantial vocabulary base and not see the dictionary as something that negates the need to learn words and phrases.
Practice in time-constrained contexts

One further consideration, which forms a bridge between what happens in the test and what goes on in the classroom, is to provide adequate prior training to help to ensure that all test takers use the resource they are to be allowed as well as possible, regardless of level of ability. This is one way of leveling the playing field, especially in situations where it is impractical or impossible to control for the type of dictionary. In the second study there was a perception that participants became more comfortable using the dictionary as experience in the tests accumulated week by week. In the third study it was found that lack of experience was a factor in being less inclined to prefer having the dictionary in tests, and greater experience led to a greater sense of confidence when it was available. By contrast, in the third study prior experience was not shown to be a significant factor in improving performance in the ‘with dictionary’ tests. In this connection it should be noted that these participants would have had no prior experience of using a dictionary in testing contexts. All this would suggest that when it comes to using a dictionary in the test, experience gained in time-constrained contexts (such as mock tests) is likely to be more beneficial than experience gained outside the classroom or in normal classroom activities. One important dimension of dictionary training, which reflects a recommendation of Rivera and Stansfield (1998), would therefore be to give test takers ample opportunity to practice using the dictionary to be allowed in tests under actual test conditions.

What does all this mean for the classroom?

The findings of the three studies also revealed a good deal that has practical implications for language learners as they engage in the everyday process of language learning.
A final important task for this book is to take a look at how the lessons learnt from these studies can help to improve students’ dictionary use, whether in tests, in the classroom, or in contexts beyond the classroom. The studies identified two different broad categories of dictionary use error: errors of word choice, and errors of word form. It is important to explore ways in which both these errors can be minimized. It is useful to begin with an exploration of word form issues, because accurate application of grammatical principles provides the structure and the foundation upon which language learners need to build if they are to be successful in writing in an L2, whether they use a dictionary or not.
Avoiding errors of form

With regard to form, East (2005a) focuses attention on several specific problems that confront users of German as a foreign language, particularly when using dictionaries. There are problems with knowing:

• that all nouns begin with a capital letter;
• that genders and plurals of nouns should be carefully checked;
• that verbs have definite inflectional rules and simply writing the infinitive often will not do;
• that adjectives require endings to specify gender, number, and case;
• that German has complex word order rules which need to be adhered to.
Errors of form need to be confronted in the classroom as part and parcel of the language learning experience. That is, apart from helping students to recognize that words they might look up in the dictionary must be subject to rules of form, students must also be helped to acquire adequate working knowledge of how words work together in contexts. The Writing Study Guide which students in the second study were required to use, and use of which was encouraged for participants in the third study, was designed in part to help intermediate level learners to focus on accuracy and to enhance the quality of their writing in a sequential way. It was envisaged as a means of helping intermediate level writers of German to make the conceptual leap from ‘short, simple notes’ to ‘simple connected text’ to ‘clear, detailed text’ as articulated in the CEF (see Appendix 1). The guide itself is a comprehensive 35-page document which provides generic information about the skill of writing, its importance for L2 learners, and principles to be followed to develop the skill. The principles are focused around what I have termed ‘the central ‘c’ concepts’:
• Communication
• Construction
• Content
• Coherence
• Cohesion
• Correctness.
The guide has two practical foci, each of which is designed to scaffold the learner and model a developmental understanding of writing. One practical dimension, as previously stated, is the provision of comprehensive lists of different lexical items, in both German and English, from which students are free to choose items appropriate to their writing contexts. The types of item included are:
• adverbs that attach to adjectives (such as außerordentlich (extraordinarily) and äußerst (extremely));
• other adverbs (such as alles in allem (all in all) and anscheinend (apparently));
• conjunctions (both coordinating and subordinating);
• formulaic sequences that are useful in a variety of contexts and in different genres of writing – letter-writing, discursive essays, and so on. (Examples of these sequences have been given earlier in this chapter.)
The second practical aspect is a series of developmental stages through which learners are guided so that they can enhance the quality of basic level writing. Learners are initially provided with a straightforward communicative activity – ‘you are required to write a letter …’. Five initial stages provide carefully structured steps, beginning with five basic sentences which are modified in different ways (such as adding adjectives and adverbs, and using conjunctions). The learners work within a controlled framework. The emphasis is on choosing appropriate items of lexis from the prescribed lists and learning how to use them accurately. Developing grammatical competence is key to the writing during these stages. In Stages 6 and 7 learners are given more freedom to reshape the work to create a cohesive, coherent text independently. The emphasis is on allowing students to take ownership of the writing, to move beyond the ‘scaffold’ to something they can do on their own. Sociolinguistic and discourse competence are being developed in these final two stages. In order to help students to see the importance of correctness, the guide uses the final text that has resulted from this seven-stage process to highlight the issues of which writers of German need to be mindful, including correctness of gender, adjectival declension, verb conjugation, word order, and cases. It is envisaged that once students are familiar and confident with this still somewhat basic level of writing they will be able to progress, using similar steps and the formulaic sequences suggested in the guide, to the more complex work demanded in higher-order intermediate level writing tasks.
Avoiding errors of meaning

Apart from errors of form, a myriad of errors of meaning, attributable to dictionary misuse, was observed in the writing analyzed in the studies. There is therefore a real need to train language learners in dictionary use in order, as Asher (1999), for example, puts it, “to maximise their chances of using this resource to best effect” (p. 61). Language learners need to be given ample opportunity to learn how to make and then apply look-ups as successfully as possible. Ideally this training will be in conjunction with on-going practice with the language and its structures, and will be an integrated part of language acquisition practices. In this way accuracy and effective communication will go hand-in-hand. The three studies highlighted several instances of look-up error, of which the following were prevalent:

• an assumed ‘word-for-word’ relationship between German and English which led to literal translations;
• inability to recognize that one word in English (or in German) might have several different equivalents in German (or English) and thereby choosing a contextually inappropriate word;
• taking the first item given in a dictionary entry without taking the trouble to read through all the items. This problem was compounded by students’ apparent inability to tell the difference between the different items, and to work out which was the most contextually appropriate.
Language learners need to be made more aware of these particular dictionary use errors. Training may need to be carried out using a variety of tasks, and even a variety of dictionaries, to ‘maximize chances’ and to learn, through experience, what works best for whom. In this connection, several of the exercises in the Dictionary Skills Guide, which the participants in the second study used, may be particularly beneficial. A number of the exercises in this guide drew on examples from the Collins German Dictionary, because this was the dictionary the participants in this study were to be allowed in the tests. Typical examples of the types of exercises from this guide are reproduced in Figures 9.2 to 9.4. The guide also alerted the students to potentially more serious errors and how to avoid them (Figure 9.5). If activities similar to those available in the Writing Study Guide and the Dictionary Skills Guide were to become established in L2 classrooms, whatever the language, it is to be hoped that L2 learners will learn how to make better, more sophisticated, more accurate, and more efficient use of bilingual dictionaries, and will also be open to other ways in which they might enhance the effectiveness of their writing. (The extent to which this is so is an issue for further research.)
LOOKING AT DICTIONARY ENTRIES

Here’s a typical dictionary entry (taken from the Collins German Dictionary):

[dictionary entry for Buch not reproduced]
The dictionary entry gives us a lot of information. It tells us:

• That Buch is a neuter noun – that means it goes with das
• That, in the plural, it’s written Bücher (books)
• That in the genitive it’s written des Buches (of the book)
• That its first or main meaning – (a) – is book
• That it can also mean volume (in the sense of a volume of a series of books)
• That it can be used with other expressions:
  • Er redet wie ein Buch – he never stops talking
  • Ein Gentleman, wie er im Buche steht – a perfect example of a gentleman
  • Das Buch der Bücher – the Book of Books
• That its secondary meaning – (b) – is books or accounts – and here it’s used in the plural – Bücher – usu pl means usually plural
Figure 9.2 An exploration of ‘Buch’.
A final word

I began this book with the voices of test takers. In the first chapter I looked at the perspectives of those who had completed, or were in the process of completing, the A level in German either with or without a bilingual dictionary. The students were generally positive about being allowed to use a dictionary in written examinations, particularly those who had had prior experience with using a dictionary in examination contexts. These students were consequently anxious and perturbed at the prospect that this resource would be removed. One student who had used the dictionary in the GCSE but was not able to use it at A level was quite unequivocal about how she felt. In essence this is what she had to say:

Not having the dictionary seems really daunting right now – it is a big help to have the dictionary, it’s a burden not to. Having the dictionary is useful because otherwise some students might get totally stressed, and it puts you at a bit of a dead end
☑ EXERCISE FOUR

Look at the dictionary entry below and answer the questions about it (bear in mind that you don’t have to understand everything about the entry to be able to use it effectively). It’s taken again from the Collins German Dictionary:

Answer these questions BASED ON THE DICTIONARY ENTRY ABOVE:
1. Is Kind a noun or a verb?
2. Is Kind masculine, feminine or neuter? How do you know?
3. How would you say children? How do you know that?
4. How would you say ‘of the child’ using the genitive? How do you know that?
5. What does von Kind auf mean?
6. What does sie kriegt ein Kind mean?
7. How would you say ‘she enjoys life’ in German?
8. How would you say ‘he’s a big baby’ in German?
9. How would you say ‘we have to call it something’ in German?
10. What does ‘Los, Kinder’ mean?
Figure 9.3 Exercise four from the Dictionary Skills Guide.

if you have no inspiration or you think ‘oh, what’s that word?’ and you can never remember it in German – if the dictionary’s there you can quickly flick through it. Not having the dictionary is going to make it more difficult, definitely.
And yet so many of the participants in the studies reported in this book, who without exception were not used to having dictionaries in examinations at all, were more inclined to see the dictionary in negative terms, even though several recognized that its availability could be helpful.
☑ EXERCISE SIX

Try working with these entries from the English-German section of the Collins German Dictionary:

Answer these questions BASED ON THE DICTIONARY ENTRIES ABOVE:
1. What type of word is doubt [1] – noun, verb, adjective, adverb?
2. What type of word is doubt [2] – noun, verb, adjective, adverb?
3. Is the German noun equivalent of doubt masculine, feminine or neuter?
4. How would you say in German ‘His honesty is in doubt’?
5. How would you say in German ‘It is still in doubt’?
6. What does ‘ich bezweifle, dass er kommt’ mean?
7. Look at the entry doubt [2] – What type of word is bezweifeln?
8. How would you say in German ‘I’m sorry I doubted you’?
9. What does ‘das bezweifle ich sehr’ mean?
10. Can you work out what ‘Ich habe Zweifel, dass …’ might mean?
Figure 9.4 Exercise six from the Dictionary Skills Guide.
One final practical issue for the classroom is this: it may be beneficial to spend some time helping learners of an L2 to recognize that bilingual dictionaries are legitimate resources – there to assist them in the process of both learning and using the L2. Provided that students can learn to use such dictionaries effectively, there would seem to be no reason to outlaw them, at least in the process of learning. Nevertheless, the use of dictionaries in tests might remain a more controversial issue for some time to come. I hope that this book has helped to make the root causes of the controversy more understandable, and that I have alerted those who have a stake in language learning, teaching, and assessment to both the benefits and drawbacks of bilingual dictionaries and to means of improving their
CHOOSING THE RIGHT WORD!

This can be the biggest problem with using a dictionary – and often leads to some very amusing ‘examination howlers’ of which students are largely unaware. Here are some typical examples of where dictionary use can go vastly wrong:

English | German (incorrect) | German (correct)
“I will not do that!” | “Ich Testament das nicht machen!” | “Ich will das nicht machen!”
He left the house | Er links das Haus | Er verliess das Haus
That was a fine wine! | Das war eine Strafe Wein! | Das war ein feiner Wein!
We must book a room | Wir müssen Buch ein Zimmer | Wir müssen ein Zimmer reservieren lassen
The room is now cleaner | Das Zimmer ist jetzt Reinemachefrau | Das Zimmer ist jetzt sauberer
We saw the film yesterday | Wir Säge den Film gestern | Wir haben den Film gestern gesehen
He’s a talent scout | Er ist Talent Pfadfinder | Er ist Talentsucher
It’s just past the library | Es ist gerecht vergangen die Bibliothek | Es ist kurz nach der Bibliothek
How do you spell that? | Wie Zauber man das? | Wie buchstabiert man das?
HOW MIGHT THESE MISTAKES HAVE OCCURRED?
1. Confused ‘will’ as a noun (last will and testament) with ‘will’ (‘want to’) as a verb.
2. Confused ‘left’ as an adverb of place – ‘on the left’ – with ‘left’ as the past tense of ‘leave’.
3. Confused ‘fine’ as a noun (a fine of $100) with ‘fine’ as an adjective (that’s fine).
4. Confused ‘book’ as a noun with ‘book’ as a verb (to book something).
5. Confused ‘cleaner’ as a noun (she’s a cleaner) with ‘cleaner’ as an adjective of comparison (it’s cleaner / more clean).
6. Confused ‘saw’ as a noun (it’s a sharp saw) with ‘saw’ as the past tense of ‘see’.
7. Confused the two different German nouns that translate the one English ‘scout’.
8. Confused the different words that translate ‘just’ and ‘past’ – ‘fair’ versus ‘immediately’, and ‘past in time’ versus ‘past in place’.
9. Confused ‘spell’ as a noun (cast a spell) with ‘spell’ as a verb (to spell).

So – to use a dictionary successfully really does require you to know, as far as possible, how different words work in sentences – nouns, verbs, adjectives, present, past, and so on.
Figure 9.5 Alerting students to ‘dictionary howlers’.
use. I close this book with the opinions of two participants from my investigation, whose voices articulate clearly the essence of the controversy and the reason why this is an issue which requires on-going debate:

During all the time at school we have never been allowed dictionaries, and you should learn vocabulary and remember it. That is fairer. (Participant in the third study)

I’ve been taught never to use the dictionary in an exam, and now, after these three weeks, it’s like ‘oh that’s nice, oh there’s one word, I really don’t know it, and the dictionary’s there! Oh, I can look it up’, and that was a great feeling. (Sandra, second study)
References
Amritavalli, R. (1999). Dictionaries are unpredictable. ELT Journal, 53(4), 262–269.
Ard, J. (1982). The use of bilingual dictionaries by ESL students while writing. ITL Review of Applied Linguistics, 58, 1–27.
ARG. (2002a). Assessment for learning: 10 principles. Retrieved October 26, 2004, from http://www.assessment-reform-group.org.uk.
ARG. (2002b). Assessment for learning: Beyond the black box. Cambridge, UK: University of Cambridge Faculty of Education.
ARG. (2002c). Testing, motivation and learning. Cambridge, UK: University of Cambridge Faculty of Education.
Asher, C. (1999). Using dictionaries in the GCSE examination of modern foreign languages: Teachers’ views and learners’ performance. Studies in Modern Languages Education, 7, 59–67.
Atkins, B. T. S. (1985). Monolingual and bilingual learners’ dictionaries: A comparison. In R. Ilson (Ed.), Dictionaries, lexicography and language learning (pp. 15–24). Oxford, UK: Pergamon Press.
Atkins, B. T. S., & Varantola, K. (1998a). Language learners using dictionaries: The final report on the EURALEX/AILA research project on dictionary use. In B. T. S. Atkins (Ed.), Using dictionaries: Studies of dictionary use by language learners. Lexicographica, Series Maior, 88 (pp. 21–81). Tübingen, Germany: Max Niemeyer Verlag.
Atkins, B. T. S., & Varantola, K. (1998b). Monitoring dictionary use. In B. T. S. Atkins (Ed.), Using dictionaries: Studies of dictionary use by language learners. Lexicographica, Series Maior, 88 (pp. 83–122). Tübingen, Germany: Max Niemeyer Verlag.
Bachman, L. F. (1990). Fundamental considerations in language testing. Oxford, UK: Oxford University Press.
Bachman, L. F. (2000). Modern language testing at the turn of the century: Assuring that what we count counts. Language Testing, 17(1), 1–42.
Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice: Designing and developing useful language tests. Oxford, UK: Oxford University Press.
Barnes, A., Hunt, M., & Powell, B. (1999). Dictionary use in the teaching and examining of MFLs at GCSE. Language Learning Journal, 19, 19–27.
Baxter, J. (1980). The dictionary and vocabulary behaviour: A single word or a handful? TESOL Quarterly, 14(3), 325–336.
BBC. (2007). Lost for words. Retrieved May 1, 2007, from http://www.bbc.co.uk/languages/yoursay/lost_for_words.shtml.
BBC/Larousse. (1997). BBC German Learners’ Dictionary: BBC/Larousse.
Benson, P., & Voller, P. (Eds.). (1997). Autonomy and independence in language learning. London: Longman.
Bensoussan, M. (1983). Dictionaries and tests of EFL comprehension. ELT Journal, 37(4), 341–345.
Bensoussan, M., Sim, D., & Weiss, R. (1984). The effect of dictionary usage on EFL test performances compared with student and teacher attitudes and expectations. Reading in a Foreign Language, 2(2), 262–276.
Birmingham University. (2000). Review 2000. Retrieved April 2, 2001, from http://www.publications.bham.ac.uk/review2000/research.html#dictionaries.
Bishop, G. (1998). Research into the use being made of bilingual dictionaries by language learners. Language Learning Journal, 18, 3–8.
Bishop, G. (2000). Dictionaries, examinations and stress. Language Learning Journal, 21, 52–65.
Blair, A. (2007, March 12). Call for easier exams as GCSE pupils shun ‘difficult’ languages. The Times Newspaper.
Bogaards, P. (1999). Research on dictionary use: An overview. In R. R. K. Hartmann (Ed.), Dictionaries in language learning: Recommendations, national reports and thematic reports from the TNP sub-project 9: Dictionaries (pp. 32–35). Berlin, Germany: Thematic network project in the area of languages, Freie Universität Berlin.
Broadfoot, P., & Black, P. (2004). Redefining assessment? The first ten years of Assessment in Education. Assessment in Education: Principles, Policy and Practice, 11(1), 7–27.
Brown, J. D. (2001). Using surveys in language programs. Cambridge, UK: Cambridge University Press.
Butler, F. A., & Stevens, R. (1997). Accommodation strategies for English language learners on large-scale assessments: Student characteristics and other considerations (CSE Technical Report 448). Los Angeles, CA: National Center for Research on Evaluation, Standards, and Student Testing, University of California.
Canale, M. (1983). On some dimensions of language proficiency. In J. W. Oller, Jr. (Ed.), Issues in language testing research (pp. 333–342). Rowley, MA: Newbury House.
Canale, M., & Swain, M. (1980). Theoretical bases of communicative approaches to second language teaching and testing. Applied Linguistics, 1, 1–47.
Carter, R., & McCarthy, M. (1988). Vocabulary and language teaching. New York: Longman.
Cassidy, S. (2007, August 24). 30,000 fewer entries in French and German. The Independent Newspaper.
Chambers, G. (1999). Using dictionaries in the GCSE examination of modern foreign languages – pupils’ perceptions. Studies in Modern Languages Education, 7, 68–85.
Coe, R. (2006). Relative difficulties of examinations at GCSE: An application of the Rasch model. Durham, UK: Durham University Curriculum, Evaluation and Management (CEM) Centre.
Collins. (1995a). Collins Pocket French Dictionary. Glasgow, UK: HarperCollins Publishers.
Collins. (1995b). Collins German College Dictionary (2nd ed.). Glasgow, UK: HarperCollins.
Collins. (1995c). Collins German Dictionary plus Grammar (2nd ed.). Glasgow, UK: HarperCollins.
Collins. (1996). Collins Easy Learning French Dictionary. Glasgow, UK: HarperCollins.
Collins. (2001). Collins Pocket German Dictionary (5th ed.). Glasgow, UK: HarperCollins.
Collins. (2006). Collins School Thesaurus (3rd ed.). Glasgow, UK: HarperCollins.
Council of Europe. (2001). Common European Framework of Reference for languages. Cambridge, UK: Cambridge University Press.
Cumming, A. (2002). Assessing L2 writing: Alternative constructs and ethical dilemmas. Assessing Writing, 8, 73–83.
DES. (1990). MFL for ages 11–16. London: Department of Education and Science.
East, M. (2004). Calculating the Lexical Frequency Profile of written German texts. Australian Review of Applied Linguistics, 27(1), 30–43.
East, M. (2005a). Using dictionaries in written examinations. Babel (Journal of the Australian Federation of Modern Language Teachers Associations), 40(2), 4–9.
East, M. (2005b). Autonomy in an assessment context: Allowing dictionaries in writing tests. In H. Anderson, M. Hobbs, J. Jones-Parry, S. Logan & S. Lotovale (Eds.), Supporting independent learning in the 21st century. Proceedings of the second conference of the Independent Learning Association, September 9–12, 2005. Auckland: Independent Learning Association.
East, M. (2005c). Using support resources in writing assessments: Test taker perceptions. New Zealand Studies in Applied Linguistics, 11(1), 21–36.
East, M. (2006a). A case for re-evaluating dictionary availability in examinations. Language Learning Journal, 33, 34–39. (Accessible from http://www.informaworld.com.)
East, M. (2006b). The impact of bilingual dictionaries on lexical sophistication and lexical accuracy in tests of L2 writing proficiency: A quantitative analysis. Assessing Writing, 11(3), 179–197.
East, M. (2006c). An investigation into how intermediate level language students use bilingual dictionaries in writing tests. New Zealand Studies in Applied Linguistics, 12(1), 1–15.
East, M. (2007). Bilingual dictionaries in tests of L2 writing proficiency: Do they make a difference? Language Testing, 24(3), 331–353.
East, M. (2008). Language evaluation policies and the use of support resources in assessments of language proficiency. Current Issues in Language Planning, 9(3).
Edexcel. (2000). Specification – Edexcel AS/A GCE in modern foreign languages. London: Edexcel.
Edexcel. (2001). Edexcel GCSE German coursework guide. London: Edexcel.
Elbow, P. (1991). Foreword. In P. Belanoff & M. Dickson (Eds.), Portfolios: Process and product (pp. ix–xvi). Portsmouth, NH: Boynton-Cook/Heinemann.
Elder, C., Iwashita, N., & McNamara, T. (2002). Estimating the difficulty of oral proficiency tasks: What does the test-taker have to offer? Language Testing, 19(4), 347–368.
Engber, C. A. (1995). The relationship of lexical proficiency to the quality of ESL compositions. Journal of Second Language Writing, 4(2), 139–155.
Garner, R. (2007, March 12). Languages are the hardest GCSEs, research finds. The Independent Newspaper.
Gipps, C. (1994). Beyond testing: Towards a theory of educational assessment. London: The Falmer Press.
Gipps, C., & Murphy, P. (1994). A fair test? Assessment, achievement and equity. Buckingham, UK: Open University Press.
Glaser, R. (1990). Toward new models for assessment. International Journal of Educational Research, 14(5), 475–483.
Gorrell, D. (1988). Writing assessment and the new approaches. St. Cloud, MN: St. Cloud State University. (ERIC Document Reproduction Service No. ED 296 334).
Grabe, W., & Kaplan, R. (1996). Theory and practice of writing: An applied linguistic perspective. New York: Longman.
Green, D. Z. (1985). Developing measures of communicative proficiency: A test for French immersion students in grades 9 and 10. In P. C. Hauptman, R. LeBlanc & M. B. Wesche (Eds.), Second language performance testing (pp. 215–227). Ottawa, Canada: University of Ottawa Press.
Greene, J. C., Caracelli, V. J., & Graham, W. F. (1989). Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 11, 255–274.
Grobe, C. (1981). Syntactic maturity, mechanics, and vocabulary as predictors of quality ratings. Research in the Teaching of English, 15, 75–85.
Hamp-Lyons, L. (1993). Scoring procedures for ESL contexts. In L. Hamp-Lyons (Ed.), Assessing second language writing in academic contexts (pp. 241–276). Norwood, NJ: Ablex.
Hamp-Lyons, L. (2000). Fairnesses in language testing. In A. J. Kunnan (Ed.), Fairness and validation in language assessment (pp. 30–34). Cambridge, UK: Cambridge University Press.
Hamp-Lyons, L., & Condon, W. (2000). Assessing the portfolio: Principles for practice, theory, and research. Cresskill, NJ: Hampton Press.
Hamp-Lyons, L., & Kroll, B. (1997). TOEFL 2000 writing: Composition, community, and assessment. Princeton, NJ: Educational Testing Service.
Harlen, W., & Winter, J. (2004). The development of assessment for learning: Learning from the case of science and mathematics. Language Testing, 21(3), 390–408.
Hartmann, R. R. K. (1983). The bilingual learner’s dictionary and its uses. Multilingua, 2(4), 195–201.
Hartmann, R. R. K. (1994). Bilingualised versions of learners’ dictionaries. Fremdsprachen Lehren und Lernen, 23, 206–220.
Hartmann, R. R. K. (1999). Case study: The Exeter University survey of dictionary use. In R. R. K. Hartmann (Ed.), Dictionaries in language learning: Recommendations, national reports and thematic reports from the TNP sub-project 9: Dictionaries (pp. 36–52). Berlin, Germany: Thematic network project in the area of languages, Freie Universität Berlin.
Hatch, E., & Lazaraton, A. (1991). The research manual: Design and statistics for applied linguistics. Boston, MA: Heinle and Heinle.
Hays, P. A. (2004). Case study research. In K. DeMarrais & S. D. Lapan (Eds.), Foundations for research: Methods of inquiry in education and the social sciences (pp. 217–234). Mahwah, NJ: Lawrence Erlbaum.
Heatley, A., Nation, P., & Coxhead, A. (2002). RANGE and FREQUENCY programs. Retrieved January 13, 2003, from http://www.vuw.ac.nz/lals/staff/Paul_Nation.
Heller, K. (2000). Rechtschreibung 2000 – die aktuelle Reform: Wörterliste der geänderten Schreibungen. Stuttgart, Germany: Ernst Klett Verlag.
Horsfall, P. (1997). Dictionary skills in MFL 11–16. Language Learning Journal, 15, 3–9.
Hurman, J., & Tall, G. (1998). The use of dictionaries in GCSE modern foreign languages written examinations (French). Birmingham, UK: University of Birmingham School of Education.
Hymes, D. (1971). Competence and performance in linguistic theory. In R. Huxley & E. Ingram (Eds.), Language acquisition: Models and methods. London: Academic Press.
Hymes, D. (1972). On communicative competence. In J. B. Pride & J. Holmes (Eds.), Sociolinguistics (pp. 269–293). Harmondsworth, UK: Penguin.
Hymes, D. (1982). Toward linguistic competence. Philadelphia, PA: Graduate School of Education, University of Pennsylvania.
Idstein, B. (2003). Dictionary use during reading comprehension tests: An aid or a diversion? Unpublished doctoral dissertation, Indiana University of Pennsylvania, Pennsylvania.
Ilson, R. (Ed.). (1985). Dictionaries, lexicography and language learning. Oxford, UK: Pergamon Press.
Jacobs, H. L., Zinkgraf, S. A., Wormuth, D. R., Hartfiel, V. F., & Hughey, J. B. (1981). Testing ESL composition: A practical approach. Rowley, MA: Newbury House.
Johnson, B., & Christensen, L. (2004). Educational research: Quantitative, qualitative and mixed approaches (2nd ed.). Boston, MA: Pearson Education.
Keppel, G. (1991). Design and analysis: A researcher’s handbook (3rd ed.). Upper Saddle River, NJ: Prentice Hall.
Kirkness, A. (2004). Lexicography. In A. Davies & C. Elder (Eds.), The handbook of applied linguistics (pp. 54–81). Oxford, UK: Blackwell.
Kramsch, C., & Thorne, S. (2002). Foreign language learning as global communicative practice. In D. Block & D. Cameron (Eds.), Globalization and language teaching (pp. 83–100). London: Routledge.
Kunnan, A. J. (2000). Fairness and justice for all. In A. J. Kunnan (Ed.), Fairness and validation in language assessment (pp. 1–14). Cambridge, UK: Cambridge University Press.
Laufer, B. (2005). Lexical Frequency Profiles: From Monte Carlo to the real world. Applied Linguistics, 26(4), 582–588.
Laufer, B., & Hadar, L. (1997). Assessing the effectiveness of monolingual, bilingual and “bilingualised” dictionaries in the comprehension and production of new words. The Modern Language Journal, 81(2), 189–196.
Laufer, B., & Nation, P. (1995). Vocabulary size and use: Lexical richness in L2 writing. Applied Linguistics, 16(3), 307–322.
Lederman, M. J. (1986). Why test? In K. Greenberg, H. Wiener & R. Donovan (Eds.), Writing assessment: Issues and strategies (pp. 34–46). New York: Longman.
Lewkowicz, J. A. (2000). Authenticity in language testing: Some outstanding questions. Language Testing, 17(1), 43–64.
Malvern. (1996). Your French Dictionary. Malvern, UK: Malvern Language Guides.
Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13–103). New York: Macmillan.
Midlane, V. (2005). Students’ use of portable electronic dictionaries in the ESL/EFL classroom: A survey of teacher attitudes. Unpublished Master of Education dissertation, University of Manchester, Manchester, UK.
Morrow, K. (1981). Communicative language testing: Evolution or revolution? In J. C. Alderson & A. Hughes (Eds.), Issues in language testing. ELT Documents 111 (pp. 9–25). London: The British Council.
Morrow, K. (1991). Evaluating communicative tests. In S. Anivan (Ed.), Current developments in language testing (pp. 111–118). Singapore: SEAMEO Regional Language Centre.
Nation, P. (2001). Learning vocabulary in another language. Cambridge, UK: Cambridge University Press.
Nation, P. (2002). Range and Frequency: Programs for Windows based PCs. Wellington, New Zealand: Victoria University of Wellington.
Nesi, H. (1992). Go away and look it up! Revue de Phonétique Appliquée, 103–104, 195–210.
Nesi, H. (forthcoming). The virtual vocabulary notebook: The electronic dictionary as vocabulary learning tool. In G. Blue (Ed.), Developing academic literacy. London: Peter Lang.
Nesi, H., & Meara, P. (1991). How using dictionaries affects performance in multiple-choice EFL tests. Reading in a Foreign Language, 8(1), 631–643.
Oller, J. (1979). Language tests at school. London: Longman.
OUP. (1997). Oxford Duden Paperback Dictionary (2nd ed.). Oxford, UK: Oxford University Press.
Oxford, R. (1990). Language learning strategies: What every teacher should know. New York: Newbury House.
Oxford University Language Centre. (2003). Online tests. Retrieved February 3, 2003, from http://www.lang.ox.ac.uk/placement.html.
Parkinson, C. N. (1958). Parkinson’s law: The pursuit of progress. London: John Murray.
Peterson, R. A. (2000). Constructing effective questionnaires. Thousand Oaks, CA: Sage.
Piotrowski, T. (1989). Monolingual and bilingual dictionaries: Fundamental differences. In M. Tickoo (Ed.), Learners’ dictionaries: State of the art (pp. 72–83). Singapore: SEAMEO Regional Language Centre.
Poulet, G. (1999). Instruction in dictionary use and foreign language teacher training: The English scene. In R. R. K. Hartmann (Ed.), Dictionaries in language learning: Recommendations, national reports and thematic reports from the TNP sub-project 9: Dictionaries (pp. 78–82). Berlin, Germany: Thematic network project in the area of languages, Freie Universität Berlin.
QCA. (2000). GCSE criteria for modern foreign languages. London: Qualifications and Curriculum Authority.
Rea-Dickins, P. (1997). So, why do we need relationships with stakeholders in language testing? A view from the UK. Language Testing, 14(3), 304–314.
Rivera, C., & Stansfield, C. W. (1998). Leveling the playing field for English language learners: Increasing participation in state and local assessments through accommodations. Assessing student learning: New rules, new realities. Retrieved August 4, 2004, from http://ceee.gwu.edu/standards_assessments/researchLEP_accommodintro.htm.
Rivera, C., & Stansfield, C. W. (2001). The effects of linguistic simplification of science test items on performance of limited English proficient and monolingual English-speaking students. Paper presented at the annual meeting of the American Educational Research Association, Seattle, WA.
Ross, T., Attwood, K., & Moynihan, T. (2004, October 18). School exams shake-up heralds four-level diploma. The Independent Newspaper.
Rossner, R. (1985). The learner as lexicographer: Using dictionaries in second language learning. In R. Ilson (Ed.), Dictionaries, lexicography and language learning (pp. 95–102). Oxford: Pergamon Press.
Savignon, S. (1983). Communicative competence. Reading, MA: Addison-Wesley.
Savignon, S. (1997). Communicative competence: Theory and classroom practice (2nd ed.). New York: McGraw-Hill.
Savignon, S. (Ed.). (2002). Interpreting communicative language teaching: Contexts and concerns in teacher education. New Haven, CT: Yale University Press.
Schutz, P. A., Chambless, C. B., & DeCuir, J. T. (2004). Multimethods research. In K. DeMarrais & S. D. Lapan (Eds.), Foundations for research: Methods of inquiry in education and the social sciences (pp. 267–281). Mahwah, NJ: Lawrence Erlbaum.
Sheskin, D. J. (2000). Handbook of parametric and nonparametric statistical procedures (2nd ed.). Boca Raton, FL: Chapman and Hall/CRC.
Sireci, S. G., Shuhong, L., & Scarpati, S. (2003). The effects of test accommodation on test performance: A review of the literature (Center for Educational Assessment Research Report no. 485). Amherst, MA: School of Education, University of Massachusetts Amherst.
Smith, C. A. (1991). Writing without testing. In P. Belanoff & M. Dickson (Eds.), Portfolios: Process and product (pp. 279–291). Portsmouth, NH: Boynton/Cook Publishers.
Spolsky, B. (2001). A note on the use of dictionaries in examinations. (unpublished).
Stansfield, C. W. (2002). Linguistic simplification: A promising test accommodation for LEP students? Practical Assessment, Research and Evaluation, 8(7). Retrieved August 4, 2004, from http://PAREonline.net/getvn.asp?v=8&n=7.
Stirling, J. (2005). The portable electronic dictionary: Faithful friend or faceless foe? The Modern English Teacher, 14(3), 64–72.
Swain, M. (1985). Large-scale communicative language testing: A case study. In S. Savignon & M. Berns (Eds.), Initiatives in communicative language teaching (pp. 185–201). Reading, MA: Addison-Wesley.
Tall, G., & Hurman, J. (2000). Using a dictionary in a written French examination: The students’ experience. Language Learning Journal, 21, 50–56.
Tall, G., & Hurman, J. (2002). Using dictionaries in modern languages GCSE examinations. Educational Review, 54(3), 205–217.
Terrell, P., Schnorr, V., Morris, W. V. A., & Breitsprecher, R. (Eds.). (1997). The Collins German Dictionary – Unabridged (3rd ed.). Glasgow, UK: HarperCollins.
Thompson, G. (1987). Using bilingual dictionaries. ELT Journal, 41(4), 282–286.
Tomaszczyk, J. (1983). On bilingual dictionaries: The case for bilingual dictionaries for foreign language learners. In R. R. K. Hartmann (Ed.), Lexicography: Principles and practice (pp. 41–51). London: Academic Press.
Underhill, A. (1985). Working with the monolingual learners’ dictionary. In R. Ilson (Ed.), Dictionaries, lexicography and language learning (pp. 103–114). Oxford, UK: Pergamon Press.
Weigle, S. C. (2002). Assessing writing. Cambridge, UK: Cambridge University Press.
Weir, C. (1990). Communicative language testing. New York: Prentice Hall.
Weir, C. (2005). Language testing and validation: An evidence-based approach. Basingstoke, UK: Palgrave Macmillan.
White, E. M. (1995). An apologia for the timed impromptu essay task. College Composition and Communication, 46(1), 30–45.
Widdowson, H. G. (1978). Teaching language as communication. Oxford, UK: Oxford University Press.
Wiggins, G. (1989). A true test: Towards more authentic and equitable assessment. Phi Delta Kappan, 70(9), 703–713.
Wigglesworth, G. (1997). An investigation of planning time and proficiency level on oral test discourse. Language Testing, 14(1), 85–106.
Wolcott, W., & Legg, S. (1998). An overview of writing assessment – theory, research, and practice. Urbana, IL: National Council of Teachers of English.
Wood, R. (1993). Assessment and testing. Cambridge, UK: Cambridge University Press.
Worsch, W. (1999). Recent trends in publishing bilingual learners’ dictionaries. In R. R. K. Hartmann (Ed.), Dictionaries in language learning: Recommendations, national reports and thematic reports from the TNP sub-project 9: Dictionaries (pp. 99–107). Berlin, Germany: Thematic network project in the area of languages, Freie Universität Berlin.
Wray, A. (2002). Formulaic language and the lexicon. Cambridge, UK: Cambridge University Press.
appendix 1
A note on intermediate level students
The studies reported in this book were designed to investigate what I have described as intermediate level learners of German as a foreign language. I also make reference to tests that are used in two distinct contexts – the GCSE and A level in the United Kingdom, and their equivalents in New Zealand, School Certificate and Bursary. To provide a measure of the approximate level of language proficiency required for these qualifications it is helpful to compare expected outcomes with those described in the Common European Framework of Reference for Languages, or CEF (Council of Europe, 2001). The lower level of the GCSE, covered by the Foundation Tier and grades G to C, is roughly equivalent to the elementary levels A1 and A2 on the Framework. In terms of writing skills, a learner at A1 level can write “a short, simple postcard, for example sending holiday greetings” or “fill in forms with personal details.” At A2 level candidates would be expected to “write short, simple notes and messages relating to matters in areas of immediate need” and “write a very simple personal letter.” GCSE Higher Tier and grades C to A* are targeted at roughly A2 on the Framework. The highest grades also begin to record proficiency at the intermediate level B1, which measures ability to “write simple connected text on topics which are familiar or of personal interest” and “write personal letters describing experiences and impressions.” New Zealand’s ‘School Certificate’ (and the new National Certificate of Educational Achievement (NCEA) level 1, which has recently replaced it) also covers levels A1 and A2. The UK’s AS level (usually taken after one year of study post-GCSE) more specifically targets the intermediate level B1 on the CEF, as do the lowest grades of A level.
Higher grades of the A level, and New Zealand’s ‘Bursary’ examination (now replaced by NCEA level 3) target proficiency which spans levels B1 and B2, where, at the highest levels, candidates would be expected to “write clear, detailed text on a wide range of subjects” or “write an essay or report, passing on information or giving reasons in support of or against a particular point of view” or “write letters highlighting the personal significance of events and experiences.” The comparative levels of the examinations are represented diagrammatically in Figure A1.1.
Generally taken | Language level (CEF) | United Kingdom (current system) | New Zealand (old system) | New Zealand (new system)
Year 11 (15–16 years of age) | A1 → A2 | GCSE Foundation Tier / GCSE Higher Tier | School Certificate | NCEA level 1 (from 2002)
Year 12 (16–17) | B1 | Advanced Subsidiary (AS)* | Sixth Form Certificate | NCEA level 2 (from 2003)
Year 13 (17–18) | B1 → B2 | Advanced (A) level | University Entrance, Bursaries and Scholarships (Bursary) | NCEA level 3 (from 2004)

Figure A1.1 Secondary school L2 examinations
* The ‘Advanced Supplementary’ qualification was revised and replaced by ‘Advanced Subsidiary’ in 2001
It should be noted that:

1. the United Kingdom comprises England, Wales, Northern Ireland, and Scotland;
2. the GCSE/A level framework is used only in England, Wales, and Northern Ireland, and is administered by five separate examining bodies, including one in Wales and one in Northern Ireland. Scotland has its own national qualifications system, governed by the Scottish Qualifications Authority (SQA).

In this book I have chosen to use the acronym UK to contextualize, for example, the work of the Qualifications and Curriculum Authority (QCA), based in England, and the GCSE/A level examinations system. In practice the examinations system in the UK, and its governance, are more localized than the use of this acronym might suggest.
appendix 2
A note on data and procedures to establish reliability
No research project can be truly successful without a robust research design, informed by focused research questions, and procedures to ensure that the results derived from the data are reliable. Three overarching research questions framed the studies described in this book:

1. How do L2 test takers use bilingual dictionaries in specified L2 writing assessment contexts at an intermediate (A level or Bursary equivalent) level?
   a. How often is the dictionary accessed?
   b. For what purposes is the dictionary used?
   c. What strategies do test takers use?
   d. Does the nature of dictionary use differ according to task type?
2. Does use of a dictionary make a difference to the quality of L2 test takers’ writing, as reflected in:
   a. the test scores?
   b. the quality of text discourse?
3. How do L2 test takers perceive the use of dictionaries in the test environment?

When carrying out the studies, various checks and balances were put in place to ensure that the data were handled and analyzed reliably. These included inviting several impartial observers to work alongside me from time to time to scrutinize the data independently, enabling me to check my own findings against those of someone else. This scrutiny and checking occasionally led to modifications in data collection or analysis, but on many occasions it confirmed the reliability of my own interpretations. Whenever I record qualitative perspectives given by the participants in the interviews and the open-ended questions, I quote comments as they were recorded in order to remain faithful to the data. The only modifications to the interview data were to remove extraneous or irrelevant comments and pauses – the usual ‘erms’, ‘ers’ and hesitations of normal speech. I also corrected spellings in the written data. In cases where test takers’ use of German is quoted, this has been
done without correction in order to remain faithful to the examples of intermediate level writing produced.
The reliability of the test scoring process

In the second and third studies two raters, working independently in the same room, scored each test script against detailed criteria, awarding a score out of 7 on each of five facets of writing, for a total score out of 35. Facet scores that differed by more than 2 points were considered discrepant. In these cases discussion between the raters was initiated with a view to bringing the scores into closer agreement. A subsequent inter-rater correlation coefficient was calculated as at least r = .86. In the third study the raters were invited back about a month after the original scoring session to re-rate 16 essays (17% of the original sample). An intra-rater reliability coefficient to determine the differences between the two sets of final awarded scores for this sample of scripts was calculated as r = .87. These coefficients suggested that the rubric could be used with a high degree of reliability and that, as a consequence, the final awarded scores (which were the average of the two sets of independent scores) could be considered reliable (East, 2007; Hamp-Lyons, 1993; Hatch & Lazaraton, 1991).
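The mechanics of this procedure – independent facet scores, a discrepancy threshold, and a Pearson correlation between the two raters – can be sketched in a few lines of Python. The facet names and scores below are invented for illustration only; they are not data from the studies.

```python
# Illustrative sketch: flag facet scores on which two raters differ by more
# than 2 points, and compute a Pearson correlation between their scores.
# The facet names and score values are hypothetical, not the studies' data.

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def discrepant_facets(rater1, rater2, threshold=2):
    """Facets on which the two raters' scores differ by more than the threshold."""
    return [f for f in rater1 if abs(rater1[f] - rater2[f]) > threshold]

# Hypothetical scores out of 7 for one script across five facets of writing
r1 = {"content": 5, "organisation": 4, "vocabulary": 6, "grammar": 3, "mechanics": 5}
r2 = {"content": 4, "organisation": 4, "vocabulary": 3, "grammar": 3, "mechanics": 5}

print(discrepant_facets(r1, r2))  # -> ['vocabulary']: a difference of 3, so discuss
print(round(pearson_r(list(r1.values()), list(r2.values())), 2))
```

In practice the discrepancy check would be run per script before averaging, and the correlation computed over all scripts rather than one.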
The reliability of the LFPs (third study)

Ten sets of LFPs from the third study were extracted to determine the reliability of the measure (East, 2006). Following the first analysis of these profiles, I re-prepared and re-analyzed the ten scripts from which the profiles had been calculated in order to carry out an intra-rater reliability check. I also asked an independent coder to prepare these same ten scripts according to my guidelines, to establish the extent of inter-rater reliability. Comparisons were made across each level of vocabulary for each of tokens, types, and families. All reliability coefficients were calculated as at least r = .9, with the majority reaching r = .99. It was concluded that these LFPs provided highly reliable information on test takers’ range of lexis.
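For readers unfamiliar with the measure, a Lexical Frequency Profile simply reports the proportion of a text's tokens that fall into successive frequency bands. The Python sketch below illustrates the idea with tiny invented band lists; the actual analyses relied on much larger base word lists.

```python
# A minimal sketch of a Lexical Frequency Profile (LFP): the percentage of a
# text's tokens that fall into each frequency band. The band word lists here
# are tiny invented stand-ins for the large base lists used in real analyses.

BANDS = {
    "first_1000": {"the", "house", "is", "big", "and", "old"},
    "second_1000": {"ancient", "dwelling"},
}

def lfp(tokens, bands=BANDS):
    """Return the percentage of tokens at each band, plus 'not_in_lists'."""
    counts = {name: 0 for name in bands}
    counts["not_in_lists"] = 0
    for tok in tokens:
        for name, words in bands.items():
            if tok in words:
                counts[name] += 1
                break
        else:  # token matched no band
            counts["not_in_lists"] += 1
    total = len(tokens)
    return {name: round(100 * c / total, 1) for name, c in counts.items()}

text = "the ancient house is big and old".split()
print(lfp(text))  # -> {'first_1000': 85.7, 'second_1000': 14.3, 'not_in_lists': 0.0}
```

The intra- and inter-rater checks described above amount to running this computation twice over independently prepared versions of the same scripts and correlating the resulting percentages band by band.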
The reliability of questionnaire evidence (third study)

Qualitative data from the open-ended questions were transferred verbatim to a word processor document. My own initial coding identified 27 categories of opinion; where categories duplicated one another, they were collapsed. A resultant initial
taxonomy contained 23 categories, in four groups: ‘opinions about the dictionary used’, ‘advantages of dictionary availability’, ‘disadvantages of dictionary availability’ and ‘miscellaneous’. The initial taxonomy and the uncoded data were given to an independent coder, whose subsequent coding was compared to my own. It was found that we were in agreement 75% of the time. To improve the level of agreement, discrepancies were discussed. The discussion, and the further collapsing of categories it prompted, led to agreement on 98% of occasions. Once this level of agreement had been reached, I revisited each individual questionnaire a third time, and cases where agreement had not been reached were removed from the analysis. The final taxonomy was then drawn up and applied to the data.
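Simple percent agreement of the kind reported here can be computed as follows. The categories and codings in this sketch are invented for illustration; they are not the actual questionnaire data. (A chance-corrected statistic such as Cohen's kappa would be a common alternative.)

```python
# A hedged sketch of simple percent agreement between two coders' category
# assignments. The category labels and codings below are invented examples.

def percent_agreement(coder_a, coder_b):
    """Share (in %) of items that the two coders placed in the same category."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100 * matches / len(coder_a)

a = ["advantage", "advantage", "disadvantage", "misc", "opinion", "advantage", "misc", "opinion"]
b = ["advantage", "disadvantage", "disadvantage", "misc", "opinion", "advantage", "opinion", "opinion"]
print(percent_agreement(a, b))  # 6 of the 8 items agree -> 75.0
```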
The reliability of inferences drawn from look-up evidence (third study)

The 397 look-ups identified in the third study were transferred to a spreadsheet, together with the words immediately before and after each look-up, so that every look-up was contextualized. Look-ups were then classified according to word type (noun, verb, and so on) and correctness of use. To determine the reliability of the classification of look-ups as correct or incorrect, and of the suggested correction where a look-up was incorrect, a second coder, a native speaker of German, was invited to categorize the look-ups. He was given an initial set of 50 examples which illustrated particular difficulties, including words that greatly obscured the meaning, words that were contextually wrong, or words that exemplified how categories of incorrectness should be applied. My own classifications and suggested corrections were passed to the coder, who was asked whether he agreed with them. The independent coder subsequently classified the remaining look-ups. The two sets of coding were compared and cases of discrepancy were discussed. After discussion and adjustment to either the independent coder’s or my own initial decision, high levels of agreement were achieved:

1. Word correct or incorrect: agreement on 344 out of 348 occasions (98.8%).
2. Word subsequently used correctly or incorrectly: agreement on 347 out of 348 occasions (99.7%).

In the few cases of discrepancy (which included, for example, the difference of opinion over überbevölkert and überhäuft noted in Chapter 5) my own interpretations were used for purposes of analysis.
appendix 3
The writing tasks used in the first two studies
The writing tasks (Figures A3.1 to A3.5) were taken from, or closely based on, the A level examination essays of Edexcel, one of the major examining boards in the UK. I selected these tasks for three reasons:

1. They had already been field-tested and used in the public domain as part of the UK’s A level examination in the years 1997 to 2002 (with the 2001 and 2002 papers set under the ruling that disallows the bilingual dictionary, and earlier papers allowing dictionaries).
2. They provided examples of communicative writing tasks developed within a framework of communicative competence.
3. They provided assessment of language broadly commensurate with the level of language proficiency presupposed by those at the intermediate level of German as an L2 (levels B1/B2 on the Common European Framework).

In Task A1 test takers are asked to read a short note in which the writer explains that (s)he has not been able to engage the recipient in conversation at a party and asks whether the recipient could please write back. In Task A2 a competition invites the submission of two photographs depicting ‘a lasting impression of Germany’. Task B1 contains an advertisement for holding celebrations on a pleasure boat trip, and asks the test takers to write a letter of complaint about a disastrous party on the boat. Task B2 invites the test takers to write a letter outlining the good and bad points of a recent intensive language course at a Goethe Institut in Germany. Tasks C1 and C2 present a variety of discursive essays. The tasks were presented in German only. They are reproduced here with translations.
Ich habe den ganzen Abend versucht, mit dir ins Gespräch zu kommen, aber die anderen haben dich zu sehr in Beschlag genommen. Leider muss ich jetzt gehen, aber ich möchte gern mehr über dich und dein Land erfahren. Könntest du mir wohl schreiben? Name und Adresse Rückseite

During the summer holidays you stayed with some friends in Switzerland. While you were there, you spent an evening at a party. Before you left, someone pushed a piece of paper into your hand with the above message on it. You decide to write back. In a letter of at least a page in German, give the following information:

• Why you have decided to respond to the note
• Who you are, your background and interests
• The people and things you dislike, and why
• When and where you would like to meet this person
• What you have done in Switzerland so far and what else you intend to do
Figure A3.1 Task A1 (©Edexcel Limited, 1998)
FOTO-WETTBEWERB
Alle Studenten sind aufgerufen, an einem Wettbewerb zum Thema MEIN DEUTSCHLANDBILD IM FOTO teilzunehmen. Schicken Sie uns zwei Bilder. Für die Aufnahmen gibt es thematisch keine Grenzen. Alles ist erlaubt, was dem Fotografen oder der Fotografin einen Schnappschuss wert war und sich als bleibender Eindruck von Deutschland erweist. Preis – 2000 Euro. Einsendeschluss ist der 15. September 2003.

While reading a German magazine for foreign students, you see the above competition advertised. You send two photographs, accompanied by a letter of at least a page in German to the editor, giving the following information:

• Who you are and why you are writing
• When and where you took your pictures
• What each of your pictures depicts
• Why you wanted to take these particular photographs
• What you would plan to do with the money if you were to win the first prize
Figure A3.2 Task A2 (©Edexcel Limited, 1999)
Appendix 3. The writing tasks used in the first two studies 219
EIN SCHIFF – NUR FÜR SIE UND IHRE GÄSTE

Planen Sie eine Geburtstagsfeier oder gar eine Hochzeitsfeier? Möchten Sie, dass es nicht nur für Sie ein besonderer Tag, sondern auch für Ihre Gäste ein unvergessliches Erlebnis wird? Dann chartern Sie doch ein Schiff der Blauen Flotte: Sie allein bestimmen die Dauer und den Kurs der Reise. Wir werden alles tun, um Ihre ganz persönlichen Wünsche zu erfüllen.

Schreiben Sie uns an folgende Adresse: Die Blaue Flotte, 49832 Neustadt, Deutschland

Während der Sommerferien haben Sie ein Schiff der Blauen Flotte gechartert. Leider haben Sie einige Probleme erlebt. Schreiben Sie einen Brief an die Firma (von mindestens einer Seite auf Deutsch), in dem Sie folgendes beschreiben:
• Wer Sie sind und warum Sie schreiben
• Warum Sie ein Schiff der Blauen Flotte auf einem deutschen Fluss gechartert haben
• Was Ihren Gästen besonders gut gefallen hat
• Die Schwierigkeiten, die Sie während der Fahrt erlebt haben
• Was Sie jetzt von der Firma erwarten
Figure A3.3 Task B1 (©Edexcel Limited, 2001)
LERNEN SIE DEUTSCH IM GOETHE-INSTITUT

DEUTSCH ALS FREMDSPRACHE
INTENSIVKURSE – JEDEN MONAT NEU! VORMITTAGS UND NACHMITTAGS
KURSE ZUR NEUEN DEUTSCHEN RECHTSCHREIBUNG
MÜNDLICHE UND SCHRIFTLICHE ÜBUNGEN
AUSBILDUNG DURCH HOCHQUALIFIZIERTE LEHRKRÄFTE UND ZU GÜNSTIGEN PREISEN
MULTIMEDIAL UND INTERKULTURELL

Informieren Sie sich telephonisch unter 089/275-49-11 bei der Zentrale des Goethe-Instituts. Wir schicken Ihnen gerne alle Unterlagen zu.

Während eines Aufenthaltes in einer deutschen Großstadt haben Sie einen Kurs im Goethe-Institut mitgemacht. Schreiben Sie einen Brief an das Institut (von mindestens einer Seite auf Deutsch), in dem Sie auf folgende Punkte eingehen:
• Wer Sie sind und warum Sie schreiben
• Warum Sie einen Kurs speziell in dieser Stadt gewählt haben
• Was Sie an dem Kurs positiv gefunden haben
• Wie das Institut den ganzen Aufenthalt der Studenten verbessern könnte
Figure A3.4 Task B2 (©Edexcel Limited, 2002)
Task C1

Beantworten Sie EINE FRAGE auf Deutsch. Schreiben Sie mindestens eine Seite.

ENTWEDER
1. "Die Schule muss heutzutage unbedingt über Drogen informieren." Sind Sie auch dieser Meinung?
   "Schools these days absolutely must provide information about drugs." Do you share this opinion?
ODER
2. Leben und leben lassen – Das Rezept für ein glückliches und stressfreies Leben?
   Live and let live – the recipe for a happy and stress-free life?
ODER
3. Der Massentourismus – ist das etwas Negatives oder Positives?
   Mass tourism – is this something negative or positive?

Task C2

Beantworten Sie EINE FRAGE auf Deutsch. Schreiben Sie mindestens eine Seite.

ENTWEDER
1. Wie sehen Sie die Zukunft für Jugendliche heute?
   How do you view the future for young people today?
ODER
2. Glauben Sie, dass internationale Austauschbesuche zu einem besseren gegenseitigen Verständnis zwischen den Völkern führen?
   Do you believe that international exchange visits lead to a better mutual understanding between peoples?
ODER
3. Sollten Mütter von kleinen Kindern ihre Karriere aufgeben, um sich um ihre Kinder zu kümmern?
   Should mothers of small children give up their careers in order to look after their children?
Figure A3.5 Tasks C1 and C2 (©Edexcel Limited, 2002). Note: Translations added.
Appendix 4
A note on inferential statistics
Several of the inferential statistical procedures I used belong to a general family of parametric statistical models known as the General Linear Model. Careful thought was given to the assumptions of the various tests and to the extent to which the data might violate those assumptions in ways that could invalidate or compromise the inferences drawn. The α-level was set at .05.
Test scores

Differences in the final awarded scores were initially analyzed using paired-samples t-tests. In all cases p > .05 by a sufficiently large margin. In the first two studies caution must be taken in interpreting the inferences drawn because of the small sample sizes:

First study:          t(5) = 1.464, p = .203
Second study, Task A: t(4) = 1.451, p = .220
Second study, Task B: t(4) = .93, p = .405
Second study, Task C: t(4) = –2.058, p = .109
Third study:          t(46) = .279, p = .781
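For readers unfamiliar with the statistic, the paired-samples t value reported above can be sketched with nothing beyond the standard library. The scores below are synthetic, invented purely for illustration (they are not the study's data); with six candidates the sketch reproduces the t(5) form of the first study.

```python
# Paired-samples t statistic computed by hand on invented scores.
from statistics import mean, stdev
from math import sqrt

with_dict = [12, 14, 11, 15, 13, 14]     # hypothetical 'with dictionary' scores
without_dict = [11, 13, 12, 14, 12, 13]  # same six candidates, 'without dictionary'

diffs = [a - b for a, b in zip(with_dict, without_dict)]
t = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))  # df = n - 1 = 5
print(f"t({len(diffs) - 1}) = {t:.3f}")  # → t(5) = 2.000
```

A statistics package would then look up the two-tailed p value for that t on 5 degrees of freedom, as in the results listed above.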
In the third study, I utilized a two-way repeated measures analysis of variance (ANOVA) which included two between-subjects factors: level of ability and level of prior experience with the dictionary. The results of these ANOVAs, first for all users and then for the Collins Pocket users, are given in Tables A4.1 and A4.2. The most noteworthy, though not necessarily unanticipated, result from these analyses and the post-hoc tests was that levels of ability were significantly different from each other. In other words, lower ability participants, for example, performed significantly differently from upper ability participants.
Table A4.1 Analysis of variance – all users.

Interaction                                           df    F        p        Partial η²
Between subjects
  Level of ability                                     3    16.891   .000**   .598
  Level of experience                                  3      .299   .826     .026
  Level of ability * level of experience               6      .492   .809     .080
Within subjects
  Condition^a                                          1      .050   .824     .001
  Condition * level of ability                         3      .683   .569     .057
  Condition * level of experience                      3     3.037   .042*    .211
  Condition * level of ability * level of experience   6      .701   .650     .110

Notes. ^a Condition: the final awarded score in both test conditions; **p < .01; *p < .05. Taken from East (2007, p. 345)

Table A4.2 Analysis of variance – Collins Pocket users.

Interaction                                           df    F        p        Partial η²
Between subjects
  Level of ability                                     3    10.239   .000**   .618
  Level of experience                                  3      .217   .883     .033
  Level of ability * level of experience               5      .842   .536     .181
Within subjects
  Condition                                            1      .005   .945     .000
  Condition * level of ability                         3      .037   .990     .006
  Condition * level of experience                      3     1.646   .212     .206
  Score * level of ability * level of experience       5     1.475   .244     .280

**p < .01
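To give a feel for where F ratios like those in Tables A4.1 and A4.2 come from, here is a much-simplified, stdlib-only sketch of the between-subjects component: a one-way ANOVA across three hypothetical ability bands. The scores are synthetic; the study's actual analysis was a two-way repeated measures ANOVA run in a statistics package.

```python
# One-way between-subjects ANOVA by hand, on invented ability-band scores.
from statistics import mean

groups = {                       # hypothetical score sets per ability band
    "lower": [8, 9, 7, 10],
    "middle": [11, 12, 10, 13],
    "upper": [15, 14, 16, 15],
}

all_scores = [s for g in groups.values() for s in g]
grand = mean(all_scores)

# Partition the variability: between-group vs within-group sums of squares.
ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups.values())
ss_within = sum((s - mean(g)) ** 2 for g in groups.values() for s in g)

df_between = len(groups) - 1
df_within = len(all_scores) - len(groups)
F = (ss_between / df_between) / (ss_within / df_within)
print(f"F({df_between}, {df_within}) = {F:.3f}")  # → F(2, 9) = 31.750
```

A large F like this one simply reflects that the invented groups were made to differ; the interesting part of the tables above is which factors do (ability) and do not (experience) produce such ratios.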
A linear regression analysis was also used to determine which of several factors (ability, experience, number of look-ups) had an effect on outcomes, and therefore to predict the factors most likely to contribute to outcomes in 'with dictionary' tests. Tables A4.3 and A4.4 record the results of the analysis, first for all users and then for the Collins Pocket users. Once more ability was (not surprisingly) a significant predictor of outcomes, whereas neither prior dictionary experience nor the number of look-ups made a significant difference.
Table A4.3 Linear regression analysis – all users.

Variable          B        SE B     β        t        p
Ability           5.589    .817     .735     6.843    .000*
Experience        .425     .629     .073     .675     .503
No. of look-ups   –.024    .107     –.023    –.221    .826

*p < .01; R = .724; R² = .524; Adjusted R² = .491. Taken from East (2007, p. 346)

Table A4.4 Linear regression analysis – Collins Pocket users.

Variable          B        SE B     β        t        p
Ability           5.901    1.079    .730     5.470    .000*
Experience        .778     .867     .129     .897     .378
No. of look-ups   –.258    .212     –.172    –1.218   .234

*p < .01; R = .737; R² = .543; Adjusted R² = .492
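The kind of model behind Tables A4.3 and A4.4 can be sketched with NumPy's least-squares solver. Everything below is illustrative: the variable names, the sample size, and the synthetic data (in which only ability genuinely drives the score) are assumptions, not the study's data or procedure.

```python
# Multiple regression sketch: score ~ ability + experience + look-ups,
# fitted by ordinary least squares on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 40
ability = rng.integers(1, 5, n)      # hypothetical ability band, 1-4
experience = rng.integers(0, 4, n)   # hypothetical prior-experience level
look_ups = rng.integers(0, 30, n)    # hypothetical look-up counts

# Construct a score in which only ability truly matters, plus noise.
score = 5.0 * ability + rng.normal(0, 2, n)

X = np.column_stack([np.ones(n), ability, experience, look_ups])
coefs, *_ = np.linalg.lstsq(X, score, rcond=None)
print(dict(zip(["intercept", "ability", "experience", "look_ups"], coefs.round(2))))
```

Because the synthetic score was built from ability alone, the fitted ability coefficient sits near 5 while the experience and look-up coefficients hover near zero, mirroring the pattern of significant and non-significant predictors in the tables above.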
Differences in the LFPs

When investigating differences among the LFPs, paired-samples t-tests were used. This required multiple t-tests, nine in all, which presented a potential difficulty (East, 2006b):

    There is controversy around multiple t-tests because they increase the likelihood of a Type I error. To adjust for this the Bonferroni correction is sometimes recommended (Brown, 2001). The correction is contentious, however. When it forces the α level to become too conservative the differences between two data-sets have to be extensive for those differences to be statistically significant. This leads to the possibility of a Type II error. (p. 188)

Given this difficulty I suggest that the comparisons be viewed as three sets of three tests – that is, types, tokens, and families, with three levels in each. The Bonferroni correction then necessitates an α level of .02, which is fairly realistic (that is, not too conservative). However, in the light of Keppel's (1991) recommendation that no special correction is needed for a reasonable number of planned comparisons, results were also regarded as significant if they fell below the usually accepted α level of .05. Results of the paired t-tests for the second and third studies are given in Table A4.5.
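The arithmetic of the correction described above is simple enough to sketch: dividing the conventional α by the three tests in each set gives roughly the .02 level adopted.

```python
# Bonferroni correction for a set of three planned comparisons.
alpha = 0.05
tests_per_set = 3
corrected = alpha / tests_per_set
print(f"corrected alpha = {corrected:.3f}")  # → corrected alpha = 0.017, i.e. roughly .02
```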
Table A4.5 Significance tests for the LFPs: 'with dictionary' texts compared to 'without dictionary' texts.

                    Dictionary look-ups included       Dictionary look-ups removed
Profile             First     Second    Not in the     First     Second    Not in the
                    list      list      lists          list      list      lists

Second study
Tokens      t       –1.990    1.044     1.809          .658      –.549     –.480
            df      14        14        14             14        14        14
            p       .066      .314      .092           .521      .592      .639
Types       t       –1.467    .202      1.804          –.282     .781      .076
            df      14        14        14             14        14        14
            p       .165      .843      .093           .782      .448      .940
Families    t       –1.456    .294      1.783          –.589     .764      –.204
            df      14        14        14             14        14        14
            p       .167      .773      .096           .565      .458      .842

Third study
Tokens      t       –3.003    1.873     3.657          .153      –.305     .049
            df      38        38        38             38        38        38
            p       .005**    .069      .001**         .879      .762      .961
Types       t       –4.397    1.747     5.398          .086      –.513     .601
            df      38        38        38             38        38        38
            p       .000**    .089      .000**         .932      .611      .552
Families    t       –4.510    1.785     5.596          –.100     –.315     .689
            df      38        38        38             38        38        38
            p       .000**    .082      .000**         .921      .754      .495

**p < .01 (two-tailed). Adapted from East (2006, p. 190)
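The LFP classification these tests compare can be illustrated with a toy example. The two word lists and the one-line 'lemmatizer' below are invented for illustration only and bear no relation to the actual base lists used by the Range software.

```python
# Toy Lexical Frequency Profile: count a text's tokens against frequency lists.
first_list = {"der", "die", "und", "ich", "haben", "sein"}   # hypothetical high-frequency list
second_list = {"erleben", "gast", "reise"}                   # hypothetical second list

text = "ich habe die reise und die gäste erlebt".split()
lemmas = {"habe": "haben", "gäste": "gast", "erlebt": "erleben"}  # toy lemmatizer

profile = {"first": 0, "second": 0, "not_in_lists": 0}
for token in text:
    lemma = lemmas.get(token, token)     # map inflected form to its family head word
    if lemma in first_list:
        profile["first"] += 1
    elif lemma in second_list:
        profile["second"] += 1
    else:
        profile["not_in_lists"] += 1
print(profile)  # → {'first': 5, 'second': 3, 'not_in_lists': 0}
```

A real profile would report types and families as well as tokens, and would express each category as a proportion of the text; the t-tests above then compare those proportions across test conditions.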
Independent-samples t-tests were also used with the data from the third study to determine whether the LFPs differed significantly across the two ability levels (Table A4.6).
Table A4.6 Comparison of LFP profiles across two ability groups.

                    First list          Second list         Not in the lists
Profile             +         –         +         –         +         –
Tokens      t       .792      2.776     –.857     2.849     –.480     2.989
            df      37        37        37        37        37        37
            p       .434      .009**    .397      .007**    .634      .005**
Types       t       .000      –2.687    –.308     –2.357    .269      –2.472
            df      37        37        37        37        37        37
            p       1.000     .011**    .760      .024*     .790      .018**
Families    t       –.001     –2.361    –.264     –2.170    .241      –2.275
            df      37        37        37        37        37        37
            p       .999      .024*     .793      .036*     .811      .029*

Note. *p < .05 (two-tailed); **p < .02 (two-tailed); +: with dictionary; –: without dictionary. Taken from East (2006, p. 188)
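The statistic underlying Table A4.6 is the independent-samples (pooled-variance) t-test, which can be sketched stdlib-only. The first-list token percentages below are invented for two hypothetical ability groups; they are not the study's data.

```python
# Independent-samples t statistic (equal-variance form) on invented data.
from statistics import mean, variance
from math import sqrt

upper = [62.1, 64.3, 61.8, 65.0, 63.2]  # hypothetical first-list token %, upper group
lower = [68.4, 70.2, 69.1, 71.5, 67.9]  # hypothetical first-list token %, lower group

n1, n2 = len(upper), len(lower)
# Pool the two sample variances, weighted by their degrees of freedom.
sp2 = ((n1 - 1) * variance(upper) + (n2 - 1) * variance(lower)) / (n1 + n2 - 2)
t = (mean(upper) - mean(lower)) / sqrt(sp2 * (1 / n1 + 1 / n2))
print(f"t({n1 + n2 - 2}) = {t:.3f}")
```

With 39 participants split across two ability groups, the study's comparisons carry 37 degrees of freedom rather than the 8 of this toy example, but the computation is the same in form.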
Index
A
A level examination 2, 5–8, 78–79, 211–212, 217
accommodations 9, 41
affective response 30, 172, 175, 189
analysis of variance 221–222
assessing writing 1, 27–30, 170
assessment for learning 9–10, 13, 23, 28, 135, 170, 188
assessment of learning 9–10, 13, 23, 33, 135, 164, 170, 181, 188
Assessment Reform Group (ARG) 9
authenticity 3, 6, 7, 14, 20, 23, 25, 27, 30, 33, 34, 143, 145, 172–173, 175, 189, 191
  see also real-world contexts; real-world practice
autonomy see learner autonomy

B
bilingual dictionaries
  advantages of 6, 18–20, 126–132, 153–155, 162, 174, 182
  disadvantages of 4, 16–17, 132–141, 156–159, 165, 171, 197
  in assessments 28, 34, 37, 170, 172, 188
  liability of 8, 46, 95, 99, 106, 107, 112, 120
  misuse of 7, 107, 108–120
  user preferences and 1, 14, 21–22, 163–164
bilingualized dictionaries 15, 21, 162, 183–184

C
CEF see Common European Framework
central 'c' concepts 195
cheating 31, 46, 136, 145, 158, 164, 191
CLT see Communicative Language Teaching
Collins German Dictionary 50–51, 90, 109–111, 119, 141–144, 165, 191, 198–200
Collins Pocket German Dictionary 53, 54, 58–60, 88–89, 91, 115–117, 160–161
Common European Framework 48, 195, 211, 217
communicative competence 14, 19, 23, 30, 54, 80, 170
Communicative Language Teaching 14, 16, 19, 20, 23
confidence 3, 7, 20, 45, 128–130, 131, 143, 150–151, 153, 162–164, 167, 175, 176, 177, 179, 181, 189, 194
construct irrelevant variance 32–33, 171
construct validity 25, 33, 34–35, 38, 42, 169–171, 189
coursework 1, 8, 9, 10, 27–28, 34, 170, 188

D
diagnostic scoring rubric 54
dictionary howlers 8, 187, 201
dictionary skills 3, 4, 5, 50, 164
  training in 3, 65, 120, 163, 194, 197
Dictionary Skills Guide 50, 197, 199, 200

E
electronic dictionaries 15, 183, 184
equality of opportunity 31, 41, 192
equity see fairness
errors 4, 8, 17, 45, 108–120, 171, 172, 180, 194–197

F
fairness 26, 27, 28, 31, 32, 34, 35, 37, 42, 65, 134–135, 159, 168, 169, 178–182, 187, 188, 189, 190, 192
  see also unfairness
formulaic sequences 50, 51, 86, 168, 174, 192–193

G
gaps in knowledge 19, 82, 129, 135, 142, 174, 182, 189, 192
GCSE 1–2, 4, 8, 10–11, 42–47, 173, 211–212
grammar 14, 79, 134, 135, 139, 158
grammatical competence 196
grammatical information 90, 141, 160
guessing 81–82

H
high stakes examinations 26, 31
howlers see dictionary howlers

I
impact 25, 125, 176, 189
  on test scores 42, 43, 58–60, 62, 77, 171, 189, 221–222
  on test takers 28, 34, 35, 130, 176, 178
  on time 3, 7, 8, 44, 49, 97–106, 140–142, 156–157, 165, 176, 177, 190–191
independence see learner independence
independent learning 2
interaction 30, 130, 143
interactiveness 25, 26, 28, 173–175, 178, 189

L
language ability 14, 23, 31, 174
language proficiency 23, 24, 26, 141, 143, 179
learner autonomy 3, 20, 23, 50
learner independence 7
  see also independent learning
Lexical Frequency Profiles 66–76, 84–86, 174, 180, 214, 223–225
lexis
  range of 68, 77, 84, 92, 180
  sophistication of 69, 71, 72, 73–75, 77, 78, 83, 174, 180
lexicography 21
LFPs see Lexical Frequency Profiles

M
maturity of the test takers 105–106, 119
meanings 1, 7, 14, 16, 20, 21, 66, 78–80, 81, 82, 126
monolingual dictionaries 15, 16, 17, 18, 19, 22, 167, 183

O
observational evidence 95–96, 101, 130–132, 140–141
Open University (OU) 78–79, 104, 106, 163

P
PEDs see electronic dictionaries
perceptions of test takers 2, 11, 35, 149–153, 175, 179, 180–181
portfolio assessment 27–29, 188
practicality 25, 29, 144, 176–177, 191
psychological benefit 129–130, 182

Q
Qualifications and Curriculum Authority (QCA) 1, 5, 42, 46, 47, 182, 212

R
Range software 66, 84
raters 40, 55, 62, 77, 214
real-world contexts 14, 24, 30
real-world practice 20, 23, 28, 34, 37, 143, 172, 173
reassurance 21, 163, 177
reliability 25–26, 29, 33, 38, 41, 54, 169–170, 178, 188, 189, 213–215
research design
  independent groups 39, 41, 185
  mixed method 41
  repeated measures 38, 39, 42, 170, 185
rhetorical effect 19, 129, 135, 142, 192

S
scaffolding 166–168
scores see impact on test scores
scoring
  criteria for 40
  method of 29
  rubric for 56–57, 59, 62, 170, 171, 174
  see also diagnostic scoring rubric
sense of security 6, 143, 177
stakeholders 2, 4, 7, 8, 25, 35, 172, 173, 176
strategies 19, 80–82, 84, 136–138, 154, 156, 192
stress 7, 130, 163, 176, 177, 189
support resources 9, 30, 33, 144
supportive tools 34

T
target language use 23, 30, 172
  see also TLU domains
taxonomy 153, 156, 159, 215
test accommodations 9, 41
thesaurus 20
time see impact on time
time-constrained contexts 27, 28, 30, 31, 100, 106, 107, 120, 130, 138, 140, 142, 176, 194
timed tests 13, 182
  of writing 29, 30, 32, 37, 170, 188, 189
TLU domains 23, 25, 30, 172, 178, 189

U
unfairness 26, 32, 41, 44, 143, 158, 159, 164, 171, 179, 181, 182, 192
  see also fairness
usefulness 24–26, 34, 169, 178, 189

V
validity 35, 38
  of the LFP 68–69
  of research design 41
  threats to 32
  see also construct validity

W
washback 6, 9, 28, 112, 164
Writing Study Guide 50, 144, 145, 167, 168, 177, 193, 195–196
In the series Language Learning & Language Teaching the following titles have been published thus far or are scheduled for publication:

23 Philp, Jenefer, Rhonda Oliver and Alison Mackey (eds.): Second Language Acquisition and the Younger Learner. Child's play? approx. vi, 320 pp. + index. Expected October 2008
22 East, Martin: Dictionary Use in Foreign Language Writing Exams. Impact and implications. 2008. xiii, 228 pp.
21 Ayoun, Dalila (ed.): Studies in French Applied Linguistics. Forthcoming
20 Dalton-Puffer, Christiane: Discourse in Content and Language Integrated Learning (CLIL) Classrooms. 2007. xii, 330 pp.
19 Randall, Mick: Memory, Psychology and Second Language Learning. 2007. x, 220 pp.
18 Lyster, Roy: Learning and Teaching Languages Through Content. A counterbalanced approach. 2007. xii, 173 pp.
17 Bohn, Ocke-Schwen and Murray J. Munro (eds.): Language Experience in Second Language Speech Learning. In honor of James Emil Flege. 2007. xvii, 406 pp.
16 Ayoun, Dalila (ed.): French Applied Linguistics. 2007. xvi, 560 pp.
15 Cumming, Alister (ed.): Goals for Academic Writing. ESL students and their instructors. 2006. xii, 204 pp.
14 Hubbard, Philip and Mike Levy (eds.): Teacher Education in CALL. 2006. xii, 354 pp.
13 Norris, John M. and Lourdes Ortega (eds.): Synthesizing Research on Language Learning and Teaching. 2006. xiv, 350 pp.
12 Chalhoub-Deville, Micheline, Carol A. Chapelle and Patricia A. Duff (eds.): Inference and Generalizability in Applied Linguistics. Multiple perspectives. 2006. vi, 248 pp.
11 Ellis, Rod (ed.): Planning and Task Performance in a Second Language. 2005. viii, 313 pp.
10 Bogaards, Paul and Batia Laufer (eds.): Vocabulary in a Second Language. Selection, acquisition, and testing. 2004. xiv, 234 pp.
9 Schmitt, Norbert (ed.): Formulaic Sequences. Acquisition, processing and use. 2004. x, 304 pp.
8 Jordan, Geoff: Theory Construction in Second Language Acquisition. 2004. xviii, 295 pp.
7 Chapelle, Carol A.: English Language Learning and Technology. Lectures on applied linguistics in the age of information and communication technology. 2003. xvi, 213 pp.
6 Granger, Sylviane, Joseph Hung and Stephanie Petch-Tyson (eds.): Computer Learner Corpora, Second Language Acquisition and Foreign Language Teaching. 2002. x, 246 pp.
5 Gass, Susan M., Kathleen Bardovi-Harlig, Sally Sieloff Magnan and Joel Walz (eds.): Pedagogical Norms for Second and Foreign Language Learning and Teaching. Studies in honour of Albert Valdman. 2002. vi, 305 pp.
4 Trappes-Lomax, Hugh and Gibson Ferguson (eds.): Language in Language Teacher Education. 2002. vi, 258 pp.
3 Porte, Graeme Keith: Appraising Research in Second Language Learning. A practical approach to critical analysis of quantitative research. 2002. xx, 268 pp.
2 Robinson, Peter (ed.): Individual Differences and Instructed Language Learning. 2002. xii, 387 pp.
1 Chun, Dorothy M.: Discourse Intonation in L2. From theory and research to practice. 2002. xviii, 285 pp. (incl. CD-rom).