Touch and Blindness: Psychology and Neuroscience
Edited by
Morton A. Heller and Soledad Ballesteros
2006
LAWRENCE ERLBAUM ASSOCIATES, PUBLISHERS Mahwah, New Jersey London
This edition published in the Taylor & Francis e-Library, 2008.
“To purchase your own copy of this or any of Taylor & Francis or Routledge’s collection of thousands of eBooks please go to www.eBookstore.tandf.co.uk.”

Copyright © 2006 by Lawrence Erlbaum Associates, Inc. All rights reserved. No part of this book may be reproduced in any form, by photostat, microform, retrieval system, or any other means, without prior written permission of the publisher.

Lawrence Erlbaum Associates, Inc., Publishers
10 Industrial Avenue
Mahwah, New Jersey 07430
www.erlbaum.com

Library of Congress Cataloging-in-Publication Data

Touch and blindness : psychology and neuroscience / edited by Morton A. Heller and Soledad Ballesteros.
p. cm.
Includes bibliographical references and index.
ISBN 0-8058-4725-1 (cloth : alk. paper)
ISBN 0-8058-4726-X (pbk. : alk. paper)
1. Touch—Psychological aspects. 2. Blindness—Psychological aspects. 3. Touch. 4. Blindness. I. Heller, Morton A. II. Ballesteros, Soledad.
BF275.T685 2006
152.1—dc22    2006053041
CIP
ISBN 1-4106-1567-7 Master e-book ISBN
Contents
Preface  vii

1  Introduction: Approaches to Touch and Blindness  1
   Morton A. Heller and Soledad Ballesteros

PART I: PSYCHOLOGY

2  Processing Spatial Information From Touch and Movement: Implications From and for Neuroscience  25
   Susanna Millar

3  Picture Perception and Spatial Cognition in Visually Impaired People  49
   Morton A. Heller

4  Form, Projection and Pictures for the Blind  73
   John M. Kennedy and Igor Juricevic

5  Haptic Priming and Recognition in Young Adults, Normal Aging, and Alzheimer’s Disease: Evidence for Dissociable Memory Systems  95
   Soledad Ballesteros and José M. Reales

6  Tactile Virtual Reality: A New Method Applied to Haptic Exploration  121
   José Antonio Muñoz Sevilla

PART II: NEUROSCIENCE

7  Do Visual and Tactile Object Representations Share the Same Neural Substrate?  139
   Thomas W. James, Karen Harman James, G. Keith Humphrey, and Melvyn A. Goodale

8  Cerebral Cortical Processing of Tactile Form: Evidence From Functional Neuroimaging  157
   K. Sathian and S. C. Prather

9  The Role of Visual Cortex in Tactile Processing: A Metamodal Brain  171
   Alvaro Pascual-Leone, Hugo Theoret, Lotfi Merabet, Thomas Kauffmann, and Gottfried Schlaug

10  Conclusions: Touch and Blindness, Psychology and Neuroscience  197
    Soledad Ballesteros and Morton A. Heller

Author Index  219

Subject Index  229
Preface
The area of touch and blindness has been of interest to psychologists and neuroscientists for as long as the fields have existed. Long before there were professionals in these areas, philosophers and medical professionals were concerned with the issues that are discussed in this book. The problems are important, but are not at all simple. It is likely that interest in these fields will continue for the foreseeable future.

The field of touch has been undergoing a rapid transformation over the past few years, and the changes that are presently taking place may accelerate in the future. Professional societies with interests in touch and blindness have grown in number, as have international meetings with a focus in the area. One new area of emphasis has involved the development of sophisticated technology that is designed to provide assistance to blind persons by allowing touch to access computers. There have been dramatic developments in robotics that demand haptic interfaces. Effective action in the world requires haptic feedback. One cannot manipulate objects precisely if they cannot be felt. Just think of the damage that one can do to oneself and to the world if one’s hands are numb! Researchers in this area have also been concerned with teleoperation and the ability to remotely sense spatial information. While exciting advances have taken place in the field of robotics, the present volume has relatively little to say about these applications.

Advances in technology have also led to the development of new devices that allow for more effective study of the relationship between brain events,
neural functioning, and haptics. Neuroscientists are concerned with measuring the functioning of the brain, and many are interested in understanding issues that have concerned psychologists. These issues include imagery, whether there are important differences in the way that the senses process information, and whether this may be reflected in brain organization and reorganization. A number of the authors of chapters in this book approach the study of touch and blindness from the perspective of neuroscience. More traditional psychological methodology has certainly added to our knowledge of haptics in sighted and blind people.
ACKNOWLEDGMENTS

This volume is derived from the talks presented by the invited speakers at a conference that was held in Madrid, Spain, October 16–18, 2002. The meeting was held at the Universidad Nacional de Educación a Distancia (UNED) and was organized by Soledad Ballesteros and Morton A. Heller.

Morton A. Heller wishes to acknowledge research support from NSF grant 0317293, “Haptic Spatial Perception in the Sighted and Blind.” He is especially grateful for the help provided by all of the blind participants in the research described in this volume. Many students have assisted in the research reported here, and all certainly deserve mention. Special thanks go to Kathy Wilson, Melissa McCarthy, Ashley Clark, Jayme Greene, Keiko Yoneyama, and Melissa Shanley.

Soledad Ballesteros wishes to thank the Dirección General de Universidades (grant BSO2000-0118.CO2-01) for financial support of the research on implicit and explicit memory in young adults, older adults, and Alzheimer’s disease patients for objects encoded by touch. Special mention goes to the older adults and the Alzheimer’s disease patients who participated in the research. We also acknowledge the participation of the student Alicia Sánchez in running the experiment.

The Conference on Touch, Blindness, and Neuroscience was supported by the Spanish Ministerio de Ciencia y Tecnología, ONCE (the Spanish National Organization of the Blind), UNED (Universidad Nacional de Educación a Distancia), and the IUED (Instituto Universitario de Educación a Distancia).
Introduction: Approaches to Touch and Blindness

Morton A. Heller, Eastern Illinois University
Soledad Ballesteros, Universidad Nacional de Educación a Distancia, Madrid, Spain
Research on touch has blossomed, with recent years seeing startling growth in the field. This renewed interest represents an awakening of a realization that fundamental perceptual problems may be solved in this arena. Researchers have approached the problems of touch and haptics from a great many directions, and this volume aims to bring a number of important contemporary approaches to the forefront. In this volume, researchers have approached the study of touch and blindness from the perspectives of psychological methodology and the most sophisticated, state-of-the-art techniques in neuroscience. The traditional investigation of touch and blindness has involved psychophysics; however, the historical roots of interest in the area are philosophical, psychological, and medical. The problems posed are important for a variety of theoretical reasons, since they bear on issues of intersensory equivalence and intermodal relations. Psychologists and philosophers have long wondered if we can get equivalent information through the senses of vision and touch (Freides, 1974, 1975; Ryan, 1940; Streri & Gentaz, 2003). The ecological answer is affirmative (Gibson, 1966), but not everyone has agreed with this appraisal (Revesz, 1950). Philosophers have been interested in cases where congenitally blind persons have had sight restored (e.g., Gregory & Wallace, 1963). Gregory and Wallace (1963) have published a fascinating account of the restoration of sight in a congenitally blind person (also see Morgan, 1977). The assumption is that initial responses of the individual after the restoration of sight will provide some reasonable answer to questions about the equivalence of vision and touch. Thus, should persons immediately understand the configuration of a seen object that was only felt in the past, that would be an indication of intersensory equivalence. The problem, of course, is that it is rare that psychologists are able to gain immediate access to persons when their sight is restored. Moreover, the failure of sight would not be a crucial test of the notion of intersensory equivalence, since sight is rarely normal immediately after the removal of surgical bandages.

Researchers have been interested in studying haptics (touch perception) in blind people for a number of important reasons. Research with blindfolded sighted individuals may frequently underestimate the potential of touch for pattern perception (Heller, 1989, 2000; Heller, Brackett, Scroggs, Allen, & Green, 2001; Heller et al., 2002; Heller, Wilson, Steffen, Yoneyama, & Brackett, 2003). Blindness may be accompanied by increased tactile sensitivity (Sathian & Prather, chap. 8, this volume; Van Boven, Hamilton, Kaufman, Keenan, & Pascual-Leone, 2000). Blind individuals have increased haptic skill, and very low vision and late blind persons may show advantages over the sighted when pattern familiarity is controlled for (see Heller, 2000; Heller et al., 2001). Blindfolded persons lack visual guidance of haptic exploration (Heller, 1993), and it is known that sight of hand motion helps touch in a variety of tasks involving pattern perception and texture judgments (Heller, 1982; Heller et al., 2002). We also know that it may help a person attend to touch if he or she directs vision toward the location of a target. A blind person told Morton Heller that it helps him concentrate on touch perception if he “looks” at his hands while feeling objects. This person is totally blind, and does not have any light perception. Researchers report empirical evidence that gaze direction can aid haptics, and spatial resolution is improved and reaction time is speeded when persons look at their hands (e.g., Kennett, Taylor-Clarke, & Haggard, 2001). Visual guidance may aid touch in many ways. The benefits could be attentional, and this would be consistent with the previously mentioned comments by the late blind individual. In addition, very blurry vision may provide spatial reference information that is often lacking in blind and blindfolded conditions (see Millar, 1994). Spatial frame of reference information can help subjects interpret patterns that are defined by orientation, namely Braille patterns (Heller, 1985). Vision can be so blurry (as with visual impairment or the use of
stained glass) that it is impossible to see surface features (e.g., Heller, 1982, 1985, 1989). Nonetheless, this blurry vision of hand motion can aid haptic perception of form. Moreover, it is conceivable that some form information is obtained by watching one’s hand feel patterns, even when the patterns themselves are not visible. Thus, sighted subjects were able to name Braille patterns by viewing another individual touch them (Heller, 1985). The patterns themselves could not be seen, because of the effect of interposed stained glass. However, subjects were able to glean useful pattern information by observing the finger motion of another person.

Touch is an accurate and fast modality in detecting salient attributes of the spatial layout of tangible unfamiliar displays, especially their bilateral symmetry (see Ballesteros, Manga, & Reales, 1997; Ballesteros, Millar, & Reales, 1998; Locher & Simmons, 1978). Active touch is an accurate perceptual system in discriminating this spatial property in shapes and objects. Although touch is quite sensitive in dealing with flat displays, it is sometimes far more accurate and faster with 3-D objects. Moreover, studies on the discrimination between symmetrical and asymmetrical patterns underscored the reference frame hypothesis. Accuracy in the perception of symmetric two-dimensional raised line shapes improved under bimanual exploration (Ballesteros et al., 1997; Ballesteros et al., 1998). Bimanual exploration proved to be superior to unimanual exploration due to extraction of parallel shape information and to the use of the observer’s body midline as a body-centered reference frame. The findings suggest that providing a reference frame in relation to the body midline helps one to perceive that both sides of a bilaterally symmetrical shape coincide. The finding was further supported in another study (Ballesteros & Reales, 2004b) designed to assess human performance in a symmetry discrimination task using new, two-dimensional (raised-line shapes and raised surfaces) and three-dimensional shapes (short and tall objects). These stimuli were prepared by extending the 2-D shapes in the z-axis. The hypothesis under test was that the elongation of the stimulus shapes in the third dimension should permit a better and more informative exploration of objects by touch, facilitating symmetry judgments. The idea was that providing reference information should be more effective for raised shapes than for objects, since reference information is very poor for those stimuli when they are explored with one finger. The results confirmed this hypothesis, since unimanual exploration was more accurate for asymmetrical than for symmetrical judgments, but only for 2-D shapes and short objects. Bimanual exploration at the body midline facilitated the discrimination of symmetrical shapes without changing performance with asymmetrical ones. Accuracy for haptically explored symmetrical stimuli improved as they were extended in the third dimension, while no such trend was found for asymmetrical stimuli.
PERCEPTION IS NORMALLY MULTIMODAL

In sighted individuals, objects and space are normally perceived through multisensory input. We see the world, feel it, hear it, and smell it. It is rare that we are limited to one sense when we seek information about the world. Thus, we may use vision to guide tactual exploration (Heller, 1982), or for pattern perception. People are able to localize objects in space by looking at them and by feeling them. We may use peripheral vision for guidance of haptic exploration, and simultaneously use foveal vision for pattern perception when looking at objects that are at a different location in the world. Moreover, vision can be used to orient objects for more efficient haptic exploration. The two senses of vision and touch may cooperate to allow us to move objects in space so that we can see them more effectively.

Of course, there are instances in which vision and touch yield contradictory information about the world, rather than redundant information. We may look at a snake and it appears slimy and wet, but it feels cool, smooth, and dry. Visible textures are not invariably tangible, particularly when they are induced by changes in coloration that do not include alterations in surface configuration. For example, one cannot feel the print on a page of this volume, but these changes in brightness and contrast are certainly visible. While the senses may yield conflicting input about objects in the world, we learn when to rely on vision or touch, and when to attempt to compensate for these apparent perceptual errors. If two senses yield contradictory information, they cannot both be correct. Fortunately, it is more often the case that the senses provide redundant information that is accurate and reliable, leading to veridical percepts (see Ernst & Banks, 2002; Gibson, 1966; Millar, 1994).

There has been a recent trend toward the view that perception is typically multimodal, and this movement has roots in psychology and in neuroscience. From the psychological perspective, there has been an increasing interest in intermodal interactions from Spence and his colleagues, and many others (e.g., Spence, Kingstone, Shore, & Gazzaniga, 2001). For example, Reales and Ballesteros (1999) studied the architecture of memory representations underlying implicit and explicit recollection of previous experiences with visual and haptic familiar 3-D objects. They found that cross-modal priming as a measure of implicit recovery of object information (vision to touch and touch to vision) was equal to within-modal priming (vision to vision and touch to touch). The interpretation was that the same or very similar structural descriptions mediate perceptual priming in both modalities (see also Easton, Greene, & Srinivas, 1997). This issue is discussed in more detail later in this chapter. There has been a renewed interest in intermodal relations from a neuroscience perspective (e.g., James et al., 2002; James, James, Humphrey, & Goodale, chap. 7, this volume; Millar, chap. 2, this volume; Millar & Al-Attar, 2002;
Pascual-Leone, Theoret, Merabet, Kauffmann, & Schlaug, chap. 9, this volume; Pascual-Leone & Hamilton, 2001; Roder & Rosler, 1998; Sadato et al., 1998; Sathian, 2000; Sathian & Prather, chap. 8, this volume; Stein & Meredith, 1993). An examination of many of these contributions shows this increased emphasis on the relationship between the senses, and a very interesting blurring of the lines between psychology and neuroscience. This issue is taken up again very shortly in this introduction (also see Milner, 1998).
IMAGERY AND VISUAL EXPERIENCE IN TOUCH

An important motive for research on blind individuals derives from interest in the roles of visual imagery and visual experience in the development of spatial awareness, pattern perception, and memory (e.g., Ballesteros & Reales, chap. 5, this volume). We take the first two issues in turn, although they are intimately related. Note, also, that some persons in our society appear to believe that vision is the only adequate sense for spatial perception (see Millar & Al-Attar, 2002; Millar, this volume). There is little doubt that vision is an excellent spatial sense. However, there also can be no doubt that spatial perception can be superb in the absence of vision, as with some persons who are congenitally blind or early blind.

A lack of visual experience has implications for the nature of the imagery that one possesses. Individuals who are born without sight are presumably incapable of using visual imagery in understanding space. While their imagery could be spatial, it must derive from different sensory experiences than imagery that is specifically visual in nature. Visual imagery is known to aid memory (e.g., Paivio, 1965), and may be especially useful in coping with complex imagery tasks (Cornoldi & Vecchi, 2000). Of course, color may be relevant to object recognition. Late blind persons recall how things look, and report using visual imagery. Sighted persons certainly report experiencing visual images while feeling objects (Lederman, Klatzky, Chataway, & Summers, 1990), but it is not likely that this visualization process is needed for the perception of 2-D forms. Current evidence suggests that while it may be helpful, it is not necessary, since congenitally blind subjects perform very much like blindfolded sighted individuals in picture perception and matching tasks, and in a variety of tasks that assess spatial reasoning (see Heller, chap. 3, this volume; Kennedy, 1993; Millar, 1994). Of course, this does not mean that visual imagery is not useful, and it surely is. However, the evidence suggests that other forms of representation may often substitute for a lack of vision (Millar, 1994). Moreover, the nature of processing might differ between the sighted and the congenitally blind. Thus, one should probably not attempt to draw inferences from the blind to the sighted, nor vice versa.
Some researchers have argued that haptics is limited in blind persons, since they are not likely to be able to think of objects from a particular vantage point (Hopkins, 2003), nor are they able to think in terms of perspective (Arditi, Holtzman, & Kosslyn, 1988). The representation of depth relations is a problem for haptic pictures, and has only recently received much interest (Holmes, Hughes, & Jansson, 1998; Jansson & Holmes, 2003). When asked to draw pictures of a slanted board to show the tilt, congenitally blind persons did not spontaneously draw rectangular shapes using foreshortening and perspective cues to depth (Heller, Calcaterra, Tyler, & Burson, 1996). They did not use a reduction in drawing height as the board was tilted, nor did they use converging lines in their depictions of the board. Their drawings were all the same height, despite inclination of the board in depth. Moreover, more than one blind person has indicated (to Morton Heller) that blind people spontaneously tend to imagine objects as a whole, and not from one side. However, blind people are able to adopt a vantage point, and have demonstrated this in a number of studies (Heller & Kennedy, 1990; Heller, Kennedy, & Joyner, 1995; Heller et al., 2002). Perspective is a complex problem, and there is little reason to believe that blind people would spontaneously draw in “correct” perspective (see Heller et al., 2002). Drawing in conventional linear perspective requires that one adopt a viewpoint, and imagine a complex array projected upon a plane intersecting that vantage point (Panofsky, 1925/1991). However, sighted persons require instruction before they are able to produce accurate perspective representations, and relatively few individuals learn this skill. Geometrical perspective is a relatively recent invention in the history of art, and was not known in its current form before Brunelleschi and the Renaissance (Kemp, 1990). Of course, there are a number of experiential implications of a lack of visual experience that are independent of the imagery issue, and have little to do with visual sensory experience, per se. Visual information is processed much more rapidly than haptic information. This means that considerably more information can be acquired in the same time frame by sighted individuals than blind persons. While this time difference may not matter much in some instances, it can mean that the educational background of sighted individuals will likely have progressed at a faster rate than for blind children. Thus, the educational backgrounds of blind individuals may differ from that of the sighted, and much of this may have little to do with visual imagery. Blind persons may have had minimal experience with adopting a specific viewpoint when interpreting graphics, since they have had far less experience with maps and pictures than sighted individuals. In addition, blind persons are likely to be less familiar with mental rotation tasks, even though they have been tutored in generating
left-right reversals when producing Braille (Heller, Calcaterra, Green, & Lima, 1999). Slower information acquisition may mean that there is a developmental lag in education and in the development of spatial skills. This may not matter in the long run, but could have important consequences for the evaluation of studies that involve blind children or adolescents. A large body of evidence suggests that there is a lag in the development of spatial understanding in blind individuals (Hatwell, 1985). This developmental lag probably results from the relative inefficiency of touch, and the slower acquisition of some forms of spatial knowledge via haptics. However, a recent study showed that visual status was significant in a number of subtests of a newly developed haptic test battery. In a number of subtests, blind children showed better performance than sighted children from 3 to 16 years of age, especially at younger ages. The new psychological instrument was originally composed of 20 subtests created to assess a wide range of abilities, ranging from tactual discrimination, systematic scanning and shape coding to short-term and longer-term memory tasks (Ballesteros, Bardisa, Millar, & Reales, 2004; Ballesteros, Bardisa, Reales, & Muñíz, 2004). Blind and visually impaired children performed better than the sighted in the several spatial subtests designed to assess figure-ground organization, dimensional structure, spatial orientation, efficient dot scanning, graph and diagram scanning skills, and symmetry detection accuracy (raised-lines shapes and raised surfaces). Moreover, blind children were also better on the dot span (short-term memory, or STM) subtest, and the longer-term recognition (LTR) memory for novel objects subtest. The advantage was found particularly at the youngest age levels. The older sighted individuals tended to catch up. These results suggest the effectiveness of specific early training that the blind children received at school. Blind persons may ultimately develop a sophisticated understanding of spatial relations. Heller et al. (2001) found that subjects with very low vision showed better performance on the Piagetian water-level problem than sighted subjects who used their sight. Thus, a temporary lag in development may not presage a deficiency in the blind adult, over the long run. Certainly, we should be cautious in attempting to generalize from data derived from visually impaired children to adults. The lack of visual experience in the blind individual also provides an opportunity to study psychological and brain functioning that is “purely tactual” in the sense that it may not be altered by visual experience. Of course, audition contributes to spatial perception in blind persons, as well as other modalities. From a psychological perspective, the study of haptic performance in blind children and adults represents a unique model for understanding haptic perception and cognition. For the neuroscientist, blind persons allow an investigation into
the functional organization of the brain in a manner that will unravel the role of experience. There is increasing evidence that the functional organization of the brain and nervous system will change with experience. In addition, it is known that the brain is plastic, and areas that are not used for vision may possibly take on new roles in the congenitally blind person. This is an oversimplification of a complex problem, but study in this arena has made for some very exciting and interesting, but provocative discoveries. A number of researchers report that occipital regions of the cortex may serve haptic object perception in blind individuals, in those who experience visual deprivation, and even in the sighted. Gentaz and Hatwell (2004) have pointed out a number of important reasons for studying haptics in blind people. Haptic illusions may occur in sighted persons because of the influence of visual imagery, but this can not be the case for the congenitally blind person. This allows researchers to understand the sense of touch and touch perception without the influence of vision. Illusions could occur for different reasons, and could be influenced by different mechanisms in touch and vision. The evidence suggests that illusions like the Mueller-Lyer illusion operate in a very similar way in vision and touch, and the Mueller-Lyer illusion is found in congenitally blind people (e.g., Heller et al., 2002). However, the horizontal-vertical illusion is greatly influenced by factors such as the orientation of patterns in space, and the manner in which the stimuli are touched (Heller, Brackett, Salik, Scroggs, & Green, 2003). In the horizontal-vertical illusion, subjects normally overestimate vertical line segments compared to horizontal lines, but only when the vertical leads to tracing motions that converge upon the body. A strong negative illusion was found for the horizontal-vertical illusion using solids, when the vertical corresponded to the gravitational vertical. When the vertical segment was gravitationally vertical, subjects overestimated horizontals, rather than verticals. These differences suggest that different mechanisms may play causal roles in some visual and haptic illusions.
TOUCH AND NEUROSCIENCE

The study of the relationship between brain functioning, neural projection, and touch has been of great interest, and we have seen considerable growth in the adoption of this approach. The mind-brain problem has been vexing and difficult; however, many individuals see the tools of neuroscience as the best way to understand touch and blindness. This reflects an increased focus on neuroscience within the professions of psychology and medicine, and in the broader scientific community.

There have been a number of clear trends in neuroscience, including a shift toward considering the senses as interrelated. Shimojo and Shams (2001) have
recently argued that we should not consider perception as modules, with the sensory modalities all functioning relatively independently. They have stressed the role of cross-modal integration in the brain, and also questioned the idea that the modalities start off as independent and separate. An important impetus for this trend has been evidence for the plasticity of the human brain. Thus, early deprivation in one modality may lead to changes in the functioning of the area in the brain that is normally devoted to that modality (see Roder et al., 1999). Brain plasticity may result from training, where neural representations are altered by experience. Thus, there may be changes in the somatosensory receptive regions in the brain as a result of altered experience (Merzenich & Jenkins, 1993; Roder & Rosler, 2002). Some current researchers have presented the view that many parts of the brain are not as specialized as once thought (e.g., James et al., 2002; James et al., chap. 7, this volume; Pascual-Leone & Hamilton, 2001; Sathian & Prather, chap. 8, this volume), and have emphasized the idea that there may be areas in the brain that are not designed to respond only to one sensory modality. The traditional view has been that there are separate brain areas that respond to vision, touch, olfaction, and so on. There is little doubt that there are primary cortical areas that show specialization of functioning; however, a rather interesting position has been forthcoming, stressing the plasticity of the brain (e.g., Pascual-Leone et al., chap. 9, this volume) and the idea that there may be brain regions where representation is shared across modalities (James et al., chap. 7, this volume; Sathian & Prather, chap. 8, this volume). New behavioral findings (Ballesteros & Reales, 2004a; chap. 5, this volume) obtained with aging healthy controls and neuropsychologically impaired Alzheimer’s disease patients are in agreement with published results of a series of neuroimaging studies. Neuroscientists have shown common activation in the lateral occipital area (LO) during haptic and visual object identification (Amedi, Jacobson, Hendler, Malach, & Zohary, 2002; Amedi, Malach, Hendler, Peled, & Zohary, 2001). Specifically, functional magnetic resonance imaging (fMRI) research conducted with young adults points to the extrastriate cortex as the potential locus of the object structural description system (James et al., 2002). The results show that haptic exploration of novel objects produced activation when the same objects were later viewed, not only in the somatosensory cortex, but in areas of the occipital cortex associated previously with visual perception. The areas MO (middle occipital) and LO (lateral occipital) were activated equally in haptic-to-visual priming and visual-to-visual priming. These neuroimaging data are consistent with a number of behavioral results (Reales & Ballesteros, 1999) and point to the involvement of these areas during haptic object exploration.
THE CONTRIBUTING AUTHORS

The next section of this introduction briefly presents the authors who have contributed their work to this volume. We have organized the chapters around the areas of psychology and neuroscience, with chapters by the psychologists presented first. Of course, there is considerable overlap between the approaches, and many of the researchers in neuroscience use rather sophisticated perceptual and cognitive manipulations in an attempt to understand brain functioning. We take the position that psychological and neuroscience approaches are equally valid, but use different methodologies. They represent different levels of understanding, but both are useful for acquiring knowledge. Of course, if one assumes that mind and brain can be equated in a very straightforward and simplistic manner, then one comes to the conclusion that neuroscience is the only reasonable method to study the “mind.” An alternative is the idea that the mind may not reduce completely to cortex, and a thorough understanding of the brain will not yield satisfying explanations of complex human cognitive functioning (see Popper & Eccles, 1977). This issue is taken up again in the concluding chapter of this volume.

There is a common theme that runs through the chapters, whether they have a focus on neuroscience or take a psychological perspective. The authors of this volume generally assume that the senses do not normally act in isolation. Vision and touch typically function together, and it is rare that we only see or only feel. Moreover, many of the authors assume intersensory equivalence, either at some psychological level or in terms of cortical functioning.
PSYCHOLOGICAL PERSPECTIVES ON TOUCH AND BLINDNESS
Susanna Millar has made important contributions to our understanding of touch and haptics. She has a distinguished record of publications in the area, and has provided a large number of important insights into haptics and blindness. Here, Millar makes the case that haptic perception is very much dependent upon spatial reference frame information. According to Millar, our processing of spatial information is influenced by our perception of external spatial reference frames, such as an external surround or, perhaps, the vertical and horizontal frames of reference. Alternatively, we may use perceiver-centered, or egocentric information as a context within which we interpret haptic spatial patterns. This egocentric information derives from bodily cues, such as posture or gravitational information. Millar argues her case with evidence that the most adequate haptic perception of form derives from coordinated multisensory input from both of these frames of reference, that is, the egocentric and exocentric, external cues. The evidence derives from very recent research on the Mueller-Lyer illusion. Millar and Al-Attar (2002) reported that instructions to use body-centered spatial reference information practically eliminated the illusory distortion that is generally found in touch and vision. In addition, there was a substantial reduction of illusory misperception when subjects used their right hands to feel patterns and their left hands to trace a rectangular surrounding framework. Thus, spatial reference information is crucial for the accurate perception of the world via touch. Millar also discusses the relationship between her views and current thinking in neuroscience on the importance of multimodal input. Of note to those with applied interests, Millar has related her work to the development of effective small-scale haptic displays. Of course, these are of importance for blind individuals. There are clear implications for the development of useful maps and tangible depictions of the layout of buildings.
Morton Heller presents a discussion of his recent research on perception of tangible pictures. Haptic pictures differ from those that we normally interpret visually, but are most similar to line drawings. Some earlier theoretical positions hold that tangible pictures are not very useful for blind individuals. In this view, vision is the only sense that is adequate for the perception of pictures. That is not the view presented in Heller’s chapter. Rather, it is argued that haptic pictures may differ from visual representations in important ways, but they are extremely useful for the depiction of spatial information. Blind and blindfolded sighted persons may sometimes have difficulty with some haptic pictures for a variety of reasons. They may not be familiar with the use of touch for interpreting pictures. Moreover, they may be unfamiliar with haptic representations of depth information, for example, perspective representations. However, some earlier research underestimated the capability of touch for picture perception. This occurred for more than one reason. First, much research in this area has used sighted subjects, and their tactile pattern perception performance is generally much poorer than that of persons with very low vision, or those who are late-blind. Thus, research with the sighted person yields data that may not be generalized to visually impaired persons. Second, we do not currently know how to generate the “best” and most comprehensible pictures. Much more research has been conducted in vision, where we have good information about the canonical views of familiar objects. Unfortunately, we do not currently know which sorts of views are optimal for haptic pictures, nor which types of haptic 2-D perspective representations are most useful for conveying spatial information. Third, there is relatively little in the research literature on the segregation of figure-ground relations in tangible pictures. Heller’s chapter (chap. 3) discusses recent evidence that bears on methods for the improvement of haptic picture perception (Heller et al., 2002; Heller, Wilson, et al., 2003). Heller and his colleagues (Heller et al., 2002) have reported that top views of geometrical solids were most comprehensible for sighted and for blind subjects. It was especially interesting that very low vision subjects performed at far higher levels than the sighted controls in a variety of haptic tasks. Furthermore, very low vision participants were superior to the sighted in the speed of locating targets in confusing backgrounds, and were more than twice as fast as the sighted subjects. These empirical findings are entirely consistent with the idea that the blindfolded sighted subject is not a suitable benchmark for the evaluation of the capability of the sense of touch. The advantage of the very-low-vision participants is entirely in agreement with views that emphasize the importance of multimodal input for optimal perception. The mere presence of light perception, without form perception, may aid haptics and serve to improve form perception (see Heller et al., 2002). The results of the research have clear implications for the education and the rehabilitation of blind individuals. They should be given instruction in the interpretation of maps. In addition, it is proposed that their education and rehabilitation would benefit from the inclusion of instruction in drawing. Tangible illustrations would enhance the education of the visually impaired in areas such as mathematics, science, and geography.
John M. Kennedy has engaged in pioneering research in the area of pictures for blind persons. His research has pointed out a number of issues with relevance for general theoretical points of view, and for neuroscience. His chapter summarizes a long line of study on haptic pictures, with some new observations from congenitally blind persons. Kennedy and Juricevic discuss some important problems in picture perception, including linear perspective and the representation of depth, lines, planes, and T-junctions. Perspective assumes the presence of a vantage point, a scene, a projection surface, and some sort of projective system. They further consider
the problems that blind people solve when attempting to draw, with or without much prior experience with the medium of representation. The authors assume that when one sees similar perceptual mechanisms at work in vision and touch, then there is evidence for common neural mechanisms for the representation of lines and surfaces. They are particularly interested in the relationship between vision and touch, and the study of picture perception in blind people can provide important insights into this realm. Some of Kennedy’s extremely insightful observations are worthy of note. Drawings by blind persons make use of projection schemes and represent outline. This is hardly surprising. What is especially interesting are reports of the use of height in the picture plane to represent depth relations, and the use of convergence and foreshortening in drawings by blind persons. While this author (Morton Heller) has not seen blind persons using height in the picture plane to represent depth relations, I have witnessed congenitally blind persons spontaneously making use of converging lines to represent depth relations when drawing geometrical solids. They also commented that the edges in the solid they felt (hexagonal prism) “angled” in depth, and so drawings of this form required “angles.” These observations are entirely consistent with the position advocated by Kennedy and Juricevic in their chapter. Kennedy notes that lines represent contours, both in visual and tangible representations. Both senses, vision and touch, can understand lines, since they represent tangible contours. Moreover, Kennedy notes that the science of perspective involves direction. Direction makes sense to touch, much as it does in vision.
Chapter 6 describes the development of exciting new technology for tactile output from computers. Computers have been used to provide virtual reality for sighted persons, via many types of visual and haptic displays. However, we have been much more successful at generating virtual reality using visual displays than haptic ones. Earlier visual displays can yield convincing visual representations of depth and form. Our haptic displays have typically used one finger, and this is a rather minimal and meager source of information for haptics. Since touch requires temporal extension for the understanding of shape information, a one-finger display is rather poor for the presentation of complex layouts, especially those that involve depth relations (see Jansson & Monaci, 2004). Sevilla describes an exciting new research project that aims to develop a virtual reality device that will make use of two fingers and will augment tactile information with auditory feedback. The haptic interface is now in a developmental stage, but when completed, has promise for the communication
of 3-D information to blind and visually impaired users. The purpose is to allow blind users to benefit from the ability to interact with 3-D computer graphics. This ambitious research project represents a collaborative effort that has the support of organizations across a number of European countries, including Spain, Ireland, Italy, and England. We hope to hear more from similar efforts in the near future. When successful, the haptic interface has the potential to provide considerable benefit to blind computer users, and may also be of interest to researchers in haptics, and to sighted individuals. Certainly, the development of an effective haptic/auditory virtual environment will be helpful to us all. The development of useful haptic virtual reality devices also holds promise for remote sensing in medical, educational, and a variety of scientific applications.
Ballesteros and Reales report an interesting set of studies on haptic memory for objects in young and old adults, and in Alzheimer’s disease (AD) patients. Ballesteros and her colleagues have found haptic priming for familiar and unfamiliar objects. The objects were presented tactually without vision in young adults (Ballesteros, Reales, & Manga, 1999). Furthermore, they report evidence that their priming effects are not modality-specific, since repetition priming was found between and within the modalities of vision and touch. They suggested that visual and haptic representations of objects share common features (Reales & Ballesteros, 1999). Ballesteros and Reales sought to investigate whether AD patients show intact perceptual priming when familiar objects are presented haptically, and whether the priming effect is similar to the priming obtained by two groups of older and young healthy adults. The study yielded two main conclusions. The first was that AD patients showed intact haptic priming in the speeded object naming task. Moreover, similar haptic priming was exhibited by the three groups (young adults, older controls, and AD patients), although young adults named the objects faster than the other two older groups. The second conclusion was that despite the intact priming, AD patients’ explicit recognition of haptically explored objects was highly impaired compared to the other two groups of healthy adults (young and older controls). The results suggest that implicit memory for haptically explored objects is preserved at early stages of Alzheimer’s disease. The double dissociation found in touch suggests the involvement of different memory systems in the performance of both memory tasks (the implicit speeded haptic object naming task and the explicit haptic recognition task). Gabrieli (1998) considered studies that showed intact priming for stimuli presented to vision and audition in AD and suggested that this could be explained by the sparing of modality-specific cortices. Gabrieli et al. (1994) explained intact visual perceptual priming in AD by the sparing of the extrastriate visual areas in early stages of the dementia. A number of recent imaging findings support the idea that areas of the occipital cortex are implicated not only in visual priming, but in haptic priming as well (James et al., 2002; James et al., chap. 7, this volume). The behavioral data reported in this volume (Ballesteros & Reales, chap. 5) provide some support for the proposal that at early stages of the dementia the occipital cortex of AD patients remains preserved, while the more anterior areas of the brain are highly deteriorated. These posterior cortical occipital areas and the somatosensory cortex might be involved in haptic priming. In fact, post mortem examinations of AD brains have shown little damage to the primary auditory, visual, and somatosensory cortices, or to the basal ganglia and cerebellum. However, severe lesions were found in the frontal, parietal, and temporal lobes of AD brains (Brun & Englund, 1981). There is a great deal of evidence that lesions in the medial-temporal lobe system and diencephalic structures produce a serious deficit in the performance of explicit (episodic and semantic) memory tasks (e.g., de Toledo-Morrell et al., 1997; Jack et al., 1997). Alzheimer’s patients suffer severe lesions in the hippocampus and the medial temporal lobe system from the beginning of the disease, so they show major deficits in the performance of explicit haptic recognition. Ballesteros and Reales reported that AD patients were very impaired in haptic conscious recognition involving explicit memory, but this was not surprising. Of special interest was the finding that AD patients were not different from the healthy older adults in a haptic implicit memory task (Ballesteros & Reales, 2004a). This suggests that different mechanisms are involved in implicit and explicit haptic memory.
HAPTICS, BLINDNESS, AND NEUROSCIENCE

The next section of the book includes contributions from a number of very important authors, all of whom adopt a neuroscience approach to the understanding of touch and blindness. Research in this area has been exciting, and the pace of progress has been breathtaking. In the present volume, James, James, Humphrey, and Goodale (chap. 7) argue that haptic recognition of objects produces activation in the somatosensory cortex and in areas of the visual cortex that are normally associated with vision. In a similar vein, Sathian and Prather (chap. 8) report that visual cortical areas are used when haptic tasks require mental rotation. Pascual-Leone and his collaborators (chap. 9) report the results of brain plasticity studies involving visual deprivation in sighted persons.
These very interesting data suggest a visual involvement in haptic processing after just a few days of visual deprivation.
Goodale is widely known for his very important theoretical work on the distinction between the dorsal and ventral streams (Milner, 1998; Milner & Goodale, 1995). One of these visual pathways presumably guides action while the other provides perceptual information about the environment. There is a wealth of empirical evidence in support of this position. Thus, the hand may not respond like the eye to some size illusions, particularly when a grasp response is used (Aglioti, De Souza, & Goodale, 1995). Although Goodale and his collaborators recognized that vision and touch have many differences, these are the only two modalities that can process the geometrical structure of three-dimensional (3-D) objects. Goodale et al. engage in elegant psychological manipulations to investigate the relationships between perception, action, and neural mechanisms. Also, people show haptic dominance given a conflict between vision and touch, but only when a pincer-grasp-like response measure was used (Heller et al., 1999). The dorsal and ventral streams presumably operate within different time domains. James et al. (chap. 7) propose that there is a common cortical representation of visual and tactual objects in the nervous system, and that this exists at the level of 3-D objects. They argue that both vision and touch are suited to the understanding of solid objects. This common representation of 3-D form is not merely a higher level cognitive or linguistic representation, however. The evidence for this position derives from sophisticated imaging studies using priming paradigms. They used abstract 3-D objects to preclude the possibility of naming (or semantic priming) providing a link between haptic and visual representations. James et al. provide evidence that haptic exploration of solids produces activation in haptic areas and in areas of the cortex that are normally involved in visual processing of form. In addition, they report that prior haptic experience with an object yields increased activation in these areas when the objects are subsequently viewed. These results were interpreted as consistent with the idea that there is considerable overlap between neural areas that serve haptic and visual processing. Goodale and his colleagues argue further that visual and haptic representations of 3-D objects take place at the level of shape processing and not at a higher lexical or semantic level. This is the case, since they have used nonsense objects for their research. They based the proposal on the results from two case studies. The first was reported by Kilgour, de Gelder, and Lederman (in press). Kilgour et al. showed that a prosopagnosic patient was unable to recognize faces not only visually but also by touch. A second line of converging results was from a patient with agnosia who was unable to produce structural representations of objects. The interesting finding was that this patient suffered bilateral lesions in the lateral occipital area (LOC). James et al. suggest that the LOC is bimodal and that perhaps this is only the beginning of a new way to understand how things work in the cortex, as perhaps areas that have been considered “visual” are also “tactual.”
Sathian has engaged in pioneering work in the area of cross-modal activation of visual cortical areas during touch. Sathian and Prather (chap. 8) focus on intersensory interactions. They report that the occipital lobe is active when subjects engage in tactile orientation grating tasks, as shown by Positron Emission Tomography (PET). They have further demonstrated the importance of the parietal-occipital region by disrupting functioning of this cortical area. Sathian and his colleagues used strong magnetic pulses, that is, transcranial magnetic stimulation (TMS), to selectively “turn off” localized regions of the cerebral cortex (Zangaladze et al., 1999). Of special interest is their report of task-specific occipital involvement in haptic tasks. Imaging data support the idea that ventral visual pathways are involved in haptic pattern perception tasks, while the dorsal visual pathways are critical in spatial tasks that may involve manipulation or mental rotation. Sathian and Prather suggest that the visual areas are normally involved in haptic tasks. Furthermore, they argue against the simplistic dichotomy of brain regions into “visual” or “somatosensory” regions. This perspective is consistent with the views presented by Pascual-Leone and his colleagues.
Pascual-Leone, like many of the authors in this volume, has engaged in research at the forefront of new developments in touch and blindness. According to Pascual-Leone, brain imaging shows that in blind people, the visual cortex is activated when reading Braille. This activation can be seen in congenitally blind subjects, where TMS can disrupt haptic processing when magnetic pulses are directed at the critical visual areas. One interpretation of these fascinating data involves the assumption that there is cross-modal plasticity in the cortex, and experience alters the organization of the brain. Pascual-Leone and his colleagues (chap. 9) describe the fascinating finding that this plasticity may occur far more rapidly than previously thought. It is certainly striking to find changes in brain organization after years of visual deprivation; however, Pascual-Leone found that 5 days of complete visual deprivation and tactile immersion training yielded activation of the visual cortex in response to stimulation of the fingers of subjects. In addition, he reported that the auditory system is also affected by visual deprivation.

These are exciting and challenging times for research in perception and mental representation of shapes and objects. The rapid growth of research into touch has come after a history of relative neglect, compared with the voluminous work on other modalities, such as vision and audition. However, it appears that we are on the threshold of important new discoveries, and very innovative approaches are presented by the authors in this volume. The authors of this book’s chapters have made important contributions to our knowledge about touch, blindness, and the neural basis of perception and memory. Some of the work is controversial, but extremely creative. It is hoped that these discussions of theoretical and empirical breakthroughs will make for fascinating reading.
REFERENCES Aglioti, S., DeSouza, J. F., & Goodale, M. A. (1995). Size-contrast illusions deceive the eye but not the hand. Current Biology, 5, 679–685. Amedi, A., Jacobson, G., Hendler, T., Malach, R., & Zohary, E. (2002). Convergence of visual and tactile shape processing in the human lateral occipital cortex. Cerebral Cortex, 12, 1202–1212. Amedi, A., Malach, R., Hendler, T., Peled, S., & Zohary, E. (2001). Visuo-haptic object-related activation in the ventral visual pathway. Nature Neuroscience, 4, 324–330. Arditi, A., Holtzman, J. D., & Kosslyn, S. M. (1988). Mental imagery and sensory experience in congenital blindness. Neuropsychologia, 26, 1–12. Ballesteros, S., Bardisa, L., Millar, S., & Reales, J. M. The haptic test battery: A new instrument to test tactual abilities in blind and visually impaired and sighted children. Manuscript submitted for publication. Ballesteros, S., Bardisa, D., Reales, J. M., & Muñíz, J. (2004). Haptic battery to test tactual abilities in blind children. In S. Ballesteros & M. A. Heller (Eds.), Touch, blindness, and neuroscience (pp. 297–308). Madrid: UNED. Ballesteros, S., Manga, D., & Reales, J. M. (1997). Haptic discrimination of bilateral symmetry in two-dimensional and three-dimensional unfamiliar displays. Perception & Psychophysics, 59, 37–50. Ballesteros, S., Millar, S., & Reales, J. M. (1998). Symmetry in haptic and in visual shape perception. Perception & Psychophysics, 60, 389–404. Ballesteros, S., & Reales, J. M. (2004a). Intact haptic priming in normal aging and Alzheimer’s disease: Evidence for dissociable memory systems. Neuropsychologia, 42, 1063–1070. Ballesteros, S., & Reales, J. M. (2004b). Visual and haptic discrimination of symmetry in unfamiliar displays extended in the z-axis. Perception, 33, 315–327. Ballesteros, S., Reales, J. M., & Manga, D. (1999). Implicit and explicit memory for familiar and novel objects presented to touch. Psicothema, 11, 785–800. Brun, A., & Englund, E. (1981). Regional patterns of degeneration in Alzheimer’s disease: Neuronal loss and histopathological grading. Histopathology, 5, 549–564. Cornoldi, C., & Vecchi, T. (2000). Mental imagery in blind people: The role of passive and active visuo-spatial processes. In M. A. Heller (Ed.), Touch, representation and blindness (pp. 143–181). Oxford: Oxford University Press.
De Toledo-Morrell, L., Sullivan, M. P., Morrell, F., Wilson, R. S., Bennett, D. A., & Spencer, S. (1997). Alzheimer's disease: In vivo detection of differential vulnerability of brain regions. Neurobiology of Aging, 18, 463–468.
Easton, R. D., Greene, A. J., & Srinivas, K. (1997). Transfer between vision and touch: Memory for 2-D patterns and 3-D objects. Psychonomic Bulletin & Review, 4, 403–410.
Ernst, M. O., & Banks, M. S. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415, 429–433.
Freides, D. (1974). Human information processing and sensory modality: Cross-modal functions, information complexity, memory, and deficit. Psychological Bulletin, 81, 284–310.
Freides, D. (1975). Information complexity and cross-modal functions. British Journal of Psychology, 66, 283–287.
Gabrieli, J. D. E. (1998). Cognitive neuroscience of human memory. Annual Review of Psychology, 49, 87–115.
Gabrieli, J. D. E., Keane, M. M., Stanger, B. Z., Kjelgaard, M. M., Banks, K. S., Corkin, S., et al. (1994). Dissociations among structural-perceptual, lexical-semantic, and event-fact memory systems in Alzheimer's, amnesic, and normal subjects. Cortex, 30, 75–103.
Gentaz, E., & Hatwell, Y. (2004). Geometrical haptic illusions: The role of exploration in the Muller-Lyer, vertical-horizontal and Delboeuf illusions. Psychonomic Bulletin & Review, 11, 31–40.
Gibson, J. J. (1966). The senses considered as perceptual systems. Boston: Houghton Mifflin.
Gregory, R. L., & Wallace, J. G. (1963). Recovery from early blindness: A case study (Experimental Society Monograph No. 2). Cambridge, England: Heffer.
Hatwell, Y. (1985). Piagetian reasoning and the blind. New York: American Foundation for the Blind.
Heller, M. A. (1982). Visual and tactual texture perception: Intersensory cooperation. Perception & Psychophysics, 31, 339–344.
Heller, M. A. (1985). Tactual perception of embossed Morse code and Braille: The alliance of vision and touch. Perception, 14, 563–570.
Heller, M. A. (1989). Picture and pattern perception in the sighted and blind: The advantage of the late blind. Perception, 18, 379–389.
Heller, M. A. (1993). Influence of visual guidance on Braille recognition: Low lighting also helps touch. Perception & Psychophysics, 54, 675–681.
Heller, M. A. (Ed.). (2000). Touch, representation and blindness. Oxford, UK: Oxford University Press.
Heller, M. A., Brackett, D. D., Salik, S. S., Scroggs, E., & Green, S. (2003). Objects, raised-lines and the haptic horizontal-vertical illusion. Quarterly Journal of Experimental Psychology: A, 56, 891–907.
Heller, M. A., Brackett, D. D., Scroggs, E., Allen, A. C., & Green, S. (2001). Haptic perception of the horizontal by blind and low vision individuals. Perception, 30, 601–610.
Heller, M. A., Brackett, D. D., Scroggs, E., Steffen, H., Heatherly, K., & Salik, S. (2002). Tangible pictures: Viewpoint effects and linear perspective in visually impaired people. Perception, 31, 747–769.
Heller, M. A., Calcaterra, J., Green, S., & Lima, F. (1999). The effect of orientation on Braille recognition in persons who are sighted and blind. Journal of Visual Impairment and Blindness, 93, 416–419.
Heller, M. A., Calcaterra, J. A., Tyler, L. A., & Burson, L. L. (1996). Production and interpretation of perspective drawings by blind and sighted people. Perception, 25, 321–334.
Heller, M. A., & Kennedy, J. M. (1990). Perspective taking, pictures and the blind. Perception & Psychophysics, 48, 459–466.
Heller, M. A., Kennedy, J. M., & Joyner, T. D. (1995). Production and interpretation of pictures of houses by blind people. Perception, 24, 1049–1058.
Heller, M. A., Wilson, K., Steffen, H., Yoneyama, K., & Brackett, D. D. (2003). Superior haptic perceptual selectivity in late-blind and very low vision subjects. Perception, 32, 499–511.
Holmes, E., Hughes, B., & Jansson, G. (1998). Haptic perception of texture gradients. Perception, 27, 993–1008.
Hopkins, R. (2003). Touching, seeing and appreciating pictures. In E. Axel & N. Levent (Eds.), Art beyond sight: A resource guide to art, creativity, and visual impairment (pp. 186–199). New York: American Foundation for the Blind Press.
Jack, C. R., Peterson, R. C., Xu, I. C., Waring, S. C., O'Brien, P. C., Tangalos, E. G., Smith, G. E., Ivnik, R. J., & Kokmen, E. (1997). Medial temporal atrophy on MRI in normal aging and very mild Alzheimer's disease. Neurology, 49, 786–794.
James, T. W., Humphrey, G. K., Gati, J. S., Servos, P., Menon, R. S., & Goodale, M. A. (2002). Haptic study of three-dimensional objects activates extrastriate visual areas. Neuropsychologia, 40, 1706–1714.
Jansson, G., & Holmes, E. (2003). Can we read depth in tactile pictures? In E. Axel & N. Levent (Eds.), Art beyond sight: A resource guide to art, creativity, and visual impairment (pp. 1146–1156). New York: American Foundation for the Blind Press.
Jansson, G., & Monaci, L. (2004). Haptic identification of objects with different numbers of fingers. In S. Ballesteros & M. Heller (Eds.), Touch, blindness and neuroscience (pp. 205–215). Madrid, Spain: Universidad Nacional de Educación a Distancia.
Kemp, M. (1990). The science of art: Optical themes in western art from Brunelleschi to Seurat. New Haven, CT: Yale University Press.
Kennedy, J. M. (1993). Drawing and the blind. New Haven, CT: Yale University Press.
Kennett, S., Taylor-Clarke, M., & Haggard, P. (2001). Noninformative vision improves the spatial resolution of touch. Current Biology, 11, 1188–1191.
Kilgour, A. R., de Gelder, B., & Lederman, S. J. (in press). Haptic face recognition and prosopagnosia. Neuropsychologia.
Lederman, S. J., Klatzky, R. L., Chataway, C., & Summers, C. D. (1990). Visual mediation and the haptic recognition of two-dimensional pictures of common objects. Perception & Psychophysics, 47, 54–64.
Locher, P. J., & Simmons, R. W. (1978). Influence of stimulus symmetry and complexity upon haptic scanning strategies during detection, learning, and recognition tasks. Perception & Psychophysics, 32, 110–116.
Merzenich, M. M., & Jenkins, W. M. (1993). Reorganization of cortical representations of the hand following alterations of skin inputs induced by nerve injury, skin island transfers, and experience. Journal of Hand Therapy, 6, 89–104.
Millar, S. (1994). Understanding and representing space: Theory and evidence from studies with blind and sighted children. Oxford, UK: Oxford University Press.
Millar, S., & Al-Attar, Z. (2002). The Mueller-Lyer illusion in touch and vision: Implications for multisensory processes. Perception & Psychophysics, 64, 353–365.
Milner, A. D. (1998). Streams and consciousness: Visual awareness and the brain. Trends in Cognitive Science, 2, 25–30.
Milner, A. D., & Goodale, M. A. (1995). The visual brain in action. Oxford, UK: Oxford University Press.
Morgan, M. J. (1977). Molyneux's question. New York: Cambridge University Press.
Paivio, A. (1965). Abstractness, imagery, and meaningfulness in paired-associate learning. Journal of Verbal Learning and Verbal Behavior, 4, 32–38.
Panofsky, E. (1991). Perspective as symbolic form. New York: Zone. (Original work published 1925)
Pascual-Leone, A., & Hamilton, R. (2001). The metamodal organization of the brain. In C. Casanova & M. Ptito (Eds.), Progress in brain research, 134, 427–445.
Popper, K. R., & Eccles, J. C. (1977). The self and its brain: An argument for interactionism. New York: Routledge & Kegan Paul.
Reales, J. M., & Ballesteros, S. (1999). Implicit and explicit memory for visual and haptic objects: Cross-modal priming depends on structural descriptions. Journal of Experimental Psychology: Learning, Memory, and Cognition, 25, 1–20.
Revesz, G. (1950). The psychology and art of the blind. London: Longmans Green.
Roder, B., & Rosler, F. (1998). Visual input does not facilitate the scanning of spatial images. Journal of Mental Imagery, 22, 165–182.
Roder, B., & Rosler, F. (2002). The principle of brain plasticity. In R. H. Kluwe, G. Luer, & F. Rosler (Eds.), Principles of learning and memory (pp. 27–49). Basel-Boston-Berlin: Birkhauser Verlag.
Roder, B., Teder-Salejarvi, W., Sterr, A., Rosler, F., Hillyard, S. A., & Neville, H. J. (1999). Improved auditory spatial tuning in blind humans. Nature, 400(8), 162–166.
Ryan, T. A. (1940). Interrelations of the sensory systems in perception. Psychological Bulletin, 37, 659–698.
Sadato, N., Pascual-Leone, A., Grafman, J., Deiber, M., Ibanez, V., & Hallet, M. (1998). Neural networks for Braille reading by the blind. Brain, 121, 1213–1229.
Sathian, K. (2000). Practice makes perfect: Sharper tactile perception by the blind. Neurology, 54, 2203–2204.
Shimojo, S., & Shams, L. (2001). Sensory modalities are not separate modalities: Plasticity and interactions. Current Opinion in Neurobiology, 11, 505–509.
Spence, C., Kingstone, A., Shore, D. I., & Gazzaniga, M. S. (2001). Representation of visuotactile space in the split brain. Psychological Science, 12, 90–93.
Stein, B. E., & Meredith, M. A. (1993). The merging of the senses. Cambridge, MA: MIT Press.
Streri, A., & Gentaz, E. (2003). Cross-modal recognition of shape from hand to eyes in newborns. Somatosensory and Motor Research, 20, 13–18.
Van Boven, R. W., Hamilton, R. H., Kaufman, T., Keenan, J. P., & Pascual-Leone, A. (2000). Tactile spatial resolution in blind Braille readers. Neurology, 2, 2230–2234.
Zangaladze, A., Epstein, C. M., Grafton, S. T., & Sathian, K. (1999). Involvement of visual cortex in tactile discrimination of orientation. Nature, 401, 587–590.
PART I: PSYCHOLOGY
2

Processing Spatial Information From Touch and Movement: Implications From and for Neuroscience

Susanna Millar
University of Oxford
Blind conditions produce an apparent but instructive paradox. Vision is our most obvious source of spatial information. Yet congenitally totally blind people can be excellent at chess. Spatial tasks are undoubtedly a major problem for young children born without any sight. Nevertheless, visual experience is not necessary for solving spatial problems. Perhaps more puzzling still, "visual" illusions also occur in touch. The paradox raises questions that need answering if we are to understand and address the effects of blindness. The questions are also central issues in cognitive neuroscience. How do we process spatial information? What do the sense modalities contribute to spatial thinking, and how do they relate to each other? Is there something special about spatial cues in vision that haptic perception does not, or cannot, provide? Is there something in common to the obviously very different inputs that different modalities provide? If so, what is it? What determines spatial coding of inputs from touch and movement? This chapter dis-
cusses findings from studies that were designed to address the paradox and the questions it raises. The first section briefly considers some implications of very different theories that have been proposed to account for apparently paradoxical findings. In fact, neither the theories which assume that space is processed quite differently by touch than by vision, nor opposite theories that regard sensory modalities as largely irrelevant to space perception can be considered wholly wrong. Each stresses an aspect that does indeed have to be taken into account. But none of these factors is sufficient, on its own, to explain all the findings. The different aspects do, however, fit into a descriptive model that was originally designed to provide a common language for findings on spatial processing by congenitally totally blind children and evidence from neurological studies. The metaphor that was used, “convergent active processing in interrelated networks,” was greatly influenced by new neurological findings. The evidence on the multiple distributed brain areas and complex connecting circuits involved in spatial tasks, and for much greater neural plasticity than had been believed previously was more consistent with the findings for congenitally totally blind children than previous more rigid hierarchical accounts. The reference hypothesis, based on that descriptive model, defines spatial processing as organizing inputs, from whatever source, as reference cues for the location, distance, or direction response demanded by a task. But the reference hypothesis also assumes further that spatial accuracy depends on the congruence of the diverse inputs that provide potential reference cues in different task conditions. The implications of that assumption are examined in this chapter. The middle sections of this chapter focus on findings from three approaches to the question, “How far is spatial perception by touch and movement determined by the same factors as visuo-spatial perception?” One considers evidence from perceptual illusions that occur in both modalities. Perceptual illusions are an important test, because the similarity of perceptual illusions in touch and vision is often considered fortuitous. Showing that the same empirical manipulation produces the same effect in both modalities is good evidence that a common factor is involved. The studies that are examined here show that the very cues that produce accurate perception when they are used for reference produce biases when they occur in isolation. By contrast, explicit additional reference information all but eliminated a very powerful illusion in both touch and vision, demonstrating that perceptual accuracy in perceiving illusory shapes involved the same reference information in both modalities. The heuristics that people use spontaneously to remember haptic locations and distances constituted another approach to the question of spatial process-
ing and how it arises. Secondary tasks that would interfere either with spatial strategies, verbal strategies, or attempts to rehearse scanning movements, were interpolated prior to recall. It is generally believed that spatial coding can be inferred directly from performance on location (“where”) tasks, whereas that is less clear for distance tasks. We argued that if spatial coding is elicited automatically by location tasks, spontaneous spatial coding should be found even if the positioning movements for the location differ in recall. Contrary to the assumption, the location task was not affected by spatial interference when recall involved longer, complex positioning movements, whereas distance recall, which depended on a repeated small movement, showed significant spatial coding. The reversal of the usual expectation in these conditions showed that spatial heuristics are not elicited automatically by the spatial task. Modality-specific factors, such as the concordance or discrepancy of positioning movements, affect spontaneous processing heuristics. The third approach to the question whether, or how far, frames of reference are determined by the input modality, was to test body-centered and external reference cues separately and in combination in the recall of an irregularly positioned sequence of haptic locations. The view that spatial coding of haptic information depends solely on body-centered reference was not supported. Instructions to use either body-centered cues, or external frame cues for reference, improved accuracy by a similar amount. Moreover, when both forms of reference were made available, performance was twice as accurate as with either form of reference alone. The final section of the chapter considers the practical and theoretical implications of the three types of evidence.
THEORETICAL DESCRIPTIONS

It is not, perhaps, surprising that a whole range of contrasting theories has been put forward to account for the apparent paradox that spatial perception seems to depend on vision, but that vision is not necessary for spatial coding. It is not possible in this chapter to do justice to, or even to list, all of these. Instead, I am confining myself to brief examples of apparently extreme opposite views. The point is to highlight the aspects or factors that need to be assumed in any theory, and the implications that require the empirical evidence that is being considered here.
The view that space perception differs radically in vision and touch has been most influential in research on blindness. The view is largely attributable to Revesz (1950). Revesz suggested that haptic space is centered on the body, whereas vision is centered on external coordinates. He based his view on findings with blind people, which suggested differences in their estimation of dis-
tance and other spatial tasks. But the assumption that space is processed differently in touch and vision, or that vision is the basic spatial modality which is needed to integrate spatial information is also suggested in later work (e.g., Hatwell, 1978; Warren, 1977). A similar view is implied in some recent versions of the working memory model by the assumption that immediate spatial memory depends on a temporary visuo-spatial register (Logie, 1995, but see Baddeley, 2000). The traditional division of sense modalities into “proximal” and “distal” senses has had a considerable influence on the view that visuo-spatial concepts differ radically from spatial concepts derived from touch. Touch has typically been considered a proximal sense, because the stimuli arise from direct contact of objects with the body. That seems to be in complete contrast to the distal senses, especially vision, in which stimuli arise from distant objects. Revesz himself, was, of course, concerned with haptics, that is to say, with inputs that depend on movements as well as on touch, rather than with inputs from touch receptors alone. The importance of exploring and scanning movements in the perception of shapes was recognized early (e.g., Weber, 1834 & 1846/1978; Gibson, 1962; Katz, 1925), and is not in doubt. We can also move from one object to another without vision in 3-D space, and can trace raised-line outlays in tabletop displays with our fingers. But movement information is, of course, also proximal in the sense of arising within the body. If we classify sensory inputs into proximal and distal categories, information from scanning and positioning movements must also be regarded as providing proximal cues in that sense. However, as we shall see, this seemingly obvious common sense classification is not actually useful when trying to determine the relation of spatial processes to modality effects. The notion of body-centered or “egocentric” reference frames was current in other contexts prior to the distinction made by Revesz. But his association of body-centered reference with blindness is nevertheless important, because it implies that haptic (touch and movement) information can actually be processed spatially also in blind conditions. That was not always obvious earlier (e.g., von Senden, 1933, but see Katz, 1925). Indeed, it still often comes as a surprise to sighted people that blindness does not preclude spatial knowledge and performance. Revesz’ theory draws attention to the fact that proprioceptive, gravitational, and kinaesthetic cues from the body and from body postures can provide effective spatial reference in the absence of vision. The view that body-centered frames of reference are important in blind conditions has received considerable support from research with congenitally totally blind children (e.g., Millar, 1978, 1979, 1981a, 1981b, 1985, 1994). It is easy to reach for the key on your table without vision, provided that no one has moved it unbeknown to you, and it is still in the same position relative to your
body posture. Recall is more accurate when cues from the body (midline, nose, shoulder, or some other part) can be used as reference cues for the position or distance of targets. That is not a question of visualizing. Congenitally totally blind children can remember the location of an object by reference to their own body position. It is also possible to find an object by using the same reaching movement. Reliance on movement information has been distinguished experimentally from using body-centered reference (Millar, 1985; Millar & Ittyerah, 1991). Using the same movement produces errors, if the body-to-target relation is changed. But when body posture is unchanged, recall is more accurate when the positioning movements are also the same, than when the recall movement differs. It is much more difficult to find an object accurately if there is a change in body position relative to the object, or to the end of a positioning movement, than if only the retrieval movement is changed. Perception by touch is thus often a portmanteau term for multisensory information from touch, scanning movements, and from body-centered reference cues rather than referring to touch alone. That is important. We can assume that haptic (touch and movement) information can be coded spatially by reference to body-centered gravitational and posture cues. Nevertheless, the view that spatial coding of haptic inputs operates solely within personal space, and thus differs radically from space perception in vision which depends on external reference, makes assumptions that need to be tested. Briefly, the view implies that similarities in perceptual illusions in touch and vision must be chance effects, as is indeed often still assumed. Moreover, reference information that is based on external cues should be much less effective than body-centered reference, or may even be counterproductive. A completely opposite theory is that modality is irrelevant to spatial processing. The view takes several forms. What may be called the “platonist” view is that Euclidean geometric concepts are innate and are applied without any experience (e.g., Landau, Gleitman, & Spelke, 1981). However, that assumption does not actually account for the empirical findings that were held to necessitate it (Liben, 1988; Millar, 1988, 1994, 2000). We certainly have to assume, on any theory, that humans and other animals have the genetic endowments that potentially enable them to survive in the spatial environment in which they live and function, and to cope with the problems this may pose for them. Precisely how the gene pools function in these respects in particular species and individuals raises a large number of interesting questions. It is already clear that these are far too complex to be solved by the notion of inborn concepts, or to fit into neat “nature/nurture” or “rationalist/empiricist” dichotomies. But our question is, in any case, the much narrower one of how spatial processing take place, and how it relates to the input modalities in humans.
The theory that space perception is direct and amodal assumes explicitly that sensory inputs are largely irrelevant to the perception of spatial relations (J. J. Gibson, 1966, 1979). The theory is usually contrasted with constructivist theories, which argue that space perception requires unconscious logical inference (e.g., von Helmholtz, 1867), or mediation (Rock, 1984, 1997). Visual studies show that specific current (e.g., depth) cues from the environment determine visuospatial perception without cognitive mediation (e.g., Howard & Rogers, 1995, but see Rock, 1997). Subjectively, it makes good sense to suggest that we experience seeing objects in relation to each other and to the background, without actively intending to construe these relations. Similarly, constructivist accounts seem intuitively to be more appropriate for haptic perception, if only because it depends on sequential inputs from scanning movements. An interesting early description (Weber, 1834 & 1846/1978) makes the point that scanning movements in exploring the contours of a large object enable us to construct an idea of its shape. However, there is no compelling reason to consider that the presence or absence of conscious inferential or cognitive activity is the crucial characteristic of perception by either touch or vision. How far quasi inferential or cognitive activity is involved may depend on the type of perceptual task (e.g., identifying versus locating objects, Norman, 2002), or on the familiarity of the task, or on the presence of adequate current reference cues (Millar, 1994). The point is that both types of theory imply that space perception is of a “higher” order than receptor stimulation, and suggest that the sensory input modality is largely irrelevant to shape and space perception: Constructivist theories assume that shape and space perception is mediated cognitively (though not necessarily consciously). Direct perception theories consider that perceiving space and shape relations is amodal and requires no mediation. There is thus no reason, on either theory, why shape and space illusions should differ with input modes, except insofar as a given modality may provide less information. A spatial illusion may thus be less strong in one or another modality, but only trivially so. At the same time, Gibson’s insistence, that it is necessary to look for the actual information that determines a given perception, is important. The injunction is followed in the approach adopted here. Perceptual theories rarely provide explicit criteria for what is meant by “higher order,” “direct” or “amodal” versus “indirect” perception of spatial relations. Similarly, explaining paradoxical findings by differences in spatial knowledge is only useful if we can specify precisely what aspects of knowledge are “spatial.” The tacit assumption is that we already know what is crucial for space perception, and what counts as a spatial cue, process, or knowledge. Spatial processing can simply be inferred from the performance of what are intuitively recognized as spatial tasks.
The problem is that it is actually very difficult to distinguish what is specifically spatial from what is modality-specific, but not spatial, even in visuospatial tasks. Explicit criteria for what is to count as a spatial process are even more essential if we want to test how spatial processes work in a very different modality than vision, and whether or not spatial processes are in common between different modalities. Spatial tasks are questions about location (where), distance (how far or long), and direction (what turnings). When pressed, most people would agree that answers to each of these questions depend crucially on using reference or anchor cues that can specify these targets. The reference hypothesis consequently defines (Introduction) spatial processing as organizing (or integrating) inputs, from whatever source, to act as reference cues for the response the task demands in location, distance, or direction tasks. The descriptive model on which the reference hypothesis is based has been discussed before (Millar, 1994, 2000), and is not detailed further here. But it is worth briefly mentioning some of the neurophysiological findings, which originally suggested the further assumptions that are central to the present discussion. Most compelling, perhaps, was evidence that the considerable diversity and specialization of receptor organs, and the specialized brain areas that are dedicated to the diverse outputs from these, is balanced by what appears as a considerable overlap, and almost redundancy of neural processes. Thus, the specialized sensory visual, tactile, and movement analyses are not only processed further in dedicated visual, tactile, and motor areas of the brain. Inputs from these areas also converge in multiple, distributed brain areas that are involved in diverse spatial tasks (e.g., Duhamel, Colby, & Goldberg, 1991; Gazzaniga, 1995; Rolls, 1991; Stein, 1991). The complex circuits that serve these suggested the notion of interconnecting networks, and their activation by the demands of different spatial tasks. Findings suggesting the convergence of disparate inputs, sometimes in unitary cells (e.g., Graziano & Gross, 1995; Gross & Graziano, 1995; Sakata & Iwamura, 1978) suggested further that accurate spatial perception involves the congruence of converging inputs from different sources, if they are to be organized for reference. The further assumption that is to be considered here, therefore, is that spatial accuracy depends on the congruence of inputs from diverse sources that can potentially be processed as reference cues. Forms of reference are usually categorized roughly into body-centered, external-environmental, and/or object-based sources. The assumption under discussion is that accurate spatial coding of haptic information depends on the congruence of inputs from any, and all, sources that can be processed as reference cues. The predictions from the reference hypothesis thus differ from both of the views mentioned earlier. If accurate shape and spatial perception in touch, as
well as in vision, depends on congruent reference cues, an illusion that occurs in both modalities should be due to a discrepancy in the same type of reference cue. The prediction is, therefore, that the same additional reference cues would diminish the discrepancy, and therefore reduce the illusion in both modalities. The hypothesis assumes that spatial accuracy in both modalities depends on the congruity of the reference cues that are, or are made available in a given task, rather than that each modality is linked to a specific source or type (body-centered or environmental) of the information. If so, it should be possible to process information in touch as well as well as in vision spatially by reference to external cues, if that information is made available in the absence of vision. The differences in predictions between the views briefly mentioned here, and the outcomes of studies testing these, are discussed in the middle sections of the chapter.
COMMON FACTORS IN PERCEPTUAL ILLUSIONS IN TOUCH AND VISION

The great 19th-century physiologist von Helmholtz (1867) was, I think, the first to suggest that perceptual illusions are due to discrepancies in the very cues that normally provide accurate perception. That description of perceptual illusions actually provides an excellent approach to understanding the factors that underlie accurate shape perception. It is particularly useful in attempting to test the hypothesis that congruent reference cues are in fact crucial factors in accurate spatial perception by touch.
The well-known phenomenon known as veering from the straight-ahead direction provides an interesting test of the cues that govern spatial performance in the absence of reliable reference cues. The phenomenon occurs typically in dense fog, under water, and in other environments in which people “get lost,” or lose their bearings. The curious thing is that people do not just simply oscillate in various directions in a chance fashion. They believe that they are still moving in a straight line, although they are actually deviating more and more from what would have been the straight trajectory to where they intended to go. It was assumed originally that veering was due to some innate spiraling mechanism, which eventually returned animals, including humans, to their original habitat. There is little evidence on whether that actually happens. Alternative suggestions have been that individuals veer consistently to one side because one of their legs is longer,
or because they are right-handed. But neither studies of handedness nor measurements of leg length have borne this out. The factor stressed by the reference hypothesis is that veering occurs in environments that lack congruent orientation and reference cues. Following the suggestion by Helmholtz, it predicts that the side, severity, and consistency of veering is due to the very cues that normally provide accurate, congruent reference information, when they occur fortuitously, isolated from normal links in environments that lack congruent references. The fact that sounds are important orientation cues is, of course, well known. In blind conditions reliable sound cues are particularly important as external updating cues for locomotion in large-scale space. Quite young, congenitally totally blind children walk straight to the source of a sound, even after a short delay (Millar, 1999). The reference hypothesis predicts that if the sound is uncoupled from posture or movement cues, and occurs unexpectedly from a given side during locomotion, it will produce consistent veering to the side from which the sound originated. There is evidence that the side to which blind people deviate differs with extraneous cues from underfoot in a particular terrain (Cratty, 1967). That would be consistent with the hypothesis. Using cues from underfoot is known to be a useful orientation cue in blindness. But such cues are less likely to be crucial for space perception by the sighted, who show consistent veering at least as much as the blind in the same conditions. More important for everyone moving in 3-D space are the body-centered, proprioceptive, gravitational, movement, and posture cues that are involved in conjunction with updating cues from the external environment in accurate locomotion. Consistent veering should, therefore, also be produced by altering body posture cues while people attempt to walk in a straight line in conditions that exclude external sight and sound cues. Both predictions were, in fact, borne out (Millar, 1999). An unexpected short irrelevant sound on either the right or left side of what was supposed to be a straight walk produced very significant deviations in the direction of the sound in blind conditions. Irrelevant posture cues, in the absence of sounds during locomotion, were produced by asking the young blind and blindfolded sighted participants to carry a bag with either the right or the left hand while attempting to walk in a straight line. It produced a tendency to lean to the opposite side for balance. Consistent veering in the direction of the side opposite to the hand that was carrying the bag ensued. The results showed clearly that consistent veering in a particular direction was produced by external sounds and by body-centered cues when these occurred as isolated irrelevant stimuli in conditions that lacked reference cues to
which they could be related reliably. Deviations from the straight-ahead direction also occurred in control conditions. But the errors were not only smaller. More important, the direction of veering was not consistent over trials from different starting points, unlike veering in either of the two experimental conditions. The fact that sounds are important orientation cues, and that body posture is involved in conjunction with updating cues, is not surprising. The important point is the finding that the very cues that normally contribute to spatial accuracy do indeed become misleading when there are no other reference cues to which they relate consistently. This has obvious practical implications in blind conditions. The importance of body-centered cues in veering, or directional bias in walking without sight, was further underlined by an unexpected finding that showed effects also of stepping movements during locomotion. The apparatus, which produced these results, ensured continuous, accurate, automatic sampling of the trajectory during locomotion. The participants wore a lightweight frame strapped to the back. The tip of a light rod, centered above the participant's head and projecting from the frame, was attached to three counterbalanced lines running to take-up spools that were connected to potentiometers on three extendable stands. These were positioned to form an equilateral triangle just outside a (2-m square) experimental space, in a large room. Voltages across the potentiometers were transformed from analogue (audio-frequency in real time) to digital form by computer interface and converted into X-Y coordinate sequential positions (see Millar, 1999). The conversions reproduced the trajectories of a participant's locomotion through the square space from different locations in digital form, and on coordinates drawn on squared paper. These showed the constituent sequential digital positions in the shape of the trajectories. Interestingly enough, the fast rate of sampling body positions during locomotion also showed typical side-to-side curves with every stride of the participant's left and right foot. The depth of the curves seemed to depend on walking speed. Fast walkers produced shallower curves to either side than slower walkers, suggesting an explanation for the common finding that people veer less when they are walking fast. The "stepping curves" could also explain the individual differences in veering that are occasionally reported. Uneven strides to one or other side by slow walkers could produce veering to the side of the wider step in the absence of reference or updating cues for locomotion. The findings, showing consistent veering to the side of an irrelevant sound, and also consistent veering in the direction opposite to an experimentally altered posture, support the hypothesis that accurate spatial performance depends on the congruence of the external and body-centered cues that it
involves. Isolating the cues so that they do not constitute reliable reference information produced consistent biases.
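The conversion from potentiometer readings to X-Y positions described above is, in essence, a planar trilateration problem: each spool reading gives the distance from the head-mounted rod tip to one of the three stands at known positions, and the point at which the three distance circles intersect is the walker's position. The sketch below illustrates that computation under stated assumptions; it is not Millar's (1999) implementation, and the stand coordinates, the planar simplification (ignoring the height of the rod tip above the line attachments), and all names are illustrative rather than taken from the apparatus description.

```python
import numpy as np

# Hypothetical stand positions (metres): three stands at the vertices of a
# roughly equilateral triangle enclosing a 2-m square walking space. The
# coordinates are illustrative only; the chapter does not report them.
ANCHORS = np.array([
    [1.0, 2.8],    # stand beyond the far edge of the square
    [-0.9, -0.7],  # stand beyond the near-left corner
    [2.9, -0.7],   # stand beyond the near-right corner
])

def trilaterate(distances):
    """Convert three line lengths (rod tip to each stand) into an (x, y) position.

    Subtracting the first circle equation from the other two linearises the
    problem; a least-squares solve then gives the best-fitting point even
    when the readings are noisy."""
    d = np.asarray(distances, dtype=float)
    x1, y1 = ANCHORS[0]
    a_rows, b_rows = [], []
    for (xi, yi), di in zip(ANCHORS[1:], d[1:]):
        a_rows.append([2.0 * (x1 - xi), 2.0 * (y1 - yi)])
        b_rows.append(di**2 - d[0]**2 - (xi**2 + yi**2) + (x1**2 + y1**2))
    xy, *_ = np.linalg.lstsq(np.array(a_rows), np.array(b_rows), rcond=None)
    return xy  # (x, y) in metres

# A sampled trajectory is then just the sequence of solved positions, which
# can be plotted on X-Y coordinates to reveal veering and the side-to-side
# "stepping curves" described above.
if __name__ == "__main__":
    walker_at = np.array([1.0, 1.0])                        # true position, for the demo
    readings = np.linalg.norm(ANCHORS - walker_at, axis=1)  # simulated line lengths
    print(trilaterate(readings))                            # approximately [1.0, 1.0]
```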
The view that vision and touch, or at least touch and movement, depend on totally different spaces (see earlier) suggests that the presence of "optical" illusions in the haptic modality is simply fortuitous. By contrast, both the assumption that space and shape perception is direct and the view that it depends on inference from prior knowledge suggest that the same spatial illusions occur in touch and vision. The reference hypothesis, that accurate shape and space perception depends on the congruity of reference cues, assumes that touch and vision produce the same perceptual illusions, if they arise from discrepancies in the same reference cues. The hypothesis does not, of course, exclude additional, or task-related modality-specific effects on processing. Müller-Lyer shapes consist of shafts that end in "wings" or "fins" that either diverge from the shaft or converge on it. The shapes produce a powerful illusion. Shafts in shapes that end in diverging fins are perceived as much larger than identical shafts in shapes that end in converging fins, whether the shapes are oriented horizontally or vertically. The illusion occurs in both vision and touch, even in complete blindness (Heller et al., 2002). The illusion is particularly interesting, because knowing that the size difference is illusory does not prevent its being perceived. It was originally assumed to be due specifically to visual factors, and most studies have been in vision. But explanations in terms of low-level visual receptor functions could obviously not apply to touch. We checked out two other explanations that could, in principle, have applied to touch as well as to vision (Millar & Al-Attar, 2002). One may be called the "movement time" hypothesis. The suggestion is that divergent figures are overestimated because it takes longer to scan shapes with diverging than with converging fins (e.g., Judd, 1905; Moses & DeSisto, 1970). We consequently compared the time (recorded automatically by 1/100 sec cumulative timing; Millar, 1997) that participants took to scan the two types of shapes blindfold by touch. In fact, haptic scanning only took longer for divergent than for convergent shapes on the very first trial. There was no time difference for later trials, despite the fact that the illusion was present. Scanning speed can, of course, influence length estimations. But whatever scanning speed contributed to the length judgments in our study, it could not explain the fact that the typical Müller-Lyer errors occurred regardless of speed.
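To make the stimulus geometry concrete, the short sketch below draws the two horizontal Müller-Lyer figures just described: identical shafts, with fins either diverging outward from the shaft ends or converging back over the shaft. The 8-cm shaft and the 45° fin-shaft angle follow the study reported in this section; the fin length and all plotting details are illustrative assumptions, not the actual stimulus specifications.

```python
import matplotlib.pyplot as plt
import numpy as np

SHAFT = 8.0              # shaft length in cm, as in the study reported here
FIN = 2.0                # fin length in cm (illustrative; not reported above)
ANGLE = np.deg2rad(45)   # acute fin-shaft angle used in all conditions

def draw_figure(ax, y, diverging):
    """Draw one horizontal Mueller-Lyer figure with its shaft at height y."""
    ax.plot([0, SHAFT], [y, y], 'k-', lw=2)          # the shaft
    for x_end, outward in ((0, -1), (SHAFT, +1)):    # left and right shaft ends
        # Diverging fins continue outward past the shaft end;
        # converging fins fold back over the shaft.
        direction = outward if diverging else -outward
        for up in (+1, -1):
            dx = direction * FIN * np.cos(ANGLE)
            dy = up * FIN * np.sin(ANGLE)
            ax.plot([x_end, x_end + dx], [y, y + dy], 'k-', lw=2)

fig, ax = plt.subplots(figsize=(6, 3))
draw_figure(ax, 3.0, diverging=True)    # judged longer
draw_figure(ax, 0.0, diverging=False)   # judged shorter, despite the identical shaft
ax.set_aspect('equal')
ax.axis('off')
plt.show()
```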
The other potential explanation takes many forms, which can be summarized roughly as attributing the illusion to a lack of distinctive cues that differentiate the fins or wings from the shaft in the shapes. Various means of enhancing the distinctiveness of fins from the shaft have been shown to reduce the illusion in vision to some extent (e.g., Erlebacher & Sekuler, 1969; Fellows, 1967; Pressey & Pressey, 1992), though not completely. However, the means of enhancing the distinctiveness of the fins could have been insufficient. We, therefore, used fin size as well as texture cues to differentiate the fins from shafts. Shapes with small dotted wings at the end of continuous plain raised-line shafts were compared with shapes that had normal-sized wings of the same texture as the shaft in all conditions. The fin difference showed an interesting modality-related effect between vision and touch in some conditions. Distinctively textured, shorter fins produced a smaller illusion than larger fins that had the same texture as the shaft. But in touch that was significant only for horizontally oriented shapes with converging fins; not for horizontal shapes with diverging fins, nor for vertically oriented shapes. In vision, by contrast, the texture-size difference was significant for vertical divergent as well as convergent figures. The modality difference is instructive, because the difference in effect on touch was related to the type of exploring movements that the different conditions elicited. In exploring horizontal figures, converging fins are encountered in relatively close contact with the shaft, whereas that is less the case for horizontal figures with diverging fins, or with either type of vertically oriented shapes. That difference does not arise in vision, because the fins and shaft are perceived concomitantly in both shape orientations, and whether the fins converge or diverge from the shaft. Nevertheless, the typical illusion was present in vision and touch also for shapes with small, distinctively textured fins. Lack of cue distinctiveness is thus a contributing factor, which can differ with modality. But it was clearly not the main factor in producing the illusion in either vision or touch. The further experimental manipulations were directed to testing the reference hypothesis, that accurate perception depends on congruent reference cues. On the Helmholtz principle, mentioned earlier, perceptual illusions are due to disparities in the very cues that normally produce accurate information. The Müller-Lyer illusion is a particularly good example of the Helmholtz principle. The very features that are integral to the Müller-Lyer shape, namely the shaft and fins, and the overall size of the configuration, give contradictory cues to size and length. Shapes consisting of horizontal shafts with fins at acute angles to the shaft produce the typical illusions more than shapes with end lines that approach perpendicular relations to the shaft (Heller et al., 2002). The contradictory cues to length thus have most impact in horizontal shapes with
fins in acute angular directions, which approach the direction (e.g., horizontal) of the shaft. The illusion is smallest for figures in which the fins are more nearly at right angles to the shaft, and are thus in an obviously different direction. Shapes with end lines that are perpendicular to the shaft are not Müller-Lyer shapes, and do not produce the Müller-Lyer illusion. We used the same acute fin-shaft (45°) angles for convergent and divergent shapes in all conditions in our study, which should, and did, produce the typical illusion in touch and vision. The reference hypothesis predicts that adding reliable reference cues should help to override the discrepancy in length cues that produces the illusion, and consequently serve to reduce it. The hypothesis does not assume that all forms of reference cues are the same, nor that all perceptual illusions are due to the same discrepancy in reference cues. Prima facie, external reference may actually enhance the Müller-Lyer illusion, since it occurs in visual conditions in which external background cues for targets are usually present. Since the illusion also occurs in blind touch, in which external background cues are not normally available, we tested possible effects of external cues on the illusion in touch, by deliberately providing an external raised-line frame that surrounded the pairs of shapes. Participants were instructed to use one hand for the figures and the other to relate cues from the surrounding frame for reference in their size judgments. In fact, the external frame did not reduce the illusion for either horizontally or vertically oriented tactual shapes. Moreover, as expected, vertical shapes in vision, which clearly showed the external surround, produced the usual illusion, even with additional explicit instructions to ignore the fins. The common factor, which reduced the illusion in both vision and touch, was the explicit use of body-centered reference cues. Simply being told to ignore the fins failed to reduce the visual illusion to near floor level. But adding explicit instructions to use body-centered cues for reference in judging the size of shafts abolished the illusion almost completely (to less than 2 mm for 8-cm shafts) in both touch and vision. The only serious reduction of the visual Müller-Lyer illusion reported previously was found with frequently repeated presentations (Rudel & Teuber, 1963). It was interpreted as a form of perceptual learning. But that surmise does not tell us what was learned, nor precisely what aspect of repeated viewing produced the reduction. The virtual elimination of the illusion in our study (Millar & Al-Attar, 2002) did not result from repeated trials. We used the same small number of trials in every condition. The virtual elimination of the illusion, in both touch and vision, was only found in the condition in which participants were explicitly instructed to use body-centered reference cues in judging shaft length. It is possible that frequent repetitions elicit egocen-
tric reference heuristics spontaneously (see later). In our study, egocentric reference was explicitly required. Some participants even reported that it made their judgments more difficult. Nevertheless, the instruction eliminated the difference between small-textured and larger plain fins in vision, as well as the illusion itself in both vision and touch. The findings considered here provide a common explanation for the Müller-Lyer illusion in touch and vision. It is clear that external reference is not sufficient in either modality to override the discrepancy in size cues that the shapes produce. External reference may indeed increase the illusion, insofar as it draws attention to discrepant size cues from the global figure. In any case, the fact that the same experimental manipulation had exactly the same effect in touch and vision strongly suggests that it addresses a common factor. The common factor here was the explicit use of body-centered reference cues to size that diminish or override the effect of the discrepant cues. It should be stressed that not all perceptual illusions that are common to vision and touch are necessarily due to exactly the same type of discrepancy as Müller-Lyer shapes. Inverted T-shapes, for instance, show a different discrepancy in reference information that is common to tactual as well as visual figures. In both modalities, the line that is bisected is underestimated relative to the bisecting line, whether the bisecting line is in the vertical or horizontal orientation. This illusion is greatly diminished by using a combination of additional external as well as body-centered reference information (Millar & Al-Attar, 2001). The relative effects of body-centered and external frame information, and how they relate to each other in haptic conditions, are discussed later. Before that it is necessary to examine why body-centered spatial cues are evidently not always effective. Body-centered (proprioceptive, gravitational, and posture) cues are normally present in both touch and vision. The Müller-Lyer illusion occurs nevertheless in both touch and vision. In the study discussed previously, the shapes were presented in tabletop space, in the midline with respect to the participant's body, as is usual. In principle, participants should, therefore, have been able to use body-centered cues spontaneously for reference, and so overcome the illusion. That was not the case. The virtual elimination of the illusion in both modalities occurred only with explicit instructions to use body-centered cues for reference. That fact raises questions about the heuristics that people use spontaneously to remember spatial information from touch and movement, and what conditions elicit or prevent spatial coding. This question is considered in the next section.
As noted earlier, there is plenty of evidence for body-centered spatial coding in blind conditions. It is easy to show that errors shoot up if you change the relative position of a target to the participant’s body posture in tabletop space. It is much less obvious why body-centered cues are not always used for reference, even though the relative location-to-body position remains intact in recall. We examined the question by testing the heuristics that people use spontaneously with purely haptic inputs (Millar & Al-Attar, 2003). The participants performed a number of different activities just before they had to recall locations or distances from a raised-line tactile map. Increased errors, compared to delays without that secondary task, or to delays filled with a different task, show which secondary task interferes with processing the main task. Secondary verbal tasks and backward counting had little effect. The main interest here are the results found for the secondary task that was designed to test for spatial coding. The task consisted of judging whether two tactile matrices were different shapes or were identical shapes that had been rotated with respect to each other. Location and distance tasks were chosen, because “where” (location) tasks are often considered the main tests of spatial processing. Recall of haptic distances is more frequently attributed to memory for the scanning movements that are involved. We wanted to see whether the type of spatial task elicits spatial coding automatically, so that spontaneous spatial coding can be inferred from location tasks. If so, the interpolated spatial task should interfere with recall of the locations, regardless of the length, complexity or consistency of the scanning movements needed to reach the location in recall. If, on the other hand, the type or consistency of scanning movements is an important factor in eliciting or maintaining spontaneous body-centered reference in the absence of external background cues, identical small distances should be recalled more accurately than locations that are reached by inconsistent paths. The results of the study were quite clear. The two locations, which had to be reached by longer, more complex scanning movements in recall than in presentation, showed no spatial interference. But the same participants showed significant effects of the interpolated spatial task interference in recall of two identical small distances, which required the same movement extent for accurate recall. The fact that the same participants who showed interference from the secondary spatial task in distance recall did not show spatial interference in their recall of locations suggests strongly that spatial heuristics cannot be inferred automatically from location tasks. Simply changing the return movements, so that
they were more complex in recall than in presentation, reversed findings that have led to the common assumption that locations are coded spatially and are recalled more accurately than distances. The reversal of the usual findings involved the concordance of scanning and recall movements. The results show that modality-specific aspects of the haptic input, here the ease and consistency of the scanning and positioning movements in recall, are indeed involved in preventing the spontaneous use of body-centered reference for spatial coding.
There is no doubt that blind and visual conditions differ in the reference information that they normally provide. The most important difference is that visual targets occur in backgrounds that provide external reference cues by which the target can be specified spatially. Blind conditions do not automatically provide external backgrounds for new haptic targets, which can specify their locations. The question is, therefore, whether that is a difference in information that can be remedied by providing the relevant cues, or whether the form of spatial code is determined by the sensory input modality. The view that space perception differs with modality assumes that accurate haptic space perception is determined by body-centered information. It predicts that external reference cues either have no effect, or are actually detrimental to perceptual accuracy. The reference hypothesis implies, by contrast, that it is the congruence of potential reference cues, and not their source, or the modality-specific origin of the inputs, which determines the accuracy of spatial processing. The studies considered in this section, therefore, examined how body-centered and external reference cues relate to each other and to perceptual accuracy in spatial processing by touch and movement. The view that haptic space perception depends on egocentric reference predicts that providing cues external to the target location has little or no effect without intact body-centered cues, and may even be detrimental to recall. The reference hypothesis predicts that if external reference cues can be provided in touch, such reference cues can, on their own, increase accuracy to the same extent as body-centered reference alone. It also predicts that the conjunction of both types of reference makes recall twice as accurate. A raised-line tactile map was used in tabletop space to test these alternatives. The task was to remember the locations of five embossed landmark symbols that were dispersed randomly along an irregular, complex raised-line route. The locations varied randomly in horizontal and vertical directions, as in visual paradigms used to test short-term visuospatial memory. The participants wore
blindfolds throughout the experiment. They encountered the landmark locations by scanning along the irregular route. For recall, they scanned the same route on a test map without landmarks, stopping the fingertip on the midpoint of the location they remembered for a given landmark. The point of the study was to separate body-centered from external reference cues, so that the effect of each type of reference could be assessed separately, as well as in conjunction with each other (Millar & Al-Attar, 2004). Body-centered coding was made easy in a control experiment in which the test map was aligned in the same relation to the participant's body in recall as the presentation map. It was compared with a study in which body-centered coding was disturbed by rotating the test map in recall. There were no external cues for reference in either experiment. Participants in the rotation study did not have to construe the rotated direction of the route mentally. The new direction of scanning was shown. However, the relation of the locations to the participant's body differed in recall. Disturbing that relation produced twice the positioning errors at all locations. As predicted by both hypotheses, the findings showed that people use body-centered cues for reference when these are consistently available in recall. The important manipulation concerned the effects of external reference cues. It is clear that blind conditions do not normally provide external background cues beyond the target location being touched. We, therefore, provided an actual raised-line square frame surrounding the map that was external to all target locations. Participants used one hand for scanning the route and landmark locations as before. But they were also explicitly instructed to use the other hand for the frame, to relate frame cues to each of the five locations that they had to remember, and to use the frame cues for reference in recall. In one experiment, explicit instructions to use external cues for reference were combined with map rotation. This meant that body-centered coding was disturbed, so that only external reference cues were available for spatial coding. In a further study, instructions to use external frame cues were combined with alignment of the map to the body in the same way in recall as in presentation, so that participants could use both body-centered and external frame cues for reference. The results were completely consistent with the reference hypothesis. When body-centered coding was disrupted, explicit instructions to use external frame cues for reference produced the same level of accuracy as intact body-centered reference without external frame information. When both forms of reference could be used, recall was twice as accurate as for either body-centered reference or for external reference alone. The five locations that were positioned in an unpredictable pattern along the route that was to be followed in scanning also showed significant differ-
Prima facie, the error pattern looked like serial position effects, with better recall for the final location and, to a lesser extent, also for the first location. But it could not be explained by either verbal or order effects, because the method deliberately minimized the need to remember either. The experimenter invariably named the next landmark before it was encountered in presentation, and before it was to be indicated in recall. In fact, the location error pattern related to fortuitous but distinctive touch cues at the final location and on the route near the first landmark. There were no such distinctive touch cues on the route near the middle locations. The additional touch cues seemed to limit errors from the positioning movements for the final and first locations, whereas there were no such additional limiting features in positioning the middle locations. There is evidence from studies of the inverted-T illusion that distinctive touch cues along a scanning distance act as additional reference cues. Distances that contain such points are judged to be smaller than distances without additional touch cues (Millar & Al-Attar, 2000). The location error pattern thus suggests that people also spontaneously use fortuitous touch cues near target locations as additional reference cues. That is important in practice (see later). At the same time, all the locations were recalled significantly more accurately in conditions that provided either external or body-centered reference cues, and were twice as accurate when these two sources of reference coincided.

Considered in the light of the alternative theoretical descriptions of spatial coding for inputs from touch and movement, the findings provide empirical evidence which shows that external reference cues are as effective as body-centered cues in haptic conditions, and not solely in vision. The use of external reference cues in blind conditions evidently requires explicit instruction, or relevant experience. The main difference between spatial coding in touch and vision must, therefore, be attributed to the task conditions that provide potential reference information, rather than to the input modality as such.
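One way to see why congruent, independent reference cues could roughly double accuracy is the standard cue-combination argument from statistics. This gloss is not offered in the chapter itself, and the symbols below are purely illustrative: if body-centered and external cues each yield an independent, unbiased estimate of a landmark's position with the same error variance, then averaging the two estimates,

\hat{x}_{\mathrm{both}} = \tfrac{1}{2}\left(\hat{x}_{\mathrm{body}} + \hat{x}_{\mathrm{ext}}\right), \qquad \operatorname{Var}\left(\hat{x}_{\mathrm{both}}\right) = \frac{\sigma^{2}}{2},

halves the error variance relative to either cue alone. Whether a halved variance corresponds exactly to recall being "twice as accurate" depends on how accuracy is scored, but the argument illustrates why two largely independent sources of reference should combine additively rather than redundantly.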
GENERAL DISCUSSION

The evidence suggests that two quite specific aspects of information explain the apparent paradox in spatial coding without vision: explicit or implicit awareness of the inputs that can potentially be used for reference in a given task, and procedural knowledge of how reference cues are accessed and used. The notion that similarities in space perception and shape illusions in touch and vision are simply fortuitous can be rejected. The fact that the same experimental conditions reduced the very powerful Müller-Lyer illusion to near floor level in both touch and vision provides strong grounds for assuming a common factor.
The common factor, in the case of the Müller-Lyer illusion, was the explicit use of body-centered cues for reference. It evidently served to reduce the bias due to the discrepancies in length and size cues that are inherent in these particular shapes in both modalities. At the same time, it was also clear that body-centered coding is not the only type of reference that is common to touch and vision. Explicit body-centered reference may be particularly effective for overriding the discrepancies that produce perceptual illusions in both modalities. External (beyond the target) forms of reference may function mainly to improve spatial accuracy, because they provide alternative and/or additional cues for reference and retrieval in recall. That requires further work. Further studies are also needed to examine which task conditions elicit spatial coding spontaneously, compared to those that require explicit instruction to use reference cues. It is likely to depend on task difficulty at least to some extent. Body-centered cues are normally present in vision as well as in touch. But their use in reducing illusions required explicit instruction in both modalities.

Visual conditions do, of course, have an advantage. Since external and body-centered reference information is normally concordant in gravitationally oriented environments, vision affords twice the reference information for coding targets spatially compared with blind conditions, which lack background cues beyond the target, unless such cues are made explicit. The redundancy explains why visuospatial coding is usually more accurate than spatial coding in haptic conditions. The findings discussed here show that this difference in information is not a necessary consequence of the input modality. The combination of explicit external and body-centered reference also doubled the accuracy of location recall, compared to either form of reference alone, in purely haptic conditions.

The important point is that the studies, which separated and combined body-centered and external reference effects, leave no doubt that, with explicit provision or experience, external (beyond the target) cues can function for reference in touch as well as in vision. The evidence clearly contradicts theories that link haptic inputs exclusively to body-centered spatial reference, and vision exclusively to external spatial reference. Furthermore, the results imply that body-centered and external reference have additive effects on spatial coding, suggesting that the two forms of reference are largely independent of each other for haptic inputs. That is entirely consistent with neurophysiological findings in vision. These studies associate allocentric and egocentric visuospatial performance with different cortical and subcortical brain regions and neuronal circuits (e.g., Bayliss & Moore, 1994; Berthoz, 1991; Rolls, 1991; Snyder, Grieve, Brotchie, & Andersen, 1998; Vallar et al., 1999). As far as this author is aware, there are no neurological studies that separate egocentric (body-centered) from external spatial frames specifically with purely haptic inputs in humans.
Given the evidence that inputs from diverse sense modalities converge in cortical areas associated with spatial performance (e.g., Andersen, Snyder, Bradley, & Xing, 1997; Berthoz, 1991; Graziano & Gross, 1995; Gross & Graziano, 1995; Stein, Wallace, & Meredith, 1995), a similar dissociation of the two forms of reference may occur with haptic inputs. Neurophysiological studies of such dissociation in touch, and how far it coincides with findings for vision, would be extremely useful for practical purposes, and for the theoretical implications in cognitive neuroscience.

A point to note for practical purposes is that hand movements can contribute greatly to accurate performance, but can also produce serious errors and biases (e.g., Heller et al., 2003). A modality difference due to hand movements found here was that the distinctive fins had some effect on Müller-Lyer shapes in most conditions in vision, but affected touch in conditions in which the shaft and fins were felt in close proximity. A further example comes from the study on the heuristics that people use spontaneously (Millar & Al-Attar, 2003). Longer and/or discrepant recall movements prevented spontaneous spatial coding of locations, whereas spatial effects were found with repeated identical small movements. Such spatial coding of distances may involve proprioceptive cues that can be related spatially by reference to the midline, or to the elbow (Heller et al., 1997; Millar, 1985). There is good evidence that accurate limb movements depend crucially on multiple body-centered reference cues (e.g., Paillard, 1991). But whether or not body-centered cues are invoked spontaneously at all to serve for spatial reference evidently depends on task conditions, including the consistency of the movements they demand. They are not elicited automatically by location tasks.

Another point worth taking into account in using maps and other raised-line displays in practice is that, contrary to common wisdom, two hands are not necessarily always better than one. Using both hands did not improve Müller-Lyer judgments, for instance. The advantage of using both hands depends on their function in a given task (Millar, 1987). External frames draw attention to global features of shapes, thus adding to the discrepancy in size cues inherent in Müller-Lyer figures. By contrast, using the second hand to relate external reference to the location of targets, concomitantly with using body-centered reference, doubled the accuracy of recall (Millar & Al-Attar, 2004). It is not, of course, always possible in practice to use the same reaching or positioning movements for recall. The implication of the findings is that the length or concordance of positioning movements becomes less important when people look for, and use, reference information, whether this is based on body-centered reference, on cues that are external to the targets, or on additional touch features from the display.
The fact that distinctive touch cues at the end and near the start of otherwise unpredictable landmark locations in tabletop space produced more accurate recall than for the middle locations (Millar & Al-Attar, 2004) suggests that the additional touch cues serve to limit errors that positioning movements can otherwise produce. Accurate spatial perception and recall thus do not necessarily depend on the use of both hands, but on the diverse functions the hands perform. Scanning movements and using distinctive tactile features for reference thus seem to play somewhat different roles in relation to spatial processes. Modality-specific aspects of inputs (e.g., movement effects in active touch) tend to predominate if task conditions provide discrepant inputs from different sources.

The evidence that has been discussed in this chapter suggests that the accuracy of spatial coding does indeed depend on the congruence of potential reference cues from external, object, and body-centered sources in touch as well as in vision. The common factor in spatial and shape coding by touch and vision is thus best described as an activity, namely that of organizing inputs as reference cues for perception and recall. Such organizing activities cannot be considered modality-specific, since they occur regardless of the source of the inputs that are being organized. At the same time, spatial perception is not amodal either. The activities of organizing or integrating cues for reference, and the resulting responses, are influenced by modality-specific factors, including the procedures that these involve. In active touch, hand movements in tabletop space are important modality-specific aspects of the information that has to be organized spatially. Perception implies that inputs from one or more sense modalities are involved.
REFERENCES

Andersen, R. A., Snyder, L. H., Bradley, D. C., & Xing, J. (1997). Multimodal representation of space in the posterior parietal cortex and its use in planning movements. Annual Review of Neuroscience, 20, 303–330.
Baddeley, A. D. (2000). The episodic buffer: A new component of working memory? Trends in Cognitive Sciences, 4(11), 417–422.
Bayliss, G. C., & Moore, B. O. (1994). Hippocampal lesions impair spatial response selection in the primate. Experimental Brain Research, 98, 110–118.
Berthoz, A. (1991). Reference frames for the perception and control of movements. In J. Paillard (Ed.), Brain and space (pp. 81–111). Oxford: Oxford University Press.
Cratty, B. J. (1967). The perception of gradient and the veering tendency while walking without vision. American Foundation for the Blind Bulletin, 14, 31–51.
Duhamel, J. R., Colby, C. L., & Goldberg, M. E. (1991). Congruent representations of visual and somatosensory space in single neurons of the monkey ventral intraparietal cortex (Area VIP). In J. Paillard (Ed.), Brain and space (pp. 223–236). Oxford: Oxford University Press.
Erlebacher, A., & Sekuler, R. (1969). A conclusion on confusion in the illusion of Müller-Lyer. Proceedings of the Annual Convention of the American Psychological Association, 4, 27–28.
Fellows, B. J. (1967). Reversal of the Müller-Lyer illusion with changes in the inter-fins line. Quarterly Journal of Experimental Psychology, 19, 208–214.
Gazzaniga, M. (Ed.). (1995). The cognitive neurosciences. Cambridge, MA: MIT Press.
Gibson, J. J. (1962). Observations on active touch. Psychological Review, 69, 477–491.
Gibson, J. J. (1966). The senses considered as perceptual systems. Boston: Houghton Mifflin.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
Graziano, M. S. A., & Gross, C. G. (1995). The representation of extrapersonal space: A possible role for bimodal visual-tactile neurons. In M. Gazzaniga (Ed.), The cognitive neurosciences (pp. 1021–1034). Cambridge, MA: MIT Press.
Gross, C. G., & Graziano, M. S. A. (1995). Multiple representations of space in the brain. The Neuroscientist, 1, 43–50.
Hatwell, Y. (1978). Form perception and related issues in blind humans. In R. Held, H. W. Leibowitz, & H. L. Teuber (Eds.), Handbook of sensory physiology (pp. 489–519). Berlin: Springer Verlag.
Heller, M. A., Brackett, D. D., Salik, S. S., & Scroggs, E. (2003). Objects, raised lines and the haptic horizontal-vertical illusion. Quarterly Journal of Experimental Psychology, 56A(5), 891–907.
Heller, M. A., Brackett, D. D., Wilson, K., Yoneyama, K., Boyer, A., & Steffen, H. (2002). The haptic Müller-Lyer illusion in sighted and blind people. Perception, 31, 1263–1274.
Heller, M. A., Calcaterra, J. A., Burson, L. L., & Green, S. L. (1997). The tactual horizontal-vertical illusion depends on radial motion of the entire arm. Perception & Psychophysics, 59, 1297–1331.
Helmholtz, H. von. (1867). Handbuch der physiologischen Optik [Handbook of physiological optics]. Leipzig: L. Voss.
Howard, I. P., & Rogers, B. J. (1995). Binocular vision and stereopsis. Oxford: Oxford University Press.
Judd, C. H. (1905). The Müller-Lyer illusion. Psychological Review, 7, 55–81.
Katz, D. (1925). Der Aufbau der Tastwelt [Construction of the world of touch]. Leipzig: Barth.
Landau, B., Gleitman, H., & Spelke, E. (1981). Spatial knowledge and geometric representation in a child blind from birth. Science, 213, 1275–1278.
Liben, L. S. (1988). Conceptual issues in the development of spatial cognition. In J. Stiles-Davis, M. Kritchevsky, & U. Bellugi (Eds.), Spatial cognition (pp. 167–194). Hillsdale, NJ: Lawrence Erlbaum Associates.
Logie, R. H. (1995). Visuo-spatial working memory. Hillsdale, NJ: Lawrence Erlbaum Associates.
Millar, S. (1978). Aspects of memory for information from touch and movement. In G. Gordon (Ed.), Active touch: The mechanisms of recognition of objects by manipulation: A multidisciplinary approach (pp. 215–227). Oxford: Pergamon Press.
Millar, S. (1979). The utilisation of shape and movement cues in simple spatial tasks by blind and sighted children. Perception, 8, 11–20.
Millar, S. (1981a). Self-referent and movement cues in coding spatial location by blind and sighted children. Perception, 10, 255–264.
Millar, S. (1981b). Crossmodal and intersensory perception and the blind. In R. D. Walk & H. L. Pick, Jr. (Eds.), Intersensory perception and sensory integration (pp. 281–314). New York & London: Plenum Press.
Millar, S. (1985). Movement cues and body orientation in recall of locations of blind and sighted children. Quarterly Journal of Experimental Psychology, 37A, 257–279.
Millar, S. (1987). Perceptual and task factors in fluent braille. Perception, 16, 521–536.
Millar, S. (1988). Models of sensory deprivation: The nature/nurture dichotomy and spatial representation by the blind. International Journal of Behavioral Development, 11, 69–87.
Millar, S. (1994). Understanding and representing space: Theory and evidence from studies with blind and sighted children. Oxford: Clarendon Press.
Millar, S. (1997). Reading by touch. London: Routledge (now Taylor & Francis).
Millar, S. (1999). Veering revisited: Noise and posture cues in walking without sight. Perception, 28, 765–780.
Millar, S. (2000). Modality and mind: Convergent active processing in interrelated networks. A model of development and perception by touch. In M. A. Heller (Ed.), Touch, representation and blindness (pp. 99–141). Oxford: Oxford University Press.
Millar, S., & Al-Attar, Z. (2000). Vertical and bisection bias in active touch. Perception, 29, 481–500.
Millar, S., & Al-Attar, Z. (2001). Illusions in reading maps by touch: Reducing distance errors. British Journal of Psychology, 92, 643–657.
Millar, S., & Al-Attar, Z. (2002). Müller-Lyer illusions in touch and vision: Implications for multisensory processes. Perception & Psychophysics, 64(3), 353–365.
Millar, S., & Al-Attar, Z. (2003). How do people remember spatial information from tactile maps? British Journal of Visual Impairment, 21(2), 64–72.
Millar, S., & Al-Attar, Z. (2004). External and body-centred frames of reference in spatial memory: Evidence from touch. Perception & Psychophysics, 66(1), 51–59.
Millar, S., & Ittyerah, M. (1991). Mental practice without visuospatial information. International Journal of Behavioral Development, 15, 125–146.
Moses, F. L., & DeSisto, M. J. (1970). Arm-movement responses to Müller-Lyer stimuli. Perception & Psychophysics, 8, 376–378.
Norman, J. (2002). Two visual systems and two theories of perception: An attempt to reconcile the constructivist and ecological approaches. Behavioral & Brain Sciences, 25(2), 73–144.
Paillard, J. (1991). Motor and representational framing of space. In J. Paillard (Ed.), Brain and space (pp. 163–182). Oxford: Oxford University Press.
Pressey, A. W., & Pressey, C. A. (1992). Attentive fields are related to focal and contextual features: A study of Müller-Lyer distortions. Perception & Psychophysics, 51, 423–436.
Revesz, G. (1950). Psychology and art of the blind. London: Longmans.
Rock, I. (1984). Orientation and form. London: Academic Press.
Rock, I. (1997). Indirect perception. Cambridge, MA: MIT Press.
Rolls, E. T. (1991). Functions of the primate hippocampus in spatial processing and memory. In J. Paillard (Ed.), Brain and space (pp. 353–376). Oxford: Oxford University Press.
Rudel, R. G., & Teuber, H.-L. (1963). Decrement of visual and haptic Müller-Lyer illusion on repeated trials: A study of crossmodal transfer. Quarterly Journal of Experimental Psychology, 15, 125–131.
Sakata, H., & Iwamura, Y. (1978). Cortical processing of tactile information in the first somatosensory and parietal association areas of the monkey. In G. Gordon (Ed.), Active touch (pp. 55–72). New York: Pergamon Press.
Snyder, L. H., Grieve, K. L., Brotchie, P., & Andersen, R. A. (1998, August 27). Separate body and world-referenced representations of visual space in the parietal cortex. Nature, 394, 887–891.
Stein, B. E., Wallace, M. T., & Meredith, M. A. (1995). Neural mechanisms mediating attention and orientation to multisensory cues. In M. S. Gazzaniga (Ed.), The cognitive neurosciences (pp. 683–720). Cambridge, MA: MIT Press.
Stein, J. F. (1991). Space and the parietal association areas. In J. Paillard (Ed.), Brain and space (pp. 185–222). Oxford: Oxford University Press.
Vallar, G., Lobel, E., Galati, G., Berthoz, A., Pizzamiglio, L., & Le Bihan, D. (1999). A fronto-parietal system for computing the egocentric spatial frame of reference in humans. Experimental Brain Research, 124(3), 281–286.
von Senden, M. (1933). Raum und Gestalt: Auffassung bei operierten Blindgeborenen vor und nach der Operation [Space and configuration: Concepts of the congenitally blind before and after an operation]. Leipzig: Barth.
Warren, D. H. (1977). Blindness and early childhood development. New York: American Foundation for the Blind.
Weber, E. H. (1834 & 1846/1978). The sense of touch (De Tactu, 1834, translated by E. H. Ross, & Der Tastsinn und das Gemeingefühl, 1846, translated by D. J. Murray for the Experimental Psychology Society). London: Academic Press.
3

Picture Perception and Spatial Cognition in Visually Impaired People

Morton A. Heller
Eastern Illinois University
A congenitally blind person recently said that he thought that it was stupid and idiotic to expect blind people to make sense of tangible pictures (Heller, 2000a). This person claimed that it was perfectly reasonable to teach blind people about maps and graphs. However, it was not sensible to ask blind people to “think sighted,” according to another congenitally blind person. These views represent a bias that is present in the community of educators of blind people, and that has only recently been challenged (e.g., Axel & Levent, 2003). Educators of the blind are influenced by theoretical and empirical developments in psychology, and there has been a clear division within the field on the implications of visual deprivation for perceptual processing and cognition. This chapter considers issues and evidence on the role of visual experience for perception of pictures, and for spatial reasoning and object perception in visually impaired individuals.

A frequently held notion is that visual imagery is necessary for veridical perception of the world, and that vision is the premier sense for spatial representation. There can be little argument with the idea that vision is useful, and a superior sense for obtaining spatial information. However, the consequences of visual deprivation are far less clear-cut. There are a number of good reasons for doubting the necessity of visual imagery and visual experience, though the absence of visual representation may have important implications for education and rehabilitation.
The evidence is clear that touch can provide substitute information when vision is lost, or is lacking at birth. While touch is much slower than vision at obtaining form information, there are a variety of methods for compensating for this reduced efficiency. Certainly, practice and skill can allow one to overcome many initial failures. Kennedy (1993) has argued that touch is capable of obtaining equivalent information to vision, and can understand perspective (also see Kennedy & Juricevic, chap. 4, this volume). On this view, perspective is the science of direction, and directions are comprehensible to touch, just as to vision. Also, Millar has provided a considerable body of evidence that touch can substitute for a lack of vision (Millar, 1994, 2000).

Contrasting views have emphasized the idea that there are significant deficiencies in spatial cognition in touch. These presumed deficiencies derive from limitations in the sense of touch, and occur in blind people who are limited to haptic knowledge about their spatial world. Thus, touch has a limited field of view, which is presumed to derive from the small size of the fingertip. In addition, touch must process spatial information in a sequential fashion, and this places a burden on memory. Moreover, touch may have difficulty obtaining an overview of a large display, and will have problems interpreting depth relations in pictures. The assumption that some people adopt is that haptic space differs in important ways from visual space, and is presumably distorted. Some individuals assume that visual space is Euclidean and haptic space is not. These assumed deficiencies in touch will lead to a number of unwarranted assumptions about limitations in blind people.

I do not propose that touch is invariably equivalent to vision. However, the assumptions in the previous paragraph are just that: assumptions. They may derive support from the particular (or atypical) sorts of tasks that researchers have posed to blind subjects, and to blindfolded sighted undergraduate research participants. Thus, differences in the nature of processing between vision and touch could depend upon differential practice and familiarity with stimuli and tasks, and could also be constrained by the types of problems that we have put to the sense of touch. Thus, the notion that touch is limited in its field of view may depend upon tasks that restrict touch to the use of a single finger to feel two-dimensional patterns. Vision does have a larger field of view than touch, but touch has clear advantages for the perception of 3-D forms. For example, one can grasp the front and back of an object at once, but we do not normally see multiple views of an object at the same time. Moreover, blind people may bring their assumptions about view-
point to many picture perception tasks, and this could impede their performance. Blind people may not be very familiar with thinking about objects from particular vantage points, since they can often simultaneously feel many sides of an object. However, this analysis is very much dependent upon scale, and vision has clear advantages for perception of very large objects. Nonetheless, there are many occasions where the station point of an observer, and the large size of a display, may cause vision to process serially. This occurs when one is very close to a large scene or structure, such as an office building. Revesz argued that touch is poor at obtaining information about covering and perspective, and is not very good for picture perception (Revesz, 1950). More recently, Lederman, Klatzky and their colleagues have proposed that touch is best suited for the perception of substance-related characteristics of objects, namely hardness, softness, texture, thermal properties, and so forth (Lederman, Klatzky, Chataway, & Summers, 1990). On this theoretical view, touch may do well, but only when the tasks and stimuli tap into these object characteristics. Lederman and Klatzky have proposed that touch has difficulty interpreting pictures, since raised-line configurations are not ecologically valid for touch. They further argue that people need to be able to generate visual imagery to interpret haptic pictures. This imagery would be lacking in the congenitally blind individual, who might be expected to perform poorly when trying to name haptic pictures (Lederman et al., 1990). Congenitally blind people were rather poor at naming unfamiliar tangible pictures, but so were blindfolded sighted subjects (Heller, 1989). It is clear that sighted subjects may have difficulty naming 2-D representations of objects that are easy to identify when they are 3-D solids (Klatzky, Loomis, Lederman, Wake, & Fujita, 1993). However, it must be noted that naming failures are rather crude measures of perception. One may fail to name an object that is perceived veridically. Thus, a child may fail to name a color or animal, but this does not mean that the child does not see the object. Similarly, naming failures in touch do not mean that objects are not accurately perceived through touch (Heller, 2000b). Heller, Calcaterra, Burson, and Tyler (1996) showed that providing categorical information about tangible pictures improved naming accuracy. When subjects were asked to choose a target picture from among three choices, performance was excellent. Picture naming accuracy was also helped when subjects were provided information about the superordinate category to which a picture belonged. Thus, it helped subjects identify a picture, say, of an apple, when the subjects were told that they would feel pictures of fruit. These results suggested that some of the difficulties associated with naming tangible pictures probably derive from problems accessing semantic memory, rather than intrinsic limitations in haptic picture perception, per se.
PICTURE MATCHING

Recently, I have found that blindfolded sighted and visually impaired subjects were able to match tangible pictures at a very high level of accuracy (about 90% correct). The subjects were exposed to a target picture and then asked to select a match from among four picture choices (Heller, Brackett, & Scroggs, 2002). The stimuli are shown in Fig. 3.1. Very low vision subjects did better than the other subjects, with nearly perfect performance in a multiple-choice matching task (M = 99.3%). The congenitally blind and blindfolded sighted subjects were not significantly different in picture matching accuracy. Moreover, all of the visually impaired subjects were significantly faster than the blindfolded sighted subjects at picture matching. While matching to sample is a relatively crude measure of picture perception, it is often used as an indicator of perceptual functioning (Farah, 2000). If visual experience and visual imagery were necessary for perception of tangible pictures, one would not expect to see nearly perfect performance by the very low vision subjects.
FIG. 3.1. The pictures used in the matching experiment (adapted from Heller, Brackett, & Scroggs, 2002a, Journal of Visual Impairment and Blindness, 96, 349–353).
Piaget’s water-level problem was devised as a measure of a child’s understanding of the horizontal and vertical frames of reference (Piaget & Inhelder, 1956). Piaget tested this understanding by showing children a glass container that was half-full of water. The child was asked to anticipate how the water level would look in a tilted glass. Understanding was tested by asking the child to draw in the water level in a line drawing of a tilted jar. Subsequently, Piaget asked children to copy the water level when looking at a tilted container. Piaget and Inhelder claimed that knowledge that water level stays horizontal despite container tilt is not present until the age of about 8. However, there is a large body of evidence that many college students fail at the water-level problem, and there have been reports of gender differences in the task (Harris, Hanley, & Best, 1977). Note that sex differences were not found in a haptic version of the water-level problem (Berthiaume, Robert, St-Onge, & Pelletier, 1993; Heller, Calcaterra, Green, & Barnette, 1999). Heller, Calcaterra, Green, and Barnette (1999) asked blindfolded sighted subjects to draw a line that represented water-level in tilted containers using a raised-line drawing kit. There were large errors in this task, and performance depended upon more than mere knowledge of the physics of water. Even if an individual understands the notion that water level remains horizontal, performance can suffer if individuals are distracted by the sloping lines representing the jar’s sides (see Fig. 3.2). Thus, performance was better when the drawing of a rounded flask was used. Males and females all showed substantial errors when
FIG. 3.2. (a) Standard representation of the jar with 0-degree tilt; (b) representation of water in the jar at –30 degrees of tilt; (c) representation of an incorrect choice picture of water in the jar tilted at 60 degrees; (d) standard representation of the rounded flask with 0-degree tilt (adapted from Heller, Calcaterra, Green, & Barnette, 1999, Perception, 28, 387–394, Pion Limited, London).
attempting to draw a line that represented a light hanging from a flexible cord. However, errors diminished considerably when subjects drew a telephone pole on hills of varying angles. It was argued that the pole task was a measure of the understanding of the vertical, and allowed subjects to use their bodies as frames of reference during drawing. Gender differences were minimal using this latter response measure.

Subsequent experiments tested congenitally blind (CB), late blind (LB), very low vision (VLV), and blindfolded sighted subjects on the water-level problem (Heller, Brackett, Scroggs, Allen, & Green, 2001). It was assumed that if the understanding of the horizontal frame of reference were dependent upon visual experience, one would expect to see deficiencies in performance by the CB subjects. The jar was drawn tilted at four different angles: –60, –30, 30, and 60 degrees. The subjects were exposed to four pictures of tilted jars on each trial, all at the same angle. One jar showed the correct water level, while the other jars had lines that were incorrect by ±30°, or had a line that was parallel to the base of the jar. The subjects were told to “find the picture that showed how the water would appear in the real world.”

The results showed superior performance by the VLV subjects, with nearly errorless judgments of the horizontal (about 97.5% correct). Furthermore, the congenitally blind subjects performed very much like the sighted, indicating that visual experience was not critical for the development of understanding that water level stays horizontal, despite container tilt. The superior performance of the VLV subjects was striking, and is taken up again at a later point in this chapter. However, it is important to point out that these subjects were all Braille readers, and most had minimal light perception. A few had the ability to see very large objects given intense lighting, for example, a brightly lighted picture window. However, most had little more than shadow vision, or the ability to locate the direction of bright lights without substantial pattern perception. None could see the colorless patterns that they felt. Nonetheless, they performed better using touch (M = 97.5% correct) than a separate group of sighted subjects using their vision (M = 83.8% correct). It is impossible to rule out the possibility that these subjects could see the location of their hands in space as they felt the stimuli. Sight of hand motion can aid haptic pattern perception (Heller, 1982, 1985). There is recent evidence that the ability to see the location of one’s hand or arm in space is sometimes sufficient for perceptual enhancement (Kennett, Taylor-Clarke, & Haggard, 2001).
The issue of viewpoint dependence has been controversial in the visual literature (see Newell, Ernst, Tjan, & Bulthoff, 2001). Less is known about the prob-
lem of haptic picture recognition and object recognition, but Newell recently argued for viewpoint dependence in haptic recognition of 3-D objects. Newell et al. claimed that the backs of objects were the preferred viewpoint when subjects explored randomly generated abstract forms.

Perspective is a relatively recent invention in the science of depiction (Panofsky, 1925/1991), and dates from the Renaissance. Perspective representations of objects are altered as a function of viewing distance, with close distances inducing increases in the convergence of lines and size distortion. Correct perspective drawings of shapes and objects will vary as a function of viewing distance and direction; therefore, perspective is linked to viewpoint (Heller, Calcaterra, Tyler, & Burson, 1996; Heller, Brackett, Scroggs, Steffen, et al., 2002).

It is important to note that blind people report imagining objects rather differently than sighted people do. For example, congenitally blind people may not report “seeing” objects move as they engage in mental rotations of Braille patterns (Heller, Calcaterra, Green, & Lima, 1999). Furthermore, a blind person told M. Heller that blind people imagine “an entire tree,” but they know that sighted people imagine “half of a tree.” This statement reflects a visually impaired person’s belief that sighted people tend to think of objects from vantage points. Other blind persons have reported their friends’ surprise when they first learned that visual pictures do not depict all sides of an object or a person. Blind people are certainly capable of adopting a vantage point (Heller, Kennedy, & Joyner, 1995). However, adopting a particular viewpoint may not be a normative imaginal or perceptual behavior for congenitally, totally blind persons. It also may be a relatively unfamiliar skill, and this could influence the reactions of blind people to pictures. Moreover, it is not known if there are preferred viewpoints in haptic pictures of many familiar objects, nor is it known if this would change with practice and skill.

A number of blind people have been found to produce foldout depictions of objects upon their first attempts at drawing (Heller, 1989; Heller et al., 1995; Kennedy, 1993). These drawings attempt to show all sides of solid objects, for example, a table or model house. Thus, some congenitally blind people drew pictures of a house from above, but their pictures showed all four sides along with the roof. Another blind individual drew a picture of a tabletop from above, but the depiction included the four legs splayed out at the corners. Others, however, drew side views of a table, including drawings of all four legs. This is another indication that blind people may adopt viewpoints when producing drawings of solids.

Arditi, Holtzman, and Kosslyn (1988) claimed that the imagery of the congenitally blind did not follow the laws of perspective. They asked blind people to imagine familiar objects at varying distances and point to the sides of these ob-
jects. There was no evidence of alterations in arm angle as a function of distance in the congenitally blind. However, Kennedy (1993) has reported that blind people understand perspective. He asked blind people to point to the sides of an imagined wall at a near and a far distance, and found that blind people bring their arms together when pointing to the wall at a greater distance. Nonetheless, knowledge of the location of the ends of a wall at a near and far distance may not reflect a blind person’s interpretation of perspective in drawings. Moreover, people may follow different principles when using imagery than when interpreting pictures or locating solid objects in space. Heller, Calcaterra, Tyler, and Burson (1996) reported that congenitally blind people did not spontaneously use perspective in their drawings of a board at a slant. Congenitally blind people did not use foreshortening in their drawings, since all of their rectangles were drawn at the same height, despite tilt of the board in depth. However, sighted subjects generated foreshortened representations, as did LB subjects. It was especially interesting that congenitally blind persons discovered the principle of perspective when they were subsequently asked to match perspective drawings to the slanted board. The CB subjects performed as well as the blindfolded sighted subjects in this multiple-choice task. Note that Heller, Calcaterra, Tyler, and Burson (1996) found a failure to use correct perspective in drawing, and this may also occur in some sighted subjects using vision. Sighted individuals require instruction before they are able to draw in correct convergent perspective. More recent research indicates that CB individuals may have the potential to correctly interpret perspective with minimal or no prior instruction (Heller, Brackett, Scroggs, Steffen, et al., 2002). Experiment 1 in Heller et al. (2002) required subjects to feel two intersecting planes at different angles, and select the correct perspective drawing out of 4 choices to match the object (see Fig. 3.3). The VLV subjects had very high performance, while somewhat lower scores were found for the CB, LB, and sighted subjects. The difference between the CB and sighted subjects failed to reach significance. This means that performance on this perspective task was not dependent upon visual experience or visual imagery. Subsequent experiments tested the effect of picture viewpoint on recognition (see Fig. 3.4). The aim of this study was to determine if picture viewpoint has an effect on the recognizability of pictures of solid objects. One might expect that 3-D views, and any views expressing depth relations, would be more difficult for haptics. Furthermore, prior research showed that elevated views of a model house were especially problematic for sighted and congenitally blind subjects. Thus, we expected lower performance for top views of geometrical solids. The frontal views without converging lines were expected to be easier than the frontal views that included converging lines for the CB sub-
FIG. 3.3. (a) Drawing of obtuse angle stimulus object oriented at –45 degrees from the vantage point of the subject (adapted from Heller, Brackett, Scroggs, Steffen, et al., 2002, Perception, 31, 747–769, Pion Limited, London); (b) drawing of acute angle object oriented at +45 degrees; (c) drawing of right angle object oriented at +45 degrees.
FIG. 3.4. Drawings from (a) a frontal view, (b) three-dimensional view involving perspective, and (c) top view. Figure 3.4d shows the drawings depicting the objects from a frontal viewpoint with converging lines for the top edges, where the bottoms of the objects were at eye height (adapted from Heller, Brackett, Scroggs, Steffen, et al., 2002, Perception, 31, 747–769).
jects. This followed from the assumption that this convergence would not be present in haptics, in the absence of visual experience. Of course, if viewpoint does not matter, there should be little difference in performance for the varying types of viewpoints shown in Fig. 3.4.

Blindfolded sighted subjects were exposed to the solid, 3-D objects and asked to draw them using a raised-line drawing kit. Subsequently, they were given the same objects, but in a different random order. Their task was to feel the object, and then feel four picture choices, and indicate which picture showed the object. Some subjects were given prior information about the viewpoint of the pictures, and others were not. Thus, the subjects given viewpoint information about the top views were told that the pictures were top views of the objects that they felt. Performance was far better for the top views than the other views, and prior information about viewpoint aided matching accuracy. Performance on the top views (M = 95% correct) was much higher than was found for the frontal views (M = 51.6% correct) or the 3-D views (M = 56.6% correct). The superior performance for the top views was highly significant, but the difference between the frontal and 3-D views was not statistically significant.

An experiment explicitly compared the ease of interpreting frontal views with (Fig. 3.4d) and without (Fig. 3.4a) the use of converging lines. Frontal views without converging lines (Fig. 3.4a) are typical of the sort of perspective view one might have of a very distant object. Near objects yield converging lines if their bottoms are at eye height, as in Fig. 3.4d. Blindfolded sighted subjects actually performed significantly better for tangible drawings that included the use of converging lines.

A subsequent experiment tested the possible effect of visual experience on viewpoint effects. Repeated measures were taken on the four viewpoints (see Fig. 3.4). The subjects included VLV, CB, LB, and blindfolded sighted persons (n = 10 per group, total N = 40). The VLV subjects performed significantly better than the others, with very high performance overall (90% correct). The CB subjects were not significantly different from the other subjects, indicating that visual experience is not necessary for correctly interpreting pictures from different vantage points. The LB and VLV subjects had 100% correct scores for the top views. The top views were much easier than all of the others. The blindfolded sighted subjects did better with frontal views including converging lines than frontal views without converging lines, as in the previously mentioned experiment. This result was not found for the CB subjects, who had nearly identical performance for the two types of frontal views. The CB subjects have had less experience with pictures, and certainly no experience with the use of convergent perspective. Thus, it is not surprising that they reacted in a similar fashion to the two forms of frontal views.
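The geometry behind the converging lines in Fig. 3.4d can be made concrete with a small, purely illustrative sketch; the pinhole-projection model, function names, and numbers below are assumptions made for illustration and are not taken from the experiments described here.

# Illustrative sketch only (not from the chapter): a simple pinhole-projection
# model of why the top edges of an object whose bottom is at eye height project
# as converging lines when the object is near, but as nearly parallel lines
# when it is far away. All names and numbers are hypothetical.

def project(x, y, z, focal=1.0):
    # Perspective projection of the point (x, y, z) onto an image plane;
    # image coordinates shrink in proportion to depth z.
    return (focal * x / z, focal * y / z)

def top_edge_elevations(height, depth, distance):
    # Projected elevation of the near and far ends of the top edge.
    # The bottom of the object sits at eye height (y = 0), so only the
    # top edge (y = height) changes its projected elevation with depth.
    near_y = project(0.0, height, distance)[1]
    far_y = project(0.0, height, distance + depth)[1]
    return near_y, far_y

for d in (2.0, 50.0):  # near versus distant viewing (arbitrary units)
    near_y, far_y = top_edge_elevations(height=1.0, depth=1.0, distance=d)
    print(f"distance {d:5.1f}: near top edge {near_y:.3f}, far top edge {far_y:.3f}")

With these hypothetical numbers, the far end of the top edge projects well below the near end at the short distance (0.500 versus 0.333), so the top edges converge, whereas at the large distance the two elevations are nearly identical (0.020 versus 0.0196) and no convergence is visible, which is the contrast between Fig. 3.4d and Fig. 3.4a.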
The visually impaired subjects were much faster than the blindfolded sighted subjects in making their judgments. While the groups of visually impaired subjects all took similar amounts of time to match tangible pictures to the standard 3-D objects, they were approximately twice as fast as the blindfolded sighted participants. This enhanced speed reflected increased levels of haptic skill. As in earlier research, the VLV subjects demonstrated superior performance levels (90% correct overall). They were much faster than the blindfolded sighted subjects, and more accurate than the other groups of subjects. These results suggest that research with blindfolded sighted subjects often underestimates the capability of touch.

The advantage of the VLV subjects could have derived from a number of sources. They were not blindfolded, and it is possible that blindfolding initially disorients sighted subjects. Sight of the hand can help touch in judgments of texture (Heller, 1982, 1985). Furthermore, Kennett et al. (2001) reported that subjects showed improved two-point discrimination with gaze directed toward the locus of stimulation. Thus, the VLV subjects could have been able to see the location of their hands in space, which in turn may have helped them. One LB subject in this experiment told Heller that he thought that it helped him attend to touch when he “looked” at the location of his hands. This person did not have light perception, and was totally blind. Note, however, that more than half of the VLV subjects claimed that they could not see their hands on the table surface. Furthermore, it is possible that low lighting helped touch, independently of any possible effect of gaze direction or sight of hand motion. Visual pattern information may be distracting, and blindfolds may also temporarily interfere with attention to touch.

To clarify the possible role of light perception and sight of hand location, separate groups of sighted subjects were tested in the dark, and they wore stained-glass covered goggles to blur vision. In addition, they were dark-adapted for at least 3 min prior to replication of the viewpoint task that was presented to the visually impaired subjects. Nothing could be seen in the experimental room prior to sufficient dark adaptation. One group of subjects had a small, red light-emitting diode (LED) attached to the upper surface of their right hands; the other group was merely tested with low lighting and blurred vision. This second low-lighting group was included to determine if blindfolding alone could explain the lower performance of the blindfolded sighted subjects. The subjects with the LED (M = 17.8 correct; 89% correct) performed significantly better than the low-lighting group without the LED (M = 15.2 correct; 76% correct). Performance in the LED low-lighting group was almost identical to that of the VLV subjects (M = 90% correct), while the other group of low-lighting subjects performed slightly worse than the blindfolded sighted subjects (M = 79.5% correct).
These results suggest that the lower performance of the blindfolded subjects cannot be explained solely in terms of the effect of blindfolding. However, the very similar performance of the VLV subjects and the low-lighting LED subjects indicates that minimal residual vision is important. Moreover, both groups of subjects exceeded the accuracy of the blindfolded subjects. It is noteworthy that the LED subjects were no faster than the blindfolded sighted subjects. The present experiments do not discriminate between alternative interpretations of the data. First, it is possible that mere gaze direction matters. It is also possible that sight of hand motion is important. However, one cannot rule out the possibility that subjects were able to see the movement patterns produced by the moving LED, and used this visual pattern information in making their judgments. In addition, sight of the orientation of the hand could influence pattern perception (see Heller, 1985), but this is the subject of future research.

While the top views of geometrical solids were the optimal viewpoint for the forms in the experiment described here, this result may not generalize to other stimuli. Most of the stimuli shown in Fig. 3.4 have a cross-section that does not change with height. The pyramid, which had a cross-section that varied as a function of elevation, was more difficult for the subjects. Subjects did not do as well with top views of the pyramid. The preferred viewpoint for a 3-D form may depend upon stimulus characteristics. Of course, no one would argue that the top view of a head is the preferred viewpoint, nor is it the most familiar view. Future research will need to more completely examine the complex relationship between stimulus characteristics and the effect of varying the viewpoint of haptic pictures.
Figure 3.5 shows a set of tangible drawings that were used in the study of embedded figures in the sighted and visually impaired (Heller, Wilson, Steffen, Yoneyama, & Brackett, 2003). The aim of the study was to understand any possible link between perceptual selectivity and visual experience. The ability to segregate figure from ground is a fundamental characteristic of perception, but has not been studied extensively in haptics. One might expect that the ability to locate a target in a complex array depends upon a number of stimulus characteristics in vision, but the problem should be much more difficult in touch. Vision is generally thought to operate more quickly than touch, and is capable of simultaneous perception of a relatively large configuration. Most researchers have assumed that the sequential nature of exploration is
FIG. 3.5. Target stimuli and simple and complex backgrounds in the study of haptic embedded figures. In addition, the practice figure is present on the right. The letters correspond to the labels of the target stimuli taken from the embedded figures test (adapted from Heller, Wilson, Steffen, Yoneyama, & Brackett, 2003, Perception, 32, 499–511).
characteristic of touch, and represents a hardship for that sense. Thus, the serial nature of processing in touch is often presumed to impose a severe load on memory, given complex arrays. Moreover, this memory load should make it very difficult to obtain an overview of a large array, and further impede the process of organizing the configuration. Given these presumed deficiencies in haptics, one might expect very low performance levels in a haptic version of the embedded figures test.
The embedded figures test was originally designed as a measure of cognitive skills (see Oltman, Raskin, & Witkin, 1971). Witkin assumed that higher levels of cognitive functioning are revealed in the ability to engage in an analytical perceptual style, characterized by the ability to locate targets that are hidden in complex backgrounds (Witkin, 1967). Witkin, Birnbaum, Lomonaco, Lehr, and Herman (1968) studied performance on a haptic version of the embedded figures task in congenitally blind children. They found lower performance in the congenitally blind, and this was consistent with the idea that perceptual selectivity depends on vision. They thought that the ability to develop an articulated understanding of an array reflects perceptual selectivity, and requires visual experience. Their results were consistent with the idea that congenitally blind children tend to develop a global and more primitive perceptual understanding of complex patterns (also see Revesz, 1950). However, the results of the study (Witkin et al., 1968) do not speak to the perceptual skills of congenitally blind adults.

Pilot data suggested that the haptic embedded figures task would be extremely difficult for the sighted subjects. Consequently, a preliminary experiment with sighted subjects used independent groups for the simple and complex backgrounds. The task was first explained using the practice raised-line stimuli shown on the right side in Fig. 3.5. The participants were told that they could feel the target in any manner that they wished, and were then to try to find it in the larger background. They were told that the background could have extra lines that they might need to ignore to find the target. They were also told that the target would appear to be the same size and orientation in the background. When the subjects failed to locate the target in the background, the experimenter took their preferred index fingers and showed them where it was. Almost all of the subjects failed to correctly locate the target in the background, generally choosing one of the smaller triangles of a slightly different configuration.

The subjects were given a target stimulus and asked to feel four background picture choices to locate the one that held the target. They were subsequently asked to trace the target within the background. Time limits were not imposed, and subjects were allowed to repeatedly examine the target stimulus before indicating a choice. Performance in the haptic embedded figures multiple-choice task was extremely low, with mean performance levels of 66% to 73% correct for the simple and complex backgrounds, respectively (see Table 3.1). However, tracing accuracy was significantly higher for the simple (M = 38.2% correct) than the complex backgrounds (M = 19% correct), t(26) = 2.17, p < .05. This is much lower performance than one obtains for visual versions of the task. In vision, I have found 100% accuracy for the multiple-choice task with simple backgrounds, but slightly lower scores of 98.7% correct for complex backgrounds. The tracing response measure yielded lower scores than the multiple-choice task in vision, since mean correct scores were 98.7% and 66.7% for simple and complex backgrounds, respectively.

TABLE 3.1
Mean Number Correct (M) and Standard Deviations (SD) in the Haptic Embedded Figures Tasks, by Group and Type of Background

                           Simple backgrounds    Complex backgrounds        Overall
Group / Task                  M       SD             M        SD           M       SD
Congenitally blind
  X-Choice                   3.8      1.8            3.5      2.0          7.3      3.7
  Trace Form                 2.5      1.7            1.4      1.6          3.9      3.1
Late blind
  X-Choice                   5.0      0.9            5.1      0.9         10.1      1.6
  Trace Form                 4.5      1.3            2.7      1.3          7.2      2.1
Very low vision
  X-Choice                   5.7      0.7            5.2      1.3         10.9      1.5
  Trace Form                 4.9      1.3            3.6      2.1          8.4      3.1
Blindfolded sighted
  X-Choice                   4.7      0.9            3.8      0.9          8.5      1.0
  Trace Form                 2.9      1.4            1.0      0.8          3.9      1.8

Note. Maximum score possible = 6 per type of background, and 12 overall.

Subsequently, independent groups of blindfolded sighted, LB, CB, and VLV subjects participated in the embedded figures task. Note that the stimuli were drawn without the use of ink, so that the lines were of very low contrast: pale white raised lines against a white background that were not visible to the VLV subjects. All subjects in this experiment were first exposed to the targets with simple backgrounds and then to the targets with complex backgrounds.
The VLV subjects had extremely high levels of performance, and they were much faster than the sighted subjects. All groups of the visually impaired subjects were significantly faster than the sighted subjects. The accuracy of the LB subjects approached that of the VLV subjects, but the CB and blindfolded sighted subjects performed at similar accuracy levels. This indicates that visual experience is not necessary for the development of perceptual selectivity. Note that the CB subjects were much faster than the blindfolded sighted participants, despite similar accuracy scores. The superior performance of the VLV subjects was striking, and they did not do much worse using touch than sighted subjects did using vision. The mean number of correctly traced stimuli in complex backgrounds was 3.6 and 4.0 for the VLV (using touch) and sighted subjects (using sight), respectively. Of course, touch was much slower than vision in mean response latency. These results were consistent with the idea that research with blindfolded sighted subjects underestimates the capability of the sense of touch. Note that the late blind subjects were not significantly different from the VLV participants, and they were also much better than the sighted subjects at locating targets in simple and complex backgrounds.
The study of haptic illusions has the potential to let us determine if haptic space follows the same metrics as visual space. If touch provides equivalent spatial information to that derived from vision, one would expect that congenitally blind persons would demonstrate sensitivity to many of the same aspects of geometry as do sighted persons. This was shown in many of the previously discussed studies in this chapter. Another strategy to approach this problem has been to study illusory percepts in touch and in vision. If there are processes that are limited to vision, as in illusions that are purely “optical,” then one has evidence for nonequivalence between the senses of touch and vision. Thus, illusions that are potent in vision may be weaker in touch, or perhaps, fail to reveal themselves in haptics. However, there may be similar illusory responses in both senses (Heller, Brackett, Wilson, Yoneyama, Boyer, & Steffen, 2002), but they may occur for different reasons. The horizontal-vertical illusion is potent in haptics, and subjects overestimate vertical lines in inverted “T” shapes (Heller, 2000c). This overestimation occurs when the vertical line segment leads to radial scanning motions that are directed toward the body. However, there are some important differences between the expression of the visual and haptic illusions. First, the haptic horizontal-vertical illusion is contingent upon stimulus size, and may not appear with smaller stimuli (Hatwell, 1960; Heller, Calcaterra, Burson, & Green, 1997).
Heller et al. (1997) argued that smaller stimuli are examined with finger motions alone, and that illusory misperception is dependent upon the involvement of whole-arm motion. Larger movements are induced by stimuli that go beyond a “hand space” and involve a larger “arm and body space.” The illusion was obtained in congenitally blind subjects (Heller, Brackett, Wilson, Yoneyama, & Boyer, 2002; Heller & Joyner, 1993). However, overestimation of verticals was not found when it was vertical with respect to gravity (Heller, Brackett, Salik, Scroggs, & Green, 2003). When the vertical segment of inverted “T” shapes is vertical with respect to gravity, one finds a negative illusion (also see Day & Avery, 1970). The negative illusion, with overestimation of horizontals, was robust and occurred with small stimuli. One does not see this negative illusion in vision, and gravitationally vertical placement still yields a strong visual horizontal-vertical illusion with overestimation of verticals. The Mueller-Lyer illusion is found in haptics, and it is powerful (Heller, Brackett, Wilson, Yoneyama, Boyer, & Steffen, 2002). Moreover, the illusion was potent for smaller stimuli. As in vision, the haptic Mueller-Lyer illusion increased in strength as the included wing angle was reduced. More acute angles make it more difficult for subjects to feel the lines between the endings for wings-in figures, and this was consistent with a confusion interpretation of the illusion. Thus, more acute wing angles make it difficult for subjects to tell the difference between the horizontal line and the line endings. This would lead to underestimation or overestimation of lines, depending upon wing direction. There are some important differences between touch and vision that affect the processing of patterns, and this difference extends to illusions. Vision is better suited for the apprehension of larger configurations, and thus may be more susceptible to illusions that derive from depth relations expressed in relatively large arrays. The Ponzo illusion was not found in haptics, and this may be a consequence of a tendency to focus on the local features of displays derived from raised-line and other stimuli (see Heller & Clyburn, 1993; Heller, Hasselbring, Wilson, Shanley, & Yoneyama, 2004). Heller and Clyburn (1993) found that LB subjects were far more likely than sighted individuals to attend to local features in a tactile analog of the Navon (1977) local-global pattern identification task. Blindfolded sighted subjects tended to notice both the local and global forms, when confronted by large geometric patterns made up of smaller, different shapes (e.g., a circle of squares). Visual experience may be important, but past nonvisual experience is probably very influential as well. Thus, Braille is size specific, unlike visual print. Therefore, it was not surprising that CB subjects tended to notice the smaller Braille characters that made up larger, different characters. LB subjects were more likely than the CB to notice both the large and small Braille characters, but they also tended to focus on local features.
Of course, this discussion of vision, touch, and scale is subject to an important caveat. We have a great deal to learn about the relevant appropriate scale for touch, and the notion of large or small size is also relative for the sense of vision. While vision can certainly give us good spatial information about very large spaces, it may have to process large scenes sequentially. This is especially likely when viewing conditions yield displays that exceed the field of view of the eyes. This occurs when we are very close to a large display. In addition, haptics can hold some advantages for the simultaneous processing of form, as in grasping 3-D objects. Thus, one can simultaneously feel the front and back of a solid object, but vision must process this 3-D spatial information sequentially.
CONCLUSIONS
Recent data suggest that it is appropriate to revise many misconceptions that have been held in this area. First, tangible pictures are extremely useful for the evaluation of perceptual functioning and spatial reasoning in visually impaired people. Visual experience is not necessary for the perception of 2-D patterns, but there is certainly something to be gained from extremely low-level vision as opposed to a complete lack of light perception. Second, research with sighted subjects often underestimates the capability of the sense of touch. Finally, many supposed limitations in haptics, such as the necessity to process information sequentially, may have advantages. These advantages can include the freedom from some types of illusory misperception, as in the Ponzo illusion.
Researchers often underestimate the capability of haptics and assume that tangible pictures are simply too difficult. However, performance with haptic pictures may be excellent when tasks do not depend upon familiarity and semantic memory (Heller, 2002; Heller, Brackett, & Scroggs, 2002; Heller, Brackett, Scroggs, et al., 2001, 2002; Heller, Calcaterra, et al., 1996). Moreover, tangible pictures provide a window through which we may evaluate spatial reasoning in blind individuals (Heller, Brackett, Scroggs, et al., 2001). Certainly, visual experience is not needed for gaining an appreciation of the horizontal spatial frame of reference. Additional research may tell us about other aspects of haptic space perception. At this time, it is clear that tangible pictures can provide useful information about depth relations.
Congenitally blind people may not anticipate the conventions that sighted people employ to illustrate depth relations in pictures (Heller, Calcaterra, et al., 1996). However, they were able to discover the principle of foreshortening and (perhaps) convergence with minimal experience and without feedback on their performance. Heller, Brackett, Scroggs, Steffen, et al. (2002b) found evidence that congenitally blind people could understand some aspects of perspective when interpreting drawings of intersecting planes. In addition, the ease or difficulty of tangible drawings varied as a function of viewpoint, with top views providing much better performance than side views or 3-D views of geometrical solids. Subjects showed excellent performance when feeling 3-D solids and selecting the appropriate match from among four top-view drawings.
Touch has often been considered a source of “ultimate” knowledge about whether one is in contact with “reality.” Thus, everyday language tells us to pinch ourselves to tell if we are dreaming or awake. However, touch is susceptible to illusions, as is vision. Moreover, reliance on touch or vision can vary, and is not an absolute (Heller, 1983, 1992; Heller, Calcaterra, Green, & Brown, 1999). Dominance relations can change with the speed and ease of the response measure, the surface attribute that is judged, and individual differences in perceptual style. Touch and vision may sometimes differ in the nature of illusory misperception that one experiences. The senses of touch and vision may also give us illusions for different reasons. For example, the Mueller-Lyer illusion can be found with congenitally blind persons (Heller, Brackett, Wilson, Yoneyama, Boyer, & Steffen, 2002). Many visual illusions have been linked to misapplied size-constancy scaling (e.g., Gregory, 1963) and other factors that may be specifically visual or linked to visual experience (Coren & Girgus, 1978). Clearly, the Mueller-Lyer illusion cannot be dependent upon visual experience or imagery, since it has been found in CB persons. Furthermore, the horizontal-vertical illusion has been explained in terms of the shape of the visual field. The visual field is elliptical, and this means that verticals will occupy more of our phenomenal visual field than horizontals. However, one can make the same argument for touch. We have a broader horizontal reach than we have in the vertical direction, and this could alter perceived extent. More relevant is the finding that some illusions do not seem to occur in touch, namely the Ponzo and the Poggendorff. These optical illusions have been related to misapplied size-constancy scaling, and this would be less likely to influence touch.
A number of studies have demonstrated the importance of skill levels for pattern perception in touch. Sighted subjects are much slower than visually impaired subjects at matching pictures (Heller, Brackett, & Scroggs, 2002) and considerably less accurate than VLV subjects. Very low vision subjects have emerged as exceptionally skilled in numerous studies involving picture perception. The VLV subjects had near-errorless performance in picture matching, making haptic judgments in the water-level problem, and in experiments involving perspective and viewpoint. The relatively low levels of performance that are often shown by blindfolded sighted subjects may lead researchers to underestimate the capability of the sense of touch. These results suggest that it is a mistake to generalize from the results of studies of naïve blindfolded sighted subjects to blind persons, and vice versa. There are a number of reasons for the lower levels of performance that have been shown by blindfolded sighted subjects in studies of haptic pattern perception. Sighted subjects are relatively unfamiliar with the use of touch for pattern perception, and this slows their exploration. Thus, sighted subjects take much longer than necessary when instructed to try for accuracy than when told to respond “as quickly as possible.” While accuracy did not suffer when subjects were given “speed” instructions, they performed twice as fast in a picture matching task. Their slower exploration probably reflects their uncertainty and unfamiliarity with the task. In addition, sighted subjects have learned to control haptic exploration with visual guidance (see Heller, 1982). Peripheral vision is often used to guide haptic exploration, while foveal vision may be reserved for pattern perception. Subjects are better able to judge the smoothness of surfaces when allowed blurry vision of their hands, given very low lighting. Recently, I have found that sight of hand motion also aids picture perception. It is not clear if sight of the hand alone would be sufficient for this improvement. Earlier research showed that very low vision can aid touch through the provision of spatial reference information, in addition to sight of hand motion and hand location (Heller, 1985). Perhaps foveal vision of patterns can sometimes disrupt haptic processing of form, and direct attention away from that sense. Subjects have sometimes told me that it helped them imagine what they were touching if they closed their eyes behind the blindfolds that they wore. Whatever the mechanisms might be, blurry vision and low lighting can aid sighted subjects as they engage in picture perception tasks (Heller, Brackett, Scroggs, Steffen, et al., 2002).
Touch perception typically occurs in concert with vision, and this is supported by the superior performance of the very low vision participants. Moreover, visual guidance aids touch, and this also suggests the idea that multisensory processing is “normal.” Haptic performance may suffer when spatial reference information is denied, and this may be more of a problem for sighted than for blind subjects. Many blind subjects may have learned to compensate for a lack of visual information about posture and spatial layout, but sighted subjects are much less experienced with perception using touch in isolation. This should make us very cautious about generalizing from the results of laboratory studies with blindfolded sighted subjects to other, perhaps more typical situations. If perception is normally multimodal, then what do we conclude are the consequences of visual deprivation in the totally blind person? The jury is out, and we should not be too quick to jump to conclusions without empirical evidence. When sighted individuals are deprived of sight for prolonged periods of time, we may see a reorganization of the cortex. Over the short run, there can be very negative emotional consequences of visual deprivation. An important caveat for the researcher is that it may be difficult to discriminate between the impact of the affective consequences of visual deprivation and those outcomes that are “purely cognitive.”
ACKNOWLEDGMENT
Preparation of this chapter and some of the research reported here were supported by NSF grant 0317293.
REFERENCES
Arditi, A., Holtzman, J. D., & Kosslyn, S. M. (1988). Mental imagery and sensory experience in congenital blindness. Neuropsychologia, 26, 1–12.
Axel, E., & Levent, N. (Eds.). (2003). Art beyond sight: A resource guide to art, creativity, and visual impairment. New York: American Foundation for the Blind.
Berthiaume, F., Robert, M., St-Onge, R., & Pelletier, J. (1993). Absence of a gender difference in a haptic version of the water-level task. Bulletin of the Psychonomic Society, 31, 57–60.
Coren, S., & Girgus, J. S. (1978). Seeing is deceiving: The psychology of visual illusions. Hillsdale, NJ: Lawrence Erlbaum Associates.
Day, R. H., & Avery, G. C. (1970). Absence of the horizontal-vertical illusion in haptic space. Journal of Experimental Psychology, 83, 172–173.
Farah, M. J. (2000). The cognitive neuroscience of vision. Malden, MA: Blackwell.
Gregory, R. L. (1963). Distortion of visual space as inappropriate constancy scaling. Nature, 199, 678–680.
Harris, L. J., Hanley, C., & Best, C. T. (1977). Conservation of horizontality: Sex differences in sixth graders and college students. In R. C. Smart & M. S. Smart (Eds.), Readings in child development and relationships (pp. 375–387). London: Macmillan.
Hatwell, Y. (1960). Etude de quelques illusions geometriques tactiles chez les aveugles [The study of some tactile geometric illusions in the blind]. L’Année Psychologique, 60, 11–27.
Heller, M. A. (1982). Visual and tactual texture perception: Intersensory cooperation. Perception & Psychophysics, 31, 339–344.
Heller, M. A. (1983). Haptic dominance in form perception with blurred vision. Perception, 12, 607–613.
Heller, M. A. (1985). Tactual perception of embossed Morse code and Braille: The alliance of vision and touch. Perception, 14, 563–570.
Heller, M. A. (1989). Picture and pattern perception in the sighted and blind: The advantage of the late blind. Perception, 18, 379–389.
Heller, M. A. (1992). “Haptic dominance” in form perception: Vision versus proprioception. Perception, 21, 655–660.
Heller, M. A. (2000a). Guest editorial: Society, science and values. Perception, 29, 757–760.
Heller, M. A. (Ed.). (2000b). Touch, representation and blindness. Oxford, UK: Oxford University Press.
Heller, M. A. (2000c). Haptic perceptual illusions. In Y. Hatwell, A. Streri, & E. Gentaz (Eds.), Toucher pour connaitre [Touching for knowing]. Paris: Presses Universitaires de France.
Heller, M. A. (2002). Tactile picture perception in sighted and blind people. Behavioral Brain Research, 135, 65–68.
Heller, M. A., Brackett, D. D., Salik, S. S., Scroggs, E., & Green, S. (2003). Objects, raised-lines and the haptic horizontal-vertical illusion. Quarterly Journal of Experimental Psychology: A, 56, 891–907.
Heller, M. A., Brackett, D. D., & Scroggs, E. (2002). Tangible picture matching in people who are visually impaired. Journal of Visual Impairment and Blindness, 96, 349–353.
Heller, M. A., Brackett, D. D., Scroggs, E., Allen, A. C., & Green, S. (2001). Haptic perception of the horizontal by blind and low vision individuals. Perception, 30, 601–610.
Heller, M. A., Brackett, D. D., Scroggs, E., Steffen, H., Heatherly, K., & Salik, S. (2002). Tangible pictures: Viewpoint effects and linear perspective in visually impaired people. Perception, 31, 747–769.
Heller, M. A., Brackett, D. D., Wilson, K., Yoneyama, K., Boyer, A., & Steffen, H. (2002). The Haptic Muller-Lyer illusion in sighted and blind people. Perception, 31, 1263–1274.
Heller, M. A., Brackett, D. D., Wilson, K., Yoneyama, K., & Boyer, A. (2002). Visual experience and the haptic horizontal-vertical illusion. British Journal of Visual Impairment, 20, 105–109.
Heller, M. A., Calcaterra, J. A., Burson, L. L., & Green, S. L. (1997). The tactual horizontal-vertical illusion depends on radial motion of the entire arm. Perception & Psychophysics, 59, 1297–1331.
Heller, M. A., Calcaterra, J. A., Burson, L. L., & Tyler, L. A. (1996). Tactual picture identification by blind and sighted people: Effects of providing categorical information. Perception & Psychophysics, 58, 310–323.
Heller, M. A., Calcaterra, J. A., Green, S. L., & Barnette, S. L. (1999). Perception of the horizontal and vertical in tangible displays: Minimal gender differences. Perception, 28, 387–394.
Heller, M. A., Calcaterra, J., Green, S., & Lima, F. (1999). The effect of orientation on Braille recognition in persons who are sighted and blind. Journal of Visual Impairment and Blindness, 93, 416–419.
Heller, M. A., Calcaterra, J. A., Green, S. L., & Brown, L. (1999). Intersensory conflict between vision and touch: The response modality dominates when precise, attention-riveting judgments are required. Perception & Psychophysics, 61, 1384–1398.
Heller, M. A., Calcaterra, J. A., Tyler, L. A., & Burson, L. L. (1996). Production and interpretation of perspective drawings by blind and sighted people. Perception, 25, 321–334.
Heller, M. A., & Clyburn, S. (1993). Global versus local processing in haptic perception of form. Bulletin of the Psychonomic Society, 31, 574–576.
Heller, M. A., Hasselbring, K., Wilson, K., Shanley, M., & Yoneyama, K. (2004). Haptic illusions in the sighted and blind. In S. Ballesteros & M. A. Heller (Eds.), Touch, blindness and neuroscience (pp. 135–144). Madrid, Spain: UNED Press.
Heller, M. A., & Joyner, T. D. (1993). Mechanisms in the tactile horizontal/vertical illusion: Evidence from sighted and blind subjects. Perception & Psychophysics, 53, 422–428.
Heller, M. A., Kennedy, J. M., & Joyner, T. D. (1995). Production and interpretation of pictures of houses by blind people. Perception, 24, 1049–1058.
Heller, M. A., Wilson, K., Steffen, H., Yoneyama, K., & Brackett, D. D. (2003). Haptic perceptual selectivity: Embedded figures in the sighted and blind. Perception, 32, 499–511.
Kennedy, J. M. (1993). Drawing and the blind. New Haven, CT: Yale University Press.
Kennett, S., Taylor-Clarke, M., & Haggard, P. (2001). Noninformative vision improves the spatial resolution of touch in humans. Current Biology, 11, 1188–1191.
Klatzky, R. L., Loomis, J. M., Lederman, S. J., Wake, H., & Fujita, N. (1993). Haptic identification of objects and their depictions. Perception & Psychophysics, 54, 170–178.
Lederman, S. J., Klatzky, R. L., Chataway, C., & Summers, C. D. (1990). Visual mediation and the haptic recognition of two-dimensional pictures of common objects. Perception & Psychophysics, 47, 54–64.
Millar, S. (1994). Understanding and representing space: Theory and evidence from studies with blind and sighted children. Oxford, UK: Oxford University Press.
Millar, S. (2000). Modality and mind: Convergent active processing in interrelated networks as a model of development and perception by touch. In M. A. Heller (Ed.), Touch, representation and blindness (pp. 99–141). Oxford, UK: Oxford University Press.
Navon, D. (1977). Forest before trees: The precedence of global features in visual perception. Cognitive Psychology, 9, 353–383.
Newell, F. N., Ernst, M. O., Tjan, B. S., & Bulthoff, H. H. (2001). Viewpoint dependence in visual and haptic object recognition. Psychological Science, 12, 37–42.
Oltman, P. K., Raskin, E., & Witkin, H. A. (1971). Group Embedded Figures Test. Palo Alto, CA: Consulting Psychologists Press.
Panofsky, E. (1991). Perspective as symbolic form. New York: Zone. (Original work published 1925)
Piaget, J., & Inhelder, B. (1956). The child’s conception of space. London: Routledge and Kegan Paul.
Revesz, G. (1950). The psychology and art of the blind. London: Longman.
Witkin, H. A. (1967). A cognitive style approach to cross-cultural research. International Journal of Psychology, 2, 233–250.
Witkin, H. A., Birnbaum, J., Lomonaco, S., Lehr, S., & Herman, J. L. (1968). Cognitive patterning in congenitally totally blind children. Child Development, 39, 767–768.
Form, Projection and Pictures for the Blind
John M. Kennedy
Igor Juricevic
University of Toronto
Blind individuals can draw outline pictures in a raised form (Kennedy, 2003). Their drawings include many features to do with an observer’s vantage point. Of course these are found in many perspective pictures drawn by the sighted, even in quite young children’s drawings, but they are often assumed to be purely matters of sight. The drawings by the blind require us to question such assumptions, and to challenge theories of the neural basis of perception and spatial cognition. So here we consider the theory of outline, and argue that via haptics blind people develop an appreciation of directions from an observer (Millar, 1994, 2002) and, thereby, a basis for projection (Heller & Kennedy, 1990). Perhaps blind individuals use hearsay ideas about pictorial cues to depth to produce an end result that merely mimics pictures by the sighted. If so, their drawings are diagrams with no serious basis in haptics. Alternatively, and more radically, the blind may use perspective schemes, and it may be helpful to compare their drawing development with that of sighted children and adults, seeking a common base. Common and unique factors, taken together, could spur hypotheses about the development of some highly dedicated and some remarkably flexible neural machinery serving perception and spatial cognition, which attention-demanding studies have made evident recently (Goodale, James, & Humphrey, 2002; Johnson, 2001; Sadato et al., 1996; Zangaladze, Epstein, Grafton, & Sathian, 1999).
Sketches presented here by a blind girl and a blind woman who have long been interested in drawing clearly raise the puzzle of the relation between haptics, which deals with both 3-D and 2-D in the everyday world, and lively controversy about projection onto a pictorial surface (Hopkins, 2003; Lopes, 2003), which is, of course, just 2-D by definition. To deal with “reverse projection,” from 2-D pictures back into a 3-D world, we suggest that matters of direction can help again. Novel theories we offer here have to do with grouping and dots (a receptive-field theory), haptic and visual pathways (a dedicated-or-flexible theory), and projection (a perspective band theory).
GAIA
A group of congenitally blind children, ages 8–13, on first exposure to outline pictures recognized more raised outline pictures than sighted blindfolded children, as reported by D’Angiulli, Kennedy, and Heller (1998). Pictures that were difficult for one group to identify were also hard for the other, so common features may underwrite the task for both. The form of the outline on the picture surface was often similar to the form of the referent object, but the sketches also involved projection to an observer’s vantage point, with T-junctions indicating foreground-background overlap for example. The pictures presented to the blind children were drawn by the sighted investigators. Naturally, what the children themselves would choose to draw might be radically different. However, Figs. 4.1–4.5, drawings by a blind child, Gaia, have many features in common with pictures drawn by sighted peers. Unlike the vast majority of blind children (Eriksson, 1998), Gaia has been drawing since preschool days, with encouragement from her family, much like a typical sighted child. Her drawings suggest the same developmental trajectory as those of sighted children (Golomb, 2002; Nicholls & Kennedy, 1992; Willats, 2003). Gaia was born with limited peripheral sensitivity to light, and lost that little sight to become totally blind at age 7. She was given raised line drawings by her mother, Lucia, from age 2, and she herself began drawing soon thereafter. She has never had enough sight to see the pictures she draws. Pedagogue Vincenzo Bizzi, who met the family when Gaia was 6, encouraged Gaia and Lucia to continue with drawing. Gaia draws out of interest, inventing beach scenes and playgrounds, designing clothing, and so on. She was tested twice, at ages 12 and 13, by J. M. Kennedy, at the behest of Paola Di Giulio of the Infant Jesus Hospital in Rome, who realized that Gaia’s interests and abilities are important to theories of tactile graphics. Kennedy asked Gaia to make raised line drawings of objects and scenes. He was also shown some raised line drawings Gaia made at her own behest, which had been kept by her mother.
Gaia’s drawings in response to Kennedy’s requests often surprised Lucia, Bizzi, and Di Giulio, which indicates that Gaia is inventive, and is not simply repeating what she has been taught. Consider three of Gaia’s own spontaneous drawings, kept by her mother, probably drawn at ages 9–11, and three drawings she made when tested at ages 12 and 13 (Kennedy, 2003). Figure 4.1 is a drawing of a playground scene. The objects are arrayed on a ground plane. Objects further in depth are shown higher in the picture plane. Left and right in the picture plane depict left and right in the scene. The objects at the top of the picture are houses and trees, drawn in proportion to each other, though relatively small compared to the slides, rope climbers and bush in the playground. At the bottom of the picture is a small drawing of a house, likely a playhouse. Lines in the picture mostly represent edges of flat and curved occluding surfaces, or long narrow surface forms (wires), though some are borders of paths. Orientations of the objects match the orientation of objects in the world to an observer (Mirabella & Kennedy, 1999). Flat objects are shown by shapes similar to their front surfaces (elevations), as is common in sighted children ages 5 to 10 (Landerer, 2000; Milbrath, 1998). The steps up into the houses are drawn in plan view, much like the overall layout of the playground and the pathways around it. Like elevations, plan views could be “imprints,” made by flat rigid surfaces laid on flat yielding surfaces. But the steps suggest projection, since each step is at a different depth in the scene.
FIG. 4.1. Drawing of a playground scene by Gaia.
Figure 4.2 depicts rounded forms: people on a beach. Objects higher in the picture plane are at greater depth in the scene, and the orientation of the objects is as before. Depictions of rounded forms showing frontal details can be considered projections (not cross-sections, imprints or silhouettes), since they show parts like mouths and shoulders that are at different depths in the scene. Overlap (T-junctions) and hidden line elimination are used. The water occludes the people, and so too does a ring around one person. At the extreme one person is indicated by nothing more than flippers. But a fish is evident, and some other briny creatures, in the water. (On one hand, the boundaries of water can be haptically determined, so it may occlude. On the other hand, one can haptically explore objects in water, so haptics is not occluded. The choice is made one way for the fish and the other for the people, likely because the fish are known to be immersed but the people need to be shown to be.) Figure 4.3 has people in a variety of postures. The figure on the left jogs towards the observer, with one leg in the air and foreshortened, suggesting projection. The neighbor has one foot pointing to the left and the other to the right, a common device in sighted children ages 8–10 for a person standing facing the observer (Arnheim, 1990; Milbrath, 1998). A third person in Fig. 4.3 is walking to the right, in profile, one arm overlapping the body. The hidden body-line is eliminated. Both feet point to the right, which again is a common device for a walking person in sighted children ages 8–10. The rear leg of a fourth person, with long hair obscuring one arm and the body, and both feet pointing to the right, is sharply bent and suggests “running.” The people are lined up, left to right, on a pair of ground lines, suggesting a path.
FIG. 4.2. Drawing of a beach scene by Gaia.
FIG. 4.3. Drawing of people by Gaia.
The person with one foot to the left and one to the right might be described as folded out, rather than projected. Indeed, when asked to draw a cube, Gaia showed each face, including all sides, top and rear, as if the box were folded out (Fig. 4.4 left), a common device at ages 7–8 in sighted children (Nicholls & Kennedy, 1992), and then followed it with a cube drawn in inverse projection (Fig. 4.4 right) with the front surface drawn as a smaller central square and the rear as larger. Inverse projection is typical of sighted children ages 9–11 (Kennedy, 2003). Figure 4.5 shows two cars and a storefront. The more distant car is drawn higher in the picture plane. The storefront is shown by vertical lines that stop above the cars, implying T-junctions and occlusion. The cars are shown from the side, a common scheme for cars from sighted children ages 8–10 and older. The drawing suggests projection, since cars at different depths are depicted, plus occlusion. Further, since the bases of objects on a ground plane are higher in direction the further the object from the observer, matters of projection may govern Gaia’s use of height-in-the-picture for depth. When a drawing shows creatures inside the sea, it is tempting to attribute the scheme to what one knows rather than what one observes. But while knowledge certainly influences how the fish are shown, there are many properties in Gaia’s pictures that have to do with what can be perceived haptically as well as visually. Gaia’s use of lines for surface edges is common in drawings by the congenitally totally blind. As in the sighted, what varies significantly from one blind person to another is the kind of projection from the referent to the line patterns, which likely depends on the person’s age and experience with drawing. In this vein consider a drawing with convergence, from a blind woman who, like Gaia, has considerable practice with drawing.
FIG. 4.4. Two drawings of a cube by Gaia. The first is in foldout style; the second uses inverse projection, since the small square in the center is the front of the cube and the large square shows the rear edges of the cube (from Kennedy, J. M., Perception, 32, 321–340, p. 326. With permission of Pion Publishers Limited, London, UK).
FIG. 4.5. Two cars and a storefront by Gaia (from Kennedy, J. M. (2003), Perception, 32, 321–340, p. 331. With permission of Pion Publishers Limited, London, UK).
TRACY
Tracy, from New York, is a 40-year-old woman who is totally blind. She never had perfect sight, and she lost her sight totally at age 2 because of retinoblastomas (Kennedy & Juricevic, 2003). Consequently, her experience in making drawings came after loss of sight. She reports that she enjoyed drawing as a child, and still does, on her own initiative. She reports that as a child she was typically given general encouragement in drawing rather than instruction, and the same as an adult. She often drew her toys (e.g., a horse) or domestic objects (e.g., a cat). She considers herself largely self-taught.
For example, she reports drawing her toy horse, and then comparing it to the model and making another drawing which she judged better, but still re-examining her horse and making another drawing which she deemed even better, repeating the process until her drawing was satisfactory.
If drawing develops in the blind and the sighted in much the same way, Tracy would be expected to use a projection system a stage ahead of Gaia’s foldout, parallel, and inverse perspective schemes (Kennedy & Juricevic, 2003). The stage ahead is likely convergent perspective (Kennedy, 1993; Willats, 2003). In one especially revealing drawing, Tracy drew 6 glasses, in two receding rows of three glasses each, which were sitting on a tabletop (Fig. 4.6). The sketch uses two devices: convergence “foreshortening” and height in the picture. The glasses are shown as U shapes (similarity of form). Two large U shapes (2.1 cm tall and 3.5 cm apart) stand for glasses near the observer. Smaller U shapes (1.4 cm tall and 1.9 cm apart) spaced closer together depict two glasses further away. Still-smaller U shapes (0.6 cm tall and 1.1 cm apart), closer together again, portray the furthest glasses. The more distant the glass, the higher the U shape. Also, the vertical spacing from the base of the glasses in the low pair to the bases in the middle row is 3.5 cm, and the spacing from the bases in the middle to the bases of the upper U shapes is slightly less, 3.0 cm. Tracy foreshortens via decrease in size of the Us, convergence of left-right spacing and perhaps vertical diminution as well. The glasses themselves simply use similarity of form. It may be that the projective system for the blind typically is initially based on direction, and objects that are too small to differ largely in direction (e.g., a glass) at that stage are drawn using other systems (e.g., similarity of form and orientation, T-junctions, height in the picture). Tracy mentioned that the U shapes to the right were not aligned. She said she had moved the upper U too much to the right.
Tested in her 20s, Tracy used a single square to represent a cube. A few years later she used a square for the front face of the cube, and a slim rectangle for the top face foreshortened.
FIG. 4.6. Two receding rows of glasses, three per row, by Tracy (from Kennedy, J. M., & Juricevic, I. (2003), Perception, 32, 1059–1071, p. 1064. With permission of Pion Publishers Limited, London, UK).
Her drawings developed in sophistication from single aspects of objects (i.e., fronts, using similarity of form) to parallel projection with foreshortening, and now she has used convergent projection. Other congenitally blind adults we have tested have also used foreshortening and convergence in their drawings.
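To make the convergence scheme concrete, the sketch below (our own illustration, not taken from the chapter) projects three receding rows of glasses onto a picture plane by similar triangles. Every size and distance in it is an assumption chosen only to display the qualitative pattern seen in Tracy's drawing: U height and left-right spacing shrink as depth increases.

```python
# A minimal sketch of convergent projection for three receding rows of
# glasses. All sizes and distances below are assumed values; only the
# qualitative ordering matters.

glass_height = 0.12      # metres (assumed)
pair_width = 0.20        # left-right separation within a row, metres (assumed)
picture_plane = 0.07     # eye-to-picture-plane distance, metres (assumed)
row_distances = [0.4, 0.8, 1.2]   # observer to each row, metres (assumed)

def projected(extent, distance):
    """Image size of an extent at a given distance, by similar triangles."""
    return extent * picture_plane / distance

for i, d in enumerate(row_distances, start=1):
    u_height = projected(glass_height, d) * 100    # centimetres
    u_spacing = projected(pair_width, d) * 100
    print(f"row {i}: U height {u_height:.1f} cm, spacing {u_spacing:.1f} cm")

# Row 1 comes out largest and most widely spaced, row 3 smallest and most
# closely spaced; the same ordering (though not the same values) as the
# 2.1/1.4/0.6 cm heights and 3.5/1.9/1.1 cm spacings in Tracy's drawing.
```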
VANTAGE POINTS: LINE FOR OCCLUDING SURFACE BOUNDARIES
Gaia and Tracy use lines for surface borders, that is, for occluding boundaries of flat surfaces (edges of the fronts and roofs of houses), rounded boundaries (of people), and corners between two surfaces (in drawing cubes). A skeptic might wonder at the use of line in haptic pictures for boundaries of rounded objects. What is the tangible equivalent of the clear division between the visible front and the nonvisible back of a cylinder? As a viewpoint-dependent feature, the skeptic may claim, it is a denizen of optics. The skeptic’s argument assumes that the blind person has no clear appreciation of any haptic vantage point. In reply, let us entertain an everyday example. Hold a tall bottle of wine with both hands near the middle of the bottle. If your left hand keeps hold of the bottle, and you slide your right hand to the cork of the bottle and then down to the base, it will travel vertically up and then down. From the waist of the bottle, the movements up and down diverge, moving almost 180° apart. Now have your right hand let go of the bottle and move a short distance away from the waist of the bottle, off to your right. From this point of origin make two movements, one reaching for the cork, then one to the base. From the point of origin, the two movements diverge at an acute angle of perhaps 60°. The change in angle from 180° to 60° over a short space occurs in haptics, the obtaining of information from touch and movement (J. J. Gibson, 1962). Our vantage point shifts constantly in everyday haptics. We reach from points of origin, varying their locations, and often reaching fairly suitably, though rarely perfectly (Cabe, Wright, & Wright, 2003; Jansson, 2000; Kappers & Koenderink, 2002; Klatzky, 1999). How many degrees of freedom are there at a haptic vantage point? The answer is 6. At any point of origin our hand can face palm forward or back, up or down, left or right (rotating in xyz dimensions). The three axes for directions allow yaw, pitch, and roll. But besides changing the directions in which its palm faces, the hand’s location can be moved: first to and fro, second elevation and lowering, and third to the left or right side (moving in xyz dimensions, or “translating”). As the hand’s point of origin moves, we can maintain the direction in which it faces.
That is, the palm can be forward (or up, or left, for example) while the hand moves forward or back (to and fro), changes height up or down (elevating and lowering), or moves to the side (the right or left). If we rotate our head say 90° to the left, what was in front of our nose is now in line with our right ear while objects on the vertical axis of rotation, above and below us, remain invariant in direction. If we then move the point of origin again, many objects change direction. If we move directly forward bodily, objects aligned with our left and right shoulders fall back, behind us. Objects aligned with the motion, in contrast, keep their directions. Also, objects on the horizon are so far away that a short motion changes their direction infinitesimally. The result of the motions (either of the hand or of the head) is a flow field: a set of directions changing (flowing) as a group (a field), around a mobile vantage point. The set of possibilities is given by 6 degrees of freedom (DF): 3 xyz rotations and 3 xyz translations, with all their combinations. Exercising any one degree of freedom at a time, keeping the others invariant, is fairly straightforward for the observer. Combining them in pairs is more difficult, but can be easy with the proper combination and a target, such as rocking forward and back while yawing one’s head to keep it pointed to a fixed object to one side such as someone speaking. Given three or more changes, at times it can be hard to keep track without being a gymnast, accustomed to twisting while leaping up and forward, tucking the head at the start of the motion. Targets facilitate control of the 6 DF. Imagine going up an escalator, while turning to one side, and bending over the rail to hear someone on the ground calling out to you. The escalator moves up and forward, your head is turning (yaw), and lowering (pitch) toward the target. All you have to do is keep the target straight ahead. If you try to favor one ear, cupping your hand to hear the person on the ground speaking to you, and try to keep that ear directed towards the person, you have to rotate your head (roll). Here, you are controlling the orientation and location of your ear toward a target. It is the point of origin of the directions to the speaker. Now imagine that the person on the ground is trying to hand something to you—a long cane, say. Immediately your hand becomes a point of origin of directions to be controlled. Is your palm constantly facing the right way? We modify its direction aptly as its location shifts. We can adopt an ear or a hand as the point of origin of directions. We can also readily adopt a shoulder (in pointing with a straight arm) or a foot (in kicking), or the forehead, or the back. For some of these we may be more skilled than for others, just as we are more skilled with one hand than the other. But in principle we can pick any part of the body. In principle we can pick a point outside the body, too. As we anticipate the consequences of our driving, what is coming toward a space just around the next corner is likely to be considered quite often.
We might select an object as the point of origin—for example, to consider the direction a car is facing, or a letter’s motion to a letterbox, or a door swinging toward an oblivious friend. The principle involved in estimating directions from an arbitrarily selected vantage point is readily appreciated, though accuracy varies in many ways. Lefts and rights, and tos and fros can become confused as we consider occupying vantage points that are external to us, and facing in a different direction. But these are just matters of practice, not principle—speech, not grammar; parole, not langue. We appreciate the principles governing haptic vantage points and direction, even if in practice we often fail to apply them perfectly. The result is that haptics affords awareness of occlusion and surface boundaries. It makes us aware that from a given point of origin, one object interposes before another, and a given object has a front and back, with a fairly well-defined division between these two. Returning to our celebrated bottle of wine, let us note what haptics reveals. With your left hand holding the bottle of wine, and your right hand off to the right again, the cork is above your right hand at the level of its waist, the base below, and a curved front faces your right hand, and a back the opposite direction, away from your hand. Hence, curved surfaces have occluding boundaries between front and back for haptics, much as they do in vision. Hence too, like Tracy, Gaia has a facility with lines standing for curved surface boundaries, such as rings around the body of a swimmer, as deft as her ability to draw edges of flat surfaces of swim fins.
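The wine-bottle example can be put in numbers. The fragment below is a minimal sketch with hypothetical dimensions (a 30 cm bottle, a hand displaced 26 cm to the right of its waist); it simply computes the angle between the reach directions to the cork and to the base from each point of origin, and is an illustration of the geometry rather than a model from the chapter.

```python
import numpy as np

def angle_between(v1, v2):
    """Angle in degrees between two direction vectors."""
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical bottle geometry (metres): waist at the origin,
# cork 15 cm above the waist, base 15 cm below it.
cork = np.array([0.0, 0.0, 0.15])
base = np.array([0.0, 0.0, -0.15])

# Vantage point 1: the hand at the waist of the bottle.
hand_at_waist = np.array([0.0, 0.0, 0.0])
# Vantage point 2: the hand about 26 cm off to the right of the waist.
hand_to_right = np.array([0.26, 0.0, 0.0])

for label, origin in [("at the waist", hand_at_waist),
                      ("off to the right", hand_to_right)]:
    to_cork = cork - origin
    to_base = base - origin
    print(f"hand {label}: directions to cork and base diverge by "
          f"{angle_between(to_cork, to_base):.0f} degrees")

# Prints roughly 180 degrees from the waist and roughly 60 degrees from
# the displaced vantage point, matching the change described in the text.
```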
DIRECTIONS: PERCEPTUAL EXPERIENCE AND COGNITIVE CODE
Visual lines give us vivid experiences of slant and depth. Depicting corners, they give impressions of a change in slant, one foreground surface abutting another, or “figure-figure” (Albertazzi, 2002, p. 63). Depicting an occluding boundary of a foreground surface against a background, they give an experience of an abrupt change in depth at an edge, or “figure-ground” (Petersen & Gibson, 1993). The boundary can have to do with the edge of a flat surface, or to do with a curved surface, in which case the impression is one of a gradually changing slant, coming to a tangent at the boundary. In vision, figure-ground impressions control recognition. If we reverse the figure-ground impression given by the contour of a flat pattern, so figure is on the left of the contour where before it was on the right, the display is often not recognized on its second presentation.
A similar loss of recognition occurs when blind persons reverse figure-ground in raised outline pictures (Kennedy, 1993), suggesting foreground-background impressions from line in haptics. Indeed, the blind may be influenced by many grouping principles besides figure-ground, such as good continuity, in trying to detect hidden shapes embedded in raised line figures, since congenitally blind subjects perform like blindfolded sighted subjects (Heller, Wilson, Steffen, Yoneyama, & Brackett, 2003), though much faster. It seems unlikely that the range of perceptual experiences from haptic pictures will be as wide or impressive as that from visible pictures. Compared to visual observers, it may be harder for haptic observers to put in abeyance the flatness of the pictorial surface. Also, the patterns in visible pictures specify depths as well as directions. In contrast, tangible pictures may often be taken by the skilled user as specifying directions: using their depth information, but emphasizing directions of objects from the vantage point. Direction and depth are different notation systems for pictures, but both notations rely on the affinity between lines and surface boundaries. Hence, haptic outlines are as concerned with occlusion as vision. Both direction and occlusion imply the observer’s vantage point (Cox, 1992). It lies outside the picture plane, just as the picture plane itself is not the location of the occluded surface. A hand touching the picture surface is pointing to the distant referent, and the lines in the picture control the direction in which the hand points. So a great deal follows from the edge-line affinity, if direction is relevant, though we are unsure about what other kinds of experiences outline can generate in haptics.
DOT GROUPINGS AND SURFACE EDGES
Figure-ground has to do with perceptual grouping, the subject of Gestalt psychology. The basis of line perception is just such an effect in both vision and touch, since visual and tactile outline drawings can be based on dotted lines. That is, we use a continuous line to depict a continuous edge or corner just for convenience, since dotted lines work as well. In this case, the continuity is provided by a curious grouping or “gestalt” of the dots (Jansson, 2001). The continuous gestalt grouping across separated elements on a picture’s surface is likely a process triggered by borders of real surfaces. They, too, often have separate texture units. A wooden surface, for example, has a continuous edge marked by discontinuities in the visual grain. Real surface edges have separate units, such as dots or contours of the grain aligned, and vision uses the alignment to perceive the surface edge. The perception has three aspects: one, a spatial grouping is picked out by vision; two, the grouping permits a border to be perceived; and, three, the border is taken as the edge of a surface.
Edges of wooden surfaces are usually shown by continuous color and reflectance borders, as well as grain, so a more helpful analogy to the grouping offered by a simple dotted line is one with fewer features. Just so, binocular vision of textures can allow us to see borders that are not present in either eye. Disparities between the two eyes drive these purely binocular borders. If the disparities are aligned, the purely stereoptic borders result. Like a dotted line’s continuity, the stereopsis border has no continuous color or luminance contour. Continuous grouping without continuous color or luminance also occurs at purely motion-defined borders. Alignments of accretion or deletion of texture elements across time yield these perceived borders. Accretion-deletion borders can appear continuous in vision, even if the inducing elements in the ocular inputs are separated in space and time. Elements that are close in space or time, with aligned contours, favor the perceived continuity. Each of the four kinds of optical inputs just listed—disparity, accretion-deletion, luminance, and spectral—can correspond to surface boundaries in the environment. An interesting theory is that the input from dotted lines contains the optical feature the four use to anchor perception of surface boundaries: alignment. In a simple form, we can model the consequences as follows: A stretch of luminance or spectral contour drives a cell responsive to an elongated receptive field. A pair of dots can also fire a cell responsive to an elongated receptive field. In turn, two such elongated-receptive-field cells can drive a higher order cell, responsive to the alignment of the two. That higher order cell groups two receptive fields, in effect. It, in turn, triggers cells that support perceived surface boundaries. Quite simply, the perceived surface boundaries are depicted ones if the key optical features come from dots or continuous lines on a surface, and real ones precisely if the same optical triggers come from the edges of real surfaces.
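The receptive-field idea just sketched can be written out as a toy computation. In the fragment below, which is our own illustration with an arbitrary tolerance and no claim about actual neural circuitry, the orientation of each adjacent pair of dots stands in for an elongated-receptive-field response, and a "higher order" check asks whether consecutive pair orientations are aligned closely enough to be grouped as one continuous border.

```python
import math

def pair_orientation(p, q):
    """Orientation (degrees, 0-180) of the segment joining two dots,
    standing in for an elongated-receptive-field response."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0])) % 180.0

def grouped_as_continuous(dots, tolerance_deg=20.0):
    """Toy 'higher order cell': succeeds when consecutive dot pairs have
    roughly aligned orientations (the tolerance is a made-up value)."""
    orientations = [pair_orientation(a, b) for a, b in zip(dots, dots[1:])]
    for o1, o2 in zip(orientations, orientations[1:]):
        diff = abs(o1 - o2)
        if min(diff, 180.0 - diff) > tolerance_deg:
            return False
    return True

dotted_line = [(x, 0.1 * x) for x in range(8)]    # nearly collinear dots
scattered = [(0, 0), (1, 2), (2, -1), (3, 3)]     # poorly aligned dots

print(grouped_as_continuous(dotted_line))   # True: grouped as one border
print(grouped_as_continuous(scattered))     # False: no continuous grouping
```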
TOUCH, DOTTED LINE AND EDGES
Surface borders in nature trigger a grouping on which outline drawings can capitalize. As goes vision and outline depiction, so goes touch, surely. Via haptics, we explore real 3-D edges, a piece of an edge at a time. Haptics permits detection of the spatial grouping of the pieces by proceeding along the edge, across time. The sequence can entail a continuous series of contacts, say of a fingertip following along a stretch of convex corner (Lederman & Klatzky, 1987, 2002). Presumably, each moment of contact is grouped with the immediately prior contact if the contacts are continuous. Also, as we touch the separate stretches of the corner, we detect the orientation of each stretch, and can perceive a series of stretches as having aligned orientations.
Likewise, we can touch dotted forms on a surface and each separate dot can be perceived as part of a pair with an orientation. Presumably, the more their orientations are aligned the more readily a series of pairs of dots is perceived as a continuous form. A straight line or smooth curve is optimally aligned, but there are likely many alignment functions that perception can use. The spacing of the dots likely matters, too: regular or dense spacing is especially effective. The ability to feel a series of parts and sense their alignment is common to exploring real edges and a series of dot pairs. We can touch a pair with, say, one fingerpad per dot, or many at a time, for example with multiple fingers, one per dot. Either way we can sense the orientation of pairs, and the alignment of their orientations. Thus we perceive grouping and continuity. Common physical factors and common underpinnings in perceptual processes in vision and haptics lead to perception of both real edges of 3-D surfaces and depicted edges shown by outline drawings.
FROM PRESSURE AND MOBILE POSTURE TO CONTINUITY OF A SURFACE
Johnson (2001, 2002) argues that since haptic observers can discriminate curvature independently of contact force, they rely on a spatial profile of the neural activity evoked by a curved surface, and not some intensive cue like the total impulse rate. Let us consider how pressures and mobile postures might be used in Johnson’s fashion, with real surfaces and with outlines. Consider touching a continuous surface like a smooth wooden tabletop with the five fingertips of one hand, fingers widely splayed. There are gaps between the fingertips, but we can have an impression of one single continuous surface stretching between the widely separated fingertips. How is this done? Why do we not have an impression of five little islands of wood floating in space? The wood’s five contacts with us have aligned slants and heights, and in addition all have the same resistance to forces perpendicular to or across the surface, and the same thermal properties. Shearing motions of the fingers find the same friction forces at all fingertips, all with the same motion vectors. The commonalities may “specify” continuity for us (E. J. Gibson, 1969; Kennedy, 1974). They are the properties to control in a virtual environment to create impressions of a continuous phantom surface where none existed (Jansson, 2000, 2001). If we were in contact with two different surfaces, ordinarily they would differ in height, slant, reaction to the perpendicular and shearing forces applied to the surface, or thermal properties. The differences would be perceived as due to two surfaces.
These contacts with a tabletop reveal the information-gathering schemes of haptics (Lederman & Klatzky, 2002). Contact with a surface involves compression of skin, from forces normal to the surface, or lateral. The compression indicates the force involved, and in principle it does not matter whether the hand or the arm or another body part picks up the force information. Pressing motions discover the firmness of an object’s substance, finding, say, that a steel ball is very firm. Lateral or shear forces provide friction information revealing, say, that the steel ball is smooth and almost frictionless. At any moment, a set of perpendicular forces, together with information about the posture of the contacts (such as the fingers or other body parts) allows information about the shape of the surface to be obtained: the curve of the ball. The set can also be obtained across time, with mobile postures (Lederman & Klatzky, 2002). The perpendicular forces can reveal the weight of the ball, if our hand is under it, and the mass of the ball if we raise our hand. The shape information can also be obtained from shear forces taken together with the posture of the contact (say, the hand), as an important study by Symmons and Richardson (2002) found. Each contact in a set yields shear as it pushes or is pushed across the curve of the ball’s surface. This is “surface following” (like Lederman & Klatzky’s 2002 analysis of holding and contour following in haptics). The shape information is a set of orientations joined because their alignments can be fitted together.
THEORY OF CONTINUITY
Toy connectionist models—ones that show what needs to be done but do it incompletely—highlight the issue of continuity. For example, to explain why a series of pairs of dots in a line has perceived continuity, a toy connectionist model might claim a continuous link between brain cells fired by the dots. Each dot is a discrete input (firing a receptor) with gaps between them (silent receptors). At a higher neural stage, the discrete inputs could begin to institute firing at all the silent cells between the ones fired by the active receptors. But does perception need continuity in an image created in the brain in order to see continuity? In color theory the equivalent would be claiming that in order to see red and green our neurons must turn red and green. The argument depends on a homunculus, very much to be avoided. By way of contrast to the toy neural model, should one claim instead that dotted lines allow the observer to infer continuity? Taking a cognitive stance, we could hazard that continuity is inferred on the basis of certain kinds of discontinuity. The inference argument is usually vacuous (Parks, 2001).
Perception is certainly rule-governed, and certain kinds of proximal dot patterns reliably give rise to perceived continuities (Jansson, 2001), but do we need to add anything about inferences? Eight equally spaced dots around an empty center tend to be perceived as a circle, in vision or haptics (Kennedy, 1974). Pick four of these and the circle may vanish and a square appear. The idea that the step from the eight to the four leads to different inferences is just post hoc, adding nothing to the particular rule. In its general form the argument is simply that perception uses input at receptors to give rise to an impression of the distal source of the input. Hence, in principle, it could tag every percept “an inference,” for there is no alternative. If we merely state the indisputable conditions for all perception, and set no conditions on how a particular inference adds to a rule, we are not useful. The inference idea is just a version of the problem of induction: finite data can be inferred to come from any of many populations. In contrast, the argument about continuity, alignment, and outline is a “key feature” explanation. It seeks common properties. We are observers in a finite environment, in which distal sources give rise to specific patterns. Key features in those patterns mean specific things. Can we discover the physical factors shared by lines in pictures and features in the 3-D world? These are not inferences of one thing (X) from another (Y), but common features (X and X), shared physical factors. The question is not how can X be inferred from Y but rather what X is shared.
SHAPE AND BORDERS: OPEN AND CLOSED SETS
Once continuity has been considered, the next factor is the shape it traces out. Alas, no shape-analysis systems are general. People around the world learn quite different alphabets. Perceptual systems are open to each one, and new ones not yet invented. Hence, shape is an open system. We can invent new shapes. We are constantly inventing new fonts, new alphabets, and new objects. There is no finite solution to the problem of object-shape perception. That goal is a mirage. To be specific, imagine short lines joined to other short lines. The first pair, A and B, can meet at any angle from 0° to 360°, the next pair, B and C, at any angle from 0° to 360°, and so on. The number of combinations is infinite. The number of types of combinations is infinite, too. The neural implication of this conclusion is clear. Physically, there is a fixed, closed set of borders, having to do with luminance, color, stereopsis, and motion, but there is no fixed set of shapes to make with the borders. Therefore, perception neurology can contain fixed regions handling kinds of borders, but it must contain flexibility in putting borders together.
Like vision, touch must allow flexibility. Like a visual alphabet, Braille is an arbitrary pattern. Brain regions to do with putting visual borders together must have flexibility to learn about conjunctions of shapes A, B, C, and so on, in Dublin and a different set in Dubai. The lesson must hold for brain regions for haptics, because of Braille. Since we can read “ABC” defined by any of the kinds of border, the shape-flexible brain region is accessible by any of the borders, visual or haptic. This is quite different from the brain regions required to generate the borders in the first place. Each border requires a unique mathematical function. Disparity analysis for creating binocular stereopsis borders is highly specific. Once luminance borders are created, shape-from-shadow geometry applies in highly tuned ways. These mathematical functions are so distinct from one another in the brain that a binocular-defined border cutting a shadow region into two planes will not interfere with shape-from-shadow analysis of the region (Kennedy & Bai, 2002, 2004). These specific functions are undertaken without reference to each other and surely in dedicated brain areas, open to specific inputs. An important implication of the independence of the border functions is that they can corroborate each other or be in conflict. Stereopsis information for depth may tell us an object is 6 meters away from a foreground surface, accretion and deletion may tell us they are coplanar, and shape-from-shadow may suggest they are 3 meters apart. Should vision add the information (9 m), average the three sources (3 m), or accept the largest (6 m) or smallest (0)? The answer is that perception should select the accurate source, and do so by seeking out the basis for the disagreement. Information from diverse sources should “confirm.” Accurate sources, by definition, should all be telling the same story. In principle, perception should avoid mixing information from different sources if they are telling different stories. Neural streams triggered by the different sources should not meet and meld. The information-independence principle is important for picture perception. Pictures are flat but offer us information about depth and direction of depicted objects. Neural pathways taking the information ideally should keep the flatness information intact while transmitting the information about depicted objects. Picture processing comes close to this ideal. Visual flatness, depth information, and direction information are kept pretty distinct (Sedgwick, 2003). Haptic lines give us impressions of edges of occluding surfaces, foregrounds and backgrounds, and simultaneously of ridges on an otherwise flat surface.
MULTIPURPOSE AREAS

An interesting possibility is that the brain areas dedicated to creating specific borders from special inputs in a bottom-up fashion, each performing
unique calculations on the input, are open to very different kinds of inputs (say, top-down), and perform entirely different calculations on them. Certainly, every perceptual brain area has both bottom-up and top-down input. Top-down connections allow search for targets in the input. Likely, they would normally be used only to tweak and tune a specific calculation, to prime a grouping, or to select in ambiguous circumstances, much as we can reverse figure-ground and change it for figure-figure or ground-ground. Tweaking could help in a search in a picture for an old person rather than a child. However, the top-down connections must be related to the products of the bottom-up processes if they are to be selective in search and priming. The top-down processes may grow many strong connections to become quite dominant and detailed if the bottom-up processes are eliminated in the blind (Cohen et al., 1997). They can be dominant, though likely not detailed, in subjects blindfolded just for a few days (Pascual-Leone, Theoret, & Merabet, 2002), using pre-existing top-down connections. In turn, the top-down processes to a cortical sensory area may be triggered directly from another cortical sensory area responding to bottom-up input, or through a higher order cortical way station that accepts signals from several sensory areas. The temporal-parietal-occipital junction is a higher order way station that is helpful in relatively cognitive matters such as seriation (Ramachandran & Hubbard, 2003). The superior colliculus is a relatively low way station with spatial sensory maps for audition, vision, and touch, serving eye movements. All the senses control higher order dorsal action pathways (Goodale et al., 2002). Flexible regions that are needed to learn about arbitrary shapes formed by borders are “expertise” regions, likely capable of massive speedy retraining as new needs arise. They need to be in close contact with the border-dedicated regions. A third kind of region is required because some combinations of borders contain nonarbitrary or “ecological” information, and require specific mathematical functions. While shapes of shadow borders are a case in point, we stress here that perspective vision is another. When we view a perspective picture, we see in accordance with a particular function. For example, viewing a piazza of square tiles from further than the correct center of projection, vision accepts some of the projections as still showing squares. Tiles further away on the piazza are rejected as too short to be squares, and ones closer on the piazza as too long to be squares. The accepted projections form a wide band circling a point on the picture on the perpendicular from the center of projection. The bounds of the band, we find, are somewhat wider than the projective ratios of the sides of the tiles at the correct center of projection. Thus, vision has a well-defined perspective function, more tolerant than perfect perspective. Indeed, much more tolerant because by the rules of perfect linear
perspective, as soon as we view from further than the correct center of projection, all the tiles should be seen as elongated (Kennedy & Juricevic, 2002). Haptics may have a crude sense of shadow, via heat loss when we enter the shadow of a building on a sunny day. Likewise it has a rough sense of perspective. It has perspective’s important principles (e.g., convergence) but not the detail of vision. Likely, haptics lacks the precise perspective band related to distance between the observer and the picture, since the haptic observer’s vantage point is not anchored at a specific distance from a picture. Only the pictorial directions from the observer are specified precisely. (In a pilot study on haptics with blindfolded observers, we found preference for a cube drawing with more convergence when the cube drawing was closer to the subject’s head, so we conjecture that the principle of a perspective effect of observer distance matters to some degree in scanning a haptic picture.) But what is especially intriguing for students of the liaisons between vision and touch is that some dedicated sensory functions are highly relevant to all objects in more than one sense. Surface, vantage points, and perspective are general in this fashion. All objects are perceived from some vantage point. Hence, perspective’s general principles are the master geometry governing not only viewing from two vantage points simultaneously (stereopsis), or across time (motion), but also reaching or pointing from a vantage point, and directions of objects as we walk around in a scene. In vision, the existence of a well-defined master geometry allows the development of stereopsis and motion perception to be largely self-organizing in infancy. In haptics, by analogy, a master geometry with 6 DF from a vantage point permits the development of haptic space perception to be self-organizing in infancy. These self-organizing systems produce percepts of the same three-dimensional space. Their products should be related to one another. Vision and haptics should produce impressions that the infant appreciates are of the same object, from a vantage point (Hernandez-Reif & Bahrick, 2001). Further, line should be able to trigger similar impressions in vision and touch. If we sketch a developmental account, it might go like the following. Most visual developments follow shortly after improvements in visual acuity, a process which is gradual but steady in the first years of life. With nothing but diffuse shapes to work with in early infancy, disparity is unavailable. Disparity differences become available more and more precisely in the first two or three years of life. Motion sensitivity increases at the same pace, for example allowing rigid and nonrigid motion to be differentiated soon after suitable acuity is available (E. J. Gibson, 1969). Similar development of spatial perception in touch should follow increasing sensitivity to each of the 6 DF. The products of each analysis are three-dimensional scenes, with vantage points. These provide ac-
cess to flexible-shape brain areas, but also, brain areas for the products are linked, so each can give rise to top-down input to the other, for instance to prime the area or help define a target for search. For the neonate, with 20:800 vision, visual borders and lines are highly defocused. By about 6 months, vision has developed some precision, and corners and edges of surfaces can be distinguished, and lines too. Visual outline depiction is possible, and clicks into place with line depicting surface edges, spontaneously, since the line contains the same physical features that allow perception of the surface edges. Similarly, haptics likely develops the ability to distinguish surface borders, and lines in the form of ridges or grooves contain the physical features used to perceive the surface borders. The upshot is that we have self-organizing and highly specific areas in brain regions devoted to calculations specific to a kind of border, flexible areas having to do with developing expertise with the shapes of borders, and regions having to do with ecological shapes that cut across senses. This argument has implications for drawing development. If the self-organizing aspects of perspective in vision and touch are alike, they could support a drawing development trajectory that makes specific spatial skills (Milbrath, 1998) and projection systems (Willats, 2003) available in the same order to the blind and the sighted. The evidence in our figures here lends itself to that speculation.
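Before the conclusion, a small geometric sketch may help readers picture the tolerant perspective function described above for the piazza of square tiles. The eye height, tile size, reference row, and 30% tolerance below are our illustrative assumptions, not values from the studies cited; the sketch only shows how a window of accepted depth-to-width ratios picks out a band of middle rows on the picture.

```python
# Illustrative geometry only (our assumptions, not the authors' model): the
# projected depth/width ratio of a square floor tile falls with its distance
# on the ground, so near tiles project "too long" and far tiles "too short."
# A tolerance window around an acceptable ratio therefore picks out a band of
# middle rows, a crude analogue of the perspective band described in the text.

def projected_ratio(ground_distance: float, eye_height: float = 1.6,
                    tile_side: float = 0.3) -> float:
    """Depth/width ratio of a square tile's projection onto a vertical picture
    plane; the tile's near edge lies `ground_distance` meters ahead of the eye.
    The eye-to-picture distance cancels out of the ratio."""
    return eye_height / (ground_distance + tile_side)

def accepted_rows(distances, reference_distance: float = 4.0,
                  tolerance: float = 0.30):
    """Rows whose projected ratio is within +/- tolerance of a reference row's
    ratio are 'accepted' as depicting squares."""
    target = projected_ratio(reference_distance)
    lo, hi = target * (1 - tolerance), target * (1 + tolerance)
    return [d for d in distances if lo <= projected_ratio(d) <= hi]

if __name__ == "__main__":
    rows = [1.0 + 0.3 * i for i in range(40)]   # tile rows from 1 m to about 12.7 m
    print(accepted_rows(rows))                  # a contiguous band of middle rows
```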
CONCLUSION

Haptic drawings from Gaia and Tracy use outline much as in drawings from the sighted. They reveal what blind people who are practiced in picture making produce regularly. Surface edges are what the lines depict in vision and haptics. The lines likely trigger grouping functions that are the bases for perceiving surface edges in nature. The geometries of projection suggested by Gaia’s and Tracy’s drawings have to do with occlusion, foreground, and background. The geometries may have a developmental progression from single faces of objects (front surfaces, and similarity of form), to foldout, parallel, inverse projection, and convergent projection.
REFERENCES

Albertazzi, L. (2002). Towards a neo-Aristotelean theory of continua. In L. Albertazzi (Ed.), Unfolding perceptual continua (pp. 29–79). Philadelphia: John Benjamins.
Arnheim, R. (1990). Perceptual aspects of art for the blind. Journal of Aesthetic Education, 24, 57–65.
Cabe, P. A., Wright, C. D., & Wright, M. A. (2003). Descartes’ blind man revisited: Bimanual triangulation of distance using static hand-held rods. American Journal of Psychology, 116, 71–98.
Cohen, L. G., Celnik, P., Pascual-Leone, A., Corwell, B., Faiz, L., Dambrosia, J., Honda, M., Sadato, N., Gerloff, C. M., Catala, M. D., & Hallett, M. (1997). Functional relevance of cross-modal plasticity in blind humans. Nature, 389, 180–183.
Cox, M. (1992). Children’s drawings. London: Penguin Books.
D’Angiulli, A., Kennedy, J. M., & Heller, M. A. (1998). Blind children recognizing tactile pictures respond like sighted children given guidance in exploration. Scandinavian Journal of Psychology, 39, 187–190.
Eriksson, Y. (1998). Tactile pictures: Pictorial representations for the blind 1784–1940. Göteborg: Göteborg University Press.
Gibson, E. J. (1969). Principles of perceptual learning and development. New York: Appleton-Century-Crofts.
Gibson, J. J. (1962). Observations on active touch. Psychological Review, 69, 477–491.
Golomb, C. (2002). Child art in context: A cultural and comparative perspective. Washington, DC: American Psychological Association.
Goodale, M. A., James, T. W., & Humphrey, G. K. (2002, October). Haptic recognition of geometrical object properties makes use of extra-striate visual areas. Paper presented at the conference on Touch, Blindness, and Neuroscience, Madrid, Spain.
Heller, M. A., & Kennedy, J. M. (1990). Perspective-taking, pictures and the blind. Perception & Psychophysics, 48, 459–466.
Heller, M. A., Wilson, K., Steffen, H., Yoneyama, K., & Brackett, D. D. (2003). Superior haptic perceptual selectivity in late-blind and very-low-vision subjects. Perception, 32, 499–511.
Hernandez-Reif, M., & Bahrick, L. E. (2001). The development of visual-tactile perception of objects: Amodal relations provide the basis for learning arbitrary relations. Infancy, 2, 51–72.
Hopkins, R. (2003). Touching, seeing and appreciating pictures. In E. Axel & N. Levent (Eds.), Art beyond sight: A resource guide to art, creativity and visual impairment (pp. 186–199). New York: AFB Press.
Jansson, G. (2000). Spatial orientation and mobility of people with vision impairments. In B. Silverstone, M. A. Lang, B. P. Rosenthal, & E. E. Faye (Eds.), The Lighthouse handbook on vision impairment and vision rehabilitation (pp. 359–375). Oxford: Oxford University Press.
Jansson, G. (2001). The potential importance of perceptual filling-in for haptic perception of virtual object form. In C. Baber, M. Faint, S. Wall, & A. M. Wing (Eds.), Eurohaptics 2001 Conference Proceedings. Educational Technology Research Papers, 12, 72–75. Birmingham, UK: University of Birmingham.
Johnson, K. O. (2001). The roles and functions of cutaneous mechanoreceptors. Current Opinion in Neurobiology, 11, 455–461.
Johnson, K. O. (2002, October). Neural mechanisms of tactile sensation. Paper presented at the conference on Touch, Blindness, and Neuroscience, Madrid, Spain.
Kappers, A. M. L., & Koenderink, J. J. (2002). Continuum of haptic space. In L. Albertazzi (Ed.), Unfolding perceptual continua (pp. 29–79). Philadelphia: John Benjamins.
Kennedy, J. M. (1974). A psychology of picture perception. San Francisco: Jossey-Bass.
Kennedy, J. M. (1993). Drawing and the blind. New Haven, CT: Yale University Press.
Kennedy, J. M. (2003). Drawings from Gaia, a blind girl. Perception, 32, 321–340.
Kennedy, J. M., & Bai, J. (2002). Haptic pictures: Fit judgments predict identification, recognition memory, and confidence. Perception, 31, 1013–1026.
Kennedy, J. M., & Bai, J. (2004). Border polarity matters for shape from shadow—not belongingness. Perception, 33, 653–665.
Kennedy, J. M., & Juricevic, I. (2002). Foreshortening gives way to forelengthening. Perception, 31, 893–894.
Kennedy, J. M., & Juricevic, I. (2003). Haptics and projection: Drawings by Tracy, a blind adult. Perception, 32, 1059–1071.
Klatzky, R. L. (1999). Path completion after haptic exploration without vision: Implications for haptic spatial representation. Perception & Psychophysics, 61, 220–235.
Landerer, C. (2000). Kunstgeschichte als kognitiongeschichte: Ein beitrag zur genetischen kulturpsychologie [History of art as history of cognition: A contribution to genetic cultural psychology]. Doctoral dissertation, University of Salzburg.
Lederman, S. J., & Klatzky, R. L. (1987). Hand movements: A window into haptic object recognition. Cognitive Psychology, 19, 342–368.
Lederman, S. J., & Klatzky, R. L. (2002). Tactile object perception and the perceptual stream. In L. Albertazzi (Ed.), Unfolding perceptual continua (pp. 147–162). Philadelphia: John Benjamins.
Lopes, D. M. M. (2003). Are pictures visual? A brief history of an idea. In E. Axel & N. Levent (Eds.), Art beyond sight: A resource guide to art, creativity and visual impairment (pp. 176–185). New York: AFB Press.
Milbrath, C. (1998). Patterns of artistic development in children: Comparative studies of talent. Cambridge: Cambridge University Press.
Millar, S. (1994). Understanding and representing space. Oxford: Oxford University Press.
Millar, S. (2002, October). Spatial reference and coding haptic information: Implications for and from neuroscience. Paper presented at the conference on Touch, Blindness, and Neuroscience, Madrid, Spain.
Mirabella, G., & Kennedy, J. M. (1999). Which way is upright and normal? Haptic perception of letters above head level. Perception & Psychophysics, 61, 909–918.
Nicholls, A., & Kennedy, J. M. (1992). Drawing development: From similarity of features to direction. Child Development, 63, 227–241.
Parks, T. E. (Ed.). (2001). Looking at looking. London: Sage.
Pascual-Leone, A., Theoret, H., & Merabet, L. (2002, October). Tactile processing in the visual cortex. Paper presented at the conference on Touch, Blindness, and Neuroscience, Madrid, Spain.
Petersen, M. E., & Gibson, B. S. (1993). Shape recognition contributions to figure-ground reversal: Which route counts? Journal of Experimental Psychology: Human Perception and Performance, 17, 1075–1089.
Ramachandran, V. S., & Hubbard, E. M. (2003). Hearing colors, tasting shapes. Scientific American, 288(5), 52–59.
Sadato, N., Pascual-Leone, A., Grafman, J., Ibanez, V., Deiber, M.-P., Dold, G., & Hallett, M. (1996). Activation of the primary visual cortex by Braille reading in blind subjects. Nature, 380, 526–528.
Sedgwick, H. A. (2003). Relating direct and indirect perception of spatial layout. In H. Hecht, R. Schwartz, & M. Atherton (Eds.), Looking into pictures: An interdisciplinary approach to pictorial space (pp. 61–76). Cambridge, MA: MIT Press.
Symmons, M., & Richardson, B. (2002, October). Active versus passive touch: Superiority depends more on the task than the mode. Paper presented at the conference on Touch, Blindness, and Neuroscience, Madrid, Spain.
Willats, J. (2003). Optical laws or symbolic rules? The dual nature of pictorial systems. In H. Hecht, R. Schwartz, & M. Atherton (Eds.), Looking into pictures: An interdisciplinary approach to pictorial space (pp. 125–143). Cambridge, MA: MIT Press.
Zangaladze, A., Epstein, C. M., Grafton, S. T., & Sathian, K. (1999). Involvement of visual cortex in tactile discrimination of orientation. Nature, 401, 587–590.
Haptic Priming and Recognition in Young Adults, Normal Aging, and Alzheimer’s Disease: Evidence for Dissociable Memory Systems

Soledad Ballesteros
José M. Reales

Universidad Nacional de Educación a Distancia, Madrid, Spain
Over the last 100 years there has been an enormous increase in the population of elderly people (age 65 years and older) in the world, although this increase is much larger in the industrialized than in the developing countries. By the year 2050 it is estimated that 20% of the population will be 65 years old or older. It is well accepted that recall and recognition of facts and events often decline with age. With the growth of the elderly population and the high incidence of age-related neurological disorders, a dramatic increase in neurological disorders is expected in the next few decades. Degenerative brain disorders such as Alzheimer’s disease (AD), Parkinson’s disease, and other dementias will hit the elderly population more strongly in the next decades.
AD is the most common type of senile dementia, causing half of the dementias, and is the fourth leading cause of death in older adults. Its central characteristic is a progressive disruption of most cognitive processes. The incidence of the disease rises steeply with age. AD has an insidious and progressive nature, and it is defined as a clinical syndrome characterized by a global deterioration of cognitive and physical functioning. Although there are considerable individual differences in the course of this dementia, memory deficits appear very early in the course of the disease. Most authors consider the disruption of memory assessed by common recognition and recall (explicit) memory tests as the first symptom of the dementia (for reviews, see Carlesimo & Oscar-Berman, 1992; Fleischman & Gabrieli, 1998). This may explain why many of the experimental studies conducted on the effects of aging on cognition during the last decade focused mostly on memory. A major approach in the study of memory over the past two decades has been the distinction between two memory systems, one implicit and the other explicit (Graf & Schacter, 1985; Squire, 1987; Tulving & Schacter, 1990). Despite the large number of experimental studies conducted on the topic in these patients during the last few years, the precise underlying cognitive mechanisms responsible for the observed memory deficits remain unclear. A careful review of the literature on long-term memory deterioration in AD patients suggests that they suffer a large and pervasive deficit in explicit (declarative) memory and a partial deficit of implicit (procedural) memory for verbal and visuoperceptual stimuli assessed by perceptual priming effects (e.g., Carlesimo & Oscar-Berman, 1992; Fleischman & Gabrieli, 1998; Spinnler, 1999). The results suggest that unconscious retrieval of previously encoded material (implicit memory) deteriorates less than conscious declarative memory (explicit memory) in aging and AD. The literature in the field is expanding very quickly, but the studies conducted so far on this topic have focused exclusively on the visual and auditory modalities. The stimuli used in these studies were mostly words, although a number of them used pictures (see Fleischman & Gabrieli, 1998). Human vision is a remarkable perceptual modality that allows one to acquire information and knowledge about environmental objects and events and their spatial relations quickly and accurately. For this reason, researchers look to this modality to study psychological processes such as attention, perception, and memory. In fact, most of what we know at the moment about how memory and other psychological processes work comes from well-designed visual laboratory studies. During the last decade we, as well as others, have been interested in studying the mental representations of objects perceived by haptic exploration without vision and their recovery under implicit and explicit conditions in younger
adult explorers (Ballesteros, Reales, & Manga, 1999; Ballesteros & Reales, 1995; Easton, Greene, & Srinivas, 1997; Easton, Srinivas, & Greene, 1997; Reales & Ballesteros, 1999). In this chapter we review some recent findings on haptic performance under implicit and explicit conditions not only in young adults, but also in older healthy adults and in patients suffering from Alzheimer’s disease. The goals in the present chapter are threefold. The first goal is to briefly review the literature on implicit and explicit memory in younger healthy participants in the visual domain, especially with unfamiliar visual objects. The second goal is to review a series of recent results obtained in experiments conducted in our laboratory in which young sighted participants dealt with 3-D objects haptically without vision under implicit and explicit conditions. We also review some cross-modal findings that suggest that vision and touch share similar representations. Our third goal is to present recent results on implicit and explicit memory in the tactual modality, without vision, in young and older healthy participants in comparison to AD patients. We finish the chapter with a more intensive discussion of these recent results. The study of specific memory impairments in AD patients in comparison to normal aging in modalities other than vision is important in order to be able to generalize previous visual findings to other perceptual modalities.
IMPLICIT AND EXPLICIT MEMORY FOR VISUAL OBJECTS: DISSOCIATION BETWEEN STRUCTURAL AND EPISODIC REPRESENTATIONS

Human memory allows people to retain information about themselves, others, and the environment. Memory allows individuals to live independent and functional lives, taking advantage of previous experience. The encoding and retrieval of previously acquired information is one of the major cognitive functions of the cerebral cortex. We are able to recall words, faces, objects, events, knowledge about the world, sounds, textures, and so on. Memory, however, is not a unitary entity. There are many types of memories. Some last just a few seconds; others are long-lasting memories that can be with us during our whole life. A type of long-term memory that the reader will encounter in this chapter is perceptual memory. The term refers to much of the memory acquired and evoked incidentally through the different sensory modalities. Perceptual memory is mainly represented in the posterior cortical areas that occupy nearly two thirds of the neocortex (Fuster, 1995). Long-term memory includes different forms of memories. Three decades ago, Tulving (1972) proposed that long-term memory was composed of two types of memories: episodic and semantic memory. According to Tulving, these
two kinds of memories have different functional properties and depend on different brain structures. Episodic and semantic memories are two forms of declarative memory (called explicit memory). Declarative forms of memory refer to the conscious recollection of previously experienced stimuli and events, tested by common tests of recognition, cued recall, and free recall. Episodic memory refers to conscious experience of personally experienced events that occur to us in a specific place and time, whereas semantic memory refers to information related to language and general world knowledge (Tulving, 1983). The main focus of this chapter is the long-term representations of objects conveyed by previous experience of visual objects, and especially of active tactual exploration of objects. This type of memory is often called permanent memory, and it spans from a few minutes to many years. Over the past two decades, the field of memory has expanded enormously. Graf and Schacter (1985) first used the terms implicit memory and explicit memory to refer to two different ways of accessing previously acquired information and knowledge. These two terms are used as well to refer to two ways in which memory for previously encoded information of different kinds is stored and expressed. Explicit memory (episodic declarative memory) for words, objects, and events is related to conscious recollection of previously acquired information about the stimuli. This memory is assessed by direct recognition, recall, and cued recall tests. On the other hand, implicit memory is shown when the mental representations of previously experienced stimuli are recollected unconsciously or involuntarily (Schacter, 1987). Implicit memory is indicated when subjects show facilitation in performance that is attributable to information acquired in a previous encounter with the stimuli. This facilitation, often referred to as priming, has been found in several tests using verbal materials; for instance, word identification (e.g., Jacoby & Dallas, 1981), word-stem completion (e.g., Roediger & Blaxton, 1987), and word-fragment completion (e.g., Tulving, Schacter, & Stark, 1982). Over the years, implicit memory has been assessed by more than a dozen tests (for a comprehensive review, see Roediger & McDermott, 1993). Only a decade ago, some researchers started to use pictures of familiar objects (e.g., Biederman & Cooper, 1991; Parkin & Russo, 1990), line drawings of unfamiliar objects (e.g., Schacter, Cooper, & Delaney, 1990), and 2-D straight-line patterns (e.g., Ballesteros & Cooper, 1992; Musen & Treisman, 1990) presented to the visual modality. Compared to the large number of studies conducted on the verbal and visual domains, there is a lack of implicit and explicit memory studies conducted on modalities other than vision or audition, such as active touch. Implicit memory in the laboratory is shown by repetition (perceptual) priming; that is, by showing better performance in terms of accuracy and/or response
time for stimuli presented previously (old, studied stimuli) in comparison to performance with new (nonstudied) stimuli. The research on these two types of long-term memory has grown rapidly, and a number of major reviews on the subject have been published (e.g., Roediger & McDermott, 1993; Tulving & Schacter, 1990). The main interest in studying implicit and explicit memory is to explain a series of dissociations found in experimental studies. These experimental dissociations, reported in neurologically impaired patients as well as in healthy participants when stimuli were presented to the visual or auditory modalities, led cognitive neuroscience investigators and cognitive psychologists to propose that memory is mediated by different memory systems in the brain (e.g., Gabrieli, 1998; Squire, 1992; Tulving & Schacter, 1990). Memory systems theorists used dissociations between performance on implicit and explicit memory tasks as evidence for the existence of different types of long-term memory located at different sites in the human brain. The neuropsychological tradition has emphasized the idea that memory is nonunitary. Tulving and Schacter (1990) proposed the existence of a perceptual representation system different from declarative memory because: (a) perceptual priming is preserved in amnesic patients; (b) perceptual priming is preserved in aging; and (c) functional independence has been found in normal subjects.
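A brief note on how the priming measure is usually quantified may be useful here. The sketch below expresses repetition priming as the response-time advantage for studied over nonstudied items; the function and the response times are our illustration of the measure, not data from any of the studies reviewed.

```python
# Illustration of the priming measure discussed in the text: facilitation is
# the response-time advantage for old (studied) items over new (nonstudied)
# items. The numbers below are made up for the example.

def mean(values):
    return sum(values) / len(values)

def priming_score(rt_studied, rt_nonstudied):
    """Repetition priming in the same units as the response times:
    positive values mean studied items were responded to faster."""
    return mean(rt_nonstudied) - mean(rt_studied)

if __name__ == "__main__":
    old_rts = [1.48, 1.52, 1.61, 1.45]   # seconds, hypothetical studied items
    new_rts = [1.72, 1.80, 1.69, 1.75]   # seconds, hypothetical nonstudied items
    print(round(priming_score(old_rts, new_rts), 3))   # about 0.225 s of facilitation
```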
PRIMING AND RECOGNITION OF UNFAMILIAR VISUAL OBJECTS

In a pioneering study using line drawings of unfamiliar 3-D objects, Schacter et al. (1990) showed implicit memory for newly acquired information about these novel objects. They also reported a functional dissociation in the representation of structural and semantic information for these unfamiliar visual objects. Elaborative or semantic encoding at study (e.g., to indicate something familiar that the unfamiliar depicted object reminded the subject of most strongly) produced better performance in an explicit recognition test, while structural encoding (e.g., to judge whether unfamiliar objects looked to the right or to the left) produced more perceptual priming (implicit memory performance). The structural encoding condition was specifically designed to encode information about object structure and relations among the different object components. Participants in this experimental condition were required to indicate whether the visual unfamiliar object appeared to be facing to the left or to the right. On the other hand, participants in the semantic encoding condition were asked to indicate something familiar that the unfamiliar visual objects reminded them of most strongly. The results showed that elaborative (semantic) encoding yielded better recognition in the explicit memory test
than produced in the structural encoding condition. Furthermore, priming as a measure of implicit memory was found only for possible objects under structural encoding (but see Carrasco & Seamon, 1996). This dissociation was interpreted as a demonstration that the global structure of objects is represented by a memory system different from the memory system that deals with the meaning of an object (the episodic memory system). The findings were interpreted as supporting the idea that implicit and explicit memory for line drawings of 3-D unfamiliar visual objects are represented and stored in multiple memory systems in the human brain. A memory system may be defined as a specific neural network that is responsible for a specific memory process (Gabrieli, 1998). A second line of investigation in Schacter and Cooper’s pioneering research program was specifically directed at probing the nature of the mental representations involved in structural (implicit) and episodic (explicit) encoding of visual depictions of unfamiliar objects. The idea was to find out what information is preserved in structural and episodic representations of visual objects by introducing changes in some stimulus dimensions (Cooper, Schacter, Ballesteros, & Moore, 1992). The logic of this line of research was to introduce changes in two stimulus variables (size in Exp. 1 and orientation in Exp. 2) from encoding to test and to see the influence of this manipulation on implicit and explicit memory tasks. The hypothesis under investigation was that changing the appearance of the object in a dimension that is not important to its global structure would not diminish perceptual priming, while the same changes would impair explicit recognition of the object as “old” or “new.” The experiment in which an object’s size was changed from study to test consisted of a study phase in which participants were presented with a series of objects while they were indicating whether each object was facing to the left or to the right (the structural encoding condition). Half of the participants performed the structural encoding phase with small objects and the other half with large objects. At test, each encoding group was further divided into two subgroups. One subgroup was presented with the objects at the same size as at encoding (the same size condition), while the other subgroup was presented with the objects at a different size (the changed size condition). Furthermore, different groups of participants performed the implicit memory test and the explicit recognition test. The second experiment in this series investigated the influence of changing the right-left orientation of objects from encoding to recognition or object decision tests (see Cooper et al., 1992). Half of the participants were presented with objects in a left- or right-facing orientation at study. At test, half of the objects were presented at the same orientation as in the study phase, and the other half at a different orientation. As in previous studies, the implicit task consisted of de-
ciding whether the object flashed on the computer screen was possible or impossible. The explicit task consisted of an “old-new” recognition test. Different groups of participants performed the implicit and the explicit memory tests. The two main outcomes from these experiments were: (a) a robust priming on the implicit memory task (object decision) was found, and it was not reduced when the size or the orientation of the objects was changed from study to test; and (b) recognition was significantly impaired when the stimulus appearance changed in size or in orientation from study to the performance of the explicit memory test. These findings suggest that the experimental manipulations introduced from study to test do not affect implicit memory performance, because priming did not change under these two types of stimulus transformation. In contrast, these stimulus transformations affected explicit memory performance adversely. Summarizing, the change in size and right-left orientation of visual objects is not encoded in the mental representation that supports implicit memory. In contrast, the change in these dimensions from study to test substantially impaired performance in explicit recognition. From a theoretical point of view, Cooper et al. (1992; Cooper, 1994; Schacter et al., 1990) speculated that distinct neural networks support implicit and explicit representation of visual objects and proposed that regions of the inferior-temporal cortex could be the locus of the implicit memory system (see Plaut & Farah, 1990). This presemantic memory system would be responsible for encoding the global structure of visual objects and the relations among their different parts and surfaces. This memory system has also been assumed to be distinct from the semantic representational system responsible for explicit memory when 3-D familiar and unfamiliar objects were presented to touch (Ballesteros et al., 1999; Reales & Ballesteros, 1999).
IMPLICIT AND EXPLICIT MEMORY DISSOCIATIONS FOR HAPTICALLY EXPLORED OBJECTS

Although humans have a number of sensory modalities, vision is considered the primary sense for the identification of shapes and objects in space. Nevertheless, touch allows humans and other primates to extract precise information about many dimensions of objects that are in their personal space, and can be reached and handled actively. Cutaneous information is very important in touch. The skin is the largest sensory receptor organ; it covers the whole body and is formed by several layers. But touch is an active modality. Our hands rarely stay still; they freely and purposively explore objects and surfaces in order to extract useful information. The role
played by systematic movement in touch has to be taken into account in order to explain the accuracy of the system in processing complex shape information, such as the symmetry or asymmetry of 3-D objects or raised shapes (e.g., Ballesteros, Manga, & Reales, 1997; Ballesteros, Millar, & Reales, 1998). The haptic system is a complex perceptual system that encodes cutaneous and kinesthetic information (Loomis & Lederman, 1986; Révész, 1950). Haptic perception depends on complementary information from tactual acuity, active hand movements and spatial cues; furthermore, size and familiarity are very important variables in haptic perception (Millar, 1994). Touch is a proximal sense, because in order to feel the texture of a surface, the shape and size of an object, or to identify the object quickly and accurately, it should be close or in contact with the skin of the perceiver. Differences as well as similarities have been pointed out in object representation, and in the identification and recognition of objects by vision and touch. For example, both modalities can be used to identify objects, shapes, and textures that are within reach. However, vision can process different kinds of information that are at a distance from the perceiver, but only those surfaces, corners, and parts of objects facing the observer can be perceived visually. Shape, size, and structure of objects in the close space can be accurately perceived by haptic manual exploration. We speculated that visual and haptic object representations are similar and might be shared between both modalities (Ballesteros & Reales, 1995; Reales & Ballesteros, 1999). The program of research reviewed in this part of the chapter tried to show the dissociation between automatic, unconscious representations of objects manipulated actively with the hands in the proximal space and conscious knowledge about the same objects. For this purpose, we first reviewed a series of experiments in which we showed that conscious and unconscious manifestations of haptic familiar and unfamiliar objects can be functionally dissociated in young healthy participants. In this line of research, we first showed that haptic priming, as a manifestation of implicit memory, can be established for familiar and unfamiliar objects. Furthermore, we found functional dissociations between the two forms of recovering object information; then, we set out to demonstrate that vision and touch might share similar object representations. We were interested in studying the characteristics of implicit and explicit memory for 3-D familiar objects explored haptically without vision. The study of haptic perceptual priming and recognition of real objects was a new subject, as researchers in the field used visual stimuli almost exclusively. Participants in our experiments explored objects actively at study and test. A number of studies have reported the advantage of active versus passive tactile exploration of shapes (see Heller & Myers, 1983; Heller, Rogers, & Perry, 1990). Furthermore,
active touch proved to be very efficient in detecting a number of important structural properties of 3-D objects, such as their bilateral symmetry (Ballesteros & Reales, 2004; Ballesteros et al., 1997; Ballesteros et al., 1998) and in identifying familiar objects after just 2 or 3 seconds of exploration (Klatzky, Lederman, & Metzger, 1985).
HAPTIC PRIMING OF FAMILIAR AND UNFAMILIAR OBJECTS

Research on touch conducted in our laboratory was aimed at studying how active touch explorers store and retrieve information about objects, both novel and familiar, consciously and unconsciously, using haptic implicit and explicit memory tasks (Ballesteros, 1993; Ballesteros et al., 1999). Pictures of objects presented visually create mental representations so that when the same stimuli are presented again, they produce perceptual facilitation of these objects in comparison to new objects (e.g., Biederman & Cooper, 1991; Musen & Treisman, 1990; Schacter et al., 1990). We anticipated that objects explored haptically (without vision) would produce facilitation in terms of accuracy and/or latency compared to unexplored (new) objects. In two experiments we investigated whether haptic priming for familiar and unfamiliar novel 3-D objects could be obtained, as has been shown in vision. We also asked whether different manipulations conducted from the first presentation (the study phase) of the objects to the second presentation (the test phase) would reduce or eliminate haptic priming. In other words, we studied how different experimental manipulations would affect implicit and explicit memory tasks, looking for dissociations between both types of tests. The task used to assess implicit memory for familiar objects was a speeded object naming task. Explicit memory was evaluated with a recognition test. The procedure was as follows: At study, 80 blindfolded young, healthy adult participants explored a series of familiar natural and man-made (artificial) objects without wearing gloves on their hands. The encoding task consisted of judging a series of salient features of objects such as weight (heavy or light), temperature (warm or cold), size (large or small), and shape (round or sharp). At test, participants were divided into two groups of 40 observers each. One group performed the implicit memory test, and the other group participated in the explicit recognition test. These two groups were further divided randomly into two subgroups of 20 participants each. According to the condition, one subgroup performed the implicit or the explicit test in the same way as in the study phase (that is, without gloves). In contrast, the other subgroup performed the implicit or the explicit task wearing latex gloves on their hands. We anticipated that the introduction of the
study-to-test transformation in the mode of exploration would not impair implicit memory. No significant differences in priming were expected, as the experimental manipulation introduced from study to test allows one to extract the object’s structure in both exploratory conditions. On the other hand, impaired performance was expected in the explicit memory test under such conditions. Based on the visual results reviewed previously (see Cooper et al., 1992; Schacter et al., 1990), we anticipated that mode of encoding would not affect implicit memory while the same encoding manipulation would impair recognition. The main results of this experiment are displayed in Fig. 5.1. Data in the left side of the figure shows haptic performance on the speeded object naming task for haptically studied and nonstudied natural and artificial objects when the mode of exploration changed from study to test. Data in the right side shows performance when mode of exploration was the same at study and test (the unchanged condition). The central outcome of the experiment can be summarized as follows: Substantial priming for haptically explored objects was obtained in the speeded object naming task. Furthermore, although responses were slower, priming was not affected whether participants explored the objects with gloves (the changed condition) or without gloves at test (the unchanged condition). The findings suggest that low-level skin sensory information does not play a crucial role in haptic perceptual priming. Although explicit memory for haptically explored objects was
FIG. 5.1. Response times (sec) in the speeded haptic naming task for studied and nonstudied objects as a function of object type (natural and artificial) and mode of exploration (without gloves, as in the study phase or with gloves). Modified from Ballesteros et al., 1999.
excellent (90% correct), performance on the recognition task was adversely affected by the change in the mode of exploration. These results showed a dissociation between implicit and explicit memory process for haptically manipulated objects. These findings were interpreted as a manifestation of two different memory systems, one specialized in storing structural information and the other specialized in dealing with episodic information about haptic objects. The robust haptic priming obtained in the study that resists sensory study-to-test changes suggests that implicit memory is mediated by structural object information. Kinesthetic information extracted by the moving hands suffices to tap the newly created representations of objects. This finding contrasts with the results obtained in recognition, as conscious retrieval of familiar objects seems to rely more heavily on cutaneous sensory information because a change in the mode of exploration impaired haptic recognition. A new experiment tried to find out whether haptic priming exists for unfamiliar objects as well. In this study we wished to minimize the possibility of semantic or meaningful encoding. Note that for unfamiliar haptic objects (as those shown in Fig. 5.2) there were no mental representations prior to the first encounter with the objects during the study phase. We designed a symmetry judgment task to assess implicit memory incidentally. Participants in the haptic implicit tasks judged whether a series of objects explored haptically were bilaterally symmetric or asymmetric. We asked participants to respond as quickly and accurately as possible. Explicit memory was evaluated by an “old-new” recognition test. The idea was that haptic exploration of an object would produce a mental representation which includes its shape and spatial structure. We proposed that this recently encoded representation will be activated when the same haptic unfamiliar object is presented in the implicit memory tests, making performance on the task faster or more accurate.
FIG. 5.2. Examples of symmetric and asymmetric unfamiliar objects used by Ballesteros et al. (1999, Exp. 2).
Another 80 young haptic observers participated in the study. Forty participants performed the implicit memory test after the encoding phase, and the other 40 performed the explicit memory test. Twenty participants in each group encoded the unfamiliar objects under semantic conditions and the other 20 under structural conditions. Previous visual (Jacoby & Dallas, 1981; Schacter et al., 1990) and haptic studies (Reales & Ballesteros, 1999) have shown that repetition priming is a perceptual phenomenon that is not influenced by meaning. Participants in the semantic or elaborative encoding condition were allowed to explore the unfamiliar object for 10 seconds and provide a name of a real-world object that each wooden structure reminded them of. Participants in the structural encoding condition were allowed the same time to judge the complexity of the unfamiliar object on a five-point scale. At test, those haptic explorers participating in the implicit condition performed the speeded symmetry judgment task. Haptic explorers participating in the explicit condition performed a recognition “old-new” task. The results from this experiment showed haptic priming for objects encoded structurally: these objects were judged more quickly as symmetric or asymmetric than nonstudied objects, whereas objects encoded semantically showed no such facilitation. Objects encoded elaboratively or semantically at study were recognized more accurately (hits minus false alarms = .66) than objects encoded structurally (hits minus false alarms = .39). Haptic recognition of unfamiliar objects was significantly enhanced under elaborative study conditions, whereas the implicit symmetry detection task produced the opposite results. Structurally encoded unfamiliar objects were detected more quickly as symmetric or asymmetric, and marginally more accurately, than elaboratively encoded objects (Ballesteros et al., 1999).
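The corrected recognition scores just quoted (hits minus false alarms) can be computed as in the short sketch below. The hit and false-alarm rates are invented for the example; only their differences match the .66 and .39 values reported above.

```python
# Illustration of the corrected recognition score used above: the hit rate on
# old items minus the false-alarm rate on new items. The proportions are
# invented for the example.

def corrected_recognition(hit_rate: float, false_alarm_rate: float) -> float:
    """Hits minus false alarms; 1.0 is perfect, 0.0 is chance-level guessing."""
    return hit_rate - false_alarm_rate

if __name__ == "__main__":
    # Hypothetical rates mirroring the pattern reported in the text:
    print(corrected_recognition(0.78, 0.12))   # elaborative encoding -> 0.66
    print(corrected_recognition(0.55, 0.16))   # structural encoding  -> 0.39
```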
CROSS-MODAL PRIMING: THE STRUCTURAL DESCRIPTION SYSTEM

A complementary line of our research on haptic memory explored whether the perceptual representations of visual and haptic objects that mediate priming are modality specific, as has been claimed in several reviews (see Roediger & McDermott, 1993; Schacter, Chiu, & Ochsner, 1993; Tulving & Schacter, 1990). We were interested in finding out how this represented information about familiar objects is accessed under implicit and explicit conditions. Although repetition priming was assumed to be largely modality specific, previous studies used mostly words as stimuli, shifting from vision to audition or vice versa. We suggested that the reduced cross-modal priming previously observed might be due to the lack of overlap between the information in the two formats (e.g., sound and visual letters). The question was, “What happens when the stimuli presented to
different modalities had exactly the same structure?”; for example, when familiar objects were presented to vision and touch at study and test, respectively. We anticipated cross-modal priming when objects were presented, for example, to vision at study and to touch at test, or vice versa. A series of experiments conducted by Reales and Ballesteros (1999) showed significant priming effects between and within the visual and haptic modalities. Cross-modal priming (vision to touch; touch to vision) was equivalent to within-modal priming (vision to vision; touch to touch). In one of the experiments we investigated whether priming is preserved when observers at study were presented with familiar objects in one modality (vision or touch, according to the experimental condition) and, incidentally, at test, implicit memory for these objects was evaluated in the other modality. Cross-modal priming of 3-D familiar objects was studied under two encoding conditions (semantic meaningful encoding and structural encoding conditions). At study, different groups of participants explored 30 objects visually or haptically (according to the experimental condition). Participants encoded 15 objects semantically (elaboratively) and the other 15 structurally (physically) in a counterbalanced order. At test, objects were presented either in the same modality or in a different modality. Participants performed an implicit speeded object naming task. The results showed a large facilitatory priming effect for studied compared to nonstudied objects (see Fig. 5.3). Similar and substantial repetition priming was found between and within modalities. Furthermore, priming effects were not influenced by the level-of-processing manipulation at encoding. Thus, the facilitation observed under physical encoding was similar to the facilitation ob-
FIG. 5.3. Response time (in sec) in the haptic (right) and visual (left) implicit object naming test as a function of type of study (physical study, semantic study and nonstudied) and study modality, visual encoding or haptic encoding (modified from Reales & Ballesteros, 1999, Exp. 1).
served under semantic encoding. These findings suggest that haptic and visual repetition priming is a perceptual, instead of merely a conceptual, phenomenon. Repetition priming for 3-D objects is not modality specific but depends instead on high-level structural features that define object shape which are essential for basic level object naming. The results suggest that similar structural descriptions mediate implicit memory in vision and touch (Ballesteros & Reales, 1995; Reales & Ballesteros, 1999). The idea of a common representation shared by vision and touch is also supported by other behavioral studies conducted by Easton and his colleagues. They have also questioned the proposal that the representations underlying perceptual implicit tests are modality specific. These authors showed no effect of modality change on implicit and explicit memory tests for verbal information. The stimuli were words presented to vision and touch and these researchers argued that verbal information in the two modalities could be coded in geometrical terms (Easton, Srinivas, & Greene, 1997). In a further study using nonsense 2-D shapes and common 3-D objects instead of words, these authors reported a robust cross-modal priming that was slightly smaller than within modal priming (Easton, Greene, & Srinivas, 1997).
AGE-RELATED CHANGES IN IMPLICIT AND EXPLICIT MEMORY IN NORMAL AGING AND ALZHEIMER’S DISEASE (AD)

Convergent results have shown that advancing age is normally associated with a clear decline in most mental processes (Park, 1999). There is compelling evidence that aging is closely associated with impaired memory functioning, but the basis for this impairment is not yet well understood (Light, 1991; Nyberg, Bäckman, Erngrund, Olofsson, & Nilson, 1996). The numerous alterations reported in brain structure and function might be an important factor in cognitive aging, in general, and memory impairment, in particular (see Raz, 2000). However, other important mechanisms have been proposed to account for age differences in cognitive functioning, such as the powerful connection between sensory and cognitive functioning (e.g., Baltes & Lindenberger, 1997; Schneider & Pichora-Fuller, 2000), the decrease in the speed of performing mental tasks (Salthouse, 1996), and the decline in processing resources (working memory deficit; Craik & Byrd, 1982). One thing is clear: as the human brain ages, memory becomes less efficient. It is well documented in the literature that explicit memory, assessed by recognition, free recall, and cued recall tests, becomes less efficient with age. Older people often have difficulties in remembering and recalling new facts and
events (Light, 1991). However, not all types of memories deteriorate equally. Many studies have indicated that age invariance usually exists when memory is assessed by implicit memory tasks that do not require conscious effortful retrieval (Kausler, 1994; Light, 1991; Prull, Gabrieli, & Bunge, 2000; Schacter, Cooper, & Valdiserri, 1992). There is a tendency, however, toward an age effect (Nyberg et al., 1996). When age effects have appeared in implicit memory tasks, they have always favored young over older participants (Fleischman & Gabrieli, 1998; La Voie & Light, 1994; Rybash, 1996). The stimuli commonly used in these aging studies were visually presented words and pictures. We can conclude that normal aging shows robust repetition priming across a large number of implicit memory tests with visual stimuli. The effects of age are generally nonsignificant, and when they exist, are much smaller than in explicit memory tests. Alzheimer’s disease is the most common type of senile dementia, representing half of the dementing syndromes. This disease is a progressive neurodegenerative condition characterized by a global impairment of cognitive functioning. The first deficit to appear includes a progressive inability to remember facts and events and, later, to recognize friends and family. These problems derive from an explicit (declarative) memory deficit. It is not surprising that memory has been the psychological process most studied in AD patients. The impairment of explicit memory in these patients is due to brain damage that causes a severe deterioration of the medial-temporal lobe neural structures (Hyman, Van Hoesen, Damasio, & Barnes, 1984). During the last 15 years, interest has increased enormously in the investigation of implicit memory and its dissociation from explicit memory in older healthy adults and in AD patients. Fleischman and Gabrieli (1998) published a comprehensive review that included the results from more than 180 experiments on the topic. The conclusion was that a substantial variation in priming occurs with normal aging and AD. This is due to several causes, including measurement issues, individual differences, and the different tasks used to assess implicit memory. The findings show that 65% of the reviewed studies reported that priming was intact in these patients, but around 35% of the experiments produced a priming reduction of at least 10% compared to healthy old participants. The stimuli used in all of these experiments were mostly words presented to vision or audition. A smaller number of experiments used pictorial stimuli. No experiment, however, tested implicit memory in healthy older adults and Alzheimer’s patients for stimuli presented to another modality, such as touch. In summary, the increasingly large literature on long-term memory in AD patients suggests a large and pervasive deficit of explicit (episodic) memory and a
relatively well preserved implicit memory for verbal and visual information (Fleischman & Gabrieli, 1998; La Voie & Light, 1994). To recover information from memory is nearly impossible for AD patients when memory performance is assessed by recall or recognition tasks that require the individual to recover previously, explicitly acquired information. These memory tasks are direct (declarative) because people try to bring to mind facts and experiences previously encoded. However, when the memory of AD patients is assessed with implicit memory tasks, a different pattern of results emerges. Normal priming effects have been reported using a number of visual identification tasks (e.g., Keane et al., 1995; Monti et al., 1994). It is worth mentioning, however, that this generalization was based on verbal and visual tasks, as all the experiments reviewed were conducted using these stimuli and the visual and auditory modalities; as far as we know, no one has studied haptic memory in demented patients. Repetition priming, as an expression of implicit memory, does not depend on the medial-temporal lobe memory system (Gabrieli, 1998). It is well documented that the first lesions to appear in AD patients usually occur in the medial-temporal lobe system (Hyman, Van Hoesen, Damasio, & Barnes, 1984). Recent structural imaging studies (MRI) have found medial temporal atrophy showing a reduction in the volume of the hippocampal formation even at very early stages of AD (deToledo-Morrell et al., 1997; Jack et al., 1997). This may explain why an explicit memory disorder (episodic memory) is the first symptom of AD. The aim of our study (Ballesteros & Reales, 2004b) was precisely to investigate the possible dissociations between repetition priming and explicit memory performance in AD when 3-D familiar objects are presented to active touch. Human beings at all ages interact continuously with objects in daily life. Furthermore, demented patients use the touch modality to perform daily-life acts, such as holding a spoon to eat or using a comb to take care of their hair. So, haptic perceptual facilitation might be important in dealing with familiar objects commonly encountered in daily life. In our laboratory we studied the status of haptic perceptual priming and explicit recognition of familiar objects in three groups of participants: younger adults (mean age 29.33 years), Alzheimer’s patients (mean age 74.16 years), and healthy older controls (mean age 74.66 years) matched in gender, age, and years of education. For stimulus presentation we used a 3-D object tachistoscope provided with a piezoelectric board acting as the object presentation platform. At study, each participant haptically explored (without vision), one by one, a series of objects like those displayed in Fig. 5.4. They were asked to produce a sentence with the object’s name. As stated already, previous studies from our laboratory and from other laboratories have shown that perceptual priming is presemantic and that semantic or elaborative encoding does not influence implicit memory (e.g., Ballesteros et al., 1999; Re-
FIG. 5.4. Examples of the familiar objects used in the aging study by Ballesteros and Reales (2004b).
After performing a distracting task that lasted for 5 minutes, implicit memory was assessed incidentally with a speeded haptic object-naming task, followed by an "old–new" recognition task to assess explicit memory. Half of the studied objects served as old stimuli in the implicit test, while the other half served as the old objects in the recognition test. Performance on the implicit memory test was compared to performance on the explicit recognition test. Figure 5.5 displays the results from the object naming test as a function of group (younger adults, older adults, and AD patients) and item type (studied or nonstudied). As can be seen, the three groups of participants named studied objects faster than nonstudied objects, and the effect of group was not significant. AD patients showed normal object priming that did not differ from the priming of healthy young participants and older controls. The study also showed age invariance, as the two healthy groups (young and older adults) showed similar priming effects; there was no difference in performance between the younger and older healthy groups. The implicit task used in this study required the identification of haptically explored objects. The normal AD priming obtained in this task fits the identification versus production distinction. According to this hypothesis, AD priming is spared on visual identification tasks, such as word identification, lexical decision, and picture naming, but not on production tasks, such as word-stem completion and word association
FIG. 5.5. Response time (in sec) in the haptic object naming test as a function of study condition (studied versus nonstudied objects) and group (younger adults, healthy older adults, and Alzheimer's patients). Note the substantial priming effects obtained by the three groups of participants (modified from Ballesteros & Reales, 2004b).
(Fleischman & Gabrieli, 1998). The results of the speeded object naming task differed from the results of the explicit memory test. Figure 5.6 shows the results from the explicit memory test, expressed as the percentage of stimuli correctly recognized as "old" objects (hits) minus false alarms, for the three groups. As is shown, younger and older healthy adults did not differ on explicit recognition while, as expected, AD patients were highly impaired on the explicit recognition test. The present findings in the haptic domain support previous results indicating preserved priming for novel patterns and words presented either to vision or audition (Keane, Gabrieli, Growdon, & Corkin, 1994; Postle, Corkin, & Growdon, 1996; Verfaellie, Keane, & Johnson, 2000). These results support the idea that implicit memory is mediated by a memory system that is different from the medial-temporal diencephalic memory system underlying explicit haptic memory. Implicit memory might depend on brain areas that are relatively spared at the initial stage of AD. Such areas might be the multimodal sensory areas in the neocortex that can be activated either by touch or by vision (see chaps. 7, 8, and 9, this volume). The double dissociation observed on explicit and implicit memory tests suggests that different memory systems
mediate both types of memories. These findings extend to the haptic domain other results indicating preserved priming in implicit memory tests using novel patterns (Postle et al., 1996), pictures and words presented to vision (Keane, Gabrieli, Fennema, et al., 1991), pseudowords (Keane, Gabrieli, Growdon, & Corkin, 1994), and words presented to audition (e.g., Verfaellie et al., 2000). The preserved haptic priming in AD patients, despite their highly impaired haptic recognition of 3-D objects, suggests that priming in the haptic domain depends on the activation of the structural descriptions of objects (Reales & Ballesteros, 1999). Several findings from AD implicit memory studies conducted in the visual, auditory, and tactual modalities suggest that implicit memory depends on neocortical areas that are relatively well preserved in the early stages of Alzheimer's disease. The preservation of haptic priming in patients with AD provides strong support for the idea that haptic priming is mediated by a memory system that is different from the medial-temporal diencephalic memory system underlying explicit haptic memory. Taken together, previous findings in vision and audition, along with those reported in this chapter in the haptic modality, indicate that perceptual priming, tested across a wide range of tasks, stimulus types, and perceptual modalities, is preserved in AD patients.
FIG. 5.6. Recognition results expressed as the corrected measure of hits minus false alarms as a function of group (young adults, healthy older adults, and Alzheimer's disease patients). Note the impaired explicit memory of AD patients and the unimpaired memory of healthy older adults and younger adults (modified from Ballesteros & Reales, 2004b).
The sparing of haptic object priming in AD patients suggests that perceptual priming with tactual objects depends on brain areas that are relatively spared, at least during the early stages of the disease. Possibly, these areas are the polymodal sensory areas in the neocortex (see Pascual-Leone & Hamilton, 2001). The exact brain mechanisms and neural networks that underlie haptic object priming in young adults, older healthy adults, and AD patients cannot be determined using only behavioral methods. However, recent brain imaging studies conducted by James et al. (2002; see Goodale, James, James, & Humphrey, chap. 7, this volume), Sathian and Prather (chap. 8, this volume), and Pascual-Leone, Theoret, Merabet, Kauffmann, and Schlaug (chap. 9, this volume) have presented compelling evidence of the involvement of extrastriate visual areas in haptic processing of 2-D and 3-D stimuli. In an early study, Sathian and Zangaladze (1997) used positron emission tomography (PET) to localize changes in regional cerebral blood flow while young adults performed a tactile grating orientation task. They found that attention to grating orientation increased regional blood flow in the left parieto-occipital cortex. More recently, Sathian and Prather (chap. 8, this volume) reported results from a study in which they used two-dimensional form stimuli and PET imaging to measure activation in the brain. They showed that tactile tasks recruit extrastriate visual cortex in a task-specific manner. Dorsal visual areas are recruited during certain tasks (grating orientation discrimination and mental rotation), whereas ventral visual areas are recruited during tasks calling for form discrimination. The important point is that the activations occurred in regions that are also active during corresponding visual tasks. Goodale and his colleagues (Goodale et al., chap. 7, this volume; James et al., 2002) found that haptic exploration of nonfamiliar 3-D objects produced activation not only in the somatosensory cortex, but also in areas of the occipital cortex associated with visual processing. These investigators studied cross-modal representations of object shape. They presented young haptic explorers with novel 3-D objects and used functional magnetic resonance imaging (fMRI) to assess the effects of cross-modal, haptic-to-visual priming on brain activation. The results showed that haptic exploration of novel objects produced activation in areas of the occipital cortex associated with visual object processing. The overlap in some of the neural substrates mediating haptic and visual processing of object structure suggests that the haptic modality may use the object representation systems of the ventral visual pathway. Pascual-Leone and his colleagues, using functional brain imaging (PET and fMRI) while subjects read Braille, found that early blind young participants showed activation of the striate and extrastriate visual cortex. Furthermore, disruption of the visual cortex in early blind explorers with Transcranial Magnetic Stimulation (TMS)
significantly impaired performance on a tactile spatial discrimination task. Such effects are not found in sighted controls or in late blind subjects (Sadato et al., 1996). Based on results showing that congenitally blind and sighted explorers employ the same cortical brain region for the same task, Pascual-Leone et al. (chap. 9, this volume) suggest a metamodal cerebral organization to explain how sensory processing is carried out in the human brain. Postmortem examinations of AD patients have found relatively little damage to the primary visual, auditory, and somatosensory neocortical areas, the basal ganglia, and the cerebellum. On the other hand, substantial damage to association neocortices in the frontal, parietal, and temporal lobes was found (Brun & Englund, 1981). As Gabrieli (1998) suggests, referring to previous studies on vision and audition, the observed intact perceptual priming might be explained by the sparing of modality-specific cortices. In our case, priming might be explained by the sparing of the somatosensory cortex. Recent brain imaging studies all suggest the involvement of the visual cortex in haptic processing. Further neuropsychological research using brain imaging and electrophysiological recordings may help to delineate the brain circuitry implicated in the preserved implicit memory for objects identified by touch in the normally aging brain and in AD patients.
CONCLUDING REMARKS
Our findings add to those showing intact perceptual priming on visual tasks (Fleischman et al., 1995; Keane, Gabrieli, Fennema, Growdon, & Corkin, 1991; Keane, Gabrieli, Mapstone, Johnson, & Corkin, 1995). Taken together, previous findings in vision and audition, along with those from the new haptic study, indicate that perceptual priming, tested across a wide range of tasks, stimulus types, and perceptual modalities, is preserved in healthy aging and in AD patients, who did not differ from young adults. On the other hand, explicit recognition is highly impaired in these patients, whereas it is preserved in normally aging controls. The sparing of haptic object priming in AD patients suggests that perceptual priming with tactual objects might depend on brain areas that are relatively intact in AD. Possibly, these are polymodal sensory areas in the neocortex.
ACKNOWLEDGMENTS
The research reported in this chapter on implicit and explicit memory in aging and dementia was supported by a grant from Dirección General de Investigación
Científica y Técnica (BSO2000-0108-C02-01). Previous research on haptic priming and recognition was funded in part by grants PB94-0393 and PB90-0003, and by several grants from the Universidad Nacional de Educación a Distancia (UNED). The authors wish to thank Herminia Peraita and her research group for helpful discussions throughout the project on aging and dementia, and José Luis Dobato, neurologist at Hospital-Fundación Alcorcón, for the careful evaluation of the Alzheimer's patients who participated in this study. The authors thank Alicia Sanchez for her help in locating the elderly people who participated in the study and for collecting some of the data reported in this chapter.
REFERENCES
Ballesteros, S. (1993, November). Visual and haptic priming: Implicit and explicit memory for objects. Paper presented at the University of Trier, Germany.
Ballesteros, S., & Cooper, L. A. (1992, July). Perceptual priming for two-dimensional patterns following visual presentation. Paper presented at the 25th International Congress of Psychology, Brussels, Belgium.
Ballesteros, S., Manga, D., & Reales, J. M. (1997). Haptic discrimination of bilateral symmetry in 2-dimensional and 3-dimensional unfamiliar displays. Perception & Psychophysics, 59, 37–50.
Ballesteros, S., Millar, S., & Reales, J. M. (1998). Symmetry in haptic and in visual shape perception. Perception & Psychophysics, 60, 389–404.
Ballesteros, S., & Reales, J. M. (1995, July). Priming of objects presented to vision and touch. Paper presented at the XXV Congreso Interamericano de Psicología, San Juan, Puerto Rico.
Ballesteros, S., & Reales, J. M. (2004a). Visual and haptic discrimination of symmetry in unfamiliar displays extended in the z-axis. Perception, 33, 315–327.
Ballesteros, S., & Reales, J. M. (2004b). Intact haptic priming in normal aging and Alzheimer's disease: Evidence for dissociable memory systems. Neuropsychologia, 42, 1063–1070.
Ballesteros, S., Reales, J. M., & Manga, D. (1999). Implicit and explicit memory for familiar and novel objects presented to touch. Psicothema, 11, 785–800.
Baltes, P. B., & Lindenberger, U. (1997). Emergence of a powerful connection between sensory and cognitive functions across the adult life span: A new window to the study of cognitive aging? Psychology and Aging, 12, 12–21.
Biederman, I., & Cooper, E. E. (1991). Evidence for complete translational and reflectional invariance in visual object priming. Perception, 20, 585–593.
Brun, A., & Englund, E. (1981). Regional patterns of degeneration in Alzheimer's disease: Neuronal loss and histopathological grading. Histopathology, 5, 549–564.
Carlesimo, G. A., & Oscar-Berman, M. (1992). Memory deficits in Alzheimer's patients: A comprehensive review. Neuropsychology Review, 3, 119–169.
Carrasco, M., & Seamon, J. (1996). Priming the impossible figures in the object recognition task: The critical importance of perceived stimulus complexity. Psychonomic Bulletin & Review, 3, 344–351.
Cooper, L. A. (1994). Probing the nature of the mental representation of visual objects: Evidence from cognitive dissociations. In S. Ballesteros (Ed.), Cognitive approaches to human perception (pp. 199–221). Hillsdale, NJ: Lawrence Erlbaum Associates.
Cooper, L. A., Schacter, D. L., Ballesteros, S., & Moore, C. (1992). Priming and recognition of transformed three-dimensional objects: Effects of size and reflection. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 43–57.
Craik, F. I. M., & Byrd, M. (1982). Aging and cognitive deficits: The role of attentional resources. In F. I. M. Craik & S. Trehub (Eds.), Aging and cognitive processes (pp. 191–211). New York: Plenum Press.
deToledo-Morrell, L., Sullivan, M. P., Morrell, F., Wilson, R. S., Bennett, D. A., & Spencer, S. (1997). Alzheimer's disease: In vivo detection of differential vulnerability of brain regions. Neurobiology of Aging, 18, 463–468.
Easton, R. D., Greene, A. J., & Srinivas, K. (1997). Transfer between vision and touch: Memory for 2-D patterns and 3-D objects. Psychonomic Bulletin & Review, 4, 403–410.
Easton, R. D., Srinivas, K., & Greene, A. J. (1997). Do vision and haptics share common representations? Implicit and explicit memory between and within modalities. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23, 153–193.
Fleischman, D. A., & Gabrieli, J. D. E. (1998). Repetition priming in normal aging and in Alzheimer's disease: A review of findings and theories. Psychology and Aging, 13, 88–119.
Fleischman, D. A., Gabrieli, J. D. E., Reminger, S., Rinaldi, J., Morrell, F., & Wilson, R. (1995). Conceptual priming in perceptual identification for patients with Alzheimer's disease and a patient with a right occipital lobectomy. Neuropsychology, 9, 187–197.
Fuster, J. M. (1995). Memory in the cerebral cortex. Cambridge, MA: The MIT Press.
Gabrieli, J. D. E. (1998). Cognitive neuroscience of human memory. Annual Review of Psychology, 49, 87–115.
Graf, P., & Schacter, D. L. (1985). Implicit and explicit memory for new associations in normal and amnesic patients. Journal of Experimental Psychology: Learning, Memory, and Cognition, 11, 501–518.
Heller, M. A., & Myers, D. S. (1983). Active and passive recognition of form. Journal of General Psychology, 108, 225–229.
Heller, M. A., Rogers, G. J., & Perry, C. L. (1990). Tactile pattern recognition with the Optacon. Neuropsychologia, 28, 1003–1006.
Hyman, B. T., Van Oesen, G. W., Damasio, A. R., & Barnes, C. L. (1984). Alzheimer's disease: Cell-specific pathology isolates the hippocampal formation. Science, 225, 1168–1170.
Jack, C. R., Petersen, R. C., Xu, Y. C., Waring, S. C., O'Brien, P. C., Tangalos, E. G., et al. (1997). Medial temporal atrophy on MRI in normal aging and very mild Alzheimer's disease. Neurology, 49, 786–794.
Jacoby, L. L., & Dallas, M. (1981). On the relationship between autobiographical memory and perceptual learning. Journal of Experimental Psychology: General, 110, 306–340.
James, T. W., Humphrey, G. K., Gati, J. S., Servos, P., Menon, R. S., & Goodale, M. A. (2002). Haptic study of three-dimensional objects activates extrastriate visual areas. Neuropsychologia, 40, 1706–1714.
Kausler, D. H. (1994). Learning and memory in normal aging. New York: Academic Press.
Keane, M. M., Gabrieli, J. D. E., Fennema, A. C., Growdon, J. H., & Corkin, S. (1991). Evidence for a dissociation between perceptual and conceptual priming in Alzheimer's disease. Behavioral Neuroscience, 105, 326–342.
Keane, M. M., Gabrieli, J. D. E., Growdon, J. H., & Corkin, S. (1994). Priming in perceptual identification of pseudowords is normal in Alzheimer's disease. Neuropsychologia, 32, 343–356.
Keane, M. M., Gabrieli, J. D. E., Mapstone, H. C., Johnson, K. A., & Corkin, S. (1995). Double dissociation of memory capacities after bilateral occipital-lobe or medial temporal-lobe lesions. Brain, 118, 1129–1148.
Klatzky, R. L., Lederman, S. J., & Metzger, V. A. (1985). Identifying objects by touch: An "expert system." Perception & Psychophysics, 37, 299–302.
La Voie, D., & Light, L. L. (1994). Adult age differences in repetition priming: A meta-analysis. Psychology and Aging, 4, 538–553.
Light, L. L. (1991). Memory and aging: Four hypotheses in search of data. Annual Review of Psychology, 42, 333–376.
Loomis, J., & Lederman, S. J. (1986). Tactual perception. In K. R. Boff, L. Kaufman, & J. P. Thomas (Eds.), Handbook of perception and human performance (Vol. 2, pp. 31-1–31-44). New York: Wiley.
Millar, S. (1994). Understanding and representing space: Theory and evidence from studies with blind and sighted children. Oxford: Oxford University Press.
Monti, L. A., Gabrieli, J. D. E., Wilson, R. S., & Reminger, S. L. (1994). Intact text-specific implicit memory in patients with Alzheimer's disease. Psychology and Aging, 9, 64–71.
Musen, G., & Treisman, A. (1990). Implicit memory for visual patterns. Journal of Experimental Psychology: Learning, Memory, and Cognition, 16, 127–137.
Nyberg, L., Bäckman, L., Erngrund, K., Olofsson, U., & Nilsson, L.-G. (1996). Age differences in episodic memory, semantic memory, and priming: Relationships to demographic, intellectual, and biological factors. Journal of Gerontology, 51B, 234–240.
Park, D. C. (1999). The basic mechanisms accounting for age-related decline in cognitive functioning. In D. C. Park & N. Schwarz (Eds.), Cognitive aging: A primer (pp. 3–21). Philadelphia, PA: Taylor & Francis.
Parkin, A. J., & Russo, R. (1990). Implicit and explicit memory and the automatic/effortful distinction. European Journal of Cognitive Psychology, 2, 71–80.
Pascual-Leone, A., & Hamilton, R. (2001). The metamodal organization of the brain. In C. Casanova & M. Ptito (Eds.), Progress in brain research (Vol. 134, pp. 1–19). Amsterdam: Elsevier.
Plaut, D. C., & Farah, M. J. (1990). Visual object representation: Interpreting neuropsychological data within a computational framework. Journal of Cognitive Neuroscience, 2, 320–342.
Postle, B. R., Corkin, S., & Growdon, J. H. (1996). Intact implicit memory for novel patterns in Alzheimer's disease. Learning and Memory, 3, 305–312.
Prull, M. W., Gabrieli, J. D. E., & Bunge, S. A. (2000). Age-related changes in memory: A cognitive neuroscience perspective. In F. I. M. Craik & T. A. Salthouse (Eds.), The handbook of aging and cognition (pp. 91–153). Mahwah, NJ: Lawrence Erlbaum Associates.
Raz, N. (2000). Aging of the brain and its impact on cognitive performance: Integration of structural and functional findings. In F. I. M. Craik & T. A. Salthouse (Eds.), The handbook of aging and cognition (pp. 1–90). Mahwah, NJ: Lawrence Erlbaum Associates.
Reales, J. M., & Ballesteros, S. (1999). Implicit and explicit memory for visual and haptic objects: Cross-modal priming depends on structural descriptions. Journal of Experimental Psychology: Learning, Memory, and Cognition, 118, 219–235.
Révész, G. (1950). Psychology and art of the blind. London: Longmans Green.
Roediger, H. L., & Blaxton, T. A. (1987). Effects of varying modality, surface features, and retention interval on priming in word fragment completion. Memory & Cognition, 15, 379–388.
Roediger, H. L., & McDermott, K. B. (1993). Implicit memory in normal human subjects. In H. Spinnler & F. Boller (Eds.), Handbook of neuropsychology (Vol. 8, pp. 63–131). Amsterdam: Elsevier.
Rybash, J. M. (1996). Implicit memory and aging: A cognitive neuropsychological perspective. Developmental Neuropsychology, 12, 127–179.
Sadato, N., Pascual-Leone, A., Grafman, J., Ibáñez, V., Deiber, M. P., Dold, G., & Hallett, M. (1996).
Activation of the primary visual cortex by Braille reading in blind subjects. Nature, 380, 526–528.
Salthouse, T. A. (1996). The processing-speed theory of adult age differences in cognition. Psychological Review, 103, 403–428.
Sathian, K., & Zangaladze, A. (1997). Feeling with the mind's eye. NeuroReport, 8, 3877–3881.
Schacter, D. L. (1987). Implicit memory: History and current status. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13, 501–518.
Schacter, D. L., Chiu, C. Y. P., & Ochsner, K. N. (1993). Implicit memory: A selective review. Annual Review of Neuroscience, 16, 159–182.
Schacter, D. L., Cooper, L. A., & Delaney, S. M. (1990). Implicit memory for unfamiliar objects depends on access to structural descriptions. Journal of Experimental Psychology: General, 119, 5–24.
Schacter, D. L., Cooper, L. A., & Valdiserri, M. (1992). Implicit and explicit memory for novel visual objects in older and younger adults. Psychology and Aging, 2, 299–308.
Schneider, B. A., & Pichora-Fuller, M. K. (2000). Implications of perceptual deterioration for cognitive aging research. In F. I. M. Craik & T. A. Salthouse (Eds.), The handbook of aging and cognition (pp. 155–219). Mahwah, NJ: Lawrence Erlbaum Associates.
Spinnler, H. (1999). Alzheimer's disease. In G. Denes & L. Pizzamiglio (Eds.), Handbook of clinical and experimental neuropsychology (pp. 699–746). UK: Psychology Press.
Squire, L. R. (1987). Memory and brain. New York: Oxford University Press.
Squire, L. R. (1992). Memory and the hippocampus: A synthesis from findings with rats, monkeys, and humans. Psychological Review, 99, 195–213.
Tulving, E. (1972). Episodic and semantic memory. In E. Tulving & W. Donaldson (Eds.), Organization of memory (pp. 381–403). New York: Academic Press.
Tulving, E. (1983). Elements of episodic memory. New York: Oxford University Press.
Tulving, E., & Schacter, D. L. (1990). Priming and human memory systems. Science, 247, 301–306.
Tulving, E., Schacter, D. L., & Stark, H. A. (1982). Priming effects in word-fragment completion are independent of recognition memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 8, 336–342.
Verfaellie, M., Keane, M. M., & Johnson, G. (2000). Preserved priming in auditory perceptual identification in Alzheimer's disease. Neuropsychologia, 38, 1581–1592.
Tactile Virtual Reality: A New Method Applied to Haptic Exploration
José Antonio Muñoz Sevilla
Whereas the rapid growth in use of three-dimensional (3-D) computer images has proved to be of significant value to many computer users, these images have remained totally inaccessible to blind people (Fernández, Muñoz, Gutiérrez, & Barbero, 1999). The EU GRAB project (a consortium of six members: LABEIN, PERCRO, RNIB, NCBI, ONCE, and HAPTICA) is seeking to establish the degree to which a dual-finger haptic interface, augmented by audio input and output, can provide nonvisual access to this important area of the information world. The haptic interface itself is an entirely new development controlled by a powerful haptic modeling tool. Validation of such a device is a complex procedure, not least because it falls entirely outside the experience of most users.
PROJECT OBJECTIVES
Information Society (IS) utilities are increasingly being used in working and learning environments. New digital interactive information products and services will emerge. The difficulty that blind people have in using some of these IS applications is creating a gap between blind and other users.
Nowadays, some of the obstacles that prevent blind and visually impaired people from having access to the computer and its applications are being overcome with the use of screen reader software, speech synthesizers, Braille displays, or the haptic mouse (2-D) (Sjostrom & Rassmuss, 1999). However, one field is still inaccessible to them: 3-D computer graphics (Jansson, 1999; Jansson et al., 1999). The main objective of the GRAB project is to eliminate this barrier, allowing blind and visually impaired people access to the 3-D graphic computer world through the sense of touch and with audio help (Ramloll et al., 2000), by means of a new Haptic & Audio Virtual Environment (HAVE). Instead of displaying just the images of the 3-D objects on a visual display, the proposed HAVE allows its user to feel, with his/her fingers, the shape and tactile surface properties of the 3-D objects. This is achieved using a Haptic Interface (HI). The interface was specifically developed in this project to allow people to touch 3-D virtual objects with the thumb and index fingertips of one hand, or with both index fingertips, while moving the hands in a desktop workspace. The user feels contact forces at the fingertips and is free to perform procedures of haptic exploration, moving his/her fingers over the object to recognize its geometric features (such as corners, surface edges, and curvature) and surface features (such as embossings and etchings; see Fig. 6.1). In this way, the user experiences the physical manipulation and exploration of digital 3-D objects just as if he or she were handling a real object. Through this kind of activity, blind or visually impaired users are able to form very sharp and accurate mental images of objects (Katz, 1925/1989).
FIG. 6.1. Work Session with the HAVE.
THE GRAB SYSTEM
The main aim of the new GRAB system is to allow blind and visually impaired people to have access to the 3-D graphic computer world through the sense of touch and with audio help. Figure 6.1 shows a work session with the new GRAB system. The user inserts two fingers (the thumb and index of one hand, or both index fingers) into the thimbles of the haptic interface and moves them through the virtual workspace. The user can interact with any virtual object displayed on the screen (feeling contact forces at the fingertips), listen to audio messages, and execute verbal or keyboard commands. He/she can move his/her fingers over the external boundary of any virtual object, feel the shape of the object, and recognize its geometric features (curvature, edges, corners) as if he/she were manipulating a real mock-up of the object. The operator screen shows the virtual objects and the points that represent the positions of the user's fingers within the virtual space. With the new system, the user interacts with the real geometry of the virtual object. This geometry can be defined by conics (lines, circles, and ellipses), quadrics (planes, cylinders, cones, spheres, and tori), or more complex geometry such as B-splines (complex curves and surfaces). The GRAB system is based on the integration of three tools:
• A two-finger 3-D force-feedback Haptic Interface, developed by PERCRO.
• Audio interaction using speech recognition and voice synthesis (IBM ViaVoice).
• A Haptic Geometric Modeler, developed by LABEIN, that allows interaction with any 3-D virtual object through haptic stimuli, sound aids, and speech recognition.
The new haptic interface consists of two coordinated arms, each with six degrees of freedom, allowing the relevant forces to be exerted on the user's fingers, as shown in Fig. 6.2. Each arm consists of a passive gimbal attached to an actuated 2R+1P serial chain. The specific differential kinematics achieves a high degree of stiffness and isotropy while still exhibiting low inertia at the fingertips. The interface has been designed so that the joint workspace of the two arms can cover a large portion of the desktop (arranged as a parallelepiped 600 mm wide, 400 mm high, and 400 mm deep). The system is equipped with high-performance DC motors that directly drive a cable transmission, thus avoiding problems of backlash. When the interface operates in the center of the workspace, a counterweight balances the moving mass of the barrel, thereby reducing the amount of torque required from the motors.
FIG. 6.2. Detail of the gimbals of the haptic interface.
The motors have been designed so that the interface can exert a force of at least 4 N throughout the workspace, though higher figures can be achieved when operating in the center of the workspace. A peak force of 12 N can be achieved for a short time. The system is equipped with a set of encoders for measuring the finger position with a spatial resolution better than 100 µm in the worst case. The device is designed to be transparent to the user by minimizing extraneous forces generated by the interface. The moving arms are provided with semi-embedded drivers that take care of all the basic control aspects of managing the interface. Control and driver units generate the required forces on the fingertips while measuring their position in the workspace. Moreover, the control unit compensates for all the nonlinear features related to the interface control and force generation, such as gravity compensation, active friction compensation, and the interface kinematics. The whole control system implements a real-time scheduler and runs at a nominal frequency of 5 kHz, which was chosen to avoid stability problems at contact, even when the arm is remotely controlled. The system can be driven by attaching its data port to the parallel port of a remote computer running any type of operating system. A set of libraries and drivers allows control of the haptic interface at a frequency of 1 kHz, though this could be increased to 14 kHz if required. The controller has been designed to incorporate several safety features, particularly bearing in mind the needs of blind people. When the interface is moving autonomously, the controller limits the maximum velocity and the forces that can be exerted on the joints. This minimizes the risk of injury even if the user gets too close to the device or touches it improperly. The interface can be configured to warn the user of specific actions. The power to the motors is disabled if the emergency pushbutton is pressed or the interface has been left unattended for more than an hour.
The new Haptic Geometric Modeler is an object-oriented toolkit containing all the data and algorithms that allow the user to interact with any 3-D virtual object; it provides the haptic stimuli to be rendered by the new haptic interface, together with sound aids and speech recognition to improve the interaction. Its main role is to analyze the position of the user's fingers (provided by the haptic interface), taking into account the action requested by the user through keyboard or verbal commands (for example, to zoom or to obtain a specific audio help), in order to generate the corresponding audio messages and the values of the forces to be replicated by the haptic interface on the user's fingers.
The GRAB system is based on the integration of the tools just described with the commercial tool ViaVoice. Figure 6.3 shows the architecture of the system. The virtual objects and environments and their properties are defined and managed by the "geometric modeler." Then, while the user is interacting with the virtual environment haptically, the workflow is the following:
• The "haptic process" is responsible for communicating with the "haptic interface," executing the cycle "get position—analyze position—replicate force" at a constant frequency of 1000 Hz. The workflow of this process is:
FIG. 6.3. Architecture of the GRAB Haptic and Audio Virtual Environment.
1. The user moves his/her fingers through the workspace.
2. The haptic interface detects the movement of the user's fingers and gets the new positions.
3. The haptic interface sends the new positions to the haptic geometric modeler through the Host PC library.
4. The haptic geometric modeler analyzes these new positions, taking into account the action requested by the user (for example, to zoom or to ask for a constrained movement), in order to calculate the values of the forces to be sent.
5. The haptic geometric modeler sends the values of the forces to the haptic interface through the Host PC library.
6. The haptic interface replicates the calculated forces on the user's fingertips.
7. Return to step 1.
• The "management process" is responsible for managing the actions requested by the user and for generating the audio aids (speech and nonspeech), taking into account the position of the user's fingers (for example, when the user wants a description of the objects around him/her) and the information about the interaction (for example, information about objects the user has not yet explored, or a description of the object the user is touching).
• The "geometric modeler" is responsible for rendering the virtual environment, including any dynamic modifications, such as the placement of the user's fingers or the movement of objects.
• The "audio process" is responsible for recognizing any verbal command issued by the user and for sending the audio aids.
• User commands can also be given via the keyboard.
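To make this cycle concrete, the short sketch below rephrases the haptic-process loop in Python-style code. It is only an illustrative sketch under stated assumptions: the chapter does not describe the GRAB software at the code level, and the class and method names used here (HapticInterface, GeometricModeler, and so on) are hypothetical placeholders, not the project's actual libraries or APIs.

```python
import time

CYCLE_HZ = 1000            # nominal frequency of the haptic process (Hz)
CYCLE_S = 1.0 / CYCLE_HZ   # duration of one cycle (seconds)


class HapticInterface:
    """Placeholder for the two-finger haptic device: positions in, forces out."""

    def read_finger_positions(self):
        # The real device would report the fingertip positions measured by its encoders.
        return [(0.0, 0.0, 0.0), (0.0, 0.0, 0.0)]

    def apply_forces(self, forces):
        # The real device would drive its motors to replicate these forces.
        pass


class GeometricModeler:
    """Placeholder for the haptic geometric modeler that holds the virtual scene."""

    def compute_forces(self, finger_positions):
        # Analyze the positions against the virtual objects and any pending user
        # action (zoom, constrained movement, ...) and return one force per finger.
        return [(0.0, 0.0, 0.0) for _ in finger_positions]


def haptic_process(interface, modeler, cycles=5):
    """Run a few passes of the get position -> analyze -> replicate force cycle."""
    for _ in range(cycles):                             # the real loop runs continuously
        start = time.perf_counter()
        positions = interface.read_finger_positions()   # steps 1-3: read and forward positions
        forces = modeler.compute_forces(positions)       # step 4: compute the forces to send
        interface.apply_forces(forces)                    # steps 5-6: replicate them on the fingertips
        # Step 7: wait out the remainder of the 1 ms cycle and start again.
        time.sleep(max(0.0, CYCLE_S - (time.perf_counter() - start)))


haptic_process(HapticInterface(), GeometricModeler())
```

The design point the sketch is meant to convey is simply that the host-side loop must complete each pass within its 1 ms budget; in the actual system this is made possible by running the low-level control on the interface's embedded drivers at 5 kHz while the host library exchanges positions and forces at 1 kHz.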
The first prototype of the GRAB system provides the following functionality:
• Simulation of different forces: the motors of the Haptic Interface can generate forces at the contact points, creating compelling illusions of interaction with the virtual objects. Different types of forces can be simulated by means of different software algorithms, providing the user with multiple utilities (a simple illustrative sketch of such a contact-force computation is given below):
  – Contact forces allow the user to touch and explore 3-D virtual objects and environments. This force also takes into account the material properties of the object: stiffness, texture, and stickiness (Klatzky, Lederman, & Reed, 1987).
  – Constraint forces constrain the user's motion to the boundary of the virtual object.
  – Sliding forces help the user follow a trajectory or a path.
  – Attraction/repulsion forces attract or repel the user's finger toward or away from an object.
  – Finding forces help the user locate an unexplored object.
  – Spring forces are used to push virtual buttons.
  – Weight forces simulate the weight of a grasped object.
  – Collision forces arise when the user moves an object and it collides with another one.
• Audio feedback, both speech and nonspeech, to provide static information about the virtual objects defined when the environment was designed (e.g., the name of the object and its description) and dynamic information related to the user's current position within the environment (e.g., a description of the distribution of the models around the user).
• Verbal and keyboard commands.
• Zoom in or out, to explore objects whose size is very small or very large.
• Panning of the virtual environment, to give access to any point of the virtual environment even when it is bigger than the virtual workspace.
Making use of this functionality, the first GRAB prototype also provides one of the applications most requested by users: a searching and adventure game. In this game, which has three levels of difficulty, the user must move inside a building with different rooms. Each room contains prizes and dangers: points, keys, extra lives, bombs, traps, and so on (Oakley, McGee, Brewster, & Gray, 2000). In the first two levels (basic and intermediate), the user must find and collect all the "point" elements and get out of the building to finish the game. In the complex game, the user must find a treasure that is hidden on the second floor (Fig. 6.4 shows the first floor).
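The chapter does not specify the algorithms behind these force types. Purely as an illustration, the sketch below shows one common way a contact force of this kind can be computed—a penalty-based model in which a fingertip that penetrates a virtual surface is pushed back along the surface normal in proportion to the penetration depth. This is an assumption about a typical implementation, not a description of the GRAB code, and the stiffness value is an arbitrary illustrative number.

```python
import numpy as np


def contact_force(finger_pos, surface_point, surface_normal, stiffness=800.0):
    """Illustrative penalty-based contact force; the stiffness (N/m) is a made-up value."""
    finger_pos = np.asarray(finger_pos, dtype=float)
    surface_point = np.asarray(surface_point, dtype=float)
    normal = np.asarray(surface_normal, dtype=float)
    normal = normal / np.linalg.norm(normal)

    # Signed distance from the surface: negative means the fingertip is inside the object.
    penetration = float(np.dot(finger_pos - surface_point, normal))
    if penetration >= 0.0:
        return np.zeros(3)                        # no contact, so no force
    return -stiffness * penetration * normal      # spring-like force pushing the finger out


# Example: a fingertip 2 mm inside a horizontal surface whose normal points straight up.
print(contact_force(finger_pos=[0.0, 0.0, -0.002],
                    surface_point=[0.0, 0.0, 0.0],
                    surface_normal=[0.0, 0.0, 1.0]))
# -> roughly [0. 0. 1.6], i.e., a 1.6 N upward force with the illustrative stiffness
```

In a real renderer, material properties such as texture and stickiness would modulate this basic force (for example, by adding tangential components), and the other force types listed above would contribute further terms to the total force sent to each fingertip.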
VALIDATION OF THE PROTOTYPE 1 OF THE HAVE: ERGONOMICS AND USABILITY ANALYSIS
With the aim of testing the validity of the HI prototype specifically developed in this project, as well as its functionality, and of studying the possible applications to be developed, all partners of the GRAB consortium considered it appropriate to undertake a pilot study with experimental situations. This pilot study would consider the different features that define the objects and the exploration methods that allow them to be identified. Since it was a pilot study and there were many features and utilities to test, the tasks were shared out among the user organizations, NCBI, RNIB, and ONCE, in order to study the utilities in depth. Except for the specific tasks tested, a common validation methodology was followed: introduction of the system to the user, a familiarization session, the test tasks, and a post-test questionnaire.
FIG. 6.4. Searching and adventure game.
Specifically, the validation study was expected to produce results about the haptic system and about the recognition of 3-D objects by means of touch and audio help. It also tested the interaction of the user with the application and his/her active participation through actions such as pressing buttons and moving objects. In addition, it evaluated usability levels and other interesting data gathered from the users' opinions.
Each of the three user organizations that carried out the tests (NCBI, RNIB, and ONCE) received a prototype of the HI system and was briefly trained to use it.
The sample used during the testing phase was sufficiently varied in user characteristics to guarantee maximum representativeness. In this sense, the following factors were considered: visual status, Braille reading level, exploration level, and relief-graphics reading skills. The GRAB system was tested with a total of 52 participants across the three user organizations, RNIB, NCBI, and ONCE. Participants included 17 congenitally blind, 14 adventitiously blind, and 15 sighted participants, who were blindfolded during the testing. Testing was also conducted with 6 partially sighted participants.
In the experimental phase, users were trained at the beginning of the session to become familiar with the system. With this aim, two different figures were used, placing particular emphasis on the idea that the objective was to test the device and not the skill or professional abilities of the participants. The users were provided with a description of the virtual workspace that they were going to explore, and they were prompted to use both fingers, as they preferred (two index fingers, or thumb and index finger). Where necessary, the tester "guided" the finger of the user along the outline so that he/she could know the feeling that had to be perceived. Complete sessions lasted about 1 hour. The tasks that were carried out to validate the different features and utilities of the GRAB system are the following:
• Finding and identifying objects (see Fig. 6.5).
• Properties of the materials: stiffness, stickiness, and texture.
• Buttons (see Fig. 6.6).
• Attraction and repulsion forces.
• Using a force to find unexplored objects (an expansion of the basic "attraction force" task).
• Constrained movement on the boundary of an object (keeping the user in contact with the model).
• Following a trajectory (see Fig. 6.7).
• Exploring with one finger versus two index fingers versus a thumb and an index finger.
• Recognizing the size of a virtual object (see Fig. 6.8).
• Judging distances between virtual objects (see Fig. 6.9).
• Zooming in and out.
• Panning.
• Weights.
• Moving objects and detecting collisions (interacting with the environment; see Fig. 6.10).
• Workspace features.
• Test buttons.
• Audio feedback.
RESULTS AND CONCLUSIONS
This section summarizes the results obtained in each of the tasks carried out to validate the different features and utilities of the GRAB system. The results are qualitative data, based on whether or not the users were able to execute the tasks successfully, on their comments and suggestions, and on the observations that the examiners made as the validation went along.
FIG. 6.5. Flower vase and bottle.
FIG. 6.6. Cube with buttons and cube with round buttons.
FIG. 6.7. M_trajectory and simple maze.
FIG. 6.8. Size and size L.
FIG. 6.9. Distances and distances L.
FIG. 6.10. Collision.
The results of these tests should enable us to determine which of the system's attributes are effective and which are not, and what their potential applications might be. The comments of the users from the three user organizations coincided to the extent that different people from different countries used almost the same words to explain their perceptions of the usefulness of the system.
This suggests that the results are robust and homogeneous. The general assessment of the results gives grounds for optimism as we face the second part of the project, which will build on the suggestions of the participants and examiners. We begin the conclusions with the difficulties reported in other studies, such as the lack of orientation in the workspace, losing contact with the object, and the difficulty of knowing whether all the objects have been explored or not. Several utilities were designed to try to solve these problems. When the workspace was reduced to the minimum size needed for the objects it contained, the effectiveness of the subjects improved because they had less space to explore. An automatic guidance system (the force to find objects), on the other hand, let the user's finger be moved by the system toward any object that remained unexplored; this was available for use whenever he/she wanted. Another utility applied a constraining force that helped prevent the user's finger from losing contact with the object. These utilities can still be improved, although we think we have made good progress. The audio utilities were designed to provide support for locating the objects (information about the distribution of the objects with respect to the user's finger). The users thought they could be very useful, but that they needed to be improved. In order to be helpful, this utility should give information only about the objects that have not yet been explored, clarify the concepts of "front" and "back," and also give information on whether the objects are "up" or "down" in the workspace. On the other hand, the zoom utility was considered successful by the users, since they were able to locate and identify small objects once they were zoomed in on. Other properties of the object, such as identification by shape (boundary) and size, were also tested, and the results and the assessments of the users were quite favorable (Kirkpatrick & Douglas, 2001). However, the ease of finding and identifying an object depends mainly on its size, position, and complexity. Size perception was tested with a comparison task (two figures that are exactly the same shape but have different sizes). The majority of the users were able to determine the difference in size, although not all the participants could correctly quantify the size difference. This may be because the objects were located on different planes in the workspace, so they were not able to compare them by superimposition. The strategy they followed was to put one finger on the top of the object and the other at the bottom, and then do the same with the other object. Perhaps in a later stage we could test this again, allowing the user to move one object and place it close to the other in order to facilitate the task.
Participants were generally able to accurately gauge comparative distances between objects, and they were also able to describe the position and orientation of objects in relation to each other. Perhaps the most successfully completed tasks were the ones in the group "attributes referring to the quality of the material," where the users easily detected properties such as stiffness, weight, texture, and stickiness (West & Cutkosky, 1997). Texture and stickiness produced different proportions of correct answers across the participating organizations. The results from RNIB and NCBI showed that stickiness was effective, but not texture; at ONCE, on the contrary, texture was felt to be very effective and users were able to point out very accurately the places where the texture changed, but "stickiness" was not successful. In the "stickiness" task the users did not understand the sensation until the tester prompted them to tap the surface. From that moment on, the users were able to determine that it was stickiness, although they pointed out that it was not very well simulated. They would have preferred to perceive the stickiness by running their fingers across the surface. There was a mixed reaction to the virtual buttons. The users could undertake the task of pushing buttons, but most of the time the buttons were not perceived as buttons. Nevertheless, the users considered that the buttons could have good potential for use, but with some improvements: redesign of the geometrical shapes (circular buttons are better than rectangular ones, and short buttons are better than tall buttons), better audio feedback (applying the audio signal when the button is pressed and not when it is released), and more tactile feedback (more initial resistance; Stone, 2000). Following a trajectory within the device was easy, and being constrained to the path helped in staying on it and determining its shape. Important points that should be highlighted are the tasks that the users liked most. These are the tasks in which there is an interaction between the system and the user (collision, weights). A very high percentage of users thought that it was very interesting and fun to be able to "catch" an object. They also liked engaging in actions involving moving objects, which made it possible for them to collide with other objects. Since the device involves the use of two fingers, an important part of the experimental situation was to assess the differences between using one finger and using two fingers. The majority of the users said that it was easier to explore with two fingers than with only one. Observation and user responses indicated that a second finger—on a separate hand—can be vital as an "anchor" or reference point that allows the users to orient themselves in space. It allows them to more readily understand the objects' relationships with one another, and makes re-finding objects easier.
They also preferred to use their two index fingers rather than an index finger and a thumb—in fact, using a thumb and an index finger was considered "very difficult" in the usability questionnaire (calibration problems could be an important factor contributing to this result). Finally, it is worth pointing out that most of the users who had previously tested other haptic systems commented that they considered the GRAB system much more effective. Its advantages included the ability to use two fingers, the larger workspace, greater robustness, and so forth.
USABILITY QUESTIONNAIRE
Most of the visually impaired participants felt that the new GRAB system could be of use to them. Nevertheless, the participants who did not feel that the device currently has any practical applications felt that it was a technology worth exploring further, and that it might be useful in terms of research issues and knowledge building. During the evaluation of the system, the users suggested several potential applications for the GRAB system in different fields:
• City maps, tube maps, reliefs, a good city maps library, a library of buildings, trajectories, paths ….
• For a school … virtual access to information that is not always available.
• To know details that are difficult to describe.
• Mathematical functions.
• Necessary for developing perception (3-D, perspective, spatiality, …), aesthetics of perception, and maps ….
• Leisure: games, virtual museums ….
• For orientation and mobility, laterality work (comparing sizes: a horse with a sheep …).
• Models, 3-D figures.
• For professionals: to undertake behavioral studies, balance studies with children (4–5 years old), mobility, maturational issues … clinical and psychological levels. It could be subsidized at the professional level; for pleasure it would be expensive (luxury must be paid for). For an adventitiously blind person, to study psychological affect ….
Some users insisted that it would be important to have a practical application in order to determine the usefulness of the device. At the time of this writing, the project is continuing into its third and last year. Therefore, this contribution must be interpreted as a progress report on research into a promising new technology, which a priori seems to produce positive data.
This new technology will not only make progress in the field of technical aids for visual impairment, but will also provide the user with new virtual tools to "explore and access" specific 3-D graphic information. The system has the capacity to register and save the movements that users make with their fingers. This permits the assessment of specific components and procedures of blind users' "exploratory behaviors." This contribution adds a new technological approach to analysis in the field of "haptic perception." However, the information and preliminary conclusions presented here must be considered tentative, since this first stage of the validation study lacked experimental rigor. At this moment, the GRAB Consortium is working on the second prototype of the system. Specifically, two new applications are being studied and developed, taking into account the results from the first validation: an application to represent graphs and an application to represent virtual city maps.
ACKNOWLEDGMENTS
The following members of the GRAB consortium have collaborated on this chapter: Ms. Teresa Gutiérrez, LABEIN, Bilbao, Spain (coordinator); Ms. Bláithin Gallagher, NCBI, Dublin, Ireland; Mr. Carlo Alberto Avizzano, PERCRO, Pisa, Italy; Ms. Fiona Slevin, HAPTICA, Dublin, Ireland; and Mr. Keith Gladstone, RNIB, London, UK. This project has been partially funded by the European Commission DG INFSO under the IST program.
REFERENCES
Fernández, J. L., Muñoz, J. A., Gutiérrez, T., & Barbero, J. I. (1999). Virtual reality applications with haptic interaction for blind people. Integration, 29, 5–11.
Jansson, G. (1999). Can a haptic display rendering of virtual three-dimensional objects be useful for people with visual impairments? Journal of Visual Impairment and Blindness, 93, 426–429.
Jansson, G., Petrie, H., Colwell, C., Kornbrot, D., Fanger, J., Konig, H., et al. (1999). Haptic virtual environments for blind people: Exploratory experiments with two devices. International Journal of Virtual Reality, 4, 10–20.
Katz, D. (1989). The world of touch (L. E. Krueger, Trans.). Hillsdale, NJ: Lawrence Erlbaum Associates. (Original work published 1925)
Kirkpatrick, A. E., & Douglas, S. A. (2001). A shape recognition benchmark for evaluating usability of a haptic environment. Lecture Notes in Computer Science, 2058, 151–156.
Klatzky, R. L., Lederman, S. J., & Reed, C. (1987). There's more to touch than meets the eye: The salience of object attributes for haptics with and without vision. Journal of Experimental Psychology: General, 116, 356–369.
Oakley, I., McGee, M. R., Brewster, S., & Gray, P. (2000). Putting the feel in "look and feel." Proceedings of ACM CHI 2000 (pp. 415–422). The Hague: ACM Press.
Ramloll, R., Yu, W., Brewster, S., Riedel, B., Burton, M., & Dimigen, G. (2000). Constructing sonified haptic line graphs for the blind student: First steps. Proceedings of ACM Assets 2000 (pp. 17–25). Arlington, VA: ACM Press.
Sjostrom, C., & Rassmuss, K. (1999). The sense of touch provides new computer interaction techniques for disabled people. Technology & Disability, 10, 45–52.
Stone, R. (2000, August). Haptic feedback: A potted history, from telepresence to virtual reality. Paper presented at the First International Workshop on Haptic Human-Computer Interaction, Glasgow, UK.
West, A. M., & Cutkosky, M. R. (1997). Detection of real and virtual fine surface features with a haptic interface and stylus. Proceedings of the ASME Dynamic Systems and Control Division, 61, 159–166.
NEUROSCIENCE
Do Visual and Tactile Object Representations Share the Same Neural Substrate?
Thomas W. James and Karin Harman James
Vanderbilt University
G. Keith Humphrey and Melvyn A. Goodale
The University of Western Ontario
Objects can be recognized using any of our sensory modalities. For instance, a bumblebee can be recognized by seeing its characteristic yellow and black colors, by hearing its distinctive buzzing sound, by feeling the fuzzy surface of its body as it walks across our hand, by experiencing the pain as it stings our finger, or by any combination of these cues. But, it is only by using vision and touch that the complex three-dimensional (3-D) geometric properties of particular objects can be recognized. Of these two senses, vision is the one we use most often to identify objects—although the tactile system (or haptics) is also useful, particularly in situations where the objects cannot be seen. Haptics can also provide information about the weight, compliance, and temperature of an object—as well as information about its surface features, such
as how sticky or slippery it is—information that is not readily available by merely looking at the object. But, by the same token, vision can provide information about an object’s color and surface patterns—features that cannot be detected by haptics. Moreover, even though both haptics and vision provide information about an object’s volumetric shape, there are clear differences in the way in which that information is garnered by the two systems. The haptic system can operate only on objects that are located within personal space, that is, on objects that are within arm’s reach. The visual system, however, can analyze not only objects that reside within personal space but also those that are at some distance from the observer. Of course, when objects are at a distance, only the surfaces and parts of an object that face the observer can be processed visually (although it is possible, in some cases, for the observer to walk around the object and take in information from multiple viewpoints). When objects are within reach, however, they can be manipulated, thus revealing the structure and features of the previously unseen surfaces and parts to both the visual and the haptic system. The receptor surfaces of both systems have regions of low and high acuity. For vision, the high-acuity region of the retina is the fovea; for haptics, the high-acuity regions are the fingers, lips, and tongue. Although both systems are able to bring these high-acuity surfaces to bear on an object, vision has a decided speed advantage. After all, a saccadic eye movement can be planned and executed in under 200 ms, whereas moving the fingers to a new location of an object takes much longer. But even though the visual system is much more efficient in this regard, both systems perform their high-acuity analysis of an object in a serial fashion. The visual system, however, is capable of carrying out a coarse-grain analysis using the peripheral retina simultaneous with the fine-grained analysis carried out with the fovea. In contrast, except for extremely small objects, it is difficult for the haptic system to carry out a coarse-grained analysis using the palms (or even enclosure by the arms) simultaneous with a fine-grained analysis with the fingers. Despite these differences between the two systems, the fact remains that vision and haptics are the only two sensory systems that are capable of processing the geometrical structure of objects. It is perhaps not surprising, therefore, that higher order processing of objects by the two systems appears to deal with their respective inputs in much the same way. For example, in many situations, particularly those in which differential information about surface features such as color and visual texture are not available, visual recognition of objects is viewpoint dependent. In other words, if an object is explored visually from a particular viewing angle, recognition will be better for that view of the object than for other views (Harman & Humphrey, 1999; Humphrey & Khan, 1992; Tarr,
1995). The concept of “viewing angle” in haptic exploration of objects is not as well-defined as it is in vision—in part because objects, particularly ones that can be manipulated, are rarely explored from one “viewpoint.” Nevertheless, work by Newell, Ernst, Tjan, and Bulthoff (2001) has shown that haptic recognition of 3-D objects that are fixed to a surface is much better when the “views” of the objects during the test phase of the experiment are the same as they were during the study phase. Although this finding may be somewhat artificial, it does suggest that information about an object’s structure, which could be considered a higher order representation of that object, is encoded and stored in a similar way by the visual and haptic systems. Also in support of this idea are data suggesting that perspective is important for the successful perception of haptically apprehended tangible 2-D drawings of 3-D objects (Heller et al., 2002). Accuracy for matching a tangible drawing to its 3-D counterpart haptically was better when the drawing was depicted with perspective, despite the fact that distortions of perspective, such as foreshortening, are associated with visual processing, not haptic processing. Indeed, because the characteristics of the visual and haptic object representations are so similar, there is some speculation that the two modalities actually share the same underlying representation. For example, several studies (Easton, Greene, & Srinivas, 1997; Easton, Srinivas, & Greene, 1997; Reales & Ballesteros, 1999) have used cross-modal priming between vision and haptics to show that exposure to real objects in one modality affected later naming of the objects when they were presented using the other modality. The term priming in this context refers to the facilitative effect that prior exposure to a stimulus has on responses to that stimulus during a subsequent encounter, a facilitative effect of which people are usually quite unaware. In a cross-modal priming experiment, then, subjects would first be exposed to objects in one modality and then would be required to identify or discriminate between the same objects presented in the other modality. Interestingly, in at least three experiments (Easton, Greene, & Srinivas, 1997; Easton, Srinivas, & Greene, 1997; Reales & Ballesteros, 1999), cross-modal priming and within-modality priming resulted in similar effect sizes, suggesting that a “visual” representation of an object can be activated as much by a haptic presentation of the object as by a visual presentation of the object (and vice versa). One possible explanation of this finding is that there is, in fact, a single representation of the object that can be equally activated by both modalities. A second possibility is that there are two representations, one visual and one haptic, but each representation is able to co-activate the other. For this latter explanation to work, however, an assumption must be made that the co-activation is efficient enough to produce complete transfer of the relevant information delivered by the two modalities. In
fact, if the transfer were that complete and transparent, then in many ways the second explanation reduces to the first—and the only difference is how distributed are the two representations. A third possibility, of course, is that the cross-modal priming and the within-modality priming are both mediated by verbal or semantic processing of the object. In other words, the two modality-specific representations are re-activated by feedback from verbal processing systems. The fact, however, that babies as young as 2 months of age, as well as chimpanzees (Streri, 1993), show evidence of transfer in cross-modal (visual-to-haptic) matching tasks, suggests that interactions between the systems are not mediated by only verbal representations. As was mentioned earlier, there is evidence that if only one view of an object is studied, then during later testing the object will be recognized more quickly if that view rather than another is presented—and this is true in both the visual as well as the haptic domain. What is interesting is that this viewpoint-specificity is also true for cross-modal presentations. In other words, an object studied haptically from one particular “viewpoint” will be better recognized in a visual presentation if the same rather than a different view of the object is presented (Newell et al., 2001). Like the cross-modal priming results described earlier, this finding also suggests that vision and haptics share a common object representation. Moreover, the viewpoint-specificity of the cross-modal transfer lends support to the argument that this shared representation encodes the 3-D structure of the object rather than a more abstract conceptual or verbal description of the object. In short, there is reasonably good behavioral evidence to suggest that vision and haptics encode the structure of objects in the same way—and use a common underlying representation. This conclusion finds additional support in a number of neuroimaging studies that have demonstrated overlap between visual and haptic processing within the human brain. This overlap appears to occur in regions of the brain that are usually considered visual, such as extrastriate areas in the occipital cortex. Several investigators (Amedi, Jacobson, Hendler, Malach, & Zohary, 2002; Amedi, Malach, Hendler, Peled, & Zohary, 2001; Deibert, Kraut, Kremen, & Hart, 1999) have found that haptic object identification tasks show activation in visual areas when measured using functional magnetic resonance imaging (fMRI). In other words, compared to a control task, identifying objects haptically produced greater activation in the extrastriate cortex (in addition to other regions). The involvement of visual areas in haptic processing has also been demonstrated using transcranial magnetic simulation (TMS), a technique in which a brief magnetic pulse is applied to the brain to disrupt the processing occurring in a localized region of the cortex. This is sometimes referred to as a “transient lesion,” because processing is
suspended temporarily in the target region. TMS was applied to different regions of the cortex while subjects were asked to identify the orientation of a grating that was placed on their finger (Zangaladze, Epstein, Grafton, & Sathian, 1999). When TMS was applied to the occipital cortex contralateral to the hand being used, subjects were not able to perform the task, but when TMS was applied to the ipsilateral occipital cortex, they performed normally. The fact that the application of TMS to the occipital cortex disrupts tactile discriminations (coupled with the fact that visual areas within this region show activation to haptic identification of objects) could be construed as evidence that extrastriate cortex is not devoted entirely to the processing of visual information—but is also involved in haptic processing. Indeed, one might even speculate that the extrastriate cortex is the neural substrate of the shared bimodal object representation suggested by the behavioral studies. Another, perhaps more straightforward explanation, of course, is that the activation in the extrastriate cortex is simply a reflection of visual imagery. In other words, when one uses touch to explore an object, a mental image of the object is constructed and this process of constructing a mental image recruits the extrastriate cortex. There is no doubt that mental images of objects are constructed when they are haptically explored for the purposes of recognition; there is also no doubt that these mental images are predominantly visual. But, the question is not whether or not visual imagery occurs during haptic exploration, but whether or not such imagery drives the activation in the extrastriate cortex. It has certainly been argued that the reason that TMS applied to the occipital cortex interferes with haptic recognition is that it disrupts visual imagery (Zangaladze et al., 1999). Nevertheless, it is not clear that haptic recognition depends on visual imagery, nor is it clear that the extrastriate areas activated during haptic exploration tasks are the same areas that are activated during visual imagery. In an attempt to address these questions, Amedi et al. (2001) compared the activation produced in the extrastriate cortex when subjects were presented visually or haptically with objects, or when objects were only imagined. They found that an object-selective area of the extrastriate cortex, the lateral occipital complex (LOC), responded preferentially when objects were explored visually or haptically, but did not respond when objects were only imagined. In a follow-up study (Amedi et al., 2002), objects were again presented visually and haptically, but in addition auditory sounds were presented that were diagnostic for particular objects. The LOC did not show differential activation (compared to baseline levels) when objects were identified by their sounds. As before, however, the LOC responded preferentially when objects were identified using either vision or touch. This study makes three important points. First, it confirms
the idea that a common area within the extrastriate cortex (LOC) can be driven both by visual and by haptic information about an object’s structure. Second, it shows that the LOC is probably bimodal not multimodal in nature, because auditory cues associated with a particular object did not produce activation there. And finally, it shows that the mental image of an object evoked by associated auditory cues was also insufficient to activate the LOC. Taken together, these findings suggest that the mental (visual) image of an object that might be evoked during haptic exploration is not responsible for the activation observed in LOC—unless one assumes that visual images invoked by tactile cues are different from the visual images invoked by auditory cues, or the visual images invoked by deliberate imagination. For instance, the visual imagery induced by an auditory cue may not be as detailed or specific as that induced by tactile exploration—and may be more difficult to sustain. Nevertheless, even indistinct visual imagery would be expected to produce activation in the LOC. Furthermore, if one postulates that a special kind of visual image is invoked by haptic cues, then this is tantamount to suggesting that haptics and vision enjoy a special relationship (perhaps a bimodal representation) that is independent of any overarching visual image that might be generated by other means. The behavioral and neuroimaging evidence we have described so far suggests that haptics and vision share a common bimodal representation of objects. To explore this hypothesis further, in a recent study (T. W. James et al., 2002), we combined the cross-modal priming method used in previous behavioral studies with high-field fMRI. As we have seen, priming paradigms are a good tool for investigating the nature of object representations (Reales & Ballesteros, 1999), because they involve the use of an implicit task, in which earlier exposure to an object can affect (or not affect) current processing of the same object. Any observed effect of the priming manipulation must be attributed to residual activation of the object representation or to some form of permanent change to that representation. Because we wanted to look directly at cross-modal priming of the geometric structure of objects, we used a set of 3-D novel objects that were made out of clay and spray-painted white (Fig. 7.1). By using objects that were both novel and meaningless, we hoped to limit the use of semantic or verbal encoding. Importantly, we also used a passive viewing paradigm, in which subjects were simply required to look at the objects and to do nothing else. They did not have to identify, name, or explicitly recall the objects in any way. It was expected that this “task” would ensure the implicit activation of the object representation on subsequent presentations with as little “explicit contamination” as possible. We hypothesized that any common region for haptic and visual object processing that we identified would show an equivalent priming effect whether the
FIG. 7.1. Examples of novel three-dimensional clay objects.
objects were first studied visually or haptically. This hypothesis is derived directly from the notion that equivalency of brain activation with priming implies that there was no extra processing step that differentiates the study of objects in one condition from the study of objects in the other. That is, if equivalent priming effects were found in the common extrastriate area identified in other studies (Amedi et al., 2002; Amedi et al., 2001; Deibert et al., 1999), then whatever effects earlier visual or haptic study had on processing in this region must also have been equivalent. If such results were indeed obtained, it would be difficult to argue that haptic representations were stored elsewhere (such as in the parietal somatosensory cortex) and had an indirect influence on activation in this occipital region. The extra processing step required for an indirect influence should lead to differences between haptic and visual priming effects. In contrast, if there were observed differences between visual and haptic priming, then this would imply that there were distinct visual and haptic representations. Before scanning, each participant in our study visually explored a set of 16 objects and haptically explored a separate set of 16 objects. During scanning, participants were presented with visual images of these studied objects on a projection screen together with an additional set of 16 nonstudied objects. Priming effects could therefore be assessed by comparing the pattern of activation that was obtained with the studied objects with the pattern of activation that was obtained with the nonstudied objects. Figure 7.2 illustrates a brain region in the lateral ventral occipital cortex that showed significant haptic-to-visual priming, significant visual-to-visual priming, and showed significant overlap be-
FIG. 7.2. Bimodal lateral occipital cortex activation. The brain image is a rendered representation of the grey-matter surface of the right hemisphere. The white region indicates the location of the LOC. The LOC is equally activated, bilaterally, by visual and haptic exploration of objects and shows equivalent priming effects whether prior exposure was visual or haptic.
tween visual and haptic exploration (T. W. James et al., 2002). This region corresponds to the lateral occipital complex (LOC), a region that has been implicated in the selective processing of visual objects (Kanwisher, Chun, McDermott, & Ledden, 1996; Malach et al., 1995) and often shows evidence of visual priming in imaging studies (for review, see Cabeza & Nyberg, 2000; Schacter & Buckner, 1998; Wiggs & Martin, 1998). Thus, it was not surprising that the LOC was activated by visual exploration of objects or showed significant visual-to-visual priming effects. More recently, the function of the LOC has been reinterpreted as bimodal (Amedi et al., 2002; Amedi et al., 2001). Thus, it was not too surprising that the LOC showed significant haptic-to-visual priming as well. The interesting point to be made, however, is that the effect of haptic priming in the LOC was equivalent to that of visual priming. This can be seen in the activation time courses shown in Fig. 7.3. Visually and haptically studied objects each produced more activation than nonstudied objects, but importantly the time courses for the activation produced with visually and haptically studied objects overlapped almost completely. The increase in activation with studied objects that we observed, although inconsistent with other priming results using common objects (for review, see Cabeza & Nyberg, 2000; Schacter & Buckner, 1998; Wiggs & Martin, 1998), was consistent with the results from at least two other priming studies that used novel objects (Henson, Shallice, & Dolan, 2000; Schacter et al., 1995). Our priming experiment (T. W. James et al., 2002), together with results of previous studies (Amedi et al., 2002; Amedi et al., 2001), provides converging evidence that visual imagery does not mediate the haptically produced activation in the LOC. In previous studies, no visual stimulus was present during haptic exploration conditions, and this lack of a visual stimulus should promote the use of visual imagery. Recall that during scanning in our study, participants were always viewing a visual stimulus. What varied from trial to trial was
whether or not the object on the screen had been previously explored haptically or visually. Thus, the use of visual imagery during scanning was equally likely (or rather equally unlikely, since there was a real visual image present) during all experimental conditions. In short, visual imagery during scanning could not have been responsible for the increased activation with haptic priming. The use of the priming paradigm with separate study and test phases raises another question: Could visual imagery during haptic study of the objects have produced permanent changes in occipital cortex that were responsible for the observed differences in activation seen during the test phase? In other words, could the haptic priming effect have been caused by activation of visual cortex through visual imagery instead of through somatosensory input? A widely accepted theory of mental imagery suggests that visual imagery is the endogenous activation of neural mechanisms normally involved in visual perception (Farah, 2000; Kosslyn & Shin, 1994). In other words, invoking some sort of high-level semantic or abstract representation of an object feeds back onto early areas in visual cortex and activates perceptual representations by “normal” feedforward processing. As a consequence, there is perception of visual images without vi-
FIG. 7.3. Time course of activation from the LOC. Time courses are averaged hemodynamic responses from 32 visually primed, 32 haptically primed and 32 nonprimed (squares) trials per participant (N = 8). Vertical axis indicates percent signal change from a rest baseline.
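To make the quantities in Fig. 7.3 concrete, the short sketch below (in Python) illustrates how percent signal change is conventionally computed relative to a rest baseline, and how a priming effect can then be summarized as the difference in mean activation between studied and nonstudied trials. All of the values, the simulated trial data, and the simple peak-based summary are invented for illustration only; they are not data from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Rest-baseline signal and simulated per-trial peak signals for one region of
# interest (e.g., the LOC). All numbers are invented for illustration; they
# are not values from the study.
rest_baseline = 100.0

def percent_signal_change(raw):
    """Express a raw signal value as percent change from the rest baseline."""
    return 100.0 * (raw - rest_baseline) / rest_baseline

# 32 trials per condition, matching the trial counts given in the figure.
visually_primed = percent_signal_change(rng.normal(100.9, 0.2, 32))
haptically_primed = percent_signal_change(rng.normal(100.9, 0.2, 32))
nonprimed = percent_signal_change(rng.normal(100.7, 0.2, 32))

# Priming effect: mean activation for studied minus nonstudied objects.
visual_priming = visually_primed.mean() - nonprimed.mean()
haptic_priming = haptically_primed.mean() - nonprimed.mean()

print(f"visual-to-visual priming:  {visual_priming:.2f} (% signal change)")
print(f"haptic-to-visual priming:  {haptic_priming:.2f} (% signal change)")
# Near-identical values for the two effects would correspond to the
# overlapping primed time courses described in the text.
```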
sual input. It is possible that the visual imagery elicited by haptic exploration of objects during the study phase of the experiment could unfold in the same way, that is, by activating an abstract representation, which in turn activates a perceptual representation. But it is also possible that haptic exploration could directly activate the perceptual object representation, without activating an abstract representation. As we saw earlier, young infants and chimpanzees (Streri, 1993), who presumably have a limited capacity for abstract or symbolic representation, show efficient transfer of training between haptics and vision, suggesting that abstract representations are not necessary for cross-modal transfer. In addition, our study was designed to limit abstract encoding of the objects (by using meaningless novel objects). Furthermore, patient DF, who is described in further detail a bit further on, has preserved visual imagery, despite severe damage to the “normal” feedforward visual processing regions, suggesting that activation of these regions, and thus activation of geometric object representations, is not necessary for visual imagery. Finally, the fact that we found equivalent effects of visual and haptic priming on activation in visual areas such as the LOC suggests that no extra computational step, such as utilizing an abstract representation, was implemented. These findings, combined with the results of experiments using auditory-cued mental imagery (Amedi et al., 2002; Amedi et al., 2001), provide strong converging evidence that occipital cortex activity during haptic exploration of objects is not produced because of an endogenous cue to visually imagine the object, but instead is produced by direct haptic input to bimodal object representations in the LOC. Activation of the LOC may in turn produce activity in other occipital regions that are involved in the production of visual images, but these activations would likely be much more unspecified than those produced by direct haptic input, causing a much smaller priming effect. This is in fact what happens with cross-modal auditory-to-visual priming: priming effects are smaller across modalities than within modalities (e.g., see Greene, Easton, & LaShell, 2001). This is presumably because interactions between vision and audition can only occur if the incoming information is first transformed into a sufficiently abstract representation—a requirement made necessary because vision and audition do not share a common representation at a lower level of processing such as geometric structure. Although there was no behavioral data collected in our experiment, the fact that levels of activation were the same for both kinds of priming is consistent with the results of earlier behavioral experiments (Easton, Greene, & Srinivas, 1997; Easton, Srinivas, & Greene, 1997; Reales & Ballesteros, 1999). In these studies, cross-modal priming effects between haptics and vision were of the same magnitude as the within-modal priming effects observed with either vi-
sion or haptics, even with novel objects. In both neural activation and behavior, then, cross-modal priming is no less “efficient” in its effect than within-modal priming. Taken together, these findings suggest that no extra computational step is required to prime visual processing of object shape using a representation based on previous haptic input than is required to prime visual processing of object shape using a representation based on previous visual input. Indeed, we would argue that cross-modal priming makes use of a common haptic and visual representation. One candidate region for the neural substrate of this common representation is the LOC, which not only showed equivalent within-modality and across-modality priming, but was also equally activated by haptic and visual exploration of objects in our study and in other studies (Amedi et al., 2002; Amedi et al., 2001). The common representation, we would argue, is not semantic or verbal in nature. In our priming study, we used novel objects instead of common objects to minimize the chances of semantic or verbal mediation of any priming effects that were observed. The fact that priming effects were found with these novel objects that are difficult to label verbally suggests that cross-modal priming can occur “below” the level of semantic or verbal representations of objects. Thus, one might speculate that the common visual and haptic representation of objects occurs first at the level of shape processing, and not at a more abstract or associative level, such as semantic or lexical processing. Evidence from neuropsychological studies of patients with visual agnosia also supports the idea that haptic and visual signals may converge at the level of geometric representations of objects. In a recent report, a patient with prosopagnosia, who could not recognize faces visually, was also found to have difficulty learning to recognize faces using the sense of touch (Kilgour, de Gelder, & Lederman, 2004). Further evidence for haptic and visual convergence comes from investigations in our own lab of a patient (DF) with visual form agnosia (for original report, see Milner et al., 1991). DF is able to recognize objects using information from surface properties like color and texture, but is unable to recognize objects based on contour or form information (Humphrey, Goodale, Jakobson, & Servos, 1994). In short, she is unable to generate geometric structural representations of objects (Milner & Goodale, 1995). Neuroimaging shows that DF has bilateral lesions in area LOC (T. W. James, Culham, Humphrey, Milner, & Goodale, 2003), in the same region of the occipital cortex that we have shown to underlie bimodal geometric structural representations of objects (T. W. James et al., 2002). This would suggest that DF should not only have difficulty recognizing the shape of an object from vision, but should also have difficulty recognizing the shape of an object from her sense of touch. Preliminary findings from our laboratory indicate that this is the case.
When given a tactile recognition memory test using objects like the ones shown in Fig. 7.1, DF was able to recognize only 7 of 12 (58%). This performance is just above chance level, and is significantly worse than that of an age-matched control. But more importantly, when DF was tested with similar objects in a visual recognition test, she actually performed slightly better, recognizing 8 of the 12 objects (67%). Given DF’s pronounced deficit in recognizing objects visually, one might have expected her to do better with tactile information. We explored DF’s haptic object recognition skills further, using a paired-associates task in which letter names were paired with a new set of novel objects that were explored haptically. As can be seen in Fig. 7.4 (right axis), a healthy control participant was able to learn the letter names A through L for 12 different objects within three blocks of trials. DF was unable to perform this task, managing only one correct response out of twelve after four blocks of trials.
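Whether the recognition scores reported above really exceed chance can be checked with a simple binomial calculation. The sketch below assumes a guessing rate of 0.5, as in an old/new recognition judgement; that assumed test format, and the calculation itself, are offered only as an illustration and are not details reported here.

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Probability of k or more correct responses out of n by guessing alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# DF's recognition memory scores: 7/12 (tactile) and 8/12 (visual).
print(f"P(>= 7 of 12 by guessing) = {p_at_least(7, 12):.2f}")  # about 0.39
print(f"P(>= 8 of 12 by guessing) = {p_at_least(8, 12):.2f}")  # about 0.19
```

Under a different assumed format (for example, forced choice among several alternatives) the chance rate, and therefore the calculation, would differ.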
FIG. 7.4. Paired associates learning for DF and one healthy control participant. Data are shown across blocks of either 6 (DF) or 12 (control) trials. The control participant performed only the haptic paired associates (circles), whereas DF performed the haptic (squares) and the visuohaptic (diamonds) paired associates.
We then reduced the number of objects and had DF learn the letter names A through F paired with six haptically explored objects. As Fig. 7.4 (left axis) shows, DF’s average performance on this easier task was not only much poorer than the control participant, but she actually did worse over time. This is particularly surprising because feedback was given after every trial. In a final task, DF performed the same paired-associates task with six objects but this time used both vision and touch together. We assumed that exploring the objects using multiple sensory inputs should maximize her ability to identify the objects. DF’s performance on this task (Fig. 7. 4; diamonds), although again much worse than the control participant’s haptic-only performance, did show some improvement over time. In fact, with even more training on the combined vision and haptic task (not shown in Fig. 7.4), she reached an asymptote of five out of six correct. The fact that DF was able to perform the paired associates task under bimodal sensory conditions suggests that her deficit was not entirely a memory problem, but was a problem in using haptics to learn about the geometrical structure of new objects. Although the better performance in the bimodal learning condition suggests that the two systems can bootstrap one another despite the damage to the LOC, the performance in this condition was still well below normal. DF is also poor on sequential matching tasks using these same objects. In this task, she was allowed to explore a sample object for 3 sec and was then immediately given a test object and was asked if it was the same or different. Whether she performed this task haptically (with her eyes closed) or visually, she was equally poor (scoring 67% and 72% correct, respectively). Healthy controls find this task exceedingly easy. Again this suggests that her LOC lesion has interfered with her ability to learn the geometric structure of objects both visually and haptically. DF’s poor haptic performance at encoding the structure of new objects contrasts with her excellent haptic recognition of familiar objects. Like many individuals with visual form agnosia, DF is able to recognize objects, such as kitchen utensils and tools, as soon as they are placed in her hands—even though she is unable to identify them by sight alone. But the fact that she does so poorly in learning to recognize new objects using haptics suggests that area LOC, which is damaged bilaterally in her brain, may play an important role in enabling the haptic system to acquire information about the geometrical structure of new objects. This may be particularly true when the set of objects to be discriminated share many parts in common, as was the case for the novel objects in this particular study. In the case of haptic recognition of familiar objects, haptic information about object structure may be able to bypass LOC and make contact with higher order object representations. Visual information about object structure,
however, must be processed by the LOC, which is why DF has great difficulty recognizing the form of objects, even when they are familiar. Taken together, the results from DF (and the prosopagnosia patient discussed earlier) suggest that lesions of visual areas in the occipitotemporal cortex that disrupt the visual recognition of object form can also interfere with haptic recognition of objects. The deficit appears to be most apparent when encoding the structure of objects that have not been encountered before. It is important to note that although we have shown here that vision and haptics are intimately interrelated when it comes to representing the geometric structure of objects, there can also be no doubt that haptics and vision are integrated even more seamlessly when providing feedback for the successful execution of visuomotor commands. For instance, during movements of the arm and hand, a proprioceptive representation of the hand’s position in space is automatically and effortlessly referenced to the visual calculation of the hand’s position. Whether these calculations are carried out in isolation, or whether they share computational and neural overlap are questions that are beginning to be addressed. For instance, activation in regions of the parietal and occipital cortex are known to be influenced by the position of the eye (DeSouza et al., 2000; DeSouza, Dukelow, & Vilis, 2002). Haptics and vision also appear to be integrated during the processing of motion (Hagen et al., 2002) and it is likely that this is due to a direct somatosensory input into the middle temporal motion complex (Blake, Sobel, & James, 2004), an area specialized for the processing of object motion and optic flow. In addition, there is a growing body of evidence suggesting that vision, haptics, and also audition can all be influenced by each other during the allocation of attention to specific regions of space (Butter, Buchtel, & Santucci, 1989; Macaluso, Frith, & Driver, 2000, 2002; Maravita, Spence, Kennett, & Driver, 2002). In most studies of haptic or visual object recognition, the objects are fixed and are studied with a single sensory modality; this is not the way that we normally interact with objects when we are trying to recognize or encode them. Interactions between vision and proprioception, between visual and tactile motion perception, and between visual and tactile allocation of attention, would all be involved in the active exploration of an object that is held and manipulated in our hands. In fact, for optimum representation of the geometric structure of an object it may be necessary to exploit all of these visuohaptic and visuomotor interactions (Harman, Humphrey, & Goodale, 1999; K. H. James et al., 2002). More regions in the brain may be multisensory than was previously thought and consequently, demonstrating that area LOC is bimodal may be only the first step toward realizing the bimodal nature of much of what up to now has been regarded as exclusively “visual” cortex.
ACKNOWLEDGMENTS

The research discussed in this chapter was supported by grants from the Natural Sciences and Engineering Research Council of Canada, the Canada Research Chairs Program, and the Canadian Institutes of Health Research. Thanks to Isabel Gauthier for all the helpful discussions of mental imagery.
REFERENCES Amedi, A., Jacobson, G., Hendler, T., Malach, R., & Zohary, E. (2002). Convergence of visual and tactile shape processing in the human lateral occipital complex. Cerebral Cortex, 12, 1202–1212. Amedi, A., Malach, R., Hendler, T., Peled, S., & Zohary, E. (2001). Visuo-haptic object-related activation in the ventral visual pathway. Nature Neuroscience, 4, 324–330. Blake, R., Sobel, K. S., & James, T. W. (2004). Neural synergy between kinetic vision and touch. Psychological Science, 15, 397–402. Butter, C. M., Buchtel, H. A., & Santucci, R. (1989). Spatial attentional shifts: Further evidence for the role of polysensory mechanisms using visual and tactile stimuli. Neuropsychologia, 27(10), 1231–1240. Cabeza, R., & Nyberg, L. (2000). Imaging cognition II: An empirical review of 275 PET and fMRI studies. Journal of Cognitive Neuroscience, 12(1), 1–47. Deibert, E., Kraut, M., Kremen, S., & Hart, J. J. (1999). Neural pathways in tactile object recognition. Neurology, 52, 1413–1417. DeSouza, J. F., Dukelow, S. P., Gati, J. S., Menon, R. S., Andersen, R. A., & Vilis, T. (2000). Eye position signal modulates a human parietal pointing region during memory-guided movements. The Journal of Neuroscience, 20(15), 5835–5840. DeSouza, J. F., Dukelow, S. P., & Vilis, T. (2002). Eye position signals modulate early dorsal and ventral visual areas. Cerebral Cortex, 12(9), 991–997. Easton, R. D., Greene, A. J., & Srinivas, K. (1997). Transfer between vision and haptics: memory for 2-D patterns and 3-D objects. Psychonomic Bulletin and Review, 4, 403–410. Easton, R. D., Srinivas, K., & Greene, A. J. (1997). Do vision and haptics share common representations? Implicit and explicit memory within and between modalities. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23, 153–163. Farah, M. J. (2000). The cognitive neuroscience of vision. Malden, MA: Blackwell. Greene, A. J., Easton, R. D., & LaShell, L. S. R. (2001). Visual-auditory events: Cross-modal perceptual priming and recognition memory. Consciousness and Cognition, 10(3), 425–435. Hagen, M. C., Franzen, O., McGlone, F., Essick, G., Dancer, C., & Pardo, J. V. (2002). Tactile motion activates the human middle temporal/V5 (MT/V5) complex. European Journal of Neuroscience, 16, 957–964. Harman, K. L., & Humphrey, G. K. (1999). Encoding “regular” and “random” sequences of views of novel three-dimensional objects. Perception, 28, 601–615. Harman, K. L., Humphrey, G. K., & Goodale, M. A. (1999). Active manual control of object views facilitates visual recognition. Current Biology, 9(22), 1315–1318. Heller, M. A., Brackett, D. D., Scroggs, E., Steffen, H., Heatherly, K., & Salik, S. (2002). Tangible pictures: Viewpoint effects and linear perspective in visually impaired people. Perception, 31, 747–769. Henson, R., Shallice, T., & Dolan, R. (2000). Neuroimaging evidence for dissociable forms of repetition priming. Science, 287, 1269–1272.
Humphrey, G. K., Goodale, M. A., Jakobson, L. S., & Servos, P. (1994). The role of surface information in object recognition: Studies of a visual form agnosic and normal subjects. Perception, 23, 1457–1481. Humphrey, G. K., & Khan, S. C. (1992). Recognizing novel views of dimensional objects. Canadian Journal of Psychology, 46, 170–190. James, K. H., Humphrey, G. K., Vilis, T., Corrie, B., Baddour, R., & Goodale, M. A. (2002). “Active” and “passive” learning of three-dimensional object structure within an immersive virtual reality environment. Behavior Research Methods, Instruments, & Computers, 34(3), 383–390. James, T. W., Culham, J. C., Humphrey, G. K., Milner, A. D., & Goodale, M. A. (2003). Ventral occipital lesions impair object recognition but not object-directed grasping: A fMRI study. Brain, 126, 2463–2475. James, T. W., Humphrey, G. K., Gati, J. S., Servos, P., Menon, R. S., & Goodale, M. A. (2002). Haptic study of three-dimensional objects activates extrastriate visual areas. Neuropsychologia, 40, 1706–1714. Kanwisher, N., Chun, M. M., McDermott, J., & Ledden, P. J. (1996). Functional imagining of human visual recognition. Cognitive Brain Research, 5(1-2), 55–67. Kilgour, A. R., de Gelder, B., & Lederman, S. J. (2004). Haptic face recognition and prosopagnosia. Neuropsychologia, 42, 707–712. Kosslyn, S. M., & Shin, L. M. (1994). Visual mental images in the brain: Current issues. In M. J. Farah & G. Ratcliff (Eds.), The neuropsychology of high-level vision (pp. 269–296). Hillsdale, NJ: Lawrence Erlbaum Associates. Macaluso, E., Frith, C. D., & Driver, J. (2000). Modulation of human visual cortex by crossmodal spatial attention. Science, 289(5482), 1206–1208. Macaluso, E., Frith, C. D., & Driver, J. (2002). Crossmodal spatial influences of touch on extrastriate visual area take current gaze direction into account. Neuron, 34, 647–658. Malach, R., Reppas, J. B., Benson, R. R., Kwong, K. K., Jiang, H., Kennedy, W. A., et al. (1995). Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex. Proceedings of the National Academy of Sciences USA, 92(18), 8135–8139. Maravita, A., Spence, C., Kennett, S., & Driver, J. (2002). Tool-use changes multimodal spatial interactions between vision and touch in normal humans. Cognition, 83, B25–B34. Milner, A. D., & Goodale, M. A. (1995). The visual brain in action. Oxford, UK: Oxford University Press. Milner, A. D., Perrett, D. I., Johnston, R. S., Benson, R. S., Jordan, P. J., Heeley, T. R., et al. (1991). Perception and action in visual form agnosia. Brain, 114, 405–428. Newell, F. N., Ernst, M. O., Tjan, B. S., & Bulthoff, H. H. (2001). Viewpoint dependence in visual and haptic object recognition. Psychological Science, 12, 37–42. Reales, J. M., & Ballesteros, S. (1999). Implicit and explicit memory for visual and haptic objects: cross-modal priming depends on structural descriptions. Journal of Experimental Psychology: Learning, Memory, and Cognition, 25, 644–663. Schacter, D. L., & Buckner, R. L. (1998). Priming and the Brain. Neuron, 20, 185–195. Schacter, D. L., Reiman, E., Uecker, A., Polster, M. R., Yun, L. S., & Cooper, L. A. (1995). Brain regions associated with retrieval of structurally coherent visual information. Nature, 376, 587–590. Streri, A. (1993). Seeing, reaching, touching: The relations between vision and touch in infancy. Cambridge, MA: MIT Press. Tarr, M. J. (1995). 
Rotating objects to recognize them: A case study of the role of viewpoint dependency in the recognition of three-dimensional objects. Psychonomic Bulletin and Review, 2(1), 55–82.
Wiggs, C. L., & Martin, A. (1998). Properties and mechanisms of perceptual priming. Current Opinion in Neurobiology, 8, 227–233.
Zangaladze, A., Epstein, C. M., Grafton, S. T., & Sathian, K. (1999). Involvement of visual cortex in tactile discrimination of orientation. Nature, 401, 587–590.
Cerebral Cortical Processing of Tactile Form: Evidence from Functional Neuroimaging

K. Sathian
S. C. Prather
Emory University School of Medicine
In this chapter we review work, from our laboratory and other groups, that has used functional neuroimaging to study the cerebral cortical basis of the human tactile perception of form. This is an area that has been the subject of considerable investigation over the past few years. Although we still do not have a complete understanding of the relevant neural processes and some of the findings remain controversial, a significant body of knowledge has been accumulated, as we hope the reader will agree after perusing this chapter. We begin by reviewing studies addressing form processing within somatosensory cortical regions. Next, we cover work demonstrating that various areas of extrastriate visual cortex are active during tactile form perception in normally sighted humans, and explore the potential role of such activity. This work is relevant to the observations, described in the chapter by Pascual-Leone et al. in this volume, that visual cortical areas are recruited for Braille reading following visual deprivation. Finally, we turn to findings indicating that premotor cortical regions are also recruited during tactile form perception even in the absence of movement, and consider the implications of these findings.
SOMATOSENSORY CORTICAL PROCESSING

Studies of monkeys with lesions in different parts of the primary somatosensory cortex (SI) indicate that, whereas lesions of the anterior part of the postcentral gyrus (Brodmann’s area 3b) nonselectively impair performance on multiple tactile tasks, lesions of the posterior part of the postcentral gyrus, involving Brodmann’s area 2, selectively impair haptic discrimination of three-dimensional (3-D) shape (Carlson, 1981; Randolph & Semmes, 1974). In keeping with this, a number of functional neuroimaging studies in humans have reported activity in the posterior part of the postcentral gyrus, around the postcentral sulcus (Fig. 8.1, PCS), during haptic discrimination of 3-D shape (Bodegård et al., 2000; Bodegård, Geyer, Grefkes, Zilles, & Roland, 2001; Boecker et al., 1995; Servos, Lederman, Wilson, & Gati, 2001). Of these studies, the first two, from Roland’s laboratory, used positron emission tomography (PET), while the other two used functional magnetic resonance imaging
FIG. 8.1. Horizontal slices through an anatomic magnetic resonance image derived by averaging across a number of normal subjects’ images, transformed into Talairach space (Talairach & Tournoux, 1988), with superimposed indications of the approximate location and extent of activations from various studies. Radiologic convention is used, with the left hemisphere appearing on the right. Talairach z coordinates are displayed below each slice. See text for abbreviations and details of activations. Sites labeled SMG, IPS, SMA, PMC and PCS are re-drawn from Bodegård et al. (2001) (copyright © 2001 by Cell Press, with permission from Elsevier). The remaining sites are re-drawn from PET/fMRI studies in the authors’ laboratory (Prather et al., 2004; S. C. Prather, J. R. Votaw, & K. Sathian, unpublished observations; Sathian et al., 2002; Stoesz et al., 2003).
(fMRI). The right hand was employed in all of these studies, two of which noted bilateral activation around the PCS (Bodegård et al., 2001; Boecker et al., 1995); the other two studies reported activation only in the left hemisphere (Bodegård et al., 2000; Servos et al., 2001). Comparison of anatomical landmarks with cytoarchitectonic borders, which notably has been achieved in very few instances in humans, indicates that this PCS activation lies in human area 2 (Bodegård et al., 2001; Grefkes, Geyer, Schormann, Roland, & Zilles, 2001). PET studies from Roland’s laboratory have also identified additional foci lying more posteriorly, in posterior parietal cortex contralateral to the hand used for exploration, that are active during haptic shape perception: one in cortex lining the anterior part of the intraparietal sulcus (Fig. 8.1, IPS; Bodegård et al., 2001; Roland, O’Sullivan, & Kawashima, 1998) and another in the anterior part of the supramarginal gyrus (Fig. 8.1, SMG; Bodegård et al., 2001). A hierarchy of form information processing within somatosensory cortical regions has been proposed (Bodegård et al., 2001). This is based on a constellation of findings by Roland’s group: 1. The anterior postcentral gyrus (areas 3b and 1) was nonspecifically activated across a number of tactile tasks, including discrimination between ellipsoidal or cuboidal shapes, discrimination of sphere curvature, and tasks not involving shape or curvature, such as discrimination of roughness or the velocity of skin brushing (Bodegård et al., 2001). 2. The PCS site was active during shape and curvature discrimination (Bodegård et al., 2000, 2001). 3. The IPS locus was active during shape (Bodegård et al., 2001; Roland et al., 1998) but not curvature (Bodegård et al., 2001) discrimination. 4. The SMG focus was also recruited during shape but not curvature discrimination (Bodegård et al., 2001). No differences in engagement of these areas were found whether exploration was active or passive. Thus, it was proposed (Bodegård et al., 2001) that areas 3b and 1, at the bottom of the hierarchy, initially process somatosensory inputs regardless of the task; area 2, at an intermediate level, processes both curvature and complex shapes that can be described fully by their curvature profile (Srinivasan & LaMotte, 1987); and the posterior parietal areas (IPS, SMG), at the top of the hierarchy, are recruited only during complex shape discrimination. These multiple foci were part of a contiguous complex of somatosensory areas that was bilaterally active in a recent fMRI study from our laboratory (Stoesz et al., 2003), in which covert discrimination of forms varying in two dimensions (the upside-down letters T and V) and presented to the immobilized
right index fingerpad was contrasted with a rest state. It should be noted that the IPS and SMG foci were not reported in all studies of haptic shape discrimination. Specifically, Servos et al. (2001) observed that these sites were not active in their study. Subjects in this study were trained to achieve a 90% correct performance level, whereas the tasks were apparently more demanding in the studies reporting posterior parietal contributions (Bodegård et al., 2001; Roland et al., 1998), where performance was around 75% correct. Such differences in task difficulty might account for discrepancies between studies in activation of posterior parietal regions. An unresolved issue is the potential contribution of parietal opercular cortex, including second somatosensory cortex (SII), to haptic shape discrimination. Lesions of human inferior parietal cortex in and around this region have been reported to impair haptic object recognition contralaterally, producing a deficit that has been characterized as tactile agnosia, a specific inability to recognize objects tactually despite otherwise intact somatic sensation (Caselli, 1993; Reed, Caselli, & Farah, 1996). This is in accord with the effects of SII lesions in monkeys, which impair the ability to master discrimination of shape using the contralateral hand (Murray & Mishkin, 1984). However, few functional neuroimaging studies have found activity during tactile form tasks in presumed SII. An early PET study (Roland, Eriksson, Widen, & Stone-Elander, 1989) noted bilateral activity in the parietal operculum and contralateral activity in SI when haptic object recognition was contrasted with rest. Later studies from the same laboratory have not confirmed this. Our fMRI study in which covert discrimination of two-dimensional form stimuli was contrasted with a rest state (see previous paragraph) found that activity within the somatosensory complex extended into the parietal operculum, but only in the right hemisphere, ipsilateral to the stimulated hand (Stoesz et al., 2003).
RECRUITMENT OF VISUAL AND MULTISENSORY CORTICAL AREAS
A number of fMRI studies have revealed that, in addition to somatosensory cortical regions, visual cortical areas are also active when normal subjects haptically identify 3-D objects. The first of these studies (Deibert, Kraut, Kremen, & Hart, 1999) found activity in occipital cortex, including both striate and extrastriate areas, when haptic object recognition was contrasted with haptic texture discrimination. Other laboratories have focused on an occipito-temporal area in the ventral visual pathway, known as the lateral occipital complex (Fig. 8.1, LOC),
which is selectively active for visual objects compared to visual textures (Malach et al., 1995). One group reported that a subregion of the LOC showed similar selectivity for objects versus textures when haptic stimuli were used (Amedi, Jacobson, Hendler, Malach, & Zohary, 2002; Amedi, Malach, Hendler, Peled, & Zohary, 2001), and that the LOC was not activated by object-specific sounds, suggesting that it is specifically concerned with object geometry (Amedi et al., 2002). Another group confirmed that activity is evoked in overlapping parts of the LOC by visual and haptic exploration of objects (James et al., 2002). This study used bimanual object exploration, which, not surprisingly, resulted in bilateral LOC activation. However, activity in visual cortical areas was bilateral even in other studies where only the right hand was used for haptic object identification (Amedi et al., 2002, 2001; Deibert et al., 1999). The finding that both visual and haptic object identification engage the LOC suggests a common neural representation for visual and haptic shape. This idea is reinforced by cross-modal priming effects described in two fMRI studies. Amedi et al. (2001) reported that haptic object exploration evoked a smaller fMRI signal in the LOC for previously seen objects, compared to previously unseen ones, analogous to priming effects in the LOC within the visual modality (Grill-Spector et al., 1999). James et al. (2002) reported fMRI priming effects in the opposite direction, with greater LOC activity during vision of previously explored objects: the magnitude of such neural priming was similar for prior visual and haptic exposure. The unusual direction of the priming effect in the latter study was attributed to the use of unfamiliar objects. These fMRI cross-modal priming effects correspond with psychophysical observations that cross-modal visuo-haptic priming can be as effective as within-modality priming (Easton, Greene, & Srinivas, 1997; Easton, Srinivas, & Greene, 1997; Reales & Ballesteros, 1999). Further support for a shared visuo-haptic shape representation is provided by the case report of a patient with visual agnosia following a lesion that presumably damaged the LOC: this patient also had tactile agnosia, although somatic sensation was otherwise intact (Feinberg, Gonzalez Rothi, & Heilman, 1986). Face perception is a special case of 3-D form perception, and of course has been extensively studied in the visual domain. Of interest here are recent studies of haptic face perception, from Lederman’s laboratory. A psychophysical study showed that humans can haptically recognize faces, and also perform cross-modal matching between visual and haptic inputs (Kilgour & Lederman, 2002). As described in a preliminary report, this group used fMRI to demonstrate activity during haptic face recognition in multiple extrastriate visual cortical areas, including those known to be active during visual face perception (Kilgour, Servos, James, & Lederman, 2004). Further, they found that a
prosopagnosic patient had a similar pattern of performance on haptic and visual face recognition tests (Kilgour, de Gelder, & Lederman, 2004). These studies using faces as haptic stimuli complement those using objects, and underscore the commonality of form perception between vision and haptics.
In a PET study from our laboratory (Prather, Votaw, & Sathian, 2004), form stimuli varying in two dimensions were presented to the immobilized right index fingerpad, while normal subjects engaged in perceptual tasks. This study was designed to address the generality of visual cortical recruitment during tactile form perception, and whether there are differences in the distribution of cortical activity between discrimination of form and mental spatial manipulation of form representations. To this end, one task involved mental rotation of tactile forms, in which stimuli were upside-down Js presented in one of two mirror-image configurations. Subjects were asked to report which of the two configurations they perceived. Stimulus angles (with respect to the long axis of the finger) were 0° (pure mirror-image discrimination condition) or 135°–180° (mental rotation condition). In the mental rotation condition, subjects were asked to mentally rotate the stimulus into the canonical orientation (0°) before reporting their decision. Subjects’ reports that they did mentally rotate in this condition were confirmed psychophysically by finding longer response times for the mental rotation condition compared to the mirror-image condition, in agreement with prior studies using visual (Shepard & Metzler, 1971) and tactile stimuli (Marmor & Zaback, 1976; Prather & Sathian, 2002). Additional conditions in this PET study included a condition in which subjects were asked to distinguish between two forms (the upside-down letters T and V); a condition requiring detection of a gap in a bar; and an orientation condition, in which subjects reported whether a bar was oriented along the long axis of the fingerpad or at a 45° angle to it. To avoid biasing subjects towards visual processing, subjects never saw the stimuli. In this study, the mental rotation condition evoked activity in a region of the left superior parietal cortex near the intraparietal sulcus (Fig. 8.1, SPC), when contrasted with the mirror-image discrimination condition. This focus was also active on contrasts of the mental rotation condition with the form, gap and orientation conditions, confirming its specificity for the process of mental rotation (S. C. Prather, J. R. Votaw, & K. Sathian, unpublished observations). Posterior parietal cortex had earlier been implicated in the mental rotation of tactile stimuli on the basis of electrophysiological findings (Röder, Rösler, & Hennighausen, 1997); however, these studies did not have sufficient spatial precision to localize the active site. This focus is also active during mental rotation of visual stimuli (Alivisatos & Petrides, 1997). When the mental rotation
condition was contrasted with either the orientation or the gap condition, there were additional bilateral activations in parieto-occipital cortex extending superiorly into superior parietal cortex (Fig. 8.1, POC; S. C. Prather, J. R. Votaw, & K. Sathian, unpublished observations). As Fig. 8.1 illustrates, the sites that are active during mental rotation of tactile stimuli (SPC, POC) are located postero-superior to the parietal foci discussed in the section on somatosensory cortical processing (IPS, SMG), and lie in what is considered to be the dorsal visual pathway, which is concerned with visuospatial processing (Mishkin, Ungerleider, & Macko, 1983; Ungerleider & Haxby, 1994). Whether this implies recruitment of visual processing for tactile perception, or multisensory activity in these regions, remains uncertain at present. Relative to the orientation condition, which required no form discrimination, the form condition recruited a focus in the right LOC (Fig. 8.1). As discussed previously, the LOC is an object-selective region located in the ventral visual pathway. Thus, the findings of this PET study from our laboratory (Prather et al., 2004) indicate a functional distinction between processes involved in tactile form discrimination, which engage the ventral visual pathway, and those involved in mental spatial manipulation of tactile form representations, which engage the dorsal visual pathway. This parallels the segregation of mental rotation and object recognition of visual stimuli in dorsal versus ventral visual areas, respectively (Gauthier et al., 2002), and the general organization of visual information flow into a dorsal cortical pathway for spatial processing and a ventral one for form processing (Mishkin et al., 1983; Ungerleider & Haxby, 1994). Further, these findings are consistent with an increasing body of evidence that regions of extrastriate visual cortex are engaged in a highly task-specific fashion during tactile perception, that is, the specific areas of extrastriate visual cortex that mediate particular visual tasks are recruited in the corresponding tactile tasks. Apart from the work reviewed here, other examples of such visual cortical recruitment include activation of a parieto-occipital cortical locus during both visual (Sergent, Ohta, & MacDonald, 1992) and tactile (Sathian, Zangaladze, Hoffman, & Grafton, 1997) discrimination of grating orientation, and activation of visual area MT during perception of both visual and tactile motion (Blake, Sobel, & James, 2004; Hagen et al., 2002). Moreover, the dependence of tactile perception on the function of extrastriate visual cortex has been demonstrated in a study from our laboratory (Zangaladze, Epstein, Grafton, & Sathian, 1999). This study showed that transcranial magnetic stimulation over parieto-occipital cortex, to disrupt its functioning, interfered with optimal tactile discrimination of grating orientation. In a related fMRI study (Stoesz et al., 2003), we used the same form discrimination and gap detection tasks as in our PET study (Prather et al., 2004) described earlier (except that task performance was covert in the fMRI study).
Compared with a baseline rest condition, there was bilateral activity in the LOC (Fig. 8.1), even though stimuli were presented to the right hand. However, LOC activity was absent for the gap detection task. A direct contrast between the tasks confirmed that LOC activity was greater for the form than the gap task. A key difference between these two tasks is that the form task is macrospatial (based on differences in large-scale features), whereas the gap task is microspatial (based on differences in small-scale features). Since vision is generally superior to touch at analyzing macrospatial features, the reverse being true for microspatial features (Heller, 1989), it has been suggested that macrospatial tactile tasks might be preferentially associated with visual processing (Klatzky, Lederman, & Reed, 1987). This is supported by our finding of greater LOC activity associated with the macrospatial form task than the microspatial gap task (Stoesz et al., 2003), and by earlier findings from our laboratory that tactile discrimination of grating orientation, a macrospatial task, recruits visual cortical activity functionally relevant to performance, but tactile discrimination of grating spacing, a microspatial task, does not (Sathian et al., 1997; Zangaladze et al., 1999).
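To make the logic of such a direct task contrast concrete, the following sketch compares hypothetical ROI-averaged response estimates for the form and gap tasks with a paired t statistic. This is not the analysis pipeline used in the studies cited above; the subject count, the numbers, and the use of numpy are assumptions made purely for illustration.

```python
# Illustrative sketch: comparing ROI responses between two tactile tasks.
# The beta values below are hypothetical placeholders, not data from the study.
import numpy as np

rng = np.random.default_rng(0)
n_subjects = 12

# Hypothetical LOC responses (arbitrary units) per subject for each task.
beta_form = rng.normal(loc=0.8, scale=0.3, size=n_subjects)   # macrospatial form task
beta_gap = rng.normal(loc=0.2, scale=0.3, size=n_subjects)    # microspatial gap task

# Paired contrast: form minus gap within each subject.
diff = beta_form - beta_gap
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(n_subjects))

print(f"mean form-gap difference = {diff.mean():.3f}, paired t({n_subjects - 1}) = {t_stat:.2f}")
```

A positive, reliable difference in such a paired contrast is the kind of evidence summarized in the text as greater LOC activity for the form than for the gap task.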
An interesting observation in our fMRI study (Stoesz et al., 2003) was that most subjects reported using visual imagery during the macrospatial form task but not the microspatial gap task. Since the former but not the latter task engaged the LOC, this suggests that visual imagery is associated with recruitment of the LOC. Hence, visual cortical engagement during tactile perception might be due to visual imagery, possibly triggered by unfamiliarity with the stimuli or tasks in the tactile realm. Cross-modal translation into the format suited for the sensory modality that is more adept at the task at hand may in fact be quite general, especially when complex information is involved (Freides, 1974). Consistent with this, reports from subjects participating in multiple macrospatial tasks in our laboratory indicate that they mentally visualize tactile stimuli, even when they have not encountered them visually and even when tasks do not explicitly call for mental imagery. Interestingly, in a PET study, it was reported that activity in the left LOC was recruited by mental imagery of object shape (De Volder et al., 2001). The imagery was triggered by auditory cues that had previously been associated with the objects during an initial phase of exploration that was visual in a group of sighted subjects and haptic in a group of blind subjects and can thus be assumed to be visually based in the former group and haptically based in the latter group. Some workers have argued against a role for visual imagery during haptic form perception, since visual imagery evoked little activity in the LOC compared to haptic object identification (Amedi et al., 2001), and
performance on haptic face recognition was not correlated with ratings of the vividness of visual imagery (Kilgour & Lederman, 2002). However, taken together, the studies reviewed in this chapter indicate that activity in the LOC (and perhaps dorsal visual cortical areas as well) can be generated during perception as well as imagery, whether visually or haptically based. In other words, a multisensory (or even amodal) representation could be accessed either bottom-up, via direct projections from multiple sensory modalities, or top-down, via projections from higher sensory areas or regions involved in cognitive processes. The chapter by Pascual-Leone et al. in this volume describes studies in visually deprived individuals that, similar to the studies reviewed here, defy traditional concepts of rigid specialization within sensory cortex. An important goal of future studies will be to attempt unified explanations of findings in the sighted and in the visually deprived.
RECRUITMENT OF MOTOR AND PREMOTOR CORTICAL AREAS

Many studies have found that cortical areas that are usually considered as part of the circuitry involved in motor control are active during tactile form discrimination, even in the absence of movement. These areas, whose locations are shown in Fig. 8.1, include primary motor cortex (PMC; Bodegård et al., 2001; Sadato, Ibanez, Deiber, & Hallett, 2000), dorsal premotor cortex (PMd; Bodegård et al., 2001; Sadato et al., 2000; Sathian, Prather, & Votaw, 2002; Stoesz et al., 2003), ventral premotor cortex (PMv; Sathian et al., 2002; Stoesz et al., 2003), and the region in and anterior to the supplementary motor area (SMA; Bodegård et al., 2001; Sadato et al., 2000; Stoesz et al., 2003). Based on neurophysiological studies in monkeys, Romo and Salinas (2001) suggested that the activity found in various cortical (and subcortical) motor areas is primarily related to decision-making processes. However, in human fMRI studies, even tactile stimulation without any perceptual or motor task engages PMC (Francis et al., 2000; Moore et al., 2000) and the SMA (Bushara et al., 2001). Other possible reasons for recruitment of motor structures during tactile perception include automatic planning of manual interaction with objects, or fundamental involvement of these areas in sensory or cognitive processes. The PET study from our laboratory described earlier also provided preliminary data suggesting that recruitment of lateral premotor cortex might be task-dependent (Sathian et al., 2002), in association with corresponding visual cortical areas. A baseline control condition used in this study lacked tactile stimulation but required emission of verbal output at a similar rate to that during the tactile tasks. Contrasted with this verbal control, mental rotation of
tactile forms activated foci in left PMd and PMv, whereas form discrimination recruited PMv bilaterally (Fig. 8.1). Contrasts of the mental rotation task to the orientation or gap tasks, which, as already described, revealed multiple active loci in the dorsal pathway, also activated left PMd. Thus, spatial mental manipulation of tactile forms recruited activity in both the dorsal visual stream and dorsal premotor cortex, which are known to be heavily interconnected (Tanné-Gariépy, Rouiller, & Boussaoud, 2002). Form discrimination recruited the LOC in the ventral visual pathway in addition to ventral premotor cortex, suggesting that a ventral visuo-motor network could be recruited during form processing of tactile stimuli. Figure 8.2 illustrates this concept by showing that the tendency of regional cerebral blood flow to vary between tasks was similar in the dorsal visual areas and dorsal premotor cortex (compare top left and right panels), whereas a different pattern was observed in ventral visual and ventral premotor cortex (compare bottom left and right panels). Further work will be necessary to substantiate the idea of task-specific recruitment of premotor cortical fields during tactile perception.

FIG. 8.2. Normalized regional cerebral blood flow (rCBF) averaged over four of the regions illustrated in Fig. 8.1 (note that the top panels are for left hemisphere regions and the bottom panels for right hemisphere regions; abbreviations as in text), for six different conditions (Ro: mental rotation; Mi: mirror-image discrimination; Fo: form discrimination; Or: bar orientation discrimination; Ga: gap detection; Vb: verbal control). Bars indicate standard errors. Data are taken from a PET study in our laboratory (S. C. Prather, J. R. Votaw, & K. Sathian, unpublished observations).
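Purely as a schematic illustration of the kind of ROI summary plotted in Fig. 8.2, the sketch below computes a mean normalized rCBF value with a standard error for each of the six conditions in one region. The values, the ten-subject sample, and the normalization of each subject's data to that subject's mean across conditions are assumptions made for the example, not details reported for the actual PET analysis.

```python
# Illustrative sketch of a per-condition ROI summary (hypothetical values only).
import numpy as np

conditions = ["Ro", "Mi", "Fo", "Or", "Ga", "Vb"]  # rotation, mirror, form, orientation, gap, verbal
rng = np.random.default_rng(1)

# Hypothetical per-subject rCBF values for one ROI, 10 subjects x 6 conditions.
rcbf = rng.normal(loc=100.0, scale=5.0, size=(10, len(conditions)))

# Normalize each subject's values to that subject's mean across conditions,
# then summarize each condition by its mean and standard error across subjects.
normalized = rcbf / rcbf.mean(axis=1, keepdims=True)
for j, cond in enumerate(conditions):
    mean = normalized[:, j].mean()
    sem = normalized[:, j].std(ddof=1) / np.sqrt(normalized.shape[0])
    print(f"{cond}: {mean:.3f} +/- {sem:.3f}")
```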
CONCLUSIONS

Tactile perception of form is associated with activity in a widely distributed cerebral cortical network, including multiple somatosensory cortical foci, regions of extrastriate visual cortex, and cortical areas in the motor circuit. Functional neuroimaging studies in humans have been instrumental in identifying the breadth of distribution of this activity. It is fair to say that a start has been made in addressing the functional role played by these diverse cortical regions. Elucidating the function of different nodes of the distributed network and their orchestration into the symphony of perceptual experience in both sighted and visually deprived individuals remains a challenge for the future.
ACKNOWLEDGMENTS

We acknowledge support of our research by grants from the NIH (RO1EY12440 to KS) and the ARCS foundation (to SCP), and we thank our colleagues Hui Mao, Mark Stoesz, John Votaw, Valerie Weisser, and Minming Zhang for their contributions to the work reviewed here.
REFERENCES

Alivisatos, B., & Petrides, M. (1997). Functional activation of the human brain during mental rotation. Neuropsychologia, 36, 111–118.
Amedi, A., Jacobson, G., Hendler, T., Malach, R., & Zohary, E. (2002). Convergence of visual and tactile shape processing in the human lateral occipital complex. Cerebral Cortex, 12, 1202–1212.
Amedi, A., Malach, R., Hendler, T., Peled, S., & Zohary, E. (2001). Visuo-haptic object-related activation in the ventral visual pathway. Nature Neuroscience, 4, 324–330.
Blake, R., Sobel, K. V., & James, T. W. (2004). Neural synergy between kinetic vision and touch. Psychological Science, 15, 397–402.
Bodegård, A., Geyer, S., Grefkes, C., Zilles, K., & Roland, P. E. (2001). Hierarchical processing of tactile shape in the human brain. Neuron, 31, 317–328.
Bodegård, A., Ledberg, A., Geyer, S., Naito, E., Zilles, K., & Roland, P. E. (2000). Object shape differences reflected by somatosensory cortical activation in human. Journal of Neuroscience, 20, RC51, 1–5.
Boecker, H., Khorram-Sefat, D., Kleinschmidt, A., Merboldt, K.-D., Hänicke, W., Requardt, M., & Frahm, J. (1995). High-resolution functional magnetic resonance imaging of cortical activation during tactile exploration. Human Brain Mapping, 3, 236–244.
Bushara, K. O., Wheat, J. M., Khan, A., Mock, B. J., Turski, P. A., Sorenson, J., & Brooks, B. R. (2001). Multiple tactile maps in the human cerebellum. NeuroReport, 12, 2483–2486.
Carlson, M. (1981). Characteristics of sensory deficits following lesions of Brodmann’s areas 1 and 2 in the postcentral gyrus of Macaca mulatta. Brain Research, 204, 424–430.
Caselli, R. J. (1993). Ventrolateral and dorsomedial somatosensory association cortex damage produces distinct somesthetic syndromes in humans. Neurology, 43, 762–771.
De Volder, A. G., Toyama, H., Kimura, Y., Kiyosawa, M., Nakano, H., Vanlierde, A., et al. (2001). Auditory triggered mental imagery of shape involves visual association areas in early blind humans. NeuroImage, 14, 129–139.
Deibert, E., Kraut, M., Kremen, S., & Hart, J. (1999). Neural pathways in tactile object recognition. Neurology, 52, 1413–1417.
Easton, R. D., Greene, A. J., & Srinivas, K. (1997). Transfer between vision and haptics: Memory for 2-D patterns and 3-D objects. Psychonomic Bulletin and Review, 4, 403–410.
Easton, R. D., Srinivas, K., & Greene, A. J. (1997). Do vision and haptics share common representations? Implicit and explicit memory within and between modalities. Journal of Experimental Psychology: Learning, Memory and Cognition, 23, 153–163.
Feinberg, T. E., Gonzalez Rothi, L. J., & Heilman, K. M. (1986). Multimodal agnosia after unilateral left hemisphere lesion. Neurology, 36, 864–867.
Francis, S. T., Kelly, E. F., Bowtell, R., Dunseath, W. J. R., Folger, S. E., & McGlone, F. (2000). fMRI of the responses to vibratory stimulation of digit tips. NeuroImage, 11, 188–202.
Freides, D. (1974). Human information processing and sensory modality: Cross-modal functions, information complexity and deficit. Psychological Bulletin, 81, 284–310.
Gauthier, I., Hayward, W. G., Tarr, M. J., Anderson, A. W., Skudlarski, P., & Gore, J. C. (2002). BOLD activity during mental rotation and viewpoint-dependent object recognition. Neuron, 34, 161–171.
Grefkes, C., Geyer, S., Schormann, T., Roland, P., & Zilles, K. (2001). Human somatosensory area 2: Observer-independent cytoarchitectonic mapping, interindividual variability, and population map. NeuroImage, 14, 617–631.
Grill-Spector, K., Kushnir, T., Edelman, S., Avidan, G., Itzchak, Y., & Malach, R. (1999). Differential processing of objects under various viewing conditions in the human lateral occipital complex. Neuron, 24, 187–203.
Hagen, M. C., Franzen, O., McGlone, F., Essick, G., Dancer, C., & Pardo, J. V. (2002). Tactile motion activates the human middle temporal/V5 (MT/V5) complex. European Journal of Neuroscience, 16, 957–964.
Heller, M. A. (1989). Texture perception in sighted and blind observers. Perception and Psychophysics, 45, 49–54.
James, T. W., Humphrey, G. K., Gati, J. S., Servos, P., Menon, R. S., & Goodale, M. A. (2002). Haptic study of three-dimensional objects activates extrastriate visual areas. Neuropsychologia, 40, 1706–1714.
Kilgour, A. R., & Lederman, S. J. (2002). Face recognition by hand. Perception and Psychophysics, 64, 339–352.
Kilgour, A. R., de Gelder, B., & Lederman, S. J. (2004). Haptic face recognition and prosopagnosia. Neuropsychologia, 42, 707–712.
Kilgour, A. R., Servos, P., James, T. W., & Lederman, S. J. (2004). Functional MRI of haptic face recognition. Brain and Cognition, 54, 159–161 (Abstract).
Klatzky, R. L., Lederman, S. J., & Reed, C. (1987). There’s more to touch than meets the eye: The salience of object attributes for haptics with and without vision. Journal of Experimental Psychology: General, 116, 356–369.
Malach, R., Reppas, J. B., Benson, R. R., Kwong, K. K., Jiang, H., Kennedy, W. A., et al. (1995). Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex. Proceedings of the National Academy of Sciences of the USA, 92, 8135–8139.
Marmor, G. S., & Zaback, L. A. (1976). Mental rotation by the blind: Does mental rotation depend on visual imagery? Journal of Experimental Psychology: Human Perception and Performance, 2, 515–521.
Mishkin, M., Ungerleider, L., & Macko, K. (1983). Object vision and spatial vision: Two cortical pathways. Trends in Neurosciences, 6, 414–417.
Moore, C. I., Stern, C. E., Corkin, S., Fischl, B., Gray, A. C., Rosen, B. R., et al. (2000). Segregation of somatosensory activation in the human Rolandic cortex using fMRI. Journal of Neurophysiology, 84, 558–569.
Murray, E. A., & Mishkin, M. (1984). Relative contributions of SII and area 5 to tactile discrimination in monkeys. Behavioral Brain Research, 11, 67–83.
Prather, S. C., & Sathian, K. (2002). Mental rotation of tactile stimuli. Cognitive Brain Research, 14, 91–98.
Prather, S. C., Votaw, J. R., & Sathian, K. (2004). Task-specific recruitment of dorsal and ventral visual areas during tactile perception. Neuropsychologia, 42, 1079–1087.
Randolph, M., & Semmes, J. (1974). Behavioral consequences of selective subtotal ablations in the postcentral gyrus of Macaca mulatta. Brain Research, 70, 55–70.
Reales, J. M., & Ballesteros, S. (1999). Implicit and explicit memory for visual and haptic objects: Cross-modal priming depends on structural descriptions. Journal of Experimental Psychology: Learning, Memory and Cognition, 25, 644–663.
Reed, C. L., Caselli, R. J., & Farah, M. J. (1996). Tactile agnosia: Underlying impairment and implications for normal tactile object recognition. Brain, 119, 875–888.
Röder, B., Rösler, F., & Hennighausen, E. (1997). Different cortical activation patterns in blind and sighted humans during encoding and transformation of haptic images. Psychophysiology, 34, 292–307.
Roland, P. E., Eriksson, L., Widen, L., & Stone-Elander, S. (1989). Changes in regional cerebral oxidative metabolism induced by tactile learning and recognition in man. European Journal of Neuroscience, 1, 3–18.
Roland, P. E., O’Sullivan, B., & Kawashima, R. (1998). Shape and roughness activate different somatosensory areas in the human brain. Proceedings of the National Academy of Sciences of the USA, 95, 3295–3300.
Romo, R., & Salinas, E. (2001). Touch and go: Decision-making mechanisms in somatosensation. Annual Review of Neuroscience, 24, 107–137.
Sadato, N., Ibanez, V., Deiber, M.-P., & Hallett, M. (2000). Gender difference in premotor activity during active tactile discrimination. NeuroImage, 11, 532–540.
Sathian, K., Prather, S. C., & Votaw, J. R. (2002). Task-dependent premotor cortical recruitment during tactile perception. Society for Neuroscience Abstracts, 28, 841.12.
Sathian, K., Zangaladze, A., Hoffman, J. M., & Grafton, S. T. (1997). Feeling with the mind’s eye. NeuroReport, 8, 3877–3881.
Sergent, J., Ohta, S., & MacDonald, B. (1992). Functional neuroanatomy of face and object processing: A positron emission tomography study. Brain, 115, 15–36.
Servos, P., Lederman, S., Wilson, D., & Gati, J. (2001). fMRI-derived cortical maps for haptic shape, texture and hardness. Cognitive Brain Research, 12, 307–313.
Shepard, R. N., & Metzler, J. (1971). Mental rotation of three-dimensional objects. Science, 171, 701–703.
Srinivasan, M. A., & LaMotte, R. H. (1987). Tactile discrimination of shape: Responses of slowly and rapidly adapting mechanoreceptive afferents to a step indented into the monkey fingerpad. Journal of Neuroscience, 7, 1682–1697.
Stoesz, M., Zhang, M., Weisser, V. D., Prather, S. C., Mao, H., & Sathian, K. (2003). Neural networks active during tactile form perception: Common and differential activity during macrospatial and microspatial tasks. International Journal of Psychophysiology, 50, 41–49.
Talairach, J., & Tournoux, P. (1988). Co-planar stereotaxic atlas of the brain. New York: Thieme Medical Publishers.
Tanné-Gariépy, J., Rouiller, E. M., & Boussaoud, D. (2002). Parietal inputs to dorsal versus ventral premotor areas in the macaque monkey: Evidence for largely segregated visuomotor pathways. Experimental Brain Research, 145, 91–103.
Ungerleider, L. G., & Haxby, J. V. (1994). “What” and “where” in the human brain. Current Opinion in Neurobiology, 4, 157–165.
Zangaladze, A., Epstein, C. M., Grafton, S. T., & Sathian, K. (1999). Involvement of visual cortex in tactile discrimination of orientation. Nature, 401, 587–590.
9

The Role of Visual Cortex in Tactile Processing: A Metamodal Brain

Alvaro Pascual-Leone, Hugo Theoret, Lotfi Merabet, Thomas Kauffmann, and Gottfried Schlaug
Beth Israel Deaconess Medical Center, Harvard Medical School
Currently, the dominant view of the organization of the human brain postulates a series of parallel, hierarchically organized, sensory modality specific systems: a visual system, an auditory system, a tactile system, and so on. Sensory systems are generally characterized as having peripheral receptor systems that transduce information to pre-cortical neural relay stations. These relays (for instance, the nuclei of the thalamus) direct sensory signals to unimodal sensory cortical areas, thought to be responsible primarily for processing of data from a single modality. Unimodal sensory areas seem arranged in an orderly hierarchy of increasingly complex functional significance, from primary, through secondary, to unimodal association areas (Mesulam, 1998). Only after this series of steps, in which sensory information is believed to remain isolated by modality, is information thought to merge into higher order multimodal association areas of the cortex (Calvert, Campbell, & Brammer, 2000; Mesulam, 1998; Stein & Meredith, 1993). These multimodal association areas of the cortex contain
multisensory cells, which provide a neural mechanism for integrating sensory experiences, modulating the saliency of stimuli, assigning experiential and affective relevance, and providing the substrate for perceptual experience. However, such a model of brain organization appears excessively simplistic and raises several questions. For example, cortico-cortical and cortico-subcortical connections are generally arranged in feed-forward and matching feed-back loops. If such is the case between unimodal and multisensory areas, we ought to expect that the activation of multisensory areas by one sensory input would affect activity in all other sensory systems. Such interactions are likely to be quite specific, depending on the precise pattern of reciprocal connections, the cortical layers targeted by them, and so forth. In any case, if connections are indeed reciprocal, we ought to expect that given the appropriate task and circumstance, activity in multisensory areas would affect and modulate the activity and presumably the behavioral contribution of unimodal areas, including primary unimodal areas (de Gelder, 2000; Driver & Spence, 2000). Several other chapters in this volume address this idea of feed-back, crossmodal interactions in detail. Furthermore, it is conceivable that given such a structure of parallel, sensory-modality specific systems, connections might exist across unimodal sensory areas (Falchier et al., 2001; Rockland & Ojima, 2001). Such connections (whether cortico-cortical or through subcortical structures such as the thalamus) could be present during development and be expected to normally degenerate except in cases of early loss of a sensory modality (Innocenti, 1995). In certain instances, for example in the case of congenital or early blindness, such connections would allow for the primary sensory cortices normally associated with the deprived sensory modality to become colonized by the remaining sensory modalities (Bavelier & Neville, 2002). On the other hand, persistence of such developmentally transient connections into adulthood could be viewed as a reason for the normal experience of crossmodal interaction and, if uncontrolled, even synesthesias (Baron-Cohen & Harrison, 1997; Cytowic, 1989; Grossenbacher & Lovelace, 2001). These two mechanisms, feed-back influences from multimodal areas or feed-forward connections across unimodal areas, are not mutually exclusive, nor all exhaustive. In any case, appropriate tasks or interventions, for example temporary complete visual deprivation, might unmask the functional significance of such connections and shed new light onto the organization (at systems level) of the nervous system. It seems reasonable to presume that in the setting of deprivation of a sensory modality, the brain would reorganize, so that neuroplasticity may alter the experience of the world. Interactions between multisensory and unimodal areas must be modified, and the respective strength of the different network components must be shifted. Multisensory areas could exert an increased influence on unimodal re-
gions, while connections between these unimodal areas might be unmasked or even newly established if not normally present. In turn, these plastic changes may alter the normal functioning of the remaining senses. Recent studies on blind, visually deprived, and even sighted subjects demonstrate that cortical processing for one sensory modality can indeed occur even in primary cortical regions typically dedicated to different modalities, and hence call into question the strict organization of the brain into parallel unimodal systems that are only integrated in higher order brain centers (Hamilton & Pascual-Leone, 1998; Pascual-Leone & Hamilton, 2001). Such studies provide novel clues as to the mechanisms involved in determining cortical functional specificity and the notion of a nervous system primarily organized in separate streams of unimodal processing. An alternative hypothesis may claim that the brain actually represents a metamodal structure organized as a network of cortical operators that execute given functions or computations regardless of the sensory modality of their inputs (Pascual-Leone & Hamilton, 2001). A given operator might have a predilection for a given sensory input based on its relative suitability for the kinds of computations required to process information from that modality. Such predilection might lead to operator-specific selective reinforcement of certain sensory modalities, eventually generating the impression of a brain structured in parallel, segregated systems processing different sensory signals. According to this view, a sense-specific brain region like the “visual cortex” may be visual only because we have sight and because the kinds of computations performed by the striate cortex are best suited for retinal, visual information. For example, we might postulate that the “visual cortex” is involved in discriminating precise spatial relations of local, detail features of an object or scene, which might be more advantageously done using visual than other sensory modalities. However, in the face of visual deprivation or well-chosen, challenging tasks, the striate cortex may unmask its tactile and auditory inputs to implement its computational functions on the available nonvisual sensory information.
CHOOSE THE TOOLS THAT CAN ADDRESS YOUR QUESTIONS

In the present chapter we address issues of cross-modal interactions and a possible metamodal organization of the brain cortex, by reviewing several experiments that demonstrate a role of the visual cortex in tactile processing. However, prior to pursuing this further, it is important to note that the study of such phenomena, and indeed of any scientific question, requires the use of appropriate methodologies. In our case, the requirement is to establish causal links between brain activity and behavioral manifestations. Functional
neuroimaging techniques such as positron emission tomography (PET) have convincingly shown the association between certain behaviors and specific patterns of joint activation of cortical and subcortical structures. Functional magnetic resonance imaging (fMRI) studies can add greater anatomical resolution and the temporal profile of the pattern of activation of such neural networks for specific behaviors. However, in the best of circumstances, these neuroimaging techniques only provide supportive evidence of the neural network associated with a given behavior rather than direct, causal evidence that the activated neural areas are critical for the behavior. Traditionally, “lesion studies” represent the best way of establishing such a causal link between brain function and behavior. However, “lesion studies” have many limitations. Following a brain injury, brain function reorganizes in an attempt to compensate for the lost abilities and, therefore, the observations might yield inaccurate results. Furthermore, cognitive abilities might be globally impaired after a brain insult so that the patient might not be suited for extensive, detailed testing of a given ability. Patients will frequently have more than a single brain injury, or the brain injury might be larger than the brain area under study, making the correlation between regional brain function and disturbed behavior difficult. Finally, lesion studies depend on the opportunity and chance occurrence of a given brain injury. Direct cortical stimulation techniques can address issues of causality, but they are invasive and only applicable to patients with cerebral pathology severe enough to require a neurosurgical intervention. Transcranial magnetic stimulation (TMS) provides a novel approach to overcome these limitations and expand functional imaging results, adding causality information. Applied as single pulses appropriately delivered in time and space or in trains of repetitive stimuli at appropriate frequency and intensity, TMS can be used to transiently disrupt the function of a given cortical target, thus creating a temporary, “virtual brain lesion” (Pascual-Leone, Bartres-Faz, & Keenan, 1999; Walsh & Cowey, 2000; Walsh & Pascual-Leone, 2003; Walsh & Rushworth, 1999). This allows the study of the contribution of a given cortical region to a specific behavior.
FEELING WITH THE MIND’S EYE: THE OCCIPITAL CORTEX IN THE EARLY BLIND

Functional studies in early and congenitally blind subjects reveal activation of the occipital cortex during Braille reading and similarly demanding tactile discriminations. Using positron emission tomography (PET) as a measure of cortical activation during tactile discrimination tasks, Sadato et al. (Sadato et al., 1998; Sadato et al., 1996) found that blind subjects demonstrated activation of both primary and secondary occipital cortical areas (V1 and V2; Brodmann ar-
eas 17, 18, and possibly 19) during tactile tasks (Fig. 9.1A), whereas sighted controls showed deactivation in these regions. Studies by Uhl et al. (1991, 1993) using event-related potentials (ERPs) and cerebral blood flow measures also suggest occipital cortex activation in early blind humans. Others, using PET, and more recently fMRI, have found confirmatory results (Büchel et al., 1998; Burton et al., 2002). It is worth noticing that even congenitally blind subjects show activation of the occipital cortex, including the striate cortex during tactile reading, hence ruling out visual imagery as an explanation for these results. Second, it should be noted that the activation of the striate cortex is not
FIG. 9.1. (A) Activation of occipital, striate cortex (in addition to somatosensory cortex) during Braille reading in congenitally or early blind subjects, as measured with PET (modified from Sadato et al., 1996). (B) Activation of occipital cortex in early and late blind subjects in the study by Büchel et al. (1998). Note the lack of activation of striate cortex in the early blind, possibly due to the use of an auditory task as control and subtraction. (C) Effects of transient disruption of the contralateral somatosensory cortex or the occipital cortex by TMS on the tactile Braille reading skill in early blind subjects and sighted controls. The sighted controls performed a difficulty-matched task of tactile reading of embossed Roman letters (modified from Cohen et al., 1997). (D) Effect of single-pulse TMS on tactile Braille character recognition. The TMS was delivered at different times (inter-stimulus interval) after presentation of the tactile stimulus to the digit pad. The graph displays the number of tactile stimuli detected (open symbols) and correctly identified (filled symbols). A total of 20 stimuli were presented. Mean data for five early blind subjects are presented (modified from Hamilton & Pascual-Leone, 1998).
related to the linguistic demands of Braille, since other, nonverbal tasks demanding the same degree of detailed, spatial tactile discrimination also result in occipital activation (Büchel et al., 1998; Burton et al., 2002; Sadato et al., 1998; Sadato et al., 1996). Finally, it is interesting to point out that Büchel et al. (1998) failed to see activation of the striate cortex in their PET study of congenital and early blind subjects, while late blind subjects did show activation of the striate cortex (Fig. 9.1B). A possible reason for these findings and the apparent contradiction between the results of Sadato et al. (1996, 1998), Burton et al. (2002), and Büchel et al. (1998) is that the latter employed an auditory control task, hence possibly subtracting the activation of striate cortex during tactile stimuli from the activation during auditory stimuli. Neuroimaging evidence of activation of the visual cortex during tactile tasks in the blind does not prove causality. However, the functional significance of these changes can be addressed using rTMS. Cohen et al. (1997) found that repetitive TMS (rTMS) applied to the occipital cortex was able to disrupt Braille letter reading and the reading of embossed Roman characters in early blind subjects (subjects born blind or that had become blind before the age of 7) (Fig. 9.1C). In this study, repetitive rTMS induced errors and distorted the tactile perceptions of blind subjects in both tasks. In the case of the Braille task, subjects knew that they were touching Braille symbols but were unable to discriminate them, reporting instead that the Braille dots felt “different,” “flatter,” “less sharp and less well defined.” Occasionally, some subjects even reported feeling additional (“phantom”) dots in the Braille cell. By contrast, occipital stimulation had no effect on tactile performance in normal sighted subjects, whereas similar stimulation is known to disrupt their visual performance (Kosslyn et al., 1999). The functional significance of the occipital activation during Braille reading in the early blind can be further evaluated using single-pulse TMS in order to obtain information about the timing (chronometry) of information processing along a neural network (Pascual-Leone, Walsh, & Rothwell, 2000). TMS can very briefly disrupt the function of a targeted cortical region (Walsh & Cowey, 2000) and thus, applied at variable intervals following a given stimulus it can provide information about the temporal profile of activation and information processing along elements of a neural network (Maccabee et al., 1991; Cohen, Bandinelli, Sato, Kufta, & Hallett, 1991). A TMS stimulus can be delivered to the occipital or the contralateral somatosensory cortex at a variable interval after a peripheral stimulus is applied to the pad of the index finger. In normal, sighted subjects, stimuli to the occipital cortex have no effect, but TMS delivered appropriately in time and space can transiently disrupt the arrival of the thalamo-cortical volley into the primary sensory cortex and interfere with detection of peripheral somatosensory stimuli (Cohen et al., 1991; Pascual-Leone,
Cohen, Brasil-Neto, Valls-Sole, & Hallett, 1994). The subjects are not aware that they received a peripheral somatosensory stimulus prior to TMS. In congenitally blind subjects, TMS to the left somatosensory cortex disrupts detection of Braille stimuli presented to their right index finger at interstimulus intervals of 20 to 40 msec (Fig. 9.1D; Hamilton & Pascual-Leone, 1998). Similarly to the findings in the sighted, in these circumstances, the subjects did not realize that a peripheral stimulus had been presented to their finger. In the instances in which they did realize the presentation of a peripheral stimulus, they were able to correctly identify what Braille symbol was presented. On the other hand, TMS to the striate cortex disrupts the processing of the peripheral stimuli at interstimulus intervals of 50 to 80 msec. Contrary to the findings after sensorimotor TMS, the subjects generally knew whether a peripheral stimulus had been presented or not. However, they were unable to discriminate whether the presented stimuli were real or nonsensical Braille or what Braille symbol might have been presented (interference with perception). Therefore, in early blind subjects, the somatosensory cortex appears engaged in detection, while the occipital cortex contributes to the perception of tactile stimuli. Theoretically, two main alternative routes could be entertained for the interaction between somatosensory and occipital cortex during tactile tasks (including tactile Braille reading) in the blind: thalamo-cortical connections to sensory and visual cortex; and cortico-cortical connections from sensory cortex to visual cortex. Thalamic somatosensory nuclei could send input to both the somatosensory cortex and the striate cortex from multimodal cells in the geniculate nuclei. These theoretical multiple projections might be masked or even degenerate in the postnatal period given normal vision. However, in early blind subjects, these somatosensory thalamo-striate projections might remain and be responsible for the participation of the striate cortex in tactile information processing. Alternatively, changes in cortico-cortical connections might be postulated, such that spatial information originally conveyed by the tactile modality in the sighted subjects (SI–SII–insular cortex–limbic system) might be processed in the blind by the neuronal networks usually reserved for the visual, shape discrimination process (SI–BA 7–dorsolateral BA 19–V1–occipito-temporal region–anterior temporal region–limbic system).
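The chronometric single-pulse TMS paradigm described above amounts to tallying, for each interstimulus interval (ISI) between the tactile stimulus and the TMS pulse, how many of the 20 presented stimuli were detected and how many were correctly identified, as in Fig. 9.1D. The sketch below shows one way such trial records could be summarized; the trial list is hypothetical and is not data from Hamilton and Pascual-Leone (1998).

```python
# Illustrative sketch: summarizing detection and identification by ISI.
# Each trial record is (ISI in msec, detected?, correctly identified?); values are hypothetical.
from collections import defaultdict

trials = [
    (20, False, False), (20, True, True), (40, False, False), (40, True, True),
    (60, True, False), (60, True, False), (80, True, False), (80, True, True),
    (100, True, True), (100, True, True),
]

counts = defaultdict(lambda: {"n": 0, "detected": 0, "identified": 0})
for isi, detected, identified in trials:
    counts[isi]["n"] += 1
    counts[isi]["detected"] += detected
    counts[isi]["identified"] += identified

for isi in sorted(counts):
    c = counts[isi]
    print(f"ISI {isi:3d} ms: detected {c['detected']}/{c['n']}, identified {c['identified']}/{c['n']}")
```

A dip in detection at short ISIs after somatosensory TMS, and a dip in identification (with detection preserved) at longer ISIs after occipital TMS, would correspond to the pattern described in the text.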
MORE THAN BRAILLE: GRATINGS ORIENTATION TASK (GOT)

Tactile Braille reading is difficult for sighted individuals. Indeed, sighted instructors of Braille to the blind learn to read Braille using sight, rather than touch. This raises the question of whether visual deprivation and
deafferentation of the visual cortex are required to learn Braille. Furthermore, it suggests that blindness results in a behavioral advantage for the acquisition of the Braille reading skill, and possibly for other tactile tasks that require precise and detailed tactile feature discrimination. Consistent with this notion, early blind subjects have dramatically lower thresholds on the Gratings Orientation Task (GOT) than sighted controls (even when tested blindfolded; van Boven, Hamilton, Kaufman, Keenan, & Pascual-Leone, 2000). In the GOT (Van Boven & Johnson, 1994), hemispherical plastic domes, each with a grating of a different width cut into its surface so that the parallel bars and grooves are of equal width, are pressed onto the subjects’ digit pad (Fig. 9.2A; JVP Domes, Stoelting Co., Wood Dale, IL). Subjects are required to identify the stimulus orientation (two-alternative forced-choice paradigm), and a grating orientation threshold is determined by interpolation between the groove widths spanning 75% correct responses (Van Boven & Johnson, 1994). Interestingly, early blind subjects have the lowest
FIG. 9.2. (A) Example of the JVP domes used for the Gratings Orientation Task (GOT). (B) Results of the GOT threshold for the fingers in early blind and sighted controls (modified from van Boven et al., 2000). (C) Difference in GOT threshold for the different fingers in blind subjects, depending on the self-reported preferences for Braille reading. Data on the group of early blind subjects are displayed in grey. Data on patient MC are displayed in black.
GOT threshold in the finger that they report as being their preferred Braille reading digit (Fig. 9.2C). It seems reasonable to assume that performance in the GOT in the early blind is a result of long-term spatial learning which, in turn, is reflected by an expanded area of cortical representation of the fingertip (Pascual-Leone & Torres, 1993; Sterr et al., 1998). The differential performance gain in the GOT at the preferred finger for Braille identification (Fig. 9.2C) suggests that Braille reading experience confers a heightened capacity for spatial resolution, and makes it unlikely that blindness alone accounts for these results. GOT thresholds have been carefully studied and are believed to measure the human ability to resolve the spatial modulation of the afferent discharge (Johnson & Hsiao, 1992; Johnson & Phillips, 1981; Sathian & Zangaladze, 1996; Van Boven & Johnson, 1994). Performance at the spatial resolution limit of pattern recognition at the fingertip is accounted for by the spatial neural representation of the slowly adapting type 1 (SA1) afferent fiber system (Phillips, Johansson, & Johnson, 1992; Phillips & Johnson, 1981). GOT gratings, like Braille characters or embossed letters, are represented in a spatial pattern of neural activity across the population of SA1 afferents in an isomorphic fashion, that is, generating an image nearly identical to the actual stimuli (Phillips et al., 1992; Phillips, Johnson, & Hsiao, 1988). This would predict a critical role of primary somatosensory cortex, specifically area 3b, in the GOT. In fact, though certainly a lot more work is needed, preliminary studies suggest that lower GOT thresholds and, in general, higher fidelity of the tactual spatial percept in blind persons are correlated with greater tactile activation of the occipital cortex (Fig. 9.3; Kiriakopoulos, Baker, Hamilton, & Pascual-Leone, 1999).
FIG. 9.3. Differential activation of the somatosensory (arrows) and occipital cortex (arrows) in a representative, early blind subject, during tactile stimulation of the pad of the right and left index fingers. The right index finger was this subject’s preferred Braille reading finger and had a significantly lower GOT threshold. Note the much greater activation of the occipital cortex in response to stimulation of the preferred Braille reading finger (right index). Modified from Kiriakopoulos et al., 1999.
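The GOT threshold computation mentioned earlier, interpolation between the groove widths whose proportions of correct responses bracket 75%, can be made concrete with a short sketch. The groove widths and response proportions below are hypothetical, chosen only to illustrate the arithmetic, and are not data from any of the studies cited.

```python
# Illustrative sketch of the 75%-correct threshold interpolation for the GOT.
import numpy as np

groove_widths_mm = np.array([3.0, 2.0, 1.5, 1.2, 1.0, 0.75])   # wide (easy) to narrow (hard)
prop_correct     = np.array([1.00, 0.95, 0.90, 0.80, 0.70, 0.55])  # hypothetical performance

def got_threshold(widths, p_correct, criterion=0.75):
    """Interpolate the groove width at which performance crosses the criterion."""
    for i in range(len(widths) - 1):
        hi, lo = p_correct[i], p_correct[i + 1]
        if hi >= criterion >= lo:
            frac = (hi - criterion) / (hi - lo)            # position between the two widths
            return widths[i] + frac * (widths[i + 1] - widths[i])
    raise ValueError("performance never crosses the criterion")

print(f"GOT threshold ~ {got_threshold(groove_widths_mm, prop_correct):.2f} mm")
```

With these illustrative numbers the threshold falls at about 1.1 mm, between the 1.2 mm and 1.0 mm domes.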
In sighted subjects, Sathian, Zangaladze, Hoffman, and Grafton (1997), using the same GOT task, have shown that tactile discrimination may lead to increased metabolic activity in parieto-occipital areas in PET (Sathian et al., 1997), and that occipital TMS can actually interfere with GOT performance (Zangaladze, Epstein, Grafton, & Sathian, 1999). These results in the sighted have been interpreted as showing that visual imagery is an obligatory component of spatial discrimination and that if blocked (e.g., by TMS) performance is critically disrupted (Sathian et al., 1997; Zangaladze et al., 1999).
SERENDIPITY OF NATURE: IMAGERY IS NOT THE ISSUE

The role of the visual cortex in tactile processing is dramatically illustrated by MC, an early, totally blind woman who became alexic for Braille after bilateral occipital strokes (Hamilton, Keenan, Catala, & Pascual-Leone, 2000). The findings in this remarkable patient allow us to question visual imagery as the reason for the engagement of the occipital cortex in the GOT. MC, a non-insulin-dependent diabetic woman with retrolental fibroplasia and blindness “since birth,” was an extremely proficient Braille reader. She used Braille at work for 4 to 6 hours per day and read at a remarkable speed of 120 to 150 symbols per minute. She suffered an embolic basilar artery event, with the embolus eventually breaking off and causing bilateral, posterior cerebral artery strokes (Fig. 9.4). Despite the extensive, bihemispheric lesions, her neurological exam, including two-point discrimination and sensory testing,
FIG. 9.4. Bilateral posterior cerebral artery strokes in patient MC. Modified from Hamilton et al., 2000.
was normal. However, when she tried to read a Braille card sent to her, she was unable to do so. She stated that the Braille dots “felt flat” to her. She was able to “concentrate” and determine whether or not there was a raised dot in a given position in an isolated cell of Braille. Nonetheless, when attempting to read Braille normally, she found that she could not “extract enough information” to determine what letters, and especially what words, were written. It felt as if “having the fingers covered by thick gloves.” In addition to the difficulties decoding Braille, she was similarly unable to read by touch embossed roman letters, but remained able to identify the roughness of a surface or locate items on a board, identify her house key by touch, or discriminate different coins tactually. In addition, MC did have difficulties with the gratings orientation task (GOT) (Fig. 9.2C). The GOT threshold for patient MC, after the bilateral occipital strokes that left her alexic for Braille, was significantly higher than that of early blind Braille readers and similar to that of sighted controls for any of the fingers of either hand (Fig. 9.2C). In fact, MC found the task very frustrating, repeatedly stating, “I should be able to do this.” In her case, contrary to the findings in the early blind patient group, there was no advantage of the dominant Braille reading finger. This suggests that the occipital cortex might, in fact, participate in the spatial discrimination required for the GOT, and that the findings of activation of the occipital cortex during this task in the sighted might not be solely due to imagery. We tested MC on an imagery task previously used in sighted subjects (Salenius, Kajola, Thompson, & Kosslyn, 1995) which we adapted for application in the blind. MC’s results were compared with the performance of five early blind subjects and five sighted subjects who were tested at baseline and after receiving rTMS to transiently disrupt sensorimotor or occipital cortex. Five further sighted subjects completed an additional control experiment in which they received sham rTMS. The imagery task consisted of the auditory presentation of a cue word followed after one second by a letter name. Subjects entered keyboard responses with their right hand to the question prompted by the cue word, for the capitalized Roman form of the letter name. Subjects were allowed as long as they required in order to optimize accuracy. The five cue words were: “vertical” (Does the letter have a vertical line?); “symmetric” (Is the letter bilaterally symmetric?); “space” (Does the letter contain an enclosed space?); “diagonal” (Does the letter contain a diagonal line?); and “curve” (does the letter contain a curved line?). The task was presented in blocks of 60 randomized trials. Each block contained a different set of 12 randomly selected letters. For each letter all five cue conditions were presented. In the control blind and sighted subjects, a block of trials was performed before and one after rTMS at one of the two sites, and both sites were tested for each sub-
ject at different sessions. The block and rTMS-site order was counterbalanced across subjects. Repetitive TMS was performed with a Magstim Super Rapid stimulator (Magstim, Whitland, UK) with a 70-mm figure-eight coil. A single 10-min train of rTMS was applied to each anatomical location. Stimulation was delivered at 1 Hz frequency and an intensity 10% above the subject’s motor threshold intensity. These stimulation parameters have been shown to result in a transient suppression of excitability in the targeted cortical area (Pascual-Leone et al., 1998; Gangitano et al., in press; Maeda, Keenan, Tormos, Topka, & Pascual-Leone, 2000; Romero, Anshel, Sparing, Gangitano, & Pascual-Leone, 2002). Blind subjects stated that they imagined the feeling of the letters on their fingers and traced each letter’s outline in their imagination as they tried to complete the task. Conversely, sighted subjects reported visualizing the characters. Consistently with these subjective reports, blind subjects’ reaction time ratios were significantly higher following sensorimotor than after occipital rTMS (Fig. 9.5), and sighted subjects’ reaction times were significantly higher after occipital than after sensorimotor rTMS. The reaction time for the sham condition in sighted subjects was not different from the reaction time after sensorimotor rTMS. The number of errors did not differ significantly across rTMS site for blind or sighted subjects. Consistent with the TMS results in the blind, patient MC, despite her extensive occipital damage, performed the task without difficulties (Fig. 9.5). Indeed, she performed equally well in a similar imagery task in which she was required to imagine the Braille symbols and respond to questions such as, “Does it have two horizontal dots?” or “Does it have a dot in the left upper quadrant?” This is surprising, since she was unable to read the Braille symbols that she could readily imagine in her mind. However, when asked about how she did approach the task, she reported, like the blind subjects in the TMS experiment, that she mentally traced or explored the symbol with her eyes. Indeed, at this level of feature decomposition, she was able to read Braille, though she could not do it fast enough to be practically useful. Imagery in the blind, at least for the kind of tasks we tested, seems to be done with the “mind’s finger” rather than the “mind’s eye.” Therefore, MC’s poor performance in the GOT task is unlikely due to poor imagery abilities.
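For readers who want to see the structure of the imagery task laid out explicitly, the sketch below assembles one block as described above: 12 randomly chosen letters crossed with the five cue words, giving 60 trials presented in random order. The particular letter set, random seed, and helper names are assumptions made for illustration and are not the original stimulus lists.

```python
# Illustrative sketch of assembling one 60-trial block of the imagery task.
import random
import string

CUES = ["vertical", "symmetric", "space", "diagonal", "curve"]

def build_block(rng, n_letters=12):
    letters = rng.sample(string.ascii_uppercase, n_letters)        # a different letter set per block
    trials = [(cue, letter) for letter in letters for cue in CUES]  # 12 letters x 5 cues = 60 trials
    rng.shuffle(trials)                                             # randomize presentation order
    return trials

rng = random.Random(42)
block = build_block(rng)
print(len(block), "trials; first three:", block[:3])
```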
ROUGHNESS VERSUS DISTANCE

What, then, might be the critical determinant of whether a task is impaired in MC, and hence of the role of the occipital cortex in the blind? We have ventured earlier the hypothesis that the striate cortex might be related to the resolution of fine-graded spatial discrimination whether vision is available or not. In order to test this further, we conducted a task using arrays of raised dots spaced different
FIG. 9.5. Mean reaction time in sighted controls (n = 10), early blind subjects (n = 10) and patient MC during an imagery task. Results of the effects of TMS to the somatosensory and the occipital cortex on task performance are displayed in red and blue respectively. Note the increase in reaction time with occipital TMS in the sighted, but the increase of reaction time only with TMS to the somatosensory cortex in the blind. Patient MC performed at a level comparable with the baseline performance of blind and sighted.
distances from each other (Fig. 9.6A, generously provided by Stephen Hsiao, PhD) and asking subjects to judge either the roughness or the distance between the dots for a given array (Johnson & Hsiao, 1992). The arrays have raised dots that measure 1 mm in diameter and are raised 2 mm. The different arrays are composed of dots at various (though for each array constant) inter-dot distance. Subjects are asked to explore these arrays and report either the relative roughness or the relative perceived distance between dots. When judging roughness, closely spaced dots lead to an impression of little roughness, since the finger moved from the cusp of one dot to the cusp of the next. On the other hand, very widely spaced dots also give the impression of reduced roughness, as the array is experienced as isolated dots rather than a continuous surface of a given roughness. In fact, work from Johnson and Hsiao has documented a critical distance between the dots of approximately 3 mm at which the subjects report maximal roughness. Therefore, if reported roughness is plotted against the actual dot spacing, we might expect an inverted U-shape curve with a maximal value of approximately 3 mm. On the other hand, judging distance would be expected to generate a straight line when subjective reports are plotted against actual dot spacing, since the greater the gap between dots, the greater the distance perceived. This prediction is indeed fulfilled by the data (Fig. 9.6B).
FIG. 9.6. (A) Example of the arrays of raised dots presented to the pad of the fingers for judgment of roughness versus distance. (B) Performance of sighted controls in the roughness versus distance task. Actual dot spacing is plotted against the subjects’ report. Mean results for 10 sighted subjects are displayed for roughness judgment (squares) and distance judgment (circles). (C) fMRI activation in a representative sighted subject during roughness judgments. (D) fMRI activation in a representative sighted subject during distance judgments.
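The qualitative predictions just described, an inverted-U relation between dot spacing and perceived roughness peaking near 3 mm, together with a roughly linear relation for perceived distance, can be captured in a simple toy model. The functional forms and parameter values below are assumptions made for illustration only and are not fits to the data shown in Fig. 9.6B.

```python
# Purely illustrative toy model of the roughness and distance judgments.
import numpy as np

spacing_mm = np.linspace(1.0, 8.0, 8)          # inter-dot spacing of the arrays

# Inverted-U: a Gaussian-shaped profile centered on the ~3 mm spacing reported
# to feel maximally rough (width chosen arbitrarily for the example).
roughness = np.exp(-((spacing_mm - 3.0) ** 2) / (2 * 2.0 ** 2))

# Roughly linear scaling of perceived distance with actual spacing.
distance = 0.9 * spacing_mm

for s, r, d in zip(spacing_mm, roughness, distance):
    print(f"spacing {s:4.1f} mm -> roughness {r:4.2f}, distance {d:4.1f}")
```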
As in the imagery experiment, we then employed TMS to either the somatosensory or the occipital cortex in order to inquire which brain regions might play a functionally relevant role in the roughness and/or distance judgments. Following rTMS to the contralateral somatosensory cortex, we found a disruption of the roughness perception, with flattening of the subjective reports (Fig. 9.7A). Subjects still perceived approximately 3 mm as the inter-dot spacing generating the maximal impression of roughness for the array, but the amount of roughness experienced was suppressed. Indeed subjects reported that the arrays “did not feel as bumpy” or “were less sharp.” This is consistent with the extensive neurophysiologic data in humans and especially in primates, documenting that area 3b is the primary target of roughness representation (Johnson & Hsiao, 1992). Preliminary functional imaging studies appear to support this notion as well, revealing activation of contralateral somatosensory cortex for this roughness judgment (Fig. 9.6C). Remarkably, distance judgment was not affected at all by the TMS to the somatosensory cortex (Fig. 9.7). On
the other hand, rTMS to the occipital cortex did not affect roughness judgments but disrupted distance perception, such that subjects tended to scale increasing inter-dot spacing with less perceived distance increases than prior to the suppression of the striate and peristriate cortex (Fig. 9.7). Preliminary functional imaging studies are consistent with these results, demonstrating activation of the occipital cortex during the distance judgment (Fig. 9.6D). As in the GOT results, these findings might be interpreted as suggesting that visual imagery is engaged for the spatial (distance) representation and not for the roughness judgment, hence the differential effect of rTMS of somatosensory and occipital cortex on task performance. However, the results on MC can again be taken as an argument against imagery. MC shows, after her bilateral occipital strokes, flawless performance on the roughness judgment, but is essentially unable to perform the distance judgment, demonstrating extremely flat scaling with increasing dot spacing (Fig. 9.7).
FIG. 9.7. Effects of TMS to the somatosensory or the occipital cortex on performance in the roughness versus distance judgment task. In addition, results of patient MC on the same task are displayed. Results are presented overlaid on the graph of the baseline performance of sighted controls (see Fig. 9.6B). Note that TMS to the somatosensory cortex results in a flattening of the roughness judgment curve without affecting the distance judgments. TMS to the occipital cortex has the opposite effects and performance in patient MC is consistent with these results.
THE BLIND, THE SIGHTED, AND MC

Patient MC became alexic for Braille after bilateral occipital strokes, and was impaired in GOT and distance judgments, while she could make roughness discriminations and perform any number of other tactile tasks without trouble. These results suggest that tasks requiring precise reconstruction of spatial patterns might engage topographically organized cortical areas and that the striate cortex might be one (or the main) such area, at least in an early blind person. On the other hand, the findings in the sighted reveal activation of the occipital cortex and disruption of task performance by occipital TMS for the same kind of spatial, tactile tasks. It seems that the phenomenology might be the same, but in the case that vision is available, we might think of “imagery” as the cause of recruitment of the occipital cortex. In the early blind, when vision is not available, we might be tempted to ascribe the engagement of the occipital cortex to cross-modal plasticity, arguing that the deafferented visual cortex has been recruited for processing of information in other modalities. In keeping with this hypothesis, one would need to argue that different mechanisms account for the activation of the occipital cortex in sighted subjects during specific tasks and in congenitally blind subjects. An alternative to the existence of such plasticity in primary sensory cortical models would be that areas like the striate cortex are part of distributed metamodal structures. This would mean that sensory cortical networks do not process information strictly from single modalities, but are instead capable of performing certain sets of computations and transformations on data irrespective of the sensory modalities from which that data is received. In other words, with the prolonged deafferentation of early blindness the functional and structural identity of the occipital cortex may change from visual cortex to tactile or auditory cortex. On the other hand, it may be the case that the occipital cortex is normally capable of the kinds of functions and computations required by the processing of nonvisual information, and that this activity is unmasked by the state of blindness. One way to differentiate these two possibilities and to reveal which of them truly characterizes the occipital cortex would be to investigate the possibility of transiently inducing nonvisual processing in the occipital cortex in a population of individuals totally deprived of vision.
MASKING THE EYES TO UNMASK THE BRAIN

Normally sighted individuals are visually deprived for 5 days. Subjects stay at the General Clinical Research Center (GCRC) at Beth Israel Deaconess Medical Center throughout the study, and functional and behavioral changes in their occipital cortex are investigated using fMRI and occipital rTMS.
Subjects are right-handed, medication-free, healthy individuals between the ages of 18 and 35, with 20/20 vision (either corrected or uncorrected), normal audiograms, and normal neurological exam, including detailed sensory testing. Subjects are randomized to being blindfolded or not. Exclusion criteria include any history of neurologic disorders, head trauma, or prolonged loss of consciousness, learning disorders or dyslexia, central or peripheral disorder of somatic sensation, metal in the body, skin damage on the hands, any history of switching handedness, any smoking history (to prevent frequent departures from the GCRC), and any prior experience with Braille. Blindfolded subjects are fitted with a specially designed blindfold that completely prevents all light perception. From a behavioral point of view, it is worth noting that practically all blindfolded subjects experience visual hallucinations that resemble those reported by patients with Charles-Bonnet syndrome (Pascual-Leone & Hamilton, 2002; Pascual-Leone & Hamilton, 2001). These hallucinations generally start after 2 to 3 days of visual deafferentation (never before 12 hours of blindfolding) and cease immediately when the blindfold is removed. Generally, these hallucinations are well formed and appear to represent situationally appropriate percepts. For example, subject SN, a 29-year-old female, reported, 12 hours after blindfolding, seeing “a greenish face with big eyes reflected in a mirror.” This was the first instance of visual hallucination in her case and occurred when she was standing in front of what she knew to be a mirror. All subjects, such as is the case in Charles-Bonnet syndrome, recognized the unreality of these hallucinatory percepts. Nevertheless, all subjects commented on the great vividness and rich detail of the imagery. During the experiment, subjects were intensively taught Braille by daily sessions with an instructor of Braille from the Caroll School for the Blind (Kauffman, Theoret, & Pascual-Leone, 2002). During these Braille classes all subjects were blindfolded. Nevertheless, subjects in the blindfolded group showed significantly greater learning (Fig. 9.8). This finding is consistent with the notion that visual deafferentation results in an advantage for the acquisition of skills demanding precise, tactile discriminations. This interpretation is consistent with the findings of the activation of the occipital cortex with tactile Braille reading in the blind, and with the suggestion that the preferred reading finger is associated with greater occipital activation than the nonpreferred finger (Kiriakopoulos et al., 1999). Blindfolded and nonblindfolded control subjects were blindfolded temporarily for serial MRI studies. Sets of functional MRI data were collected at baseline prior to the study, repeatedly during the blindfolding period, and on the sixth day, one day after removal of the blindfold. These serial fMRI studies show increasing activation of the striate and peristriate cortex during tactile and au-
FIG. 9.8. Results of the Braille character recognition testing in sighted controls and blindfolded subjects over the course of the 5 days of the experiment (modified from Kauffmann et al., 2002).
On the first day of the blindfolding period, comparison of tactile stimulation versus rest in both blindfolded and sighted subjects reveals activation in the contralateral somatosensory cortex, but none in the occipital cortex. On the fifth day of blindfolding, comparison of stimulation versus rest in blindfolded subjects shows additional, increasing BOLD activation in occipital, "visual," regions (Fig. 9.9). Sighted subjects continue to show no significant activation in the occipital cortex. On the sixth day, approximately 12 to 24 hours following removal of the blindfold, comparison of stimulation to rest reveals no occipital activation in either the sighted or the blindfolded group. Similarly, activation of the striate cortex during auditory stimulation is seen at the end of the blindfold period but is absent prior to the blindfolding and in the control subjects (Fig. 9.9). TMS can be used in this experiment to assess the role that the emerging activity in the striate cortex might have in tactile or auditory processing. A Braille character recognition task was given to the subjects (either blindfolded or sighted controls) before and after TMS. The right index finger was tested using a Braille stimulator and software designed specifically for Braille discrimination tasks. The task consisted of 36 pairs of Braille characters presented in succession, with a brief pause (1400 msec) between each pair. Subjects were instructed to identify whether the pair of characters presented had been the same Braille formation or two different Braille formations by simply saying "same" or "different." Subject responses were recorded.
Subjects' testing fingers were immobilized using double-sided tape. Subjects were given a practice trial before an initial baseline performance was recorded. TMS was then delivered to the occipital pole, targeting the striate cortex. This was immediately followed by a second set of Braille stimuli. Subjects were tested a third and final time after a 10-min rest period. The Braille character sets were randomized for each subject. All subjects were tested twice, on the fifth and then again on the sixth day of the study. For the blindfolded subjects this meant that the first test was done after 5 days of blindfolding and the second test less than 24 hours after removal of the blindfold. All stimulation was delivered using a Magstim 70-mm figure-eight coil, with pulses generated by a Magstim Rapid Stimulator (Magstim Inc., Dyfed, UK). The stimulation coil was held tangentially, flat against the scalp, at the Oz position of the 10–20 electrode system. This scalp position overlies the tip of the calcarine fissure in most subjects, and when TMS is applied there, sighted subjects experience phosphenes thought to originate in V1 (Kammer, 1999). A single train of 300 pulses of 1 Hz TMS was delivered to the visual cortex at an intensity of 110% of each subject's motor threshold. Motor threshold was determined following the recommendations of the International Federation of Clinical Neurophysiology. These TMS parameters are well within current safety guidelines (Wassermann, 1998).
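As a rough illustration of this testing procedure, the sketch below (written in Python) implements the same/different trial loop and the percent-difference-from-baseline score used to summarize performance in the next paragraph. It is only a minimal sketch under stated assumptions: the functions present_pair and collect_response, the pair lists, and all other names are hypothetical placeholders, not the authors' actual stimulation software.

    import time

    def run_block(pairs, present_pair, collect_response, pause_sec=1.4):
        """Present pairs of Braille characters and count same/different errors.

        pairs: list of (char_a, char_b) tuples (36 pairs in the experiment).
        present_pair, collect_response: supplied by the hardware/response layer.
        pause_sec: pause between pairs (1400 msec in the experiment).
        """
        errors = 0
        for char_a, char_b in pairs:
            present_pair(char_a, char_b)   # deliver the two characters of the pair in succession
            answer = collect_response()    # spoken "same" or "different"
            correct = "same" if char_a == char_b else "different"
            if answer != correct:
                errors += 1
            time.sleep(pause_sec)          # brief pause before the next pair
        return errors

    def percent_difference_from_baseline(block_errors, baseline_errors):
        """Express a block's error count as percent difference from baseline."""
        if baseline_errors == 0:
            return float("nan")            # undefined if the baseline block was error-free
        return 100.0 * (block_errors - baseline_errors) / baseline_errors

On such a scheme, the baseline block, the block immediately following real or sham TMS, and the block after the 10-min rest would each be scored and compared, as in Figure 9.10.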
FIG. 9.9. FMRI results in the blindfold experiment. Results of the contrast between sighted controls and blindfolded subjects are presented for Day 1 and Day 5 of the experiment and for the tactile and the auditory tasks. The tactile condition consisted of the stimulation of the pad of the right index finger by a moving brush. In this context the lateralization of the occipital activation and the possible recruitment of area MT are notable. The auditory condition consisted of the identification of an odd, higher frequency tone. Note the activation of the peripheral V1 on Day 5. On Day 6, after removal of the blindfold, results reverted to those on Day 1.
A train of repetitive TMS of these characteristics, delivered to this scalp position in sighted subjects, transiently disrupts visual perception and visual imagery (Kosslyn et al., 1999). For each subject, the number of errors in each block of trials was expressed as the percent difference from baseline performance. Figure 9.10 shows the effects in the blindfolded subjects. On Day 5, sham TMS of the occipital cortex had no effect on tactile Braille character discrimination, but real TMS significantly worsened performance. On Day 6, however, neither sham nor real TMS disrupted tactile discrimination. This is remarkable, as it reveals that the TMS effects in the blindfolded subjects changed significantly within less than 24 hours of removal of the blindfold: occipital TMS had a marked effect while the blindfold was on, but no effect once the blindfold had been removed. These results suggest that the occipital cortex can be recruited for processing tactile and auditory information when it is deafferented of visual input, even in transiently blinded normal subjects. These findings have several implications. The speed of the functional changes seen in the occipital cortex makes it highly improbable that new cortical connections are being established de novo in sighted individuals. The findings therefore strongly suggest that tactile and auditory connections to the "visual cortex" are present in sighted as well as in blind individuals and can be unmasked under behaviorally desirable conditions.
FIG. 9.10. Graph summarizing the impact on tactile Braille character recognition of TMS to the occipital cortex (real versus sham TMS) on Day 5 (last day of visual deprivation) and Day 6 (after blindfold removal) in the blindfolded subjects. Average results in 10 blindfolded subjects are presented. Note that only real TMS on Day 5 has an effect.
Furthermore, the fact that occipital networks are able to respond to nonvisual information so readily suggests that the occipital cortex functions in a metamodal capacity and is somehow able to process sensory information without strict regard for its modality of input. The TMS results demonstrate that disruption of the occipital cortex significantly affects the performance of a tactile task in the blindfolded subjects. Nonblindfolded control subjects showed no significant effects of the stimulation. These findings confirm those obtained in the study with the early blind (Cohen et al., 1997) and are consistent with the findings that the occipital cortex contributes to tactile acuity in the blind (Kiriakopoulos et al., 1999; van Boven et al., 2000). However, it is important to realize that these findings do not demonstrate that the mechanisms of recruitment of the occipital cortex for tactile processing in the blind and the blindfolded are the same. Indeed, it is conceivable that different mechanisms are engaged in the early blind and in adults deprived of visual input, and more studies are needed to address this issue further.
A METAMODAL "VISUAL" CORTEX

The occipital cortex, including the striate cortex, plays a critical role in a variety of epicritic tactile discrimination abilities in the early blind, and this role appears to be essential for the acquisition of Braille reading. In sighted subjects, even after a rather short period of visual deprivation of a few days, auditory and tactile inputs into the striate cortex are unmasked, and these inputs carry functional significance. It seems clear, then, that tactile and auditory connections to the occipital cortex must exist in sighted, adult subjects. This suggests that the primary visual cortex is multimodal, and possibly specialized for particular kinds of functions rather than for the processing of specific sensory modalities. In the brain of normal sighted individuals, the presentation of visual stimuli normally results in preferential activation of the occipital cortex, auditory information preferentially activates the auditory cortex, and so forth. However, we might envision that the occipital cortex only "appears" to be a visual cortex because it preferentially uses vision, when in fact it is a detailed, epicritic operator for the detection of local features and precise spatial discrimination, regardless of input modality. If primary sensory modules function as metamodal operators, capable of performing sets of processes on data irrespective of their sensory modality of origin, what drives the tendency for information from one modality to be processed primarily in one cortical region and information from another modality to be processed primarily in another? In order to support both the metamodal hypothesis and the notion of functional specificity in the cortex, one needs a model that can account for both the multipotentiality of metamodal processing and the functional specificity commonly observed in the brain.
Such a model exists in the field of computational neuroscience (Jacobs, 1999) and can readily be applied to our findings with the blind and the blindfolded (Pascual-Leone & Hamilton, 2001). Competition between expert operators can explain how metamodal cortical networks that are predisposed to certain kinds of functions eventually become specialized to preferentially process information from a particular modality, and how this development is altered in the case of peripheral blindness or visual deprivation. Such a concept does not require direct feed-forward connections between auditory, somatosensory, and visual cortices, although there is in fact new evidence of direct inputs of tactile and auditory information into V1 (Falchier et al., 2001; Rockland & Ojima, 2001). In addition, feedback connections from multimodal association areas may mediate behaviorally relevant processing through changes in connectivity. Studies of functional connectivity using novel MRI techniques in humans, and particularly in animals, are essential to clarify this issue further.
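The competition-between-experts idea can be made concrete with a minimal mixture-of-experts sketch in the spirit of Jacobs (1999). The Python code below is only a toy illustration, not a model taken from this chapter: the two "modalities" are arbitrary clusters of input vectors, the target transformation is arbitrary, and all names and parameter values are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    n_experts, dim = 2, 4
    experts = [rng.normal(scale=0.1, size=(dim, dim)) for _ in range(n_experts)]  # linear "expert" operators
    gate = np.zeros((n_experts, dim))                                             # gating (routing) weights

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def train_step(x, target, lr=0.05):
        priors = softmax(gate @ x)                           # how strongly each expert is currently selected
        preds = [W @ x for W in experts]
        errors = np.array([np.sum((target - p) ** 2) for p in preds])
        # Experts that predict this input better receive a larger share of the learning signal,
        # so each expert gradually specializes on (competes for) a subset of the inputs.
        posteriors = softmax(-errors + np.log(priors + 1e-12))
        for k in range(n_experts):
            experts[k] += lr * posteriors[k] * np.outer(target - preds[k], x)
            gate[k] += lr * (posteriors[k] - priors[k]) * x
        return posteriors

    # Two clusters of inputs stand in for two input "modalities."
    for _ in range(2000):
        modality = rng.integers(2)
        x = rng.normal(loc=3.0 if modality else -3.0, size=dim)
        train_step(x, np.roll(x, 1))                         # an arbitrary transformation to be learned

After training, each expert tends to handle mainly one cluster; removing one cluster of inputs (analogous to visual deafferentation) shifts the competition so that the freed expert can be captured by the remaining inputs, which is the kind of behavior the metamodal account requires.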
ACKNOWLEDGMENTS

The research reported in this chapter was supported by the National Eye Institute (RO1EY12091), the National Institute of Mental Health (RO1MH57980, RO1MH60734), and the Harvard-Thorndike General Clinical Research Center (NCRR MO1 RR01032).
REFERENCES

Baron-Cohen, S., & Harrison, J. E. (Eds.). (1997). Synaesthesia. Oxford, UK: Blackwell.
Bavelier, D., & Neville, H. (2002). Cross-modal plasticity: Where and how? Nature Reviews Neuroscience, 3, 443–452.
Büchel, C., Price, C., Frackowiak, R. S. J., et al. (1998). Different activation patterns in the visual cortex of late and congenitally blind subjects. Brain, 121, 409–419.
Burton, H., Snyder, A. Z., Conturo, T. E., Akbudak, E., Ollinger, J. M., & Raichle, M. E. (2002). Adaptive changes in early and late blind: A fMRI study of Braille reading. Journal of Neurophysiology, 87, 589–607.
Calvert, G. A., Campbell, R., & Brammer, M. J. (2000). Evidence from functional magnetic resonance imaging of crossmodal binding in the human heteromodal cortex. Current Biology, 10, 649–657.
Cohen, L. G., Bandinelli, S., Sato, S., Kufta, C., & Hallett, M. (1991). Attenuation in detection of somatosensory stimuli by transcranial magnetic stimulation. Electroencephalography and Clinical Neurophysiology, 81, 366–376.
Cohen, L. G., Celnik, P., Pascual-Leone, A., Corwell, B., Falz, L., Dambrosia, J., et al. (1997). Functional relevance of cross-modal plasticity in blind humans. Nature, 389, 180–183.
Cytowic, R. E. (1989). Synaesthesia: A union of the senses. New York: Springer-Verlag.
de Gelder, B. (2000). More to seeing than meets the eye. Science, 289, 1148–1149.
Driver, J., & Spence, C. (2000). Multisensory perception: Beyond modularity and convergence. Current Biology, 10, R731–R735.
Falchier, A. (2001). Extensive projections from the primary auditory cortex and polysensory area STP to peripheral area V1 in the macaque. Society for Neuroscience Abstracts, 27, 511–521.
Gangitano, M., Valero-Cabre, A., Tormos, J. M., Mottaghy, F. M., Romero, J. R., & Pascual-Leone, A. (2000). Modulation of input-output curves by low and high frequency repetitive transcranial magnetic stimulation of the motor cortex. Clinical Neurophysiology.
Grossenbacher, P. R., & Lovelace, C. T. (2001). Mechanisms of synesthesia: Cognitive and physiological constraints. Trends in Cognitive Sciences, 5, 36–41.
Hamilton, R., Keenan, J. P., Catala, M. D., & Pascual-Leone, A. (2000). Alexia for Braille following bilateral occipital stroke in an early blind woman. NeuroReport, 7, 237–240.
Hamilton, R. H., & Pascual-Leone, A. (1998). Cortical plasticity associated with Braille learning. Trends in Cognitive Sciences, 2, 168–174.
Innocenti, G. M. (1995). Exuberant development of connections, and its possible permissive role in cortical evolution. Trends in Neurosciences, 18, 397–402.
Jacobs, R. (1999). Computational studies of the development of functionally specialized neural modules. Trends in Cognitive Sciences, 3, 31–38.
Johnson, K. O., & Hsiao, S. S. (1992). Neural mechanisms of tactual form and texture perception. Annual Review of Neuroscience, 15, 227–250.
Johnson, K. O., & Phillips, J. R. (1981). Tactile spatial resolution: Part 1. Two-point discrimination, gap detection, grating resolution, and letter recognition. Journal of Neurophysiology, 46, 1177–1191.
Kammer, T. (1999). Phosphenes and transient scotomas induced by magnetic stimulation of the occipital lobe: Their topographic relationship. Neuropsychologia, 37, 191–198.
Kauffman, T., Theoret, H., & Pascual-Leone, A. (2002). Braille character discrimination in blindfolded human subjects. NeuroReport, 13, 1–4.
Kiriakopoulos, E., Baker, J., Hamilton, R., & Pascual-Leone, A. (1999). Relationship between tactile spatial acuity and brain activation on brain functional magnetic resonance imaging. Neurology, 52(Suppl. 2), A307.
Kosslyn, S. M., Pascual-Leone, A., Felician, O., Camposano, S., Keenan, J. P., Thompson, W. L., et al. (1999). The role of area 17 in visual imagery: Convergent evidence from PET and rTMS [published erratum appears in Science, 284(5416), 197]. Science, 284, 167–170.
Maccabee, P. J., Amassian, V. E., Cracco, R. Q., Cracco, J. B., Rudell, A. P., Eberle, L. P., et al. (1991). Magnetic coil stimulation of human visual cortex: Studies of perception. In W. J. Levy, R. Q. Cracco, A. T. Barker, & J. Rothwell (Eds.), Magnetic motor stimulation: Basic principles and clinical experience (EEG Suppl. 43, pp. 111–120). Amsterdam: Elsevier Science.
Maeda, F., Keenan, J. P., Tormos, J. M., Topka, H., & Pascual-Leone, A. (2000). Modulation of corticospinal excitability by repetitive transcranial magnetic stimulation. Clinical Neurophysiology, 111, 800–805.
Mesulam, M. M. (1998). From sensation to cognition. Brain, 121, 1013–1052.
Pascual-Leone, A., Bartres-Faz, D., & Keenan, J. P. (1999). Transcranial magnetic stimulation: Studying the brain-behaviour relationship by induction of "virtual lesions." Philosophical Transactions of the Royal Society B: Biological Sciences, 354, 1229–1238.
Pascual-Leone, A., Cohen, L. G., Brasil-Neto, J. P., Valls-Sole, J., & Hallett, M. (1994). Differentiation of sensorimotor neuronal structures responsible for induction of motor evoked potentials, attenuation in detection of somatosensory stimuli, and induction of sensation of movement by mapping of optimal current directions. Electroencephalography and Clinical Neurophysiology, 93, 230–236.
Pascual-Leone, A., & Hamilton, R. (2002). Metamodal cortical processing in the occipital cortex of blind and sighted subjects. In S. G. Lomber & R. Galuske (Eds.), Virtual lesions: Understanding perception and cognition with reversible deactivation technique. Oxford: Oxford University Press.
Pascual-Leone, A., & Hamilton, R. H. (2001). The metamodal organization of the brain. Progress in Brain Research, 134, 427–445.
Pascual-Leone, A., Tormos, J. M., Keenan, J., Tarazona, F., Canete, C., & Catala, M. D. (1998). Study and modulation of human cortical excitability with transcranial magnetic stimulation. Journal of Clinical Neurophysiology, 15, 333–343.
Pascual-Leone, A., & Torres, F. (1993). Plasticity of the sensorimotor cortex representation of the reading finger in Braille readers. Brain, 116, 39–52.
Pascual-Leone, A., Walsh, V., & Rothwell, J. (2000). Transcranial magnetic stimulation in cognitive neuroscience—Virtual lesion, chronometry, and functional connectivity. Current Opinion in Neurobiology, 10, 232–237.
Phillips, J. R., Johansson, R. S., & Johnson, K. O. (1992). Responses of human mechanoreceptive afferents to embossed dot arrays scanned across fingerpad skin. Journal of Neuroscience, 12, 827–839.
Phillips, J. R., & Johnson, K. O. (1981). Tactile spatial resolution: Part 2. Neural representation of bars, edges, and gratings in monkey primary afferents. Journal of Neurophysiology, 46, 1192–1203.
Phillips, J. R., Johnson, K. O., & Hsiao, S. S. (1988). Spatial pattern representation and transformation in monkey somatosensory cortex. Proceedings of the National Academy of Sciences, 85, 1317–1321.
Rockland, K. S., & Ojima, H. (2001). Calcarine area V1 as a multimodal convergence area. Society for Neuroscience Abstracts, 27, 511–520.
Romero, R., Anshel, D., Sparing, R., Gangitano, M., & Pascual-Leone, A. (2002). Subthreshold low frequency repetitive transcranial magnetic stimulation selectively decreases facilitation in the motor cortex. Clinical Neurophysiology, 113, 101–107.
Sadato, N., Pascual-Leone, A., Grafman, J., Deiber, M.-P., Ibanez, V., & Hallett, M. (1998). Neural networks for Braille reading by the blind. Brain, 121, 1213–1229.
Sadato, N., Pascual-Leone, A., Grafman, J., Ibañez, V., Deiber, M.-P., Dold, G., et al. (1996). Activation of the primary visual cortex by Braille reading in blind subjects. Nature, 380, 526–528.
Salenius, S., Kajola, M., Thompson, W. L., & Kosslyn, S. M. (1995). Electroencephalography and Clinical Neurophysiology, 95, 453–462.
Sathian, K., & Zangaladze, A. (1996). Tactile spatial acuity at the human fingertip and lip: Bilateral symmetry and interdigit variability. Neurology, 46, 1464–1466.
Sathian, K., Zangaladze, A., Hoffman, J. M., & Grafton, S. T. (1997). Feeling with the mind's eye. NeuroReport, 8, 3877–3881.
Stein, B. E., & Meredith, M. A. (1993). The merging of the senses. Cambridge, MA: MIT Press.
Sterr, A., Muller, M. M., Elbert, T., Rockstroh, B., Pantev, C., & Taub, E. (1998). Perceptual correlates of changes in cortical representation of fingers in blind multifinger Braille readers. Journal of Neuroscience, 18, 4417–4423.
Uhl, F., Franzen, P., Lindinger, G., et al. (1991). On the functionality of the visually deprived occipital cortex in early blind persons. Neuroscience Letters, 124, 256–259.
Uhl, F., Franzen, P., Podreka, I., et al. (1993). Increased regional cerebral blood flow in inferior occipital cortex and the cerebellum of early blind humans. Neuroscience Letters, 150, 162–164.
van Boven, R., Hamilton, R., Kaufman, T., Keenan, J. P., & Pascual-Leone, A. (2000). Tactile spatial resolution in blind Braille readers. Neurology, 54, 2230–2236.
Van Boven, R. W., & Johnson, K. O. (1994). The limit of tactile spatial resolution in humans: Grating orientation discrimination at the lip, tongue, and finger. Neurology, 44, 2361–2366.
Walsh, V., & Cowey, A. (2000). Transcranial magnetic stimulation and cognitive neuroscience. Nature Reviews Neuroscience, 1, 73–79.
Walsh, V., & Pascual-Leone, A. (2003). Neurochronometrics of mind: TMS in cognitive science. Cambridge, MA: MIT Press.
Walsh, V., & Rushworth, M. (1999). A primer of magnetic stimulation as a tool for neuropsychology. Neuropsychologia, 37, 125–135.
Wassermann, E. M. (1998). Risk and safety of repetitive transcranial magnetic stimulation: Report and suggested guidelines from the International Workshop on the Safety of Repetitive Transcranial Magnetic Stimulation, June 5–7, 1996. Electroencephalography and Clinical Neurophysiology, 108, 1–16.
Zangaladze, A., Epstein, C. M., Grafton, S. T., & Sathian, K. (1999). Involvement of visual cortex in tactile discrimination of orientation. Nature, 401, 587–590.
Conclusions: Touch and Blindness

Soledad Ballesteros
Universidad Nacional de Educación a Distancia, Madrid, Spain
Morton A. Heller
Eastern Illinois University
Research on the psychology of touch and blindness has important implications for theoretical approaches in psychology and neuroscience. However, researchers in the areas of psychology and neuroscience normally work in isolation, present their research at different conferences, and even publish their work in different journals. The idea of this book was to promote communication between scientists from psychology and neuroscience in order to arrive at theoretical advances in these two closely interrelated fields (Ballesteros & Heller, 2004). In this final chapter, we offer a number of comments on the field of touch, blindness, and neuroscience in light of the ideas expressed in the preceding chapters. Our general view is rather optimistic. The field of touch is a flourishing research area today. Moreover, the interaction between the psychology of touch and cognitive neuroscience is a fertile field that is producing important results. New non-invasive imaging techniques such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) are very exciting research tools for discovering the complex relationships between areas in the brain of sighted and blind persons and haptic behavior (see chaps. 7, 8, and 9, this volume). These new techniques, especially fMRI, are very powerful tools for investigating the relationship between the human brain and attention, perception, memory, and other cognitive processes.
Cognitive neuroscience researchers sometimes use hypotheses derived from behavioral studies to investigate areas in the brain that are active during the performance of certain cognitive and behavioral tasks. On the other hand, findings from cognitive neuroscience are also very important for experimental psychologists, as they reveal the biological bases of visual, haptic, and crossmodal processing of objects and patterns. Neuroimaging results can help the experimental psychologist to formulate new theories and models about areas in the brain that are relatively preserved at early stages of some neurological diseases (e.g., Ballesteros & Reales, 2004a). The use of these new neuroimaging techniques has produced a great deal of interest, as they appear to represent a powerful window into cognitive processing in general, and haptic exploration in particular.

Vision and haptics show substantial similarities as well as important differences in the way they extract information about shapes, objects, and space. Interestingly, however, both modalities are capable of encoding and decoding important structural spatial information about the way an object's different parts relate to one another in the three-dimensional (3-D) world. In fact, there has been some speculation that visual and haptic object representations might be shared between these two perceptual modalities (Easton, Greene, & Srinivas, 1997; Reales & Ballesteros, 1999). Vision and touch are used by humans in everyday life to extract useful information that permits the recognition of shapes and objects. Sighted perceivers are quite aware when shapes and objects are visually perceived, but they are less aware of their use of touch for perceptual purposes, and tend to think of the manipulatory functions of their hands. However, the haptic identification of familiar 3-D objects is both accurate and fast, especially when explorers freely move their hands (Ballesteros, Manga, & Reales, 1997; Klatzky, Lederman, & Metzler, 1985). Objects in the world have a geometrical structure that shows how their different parts are interrelated. This geometrical structure can be perceived visually and haptically; in the case of the sighted explorer, it is most often perceived bimodally.

The authors who contributed chapters to this book asked a number of important questions that we discuss in turn. Some of these questions are: How do humans process spatial information? How do the visual and tactile modalities relate to each other? What kind of information is contained in visual and tactile object representations? Do these representations share the same neural substrate, or are they based on different specific cortical areas? Are the same cortical areas activated in blind and sighted Braille readers? How is the cortex altered as a function of experience in sighted and blind people?
In the first section of this final chapter we consider a number of issues in the psychology of touch that are addressed by the contributing authors of this volume. We start with the main achievements and latest developments in the psychology of touch. Special attention is devoted to the perception of objects in space by the blind and to what blindness can tell us about spatial cognition. Several authors in this book have devoted considerable effort to studying how the blind individual perceives objects in space, with the aim of learning what perception in the blind can tell us about spatial cognition (see chaps. 2, 3, and 4, this volume). As the idea of this volume was to bring together contributions from the fields of psychology and neuroscience, we also discuss some recent convergent results from both fields obtained in different laboratories. These results support the idea that the two modalities, vision and touch, may share the same underlying mental representations. A series of recent behavioral and neuroscience results supports the idea that vision and active touch (haptics) encode the geometrical structure of objects in the same way for familiar and nonsense objects (see chaps. 5 and 7, this volume). In the field of neuroscience, researchers have investigated the cerebral cortical areas involved in the tactile perception of form. Although the relevant neural processes remain controversial, a large body of findings from several laboratories and different types of stimuli (shapes, objects, Braille dots) converges on the idea that visual cortical areas in the extrastriate cortex are active during tactile perception (chap. 8, this volume) and haptic perception of nonsense objects (chap. 7, this volume). Moreover, important findings reported in chapter 9 by Pascual-Leone and his collaborators suggest that the medial occipital areas are recruited for Braille reading. Other researchers emphasize the role of the higher-order extrastriate cortex. We discuss the results that support these proposals later in this chapter.
SENSORY SUBSTITUTION

Molyneux asked how a person born without sight would react to the world given the restoration of vision (see Morgan, 1977). The basic issue was whether or not the person would be able to immediately understand the visual impressions of objects that were previously known only through touch. Locke assumed that experience is a necessary precondition for the understanding of objects by the senses, and argued that the answer to this question would be negative. While the philosophical issues are interesting and important, engineers have been acting on the assumption that touch can substitute for the absence of vision. They have been pragmatic and have simply raced ahead in the development of sensory aids and prosthetic devices for blind people.
The most dramatic early example of this was the Tactile Vision Substitution System (TVSS), described and studied at length by Bach-y-Rita and his colleagues (Bach-y-Rita, 1972). The original TVSS linked a TV camera to a computer and an array of vibrating stimulators on the skin of the back. The basic idea was to allow persons to "see" with their skin (see Sherrick, 1991). A practical derivative of the TVSS was the Optacon, a reading machine for blind people. The Optacon allows a blind person to read printed text by translating the visual input from a small camera into a vibrotactile display. Muñoz (chap. 6, this volume) has been working on the development of a modern and sophisticated virtual reality device. Most current devices are limited to a single finger, as in the Phantom. However, the more capable systems provide a richer sort of input to multiple fingers, on the assumption that this can allow blind people to obtain useful 3-D information about the world. The successful substitution of haptic for visual information would only make sense if haptics is capable of interpreting very similar spatial information. On this view, haptics and vision may be fundamentally similar in representing space.
THE PSYCHOLOGY OF TOUCH

More than a decade ago, Heller (1991), in the introductory chapter of a book entitled "The Psychology of Touch," called our attention to the historical lack of interest in studying touch. For more than a century, experimental psychologists were interested in the study of visual perception, and very little attention was paid to haptic perception. Since the Heller and Schiff (1991) volume was published, things have clearly changed. We can safely say that the field of touch has flourished during the last decade. A number of laboratories around the world are now dedicating considerable effort to investigating how touch works. The aim of the current volume is to present recent findings in the field of haptics, with a special focus on the intersection between the psychology of touch, blindness, and neuroscience. There is no doubt that human vision is a very remarkable perceptual system. Vision is a precise and accurate perceptual modality that allows the sighted to extract very useful information about the structure of shapes and objects, the spatial relations among the parts of objects, and the spatial relations among objects in the environment. However, vision is not the only perceptual modality that allows human perceivers to extract useful information about objects and shapes. Active haptic explorers are also quite fast at processing important information on a wide variety of dimensions of shapes and objects, such as stimulus extent, orientation, and curvature (for a review, see Appelle, 1991). The haptic perceptual system readily provides excellent information about a number of dimensions of raised-line shapes and objects, namely surface texture, hardness, softness, and thermal properties (e.g., Klatzky, Lederman, & Reed, 1987).
Even young blind and sighted school children perform reasonably well in a series of haptic tasks that require material and texture discrimination, spatial orientation, object naming, and tests of short-term and long-term memory. Moreover, they understand the concept of constancy within a series of haptic stimuli that vary in form, shape, texture, or in more than one dimension (Ballesteros, Bardisa, Reales, & Muñiz, 2004). Quite interestingly, Klatzky et al. (1985) showed that a large number of familiar objects pertaining to several natural categories were accurately identified by touch (96% correct). Note that failure to haptically identify familiar objects is an indication of parietal damage (Critchley, 1953). For blindfolded sighted individuals, only about 2 to 4 minutes were necessary to name most of the haptically explored familiar objects. These findings mean that the identification of objects with active touch alone (the haptic modality, or the combination of touch and movement) is extraordinarily efficient and fast when all of the fingers explore the objects and participants are free to move their hands during object exploration. Performance is not as accurate or fast when restrictions are imposed on the way observers explore the objects. In a recent study, Jansson and Monaci (2004) investigated how haptic identification is affected by the number of fingers (from one to ten) available for object exploration. The results show that the number of fingers involved in object exploration had a significant effect on the number of errors and on exploration time. The largest improvement occurred between one finger and two: exploring objects with two fingers was faster and more accurate than using just one finger, and the number of errors dropped from 25% with one finger to less than 10% with two-finger exploration. However, the use of 3, 4, or 10 fingers did not significantly improve performance further. This suggests that a great deal of information is integrated with the use of just two fingers. It is interesting to note that even when the task was performed with all of the fingers, accuracy was lower than that obtained by Klatzky et al. (1985). The difference in performance could be explained by the limitation of hand movements and the fact that the objects were clamped in place, so the weight of the objects could not be perceived. Also, the objects were different, and so familiarity may not have been comparable.
BODY-CENTERED REFERENCE FRAMES

Millar (chap. 2, this volume) alerts us that blindness presents an apparent paradox, since vision is our most obvious modality for obtaining spatial information, and yet haptic performance can be very good. She points out that visual experience is not necessary for solving spatial problems.
Furthermore, she shows that, paradoxically, visual illusions also occur in touch. She raises a number of important issues for psychology and cognitive neuroscience and argues that it is necessary to clarify these issues in order to understand the effects of blindness and how people process spatial information. Millar (1994, 2000, chap. 2, this volume) has proposed a neurological metaphor based on "convergent active processing in interrelated networks" (Heller, 2000, pp. 99–41). The central core of the theory rests on the reference hypothesis. According to this hypothesis, spatial processing depends on organizing inputs that are used as reference cues to help one perform spatial tasks (judgments of location, distance, or direction), and accurate perception depends on the availability of congruent reference cues. Over the years, Millar (e.g., 1978, 1985, 1994) has developed the view that reference cues play a central role in the performance of spatial tasks by the congenitally blind. The central claim of her theory is that accurate shape and spatial perception depend on making use of spatial reference information, and she has argued that spatial accuracy in vision and touch depends on the reference cues available in a given task. Three converging lines of evidence support the hypothesis: "veering" in 3-D space, perceptual illusions in touch and vision, and the spatial coding of haptic locations. In all cases, the use of body-centered (body midline) reference improved performance in both tactual and visual conditions. In addition, external reference cues may help improve performance in spatial tasks. Millar has argued that the idea that body-centered spatial reference cues apply to haptics while external spatial reference cues apply to vision is not supported by the data. In fact, what the data show is that the two types of reference frames produce additive effects, improving performance twice as much as either frame alone. Another area of research that has provided further support for the reference hypothesis is the discrimination of bilateral symmetry by touch (Ballesteros, Manga, & Reales, 1997; Ballesteros, Millar, & Reales, 1998; Ballesteros & Reales, 2004b). Bilateral symmetry is a salient spatial property of an object's shape not only in vision, but also in touch. Several studies have investigated the accuracy of haptics in detecting this spatial property in raised-line shapes (Ballesteros et al., 1998), raised polygons (Locher & Simmons, 1978), and 3-D objects (Ballesteros et al., 1997). Haptic perception is only moderately accurate in detecting bilateral symmetry in small raised-line displays, especially when observers explore the shapes with just a single finger and with very little reference information (Ballesteros et al., 1997). Supporting the reference hypothesis, bimanual exploration was superior to unimanual exploration, owing to the extraction of parallel shape information and to the use of the observer's body midline as a body-centered reference frame.
With small, unfamiliar 2-D haptic displays, haptic observers were significantly more accurate with asymmetrical than with symmetrical displays, and accuracy did not differ much across a wide range of exploration times. Important for this discussion is that, in agreement with the reference frame hypothesis, bimanual exploration was consistently more accurate than unimanual exploration (with either the left or the right fingertip). Performance with 3-D novel objects was far more accurate overall. However, in contrast to the raised-line shapes, accuracy was higher for symmetrical objects than for asymmetrical ones at all exploration times. Furthermore, the identification of symmetrical objects was more accurate than the perception of asymmetrical ones, both when they were explored with the preferred hand and when only an enclosing hand movement was allowed. The two stimulus sets (the small raised-line shapes and the 3-D objects) differed in many ways (e.g., size, shape, and complexity), so any comparison had to be considered tentative (Ballesteros et al., 1997). In a series of new experiments, the stimuli were small open and closed raised-line shapes (half symmetrical and half asymmetrical). The task was implicit in the sense that, instead of being asked directly to identify the symmetry or asymmetry of the small raised shapes, observers had to indicate whether each shape was "open" or "closed" (Ballesteros et al., 1998). Bilateral symmetry is encoded incidentally in vision, and it was also encoded in this incidental tactual task when sufficient reference information was provided at encoding. The results thus showed in haptic perception something that is well known in vision: shape symmetry facilitated processing when reference information was provided at encoding. But what happened when flat shapes varying in complexity were extended in the third dimension? Haptic accuracy increased as the stimuli were extended in the third dimension, but only for symmetrical stimuli (Ballesteros & Reales, 2004b). The study showed that manipulating stimulus height, while keeping shape, size, and complexity constant, significantly improved observers' haptic performance with symmetrical stimuli. More important, supporting the reference hypothesis, bimanual exploration produced an advantage over unimanual exploration for both symmetrical and asymmetrical stimuli, in both accuracy and response times. Exploring the stimuli with both hands parallel to the midbody axis (an egocentric reference frame) facilitated the detection of bilateral symmetry. Moreover, presenting exactly the same stimuli to vision and touch has shown that both modalities are almost equally accurate in detecting the symmetry or asymmetry of the stimuli (especially with 3-D objects), although vision is much faster. The results obtained in these spatial tasks support the egocentric reference frame hypothesis.
Consistent with the previously mentioned results is the effect of bimanual exploration on the Mueller-Lyer illusion. In the haptic Mueller-Lyer illusion, wings-out patterns are judged as longer than wings-in patterns. Recently, Millar and Al-Attar (2002) reported that instructions to use an egocentric reference framework practically eliminated the normally robust illusion. Similarly, Heller et al. (in press) reported that the use of the two index fingers nearly eliminated the Mueller-Lyer illusion. Subjects can feel their bodies with their elbows as they explore stimuli with the index fingers of the two hands, and they can then use their bodies as a frame of reference for interpreting extent. Kappers and her colleagues (Kappers, 1999, 2002; Kappers & Koenderink, 2004) have also explored the role of reference frames, using a parallelity task. In a series of experiments, they studied how haptic explorers judge whether bars lying in a horizontal plane are parallel. The results show that explorers perceived physically nonparallel bars as parallel; in other words, blindfolded participants produced large systematic deviations from parallelity. The conclusion was that participants did not base their decisions on a purely egocentric reference frame (that is, one centered on the explorer's hand). The frame of reference was not purely allocentric either, because in that case physically parallel bars would have felt parallel, which was not what happened. This suggests that participants were relying on a frame of reference intermediate between an allocentric and an egocentric one. Further studies showed that a 10-sec delay between exploration of the reference bar and the parallel setting of the test bar improved performance. This suggests that a delay between perception and action allows recoding from the haptic egocentric reference frame into a more visual, allocentric frame of reference (Zuidhoek, Kappers, Noordzij, Van der Lubbe, & Postma, 2004).
SPATIAL COGNITION AND BLINDNESS

Although there is little doubt that vision is normally the premier modality for spatial cognition, there is agreement among the authors of this volume that touch provides substitute information when vision is lacking (chaps. 2, 3, and 4, this volume). The main questions posed by Millar in chapter 2 are: What do sense modalities such as vision and touch contribute to spatial cognition, and how do these two perceptual modalities relate to each other? In a large number of significant publications, Millar has provided many instances showing that active touch can compensate for a lack of vision (Millar, 1994, 2000, chap. 2, this volume). Multimodal inputs are crucial for spatial cognition. According to Millar, haptic perception of form depends on coordinated multisensory inputs from egocentric and exocentric reference cues (also see Kappers & Koenderink, 2004).
Heller (chap. 3, this volume) has conducted considerable work on the perception of tangible 2-D raised pictures, because he believes that these displays are extremely useful for the blind. His position is certainly opposed to some theoretical perspectives that argue that tangible pictures are not useful for the blind. He arrived at this view from his own research with congenitally blind, late-blind, very low vision (VLV), and sighted persons. Heller argues that the performance of the sighted with tangible pictures is normally worse than the performance of the visually impaired. The problem, according to Heller, is that much more research is needed to understand which views and types of haptic perspective are best for tangible pictures. In other words, further research will provide new methods of preparing tangible pictures that are capable of expressing the most useful spatial information for haptics and for blind people. Heller pointed out that an important question requiring further research is how haptic perceivers segregate figure from ground. Although little research has been reported on this issue, one of the subtests of the Haptic Test Battery (Ballesteros, Reales, et al., 2004) assesses precisely this figure-ground segregation in blind and sighted children from 3 to 16 years of age. The task evaluates whether the child perceives a figure as a whole, even when part of the stimulus is occluded by another shape. One hundred nineteen blind and sighted children from 3 to 16 years of age performed the task. The effects of age and visual condition were highly significant, and the interaction between the two variables was also significant. The sighted children scored worse than the young blind children until the age of 12 to 13 years; beyond this age, performance by the sighted dropped significantly. Proportion correct for the blind children went from .91 (3- to 5-year-olds) to 1.00 (15- to 16-year-olds). Proportion correct for the sighted went from .51 for the youngest to .93 at the 12- to 13-year level, dropping again in the oldest group to .86. The pattern of results obtained in the study suggests an effect of the blind children's greater experience with tangible materials, especially in the preschool years and again at the end of the school years (Ballesteros, Bardisa, Millar, & Reales, 2005). In a wide range of tasks, Heller found that the late-blind and VLV participants performed at much higher accuracy levels, and with much greater speed, than sighted controls. These findings are entirely consistent with the Ballesteros report of similar advantages for blind children. For example, the embedded figures task is a test of haptic perceptual selectivity (Heller, Wilson, Steffen, Yoneyama, & Brackett, 2003). Very low vision subjects performed at a far higher level than the other subjects, and were much faster than the blindfolded sighted restricted to touch. All groups of visually impaired subjects in this experiment performed approximately twice as fast as the blindfolded sighted participants. The VLV performance rivaled that of sighted subjects using vision, although, of course, even the most skilled haptic perceivers are slower than the sighted using their eyes.
Moreover, the VLV and late-blind participants outperformed blindfolded sighted subjects in the Piagetian water-level problem, a task that assesses the understanding that the level of water stays horizontal despite tilt of the container. It was interesting that the VLV subjects actually did better than a group of sighted subjects using vision. Performance advantages for the VLV and blind subjects also appeared in picture matching, in viewpoint tasks, and in the interpretation of perspective drawings. The frequent advantages of the blind found by Heller, Ballesteros, and others stand in clear contrast to other data showing lower performance by congenitally blind subjects in picture naming tasks (e.g., Lederman, Klatzky, Chataway, & Summers, 1990). How may we explain this discrepancy in findings? There are two likely explanations. First, blind people show huge differences in their haptic spatial skills, much as sighted individuals are variable in these areas. The huge variability in spatial performance and reasoning has led to the development of tests of spatial skills in the sighted. Thus, the differences found here could derive from sampling error and individual differences. Some blind people never leave their homes, because they have not learned proper mobility skills and their spatial skills are poor. Others have extraordinary spatial skills. It is easy for different researchers to come to very different conclusions about haptics in the blind, given the huge variability in the subject populations. Note that we do not know what normal touch is, nor do we know what a "normal" blind person is. This makes it very difficult to come to firm conclusions about any possible limitations in haptics, and we should be very cautious about assuming deficiencies in blind people. This is especially important, given the excellent performance that has been shown in picture perception by blind persons (see chaps. 1, 2, and 4, this volume). The situation is even more complicated because researchers do not agree on how to define the terms that describe the age of onset of blindness. Some researchers describe the late-blind as having lost sight after starting school, others as having lost sight after the age of 1 (e.g., Heller, 1989), while still others limit the term to persons who lost their sight after the age of 10 (e.g., Grant, Thiagarajah, & Sathian, 2000). Similar confusion exists for the term "early blind." Some of the discrepant findings in the literature may be related to these differing definitions of the onset of blindness. There is a second explanation for the apparently discrepant findings about performance by blind subjects in tasks involving naming pictures. An earlier study by Heller (1989) found superior performance by the late-blind in picture naming, but lower accuracy by the congenitally blind and blindfolded sighted participants. Task performance depends on familiarity with pictures and the conventions governing them, familiarity with particular pictures, and haptic skill.
Blind people are less familiar with pictures, and certainly have little experience with the representation of depth relations in pictures, for example, linear perspective. Furthermore, a high level of picture or object naming accuracy requires access to semantic memory. Thus, a child may see a dog but call it a cat; this does not mean the child cannot see the animal properly. Pictures yield very high levels of recognition accuracy when they are tested by using matching to sample (Heller, Brackett, & Scroggs, 2002) or by asking a subject to find a target picture among three choices (Heller, Calcaterra, Tyler, & Burson, 1996). These studies yielded high accuracy levels, with nearly perfect performance in picture matching by the VLV participants. Performance is much lower than this when haptics attempts to match 3-D plaster casts to live faces using three-alternative matching to sample (Kilgour & Lederman, 2002). Recently, Norman, Norman, Clayton, Lianekhammy, and Zielke (2004) reported performance of roughly 50% correct when subjects made visual matches in a task involving feeling plastic casts of solid peppers. All of the previously mentioned data are consistent with the idea that touch and blind people can make sense of 2-D arrays, and performance need not suffer in comparison with 3-D object recognition (Heller et al., 2002; Heller, Brackett, Scroggs, et al., 2002). The notion that touch is handicapped when confronted by 2-D arrays only seems appropriate when we fail to fully consider the role of learning, familiarity, and skill in decoding complex patterns. Blind people can read Braille at very high speeds, but sighted individuals are never capable of doing this with their hands; even sighted Braille readers cannot read individual Braille characters with their hands at the sorts of speeds that are common for blind individuals. We know that haptic Braille reading requires considerable skill and long practice. No one would assume that touch is incapable of reading Braille based on the initial negative experiences of blindfolded sighted subjects, and it would likewise be a mistake to draw generalizations about possible restrictions of touch to 3-D objects from preliminary reports of low levels of haptic picture naming in blindfolded sighted undergraduate participants. For a number of years, Kennedy has been engaged in the study of the relationship between vision and touch (Kennedy, 2003). Kennedy and Juricevic (chap. 4, this volume) present a series of interesting drawings produced by a blind girl, Gaia, and a blind woman, Tracy, and discuss the relation between haptics and projection onto a pictorial surface. These two blind persons have a great deal of experience with drawing. Kennedy and Juricevic argue that the blind and the sighted may use perspective schemes and the representation of depth, planes, and T-junctions, and they look for similarities between drawings produced by the blind and by the sighted.
In the drawings produced by Gaia and Tracy, lines are used to represent occluding surface boundaries. Kennedy and Juricevic argue that drawing skills develop similarly in blind and sighted individuals. To demonstrate this, they show that Tracy's drawings were a stage ahead of those produced by the blind child; the blind woman used convergent perspective in her pictures. Kennedy and Juricevic support the idea that sensing the alignment of a series of parts, as when exploring real edges or a series of dot pairs, yields the perception of grouping and continuity. These authors argue that there are common physical factors in perceptual processes in vision and haptics, and that those factors lead to the perception of the edges of 3-D surfaces and of depicted edges in haptic pictures. There is little doubt that lines and edges make sense for touch and for congenitally blind individuals. This understanding of edges and lines holds for vision, and it is surely present for touch as well. If touch were insensitive to edges and depth relationships, then the difficulties for haptics would be the same for 3-D objects as for 2-D configurations. However, many blind persons have had no experience with drawing and/or maps. Some have had minimal instruction in geometry, despite attempts at modernizing and improving the education of the visually impaired in the United States. For example, Heller has met a number of blind persons who did not anticipate the notion of linear perspective. They were certainly able to understand perspective after minimal exposure to pictures (Heller et al., 1996; Heller, Brackett, Scroggs, et al., 2002). One congenitally blind person said, after feeling a perspective representation of a square that included convergence and foreshortening, "Oh, so you sighted people don't see it as square." This person realized, for the first time in her life, that sighted people experience perspective distortion when viewing objects from various vantage points. Note that failures to draw in correct perspective do not indicate any failure to understand a perspective depiction. Sighted individuals need to be taught to draw in correct perspective, and any art instructor will attest to the difficulty that most undergraduate students have with producing drawings in correct linear perspective. We would not assume that people in the ancient and medieval world were ultimately incapable of understanding perspective representations just because their artistic creations failed to use modern ideas of perspective. The understanding of linear perspective dates from the Renaissance and was an invention of Brunelleschi (Kemp, 1990). Kennedy has engaged in pioneering work in this area. His approach has recently emphasized the long-term study of perceptual processes in selected individuals, a strategy that has sometimes yielded important new insights in the hands of a very careful and astute observer.
OBJECT PRIMING: THE COMMON BASE OF VISUAL AND HAPTIC OBJECT REPRESENTATIONS

A number of experimental results obtained in behavioral investigations (Easton et al., 1997; Easton, Srinivas, & Greene, 1997; Reales & Ballesteros, 1999) have shown cross-modal priming between touch and vision, and between vision and touch. By perceptual or repetition priming we mean the facilitation produced by a previous encounter with a stimulus on responses to the same stimulus in a subsequent encounter. People are usually unaware of this facilitative effect, but it is demonstrated by better performance (in accuracy or response time) with "old" compared to "new" stimuli of the same kind. Perceptual priming is an expression of implicit memory; that is, it reflects previous experience with stimuli without requiring intentional and conscious retrieval of the previously presented stimuli. Perceptual priming, as a manifestation of implicit memory, can last a long time and relies on the processing of the physical characteristics of the encoded stimuli. Most priming studies have used visually presented pictures of familiar (e.g., Biederman & Cooper, 1991, 1992) and unfamiliar objects (e.g., Cooper, Schacter, Ballesteros, & Moore, 1992; Schacter, Cooper, & Delaney, 1990). However, priming has also been shown using haptically presented nonsense or unfamiliar objects explored without vision (Ballesteros, Reales, & Manga, 1999).
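As a hypothetical worked example of how such priming is usually quantified (our illustration, not a result from the chapters): if previously studied ("old") objects are named in, say, 900 msec on average and unstudied ("new") objects in 1,050 msec, the priming effect is the difference, 1,050 − 900 = 150 msec. When accuracy is the dependent measure, the analogous score is the proportion correct for old items minus that for new items, with positive values indicating facilitation in both cases.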
CROSS-MODAL OBJECT PRIMING BETWEEN VISION AND TOUCH

Theorists had suggested that repetition priming is based on the physical properties of the stimulus and is therefore modality specific. Most important for the issues discussed in this volume, however, cross-modal object transfer was shown between vision and touch, and transfer between these two modalities was equal in strength to within-modal transfer. To explain these findings, Reales and Ballesteros (1999) suggested that visual and haptic object representations might be shared by the two modalities, since both are well suited to extracting an object's structure. The results were not due to the involvement of lexical or semantic factors, as deep encoding conditions showed no advantage over shallow structural encoding in terms of perceptual facilitation.
THE NEUROPSYCHOLOGY OF TOUCH

The cognitive neuroscience approach is well represented in this volume in chapters 7, 8, and 9. The field of neuroscience has flourished during the last decade.
Neuroimaging techniques have created the hope and the expectation of dramatic breakthroughs, since they allow researchers to "see" the activation of specific parts of the cerebral cortex while human participants perform different cognitive tasks. The study of the cerebral cortical basis of tactile perception of tangible forms and 3-D objects is an area of research that has expanded rapidly over the last few years. The results from the laboratories of Goodale, Pascual-Leone, and Sathian provide converging evidence on new ways to conceptualize the organization and functioning of the cortex. Their recent work on shape and object processing has produced results consistent with an active role for areas of the extrastriate visual cortex during tactile shape and object perception. This activation of visual cortical areas occurs in blind participants and in sighted participants who undergo long-term blindfolding and visual deprivation (chap. 9, this volume).
Several recent neuroimaging results have shown that both visual and haptic object identification produce activation in the occipital cerebral cortex (Amedi, Jacobson, Hendler, Malach, & Zohary, 2002; Amedi, Malach, Hendler, Peled, & Zohary, 2001). This finding suggests that visual and haptic object exploration may share a common neural representation. Recent fMRI findings reported by Goodale and his colleagues (James et al., 2002; chap. 7, this volume) also point in the same direction. Haptic exploration of 3-D nonsense objects produced activation in the lateral occipital complex (LOC) when the same objects were later presented visually, that is, across modalities. Although these researchers did not collect behavioral data such as accuracy measures or response times, they showed that middle occipital (MO) and lateral occipital (LO) areas were activated similarly by haptic-to-visual presentations of the nonsense objects and by visual-to-visual presentations. Goodale and his colleagues (chap. 7, this volume) proposed that the LOC is the cerebral region that responds selectively to both visual and haptic apprehension of 3-D objects. There is ample evidence from both behavioral and neurological investigations supporting the claim that haptic exploration of objects activates the LOC, a cerebral area that was initially considered specifically visual. Results showing that both visual and haptic object exploration depend on the LOC support the proposal that there is a common neural representation for visual and haptic objects and shapes. The reported cross-modal priming effects, both in fMRI studies (Amedi et al., 2001; James et al., 2002) and in behavioral studies (Easton et al., 1997; Reales & Ballesteros, 1999), support the notion of a shared visuo-haptic representation.
Further support for the involvement of the ventral visual pathway in tactile shape discrimination is provided by a PET study conducted in Sathian's laboratory (Sathian, Prather, & Votaw, 2002). Sathian, Zangaladze, Hoffman, and Grafton (1997) were the first, from any laboratory, to demonstrate that a visual cortical area is active during a tactile task in sighted subjects. This was a dorsal, parieto-occipital area, and the demonstration used PET. Moreover, the involvement of the extrastriate visual cortex in tactile perception has been shown in the same laboratory: transcranial magnetic stimulation over the parieto-occipital cortex disrupted performance in a tactile discrimination task (Zangaladze, Epstein, Grafton, & Sathian, 1999). These findings, in conjunction with those reported by Pascual-Leone (chap. 9, this volume) for visually deprived (blindfolded) persons, are in disagreement with older ideas concerning a rigid specialization in the sensory cortex.

Complete cross-modal behavioral priming (Reales & Ballesteros, 1999) in visual and haptic implicit tasks inspired the theoretical speculation that object representations might be shared between the visual and haptic modalities. This seemed appropriate, according to Ballesteros and her colleagues, since both modalities are well suited to process and represent the 3-D structure of objects in the world. Researchers in neuroscience approached the problem by using imaging techniques to look for activation in the same cortical areas after haptic and visual study of 3-D objects (James et al., 2002; chap. 7, this volume). Experimental psychologists (Ballesteros & Reales, chap. 5, this volume) rely on those imaging results to explain behavioral findings from implicit and explicit haptic tasks in normally aging adults and in those with Alzheimer's disease. Neuroscience and experimental psychology thus cross-fertilize one another: their hypotheses probe the same underlying reality, but with rather different methodologies.

Further support for the idea of a fundamental correspondence between vision and haptics derives from the other behavioral work on blindness represented in this volume. Kennedy has provided evidence that touch responds to edges and contours, as does vision. Also consistent with this notion are the results showing intersensory equivalence and the capability of haptics to represent depth relations and perspective in pictures for congenitally blind adults. The data presented by Ballesteros, Bardisa, et al. (2005) on a haptic test battery for blind children are also consistent with this theoretical viewpoint. While these results suggest intersensory equivalence, and are certainly consistent with the data from neuroscience, there are differences between the modalities of vision and touch that cannot be ignored.
Touch is unlikely to understand color, other than by analogy, and there is no clear counterpart in haptics to illusory contours. While no one seems to be able to generate illusory contours in touch, people do experience a continuous surface despite gaps between the exploratory fingers. Perhaps we simply await an inventive researcher. Note, however, that it is possible to conceive of synaesthesia as a normal state of the organism, and as a reflection of a lack of specificity at the sensory level (Goldstein, 1939/1963).

The work presented by the contributors to this volume suggests that both tactile perception of shapes and the manipulation of objects are associated with distributed cortical activity within a network that includes parts of the somatosensory cortex, parts of the extrastriate visual cortex, and areas in the motor cortex (James et al., chap. 7, this volume; Sathian & Prather, chap. 8, this volume). Further research may help to identify the components of the structural object representation system in the brain that are implicated in the perception and memory of objects explored by vision, by touch, and cross-modally. As many contributors to this volume suggest, posterior regions of the brain may be multisensory rather than exclusively visual, as was once believed. More generally, results gathered in the last decade do not support the kind of rigid cortical specialization that researchers assumed in the past.
BRAIN PLASTICITY, SENSORY COMPENSATION, SENSORY SUBSTITUTION, AND BLINDNESS

One controversial idea in the study of blindness has been the notion that, in the absence of sight, people learn to "sharpen" their other senses. This idea of sensory compensation has been controversial because the older data were not very conclusive. Recent research suggests that touch may benefit when vision is not present, as in blindness. The results of studies by Van Boven, Hamilton, Kauffman, Keenan, and Pascual-Leone (2000), Grant et al. (2000), and others (see Sathian, 2000; Sathian & Prather, chap. 8, this volume) show an increase in tactile acuity in the absence of sight. Moreover, Stevens found that this advantage is maintained in older Braille readers (Stevens, Foulke, & Patterson, 1996).

What mechanisms might explain the enhancement of the tactile sense when vision is missing? A number of plausible explanations can be offered. Some assume that the role of experience is crucial, and that the enhancement may simply involve learning new skills. Other explanations point toward the elimination of the attentional distraction derived from foveal vision, or toward changes in the cerebral cortex that derive from sensory deprivation. Thus, it is possible that locations in the cerebral cortex take on new sensory roles as a function of experience, and visual centers may wind up being used to process input from touch.
The lack of vision causes changes in the educational and perceptual experience of individuals. Blind persons spend considerable time learning to use their senses of touch and audition for pattern perception and for spatial localization. Blindness rehabilitation includes explicit instruction in mobility skills, reading Braille, and the use of audition to locate objects in space. It is reasonable to expect general and specific transfer of these skills to related haptic tasks. There are a number of reports of improved auditory memory in blind persons (e.g., Roder, Rosler, & Neville, 2001; Rosler, Roder, Heil, & Hennighausen, 1993; Roder, Teder-Salejarvi, Sterr, Rosler, Hillyard, & Neville, 1999). Also, there are clear indications of improvement in tactile acuity in blind Braille readers (see Sathian, 2000). In addition, there are some rather robust illustrations of the advantage of blind subjects over the sighted. This advantage may appear in the speed of processing pictorial information (Heller, 2002; Heller, Brackett, & Scroggs, 2002; Heller, Brackett, Scroggs, et al., 2002; Heller et al., 2003), in the accuracy of picture matching, and in picture-naming accuracy (Heller, 1989).

Perceptual learning involves brain plasticity and changes in sensory cortex (e.g., Recanzone, Jenkins, Hradek, & Merzenich, 1992; Recanzone, Merzenich, & Jenkins, 1992; Recanzone, Merzenich, Jenkins, Grajski, & Dinse, 1992; Recanzone, Merzenich, & Schreiner, 1992). The basic idea is that the brain changes as a result of learning. Thus, tactile superiority in the blind might result from such use-dependent neural plasticity and learning, as argued by Sathian (2000).

Sathian and colleagues suggest that touch perception engages visual imagery as sighted people examine surfaces and objects (e.g., Sathian et al., 1997). The use of visual imagery is one possible explanation for the involvement of the visual cortex during haptic tasks, but there are also other possible explanations, such as the engagement of multisensory representations. Sathian has also provided evidence that different areas of the cerebral cortex are activated when haptics is involved in form discrimination (ventral pathways) or in mental spatial manipulation (dorsal pathways). It is not clear whether haptic mental manipulation tasks necessarily involve visual imagery in sighted persons. Also, Sathian and his colleagues have found that subjects report using visual imagery when engaging in form perception, but not in texture or gap detection tasks. This could implicate visual imagery in the engagement of the LOC. Certainly, studies showing differences between the performance of late-blind, very-low-vision, and congenitally blind persons might reflect the impact of visual imagery in persons with visual experience. However, these differences could also arise because of educational and experiential variables that are not specifically related to visual sensory experience or visual imagery. Brain imaging techniques may eventually provide interesting answers to these important questions.
Sadato, Pascual-Leone, and their co-workers presented a similar view of the reorganization of the cortex in blind persons (Sadato et al., 1998), but did not address the issue of visual imagery as a mechanism relating haptic perception and the cortex. Sadato et al. proposed that the lack of visual input in blindness causes changes in brain organization. Thus, they found persuasive evidence that the visual cortex ends up serving haptic Braille reading in congenitally blind people, whereas these primary visual areas normally serve vision in the sighted. Pascual-Leone et al. (chap. 9, this volume) present data showing rather dramatic changes in the cortex after only 5 days of blindfolding. The basic interpretation of the mechanism underlying these changes, according to Pascual-Leone, is that the lack of visual input allows cortical areas to take on new functions. These occipital areas may be most efficient at processing visual input, much as the eye is most sensitive to light; however, a punch in the eye may still yield a visual sensation. Similarly, the visual cortex may be capable of processing input from touch when visual input is lacking. This may occur even if the input comes from a source of stimulation that we do not usually experience and process in that cortical region. Pascual-Leone has apparently extended Sherrington's idea of the "adequate stimulus" to apply to cortical organization and responsiveness. This is not the idea that the loss of sight causes a degeneration of cortical functioning to a more primitive state, with little differentiation of functioning (e.g., Goldstein, 1939/1963). Rather, the loss of visual input allows an opportunistic perceiver to make the best use of the remaining intact cortex. An analogous situation holds when an unwelcome relative moves out of one's house: this frees up needed storage and/or work space.

There can be little doubt that advances in neuroscience are exciting and promise a great deal for the future. However, just as the entire field of chemistry will not reduce to physics, all of psychology is unlikely to reduce to neuroscience (see Heller, 2004). The two fields represent different levels of analysis, but both are destined to advance our understanding in ways that are yet unknown. Psychology studies behaviors, which represent the overall functioning of the system as a whole; neuroscience is interested in discovering how these behaviors are realized in the neural hardware; and cognitive-computational science studies the "software programs" that run on this hardware (Sathian, personal communication, 2004). It is expected that the combined activity of researchers adopting the perspectives of neuroscience and of more purely cognitive approaches will ultimately generate a synergistic advance in knowledge. The interface between psychology and neuroscience may hold the ultimate key to help us unravel the complex relationships between body and mind.
REFERENCES

Amedi, A., Jacobson, G., Hendler, T., Malach, R., & Zohary, E. (2002). Convergence of visual and tactile shape processing in the human lateral occipital cortex. Cerebral Cortex, 12, 1202–1212.
Amedi, A., Malach, R., Hendler, T., Peled, S., & Zohary, E. (2001). Visuo-haptic object-related activation in the ventral visual pathway. Nature Neuroscience, 4, 324–330.
Appelle, S. (1991). Haptic perception of form: Activity and stimulus attributes. In M. A. Heller & W. Schiff (Eds.), The psychology of touch (pp. 169–188). Hillsdale, NJ: Lawrence Erlbaum Associates.
Bach-y-Rita, P. (1972). Brain mechanisms in sensory substitution. New York: Academic Press.
Ballesteros, S., Bardisa, D., Millar, S., & Reales, J. M. (2005). The Haptic Test Battery: A new instrument to test tactual abilities in blind and visually impaired and sighted children. British Journal of Visual Impairment, 23, 11–24.
Ballesteros, S., Bardisa, D., Reales, J. M., & Muñiz, J. (2004). A haptic battery to test tactual abilities in blind and visually impaired children. In S. Ballesteros & M. A. Heller (Eds.), Touch, blindness, and neuroscience (pp. 303–313). Madrid: UNED, Varia.
Ballesteros, S., & Heller, M. A. (Eds.). (2004). Touch, blindness, and neuroscience. Madrid: UNED, Varia.
Ballesteros, S., Manga, D., & Reales, J. M. (1997). Haptic discrimination of bilateral symmetry in two-dimensional and three-dimensional unfamiliar displays. Perception & Psychophysics, 59, 37–50.
Ballesteros, S., Millar, S., & Reales, J. M. (1998). Symmetry in haptic and in visual shape perception. Perception & Psychophysics, 60, 389–404.
Ballesteros, S., & Reales, J. M. (2004a). Intact haptic priming in normal aging and Alzheimer's disease: Evidence for dissociable memory systems. Neuropsychologia, 42, 1063–1070.
Ballesteros, S., & Reales, J. M. (2004b). Visual and haptic discrimination of symmetry in unfamiliar displays extended in the z-axis. Perception, 33, 315–327.
Ballesteros, S., Reales, J. M., & Manga, D. (1999). Implicit and explicit memory for familiar and novel objects presented to touch. Psicothema, 11, 785–800.
Biederman, I., & Cooper, E. E. (1991). Evidence for complete translational and reflectional invariance in visual object priming. Perception, 20, 585–593.
Biederman, I., & Cooper, E. E. (1992). Size invariance in visual object priming. Journal of Experimental Psychology: Human Perception and Performance, 18, 121–133.
Cooper, L. A., Schacter, D. L., Ballesteros, S., & Moore, C. (1992). Priming and recognition of transformed three-dimensional objects: Effects of size and reflection. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 43–57.
Critchley, M. (1953). The parietal lobes. London: Edward Arnold.
Easton, R. D., Greene, A. J., & Srinivas, K. (1997). Transfer between vision and touch: Memory for 2-D patterns and 3-D objects. Psychonomic Bulletin & Review, 4, 403–410.
Easton, R. D., Srinivas, K., & Greene, A. J. (1997). Do vision and haptics share common representations? Implicit and explicit memory within and between modalities. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23, 153–163.
Goldstein, K. (1963). The organism. Boston: Beacon Press. (Original work published 1939)
Grant, A. C., Thiagarajah, M. C., & Sathian, K. (2000). Tactile perception in blind Braille readers: A psychophysical study of acuity and hyperacuity using gratings and dot patterns. Perception & Psychophysics, 62, 301–312.
Heller, M. A. (1989). Picture and pattern perception in the sighted and blind: The advantage of the late blind. Perception, 18, 379–389.
Heller, M. A. (1991). Introduction. In M. A. Heller & W. Schiff (Eds.), The psychology of touch (pp. 1–19). Hillsdale, NJ: Lawrence Erlbaum Associates.
Heller, M. A. (2002). Tactile picture perception in sighted and blind people. Behavioural Brain Research, 135, 65–68.
Heller, M. A. (2004). Mind and brain: Psychology and neuroscience. Perception, 33, 383–385.
Heller, M. A., Brackett, D. D., & Scroggs, E. (2002). Tangible picture matching in people who are visually impaired. Journal of Visual Impairment and Blindness, 96, 349–353.
Heller, M. A., Brackett, D. D., Scroggs, E., Steffen, H., Heatherly, K., & Salik, S. (2002). Tangible pictures: Viewpoint effects and linear perspective in visually impaired people. Perception, 31, 747–769.
Heller, M. A., Calcaterra, J. A., Tyler, L. A., & Burson, L. L. (1996). Production and interpretation of perspective drawings by blind and sighted people. Perception, 25, 321–334.
Heller, M. A., & Schiff, W. (Eds.). (1991). The psychology of touch. Hillsdale, NJ: Lawrence Erlbaum Associates.
Heller, M. A., McCarthy, M., Schultz, J., Greene, J., Shanley, M., Clark, A., et al. (in press). The influence of exploration mode, orientation, and configuration on the haptic Mueller-Lyer illusion. Perception.
Heller, M. A., Wilson, K., Steffen, H., Yoneyama, K., & Brackett, D. D. (2003). Superior haptic perceptual selectivity in late-blind and very-low-vision subjects. Perception, 32, 499–511.
James, T. W., Humphrey, G. K., Gati, J. S., Servos, P., Menon, R. S., & Goodale, M. A. (2002). Haptic study of three-dimensional objects activates extrastriate visual areas. Neuropsychologia, 40, 1706–1714.
Jansson, G., & Monaci, L. (2004). Haptic identification of objects with different number of fingers. In S. Ballesteros & M. A. Heller (Eds.), Touch, blindness, and neuroscience (pp. 209–219). Madrid: UNED, Varia.
Kappers, A. M. L. (1999). Large perception deviations in the haptic perception of parallelity. Perception, 28, 781–795.
Kappers, A. M. L. (2002). Haptic perception of parallelity in the midsagittal plane. Acta Psychologica, 109, 25–40.
Kappers, A. M. L., & Koenderink, J. J. (2004). Analysis of the large systematic deviations in a haptic parallelity task. In S. Ballesteros & M. A. Heller (Eds.), Touch, blindness, and neuroscience (pp. 147–154). Madrid: UNED, Varia.
Kemp, M. (1990). The science of art. New Haven, CT: Yale University Press.
Kennedy, J. M. (2003). Drawings from Gaia, a blind girl. Perception, 32, 321–340.
Kilgour, A. R., & Lederman, S. J. (2002). Face recognition by hand. Perception & Psychophysics, 64, 339–352.
Klatzky, R. L., Lederman, S. J., & Metzger, V. A. (1985). Identifying objects by touch: An "expert system." Perception & Psychophysics, 37, 299–302.
Klatzky, R. L., Lederman, S. J., & Reed, C. (1987). There's more to touch than meets the eye: The salience of object attributes for haptics with and without vision. Journal of Experimental Psychology: General, 116, 356–369.
Lederman, S. J., Klatzky, R. L., Chataway, C., & Summers, C. D. (1990). Visual mediation and the haptic recognition of two-dimensional pictures of common objects. Perception & Psychophysics, 47, 54–64.
Locher, P. J., & Simmons, R. W. (1978). Influence of stimulus symmetry and complexity upon haptic scanning strategies during detection, learning, and recognition tasks. Perception & Psychophysics, 32, 110–116.
Millar, S. (1978). Aspects of memory for information from touch and movement. In G. Gordon (Ed.), Active touch: The mechanism of recognition of objects by manipulation: A multidisciplinary approach (pp. 215–227). Oxford: Pergamon Press.
Millar, S. (1985). Movement cues and body orientation in recall of locations of blind and sighted children. Quarterly Journal of Experimental Psychology, 37A, 257–279.
Millar, S. (1994). Understanding and representing space: Theory and evidence from studies with blind and sighted children. Oxford, UK: Clarendon Press.
Millar, S. (2000). Modality and mind: Convergent active processing in interrelated networks. A model of development and perception by touch. In M. A. Heller (Ed.), Touch, representation and blindness (pp. 99–141). Oxford, UK: Oxford University Press.
Millar, S., & Al-Attar, Z. (2002). The Mueller-Lyer illusion in touch and vision: Implications for multisensory processing. Perception & Psychophysics, 64, 353–365.
Morgan, M. J. (1977). Molyneux's question: Vision, touch and the philosophy of perception. Cambridge: Cambridge University Press.
Norman, J. F., Norman, H. F., Clayton, A. M., Lianekhammy, J., & Zielke, G. (2004). The visual and haptic perception of natural object shape. Perception & Psychophysics, 66, 342–351.
Reales, J. M., & Ballesteros, S. (1999). Implicit and explicit memory for visual and haptic objects: Cross-modal priming depends on structural descriptions. Journal of Experimental Psychology: Learning, Memory, and Cognition, 25, 1–20.
Recanzone, G. H., Jenkins, W. M., Hradek, G. T., & Merzenich, M. M. (1992). Progressive improvement in discriminative abilities in adult owl monkeys performing a tactile frequency discrimination task. Journal of Neurophysiology, 67, 1015–1030.
Recanzone, G. H., Merzenich, M. M., & Jenkins, W. M. (1992). Frequency discrimination training engaging a restricted skin surface results in an emergence of a cutaneous response zone in cortical area 3a. Journal of Neurophysiology, 67, 1057–1070.
Recanzone, G. H., Merzenich, M. M., Jenkins, W. M., Grajski, K. A., & Dinse, H. R. (1992). Topographic reorganization of the hand representation in cortical area 3b of owl monkeys trained in a frequency discrimination task. Journal of Neurophysiology, 67, 1031–1056.
Recanzone, G. H., Merzenich, M. M., & Schreiner, C. E. (1992). Changes in the distributed temporal response properties of SI cortical neurons reflect improvements in performance on a temporally based tactile discrimination task. Journal of Neurophysiology, 67, 1071–1091.
Roder, B., Rosler, F., & Neville, H. J. (2001). Auditory memory in congenitally blind adults: A behavioral-electrophysiological investigation. Cognitive Brain Research, 11, 289–303.
Roder, B., Teder-Salejarvi, W., Sterr, A., Rosler, F., Hillyard, S. A., & Neville, H. J. (1999). Improved auditory spatial tuning in blind humans. Nature, 400, 162–166.
Rosler, F., Roder, B., Heil, M., & Hennighausen, E. (1993). Topographic differences of slow event-related brain potentials in blind and sighted adult human subjects during haptic mental rotation. Cognitive Brain Research, 1, 145–159.
Sadato, N., Pascual-Leone, A., Grafman, J., Deiber, M., Ibanez, V., & Hallett, M. (1998). Neural networks for Braille reading by the blind. Brain, 121, 1213–1229.
Sathian, K. (2000). Practice makes perfect: Sharper tactile perception in the blind. Neurology, 54, 2203–2204.
Sathian, K., Prather, S. C., & Votaw, J. R. (2002). Visual cortical activation during tactile form perception. International Multisensory Research Forum, 3rd Annual Meeting, Geneva, Switzerland.
Sathian, K., Zangaladze, A., Hoffman, J. M., & Grafton, S. T. (1997). Feeling with the mind's eye. NeuroReport, 8, 3877–3881.
Schacter, D. L., Cooper, L. A., & Delaney, S. M. (1990). Implicit memory for unfamiliar objects depends on access to structural description. Journal of Experimental Psychology: General, 119, 5–24.
Sherrick, C. (1991). Vibrotactile pattern perception: Some findings and application. In M. A. Heller & W. Schiff (Eds.), The psychology of touch (pp. 189–217). Hillsdale, NJ: Lawrence Erlbaum Associates.
Stevens, J. C., Foulke, E., & Patterson, M. Q. (1996). Tactile acuity, aging, and Braille reading in long-term blindness. Journal of Experimental Psychology: Applied, 2, 91–106.
Van Boven, R. W., Hamilton, R. H., Kauffman, T., Keenan, J. P., & Pascual-Leone, A. (2000). Tactile spatial resolution in blind Braille readers. Neurology, 54, 2230–2236.
Zangaladze, A., Epstein, C. M., Grafton, S. T., & Sathian, K. (1999). Involvement of visual cortex in tactile discrimination of orientation. Nature, 401, 587–590.
Zuidhoek, S., Kappers, A. M. L., Noordzij, M. L., Van der Lubbe, R., & Postma, A. (2004). Frames of reference in a haptic parallelity task: Temporal dynamics and the possible role of vision. In S. Ballesteros & M. A. Heller (Eds.), Touch, blindness, and neuroscience (pp. 155–164). Madrid: UNED, Varia.
Author Index
Aglioti, S., 16, 18 Akbudak, E., 175, 176, 192 Al-Attar, Z., 4, 5, 11, 20, 35, 37, 38, 39, 41, 44, 45, 47, 204, 217 Albertazzi, L., 82, 91 Alivisatos, B., 162, 167 Allen, A. C., 2, 7, 19, 54, 66, 70 Amassian, V. E., 176, 193 Amedi, A., 9, 18, 142, 145, 146, 148, 149, 153, 161, 164, 167, 210, 215 Andersen, R. A., 43, 44, 45, 47, 152, 153 Anderson, A. W., 163, 168 Anshel, D., 182, 194 Appelle, S., 200, 215 Arditi, A., 6, 18, 55, 69 Arnheim, R., 76, 91 Avery, G. C., 65, 69 Avidan, G., 161, 168 Axel, E., 49, 69
Bach-y-Rita, P., 200, 215 Bäckman, L., 108, 109, 118 Baddeley, A. D., 28, 45
Baddour, R., 152, 154 Bahrick, L. E., 90, 92 Bai, J., 88, 92 Baker, J., 179, 187, 191, 193 Ballesteros, S., 3, 4, 7, 9, 14, 15, 18, 21, 97, 98, 100, 101, 102, 103, 104, 106, 107, 108, 110, 111, 113, 116, 117, 118, 141, 144, 148, 154, 161, 168, 197, 198, 201, 202, 203, 205, 209, 211, 215, 217 Baltes, P. B., 108, 116 Bandinelli, S., 176, 192 Banks, K. S., 15, 19, Banks, M. S., 4, 19 Barbero, J. I., 121, 135 Bardisa, D., 7, 18, 201, 205, 211, 215 Barnes, C. L., 109, 110, 117 Barnette, S. L., 53, 70 Baron-Cohen, S., 172, 192 Bartres-Faz, D., 174, 193 Bavelier, D., 172, 192 Bayliss, G. C., 43, 45 Bennett, D. A., 15, 19, 110, 117 Benson, R. R., 146, 154, 161, 168 Benson, R. S., 149, 154 Berthiaume, F., 53, 69 Berthoz, A., 43, 44, 45, 47 Best, C. T., 53, 70
Biederman, I., 98, 103, 116, 209, 215 Birnbaum, J., 62, 71 Blake, R., 152, 153, 163, 167 Blaxton, T. A., 98, 118 Bodegard, A., 158, 159, 160, 165, 167 Boecker, H., 158, 159, 167 Boussaoud, D., 166, 169 Bowtell, R., 165, 168 Boyer, A., 35, 36, 46, 64, 65, 67, 70 Brackett, D. D., 2, 6, 7, 8, 12, 19, 20, 35, 36, 44, 46, 52, 54, 55, 56, 57, 60, 64, 65, 66, 67, 68, 70, 71, 83, 92, 141, 153, 205, 207, 208, 213, 216 Bradley, D. C., 44, 45 Brammer, M. J., 171, 192 Brasil-Neto, J. P., 177, 193 Brewster, S., 122, 127, 135 Brotchie, P., 43, 47 Brown, L., 67, 71 Brun, A., 15, 18, 115, 116 Büchel, C., 175, 176, 192 Buchtel, H. A., 152, 153 Buckner, R. L., 146, 154 Bulthoff, H. H., 54, 71, 141, 142, 154 Bunge, S. A., 109, 118 Burson, L. L., 6, 19, 44, 46, 51, 55, 56, 64, 65, 67, 70, 71, 207, 208, 216 Burton, H., 175, 176, 192 Burton, M., 122, 135 Butter, C. M., 152, 153 Byrd, M., 108, 117
Cabe, P. A., 80, 91 Cabeza, R.,146, 153 Calcaterra, J. A., 6, 7, 16, 19, 44, 46, 51, 53, 55, 56, 64, 65, 66, 67, 70, 71, 207, 208, 216 Calvert, G.A., 171, 192 Campbell, R., 171, 192 Camposano, S., 176, 190, 193 Canete, C., 182, 194 Carlesimo, G. A., 96, 116 Carlson, M., 158, 167 Carrasco, M., 100, 116 Caselli, R. J., 160, 168, 169 Catala, M. D., 89, 92, 180, 182, 193, 194 Celnik, P., 89, 92, 176, 191, 192 Chataway, C., 5, 20, 51, 71, 206, 216 Chiu, C. Y. P., 106, 119
Chun, M. M., 146, 154 Clark, A., 204, 216 Clayton, A. M., 207, 217 Clyburn, S., 65, 71 Cohen, L. G., 89, 92, 176, 177, 191, 192, 193 Colby, C. L., 31, 45 Colwell, C., 122, 135 Conturo, T. E., 175, 176, 192 Cooper, E. E., 98, 103, 116, 209, 215 Cooper, L. A., 98, 99, 100, 101, 103, 104, 106, 109, 111, 116, 117, 119, 146, 154, 209, 215, 217 Coren, S., 67, 69 Corkin, S., 15, 19, 110, 112, 113, 115, 117, 118, 165, 169 Cornoldi, C., 5, 18 Corrie, B., 152, 154 Corwell, B., 89, 92, 176, 191, 192 Cowey, A., 174, 176, 195 Cox, M., 83, 92 Cracco, J. B., 176, 193 Cracco, R. W., 176, 193 Craik, F. I. M., 108, 117 Cratty, B. J., 33, 45 Critchley, M., 201, 215 Culham, J. C., 149, 154 Cutosky, M. R., 133, 136 Cytowic, R. E., 172, 193
D’Angiulli, A., 74, 92 Dallas, M., 98, 106, 117 Damasio, A. R., 109, 110, 117 Dambrosia, J., 89, 92, 176, 191, 192 Dancer, C., 152, 153, 163, 168, 172, 193 Day, R. H., 65, 69 De Gelder, B., 20, 149, 154, 162, 168 De Toledo-Morrell, L., 15, 19, 110, 117 De Volder, A. G., 164, 168 Deiber, M. P., 5, 21, 73, 93, 115, 118, 165, 169, 174, 175, 176, 194, 214, 217 Deibert, E., 142, 145, 153, 160, 161, 168 Delaney, S. M., 98, 99, 101, 103, 104, 106, 111, 119, 209, 217 DeSisto, M. J., 35, 47 DeSouza, J. F., 16, 18, 152, 153 Dimigen, G., 122, 135 Dinse, H. R., 213, 217 Dolan, R., 146, 153
Dold, G., 73, 93, 115, 118, 174, 175, 176, 194 Douglas, S. A., 132, 135 Driver, J., 152, 154, 172, 193 Duhamel, J. R., 31, 45 Dukelow, S. P., 152, 153 Dunseath, W. J. R., 165, 168
Easton, R. D., 4, 19, 97, 108, 117,141, 148, 153, 161, 168, 198, 209, 211, 215 Eberle, L. P., 176, 193 Eccles, J. C., 10, 20 Edelman, S., 161, 168 Elbert, T., 179, 194 Englund, E., 15, 18, 115, 116 Epstein, C. M., 17, 21, 73, 93, 143, 155, 163, 164, 170, 180, 195, 211, 218 Eriksson, L., 160, 169 Eriksson, Y., 74, 92 Erlebacher, A., 36, 46 Erngrund, K., 108, 109, 118 Ernst, M. O., 4, 19, 54, 71, 141, 142, 154 Essick, G., 152, 153, 163, 168
Falchier, A., 172, 192, 193 Falz, L., 176, 191, 192 Fanger, J., 122, 135 Farah, M. J., 52, 69, 101, 118, 147, 153, 160, 169 Feinberg, T. E., 161, 168 Felician, O., 176, 190, 193 Fellows, B. J., 36, 46 Fennema, A. C., 113, 115, 117 Fernández, J. L., 121, 135 Fischl, B., 165, 169 Fleishman, D. A., 96, 109, 110, 112, 115, 117 Folger, S. E., 165, 168 Foulke, E., 212, 217 Frackowiak, R. S. J., 175, 176, 192 Frahm, J. 158, 159, 167 Francis, S. T., 165, 168 Franzen, O., 152, 153, 163, 168 Franzen, P., 175, 194 Freides, D., 1, 19, 164, 168
Frith, C. D., 152, 154 Fujita, N., 51, 71 Fuster, J, M., 97, 117
Gabrieli, J. D. E., 14, 15, 19, 96, 99, 100, 109, 110, 112, 113, 115, 117, 118 Galati, G., 43, 47 Gangitano, M., 182, 193, 194 Gati, J. S., 4, 9, 15, 20, 114, 117, 144, 146, 149, 152, 154, 158, 159, 160, 161, 168, 169, 210, 211, 215 Gauthier, I., 163, 168 Gazzaniga, M. S., 4, 21, 31, 46 Gentaz, E., 1, 8, 19, 21 Gerloff, C. M., 89, 92 Geyer, S., 158, 159, 160, 165, 167, 168 Gibson, B. S., 82, 93 Gibson, E. J., 85, 90, 92 Gibson, J. J., 2, 4, 19, 28, 30, 46, 92 Girgus, J. S., 67, 69 Gleitman, H., 29, 46 Goldberg, M. E., 31, 45 Goldstein, K., 212, 214, 216 Golomb, C., 74, 92 Gonzalez Rothi, L. J., 161, 168 Goodale, M. A., 4, 9, 15, 16, 18, 20, 73, 89, 92, 114, 117, 144, 146, 149, 152, 153, 154, 161, 168, 210, 211, 215 Gore, J. C., 163, 168 Graf, P., 96, 98, 117 Grafman, J., 5, 21, 73, 93, 115, 118, 174, 175, 176, 194, 214, 217 Grafton, S. T., 17, 21, 73, 93, 143, 155, 163, 164, 169, 170, 180, 194, 195, 211, 213, 217, 218 Grajski, K. A., 213, 217 Grant, A. C., 206, 212, 215 Gray, A. C., 165, 169 Gray, P., 127, 135 Graziano, M. S. A., 31, 44, 46 Green, S., 2, 7, 8, 16, 19, 44, 46, 53, 54, 55, 64, 65, 66, 67, 70, 71 Greene, A. J., 4, 19, 97, 108, 117, 141,148, 153, 161, 168, 198, 209, 211, 215 Greene, J., 204, 216 Grefkes, C., 158, 159, 160, 165, 167, 168 Gregory, R. L., 2, 19,67, 70 Grieve, K. L., 43, 47 Grill-Spector, K., 161, 168
Gross, C. G., 31, 44, 46 Grossenbacher, P. R., 172, 193 Growdon, J. H., 112, 113, 115, 117, 118 Gutiérrez, T., 121, 135
Hagen, M. C., 152, 153, 163, 168 Haggard, P., 2, 20, 54, 59, 71 Hallett, M., 5, 21, 73, 89, 92, 93, 115, 118, 165, 169, 174, 176, 177, 192, 193, 194, 214, 217 Hamilton, R. H., 25, 9, 20, 21, 114, 118, 173, 175, 177, 178, 179, 180, 187, 191 192, 193, 194, 212, 218 Hänicke, W., 158, 159, 167 Hanley, C., 53, 70 Harman, K. L., 140, 152,153 Harris, L. J., 53, 70 Harrison, J. E., 172, 192 Hart, J J., 142, 145, 153, 160, 161, 168 Hasselbring, K., 65, 71 Hatwell, Y., 7, 8, 19, 28, 64, 46, 70 Haxby, J. V., 163, 170 Hayward, W. G., 163, 168 Heartherly, K., 2, 6, 8, 12, 19, 55, 56, 57, 67, 68, 70, 141, 153, 207, 208, 213, 216 Heeley, T. R., 149, 154 Heil, M., 213, 217 Heilman, K. M., 161, 168 Heller, M. A., 2, 3, 4, 6, 7, 8, 12, 16, 19, 20, 35, 36, 44, 46, 49, 51, 52, 53, 54, 55, 56, 57, 59, 60, 64, 65, 66, 67, 68, 70, 71, 73, 74, 83, 92, 102, 117, 141, 153, 164, 168, 197, 200, 204, 205, 206, 207, 208, 213, 214, 215, 216 Helmholtz, H. Von., 30, 32, 33, 36, 46 Hendler, T., 9, 18, 142, 145, 146, 148, 149, 153, 161, 164, 167, 210, 215 Henninghausen, E., 162, 169, 213, 217 Henson, R., 146, 153 Herman, J. L., 62, 71 Hernandez-Reif, M., 90, 92 Hillyard, S. A., 9, 21, 213, 217 Hoffman, J. M., 163, 164, 169, 180, 194, 211, 213, 217 Holmes, E., 6, 20, Holtzman, J. D., 6, 18, 55, 69 Honda, M., 89, 92 Hopkins, R., 6, 20, 74, 92 Howard, I. P., 30, 46
Hradek, G. T., 213, 217 Hsiao, S. S., 179, 183, 184, 193, 194 Hubbard, E. M., 89, 93 Hughes, B., 6, 20, Humphrey, G. K., 4, 9, 15, 20, 73, 92, 114, 117, 140, 144, 146, 149, 152, 153, 154, 161, 168, 210, 211, 215 Hyman, B. T., 109, 110, 117
Ibáñez, V., 5, 21, 73, 93, 115, 118, 165, 169, 174, 175, 176, 194, 214, 217 Inhelder, B., 53, 71 Innocenti, G. M., 172, 193 Ittyerah, M., 29, 47 Itzchack, Y., 161, 168 Ivnik, R. J., 15, 20 Iwamura, Y., 31, 47
Jack, C. R., 15, 20, 110, 117 Jacobs, R., 192, 193 Jacobson, G., 9, 18, 142, 145, 146, 148, 149, 153, 161, 167, 210, 215 Jacoby, L. L., 98, 106, 117 Jakobson, L. S., 149, 154 James, K. H., 152, 154 James, T. W., 4, 9, 15, 20, 73, 92, 114, 117, 144, 146, 149, 152, 153, 154, 161, 163, 167, 168, 210, 211, 215 Jansson, G., 6, 13, 20, 80, 83, 87, 92, 122, 135, 201, 216 Jenkins, W. M., 9, 20, 213, 217 Jiang, H., 146, 154, 161, 168 Johansson, R. S., 179, 194 Johnson, G., 112, 113, 119 Johnson, K. A., 110, 115, 117 Johnson, K. O., 73, 85, 92, 178, 179, 183, 184, 193, 194 Johnston, R. S., 149, 154 Jordan, P. J., 149, 154 Joyner, T. D., 5, 6, 19,55, 65, 71 Judd, C. H., 35, 46 Juricevic, I., 50, 78, 79, 90, 92
Kajola, M., 181, 194 Kammer, T., 189, 193 Kanwisher, N., 146, 154
Kappers, A. M. L., 80, 92, 204, 216, 218 Katz, D., 28, 46, 122, 135 Kauffman, T., 2, 21, 178, 187, 188, 191, 193, 194, 212, 218 Kausler, D. H., 109, 117 Kawashima, R., 159, 160, 169 Keane, M. M., 15, 19, 110, 112, 113, 115, 117, 119 Keenan, J. P., 2, 21, 174, 176, 178, 180, 182, 190, 191, 193, 194, 212, 218 Kelly, E. F., 165, 168 Kemp, M., 6, 20, 208, 216 Kennedy, J. M., 5, 6, 19, 20, 50, 55, 56, 71, 73, 74, 75, 77, 78, 79, 83, 85, 87, 88, 90, 92, 93, 207, 216 Kennedy, W. A., 146, 154, 161, 168 Kennett, S., 2, 20, 54, 59, 71, 152, 154 Khan, S. C., 140, 154 Khorram-Sefat, D., 158, 159, 167 Kilgour, A. R., 20, 149, 154, 161, 162, 165, 168, 207, 216 Kimura, Y., 164, 168 Kingstone, A., 4, 21 Kiriakopoulos, E., 179, 187, 191, 193 Kirkpatrick, A. E., 132, 135 Kiyosawa, M., 164, 168 Kjelgaard, M. M., 15, 19, Klatzky, R. L., 5, 20, 51, 80, 84, 86, 103, 71, 92, 93, 117, 126, 135, 164, 168, 198, 201, 206, 216 Kleinschmidt, A., 158, 159, 167 Koenderink, J. J., 80, 92, 204, 216 Kokmen, E., 20 Koning, H., 122, 135 Kornbrot, D., 122, 135 Kosslyn, S. M., 6, 18, 55, 69, 147, 154, 176, 181, 190, 193, 194 Kraut, M., 142, 145, 153, 160, 161, 168 Kremen, S., 142, 145, 153, 160, 161, 168 Kufta, C., 176, 192 Kushnir, T., 161, 168 Kwong, K. K., 146, 154, 161, 168
LaMotte, R. H., 159, 169 Landau, B., 29, 46 Landerer, C., 75, 93 LaShell, L. S. R., 148, 153 LaVoie, D., 109, 110, 118 Le-Biham, D., 43, 47
Ledberg, A., 158, 159, 167 Ledden, P. J., 146, 154 Lederman, S. J., 5, 20, 51, 71, 84, 86, 93, 102, 103, 117, 118, 126, 135, 149, 154, 158, 160, 161, 162, 164, 165, 168, 169, 198, 201, 206, 207, 216 Lehr, S., 62, 71 Levent, N., 49, 69 Lianekhammy, J., 207, 217 Liben, L. S., 29, 46 Light, L. L., 108, 109, 110, 118 Lima, F., 7, 16, 19, 55, 70 Linderberger, U., 108, 116 Lindinger, G., 175, 194 Lobel, E., 43, 47 Locher, P. J., 3, 20, 202, 216 Logie, R. H., 28, 46 Lomonaco, S., 62 Loomis, J. M., 51, 71, 102, 118 Lopes, D. M. M., 74, 93 Lovelace, C. T., 172, 193
Macaluso, E., 152, 154 Maccabee, P. J., 176, 193 MacDonald, B., 163, 169 Macko, K., 163, 169 Maeda,, F., 182, 193 Malach, R., 9, 18, 142, 145, 146, 148, 149, 153, 154, 161, 164, 168, 210, 215 Manga, D., 3, 14, 18, 97, 102, 103, 106, 110, 116, 198, 202, 203, 209, 215 Mao, H., 159, 160, 163, 164, 169 Mapstone, H. C., 110, 115, 117 Maravita, A., 152, 154 Marmor, G. S., 162, 168 Martin, A., 146, 155 McCarthy, M., 204, 216 McDermott, J., 146, 154 McDermott, K. B., 98, 99, 106, 118 McGee, M. R., 127, 135 McGlone, F., 152, 153, 163, 165, 168 Menon, R. S., 4, 9, 15, 20, 114, 117, 144, 146, 149, 152, 154, 161, 168, 210, 211, 215 Merabet, L., 89, 93, 114 Merboldt, K.-D., 158, 159, 167 Meredith, M. A., 5, 21, 44, 47, 171, 194 Merzenich, M. M., 213, 217 Mesulam, M. M., 171, 193
Metzger, V. A., 103, 117, 198, 201, 216 Metzler, J., 162, 169 Mezernick, M. M., 9, 20 Milbrath, C., 75, 76, 91, 93 Millar, D., 205, 211, 215 Millar, S., 2, 3, 4, 5, 7, 11, 18, 20, 28, 29, 30, 31, 33, 34, 35, 37, 38, 39, 41, 42, 44, 45, 46, 47, 50, 71, 73, 93, 102, 103, 116, 118, 202, 203, 204, 215, 216, 217 Milner, A. D., 5, 16, 20, 149, 154 Mirabella, G., 75, 93 Mishkin, M., 160, 163, 169 Monaci, L., 13, 20, 201, 216 Monti, L. A., 110, 118 Moore, B. O., 43, 45 Moore, C. I., 165, 169 Moore, C., 100, 101, 104, 117, 209, 215 Morgan, M. J., 2, 20, 199, 217 Morrell, F., 15, 19, 110, 115, 117 Moses, F. L., 35, 47 Mottaghy, F. M., 182, 193 Müller, M. M., 179, 194 Muñiz, J., 7, 18, 201, 205, 215 Muñoz, J. A., 121, 135 Murray, E. A., 160, 169 Mussen, G., 98, 103, 118 Myers, D. S., 102, 117
Naito, E., 158, 159, 167 Nakano, H., 164, 168 Naville, H. J., 21 Navon, D., 65, 71 Neville, H. J., 9, 21, 172, 192, 213, 217 Newell, F. N., 54, 71, 141, 142, 154 Nicholls, A., 74, 77, 93 Nilson, L. G., 108, 109, 118 Noordzij, M. L., 204, 218 Norman, H. F., 207, 217 Norman, J. F., 207, 217 Norman, J., 30, 47 Nyberg, L., 108, 109, 118, 146, 153
O’Sullivan, B., 159, 160, 169 O’Brien, P. C., 15, 20, 110, 117 Oakley, I., 127, 135 Ochsner, K. N., 106, 119 Ohta, S., 163, 169 Ojima, H., 172, 192, 194
Ollinger, J. M., 175, 176, 192 Olofsson, U., 108, 109, 118 Oscar-Berman, M., 96, 116 Ottman, P. K., 61, 62, 71
Paillard, J., 44, 47 Paivio, A., 5, 20 Panofsky, E., 6, 20, 55, 71 Pantev, C., 179, 194 Pardo, J. V., 152, 153, 163, 168 Park, D. C., 108, 118 Parkin, A. J., 98, 118 Parks, T. E., 86, 93 Pascual-Leone, A., 2, 5, 9, 20, 21, 73, 89, 92, 93, 114, 115, 118, 173, 174, 175, 176, 177, 178, 179, 180, 182, 187, 188, 190, 191, 192, 192, 193, 194, 195, 212, 214, 217, 218 Patterson, M. Q. 212, 217 Peled, S., 9, 18, 142, 145, 146, 148, 149, 153, 161, 164, 167, 210, 215 Pelletier, J., 53, 69 Perrett, D. I., 149, 154 Perry, C. L., 102, 117 Petersen, M. E., 82, 93 Peterson, R. C., 15, 20, 82, 110, 117 Petrides, M., 162, 167 Petrie, H., 122, 135 Phillips, J. R., 179, 193, 194 Piaget, J., 53, 71 Pichora-Fuller, M. K., 108, 119 Pizzamiglio, L., 43, 47 Plaut, C. D., 101, 118 Podreka, I., 175, 194 Polster, M. R., 146, 154 Popper, K. R., 10, 20 Postle, B. R., 112, 113, 118 Postma, A., 204, 218 Prather, S. C., 159, 160, 162, 163, 164, 165, 169, 211, 217 Pressey, A. W., 36, 47 Pressey, C. A., 36, 47 Price, C., 175, 176, 192 Prull, M. W., 109, 118
Raichle, M. E., 175, 176, 192 Ramachadran, V. S., 89, 93 Ramloll, R., 122, 135
Randolph, M., 158, 169 Raskin, E., 61, 62, 71 Rassmuss, K., 122, 136 Raz, N., 108, 118 Reales, J. M., 3, 4, 7, 9, 14, 15, 18, 21, 97, 101, 102, 103, 106, 107, 108, 110, 111, 113, 116, 118, 141, 144, 148, 154, 161, 168, 198, 201, 202, 203, 205, 209, 211, 215, 217 Recanzone, G. H., 213, 217 Reed, C. L., 160, 169 Reed, C., 126, 135, 164, 168, 201, 216 Reiman, E., 146, 154 Reminger, S. L., 110, 118 Reminger, S., 115, 117 Reppas, J. B., 146, 154, 161, 168 Requardt, M., 158, 159, 167 Révész, G., 2, 21, 27, 28, 47, 51, 62, 71, 102, 118 Richardson, B., 86, 93 Riedel, B., 122, 135 Rinaldi, J., 115, 117 Robert, M., 53, 69 Rock, I., 30, 47 Rockland, K. S., 172, 192, 194 Rockstroh, B., 179, 194 Röder, B., 5, 9, 21, 162, 169, 213, 217 Roediger, H. L., 98, 99, 106, 118 Rogers, B. J., 30, 46 Rogers, G. J., 102, 117 Roland, P. E., 158, 159, 160, 165, 167, 159, 169 Roland, P., 159, 168 Rolls, E. T., 31, 43, 47 Romero, J. R., 182, 193 Romero, R., 182, 194 Romo, R., 165, 169 Rosen, B. R., 165, 169 Rösler, F., 5, 9, 21, 162, 169, 213, 217 Rothwell, J., 176, 194 Rouiller, E. M., 166, 169 Rudel, R. Q., 37, 47 Rudell, A. P., 176, 193 Rushworth, M., 174, 195 Russo, R., 98, 118 Ryan, T. A., 1, 21 Rybash, J. M., 109, 118
Sadato, N., 5, 21, 73, 89, 92, 93, 115, 118, 165, 169, 174, 175, 176, 194, 214, 217
Sakata, H., 31, 47 Salenius, S., 181, 194 Salik, S. S., 2, 6, 8, 12, 19, 44, 46, 55, 56, 57, 65, 67, 68, 70, 141, 153, 207, 208, 213, 216 Salinas, E., 165, 169 Salthouse, T. A., 108, 118 Santucci, R., 152, 153 Sathian, K., 5, 17, 21, 73, 93, 114, 118, 143, 155, 159, 160, 162, 163, 164, 165, 169, 170, 179, 180, 194, 195, 211, 213, 217, 218 Sathian, L., 206, 212, 215 Sato, S., 176, 192 Schacter, D. E., 100, 101, 117, 119, 209, 215 Schacter, D. L., 96, 98, 99, 101, 103, 104, 106, 109, 111, 117, 119, 146, 154, 209, 217 Schiff, W., 200, 216 Schneider, B. A., 108, 119 Schormann, T., 159, 168 Schreiner, C. E., 213, 217 Schultz, J., 204, 216 Scroggs, E., 2, 6, 7, 8, 12, 19, 44, 46, 52, 54, 55, 56, 57, 65, 66, 67, 68, 70, 141, 153, 207, 208, 213, 216 Seamon, J., 100, 116 Sedgwick, H. A., 88, 93 Sekuler, R., 36, 46 Semmes, J., 158, 169 Sergent, J., 163, 169 Servos, P., 4, 9, 15, 20, 114, 117, 144, 146, 149, 154, 158, 159, 160, 161, 168, 169, 210, 211, 215 Shallice, T., 146, 153 Shams, L., 8, 21 Shanley, M., 65, 71, 204, 216 Shepard, R. N., 162, 169 Sherrick, C., 200, 217 Shin, L. M., 147, 154 Shore, D. I., 4, 21 Shymojo, S., 8, 21 Simmons, R. W., 3, 20, 202, 216 Sjostrom, C., 122, 136 Skudlarski, P., 163, 168 Smith, G. E., 15, 20, Snyder, A. Z., 175, 176, 192 Snyder, L. H., 43, 44, 45, 47 Sobel, K. S., 152, 153 Sobel, K. V., 163, 167 Sparing, R., 182, 194 Spelke, E., 29, 46
Spence, C., 4, 21, 152, 154, 172, 193 Spencer, S., 15, 19, 110, 117 Spinnler, H., 96, 119 Squire, L. R., 96, 99, 119 Srinivas, J., 97, 108, 117 Srinivas, K., 4, 19, 97, 108, 117, 141, 148, 153, 161, 168, 198, 209, 211, 215 Srinivasan, M. A., 159, 169 Stanger, B. Z., 15, 19, Stark, H. A., 98, 119 Steffen, H., 2, 6, 8, 12, 19, 20, 35, 36, 46, 55, 56, 57, 60, 64, 65, 67, 68, 70, 71,83, 92, 141, 153, 205, 207, 208, 213, 216 Stein, B. E., 5, 21, 44, 47, 171, 194 Stein, J. F., 31, 47 Stern, C. E., 165, 169 Sterr, A., 9, 21, 179, 194, 213, 217 Stevens, J. C., 212, 217 Stoesz, M., 159, 160, 163, 164, 165, 169 Stone, R., 133, 136 Stone-Elander, S., 160, 169 St-Onge, R., 53, 69 Streri, A., 1, 21, 142, 148, 154 Sullivan, M. P., 15, 19, 110, 117 Summers, C. D., 51, 71, 206, 216 Symmons, M., 86, 93
Tangalos, E. G., 15, 20, 110, 117 Tanné-Gariépy, J., 166, 169 Tarazona, F., 182, 194 Tarr, M. J., 140, 154, 163, 168 Taub, E., 179, 194 Taylor-Clarke, M., 2, 20, 54, 59, 71 Teder-Salejarvi, W., 9, 21, 213, 217 Teuber, H. L., 37, 47 Theoret, H., 187, 188, 193 Theoret, H., 89, 93, 187, 188, 193 Thiagarajah, M. C., 206, 212, 215 Thompson, W., L., 176, 181, 190, 193, 194 Tjan, B. S., 54, 71, 141, 142, 154 Topka, H., 182, 193 Tormos, J. M., 182, 193, 194 Torres, F., 179, 194 Toyama, H., 164, 168 Treisman, A., 98, 103, 118 Tulving, E., 96, 97, 98, 99, 106, 119 Tyler, L. A., 6, 19, 51, 55, 56, 67, 70, 71, 207, 208, 216
Uecker, A., 146, 154 Uhl, F., 175, 194 Ungerleider, L. G., 163, 170 Ungerleider, L., 163, 169
Valdiserri, M., 109, 119 Valero-Cabre, A., 182, 193 Vallar, G., 43, 47 Valls-Sole, J., 177, 193 Van Boven, R. W., 2, 21, 178, 179, 191, 194, 178, 194, 212, 218 Van der Lubbe, R., 204, 218 Van Oesen, G. W., 109, 110, 117 Vanlierde, A., 164, 168 Vecchi., T., 5, 18 Verfaellie, M., 112, 113, 119 Vilis, T., 152, 153, 154 Von Senden, M., 28, 47 Votaw, J. R., 162, 163, 165, 169, 211, 217
Wake, H., 51, 71 Wallace,J. G., 2, 19 Wallace, M. T., 44, 47 Walsh, V., 174, 176, 194, 195 Waring, S. C., 15, 20, 110, 117 Warren, D. H., 28, 47 Wassermann, E. M., 189, 195 Weber, E. H., 28, 30, 47 Weisser, V. D., 159, 160, 163, 164, 165, 169 West, A. M., 133, 136 Widen, L., 160, 169 Wiggs, C. L., 146, 155 Willats, J., 74, 79, 91, 93 Wilson, D., 158, 159, 160, 169 Wilson, K., 2, 12, 20, 35, 36, 46, 60, 64, 65, 67, 70, 71, 83, 92, 205, 216, 213, 216 Wilson, R. S., 2, 15, 19, 110, 115, 117, 118 Witkin, H. A., 61, 62, 71 Wright, C. D., 80, 91 Wright, M. A., 80, 91
Xing, J., 44, 45 Xu, I. C., 15, 20, 110, 117
Yoneyama, K., 2, 12, 20, 35, 36, 46, 60, 64, 65, 67, 70, 71, 83, 92, 205, 213, 216 Yu Wai, 122, 135 Yun, L. S., 146, 154
Zaback, L. A., 162, 168 Zangaladze, A., 17, 21, 73, 93, 114, 118, 143, 155, 163, 164, 169, 170, 179, 180, 194, 195, 211, 213, 217, 218 Zhang, M., 159, 160, 163, 164, 165, 169 Zielke, G., 207, 217 Zilles, K., 158, 159, 160, 165, 167, 168 Zohary, E., 9, 18, 142, 145, 146, 148, 149, 153, 161, 164, 167, 210, 215 Zuidhoek, S., 204, 218
Subject Index
2-D form perception (see also Haptic pictures) 51, 162–164 3-D form perception, 3, 51, 100, 102, 145, 160–162
Alzheimer’s disease, 95–97, 108–115
Bilateral symmetry, 3, 202–203 Bimanual exploration, 3 Blindness adventitious blindness, 128 development, 7, 54, 73–74, 90–91, 172, 192 early blind, 5, 114, 172, 174–179, 181, 183. 186, 191 congenitally blind 2, 5–6, 8, 12–13, 17, 49, 51–56, 62–63, 66, 74, 80, 83, 115, 128, 174 visual experience 5–6, 25, 49–50, 52, 54–56, 58, 60, 62, 64, 66, 67, 202, 213 Braille reading, 2–3, 7, 17, 54–55, 65, 88, 114, 122, 128, 157, 174–182, 186–191 Brain plasticity, 9, 17, 212–214
Distal sense, touch as, 28–29, 140 Dorsal versus ventral pathways, 16–17, 89, 114, 163, 165, 166, 211, 213 Drawings by the blind, 5–7, 11–13, 51–61, 73–80 Gaia, 74–77 Tracy, 78–80
Egocentric references (see Spatial perception, body centered reference view) Embedded figures test, 62–63 Extrastriate cortex, 114–115, 142–144, 160, 165
Face perception, (see Prosopagnosia) Functional magnetic resonance (fMRI), 114, 142–144, 158, 174–175
Grating orientation task (GOT), 17, 177–180
Haptic perception (see also Perception) continuity, theory of, 86–7 direction and depth, 13, 54, 82 figure-ground (see also Embedded figures test), 7, 12, 82–83, 89, 205, grouping, surface edges, 83 haptic pictures, perception, 6, 11–13, 54–64, 84, 88–91 shape and borders, 87–88 texture, 2, 4, 36, 38, 51, 59, 83, 84, 97, 102, 126, 129, 133, 140, 149 touch perception, 2, 11, 85–86 vantage points, 6, 55, 80–82 Haptics blindfolded Subjects, in, 68 blindness, in, 6–8 illusions, 8, 11, 64–65, 67, 204 intersensory equivalence, 50, 64–65 reality sense, as a, 40, 67 tactile system, 139
haptic objects, 101–103 visual objects, 97–99 long term, 97 semantic, 98 Mental rotation task, 162 Motor and premotor cortical areas, 165–167 Multimodal cortical areas (see Multimodal perception) Multimodal perception, 4–5, 139, 160–162, 191–192
Neuroimaging findings, 210–211 studies, 17, 157 techniques, 197–198
Occipital cortex, 15, 143, 174–177, 181, 188–192 Illusions haptic illusions, (see Haptics) horizontal-vertical illusion (see also T shapes) 8, 64–65, 67 Müller-Lyer shapes, 8, 35–38, 64–65 Ponzo, 65, 67 T-shapes, 38, 42, 64 veering, 32–34 Imagery in the blind, 55 touch, 5–8 visual, 35–38, 164–165 Intersensory equivalence, 1–2, 64, 88–91, 139 Intraparietal sulcus (IPS), 159
Lateral occipital complex (LOC), 17, 143–149, 151, 161–166, 210
Memory episodic, 98 implicit and explicit memory age-related changes and Alzheimer, 14, 108–115
Perception (see also Haptic pictures) bottom-up processes, 88 grouping and edges, 83–84 multimodal, 4, 69 orientation, 2,7, 8, 17, 33–34, 60, 62, 75–76, 79, 81, 84–86, 100–101, 114, 132–133, 143, 162–164, 166, 177–178, 181 perspective, 6, 12–13, 55–56, 74, 79 roughness vs. distance, 182–185 top-down processes, 89 visual vs. haptic, 4–5, 16, 27–28, 36, 49–51, 139–140, 152 Perceptual memory, 97 Perceptual selectivity, 60–64 Piaget, 53–54 Picture matching, 52–66 Poscentral sulcus (PCS), 158 Positron emission tomography (PET), 114, 158, 162, 174 Priming in Alzheimer, (see Implicit and Explicit) memory: age related changes cross-modal, 5, 106–108, 141, 144, 146–149, 209
haptic, 14, 103–106, 145–146, 209 neuroimaging studies, 114, 142–143 visual, 14, 99–101, 145–146, 209 Prosopagnosia, 149, 161 Proximal sense, 28
Reference hypothesis, 26, 36–38 frames of reference, 2, 10, 27, 39–42, 53, 201–204
Temporal-parietal-occipital junction, 89 Touch, 3 imagery, 5–8 neuropsychology, 209–212 neuroscience, 8–9 psychology, 200–201 visual experience, 5–8 Transcranial magnetic simulation (TMS), 115, 142, 176–177
Unimanual exploration, 3 Second somatosensory cortex (SII), 160, 177 Sensory compensation, 212–214 Sensory substitution, 50, 199–200, 212–214 Sensory systems, 115, 139–140, 152, 171–173 Somatosensory cortical processing, 15, 158–160 Spatial accuracy, 31 Spatial cognition and blindness, 204–208 in touch, 50–51 Spatial perception, 5 amodal view, 29–30 constructivist view, 30 direct and indirect perception views, 30 body centered reference view, 10–11, 27–29, 33, 201–204 vs. modality specific tasks, 31 Spatial reasoning, 7, 53–54 Spatial tasks, 17, 31, 39 haptic, 7 Speeded object naming task, 103–104 Superior culliculus, 89 Supramarginal gyrus (SMG), 159 Symmetry judgement task, 105
Tangible pictures, 11–12, 49, 51, 66 depth relations, 58–60, 66 perception of, 141 perspective (see also Reference hypothesis, frames of reference)
Vantage points, 80–82 Viewpoint, viewing angle, 6, 54, 140–142 Viewpoint dependence, 56–60, 140–142 Virtual reality methods, 13 GRAB project, 121 GRAB system, 123–127 haptic interface123–124 functionality, 126 haptic geometric modeller, 125 integration strategy, 125 usability, 134 objectives, 121–122 validation of prototypes, 127–135 Visual agnosia, 149 Visual cortex (see also Occipital cortex) 114, 172–173 Visual guidance, 2, 59–60 Visual imagery, 5, 49, 51, 143–144, 146–148, 164–165 Visual system, 139–140
Water-level problem (see Spatial reasoning)