THE ADAPTIVE BRAIN II
Vision, Speech, Language, and Motor Control

ADVANCES IN PSYCHOLOGY 43

Editors
G.E. STELMACH
P.A. VROON

NORTH-HOLLAND · AMSTERDAM · NEW YORK · OXFORD · TOKYO
THE ADAPTIVE BRAIN II
Vision, Speech, Language, and Motor Control
Edited by
Stephen GROSSBERG
Center for Adaptive Systems, Boston University, Boston, Massachusetts, U.S.A.
1987
NORTH-HOLLAND · AMSTERDAM · NEW YORK · OXFORD · TOKYO
© ELSEVIER SCIENCE PUBLISHERS B.V., 1987
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner.
ISBN: 0 444 70118 4
ISBN Set: 0 444 70119 2
The other volume in this set is: The Adaptive Brain I: Cognition, Learning, Reinforcement, and Rhythm, S. Grossberg, Ed. (1987). This is volume 42 in the North-Holland series Advances in Psychology, ISBN: 0 444 70117 6.
Publishers:
ELSEVIER SCIENCE PUBLISHERS B.V., P.O. Box 1991, 1000 BZ Amsterdam, The Netherlands
Sole distributors for the U.S.A. and Canada: ELSEVIER SCIENCE PUBLISHING COMPANY, INC., 52 Vanderbilt Avenue, New York, N.Y. 10017, U.S.A.
PRINTED IN THE NETHERLANDS
Dedicated to
Jacob Beck and George Sperling
With Admiration
EDITORIAL PREFACE
The mind and brain sciences are experiencing a period of explosive development. In addition to experimental contributions which probe the widest possible range of phenomena with stunning virtuosity, a true theoretical synthesis is taking place. The remarkable multiplicity of behaviors, of levels of behavioral and neural organization, and of experimental paradigms and methods for probing this complexity present a formidable challenge to all serious theorists of mind. The challenge is, quite simply, to discover unity behind this diversity by characterizing a small set of theoretical principles and mechanisms capable of unifying and predicting large and diverse data bases as manifestations of fundamental processes. Another part of the challenge is to explain how mind differs from better understood physical systems, and to identify what is new in the theoretical methods that are best suited for a scientific analysis of mind.

These volumes collect together recent articles which provide a unified theoretical analysis and predictions of a wide range of important psychological and neurological data. These articles illustrate the development of a true theory of mind and brain, rather than just a set of disconnected models with no predictive generality. In this theory, a small number of fundamental dynamical laws, organizational principles, and network modules help to compress a large data base. The theory accomplishes this synthesis by showing how these fundamental building blocks can be used to design specialized circuits in different neural systems and across species. Such a specialization is analogous to using a single Schrödinger equation in quantum mechanics to analyse a large number of different atoms and molecules.

The articles collected herein represent a unification in yet another sense. They were all written by scientists within a single research institute, the Center for Adaptive Systems at Boston University.
The fact that a single small group of scientists can theoretically analyse such a broad range of data illustrates both the power of the theoretical methods that they employ and the crucial role of interdisciplinary thinking in achieving such a synthesis. It also argues for the benefits that can be derived from supporting more theoretical training and research programs within the traditional departments charged with an understanding of mind and brain phenomena.

My colleagues and I at the Center for Adaptive Systems have repeatedly found that fundamental processes governing mind and brain can best be discovered by analysing how the behavior of individuals successfully adapts in real-time to constraints imposed by the environment. In other words, principles and laws of behavioral self-organization are rate-limiting in determining the design of neural processes, and problems of self-organization are the core issues that distinguish mind and brain studies from the more traditional sciences. An analysis of real-time behavioral adaptation requires that one identify the functional level on which an individual's behavioral success is defined. This is not the level of individual nerve cells. Rather it is the level of neural systems. Many examples can now be given to illustrate the fact that one cannot, in principle, determine the properties which govern behavioral success from an analysis of individual cells alone. An analysis of individual cells is insufficient because behavioral properties are often emergent properties due to interactions among cells. Different types of specialized neural circuits govern different combinations of emergent behavioral properties.
On the other hand, it is equally incorrect to assume that the properties of individual cells are unimportant, as many proponents of artificial intelligence have frequently done to promote the untenable claim that human intelligence can be understood through an analysis of von Neumann computer architectures. Carefully designed single cell properties are joined to equally carefully designed neural circuits to generate the subtle relationships among emergent behavioral properties that are characteristic of living organisms. In order to adequately define these circuits and to analyse their emergent behavioral properties, mathematical analysis and computer simulation play a central role. This is inevitable because self-organizing behavioral systems obey nonlinear laws and often contain very large numbers of interacting units. The mathematical theory that has emerged from this analysis embodies a biologically relevant artificial intelligence, as well as contributing new ideas to nonlinear dynamical systems, adaptive control theory, geometry, statistical mechanics, information theory, decision theory, and measurement theory. This mathematical work thus illustrates that new mathematical ideas are needed to describe and analyse the new principles and mechanisms which characterize behavioral self-organization.

We trace the oceans of hyperbole, controversy, and rediscovery which still flood our science to the inability of some investigators to fully let go of inappropriate technological metaphors and nineteenth century mathematical concepts. Although initially attractive because of their simplicity and accessibility, these approaches have regularly shown their impotence when they are confronted by a nontrivial set of the phenomena that they have set out to explain. A unified theoretical understanding cannot be achieved without an appropriate mathematical language in our science any more than in any other science.
A scientist who comes for the first time to such a new theoretical enterprise, embedded in such a confusing sociological milieu, may initially become disoriented. The very fact that behavioral, neural, mathematical, and computer analyses seem to permeate every issue defies all the traditional departmental boundaries and intellectual prejudices that have separated investigators in the past. After this initial period of disorientation passes, however, such a scientist can begin to reap handsome intellectual rewards. New postdoctoral fellows at the Center for Adaptive Systems have, for example, arrived with a strong training in experimental psychology augmented by modest mathematical and computer coursework, yet have found themselves within a year performing advanced analyses and predictions of previously unfamiliar neural data through computer simulations of real-time neural networks. The theoretical method itself and the foundation of knowledge to which it has already led can catapult a young investigator to the forefront of research in an area which would previously have required a lifetime of study. We have often found that problems which seemed impossible without the theory became difficult but tractable with it.

In summary, the articles in these volumes illustrate a theoretical approach which analyses how brain systems are designed to form an adaptive relationship with their environment. Instead of limiting our consideration to a few performance characteristics of a behaving organism, we consider the developmental and learning problems that a system as a whole must solve before accurate performance can be achieved. We do not take accurate performance for granted, but rather analyse the organizational principles and dynamical mechanisms whereby it is achieved and maintained. Such an analysis is necessary if only because an analysis of performance per se does not impose sufficiently many constraints to determine underlying control mechanisms.
The unifying power of such theoretical work is due, we believe, to the fact that principles of adaptation, such as the laws governing development and learning, are fundamental in determining the design of behavioral mechanisms.

A preface precedes each article in these volumes. These commentaries link the articles together, highlight some of their major contributions, and comment upon future directions of research. The work reported within these articles has been supported by
the Air Force Office of Scientific Research, the Army Research Office, the National Science Foundation, and the Office of Naval Research. We are grateful to these agencies for making this work possible. We are also grateful to Cynthia Suchta for doing a marvelously competent job of typing and formatting the text, and to Jonathan Marshall for expertly preparing the index and proofreading the text. Beth Sanfield and Carol Yanakakis also provided valuable assistance.

Stephen Grossberg
Boston, Massachusetts
March, 1986
TABLE OF CONTENTS

CHAPTER 1: THE QUANTIZED GEOMETRY OF VISUAL SPACE: THE COHERENT COMPUTATION OF DEPTH, FORM, AND LIGHTNESS
1. Introduction: The Abundance of Visual Models
PART I
2. The Quantized Geometry of Visual Space
3. The Need for Theories Which Match the Data's Coherence
4. Some Influences of Perceived Depth on Perceived Size
5. Some Monocular Constraints on Size Perception
6. Multiple Scales in Figure and Ground: Simultaneous Fusion and Rivalry
7. Binocular Matching, Competitive Feedback, and Monocular Self-Matching
8. Against the Keplerian View: Scale-Sensitive Fusion and Rivalry
9. Local versus Global Spatial Scales
10. Interaction of Perceived Form and Perceived Position
11. Some Influences of Perceived Depth and Form on Perceived Brightness
12. Some Influences of Perceived Brightness on Perceived Depth
13. The Binocular Mixing of Monocular Brightnesses
14. The Insufficiency of Disparity Computations
15. The Insufficiency of Fourier Models
16. The Insufficiency of Linear Feedforward Theories
17. The Filling-In Dilemma: To Have Your Edge and Fill-In Too
PART II
18. Edges and Fixations: The Ambiguity of Statistically Uniform Regions
19. Object Permanence and Multiple Spatial Scales
20. Cooperative versus Competitive Binocular Interactions
21. Reflectance Processing, Weber Law Modulation, and Adaptation Level in Feedforward Shunting Competitive Networks
22. Pattern Matching and Multidimensional Scaling Without a Metric
23. Weber Law and Shift Property Without Logarithms
24. Edge, Spatial Frequency, and Reflectance Processing by the Receptive Fields of Distance-Dependent Feedforward Networks
25. Statistical Analysis by Structural Scales: Edges With Scaling and Reflectance Properties Preserved
26. Correlation of Monocular Scaling With Binocular Fusion
27. Noise Suppression in Feedback Competitive Networks
28. Sigmoid Feedback Signals and Tuning
29. The Interdependence of Contrast Enhancement and Tuning
30. Normalization and Multistability in a Feedback Competitive Network: A Limited Capacity Short Term Memory System
31. Propagation of Normalized Disinhibitory Cues
32. Structural versus Functional Scales
33. Disinhibitory Propagation of Functional Scaling From Boundaries to Interiors
34. Quantization of Functional Scales: Hysteresis and Uncertainty
35. Phantoms
36. Functional Length and Emmert's Law
37. Functional Lightness and the Cornsweet Effect
38. The Monocular Length-Luminance Effect
39. Spreading FIRE: Pooled Binocular Edges, False Matches, Allelotropia, Binocular Brightness Summation, and Binocular Length Scaling
40. Figure-Ground Separation by Filling-In Barriers
41. The Principle of Scale Equivalence and the Curvature of Activity-Scale Correlations: Fechner's Paradox, Equidistance Tendency, and Depth Without Disparity
42. Reflectance Rivalry and Spatial Frequency Detection
43. Resonance in a Feedback Dipole Field: Binocular Development and Figure-Ground Completion
44. Binocular Rivalry
45. Concluding Remarks About Filling-In and Quantization
Appendix
References
CHAPTER 2: NEURAL DYNAMICS OF FORM PERCEPTION: BOUNDARY COMPLETION, ILLUSORY FIGURES, AND NEON COLOR SPREADING
1. Illusions as a Probe of Adaptive Visual Mechanisms
2. From Noisy Retina to Coherent Percept
3. Boundary Contour System and Feature Contour System
4. Boundary Contours and Boundary Completion
5. Feature Contours and Diffusive Filling-In
6. Macrocircuit of Processing Stages
7. Neon Color Spreading and Complementary Color Induction
8. Contrast, Assimilation, and Grouping
9. Boundary Completion: Positive Feedback Between Local Competition and Long-Range Cooperation of Oriented Boundary Contour Segments
10. Boundary Completion as a Statistical Process: Textural Grouping and Object Recognition
11. Perpendicular versus Parallel Contour Completion
12. Spatial Scales and Brightness Contrast
13. Boundary-Feature Trade-Off: Orientational Uncertainty and Perpendicular End Cutting
14. Induction of "Real" Contours Using "Illusory" Contour Mechanisms
15. Gated Dipole Fields
16. Boundary Completion: Oriented Cooperation Among Multiple Spatial Scales
17. Computer Simulations
18. Brightness Paradoxes and the Land Retinex Theory
19. Related Data and Concepts About Illusory Contours
20. Cortical Data and Predictions
21. Concluding Remarks
Appendix: Dynamics of Boundary Formation
References
CHAPTER 3: NEURAL DYNAMICS OF PERCEPTUAL GROUPING: TEXTURES, BOUNDARIES, AND EMERGENT SEGMENTATIONS
1. Introduction: Towards A Universal Set of Rules for Perceptual Grouping
2. The Role of Illusory Contours
3. Discounting the Illuminant: Color Edges and Featural Filling-In
4. Featural Filling-In Over Stabilized Scenic Edges
5. Different Rules for Boundary Contours and Feature Contours
6. Boundary-Feature Trade-off: Every Line End Is Illusory
7. Parallel Induction by Edges versus Perpendicular Induction by Line Ends
8. Boundary Completion via Cooperative-Competitive Feedback Signaling: CC Loops and the Statistics of Grouping
9. Form Perception versus Object Recognition: Invisible but Potent Boundaries
10. Analysis of the Beck Theory of Textural Segmentation: Invisible Colinear Cooperation
11. The Primacy of Slope
12. Statistical Properties of Oriented Receptive Fields: OC Filters
13. Competition Between Perpendicular Subjective Contours
14. Multiple Distance-Dependent Boundary Contour Interactions: Explaining Gestalt Rules
15. Image Contrasts and Neon Color Spreading
16. Computer Simulations of Perceptual Grouping
17. On-Line Statistical Decision Theory and Stochastic Relaxation
18. Correlations Which Cannot Be Perceived: Simple Cells, Complex Cells, and Cooperation
19. Border Locking: The Café Wall Illusion
20. Boundary Contour System Stages: Predictions About Cortical Architectures
21. Concluding Remarks: Universality of the Boundary Contour System
Appendix: Boundary Contour System Equations
References
CHAPTER 4: NEURAL DYNAMICS OF BRIGHTNESS PERCEPTION: FEATURES, BOUNDARIES, DIFFUSION, AND RESONANCE
1. Paradoxical Percepts as Probes of Adaptive Processes
2. The Boundary-Contour System and the Feature Contour System
3. Boundary Contours and Boundary Completion
4. Feature Contours and Diffusive Filling-In
5. Macrocircuit of Processing Stages
6. FIRE: Resonant Lifting of Preperceptual Data into a Form-in-Depth Percept
7. Binocular Rivalry, Stabilized Images, and the Ganzfeld
8. The Interplay of Controlled and Automatic Processes
9. Craik-O'Brien Luminance Profiles and Multiple Step Illusions
10. Smoothly Varying Luminance Contours versus Steps of Luminance Change
11. The Asymmetry Between Brightness Contrast and Darkness Contrast
12. Simulations of FIRE
13. Fechner's Paradox
14. Binocular Brightness Averaging and Summation
15. Simulation of a Parametric Binocular Brightness Study
16. Concluding Remarks
Appendix A
Appendix B
References
CHAPTER 5: ADAPTATION AND TRANSMITTER GATING IN VERTEBRATE PHOTORECEPTORS
1. Introduction
2. Transmitters as Gates
3. Intracellular Adaptation and Overshoot
4. Monotonic Increments and Nonmonotonic Overshoots to Flashes on Variable Background
5. Miniaturized Transducers and Enzymatic Activation of Transmitter Production
6. Turn-Around of Potential Peaks at High Background Intensities
7. Double Flash Experiments
8. Antagonistic Rebound by an Intracellular Dipole: Rebound Hyperpolarization Due to Current Offset
9. Coupling of Gated Input to the Photoreceptor Potential
10. "Extra" Slow Conductance During Overshoot and Double Flash Experiments
11. Shift Property and its Relationship to Enzymatic Modulation
12. Rebound Hyperpolarization, Antagonistic Rebound, and Input Doubling
13. Transmitter Mobilization
14. Quantitative Analysis of Models
15. Comparison with the Baylor, Hodgkin, Lamb Model
16. Conclusion
References
CHAPTER 6: THE ADAPTIVE SELF-ORGANIZATION OF SERIAL ORDER IN BEHAVIOR: SPEECH, LANGUAGE, AND MOTOR CONTROL
1. Introduction: Principles of Self-organization in Models of Serial Order: Performance Models versus Self-organizing Models
2. Models of Lateral Inhibition, Temporal Order, Letter Recognition, Spreading Activation, Associative Learning, Categorical Perception, and Memory Search: Some Problem Areas
3. Associative Learning by Neural Networks: Interactions Between STM and LTM
4. LTM Unit is a Spatial Pattern: Sampling and Factorization
5. Outstar Learning: Factorizing Coherent Patterns From Chaotic Activity
6. Sensory Expectations, Motor Synergies, and Temporal Order Information
7. Ritualistic Learning of Serial Behavior: Avalanches
8. Decoupling Order and Rhythm: Nonspecific Arousal as a Velocity Command
9. Reaction Time and Performance Speed-Up
10. Hierarchical Chunking and the Learning of Serial Order
11. Self-Organization of Plans: The Goal Paradox
12. Temporal Order Information in LTM
13. Read-Out and Self-Inhibition of Ordered STM Traces
14. The Problem of STM-LTM Order Reversal
15. Serial Learning
16. Rhythm Generators and Rehearsal Waves
17. Shunting Competitive Dynamics in Pattern Processing and STM: Automatic Self-Tuning by Parallel Interactions
18. Choice, Contrast Enhancement, Limited STM Capacity, and Quenching Threshold
19. Limited Capacity Without a Buffer: Automaticity versus Competition
20. Hill Climbing and the Rich Get Richer
21. Instar Learning: Adaptive Filtering and Chunking
22. Spatial Gradients, Stimulus Generalization, and Categorical Perception
23. The Progressive Sharpening of Memory: Tuning Prewired Perceptual Categories
24. Stabilizing the Coding of Large Vocabularies: Top-Down Expectancies and STM Reset by Unexpected Events
25. Expectancy Matching and Adaptive Resonance
26. The Processing of Novel Events: Pattern Completion versus Search of Associative Memory
27. Recognition, Automaticity, Primes, and Capacity
28. Anchors, Auditory Contrast, and Selective Adaptation
29. Training of Attentional Set and Perceptual Categories
30. Circular Reactions, Babbling, and the Development of Auditory-Articulatory Space
31. Analysis-By-Synthesis and the Imitation of Novel Events
32. A Moving Picture of Continuously Interpolated Terminal Motor Maps: Coarticulation and Articulatory Undershoot
33. A Context-Sensitive STM Code for Event Sequences
34. Stable Unitization and Temporal Order Information in STM: The LTM Invariance Principle
35. Transient Memory Span, Grouping, and Intensity-Time Tradeoffs
36. Backward Effects and Effects of Rate on Recall Order
37. Seeking the Most Predictive Representation: All Letters and Words are Lists
38. Spatial Frequency Analysis of Temporal Patterns by a Masking Field: Word Length and Superiority
39. The Temporal Chunking Problem
40. The Masking Field: Joining Temporal Order to Differential Masking via an Adaptive Filter
41. The Principle of Self-Similarity and the Magic Number 7
42. Developmental Equilibration of the Adaptive Filter and its Target Masking Field
43. The Self-Similar Growth Rule and the Opposites Attract Rule
44. Automatic Parsing, Learned Superiority Effects, and Serial Position Effects During Pattern Completion
45. Gray Chips or Great Ships?
46. Sensory Recognition versus Motor Recall: Network Lesions and Amnesias
47. Four Types of Rhythm: Their Reaction Times and Arousal Sources
48. Concluding Remarks
Appendix: Dynamical Equations
References
CHAPTER 7: NEURAL DYNAMICS OF WORD RECOGNITION AND RECALL: ATTENTIONAL PRIMING, LEARNING, AND RESONANCE
1. Introduction
2. Logogens and Embedding Fields
3. Verification by Serial Search
4. Automatic Activation and Limited-Capacity Attention
5. Interactive Activation and Parallel Access
6. The View from Adaptive Resonance Theory
7. Elements of the Microtheory: Tuning, Categories, Matching, and Resonance
8. Counting Stages: Resonant Equilibration as Verification and Attention
9. Attentional Gain Control versus Attentional Priming: The 2/3 Rule
10. A Macrocircuit for the Self-organization of Recognition and Recall
11. The Schvaneveldt-McDonald Lexical Decision Experiments: Template Feedback and List-Item Error Trade-off
12. Word Frequency Effects in Recognition and Recall
13. Analysis of the Underwood and Freund Theory
14. Analysis of the Mandler Theory
15. The Role of Intra-List Restructuring and Contextual Associations
16. An Explanation of Recognition and Recall Differences
17. Concluding Remarks
References
CHAPTER 8: NEURAL DYNAMICS OF SPEECH AND LANGUAGE CODING: DEVELOPMENTAL PROGRAMS, PERCEPTUAL GROUPING, AND COMPETITION FOR SHORT TERM MEMORY
1. Introduction: Context-Sensitivity of Self-organizing Speech and Language Units
2. Developmental Rules Imply Cognitive Rules as Emergent Properties of Neural Network Interactions
3. A Macrocircuit for the Self-organization of Recognition and Recall
4. Masking Fields
5. The Temporal Chunking Problem: Seeking the Most Predictive Representation
6. The Word Length Effect
7. All Letters Are Sublists: Which Computational Units Can Self-organize?
8. Self-organization of Auditory-Motor Features, Items, and Synergies
9. Temporal Order Information Across Item Representations: The Spatial Recoding of Temporal Order
10. The LTM Invariance Principle
11. The Emergence of Complex Speech and Language Units
12. List Chunks, Recognition, and Recall
13. The Design of a Masking Field: Spatial Frequency Analysis of Item-Order Information
14. Development of a Masking Field: Random Growth and Self-similar Growth
15. Activity-Contingent Self-similar Cell Growth
16. Sensitivity to Multiple Scales and Intrascale Variations
17. Hypothesis Formation, Anticipation, Evidence, and Prediction
18. Computer Simulations
19. Shunting On-Center Off-Surround Networks
20. Mass Action Interaction Rules
21. Self-similar Growth Within List Nodes
22. Conservation of Synaptic Sites
23. Random Growth from Item Nodes to List Nodes
24. Self-similar Competitive Growth Between List Nodes
25. Contrast Enhancement by Sigmoid Signal Functions
26. Concluding Remarks: Grouping and Recognition Without Algorithms or Search
Appendix
References
AUTHOR INDEX

SUBJECT INDEX
Chapter 1

THE QUANTIZED GEOMETRY OF VISUAL SPACE: THE COHERENT COMPUTATION OF DEPTH, FORM, AND LIGHTNESS

Preface
The article which forms this Chapter introduces an ambitious research program aimed at creating a unified theory of preattentive visual perception; that is, a unified theory of 3-dimensional form, color, and brightness perception, including depth, texture, surface, and motion perception. The theory has since been developing very rapidly and has led to many new ideas and predictive successes. Four of the major published articles of the theory are contained in this volume (Chapters 1-4). In my prefaces, I highlight some of the key issues and directions for future research.

As in all the articles in these volumes, the same small set of dynamical laws and mechanisms is used. What sets the different applications apart is not their local mechanisms. What sets them apart are the specialized circuits, built up from a common set of mechanisms, which have evolved to adaptively solve particular classes of environmental problems. The present theory became possible when sufficiently many of these mechanisms and circuits were discovered in other applications, notably during the development of adaptive resonance theory (Volume I and Chapters 6-8), to notice their applicability to visual perception.

The theory in this Chapter was built up from two types of general purpose cooperative-competitive networks. The simpler type of network is an on-center off-surround network with feedforward pathways whose cells obey mass action, or shunting, laws. I showed that such a network generates a constellation of emergent properties that is of fundamental importance in visual perception, no less than in many other applications. A single such network is capable of: reflectance processing, conservation or normalization of total activation (limited capacity), Weber law modulation, adaptation level processing, noise suppression, shift property, ratio-sensitive edge processing, power law invariance, spatial frequency sensitivity,
energetic amplification of matched input patterns, and energetic suppression of mismatched input patterns.

The second type of general purpose network is an on-center off-surround network with feedback pathways whose cells obey mass action, or shunting, laws. Such a network is capable of contrast enhancement, short term memory storage, normalization of total activation, multistability, hysteresis, noise suppression, and propagation of reflectance-sensitive and spatial frequency-sensitive standing waves.

With these mechanisms and their constellations of emergent properties as tools, I was able to address variants of the following basic question: How are ambiguous local visual cues bound together into unambiguous global context-sensitive percepts? To this end, Part I of the article reviews data concerning context-sensitive interactions between properties of depth, brightness, color, and form, as well as the inability of various models to explain these interactions. Some of the issues raised by these interactions are: Why are binocular rivalry and binocular fusion two alternative visual modes? How can fusion occur with respect to one spatial scale while rivalry simultaneously occurs with respect to another spatial scale at the same region of perceptual space? How does rivalry inhibit the visibility of percepts that would be visible when viewed monocularly? How do binocular matches at a sparse number of scenic locations impart unambiguous depth to large binocularly ambiguous regions? Moreover, how do the perceptual qualities, such as color and brightness, of these ambiguous regions appear to inherit these depth values? How do we perceive flat surfaces as flat despite the fact that the binocular fixation point is a zero disparity point, and all other unambiguous binocular matches have increasing disparity as a function of their eccentricity from the fixation point?

Such concerns lead to the realization that either binocularly fused edges or monocularly viewed edges, but not binocularly mismatched edges, can trigger a filling-in process which is capable of rapidly lifting perceptual qualities, such as brightness and color, into a multiple-scale representation of form-and-color-in-depth. In order to understand how edge matches trigger filling-in, yet edge mismatches suppress filling-in, I introduced the concepts of filling-in generator (matched edge) and filling-in barrier (mismatched edge). I showed how to design a cooperative-competitive feedback network which extracts edges from pairs of monocular input patterns, binocularly matches the edges, and feeds the results back toward the monocular patterns. Matched edges then automatically lift a binocularly fused representation of the monocular patterns up to the binocular perceptive field, and fill-in this binocular representation until a filling-in barrier (mismatched edge) is reached. Mismatched edges do not lift their monocular input patterns into a binocular representation. Thus the binocular filling-in process is triggered by monocularly viewed edges or binocularly matched edges, but not by binocularly mismatched edges. I call such a process a filling-in resonant exchange, or FIRE. The Weber law properties of the binocular FIRE process enable the binocular activation levels to mimic binocular brightness data. Computer simulations of the FIRE process quantitatively demonstrate this property (Chapter 4). The FIRE process also clarifies many other visual data which are summarized in the Chapter. In particular, it was shown how a gated dipole field could be embedded within the FIRE process to generate some properties of binocular rivalry.
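Several of the emergent properties attributed above to the feedforward shunting on-center off-surround network follow directly from its equilibrium solution, and can be checked numerically. The sketch below is illustrative only: the decay rate A, saturation level B, and the sample inputs are my own generic choices, not parameters taken from the article.

```python
import numpy as np

def shunting_equilibrium(I, A=1.0, B=1.0):
    """Equilibrium of a feedforward shunting on-center off-surround
    network obeying the mass action law
        dx_i/dt = -A*x_i + (B - x_i)*I_i - x_i * sum(I_k for k != i).
    Setting dx_i/dt = 0 and solving gives
        x_i = B * I_i / (A + sum_k I_k),
    so each cell reports the *ratio* of its input to the total input
    (reflectance processing), modulated by total intensity (Weber law)."""
    I = np.asarray(I, dtype=float)
    return B * I / (A + I.sum())

dim = shunting_equilibrium([2.0, 4.0, 2.0])        # a scene...
bright = shunting_equilibrium([20.0, 40.0, 20.0])  # ...10x more intense

# The relative activity pattern is preserved across intensity changes,
# and total activation stays below B however intense the input becomes
# (conservation / normalization of total activation: limited capacity).
print(dim, dim.sum())
print(bright, bright.sum())
```

Doubling or tenfold-scaling every input leaves the activity ratios unchanged while the summed activity saturates toward B, which is the normalization property the text describes.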
The FIRE theory is based upon a single edge-driven filling-in process. As my colleagues Michael Cohen, Ennio Mingolla, and I began to quantitatively simulate more and more brightness and form data, it gradually became clear that a different sort of filling-in, called diffusive filling-in, preprocesses monocular input patterns before they activate the FIRE process. This insight gradually led to the realization that a pair of distinct edge-driven systems exist, one devoted to boundary formation and segmentation and the other devoted to color and brightness detection and filling-in. Chapters 2 and 3 describe the rules of these systems and illustrate how their interactions can explain monocular form, color, and brightness percepts.
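The generator/barrier idea behind edge-driven filling-in can be illustrated with a toy one-dimensional diffusion: featural activity injected at an edge spreads freely between neighboring cells but cannot cross a barrier location. This is only a schematic of the concept, not the article's equations; the grid size, diffusion constant, and step count are arbitrary choices of mine.

```python
import numpy as np

def fill_in(feature, boundary, D=0.45, steps=2000):
    """Toy 1-D edge-gated filling-in: activity diffuses between
    neighboring cells, except across cells flagged as barriers,
    where the permeability of the connection is driven to zero."""
    v = feature.astype(float).copy()
    perm = 1.0 - boundary[:-1]      # gate between cell i and cell i+1
    for _ in range(steps):
        flux = D * perm * (v[1:] - v[:-1])
        v[:-1] += flux              # symmetric exchange conserves total
        v[1:] -= flux               # activity within each compartment
    return v

feature = np.zeros(20); feature[5] = 1.0     # generator (matched edge)
boundary = np.zeros(20); boundary[10] = 1.0  # barrier (mismatched edge)

v = fill_in(feature, boundary)
# Activity spreads uniformly over cells 0..10, never crossing the
# barrier, so cells 11..19 remain at zero.
```

The contrast signal injected at one location ends up evenly distributed over its entire compartment, which is the sense in which filling-in "lifts" a sparse edge signal into a region-wide quality, while the barrier confines it.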
The Behavioral and Brain Sciences 6, 625-657 (1983). ©1983 Cambridge University Press. Reprinted by permission of the publisher.
THE QUANTIZED GEOMETRY OF VISUAL SPACE: THE COHERENT COMPUTATION OF DEPTH, FORM, AND LIGHTNESS
Stephen Grossberg†
Abstract

A theory is presented of how global visual interactions between depth, length, lightness, and form percepts can occur. The theory suggests how quantized activity patterns which reflect these visual properties can coherently fill-in, or complete, visually ambiguous regions starting with visually informative data features. Phenomena such as the Cornsweet and Craik-O'Brien effects, phantoms and subjective contours, binocular brightness summation, the equidistance tendency, Emmert's law, allelotropia, multiple spatial frequency scaling and edge detection, figure-ground completion, coexistence of depth and binocular rivalry, reflectance rivalry, Fechner's paradox, decrease of threshold contrast with increased number of cycles in a grating pattern, hysteresis, adaptation level tuning, Weber law modulation, shift of sensitivity with background luminance, and the finite capacity of visual short term memory are discussed in terms of a small set of concepts and mechanisms. Limitations of alternative visual theories which depend upon Fourier analysis, Laplacians, zero-crossings, and cooperative depth planes are described. Relationships between monocular and binocular processing of the same visual patterns are noted, and a shift in emphasis from edge and disparity computations toward the characterization of resonant activity-scaling correlations across multiple spatial scales is recommended. This recommendation follows from the theory's distinction between the concept of a structural spatial scale, which is determined by local receptive field properties, and a functional spatial scale, which is defined by the interaction between global properties of a visual scene and the network as a whole. Functional spatial scales, but not structural spatial scales, embody the quantization of network activity that reflects a scene's global visual representation.
A functional scale is generated by a filling-in resonant exchange, or FIRE, which can be ignited by an exchange of feedback signals among the binocular cells where monocular patterns are binocularly matched.
Key Words: binocular vision; brightness perception; figure-ground; feature extraction; form perception; neural network; nonlinear resonance; receptive field; short-term memory; spatial scales; visual completion.
† Supported in part by the Air Force Office of Scientific Research (AFOSR 82-0148), the National Science Foundation (NSF IST-80-00257), and the Office of Naval Research (ONR N00014-83-K0337).
Chapter 1
The objects of perception and the space in which they seem to lie are not abstracted by a rigid metric but a far looser one than any philosopher ever proposed or any psychologist dreamed. -Jerome Lettvin (1981)
1. Introduction: The Abundance of Visual Models

Few areas of science can boast the wealth of interesting and paradoxical phenomena readily accessible to introspection that visual perception can. The sheer variety of effects helps to explain why so many different types of theories have arisen to carve up this data landscape. Fourier analysis (Cornsweet, 1970; Graham, 1981; Robson, 1975), projective geometry (Beck, 1972; Johannson, 1978; Kaufman, 1974), Riemannian geometry (Blank, 1978; Luneberg, 1947; Watson, 1978), special relativity (Caelli, Hoffman, and Lindman, 1978), vector analysis (Johannson, 1978), analytic function theory (Schwartz, 1980), potential theory (Sperling, 1970), and cooperative and competitive networks (Amari and Arbib, 1977; Dev, 1975; Ellias and Grossberg, 1975; Grossberg, 1970a, 1973, 1978e, 1981; Sperling, 1970; Sperling and Sondhi, 1968) are just some of the formalisms which have been used to interpret and explain particular visual effects. Some of the most distinguished visual researchers believe that this diversity of formalisms is inherent in the nature of psychological phenomena. Sperling (1981, p.282) has, for example, recently written: In fact, as many kinds of mathematics seem to be applied to perception as there are problems in perception. I believe this multiplicity of theories without a reduction to a common core is inherent in the nature of psychology ... and we should not expect the situation to change. The moral, alas, is that we need many different models to deal with the many different aspects of perception.
The opinion Sperling offers is worthy of the most serious deliberation, since it predicts the type of mature science which psychology can hope to become, and thereby constrains the type of theorizing which psychologists will try to do. Is Sperling right? Or do there exist concepts and properties, heretofore not explicitly incorporated into the mainstream visual theories, which can better unify the many visual models into an integrated visual theory? Part I of this article reviews various visual data as well as internal paradoxes and inherent limitations of some recent theories that have attempted to explain these data. Part II presents a possible approach to overcoming these paradoxes and limitations and to explaining the data in a unified fashion. Numerical simulations that support the qualitative arguments and mathematical properties described in Part II are found in Cohen and Grossberg (1983a). Parts I and II are self-contained and can be read in either order.
PART I

2. The Quantized Geometry of Visual Space
There is an important sense in which Sperling's assertion is surely correct, but in this sense it is also true of other sciences such as physics. Different formalisms can probe different levels of the same underlying physical reality without excluding the possibility that one formalism is more general, or physically deeper, than another. In physics, such theoretical differences can be traced to physical assumptions which approximate certain processes in order to clarify other processes. I will argue that several approaches to visual perception make approximations which do not accurately represent the physical processes which they have set out to explain. For this reason, such theories have predictive limitations which do not permit them to account, even to a first approximation, for major properties of the data. In other words, the mathematical
formalism of these theories has not incorporated fundamental physical intuitions into their computational structure. Once the intuitions are translated into a suitable formalism, the theoretical diversity in visual science will, I claim, gradually become qualitatively more like that known in physics. The comparison with physics is not an idle one. Certain of the intuitions which need to be formalized at the foundations of visual theory are well known to us all. They have not been acted upon because, despite their simplicity, they lead to conceptually radical conclusions that force a break with traditional notions of geometry. Lines and edges can no longer be thought of as a series of points; planes can no longer be built up from local surface elements or from sets of lines or points; and so on. All local entities evaporate as we build up notions of functional perceptual units which can naturally deal with the global context-dependent nature of visual percepts. The formalism in which this is achieved is a quantized dynamic geometry, and the nature of the quantization helps to explain why so many visual percepts seem to occur in a curved visual space. When a physicist discusses quantization of curved space, he usually means joining quantum mechanics to general relativity. This goal has not yet been achieved in physics. To admit that even the simplest visual phenomena suggest such a formal step clarifies both the fragmentation of visual science into physically inadequate formalisms, and the radical nature of the conceptual leap that is needed to remedy this situation.
3. The Need for Theories Which Match the Data's Coherence

As background for my theoretical treatment, I will review various paradoxical data concerning interactions between the perceived depth, lightness, and form of objects in a scene. These paradoxes should not, I believe, be viewed as isolated and unimportant anomalies, but rather as informative instances of how the visual system completes a scene's global representation in response to locally ambiguous visual data. These data serve to remind us of the interdependence and context-sensitivity of visual properties; in other words, of their coherence. With these reminders fresh in our minds, I will argue in Part II that by probing important visual design principles on a deep mathematical level, one can discover, as automatic mathematical consequences, the way many visual properties are coherently caused as manifestations of these design principles. This approach to theory construction is not in the mainstream of psychological thinking today. Instead, one often finds models capable of computing some single visual property, such as edges or cross-correlations. Even with a different model for each property, this approach does not suggest how related visual properties work together to generate a global visual representation. For example, the present penchant for modeling lateral inhibition by linear feedforward operators like a Laplacian or a Fourier transform to compute edges or cross-correlations (Marr and Hildreth, 1980; Robson, 1975) pays the price of omitting related nonlinear properties like reflectance processing, Weber law modulation, figure-ground filling-in, and hysteresis. To the argument that one must first understand one property at a time, I make this reply: The feedforward linear theories contain errors even in the analysis of the concepts they set out to explain. Internal problems of these theories prevent them from understanding the other phenomena that cohere in the data.
This lack of coherence, let alone correctness, will cause a heavy price to be paid in the long run, both scientifically and technologically. Unless the relationships among visual data properties are correctly represented in a distributed fashion within the system, plausible (and economic) ways to map these properties into other subsystems, whether linguistic, motor, or motivational, will be much harder to understand. Long-range progress, whether in theoretical visual science per se or in its relations to other scientific and technological disciplines, requires that the mathematical formalisms in which visual concepts are articulated be scrupulously criticized.
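The nonlinear property at stake here, reflectance processing with Weber-law modulation, can be contrasted with a linear feedforward operator in a few lines. The sketch is my own illustration, not code from the article: the shunting steady state uses the familiar normalized form x_i = B*I_i/(A + sum_k I_k), with arbitrary illustrative parameters.

```python
# A linear feedforward Laplacian versus the steady state of a shunting
# (divisive) on-center off-surround network.  Illustrative only; parameter
# values A and B are arbitrary choices, not taken from the article.

def laplacian(I):
    """Discrete 1-D Laplacian I[i-1] - 2*I[i] + I[i+1], with replicated ends."""
    p = [I[0]] + list(I) + [I[-1]]
    return [p[i - 1] - 2 * p[i] + p[i + 1] for i in range(1, len(I) + 1)]

def shunting(I, A=1.0, B=1.0):
    """Steady state x_i = B*I_i / (A + sum(I)) of a shunting network."""
    total = sum(I)
    return [B * Ii / (A + total) for Ii in I]

step = [1, 1, 1, 5, 5, 5]            # a luminance step edge
bright = [10 * v for v in step]      # the same scene under 10x illumination

# The linear operator's edge response grows in proportion to absolute
# luminance, whereas the shunting pattern preserves the scene's relative
# reflectances across the illumination change.
```

Running `laplacian` on `step` and `bright` shows the edge response scaling by the illumination factor, while the normalized `shunting` patterns of the two inputs are identical up to overall gain.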
4. Some Influences of Perceived Depth on Perceived Size
Interactions between an object's perceived depth, size, and lightness have been intensively studied for many years. The excellent texts by Cornsweet (1970) and by Kaufman (1974) review many of the basic phenomena. The classical experiments of Holway and Boring (1941) show that observers can estimate the actual sizes of objects at different distances even if all the objects subtend the same visual angle on the observers' retinas. Binocular cues contribute to the invariant percept of size. For example, Emmert (1881) showed that monocular cues may be insufficient to estimate an object's length. He noted, among other properties, that a monocular afterimage seems to be located on any surface which the subject binocularly fixates while the afterimage is active. Moreover, the perceived size of the afterimage increases as the perceived distance of the surface increases. This effect is called Emmert's law. Although the use of monocular afterimages to infer properties of normal viewing is fraught with difficulties, other paradigms have also suggested an effect of perceived depth on perceived size. For example, Gogel (1956, 1965, 1970) has reported that two objects viewed under reduction conditions (one eye looks through a small aperture in dim light) will be more likely to be judged as equidistant from the observer as they are brought closer together in the frontal plane. In a related experiment, one object is monocularly viewed through a mirror arrangement whereas all other objects in the scene are binocularly viewed. The monocularly viewed object then seems to lie at the same distance as the edge that, among all the binocularly viewed objects, is retinally most contiguous to it. Gogel interpreted these effects as examples of an equidistance tendency in depth perception. The equidistance tendency also holds if a monocular afterimage occupies a retinal position near to that excited by a binocularly viewed object.
One way to interpret these results is to assert that the perceived distance of the binocular object influences the perceived distance of the adjacent afterimage by equidistance tendency, and thereupon influences the perceived size of the afterimage by Emmert's law. Results such as these suggest that depth cues can influence size estimates. They also suggest that this influence can propagate between object representations whose cues excite disparate retinal points and that the patterning of all cues in the visual context of an object helps to determine its perceived length. The classical geometric notion that length can be measured by a ruler, or can be conceptualized in terms of any locally defined computation, thereby falls into jeopardy.

5. Some Monocular Constraints on Size Perception
Size estimates can also be modified by monocular cues, as in the corridor illusion (Richards and Miller, 1971; see Figure 1a). In this illusion, two cylinders of equal size in a picture are perceived to be of different sizes because they lie in distinct positions within a rectangular grid whose spatial scale diminishes toward a fixation point on the horizon. An analogous effect occurs in the Ponzo illusion shown on the right, wherein two horizontal rods of equal pictorial length are drawn superimposed over an inverted V (Kaufman, 1974; see Figure 1b). The upper rod appears longer than the lower rod. The perception of these particular figures may be influenced by learned depth perspective cues (Gregory, 1966), although this hypothesis does not explain how perspective cues alter length percepts. There exist many other figures, however, in which a perspective effect on size scaling is harder to rationalize (Day, 1972). Several authors have therefore modeled these effects in terms of intrinsic scaling properties of the visual metric (Dodwell, 1975; Eijkman, Jongsma, and Vincent, 1981; Restle, 1971; Watson, 1978). A more dramatic version of scaling is evident when subjective contours complete the boundary of an incompletely represented figure. Then objects of equal pictorial size that lie inside and outside the completed figure may appear to be of different size (Coren, 1972).

Figure 1. (a) The corridor illusion. (b) The Ponzo illusion. (After Kaufman 1974. From Sight and Mind: An Introduction to Visual Perception. Copyright ©1974 by Oxford University Press, Inc. Reprinted by permission.)

The very existence of subjective contours raises the issue of how incomplete data about form can select internal representations which can span or fill-in the incomplete regions of the figure. How can we characterize those features or spatial scales in the incomplete figure which play an informative role in the completion process versus those features or scales which are irrelevant? Attneave (1954) has shown, for example, that when a drawing of a cat is replaced by a drawing in which the points of maximum curvature in the original are joined by straight lines, then the new drawing still looks like a cat (see Figure 2). Why are the points of maximum curvature such good indicators of the entire form? Is there a natural reason why certain spatial scales in the figure might have greater weight than other scales? Attneave's cat raises the question: Why does interpolation between points of maximum curvature with lines of zero curvature produce a good facsimile of the original picture? Different spatial scales somehow need to interact in our original percept for this to happen. To understand this issue, we need a correct definition of spatial scale. Such a definition should distinguish between local scaling effects, such as those which can be understood in terms of a neuron's receptive field (Robson, 1975), and global scaling effects, such as those which control the filling-in of subjective contours or of phantom images across a movie screen, which subtends a visual angle much larger than that spanned by any neuron's receptive field (Smith and Over, 1979; Tynan and Sekuler, 1975; von Grunau, 1979; Weisstein, Maguire, and Berbaum, 1976).
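Attneave's observation lends itself to a small computational sketch. The code below is my own illustration, not from the article: it measures a discrete analogue of curvature, the turning angle at each vertex of a sampled contour, and keeps only the vertices where the contour turns sharply.

```python
# Discrete "turning angle" curvature on a sampled closed contour, in the
# spirit of Attneave's observation: vertices where the contour turns most
# carry most of the shape information.  Purely illustrative sketch.
import math

def turning_angles(points):
    """Absolute exterior angle (radians) at each vertex of a closed polygon."""
    n = len(points)
    angles = []
    for i in range(n):
        x0, y0 = points[i - 1]
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        a1 = math.atan2(y1 - y0, x1 - x0)       # incoming direction
        a2 = math.atan2(y2 - y1, x2 - x1)       # outgoing direction
        d = (a2 - a1 + math.pi) % (2 * math.pi) - math.pi
        angles.append(abs(d))
    return angles

# A rectangle sampled with extra collinear points: only the four corners
# have nonzero turning angle, so a curvature-maximum sketch keeps just them.
rect = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
angles = turning_angles(rect)
corners = [p for p, a in zip(rect, angles) if a > 0.1]
```

On this toy contour only the four corner vertices survive, which is the caricature-by-straight-lines idea in miniature: joining the curvature maxima with straight segments reproduces the shape.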
Figure 2. Attneave's cat: Connecting points of maximum curvature with straight lines yields a recognizable caricature of a cat. (After Attneave 1954.)

6. Multiple Scales in Figure and Ground: Simultaneous Fusion and Rivalry

That interactions between several spatial scales are needed for form perception is also illustrated by the following type of demonstration (Beck, 1972). Represent a letter E by a series of nonintersecting straight lines of varying oblique and horizontal orientations drawn within an imaginary E contour and surrounded by a background of regular vertical lines. The E is not perceived because of the lines within the contour, since the several orientations of these interior lines do not group into an E-like shape. Somehow the E is synthesized as the complement of the regular background, or, more precisely, by the statistical differences between the figure and the ground. These statistical regularities define a spatial scale, broader than the scale of the individual lines, on which the E can be perceived. In a similar vein, construct a stereogram out of two pictures as follows (Kaufman, 1974; see Figure 3). The left picture is constructed from 45°-oblique dark parallel lines bounded by an imaginary square, which is surrounded by 135°-oblique lighter parallel lines. The right picture is constructed from 135°-oblique dark parallel lines bounded by an imaginary square whose position in the picture is shifted relative to the square in the left picture. This imaginary square is surrounded by 45°-oblique lighter parallel lines. When these pictures are viewed through a stereoscope, the dark oblique lines within the square are rivalrous. Nonetheless the square as a whole is seen in depth. How does this stereogram induce rivalry on the level of the narrowly tuned scales that interact preferentially with the lines, yet simultaneously generate a coherent depth impression on the broader spatial scales that interact preferentially with the squares? Kulikowski (1978) has also studied this phenomenon by constructing two pairs of pictures which differ in their spatial frequencies (see Figure 4). Each picture is bounded by the same frame, as well as by a pair of short vertical reference lines attached to the outside of each frame at the same spatial locations.
In one pair of pictures, spatially blurred black and white vertical bars of a fixed spatial frequency are 180° out of phase. In the other pair of pictures, sharp black and white vertical bars of the same spatial extent are also 180° out of phase. The latter pair of pictures contains high spatial frequency components (edges) as well as low spatial frequency components. During binocular viewing, subjects can fuse the two spatially blurred pictures and see them in depth with respect to the fused images of the two frames. By contrast, subjects experience binocular rivalry when they view the two pictures of sharply etched bars. Yet they still experience the rivalrous patterns in depth. This demonstration suggests that the low spatial frequencies in the bar patterns can be fused to yield a depth impression even while the higher spatial frequency components in the bars elicit an alternating rivalrous perception of the monocular patterns.

Figure 3. The Kaufman stereogram induces an impression of depth even though the darker line patterns are rivalrous. (After Kaufman 1974. From Sight and Mind: An Introduction to Visual Perception. Copyright ©1974 by Oxford University Press, Inc. Reprinted by permission.)

Figure 4. Demonstration of depth perception with and without fusion. (a) Sinusoidal gratings in antiphase can be fused to yield a depth impression. (b) The square wave gratings yield a depth impression even when their sharp edges become double. (c) A similar dichotomy is perceived when single sinusoidal or bars are viewed. (After Kulikowski 1978. Reprinted by permission from Nature, volume 275, pp.126-127. Copyright © Macmillan Journals Limited.)

The demonstrations of Kaufman (1974) and Kulikowski (1978) raise many interesting questions. Perhaps the most pressing one is: Why are fusion and rivalry alternative binocular perceptual modes? Why are coexisting unfused monocular images so easily supplanted by rivalrous monocular images? How does fusion at one spatial scale coexist with rivalry at a different spatial scale that represents the same region of visual space?
7. Binocular Matching, Competitive Feedback, and Monocular Self-Matching

These facts suggest some conclusions that will be helpful in organizing my data review and will be derived on a different theoretical basis in Part II. I will indicate
how rivalry suggests the existence of binocular cells that can be activated by a single monocular input and that mutually interact in a competitive feedback network. First I will indicate why these binocular cells can be monocularly activated. The binocular cells in question are the spatial loci where monocular data from the two eyes interact to generate fusion or rivalry as the outcome. To show why at least some of these cells can be monocularly activated, I will consider implications of the following mutually exclusive possibilities: either the outcome of binocular matching feeds back toward the monocular cells that generated the signals to the binocular cells, or it does not. Suppose it does not. Then the activities of monocular cells cannot subserve perception; rather, perception is associated with activities of binocular cells or of cells more central than the binocular cells. This is because both sets of monocular cells would remain active during a rivalry percept, since the binocular interaction leading to the rivalry percept does not, by hypothesis, feed back to alter the activities of the monocular cells. Now we confront the conclusion that monocular cells do not subserve perception with the fact that the visual world can be vividly seen through a single eye. It follows that some of the binocular cells which subserve perception can be activated by inputs from a single eye. Having entertained the hypothesis that the outcome of binocular matching does not feed back toward monocular cells, let us now consider the opposite hypothesis. In this case, too, I will show that a single monocular representation must be able to activate certain binocular cells. To demonstrate this fact, I will again argue by contradiction. Suppose it does not. In other words, suppose that the outcome of binocular matching does feed back toward monocular cells but a single monocular input cannot activate binocular cells.
Because the visual world can be seen through a single eye, it follows that the activities of monocular cells subserve perception in this case. Consequently, during a binocular rivalry percept, the binocular-to-monocular feedback must quickly inhibit one of the monocular representations. The signals which this monocular representation was sending to the binocular cells are thereupon also inhibited. The binocular cells then receive signals only from the other monocular representation. The hypothesis that binocular cells cannot fire in response to signals from only one monocular representation implies that the binocular cells shut off, along with all of their output signals. The suppressed monocular cells are then released from inhibition and are excited again by their monocular inputs. The cycle can now repeat itself, leading to the percept of a very fast flicker of one monocular view superimposed upon the steady percept of the other monocular view. This phenomenon does not occur during normal binocular vision. Consequently, the hypothesis that a single monocular input cannot activate binocular cells must be erroneous. Whether or not the results of binocular matching feed back toward monocular cells, certain binocular cells can be activated by a single monocular representation. An additional conclusion can be drawn in the case wherein the results of binocular matching can feed back toward monocular cells. Here a single monocular source can activate binocular cells, which can thereupon send signals toward the monocular source. The monocular representation can thereby self-match at the monocular source using the binocular feedback as a matching signal. This fact implies that the monocular source cells are themselves binocular cells, because a monocular input can activate binocular cells which then send feedback signals to the monocular source cells of the other eye.
In this way the monocular source cells can be activated by both eyes, albeit less symmetrically than the binocular cells at which the primary binocular matching event takes place. This conclusion can be summarized as follows: The binocular cells at which binocular matching takes place are flanked by binocular cells that satisfy the following properties: (a) they are fed by monocular signals; (b) they excite the binocular matching cells; (c) they can be excited or inhibited due to feedback from the binocular matching
cells, depending upon whether fusion or rivalry occur. It remains only to consider the possibility that the results of binocular matching do not feed back toward the monocular cells. The following argument indicates why this cannot happen. A purely feedforward interaction from monocular toward binocular cells cannot generate the main properties of rivalry, namely a sustained monocular percept followed by rapid and complete suppression of this percept when it is supplanted by the other monocular percept. This is because the very activity of the perceived representation must be the cause of its habituation and loss of competitive advantage relative to the suppressed representation. Consequently, the habituating signals from the perceived representation that inhibit the suppressed representation reach the latter at a stage at, or prior to, that representation's locus for generating signals to the perceived representation that are capable of habituating. Such an arrangement allows the signals of the perceived representation to habituate but spares the suppressed representation from habituation. By symmetry, the two representations reciprocally send signals to each other that are received at, or at a stage prior to, their own signaling cells. This arrangement of signaling pathways defines a feedback network. One can now refine this conclusion by going through arguments like those above to conclude that (a) the feedback signals are received at binocular cells rather than at monocular cells, and (b) the feedback signals are not all inhibitory signals or else binocular fusion could not occur. Thus a competitive balance between excitatory and inhibitory feedback signals among binocular cells capable of monocular activation needs to be considered.
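The habituation-plus-competition argument can be checked in a toy simulation. The model below is a generic rivalry oscillator of my own choosing (threshold-linear cross-inhibition with a slow adaptation variable), not the article's gated-dipole formulation: reciprocal inhibition whose advantage slowly habituates produces sustained alternation of dominance rather than a permanent winner.

```python
# Toy rivalry oscillator: two populations, reciprocal inhibition, and a slow
# adaptation variable that erodes the dominant population's advantage.
# Generic illustrative model; equations and parameters are not the article's.

def simulate_rivalry(I=1.0, beta=2.0, gamma=1.5, tau=20.0, dt=0.01, steps=40000):
    x = [0.5, 0.0]      # activities of the two monocular representations
    a = [0.0, 0.0]      # slow habituation of each representation's signal
    dominant = []
    for _ in range(steps):
        drive = [max(I - beta * x[1 - i] - gamma * a[i], 0.0) for i in (0, 1)]
        for i in (0, 1):
            x[i] += dt * (-x[i] + drive[i])       # fast competition
            a[i] += dt * (x[i] - a[i]) / tau      # slow habituation
        dominant.append(0 if x[0] > x[1] else 1)
    return dominant

dom = simulate_rivalry()
switches = sum(1 for u, v in zip(dom, dom[1:]) if u != v)
# Dominance alternates repeatedly: the active side habituates, loses its
# competitive advantage, and the suppressed side takes over.
```

Setting `tau` very large (no habituation) freezes the first winner in place, which is one way to see why the feedback signals themselves must habituate for rivalry to alternate.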
Given the possibility of monocular self-matching in this framework, one also needs to ask why the process of monocular self-matching, in the absence of a competing input from the other eye, does not cause the cyclic strengthening and weakening of monocular activity that occurs when two nonfused monocular inputs are rivalrous. One does not need a complete theory of these properties to conclude that no theory in which only a feedforward flow of visual patterns from monocular to binocular cells occurs (e.g., to compute disparity information) can explain these data. Feedback from binocular matching toward monocular computations is needed to explain rivalry data, just as such feedback is needed to explain the influence of perceived depth on perceived size or brightness. I will suggest in Part II how a suitably defined feedback scheme can give rise to all of these phenomena at once.

8. Against the Keplerian View: Scale-Sensitive Fusion and Rivalry
The Kaufman (1974) and Kulikowski (1978) experiments also argue against the Keplerian view, which is a mainstay of modern theories of stereopsis. The Keplerian view is a realist hypothesis which suggests that the two monocular views are projected point-by-point along diagonal rays, and that their crossing-points are loci from which the real depth of objects may be computed (Kaufman, 1974). When the imaginary rays of Kepler are translated into network hardware, one is led to assume that network pathways carrying monocular visual signals merge along diagonal routes (Sperling, 1970). The Keplerian view provides an elegant way to think about depth, because (other things being equal) objects which are closer should have larger disparities, and their Keplerian pathways should therefore cross at points which are further along the pathways. Moreover, all pairs of points with the same disparity cross at the same distance along their pathway, and thereby form a row of contiguous crossing-points. This concept does not explain a result such as Kulikowski's, since all points in each figure (so the usual reasoning goes) have the same disparity with respect to the corresponding point in the other figure. Hence all points cross in the same row. In the traditional theories, this means that all points should match equally well to produce an unambiguous disparity measure. Why then do low spatial frequencies seem to match and yield a depth percept at the same disparity at which high spatial frequencies do not seem to match?
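The row structure of the Keplerian grid that this reasoning relies on is easy to make explicit. The sketch below is schematic, with coordinates of my own choosing rather than the article's: a match between left-eye position i and right-eye position j is assigned to column (i + j)/2 and row j - i, so every pair sharing a disparity lands in a single row.

```python
# Schematic Keplerian crossing grid: constant-disparity matches share a row.
# The (column, row) coordinates are a conventional simplification, not a
# construction taken from the article.

def crossing(i, j):
    """Crossing-point of the rays through left position i and right position j."""
    return ((i + j) / 2.0, j - i)   # (column, row); the row encodes disparity

left = [2, 5, 9]                    # feature positions in the left image
right = [p + 3 for p in left]       # the same pattern shifted by disparity 3
rows = {crossing(i, j)[1] for i, j in zip(left, right)}
# All three matches fall in the single row for disparity 3, so row membership
# alone cannot say why one spatial scale fuses while another rivals.
```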
Rather than embrace the Keplerian view, I will suggest how suitably preprocessed input data of fixed disparity can be matched by certain spatial scales but not by other spatial scales. To avoid misunderstanding, I should immediately say what this hypothesis does not imply. It does not imply that a pair of high spatial frequency input patterns of large disparity cannot be matched, because only suitable statistics of the monocular input patterns will be matched, rather than the input patterns themselves. Furthermore, inferences made from linear statistics of the input patterns do not apply because the statistics in the theory need to be nonlinear averages of the input patterns to ensure basic stability properties of the feedback exchange between monocular and binocular cells. These assertions will be clarified in Part II. Once the Keplerian view is questioned, the problem of false images (Julesz, 1971), which derives from this view and which has motivated much thinking about stereopsis, also becomes less significant. The false images are those crossing-points in Kepler's grid that do not correspond to the objects' real disparities. Workers like Marr and Poggio (1979) have also concluded that false images are not a serious problem if spatial scaling is taken into account. Their definition of spatial scale differs from my own in a way that highlights how a single formal definition can alter the whole character of a theory. For example, when they mixed their definition of a spatial scale with their view of the false-image problem, Marr and Poggio (1979) were led to renounce cooperativity as well, which I view as an instance of throwing out the baby with the bathwater, since all global filling-in and figure-ground effects thereby become inexplicable in their theory. Marr and Poggio (1979) abandoned cooperativity because they did not need it to deal with false images.
In a model such as theirs, the primary goal of which is to compute unambiguous disparity measures, their conclusion seems quite logical. Confronted by the greater body of phenomena that are affected by depth estimates, such a step seems unwarranted.

9. Local versus Global Spatial Scales
Indeed, both the Kaufman (1974) and the Kulikowski (1978) experiments, among many others, illustrate that a figure or ground has a coherent visual existence that is more than the sum of its unambiguous feature computations. Once a given spatial scale makes a good match in these experiments, a depth percept is generated that pervades a whole region. We therefore need to distinguish the scaling property that makes good matches based on local computations from the global scaling effects that fill-in an entire region subtending an area much broader than the local scales themselves. This distinction between local and global scaling effects is vividly demonstrated by constructing a stereogram in which the left "figure" and its "ground" are both induced by a 5% density of random dots (Julesz, 1971, p.336) and the right "figure" of dots is shifted relative to its position in the left picture. Stereoscopically viewed, the whole figure, including the entire 95% of white background between its dots, seems to hover at the same depth. How is it that the white background of the "figure" inherits the depth quality arising from the disparities of its meagerly distributed dots, and the white background of the "ground" inherits the depth quality of its dots? What mechanism organizes the locally ambiguous white patches that dominate 95% of the pictorial area into two distinct and internally coherent regions? Julesz (1971, p.250) describes another variant of the same phenomenon using a random-dot stereogram inspired by an experiment of Shipley (1965). In this stereogram, the traditional center square in depth is interrupted by a horizontal white strip that cuts both the center square and the surround in half. During binocular viewing, the white strip appears to be cut along the contours of the square and it inherits the depth of figure or ground, despite the fact that it provides no disparity or brightness cues of its own at the cut regions.
The Quantized Geometry of Visual Space
10. Interaction of Perceived Form and Perceived Position
The choice of scales leading to a depth percept can also cause a shift in perceived form, notably in the relative distance between patterns in a configuration. For example, when a pattern AB C is viewed through one eye and a pattern A BC is viewed through the other eye, the letter B can be seen in depth at a position halfway between A and C (von Tschermak-Seysenegg, 1952; Werner, 1937). This phenomenon, called displacement or allelotropia, again suggests that the dynamic transformations in visual space are not of a local character, since the location of entire letters, not to mention their points and lines, can be deformed by the spatial context in which they are placed. The nonlocal nature of visual space extends also to brightness perception, as the following section summarizes.
11. Some Influences of Perceived Depth and Form on Perceived Brightness
The Craik-O'Brien and Cornsweet effects (Cornsweet, 1970; O'Brien, 1958) show that an object's form, notably its edges or regions of rapid spatial change, can influence its apparent brightness or lightness (Figure 5). Let the luminance profile in Figure 5a describe a cross-section of the two-dimensional picture in Figure 5b. Then the lightness of this picture appears as in Figure 5c. The edges of the luminance profile determine the lightnesses of the adjacent regions by a filling-in process. Although the luminances of the regions are the same except near their edges, the perceived lightnesses of the regions are determined by the brightnesses of their respective edges. This remarkable property is reminiscent of Attneave's cat, since regions of maximum curvature, in the lightness domain, again help to determine how the percept is completed. In the present instance, the filling-in process overrides the visual data rather than merely completing an incomplete pattern. Hamada (1976, 1980) has shown that this filling-in process is even more paradoxical than was previously thought. He compared the lightness of a uniform background with the lightness of the same uniform background with a less luminous Craik-O'Brien figure superimposed upon it. By the usual rules of brightness contrast, the lesser brightness of the Craik-O'Brien figure should raise the lightness of the background as its own lightness is reduced. Remarkably, even the background seems darker than the uniform background of the comparison figure, although its luminance is the same. Just as form can influence lightness, apparent depth can influence lightness. Figures which appear to lie at the same depth can influence each other's lightness in a manner analogous to that found in a monocular brightness constancy paradigm (Gilchrist, 1979).
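The edge-driven filling-in just described can be caricatured numerically: differentiate a one-dimensional luminance profile, discard all but the large (edge) differences, and reintegrate so that each region inherits the lightness dictated by its bounding edges. The profile, threshold, and region sizes below are illustrative assumptions, not values taken from the experiments cited above.

```python
import numpy as np

def fill_in_from_edges(luminance, edge_threshold):
    """Illustrative filling-in: keep only large luminance differences
    (the 'edges'), discard shallow gradients, then cumulatively
    reintegrate so each region inherits the lightness set by its edges."""
    diffs = np.diff(luminance)
    edges = np.where(np.abs(diffs) > edge_threshold, diffs, 0.0)
    return np.concatenate([[luminance[0]], luminance[0] + np.cumsum(edges)])

# A Cornsweet-like profile: equal-luminance plateaus joined by a sharp
# edge that is flanked by shallow opposing ramps.
profile = np.full(200, 50.0)
profile[80:100] += np.linspace(0, 10, 20)    # gradual rise toward the edge
profile[100:120] -= np.linspace(10, 0, 20)   # gradual recovery after it

lightness = fill_in_from_edges(profile, edge_threshold=5.0)
# Although the plateaus far from the edge have identical luminance, the
# reconstructed lightness is a step: the left region fills in brighter.
print(lightness[:40].mean(), lightness[160:].mean())
```

Only the sharp central transition exceeds the threshold, so the reintegrated pattern is a step, which is one way of expressing why the Cornsweet profile looks rectangular.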
12. Some Influences of Perceived Brightness on Perceived Depth
Just as depth can influence brightness estimates, brightness data can influence depth estimates. For example, Kaufman, Bacon, and Barroso (1973) studied stereograms built up from the two monocular pictures in Figure 6a. When these pictures are viewed through a stereoscope, the eyes see the lines at a different depth due to the disparity between the two monocular views. If the stereogram is changed so that the left eye sees the same picture as before, whereas the right eye sees the two pictures superimposed (Figure 6b), then depth is still perceived. If both eyes see the same superimposed pictures, then of course no depth is seen. However, if one eye sees the pictures superimposed with equal brightness, whereas the other eye sees the two pictures superimposed, one with less brightness and the other with more, then depth is again seen. In the latter case there is no disparity between the two figures, although there is a brightness difference. How does this brightness difference elicit a percept of depth? The Kaufman et al. (1973) study raises an interesting possibility. If a binocular brightness difference can cause a depth percept, and if a depth percept can influence
Chapter 1
Figure 5. In (a) the luminance profile is depicted across a one-dimensional ray through the picture in (b). Although the interiors of all the regions have equal luminance, the apparent brightness of the regions is described by (c).
Figure 6. Combinations of the two pictures in (a), such as in (b), yield a depth percept when each picture is viewed through a separate eye. Depth can be seen even if the two pictures are combined to yield brightness differences but no disparity differences.
perceived length, then a binocular brightness difference should be able to cause a change in perceived length. It is also known that monocular cues can sometimes have effects on perceived length similar to those of binocular cues, as in the corridor and Ponzo illusions. When these two phenomena are combined, it is natural to ask: Under what circumstances can a monocular brightness change cause a change (albeit small) in perceived length? I will return to this question in Part II.
13. The Binocular Mixing of Monocular Brightnesses
The Kaufman et al. (1973) result illustrates the fact that brightness information from each eye somehow interacts in a binocular exchange. That this exchange is not simply additive is shown by several experiments. For example, let AB on a white field be viewed with the left eye and BC on a white field be viewed with the right eye in such a way that the two B's are superimposed. Then the B does not look significantly darker than A and C despite the fact that white is the input to the other eye corresponding to these letter positions (Helmholtz, 1962). In a similar fashion, closing one eye does not make the world look half as bright despite the fact that the total luminance reaching the two eyes is halved (Levelt, 1964; von Tschermak-Seysenegg, 1952). This fact recalls the discussion of monocular firing of binocular cells from Section 7. The subtlety of binocular brightness interactions is further revealed by Fechner's paradox (Hering, 1964). Suppose that a scene is viewed through both eyes but that one eye sees it through a neutral filter that attenuates all wavelengths by a constant ratio. The filter does not distort the reflectances, or ratios, of light reaching its eye, but only its absolute intensity. Now let the filtered eye be entirely occluded. Then the scene looks brighter and more vivid despite the fact that less total light is reaching the two eyes, and the reflectances are still the same. Binocular summation of brightness, in excess of probability summation, can occur when the monocular inputs are suitably matched "within some range, perhaps equivalent to Panum's area.... Stereopsis and summation may be mediated by a common neural mechanism" (Blake, Sloane, and Fox, 1981). I will suggest below that the coexistence of Fechner's paradox and binocular brightness summation can be explained by properties of binocular feedback exchanges among multiple spatial scales.
This explanation provides a theoretical framework in which recent studies and models of interactions between binocular brightness summation and monocular flashes can be interpreted (Cogan, Silverman, and Sekuler, 1982). Wallach and Adams (1954) have shown that if two figures differ only in terms of the reflectance of one region, then an effect quite the opposite of summation may be found. A rivalrous percept of brightness can be generated in which one shade, then the other, is perceived rather than a simultaneous average of the two shades. I will suggest below that this rivalry phenomenon may be related to the possibility that two monocular figures of different lightness may generate different spatial scales and thereby create a binocular mismatch. Having reviewed some data concerning the mutual interdependence and lability of depth, form, and lightness judgments, I will now review some obvious visual facts that seem paradoxical when placed beside some of the theoretical ideas that are in vogue at this time. I will also point out that some popular and useful theoretical approaches are inherently limited in their ability to explain either these paradoxes or the visual interactions summarized above.
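One simple way to see how a non-additive binocular exchange can produce Fechner's paradox is a weighted-averaging combination rule in the spirit of Levelt's account cited above: if the binocular brightness is an average of the monocular inputs in which each input supplies its own weight, then occluding a dimmed eye can brighten the percept, while closing one eye in normal viewing changes little. The specific self-weighting rule below is an illustrative assumption of mine, not a fitted model from the literature.

```python
def binocular_brightness(left, right):
    """Weighted average of the two monocular brightnesses, each input
    weighting itself; 0 denotes a closed (occluded) eye."""
    if left == 0 and right == 0:
        return 0.0
    return (left * left + right * right) / (left + right)

both_equal = binocular_brightness(100, 100)   # normal binocular viewing
filtered   = binocular_brightness(100, 30)    # one eye behind a neutral filter
occluded   = binocular_brightness(100, 0)     # the filtered eye then occluded

# Fechner's paradox: occluding the dimmed eye raises the percept, and
# closing one eye in normal viewing does not halve apparent brightness.
print(both_equal, filtered, occluded)
```

With these illustrative numbers the filtered condition yields a lower value than either monocular or matched binocular viewing, reproducing the qualitative paradox without any summation of absolute luminances.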
14. The Insufficiency of Disparity Computations
It is a truism that the retinal images of objects at optical infinity have zero disparity, and that as an object approaches an observer, the disparities on the two retinas of corresponding object points tend to increase. This is the commonplace reason for assuming that larger disparities are an indicator of relative closeness. Julesz stereograms (Julesz,
1971) have moreover provided an elegant paradigm wherein disparity computations are a sufficient indicator of depth, since each separate Julesz random-dot picture contains no monocular form cues, yet statistically reliable disparities between corresponding random-dot regions yield a vivid impression of a form hovering in depth. This stunning demonstration has encouraged a decade of ingenious neural modeling. Sperling (1970) introduced important pioneering concepts and equations in a classic paper that explains how cooperation within a disparity plane and competition between disparity planes can resolve binocular ambiguities. These ideas were developed into an effective computational procedure in Dev (1975), which led to a number of mathematical and computer studies (Amari and Arbib, 1977; Marr and Poggio, 1976). Due to these historical considerations, I will henceforth call models of this type Sperling-Dev models. All Sperling-Dev models assume that corresponding to each small retinal region there exists a series of disparity detectors sensitive to distinct disparities. These disparity detectors are organized in sheets such that cooperative effects occur between detectors of like disparity within a sheet, whereas competitive interactions occur between sheets. The net effect of these interactions is to suppress spurious disparity correlations and to carve out connected regions of active disparity detectors within a given sheet. These active disparity regions are assumed to correspond to a depth plane of the underlying retinal regions. Some investigators have recently expressed their enthusiasm for this interpretation by committing the homuncular fallacy of drawing the depth planes in impressive three-dimensional figures which carry the full richness of the monocular patterns, although within the model the monocular patterns do not differentially parse themselves among the several sheets of uniformly active disparity detectors.
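The cooperative-competitive dynamics of such models can be caricatured in a few lines: sheets of disparity detectors cooperate along position within each sheet (here, by local smoothing) and compete across sheets at each position (here, by subtracting the across-sheet mean and rectifying). The network size, kernel, noise level, and iteration count are illustrative assumptions, not parameters from any of the cited models.

```python
import numpy as np

rng = np.random.default_rng(0)
n_disp, n_pos = 5, 60

# Disparity-detector sheets A[d, x]: broad noise plus a coherent band
# of correct matches at disparity 2 over positions 20..39.
A = 0.2 * rng.random((n_disp, n_pos))
A[2, 20:40] += 0.5

kernel = np.array([0.25, 0.5, 0.25])   # within-sheet cooperation

for _ in range(12):
    # Cooperation: each sheet smooths its own activity along position.
    coop = np.vstack([np.convolve(row, kernel, mode="same") for row in A])
    # Competition: sheets at the same position inhibit one another;
    # activity below the across-sheet mean is rectified away.
    A = np.maximum(coop - coop.mean(axis=0, keepdims=True), 0.0)

winner = A.argmax(axis=0)   # most active disparity at each position
print(winner[22:38])        # interior of the coherent region picks disparity 2
```

The coherent sheet suppresses spurious activations and carves out a connected active region, which is precisely the behavior, and the limitation, discussed in the text: the surviving sheet is uniformly active and carries none of the monocular pattern detail.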
That something is missing from these models is indicated by the following considerations. The use of a stereogram composed of two separate pictures does not always approximate well the way two eyes view a single picture. When both eyes focus on a single point within a patterned planar surface viewed in depth, the fixation point is a point of minimal binocular disparity. Points increasingly far from the fixation point have increasingly large binocular disparities. Why does such a plane not recede toward optical infinity at the fixation point and curve toward the observer at the periphery of the visual field? Why does the plane not get distorted in a new way every time our eyes fixate on a different point within its surface? If disparities are a sufficient indicator of depth, then how do we ever see planar surfaces? Or even rigid surfaces? This insufficiency cannot be escaped just by saying that an observer's spatial scales get bigger as retinal eccentricity increases. To see this, let a bounded planar surface have an interior which is statistically uniform with respect to an observer's spatial scales (in a sense that will be precisely defined in Part II). Then the interior disparities of the surface are ambiguous. Only its boundary disparities supply information about the position of the surface in space. Filling-in between these boundaries to create a planar impression is not just a matter of showing that the same disparity, even after an eccentricity compensation, can be locally computed at all the interior points, because an unambiguous disparity computation cannot be carried out at the interior points. The issue is not just whether the observer can estimate the depth of the planar surface, but also how the observer knows that a planar surface is being viewed. This problem is hinted at even when Julesz stereograms are viewed. Staring at one point in the stereogram results in the gradual loss of depth (Kaufman, 1974).
Also, in a stereogram composed of three vertical lines to the left eye and just the two flanking lines to the right eye, the direction of depth of the middle line depends on whether the left line or the right line is fixated (Kaufman, 1974). This demonstration makes the problem of perceiving planes more severe for any theory which restricts itself to disparity computations, since it shows that depth can depend on the fixation points. What is the crucial difference between the way we perceive the depths of lines and planes? Kaufman (1974) seems to have had this problem in mind when he wrote that "all theories of stereopsis are really inconsistent with the geometry of stereopsis" (p.320).
Another problem faced by Sperling-Dev models is that they cannot explain effects of perceived depth on perceived size and lightness. The attractive property that the correct depth plane fills-in with uniform activity due to local cooperativity creates a new problem: How does the uniform pattern of activity within a disparity plane rejoin the nonuniformly patterned monocular data to influence its apparent size and lightness? Finally, there is the problem that only a finite number of depth planes can exist in a finite neural network. Only a few such depth planes can be inferred to exist by joining data relating spatial scales to perceived depth (such as the Kaufman, 1974, and Kulikowski, 1978, data summarized in Section 6) to spatial frequency data which suggest that only a few spatial scales exist (Graham, 1981; Wilson and Bergen, 1979). Since only one depth plane is allowed to be active at each time in any spatial position in a Sperling-Dev model, apparent depth should discretely jump a few times as an observer approaches an object. Instead, apparent depth seems to change continuously in this situation.
15. The Insufficiency of Fourier Models
An approach with a strong kernel of truth but a fundamental predictive limitation is the Fourier approach to spatial vision. The kernel of truth is illustrated by threshold experiments with four different types of visual patterns (Graham, 1981; Graham and Nachmias, 1971). Two of the patterns are gratings which vary sinusoidally across the horizontal visual field with different spatial frequencies. The other two are the sum and difference patterns of the first two. If the visual system behaved like a single channel wherein larger peak-to-trough pattern intensities were more detectable, the compound patterns would be more detectable than the sinusoidal ones. In fact, all the patterns are approximately equally detectable. A model in which the different sinusoidal spatial frequencies are independently filtered by separate spatial channels or scales fits the data much better. Recall from Section 6 some of the other data that also suggest the existence of multiple scales. A related advantage of the multiple channel idea is that one can filter a complex pattern into its component spatial frequencies, weight each component with a factor that mirrors the sensitivity of the human observer to that channel, and then resynthesize the weighted pattern and compare it with an observer’s perceptions. This modulation transfer function approach has been used to study various effects of boundary edges on interior lightnesses (Cornsweet, 1970). If the two luminance profiles in Figure 7 are filtered in this way, they both generate the same output pattern because the human visual system attenuates low spatial frequencies. Unfortunately, both output patterns look like a Cornsweet profile, whereas actually the Cornsweet profile looks like a rectangle. This is not a minor point, since the interior regions of the Cornsweet profile have the same luminance, which is false in the rectangular figure. 
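The modulation-transfer-function argument can be reproduced with a discrete Fourier transform: attenuating (here, simply zeroing) the low spatial frequencies of both a rectangle and a Cornsweet profile yields nearly identical outputs, even though one input contains a true luminance step and the other has equal-luminance plateaus. The profile dimensions and cutoff below are illustrative choices, not values from the cited studies.

```python
import numpy as np

def attenuate_low_frequencies(pattern, cutoff):
    """A crude single 'channel': zero all Fourier components below
    `cutoff` cycles per image (including the mean), keep the rest."""
    F = np.fft.rfft(pattern)
    F[:cutoff] = 0.0
    return np.fft.irfft(F, n=len(pattern))

n = 256
x = np.arange(n)

# Rectangle: a genuine luminance step (interior brighter than surround).
rectangle = np.where((x >= 96) & (x < 176), 60.0, 40.0)

# Cornsweet profile: identical plateaus except for opposed luminance
# cusps meeting in a sharp edge at positions 96 and 176.
ramp = np.linspace(0.0, 10.0, 16)
cornsweet = np.full(n, 50.0)
cornsweet[80:96] -= ramp          # dips toward the left edge
cornsweet[96:112] += ramp[::-1]   # decays away from the left edge
cornsweet[160:176] += ramp        # rises toward the right edge
cornsweet[176:192] -= ramp[::-1]  # recovers after the right edge

f_rect = attenuate_low_frequencies(rectangle, cutoff=8)
f_corn = attenuate_low_frequencies(cornsweet, cutoff=8)

# The two filtered outputs are nearly identical, although one input is
# a true step and the other has equal-luminance plateaus.
print(np.corrcoef(f_rect, f_corn)[0, 1])
```

The two inputs differ only by a smooth, low-frequency trapezoidal wave, which the filter removes; this is exactly why a purely linear feedforward account cannot distinguish the percepts the two stimuli actually produce.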
This application of the Fourier approach seems to me to be misplaced, since the Fourier transform is linear, whereas a reflectance computation must involve some sorts of ratios and is therefore inherently nonlinear. The Fourier scheme is also a feedforward transformation of an input pattern into an output pattern. It cannot in principle explain how apparent depth alters apparent length and brightness, since such computations depend on a feedback exchange between monocular data to engender binocular responses. In particular, the data reviewed in Section 4 show that the very definition of a length scale can remain ambiguous until it is embedded in a binocular feedback scheme. The Fourier transform does not at all suggest why length estimates should be so labile. The multiple channel and sensitivity notions need to be explicated in a different formal framework.
Figure 7. When the Cornsweet profile (a) and the rectangle (b) are filtered in such a way that low spatial frequencies are attenuated, both outputs look like a Cornsweet profile rather than a rectangle, as occurs during visual experience.
16. The Insufficiency of Linear Feedforward Theories
The above criticisms of the Fourier approach to spatial vision hold for all computational theories that are based on linear and feedforward operations. For example, some recent workers in artificial intelligence (Marr and Hildreth, 1980) compute a spatial scale by first linearly smoothing a pattern with respect to a Gaussian distribution and then computing an edge by setting the Laplacian (the second derivatives) of the smoothed pattern equal to zero (Figure 8). The use of the Laplacian to study edges goes back at least to the time of Mach (Ratliff, 1965). The Laplacian is time-honored, but it suffers from limitations that become more severe when its zero-crossings are made the centerpiece of a theory of edges. One of many difficulties is that zero-crossings compute only the position of an edge and not other related properties such as the brightness of the pattern near the edge. Yet the Cornsweet and Craik-O'Brien figures pointedly show that the brightnesses of edges can strongly influence the lightness of their enclosed forms. Something more than zero-crossings is therefore needed to understand spatial vision. The zero-crossing computation itself does not disclose what is missing, so its advocates must guess what is needed. Marr and Hildreth (1980) guess that factors like position, orientation, contrast, length, and width should be computed at the zero-crossings. These guesses do not follow from their definition, or their computation, of an edge. Such properties lie beyond the implications of the zero-crossing computation, because this computation discards essential features of the pattern near the zero-crossing location. Even if the other properties are added to a list of data that is stored in computer memory, this list distorts, indeed entirely destroys, the intrinsic geometric structure of the pattern.
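In one dimension, the smoothing-plus-Laplacian computation reduces to a few lines, which also make the criticism concrete: the returned zero-crossing locations say where the edge is, but discard the flanking contrasts that the Cornsweet and Craik-O'Brien displays show to be perceptually decisive. The step position and the scale sigma below are illustrative.

```python
import numpy as np

def zero_crossings(signal, sigma):
    """1-D caricature of the Marr-Hildreth scheme: smooth with a
    Gaussian, take the second difference (a discrete Laplacian), and
    report the indices where it changes sign."""
    radius = int(4 * sigma)
    t = np.arange(-radius, radius + 1)
    g = np.exp(-t**2 / (2 * sigma**2))
    g /= g.sum()
    smoothed = np.convolve(signal, g, mode="same")
    laplacian = np.diff(smoothed, n=2)
    signs = np.sign(laplacian)
    return np.where(signs[:-1] * signs[1:] < 0)[0] + 1

# A unit step in intensity at position 100, as in Figure 8.
signal = np.where(np.arange(200) >= 100, 1.0, 0.0)
edges = zero_crossings(signal, sigma=3.0)
# Includes a crossing near the step at 100 (plus possible artifacts at
# the array borders, where the zero padding itself looks like an edge).
print(edges)
```

Note that the output is a bare list of positions: the step's amplitude and polarity, which determine the filled-in lightnesses on either side, are already gone at this stage.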
The replacement of the natural internal geometrical relationships of a pattern by arbitrary numerical measures of the pattern prevents the Marr and Hildreth (1980) theory from understanding how global processes, such as filling-in, can spontaneously occur in a physical setting. Instead, the Marr and Hildreth (1980) formulation leads to an approach wherein all the intelligence of what to do next rests in the investigator rather than in the model. This restriction to local, investigator-driven computations is due not only to the
Figure 8. When a unit step in intensity (a) is smoothed by a Gaussian kernel, the result is (b). The first spatial derivative is (c), and the second spatial derivative is (d). The second derivative is zero at the location of the edge.
present state of their model's development, but also to the philosophy of these workers, since Marr and Hildreth write (1980, p.189): "The visual world is not constructed of ripply, wave-like primitives that extend and add together over an area." Finally, because their theory is linear, it cannot tell us how to estimate the lightnesses of objects, and because their theory is feedforward, it cannot say how apparent depth can influence the apparent size and lightness of monocular patterns.
17. The Filling-In Dilemma: To Have Your Edge and Fill-In Too
Any linear and feedforward approach to spatial vision is in fact confronted with the full force of the filling-in dilemma: If spatial vision operates by first attenuating all but the edges in a pattern, then how do we ever arrive at a percept of rigid bodies with ample interiors, which are after all the primary objects of perception? How can we have our edges and fill-in too? How does the filling-in process span retinal areas which far exceed the spatial bandwidths of the individual receptive fields that physically justify a Gaussian smoothing process? In particular, in the idealized luminance profile in Figure 9, after the edges are determined by a zero-crossing computation, the directions in which to fill-in are completely ambiguous without further computations tacked on.
Figure 9. In this luminance profile, zero-crossings provide no information about which regions are brighter than others. Auxiliary computations are needed to determine this.
I will argue in Part II of this article that a proper definition of edges does not require auxiliary guesswork. I should emphasize what I do not mean by a solution to the filling-in dilemma. It is not sufficient to say that edge outlines of objects constitute sufficient information for a viewer to understand a three-dimensional scene. Such a position merely says that observers can use edges to arrive at object percepts, but not how they do so. Such a view begs the question. It is also not sufficient to say that feedback expectancies, or hypotheses, can use edge information to complete an object percept. Such a view does not say how the feedback expectancies were learned, notably what substrate of completed form information was sampled by the learning process, and it also begs the question. Finally, it is inadequate to say that an abstract reconstruction process generates object representations from edges if this process would require a homunculus for its execution in real time. Expressed in another way, the filling-in dilemma asks: If it is really so hard for us to find mechanisms which can spontaneously and unambiguously fill-in between edges, then do we not have an imperfect understanding of why the nervous system bothers to compute edges? Richards and Marr (1981) suggest that the edge computation compresses the amount of data which needs to be stored. This sort of memory load reduction is important in a computer program, but I will suggest in Part II that it is not a rate-limiting constraint on the brain design which grapples with binocular data.
I will suggest, in contrast, that the edge computation sets the stage for processes which selectively amplify and fill-in among those aspects of the data which are capable of matching monocularly, binocularly, or with learned feedback expectancies, as the case might be. This conclusion will clarify both why edge extraction is such an important step in the processing of visual patterns, in partial support of recent models (Marr and Hildreth, 1980; Marr and Poggio, 1979), and why edge preprocessing is nonetheless just one stage in the nonlinear feedback interactions that are used to achieve a coherent visual percept.
PART II
18. Edges and Fixations: The Ambiguity of Statistically Uniform Regions
The remainder of this article will outline the major concepts that are needed to build up my theory of these nonlinear interactions. I will also indicate how these concepts can be used to qualitatively interrelate data properties that often cannot be related at all by alternative theoretical approaches. Many of these concepts are mathematical
properties of the membrane equations of neurophysiology, which are the foundation of all quantitative neurophysiological experimentation. The theory provides an understanding of these equations in terms of their computational properties. When the membrane equations are used in suitably interconnected networks of cells, a number of specialized visual models are included as special cases. The theory thereby indicates how these models can be interrelated within a more general, physiologically based, computational framework. Due to the scope of this framework, the present article should be viewed as a summary of an ongoing research program, rather than as a completely tested visual theory. Although my discussion will emphasize the meaning and qualitative reasons for various data from the viewpoint of the theory, previous articles about the theory will be cited for those who wish to study mathematical proofs or numerical simulations, and Appendix A describes a system that is currently being numerically simulated to study binocular filling-in reactions. I will motivate my theoretical constructions with two simple thought experiments. I will use these experiments to remind us quickly of some important relationships between perceived depth and the monocular computation of spatial nonuniformities. Suppose that an observer attempts to fixate a perceptually uniform rectangle hovering in space in front of a discriminable but perceptually uniform background. How does the observer know where to fixate the rectangle? Even if each of the observer's eyes independently fixates a different point of the rectangle's interior, both eyes will receive identical input patterns near their fixation points due to the rectangle's uniformity. The monocular visual patterns near the fixation points match no matter how disparately the fixation points are chosen within the rectangle. Several conclusions follow from this simple observation.
Binocular visual matching between spatially homogeneous regions contains no information about where the eyes are pointed, since all binocular matches between homogeneous regions are equally good no matter where the eyes are pointed. The only binocular visual matches which stand out above the baseline of ambiguous homogeneous matches across the visual field are those which correlate spatially nonuniform data to the two eyes. However, the binocular correlations between these nonuniform patterns, notably their disparities, depend upon the fixation points of the two eyes. Disparity information by itself is therefore insufficient to determine the object's depth. Instead, there must exist an interaction between vergence angle and disparity information to determine where an object is in space (Foley, 1980; Grossberg, 1976; Marr and Poggio, 1979; Sperling, 1970). This binocular constraint on resolving the ambiguity of where the two eyes are looking is one reason for the monocular extraction of the edges of a visual form and attendant suppression of regions which are spatially homogeneous with respect to a given spatial scale. Without the ability to know where the object is in space, there would be little evolutionary advantage in perceiving its solidity or interior. In this limited sense, edge detection is more fundamental than form detection in dealing with the visual environment. Just knowing that a feedback loop must exist between motor vergence and sensory disparities does not determine the properties of this loop. Sperling (1970) has postulated that vergence acts to minimize a global disparity measure. Such a process would tend to reduce the perception of double images (Kaufman, 1974). I have suggested (Grossberg, 1976b) that good binocular matches generate an amplification of network activity, or a binocular resonance.
An imbalance in the total resonant output from each binocular hemifield may be an effective vergence signal leading to hemifield-symmetric resonant activity which signifies good binocular matching and stabilizes the vergence angle. The theoretical sections below will suggest how these binocular resonances also compute coherent depth, form, and lightness information.
19. Object Permanence and Multiple Spatial Scales
The second thought experiment reviews a use for multiple spatial scales, rather than a single edge computation, corresponding to each retinal point. Again, our conclusions can be phrased in terms of the fixation process. As a rigid object approaches an observer, the binocular disparities between its nonfixated features increase proportionally. In order to achieve a concept of object permanence, and at the very least to maintain the fixation process, mechanisms capable of maintaining a high correlation between these progressively larger disparities are needed. The largest disparities will, other things being equal, lie at the most peripheral points on the retina. The expansion of spatial scales with retinal eccentricity is easily rationalized in this way (Hubel and Wiesel, 1977; Richards, 1975; Schwartz, 1980). It does not suffice, however, to posit that a single scale exists at each retinal position such that scale size increases with retinal eccentricity. This is because objects of different size can approach the observer. As in the Holway and Boring (1941) experiments, objects of different size can generate the same retinal image if they lie at different distances. If these objects possess spatially uniform interiors, then the boundary disparities of their monocular retinal images carry information about their depth. Because all the objects are at different depths, these distinct disparities need to be computed with respect to that retinal position in one eye that is excited by all the objects' boundaries. Multiple spatial scales corresponding to each retinal position can carry out these multiple disparity computations.
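The Holway-Boring point can be stated in elementary viewing geometry: an object of twice the size at twice the distance subtends the same visual angle, yet its disparity relative to the fixation distance differs. The interocular separation and the viewing distances below are illustrative values, and the disparity formula is the standard small-angle approximation rather than anything specific to the theory developed here.

```python
import math

def visual_angle(size, distance):
    """Angular size (radians) of an object of physical `size` (meters)
    viewed at `distance` (meters)."""
    return 2 * math.atan(size / (2 * distance))

def binocular_disparity(distance, fixation_distance, interocular=0.065):
    """Approximate horizontal disparity (radians) of a point at
    `distance` when the eyes fixate at `fixation_distance`."""
    return interocular * (1 / distance - 1 / fixation_distance)

# Two objects, the second twice as large and twice as far, with the
# eyes fixating at 2 m: identical retinal images, different disparities.
near = (visual_angle(0.10, 1.0), binocular_disparity(1.0, 2.0))
far  = (visual_angle(0.20, 2.0), binocular_disparity(2.0, 2.0))
print(near[0], far[0], near[1], far[1])
```

The equal angular sizes and unequal disparities are exactly why one scale per retinal position cannot suffice: distinct disparities arising from the same retinal boundary must all be matched.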
I will now discuss how the particular scales which can binocularly resonate to a given object’s monocular boundary data thereupon fill-in the internal homogeneity of the object’s representation with length and lightness estimates, as well as the related question of how monocular cues and learned expectancies can induce similar resonances and thus a perception of depth.
20. Cooperative versus Competitive Binocular Interactions
One major difference between my approach to these problems and alternative approaches is the following: I suggest that a competitive process, not a cooperative process, defines a depth plane. The cooperative process that other authors have envisaged leads to sheets of network activity which are either off or maximally on. The competitive process that I posit can sustain quantized patterns of activity that reflect an object's perceived depth, lightness, and length. In other words, the competitive patterns do not succumb to a homuncular dilemma. They are part of the representation of an object's binocular form. The cells that subserve this representative process are sensitive to binocular disparities, but they are not restricted to disparity computations. In this sense, they do not define a "depth plane" at all. One reason that other investigators have not drawn this conclusion is because a binary code hypothesis is often explicit (or lurks implicitly) in their theories. The intuition that a depth plane can be perceived seems to imply cooperation, because in a binary world competition implies an either-or choice, which is manifestly unsuitable, whereas cooperation implies an and conjunction, which is at least tolerable. In actuality, a binary either-or choice does not begin to capture the properties of a competitive network. Mathematical analysis is needed to understand these properties. (I should emphasize at this point that cooperation and cooperativity are not the same notion. Both competitive and cooperative networks exhibit cooperativity, in the sense in which this word is casually used.) A large body of mathematical results concerning competitive networks has been discovered during the past decade (Ellias and Grossberg, 1975; Grossberg, 1970a, 1972d, 1973, 1978a, 1978c, 1978d, 1978e, 1980a, 1980b, 1981; Grossberg and Levine, 1975; Levine and Grossberg, 1976).
These results clarify that not all competitive networks enjoy the properties that are needed to build a visual theory. Certain competitive networks whose cells obey the membrane equations of neurophysiology do have desirable
properties. Such systems are called shunting networks to describe the multiplicative relationship between membrane voltages and the conductance changes that are caused by network inputs and signals. This multiplicative relationship enables these networks to automatically retune their sensitivity in response to fluctuating background inputs. Such an automatic gain control capacity implies formal properties that are akin to reflectance processing, Weber law modulation, sensitivity shifts in response to different backgrounds, as well as other important visual effects. Most other authors have worked with additive networks, which do not possess the automatic gain control properties of shunting networks. Sperling (1970, 1981) and Sperling and Sondhi (1968) are notable among other workers in vision for understanding the need to use shunting dynamics, as opposed to mere equilibrium laws of the form I(A + J)^(-1). However, these authors did not develop the mathematical theory far enough to have at their disposal some formal properties that I will need. A review of these and other competitive properties is found in Grossberg (1981, Sections 10-27). The sections below build up concepts leading to binocular resonances.
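The automatic gain control contrast between additive and shunting laws can be previewed with a small numerical sketch. The additive law used here is a generic stand-in rather than any model from the text, and all parameter values are invented for illustration; the shunting response uses the equilibrium formula derived as equation (2) in Section 21 below.

```python
# Additive vs. shunting responses to a fixed-reflectance pattern as the
# background intensity I grows. Illustrative parameters throughout.

def additive(thetas, I, c=0.4):
    # Generic additive on-center off-surround stand-in: x_i = I_i - c * (sum of others)
    inputs = [t * I for t in thetas]
    total = sum(inputs)
    return [Ii - c * (total - Ii) for Ii in inputs]

def shunting(thetas, I, A=1.0, B=1.0, C=0.5):
    # Shunting equilibrium (2): x_i = (B+C) * I/(A+I) * (theta_i - C/(B+C))
    return [(B + C) * I / (A + I) * (t - C / (B + C)) for t in thetas]

thetas = [0.5, 0.3, 0.2]
for I in (1.0, 100.0, 10000.0):
    print(round(additive(thetas, I)[0], 3), round(shunting(thetas, I)[0], 3))
# The additive output grows without bound as I grows; the shunting output
# levels off below B, coding the reflectance pattern at every intensity.
```

The additive cell has no way to discount the background, so its operating range must be redefined whenever the input load changes; the shunting cell renormalizes automatically.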
21. Reflectance Processing, Weber Law Modulation, and Adaptation Level in Feedforward Shunting Competitive Networks

Shunting competitive networks can be derived as the solution of a processing dilemma that confronts all cellular tissues, the so-called noise-saturation dilemma (Grossberg, 1973, 1978e). This dilemma notes that accurate processing both of low activity and high activity input patterns can be prevented by sensitivity loss due to noise (at the low activity end) and saturation (at the high activity end) of the input spectrum. Shunting competitive networks overcome this problem by enabling the cells to retune their sensitivity automatically as the overall background activity of the input pattern fluctuates through time. This result shows how cells can adapt their sensitivity to input patterns that fluctuate over a dynamical range that is much broader than the output range of the cells. As I mentioned above, the shunting laws take the form of the familiar membrane equations of neurophysiology in neural examples. Due to the generality of the noise-saturation dilemma, formally similar laws should occur in non-neural cellular tissues. I have illustrated in Grossberg (1978b) that some principles which occur in neural tissues also regulate non-neural developmental processes for similar computational reasons. The solution of the noise-saturation dilemma that I will review herein describes intercellular tuning mechanisms. Data describing intracellular adaptation have also been reported (Baylor and Hodgkin, 1974; Baylor, Hodgkin, and Lamb, 1974a, 1974b) and have been quantitatively fitted by a model in which visual signals are multiplicatively gated by a slowly accumulating transmitter substance (Carpenter and Grossberg, 1981). The simplest intercellular mechanism describes a competitive feedforward network in which the activity, or potential, x_i(t) of the ith cell (population) v_i in a field of cells v_1, v_2, ..., v_n responds to a spatial pattern I_i(t) = theta_i I(t) of inputs, i = 1, 2, ..., n. A collection of inputs comprises a spatial pattern if each input has a fixed relative size (or reflectance) theta_i, but a possibly variable background intensity I(t) (due, say, to a fluctuating light source). The convention that sum_{k=1}^n theta_k = 1 implies that I(t) is the total input to the field; viz., I(t) = sum_{k=1}^n I_k(t). The simplest law which solves the noise-saturation dilemma describes the net rate dx_i/dt at which sites at v_i are activated and/or inhibited through time. This law takes the form:
dx_i/dt = -A x_i + (B - x_i) I_i - (x_i + C) sum_{k != i} I_k,   (1)

i = 1, 2, ..., n, where B > 0 >= -C and B >= x_i(t) >= -C for all times t >= 0. Term -Ax_i describes the spontaneous decay of activity at a constant rate -A. Term (B - x_i)I_i
Figure 10. In the simplest feedforward competitive network, each input I_i excites its cell (population) v_i and inhibits all other populations v_j, j != i. (From Grossberg 1978e.)
describes the activation due to an excitatory input I_i in the ith channel (Figure 10). Term -(x_i + C) sum_{k != i} I_k describes the inhibition of activity by competitive inputs from the input channels other than v_i. In the absence of inputs (namely all I_i = 0, i = 1, 2, ..., n), the potential decays to the equilibrium potential 0 due to the decay term -Ax_i. No matter how intense the chosen inputs I_i, the potential x_i remains between the values B and -C at all times, because (B - x_i)I_i = 0 if x_i = B and -(x_i + C) sum_{k != i} I_k = 0 if x_i = -C. That is why B is called an excitatory saturation point and -C is called an inhibitory saturation point. When x_i > 0, the cell v_i is said to be depolarized. When x_i < 0, the cell v_i is hyperpolarized. The cell can be hyperpolarized only if C > 0, since x_i(t) >= -C at all times t.
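The boundedness property of (1) is easy to check numerically. The following sketch integrates (1) by Euler's method; the parameter values and inputs are arbitrary illustrative choices, not values from the text.

```python
# Euler integration of the feedforward shunting network (1):
#   dx_i/dt = -A*x_i + (B - x_i)*I_i - (x_i + C) * sum_{k != i} I_k
# A, B, C, and the inputs below are illustrative choices.

def simulate(inputs, A=1.0, B=1.0, C=0.2, dt=0.001, steps=20000):
    n = len(inputs)
    x = [0.0] * n
    total = sum(inputs)
    for _ in range(steps):
        for i in range(n):
            off_surround = total - inputs[i]   # sum of I_k over k != i
            dx = -A * x[i] + (B - x[i]) * inputs[i] - (x[i] + C) * off_surround
            x[i] += dt * dx
    return x

# Even for very intense inputs, every activity stays inside [-C, B]:
x = simulate([50.0, 10.0, 10.0])
print(all(-0.2 <= xi <= 1.0 for xi in x))   # True
```

Solving (1) at equilibrium gives x_i = (B I_i - C (I - I_i)) / (A + I), which the simulation converges to; the winning channel is depolarized and the losing channels are slightly hyperpolarized.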
Before noting how system (1) solves the noise-saturation dilemma, I should clarify its role in the theory as a whole. System (1) is part of a mathematical classification theory wherein a sequence of network variations on the noise-saturation theme is analysed. The classification theory characterizes how changes in network parameters (for example, decay rates or interaction rules) alter the transformation from input pattern (I_1, I_2, ..., I_n) to activity pattern (x_1, x_2, ..., x_n). The classification theory thereby provides useful guidelines for designing networks to accomplish specialized processing tasks. The inverse process of inferring which network can generate prescribed data properties is also greatly facilitated. In the present case of system (1), a feedforward flow of inputs to activities occurs wherein a narrow on-center of excitatory input (term (B - x_i)I_i) is balanced against a broad off-surround of inhibitory inputs (term -(x_i + C) sum_{k != i} I_k). Deviations from these hypotheses will generate network properties that differ from those found in system (1), as I will note in subsequent examples.
To see how system (1) solves the noise-saturation dilemma, let the background input I(t) be held steady for a while. Then the activities in (1) approach equilibrium. These
equilibrium values are found by setting dx_i/dt = 0 in (1). They are

x_i = (B + C) I/(A + I) [theta_i - C/(B + C)],   i = 1, 2, ..., n.   (2)
Equation (2) exhibits four main features: (a) Factorization and automatic tuning of sensitivity. Term theta_i - C/(B + C) depends on the ith reflectance theta_i of the input pattern. It is independent of the background intensity I. Formula (2) factorizes information about reflectance from information about background intensity. Due to the factorization property, x_i remains proportional to theta_i - C/(B + C) no matter how large I is chosen to be. In other words, x_i does not saturate. (b) Adaptation level, featural noise suppression, and symmetry-breaking. Output signals from cell v_i are emitted only if the potential x_i is depolarized. By (2), x_i is depolarized only if term theta_i - C/(B + C) is positive. Because the reflectance theta_i must exceed C/(B + C) to depolarize x_i, term C/(B + C) is called the adaptation level. The size of the adaptation level depends on the ratio of C to B. Typically B >> C in vivo, which implies that C/(B + C) << 1. Were not C/(B + C) < 1, no choice of theta_i could depolarize the cell, since theta_i, being a ratio, never exceeds 1. The most perfect choice of the ratio of C to B is C/B = 1/(n - 1), since then C/(B + C) = 1/n. In this case, any uniform input pattern I_1 = I_2 = ... = I_n is suppressed by the network because then all theta_i = 1/n. Since also C/(B + C) = 1/n, all x_i = 0 given any input intensity. This property is called featural noise suppression, or the suppression of zero spatial frequency patterns. Featural noise suppression guarantees that only nonuniform reflectances of the input pattern can ever generate output signals. The inequality B >> C is called a symmetry-breaking inequality for a reason that is best understood by considering the special case when C/B = 1/(n - 1). The ratio 1/(n - 1) is also, by (1), the ratio of the number of cells excited by the input I_i divided by the number of cells inhibited by the input I_i.
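Properties (a) and (b) can be verified directly from formula (2). A minimal numerical sketch follows; the parameter values and reflectances are invented for illustration, with C/B = 1/(n - 1), so that the adaptation level equals 1/n for n = 3.

```python
# Equilibrium activities from formula (2):
#   x_i = (B + C) * I/(A + I) * (theta_i - C/(B + C))
# B, C, and the reflectances below are illustrative choices.

def equilibrium(thetas, I, A=1.0, B=1.0, C=0.5):
    return [(B + C) * I / (A + I) * (t - C / (B + C)) for t in thetas]

thetas = [0.5, 0.3, 0.2]
# (a) Factorization: relative activities are set by reflectances alone, so the
# coded pattern does not saturate even when I grows a thousandfold.
lo, hi = equilibrium(thetas, 10.0), equilibrium(thetas, 10000.0)
print(abs(lo[0] / lo[1] - hi[0] / hi[1]) < 1e-9)   # True
# (b) Featural noise suppression: a uniform pattern yields zero activity at
# every cell, no matter how intense the input.
print(max(abs(v) for v in equilibrium([1/3, 1/3, 1/3], 10000.0)))
```

The first check shows that ratios of activities track ratios of the terms theta_i - C/(B + C); the second shows that the uniform pattern sits exactly at the adaptation level and is quenched.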
Noise suppression is due to the fact that the asymmetry of the intercellular on-center off-surround interactions is matched by the asymmetry of the intracellular saturation points. In other words, the symmetry of the network as a whole is "broken" to achieve noise suppression. Any imbalance in this matching of intercellular to intracellular parameters will either increase or decrease the adaptation level and thereby modify the noise suppression property. This symmetry-breaking property of shunting networks leads to a theory of how on-center off-surround anatomies develop that is different from the one implied by an additive approach, such as a Fourier or Laplacian theory, if only because additive theories do not possess excitatory and inhibitory saturation points. In Grossberg (1978e, 1982e) I suggested how the choice of intracellular saturation points in a shunting network may influence the development of intercellular on-center off-surround connections to generate the correct balance of intracellular and intercellular parameters. An incorrect balance could suppress all input patterns by causing a pathologically large adaptation level. My suggestion is that the balance of intracellular saturation points determines the balance of morphogenetic substances that are produced at the target cells to guide the growing excitatory and inhibitory pathways. (c) Weber-law modulation. Term theta_i - C/(B + C) is modulated by the term (B + C)I/(A + I), which depends only on the background intensity I. This term takes the form of a Weber law (Cornsweet, 1970). Thus (2) describes Weber law modulation of reflectance processing above an adaptation level. (d) Normalization and limited capacity. The total activity of the network is
x = sum_{k=1}^n x_k = (B + C) I/(A + I) [1 - nC/(B + C)].   (3)
By (3), x is independent of the number n of cells in the network if either C = 0 or C/(B + C) = 1/n. In every case, x <= B no matter how intense I becomes, and B is independent of n. This tendency for total activity not to grow with n is called total activity normalization. Normalization implies that if the reflectance of one part of the input pattern increases while the total input activity remains fixed, then the cell activities corresponding to other parts of the pattern decrease. Weber law modulated reflectance processing helps to explain aspects of brightness constancy, whereas the normalization property helps to explain aspects of brightness contrast (Grossberg, 1981). The two types of property are complementary aspects of the same dynamical process.

22. Pattern Matching and Multidimensional Scaling Without a Metric
The interaction between reflectance processing and the adaptation level implies that the sum of two mismatched input patterns from two separate input sources will be inhibited by network (1). This is because the mismatched peaks and troughs of the two input patterns will add to yield an almost uniform total input pattern, which will be quenched by the noise suppression property. By contrast, the sum of two matched input patterns is a pattern with the same reflectances theta_i as the individual patterns. The total activity I + J of the summed pattern, however, exceeds the total activities I and J of the individual patterns. Consequently, by (2) the activities in response to the summed pattern are
x_i = (B + C)(I + J)/(A + I + J) [theta_i - C/(B + C)],   (4)
which exceed the activities in response to the separate patterns. Network activity is thereby amplified in response to matched patterns and attenuated in response to mismatched patterns due to an interaction between reflectance processing, the adaptation level, and Weber law modulation. The fact that the activity of each cell in a competitive network can depend on how well two input patterns match is of great importance in my theory. Pattern matching is not just a local property of input sizes at each cell. A given cell can receive two different inputs, yet these inputs may be part of perfectly matched patterns, hence the cell activity is amplified. A given cell can receive two identical inputs, yet these inputs may be part of badly mismatched patterns, hence the cell activity is suppressed. This matching property avoids the homuncular dilemma by being an automatic consequence of the network's pattern registration process. Various models in Artificial Intelligence, by contrast, use a Euclidean distance [sum_k (I_k - J_k)^2]^(1/2) or some other metric to compute pattern matches (Klatt, 1980; Newell, 1980). Such an approach requires a separate processor to compute a scalar distance between two patterns before deciding how to tack the results of this scalar computation back onto the mainstream of computational activity. A metric also misses properties of the competitive matching process which are crucial in the study of spatial vision, as well as in other pattern recognition problems wherein multiple scales are needed to represent the data unambiguously. In the competitive matching process, a match not only encodes the matched pattern; it also amplifies it. A metric does not encode a pattern, because it is a scalar rather than a vector. A metric does not amplify the matched patterns because it is minimized rather than maximized by a pattern match. Moreover, what is meant by matching differs in a metric and in a shunting network.
A metric makes local matches between corresponding input intensities, whereas a network matches reflectances, which depend upon the entire pattern. One could of course use a metric to match ratios of input intensities, but this computation requires an extra homuncular processing step and is insensitive to overall input intensity, which is not true of the network matching mechanism. When the
long-range inhibitory term sum_{k != i} I_k in (1) is replaced by distance-dependent inhibitory interactions, as in equation (22) of Section 24, a global match of patterns is replaced by simultaneous local matches on a spatial scale that varies monotonically with receptive field size. Although the properties of metric matches are disappointing in comparison to properties of feedforward network matching, they are totally inadequate when compared to properties of feedback network matching. In a feedback context, there is a flexible criterion of matching called the quenching threshold (Section 28). This criterion can be tuned by attentional and other cognitive factors. Furthermore, approximately matched patterns can mutually deform one another into a fused composite pattern via positive feedback signaling (Ellias and Grossberg, 1975; Grossberg, 1980b). These properties endow the matching process with hysteresis properties that can maintain a match during slow deformations of the input patterns (Fender and Julesz, 1967). When matching occurs between ambiguous bottom-up input patterns and top-down expectancies, the pattern fusion property can complete the ambiguous data leading to a cognitively mediated percept (Gregory, 1966; Grossberg, 1980b). The primary use of network matching in my binocular theory is to show how those spatial scales which achieve the best binocular match of monocular data from the two eyes can resonate energetically, whereas those spatial scales which generate a mismatched binocular interpretation of the monocular data are energetically attenuated. The ease with which these multidimensional scaling effects occur is due to properties that obtain in even the simplest competitive networks.
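The graded matching property can be illustrated directly from the equilibrium formula (2). In this sketch the three-cell patterns and all parameters are invented for the example, with C/(B + C) = 1/n so that uniform patterns are quenched: summing matched patterns amplifies the peak response, summing mismatched patterns drives it toward the adaptation level, and the matched sum stays below twice the single-pattern response, consistent with sub-additive summation.

```python
# Matched vs. mismatched pattern summation under equilibrium formula (2).
# All patterns and parameters are illustrative.

def response(pattern, A=1.0, B=1.0, C=0.5):
    I = sum(pattern)
    return [(B + C) * I / (A + I) * (p / I - C / (B + C)) for p in pattern]

peak = [6.0, 2.0, 2.0]                                   # pattern peaked at cell 1
matched = [a + b for a, b in zip(peak, peak)]            # same reflectances summed
mismatched = [a + b for a, b in zip(peak, [2.0, 2.0, 6.0])]

print(max(response(peak)))        # single pattern
print(max(response(matched)))     # amplified, yet less than twice the single response
print(max(response(mismatched)))  # attenuated: the summed pattern is nearly uniform
```

Note that cell 1 receives a larger input in the mismatched sum than in the single pattern, yet its activity collapses: what is matched is the whole reflectance pattern, not the local input.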
I use the term "multidimensional scaling" deliberately, since similar competitive rules often operate on a higher perceptual and cognitive level (Grossberg, 1978e), where metrical concepts have also been used as explanatory tools (Osgood, Suci, and Tannenbaum, 1957; Shepard, 1980). An inadequate model of how cell activity reflects matching can limit a theory's predictive range. For example, in a binocular context, I will use this relationship to suggest how several types of data can be related, including the coexistence of Fechner's paradox and binocular brightness summation (Blake et al., 1981), and the choice between binocular fusion and rivalry within a given spatial scale (Kaufman, 1974; Kulikowski, 1978). A reason for binocular brightness summation is already evident in equation (4). The effects of activities I and J on x_i exceed those expected from noninteracting independent detectors, but are less than the sum I + J, as a result of Weber law modulation (Cogan et al., 1982). In a feedback network, the inputs I_i and J_i are chosen to be sigmoid, or S-shaped, functions of the network activities at a prior processing stage. The sigmoid signals are needed to prevent the network as a whole from amplifying noise (Section 28). Then (4) is replaced by a nonlinear summation process that clarifies the success of power law and sigmoid summation rules in fitting data about spatial and binocular brightness interactions (Arend, Lange, and Sandick, 1981; Graham, 1981; Grossberg, 1981; Legge and Rubin, 1981).
23. Weber Law and Shift Property Without Logarithms
The simple equation (1) has other properties which are worthy of note. These properties describe other aspects of how the network retunes itself in response to changes in background activity. The simplest consequence of this retuning property is the classical Weber law

(Delta I)/I = constant,   (5)
where Delta I is the just noticeable increment above a background intensity I. The approximate validity of (5) has encouraged the belief that logarithmic processing determines visual sensitivity (Cornsweet, 1970; Land, 1977), since Delta(log I) = (Delta I)/I, despite the
fact that the logarithm exhibits unphysical infinities at small and large values of its argument. In fact, Cornsweet (1970) built separate theories of reflectance processing and of brightness perception by using logarithms to discuss reflectances and shunting functions like I(A + J)^(-1) to discuss brightness. By contrast, shunting equations like (2) join together reflectance processing and brightness processing into a single computational framework. Power laws have often been used in psychophysics instead of logarithms (Stevens, 1959). It is therefore of interest that equation (2) guarantees reflectance processing undistorted by saturation if the inputs I_i are power law outputs I_i = lambda J_i^p of the activities J_i at a prior processing stage. Reflectance processing is preserved under power law transformations because the form of (2) is left invariant by such a transformation. In particular, the total input becomes I = lambda sum_{k=1}^n J_k^p and the reflectances become theta_i = J_i^p / sum_{k=1}^n J_k^p, so that (2) retains its Weber-modulated form in the transformed variables.
To show how the Weber law (5) approximately obtains in (2), choose

I_1 = K + Delta I and I_2 = I_3 = ... = I_n = K.

Then the total input before increment Delta I is applied to I_1 is I = nK. By (2),

x_1 = (B + C)(I + Delta I)/(A + I + Delta I) [(K + Delta I)/(I + Delta I) - C/(B + C)].

If I >> Delta I and n > 1, then

(K + Delta I)/(I + Delta I) - C/(B + C) = (n - 1)(Delta I)/(nI) + D = (Delta I)/I + D,

where D = 1/n - C/(B + C). If I >> A, then (I + Delta I)/(A + I + Delta I) = 1. Consequently

x_1 = (B + C)[(Delta I)/I + D].

If x_1 is detectable when it exceeds a threshold Gamma, then

(Delta I)/I = W,

where

W = Gamma/(B + C) - D = constant.
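This derivation can be checked numerically. In the sketch below the detection threshold Gamma and all parameters are invented for illustration; the just-noticeable increment is found by bisection on the equilibrium activity x_1 of (2), and the ratio Delta I / I comes out nearly constant across a hundredfold range of backgrounds.

```python
# Weber-law check: the smallest increment dI that pushes x_1 in (2) above a
# fixed detection threshold gamma satisfies dI/I ~ constant. Illustrative values.

def x1(K, dI, n=10, A=1.0, B=1.0, C=0.0):
    total = n * K + dI                 # I_1 = K + dI, I_2 = ... = I_n = K
    return (B + C) * total / (A + total) * ((K + dI) / total - C / (B + C))

def jnd(K, gamma=0.12):
    lo, hi = 0.0, 1e6                  # bisection for the threshold increment
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if x1(K, mid) >= gamma else (mid, hi)
    return hi

for K in (10.0, 100.0, 1000.0):
    print(jnd(K) / (10 * K))           # roughly the same at every background level
```

With C = 0, the predicted constant is W = Gamma/(B + C) - D = gamma - 1/n, and the computed ratios cluster near that value once I >> A.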
A more precise version of the Weber law (5) is the shift property. This property says that the region of maximal visual sensitivity shifts without compression as the background off-surround intensity is parametrically increased (Werblin, 1971). The shift property obtains when the on-center input I_i is plotted in logarithmic coordinates, despite the fact that (2) does not describe logarithmic processing. The shift property is important in a multidimensional parallel processing framework wherein changes in the number and intensity of active input sources can fluctuate wildly through time. Given the shift property, one can fix the activity scale (-C, B) and the network's output threshold once and for all without distorting the network's decision rules as the inputs fluctuate through time. A fixed choice of operating range and of output thresholds is impossible in a multidimensional parallel processing theory that is built up from additive processors. If a fixed threshold is selective when m converging input channels are active, then it may not generate any outputs whatsoever when n < m input channels of comparable intensity are active, and may unselectively generate outputs whenever n > m input channels are active. Such a theory needs continually to redefine how big its thresholds should be as the input load fluctuates through time. To derive the shift property, rewrite (2) as

x_i = [(B + C)I_i - CI] / (A + I).   (17)
Also write I_i in logarithmic coordinates as M = log_e I_i, or I_i = e^M, and the total off-surround input as L = sum_{k != i} I_k. Then, in logarithmic coordinates, (17) becomes

x_i(M, L) = (Be^M - CL) / (A + L + e^M).   (18)
The question of shift invariance is: Does there exist a shift S such that

x_i(M + S, L_1) = x_i(M, L_2)   (19)

for all M, where S depends only on L_1 and L_2? The answer is yes if C = 0 (no hyperpolarization). Then

S = log_e [(A + L_1)/(A + L_2)],   (20)

which shows that successively increasing L by linear increments Delta L in (18) causes progressively smaller shifts S in (20). In particular, if L_1 = (n - 1) Delta L and L_2 = n Delta L, then S approaches zero as n approaches infinity. If C > 0, then (19) implies that

S = log_e [ (AB + (B + C)L_1 + AC(L_1 - L_2)e^(-M)) / (AB + (B + C)L_2) ].   (21)
By (21), S depends on M only via term AC(L_1 - L_2)e^(-M), which rapidly decreases as M increases. Thus the shift property improves, rather than deteriorates, at the larger intensities M which might have been expected to cause saturation. Moreover, if B >> C, as occurs physically, then (20) is approximately valid at all values of M >= 0.

24. Edge, Spatial Frequency, and Reflectance Processing by the Receptive Fields of Distance-Dependent Feedforward Networks
Equation (1) is based on several assumptions which do not always occur in vivo. It is the task of the mathematical classification theory to test the consequences of
modifying these assumptions. One such assumption says that the inhibitory inputs excite all off-surround channels with equal strength, as in term -(x_i + C) sum_{k != i} I_k of (1). Another assumption says that only the ith channel is excited by the ith input, as in term (B - x_i)I_i of (1). In a general feedforward shunting network, both the excitatory and the inhibitory inputs can depend on the distance between cells, as in the feedforward network
dx_i/dt = -A x_i + (B - x_i) sum_{k=1}^n I_k D_{ki} - (x_i + C) sum_{k=1}^n I_k E_{ki}.   (22)
Here the coefficients D_{ki} and E_{ki} describe the fall-off with the distance between cells v_k and v_i of the excitatory and inhibitory influences, respectively, of input I_k on cell v_i. Equation (22) exhibits variants of all the properties enjoyed by equation (1). These properties follow from the equilibrium activities of (22), namely

x_i = I F_i / (A + I G_i),   (23)

where

F_i = sum_{k=1}^n theta_k (B D_{ki} - C E_{ki})   (24)

and

G_i = sum_{k=1}^n theta_k (D_{ki} + E_{ki}),   (25)
in response to a sustained input pattern I_i = theta_i I, i = 1, 2, ..., n. See Ellias and Grossberg (1975) and Grossberg (1981) for a discussion of these properties. For present purposes, I will focus on the fact that the noise suppression property in the network (22) implies an edge detection and spatial frequency detection capability in addition to its pattern matching capability. The noise suppression property in (23) is guaranteed by imposing the inequalities

B sum_{k=1}^n D_{ki} <= C sum_{k=1}^n E_{ki},   (26)
i = 1, 2, ..., n. Noise suppression follows from (26) because then all x_i <= 0 in response to a uniform pattern (all theta_i = 1/n) by (23) and (24). The inequalities (26) say, just as in Section 21, that there exists a matched symmetry-breaking between the spatial bandwidths of excitatory and inhibitory intercellular signaling and the choice of inhibitory and excitatory intracellular saturation points -C and B, respectively. A distance-dependent network with the noise suppression property can detect edges and other nonuniform spatial gradients for the following reason. By (26), those cells v_i which perceive a uniform input pattern within the breadth of their excitatory and inhibitory scales are suppressed by the noise suppression property no matter how intense the pattern activity is (Figure 11). Only those cells which perceive a nonuniform pattern with respect to their scales can generate suprathreshold activity. This is also true in a suitably designed additive network (Ratliff, 1965). When the interaction coefficients D_{ki} and E_{ki} of (22) are Gaussian functions of distance, as in D_{ki} = D exp[-mu(k - i)^2] and E_{ki} = E exp[-nu(k - i)^2], then the equilibrium activities x_i in (23) include and generalize the model of receptive field properties that is currently used to fit a variety of visual data. In particular, the term F_i in (24) that appears in the numerator of x_i depends on sums of differences of Gaussians. Difference-of-Gaussian form factors for studying receptive field responses appear in the work of
Figure 11. When the feedforward competitive network is exposed to the pattern in (a), it suppresses both interior and exterior regions of the pattern that look uniform to cells at these pattern locations. The result is the differential amplification of pattern regions which look nonuniform to the network, as in (b).
The Quantized Geometv of Visual Space
33
various authors (Blakemore, Carpenter, and Georgeson, 1970; Ellias and Grossberg, 1975; Enroth-Cugell and Robson, 1966; Levine and Grossberg, 1976; Rodieck and Stone, 1965; Wilson and Bergen, 1979). At least three properties of (23) can distinguish it from an additive difference-of-Gaussian theory. The first is that each difference-of-Gaussian form factor BD_{ki} - CE_{ki} in (24) multiplies, or weights, a reflectance theta_k, and all the weighted reflectances are Weber-modulated by a ratio that depends on the background input I. The difference-of-Gaussian receptive field BD_{ki} - CE_{ki} thereby becomes a weighting term in the reflectance processing of the network as a whole. The second property is that each difference-of-Gaussian factor BD_{ki} - CE_{ki} is itself weighted by the excitatory saturation point B and the inhibitory saturation point C of the network, by contrast with a simple difference-of-Gaussians D_{ki} - E_{ki}. In networks in which zero spatial frequencies are exactly canceled by their receptive fields, the symmetry-breaking inequality B > C of the shunting model predicts that the ratio of excitatory to inhibitory spatial bandwidths should be larger in a shunting theory than in an additive theory. A third way to distinguish experimentally between additive and shunting receptive field models is to test whether the contrast of the patterned responses changes as a function of suprathreshold background luminance. In an additive theory, the answer is no. In a distance-dependent shunting equation such as (23), the answer is yes. This breakdown is numerically and mathematically analysed in Ellias and Grossberg (1975). The ratios which determine x_i in (23) lead to changes of contrast as the background intensity I increases only because the coefficients D_{ki} and E_{ki} are distance-dependent. In a shunting network with a very narrow excitatory bandwidth and a very broad inhibitory bandwidth, the relative sizes of the x_i are independent of I.
The contrast changes which occur as I increases in the distance-dependent case can be viewed as a partial breakdown of reflectance processing at high I levels due to the inability of inhibitory gain control to compensate fully for saturation effects. The edge enhancement property of a feedforward competitive network confronts us with the full force of the filling-in dilemma. If only edges can be detected by a network once it is constrained to satisfy, even approximately, such a basic property as noise suppression, then how does the visual system spontaneously fill-in among the edges to generate percepts of solid objects embedded in continuous media?
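The edge-detection property described above can be checked with a minimal sketch of the distance-dependent equilibrium (23). The Gaussian widths and the rectangular input below are invented for the example, and C is chosen so that the noise-suppression condition (26) holds with equality, so uniform regions are quenched exactly.

```python
import math

# Distance-dependent feedforward shunting network, equilibrium (23):
#   x_i = (B*exc_i - C*inh_i) / (A + exc_i + inh_i),
# where exc_i and inh_i are Gaussian cross-correlations of the input pattern.
# mu, nu, and the rectangle below are illustrative choices.

def equilibrium(inputs, A=1.0, B=1.0, mu=0.5, nu=0.05):
    n = len(inputs)
    D = lambda d: math.exp(-mu * d * d)            # narrow on-center kernel
    E = lambda d: math.exp(-nu * d * d)            # broad off-surround kernel
    # Set C so that B*sum(D) = C*sum(E): uniform patterns are exactly quenched (26).
    C = B * sum(D(d) for d in range(-n, n + 1)) / sum(E(d) for d in range(-n, n + 1))
    x = []
    for i in range(n):
        exc = sum(inputs[k] * D(k - i) for k in range(n))
        inh = sum(inputs[k] * E(k - i) for k in range(n))
        x.append((B * exc - C * inh) / (A + exc + inh))
    return x

pattern = [0.0] * 10 + [1.0] * 20 + [0.0] * 10      # a rectangle of input
x = equilibrium(pattern)
print(abs(x[20]) < 0.005)      # True: the uniform interior is suppressed
print(max(x[9:13]) > 0.04)     # True: activity survives near the left edge
```

Only cells whose kernels straddle a boundary see a nonuniform pattern and remain depolarized, which is exactly the edge response that raises the filling-in dilemma.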
25. Statistical Analysis by Structural Scales: Edges With Scaling and Reflectance Properties Preserved
Before facing this dilemma, I need to review other properties of the excitatory input term Σ_{k=1}^n I_k D_{ki} and the inhibitory input term Σ_{k=1}^n I_k E_{ki} in (22). Let the interaction coefficients D_{ki} and E_{ki} be distance-dependent, so that D_{ki} = D(|k - i|) and E_{ki} = E(|k - i|), where the functions D(j) and E(j) are decreasing functions of j, such as Gaussians. Then the input terms Σ_{k=1}^n I_k D_{ki} cross-correlate the input pattern (I_1, I_2, ..., I_n) with the kernel D(j). Similarly, the input terms Σ_{k=1}^n I_k E_{ki} cross-correlate the input pattern (I_1, I_2, ..., I_n) with the kernel E(j). These statistics of the input pattern, rather than the input pattern itself, are the local data to which the network reacts. I will call the kernels D(j) and E(j) structural scales of the network to distinguish them from the functional scales that will be defined below. The structural scales perform a statistical analysis of the data before the shunting dynamics further transform these data statistics. Although terms like Σ_{k=1}^n I_k D_{ki} are linear functions of the inputs I_k, the inputs are themselves often nonlinear (notably S-shaped, or sigmoid) functions of outputs from prior network stages (Section 28). Thus the statistical analysis of input patterns is in general a nonlinear summation process.

These concepts are elementary, as well as insufficient, for our purposes. It is, however, instructive to review how statistical preprocessing of an input pattern influences the network's reaction to patterns more complex than a rectangle, say a periodic pattern of high spatial frequency bars superimposed on a periodic pattern of low spatial frequency bars (Figure 12a). Suppose for definiteness that the excitatory scale D(j) is narrower than the inhibitory scale E(j) to prevent the occurrence of spurious peak splits and multiple edge effects that can occur even in a feedforward network's response to spots and bars of input (Ellias and Grossberg, 1975). Then the excitatory structural bandwidth determines a unit length over which input data are statistically pooled, whereas the inhibitory structural bandwidth determines a unit length over which the pooled data of nearby populations are evaluated for their uniformity. It is easily seen that a feedforward network in which featural noise suppression holds and whose excitatory bandwidth approximates a can react to the input pattern with a periodic series of smoothed bumps (Figure 12b). By contrast, a network whose excitatory bandwidth equals period 2a but is less than the entire pattern width reacts only to the smoothed edges of the input pattern (Figure 12c). The interior of the input pattern is statistically uniform with respect to the larger structural scale, and therefore its interior is inhibited by noise suppression. As the excitatory bandwidth increases further, the smoothed edges are lumped together until the pattern generates a single centered hump, or spot, of network activity (Figure 12d). This example illustrates how the interaction of a broad structural scale with the noise suppression mechanism can inhibit all but the smoothed edges of a finely and regularly textured input pattern. After inhibition takes place, the spatial breadth of the surviving edge responses depends on both the input texture and the structural scale; the edges have not lost their scaling properties.
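As a numerical sketch (not part of the original text; the network form of (22) is taken from the surrounding description, and all parameter values are illustrative assumptions), the following Python fragment cross-correlates a two-frequency bar pattern with Gaussian structural scales D(j) and E(j) and evaluates the feedforward shunting equilibrium x_i = (B P_i - C N_i)/(A + P_i + N_i) obtained by setting dx_i/dt = 0:

```python
import numpy as np

def gaussian_kernel(sigma):
    # Structural scale D(j) or E(j): a normalized Gaussian of width sigma.
    half = int(4 * sigma) + 1
    j = np.arange(-half, half + 1)
    k = np.exp(-(j / sigma) ** 2)
    return k / k.sum()

def feedforward_equilibrium(I, sigma_D, sigma_E, A=1.0, B=1.0, C=0.5):
    # Steady state of a shunting on-center off-surround network of the
    # form (22): 0 = -A*x_i + (B - x_i)*P_i - (x_i + C)*N_i, where
    # P and N cross-correlate the input pattern with D(j) and E(j).
    P = np.convolve(I, gaussian_kernel(sigma_D), mode="same")
    N = np.convolve(I, gaussian_kernel(sigma_E), mode="same")
    return (B * P - C * N) / (A + P + N)

# High spatial frequency bars (period 10) superimposed on low spatial
# frequency bars (period 40), as in Figure 12a.
s = np.arange(400)
I = 1.0 + 0.5 * np.sign(np.sin(2 * np.pi * s / 10)) \
        + 0.5 * np.sign(np.sin(2 * np.pi * s / 40))

narrow = feedforward_equilibrium(I, sigma_D=1.0, sigma_E=4.0)
broad = feedforward_equilibrium(I, sigma_D=8.0, sigma_E=32.0)

# The broad structural scale pools away the fine bars: over the pattern's
# interior its response varies far less than the narrow scale's response.
print(narrow[140:260].std(), broad[140:260].std())
```

With the narrow scale the fine bars survive in the response, whereas the broad scale treats the textured interior as statistically uniform, leaving mainly the smoothed low-frequency structure, in the spirit of the transitions from Figure 12b to 12d.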
The peak heights of these edge responses compute a measure of the pattern's reflectances near its boundary, since ratios of input intensities across the network determine the steady-state potentials x_i in (23). Rather than discard these monocular scaling and lightness properties, as in a zero-crossing computation, I will use them in an essential way below as the data with which to build up binocular resonances.
26. Correlation of Monocular Scaling With Binocular Fusion

The sequence of activity patterns in Figures 12b, 12c, and 12d is reversed when an observer steadily approaches the picture in Figure 12a. Then the spot in Figure 12d bifurcates into two boundary responses, which in turn bifurcate into a regular pattern of smoothed bumps, which finally bifurcate once again to reveal the high frequency components within each bump. If the picture starts out sufficiently far away from the observer, then the first response in each of the observer's spatial scales is a spot, and the bifurcations in the spot will occur in the same order. However, the distance at which a given bifurcation occurs depends on the spatial scale in question. Other things being equal, a prescribed bifurcation will occur at a greater distance if the excitatory bandwidth of the spatial scale is narrower (high spatial frequency). Furthermore, the registration of multiple spatial frequencies (or even of multiple spots) in the picture will not occur in a spatial scale whose excitatory bandwidth is too broad (low spatial frequency). The same sequence of bifurcations can occur within the multiple spatial scales corresponding to each eye. If the picture is simultaneously viewed by both eyes, the question naturally arises: How do the two activity patterns within each monocular scale binocularly interact at each distance? Let us assume for the moment, as in the Kaufman (1974) and Kulikowski (1978) experiments, that as the disparity of two monocular patterns increases, it becomes harder for the high spatial frequency scales to fuse them. Since disparity decreases with increasing distance, all scales can fuse their monocular patterns (assuming they are detectable at all) when the distance is great enough, but the lower spatial frequency scales can maintain fusion over a broader range of decreasing distances than can the higher spatial frequency scales.
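The geometric fact used here (binocular disparity shrinks as viewing distance grows) can be illustrated with a few lines of Python; the 6.5 cm interocular separation is a conventional assumed value, not a figure from the text:

```python
import math

def disparity_deg(distance_m, interocular_m=0.065):
    # Vergence angle subtended at the two eyes by a point target; the
    # relative disparity between two targets is the difference of their
    # vergence angles.
    return math.degrees(2 * math.atan(interocular_m / (2 * distance_m)))

# Disparity between a target and a reference 10 cm behind it shrinks
# rapidly as viewing distance grows.
for d in (0.5, 1.0, 2.0, 4.0):
    delta = disparity_deg(d) - disparity_deg(d + 0.1)
    print(f"{d:4.1f} m: {delta:.3f} deg")
```

So at great distances every scale faces only small disparities, while close viewing produces disparities large enough to defeat the high spatial frequency scales first.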
Other things being equal, the scales which can most easily binocularly fuse their two monocular representations of a picture at a given distance are the scales which average away the finer features in the picture. It therefore seems natural to ask: Does the broad spatial smoothing within low spatial frequency scales enhance their ability to
Figure 12. The response of a network to a pattern (a) with multiple spatial frequencies progressively alters from (b) through (d) as the structural scales of the network expand.
binocularly fuse disparate monocular activity patterns? Having arrived at this issue, we now need to study those properties of feedback competitive shunting networks that will be needed to design scale-sensitive binocular resonances in which the fusion event is only one of a constellation of interrelated depth, length, and lightness properties.

27. Noise Suppression in Feedback Competitive Networks
The noise-saturation dilemma confronts all cellular tissues which process input patterns, whether the cells exist in a feedforward or in a feedback anatomy. As part of the mathematical classification theory, I will therefore consider shunting interactions in a feedback network wherein excitatory signals are balanced by inhibitory ones. Together, these feedback signals are capable of retuning network sensitivity in response to fluctuating background activity levels. The feedback analog of the distance-dependent feedforward network (22) is

dx_i/dt = -A x_i + (B - x_i)[J_i + Σ_{k=1}^n f(x_k) D_{ki}] - (x_i + C)[K_i + Σ_{k=1}^n g(x_k) E_{ki}],   (27)

i = 1, 2, ..., n. As in (22), term -A x_i describes the spontaneous decay of activity at rate -A. Term (B - x_i)J_i describes the excitatory effect of the feedforward excitatory input J_i, which was chosen equal to Σ_{k=1}^n I_k D_{ki} in (22). Term -(x_i + C)K_i is also a feedforward term due to inhibition of activity by the feedforward inhibitory input K_i, which was chosen equal to Σ_{k=1}^n I_k E_{ki} in (22). The new excitatory feedback term Σ_{k=1}^n f(x_k) D_{ki} describes the total effect of all the excitatory feedback signals f(x_k) D_{ki} from the cells v_k to v_i. The function f(x_i) transmutes the activity, or potential, x_i into a feedback signal f(x_i), which can be interpreted either as a density of spikes per unit time interval or as an electrotonic influence, depending on the situation. The inhibitory feedback term Σ_{k=1}^n g(x_k) E_{ki} determines the total effect of all the inhibitory feedback signals g(x_k) E_{ki} from the cells v_k to v_i. As in (22), the interaction coefficients D_{ki} and E_{ki} are often defined by kernels D(j) and E(j), such that E(j) decreases more slowly than D(j) as a function of increasing values of j.

The problem of noise suppression is just as basic in feedback networks as in feedforward networks. Suppose, for example, that the feedforward inputs and the feedback signals both use the same interneurons and the same statistics of feedback signaling (f(x_i) = g(x_i)) to distribute their values across the network. Then (27) becomes
dx_i/dt = -A x_i + (B - x_i) Σ_{k=1}^n [I_k + f(x_k)] D_{ki} - (x_i + C) Σ_{k=1}^n [I_k + f(x_k)] E_{ki},   (28)
i = 1, 2, ..., n. In such a network, the same criterion of uniformity is applied both to feedforward and to feedback signals. Both processes share the same structural scales. Correspondingly, in (28) as in (22), the single inequality

B Σ_{k=1}^n D_{ki} ≤ C Σ_{k=1}^n E_{ki}

suffices to suppress both uniform feedforward patterns and uniform feedback patterns.
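A quick numerical check of this property is sketched below. The inequality B Σ_k D_{ki} ≤ C Σ_k E_{ki} is taken as the feedforward noise suppression condition, and the coefficients are random illustrative values rather than anything from the text:

```python
import numpy as np

# Noise suppression check: normalize each column of D and E to sum to 1,
# so B * sum_k D_ki = B <= C = C * sum_k E_ki holds for every i.
rng = np.random.default_rng(0)
A, B, C, n = 1.0, 1.0, 2.0, 50
D = rng.random((n, n)); D /= D.sum(axis=0)
E = rng.random((n, n)); E /= E.sum(axis=0)
assert np.all(B * D.sum(axis=0) <= C * E.sum(axis=0))

results = []
for I0 in (0.1, 1.0, 10.0):            # uniform patterns of any intensity
    I = np.full(n, I0)
    P, N = I @ D, I @ E                # P_i = sum_k I_k D_ki, N_i = sum_k I_k E_ki
    x = (B * P - C * N) / (A + P + N)  # feedforward equilibrium of (22)
    results.append(float(x.max()))
print(results)                         # every entry is non-positive
```

Every uniform pattern, whatever its intensity, is driven to non-positive equilibrium activity, which is exactly the featural noise suppression property.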
28. Sigmoid Feedback Signals and Tuning

Another type of noise suppression, called signal noise suppression, is also needed for a feedback network to function properly. This is true because certain positive feedback functions f(w) can amplify even very small activities w into large activities. Noise
amplification due to positive feedback signaling can flood the network with internally generated noise capable of massively distorting the processing of feedforward inputs. Pathologies of feedback signaling have been suggested to cause certain seizures and hallucinations (Ellias and Grossberg, 1975; Grossberg, 1973; Kaczmarek and Babloyantz, 1977).

Figure 13. A sigmoid signal f(w) of cell activity w can suppress noise, contrast enhance suprathreshold activities, normalize total activity, and store the contrast enhanced and normalized pattern in short term memory within a suitably designed feedback competitive network.
In Grossberg (1973), I proved as part of the mathematical classification theory that the simplest physically plausible feedback signal which is capable of attenuating, rather than amplifying, small activities is a sigmoid, or S-shaped, signal function (Figure 13). Several remarks should be made about this result. The comment is sometimes made that you only need a signal threshold to prevent noise amplification (Figure 13). This is true, but insufficient, because a threshold signal function does not perform the same pattern transformation as a sigmoid signal function. For example, in a shunting network with a narrow on-center and a broad off-surround, a threshold signal chooses the population that receives the largest input for activity storage and suppresses the activities of all other populations. By contrast, a sigmoid signal implies the existence of a quenching threshold (QT). This means that the activities of populations whose initial activation is less than the QT are suppressed, whereas the activity pattern of populations whose initial activities exceed the QT is contrast enhanced before being stored. I identify this storage process with storage in short term memory (STM). In a network that possesses a QT, any operation which alters the QT can sensitize or desensitize the network's ability to store input data (Figure 14). This
(Figure 14 panels: pattern before storage; pattern after storage.)
tuning property is trivialized in a network that chooses the population which receives the largest input for STM storage. In either case, a nonlinear signal function is needed to prevent noise amplification in a feedback network. This fact presents a serious challenge to all linear feedforward models, such as Fourier and Gaussian models. A proper choice of signal function can be made by mathematically classifying how different signal functions transduce input patterns before they are stored in STM.

Figure 14. In Figures 10a and 10b, the same input pattern is differently transformed and stored in short term memory due to different settings of the network quenching threshold.

Consider, for example, the following special case of (28):
dx_i/dt = -A x_i + (B - x_i)[I_i + f(x_i)] - x_i [J_i + Σ_{k≠i} f(x_k)],   (29)

i = 1, 2, ..., n. In (29), the competitive feedback term Σ_{k≠i} f(x_k) describes long-range lateral inhibition, just like term Σ_{k≠i} I_k in the feedforward network (1). Network (29) strips away all extraneous factors to focus on the following issue. After an input pattern (I_1, I_2, ..., I_n, J_1, J_2, ..., J_n) delivered before time t = 0 establishes an initial pattern
(x_1(0), x_2(0), ..., x_n(0)) in the network's activities, how does feedback signaling within the network transform the initial pattern before it is stored in STM? This problem was solved in Grossberg (1973). Table 1 summarizes the main features of the solution. The function g(w) = w^(-1) f(w) is graphed in Table 1 because the property that determines the pattern transformation is whether g(w) is an increasing, constant, or decreasing function at prescribed activities w. For example, a linear f(w) = aw determines a constant g(w) = a; a slower-than-linear f(w) = aw(b + w)^(-1) determines a decreasing g(w) = a(b + w)^(-1); a faster-than-linear f(w) = aw^n, n > 1, determines an increasing g(w) = aw^(n-1); and a sigmoid signal function f(w) = aw^2(b + w^2)^(-1) determines a concave g(w) = aw(b + w^2)^(-1). Both linear and slower-than-linear signal functions amplify noise, and are therefore unsatisfactory. Faster-than-linear signal functions, such as power laws with powers greater than one, or threshold rules, suppress noise so vigorously that they make a choice. Sigmoid signal functions determine a QT by mixing together properties of the other types of signal functions.

Another important point is that the QT does not equal the turning point, or manifest threshold, of the sigmoid signal function. The QT depends on all of the parameters of the network. This fact must be understood to argue effectively that the breakdown of any of several mechanisms can induce pathological network properties, such as seizures or hallucinations, by causing the QT to assume abnormally small values. Similarly, an understanding of the factors that control the QT is needed to analyse possible attentional and cognitive mechanisms that can modulate how precise a binocular or bottom-up and top-down match has to be in order to generate fusion and resonance.
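The classification of signal functions by g(w) = f(w)/w can be checked directly. The sketch below uses the example functions quoted above with illustrative constants a and b:

```python
import numpy as np

# Classify signal functions f(w) by g(w) = f(w)/w, as in Table 1.
w = np.linspace(0.01, 5.0, 500)
a, b = 2.0, 1.0

g = {
    "linear":             (a * w) / w,                  # constant g
    "slower_than_linear": (a * w / (b + w)) / w,        # decreasing g
    "faster_than_linear": (a * w**2) / w,               # increasing g
    "sigmoid":            (a * w**2 / (b + w**2)) / w,  # g rises, then falls
}

dg = {k: np.diff(v) for k, v in g.items()}
print(np.allclose(dg["linear"], 0),
      np.all(dg["slower_than_linear"] < 0),
      np.all(dg["faster_than_linear"] > 0),
      bool(dg["sigmoid"][0] > 0 and dg["sigmoid"][-1] < 0))
# → True True True True
```

The sigmoid case is the interesting one: its g(w) increases at small activities (faster-than-linear behavior, which quenches noise) and decreases at large activities, which is what produces a quenching threshold rather than a pure choice.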
A formula for the QT of (29) has been computed when this network is in its short term memory mode (set all inputs I_i = J_i = 0). Let the feedback signal function f(w) satisfy

f(w) = C w g(w),   (30)
where C ≥ 0, g(w) is increasing if 0 ≤ w ≤ x^(1), and g(w) = 1 if x^(1) ≤ w ≤ B. Thus f(w) grows faster-than-linearly if 0 ≤ w ≤ x^(1), linearly if x^(1) ≤ w ≤ B, and attains a maximum value of BC at w = B within the activity interval from 0 to B. The values of f(w) at activities w ≥ B do not affect network dynamics because each x_i ≤ B in (29). It was proved in Grossberg (1973, pp. 355-359) that the QT of (29) is

QT = x^(1) [1 - A B^(-1) C^(-1)]^(-1).   (31)
By (31), the QT is not the manifest threshold of f(w), which occurs where g(w) is increasing. Rather, the QT depends on the transition activity x^(1) where f(w) changes from faster-than-linear to linear, upon the overall slope C of the signal function in the physiological range, upon the number B of excitable sites in each population, and upon the decay rate A. By (31), an increase in C causes a decrease in the QT. Increasing a shunting signal C that nonspecifically gates all the network's feedback signals can thereby facilitate STM storage. Such a decrease in the QT can facilitate binocular matching by weakening the criterion of how well matched two input patterns need to be in order for some network nodes to supraliminally reverberate in STM. It cannot be overemphasized that this and other desirable tuning properties of competitive feedback networks depend upon the existence of a nonlinear signal function f(w). For example, if f(w) is linear, then x^(1) = 0 in (30) and the QT = 0 by (31). Then all positive network activities, no matter how small, can be amplified and stored in STM, including activities due to internal cellular noise.
Signal function f(w)   g(w) = f(w)/w   Stored pattern                    Noise
linear                 constant        preserves input pattern           amplifies noise
slower-than-linear     decreasing      uniformizes input pattern         amplifies noise
faster-than-linear     increasing      chooses one population            quenches noise
sigmoid                concave         contrast enhances above the QT    quenches noise
Table 1. Influence of signal function f(w) on input pattern transformation and short term memory storage.
29. The Interdependence of Contrast Enhancement and Tuning
The existence of a QT suggests that the contrast enhancement of input patterns that is ubiquitous in the nervous system is not an end in itself (Ratliff, 1965). In feedback competitive shunting networks, contrast enhancement is a mathematical consequence of the signal noise suppression property. This fact is emphasized by the observation that linear feedback signals can perfectly store an input pattern's reflectances (in particular, they do not enhance the pattern) but only at the price of amplifying network noise (Table 1). Contrast enhancement by a feedback network in its suprathreshold activity range follows from noise suppression by the network in its subthreshold activity range. Contrast enhancement can intuitively be understood if a feedback competitive network possesses a normalization property like that of a feedforward competitive network (Section 21). If small activities are attenuated by noise suppression and total activity is approximately conserved due to normalization, then large activities will be enhanced. The simplest example of total activity normalization in a feedback competitive network follows. Consider network (29) in its short term memory mode (all inputs I_i = J_i = 0). Let x = Σ_{i=1}^n x_i be the total STM activity and let F = Σ_{i=1}^n f(x_i) be the total feedback signal. Sum over the index i in (29) to find that
dx/dt = -A x + (B - x) F.   (32)
To solve for the possible equilibrium activities of x(t), let dx/dt = 0 in (32). Then

A x (B - x)^(-1) = F.   (33)
By Table 1, a network with a faster-than-linear signal function chooses just one activity, say x_i, for storage in STM. Hence only one summand in F remains positive as time goes on, and its x_i(t) value approaches that of x(t). Thus (33) can be rewritten as
A x (B - x)^(-1) = f(x),   (34)
or equivalently

A (B - x)^(-1) = C g(x).   (35)
Equation (35) is independent of the number of active cells. Hence the total stored STM activity is independent of the number of active cells. The limiting equation (33) is analysed for other choices of signal function in Grossberg (1973).

30. Normalization and Multistability in a Feedback Competitive Network: A Limited Capacity Short Term Memory System
Thus suitably designed feedback competitive networks do possess a normalization property. Recall from Section 21 that in a feedforward competitive network, the total activity can increase with the total input intensity but is independent of the number of active cells. This is true only if the inhibitory feedforward interaction Σ_{k≠i} I_k in (1) is of long range across the network cells. If the strengths of the inhibitory pathways are weakened or fall off rapidly with distance, then the normalization property is weakened also, and saturation can set in at high input intensities. The same property tends to hold for the feedforward terms (B - x_i)J_i and -(x_i + C)K_i of (27).

The normalization property of a feedback competitive network is more subtle (Grossberg, 1973, 1981). If such a network is excited to suprathreshold activities and if the
exciting inputs are then terminated, then the total activity of the network can approach one of perhaps several positive equilibrium values, all of which tend to be independent of the number of active cells. Thus if the activity of one cell is for some reason increased, then the activities of other cells will decrease to satisfy the normalization constraint unless the system as a whole is attracted to a different equilibrium value. This limited capacity constraint on short term memory is an automatic property in our setting. It is postulated without a mechanistic explanation in various other accounts of short term memory processing (Raaijmakers and Shiffrin, 1981, p. 126). The existence of multistable equilibria in a competitive feedback network is illustrated by equation (35). When f(w) is a faster-than-linear signal function, both A(B - x)^(-1) and g(x) in (35) are increasing functions of x, 0 ≤ x ≤ B, and g(x) may be chosen so that these functions intersect at arbitrarily many values E_1, E_2, ... of x. Every other value in such a sequence is a possible stable equilibrium point of x, and the remaining values are unstable equilibrium points of x. By contrast, if g(w) is a concave function of w, as when f(w) is a sigmoid signal function, a tendency exists for the suprathreshold equilibria of x to be unique or closely clustered together. These assertions are mathematically characterized in Grossberg (1973).
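A minimal simulation of these claims, assuming the STM mode of network (29) with the faster-than-linear signal f(w) = w^2 (the parameters A, B and the initial patterns are illustrative): the network chooses the maximally activated cell, suppresses the rest, and stores a total activity given by the suprathreshold root of (34), independently of how many cells were initially active.

```python
import numpy as np

A, B = 1.0, 3.0
f = lambda w: w ** 2          # faster-than-linear signal: makes a choice

def stm(x0, T=40.0, dt=1e-3):
    # STM mode of network (29): all inputs I_i = J_i = 0,
    #   dx_i/dt = -A*x_i + (B - x_i)*f(x_i) - x_i * sum_{k != i} f(x_k).
    x = np.array(x0, float)
    for _ in range(int(T / dt)):
        F = f(x)
        x += dt * (-A * x + (B - x) * F - x * (F.sum() - F))
    return x

# Equilibrium (34), A*x/(B - x) = f(x), gives the stored winner activity
# x* = (B + sqrt(B**2 - 4*A)) / 2 for f(w) = w**2.
x_star = (B + np.sqrt(B**2 - 4 * A)) / 2

small = stm([0.5, 0.25, 0.15])
large = stm([0.5, 0.25, 0.15, 0.12, 0.10, 0.08])
# Normalization: the total stored activity is the same despite different
# numbers of initially active cells; all non-maximal activities quench.
print(small.sum(), large.sum(), x_star)
```

Both runs store essentially the same total activity, which is the limited capacity property: adding more initially active cells does not increase what the network can hold in STM.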
31. Propagation of Normalized Disinhibitory Cues

Just as in feedforward networks, the feedback normalization property is weakened if the inhibitory path strengths are chosen to decrease more rapidly with distance. Then the normalization property tends to hold among subsets of cells that lie within one bandwidth of the network's inhibitory structural scale. In particular, if some cell activities are enhanced by a given amount, then their neighbors will tend to be suppressed by a comparable amount. The neighbors of these neighbors will then be enhanced by a similar amount, and so on. In this way, a disinhibitory wave can propagate across a network in such a way that each crest of the wave inherits, or "remembers," the activity of the previous crest. This implication of the normalization property in a feedback network with finite structural scales will be important in my account of filling-in. Normalization within a structural scale also endows the network's activity patterns with constancy and contrast properties, as in the case of feedforward competitive networks (Section 24). In a feedback context, however, constancy and contrast properties can propagate far beyond the confines of a single structural scale because of normalized disinhibitory properties such as those Figure 15 depicts.
32. Structural versus Functional Scales

The propagation process depicted in Figure 15 needs to be understood in greater detail because it will be fundamental in all that follows. A good way to approach this understanding is to compare the reactions of competitive feedforward networks with those of competitive feedback networks to the same input patterns. Let us start with the simplest case. Choose C = 0 in (22) and (27). This prevents the noise suppression inequalities (26) from holding. Although feedforward and feedback inhibition are still operative, activities cannot be inhibited below zero in this case. Consequently, a uniform input pattern can be attenuated but not entirely suppressed. Choose a sigmoidal feedback signal function to prevent noise amplification, and thus to contrast-enhance the pattern of suprathreshold activities. These hypotheses enable us to study the main effects of feedback signaling unconfounded by the effect of noise suppression. What happens when we present a rectangular input pattern (Figure 15a) to both networks? Due to the feedforward inhibition in (22), the feedforward network enhances the edges of the rectangle and attenuates its interior (Figure 15b). By contrast, the feedback network elicits a regularly spaced series of excitatory peaks across the cells that receive the rectangular input (Figure 15c). This type of reaction occurs even if the
Figure 15. Reaction of a feedforward competitive network (b) and a feedback competitive network (c) to the same input pattern (a). Only the feedback network can activate the interior of the region which receives the input pattern with unattenuated activity.
input pattern is not contrast-enhanced by a feedforward inhibitory stage, as in Figure 15b, before feedback inhibition can act on the contrast-enhanced pattern. The pattern of Figure 15c is elicited even if the feedback acts directly on the rectangular input pattern. Parametric numerical studies of this type of disinhibitory feedback reaction are found in Ellias and Grossberg (1975). The spatial bandwidth between successive peaks in Figure 15c is called the functional scale of the feedback network. My first robust points are that a functional scale can exist in a feedback network but not in a feedforward network, and that, although the functional scale is related to the structural scale of a feedback network, the two scales are not identical. I will discuss the functional scale given C = 0 before reinstating the noise suppression inequalities (26) because the interaction between contrast enhancement and noise suppression in a feedback network is a much more subtle issue.

33. Disinhibitory Propagation of Functional Scaling From Boundaries to Interiors

To see how a functional scale develops, let us consider the network's response to the rectangular input pattern on a moment-to-moment basis. All the populations v_i that are excited by the rectangle initially receive equal inputs. All the activities x_i of these populations therefore start to grow at the same rate. This growth process continues until the feedback signals f(x_m)D_{mi} and g(x_m)E_{mi} can be registered by the other populations v_i. Populations v_i which are near the rectangle's boundary receive smaller total inhibitory signals Σ_{m=1}^n g(x_m)E_{mi} than populations which lie nearer to the rectangle's center, even when all the rectangle-excited activities x_m are equal. This is because the interaction strengths E_{mi} = E(|m - i|) are distance-dependent, and the boundary populations receive no inhibition from contiguous populations that lie outside the rectangle.
As a result of this inhibitory asymmetry, the activities x_i near the boundary start to grow faster than contiguous activities x_j nearer to the center. The inhibitory feedback signal g(x_i)E_{ij} from v_i to v_j begins to exceed the inhibitory feedback signal g(x_j)E_{ji} from v_j to v_i, because x_i > x_j and E_{ij} = E_{ji}. Thus although all individual feedback signals among rectangle-excited populations start out equal, they are soon differentiated due to a second-order effect whereby the boundary bias in the spatial distribution of the total inhibitory feedback signals is mediated by the activities of individual populations. As the interior activities x_j get differentially inhibited, their inhibitory signals g(x_j)E_{jk} to populations v_k which lie even deeper within the rectangle's interior become smaller. Now the total pattern of inputs plus feedback signals is no longer uniform across the populations v_j and v_k. The populations v_k are favored. Contrast enhancement bootstraps their activities x_k into larger values. Now these populations can more strongly inhibit neighboring populations that lie even deeper into the rectangle's interior, and the process continues in this fashion. The boundary asymmetry in the total inhibitory feedback signals hereby propagates ever deeper into the rectangle's interior by a process of distance-dependent disinhibition and contrast enhancement until all the rectangle-excited populations are filled-in by a series of regularly spaced activity peaks as in Figure 15c.

34. Quantization of Functional Scales: Hysteresis and Uncertainty
As I mentioned in Section 32, two distinct types of spatial scales can be distinguished in a feedback network. The structural scales D(j) and E(j) describe how rapidly the network's feedback interaction coefficients decrease as a function of distance. The functional scale describes the spatial wavelength of the disinhibitory peaks that arise in response to prescribed input patterns. Although these two types of scale are related, they differ in fundamental ways.
They are related because an increase in a network's structural scales can cause an increase in the functional scale with which it fills-in a given input pattern, as in the numerical studies of Ellias and Grossberg (1975). This is due to two effects acting together. A slower decrease of D(j) with increasing distance j can increase the number of contiguous populations that pool excitatory feedback. This effect can broaden the peaks in the activity pattern. A slower decrease of E(j) with increasing distance j can increase the number of contiguous populations which can be inhibited by an activity peak. This effect can broaden the troughs in the activity pattern. This relationship between structural and functional scales partially supports the intuition that visual processing includes a spatial frequency analysis of visual data (Graham, 1981; Robson, 1975), because if several feedback networks with distinct structural scales received the same input pattern, then they would each generate distinct functional scales, such that smaller structural scales tended to generate smaller functional scales. However, the functional scale does not equal the structural scale, and its properties represent a radical departure from feedforward linear ideas. The most important of these differences can be summarized as follows. The functional scale is a quantized property of the interaction between the network and global features of an input pattern, such as its length. Unlike a structural scale, a functional scale is not just a property of the network. Nor is it just a property of the input pattern. The interaction between pattern and network literally creates the functional scale. The quantized nature of this interaction is easy to state because it is so fundamental. (The reader who knows some quantum theory, notably Bohr's original model of the hydrogen atom, might find it instructive to compare the two types of quantization.)
The length L of a rectangular input pattern might equal a nonintegral multiple of a network's structural scales, but obviously there can only exist an integral number of disinhibitory peaks in the activity pattern induced by the rectangle. The feedback network therefore quantizes its activity in a way that depends on the global structure of the input pattern. The functional scales must change to satisfy the quantum property as distinct patterns perturb the network, even though the network's structural scales remain fixed. For example, rectangular inputs of length L, L + ΔL, L + 2ΔL, ..., L + MΔL might all induce M_L peaks in the network's activity pattern. Not until a rectangle of length L + (M + 1)ΔL is presented might the network respond with M_L + 1 peaks. This length quantization property suggests a new reason why a network, and perception, can exhibit hysteresis as an input pattern is slowly deformed through time. This hysteresis property can contribute to, but is not identical with, the hysteresis that is due to persistent binocular matching as a result of positive feedback signaling when two monocular patterns are slowly deformed after first being binocularly matched (Fender and Julesz, 1967; Grossberg, 1980b). Another consequence of the quantization property is that the network cannot distinguish certain differences between input patterns. Quantization implies a certain degree of perceptual uncertainty.
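The filling-in dynamics of Sections 32 and 33 can be sketched numerically. The fragment below integrates a network of the form (28) with C = 0, Gaussian structural scales, and a sigmoid feedback signal; all parameter values are illustrative guesses rather than the settings used by Ellias and Grossberg (1975), so the exact number of interior peaks it reports should not be taken as definitive.

```python
import numpy as np

def kernel(sigma, half):
    # Normalized Gaussian structural scale of width sigma.
    j = np.arange(-half, half + 1)
    k = np.exp(-(j / sigma) ** 2)
    return k / k.sum()

def feedback_response(I, sigma_D=1.5, sigma_E=6.0, A=1.0, B=2.0,
                      T=40.0, dt=0.005):
    # Feedback competitive network (28) with C = 0, a sigmoid feedback
    # signal f(w), and an inhibitory scale broader than the excitatory one.
    n = len(I)
    D = kernel(sigma_D, 4 * int(sigma_D) + 4)
    E = kernel(sigma_E, 4 * int(sigma_E) + 4)
    f = lambda w: 5.0 * w**2 / (0.25 + w**2)
    conv = lambda s, K: np.convolve(s, K, mode="same")
    x = np.zeros(n)
    for _ in range(int(T / dt)):
        exc = conv(I + f(x), D)        # pooled feedforward + feedback excitation
        inh = conv(I + f(x), E)        # pooled feedforward + feedback inhibition
        x += dt * (-A * x + (B - x) * exc - x * inh)
    return x

n = 120
I = np.zeros(n); I[30:90] = 1.0        # rectangular input pattern
x = feedback_response(I)
interior = x[40:80]
peaks = int(np.sum((interior[1:-1] > interior[:-2]) &
                   (interior[1:-1] > interior[2:])))
# Unlike the feedforward case, the rectangle's interior remains active;
# for suitable parameters it organizes into an integral number of
# regularly spaced peaks (the functional scale).
print(interior.mean(), peaks)
```

Rerunning such a simulation with slowly growing input lengths is one way to observe the integer jumps in peak count, and hence the hysteresis, described above.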
35. Phantoms
The reader might by now have entertained the following objection to these ideas. If percepts really involve spatially regular patterned responses even to uniform input regions, then why don't we easily see these patterns? I suggest that we sometimes do, as when spatially periodic visual phantoms can be seen superimposed upon otherwise uniform, and surprisingly large, regions (Smith and Over, 1979; Tynan and Sekuler, 1975; Weisstein, Maguire, and Berbaum, 1976). The disinhibitory filling-in process clarifies how these phantoms can cover regions which excite a retinal area much larger than a single structural scale. I suggest that we do not see phantoms more often for three related reasons. During day-to-day visual experience, several functional scales are often simultaneously active. The peaks of higher spatial frequency functional scales can overlay the
spaces between lower spatial frequency functional scales. Retinal tremor and other eye movements can randomize the spatial phases of, and thereby spatially smooth, the higher frequency scales across the lower frequency scales through time. Even within a single structural scale, if the boundary of an input pattern curves in two dimensions, then the disinhibitory wavelets can cause interference patterns as they propagate into the interior of the activity pattern along rays perpendicular to each boundary element. These interference patterns can also obscure the visibility of a functional scale. Such considerations clarify why experiments in which visual phantoms are easily seen usually use patterns that selectively resonate with a low spatial frequency structural scale that varies in only one spatial dimension. This suggestion that filling-in by functional scales may subserve phantoms does not imply that the perceived wavelength of a phantom is commensurate with any structural scale of the underlying network. Rather I suggest that once a pattern of functional wavelets is established by a boundary figure, it can quickly propagate by a resonant filling-in reaction into the interior of the figure if the shape of the interior does not define functional barriers to filling-in (Section 40). An important issue concerning the perception of phantoms is whether they are, of necessity, perceivable only if moving displays are used, or whether the primary effect of moving a properly chosen spatial frequency at a properly chosen velocity is to selectively suppress all but the perceived spatial wavelength via noise suppression. The latter interpretation is compatible with an explanation of spatial frequency adaptation using properties of shunting feedback networks (Grossberg, 1980b, Section 12). A possible experimental approach to seeing functional scales using a stationary display takes the form of a two-stage experiment.
First adapt out the high spatial frequencies using a spatial frequency adaptation paradigm. Then fixate a bounded display which is large enough and is shaped properly to strongly activate a low spatial frequency scale in one dimension, and which possesses a uniform interior that can energize periodic network activity.

36. Functional Length and Emmert's Law

Two more important properties of functional scales are related to length and lightness estimates. The functional wavelength defines a length scale. To understand what I mean by this, let a rectangular input pattern of fixed length L excite networks with different structural scales. I hypothesize that the apparent length of the rectangle in each network will depend on the functional scale generated therein. Since a broader structural scale induces a broader functional scale, the activity pattern in such a network will contain fewer active functional wavelengths. I suggest that this property is associated with an impression of a shorter object, despite the fact that L is fixed. The reader might object that this property implies too much. Why can a monocularly viewed object have ambiguous length if it can excite a functional scale? I suggest that under certain, but not all, monocular viewing conditions, an object may excite all the structural scales of the observer. When this happens, the object's length may seem ambiguous. I will also suggest in Section 39 how binocular viewing of a nearby object can selectively excite structural scales which subserve large functional scales, thereby making the object look shorter. By contrast, binocular viewing of a far-away object can selectively excite structural scales which subserve small functional scales, thereby making the object look longer.
Thus the combination of binocular selection of structural scales that vary inversely with an object's distance, along with the inverse variation of length estimates with functional scales, may contribute to an explanation of Emmert's law. This view of the correlation between perceived length and perceived distance does not imply that the relationship should be veridical, and indeed sometimes it is not (Hagen and Teghtsoonian, 1981), for the following reasons. The functional scale is a quantized collective property of a nonlinear feedback network rather than a linear ruler. The selection of which structural scales will resonate to a given object and of
The Quantized Geometry of Visual Space
which functional scales will be generated within these structural scales depends on the interaction with the object in different ways; for one, the choice of structural scale does not depend on a filling-in reaction. These remarks indicate a sense in which functional scales define an "intrinsic metric," which is independent of cognitive influences but on whose shoulders correlations with motor maps, adaptive chunking, and learned feedback expectancy computations can build (Grossberg, 1978e, 1980b). This intrinsic metric helps to explain how monocular scaling effects, such as those described in Section 5, can occur. Once the relevance of the functional scale concept to metrical estimates is broached, one can begin to appreciate how a dynamic "tension" or "force field" or "curved metric" can be generated whereby objects which excite one part of the visual field can influence the perception of objects at distant visual positions (Koffka, 1935; Watson, 1978). I believe that the functional scale concept explicates a notion of dynamic field interactions that escapes the difficulties faced by the Gestaltists in their pioneering efforts to explain global visual interactions.

37. Functional Lightness and the Cornsweet Effect
The functional scale concept clarifies how object boundaries can determine the lightness of object interiors, as in the Cornsweet effect. Other things being equal, a more intense pattern edge will cause larger inhibitory troughs around itself. The inhibitory trough which is interior to the pattern will thereby create a larger disinhibitory peak due to pattern normalization within the structural scale. This disinhibitory process continues to penetrate the pattern in such a way that all the interior peak heights are influenced by the boundary peak height because each inhibitory trough "remembers" the previous peak height. The sensitivity of filled-in interior peak size to boundary peak size helps to explain the Cornsweet effect (Section 11). Crucial to this type of explanation is the idea that the disinhibitory filling-in process feeds off the input intensity within the object interior. The reader can now better appreciate why I set C = 0 to start off my exposition. Suppose that a feedforward inhibitory stage acts on an input pattern before the feedback network responds to the transformed pattern. Let the feedforward stage use its noise suppression property to convert a rectangular input pattern into an edge reaction that suppresses the rectangle's interior (Figure 15b). Then let the feedback network transform the edge-enhanced pattern. Where does the feedback network get the input energy to fill-in off the edge reactions into the pattern's interior if the interior activities have already been suppressed? How does the feedback network know that the original input pattern had an interior at all? This is the technical version of the "To Have Your Edge and Fill-In Too" dilemma that I raised in Section 17. We are now much closer to an answer.
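The feedforward noise-suppression step just described can be illustrated with a small numerical sketch. The shunting equilibrium below uses a standard closed form for a feedforward on-center off-surround network; the Gaussian kernels and all parameter values are my own illustrative assumptions, not those of the chapter's equation (27).

```python
import numpy as np

def gaussian(sigma, radius=25):
    """Normalized Gaussian kernel of odd length."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()

def feedforward_competition(I, sigma_on=1.0, sigma_off=4.0, A=1.0, B=1.0):
    """Equilibrium of a feedforward shunting on-center off-surround network:
    x_i = B * (on_i - off_i) / (A + on_i + off_i)."""
    on = np.convolve(I, gaussian(sigma_on), mode="same")
    off = np.convolve(I, gaussian(sigma_off), mode="same")
    return B * (on - off) / (A + on + off)

# A rectangular input pattern: uniform interior, sharp boundaries.
I = np.zeros(100)
I[30:70] = 1.0
x = feedforward_competition(I)
# x has positive peaks just inside the boundaries, inhibitory troughs just
# outside them, and a suppressed (near-zero) interior: an "edge reaction"
# in which the rectangle's interior has been annihilated by noise suppression.
```

Running this sketch shows precisely the dilemma of the text: the transformed pattern retains the boundaries but carries almost no activity in the interior from which a later filling-in stage could feed.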
38. The Monocular Length-Luminance Effect

Before suggesting a resolution of this dilemma, I will note a property of functional scales which seems to be reflected in various data, such as the Wallach and Adams (1954) experiment, but seems not to have been studied directly. This property concerns changes in functional scaling that are due to changes in the luminance of an input pattern. To illustrate the phenomenon in its simplest form, I will consider qualitatively the response of a competitive feedback network such as (27) to a rectangular input pattern of increasing luminance. In Figure 16a the rectangle intensity is too low to elicit any suprathreshold reaction. In Figure 16b a higher rectangle intensity fills-in the region with a single interior peak and two boundary peaks. At the still higher intensity of Figure 16c, two interior peaks emerge. At successively higher intensities, more peaks emerge until the intensity gets so high that a smaller number of peaks again occurs (Figure 16d). This progressive increase followed by a progressive decrease in the number of interior peaks has been found in many computer runs (Cohen and Grossberg,
1983a; Ellias and Grossberg, 1975). It reflects the network’s increasing sensitivity at higher input intensities until such high intensities are reached that the network starts to saturate and is gradually desensitized. The quantitative change in the relative number of peaks is not so dramatic as Figure 16 suggests. If we assume that the total area under an activity pattern within a unit spatial region estimates the lightness of the pattern, then it is tempting to interpret the above result as a perceived lightness change when the luminance of an object, but not of its background, is parametrically increased. This interpretation cannot be made without extreme caution, however, because the functional scaling change within one monocular representation may alter the ability of this representation to match the other monocular representation within a given structural scale. In other words, by replacing spatially homogeneous regions in a figure by spatially patterned functional scales, we can think about whether these patterns match or mismatch under prescribed conditions. A change in the scales which are capable of binocular matching implies a change in the scales which can energetically resonate. A complex change in perceived brightness, depth, and length may hereby be caused. Even during conditions of monocular viewing, the phenomenon depicted by Figure 16 has challenging implications. Consider an input pattern which is a figure against a ground with nonzero reflectance. Let the entire pattern be illuminated at successively higher luminances. Within the energy region of brightness constancy, the balance between the functional scales of figure and ground can be maintained. At extreme luminances, however, the sensitivity changes illustrated in Figure 16 can take effect and may cause a coordinated change in both perceived brightness and perceived length. 
If the functional wavelength, as opposed to a more global estimate of the total activated region within a structural scale, influences length judgments, then a small length reduction may be detectable at both low and high luminances. This effect should at the present time be thought of as an intriguing possibility rather than as a necessary prediction of the theory because, in realistic binocular networks, interactive effects between monocular and binocular cells and between multiple structural scales may alter the properties of Figure 16.
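Tabulating the behavior summarized in Figure 16 requires some way of counting interior peaks in a simulated activity pattern. The small utility below is my own illustrative sketch of such a measurement, not the simulation code of Cohen and Grossberg (1983a); its strict-local-maximum convention and threshold parameter are assumptions.

```python
def count_interior_peaks(activity, threshold=0.0):
    """Count suprathreshold strict local maxima in a 1-D activity pattern.
    Applied to responses like those of Figure 16, the count would first
    rise and then fall as the rectangle's luminance grows."""
    peaks = 0
    for i in range(1, len(activity) - 1):
        if (activity[i] > threshold
                and activity[i] > activity[i - 1]
                and activity[i] > activity[i + 1]):
            peaks += 1
    return peaks
```

For example, `count_interior_peaks([0, 1, 0, 2, 0, 3, 0])` counts three peaks, and raising `threshold` excludes the weaker ones, mimicking a suprathreshold criterion.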
39. Spreading FIRE: Pooled Binocular Edges, False Matches, Allelotropia, Binocular Brightness Summation, and Binocular Length Scaling

Now that the concept of a functional scale in a competitive feedback network is clearly in view, I can reintroduce the noise suppression inequalities (26) to show how the joint action of noise suppression and functional scaling can generate a filling-in resonant exchange (FIRE) that is sensitive to binocular properties such as disparity. Within the framework I have built up, starting a FIRE capable of global effects on perceived depth, form, and lightness is intuitively simple. I will nonetheless describe the main ideas in mechanistic terms, since if certain constraints are not obeyed, the FIRE will not ignite (Cohen and Grossberg, 1983a). I will also restrict my attention to the simplest, or minimal, network which exhibits the properties that I seek. It will be apparent that the same types of properties can be obtained in a wide variety of related network designs. The equations that have been used to simulate such a FIRE numerically are described in the Appendix. First I will restrict attention to the case of a single structural scale, which is defined by excitatory and inhibitory kernels D(j) and E(j), respectively. Three main intuitions go into the construction. Proposition I: Only input pattern data which are spatially nonuniform with respect to a structural scale are informative (Section 18). Proposition II: The ease with which two monocular input patterns of fixed disparity can be binocularly fused depends on the spatial frequencies in the patterns (Sections 6 and 8). This dependence is not, however, a direct one. It is mediated by statistical
Figure 16. Response of a feedback competitive network to a rectangle of increasing luminance on a black background.
preprocessing of the input patterns using nonlinear cross-correlations, as in Section 25. Henceforth when I discuss an "edge," I will mean a statistical edge rather than an edge within the input pattern itself. Proposition III: Filling-in a functional scale can only be achieved if there exists an input source on which the FIRE can feed (Section 37). To fix ideas, let a rectangular input pattern idealize a preprocessed segment of a scene. The interior of the rectangle idealizes an ambiguous region and the boundaries of the rectangle idealize informative regions of the scene with respect to the structural scale in question. A copy of the rectangular input pattern is processed by each monocular representation. Since the scene is viewed from a distance, the two rectangular inputs will excite disparate positions within their respective monocular representations (Figure 17a). In general, the more peripheral boundary with respect to the foveal fixation point will correspond to a larger disparity. Proposition I suggests that the rectangles are passed through a feedforward competitive network capable of noise suppression to extract their statistical edges (Figure 17b). Keep in mind that these edges are not zero-crossings. Rather, their breadth is commensurate with the bandwidth of the excitatory kernel D(j) (Section 25). This property is used to realize Proposition II as follows. Suppose that the edge-enhanced monocular patterns are matched at binocular cells, where I mean matching in the sense of Sections 22 and 24. Because these networks possess distance-dependent structural scales, the suppressive effects of mismatch are restricted to the spatial wavelength of an inhibitory scale, E(j), rather than involving the entire network. Because the edges are statistically defined, the concepts of match and mismatch refer to the degree of coherence between monocular statistics rather than to comparisons of individual edges. Three possible cases can occur.
The case of primary interest is the one in which the two monocular edge reactions overlap enough to fall within each other's excitatory on-center D(j). This will happen, for example, if the disparity between the edge centers does not exceed half the width of the excitatory on-center. Marr and Poggio (1979) have pointed out that, within this range, the probability of false matches is very small, in fact less than 5%. Within the zero-crossing formalism of Marr and Poggio (1979), however, the decision to restrict matches to this distance is not part of their definition of an edge. In a theory in which the edge computation retains its spatial scale at a topographically organized binocular matching interface, this restriction is automatic. If this matching constraint is satisfied, then a pooled binocular edge is formed that is centered between the loci of the monocular edges (Figure 17c). See Ellias and Grossberg (1975, Figure 25) for an example of this shift phenomenon. The shift in position of a pooled binocular edge also has no analog in the Marr and Poggio (1979) theory. I suggest that this binocularly-driven shift is the basis for allelotropia (Section 10). If the two distal edges fall outside their respective on-centers, but within their off-surrounds, then they will annihilate each other if they enjoy identical parameters, or one will suppress the other by contrast enhancement if it has a sufficient energetic advantage. This unstable competition will be used to suggest an explanation of binocular rivalry in Section 44. Finally, the two edges might fall entirely outside each other's receptive fields. Then each can be registered at the binocular cells, albeit with less intensity than a pooled binocular edge, due to equations (2) and (4). A double image can then occur. I consider the dependence of intensity on matching to be the basis for binocular brightness summation (Section 13).
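The three cases just enumerated can be summarized in a small decision sketch. The half-width fusion criterion and the off-surround cutoff follow the text; the function, its name, and the numerical widths used below are otherwise hypothetical.

```python
def classify_match(left_pos, right_pos, on_center_width, off_surround_width):
    """Classify a pair of monocular edge loci into the three binocular cases."""
    disparity = abs(left_pos - right_pos)
    if disparity <= on_center_width / 2.0:
        # Case 1: fusion. A pooled binocular edge forms centered between
        # the monocular loci (the shift suggested as the basis of allelotropia).
        return "fused", (left_pos + right_pos) / 2.0
    elif disparity <= off_surround_width:
        # Case 2: the edges fall in each other's off-surrounds and compete,
        # the unstable situation later invoked for binocular rivalry.
        return "rivalrous", None
    else:
        # Case 3: outside each other's receptive fields; both are registered,
        # more weakly, and a double image can occur.
        return "double", None
```

For instance, with an on-center width of 6 units and an off-surround width of 20, edges at positions 10 and 12 fuse into a pooled edge at 11, edges at 10 and 20 are rivalrous, and edges at 10 and 40 yield a double image.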
The net effect of the above operations is to generate two amplified pooled binocular edges at the boundaries of an ambiguous region if the spatial scale of the network can match the boundary disparities of the region. Networks which cannot make this match are energetically attenuated. Having used disparity (and thus depth) information to
Figure 17. After the two monocular patterns (a) are passed through a feedforward competitive network to extract their nonuniform data with respect to the network’s structural scales (b), the filtered patterns are topographically matched to allow pooled binocular edges to form (c) if the relationship between disparity and monocular functional scaling is favorable.
Figure 18. Monocular processing of patterns through feedforward competitive networks is followed by binocular matching of the two transformed monocular patterns. The pooled binocular edges are then fed back to both monocular representations at a processing stage where they can feed off monocular activity to start a FIRE.

select suitable scales and to amplify the informative data within these scales, we must face the filling-in dilemma posed by Proposition III. How do the binocular cells know how to fill-in between the pooled binocular edges to recover a binocular representation of the entire pattern? Where do these cells get the input energy to spread the FIRE? In other words, having used noise suppression to achieve selective binocular matching, how do we bypass noise suppression to recover the form of the object? If we restrict ourselves to the minimal solution of this problem, then one answer is strongly suggested. Signals from the pooled binocular edge are topographically fed back to the processing stage at which the rectangular input is registered. This is the stage just before the feedforward competitive step that extracts the monocular edges (Figure 18). Several important conclusions follow immediately from this suggestion:
1) The network becomes a feedback competitive network in which binocular matching modulates the patterning of monocular representations.
2) If filling-in can occur, a functional scale is defined within this feedback competitive network. A larger disparity between monocular patterns resonates best with a larger structural scale, which generates a larger functional scale. Thus perceived length depends on perceived depth.
3) The activity pattern across the functional scale is constrained by the network's normalization property. Thus perceived depth influences perceived brightness, notably the lightnesses of objects which seem to lie at the same depth.
In short, if we can overcome the filling-in dilemma at all within feedback competitive shunting networks, then known dependencies between perceived depth, length, form, and lightness begin to emerge as natural consequences. I know of no other theoretical approach in which this is true. It remains to indicate how the FIRE can spread despite the action of the noise suppression inequalities (26). The main problem to avoid is summarized in Figure 19. Figure 19a depicts a pooled binocular edge. When this edge adds onto the rectangular pattern, we find Figure 19b. Here there is a hump on the rectangle. If this pattern is then fed through the feedforward competitive network, a pattern such as that in Figure 19c is produced. In other words, the FIRE is quenched. This is because the noise suppression property of feedforward competition drives all activities outside the hump to subthreshold values before the positive feedback loops in the total network can enhance any of these activities. I have exposed the reader to this difficulty to emphasize a crucial property of pooled binocular edges. If C > 0 in (27), then an inhibitory trough surrounds the edge (Figure 19d). (If C is too small to yield a significant trough, then the pooled edge must be passed through another stage of feedforward competition.)
When the edge in Figure 19d is added to the rectangular input by a competitive interaction, the pattern in Figure 19e is generated. The region of the hump is no longer uniform. The uniform region is separated from the hump by a trough whose width is commensurate with the inhibitory scale E(j). When this pattern is passed through the feedforward competition, Figure 19f is generated. The non-uniform region has been contrast-enhanced into a second hump, whereas the remaining uniform region has been annihilated by noise suppression. Now the pattern is fed back to the rectangular pattern stage and the cycle repeats itself. A third hump is thereby generated, and the FIRE rapidly spreads, or "develops," across the entire rectangular region at a rate commensurate with the time it takes to feed a signal through the feedback loop. Since the cells which are excited by the rectangle are already processing the input pattern when the FIRE begins, it can now spread very quickly. Some further remarks need to be made to clarify how the edge in Figure 19d adds to the rectangular input pattern. The inhibited regions in the edge can generate signals only if they excite off-cells whose signals have a net inhibitory effect on the rectangle. This option is not acceptable because mismatched patterns at the binocular matching cells would then elicit FIREs via off-cell signaling. Rather, the edge activities in Figure 19d are rectified when they generate output signals. These signals are distributed by a competitive (on-center off-surround) anatomy whose net effect is to add a signal pattern of the shape in Figure 19d to the rectangular input pattern. In other words, if all signaling stages of Figure 18 are chosen to be competitive to overcome the noise-saturation dilemma (Section 21), then the desired pattern transformations are achieved. This hypothesis does not necessarily imply that the pathways between the processing stages are both excitatory and inhibitory.
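One cycle of the exchange just described (rectify the current activity, distribute it back onto the input stage, and re-apply feedforward shunting competition) can be sketched numerically. This is a toy sketch under my own illustrative assumptions: the Gaussian kernels and parameters are not those of the chapter's equations, and the binocular matching stage of Figure 18 is omitted.

```python
import numpy as np

def kernel(sigma, radius=12):
    """Normalized Gaussian kernel of odd length."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()

def fire_step(activity, input_pattern, sigma_on=1.0, sigma_off=4.0):
    """One feedback cycle: rectified activity is added to the input
    pattern, and the sum is passed through shunting competition."""
    feedback = np.maximum(activity, 0.0)      # rectified output signals
    total = input_pattern + feedback          # the FIRE feeds off the input
    on = np.convolve(total, kernel(sigma_on), mode="same")
    off = np.convolve(total, kernel(sigma_off), mode="same")
    return (on - off) / (1.0 + on + off)      # shunting equilibrium

# Iterate the cycle on a rectangular input; in the text's full network the
# suprathreshold pattern "develops" across the rectangle at a rate set by
# the time to feed a signal through the feedback loop.
I = np.zeros(80)
I[25:55] = 1.0
x = np.zeros_like(I)
for _ in range(10):
    x = fire_step(x, I)
```

Note how the rectification in `fire_step` realizes the text's constraint that inhibited regions of the edge generate no off-cell signals back to the input stage.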
Purely excitatory pathways can activate each level's internal on-center off-surround interneurons to achieve the desired effect. From this perspective, one can see that the two monocular edge-extraction stages and the binocular matching stage at the top of Figure 18 can all be lumped into a single binocular edge matching stage. If this is done, then the mechanism for generating FIREs seems elementary indeed. If competitive signaling is used to binocularly match monocular representations, then a filling-in reaction will spontaneously occur within the matched scales.

Figure 19. The FIRE is quenched in (a)-(c) because there exists no nonuniform region off the pooled binocular edge which can be amplified by the feedback exchange. In (d)-(f), the inhibitory troughs of the edges enable the FIRE to propagate.

Figure 20. An antagonistic rebound, or off-reaction, in a gated dipole can be caused either by rapid offset of a phasic input or rapid onset of a nonspecific arousal input. As in Figure 21, function J(t) represents a phasic input, function I(t) represents a nonspecific arousal input, function x5(t) represents the potential, or activity, of the on-channel's final stage, and function x6(t) represents the potential, or activity, of the off-channel's final stage. (From Grossberg 1982c.)

40. Figure-Ground Separation by Filling-In Barriers
Now that we have seen how a FIRE can spread, it remains to say how it can be prevented from inappropriately covering the entire visual field. A case in point is the Julesz (1971) 5% solution of dots on a white background in the stereogram of Section 9. How do the different binocular disparities of the dots in the "figure" and "ground" regions impart distinct depths to the white backgrounds of these two regions? This is an issue because the same ambiguous white background fills both regions. I suggest that the boundary disparities of the "figure" dots can form pooled binocular edges in a spatial scale different from the one that best pools binocular edges in the "ground" scale. At the binocular cells of the "ground" scale, mismatch of the monocular edges of the "figure" can produce an inhibitory trough whose breadth is commensurate with two inhibitory structural wavelengths. The spreading FIRE cannot cross a filling-in barrier (FIB) any more than a forest fire can cross a sufficiently broad trench. Thus, within a scale whose pooled binocular edges can feed off the ambiguous background activity, FIREs can spread in all directions until they run into FIBs. This mechanism does not imply that a FIRE can rush through all spaces between adjacent FIBs, because the functional scale is a coherent dynamic entity that will collapse if the spaces between FIBs, relative to the functional scale, are sufficiently small. Thus a random placement of dots may, other things being equal, form better FIBs than a deterministic placement which permits a coherent flow of FIRE to run between rows of FIBs. A rigorous study of the interaction between (passive) texture statistics and (coherent) functional scaling may shed further light on the discriminability of figure-ground separation. The important pioneering studies of Julesz (1978) and his colleagues on texture statistics have thus far been restricted to conclusions which can be drawn from (passive) correlational estimates.

41. The Principle of Scale Equivalence and the Curvature of Activity-Scale Correlations: Fechner's Paradox, Equidistance Tendency, and Depth Without Disparity
My description of how a FIRE can be spread and blocked sheds light on several types of data from a unified perspective. Suppose that, as in Section 36, an ambiguous monocular view of an object excites all structural scales due to self-matching of the monocular data at each scale's binocular cells. Suppose that a binocular view of an object can selectively excite some structural scales more than others due to the relationship between matching and activity amplification (Section 22). These assumptions are compatible with data concerning the simultaneous activation of several spatial scales at each position in the visual field during binocular viewing (Graham, Robson, and Nachmias, 1978; Robson and Graham, 1981), with data on binocular brightness summation (Blake, Sloane, and Fox, 1981; Cogan, Silverman, and Sekuler, 1982), and with data concerning the simultaneous visibility of rivalrous patterns and a depth percept (Kaufman, 1974; Kulikowski, 1978). The suggestion that a depth percept can be generated by a selective amplification of activity in some scales above others also allows us to understand: (1) why a monocular view does not lose its filling-in capability or other resonant properties (since it can excite some structural scales via self-matches); (2) why a monocular view need not have greater visual sensitivity than a binocular view, despite the possibility of activating several scales due to self-matches (since a binocular view may excite its scales more selectively and with greater intensity due to binocular brightness summation); (3) why a monocular view may look brighter than a binocular view (Fechner's paradox) (since although the matched scales during a binocular view are amplified, so that activity lost by binocular mismatch in some scales is partially gained by binocular
summation in other scales, the monocular view may excite more scales by self-matches); and (4) yet why a monocular view may have a more ambiguous depth than a binocular view (since a given scene may fail to selectively amplify some scales more than others due to its lack of spatial gradients (Gibson, 1950)). The selective amplification that enhances a depth percept is sometimes due to the selectivity of disparity matches, but it need not be. The experiment of Kaufman, Bacon, and Barroso (1973) shows that depth can be altered, even when no absolute disparities exist, by varying the relative brightnesses of monocular pattern features. The present framework interprets this result as an external manipulation of the energies that cause selective amplification of certain scales above others, and as one that does so in such a way that the preferred scales are altered as the experimental inputs are varied. The same ideas indicate how a combination of monocular motion cues and/or motion-dependent input energy changes can enhance a depth percept. Motions that selectively enhance delayed self-matches in certain scales above others can contribute to a depth percept. All of these remarks need quantitative implementation via a major program of computer simulations. The simulations that have already been completed do, however, support the mathematical, numerical, and qualitative results on which the theory is founded (Cohen and Grossberg, 1983a). Although this program is not yet complete, the qualitative concepts indicate how to proceed and how various data that are not discussed in a unified way by competing theories may be explained in a unified fashion. The idea that depth can be controlled by the energy balance across several active scales overcomes a problem in Sperling-Dev models. Due to the competition between depth planes in these models, only one depth plane at a time can be active in each spatial location.
However, there can exist only finitely many depth planes, both on general grounds due to the finite dimension of neural networks, and on specific grounds due to inferences from spatial frequency data wherein only a few scales are needed to interpret the data (Graham, 1981; Wilson and Bergen, 1979). Why, then, do we not perceive just three or four different depths, one depth corresponding to activity in each depth plane? Why does the depth not seem to jump discretely from scale to scale as an object approaches us? Depth seems to change continuously as an object approaches us despite the existence of only a few structural scales. The idea that the energy balance across functional scales changes continuously as the object approaches, and thereby continuously alters the depth percept, provides an intuitively appealing answer. This idea also mechanistically explicates the popular thesis that the workings of spatial scales may be analogous to the workings of color vision, wherein the pattern of activity across a few cone receptor types forms the substrate for color percepts. The present framework suggests an explanation of Gogel's equidistance tendency (Section 4). Suppose that a monocularly viewed object of ambiguous depth excites most, or all, of its structural scales through self-matches. Let a nearby binocularly viewed object selectively amplify the scales with which it forms the best pooled binocular edges. Let a FIRE spread with the greatest vigor through these amplified scales. When the FIRE reaches the monocular self-matches within its scale, it can amplify the activity of these matches, much as occurs during binocular brightness summation. This shift in the energy balance across the scales which represent the monocularly viewed object imparts it with depthfulness. This conclusion follows (and this is the crucial point) even though no new disparity information is produced within the self-matches by the FIRE. Only an energy shift occurs.
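A minimal sketch of this energy-balance read-out can make the continuity argument concrete. Assume, purely for illustration, that each structural scale signals a preferred depth and that the percept is the activity-weighted mean across scales; the function and its parameters are hypothetical, not part of the theory's formal equations.

```python
def depth_estimate(scale_activities, scale_depths):
    """Read out depth as the activity-weighted mean of the depths signaled
    by a few structural scales. As the energy balance across scales shifts
    continuously, so does the estimate, even though only three or four
    scales exist."""
    total = float(sum(scale_activities))
    return sum(a * d for a, d in zip(scale_activities, scale_depths)) / total
```

With preferred depths (1, 2, 3), activity concentrated in the first scale yields a depth of 1, equal activity in the first two scales yields 1.5, and intermediate energy balances sweep out every value in between, much as a few cone types support a continuum of color percepts.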
Thus, although disparities may be sufficient to produce a depth percept, they may not be necessary to produce one. I suggest instead that suitable correlations between activity and scaling across the network loci that represent different spatial positions produce a depth percept. Depth is perceived whenever the resonant activity distribution is "curved" among several structural scales as representational space is traversed, no matter how, monocularly
or binocularly, the activity distribution achieves its curvature. This conclusion may be restated as a deceptively simple proposition: An object in the outside world is perceived to be curved if it induces a curvature in the abstract representational space of activity-scale correlations. Such a conclusion seems to smack of naive realism, but it is saved from the perils of naive realism by the highly nonlinear and nonlocal nature of the shunting network representation of input patterns. This conclusion does, however, provide a scientific rationale for the temptations of naive realism, and points the way to a form of neorealism if one entertains the quantum-mechanical proposition that the curvature of an object in the outside world is also due to curved activity-scaling correlations in an abstract representational scale. Such considerations lead beyond the scope of this article. The view that all external operations that use equivalent activity-scaling correlations generate equivalent depth percepts liberates our thinking from the current addiction to disparity computations and suggests how monocular gradients, monocular motion cues, and learned cognitive feedback signals can all contribute to a depth percept. Because of the importance of this conception to my theory, I give it a name: the principle of scale equivalence.

42. Reflectance Rivalry and Spatial Frequency Detection
The same ideas suggest an explanation of the Wallach and Adams (1954) data on rivalry between two central figures of different lightness (Section 13). Suppose that each monocular pattern generates a different functional scale when it is viewed monocularly (Section 38). Suppose, moreover, that the monocular input intensities are chosen so that the functional scales are spatially out of phase with each other. Then when a different input pattern is presented to each eye, the feedback exchange between monocular and binocular cells, being out of phase, can become rivalrous. This explanation leads to a fascinating experimental possibility: Given an input of fixed size, test a series of lightness differences to the two eyes. Can one find ranges of lightness where the functional scales are rivalrous followed by ranges of lightness in which the functional scales can match? If this is possible, then it is probably due to the fact that only certain peaks in the two scales match binocularly. The extra peaks self-match. Should this happen, it may be possible to detect small spatial periodicities in lightness such that binocular matches are brighter than self-matches. I am not certain that these differences will be visible, because the filling-in process from the locations of amplified binocular matches across the regions of monocular self-matches may totally obscure the lightness differences of the two types of matches. Such a filling-in process may be interpreted as a type of brightness summation. Another summation phenomenon which may reflect the activation of a functional scale is the decrease in threshold contrast needed to detect an extended grating pattern as the number of cycles in the pattern is increased. Robson and Graham (1981) explain this phenomenon quantitatively "by assuming that an extended grating pattern will be detected if any of the independently perturbed detectors on whose receptive field the stimulus falls signals its presence" (p. 409).
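The detection rule that Robson and Graham assume can be illustrated with a hedged sketch: if each of N independently perturbed detectors covering the grating detects it with probability p, the grating is detected when any one of them signals, so detectability grows with the number of cycles without any single detector needing a many-cycle receptive field. The value p = 0.05 below is an arbitrary assumption for illustration.

```python
# Probability-summation sketch (illustrative parameter values): with
# per-detector detection probability p and N independent detectors, the
# probability that at least one detector signals is 1 - (1 - p)**N.

def detection_probability(p, n_cycles):
    return 1.0 - (1.0 - p) ** n_cycles

for n in (1, 4, 16, 64):
    print(n, round(detection_probability(0.05, n), 3))
```

Detectability therefore rises steadily from 1 to 64 cycles even though every individual detector sees only a few cycles.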
What is perplexing about this phenomenon is that "some kind of summation process takes place over at least something approaching 64 cycles of our patterns ... it is stretching credulity rather far to suppose that the visual system contains detectors with receptive fields having as many as 64 pairs of excitatory and inhibitory regions" (p. 413). This phenomenon seems less paradoxical if we suppose that a single suprathreshold peak within a structural scale can drive contiguous subthreshold peaks within that scale to suprathreshold values via a disinhibitory action. Suppose, moreover, that increasing the number of cycles increases the expected number of suprathreshold peaks that will occur at a fixed contrast. Then a summation effect across 64 structural wavelengths is not paradoxical if it is viewed as a filling-in reaction from suprathreshold peaks to subthreshold peaks, much like the filling-in reaction
The Quantized Geometry of Visual Space
that may occur between binocular matches and self-matches in the Wallach and Adams (1954) paradigm. Due to the large number of phenomena which become intuitively more plausible using this type of filling-in idea, I believe that quantitative studies of how to vary input brightnesses to change the functional scales generated by complex visual stimuli deserve more experimental and theoretical study. One challenge is to find new ways to selectively increase or decrease the activity within one structural scale without inadvertently increasing or decreasing the activities within other active scales as well. In meeting this challenge, possible effects of brightness changes on perceived length are no less interesting than their effects on perceived depth. For example, suppose that an increase in input contrast decreases the functional scale within a prescribed structural scale. Even if the individual peaks in the several functional scales retain approximately the same height, a lightness difference may occur due to the increased density of peaks within a unit cellular region. This lightness difference will alter length scaling in the limited sense that it can alter the ease with which matching can occur between monocular signals at their binocular interface, as I have just argued. It remains quite obscure, however, how such a functional length change in a network's perceptual representation is related to the genesis of motor actions, or whether motor commands are synthesized from more global properties of the regions in which activity is concentrated across all scales. To the extent that motor consequences help to shape the synthesis of perceptual invariants, no more than a qualitative appreciation of how functional length changes can influence effects like Emmert's law may be possible until quantitative sensory-motor models are defined and simulated.

43. Resonance in a Feedback Dipole Field: Binocular Development and Figure-Ground Completion
My discussions of how a FIRE spreads (Section 39) and of figure-ground completion (Section 40) tacitly used properties that require another design principle to be realized. This design suggests how visual networks are organized into dipole fields consisting of subfields of on-cells and subfields of off-cells, with the on-cells joined together and the off-cells joined together by competitive interactions. Because this concept has been extensively discussed elsewhere (Grossberg, 1980b, 1982c, 1982d), I will only sketch the properties which I need here. I will start with a disclaimer to emphasize that I have a very specific concept in mind. My dipoles are not the classical dipoles which Julesz (1971b) used to build an analog model of stereopsis. My dipoles are on-cell off-cell pairs such that a sudden offset of a previously sustained input to the on-cell can elicit a transient antagonistic rebound, or off-reaction, in the activity of the off-cell. Similarly, a sudden and equal arousal increment to both the on-cell and the off-cell can elicit a transient antagonistic rebound in off-cell activity if the arousal increment occurs while the on-cell is active (Figure 20). Thus my notion of dipole describes how STM can be rapidly reset, either by temporal fluctuations in specific visual cues or by unexpected events, not necessarily visual at all, which are capable of triggering an arousal increment at visually responsive cells. In my theory, such an unexpected event is hypothesized to elicit the mismatch negativity component of the N200 evoked potential, and such an antagonistic rebound, or STM reset, is hypothesized to elicit the P300 evoked potential. These reactions to specific and nonspecific inputs are suggested to be mediated by slowly varying transmitter substances (notably catecholamines like norepinephrine) which multiplicatively gate, and thereby habituate to, input signals on their way to the on-cells and the off-cells.
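A minimal numerical sketch of these rebound dynamics can be simulated in a few lines. The parameter values, the linear signal function, and the integration scheme below are illustrative assumptions, not values from the text: transmitters habituate to the gated signals, so when the phasic on-input shuts off, the depleted on-channel gate briefly loses to the fresher off-channel gate and a transient off-rebound appears.

```python
# Gated dipole rebound sketch (all parameter values are assumptions).
A, B = 0.05, 1.0      # transmitter accumulation rate and target level
I, J = 0.5, 1.0       # tonic arousal input and phasic on-input
dt = 0.1

def f(w):
    """Signal function; linear here purely for illustration."""
    return w

z1 = z2 = B           # transmitter gates start fully accumulated
rebound = 0.0
for step in range(4000):
    t = step * dt
    S1 = f(I + (J if t < 200.0 else 0.0))   # on-channel signal: I + J, then I
    S2 = f(I)                               # off-channel signal: I alone
    # Transmitters accumulate toward B and are depleted by gated release.
    z1 += dt * (A * (B - z1) - S1 * z1)
    z2 += dt * (A * (B - z2) - S2 * z2)
    off_reaction = S2 * z2 - S1 * z1        # net output of the off-channel
    if t >= 200.0:
        rebound = max(rebound, off_reaction)

print("peak off-rebound after input offset:", round(rebound, 3))
```

After the offset of J, both channels receive the same input I, but the habituated on-channel gate z1 transiently transmits less than z2, producing a positive off-reaction that decays as z1 reaccumulates.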
The outputs of these cells thereupon compete before eliciting net on-reactions and off-reactions, respectively, from the dipole (Figure 21). In a dipole field, the on-cells are hypothesized to interact via a shunting on-center off-surround network. The off-cells are also hypothesized to interact via a shunting on-center off-surround network. These shunting networks normalize and tune the STM
Figure 21. In the simplest example of a gated dipole, phasic input J and arousal input I add in the on-channel to activate the potential x1. The arousal input alone activates x2. Signals S1 = f(x1) and S2 = f(x2) such that S1 > S2 are thereby generated. In the square synapses, transmitters z1 and z2 slowly accumulate to a target level. Transmitter is also released at a rate proportional to S1z1 in the on-channel and S2z2 in the off-channel. This is the transmitter gating step. These signals perturb the potentials x3 and x4, which thereupon compete to elicit the net on-reaction x5 and off-reaction x6. See Grossberg (1980b, 1982d) for a mathematical analysis of gated dipole properties. (From Grossberg 1982c.)
activity within the on-subfield and the off-subfield of the total dipole field network. The dipole interactions between on-cells and off-cells enable an on-cell onset to cause a complementary off-cell suppression, and an on-cell offset to cause a complementary off-cell enhancement. This duality of reactions makes sense of structural neural arrangements such as on-center off-surround networks juxtaposed against off-center on-surround networks, and uses this unified processing framework to qualitatively explain visual phenomena such as positive and negative after-effects, the McCollough effect, spatial frequency adaptation, monocular rivalry, and Gestalt switching between ambiguous figures (Grossberg, 1980b). The new features that justify mentioning dipole fields here are that the on-fields and off-fields can interact to generate functional scales, and that the signals which regulate the balance of activity between on-cells and off-cells can habituate as the transmitter substances that gate these signals are progressively depleted. These facts will now be used to clarify how figure-ground completion and binocular rivalry might occur. I wish to emphasize, however, that dipole fields were not invented to explain such visual effects. Rather, they were invented to explain how internal representations which self-organize (e.g., develop, learn) as a result of experience can be stabilized against the erosive effects of later environmental fluctuations. My adaptive resonance theory suggests how learning can occur in response to resonant activity patterns, yet is prevented from occurring when rapid STM reset and memory search routines are triggered by unexpected events.
In the present instance, if LTM traces are placed in the feedforward and feedback pathways that subserve binocular resonances, then the theory suggests that binocular development will occur only in response to resonant data patterns, notably to objects to which attention is paid (Grossberg, 1976b, 1978e, 1980b; Singer, 1982). Because the mechanistic substrates needed for the stable self-organization of perceptual and cognitive codes are not peculiar to visual data, one can immediately understand why so many visual effects have analogs in other modalities. An instructive instance of figure-ground completion is Beck's phantom letter E (Section 6). To fully explain this percept, one needs a good model of competition between orientation sensitive dipole fields; in particular, a good physiological model of cortical hypercolumn organization (Hubel and Wiesel, 1977). Some observations can be made about the relevance of dipole field organization in the absence of a complete model. Suppose that the regularly spaced vertical dark lines of the "ground" are sufficiently dense to create a statistically smoothed pattern when they are preprocessed by the nonlinear cross-correlators of some structural scales (Glass and Switkes, 1976). When such a smoothed pattern undergoes noise suppression within a structural scale, it generates statistical edges at the boundary of the "ground" region due to the sudden change in input statistics at this boundary. These edges of the (black) off-field generate complementary edges of the (white) on-field due to dipole inhibition within this structural scale. These complementary edges can use the ambiguous (preprocessed) white as an energy source to generate a FIRE that fills-in the interior of the "ground." This FIRE defines the ground as a coherent entity. The "ground" does not penetrate the "figure" because FIBs are generated by the competition which exists between orientation detectors of sufficiently different orientation.
A “figure” percept can arise in this situation as the complement of the coherently filled-in “ground,” which creates a large shift in activity-scale correlations at the representational loci corresponding to the “ground” region. In order for the “figure” to achieve a unitary existence other than as the complement of the “ground,” a mechanism must operate on a broader structural scale than that of the variously oriented lines that fill the figure. For example, suppose that, due to the greater spatial extent of vertical ground lines than nonvertical figure lines, the smoothed vertical edges can almost completely inhibit all smoothed nonvertical edges near the figure-ground boundary. Then the “figure” can be completed as a disinhibitory filling-in reaction among all the
smoothed nonvertical orientations of this structural scale. Thus, according to this view, "figure" and "ground" fill-in due to disinhibitory reactions among different subsets of cells. A lightness difference may be produced between such a "figure" and a "ground" (Dodwell, 1975). A similar argument sharpens the description of how figure-ground completion occurs during viewing of the Julesz 5% stereogram (Section 40). In this situation, black dots that can be fused by one structural scale may nonetheless form FIBs in other structural scales. A FIRE is triggered in the structural scales with fused black dots by the disinhibitory edges which flank the dots in the scale's white off-field. This FIRE propagates until it reaches FIBs that are generated by the nonfused dots corresponding to an input region of different disparity. The same thing happens in all structural scales which can fuse some of the dots. The figure-ground percept is a statistical property of all the FIREs that occur across scales.

44. Binocular Rivalry
Binocular rivalry can occur in a feedback dipole field. The dynamics of a dipole field also explain why sustained monocular viewing of a scene does not routinely cause a perceived waxing and waning of the scene at the frequency of binocular rivalry, but may nonetheless cause monocular rivalry in response to suitably constructed pictures at a rate that depends on the juxtaposition of features in the picture (Grossberg, 1980b, Section 12). I will here focus on how the slowly habituating transmitter gates in the dipole field could cause binocular rivalry without necessarily causing monocular waxing and waning. Let a pair of smoothed monocular edges mismatch at the binocular matching cells. Also suppose that one edge momentarily enjoys a sufficient energetic advantage over the other to be amplified by contrast enhancement as the other is completely suppressed. This suppression can be mediated by the competition between the off-cells that correspond to the rivalrous edges. In particular, the on-cells of the enhanced edge inhibit the off-cells via dipole competition. Due to the tonic activation of off-cells, the off-cells of the other edge are disinhibited via the shunting competition that normalizes and tunes the off-field. The on-cells of these disinhibited off-cells are thereupon inhibited via dipole competition. As this is going on, the winning edge at the binocular matching cells elicits the feedback signals that ignite whatever FIREs can be supported by the monocular data. This resonant activity gradually depletes the transmitters which gate the resonating pathways. As the habituation of transmitter progresses, the net sizes of the gated signals decrease. The inhibited monocular representation does not suffer this disadvantage because its signals, having been suppressed, do not habituate the transmitter gates in their pathways. 
Finally, a time may be reached when the winning monocular representation loses its competitive advantage due to progressive habituation of its transmitter gates. As soon as the binocular competition favors the other monocular representation, contrast enhancement bootstraps it into a winning position and a rivalrous cycle is initiated. A monocularly viewed scene would not inevitably wax and wane, for the following reason. Other things being equal, its transmitter gates habituate to a steady level such that the habituated gated signals are an increasing function of their input sizes (Grossberg, 1968, 1981, 1982e). Rivalry occurs only when competitive feedback signaling, by rapidly suppressing some populations but not others, sets the stage for the competitive balance to slowly reverse as the active pathways that sustain the suppression habituate faster than the inactive pathways. The same mechanism can cause a percept of monocular rivalry to occur when the monocular input pattern contains a suitable spatial juxtaposition of mutually competitive features (Rauschecker, Campbell, and Atkinson, 1973).
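The rivalry cycle described in this section can be caricatured in a few lines of code. Everything here is an illustrative assumption: competition is collapsed to a winner-take-all comparison, contrast enhancement to a fixed advantage factor W, and habituation to a single depleting gate per channel. The point is only that fast suppression plus slow, activity-dependent gate depletion yields sustained alternation between the two monocular representations.

```python
# Toy rivalry loop (all parameter values are assumptions). The currently
# dominant representation keeps a contrast-enhancement advantage W while
# its transmitter gate z habituates; the suppressed channel's gate
# recovers. Dominance reverses when the suppressed channel's gated signal
# overtakes the amplified but habituated winner.

A, B = 0.02, 1.0   # gate recovery rate and accumulation target
K = 0.2            # extra gate depletion in the actively signaling pathway
W = 1.5            # contrast-enhancement advantage of the dominant channel
dt = 0.1

z = [1.0, 0.9]     # transmitter gates; the asymmetry picks the first winner
winner, switches = 0, 0
for _ in range(20000):
    loser = 1 - winner
    # Hysteretic competition: the loser must overcome the winner's
    # contrast-enhancement advantage W to take over.
    if z[loser] > W * z[winner]:
        winner, loser = loser, winner
        switches += 1
    z[winner] += dt * (A * (B - z[winner]) - K * z[winner])  # habituates
    z[loser] += dt * (A * (B - z[loser]))                    # recovers

print("dominance reversals:", switches)
```

The advantage factor W supplies the hysteresis that keeps the winner dominant for an extended interval; without an active competitor, a single channel would simply settle to a steady habituated gate level and never oscillate.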
45. Concluding Remarks About Filling-In and Quantization
The quantized dynamic geometry of FIRE provides a mechanistic framework in which the experimental interdependence of many visual properties may be discussed in a unified fashion. Of course, a great deal of theoretical work remains to be done (even assuming all the concepts are correct), not only in working out the physiological designs in which these dynamic transactions take place but also in subjecting the numerical and mathematical properties of these designs to a confrontation with quantitative data. Also, the discussion of disinhibitory filling-in needs to be complemented by a discussion of how hierarchical feedback interactions between the feedforward adaptive filters (features) and feedback adaptive templates (expectancies) that define and stabilize a developing code can generate pattern completion effects, which are another form of filling-in (Dodwell, 1975; Grossberg, 1978e, Sections 21-22, 1980b, Section 17; Lanze, Weisstein, and Harris, 1982). Despite the incompleteness of this program, the very existence of such a quantization scheme suggests an answer to some fundamental questions. Many scientists have, for example, realized that since the brain is a universal measurement device acting on the quantum level, its dynamics should in some sense be quantized. This article suggests a new sense in which this is true by explicating some quantized properties of binocular resonances. One can press this question further by asking why binocular resonances are nonlinear phenomena that do not take the form of classical linear quantum theory. I have elsewhere argued that this is because of the crucial role which resonance plays in stabilizing the brain's self-organization (Grossberg, 1976, 1978e, 1980b).
The traditional quantum theory is not derived from principles of self-organization, despite the fact that the evolution of physical matter is as much a fundamental problem of self-organization on the quantum level as are the problems of brain development, perception, and learning. It will be interesting to see, as the years go by, whether traditional quantum theory looks more like an adaptive resonance theory as it too incorporates self-organizing principles into its computational structure.
APPENDIX

The following system of equations defines a binocular interaction capable of supporting a filling-in resonant exchange (Cohen and Grossberg, 1983a).

Monocular Representations
(A1)   (d/dt) x_iL = -A x_iL + (B - x_iL) sum_{k=1}^{n} I_kL (J_kL + Z_k) C_ki - (x_iL + D) sum_{k=1}^{n} I_kL (J_kL + Z_k) E_ki

(A2)   (d/dt) x_iR = -A x_iR + (B - x_iR) sum_{k=1}^{n} I_kR (J_kR + Z_k) C_ki - (x_iR + D) sum_{k=1}^{n} I_kR (J_kR + Z_k) E_ki

Binocular Matching

(A3)   y_i = [ sum_{k=1}^{n} (f(x_kL) + f(x_kR)) (B C_ki - D E_ki) ] / [ A + sum_{k=1}^{n} (f(x_kL) + f(x_kR)) (C_ki + E_ki) ]

Binocular-to-Monocular Feedback

(A6)   Z_k = [ sum_{i=1}^{n} g(y_i) F_ik ] / [ A' + sum_{i=1}^{n} g(y_i) G_ik ],

where

F_ik = B' C_ik - D' E_ik

and

G_ik = C_ik + E_ik.
Equation (A1) describes the response of the activities x_iL, i = 1, 2, ..., n, in the left monocular representation. Each x_iL obeys a shunting equation in which both the excitatory interaction coefficients C_ki and the inhibitory interaction coefficients E_ki are Gaussian functions of the distance between v_k and v_i. Two types of simulations have been studied: Additive inputs: All I_kL are chosen equal. The terms J_kL register the input pattern and summate with the binocular-to-monocular feedback functions Z_k.
Shunting inputs: All J_kL are chosen equal. The terms I_kL register the input pattern. The binocular-to-monocular feedback functions Z_k modulate the system's sensitivity to the inputs I_kL in the form of gain control signals. Equation (A2) for the activities x_iR, i = 1, 2, ..., n, in the right monocular representation has a similar interpretation. Note that the same binocular-to-monocular feedback functions Z_k are fed back to the left and right monocular representations. The binocular matching stage (A3) obeys an algebraic equation rather than a differential equation due to the simplifying assumption that the differential equation for the matching activities y_i reacts quickly to the monocular signals f(x_iL) and f(x_iR). Consequently, y_i is always in an approximate equilibrium with respect to its input signals. This equilibrium equation says that the monocular inputs f(x_kL) and f(x_kR) are added before being matched by the shunting interaction. The signal functions f(w) are chosen to be sigmoid functions of activity w. The excitatory interaction coefficients C_ki and inhibitory interaction coefficients E_ki are chosen to be Gaussian functions of distance. The spatial decay rates of the excitatory coefficients at every stage are chosen equal, as are the spatial decay rates of the inhibitory coefficients. The on-center is chosen narrower than the off-surround. After monocular signal patterns (f(x_1L), f(x_2L), ..., f(x_nL)) and (f(x_1R), f(x_2R), ..., f(x_nR)) are matched at the binocular matching stage, the binocular activities y_k are rectified by the output signal function g(y_k), which is typically chosen to be a sigmoid function of y_k. Then these rectified output signals are distributed back to the monocular representations via competitive signals (A6) with the same spatial bandwidths as are used throughout the computation. Numerical studies have been undertaken with the following types of results (Cohen and Grossberg, 1983a).
An "edgeless blob," or Gaussianly smoothed rectangular input, does not supraliminally excite the network at any input intensity. By contrast, when a rectangle is added to the blob input, the network generates a FIRE that globally fills-in the "figure" defined by the rectangle and uses the rectangle's edges to generate a globally structured "ground" (Figure 22). Despite the fact that the network is totally insensitive to the blob's intensity in the absence of the rectangle, the rectangle's presence in the blob sensitizes the network to the ratio of rectangle-plus-blob to blob intensities, and globally fills-in these figure and ground lightness estimates. Parametric input series have been done with rectangles on rectangles, rectangles on blobs, triangles on rectangles, and so forth to study how the network estimates and globally fills-in lightness estimates that are sensitive to the figure-to-ground intensity ratio. Monocular patterns that are mismatched relative to a prescribed structural scale do not activate a FIRE at input intensities that are suprathreshold for matched monocular patterns. Thus, different structural scales selectively resonate to the patterns that they can match. Different structural scales also generate different functional scales, other things being equal. Matched monocular patterns such as those described above have been shown to elicit only subliminal feedforward edge reactions until their intensities exceed the network's quenching threshold, whereupon a full-blown global resonance is initiated which reflects disparity, length, and lightness data in the manner previously described.
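A minimal numerical sketch of this feedback loop, consistent with the Appendix's verbal description but not a reproduction of the published simulations: all parameter values, kernel widths, signal-function constants, the choice g = f, and the clamping of negative feedback to zero are assumptions. Left and right monocular fields obey shunting equations with a narrow Gaussian on-center C and a broad, weaker Gaussian off-surround E; the binocular activities y are computed algebraically from the summed monocular signals; and the rectified, competitively filtered y pattern is fed back identically to both monocular fields.

```python
# One-dimensional sketch of the monocular/binocular feedback loop
# (illustrative parameters only; not the published simulation settings).
import math

n = 40
A, B, D = 1.0, 1.0, 0.5        # shunting decay and saturation constants
dt = 0.05

def kernel(width, amp):
    """Gaussian interaction coefficients between positions i and k."""
    return [[amp * math.exp(-((i - k) ** 2) / (2.0 * width ** 2))
             for k in range(n)] for i in range(n)]

C = kernel(1.0, 1.0)           # narrow on-center
E = kernel(4.0, 0.1)           # broad, weaker off-surround

def f(w):
    """Sigmoid signal function; g is chosen equal to f for simplicity."""
    return w * w / (0.01 + w * w) if w > 0.0 else 0.0

def conv(K, s):
    return [sum(K[k][i] * s[k] for k in range(n)) for i in range(n)]

def shunt_equilibrium(s):
    """Algebraic shunting stage: (B*C - D*E)-filtered input over (A + C + E)."""
    c, e = conv(C, s), conv(E, s)
    return [(B * ci - D * ei) / (A + ci + ei) for ci, ei in zip(c, e)]

# Matched "rectangle" pattern to both eyes (shunting-input mode: I registers
# the pattern while J is held constant).
I_pat = [1.0 if 15 <= i < 25 else 0.1 for i in range(n)]
J_const = 0.5

xL, xR = [0.0] * n, [0.0] * n
for _ in range(200):
    y = shunt_equilibrium([f(l) + f(r) for l, r in zip(xL, xR)])  # matching
    Z = shunt_equilibrium([f(v) for v in y])                      # feedback
    # Negative feedback values are clamped to zero (a modeling assumption).
    drive = [I_pat[k] * (J_const + max(Z[k], 0.0)) for k in range(n)]
    exc, inh = conv(C, drive), conv(E, drive)
    for x in (xL, xR):                       # monocular shunting fields
        for i in range(n):
            x[i] += dt * (-A * x[i] + (B - x[i]) * exc[i] - (x[i] + D) * inh[i])

peak = max(shunt_equilibrium([f(l) + f(r) for l, r in zip(xL, xR)]))
print("peak binocular match activity:", round(peak, 3))
```

With a matched pattern to both eyes, the loop settles into a suprathreshold binocular pattern; with these toy parameters it does not attempt to reproduce the quenching-threshold or blob-insensitivity results reported above.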
[Figure 22 panels: RECTANGLE ON BLOB, SUPRATHRESHOLD LEFT FIELD; INPUT. Axis tick values omitted.]
Figure 22. Figure-ground filling-in due to a rectangle on an "edgeless blob": By itself, the blob elicits no suprathreshold reaction in the binocular matching field at any input intensity. By itself, in a network without feedback from the matching field, the rectangle elicits only a pair of boundary edges at any input intensity. Given a fixed ratio of rectangle to blob intensity in the full network, as the background input intensity is parametrically increased, the network first elicits subthreshold reactions to the edges of the rectangle. Once the quenching threshold is exceeded, a full-blown global resonance is triggered. Then the rectangle fills-in an intensity estimate between its edges (the "figure") and structures the blob so that it fills-in an intensity estimate across the entire blob (the "ground"). The two intensity estimates reflect the ratio of rectangle-to-blob input intensities. (From Cohen and Grossberg 1982.)
REFERENCES

Amari, S., Dynamics of pattern formation in lateral-inhibition type neural fields. Biological Cybernetics, 1977, 27, 77-87.
Amari, S., Competitive and cooperative aspects in dynamics of neural excitation and self-organization. In S. Amari and M. Arbib (Eds.), Competition and cooperation in neural networks. Berlin: Springer-Verlag, 1982, 1-28.
Amari, S. and Arbib, M.A., Competition and cooperation in neural nets. In J. Metzler (Ed.), Systems neuroscience. New York: Academic Press, 1977.
Arend, L.E., Spatial differential and integral operations in human vision: Implications of stabilized retinal image fading. Psychological Review, 1973, 80, 374-395.
Arend, L.E., Buehler, J.N., and Lockhead, G.R., Difference information in brightness perception. Perception and Psychophysics, 1971, 9, 367-370.
Arend, L.E., Lange, R.V., and Sandick, B.L., Nonlocal determination of brightness in spatially periodic patterns. Perception and Psychophysics, 1981, 29, 310-316.
Attneave, F., Some informational aspects of visual perception. Psychological Review, 1954, 61, 183-193.
Barlow, H.B., Optic nerve impulses and Weber's Law. In W.R. Uttal (Ed.), Sensory coding. Boston: Little, Brown, and Co., 1972.
Barlow, H.B. and Levick, W.R., The mechanism of directionally selective units in rabbit's retina. Journal of Physiology, 1965, 178, 447-504.
Baylor, D.A. and Hodgkin, A.L., Changes in time scale and sensitivity in turtle photoreceptors. Journal of Physiology, 1974, 242, 729-758.
Baylor, D.A., Hodgkin, A.L., and Lamb, T.D., The electrical response of turtle cones to flashes and steps of light. Journal of Physiology, 1974, 242, 685-727 (a).
Baylor, D.A., Hodgkin, A.L., and Lamb, T.D., Reconstruction of the electrical responses of turtle cones to flashes and steps of light. Journal of Physiology, 1974, 242, 759-791 (b).
Beck, J., Surface color perception. Ithaca, NY: Cornell University Press, 1972.
Bergstrom, S.S., A paradox in the perception of luminance gradients, I. Scandinavian Journal of Psychology, 1966, 7, 209-224.
Bergstrom, S.S., A paradox in the perception of luminance gradients, II. Scandinavian Journal of Psychology, 1967, 8, 25-32 (a).
Bergstrom, S.S., A paradox in the perception of luminance gradients, III. Scandinavian Journal of Psychology, 1967, 8, 33-37 (b).
Bergstrom, S.S., A note on the neural unit model for contrast phenomena. Vision Research, 1973, 13, 2087-2092.
Blake, R. and Fox, R., The psychophysical inquiry into binocular summation. Perception and Psychophysics, 1973, 14, 161-185.
Blake, R., Sloane, M., and Fox, R., Further developments in binocular summation. Perception and Psychophysics, 1981, 30, 266-276.
Blakemore, C., Carpenter, R.H., and Georgeson, M.A., Lateral inhibition between orientation detectors in the human visual system. Nature, 1970, 228, 37-39.
Blank, A.A., Metric geometry in human binocular perception: Theory and fact. In E.L.J. Leeuwenberg and H.F.J.M. Buffart (Eds.), Formal theories of visual perception. New York: Wiley and Sons, 1978.
Boynton, R.M., The psychophysics of vision. In R.N. Haber (Ed.), Contemporary theory and research in visual perception. New York: Holt, Rinehart, and Winston, 1968.
Bridgeman, B., Metacontrast and lateral inhibition. Psychological Review, 1971, 78, 528-539.
Bridgeman, B., A correlational model applied to metacontrast: Reply to Weisstein, Ozog, and Szoc. Bulletin of the Psychonomic Society, 1977, 10, 85-88.
Bridgeman, B., Distributed sensory coding applied to simulations of iconic storage and metacontrast. Bulletin of Mathematical Biology, 1978, 40, 605-623.
Buffart, H., Brightness and contrast. In E.L.J. Leeuwenberg and H.F.J.M. Buffart (Eds.), Formal theories of visual perception. New York: Wiley and Sons, 1978.
Buffart, H., A theory of cyclopean perception. Nijmegen: University, 1981.
Buffart, H., Brightness estimation: A transducer function. In H.-G. Geissler, H.F.J.M. Buffart, P. Petzoldt, and Y.M. Zabrodin (Eds.), Psychophysical judgment and the process of perception. Amsterdam: North-Holland, 1982.
Buffart, H., Leeuwenberg, E., and Restle, F., Coding theory of visual pattern completion. Journal of Experimental Psychology, 1981, 7, 241-274.
Caelli, T.M., Visual perception: Theory and practice. Oxford: Pergamon Press, 1982.
Caelli, T.M., Hoffman, W.C., and Lindman, H., Apparent motion: Self-excited oscillation induced by retarded neuronal flows. In E.L.J. Leeuwenberg and H.F.J.M. Buffart (Eds.), Formal theories of visual perception. New York: Wiley and Sons, 1978.
Carpenter, G.A. and Grossberg, S., Adaptation and transmitter gating in vertebrate photoreceptors. Journal of Theoretical Neurobiology, 1981, 1, 1-42.
Carpenter, G.A. and Grossberg, S., Dynamic models of neural systems: Propagated signals, photoreceptor transduction, and circadian rhythms. In J.P.E. Hodgson (Ed.), Oscillations in mathematical biology. New York: Springer-Verlag, 1983.
Cogan, A.L., Silverman, G., and Sekuler, R., Binocular summation in detection of contrast flashes. Perception and Psychophysics, 1982, 31, 330-338.
Cohen, M.A. and Grossberg, S., Some global properties of binocular resonances: Disparity matching, filling-in, and figure-ground synthesis. In P. Dodwell and T. Caelli (Eds.), Figural synthesis. Hillsdale, NJ: Erlbaum, 1983 (a).
Cohen, M.A. and Grossberg, S., The dynamics of brightness perception. In preparation, 1983 (b).
Cohen, M.A. and Grossberg, S., Absolute stability of global pattern formation and parallel memory storage in competitive neural networks. IEEE Transactions on Systems, Man, and Cybernetics, in press, 1983 (c).
Coren, S., Brightness contrast as a function of figure-ground relations. Journal of Experimental Psychology, 1969, 80, 517-524.
Coren, S., Subjective contours and apparent depth. Psychological Review, 1972, 79, 359-367.
Coren, S., Porac, C., and Ward, L.M., Sensation and perception. New York: Academic Press, 1979.
Cornsweet, T.N., Visual perception. New York: Academic Press, 1970.
Crick, F.H.C., Marr, D., and Poggio, T., An information processing approach to understanding the visual cortex. In The cerebral cortex: Neurosciences research program, 1980.
Curtis, D.W. and Rule, S.J., Binocular processing of brightness information: A vector-sum model. Journal of Experimental Psychology: Human Perception and Performance, 1978, 4, 132-143.
Dalenoort, G.J., In search of the conditions for the genesis of cell assemblies: A study in self-organization. Journal of Social and Biological Structures, 1982, 5, 161-187 (a).
Dalenoort, G.J., Modelling cognitive processes in self-organizing neural networks, an exercise in scientific reduction. In L.M. Ricciardi and A.C. Scott (Eds.), Biomathematics in 1980. Amsterdam: North-Holland, 1982, 133-144 (b). Day, R.H., Visual spatial illusions: A general explanation. Science, 1972, 175, 13351340. DeLange, H., Attenuation characteristics a n d phase-shift characteristics of the human fovea-cortex systems in relation to flicker-fusion phenomena. Delft: Technical University, 1957. Deregowski, J.B., Illusion and culture. In R.L. Gregory and G.H. Gombrich (Eds.), Illusions in n a t u r e a n d art. New York: Scribner’s, 1973, 161-192. Dev, P.,Perception of depth surfaces in random-dot stereograms: A neural model. International Journal of Man-Machine Studies, 1975, 7,511-528. DeWcert, Ch. M.M.and Levelt, W.J.M., Binocular brightness combinations: Additive and nonadditive aspects. Perception and Psychophysics, 1974, 15, 551-562. Diner, D., Hysteresis in human binocular fusion: A second look. Ph.D. Thesis, California Institute of Technology, Pasadena, 1978. Dodwell, P.C., Pattern and object perception. In E.C. Carterette and M.P. Friedman (Eds.), Handbook of perception, Vol. 5: Seeing. New York: Academic Press, 1975. Eijkman, E.G.J., Jongsma, H.J., and Vincent, J., Two-dimensional filtering, oriented line detectors, and figural aspects as determinants of visual illusions. Perception and Psychophysics, 1981, 29, 352-358. Ellias, S. and Grossberg, S., Pattern formation, contrast control, and oscillations in the short term memory of shunting on-center off-surround networks. Biological Cybernetics, 1975, 20, 69-98. Emmert, E., Grossenverhaltnisse der Nachbilder. Klinische Monatsblatt der Augenheilk unde, 1881, 19, 442-450. Engel, G.R., The visual processes underlying binocular brightness summation. Vision Research, 1967, 7,753-767. Engel, G.R., The autocorrelation function and binocular brightness mixing. Vision Research, 1969, 9,1111-1130. 
Enroth-Cugell, C. and Robson, J.G., The contrast sensitivity of retinal ganglion cells of the cat. Journal of Physiology, 1966, 187, 517-552.
Fender, D. and Julesz, B., Extension of Panum's fusional area in binocularly stabilized vision. Journal of the Optical Society of America, 1967, 57, 819-830.
Festinger, L., Coren, S., and Rivers, G., The effect of attention on brightness contrast and assimilation. American Journal of Psychology, 1970, 83, 189-207.
Foley, J.M., Depth, size, and distance in stereoscopic vision. Perception and Psychophysics, 1968, 3, 265-274.
Foley, J.M., Binocular depth mixture. Vision Research, 1976, 16, 1263-1267.
Foley, J.M., Binocular distance perception. Psychological Review, 1980, 87, 411-434.
Foster, D.H., Visual apparent motion and the calculus of variations. In E.L.J. Leeuwenberg and H.F.J.M. Buffart (Eds.), Formal theories of visual perception. New York: Wiley and Sons, 1978.
Foster, D.H., A spatial perturbation technique for the investigation of discrete internal representations of visual patterns. Biological Cybernetics, 1980, 38, 159-169.
Fox, R. and McIntyre, C., Suppression during binocular fusion of complex targets. Psychonomic Science, 1967, 8, 143-144.
Freeman, W.J., Cinematic display of spatial structure of EEG and averaged evoked potentials (AEPs) of olfactory bulb and cortex. Electroencephalography and Clinical Neurophysiology, 1973, S7, 199.
Freeman, W.J., Mass action in the nervous system. New York: Academic Press, 1975.
Freeman, W.J., EEG analysis gives model of neuronal template matching mechanism for sensory search with olfactory bulb. Biological Cybernetics, 1979, 35, 221-234 (a).
Freeman, W.J., Nonlinear dynamics of paleocortex manifested in the olfactory EEG. Biological Cybernetics, 1979, 35, 21-37 (b).
Freeman, W.J., Nonlinear gain mediating cortical stimulus response relations. Biological Cybernetics, 1979, 35, 237-247 (c).
Freeman, W.J., A physiological hypothesis of perception. Perspectives in Biology and Medicine, 1981, 24, 561-592.
Freeman, W.J. and Schneider, W., Changes in spatial patterns of rabbit olfactory EEG with conditioning to odors. Psychophysiology, 1982, 19, 44-56.
Frisby, J.P., Seeing. Oxford: Oxford University Press, 1979.
Frisby, J.P. and Julesz, B., Depth reduction effects in random line stereograms. Perception, 1975, 4, 151-158.
Gerrits, H.J.M., de Haan, B., and Vendrik, A.J.H., Experiments with retinal stabilized images: Relations between the observations and neural data. Vision Research, 1966, 6, 427-440.
Gerrits, H.J.M. and Timmerman, G.J.M.E.N., The filling-in process in patients with retinal scotomata. Vision Research, 1969, 9, 439-442.
Gerrits, H.J.M. and Vendrik, A.J.H., Artificial movements of a stabilized image. Vision Research, 1970, 10, 1443-1456 (a).
Gerrits, H.J.M. and Vendrik, A.J.H., Simultaneous contrast, filling-in process and information processing in man's visual system. Experimental Brain Research, 1970, 11, 411-430 (b).
Gerrits, H.J.M. and Vendrik, A.J.H., Eye movements necessary for continuous perception during stabilization of retinal images. Bibliotheca Ophthalmologica, 1972, 82, 339-347.
Gerrits, H.J.M. and Vendrik, A.J.H., The influence of simultaneous movements on perception in parafoveal stabilized vision. Vision Research, 1974, 14, 175-180.
Gibson, J., Perception of the visual world. Boston: Houghton Mifflin, 1950.
Gilchrist, A.L., Perceived lightness depends on perceived spatial arrangement. Science, 1977, 195, 185-187.
Gilchrist, A.L., The perception of surface blacks and whites. Scientific American, 1979, 240, 112-124.
Glass, L., Effect of blurring on perception of a simple geometric pattern. Nature, 1970, 228, 1341-1342.
Glass, L. and Switkes, E., Pattern recognition in humans: Correlations which cannot be perceived. Perception, 1976, 5, 67-72.
Gogel, W.C., The tendency to see objects as equidistant and its inverse relation to lateral separation. Psychological Monograph, 70 (whole no. 411), 1956.
Gogel, W.C., Equidistance tendency and its consequences. Psychological Bulletin, 1965, 64, 153-163.
Gogel, W.C., The adjacency principle and three-dimensional visual illusions. Psychonomic Monograph, Supplement 3 (whole no. 45), 153-169, 1970.
Gonzales-Estrada, M.T. and Freeman, W.J., Effects of carnosine on olfactory bulb EEG, evoked potentials and DC potentials. Brain Research, 1980, 202, 373-386.
Graham, N., The visual system does a crude Fourier analysis of patterns. In S. Grossberg (Ed.), Mathematical psychology and psychophysiology. Providence, RI: American Mathematical Society, 1981.
Graham, N. and Nachmias, J., Detection of grating patterns containing two spatial frequencies: A test of single-channel and multiple channel models. Vision Research, 1971, 11, 251-259.
Graham, N., Robson, J.G., and Nachmias, J., Grating summation in fovea and periphery. Vision Research, 1978, 18, 816-825.
Gregory, R.L., Eye and brain. New York: McGraw-Hill, 1966.
Grimson, W.E.L., A computer implementation of a theory of human stereo vision. Philosophical Transactions of the Royal Society of London B, 1981, 292, 217-253.
Grimson, W.E.L., A computational theory of visual surface interpolation. Philosophical Transactions of the Royal Society of London B, 1982, 298, 395-427 (a).
Grimson, W.E.L., From images to surfaces: A computational study of the human early visual system. Cambridge, MA: MIT Press, 1982 (b).
Grimson, W.E.L., Surface consistency constraints in vision. Computer Graphics and Image Processing, in press, 1983.
Grossberg, S., Some physiological and biochemical consequences of psychological postulates. Proceedings of the National Academy of Sciences, 1968, 60, 758-765.
Grossberg, S., On learning and energy-entropy dependence in recurrent and nonrecurrent signed networks. Journal of Statistical Physics, 1969, 1, 319-350 (a).
Grossberg, S., On the serial learning of lists. Mathematical Biosciences, 1969, 4, 201-253 (b).
Grossberg, S., Neural pattern discrimination. Journal of Theoretical Biology, 1970, 27, 291-337 (a).
Grossberg, S., Some networks that can learn, remember, and reproduce any number of complicated space-time patterns, II. Studies in Applied Mathematics, 1970, 49, 135-166 (b).
Grossberg, S., On the dynamics of operant conditioning. Journal of Theoretical Biology, 1971, 33, 225-255 (a).
Grossberg, S., Pavlovian pattern learning by nonlinear neural networks. Proceedings of the National Academy of Sciences, 1971, 68, 828-831 (b).
Grossberg, S., A neural theory of punishment and avoidance, I: Qualitative theory. Mathematical Biosciences, 1972, 15, 39-67 (a).
Grossberg, S., A neural theory of punishment and avoidance, II: Quantitative theory. Mathematical Biosciences, 1972, 15, 253-285 (b).
Grossberg, S., Pattern learning by functional-differential neural networks with arbitrary path weights. In K. Schmitt (Ed.), Delay and functional-differential equations and their applications. New York: Academic Press, 1972 (c).
Grossberg, S., Neural expectation: Cerebellar and retinal analogs of cells fired by learnable or unlearned pattern classes. Kybernetik, 1972, 10, 49-57 (d).
Grossberg, S., Contour enhancement, short-term memory, and constancies in reverberating neural networks. Studies in Applied Mathematics, 1973, 52, 217-257.
Grossberg, S., Classical and instrumental learning by neural networks. In R. Rosen and F. Snell (Eds.), Progress in theoretical biology, Vol. 3. New York: Academic Press, 1974.
Grossberg, S., A neural model of attention, reinforcement, and discrimination learning. International Review of Neurobiology, 1975, 18, 263-327.
Grossberg, S., Adaptive pattern classification and universal recoding, I: Parallel development and coding of neural feature detectors. Biological Cybernetics, 1976, 23, 121-134 (a).
Grossberg, S., Adaptive pattern classification and universal recoding, II: Feedback, expectation, olfaction, and illusions. Biological Cybernetics, 1976, 23, 187-202 (b).
Grossberg, S., On the development of feature detectors in the visual cortex with applications to learning and reaction-diffusion systems. Biological Cybernetics, 1976, 21, 145-159 (c).
Grossberg, S., Behavioral contrast in short-term memory: Serial binary memory models or parallel continuous memory models? Journal of Mathematical Psychology, 1978, 17, 199-219 (a).
Grossberg, S., Communication, memory, and development. In R. Rosen and F. Snell (Eds.), Progress in theoretical biology, Vol. 5. New York: Academic Press, 1978 (b).
Grossberg, S., Competition, decision, and consensus. Journal of Mathematical Analysis and Applications, 1978, 66, 470-493 (c).
Grossberg, S., Decisions, patterns, and oscillations in the dynamics of competitive systems with applications to Volterra-Lotka systems. Journal of Theoretical Biology, 1978, 73, 101-130 (d).
Grossberg, S., A theory of human memory: Self-organization and performance of sensory-motor codes, maps, and plans. In R. Rosen and F. Snell (Eds.), Progress in theoretical biology, Vol. 5. New York: Academic Press, 1978 (e).
Grossberg, S., Biological competition: Decision rules, pattern formation, and oscillations. Proceedings of the National Academy of Sciences, 1980, 77, 2338-2342 (a).
Grossberg, S., How does a brain build a cognitive code? Psychological Review, 1980, 87, 1-51 (b).
Grossberg, S., Adaptive resonance in development, perception, and cognition. In S. Grossberg (Ed.), Mathematical psychology and psychophysiology. Providence, RI: American Mathematical Society, 1981.
Grossberg, S., Associative and competitive principles of learning and development: The temporal unfolding and stability of STM and LTM patterns. In S.I. Amari and M. Arbib (Eds.), Competition and cooperation in neural networks. New York: Springer-Verlag, 1982 (a).
Grossberg, S., A psychophysiological theory of reinforcement, drive, motivation, and attention. Journal of Theoretical Neurobiology, 1982, 1, 286-369 (b).
Grossberg, S., The processing of expected and unexpected events during conditioning and attention: A psychophysiological theory. Psychological Review, 1982, 89, 529-572 (c).
Grossberg, S., Some psychophysiological and pharmacological correlates of a developmental, cognitive, and motivational theory. In R. Karrer, J. Cohen, and P. Tueting (Eds.), Brain and information: Event related potentials. New York: New York Academy of Sciences, 1982 (d).
Grossberg, S., Studies of mind and brain: Neural principles of learning, perception, development, cognition, and motor control. Boston: Reidel Press, 1982 (e).
Grossberg, S., The adaptive self-organization of serial order in behavior: Speech and motor control. In E.C. Schwab and H.C. Nusbaum (Eds.), Perception of speech and visual form: Theoretical issues, models, and research. New York: Academic Press, 1983.
Grossberg, S. and Kuperstein, M., Adaptive dynamics of the saccadic eye movement system. In preparation, 1983.
Grossberg, S. and Levine, D., Some developmental and attentional biases in the contrast enhancement and short term memory of recurrent neural networks. Journal of Theoretical Biology, 1975, 53, 341-380.
Grossberg, S. and Pepe, J., Schizophrenia: Possible dependence of associational span, bowing, and primacy versus recency on spiking threshold. Behavioral Science, 1970, 15, 359-362.
Grossberg, S. and Pepe, J., Spiking threshold and overarousal effects in serial learning. Journal of Statistical Physics, 1971, 3, 95-125.
Grünau, M.W. von, The involvement of illusory contours in stroboscopic motion. Perception and Psychophysics, 1979, 25, 205-208.
Hagen, M.A. and Teghtsoonian, M., The effects of binocular and motion-generated information on the perception of depth and height. Perception and Psychophysics, 1981, 30, 257-265.
Hamada, J., A mathematical model for brightness and contour perception. Hokkaido Report of Psychology, 1976, HRP-11-76-17.
Hamada, J., Antagonistic and non-antagonistic processes in the lightness perception. Proceedings of the XXII International Congress of Psychology, Leipzig, July 6-12, 1980.
Hebb, D.O., The organization of behavior. New York: Wiley and Sons, 1949.
Hecht, S., Vision II: The nature of the photoreceptor process. In C. Murchison (Ed.), A handbook of general experimental psychology. Worcester, MA: Clark University Press, 1934.
Helmholtz, H.L.F. von, Treatise on physiological optics, J.P.C. Southall (Trans.). New York: Dover, 1962.
Hepler, N., Color: A motion-contingent after-effect. Science, 1968, 162, 376-377.
Hering, E., Outlines of a theory of the light sense. Cambridge, MA: Harvard University Press, 1964.
Hermann, A., The genesis of quantum theory (1899-1913), C.W. Nash (Trans.). Cambridge, MA: MIT Press, 1971.
Hildreth, E.C., Implementation of a theory of edge detection. MIT Artificial Intelligence Laboratory Technical Report TR-579, 1980.
Hochberg, J., Contralateral suppressive fields of binocular combination. Psychonomic Science, 1964, 1, 157-158.
Hochberg, J. and Beck, J., Apparent spatial arrangement and perceived brightness. American Journal of Psychology, 1954, 67, 263-266.
Holway, A.H. and Boring, E.G., Determinants of apparent visual size with distance variant. American Journal of Psychology, 1941, 54, 21-37.
Horn, B.K.P., Determining lightness from an image. Computer Graphics and Image Processing, 1974, 3, 277-299.
Horn, B.K.P., Understanding image intensities. Artificial Intelligence, 1977, 8, 201-231.
Hubel, D.H. and Wiesel, T.N., Functional architecture of macaque monkey visual cortex. Proceedings of the Royal Society of London (B), 1977, 198, 1-59.
Hurvich, L.M. and Jameson, D., Some quantitative aspects of an opponent-color theory, II: Brightness, saturation, and hue in normal and dichromatic vision. Journal of the Optical Society of America, 1955, 45, 602-616.
Indow, T., Alleys in visual space. Journal of Mathematical Psychology, 1979, 19, 221-258.
Indow, T., An approach to geometry of visual space with no a priori mapping functions. Journal of Mathematical Psychology, in press, 1983.
Johansson, G., About the geometry underlying spontaneous visual decoding of the optical message. In E.L.J. Leeuwenberg and H.F.J.M. Buffart (Eds.), Formal theories of visual perception. New York: Wiley and Sons, 1978.
Julesz, B., Binocular depth perception of computer-generated patterns. Bell System Technical Journal, 1960, 39, 1125-1162.
Julesz, B., Towards the automation of binocular depth perception (AUTOMAP). Proceedings of the IFIP Congress 62, 27 Aug-1 Sep 1962. Amsterdam: North-Holland, 1962, 439-444.
Julesz, B., Binocular depth perception without familiarity cues. Science, 1964, 145, 356-362.
Julesz, B., Binocular depth perception in man: A cooperative model of stereopsis. In O.-J. Grüsser and R. Klinke (Eds.), Pattern recognition in biological and technical systems, Proceedings of the German Cybernetic Society, Berlin, April 6-9, 1970. Berlin: Springer-Verlag, 1971, 300-315 (a).
Julesz, B., Foundations of cyclopean perception. Chicago: University of Chicago Press, 1971 (b).
Julesz, B., Cooperative phenomena in binocular depth perception. American Scientist, 62, 32-43. Reprinted in I.L. Janis (Ed.), Current trends in psychology: Readings from American Scientist. Los Altos, CA: W. Kaufmann, 1974.
Julesz, B., Global stereopsis: Cooperative phenomena in stereoscopic depth perception. In R. Held, H.W. Leibowitz, and H.-L. Teuber (Eds.), Handbook of sensory physiology, Vol. 8: Perception. Berlin: Springer-Verlag, 1978, 215-256 (a).
Julesz, B., Perceptual limits of texture discrimination and their implications to figure-ground separation. In E.L.J. Leeuwenberg and H.F.J.M. Buffart (Eds.), Formal theories of visual perception. New York: Wiley and Sons, 1978 (b).
Julesz, B. and Chang, J.J., Interaction between pools of binocular disparity detectors tuned to different disparities. Biological Cybernetics, 1976, 22, 107-119.
Just, M.A. and Carpenter, P.A., Eye fixations and cognitive processes. Cognitive Psychology, 1976, 8, 441-480.
Kaczmarek, L.K. and Babloyantz, A., Spatiotemporal patterns in epileptic seizures. Biological Cybernetics, 1977, 26, 199-208.
Kaufman, L., Sight and mind: An introduction to visual perception. New York: Oxford University Press, 1974.
Kaufman, L., Bacon, J., and Barroso, F., Stereopsis without image segregation. Vision Research, 1973, 13, 137-147.
Klatt, D.H., Speech perception: A model of acoustic-phonetic analysis and lexical access. In R.A. Cole (Ed.), Perception and production of fluent speech. Hillsdale, NJ: Erlbaum, 1980.
König, A. and Brodhun, E., Experimentelle Untersuchungen über die psychophysische Fundamentalformel in Bezug auf den Gesichtssinn. Sitzungsberichte der preussischen Akademie der Wissenschaften, Berlin, 1889, 27, 641-644.
Koffka, K., Principles of gestalt psychology. New York: Harcourt and Brace, 1935.
Kulikowski, J.J., Limit of single vision in stereopsis depends on contour sharpness. Nature, 1978, 275, 126-127.
Laming, D.R.J., Mathematical psychology. London: Academic Press, 1973.
Land, E.H., The retinex theory of color vision. Scientific American, 1977, 237, 108-128.
Land, E.H. and McCann, J.J., Lightness and retinex theory. Journal of the Optical Society of America, 1971, 61, 1-11.
Lanze, M., Weisstein, N., and Harris, J.R., Perceived depth versus structural relevance in the object-superiority effect. Perception and Psychophysics, 1982, 31, 376-382.
Leake, B. and Anninos, P., Effects of connectivity on the activity of neural net models. Journal of Theoretical Biology, 1976, 58, 337-363.
Leeuwenberg, E., The perception of assimilation and brightness contrast. Perception and Psychophysics, 1982, 32, 345-352.
Legge, G.E. and Foley, J.M., Contrast masking in human vision. Journal of the Optical Society of America, 1980, 70, 1458-1471.
Legge, G.E. and Rubin, G.S., Binocular interactions in suprathreshold contrast perception. Perception and Psychophysics, 1981, 30, 49-61.
LeGrand, Y., Light, colour, and vision. New York: Dover Press, 1957.
Leshowitz, B., Taub, H.B., and Raab, D.H., Visual detection of signals in the presence of continuous and pulsed backgrounds. Perception and Psychophysics, 1968, 4, 207-213.
Lettvin, J.Y., "Filling out the forms": An appreciation of Hubel and Wiesel. Science, 1981, 214, 518-520.
Levelt, W.J.M., On binocular rivalry. Soesterberg, The Netherlands: Institute for Perception, RVO-TNO, 1965.
Levine, D.S. and Grossberg, S., Visual illusions in neural networks: Line neutralization, tilt aftereffect, and angle expansion. Journal of Theoretical Biology, 1976, 61, 477-504.
Logan, B.F. Jr., Information in the zero-crossings of bandpass signals. Bell System Technical Journal, 1977, 56, 487-510.
Luneburg, R.K., Mathematical analysis of binocular vision. Princeton, NJ: Princeton University Press, 1947.
Luneburg, R.K., The metric of binocular visual space. Journal of the Optical Society of America, 1950, 40, 627-642.
McCourt, M.E., A spatial frequency dependent grating-induction effect. Vision Research, 1982, 22, 119-134.
Marr, D., The computation of lightness by the primate retina. Vision Research, 1974, 14, 1377-1388.
Marr, D., Early processing of visual information. Philosophical Transactions of the Royal Society of London B, 1976, 275, 483-524.
Marr, D., Artificial intelligence: A personal view. Artificial Intelligence, 1977, 9, 37-48.
Marr, D., Representing visual information. Lectures on Mathematics in the Life Sciences, 1978, 10, 101-180.
Marr, D., Vision: A computational investigation into the human representation and processing of visual information. San Francisco: W.H. Freeman, 1982.
Marr, D. and Hildreth, E., Theory of edge detection. Proceedings of the Royal Society of London (B), 1980, 207, 187-217.
Marr, D. and Poggio, T., Cooperative computation of stereo disparity. Science, 1976, 194, 283-287.
Marr, D. and Poggio, T., From understanding computation to understanding neural circuitry. Neurosciences Research Program Bulletin, 1977, 15, 470-488.
Marr, D. and Poggio, T., A computational theory of human stereo vision. Proceedings of the Royal Society of London B, 1979, 204, 301-328.
Maudarbocus, A.Y. and Ruddock, K.H., Non-linearity of visual signals in relation to shape-sensitive adaptation processes. Vision Research, 1973, 13, 1713-1737.
Mayhew, J.E.W. and Frisby, J.P., Psychophysical and computational studies towards a theory of human stereopsis. Artificial Intelligence, 1981, 17, 349-385.
Miller, R.F., The neuronal basis of ganglion-cell receptive-field organization and the physiology of amacrine cells. In F.O. Schmitt (Ed.), The neurosciences fourth study program. Cambridge, MA: MIT Press, 1979.
Minor, A.V., Flerova, G.I., and Byzov, A.L., Integral evoked potentials of single neurons in the frog olfactory bulb (in Russian). Neurophysiologica, 1969, 1, 269-278.
Mori, T., Apparent motion path composed of a serial concatenation of translations and rotations. Biological Cybernetics, 1982, 44, 31-34.
Nachmias, J. and Kocher, E.C., Visual detection and discrimination of luminance increments. Journal of the Optical Society of America, 1970, 60, 382-389.
Newell, A., Harpy, production systems, and human cognition. In R. Cole (Ed.), Perception and production of fluent speech. Hillsdale, NJ: Erlbaum, 1980.
O'Brien, V., Contour perception, illusion and reality. Journal of the Optical Society of America, 1958, 48, 112-119.
Osgood, C.E., Suci, G.J., and Tannenbaum, P.H., The measurement of meaning. Urbana: University of Illinois, 1957.
Poggio, T., Neurons sensitive to random-dot stereograms in areas 17 and 18 of the rhesus monkey. Society for Neuroscience Abstracts, 1980, 6.
Poggio, T., Trigger features or Fourier analysis in early vision: A new point of view. In D. Albrecht (Ed.), The recognition of pattern and form, Lecture Notes in Biomathematics. New York: Springer-Verlag, 1982, 44, 88-99.
Pollen, D.A. and Ronner, S.F., Phase relationships between adjacent simple cells in the visual cortex. Science, 1981, 212, 1409-1411.
Pollen, D.A., Spatial computation performed by simple and complex cells in the visual cortex of the cat. Vision Research, 1982, 22, 101-118.
Pulliam, K., Spatial frequency analysis of three-dimensional vision. Proceedings of the Society of Photo-Optical Instrumentation Engineers, 1981, 303, 71-77.
Raaijmakers, J.G.W. and Shiffrin, R.M., Search of associative memory. Psychological Review, 1981, 88, 93-134.
Rall, W., Core conductor theory and cable properties of neurons. In E.R. Kandel (Ed.), Handbook of physiology: The nervous system, Vol. 1, Part 1. Bethesda, MD: American Physiological Society, 1977.
Rashevsky, N., Mathematical biophysics. Chicago: University of Chicago Press, 1968.
Ratliff, F., Mach bands: Quantitative studies on neural networks in the retina. New York: Holden-Day, 1965.
Rauschecker, J.P.J., Campbell, F.W., and Atkinson, J., Colour opponent neurones in the human visual system. Nature, 1973, 245, 42-45.
Restle, F., Mathematical models in psychology. Baltimore, MD: Penguin Books, 1971.
Richards, W., Visual space perception. In E.C. Carterette and M.P. Friedman (Eds.), Handbook of perception, Vol. 5: Seeing. New York: Academic Press, 1975.
Richards, W. and Marr, D., Computational algorithms for visual processing. MIT Artificial Intelligence Laboratory, 1981.
Richards, W. and Miller, J.F. Jr., The corridor illusion. Perception and Psychophysics, 1971, 9, 421-423.
Richter, J. and Ullman, S., A model for the temporal organization of X- and Y-type receptive fields in the primate retina. Biological Cybernetics, 1982, 43, 127-145.
Robson, J.G., Receptive fields: Neural representation of the spatial and intensive attributes of the visual image. In E.C. Carterette and M.P. Friedman (Eds.), Handbook of perception, Vol. 5: Seeing. New York: Academic Press, 1975.
Robson, J.G. and Graham, N., Probability summation and regional variation in contrast sensitivity across the visual field. Vision Research, 1981, 21, 409-418.
Rock, I., In defense of unconscious inference. In W. Epstein (Ed.), Stability and constancy in visual perception. New York: Wiley and Sons, 1977.
Rodieck, R.W. and Stone, J., Analysis of receptive fields of cat retinal ganglion cells. Journal of Neurophysiology, 1965, 28, 833-849.
Rozental, S. (Ed.), Niels Bohr. New York: Wiley and Sons, 1967.
Rushton, W.A., Visual adaptation: The Ferrier lecture, 1962. Proceedings of the Royal Society of London B, 1965, 162, 20-46.
Sakata, H., Mechanism of Craik-O'Brien effect. Vision Research, 1981, 21, 693-699.
Schriever, W., Experimentelle Studien über stereoskopisches Sehen. Zeitschrift für Psychologie, 1925, 96, 113-170.
Schrödinger, E., Müller-Pouillets Lehrbuch der Physik, 11. Auflage, Zweiter Band. Braunschweig.
Schwartz, E.L., Computational anatomy and functional architecture of striate cortex: A spatial mapping approach to perceptual coding. Vision Research, 1980, 20, 645-669.
Sekuler, R., Visual motion perception. In E.C. Carterette and M.P. Friedman (Eds.), Handbook of perception, Vol. 5: Seeing. New York: Academic Press, 1975.
Shepard, R.N., Multidimensional scaling, tree-fitting, and clustering. Science, 1980, 210, 390-398.
Shepard, R.N. and Chipman, S., Second-order isomorphism of internal representations: Shapes of states. Cognitive Psychology, 1970, 1, 1-17.
Shepard, R.N. and Metzler, J., Mental rotation of three-dimensional objects. Science, 1971, 171, 701-703.
Shepherd, G.M., Synaptic organization of the mammalian olfactory bulb. Physiological Reviews, 1972, 52, 864-917.
Shipley, T., Visual contours in homogeneous space. Science, 1965, 150, 348-350.
Singer, W., The role of attention in developmental plasticity. Human Neurobiology, 1982, 1, 41-43.
Smith, A.T. and Over, R., Motion aftereffect with subjective contours. Perception and Psychophysics, 1979, 25, 95-98.
Sperling, G., Binocular vision: A physical and a neural theory. American Journal of Psychology, 1970, 83, 461-534.
Sperling, G., Mathematical models of binocular vision. In S. Grossberg (Ed.), Mathematical psychology and psychophysiology. Providence, RI: American Mathematical Society, 1981.
Sperling, G. and Sondhi, M.M., Model for visual luminance discrimination and flicker detection. Journal of the Optical Society of America, 1968, 58, 1133-1145.
Stevens, S.S., The quantification of sensation. Daedalus, 1959, 88, 606-621.
Stromeyer, C.F. III and Mansfield, R.J.W., Colored after-effects produced with moving edges. Perception and Psychophysics, 1970, 7, 108-114.
Swets, J.A., Is there a sensory threshold? Science, 1961, 134, 168-177.
Tschermak-Seysenegg, A. von, Introduction to physiological optics, P. Boeder (Trans.). Springfield, IL: C.C. Thomas, 1952.
Tynan, P. and Sekuler, R., Moving visual phantoms: A new contour completion effect. Science, 1975, 188, 951-952.
Uttal, W., The psychobiology of sensory coding. New York: Harper and Row, 1973.
van den Brink, G. and Keemink, C.J., Luminance gradients and edge effects. Vision Research, 1976, 16, 155-159.
van Nes, F.L., Experimental studies in spatio-temporal contrast transfer by the human eye. Utrecht: University of Utrecht, 1968.
van Nes, F.L. and Bouman, M.A., The effects of wavelength and luminance on visual modulation transfer. Excerpta Medica International Congress Series, 1965, 125, 183-192.
van Tuijl, H.F.J.M. and Leeuwenberg, E.L.J., Neon color spreading and structural information measures. Perception and Psychophysics, 1979, 25, 269-284.
von Békésy, G., Mach- and Hering-type lateral inhibition in vision. Vision Research, 1968, 8, 1483-1499.
Wallach, H. and Adams, P.A., Binocular rivalry of achromatic colors. American Journal of Psychology, 1954, 67, 513-516.
Watson, A.S., A Riemann geometric explanation of the visual illusions and figural aftereffects. In E.L.J. Leeuwenberg and H.F.J.M. Buffart (Eds.), Formal theories of visual perception. New York: Wiley and Sons, 1978.
Weisstein, N., The joy of Fourier analysis. In C.S. Harris (Ed.), Visual coding and adaptability. Hillsdale, NJ: Erlbaum, 1980.
Weisstein, N. and Harris, C.S., Masking and the unmasking of distributed representations in the visual system. In C.S. Harris (Ed.), Visual coding and adaptability. Hillsdale, NJ: Erlbaum, 1980.
Weisstein, N., Harris, C.S., Berbaum, K., Tangney, J., and Williams, A., Contrast reduction by small localized stimuli: Extensive spatial spread of above-threshold orientation-selective masking. Vision Research, 1977, 17, 341-350.
Weisstein, N. and Maguire, W., Computing the next step: Psychophysical measures of representation and interpretation. In E. Riseman and A. Hanson (Eds.), Computer vision systems. New York: Academic Press, 1978.
Weisstein, N., Maguire, W., and Berbaum, K., Visual phantoms produced by moving subjective contours generate a motion aftereffect. Bulletin of the Psychonomic Society, 1976, 8, 240 (abstract).
Weisstein, N., Maguire, W., and Berbaum, K., A phantom-motion aftereffect. Science, 1977, 198, 955-958.
Weisstein, N., Maguire, W., and Williams, M.C., Moving phantom contours and the phantom-motion aftereffect vary with perceived depth. Bulletin of the Psychonomic Society, 1978, 12, 248 (abstract).
Weisstein, N., Matthews, M., and Berbaum, K., Illusory contours can mask real contours. Bulletin of the Psychonomic Society, 1974, 4, 266 (abstract).
Werblin, F.S., Adaptation in a vertebrate retina: Intracellular recordings in Necturus. Journal of Neurophysiology, 1971, 34, 228-241.
Werner, H., Dynamics in binocular depth perception. Psychological Monograph (whole no. 218), 1937.
Wilson, H.R., A transducer function for threshold and suprathreshold human vision. Biological Cybernetics, 1980, 38, 171-178.
Wilson, H.R. and Bergen, J.R., A four-mechanism model for spatial vision. Vision Research, 1979, 19, 19-32.
Wilson, H.R. and Cowan, J.D., Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal, 1972, 12, 1-24.
Wilson, H.R. and Cowan, J.D., A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik, 1973, 13, 55-80.
Winston, P.H., MIT progress in understanding images. Proceedings: Image Understanding Workshop, Palo Alto, CA, 1979, 25-36.
Wyatt, H.J. and Daw, N.W., Directionally sensitive ganglion cells in the rabbit retina: Specificity for stimulus direction, size, and speed. Journal of Neurophysiology, 1975, 38, 613-626.
Zucker, S.W., Motion and the Mueller-Lyer illusion. McGill University Department of Electrical Engineering Technical Report 80-2R, 1980.
Chapter 2
NEURAL DYNAMICS OF FORM PERCEPTION: BOUNDARY COMPLETION, ILLUSORY FIGURES, AND NEON COLOR SPREADING

Preface

This Chapter illustrates our belief that the rules for form and color processing can best be understood by considering how these two types of processes interact. We suggest that form and color are handled by two parallel contour-extracting systems: The Boundary Contour System detects, sharpens, and completes boundaries. The Feature Contour System generates the color and brightness signals which elicit featural filling-in within these boundaries. Our analysis of these systems leads to several revolutionary conclusions, whose paradoxical nature is most clearly perceived when they are expressed unabashedly, without technical caveats or interpretations. These conclusions include: All boundaries are invisible. All line ends are illusory. Boundaries are formed discontinuously.

Such conclusions arise from an analysis of visual perception which provides simple, if as yet incomplete, answers to the following types of questions: How do we recognize emergent groupings without necessarily seeing contrasts that correspond to these groupings? How can boundaries be formed preattentively, yet be influenced by attention and learned information? How can local features initiate the organization of a percept, yet often be overruled by global configural properties that determine the final percept? How can early stages in boundary formation be sensitive to local image contrasts, yet the final boundary configuration possess structural, coherent, and hysteretic properties which can persist despite significant changes in local image contrasts? In order to understand such issues, we have come to realize that the visual system trades off several problems against one another.
Indeed, the visual system provides excellent examples of how individual neural subsystems, by needing to be specialized to deal with part of an adaptive problem, cannot have complete information about the problem as a whole, yet the interactions between these subsystems are so cleverly contrived that the system as a whole can synthesize a globally consistent solution to the problem. We call one of the key trade-offs the Boundary-Feature Trade-Off. A study of this trade-off reveals that several basic uncertainty principles limit the information which particular visual processing stages can, in principle, compute. The visual system does not, however, succumb to these uncertainties. Instead, later processing stages are designed to overcome them.

One such uncertainty principle concerns how the visual system discounts the illuminant. In order to do so, it extracts color edges at an early processing stage. To recapture the veridical colors that lie between these color edges, it uses the color edges to fill-in color interiors at a later processing stage. In order to contain this featural filling-in process, the visual system uses cells with oriented receptive fields to detect local boundary contrasts. Such oriented cells cannot, however, detect line ends and corners. "Orientational" certainty thus implies a type of "positional" uncertainty. A later processing stage completes the boundaries at line ends and corners to prevent colors from flowing out of them.

Often boundaries need to be completed over scenic regions that do not contain local image contrasts. Fuzzy bands of orientations cooperate across these regions to initiate the completion of these intervening boundaries. The final perceptual boundary is, however, sharp, not fuzzy. We explain how feedback interactions with the next level of processing eliminate this type of orientational uncertainty.
The circuit within the Boundary Contour System which completes sharp and coherent boundaries is a specialized type of cooperative-competitive feedback loop, which we have named the CC Loop. The featural filling-in process within the Feature Contour System does not possess coherent properties of this kind. Rather, it obeys a system of nonlinear diffusion equations which are capable of averaging featural qualities within each boundary compartment. Thus, unlike the FIRE theory of Chapter 1, in which a single edge-driven process controls form and featural filling-in, the present theory suggests that a pair of parallel edge-driven processes exist, and only the boundary completion process is a cooperative-competitive feedback network. The successes of these two theories in explaining their respective data bases therefore raise the burning question: How can they be unified into a single visual theory?

When the present theory was being constructed from an analysis of perceptual data, the relevant neural data base was spotty at best. Within a year of our first publications in 1983 and 1984, striking support for the theory was reported in both neural and further perceptual experiments. We consider the 1984 data of von der Heydt, Peterhans, and Baumgartner, which we summarize herein, to be particularly important, because it seems to confirm the fact that the visual system compensates for the positional uncertainty caused by orientational tuning in area 17 of the visual cortex by completing line ends at the next processing stage in area 18 of the visual cortex.
Psychological Review, 92, 173-211 (1985). © 1985 American Psychological Association, Inc. Reprinted by permission of the publisher.
NEURAL DYNAMICS OF FORM PERCEPTION: BOUNDARY COMPLETION, ILLUSORY FIGURES, AND NEON COLOR SPREADING
Stephen Grossberg† and Ennio Mingolla‡
Abstract

A real-time visual processing theory is used to analyse real and illusory contour formation, contour and brightness interactions, neon color spreading, complementary color induction, and filling-in of discounted illuminants and scotomas. The theory also physically interprets and generalizes Land's retinex theory. These phenomena are traced to adaptive processes that overcome limitations of visual uptake to synthesize informative visual representations of the external world. Two parallel contour sensitive processes interact to generate the theory's brightness, color, and form estimates. A boundary contour process is sensitive to orientation and amount of contrast but not to direction of contrast in scenic edges. It synthesizes boundaries sensitive to the global configuration of scenic elements. A feature contour process is insensitive to orientation but sensitive to both amount of contrast and to direction of contrast in scenic edges. It triggers a diffusive filling-in of featural quality within perceptual domains whose boundaries are determined by completed boundary contours. The boundary contour process is hypothesized to include cortical interactions initiated by hypercolumns in Area 17 of the visual cortex. The feature contour process is hypothesized to include cortical interactions initiated by the cytochrome oxydase staining blobs in Area 17. Relevant data from striate and prestriate visual cortex, including data that support two predictions, are reviewed. Implications for other perceptual theories and axioms of geometry are discussed.
† Supported in part by the Air Force Office of Scientific Research (AFOSR 82-0148) and the Office of Naval Research (ONR N00014-83-K0337). ‡ Supported in part by the Air Force Office of Scientific Research (AFOSR 82-0148).
1. Illusions as a Probe of Adaptive Visual Mechanisms

A fundamental goal of visual science is to explain how an unambiguous global visual representation is synthesized in response to ambiguous local visual cues. The difficulty of this problem is illustrated by two recurrent themes in visual perception: Human observers often do not see images that are retinally present, and they often do see images that are not retinally present. A huge data base concerning visual illusions amply illustrates the complex and often paradoxical relationship between scenic image and visual percept.

That paradoxical data abound in the field of visual perception becomes more understandable through a consideration of how visual information is acquired. For example, light passes through retinal veins before it reaches retinal photoreceptors, and light does not influence the retinal regions corresponding to the blind spot or retinal scotomas. The percepts of human observers are not distorted, however, by their retinal veins or blind spots during normal viewing conditions. Thus some images that are retinally present are not perceived because our visual processes are adaptively designed to free our percepts from imperfections of the visual uptake process. The same adaptive mechanisms that can free our percepts from images of retinal veins can also generate paradoxical percepts, as during the perception of stabilized images (Krauskopf, 1963; Pritchard, 1961; Pritchard, Heron, and Hebb, 1970; Riggs, Ratliff, Cornsweet, and Cornsweet, 1953; Yarbus, 1967).
The same adaptive mechanisms that can compensate for the blind spot and certain scotomas can also generate paradoxical percepts, as during filling-in reactions of one sort or another (Arend, Buehler, and Lockhead, 1971; Gellatly, 1980; Gerrits, de Haan, and Vendrick, 1966; Gerrits and Timmermann, 1969; Gerrits and Vendrick, 1970; Kanizsa, 1974; Kennedy, 1978, 1979, 1981; Redies and Spillmann, 1981; van Tuijl, 1975; van Tuijl and de Weert, 1979; van Tuijl and Leeuwenberg, 1979; Yarbus, 1967). These examples illustrate the general theme that many paradoxical percepts may be expressions of adaptive brain designs aimed at achieving informative visual representations of the external world. For this reason, paradoxical percepts may be used as probes and tests of the mechanisms that are hypothesized to instantiate these adaptive brain designs.

The present article makes particular use of data about illusory figures (Gellatly, 1980; Kanizsa, 1974; Kennedy, 1978, 1979, 1981; Parks, 1980; Parks and Marks, 1983; Petry, Harbeck, Conway, and Levey, 1983) and about neon color spreading (Redies and Spillmann, 1981; van Tuijl, 1975; van Tuijl and de Weert, 1979; van Tuijl and Leeuwenberg, 1979) to refine the adaptive designs and mechanisms of a real-time visual processing theory that is aimed at predicting and explaining data about depth, brightness, color, and form perception (Carpenter and Grossberg, 1981, 1983; Cohen and Grossberg, 1983, 1984a, 1984b; Grossberg, 1981, 1983a, 1983b, 1984a; Grossberg and Cohen, 1984; Mingolla and Grossberg, 1984). As in every theory about adaptive behavior, it is necessary to specify precisely the sense in which its targeted data are adaptive without falling into logically circular arguments. In the present work, this specification takes the form of a new perceptual processing principle, which we call the boundary-feature trade-off.
The need for such a principle can begin to be seen by considering how the perceptual system can generate behaviorally effective internal representations that compensate for several imperfections of the retinal image.

2. From Noisy Retina to Coherent Percept
Suppressing the percept of stabilized retinal veins is far from sufficient to generate a usable percept. The veins may occlude and segment scenic images in several places. Even a single scenic edge can be broken into several disjoint components. Somehow in the final percept, broken retinal edges are completed and occluded retinal color and brightness signals are filled-in. These completed and filled-in percepts are, in a strict mechanistic sense, illusory percepts.
Observers are often not aware of which parts of a perceived edge are "real" and which are "illusory." This fact clarifies why data about illusory figures are so important for discovering the mechanisms of form perception. This fact also points to one of the most fascinating properties of visual percepts. Although many percepts are, in a strict mechanistic sense, "illusory" percepts, they are often much more veridical, or "real," than the retinal data from which they are synthesized. This observation clarifies a sense in which each of the antipodal philosophical positions of realism and idealism is both correct and incorrect, as is often the case with deep but partial insights.

The example of the retinal veins suggests that two types of perceptual process, boundary completion and featural filling-in, work together to synthesize a final percept. In such a vague form, this distinction generates little conceptual momentum with which to build a theory. Data about the perception of artificially stabilized images provide further clues. The classical experiments of Krauskopf (1963) and Yarbus (1967) show that if certain scenic edges are artificially stabilized with respect to the retina, then colors and brightnesses that were previously bounded by these edges are seen to flow across, or fill-in, the percept until they are contained by the next scenic boundary. Such data suggest that the processes of boundary completion and featural filling-in can be dissociated. The boundary-feature trade-off makes precise the sense in which either of these processes, by itself, is insufficient to generate a final percept. Boundary-feature trade-off also suggests that the rules governing either process can only be discovered by studying how the two processes interact. This is true because each system is designed to offset insufficiencies of the other system. In particular, the process of boundary completion, by itself, could at best generate a world of outlines or cartoons.
The process of featural filling-in, by itself, could at best generate a world of formless brightness and color qualities. Our theory goes further to suggest the more radical conclusion that the process of boundary completion, by itself, would generate a world of invisible outlines, and the process of featural filling-in, by itself, would generate a world of invisible featural qualities. This conclusion follows from the realization that an early stage of both boundary processing and of feature processing consists of the extraction of different types of contour information. These two contour-extracting processes take place in parallel, before their results are reintegrated at a later processing stage. Previous perceptual theories have not clearly separated these two contour-extracting systems. One reason for this omission is that, although each scenic edge can activate both the boundary contour system and the feature contour system, only the net effect of their interaction at a later stage is perceived. Another reason is that the completed boundaries, by themselves, are not visible. They gain visibility by restricting featural filling-in and thereby causing featural contrast differences across the perceptual space.

The ecological basis for these conclusions becomes clearer by considering data about stabilized images (Yarbus, 1967) alongside data about brightness and color perception (Land, 1977). These latter data can be approached by considering another ambiguity in the optical input to the retina. The visual world is typically viewed in inhomogeneous lighting conditions. The scenic luminances that reach the retina thus confound variable lighting conditions with invariant object colors. It has long been known that the brain somehow "discounts the illuminant" in order to generate percepts whose colors are more veridical than those in the retinal image (Helmholtz, 1962).
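The logic of discounting a slowly varying illuminant by measuring contrasts at edges can be made concrete with a minimal one-dimensional sketch. This is our illustration only, not a published retinex algorithm: the two-patch "scene," the linear illuminant gradient, and all array sizes are arbitrary toy choices. Because luminance is reflectance times illumination, log-luminance differences at a reflectance edge recover the reflectance ratio nearly independently of the slowly drifting illuminant.

```python
import numpy as np

# Toy 1-D scene: two reflectance patches under a slowly varying illuminant.
# (Illustrative sketch of edge-based discounting, not a retinex algorithm.)
reflectance = np.array([0.2] * 50 + [0.8] * 50)   # invariant object property
illuminant = np.linspace(0.5, 1.5, 100)           # variable lighting
luminance = reflectance * illuminant              # what reaches the retina

# Within each patch the illuminant drifts, so raw luminance is unreliable.
# At the patch border, lighting barely changes, so the local log-luminance
# difference is a good estimate of the log reflectance ratio.
edge_contrast = np.diff(np.log(luminance))
border = int(np.argmax(np.abs(edge_contrast)))

print(border)                         # 49: the patch boundary is found
print(np.exp(edge_contrast[border]))  # close to 4.0 = 0.8 / 0.2
```

The inferred ratio deviates from 4.0 only by the illuminant's tiny change across one sample, which is the sense in which an edge-contrast measure "discounts" the lighting while the raw luminances do not.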
The studies of Land (1977) have refined this insight by showing that the perceived colors within a picture constructed from overlapping patches of color are determined by the relative contrasts at the edges between successive patches. Lighting conditions can differ considerably as one moves across each colored patch. At each patch boundary, lighting conditions typically change very little. A measure of relative featural contrast across such a boundary therefore provides a good local estimate of object reflectances. Land's results about discounting the illuminant suggest that an early stage of the
featural extraction process consists in computing featural contrasts at scenic edges. Data such as that of Yarbus (1967), which show that boundaries and features can be dissociated, then suggest that the extraction of feature contour and boundary contour information are two separate processes. The Land (1977) data also support the concept of a featural filling-in process. Discounting the illuminant amounts to suppressing the color signals from within the color patches. All that remains are nondiscounted feature contrasts at the patch boundaries. Without featural filling-in, we would perceive a world of colored edges, instead of a world of extended forms. The present theory provides a physical interpretation and generalization of the Land retinex theory of brightness and color perception (Grossberg, 1984a), including an explanation of how we can see extended color domains. This explanation is summarized in Section 18.

Our theory can be understood entirely as a perceptual processing theory. As its perceptual constructs developed, however, they began to exhibit striking formal similarities with recent neural data. Some of these neural analogs are summarized in Table 1 below. Moreover, two of the theory's predictions about the process of boundary completion have recently received experimental support from recordings by von der Heydt, Peterhans, and Baumgartner (1984) on cells in Area 18 of the monkey visual cortex. Neurophysiological linkages and predictions of the theory are more completely described in Section 20. Due to the existence of this neural interpretation, the formal nodes in the model network are called cells throughout the article.

3. Boundary Contour System and Feature Contour System
Our theory claims that two distinct types of edge, or contour, computations are carried out within parallel systems during brightness, color, and form perception (Grossberg, 1983a, 1983b, 1984a). These systems are called the boundary contour system (BCS) and the feature contour system (FCS). Boundary contour signals are used to generate perceptual boundaries, both "real" and "illusory." Feature contour signals trigger the filling-in processes whereby brightnesses and colors spread until they either hit their first boundary contours or are attenuated due to their spatial spread. Boundary contours are not, in isolation, visible. They gain visibility by restricting the filling-in that is triggered by feature contour signals and thereby causing featural contrasts across perceptual space. These two systems obey different rules. We will summarize the main rules before using them to explain paradoxical visual data. Then we will explain how these rules can be understood as consequences of boundary-feature trade-off.

4. Boundary Contours and Boundary Completion
The process whereby boundary contours are built up is initiated by the activation of oriented masks, or elongated receptive fields, at each position of perceptual space (Hubel and Wiesel, 1977). An oriented mask is a cell, or cell population, that is selectively responsive to scenic edges. Each mask is sensitive to scenic edges that activate a prescribed small region of the retina, if the edge orientations lie within a prescribed band of orientations with respect to the retina. A family of such oriented masks exists at every network position, such that each mask is sensitive to a different band of edge orientations within its prescribed small region of the scene.

Orientation and Contrast

The output signals from the oriented masks are sensitive to the orientation and to the amount of contrast, but not to the direction of contrast, at an edge of a visual scene. A vertical boundary contour can thus be activated by either a close-to-vertical dark-light edge or a close-to-vertical light-dark edge at a fixed scenic position. The process whereby two like-oriented masks that are sensitive to direction of contrast at the same
perceptual location give rise to an output signal that is not sensitive to direction of contrast is designated by a plus sign in Figure 1a.

Short-Range Competition

The outputs from these masks activate two successive stages of short-range competition that obey different rules of interaction.

1. The cells that react to output signals due to like-oriented masks compete between nearby perceptual locations (Figure 1b). Thus, a mask of fixed orientation excites the like-oriented cells at its location and inhibits the like-oriented cells at nearby locations. In other words, an on-center off-surround organization of like-oriented cell interactions exists around each perceptual location. It may be that these spatial interactions form part of the network whereby the masks acquire their orientational specificity during development. This possibility is not considered in this article.

2. The outputs from this competitive stage input to the next competitive stage. Here, cells compete that represent perpendicular orientations at the same perceptual location (Figure 1c). This competition defines a push-pull opponent process. If a given orientation is inhibited, then its perpendicular orientation is disinhibited.

In summary, a stage of competition between like orientations at different, but nearby, positions is followed by a stage of competition between perpendicular orientations at the same position.

Long-Range Oriented Cooperation and Boundary Completion

The outputs from the second competitive stage input to a spatially long-range cooperative process. We call this process the boundary completion process. Outputs due to like-oriented masks that are approximately aligned across perceptual space can cooperate via this process to synthesize an intervening boundary. We show how both "real" and "illusory" boundaries can be generated by this boundary completion process.
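The mask-output rule and the first competitive stage can be caricatured in a few lines of NumPy. This is a schematic of the stated rules, not the article's network equations: the one-dimensional image, the single orientation, and the off-surround weights are our own toy choices.

```python
import numpy as np

# A 1-D luminance profile with one dark-light and one light-dark edge.
image = np.array([0., 0., 0., 1., 1., 1., 1., 0., 0., 0.])

# Oriented mask outputs: sensitive to the amount of contrast but not to
# its direction, so both edge polarities yield the same boundary signal.
signed_contrast = np.diff(image)
mask_out = np.abs(signed_contrast)

# First competitive stage: like-oriented cells at nearby positions
# inhibit one another (on-center off-surround across position).
surround = np.convolve(mask_out, [0.5, 0.0, 0.5], mode="same")
stage1 = np.maximum(mask_out - surround, 0.0)

print(mask_out[2], mask_out[6])   # 1.0 1.0: both edge polarities match
```

Both the dark-light edge and the light-dark edge survive the surround competition here because they are well separated; closely spaced like-oriented signals would suppress one another.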
The following two demonstrations illustrate a boundary completion process with the above properties of orientation and contrast, short-range competition, and long-range cooperation and boundary completion. In Figure 2a, four black pac-man figures are arranged at the vertices of an imaginary square on a white background. The famous illusory Kanizsa (1974) square can then be seen. The same is true when two pac-man figures are black, the other two are white, and the background is grey, as in Figure 2b. The black pac-man figures form dark-light edges with respect to the grey background. The white pac-man figures form light-dark edges with the grey background. The visibility of illusory edges around the illusory square shows that a process exists that is capable of completing boundaries between edges with opposite directions of contrast. The boundary completion process is thus sensitive to orientational alignment across perceptual space and to amount of contrast, but not to direction of contrast.

Another simple demonstration of these boundary completing properties can be constructed as follows. Divide a square into two equal rectangles along an imaginary boundary. Color one rectangle a uniform shade of grey. Color the other rectangle in shades of grey that progress from light to dark as one moves from end 1 of the rectangle to end 2 of the rectangle. Color end 1 a lighter shade than the uniform grey of the other rectangle, and color end 2 a darker shade than the uniform grey of the other rectangle. As one moves from end 1 to end 2, an intermediate grey region is passed whose luminance approximately equals that of the uniform rectangle. At end 1, a light-dark edge exists from the nonuniform rectangle to the uniform rectangle. At end 2, a dark-light edge exists from the nonuniform rectangle to the uniform rectangle. An observer can see an illusory edge that joins the two edges of opposite contrast and separates the intermediate rectangle region of equal luminance.
Although this boundary completion process may seem paradoxical when its effects are seen in Kanizsa squares, we hypothesize that this process is also used to complete boundaries across retinal scotomas, across the faded images of stabilized retinal veins,
Figure 1. (a) Boundary contour signals sensitive to the orientation and amount of contrast at a scenic edge, but not to its direction of contrast. (b) Like orientations compete at nearby perceptual locations. (c) Different orientations compete at each perceptual location. (d) Once activated, aligned orientations can cooperate across a larger visual domain to form “real” and “illusory” contours.
Figure 2. (a) Illusory Kanizsa square induced by four black pac-man figures. (From "Subjective Contours" by G. Kanizsa, 1976, Scientific American, 234, p. 51. Copyright 1976 by Scientific American, Inc. Adapted by permission.) (b) An illusory square induced by two black and two white pac-man figures on a grey background. Illusory contours can thus join edges with opposite directions of contrast. (This effect may be weakened by the photographic reproduction process.)
and between all perceptual domains that are separated by sharp brightness or color differences.

Binocular Matching

A monocular boundary contour can be generated when a single eye views a scene. When two eyes view a scene, a binocular interaction can occur between outputs from oriented masks that respond to the same retinal positions of the two eyes. This interaction leads to binocular competition between perpendicular orientations at each position. This competition takes place at, or before, the competitive stage. Although binocular interactions occur within the boundary contour system, they will not be needed to explain this article's targeted data.

Boundary contours are like frames without pictures. The pictorial data themselves are derived from the feature contour system. We suggest that the same visual source inputs in parallel to both the boundary contour system and the feature contour system, and that the outputs of both types of processes interact in a context-sensitive way at a later stage.

5. Feature Contours and Diffusive Filling-In
The feature contour process obeys different rules of contrast than does the boundary contour process.

Contrast

The feature contour process is insensitive to the orientation of contrast in a scenic edge, but it is sensitive both to the direction of contrast and to the amount of contrast, unlike the boundary contour process. Speaking intuitively, in order to compute the relative brightness across a scenic boundary, it is necessary to keep track of which side of the scenic boundary has a larger reflectance. Sensitivity to direction of contrast is also used to determine which side of a red-green scenic boundary is red and which is green. Due to its sensitivity to the amount of contrast, feature contour signals discount the illuminant. We envision that three parallel channels of double-opponent feature contour signals exist: light-dark, red-green, and blue-yellow (Boynton, 1975; DeValois and DeValois, 1975; Mollon and Sharpe, 1983). These double-opponent cells are replicated in multiple cellular fields that are maximally sensitive to different spatial frequencies (Graham, 1981; Graham and Nachmias, 1971). Both of these processing requirements are satisfied in a network that is called a gated dipole field (Grossberg, 1980, 1982). The detailed properties of double-opponent gated dipole fields are not needed in this article. Hence they are not discussed further. A variant of the gated dipole field design is, however, used to instantiate the boundary contour system in Section 15.

The feature contour process also obeys different rules of spatial interaction than those governing the boundary contour process.

Diffusive Filling-In

Boundary contours activate a boundary completion process that synthesizes the boundaries which define monocular perceptual domains. Feature contours activate a diffusive filling-in process that spreads featural qualities, such as brightness or color, across these perceptual domains.
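The asymmetry between the two systems' contrast rules can be written down directly. The following shorthand is our own schematic, not the model equations: both systems are shown starting from the same local edge measure, and the only difference depicted is that the boundary signal discards the sign that the feature signal preserves.

```python
# Schematic contrast rules for the two systems (our shorthand only).

def boundary_contour_signal(edge_contrast):
    # Sensitive to the amount of contrast, blind to its direction.
    return abs(edge_contrast)

def feature_contour_signal(edge_contrast):
    # Sensitive to amount AND direction: which side is brighter,
    # or which side of a red-green border is red.
    return edge_contrast

dark_to_light = +0.6   # dark region on the left, light on the right
light_to_dark = -0.6   # the same edge with its polarity reversed

print(boundary_contour_signal(dark_to_light) ==
      boundary_contour_signal(light_to_dark))   # True: BC cannot tell
print(feature_contour_signal(dark_to_light) ==
      feature_contour_signal(light_to_dark))    # False: FC can
```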
Figure 3 depicts the main properties of this filling-in process. We assume that featural filling-in occurs within a syncytium of cell compartments. By a syncytium of cells, we mean a regular array of intimately connected cells such that contiguous cells can easily pass signals between each other’s compartment membranes. A feature contour input signal to a cell of the syncytium activates that cell. Due to the syncytial coupling of this cell with its neighbors, the activity can rapidly spread to neighboring cells, then to neighbors of the neighbors, and so on. Because the spreading occurs via a diffusion of activity (Cohen and Grossberg, 1984b; Grossberg, 1984a), it
Figure 3. Monocular brightness and color stage domain (MBC). Monocular feature contour signals activate cell compartments that permit rapid lateral diffusion of activity, or potential, across their compartmental boundaries, except at those compartment boundaries that receive boundary contour signals from the BCS stage of Figure 4. Consequently, the feature contour signals are smoothed except at boundaries that are completed within the BCS stage.

tends to average the activity that is triggered by a feature contour input signal across the cells that receive this spreading activity. This averaging of activity spreads across the syncytium with a space constant that depends on the electrical properties of both the cell interiors and their membranes. The electrical properties of the cell membranes can be altered by boundary contour signals in the following way. A boundary contour signal is assumed to decrease the diffusion constant of its target cell membranes within the cell syncytium. It does so by acting as an inhibitory gating signal that causes an increase in cell membrane resistance. A boundary contour signal hereby creates a barrier to the filling-in process at its target cells. This diffusive filling-in reaction is hypothesized to instantiate featural filling-in over retinal scotomas, over the faded images of stabilized retinal veins, and over the illuminants that are discounted by feature contour preprocessing. Three types of spatial interaction are implied by this description of the feature contour system: (a) Spatial frequency preprocessing: feature contour signals arise as the outputs of several double-opponent networks whose different receptive field sizes make them maximally sensitive to different spatial frequencies. (b) Diffusive filling-in: feature contour signals within each spatial scale then cause activity to spread across
the scale cell's syncytium. This filling-in process has its own diffusive bandwidth. (c) Figural boundaries: boundary contour signals define the limits of featural filling-in. Boundary contours are sensitive to the configuration of all edges in a scene, rather than to any single receptive field size. Previous perceptual theories have tended to focus on one or another of these factors, but not on their interactive properties.

6. Macrocircuit of Processing Stages

Figure 4 describes a macrocircuit of processing stages into which the microstages of the boundary contour system and feature contour system can be embedded. The processes described by this macrocircuit were introduced to explain how global properties of depth, brightness, and form information can be generated from monocularly and binocularly viewed patterns (Grossberg, 1983b, 1984a). Table 1 lists the full names of the abbreviated macrocircuit stages, as well as their neural interpretation. Each monocular preprocessing (MP) stage MPL and MPR can generate inputs, in parallel, to its boundary contour system and its feature contour system. The pathway MPL → BCS carries inputs to the left-monocular boundary contour system. The pathway MPL → MBCL carries inputs to the left-monocular feature contour system. Only after all the stages of scale-specific, orientation-specific, contrast-specific, competitive, and cooperative interactions take place within the BCS stage, as in Section 4, does this stage give rise to boundary contour signals BCS → MBCL that act as barriers to the diffusive filling-in triggered by MPL → MBCL feature contour signals, as in Section 5. The divergence of the pathways MPL → MBCL and MPL → BCS allows the boundary contour system and the feature contour system to be processed according to their different rules before their signals recombine within the cell syncytia.
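The barrier role that BCS → MBC signals play in this macrocircuit can be sketched as a one-dimensional gated diffusion. The discretization, sizes, and constants below are illustrative choices of ours, not the theory's nonlinear diffusion equations: a boundary contour signal is modeled simply as a compartment wall whose conductance has been driven to zero by increased membrane resistance.

```python
import numpy as np

# 1-D syncytium of 60 cell compartments.  Activity diffuses between
# neighbors through wall conductances; a boundary contour signal raises
# one wall's membrane resistance, i.e. drives its conductance to zero.
n = 60
activity = np.zeros(n)
activity[15] = 1.0                    # one feature contour input signal
conductance = np.full(n - 1, 0.25)    # coupling between adjacent cells
conductance[29] = 0.0                 # completed boundary between 29 | 30

for _ in range(2000):                 # relax toward steady state
    flow = conductance * np.diff(activity)
    activity[:-1] += flow             # conservative exchange across
    activity[1:] -= flow              # each compartment wall

left, right = activity[:30], activity[30:]
print(right.max())                    # 0.0: nothing crosses the boundary
print(round(float(left.mean()), 4))   # the input averaged over 30 cells
```

At equilibrium the left region is uniform at 1/30 of the injected activity, the featural quality having filled in up to, but not past, the boundary; this is the sense in which completed BCS boundaries contain MPL → MBCL filling-in.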
7. Neon Color Spreading and Complementary Color Induction

The phenomenon of neon color spreading illustrates the existence of boundary contours and of feature contours in a vivid way. Redies and Spillmann (1981), for example, reported an experiment using a solid red cross and an Ehrenstein figure. When the solid red cross is perceived in isolation, it looks quite uninteresting (Figure 5a). When an Ehrenstein figure is perceived in isolation, it generates an illusory contour whose shape (e.g., circle or diamond) depends on the viewing distance. When the red cross is placed inside the Ehrenstein figure, the red color flows out of its containing contours and tends to fill the illusory figure (Figure 5b).

Our explanation of this percept uses all of the rules that we listed. We suggest that vertical boundary contours of the Ehrenstein figure inhibit contiguous boundary contours of like orientation within the red cross. This property uses the orientation and contrast sensitivity of boundary masks (Figure 1a) and their ability to inhibit like-oriented nearby cells, irrespective of direction of contrast (Figures 1a and 1b). This inhibitory action within the BCS does not prevent the processing of feature contour signals from stage MPL to stage MBCL and from stage MPR to stage MBCR, because feature contour signals are received by MBCL and MBCR despite the fact that some of their corresponding boundary contour signals are inhibited within the BCS stage. The inhibition of these boundary contour signals within the BCS stage allows the red featural activity to diffuse outside of the red cross. The illusory boundary contour that is induced by the Ehrenstein figure restricts the diffusion of this red-labeled activation.
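The disinhibition step in this explanation, where suppressing one orientation releases its perpendicular competitor, can be caricatured as a push-pull opponent pair. The numbers and the shared baseline below are toy values of ours, not the gated dipole field model used later in the article.

```python
# Toy push-pull opponent pair for perpendicular orientations at one
# position (illustrative; not the gated dipole field of Section 15).

def opponent(vertical_drive, horizontal_drive):
    # Each orientation's output is its drive minus its competitor's,
    # half-wave rectified; inhibiting one disinhibits the other.
    v = max(vertical_drive - horizontal_drive, 0.0)
    h = max(horizontal_drive - vertical_drive, 0.0)
    return v, h

baseline = 0.5   # shared tonic activity at both orientations

# The cross's vertical contour alone: vertical wins at this position.
print(opponent(baseline + 0.25, baseline))          # (0.25, 0.0)

# The Ehrenstein figure inhibits that vertical contour below baseline;
# the perpendicular orientation is disinhibited and can then cooperate
# with aligned signals to form the illusory contour.
print(opponent(baseline + 0.25 - 0.5, baseline))    # (0.0, 0.25)
```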
Thus during neon color spreading, one can "see" the difference between boundary contours and feature contours, as well as the role of illusory boundary contours in restricting the diffusion of featural activity. In Figure 5b, the illusory boundary induced
Figure 4. Macrocircuit of processing stages. Table 1 lists the functional names of the abbreviated stages and indicates a plausible neural interpretation of these stages. Boundary contour formation is assumed to occur within the BCS stage. Its output signals to the monocular MBCL and MBCR stages define boundaries within which feature contour signals from MPL and MPR, respectively, can trigger the spreading, or diffusion, of featural quality.
Neural Dynamics of Form Perception
TABLE 1
Summary of Neural Analogs

Abbreviation  Full Name                                    Neural Interpretation

MPL           Left monocular preprocessing stage           Lateral geniculate nucleus
MPR           Right monocular preprocessing stage          Lateral geniculate nucleus
BCS           Boundary contour synthesis stage             Interactions initiated by the hypercolumns in striate cortex (Area 17) (Hubel and Wiesel, 1977)
MBCL          Left monocular brightness and color stage    Interactions initiated by the cytochrome oxidase staining blobs, Area 17 (Hendrickson, Hunt, and Wu, 1981; Horton and Hubel, 1981; Hubel and Livingstone, 1981; Livingstone and Hubel, 1982)
MBCR          Right monocular brightness and color stage   Interactions initiated by the cytochrome oxidase staining blobs, Area 17
BP            Binocular percept stage                      Area V4 of the prestriate cortex (Zeki, 1983a, 1983b)
by the Ehrenstein figure restricts the flow of red featural quality, but the “real” boundary of the cross does not. This percept illustrates that boundary contours, both “real” and “illusory,” are generated by the same process. The illusory contour in Figure 5b tends to be perpendicular to its inducing Ehrenstein figures. Thus, the Ehrenstein figure generates two simultaneous effects. It inhibits like-oriented boundary contours at nearby positions, and it excites perpendicularly oriented boundary contours at the same nearby positions. We explain this effect as follows. The boundary contours of the Ehrenstein figure inhibit contiguous like-oriented boundary contours of the red cross, as in Figure 1b. By Figure 1c, perpendicular boundary contours at each perceptual position compete as part of a push-pull opponent process. By inhibiting the like-oriented boundary contours of the red cross, perpendicularly oriented boundary contours at the corresponding positions are activated due to disinhibition. These disinhibited boundary contours can then cooperate with other approximately aligned boundary contours to form an illusory contour, as in Figure 1d. This cooperative process further weakens the inhibited boundary contours of the red cross, as in Figure 1c, thereby indicating why a strong neon effect depends on the percept of the illusory figure. Redies and Spillmann (1981) systematically varied the distance of the red cross from the Ehrenstein figure, their relative orientations, their relative sizes, and so forth, to study how the strength of the spreading effect changes with scenic parameters. They report that “thin [red] flanks running alongside the red connecting lines” (Redies and Spillmann, 1981) can occur if the Ehrenstein figure is slightly separated from the cross or if the orientations of the cross and the Ehrenstein figure differ. In our theory, the orientation specificity (Figure 1a) and distance dependence (Figure 1b) of the inhibitory
Chapter 2
Figure 5. Neon color spreading. (a) A red cross in isolation appears unremarkable. (b) When the cross is surrounded by an Ehrenstein figure, the red color can flow out of the cross until it hits the illusory contour induced by the Ehrenstein figure.
process among like-oriented cells suggest why these manipulations weaken the inhibitory effect of Ehrenstein boundary contours on the boundary contours of the cross. When the boundary contours of the cross are less inhibited, they can better restrict the diffusion of red-labeled activation. Then the red color can only bleed outside the contours of the cross. One might ask why the ability of the Ehrenstein boundary contours to inhibit the boundary contours of the cross does not also imply that Ehrenstein boundary contours inhibit contiguous Ehrenstein boundary contours. If they do, then how do any boundary contours survive this process of mutual inhibition? If they do not, then is this explanation of neon color spreading fallacious? Our explanation survives this challenge because the boundary contour process is sensitive to the amount of contrast, even though it is insensitive to the direction of contrast, as in Figure 1a. Contiguous boundary contours do mutually inhibit one another, but this inhibition is a type of shunting lateral inhibition (Appendix) such that equally strong inhibitory contour signals can remain positive and balanced (Grossberg, 1983a). If, however, the Ehrenstein boundary contour signals are stronger than the boundary contour signals of the cross by a sufficient amount, then the latter signals can be inhibited. This formal property provides an explanation of the empirical fact that neon color spreading is enhanced when the contrast of a figure (e.g., the cross) relative to the background illumination is less than the contrast of the bounding contours (e.g., the Ehrenstein figure) relative to the background illumination (van Tuijl and de Weert, 1979). This last point emphasizes one of the paradoxical properties of the boundary contour system that may have delayed its discovery. In order to work properly, boundary contour responses need to be sensitive to the amount of contrast in scenic edges.
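The ratio-processing property invoked here can be illustrated with the textbook steady state of a shunting on-center off-surround network. The parameters A, B, and the output threshold below are illustrative choices of ours, not values from the theory:

```python
def shunting_steady_state(inputs, A=1.0, B=1.0):
    """Steady-state activities of a shunting on-center off-surround network:

        dx_i/dt = -A*x_i + (B - x_i)*I_i - x_i * sum_{k != i} I_k
        =>  x_i = B*I_i / (A + sum_k I_k)

    The division makes the network ratio-processing: outputs depend on
    relative, not absolute, input strengths.
    """
    total = sum(inputs)
    return [B * I / (A + total) for I in inputs]

THETA = 0.25  # illustrative output threshold for a surviving contour signal

# Equally strong contiguous boundary signals stay positive and balanced.
balanced = shunting_steady_state([2.0, 2.0])

# A sufficiently stronger Ehrenstein signal drives the cross's signal below threshold.
unbalanced = shunting_steady_state([8.0, 1.0])
```

With equal inputs both outputs survive the threshold; with a large enough imbalance, the weaker contour signal is functionally inhibited, as the neon spreading account requires.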
Despite this contrast sensitivity, boundary contours can be invisible if they do not cause featural contrasts to occur. A large cellular activation does not necessarily have any perceptual effects within the boundary contour system. Although the rules of the boundary contour system and the feature contour system may prove sufficient to explain neon color spreading, this explanation, in itself, does not reveal the adaptive role of these rules in visual perception. The adaptive role of these rules will become apparent when we ask the following questions: Why does not color spread more often? How does the visual system succeed as well as it does in preventing featural filling-in from flooding every scene? In Section 13, we show how these rules prevent a massive flow of featural quality in response to such simple images as individual lines and corners, not just in response to carefully constructed images like red crosses within Ehrenstein figures. We will now build up to this insight in stages. The same concepts also help to explain the complementary color induction that van Tuijl (1975) reported in his original article about the neon effect (Grossberg, 1984a). To see this, draw on white paper a regular grid of horizontal and vertical black lines that form 5mm squares. Replace a subset of black lines by blue lines. Let this subset of lines form the smallest imaginary diamond shape that includes complete vertical or horizontal line segments of the grid (Figure 6). When an observer inspects this pattern, the blue color of the lines appears to spread around the blue line segments until it reaches the subjective contours of the diamond shape. This percept has the same explanation as the percept in Figure 5b. Next replace the black lines by blue lines and the blue lines by black lines. Then the illusory diamond looks yellow rather than blue. Let us suppose that the yellow color in the diamond is induced by the blue lines in the background matrix.
Then why in the previous display is not a yellow color in the background induced by the blue lines in the diamond? Why is the complementary color yellow perceived when the background contains blue lines, whereas the original color blue is perceived when the diamond contains blue lines? What is the reason for this asymmetry? This asymmetry can be explained in the following way. When the diamond is
Figure 6. Neon color spreading and complementary color induction. When the lattice in (a) is composed of black lines and the contour in (b) composed of blue lines is inserted within its diamond-shaped space, then blue color flows within the illusory diamond induced by the black lines. When the lattice in (a) is blue and the contour in (b) is black, then yellow color can flow within the illusory diamond. (From “A New Visual Illusion: Neonlike Color Spreading and Complementary Color Induction between Subjective Contours” by H.F.J.M. van Tuijl, 1975, Acta Psychologica, 39, pp. 441-445. Copyright 1975 by North-Holland. Adapted by permission.)
composed of blue lines, then double-opponent color processing enables the blue lines to induce contiguous yellow feature contour signals in the background. These yellow feature contour signals are constrained by the boundary contour signals of the black lines to remain within a spatial domain that also receives feature contour signals from the black lines. The yellow color is thus not seen in the background. By contrast, the boundary contour signals of the black lines in the background inhibit the contiguous boundary contour signals of the blue lines in the diamond. The blue feature contour signals of the blue lines can thus flow within the diamond. When blue lines form the background, they have two effects on the diamond. They induce yellow feature contour signals via double-opponent processing. They also inhibit the boundary contour signals of the contiguous black lines. Hence the yellow color can flow within the diamond. To carry out this explanation quantitatively, we need to study how double-opponent color processes (light-dark, red-green, yellow-blue) preprocess the feature contour signals from stage MPL to stage MBCL and from stage MPR to stage MBCR. Double-opponent color processes with the requisite properties can be defined using gated dipole fields (Grossberg, 1980). We also need to quantitatively specify the rules whereby the boundary completion process responds to complex spatial patterns such as grids and Ehrenstein figures. We now approach this task by considering properties of illusory figures.

8. Contrast, Assimilation, and Grouping
The theoretical approach closest in spirit to ours is perhaps that of Kennedy (1979). We agree with many of Kennedy’s theoretical conclusions, such as:

Some kind of brightness manipulation ... acts on certain kinds of inducing elements but in a way which is related to aspects of form.... Changes in the luminance of the display have different effects on standard brightness contrast and subjective contour effects.... Something over and beyond simple brightness contrast is called for. (p.176)

Grouping factors have to be an essential part of any discussion of subjective contours. (p.185)

Contrast and grouping factors produce a percept that has some characteristics of a percept of an environmental origin. (p.189)

Speaking intuitively, Kennedy’s remarks about contrast can be compared with properties of our feature contour system, and his remarks about grouping can be compared with properties of our boundary contour system. Once these comparisons are made, however, our theory diverges significantly from that of Kennedy, in part because his theory does not probe the mechanistic level. For example, Kennedy (1979) invoked two complementary processes to predict brightness changes: contrast and assimilation. Figure 7 describes the assimilation and contrast that are hypothesized to be induced by three shapes. Contrast is assumed to induce a brightening effect and assimilation is assumed to induce a darkening effect. This concept of assimilation is often used to explain how darkness or color can spread throughout an illusory figure (Ware, 1980). In our theory, local brightening and darkening effects are both consequences of a unified feature contour process. The fact that different parts of a figure induce different relative contrast effects does not imply that different levels of relative contrast are due to different processes.
Also, in our theory a darkening effect throughout an illusory figure is not due to a lower relative contrast per se, but to inhibition of a boundary contour leading to diffusion of a darker featural quality throughout the figure. Our theory thus supports the conclusion that perception of relative brightening and darkening effects
Figure 7. Three shapes redrawn from Kennedy (1979). Regions of contrast are indicated by [-] signs. Regions of assimilation are indicated by [+] signs. Our theory suggests that the net brightening (contrast) or assimilation (darkening) that occurs between two figures depends not only on figurally induced feature contour signals of variable contrast, but also on the configurationally sensitive boundary contours within which the featurally induced activations fill in. (From Perception and Pictorial Representation, C.F. Nodine and D.F. Fisher (Eds.), New York: Praeger. Copyright 1979 by Praeger Publishers. Adapted by permission.)
cannot be explained just using locally defined scenic properties. The global configuration of all scenic elements determines where and how strongly boundary contours will be generated. Only after these boundary contours are completed can one determine whether the spatial distribution and intensity of all feature contour signals within these boundary contours will have a relative brightening or darkening effect. The theory of Kennedy (1979) comes close to this realization in terms of his distinction between brightness and grouping processes. Kennedy suggested, however, that these processes are computed in serial stages, whereas we suggest that they are computed in parallel stages before being joined together (Figure 4). Thus Kennedy (1979, p.191) wrote:

First, there are properties that are dealt with in perception of their brightness characteristics.... Once this kind of processing is complete, a copy is handed on to a more global processing system. Second, there are properties that allow them to be treated globally and grouped.

Although our work has required new concepts, distinctions, and mechanisms beyond those considered by Kennedy, we find in his work a seminal precursor of our own.

9. Boundary Completion: Positive Feedback Between Local Competition and Long-Range Cooperation of Oriented Boundary Contour Segments
The following discussion employs a series of pictures that elicit illusory contour percepts to suggest more detailed properties of the cooperative boundary completion process of Figure 1d. One or even several randomly juxtaposed black lines on white paper need not induce an illusory contour. By contrast, a series of radially directed black lines can induce an easily perceived circular contour (Figure 8a). This illusory contour is perpendicular to each of the inducing lines. The perpendicular orientation of this illusory contour reflects a degree of orientational specificity in the boundary completion process. For example, the illusory contour becomes progressively less vivid as the lines are tilted to assume more acute angles with respect to the illusory circle (Figure 8b). We explain this tendency to induce illusory contours in the perpendicular direction by combining properties of the competitive interactions depicted in Figures 1b and 1c with properties of the cooperative process depicted in Figure 1d, just as we did to explain Figure 5b. It would be mistaken, however, to conclude that illusory contour induction can take place only in the direction perpendicular to the inducing lines. The perpendicular direction is favored, as a comparison between Figure 8a and Figure 9a shows. Figure 9a differs from Figure 8a only in terms of the orientations of the lines; the interior endpoints of the lines are the same. An illusory square is generated by Figure 9a to keep the illusory contour perpendicular to all the inducing lines. Not all configurations of inducing lines can, however, be resolved by a perpendicular illusory contour. Figure 9b induces the same illusory square as Figure 9a, but the square is no longer perpendicular to any of the inducing lines. Figures 8 and 9 illustrate several important points, which we now summarize in more mechanistic terms.
At the end of each inducing line exists a weak tendency for several approximately perpendicular illusory line segments to be induced (Figure 10a). In isolation, these local reactions usually do not generate a percept of a line, if only because they do not define a closed boundary contour that can separate two regions of different relative brightness. Under certain circumstances, these local line segments can interact via the spatially long-range boundary completion process. This cooperative process can be activated by two spatially separated illusory line segments only if their orientations approximately line up across the intervening perceptual space. In Figure 8b, the local illusory line segments cannot line up. Hence no closed illusory contour is generated. In Figure 9b, the local illusory line segments can line up, but only in
Figure 8. (a) Bright illusory circle induced perpendicular to the ends of the radial lines. (b) Illusory circle becomes less vivid as line orientations are chosen more parallel to the illusory contour. Thus illusory induction is strongest in an orientation perpendicular to the ends of the lines, and its strength depends on the global configuration of the lines relative to one another. (From Perception and Pictorial Representation, C.F. Nodine and D.F. Fisher (Eds.), p.182, New York: Praeger. Copyright 1979 by Praeger Publishers. Adapted by permission.)
Figure 9. (a) Illusory square generated by changing the orientations, but not the end-points, of the lines in Figure 8a. (b) Illusory square also generated by lines with orientations that are not exactly perpendicular to the illusory contour. (From Perception and Pictorial Representation, C.F. Nodine and D.F. Fisher (Eds.), p.186, New York: Praeger. Copyright 1979 by Praeger Publishers. Adapted by permission.)
directions that are not exactly perpendicular to the inducing lines. Thus the long-range cooperative process is orientation-specific across perceptual space (Figure 1d). Boundary completion can be triggered only when pairs of sufficiently strong boundary contour segments are aligned within the spatial bandwidth of the cooperative interaction (Figure 10b). An important property of Figures 8 and 9 can easily go unnoticed. Before boundary completion occurs, each scenic line can induce a band of almost perpendicular boundary contour reactions. This property can be inferred from the fact that each line can generate illusory contours in any of several orientations. Which orientation is chosen depends on the global configuration of the other lines, as in Figures 9a and 9b. An adaptive function of such a band of orientations is clear. If only a single orientation were activated, the probability that several such orientations could be exactly aligned across the perceptual space would be slim. Boundary completion could rarely occur under such demanding conditions. By contrast, after boundary completion occurs, one and only one illusory contour is perceived. What prevents all of the orientations in each band from simultaneously cooperating to form a band of illusory contours? Why is not a fuzzy region of illusory contours generated, instead of the unique and sharp illusory contour that is perceived? Somehow the global cooperative process chooses one boundary orientation from among the band of possible orientations at the end of each inducing line. An adaptive function of this process is also clear. It offsets the fuzzy percepts that might otherwise occur in order to build boundaries at all. How can the coexistence of inducing bands and the percept of sharp boundaries be explained? Given the boundary contour rules depicted in Figure 1, a simple solution is suggested. Suppose that the long-range cooperative process feeds back to its generative boundary contour signals.
The several active boundary contour signals at the end of each inducing line are mutually competitive. When positive feedback from the global cooperative process favors a particular boundary contour, then this boundary contour wins the local competition with the other active boundary contour signals. The positive feedback from the global cooperative process to the local competitive process must therefore be strong relative to the mask inputs that induce the band of weak boundary contour reactions at each inducing line end. Another important property can be inferred from the hypothesis that the boundary completion process feeds back an excitatory signal that helps to choose its own line orientation. How is this positive feedback process organized? At least two local boundary contour signals need to cooperate in order to trigger boundary completion between them. Otherwise, a single inducing line could trigger approximately perpendicular illusory lines that span the entire visual field, which is absurd. Given that two or more active boundary contour signals are needed to trigger the intervening cooperative process, as in Figure 11a, how does the cooperative process span widely separated positions yet generate boundaries with sharp endpoints? Why does not the broad spatial range of the process cause fuzzy line endings to occur, as would a low spatial frequency detector? Figure 11b suggests a simple solution. First, the two illusory contours generate positive signals along the pathways labeled 1. These orientationally aligned signals supraliminally excite the corresponding cooperative process, whose nodes trigger positive feedback via pathways such as pathway 2. Pathway 2 delivers its positive feedback to a position that is intermediate between the inducing line segments. Then, pathways such as 1 and 3 excite positive feedback from intervening pathways such as pathway 4.
The result is a rapid positive feedback exchange between all similarly oriented cooperative processes that lie between the generative boundary contour signals. An illusory line segment is hereby generated between the inducing line segments, but not beyond them.
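The logic of this two-sided triggering and inward feedback can be caricatured in a few lines of code. In this sketch (ours; the real process is analog and orientation-tuned), cooperation is reduced to a binary rule: a position becomes active only when already-active segments flank it on both sides within the cooperative bandwidth.

```python
def complete_boundary(signals, reach=3, steps=10):
    """Bipole-style completion along one orientation in 1-D.

    A position becomes active only when it already has active support on
    BOTH sides within `reach` cells, so boundaries interpolate between
    pairs of inducers but never extend past a lone inducer.
    """
    active = list(signals)
    n = len(active)
    for _ in range(steps):
        new = list(active)
        for i in range(n):
            left = any(active[max(0, i - reach):i])
            right = any(active[i + 1:i + 1 + reach])
            if left and right:
                new[i] = 1  # positive feedback from the cooperative node
        active = new
    return active

# Two aligned inducers six cells apart: the gap fills in, the flanks do not.
line = [0] * 12
line[2] = line[8] = 1
completed = complete_boundary(line)
```

Iterating this rule completes a sharp boundary segment between the two inducers while leaving the regions beyond them inactive, which is the behavior the pathway-by-pathway argument above describes.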
Figure 10. Perpendicular induction. (a) The end of a scenic line (dark edge) activates a local tendency (dashed lines) to induce contours in an approximately perpendicular direction. (b) If two such local tendencies are sufficiently strong, if they approximately line up across perceptual space, and if they lie within a critical spatial bandwidth, then an illusory contour may be initiated between them.
Figure 11. Boundary completion. (a) Local competition occurs between different orientations at each spatial location. A cooperative boundary completion process can be activated by pairs of aligned orientations that survive their local competitions. (b) The pair of pathways 1 activate positive boundary completion feedback along pathway 2. Then pathways such as 3 activate positive feedback along pathways such as 4. Rapid completion of a sharp boundary between pathways 1 can hereby be generated.
10. Boundary Completion as a Statistical Process: Textural Grouping and Object Recognition
Figure 11 shows that the boundary completion process can be profitably thought of as a type of statistical grouping process. In response to a textured scene, many boundary contour segments simultaneously attempt to enhance their local competitive advantage by engaging the positive feedback from all possible cooperative processes that share their spatial position and orientational alignment. As shown in Figure 11b, there exist cooperative processes with multiple spatial bandwidths in order to fill in boundary contours between perceptual locations that are separated by variable distances. The most favorable combination of all positive feedback signals to the competing local boundary contour segments will win the orientational competition (Figure 12), as is illustrated by our simulations below. The statistical nature of the boundary completion process sheds light on how figures made up of closely spaced dots can be used to induce illusory contours (Kennedy, 1979; Kennedy and Ware, 1978). We also suggest that the orientational tuning and spatially distributed nature of this statistical process contributes to the coherent cross correlations that are perceived using Julesz stereograms (Glass and Switkes, 1976; Julesz, 1971). These properties of the boundary completion process have been suggested by consideration of illusory contours. Clearly, however, the process itself cannot distinguish the illusory from the real. The same properties are generated by any boundary contour signals that can win the cooperative-competitive struggle. The ability of the boundary contour process to form illusory groupings enables our theory to begin explaining data from the Beck school (Beck, Prazdny, and Rosenfeld, 1983) on textural grouping, and data of workers like Biederman (1984) and Leeper (1935) concerning how colinear illusory groupings can facilitate or impair recognition of partially degraded visual images (Grossberg and Mingolla, 1985).
One of the most important issues concerning the effects of illusory groupings on texture separation and object recognition is the following one. If illusory groupings can be so important, then why are they often invisible? Our theory’s distinction between boundary contours and feature contours provides a simple, but radical, answer. Boundary contours, in themselves, are always invisible. Perceptual invisibility does not, however, prevent boundary contours from sending large bottom-up signals directly to the object recognition system, and from receiving top-down boundary completion signals from the object recognition system (Grossberg, 1980). Our theory hereby makes a sharp distinction between the elaboration of a visible form percept at the binocular percept (BP) stage (Figure 4) and the activation of object recognition mechanisms. We suggest that these two systems are activated in parallel by the BCS stage. The above discussion suggests some of the properties whereby cooperative interactions can sharpen the orientations of boundary contour segments as they span ambiguous perceptual regions. This discussion does not, however, explain why illusory contour segments are activated in bands of nearly perpendicular orientations at the ends of lines. The next section supplies some further information about the process of illusory induction. The properties of this induction process will again hold for both illusory and real contours, which exist on an equal mechanistic footing in the network.
11. Perpendicular versus Parallel Contour Completion
The special status of line endings is highlighted by consideration of Figure 2a. In this famous figure, four black pac-man forms generate an illusory Kanizsa square. The illusory edges of the Kanizsa square are completed in a direction parallel to the dark-light inner edges of the pac-man forms. Why are parallel orientations favored when black pac-man forms are used, whereas perpendicular orientations are favored when the ends of black lines are used? Figure 13a emphasizes this distinction by replacing the
Figure 12. Interactions between an oriented line element and its boundary completion process. (a) Output from a single oriented competitive element subliminally excites several cooperative processes of like orientation but variable spatial scale. (b) Several cooperative processes of variable spatial scale can simultaneously excite a single oriented competitive element of like orientation.
black pac-man forms with black lines whose endpoints are perpendicular to the illusory contour. Again the illusory square is easily seen, but is now due to perpendicular induction rather than to parallel induction. An analysis of spatial scale is needed to understand the distinction between perpendicular induction and parallel induction. For example, join together the line endpoints in Figure 13a and color the interiors of the resultant closed contours black. Then an illusory square is again seen (Figure 13b). In Figure 13b, however, the illusory contours are parallel to the black closed edges of the bounding forms, rather than perpendicular to the ends of lines, as in Figure 13a. The black forms in Figure 13b can be thought of as thick lines. This raises the question: How thick must a line become before perpendicular induction is replaced by parallel induction? How thick must a line become before its “open” end becomes a “closed” edge? In our networks, the measure of thickness is calibrated in terms of several interacting parameter choices: the number of degrees spanned by an image on the retina, the mapping from retinal cells to oriented masks within the boundary contour system, the spatial extent of each oriented mask, and the spatial extent of the competitive interactions that are triggered by outputs from the oriented masks. The subtlety of this calibration issue is illustrated by Figure 14.

Figure 13. Open versus closed scenic contours. (a) If the black pac-man figures of Figure 2 are replaced by black lines of perpendicular orientation, then a bright illusory square is seen. (b) If line ends are joined together by black lines and the resultant closed figures are colored black, then a bright illusory square is again seen. These figures illustrate how perpendicular contour induction by open line ends can be replaced by parallel contour induction by closed edges.
In Figure 14, the black interiors of the inducing forms in Figure 13b are eliminated, but their boundaries are retained. The black contours in Figure 13b remain closed, in a geometrical sense, but the illusory square vanishes. Does this mean that these black contours can no longer induce an illusory square boundary contour? Does it mean that an illusory boundary contour does exist, but that the change in total patterning of feature contour signals no longer differentially brightens the inside or outside of this square? Or is a combination of these two types of effects simultaneously at work? Several spatial scales are simultaneously involved in both the boundary contour process and the feature contour process. A quantitative analysis of multiple scale interactions goes beyond the scope of this article. The following discussion outlines some factors that are operative within each spatial scale of the model. Section 13 suggests that both perpendicular induction and parallel induction are properties of the same boundary completion process. The different induction properties are traced to different reactions of the boundary completion process to different visual patterns. Before exploring these points, the following section clarifies how removal of the black interiors in Figure 14 eliminates the percept of an illusory Kanizsa square.

Figure 14. Influence of figural contrast on illusory brightness. When the black interiors of Figure 13b are colored white, the illusory square is no longer perceived.
12. Spatial Scales and Brightness Contrast

Figure 15 uses pac-man forms instead of the forms in Figure 14 due to their greater simplicity. In Figure 15 the interiors of the upper two pac-man forms are black, but the interiors of the bottom two pac-man forms are white. When all four pac-man forms are colored white, an illusory square is not visible, just as in Figure 14. In Figure 15, by contrast, two vertical illusory contours can be perceived between the black pac-man forms and the pac-man forms with white interiors. The existence of these vertical contours suggests that the vertical black lines in the bottom two pac-man figures can cooperate with the vertical black lines in the top two pac-man figures to induce boundary contours in a direction parallel to their orientation. When all the pac-man forms have white interiors, however, the interior contrast generated by these forms by the feature contour process does not differ significantly from the exterior contrast that is generated by these forms. By using two pac-man forms with black interiors, the interior contrast is enhanced relative to the exterior contrast. This enhanced interior brightness flows downward within the illusory vertical contours, thereby enhancing their visibility. Why does coloring the interiors of two pac-man figures black enhance their interior contrastive effect? This property can be better understood by comparing it with classical demonstrations of brightness contrast. This comparison shows that the property in question is not peculiar to illusory figures. It is the same property as the brightness contrast that is due to “real” figures. Figure 16 compares a thin letter O with a thick letter O. The brightness levels interior to and exterior to the thin letter O are not obviously different. A sufficiently thick letter O can generate a different percept, however. If the letter O is made sufficiently thick, then it becomes a black annulus surrounding a white circle. It is well known from
Neural Dynamics of Form Perception
Figure 15. Influence of figural contrast on illusory brightness. If only two pac-man forms in Figure 2 are colored black, and the other two forms have white interiors, then an illusory contour can be seen between contiguous black and white forms. This percept suggests that some illusory boundary contour induction may occur in response to Figure 14, but that not enough differential feature contour contrast is generated inside and outside the boundary contour to make the boundary contour visible.
Figure 16. Effects of spatial scale on perceived contrast. (a) No obvious brightness difference occurs between the inside and the outside of the circle. (b) By thickening the circle sufficiently, it becomes a background annulus. The interior of the circle can then be brightened by classical brightness contrast.

classical studies of brightness contrast that darkening an annulus around an interior circle can make the circle look brighter (Cornsweet, 1970). We suggest that the difference between a thin letter O and a brightness contrast demonstration reflects the same process of lateral inhibition (Grossberg, 1981) as the difference between a pac-man form with white interior and a pac-man form with black interior.
13. Boundary-Feature Trade-off: Orientational Uncertainty and Perpendicular End Cutting We are now ready to consider the boundary-feature trade-off and to show how it explains the paradoxical percepts above as consequences of an adaptive process of fundamental importance.
Chapter 2
The theory's rules begin to seem natural when one acknowledges that the rules of each contour system are designed to offset insufficiencies of the other contour system. The boundary contour system, by itself, could at best generate a perceptual world of outlines. The feature contour system, by itself, could at best generate a world of formless qualities. Let us accept that these deficiencies are, in part, overcome by letting featural filling-in spread over perceptually ambiguous regions until reaching a boundary contour. Then it becomes a critical task to synthesize boundary contours that are capable of restraining the featural flow at perceptually important scenic edges. Orientationally tuned input masks, or receptive fields, are needed to initiate the process of building up these boundary contours (Figure 1). If the directions in which the boundaries are to point were not constrained by orientational tuning, then the process of boundary completion would become hopelessly noisy. We now show that orientationally tuned input masks are insensitive to orientation at the ends of scenic lines and corners. A compensatory process is thus needed to prevent featural quality from flowing out of the percepts of all line endings and corners. Without this compensatory process, filling-in anomalies like neon color spreading would be ubiquitous. This compensatory process is called the end-cutting process. The end-cutting process is the net effect of the competitive interactions described in Figures 1b and 1c. Thus the rules of the boundary contour system take on adaptive meaning when they are understood from the viewpoint of how boundary contours restrict featural filling-in. This section discusses how this end-cutting process, whose function is to build up "real" boundary contours with sharply defined endpoints, can also sometimes generate illusory boundary contours through its interaction with the cooperative boundary completion process of Figure 1d and Figure 11.
The need for an end-cutting process can be seen by considering Figure 17. Figure 17 describes a magnified view of a black vertical line against a white background. Consider Position A along the right edge of the scenic line. A vertically oriented input mask is drawn surrounding Position A. This mask is sensitive to the relative contrast of line edges that fall within its elongated shape. The mask has been drawn with a rectangular shape for simplicity. The rectangular shape idealizes an orientationally sensitive receptive field (Hubel and Wiesel, 1977). The theory assumes that a sufficiently contrastive vertical dark-light edge or a sufficiently contrastive light-dark edge falling within the mask area can activate the vertically tuned nodes, or cells, that respond to the mask at Position A. These cells are thus sensitive both to orientation and to the amount of contrast, but not to the direction of contrast (Figure 1a). A set of masks of varying orientations is assumed to exist at each position of the field. Each mask is assumed to have an excitatory effect on cells that are tuned to the same orientation and an inhibitory effect on cells that are tuned to the other orientations at its spatial position (Figure 1c). At a position, such as A, which lies along a vertical edge of the line far from its end, the rules for activating the oriented masks imply that the vertical orientation is strongly favored in the orientational competition. A tacit hypothesis is needed to draw this conclusion: The oriented masks are elongated enough to sense the spatially anisotropic distribution of scenic contrast near Position A. Were all the masks circularly symmetric, no mask would receive a larger input than any other. When oriented masks are activated at a position such as B, a difficulty becomes apparent. Position B lies outside the black line, but its vertical mask still overlaps the black inducing line well enough to differentially activate its vertically tuned cells.
Thus the possibility of selectively registering orientations carries with it the danger of generating boundary contours that extend beyond their inducing edges. Suppose that the vertically oriented cells at positions such as B were allowed to cooperate with vertically oriented cells at positions such as A. Then a vertical boundary contour could form that would enable featural quality to flow out of the line. We now show that the end-cutting process that prevents this from happening also has properties of illusory
Figure 17. Orientational specificity at figural edges, corners, and exteriors. (a) At positions such as A that are along a figural edge, but not at a figural corner, the oriented mask parallel to the edge is highly favored. At positions beyond the edge, such as B, masks of the same orientation are still partially activated. This tendency can, in the absence of compensatory mechanisms, support a flow of dark featural activity down and out of the black figure. (b) A line is thin, functionally speaking, when at positions near a corner, such as C, many masks of different orientations are all weakly activated or not activated at all.
induction that have been described above. Suppose that inhibitory signals can be generated from positions such as A to positions such as B that lie beyond the end of the line. Because the position of the line relative to the network can change unpredictably through time, these signals need to be characterized in terms of the internal network geometry rather than with respect to any particular line. To prevent featural flow, the vertical activation at Position A needs to inhibit the vertical activation at Position B, but not all activations at Position B. Thus the inhibitory process is orientationally selective across perceptual space (Figure 1b). The spatial range of the inhibitory process must also be broad enough for vertical activations at line positions such as A to inhibit vertical activations at positions such as B that lie outside the line. Otherwise expressed, the spatial range of these orientationally selective inhibitory signals must increase with the spatial scale of the masks. Once the need for an inhibitory end-cutting process is recognized, several paradoxical types of data immediately become more plausible. Consider, for example, Figure 5b in which the vertical boundary contours of the Ehrenstein figure inhibit the vertical boundary contours of the contiguous red cross. The orientational specificity and limited spatial bandwidth of the inhibition that are needed to prevent featural flow also explain why increasing the relative orientation or spatial separation of the cross and Ehrenstein figure weakens the neon spreading effect (Redies and Spillmann, 1981). The inhibitory end-cutting process explains how a vertical orientation of large contrast at a position such as A in Figure 17a can inhibit a vertical orientation of lesser contrast, as at Position B. More than this inhibitory effect is needed to prevent featural activity from flowing outside of the line. Horizontally oriented boundary contours must also be activated at the end of the line.
These horizontal boundary contours are not activated, however, without further network machinery. To understand why this is so, consider Position C in Figure 17b. Position C lies at the end of a narrow black line. Due to the thinness of the line relative to the spatial scale of the oriented input masks, several oriented masks of differing orientations at Position C can all register small and similar amounts of activation, as in the computer simulations of Section 17. Orientational selectivity breaks down at the ends of lines, even though there may exist a weak vertical preference. After the strongly favored vertical orientation at position A inhibits the weakly activated vertical orientation at positions such as B or C, the mask inputs themselves do not provide the strong activations of horizontal orientations that are needed to prevent featural flow. Further processing is needed. The strong vertical inhibition from Position A must also disinhibit horizontal, or close-to-horizontal, orientations at positions such as B and C. This property follows from the postulate that perpendicular orientations compete at each perceptual position, as in Figure 1c. Thus the same competitive mechanisms in Figures 1b and 1c that explain how end cutting, with its manifestly adaptive function, occurs also explain how red color can paradoxically flow out of a red cross when it is surrounded by an Ehrenstein figure (Figure 5). As the thickness of the black line in Figure 17 is increased, the horizontal bottom positions of the line begin to favor horizontal orientations for the same reason that the vertical side positions of the line favor vertical orientations. When this occurs, the horizontal orientations along the thickened bottom of the line can cooperate better via the boundary completion process to directly form a horizontal boundary contour at the bottom of the figure.
Parallel induction by a thick black form hereby replaces perpendicular induction by a thin black line as the thickness of the line is increased.
14. Induction of "Real" Contours Using "Illusory" Contour Mechanisms

Some readers might still be concerned by the following issues. Does not the end-cutting process, by preventing the vertical boundary contour from extending beyond
Position C in Figure 17b, create an even worse property: the induction of horizontal illusory contours? Due to the importance of this issue in our theory, we summarize the adaptive value of this property using properties of the cooperative boundary completion process of Figure 1d and Figure 11. Suppose that inhibition from Position A to Position B does not occur in Figure 17a. Then vertical activations can occur at both positions. By Figure 11, an illusory vertical boundary contour may be generated beyond the "real" end of the line. The same is true at the left vertical edge of the line. Due to the existence of ambiguous boundary contour orientations between these vertical boundary contours, featural quality can freely flow between the dark interior of the line and the white background below. The end-cutting process prevents featural flow from occurring at line ends. It does so by generating a strong horizontal activation near corner positions such as C in Figure 17b. In the same way, it generates a strong horizontal activation near the bottom left corner of the line. Using the cooperative process in Figure 11, these two horizontal activations can activate a horizontal boundary contour across the bottom of the line. Although this horizontal boundary contour is "illusory," it prevents the downward flow of dark featural quality beyond the confines of the inducing line, and thereby enables the network to perceive the line's "real" endpoint. Thus the "real" line end of a thin line is, strictly speaking, an "illusory" contour. "Real" and "illusory" contours exist on an equal ontological footing in our theory. In the light of this adaptive interaction between the competitive end-cutting process and the cooperative boundary completion process in the perception of "real" scenic contours, the fact that occasional juxtapositions of "real" scenic contours also generate boundary contours that are judged to be "illusory" seems to be a small price to pay.
The remaining sections of this article describe a real-time network that is capable of computing these formal properties.

15. Gated Dipole Fields
We assume that the competitive end-cutting and cooperative boundary completion processes are mediated by interactions between on-cells and off-cells that form opponent processes called gated dipoles. Specialized networks, or fields, of gated dipoles have been used to suggest explanations of many visual phenomena, such as monocular and binocular rivalry, spatial frequency adaptation, Gestalt switching between ambiguous figures, color-contingent and orientation-contingent after-effects, and attentional and norepinephrine influences on visual critical period termination and reversal (Grossberg, 1976, 1980, 1982, 1983a, 1984a). The gating properties of these fields are described here only in passing. Before describing the details of the gated dipole fields that will be used, we qualitatively summarize how they can mediate the competitive end-cutting process. Several closely related variations of this design can generate the desired properties. We develop one scheme that incorporates the main ideas. Suppose that an input mask at position (i,j) is preferentially tuned to respond to an edge of orientation k. Denote the input generated by this mask by J_ijk. Suppose that this input activates the potential x_ijk of the corresponding on-cell population. Also suppose that the variously oriented inputs J_ijk at a fixed position (i,j) cause a competition to occur among the corresponding on-cell potentials x_ijk. In the present scheme, we suppose that each orientation k preferentially inhibits the perpendicular orientation K at the same position (i,j). In this sense, the on-potential x_ijK is the off-potential of the input J_ijk, and the on-potential x_ijk is the off-potential of the input J_ijK. These pairs of competing potentials define the dipoles of the field. One consequence of dipole competition is that at most one potential x_ijk or x_ijK of a dipole pair can become supraliminally active at any time. Furthermore, if both inputs
J_ijk and J_ijK are equally large, then, other things being equal, neither potential x_ijk nor x_ijK can become supraliminally active. Dipole competition between perpendicular orientations activates a potential x_ijk or x_ijK only if it receives a larger net input than its perpendicularly tuned competitor. The amount of activation is, moreover, sensitive to the relative contrast of these antagonistic inputs. An oriented input J_ijk excites its own potential x_ijk and inhibits similarly oriented potentials x_pqk at nearby positions (p,q), and conversely. The input masks are thus organized as part of an on-center off-surround anatomy of short spatial range (Figure 18). Due to this convergence of excitatory and inhibitory inputs at each orientation and position, the net input to a potential x_ijk may be excitatory or inhibitory. This situation creates a new possibility. Suppose that x_ijk receives a net inhibitory input, whereas x_ijK receives no external input. Then x_ijk is inhibited and x_ijK is supraliminally excited. This activation of x_ijK is due to a disinhibitory action that is mediated by dipole competition. In order for x_ijK to be excited in the absence of an excitatory input J_ijK, a persistently active, or tonic, internal input must exist. This is another well-known property of gated dipoles (Grossberg, 1982). By symmetry, the same tonic input influences each pair of potentials x_ijk and x_ijK. When transmitter gates are placed in specialized dipole pathways, hence the name gated dipole, properties like negative after-effects, spatial frequency adaptation, and binocular rivalry are generated (Grossberg, 1980, 1983a, 1983b). Transmitter gates are not further discussed here. We now apply the properties of dipole competition to explain the inhibitory end-cutting process in more quantitative detail. Suppose that vertical input masks J_ijk are preferentially activated at positions such as A in Figure 17a.
These input masks succeed in activating their corresponding potentials x_ijk, which can then cooperate to generate a vertically oriented boundary contour. By contrast, positions such as B and C in Figure 17 receive orientationally ambiguous inputs due to the thinness of the black bar relative to the length of the oriented masks. Consequently, the inputs J_ijk to these positions near the end of the bar are small, and several mask orientations generate inputs of comparable size. Without compensatory mechanisms, featural quality would therefore flow from the end of the bar. This is prevented from happening by the vertically oriented input masks at positions such as A. These input masks generate large off-surround inhibitory signals to x_ijk at positions (i,j) at the end of the bar. Due to dipole competition, the horizontally tuned potentials x_ijK are disinhibited. The horizontally tuned potentials of several horizontally aligned positions at the end of the bar can then cooperate to generate a horizontally oriented boundary contour that prevents featural quality from flowing beyond the end of the bar.
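The disinhibitory logic of this end-cutting scheme can be sketched in a few lines of code. This is a deliberately simplified caricature, not the article's shunting equations: the inhibition is made contrast-sensitive by hand, the tonic input is a constant, and the dipole competition is a thresholded difference. Positions 0-3 stand for side positions such as A; position 4 stands for a line-end position such as B or C. All parameter values are illustrative assumptions.

```python
# Toy end-cutting sketch (not the article's equations): positions along a
# thin vertical line, two orientations (V = vertical, H = horizontal).

def end_cut(J_V, J_H, tonic=1.0, inhib=1.5):
    """Return (x_V, x_H) after off-surround inhibition plus dipole competition.

    Each position's vertical channel receives its own input minus
    orientationally selective inhibition from stronger vertical inputs at
    neighboring positions; the perpendicular (H) channel is driven by a
    tonic source, so net inhibition of V disinhibits H (the end cut).
    """
    n = len(J_V)
    x_V, x_H = [0.0] * n, [0.0] * n
    for i in range(n):
        # Off-surround: a neighbor's vertical input inhibits the vertical
        # channel here, but only to the extent it exceeds the local input
        # (a hand-made stand-in for contrast-sensitive shunting inhibition).
        neighbor_V = max([J_V[j] for j in (i - 1, i + 1) if 0 <= j < n],
                         default=0.0)
        net_V = J_V[i] + tonic - inhib * max(neighbor_V - J_V[i], 0.0)
        net_H = J_H[i] + tonic
        # Dipole competition between perpendicular orientations: only the
        # larger net signal produces suprathreshold output.
        x_V[i] = max(net_V - net_H, 0.0)
        x_H[i] = max(net_H - net_V, 0.0)
    return x_V, x_H

# Positions 0-3 lie along the line's side (strong V input); position 4 is
# the line end (weak, orientationally ambiguous input).
J_V = [2.0, 2.0, 2.0, 2.0, 0.3]
J_H = [0.0, 0.0, 0.0, 0.0, 0.2]
x_V, x_H = end_cut(J_V, J_H)
```

Running this, the side positions keep a purely vertical activation, while at the line end the strong vertical neighbors suppress the weak vertical signal and the tonically driven horizontal channel is disinhibited: a horizontal end cut.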
16. Boundary Completion: Oriented Cooperation Among Multiple Spatial Scales
The stage of dipole competition between perpendicular orientations is followed by a stage of shunting competition among all the orientations corresponding to a fixed position (i,j). The stage of shunting competition possesses several important properties. For one, the shunting competition tends to conserve, or normalize, the total activity

sum_{k=1}^{n} y_ijk

of the potentials y_ijk at the final stage of competitive processing (Figure 18). This limited capacity property converts the activities (y_ij1, y_ij2, ..., y_ijn) of the final stage into a ratio scale. See the Appendix for mathematical details.
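The normalization property can be illustrated with the familiar steady-state formula for a shunting on-center off-surround network, x_k = B I_k / (A + sum_j I_j) (Grossberg, 1983a). This is a generic sketch of the normalization idea, not the Appendix's equations, and the constants A and B below are illustrative choices:

```python
def shunting_steady_state(I, A=1.0, B=1.0):
    """Steady-state activities of a shunting on-center off-surround network:
    x_k = B * I_k / (A + sum(I)).  Activities report input RATIOS, and the
    total activity is bounded by B (normalization / limited capacity)."""
    total = sum(I)
    return [B * I_k / (A + total) for I_k in I]

x_dim = shunting_steady_state([1.0, 2.0, 1.0])     # dim input pattern
x_bright = shunting_steady_state([10.0, 20.0, 10.0])  # same ratios, 10x brighter
```

Scaling the whole input pattern tenfold leaves the activity ratios unchanged (a ratio scale), while the total activity stays below the bound B rather than growing tenfold.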
Figure 18. Orientationally tuned competitive interactions. A shunting on-center off-surround interaction within each orientation and between different positions is followed by a push-pull dipole competition between orientations and within each position. The different orientations also compete to normalize the total activity within each position before eliciting output signals to the cooperative boundary completion process that exists between positions whose orientations are approximately aligned.
An equally important property of the shunting competition at each position (i,j) becomes apparent when several positions cooperate to complete boundary contours. Figure 19 depicts how two properly aligned potentials, y_ijk and y_uvk, of orientation k at different positions (i,j) and (u,v) cooperate to activate the potential z_pqk at an intervening position (p,q). Potential z_pqk, in turn, excites the potential x_pqk of the same orientation k and at the same position (p,q). As in Figure 11, this positive feedback process rapidly propagates to the potentials of orientation k corresponding to all positions between (i,j) and (u,v). To generate a sharp contour (Section 9), a single orientation k needs to be chosen from among several partially activated orientations at each position (p,q). Such a choice is achieved through an interaction between the oriented cooperation and the shunting competition. In particular, in Figure 19, the positive feedback from z_pqk to x_pqk enhances the relative size of y_pqk compared to its competitors y_pqr at position (p,q). In order for the positive feedback signals h(z_pqk) from z_pqk to x_pqk to achieve a definite choice, the form of the signal function h(w) must be correctly chosen. It was proved in Grossberg (1973) that a signal function h(w) that is faster-than-linear at attainable activities w = z_pqk is needed to accomplish this task. A faster-than-linear signal function sharply contrast-enhances the activity patterns that reverberate in its positive feedback loops (Grossberg, 1983a). Examples of faster-than-linear signal functions are power laws such as h(w) = Aw^n, A > 0, n > 1; threshold laws such as h(w) = A max(w - B, 0), A > 0, B > 0; and exponential laws such as h(w) = Ae^{Bw}, A > 0, B > 0. The opponent competition among the potentials x_ijk and the normalizing competition among the potentials y_ijk may be lumped into a single process (Grossberg, 1983a).
They have been separated herein to achieve greater conceptual clarity.
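The contrast-enhancing effect of a faster-than-linear signal function can be caricatured by repeatedly applying the signal function and renormalizing, which mimics reverberation in the positive feedback loop under the normalizing competition. This sketch abstracts away the network dynamics entirely; it only illustrates the Grossberg (1973) choice property:

```python
def sharpen(x, f, steps=20):
    """Iterate x <- normalize(f(x)): a caricature of a recurrent
    competitive field with signal function f and normalizing competition."""
    for _ in range(steps):
        y = [f(v) for v in x]
        s = sum(y)
        x = [v / s for v in y]
    return x

# Faster-than-linear signal (w^2): the initially largest activity is chosen.
faster = sharpen([0.3, 0.5, 0.2], lambda w: w * w)
# Linear signal: the initial activity ratios are merely preserved.
linear = sharpen([0.3, 0.5, 0.2], lambda w: w)
```

With f(w) = w^2 the middle activity grows to essentially the entire normalized total (a definite choice), whereas the linear signal leaves the pattern at its initial ratios, choosing nothing.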
17. Computer Simulations

This section describes some of the simulations that have been done in our ongoing program of quantitative model testing and refinement. The equations that govern the simulations are defined in the Appendix. Figure 20 describes a simulation of boundary completion. In this simulation, the potentials of gated dipoles at positions 15 and 25 receive positive inputs. The potential of the gated dipole at position i is denoted by y_i(t) in Figure 20. A single positional index i is sufficient because the simulation is carried out on a one-dimensional array of cells. The potential of the boundary completion cell at position i is denoted by z_i(t). Figure 20 provides a complete summary of how the boundary completion process unfolds through time. Each successive increasing curve in the figure describes the spatial pattern of activities y_i(T) or z_i(T) across positions i at successive times t = T. Note that the inputs to the two gated dipole positions cause a rapid activation of the gated dipole positions that lie midway between them via cooperative feedback signals. Then these three positions rapidly fill in the positions between them. The final pattern of y_i activities defines a uniformly active boundary that ends sharply at the inducing positions 15 and 25. By contrast, the final pattern of z_i values extends beyond the inducing positions due to subliminal activation of these positions by the interactions depicted in Figure 12a. Figure 21 illustrates how the boundary completion process attenuates scenic noise and sharpens fuzzy orientation bands. Each column of the figure describes a different time during the simulation. The original input is a pattern of two noisy but vertically biased inducing sources and a horizontally oriented noise element. Horizontally biased end cuts are momentarily induced before the oriented cooperation rapidly attenuates all nonvertical elements to complete a vertical boundary contour.
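A toy discrete version of this simulation conveys the qualitative behavior. The sketch below replaces the Appendix's differential equations with all-or-none cells and a hand-chosen cooperation span, and it omits the subliminal z-field fringe of Figure 20b; only the two-sided cooperation rule and the y-z feedback loop are retained:

```python
def complete_boundary(inducers, n=41, span=5, steps=10):
    """Discrete sketch of cooperative boundary completion on a 1-D array.

    y: boundary-signal cells; z: boundary-completion cells.  A z cell at i
    fires only when it finds active y support on BOTH sides within `span`
    positions (the two-sided cooperation of Figure 19); z then feeds back
    to y, and the process repeats until nothing new is activated.
    """
    y = set(inducers)
    for _ in range(steps):
        z = set()
        for i in range(n):
            left = any(j in y for j in range(i - span, i))
            right = any(j in y for j in range(i + 1, i + span + 1))
            if left and right:
                z.add(i)
        new_y = y | z
        if new_y == y:          # converged
            break
        y = new_y
    return sorted(y)

boundary = complete_boundary([15, 25])
```

Starting from inducers at 15 and 25, the midpoint 20 is activated first (it alone has support on both sides), and the interval then fills in; the final boundary spans exactly positions 15 through 25, ending sharply at the inducers because positions outside the pair never receive two-sided support.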
Figures 22a and 22b illustrate how a field of oriented masks, such as those depicted in Figure 17, reacts to the sharp changes in direction at the end of a narrow input bar. These figures encode the activation level of each mask by the length of the line having
Figure 19. Excitatory boundary completion feedback between different positions. Outputs triggered by aligned dipole on-potentials y_ijk and y_uvk can activate intervening boundary completion potentials z_pqk. The potentials z_pqk, in turn, deliver strong positive feedback to the corresponding potentials x_pqk, which thereupon excite the potentials y_pqk and inhibit the potentials y_pqK.
Figure 20a. Computer simulation of boundary completion in a one-dimensional array of cells. Two sustained inputs to positions 15 and 25 of the y field trigger a rapid filling-in. Activity levels at five successive time periods are superimposed, with activity levels growing to a saturation level. (a) Sharp boundary in y field of Figure 19.
Figure 20b. Fringe of subliminal activity flanks suprathreshold activity pattern in z field of Figure 19.
Figure 21. Each column depicts a different time during the boundary completion process. The input consists of two noisy but vertically biased inducing line elements and an intervening horizontal line element. The competitive-cooperative exchange triggers transient perpendicular end cuts before attenuating all nonvertical elements as it completes the vertical boundary.
the same orientation as the mask at that position. We call such a display an orientation field. A position at which only one line appears is sensitive only to the orientation of that line. A position at which several lines of equal length appear is equally sensitive to all these computed orientations. The relative lengths of lines across positions encode the relative mask activations due to different parts of the input pattern. Figure 22a shows that a strong vertical preference exists at positions along a vertical edge that are sufficiently far from an endpoint (e.g., positions such as A in Figure 17a). Masks with close-to-vertical orientations can also be significantly activated at such positions. Thus there exists a strong tendency for parallel induction of contours to occur along long scenic edges, as in the illusory Kanizsa square of Figure 2. This tendency for strong parallel induction to occur depends on the length of the figural edge relative to the length of the input masks. Consider, for example, positions along the bottom of the figure, such as position C in Figure 17b. Because the figure is narrow relative to the mask size, the orientational preferences are much weaker and more uniformly distributed, hence more ambiguous, at the ends of narrow lines. Figure 22b illustrates how different values of mask parameters can generate different orientation fields in response to the same input pattern. The dark-light and light-dark contrast that is needed to activate a mask (parameter a in the Appendix, equation (A1)) is higher in Figure 22b than in Figure 22a. Consequently the positions that respond to scenic edges are clustered closer to these edges in Figure 22b, and edge positions near the line end are not activated. In both Figures 22a and 22b, the input activations near the line end are weak, orientationally ambiguous, or nonexistent.
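The breakdown of orientational selectivity at a thin line's end can be reproduced with a toy orientation-field computation. The masks below are not the contrast-sensitive masks of equation (A1); they merely count appropriately oriented edge pixels inside elongated rectangular windows, which is enough to show the effect:

```python
def make_bar(w=21, h=21, x0=9, x1=11, y0=3, y1=17):
    """Binary image of a thin vertical bar (1 inside the bar, 0 outside)."""
    return [[1 if x0 <= x <= x1 and y0 <= y <= y1 else 0 for x in range(w)]
            for y in range(h)]

def mask_responses(img, x, y):
    """(vertical, horizontal) mask responses at (x, y): counts of
    vertical-edge pixels inside a 7-tall by 3-wide window, and of
    horizontal-edge pixels inside a 3-tall by 7-wide window -- a crude
    stand-in for elongated oriented receptive fields."""
    h, w = len(img), len(img[0])
    ev = lambda X, Y: 0 <= X < w - 1 and 0 <= Y < h and img[Y][X] != img[Y][X + 1]
    eh = lambda X, Y: 0 <= X < w and 0 <= Y < h - 1 and img[Y][X] != img[Y + 1][X]
    v = sum(ev(X, Y) for Y in range(y - 3, y + 4) for X in range(x - 1, x + 2))
    hz = sum(eh(X, Y) for Y in range(y - 1, y + 2) for X in range(x - 3, x + 4))
    return v, hz

img = make_bar()
v_side, h_side = mask_responses(img, 11, 10)  # position like A: mid right edge
v_end, h_end = mask_responses(img, 10, 17)    # position like C: line end
```

Along the side of the bar the vertical mask response dominates decisively (here 7 versus 0), while at the line end the vertical and horizontal responses are both weak and nearly equal (4 versus 3): the orientational ambiguity that the end-cutting process must resolve.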
In Figures 23a and 23b, the orientation fields of Figures 22a and 22b are transformed by the competitive interactions within a dipole field. The functional unit of this field again consists of a complete set of orientations at each perceptual location. At each position (i,j), the value y_ijk of the final competitive stage (Figure 18) is described by a line of orientation k whose length is proportional to y_ijk. In response to the orientation field of Figure 22a, the dipole field generates a strong horizontal end cut in Figure 23a at the perceptual positions corresponding to the end of the line. These horizontal activations can cooperate to generate a boundary contour capable of preventing featural flow from the end of the line. Oblique activations are also generated near the line end as part of this complementary induction process. These oblique activations can induce nonperpendicular illusory contours, as in Figure 9b. In Figure 23b, "illusory" horizontal end cuts are generated at the locations where the vertically oriented inputs of Figure 22b terminate, despite the fact that these locations do not coincide with the end of the line. Comparison of Figures 23a and 23b shows that the horizontal end cuts in both examples exist on a similar ontological footing, thereby clarifying the sense in which even the percepts of "real" line ends are "illusory" and the percepts of "illusory" line ends are "real." This conclusion does not imply that human observers are unable to say when certain illusory boundaries seem to be "unreal." We trace this capability to the different ways in which some scenes coactivate the feature contour system and the boundary contour system, rather than to different boundary completion mechanisms within the boundary contour system for "real" and "illusory" line percepts.

18. Brightness Paradoxes and the Land Retinex Theory
This article has focused on the process whereby both real and illusory visual contours are formed. From the perspective of this process, the distinction between a real contour and an illusory contour is highly ambiguous. The role of end cutting in defining sharp "illusory" boundary contours at the "real" ends of narrow lines is a case in point (Section 14). To quantitatively understand illusory brightness effects in the theory, it is necessary to analyse how feature contour signals combine with boundary contour signals within
Figure 22a. Orientation field. Lengths and orientations of lines encode relative sizes of activations and orientations of the input masks at the corresponding positions. The input pattern corresponds to the shaded area. Each mask has total exterior dimensions of 16 x 8 units, with a unit length being the distance between two adjacent lattice positions.
Figure 22b. Orientational field whose masks respond to higher contrasts than those in Figure 22a.
Figure 23a. Response of the potentials y_ijk of a dipole field to the orientation field of Figure 22a. End cutting generates horizontal activations at line-end locations that receive small and orientationally ambiguous input activations. The oblique activations that occur at the line end can induce nonperpendicular illusory contours, as in Figure 9b.
Figure 23b. Response of the potentials y_ijk of a dipole field to the orientation field of Figure 22b. End cutting generates "illusory" horizontal activations at the locations where vertically oriented inputs terminate.
the monocular brightness and color stages MBCL and MBCR of Figure 4, and the manner in which these processing stages interact to generate a binocular percept at the BP stage of Figure 4. This analysis of brightness extends beyond the scope of this article. Cohen and Grossberg (1984b) simulated a number of paradoxical brightness percepts that arise when observers inspect certain contoured images, such as the Craik-O'Brien effect (Arend et al., 1971; O'Brien, 1958) and its exceptions (Coren, 1983; Heggelund and Krekling, 1976; Todorović, 1983; van den Brink and Keemink, 1976); the Bergström (1966, 1967a, 1967b) demonstrations comparing the brightnesses of smoothly modulated and step-like luminance profiles; Hamada's (1980) demonstrations of nonclassical differences between the perception of luminance decrements and increments; and Fechner's paradox, binocular brightness averaging, and binocular brightness summation (Blake, Sloane, and Fox, 1981; Cogan, 1982; Cogan, Silverman, and Sekuler, 1982; Curtis and Rule, 1980; Legge and Rubin, 1981; Levelt, 1965). Classical concepts such as spatial frequency analysis, Mach bands, and edge contrast are insufficient by themselves to explain the totality of these data. Because the monocular brightness domains do not know whether a boundary contour signal from the BCS stage is due to a "real" scenic contour or an "imaginary" scenic contour, these brightness simulations support our theory of boundary-feature interactions. Cohen and Grossberg (1984a) and Grossberg (1983a) showed through mathematical derivations and computer simulations how the binocular visual representations at the BP stage combine aspects of global depth, brightness, and form information. Grossberg (1980, 1983a, 1984a) used the theory to discuss the dynamics of monocular and binocular rivalry (Kaufman, 1974; Kulikowski, 1978; Rauschecker, Campbell, and Atkinson, 1973).
Grossberg (1984a) indicated how the theory can be used to explain the fading of stabilized images (Yarbus, 1967). Grossberg (1984a) also suggested how the theory can be extended to include color interactions. This extension provides a physical interpretation of the Land (1977) retinex theory. In this interpretation, a simultaneous parallel computation of contrast-sensitive feature contour signals occurs within double-opponent color processes (light-dark, red-green, yellow-blue). This parallel computation replaces Land's serial computation of edge contrasts along sampling paths that cross an entire visual scene. Despite Land's remarkable formal successes using this serial scanning procedure, it has not found a physical interpretation until the present time. One reason for this delay has been the absence of an explanation of why gradual changes in illumination between successive scenic contours are not perceived. The diffusive filling-in of feature contour signals within domains defined by boundary contour signals provides an explanation of this fundamental fact, as well as of Land's procedure of averaging the outcomes of many serial scans. In addition to physically interpreting the Land retinex theory, the present theory also substantially generalizes the Land theory. The Land theory cannot, for example, explain an illusory brightness change that is due to the global configuration of the inducing elements, as in Figure 8a. The illusory circle in Figure 8a encloses a region of enhanced illusory brightness. No matter how many radially oriented serial scans of the Land theory are made between the radial lines, they will compute a total contrast change of zero, because there is no luminance difference between these lines. If one includes the black radial lines within the serial scans, then one still gets the wrong answer. This is seen by comparing Figures 8a and 8b. In these two figures, the number, length, contrast, and endpoints of the lines are the same.
Yet Figure 8a generates a strong brightness difference, whereas Figure 8b does not. This difference cannot be explained by any theory that depends only on averages of local contrast changes. The brightness effects are clearly due to the global configuration of the lines. A similar limitation of the Land theory is seen by comparing Figures 8 and 9, where rearranging the orientation of the line ends can alter the shape of the perceived region where enhanced brightness obtains.
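The zero-contrast argument above can be checked with a small numerical sketch. The Python fragment below (illustrative only; the function name, threshold value, and luminances are our own choices, not Land's actual algorithm) accumulates log-luminance steps along one sampling path, as a Land-style serial scan does, while ignoring gradual sub-threshold changes. A path between the radial lines of Figure 8a never crosses a luminance difference, so its total is zero; a path that enters and then exits a black line accumulates canceling steps, so its total is again essentially zero.

```python
import math

def serial_scan_contrast(luminances, threshold=0.05):
    """Accumulate log-luminance ratios along one sampling path,
    ignoring gradual (sub-threshold) changes, in the spirit of a
    Land-style serial scan. Returns the total accumulated contrast."""
    total = 0.0
    for prev, curr in zip(luminances, luminances[1:]):
        step = math.log(curr / prev)
        if abs(step) >= threshold:  # treat only sharp changes as edges
            total += step
    return total

# A path between two radial lines of Figure 8a: luminance never
# changes, so the accumulated contrast is exactly zero.
flat_path = [0.8] * 10
print(serial_scan_contrast(flat_path))  # 0.0

# A path that crosses a black line and exits it again: the down-step
# and the up-step cancel, so the total is again essentially zero.
crossing_path = [0.8, 0.8, 0.1, 0.1, 0.8, 0.8]
print(abs(serial_scan_contrast(crossing_path)) < 1e-9)  # True
```

No averaging over additional scans can change these zero totals, which is the point of the comparison between Figures 8a and 8b.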
Neural Dynamics of Form Perception
127
Although the present theory physically interprets the Land retinex theory, it does not by any means provide a complete description of color processing by the nervous system. Much further work needs to be done, for example, to characterize how visual preprocessing generates color-specific, as opposed to merely wavelength-sensitive, feature contour inputs into the featural filling-in syncytium (Zeki, 1983a, 1983b).
19. Related Data and Concepts About Illusory Contours

A variety of other workers have developed concepts based on their data that support our conception of boundary completion, although no one of them has explicitly posited the properties of the feature contour and boundary contour processes. Petry et al. (1983) wrote, for example, that "apparent brightness is influenced more by number of inducing elements, whereas apparent sharpness increases more with inducing element width.... Theoretical accounts of subjective contours must address both perceptual attributes" (p.169), in support of our discussion in Sections 11 and 12. Day (1983) wrote that "illusory contours ... are due primarily to the spread of induced contrast to partially delineated borders" (p.488), in support of our concept of diffusive filling-in (Section 5), but he did not describe either how the borders are completed or how the featural induction and spread are accomplished. Prazdny (1983) studied variants of the illusion in Figure 8a. He concluded that "simultaneous brightness contrast is not a cause of the illusion" (p.404) by replacing the black lines with alternating black and white rectangles on a grey background. In this way, he also demonstrated that illusory contours can be completed between scenic contours of opposite direction of contrast, as in Figure 2b, but he did not conclude from this that distinct boundary contour and feature contour processes exist. Instead, he concluded that "It remains to be determined which of the competing 'cognitive' theories offers the best explanation ... of subjective contours" (p.404). Our results suggest that a cognitive theory is not necessary to explain the basic phenomena about subjective contours, unless one reinterprets cognitive to mean any network computation whose results are sensitive to the global patterning of all inducing elements.

20. Cortical Data and Predictions
Although the analysis that led to the boundary contour system and feature contour system was fueled by perceptual data, it has gradually become clear that a natural neural interpretation can be given to the processing stages of these systems. This linkage is suggested herein to predict unknown but testable neurophysiological properties, to provide a perceptual interpretation of known neural data, and to enable future data about visual cortex to more sharply constrain the development of perceptual theories. We associate the early stages of left-monocular (MPL) and right-monocular (MPR) preprocessing in Figure 4 with the dynamics of the lateral geniculate nucleus, the first stages in the boundary contour system with the hypercolumns in striate cortex (Hubel and Wiesel, 1977), and the first stages in the feature contour system with the blobs in striate cortex (Hendrickson, Hunt, and Wu, 1981; Horton and Hubel, 1981). This interpretation is compatible with recent cortical data: The LGN projects directly to the hypercolumns as well as to the blobs (Livingstone and Hubel, 1982). The blobs are sensitive to color but not to orientation (Livingstone and Hubel, 1984), whereas the hypercolumns are sensitive to orientation but not to color (Hubel and Wiesel, 1977). Given this neural labeling, the theory predicts that the blobs and the hypercolumns activate testably different types of cortical interactions. These interactions do not necessarily occur within the striate cortex, although they must be triggered by signals from the blobs and hypercolumns. The blobs are predicted to initiate featural filling-in. Hence, a single blob should be able to elicit a spreading effect among cells encoding the same featural quality (Figure 3). By contrast, the hypercolumns are predicted to elicit boundary completion. Hence,
pairs of similarly oriented and aligned hypercolumns must be activated before boundary completion over intervening boundary-sensitive cells can be activated (Figure 11). In other words, blobs are predicted to cause an outwardly directed featural spreading, whereas hypercolumns are predicted to cause an inwardly directed boundary completion. Neural data that support our conception of how these interactions work are summarized below. Cells at an early stage in the boundary contour system are required to be sensitive to orientation and amount of contrast, but not to direction of contrast. Such contour-sensitive cells have been found in Area 17 of monkeys (Gouras and Krüger, 1979; Tanaka, Lee, and Creutzfeldt, 1983) as well as cats (Heggelund, 1981). These contour-sensitive cells are predicted to activate several stages of competition and cooperation that together contribute to the boundary completion process. The boundary completion process is predicted to be accomplished by a positive feedback exchange between cells reacting to long-range cooperation within an orientation and cells reacting to short-range competition between orientations (Figure 1). The competitive cells are predicted to occur at an earlier stage of cortical processing than the cooperative cells (Figure 18). These competitive cells are instrumental in generating a perpendicular end cut at the ends of lines (Figures 24 and 25). The cooperative cells are predicted to be segregated, possibly in distinct cortical laminae, according to the spatial range of their cooperative bandwidths (Figure 12). The recent data of von der Heydt et al. (1984) support two of these predictions. These authors have reported the existence of cells in Area 18 of the visual cortex that help to "extrapolate lines to connect parts of the stimulus which might belong to the same object" (p.1261).
These investigators found these cells by using visual images that induce a percept of illusory figures in humans, as in Figures 2 and 8. Concerning the existence of a cooperative boundary completion process between similarly oriented and spatially aligned cells, they write: Responses of cells in area 18 that required appropriately positioned and oriented luminance gradients when conventional stimuli were used could often be evoked also by the corresponding illusory contour stimuli ....The way widely separated picture elements contribute to a response resembles the function of logical gates (pp.1261-1262). By logical gates they mean that two or more appropriately positioned and oriented scenic contours are needed to activate a response from an intervening cell, as in Figure 11. Concerning the existence of a competitive end-cutting process, they write “The responses to stimuli with lines perpendicular to the cell’s preferred orientation reveal an unexpected new receptive field property” (p.1262). The deep issue raised by these data can be expressed as follows. Why do cells that usually react to scenic edges parallel to their orientational preference also react to line ends that are perpendicular to their orientational preference? We provide an explanation of this property in Sections 11 and 13. If we put these two types of experimental evidence together, the theory suggests that the contour-sensitive cells in Area 17 input to the cells that von der Heydt et al. (1984) have discovered in Area 18. A large number of physiological experiments can be designed to test this hypothesis, using stimuli such as those in Figure 2. For example, suppose that the contour-sensitive cells that would stimulate one end of the boundary completion process in response to a Kanizsa square are destroyed. Then the Area 18 cells that would normally be activated where the illusory boundary lies should remain silent. 
If these contour-sensitive cells could be reversibly inhibited, then the Area 18 cells should fire only when their triggering contour-sensitive cells in Area 17 are uninhibited. Informative experiments can also be done by selectively inhibiting boundary contour signals using stabilized image techniques. Suppose, for example, that the large circular boundary and the vertical boundary in Figure 24 are stabilized on the retina of a monkey. Then the cells that von der Heydt et al. discovered should stop firing at the corresponding Area 18 locations. This effect should also be reversible
when image stabilization is terminated. The net impact of the experiments of von der Heydt et al. is thus to provide strong support for the concept of an inwardly directed boundary completion process and an orthogonally oriented end-cutting process at the ends of lines, as well as a well-defined experimental methodology for testing finer aspects of these processes. Concerning the outwardly directed featural filling-in process, a number of predictions can be made. The cellular syncytium that subserves the featural spreading is predicted to possess membranes whose ability to passively, or electrotonically, spread activation can be gated shut by boundary contour signals (Figure 3). The syncytium is hypothesized to be an evolutionary homolog of the intercellular interactions that occur among the retinal horizontal layers of certain fish (Usui, Mitarai, and Sakakibara, 1983). A possible cortical mechanism of this feature contour syncytium is some form of dendrodendritic coupling. Any manipulation that inhibits signals from the boundary contour system to the feature contour system (pathways BCS → MBCL and BCS → MBCR of Figure 4) is predicted to release the syncytial flow, as well as to generate a percept of featural flow of colors and brightnesses. If all boundary contour signals are inhibited, so that no boundary restrictions of featural flow occur, then a functional ganzfeld exists within the feature contour system. A dramatic reduction in visual sensitivity should occur, even if the feature contour system is otherwise intact. An indirect behavioral test of how boundary contour signals restrict featural flow can be done using a stabilized image technique (Figure 24). Suppose that the large circular boundary and the vertical boundary in Figure 24 can be stabilized on the retina of a monkey.
Train a monkey to press the first lever for food when it sees the unstabilized figure, and to press the second lever to escape shock when it sees a figure with a red background containing two small red circles of different shades of red, as in the stabilized percept. Then stabilize the relevant contours of Figure 24 and test which lever the monkey presses. If it presses the second lever with greater frequency than in the unstabilized condition, then one has behavioral evidence that the monkey perceives the stabilized image much as humans do. Also carry out electrode recordings of von der Heydt et al. (1984) cells at Area 18 locations corresponding to the stabilized image contours. If these cells stop firing during stabilization and if the monkey presses the second lever more at these times, then a featural flow that is contained by boundary contour signals is strongly indicated. Figure 25 depicts a schematic top-down view of how boundary contour signals elicited by cortical hypercolumns could restrict the syncytial flow of featural quality elicited by cortical blobs. This flow does not necessarily occur among the blobs themselves. Figure 25 indicates, however, that the topographies of blobs and hypercolumns are well suited to serve as inputs to the cell syncytium. We suggest that the cell syncytium occurs somewhere between the blobs in Area 17 (also called V1) and the cells in Area V4 of the prestriate cortex (Zeki, 1983a, 1983b). The theory suggests that the cells of von der Heydt et al. (1984) project to the cell syncytium. Hence staining or electrophysiological techniques that reveal the projections of these cells may be used to locate the syncytium. These experiments are illustrative rather than exhaustive of the many that are suggested by the theory.
21. Concluding Remarks

By articulating the boundary-feature trade-off, our theory shows that a sharp distinction between the boundary contour system and the feature contour system is needed to discover the rules that govern either system. Paradoxical percepts like neon color spreading can then be explained as consequences of adaptive mechanisms that prevent observers from perceiving a flow of featural quality from all line ends and corners due to orientational uncertainty. The theory's instantiation of featural filling-in, in turn, arises from an analysis of how the nervous system compensates for having discounted
Figure 24. Contour stabilization leads to filling-in of color. When the edges of the large circle and the vertical line are stabilized on the retina, the red color (dots) outside the large circle envelopes the black and white hemi-disks except within the small red circles whose edges are not stabilized (Yarbus, 1967). The red inside the left circle looks brighter and the red inside the right circle looks darker than the enveloping red.
Figure 25. Predicted interactions due to signals from blobs and hypercolumns. (a) In the absence of boundary contour signals, each blob can initiate featural spreading to blob-activated cells of like featural quality in a light-dark, red-green, blue-yellow double-opponent system. The symbols L and R signify signals initiated within the left and right ocular dominance columns, respectively. The symbols r and g designate two different color systems; for example, the red and green double-opponent systems. The arrows indicate possible directions of featural filling-in. (b) An oriented boundary contour signal can be initiated from orientations at left-eye positions, right-eye positions, or both. The rectangular regions depict different orientationally tuned cells within a hypercolumn (Hubel and Wiesel, 1977). The shaded region is active. (c) These boundary contour signals are well positioned to attenuate the electrotonic flow of featural quality between contiguous perceptual positions. The shaded blob and hypercolumn regions are activated in the left figure. The arrows in the right figure illustrate how featural filling-in is restricted by the active boundary contour signal.
spurious illuminants, stabilized retinal veins, scotomas, and other imperfections of the retinal image. Once one accepts the fact that featural qualities can fill-in over discounted inputs, then the need for another contour system to restrict the featural flow seems inevitable. A careful study of these contour systems reveals that they imply a strong statement about both the computational units and the types of visual representations that are used in other approaches to visual perception. We claim that local computations of scenic luminances, although useful for understanding some aspects of early visual processing, cannot provide an adequate understanding of visual perception because most scenic luminances are discounted as spurious by the human visual system. We also posit that physical processes of featural filling-in and boundary completion occur, as opposed to merely formal correspondences between external scenes and internal representations. Many contemporary contributors to perception eschew such physical approaches in order to avoid the pitfalls of naive realism. Despite the physical concreteness of the contour system processes, these processes do not support a philosophy of naive realism. This can be seen most easily by considering how the activity patterns within the contour systems are related to the "conscious percepts" of the theory. For example, many perpendicular end cuts due to scenic line endings never reach consciousness in the theory. This property reflects the fact that the theory does not just rebuild the edges that exist "out there." Instead, the theory makes a radical break with classical notions of geometry by suggesting that a line is not even a collection of points. A line is, at least in part, the equilibrium set of a nonlinear cooperative-competitive dynamical feedback process. A line in the theory need not even form a connected set until it dynamically equilibrates, as Figures 20 and 21 demonstrate.
This property may have perceptual significance, because a boundary contour cannot effectively restrict featural filling-in to become visible until it can separate two regions of different featural contrast. Initial surges of boundary completion may thus be competitively squelched before they reach consciousness, as in metacontrast phenomena. In a similar vein, featural filling-in within a cell syncytium does not merely establish a point-to-point correspondence between the reflectances of a scene and corresponding positions within the cell syncytium. Until a boundary contour pattern is set up within the syncytium, the spatial domain within which featural contour inputs interact to influence prescribed syncytial cells is not even defined, let alone conscious. Perhaps the strongest disclaimer to a naive realism viewpoint derives from the fact that none of the contour system interactions that have been discussed in this article are assumed to correspond to conscious percepts. All of these interactions are assumed to be preprocessing stages that may or may not lead to a conscious color-and-form-in-depth percept at the binocular percept stage of Figure 4. As during binocular rivalry (Kaufman, 1974; Kulikowski, 1978), a contoured scene that is easily perceived during monocular viewing is not always perceived when it is binocularly viewed along with a discordant scene to the other eye. A conscious percept is synthesized at the theory's BP stage using output signals from the two pairs of monocular contour systems (Cohen and Grossberg, 1984a, 1984b; Grossberg, 1983a). The formal cells within the BP stage are sensitive to spatial scale, orientation, binocular disparity, and the spatial distribution of featural quality. Many BP cells that receive inputs from the MBCL and MBCR stages are not active in the BP percept.
Although the BP stage instantiates a physical process, this process represents an abstract context-sensitive representation of a scenic environment, not merely an environmental isomorphism. We believe that Area V4 of the prestriate cortex fulfills a similar function in vivo (Zeki, 1983a, 1983b). Even when a conscious representation is established at the BP stage, the information that is represented in this way is quite limited. For example, the process of seeing a form at the BP stage does not imply that we can recognize the objects within that form. We hypothesize that the boundary contour system sends signals in parallel to the monocular brightness and color stages (MBCL and MBCR in Figure 4) as well as
to an object recognition system. The top-down feedback from the object recognition system to the boundary contour system can provide "cognitive contour" signals that are capable of modulating the boundary completions that occur within the boundary contour system (Gregory, 1966; Grossberg, 1980, 1982, 1984b). Thus we envisage that two types of cooperative feedback, boundary completion signals and learned top-down expectancies, can monitor the synthesis of monocular boundary contours. For the same reasons that not all bottom-up activations of boundary contours become visible, not all top-down activations of boundary contours become visible. A boundary contour that is invisible at the BP stage can, however, have a strong effect on the object recognition system. "Seeing" a BP form percept does not imply a knowledge of where an object is in space, any more than it implies a knowledge of which object is being seen. Nonetheless, just as the same network laws are being used to derive networks for color and form perception and for object recognition, so too are these laws being used to analyse how observers learn to generate accurate movements in response to visual cues (Grossberg, 1978, 1985, in press; Grossberg and Kuperstein, 1985). This work on sensory-motor control suggests how a neural network as a whole can accurately learn to synthesize and calibrate sensory-motor transformations in real-time even though its individual cells cannot do so, and even if the cellular parameters from which these networks are built may be different across individuals, may change during development, and may be altered by partial injuries throughout life. Our most sweeping reply to the criticism of naive realism is thus that a single set of dynamical laws can be used, albeit in specialized wiring diagrams, for the explanation of data that, on the level of naive experience, could not seem to be more different.
Using such laws, the present theory promises to provide a significant synthesis of perceptual and neural data and theories. Spatial frequencies and oriented receptive fields are both necessary but not sufficient. The perceptual interpretation of the blobs and hypercolumns strengthens the arguments for parallel cortical processing, but the need for several stages of processing leading to a unitary percept also strengthens the arguments for hierarchical cortical processing. A role for propagated action potentials in the boundary contour system is balanced by a role for electrotonic processing in the feature contour system. Relatively local cortical processing is needed to compute receptive field properties, but relatively global cortical interactions are needed to generate unambiguous global percepts, such as those of perceptual boundaries, from ambiguous local cues. The deepest conceptual issue raised by the present results concerns the choice of perceptual units and neural design principles. The impoverished nature of the retinal image and a huge perceptual data base about visual illusions show that local computations of pointwise scenic luminances cannot provide an adequate understanding of visual perception. The boundary-feature trade-off suggests that the visual system is designed in a way that is quite different from any possible local computational theory. This insight promises to be as important for the design of future computer vision and robotics algorithms as it may be for progress in perceptual and neural theory.
APPENDIX

Dynamics of Boundary Formation
A network that instantiates the qualitative requirements described in the text will now be defined in stages, so that the basic properties of each stage can be easily understood. At each stage, we chose the simplest instantiation of the computational idea.

Oriented Masks

To define a mask centered at position (i,j) with orientation k, divide the rectangular receptive field of the mask into a left-rectangle L_ijk and a right-rectangle R_ijk. Suppose that all the masks sample a field of preprocessed inputs. Let S_pq equal the preprocessed input to the position (p,q) of this field. The output J_ijk from the mask at position (i,j) with orientation k is then defined by

J_ijk = ([U_ijk - α V_ijk]⁺ + [V_ijk - α U_ijk]⁺) / (1 + β(U_ijk + V_ijk)),   (A1)

where

U_ijk = Σ_{(p,q)∈L_ijk} S_pq,   (A2)

V_ijk = Σ_{(p,q)∈R_ijk} S_pq,   (A3)

and the notation [p]⁺ = max(p, 0). In (A1), term

[U_ijk - α V_ijk]⁺ > 0   (A4)

only if U_ijk/V_ijk > α. Because U_ijk measures the total input to the left rectangle L_ijk and V_ijk measures the total input to the right rectangle R_ijk, inequality (A4) says that the input to L_ijk exceeds that to R_ijk by the factor α. Parameter α (≥ 1) thus measures the relative contrast between the left and right halves of the receptive field. The sum of two terms in the numerator of (A1) says that J_ijk is sensitive to the amount of contrast, but not to the direction of contrast, received by L_ijk and R_ijk. The denominator term in (A1) enables J_ijk to compute a ratio scale in the limit where β(U_ijk + V_ijk) is much greater than 1.

Intraorientational Competition Between Positions

As in Figure 18, inputs J_ijk with a fixed orientation k activate potentials w_ijk with the same orientation via on-center off-surround interactions. To achieve a disinhibitory capability, all potentials w_ijk are also excited by the same tonically active input I. Suppose that the excitatory inputs are not large enough to saturate their potentials, but that the inhibitory inputs can shunt their potentials toward small values. Then

(d/dt) w_ijk = -w_ijk + I + f(J_ijk) - w_ijk Σ_{(p,q)} f(J_pqk) D_pqij,   (A5)

where D_pqij is the inhibitory interaction strength between positions (p,q) and (i,j), and f(J_ijk) is the input signal generated by J_ijk. Suppose, for simplicity, that

f(J_ijk) = γ J_ijk,   (A6)

where γ is a positive constant. Also suppose that w_ijk equilibrates rapidly to its inputs through time and is thus always approximately at equilibrium. Setting (d/dt) w_ijk = 0 in (A5), we find that

w_ijk = (I + f(J_ijk)) / (1 + Σ_{(p,q)} f(J_pqk) D_pqij).   (A7)
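The mask computation of (A1)-(A3) can be sketched in a few lines of Python (a minimal illustration; the parameter values and the toy input field below are our own choices, not values used in the simulations of the text):

```python
def mask_output(S, left, right, alpha=1.5, beta=0.1):
    """Oriented mask output J_ijk of (A1): sensitive to the amount of
    contrast but not to its direction, normalized toward a ratio scale.
    `left` and `right` list the (p, q) positions of L_ijk and R_ijk."""
    U = sum(S[p][q] for (p, q) in left)    # (A2): total input to L_ijk
    V = sum(S[p][q] for (p, q) in right)   # (A3): total input to R_ijk
    pos = lambda x: max(x, 0.0)            # [p]+ = max(p, 0)
    return (pos(U - alpha * V) + pos(V - alpha * U)) / (1.0 + beta * (U + V))

# A vertical light-dark edge: the left half-field receives strong input,
# the right half none, so the mask responds; exchanging the two halves
# (reversing the direction of contrast) gives exactly the same output.
S = [[1.0, 0.0],
     [1.0, 0.0]]
L = [(0, 0), (1, 0)]
R = [(0, 1), (1, 1)]
print(mask_output(S, L, R) > 0)                      # True
print(mask_output(S, L, R) == mask_output(S, R, L))  # True
```

The second printed line illustrates the direction-of-contrast insensitivity that the two rectified terms in the numerator of (A1) provide.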
Dipole Competition Between Perpendicular Orientations

Perpendicular potentials w_ijk and w_ijK elicit output signals that compete at their target potentials x_ijk and x_ijK, respectively (Figure 18). Assume that these output signals equal the potentials of w_ijk and w_ijK, which are always nonnegative by (A7), and that x_ijk and x_ijK respond quickly to these signals within their linear dynamical range. Then

x_ijk = w_ijk - w_ijK   (A8)

and

x_ijK = w_ijK - w_ijk.   (A9)

Output signals are, in turn, generated by x_ijk and x_ijK when they exceed a nonnegative threshold. Let this threshold equal zero and suppose that the output signals O_ijk = O(x_ijk) and O_ijK = O(x_ijK) grow linearly above threshold. Then

O_ijk = C[w_ijk - w_ijK]⁺   (A10)

and

O_ijK = C[w_ijK - w_ijk]⁺,   (A11)

where C is a positive constant and [p]⁺ = max(p, 0).

Interorientational Competition Within a Position

Let the outputs O_ijk, k = 1, 2, ..., n, be the inputs to an orientationally tuned on-center off-surround competition within each position. The potential y_ijk is excited by O_ijk and inhibited by all O_ijm, m ≠ k. Potential y_ijk therefore obeys the shunting on-center off-surround equation (Grossberg, 1983a)

(d/dt) y_ijk = -A y_ijk + (B - y_ijk) O_ijk - y_ijk Σ_{m≠k} O_ijm.   (A12)
Suppose that y_ijk also equilibrates rapidly to its inputs. Setting (d/dt) y_ijk = 0 in (A12) implies that

y_ijk = B O_ijk / (A + O_ij),   (A13)

where

O_ij = Σ_{m=1}^{n} O_ijm.   (A14)

By equation (A13), the total activity

y_ij = Σ_{k=1}^{n} y_ijk   (A15)

tends to be conserved because

y_ij = B O_ij / (A + O_ij).   (A16)
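The dipole rectification of (A10)-(A11) and the normalization implied by (A16) can be checked with a small numerical sketch (Python; all parameter values are arbitrary illustrative choices, not values from the text):

```python
def dipole_outputs(w_k, w_K, C=1.0):
    """Equations (A10)-(A11): perpendicular potentials compete
    subtractively, and only the rectified differences are output."""
    pos = lambda s: max(s, 0.0)
    return C * pos(w_k - w_K), C * pos(w_K - w_k)

def orientation_potentials(O, A=0.01, B=1.0):
    """Equation (A13): equilibrium of the orientational on-center
    off-surround competition within one position; O lists the
    dipole outputs O_ijk for k = 1, ..., n."""
    O_total = sum(O)  # (A14)
    return [B * O_k / (A + O_total) for O_k in O]

# At most one orientation of a perpendicular dipole pair emits a signal.
print(dipole_outputs(0.75, 0.25))  # (0.5, 0.0)

# Total activity y_ij tends to B (= 1.0 here) whenever the total input
# dominates A, so the competition normalizes activity across
# orientations, as (A16) asserts.
y = orientation_potentials([3.0, 1.0, 2.0])
print(round(sum(y), 2))  # 1.0
# Doubling every input barely changes the total; only the pattern of
# relative activities matters.
y2 = orientation_potentials([6.0, 2.0, 4.0])
print(round(sum(y2), 2))  # 1.0
```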
Thus if A is small compared to O_ij, then y_ij ≅ B.

Oriented Cooperation

As in Figure 19, if two (sets of) output signals f(y_ijk) and f(y_uvk) can trigger supraliminal activation of an intervening boundary completion potential z_pqk, then positive feedback from z_pqk to y_pqk can initiate a rapid completion of a boundary with orientation k between positions (i,j) and (u,v). The following equation illustrates a rule for activating a boundary completion potential z_ijk due to properly aligned pairs of outputs:

(d/dt) z_ijk = -z_ijk + g(Σ_{(p,q)} f(y_pqk) E^(k)_pqij) + g(Σ_{(p,q)} f(y_pqk) F^(k)_pqij).   (A17)

In (A17), g(s) is a signal function that becomes positive only when s is positive, and has a finite maximum value. A sum of two sufficiently positive g(s) terms in (A17) is needed to activate z_ijk above the firing threshold of its output signal h(z_ijk). The output signal function h(s) is chosen faster-than-linear, and with a large slope to help choose orientation k in position (i,j). Each sum

Σ_{(p,q)} f(y_pqk) E^(k)_pqij   and   Σ_{(p,q)} f(y_pqk) F^(k)_pqij

adds up outputs from a strip with orientation k that lies to one side or the other of position (i,j), as in Figure 11. The oriented kernels E^(k)_pqij and F^(k)_pqij accomplish this process of anisotropic averaging. A set of modestly large f(y_pqk) outputs within the bandwidth of E^(k)_pqij or F^(k)_pqij can thus have as much of an effect on z_ijk as a single larger f(y_pqk) output. This property contributes to the statistical nature of the boundary completion process. An equation in which the sum of g terms in (A17) is replaced by a product of g terms works just as well formally. At equilibrium, (A17) implies that

z_ijk = g(Σ_{(p,q)} f(y_pqk) E^(k)_pqij) + g(Σ_{(p,q)} f(y_pqk) F^(k)_pqij).   (A18)

The effect of boundary completion feedback signals h(z_ijk) on the (i,j) position is described by changing equation (A7) to

w_ijk = (I + f(J_ijk) + h(z_ijk)) / (1 + Σ_{(p,q)} [f(J_pqk) + h(z_pqk)] D_pqij).   (A19)

Equations (A1), (A19), (A10), (A13), and (A18), respectively, define the equilibrium of the network, up to parameter choices. This system is summarized below for completeness:

J_ijk = ([U_ijk - α V_ijk]⁺ + [V_ijk - α U_ijk]⁺) / (1 + β(U_ijk + V_ijk)),

w_ijk = (I + f(J_ijk) + h(z_ijk)) / (1 + Σ_{(p,q)} [f(J_pqk) + h(z_pqk)] D_pqij),

O_ijk = C[w_ijk - w_ijK]⁺,

y_ijk = B O_ijk / (A + Σ_{m=1}^{n} O_ijm),

and

z_ijk = g(Σ_{(p,q)} f(y_pqk) E^(k)_pqij) + g(Σ_{(p,q)} f(y_pqk) F^(k)_pqij).
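The logical-gate character of (A18), whereby a boundary completion cell can fire only when both of its flanking strips are sufficiently active, can be sketched as follows (Python; the particular saturating g, the thresholds, and the pre-summed strip inputs are simplified stand-ins for the kernels E^(k)_pqij, F^(k)_pqij and the signal functions g and h of the text):

```python
def completion_potential(left_strip, right_strip, threshold=0.5):
    """Equilibrium boundary completion potential z_ijk of (A18).
    Here g is a thresholded, saturating signal function with maximum
    value 1; each argument is the summed, kernel-weighted input from
    one oriented strip flanking position (i, j)."""
    g = lambda s: min(max(s - threshold, 0.0), 1.0)
    return g(sum(left_strip)) + g(sum(right_strip))

def fires(z, firing_threshold=1.0):
    """Output h(z) fires only if both g terms contributed, since a
    single g term cannot exceed the firing threshold on its own
    (the two-sided requirement of Figure 11)."""
    return z > firing_threshold

# One-sided input, however strong, cannot drive boundary completion...
print(fires(completion_potential([2.0, 1.0], [0.0])))  # False
# ...but properly aligned inputs on both sides of (i, j) can.
print(fires(completion_potential([2.0, 1.0], [1.8])))  # True
```

Because g saturates at 1.0 while the firing threshold is 1.0, both strips must contribute, which mirrors the requirement that two appropriately positioned and oriented scenic contours are needed to activate an intervening cell.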
Although these equilibrium equations compactly summarize the computational logic of competitive-cooperative boundary contour interactions, a full understanding of the information processing capabilities of this network requires a study of the corresponding differential equations, not just their equilibrium values. The equations for feature contour signals and diffusive filling-in are described in Cohen and Grossberg (1984b).
REFERENCES

Arend, L.E., Buehler, J.N., and Lockhead, G.R., Difference information in brightness perception. Perception and Psychophysics, 1971, 9, 367-370.
Beck, J., Prazdny, K., and Rosenfeld, A., A theory of textural segmentation. In J. Beck, B. Hope, and A. Rosenfeld (Eds.), Human and machine vision. New York: Academic Press, 1983, pp.1-38.
Bergström, S.S., A paradox in the perception of luminance gradients, I. Scandinavian Journal of Psychology, 1966, 7, 209-224.
Bergström, S.S., A paradox in the perception of luminance gradients, II. Scandinavian Journal of Psychology, 1967, 8, 25-32 (a).
Bergström, S.S., A paradox in the perception of luminance gradients, III. Scandinavian Journal of Psychology, 1967, 8, 33-37 (b).
Biederman, I., Personal communication, 1984.
Blake, R., Sloane, M., and Fox, R., Further developments in binocular summation. Perception and Psychophysics, 1981, 30, 266-276.
Boynton, R.M., Color, hue, and wavelength. In E.C. Carterette and M.P. Friedman (Eds.), Handbook of perception: Seeing, Vol. 5. New York: Academic Press, 1975, pp.301-347.
Carpenter, G.A. and Grossberg, S., Adaptation and transmitter gating in vertebrate photoreceptors. Journal of Theoretical Neurobiology, 1981, 1, 1-42.
Carpenter, G.A. and Grossberg, S., Dynamic models of neural systems: Propagated signals, photoreceptor transduction, and circadian rhythms. In J.P.E. Hodgson (Ed.), Oscillations in mathematical biology. New York: Springer-Verlag, 1983, pp.102-196.
Cogan, A.L., Monocular sensitivity during binocular viewing. Vision Research, 1982, 22, 1-16.
Cogan, A.L., Silverman, G., and Sekuler, R., Binocular summation in detection of contrast flashes. Perception and Psychophysics, 1982, 31, 330-338.
Cohen, M.A. and Grossberg, S., Neural dynamics of binocular form perception. Neuroscience Abstracts, 1983, 13, No. 353.8.
Cohen, M.A. and Grossberg, S., Some global properties of binocular resonances: Disparity matching, filling-in, and figure-ground synthesis. In P. Dodwell and T. Caelli (Eds.), Figural synthesis. Hillsdale, NJ: Erlbaum, 1984 (a).
Cohen, M.A. and Grossberg, S., Neural dynamics of brightness perception: Features, boundaries, diffusion, and resonance. Perception and Psychophysics, 1984, 36, 428-456 (b).
Coren, S., When "filling-in" fails. Behavioral and Brain Sciences, 1983, 6, 661-662.
Cornsweet, T.N., Visual perception. New York: Academic Press, 1970.
Curtis, D.W. and Rule, S.J., Fechner's paradox reflects a nonmonotone relation between binocular brightness and luminance. Perception and Psychophysics, 1980, 27, 263-266.
Day, R.H., Neon color spreading, partially delineated borders, and the formation of illusory contours. Perception and Psychophysics, 1983, 34, 488-490.
DeValois, R.L. and DeValois, K.K., Neural coding of color. In E.C. Carterette and M.P. Friedman (Eds.), Handbook of perception: Seeing, Vol. 5. New York: Academic Press, 1975, pp.117-166.
Gellatly, A.R.H., Perception of an illusory triangle with masked inducing figure. Perception, 1980, 9, 599-602.
Neural Dynamics of Form Perception
Gerrits, H.J.M., de Haan, B., and Vendrick, A.J.H., Experiments with retinal stabilized images: Relations between the observations and neural data. Vision Research, 1966, 6, 427-440.
Gerrits, H.J.M. and Timmermann, J.G.M.E.N., The filling-in process in patients with retinal scotomata. Vision Research, 1969, 9, 439-442.
Gerrits, H.J.M. and Vendrick, A.J.H., Simultaneous contrast, filling-in process and information processing in man’s visual system. Experimental Brain Research, 1970, 11, 411-430.
Glass, L. and Switkes, E., Pattern recognition in humans: Correlations which cannot be perceived. Perception, 1976, 5, 67-72.
Gouras, P. and Krüger, J., Responses of cells in foveal visual cortex of the monkey to pure color contrast. Journal of Neurophysiology, 1979, 42, 850-860.
Graham, N., The visual system does a crude Fourier analysis of patterns. In S. Grossberg (Ed.), Mathematical psychology and psychophysiology. Providence, RI: American Mathematical Society, 1981, pp.1-16.
Graham, N. and Nachmias, J., Detection of grating patterns containing two spatial frequencies: A test of single-channel and multiple-channel models. Vision Research, 1971, 11, 251-259.
Gregory, R.L., Eye and brain. New York: McGraw-Hill, 1966.
Grossberg, S., Contour enhancement, short term memory, and constancies in reverberating neural networks. Studies in Applied Mathematics, 1973, 52, 217-257.
Grossberg, S., Adaptive pattern classification and universal recoding, II: Feedback, expectation, olfaction, and illusions. Biological Cybernetics, 1976, 23, 187-202.
Grossberg, S., A theory of human memory: Self-organization and performance of sensory-motor codes, maps, and plans. In R. Rosen and F. Snell (Eds.), Progress in theoretical biology, Vol. 5. New York: Academic Press, 1978, pp.233-374.
Grossberg, S., How does a brain build a cognitive code? Psychological Review, 1980, 87, 1-51.
Grossberg, S., Adaptive resonance in development, perception, and cognition. In S. Grossberg (Ed.), Mathematical psychology and psychophysiology. Providence, RI: American Mathematical Society, 1981, pp.107-156.
Grossberg, S., Studies of mind and brain: Neural principles of learning, perception, development, cognition, and motor control. Boston: Reidel Press, 1982.
Grossberg, S., The quantized geometry of visual space: The coherent computation of depth, form, and lightness. Behavioral and Brain Sciences, 1983, 6, 625-692 (a).
Grossberg, S., Neural substrates of binocular form perception: Filtering, matching, diffusion, and resonance. In E. Basar, H. Flohr, H. Haken, and A.J. Mandell (Eds.), Synergetics of the brain. New York: Springer-Verlag, 1983 (b), pp.274-298.
Grossberg, S., Outline of a theory of brightness, color, and form perception. In E. Degreef and J. van Buggenhaut (Eds.), Trends in mathematical psychology. Amsterdam: North-Holland, 1984 (a), pp.59-86.
Grossberg, S., Some psychophysiological and pharmacological correlates of a developmental, cognitive, and motivational theory. In R. Karrer, J. Cohen, and P. Tueting (Eds.), Brain and information: Event related potentials. New York: New York Academy of Sciences, 1984 (b), pp.58-151.
Grossberg, S., The adaptive self-organization of serial order in behavior: Speech, language, and motor control. In E.C. Schwab and H.C. Nusbaum (Eds.), TITLE???. New York: Academic Press, 1985.
Grossberg, S., The role of learning in sensory-motor control. Behavioral and Brain Sciences, in press.
Grossberg, S. and Cohen, M., Dynamics of brightness and contour perception. Supplement to Investigative Ophthalmology and Visual Science, 1984, 25, 71.
Grossberg, S. and Kuperstein, M., Neural dynamics of adaptive sensory-motor control: Ballistic eye movements. Amsterdam: North-Holland, 1985.
Grossberg, S. and Mingolla, E., Neural dynamics of perceptual grouping: Textures, boundaries, and emergent segmentations. Perception and Psychophysics, 1985, 38, 141-171.
Hamada, J., Antagonistic and non-antagonistic processes in the lightness perception. Proceedings of the XXII international congress of psychology, Leipzig, July, 1980.
Heggelund, P., Receptive field organization of complex cells in cat striate cortex. Experimental Brain Research, 1981, 42, 99-107.
Heggelund, P. and Krekling, S., Edge dependent lightness distributions at different adaptation levels. Vision Research, 1976, 16, 493-496.
Helmholtz, H.L.F. von, Treatise on physiological optics, J.P.C. Southall (Translator and Editor). New York: Dover, 1962.
Hendrickson, A.E., Hunt, S.P., and Wu, J.-Y., Immunocytochemical localization of glutamic acid decarboxylase in monkey striate cortex. Nature, 1981, 292, 605-607.
Horton, J.C. and Hubel, D.H., Regular patchy distribution of cytochrome oxidase staining in primary visual cortex of macaque monkey. Nature, 1981, 292, 762-764.
Hubel, D.H. and Livingstone, M.S., Regions of poor orientation tuning coincide with patches of cytochrome oxidase staining in monkey striate cortex. Neuroscience Abstracts, 1981, 118.12.
Hubel, D.H. and Wiesel, T.N., Functional architecture of macaque monkey visual cortex. Proceedings of the Royal Society of London (B), 1977, 198, 1-59.
Julesz, B., Foundations of cyclopean perception. Chicago: University of Chicago Press, 1971.
Kanizsa, G., Contours without gradients or cognitive contours? Italian Journal of Psychology, 1974, 1, 93-113.
Kanizsa, G., Subjective contours. Scientific American, 1976, 234, 48-64.
Kaufman, L., Sight and mind: An introduction to visual perception. New York: Oxford University Press, 1974.
Kennedy, J.M., Illusory contours and the ends of lines. Perception, 1978, 7, 605-607.
Kennedy, J.M., Subjective contours, contrast, and assimilation. In C.F. Nodine and D.F. Fisher (Eds.), Perception and pictorial representation. New York: Praeger Press, 1979.
Kennedy, J.M., Illusory brightness and the ends of petals: Changes in brightness without aid of stratification or assimilation effects. Perception, 1981, 10, 583-585.
Kennedy, J.M. and Ware, C., Illusory contours can arise in dot figures. Perception, 1978, 7, 191-194.
Krauskopf, J., Effect of retinal image stabilization on the appearance of heterochromatic targets. Journal of the Optical Society of America, 1963, 53, 741-744.
Kulikowski, J.J., Limit of single vision in stereopsis depends on contour sharpness. Nature, 1978, 275, 126-127.
Land, E.H., The retinex theory of color vision. Scientific American, 1977, 237, 108-128.
Leeper, R., A study of a neglected portion of the field of learning--the development of sensory organization. Journal of Genetic Psychology, 1935, 46, 41-75.
Legge, G.E. and Rubin, G.S., Binocular interactions in suprathreshold contrast perception. Perception and Psychophysics, 1981, 30, 49-61.
Levelt, W.J.M., On binocular rivalry. Soesterberg: Institute for Perception RVO-TNO, 1965.
Livingstone, M.S. and Hubel, D.H., Thalamic inputs to cytochrome oxidase-rich regions in monkey visual cortex. Proceedings of the National Academy of Sciences, 1982, 79, 6098-6101.
Livingstone, M.S. and Hubel, D.H., Anatomy and physiology of a color system in the primate visual cortex. Journal of Neuroscience, 1984, 4, 309-356.
Mingolla, E. and Grossberg, S., Dynamics of contour completion: Illusory figures and neon color spreading. Supplement to Investigative Ophthalmology and Visual Science, 1984, 25, 71.
Mollon, J.D. and Sharpe, L.T. (Eds.), Colour vision. New York: Academic Press, 1983.
O’Brien, V., Contour perception, illusion, and reality. Journal of the Optical Society of America, 1958, 48, 112-119.
Parks, T.E., Subjective figures: Some unusual concomitant brightness effects. Perception, 1980, 9, 239-241.
Parks, T.E. and Marks, W., Sharp-edged versus diffuse illusory circles: The effects of varying luminance. Perception and Psychophysics, 1983, 33, 172-176.
Petry, S., Harbeck, A., Conway, J., and Levey, J., Stimulus determinants of brightness and distinctions of subjective contours. Perception and Psychophysics, 1983, 34, 169-174.
Prazdny, K., Illusory contours are not caused by simultaneous brightness contrast. Perception and Psychophysics, 1983, 34, 403-404.
Pritchard, R.M., Stabilized images on the retina. Scientific American, 1961, 204, 72-78.
Pritchard, R.M., Heron, W., and Hebb, D.O., Visual perception approached by the method of stabilized images. Canadian Journal of Psychology, 1960, 14, 67-77.
Rauschecker, J.P.J., Campbell, F.W., and Atkinson, J., Colour opponent neurones in the human visual system. Nature, 1973, 245, 42-45.
Redies, C. and Spillmann, L., The neon color effect in the Ehrenstein illusion. Perception, 1981, 10, 667-681.
Riggs, L.A., Ratliff, F., Cornsweet, J.C., and Cornsweet, T.N., The disappearance of steadily fixated visual test objects. Journal of the Optical Society of America, 1953, 43, 495-501.
Tanaka, M., Lee, B.B., and Creutzfeldt, O.D., Spectral tuning and contour representation in area 17 of the awake monkey. In J.D. Mollon and L.T. Sharpe (Eds.), Colour vision. New York: Academic Press, 1983, pp.269-276.
Todorović, D., Brightness perception and the Craik-O’Brien-Cornsweet effect. Unpublished M.A. Thesis. Storrs: University of Connecticut, 1983.
Usui, S., Mitarai, G., and Sakakibara, M., Discrete nonlinear reduction model for horizontal cell response in the carp retina. Vision Research, 1983, 23, 413-420.
Van den Brink, G. and Keemink, C.J., Luminance gradients and edge effects. Vision Research, 1976, 16, 155-159.
Van Tuijl, H.F.J.M., A new visual illusion: Neonlike color spreading and complementary color induction between subjective contours. Acta Psychologica, 1975, 39, 441-445.
Van Tuijl, H.F.J.M. and de Weert, C.M.M., Sensory conditions for the occurrence of the neon spreading illusion. Perception, 1979, 8, 211-215.
Van Tuijl, H.F.J.M. and Leeuwenberg, E.L.J., Neon color spreading and structural information measures. Perception and Psychophysics, 1979, 25, 269-284.
Von der Heydt, R., Peterhans, E., and Baumgartner, G., Illusory contours and cortical neuron responses. Science, 1984, 224, 1260-1262.
Ware, C., Coloured illusory triangles due to assimilation. Perception, 1980, 9, 103-107.
Yarbus, A.L., Eye movements and vision. New York: Plenum Press, 1967.
Zeki, S., Colour coding in the cerebral cortex: The reaction of cells in monkey visual cortex to wavelengths and colours. Neuroscience, 1983, 9, 741-765 (a).
Zeki, S., Colour coding in the cerebral cortex: The responses of wavelength-selective and colour coded cells in monkey visual cortex to changes in wavelength composition. Neuroscience, 1983, 9, 767-791 (b).
Chapter 3
NEURAL DYNAMICS OF PERCEPTUAL GROUPING: TEXTURES, BOUNDARIES, AND EMERGENT SEGMENTATIONS
Preface
This Chapter illustrates our belief that, once a mind-brain theory has probed sufficiently deeply, its further development may proceed in an evolutionary, rather than a revolutionary, way. Although the theory described in Chapter 2 was derived to deal with issues and data concerning boundary formation and featural filling-in, a modest refinement of the theory also deals with many other phenomena about perceptual grouping and textural segmentation. Several of our analyses of these grouping phenomena are contained in this Chapter. Thus unlike traditional artificial intelligence models, each of whose steps forward requires another clever trick by a programmer in an endless series of tricks that never seem to add up to a theory, in the present type of analysis, once a theory has been derived, its emergent properties continue to teach us surprising new things. In addition to its competence in textural grouping, the present theory is also competent to provide insights into surface perception, including shape-from-shading. These applications make critical use of our revolutionary claim that all boundaries are invisible, and that they gain visibility by supporting filled-in featural contrast differences within the compartments which boundaries form within the Feature Contour System. The present article documents our belief that the hypercolumns in visual cortex should not be viewed as part of an orientation system, as many visual neurophysiologists are wont to do. We argue, instead, that the hypercolumns form part of a boundary completion and segmentation system. This is not a minor change in emphasis, because many emergent boundaries span regions of a scenic image which do not contain any oriented contrasts whatsoever.
A number of popular models of visual sharpening and recognition, including the Boltzmann machine and various associative learning machines, assume the existence of a cost function which the system acts to minimize. In contrast, we believe that many neural systems do not attempt to minimize a cost function. (See, however, Chapter 5 of Volume I.) Instead, a circuit like the CC Loop spontaneously discovers a coherent segmentation of a scene by closing its own internal cooperative-competitive feedback loops. A circuit like an ART machine discovers and manipulates for itself those “costs” which are appropriate to a particular input environment in the form of its top-down templates, or critical feature patterns (Volume I and Chapters 6 and 7). Although models which utilize explicit cost functions may have useful applications in technology, as models of brain processes we consider them to be an inappropriate application of 19th century linear physical Hamiltonian thinking. We advocate instead the use of 20th century nonlinear biological dissipative systems, derived and developed on their own terms as a direct expression of a truly biological intuition.
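To make the contrast concrete, the following sketch simulates a minimal shunting on-center off-surround feedback network of the general kind analysed in Grossberg (1973). With a faster-than-linear feedback signal f(x) = x^2, the network contrast-enhances its initial pattern and stores the winning activity in short term memory simply by closing its own cooperative-competitive feedback loop; no cost function is ever written down or evaluated. The network size, decay and saturation parameters, and the choice of a quadratic signal function are our own illustrative assumptions, not values taken from the original papers.

```python
# Shunting recurrent competitive field: each cell excites itself and
# inhibits all others through the signal function f. With f faster than
# linear, the feedback loop quenches all but the largest activity.

def f(x):
    return x * x  # faster-than-linear feedback signal (illustrative choice)

def simulate(x, decay=0.1, ceiling=1.0, dt=0.01, steps=20000):
    """Euler-integrate dx_i/dt = -A x_i + (B - x_i) f(x_i) - x_i sum_{k!=i} f(x_k)."""
    x = list(x)
    n = len(x)
    for _ in range(steps):
        s = sum(f(v) for v in x)  # total recurrent feedback signal
        dx = [-decay * x[i] + (ceiling - x[i]) * f(x[i]) - x[i] * (s - f(x[i]))
              for i in range(n)]
        x = [x[i] + dt * dx[i] for i in range(n)]
    return x

initial = [0.2, 0.5, 0.3, 0.1, 0.25]
final = simulate(initial)
# The unit with the largest initial activity survives; the others are
# quenched -- a segmentation-like decision reached purely through the
# network's own feedback dynamics.
```

The equilibrium emerges from the interaction terms themselves: no programmer-supplied objective is minimized, which is the point of the contrast drawn above.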
Perception and Psychophysics 38, 141-171 (1985)
© 1985 The Psychonomic Society, Inc. Reprinted by permission of the publisher
NEURAL DYNAMICS OF PERCEPTUAL GROUPING: TEXTURES, BOUNDARIES, AND EMERGENT SEGMENTATIONS
Stephen Grossberg† and Ennio Mingolla‡
Abstract
A real-time visual processing theory is used to analyse and explain a wide variety of perceptual grouping and segmentation phenomena, including the grouping of textured images, randomly defined images, and images built up from periodic scenic elements. The theory explains how “local” feature processing and “emergent” features work together to segment a scene, how segmentations may arise across image regions which do not contain any luminance differences, how segmentations may override local image properties in favor of global statistical factors, and why segmentations that powerfully influence object recognition may be barely visible or totally invisible. Network interactions within a Boundary Contour System (BCS), a Feature Contour System (FCS), and an Object Recognition System (ORS) are used to explain these phenomena. The BCS is defined by a hierarchy of orientationally tuned interactions, which can be divided into two successive subsystems, called the OC Filter and the CC Loop. The OC Filter contains two successive stages of oriented receptive fields which are sensitive to different properties of image contrasts. The OC Filter generates inputs to the CC Loop, which contains successive stages of spatially short-range competitive interactions and spatially long-range cooperative interactions. Feedback between the competitive and cooperative stages synthesizes a global context-sensitive segmentation from among the many possible groupings of local featural elements. The properties of the BCS provide a unified explanation of several ostensibly different Gestalt rules. The BCS also suggests explanations and predictions concerning the architecture of the striate and prestriate visual cortices. The BCS embodies new ideas concerning the foundations of geometry, on-line statistical decision theory, and the resolution of uncertainty in quantum measurement systems. Computer simulations establish the formal competence of the BCS as a perceptual grouping system.
The properties of the BCS are compared with probabilistic and artificial intelligence models of segmentation. The total network suggests a new approach to the design of computer vision systems, and promises to provide a universal set of rules for perceptual grouping of scenic edges, textures, and smoothly shaded regions.
† Supported in part by the Air Force Office of Scientific Research (AFOSR 85-0149) and the Army Research Office (DAAG-29-85-K-0095).
‡ Supported in part by the Air Force Office of Scientific Research (AFOSR 85-0149).
1. Introduction: Towards A Universal Set of Rules for Perceptual Grouping
The visual system segments optical input into regions that are separated by perceived contours or boundaries. This rapid, seemingly automatic, early step in visual processing is difficult to characterize, largely because many perceived contours have no obvious correlates in the optical input. A contour in a pattern of luminances is generally defined as a spatial discontinuity in luminance. While usually sufficient, however, such discontinuities are by no means necessary for sustaining perceived contours. Regions separated by visual contours also occur in the presence of: statistical differences in textural qualities such as orientation, shape, density, or color (Beck, 1966a, 1966b, 1972, 1982, 1983; Beck, Prazdny, and Rosenfeld, 1983), binocular matching of elements of differing disparities (Julesz, 1960), accretion and deletion of texture elements in moving displays (Kaplan, 1969), and in classical “subjective contours” (Kanizsa, 1955). The extent to which the types of perceived contours just named involve the same visual processes as those triggered by luminance contours is not obvious, although the former are certainly as perceptually real and generally as vivid as the latter. Perceptual contours arising at boundaries of regions with differing statistical distributions of featural qualities have been studied in great detail (Beck, 1966a, 1966b, 1972, 1982, 1983; Beck, Prazdny, and Rosenfeld, 1983; Caelli, 1982, 1983; Caelli and Julesz, 1979). Two findings of this research are especially salient. First, the visual system’s segmentation of the scenic input occurs rapidly throughout all regions of that input, in a manner often described as “preattentive.” That is, subjects generally describe boundaries in a consistent manner when exposure times are short (under 200 msec) and without prior knowledge of the regions in a display at which boundaries are likely to occur.
Thus any theoretical account of boundary extraction for such displays must explain how early “data driven” processes rapidly converge on boundaries wherever they occur. The second finding of the experimental work on textures complicates the implications of the first, however: the textural segmentation process is exquisitely context-sensitive. That is, a given texture element at a given location can be part of a variety of larger groupings, depending on what surrounds it. Indeed, the precise determination even of what acts as an element at a given location can depend on patterns at nearby locations. One of the greatest sources of difficulty in understanding visual perception and in designing fast object recognition systems is such context-sensitivity of perceptual units. Since the work of the Gestaltists (Wertheimer, 1923), it has been widely recognized that local features of a scene, such as edge positions, disparities, lengths, orientations, and contrasts, are perceptually ambiguous, but that combinations of these features can be quickly grouped by a perceiver to generate a clear separation between figures, and between figure and ground. Indeed, a figure within a textured scene often seems to “pop out” from the ground (Neisser, 1967). The “emergent” features by which an observer perceptually groups the “local” features within a scene are sensitive to the global structuring of textural elements within the scene. The fact that these emergent perceptual units, rather than local features, are used to group a scene carries with it the possibility of scientific chaos. If every scene can define its own context-sensitive units, then perhaps object perception can only be described in terms of an unwieldy taxonomy of scenes and their unique perceptual units. One of the great accomplishments of the Gestaltists was to suggest a short list of rules for perceptual grouping that helped to organize many interesting examples.
As is often the case in pioneering work, the rules were neither always obeyed nor exhaustive. No justification for the rules was given other than their evident plausibility. More seriously for practical applications, no effective computational algorithms were given to instantiate the rules. Many workers since the Gestaltists have made important progress in advancing our understanding of perceptual grouping processes. For example, Sperling (1970), Julesz
(1971), and Dev (1975) introduced algorithms for using disparity cues to coherently separate figure from ground in random dot stereograms. Later workers such as Marr and Poggio (1976) have studied similar algorithms. Caelli (1982, 1983) has emphasized the importance of the conjoint action of orientation and spatial frequency tuning in the filtering operations that preprocess textured images. Caelli and Dodwell (1982), Dodwell (1983), and Hoffman (1970) have recommended the use of Lie group vector fields as a tool for grouping together orientational cues across perceptual space. Caelli and Julesz (1979) have presented evidence that “first order statistics of textons” are used to group textural elements. The term “textons” designates the features that are to be statistically grouped. This view supports a large body of work by Beck and his colleagues (Beck, 1966a, 1966b, 1972, 1982, 1983; Beck, Prazdny, and Rosenfeld, 1983), who have introduced a remarkable collection of ingenious textural displays that they have used to determine some of the factors that control textural grouping properties. The collective effect of these and other contributions has been to provide a sophisticated experimental literature about textural grouping that has identified the main properties that need to be considered. What has not been achieved is a deep analysis of the design principles and mechanisms that lie behind the properties of perceptual grouping. Expressed in another way, what is missing is the raison d’être for textural grouping and a computational framework that dynamically explains how textural elements are grouped in real-time into easily separated figures and ground. One manifestation of this gap in contemporary understanding can be found in the image processing models that have been developed by workers in artificial intelligence.
In this approach, curves are analysed using different models from those that are used to analyse textures, and textures are analysed using different models from the ones used to analyse surfaces (Horn, 1977; Marr and Hildreth, 1980). All of these models are built up using geometrical ideas--such as surface normal, curvature, and the Laplacian--that were used to study visual perception during the nineteenth century (Ratliff, 1965). These geometrical ideas were originally developed to analyse local properties of physical processes. By contrast, the visual system’s context-sensitive mechanisms routinely synthesize figural percepts that are not reducible to local luminance differences within a scenic image. Such emergent properties are not just the effect of local geometrical transformations. Our recent work suggests that nineteenth century geometrical ideas are fundamentally inadequate to characterize the designs that make biological visual systems so efficient (Carpenter and Grossberg, 1981, 1983; Cohen and Grossberg, 1984a, 1984b; Grossberg, 1983a, 1983b, 1984a, 1985; Grossberg and Mingolla, 1985, 1986). This claim arises from the discovery of new mechanisms that are not designed to compute local geometrical properties of a scenic image. These mechanisms are defined by parallel and hierarchical interactions within very large networks of interacting neurons. The visual properties that these equations compute emerge from network interactions, rather than from local transformations. A surprising consequence of our analysis is that the same mechanisms which are needed to achieve a biologically relevant understanding of how scenic edges are internally represented also respond intelligently to textured images, smoothly shaded images, and combinations thereof. These new designs thus promise to provide a universal set of rules for the pre-attentive perceptual grouping processes that feed into depthful form perception and object recognition processes.
The complete development of these designs will require a major scientific effort. The present article makes two steps in that direction. The first goal of the article is to indicate how these new designs render transparent properties of perceptual grouping which previously were effectively manipulated by a small number of scientists, notably Jacob Beck. A primary goal of this article is thus to provide a dynamical explanation of recent textural displays from the Beck school. Beck and his colleagues have gone far in determining which aspects of textures tend to group and under what conditions. Our
work sheds light on how such segmentation may be implemented by the visual system. The results of Glass and Switkes (1976) on grouping of statistically defined percepts and of Gregory and Heard (1979) on border locking during the café wall illusion will also be analysed using the same ideas. The second goal of the article is to report computer simulations that illustrate the theory’s formal competence for generating perceptual groupings that strikingly resemble human grouping properties. Our theory first introduced the distinction between the Boundary Contour System and the Feature Contour System to deal with paradoxical data concerning brightness, color, and form perception. These two systems extract two different types of contour-sensitive information--called Boundary Contour signals and Feature Contour signals--at an early processing stage. The Boundary Contour signals are transformed through successive processing stages within the Boundary Contour System into coherent boundary structures. These boundary structures give rise to topographically organized output signals to the Feature Contour System (Figure 1). Feature Contour signals are sensitive to luminance and hue differences within a scenic image. These signals activate the same processing stage within the Feature Contour System that receives boundary signals from the Boundary Contour System. The feature contour signals here initiate the filling-in processes whereby brightnesses and colors spread until they either hit their first Boundary Contour or are attenuated by their spatial spread. While earlier work examined the role of the Boundary Contour System in the synthesis of individual contours, whether “real” or “illusory,” its rules also account for much of the segmentation of textured scenes into grouped regions separated by perceived contours. Accordingly, Sections 2-9 of this paper review the main points of the theory with respect to their implications for perceptual grouping.
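The filling-in step just described--featural signals spreading until they reach their first Boundary Contour--can be caricatured in one dimension by a diffusion whose nearest-neighbor coupling is gated off wherever a boundary signal stands. The grid size, rate constant, and gating rule below are our own illustrative choices; the sketch conveys only the qualitative behavior, not the theory's actual equations.

```python
# 1-D caricature of featural filling-in: activity diffuses between
# neighboring cells, except across positions flagged as Boundary Contours.

def fill_in(feature, boundary, rate=0.25, steps=2000):
    """feature: initial featural inputs per cell.
    boundary[i] == True blocks the spread between cells i and i+1."""
    x = list(feature)
    for _ in range(steps):
        flux = [0.0 if boundary[i] else rate * (x[i + 1] - x[i])
                for i in range(len(x) - 1)]
        for i, f in enumerate(flux):
            x[i] += f      # activity flows down the local gradient...
            x[i + 1] -= f  # ...but never across a Boundary Contour

    return x

# Feature Contour signals survive only next to the patch edges (the patch
# interior has been discounted); boundaries sit between cells 2|3 and 6|7.
feature = [0.0, 0.0, 0.0, 0.8, 0.0, 0.0, 0.8, 0.0, 0.0]
boundary = [False, False, True, False, False, False, True, False]
filled = fill_in(feature, boundary)
# The bounded middle compartment equilibrates to a uniform level, while
# no activity leaks across either boundary into the flanking compartments.
```

Running this, the interior cells of the bounded compartment take on a single shared value determined by the edge signals, which is the qualitative signature of filling-in described above.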
Sections 10-15 and 17-19 then examine in detail the major issues in grouping research to date and describe our solutions qualitatively. Section 16 presents computer simulations showing how our model synthesizes context-sensitive perceptual groupings. The model is described in more mechanistic detail in Section 20. Mathematical equations of the model are contained in the Appendix.
2. The Role of Illusory Contours
One of the main themes in our discussion is the role of illusory contours in perceptual grouping processes. Our results make precise the sense in which percepts of “illusory contours”--or contour percepts that do not correspond to one-dimensional luminance differences in a scenic image--and percepts of “real contours” are both synthesized by the same mechanisms. This discussion clarifies why, despite the visual system’s manifestly adaptive design, illusory contours are so abundant in visual percepts. We also suggest how illusory contours that are at best marginally visible can have powerful effects on perceptual grouping and object recognition processes. Some of the new designs of our theory can be motivated by contrasting the noisy visual signals that reach the retina with the coherence of conscious visual percepts. In humans, for example, light passes through a thicket of retinal veins before it reaches retinal photoreceptors. The percepts of human observers are fortunately not distorted by their retinal veins during normal vision. This is due, in part, to the action of mechanisms which attenuate the perception of images that are stabilized with respect to the retina as the eye jiggles in its orbit with respect to the outside world. Suppressing the percept of the stabilized veins does not, in itself, complete the percept of retinal images that are occluded and segmented by the veins. Boundaries need to be completed and colors and brightnesses filled-in in order to compensate for the image degradation that is caused by the retinal veins. A similar discussion follows from a consideration of why human observers do not typically notice their blind spots (Kawabata, 1984). Observers are not able to distinguish which parts of such a completed percept are derived directly from retinal signals and which parts are due to boundary completion and featural filling-in. The completed and filled-in percepts are called, in the usual jargon,
Figure 1. A macrocircuit of processing stages: Monocular preprocessed signals (MP) are sent independently to both the Boundary Contour System (BCS) and the Feature Contour System (FCS). The BCS pre-attentively generates coherent boundary structures from these MP signals. These structures send outputs to both the FCS and the Object Recognition System (ORS). The ORS, in turn, rapidly sends top-down learned template signals to the BCS. These template signals can modify the pre-attentively completed boundary structures using learned information. The BCS passes these modifications along to the FCS. The signals from the BCS organize the FCS into perceptual regions wherein filling-in of visible brightnesses and colors can occur. This filling-in process is activated by signals from the MP stage.
“illusory” figures. These examples suggest that both “real” and “illusory” figures are generated by the same perceptual mechanisms, and suggest why “illusory” figures are so important in perceptual grouping processes. Once this is understood, the need for a perceptual theory that treats “real” and “illusory” percepts on an equal footing also becomes apparent. A central issue in such a theory concerns whether boundary completion and featural filling-in are the same or distinct processes. One of our theory’s primary contributions is to show that these processes are different by characterizing the different processing rules that they obey. At our present stage of understanding, many perceptual phenomena can be used to make this point. We find the following three phenomena to be particularly useful: the Land (1977) color and brightness experiments; the Yarbus (1967) stabilized image experiments; and the reverse-contrast Kanizsa square (Grossberg and Mingolla, 1985).
3. Discounting the Illuminant: Color Edges and Featural Filling-In
The visual world is typically viewed under inhomogeneous lighting conditions. The scenic luminances that reach the retina thus confound fluctuating lighting conditions with invariant object colors and lightnesses. Helmholtz (1962) already knew that the brain somehow “discounts the illuminant” to generate color and lightness percepts that are more veridical than those in the retinal image. Land (1977) has clarified this process in a series of striking experiments wherein color percepts within a picture constructed from overlapping patches of colored paper are determined under a variety of lighting conditions. These experiments show that color signals corresponding to the interior of each patch are suppressed. The chromatic contrasts across the edges between adjacent patches are used to generate the final percept.
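A toy one-dimensional version of this edge-based scheme can be sketched as follows. Luminance is reflectance times illumination; because an illumination gradient is smooth, the luminance ratio across each patch edge approximates the reflectance ratio there, and chaining the edge ratios recovers the lightness pattern up to an overall scale. The patch reflectances and the illumination gradient below are invented for illustration and are not values from Land’s (1977) experiments.

```python
# Four matte patches viewed under a strong left-to-right illumination gradient.
reflectance = [0.2, 0.9, 0.4, 0.6]
width = 25                                    # samples per patch
scene = []
for p, r in enumerate(reflectance):
    for k in range(width):
        pos = p * width + k
        scene.append(r * (1.0 + 0.02 * pos))  # luminance = reflectance x illumination

# Discount the patch interiors; keep only the luminance ratios across edges.
edges = [scene[(p + 1) * width] / scene[(p + 1) * width - 1]
         for p in range(len(reflectance) - 1)]

# Chain the edge ratios to estimate lightness, up to a common scale factor.
lightness = [1.0]
for ratio in edges:
    lightness.append(lightness[-1] * ratio)
# lightness is now approximately proportional to reflectance, even though the
# raw luminance within each patch varies roughly threefold across the display.
```

The small residual error in each recovered ratio comes entirely from the slight illumination change across one sample at each edge, which is exactly why ratios taken across edges, rather than within patch interiors, discount the illuminant so effectively.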
It is easy to see how such a scheme “discounts the illuminant.” Large differences in illumination can exist within any patch. On the other hand, differences in illumination are small across an edge on such a planar display. Hence the relative chromatic contrasts across edges, assumed to be registered by Black-White, Red-Green, and Blue-Yellow double opponent systems, are good estimates of the object reflectances near the edge. Just as suppressing the percept of stabilized veins is insufficient to generate an adequate percept, so too is discounting the illuminant within each color patch. Without further processing, we could at best perceive a world of colored edges. Featural filling-in is needed to recover estimates of brightness and color within the interior of each patch. Thus extraction of color edges and featural filling-in are both necessary in order to perceive a color field or a continuously shaded surface.

4. Featural Filling-In Over Stabilized Scenic Edges
Many images can be used to firmly establish that a featural filling-in process exists. The recent thesis of Todorović (1983) provides a nice set of examples that one can construct with modest computer graphics equipment. Vivid classical examples of featural filling-in were discovered by artificially stabilizing certain image contours of a scene (Krauskopf, 1963; Yarbus, 1967). Consider, for example, the image schematized in Figure 2. After the edges of the large circle and the vertical line are stabilized on the retina, the red color (dots) outside the large circle fills-in the black and white hemi-discs except within the small red circles whose edges are not stabilized (Yarbus, 1967). The red inside the left circle looks brighter and the red inside the right circle looks darker than the uniform red that envelops the remainder of the percept. When the Land (1977) and Yarbus (1967) experiments are considered side-by-side, one can recognize that the brain extracts two different types of contour information from scenic images. Feature Contours, including “color edges,” give rise to the signals which generate visible brightness and color percepts at a later processing stage. Feature Contours encode this information as a contour-sensitive process in order to discount
Chapter 3
Figure 2. A classical example of featural filling-in: When the edges of the large circle and the vertical line are stabilized on the retina, the red color (dots) outside the large circle envelops the black and white hemi-discs except within the small red circles whose edges are not stabilized (Yarbus, 1967). The red inside the left circle looks brighter and the red inside the right circle looks darker than the enveloping red.
the illuminant. Boundary Contours are extracted in order to define the perceptual boundaries, groupings, or forms within which featural estimates derived from the Feature Contours can fill-in at a later processing stage. In the Yarbus (1967) experiments, once a stabilized scenic edge can no longer generate a Boundary Contour, featural signals can flow across the locations corresponding to the stabilized scenic edge until they reach the next Boundary Contour. The phenomenon of neon color spreading also illustrates the dissociation of Boundary Contour and Feature Contour processing (Ejima, Redies, Takahashi, and Akita, 1984; Redies and Spillmann, 1981; Redies, Spillmann, and Kunz, 1984; van Tuijl, 1975; van Tuijl and de Weert, 1979; van Tuijl and Leeuwenberg, 1979). An explanation of neon color spreading is suggested in Grossberg (1984a) and Grossberg and Mingolla (1985).

5. Different Rules for Boundary Contours and Feature Contours
Some of the rules that distinguish the Boundary Contour System from the Feature Contour System can be inferred from the percept generated by the reverse-contrast Kanizsa square image in Figure 3 (Cohen and Grossberg, 1984b; Grossberg and Mingolla, 1985). Prazdny (1983, 1985) and Shapley and Gordon (1985) have also used reverse-contrast images in their discussions of form perception. Consider the vertical boundaries in the perceived Kanizsa square. In this percept, a vertical boundary connects a pair of vertical scenic edges with opposite direction-of-contrast. In other words: the black pac-man figure causes a dark-light vertical edge with respect to the grey background, whereas the white pac-man figure causes a light-dark vertical edge with respect to the grey background. The process of boundary completion whereby a Boundary Contour is synthesized between these inducing stimuli is thus indifferent to direction-of-contrast. The boundary completion process is, however, sensitive to the orientation and amount of contrast of the inducing stimuli. The Feature Contours extracted from a scene are, by contrast, exquisitely sensitive to direction-of-contrast. Were this not the case, we could never tell the difference between a dark-light and a light-dark percept. We would be blind. Another difference between Boundary Contour and Feature Contour rules can be inferred from Figures 2 and 3. In Figure 3, a boundary forms inward and in an oriented way between a pair of inducing scenic edges. In Figure 2, featural filling-in is due to an outward and unoriented spreading of featural quality from individual Feature Contour signals that continues until the spreading signals either hit a Boundary Contour or are attenuated by their own spatial spread (Figure 4). The remainder of the article develops these and deeper properties of the Boundary Contour System to explain segmentation data. Certain crucial points may profitably be emphasized now.
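The direction-of-contrast distinction just described can be illustrated with a minimal one-dimensional numerical sketch. This is our illustration, not the article's model; the image values are hypothetical. A Boundary-Contour-like measure responds to the unsigned magnitude of a local contrast, whereas a Feature-Contour-like measure preserves its sign.

```python
import numpy as np

# Hypothetical 1-D image: grey background (0.5) containing a dark patch
# (0.1) and a light patch (0.9), so the grey-to-dark edge and the
# grey-to-light edge have opposite direction-of-contrast.
image = np.array([0.5, 0.5, 0.1, 0.1, 0.5, 0.5, 0.9, 0.9, 0.5, 0.5])

# Feature-Contour-like measure: signed local contrast (polarity-sensitive).
signed = np.diff(image)

# Boundary-Contour-like measure: unsigned contrast (polarity-indifferent).
unsigned = np.abs(signed)

# The two edges carry opposite signed contrast but identical boundary
# strength, so a completion process driven by the unsigned measure can
# join them, as the percept of the reverse-contrast Kanizsa square requires.
print(signed[1], signed[5])      # opposite signs
print(unsigned[1], unsigned[5])  # equal magnitudes
```

In this toy setting the two pac-man inducers of Figure 3 correspond to the edges at indices 1 and 5: a Feature Contour process must keep them distinct, while a Boundary Contour process may treat them alike.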
Boundaries may emerge corresponding to image regions in which no contrast differences whatsoever exist. The Boundary Contour System is sensitive to statistical differences in the distribution of scenic elements, not merely to individual image contrasts. In particular, the oriented receptive fields, or masks, which initiate boundary processing are not edge detectors; rather, they are local contrast detectors which can respond to statistical differences in the spatial distribution of image contrasts, including but not restricted to edges. These receptive fields are organized into multiple subsystems, such that the oriented receptive fields within each subsystem are sensitive to oriented contrasts over spatial domains of different sizes. These subsystems can therefore respond differently to spatial frequency information within the scenic image. Since all these oriented receptive fields are also sensitive to amount of contrast, the Boundary Contour System registers statistical differences in luminance, orientation, and spatial frequency even at its earliest stages of processing. Later stages of Boundary Contour System processing are also sensitive to these factors, but in a different way. Their inputs from earlier stages are already sensitive to these factors. They then actively transform these inputs using competitive-cooperative feedback interactions. The Boundary Contour System may hereby process statistical
Figure 3. A reverse-contrast Kanizsa square: An illusory square is induced by two black and two white pac-man figures on a grey background. Illusory contours can thus join edges with opposite directions-of-contrast. (This effect may be weakened by the photographic reproduction process.)
differences in luminance, orientation, and spatial frequency within a scenic image in multiple ways. We wish also to dispel misconceptions that a comparison between the names Boundary Contour System and Feature Contour System may engender. As indicated above, the Boundary Contour System does generate perceptual boundaries, but neither the data nor our theory permit the conclusion that these boundaries must coincide with the edges in scenic images. The Feature Contour System does lead to visible percepts, such as organized brightness and color differences, and such percepts contain the elements that are often called features. On the other hand, both the Boundary Contour System and the Feature Contour System contain “feature detectors” which are sensitive to luminance or hue differences within scenic images. Although both systems contain “feature detectors,” these detectors are used within the Boundary Contour System to generate boundaries, not visible “features.” In fact, within the Boundary Contour System, all boundaries are perceptually invisible. Boundary Contours do, however, contribute to visible percepts, but only indirectly. All visible percepts arise within the Feature Contour System. Completed Boundary Contours help to generate visible percepts within the Feature Contour System by defining the perceptual regions within which activations due to Feature Contour signals can fill-in. Our names for these two systems emphasize that conventional usage of the terms
BOUNDARY CONTOUR SIGNALS
FEATURE CONTOUR SIGNALS
Figure 4. A monocular brightness and color stage domain within the Feature Contour System: Monocular Feature Contour signals activate cell compartments which permit rapid lateral diffusion of activity, or potential, across their compartment boundaries, except at those compartment boundaries which receive Boundary Contour signals from the BCS. Consequently the Feature Contour signals are smoothed except at boundaries that are completed within the BCS stage.

boundary and feature needs modification to explain data about form and color perception. Our usage of these important terms captures the spirit of their conventional meaning, but also refines this meaning to be consistent with a mechanistic analysis of the interactions leading to form and color percepts.

6. Boundary-Feature Trade-off: Every Line End Is Illusory
The rules obeyed by the Boundary Contour System can be fully understood only by considering how they interact with the rules of the Feature Contour System. Each contour system is designed to offset insufficiencies of the other. The most paradoxical properties of the Boundary Contour System can be traced to its role in defining the perceptual domains that restrict featural filling-in. These also turn out to be the properties that are most important in the regulation of perceptual grouping. The inability of previous perceptual theories to provide a transparent analysis of perceptual grouping can be traced to the fact that they did not clearly distinguish Boundary Contours from Feature Contours; hence they could not adequately understand the rules whereby Boundary Contours generate perceptual groupings to define perceptual domains adequate to contain featural filling-in.
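The boundary-gated filling-in process summarized in Figure 4 can be sketched in one dimension. This is an illustrative discretization with assumed parameters, not the article's diffusion equations: featural activity spreads between neighboring cell compartments except across a compartment wall that receives a Boundary Contour signal.

```python
import numpy as np

# A chain of 20 cell compartments. A single Feature Contour signal
# deposits activity in compartment 5; one wall (between compartments
# 9 and 10) receives a Boundary Contour signal.
n = 20
activity = np.zeros(n)
activity[5] = 1.0
boundary = np.zeros(n - 1)   # walls between adjacent compartments
boundary[9] = 1.0

# Diffusion permeability of each wall: zero where a Boundary Contour
# signal arrives (assumed rate constant 0.25 per step).
perm = 0.25 * (1.0 - boundary)

for _ in range(500):
    flow = perm * np.diff(activity)   # flux between neighbors
    activity[:-1] += flow             # activity flows downhill...
    activity[1:] -= flow              # ...and total activity is conserved

# Activity equilibrates on the near side of the boundary but cannot
# cross it, so the far region stays empty.
print(activity[:10].round(2))
print(activity[10:].round(2))
```

The qualitative outcome mirrors the Yarbus percepts: featural quality spreads until it hits a Boundary Contour, and removing a wall (as stabilization removes a Boundary Contour) would let the activity flow onward to the next one.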
When one frontally assaults the problem of designing Boundary Contours to contain featural filling-in, one is led to many remarkable conclusions. One conclusion is that the end of every line is an “illusory” contour. We now summarize what we mean by this assertion. An early stage of Boundary Contour processing needs to determine the orientations in which scenic edges are pointing. This is accomplished by elongated receptive fields, or orientationally tuned input masks (Hubel and Wiesel, 1977). Elongated receptive fields are, however, insensitive to orientation at the ends of thin lines and at object corners (Grossberg and Mingolla, 1985). This breakdown is illustrated by the computer simulation summarized in Figure 5a, which depicts the reaction of a lattice of orientationally tuned cells to a thin vertical line. Figure 5a shows that in order to achieve some measure of orientational certainty along scenic edges, the cells sacrifice their ability to determine either position or orientation at the end of a line. In other words, Figure 5a summarizes the effects of an “uncertainty principle” whereby “orientational certainty” along scenic edges implies “positional uncertainty” at line ends and corners. Stated in a vacuum, this breakdown does not seem to be particularly interesting. Stated in the shadow of the featural filling-in process, it has momentous implications. Without further processing that is capable of compensating for this breakdown, the Boundary Contour System could not generate boundaries corresponding to scenic line ends and corners. Consequently, within the Feature Contour System, boundary signals would not exist at positions corresponding to line ends (Figure 6). The Feature Contour signals generated by the interior of each line could then initiate spreading of featural quality to perceptual regions beyond the location of the line end. In short, the failure of boundary detection at line ends could enable colors to flow out of every line end!
In order to prevent this perceptual catastrophe, orientational tuning, just like discounting the illuminant, must be followed by a hierarchy of compensatory processing stages in order to gain full effectiveness. To offset this breakdown under normal circumstances, we have hypothesized that outputs from the cells with oriented receptive fields input to two successive stages of competitive interaction (Grossberg, 1984a; Grossberg and Mingolla, 1985), which are described in greater detail in Section 20 and the Appendix. These stages are designed to compensate for orientational insensitivity at the ends of lines and at corners. Figure 5b shows how these competitive interactions generate horizontal Boundary Contour signals at the end of a vertical line. These “illusory” Boundary Contours help to prevent the flow of featural contrast from the line end. Such horizontal Boundary Contours induced by a vertical line end are said to be generated by end cutting, or orthogonal induction. The circle illusion that is perceived by glancing at Figure 7 can now be understood. The Boundary Contour end cuts at the line ends can cooperate with other end cuts of similar orientation that are approximately aligned across perceptual space, just as Boundary Contours do to generate the percept of a Kanizsa square in Figure 3. These Boundary Contours group “illusory” figures for the same reason that they complete figures across retinal veins and blind spots. Within the Boundary Contour System, both “real” and “illusory” contours are generated by the same dynamical laws.
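The logic of end cutting can be sketched in one dimension along the axis of a thin vertical line. The numbers and the two simplified stages below are our illustration, not the article's dipole-field equations: like-oriented cells at nearby positions compete first, and a push-pull exchange between perpendicular orientations then rebounds where vertical activity has been suppressed.

```python
import numpy as np

# Vertical-mask responses at positions 0-9 running down a thin vertical
# line: strong along the line body (positions 0-5), weak and ambiguous
# at the line end (position 6), absent beyond it (assumed values).
vertical = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.2, 0.0, 0.0, 0.0])

# First competitive stage: like-oriented cells at nearby positions
# compete (off-surround), which wipes out the weak response at the tip.
surround = 0.3 * (np.roll(vertical, 1) + np.roll(vertical, -1))
surround[0] = 0.3 * vertical[1]      # no wraparound at the chain ends
surround[-1] = 0.3 * vertical[-2]
v1 = np.maximum(vertical - surround, 0.0)

# Second competitive stage: push-pull between perpendicular orientations
# at each active position. Where vertical activity has been suppressed,
# a tonically aroused horizontal channel rebounds -- the "end cut".
tonic = 0.3
h1 = np.maximum(tonic - v1, 0.0) * (vertical > 0)

print(v1.round(2))   # vertical survives along the line, dies at the tip
print(h1.round(2))   # a horizontal end cut appears only at position 6
```

Only the line end receives a horizontal signal: the interior positions retain enough vertical activity to suppress the rebound, so the end cut emerges exactly where the oriented masks are uncertain.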
7. Parallel Induction by Edges versus Perpendicular Induction by Line Ends

Knowing the directions in which Boundary Contours will form is obviously essential to understanding perceptual grouping. Why does a boundary form parallel to the inducing edges in Figure 3 but perpendicular to the line ends in Figure 7? This is clearly a question about spatial scale, since thickening a line until its end becomes an edge will cause induction to switch from being perpendicular to the line to being parallel to the edge. An answer to this question can be seen by inspecting Figure 5. In Figure 5a, strong vertical reactions occur in response to the long vertical edge of the line. Figure 5b shows
OUTPUT OF ORIENTED MASKS
Figure 5a. An orientation field: Lengths and orientations of lines encode the relative sizes of the activations and orientations of the input masks at the corresponding positions. The input pattern, which is a vertical line end as seen by the receptive fields, corresponds to the shaded area. Each mask has total exterior dimensions of 16 x 8 units, with a unit length being the distance between two adjacent lattice positions.
OUTPUT OF COMPETITION
Figure 5b. Response of the dipole-field potentials defined in the Appendix to the orientation field of Figure 5a: End cutting generates horizontal activations at line end locations that receive small and orientationally ambiguous input activations.
Figure 6. Possible spurious flow within the Feature Contour System of featural quality from line ends: Labels ABCD outline the positions corresponding to the tip of a vertically oriented thin line. The black areas from A to B and from C to D indicate regions of the Feature Contour System which receive signals due to direct image-induced activation of vertically oriented receptive fields within the Boundary Contour System. The stippled areas indicate regions of the Feature Contour System which receive Feature Contour signals from the interior of the line image. Feature Contour System receptive fields, being small and unoriented, may be excited at line ends, even if the oriented receptive fields of the Boundary Contour System are not. The arrows indicate that filling-in due to these Feature Contour signals can spread outside the putative boundary ABCD of the line end.
Figure 7. Cooperation among end cut signals: A bright illusory circle is induced perpendicular to the ends of the radial lines.

that these vertical reactions remain vertical when they pass through the competitive stages. This is analogous to a parallel induction, since the vertical reactions in Figure 5b will generate a completed vertical Boundary Contour that is parallel to its corresponding scenic edge. By contrast, the ambiguous reaction at the line end in Figure 5a generates a horizontal end cut in Figure 5b that is perpendicular to the line. If we thicken the line into a bar, it will eventually become wide enough to enable the horizontally oriented receptive fields at the bar end to generate strong reactions, in just the same way as the vertically oriented receptive fields along the side of the line generated strong vertical reactions there. The transition from ambiguous to strong horizontal reactions as the line end is thickened corresponds to the transition between perpendicular and parallel Boundary Contour induction. This predicted transition has been discovered in electrophysiological recordings from cells in the monkey visual cortex (von der Heydt, Peterhans, and Baumgartner, 1984). The pattern of cell responding in Figure 5a is similar to the data which von der Heydt et al. recorded in area 17 of the striate cortex, whereas the pattern of cell responding in Figure 5b is similar to the data which von der Heydt et al. recorded in area 18 of the prestriate cortex. See Grossberg (1985) and Grossberg and Mingolla (1985) for a further discussion of these and other supportive neural data.

8. Boundary Completion via Cooperative-Competitive Feedback Signaling: CC Loops and the Statistics of Grouping
Another mechanism important in determining the directions in which perceptual groupings occur will now be summarized. As in Figure 5b, the outputs of the competitive stages can generate bands of oriented responses. These bands enable cells sensitive to similar orientations at approximately aligned positions to begin cooperating to form the
final Boundary Contour percept. These bands play a useful role, because they increase the probability that spatially separated Boundary Contour fragments will be aligned well enough to cooperate. Figure 8 provides visible evidence of the existence of these bands. In Figure 8a, the end cuts that are exactly perpendicular to their inducing line ends can group to form a square boundary. In Figure 8b, the end cuts that are exactly perpendicular to the line ends cannot group, but end cuts that are almost perpendicular to the line ends can. Figure 8 also raises the following issue. If bands of end cuts exist at every line end, then why cannot all of them group to form bands of different orientations, which might sum to create fuzzy boundaries? How is a single sharp global boundary selected from among all of the possible local bands of orientations? We suggest that this process is accomplished by the type of feedback exchange between competitive and cooperative processes that is depicted in Figure 9. We call such a competitive-cooperative feedback exchange a CC Loop. Figure 9a shows that the competitive and cooperative processes occur at different network stages, with the competitive stage generating the end cuts depicted in Figure 5b. Thus the outcome of the competitive stage serves as a source of inputs to the cooperative stage and receives feedback signals from the cooperative stage. Each cell in the cooperative process can generate output signals only if it receives a sufficient number and intensity of inputs within both of its input-collecting branches. Thus the cell acts like a type of logical gate, or statistical dipole. The inputs to each branch come from cells of the competitive process that have an orientation and position that are similar to the spatial alignment of the cooperative cell’s branches.
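The gating behavior of such a cooperative cell can be sketched as a small function. The threshold and the conjunctive rule below are assumptions for illustration, not the article's equations; the point is only the AND-like requirement that both input-collecting branches be sufficiently activated.

```python
def bipole_output(left_branch, right_branch, threshold=0.5):
    """A cooperative cell as a logical gate: it fires only when the
    summed competitive-stage input in EACH flanking branch exceeds a
    threshold, as when two aligned inducers flank the cell.
    (Illustrative sketch; threshold and combination rule assumed.)"""
    left = sum(left_branch)
    right = sum(right_branch)
    if left > threshold and right > threshold:
        return min(left, right)   # conjunctive, AND-like response
    return 0.0

# Two aligned inducers flanking the cell: the gate opens.
print(bipole_output([0.8, 0.1], [0.7]))   # prints 0.7
# An inducer on one side only: no boundary completion signal.
print(bipole_output([0.8, 0.1], [0.0]))   # prints 0.0
```

A single strong edge therefore cannot drive boundary completion by itself; completion signals arise only between pairs of roughly aligned inducers, which is why boundaries interpolate inward between inducing stimuli rather than spilling outward from them.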
When such a cell is activated, say by the conjoint action of both input pathways labeled 1 in Figure 9b, it sends excitatory feedback signals along the pathways labeled 2. These feedback signals activate cells within the competitive stage which code a similar orientation and spatial position. The cells at the competitive stage cannot distinguish whether they are activated by bottom-up signals from oriented receptive fields or by top-down signals from the cooperative stage. Either source of activation can cause them to generate bottom-up competitive-to-cooperative signals. Thus new cells at the cooperative stage may now be activated by the conjoint action of both the input pathways labeled 3 in Figure 9b. These newly activated cooperative cells can then generate feedback signals along the pathway labeled 4. In this way, a rapid exchange of signals between the competitive and cooperative stages may occur. These signals can propagate inwards between pairs of inducing Boundary Contour inputs, as in the Kanizsa square of Figure 3, and can thereby complete boundaries across regions which receive no bottom-up inputs from oriented receptive fields. The process of boundary completion occurs discontinuously across space by using the gating properties of the cooperative cells (Figure 9b) to successively interpolate boundaries within progressively finer intervals. This type of boundary completion process is capable of generating sharp boundaries, with sharp endpoints, across large spatial domains (Grossberg and Mingolla, 1985). Unlike a low spatial frequency filter, the boundary completion process does not sacrifice fine spatial resolution to achieve a broad spatial range. Quite the contrary is true, since the CC Loop sharpens, or contrast-enhances, the input patterns which it receives from oriented receptive fields. This process of contrast enhancement is due to the fact that the cooperative stage feeds its excitatory signals back into the competitive stage.
Thus the competitive stage does double duty: it helps to complete line ends that oriented receptive fields cannot detect, and it helps to complete boundaries across regions which may receive no inputs whatsoever from oriented receptive fields. In particular, the excitatory signals from the cooperative stage enhance the competitive advantage of cells with the same orientation and position at the competitive stage (Figure 9b). As the competitive-cooperative feedback process unfolds rapidly
Figure 8. Evidence for bands of orientation responses: In a), an illusory square is generated with sides perpendicular to the inducing lines. In b), an illusory square is generated by lines with orientations that are not exactly perpendicular to the illusory contour. Redrawn from Kennedy (1979).
Figure 9. Boundary completion in a cooperative-competitive feedback exchange (CC Loop): (a) Local competition occurs between different orientations at each spatial location. A cooperative boundary completion process can be activated by pairs of aligned orientations that survive their local competitions. This cooperative activation initiates the feedback to the competitive stage that is detailed in Figure 9b. (b) The pair of pathways 1 activate positive boundary completion feedback along pathway 2. Then pathways such as 3 activate positive feedback along pathways such as 4. Rapid completion of a sharp boundary between pathways 1 can hereby be generated. See text for details.
through time, these local competitive advantages are synthesized into a global boundary grouping which can best reconcile all these local tendencies. In the most extreme version of this contrast-enhancement process, only one orientation at each position can survive the competition. That is, the network makes an orientational choice at each active position. The design of the CC Loop is based upon theorems which characterize the factors that enable contrast enhancement and choices to occur within nonlinear cooperative-competitive feedback networks (Ellias and Grossberg, 1975; Grossberg, 1973; Grossberg and Levine, 1975). As this choice process proceeds, it completes a boundary between some, but not all, of the similarly oriented and spatially aligned cells within the active bands of the competitive process (Figure 8). This interaction embodies a type of real-time statistical decision process whereby the most favorable groupings of cells at the competitive stage struggle to win over other possible groupings by initiating advantageous positive feedback from the cooperative stage. As Figure 8b illustrates, the orientation of the grouping that finally wins is not determined entirely by local factors. This grouping reflects global cooperative interactions that can override the most highly favored local tendencies, in this case the strong perpendicular end cuts. The experiments of von der Heydt, Baumgartner, and Peterhans (1984) also reported the existence of area 18 cells that act like logical gates. These experiments therefore suggest that either the second stage of competition, or the cooperative stage, or both, occur within area 18. Thus, although these Boundary Contour System properties were originally derived from an analysis of perceptual data, they have successfully predicted recent neurophysiological data concerning the organization of mammalian prestriate cortex.
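The orientational choice itself can be sketched in a few lines. The dynamics below are an illustrative caricature, not the CC Loop equations: repeatedly feeding a band of orientation responses through a faster-than-linear feedback signal plus normalizing competition, in the spirit of the contrast-enhancement theorems of Grossberg (1973), sharpens the band toward a single winning orientation.

```python
import numpy as np

# A fuzzy band of responses over four orientations at one position
# (hypothetical values).
band = np.array([0.30, 0.45, 0.15, 0.10])

x = band.copy()
for _ in range(30):
    fed_back = x ** 2              # faster-than-linear feedback signal
    x = fed_back / fed_back.sum()  # shunting-style normalization

print(x.round(3))   # the initially strongest orientation wins outright
```

After a few iterations the initially strongest response absorbs essentially all of the normalized activity, which is the toy analogue of the network making an orientational choice at each active position.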
9. Form Perception versus Object Recognition: Invisible but Potent Boundaries
One final remark needs to be made before turning to a consideration of textured scenes. Boundary Contours in themselves are invisible. Boundary Contours gain visibility by separating Feature Contour signals into two or more domains whose featural contrasts, after filling-in takes place, turn out to be different. (See Cohen and Grossberg, 1984b and Grossberg, 1985 for a discussion of how these and later stages of processing help to explain monocular and binocular brightness data.) We distinguish this role of Boundary Contours in generating visible form percepts from the role played by Boundary Contours in object recognition. We claim that completed Boundary Contour signals project directly to the Object Recognition System (Figure 1). Boundary Contours thus need not be visible in order to strongly influence object recognition. An “illusory” Boundary Contour grouping that is caused by a textured scene can have a much more powerful effect on scene recognition than the poor visibility of the grouping might indicate. We also claim that the object recognition system sends learned top-down template, or expectancy, signals back to the Boundary Contour System (Carpenter and Grossberg, 1985a, 1985b; Grossberg, 1980, 1982a, 1984b). Our theory hereby both agrees with and disagrees with the seminal idea of Gregory (1966) that “cognitive contours” are critical in boundary completion and object recognition. Our theory suggests that Boundary Contours are completed by a rapid, pre-attentive, automatic process as they activate the bottom-up adaptive filtering operations that activate the Object Recognition System. The reaction within the Object Recognition System determines which top-down visual templates to the Boundary Contour System will secondarily complete the Boundary Contour grouping based upon learned “cognitive” factors. 
These “doubly completed” Boundary Contours send signals to the Feature Contour System to determine the perceptual domains within which featural filling-in will take place. We consider the most likely location of the boundary completion process to be area 18 (or V2) of the prestriate cortex (von der Heydt, Peterhans, and Baumgartner, 1984),
the most likely location of the final stages of color and form perception to be area V4 of the prestriate cortex (Desimone, Schein, Moran, and Ungerleider, 1985; Zeki, 1983a, 1983b), and the most likely location of some aspects of object recognition to be the inferotemporal cortex (Schwartz, Desimone, Albright, and Gross, 1983). These anatomical interpretations have been chosen by a comparison between theoretical properties and known neural data (Grossberg and Mingolla, 1985). They also provide markers for performing neurophysiological experiments to further test the theory’s mechanistic predictions.
10. Analysis of the Beck Theory of Textural Segmentation: Invisible Colinear Cooperation

We now begin a dynamical explanation and refinement of the main properties of Beck’s important theory of textural segmentation (Beck, Prazdny, and Rosenfeld, 1983). One of the central hypotheses of the Beck theory is that “local linking operations form higher-order textural elements” (p.2). “Textural elements are hypothesized to be formed by proximity, certain kinds of similarity, and good continuation. Others of the Gestalt rules of grouping may play a role in the formation of texture...There is an encoding of the brightness, color, size, slope, and the location of each textural element and its parts” (p.31). We will show that the properties of these “textural elements” are remarkably similar to the properties of the completed boundaries that are formed by the Boundary Contour System. To explain this insight, we will analyse various of the images used by Beck, Prazdny, and Rosenfeld (1983) in the light of Boundary Contour System properties. Figure 10 provides a simple example of what the Beck school means by a “textural element.” Beck, Prazdny, and Rosenfeld (1983) write: “The short vertical lines are linked to form long lines. The length of the long lines is an ‘emergent feature’ which makes them stand out from the surrounding short lines” (p.5). The linking per se is explained by our theory in terms of the process whereby similarly oriented and spatially aligned outputs from the second competitive stage can cooperate to complete a colinear intervening Boundary Contour. One of the most remarkable aspects of this “emergent feature” is not analysed by Beck et al. Why do we continue to see a series of short lines if long lines are the emergent features which control perceptual grouping? In our theory, the answer to this question is as follows. Within the Boundary Contour System, a boundary structure emerges corresponding to the long lines described by Beck et al.
This structure includes a long vertical component as well as short horizontal end cuts near the endpoints of the short scenic lines. The output of this Boundary Contour structure to the Feature Contour System prevents featural filling-in of dark and light contrasts from crossing the boundaries corresponding to the short lines. On the other hand, the output from the Boundary Contour System to the Object Recognition System reads out a long line structure without regard to which subsets of this structure will be perceived as dark or light. This example points to a possible source of confusion in the Beck model. Beck et al. (1983) claim that “There is an encoding of the brightness, color, size, slope, and the location of each textural element and its parts” (p.31). Figure 10 illustrates a sense in which this assertion is false. The long Boundary Contour structure can have a powerful effect on textural segmentation even if it has only a minor effect on the brightness percepts corresponding to the short lines in the image, because an emergent Boundary Contour can generate a large input to the Object Recognition System without generating a large brightness difference. The Beck model does not adequately distinguish between the contrast sensitivity that is needed to activate elongated receptive fields at an early stage of boundary formation and the effects of completed boundaries on featural filling-in. The outcome of featural filling-in, rather than the contrast sensitivity of the
Chapter 3
Figure 10. Emergent features: The colinear linking of short line segments into longer segments is an “emergent feature” which sustains textural grouping. Our theory explains how such emergent features can contribute to perceptual grouping even if they are not visible. (Reprinted from Beck, Prazdny, and Rosenfeld, 1983.)
Neural Dynamics of Perceptual Grouping
Boundary Contour System’s elongated receptive fields, helps to determine a brightness or color percept (Cohen and Grossberg, 1984b; Grossberg and Mingolla, 1985). A related source of ambiguity in the Beck model arises from the fact that the strength of an emergent Boundary Contour does not even depend on image contrasts, let alone brightness percepts, in a simple way. The Beck model does not adequately distinguish between the ability of elongated receptive fields to activate a Boundary Contour in regions where image contrast differences do exist and the cooperative interactions that complete the Boundary Contour in regions where image contrast differences may or may not exist. The cooperative interaction may, for example, alter Boundary Contours at positions which lie within the receptive fields of the initiating orientation-sensitive cells, as in Figure 8b. The final percept even at positions which directly receive image contrasts may be strongly influenced by cooperative interactions that reach these positions by spanning positions which do not directly receive image contrasts. This property is particularly important in situations where a spatial distribution of statistically determined image contrasts, such as dot or letter densities, forms the image that excites the orientation-sensitive cells.
11. The Primacy of Slope

Figure 11 illustrates this type of interaction between bottom-up direct activation of orientationally tuned cells and top-down cooperative interaction of such cells. Beck and his colleagues have constructed many images of this type to demonstrate that orientation or “slope is the most important of the variables associated with shape for producing textural segmentation...A tilted T is judged to be more similar to an upright T than is an L. When these figures are repeated to form textures...the texture made up of Ls is more similar to the texture made up of upright Ts than to the texture made up of tilted Ts” (Beck, Prazdny, and Rosenfeld, 1983, p.7). In our theory, this fact follows from several properties acting together: The elongated receptive fields in the Boundary Contour System are orientationally tuned. This property provides the basis for the system’s sensitivity to slope. As colinear boundary completion takes place due to cooperative-competitive feedback (Figure 9), it can group together approximately colinear Boundary Contours that arise from contrast differences due to the different letters. Colinear components of different letters are grouped just as the Boundary Contour System groups image contrasts due to a single scenic edge that excites the retina on opposite sides of a retinal vein. The number and density of inducing elements of similar slope can influence the strength of the final set of Boundary Contours pointing in the same direction. Both Ls and Ts generate many horizontal and vertical boundary inductions, whereas tilted Ts generate diagonal boundary inductions. The main paradoxical issue underlying the percept of Figure 11 concerns how the visual system overrides the perceptually vivid individual letters.
Once one understands mechanistically the difference between boundary completion and visibility, and the role of boundary completion in forming even individual edge segments without regard to their ultimate visibility, this paradox is resolved.
12. Statistical Properties of Oriented Receptive Fields: OC Filters

Variations on Figure 11 can also be understood by refining the above argument.
In Beck (1966), it is shown that X’s in a background of T’s produce weaker textural segmentation than a tilted T in a background of upright Ts, even though both images contain the same orientations. We agree with Beck, Prazdny, and Rosenfeld (1983) that “what is important is not the orientation of lines per se but whether the change in orientation causes feature detectors to be differentially stimulated” (p.9). An X and a T have a centrally symmetric shape that weakens the activation of elongated receptive fields. A similar observation was made by Schatz (1977), who showed that changing the slope of a single line from vertical to diagonal led to stronger textural segmentation than changing the slope of three parallel lines from vertical to diagonal.
Figure 11. The primacy of slope: In this classic figure, textural segmentation between the tilted and upright T’s is far stronger than between the upright T’s and L’s. The figure illustrates that grouping of disconnected segments of similar slope is a powerful basis for textural segmentation. (Reprinted from Beck, Prazdny, and Rosenfeld, 1983.)
Both of these examples are compatible with the fact that orientationally tuned cells measure the statistical distribution of contrasts within their receptive fields. They do not respond only to a template of an edge, bar, or other definite image. They are sensitive to the relative contrast of light and dark on either side of their axis of preferred orientation (Appendix, equation A1). Each receptive field at the first stage of Boundary Contour processing is divided into two halves along an oriented axis. Each half of the receptive field sums the image-induced inputs which it receives. The integrated activation from one of the half fields inhibits the integrated activation from the other half field. A net output signal is generated by the cell if the net activation is sufficiently positive. This output signal grows with the size of the net activation. Thus each such oriented cell is sensitive to amount-of-contrast (size of the net activation) and to direction-of-contrast (only one half field inhibits the other half field), in addition to being sensitive to factors like orientation, position, and spatial frequency. A pair of such oriented cells corresponding to the same position and orientation, but opposite directions-of-contrast, send converging excitatory pathways to cells defining the next stage in the network. These latter cells are therefore sensitive to factors like orientation, position, spatial frequency, and amount-of-contrast, but they are insensitive to direction-of-contrast. Together, the two successive stages of oriented cells define a filter that is sensitive to properties concerned with orientation and contrast. We therefore call this filter an OC Filter. The OC Filter inputs to the CC Loop. The Boundary Contour System network is a composite of OC Filter and CC Loop. The output cells of the OC Filter, being insensitive to direction-of-contrast,
are the ones which respond to the relative contrast of light and dark on either side of their axis of preferred orientation. Both the X’s studied by Beck (1966) and the multiple parallel lines studied by Schatz (1977) reduce this relative contrast. These images therefore weaken the relative and absolute sizes of the input to any particular orientation. Thus even the “front end” of the Boundary Contour System begins to regroup the spatial arrangement of contrast differences that is found within the scenic image.
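The two-stage filter described above can be sketched in code. The following is only a minimal illustration of the half-field computation, not the model's actual equations (those are given in the Appendix); all function names, the disk-shaped field, and the parameter values are our own illustrative choices.

```python
import numpy as np

def oriented_cell(image, center, orientation, radius=4):
    """First-stage oriented cell (sketch): the receptive field is split into
    two halves along an axis at `orientation`; each half sums its inputs,
    one half inhibits the other, and the rectified difference is the output,
    so the cell is sensitive to direction-of-contrast."""
    r0, c0 = center
    # Unit vector perpendicular to the preferred axis, used to assign each
    # pixel to one half-field by the sign of its projection.
    perp = np.array([np.cos(orientation + np.pi / 2),
                     np.sin(orientation + np.pi / 2)])
    half_a = half_b = 0.0
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            r, c = r0 + dr, c0 + dc
            if dr * dr + dc * dc > radius * radius:
                continue  # keep the field roughly disk-shaped
            if not (0 <= r < image.shape[0] and 0 <= c < image.shape[1]):
                continue
            side = perp @ np.array([dr, dc])
            if side > 1e-9:
                half_a += image[r, c]
            elif side < -1e-9:
                half_b += image[r, c]
    return max(half_a - half_b, 0.0)  # half-wave rectification

def oc_filter_cell(image, center, orientation):
    """Second-stage cell (sketch): pooling a pair of opposite
    direction-of-contrast cells yields sensitivity to amount-of-contrast
    but insensitivity to polarity."""
    return (oriented_cell(image, center, orientation) +
            oriented_cell(image, center, orientation + np.pi))
```

For a vertical light-dark edge, only one polarity of first-stage cell fires, but the pooled second-stage cell responds equally to the edge and to its contrast-reversed version.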
13. Competition Between Perpendicular Subjective Contours

A hallmark of the Beck approach has been the use of carefully chosen but simple figural elements in arrays whose spatial parameters can be easily manipulated. Arrays built up from U shapes have provided a particularly rich source of information about textural grouping. In the bottom half of Figure 12, for example, the line ends of the U’s and of the inverted U’s line up in a horizontal direction. Their perpendicular end cuts can therefore cooperate, just as in Figures 7 and 8, to form long horizontal Boundary Contours. These long Boundary Contours enable the bottom half of the figure to be preattentively distinguished from the top half. Beck et al. (1983) note that segmentation of this image is controlled by “subjective contours” (p.2). They do not use this phrase to analyse their other displays, possibly because the “subjective” Boundary Contours in other displays are not as visible. The uncertainty within Beck, Prazdny, and Rosenfeld (1983) concerning the relationship between “linking operations” and “subjective contours” is illustrated by their analysis of Figure 13. In Figure 13a, vertical and diagonal lines alternate. In Figure 13b, horizontal and diagonal lines alternate. The middle third of Figure 13a is preattentively segmented better than the middle third of Figure 13b. Beck et al. (1983) explain this effect by saying that “The linking of the lines into chains also occurred more strongly when the lines were colinear than when they were parallel, i.e., the linking of horizontal lines to form vertical columns” (p.21). “The horizontal lines tend to link in the direction in which they point. The linking into long horizontal lines competes with the linking of the lines into vertical columns and interferes with textural segmentation” (p.22). Our theory supports the spirit of this analysis.
Both the direct outputs from horizontally oriented receptive fields and the vertical end cuts induced by competitive
Figure 12. Textural grouping supported by subjective contours: Cooperation among end cuts generates horizontal subjective contours in the bottom half of this figure. (Reprinted from Beck, Prazdny, and Rosenfeld, 1983.)

processing at horizontal line ends can feed into the colinear boundary completion process. The boundary completion process, in turn, feeds its signals back to a competitive stage where perpendicular orientations compete (Figure 9). Hence direct horizontal activations and indirect vertical end cuts can compete at positions which receive both influences due to cooperative feedback. Beck et al. (1983) do not, however, comment upon an important difference between Figures 13a and 13b that is noticed when one realizes that linking operations may generate both visible and invisible subjective contours. We claim that, in Figure 13b, the end cuts of horizontal and diagonal line ends can cooperate to form long vertical Boundary Contours that run from the top to the bottom of the figure. As in Figure 8b, global cooperative factors can override local orientational preferences to choose end cuts that are not perpendicular to their inducing line ends. We suggest that this happens with respect to the diagonal line ends in Figure 13b due to the cooperative influence of the vertical end cuts that are generated by colinear horizontal line ends. The long vertical Boundary Contours that are hereby generated interfere with textural segmentation by passing through the entire figure. This observation, by itself, is not enough to explain the better segmentation of Figure 13a. Due to the horizontal alignment of vertical and diagonal line ends in Figure 13a, horizontal Boundary Contours could cross this entire figure. In Figure 13a, however, vertical lines within the top and bottom thirds of the picture are contiguous to other vertical lines. In Figure 13b, diagonal lines are juxtaposed between every pair of horizontal lines. Thus in Figure 13a, a strong tendency exists to form vertical Boundary Contours in the top and bottom thirds of the picture due both to the distance dependence of colinear cooperation and to the absence of competing intervening orientations. These strong vertical Boundary Contours can successfully compete with the tendency to form horizontal Boundary Contours that cross the figure. In Figure 13b, the tendencies to form vertical and horizontal Boundary Contours are more uniformly distributed across the figure. Thus the disadvantage of Figure 13b may not just be due to the fact that “linking into long horizontal lines competes with the linking of the lines into vertical columns,” as Beck et al. (1983, p.22) suggest. We suggest that, even in Figure 13a, strong competition from horizontal linkages occurs throughout the figure. These horizontal linkages do not prevent preattentive grouping because strong vertical linkages exist at the top and bottom thirds of the figure and these vertical groupings cannot bridge the middle third of the figure. In Figure 13b, by contrast, the competing horizontal linkages in the top and bottom third of the figure are weaker than in Figure 13a. Despite this, the relative strengths of emerging groupings corresponding to different parts of a scene, rather than the strengths of oriented activations at individual scenic positions, determine how well a region of the scene can be segmented.

Figure 13. Effects of distance, perpendicular orientations, and colinearity on perceptual grouping: In both (a) and (b), vertical and horizontal subjective boundaries are generated. The text explains how the groupings in (a) better segregate the middle third of the figure.
14. Multiple Distance-Dependent Boundary Contour Interactions: Explaining Gestalt Rules
Figure 14 illustrates how changing the spatial separation of figural elements, without changing their relative positions, can alter interaction strengths at different stages of the Boundary Contour System; different rearrangements of the same scenic elements can differentially probe the hierarchical organization of boundary processing. This type of insight leads us to suggest how different Gestalt rules are realized by a unified system of Boundary Contour System interactions. In the top half of Figure 14a, horizontal Boundary Contours that cross the entire figure are generated by horizontal end cuts at the tips of the inverted U’s. These long Boundary Contours help to segregate the top half of the figure from its bottom, just as they do in Figure 12. This figure thus reaffirms that colinear cooperative interactions can span a broad spatial range. Some horizontal Boundary Contour formation may also be caused by cooperation between the bottoms of the U’s. We consider this process to be weaker in Figure 14a for the same reason that it is weaker in Figure 12: the vertical sides of the U’s weaken it via competition between perpendicular orientations. Beck et al. (1983, p.23), by contrast, assert that “The bottom lines of the U’s link on the basis of colinearity (a special case of good continuation)”, and say nothing about the horizontal Boundary Contours induced by the horizontal end cuts. In Figure 14b, the U and inverted U images are placed more closely together without otherwise changing their relative spatial arrangement. End cuts at the tips of the inverted U’s again induce horizontal Boundary Contours across the top half of the figure. New types of grouping are also induced by this change in the density of the U’s. The nature of these new groupings can most easily be understood by considering the bottom of Figure 14b. At a suitable viewing distance, one can now see diagonal groupings that run at 45° and 135° angles through the bases of the U’s and inverted U’s.
We claim that these diagonal groupings are initiated when the density gets sufficiently high to enable diagonally oriented receptive fields to record relatively large image contrasts. In other words, at a low density of scenic elements, orientationally tuned receptive fields can be stimulated only by one U or inverted U at a time. At a sufficiently high density of scenic elements, each receptive field can be stimulated by parts of different scenic elements that fall within that receptive field. Once the diagonal receptive fields get activated, they
Figure 14. The importance of spatial scale: These three figures probe the subtle effects on textural grouping of varying spatial scale. For example, the diagonal grouping at the bottom of (b) is initiated by differential activation of diagonally oriented masks, despite the absence of any diagonal edges in the image. See the text for extended discussion.
Figure 14c.

can trigger diagonally oriented boundary completions. A similar possibility holds in the top half of Figure 14b. Horizontally and vertically tuned receptive fields can begin to be excited by more than one U or inverted U. Thus the transition from Figure 14a to 14b preserves long-range horizontal cooperation based on competitive end cuts and other colinear horizontal interactions, and enables the earlier stage of oriented receptive fields to create new scenic groupings, notably in diagonal directions. Beck et al. (1983) analyse Figures 14a and 14b using Gestalt terminology. They say that segmentation in Figure 14a is due to “linking based on the colinearity of the base lines of the U’s” (p.24). Segmentation in Figure 14b is attributed to “linking based on closure and good continuation” (p.25). We suggest that both segmentations are due to the same Boundary Contour System interactions, but that the scale change in Figure 14b enables oriented receptive fields and cooperative interactions to respond to new local groupings of image components. In Figure 14c, the relative positions of U’s and inverted U’s are again preserved, but they are arranged to be closer together in the vertical than in the horizontal direction. These new columnar relationships prevent the image from segmenting into top and bottom halves. Beck et al. (1983) write that “Strong vertical linking based on proximity interferes with textural segmentation” (p.28). We agree with this emphasis on proximity, but prefer a description which emphasizes that the vertical linking process uses the same textural segmentation mechanisms as are needed to explain all of their displays. We attribute the strong vertical linking to the interaction of five effects within the Boundary Contour System. The higher relative density of vertically arranged U’s and inverted U’s provides a relatively strong activation of vertically oriented receptive fields.
The higher density and stronger activation of vertically oriented receptive
fields generates larger inputs to the vertically oriented long-range cooperative process, which enhances the vertical advantage by generating strong top-down positive feedback. The smaller relative density of horizontally arranged U’s and inverted U’s provides a relatively weak activation of horizontally oriented receptive fields. The lower density and smaller activation of these horizontally oriented receptive fields generates a smaller input to the horizontally oriented cooperative process. The horizontally oriented cooperation consequently cannot offset the strength of the vertically oriented cooperation. Although the horizontal end cuts can be generated by individual line ends, the reduction in density of these line ends in the horizontal direction reduces the total input to the corresponding horizontally oriented cooperative cells. All of these factors favor the ultimate dominance of vertically oriented long-range Boundary Contour structures. Beck et al. (1983) analyse the different figures in Figure 14 using different combinations of classical Gestalt rules. We analyse these figures by showing how they differentially stimulate the same set of Boundary Contour System rules. This type of mechanistic synthesis leads to the suggestion that the Boundary Contour System embodies a universal set of rules for textural grouping.

15. Image Contrasts and Neon Color Spreading
Beck et al. (1983) used regular arrays of black and grey squares on a white background and of white and grey squares on a black background with the same incisiveness as they used U displays. All of the corresponding perceptual groupings can be qualitatively explained in terms of the contrast-sensitivity of Boundary Contour System responses to these images. The most difficult new property of these percepts can be seen by looking at Figure 15. Diagonal grey bands can be seen joining the grey squares in the middle third of the figure. We interpret this effect to be a type of neon color spreading (van Tuijl, 1975). This interpretation is supported by the percept that obtains when the grey squares are replaced by red squares of similar contrast, as we have done using our computer graphics system. Then diagonal red bands can be seen joining the red squares in the middle of the figure. Neither these red diagonal bands, nor by extension the grey bands seen upon inspection of Figure 15, can be interpreted as being merely a classical contrast effect due to the black squares. The percept of these diagonal bands can be explained using the same type of analysis that Grossberg (1984a) and Grossberg and Mingolla (1985) have used to explain the neon color spreading that is induced by a black Ehrenstein figure surrounding a red cross (Figure 16; Redies and Spillmann, 1981) and the complementary color induction and spreading that is induced when parts of an image grating are achromatic and complementary parts are colored (van Tuijl, 1975). These explanations indicate how segmentation within the Boundary Contour System can sometimes induce visible contrasts at locations where no luminance contrasts exist in the scenic image. Neon spreading phenomena occur only when some scenic elements have greater relative contrasts with respect to the background than do the complementary scenic elements (van Tuijl and de Weert, 1979). This prerequisite is satisfied by Figure 15.
The black squares are much more contrastive relative to the white ground than are the grey squares. Thus the black-to-white contrasts can excite oriented receptive fields within the Boundary Contour System much more than can the grey-to-white contrasts. As in our other explanations of neon color spreading, we trace the initiation of this neon effect to two properties of the Boundary Contour System: the contrast-sensitivity of the oriented receptive fields, and the lateral inhibition within the first competitive stage among like-oriented cells at nearby positions (Section 20 and Appendix). Due to contrast-sensitivity, each light grey square activates oriented receptive fields less than each black square. The activated orientations are, by and large, vertical and horizontal, at least on a sufficiently small spatial scale. At the first competitive stage, each strongly activated vertically tuned cell inhibits nearby weakly activated vertically tuned cells,
Figure 15. Textural segmentation and neon color spreading: The middle third of this figure is easily segmented from the rest. Diagonal flow of grey featural quality between the grey squares of the middle segment is an example of neon color spreading. See also Figures 16 and 17. (Reprinted from Beck, Prazdny, and Rosenfeld, 1983. We are grateful to Jacob Beck for providing the original of this figure.)

and each strongly activated horizontally tuned cell inhibits nearby weakly activated horizontally tuned cells (Figure 17). In all, each light grey square’s Boundary Contours receive strong inhibition both from the vertical and the horizontal direction. This conjoint vertical and horizontal inhibition generates a gap within the Boundary Contours at each corner of every light grey square and a net tendency to generate a diagonal Boundary Contour via disinhibition at the second competitive stage. These diagonal Boundary Contours can then link up via colinear cooperation to further weaken the vertical and horizontal Boundary Contours as they build completed diagonal Boundary Contours between the light grey squares. This lattice of diagonal Boundary Contours enables grey featural quality to flow out of the squares and fill-in the positions bounded by the lattice within the Feature Contour System. In the top and bottom thirds of Figure 15, on the other hand, only the horizontal Boundary Contours of the grey squares are significantly inhibited. Such inhibitions tend to be compensated at the cooperative stage by colinear horizontal boundary completion. Thus the integrity of the horizontal Boundary Contours near such a grey square’s corner tends to be preserved. It is worth emphasizing a similarity and a difference between the percepts in Figures 14b and 15. In both percepts, diagonal Boundary Contours help to segment the images.
However, in Figure 14b, the diagonals are activated directly at the stage of the oriented receptive fields, whereas in Figure 15, the diagonals are activated indirectly via disinhibition at the second competitive stage. We suggest that similar global factors may partially determine the Hermann grid illusion. Spillmann (1985) has reviewed evidence that suggests a role for central factors in generating this illusion, notably the work of Preyer (1897/98) and Prandtl (1927) showing that when a white grid is presented on a colored background, the illusory spots have the same color as the surrounding squares.
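The corner disinhibition invoked in this explanation can be caricatured numerically. The sketch below is our own illustration, not the model's equations: four orientation channels at a grey square's corner pass through a first competitive stage, in which strong like-oriented neighbors (the black square's boundaries) inhibit the weak local vertical and horizontal activations, and a second competitive stage, modeled here as normalization across orientations with a small tonic baseline. All names and numerical values are assumptions for illustration only.

```python
import numpy as np

# Orientation channels at one position: [vertical, 45 deg, horizontal, 135 deg].

def first_competitive_stage(x, neighbor, strength=1.0):
    """Like-oriented cells at nearby positions inhibit one another: a
    strongly activated neighbor suppresses a weak local activation."""
    return np.maximum(x - strength * neighbor, 0.0)

def second_competitive_stage(w, tonic=0.1):
    """Orientations compete at each position; with normalization and a
    tonic baseline, suppressing some channels disinhibits the others."""
    w = w + tonic
    return w / w.sum()

grey_corner = np.array([0.3, 0.0, 0.3, 0.0])  # weak V and H edges, no diagonals
black_edges = np.array([1.0, 0.0, 1.0, 0.0])  # strong V and H neighbor boundaries

alone = second_competitive_stage(first_competitive_stage(grey_corner, 0.0))
inhibited = second_competitive_stage(first_competitive_stage(grey_corner, black_edges))
# The diagonal channels' relative activity rises once V and H are inhibited,
# seeding the diagonal groupings that colinear cooperation then completes.
```

In this toy setting, inhibiting the vertical and horizontal channels raises the diagonals' share of the normalized activity, which is the disinhibition that the colinear cooperative stage can then amplify into completed diagonal Boundary Contours.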
Figure 16. Neon color spreading: (a) A red cross in isolation appears unremarkable. (b) When the cross is surrounded by an Ehrenstein figure, the red color can flow out of the cross until it hits the illusory contour induced by the Ehrenstein figure.
Figure 17. Boundary Contour disinhibition and neon color spreading: This figure illustrates how the neon spreading evident in Figure 15 can occur. If grey squares are much lighter than black squares and the squares are sufficiently close, the net effect of strong inhibitory boundary signals from the black squares to the weakly activated grey square boundaries leads to disinhibition of diagonal Boundary Contours. Cooperation between these diagonal boundaries enables diagonal featural flow to occur between the grey squares.
Wolfe (1984) has presented additional evidence that global factors contribute to this illusion. Although we expect our theory to be progressively refined as it achieves a greater behavioral and neural explanatory range, we believe that the types of explanation suggested above will continue to integrate the several classical Gestaltist laws into a unified neo-Gestaltist mechanistic understanding. In this new framework, instead of invoking different Gestalt laws to explain different percepts, one analyses how different images probe the same laws in context-sensitive ways.

16. Computer Simulations of Perceptual Grouping
In this section, we summarize computer simulations that illustrate the Boundary Contour System’s ability to generate perceptual groupings akin to those in the Beck et al. displays. In the light of these results, we then analyse data of Glass and Switkes (1976) about random dot percepts and of Gregory and Heard (1979) about border locking during the café wall illusion, before rigorously defining the model neuron interactions that define the Boundary Contour System (BCS). Numerical parameters were held fixed for all of the simulations; only the input patterns were varied. As the input patterns were moved about, the BCS sensed relationships among the inducing elements and generated emergent boundary groupings among them. In all of the simulations, we defined the input patterns to be the output patterns of the oriented receptive fields, as in Figure 18a, since our primary objective was to study the CC Loop, or cooperative-competitive feedback exchange. This step reduced the computer time needed to generate the simulations. If the BCS is ever realized in parallel hardware, rather than by simulation on a traditional computer, it will run in real-time. In all of Figures 18-25, we have displayed network activities after the CC Loop converges to an equilibrium state. These simulations used only a single cooperative bandwidth. They thus illustrate how well the BCS can segment images using a single “spatial frequency” scale. Multiple scales are, however, needed to generate three-dimensional form percepts (Grossberg, 1983a, 1985; Grossberg and Mingolla, 1986). Figure 18a depicts an array of four vertically oriented input clusters. We call each cluster a Line because it represents a caricature of an orientation field’s response to a vertical line (Figure 5a). In Figures 18b, c, and d, we display the equilibrium activities of the cells at three successive CC Loop stages: the first competitive stage, the second competitive stage, and the cooperative stage.
The length of an oriented line at each position is proportional to the equilibrium activity of a cell whose receptive field is centered at that position with the prescribed orientation. We will focus upon the activity pattern within the y-field, or second competitive stage, of each simulation (Figure 18c). This is the final competitive stage that inputs to the cooperative stage (Section 8). The w-field (first competitive stage) and z-field (cooperative stage) activity patterns are also displayed to enable the reader to achieve a better intuition after considering the definitions of these fields in Section 20 and the Appendix. The input pattern in Figure 18a possesses a manifest vertical symmetry: Pairs of vertical Lines are colinear in the vertical direction, whereas they are spatially out-of-phase in the horizontal direction. The BCS senses this vertical symmetry, and generates emergent vertical lines in Figure 18c, in addition to horizontal end cuts at the ends of each Line, as suggested by Figure 10. In Figure 19a, the input pattern shown in Figure 18a has been altered, so that the first column of vertical Lines is shifted upward relative to the second column of vertical Lines. Figure 19c shows that the BCS begins to sense the horizontal symmetry within the input configuration. In addition to the emergent vertical grouping and horizontal end cuts like those of Figure 18c, an approximately horizontal grouping has appeared. In Figure 20, the input Lines are moved so that pairs of Lines are colinear in the
Figure 18. Computer simulation of processes underlying textural grouping: The length of each line segment in this figure and Figures 19-25 is proportional to the activation of a network node responsive to one of twelve possible orientations. The dots indicate the positions of inactive cells. In Figures 18-25, part (a) displays the results of input masks which sense the amount of contrast at a given orientation of visual input, as in Figure 5a. Parts (b)-(d) show equilibrium activities of oriented cells at the competitive and cooperative layers. A comparison of (a) and (b) indicates the major groupings sensed by the network. Here only the vertical alignment of the two left and two right Lines is registered. See text for detailed discussion.
Figure 19. The emergence of nearly horizontal grouping: The only difference between the input for this figure and that of Figure 18 is that the right column of lines has been shifted upward by one lattice location. The vertical grouping of Figure 18 is preserved as the horizontal grouping emerges. The horizontal groupings are due to cooperation between end cuts at the Line ends.
Chapter 3
vertical direction and their Line ends are lined up in the horizontal direction. Now both vertical and horizontal groupings are generated in Figure 20c, as in Figure 13. In Figure 21a, the input lines are shifted so that they become non-colinear in a vertical direction, but pairs of their Line ends remain aligned. The vertical symmetry of Figure 20a is hereby broken. Thus in Figure 21c, the BCS groups the horizontal Line ends, but not the vertical Lines. Figure 22 depicts a more demanding phenomenon: the emergence of diagonal groupings where no diagonals whatsoever exist in the input pattern. Figure 22a is generated by bringing the two horizontal rows of vertical Lines closer together until their ends lie within the spatial bandwidth of the cooperative interaction. Figure 22c shows that the BCS senses diagonal groupings of the Lines, as in Figure 14b. It is remarkable that these diagonal groupings emerge both on a microscopic scale and a macroscopic scale. Thus diagonally oriented receptive fields are activated in the emergent boundaries, and these activations, as a whole, group into diagonal bands. In Figure 23c, another shift of the inputs induces internal diagonal bands while enabling the exterior grouping into horizontal and diagonal boundaries to persist. In Figure 24a, one of the vertical Lines is removed. The BCS now senses the remaining horizontal and diagonal symmetries (Figure 24c). In Figure 25a, the lower Line is moved further away from the upper pair of Lines until the cooperation can no longer support the diagonal groupings. The diagonal groupings break apart, leaving the remaining horizontal groupings intact (Figure 25c).

17. On-Line Statistical Decision Theory and Stochastic Relaxation

These figures illustrate the fact that the BCS behaves like an on-line statistical decision theory in response to its input patterns.
The BCS can sense only those groupings of perceptual elements which possess enough “statistical inertia” to drive its cooperative-competitive feedback exchanges towards a non-zero stable equilibrium configuration. The emergent patterns in Figures 18-25 are thus as important for what they do not show as for what they do show. All possible groupings of the oriented input elements could, in principle, have been generated, since all possible groupings of the cooperative-competitive interaction were capable of receiving inputs. In order to compare and contrast BCS properties with other approaches, one can interpret the distribution of oriented activities at each input position as being analogous to a local probability distribution, and the final BCS pattern as being the global decision that the system reaches and stores based upon all of its local data. The figures show that the BCS regards many of the possible groupings of these local data as spurious, and suppresses them as being functional noise. Some popular approaches to boundary segmentation and noise suppression do adopt a frankly probabilistic framework. For example, in a stochastic relaxation approach based upon statistical physics, Geman and Geman (1984) slowly decrease a formal temperature parameter that drives their system towards a minimal energy configuration with boundary enhancing properties. Zucker (1985) has also suggested a minimization algorithm to determine the best segmentation. Such algorithms provide one way, indeed a classical way, to realize coherent properties within a many-body system. These algorithms define open loop procedures in which external agents manipulate the parameters leading to coherence. In the BCS, by contrast, the only “external parameters” are the input patterns themselves. Each input pattern defines a different set of boundary conditions for the BCS, and this difference, in itself, generates different segmentations.
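The "statistical inertia" idea can be made concrete with a toy sketch. The code below is not the published BCS equations; the cooperation gain, competition gain, threshold, and saturation level are all illustrative assumptions, and the lattice is collapsed to a single one-dimensional row with two orientations. A colinear chain of like-oriented elements recruits enough mutual cooperative support to survive and be enhanced at equilibrium, while a weak, isolated element of the perpendicular orientation is suppressed as functional noise:

```python
import numpy as np

def relax(I_h, I_v, coop=0.3, beta=0.8, theta=0.5, cap=1.5, steps=30):
    """Iterate a toy cooperative-competitive feedback exchange to equilibrium.

    I_h, I_v: bottom-up inputs to "horizontal" and "vertical" units on a
    1-D row of positions. All parameter values are illustrative assumptions.
    """
    x_h, x_v = I_h.astype(float).copy(), I_v.astype(float).copy()
    pad = lambda a: np.pad(a, 1)  # zero activity beyond the row's borders
    for _ in range(steps):
        # Cooperation: colinear like-oriented flankers lend support.
        # Only the horizontal units have colinear flankers along this row.
        s_h = coop * (pad(x_h)[:-2] + pad(x_h)[2:])
        # Competition: orientations inhibit each other at every position,
        # and the threshold theta discards weakly supported activity.
        x_h_new = np.clip(I_h + s_h - beta * x_v - theta, 0.0, cap)
        x_v_new = np.clip(I_v - beta * x_h - theta, 0.0, cap)
        x_h, x_v = x_h_new, x_v_new
    return x_h, x_v

I_h = np.zeros(9); I_h[2:7] = 1.0   # five colinear "horizontal" elements
I_v = np.zeros(9); I_v[4] = 0.6     # one weak, isolated "vertical" element
x_h, x_v = relax(I_h, I_v)
```

At equilibrium the chain's interior activity exceeds its bottom-up input (cooperative enhancement), the isolated vertical element is silenced by the orientational competition, and no activity leaks to empty positions: the grouping with statistical inertia is stored, and the spurious one is suppressed.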
The BCS does not need extra external parameters because it contains a closed loop process (the CC Loop) which regulates its own convergence to a symmetric and coherent configuration via its real-time competitive-cooperative feedback exchanges. The BCS differs in other major ways from alternative models. Geman and Geman (1984), for example, build into the probability distributions of their algorithm informa-
Figure 20. Coexistence of vertical and horizontal grouping: Here both horizontal and vertical groupings are completed at all Line ends.
Figure 21. Horizontal grouping by end cuts: A horizontal shift of the lower two Lines in Figure 20 breaks the vertical groupings but preserves the horizontal groupings.
Figure 22. The emergence of diagonal groupings: The Boundary Contour System (BCS) is capable of generating groupings along orientations which have no activity in the oriented mask responses. Individual diagonally oriented cells are activated within the diagonally oriented groupings.
Figure 23. Multiple diagonal groupings: A new diagonal grouping emerges as a result of shifting the input Lines. As in Figure 20, grouping in one orientation does not preclude grouping in an (almost) perpendicular orientation at the same Line end.
Figure 24. Global restructuring due to removal of local features: The inputs of this figure and Figure 23 are identical, except that the lower right Line has been removed. A comparison of Figure 24b with Figure 23b shows that, although gross aspects of the shared grouping are similar, removal of one Line can affect groupings among other Lines.
Figure 25. Distance-dependence of grouping: Relative to the inputs of Figure 24, the bottom Line has moved outside of the cooperative bandwidth that supported diagonal grouping. Although the diagonal grouping vanishes, the horizontal grouping at the bottom of the top Lines persists.
tion about the images to be processed. The dynamics of the BCS clarify the relevance of probabilistic concepts to the segmentation process. In particular, the distributions of oriented activities at each input position (Figure 5) play the role of local probability distributions. On the other hand, within the BCS, these distributions emerge as part of a real-time reaction to input patterns, rather than according to predetermined constraints on probabilities. The BCS does not incorporate hypotheses about which images will be processed into its probability distributions. Such knowledge is not needed to achieve rapid pre-attentive segmentation. The Object Recognition System (ORS) does encode information about which images are familiar (Figure 1). Feedback interactions between the BCS and the ORS can rapidly supplement a pre-attentive segmentation using the templates read-out from the ORS in response to BCS signals. Within our theory, however, these templates are not built into the ORS. Rather, we suggest how they are learned, in real-time, as the ORS self-organizes its recognition code in response to the pre-attentively completed output patterns from the BCS (Carpenter and Grossberg, 1985a, 1985b; Grossberg, 1980, 1984b). Thus the present theory sharply distinguishes between the processes of pre-attentive segmentation and of learned object recognition. By explicating the intimate interaction between the BCS and the ORS, the present theory also clarifies why these distinct processes are often treated as a single process. In particular, the degree to which top-down learned templates can deform a pre-attentively completed BCS activity pattern will depend upon the particular images being processed and the past experiences of the ORS. Thus by carefully selecting visual images, one can always argue that one or the other process is rate-limiting.
Furthermore, both the pre-attentive BCS interactions and the top-down learned ORS interactions are processes of completion which enhance the coherence of BCS output patterns. They can thus easily be mistaken for one another.
18. Correlations Which Cannot Be Perceived: Simple Cells, Complex Cells, and Cooperation

Glass and Switkes (1976) described a series of striking displays which they partially explained using the properties of cortical simple cells. Herein we suggest a more complete explanation of their results using properties of the BCS. In their basic display (Figure 26), “a random pattern of dots is superimposed on itself and rotated slightly... a circular pattern is immediately perceived... If the same pattern is superimposed on a negative of itself in which the background is a halftone gray and is rotated as before... it is impossible to perceive the circular Moiré. In this case spiral petal-like patterns can be seen” (p.67). The circular pattern in Figure 26 is not “perceived” in an obvious sense. All that an observer can “see” are black dots on white paper. We suggest that the percept of circular structure is recognized by the Object Recognition System, whereas the Feature Contour System, wherein percepts of brightness and color are seen, generates the filled-in contrast differences that distinguish the black dots from the white background (Figure 1). A similar issue is raised by Figure 10, in which short vertical lines are seen even though emergent long vertical lines influence perceptual grouping. Thus, in the Glass and Switkes (1976) displays, no less than in the Beck, Prazdny, and Rosenfeld (1983) displays, one must sharply distinguish the recognition of perceptual groupings from the percepts that are seen. These recognition events always have properties of “coherence,” whether or not they can support visible contrast differences. It then remains to explain why inverting the contrast of one of the images can alter what is recognized as well as what is seen. We agree with part of the Glass and Switkes (1976) explanation.
Consider a pair of black dots in Figure 26 that arises by rotating one image with respect to the other. Let the orientation of the pair with respect to the horizontal be θ°. Since the dots are close to one another, they can activate receptive fields that have an orientation
Figure 26. A Glass pattern: The emergent circular pattern is “recognized” although it is not “seen” as a pattern of differing contrasts. The text suggests how this can happen. (Reprinted from Glass and Switkes, 1976.)

approximately equal to θ°. This is due to the fact that an oriented receptive field is not an edge detector per se, but rather is sensitive to relative contrast differences across its medial axis. Only one of the two types of receptive fields at each position and orientation will be strongly activated, depending on the direction-of-contrast in the image. Each receptive field is sensitive to direction-of-contrast, even though pairs of these fields corresponding to like positions and orientations pool their activities at the next processing stage to generate an output that is insensitive to direction-of-contrast. We identify cells whose receptive fields are sensitive to direction-of-contrast with simple cells and the cells at the next stage which are insensitive to direction-of-contrast with complex cells of the striate cortex (DeValois, Albrecht, and Thorell, 1982; Gouras and Krüger, 1979; Heggelund, 1981; Hubel and Wiesel, 1962, 1968; Schiller, Finlay, and Volman, 1976; Tanaka, Lee, and Creutzfeldt, 1983). Glass and Switkes (1976) did not proceed beyond this fact. We suggest, in addition, that long-range cooperation within the BCS also plays a crucial role in grouping Glass images. To see how cooperation is engaged, consider two or more pairs of black dots that satisfy the following conditions: Each pair arises by rotating one image with respect to the other. The orientation of all pairs with respect to the horizontal is approximately θ°. All pairs are approximately colinear and do not lie too far apart. Such combinations of dots can more strongly activate the corresponding cooperative cells than can random combinations of dots. Each cooperative cell sends positive feedback to cells at the competitive stages with the same position and orientation.
The competing cells which receive the largest cooperative signals gain an advantage over cells with different orientations. After competition among all possible cooperative groupings takes place, the favored groupings win and generate the large circular Boundary Contour structure that is recognized but not seen. Small circular
boundaries are also generated around each dot and support the visible percept of dots on a white background within the Feature Contour System. Thus the orientation θ° of a pair of rotated black dots engages the BCS in two fundamentally different ways. First, it preferentially activates some oriented receptive fields above others. Second, it preferentially activates some cooperative cells above others due to combinations of inputs from preferentially activated receptive fields. As in the displays of Beck, Prazdny, and Rosenfeld (1983), the Glass images probe multiple levels of the BCS. The other Glass images probe different levels of the BCS, notably the way in which simple cells activate complex cells which, in turn, activate the competitive layers. These images are constructed by reversing the contrast of one of the two images before they are superimposed. Then an observer sees black and white dots on a grey background. The recognition of circular macrostructure is, however, replaced by recognition of a more amorphous spiral petal-like pattern. Glass and Switkes (1976) noted that their “hypothesized neural mechanism does not appear to explain the observation of spiral-like patterns” (p.71). To explain this recognition, we first note that the black dots on the grey background generate dark-light contrasts, whereas the white dots on the grey background generate light-dark contrasts. Hence the simple cells which responded to pairs of rotated black dots in Figure 26 are now stimulated by only one dot in each pair. Two or more randomly distributed black dots may be close enough to stimulate individual simple cells, but the cells favored by stimulation by two or more random dots will have different orientations than the cells stimulated by two or more rotated black dots in Figure 26. In addition, simple cells that are sensitive to the opposite direction-of-contrast can respond to the white dots on the grey background.
These cells will be spatially rotated with respect to the cells responding to the black dots. Moreover, since the black-to-grey contrast is greater than the white-to-grey contrast, the cells which respond to the black dots will fire more vigorously than the cells which respond to the white dots. Thus although both classes of simple cells feed into the corresponding complex cells, the complex cells which respond to the black dots will be more vigorously activated than the complex cells responding to the white dots. The cooperative stage will favor the most active combinations of complex cells whose orientations are approximately colinear and which are not too far apart. Due to the differences in spatial position and orientation of the most favored competitive cells, a different boundary grouping is generated than in Figure 26. A similar analysis can be given to the Glass and Switkes displays that use complementary colors. In summary, the Glass and Switkes (1976) data emphasize three main points: Although simple cells sensitive to the same orientation and opposite direction-of-contrast feed into complex cells that are insensitive to direction-of-contrast, reversing the direction-of-contrast of some inputs can alter the positions and the orientations of the complex cells that are most vigorously activated. Although many possible groupings of cells can initially activate the cooperative stage, only the most favored groupings can survive the cooperative-competitive feedback exchange, as in Figures 18-25. Although all emergent Boundary Contours can influence the Object Recognition System, not all of these Boundary Contours can support visible filled-in contrast differences within the Feature Contour System. Prazdny (1984) has presented an extensive set of Glass-type displays, which have led him to conclude “that the mechanisms responsible for our perception of Glass patterns are also responsible for the detection of extended contours” (p.476).
Our theory provides a quantitative implementation of this assertion.

19. Border Locking: The Café Wall Illusion
A remarkable percept which is rendered plausible by BCS properties is the café wall illusion (Gregory and Heard, 1979). This illusion is important because it clarifies the conditions under which the spatial alignment of colinear image contours with different contrasts is normally maintained. The illusion is illustrated in Figure 27. The illusion occurs only if the luminance of the “mortar” within the horizontal strips
Figure 27. The café wall illusion: Although only horizontal and vertical luminance contours exist in this image, strong diagonal groupings are perceived. (Reprinted from Gregory and Heard, 1979.)

lies between, or is not far outside, the luminances of the dark and light tiles, as in Figure 27. The illusion occurs, for example, in the limiting case of the Münsterberg figure, in which black and white tiles are separated by a black mortar. Gregory and Heard (1979) have also reported that the tile boundaries appear to “creep across the mortar during luminance changes” (p.368). Using a computer graphics system, we have generated a dynamic display in which the mortar luminance changes continuously through time. The perceived transitions from parallel tiles to wedge-shaped tiles and back are dramatic, if not stunning, using such a dynamic display. Some of the BCS mechanisms that help to clarify this illusion can be inferred from Figure 28. This figure depicts a computer simulation of an orientation field that was generated in response to alternating black and white tiles surrounding a black strip of mortar. Figure 29 schematizes the main properties of Figure 28. The hatched areas in Figure 29a depict the regions in which the greatest activations of oriented receptive fields occur. Due to the approximately horizontal orientations of the activated receptive fields in Figure 29a, diagonal cooperative groupings between positions such as A and B can be initiated, as in Figures 23-25. Figure 28 thus indicates that a macroscopic spatial asymmetry in the activation of oriented receptive fields can contribute to the shifting of borders which leads to the wedge-shaped percepts. Figure 29b schematizes the fact that the microstructure of the orientation field is also skewed in Figure 28. Diagonal orientations tend to point into the black regions at the corners of the white tiles.
Diagonal end cuts induced near positions A and B (Section 6) can thus cooperate between A and B in approximately the same direction as the macrostructure between A and C can cooperate with the macrostructure between B and E (Figure 29a). Diagonal activations near positions C and D can cooperate with each other in a direction almost parallel to the cooperation between A and B. These microscopic and macroscopic cooperative effects can help to make the boundaries at the top of the mortar seem to tilt diagonally downwards.
Figure 28. Simulation of the responses of a field of oriented masks to the luminance pattern near the mortar of the café wall illusion: The right of the bottom row joins to the left of the top row. The relative size of the masks used to generate the figure is indicated by the oblong shape in the center.

Several finer points are clarified by the combination of these macroscale and microscale properties. By themselves, the microscale properties do not provide a sufficient explanation of why, for example, an end cut at position D cannot cooperate with direct diagonal activations at A. The macroscale interactions tilt the balance in favor of cooperation between A and B. In the Münsterberg figure, the black mortar under a white tile may seem to glow, whereas the black mortar under a black tile does not. Using a dark grey mortar, the grey mortar under a white tile may seem brighter, whereas the grey mortar under a black tile may better preserve its grey appearance. McCourt (1983) has also called attention to the relevance of brightness induction to explaining the café wall illusion. A partial explanation of these brightness percepts can be inferred from Figure 29. End cuts and diagonal groupings near position A may partially inhibit the parallel boundary between A and C. Brightness can then flow from the white tile downwards,
Figure 29. A schematic depiction of the simulation in Figure 28: (a) shows the region of strong horizontal activity and indicates a possible diagonal grouping between positions A and B. (b) suggests that cooperation may occur in response to direct activations of oriented masks at positions C and D, as well as in response to end cuts at positions A and B. See text for additional discussion.
as during neon color spreading (Figure 16). The more vigorous boundary activations
above positions such as D and E (Figure 29a) may better contain local featural contrasts within a tighter web of Boundary Contours. This property also helps to explain the observation of Gregory and Heard (1979) that the white tiles seem to be pulled more into the black at positions such as A than at positions such as C. Our analysis of the café wall illusion, although not based on a complete computer simulation, suggests that the same three factors which play an important role in generating the Glass and Switkes (1976) data also play an important role in generating the Gregory and Heard (1979) data. In addition, perpendicular end cuts and multiple spatial scales seem to play a role in generating the Gregory and Heard (1979) data, with different combinations of scales acting between positions such as A-B than positions such as C-D. This last property may explain why opposite sides A and C of an apparently wedge-shaped tile sometimes seem to lie at different depths from an observer (Grossberg, 1983a).

20. Boundary Contour System Stages: Predictions About Cortical Architectures
This section outlines in greater detail the network interactions that we have used to characterize the BCS. Several of these interactions suggest anatomical and physiological predictions about the visual cortex. These predictions refine our earlier predictions that the data of von der Heydt, Peterhans, and Baumgartner (1984) have since supported (Grossberg and Mingolla, 1985). Figure 30 summarizes the proposed BCS interactions. The process whereby Boundary Contours are built up is initiated by the activation of oriented masks, or elongated receptive fields, at each position of perceptual space (Hubel and Wiesel, 1977). An oriented mask is a cell, or cell population, that is selectively responsive to oriented scenic contrast differences. In particular, each mask is sensitive to scenic edges that activate a prescribed small region of the retina, and whose orientations lie within a prescribed band of orientations with respect to the retina. A family of such oriented masks lies at every network position such that each mask is sensitive to a different band of edge orientations within its prescribed small region of the scene.

A. Position, Orientation, Amount-of-Contrast, and Direction-of-Contrast

The first stage of oriented masks is sensitive to the position, orientation, amount-of-contrast, and direction-of-contrast at an edge of a visual scene. Thus two subsets of masks exist corresponding to each position and orientation. One subset responds only to light-dark contrasts and the other subset responds to dark-light contrasts. Such oriented masks do not, however, respond only to scenic edges. They can also respond to any image which generates a sufficiently large net contrast with the correct position, orientation, and direction-of-contrast within their receptive fields, as in Figures 14b and 26.
We identify these cells with the simple cells of striate cortex (DeValois, Albrecht, and Thorell, 1982; Hubel and Wiesel, 1962, 1968; Schiller, Finlay, and Volman, 1976). Pairs of oriented masks which are sensitive to similar positions and orientations but to opposite directions-of-contrast excite the next BCS stage. The output from this stage is thus sensitive to position, orientation, and amount-of-contrast, but is insensitive to direction-of-contrast. A vertical Boundary Contour can thus be activated by either a close-to-vertical light-dark edge or a close-to-vertical dark-light edge at a fixed scenic position, as in Figure 2. The activities of these cells define the orientation field in Figure 5a. We identify the cells at this stage with the complex cells of striate cortex (DeValois, Albrecht, and Thorell, 1982; Gouras and Krüger, 1979; Heggelund, 1981; Hubel and Wiesel, 1962, 1968; Schiller, Finlay, and Volman, 1976; Tanaka, Lee, and Creutzfeldt, 1983). Spitzer and Hochstein (1985) have independently developed an essentially identical model of complex cell receptive fields to explain parametric properties of their cortical data.
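The simple-to-complex pooling just described can be sketched in one dimension (an illustrative toy, not the model's actual receptive-field kernels): an odd-symmetric contrast mask yields two half-wave-rectified simple-cell populations, one per direction-of-contrast, and their sum is a complex-cell response that registers position and amount-of-contrast but discards direction-of-contrast:

```python
import numpy as np

def simple_and_complex(luminance):
    """Toy 1-D sketch of simple-cell rectification and complex-cell pooling."""
    # Odd-symmetric contrast mask: drive[i] = luminance[i+1] - luminance[i-1],
    # with zero padding at the borders of the row.
    drive = np.convolve(luminance, [1.0, 0.0, -1.0], mode="same")
    simple_a = np.maximum(drive, 0.0)    # one direction-of-contrast
    simple_b = np.maximum(-drive, 0.0)   # the opposite direction-of-contrast
    complex_ = simple_a + simple_b       # pooled: amount-of-contrast only
    return simple_a, simple_b, complex_

edge_up = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])  # dark-to-light edge
edge_dn = edge_up[::-1]                              # light-to-dark edge
sa_u, sb_u, c_u = simple_and_complex(edge_up)
sa_d, sb_d, c_d = simple_and_complex(edge_dn)
```

At the edge location only one simple-cell population responds to each stimulus, yet the pooled complex response there is identical for the two opposite directions-of-contrast, which is the property used in the text to explain the Glass displays.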
[Figure 30 diagram: oriented cooperation, dipole field, and the CC Loop feedback pathway; see caption.]
Figure 30. Circuit diagram of the Boundary Contour System: Inputs activate oriented masks which cooperate at each position and orientation before feeding into an on-center off-surround interaction. This interaction excites like-orientations at the same position and inhibits like-orientations at nearby positions. The affected cells are on-cells within a dipole field. On-cells at a fixed position compete among orientations. On-cells also inhibit off-cells which represent the same position and orientation. Off-cells at each position, in turn, compete among orientations. Both on-cells and off-cells are tonically active. Net excitation (inhibition) of an on-cell (off-cell) excites (inhibits) a cooperative receptive field corresponding to the same position and orientation. Sufficiently strong net positive activation of both receptive fields of a cooperative cell enables it to generate feedback via an on-center off-surround interaction among like-oriented cells. Dipole on-cells which receive the most favorable combination of bottom-up signals and top-down signals generate the emergent perceptual grouping.
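The caption's requirement that a cooperative cell generate feedback only when both of its receptive fields receive sufficiently strong net positive activation can be sketched as an AND-like gate (the threshold value is an illustrative assumption):

```python
def bipole_feedback(left_field, right_field, threshold=0.5):
    """AND-like gating sketch: cooperative feedback is generated only when
    BOTH flanking receptive fields receive sufficient net positive input."""
    if left_field > threshold and right_field > threshold:
        return left_field + right_field   # feedback grows with total support
    return 0.0

# Support on only one flank (e.g., a single Line end) produces no feedback,
# so cooperation cannot complete a boundary into empty space on its far side.
assert bipole_feedback(1.0, 0.0) == 0.0
# Aligned support on both flanks drives boundary completion between them.
assert bipole_feedback(1.0, 0.8) == 1.8
```

This two-sided gating is what lets cooperation bridge gaps between aligned inducers while refusing to extrapolate outward from an isolated one.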
B. On-Center Off-Surround Interaction within Each Orientation

The outputs from these cells activate the first of two successive stages of short-range competition, which are denoted by Competition (I) and Competition (II) in Figures 18-25. At the first competitive stage, a mask of fixed orientation excites the like-oriented cells at its position and inhibits the like-oriented cells at nearby positions. Thus an on-center off-surround interaction between like-oriented cells occurs around each perceptual location. This interaction predicts that a stage subsequent to striate complex cells organizes cells sensitive to like orientations at different positions so that they can engage in the required on-center off-surround interaction.

C. Push-Pull Competition between Orientations at Each Position

The inputs to the second competitive stage are the outputs from the first competitive stage. At the second competitive stage, competition occurs between different orientations at each position. Thus a stage of competition between like orientations at different, but nearby, positions (Competition I) is followed by a stage of competition between different orientations at the same position (Competition II). This second competitive stage is tonically active. Thus inhibition of a vertical orientation excites the horizontal orientation at the same position via disinhibition of its tonic activity. The combined action of the two competitive stages generates the perpendicular end cuts in Figure 5b that we have used to explain the percepts in Figures 7, 8, 12, and 13. Conjoint inhibition of vertical and horizontal orientations by the first competitive stage leading to disinhibition of diagonal orientations at the second competitive stage (Figure 17) was also used to explain the diagonal groupings in Figure 15. A similar interaction was used to help explain the neon color spreading phenomenon described in Figure 16 (Grossberg and Mingolla, 1985).
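The two competitive stages can be sketched as follows (a one-dimensional toy with assumed surround and tonic parameters, not the published equations). A row of vertically tuned cells is driven by a Line; positions just beyond the Line's ends receive net off-surround inhibition at the first stage, and the tonically active push-pull competition of the second stage converts that inhibition into excitation of the perpendicular orientation:

```python
import numpy as np

def competition_stages(vert_input, surround=0.3, tonic=0.5):
    """Toy sketch of Competition I (within orientation, across position)
    followed by Competition II (across orientation, within position)."""
    horiz_input = np.zeros_like(vert_input, dtype=float)
    pad = lambda a: np.pad(a, 1)  # zero activity beyond the row's borders

    def stage1(x):
        # On-center off-surround among like-oriented cells: each cell keeps
        # its own input but is inhibited by like-oriented nearest neighbors.
        return x - surround * (pad(x)[:-2] + pad(x)[2:])

    v1 = stage1(vert_input.astype(float))
    h1 = stage1(horiz_input)
    # Competition II is tonically active: net inhibition of one orientation
    # disinhibits the perpendicular orientation at the same position.
    v2 = np.maximum(tonic + v1 - h1, 0.0)
    h2 = np.maximum(tonic + h1 - v1, 0.0)
    return v2, h2

vert = np.zeros(7); vert[2:5] = 1.0   # a vertical Line spanning positions 2-4
v2, h2 = competition_stages(vert)
```

Positions 1 and 5, flanking the Line's ends, receive net inhibition of the vertical orientation at stage I, so their horizontal activity rises above the tonic baseline: these are the end cuts. Vertical activity survives inside the Line itself.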
Thus the interactions of the first and second competitive stages help to explain a wide variety of seemingly unrelated perceptual groupings, color percepts, and illusory figures.

D. Dipole Field: Spatial Impenetrability

The process described in this section refines the BCS model that was used in Grossberg and Mingolla (1985). This process incorporates a principle of cortical design which has been used to carry out related functional tasks in Grossberg (1980, 1983b). The functional role played by this process in the BCS can be understood by considering Figure 18c. At the second competitive stage of this figure, horizontal end cuts border the vertical responses to the inducing input Lines. What prevents the end cuts at both sides of each Line from cooperating? If these end cuts could cooperate, then each Line could activate one of a cooperative cell's pair of receptive fields (Figure 9). As a result, horizontal Boundary Contours could be generated throughout the region between pairs of vertical Lines in Figure 18d, even though these Lines are spatially out-of-phase. The problem can thus be summarized as follows: Given the need for a long-range cooperative process to complete boundaries over retinal veins, the blind spot, etc., what prevents this cooperative process from leaping over intervening images and grouping together inappropriate combinations of inputs? In situations wherein no image-induced obstructions prevent such grouping, it can in fact occur, as in Figures 8 and 9. If, however, cooperative grouping could penetrate all perceived objects, then many spurious groupings would occur across every Line. The perceptual space would be transparent with respect to the cooperative process. To prevent this catastrophe, we propose a Postulate of Spatial Impenetrability. This postulate suggests that mechanisms exist which prevent the cooperative process from grouping across all intervening percepts.
Inspection of Figure 18c discloses the primary computational properties that such a process must realize. It must not prevent like-oriented responses from cooperating across spatially aligned positions, because that is the primary functional role of cooperation. It need only prevent like-oriented responses (such as the horizontal end cuts in Figure 18a) from cooperating across a region of perpendicularly oriented responses (such as the vertical responses to the vertical Lines in Figure 18c). We therefore hypothesize that the vertical responses to the Lines generate inhibitory inputs to horizontally oriented receptive fields of the cooperative process (Figure 31). The net input due to both horizontal end cuts and vertical Lines at the horizontally oriented cooperative cells is thus very small or negative. As a result, neither receptive field of a horizontally oriented cooperative cell between the vertical Lines can be supraliminally excited. That is why the cooperative responses in Figure 18d ignore the horizontal end cuts. It remains to say how both excitatory and inhibitory inputs are generated from the second competitive stage to the cooperative stage. We hypothesize that the second competitive stage is a dipole field (Grossberg, 1980, 1983b) and that inputs from the first competitive stage activate the on-cells of this dipole field. Suppose, for example, that an input excites vertically oriented on-cells, which inhibit horizontally oriented on-cells at the same position, as we have proposed in Section 20C. We assume, in addition, that inhibition of the horizontal on-cells excites the horizontal off-cells via disinhibition. The excited vertically oriented on-cells send excitatory inputs to the receptive fields of vertically oriented cooperative cells, whereas the excited horizontally oriented off-cells send inhibitory inputs to the receptive fields of horizontally oriented cooperative cells (Figure 30). Two new cortical predictions are implied by this dipole field hypothesis: Both the on-cell subfield and the off-cell subfield of the dipole field are tonically active, thereby enabling their cells to be activated due to disinhibition.
Excitation of on-cells generates excitatory inputs to like-oriented cooperative receptive fields, whereas excitation of off-cells generates inhibitory inputs to like-oriented cooperative receptive fields. The tonic activity of the on-cell subfield helps to generate perpendicular end cuts, thereby preventing color flow from line ends. The tonic activity of the off-cell subfield helps to inhibit like-oriented cooperative cells, thereby augmenting spatial impenetrability.

E. Long-Range Oriented Cooperation between Like-Oriented Pairs of Input Groupings
The outputs from the dipole field input to a spatially long-range cooperative process. We call this process the boundary completion process. Outputs due to like-oriented dipole field cells that are approximately aligned across perceptual space can cooperate via this process to synthesize an intervening boundary, as in Figures 18-25. A cooperative cell can be activated only if it receives a sufficiently positive net input at both of its orientationally tuned receptive fields (Figure 9). Two types of parameters must be specified to characterize these receptive fields: macroscale parameters, which determine the gross shape of each receptive field, and microscale parameters, which determine how effectively a dipole field input of prescribed orientation can excite or inhibit a cooperative receptive field. Figure 32 describes a computer simulation of the cooperative receptive field that we used to generate Figures 18-25. The cooperative out-field, or projection field, in Figure 32a describes the interaction strengths, or path weights, from a single horizontally oriented dipole field on-cell to all cells within the cooperative stage. The length of each line is proportional to the size of the interaction strength to on-cells with the depicted positions and orientations. The cooperative in-field, or receptive field, in Figure 32b describes the path weights from all dipole field on-cells with the depicted positions and preferred orientations to a single cooperative cell with a horizontally oriented receptive field. The length of each line is thus proportional to the sensitivity of the receptive field to inputs received from cells coding the depicted positions and orientations. The cell in Figure 32b is most sensitive to horizontally oriented inputs that fall along a horizontal axis passing through the cell. Close-to-horizontal orientations and close-to-horizontal positions can also help to excite the cell, but they are less effective.
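The shape of such a receptive field can be sketched directly from the kernel form given in the Appendix: a Gaussian falloff around an optimal distance P, multiplied by cosine orientation-tuning factors. The following code is our own illustration with arbitrary parameter values; it computes the weight contributed by one dipole field on-cell to one branch of a cooperative cell.

```python
import math

def coop_weight(dx, dy, r, k, P=5.0, R=4, T=5):
    """Path weight from a dipole-field on-cell offset by (dx, dy) with
    orientation r to one branch of a cooperative cell of orientation k.
    Follows the Gaussian-distance / cosine-tuning kernel form of the
    Appendix; the constants P, R, T here are illustrative only."""
    N = math.hypot(dx, dy)           # distance between the two cells
    if N == 0.0:
        return 0.0
    Q = math.atan2(dy, dx)           # direction of the source cell
    dist = math.exp(-2.0 * (N / P - 1.0) ** 2)      # peaks at distance P
    position_tuning = abs(math.cos(Q - r)) ** R     # source alignment
    branch_tuning = max(math.cos(Q - k), 0.0) ** T  # one-sided rectification
    return dist * position_tuning * branch_tuning

# A horizontal cooperative cell (k = 0) listens most strongly to
# horizontal cells lying on its own axis at the optimal distance:
on_axis = coop_weight(5.0, 0.0, r=0.0, k=0.0)
off_axis = coop_weight(0.0, 5.0, r=0.0, k=0.0)
opposite_side = coop_weight(-5.0, 0.0, r=0.0, k=0.0)
```

The one-sided rectification of cos(Q - k) is what restricts each branch to collecting inputs from a single side of the cooperative cell; raising the powers R and T sharpens both the positional and the orientational tuning.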
Figures 32a and 32b describe the same information, but from different perspectives of a single dipole field on-cell source (Figure 32a) and a
Figure 31. A mechanism to implement the postulate of spatial impenetrability: The left receptive fields of two horizontally tuned cooperative cells are crossed by a thin vertical Line. Although horizontal end cut signals can excite the upper receptive field, these are cancelled by the greater number of inhibitory inputs due to the vertical Line inputs. Within the lower receptive field, the excitatory inputs due to end cuts prevail.
Chapter 3
single cooperative cell sink (Figure 32b). Figure 33 depicts a cooperative out-field (Figure 33a) and in-field (Figure 33b) due to a different choice of numerical parameters. In Figure 33a, a single dipole field on-cell can spray inputs over a spatially broad region, but the orientations that it can excite are narrowly tuned at each position. From the perspective of a cooperative cell's receptive fields, the out-field in Figure 33a generates an in-field which is spatially narrow, but the orientations that can excite it are broadly tuned. Figures 32 and 33 illustrate a duality between in-fields and out-fields that is made rigorous by the equations in the Appendix.

F. On-Center Off-Surround Feedback within Each Orientation

This process refines the BCS system that was described in Grossberg and Mingolla (1985). In Section 8, we suggested that excitatory feedback from the cooperative stage to the second competitive stage (more precisely, to the on-cells of the dipole field) can help to eliminate fuzzy bands of boundaries by providing some orientations with a competitive advantage over other orientations. It is also necessary to provide some positions with a competitive advantage over other positions, so that only the favored orientations and positions will group to form a unique global boundary. Topographically organized excitatory feedback from a cooperative cell to a competitive cell is insufficient, because the spatial fuzziness of the cooperative process (Figure 32) then favors the same orientation at multiple non-colinear positions. Sharp orientational tuning but fuzzy spatial tuning of the resultant boundaries can then occur. We suggest that the cooperative-to-competitive feedback process realizes a Postulate of Spatial Sharpening in the following way. An active cooperative cell can excite like-oriented on-cells at the same position (Figure 30). An active cooperative cell can also inhibit like-oriented on-cells at nearby positions.
Then both orientations and positions which are favored by cooperative groupings gain a competitive advantage within the on-cells of the dipole field. Figures 18-25 show that the emergent groupings tend to be no thicker than the inducing input Lines due to this mechanism. Figure 30 shows that both the bottom-up inputs and the top-down inputs to the dipole field are organized as on-center off-surround interactions among like orientations. The net top-down input is, however, always nonnegative due to the fact that excitatory interneurons are interpolated between the on-center off-surround interaction and the dipole field. If this on-center off-surround interaction were allowed to directly input to the dipole field, then a single Line could generate a spatially expanding lattice of mutually perpendicular secondary, tertiary, and higher-order end cuts via the cooperative-competitive feedback loop. This completes our description of BCS interactions.
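The spatial-sharpening action of this cooperative-to-competitive feedback can be sketched numerically. This is our own minimal illustration, not the article's simulation: one orientation, a one-dimensional field, nearest-neighbor off-surround, and arbitrary constants, with the final rectification standing in for the excitatory interneurons that keep the net top-down input nonnegative.

```python
import numpy as np

def sharpening_feedback(z, center=1.0, surround=0.5):
    """Top-down on-center off-surround within one orientation: each
    cooperative activity z[i] excites position i and inhibits its two
    neighbors (wrap-around via np.roll). The rectification keeps the
    net top-down signal nonnegative."""
    v = center * z - surround * (np.roll(z, 1) + np.roll(z, -1))
    return np.maximum(v, 0.0)

# A fuzzy band of cooperative boundary activity sharpens to its peak:
z = np.array([0.0, 0.2, 0.6, 1.0, 0.6, 0.2, 0.0, 0.0])
v = sharpening_feedback(z)
```

Only the favored position survives the off-surround, so repeated passes around the cooperative-competitive loop thin a fuzzy boundary band toward a unique sharp grouping.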
21. Concluding Remarks: Universality of the Boundary Contour System

The Boundary Contour System and Feature Contour System interactions of our theory have suggested quantitative explanations and predictions for a large perceptual and neural data base, including data about perceptual grouping of textures and borders, illusory figures, monocular and binocular brightness percepts, monocular and binocular rivalry, the Land retinex demonstrations, neon color spreading and related filling-in phenomena, complementary color induction, fading of stabilized images, multiple scale interactions, shape-from-shading, metacontrast, hyperacuity, and various other global interactions between depth, lightness, length, and form properties (Cohen and Grossberg, 1984b; Grossberg, 1980, 1983b, 1984a, 1985; Grossberg and Mingolla, 1985). This expanded explanatory and predictive range is due, we believe, to the introduction and quantitative analysis of several fundamental new principles and mechanisms to the perceptual literature, notably the principle of Boundary-Feature Trade-off and the mechanisms governing Boundary Contour System and Feature Contour System interactions.
[Figure 32 panels: (a) OUT FIELD; (b) IN FIELD.]
Figure 32. Cooperative in-field and out-field: Line lengths are proportional to the strengths of signals from a horizontally tuned competitive cell to cooperative cells of various orientations at nearby positions. Thus in (a) strong signals are sent to horizontal cooperative cells 5 units to the left or the right of the competitive cell (center circle), but signal strength drops off with distance and change of orientation. (b) shows the dual perspective of weights assigned to incoming signals by the receptive field of a horizontal cooperative cell. (Note that only excitatory signal strengths are indicated in this figure.) The parameters used to generate these fields are the identical ones used in Figures 18-25.
[Figure 33 panels: (a) OUT FIELD; (b) IN FIELD.]
Figure 33. Extreme cooperative in-field and out-field: This figure employs more extreme parameter choices than were used in the simulations of Figures 18-25. Greater orientational uncertainty at one location of the in-field corresponds to greater positional uncertainty in the out-field, thereby illustrating the duality between in-field and out-field.
The present article has refined the mechanisms of the Boundary Contour System by using this system to quantitatively simulate emergent perceptual grouping properties that are found in the data of workers like Beck, Prazdny, and Rosenfeld (1983), Glass and Switkes (1976), and Gregory and Heard (1979). We have hereby been led to articulate and instantiate the postulates of spatial impenetrability and of spatial sharpening, and to thereby make some new predictions about prestriate cortical interactions. These results have also shown that several apparently different Gestalt rules can be analysed using the context-sensitive reactions of a single Boundary Contour System. Taken together, these results suggest that a universal set of rules for perceptual grouping of scenic edges, textures, and smoothly shaded regions is well on the way to being characterized.
APPENDIX

Boundary Contour System Equations

The network which we used to define the Boundary Contour System (BCS) is defined in stages below. This network further develops the BCS system that was described in Grossberg and Mingolla (1985).

A. Oriented Masks

To define a mask, or oriented receptive field, centered at position (i,j) with orientation k, divide the elongated receptive field of the mask into a left-half L_{ijk} and a right-half R_{ijk}. Let all the masks sample a field of preprocessed inputs. If S_{pq} equals the preprocessed input to position (p,q) of this field, then the output J_{ijk} from the mask at position (i,j) with orientation k is

J_{ijk} = \frac{[U_{ijk} - \alpha V_{ijk}]^+ + [V_{ijk} - \alpha U_{ijk}]^+}{1 + \beta (U_{ijk} + V_{ijk})},   (A1)

where

U_{ijk} = \sum_{(p,q) \in L_{ijk}} S_{pq}   (A2)

and

V_{ijk} = \sum_{(p,q) \in R_{ijk}} S_{pq},   (A3)

and the notation [p]^+ = \max(p, 0). The sum of the two terms in the numerator of (A1) says that J_{ijk} is sensitive to the orientation and amount-of-contrast, but not to the direction-of-contrast, received by L_{ijk} and R_{ijk}. The denominator term in (A1) enables J_{ijk} to compute a ratio scale in the limit where \beta (U_{ijk} + V_{ijk}) is much greater than 1. In all of our simulations, we have chosen \beta = 0.

B. On-Center Off-Surround Interaction within Each Orientation (Competition I)

Inputs J_{ijk} with a fixed orientation k activate potentials w_{ijk} at the first competitive stage via on-center off-surround interactions: each J_{ijk} excites w_{ijk} and inhibits w_{pqk} if |p - i|^2 + |q - j|^2 is sufficiently small. All the potentials w_{ijk} are also excited by the same tonic input I, which supports disinhibitory activations at the next competitive stage. Thus

\frac{d}{dt} w_{ijk} = -w_{ijk} + I + f(J_{ijk}) - w_{ijk} \sum_{(p,q)} f(J_{pqk}) A_{pqij},   (A4)

where A_{pqij} is the inhibitory interaction strength between positions (p,q) and (i,j) and f(J_{ijk}) is the input signal generated by J_{ijk}. In our runs, we chose

f(J_{ijk}) = B J_{ijk}.   (A5)
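With \beta = 0, as in the simulations, (A1) reduces to a sum of two rectified differences. The sketch below is our own illustration of that special case; the half-field sums U and V and the value of \alpha are supplied by hand rather than computed from an image.

```python
def mask_output(U, V, alpha=1.0):
    """Oriented mask output J = [U - aV]^+ + [V - aU]^+ (eq. A1 with
    beta = 0). U and V are the summed preprocessed inputs over the left
    and right half-fields; alpha here is an illustrative value only.
    Swapping U and V leaves J unchanged, so the mask is insensitive to
    direction-of-contrast while remaining sensitive to its amount."""
    pos = lambda s: max(s, 0.0)
    return pos(U - alpha * V) + pos(V - alpha * U)

j_edge = mask_output(1.0, 0.0)     # contrast across the mask
j_flip = mask_output(0.0, 1.0)     # reversed contrast: same output
j_uniform = mask_output(1.0, 1.0)  # uniform field: suppressed
```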
Sections (C) and (D) together define the on-cell subfield of the dipole field described in Section 20.

C. Push-Pull Opponent Processes between Orientations at Each Position

Perpendicular potentials w_{ijk} and w_{ijK} elicit output signals that compete at their target potentials x_{ijk} and x_{ijK}, respectively. For simplicity, we assume that these output signals equal the potentials w_{ijk} and w_{ijK}, which are always nonnegative. We also assume that x_{ijk} and x_{ijK} respond quickly and linearly to these signals. Thus

x_{ijk} = w_{ijk} - w_{ijK}   (A6)

and

x_{ijK} = w_{ijK} - w_{ijk}.   (A7)
D. Normalization at Each Position

We also assume that, as part of this push-pull opponent process, the outputs of the second competitive stage become normalized. Several ways exist for achieving this property (Grossberg, 1983a). We have used the following approach. The potentials x_{ijk} interact when they become positive. Thus we let the output O_{ijk} = O(x_{ijk}) from x_{ijk} equal

O_{ijk} = C [x_{ijk}]^+,   (A8)

where C is a positive constant and [p]^+ = \max(p, 0). All these outputs at each position interact via a shunting on-center off-surround network whose potentials y_{ijk} satisfy

\frac{d}{dt} y_{ijk} = -D y_{ijk} + (E - y_{ijk}) O_{ijk} - y_{ijk} \sum_{m \neq k} O_{ijm}.   (A9)

Each potential y_{ijk} equilibrates rapidly to its input. Setting \frac{d}{dt} y_{ijk} = 0 in (A9) implies

y_{ijk} = \frac{E O_{ijk}}{D + O_{ij}},   (A10)

where

O_{ij} = \sum_{m=1}^{n} O_{ijm}.   (A11)
Thus if D is small compared to O_{ij}, then \sum_{m=1}^{n} y_{ijm} \cong E.

E. Opponent Inputs to the Cooperative Stage

The next process refines the BCS model used in Grossberg and Mingolla (1985). It helps to realize the Postulate of Spatial Impenetrability that was described in Section 20. The w_{ijk}, x_{ijk}, and y_{ijk} potentials are all assumed to be part of the on-cell subfield of a dipole field. If y_{ijk} is excited, an excitatory signal f(y_{ijk}) is generated at the cooperative stage. When potential y_{ijk} is excited, the potential y_{ijK} corresponding to the perpendicular orientation is inhibited. Inhibition of the on-cell potential y_{ijK} disinhibits the corresponding off-cell potential \tilde{y}_{ijK}, which sends an inhibitory signal -f(\tilde{y}_{ijK}) to the cooperative level. The signals f(y_{ijk}) and -f(\tilde{y}_{ijK}) thus occur together. In order to instantiate these properties, we made the simplest hypothesis, namely that

\tilde{y}_{ijK} = y_{ijk}.   (A12)
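The equilibrium relation (A10) is a simple ratio rule. The following sketch (our own code, with arbitrary values of the decay constant D and the ceiling E) shows the normalization property: total activity at a position approaches E whenever the total output there is large compared to D.

```python
def normalize(outputs, D=0.01, E=1.0):
    """Equilibrium of the shunting network, as in (A10): each
    orientation's activity is its output divided by D plus the total
    output at that position, so the activities sum to approximately E
    when D is small compared to the total output."""
    total = sum(outputs)
    return [E * o / (D + total) for o in outputs]

# Two orientations at one position: the stronger one keeps its
# relative advantage, but total activity is normalized toward E.
y = normalize([3.0, 1.0])
```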
F. Oriented Cooperation: Statistical Gates The cooperative potential Z , j k can be supraliminally activated only if both of its cooperative input branches receive enough net positive excitation from similarly aligned competitive potentials (Figure 9). Thus
In (A13), g(s) is a signal function that becomes positive only when s is positive, and has a finite maximum value. A slower-than-linear function

g(s) = \frac{[s]^+}{K + [s]^+}   (A14)

was used in our simulations. A sum of two sufficiently positive g(s) terms in (A13) is needed to activate z_{ijk} above the firing threshold of its output signal h(z_{ijk}). A threshold-linear signal function

h(z) = L [z - M]^+   (A15)

was used. Each sum

\sum_{(p,q,r)} [f(y_{pqr}) - f(\tilde{y}_{pqr})] F^{(r,k)}_{pqij}   (A16)

and

\sum_{(p,q,r)} [f(y_{pqr}) - f(\tilde{y}_{pqr})] G^{(r,k)}_{pqij}   (A17)

in (A13) is a spatial cross-correlation that adds up inputs from a strip with orientation (approximately equal to) k that lies to one side or the other of position (i,j), as in Figures 31 and 32. The orientations r that contribute to the spatial kernels F^{(r,k)}_{pqij} and G^{(r,k)}_{pqij} also approximately equal k. The kernels F^{(r,k)}_{pqij} and G^{(r,k)}_{pqij} are defined by

F^{(r,k)}_{pqij} = \exp\Big[ -2 \Big( \frac{N_{pqij}}{P} - 1 \Big)^2 \Big] \, |\cos(Q_{pqij} - r)|^R \, \big( [\cos(Q_{pqij} - k)]^+ \big)^T   (A18)

and

G^{(r,k)}_{pqij} = \exp\Big[ -2 \Big( \frac{N_{pqij}}{P} - 1 \Big)^2 \Big] \, |\cos(Q_{pqij} - r)|^R \, \big( [-\cos(Q_{pqij} - k)]^+ \big)^T,   (A19)

where

N_{pqij} = \sqrt{(p - i)^2 + (q - j)^2}   (A20)

and

Q_{pqij} = \arctan\Big( \frac{q - j}{p - i} \Big),   (A21)

and P, R, and T are positive constants. In particular, R and T are odd integers. Kernels F and G differ only by a minus sign under the [\cdot]^+ sign. This minus sign determines the polarity of the kernel; namely, whether it collects inputs for z_{ijk} from one side or the other of position (i,j). Term \exp[-2(N_{pqij}/P - 1)^2] determines the optimal distance P from (i,j) at which each kernel collects its inputs. The kernel decays in a Gaussian fashion as a function of N_{pqij}/P, where N_{pqij} in (A20) is the distance between (p,q) and (i,j). The cosine terms in (A18) and (A19) determine the orientational tuning of the kernels. By (A21), Q_{pqij} is the direction of position (p,q) with respect to the position of the cooperative cell (i,j) in (A13). Term |\cos(Q_{pqij} - r)| in (A18) and (A19) computes how parallel Q_{pqij} is to the receptive field orientation r at position (p,q). By (A21), term |\cos(Q_{pqij} - r)| is maximal when the orientation r equals the orientation of (p,q) with respect to (i,j). The absolute value sign around this term prevents it from becoming negative. Term \cos(Q_{pqij} - k) in (A18) and (A19) computes how parallel Q_{pqij} is to the orientation k of the receptive field of the cooperative cell (i,j) in (A13). By (A21), term \cos(Q_{pqij} - k) is maximal when the orientation k equals the orientation of (p,q) with respect to (i,j). Positions (p,q) such that \cos(Q_{pqij} - k) < 0 do not input to z_{ijk} via kernel F, because the [\cdot]^+ of a negative number equals zero. On the other hand, such positions (p,q) may input to z_{ijk} via kernel G due to the extra minus sign in the definition of kernel G. The extra minus sign in (A19) flips the preferred axis of orientation of kernel G^{(r,k)}_{pqij} with respect to the kernel F^{(r,k)}_{pqij} in order to define the two input-collecting branches of each cooperative cell, as in Figures 8 and 30. The product |\cos(Q_{pqij} - r)|^R \cos(Q_{pqij} - k)^T in (A18) and (A19) thus determines larger path weights from dipole field on-cells whose positions and orientations are nearly parallel to the preferred orientation k of the cooperative cell (i,j), and larger path weights from dipole field off-cells whose positions and orientations are nearly perpendicular to the preferred orientation k of the cooperative cell (i,j). The powers R and T determine the sharpness of orientational tuning: Higher powers enforce sharper tuning.

G. On-Center Off-Surround Feedback within Each Orientation

The next process refines the BCS model used in Grossberg and Mingolla (1985). It helps to realize the Postulate of Spatial Sharpening that was described in Section 20. We assume that each z_{ijk} activates a shunting on-center off-surround interaction within each orientation k. The target potentials v_{ijk} therefore obey an equation of the form
\frac{d}{dt} v_{ijk} = -v_{ijk} + h(z_{ijk}) - v_{ijk} \sum_{(p,q)} h(z_{pqk}) W_{pqij}.   (A22)

The bottom-up transformation J_{ijk} \to w_{ijk} in (A4) is thus similar to the top-down transformation z_{ijk} \to v_{ijk} in (A22). Functionally, the z_{ijk} \to v_{ijk} transformation enables the most favored cooperations to enhance their preferred positions and orientations as they suppress nearby positions with the same orientation. The signals v_{ijk} take effect by inputting to the w_{ijk} opponent process. Equation (A4) is thus changed to

\frac{d}{dt} w_{ijk} = -w_{ijk} + I + f(J_{ijk}) + v_{ijk} - w_{ijk} \sum_{(p,q)} [f(J_{pqk}) + v_{pqk}] A_{pqij}.   (A23)

At equilibrium, the computational logic of the BCS is determined, up to parameter choices, by the equations

w_{ijk} = \frac{I + f(J_{ijk}) + v_{ijk}}{1 + \sum_{(p,q)} [f(J_{pqk}) + v_{pqk}] A_{pqij}}   (A24)

and

v_{ijk} = \frac{h(z_{ijk})}{1 + \sum_{(p,q)} h(z_{pqk}) W_{pqij}}.   (A25)

Wherever possible, simple spatial kernels were used. For example, the kernels W_{pqij} in (A22) and A_{pqij} in (A23) were both chosen to be constant within a circular receptive field:

A_{pqij} = A if (p - i)^2 + (q - j)^2 \le A_0, and A_{pqij} = 0 otherwise;

W_{pqij} = W if (p - i)^2 + (q - j)^2 \le W_0, and W_{pqij} = 0 otherwise.

The oriented receptive fields L_{ijk} \cup R_{ijk} in (A2) and (A3) were chosen to have parallel linear sides with hemicircular ends.
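The "statistical gate" character of (A13)-(A15) can be sketched numerically. The code below is our own illustration with an arbitrary constant K: because g saturates at 1, a single branch, however strongly driven, can never exceed the value reached by two moderately excited branches, so a firing threshold M set between these values implements the requirement that both receptive fields be activated.

```python
def g(s, K=0.5):
    """Slower-than-linear signal function g(s) = [s]^+ / (K + [s]^+),
    as in (A14); it is positive only for positive s and saturates at 1."""
    s = max(s, 0.0)
    return s / (K + s)

def coop_activation(left, right):
    """Equilibrium cooperative activity: the sum of the two gated
    branch inputs, as in (A13)."""
    return g(left) + g(right)

both_branches = coop_activation(1.0, 1.0)    # both sides grouped
one_branch = coop_activation(10.0, 0.0)      # one side only, however strong
```

With a firing threshold M = 1 in (A15), the cell driven on both branches fires while the cell driven on only one branch stays silent, which is the two-branch gating property used throughout Section 20.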
REFERENCES

Beck, J., Perceptual grouping produced by changes in orientation and shape. Science, 1966, 154, 538-540 (a).
Beck, J., Effect of orientation and of shape similarity on perceptual grouping. Perception and Psychophysics, 1966, 1, 300-302 (b).
Beck, J., Similarity grouping and peripheral discriminability under uncertainty. American Journal of Psychology, 1972, 85, 1-19.
Beck, J., Textural segmentation. In J. Beck (Ed.), Organization and representation in perception. Hillsdale, NJ: Erlbaum, 1982.
Beck, J., Textural segmentation, second-order statistics, and textural elements. Biological Cybernetics, 1983, 48, 125-130.
Beck, J., Prazdny, K., and Rosenfeld, A., A theory of textural segmentation. In J. Beck, B. Hope, and A. Rosenfeld (Eds.), Human and machine vision. New York: Academic Press, 1983.
Caelli, T., On discriminating visual textures and images. Perception and Psychophysics, 1982, 31, 149-159.
Caelli, T., Energy processing and coding factors in texture discrimination and image processing. Perception and Psychophysics, 1983, 34, 349-355.
Caelli, T. and Dodwell, P.C., The discrimination of structure in vectorgraphs: Local and global effects. Perception and Psychophysics, 1982, 32, 314-326.
Caelli, T. and Julesz, B., Psychophysical evidence for global feature processing in visual texture discrimination. Journal of the Optical Society of America, 1979, 69, 675-677.
Carpenter, G.A. and Grossberg, S., Adaptation and transmitter gating in vertebrate photoreceptors. Journal of Theoretical Neurobiology, 1981, 1, 1-42.
Carpenter, G.A. and Grossberg, S., Dynamic models of neural systems: Propagated signals, photoreceptor transduction, and circadian rhythms. In J.P.E. Hodgson (Ed.), Oscillations in mathematical biology. New York: Springer-Verlag, 1983.
Carpenter, G.A. and Grossberg, S., Neural dynamics of category learning and recognition: Attention, memory consolidation, and amnesia. In J. Davis, W. Newburgh, and E. Wegman (Eds.), Brain structure, learning, and memory. AAAS Symposium Series, in press, 1985 (a).
Carpenter, G.A. and Grossberg, S., Neural dynamics of category learning and recognition: Structural invariants, evoked potentials, and reinforcement. In M. Commons, R. Herrnstein, and S. Kosslyn (Eds.), Pattern recognition and concepts in animals, people, and machines. Hillsdale, NJ: Erlbaum, 1985 (b).
Cohen, M.A. and Grossberg, S., Some global properties of binocular resonances: Disparity matching, filling-in, and figure-ground synthesis. In P. Dodwell and T. Caelli (Eds.), Figural synthesis. Hillsdale, NJ: Erlbaum, 1984 (a).
Cohen, M.A. and Grossberg, S., Neural dynamics of brightness perception: Features, boundaries, diffusion, and resonance. Perception and Psychophysics, 1984, 36, 428-456 (b).
Cohen, M.A. and Grossberg, S., Neural dynamics of speech and language coding: Developmental programs, perceptual grouping, and competition for short term memory. Human Neurobiology, in press, 1985.
Desimone, R., Schein, S.J., Moran, J., and Ungerleider, L.G., Contour, color, and shape analysis beyond the striate cortex. Vision Research, 1985, 25, 441-452.
Dev, P., Perception of depth surfaces in random-dot stereograms: A neural model. International Journal of Man-Machine Studies, 1975, 7, 511-528.
DeValois, R.L., Albrecht, D.G., and Thorell, L.G., Spatial frequency selectivity of cells in macaque visual cortex. Vision Research, 1982, 22, 545-559.
Dodwell, P.C., The Lie transformation group model of visual perception. Perception and Psychophysics, 1983, 34, 1-16.
Ejima, Y., Redies, C., Takahashi, S., and Akita, M., The neon color effect in the Ehrenstein pattern: Dependence on wavelength and illuminance. Vision Research, 1984, 24, 1719-1726.
Ellias, S. and Grossberg, S., Pattern formation, contrast control, and oscillations in the short term memory of shunting on-center off-surround networks. Biological Cybernetics, 1975, 20, 69-98.
Geman, S. and Geman, D., Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1984, 6, 721-741.
Glass, L. and Switkes, E., Pattern recognition in humans: Correlations which cannot be perceived. Perception, 1976, 5, 67-72.
Gouras, P. and Krüger, J., Responses of cells in foveal visual cortex of the monkey to pure color contrast. Journal of Neurophysiology, 1979, 42, 850-860.
Gregory, R.L., Eye and brain. New York: McGraw-Hill, 1966.
Gregory, R.L. and Heard, P., Border locking and the Café Wall illusion. Perception, 1979, 8, 365-380.
Grossberg, S., Contour enhancement, short-term memory, and constancies in reverberating neural networks. Studies in Applied Mathematics, 1973, 52, 217-257.
Grossberg, S., How does a brain build a cognitive code? Psychological Review, 1980, 87, 1-51.
Grossberg, S., Studies of mind and brain: Neural principles of learning, perception, development, cognition, and motor control. Boston: Reidel Press, 1982.
Grossberg, S., The quantized geometry of visual space: The coherent computation of depth, form, and lightness. Behavioral and Brain Sciences, 1983, 6, 625-692 (a).
Grossberg, S., Neural substrates of binocular form perception: Filtering, matching, diffusion, and resonance. In E. Basar, H. Flohr, H. Haken, and A.J. Mandell (Eds.), Synergetics of the brain. New York: Springer-Verlag, 1983 (b).
Grossberg, S., Outline of a theory of brightness, color, and form perception. In E. Degreef and J. van Buggenhaut (Eds.), Trends in mathematical psychology. Amsterdam: North-Holland, 1984 (a).
Grossberg, S., Some psychophysiological and pharmacological correlates of a developmental, cognitive, and motivational theory. In R. Karrer, J. Cohen, and P. Tueting (Eds.), Brain and information: Event related potentials. New York: New York Academy of Sciences, 1984 (b).
Grossberg, S., Cortical dynamics of depth, brightness, color, and form perception: A predictive synthesis. Submitted for publication, 1985.
Grossberg, S. and Levine, D., Some developmental and attentional biases in the contrast enhancement and short term memory of recurrent neural networks. Journal of Theoretical Biology, 1975, 53, 341-380.
Grossberg, S. and Mingolla, E., Neural dynamics of form perception: Boundary completion, illusory figures, and neon color spreading. Psychological Review, 1985, 92, 173-211.
Grossberg, S. and Mingolla, E., Neural dynamics of surface perception: Boundary webs, illuminants, and shape-from-shading. In preparation, 1986.
Heggelund, P., Receptive field organisation of complex cells in cat striate cortex. Experimental Brain Research, 1981, 42, 99-107.
Helmholtz, H.L.F. von, Treatise on physiological optics, J.P.C. Southall (Trans.). New York: Dover, 1962.
Hoffman, W.C., Higher visual perception as prolongation of the basic Lie transformation group. Mathematical Biosciences, 1970, 6, 437-471.
Horn, B.K.P., Understanding image intensities. Artificial Intelligence, 1977, 8, 201-231.
Hubel, D.H. and Wiesel, T.N., Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. Journal of Physiology, 1962, 160, 106-154.
Hubel, D.H. and Wiesel, T.N., Receptive fields and functional architecture of monkey striate cortex. Journal of Physiology, 1968, 195, 215-243.
Hubel, D.H. and Wiesel, T.N., Functional architecture of macaque monkey visual cortex. Proceedings of the Royal Society of London (B), 1977, 198, 1-59.
Julesz, B., Binocular depth perception of computer-generated patterns. Bell System Technical Journal, 1960, 39, 1125-1162.
Julesz, B., Foundations of cyclopean perception. Chicago: University of Chicago Press, 1971.
Kanizsa, G., Margini quasi-percettivi in campi con stimolazione omogenea. Rivista di Psicologia, 1955, 49, 7-30.
Kaplan, G.A., Kinetic disruption of optical texture: The perception of depth at an edge. Perception and Psychophysics, 1969, 6, 193-198.
Kawabata, N., Perception at the blind spot and similarity grouping. Perception and Psychophysics, 1984, 36, 151-158.
Krauskopf, J., Effect of retinal image stabilization on the appearance of heterochromatic targets. Journal of the Optical Society of America, 1963, 53, 741-744.
Land, E.H., The retinex theory of color vision. Scientific American, 1977, 237, 108-128.
Marr, D. and Hildreth, E., Theory of edge detection. Proceedings of the Royal Society of London (B), 1980, 207, 187-217.
Marr, D. and Poggio, T., Cooperative computation of stereo disparity. Science, 1976, 194, 283-287.
McCourt, M.E., Brightness induction and the Café Wall illusion. Perception, 1983, 12, 131-142.
Neisser, U., Cognitive psychology. New York: Appleton-Century-Crofts, 1967.
Prandtl, A., Über gleichsinnige Induktion und die Lichtverteilung in gitterartigen Mustern. Zeitschrift für Sinnesphysiologie, 1927, 58, 263-307.
Prazdny, K., Illusory contours are not caused by simultaneous brightness contrast. Perception and Psychophysics, 1983, 34, 403-404.
Prazdny, K., On the perception of Glass patterns. Perception, 1984, 13, 469-478.
Prazdny, K., On the nature of inducing forms generating perception of illusory contours. Perception and Psychophysics, 1985, 37, 237-242.
Preyer, W., On certain optical phenomena: Letter to Professor E.C. Sanford. American Journal of Psychology, 1897/98, 9, 42-44.
Ratliff, F., Mach bands: Quantitative studies on neural networks in the retina. New York: Holden-Day, 1965.
Redies, C. and Spillmann, L., The neon color effect in the Ehrenstein illusion. Perception, 1981, 10, 667-681.
Redies, C., Spillmann, L., and Kunz, K., Colored neon flanks and line gap enhancement. Vision Research, 1984, 24, 1301-1309.
Schatz, B.R., The computation of immediate texture discrimination. MIT AI Memo 426, 1977.
Chapter 3
Schiller, P.H., Finlay, B.L., and Volman, S.F., Quantitative studies of single-cell properties in monkey striate cortex, I: Spatiotemporal organization of receptive fields. Journal of Neurophysiology, 1976, 39, 1288-1319.
Schwartz, E.L., Desimone, R., Albright, T., and Gross, C.G., Shape recognition and inferior temporal neurons. Proceedings of the National Academy of Sciences, 1983, 80, 5776-5778.
Shapley, R. and Gordon, J., Nonlinearity in the perception of form. Perception and Psychophysics, 1985, 37, 84-88.
Sperling, G., Binocular vision: A physical and a neural theory. American Journal of Psychology, 1970, 83, 461-534.
Spillmann, L., Illusory brightness and contour perception: Current status and unresolved problems. Submitted for publication, 1985.
Spitzer, H. and Hochstein, S., A complex-cell receptive field model. Journal of Neurophysiology, 1985, 53, 1266-1286.
Tanaka, M., Lee, B.B., and Creutzfeldt, O.D., Spectral tuning and contour representation in area 17 of the awake monkey. In J.D. Mollon and L.T. Sharpe (Eds.), Colour vision. New York: Academic Press, 1983.
Todorović, D., Brightness perception and the Craik-O'Brien-Cornsweet effect. Unpublished M.A. Thesis. Storrs, CT: University of Connecticut, 1983.
van Tuijl, H.F.J.M., A new visual illusion: Neonlike color spreading and complementary color induction between subjective contours. Acta Psychologica, 1975, 39, 441-445.
van Tuijl, H.F.J.M. and de Weert, C.M.M., Sensory conditions for the occurrence of the neon spreading illusion. Perception, 1979, 8, 211-215.
van Tuijl, H.F.J.M. and Leeuwenberg, E.L.J., Neon color spreading and structural information measures. Perception and Psychophysics, 1979, 25, 269-284.
von der Heydt, R., Peterhans, E., and Baumgartner, G., Illusory contours and cortical neuron responses. Science, 1984, 224, 1260-1262.
Wertheimer, M., Untersuchungen zur Lehre von der Gestalt, II. Psychologische Forschung, 1923, 4, 301-350.
Wolfe, J.M., Global factors in the Hermann grid illusion. Perception, 1984, 13, 33-40.
Yarbus, A.L., Eye movements and vision. New York: Plenum Press, 1967.
Zeki, S., Colour coding in the cerebral cortex: The reaction of cells in monkey visual cortex to wavelengths and colours. Neuroscience, 1983, 9, 741-765 (a).
Zeki, S., Colour coding in the cerebral cortex: The responses of wavelength-selective and colour coded cells in monkey visual cortex to changes in wavelength composition. Neuroscience, 1983, 9, 767-791 (b).
Zucker, S.W., Early orientation selection: Tangent fields and the dimensionality of their support. Technical Report 85-13-R, McGill University, Montreal, 1985.
Chapter 4

NEURAL DYNAMICS OF BRIGHTNESS PERCEPTION: FEATURES, BOUNDARIES, DIFFUSION, AND RESONANCE

Preface

This Chapter describes some of the simulations of paradoxical brightness data which led us to realize that a diffusive filling-in process exists which is contained by boundary contours. These computer simulations also enabled us to mathematically characterize interactions between brightness contours, boundary contours, and featural filling-in that are capable of quantitatively simulating the targeted brightness data. At the time that these simulations were being completed, we knew of no direct neurophysiological evidence for a system with these detailed properties, although we qualitatively interpreted the cortical filling-in system to be functionally homologous to the horizontal cell layers in fish and higher organisms. After completing our simulations, we were gratified to read the 1984 data of Piccolino, Neyton, and Gerschenfeld on the H1 horizontal cells of the turtle retina. These data reported a formally identical type of filling-in interaction among these horizontal cells. If a functional homolog does exist between retinal filling-in and cortical filling-in, then we may infer from the Piccolino et al. data that the chemical transmitter which delivers boundary contour signals to the cortical syncytium may be a catecholamine. We chose to simulate a set of monocular brightness data which no single previous theory had been able to explain. The multiple constraints which these data imposed upon us guided us to our filling-in model. As the Chapter describes, these brightness data, taken together, challenge classical concepts of edge processing, and force one to consider how interactions between brightness and form processing govern the brightness profiles that we perceive.
In addition to simulating monocular brightness data using Boundary Contour System and Feature Contour System interactions, the Chapter also describes computer simulations of a demanding set of binocular brightness data which were performed using the FIRE theory. Both types of theory thus help to explain their targeted data quite well, and both do so better than rival theories. On the other hand, the theory as a whole then uses two different types of filling-in, one monocular (diffusion) and the other binocular (FIRE), and two different types of cooperative-competitive interactions, one monocular (CC Loop) and the other binocular (FIRE). In addition, as the FIRE theory began to be used to analyse 3-dimensional percepts of complex 2-dimensional images, the analysis seemed to become unnecessarily complicated. These inelegances ultimately focused my attention upon the following demanding problem: How can the FIRE theory be replaced by a binocular theory of the Boundary Contour System and Feature Contour System while preserving all of the good properties of the FIRE process within a theory with an expanded predictive range? Why did the FIRE theory work so well if it could be replaced in such a fashion? I have recently completed a theory of 3-dimensional form, color, and brightness perception which satisfies all of these concerns. In this theory, many of the key concepts from the FIRE process, such as filling-in generator and filling-in barrier, play a basic role. In addition, the monocular rules for the Boundary Contour and Feature Contour Systems generalize to, and indeed form the foundation for, the 3-D form theory. Thus the theory has again developed in an evolutionary way. The range of perceptual and neural data which can now be analysed due to this synthesis is much larger, although I am now surer than ever that the process of evolutionary theoretical refinement is as yet far from over.
Perception and Psychophysics, 1984, 36, 428-456. © 1985 The Psychonomic Society, Inc. Reprinted by permission of the publisher.
NEURAL DYNAMICS OF BRIGHTNESS PERCEPTION: FEATURES, BOUNDARIES, DIFFUSION, AND RESONANCE
Michael A. Cohen† and Stephen Grossberg‡
Abstract
A real-time visual processing theory is used to unify the explanation of monocular and binocular brightness data. This theory describes adaptive processes which overcome limitations of the visual uptake process to synthesize informative visual representations of the external world. The brightness data include versions of the Craik-O'Brien-Cornsweet effect and its exceptions, Bergstrom's demonstrations comparing the brightnesses of smoothly modulated and step-like luminance profiles, Hamada's demonstrations of nonclassical differences between the perception of luminance decrements and increments, Fechner's paradox, binocular brightness averaging, binocular brightness summation, binocular rivalry, and fading of stabilized images and ganzfelds. Familiar concepts such as spatial frequency analysis, Mach bands, and edge contrast are relevant but insufficient to explain the totality of these data. Two parallel contour-sensitive processes interact to generate the theory's brightness, color, and form explanations. A boundary-contour process is sensitive to the orientation and amount of contrast but not to the direction of contrast in scenic edges. It generates contours that form the boundaries of monocular perceptual domains. The spatial patterning of these contours is sensitive to the global configuration of scenic elements. A feature-contour process is insensitive to the orientation of contrast, but is sensitive to both the amount of contrast and to the direction of contrast in scenic edges. It triggers a diffusive filling-in reaction of featural quality within perceptual domains whose boundaries are dynamically defined by boundary contours. The boundary-contour system is hypothesized to include the hypercolumns in visual striate cortex. The feature-contour system is hypothesized to include the blobs in visual striate cortex.
These preprocessed monocular activity patterns enter consciousness in the theory via a process of resonant binocular matching that is capable of selectively lifting whole monocular patterns into a binocular representation of form-and-color-in-depth. This binocular process is hypothesized to occur in area V4 of the visual prestriate cortex.
† Supported in part by the National Science Foundation (NSF IST-80-00257) and the Office of Naval Research (ONR N00014-83-K0337).
‡ Supported in part by the Air Force Office of Scientific Research (AFOSR 82-0148) and the Office of Naval Research (ONR N00014-83-K0337).
1. Paradoxical Percepts as Probes of Adaptive Processes
This article describes quantitative simulations of monocular and binocular brightness data to illustrate and support a real-time perceptual processing theory. This theory introduces new concepts and mechanisms concerning how human observers achieve informative perceptual representations of the external world that overcome limitations of the sensory uptake process, notably of how distributed patterns of locally ambiguous visual features can be used to generate unambiguous global percepts. For example, light passes through retinal veins before it reaches retinal photoreceptors. Human observers do not perceive their retinal veins due to the action of mechanisms that attenuate the perception of images that are stabilized with respect to the retina. Mechanisms capable of generating this adaptive property of visual percepts can also generate paradoxical percepts, as during the perception of stabilized images or ganzfelds (Pritchard, 1961; Pritchard, Heron, and Hebb, 1960; Riggs, Ratliff, Cornsweet, and Cornsweet, 1953; Yarbus, 1967). Once such paradoxical percepts are traced to an adaptive perceptual process, they can be used as probes to discover the rules governing this process. This type of approach has been used throughout the research program on perception (Carpenter and Grossberg, 1981; Cohen and Grossberg, 1984; Grossberg, 1980, 1982a, 1983a, 1983b; Grossberg and Mingolla, 1985b) of which this work forms a part. Suppressing the perception of stabilized veins is insufficient to generate an adequate percept. The images that reach the retina can be occluded and segmented by the veins in several places. Somehow, broken retinal contours need to be completed, and occluded retinal color and brightness signals need to be filled in. Holes in the retina, such as the blind spot or certain scotomas, are also not visually perceived (Gerrits, deHaan, and Vendrick, 1966; Gerrits and Timmermann, 1969; Gerrits and Vendrick, 1970) due to some sort of filling-in process.
These completed boundaries and filled-in colors are illusory percepts, albeit illusory percepts with an important adaptive value. The large literature on illusory figures and filling-in can thus be used as probes of this adaptive process (Arend, Buehler, and Lockhead, 1971; Day, 1983; Gellatly, 1980; Kanizsa, 1974; Kennedy, 1978, 1979, 1981; Parks, 1980; Parks and Marks, 1983; Petry, Harbeck, Conway, and Levey, 1983; Redies and Spillmann, 1981; van Tuijl, 1975; van Tuijl and de Weert, 1979; Yarbus, 1967). The brightness simulations that we report herein illustrate our theory's proposal for how real and illusory boundaries are completed and features are filled-in. Retinal veins and the blind spot are not the only blemishes of the retinal image. The luminances that reach the retina confound inhomogeneous lighting conditions with invariant object reflectances. Workers since the time of Helmholtz (Helmholtz, 1962) have realized that the brain somehow "discounts the illuminant" to generate color and brightness percepts that are more accurate than the retinal data. Land (1977) has shown, for example, that the perceived colors within a picture constructed from overlapping colored patches are determined by the relative contrasts at the edges between the patches. The luminances within the patches are somehow discounted. These data also point to the existence of a filling-in process. Were it not possible to fill-in colors to replace the discounted illuminants, we would perceive a world of boundaries rather than one of extended forms. Since edges are used to generate the filled-in percepts, an adequate perceptual theory must define edges in a way that can accomplish this goal. We suggest that the edge computations whereby boundaries are completed are fundamentally different (in particular, they obey different rules) from the edge computations leading to color and brightness signals.
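Land's observation, that edge contrasts survive a slowly varying illuminant while absolute luminances do not, can be illustrated with a minimal one-dimensional sketch. This is an illustrative retinex-style computation, not the model developed in this article, and all numerical choices below are hypothetical:

```python
import numpy as np

n = 100
reflectance = np.where(np.arange(n) < 50, 0.2, 0.6)  # one reflectance step
illuminant = np.linspace(1.0, 3.0, n)                # smooth lighting gradient
luminance = reflectance * illuminant                 # what reaches the retina

# Log-luminance differences between neighbors: the slowly varying
# illuminant contributes only tiny values; the reflectance edge, a large one.
d = np.diff(np.log(luminance))
d[np.abs(d) < 0.05] = 0.0       # "discount the illuminant": drop small gradients

# Reintegrate the surviving edge signals to recover relative reflectance.
recovered = np.exp(np.concatenate([[0.0], np.cumsum(d)]))
ratio = recovered[-1] / recovered[0]    # close to the true ratio 0.6 / 0.2 = 3
```

Thresholding the log-luminance gradients discards the smooth illumination component, so reintegrating the surviving edge signals recovers the reflectance step up to an overall scale factor, which is why the luminances within the patches can be "somehow discounted."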
We claim that both types of edges are computed in parallel before being recombined to generate filled-in percepts. Our theory hereby suggests that the fundamental question “What is an edge, perceptually speaking?” has not adequately been answered by previous theories. One consequence of our answer is a physical explanation and generalization of the retinex theory (Grossberg, 1985), which Land (1977)
has developed to explain his experiments. The present article further supports this conception of how edges are computed by qualitatively explaining, and quantitatively simulating on the computer, such paradoxical brightness data as versions of the Craik-O'Brien-Cornsweet effect (Arend et al., 1971; Cornsweet, 1970; O'Brien, 1958) and its exceptions (Coren, 1983; Heggelund and Krekling, 1976; van den Brink and Keemink, 1976; Todorović, 1983), the Bergstrom demonstrations comparing brightnesses of smoothly modulated and step-like luminance profiles (Bergstrom, 1966, 1967a, 1967b), and the demonstrations of Hamada (1980) showing nonclassical differences between the perception of luminance decrements and increments. These percepts can all be seen with one eye. Our theory links these phenomena to the visual mechanisms that are capable of preventing perception of retinal veins and the blind spot, and that fill-in over discounted illuminants, which also operate when only one eye is open. Due to the action of binocular visual mechanisms that generate a self-consistent percept of depthful forms, some visual images that can be monocularly perceived may not be perceived during binocular viewing conditions. Binocular rivalry provides a classical example of this fact (Blake and Fox, 1974; Cogan, 1982; Kaufman, 1974; Kulikowski, 1978). To support the theory's conception of how depthful form percepts are generated (Cohen and Grossberg, 1984; Grossberg, 1983a, 1983b), we suggest explanations and provide simulations of data concerning inherently binocular brightness interactions. These data include results on Fechner's paradox, binocular brightness summation, binocular brightness averaging, and binocular rivalry (Blake, Sloane, and Fox, 1981; Cogan, 1982; Cogan, Silverman, and Sekuler, 1982; Curtis and Rule, 1980; Legge and Rubin, 1981; Levelt, 1965). These simulations do not, of course, begin to exhaust the richness of the perceptual literature.
They are meant to be illustrative, rather than exhaustive, of a perceptual theory that is still undergoing development. On the other hand, this incomplete theory already reveals the perhaps even more serious incompleteness of rival theories by suggesting concepts and explaining data that are outside the range of these rival theories. The article also illustrates the theory's burgeoning capacity to integrate the explanation of perceptual data by providing simulations of data about Fechner's paradox, binocular brightness averaging, binocular brightness summation, and binocular rivalry using the same model parameters that were established to simulate disparity matching, filling-in, and figure-ground synthesis (Cohen and Grossberg, 1984). Although our theory was derived from perceptual data and concepts, after it reached a certain state in its development, striking formal similarities with recent neurophysiological data could not fail to be noticed. Some of these relationships are briefly summarized in Table 1 below. Although the perceptual theory can be understood without considering its neurophysiological interpretation, if one is willing to pursue this interpretation, then the perceptual theory implies a number of neurophysiological and anatomical predictions. Such predictions enable yet another data base to be used for the further development and possible disconfirmation of the theory. A search through the neurophysiological literature has revealed that some of these predictions were already supported by known neural data, albeit data that took on new meaning in the light of the perceptual theory. Not all of the predictions were known, however. In fact, two of its predictions about the process of boundary completion have recently received experimental support from recordings by von der Heydt, Peterhans, and Baumgartner (1984) on cells in area 18 of the monkey visual cortex.
Neurophysiological interpretations and predictions of the theory are described in Grossberg and Mingolla (1985b). Due to the existence of this neural interpretation, we will take the liberty of calling the formal nodes in our network “cells” throughout the article. The next sections summarize the concepts that we use to explain the brightness data.
TABLE 1

NAMES OF MACROCIRCUIT STAGES

Abbreviation    Full Name

MPL     Left Monocular Preprocessing Stage (Lateral geniculate nucleus)
MPR     Right Monocular Preprocessing Stage (Lateral geniculate nucleus)
BCS     Boundary Contour Synthesis Stage [Interactions initiated by the hypercolumns in striate cortex, Area 17 (Hubel and Wiesel, 1977)]
MBCL    Left Monocular Brightness and Color Stage [Interactions initiated by the cytochrome oxidase staining blobs, Area 17 (Hendrickson, Hunt, and Wu, 1981; Horton and Hubel, 1981; Hubel and Livingstone, 1981; Livingstone and Hubel, 1982)]
MBCR    Right Monocular Brightness and Color Stage [Interactions initiated by the cytochrome oxidase staining blobs, Area 17]
BP      Binocular Percept Stage [Area V4 of the prestriate cortex (Zeki, 1983a, 1983b)]
2. The Boundary-Contour System and the Feature-Contour System
The theory asserts that two distinct types of edge, or contour, computations are carried out within two parallel systems. We call these systems the boundary-contour system and the feature-contour system. Boundary-contour signals are used to synthesize the boundaries, whether "real" or "illusory," that the perceptual process generates. Feature-contour signals initiate the filling-in processes whereby brightnesses and colors spread until they either hit their first boundary contour or are attenuated due to their spatial spread. Boundary contours are not, in themselves, visible. They gain visibility by restricting the filling-in that is triggered by feature-contour signals. These two systems obey different rules. The main rules can be summarized as follows.

3. Boundary Contours and Boundary Completion
The process whereby boundary contours are built up is initiated by the activation of oriented masks, or elongated receptive fields, at each position of perceptual space (Hubel and Wiesel, 1977). Our perceptual analysis leads to the following hypotheses about how these masks activate their target cells, and about how these cells interact to generate boundary contours. (a) Orientation and contrast. The output signals from the oriented masks are sensitive to the orientation and to the amount of contrast, but not to the direction of contrast, at an edge of a visual scene. Thus, a vertical boundary contour can be activated by either a close-to-vertical dark-light edge or a close-to-vertical light-dark edge at a fixed scenic position. The process whereby two like-oriented masks that are sensitive to direction of contrast at the same perceptual location give rise to an output signal that is not sensitive to direction of contrast is designated by a plus sign in Figure 1a.
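A minimal numerical sketch of this polarity-pooling rule follows; the one-dimensional difference filter is a hypothetical stand-in for an elongated two-dimensional receptive field:

```python
import numpy as np

def boundary_signal(row):
    """Pooled response of two like-oriented masks of opposite
    contrast polarity (the "+" in Figure 1a): sensitive to the
    amount of contrast but not to its direction."""
    diff = np.diff(row)                   # signed local contrast
    dark_light = np.maximum(diff, 0.0)    # mask for dark-to-light edges
    light_dark = np.maximum(-diff, 0.0)   # mask for light-to-dark edges
    return dark_light + light_dark        # equals |diff|: amount, not direction

step_up = np.array([0.0, 0.0, 1.0, 1.0])    # dark-light edge
step_down = np.array([1.0, 1.0, 0.0, 0.0])  # light-dark edge
# Both edges drive the same boundary-contour signal at the same position.
```

Half-wave rectifying each polarity before summing is what makes the pooled unit blind to the direction of contrast while preserving its magnitude.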
Figure 1. (a) Boundary-contour inputs are sensitive to the orientation and amount of contrast at a scenic edge, but not to its direction of contrast. (b) Like orientations compete at nearby perceptual locations. (c) Different orientations compete at each perceptual location. (d) Once activated, aligned orientations can cooperate across a larger visual domain to form real or illusory contours.
(b) Short-range competition. (i) The cells that react to output signals due to like-oriented masks compete between nearby perceptual locations (Figure 1b). Thus, a mask of fixed orientation excites the like-oriented cells at its location and inhibits the like-oriented cells at nearby locations. In other words, an on-center off-surround organization of like-oriented cell interactions exists around each perceptual location. (ii) The outputs from this competitive stage input to the next competitive stage. At this stage, cells compete that represent perpendicular orientations at the same perceptual location (Figure 1c). This competition defines a push-pull opponent process. If a given orientation is inhibited, then its perpendicular orientation is disinhibited. In all, a stage of competition between like orientations at different, but nearby, positions is followed by a stage of competition between perpendicular orientations at the same position. (c) Long-range oriented cooperation and boundary completion. The outputs from the last competitive stage input to a spatially long-range cooperative process that is called the "boundary-completion" process. Outputs due to like-oriented masks that are approximately aligned across perceptual space can cooperate via this process to synthesize an intervening boundary. The boundary-completion process is capable of synthesizing global visual boundaries from local scenic contours (Grossberg and Mingolla, 1985b). Both "real" and "illusory" boundaries are assumed to be generated by this boundary-completion process. Two simple demonstrations of a boundary-completion process with properties (a)-(c) can be made as follows. In Figure 2a, four pac-man figures are arranged at the vertices of an imaginary rectangle. It is a familiar fact that an illusory Kanizsa (1974) square can be seen when all four pac-man figures are black against a white background.
The same is true when two pac-man figures are black, the other two are white, and the background is grey, as in Figure 2b. The black pac-man figures form dark-light edges with respect to the grey background. The white pac-man figures form light-dark edges with the grey background. The visibility of illusory edges around the illusory square shows that a process exists that is capable of completing the contours between edges with opposite directions of contrast. This contour-completion process is thus sensitive to amount of contrast but not to direction of contrast. Another simple demonstration of these contour-completing properties can be constructed as follows. Divide a square into two equal rectangles along an imaginary boundary. Color one rectangle a uniform shade of grey. Color the other rectangle in shades of grey that progress from light to dark as one moves from end 1 of the rectangle to end 2 of the rectangle. Color end 1 a lighter shade than the uniform grey of the other rectangle, and color end 2 a darker shade than the uniform grey of the other rectangle. Then, as one moves from end 1 to end 2, an intermediate grey region is passed whose luminance approximately equals that of the uniform rectangle. At end 1, a light-dark edge exists from the nonuniform rectangle to the uniform rectangle. At end 2, a dark-light edge exists from the nonuniform rectangle to the uniform rectangle. Despite this reversal in the direction of contrast from end 1 to end 2, an observer can see an illusory edge that joins the two edges of opposite contrast and separates the intermediate rectangle region of equal luminance. This boundary completion process, which seems so paradoxical when its effects are seen in Kanizsa squares, is also hypothesized to complete boundaries across the blind spot, across the faded images of stabilized retinal veins, and between all perceptual domains that are separated by sharp brightness or color differences. (d) Binocular matching.
A monocular boundary contour can be generated when a single eye views a scene. When two eyes view a scene, a binocular interaction can occur between outputs from oriented masks that respond to the same retinal positions of the two eyes. This interaction leads to binocular competition between perpendicular orientations at each position. This competition takes place at, or before, the competitive stage (b ii).
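The two competitive stages of rule (b) can be caricatured in one dimension as follows; the surround weight, kernel width, and input values are hypothetical choices, not the article's parameters:

```python
import numpy as np

def like_orientation_competition(x, surround=0.25):
    """Stage (b-i): on-center off-surround competition among
    like-oriented cells at nearby positions."""
    left = np.roll(x, 1)
    right = np.roll(x, -1)
    left[0] = 0.0                    # no wrap-around at the array ends
    right[-1] = 0.0
    return np.maximum(x - surround * (left + right), 0.0)

def perpendicular_opponency(v, h):
    """Stage (b-ii): push-pull opponency between perpendicular
    orientations at each position; inhibiting one orientation
    disinhibits the other."""
    return np.maximum(v - h, 0.0), np.maximum(h - v, 0.0)

# A short vertical edge segment plus weak uniform horizontal activity.
vertical = np.array([0.0, 1.0, 1.0, 1.0, 0.0])
horizontal = np.full(5, 0.2)

v1 = like_orientation_competition(vertical)
h1 = like_orientation_competition(horizontal)
v2, h2 = perpendicular_opponency(v1, h1)
# Vertical wins along the edge; horizontal survives only at the flanks.
```

The ordering matters: spatial competition within an orientation first sharpens each orientation's activity profile, and only then do perpendicular orientations contend at each position, so suppressing one orientation there releases its opponent.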
Figure 2. (a) An illusory Kanizsa square is induced by four black pac-man figures. (b) An illusory square is induced by two black and two white pac-man figures on a grey background. Illusory contours can thus join edges with opposite directions of contrast (the effect may be weakened by the photographic reproduction process).
4. Feature Contours and Diffusive Filling-In
The rules of contrast obeyed by the feature-contour process are different from those obeyed by the boundary-contour process. (a) Contrast. The feature-contour process is insensitive to the orientation of contrast in a scenic edge, but it is sensitive to both the direction of contrast and the amount of contrast, unlike the boundary-contour process. For example, to compute the relative brightness across a scenic boundary, it is obviously important to keep track of which side of the scenic boundary has a larger reflectance. Sensitivity to direction of contrast is also needed to determine which side of a red-green scenic boundary is red and which is green. Due to its sensitivity to the amount of contrast, feature-contour signals "discount the illuminant." In the simulations in this article, only one type of feature-contour signal is considered, namely, achromatic or light-dark signals. In the simulations of chromatic percepts, three parallel channels of double-opponent feature-contour signals are used: light-dark, red-green, and blue-yellow. The simulations in this article consider only how input patterns are processed by a single network channel whose on-center off-surround spatial filter plays the role of a single spatial frequency channel (Grossberg, 1983b). We often call such a network a "spatial scale" for short. From our analysis of the dynamics of individual spatial scales, one can readily infer how multiple spatial scales, acting in parallel, transform the same input patterns. The rules of spatial interaction that govern the feature-contour process are also different from those that govern the boundary-contour process. (b) Diffusive filling-in. Boundary contours activate a boundary-completion process that synthesizes the boundaries that define monocular perceptual domains. Feature contours activate a diffusive filling-in process that spreads featural qualities, such as brightness or color, across these perceptual domains.
Figure 3 depicts the main properties of this filling-in process. It is assumed that featural filling-in occurs within a syncytium of cell compartments. By a syncytium of cells, we mean a regular array of cells in such an intimate relationship to one another that contiguous cells can easily pass signals between each other's compartment membranes. In the present instance, a feature-contour input signal to a cell of the syncytium activates that cell. Due to the syncytial coupling of this cell with its neighbors, the activity can rapidly spread to neighboring cells, then to neighbors of neighbors, and so on. Since the spreading occurs via a diffusion of activity (Appendix A), it tends to average the activity that was triggered by the feature-contour input signal across the cells that receive this spreading activity. This averaging of activity spreads across the syncytium with a space constant that depends upon the electrical properties of both the cell interiors and their membranes. The electrical properties of the cell membranes can be altered by boundary-contour signals in the following way. A boundary-contour signal is assumed to decrease the diffusion constant of its target cell membranes within the cell syncytium. It does so by acting as an inhibitory gating signal that causes an increase in cell membrane resistance (Appendix A). At the same time that a boundary-contour signal creates a barrier to the filling-in process at its target cells, it also acts to inhibit the activity of these cells. Thus, due to the physical process whereby a boundary contour limits featural spreading across the syncytium, a boundary-contour input also acts as a feature-contour input to its target syncytial cells. Such a diffusive filling-in reaction is hypothesized to instantiate featural filling-in over the blind spot, over the faded images of stabilized retinal veins, and over the illuminants that are discounted by feature-contour preprocessing.
Three distinguishable types of spatial interaction are implied by this description of the feature-contour system: (i) Spatial frequency preprocessing: Feature-contour signals arise as the outputs of several distinct on-center off-surround networks with different receptive field sizes, or spatial scales. (ii) Diffusive filling-in: The feature-contour signals
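Boundary-gated diffusion of this kind can be sketched in one dimension; the constants below are hypothetical choices for illustration, not the parameters of the model in Appendix A:

```python
import numpy as np

n = 60
x = np.zeros(n)                     # activities of the syncytial compartments
boundary = np.zeros(n - 1)          # one gate per inter-cell membrane
boundary[29] = 1.0                  # a boundary contour between cells 29 and 30
d = 1.0 / (1.0 + 200.0 * boundary)  # gating: the boundary signal raises membrane
                                    # resistance, cutting the diffusion constant

for _ in range(2000):
    x[10] = 1.0                              # persistent feature-contour input
    flux = d * np.diff(x)                    # diffusive flow across each membrane
    # Each cell gains the inflow from its right membrane minus the outflow
    # through its left membrane (zero flux at the two ends of the array).
    x += 0.2 * (np.concatenate([flux, [0.0]]) - np.concatenate([[0.0], flux]))

# Activity has filled in up to the boundary but barely crossed it:
# x[25] is large, x[35] remains small.
```

Because the gate only lowers the diffusion coefficient, filling-in is contained rather than absolutely blocked, which matches the description of the boundary contour as a resistive barrier rather than an impermeable wall.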
within each spatial scale then cause activity to spread across the scale cell's syncytium. This filling-in process has its own diffusive bandwidth. (iii) Figural boundaries: The boundary-contour signals define the limits of featural filling-in. Boundary contours are sensitive to the configuration of all edges in a scene, rather than to any single receptive field size. The interplay of these three types of spatial interaction will be essential in our explanations of brightness data.

Figure 3. A monocular brightness and color stage (MBC): Monocular feature-contour signals activate cell compartments that permit rapid lateral diffusion of activity, or potential, across their boundaries, except at the boundaries that receive boundary-contour signals from the BCS stage of Figure 4. Consequently, the feature-contour signals are smoothed except at boundaries that are synthesized within the BCS stage.

5. Macrocircuit of Processing Stages
Figure 4 describes a macrocircuit of processing stages into which the microstages of the boundary-contour system and feature-contour system can be embedded. The processes described by this macrocircuit are capable of synthesizing global properties of depth, brightness, and form information from monocularly and binocularly viewed patterns (Grossberg, 1983a, 1984). Table 1 lists the full names of the abbreviated macrocircuit stages, as well as the neural structures that seem most likely to execute analogous processes.
Figure 4. A macrocircuit of processing stages: Table 1 lists the functional names of the abbreviated stages and indicates a neural interpretation of these stages. Boundary-contour formation is assumed to occur within the BCS stage. Its output signals to the monocular MBCL and MBCR stages define boundaries within which feature-contour signals from MPL or MPR can trigger the spreading, or diffusion, of featural quality.
Each monocular preprocessing stage MPL and MPR can generate inputs to a boundary-contour system and a feature-contour system. The pathway MPL → BCS carries inputs to the left-monocular boundary-contour system. The pathway MPL → MBCL carries inputs to the left-monocular feature-contour system. Only after all the microstages of scale-specific, orientation-specific, contrast-specific, competitive, and cooperative interactions (Section 3) take place within the BCS stage does this stage give rise to boundary-contour signals BCS → MBCL that act as barriers to the diffusive filling-in triggered by MPL → MBCL feature-contour signals (Section 4). Thus, the divergence of the pathways MPL → MBCL and MPL → BCS allows the boundary-contour system and the feature-contour system to undergo significant processing according to different rules before their signals recombine within the cell syncytia.
6. FIRE: Resonant Lifting of Preperceptual Data into a Form-in-Depth Percept

The activity patterns generated by feature-boundary interactions at the monocular brightness and color stages MBCL and MBCR must undergo further processing before they can be perceived. This property is analogous to the fact that a contoured monocular image is not always perceived. It can, for example, be suppressed by a discordant image to the other eye during binocular rivalry. Only activity patterns at the binocular percept (BP) stage of Figure 4 are perceived. Signals from stage MBCL and/or stage MBCR that are capable of activating the BP stage are said to “lift” the preprocessed monocular patterns into the perceptual domain (Cohen and Grossberg, 1984; Grossberg, 1983b). We use the word “lift” instead of a word like “search” because the process occurs directly via a single parallel processing step, rather than by some type of serial algorithm. This lifting process works as follows. Monocular arrays of cells in MBCL and MBCR send topographically organized pathways to BP and receive topographically organized pathways from BP. A monocular activity pattern across MBCL can elicit output signals in the MBCL → BP pathway only from positions that are near contours, or edges, of the MBCL activity pattern (Figure 5). Contours of an MBCL pattern must not be confused with edges of an external scene. They are due to boundary-contour signals in the BCS → MBCL pathway, which themselves are the result of a great deal of preprocessing. Thus, no contour signals are initially elicited from the MBCL stage to the BP stage at positions within the interiors of filled-in regions. Similar remarks hold for contour signals from the MBCR stage to the BP stage. Pairs of contour signals from MBCL and MBCR that correspond to similar perceptual locations are binocularly matched at the BP stage. If both contour signals overlap sufficiently, then they can form a fused binocular contour within the BP stage.
If their positions mismatch by a larger amount, then both contours can mutually inhibit each other, or the stronger contour can suppress the weaker contour. If their positions are even more disparate, then a pair, or “double image,” of contours can be activated at the BP stage. These possibilities are due to the fact that contour signals from MBCL and MBCR to BP possess an excitatory peak surrounded by a pair of inhibitory troughs. Under conditions of monocular viewing, the contour signals from (say) MBCL to BP are always registered, or “self-matched,” at BP because no contours exist from MBCR that are capable of suppressing them. Contours at the BP stage that survive this binocular matching process can send topographic contour signals back to MBCL and MBCR along the feedback pathways (Figure 5). Remarkably, feedback exchange of such local contour signals can trigger a rapid filling-in reaction across thousands of cells. This filling-in reaction is due to the form of the contour signals that are fed back from BP to MBCL and MBCR. These signals also possess an excitatory peak surrounded by a pair of inhibitory troughs. The inhibitory troughs cause local nonuniformities in the activity pattern near the original MBCL or MBCR contour. These local nonuniformities are seen by the MBCL → BP
Figure 5. Binocular representation of MBC patterns at the BP stage: Each MBCL and MBCR activity pattern is filtered in such a way that its contours generate topographically organized inputs to the BP stage. At the BP stage, these contour signals undergo a process of binocular matching. This matching process takes place simultaneously across several on-center off-surround networks, each with a different spatial interaction bandwidth. Contours capable of matching at the BP stage send feedback signals to their respective MBCL or MBCR patterns. Closing this feedback loop of local edge signals initiates the rapid spreading of a standing wave that resonantly “lifts” a binocular representation of the matched monocular patterns into the BP stage. This standing wave, or filling-in resonant exchange (FIRE), spreads until it hits the first binocular mismatch within its spatial scale. The ensemble of all resonant standing waves across the multiple spatial scales of the BP constitutes the network percept. If all MBCL or MBCR contour inputs are suppressed by binocular matching at a spatial scale of the BP stage, then their respective monocular activity patterns cannot be lifted into resonant activity within this BP spatial scale. The BP spatial scale selectively resonates with some, but not all, monocular patterns within the MBCL and MBCR stages.
and MBCR → BP pathways as new contiguous contours, which can thus send signals to BP. In this way, a matched contour at BP can trigger a standing wave of activity that can rapidly spread, or fill-in, across BP until it hits the first pair of mismatched contours. Such a mismatch creates a barrier to filling-in. As a result of this filling-in process across BP, the activities at interior positions of filled-in regions of MBCL and MBCR can be lifted into perception within BP. Although such an interior cell in MBCL sends topographic signals to BP, these signals are not topographically related to MPL in a simple way, due to syncytial filling-in within MBCL. The properties of the resonant filling-in reaction imply that MBCL or MBCR activity patterns that do not emit any contour signals to BP cannot enter perception. Activity patterns, all of whose contour signals are inhibited within BP due to binocular mismatch, also cannot enter perception. Only activity patterns that lie between a contour match and its nearest contour mismatch can enter perception. Such a filling-in reaction, unlike diffusive filling-in (Section 4), is a type of nonlinear resonance phenomenon, which we call a “filling-in resonant exchange” (FIRE). In the full theory, multiple networks within MBCL and MBCR that are sensitive to different spatial frequencies and disparities are topographically matched within multiple networks of BP. The ensemble of all such resonant standing waves constitutes the network's percept. Cohen and Grossberg (1984) and Grossberg (1983b) describe how these ensembles encode global aspects of depth, brightness, and form information. In this article, we show that these ensembles also mimic data about Fechner's paradox, binocular brightness summation, and binocular brightness averaging (Sections 13-15). The fact that a single process exhibits all of these properties enhances the plausibility of the rules whereby FIRE contours are computed and matched within BP.
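The claim that contour signals arise only near pattern edges, and possess an excitatory peak flanked by inhibitory troughs, can be sketched with a difference-of-Gaussians (DoG) filter. The following is a hypothetical illustration, not the model's actual equations; the kernel widths and function names are invented for the example.

```python
import numpy as np

def dog_kernel(size=21, sigma_c=1.5, sigma_s=4.0):
    """Difference-of-Gaussians: narrow excitatory center minus broad
    inhibitory surround, balanced to give zero response to uniform input."""
    x = np.arange(size) - size // 2
    center = np.exp(-x**2 / (2 * sigma_c**2)) / (sigma_c * np.sqrt(2 * np.pi))
    surround = np.exp(-x**2 / (2 * sigma_s**2)) / (sigma_s * np.sqrt(2 * np.pi))
    k = center - surround
    return k - k.mean()          # enforce zero total weight

def contour_signals(pattern):
    """Rectified DoG output: nonzero only near contours of the pattern."""
    out = np.convolve(pattern, dog_kernel(), mode="same")
    return np.maximum(out, 0.0)

# A filled-in monocular pattern: two uniform interiors with one edge at 50.
pattern = np.concatenate([np.full(50, 0.2), np.full(50, 0.8)])
signals = contour_signals(pattern)
# Deep interiors elicit no contour signal; only positions near the edge do.
```

Because the kernel has zero total weight, the uniform interiors of filled-in regions are silent, consistent with the statement that no contour signals are elicited from interior positions.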
The standing waves in the BP stage may themselves be further transformed, say by a local smoothing operation. This type of refinement does not alter our discussion of binocular brightness data; hence, it will not be further discussed.

7. Binocular Rivalry, Stabilized Images, and the Ganzfeld
The following qualitative properties of the FIRE process illustrate how binocular rivalry and the fading of ganzfelds and stabilized images can occur within the network of Figure 4. Suppose that, due to binocular matching of perpendicular orientations, as in Section 3d, some left-monocular boundary contours are suppressed within the BCS stage. Then these boundary contours cannot send boundary contour signals to the corresponding region of stage MBCL. Featural activity thus quickly diffuses across the network positions corresponding to these suppressed contours (Gerrits and Vendrik, 1970). Consequently, no contour output signals can be emitted from these positions within the MBCL stage to the BP stage. No edge matches within the BP stage can occur at these positions, so no effective feedback signals are returned to the MBCL stage at these positions to lift the corresponding monocular subdomain into perception. Thus, the subdomains whose boundary contours are suppressed within the BCS stage are not perceived. As soon as these boundary contours win the BCS binocular competition, their subdomain contours can again rapidly support the resonant lifting of the subdomain activity pattern into perception at the BP stage. During binocular rivalry, an interaction between rapidly competing short-term memory traces and slowly habituating transmitter gates can cause oscillatory switching between left and right BCS contours (Grossberg, 1980, 1983a). The same argument shows that a subdomain is not perceived if its boundary edges are suppressed by binocular rivalry within the BCS stage or by image stabilization, or if they simply do not exist, as in a ganzfeld.
8. The Interplay of Controlled and Automatic Processes
The most significant technical insights that our theory introduces concern the manner in which local computations can rapidly generate global context-sensitive representations via hierarchically organized networks whose individual stages undergo parallel processing. Using these insights, one can also begin to understand how internally generated “cognitive” feature-contour signals or “cognitive” boundary-contour signals can modify the global representations generated within the network of Figure 4 (Gregory, 1966; Grossberg, 1980). Indeed, the network does not know which of its contour signals are generated internally and which are generated externally. One can also now begin to understand how state-dependent nonspecific changes in sensitivity at the various network stages (e.g., attentional shifts) can modify the network's global representations. For example, the contrast sensitivity of feature-contour signals can change as a function of background input intensity or internal nonspecific arousal (Grossberg, 1983b, Sections 24-28). The balance between direct feature-contour signals and diffusive filling-in signals can thus be altered by changes in input luminance or arousal parameters, and can thereby influence how well filling-in can overcome feature-contour contrast effects during the Craik-O'Brien illusion (Section 9). Once such internally or externally controlled factors are specified, however, the network automatically generates its global representations using the intrinsic structure of its circuitry. In all aspects of our theoretical work, controlled and automatic factors participate in an integrated network design (Grossberg, 1982a), rather than forming two computationally disjoint serial and parallel subsystems, as Schneider and Shiffrin (1977) have suggested.
Even the complementary attentional and orienting subsystems that have been hypothesized to regulate the stability and plasticity of long-term memory encoding processes in response to expected and unexpected events (Grossberg, 1975, 1982a, 1982b) both utilize parallel mechanisms that are not well captured by the controlled versus automatic processing dichotomy.

9. Craik-O'Brien Luminance Profiles and Multiple Step Illusions
Arend et al. (1971) have studied the perceived brightness of a variety of luminance profiles. The construction of these profiles was suggested by the seminal article of O'Brien (1958). Each of the luminance profiles was produced by placing appropriately cut sectors of black and white paper on a disk. The disk was rotated at a rate much faster than that required for flicker fusion. The luminances thereby generated were then independently calibrated. The subjects were asked to describe the relative brightness distribution by describing the locations and directions of all brightness changes, and by ordering the brightnesses of regions that appeared uniform. Ordinal, rather than absolute, brightness differences were thereby determined. One of their important results is schematized in Figure 6. Figure 6a describes a luminance profile in which two Craik-O'Brien luminance cusps are joined to a uniform background luminance. The luminances to the left and to the right of the cusps are equal, and the average luminance across the cusps equals the background luminance. Figure 6b shows that this luminance profile is perceived as (approximate) steps of increasing brightness. In particular, the perceived brightnesses of the left and right backgrounds are significantly different, despite the fact that their luminances are equal. This type of result led Arend et al. (1971, p. 369) to conclude that “the brightness information generated by moving contours is difference information only, and the absolute information hypothesis is rejected.” In other words, the nonuniform luminances between successive edges are discounted, and only the luminance differences of the edges determine the percept. Similar concepts were developed by Land (1977). This conclusion does not explain how the luminance differences at the edges are computed, or how the edges determine the subjective appearance of the perceptual domains that exist between the edges. The incomplete nature of the conclusions does not,
Figure 6. (a) A one-dimensional slice across a two-dimensional Craik-O'Brien luminance profile. The background luminances at the left and right sides of the profile are equal. (b) This luminance profile appears like a series of two (approximate) steps of increasing brightness.
however, limit their usefulness as a working hypothesis. This hypothesis must, however, be tempered by the fact that it is not universally true. For example, the hypothesis does not explain illusory brightness differences that can exist along illusory contours that cross regions of uniform luminance (Kanizsa, 1974; Kaufman, 1974; Kennedy, 1979). It does not explain how Craik-O'Brien filling-in can improve or deteriorate as the balance between background illumination and edge contrast is varied (Heggelund and Krekling, 1976; van den Brink and Keemink, 1976). It does not explain why a strong Craik-O'Brien effect is seen when a vertical computer-generated luminance cusp on a uniform background is enclosed by a black border that touches the two ends of the cusp, yet vanishes completely when the black border is removed and the cusp is viewed within a uniform background on all sides (Todorović, 1983). It does not explain why, in response to five cusps rather than two, subjects may see a flattened percept rather than five rising steps (Coren, 1983). The present theory suggests an explanation of all these properties. The illusory brightness properties are discussed in Grossberg (1984) and Grossberg and Mingolla (1985b). The remaining issues are clarified below. Figure 7 describes the results of a computer simulation of the two-step brightness illusion that is described in Figure 6. The networks of differential equations on which the simulation is based are summarized in Appendix A. Figure 7 depicts the equilibrium solutions to which these networks of differential equations rapidly converge. All of the simulation results reported herein are equilibrium solutions of such networks. These networks define one-dimensional arrays of cells due to the one-dimensional symmetry in the luminance profiles. Figure 7a describes the input pattern to the network.
The double cusps are surrounded by a uniform luminance level that is Gaussianly smoothed at its edges to minimize spurious edge effects. Figure 7b shows that each of the two luminance cusps in the input pattern generates a narrow boundary-contour signal. Each boundary-contour signal causes a reduction in the rate of diffusion across the membranes of its target cells at the MBCL or MBCR stage. A reduced rate of diffusion prevents the lateral spread of featural activity across the membranes of the affected cells. A reduced diffusion rate thereby dynamically generates boundary contours within the cell syncytium (Figure 3). Successive boundary contours determine the spatial domains within which featural activity can spread. The feature-contour process attenuates the background luminance of the input pattern and computes the relative contrasts of the cusps. It does this by letting the individual inputs interact within a shunting on-center off-surround network (Grossberg, 1983b). Such a network is defined in Appendix A, equation (1). The resultant feature-contour activity pattern is an input pattern to a cell syncytium. The boundary-contour signals from the BCS stage also contribute to this input pattern. Boundary-contour signals generate feature-contour signals as well as boundary-contour signals because they increase cell membrane resistances in order to decrease the cells' diffusion constants, as described in Section 4b. Due to this effect on cell-membrane resistances, boundary-contour signals are a source of inhibitory feature-contour signals. These inhibitory signals act on a narrower spatial scale than the feature-contour signals from the MPL and MPR stages. The total feature-contour input pattern received by MBCL is the sum of the feature-contour patterns from the MPL and BCS stages. This total feature-contour input pattern is depicted in Figure 7c.
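The background-attenuating, contrast-extracting behavior of a shunting on-center off-surround network can be sketched at equilibrium. This is a hypothetical illustration with invented kernel widths and parameters, not the Appendix A, equation (1), network itself; it uses the standard equilibrium solution of the shunting dynamics.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    x = np.arange(size) - size // 2
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def shunting_equilibrium(I, A=1.0, B=1.0, D=1.0, sigma_c=1.0, sigma_s=5.0, size=31):
    """Equilibrium of dx_i/dt = -A*x_i + (B - x_i)*C_i - (x_i + D)*E_i,
    where C_i (on-center) and E_i (off-surround) are Gaussian-weighted
    sums of the inputs; solving dx_i/dt = 0 gives the ratio below."""
    C = np.convolve(I, gaussian_kernel(size, sigma_c), mode="same")
    E = np.convolve(I, gaussian_kernel(size, sigma_s), mode="same")
    return (B * C - D * E) / (A + C + E)

# A luminance cusp on a uniform background, at two overall luminance levels.
pos = np.arange(200)
cusp = 0.5 * np.exp(-((pos - 100) / 5.0) ** 2)
results = {scale: shunting_equilibrium(scale * (1.0 + cusp)) for scale in (1.0, 10.0)}
# With B = D, a uniform background is driven to zero activity (discounted),
# while the cusp's response grows only weakly as luminance scales tenfold.
```

The shunting denominator A + C + E normalizes activity, so the cusp's relative contrast, not its absolute luminance, dominates the feature-contour pattern.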
(The flanks of this pattern were artificially extended to the left and to the right to avoid spurious boundary effects and to simulate the output when the input pattern is placed on an indefinitely large field.) When the feature-contour input pattern of Figure 7c is allowed to diffuse within the perceptual domains defined by the boundary-contour pattern of Figure 7b, the step-like activity pattern of Figure 7d is the result. Figure 8 simulates a luminance profile with five cusps, using the same equations and parameters that generated Figure 7. The activity pattern in Figure 8d is much flatter than one might expect from the step-like pattern in Figure 7d. Coren (1983)
[Figure 7 (Two-Step Illusion): four panels showing (a) the input pattern, (b) the boundary contour pattern, (c) the feature contour pattern, and (d) the monocular brightness pattern, each plotted against position.]
Figure 7. Simulation of the two-step illusion: (a) Input luminance pattern. (b) The pattern of diffusion coefficients that is induced by boundary contours. This pattern determines the limits of featural spreading across the cell syncytium. The two luminance cusps in (a) determine a pair of boundary contours at which the diffusion coefficients are small in (b). (c) The feature-contour pattern induced by (a). The background luminance is attenuated, and the relative contrasts of the luminance cusps are accentuated. (d) When pattern (c) diffuses within the syncytial domains determined by (b), a series of two approximate steps of activity results.
found a similar result with this type of stimulus. Figure 8 suggests that the result of Coren (1983), which he attributes to cognitive factors, may be partially explained by feature-contour and boundary-contour interactions due to a single spatial scale. Such a single-scale reaction does not, however, exhaust even the noncognitive monocular interactions that are hypothesized to occur within our theory. The existence of multiple spatial scales has been justified from several points of view (Graham, 1981; Graham and Nachmias, 1971; Grossberg, 1983b; Kaufman, 1974; Kulikowski, 1978). The influence of these multiple scale reactions is also suggested by some displays of Arend et al. (1971). One such display is redrawn in Figure 9. The transformation of the cusp in Figure 9a into the step in Figure 9b and the computation of the relative contrast of the increments on their background are easy for the single-scale network that simulates Figures 7 and 8. This network cannot, however, generate the same brightness on both sides of the increments in Figure 9b, because the boundary-contour signals due to the increments prevent the feature-contour signals due to the cusps from diffusing across the increments. Thus, to a single-scale network, the left and right distal brightnesses appear more equal than the brightnesses on both sides of the cusps. This difficulty is partially overcome when multiple spatial scales (viz., separate shunting on-center off-surround networks with different intercellular interaction coefficients) process the same input pattern, and the perceived brightness is derived from the average of all the resultant activity patterns across their respective syncytia. In this setting, a low-frequency spatial scale may generate a boundary contour in response to the cusp, but not in response to the increments (Grossberg, 1983b). The monocular brightness pattern generated by such a scale is thus a single step centered at the position of the cusp.
When this step is averaged with the monocular brightness pattern of a high-spatial-frequency scale, the difference between proximal and distal background brightness estimates becomes small relative to the difference between step and background brightness. This explanation of Figure 9 may be testable by selectively adapting out the high- or low-spatial-frequency scales. The action of low-spatial-frequency scales can also contribute to the flattening of the perceived brightnesses induced by a five-cusp display. Five cusps activate a broader network domain than do two cusps of equal size. Low-spatial-frequency scales that do not significantly react to two cusps may generate a blob-like reaction to five cusps. When such a reaction is averaged in with the already flattened high-spatial-frequency reaction, an even flatter percept can result.
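The diffusive filling-in that converts feature-contour inputs into flat, step-like brightness domains can be sketched in miniature. This is a hypothetical one-dimensional illustration with invented parameters, not the Appendix A network: diffusion coefficients between neighboring cells are collapsed at boundary-contour positions, so featural activity equilibrates within, but not across, syncytial compartments.

```python
import numpy as np

def fill_in(feature, boundary, D=2.0, decay=0.001, dt=0.2, steps=25000):
    """Relax dS_i/dt = -decay*S_i + feature_i + gated nearest-neighbor
    diffusion toward equilibrium; boundary contours act as barriers."""
    Dplus = np.full(len(feature) - 1, D)            # coupling between i and i+1
    Dplus[boundary[:-1] | boundary[1:]] = D * 1e-4  # near-zero across boundaries
    S = np.zeros_like(feature)
    for _ in range(steps):
        flux = Dplus * (S[1:] - S[:-1])             # diffusive flow to cell i
        dS = -decay * S + feature
        dS[:-1] += flux
        dS[1:] -= flux
        S = S + dt * dS                             # explicit Euler step
    return S

# Boundary contours at cells 33 and 66 carve three compartments; two narrow
# feature-contour inputs of different strengths fill in to two flat steps.
feature = np.zeros(100)
feature[40], feature[70] = 0.01, 0.02
boundary = np.zeros(100, dtype=bool)
boundary[[33, 66]] = True
S = fill_in(feature, boundary)
# Each compartment settles near a uniform level set by its own inputs,
# yielding a step-like monocular brightness pattern.
```

Because diffusion is fast relative to decay within a compartment, each compartment approximately averages the feature-contour input it receives, which is the flattening behavior invoked for the five-cusp display.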
10. Smoothly Varying Luminance Contours versus Steps of Luminance Change

Bergström (1966, 1967a, 1967b) has collected data that restrict the generality of the conclusion that sharp edges control the perception of brightness. In those experiments, he compared the relative brightness of several luminance displays. Some of the displays possessed no sharp luminance edges within their interiors. Other displays did possess sharp luminance edges. Bergström used a variant of the rotating prism method to construct two-dimensional luminance distributions in which the luminance changed in the horizontal direction but was constant in each narrow vertical strip. The horizontal changes in two such luminance distributions are shown in Figure 10. Figure 10a depicts a luminance profile wherein the luminance continuously decreases from left to right. Bergström constructed this profile to quantitatively test the theory of Mach (1866) that attributes brightness changes to the second derivative d²L(x)/dx² with respect to the spatial variable x of the luminance profile L(x) (see Ratliff, 1965). Mach (1866) concluded that, if two adjacent points x₁ and x₃ have similar luminances [L(x₁) ≈ L(x₃)], then the point x₃ at which the second derivative is negative {[d²L(x₃)/dx²] < 0} looks brighter than the point x₁ at which the second derivative is positive {[d²L(x₁)/dx²] > 0}, and that a transition between a darker and
[Figure 8 (Five-Step Illusion): four panels showing (a) the input pattern, (b) the boundary contour pattern, (c) the feature contour pattern, and (d) the monocular brightness pattern, each plotted against position.]
Figure 8. Simulation of the five-step illusion: The main difference between Figures 7b and 8b is that Figure 8b contains six syncytial domains whereas Figure 7b contains only three. Each domain averages only the part of the feature-contour pattern that it receives. The result in Figure 8d is a much flatter pattern than one might expect from Figure 7d.
Figure 9. The luminance profile in (a) generates the brightness profile in (b). (Redrawn with permission from Arend, Buehler, and Lockhead, 1971.)
Figure 10. Two luminance profiles studied by Bergström. Position x₃ of (a) looks brighter than position x₃′ of (b). Also, position x₃ looks brighter than position x₁ in (a), and position x₃′ looks somewhat brighter than position x₁′ in (b). These data challenge the hypothesis that sharp edges determine the level of brightness. They also challenge the hypothesis that a sum of spatial-frequency-filtered patterns determines the level of brightness.
a lighter percept occurs at the intervening inflection point x₂ {[d²L(x₂)/dx²] = 0}. In Figure 11a, as Mach would predict, the position x₃ to the right of x₂ looks brighter than the position x₁ to the left of x₂. Figure 11a describes the results of a magnitude-estimation procedure that was used to determine the brightnesses of different positions along the luminance profile. For details of this procedure, Bergström's original articles should be consulted. Figure 11a challenges the hypothesis that brightness perception depends exclusively upon difference estimates at sharp luminance edges. No edge exists at the inflection point x₂, yet a significant brightness difference is generated around position x₂. Moreover, the brightness difference inverts the luminance gradient, since x₁ is more luminous than x₃, yet x₃ looks brighter than x₁. One might attempt to escape this problem by claiming that, although the luminance profile in Figure 10a contains no manifest edges, the luminance changes sufficiently rapidly across space to be edge-like with respect to some spatial scale. This hypothesis collapses when the luminance profile of Figure 10b is considered. The luminance profile of Figure 10b is constructed from the luminance profile of Figure 10a as follows. The luminance in each rectangle of Figure 10b is the average luminance taken across the corresponding positions of Figure 10a. Unlike Figure 10a, however, Figure 10b possesses several sharp edges. If the hypothesis of Arend et al. (1971) is taken at face value, then position x₃′ of Figure 10b should look brighter than position x₃ of Figure 10a. This is because mean luminances are preserved between the two figures and Figure 10b has sharp edges, whereas Figure 10a has no interior edges whatsoever. A magnitude estimation procedure yielded the data shown in Figure 11b. Comparison of Figures 11a and 11b shows that position x₃′ looks darker, not brighter, than position x₃.
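Mach's second-derivative criterion can be checked numerically. The profile below is a hypothetical smooth "terrace" between two luminance drops (an invented stand-in for Bergström's calibrated stimulus): the left shoulder x₁ has positive curvature, the right shoulder x₃ has negative curvature, and the two points have similar luminances, so Mach's rule predicts x₃ looks brighter even though L(x₁) > L(x₃).

```python
import numpy as np

# Hypothetical smooth luminance profile: two gentle drops joined by a terrace.
x = np.linspace(-4.0, 4.0, 801)
L = 1.0 - 0.25 * (np.tanh(x + 2.0) + np.tanh(x - 2.0))

# Numeric second derivative d2L/dx2.
d2L = np.gradient(np.gradient(L, x), x)

def curvature_at(xq):
    """Second derivative at the grid point nearest xq."""
    return d2L[np.argmin(np.abs(x - xq))]

# x1 = -1 (end of the first drop): d2L/dx2 > 0; x3 = +1 (shoulder of the
# next drop): d2L/dx2 < 0; x2 = 0 is the inflection point between them.
```

The sign pattern, positive curvature at x₁, zero at the inflection x₂, negative at x₃, with L(x₁) slightly exceeding L(x₃), is exactly the configuration in which Mach's rule predicts the brightness inversion that Bergström measured.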
These data cast doubt on the conclusion of Arend et al. (1971), just as the data of Arend et al. cast doubt on the conclusion of Mach (1866). Our numerical simulations reproduce the main effects summarized in Figures 10 and 11. The critical feature of these simulations is that the two luminance profiles in Figure 10 generate different boundary-contour patterns as well as different feature-contour patterns. The luminance profile of Figure 12a generates boundary contours only at the exterior edges of the luminance profile (Figure 12b). By contrast, each interior step of luminance of Figure 13a also generates a boundary contour (Figure 13b). Thus, the monocular perceptual domains that are defined by the two luminance profiles are entirely different. In this sense, the two profiles induce, and are processed by, different perceptual spaces. These different parsings of the cell syncytium not only define different numbers of spatial domains, but also different sizes of domains over which featural quality can spread. In addition, the smooth versus sharp contours in the two luminance profiles generate different feature-contour patterns (Figures 12c and 13c). The differences between the feature-contour patterns do not, however, explain Bergström's data, because the feature-contour pattern at position x₃′ in Figure 13c is more intense than the feature-contour pattern at position x₃ in Figure 12c. This is the result one would expect from classical analyses of contrast enhancement. By contrast, when these feature-contour patterns are diffusively averaged between their respective boundary contours, the result of Bergström is obtained. The monocular brightness pattern at position x₃ in Figure 12d is more intense than the monocular brightness pattern at position x₃′ in Figure 13d. We therefore concur with Bergström in his claim that these results are paradoxical from the viewpoint of classical notions of brightness contrast.
We know of no other brightness theory that can provide a principled explanation of both the Arend et al. (1971) data and the Bergström (1966, 1967a, 1967b) data. In particular, both types of data cause difficulties for the Fourier theory of visual pattern perception as an adequate framework with which to explain brightness percepts. For example, the low-frequency spatial components in the two Bergström profiles in Figure 10 are similar, whereas the step-like contour in Figure 10b also contains high-
[Figure 11: graphs of brightness magnitude estimates (ordinate) plotted against spatial positions 1-12 (abscissa), for panels (a) and (b).]
Figure 11. Magnitude estimates of brightness in response to the luminance profiles of Figure 10. (Redrawn from Bergström, 1966.)
[Figure 12 (Bergström Brightness Paradox, 1): four panels showing (a) the input pattern, (b) the boundary contour pattern, (c) the feature contour pattern, and (d) the monocular brightness pattern, each plotted against position.]
Figure 12. Simulation of a Bergström (1966) brightness experiment. The input pattern (a) generates boundary contours in (b) only around the luminance profile as a whole. By contrast, the input pattern in Figure 13a generates boundary contours around each step in luminance (Figure 13b). The input patterns in Figures 12a and 13a thus determine different syncytial domains within which featural filling-in can occur. The input patterns in Figures 12a and 13a also determine different feature-contour patterns (Figures 12c and 13c). The feature-contour pattern in Figure 13c is more active at position x₃′ than is the feature-contour pattern of Figure 12c at the corresponding position x₃. (See Figure 10 for definitions of x₃ and x₃′.) The feature-contour pattern of Figure 12c diffuses within the syncytial domains of Figure 12b, and the feature-contour pattern of Figure 13c diffuses within the syncytial domains of Figure 13b. The resultant brightness pattern of Figure 12d is more active at position x₃ than is the brightness pattern of Figure 13d at position x₃′. This feature-to-brightness reversal is due to the fact that the boundary-contour patterns and feature-contour patterns induced by the two input patterns are different. The global structuring of each feature-contour pattern within each syncytial domain determines the ultimate brightness pattern.
[Figure 13 (Bergström Brightness Paradox, 2): four panels showing (a) the input pattern, (b) the boundary contour pattern, (c) the feature contour pattern, and (d) the monocular brightness pattern, each plotted against position.]
Figure 13. Simulation of a Bergström (1966) brightness experiment. See caption of Figure 12.
spatial-frequency components. One might therefore expect position x₃ to look brighter than position x₃′, whereas the reverse is true. In a similar fashion, when a rectangular luminance profile is Fourier analysed using the human modulation transfer function (MTF), it comes out looking like a Craik-O'Brien contour (Cornsweet, 1970). A Craik-O'Brien contour also comes out looking like a Craik-O'Brien contour. Our explanation, by contrast, shows why both Craik-O'Brien contours and rectangular contours look rectangular. Some advocates of the Fourier approach have responded to this embarrassment by saying that what the outputs of the MTF look like is irrelevant, since only the identity of these outputs is of interest. This argument has carefully selected its data. It does not deal with the problem that the interior and exterior activities of a Craik-O'Brien contour are the same and differ from the activities of the cusp boundary, whereas the interior and boundary activities of a rectangle are the same and differ from the activities of the rectangle exterior. The problem is not merely one of equivalence between two patterns. It is also one of the recognition of an individual pattern. These difficulties of the Fourier approach do not imply that multiple spatial scales are unimportant during visual pattern perception. Multiple scale processing does not, however, provide a complete explanation. Moreover, the feature-contour processing within each scale needs to use shunting interactions, rather than the additive interactions of the Fourier theory, in order to extract the relative contrasts of the feature-contour pattern (Appendices A and B; Grossberg, 1983b).

11. The Asymmetry Between Brightness Contrast and Darkness Contrast
In the absence of a theory to explain the Arend et al. and Bergström data, one might have hoped that a more classical explanation of these effects could be discovered by a more sophisticated analysis of the role of contrast enhancement in brightness perception. In both paradigms, it might at first seem that contrast enhancement around edges or inflection points could explain both phenomena in a unified way, if only a proper definition of contrast enhancement could be found. The following data of Hamada (1980) indicate, in a particularly vivid way, that more than a proper definition of contrast enhancement is needed to explain brightness data. Figure 14 depicts three luminance profiles. In Figure 14a, a uniform background luminance is depicted. (Although the background luminance is uniform, it is not, strictly speaking, a ganzfeld, since it is viewed within a perceptual frame.) In Figure 14b, a brighter Craik-O'Brien luminance profile is added to the background luminance. In Figure 14c, a darker Craik-O'Brien luminance profile is subtracted from the background luminance. The purity of this paradigm derives from the facts that its two Craik-O'Brien displays are equally long and that the background luminance is constant in all the displays. Thus, brightening and darkening effects can be studied uncontaminated by other variables. The classical theory of brightness contrast predicts that the more luminous edges in Figure 14b will look brighter than the background in Figure 14a and that, due to brightness contrast, the background around the more luminous edges in Figure 14b will look darker than the uniform pattern in Figure 14a. This is, in fact, what Hamada found. The classical theory of brightness contrast also predicts that the less luminous edges in Figure 14c will look darker than the background in Figure 14a and that, due to brightness contrast, the background around the less luminous edges in Figure 14c will look brighter than the background in Figure 14a.
Hamada (1980) found, contrary to classical theory, that both the dark edges and the background in Figure 14c look darker than the background in Figure 14a. These data are paradoxical because they show that brighter edges and darker edges are, in some sense, asymmetrically processed, with brighter edges eliciting less paradoxical brightness effects than darker edges. Hamada (1976, 1978) developed a multistage mathematical model to attempt to deal with his challenging data. This model is remarkable for its clear recognition that
Chapter 4
Figure 14. The luminance contours studied by Hamada (1980). All backgrounds in (a)-(c) have the same luminance.
a "nonopponent" type of brightness processing is needed in addition to a contrastive, or edge-extracting, type of brightness processing. Hamada did not define boundary contours or diffusive filling-in between these contours, but his important model should nonetheless be better known. Figures 15 and 16 depict a simulation of the Hamada data using our theory. As desired, classical brightness contrast occurs in Figure 15, whereas a nonclassical darkening of both figure and ground occurs in Figure 16. The dual action of signals from the BCS stage to the MBC stages, as boundary-contour signals and as inhibitory feature-contour signals, contributes to this result in our simulations. All of the results described up to now consider how activity patterns are generated within the MBCL and MBCR stages. In order to be perceived, these patterns must activate the BP stage. In the experiments already discussed, the transfer of patterned activity to the BP stage does not introduce any serious constraints on the brightness properties of the FIRE model, because all the experiments that we have thus far considered present the same image to both eyes. The experiments that we now discuss present different combinations of images to the two eyes. Thus they directly probe the process whereby monocular brightness domains interact to generate a binocular brightness percept.

12. Simulations of FIRE
In the remaining sections of the article, we describe computer simulations using the simplest version of the FIRE process and the same model parameters that were used in Cohen and Grossberg (1984). We show that this model qualitatively reproduces the main properties of Fechner's paradox (Levelt, 1965), binocular brightness summation and averaging (Blake, Sloane, and Fox, 1981; Curtis and Rule, 1980), and a parametric brightness study of Cogan (1982) on the effects of rivalry, nonrivalry suppression, fusion, and contour-free images. Thus, although the model was not constructed to simulate these brightness data and does not incorporate many known theoretical refinements, it performs in a manner that closely resembles difficult data. We believe that these simulations place the following quotation from a recent publication into a new perspective: "The emerging picture is not simple....Levelt's theory...works for binocular brightness perception, but not for sensitivity to a contrast probe....It seems unlikely that any single mechanism can account for binocular interactions....The theory of binocular vision is essentially incomplete" (Cogan, 1982, pp.14-15). Before reporting simulations of brightness experiments, we review a few basic properties of this FIRE model. All the simulations were done on one-dimensional arrays of cells, for simplicity. All the simulations use pairs of input patterns that have zero disparity with respect to each other. The reaction of a single spatial scale to these input patterns will be reported. Effects using nonzero disparities and multiple spatial scales are described in Cohen and Grossberg (1984) and Grossberg (1983b). The input patterns should be interpreted as monocular patterns across MBCL and MBCR, rather than the scenic images themselves.

(a) Insensitivity to functional ganzfelds. In Figure 17, two identical input patterns exist at the MBCL and MBCR stages (Figure 17a). Both input patterns are generated by putting a rectangular pattern through a Gaussian filter. This smoothing operation was sufficient to prevent the pathways MBCL → BP and MBCR → BP in Figure 4 from detecting suprathreshold contours in the input patterns. We call an input pattern that has no contours that are detectable by these pathways a "functional ganzfeld." The FIRE process does not lift functional ganzfelds at any input intensity. The simulation illustrates that the BP stage is insensitive to input patterns that include no boundary contours detectable by its filtering operations.

(b) Figure-ground synthesis: Ratio scale and power law. Figure 18 describes the FIRE reaction that is triggered when a rectangular input pattern is superimposed
[Figure 15: Hamada brightness paradox (1); panels (a)-(d) show the input pattern, boundary contour pattern, feature contour pattern, and monocular brightness pattern.]
Figure 15. Simulation of the Hamada (1980) brightness experiment. The dotted line in (d) describes the brightness level of the background in Figure 14a. Classical contrast enhancement is obtained in (d).
[Figure 16: Hamada brightness paradox (2); panels (a)-(d) show the input pattern, boundary contour pattern, feature contour pattern, and monocular brightness pattern.]
Figure 16. Simulation of the Hamada (1980) brightness experiment. The dotted line in (d) describes the brightness level of the background in Figure 14a. Both background and cusp of (a) look darker than this reference level.
[Figure 17 panels: (a) left input; (b) left field; (c) match field; (d) filtered match field.]
Figure 17. Matched ganzfelds in (a) cause no suprathreshold reaction at the BP stage at any input intensity. Left input in (a) denotes the input pattern that is delivered to both the MPL stage and the MPR stage. Left field in (b) denotes the activity pattern that is elicited at both the MBCL stage and the MBCR stage. Match field in (c) denotes the activity pattern that is elicited at the BP stage. Filtered match field in (d) denotes the feedback signal pattern that is emitted from the BP stage to both the MBCL and MBCR stages. No feedback is elicited because the BP stage does not generate any suprathreshold activities in response to the edgeless input pattern, or functional ganzfeld, in (a). (Reprinted from Cohen and Grossberg, 1984.)
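The sense in which a heavily smoothed pattern becomes "edgeless" for a fixed-bandwidth contour detector can be sketched numerically. This is an illustrative one-dimensional calculation, not the article's simulation; the blur width and contrast threshold below are arbitrary choices:

```python
import numpy as np

def gaussian_smooth(signal, sigma):
    """Blur a 1-D pattern with a normalized Gaussian kernel."""
    radius = int(4 * sigma)
    kernel = np.exp(-0.5 * (np.arange(-radius, radius + 1) / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode="same")

rect = np.zeros(400)
rect[150:250] = 1.0                      # rectangular input pattern

sharp_contrast = np.max(np.abs(np.diff(rect)))
blurred = gaussian_smooth(rect, sigma=25.0)
soft_contrast = np.max(np.abs(np.diff(blurred)))

# A contour detector with a fixed contrast threshold lying between these two
# gradient magnitudes "sees" the rectangle's edges but treats the blurred
# version as a functional ganzfeld.
threshold = 0.1
rect_has_contours = sharp_contrast > threshold
blurred_has_contours = soft_contrast > threshold
```

For a detector keyed to relative contrast, as the shunting networks of Appendix A are, scaling the blurred pattern by any positive factor scales its gradients and its mean together, so the pattern stays contour-free at every input intensity.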
upon a functional ganzfeld. Such an input pattern idealizes a region of rapid change in activity with respect to the network's filter bandwidth. The entire input pattern is now resonantly lifted into the BP stage. Although the BP stage is totally insensitive to the functional ganzfeld taken in isolation, the sharp edges of the rectangle trigger a resonant reaction that structures, indeed defines, the functional ganzfeld as a "ground" for the rectangular "figure." Instead of being treated as merely formless energy, the functional ganzfeld now energizes a standing wave that propagates from the rectangle edges to the perimeter of the pattern. Due to the rectangle's edges, the network is now exquisitely sensitive to the ratio of rectangle-to-ganzfeld input activities. When the entire input pattern is parametrically increased by a common multiple, FIRE activity levels obey a power law (Figure 19). Both the intensity of the standing wave corresponding to the rectangle and the intensity of the standing wave corresponding to the functional ganzfeld grow as a power of their corresponding input intensities. In these simulations, the power approximates 0.8. This power is not built into the network. It is a collective property of the network as a whole.
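The way such an exponent is read off simulation data can be sketched with a log-log fit. The activity values below are synthetic stand-ins generated to follow an exact 0.8 power law, not outputs of the FIRE model:

```python
import numpy as np

# A power law y = k * x**p plots as a straight line of slope p in log-log
# coordinates: log y = log k + p * log x.
scale_factors = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # common input multiples
peak_activity = 0.35 * scale_factors ** 0.8            # synthetic peak activities

slope, log_k = np.polyfit(np.log(scale_factors),
                          np.log(peak_activity), 1)
# slope recovers the exponent; exp(log_k) recovers the prefactor
```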
13. Fechner's Paradox

The simplest version of Fechner's paradox notes that the world does not look half as bright when one eye is closed. In fact, suppose that a scene is viewed through both eyes but that one eye sees it through a neutral density filter (Hering, 1964). When the filtered eye is entirely occluded, the scene looks brighter and more vivid despite the fact that less total light reaches the two eyes. Another version of this paradox is described in Figure 20 (Cogan, 1982; Levelt, 1965). Figures 20a-20c depict three pairs of images. One image is viewed by each eye. In Figure 20a, an uncontoured image is viewed by the left eye and a black disk on a uniform background is viewed by the right eye. In Figure 20b, black disks are viewed by both eyes. In Figure 20c, the interior of the left disk is white. Given appropriate boundary conditions, the binocular percept generated by the images in Figure 20a looks about as dark as the binocular percept generated by the images in Figure 20b, despite the fact that a bright region in Figure 20a replaces a black disk in Figure 20b. Figure 20c, by contrast, looks much brighter. The input patterns that we used to simulate these images are displayed in Figures 20d-20g. These input patterns represent the images in only a crude way, because the input patterns correspond to activity patterns across stages MBCL and MBCR rather than to the images themselves. It is uncertain how, for example, to choose the activity of the ganzfeld in Figure 20a, since this activity depends upon the total configuration of contours throughout the field of view. We therefore carried out a simulation using a zero-intensity ganzfeld, as well as a simulation with a functional ganzfeld whose intensity equals the background intensity of the input pattern to the other MBC stage. The actual functional ganzfeld intensity should lie somewhere in between these two values. Other approximations of this type are used throughout the simulations.
The numbers listed in Figures 20d-20g describe the total rectified output from the FIRE cells that subtend the region corresponding to the black disk. As in the data, Figure 20g generates a much larger output than Figure 20f. Figure 20g also generates a larger output than either Figure 20d or Figure 20e. If the actual functional ganzfeld level is small due to the absence of nearby feature-contour signals, then Figures 20a and 20b will look equally bright to the network. A comparison between Figures 20d and 20e provides the first evidence of a remarkable formal property of this version of the FIRE model. Although the FIRE process is totally insensitive to a pair of functional ganzfelds, when a functional ganzfeld is binocularly paired with a contoured figure, it can influence the overall intensity of binocular activity within the BP stage.
[Figure 18 panels: (a) left input; (b) left field; (c) match field; (d) filtered match field.]
Figure 18. Figure on ganzfeld: The pair of sharp contours within the input pattern of (a) sensitizes the BP stage to the activity levels of both the rectangle figure and the ground, despite the total insensitivity of the BP to a functional ganzfeld in Figure 17 at any input intensity. Binocular matching of the contours at the BP stage lifts a standing wave representation (c) of figure and ground into the BP stage. (Reprinted from Cohen and Grossberg, 1984.)
Figure 19. Power-law processing of figure and ground activity levels at the BP stage as the intensities of the input pattern (in the insert) are proportionally increased by a common factor. The abscissa (scaled input) measures this common factor. The ordinate (scaled activity) measures the peaks of BP activity at the rectangle (circles) and the ground (squares). (Reprinted from Cohen and Grossberg, 1984.)
[Figure 20: panels (a)-(c) binocular image pairs; panels (d)-(g) input-pattern pairs with FIRE output levels printed above.]
Figure 20. Fechner's paradox: In human experiments based on the images in (a)-(c), the left image is viewed by the left eye while the right image is viewed by the right eye. The simulations used the pairs of patterns in (d)-(g) as left and right input patterns to the FIRE process. Ganzfelds of different intensity are used as left input patterns to the FIRE model in (d) and (e). The FIRE activity levels corresponding to the dark region positions in the right input pattern are printed above. In vivo, the ganzfeld intensity of a large field will be close to zero at the MBCL stage, as in (e). In (f), identical left and right input patterns elicit zero FIRE activity in the dark region. In (g), the dark region generates the largest FIRE activity of the series.
14. Binocular Brightness Averaging and Summation
Experimental studies of the conditions under which Fechner's paradox holds have led to the conclusion that "binocular brightness should represent a compromise between the monocular brightnesses when the luminances presented to the two eyes are grossly different and...it should exceed either monocular brightness when their luminances approach equality" (Curtis and Rule, 1980, p.264). Curtis and Rule point out that "these results were in conflict with the prediction of averaging models, such as those of Engel (1969) and Levelt (1965)" (p.263). They introduce a vector model to partially overcome this difficulty. Although the averaging and vector models are useful in organizing brightness data, they do not provide a mechanistic explanation of these data. Figure 21 describes an example of binocular averaging by the FIRE process. In Figures 21a and 21b, one of the input patterns is a functional ganzfeld. The other input pattern is an increment or a decrement on a background. Since these monocular input figures differ greatly in intensity, binocular brightness averaging should occur when they are binocularly presented. In Figure 21c, the increment input pattern is paired with a decrement input pattern. The binocular figural activity in Figure 21c almost exactly equals the average of the monocular figural activities in Figures 21a and 21b. In Figure 21d, a pair of increment input patterns is presented to the model. A comparison of Figure 21d with Figure 21a shows that the binocular figural activity in Figure 21d is significantly greater than the monocular figural activity in Figure 21a; that is, binocular brightness summation has occurred. Using these inputs, the binocular brightness is about 25% greater than the monocular brightness. Using a fully attenuated (zero) ganzfeld in one eye during the monocular condition, the binocular brightness is about 63% greater than the monocular brightness.
Nonlinear binocular summation in which the binocular percept is less than twice as bright as the monocular percept has been described by a number of investigators (Blake et al., 1981; Cogan et al., 1982; Legge and Rubin, 1981).
15. Simulation of a Parametric Binocular Brightness Study

Cogan (1982) has analysed binocular brightness interactions by studying a subject's sensitivity to monocular test flashes while the subject binocularly views different pairs of monocular images. Cogan used the method of limits to obtain psychometric curves, and then rank-ordered paradigms in terms of subject sensitivity. Figure 22 describes the five conditions that Cogan studied in his Experiment 2. In each condition, a brief disk-shaped flash was presented to the left eye. The flash area was chosen to fit exactly within the circular contour in the left image. Figure 23 describes the sensitivity of six different subjects to each of the five pairs of images. Mean detection sensitivity tended to rank-order the images from Figure 22a to Figure 22e in order of decreasing sensitivity. Mean sensitivity to the images of Figure 22a was significantly greater than to the other images over a wide range of probe contrasts (ΔI/I). Mean sensitivity to Figure 22e was significantly less than to the other images over a wide range of probe contrasts. Mean sensitivities to the other images grouped more closely together. The rank orderings of individual observers did not, moreover, always decrease from Figure 22b to Figure 22d. Simulations using the simplest one-dimensional input versions of the images in Figure 22 tended to reproduce this pattern of results. Figure 24 illustrates the input pairs that were used. Each input pair represents the flash condition. The increment above the background level on the left input pattern represents the flash. To estimate flash visibility, we first computed the figural activity within the flash area that was generated before the flash, then computed the figural activity within the flash area that was generated during the flash, and then subtracted the before-flash activity from the during-flash activity.
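This before/during bookkeeping reduces to one subtraction per condition. Using the activity values listed in Table 2, the reported differences can be checked directly:

```python
# Flash-induced activity difference for conditions 24a-24e:
# increment = (activity during flash) - (activity before flash).
before = [0.000000, 0.003809, 0.003865, 0.003873, 0.003332]
during = [0.015740, 0.016165, 0.015689, 0.008904, 0.010407]

increments = [round(d - b, 6) for d, b in zip(during, before)]
# Condition 24a yields the largest flash-induced difference.
```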
The before-flash activities, during-flash activities, and flash-induced activity differences are listed in Table 2. As in the data, the images in Figure 24a generated the
[Figure 21: input pairs (a)-(d).]
Figure 21. Brightness averaging and summation: The input pair in (c) generates a FIRE activity at their center that is approximately the average of the FIRE activities generated at the center positions of the input pairs in (a) and (b). The input pair in (d) generates a FIRE activity that is greater at its center than the FIRE activity generated at the center of the input pattern in (a).
[Figure 22: flash display pairs (A)-(E).]
Figure 22. Flash displays used by Cogan (1982) to study binocular brightness processing. The dashed lines denote the regions that receive monocular flashes. Cogan tested the sensitivity of subjects to flashes in the designated positions.
[Figure 23: bar charts of flash sensitivity for conditions (A)-(E), one panel per subject.]
Figure 23. Sensitivity of individual subjects to the flash displays described in Figure 22. Each bar height corresponds to a subject's sensitivity to a particular flash display. The labels (A)-(E) refer to the flash conditions in Figure 22. (Redrawn from Cogan, 1982.)
TABLE 2
Simulations of Brightness Experiment in Figure 24

Figure   Activity of Inner Region   Activity of Inner Region   Activity
         Before Flash               During Flash               Increment
24a      .000000                    .015740                    .015740
24b      .003809                    .016165                    .012356
24c      .003865                    .015689                    .011824
24d      .003873                    .008904                    .005031
24e      .003332                    .010407                    .007075
largest increment, those in Figure 24d generated the smallest increment, and the other three increments were clustered more closely together. The main discrepancy with the data is due to the fact that sensitivity to the images in Figure 24e slightly exceeds that to the images in Figure 24d. This type of order inversion also occurred, however, in two out of six of Cogan's subjects (Cogan, 1982, Figure 6, p.11). Considering the simplicity of the model and its input patterns, and the number of qualitatively correct effects that it can generate, this seems to be a relatively minor point. Figures 25 and 26 display the resonant patterns that are generated by four pairs of distinct monocular images. Figures 25a and 25b illustrate the computer experiment in Figure 24d. In Figure 25a, a ganzfeld is paired with a black disk. Although the network is insensitive to a pair of ganzfelds (Figure 17), the black disk at the MBCR stage structures and energizes the ganzfeld at the MBCL stage via the BP stage. The structured ganzfeld, in turn, modifies the activity level at the BP stage. The monocular MBCR pattern remains inactive at cells that receive the black input, despite the fact that the binocular FIRE pattern is active within the corresponding region due to the influence of the ganzfeld. Figure 25b adds an increment, or flash, to the ganzfeld in Figure 25a. Again, the MBCR pattern remains inactive at cells that receive the black input. A comparison of Figures 25a and 25b shows, however, that the BP stage is sensitive to the activity levels of both monocular patterns within this region. In fact, the activity level in this region of the BP stage in Figure 25b averages the corresponding monocular activities. Figures 26a and 26b illustrate the computer experiment in Figure 24e. Note that the dark contour in the MBCL input pattern is detected by the resonance.
This contour monocularly energizes the binocular resonance in Figure 26a much more than does the ganzfeld of equal background intensity in Figure 25a.

16. Concluding Remarks
The results in this article suggest that several of the most basic concepts of visual theory need to be refined. For example, the simulations described above include at least three mechanistically different concepts of contour: boundary contour, feature contour, and FIRE contour. They also include two different types of filling-in: diffusive filling-in, which is monocular, and resonant filling-in, which is binocular. Although these concepts add some complexity to the visual modeling literature, they have begun to simplify and unify the explanation of a large body of visual data. The same concepts have been used, for example, to suggest explanations of data concerning monocular and binocular rivalry, illusory figures, fading of stabilized images, neon color spreading, illusory complementary color induction, the Land retinex demonstrations, nonlinear multiple scale interactions, and various global interactions between depth, lightness, length, and form properties (Cohen and Grossberg, 1984; Grossberg, 1980, 1983b, 1985; Grossberg and Mingolla, 1985b). Moreover, the concepts seem to have more than a formal
[Figure 24: flash-profile input pairs (a)-(e).]
Figure 24. Flash profiles used to simulate the Cogan (1982) experiment. These profiles depict the displays while the flash is on. Before the flash, all the increments above the background luminance are absent.
[Figure 25a panels: ganzfeld plus dark figure; left field, right field, match field, filtered match field.]
Figure 25. FIRE patterns generated by the flash displays of Figure 24d: (a) Before flash FIRE pattern.
[Figure 25b panels: increment plus dark figure; left field, right field, match field, filtered match field.]
Figure 25 (continued). (b) During-flash FIRE pattern. Left field denotes the MBCL-stage activity pattern. Right field denotes the MBCR-stage activity pattern. Match field denotes the BP-stage activity pattern. Filtered match field denotes the feedback signal pattern emitted by the BP stage to the MBCL and MBCR stages.
[Figure 26a panels: boundary plus dark figure; left field, right field, match field, filtered match field.]
Figure 26. FIRE patterns generated by the flash display of Figure 24e. (a) Before flash FIRE pattern. (b) During flash FIRE pattern.
[Figure 26b panels: increment in boundary; left field, right field, match field, filtered match field.]
Figure 26 (continued).
existence. Boundary contour and feature contour interactions can, for example, be interpreted in terms of recent physiological data concerning the orientation-sensitive but color-insensitive hypercolumn system in the striate cortex and the orientation-insensitive but color-sensitive blob system in the striate cortex (Table 1; Grossberg and Mingolla, 1985b). It remains to be seen just how far these new concepts and mechanisms can be developed for the further explanation and prediction of complex visual phenomena.
APPENDIX A

This appendix describes the neural network that was used to simulate feature-contour and boundary-contour interactions. The following simulations were done on one-dimensional fields of cells. The input pattern (I_1, I_2, ..., I_n) is transformed into the output pattern (z_1, z_2, ..., z_n) via the following equations.

Feature Contours

The input pattern (I_1, I_2, ..., I_n) is transformed into feature contours via a feedforward on-center off-surround network of cells undergoing shunting, or membrane equation, interactions. The activity, or potential, x_i of the ith cell in a feature-contour pattern is

(d/dt) x_i = -A x_i + (B - x_i) Σ_{k=1..n} I_k C_ki - (x_i + D) Σ_{k=1..n} I_k E_ki.  (1)

Both the on-center coefficients C_ki and the off-surround coefficients E_ki are Gaussian functions of intercellular distance |k - i|. System (1) is assumed to react more quickly than the diffusive filling-in process. Hence, we assume that each x_i is in approximate equilibrium with respect to the input pattern. At equilibrium, (d/dt) x_i = 0 and

x_i = [Σ_k I_k (B C_ki - D E_ki)] / [A + Σ_k I_k (C_ki + E_ki)].  (2)

The activity pattern (x_1, x_2, ..., x_n) is sensitive to both the amount and the direction of contrast in edges of the input pattern (Grossberg, 1983b). These feature-contour activities generate inputs of the form

F_i = δ x_i / (1 + ε S_i)  (3)
to the diffusive filling-in process. The inhibitory term S_i is defined by the boundary-contour process in equation (6) below.

Boundary Contours

The input pattern (I_1, I_2, ..., I_n) also activates the boundary-contour process, which we represent as a feedforward on-center off-surround network undergoing shunting interactions. This simplified view of the boundary-contour process is permissible in the present simulations because the simulations, being one-dimensional and monocular, do not need to account for orientational tuning, competition, or binocular matching. Since the simulations do not probe the dynamics of illusory contour formation, the boundary completion process can also be ignored. (See Grossberg and Mingolla, 1985b, for these extensions.) As in equation (2), the input pattern rapidly gives rise to an activity pattern

y_i = [Σ_k I_k (B̃ C̃_ki - D̃ Ẽ_ki)] / [A + Σ_k I_k (C̃_ki + Ẽ_ki)],  (4)

where C̃_ki and Ẽ_ki are Gaussian functions of intercellular distance. It is assumed that these boundary contours are narrower than the feature contours defined by equation (2).

The activity pattern (y_1, y_2, ..., y_n) is sensitive to both the direction and amount of contrast in the input pattern (I_1, I_2, ..., I_n). The sensitivity to the direction of contrast is progressively eliminated by the following operations. Let the output signals from BCS to MBC that are elicited by activity y_i equal f(y_i), where f(w) is a sigmoid signal of the rectified part of y_i; viz.,

f(w) = H ([w]^+)^γ / (h^γ + ([w]^+)^γ),  (5)

where [w]^+ = max(w, 0) and γ > 1. The output signals f(y_i) are spatially distributed before influencing cell compartments of the cell syncytium. The total signal to the ith cell compartment due to the activity pattern (y_1, y_2, ..., y_n) is

S_i = Σ_k f(y_k) G_ki,  (6)

where G_ik is a Gaussian function of intercellular distance. This Gaussian falloff is less narrow than that of the boundary contours in equation (4), but more narrow than that of the feature contours in equation (2).
Diffusive Filling-In

The activity z_i of the ith cellular compartment of the cellular syncytium obeys the nonlinear diffusion equation

(d/dt) z_i = -M z_i + (z_{i+1} - z_i) J_{i+1,i} + (z_{i-1} - z_i) J_{i-1,i} + F_i,  (7)

where the input F_i is defined by equation (3). The diffusion coefficients J_{i+1,i} and J_{i-1,i} are determined by boundary-contour signals according to equations of the form

J_{i+1,i} = λ / (1 + κ[S_{i+1} - Γ]^+ + κ[S_i - Γ]^+)  (8)

and

J_{i-1,i} = λ / (1 + κ[S_{i-1} - Γ]^+ + κ[S_i - Γ]^+),  (9)

where the threshold Γ > 0. Thus, an increase in the boundary signal S_i decreases both diffusion coefficients J_{i+1,i} and J_{i-1,i}. The feature-contour signal F_i also decreases when the boundary signal S_i increases. In equations (3), (8), and (9), the inhibitory effects of boundary signals S_i on cell compartment membranes act via shunting inhibition. A positive threshold Γ occurs in equations (8) and (9), but not in equation (3), because, we assume, the intercompartmental membranes that regulate diffusion of activity between compartments are less accessible to the signals S_i than are the exterior surface membranes that bound the cellular syncytium.

The following parameters were used in all the simulations with equations (1)-(9). We let

C_ik = C exp{-ln 2 [(i - k)/μ]²},
E_ik = E exp{-ln 2 [(i - k)/ν]²},

and

G_ik = G exp{-ln 2 [(i - k)/ω]²},

where A = 1, B = 96, C = .0625, E = .0625, G = 2349, H = 1, M = 1, h = 35.5546, δ = 50, b = 12.5828, ε = 50, p = 4 × 10¹¹, γ = 5, δ̃ = 1 × 10¹¹, Γ̃ = -5, and τ = 1.5.
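The stages of equations (1)-(9) can be caricatured in a few dozen lines. The kernel widths, gains, signal-function constants, and the forward-Euler relaxation below are illustrative stand-ins chosen for a stable toy run, not the article's parameters or integration scheme:

```python
import numpy as np

def gaussian_kernel(n, width, gain):
    """K_ik = gain * exp(-ln 2 * ((i - k)/width)**2)."""
    idx = np.arange(n)
    return gain * np.exp(-np.log(2) * ((idx[:, None] - idx[None, :]) / width) ** 2)

def shunting_equilibrium(I, C, E, A, B, D):
    """Equilibrium of the feedforward shunting network, as in equation (2)."""
    num = (I[:, None] * (B * C - D * E)).sum(axis=0)
    den = A + (I[:, None] * (C + E)).sum(axis=0)
    return num / den

n = 200
I = np.zeros(n)
I[60:140] = 1.0                                   # rectangular luminance step

# Feature contours use broad kernels; boundary contours use narrow kernels.
x = shunting_equilibrium(I, gaussian_kernel(n, 8, 0.1),
                         gaussian_kernel(n, 16, 0.1), 1.0, 90.0, 45.0)
y = shunting_equilibrium(I, gaussian_kernel(n, 2, 0.1),
                         gaussian_kernel(n, 4, 0.1), 1.0, 90.0, 45.0)

# Boundary signal: sigmoid of rectified boundary activity, Gaussian-spread
# at an intermediate width (cf. equations (5)-(6)).
def f(w):
    w = np.maximum(w, 0.0)
    return w ** 5 / (0.5 ** 5 + w ** 5)

S = gaussian_kernel(n, 4, 1.0) @ f(y)

# Feature input shunted by boundary signals; diffusion gates close where
# boundary signals exceed a threshold (cf. equations (3) and (7)-(9)).
F = 50.0 * x / (1.0 + 50.0 * S)
lam, kappa, thresh = 100.0, 10.0, 0.05
gate = lam / (1.0 + kappa * np.maximum(S - thresh, 0.0))
J = np.minimum(gate[:-1], gate[1:])               # inter-compartment coefficients

z = np.zeros(n)
dt = 0.002
for _ in range(3000):                             # relax toward steady state
    flux = J * np.diff(z)
    dz = -z + F
    dz[:-1] += flux
    dz[1:] -= flux
    z += dt * dz
```

The run exhibits the qualitative behavior the text describes: boundary activity peaks at the rectangle's edges, closing the diffusion gates there, while the gates stay open in the uniform interior so activity can spread between the contours.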
The remaining parameters vary from simulation to simulation and will be given for each figure by title.

(1) Two-Step and Five-Step Illusions (Figures 7 and 8)

n = 3500, D = 9.12, α = 1, μ = 10, ν = 100, λ = 1.926 × 10⁶, κ = 1.926 × 10⁷, ω = 1, Γ = 1.7.

The inputs. The inputs consist of a step input filtered through a Gaussian kernel, with a set of ramping functions superimposed on the output of the filter. The steady-state level of the output is extended outward to simulate viewing the central portion of an indefinitely large field with ramps superimposed. In the two-cusp pattern of Figure 7a,

I_k = Φ((k - 350)/100) - Φ((k - 3150)/100) + R_k,

where Φ denotes the Gaussian distribution function and

R_k =
  0                                             if 0 ≤ k ≤ 1450
  tan[.9(π/2)(1451 - k)/149] / tan[.9π/2]       if 1451 ≤ k ≤ 1600
  tan[.9(π/2)(1750.5 - k)/149.5] / tan[.9π/2]   if 1601 ≤ k ≤ 1900
  tan[.9(π/2)(2050 - k)/149] / tan[.9π/2]       if 1901 ≤ k ≤ 2050
  0                                             if 2051 ≤ k ≤ 3500.
In the five-cusp pattern of Figure 8a,

I_k = Φ((k - 350)/100) - Φ((k - 3150)/100) + R_k,

where R_k is now a piecewise tangent-shaped cusp function of the same type, vanishing for 0 ≤ k ≤ 1000 and 2801 ≤ k ≤ 3500, with pieces defined over the intervals 1001 ≤ k ≤ 1450, 1451 ≤ k ≤ 1750, 1751 ≤ k ≤ 2050, 2051 ≤ k ≤ 2350, 2351 ≤ k ≤ 2650, and 2651 ≤ k ≤ 2800.
(2) Bergström Brightness Paradox (Figures 12 and 13)

n = 700, D = 12, α = 4, μ = 10, ν = 60, λ = 1000, κ = 116.7, ω = 10, Γ = 2.6.

The inputs. The inputs represent Bergström's experimental inputs. They consist of two normal curves splined together at ±3 standard deviations away from the 50th-percentile point and placed on a pedestal. In Figure 12a, I_k vanishes for 0 ≤ k ≤ 149 and 550 ≤ k ≤ 700, with the splined normal curves, in the arguments (k - 149)/100 and (k - 349)/100, spanning 150 ≤ k ≤ 549.

In Figure 13a, the inputs were chosen to be four steps of length 100 whose values equal the corresponding average values in the previous set of runs. Thus,

I_k =
  0     if 0 ≤ k ≤ 149
  .94   if 150 ≤ k ≤ 249
  .60   if 250 ≤ k ≤ 349
  .54   if 350 ≤ k ≤ 449
  .26   if 450 ≤ k ≤ 549
  0     if 550 ≤ k ≤ 700.
(3) Hamada Brightness Paradox (Figures 15 and 16): n = 700, D = 14.4, α = 1, β = 1, ν = 6, λ = 6000, κ = 500, ω = 1, τ = 1.6.

The inputs. The inputs were chosen to simulate Hamada's experimental displays. They consist of a step input filtered through a Gaussian filter, with a parabolic segment superimposed on the output of this filter. Specifically, in Figure 15a, let
I_k = .3Φ((k − 351)/100) − .3Φ((k − 1350)/100) + P_k,

where P_k = 0 for 0 ≤ k ≤ 750 and for 951 ≤ k ≤ 1700, and where, for 751 ≤ k ≤ 950, P_k is a parabolic segment proportional to 1 − [(k − 850.5)/99.5]².

In Figure 16a, I_k = .3Φ((k − 351)/100) − .3Φ((k − 1350)/100) − P_k.
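The Hamada inputs combine a Gaussian-smoothed step with an added (Figure 15a) or subtracted (Figure 16a) parabolic bump. A sketch follows; the parabola's center (850.5) and half-width (99.5) match the printed expression, but its amplitude p, and the identification of Φ with the unit-variance normal distribution function, are illustrative assumptions:

```python
import math

def Phi(x):
    """Standard normal cumulative distribution (a Gaussian-filtered unit step)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def hamada_input(k, p=0.05, sign=+1):
    """Smoothed luminance step of height .3 between k = 351 and k = 1350,
    plus (sign=+1, Figure 15a) or minus (sign=-1, Figure 16a) a parabolic
    segment supported on 751 <= k <= 950.  The amplitude p is an
    illustrative placeholder, not a value from the source."""
    step = 0.3 * Phi((k - 351) / 100.0) - 0.3 * Phi((k - 1350) / 100.0)
    if 751 <= k <= 950:
        P = p * (1.0 - ((k - 850.5) / 99.5) ** 2)
    else:
        P = 0.0
    return step + sign * P
```

On the plateau the profile sits at .3, and the parabola perturbs it only over the 200-cell window centered at k = 850.5, which is what makes the display a "paradox" probe: the mean luminance changes while the edges do not.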
APPENDIX B

The following system of equations defines a binocular interaction capable of supporting a filling-in resonant exchange, or FIRE (Cohen and Grossberg, 1984; Grossberg, 1983b).
Monocular Representations
and
Binocular-to-Monocular Feedback (Filtered Match Field)
Equation (10) describes the response of the activities x_iL, i = 1, 2, ..., n, in the left monocular representation. Each x_iL obeys a shunting equation in which both the excitatory interaction coefficients C_ki and the inhibitory interaction coefficients E_ki are Gaussian functions of the distance between v_k and v_i. Two types of simulations have been studied:

Additive inputs: All I_kL are chosen equal. The terms J_kL register the input pattern and summate with the binocular-to-monocular feedback functions Z_k. This is the form of the system that appears in the simulations reported herein.
Shunting inputs: All J_kL are chosen equal. The terms I_kL register the input pattern. The binocular-to-monocular feedback functions Z_k modulate the system's sensitivity to the inputs I_kL in the form of gain control signals.

Equation (11), for the activities x_iR, i = 1, 2, ..., n, in the right monocular representation, has a similar interpretation. Note that the same binocular-to-monocular feedback functions Z_k are fed back to the left and right monocular representations. The binocular matching stage (12) obeys an algebraic equation rather than a differential equation, due to the simplifying assumption that the differential equation for the matching activities y_i reacts quickly to the monocular signals f(x_kL) and f(x_kR). Consequently, y_i is always in approximate equilibrium with respect to its input signals. This equilibrium equation says that the monocular inputs f(x_kL) and f(x_kR) are added before being matched by the shunting interaction. The signal functions f(w) are chosen to be sigmoid functions of activity w. The excitatory interaction coefficients C̃_ki and inhibitory interaction coefficients Ẽ_ki are chosen to be Gaussian functions of distance. The spatial decay rates of C_ki, C̃_ki, and C*_ki are chosen equal. The spatial decay rates of E_ki, Ẽ_ki, and E*_ki are chosen equal. The on-center is chosen narrower than the off-surround.

After the monocular signal patterns (f(x_1L), f(x_2L), ..., f(x_nL)) and (f(x_1R), f(x_2R), ..., f(x_nR)) are matched at the binocular matching stage, the binocular activities y_k are rectified by the output signal function g(y_k), which is typically chosen to be a sigmoid function of y_k. Then these rectified output signals are distributed back to the monocular representations via competitive signals (15) with the same spatial bandwidths as are used throughout the computation. The parameters used in these simulations are exhaustively listed in Cohen and Grossberg (1984). The parameters used herein are the same as the parameters used in simulations 15-23 of Cohen and Grossberg (1984), except that in the present simulations we chose n = 200.

Inputs

The inputs are defined in terms of the functions
Fechner’s Paradox (Figure 2 0 ) : (D)Let J ~ = L N k , 1 5 k 5 200, and
(F) Let
(G) Let
and
Brightness Averaging and Summation (Figure 21): (A) Let J_kR = N_k, 1 ≤ k ≤ 200, and

J_kL = N_k if 1 ≤ k ≤ 80, N_k + .009 if 81 ≤ k ≤ 120, and N_k if 121 ≤ k ≤ 200,
and
(D) Let

J_kL = J_kR = N_k if 1 ≤ k ≤ 80, N_k + .009 if 81 ≤ k ≤ 120, and N_k if 121 ≤ k ≤ 200.
The input patterns in Figures 24d and 24e are listed below to illustrate the parameter choices used in this figure and to characterize the FIRE patterns depicted in Figure 25.

Ganzfeld Plus Dark Figure (Figure 25a): Let J_kL = N_k, 1 ≤ k ≤ 200, and

J_kR = N_k if 1 ≤ k ≤ 80, 0 if 81 ≤ k ≤ 120, and N_k if 121 ≤ k ≤ 200.
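These dichoptic patterns are simple piecewise-constant edits of the baseline N_k, whose defining formula is among the displays omitted above. A sketch, assuming purely for illustration a constant baseline value in place of N_k:

```python
def ganzfeld_plus_dark_figure(N=0.02, n=200):
    """Right-eye input for Figure 25a: baseline N_k with a dark gap over
    units 81-120.  The constant N is an illustrative stand-in for the
    unspecified function N_k."""
    return [0.0 if 81 <= k <= 120 else N for k in range(1, n + 1)]

def brightness_summation_left(N=0.02, n=200, increment=0.009):
    """Left-eye input for Figure 21(A): baseline N_k plus a .009
    increment over units 81-120."""
    return [N + increment if 81 <= k <= 120 else N
            for k in range(1, n + 1)]

J_R = ganzfeld_plus_dark_figure()
J_L = brightness_summation_left()
```

Note that list index i corresponds to cell k = i + 1, so the modified region occupies indices 80 through 119.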
Increment Plus Dark Figure (Figure 25b): Let
and
Boundary Plus Dark Figure (Figure 26a): Let

J_kL = N_k if 1 ≤ k ≤ 70, 0 if 71 ≤ k ≤ 80, N_k if 81 ≤ k ≤ 120, 0 if 121 ≤ k ≤ 130, and N_k if 131 ≤ k ≤ 200,

and
Increment in Boundary (Figure 26b): Let

J_kL = N_k if 1 ≤ k ≤ 70, 0 if 71 ≤ k ≤ 80, N_k + .009 if 81 ≤ k ≤ 120, 0 if 121 ≤ k ≤ 130, and N_k if 131 ≤ k ≤ 200,

and
REFERENCES

Arend, L.E., Buehler, J.N., and Lockhead, G.R., Difference information in brightness perception. Perception and Psychophysics, 1971, 9, 367-370.
Bergström, S.S., A paradox in the perception of luminance gradients, I. Scandinavian Journal of Psychology, 1966, 7, 209-224.
Bergström, S.S., A paradox in the perception of luminance gradients, II. Scandinavian Journal of Psychology, 1967, 8, 25-32 (a).
Bergström, S.S., A paradox in the perception of luminance gradients, III. Scandinavian Journal of Psychology, 1967, 8, 33-37 (b).
Blake, R. and Fox, R., Binocular rivalry suppression: Insensitive to spatial frequency and orientation change. Vision Research, 1974, 14, 687-692.
Blake, R., Sloane, M., and Fox, R., Further developments in binocular summation. Perception and Psychophysics, 1981, 30, 266-276.
Carpenter, G.A. and Grossberg, S., Adaptation and transmitter gating in vertebrate photoreceptors. Journal of Theoretical Neurobiology, 1981, 1, 1-42.
Cogan, A.L., Monocular sensitivity during binocular viewing. Vision Research, 1982, 22, 1-16.
Cogan, A.L., Silverman, G., and Sekuler, R., Binocular summation in detection of contrast flashes. Perception and Psychophysics, 1982, 31, 330-338.
Cohen, M.A. and Grossberg, S., Some global properties of binocular resonances: Disparity matching, filling-in, and figure-ground synthesis. In P. Dodwell and T. Caelli (Eds.), Figural synthesis. Hillsdale, NJ: Erlbaum, 1984.
Coren, S., When "filling-in" fails. Behavioral and Brain Sciences, 1983, 6, 661-662.
Cornsweet, T.N., Visual perception. New York: Academic Press, 1970.
Curtis, D.W. and Rule, S.J., Fechner's paradox reflects a nonmonotone relation between binocular brightness and luminance. Perception and Psychophysics, 1980, 27, 263-266.
Day, R.H., Neon color spreading, partially delineated borders, and the formation of illusory contours. Perception and Psychophysics, 1983, 34, 488-490.
Engel, G.R., The auto-correlation function and binocular brightness mixing. Vision Research, 1969, 9, 1111-1130.
Gellatly, A.R.H., Perception of an illusory triangle with masked inducing figure. Perception, 1980, 9, 599-602.
Gerrits, H.J.M., de Haan, B., and Vendrik, A.J.H., Experiments with retinal stabilized images: Relations between the observations and neural data. Vision Research, 1966, 6, 427-440.
Gerrits, H.J.M. and Timmermann, J.G.M.E.N., The filling-in process in patients with retinal scotomata. Vision Research, 1969, 9, 439-442.
Gerrits, H.J.M. and Vendrik, A.J.H., Simultaneous contrast, filling-in process and information processing in man's visual system. Experimental Brain Research, 1970, 11, 411-430.
Graham, N., The visual system does a crude Fourier analysis of patterns. In S. Grossberg (Ed.), Mathematical psychology and psychophysiology. Providence, RI: American Mathematical Society, 1981.
Graham, N. and Nachmias, J., Detection of grating patterns containing two spatial frequencies: A test of single-channel and multiple-channel models. Vision Research, 1971, 11, 251-259.
Gregory, R.L., Eye and brain. New York: McGraw-Hill, 1966.
Grossberg, S., A neural model of attention, reinforcement, and discrimination learning. International Review of Neurobiology, 1975, 18, 263-327.
Grossberg, S., How does a brain build a cognitive code? Psychological Review, 1980, 87, 1-51.
Grossberg, S., Studies of mind and brain: Neural principles of learning, perception, development, cognition, and motor control. Boston: Reidel Press, 1982 (a).
Grossberg, S., The processing of expected and unexpected events during conditioning and attention: A psychophysiological theory. Psychological Review, 1982, 89, 529-572 (b).
Grossberg, S., Neural substrates of binocular form perception: Filtering, matching, diffusion, and resonance. In E. Basar, H. Flohr, H. Haken, and A.J. Mandell (Eds.), Synergetics of the brain. New York: Springer-Verlag, 1983 (a).
Grossberg, S., The quantized geometry of visual space: The coherent computation of depth, form, and lightness. Behavioral and Brain Sciences, 1983, 6, 625-692 (b).
Grossberg, S., Outline of a theory of brightness, color, and form perception. In E. Degreef and J. van Buggenhaut (Eds.), Trends in mathematical psychology. Amsterdam: North-Holland, 1984.
Grossberg, S., Cortical dynamics of depth, brightness, color, and form perception: A predictive synthesis. Submitted for publication.
Grossberg, S. and Mingolla, E., Nonlinear competition, cooperation, and diffusion in the neural dynamics of visual perception. In B.D. Sleeman and R.J. Jarvis (Eds.), Proceedings of the Dundee conference on ordinary and partial differential equations, 1984. New York: Springer-Verlag, 1985 (a).
Grossberg, S. and Mingolla, E., Neural dynamics of form perception: Boundary completion, illusory figures, and neon color spreading. Psychological Review, 1985, 92, 173-211 (b).
Hamada, J., A mathematical model for brightness and contour perception. Hokkaido Report of Psychology, 1976, HRP-11-76-17.
Hamada, J., Antagonistic and non-antagonistic processes for the lightness perception. Hokkaido Behavioral Science Report, 1978, HBSR-P-4.
Hamada, J., Antagonistic and non-antagonistic processes in the lightness perception. Proceedings of the XXII international congress of psychology, Leipzig, July, 1980.
Heggelund, P. and Krekling, S., Edge dependent lightness distributions at different adaptation levels. Vision Research, 1976, 16, 493-496.
Helmholtz, H.L.F. von, Treatise on physiological optics, J.P.C. Southall (Translator and Editor). New York: Dover, 1962.
Hendrickson, A.E., Hunt, S.P., and Wu, J.-Y., Immunocytochemical localization of glutamic acid decarboxylase in monkey striate cortex. Nature, 1981, 292, 605-607.
Hering, E., Outlines of a theory of the light sense. Cambridge, MA: Harvard University Press, 1964.
Horton, J.C. and Hubel, D.H., Regular patchy distribution of cytochrome oxidase staining in primary visual cortex of macaque monkey. Nature, 1981, 292, 762-764.
Hubel, D.H. and Livingstone, M.S., Regions of poor orientation tuning coincide with patches of cytochrome oxidase staining in monkey striate cortex. Neuroscience Abstracts, 1981, 118.12.
Hubel, D.H. and Wiesel, T.N., Functional architecture of macaque monkey visual cortex. Proceedings of the Royal Society of London (B), 1977, 198, 1-59.
Kanizsa, G., Contours without gradients or cognitive contours? Italian Journal of Psychology, 1974, 1, 93-113.
Kaufman, L., Sight and mind: An introduction to visual perception. New York: Oxford University Press, 1974.
Kennedy, J.M., Illusory contours and the ends of lines. Perception, 1978, 7, 605-607.
Kennedy, J.M., Subjective contours, contrast, and assimilation. In C.F. Nodine and D.F. Fisher (Eds.), Perception and pictorial representation. New York: Praeger Press, 1979.
Kennedy, J.M., Illusory brightness and the ends of petals: Changes in brightness without aid of stratification or assimilation effects. Perception, 1981, 10, 583-585.
Kulikowski, J.J., Limit of single vision in stereopsis depends on contour sharpness. Nature, 1978, 275, 126-127.
Land, E.H., The retinex theory of color vision. Scientific American, 1977, 237, 108-128.
Legge, G.E. and Rubin, G.S., Binocular interactions in suprathreshold contrast perception. Perception and Psychophysics, 1981, 30, 49-61.
Levelt, W.J.M., On binocular rivalry. Soesterberg: Institute for Perception, 1965, RVO-TNO.
Livingstone, M.S. and Hubel, D.H., Thalamic inputs to cytochrome oxidase-rich regions in monkey visual cortex. Proceedings of the National Academy of Sciences, 1982, 79, 6098-6101.
Mach, E., Über die Wirkung der räumlichen Verteilung des Lichtreizes auf die Netzhaut. Sitzungsberichte der Akademie der Wissenschaften, Mathematisch-Naturwissenschaftliche Klasse, 1866, 52, 303-322.
O'Brien, V., Contour perception, illusion, and reality. Journal of the Optical Society of America, 1958, 48, 112-119.
Parks, T.E., Subjective figures: Some unusual concomitant brightness effects. Perception, 1980, 9, 239-241.
Parks, T.E. and Marks, W., Sharp-edged versus diffuse illusory circles: The effects of varying luminance. Perception and Psychophysics, 1983, 33, 172-176.
Petry, S., Harbeck, A., Conway, J., and Levey, J., Stimulus determinants of brightness and distinctions of subjective contours. Perception and Psychophysics, 1983, 34, 169-174.
Pritchard, R.M., Stabilized images on the retina. Scientific American, 1961, 204, 72-78.
Pritchard, R.M., Heron, W., and Hebb, D.O., Visual perception approached by the method of stabilized images. Canadian Journal of Psychology, 1960, 14, 67-77.
Ratliff, F., Mach bands: Quantitative studies on neural networks in the retina. New York: Holden-Day, 1965.
Redies, C. and Spillmann, L., The neon color effect in the Ehrenstein illusion. Perception, 1981, 10, 667-681.
Riggs, L.A., Ratliff, F., Cornsweet, J.C., and Cornsweet, T.N., The disappearance of steadily fixated visual test objects. Journal of the Optical Society of America, 1953, 43, 495-501.
Schneider, W. and Shiffrin, R.M., Controlled and automatic information processing, I: Detection, search, and attention. Psychological Review, 1977, 84, 1-66.
Todorović, D., Brightness perception and the Craik-O'Brien-Cornsweet effect. Unpublished M.A. Thesis. Storrs: University of Connecticut, 1983.
van den Brink, G. and Keemink, C.J., Luminance gradients and edge effects. Vision Research, 1976, 16, 155-159.
van Tuijl, H.F.J.M., A new visual illusion: Neonlike color spreading and complementary color induction between subjective contours. Acta Psychologica, 1975, 39, 441-445.
van Tuijl, H.F.J.M. and de Weert, C.M.M., Sensory conditions for the occurrence of the neon spreading illusion. Perception, 1979, 8, 211-215.
von der Heydt, R., Peterhans, E., and Baumgartner, G., Illusory contours and cortical neuron responses. Science, 1984, 224, 1260-1262.
Yarbus, A.L., Eye movements and vision. New York: Plenum Press, 1967.
Zeki, S., Colour coding in the cerebral cortex: The reaction of cells in monkey visual cortex to wavelengths and colours. Neuroscience, 1983, 9, 741-765 (a).
Zeki, S., Colour coding in the cerebral cortex: The responses of wavelength-selective and colour coded cells in monkey visual cortex to changes in wavelength composition. Neuroscience, 1983, 9, 767-791 (b).
Chapter 5
ADAPTATION AND TRANSMITTER GATING IN VERTEBRATE PHOTORECEPTORS

Preface

This Chapter analyses visual processing at a more microscopic and peripheral level than do the previous Chapters. Before light signals can ever benefit from the automatic gain control properties of on-center off-surround networks of neurons (Chapter 1), they must first be able to activate photoreceptors without causing the photoreceptors to saturate. If photoreceptors were easily saturated by light signals, then later adaptive stages could do little to correct this problem. How does a photoreceptor internally adapt its response as the flux of photons which excites it varies over a very wide dynamical range? Baylor, Hodgkin, and Lamb performed a classical series of parametric experiments to examine how the turtle cone can accomplish this feat, and constructed an important model of their data. Despite the complexity of this model, its fits of certain critical data were off by a factor of ten. The constellation of parametric properties in their data drew our attention to the possibility that a transmitter gating action is operative within opponent channels of a cone's cell membranes. This possibility gained further attraction from the fact that an intracellular transmitter mediates many of the photoreceptor's adaptive properties. Thus we set out to discover whether adaptation by a photoreceptor is controlled by an intracellular gated dipole circuit. In this study, we were particularly interested in how the photoreceptor's gated dipole circuitry may be specialized to deal with the enormous range of photon intensities that it can handle. The result of this study is a simple model that can quantitatively simulate the major data properties reported by Baylor, Hodgkin, and Lamb. This model replaces Baylor, Hodgkin, and Lamb's concept of a transmitter blocking process with the concept of a transmitter gating process within opponent membrane channels.
When one combines this result about photoreceptors with the results in Volume I about possible gated dipole regulation of circadian rhythms, one can begin to see, at least formally, how to transform an intracellular gated dipole circuit of photoreceptor type into a circadian pacemaker circuit. A circadian pacemaker has, for example, been reported to exist in the eye of Aplysia. This theoretical bridge from photoreceptor circuits to circadian circuits suggests a useful way to think about the light-sensitivity of circadian pacemakers. Intracellular adaptation by a gated dipole circuit generates a Weber law using a different mechanism than intercellular adaptation by a shunting on-center off-surround network of cells (Chapter 1). This comparison illustrates the need to distinguish how different mechanisms can generate similar functional properties. An intercellular Weber law is also useful in the self-organization of cognitive recognition codes (Volume I, Chapter 4). A Weber law depends nonlinearly upon its inputs, whatever its mechanism, thus illustrating again that nonlinear properties often enable brain mechanisms to accomplish important functional tasks. Chapters 2-4 discuss parallel processing within the visual system that is carried out on the gross anatomical scale of the Boundary Contour System and the Feature Contour System. This Chapter describes parallel processing on a much more microscopic scale. In order to explain how a photoreceptor can intracellularly adapt to such a wide range of photon intensities, we postulate that light activates parallel intracellular pathways: one pathway enzymatically speeds up production of the intracellular transmitter, whereas the other pathway speeds up transmitter release.
This simple idea implies a quantitative explanation of the fascinating dissociations between the photoreceptor's temporal and energetic responses which are found in the data. For example, in response to a fixed flash on a series of increasing background light levels, the time at which the photoreceptor's response peaks first decreases and then increases, whereas the size of the response always decreases. In a double flash experiment, the first flash causes an overshoot in response, but the second flash only acts to prolong the response. Such nonobvious results illustrate, once again, how rich are the dynamical properties of basic neural modules, such as gated dipoles, and how important it is to subject these modules to a searching mathematical and numerical analysis.
Journal of Theoretical Neurobiology, 1, 1-42 (1981). © 1981 Australian Scientific Press. Reprinted by permission of the publisher.
ADAPTATION AND TRANSMITTER GATING IN VERTEBRATE PHOTORECEPTORS
Gail A. Carpenter† and Stephen Grossberg‡
Abstract
A quantitative model for the transduction dynamics whereby intracellular transmitter in a vertebrate cone mediates between light input and voltage output is analysed. A basic postulate is that the transmitter acts to multiplicatively gate the effects of light before the gated signal ever influences the cone potential. This postulate does not appear in the Baylor, Hodgkin, and Lamb (BHL) model of cone dynamics. One consequence of this difference is that a single dynamic equation from our model can quantitatively fit turtle cone data better than the full BHL theory. The gating concept also permits conceptually simple explanations of many phenomena whose explanations using the BHL unblocking concept are much more complex. Predictions are suggested to further distinguish the two theories. Our transmitter laws also form a minimal model for an unbiased miniaturized transduction scheme which can be realized by a depletable transmitter. Thus our theory allows us to consider more general issues. Can one find an optimal transmitter design of which the photoreceptor transmitter is a special case? Does the cone transmitter obey laws that are shared by transmitters in other neural systems, with which the photoreceptor can be compared and contrasted to distinguish its specialized design features from its generally shared features?
1. Introduction
Abundant experimental evidence has shown that many vertebrate photoreceptors undergo large sensitivity changes during light and dark adaptation, and that receptor adaptation is a significant component of the adaptive process (Boynton and Whitten, 1970; Dowling and Ripps, 1971, 1972; Grabowski et al., 1972; Kleinschmidt, 1973; Kleinschmidt and Dowling, 1975; Norman and Werblin, 1974). Various studies also suggest that light liberates internal transmitter molecules, possibly Ca⁺⁺, which close Na⁺ channels in the plasma membrane of the photoreceptor outer segment, thereby
† Supported in part by the National Science Foundation (NSF MCS-80-04021) and the Northeastern University Research and Scholarship Development Fund.
‡ Supported in part by the National Science Foundation (NSF IST-80-00257).
decreasing the "dark current" of Na⁺ ions entering this membrane and hyperpolarizing the photoreceptor (Arden and Low, 1978; Bäckström and Hemilä, 1979). Extensive parametric experiments on turtle cones have shown the adaptive process to be highly nonlinear (Baylor and Hodgkin, 1974; Baylor et al., 1974a, 1974b). From these data, Baylor et al. (1974b) constructed an ingenious model of cone dynamics which quantitatively reproduces many data features. However, the model's voltage reactions are a factor of ten off in response to flashes on variable backgrounds and, more importantly, the timing of voltage peaks does not fit the data well. Other quantitative difficulties can also be cited. We will suggest that the quantitative difficulties of the BHL model are also qualitative, and are due to the model's omission of a major feature of cone design. The BHL model omits the basic postulate that the transmitter acts to multiplicatively gate the effects of light before the gated signal ever influences the cone potential. Without the notion of a multiplicative transmitter gate, the full BHL theory grew in a different direction than our own. We have achieved a better quantitative fit of the BHL data using a transmitter model that was introduced in 1968 (Grossberg, 1968, 1969). In fact, for key experiments we achieve a better quantitative fit using a single dynamic equation from our theory than BHL do with their full theory with many equations. These successes can be traced to the inclusion within our theory of a multiplicative transmitter gate. Our goal in this article is not merely to use this transmitter model to fit photoreceptor data. We wish also to make a general point concerning neural modeling. The BHL model, despite its many partial successes, is in a sense profoundly disturbing.
It leaves one with the impression that the photoreceptor is not merely complex, but also that its complexities describe a rather mysterious transduction scheme with properties that seem impossible to guess a priori. If this is the true situation at each photoreceptor, then what hopes can we sustain for finding understandable principles of neural organization in the large? We will derive our transmitter laws as a minimal model for an unbiased miniaturized transduction scheme that can be realized by a depletable chemical (Grossberg, 1980). Because the principles from which these laws are derived have a general significance, our theory allows us to suggest affirmative answers to the following more general questions: Can one find an optimal transmitter design of which the photoreceptor is a special case? Does the cone transmitter obey laws that are shared by transmitters in other neural systems, with which the photoreceptor can be compared and contrasted to distinguish its specialized design features from its generally shared features? A gating concept appears in the model of Hemilä (1977, 1978), which Hemilä used to explain adaptation in the rods of the frog retina. Hemilä does not, however, suggest dynamical laws for the gating process. Both the BHL theory and our theory suggest that transmitter can close Na⁺ channels. BHL call this process "blocking." It is at this point that the two theories diverge. The BHL theory invokes a process to "unblock" the blocking process. We never need such an idea. Once the unblocking concept is accepted, however, it naturally suggests a series of auxiliary hypotheses which diverge significantly from the ideas that emerge from a gating concept. Our theory also explains photoreceptor data from systems other than turtle cones, such as data from Gekko gecko rods (Kleinschmidt and Dowling, 1975).
Because gating mechanisms are also used in nonvisual transmitter systems, adaptation, overshoot, and rebound of the rod potential can be compared and contrasted with analogous phenomena in midbrain reinforcement centers (Grossberg, 1972a, 1972b, 1981a, 1981b). The Gekko gecko data can, for example, be explained by a gated dipole model which shows how slow gates acting on the signals within competing channels can elicit adaptation, overshoot, and rebound. In the rod, the dipole is due to intracellular membrane interactions; in the midbrain, it is due to intercellular network interactions. This type of insight would be impossible to achieve were our theory not derived from a general principle of neural
design. In Sections 2-13 of this article we derive the gating theory and its predictions. In Section 14 we fit the theory to photoreceptor data. In Sections 14-15 we contrast the gating theory with BHL's unblocking theory.

2. Transmitters as Gates
We start by asking the following question: What is the simplest law whereby one nerve cell could conceivably send unbiased signals to another nerve cell? The simplest law says that if a signal S passes through a given nerve cell v₁, the signal S has a proportional effect

T = SB, (1)

where B > 0, on the next nerve cell v₂. Such a law would permit unbiased transmission of signals from one cell to another. We are faced with a dilemma, however, if the signal from v₁ to v₂ is due to the release of a chemical z(t) from v₁ that activates v₂. If such a chemical transmitter is persistently released when S is large, what keeps the net signal T from getting smaller and smaller as v₁ runs out of transmitter? Some means of replenishing, or accumulating, the transmitter must exist to counterbalance its depletion due to release from v₁. Based on this discussion, we can rewrite (1) in the form
T = Sz (2)

and ask how the system can keep z replenished so that

z(t) ≈ B (3)

at all times t. This is a question about the sensitivity of v₂ to signals from v₁, since if z could decrease to very small values, even large signals S would have only a small effect on T. Equation (2) has the following interpretation. The signal S causes the transmitter z to be released at a rate T = Sz. Whenever two processes, such as S and z, are multiplied, we say that they interact by mass action, or that z gates S. Thus (2) says that z gates S to release a net signal T, and (3) says that the cell tries to replenish z to maintain the system's sensitivity to S. Data concerning the gating action of transmitters in several neural preparations have been collected by Čapek et al. (1971), Esplin and Zablocka-Esplin (1971), and Zablocka-Esplin and Esplin (1971). What is the simplest law that joins together both (2) and (3)? It is the following differential equation for the net rate of change dz/dt of z:

dz/dt = A(B − z) − Sz. (4)
Equation (4) describes the following four processes going on simultaneously.
I and II. Accumulation and Production and Feedback Inhibition. The term A(B − z) enjoys two possible interpretations, depending on whether it represents a passive accumulation process or an active production process. In the former interpretation, there exist B sites to which transmitter can be bound, z sites are bound at time t, and B − z sites are unbound. Then term A(B − z) says simply that transmitter is bound at a rate proportional to the number of unbound sites. In the latter interpretation, two processes go on simultaneously. Term AB on the right-hand side of (4) says that z is produced at a rate AB. Term −Az says that
once z is produced, it inhibits the production rate by an amount proportional to z's concentration. In biochemistry, such an inhibitory effect is called feedback inhibition by the end product of a reaction. Without feedback inhibition, the constant rate AB of production would eventually cause the cell to burst. With feedback inhibition, the net production rate is A(B − z), which causes z(t) to approach the finite amount B, as we desire by (3). The term A(B − z) thus enables the cell to accumulate a target level B of transmitter.
III and IV. Gating and Release. Term −Sz in (4) says that z is released at a rate Sz, as we desire by (2). As in (2), release of z is due to mass action activation of z by S, or to gating of S by z (Figure 1). The two equations (2) and (4) describe the simplest dynamic law that "corresponds" to the constraints (2) and (3). Equations (2) and (4) hereby begin to reconcile the two constraints of unbiased signal transmission and maintenance of sensitivity when the signals are due to release of transmitter. All later refinements of the theory describe variations on this robust design theme.
3. Intracellular Adaptation and Overshoot
Before describing these variations, let us first note that equations (2) and (4) already imply important qualitative features of photoreceptor dynamics; namely, adaptation to maintained signal levels, and overshoot in response to sudden changes of signal level. Suppose for definiteness that S(t) = S₀ for all times t ≤ t₀ and that at time t = t₀, S(t) suddenly increases to S₁. By (4), z(t) reacts to the constant level S(t) = S₀ by approaching an equilibrium value z₀. This equilibrium value is found by setting dz/dt = 0 in (4) and solving to get

z₀ = AB/(A + S₀). (5)

By (2), the net signal T₀ to v₂ at time t = t₀ is

T₀ = S₀z₀ = ABS₀/(A + S₀). (6)

Now let S(t) switch to the value S₁ > S₀. Because z(t) is slowly varying, z(t) approximately equals z₀ for some time after t = t₀. Thus the net signal to v₂ during these times is approximately equal to

S₁z₀ = ABS₁/(A + S₀). (7)

Equation (7) has the same form as a Weber law J(A + I)⁻¹. The signal S₁ is evaluated relative to the baseline S₀ just as J is evaluated relative to I. The Weber law in (7) is due to slow intracellular adaptation of the transmitter to the sustained signal level. A Weber law can also be caused by fast intercellular lateral inhibition across space, but the mechanisms underlying these two adaptive processes are entirely different (Grossberg, 1973, 1980). The capability for intracellular adaptation can be destroyed by matching the reaction rate of the transmitter to the fluctuation rate in S(t). For example, if z(t) reacts as quickly as S(t), then at all times t,

T(t) ≅ ABS(t)/(A + S(t)), (8)
Adaptation and Transmitter Gating in Vertebrate Photoreceptors
Figure 1. (a) Production, feedback inhibition, gating, and release of a transmitter z by a signal S. (b) Mass action transmitter accumulation at unoccupied sites has the same formal properties as production and feedback inhibition.
Chapter 5
Figure 2. Overshoot and habituation of the gated signal T = Sz due to a sudden increment in signal S.

no matter what values S(t) attains, so that the adaptational baseline, or memory of prior input levels, is destroyed. A basis for overshoot behavior can also be traced to z's slow reaction rate. If z(t) in (4) reacts slowly to the new signal level S = S1, it gradually approaches the new equilibrium point that is determined by S = S1, namely

z1 = AB/(A + S1),   (9)
as the net signal decays to the asymptote

S1 z1 = ABS1/(A + S1).   (10)
Thus after S(t) switches from S0 to S1, the net signal T = Sz jumps from (6) to (7) and then gradually decays to (10) (Figure 2). The exact course of this overshoot and decay is described by the equation
S1 z(t) = (ABS1/(A + S0)) exp{-(A + S1)(t - t0)} + (ABS1/(A + S1))(1 - exp{-(A + S1)(t - t0)})   (11)

for t ≥ t0. Equation (11) shows that the gain, or averaging rate, A + S1 of T through time increases with the size of the signal S1. The transmitter law (4) is thus capable of "automatic gain control" by the signal. The sudden increment followed by slow decay of T is called "overshoot" in a photoreceptor and "habituation" in various other neural preparations.
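The overshoot and habituation described by (6), (7), (10), and (11) can be checked numerically. The following sketch forward-Euler-integrates the gating law (4) through a step increment in S and samples T = Sz before the step, just after it, and after re-equilibration; all parameter values (A = B = 1, S0 = 2, S1 = 8) are illustrative choices, not taken from the text.

```python
# Numerical sketch of the gating law (4), dz/dt = A(B - z) - Sz, with T = Sz.
# Parameter values are illustrative choices, not taken from the text.
A, B = 1.0, 1.0            # accumulation rate and total transmitter capacity
S0, S1 = 2.0, 8.0          # background signal and incremented signal
dt, t_switch, t_end = 0.001, 5.0, 15.0

z = A * B / (A + S0)       # start at the equilibrium value z0 of (5)
T_trace = []
for k in range(int(t_end / dt)):
    t = k * dt
    S = S0 if t < t_switch else S1
    T_trace.append(S * z)
    z += dt * (A * (B - z) - S * z)    # forward-Euler step of (4)

k_switch = int(t_switch / dt)
T_before = T_trace[k_switch - 1]       # ~ ABS0/(A+S0), as in (6)
T_jump = T_trace[k_switch]             # ~ ABS1/(A+S0), the overshoot peak (7)
T_after = T_trace[-1]                  # ~ ABS1/(A+S1), the habituated level (10)
```

The sampled values reproduce the Weber-like jump ABS1/(A + S0) followed by decay to the smaller equilibrium ABS1/(A + S1), i.e., overshoot then habituation.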
4. Monotonic Increments and Nonmonotonic Overshoots to Flashes on Variable Background
The minimal transmitter model implies more subtle properties as well. Some of these properties figure prominently in our explanation of the BHL data. Others stand as experimental predictions. Baylor et al. (1974b) found that, in response to a flash of fixed size superimposed on a succession of increasing background intensities, the cone potential V reacts with a progressive decrease in the size of its transient response. By contrast, V's transient response reaches its peak at successively earlier times until a sufficiently high background intensity is reached. In response to even higher background intensities, the potential reaches its peak at successively later times (Figure 3). This is a highly nonlinear effect. In our theory, T is the input to the photoreceptor's potential V. We study T in its own right to provide a better understanding of V's behavior in the full theory.

Simple approximations make possible analytic estimates that qualitatively explain the behavior in Figure 3. Since a flash sets off a chain reaction in the cone, and the chain reaction lasts for some time after the flash terminates, we approximate the chain reaction by a rectangular step of fixed size δ. When a flash occurs at a succession of background intensities, we superimpose the step δ on a succession of background intensities S (Figure 4). We estimate the effect of the flash on the potential peak by computing the initial change in T due to the change in S by δ. We also estimate a possible initial "hump" in the potential through time by measuring the height and the area of the overshoot created by prescribed background levels S (Figure 4). The initial change in T to a change in S by δ is found to be a decreasing function of S. This result is analogous to the decreasing size of the potential change caused by a fixed flash at successively higher background intensities. However, the size of the overshoot, or "hump," need not be a decreasing function of S.
If δ is sufficiently small, then the overshoot size can increase before it decreases as a function of S. In other words, a more noticeable hump can appear at large background intensities S, but it can eventually shrink as the background intensity is increased even further. Baylor et al. (1974b) report humps at high background intensities as well as their shrinkage at very high background intensities. To estimate the change ΔT due to a step size of δ, we subtract (6) from (7) to find

ΔT = S1 z0 - S0 z0 = AB(S1 - S0)/(A + S0).
Let S1 - S0 = δ, corresponding to a step of fixed size δ superimposed after S(t) equilibrates to a background intensity S0 = S. Then

ΔT = ABδ/(A + S),   (12)
which is a decreasing function of S. To estimate the overshoot size Ω, we subtract (10) from (7) to find

Ω = S1 z0 - S1 z1 = ABS1(S1 - S0)/[(A + S0)(A + S1)].
Again setting S1 - S0 = δ and S0 = S, we express Ω as the function of S

Ω(S) = AB(S + δ)δ/[(A + S)(A + S + δ)].   (13)
Figure 3. The transient reactions of a cone potential to a fixed flash superimposed on a succession of increasing background levels. The potential peaks decrease, whereas the times of maximal potential first decrease and then increase, as the background parametrically increases. Effect of increasing intensity of conditioning step on response to 11 msec flash applied 1.1 sec after beginning of a step lasting 1.7 sec. The abscissa is the time after the middle of the flash, and the ordinate is U(t)/(I Δt), where U(t) is the hyperpolarization, Δt is the pulse duration, and I is proportional to flash intensity. The numbers against the curves give the logarithm of the conditioning light expressed in photoisomerizations cone^-1 sec^-1. Redrawn from Figure 3 (Baylor and Hodgkin, 1974, p.734).
Figure 4. An input step of fixed size δ on a background S causes a transient change in T of size ΔT and an overshoot of size Ω.

How does Ω(S) change as a function of S? To test whether Ω(S) increases or decreases as a function of S, we compute whether dΩ/dS is positive or negative. One readily proves that dΩ/dS > 0 at S = 0 if A > (1 + √5)δ/2, and that dΩ/dS < 0 if S > -δ + √(A(A - δ)) or if A < δ. In other words, the size of the overshoot always decreases as a function of S if S is chosen sufficiently large, but the overshoot size increases at small S values if the increment δ is sufficiently small. A similar type of non-monotonic behavior describes the total area of the overshoot.
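These sign conditions can be spot-checked numerically. The sketch below evaluates Ω(S) from (13) with illustrative values A = 10, B = δ = 1 (so that A > (1 + √5)δ/2) and confirms that Ω rises at small S and falls past the predicted peak at S = -δ + √(A(A - δ)).

```python
# Numeric check of the overshoot size (13), Ω(S) = ABδ(S + δ)/[(A + S)(A + S + δ)],
# and of the sign conditions on dΩ/dS. Parameter values are illustrative.
A, B, delta = 10.0, 1.0, 1.0          # here A > (1 + 5 ** 0.5) * delta / 2

def omega(S):
    return A * B * delta * (S + delta) / ((A + S) * (A + S + delta))

S_star = -delta + (A * (A - delta)) ** 0.5   # predicted peak location of Ω
rising = omega(1.0) > omega(0.0)             # Ω grows at small S ...
peaked = omega(S_star) > omega(S_star + 2.0) # ... and shrinks past S_star
```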
5. Miniaturized Transducers and Enzymatic Activation of Transmitter Production

We will now discuss how the time at which the potential reaches its peak can first decrease and then increase as a function of background intensity. Our discussion again centers on the design theme of ensuring the transducer's sensitivity. By proceeding in this principled fashion, we can explain more than the "turn-around" of the potential peaks. We can also explain why the steady-state of T as a function of S can obey a law of the form

T = PS(1 + QS)/(1 + RS + US^2)   (16)
with P, Q, R, and U constants, rather than a law of the form

T = PS/(1 + RS),   (17)

as in (6).
Equation (16) is the analog within our theory of the BHL equation

z1 = PS(1 + QS)/(1 + RS)   (18)
for the steady-state level of their "blocking" variable z1. Equation (18) cannot be valid at very large S values because it predicts that z1 can become arbitrarily large, which is physically meaningless. This does not happen in (16). The appearance of term US^2 in (16) allows us to fit BHL's steady-state data better than they could using (18). More important than this quantitative detail is the qualitative fact that the mechanism which replaces (17) by (16) also causes the turn-around in the peak potential. We now suggest that this mechanism is a light-induced enzymatic modulation of transmitter production and/or mobilization rates. Thus we predict that selective poisoning of this enzymatic mechanism can simultaneously abolish the turn-around in the potential peak and reduce (16) to (17).

The need for enzymatic modulation can be motivated by the following considerations. Despite the transmitter accumulation term A(B - z) in equation (4), habituation to a large signal S can substantially deplete z, as in (5). What compensatory mechanism can counteract this depletion as S increases? Can a mechanism be found that maintains the sensitivity of the transmitter gate even at large S values? One possibility is to store an enormous amount of transmitter, just in case; that is, choose a huge constant B in (4). This strategy has the fatal flaw that a very large storage depot takes up a lot of space. If each photoreceptor is large, then the number of photoreceptors that can be packed into a unit retinal area will be small. Consequently the spatial resolution of the retina will be poor in order to make its resolution of individual input intensities good. This solution is unsatisfactory. Given this insight, our design problem can be stated in a more refined fashion as follows: How can a miniaturized receptor maintain its sensitivity at large input values? An answer is suggested by inspection of equation (4).
In equation (4), the transmitter depletion rate -Sz increases as S increases, but the transmitter production rate A is constant. If the production rate keeps up with the depletion rate, then transmitter can be made continuously available even if B is not huge. The marriage of miniaturization to sensitivity hereby suggests that the coefficient A is enzymatically activated by the signal S. Let us suppose that this enzymatic step obeys the simplest mass action equation

dA/dt = -C(A - A0) + D[E - (A - A0)]S.   (19)
In (19), A(t) has a baseline level A0 in the dark (S = 0). Turning light on makes S positive and drives A(t) towards its maximum value A0 + E. Rewriting (19) as

dA/dt = -(C + DS)(A - A0) + DES   (20)
shows that the activation of A(t) by a constant signal S increases the gain C + DS as well as the asymptote

A = A0 + DES/(C + DS)   (21)
of A(t). This asymptote can be rewritten in the convenient form

A = A0 (1 + FS)/(1 + GS)   (22)

by using the notation

F = (A0 + E) D A0^-1 C^-1   (23)

and

G = D C^-1.   (24)
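The equivalence of the asymptote (21) and its rewritten form (22)-(24) can be confirmed numerically; the parameter values below are arbitrary test choices.

```python
# Numeric check that the asymptote (21), A0 + DES/(C + DS), coincides with the
# rewritten form (22), A0(1 + FS)/(1 + GS), using (23)-(24). Values are arbitrary.
A0, C, D, E = 2.0, 1.5, 0.7, 3.0
F = (A0 + E) * D / (A0 * C)          # equation (23)
G = D / C                            # equation (24)

for S in (0.0, 0.5, 2.0, 10.0, 100.0):
    form21 = A0 + D * E * S / (C + D * S)
    form22 = A0 * (1 + F * S) / (1 + G * S)
    assert abs(form21 - form22) < 1e-9
```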
To make our main qualitative points, let us assume for the moment that the enzymatic activation of A by S proceeds much more rapidly than the release of z by S. Then A(t) approximately equals its asymptote in (22) at all times t. Equation (4) can then be replaced by the equation

dz/dt = A0 ((1 + FS)/(1 + GS))(B - z) - Sz.   (25)
Let us use (25) to compute the steady-state response T = Sz to a sustained signal S. We find that

T = PS(1 + QS)/(1 + RS + US^2),   (26)

where

P = B,  Q = F = (A0 + E) D A0^-1 C^-1,   (27)

R = A0^-1 + F = A0^-1 [1 + (A0 + E) D C^-1],   (28)

and

U = G A0^-1 = D A0^-1 C^-1.   (29)
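The identification of the steady state of (25) with the rational form (26)-(29) can be verified numerically; all parameter values in the sketch are illustrative.

```python
# Numeric check that the steady state of (25) has the rational form (26)
# with coefficients given by (27)-(29). All parameter values are illustrative.
A0, B, C, D, E = 2.0, 1.0, 1.5, 0.7, 3.0
F = (A0 + E) * D / (A0 * C)          # equation (23)
G = D / C                            # equation (24)
P, Q = B, F                          # equation (27)
R = 1 / A0 + F                       # equation (28)
U = G / A0                           # equation (29)

for S in (0.1, 1.0, 5.0, 50.0):
    A_S = A0 * (1 + F * S) / (1 + G * S)    # modulated accumulation rate (22)
    z_ss = A_S * B / (A_S + S)              # dz/dt = 0 in (25)
    T_direct = S * z_ss
    T_rational = P * S * (1 + Q * S) / (1 + R * S + U * S ** 2)
    assert abs(T_direct - T_rational) < 1e-9
```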
Note that the form of (16) does not change if S is related to light intensity I by a law of the form

S(I) = PI/(1 + UI).   (30)

Only the coefficients P, Q, R, and U change.

6. Turn-Around of Potential Peaks at High Background Intensities

Despite the assumption that A depends on S, all of our explanations thus far use a single differential equation (25). We will qualitatively explain the turn-around of potential peaks, the quenching of a second overshoot in double flash experiments, and the existence of rebound hyperpolarization when a depolarizing current is shut off during a hyperpolarizing light using only this differential equation. In the BHL theory, by contrast, a substantial number of auxiliary differential equations are needed to explain all of these phenomena at once. Moreover, we can quantitatively fit the data using only equation (25) better than BHL can fit the data using all their auxiliary variables. Our full theory provides an even better fit. More importantly, equation (25) suggests that all these phenomena are properties of a transmitter gate.

To qualitatively explain the turn-around of peak potential as background intensity increases, we consider Figures 5 and 6. In Figure 5, S starts out at a steady-state value S0. Then a flash causes a chain reaction which creates a gradual rise and then fall in S. Function S reaches its maximum at the time t = tS when dS/dt = 0. The transmitter z responds to the increase in S by gradually being depleted. As the chain reaction wears off, z gradually accumulates
again. Function z reaches its minimum at the time t = tz when dz/dt = 0. From Figure 5, we can conclude that the gated signal T = Sz reaches a maximum at a time t = tT before S reaches its maximum. This is because

dT/dt = (dS/dt) z + S (dz/dt).   (31)
After time t = tS, both dS/dt and dz/dt are negative until the chain reaction wears off. Thus dT/dt is also negative during these times. Consequently dT/dt = 0 at a time tT < tS. Figure 6 explains the turn-around by plotting the times when dS/dt = 0, dz/dt = 0, and dT/dt = 0 as a function of the background level S0. In Figure 6, we think of tS(S0), tz(S0), and tT(S0) as functions of S0. Two properties control the turn-around: (a) the function tS(S0) might or might not decrease as S0 increases, but eventually it must become approximately constant at large S0 values; (b) the function tz(S0) decreases faster as S0 increases until tz(S0) approximately equals tS(S0) at large S0 values. Property (a) is due to the fact that the photoreceptor has a finite capacity for reacting to photons in a unit time interval. After this capacity is exceeded, higher photon intensities cannot be registered. Property (b) is due to the light-induced increase of z's reaction rate at higher S0 levels. Light speeds up z's reaction rate, so that at higher S0 values z can equilibrate faster to the chain reaction S. In particular, tz(S0) approaches tS(S0) as S0 increases.

Using properties (a) and (b), we will now explain the turn-around. When S0 is small, dz/dt is also small. By (31) this means that dT/dt = 0 almost when dS/dt = 0, or that tT ≅ tS. As S0 increases to intermediate values, the chain reaction S also increases. Consequently dz/dt becomes more negative and makes z smaller. Also z's gain is sped up, so that tz approaches closer to tS. In (31), this means that S dz/dt will be large and negative at times when z is small. To achieve dT/dt = 0, we therefore need dS/dt to be large and positive. In other words, T reaches its peak while S is still growing rapidly. Hence tT occurs considerably earlier than tS. This argument shows why the peak of T occurs earlier as S0 increases. Why does turn-around occur? Here properties (a) and (b) are fully used.
By property (b), z reaches its minimum right after S reaches its maximum if S0 is large. In other words, tz approaches tS as S0 becomes large. This means that both dS/dt = 0 and dz/dt = 0 at almost the same time. By (31), also dT/dt = 0 at about this time. In all, tT ≅ tS ≅ tz if S0 is large. Now we use property (a). Since tS(S0) is approximately constant at large S0 values, tT(S0) must bend backwards from its position much earlier than tS(S0) at intermediate S0 values to a position closer to tS(S0) at large S0 values. This is the turn-around that we seek.
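The Figure 5 claim that T = Sz peaks before S does can be illustrated by integrating (25) through a smooth "chain reaction" bump. The bump shape and every parameter value below are illustrative assumptions, not fits to data.

```python
# Sketch of the Figure 5 claim: when a smooth "chain reaction" bump in S(t)
# drives the gate (25), the gated signal T = Sz peaks before S does.
# The bump shape and all parameter values are illustrative assumptions.
import math

A0, B, C, D, E = 2.0, 1.0, 1.5, 0.7, 3.0
F = (A0 + E) * D / (A0 * C)
G = D / C
S0, height, t_peak, width = 1.0, 20.0, 3.0, 1.0
dt, steps = 0.001, 10000

def S_of(t):
    return S0 + height * math.exp(-((t - t_peak) / width) ** 2)

A_bg = A0 * (1 + F * S0) / (1 + G * S0)
z = A_bg * B / (A_bg + S0)           # gate equilibrated to the background S0
best_T, t_T = -1.0, 0.0
for k in range(steps):
    t = k * dt
    S = S_of(t)
    if S * z > best_T:               # track the peak of the gated signal
        best_T, t_T = S * z, t
    A_S = A0 * (1 + F * S) / (1 + G * S)
    z += dt * (A_S * (B - z) - S * z)    # Euler step of (25)
```

Because dT/dt = S dz/dt < 0 already at t = tS, the recorded peak time t_T falls strictly before the bump's peak at t = 3.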
7. Double Flash Experiments
In BHL's double flash experiments, a bright flash causes the potential to overshoot. A second bright flash that occurs while the potential is reacting to the chain reaction caused by the first flash does not cause an overshoot even though it extends the duration of the chain reaction. This effect can be explained as follows (Figure 7). The first bright flash causes an overshoot due to the slow reaction of z to the onset of the chain reaction, as in Section 4. For definiteness suppose that z(0) = B at time t = 0 and that the chain reaction starts rising at time t = 0 to a maintained intensity of approximately S. By (25), z(t) decreases from B to approximately

z = P(1 + QS)/(1 + RS + US^2).   (32)
Figure 5. Signal S(t) peaks at time t = tS before transmitter z(t) reaches its minimum at time t = tz. Consequently, the gated signal T = Sz peaks at a time t = tT earlier than t = tS.
Figure 6. As tz(S0) is drawn closer to tS(S0) at large S0 values due to enzymatic activation of the transmitter accumulation rate, tT(S0) reaches a minimum and begins to increase again.

This decrease in z(t) causes the overshoot, since the product Sz(t) first increases due to the fast increase in S(t) and then decreases due to the slower decrease in z(t). Once z(t) equilibrates at the level (32), it thereafter maintains this level until the chain reaction decays. In a double flash experiment, the second flash occurs before the chain reaction can decay. The second flash maintains the chain reaction a while longer at the level S. No second overshoot in z occurs simply because z has already equilibrated at the level (32) by the time the second flash occurs. When T is coupled to the potential V, the overshoot in T also causes a gain change in V's reaction rate. BHL noticed this gain change and introduced another conductance into their model whose properties were tailored to explain the double flash experiment. In the BHL model, this second conductance is a rather mysterious quantity (see Section 15). In our model, it follows directly from the slow fluctuation rate of the transmitter gate (Section 9). Our model's predictions can be differentiated from those of the BHL model because they all depend on the slow rate of the transmitter gate. Speeding up the transmitter's reaction rates should eliminate not only the overshoot and the second conductance, but also the photoreceptor's ability to remember an adaptational baseline (Section 3).
Figure 7. Effect of a bright conditioning flash on the response to a subsequent bright test flash. (a) Response to test flash alone. (b) Response to conditioning flash alone. (c) Response to both flashes, with the upper two responses dotted. Redrawn from Figure 15 of Baylor et al. (1974a, p.716).

8. Antagonistic Rebound by an Intracellular Dipole: Rebound Hyperpolarization Due to Current Offset
The ubiquity of the gating design in neural systems is illustrated in a striking way by the following data. Baylor et al. (1974a) showed that offset of a rectangular pulse of depolarizing current during a cone's response to light causes a rebound hyperpolarization of the cone's potential. By contrast, offset of a depolarizing current in the absence of light does not cause a rebound hyperpolarization (Figure 8). In other words, an antagonistic rebound in potential, from depolarization to hyperpolarization, can sometimes occur. One of the most important properties of a slow gate is its antagonistic rebound property. This property was first derived to explain data about reinforcement and attention in Grossberg (1972a, 1972b, 1975) and was later used to explain data about perception and cognitive development in Grossberg (1976, 1980). These results show how antagonistic rebounds can be caused when the signals to one or both of two parallel channels are gated before the gated signals compete to elicit net outputs from the channels (Figure 9).

Figure 8. Changes in potential produced by current in darkness (a), and during the response to light (b), superimposed tracings. Between arrows, a rectangular pulse of depolarizing current (strength 1.5 × 10^-10) was passed through the microelectrode. (c) is the response to light without current. Redrawn from Figure 10 of Baylor et al. (1974a, p.706).

In reinforcement and cognitive examples, the two competing channels have typically been interpreted to be due to intercellular interactions. The competing channels implicated by the BHL data are, by contrast, intracellular. They are the depolarizing and hyperpolarizing voltage-conductance terms in the membrane equation for the cone potential (Section 9). In the remainder of this section, we will review how slow gates can cause antagonistic rebounds. Then we will have reached the point where the gated signal must be coupled to the potential in order to derive further insights. This coupling is, however, quite standard in keeping with our claim that most of the interesting properties of the BHL data are controlled by the fluctuations of T under particular circumstances.

To explain the main idea behind antagonistic rebound, suppose that one channel receives input S1 and that the other channel receives input S2 = S1 + ε, ε > 0. Let the first channel possess a slow gate z1 and the second channel possess a slow gate z2. Suppose for definiteness that each gate satisfies

z_i = P(1 + QS_i)/(1 + RS_i + US_i^2),   (33)
i = 1, 2, as in (32). The explicit form of (33) is irrelevant. All we need is the property that z_i is a decreasing function of S_i. In other words, larger signals can deplete more transmitter. This is true in (33) because, by (27) and (28), Q < R. However, the opposite is true for the gated signals T1 = S1 z1 and T2 = S2 z2. The function

T = PS(1 + QS)/(1 + RS + US^2)   (16)
is an increasing function of S because, by (27)-(29), QR > U. In other words, a larger S signal produces a larger output T even though it depletes more z. This simple yet subtle fact about gates lies at the heart of our explanation of antagonistic rebound. The property was first derived in Grossberg (1968, 1969). The lack of widespread knowledge of this property among experimentalists has caused much unnecessary confusion about
Figure 9. A gated dipole. Signals S1 and S2 are gated by the slow transmitters z1 and z2, respectively, before the gated signals T1 = S1 z1 and T2 = S2 z2 compete to generate a net reaction.

the dynamics of transmitters in various neural systems. Because this fact was not known to BHL, they found an ingenious, albeit unintuitive, way to explain the rebound in terms of their second conductance. Our theory differs from theirs strongly on this point. The steady-state equation (18) does not embody either the intuitive meaning or the mathematical properties of our steady-state equation (16). In our theory, antagonistic rebound can be trivially proved as follows. When ε is on, S2 > S1. Consequently, despite the fact that z2 < z1, it follows that T2 > T1. After competition acts, the net output T2 - T1 of the on-channel is positive. To see how rebound occurs, shut ε off. Then S2 and S1 rapidly equalize at the value S1. However z2 and z1 change more slowly. Thus the inequality z2 < z1 persists for some time. Consequently the net output reverses sign because
Tz - 21'
SI(ZZ- 21) < 0
(34)
and an antagonistic rebound occurs. The rebound is transient due to the fact that z2 and z1 gradually equilibrate to the same input S1 at a common value z1, and thus

T2 - T1 = 0   (35)

after equilibration occurs. A similar argument shows how antagonistic rebound can occur if only the channel whose input is perturbed contains a slow gate.
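The rebound argument can be checked with a minimal simulation. The sketch below gives each channel the simple gating law (4), omitting the enzymatic term (whose explicit form, the text notes, is irrelevant here), and examines the net output T2 - T1 while ε is on, just after ε is shut off, and after the gates re-equilibrate. All parameter values are illustrative.

```python
# Minimal gated-dipole simulation for Section 8: two channels with slow gates
# obeying the simple gating law (4) (the enzymatic term is omitted), inputs
# S1 and S2 = S1 + ε. All parameter values are illustrative.
A, B = 0.5, 1.0                      # slow gate: small accumulation rate
S1_base, eps = 1.0, 2.0
dt = 0.001

def gate_step(z, S):
    return z + dt * (A * (B - z) - S * z)

z1 = A * B / (A + S1_base)           # gate of the baseline channel, equilibrated
z2 = A * B / (A + S1_base + eps)     # gate of the on-channel, equilibrated
on_output = (S1_base + eps) * z2 - S1_base * z1   # T2 - T1 while ε is on

# Shut ε off: the inputs equalize at S1 while z2 < z1 persists for a while.
rebound = S1_base * z2 - S1_base * z1             # T2 - T1 just after offset
for _ in range(20000):               # let both gates re-equilibrate to S1
    z1, z2 = gate_step(z1, S1_base), gate_step(z2, S1_base)
after = S1_base * z2 - S1_base * z1               # rebound has died away
```

The net output is positive while ε is on, reverses sign immediately after offset (the antagonistic rebound of (34)), and decays back to zero as in (35).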
Figure 10. Shift of dynamic range to increments in log S after transmitter equilibrates to different background intensities S0, S1, S2, ....

9. Coupling of Gated Input to the Photoreceptor Potential
The photoreceptor potential V is assumed to obey the standard membrane equation
C0 dV/dt = (V+ - V)g+ + (V- - V)g- + (Vp - V)gp   (36)
where V(t) is a variable voltage; C0 is a capacitance; V+, V-, and Vp are excitatory, inhibitory, and passive saturation points, respectively; g+, g-, and gp are excitatory, inhibitory, and passive conductances, respectively; and

V- ≤ Vp < V+.   (37)

Then V- ≤ V(t) ≤ V+ for all t ≥ 0 if V- ≤ V(0) ≤ V+. By rewriting (36) as

C0 dV/dt = -(g+ + g- + gp)V + V+ g+ + V- g- + Vp gp,   (38)

we notice that the total gain of V is g+ + g- + gp and the asymptote of V in response to constant conductances is

(V+ g+ + V- g- + Vp gp)/(g+ + g- + gp).   (39)
Both the gain and the asymptote are altered by changing the conductances. In the special case of the turtle cone, light acts by decreasing the excitatory conductance g+ (Baylor et al., 1974b). We will assume below that the gated signal T causes this change in g+. Light hereby slows down the cone’s reaction rate as it hyperpolarizes the cone (driving V towards V-). We wish to emphasize at the outset that similar results would hold if we assumed that T increased, rather than decreased, g+. The main difference
would be a speeding up of the potential change rather than its slowing down by inputs T. In all situations wherein V can react more quickly than T can fluctuate, differences in the gain of V do not imply new qualitative properties, although they can imply quantitative differences. One of these differences is that a decrease of V's gain as T increases prolongs the duration of V's reaction to light.

We will couple T to g+ using a simple mass action law. Suppose that there exist g0 membrane "pores" of which g+ = g(t) pores are open and g0 - g(t) are closed at any time t. Suppose that T closes open pores by mass action, so that g0 pores will open after T shuts off. Then

dg/dt = H(g0 - g) - JgT,   (40)

where H and J are positive constants. Suppose also that this process is rapid compared to V's reaction rate to changes in g. We can then assume that g is always in approximate equilibrium with T. Setting dg/dt = 0, we find

g = g0/(1 + KT)   (41)
where K = J H^-1. To achieve a more symmetric notation, we write g- = g1 and for simplicity set gp = 0. We also rescale the time variable so that C0 = 1 in (36). Then equation (36) takes the form

dV/dt = (V+ - V) g0/(1 + KT) + (V- - V) g1.   (42)

Our next steps are to compute the equilibrium potential V0 that occurs when T = 0, and to write an equation for the amount of hyperpolarization

x = V0 - V   (43)

that occurs in response to an arbitrary function T. We find

V0 = (V+ g0 + V- g1)/(g0 + g1)   (44)

and

dx/dt = -(g0/(1 + KT) + g1) x + LT/(1 + KT),   (45)

where

L = K g1 (V0 - V-).   (46)

The steady-state value x∞ of x in response to a constant or slowly varying T is found by (45) to be

x∞ = MT/(N + T),   (47)

where

M = V0 - V- > 0   (48)

and

N = (g0 + g1) K^-1 g1^-1.   (49)

From (47), it follows that

N x∞ = T(M - x∞),   (50)
where M is the maximum possible level of hyperpolarization. This equation is formally identical to the BHL equation (in their notation)

N x∞ = z1 (M - x∞),   (51)

except that their blocking variable z1 is replaced by our gated signal T = Sz (Baylor et al., 1974b). The formal similarity of (50) to (51) is one cornerstone on which our fit to the BHL data is based. Another cornerstone is the fact that T satisfies the equation
T = PS(1 + QS)/(1 + RS + US^2),   (16)

whereas z1 satisfies

z1 = PS(1 + QS)/(1 + RS).   (18)

BHL relate data about U to data about S via the hypothetical process z1 using (18) and (51) just as we relate data about x to data about S via the hypothetical process T using (16) and (50). Despite these formal similarities, the substantial differences between other aspects of the two theories show how basic the gating concept is in transmitter dynamics.
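The chain of relations (45)-(50) can be verified numerically with arbitrary test values for the conductances and for V0 - V-.

```python
# Numeric check of the hyperpolarization relations (45)-(50): the steady state
# of (45) equals MT/(N + T) with M, N as in (48)-(49). Values are illustrative.
g0, g1, K = 2.0, 0.5, 1.2
V0_minus_Vm = 10.0                   # V0 - V-, assumed positive
L = K * g1 * V0_minus_Vm             # equation (46)
M = V0_minus_Vm                      # equation (48)
N = (g0 + g1) / (K * g1)             # equation (49)

for T in (0.0, 0.3, 2.0, 20.0):
    # Setting dx/dt = 0 in (45): x_ss = [LT/(1+KT)] / [g0/(1+KT) + g1]
    x_ss = (L * T / (1 + K * T)) / (g0 / (1 + K * T) + g1)
    assert abs(x_ss - M * T / (N + T)) < 1e-9       # equation (47)
    assert abs(N * x_ss - T * (M - x_ss)) < 1e-9    # equation (50)
```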
10. "Extra" Slow Conductance During Overshoot and Double Flash Experiments

Baylor et al. (1974a) found that a bright flash causes an overshoot in hyperpolarization followed by a plateau phase before the potential returns to its baseline level. They also found that an extra conductance accompanies the overshoot. Because their blocking and unblocking variables could not explain these overshoot and conductance properties, they added a new conductance term, denoted by Gf, to their voltage equation and defined its properties to fit the data. Baylor et al. (1974b) also defined the properties of Gf to explain double flash experiments. If a second bright flash occurs during the plateau phase of the response to the first flash, then the plateau phase is prolonged, but a second overshoot does not occur (Figure 7). We will argue that such an "extra" conductance follows directly from the coupling of the gated signal T to the potential V. In other words, an extra conductance can be measured without postulating the existence of an extra membrane channel to subserve this conductance. To qualitatively understand this property, note that the gain of x in (45) is
r = (g0 + g1 + g1 KT)/(1 + KT).   (52)
Approximate the chain reaction that is elicited by a bright flash with a rectangular step

S(t) = 0 if t < 0,  S(t) = S if t ≥ 0.   (53)

Then (25) and (53) imply z(t) = B for t < 0, whereas

z(t) = B exp{-[A(S) + S]t} + (A(S)B/(A(S) + S))(1 - exp{-[A(S) + S]t})   (54)
for t ≥ 0, where

A(S) = A0 (1 + FS)/(1 + GS).   (55)
Equation (54) can be rewritten as

z(t) = BA(S)[A(S) + S]^-1 + BS[A(S) + S]^-1 exp{-[A(S) + S]t}.   (56)

Thus z reacts to the chain reaction with a fast component BA(S)[A(S) + S]^-1 and a slowly decaying component BS[A(S) + S]^-1 exp{-[A(S) + S]t}. The slow component causes an overshoot in T = Sz, and thus in x's asymptote MT(N + T)^-1. The gain r of x also possesses fast and slow components. We identify the slow component of r with the extra conductance that BHL observed. Note that the same process which causes the overshoot of x also causes the emergence of the slow conductance; namely, the slow equilibration of z(t) to the new level of the chain reaction. This explanation easily shows not only why a second flash (Figure 7) does not create another overshoot, but also why the slow component in the gain of x does not reappear in response to a second flash. In the BHL model, this slow conductance and its relationship to the overshoot had to be added to the theory to fit the data. In the gating model, the slow component and its relationship to the overshoot occur automatically. An estimate of r as a sum of constant, fast, and slow conductances can be computed by using the approximation
1/(1 + KT) ≅ 1 - KT + K^2 T^2 - K^3 T^3   (57)

to rewrite (52) as

r ≅ g0 + g1 - g0 KT + g0 K^2 T^2 - g0 K^3 T^3.   (58)
Then substitute

T = BSA(S)/[A(S) + S] + (BS^2/[A(S) + S]) exp{-[A(S) + S]t}   (59)
into (58), and segregate constant, fast, and slow terms. This computation is unnecessary, however, to understand the qualitative reasons for correlated overshoot and gain changes.
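The truncation (57)-(58) is easy to sanity-check numerically for small KT; the conductance values below are illustrative.

```python
# Numeric check of the truncated-series estimate (58) for the gain (52),
# r = (g0 + g1 + g1 KT)/(1 + KT); the truncation is accurate when KT is small.
g0, g1, K = 2.0, 0.5, 1.2
for T in (0.0, 0.01, 0.05):
    kt = K * T
    r_exact = (g0 + g1 + g1 * kt) / (1 + kt)
    r_series = g0 + g1 - g0 * kt + g0 * kt ** 2 - g0 * kt ** 3
    assert abs(r_exact - r_series) <= g0 * kt ** 4 + 1e-12
```

The truncation error is of order g0 (KT)^4, so the constant, fast, and slow terms of (58) capture r whenever KT stays well below 1.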
11. Shift Property and its Relationship to Enzymatic Modulation

An important property of certain isolated photoreceptors is their ability to shift their operating range of maximal sensitivity, without compressing their full dynamic range, in response to shifts in background light intensity (Figure 10). Such a shift property can be caused by the action of shunting lateral inhibition within a network of cells (Grossberg, 1978a, 1978b). Herein we show how it can be caused when a transmitter gate reacts to changes in light level more slowly than the light-mediated chain reaction that causes transmitter release. The result also predicts a relationship between the amount of shift and the influence of light on the enzymatic activation of transmitter at high light intensities. By Section 5, the amount of shift should also be related to the size of the turn-around in the timing of potential peaks to fixed flashes on increasing background intensities. The result follows from (47). Suppose that z0 has equilibrated to a steady background level S0, so that

z0 = P(1 + QS0)/(1 + RS0 + US0^2)   (60)
as in (33). Now parametrically change the input level S and measure the hyperpolarization x at a fixed time after the change in S and before z can substantially change. For simplicity, denote the series of S values at this time by S^(0) and the corresponding hyperpolarizations by

x^(0) = MS^(0) z0/(N + S^(0) z0).   (61)

Also perform the same experiment using a different background level S1 that causes a different z level

z1 = P(1 + QS1)/(1 + RS1 + US1^2)   (62)

and a different series of hyperpolarizations

x^(1) = MS^(1) z1/(N + S^(1) z1)   (63)

in response to the test inputs S^(1). The theoretical question to be answered is the following. If the series of test inputs S^(0) and S^(1) are plotted in logarithmic coordinates, does the series x^(0) differ from the series x^(1) by a constant shift λ? In other words, if we write S^(0) = e^K and S^(1) = e^(K+λ), is
x^(1)(K + λ) = x^(0)(K)   (64)

for all K ≥ 0 and a suitable choice of λ? A simple computation answers this question in the affirmative with

λ = ln(z0/z1).   (65)

The effect of light-induced enzymatic activation of z on λ is controlled by the quadratic terms US0^2 and US1^2 in (60) and (62), respectively. Without enzymatic activation, U = 0. Thus as S0 → ∞ and S1 → ∞, both z0 and z1 approach PQR^-1, so λ → 0. By contrast, if U > 0, then at large values of S0 and S1,

λ ≅ ln(S1/S0).   (66)

Consequently, by choosing a series of background inputs S0 and S1 such that S1 = MS0, an asymptotic shift λ of size ln(M) can be achieved. Of course, all of these estimates are approximate, since z begins to adapt to the new level of S as soon as S changes the chain reaction, z's adaptation rate depends on S, and S can asymptote at a finite level whether or not U = 0 because the photoreceptor possesses only finitely many receptors with which to bind photons. Nonetheless, the qualitative relationship between asymptotic λ values and the highest power of S in the steady-state equation for z is worth noting as a possible tool for independently testing whether an experimental manipulation has altered the enzymatic step.
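The shift identity (64)-(65) can be confirmed numerically: because the test input S^(1) = e^(K+λ) = S^(0) z0/z1 produces exactly the same gated signal as S^(0) did on the old background, the two response curves coincide after the shift. All parameter values below are illustrative.

```python
# Numeric check of the shift property (64)-(65): in log coordinates, the
# response curves for two backgrounds differ by the shift λ = ln(z0/z1).
# All parameter values are illustrative.
import math

P, Q, R, U = 1.0, 0.8, 1.5, 0.3
M, N = 10.0, 2.0

def z_eq(S):                          # equilibrated gate level, as in (33)
    return P * (1 + Q * S) / (1 + R * S + U * S ** 2)

def x_resp(S_test, z_bg):             # hyperpolarization (47) with T = S_test * z_bg
    T = S_test * z_bg
    return M * T / (N + T)

z0, z1 = z_eq(1.0), z_eq(5.0)         # two background intensities S0 = 1, S1 = 5
lam = math.log(z0 / z1)               # predicted shift, equation (65)

x0 = x_resp(math.exp(1.0), z0)        # test input at log-intensity K = 1
x1 = x_resp(math.exp(1.0 + lam), z1)  # shifted test input at K + λ
```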
12. Rebound Hyperpolarization, Antagonistic Rebound, and Input Doubling

The rebound hyperpolarization depicted in Figure 9 can be explained using equation (42); namely,
$$C_0\frac{d}{dt}V = (V^+ - V)\frac{P_0}{1 + KT} - (V - V^-)g. \tag{42}$$
Adaptation and Transmitter Gating in Vertebrate Photoreceptors
Rebound hyperpolarization can be explained if the depolarizing pulse interferes with the ability of the signal S in the $g^+$ channel to release transmitter. This suggestion is compatible with the data in Figure 9, since the depolarizing pulse acting by itself achieves the same depolarization as the pulse acting together with hyperpolarizing light. We therefore suppose that the effective signal strength during a depolarizing pulse of intensity J is $\theta S$, where $\theta = \theta(J)$ is a non-negative decreasing function of J such that $\theta(0) = 1$ and $\theta(J) < 1$ if $J > 0$. To see how this mechanism works, suppose that S(t) is a rectangular step with onset time t = 0 and intensity S. After the light and the depolarizing pulse are both turned on, z(t) will approach the asymptote
$$\frac{P(1 + Q\theta S)}{1 + R\theta S + U\theta^2 S^2} \tag{67}$$
rather than the smaller asymptote
$$\frac{P(1 + QS)}{1 + RS + US^2} \tag{68}$$
that would have been approached in the absence of the depolarizing current. If θ is small, the asymptote of V during the pulse will be similar with and without light, because the gated signal
$$\theta S\,\frac{P(1 + Q\theta S)}{1 + R\theta S + U\theta^2 S^2} \tag{69}$$
approaches zero as θ does.
approaches wro as 8 does. If the pulse is shut off at time t = t o , B rapidly returns to the value 1, so that S can bind transmitter with its usual strength. Hence shortly after time t = t o , the gated signal will approximately equal
by (67), rather than the smaller value
$$\frac{SP(1 + QS)}{1 + RS + US^2} \tag{71}$$
that it would have attained by (68), had the depolarizing pulse never occurred. By (70), (71), and (42), more hyperpolarization occurs after the current is shut off than would have occurred in response to the light alone. This explanation of rebound hyperpolarization can be tested by doing parametric studies in which the asymptote of V in response to a series of J values is used to estimate θ(J) from (42) and (69). When this θ(J) function is substituted in (70), a predicted rebound hyperpolarization can be estimated by letting $T = T_0$ in (42). A related rebound hyperpolarization effect can be achieved if, after the photoreceptor equilibrates to a fixed background level S, a step of additional input intensity is imposed for a while, after which the input is returned to the level S. An overshoot in potential at step onset, an undershoot in potential at step offset, and a slowing down of the potential gain can all be explained using (42) augmented by a transmitter gating law. Kleinschmidt and Dowling (1975) have measured such an effect in the Gekko gekko rod. It can be explained using Figure 11. Figure 11a depicts the (idealized) temporal changes in the input signal S(t), Figure 11b depicts the corresponding depletion and recovery of z(t), and Figure 11c depicts the consequent overshoot and undershoot
of the gated signal T(t), which has corresponding effects on the asymptote and gain of the potential V(t). Baylor et al. (1974a, p.714) did a related experiment when they either interrupted or brightened a steady background light. In particular, they first exposed the turtle eye to a light equivalent to $3.7 \times 10^4$ photons $\mu$m$^{-2}$ sec$^{-1}$ for one second. Then the light intensity was either doubled or reduced to zero for 40 msec. The net effect is to add or subtract the same light intensity from a steady background. The depolarization resulting from the offset of light is larger than the hyperpolarization resulting from doubling the light. This follows from (42) by showing that the equilibrium hyperpolarization achieved by setting $S = S_0$ is greater than the change in hyperpolarization achieved right after switching S to $2S_0$, given that the transmitter has equilibrated to $S = S_0$; this is inequality (72).
Inequality (72) can be reduced to the inequality $V^+ > V^-$, and is therefore true. Another inequality follows from $V^+ > V^-$ and is stated as a prediction: twice the equilibrium hyperpolarization achieved by setting $S = S_0$ exceeds the total hyperpolarization achieved right after switching S to $2S_0$, given that the transmitter has equilibrated to $S = S_0$.
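The rebound mechanism of this section can be illustrated with a small simulation, which is not in the original. For simplicity it uses the non-enzymatic gate law (4), $\dot z = A(B - z) - S_{\mathrm{eff}}z$, with the depolarizing pulse attenuating the effective signal to $\theta S$, $\theta < 1$. The constants and the value of θ are assumed for illustration only.

```python
# Gate dynamics under light, with and without a superimposed depolarizing pulse.
A, B = 0.1, 1.0   # illustrative accumulation rate and target level for (4)
S = 2.0           # light-induced signal
dt = 0.01

def gate_after(theta, t_end=200.0):
    """Integrate dz/dt = A*(B - z) - theta*S*z until t_end; theta < 1 models the pulse."""
    z, t = B, 0.0
    while t < t_end:
        z += dt * (A * (B - z) - theta * S * z)
        t += dt
    return z

z_pulse = gate_after(0.2)   # light plus depolarizing pulse: release attenuated
z_light = gate_after(1.0)   # light alone

# The attenuated signal depletes the gate less ...
assert z_pulse > z_light
# ... so at pulse offset the gated signal T = S*z transiently exceeds its
# light-alone level, producing extra hyperpolarization through (42).
assert S * z_pulse > S * z_light
```

Parametric estimates of θ(J), as proposed in the text, would replace the assumed value 0.2.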
13. Transmitter Mobilization
Baylor et al. (1974a) found that very strong flashes or steps of light introduce extra components into the response curves of the cone potential. These components led BHL to postulate the existence of more slow processes $z_3$, $z_4$, and $z_5$, in addition to their blocking and unblocking variables $z_1$ and $z_2$. The time scales which BHL ascribed to this augmented chain reaction of slow processes are depicted in Figure 12. Below we will indicate how transduction processes that are familiar in other transmitter systems, say in the mobilization of acetylcholine at neuromuscular junctions (Eccles, 1964) or of calcium in the sarcoplasmic reticulum of skeletal muscles (Caldwell, 1971), can account for the existence of extra components. We will also indicate how these processes can cause very small correction terms to occur in the steady-state relationship (16) between the gated signal T and the signal intensity S. Let us distinguish between transmitter that is in bound, or storage, form and transmitter that is in available, or mobilized, form, as in Figure 13. Let the amount of storage transmitter at time t be w(t) and the amount of mobilized transmitter at time t be z(t). We must subdivide the processes defining (4) among the components w(t) and z(t), and allow storage transmitter to be mobilized and conversely. Then (4) is replaced by the system
$$\frac{d}{dt}w = K(L - w) - (Mw - Nz) \tag{78}$$
Figure 11. (a) Rectangular step in S(t) causes (b) gradual depletion-then-accumulation of z(t). The combined effect is (c) overshoot and undershoot of T(t).
Figure 12. Order of magnitude of the time constants of the $z_i$ processes in seconds. Backward reactions are all small compared to forward reactions. Redrawn from Baylor and Hodgkin (1974, p.757).
Figure 13. Transmitter w accumulates until a target level is reached. Accumulated transmitter is mobilized until an equilibrium between mobilized and unmobilized transmitter fractions is attained. The signal S is gated by mobilized transmitter, which is released by mass action. The signal also modulates the accumulation and/or mobilization process.

and
$$\frac{d}{dt}z = (Mw - Nz) - Sz. \tag{79}$$
Term K(L − w) in (78) says that w(t) tries to maintain a level L via transmitter accumulation (or production and feedback inhibition). Term −(Mw − Nz) in (78) says that storage transmitter w is mobilized at a rate M, whereas mobilized transmitter z is demobilized and restored at a rate N, until the two processes equilibrate. Term Mw − Nz in (79) says that w's loss is z's gain. Term −Sz in (79) says that mobilized transmitter is released at rate Sz as it couples to the signal S by mass action. In all, equations (78) and (79) are the minimal system wherein transmitter accumulation, gating, and release can occur given that transmitter must be mobilized before it can be released. Once this system is defined, we must again face the habituation dilemma that was discussed in Section 5. Should not some or all of the production and mobilization
terms be enzymatically activated by light to prevent the mobilized transmitter from being rapidly depleted by high intensity lights? The terms which are candidates for enzymatic activation in (78) and (79) are K, M, and N, as in the equations
$$\frac{dK}{dt} = -\alpha_K(K - K_0) + \beta_K[\gamma_K - (K - K_0)]S, \tag{80}$$
$$\frac{dM}{dt} = -\alpha_M(M - M_0) + \beta_M[\gamma_M - (M - M_0)]S, \tag{81}$$
and
$$\frac{dN}{dt} = -\alpha_N(N - N_0) + \beta_N[\gamma_N - (N - N_0)]S. \tag{82}$$
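A direct numerical reading of the storage/mobilization system (78)-(79) may help fix ideas. The sketch below is an addition to the text; the rate constants are illustrative and unfitted. It checks the dark equilibrium, at which mobilization balances demobilization and $w \to L$, and shows that a light signal depletes the mobilized pool more deeply than the storage pool.

```python
# Storage transmitter w and mobilized transmitter z, as in (78)-(79).
K, L, M, N = 0.05, 1.0, 0.5, 0.5   # illustrative rate constants
dt = 0.01

def step(w, z, S):
    net_mob = M * w - N * z        # net mobilization flow
    dw = K * (L - w) - net_mob     # accumulation minus mobilization (78)
    dz = net_mob - S * z           # mobilization minus gated release (79)
    return w + dt * dw, z + dt * dz

# Dark equilibrium (S = 0): M*w = N*z and w -> L.
w, z = 0.0, 0.0
for _ in range(200000):
    w, z = step(w, z, S=0.0)
assert abs(w - L) < 1e-3
assert abs(M * w - N * z) < 1e-3

# Light (S > 0) drains the mobilized pool z more deeply than the storage pool w.
w_dark, z_dark = w, z
for _ in range(2000):
    w, z = step(w, z, S=1.0)
assert z < z_dark
assert (z_dark - z) > (w_dark - w)
```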
The BHL data are insufficient to conclude whether all the terms K, M, and N can vary due to light activation. A possible empirical test of how many terms are activated will be suggested below. Before this test is described, however, we note an interesting analogy between the five slow variables $z_1$, $z_2$, $z_3$, $z_4$, and $z_5$ that BHL defined to meet their data and the five slow variables w, z, K, M, and N. BHL needed the two slow variables $z_1$ and $z_2$ to fit their data at moderate light intensities, and the three extra variables $z_3$, $z_4$, and $z_5$ to describe components at very high light intensities. By comparison, the variables w, z, K, M, and N are five slow variables with w and z the dominant variables at intermediate light intensities, and K, M, and N possibly being slowly activated at high light intensities. Apart from the similarity in the numbers of slow variables in the two models, their dynamics and intuitive justification differ markedly, since our variables have an interpretation in general transmitter systems, whereas the BHL variables were formally defined to fit their data. A possible test of the number of enzymatically activated coefficients is the following. Recall that enzymatic activation of transmitter production changed the steady-state law relating T to S from (17) to (16). In other words, enzymatically activating one coefficient adds one power of S to both the numerator and the denominator of the law for T. Analogously, enzymatically activating n coefficients adds n powers of S to the numerator and denominator of this law. When n = 3, the law takes the form
$$T = \frac{P'S(1 + Q'S + R'S^2 + U'S^3)}{1 + V'S + W'S^2 + X'S^3 + Y'S^4}. \tag{83}$$
The higher-order coefficients R', U', X', and Y' are very small compared to the other coefficients P', Q', V', and W'. Thus the enzymatic activation terms add very small corrections to the high intensity values of T, and thus to the corresponding values of x via (50). If these high-intensity corrections could be measured, we would have an experimental test of how many of the terms K, M, and N are enzymatically activated. These higher powers do not alter the asymptotic shift A in (65), but they do alter the rate with which the asymptotic shift is approached as a function of increasing light intensity. We have hereby qualitatively explained all the main features of the BHL data using a minimal model of a miniaturized chemical transmitter. It remains to comment more completely on the form of the chain reaction which we used to convert light intensity I(t) into the signal S(t), and to display quantitative data fits. The simplest chain reaction is the one used by BHL:
$$\dot{y}_1 + \gamma_1 y_1 = I,$$
$$\dot{y}_2 + \gamma_2 y_2 = \gamma_1 y_1,$$
$$\vdots$$
$$\dot{y}_n + \gamma_n y_n = \gamma_{n-1} y_{n-1}, \tag{84}$$
and $S(t) = y_n(t)$.
We have used this chain reaction with good results. However, this law possesses the physically implausible property that $y_n \to \infty$ as $I \to \infty$. Only finite responses are possible in vivo. A related chain reaction avoids this difficulty and also fits the data well. This modified chain reaction approximates (84) at small I(t) values. It is
$$\dot{y}_1 + \gamma y_1 = (\delta - \epsilon y_1)I,$$
$$\dot{y}_2 + \gamma y_2 = (\delta - \epsilon y_2)y_1,$$
$$\vdots$$
$$\dot{y}_n + \gamma y_n = (\delta - \epsilon y_n)y_{n-1}, \tag{85}$$
and $S(t) = y_n(t)$.
It is easily checked that in response to a step of light intensity I, all the asymptotes $y_i(\infty)$ in (85) have the form $pI(1 + vI)^{-1}$, as in (30). The possibility exists that each step in the chain reaction is gated by a slow transducer. This would help to explain why so many slow variables appear at high light intensities even if not all the rates K, M, and N are enzymatically activated. Such a complication of the model adds no new conceptual insights and will remain unwarranted until more precise biochemical data are available.
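The difference between the linear chain reaction (84) and the saturating variant (85) can be seen numerically. In this sketch, which is an addition to the text, δ and ε are assumed values and a single γ is used for all stages of (85): the linear cascade's step response grows without bound in I, while every stage of (85) saturates below δ/ε, consistent with the $pI(1+vI)^{-1}$ form.

```python
gamma, delta, eps = 17.3, 17.3, 0.5   # gamma as quoted for Model I; delta, eps assumed
n, dt = 6, 1e-4

def chain_linear(I, t_end=2.0):
    """(84) with rates gamma_k = (n+1-k)*gamma: y_k' + gamma_k y_k = gamma_{k-1} y_{k-1}."""
    rates = [(n - k) * gamma for k in range(n)]
    y = [0.0] * n
    t = 0.0
    while t < t_end:
        inputs = [I] + [rates[k] * y[k] for k in range(n - 1)]
        y = [y[k] + dt * (inputs[k] - rates[k] * y[k]) for k in range(n)]
        t += dt
    return y[-1]

def chain_saturating(I, t_end=2.0):
    """(85): y_k' + gamma*y_k = (delta - eps*y_k) * (previous stage, or I)."""
    y = [0.0] * n
    t = 0.0
    while t < t_end:
        inputs = [I] + y[:-1]
        y = [y[k] + dt * ((delta - eps * y[k]) * inputs[k] - gamma * y[k])
             for k in range(n)]
        t += dt
    return y[-1]

# Linear chain: unbounded growth with I (steady state y_n = I / gamma_n).
assert chain_linear(1000.0) > 100.0 * chain_linear(1.0)

# Saturating chain: every response stays below delta/eps, as in (30).
for I in (1.0, 100.0, 10000.0):
    assert chain_saturating(I) < delta / eps
```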
14. Quantitative Analysis of Models

In this section we will compare the experimental measurements of Baylor and Hodgkin (1974) with the predictions of their model (Baylor et al., 1974b) and our Models I (equation (25)) and II (equations (4) and (19)). The BHL model is outlined in Section 15. For each of Models I
and II:
$$\frac{d}{dt}z = A(B - z) - Sz, \tag{4}$$
$$\frac{d}{dt}A = -C(A - A_0) + D[E - (A - A_0)]S, \tag{19}$$
we will examine the properties of the gated signal
$$T = Sz. \tag{2}$$
That is, we present a model in which the amount of hyperpolarization, x, is directly proportional to $Sz - S_0z_0$, where $T_0 = S_0z_0$ is the steady-state level. Similar results, with better quantitative fits, are obtained when the potential obeys the equation
$$C_0\frac{d}{dt}V = (V^+ - V)\frac{P_0}{1 + KT} - (V - V^-)g. \tag{42}$$
Recall that, if the potential obeys equation (42), then the amount of hyperpolarization is given by the corresponding equilibrium relation and, if the potential equilibrates quickly relative to z,
$$x \cong \frac{MT}{N + T}. \tag{47}$$
Equation (47) says that x is approximately proportional to T if N is large relative to
T. For the rest of the section we will consider the experiment (Section 4) in which a short flash of fixed intensity is superimposed on ever-increasing levels of background light. Let x be the amount of hyperpolarization and $x_0$ the equilibrium level for a fixed background intensity. As presented in Figure 3, the peak of $x - x_0$ decreases as background intensity increases, but the time at which the peak occurs first decreases and then increases as the intensity increases. Figures 14-17 show that, of the three models under consideration, Model II provides the best fit to the data and BHL the poorest. Figure 14a presents the results of the intracellular recordings of Baylor and Hodgkin (1974); Figure 14b gives the predictions of the BHL model; and Figures 14c and 14d give the predictions of Models I and II. In each case, the peak potential in the dark is scaled to the value 25. The minimal background intensity is calibrated by finding that level at which the peak potential is 12.5, or half the peak in the dark. Thus each model fits the peak data exactly in the dark and with the lowest positive background intensity. Note that the vertical scale in Figure 14d (Model II) is the same as that of Figure 14a, which depicts the data. By contrast, the scales of Figure 14b (BHL) and Figure 14c (Model I) have been adjusted to accommodate the poorer match between the data and BHL and between the data and Model I. These results on peak potential are summarized in Figure 15. Figure 16 indicates the time of peak hyperpolarization as a function of background intensity. Here, BHL gives a poor fit to the data; Model I gives a much better fit; and Model II, with the slow enzymatic activation, gives the best fit of all. Figure 17 shows the fit of the steady-state data (equilibrium levels of $x_0$) for the parameters chosen in each model.
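The flash-on-background paradigm can be sketched in a few lines for Model II, equations (4) and (19). The simulation below is not the fitted model of Figures 14-17: the background enters S directly rather than through the chain reaction, B is set to 1, time units are arbitrary, and only the adaptation constants of (89) are reused. It checks the robust qualitative property that brighter backgrounds deplete the gate and shrink the incremental peak of the gated signal $T = Sz$.

```python
A0, C, D, E = 0.5, 0.2, 0.0047, 18.83   # adaptation constants, as in (89)
B = 1.0                                  # target gate level (arbitrary scale)
dt = 0.001

def peak_increment(background, flash, flash_dur=0.011, t_end=0.5):
    """Peak of T - T0 for a brief flash added to an equilibrated background."""
    z, a = B, A0
    # equilibrate the gate z and the adaptation coefficient a to the background
    for _ in range(int(20.0 / dt)):
        z += dt * (a * (B - z) - background * z)                     # (4)
        a += dt * (-C * (a - A0) + D * (E - (a - A0)) * background)  # (19)
    T0 = background * z
    peak, t = 0.0, 0.0
    while t < t_end:
        S = background + (flash if t < flash_dur else 0.0)
        z += dt * (a * (B - z) - S * z)
        a += dt * (-C * (a - A0) + D * (E - (a - A0)) * S)
        peak = max(peak, S * z - T0)
        t += dt
    return peak

# The incremental response to the same flash shrinks on brighter backgrounds,
# as in Figures 14-15.
assert peak_increment(20.0, 10.0) < peak_increment(0.0, 10.0)
```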
The Chain Reaction

In Models I and II, the signal S(t) is given by a chain reaction described by equation (84) (Baylor et al., 1974a). The constants n and $\gamma_1, \ldots, \gamma_n$ are chosen so that, when the light stimulus I(t) is a flash in the dark, S(t) matches the experimental dark response (top curve of Figure 14a). Since equation (84) is linear, S(t) is equal to the sum of the dark response curve plus a constant which is proportional to the background intensity. Consequently, in this paradigm, any choice of chain reaction constants which provides a good fit to the dark curve will fit the data as well as any other choice. A simple functional form which provides an adequate fit in the dark is
$$f(t) = Je^{-\gamma t}\left(1 - e^{-\gamma t}\right)^{5}. \tag{86}$$
A suitable choice of constant J makes f(t) equal to S(t) in the dark when n = 6 and
$$\gamma_1 = 6\gamma,\quad \gamma_2 = 5\gamma,\quad \ldots,\quad \gamma_6 = \gamma. \tag{87}$$
This is the "independent activation" form of Baylor et al. (1974a). This form is used in Model I (γ = 17.3) and Model II (γ = 17.0). Other chain reactions give similar results. In the BHL model, a similar chain reaction is used, except that the last step is modified to incorporate the unblocking variable $z_2$ and the slow process $z_3$ (Sections 13 and 15).
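Two properties quoted above are easy to verify numerically: the linearity of (84), which makes a steady background simply add a constant to S(t), and the independent-activation shape of the dark flash response, which peaks near $t = \ln 6/\gamma$ for the rates of (87). The check below is an addition to the text; the flash amplitude, background level, and integration scheme are illustrative.

```python
import math

gamma, n, dt = 17.3, 6, 1e-5   # gamma as quoted for Model I; time in seconds

def cascade(I_func, t_end=0.4):
    """Integrate the chain reaction (84) with rates (n+1-k)*gamma; return y_n samples."""
    rates = [(n - k) * gamma for k in range(n)]   # 6*gamma, 5*gamma, ..., gamma
    y = [0.0] * n
    out, t = [], 0.0
    while t < t_end:
        inputs = [I_func(t)] + [rates[k] * y[k] for k in range(n - 1)]
        y = [y[k] + dt * (inputs[k] - rates[k] * y[k]) for k in range(n)]
        out.append(y[-1])
        t += dt
    return out

flash = lambda t: 1.0 if t < 0.011 else 0.0      # 11 msec flash
dark = cascade(flash)                             # flash in the dark
both = cascade(lambda t: flash(t) + 5.0)          # same flash on a background
background = cascade(lambda t: 5.0)               # background alone

# Linearity: the flash-on-background response equals the dark response plus
# the background response, sample by sample.
for a, b, c in zip(both, dark, background):
    assert abs(a - (b + c)) < 1e-9

# Shape: the dark response peaks near ln(6)/gamma, about 0.1 sec for gamma = 17.3.
peak_t = dark.index(max(dark)) * dt
assert abs(peak_t - math.log(6.0) / gamma) < 0.01
```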
Figure 14. Intracellular response curves $x(t) - x_0$ showing the effect of a flash superimposed on a background light of fixed intensity. Each horizontal axis represents the time since the middle of the flash, which lasts 11 msec. The vertical axis is scaled so that the peak value of $x(t) - x_0 = x(t)$ in the dark is equal to 25. The number above each curve is $\log_{10}$ of the background light intensity $I_0$, which is calibrated so that when $\log_{10} I_0 = 3.26$, the peak of $x(t) - x_0$ is equal to 12.5. (a) The Baylor-Hodgkin (1974) data. (b) The BHL model (redrawn from Baylor et al., 1974b, p.785).
Figure 14 (continued). (c) Model I. (d) Model II. Note that the vertical scales are not all the same.
Figure 15. The size of the peak hyperpolarization, as a function of $\log_{10} I_0$, for the Baylor-Hodgkin data and the three models. Note that at high input intensities, BHL differs from the data and Model II by a factor of 10.

Parameter Values of Model I and Model II

Equation (88) contains the parameter values chosen for Model I in Figures 14-17. Equation (89) contains the parameter values for Model II. We wish to emphasize, however, that the model properties described in this section are robust over a wide range of parameter values and are not particular to the choices listed below.
Model I obeys
$$\frac{d}{dt}z = A(B - z) - Sz, \tag{4}$$
with
$$A_0 = 1.8,\quad F = 0.00333,\quad G = 0.00179. \tag{88}$$
Figure 16. Times at which the peak hyperpolarization occurs for the Baylor-Hodgkin data and the three models. Note that the input intensity at which the turn-around occurs and the dynamic range of peak times are much too small in the BHL model. BHL consider this the most serious defect of their model.

Model II obeys (4) together with
$$\frac{d}{dt}A = -C(A - A_0) + D[E - (A - A_0)]S, \tag{19}$$
with
$$A_0 = 0.5,\quad C = 0.2,\quad D = 0.0047,\quad E = 18.830. \tag{89}$$
For each model, B is arbitrary, since the gated signal is assumed to be proportional to Sz: a particular B would just multiply Sz by a constant factor.

15. Comparison with the Baylor, Hodgkin, Lamb Model
BHL first observed that the voltage response of a turtle cone to a weak and brief flash of light (e.g., 11 msec) can rise for over 100 milliseconds before it slowly decays over a period of several hundred milliseconds. In order to maintain a prolonged response after the flash terminates, they assume that light sets off a chain reaction (84). They also assume that, in response to small light signals, the change V(t) in membrane potential is proportional to $y_n(t)$. The main physical idea is suggested by the fact that, in response to higher input intensities, the chain reaction fits the data during the rising portion of the potential,
Figure 17. Graphs of the steady-state hyperpolarization $x_0$ in response to the constant light intensity $I_0$ for the Baylor-Hodgkin data and the three models.

but yields higher values than the potential during its falling phase. Because this effect occurs at later times, it is ascribed to a process that is triggered at the end of the chain reaction. Because the potential undershoots the chain reaction, it is assumed that this later process interferes with the potential. Such considerations led BHL to argue that the chain reaction activates a process which elicits the initial hyperpolarization of the potential. Thus it is assumed that "after a certain delay, light liberates a substance, possibly calcium ions, which blocks sodium channels in the outer segments" (Baylor et al., 1974b, p.760) and thereby tends to hyperpolarize the cone. The larger hyperpolarization produced by the chain reaction than by the data is then ascribed to subsequent processes that interfere with ("unblock") the blocking substance during the falling phase of the potential. Their entire model is based on this assumption. Denote the concentration of blocking substance by $z_1(t)$ and the concentrations of the unblocking substances by $z_2(t)$ and $z_3(t)$. Function $z_1(t)$ replaces the last stage $y_n(t)$ of the chain reaction in (84). Baylor et al. (1974b) choose the equations for $z_1$ to fit the data in Figure 3. To explain these and related data, it is assumed that the $z_j$ act on each other via a nonlinear feedback process that is defined as follows:
$$\tau_V\frac{d}{dt}V = E - V(1 + G_f + G_l) \tag{90}$$
Initial conditions on all $y_i(t)$ and $z_j(t)$ for $t \leq 0$ are 0, and $V(0) = V_D$, where $V_D$, the potential in darkness, satisfies a condition chosen
to make $V_D$ an equilibrium point of (90) in the absence of light. The potential V is related to the hyperpolarization U via the equation $U = V - V_D$. Equation (90) describes how the potential V is hyperpolarized by changes in the conductances $G_f$ and $G_l$. Equation (91) shows that $G_l$ is a decreasing function of $z_1$. Equations (92) and (93) say that $G_f$ time-averages a logistic function of V. Equation (94) describes the chain reaction with end product $y_{n-1}$ and light input I(t). Equations (95)-(98) describe the nonlinear chain reaction of blocking and unblocking variables $z_1$, $z_2$, and $z_3$ that is driven by the output $y_{n-1}$ of the chain reaction. Equation (99) defines parameters to make $V_D$ the equilibrium point of (90) when I(t) = 0. The equations (90)-(99) are an ingenious interpretation of the data, but their main features, such as the chain reaction of blocking and unblocking variables in (95)-(98), the nonlinear dependence of the blocking and unblocking rates on these variables in (98), and the existence of the voltage-dependent conductance $G_f$ in (92)-(93), are hard to interpret as logical consequences of a well-designed transducer, and have difficulties meeting the data quantitatively, as shown in Section 14. To explain the turn-around of potential peaks, BHL use the nonlinear feedback process between blocking and unblocking variables in (90), (91), (95)-(98) to argue that "the shortening of the time to peak occurs because the concentration of $z_2$ increases and speeds up the conversion of $z_1$ to $z_2$" (Baylor et al., 1974b, p.784). To explain the eventual slowdown of response to high background intensities, the parameters are chosen (e.g., A > 1 in (98)) so that "at very high levels of $z_2$ the reaction is so fast that there is no initial peak and the reaction is in equilibrium throughout the whole response.
This results in an increase in the time to peak because the rate of destruction of $z_2$ at a high intensity is less than the rate of destruction of $z_1$ at some lower intensity" (Baylor et al., 1974b, p.784). Thus, the existence of process $z_2$ and its properties are postulated to fit these data rather than to satisfy fundamental design constraints. Of great qualitative importance is the fact that this explanation of the turn-around in potential peaks implies
the nonexistence of overshoots at high flash intensities. This implication does not hold in our gating model. It forces the following auxiliary hypotheses in the BHL model. Baylor, Hodgkin, and Lamb note that the above mechanisms do not suffice to explain certain phenomena that occur after a strong flash. In particular, the potential transiently overshoots its plateau, achieving a peak change of 15-25 mV, before it settles to a plateau of 12-20 mV. They did their double flash experiments (Figure 7) to study this phenomenon. In Figure 7, the second flash does not elicit a second overshoot, but rather merely prolongs the plateau phase. They need two variable conductances $G_l$ and $G_f$ to account for these data. The light-sensitive conductance $G_l$ in (91) is a decreasing function of $z_1$, which is, in turn, an increasing function of light intensity due to (94) and (95). The conductance $G_f$ in (92) depends on light only through changes in potential. In particular, $G_f$ is a time-average of a logistic function (93) of the potential. The main idea is that the light-sensitive conductance $G_l$ is shut off by the first flash. This leads to an initial hyperpolarization which changes $G_f$. This latter change decreases the potential at which the cell saturates from 30 to 20 mV, and causes the potential to return toward its plateau value. At the plateau value, $G_f$ is insensitive to a new flash, so a second overshoot does not occur, but the newly reactivated chain reaction does prolong the plateau phase. Even without the extra conductance $G_f$, some overshoot can be achieved in the model in response to weaker lights which hyperpolarize V(t) by 5-10 mV. These overshoots are due to delayed desensitization, but they disappear when strong lights perturb the BHL model, unlike the situation in real cones; hence the need for $G_f$.
The authors also use the conductance Gf to explain why offset of a rectangular pulse of depolarizing current that is applied during a cone’s response to light does cause a rebound hyperpolarization, whereas a depolarizing current in the absence of light does not (Figure 8).
16. Conclusion

We have indicated how a minimal model for a miniaturized unbiased transducer that is realized by a depletable chemical transmitter provides a conceptually simple and quantitatively accurate description of parametric turtle cone data. These improvements on the classical studies of Baylor, Hodgkin, and Lamb are, at bottom, due to the use of a "gating" rather than an "unblocking" concept to describe the transmitter's action. Having related the experiments on turtle cones to a general principle of neural design, we can recognize the great interest of testing whether analogous parametric experiments performed on nonvisual cells, wherein slowly varying transmitters are suspected to act, will also produce similar reactions in cell potential. Where the answer is "no," can we attribute this fact to specialized differences in the enzymatic modulation of photoreceptor transmitters that enable them to cope with the wide dynamic range of light intensity fluctuations?
ADDENDUM

The anatomical site at which the transmitter gating stage described herein may take place is presently uncertain. Earlier experimental studies suggested a site in the outer segment. More recent work using suction electrodes suggests that this site is unlikely. The model's validity does not depend on this issue. Rather, we provide parametric tests of the gate's existence, wherever it might be spatially located.
REFERENCES

Arden, G.B. and Low, J.C., Changes in pigeon cone photocurrent caused by reduction in extracellular calcium activity. Journal of Physiology, 1978, 280, 55-76.
Bäckström, A.-C. and Hemilä, S.O., Dark-adaptation in frog rods: Changes in the stimulus-response function. Journal of Physiology, 1979, 287, 107-125.
Baylor, D.A. and Hodgkin, A.L., Changes in time scale and sensitivity in turtle photoreceptors. Journal of Physiology, 1974, 242, 729-758.
Baylor, D.A., Hodgkin, A.L., and Lamb, T.D., The electrical response of turtle cones to flashes and steps of light. Journal of Physiology, 1974, 242, 685-727 (a).
Baylor, D.A., Hodgkin, A.L., and Lamb, T.D., Reconstruction of the electrical responses of turtle cones to flashes and steps of light. Journal of Physiology, 1974, 242, 759-791 (b).
Boynton, R.M. and Whitten, D.N., Visual adaptation in monkey cones: Recordings of late receptor potentials. Science, 1970, 170, 1423-1426.
Caldwell, P.C., Calcium movements in muscle. In R.J. Podolsky (Ed.), Contractility of muscle cells and related processes. Englewood Cliffs, NJ: Prentice-Hall, 1971, pp.105-114.
Čapek, R., Esplin, D.W., and Salehmoghaddam, S., Rates of transmitter turnover at the frog neuromuscular junction estimated by electrophysiological techniques. Journal of Neurophysiology, 1971, 34, 831-841.
Dowling, J.E. and Ripps, H., S-potentials in the skate retina: Intracellular recordings during light and dark adaptation. Journal of General Physiology, 1971, 58, 163-189.
Dowling, J.E. and Ripps, H., Adaptation in skate photoreceptors. Journal of General Physiology, 1972, 60, 698-719.
Eccles, J.C., The physiology of synapses. New York: Springer-Verlag, 1964.
Esplin, D.W. and Zablocka-Esplin, B., Rates of transmitter turnover in spinal monosynaptic pathway investigated by neurophysiological techniques. Journal of Neurophysiology, 1971, 34, 842-859.
Grabowski, S.R., Pinto, L.H., and Pak, W.L., Adaptation in retinal rods of axolotl: Intracellular recordings. Science, 1972, 176, 1240-1243.
Grossberg, S., Some physiological and biochemical consequences of psychological postulates. Proceedings of the National Academy of Sciences, 1968, 60, 758-765.
Grossberg, S., On the production and release of chemical transmitters and related topics in cellular control. Journal of Theoretical Biology, 1969, 22, 325-364.
Grossberg, S., A neural theory of punishment and avoidance, I: Qualitative theory. Mathematical Biosciences, 1972, 15, 39-67 (a).
Grossberg, S., A neural theory of punishment and avoidance, II: Quantitative theory. Mathematical Biosciences, 1972, 15, 253-285 (b).
Grossberg, S., A neural model of attention, reinforcement, and discrimination learning. International Review of Neurobiology, 1975, 18, 263-327.
Grossberg, S., Adaptive pattern classification and universal recoding, II: Feedback, expectation, olfaction, and illusions. Biological Cybernetics, 1976, 23, 187-202.
Grossberg, S., A theory of human memory: Self-organization and performance of sensory-motor codes, maps, and plans. In R. Rosen and F. Snell (Eds.), Progress in theoretical biology, Vol. 5. New York: Academic Press, 1978 (a).
Grossberg, S., Behavioral contrast in short-term memory: Serial binary memory models or parallel continuous memory models? Journal of Mathematical Psychology, 1978, 17, 199-219 (b).
Grossberg, S., How does a brain build a cognitive code? Psychological Review, 1980, 87, 1-51.
Grossberg, S., Psychophysiological substrates of schedule interactions and behavioral contrast. In S. Grossberg (Ed.), Mathematical psychology and psychophysiology. Providence, RI: American Mathematical Society, 1981 (a).
Grossberg, S., Some psychophysiological and pharmacological correlates of a developmental, cognitive, and motivational theory. In J. Cohen, R. Karrer, and P. Tueting (Eds.), The proceedings of the sixth international conference on evoked potentials, 1981 (b).
Hemilä, S., Background adaptation in the rods of the frog's retina. Journal of Physiology, 1977, 266, 721-741.
Hemilä, S., An analysis of rod outer segment adaptation based on a simple equivalent circuit. Biophysics of Structure and Mechanism, 1978, 4, 115-128.
Kleinschmidt, J., Adaptation properties of intracellularly recorded gekko photoreceptor potentials. In H. Langer (Ed.), Biochemistry and physiology of visual pigments. New York: Springer-Verlag, 1973.
Kleinschmidt, J. and Dowling, J.E., Intracellular recordings from gekko photoreceptors during light and dark adaptation. Journal of General Physiology, 1975, 66, 617-648.
Normann, R.A. and Werblin, F.S., Control of retinal sensitivity, I: Light and dark adaptation of vertebrate rods and cones. Journal of General Physiology, 1974, 63, 37-61.
Zablocka-Esplin, B. and Esplin, D.W., Persistent changes in transmission in spinal monosynaptic pathway after prolonged tetanization. Journal of Neurophysiology, 1971, 34, 860-867.
Chapter 6
THE ADAPTIVE SELF-ORGANIZATION OF SERIAL ORDER IN BEHAVIOR: SPEECH, LANGUAGE, AND MOTOR CONTROL

Preface

This Chapter describes several of the general organizational principles, network modules, and neural mechanisms that our group has used to analyse and predict data about speech, language, and motor control. The Chapter describes a progression of concepts and mechanisms, leading from simple to more complex, that we have used to analyse such temporally ordered behaviors. In this way, the reader can discern how relatively simple and pre-wired circuits for the control of temporal order, as are found, say, in the command cell networks of invertebrates, are functionally related to much more context-sensitive and adaptive circuits for the control of speech and motor planning. These latter circuits are specialized versions of adaptive resonance theory (ART) architectures (Volume I). A comparison of the temporal order mechanisms described herein with the ART circuits that were used to analyse data about conditioning and cognitive recognition learning in Volume I enables the reader to better appreciate how evolution may specialize a fundamental network module to accomplish a surprising diversity of behavioral tasks. All of the results in this Chapter are predicated upon a proper choice of functional units. These functional units are spatial patterns of short term memory (STM) activation across ensembles of cells and spatial patterns of associative long term memory (LTM) traces across ensembles of cell pathways. Many scientists now realize that distributed processing is a characteristic feature of neural dynamics. A precise mathematical understanding is needed to confidently design distributed processes, because improperly designed distributed processors can become wildly unstable in any but the most artificially constrained environments.
The processes described in this and the subsequent Chapters were derived from an analysis of how systems with desirable emergent behavioral properties can stably self-organize in complex environments. The Chapter begins by noting that a number of popular models cannot withstand an analysis of their internal structure, let alone of their ability to stably self-organize. All too little modeling expertise is currently being devoted to building principled theories which can last, or to analysing the computational basis for a model's partial successes. A heavy price is often paid by these models. Their internal flaws are mirrored by their narrow explanatory range. Three examples of how concepts about self-organization can guide, and even drastically change, one's thinking are highlighted here. Many people consider the process of storing temporal order information in STM, or working memory, to be primarily a performance issue that is not strongly linked to learning problems. Indeed, the shift from studies of verbal learning to studies of free recall in the late 1960's may partly be understood as an effort to avoid perplexing issues about context-sensitive changes in the learned units, or chunks, that emerge during verbal learning experiments. In contrast, I suggest that the laws which govern the storage of temporal order information in STM are designed to ensure that STM is updated in a way that enables temporally stable list learning in LTM to occur. In other words, I relate the laws which store individual items in STM to the LTM laws which group, or chunk, these items into unitized lists. I derive laws for storing temporal order information in STM from the hypothesis that STM storage enables LTM to form unitized list codes in a temporally stable way. Such laws show how to alter the STM activities of previous items in response to the presentation of new items so that the repatterning of STM
activities that is caused by the new items does not inadvertently obliterate the LTM codes for old item groupings. Remarkably, despite these adaptive constraints, and often because of them, temporal order information is not always stored veridically, as is true during free recall. This analysis clarifies how the entire spatial pattern of temporal order information over item representations can act as a working memory representation of a speech stream. This representation helps to explain a variety of data concerning the LTM encoding of speech presented at different rates, notably how shortening or lengthening of a later vowel can alter the perception of a prior consonant, or how varying the durations of silence and fricative noise that follow a syllable can influence the perception of the syllable. Another issue that is clarified by an analysis of stable unitization concerns the functional units that are processed at successive stages of a language or visual recognition system. From the perspective of self-organization, it seems wrong to assume that a letter level exists that is followed by a word level. Although an extensive analysis of more appropriate levels was already published in 1978, the hypothesis that letter and word levels exist seems to be dying hard. This is true, I believe, because people confuse their lay experiences using letters and words with the abstract unitization operations which subserve their lay experiences. I suggest that, instead of a letter level and a word level, there exists an item level and a list level, or more precisely a "temporal order information over item representations in STM" level and a "sublist chunks in STM" level. These alternative levels differ in many ways from letter and word levels. For example, a familiar letter that is not a word cannot be represented on the word level. In contrast, a familiar letter that is or is not a word can be represented on both the item and the list level.
A familiar phoneme can also be represented on the item and list levels, and can thus, under certain circumstances, interact with semantic information. The analysis of how to design the list level leads to a multiple grouping circuit called a masking field. Computer simulations of a masking field architecture are described in Chapter 8. The present Chapter describes some of the data which a masking field list level has successfully predicted (such as a word length effect in word superiority studies) and can help to explain (such as how attentional processes can prevent word superiority or phonemic restoration effects from occurring, and how the Magic Number 7 of G.A. Miller is dynamically generated). A third self-organization theme concerns the manner in which temporal order information is encoded in LTM. I suggest that mechanisms for explaining data about one of my first loves, serial verbal learning, are also used to learn cognitive list chunks and predictive motor plans. I also show how top-down mechanisms for learning temporal order information can supplement, and often supplant, more classical notions such as associative chaining. Indeed, bottom-up adaptive filtering of temporal order information in STM can encode unitized speech or motor planning nodes, whose top-down read-out of learned templates can encode temporal order information in LTM. This circuit as a whole is a specialized ART architecture. These results provide a general foundation for the extensive computer simulation studies of specialized neural circuits governing adaptive sensory-motor control which Michael Kuperstein and I have published as a separate volume in this series.
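The STM repatterning constraint discussed earlier in this Preface can be illustrated with a toy computation. This is my own illustrative sketch, not the chapter's formal model; the scaling factor and activity values are invented. The key idea is that when a new item enters working memory, all previously stored activities are multiplied by a common factor, so the ratios among past activities, which the bottom-up adaptive filter reads into LTM, are left invariant even though the absolute activities change.

```python
def store_item(stm, omega=0.9, new_activity=1.0):
    """Add a new item to a working-memory activity pattern.

    Previously stored activities are all scaled by the common factor
    omega, so the *ratios* among old items are untouched and LTM codes
    that grouped those items are not obliterated.  omega and
    new_activity are hypothetical illustration values.
    """
    return [a * omega for a in stm] + [new_activity]

stm = []
for _ in "ABCD":            # present a four-item list
    stm = store_item(stm)

print([round(a, 3) for a in stm])   # -> [0.729, 0.81, 0.9, 1.0]
# With omega < 1 this particular choice yields a recency gradient;
# with omega > 1 the same update yields a primacy gradient -- but in
# both cases the relative pattern over the old items is preserved.
```

Whether recall exhibits a primacy, recency, or bowed gradient then depends on how such factors combine, which is the kind of question the chapter's formal treatment addresses.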
Pattern Recognition by Humans and Machines, Vol. 1: Speech Perception, E.C. Schwab and H.C. Nusbaum (Eds.). © 1986 Academic Press, Inc. Reprinted by permission of the publisher.
THE ADAPTIVE SELF-ORGANIZATION OF SERIAL ORDER IN BEHAVIOR: SPEECH, LANGUAGE, AND MOTOR CONTROL
Stephen Grossberg†
1. Introduction: Principles of Self-organization in Models of Serial Order: Performance Models versus Self-Organizing Models

The problem of serial order in behavior is one of the most difficult and far-reaching problems in psychology (Lashley, 1951). Speech and language, skilled motor control, and goal-oriented behavior generally are all instances of this profound issue. This chapter describes principles and mechanisms that have been used to unify a variety of data and models, as well as to generate new predictions concerning the problem of serial order. The present approach differs from many alternative contemporary approaches by deriving its conclusions from concepts concerning the adaptive self-organization (e.g., the development, chunking, and learning) of serial behavior in response to environmental pressures. Most other approaches to the problem, notably the familiar information processing and artificial intelligence approaches, use performance models for which questions of self-organization are raised peripherally if at all. Some models discuss adaptive issues but do not consider them in a real-time context. A homunculus is often used either implicitly or explicitly to make the model work. Where a homunculus is not employed, models are often tested numerically in such an impoverished learning environment that their instability in a more realistic environment is not noticed. These limitations in modeling approaches have given rise to unnecessary internal paradoxes and predictive limitations within the modeling literature. I suggest that such difficulties are due to the facts that principles and laws of self-organization are rate-limiting in determining the design of neural processes, and that problems of self-organization are the core issues that distinguish psychology from other natural sciences such as traditional physics.
In light of these assertions, it is perhaps more understandable why a change of terminology or usage of the same concepts and mechanisms to discuss a new experiment can be hailed as a new model. The shared self-organizing principles that bind the ideas in one model to the ideas in other models are frequently not recognized. This style of model building tends to perpetuate the fragmentation of the psychological community into noninteracting specialities, rather than foster the unifying impact whereby modeling has transformed other fields.
† Supported in part by the National Science Foundation (NSF IST-80-00257 and NSF IST-84-17756) and the Office of Naval Research (ONR N00014-83-K0337).
The burgeoning literature on network and activation models in psychology has, for example, routinely introduced as new ideas concepts that were previously developed to explain psychological phenomena in the neural modeling literature. Such concepts as unitized nodes, the priming of short-term memory, probes of long-term memory, automatic processing, spreading activation, distinctiveness, lateral inhibition, hierarchical cascades, and feedback were all quantitatively used in the neural modeling literature before being used by experimental psychologists. Moreover, the later users have often ignored the hard-won lessons to be found in the neural modeling literature. The next section illustrates some characteristic difficulties of models and how they can be overcome by the present approach (this discussion can be skipped on a first reading).

2. Models of Lateral Inhibition, Temporal Order, Letter Recognition, Spreading Activation, Associative Learning, Categorical Perception, and Memory Search: Some Problem Areas
A. Lateral Inhibition and the Suffix Effect
From a mathematical perspective, a model that uses lateral inhibition is a competitive dynamical system (Grossberg, 1980a). Smale (1976) has proven that the class of competitive dynamical systems contains systems capable of exhibiting arbitrary dynamical behavior. Thus to merely say that lateral inhibition is at work is, in a literal mathematical sense, a vacuous statement. One needs to define precisely the dynamics that one has in mind before anything of scientific value can be gleaned. Even going so far as to say that the inhibitory feedback between nearby populations is linear says nothing of interest, because linear feedback can cause such varied phenomena as oscillations that never die out or the persistent storage of short-term memory patterns, depending on the anatomy of the network as a whole (Cohen and Grossberg, 1983; Grossberg, 1978c, 1980a). An imprecise definition of inhibitory dynamics will therefore inevitably produce unnecessary controversies, as has already occurred. For example, Crowder's (1978) explanation of the suffix effect (Dallett, 1965) and Watkins and Watkins's (1982) critique of the Crowder theory both focus on the purported property of recurrent lateral inhibition that an extra suffix should weaken the suffix effect due to disinhibition. However, this claim does not necessarily hold in certain shunting models of recurrent lateral inhibition that are compatible with the suffix effect (Grossberg, 1978a, 1978e). This controversy concerning the relevance of lateral inhibition to the suffix effect cannot be decided until the models of lateral inhibition used to analyse that effect are determined with complete mathematical precision. A type of lateral inhibition that avoids the controversy is derived from a rule of self-organization that guarantees the stable transfer of temporal order information from short-term memory to long-term memory as new items continually perturb a network (Section 33).
B.
Temporal Order Information in Long-Term Memory A more subtle problem arises in Estes’s (1972) influential model of temporal order information in long-term memory. Estes (1972, p.183) writes: “The inhibitory tendencies which are required to properly shape the response output become established in memory and account for the long term preservation of order information.” Estes goes on to say that inhibitory connections form from the representations of earlier items in the list to the representations of later list items. Consequently, earlier items will be less inhibited than later items on recall trials and will therefore be performed earlier. Despite the apparent plausibility of this idea, a serious problem emerges when one writes down dynamical equations for how these inhibitory interactions might be learned in real-time. One then discovers that learning by this mechanism is unstable because, as Estes realized, the joint activation of two successive network nodes is needed for the network to know which inhibitory pathway should be strengthened. As such an inhibitory pathway is strengthened, it can more strongly inhibit its receptive node, which
is the main idea of the Estes model. However, when this inhibitory action inhibits the receptive node, it undermines the joint excitation that is needed to learn and remember the strong inhibitory connection. The inhibitory connection then weakens, the receptive node is disinhibited, and the learning process is initiated anew. An unstable cycle of learning and forgetting order information is thus elicited through time. Notwithstanding the heuristic appeal of Estes's mechanism, it cannot be correct in its present form. All conclusions that use this mechanism therefore need revision, such as Rumelhart and Norman's (1982) discussion of typing and MacKay's (1982) discussion of syntax. One might try to escape the instability problem that arises in Estes's (1972) theory of temporal order information by claiming that inhibitory connections are prewired into a sequential buffer and that many different lists can be performed from this buffer. Unfortunately, traditional buffer concepts (e.g., Atkinson and Shiffrin, 1968; Raaijmakers and Shiffrin, 1981) face design problems that are as serious as the instability criticism (Grossberg, 1978a). In this way, the important design problem of how to represent temporal order information in short-term and long-term memory without using either a traditional buffer or conditioned inhibitory connections is vividly raised. Solutions of these problems are suggested in Sections 12-19 and Section 34.
C. Letter and Word Recognition
A similar instability problem occurs in the work on letter perception of Rumelhart and McClelland (1982). They write: "Each letter node is assumed to activate all of those word nodes consistent with it and inhibit all other word nodes. Each active word node competes with all other word nodes . . ." (Rumelhart and McClelland, 1982, p.61). Obviously, the selective connections between letter nodes and word nodes are not prewired into such a network at birth.
Otherwise all possible letter-word connections for all languages would exist in every mind, which is absurd. Some of these connections are therefore learned. If the inhibitory connections are learned, then the model faces the same instability criticism that was applied to Estes's (1972) model. Grossberg (1984b) shows, in addition, that if the excitatory connections are learned, then learning cannot get started. The connections hypothesized by Rumelhart and McClelland also face another type of challenge from a self-organization critique. How does the network learn the difference between a letter and a word? Indeed, some letters are words, and both letters and words are pronounced using a temporal series of motor commands. Thus many properties of letters and words are functionally equivalent. Why, then, should each word compete with all other words, whereas no letter competes with all other words? An alternative approach is suggested in Section 37, where it is suggested that the levels used in the Rumelhart and McClelland model are insufficient. The McClelland and Rumelhart model faces such difficulties because it considers only performance issues concerning the processing of four-letter words. In contrast, the present approach considers learning and performance issues concerning the processing of words of any length. Its analysis of how a letter stream of arbitrary length is organized during real-time presentation leads to a process that predicts, among other properties, a word-length effect in word superiority studies (Grossberg, 1978e, Section 41; reprinted in Grossberg, 1982d, p.595). Subsequent data have supported this prediction (Matthei, 1983; Samuel, van Santen, and Johnston, 1982, 1983). No such prediction could be made using Rumelhart and McClelland's (1982) model, since it is defined only for four-letter words.
Moreover, the theoretical ideas leading to predictions such as the word-length effect are derived from an analysis of how letter and word representations are learned. An analysis of performance issues per se provides insufficient constraints on processing design. D. Spreading Activation Similar difficulties arise from some usages of ideas like spreading activation in network memory models. In Anderson (1976) and Collins and Loftus (1975), the amount of activation arriving at a network node is a decreasing function of the number of links
the activation has traversed, and the time for activation to spread is significant (about 50-100 msec per link). By contrast, there is overwhelming neural evidence of activations that do not pass passively through nerve cells and that are not carried slowly and decrementally across nerve pathways (Eccles, 1952; Kuffler and Nicholls, 1976; Stevens, 1966). Rather, activation often cannot be triggered at nerve cells unless proper combinations of input signals are received, and when a signal is elicited, it is carried rapidly and nondecrementally along nerve pathways. Although these ideas have been used in many neural network analyses of psychological data, their unfamiliarity to many psychologists is still a source of unnecessary controversy (Ratcliff and McKoon, 1981). Most spreading activation models are weakened by their insufficient concern for which nodes have a physical existence and which dynamical transactions occur within and between nodes. Both of these issues are special cases of the general question of how a node can be self-organized through experience. Anderson's (1983) concept of a fan effect in spreading activation illustrates these difficulties. Anderson proposes that if more pathways lead away from a concept node, each pathway can carry less activation. In this view, activation behaves like a conserved fluid that flows through pipe-like pathways. Hence the activation of more pathways will slow reaction time, other things being equal. The number of pathways to which a concept node leads, however, is a learned property of a self-organizing network. The pathways that are strengthened by learning are a subset of all the pathways that lead away from the concept node. At the concept node itself, no evidence is available to label which of these pathways was strengthened by learning (Section 3). The knowledge of which pathways are learned is only available by testing how effectively the learned signals can activate their recipient nodes.
It is not possible, in principle, to make this decision at the activating node itself. Since many nodes may be activated by signals from a single node, the network decides which nodes will control observable behavior by restricting the number of activated nodes. Inhibitory interactions among the nodes help to accomplish this task. Inhibitory interactions are not used in Anderson's (1983) theory, although it is known that purely excitatory feedback networks are unstable unless artificially narrow choices of parameters are made. Without postulating that activation behaves like a conserved fluid, a combination of thresholds and inhibitory interactions can generate a slowing of reaction time as the number of activated pathways is increased. In fact, the transition from a fan concept (associative normalization) to inhibitory interactions and thresholds was explicitly carried out and applied to the study of reaction time (Grossberg, 1968b, 1969c). This theoretical step gradually led to the realization that inhibitory interactions cause limited capacity properties as a manifestation of a fundamental principle of network design (Section 19). Anderson (1983) intuitively justifies his fan concept in terms of a limited capacity for spreading activation, but he does not relate the limited capacity property to inhibitory processes.
E. Associative Learning and Categorical Perception
In the literature on associative learning, confusion has arisen due to an insufficient comparative analysis of the adaptive models that are available. For example, some authors erroneously claim that all modern associative models use "Hebbian synapses" (Anderson, Silverstein, Ritz, and Jones, 1977) and thus go on to equate important differences in processing capabilities that exist among different associative models. For example, in their discussion of long-term memory, Anderson et al.
(1977) claim that the change in synaptic weight z_ij from a node v_i to a node v_j equals the product of the activity f_i of v_i with the activity g_j of v_j, where f_i and g_j may be positive or negative. If both f_i and g_j are negative, two inhibited nodes can generate a positive increment in memory, which is neurally unprecedented. Also, if f_i is positive and g_j is negative, a negative memory trace z_ij can occur. Later, if f_i is negative, its interaction with the negative memory z_ij causes a positive activation of v_j. Thus an inhibited node v_i can, via a negative memory trace z_ij, excite a node v_j. This property is also neurally
unprecedented. Both of these properties follow from the desire of Anderson et al. (1977) to apply ideas from linear system theory to neural networks. These problems do not arise in suitably designed nonlinear associative networks (Section 3). The desire to preserve the framework of linear system theory also led Anderson et al. (1977) to employ a homunculus in their model of categorical perception, which cannot adequately be explained by a linear model. To start their discussion of categorical perception, they allowed some of their short-term memory activities to become amplified by positive linear feedback. Left unchecked in a linear model, the positive feedback would force the activities to become infinite, which is physically impossible. To avoid this property, the authors imposed a rule that stops the growth of each activity when it reaches a predetermined maximal or minimal size, and thereafter stores this extremal value in memory. The tendency of all variables to reach a maximal or minimal value is then used to discuss data about categorical perception. No physical process is defined to justify the discontinuous change in the slope of each variable when it reaches an extreme of activity, or to explain the subsequent storage of these activities. The model thus invokes a homunculus to explain both categorical perception and short-term memory storage. If the discontinuous saturation rule is replaced by a continuous saturation rule, and if the dynamics of short-term memory storage are explicitly defined, then positive linear feedback can compress the stored activity pattern, rather than contrast enhance it, as one desires to explain categorical perception (Grossberg, 1973, 1978d). This example illustrates how perilous it is to substitute formal algebraic rules, such as those of linear system theory, for dynamical rules in the explication of a psychological process.
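To make the contrast concrete, here is a minimal numerical sketch of a nonlinear shunting recurrent on-center off-surround network of the kind referred to above. The signal function, parameter values, and initial pattern are my illustrative choices, not specifications from the text.

```python
def shunting_step(x, A=0.1, B=1.0, dt=0.01):
    """One Euler step of a recurrent shunting competitive network:

        dx_i/dt = -A*x_i + (B - x_i)*f(x_i) - x_i * sum_{k != i} f(x_k)

    with a faster-than-linear signal f(w) = w**2.  A, B, and dt are
    hypothetical illustration values."""
    f = [w * w for w in x]
    total = sum(f)
    return [xi + dt * (-A * xi + (B - xi) * fi - xi * (total - fi))
            for xi, fi in zip(x, f)]

x = [0.2, 0.3, 0.5]          # initial STM pattern after input offset
for _ in range(5000):
    x = shunting_step(x)

# The faster-than-linear signal contrast-enhances the pattern without
# any homunculus: the initially largest activity is stored near the
# larger root of (B - x) * x = A, while the smaller activities are
# quenched toward zero.
print([round(xi, 3) for xi in x])
```

Replacing f(w) = w**2 by a linear signal in the same equations changes the stored pattern qualitatively, which is the dynamical point at issue.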
Even in cases where the algebraic rule seems to express an intuitive property of the psychological process, such as the tendency to saturate, the algebraic rule may also suggest the use of other rules, such as linear positive feedback, that produce diametrically opposed results when they are used in a dynamical description of the process. No homunculus is needed to explain categorical perception in suitably designed nonlinear neural networks (Sections 18 and 22). Indeed, nonlinear network mechanisms are designed to avoid the types of instabilities and interpretive anomalies that a linear feedback system approach often generates in a neural network context.
F. Classical Conditioning and Attentional Processing
Much as Anderson et al. (1977) improperly lumped all associative models into a Hebbian category, so Sutton and Barto (1981) have incorrectly claimed that associative models other than their own use Hebbian synapses. They go on to reject all Hebbian models in favor of their own non-Hebbian associative model. Given the apparent importance of the Hebbian distinction, it is necessary to define a Hebbian synapse and to analyse why it is being embraced or rejected. Sutton and Barto (1981, p.135) follow Hebb to define a Hebbian synapse as follows: "when a cell A repeatedly and persistently takes part in firing another cell B, then A's efficiency in firing B is increased." However, in my associative theory, which Sutton and Barto classify as a Hebbian theory, repeated and persistent associative pairing between A and B can yield conditioned decreases, as well as increases, in synaptic strength (Grossberg, 1969b, 1970b, 1972c). This is not a minor property, since it is needed to assert that the unit of long-term memory is a spatial pattern of synaptic strengths (Section 4). Hebb's law, by contrast, is consistent with the assumption that the unit of long-term memory is a single synaptic strength.
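The contrast with a purely incremental Hebbian rule can be sketched as follows. This is a toy illustration of a gate-modulated learning law; the function names and parameter values are my own, not taken from the cited articles.

```python
def pair(z, pre, post, rate=0.5, dt=0.1, steps=100):
    """Gate-modulated learning: dz/dt = rate * f(pre) * (post - z).

    When the presynaptic signal f(pre) is positive, the trace z tracks
    the paired postsynaptic activity, so repeated pairing can
    *decrease* z as well as increase it.  A strictly Hebbian increment
    (dz/dt = rate * pre * post with pre, post >= 0) could only grow.
    All parameter values here are hypothetical.
    """
    gate = max(pre, 0.0)                 # threshold-linear signal
    for _ in range(steps):
        z += dt * rate * gate * (post - z)
    return z

z = 0.8                                  # trace from earlier trials
z = pair(z, pre=1.0, post=0.2)           # pair with a *smaller* post-activity
print(round(z, 3))
# The trace has moved down toward 0.2: pairing produced a conditioned
# decrease, which a pure growth rule cannot express.
```

Applied across a whole ensemble of postsynaptic nodes, the same law makes the traces track a spatial pattern of activities, which is the sense in which the unit of LTM is a pattern rather than a single strength.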
This property does not satisfy the definition of a Hebbian synapse; hence my associative laws are not Hebbian, contrary to Sutton and Barto’s claim. Moreover, the associative component of these laws is only one of several interesting factors that control their mathematical and behavioral properties. None of these factors was considered by Hebb. Notwithstanding these important details, we still need to ask why Sutton and Barto attack “Hebbian” models. The reason is that Hebbian theories are purported to be unable (1) to recall a conditioned response with a shorter time lag after the presentation
of a conditioned stimulus (CS) than was required for efficient learning to occur between the CS and the unconditioned stimulus (UCS), or (2) to explain the inverted U in learning efficacy that occurs as a function of the time lag between a CS and UCS on learning trials. Indeed, Sutton and Barto (1981, p.142) confidently assert: "not one of the adaptive element models currently in the literature is capable of producing behavior whose temporal structure is in agreement with that observed in animal learning as described above." Unfortunately, this assertion is false. In fact, Sutton and Barto refer to the article by Grossberg (1974) which reviews a conditioning theory that can explain these phenomena (Grossberg, 1971, 1972, 1972b, 1975), as well as a variety of other phenomena that Sutton and Barto cannot explain due to their model's formal kinship with the Rescorla-Wagner model (Grossberg, 1982b; Rescorla and Wagner, 1972). Moreover, my explanation does not depend on the non-Hebbian nature of my associative laws, but rather on the global anatomy of the networks that I derive to explain conditioning data. This anatomy includes network regions, called drive representations, at which the reinforcing properties of external cues join together with internal drive inputs to compute motivational decisions that modulate the attentional processing of external cues. No such concept is postulated in Sutton and Barto's (1981) model. Thus the fact that a pair of simultaneous CS's can be processed, yet a CS that is simultaneous with a UCS is not processed, does not depend on the elaboration of the UCS's motivational and attentional properties in the Sutton and Barto model, despite the fact that the UCS might have been a CS just hours before. Sutton and Barto's model of classical conditioning excludes motivational and attentional factors, instead seeking all explanations of classical conditioning data in the properties of a single synapse.
Such an approach cannot explain the large data base concerning network interactions between neocortex, hypothalamus, septum, hippocampus, and reticular formation in the control of stimulus-reinforcer properties (Berger and Thompson, 1978; Deadwyler, West, and Robinson, 1981; DeFrance, 1976; Gabriel, Foster, Orona, Saltwick, and Stanton, 1980; Haymaker, Anderson, and Nauta, 1969; MacLean, 1970; O'Keefe and Nadel, 1978; Olds, 1977; Stein, 1958; West, Christian, Robinson, and Deadwyler, 1981) and leads its authors to overlook the fact that such interactions are interpreted and predicted by alternative models (Grossberg, 1975). The present chapter also focuses on behavioral properties that are emergent properties of network interactions, rather than of single cells, and illustrates that single cell and network laws must both be carefully chosen to generate desirable emergent properties.
G. Search of Associative Memory
The Anderson et al. (1977) model provides one example of a psychological model whose intuitive basis is not adequately instantiated by its formal operations. Such a disparity between intuition and formalism causes internal weaknesses that limit the explanatory and predictive power of many psychological models. These weaknesses can coexist with a model's ability to achieve good data fits on a limited number of experiments. Unfortunately good curve fits have tended to inhibit serious analysis of the internal structure of psychological models. Another example of this type of model is Raaijmakers and Shiffrin's (1981) model of associative memory search. The data fits of this model are remarkably good. One reason for its internal difficulties is viewed by the authors as one of its strengths: "Because our main interest lies in the development of a retrieval theory, very few assumptions will be stated concerning the interimage structure" (Raaijmakers and Shiffrin, 1981, p.123).
To characterize this retrieval theory, the model defines learning rules that are analogous to laws of associative learning. However, in information processing models of this kind, terminology like short-term memory (STM) and long-term memory (LTM) is often used instead of terminology like CS, UCS, and conditioning. These differences of terminology seem to have sustained the separate development of models that describe mechanistically related processes.
Although Raiijmakers and Shiffrin’s (1981) model intuitively discusses STM and LTM, no STM variables are formally defined; only LTM strengths are defined. This omission forces compensatory assumptions to be made through the remaining theoretical W,S) between the ith word at test structure. In particular, the LTM strength S(W*T, (T)and the j t h stored (S) word is made a linear function
S(Wt~7wj~) = btaj
(1)
of the time t,, during which both words are in the STM buffer. Thus there is no forgetting, the LTM strength grows linearly to infinity on successive trials, and although both words are supposedly in the buffer when LTM strength is growing, strength is assumed to grow between W,Tand W,s rather than between Wasand W!s. A more subtle difficulty is that time per 8 C should not explicitly determine a dynamical process, as it does in l), unless it parameterizes an external input. All of these problems arise because the t eory does not define STM activities which can mediate the formation of long-term memories. Instead of using STM activities as the variables that control performance, the theory defines sampling and recovery probabilities directly in terms of LTM traces. The sampling probabilities are built up out of products of LTM traces, as in the formula
6
for the probability of sampling the ith word W t s given a probe consisting of a context cue CT and the kth word WkT at test (T).This formula formally compensates for the problem of steadily increasing strengths by balancing numerator strengths against denominator strengths. It also formally achieves selectivity in sampling by multiplying strengths together. The theory does not, however, explain how or why these operations might occur in viuo. The context cue CT is of particular importance because the relative strength of context-to-word associations is used to explain the theory’s proudest achievement: the part-list cuing effect. However, the context cue is just an extra parameter in the theory because no explanation is given of how a context representation arises or is modified due to experimental manipulations. In other words, because recall theory says nothing about chunking or recognition, the context cue plays a role akin to that played by the “fixed stars” in classical explanations of centrifugal force. In addition to the continuous rule for strength increase (ebuation (l)),the theory defines a discrete rule for strength increase
S'(W_{iT}, W_{jS}) = S(W_{iT}, W_{jS}) + g   (3)
which also leads to unbounded strengths as trials proceed. The incrementing rule (equation (3)) is applied only after a successful recall. Although this rule helps to fit some data, it is not yet explained why two such different strengthening rules should coexist. The authors represent the limited capacity of STM by appending a normalization constraint onto their sampling probability rule. They generalize equation (2) with the sampling rule
P_S(I_i | Q_1, Q_2, ..., Q_m) = [Π_{j=1}^m S(Q_j, I_i)^{w_j}] / [Σ_k Π_{j=1}^m S(Q_j, I_k)^{w_j}]   (4)

where the weights w_j satisfy

Σ_{j=1}^m w_j ≤ w.   (5)
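To make the product and normalization rules concrete, here is a small numeric sketch of a SAM-style sampling rule in the spirit of equations (2), (4), and (5). The strength values, the cue count, and the function name are hypothetical illustrations, not taken from Raaijmakers and Shiffrin's implementation.

```python
# Illustrative sketch of a SAM-style sampling rule.  S[q][i] is a hypothetical
# LTM strength S(Q_q, I_i) from probe cue Q_q to stored image I_i.

def sampling_probs(S, weights):
    """Probability of sampling each image given the probe cues, with cue
    exponents w_j as in equation (4); weights = (1, 1) recovers the
    unweighted product rule of equation (2)."""
    n_images = len(S[0])
    scores = []
    for i in range(n_images):
        prod = 1.0
        for j, row in enumerate(S):
            prod *= row[i] ** weights[j]   # selectivity: strengths multiply
        scores.append(prod)
    total = sum(scores)                    # denominator balances growing strengths
    return [s / total for s in scores]

# Two probe cues (e.g., a context cue and a word cue) and three images.
S = [[0.9, 0.4, 0.1],   # strengths from the context cue
     [0.2, 0.8, 0.1]]   # strengths from the word cue
p = sampling_probs(S, weights=(1.0, 1.0))
```

Note how the second image wins even though neither cue alone favors it most strongly: the product rule rewards images that are moderately strong on every cue at once.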
320
Chapter 6
Equation (4) defines the probability of sampling the ith image I_i, given the set of probe cues Q_1, Q_2, ..., Q_m. Why these normalization weights, which are intended to represent the limited capacity of STM, should appear in a sampling rule defined by LTM traces is unexplained in the theory. The properties that the formalism of Raaijmakers and Shiffrin (1981) attempts to capture have also arisen within my own work on human memory (Grossberg, 1978a, 1978b). Because this theory describes the self-organization of both recognition and recall using real-time operations on STM and LTM traces, it exhibits these properties in a different light. Its analog of the product rule (equation (2)) is due to properties of temporal order information in STM derived from a principle that guarantees the stable transfer of temporal order information from STM to LTM (Section 34). Its analog of the continuous strengthening rule (equation (1)) is found in the chunking process whereby recognition chunks are formed (Section 21). Its analog of the discrete strengthening rule (equation (3)) is due to the process whereby associations from recognition chunks to recall commands are learned (Section 6). Its analog of the normalization rule (equation (5)) is a normalization property of competitive STM networks that are capable of retuning their sensitivity in response to variable operating loads (Section 17). Not surprisingly, the part-list cuing effect poses no problem for this theory, which also suggests how contextual representations are learned. In light of these remarks, I suggest that Raaijmakers and Shiffrin (1981) have not realized how much the data they wish to explain depend on the "interimage structure" that their theory does not consider. A few principles and mechanisms based on ideas about self-organization have, in fact, been the vantage point for recognizing and avoiding internal difficulties within psychological models of cognition, perception, conditioning, attention, and information processing (Grossberg, 1978a, 1978e, 1980b, 1980d, 1981b, 1982b, 1982d, 1983, 1984a, 1984b).
Some of these principles and mechanisms of self-organization are defined below and used to discuss issues and data concerning the functional units of speech, language, and motor control. This foundation was originally built up for this purpose in Grossberg (1978e). That article, as well as others that derive the concepts on which it is based, are reprinted in Grossberg (1982d).
3. Associative Learning by Neural Networks: Interactions Between STM and LTM
The foundation of the theory rests on laws for associative learning in a neural network, which I call the embedding field equations (Grossberg, 1964). These laws are derived from psychological principles and have been physiologically interpreted in many places (e.g., Grossberg, 1964, 1967, 1968b, 1969b, 1970b, 1972c, 1974). They are reviewed herein insofar as their properties shed light on the problem of serial order. The associative equations describe interactions among unitized nodes v_i that are connected by directed pathways, or axons, e_{ij}. These interactions are defined in terms of STM traces x_i(t) computed at the nodes v_i and LTM traces z_{ij} computed at the endpoints, or synaptic knobs, S_{ij}, of the directed pathways e_{ij} (Figure 1). The simplest realization of these interactions among n nodes v_1, v_2, ..., v_n is given by the system of differential equations
(d/dt) x_i = -A_i x_i + Σ_{k=1}^n B_{ki} z_{ki} - Σ_{k=1}^n C_{ki} + I_i   (6)

and

(d/dt) z_{ij} = -D_{ij} z_{ij} + E_{ij} x_j   (7)
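Equations (6) and (7) can be simulated directly. The sketch below integrates a three-node network with forward Euler steps; the network size, the rate constants, the threshold-linear signal, and the choice of E_ij proportional to B_ij are illustrative assumptions, not values prescribed by the theory.

```python
# Forward-Euler sketch of the STM/LTM system (6)-(7) for a small network.
# All parameters (rates, initial traces, inputs) are illustrative assumptions.
n, dt = 3, 0.01
A, D = 1.0, 0.1                      # STM decay A_i and LTM decay D_ij
x = [0.0] * n                        # STM traces x_i
z = [[0.1] * n for _ in range(n)]    # LTM traces z_ij

def B(xk):
    # Threshold-linear performance signal (cf. eq. (9)), with b = 1, Gamma = 0.2.
    return max(xk - 0.2, 0.0)

I = [1.0, 0.0, 0.0]                  # only the first node receives external input
for _ in range(300):
    # Total gated excitatory signal T_i = sum_k B_ki z_ki (adaptive filter).
    T = [sum(B(x[k]) * z[k][i] for k in range(n)) for i in range(n)]
    x_new = [x[i] + dt * (-A * x[i] + T[i] + I[i]) for i in range(n)]
    # Equation (7), taking the sampling signal E_ij proportional to B_ij.
    z = [[z[i][j] + dt * (-D * z[i][j] + B(x[i]) * x[j]) for j in range(n)]
         for i in range(n)]
    x = x_new
```

After a few simulated time units, the externally driven node dominates STM, and only the pathways whose source node emitted suprathreshold signals have changed their LTM traces, which is the stimulus sampling property discussed below.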
Figure 1. The STM trace x_i fluctuates at each node v_i, and an LTM trace z_{ij} fluctuates at the end (synaptic knob) S_{ij} of each conditionable pathway e_{ij}. The performance signal B_{ij} is generated in e_{ij} by x_i and travels at a finite velocity until it reaches S_{ij}. The LTM trace z_{ij} computes a time average of the contiguous trace x_j multiplied by a sampling signal E_{ij} that is derived from B_{ij}. The performance signal B_{ij} is gated by z_{ij} before the gated signal B_{ij} z_{ij} perturbs x_j.

where i, j = 1, 2, ..., n; d/dt denotes the rate of change of the continuous variable, x_i or z_{ij}, as the case might be; and the notation [ξ]^+ = max(ξ, 0) defines a threshold. The terms in equations (6) and (7) have the following interpretations.

A. STM Decay

Function A_i in equation (6) is the decay rate of the STM trace x_i. This rate can, in principle, depend on all the unknowns of the system, as in the competitive interaction

A_i = A + Σ_{k≠i} f(x_k)   (8)
which I describe more fully in Section 18. Equation (8) illustrates that STM decay need not be a passive process. Active processes of competitive signaling, as in this equation, or other feedback interactions, can be absorbed into the seemingly innocuous term A_i x_i in equation (6).

B. Spreading Activation

Function B_{ki} in equation (6) is a performance signal from node v_k to the synaptic knob(s) S_{ki} of pathway e_{ki}. Activation "spreads" along e_{ki} via the signal B_{ki}. Two typical choices of B_{ki} are
B_{ki}(t) = b_{ki} [x_k(t - τ_{ki}) - Γ_{ki}]^+   (9)
or
B_{ki}(t) = f(x_k(t - τ_{ki})) b_{ki}   (10)

where f(ξ) is a sigmoid, or S-shaped, function of ξ with f(0) = 0. In equation (9), a signal leaves v_k only if x_k exceeds the signal threshold Γ_{ki} (Figure 2a). The signal moves along e_{ki} at a finite velocity ("activation spreads") and reaches S_{ki} after τ_{ki} time units. Typically, τ_{ki} is a short time compared to the time it takes x_k to exceed threshold Γ_{ki} in response to signals. Parameter b_{ki} measures the strength of the pathway e_{ki} from v_k to v_i. If b_{ki} = 0, no pathway exists. In equation (10), the signal threshold Γ_{ki} is replaced by attenuation of the signal at small x_k values and saturation of the signal at large x_k values (Figure 2b). The S-shaped
Figure 2. (a) A threshold signal: B_{ij}(t) is positive only if x_i(t - τ_{ij}) exceeds the signal threshold Γ_{ij}. B_{ij} is a linear function of x_i(t - τ_{ij}) above this threshold. (b) A sigmoid signal: B_{ij}(t) is attenuated at small values of x_i(t - τ_{ij}), much as in the threshold case, and levels off at large values of x_i(t - τ_{ij}) after all signaling sites are turned on.
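The two signal functions contrasted in Figure 2 can be sketched in a few lines; the particular sigmoid and the parameter values below are illustrative assumptions.

```python
# Threshold-linear signal of equation (9) and sigmoid signal of equation (10).
# The threshold Gamma, strength b, and sigmoid shape are illustrative choices.

def threshold_signal(x, gamma=0.2, b=1.0):
    # b * [x - Gamma]^+ : zero below the threshold, linear above it (Figure 2a).
    return b * max(x - gamma, 0.0)

def sigmoid_signal(x, b=1.0):
    # b * f(x) with f S-shaped and f(0) = 0: attenuates small x and saturates
    # at large x (Figure 2b).  f(x) = x^2 / (0.25 + x^2) is one convenient choice.
    return b * x ** 2 / (0.25 + x ** 2)
```

Both functions suppress small activities, which is what prevents network noise from being amplified by feedback signaling, but the sigmoid also levels off once all signaling sites are active.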
signal function is the simplest physical signal function that can prevent noise amplification from occurring due to reverberatory signaling in a feedback network (Section 18).

C. Probed Read-Out of LTM: Gating of Performance Signals

Term B_{ki} z_{ki} in equation (6) says that the signal B_{ki} from v_k to S_{ki} interacts with the LTM trace z_{ki} at S_{ki}. This interaction can be intuitively described in several ways. For one, B_{ki} is a probe signal, activated by STM at v_k, that reads out the LTM trace z_{ki} into the STM trace x_i of v_i. For another, z_{ki} gates the signal B_{ki} before it reaches v_i from v_k, so that the signal strength that perturbs x_i at v_i is B_{ki} z_{ki} rather than B_{ki}. Thus even if an input to v_k excites equal signals B_{ki} in all the pathways e_{ki}, only those v_i abutted by large LTM traces z_{ki} will be appreciably activated by v_k. Activation does not merely "spread" from v_k to other nodes; it can be transformed into propagated signals (x_k into B_{ki}) and gated by LTM traces (B_{ki} into B_{ki} z_{ki}) before it reaches these nodes.

D. Adaptive Filtering

The gated signals from all the nodes v_k combine additively at v_i to form the total signal T_i = Σ_{k=1}^n B_{ki} z_{ki} of equation (6). Speaking mathematically, T_i is the dot product, or inner product, of the vectors B_i = (B_{1i}, B_{2i}, ..., B_{ni}) and z_i = (z_{1i}, z_{2i}, ..., z_{ni}) of probe signals and LTM traces, respectively. Such a dot product is often written as
T_i = B_i · z_i.   (11)
The transformation of the vector x = (x_1, x_2, ..., x_n) of all STM traces into the vector T = (T_1, T_2, ..., T_n) of all dot products, specifically

x → T,   (12)

completely describes how STM traces generate feedback signals within the network. A transformation by dot products as in equation (12) is said to define a filter. Because the LTM traces z_i that gate the signals B_i can be changed by experience, the transformation (12) is said to define an adaptive filter. Thus the concepts of feedback signaling and adaptive filtering are identical in equation (6).

E. Lateral Inhibition

Term Σ_{k=1}^n C_{ki} in equation (6) describes the total inhibitory signal from all nodes v_k to v_i. An illustrative choice of the inhibitory signal from v_k to v_i is

C_{ki}(t) = g(x_k(t - σ_{ki})) c_{ki}   (13)

where g(ξ) is a sigmoid signal function, σ_{ki} is the time lag for a signal to be transmitted ("spread") between v_k and v_i, and c_{ki} describes the strength of the inhibitory path from v_k to v_i.

F. Automatic Activation of Content-Addressable Nodes

Function I_i(t) in equation (6) is an input corresponding to presentation of the ith event through time. The input I_i(t) can be large during and shortly after the event and otherwise equals zero. The input automatically excites v_i in the sense that the input has a direct effect on the STM activity of its target node. In all, each STM trace can decay, can be activated by external stimuli, and can interact with other nodes via sums of gated excitatory signals and inhibitory signals. These equations can be generalized in several ways (Grossberg, 1974, 1982d). For example, LTM traces for inhibitory pathways can also be defined (Grossberg, 1969b), and in a way that avoids the difficulties of Estes's (1972) theory in Section 1. The Appendix describes a more general version of the equations that includes stable conditionable inhibitory pathways.
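The gating and adaptive-filtering operations above reduce to a few lines of arithmetic. The sketch below computes the total signals T_i = B_i · z_i of equation (11) for a hypothetical network of three source nodes and two target nodes; all numerical values are illustrative.

```python
# Sketch of the adaptive-filter transform (11)-(12): each target node's total
# excitatory input is the dot product of probe signals and LTM traces.
# Signal values and trace values are illustrative.

def adaptive_filter(B_signals, z_traces):
    """T_i = sum_k B_k * z_ki for every target node i (equation (11))."""
    n_targets = len(z_traces[0])
    return [sum(B_signals[k] * z_traces[k][i] for k in range(len(B_signals)))
            for i in range(n_targets)]

B_signals = [1.0, 0.5, 0.0]     # probe signals from three source nodes
z_traces = [[0.9, 0.1],         # LTM traces z_ki gating each pathway
            [0.2, 0.8],
            [0.7, 0.7]]
T = adaptive_filter(B_signals, z_traces)
```

Note that the third source node's sizable traces (0.7, 0.7) contribute nothing because its probe signal is zero: LTM is read out only where an STM-activated signal gates it.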
G. LTM Decay

Function D_{ij} in equation (7) is the decay rate of the LTM trace z_{ij}. The LTM decay rate, like the STM decay rate, can depend on the state of the system as a whole. For example, in principle it can be changed by attentional signals, probe signals, slow threshold fluctuations, and the like without destroying the invariants of associative learning that I need to carry out my argument (Grossberg, 1972c, 1974, 1982d).

H. Read-In of STM into LTM: Stimulus Sampling

Function E_{ij} in equation (7) describes a learning signal from v_i to S_{ij} that drives the LTM changes in z_{ij} at S_{ij}. In other words, v_i samples v_j by turning on E_{ij}. Otherwise expressed, the STM trace x_j is read into the LTM trace z_{ij} by turning on the sampling signal E_{ij}. In the simplest case, E_{ij} is proportional to B_{ij}. By setting both D_{ij} and E_{ij} equal to zero in equation (7), a pathway e_{ij} can be converted from a conditionable pathway to a prewired pathway that is incapable of learning. An important technical issue concerns the most general relationship that can exist between B_{ij} and E_{ij}. It has been proven that, in a precise mathematical sense, unbiased learning occurs if "B_{ij} is large only if E_{ij} is large" (Grossberg, 1972c, 1982d). This condition, called a local flow condition, is interpreted physically as follows. After the sampling signal E_{ij} reaches S_{ij}, it influences learning by z_{ij} within S_{ij}. The sampling signal E_{ij} is also averaged, delayed, or otherwise transformed within S_{ij} to give rise to the performance signal B_{ij}. This signal acts at a "later stage" within S_{ij} than E_{ij} because B_{ij} energizes the net effect B_{ij} z_{ij} of v_i upon v_j. The mathematical local flow condition shows that this physical interpretation of the relationship between E_{ij} and B_{ij} is sufficient to guarantee unbiased learning.

I. Mutual Interaction of STM and LTM

By joining together terms D_{ij} z_{ij} and E_{ij} x_j, it follows from equation (7) that the LTM trace z_{ij} is a time average of the product of learning signals E_{ij} from v_i to S_{ij} with STM traces x_j at v_j. When z_{ij} changes in size, it alters the gated signals from v_i to v_j via term B_{ij} z_{ij}, and thus the value of the STM trace x_j. In this way the STM and LTM traces mutually influence each other, albeit on different spatial and temporal scales.

4. LTM Unit Is a Spatial Pattern: Sampling and Factorization
To understand the functional units of goal-oriented behavior, it is necessary to characterize the functional unit of long-term memory in an associative network. This problem was approached by first analysing what the minimal anatomy capable of associative learning can actually learn (Grossberg, 1967, 1968a, 1968g, 1970b) and then proving that the same functional unit of memory is computed in much more general anatomies (Grossberg, 1969b, 1972c, 1974). Three properties that were discovered by these investigations will be needed here: (1) the functional unit of LTM is a spatial pattern of activity; (2) a spatial pattern is encoded in LTM by a process of stimulus sampling; (3) the learning process factorizes the input properties which energize learning and performance from the spatial patterns to be learned and performed. Each of these abstract properties is a computational universal that appears under different names in ostensibly unrelated concrete applications. Henceforth in the chapter, an abstract property will be described before it is applied to concrete examples.
5. Outstar Learning: Factorizing Coherent Patterns From Chaotic Activity

The minimal anatomy capable of associative learning is depicted in Figure 3a. A single node, or population, v_0, is activated by an external event via an input function I_0(t). This event is called the sampling event. For example, in studies of classical conditioning, the sampling event is the conditioned stimulus (CS). If the sampling event causes the signal thresholds of node v_0 to be exceeded by its STM trace x_0, then learning signals E_{0i} propagate along the pathways e_{0i} toward a certain number of nodes v_i, i = 1, 2, ..., n. The same analysis of learning applies no matter how many nodes v_i exist, provided that at least two nodes exist (n ≥ 2) to permit some learning to occur. The learning signals E_{0i} are also called sampling signals because their size influences the learning rate, with no learning occurring when all signals E_{0i} are equal to zero. The sampling signals E_{0i} from v_0 do not activate the nodes v_i directly. In contrast, the LTM-gated performance signals B_{0i} z_{0i} directly influence the nodes v_i by activating their STM traces x_i. The nodes v_i can also be activated directly by the events to be learned. These events are represented by the inputs I_i(t) which activate the STM traces x_i of the nodes v_i, i = 1, 2, ..., n. Because the signals E_{0i} enable the z_{0i} to sample STM traces, the inputs (I_1(t), I_2(t), ..., I_n(t)) are called the sampled event. In studies of classical conditioning, the sampled event is the unconditioned stimulus (UCS). The output signals from the nodes v_i that are caused by the UCS control the network's unconditioned response (UCR). The sampling signals E_{0i} directly activate the performance signals B_{0i} and the LTM traces z_{0i} rather than the STM traces x_i. These LTM traces are computed at the synaptic knobs S_{0i} that abut the nodes v_i. This location permits the LTM traces z_{0i} to sample the STM traces x_i when they are activated by the sampling signals E_{0i}.
Such a minimal network is called an outstar because it can be redrawn as in Figure 3b. Mathematical analysis of an outstar reveals that it can learn a spatial pattern: a sampled event to the nodes v_1, v_2, ..., v_n whose inputs I_i have a fixed relative size while v_0's sampling signals are active. If the inputs I_i have a fixed relative size, they can be rewritten in the form

I_i(t) = θ_i I(t)   (14)

where θ_i is the constant relative input size, or "reflectance," and the function I(t) is the fluctuating total activity, or "background" input, of the sampled event. The convention that Σ_{i=1}^n θ_i = 1 guarantees that I(t) represents the total sampled input to the outstar, specifically I(t) = Σ_{i=1}^n I_i(t). The pattern weights of the sampled event form the vector

θ = (θ_1, θ_2, ..., θ_n)   (15)

of constant relative input sizes. The outstar learns this vector. The assertion that an outstar can learn a vector θ means the following. During learning trials, the sampling event is followed a number of times by the sampled event. Thus the inputs I_0(t) and I(t) can oscillate wildly through time. Despite these wild oscillations, however, learning in an outstar does not oscillate. Rather, the outstar can progressively, or monotonically, learn the invariant spatial pattern θ across trials, corresponding to the intuitive notion that "practice makes perfect" (Figure 4). The outstar does this by using the fluctuating inputs I_0(t) and I(t) as energy to drive its encoding of the pattern θ. The fluctuating inputs I_0(t) and I(t) determine the rate of learning but not the pattern θ that is learned. This is the property of factorization: fluctuating input energy determines the learning rate, while the invariant input pattern determines what is learned. The factorization property shows that the outstar can detect and encode temporally coherent relationships among the inputs that represent the sampled event.
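The factorization property can be checked numerically. The sketch below drives an outstar with a wildly oscillating total input I(t) while the relative pattern θ stays fixed; the relative LTM traces nevertheless converge to θ. The pattern, rates, input waveform, and simulation scheme are illustrative assumptions.

```python
import math

# Outstar factorization sketch: the relative LTM traces Z_i converge to the
# pattern weights theta_i even though the total input I(t) oscillates wildly.
# All numerical choices here are illustrative.
theta = [0.5, 0.3, 0.2]                 # invariant pattern weights (sum to 1)
z = [1.0, 1.0, 1.0]                     # LTM traces z_0i, initially uniform
dt = 0.001

for step in range(20000):
    t = step * dt
    I_total = 2.0 + 1.9 * math.sin(7.0 * t)    # fluctuating "background" input
    x = [th * I_total for th in theta]         # fast STM tracks theta_i * I(t)
    E = 1.0                                    # sampling signal from v_0 stays on
    # Equation (7) with the decay and sampling terms both gated by E.
    z = [zi + dt * E * (xi - zi) for zi, xi in zip(z, x)]

Z = [zi / sum(z) for zi in z]           # stimulus sampling probabilities Z_i
```

Setting E = 0 during any interval freezes the traces, which is the stimulus sampling property: the outstar learns only while its sampling signal is on.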
Figure 3. The minimal network capable of associative pattern learning: (a) A conditioned stimulus (CS) activates a single node, or cell population, v_0, which sends sampling signals to a set of nodes v_1, v_2, ..., v_n. An input pattern representing an unconditioned stimulus (UCS) activates the nodes v_1, v_2, ..., v_n, which elicit output signals that contribute to the unconditioned response (UCR). The sampling signals from v_0 activate the LTM traces z_{0i} that are computed at the synaptic knobs S_{0i}, i = 1, 2, ..., n. The activated LTM traces can learn the activity pattern across v_1, v_2, ..., v_n that represents the UCS. (b) When the sampling network in (a) is drawn to emphasize its symmetry, the result is an outstar wherein v_0 is the sampling source and the set {v_1, v_2, ..., v_n} is the sampled border.
Figure 4. Oscillatory inputs due to repetitive A-then-B presentations are translated into a monotonic learned reaction of the corresponding stimulus sampling probabilities. In the text, fluctuations in the sampling input I_0(t) and total sampled input I(t), as well as the monotonic reactions of the relative LTM traces Z_i(t), generalize the A-then-B interpretation.

In mathematical terms, factorization implies that the relative LTM traces
Z_i(t) = z_{0i}(t) / Σ_{k=1}^n z_{0k}(t)   (16)

are drawn monotonically toward the target ratios θ_i. Stimulus sampling means that the LTM ratios Z_i change only when the sampling signals from v_0 to the synaptic knobs S_{0i} are positive. Because the LTM ratios form a probability distribution (each Z_i ≥ 0 and Σ_{i=1}^n Z_i = 1) and change only when sampling signals are emitted, I call them the stimulus sampling probabilities of an outstar. The behavior of these quantities explicates the probabilistic intuitions underlying stimulus sampling theory (Neimark and Estes, 1967) in terms of the deterministic learning dynamics of a neural network. In particular, the factorization property dynamically explains various properties that are assumed in a stimulus sampling model; for example, why learning curves should be monotonic in response to wildly oscillating inputs (Figure 4). The property of factorization also has an important meaning during performance trials. Both sampling signals and performance signals are released during performance trials (Grossberg, 1972c). The property of factorization means that the performance signal may be chosen to be any nonnegative and continuous function of time without destroying the outstar's memory of the spatial pattern that was encoded in LTM on learning trials. The main constraint is that the pattern weights θ_i be read out synchronously from all the nodes v_i. What happens if the sampled event to an outstar is not a spatial pattern, as in the case when a series of sampled events occurs rather than a single event? Such an event series can be represented by a vector input

I(t) = (I_1(t), I_2(t), ..., I_n(t)),   (17)

t ≥ 0, where each input I_i(t) is a nonnegative and continuous function of time. Because
each input I_i(t) is continuous, the relative pattern weights

θ_i(t) = I_i(t) / Σ_{k=1}^n I_k(t)   (18)

are also continuous functions of time, as in the vector function

θ(t) = (θ_1(t), θ_2(t), ..., θ_n(t))   (19)

of pattern weights. Mathematical analysis of the outstar reveals that its LTM traces learn a spatial pattern even if the weights θ(t) vary through time. The spatial pattern that is encoded in LTM is a weighted average of all the spatial patterns θ(t) that are registered at the nodes v_i while sampling signals from v_0 are active. This result raises the question: How can each of the patterns θ(t) be encoded in LTM, rather than an average of them all? The properties of outstar learning (Section 4) readily suggest an answer to this question. This answer propelled the theory on one of its roads toward a heightened understanding of the serial order problem. Before following this road, some applications of outstar learning are now summarized.
6. Sensory Expectations, Motor Synergies, and Temporal Order Information

The fact that associative networks encode spatial patterns in LTM suggests that the brain's sensory, motor, and cognitive computations are all pattern transformations. This expectation arises from the fact that computations which cannot in principle be encoded in LTM can have no adaptive value and thus would presumably atrophy during evolution. Examples of spatial patterns as functional units of sensory processing include the reflectance patterns of visual processing (Cornsweet, 1970), the sound spectrograms of speech processing (Cole, Rudnicky, Zue, and Reddy, 1980; Klatt, 1980), the smell-induced patterns of olfactory bulb processing (Freeman, 1975), and the taste-induced patterns of thalamic processing (Erickson, 1963). More central types of pattern processing are also needed to understand the self-organization of serial order.

A. Sensory Expectancies

Suppose that the cells v_1, v_2, ..., v_n are sensory feature detectors in a network's sensory cortex. A spatial pattern across these feature detectors may encode a visual or auditory event. The relative activation of each v_i then determines the relative importance of each feature in the global STM representation of the event across the cortex. Such a spatial pattern code can effectively represent an event even if the individual feature detectors v_i are broadly tuned. Using outstar dynamics, even a single command node v_0 can learn and perform an arbitrary sensory representation of this sort. The pattern read out by v_0 is often interpreted as the representation that v_0 expects to find across the field v_1, v_2, ..., v_n due to prior experience. In this context, outstar pattern learning illustrates top-down expectancy learning (Section 25). The expectancy controlled by a given node v_0 is a time-average of all the spatial patterns that it ever sampled. Thus, it need not equal any one of these patterns.

B. Motor Synergies

Suppose that the cells v_1, v_2, ..., v_n are motor control cells such that each v_i can excite a particular group of muscles. A larger signal from each v_i then causes a faster contraction of its target muscles. Spatial pattern learning in this context means that an outstar command node v_0 can learn and perform fixed relative rates of contraction across all the motor control cells v_1, v_2, ..., v_n. Such a spatial pattern can control a motor synergy, such as playing a chord on the piano with prescribed fingers; making a
synchronous motion of the wrist, arm, and shoulder; or activating a prescribed target configuration of lips and tongue while uttering a speech sound (Section 32). Because outstar memory is not disturbed when the performance signal from v_0 is increased or decreased, such a motor synergy, once learned, can be performed at a variety of synchronous rates without requiring the motor pattern to be relearned at each new rate (Kelso, Southard, and Goodman, 1979; Soechting and Lacquaniti, 1981). In other words, the factorization of pattern and energy provides a basis for independently processing the command needed to reach a terminal motor target and the velocity with which the target will be approached. This property may be better understood through the following example. When I look at a nearby object, I can choose to touch it with my left hand, my right hand, my nose, and so on. Several terminal motor maps are simultaneously available to move their corresponding motor organs towards the object. "Willing" one of these acts releases the corresponding terminal motor map, but not the others. The chosen motor organ can, moreover, be moved towards the invariant goal at a wide range of velocities. The distinction between the invariant terminal motor map and the flexibly programmable performance signal illustrates how factorization prominently enters the problems of learned motor control.

C. Temporal Order Information Over Item Representations

Suppose that a sequence of item representations is activated in a prescribed order during perception of a list. At any given moment, a spatial pattern of STM activity exists across the excited populations. Were the same items excited in a different order by a different list, a different spatial pattern of STM activity would be elicited. Thus the spatial pattern reflects temporal order information as well as item information. An outstar sampling source can encode this spatial pattern as easily as any other spatial pattern.
Thus although an outstar can encode only a spatial pattern, this pattern can represent temporal properties of external events. Such a spatial encoding of temporal order information is perhaps the example par excellence of a network's parallel processing capabilities. How a network can encode temporal order information in STM without falling into the difficulties mentioned in Section 2 will be described in Section 34.
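A toy sketch of how a spatial pattern can carry order information: present items one at a time and multiplicatively attenuate the activities of earlier items, so that different presentation orders of the same items leave different activity gradients over the same representations. The attenuation rule here is an illustrative assumption, not the STM dynamics actually derived by the theory.

```python
# Toy sketch: temporal order stored as a spatial pattern of STM activity.
# The multiplicative attenuation factor rho is an illustrative assumption.

def stm_pattern(items, rho=0.7):
    # Present items in order; each new arrival attenuates all earlier activities.
    x = {}
    for item in items:
        for stored in x:
            x[stored] *= rho
        x[item] = 1.0
    return x

abc = stm_pattern(["A", "B", "C"])  # one list order ...
cba = stm_pattern(["C", "B", "A"])  # ... same items, different order
```

The two lists excite the same item representations, yet they leave distinguishable spatial patterns, which is all an outstar needs in order to encode the order in which the items arrived.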
7. Ritualistic Learning of Serial Behavior: Avalanches

The following sections approach the problem of serial order in stages. These stages mark different levels of sophistication in a network's ability to react adaptively to environmental feedback. The stages represent a form of conceptual evolution in the theory, reflecting the different levels of behavioral evolution that exist across phylogeny. The first stage shows how outstar learning capabilities can be used to design a minimal network capable of associatively learning and/or performing an arbitrary sequence of events, such as a piano sonata or a dance. This construction is called an avalanche (Grossberg, 1969g, 1970a, 1970b) because its sampling signal traverses a long axon that activates regularly spaced cells (Figure 5) in a manner reminiscent of how avalanche conduction along the parallel fibers in the cerebellum activates regularly spaced Purkinje cells (Eccles, Ito, and Szentagothai, 1967; Grossberg, 1969d). The simplest avalanche requires only one node to encode the memory of the entire sequence of events. Thus, the construction shows that complex performance per se is easily achieved by a small and simple neural network. The simplest avalanche also exhibits several disadvantages stemming from the fact that its performance is ritualistic in several senses. Each of these disadvantages has a remedy that propels the theory forward. Performance is temporally ritualistic because once performance has been initiated, it cannot be rhythmically modified by the performer or by conflicting environmental demands. Performance is spatially ritualistic in the sense that the motor patterns to be performed do not have learned sensory referents.
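The avalanche's strategy, carving a continuously varying input "movie" into a series of briefly sampled spatial patterns, one per outstar bouquet, can be sketched numerically. The two-component input below and the sampling interval ξ are hypothetical.

```python
import math

# Avalanche sketch: sample the pattern weights theta(t) every xi time units;
# each sampled frame is the spatial pattern one outstar bouquet would encode.
# The input "movie" and the interval xi are hypothetical.

def inputs(t):
    return [1.0 + math.sin(t), 1.0 + math.cos(t)]   # I_1(t), I_2(t) >= 0

def theta(t):
    I = inputs(t)
    total = sum(I)
    return [v / total for v in I]                   # pattern weights at time t

xi = 0.25                                           # sampling interval
frames = [theta(k * xi) for k in range(20)]         # frames for outstars O_1, O_2, ...
```

Each frame is a spatial pattern (its weights sum to one), and successive frames differ, so replaying them in order with smooth interpolation reconstructs the movie.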
Figure 5. An avalanche is the minimal network that can associatively learn and ritualistically perform any space-time pattern. The sampling node v_1^{(2)} emits a brief sampling pulse that serially excites the outstar sampling bouquets that converge on the sampled field F^{(1)} = {v_1^{(1)}, v_2^{(1)}, ..., v_n^{(1)}}. On performance trials, a sampling pulse resynthesizes the space-time pattern as a series of smoothly interpolated spatial patterns.
The first modification of the avalanche enables performance speed to be continuously modulated, or even terminated, in response to performer or environmental signals. This flexibility can be achieved on performance trials without any further learning of the ordered patterns themselves. The construction thus provides a starting point for analysing how order information and rhythm can be decoupled in more complex learning situations. The construction is not of merely formal interest, however, since it shares many properties with the command cell anatomies of invertebrates (Dethier, 1968; Hoyle, 1977; Kennedy, 1968; Stein, 1971; Willows, 1968). With the modified avalanche construction before us, some design issues become evident concerning how to overcome the network's spatially ritualistic properties. The
pursuit of these issues leads to a study of serial learning and chunking that, in turn, provides concepts for building a theory of recognition and recall. The needed serial learning and chunking properties are also properties of the embedding field equations, albeit in a different processing context than that of outstar learning. Because the avalanche constructions require a hierarchy of network stages, superscripts are used on the following variables. Suppose that the act to be learned is controlled by a set of nodes v_1^{(1)}, v_2^{(1)}, ..., v_n^{(1)}, henceforth called the field of cells F^{(1)}. This field replaces the nodes v_1, v_2, ..., v_n of an outstar. Let each node receive a nonnegative and continuous input I_i(t), t ≥ 0, i = 1, 2, ..., n. The set of inputs I_i(t) collectively form a vector input

I(t) = (I_1(t), I_2(t), ..., I_n(t)),   (17)

t ≥ 0, that characterizes the commands controlling the sequence of events. At the end of Section 4 I raised the question of how such a vector input could be learned despite the outstar's ability to learn only one spatial pattern. An avalanche can accomplish this task using a single encoding cell in the following way. Speaking intuitively, I(t) describes a moving picture playing through time on the "screen" of nodes F^{(1)}. An avalanche can learn and perform such a "movie" as a sequence of still pictures that are smoothly interpolated through time. Because each input I_i(t) is continuous, the pattern weights (18) are also continuous and can therefore be arbitrarily closely approximated by a sequence of values

θ_i(0), θ_i(ξ), θ_i(2ξ), θ_i(3ξ), ...   (20)
sampled every ξ time units, if ξ is chosen so small that θ_i(t) does not change too much in a time interval of length ξ. For every fixed k, the vector of weights

θ^{(k)} = (θ_1(kξ), θ_2(kξ), ..., θ_n(kξ))   (21)

sampled across all the cells in F^{(1)} at a time t = kξ is a spatial pattern. To learn and perform the movie I(t), t ≥ 0, it suffices to learn and perform the sequence θ^{(1)}, θ^{(2)}, θ^{(3)}, ... of spatial patterns in the correct order. This can be done if a sequence of outstars O_1, O_2, O_3, ... is arranged so that the kth outstar O_k samples only the kth spatial pattern θ^{(k)} on successive learning trials, and is then briefly activated in the order O_1, O_2, O_3, ... on performance trials. Using the outstar properties of stimulus sampling and learning in spatial pattern units, an avalanche-type anatomy, such as that in Figure 5, instantiates these properties using a single sampling node and a minimum number of pathways. In Figure 5, a brief sampling signal travels along the long pathway (axon) leading from node v_1^{(2)}. This node replaces the outstar sampling source v_0 of the previous discussion. The sampling signal moves from left to right, traveling down the serially arranged bouquets of pathways that converge on F^{(1)}. Each bouquet is an outstar and can therefore learn a spatial pattern. This pattern is a weighted average of all the spatial patterns that are active in STM across F^{(1)} while the outstar samples F^{(1)}. Because each outstar O_k samples F^{(1)} briefly, its LTM traces encode essentially only the pattern θ^{(k)}. By the property of stimulus sampling, none of the other patterns θ^{(j)}, j ≠ k, playing across F^{(1)} through time will influence the LTM traces of O_k. On
332
Chapter 6
performance trials, a performance signal runs along the axon, serially reading out the spatial patterns encoded by the bouquets in the correct order. The STM traces across F^(1) smoothly interpolate these spatial patterns through time to generate a continuously varying output from F^(1).

8. Decoupling Order and Rhythm: Nonspecific Arousal as a Velocity Command
Performance by an avalanche is temporally ritualistic, because once a performance signal is emitted by node v_1^(2), there is no way to temporally modulate or stop its inexorable transit down the sampling pathway. In order to modify performance at any time during the activation of an avalanche, there must exist a locus at each outstar's sampling source where auxiliary inputs can modify the performance signal. These loci are denoted by nodes v_1^(2), v_2^(2), v_3^(2), ..., such that node v_k^(2) is the sampling source of outstar O_k, as in Figure 6a. These nodes form a field of nodes F^(2) in their own right. Merely defining the nodes F^(2) does not make avalanche performance less ritualistic as long as a signal from the ith node v_i^(2) can trigger a signal from the (i+1)st node v_{i+1}^(2). An auxiliary input source or sources needs to be defined whose activity is required to maintain avalanche performance. The minimal solution is to define a single node v_1^(3) that can activate all the populations in F^(2) (approximately) simultaneously (Figure 6b), and to require that node v_{i+1}^(2) can elicit a signal only if it receives simultaneous signals from v_i^(2) and v_1^(3). Shutting off v_1^(3) at any time can then abruptly terminate performance, because even if node v_{i+1}^(2) receives a signal from v_i^(2) at such a time, v_{i+1}^(2) cannot emit a signal without convergent input from v_1^(3). Since the signals from v_1^(3) energize the avalanche as a whole, I call them nonspecific arousal signals. Node v_1^(3) may also be called a command node because it subliminally prepares the entire avalanche for activation. This node may just as well be thought of as a context node, or contextual bias, in situations wherein the avalanche is interpreted
as a subnetwork within a larger network. For example, the same cue can often trigger different behavior depending on the context or plan according to which it is interpreted (e.g., turning left at the end of a hall to enter the bedroom or right to enter the dining room). A nonspecific arousal source can achieve this result by subliminally sensitizing certain subnetworks more than others for supraliminal activation by a given set of cues. Because the arousal source often encodes a contextual cue, I henceforth call the network in Figure 6 a context-modulated avalanche.

9. Reaction Time and Performance Speed-Up
The requirement that node v_{i+1}^(2) can fire only if it receives simultaneous signals from v_i^(2) and v_1^(3) can be implemented in several related ways, all of which depend on the existence of a threshold Γ that each STM trace must exceed before a signal from v_{i+1}^(2) can be emitted. Increasing the arousal signal from v_1^(3) to v_{i+1}^(2) makes it easier for x_{i+1}^(2) to exceed this threshold. Moreover, the threshold of v_{i+1}^(2) is exceeded faster in response to a large signal from v_1^(3) than to a small signal from v_1^(3). The reaction time (RT) for activating any node v_{i+1}^(2) can thus be decreased by increasing
The Adaptive Self-organization of Serial Order in Behavior
333
Figure 6. A nonspecific arousal signal can act as a command that decouples order information from the velocity or rhythm of a particular performance. (a) Outstar sampling sources serially excite each other to determine the order with which F^(1) is sampled. (b) Simultaneous input from the nonspecific arousal source v_1^(3) and v_i^(2) is needed to elicit an output signal from v_{i+1}^(2). Continuous variations in the size of the v_1^(3) signal are translated into continuous variations in the velocity of performance.
the nonspecific arousal level at the time when this node is excited by v_i^(2). Since this is true for every node v_{i+1}^(2), a continuous variation of arousal level through time can continuously modulate performance speed. Indeed, such a feedforward modulation of performance velocity can be achieved at a very rapid rate (Lashley, 1951). Both additive and multiplicative (shunting) rules can, in principle, be used to restrict avalanche performance except when the nonspecific arousal source is active. A shunting rule enjoys an important formal advantage: it works even without an exquisitely precise choice of the relative sizes of all the avalanche's signals and thresholds. A shunting rule has this advantage because a zero arousal level will gate to zero even a large signal from v_i^(2) to v_{i+1}^(2). Consequently, v_{i+1}^(2) will not be activated at all by v_i^(2). Its zero STM trace
x_{i+1}^(2) will therefore remain smaller than any choice of a positive threshold. The concept of a shunting arousal signal is illustrated by the following equation for the activation of v_{i+1}^(2) by v_i^(2) and v_1^(3):

(d/dt) x_{i+1}^(2) = -A x_{i+1}^(2) + B_i S,   (22)

where B_i is the performance signal from v_i^(2) and S is the shunting signal from v_1^(3). Often B_i has the form B_i(t) = f(x_i^(2)(t - ξ)) and S has the form S(t) = g(x_1^(3)(t - τ)), where both f(w) and g(w) are sigmoid functions of w. For simplicity, let B_i and S equal zero at times t < 0 and equal constant positive values U and V, respectively, at times t ≥ 0. Define the RT of x_{i+1}^(2)(t) as the first time t = T when x_{i+1}^(2)(T) = Γ and (d/dt) x_{i+1}^(2)(T) > 0. By equation (22), if V = 0, then the RT is infinite, since x_{i+1}^(2) ≡ 0. Only if UV > AΓ does a signal ever leave v_{i+1}^(2), and it does so with an RT of

RT = A^{-1} ln [ UV / (UV - AΓ) ],   (23)

which is a decreasing function of U and V. Equation (23) shows that the determination of performance rate depends on the threshold of each node, the arousal level when the node exceeds threshold, the size of the signal generated by the previous node, and the delays ξ and τ due to propagating signals between nodes.
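Equation (22) and the closed-form reaction time (23) can be checked numerically. The sketch below (with constants A, U, V, and Γ chosen purely for illustration) integrates dx/dt = -Ax + UV by Euler steps and compares the first threshold-crossing time with equation (23); it also shows that zero arousal (V = 0) blocks the signal entirely, as the shunting rule requires.

```python
import math

def reaction_time(A, U, V, gamma, dt=1e-5, t_max=5.0):
    """First time the STM trace x(t), with dx/dt = -A*x + U*V and x(0) = 0,
    crosses the threshold gamma; None if the threshold is never reached."""
    x, t = 0.0, 0.0
    while t < t_max:
        if x >= gamma:
            return t
        x += dt * (-A * x + U * V)   # Euler step of equation (22) with B_i*S = U*V
        t += dt
    return None

A, U, V, gamma = 2.0, 1.5, 1.0, 0.5         # UV > A*gamma, so a signal is emitted
rt_sim = reaction_time(A, U, V, gamma)
rt_rule = (1.0 / A) * math.log(U * V / (U * V - A * gamma))   # equation (23)
print(rt_sim, rt_rule)                       # nearly equal
print(reaction_time(A, U, 0.0, gamma))       # zero arousal gates the signal: None
```

Raising U or V shortens the simulated RT, matching the claim that (23) is a decreasing function of U and V.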
In more complex examples, the signals to the nodes v_i^(2) can also be altered by LTM traces that gate these signals (Section 13). Equation (23) illustrates how an increase in such an LTM trace due to learning can cause a progressive speed-up of performance (Fitts and Posner, 1967; Welford, 1968), because an increase in an LTM trace that gates either signal B_i or S in equation (23) is equivalent to an increase in U or V. An interaction between learning, forgetting, and reaction time was used to explain such phenomena as performance speed-up due to learning, as well as some backward-masking properties, in Grossberg (1969c). Later articles on speed-up have used the same ideas to explain the power law relationship that often obtains between the time it takes to perform a task and the number of practice trials (Anderson, 1982; MacKay, 1982). Although both Anderson and MacKay suggest that their explanation of speed-up lends special support to their other concepts, it is historically more correct to say that both explanations lend support to well-known neural modeling ideas. There exist many variations on the avalanche theme. Indeed, it may be more appropriate to talk about a theory of avalanches than of assorted avalanche examples (Grossberg, 1969g, 1970b, 1974, 1978e). For present purposes, we can summarize some avalanche ideas in a way that generalizes to more complex situations:
1) A source of nonspecific arousal signals can modulate the performance rhythm.
2) Specific activations encode temporal order information in a way that enables a nonspecific arousal signal to generate temporally ordered performance.
3) The sequential events to be learned and performed are decomposed into spatial patterns.
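These avalanche ideas can be illustrated with a small numerical sketch. The Gaussian sampling pulses, the gated learning rate, and all constants below are my own illustrative choices, not the network equations themselves: a continuous "movie" across F^(1) is sampled by a chain of outstars, each of whose LTM vectors converges toward the one spatial pattern active during its brief sampling pulse.

```python
import numpy as np

n_cells, n_steps = 5, 400
rng = np.random.default_rng(1)
keyframes = rng.random((4, n_cells))          # target patterns theta^(1)..theta^(4)

def movie(t):
    """Continuous input pattern across F^(1): interpolate between keyframes."""
    s = t * (len(keyframes) - 1)
    k = min(int(s), len(keyframes) - 2)
    return (1 - (s - k)) * keyframes[k] + (s - k) * keyframes[k + 1]

centers = np.linspace(0.0, 1.0, len(keyframes))  # when each outstar O_k samples
z = np.zeros_like(keyframes)                     # LTM traces of the outstars
dt = 1.0 / n_steps
for step in range(n_steps):
    t = step * dt
    x = movie(t)                                 # STM pattern across F^(1)
    E = np.exp(-((t - centers) / 0.03) ** 2)     # brief sampling pulses E_k(t)
    z += dt * 200.0 * E[:, None] * (x - z)       # learning gated by E_k (stimulus sampling)

# Each outstar's LTM encodes essentially only "its" still picture theta^(k),
# so serially activating O_1, O_2, ... replays the movie in order.
errors = np.linalg.norm(z - keyframes, axis=1)
print(errors)
```

Because each sampling pulse is brief, the other patterns playing across F^(1) barely influence O_k's traces, which is the stimulus sampling property in miniature.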
10. Hierarchical Chunking and the Learning of Serial Order
Given the concept of a context-modulated avalanche, the following question generates a powerful teleological pressure for extending the theory. Suppose that the links v_i^(2) → v_{i+1}^(2) in the chain of nodes are not prewired into the network. How can a network with the processing capabilities of a context-modulated avalanche be self-organized? For example, if the spatial patterns across the sampled nodes F^(1) control successive chords in a piano sonata, then the individual chords need to be learned as well as the ordering of the chords. This process is typically carried out by reading piano music and transforming these visual cues into motor commands which elicit auditory feedback. Nowhere in a context-modulated avalanche are visual cues encoded before being mapped into motor commands. No sequence of auditory feedback patterns is encoded before being correlated with its generative sequence of motor commands, and no mechanism is included whereby the serial order of the motor commands can be learned. It is clear from this and many analogous examples that the set of sampling nodes in F^(2) corresponding to a particular behavioral sequence cannot be determined a priori, any more than each mind could contain a priori representations of all the sonatas that ever were or will be composed. For a similar reason, neither the serial ordering of these nodes nor the selective attachment of an arousal source to this set of sampling nodes is determined a priori. All of these structures need to be self-organized by developmental and learning mechanisms, and we need to understand how this happens.

11. Self-Organization of Plans: The Goal Paradox
These formal problems concerning the self-organization of avalanches also arise in many types of goal-oriented behavior. Consider maze learning (Figure 7) as an idealization of the many situations in which one learns a succession of choice points leading to a goal, such as going from home to the store or from one's office to the cafeteria. In Figure 7, one leaves the filled-in start box and is rewarded with food in the vertically hatched goal box. After learning has occurred, every errorless transit from start box to goal box requires the same sequence of turns at choice points in the maze. In some sense, therefore, our memory traces can encode the correct order at the correct choice points. Moreover, the goal box always occurs last on every learning trial, whether or not errors occur. During goal-directed performance, by contrast, the first thing that we think of is the goal, which somehow activates a plan that generates a correct series of actions. How is it possible for the goal event to always occur last on learning trials and for memory to encode the correct performance order, and yet for an internal representation of the goal to activate commands corresponding to events that preceded the goal on every learning trial? This is the goal paradox (Grossberg, 1978e). Actually, the goal paradox seems paradoxical only if one takes seriously the manner in which a learning subject's information unfolds in real time and vigorously resists the temptation to beg the question by appealing to a homunculus. To overcome the goal paradox, we need to understand how the individual sensory-motor coordinations leading to a correct turn at each choice point can be self-organized. This problem is formally analogous to the problem of self-organizing the individual nodes F^(2) of a context-modulated avalanche and of associating each of these nodes with a motor command across F^(1). We also need to understand how a plan can serially
Figure 7. The Goal Paradox: Correct performance from a start box to a goal box is always order-preserving. On every learning trial, the goal always occurs last. How can the correct order be learned despite the fact that an internal representation of the goal is activated first on performance trials and organizes the correct ordering of acts that preceded the goal on every learning trial?

organize individual choices into a correctly ordered sequence of choices. This latter problem needs to be broken into at least two successive stages. First, the internal representation corresponding to the plan is determined by the internal representations of the individual choices themselves, since during a correct transit of the maze on a learning trial, these are the relevant data that are experienced. The dependence of the plan on its defining sequence of events is schematized by the upward directed arrow in Figure 8a. The process whereby a planning node is selected is a form of code development, or chunking. After the plan is self-organized by its defining sequence of events, the plan in turn learns which internal representations have activated it, so that it can selectively activate only these representations on performance trials (Figure 8b). A feedforward, or bottom-up, process of coding (chunking) thus sets the stage for a feedback, or top-down, process of pattern learning (expectancy learning). In its role as a source of contextual feedback signals, the plan is analogous to the nonspecific arousal node v_1^(3) in the context-modulated avalanche. This discussion of the goal paradox emphasizes an issue that was implicit in my discussion of how an arousal-modulated avalanche can be self-organized. Both the nodes in F^(2) and the context-sensitive node in F^(3) are self-organized by a chunking process. Given this conclusion, some specialized processing issues now come into view. What
Figure 8. The feedback loop between chunking and planned temporal ordering: (a) Activity patterns across the item representations choose their planning nodes via a process of feedforward (bottom-up) code learning (chunking). (b) The chosen planning nodes can learn to activate the item representations which choose them, and the order with which items are activated, by a process of feedback (top-down) associative pattern learning (expectancy matching).
mechanism maintains the activities of the F^(2) nodes long enough for the simultaneously active F^(2) nodes to determine and be determined by the F^(3) planning node? What mechanism enables the planning node to elicit the correct performance order from the F^(2) nodes? Are learned associations between the F^(2) nodes necessary, as is suggested by the links v_i^(2) → v_{i+1}^(2) of a context-modulated avalanche, or can the learned signals from F^(3) to F^(2) encode order information by themselves? Once we explicitly recognize that the sequence of active F^(2) nodes determines its own F^(3) node, we must also face the following problem. Every subsequence of an event sequence is a perfectly good event sequence in its own right. By the principle of sufficient reason, each subsequence may therefore encode its own node. Not every subsequence may be able to encode a planning node with equal ease. Nonetheless, the single nonspecific arousal source v_1^(3) needs to be replaced by a field F^(3) of command nodes which are activated by prescribed subsequences across F^(2). Every node in F^(3) can in turn sample the activity pattern across F^(2) while it is active. As performance proceeds, the event sequences represented across F^(2) and the planning nodes activated across the field F^(3) continually change and mutually influence one another. A more advanced version of the same problem arises when, in addition to feedback exchanges within sensory and motor modalities, intermodality reciprocal exchanges control the unfolding of recognition and recall through time (Grossberg, 1978e). The deepest issues relating to these feedback exchanges concern their stability and self-consistency. What prevents every new event in F^(2) from destroying the encoding of past event sequences in F^(3)? What enables the total output from F^(3) to define a more global and predictive context than could any individual planning node?
12. Temporal Order Information in LTM

The design issues raised by the goal paradox will be approached in stages. First I consider how a command node can learn to read out item representations in their correct order without using learned associations between the item representations themselves. I also consider how ordered associations among item representations can be learned. Both of these examples are applications of serial learning properties that were first analysed in Grossberg (1969a) and generalized in Grossberg and Pepe (1970, 1971). The temporal order information in both examples is learned using the associative learning equations (6) and (7). Both types of temporal order information can be learned simultaneously in examples wherein pathways within F^(2) and between F^(2) and F^(3) are conditionable. Both examples overcome the instability problem of Estes' (1972) model of temporal order in LTM, and neither example requires a serial buffer to learn its temporal order information. Murdock (1979) has studied serial learning properties using a related approach, but one that is weakened by the conceptual difficulties of linear system theory models (Section 2). In particular, the Murdock approach has not yet been able to explain the bowed and skewed cumulative error curve in serial verbal learning. The problem of how a command node learns to read out an STM pattern that encodes temporal order information across item representations has two different versions, depending on whether the command node is excited before the first list item is presented or after the last list item is presented. The former problem will be considered now, but the latter problem cannot be considered until Section 34, since it requires a prior analysis of competitive STM interactions for its solution (Section 17).

13. Read-Out and Self-Inhibition of Ordered STM Traces

Figure 9a depicts the desired outcome of learning. The LTM traces
z_{1i} from the
command node v_1^(3) to the item representations v_i^(2) satisfy the chain of inequalities

z_{11} > z_{12} > z_{13} > ... > z_{1m},   (24)

due to the fact that the list of items r_1, r_2, ..., r_m was previously presented to F^(2). Consequently, when a performance signal from v_1^(3) is gated by these LTM traces, an STM pattern across F^(2) is generated such that

x_1^(2) > x_2^(2) > ... > x_m^(2).   (25)

A reaction time rule such as equation (23) initiates an output signal faster from a node with a large STM activity than from a node with a small STM activity. The chain of STM inequalities (25) can thus be translated into the correct order of performance using such a reaction time rule, if the following problem of perseveration can be prevented. After the first item r_1 is performed, the output signal from v_1^(2) must shut off to prevent a sustained output signal from interfering with the performance of later items. A specific inhibitory feedback pathway thus inhibits x_1^(2) after a signal is emitted from v_1^(2) (Figure 10). The same perseveration problem then faces the remaining active nodes v_2^(2), v_3^(2), ..., v_m^(2). Hence every output pathway from F^(2) can activate a specific inhibitory feedback pathway whose activation can self-inhibit the corresponding STM trace (Grossberg, 1978a, 1978e; Rumelhart and Norman, 1982). With this performance mechanism in hand, we now consider the more difficult problem of how the chain of LTM inequalities (24) can be learned during presentation of a list of items r_1, r_2, ..., r_m.
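This read-out scheme can be caricatured in a few lines (the particular LTM values are arbitrary): gating a performance signal by a primacy gradient as in (24) yields the STM pattern of (25); a reaction-time rule lets the largest trace fire first; and output-contingent self-inhibition then deletes each performed item so the next-largest trace can fire.

```python
z = [0.9, 0.7, 0.5, 0.3]       # LTM primacy gradient z_11 > z_12 > ... > z_1m, as in (24)
x = list(z)                    # STM pattern across F^(2) after gating, as in (25)

order = []
while any(trace > 0.0 for trace in x):
    i = max(range(len(x)), key=lambda j: x[j])   # RT rule: largest STM trace fires first
    order.append(i + 1)                          # item r_{i+1} is performed
    x[i] = 0.0                                   # inhibitory feedback self-inhibits x_i
print(order)   # [1, 2, 3, 4]: the list is performed in its original order
```

Without the self-inhibition step, the first item's large trace would keep winning and performance would perseverate on item 1, which is exactly the problem the inhibitory feedback pathway solves.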
14. The Problem of STM-LTM Order Reversal

The following example illustrates the problem in its most severe form. The STM properties that I now consider will, however, have to be generalized in Section 34. Suppose that each node v_i^(2) is excited by a fixed amount when the ith list item r_i is presented. Suppose also that, as time goes on, the STM trace x_i^(2) gets smaller due either to internodal competition or passive trace decay. Which of the two decay mechanisms is used does not affect the basic result, although different mechanisms will cause testable secondary differences in the order information to be encoded in LTM. Whichever decay mechanism is used, in response to serially presented lists, the last item to have occurred always has the largest STM trace. In other words, a recency effect exists at each time in STM (Figure 9b). Given this property, how is the chain of LTM inequalities learned? In other words, how does a sequence of recency gradients in STM get translated into a primacy gradient in LTM? I call this issue the STM-LTM order reversal problem (Grossberg, 1978e). The same problem arises during serial verbal learning, but in a manner that disguises its relevance to planned serial behavior. In this task, the generalization gradients of errors at each list position have the qualitative form depicted in Figure 11. A gradient of anticipatory (forward) errors occurs at the beginning of the list, a two-sided gradient of anticipatory and perseverative (backward) errors near the middle of the list, and a gradient of perseverative errors at the end of the list (Osgood, 1953). I suggest that the gradient of anticipatory errors at the beginning of the list is learned in the same way as a primacy gradient in LTM. I have shown (Grossberg, 1969f) that the same associative laws also generate the other position-sensitive error gradients.
Thus a command node that is activated after the entire list is presented encodes a recency gradient in LTM rather than the primacy gradient that is encoded by a command node activated before (or when) the first list item is presented. The same laws also provide an explanation
Figure 9. Simultaneous encoding of context and temporal order by top-down STM-LTM order reversal: (a) The context node v_1^(3) reads out a primacy gradient across the item representations of F^(2). (b) The context node v_1^(3) can learn a primacy gradient in LTM by multiplicatively sampling and additively storing a temporal series of STM recency gradients across F^(2).
Figure 10. A reaction time rule translates larger STM activities into faster output onsets. Output-contingent STM self-inhibition prevents item perseveration.

of why the curve of cumulative errors versus list position is bowed and skewed toward the end of the list, and of why associations at the beginning of the list are often, but not always, learned faster than associations at the end of the list (Grossberg and Pepe, 1970, 1971). From the perspective of planned serial behavior, these results show how the activation of a command node at different times during list presentation causes the node to encode totally different patterns of order information in LTM. Thus the learning of order information is highly context-sensitive. If a command node is activated by a prescribed list subsequence via F^(2) → F^(3) signals that subserve chunking and recognition, then this subsequence constrains the order information that its command node will encode by determining the time at which the command node is activated. Moreover, this context-sensitivity is not just a matter of determining which item representations will be sampled, as the issue of STM-LTM order reversal clearly shows. An important conclusion of this analysis is that the same sort of context-sensitive LTM gradients are learned on a single trial regardless of whether command nodes sample item representations at different times, or the item representations sample each other
Figure 11. Each sampling node v_j learns a different LTM pattern z_j = (z_{j1}, z_{j2}, ..., z_{jn}) if it samples at different times. In a list of length n = L whose intertrial interval is sufficiently long, a node that starts sampling at the list beginning (j ≈ 1) learns a primacy gradient in LTM. At the list end (j ≈ L), a recency gradient in LTM is learned. Near the list middle (j ≈ L/2), a two-sided LTM gradient is learned. When STM probes read different LTM patterns z_j into STM, the different patterns generate different error gradients due to the action of internal noise, the simultaneous read-out by other probes, and the STM competition that acts after LTM read-out.

through time. Although the order information that is encoded by the sampling nodes is the same, the two situations are otherwise wholly distinct. In the former case, list subsequences are the functional units that control learned performance, and many lists can be learned and performed over the same set of item representations. In the latter case, individual list items are the functional units that control learned performance, and once a given chain of associations is learned among the item representations, it will interfere with the learning of any other list ordering that is built up from the same item representations (Dixon and Horton, 1968; Lenneberg, 1967; Murdock, 1974). A third option is also available. It arises by considering a context-modulated avalanche whose serial ordering and context nodes are both self-organized by associative processes (Figure 12). In such a network, each of the item nodes can be associated with any of several other item nodes. In the absence of contextual support, activating any one of these item nodes causes only subliminal activation of its set of target item nodes, while activating a particular context node sensitizes a subset of associatively linked item nodes. A serial ordering of supraliminally activated item nodes can thus be generated.
Such an adaptive context-modulated avalanche possesses many useful properties. For example, item nodes are no longer bound to each other via rigid associative chains. A given item can activate different items in different contexts. The inhibition of a given context node can rapidly prevent the continued performance of the item ordering that it controls, while the activation of different context nodes can rapidly instate a new performance ordering among the same items or different items. In Figure 12, the item nodes are called command nodes. This change of terminology is intended to emphasize the fact that, in order for this design to be useful, the items must represent chunks on a rather high level of network processing. The number of transitions from each command node to its successors must be reasonably small in order to achieve the type of unambiguous serial ordering that the context chunk is supposed to guarantee. The sequence chunks within the masking field discussed in
Figure 12. In an adaptive context-modulated avalanche, each command node can be associated with a set of command nodes that it subliminally activates. Learned top-down signals from a context node can also sensitize a set of command nodes. The convergent effects of top-down and internodal signals cause the supraliminal activation of command nodes in a prescribed serial order. Different context nodes can generate different serial orders.

Section 38 are prime candidates for command nodes in an adaptive context-modulated avalanche. The ability of top-down and serial associative signals to activate ordered STM traces supraliminally, without also unselectively activating a broad field of STM traces, is facilitated by balancing these excitatory associative signals against inhibitory signals, notably inhibitory masking signals (see Section 38; Grossberg, 1978e, Sections 41-46).

15. Serial Learning

This section indicates how the context-sensitive LTM gradients in Figure 11 are learned. Why the same rules imply that the cumulative error curve of serial learning is bowed and skewed is reviewed from a recent perspective in Grossberg (1982a). First I consider how a primacy gradient (equation (24)) is encoded by the LTM traces (z_{11}, z_{12}, ..., z_{1m}) of a node v_1^(3) that is first activated before, or when, the first list item is presented. I then show how a recency gradient
z_{n1} < z_{n2} < ... < z_{nm}   (26)

is encoded by the LTM traces (z_{n1}, z_{n2}, ..., z_{nm}) of a node v_n^(3) that is first activated after the whole list is presented. A two-sided gradient

z_{k1} < z_{k2} < ... < z_{kr} > z_{k,r+1} > ... > z_{km}   (27)

encoded by a node v_k^(3) that is activated during the midst of the list presentation can then be understood as a combination of these effects. Let node v_1^(3) start sending out a sampling signal E_1 at about the time that r_1 is being presented. After rapidly reaching peak size, the signal E_1 gradually decays through time with its STM trace x_1^(3) as future list items r_2, r_3, ... are presented. Thus E_1 is largest when STM trace x_1^(2) is maximal, smaller when both traces x_1^(2) and x_2^(2) are active, smaller still when traces x_1^(2), x_2^(2), and x_3^(2) are active, and so on. Consequently, the product E_1 x_1^(2) in row 1 of Figure 9b exceeds the product E_1 x_2^(2) in row 2 of Figure 9b, which in turn exceeds the product E_1 x_3^(2) in row 3, and so on. Due to the slow decay of each LTM trace z_{1i} on each learning trial, z_{11} adds up the products E_1 x_1^(2) in successive rows of column 1, z_{12} adds up the products E_1 x_2^(2) in successive rows of column 2, and so on. An LTM primacy gradient (equation (24)) is thus generated due to the way in which E_1 samples the successive STM recency gradients, and to the fact that the LTM traces z_{1i} add up the sampled STM gradients E_1 x_i^(2). By contrast, the sampling signal E_n emitted by node v_n^(3) samples a different set of STM gradients, because v_n^(3) starts to sample only after all the item representations v_1^(2), v_2^(2), ..., v_m^(2) have already been activated on a given learning trial. Consequently, when the sampling signal E_n does turn on, it sees the already active STM recency gradient

x_1^(2) < x_2^(2) < ... < x_m^(2)   (28)

of the entire list. Moreover, the ordering (28) persists for a while because no new items are presented until the next learning trial. Thus signal E_n samples an STM recency gradient at every time. When all sampled recency gradients are added up through time, they generate a recency gradient (equation (26)) in the LTM traces of v_n^(3). In summary, command nodes that are activated at the beginning, middle, or end of a list encode different LTM gradients because they multiplicatively sample STM patterns at different times and summate these products through time.
16. Rhythm Generators and Rehearsal Waves

The previous discussion forces two refinements in our ideas about how nonspecific arousal is regulated. In a context-modulated avalanche, the nonspecific arousal node v_1^(3) both selects the set of nodes v_i^(2) that it will control and continuously modulates performance velocity across these nodes. A command node that reads out temporal order information as in Figure 9a can no longer fulfill both roles. Increasing or decreasing the
command node's activity while it reads out its LTM pattern proportionally amplifies the STM of all its item representations. Arbitrary performance rhythms are no longer attainable, because the relative reaction times of individual item representations are constrained by the pattern of STM order information. Nor is a sustained but continuously modulated supraliminal read-out from the command node permissible, because item representations that were already performed could then be reexcited, leading to a serious perseveration problem. Thus if a nonspecific arousal source dedicated to rhythmic control is desired, it must be distinguished from the planning nodes. Only then can order information and rhythm information remain decoupled. The reader should not confuse this idea of rhythm with the performance timing that occurs when item representations are read out as fast as possible (Sternberg, Monsell, Knoll, and Wright, 1978; Sternberg, Wright, Knoll, and Monsell, 1980). Properties of such a performance can, in fact, be inferred from the mechanism for read-out of temporal order information per se (Section 47). Another type of nonspecific arousal is also needed. If read-out of LTM order information is achieved by activating the item representations across F^(2), what prevents these item representations from being uncontrollably rehearsed, and thereby self-inhibited, while the list is being presented? To prevent this from happening, it is necessary to distinguish between STM activation of an item representation and output signal generation by an active item representation. This distinction is mechanized by assuming the existence of a nonspecific rehearsal wave capable of shunting the output pathways of the item representations. When the rehearsal wave is off, the item representations can blithely reverberate their order information in STM without generating self-destructive inhibitory feedback.
Only when the rehearsal wave turns on does the read-out of order information begin. The distinction between STM storage and rehearsal has major implications for which planning nodes in F^(3) will be activated and what they will learn. This is due to two facts working together. The rehearsal wave can determine which item subsequences will be active at any moment by rehearsing, and thereby inhibiting, one group of item representations before the next group of items is presented. Each active subsequence of item representations can, in turn, chunk its own planning node. The rehearsal wave thus mediates a subtle interaction between the item sequences that occur and the chunks that form to control future performance (Section 37).
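The storage-versus-rehearsal distinction can be sketched computationally. The following toy model is not from the text: it assumes a primacy gradient of STM activities and a binary rehearsal wave that gates the output pathways, with self-inhibition following each performed item.

```python
import numpy as np

# Illustrative sketch (assumptions mine): items are stored as a primacy
# gradient of STM activities. While the rehearsal wave is off, activities
# reverberate without generating the self-inhibitory feedback that
# accompanies performance. When the wave turns on, the most active item is
# read out and then self-inhibited, so items are performed in STM order.
def rehearse(stm, wave_on, threshold=1e-6):
    """Read out items in descending STM order while the rehearsal wave is on."""
    stm = np.array(stm, dtype=float)
    performed = []
    if not wave_on:
        return performed, stm          # storage only: no output, no self-inhibition
    while stm.max() > threshold:
        winner = int(np.argmax(stm))   # output stage selects the largest activity
        performed.append(winner)
        stm[winner] = 0.0              # self-inhibition after performance
    return performed, stm

stm0 = [0.9, 0.6, 0.4, 0.2]            # primacy gradient for a 4-item list
order_off, _ = rehearse(stm0, wave_on=False)  # nothing is performed
order_on, _ = rehearse(stm0, wave_on=True)    # items perform in list order
```

With the wave off, the list is merely stored; with the wave on, the gradient is read out and erased, which is why rehearsal of an early subsequence can precede presentation of later items.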
17. Shunting Competitive Dynamics in Pattern Processing and STM: Automatic Self-Tuning by Parallel Interactions

This analysis of associative mechanisms suggests that the unit of LTM is a spatial pattern. This result raises the question of how cellular tissues can accurately register input patterns in STM so that LTM mechanisms may encode them. This is a critical issue in cells because the range over which cell potentials, or STM traces, can fluctuate is finite and often narrow compared to the range over which cellular inputs can fluctuate. What prevents cells from routinely turning on all their excitable sites in response to intense input patterns, thereby becoming desensitized by saturation before they can even register the patterns to be learned? Furthermore, if small input patterns are chosen to avoid saturation, what prevents the internal noise of the cells from distorting pattern registration? This noise-saturation dilemma shows that cells are caught between two potentially devastating extremes. How do they achieve a golden mean of sensitivity that balances between these extremes? I have shown (Grossberg, 1973) that mass action competitive networks can automatically retune their sensitivity as inputs fluctuate to register input differences without being desensitized by either noise or saturation. In a neural context, these systems are called shunting on-center off-surround networks. The shunting, or mass action, dynamics are obeyed by the familiar membrane equations of neurophysiology; the automatic
retuning is due to automatic gain control by the inhibitory signals. The fixed operating range of cells should not be viewed as an unmitigated disadvantage. By fixing their operating range once and for all, cells can also define fixed output thresholds and other decision criteria with respect to this operating range. By maintaining sensitivity within this operating range despite fluctuations in total input load, cells can achieve an impressive parallel processing capability. Even if parallel input sources to the cells switch on or off unpredictably through time, thereby changing the total input to each cell, the automatic gain control mechanism can recalibrate the operating level of total STM activity to bring it into the range of the cells' fixed decision criteria. Additive models, by contrast, do not have this capability. These properties are mathematically described in Grossberg (1983, Sections 21-23).

Because the need to accurately register input patterns by cells is ubiquitous in the nervous system, competitive interactions are found at all levels of neural interactions and of my models thereof. A great deal of what is called "information processing" in other approaches to intelligence reduces in the present approach to a study of how to design a competitive, or close-to-competitive, network to carry out a particular class of computations. Several types of specialized competitive networks will be needed. As I mentioned in Section 1, the class of competitive systems includes examples which exhibit arbitrary dynamical behavior. Computer simulations that yield an interesting phenomenon without attempting to characterize which competitive parameters control the phenomenon teach us very little, because a small adjustment of parameters could, in principle, generate the opposite phenomenon. To quantitatively classify the parameters that control biologically important competitive networks is therefore a major problem for theorists of mind.
Grossberg (1981a, Sections 10-27) and Cohen and Grossberg (1983) review some results of this ongoing classification.
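The way a shunting on-center off-surround network escapes the noise-saturation dilemma can be checked numerically. The sketch below uses my own notation and parameters: at equilibrium of dx_i/dt = -A x_i + (B - x_i) I_i - x_i Σ_{k≠i} I_k, each activity equals B I_i / (A + I), where I is the total input, so activities stay bounded by B and register input ratios even as total intensity grows.

```python
import numpy as np

# Hedged sketch (parameters A, B and the input pattern are mine): equilibrium
# activities of a feedforward shunting on-center off-surround network.
# x_i = B * I_i / (A + sum_k I_k): bounded by B, sensitive to input ratios.
def shunting_equilibrium(I, A=1.0, B=1.0):
    I = np.asarray(I, dtype=float)
    return B * I / (A + I.sum())

pattern = np.array([1.0, 2.0, 3.0])
x_dim = shunting_equilibrium(pattern)           # dim presentation
x_bright = shunting_equilibrium(100 * pattern)  # 100x more intense presentation
# The relative pattern x_i / sum(x) is the same in both cases, and no
# activity saturates at B: automatic gain control in action.
```

Scaling the input pattern a hundredfold leaves the stored relative pattern unchanged, which is the "golden mean" of sensitivity the text describes; an additive model would instead drive every cell toward its ceiling.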
18. Choice, Contrast Enhancement, Limited STM Capacity, and Quenching Threshold

Some of the properties that I use can be illustrated by the simplest type of competitive feedback network:

dx_i/dt = -A x_i + (B - x_i)[I_i + f(x_i)] - x_i[J_i + Σ_{k≠i} f(x_k)],   (29)
where i = 1, 2, ..., n. In equation (29), term -A x_i describes the passive decay of the STM trace x_i at rate -A. The excitatory term (B - x_i)[I_i + f(x_i)] describes how an excitatory input I_i and an excitatory feedback signal f(x_i) from v_i to itself excite by mass action the unexcited sites (B - x_i) of the total number of sites B at each node v_i. The inhibitory term -x_i[J_i + Σ_{k≠i} f(x_k)] describes how the inhibitory input J_i and the inhibitory, or competitive, feedback signals f(x_k) from all v_k, k ≠ i, turn off the x_i excited sites of v_i by mass action. Equation (29) strips away all extraneous factors to focus on the following issue. How does the choice of the feedback signal function f(w) influence the transformation and storage of input patterns in STM? To discuss this question, I assume that inputs (I_1, I_2, ..., I_n, J_1, J_2, ..., J_n) are delivered before time t = 0 and switch off at time t = 0 after having instated an initial pattern x(0) = (x_1(0), x_2(0), ..., x_n(0)) in the network's STM traces. Our task is to understand how the choice of f(w) influences the transformation of x(0) into the stored pattern x(∞) = (x_1(∞), x_2(∞), ..., x_n(∞)) as time increases. Figure 13 shows that different choices of f(w) generate markedly different storage modes. The function g(w) = w^{-1} f(w) is also graphed in Figure 13 because the property that determines the type of storage is whether g(w) is an increasing, constant, or decreasing function at prescribed values of the activity w. For example, as in the
four rows of Figure 13, a linear f(w) = aw generates a constant g(w) = a; a slower-than-linear f(w) = aw(b + w)^{-1} generates a decreasing g(w) = a(b + w)^{-1}; a faster-than-linear f(w) = aw^n, n > 1, generates an increasing g(w) = aw^{n-1}; and a sigmoid f(w) = aw^2(b + w^2)^{-1} generates a concave g(w) = aw(b + w^2)^{-1}. Both linear and slower-than-linear signal functions amplify noise. Even tiny activities are bootstrapped into large activities by the network's positive feedback loops. This fact represents a serious challenge to linear feedback models (Grossberg, 1978d). A faster-than-linear signal function can tell the difference between small and large inputs by amplifying and storing only sufficiently large activities. Such a signal function amplifies the large activities so much more than the smaller activities that it makes a choice. Only the largest initial activity is stored in STM. A sigmoid signal function can also suppress noise, although it does so less vigorously than a faster-than-linear signal function. Consequently, activities less than a criterion level, or quenching threshold (QT), are suppressed, whereas the pattern of activities that exceeds the QT is contrast enhanced before being stored in STM. Any network that possesses a QT can be tuned. By increasing or decreasing the QT, the criterion of which activities represent functional signals (and hence should be processed and stored in STM) and which activities represent functional noise (and hence should be suppressed) can be flexibly modified through time. An increase in the QT can cause all but the largest activities to be quenched. Thus the network can behave like a choice machine if its storage criteria are made sufficiently strict. A sudden decrease in the QT can cause all recently presented patterns to be stored.
If a novel or unexpected event suddenly decreases the QT, all relevant data can be stored in STM until the cause of the unexpected event is learned (Grossberg, 1975, 1982b). It cannot be overemphasized that the existence of the QT and its desirable tuning properties all follow from the use of a nonlinear signal function. To illustrate the QT concept concretely, consider a sigmoid signal function f(w) that is faster-than-linear for 0 ≤ w ≤ x^(1) and linear for x^(1) ≤ w ≤ B. The slower-than-linear part of f(w) does not affect network dynamics because each x_i ≤ B by equation (29). More precisely, let f(w) = C w g(w), where C > 0, g(w) is increasing for 0 ≤ w ≤ x^(1), and g(w) = 1 for x^(1) ≤ w ≤ B. Grossberg (1973, pp.355-359) has demonstrated that the QT of this network is

QT = x^(1) [1 - A(BC)^{-1}]^{-1}.   (30)
By this equation, the QT is not the "manifest" threshold of f(w), which occurs in the range where g(w) is increasing. Instead the QT depends on the transition activity x^(1) at which the signal function becomes linear, the slope C of the signal function, the number of excitable sites B, and the STM decay rate A. Thus all the parameters of the network influence the size of the QT. By equation (30), an increase in C causes a decrease in the QT. In other words, increasing a shunting signal C that nonspecifically gates all the network's feedback pathways facilitates STM storage.

Another property of STM in a competitive network is its limited capacity. This property follows from the network's tendency to conserve, or normalize, the total suprathreshold activity that it can store in STM. Consequently, an increase in one STM trace forces a decrease in other STM traces. As soon as one of these diminished traces becomes smaller than the QT, it is suppressed.

A full understanding of the normalization concept, no less than the QT concept, requires a mathematical study of relevant examples. The case wherein f(w) is faster-than-linear illustrates normalization in its simplest form. Let x = Σ_{i=1}^n x_i be the total STM activity and let F = Σ_{i=1}^n f(x_i) be the total feedback signal. Summing over the
Figure 13. Influence of signal function f(w) on input pattern transformation and STM storage.
index i in equation (29) yields the equation

dx/dt = -Ax + (B - x)F.   (31)

To solve for the possible equilibrium activities x(∞) of x(t), let dx/dt = 0 in equation (31). Then

Ax/(B - x) = F.   (32)

Since a network with a faster-than-linear feedback signal makes a choice, only one STM trace x_i(t) remains positive as t → ∞. Hence only one summand in F remains positive as t → ∞, and its x_i(t) value approaches x(t). Consequently equation (32) can be rewritten as

Ax/(B - x) = f(x).   (33)

Equation (33) is independent of the number of active nodes. Hence the total STM activity is independent of the number of active nodes.
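Both the choice property and the normalization property of equations (29)-(33) can be verified numerically. The following hedged sketch (parameter values A = 1, B = 4 and the initial patterns are mine) integrates equation (29) with the faster-than-linear signal f(w) = w^2 and no inputs after t = 0; the surviving total activity should solve A x/(B - x) = x^2, i.e. x = 2 + sqrt(3), whether three or five nodes were initially active.

```python
import numpy as np

# Hedged numerical check of equations (29)-(33) with f(w) = w**2 and
# illustrative parameters A = 1, B = 4. Euler integration of the no-input
# competitive feedback network.
def run_network(x0, A=1.0, B=4.0, dt=0.01, steps=5000):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        f = x ** 2                        # faster-than-linear feedback signal
        F = f.sum()
        # dx_i/dt = -A x_i + (B - x_i) f(x_i) - x_i * sum_{k != i} f(x_k)
        x += dt * (-A * x + (B - x) * f - x * (F - f))
    return x

x3 = run_network([0.5, 0.3, 0.2])                  # three active nodes
x5 = run_network([0.5, 0.3, 0.2, 0.15, 0.1])       # five active nodes
winner = 2 + np.sqrt(3)   # positive stable root of A*x/(B - x) = x**2
# In both runs only the largest initial activity survives (choice), and the
# stored total activity approaches 2 + sqrt(3) regardless of n (normalization).
```

The suppressed activities fall to zero, so the network stores the same total STM activity for n = 3 and n = 5, which is the limited-capacity property derived above.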
19. Limited Capacity Without a Buffer: Automaticity versus Competition

The formal properties of the previous section are reflected in many types of data. A fixed capacity buffer is often posited to explain the limited capacity of STM (Atkinson and Shiffrin, 1968; Raaijmakers and Shiffrin, 1981). Such a buffer is often implicitly or explicitly endowed with a serial ordering of buffer positions to explain free recall data. Buffer models do not, however, explain how items can be read in to one buffer position and still move their representations along buffer positions in such a way that every item can be performed from each buffer position, as is required for the buffer idea to meet free recall data. The buffer concept also tacitly implies that the entire hierarchy of codes that is derivable from item representations can also be shifted around as individual item codes move in the buffer. The normalization property provides a dynamical explanation of why STM has a limited capacity without using a serial buffer. In the special case that new item representations get successively excited by a list of inputs, the normalization property implies that other item representations must lose activity. As soon as one of the activities falls below the QT, it drops out of STM. No notion of moving item representations through a buffer is required. Hence, no grueling problems of shift-invariant read-in and read-out need to be solved. In this view of the limited capacity of STM, it is important to know which item representations are mutually inhibitory. Equation (29) represents the atypical situation in which each item representation can inhibit all other item representations with equal ease via the inhibitory terms Σ_{k≠i} f(x_k). More generally, an equation of the form

dx_i/dt = -A x_i + (B - x_i)[I_i + Σ_{k=1}^n f_k(x_k) C_ki] - x_i[J_i + Σ_{k=1}^n g_k(x_k) E_ki]   (34)
holds, i = 1, 2, ..., n, in which the excitatory signal f_k(x_k) from v_k excites v_i with a strength f_k(x_k) C_ki, whereas the inhibitory signal g_k(x_k) from v_k inhibits v_i with a strength g_k(x_k) E_ki. If the inhibitory coefficients E_ki decrease with the network distance between v_k and v_i, then total STM activity can progressively build up as more items are presented, until the density of active nodes causes every new input to be partly inhibited by a previously excited node. Thus sparsely distributed items may, at least at a single network level, sometimes be instated "automatically" in STM by their inputs without
incurring competitive "capacity limitations" (Norman and Bobrow, 1975; Schneider and Shiffrin, 1976, 1977). The possibility that total STM activity can build up to an asymptote plays an important part in characterizing stable rules for laying down temporal order information in STM (Section 34). "Automatic" processing can also occur in the present theory due to the influence of learned top-down expectancies, or feedback templates, on competitive matching processes (Section 24). The tendency to sharply differentiate automatic versus controlled types of processing has been popularized by the work of Schneider and Shiffrin (1976, 1977), who ascribe automatic properties to a parallel process and controlled properties to a serial process. This distinction creates conceptual paradoxes when it is joined to concepts about learning (e.g., Grossberg, 1978e, Section 61). Consider the serial learning of any new list of familiar items. Each familiar item is processed by a parallel process, while each unfamiliar inter-item contingency is processed by a serial process. Schneider and Shiffrin's theory thus implies that the brain somehow rapidly alternates between parallel and serial processing when the list is first presented but switches to sustained parallel processing as the list is unitized. Consider the perception of a picture whose left half contains a familiar face and whose right half contains a collection of unfamiliar features. Schneider and Shiffrin's theory implies that the brain somehow splits the visual field into a serial half and a parallel half, and that the visual field gets reintegrated into a parallel whole as the unfamiliar features are unitized. These paradoxes arise from the confusion of serial properties with serial processes and of parallel properties with parallel processes. All the processes of the present theory are parallel processes, albeit of a hierarchically organized network.
The present theory shows how both serial properties and parallel properties can be generated by these parallel processes in response to different experimental paradigms. In particular, "the auditory-to-visual codes and templates that are activated in VM [varied mapping] and CM [consistent mapping] conditions are different, but the two conditions otherwise share common mechanisms" (Grossberg, 1978e, p.364). Some evoked potential tests of this viewpoint and explanations of other data outside the scope of the Schneider and Shiffrin theory are described elsewhere (Banquet and Grossberg, 1986; Carpenter and Grossberg, 1986b, 1986c; Grossberg, 1982d, 1984a; Grossberg and Stone, 1986a). A growing number of recent experiments also support this viewpoint (e.g., Kahneman and Chajczyk, 1983).
20. Hill Climbing and the Rich Get Richer

The contrast enhancement property of competitive networks manifests itself in a large body of data. A central role for both contrast enhancement and normalization has, for example, been suggested in order to search associative memory and to self-organize new perceptual and cognitive codes (Carpenter and Grossberg, 1986b; Grossberg, 1978e, 1980c, 1982b). A more direct appearance of contrast enhancement has also been suggested to exist in letter and word recognition (Grossberg, 1978e; McClelland and Rumelhart, 1981). McClelland and Rumelhart introduce a number of evocative phrases to enliven their discussion of letter recognition, such as the "rich-get-richer" effect and the "gang" effect. The former effect is simply a contrast enhancement effect whereby small differences in the initial activation levels of word nodes get amplified through time into larger differences. The numerical studies and intuitive arguments presented by McClelland and Rumelhart (1981) do not, however, disclose why this can sometimes happen. Figure 13 illustrates that a correct choice of signal function is needed for it to happen, but a correct choice of signal function is not enough to guarantee even that the network will not oscillate uncontrollably through time (Grossberg, 1978c, 1980a). The gang effect uses a reciprocal exchange of prewired feedforward filters and feedback templates between letter nodes and word nodes to complete a word representation in response to an incomplete list of letters. This type of positive feedback exchange is also susceptible to uncontrollable instabilities whose prevention has been
analysed previously (Grossberg, 1976a, 1976b, 1980c). I prefer the use of functional words rather than shibboleths for both an important reason and a frivolous reason. The important reason is that an adherence to functional words emphasizes that a single functional property is generating data in seemingly disparate paradigms. Functional words thus tend to unify the literature rather than to fragment it. The frivolous reason is that another rich-get-richer effect had already been so christened in the literature before the usage of McClelland and Rumelhart (1981), and I was the person to blame. In Grossberg (1977), I called the normative drift whereby activity can be sucked from some populations into others, due to the amplified network parameters of the latter populations, a rich-get-richer effect. At the time, I found this sociological interpretation both amusing and instructive. Later (Grossberg, 1978b), however, I realized that the same mechanism could elicit automatic hill climbing along a developmental gradient, and I suggested (Grossberg, 1978e) how the same hill-climbing mechanism could respond to an ambiguous stimulus by causing a spontaneous STM drift from a representation that was directly excited by the stimulus to a more complete, or normative, nearby representation, which could then quickly read out its feedback template to complete the ambiguous data base. This normative mechanism was, in fact, first presented in Levine and Grossberg (1976) as a possible explanation of Gibson's (1937) line neutralization effect. Thus the same functional idea has now been used to discuss visual illusions, pattern completion and expectancy matching, developmental gradients, and even sociology. It thus needs a functional name that is neutral enough to cover all of these cases. It needs a name other than contrast enhancement because it uses a mechanism of lateral masking that is distinct from simple contrast enhancement. This type of masking will be discussed in more detail to show how item sequences can be automatically parsed in a context-sensitive fashion through time (Section 38).
21. Instar Learning: Adaptive Filtering and Chunking

With these introductory remarks about competition in hand, I now discuss the issue of how new recognition chunks can be self-organized within the fields F^(2) and F^(3) of a context-modulated avalanche, or more generally within the command hierarchy of a goal-oriented sequence of behaviors. I first consider the minimal anatomy that is capable of chunking or code development; namely, the instar (Figure 14). As its name suggests, the instar is the network dual of an outstar. An instar is constructed from an outstar by reversing the direction of its sampling pathways. Whereas in an outstar, conditionable pathways radiate from a sampling cell to sampled cells, in an instar conditionable pathways point from sampled cells to a sampling cell. Consequently, the sampling cell of an instar is activated by a sum of LTM-gated signals from sampled cells. These signals may be large enough to activate the sampling cell and thus cause the sampled cells to be sampled. If a spatial pattern of inputs persistently excites the sampled cells, it can cause alterations in the pattern of LTM traces across the conditionable pathways that gate the sampled signals. These LTM changes can enable a practiced input pattern to excite the instar's sampling node with greater efficacy. The heightened activation of the sampling node measures how well the sampling source has come to represent, or chunk, the input pattern. Although this version of chunking is too simplistic, two aspects of the problem as studied in this form have far-reaching consequences (Grossberg, 1976a, 1980c). To fix ideas, denote the sampling node by v_0 and the sampled nodes by v_1, v_2, ..., v_n. For simplicity, let the cells v_1, v_2, ..., v_n rapidly normalize any input pattern I_i = θ_i I that they receive into STM activities x_i = θ_i. Denote the signal emitted from v_i into pathway e_{i0} by f(θ_i).
This signal is gated by the LTM trace z_{i0} before the gated signal f(θ_i) z_{i0} reaches v_0 from v_i. All of these signals are added at v_0 to generate a total signal T_0 = Σ_{i=1}^n f(θ_i) z_{i0}. As in equation (11), T_0 is the dot product

T_0 = f_θ · z_0   (35)
Figure 14. The duality between expectancy learning and chunking: (a) An outstar is the minimal network capable of associative pattern learning, notably expectancy learning. (b) An instar is the minimal network capable of code development, notably chunking. The source of an outstar excites the outstar border. The border of an instar excites the instar source. In both outstars and instars, source activation is necessary to drive LTM sampling. Since the signals from the instar border are gated by LTM traces before activating the instar source, code learning both changes the efficacy of source activation and is changed by it in an STM-LTM feedback exchange.
of the two vectors f_θ = (f(θ_1), f(θ_2), ..., f(θ_n)) and z_0 = (z_{10}, z_{20}, ..., z_{n0}). To characterize the STM response at v_0 to the signal T_0, suppose for simplicity that the total activity of v_0 is normalized to 1 and that v_0 possesses a QT equal to ε. Then

x_0 = 1 if T_0 ≥ ε, and x_0 = 0 if T_0 < ε.   (36)

Moreover, suppose that the LTM traces z_{i0} satisfy the equation

dz_{i0}/dt = (-z_{i0} + f(θ_i)) x_0.   (37)
Equation (37) is a special case of equation (7) in which no learning occurs unless T_0 succeeds in exciting x_0. When learning does occur, z_{i0} is attracted towards f(θ_i), i = 1, 2, ..., n. Under these conditions, it is easily shown that as input trials proceed, the vector z_0(t) of LTM traces is monotonically attracted to the vector f_θ of signal pattern weights. In other words, z_0(t) becomes parallel to f_θ as practice proceeds. This trend tends to maximize the signal T_0(t) = f_θ · z_0(t) as trials proceed, because the dot product is maximized when vectors of fixed length are made parallel to each other. As T_0(t) grows, its ability to activate v_0 by exceeding its QT also increases. Speaking intuitively, v_0 "codes" θ due to learning trials, or the adaptive filter T_0 is "tuned" by experience. Unfortunately, this example is deficient in several ways. For one, there is no nontrivial coding: v_0 can become more sensitive to a single pattern θ, but no one node can differentially encode a large number of patterns into more than two categories. Clearly, more sampling nodes are needed. This can be accomplished as follows.
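The instar tuning process of equations (35)-(37) can be sketched directly. In the hedged example below (the pattern, learning rate, and QT value ε = 0.1 are illustrative choices of mine, and trial-based updates stand in for the continuous dynamics), the LTM vector is attracted toward the signal pattern whenever the gated total signal exceeds the QT, so T_0 grows across trials.

```python
import numpy as np

# Hedged sketch of instar tuning, equations (35)-(37). Discrete trials
# approximate the continuous LTM dynamics; parameter values are mine.
def train_instar(f_theta, z0, epsilon=0.1, rate=0.2, trials=60):
    z = np.array(z0, dtype=float)
    history = []
    for _ in range(trials):
        T0 = float(f_theta @ z)              # equation (35): dot-product filter
        x0 = 1.0 if T0 >= epsilon else 0.0   # equation (36): QT response at v0
        z += rate * (-z + f_theta) * x0      # equation (37): gated LTM learning
        history.append(T0)
    return z, history

f_theta = np.array([0.6, 0.3, 0.1])          # normalized signal pattern
z, hist = train_instar(f_theta, z0=[0.3, 0.3, 0.3])
# z is monotonically attracted toward f_theta, and T0 = f_theta . z grows
# across trials: the adaptive filter is "tuned" by experience.
```

Because the dot product is maximized when the vectors are parallel, the recorded history of T_0 increases monotonically, which is exactly the sense in which v_0 comes to "code" the practiced pattern.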
22. Spatial Gradients, Stimulus Generalization, and Categorical Perception

Let the nodes v_1, v_2, ..., v_n be replaced by a field F^(1) of nodes, and let v_0 be replaced by a field F^(2) of nodes. Each pattern across F^(1) can now send a positive signal to many nodes of F^(2). How is an increasingly selective response across F^(2) to be achieved as sequences of input patterns perturb F^(1)? Both networks F^(1) and F^(2) include competitive interactions to solve the noise-saturation dilemma. The easiest way to achieve learned selectivity is thus to design F^(2) as a sharply tuned competitive feedback network that chooses its maximal input for STM storage, quenches all other inputs, and normalizes its total STM activity. By analogy with the previous example, let the total input to v_j^(2) in F^(2) equal

T_j = Σ_{i=1}^n f(θ_i) z_{ij},   (38)

let

x_j^(2) = 1 if T_j > max{ε, T_k : k ≠ j}, and x_j^(2) = 0 otherwise,   (39)

and let

dz_{ij}/dt = (-z_{ij} + f(θ_i)) x_j^(2).   (40)
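A minimal simulation makes the learned partition concrete. The sketch below is mine (the two-dimensional patterns, learning rate, and mildly biased initial LTM vectors are illustrative assumptions, and discrete winner-take-all updates stand in for equations (38)-(40)): each input is filtered through T_j, F^(2) chooses the maximal T_j, and only the chosen node's LTM vector learns.

```python
import numpy as np

# Compact sketch of equations (38)-(40) with discrete trials and
# illustrative values: adaptive filtering, choice at F(2), gated learning.
def competitive_learning(patterns, z, rate=0.3, sweeps=20):
    z = np.array(z, dtype=float)                 # rows are LTM vectors z_j
    for _ in range(sweeps):
        for f_theta in patterns:
            T = z @ f_theta                      # equation (38): adaptive filter
            j = int(np.argmax(T))                # equation (39): F(2) chooses
            z[j] += rate * (f_theta - z[j])      # equation (40): gated learning
    return z

# Two clusters of patterns and two mildly biased initial LTM vectors.
patterns = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
z = competitive_learning(patterns, z=[[0.55, 0.45], [0.45, 0.55]])
labels = [int(np.argmax(z @ p)) for p in patterns]
# labels groups the first two patterns under one chunk and the last two
# under the other: a learned partition into categories P_1 and P_2.
```

Repeated presentations drive each LTM vector toward the patterns its node wins, so the inputs end up partitioned into mutually exclusive categories, which is the behavior the next paragraph describes analytically.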
Now let a sequence of spatial patterns perturb F^(1) in some order. What happens? This is a situation where the good news is good and the bad news is better. If the spatial patterns are not too densely distributed in pattern space, in a sense that can be
made precise (Grossberg, 1976a), then learning partitions the patterns into mutually exclusive and exhaustive subsets P_1, P_2, ..., P_m, such that every input pattern θ in P_j excites its recognition chunk v_j^(2) with the maximal possible input, given the constraint that the LTM vector z_j is attracted to all the vectors f_θ of patterns θ in P_j. Node v_j^(2) is also activated by patterns θ that are weighted averages of patterns in P_j, even if these patterns θ are novel patterns that have never been experienced. Hence a generalization gradient exists across F^(2). The adaptive filter projects novel patterns into the classification set spanned by the patterns in P_j. If a pattern θ is deformed so much that it crosses from one set P_j to another set P_k, then a rapid switch from choosing v_j^(2) to choosing v_k^(2) occurs. The boundaries between the sets P_j are categorical. Categorical perception can thus be anticipated whenever adaptive filtering interacts with sharp competitive tuning, not just in speech recognition experiments (Hary and Massaro, 1982; Pastore, 1981; Studdert-Kennedy, 1980). The categorical boundaries are determined by how each input pattern is filtered by all the LTM vectors z_j, and by how all the dot product signals T_j fare in the global competition for STM activity. Consequently, practicing one pattern θ can recode the network's STM response to a novel pattern θ' by changing the global balance between filtering and competition. This conclusion can be understood most easily by substituting equations (38) and (39) into equation (40). One then observes that the rate of change dz_{ij}/dt of each LTM trace depends on the global balance of all signals and all LTM traces, and thus on the entire history of the system as a whole. Factors that promote adherence to or deviations from categorical perception are more subtle than these equations indicate. Section 23 notes that an interplay of attentional factors with feature coding factors can cause the same network to react categorically or continuously under different experimental conditions. Such a result is not possible in the categorical perception model of Anderson et al. (1977) because the activities in that model must always reach a maximal or a minimal value (Section 2).

23. The Progressive Sharpening of Memory: Tuning Prewired Perceptual Categories
The requirement that F^(2) make an STM choice is clearly too strong. More generally, F^(2) possesses a tunable QT due to the fact that its competitive feedback signals are sigmoidal. Then only those signals T_j whose LTM vectors z_j are sufficiently parallel to an input pattern, within some range of tolerance determined by the QT, will cause suprathreshold STM reactions x_j^(2). In this case, the competitive dynamics of F^(2) can be approximated by a rule of the form

x_j^(2) = h(T_j) / Σ_{k: T_k ≥ ε} h(T_k) if T_j ≥ ε, and x_j^(2) = 0 if T_j < ε,   (41)

instead of equation (39). In equation (41), the inequality T_j ≥ ε says that the dot product input to v_j^(2) exceeds the QT of F^(2). The function h(T_j) approximates the contrast-enhancing action of sigmoid signaling within F^(2). The ratio of h(T_j) to Σ_{k: T_k ≥ ε} h(T_k) approximates the normalization property of F^(2). Due to equation (41), a larger input T_j than T_k causes a larger STM reaction x_j^(2) than x_k^(2). By equation (40), a larger value of x_j^(2) than x_k^(2) causes faster conditioning
of the LTM vector z_j than of z_k. Faster conditioning of z_j causes h(T_j) to be relatively larger than h(T_k) on later learning trials than on earlier learning trials. Due to the normalization property, relatively more of the total activity of F^(2) will be concentrated at x_j^(2) than was true on earlier learning trials. The relative advantage of x_j^(2) is then translated into relatively faster conditioning of the LTM vector z_j. This feedback exchange between STM and LTM continues until the process equilibrates, if indeed it does. As a result of this exchange, the critical features within the filter T = (T_1, T_2, ..., T_m) eventually overwhelm less salient features within the STM representation across F^(2) of input patterns due to F^(1). Representations can thus be sharpened, or progressively tuned, due to a feedback exchange between "slow" adaptive coding and "fast" competition. The tendency to sharpen representations due to training leads to learned codes with context-sensitive properties. This is because the critical features that code a given input pattern are determined by all of the input patterns that the network ever experiences. The ultimate representation of a single "word" in the network's input vocabulary thus depends on the entire "language" being learned, despite the fact that prewired connections in the signaling pathways from F^(1) to F^(2), and within F^(1) and F^(2), constrain the features that will go into these representations. Other sources of complexity are due to the fact that equations (38), (40), and (41) approximate only the most elementary aspects of the learning process. The filter in equation (38) often contains parameters P_{ij}, as in

T_j = Σ_{i=1}^n f(θ_i) P_{ij} z_{ij},   (42)
which determine prewired positional gradients from F(') to F@). These positional gradients break up the filtering of an input pattern into sets of partially overlapping channels. Some choice of prewired connections P,j is needed to even define a filter such as that in equations (38) or (42). Thus "tuning an adaptive filter" always means "tuning the prewired positional gradients of an adaptive filter." Infants may thus be "able to perceive a wide variety of phonetic contrasts long before they actually produce these contrasts in their own babbling" (Jusczyk, 1981, p.156). The fact that developmental tuning may alter the LTM traces zij in equation (42) in no way invalidates the ability of the prewired gradients Pal in the equation to constrain the perceptual categories that tuning refines, both before and after tuning takes place. The special choice PiJ = 1 for all i and j in equation (38) describes the simplest (but an unrealistic) case in which all filters TJ receive equal prewired connections, but possibly different initial LTM traces, from all nodes ui. In subsequent formulas, all Pi, will be set equal to 1 for notational simplicity. However, the need to choose nonuniform Pal in general should not be forgotten. Other simplifying assumptions must also be generalized in order to deal with realistic cases. The rule in equation (41) for competition ignores many subtleties of how one competitive design can determine a different STM transformation than another. This rule also ignores the fact that the input pattern T to F ( z ) is transformed into a pattern of STM traces across F(2) before this STM pattern, not the input pattern itself, is further transformed by competitive feedback interactions within F @ ) . Despite these shortcomings, the robust tendency for memory to sharpen progressively due to experience is clarified by these examples (Cermak and Craik, 1979; Cohen and Nadel, 1982). 
The degree of representational sharpening can be manipulated at will by varying the QT of F(2). A high QT will cause sharply tuned codes to evolve from F(1) to F(2), as in
356
Chapter 6
equation (39). A lower QT will enable a more diffusely distributed map to be learned from F(1) to F(2). Such a diffuse map may be protected from noise amplification by the use of sigmoid competitive signaling at every processing stage that is capable of STM storage. If, however, the QT is chosen so small that fluctuations in internal cellular noise or in nonspecific arousal can exceed the QT, then the network STM and LTM can both be pathologically destabilized. A network in which two successive stages of filtering F(1) → F(2) → F(3) occur, where the first stage F(1) → F(2) generates a diffuse map and the second stage F(2) → F(3) generates a sharply tuned map, is capable of computing significant global invariants of the input patterns to F(1) (Fukushima, 1980; Grossberg, 1978e).
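The dependence of code sharpness on the QT can be caricatured in a few lines (a toy rendering of my own; the particular threshold values and the single squaring step standing in for faster-than-linear competitive feedback are illustrative assumptions):

```python
import numpy as np

def stored_pattern(T, qt):
    """Store a filtered input pattern T in STM: activities below the
    quenching threshold (QT) are suppressed, survivors are
    contrast-enhanced (faster-than-linear signal) and renormalized
    (total-activity normalization of a shunting competitive network)."""
    x = np.where(T >= qt, T, 0.0)   # quench subthreshold activities
    x = x ** 2                      # one step of contrast enhancement
    s = x.sum()
    return x / s if s > 0 else x

T = np.array([0.10, 0.30, 0.25, 0.20, 0.15])
sharp = stored_pattern(T, qt=0.28)    # high QT: near winner-take-all code
diffuse = stored_pattern(T, qt=0.12)  # low QT: diffusely distributed map
```

With the high QT only one population survives quenching, so the code is sharply tuned; with the low QT four populations survive, and a distributed map can be learned from the stored pattern.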
24. Stabilizing the Coding of Large Vocabularies: Top-Down Expectancies and STM Reset by Unexpected Events

Now for the bad news. If the number of input patterns to be coded is large relative to the number of nodes in F(2), and if these input patterns are densely distributed in pattern space, then no temporally stable code may develop across F(2) using only the interactions of the previous section (Grossberg, 1976a). In other words, the STM pattern across F(2) that is caused by a fixed input pattern can persistently change through time due to the network's adaptive reaction to the other input patterns. The effort to overcome this catastrophe led to the introduction of the adaptive resonance theory (Grossberg, 1976b). I refer the reader to previous articles (Grossberg, 1982b, 1982d, 1984a) for a more thorough analysis of these results. The main observation needed here is that a developing code can always be temporally stabilized by the action of conditionable top-down templates or feedback expectancies. This fact sheds new light on results which have suggested a role for feedback templates in a diverse body of data, including data about phonemic restoration, word superiority effects, visual pattern completion effects, and olfactory coding, to name a few (Dodwell, 1975; Foss and Blank, 1980; Freeman, 1979; Johnston and McClelland, 1974; Lanze, Weisstein, and Harris, 1982; Marslen-Wilson, 1975; Marslen-Wilson and Welsh, 1978; Rumelhart and McClelland, 1982; Warren, 1970; Warren and Obusek, 1971). My theory suggests that top-down templates are a universal computational design in all neural subsystems capable of achieving temporally stable adaptation in response to a complex input environment. The theory also identifies the mechanisms needed to achieve temporally stable adaptation.
Because many articles that use top-down mechanisms consider only performance issues rather than a composite of learning and performance issues, they do not provide a sufficient indication of why top-down expectancies exist or how they work. My theory considers the basic issue of how a network can buffer the internal representations that it has already self-organized against the destabilizing effects of behaviorally irrelevant environmental fluctuations and yet adapt rapidly in response to novel environmental events which are crucial to its survival. To do this, the network must know the difference between expected and unexpected events, and be able to differentially process both. I trace this ability to the properties of two complementary subsystems: an orienting subsystem and an attentional subsystem. Figures 15 and 16 summarize how these two types of subsystems operate. In both figures I assume that an active STM pattern is reverberating across certain nodes in the (i+1)st field F(i+1) of a coding hierarchy. These active nodes are emitting conditioned feedback signals to the previous stage F(i) in this hierarchy. The total pattern E of these feedback signals represents the pattern that the active nodes in F(i+1) collectively expect to find across F(i) due to prior learning trials on which these nodes sampled the STM patterns across F(i) via an associative process akin to outstar
The Adaptive Self-Organization of Serial Order in Behavior
357
Figure 15. Reaction of attentional and orienting subsystems to an unexpected event: (a) A subliminal top-down expectancy E at F(1) is maintained by a supraliminal STM pattern across F(2). (b) The input pattern U nonspecifically sensitizes F(1) as it instates itself across F(1). The input also sends an activating signal to the nonspecific arousal source A. (c) The event U is unexpected because it mismatches E across F(1). The mismatch inhibits STM activity across F(1) and disinhibits A. This in turn releases a nonspecific arousal pulse that rapidly resets STM across F(2) before adventitious recoding of the LTM can occur, and drives a search of associative memory until a better match can be achieved.
learning. More precisely, an expectancy E is a vector E = (E_1, E_2, ..., E_n) such that E_k = Σ_j S_j z_jk, where S_j is the sampling signal emitted from v_j^(i+1) and z_jk is the LTM trace in the synaptic knobs of the pathway e_jk from v_j^(i+1) to v_k^(i).
I assume that the feedback signals E bias F(i) by subliminally activating the STM traces across F(i). Only a subliminal reaction is generated by the expectancy because the QT of F(i) is assumed to be controlled by a nonspecific shunting signal, as in equation (30). Although the expectancy E is active, the QT of F(i) is too high for E to cause a supraliminal STM reaction. A feedforward input pattern U from F(i-1) to F(i) has two effects on F(i). It delivers the specific input pattern U and activates the nonspecific shunting signal that lowers the QT of F(i). The conjoint action of U and E then determines the STM pattern elicited by U across F(i). It is worth noting at this point that any other input source capable of turning on the nonspecific shunting signal to F(i) could lower its QT and thereby bootstrap the expectancy signals into a supraliminal STM pattern even in the absence of a feedforward input pattern. I believe that fantasy activities such as internal thinking and the recall of music can be maintained in this fashion. We now consider how unexpected versus expected input patterns are differentially processed by this network. In Figure 15b, an unexpected input pattern U is delivered to F(i) from F(i-1). The pattern U is unexpected in the sense that the feedback template E and the unexpected input U are mismatched across F(i). The concept of mismatch is a technical concept whose instantiation is a property of the interactions within F(i) (Carpenter and Grossberg, 1986a, 1986b; Grossberg, 1980c, Appendix C). For present purposes, we need to know only that a mismatch between two input patterns at F(i) quickly attenuates STM activity across F(i) (Figure 15c). Just as the input pattern U activates a nonspecific gain control mechanism within F(i), it also delivers an input to the orienting subsystem A. Because each node in F(i) sends an inhibitory pathway to A (Figure 15b), suprathreshold STM activity anywhere across F(i) can inhibit the input due to U at A.
If mismatch across F(i) occurs, however, inhibitory signals from F(i) to A are attenuated. Consequently, U's input to A is capable of unleashing a nonspecific signal to F(i+1), which acts quickly to reset STM activity across F(i+1). One of the properties of STM reset is to selectively inhibit the active nodes across F(i+1) that read out the incorrect expectancy of which event was about to happen. STM reset thus initiates a search for a more appropriate code with which to handle the unexpected situation. An equally important effect of inhibiting the active nodes in F(i+1) is to prevent these nodes from being recoded by the incorrect pattern U at F(i). STM reset shuts off the active STM traces x_j^(i+1) across F(i+1) so quickly that the slowly varying LTM traces z_ij from F(i) to F(i+1) cannot be recoded via the LTM law.
The top-down expectancy thus buffers its activating chunks from adventitious recoding.
25. Expectancy Matching and Adaptive Resonance
In Figure 16, the pattern U to F(i) is expected. This means that the top-down expectancy E and the bottom-up pattern U match across F(i). This notion of matching is also a technical concept that is instantiated by interactions within F(i). When a pair of patterns match in this sense, the network can energetically amplify the matched pattern across F(i) (Figure 16b). These amplified activities cause amplified signals to pass from F(i) to F(i+1) (Figure 16c). The STM pattern across F(i+1) is then amplified and thereupon amplifies the feedback signals from F(i+1) to F(i). This process of mutual amplification causes the STM patterns across F(i) and F(i+1) to be locked into a sustained STM resonance that represents a context-sensitive encoding of the expected pattern U. The resonant STM patterns can be encoded by the LTM traces in the pathways between F(i) and F(i+1) because these STM patterns are not rapidly inhibited by an STM reset event. Because STM resonance leads to LTM encoding, I call this dynamical event an adaptive resonance.

26. The Processing of Novel Events: Pattern Completion versus Search of Associative Memory
A novel input pattern U can elicit two different types of network reaction, depending on whether U triggers STM resonance or STM reset. When a novel event U is filtered via feedforward F(i) → F(i+1) signaling, it may activate a feedback expectancy E via feedback F(i+1) → F(i) signaling which, although not the same pattern as U, is sufficiently like U to generate an approximate match across F(i). This can happen because the QT of F(i) determines a flexible criterion of how similar two patterns must be to prevent the inhibition of all STM activity across F(i). A large QT implies a strict criterion, whereas a low QT implies a weak criterion. If two patterns are matched well enough for some populations in F(i) to exceed the QT, then STM resonance will occur and the orienting reaction will be inhibited. Because U is filtered by feedforward signaling, and because E reads out the optimal pattern that the active chunks across F(i+1) have previously learned, E will deform the STM reaction across F(i) that would have occurred to U alone toward a resonant encoding that completes U using the optimal data E. This consequence of buffering LTM codes against adventitious recoding is, I believe, a major source of Gestalt-like pattern completion effects, such as phonemic restoration, word superiority effects, and the like. Grossberg and Stone (1986a) develop such concepts to analyse data about word recognition and recall. By contrast, if U is so different from E that the QT causes STM suppression across F(i), then the orienting subsystem will be activated and a rapid parallel search of associative memory will ensue. To understand how such a search is regulated, one needs to analyse how a nonspecific arousal pulse to F(i+1) can selectively inhibit only the active nodes across F(i+1) and spare inactive nodes for subsequent encoding of the unexpected event.
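The reset-and-search cycle can be caricatured as follows (my own simplification: binary patterns, ordering candidate nodes by filtered signal strength, and a scalar matching criterion standing in for the QT are all illustrative assumptions):

```python
import numpy as np

def search(U, templates, criterion):
    """Search coded nodes in order of filtered signal strength.
    A node whose template E matches U well enough resonates; a
    mismatch resets (inhibits) the node for the rest of the trial."""
    reset = set()
    order = np.argsort([-(E @ U) for E in templates])   # feedforward filter
    for j in order:
        if j in reset:
            continue
        E = templates[j]
        # match score: fraction of U's activity confirmed by E
        score = np.minimum(E, U).sum() / U.sum()
        if score >= criterion:
            return int(j), "resonance"     # STM resonance; LTM may encode
        reset.add(j)                       # rebound keeps this node inhibited
    return None, "uncommitted"             # no coded node fits the event

templates = [np.array([1, 1, 0, 0]), np.array([0, 0, 1, 1])]
U = np.array([1, 1, 1, 0])
lenient = search(U, templates, criterion=0.6)   # approximate match accepted
strict = search(U, templates, criterion=0.9)    # strict criterion: search fails
```

A low criterion (low QT) tolerates an approximate match and resonates; a high criterion (high QT) resets every coded node, leaving the event to be encoded by previously uncommitted nodes.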
This property is instantiated by expanding the design of F(i+1), as well as all other network levels that are capable of STM reset, in the following fashion. All nodes that have heretofore been posited in the competitive networks are on-cells; they are turned on by inputs. Now I supplement the on-cell competitive networks with opposing off-cell competitive networks such that offset of an input to an on-cell triggers a transient activation of its corresponding off-cell. Such an activation is called an antagonistic rebound.
Figure 16. Reaction of attentional and orienting subsystems to an expected event: (a) A subliminal top-down expectancy E at F(1) is maintained by a supraliminal STM pattern across F(2). (b) The input pattern U nonspecifically sensitizes F(1) as it instates itself across F(1). The input also sends an activating signal to the nonspecific arousal source A. (c) The event U is expected because it approximately matches E across F(1); that is, it falls within the hysteresis boundary defined by E. This match amplifies patterned STM activity across F(1) and F(2) into a resonant STM code capable of being encoded in LTM.
Antagonistic rebound at an off-cell in response to offset of an on-cell input can be achieved due to three mechanisms acting together: (1) All the inputs to both the on-cell channel and the off-cell channel are gated by slowly accumulating transmitter substances that generate output signals by being released at a rate proportional to input strength times the amount of available transmitter. (2) The inputs to on-cells and off-cells are of two types: specific inputs that selectively activate an on-cell channel or an off-cell channel, but not both, and nonspecific inputs that activate both on-cell and off-cell channels equally. (3) The gated signals in both the on-channel and off-channel compete before the net gated inputs activate the on-cell or the off-cell, but not both. The network module that carries out these computations is called a gated dipole (Figure 17). One proves that if a sufficiently large increment in nonspecific arousal occurs while an on-cell is active, this increment can cause an antagonistic rebound that rapidly shuts off the on-cell's STM trace by exciting the corresponding off-cell. This rebound is part of the STM reset event. The antagonistic rebounds in gated dipoles are due to the fact that unequal inputs to the on-cell and off-cell cause their respective transmitter gates to be depleted, or habituated, at unequal rates. When a rapid change in input patterning occurs, it is gated by the more slowly varying transmitter levels. A mathematical analysis shows that either a rapid offset of the specific on-cell input or a rapid increase of nonspecific arousal can cause an antagonistic rebound due to the imbalance in transmitter habituation across the on-cell and off-cell channels (Grossberg, 1972b, 1975, 1980c, 1981b, 1984a). Once a subfield of on-cells is inhibited by dipole rebounds, they remain inhibited for a while due to the slow recovery rate of the transmitter gates.
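A minimal simulation of these three mechanisms (my own Euler-integrated sketch; the rate constants, input sizes, and subtractive competition are all illustrative assumptions) exhibits both the habituating on-response and the antagonistic rebound at cue offset:

```python
import numpy as np

def gated_dipole(I=1.0, J=1.0, t_on=2000, t_off=500, dt=0.1):
    """Feedforward gated dipole. The on-channel receives the specific
    input J plus nonspecific arousal I; the off-channel receives I alone.
    Transmitters z1, z2 accumulate slowly and are released at a rate
    proportional to signal strength times available transmitter."""
    z1 = z2 = 1.0                     # transmitter gates, initially full
    accum, deplete = 0.01, 0.05       # slow recovery vs. release rates
    out = []
    for step in range(t_on + t_off):
        S_on = I + (J if step < t_on else 0.0)   # J shuts off at t_on
        S_off = I
        z1 += dt * (accum * (1.0 - z1) - deplete * S_on * z1)
        z2 += dt * (accum * (1.0 - z2) - deplete * S_off * z2)
        # gated signals compete subtractively; record net on-cell output
        out.append(S_on * z1 - S_off * z2)
    return np.array(out)

o = gated_dipole()
# While J is on, the on-channel wins but its output habituates; when J
# shuts off, the depleted on-gate z1 loses to the fresher off-gate z2,
# producing a transient antagonistic rebound in the off-channel, which
# then decays as the gates slowly equilibrate.
```

The slow recovery rate of the gates is what keeps rebounded nodes inhibited for a while, as described above.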
Only a subset of nodes in F(i+1) can therefore respond to the filtered signals from F(i) in the next time interval. If another mismatch occurs, more nodes in F(i+1) are inhibited. As the search continues, the normalized STM patterns across F(i+1) contract rapidly onto a final subset of F(i+1) nodes. The STM pattern across this final subset of nodes is used to condition the filter of its corresponding pathways or to stabilize the already conditioned pathways via an adaptive resonance. One of the intriguing facts about searching associative memory in this way is that transmitter habituation is one of the important mechanisms. Habituation acts in my theory to regulate active memory buffering and adaptive encoding processes; it is not just a passive result of "use," "fatigue," or other classical notions.

27. Recognition, Automaticity, Primes, and Capacity
These processes are reflected in many types of data. Analyses and predictions about some of these data are found in Grossberg (1980c, 1982b, 1982c). Carpenter and Grossberg (1986a, 1986b) describe extensive computer simulations of how adaptive resonance theory mechanisms can self-organize a self-stabilizing recognition code in response to an arbitrary list of input patterns. The following remarks summarize some of the other types of data that may be clarified by such processes. Recent studies of recognition memory point out: "Complex elaborate encoding ... can be utilized to enhance recognition only when the test conditions permit a reinstatement of the original encoding context" (Fisher and Craik, 1980, p.400). In a similar vein, other studies support the idea that "the direction of priming effects ... depend[s] upon the validity of the prime as a predictor of the probe stimulus" (Myers and Lorch, 1980, p.405). This type of effect holds at every level of reciprocal signaling in the encoding hierarchy, because a particular pattern of feedforward chunking is wed to a characteristic pattern of template feedback. An active pattern of template feedback leads to rapid resonant matching only when it meets with compatible feedforward input patterns. The resonant process is interpreted to be the attentive moment, or recognition event, that groups individual nodal activities into a unitary percept. The "priming"
Figure 17. Reaction of a feedforward gated dipole to phasic onset and offset of a cue: The phasic test input J and the arousal input I add in the on-channel, thereby activating the STM trace x_1. The arousal input I activates the STM trace x_2 in the off-channel. Since I + J > I, x_1 > x_2. Traces x_1 and x_2 elicit signals f(x_1) and f(x_2). Because x_1 > x_2, f(x_1) > f(x_2). Each signal is gated by a transmitter z_1 or z_2 (in the square synapses). Transmitters z_1 and z_2 are released at rates proportional to f(x_1)z_1 and f(x_2)z_2, respectively. The gated signals f(x_1)z_1 and f(x_2)z_2 in turn activate x_3 and x_4, respectively, and x_3 > x_4. Both x_3 and x_4 excite their own channel and inhibit the other channel. After competition acts, an output from the on-channel is elicited that habituates through time. A rapid offset of J causes x_1 = x_2 and f(x_1) = f(x_2). However, z_1 < z_2 due to greater prior depletion, or habituation, of the on-channel, and to the slow reaction rate of z_1 and z_2 relative to x_1 and x_2. Thus f(x_1)z_1 < f(x_2)z_2 and x_3 < x_4. After competition acts, the off-channel wins. Gradually z_1 = z_2 in response to the equal values of x_1 and x_2. Then f(x_1)z_1 = f(x_2)z_2, so the competition shuts off all dipole output. The same mechanism causes an off-rebound in response to a rapid decrement in I while J is active. In a feedback gated dipole, output from x_5 re-excites x_1, and output from x_6 re-excites x_2. In a dipole field, the positive feedback loops x_1 ↔ x_5 and x_2 ↔ x_6 form the on-centers of competitive shunting networks that join on-cells to on-cells and off-cells to off-cells. Such networks exhibit STM properties such as contrast enhancement and normalization modulated by the slow habituation of activated transmitter gates.
feedback template leads to enhanced recognition only if the "probe" input pattern is sufficiently similar to trigger a resonant match. This view of template matching and associative memory search suggests a different way to think about automatic versus capacity-limited processing than that of researchers like Norman and Bobrow (1975) or Posner and Snyder (1975). Capacity limitations alone do not determine whether very fast inhibition will slow down reaction times in a search situation. Mismatch can trigger rapid STM reset and associative search, leading to an increase in reaction time, under the same capacity limitations that would speed reaction time if a match were to occur. An increased reaction time is not due merely to a capacity limitation that excites two mutually inhibitory nodes, because mutual inhibition can subserve either a match or a mismatch. At bottom, the traditional discussion of increased reaction time due to capacity-limited inhibitory effects follows from an inadequate choice of the functional unit of network processing. The functional unit is not the activation of a single node, nor a "spreading activation" among individual nodes. Rather, it is a coherent pattern across a field of nodes. This viewpoint is compatible with the results of Myers and Lorch (1980, p.405), who showed, among other things: "Reaction time to decide that a sentence was true or false was longer if the preceding prime was a word that was unrelated to the probe than if the prime was the word 'blank' at prime-to-probe intervals as short as 250 msec." How can a more direct test of behaviorally observable reset and search routines be made? One possible way may be to adapt techniques for measuring the N200 and P300 evoked potentials to letter, word, and sentence recognition and verification tasks.
The theory suggests that every mismatch event will elicit the mismatch negativity component of the N200 at A, and that every subsequent nonspecific arousal burst will elicit a P300 at F(2) (Grossberg, 1984; Karrer, Cohen, and Tueting, 1984).

28. Anchors, Auditory Contrast, and Selective Adaptation
Numerous studies of contextual effects in vowel perception have attempted to distinguish between "feature detector fatigue" and "auditory contrast" explanations of how a categorical boundary shifts during a selective adaptation or anchoring experiment. Sawusch and Nusbaum (1979) have interpreted their results as favoring an auditory contrast explanation, because even widely spaced repetitions of an anchor vowel elicit a significant shift in the boundary. The mechanisms of template feedback, STM resonance, STM reset, and transmitter habituation may shed some further light on this discussion by suggesting some new experimental tests of these ideas. Sawusch and Nusbaum (1979, p.301) discuss auditory contrast in terms of "incorporating the influence of both information from immediately preceding stimuli (auditory memory) and prototypes in long-term memory into one unified auditory ground against which new stimuli are compared." I represent the "auditory memory" by a pattern reverberating in STM across a field F(i+1) of nodes, the "prototypes in long-term memory" by LTM traces in the active feedback template pathways from F(i+1) to F(i), and the "unified auditory ground" by a subliminal feedback expectancy E across F(i). This interpretation immediately raises several questions. Why does an input U that mismatches E cause a boundary shift away from the event represented by E? In the framework presented earlier, the answer is: The mismatch event actively inhibits the active nodes across F(i+1) which represent E. In the time interval after this STM reset event, the pattern U will be encoded by a renormalized field F(i+1) in which the nodes that encoded E remain inhibited. A similar combination of mismatch-then-reset has been used to discuss bistable visual illusions such as those that occur during the viewing of Necker's cube (Grossberg, 1980c). I suggest that a P300 evoked potential will occur at the moment of switching.
As in the discussion of matching probes to primes, a stronger anchoring effect may cause a faster reaction time when U equals E and a slower reaction time when U mismatches E. Another type of test would start with an event U that equals E and would gradually cause U to mismatch E using a temporally dense series of successive presentations of slightly changing U's. The hysteresis boundary that causes perseveration of the anchor percept may get broader as the anchoring effect is made stronger, even though a stronger anchoring effect causes a larger shift when a discrete event U mismatches E. A P300 may also occur when the hysteresis boundary is exceeded by U. Thereafter, the percept may again be shifted away from E. The latter test may be confounded by the fact that a dense series of U's may cause persistent STM reverberation of the anchor representation across F(i+1). Such a reverberation may progressively habituate the transmitter gates in the reverberating pathways. In this way, "fatigue" may enter even an auditory contrast explanation, albeit fatigue of a nonclassical kind. If significant habituation does occur, then the shift due to STM reset may become smaller as a function of longer storage in auditory memory, but this effect would be compensated for by a direct renormalization of the STM response of F(i+1) to U as a result of habituation. Such habituation effects may occur on a surprisingly long time scale because a slow transmitter accumulation rate is needed to regulate the search of associative memory. Sawusch and Nusbaum (1979) suggest that adaptation level theory (Helson, 1964; Restle, 1978) may be used as a mathematical framework to explain selective adaptation and anchoring effects. I believe that this is a correct intuition. Shunting competitive networks possess an adaptation level that is the basis for their matching properties (Grossberg, 1980c, Appendix C; 1983, Section 22).
A feedback template E has the effect of biasing the adaptation level and thereby producing a different reaction to a feedforward input pattern U than would otherwise occur. However, this property of shunting networks controls only one step in the network's total reaction to a shifted input. In this regard, Sawusch, Nusbaum, and Schwab (1980) show that anchoring by the vowel [i] (as in beet) or by [I] (as in bit) produces contrast effects in tests involving [i]-[I] vowel series by affecting different mechanisms. "Contrast effects for an [i] anchor were found to be largely the result of changes in sensitivity between various vowel pairs.... The [i] anchoring effect occurs prior to phonetic labeling. This is clearly the case, since [i] anchoring was found to increase discriminability within the [i] category.... The [i] anchor seems to alter or retune the prototype space." By contrast, "the [I]-anchor effects were largely the result of criterion shifts.... The auditory ground would reflect two sources of information: prototype information from long-term memory and certain information from the stimulus being presented.... Some form of auditory memory which contains information about the quality of the stimulus may underlie the changes in criterion for [I] anchoring" (Sawusch et al., 1980, p.431). Within the present theory, the changes in discriminability whereby the [i] anchor "retunes the prototype space" may be interpreted as an [i]-induced shift in some of the LTM vectors that form part of the auditory adaptive filter (Sections 22-23). This LTM tuning process occurs prior to the stage of "phonetic labeling," or STM competition in auditory memory, and changes the outcome of the phonetic competition by altering the pattern of filtered inputs on which the competition feeds. Both the [i] vowel and the [I] vowel can bias the adaptation level by reading their subliminal feedback templates out of auditory memory.

This type of top-down bias can create criterion shifts without redistributing the bottom-up LTM vectors in the adaptive filter that control relative sensitivity. If no change in the adaptive filter takes place, interference with auditory memory will reduce contrast effects due to anchoring. However, if an adaptive filter shift has occurred prior to interference with auditory memory, a large anchoring effect can still occur due to the direct effect on each trial of the shifted bottom-up filtering and top-down template read-out. Sawusch et al. (1980)
demonstrate an analogous effect. They partially interfere with auditory memory by placing [i] and [I] in CVC syllables, such as [sis] and [sIs]. In this case, the anchoring effect of [sis] is significantly greater than that of [sIs].
29. Training of Attentional Set and Perceptual Categories
Studdert-Kennedy (1980) has reviewed data that are compatible with this interpretation of the auditory ground. Spanish-English bilinguals can shift their boundaries by a change in language set within a single test (Elman, Diehl, and Buchwald, 1977). This shift can be formally accomplished in a network by activating nodes across F(i+1) that read out different feedback templates. Training enables subjects to shift categorical boundaries at will, thereby suggesting that "utilization of acoustic differences between speech stimuli may be determined primarily by attentional factors" (Carney, Widen, and Viemeister, 1977, p.969). Such training may tune both the adaptive filters and the feedback templates of the subjects, much as American English speakers perceive an [r] to [l] continuum categorically, whereas Japanese speakers do not (Miyawaki, Strange, Verbrugge, Liberman, Jenkins, and Fujimura, 1975).
30. Circular Reactions, Babbling, and the Development of Auditory-Articulatory Space
Using the operations that have been sketched above, one can quantitatively discuss how neural networks initiate the process whereby their sensory and motor potentialities are integrated into a unitary system. The main concepts are that endogenously activated motor commands generate patterns of sensory feedback; that internal representations of the activated sensory and motor patterns are synthesized (learned, chunked) by adaptive tuning of coarsely prewired filters (feature detectors, positional gradients); and that the sensory internal representations are joined to their motor counterparts via a learned associative map. The sensory internal representations can also be tuned by sensory inputs other than sensory feedback as soon as external sensory inputs can command the attention of the sensory modality (for example, lower its QT; Section 18). These concepts were first used in real-time network models to discuss how motivated and attentive behavior can be self-organized in a freely moving animal (Grossberg, 1971, 1972a, 1972b, 1975). Later, they were used to show how cognitive, attentive, and motivational mechanisms can interact to generate a consistent goal-oriented sensory-motor plan (Grossberg, 1978e; reprinted in Grossberg, 1982d). An earlier version of this approach is embodied in Piaget's concept of a circular reaction (Piaget, 1963). Fry (1966) emphasized the importance of the infant's babbling stage for the later development of normal speech (Marvilya, 1972), notably for the tuning of prewired adaptive sensory filters. The work of Marler and his colleagues (Marler, 1970; Marler and Peters, 1981) on the development of birdsong in sparrows has recognized the relevance of self-generated auditory feedback to the development of normal adult song.
Similarly, the motor theory of speech perception recognized the intimate relationship between acoustic encoding and articulatory requirements (Cooper, 1979; Liberman, Cooper, Shankweiler, and Studdert-Kennedy, 1967; Liberman and Studdert-Kennedy, 1978; Mann and Repp, 1981; Repp and Mann, 1981; Studdert-Kennedy, Liberman, Harris, and Cooper, 1970). Studdert-Kennedy (1975, 1980) has written about this approach with particular eloquence: "Only by carefully tracking the infant through its first two years of life shall we come to understand adult speech perception and, in particular, how speaking and listening establish their links at the base of the language system" (Studdert-Kennedy, 1980, p.45). "The system follows the moment-to-moment acoustic flow, apprehending an auditory 'motion picture,' as it were, of the articulation" (p.55).
This and the next section sketch a framework for analysing how individual sensory and motor patterns are integrated. After that, I consider the deeper question of how temporal sequences of patterns are processed. To carry out this discussion, I use the notation F_M^(i) for the ith motor field in a coding hierarchy, and F_S^(i) for the ith sensory field in a coding hierarchy. The example of babbling in an infant can be used to intuitively fix ideas. Suppose that an activity pattern across F_M^(1) represents a terminal motor map (TMM) of a motor act. Such a map specifies the terminal lengths of target muscles. It is often organized in a way that reflects the agonist-antagonist organization of these muscles. Suppose that during a specified time interval, a series of TMMs are endogenously activated across F_M^(1), analogous to the babbling of simple sounds. The execution of such a TMM elicits sensory feedback, analogous to a sound, which is registered as an input pattern across F_S^(1) (Figure 18a). As these unconditioned events are taking place, they are accompanied by the following adaptive reactions. The active TMM is chunked by adaptive filtering from F_M^(1) to F_M^(2). This motor code, in turn, learns its corresponding TMM via the conditioning of its feedback template from F_M^(2) to F_M^(1). As this learning takes place, the corresponding sensory feedback pattern across F_S^(1) is chunked by adaptive filtering from F_S^(1) to F_S^(2). Its sensory code learns the corresponding pattern of sensory features across F_S^(1) by the conditioning of its template feedback from F_S^(2) to F_S^(1) (Figure 18b). Due to the simultaneous activity of the sensory and motor representations in F_S^(2) and F_M^(2), a map from F_S^(2) to F_M^(2) can be self-organized by associative pattern learning (Figure 18c). One of the quantitative issues of the theory concerns how diffuse or sharply tuned this map should be (Grossberg, 1978e, Sections 55-58). The above network matches auditory to articulatory requirements in several ways.
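The associative map of Figure 18c can be sketched as outstar pattern learning between simultaneously active sensory and motor chunks (my own toy: the one-hot chunks, learning rate, and epoch count are illustrative assumptions):

```python
import numpy as np

def associate(sensory_chunks, motor_chunks, lr=0.3, epochs=20):
    """Learn an associative map F_S(2) -> F_M(2) by pattern learning:
    each active sensory chunk samples the concurrently active motor
    pattern (outstar learning), gated by presynaptic activity."""
    n_s = sensory_chunks.shape[1]
    n_m = motor_chunks.shape[1]
    W = np.zeros((n_s, n_m))
    for _ in range(epochs):
        for s, m in zip(sensory_chunks, motor_chunks):
            # only active sensory nodes (s_i > 0) update their outstars
            W += lr * s[:, None] * (m[None, :] - W)
    return W

# one-hot chunks: sound category k co-occurs with motor command k
S = np.eye(3)
M = np.eye(3)
W = associate(S, M)
heard = np.array([0.0, 1.0, 0.0])    # recognizing a sound...
recalled_command = heard @ W         # ...reads out its motor command
```

Because the sensory and motor chunks are active at the same time, the map needs no teacher: the common coin of pattern learning links whatever representations happen to co-occur.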
It preferentially tunes the sensory “feature detectors” that are activated most often by spoken sounds (Section 22). It also maps the tuned internal representations of these sounds onto the motor commands that are capable of eliciting the sounds. The construction accomplishes this by associatively sampling the motor commands in the form that succeeded in executing the sounds through endogenous activation. Although the internal representations of sounds and motor commands may differ in many significant ways, they can be joined together by the common coin of pattern learning in associative networks. These patterns are the still pictures in the “motion picture” described by Studdert-Kennedy (1980). The flow of activity in such a network is circular. It proceeds from F_M^(1) to F_S^(1) via external sensory feedback, and in the reverse direction by a combination of adaptive filtering, associative mapping, and conditionable template feedback. This is a circular reaction, network-style. The completion of the circle by the internal network flow enables simple imitation to be accomplished via the learned map.
31. Analysis-By-Synthesis and the Imitation of Novel Events

After babbling stops, how does language continue to develop? In particular, how does a network learn to recognize and recall novel sounds other than those learned during the babbling phase? Part of this capability is built into the map in equation (44) in
The Adaptive Self-Organization of Serial Order in Behavior
[Figure 18: block diagram with labels ENDOGENOUS INPUT and SENSORY FEEDBACK (dashed pathway).]
Figure 18. A circular reaction that matches acoustic encoding to articulatory requirements: (a) Endogenous motor commands across F_M^(1) elicit babbled sounds that are received as auditory feedback patterns across F_S^(1). (b) The motor commands and the auditory feedback patterns are chunked at F_M^(2) and F_S^(2), via bottom-up adaptive filtering in pathways F_M^(1) → F_M^(2) and F_S^(1) → F_S^(2), respectively. These chunks learn their generative motor commands and auditory patterns via top-down expectancy learning in pathways F_M^(2) → F_M^(1) and F_S^(2) → F_S^(1), respectively. (c) The sensory and motor chunks are joined together by an associative map F_S^(2) → F_M^(2). The learned map F_S^(1) → F_S^(2) → F_M^(2) → F_M^(1) completes the circular reaction and enables novel sounds to be imitated and then chunked by the same mechanisms.
a way that sheds light on the many successes of the analysis-by-synthesis approach to speech recognition (Halle and Stevens, 1962; Stevens, 1972; Stevens and Halle, 1964). The structure of the map suggests that motor theory and analysis-by-synthesis theory have probed different aspects of the same underlying physical process.
Suppose that a novel sound is received by F_S^(1). This sound is decomposed, or analysed, into familiar sound components in the following way. The adaptive filter from F_S^(1) to F_S^(2) has been tuned by experience in such a way that the dot products corresponding to familiar sound patterns elicit larger inputs across F_S^(2) than do unfamiliar sound patterns, as in Section 23. This conclusion must be tempered by the fact that filter tuning of the LTM traces is driven by the initial filtering of sound patterns by prewired positional gradients, as in equation (42). The tuned adaptive filter analyses the novel sound into a weighted combination of familiar sounds, where the weights correspond to the relative activations of representational nodes across F_S^(2). Suppose that the associational map from F_S^(2) to F_M^(2) is diffuse. In this case the spatial pattern of weights that represent the novel sound is relayed as signal strengths from F_S^(2) to F_M^(2). Each familiar motor command at F_M^(2) is activated with an intensity corresponding to the size of the signal that it receives. All of the excited motor commands read out their TMMs to F_M^(1) with relative intensities corresponding to the relayed weights. The total read-out of familiar motor commands is then synthesized into a novel TMM across F_M^(1). The net effect of this analysis-by-synthesis process is to construct a novel TMM across F_M^(1) in response to a novel sound at F_S^(1). The novel TMM needs to possess the following continuous mapping property: It should elicit a sound that is more similar to the novel sound than to any of the familiar sounds in the network’s repertoire. To achieve this property, continuous changes in the TMMs across F_M^(1) need to correspond to continuous changes in the auditory feedback patterns across F_S^(1). The continuous mapping property is most easily achieved by organizing auditory representations and motor representations in a topographic fashion.
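The analysis-by-synthesis step can be sketched numerically. The following toy computation (my illustration, not the author's simulation; the sound vectors and TMMs are made-up values) decomposes a novel sound into dot-product weights on familiar sound chunks and relays those weights through a diffuse associative map to synthesize a novel TMM:

```python
# Toy sketch of analysis-by-synthesis: analyse a novel sound into familiar
# chunks by dot products (the tuned adaptive filter), then synthesize a
# novel TMM as the correspondingly weighted read-out of familiar TMMs.
# The vectors below are hypothetical, chosen only for illustration.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Familiar sound patterns across F_S^(1) and the TMMs across F_M^(1)
# that were learned to produce them during babbling.
sounds = {"ba": [1.0, 0.0, 0.0, 0.0], "da": [0.0, 1.0, 0.0, 0.0]}
tmms   = {"ba": [0.8, 0.2],           "da": [0.2, 0.8]}

def synthesize(novel_sound):
    # Analysis: relative activation of each familiar sound chunk.
    acts = {name: max(dot(novel_sound, s), 0.0) for name, s in sounds.items()}
    total = sum(acts.values())
    weights = {name: a / total for name, a in acts.items()}
    # Synthesis: weighted read-out of the familiar motor commands.
    n = len(next(iter(tmms.values())))
    return [sum(weights[name] * tmms[name][i] for name in tmms) for i in range(n)]

novel = [0.6, 0.4, 0.0, 0.0]   # a sound between "ba" and "da"
tmm = synthesize(novel)        # lies between the two familiar TMMs
```

Because the synthesized TMM is a convex combination of familiar TMMs, it interpolates between them, which is the continuous mapping property in miniature.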
If a novel TMM possessing the continuous mapping property is activated at F_M^(1) while the novel sound is still represented at F_S^(1), the network can build internal representations for both the sound and the TMM, as well as an associative map between these representations, just as it did for babbled sounds. As more internal representations and maps are built up, the network’s ability to imitate and initiate novel sounds will become progressively refined. The metrical distances between a novel sound pattern or TMM pattern and the familiar sound or TMM patterns into which it is decomposed can be used as an intrinsic measure of the phenomenal complexity, or nonautomaticity, of a new behavior relative to a network code of familiar behaviors.

32. A Moving Picture of Continuously Interpolated Terminal Motor Maps: Coarticulation and Articulatory Undershoot
A topographic structuring of motor representations can be achieved by organizing these representations into agonist-antagonist pairs. The relative activations of such pairs can be signaled independently as part of a larger spatial pattern, much as individual articulators in the vocal tract, such as the lips, tongue, and vocal cords, can move with a large degree of independence (Darwin, 1976). A temporal sequence of TMMs from such a motor field is expressed as a sequence of spatial pattern outputs (Section 5). Each spatial pattern controls a motor synergy of articulatory motions to intended positional targets (Section 6). The intrinsic organization of the articulatory system continuously interpolates the motion of the articulators between these targets.
The timing of component subpatterns within a sequence of spatial pattern TMMs can cause coarticulation to occur (Fowler, 1977). A high rate of emitting TMMs due, say, to an increase in the gain of the read-out system by a nonspecific arousal increment (Section 7), can cause the next TMM to be instated before the last target has been attained. Rapid speech can thus be associated with articulatory undershoot and a corresponding acoustic undershoot (Lindblom, 1963; Miller, 1981).

33. A Context-Sensitive STM Code for Event Sequences
I now sketch some results about how sequences of events can be performed out of STM after a single presentation, and of how sequences of events can generate context-sensitive representations in LTM that are capable of accurately controlling planned, or predictive, behavior. These properties can be achieved by parallel mechanisms. No serial buffer is necessary. Several classes of phenomena have been analysed using these concepts, notably phenomena concerning free recall, letter and word recognition, and skilled motor behavior (Grossberg, 1978a, 1978e). A number of other authors have also discussed these phenomena using a network approach (e.g., MacKay, 1982; McClelland and Rumelhart, 1981; Norman, 1982; Rumelhart and McClelland, 1982; Rumelhart and Norman, 1982). Although I am in complete sympathy with these contributions, I believe that they have overlooked available principles and mechanisms that are essential for achieving better understanding of their targeted data. In the next few sections, I focus my discussion on how the functional unit of speech perception is self-organized by “an active continuous process” (Studdert-Kennedy, 1980), notably how backward effects, time-intensity tradeoff effects, and temporal integration processes can alter a speech percept (Miller and Liberman, 1979; Repp, 1979; Repp, Liberman, Eccardt, and Pesetsky, 1978; Schwab, Sawusch, and Nusbaum, 1981) in a manner that is difficult to explain using a computer model of speech perception (Levinson and Liberman, 1981). These ideas are supplemented by some mechanisms helpful in the analysis of rhythmic substrates of speech, skilled motor control, and musical performance (Fowler, 1977; Rumelhart and Norman, 1982; Shaffer, 1982; Studdert-Kennedy, 1980).

34. Stable Unitization and Temporal Order Information in STM: The LTM Invariance Principle

For simplicity, I begin by supposing that unitized item representations v_1^(3), v_2^(3), ..., v_n^(3) in a field F(3) are sequentially activated by a list of events r_1, r_2, ..., r_n. To fix ideas, the reader may suppose that the unitized representations are generated by adaptive filtering from either F_S^(2) or F_M^(2), since similar temporal order mechanisms are used in both sensory and motor modalities (Kimura, 1976; Kinsbourne and Hicks, 1978; Semmes, 1968; Studdert-Kennedy, 1980).
Suppose that a certain number of nodes v_1^(3), v_2^(3), ..., v_i^(3) have been activated by the sublist r_1, r_2, ..., r_i and therefore have active STM traces at a given time t_i. At this moment, the set of active STM traces defines a spatial pattern. Had the same sublist been presented in a different order, a different STM pattern would exist across the same set of nodes. Thus the active STM pattern encodes temporal order information across the item representations. To achieve a correct read-out of temporal order information directly from STM, a primacy effect in STM,

x_1^(3) > x_2^(3) > ... > x_i^(3),   (45)

is desired, as in equation (25). Section 12 shows how a temporal series of recency effects in STM can elicit a learned read-out from LTM of a primacy effect in STM. We now
consider how a primacy effect in STM can sometimes be caused directly by experimental inputs, yet also how a recency effect in STM can sometimes be caused by experimental inputs, thereby leading to order errors in the read-out of items from STM. To understand this issue, I abandon all homunculi and consider how the evolving STM pattern can be encoded in LTM in a temporally stable fashion by the adaptive filter from F(3) to F(4). This adaptive filter groups together, or unitizes, sublists of the items that are simultaneously stored in STM at F(3). The STM pattern at F(4) codes as unitized sublist chunks those item groupings that are salient to the network when a prescribed list of items is stored at F(3) (Figure 19). Thus I now consider laws for storing individual items in STM at F(3) which enable the LTM unitization process to proceed in a stable fashion within the adaptive filter from F(3) to F(4). In short, I constrain STM to be compatible with LTM. This is a self-organization approach to the unitization and temporal order problems that are invisible to a performance-theoretic approach. It turns out that a shunting competitive network of a specialized design for F(3) does the job. Two considerations motivate this design. Once a sequence r_1, r_2, ..., r_i has already been presented, its STM pattern represents “past” order information. Presenting a new item r_{i+1} can alter the total pattern of STM across F(3), but I assume that this new STM pattern does not cause LTM recoding of that part of the pattern which represents past order information. New events are allowed to weaken the influence of codes that represent past order information but not to deny the fact that the past event occurred. This hypothesis prevents the LTM record of past order information from being obliterated by every future event that happens to occur.
This idea can be stated in a related way that emphasizes the possible destabilizing effects of new events when there are no homunculi present to beg the question. Every subsequence of the sequence r_1, r_2, ..., r_i is a perfectly good sequence in its own right. In principle, all possible subsequences can be adaptively coded by F(3) → F(4) pathways. How can the STM activities across F(3) be chosen so that the relative activities of all possible filterings of a past event sequence be left invariant by future events? These constraints lead to the LTM invariance principle: The spatial patterns of STM across F(3) are generated by a sequentially presented list in such a way as to leave the F(3) → F(4) codes of past event groupings invariant, even though the STM activations caused across F(3) and F(4) by these past groupings may change through time as new items activate
F(3). This principle is instantiated as follows. To simplify the discussion, let the feedforward signals from F(3) to F(4) (but not the internal feedback signals within F(3) that control contrast enhancement and normalization) be linear functions of the STM activities across F(3). At time t_i, the STM pattern

x^(3)(t_i) = (x_1^(3)(t_i), x_2^(3)(t_i), ..., x_i^(3)(t_i))   (46)

is adaptively filtered by the LTM vectors z_j of all nodes v_j^(4) in F(4). By the LTM invariance principle, the relative sizes of all the dot products S_j(t_i) = x^(3)(t_i) · z_j(t_i) should not change when r_{i+1} occurs. In other words,

x^(3)(t_{i+1}) · z_j(t_i) = w_{i+1} x^(3)(t_i) · z_j(t_i)   (47)

for all i and j, where w_{i+1} is a proportionality constant that is independent of j. The LTM invariance principle thus implies that, after the STM traces x_1^(3), x_2^(3), ..., x_i^(3) are
[Figure 19: block diagram including a visual object recognition system, motor STM, feature fields, inputs and outputs, and self-generated auditory feedback.]

Figure 19. A macrocircuit governing self-organization of recognition and recall processes: Auditorily mediated language processes (the F_S^(i)), visual recognition processes (V*), and motor control processes (the F_M^(j)) interact internally via conditionable pathways (black lines) and externally via environmental feedback (dotted lines) to self-organize the various processes which occur at the different network stages.
              x_1^(3)          x_2^(3)        x_3^(3)      x_4^(3)
after r_1:    μ_1              0              0            0
after r_2:    μ_1 w_2          μ_2            0            0
after r_3:    μ_1 w_2 w_3      μ_2 w_3        μ_3          0
...

Table 1. The LTM invariance principle constrains the STM activities of sequentially activated item representations.

excited by the items r_1, r_2, ..., r_i, they thereafter undergo proportional changes. The STM traces are shunted by multiplicative factors w_{i+1}, w_{i+2}, ... that are independent of i.
Table 1 describes rules for generating these changes. In the Table, the ith item r_i is instated in STM at v_i^(3) with activity μ_i. At every successive item presentation, all past STM traces are simultaneously shunted by the amounts w_{i+1}, then w_{i+2}, and so on. The STM activity of the ith item r_i after r_j occurs (i < j) is thus

x_i^(3)(t_j) = μ_i ∏_{k=i+1}^{j} w_k.   (48)
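The incremental storage rule of Table 1 and the closed form of equation (48) can be cross-checked with a few lines of code (a sketch under the stated notation; the numerical values are arbitrary):

```python
# Minimal check that the incremental storage rule of Table 1 --
# instate item r_j with activity mu_j, then shunt every past trace by w_j --
# reproduces the closed form of equation (48).

def incremental(mus, ws):
    """ws[j] is the shunting weight w_j applied when item r_j arrives (j >= 2)."""
    x = []
    for j, mu in enumerate(mus, start=1):
        if j >= 2:
            x = [trace * ws[j] for trace in x]   # shunt all past traces by w_j
        x.append(mu)                             # instate the new item
    return x

def closed_form(mus, ws):
    j = len(mus)
    out = []
    for i in range(1, j + 1):
        trace = mus[i - 1]
        for k in range(i + 1, j + 1):            # equation (48): mu_i * w_{i+1}...w_j
            trace *= ws[k]
        out.append(trace)
    return out

mus = [0.5, 0.4, 0.3]       # arbitrary attention weights mu_i
ws = {2: 0.7, 3: 0.9}       # arbitrary shunting weights w_k
for a, b in zip(incremental(mus, ws), closed_form(mus, ws)):
    assert abs(a - b) < 1e-12
```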
It remains to be shown how the shunting parameters w_k can be expressed in terms of the initial STM activities μ_i, where μ_i measures the amount of attention that is paid to r_i when it is first stored in STM. I accomplish this by using the fact that every shunting competitive network exhibits a normalization property to impose the following normalization rule. The total STM activity across F(3) after i items have been presented is

S_i = μ_1 φ_i + M(1 − φ_i).   (49)

In equation (49), φ_i is a positive decreasing function of i with φ_1 = 1 and lim_{i→∞} φ_i = 0. By equation (49), S_i grows from μ_1 toward its asymptote M as more items are stored.
The load parameters φ_i estimate how close F(3) is to saturating its total capacity. The load parameter φ_i also estimates how close v_i^(3) is to the active item representations v_1^(3), v_2^(3), ..., v_{i−1}^(3) of previous events. A relatively large decrease of φ_i below φ_{i−1} means that v_i^(3) gets activated with relatively little competition from previous items, due to the fact that r_i is represented in a different region of F(3) than previous events. As more events are represented within F(3), all regions of F(3) become densely activated; hence lim_{i→∞} φ_i = 0. The special case φ_i = β^{i−1}, where 0 < β < 1, represents a field F(3) whose activated item representations are uniformly spaced with respect to each other. In such a “homogeneous” field, φ_i φ_{i−1}^{-1} = β, which is independent of i. By equation (48), the total STM activity also equals

S_i = Σ_{j=1}^{i} x_j^(3)(t_i) = Σ_{j=1}^{i} μ_j ∏_{k=j+1}^{i} w_k.   (50)
Equations (49) and (50) for S_i can be recursively identified to prove that the shunting weights satisfy the equation

w_k = (S_k − μ_k) S_{k−1}^{-1},   (51)

k = 1, 2, .... By equations (48) and (51),

x_i^(3)(t_j) = μ_i ∏_{k=i+1}^{j} (S_k − μ_k) S_{k−1}^{-1}.   (52)

Equation (52) characterizes STM across F(3) for all time in terms of the attention paid to the items when they are stored (μ_i), the STM capacity of the network (M), and the load parameters (φ_i). Equation (52) can be rewritten in a way that suggests the relevance of probabilistic ideas to the STM temporal order problem. In terms of the notation P_i(t_j) = x_i^(3)(t_j) S_j^{-1} and p_i = μ_i S_i^{-1}, equation (52) becomes

P_i(t_j) = p_i ∏_{k=i+1}^{j} (1 − p_k).   (53)
The STM patterns that evolve under the law in equation (52) have been worked out in a number of cases (see Grossberg, 1978a, 1978e, Section 26). It is readily shown that a primacy effect often occurs in STM when a short subsequence of the list activates F(3), but that this primacy effect is converted into an STM bow (primacy and recency effect) as more items are presented. For sufficiently long lists, the recency effect dominates the STM pattern. Multimodal bows, as in von Restorff STM effects, can also be generated under special circumstances. All of the equations in this section have obvious generalizations to the case in which each item is distributed over many nodes. This is true because the shunting operations on the past field and the STM capacity of the network do not depend on how many nodes subserve each item representation. The same is true of the equations in the next section.
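These regimes can be reproduced with a small simulation of equations (48), (49), and (51) (my sketch, not the author's code; it assumes equal attention μ_i = μ to every item and a homogeneous field φ_i = β^{i−1}, with illustrative parameter values):

```python
# Sketch: simulate the STM patterns defined by equations (48), (49), and
# (51), assuming equal attention mu to every item and a "homogeneous"
# field with load parameters phi_i = beta**(i-1). Parameter values are
# illustrative choices, not taken from the text.

def stm_pattern(j, mu=0.2, M=2.0, beta=0.5):
    """STM traces x_1 ... x_j across F(3) after items r_1 ... r_j are presented."""
    phi = [beta ** k for k in range(j)]            # phi_1 = 1, decreasing
    S = [mu * p + M * (1.0 - p) for p in phi]      # equation (49): total STM activity
    w = [None, None]                               # shunting weights w_k, k >= 2
    for k in range(1, j):
        w.append((S[k] - mu) / S[k - 1])           # equation (51)
    x = []
    for i in range(1, j + 1):                      # equation (48): mu * w_{i+1}...w_j
        trace = mu
        for k in range(i + 1, j + 1):
            trace *= w[k]
        x.append(trace)
    return x

short = stm_pattern(4)   # primacy gradient: x_1 > x_2 > x_3 > x_4
long_ = stm_pattern(6)   # bow: minimum near the list middle, recency at the end
```

With these parameters a four-item list is stored with a primacy gradient, while a six-item list produces a bow whose minimum falls near the list middle and whose later items rise again.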
35. Transient Memory Span, Grouping, and Intensity-Time Tradeoffs
Some remarks may help the reader to think about these STM results before I consider their implications for what is encoded in LTM. Given a fixed choice of the attentional sequence μ_1, μ_2, ...; the capacity M; and the inhibitory design φ_1, φ_2, ...; it follows that if r_1, r_2, ..., r_K is the longest sublist that causes a primacy effect in STM, then every longer sublist r_1, r_2, ..., r_K, r_{K+1}, ..., r_i will cause an STM bow at item r_K. I call K the transient memory span (TMS) of the list. The TMS is the longest sublist that can be directly read out of STM in the correct order. In Grossberg (1978e), I proved under weak conditions that the TMS is always shorter than the more familiar immediate memory span (IMS), which also benefits from LTM read-out. A typical choice of these parameters is TMS ≈ 4 and IMS ≈ 7 (Miller, 1956). One way to guarantee a correct read-out from STM without requiring template feedback from LTM is to rehearse the list items in subsequences, or groups, of a length no greater than the TMS.
In assigning the values μ_i and w_k to the STM traces x_i^(3), I have tacitly assumed that the times t_i at which items r_i are presented are sufficiently separated to enable these values to reach asymptote. If presentation rates are rapid, then only partial activations may occur, leading to weights of the form

μ̂_i = μ_i(1 − e^{−λ_i I_i}),   (54)

where λ_i is the rate of activation and I_i = t_i − t_{i−1}. Then the STM traces become

x_i^(3)(t_j) = μ̂_i ∏_{k=i+1}^{j} ŵ_k,   (55)

where by equation (51),

ŵ_k = (Ŝ_k − μ̂_k) Ŝ_{k−1}^{-1}.   (56)
Due to equation (54), an intensity-time tradeoff, or Bloch's law (Repp, 1979), holds that may alter the STM pattern across F(3) under conditions of rapid presentation. Such a tradeoff can limit the accuracy with which temporal order information is encoded in STM, most obviously by preventing some items from being stored in STM at all because they receive inadequate activation to exceed the network QT.

36. Backward Effects and Effects of Rate on Recall Order
Two more subtle interactions of intensity and rate are worth noting. If items are rapidly presented but some are more drawn out than others, then the relative sizes of the STM activities can be changed. By changing the STM patterns across F(3), the STM pattern across F(4) that is caused by the adaptive filter F(3) → F(4) can also be changed. This STM pattern determines item recognition. Thus a change in rate can cause a contextually induced change in perception. In examples wherein items are built up from consonant and vowel sequences, a relative change in the duration of a later vowel may thus alter the perception of a prior consonant (Miller and Liberman, 1979; Schwab, Sawusch, and Nusbaum, 1981). Such examples support the hypothesis that STM patterns of temporal order information over item representations control network perception, not activations of individual nodes. A uniform but rapid activation rate can alter both the items that are recalled and the order in which they are recalled. Suppose that the network is instructed to pay attention to a list during an attentional window of fixed duration. Whereas a slower
presentation rate may allow a smaller number of items to be processed during this duration, a faster presentation rate may allow a larger number of items to be processed. In the former case, a primacy effect in STM may be encoded; hence correct read-out of order information from STM is anticipated. In the latter case, a bow in STM may be encoded. A fast rate may increase the number of items processed and thereby cause an STM bow in which items near the list middle are recalled worst (Grossberg, 1978a). A fast rate may also cause an STM bow in processing a fixed number of items if attention must be switched to the items as they are presented. Then the items near the list beginning and end may be recalled worst (Grossberg and Stone, 1986b; Reeves and Sperling, 1986; Sperling and Reeves, 1980).
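The intensity-time tradeoff behind these rate effects can be sketched numerically, assuming (as one natural instantiation of partial activation, not the author's stated formula) that a stored weight approaches its asymptote μ_i exponentially at rate λ_i over the presentation interval I_i:

```python
# Minimal sketch of a Bloch-type intensity-time tradeoff, assuming an
# exponential approach to asymptote: weight = mu * (1 - exp(-lam * I)).
# This functional form is my assumption for illustration.
import math

def weight(mu, lam, interval):
    return mu * (1.0 - math.exp(-lam * interval))

# The weight depends on rate and duration only through their product, so
# doubling the activation rate while halving the presentation interval
# leaves the stored STM weight unchanged.
w_slow = weight(1.0, lam=2.0, interval=0.4)
w_fast = weight(1.0, lam=4.0, interval=0.2)
assert abs(w_slow - w_fast) < 1e-12

# Very brief presentations may leave the weight below a storage threshold
# (the network QT), so the item is never stored in STM at all.
assert weight(1.0, lam=2.0, interval=0.01) < 0.05
```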
37. Seeking the Most Predictive Representation: All Letters and Words are Lists

The LTM invariance principle indicates how a competitive shunting network can instate temporal order information in STM without destabilizing the LTM filters that learn from the STM patterns. We now ask how the outputs from all of these filters are interpreted at the next processing stage F(4). How does F(4) know which of its filtered inputs represent reliable data on which to base its output decisions? How does F(4) select the codes for those sublists across F(3) that are most predictive of the future? How does F(4) know how to automatically group, or parse, the total event list represented across F(3) into sublists that have the best a priori chance of predicting the future within the context defined by the unique past represented by the list? The next few sections indicate how to design F(4) so that its best predictive sublist chunks are assigned the greatest STM activity; how these most predictive chunks are differentially tuned by adaptive filtering and differentially gain control of predictive commands; how less predictive chunks are rapidly masked by more predictive chunks, thereby preventing the less predictive chunks from interfering with performance and enabling them to remain uncommitted by learning until they are unmasked in a different context where they are better predictors; how the masking due to predictive sublist chunks compresses the LTM code, computes a “Magic Number 7” (Miller, 1956), and changes the time scale of STM reset, and thus of LTM prediction, within the subfield of unmasked chunks; how the predictive recognition chunks remain uninhibited by rehearsal, since otherwise they could not sample the sequences to be learned and recalled; and how the predictive chunks can be directly inhibited only by other predictive chunks, say those activated by new sensory feedback, or by nonspecific gain changes due to attentional shifts.
The design of this field thus addresses the fundamental question of how “our conscious awareness ... is driven to the highest level present in the stimulus” (Darwin, 1976). In contrast to the distinction made by McClelland and Rumelhart (1981) between a separate letter level and a word level (Section 2), I suggest that “all letters and words are lists,” indeed that all unitized events capable of being represented in F(4) exist on an equal dynamical footing. This conclusion clarifies how changes in the context of a verbal item can significantly alter the processing of that item, and why the problem of identifying functional units has proved to be so perplexing (Studdert-Kennedy, 1980). In F(4), no common verbal descriptor of the functional unit, such as phoneme or syllable or letter or word, has a privileged existence. Only the STM patterns of unitized chunks that survive the context-sensitive interaction between associative and competitive rules have a concrete existence. These rules instantiate principles of predictive stability that transcend the distinctions of lay language. Before describing these rules, I should state what they do not imply. Despite the fact that “all letters and words are lists,” a subject can be differentially set to respond to letters rather than words, numbers rather than letters, and so on. Such a capability
involves the activation of learned top-down expectancies that selectively sensitize some internal representations more than others. Thus the phrase “all letters and words are lists” is a conclusion about the laws of unitization that letters and words share, not about the top-down attentional and expectational processes that can flexibly modulate the STM and LTM traces that these laws define.

38. Spatial Frequency Analysis of Temporal Patterns by a Masking Field: Word Length and Superiority
The main idea is to join together results about positional gradients, lateral masking, and multiple spatial frequency scales to synthesize an F(4) network, called a masking field, that selectively amplifies the STM of F(4) chunks representing longer sublists at the expense of chunks representing shorter sublists, other things being equal, up to some optimal sequence length (Grossberg, 1978e). Each of these three types of concepts can also be used to analyse aspects of spatial visual processing (Ganz, 1975; Robson, 1975). The network F(4) thus illustrates that the same mechanisms can be specialized to do either spatial processing or temporal processing. The results of Samuel, van Santen, and Johnston (1982, 1983) on a word length effect in word superiority studies were published after the first draft of this chapter was completed. That is, a letter is better recognized as it is embedded in longer words of lengths from 1 to 4. These authors write: “One could posit that the activation of a word is a function of the evidence present for it; more letters could provide more evidence for a word. Very short words would be at an inherent disadvantage, since they only receive a limited amount of support” (Samuel et al., 1983, p. 322). Both their data and their intuitive interpretation support the properties of word processing by a masking field developed in Grossberg (1978e). To clarify the critical issue of how “evidence” is defined to imply a word length effect, I have expanded my review of this issue in the subsequent sections, notably of how F(4) chunks that represent longer item lists may mask F(4) chunks that represent shorter item lists. Several other predictions in Grossberg (1978e) have not yet been experimentally tested. Some of these predictions concern the rules of neuronal development whereby a masking field is self-organized. My expanded review forms a bridge between these and related levels of description.

39. The Temporal Chunking Problem
The need for masking rules that are sensitive to the length of a list of items can be understood by considering the temporal chunking problem: Suppose that an unfamiliar list of familiar items is sequentially presented (e.g., a novel word composed of familiar letters). In terms of frequency and familiarity, the most familiar units in the list are the items themselves. The first item starts to be processed before the whole list is even presented. What prevents processing of the first familiar item from blocking the chunking of the unfamiliar list? Another way to state this problem is as follows: In order to completely process a novel list, all of its individual items must first be presented. All of the items are more familiar than the list itself. What prevents item familiarity from forcing the list to always be processed as a sequence of individual items, rather than eventually as a unitized whole? The temporal chunking problem is only recognized as a serious constraint on processing design when one analyses frontally how word-like representations are learned in response to serially scanned sound streams or visual letter arrays. To overcome this problem, somehow the sequence as a whole uses prewired processing biases to overcome, or mask, the learned salience of its constituent items. The type of masking that I need goes beyond the usual masking models. To emphasize what is new, I briefly review some earlier masking models. The seminal model of
Weisstein (1968, 1972) is a model of contrast enhancement. Ganz (1975) modified Weisstein's model to avoid its assumption that inhibition acts faster than excitation. Ganz's (1975) trace-decay-and-lateral-inhibition model is a special case of equation (6). This model does not, however, discuss how the signal thresholds or LTM traces in equation (6) interact with STM trace decay and lateral inhibition to alter a network's reaction time in response to target-then-mask. These factors were used by Grossberg (1969c) to provide a unified account of masking and performance speed-up during learning. In this model, performance speed-up due to learning is a variant of the fan effect (Section 2): An increase in a pathway's LTM trace amplifies signals in the pathway; these amplified signals more vigorously activate their receptive node; each activity therefore grows more rapidly and exceeds the output threshold of the node more quickly. The existence of more competing nodes can cause a larger total inhibitory signal to be received by each node; the net rate of growth of activity at each node is thereby decreased; node activity therefore exceeds the output threshold of the node less quickly. These properties also hold in the masking model that I now discuss. This masking model is not just a model of contrast enhancement. It was introduced to analyse how developmental and attentional biases can alter competitive decision-making before STM storage occurs (Grossberg and Levine, 1975). The model was extended to explain certain normative visual illusions, such as neutralization (Gibson, 1937; Levine and Grossberg, 1976). Both investigations analysed how the STM decision process is altered by giving subsets of nodes different numbers of excitable sites, differentially amplified interaction strengths, and/or broader spatial interactions in shunting networks of the form

dx_i/dt = −A x_i + (B_i − x_i)[I_i + Σ_{k=1}^{n} C_ik f_k(x_k)] − x_i Σ_{k=1}^{n} E_ik f_k(x_k),
i = 1, 2, ..., n, which are a special case of equation (34). In these networks, a larger choice of coding sites B_i, or of shunting signals F_i in f_i(x_i) = f(F_i x_i), or of spatial frequencies C_ik and E_ik endows node v_i with the ability to mask the STM activities of nodes v_k with smaller parameters. The control of masking by parameters such as B_i, F_i, or E_ik is not the same process as contrast enhancement, since the latter is controlled by the choice of the signal function f(w) (Grossberg, 1973). We discovered that a subtle interaction exists between the choice of parameters and signal functions. A linear signal function f(w) can cause the STM activities of all nodes with smaller parameters to be inhibited to zero no matter how big their STM activities start out relative to the STM activities of nodes with larger parameters. In such a network, structural or attentional biases (larger parameters) win out over the intensity or learned salience of individual cues (larger initial activities). This unsatisfactory state of affairs is overcome by using a sigmoid signal function f(w), in which case nodes with sufficiently large initial activities can mask nodes with larger parameters but smaller initial activities. Thus a flexible tug-of-war between stimulus factors, like intensity or learned salience, and structural factors, like the number of coding sites or spatial frequencies, exists if a nonlinear signal function is used but not if a linear signal function is used. This fact poses yet another challenge to linear models. Masking, as opposed to mere contrast enhancement, thus occurs in networks whose nodes are partitioned into subfields. Within each subfield, each node possesses (approximately) the same parameters. The interactions between nodes in different subfields are biased by the differences between subfield parameters.

40. The Masking Field: Joining Temporal Order to Differential Masking via an Adaptive Filter
Although these masking insights were originally derived to study spatial processing in vision, I soon realized that they are useful, indeed crucial, for the study of temporal
Chapter 6
processing in language and motor control (Grossberg, 1978e). This realization came in stages. First I showed how the LTM invariance principle can be used to generate STM temporal order information over item representations (Section 34). A spatial pattern of STM activity over a set of item representations in F(3) encodes this information. As more items are presented, a new spatial pattern is registered that includes a larger region of the item field F(3). The main insight is thus to translate the temporal processing of a list of items into a problem about a succession of expanding spatial patterns. Given this insight, the temporal chunking problem can be translated as follows. How do chunks in F(4) that encode broader regions of the item field F(3) mask F(4) chunks that encode narrower regions of F(3)? Phrased in this way, the relevance of the masking field results becomes obvious, because these results show how subfields with larger parameters can mask subfields with smaller parameters. Putting together these ideas about item coding and masking, the temporal chunking problem leads to the following design constraint:

Sequence masking principle: Broader regions of the item field F(3) are filtered in such a way that they selectively excite F(4) nodes with larger parameters (Figure 20).
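The masking result invoked here, that under a linear signal function the node with larger structural parameters suppresses all others regardless of their initial advantage, while a sigmoid signal function lets a sufficiently active node resist the structural bias, can be illustrated with a minimal two-node simulation. The interaction form and every constant below are illustrative assumptions in the spirit of the Section 39 shunting networks, not the chapter's own equations.

```python
# A minimal sketch (assumed form, not the chapter's equation) of a
# two-node shunting competition:
#   dx_i/dt = -A*x_i + (B_i - x_i)*f(x_i) - x_i * sum_{k != i} f(x_k)
# Node 0 has the larger structural parameter B; node 1 has the larger
# initial STM activity.

def compete(f, B=(2.0, 1.0), x_init=(0.05, 0.5), A=0.1, dt=0.01, steps=40000):
    """Euler-integrate the competition and return the final activities."""
    x = list(x_init)
    for _ in range(steps):
        fx = [f(v) for v in x]
        total = sum(fx)
        x = [xi + dt * (-A * xi + (Bi - xi) * fxi - xi * (total - fxi))
             for xi, Bi, fxi in zip(x, B, fx)]
    return x

linear = compete(lambda w: w)                        # linear signal function
sigmoid = compete(lambda w: w * w / (0.25 + w * w))  # sigmoid signal function

# Linear f: the structurally favored node 0 wins even from a smaller start.
# Sigmoid f: node 1's larger initial activity masks node 0 instead.
```

In this toy setting the linear run ends with node 0 near its ceiling and node 1 suppressed to zero, while the sigmoid run reverses the outcome, which is the tug-of-war described in Section 39.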
41. The Principle of Self-Similarity and the Magic Number 7

In order to realize this functional property in a computationally effective way, some specialized design problems must be solved. First, the masking parameters must be chosen self-consistently. It is inadmissible to allow a node v_i's larger number of sites B_i to cancel the masking effect of its smaller spatial frequency E_ik. Nodes with more sites need to have broader interactions, other things being equal. This numerical constraint is a special case of a design principle that reappears in several guises throughout my work, the so-called principle of self-similarity (Grossberg, 1969e, 1982d). Every use of the principle suggests a different example wherein a local rule for designing individual cells achieves a global property that enhances the operating power of the entire network. In the present usage, self-similarity means that nodes with larger parameters are excited only by longer sublists. Due to their self-consistent parameters, these nodes can be effectively inhibited only by other nodes that are also excited by longer sublists. Self-similarity thus introduces a prewired partial ordering among subfields such that the nodes activated by longer sublists can inhibit the nodes activated by shorter (and related) sublists, but not conversely, unless this partial ordering is modified through learning. More list items need to be presented to activate a long-list node than a short-list node. Thus many more list items need to be presented to activate the same number of long-list nodes as short-list nodes. Since long-list nodes can be masked only by other long-list nodes, such nodes can remain active long enough to sample many more future events than can short-list nodes. This last property shows how self-similarity enhances the network's predictive power. The network takes a risk by allowing any node to remain active for a long time.
If the node samples inappropriate information on a given trial, on its next activation it can read out errors far into the future. This risk is minimized by letting the long-list nodes stay active the longest, because these nodes are better characterized by the temporal context (i.e., length) into which they are embedded. Self-similarity is thus a structural constraint on individual nodes that enables the network as a whole to resolve uncertain input data without taking untoward predictive risks. The abstract property of self-similarity also helps to explain a classic experimental property of human information processing, namely Miller’s (1956) “magic number seven plus or minus two.” This is because the total length of the lists that can simultaneously be coded by a prescribed subfield increases with the total length of the sublists that can be chunked by the nodes of the subfield. Grossberg (1978e) discusses these properties
The Adaptive Self-Organization of Serial Order in Behavior
in greater detail, notably their effects on word recognition, code compression, recall clustering effects, and the synthesis of predictive motor commands leading to rapid planned performance.

Figure 20. Selective activation of a masking field. The nodes in a masking field are organized so that longer item sequences, up to some optimal length, activate nodes with more potent masking properties. Individual items, as well as item sequences, are represented in the masking field. The text describes how the desired relationship between item field, masking field, and the intervening adaptive filter can be self-organized using surprisingly simple developmental rules.

42. Developmental Equilibration of the Adaptive Filter and its Target Masking Field
It remains for me to explain how the conditionable pathways that form the adaptive filter from the item field F(3) to the masking field F(4) generate the desired sublist masking properties. This explanation cannot merely offer formal rules for connecting the two fields. To be convincing, it must show how the connections can be established by growth rules that are simple enough to hold in vivo. The rules stated here thus amount to predictions about brain development in language-related anatomies. The main properties to be achieved are all tacitly stated in the sequence masking principle. They may be broken down as follows:

1. List Representation: The unordered sets of items in all realizable item lists, up
to a maximal list length, are initially represented in the masking field.

2. Masking Parameters Increase With List Length: The masking parameters of masking field nodes increase with the length of the item lists that activate them. This rule holds until an optimal list length is reached.

3. Masking Hierarchy: A node that is activated by a given item list can mask nodes that are activated by sublists of this list.

4. List Selectivity: If a node's trigger list has length n, it cannot be supraliminally activated by lists of length significantly less than n.

Properties 1 and 2 suggest that the adaptive filter contains a profusion of pathways that are scattered broadly over the masking field. Property 3 suggests that closely related lists activate nearby nodes in the masking field. Property 4 says that, despite the profusion of connections, long-list nodes are tuned not to respond to short sublists. The main problem is to resolve the design tension between profuse connections and list selectivity. This tension must be resolved both for short-list (e.g., letter) and long-list (e.g., word) nodes: If connections are profuse, why are not short-list nodes unselective? In other words, what prevents many different item nodes from converging on every short-list node and thus being able to activate it? And if many item nodes do converge on long-list nodes, why aren't these long-list nodes activated by sublists of the items? Somehow the number of item nodes that contact a list node is calibrated to match the output threshold of the list node. A combination of random growth rules for pathways and self-similar growth rules for list nodes can be shown to achieve all of these properties (Cohen and Grossberg, 1986a; Grossberg, 1978e). Suppose that each item node of F(3) sends out a large number of randomly distributed pathways toward the list nodes of F(4). Suppose further that an item node contacts a list node with a prescribed small probability p.
This probability is small because there are many more list nodes than item nodes. Let λ be the mean number of such contacts across all of the list nodes. The probability that exactly k pathways contact a given list node is given by the Poisson distribution
P_k = λ^k e^{-λ} / k!
If λ is chosen so that K < λ < K + 1, then P_k is an increasing function of k if 1 ≤ k ≤ K and a decreasing function of k if k ≥ K. Thus lists of length k no greater than the optimal length K are represented within the masking field, thereby satisfying properties 1 and 2. Other random growth rules, such as the hypergeometric distribution, also have similar properties. Due to the broad and random distribution of pathways, list nodes will tend to be clustered near nodes corresponding to their sublists, thereby tending to satisfy property 3.
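The mode property just stated can be checked directly from the Poisson formula; λ = 4.5 below is an illustrative choice giving an optimal length K = 4.

```python
# Check the Poisson mode property: P_k = lambda^k e^{-lambda} / k!
# increases up to the optimal list length K and decreases afterwards
# whenever K < lambda < K + 1. (lambda = 4.5 is an illustrative value.)
import math

def poisson_pmf(k, lam):
    return lam ** k * math.exp(-lam) / math.factorial(k)

lam = 4.5                                   # mean contacts per list node
p = [poisson_pmf(k, lam) for k in range(12)]

increasing = all(p[k] < p[k + 1] for k in range(4))      # rising up to K = 4
decreasing = all(p[k] > p[k + 1] for k in range(4, 11))  # falling for k >= K
```

The ratio P_{k+1}/P_k = λ/(k+1) makes the single peak at k = K immediate.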
43. The Self-Similar Growth Rule and the Opposites Attract Rule
To discuss property 4, I interpret each list node as a population of cell sites. This population may consist of many neurons, each of which possesses many sites. For simplicity, I consider only a single neuron in such a population. A list node that receives k pathways somehow dilutes the input due to each pathway so that (almost) all k pathways must be active to generate a suprathreshold response. As k increases, the amount of dilution also increases. This property suggests that long-list cells have larger cellular volumes, since a larger volume can more effectively dilute a signal due to a single output pathway. Larger volumes also permit more pathways to reach the cell's surface, other things being equal. The formal constraint that long-list nodes are associated with larger parameters, such as number of sites and spatial frequencies, is thus extended to a physical instantiation wherein more sites exist partly
because the cells have larger surface areas. This conclusion reaffirms the importance of the self-similarity principle in designing a masking field: A cell has longer interactions (e.g., axons) because it has a larger cell body to support them. How do larger cell surfaces attract more pathways, whereas smaller cell surfaces attract fewer pathways? This property is not as obvious as it may seem. Without further argument, a cell surface that is densely encrusted with axon terminals might easily be fired by a small subset of these axons. To avoid this possibility, the number of allowable pathways must be tuned so that the cell is never overloaded by excitation. There exist two main ways to guarantee this condition. I favor the second way, but a combination of the two is also conceivable:

1) At an early stage of development, a spectrum of cell sizes is endogenously generated across the masking field by a developmental program. Each cell of a given size contains a proportional number of membrane organelles that can migrate and differentiate into mature membrane receptors in response to developing input pathways (Patterson and Purves, 1982). The number of membrane organelles is regulated to prevent the internal level of cell excitation (measured, say, by the maximum ratio of free internal Na+ to K+ ions) from becoming too large.

2) Pathways from the item field grow to the list nodes via random growth rules. Due to random growth, some cells are contacted by more pathways than others. Before these pathways reach their target cells, these cells are of approximately the same size. As longer item lists begin to be processed by the item field, these lists activate their respective list nodes. The target cells experience an abnormal internal cellular milieu (e.g., abnormally high internal Na+/K+ concentration ratios) due to the convergence of many active pathways on the small cell volumes.
These large internal signals gradually trigger a self-similar cell growth that continues until the cell and its processes grow large enough to reduce the maximal internal signal to normal levels. The tuning of cell volumes in F(4) to the number of converging afferent pathways from F(3) is thus mediated by a self-similar use-and-disuse growth rule. Grossberg (1969e) proposed such a rule for cell growth to satisfy general properties of cellular homeostasis. In the present application, the fact that internal cellular indices of membrane excitation can trigger cell growth until these indices equilibrate to normal levels suggests why the mature cell needs simultaneous activation from most of its pathways before it can fire. A self-similar growth rule has many appealing properties. Most notably, only item lists that occur in a speaker's language during the critical growth period will be well-represented by the chunks of the speaker's masking field. This fact may be relevant to properties of second language learning. If input excitation continues to maintain cell volume throughout the life of the cell, partial transections of the cell's input pathways should induce a partial reduction in cell volume. Moreover, if the transient memory span of the item field equals K (Section 35), the optimal chunk length in the masking field should also approximate K. In other words, the chunk lengths (in the masking field F(4)) to which the speaker is sensitive are tuned by the lengths of item sequences (in the item field F(3)) that the speaker can recall directly out of STM. A second issue concerning the developmental self-organization of the masking field is the following: How does each masking subfield know how to choose inhibitory pathways that are strong enough to carry out efficient masking but not so strong as to prevent any list from activating the masking field?
I have predicted (Grossberg, 1978e, Section 45) that this type of property is developmentally controlled by an opposites attract rule, whereby excitatory sites attract inhibitory pathways and inhibitory sites attract excitatory pathways. The prediction suggests how intracellular parameters can regulate the attracting morphogens in such a way that balanced on-center off-surround pathways result.
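The use-and-disuse growth rule described above can be caricatured in a few lines: a cell grows until its per-volume signal returns to a homeostatic level, so its mature volume, and hence the fraction of its afferents that must be simultaneously active to fire it, scales with the number of converging pathways. Every name and constant here is an illustrative assumption, not a quantity from the chapter.

```python
# Caricature of the self-similar use-and-disuse growth rule: volume grows
# until the internal signal density k / v falls back to the homeostatic
# level, so the mature firing condition requires nearly all k afferents.

def mature_volume(k, s_normal=1.0, v0=1.0, dv=0.01):
    """Grow the cell volume until k active pathways no longer drive the
    per-volume signal above the homeostatic level s_normal."""
    v = v0
    while k / v > s_normal:
        v += dv
    return v

def fires(k_active, k_total, s_normal=1.0, margin=0.9):
    """After growth, the diluted signal from the active subset must come
    close to the homeostatic level for the cell to fire."""
    v = mature_volume(k_total, s_normal)
    return k_active / v >= margin * s_normal

long_list_v = mature_volume(8)   # long-list cells grow larger ...
short_list_v = mature_volume(3)  # ... than short-list cells
# A short sublist (3 of 8 afferents) cannot fire the long-list cell,
# which is the list selectivity property (property 4).
```

The same homeostatic stopping rule thus yields both the size spectrum (properties 1 and 2) and the dilution-based selectivity (property 4).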
It remains to illustrate how constraints on list length, masking hierarchy, and list selectivity can be computationally realized in a network such as that in equation (57). Cohen and Grossberg (1986a, 1986b) describe computer simulations that demonstrate all the desired properties of a masking field. This masking field obeys equations of the form
In equation (59), x_i^(J) is the STM activity of the ith cell in F(4) which receives input pathways from only the item representations of items r_l, l ∈ J, where J is an unordered set of indices. The notation |J| denotes the size of the set J. Thus interaction coefficients such as D_|J| depend only upon the size of the set J of items, not upon the items themselves. These coefficients are chosen to satisfy the growth constraints
44. Automatic Parsing, Learned Superiority Effects, and Serial Position Effects During Pattern Completion
I now summarize some psychological implications of masking field dynamics. As a list of items is presented to the item field F(3), the encoding of the list will be updated continuously. At every time, the most predictive chunks of the list that is active at that moment will rapidly mask the activities of less predictive chunks, even though these less predictive chunks may have been dominant in an earlier temporal context. As the total list length exceeds the maximal length of any sublist encoded within the masking field F(4), the network will automatically parse the total list into that grouping of sublists that can best survive mutual masking across F(4). Both of these properties are influenced by learning in important ways. For example, suppose that a given list is familiar to the network (e.g., a familiar word) but that none of its sublists has ever been individually presented to the network. In the F(3) → F(4) adaptive filter, the pattern of LTM traces corresponding to the chunk of the whole list will be much better tuned than the LTM patterns corresponding to any sublist chunk. This is because rapid masking of all sublist chunks by the whole list has occurred on all learning trials (Section 23). Consequently, on recall trials a sufficiently large sublist of the list may activate the chunk corresponding to the whole list rather than its own sublist chunk. This property is due to the differential amplification of those F(3) → F(4) signals that correspond to the tuned LTM traces of the list chunk.
This whole-list pattern completion effect should be weaker in situations wherein the sublists are also familiar lists (e.g., a familiar word embedded in a familiar word) due to three factors working together: the stronger relative amplification of the sublist chunks by their tuned LTM patterns; the greater innate ease with which a sublist can activate a sublist chunk than a list chunk; and the possibility that nodes with smaller parameters can mask nodes with larger parameters if they receive larger inputs (Section 39). These properties also indicate how a familiar word in a nonword may be recognized. Which sublist of a list can best activate the full list code? Often the answer is the sublists concentrated at the beginning and end of the list. This is because the pattern of STM temporal order information across F(3) often exhibits a primacy effect, a recency effect, or an STM bow (Section 34). Thus the strongest STM activities are often at either end
of a list. As the LTM pattern of a list chunk within the F(3) → F(4) adaptive filter becomes parallel through learning to its STM pattern at F(3), the largest LTM traces come to correspond to items at the list beginning and end. These large LTM traces are the ones capable of selectively amplifying sublist items. Rumelhart and McClelland (1982) report serial bowing effects in their data on word recognition. However, their model does not explain these data without benefit of auxiliary hypotheses (see Lawry and LaBerge (1981) for related data). If certain sublists are practiced often, their LTM patterns will preferentially activate the corresponding sublist chunks in the STM struggle across F(4). Different parsings of the same list can thus be determined by changing the alphabet of practiced sublists. Once a given parsing of sublist codes across F(4) starts to be activated, it delivers template feedback to F(3). Word superiority effects (Johnston and McClelland, 1974) are abetted by the larger parameters of long lists, although there exists a trade-off in reaction time measures of superiority between how long it takes to supraliminally activate a list code and the strength of its top-down feedback. The list length prediction has received experimental support from Samuel, van Santen, and Johnston (1982, 1983). A template explanation of word superiority is also suggested by Rumelhart and McClelland (1982). There are at least two important differences between our theories. As I noted in Section 1, Rumelhart and McClelland (1982) postulate the existence of distinct letter and word levels, and connect letters and words to each other in different ways. My theory replaces letter and word levels by item and list levels, and connects these levels in a different way than would be appropriate if letter and word levels existed. These theories are fundamentally different both in their levels and in their interactions.
For example, in a model using letter and word levels, letters such as A and I, which are also words, are represented on both levels, but letters such as K and L are represented only on the letter level. It remains unclear how such a distinction can be learned without using a homunculus. In contrast, in a model using item and list levels, all familiar letters are represented on both levels, because "all letters and words are lists." Although both letters and words can activate list chunks, they do so with varying degrees of ease due to differences in the spatial and temporal contexts into which they have been embedded. This fact leads to a second major difference between our theories. Rumelhart and McClelland (1982) consider only four-letter words and do not discuss the role of learning. Therefore they cannot easily explain how a familiar three-letter word in a four-letter non-word is processed. Instead of being able to use the parametric biases due to sublist length, learning, and so on in the coding of individual subsequences, they must derive all of the processing differences between words, pseudo-words, and non-words from differences in the number of activated words in their network hierarchy. This type of explanation does not seem capable of explaining the word length effect. Grossberg (1984b) and Grossberg and Stone (1986a) describe other differences between the theories and their ability to explain word recognition data. The present theory suggests some of the operations that may prevent a word superiority effect from occurring (Chastain, 1982). Of particular interest is the manner in which attentional factors can modulate this effect. For example, suppose that a subject gives differential attention to the first item in a string, thus amplifying the corresponding item representation in STM. When the adaptive filter responds to the whole list of items, the input to the sublist code that corresponds to the first item is differentially amplified.
The additional salience of this sublist code enables it to compete more effectively with the sublist codes of longer list chunks. This competition weakens the activation of the longer chunks in F(4) and enables the item chunk to generate relatively more of the template feedback to F(3). Item chunk feedback is also the primary source of template feedback when a string of unrelated letters is presented. Attentional processes enter this explanation in two mechanistically distinct but interdependent ways. Attentional mechanisms amplify the item representation in F(3). Template feedback is
also an attentional mechanism, but one that is capable of acting on a more global scale of processing. The two attentional mechanisms are linked via the adaptive filter and the masking field.

45. Gray Chips or Great Ships?
The resonant feedback dynamics between F(3) and F(4) also help to explain the interesting findings of Repp, Liberman, Eccardt, and Pesetsky (1978). By varying the fricative noise and silence durations in GRAY CHIP, they found that "given sufficient silence, listeners report GRAY CHIP when the noise is short but GREAT SHIP when it is long" (Repp et al., 1978, p.621). There exists "a trading relation between silence and noise durations. As noise increases more silence is needed.... For equivalent noise durations, more silence was needed in the fast sentence frame than in the slow sentence frame to convert the fricative into an affricate" (Repp et al., 1978, p.625). Part of an explanation for this phenomenon depends on the fact that articulatory acts influence which "feature detectors" are tuned by auditory feedback (Section 30). Auditory experience of articulatory acts thus determines not only what item representations of F(3) will be activated, but also what sequence representations of F(4) will be activated. Another part of the explanation uses the fact that the list codes in F(4) group together and perceptually complete auditory signals into familiar articulatory configurations via learned template feedback to F(3). A subtle issue here is that a particular completion by template feedback is often contingent on the receipt of at least partially confirmatory auditory cues. Yet another part of the explanation uses the fact that a speed-up of speaking rate may alter commensurately all STM activities across F(3) by changing all the item integration times, as in equation (54). Due to the LTM invariance principle, a sufficiently uniform speed-up may not significantly alter the list codes selected across F(4) (Section 35) after contrast enhancement has acted to generate tuned categories (Section 22). Thus "judgements of phonetic structure and tempo are not independent, but are made simultaneously and interactively" (Miller, 1981, p.69).
Finally, we come to the role of silence, which I consider the most challenging aspect of these data. Silence is not a passive state of "nothingness"; it is an active state that reflects the temporal context in which it is placed. Apart from the featural properties of silence as a temporal boundary to activity pattern onsets and offsets, I believe that the trading relationship reflects the fact that the nonspecific gain of F(4) is higher during rapid speech than during slower speech, and that this gain varies on a slower time scale than the onset or offset of an individual auditory cue. Recall from Section 23 that a nonspecific gain control signal accompanies each specific cue to regulate the network QT or, equivalently, to renormalize the total operating load on the network (see also Grossberg, 1978e, Section 59). Such a variation of gain with speech rate can partially compensate for a decrease in integration times T_i by increasing A_i in equation (54) and decreasing φ_k in equation (56). Thus the effects of a given duration of silence can be interpreted only by knowing the context-sensitive gain that calibrates the processing rates of the auditory cues which bound the silence. These properties need to be studied further in numerical simulations.

46. Sensory Recognition versus Motor Recall: Network Lesions and Amnesias

This chapter's summary of the temporal coding designs that are presently known is incomplete. One also needs to build up analogous machinery to chunk the temporal order of motor commands and then to show how sensory and motor chunks are interconnected via associative maps. Only in this fashion can a full understanding of the differences and relationships between sensory recognition and motor recall be achieved.
The partial independence of sensory and motor temporal order mechanisms is perhaps best shown through the behavior of amnesic patients (Butters and Squire, 1983). Formal amnesic syndromes that are strikingly reminiscent of real amnesic syndromes can be generated in the networks of the present theory. For example, cutting out the source of orienting arousal in Figure 15 generates a network that shares many symptoms of medial temporal amnesia, and cutting out the source of incentive motivational feedback in network models of motivated behavior generates a syndrome characterized by flat affect and impaired transfer from sensory STM to LTM (Grossberg, 1971, 1975, 1982b). Both of these lesions are interpreted as occurring in formal network analogs of the hippocampus and other closely related structures. Grossberg (1978e, Section 34) analyses a network (Figure 19) in which sensory and motor temporal coding mechanisms are associatively joined to allow updating of internal representations to take place during the learning and performance of planned action sequences. The next section supplements these designs by outlining some related concepts about rhythm.

47. Four Types of Rhythm: Their Reaction Times and Arousal Sources
I believe that humans possess at least four mechanistically distinct sources of rhythmic capability. The on-off rebounds within specialized gated dipole circuits can be used to generate endogenous rhythms (Carpenter and Grossberg, 1983a, 1983b, 1984, 1985), as in the periodic rhythms of agonist-antagonist motor contractions. For example, suppose that the on-cell of a gated dipole is perturbed to get a rhythm started. A few controlled on-cell inputs at a fixed rate can determine the nonspecific arousal level (Figure 21), which then feeds back to maintain an automatic on-off oscillation at the same rate, as in walking, until the arousal level is inhibited. Willed changes in the arousal level can continuously modulate the frequency of the oscillation after it gets going. When successive dipole fields in a network hierarchy interact mutually, they can also mutually entrain one another in a rhythmic fashion (Grossberg, 1978f). A deep understanding of this type of entrainment requires further numerical and mathematical study of the nonlinear dynamics of interacting dipole fields. In a related type of rhythm generator, source cells excite themselves with positive feedback signals and inhibit other source cells via inhibitory interneurons that temporally average outputs from the source cells. In these on-center off-surround networks, the temporal averaging by inhibitory interneurons replaces the temporal averaging by transmitter gates which occurs in a gated dipole. A nonspecific arousal signal which equally excites all the source cells energizes the rhythm and acts as a velocity signal. Ellias and Grossberg (1975) showed that in-phase oscillations can occur when the arousal level is relatively small. As the arousal level is increased, these in-phase oscillations occur with higher frequency. When a critical arousal level is reached, a Hopf bifurcation occurs. Out-of-phase oscillations then occur at increasingly high frequency as the arousal level is further increased.
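The Hopf bifurcation just mentioned can be illustrated with the standard Hopf normal form rather than the full Ellias and Grossberg (1975) network, whose interneuron dynamics would require a longer simulation: below a critical "arousal" parameter mu, the oscillation dies out; above it, a limit cycle of amplitude sqrt(mu) appears, with a second parameter omega playing the role of the frequency (velocity) signal. This is a generic caricature, not the chapter's model.

```python
# Generic Hopf normal form (a caricature, not the Ellias and Grossberg
# (1975) network):  dz/dt = (mu + i*omega) z - |z|^2 z.  For mu < 0 the
# rest state is stable; for mu > 0 a limit cycle of radius sqrt(mu)
# appears, and omega sets the oscillation frequency.
import math

def final_amplitude(mu, omega, steps=200000, dt=0.001):
    x, y = 0.1, 0.0                       # small initial perturbation
    for _ in range(steps):
        r2 = x * x + y * y
        dx = mu * x - omega * y - r2 * x
        dy = omega * x + mu * y - r2 * y
        x, y = x + dt * dx, y + dt * dy
    return math.hypot(x, y)

below = final_amplitude(-0.5, 2.0)   # below the bifurcation: decays to rest
above = final_amplitude(0.5, 2.0)    # above: settles near sqrt(0.5)
```

Sweeping mu upward through zero thus converts a quiescent network into an oscillator whose amplitude, and in the full network also its phase pattern, depends on the arousal level.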
Such results suggest an approach towards understanding how changes in motor gait are controlled by spinal circuits which automatically interpret a simple descending velocity signal by changing both the frequency and the patterning of motor outflow signals to several limbs (Grillner, 1975). A third type of rhythm occurs when a preplanned "program" of actions is read out of a pattern of temporal order information in STM as rapidly as possible (Section 16). This type of rhythm also uses nonspecific arousal, in the form of a rehearsal wave, abetted by self-inhibitory feedback that sequentially resets the STM pattern to prevent a single action from being performed perseveratively. Performance of this kind can exhibit at least two properties: an increase in the reaction time of the first item as a function of list length, due to normalization of the total STM activity (Section 18), and a slowing down of the performance of later items, due to the tendency for primacy to dominate recency in short lists (Sections 8 and 34). Sternberg and his colleagues have reported reaction time data of this type during rapid speaking and typewriting of
Figure 21. A feedback gated dipole as a rhythm generator: A few evenly spaced on-inputs to the gated dipole start an out-of-phase oscillation going between on-cells and off-cells. Both types of cells can activate the nonspecific arousal node, which has been sensitized by an act of will. The period of the rhythm sets the average level of arousal which, in turn, perpetuates the rhythm until the arousal node is inhibited or further excited to speed up the rhythm.
word lists of different lengths (Sternberg, Monsell, Knoll, and Wright, 1978; Sternberg, Wright, Knoll, and Monsell, 1980). Grossberg and Kuperstein (1986) have used this type of rhythm generator to analyse how a sequence of planned eye movements can be performed under control of the frontal eye fields. I call the fourth type of rhythmic capability imitative rhythm. This is the type of rhythm whereby a list of familiar items of reasonable length can be performed at a prescribed aperiodic rhythm after a single hearing. It is also the type of rhythm whereby one can think of DA-DA not as a list of four symbols, but as DA repeated twice. This example suggests the relevance of interactions between ordered item representations and a rhythm-generating mechanism to the development of simple counting skills (Gelman and Gallistel, 1978). The mechanism of imitative rhythm also uses a nonspecific arousal mechanism. Indeed, all rhythmic mechanisms use nonspecific arousal in some way, and all nonspecific arousal sources elicit rhythm by being interpreted by the field of specific representations that they energize (Section 9). In this sense, all rhythmic mechanisms are structural expressions of the factorization of pattern and energy (Section 5). The intuitive idea leading to a mechanism of imitative rhythm is depicted in Figure 22. Each list item is encoded by adaptive filtering and by a temporal order representation in STM. Each list item also simultaneously delivers a nonspecific arousal pulse to the rhythm generator. Thus every item excites a specific pathway and a nonspecific pathway in a variant of Figure 15. The internal organization of the rhythm generator converts the duration of the nonspecific pulse into a topographically organized STM intensity. This happens for every item in the sequence, up to some capacity limit. Thus the rhythm generator is a (parallel) buffer of sorts, but it does not encode item information.
Rather, it codes rhythm abstractly as an ordered series of intensities. When the rehearsal wave nonspecifically activates the whole field, these intensities are read out in order by a parallel mechanism and are reconverted into durations. A detailed construction of a rhythm generator is given in Grossberg (1985).

Coding a series of durations as a spatially ordered pattern of STM intensities greatly simplifies the efficient learning of aperiodic sequences of actions. For example, a single sequence chunk in F(4) can simultaneously sample a spatial pattern of temporal order information in STM over a field of item representations and a spatial pattern of ordered intensities in the rhythm generator. Read-out from a single sequence chunk can thus recall the entire sequence of items with the correct aperiodic rhythm. For example, a baby may repeat the correct number of sounds in response to an unfamiliar list of words, as well as the rhythm with which the sounds were spoken, even though he or she cannot pronounce the sounds themselves. The ability to imitate the rhythm of one, two, or three sounds is at first much better than the ability to imitate the rhythm or number of a longer list of sounds.

48. Concluding Remarks
This chapter illustrates how a small number of network principles and mechanisms can be used to discuss many topics related to the adaptive self-organization of serial order in behavior. Perhaps the most important unifying concepts that arise in this framework are those of adaptive resonance, adaptive context-mediated avalanche, adaptively invariant STM order information in an item field, and an adaptively tuned self-similar masking field. All of these concepts suggest that the functional units of network activity are inherently nonlinear and nonlocal patterns that coherently bind a network's local computations into a context-sensitive whole. The program of classifying the adaptive resonances that control different types of planned serial behavior promises to antiquate the homunculi that burden some contemporary theories of intelligent behavior, and to end Neisser's (1976) nightmare of "processing and still more processing" with a synthetic moment of resonant recognition.
Chapter 6
(Figure 22 diagram. Its labels: a sequence chunk; a nonspecific rehearsal wave; STM order over item representations, driven by specific item information; an intensive encoding of timing durations, driven by nonspecific duration signals.)
Figure 22. An aperiodic rhythm generator transforms durations of nonspecific arousal waves into ordered intensities of stored STM activities. Concurrently, the list items that elicited the arousing signals are stored as a pattern of STM temporal order information across item representations. The onset of a sustained nonspecific rehearsal wave transforms the ordered intensities of the rhythm generator back into durations during which the rehearsal wave is inhibited. The result is a series of timed and ordered output bursts from the item representations. Rapid performance of the item representations can occur in response to the rehearsal wave alone even when no inhibiting signals from the rhythm generator are available to modulate performance rate. Both the STM pattern of temporal order information and the STM pattern of timing information can be encoded by a single node, such as the sequence chunk that is generated by adaptive filtering of the STM pattern of ordered item information.
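As a toy illustration of the scheme summarized in this caption, a rhythm can be stored as a topographically ordered pattern of intensities and later read back out as durations. The linear duration-to-intensity mapping, the gain, and the capacity limit below are invented for illustration; they are not part of the network construction in the text.

```python
# Toy sketch: store an aperiodic rhythm as an ordered pattern of
# intensities, then let a "rehearsal wave" read the pattern back out
# in order as durations. Mapping and capacity are illustrative only.

CAPACITY = 7   # illustrative buffer limit

def encode_rhythm(durations, gain=1.0):
    """Convert each arousal-pulse duration into a stored intensity."""
    return [gain * d for d in durations[:CAPACITY]]

def rehearse(intensities, gain=1.0):
    """Read the ordered intensities back out as performance durations."""
    return [w / gain for w in intensities]

rhythm = [0.2, 0.2, 0.6, 0.3]          # an aperiodic rhythm, in seconds
stored = encode_rhythm(rhythm)
assert rehearse(stored) == rhythm      # order and rhythm both recovered
```

Because the rhythm is held as a spatial pattern rather than as a sequence of events, a single read-out signal suffices to recover both the order and the timing, which is the point of the caption's construction.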
APPENDIX: DYNAMICAL EQUATIONS

A few equations include all the constructions of embedding field theory. Although it is hard work to choose the parameters that characterize specialized processors, these equations provide a guiding framework. When equation (6) is generalized to include conditionable inhibitory LTM traces z_ji^-, as well as conditionable excitatory LTM traces z_ji^+, we find (using an obvious extension of the notation) that

(d/dt) x_i = -A_i x_i + sum_j B_ji z_ji^+ - sum_j C_ji z_ji^- + I_i,   (A1)

(d/dt) z_ji^+ = -D_ji^+ z_ji^+ + E_ji^+ [x_i]^+,   (A2)

and

(d/dt) z_ji^- = -D_ji^- z_ji^- + E_ji^- [x_i]^-,   (A3)

where [w]^- = max(-w, 0). If the inhibitory signals are mediated by slowly varying inhibitory interneuronal potentials x_i^- that are activated by excitatory potentials x_i^+, we find that

(d/dt) x_i^+ = -A_i^+ x_i^+ + sum_j B_ji^+ z_ji^+ - sum_j C_ji^- z_ji^- + I_i^+   (A4)

and

(d/dt) x_i^- = -A_i^- x_i^- + F_i [x_i^+]^+.   (A5)

In equation (A4), B_ji^+ denotes an excitatory signal from v_j^+ to v_i^+ and C_ji^- denotes an inhibitory signal from v_j^- to v_i^+. The other notations can be read analogously. Four types of LTM traces are now possible; for example,

(d/dt) z_ji^++ = -D_ji^++ z_ji^++ + E_ji^++ [x_i^+]^+   (A6)

and

(d/dt) z_ji^+- = -D_ji^+- z_ji^+- + E_ji^+- [x_i^-]^+,   (A7)

where z_ji^++ denotes the trace in the conditionable pathway from v_j^+ to v_i^+ and z_ji^+- the trace in the pathway from v_j^+ to v_i^-.
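As a computational aside, an additive STM equation with paired excitatory and inhibitory LTM traces of this kind can be explored by forward-Euler integration. The restriction to a single node, the constant presynaptic signal, the parameter values, and the step size below are illustrative assumptions, not values from the text.

```python
# Forward-Euler sketch of one node of an additive STM/LTM system: the
# activity x decays at rate A, is driven through conditionable excitatory
# (z_plus) and inhibitory (z_minus) LTM traces gated by a presynaptic
# signal, and each trace slowly samples the rectified activity.
# All constants are illustrative, not taken from the text.

def step(x, z_plus, z_minus, signal, inp, dt=0.01,
         A=1.0, B=1.0, C=1.0, D=0.1, E=0.05):
    pos = max(x, 0.0)                    # [x]^+ = max(x, 0)
    neg = max(-x, 0.0)                   # [x]^- = max(-x, 0)
    dx = -A * x + B * signal * z_plus - C * signal * z_minus + inp
    dzp = -D * z_plus + E * pos          # excitatory trace samples [x]^+
    dzm = -D * z_minus + E * neg         # inhibitory trace samples [x]^-
    return x + dt * dx, z_plus + dt * dzp, z_minus + dt * dzm

x, zp, zm = 0.0, 0.0, 0.0
for t in range(1000):
    inp = 1.0 if t < 500 else 0.0        # input pulse, then free decay
    x, zp, zm = step(x, zp, zm, signal=1.0, inp=inp)
# x stays nonnegative in this run, so only the excitatory trace grows.
```

Running the loop shows the qualitative behavior the equations describe: the activity relaxes toward an input-determined level while the excitatory trace slowly accumulates, and the inhibitory trace stays at zero because the activity never goes negative.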
If interactions can be either shunting or additive, then equation (A4) is generalized to

(d/dt) x_i^+ = -A_i^+ x_i^+ + (B_i - x_i^+) (sum_j B_ji^+ z_ji^+ + I_i^+) - (x_i^+ + C_i) sum_j C_ji^- z_ji^-.   (A8)

Equations (A5)-(A7) have similar generalizations. For example, equation (A6) becomes

(d/dt) z_ji^++ = -D_ji^++ z_ji^++ + E_ji^++ (F_ji^++ - z_ji^++) [x_i^+]^+.   (A9)

If transmitter accumulation rate is slow relative to transmitter depletion rate, then the amount of transmitter Z_ji^+ generated by the LTM trace z_ji^+ satisfies

(d/dt) Z_ji^+ = N_ji^+ z_ji^+ - P_ji^+ Z_ji^+ - Q_ji^+ Z_ji^+,   (A10)

where Q_ji^+ increases with x_j. The transmitter gating equations of a gated dipole are of this type. Correspondingly, equation (A8) is changed to

(d/dt) x_i^+ = -A_i^+ x_i^+ + (B_i - x_i^+) (sum_j B_ji^+ Z_ji^+ + I_i^+) - (x_i^+ + C_i) sum_j C_ji^- Z_ji^-.   (A11)

If self-regulatory autoreceptive feedback occurs among all the synapses of similar type that converge on a single node, then equation (A10) becomes

(d/dt) Z_ji^+ = N_ji^+ z_ji^+ - P_ji^+ Z_ji^+ - Q_ji^+ Z_ji^+ S(sum_k Z_ki^+),   (A12)

where the depletion signal S increases with the total transmitter in all synapses of similar type that converge on v_i^+.
The other transmitter equations admit analogous autoreceptive generalizations. Transient properties of transmitters, such as mobilization and enzymatic modulation, may be defined by extensions of these equations (Carpenter and Grossberg, 1981; Grossberg, 1974).
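The habituative behavior that such a transmitter gating law implies can be seen by integrating it numerically: transmitter accumulates toward a level set by the LTM trace and is depleted at a rate that grows with the presynaptic signal. The constants below are illustrative assumptions, not values from the text.

```python
# Habituation sketch for a transmitter gating law of the form
# dZ/dt = N*z - P*Z - Q(s)*Z, with depletion rate Q increasing with
# the presynaptic signal s. All constants are illustrative only.

def transmitter_trace(signal, z=1.0, N=0.5, P=0.5, Q_gain=2.0,
                      dt=0.01, steps=2000):
    """Integrate Z under a constant presynaptic signal; return final Z."""
    Z = N * z / P                        # rest level (signal = 0)
    for _ in range(steps):
        Q = Q_gain * signal              # depletion grows with the signal
        Z += dt * (N * z - P * Z - Q * Z)
    return Z

rest = transmitter_trace(signal=0.0)
driven = transmitter_trace(signal=1.0)
assert driven < rest    # a sustained signal depletes (habituates) transmitter
```

The gap between the rested and the driven transmitter level is what a gated dipole exploits: when the signal is withdrawn, the relatively depleted channel transiently loses the competition to its rested antagonist.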
REFERENCES
Anderson, J.R., Language, memory, and thought. Hillsdale, NJ: Erlbaum, 1976.
Anderson, J.R., Acquisition of cognitive skill. Psychological Review, 1982, 89, 369-406.
Anderson, J.R., Retrieval of information from long-term memory. Science, 1983, 220, 25-30.
Anderson, J.A., Silverstein, J.W., Ritz, S.A., and Jones, R.S., Distinctive features, categorical perception, and probability learning: Some applications of a neural model. Psychological Review, 1977, 84, 413-451.
Atkinson, R.C. and Shiffrin, R.M., Human memory: A proposed system and its control processes. In K.W. Spence and J.T. Spence (Eds.), The psychology of learning and motivation: Advances in research and theory, Vol. 2. New York: Academic Press, 1968.
Banquet, J.-P. and Grossberg, S., Structure of event-related potentials during learning: An experimental and theoretical analysis. Submitted for publication, 1986.
Berger, T.W. and Thompson, R.F., Neuronal plasticity in the limbic system during classical conditioning of the rabbit nictitating membrane response, I: The hippocampus. Brain Research, 1978, 145, 323-346.
Butters, N. and Squire, L. (Eds.), Neuropsychology of memory. New York: Guilford Press, 1983.
Carney, A.E., Widin, G.P., and Viemeister, N.F., Noncategorical perception of stop consonants differing in VOT. Journal of the Acoustical Society of America, 1977, 62, 961-970.
Carpenter, G.A. and Grossberg, S., Adaptation and transmitter gating in vertebrate photoreceptors. Journal of Theoretical Neurobiology, 1981, 1, 1-42.
Carpenter, G.A. and Grossberg, S., A neural theory of circadian rhythms: The gated pacemaker. Biological Cybernetics, 1983, 48, 35-59 (a).
Carpenter, G.A. and Grossberg, S., Dynamic models of neural systems: Propagated signals, photoreceptor transduction, and circadian rhythms. In J.P.E. Hodgson (Ed.), Oscillations in mathematical biology. New York: Springer-Verlag, 1983 (b).
Carpenter, G.A.
and Grossberg, S., A neural theory of circadian rhythms: Aschoff's rule in diurnal and nocturnal mammals. American Journal of Physiology, 1984, 247, R1067-R1082.
Carpenter, G.A. and Grossberg, S., A neural theory of circadian rhythms: Split rhythms, after-effects, and motivational interactions. Journal of Theoretical Biology, 1985, 113, 163-223.
Carpenter, G.A. and Grossberg, S., A massively parallel architecture for a self-organizing neural pattern recognition machine. Computer Vision, Graphics, and Image Processing, in press, 1986 (a).
Carpenter, G.A. and Grossberg, S., Neural dynamics of category learning and recognition: Attention, memory consolidation, and amnesia. In J. Davis, R. Newburgh, and E. Wegman (Eds.), Brain structure, learning, and memory. AAAS Symposium Series, in press, 1986 (b).
Carpenter, G.A. and Grossberg, S., Neural dynamics of category learning and recognition: Structural invariants, reinforcement, and evoked potentials. In M.L. Commons, S.M. Kosslyn, and R.J. Herrnstein (Eds.), Pattern recognition and concepts in animals, people, and machines. Hillsdale, NJ: Erlbaum, 1986 (c).
Cermak, L.S. and Craik, F.I.M., Levels of processing in human memory. Hillsdale, NJ: Erlbaum, 1979.
Chastain, G., Scanning, holistic encoding, and the word-superiority effect. Memory and Cognition, 1982, 10, 232-236.
Cohen, M.A. and Grossberg, S., Absolute stability of global pattern formation and parallel memory storage in competitive neural networks. IEEE Transactions on Systems, Man, and Cybernetics, 1983, SMC-13, 815-826.
Cohen, M.A. and Grossberg, S., Neural dynamics of speech and language coding: Developmental programs, perceptual grouping, and competition for short term memory. Human Neurobiology, in press, 1986 (a).
Cohen, M.A. and Grossberg, S., Unitized recognition codes for parts and wholes: The unique cue in configural discriminations. In M.L. Commons, S.M. Kosslyn, and R.J. Herrnstein (Eds.), Pattern recognition and concepts in animals, people, and machines. Hillsdale, NJ: Erlbaum, 1986 (b).
Cole, R.A., Rudnicky, A.I., Zue, V.W., and Reddy, D.R., Speech as patterns on paper. In R.A. Cole (Ed.), Perception and production of fluent speech. Hillsdale, NJ: Erlbaum, 1980.
Collins, A.M. and Loftus, E.F., A spreading-activation theory of semantic memory. Psychological Review, 1975, 82, 407-428.
Cooper, W.E., Speech perception and production: Studies in selective adaptation. Norwood, NJ: Ablex, 1979.
Cornsweet, T.N., Visual perception. New York: Academic Press, 1970.
Crowder, R.G., Mechanisms of auditory backward masking in the stimulus suffix effect. Psychological Review, 1978, 85, 502-524.
Dallet, K.M., "Primary memory": The effects of redundancy upon digit repetition. Psychonomic Science, 1965, 3, 365-373.
Darwin, C.J., The perception of speech. In E.C. Carterette and M.P. Friedman (Eds.), Handbook of perception, Vol. VII: Language and speech. New York: Academic Press, 1976.
Deadwyler, S.A., West, M.O., and Robinson, J.H., Entorhinal and septal inputs differentially control sensory-evoked responses in the rat dentate gyrus. Science, 1981, 211, 1181-1183.
DeFrance, J.F., The septal nuclei. New York: Plenum Press, 1976.
Dethier, V.G., The physiology of insect senses. London: Methuen, 1963.
Dixon, T.R. and Horton, D.L., Verbal behavior and general behavior theory. Englewood Cliffs, NJ: Prentice-Hall, 1968.
Dodwell, P.C., Pattern and object perception. In E.C. Carterette and M.P. Friedman (Eds.), Handbook of perception, Vol. V: Seeing. New York: Academic Press, 1975.
Eccles, J.C., The neurophysiological basis of mind: The principles of neurophysiology. London: Oxford University Press, 1952.
Eccles, J.C., Ito, M., and Szentagothai, J., The cerebellum as a neuronal machine. New York: Springer-Verlag, 1967.
Ellias, S.A. and Grossberg, S., Pattern formation, contrast control, and oscillations in the short term memory of shunting on-center off-surround networks. Biological Cybernetics, 1975, 20, 69-98.
Elman, J.H., Diehl, R.L., and Buchwald, S.E., Perceptual switching in bilinguals. Journal of the Acoustical Society of America, 1977, 62, 971-974.
Erickson, R.P., Sensory neural patterns and gustation. In Y. Zotterman (Ed.), Olfaction and taste. New York: Pergamon Press, 1963.
Estes, W.K., An associative basis for coding and organization in memory. In A.W. Melton and E. Martin (Eds.), Coding processes in human memory. New York: Wiley, 1972.
Fisher, R.P. and Craik, F.I.M., The effects of elaboration on recognition memory. Memory and Cognition, 1980, 8, 400-404.
Fitts, P.M. and Posner, M.I., Human performance. Monterey, CA: Brooks/Cole, 1967.
Foss, D.J. and Blank, M.A., Identifying the speech codes. Cognitive Psychology, 1980, 12, 1-31.
Fowler, C.A., Timing control in speech production. Unpublished doctoral dissertation, Dartmouth College, Hanover, New Hampshire, 1977.
Freeman, W.J., Mass action in the nervous system. New York: Academic Press, 1975.
Freeman, W.J., EEG analysis gives models of neuronal template-matching mechanism for sensory search with olfactory bulb. Biological Cybernetics, 1979, 35, 221-234.
Fry, D.B., The development of the phonological system in the normal and the deaf child. In F. Smith and G.A. Miller (Eds.), The genesis of language. Cambridge, MA: MIT Press, 1966.
Fukushima, K., Neocognitron: A self-organized neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 1980, 36, 193-202.
Gabriel, M., Foster, K., Orona, E., Saltwick, S.E., and Stanton, M., Neuronal activity of cingulate cortex, anteroventral thalamus, and hippocampal formation in discrimination conditioning: Encoding and extraction of the significance of conditioned stimuli. Progress in Psychobiology and Physiological Psychology, 1980, 9, 125-231.
Ganz, L., Temporal factors in visual perception. In E.C. Carterette and M.P. Friedman (Eds.), Handbook of perception, Vol. V: Seeing. New York: Academic Press, 1975.
Gelman, R. and Gallistel, C.R., The child's understanding of number. Cambridge, MA: Harvard University Press, 1978.
Gibson, J.J., Adaptation, after-effect, and contrast in the perception of tilted lines, II: Simultaneous contrast and the areal restriction of the after-effect. Journal of Experimental Psychology, 1937, 20, 553-569.
Grillner, S., Locomotion in vertebrates: Central mechanisms and reflex interaction. Physiological Reviews, 1975, 55, 247-304.
Grossberg, S., The theory of embedding fields with applications to psychology and neurophysiology. New York: Rockefeller Institute for Medical Research, 1964.
Grossberg, S., Nonlinear difference-differential equations in prediction and learning theory. Proceedings of the National Academy of Sciences, 1967, 58, 1329-1334.
Grossberg, S., Some nonlinear networks capable of learning a spatial pattern of arbitrary complexity. Proceedings of the National Academy of Sciences, 1968, 59, 368-372 (a).
Grossberg, S., Some physiological and biochemical consequences of psychological postulates. Proceedings of the National Academy of Sciences, 1968, 60, 758-765 (b).
Grossberg, S., Embedding fields: A theory of learning with physiological implications. Journal of Mathematical Psychology, 1969, 6, 209-239 (a).
Grossberg, S., On learning and energy-entropy dependence in recurrent and nonrecurrent signed networks. Journal of Statistical Physics, 1969, 1, 319-350 (b).
Grossberg, S., On learning, information, lateral inhibition, and transmitters. Mathematical Biosciences, 1969, 4, 255-310 (c).
Grossberg, S., On learning of spatiotemporal patterns by networks with ordered sensory and motor components, I: Excitatory components of the cerebellum. Studies in Applied Mathematics, 1969, 48, 105-132 (d).
Grossberg, S., On the production and release of chemical transmitters and related topics in cellular control. Journal of Theoretical Biology, 1969, 22, 325-364 (e).
Grossberg, S., On the serial learning of lists. Mathematical Biosciences, 1969, 4, 201-253 (f).
Grossberg, S., Some networks that can learn, remember, and reproduce any number of complicated space-time patterns, I. Journal of Mathematics and Mechanics, 1969, 19, 53-91 (g).
Grossberg, S., Neural pattern discrimination. Journal of Theoretical Biology, 1970, 27, 291-337 (a).
Grossberg, S., Some networks that can learn, remember, and reproduce any number of complicated space-time patterns, II. Studies in Applied Mathematics, 1970, 49, 135-166 (b).
Grossberg, S., On the dynamics of operant conditioning. Journal of Theoretical Biology, 1971, 33, 225-255.
Grossberg, S., A neural theory of punishment and avoidance, I: Qualitative theory. Mathematical Biosciences, 1972, 15, 39-67 (a).
Grossberg, S., A neural theory of punishment and avoidance, II: Quantitative theory. Mathematical Biosciences, 1972, 15, 253-285 (b).
Grossberg, S., Pattern learning by functional-differential neural networks with arbitrary path weights. In K. Schmitt (Ed.), Delay and functional-differential equations and their applications. New York: Academic Press, 1972 (c).
Grossberg, S., Contour enhancement, short term memory, and constancies in reverberating neural networks. Studies in Applied Mathematics, 1973, 52, 217-257.
Grossberg, S., Classical and instrumental learning by neural networks. In R. Rosen and F. Snell (Eds.), Progress in theoretical biology. New York: Academic Press, 1974.
Grossberg, S., A neural model of attention, reinforcement, and discrimination learning.
International Review of Neurobiology, 1975, 18, 263-327.
Grossberg, S., Adaptive pattern classification and universal recoding, I: Parallel development and coding of neural feature detectors. Biological Cybernetics, 1976, 23, 121-134 (a).
Grossberg, S., Adaptive pattern classification and universal recoding, II: Feedback, expectation, olfaction, and illusions. Biological Cybernetics, 1976, 23, 187-202 (b).
Grossberg, S., Pattern formation by the global limits of a nonlinear competitive interaction in n dimensions. Journal of Mathematical Biology, 1977, 4, 237-256.
Grossberg, S., Behavioral contrast in short term memory: Serial binary memory models or parallel continuous memory models? Journal of Mathematical Psychology, 1978, 3, 199-219 (a).
Grossberg, S., Communication, memory, and development. In R. Rosen and F. Snell (Eds.), Progress in theoretical biology, Vol. 5. New York: Academic Press, 1978 (b).
Grossberg, S., Decisions, patterns, and oscillations in nonlinear competitive systems with applications to Volterra-Lotka systems. Journal of Theoretical Biology, 1978, 73, 101-130 (c).
Grossberg, S., Do all neural models really look alike? Psychological Review, 1978, 85, 592-596 (d).
Grossberg, S., A theory of human memory: Self-organization and performance of sensory-motor codes, maps, and plans. In R. Rosen and F. Snell (Eds.), Progress in theoretical biology, Vol. 5. New York: Academic Press, 1978 (e).
Grossberg, S., A theory of visual coding, memory, and development. In E. Leeuwenberg and H. Buffart (Eds.), Formal theories of visual perception. New York: Wiley, 1978 (f).
Grossberg, S., Biological competition: Decision rules, pattern formation, and oscillations. Proceedings of the National Academy of Sciences, 1980, 77, 2338-2342 (a).
Grossberg, S., Direct perception or adaptive resonance? Behavioral and Brain Sciences, 1980, 3, 385 (b).
Grossberg, S., How does a brain build a cognitive code? Psychological Review, 1980, 87, 1-51 (c).
Grossberg, S., Human and computer rules and representations are not equivalent. Behavioral and Brain Sciences, 1980, 3, 136-138 (d).
Grossberg, S., Adaptive resonance in development, perception, and cognition. In S. Grossberg (Ed.), Mathematical psychology and psychophysiology. Providence, RI: American Mathematical Society, 1981 (a).
Grossberg, S., Psychophysiological substrates of schedule interactions and behavioral contrast. In S. Grossberg (Ed.), Mathematical psychology and psychophysiology. Providence, RI: American Mathematical Society, 1981 (b).
Grossberg, S., Associative and competitive principles of learning and development: The temporal unfolding and stability of STM and LTM patterns. In S.I. Amari and M. Arbib (Eds.), Competition and cooperation in neural networks. New York: Springer-Verlag, 1982 (a).
Grossberg, S., Processing of expected and unexpected events during conditioning and attention: A psychophysiological theory. Psychological Review, 1982, 89, 529-572 (b).
Grossberg, S., A psychophysiological theory of reinforcement, drive, motivation, and attention. Journal of Theoretical Neurobiology, 1982, 1, 286-369 (c).
Grossberg, S., Studies of mind and brain: Neural principles of learning, perception, development, cognition, and motor control. Boston: Reidel Press, 1982 (d).
Grossberg, S., The quantized geometry of visual space: The coherent computation of depth, form, and lightness. Behavioral and Brain Sciences, 1983, 6, 625-657.
Grossberg, S., Some psychophysiological and pharmacological correlates of a developmental, cognitive, and motivational theory. In R. Karrer, J. Cohen, and P. Tueting (Eds.), Brain and information: Event related potentials. New York: New York Academy of Sciences, 1984 (a).
Grossberg, S., Unitization, automaticity, temporal order, and word recognition. Cognition and Brain Theory, 1984, 7, 263-283 (b).
Grossberg, S., On the coordinated learning of item, order, and rhythm. Unpublished manuscript, 1985.
Grossberg, S. and Kuperstein, M., Neural dynamics of adaptive sensory-motor control: Ballistic eye movements. Amsterdam: North-Holland, 1986.
Grossberg, S. and Levine, D.S., Some developmental and attentional biases in the contrast enhancement and short term memory of recurrent neural networks. Journal of Theoretical Biology, 1975, 55, 341-380.
Grossberg, S. and Pepe, J., Schizophrenia: Possible dependence of associational span, bowing, and primacy versus recency on spiking threshold. Behavioral Science, 1970, 15, 359-362.
Grossberg, S. and Pepe, J., Spiking threshold and overarousal effects in serial learning. Journal of Statistical Physics, 1971, 3, 95-125.
Grossberg, S. and Stone, G.O., Neural dynamics of word recognition and recall: Attentional priming, learning, and resonance. Psychological Review, 1986, 93, 46-74 (a).
Grossberg, S. and Stone, G.O., Neural dynamics of attention switching and temporal order information in short term memory. Submitted for publication, 1986 (b).
Halle, M. and Stevens, K.N., Speech recognition: A model and a program for research. IRE Transactions on Information Theory, 1962, IT-8, 155-159.
Hary, J.M. and Massaro, D.W., Categorical results do not imply categorical perception. Perception and Psychophysics, 1982, 32, 409-418.
Haymaker, W., Anderson, E., and Nauta, W.J.H., The hypothalamus. Springfield, IL: C.C. Thomas, 1969.
Helson, H., Adaptation-level theory. New York: Harper and Row, 1964.
Hoyle, G., Identified neurons and behavior of arthropods. New York: Plenum Press, 1977.
Johnston, J.C. and McClelland, J.L., Perception of letters in words: Seek not and ye shall find. Science, 1974, 184, 1192-1194.
Jusczyk, P.W., Infant speech perception: A critical appraisal. In P.D. Eimas and J.L. Miller (Eds.), Perspectives on the study of speech. Hillsdale, NJ: Erlbaum, 1981.
Kahneman, D. and Chajczyk, D., Tests of the automaticity of reading: Dilution of Stroop effects by color-irrelevant stimuli. Journal of Experimental Psychology: Human Perception and Performance, 1983, 9, 497-509.
Karrer, R., Cohen, J., and Tueting, P. (Eds.), Brain and information: Event related potentials. New York: New York Academy of Sciences, 1984.
Kelso, J.A.S., Southard, D.L., and Goodman, D., On the nature of human interlimb coordination. Science, 1979, 203, 1029-1031.
Kennedy, D., Input and output connections of single arthropod neurons. In F.O. Carlson (Ed.), Physiological and biochemical aspects of nervous integration.
Englewood Cliffs, NJ: Prentice-Hall, 1968.
Kimura, D., The neural basis of language qua gesture. In H. Whitaker and H.A. Whitaker (Eds.), Studies in neurolinguistics, Vol. III. New York: Academic Press, 1976.
Kinsbourne, M. and Hicks, R.E., Mapping cerebral functional space: Competition and collaboration in human performance. In M. Kinsbourne (Ed.), Asymmetrical function of the brain. London: Cambridge University Press, 1978.
Klatt, D.H., Speech perception: A model of acoustic-phonetic analysis and lexical access. In R.A. Cole (Ed.), Perception and production of fluent speech. Hillsdale, NJ: Erlbaum, 1980.
Kuffler, S.W. and Nicholls, J.G., From neuron to brain. Sunderland, MA: Sinauer, 1976.
Lanze, M., Weisstein, N., and Harris, J.R., Perceived depth versus structural relevance in the object-superiority effect. Perception and Psychophysics, 1982, 31, 376-382.
Lashley, K.S., The problem of serial order in behavior. In L.A. Jeffress (Ed.), Cerebral mechanisms in behavior. New York: Wiley, 1951.
Lawry, J.A. and LaBerge, D., Letter and word code interactions elicited by normally displayed words. Perception and Psychophysics, 1981, 30, 70-82.
Lenneberg, E.H., Biological foundations of language. New York: Wiley, 1967.
Levine, D.S. and Grossberg, S., Visual illusions in neural networks: Line neutralization, tilt aftereffect, and angle expansion. Journal of Theoretical Biology, 1976, 61, 477-504.
Levinson, S.E. and Liberman, M.Y., Speech recognition by computer. Scientific American, April, 1981, 64-76.
Liberman, A.M., Cooper, F.S., Shankweiler, D.S., and Studdert-Kennedy, M., Perception of the speech code. Psychological Review, 1967, 74, 431-461.
Liberman, A.M. and Studdert-Kennedy, M., Phonetic perception. In R. Held, H. Leibowitz, and H.-L. Teuber (Eds.), Handbook of sensory physiology, Vol. VIII. Heidelberg: Springer-Verlag, 1978.
Lindblom, B.E.F., Spectrographic study of vowel reduction. Journal of the Acoustical Society of America, 1963, 35, 1773-1781.
MacKay, D.G., The problems of flexibility, fluency, and speed-accuracy trade-off in skilled behavior. Psychological Review, 1982, 89, 483-506.
MacLean, P.D., The limbic brain in relation to psychoses. In P. Black (Ed.), Physiological correlates of emotion. New York: Academic Press, 1970.
Mann, V.A. and Repp, B.H., Influence of preceding fricative on stop consonant perception. Journal of the Acoustical Society of America, 1981, 69, 548-558.
Marler, P.A., A comparative approach to vocal learning: Song development in white-crowned sparrows. Journal of Comparative and Physiological Psychology, 1970, 71, 1-25.
Marler, P. and Peters, S., Birdsong and speech: Evidence for special processing. In P.D. Eimas and J.L. Miller (Eds.), Perspectives on the study of speech. Hillsdale, NJ: Erlbaum, 1981.
Marslen-Wilson, W.D., Sentence perception as an interactive parallel process. Science, 1975, 189, 226-228.
Marslen-Wilson, W.D. and Welsh, A., Processing interactions and lexical access during word recognition in continuous speech. Cognitive Psychology, 1978, 10, 29-63.
Marvilya, M.P., Spontaneous vocalizations and babbling in hearing-impaired infants. In C.G.M. Fant (Ed.), Speech communication ability and profound deafness. Washington, DC: A.G.
Bell Association for the Deaf, 1972.
Matthei, E.H., Length effects in word perception: Comment on Samuel, van Santen, and Johnston. Journal of Experimental Psychology: Human Perception and Performance, 1983, 9, 318-320.
McClelland, J.L. and Rumelhart, D.E., An interactive activation model of context effects in letter perception, Part 1: An account of basic findings. Psychological Review, 1981, 88, 375-407.
Miller, G.A., The magic number seven plus or minus two. Psychological Review, 1956, 63, 81-97.
Miller, J.L., Effects of speaking rate on segmental distinctions. In P.D. Eimas and J.L. Miller (Eds.), Perspectives on the study of speech. Hillsdale, NJ: Erlbaum, 1981.
Miller, J.L. and Liberman, A.M., Some effects of later-occurring information on the perception of stop consonant and semivowel. Perception and Psychophysics, 1979, 25, 457-465.
Miyawaki, K., Strange, W., Verbrugge, R., Liberman, A.M., Jenkins, J.J., and Fujimura, O., An effect of linguistic experience: The discrimination of [r] and [l] by native speakers of Japanese and English. Perception and Psychophysics, 1975, 18, 331-340.
Murdock, B.B., Human memory: Theory and data. Potomac, MD: Erlbaum, 1974.
Murdock, B.B., Convolution and correlation in perception and memory. In L.-G. Nilsson (Ed.), Perspectives on memory research: Essays in honor of Uppsala University's 500th anniversary. Hillsdale, NJ: Erlbaum, 1979.
Myers, J.L. and Lorch, R.F., Jr., Interference and facilitation effects of primes upon verification processes. Memory and Cognition, 1980, 8, 405-414.
Neimark, E.D. and Estes, W.K. (Eds.), Stimulus sampling theory. San Francisco: Holden-Day, 1967.
Neisser, U., Cognition and reality. San Francisco: Freeman Press, 1976.
Newell, A., Harpy, production systems, and human cognition. In R.A. Cole (Ed.), Perception and production of fluent speech. Hillsdale, NJ: Erlbaum, 1980.
Norman, D.A., Categorization of action slips. Psychological Review, 1981, 88, 1-15.
Norman, D.A. and Bobrow, D.G., On data-limited and resource-limited processes. Cognitive Psychology, 1975, 7, 44-64.
O'Keefe, J. and Nadel, L., The hippocampus as a cognitive map. Oxford: Clarendon Press, 1978.
Olds, J., Drives and reinforcements: Behavioral studies of hypothalamic functions. New York: Raven Press, 1977.
Osgood, C.E., Method and theory in experimental psychology. New York: Oxford University Press, 1953.
Pastore, R.E., Possible psychoacoustic factors in speech perception. In P.D. Eimas and J.L. Miller (Eds.), Perspectives on the study of speech. Hillsdale, NJ: Erlbaum, 1981.
Patterson, P.H. and Purves, D. (Eds.), Readings in developmental neurobiology. Cold Spring Harbor, NY: Cold Spring Harbor Lab, 1982.
Piaget, J., The origins of intelligence in children. New York: Norton, 1963.
Posner, M.I. and Snyder, C.R.R., Facilitation and inhibition in the processing of signals. In P.M.A. Rabbitt and S. Dornic (Eds.), Attention and performance, Vol. 5. New York: Academic Press, 1975.
Raaijmakers, J.G.W. and Shiffrin, R.M., Search of associative memory. Psychological Review, 1981, 88, 93-134.
Ratcliff, R. and McKoon, G., Does activation really spread?
Psychological Review, 1981, 88, 454-462.
Reeves, A. and Sperling, G., Attentional theory of order information in short-term visual memory. Preprint, 1986.
Repp, B., Relative amplitude of aspiration noise as a voicing cue for syllable-initial stop consonants. Language and Speech, 1979, 22, 173-189.
Repp, B.H., Liberman, A.M., Eccardt, T., and Pesetsky, D., Perceptual integration of temporal cues for stop, fricative, and affricate manner. Journal of Experimental Psychology: Human Perception and Performance, 1978, 4, 621-637.
Repp, B.H. and Mann, V.A., Perceptual assessment of fricative-stop coarticulation. Journal of the Acoustical Society of America, 1981, 69, 1154-1163.
Rescorla, R.A. and Wagner, A.R., A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In A.H. Black and W.F. Prokasy (Eds.), Classical conditioning II: Current research and theory. New York: Appleton-Century-Crofts, 1972.
Restle, F., Assimilation predicted by adaptation-level theory with variable weights. In N.J. Castellan and F. Restle (Eds.), Cognitive theory, Vol. 3. Hillsdale, NJ: Erlbaum, 1978.
Robson, J.G., Receptive fields: Neural representation of the spatial and intensive attributes of the visual image. In E.C. Carterette and M.P. Friedman (Eds.), Handbook of perception, Vol. V: Seeing. New York: Academic Press, 1975.
Rumelhart, D.E. and McClelland, J.L., An interactive activation model of context effects in letter perception, Part 2: The contextual enhancement effect and some tests and extensions of the model. Psychological Review, 1982, 89, 60-94.
Rumelhart, D.E. and Norman, D.A., Simulating a skilled typist: A study of cognitive-motor performance. Cognitive Science, 1982, 6, 1-36.
Samuel, A.G., van Santen, J.P.H., and Johnston, J.C., Length effects in word perception: We is better than I but worse than you or them. Journal of Experimental Psychology: Human Perception and Performance, 1982, 8, 91-105.
Samuel, A.G., van Santen, J.P.H., and Johnston, J.C., Reply to Matthei: We really is worse than you or them, and so are ma and pa. Journal of Experimental Psychology: Human Perception and Performance, 1983, 9, 321-322.
Sawusch, J.R. and Nusbaum, H.C., Contextual effects in vowel perception, I: Anchor-induced contrast effects. Perception and Psychophysics, 1979, 25, 292-302.
Sawusch, J.R., Nusbaum, H.C., and Schwab, E.C., Contextual effects in vowel perception, II: Evidence for two processing mechanisms. Perception and Psychophysics, 1980, 27, 421-434.
Schneider, W. and Shiffrin, R.M., Automatic and controlled information processing in vision. In D. LaBerge and S.J. Samuels (Eds.), Basic processes in reading: Perception and comprehension. Hillsdale, NJ: Erlbaum, 1976.
Schneider, W. and Shiffrin, R.M., Controlled and automatic information processing, I: Detection, search, and attention. Psychological Review, 1977, 84, 1-66.
Schwab, E.C., Sawusch, J.R., and Nusbaum, H.C., The role of second formant transitions in the stop-semivowel distinction. Perception and Psychophysics, 1981, 29, 121-128.
Semmes, J., Hemispheric specialization: A possible clue to mechanism. Neuropsychologia, 1968, 6, 11-26.
Shaffer, L.H., Rhythm and timing in skill. Psychological Review, 1982, 89, 109-122.
Shepard, R.N., Multidimensional scaling, tree-fitting, and clustering. Science, 1980, 210, 390-398.
Smale, S., On the differential equations of species in competition. Journal of Theoretical Biology, 1976, 3, 5-7.
Soechting, J.F. and Laquaniti, F., Invariant characteristics of a pointing movement in man. Journal of Neuroscience, 1981, 1, 710-720.
Sperling, G. and Reeves, A., Measuring the reaction time of a shift of visual attention. In R. Nickerson (Ed.), Attention and performance, Vol. VII. Hillsdale, NJ: Erlbaum, 1980.
Squire, L.R., Cohen, N.J., and Nadel, L., The medial temporal region and memory consolidation: A new hypothesis. In H. Weingartner and E. Parker (Eds.), Memory consolidation. Hillsdale, NJ: Erlbaum, 1982.
Stein, L., Secondary reinforcement established with subcortical stimulation. Science, 1958, 121, 466-467.
Stein, P.S.G., Intersegmental coordination of swimmeret motoneuron activity in crayfish. Journal of Neurophysiology, 1971, 54, 310-318.
Sternberg, S., Monsell, S., Knoll, R.L., and Wright, C.E., The latency and duration of rapid movement sequences: Comparison of speech and typewriting. In G.E. Stelmach (Ed.), Information processing in motor control and learning. New York: Academic Press, 1978.
Chapter 6
Sternberg, S., Wright, C.E., Knoll, R.L., and Monsell, S., Motor programs in rapid speech: Additional evidence. In R.A. Cole (Ed.), Perception and production of fluent speech. Hillsdale, NJ: Erlbaum, 1980.
Stevens, C.F., Neurophysiology: A primer. New York: Wiley, 1966.
Stevens, K.N., Segments, features, and analysis by synthesis. In J.V. Cavanaugh and I.G. Mattingly (Eds.), Language by eye and by ear. Cambridge, MA: MIT Press, 1972.
Stevens, K.N. and Halle, M., Remarks on analysis by synthesis and distinctive features. In W. Wathen-Dunn (Ed.), Proceedings of the AFCRL symposium on models for the perception of speech and visual form. Cambridge, MA: MIT Press, 1964.
Studdert-Kennedy, M., The nature and function of phonetic categories. In F. Restle, R.M. Shiffrin, N.J. Castellan, H.R. Lindman, and D.B. Pisoni (Eds.), Cognitive theory, Vol. 1. Hillsdale, NJ: Erlbaum, 1975.
Studdert-Kennedy, M., Speech perception. Language and Speech, 1980, 23, 45-65.
Studdert-Kennedy, M., Liberman, A.M., Harris, K.S., and Cooper, F.S., Motor theory of speech perception: A reply to Lane's critical review. Psychological Review, 1970, 77, 234-249.
Sutton, R.S. and Barto, A.G., Toward a modern theory of adaptive networks: Expectation and prediction. Psychological Review, 1981, 88, 135-170.
Warren, R.M., Perceptual restoration of missing speech sounds. Science, 1970, 167, 393-395.
Warren, R.M. and Obusek, D.J., Speech perception and phonemic restorations. Perception and Psychophysics, 1971, 9, 358-362.
Watkins, O.C. and Watkins, M.J., Lateral inhibition and echoic memory: Some comments on Crowder's (1978) theory. Memory and Cognition, 1982, 10, 279-286.
Weisstein, N., A Rashevsky-Landahl neural net: Simulation of metacontrast. Psychological Review, 1968, 75, 494-521.
Weisstein, N., Metacontrast. In D. Jameson and L.M. Hurvich (Eds.), Handbook of sensory physiology, Vol. VII/4. Berlin: Springer-Verlag, 1972.
Welford, A.T., Fundamentals of skill. London: Methuen, 1968.
West, M.O., Christian, E., Robinson, J.H., and Deadwyler, S.A., Dentate granule cell discharge during conditioning. Experimental Brain Research, 1981, 44, 287-294.
Willows, A.O.D., Behavioral acts elicited by stimulation of single identifiable nerve cells. In F.O. Carlson (Ed.), Physiological and biochemical aspects of nervous integration. Englewood Cliffs, NJ: Prentice-Hall, 1968.
Chapter 7
NEURAL DYNAMICS OF WORD RECOGNITION AND RECALL: ATTENTIONAL PRIMING, LEARNING, AND RESONANCE

Preface

This Chapter uses adaptive resonance theory to explain reaction time and error data about word recognition and recall, notably data about lexical decisions, recognition of prior occurrence, familiarity, situational frequency, and encoding specificity. The Chapter also analyses alternative models, such as the logogen model, verification model, Posner and Snyder model, interactive activation model, Mandler model, and Underwood and Freund model, and shows how the useful intuitions of these models are modified and thereby unified within adaptive resonance theory. This synthesis also implies a far-reaching revision of basic cognitive concepts such as automatic versus controlled processing, spreading activation, limited capacity processing, conscious attention, matching and masking, and speed-accuracy trade-off. The same mechanisms which are competent to stably self-organize cognitive recognition codes in a complex input environment (Volume I) herein supply principled explanations of data in the word recognition and recall field. Unlike the interactive activation model, in which both excitatory and inhibitory interactions are postulated to exist between letter and word levels, adaptive resonance theory postulates that all bottom-up and top-down interactions are excitatory, that these interactions occur between item levels and list levels (among others), and that all inhibitory interactions occur within levels. The theory also distinguishes between inhibitory interactions that govern temporal order information across item representations in short term memory and inhibitory interactions that enable unitized list chunks to be learned. In 1985, Rumelhart and Zipser addressed the question of how learning mechanisms might be appended to the interactive activation model, which heretofore had been exclusively a performance model. 
To this end, they published some computer simulations which confirmed stable learning properties of the competitive learning models that I introduced and mathematically characterized in 1976. Their simulation study was aimed at exploring how a competitive learning model might be joined to the interactive activation model. The rules for a competitive learning model are, in fact, inconsistent with the rules for the interactive activation model. In addition, despite the existence of a well-understood range of stable learning within a competitive learning model, such a model also exhibits learning instabilities. The nature of these instabilities shows that competitive learning mechanisms, although highly attractive, are insufficient in themselves to achieve stable learning in a complex environment. Indeed, such learning instabilities plague most of the associative models that are being used today. An analysis of these learning instabilities led me to supplement the bottom-up learning and competition mechanisms of a competitive learning model with top-down priming, learning, and matching mechanisms, as well as with interactions of such an attentional subsystem with an orienting subsystem. In this way, adaptive resonance theory was born in 1976. In 1978, the theory was used to develop explanations of the key properties which McClelland and Rumelhart later described when they introduced the interactive activation model in 1981-1982. It is therefore gratifying to see the gradual transformation of the interactive activation model into a form more consistent with adaptive resonance theory. The present article further differentiates adaptive resonance theory from alternative models by showing, for example, how attentional gain control enables subliminal top-down priming to occur. The matching rule which arises from this insight is then used to
articulate the list-item error trade-off. This trade-off indicates how an initial activation of the wrong list code can tend to correct itself by reading-out a top-down template whose mismatch with bottom-up data causes activation of the list code to collapse. The trade-off is used to explain differences between lexical decision data that are collected with and without masks. The Chapter also suggests an important role for learning mechanisms, notably bottom-up adaptive tuning of recognition codes, in the explanation of word frequency effects. We analyse how the long-term cumulative effects of many "recent occurrences," notably learning-induced structural changes within the internal representations that subserve recognition, interact with a later episode of "recent occurrence" to help generate word frequency effects. Competitive learning networks have recently been used by other modelers, notably Kohonen, to show how spatial maps can self-organize in a variety of interesting applications. When these competitive learning networks are embedded in an ART circuit, they can be used as the "front end" of a network that is capable of stably self-organizing associative maps in response to arbitrarily complex spaces of input and output patterns. Chapters 6-8 mention several associative map learning circuits which are built up from competitive learning mechanisms. These associative maps self-organize in an "unsupervised learning" situation. Recently, a number of "supervised learning" algorithms have been introduced for the learning of associative maps. In both supervised and unsupervised systems, the expected outcome, or "cost," of the behavior is represented. In an unsupervised learning model, these expected outcomes are discovered and learned by the network on its own; for example, in the form of its top-down templates or critical feature patterns (Volume I). In a supervised learning model, the expected outcomes are imposed from outside. 
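The competitive learning rule discussed in this Preface can be sketched in a few lines. The fragment below is our own minimal illustration, not code from the articles: a winner-take-all layer in which the node receiving the largest bottom-up filtered input wins the within-level competition, and only the winner's weight vector moves toward the current input. All names and numerical values are illustrative assumptions.

```python
def competitive_learning_step(weights, x, lr=0.1):
    """One winner-take-all competitive learning step.

    weights: list of weight vectors, one per category node
             (the bottom-up adaptive filter).
    x: input pattern. The node whose weights best match the
    input wins the within-level competition, and only the
    winner's weights move toward the input.
    """
    activations = [sum(w_i * x_i for w_i, x_i in zip(w, x)) for w in weights]
    winner = max(range(len(weights)), key=lambda j: activations[j])
    weights[winner] = [w_i + lr * (x_i - w_i)
                       for w_i, x_i in zip(weights[winner], x)]
    return winner

# Illustrative weights and input (three category nodes, four features).
weights = [[0.2, 0.8, 0.1, 0.5], [0.9, 0.1, 0.7, 0.2], [0.4, 0.4, 0.4, 0.4]]
x = [1.0, 0.0, 1.0, 0.0]
w = competitive_learning_step(weights, x)
```

With repeated presentations of the same input, the winner's weights converge toward that input; the learning instabilities discussed above arise when a changing input environment recodes these weights in uncontrolled ways.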
Comparative analyses of the merits of supervised and unsupervised learning algorithms will be one of the important topics in artificial intelligence in the coming years. Even in an ART circuit, learning is "supervised" when errors cause its vigilance parameter to change, but this change merely alters the circuit's overall sensitivity rather than imposing an explicit target or cost (Volume I, Chapter 4). Externally controlled reinforcements can also regulate the information that is attended and therefore learned (Volume I, Chapters 2 and 3), but reinforcement operations are also distinguishable from the information being learned. It remains to be seen whether supervised learning algorithms which represent supervision exclusively by an externally imposed expected target or cost can provide a parametric understanding of biological learning data or of the behavioral invariances that have not been adequately dealt with by conventional artificial intelligence approaches. That both unsupervised and supervised learning algorithms can learn associative maps will not be the factor that will decide between them.
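The role of the vigilance parameter can likewise be sketched schematically. The fragment below is a simplified, hypothetical ART-style search over binary patterns (it omits the gain-control and real-time differential-equation structure of the actual theory): categories are tried in order of bottom-up overlap, and the orienting subsystem resets any category whose top-down template mismatches the input more than vigilance allows.

```python
def art_search(prototypes, x, vigilance=0.7):
    """Schematic ART-style hypothesis testing on binary patterns.

    Categories are tried in order of bottom-up activation; a
    category whose template matches the input at a ratio below
    vigilance is reset, and search continues. If no category
    matches, a new one is recruited.
    """
    norm_x = sum(x)
    order = sorted(range(len(prototypes)),
                   key=lambda j: -sum(a & b for a, b in zip(prototypes[j], x)))
    for j in order:
        overlap = sum(a & b for a, b in zip(prototypes[j], x))
        if norm_x and overlap / norm_x >= vigilance:
            # Resonance: the template learns the intersection.
            prototypes[j] = [a & b for a, b in zip(prototypes[j], x)]
            return j
    prototypes.append(list(x))   # no match survives vigilance: new category
    return len(prototypes) - 1

protos = [[1, 1, 0, 0], [0, 0, 1, 1]]
j = art_search(protos, [1, 1, 0, 1], vigilance=0.6)
```

Raising vigilance makes the match criterion stricter, so more inputs recruit new categories; this is the sense in which vigilance changes alter the circuit's overall sensitivity without imposing an explicit target.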
Psychological Review, 93, 46-74 (1986). © 1986 American Psychological Association. Reprinted by permission of the publisher.
NEURAL DYNAMICS OF WORD RECOGNITION AND RECALL: ATTENTIONAL PRIMING, LEARNING, AND RESONANCE
Stephen Grossberg† and Gregory Stone‡
Abstract

Data and models about recognition and recall of words and nonwords are unified using a real-time network processing theory. Lexical decision and word frequency effect data are analysed in terms of the same theoretical concepts that have unified data about development of circular reactions, imitation of novel sounds, matching phonetic to articulatory requirements, serial and paired associate verbal learning, free recall, unitization, categorical perception, selective adaptation, auditory contrast, and word superiority effects. The theory, called adaptive resonance theory, arose from an analysis of how a language system self-organizes in real-time in response to its complex input environment. Such an approach emphasizes the moment-by-moment dynamical interactions that control language development, learning, and stability. Properties of language performance emerge from an analysis of the system constraints that govern stable language learning. Concepts such as logogens, verification, automatic activation, interactive activation, limited-capacity processing, conscious attention, serial search, processing stages, speed-accuracy trade-off, situational frequency, familiarity, and encoding specificity are revised and developed using this analysis. Concepts such as adaptive resonance, resonant equilibration of short term memory, bottom-up adaptive filtering, top-down adaptive template matching, competitive masking field, unitized list representation, temporal order information over item representations, attentional priming, attentional gain control, and list-item error trade-off are applied.
† Supported in part by the Air Force Office of Scientific Research (AFOSR 82-0148), the National Science Foundation (NSF IST-8417756), and the Office of Naval Research (ONR N00014-83-K0337).
‡ Supported in part by the Office of Naval Research (ONR N00014-83-K0337).
1. Introduction

An explosive outpouring of data during the past two decades has documented many aspects of how humans process language in response to visual and auditory cues. With the data have arisen a number of conceptual frameworks and models aimed at integrating data from individual experimental paradigms and suggesting new experiments within these paradigms. A complex patchwork of experiments and models has thus far organized the data into a loose confederation of relatively isolated data domains. The time is ripe for a synthesis. A parallel line of theoretical development over the past two decades has begun to achieve such a synthesis (Grossberg, 1980, 1982a, 1982b, 1985a, 1985b; Grossberg and Kuperstein, 1985; Grossberg and Mingolla, 1985a). This article uses the theory's principles and mechanisms to explain some challenging data about letter and word recognition and recall. Alternative models of letter and word recognition and recall are also reviewed and discussed. The theory's principles and mechanisms arose from an analysis of how behaving individuals can adapt in real-time to environments whose rules can change unpredictably. Only a few principles and real-time mechanisms are needed to unify a large data base. We believe that the unifying power of the theory is due to the fact that principles of adaptation-such as the laws regulating development, learning, and unitization-are fundamental in determining the design of behavioral mechanisms. This perspective suggests that the lack of a unifying account of the data base is not due to insufficient data quality or quantity, but to the use of conceptual paradigms that do not sufficiently tap the principles of adaptation that govern behavioral designs. Such adaptive principles are often called principles of self-organization in theoretical biology and physics (Basar, Flohr, Haken, and Mandell, 1983). 
Many of the information processing models that have been suggested during the past two decades have ignored principles of self-organization. Where learning was included, the learning rules and the information processing rules were usually introduced as independent hypotheses. We suggest that the linkage between learning and information processing is more intimate than these models suggest. A growing appreciation of this close linkage is suggested by experiments which demonstrate that five or six presentations of a pseudoword can endow it with many of the identification properties of a high frequency word (Salasoo, Shiffrin, and Feustel, 1985). Such an intimate linkage was also evident in classical paradigms such as serial verbal learning (Underwood, 1966; Young, 1968), wherein the functional units, or chunks, governing a subject’s performance can change in a context-sensitive way from trial to trial. The great successes of the 1970’s in exploring information processing paradigms made it possible to ignore vexing issues concerning dynamically changing functional units for a time, but the price paid has been the fragmentation of explanatory concepts concerning this important data base. The organization of the article reflects the multifaceted nature of its task. To deeply understand word recognition and recall data, one needs to analyse the computational units that subserve speech, language, and visual processing. One needs to consider how these computational units acquire behavioral meaning by reacting to behavioral inputs and generating behavioral outputs; how a particular choice of computational units determines a model’s processing stages and interactions between stages; and how such concepts are supported or challenged by functionally related data other than those under particular scrutiny. 
All the while, one needs to explicate all the hidden processing assumptions that go into a model, and to test their plausibility and ability to arise through self-organization in a stable fashion. To this end, Sections 2-5 review some of the main models and empirical concepts which have been used to explain word recognition data. Models such as the logogen model, the verification model, and the Posner and Snyder model are reviewed using concepts such as automatic activation, limited capacity attention, serial search, and interactive activation. Experimental evidence is cited which suggests the need for multiple processing stages in whatever model is chosen. Internal and predictive limitations
of these models are noted to prepare for a resolution of these difficulties using our theoretical framework. Section 6 begins the exposition of adaptive resonance theory by discussing the relationship between the theory's computational units, its processing stages, and its mechanisms. These are network mechanisms whose interactions define the theory's processing stages and give rise to its computational units. The computational units are spatial patterns of short term memory activation and of long term memory strength. The computational properties of these units are emergent, or collective, properties of the network interactions. These interactions do not include serial programs, algorithms, or cognitive rule structures. Instead, the network as a whole acts as if intelligence is programmed into it. The network's emergent computational properties are not adequately described by either of the familiar metaphors of symbol manipulation or of number crunching. The spatial pattern units are concrete yet indivisible entities that are capable of coding highly abstract context-sensitive information. Breaking such a pattern down into its constituent parts destroys the pattern's contextual information and its behavioral meaning. There are many possible theories which use spatial patterns as their computational units. Section 7 therefore describes the main mechanisms which set apart the present theory from possible alternatives. Just a few mechanisms are needed, despite the theory's broad predictive range. Section 8 shows how these mechanisms can be used to clarify fundamental concepts found in other models, such as attention, limited capacity, and processing stage. Section 9 goes on to show how the distinction between attentional priming and attentional gain control can further clarify concepts of automatic spreading activation and conscious attention. Using these general ideas, we describe the theory's hierarchical organization in Section 10. 
This discussion indicates how the theory's computational units and stages differ in critical ways from those used in other theories. The discussion also clarifies how word recognition and recall are related to other types of speech and language processes, notably to processes governing auditory feature detection, analysis-by-synthesis, matching phonetic to articulatory requirements, imitation of novel sounds, word superiority effects, temporal order information over item representations in short term (or working) memory, and list chunking. The several design principles which are used to determine the architecture of each processing stage are also reviewed. With both mechanistic and organizational processing concepts in hand, we turn in Section 11 to a detailed analysis of the lexical decision data of Schvaneveldt and McDonald (1981). We show how these data are clarified by a property of the theory called the List-Item Error Trade-off. This new trade-off is closely related to the Speed-Accuracy Trade-off. The analysis considers the moment-by-moment influence of related, unrelated, and neutral word primes on subsequent recognition of word and non-word targets under mask and no-mask presentation conditions. The analysis clarifies how unconscious processes can strongly influence the course of recognition (McKay, 1973; Marcel, 1980).
Sections 12-16 illustrate how the theory can explain a different type of word recognition and recall data. We analyse data concerning the recognition of prior occurrence, familiarity, situational frequency, and encoding specificity, such as that of Mandler, Tulving, Underwood and their collaborators. We note problems within the empirical models that have arisen from these data, and indicate how these problems are overcome by our theory. By synthesizing these several domains of recognition and recall data, we illustrate the explanatory power of processing concepts that have been derived from an analysis of language self-organization. We now turn to a description of several of the most important word recognition models and representative data upon which they were built. We recognize that many variations are possible within each framework. Our critique identifies core problems
with certain types of models. We acknowledge that variations on these model types may be devised to partially overcome these core problems. Our critique suggests that any improved model that is suggested must deal with these problems in a principled and uncontrived fashion. At the present time, these models do not seem to contain any principles which naturally overcome the problems. Then we show how our theory deals with these problems in a principled way.

2. Logogens and Embedding Fields
Many recent experiments on word recognition have been influenced, either directly or indirectly, by the seminal work of Morton (1969, 1970). The functional unit of Morton's model is called the logogen. "The logogen is a device which accepts information from the sensory analysis mechanisms concerning the properties of linguistic stimuli and from context-producing mechanisms. When the logogen has accumulated more than a certain amount of information, a response (in the present case the response of a single word) is made available" (Morton, 1969, p. 165). The logogen model can be instantiated as a real-time network. In such a network, a combination of visual and auditory feature detectors and semantically related contextual information can input to network nodes that represent the logogens. When the total input at a logogen node exceeds a threshold value, an output from the node is triggered. A node's threshold defines the "level of evidence" that its logogen requires before it can generate outputs that influence other logogens or post-lexical processes. The logogen model was one of a family of network models that arose in psychology and neurobiology during the 1960's. In the domain of visual neurophysiology, the classical Hartline-Ratliff model of lateral inhibition in the retina of the horseshoe crab (Ratliff, 1965) obeys the same formal rules. It also compares activation due to input increments and decrements to output thresholds at network nodes. Although the interpretation of nodal activations in such models may differ depending on the processing levels that they represent, all of the models are, from a formal perspective, continuous versions of the influential binary-threshold McCulloch-Pitts networks that were introduced in the 1940's (Pitts and McCulloch, 1947). The logogen model differs, however, from the Hartline-Ratliff model in yet another, equally important way. 
Semantically related logogens are assumed to interact more strongly than semantically unrelated logogens. Thus a logogen is tacitly the outcome of a learning process. Within Morton's theory, familiarization with known words is conceptualized as a process whereby output thresholds for existing logogens are lowered by repeated presentations. Lowering a threshold facilitates selection of a logogen. However, the logogen's internal organization does not change as a function of experience. The learning processes whereby internal representations for words are organized and maintained are not rendered explicit or used to derive the model. We will show that mechanisms whereby internal representations, such as logogens, are organized and maintained have properties which also help to explain familiarization effects. Moreover, changes in output threshold are not among these mechanisms. Thus an analysis of how logogens become logogens leads to different explanations of the types of data that the logogen theory was constructed to explain. At the same time that Morton was developing his network model for word recognition, Grossberg was developing network models in which learning issues were central. These networks, called embedding fields, were introduced to explain data about human learning, such as serial verbal learning and paired associate learning (Grossberg, 1969a; Grossberg and Pepe, 1971), performance speed-up and perceptual masking due to the interaction between learning, lateral inhibition, and thresholds (Grossberg, 1969b), and unitization of hierarchically organized sensory-motor plans and synergies (Grossberg, 1969c, 1970a). Grossberg also showed that the Hartline-Ratliff network could be derived as a special case of an embedding field in which no learning occurs (Grossberg, 1969b) and that the simplest circuits for learning and performing sequential motor acts, the
so-called avalanche circuits, were similar to the command cells of invertebrates (Grossberg, 1970a, 1982a). Such derivations suggested that formal similarities exist between the processing laws utilized by vertebrates and invertebrates, yet that these laws are organized into different circuits across species to solve different classes of environmental problems. Thus it was clear by the late 1960's that an analysis of learning could lead to conclusions about performance that cut across species and psychological-neural boundaries. The original embedding field theory also had a limited domain of applicability beyond which its predictive power failed. The nature of this failure helped to extend the theory by suggesting other principles and mechanisms of self-organization that are used to generate complex behaviors (Grossberg, 1982a). We indicate in this article how offspring of the original logogen networks can be unified by offspring of the original embedding field networks.

3. Verification by Serial Search
The incompleteness of the logogen model was suggested by several types of evidence. Meyer, Schvaneveldt, and Ruddy (1974) showed that in a lexical decision experiment stimulus quality and semantic context interact. Becker and Killion (1977) studied the interaction between stimulus quality and word frequency. They found an interaction of about one-quarter of the main effect of stimulus quality. The interaction, although in the direction predicted by the logogen model, did not reach statistical significance. Under the assumptions of the Sternberg (1969) additive factor method, these data suggested that stimulus quality and semantic context affect at least one common stage of information processing, whereas stimulus quality and word frequency influence different stages of information processing. In particular, semantic context can influence the relatively early stages of processing at which stimulus quality is important, and word frequency influences a later processing stage. Since both semantic context and word frequency effects arise through learning, these data also indicate that learning can go on at multiple stages during language learning experiences. The logogen model did not specify two levels at which learned factors can operate, although considered as a framework rather than a definite model, any number of levels could in principle be contemplated. Instead, within the logogen model, semantic context acts by supplying activation to target logogens from semantically related logogens. This type of context effect is equivalent to lowering the output threshold, or criterion, of the target logogens, which is the mechanism used to account for faster recognition of high frequency words. In a lexical decision task, such a lowering of criterion should induce bias toward word responses. 
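The criterion-shift argument can be made concrete with a toy logogen unit (the numbers and threshold values below are our own, purely illustrative). Contextual activation that adds to the sensory input is formally equivalent to lowering the output threshold, so within the logogen model context should merely bias responses toward "word" rather than facilitate the processing of stimulus information:

```python
def logogen_fires(sensory_input, context_input, threshold):
    """A logogen emits its response once accumulated evidence
    (sensory plus contextual activation) exceeds its criterion."""
    return sensory_input + context_input >= threshold

# A degraded stimulus that misses the normal criterion...
assert not logogen_fires(0.5, 0.0, threshold=0.8)
# ...is recognized when semantic context adds activation,
assert logogen_fires(0.5, 0.4, threshold=0.8)
# which is formally equivalent to lowering the criterion:
assert logogen_fires(0.5, 0.0, threshold=0.4)
```

Because the summed-evidence rule cannot distinguish these two cases, the data summarized next, which show that context does more than shift the criterion, lie outside the model's reach.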
Schvaneveldt and McDonald (1981) summarized data of Antos (1979), Lapinski and Tweedy (1976), McDonald (1977), McDonald and Schvaneveldt (1978), and Schuberth (1978) which disconfirmed this expectation. Semantic context thus does not merely induce a criterion shift. Somehow, context facilitates the processing of information in the stimulus. The verification model was introduced to compensate for this weakness of the logogen model (Becker, 1976; Becker and Killion, 1977; Becker, Schvaneveldt, and Gomez, 1973; Paap, Newsome, McDonald, and Schvaneveldt, 1982; Schvaneveldt, Meyer, and Becker, 1976). To deal with the two stages at which learned factors can operate, the verification model "assumed that the feature analysis and feature increment process is an indeterminant one: a process that results in numerous word detectors exceeding criterion... The function of this type of processing is to delineate a subset of lexical memory items that are consistent with the primitive visual feature information... It is assumed that the verification process selects a word from the sensory set and uses an abstract representation stored with the word to 'predict' the relational features contained in visual memory. If the predicted relational features match those found in visual memory, then the word that generated the predictions is recognized. If the predictions fail to match the stimulus, another word is sampled from the sensory set to be compared
against the stimulus. Thus verification is an interactive comparison process that operates on one word at a time [italics ours]... The operation of semantic context in the verification model is as follows: When a context word is recognized, a semantic priming process, similar to that suggested by Morton (1969), activates word detectors that are semantically related to the prime word... it is assumed that the semantically defined set of words is sampled by the verification process during the time that the sensory feature set is being defined... Thus, if the stimulus presented following the prime word DOCTOR is semantically related, then that stimulus would be recognized by the successful verification of a word selected from the semantic set... the effect of a semantic context is to bypass the visual feature analyser component of the model. If the new stimulus is not related to the context word, then the semantic set would be exhaustively sampled [italics ours], and verification would proceed to sample the sensory set... for a word in context, the only effects of intensity would be those localized in the peripheral visual system... the interaction of intensity with context derives from the effect of intensity on a process that is necessary to recognize a word out of context but that is bypassed for a word in context" (Becker and Killion, 1977, pp. 395-396). Adding the verification process introduced a second stage at which learning could occur into the word recognition literature. This second stage operates according to processing assumptions that deserve further analysis. The assumption that verification operates on one word at a time creates problems of implementation if serial mechanisms are used. In a serial processor, order information is typically stored using some type of buffer. The order in which items are arranged in the buffer slots determines the order of search. In the verification model, two types of buffers would be needed. 
One would order items in terms of decreasing semantic relatedness, the other in terms of decreasing word frequency. To determine decreasing word frequency, pairwise comparisons of all word frequencies in the sensory set, or some equivalent procedure, would have to take place before any item could be stored in the corresponding buffer. The system would have to be able to compare arbitrary pairs of items, store the results of these arbitrary comparisons, and then move and store the data repeatedly until a winner could be found. Only then could an item be stored in the buffer. After the first item is stored, the entire process would be repeated on the remaining items. In short, ordering items according to word frequency in a serial buffer tacitly implies the existence of a complex presorting device in which arbitrary pairs of items can be accessed and compared using a procedure whose total number of operations increases exponentially with candidate set size. The same type of presorting process would be needed to order items in the semantic set according to decreasing semantic relatedness. It is left unclear in the verification model how to index word relatedness or word frequency in a way that could support such comparisons. Finally, since the sensory set cannot be searched before the semantic set is fully sampled, one must assume that matches based upon word frequency do not even begin until every item of the semantic set is serially eliminated. Thus the demands of the verification model on a serial buffer are much more severe than the demands placed on a serial buffer which attempts merely to store items in their presentation order. Even such classical serial buffer concepts do not fare well when they are analysed from the perspective of self-organization (Grossberg, 1978b). We have not succeeded in finding any mechanism that could self-organize these processing requirements for a serial buffer. 
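As a toy illustration of the presorting burden just described, the following sketch (ours, not from the original text, with made-up words and invented frequency counts) orders a candidate set by decreasing word frequency purely through repeated pairwise comparisons, counting the operations a serial presorting device would have to perform before any item could be placed in the buffer:

```python
# Toy sketch of the serial presorting the verification model would need:
# before any item enters the buffer, repeated pairwise frequency
# comparisons must find each successive "winner".
def presort_by_frequency(candidates):
    """Order candidate words by decreasing frequency, counting the
    pairwise comparisons a serial presorting device would perform."""
    remaining = dict(candidates)  # word -> frequency
    buffer, comparisons = [], 0
    while remaining:
        winner = None
        for word, freq in remaining.items():
            if winner is not None:
                comparisons += 1          # one pairwise comparison
            if winner is None or freq > remaining[winner]:
                winner = word
        buffer.append(winner)
        del remaining[winner]             # repeat on the remaining items
    return buffer, comparisons

# Hypothetical sensory set with invented frequency counts:
order, cost = presort_by_frequency({"doctor": 910, "docket": 40, "doctrine": 120})
```

Even this simplified selection procedure performs on the order of N^2 comparisons for a candidate set of size N, before a single item has been matched against the stimulus; the full presorting process described in the text is costlier still.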
Of course, one might argue that a concept of verification can be salvaged by characterizing suitable parallel mechanisms. Such a characterization is a primary task of this article. Once such a characterization is articulated, however, a finer processing language replaces concepts such as verification. Coltheart, Davelaar, Jonasson, and Besner (1977) obtained evidence against serial search in word recognition using a lexical decision experiment. They varied the number N of English words that can be produced by changing just one letter in the target letter string. They found that reaction times to respond "word" were independent of N, but reaction times to respond "nonword" increased with N. Coltheart et al. used this type of evidence to argue against serial search and for some type of parallel access to lexical
information. The relevance of the Coltheart et al. data to serial search concepts can be understood by considering several cases. Consider a serial search which uses the complete lexicon as its search set. Then both "word" and "nonword" responses should depend upon N in a similar way. If the time to make each match were independent of N, then both sets of reaction times should be independent of N. If the time to make each match depended on N, then both sets of reaction times should depend on N. In the verification model, by contrast, a serial search would be restricted to the visual set, whose size increases with N. In this situation, since times to respond "nonword" would require an exhaustive search of the visual set, increasing set size should increase reaction times. Times to respond "word" would not require an exhaustive search, but they too should increase with N, although less rapidly. Thus the verification model is consistent with the Coltheart et al. data if the experiment was not powerful enough to detect the smaller effect of N on "word" response reaction times. On the other hand, this type of explanation becomes less plausible if serial presorting operations increase exponentially with N. To reconcile this implication of a serial presorting mechanism with the Coltheart et al. data would seem to require that the sum total of an exponentially growing number of presorting operations takes much less time than a linearly increasing number of buffer matching operations. Thus a specification of real-time serial mechanisms to instantiate the verification model raises concerns not only about a serial model's ability to self-organize, but also about the plausibility of the model's explanations of certain data.
Even if a parallel presorting mechanism is assumed, a mechanistic analysis raises a serious problem concerning the assumption that "the interaction of intensity with context derives from the effect of intensity on a process that is necessary to recognize a word out of context but that is bypassed for a word in context" (Becker and Killion, 1977, p.396). The model asserts that the time to match the information in the sensory buffer with items in the semantic set does not depend upon the quality of the sensory information. The model also asserts that the activation of the visual feature analysers does depend upon the quality of information in the sensory buffer. It is unclear what kind of matching mechanism could be insensitive to the quality of the information being matched, especially when the quality of this information does affect other processing channels. Thus despite the great heuristic value of the verification model, attempts to embody the model using real-time mechanisms provide converging evidence against certain versions of the verification process.

4. Automatic Activation and Limited-Capacity Attention
The verification model explicitly recognizes that at least two processes are required to discuss data about word recognition. Other two-process theories have also been useful in the analysis of word recognition data. The popular model of Posner and Snyder (1975a) posits one process whereby a stimulus rapidly and automatically activates its logogen, which in turn rapidly activates a set of semantically related logogens. The second process is realized by a limited-capacity attentional mechanism. It is slower acting, cannot operate without conscious attention, and inhibits the retrieval of information stored in logogens upon which attention is not focused. If a stimulus activates an unattended logogen, that logogen can build up activation but cannot be read out for post-lexical processing or response selection. Before output can occur from an active logogen, a limited-capacity attentional process must be shifted to that logogen. This two-process theory successfully explains various data about word recognition (Neely, 1977; Posner and Snyder, 1975b). It also, however, raises serious questions. For example, suppose that attention happens to be misdirected away from the semantic set that is activated by a stimulus word. How does the processing system then redirect attention to this semantic set? Unless attention is randomly redirected, which is manifestly false, or an undefined agent redirects attention towards the semantic set, signals from the logogens in the semantic set to the attentional mechanism are the
only agents whereby attention can be redirected to the semantic set. As Neely (1977) notes, "the conscious-attention mechanism does not inhibit the build-up of activation in unattended logogens, but rather inhibits the readout of information from unattended logogens...before the information stored at the unattended logogen can be analysed in preparation for response initiation, the conscious-attention readout mechanism must be 'shifted' to that logogen" (p.228). The Posner-Snyder model is silent about how logogens may draw attention to themselves without outputting to post-lexical processes. The deep feature of the Posner-Snyder model lies in the fact that attention can eventually get redirected, that competitive or limited-capacity processes are often involved in its redirection and sharpening, and that these processes occur after the initial wave of logogen activation. At first glance, it may appear that the incompleteness of the Posner-Snyder model can be resolved if an attention shift can be a consequence of logogen output as well as a cause of logogen output, as in the attention theory of Grossberg (1975). Further consideration shows, however, that the problem cannot be resolved just by assuming that active logogens can draw attention to themselves. In a spreading activation model, the activation across logogens can be of unlimited capacity. How an unlimited-capacity output from logogens to the attentional mechanism can be transformed into a limited-capacity output from logogens to other post-lexical processes is not explained within the Posner-Snyder model. We trace this problem within the Posner-Snyder model to its very foundations; in particular, to its choice of spreading activation as a computational unit. By design, spreading activation of their type does not have limited capacity.
Even if each logogen were to activate connected logogens by only a fraction of its current activation, the total logogen activation could still grow with the number of activated logogens, and hence would not be of limited capacity. By contrast, any model whose lexical processing takes place among logogens must generate a limited-capacity logogen output to post-lexical processes. In the Posner-Snyder framework, because spreading activation has no internal mechanism to restrict its capacity, an external mechanism that interacts with the spreading activation is needed to impose a limited capacity upon the logogens. In our theory, the computational unit is no longer a spreading activation among network nodes. It is a spatial pattern of activity that is processed as a whole across a field of network nodes. Such a spatial pattern has limited capacity whether or not attention is directed towards it. Thus our computational unit per se does not require a two-process model. This change of unit does, however, necessitate a far-reaching revision in how one thinks about basic notions such as attention and capacity (Grossberg, 1984). The nature of these changes is hinted at by a comparison of the verification model with the Posner-Snyder model. Both models have posited a second process subsequent to logogen activation to deal with word recognition data. In the light of our analysis of the verification model, we can now ask: How does the rapid activation of logogens initiate competitive interactions, notably interactions due to read-out of a top-down expectancy which is matched against sensory data? How does the notion of verification relate to the notion of attention? Can conscious attention be an outcome of verification, rather than its cause?

5. Interactive Activation and Parallel Access
The previous analysis suggests some of the problems faced by a verification process that utilizes a serial search mechanism. To overcome such difficulties, McClelland and Rumelhart have described a two-level model in which serial search is replaced by parallel access (McClelland and Rumelhart, 1981; Rumelhart and McClelland, 1982). The theory that we will apply, which was introduced in Grossberg (1978a), also has this "interactive activation" characteristic. We do not use the McClelland and Rumelhart formulation for several reasons. We have elsewhere argued that their model cannot arise in a stable manner as a result of a self-organization process, and that both its nodes and its interactions are incompatible with some word recognition data that our theory
has successfully predicted (Grossberg, 1984, 1985c). Within the general framework of real-time network models, there exist many different possibilities. The present theory is one of many "interactive activation", or real-time network, theories. It happens to be one that is capable of stable self-organization and is able to explain a larger data base than the McClelland and Rumelhart version.

6. The View from Adaptive Resonance Theory
We will explain word recognition data using the theory of human memory that was developed in Grossberg (1978a). This theory is more complex than many psychological models because it analyses how auditory, visual, cognitive, and motor representations can develop and be learned in real-time within individual learning subjects. The simultaneous consideration of several modalities by the theory enables one to discover many more design constraints than can consideration of any one factor taken in isolation. For example, interactions between auditory and motor factors impose constraints upon how internal language representations are initiated, imitated, chunked, recognized, and used to generate motor commands for recall. Interactions between visual, auditory, and motor factors impose constraints upon how visual symbols for language units, such as letters, numbers, and words, can be recognized through learned verbally-mediated language representations, which in turn can generate motor commands for recall. A central issue in each of these intermodal interactions concerns the manner in which development, learning, and unitization are stabilized via the action of feedback loops. Some of these feedback loops are closed internally, via bottom-up and top-down signal exchanges. Other feedback loops are closed externally, via the registration of environmentally-mediated sensory feedback signals. These cyclic organizations, rather than one-way traffic between processing stages, define the computational units that have behavioral meaning. Piaget (1963) and Neisser (1967) both emphasized the importance of the cyclic organization of perceptual and cognitive units, Piaget through his theory of circular reactions and Neisser through his theory of perceptual cycles. Circular reactions are, in fact, dynamically characterized within the theory and play an important role in initiating the self-organization of its cyclic memory structures.
The term adaptive resonance was introduced in Grossberg (1976b) to describe this cyclic aspect of the theory. An adaptive resonance is a fully elaborated recognition event within a feedback network capable of dynamically buffering its learned codes and expectancies against recoding by irrelevant cues. The final results of the Grossberg (1978a) analysis are a Macrotheory and a Microtheory. These two aspects of the theory coexist in a mutually supportive relationship. The Macrotheory consists of several design principles, dynamical laws, and macrocircuits whose macrostages compute functionally characterized properties. The Microtheory describes the processes that generate the properties of the various macrostages. Unlike many Artificial Intelligence models, the Macrotheory and the Microtheory cannot easily be dissociated. This is because the critical properties at the macrostages are interactive, or collective, properties of the Microtheory's processes. Even the apparently local concept of feature detector is the net effect of widespread interactions within a Microtheory network. The Microtheory thus does not freely invent properties at each macrostage. Each process of the Microtheory generates a formal syndrome of interactive properties in response to prescribed experimental and system constraints. The internal structuring of these syndromes defines the Macrotheory properties and is the source of the theory's predictive force. The Macrotheory's general principles and laws severely constrain the types of microprocesses that are allowed to occur at any macrostage. Only a few principles and laws are used in the entire theory, despite its broad scope. For example, every stage of the theory is a mixed cooperative-competitive network, and every interstage signal process is an adaptive filter. Furthermore, the same mechanisms are used to
generate chunking and temporal order properties of both language and motor control processes. That feedback cycles define the basic building blocks of the theory leads to a sobering conclusion. Such feedback cycles must be built up out of nonlinear mechanisms, since linear mechanisms have been proven to be unstable (Grossberg, 1973, 1983). Thus a certain amount of mathematics is needed to achieve a deep understanding of behavioral self-organization. The human mind does not easily grasp nonlinear interactions among thousands or millions of units without mathematical tools. Fortunately, once one has identified good nonlinear laws, a mathematical theory with data-predictive properties can be developed.
7. Elements of the Microtheory: Tuning, Categories, Matching, and Resonance

This section describes some of the interactions of short term memory (STM) processes at successive network stages with bottom-up and top-down long term memory (LTM) processes between these stages. We denote the ith stage in such a network hierarchy by F_i. Suppose that a pattern X of STM activities is present at a given time across the nodes of F_i. Each sufficiently large STM activity can generate excitatory signals that are transmitted by the pathways from its node to target nodes within F_{i+1}. When a signal from a node in F_i is carried along a pathway to F_{i+1}, the signal is multiplied, or gated, by the pathway's LTM trace. The LTM-gated signal (signal times LTM trace), not the signal alone, reaches the target node. Each target node sums up all of its LTM-gated signals. In this way, a pattern X of STM activities across F_i elicits a pattern S of output signals from F_i. Pattern S, in turn, generates a pattern T of LTM-gated and summed input signals to F_{i+1}. This transformation from S to T is called an adaptive filter. The input pattern T to F_{i+1} is itself quickly transformed by a cooperative-competitive interaction among the nodes of F_{i+1}. In the simplest example of this process, these interactions choose the node which received the largest input (Grossberg, 1976a, 1982a). The chosen node is the only one that can store activity in STM. In other words, the chosen node "wins" the competition for STM activity. The choice transformation executes the most severe type of contrast enhancement by converting the input pattern T, in which many signals can be positive, into a pattern Y of nodal activities in which at most one activity is positive. In more realistic cooperative-competitive interaction schemes, the contrast enhancing transformation from T to Y is more subtle than a simple choice, because it is designed to properly weight many possible groupings of an input pattern.
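The bottom-up transformation X→S→T→Y with the simplest choice scheme can be sketched numerically as follows; the activities and LTM traces below are invented for illustration:

```python
# Illustrative sketch of the bottom-up transformation X -> S -> T -> Y
# with made-up activities and LTM traces.
def adaptive_filter_choice(X, Z, threshold=0.0):
    """X: STM activities across F_i; Z[k][j]: LTM trace z_kj from node k
    of F_i to node j of F_{i+1}. Returns the choice-stored pattern Y."""
    # Output signals: only sufficiently large activities emit signals
    S = [x if x > threshold else 0.0 for x in X]
    n_targets = len(Z[0])
    # T_j = sum_k S_k * z_kj: each signal is gated by its LTM trace, and
    # the gated signals are summed at each target node of F_{i+1}
    T = [sum(S[k] * Z[k][j] for k in range(len(S))) for j in range(n_targets)]
    # Simplest cooperative-competitive interaction: the node receiving the
    # largest input "wins" and alone stores activity in STM
    winner = max(range(n_targets), key=lambda j: T[j])
    return [T[j] if j == winner else 0.0 for j in range(n_targets)]

# Hypothetical two-node example:
Y = adaptive_filter_choice([1.0, 0.5], [[0.9, 0.1], [0.2, 0.8]])
```

Here the first target node receives the larger gated sum and wins, so Y contains a single positive activity: the most severe form of contrast enhancement described above.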
Such multiple grouping networks are generically called masking fields (Section 10). In every case, the transformed activity pattern Y, not the input pattern T, is the one that is stored in STM at F_{i+1} (Figure 1a). In every case, the transformation of T into Y is nonlinear. Only nodes of F_{i+1} which support stored activity in STM can elicit new learning at their contiguous LTM traces. Thus, whereas all the LTM traces in the adaptive filter, and thus all learned past experiences of the network, are used to determine recognition via the transformation X→S→T→Y, only those LTM traces whose target activities in F_{i+1} survive the cooperative-competitive struggle for stored STM activity can learn in response to the activity pattern X. The fact that only the transformed STM patterns Y, rather than the less focused input pattern T, can influence LTM in the network helps to explain why recognition and attention play an important role in learning (Craik and Lockhart, 1972; Craik and Tulving, 1975). This type of feedback interaction between associative LTM mechanisms and cooperative-competitive STM mechanisms has many useful properties (Grossberg, 1976a, 1978a). It generalizes the Bayesian tendency to minimize risk in a noisy environment. It
Figure 1. Bottom-up interaction of short term memory and long term memory between network levels: (a) An activity pattern X at level F_i gives rise to a pattern of output signals S. Pattern S is multiplicatively gated by long term memory traces. These gated signals summate to form the input pattern T to level F_{i+1}. Level F_{i+1} contrast enhances T before storing the contrast-enhanced activity pattern Y in short term memory. (b) Each activity x_k in F_i gives rise to a signal S_kj (possibly zero) which is gated by a long term memory trace z_kj before the gated signal activates x_j in F_{i+1}.
spontaneously tends to form learned categories Y in response to the activity patterns X. Novel activity patterns X can be unambiguously classified into a category on their first presentation if they are weighted averages of experienced patterns that all lie within a single category. Input patterns that are closely related to previously classified patterns can elicit faster processing than input patterns that are not. The rate of processing is also sensitive to the number and dissimilarity of experienced patterns that fall within a category. For example, a larger number of dissimilar patterns can cause slower processing, other things being equal. Learning by each LTM trace is sensitive to the entire activity pattern X that is active at that time, as well as to all prior learning at all LTM traces. This property follows from the fact that the output pattern T engages the cooperative-competitive STM interactions across F_{i+1} to generate Y, before Y, in turn, regulates how each LTM trace will change in response to X. Thus both learning and recognition are highly context-sensitive. The learning and recognition capabilities of the choice model are mathematically characterized in Carpenter and Grossberg (1985) and Grossberg (1976a). The properties of the masking geometry model are described in Cohen and Grossberg (1985) and Grossberg (1978a, 1985). The bottom-up STM transformation X→S→T→Y is not the only signal process that occurs between F_i and F_{i+1}. In the absence of top-down processing, the LTM traces within the learned map S→T can respond to a sequence of input patterns by being ceaselessly recoded by inappropriate events (Grossberg, 1976a). In other words, the learning within the bottom-up code can become temporally unstable, in that individual events are never eventually encoded by a single category as they are presented on successive trials. Carpenter and Grossberg (1985) describe an infinite class of examples in which such ceaseless recoding occurs.
Information processing models could not articulate this basic stability problem because they did not use learning in an essential way. Simulation studies of conditioning in response to impoverished input environments also failed to deal with the problem. In Grossberg (1976b), it was shown that properly designed mechanisms for top-down signaling from F_{i+1} to F_i and for matching within F_i can stabilize learning within the bottom-up code against recoding by inappropriate events. Because information processing models could not achieve this insight about the functional role of top-down processing, the constraints that follow from this insight were not available to help choose among the many possible embodiments of top-down signaling and matching. The aspects of this top-down scheme that we will need are reviewed below. The STM transformation X→S→T→Y takes place very quickly. By "very quickly" we mean more quickly than the rate at which the LTM traces in the adaptive filter S→T can change. As soon as the bottom-up transformation X→Y takes place, the activities Y in F_{i+1} elicit top-down excitatory signals U back to F_i (Figure 2a). The rules whereby top-down signals are generated are the same as the rules by which bottom-up signals are generated. Signal thresholds allow only sufficiently large activities in Y to elicit the signals U in pathways from F_{i+1} to F_i. The signals U are gated by LTM traces in these pathways. The LTM-gated signals excite their target nodes in F_i. These LTM-gated signals summate at the target nodes to form the total input signals from F_{i+1}. In this way, the pattern U of output signals from F_{i+1} generates a pattern V of input signals to F_i. The map U→V is said to define a top-down template, or learned expectation, V. Note that V is not defined exclusively in terms of LTM traces. It is a combination of LTM traces and the STM activities across F_{i+1}.
Figure 2. Top-down interaction of short term memory and long term memory between network levels: (a) An activity pattern Y at level F_{i+1} gives rise to a pattern of output signals U. Pattern U is multiplicatively gated by long term memory traces. These gated signals summate to form the input pattern V to level F_i. Level F_i matches the bottom-up input pattern I with V to generate a new activity pattern X' across F_i. (b) Each activity x_j in F_{i+1} gives rise to a signal U_jk (possibly zero) which is gated by a long term memory trace Z_jk before the gated signal activates x_k in F_i.

Two sources of input now perturb F_i: the bottom-up input pattern I which gives rise to the original activity pattern X, and the top-down template V that results from activating X. At this point, descriptive language breaks down unless it is supported by precise mathematical concepts. For, once the feedback loop X→S→T→Y→U→V→X' closes, it continues to reverberate quickly between F_i and F_{i+1} until its activity patterns equilibrate. Notations such as X and Y are inadequate to describe this equilibration process because the activity pattern X' across F_i that is induced by I and V taken together may not be the same activity pattern X that was induced by I alone. This composite activity pattern quickly activates a new template, which quickly activates a new activity pattern, which quickly activates a new template, ad infinitum, as the system equilibrates. This conceptual problem is naturally handled by the formalism of nonlinear systems of differential equations, as is the quantitative analysis of equilibration and its relationship to learning, recognition, and recall. To sidestep these technical difficulties, we complete our intuitive discussion using suggestive, but incompletely specified, terminology. Due to the design of its cooperative-competitive interactions, F_i is capable of matching a bottom-up input pattern I with a top-down template V (Grossberg, 1976b, 1983). The functional units that are matched or mismatched at F_i are whole input patterns, which are spatially distributed across the nodes of F_i, rather than the inputs to each node. This choice of functional units is not an independent hypothesis. It is forced by two sets of mathematical results that follow from more basic theoretical hypotheses. The first set of mathematical results proves that the functional unit of associative learning and LTM in such networks is a spatial pattern of LTM traces (Grossberg, 1969d, 1982a). The second set of mathematical results proves that the functional unit of matching and STM is a spatial pattern of STM traces (Grossberg, 1970b, 1982a). We illustrate what is meant by saying that the functional units are spatial patterns of STM and LTM traces, rather than individual traces, with the following properties. A bottom-up input and a top-down template signal to a node can be equal, yet they can be part of mismatched bottom-up and top-down patterns. A bottom-up input and a top-down template signal to a node can be different, yet they can be part of perfectly matched bottom-up and top-down patterns. The relative sizes of the traces in an STM pattern or an LTM pattern determine the "information" carried by the pattern.
The scaling parameter which multiplies these relative sizes into their actual sizes is an "energy" variable that determines properties such as how quickly this "information" is processed, rather than what the information is. In the special case wherein a top-down template V perfectly matches a bottom-up input pattern I, the relative sizes of the activities that comprise X can be preserved after V is registered, while the absolute size of each activity is proportionally increased. Thus a perfect top-down match does not disrupt ongoing bottom-up pattern processing. Rather, it facilitates such processing by amplifying its energy without causing pattern distortions. This energy amplification due to pattern matching is one of the properties used to generate adaptive resonances. Pattern matching within such a network thus does not just compare bottom-up and top-down inputs at each node, as in Euclidean matching algorithms. Instead, the network senses the degree of match between subpatterns of the total bottom-up and top-down input. As in Figure 3, approximate matches between subpatterns of I and V tend to enhance the corresponding activities of X, whereas serious mismatches between I and V tend to suppress the corresponding activities of X. The effect of an approximate match is to deform X so that it forms a compromise between I and V. When X is deformed, so are all the patterns that form the map X→S→T→Y. The net effect of all these shifts is to represent the same input pattern I by a new activity pattern Y whose template V provides the best match to I. As this best match equilibrates, the system enters a state of energy amplification, or resonance. The resonant state persists long enough for the slowly varying LTM traces of the bottom-up adaptive filter S→T and the top-down template U→V to adjust their values to the resonating STM patterns.
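The amplify-on-match, suppress-on-mismatch behavior can be caricatured with a simple componentwise rule. This is our own illustrative sketch, not the network's actual shunting dynamics: the gain and tolerance parameters below are invented for exposition.

```python
# Illustrative caricature of template matching; in the theory itself this
# behavior emerges from cooperative-competitive shunting interactions.
def match_input_with_template(I, V, gain=2.0, tol=0.3):
    """Compare a bottom-up input pattern I with a top-down template V.
    Approximately matched components are amplified; seriously mismatched
    components are suppressed."""
    X_prime = []
    for i_k, v_k in zip(I, V):
        if abs(i_k - v_k) <= tol * max(i_k, v_k, 1e-9):
            X_prime.append(gain * (i_k + v_k) / 2.0)  # amplify the compromise
        else:
            X_prime.append(0.0)                        # suppress the mismatch
    return X_prime

# A perfect match amplifies every activity while preserving relative sizes:
X1 = match_input_with_template([0.4, 0.6], [0.4, 0.6])
# A serious mismatch suppresses the corresponding activities:
X2 = match_input_with_template([0.4, 0.6], [0.9, 0.1])
```

Note that when I and V agree exactly, every component of X1 is simply the input scaled by the gain, so the relative sizes (the "information") are untouched while the total activity (the "energy") grows, as the text describes.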
Only LTM traces lying in pathways that carry signals to or away from active nodes in F_{i+1} can learn after resonance has been reached. The computational unit of the feedback network F_i <-> F_{i+1} is called an adaptive resonance because resonance triggers learning. When learning does occur, its form is quite simple. Denote by z_kj the LTM trace in the pathway from node v_k in F_i to node v_j in F_{i+1} (Figure 1b). We restrict attention to the LTM traces z_kj in the pathways that lead to a fixed node v_j of F_{i+1}. Whenever the STM trace x_j at v_j remains suprathreshold for a long enough time, each z_kj can
Figure 3. Matching of a bottom-up input pattern with a top-down template pattern: Regions of approximate match lead to an amplification of the corresponding activities. Regions of serious mismatch lead to a suppression of the corresponding activities. Abbreviations: I = input pattern, V = template pattern, X' = activity pattern due to conjoint action of I and V.
gradually become proportional to the signal S_kj emitted by v_k into its pathway. Two important properties are implicit in this statement. First, the rate with which z_kj changes increases with the size of x_j. Second, while x_j is active, the LTM vector z_j = (z_1j, z_2j, ..., z_nj) of the LTM traces leading to v_j gradually becomes parallel to the signal vector S_j = (S_1j, S_2j, ..., S_nj) emitted by F_i to v_j. The following learning law (Grossberg, 1969d, 1976a) is the simplest differential equation for the rate of change (d/dt) z_kj of the LTM trace z_kj that rigorously captures these properties:

(d/dt) z_kj = α x_j (-z_kj + β S_kj).     (1)

In (1), α and β are positive constants. At times when x_j = 0, it follows that (d/dt) z_kj = 0, so that no learning occurs. Positive values of x_j induce positive learning rates. Increasing x_j increases the learning rate. The rate constant α is sufficiently small that momentary activations of x_j cause insignificant changes in z_kj. If x_j remains positive for a sufficiently long time (that is, if x_j is stored in STM at F_{i+1}), then z_kj approaches the value β S_kj, which is proportional to the signal S_kj. For simplicity of exposition, we suppose that β = 1. Then (1) says that the LTM vector z_j approaches the signal vector S_j at a rate that increases with the STM activity x_j. The total input T_j to v_j is defined by T_j = sum_{k=1}^{n} S_kj z_kj, which can also be written as the dot product T_j = S_j . z_j of the vectors S_j and z_j. As z_j approaches S_j due to learning, T_j becomes larger. This property indicates how a fixed signal vector S_j can generate an amplified and faster reaction of x_j due to prior learning by z_j. This type of performance speed-up does not require a larger total size sum_{k=1}^{n} z_kj of the LTM traces abutting v_j. Rather, it requires a repatterning of the LTM traces within the vector z_j. The signal patterns S_j that succeed in activating v_j can differ on successive learning trials. The LTM vector z_j thus encodes a weighted time-average of all the signal patterns S_j that v_j has actually sampled. The weight of a particular pattern S_j in this average increases with the intensity and duration of x_j's suprathreshold activity during the learning episodes when S_j was active. Even though v_j may have intensely sampled a particular S_j on a previous learning trial, the LTM vector z_j will not generally equal S_j on the current trial. The LTM vector z_j provides a statistical measure of all the patterns S_j that ever activated v_j. If many signal vectors S_j succeed in activating v_j, then z_j will be different from any one of the vectors S_j since it computes an average of all the vectors.
Thus the dot product T_j = S_j · z_j of any one of these vectors S_j with the average encoded by z_j may become smaller as the number of exemplars S_j within the category corresponding to v_j increases. The amount of practice on a single pattern (e.g., familiarity) and category variance will thus tend to have opposite effects on the reaction rate of F_{i+1}. A similar type of learning rule governs the LTM traces Z_jk in the top-down pathways from nodes v_j in F_{i+1} to nodes v_k in F_i (Figure 2b). Again x_j must be suprathreshold before Z_jk can learn. At least one fundamental difference exists, however, between bottom-up learning and top-down learning. During bottom-up learning, each LTM vector z_j determines its own total input T_j to v_j. These inputs trigger a cooperative-competitive struggle among the STM activities x_j of F_{i+1}. Consider, for example, the case wherein the node v_j which receives the largest input T_j wins the STM competition. Then each input T_j acts to improve the competitive advantage of its own STM trace x_j.
Neural Dynamics of Word Recognition and Recall

419

The same is not true during top-down learning. Once the bottom-up filter determines which STM activities x_j in F_{i+1} will be stored, all of the suprathreshold x_j values generate signal vectors U_j = (U_j1, U_j2, …, U_jn) to F_i. All of the signal vectors U_j simultaneously read out their top-down LTM patterns Z_j = (Z_j1, Z_j2, …, Z_jn) to form a single top-down template V = (V_1, V_2, …, V_n), where V_k = Σ_j U_jk Z_jk. Thus whereas bottom-up filtering separates the effects of the LTM vectors z_j in order to drive the STM competition within F_{i+1}, the survivors of this competition pool their top-down LTM vectors to form a consensus in the form of a composite template to F_i. This template, as a whole, is then matched against the bottom-up input pattern to F_i. If the LTM pattern Z_j of one node v_j in F_{i+1} matches the input pattern much better than the other LTM patterns which form the template consensus, then this node will win a larger portion of the total STM activity across F_{i+1} as the resonance equilibrates. In other words, the template presents a consensus to F_i so that the matching process within F_i can rearrange the competitive STM balance across F_{i+1} and thereby better match the bottom-up data at F_i. Computer simulations that illustrate these competitive and matching properties during pattern recognition and category formation via adaptive resonance are described in Carpenter and Grossberg (1985).
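The division of labor just described (separate bottom-up filtering versus a pooled top-down template) can be sketched in a few lines. The winner-take-all storage rule is the special case mentioned above, and all patterns and LTM values are invented for illustration:

```python
# Sketch of bottom-up filtering vs. top-down template read-out, assuming
# a winner-take-all STM competition and made-up patterns and LTM values.

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

S = [0.9, 0.1, 0.0]                 # bottom-up signal pattern from F_i

# Bottom-up LTM vectors z_j, one per F_{i+1} node v_j: each filters S separately.
Z_bu = [[1.0, 0.0, 0.0],            # tuned to a pattern like S
        [0.0, 1.0, 0.0],
        [0.3, 0.3, 0.3]]
T = [dot(S, z_j) for z_j in Z_bu]   # total inputs T_j = S_j . z_j

winner = max(range(len(T)), key=T.__getitem__)
x = [1.0 if j == winner else 0.0 for j in range(len(T))]   # WTA storage in STM

# Top-down LTM patterns Z_j: the suprathreshold survivors pool them into one
# composite template V_k = sum_j U_jk Z_jk (here U_jk = x_j for simplicity).
Z_td = [[1.0, 0.0, 0.0],
        [0.0, 1.0, 0.0],
        [0.3, 0.3, 0.3]]
V = [sum(x[j] * Z_td[j][k] for j in range(len(x))) for k in range(len(S))]

print(winner)   # 0: the node whose z_j is most parallel to S wins
print(V)        # [1.0, 0.0, 0.0]: the consensus template read down to F_i
```

Because only one node survives in this toy case, the "consensus" is just the winner's top-down pattern; with several suprathreshold nodes, V would blend their patterns, as in the text.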
8. Counting Stages: Resonant Equilibration as Verification and Attention
Many of the types of properties which the Verification model and the Posner-Snyder model addressed are mechanized and extended by using the parallel process of resonant equilibration. For example, other things being equal, high frequency words are more parallel to the LTM vectors z_j and Z_j that they persistently activate than low frequency words are to their preferred LTM vectors. This property helps to explain how a verification type process can occur without invoking serial search. To understand why, suppose that a low frequency word activates an output signal pattern S from F_i to F_{i+1} that does not correspond to any high frequency word. Before S can influence F_{i+1} via the inputs T_j, it is gated by the bottom-up LTM traces. Due to the relatively large effect of high frequency words on LTM tuning, the largest inputs T_j to F_{i+1} may initially bias the STM reaction of F_{i+1} towards a high frequency interpretation of the low frequency word. Consequently, the fastest and largest top-down signals in the template V may initially tend to code high frequency words. The template V plays the role of the verification signal. Thus high frequency word components can tend to make a faster and larger contribution to the early phase of "verification", even though the process that instantiates the template is a parallel read-out rather than serial search. After the matching process between bottom-up and top-down templates is initiated at F_i, it can change the bottom-up inputs T_j in favor of the low frequency word. That is, although high frequency components across F_{i+1} may be favored initially, once "verification" begins at F_i, lower frequency components across F_{i+1} whose top-down signals better match the sensory data at F_i are amplified in STM as the high frequency components are attenuated. No serial search is needed for this to occur.
420

Chapter 7

This capability is already implied by the conjoint action of rules whereby (1) the bottom-up filter biases the STM competition across F_{i+1} and (2) the top-down template synthesizes a consensus for matching across F_i. We maintain the intuition of the verification model (that many alternative interpretations are tested until the correct one is found), but the serial search mechanism is replaced by a parallel zooming-in process whereby the top-down consensus becomes better matched to the sensory data as the bottom-up filter chooses a better STM representation. No serial mechanisms are needed to accomplish this process of matching, deformation, and equilibration. After the resonance quickly equilibrates in this way, the final resonant STM pattern is the one to which "attention is paid." As in the Posner-Snyder model, the initial phase of activating the "logogens" across F_{i+1} uses excitatory F_i → F_{i+1} signals. The subsequent stages of competition for STM storage across F_{i+1} and matching across F_i both use lateral inhibitory signals, which is also consistent with the Posner-Snyder model. Attention does not, however, switch to the logogens before inhibition can act. Inhibition can help to draw attention to its final locus by focusing the process of resonant equilibration. Also, in these processes, the mechanism of inhibition does not primarily function to impose a limited capacity. This fact reflects a major difference of our theory from the Posner-Snyder and verification models. The computational unit in our networks is not a logogen that can be excited or inhibited by other network nodes. The computational unit is a distributed pattern of activity across a field of network nodes. Inhibition may help to cause an amplified STM reaction to a matched bottom-up input pattern and top-down template pattern, as well as to suppress the STM response to mismatched input patterns. These facilitative and suppressive reactions can occur without changing the set of nodes that the input patterns activate or the total activities of the input patterns. Thus the mechanism of inhibition can cause amplification or suppression, not as a function of network capacity, but rather as a function of pattern match or mismatch. Another reflection of the difference between computational units in the theories is that inhibition also acts before signals ever leave F_i for F_{i+1}. The same inhibitory interactions within F_i that are used for top-down template matching at F_i are also used to accurately register the bottom-up input at F_i without noise or saturation before F_i ever elicits its excitatory signals to F_{i+1}. The fact that this early phase of inhibitory interaction, prior to the activation of the "logogens" at F_{i+1}, does not primarily function to "limit capacity" reflects the deeper property that the computational unit of the network is not a logogen at all (Grossberg, 1980, 1984a). Perhaps the strongest departure from both the Posner-Snyder model and the Verification model concerns the concept of processing stages.
Even if we restrict our consideration to two levels F_i and F_{i+1}, these two levels do not correspond to two processing stages of the sort that these other models have described. In particular, these stages are not separable from one another. Rather, each feeds into the next and thereby alters its own processing due to feedback. The feedback continues to cycle between the stages as it drives the approach to resonance, and it maintains the resonance after it has been attained. We believe that it is the absence of this type of insight in the Posner-Snyder model that led to its tacitly contradictory argument about how inhibition and attention are related (Section 4). We see no way to fully understand these basic intuitions except through the use of nonlinear systems of differential equations. Once this is accepted, serial stage models and binary computer models of human information processing seem much less appealing than they once did. In fact, the computational units of our Macrotheory are not even necessarily determined within individual Macrotheory levels. Rather, computational units such as adaptive resonances emerge from interactions using Microtheory mechanisms that are distributed across several Macrotheory levels.

9. Attentional Gain Control Versus Attentional Priming: The 2/3 Rule
The present theory makes another distinction that is not mechanistically elaborated within the Posner-Snyder theory. As Neely (1977) notes, experiments like those of Posner and Snyder (1975b) and Neely (1976) "confounded the facilitatory effects of conscious attention with the facilitatory effects of automatic spreading activation" (p.231). The Posner-Snyder model accounts for these two types of facilitation by positing a separate process for each. In the present theory, at the level of mechanism, the two types of facilitation both share "automatic" properties. They are distinguished by factors other than their automaticity. A distinction that has been central in the development of the present theory is the difference between attentional gain control and attentional priming (Grossberg, 1975, 1982b). The need for distinct mechanisms of attentional gain control and attentional priming can be appreciated by considering Figure 4. In Figure 4a, a learned top-down template from F_{i+1} to F_i is activated before a bottom-up input pattern activates F_i. The level F_i is then primed, or ready, to receive a bottom-up input that may or may not match the
Figure 4. Interaction of attentional priming with attentional gain control: (a) A supraliminal activity pattern within F_{i+1} causes a subliminal response within F_i. (b) A bottom-up input pattern can instate a supraliminal activity pattern within F_i by engaging the attentional gain control channel. (c) During bottom-up and top-down matching, only cells at F_i which receive convergent bottom-up and top-down signals can generate supraliminal reactions. (d) An inhibitory attentional gain control signal from a competing source can block supraliminal reaction to a bottom-up input.
active template. The template represents the input that the network expects to receive. The template plays the role of an expectancy. Level F_i can be primed to receive a bottom-up input without necessarily eliciting suprathreshold output signals in response to the priming expectancy. If this were not possible, then every priming event would lead to suprathreshold reactions. Such a property would prevent anticipation of a future event. On the other hand, certain top-down expectancies can lead to suprathreshold consequences, much as we can, at will, experience internal conversations and musical events, as well as other fantasy activities. Thus there exists a difference between the read-out of a top-down expectancy, which is a mechanism of attentional priming, and the translation of this operation into suprathreshold signals due to attentional gain control. The distinction between attentional priming and attentional gain control can be sharpened by considering the opposite situation in which a bottom-up input pattern I activates F_i before a top-down expectancy from F_{i+1} can do so. As in the discussion within Section 7, we want the input pattern I to generate a suprathreshold activity pattern X across F_i so that the sequence of transformations X → S → T → Y → U → V → X' can take place. How does F_i know that it should generate a suprathreshold reaction to the bottom-up input pattern but not to the top-down input pattern? In both cases, an input pattern stimulates the cells of F_i. Some other mechanism must exist that distinguishes between bottom-up and top-down inputs. We use the mechanism of attentional gain control. These considerations suggest that at least two of the three signal sources to F_i must be simultaneously active in order for some F_i cells to become supraliminally active. Carpenter and Grossberg (1985) have called this constraint the 2/3 Rule.
These three signal sources are (1) the bottom-up input channel that delivers specific input patterns to F_i; (2) the top-down template channel that delivers specific expectancies, or priming signals, to F_i; and (3) the attentional gain control channel that nonspecifically modulates the sensitivity of F_i. Figure 4 illustrates one realization of the 2/3 Rule. In Figure 4a, supraliminally active cells within F_{i+1} read out a specific top-down expectancy to F_i along excitatory (+) and conditionable pathways. The active F_{i+1} cells also read out inhibitory (-) signals that converge in a nonspecific fashion upon the cells which regulate attentional gain control. Since all the cells in F_i receive inputs from at most one channel, they cannot generate supraliminal activations. By contrast, in Figure 4b, the bottom-up input channel instates a specific input pattern at F_i and excites the attentional gain control channel, which nonspecifically sensitizes all the cells of F_i. (Alternatively, the attentional gain control channel may remain endogenously, or tonically, active.) Those cells at which a bottom-up input and an attentional gain control signal converge can generate supraliminal activations. Cells which receive no bottom-up input cannot generate supraliminal activations, since they only receive inputs from a single signal source. In Figure 4c, a bottom-up input pattern and a top-down template pattern are simultaneously active. The top-down signal source shuts off the nonspecific gain control channel. However, if the bottom-up input pattern and the top-down template pattern are not too different, then some cells in F_i will receive inputs from both signal channels. By the 2/3 Rule, these cells can become supraliminally active. Cells which receive inputs only from the bottom-up input or from the top-down template, but not both, cannot become supraliminally active. Nor can cells become active which receive no inputs whatsoever.
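The three cases just described (Figures 4a-4c) can be caricatured by counting active signal sources at each cell. The binary patterns and the bare counting rule are illustrative assumptions; the theory's actual networks are continuous:

```python
# Toy realization of the 2/3 Rule: a cell in F_i becomes supraliminal only when
# at least two of its three signal sources (bottom-up input, top-down template,
# nonspecific gain control) are active. Patterns and thresholds are made up.

def f_i_response(bottom_up, template, gain_on):
    out = []
    for bu, td in zip(bottom_up, template):
        sources = (bu > 0) + (td > 0) + (1 if gain_on else 0)
        out.append(1 if sources >= 2 else 0)   # 2/3 Rule: two sources must converge
    return out

I = [1, 1, 0, 0]   # bottom-up input pattern (it also excites gain control)
V = [1, 0, 1, 0]   # top-down template (it inhibits gain control)

# (a) Template alone: gain control is shut off, so the priming stays subliminal.
print(f_i_response([0]*4, V, gain_on=False))   # [0, 0, 0, 0]

# (b) Bottom-up input alone: input plus gain control -> supraliminal where I is on.
print(f_i_response(I, [0]*4, gain_on=True))    # [1, 1, 0, 0]

# (c) Both: gain control off; only cells receiving BOTH input and template survive.
print(f_i_response(I, V, gain_on=False))       # [1, 0, 0, 0]
```

Case (c) is the matching rule: the supraliminal pattern is the intersection of input and template, which is why a badly mismatched template suppresses, rather than amplifies, the STM reaction.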
Thus, in addition to suggesting how F_i can respond supraliminally to bottom-up inputs and subliminally to top-down templates, the 2/3 Rule suggests a rule for matching bottom-up and top-down patterns at F_i. Carpenter and Grossberg (1985) have shown, moreover, that use of the 2/3 Rule for matching is necessary to prevent ceaseless recoding of the learned categories that form in response to sequences of bottom-up input patterns. The 2/3 Rule also leads to other useful conclusions. For example, a supraliminal reaction to bottom-up inputs does not always occur, especially when attention is focused upon a different processing modality. This can be understood as a consequence of intermodal competition between attentional gain control signals. In Figure 4d the attentional gain control channel is inhibited by such a competitive signal; hence, only a subliminal reaction to the bottom-up input occurs. In a similar way, top-down signals do not always generate subliminal reactions. We can wilfully generate conscious internal fantasy activities from within. Supraliminal reactions in response to top-down signals can occur if the attentional gain control channel can be activated by an “act of will.” In order to ground these concepts in a broader framework, we make the following observations. The 2/3 Rule was first noticed in a study of reinforcement and attention (Grossberg, 1975; reprinted in Grossberg, 1982a, p.290). In that context, the two specific channels carry internal drive inputs and conditioned reinforcer signals, respectively. The third channel carries nonspecific arousal signals. The 2/3 Rule in this situation suggests how incentive motivational signals can be generated by pairwise combinations of these signal channels. In this application, the 2/3 Rule suggested how attention is modulated by the processing of emotion-related signals. More generally, sensitization of specific processing channels by a nonspecific channel is a theme that occurs even in the modeling of invertebrate motor control (Grossberg, 1978a; reprinted in Grossberg, 1982a, pp.517-531). Thus the concepts of attentional gain control and attentional priming reflect design constraints that seem to be used in several modalities as well as across species. The attentional priming and attentional gain control processes can be used to clarify and modify the “automatic activation” and “conscious attention” constructs that are found elsewhere in the literature. We discuss these constructs to form a bridge to other models.
Our discussion will simultaneously refine the mechanistic concepts which we have introduced above and show that these mechanistic concepts do not match well with descriptive ideas such as “automatic activation” and “conscious attention.” The abundant use of quotation marks emphasizes this mismatch. For present purposes, we suppose that a supraliminal activity pattern across F_i which can survive matching against a top-down expectancy from F_{i+1} becomes “conscious.” In other words, activity across F_i can become “conscious” if it persists long enough to undergo resonant equilibration. Activity within F_{i+1} never becomes “conscious,” whether or not it is supraliminal. The discussion is broken up into four parts.

A. Top-down Subliminal Control: “automatic” attentional priming in the absence of attentional gain control (Figure 4a).

B. Top-down Supraliminal Control: “automatic” attentional priming plus “willed” excitatory attentional gain control.

C. Bottom-up Supraliminal Control: “automatic” content-addressable input activation plus “automatic” excitatory attentional gain control (Figure 4b).

D. Bottom-up Subliminal Control: “automatic” content-addressable input activation plus “automatic” or “willed” inhibitory attentional gain control (Figure 4d).

These properties are explained mechanistically as follows.

A. A top-down expectancy from F_{i+1} to F_i has a direct excitatory effect on F_i. The F_{i+1} cells do not control a nonspecific excitatory signal capable of sensitizing all the cells of F_i to its inputs. Such a nonspecific arousal signal is said to lower the quenching threshold (QT) of F_i (Grossberg, 1973, 1980), which in the absence of inputs is chosen to be large. The QT is a parameter which STM activities must exceed in order to elicit a suprathreshold reaction. Thus F_i cannot generate a suprathreshold output in response to a top-down expectancy alone. The top-down expectancy can prime F_i, but cannot release the priming pattern.

B. An “act of will” can activate the attentional gain control channel. This act generates a nonspecific arousal signal that sensitizes all the cells of F_i to whatever inputs happen to be delivered at that time. This type of willed control does not deliver a specific pattern of information to be processed. Rather it exerts a non-obligatory type of sensitivity modulation upon the entire processing channel.
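Cases A and B turn on the quenching threshold. A minimal sketch, assuming illustrative QT values and a simple hard threshold (the theory's QT arises from network interactions, not a fixed constant):

```python
# Sketch of the quenching threshold (QT) idea: STM activities elicit output
# only if they exceed the QT, and nonspecific arousal (attentional gain
# control) lowers the QT. The particular numbers are illustrative assumptions.

QT_HIGH = 0.8   # QT in the absence of arousal (chosen large)
QT_LOW = 0.2    # QT after a nonspecific arousal signal lowers it

def suprathreshold(stm_pattern, aroused):
    qt = QT_LOW if aroused else QT_HIGH
    return [x if x > qt else 0.0 for x in stm_pattern]

prime = [0.5, 0.3, 0.0]   # STM pattern instated by a top-down expectancy

# Case A: priming alone leaves activities below the high QT; nothing is read out.
print(suprathreshold(prime, aroused=False))   # [0.0, 0.0, 0.0]

# Case B: an "act of will" supplies arousal; the same pattern now exceeds the
# lowered QT and is released as suprathreshold output.
print(suprathreshold(prime, aroused=True))    # [0.5, 0.3, 0.0]
```

Note that arousal carries no pattern information of its own: the same specific priming pattern is either withheld or released, which is the factorization of information and arousal discussed below.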
This way of mechanizing the distinction between attentional priming and attentional gain control is a special case of a general design principle in neural information processing; namely, the factorization of pattern and energy, or factorization of information and arousal (Grossberg, 1982a), which was also mentioned in Section 7. Another example of the dissociation between information and arousal occurs during willed motor acts. When I look at an object, I can decide to reach for it with my left hand or my right hand, or not at all. Learned target position maps are “automatically” read out by the eye-head system to the hand-arm systems corresponding to both my left hand and my right hand. These maps remain subliminally active at their target cells until an “act of will” generates nonspecific arousal to the desired hand-arm system. Suprathreshold read-out of the corresponding target position map is thereby effected. In this example, the top-down expectancy is a target position map that encodes the desired terminal positions of the hand-arm system with respect to the body.

C. When a bottom-up input pattern activates F_i, it has two simultaneous effects upon F_i. The obvious effect is the delivery of the input pattern directly to F_i. As this is happening, the input also activates a nonspecific channel. This nonspecific channel controls the nonspecific arousal, or attentional gain control, that sensitizes F_i by lowering its quenching threshold. Thus the bottom-up pathway “automatically” instates its input and “automatically” activates the attentional gain control system. It is misleading to suggest that “automatic spreading activation” and “conscious attention” are two independent stages of information processing, because the same mechanisms which give rise to “conscious attention” are often “automatically” activated in parallel with the mechanisms which subserve “automatic spreading activation.”

D. The automatic activation of attentional gain control by a bottom-up input pattern can be prevented from generating a suprathreshold response at F_i. If a given processing channel is already active, then its attentional gain control mechanism can competitively inhibit the automatic activation of the gain control mechanisms within other channels. There exists a large-scale competition between the gain control sources of different processing channels in addition to the small-scale cooperative-competitive interactions within each processing channel that regulate its STM matching and contrast enhancement properties.

Property D shows that inhibition is not mobilized only by “conscious attention” processes, as the Posner-Snyder model suggests. Attentional gain control signals elicited by bottom-up inputs in one channel can cause “automatic” attentional inhibition in a different processing channel. Moreover, “automatic” excitatory attentional gain control signals in a given channel cause “automatic” inhibitory signals in that channel by rendering suprathreshold the small-scale competitions that regulate STM matching and contrast enhancement within the channel. “Conscious attention” can be the outcome, rather than the cause, of this inhibitory process. Neely (1977) designed and performed a remarkable set of experiments to unconfound “the facilitatory effects of conscious attention” and “the facilitatory effects of automatic spreading activation.” The present theory also recognizes at least two different types of facilitatory effects: attentional gain control and attentional priming, but does not attribute these properties to “conscious” and “automatic” processes in the same manner as the Posner-Snyder (1975a) theory. Despite this fact, the present theory can explain the Neely (1977) data. A more serious test of the theory concerns its ability to explain transient vs. equilibration effects, such as effects of inconsistent vs. consistent primes and of mask vs.
no mask manipulations. Another serious test concerns the theory’s ability to predict different outcomes of recognition and recall tests. These applications of the theory are presented after the next section. Before turning to these applications, we outline a theoretical macrocircuit that embodies the theory’s view of how learning, recognition, and recall take place in real time. The macrocircuit embeds our analysis of word recognition and recall into a broader theory. This theory clarifies the types of information that are coded by the functional units at different processing stages, and locates the stages subserving word recognition and recall processes. Such a theory is necessary to understand how the functional units arise and how they give rise to observable behaviors. The reader can skim this section on a first reading.

10. A Macrocircuit for the Self-Organization of Recognition and Recall
Figure 5 depicts a macrocircuit that is capable of self-organizing many recognition and recall properties of visual, verbal, and motor lists. The boxes A_i are macrostages for the elaboration of audition, speech, and language. The boxes M_i are macrostages for the elaboration of language-related motor representations. The box V* designates a source of preprocessed visual signals. At an early stage of development, the environmentally activated auditory patterns at stage A1 start to tune the long-term memory (LTM) traces within the pathways of the adaptive filter from A1 to A2, and thus to alter the patterning of short-term memory (STM) auditory “feature detector” activation across A2. After this tuning process begins, endogenous activations of the motor command stage M1 can elicit simple verbalizations (“babbling”) whose environmental feedback can also tune the A1 → A2 adaptive filter. The learning within the feedback pathway M1 → A1 → A2 helps to tune auditory sensitivities to articulatory requirements. This process is consistent with the motor theory of speech perception (Cooper, 1979; Liberman, Cooper, Shankweiler, and Studdert-Kennedy, 1967; Liberman and Studdert-Kennedy, 1978; Mann and Repp, 1981; Repp and Mann, 1981; Studdert-Kennedy, Liberman, Harris, and Cooper, 1970). Just as the auditory patterns across A1 tune the A1 → A2 adaptive filter, the endogenously activated motor command patterns across M1 tune the M1 → M2 adaptive filter. The activation patterns across M2 encode the endogenously activated motor commands across M1 using the same mechanisms by which the activation patterns across A2 encode the exogenously activated auditory patterns across A1. The flow of adaptive signaling is not just bottom-up from A1 to A2 and from M1 to M2. Top-down conditionable signals from A2 to A1 and from M2 to M1 are also hypothesized to exist. These top-down signal patterns represent learned expectancies, or templates.
Their most important role is to stabilize the learning that goes on within the adaptive filters A1 → A2 and M1 → M2. In so doing, these top-down signal patterns also constitute the read-out of optimal templates in response to ambiguous or novel bottom-up signals. These optimal templates predict the patterns that the system expects to find at A1 or M1 based upon past experience. The predicted and actual patterns merge at A1 and M1 to form completed composite patterns which are a mixture of actual and expected information. Auditory and motor features are linked via an associative map from A2 to M2. When M1 is endogenously activated, it activates a motor representation at M2 via its adaptive filter M1 → M2, as well as an auditory representation at A2 via environmental feedback M1 → A1 and the adaptive filter A1 → A2. Since A2 and M2 are then simultaneously active, the associative map A2 → M2 can be learned. This map also links auditory and articulatory features. The associative map A2 → M2 enables the imitation of novel sounds, in particular of non-self-generated sounds, to get underway. It does so by analysing a novel sound via the bottom-up auditory filter A1 → A2, mapping the activation patterns of auditory feature detectors into activation patterns of motor feature detectors via the associative map A2 → M2, and then synthesizing the motor feature pattern into a net motor command at M1 via the top-down motor template M2 → M1. The motor command, or synergy, that is synthesized in this way generates a sound that is closer to the novel sound than are any of the sounds currently coded by the system. The properties whereby the learned map A1 → A2 → M2 → M1 enables imitation of novel sounds to occur accord with the analysis-by-synthesis approach to speech recognition (Halle and Stevens, 1962; Stevens, 1972; Stevens and Halle, 1964).
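The imitation chain A1 → A2 → M2 → M1 can be caricatured as three linear maps composed in sequence. The matrices and the input pattern below are invented stand-ins for learned filters and templates, not values from the theory:

```python
# Toy composition of the imitation chain A1 -> A2 -> M2 -> M1: a novel sound is
# filtered into auditory features, mapped associatively to motor features, and
# synthesized into a net motor command via the top-down motor template.
# All matrices here are illustrative stand-ins, not learned values.

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

A1_to_A2 = [[1.0, 0.0], [0.0, 1.0]]   # tuned bottom-up auditory filter
A2_to_M2 = [[0.0, 1.0], [1.0, 0.0]]   # associative map, auditory -> motor features
M2_to_M1 = [[1.0, 0.0], [0.0, 1.0]]   # top-down motor template (synergy read-out)

novel_sound = [0.7, 0.2]              # auditory pattern arriving at A1

auditory_features = matvec(A1_to_A2, novel_sound)     # pattern across A2
motor_features = matvec(A2_to_M2, auditory_features)  # pattern across M2
motor_command = matvec(M2_to_M1, motor_features)      # net command at M1

print(motor_command)   # [0.2, 0.7]: a motor synergy for the novel sound
```

The point of the composition is that no stage needs to have stored this particular sound: the chain routes a novel auditory pattern into the closest available motor command, which is the analysis-by-synthesis idea cited above.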
[Figure 5 appears here. Legible box labels include: Visual Object Recognition System; Semantic Network; List Parsing in STM (Masking Field); Item and Order in Motor STM; Iconic Motor Features; Inputs; and Self-Generated Auditory Feedback.]
Figure 5. A macrocircuit governing self-organization of recognition and recall processes: The text explains how auditorily mediated language processes (the A_i), visual recognition processes (V*), and motor control processes (the M_i) interact internally via conditionable pathways (black lines) and externally via environmental feedback (dotted lines) to self-organize the various processes which occur at the different network stages.
The environmental feedback from M1 to A1, followed by the learned map A1 → A2 → M2 → M1, defines a closed feedback loop, or “circular reaction” (Piaget, 1963). Thus the present theory’s explication of the developmental concept of circular reaction helps to clarify the speech performance concepts of motor theory and analysis-by-synthesis in the course of suggesting how an individual can begin to imitate non-self-generated speech sounds. The stages A2 and M2 can each process just one spatial pattern of auditory or motor features at a time. Thus A2 can process an auditory “feature code” that is derived from a narrow time slice of a speech spectrogram, and M2 can control a simple motor synergy of synchronously coordinated muscle contractions. These properties are consequences of the fact that spatial patterns, or distributed patterns of activity across a field of network nodes, are the computational units in embedding field networks. This computational unit is a mathematical consequence of the associative learning laws that govern these networks (Grossberg, 1969d, 1982a). This fact is not intuitively obvious, and was considered surprising when first discovered. The later stages A_i and M_i in Figure 5 are all devoted to building up recognition and recall representations for temporal groupings, or lists, of spatial pattern building blocks. These higher stages embody solutions to aspects of the fundamental problem of self-organizing serial order in behavior (Lashley, 1951). A spatial pattern of activation across A2 encodes the relative importance of each “feature detector” of A2 in representing the auditory pattern that is momentarily activating A1. In order to encode temporal lists of auditory patterns, one needs first to simultaneously encode a sequence of spatial patterns across A2’s auditory feature detectors.
The following way to accomplish this also addresses the vexing problem that individual speech sounds, and thus their spatial patterns across A2, can be altered by the temporal context of other speech sounds in which they are embedded. In addition to activating the associative map from A2 to M2, each spatial pattern across A2 also activates an adaptive filter from A2 to A3. Although all the adaptive filters of the theory obey the same laws, each filter learns different information depending on its location in the network. Since the A2 → A3 filter is activated by feature patterns across A2, it builds up learned representations, or chunks, of these feature patterns. Each such representation is called an item representation within the theory. It is important to realize that all new learning about item representations is encoded within the LTM traces of the A2 → A3 adaptive filter. Although each item representation is expressed as a pattern of activation across A3, the learning of these item representations does not take place within A3. This flexible relationship between learning and activation is needed to understand how temporal codes for lists can be learned and performed. For example, whereas the spatial patterns across A2 can rapidly decay, via a type of iconic memory (Sperling, 1960), the item representations across A3 are stored in short-term memory (STM), also called “working memory” (Cermak and Craik, 1979). As a sequence of sound patterns activates A1, a succession of item representations is stored in STM across A3. The spatial pattern of STM activity across A3 represents temporal order information across the item representations of A3. This temporal order information cannot be laid down arbitrarily without causing temporally unstable LTM recodings to occur in the adaptive filter from A3 to A4. Laws for regulating temporal order information in STM have been derived from the LTM Invariance Principle.
This principle shows how to alter the STM activities of previous items in response to the presentation of new items so that the repatterning of STM activities that is caused by the new items does not inadvertently obliterate the LTM codes for old item groupings. These STM patterns of temporal order information have been used, for example, to explain and predict data showing primacy and recency gradients during free recall experiments (Grossberg, 1978b). Computer simulations that illustrate how these temporal order patterns evolve through time are described in Grossberg and Stone (1985).
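The qualitative effect of such invariance-preserving STM laws can be caricatured as follows. This is a toy rendering, not the derived shunting equations: each new item is stored at unit strength while all prior activities are multiplied by a common factor ω (our notation), which preserves the relative ratios among old items and hence the LTM codes for old groupings:

```python
def stm_order_pattern(items, omega):
    """Store items one at a time.  Each arrival multiplies all previously
    stored activities by a common factor omega, preserving their relative
    ratios, then the field is normalized to a fixed STM capacity.
    omega > 1 yields a primacy gradient; omega < 1 a recency gradient."""
    x = []
    for _ in items:
        x = [xi * omega for xi in x] + [1.0]   # new item at unit strength
    total = sum(x)
    return [xi / total for xi in x]            # limited-capacity normalization
```

With ω > 1 the first item is most active, so a "most-active-first" read-out recalls the list in its presented order; mixtures of the two regimes across list positions produce the bowed gradients observed in free recall.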
Chapter 7
The concept of temporal order information across item representations is necessary, but not sufficient, to explain how lists of items can be learned and performed. One also needs to consider the analogous bottom-up filtering process from M2 to M3 which builds up unitized representations of motor items (synergies), the top-down learned templates from M3 to M2, and the associative map A3 → M3 that is learned from sensory items to motor items. In particular, suppose that analysis-by-synthesis (the map A1 → A2 → M2 → M1) has elicited a novel pattern of sensory features across A2 and of motor features across M2. These feature patterns can then generate unitized item representations at A3 and M3 even though the network never endogenously activated these patterns during its "babbling" phase. A map A3 → M3 between these unitized item representations can then be learned. Using these building blocks, we can now show how a unitized representation of an entire list can be learned and performed. When the network processes a verbal list, it establishes an STM pattern of temporal order information across the item representations of A3. Since every sublist of a list is also a list, the adaptive filter from A3 to A4 simultaneously "looks at" all the sublist groupings to which it is sensitive as a list is presented through time. The cooperative-competitive interaction across A4 then determines which of these sublist representations will be stored in STM at A4. In order for A4 to store maximally predictive sublist chunks, the interactions within A4 are designed to simultaneously solve several problems. A core problem is called the temporal chunking problem. Consider the problem of unitizing an internal representation for an unfamiliar list of familiar items; e.g., a novel word composed of familiar phonemes (auditory) or letters (visual). The most familiar groupings of the list are the items themselves.
In order to even know what the novel list is, all of its individual items must first be presented. All of these items are more familiar than the list itself. What mechanisms prevent item familiarity from forcing the list always to be processed as a sequence of individual items, rather than eventually as a whole? How does a not-yet-established word representation overcome the salience of well-established phoneme or syllable representations? How does unitization of unfamiliar lists of familiar items even get started? If the temporal chunking problem is not solved, then internal representations of lists with more than one item can never be learned. The cooperative-competitive design of A4 that solves the temporal chunking problem is called a masking field. One property of this design is that longer lists, up to some maximal length, can selectively activate populations in A4 that have a pre-wired competitive advantage over shorter sublists in the struggle for STM storage. Simple growth rules are sufficient to achieve this competitive STM advantage of longer sublists. Such a competitive advantage enables a masking field to exploit the fact that longer sublists, other things being equal, are better predictors of subsequent events than are shorter sublists because they embody a more unique temporal context. A masking field's preferential STM response to longer sublists leads, in turn, to preferential LTM chunking, or representation, of longer sublists using the LTM law given by equation (1). As an important side benefit, the competitive advantage of longer, but unfamiliar, sublists enables them to compete effectively for STM activity with shorter, but familiar, sublists, thereby providing a solution to the temporal chunking problem. The postulate that longer sublists, up to some maximum length, have a competitive STM advantage led to the prediction of a word length effect in Grossberg (1978a, Section 41).
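A masking field's length-biased competition can be reduced to a minimal caricature. The linear length weighting and the collapse of recurrent competition into a winner-take-all choice are drastic simplifications of the actual growth rules and shunting dynamics; all names are ours:

```python
def masking_field_winner(candidates):
    """candidates: dict mapping a sublist (tuple of items) to its bottom-up
    match strength in [0, 1].  Longer sublists get a pre-wired competitive
    advantage proportional to their length; recurrent lateral inhibition
    is caricatured as a winner-take-all over the weighted scores."""
    scores = {s: len(s) * match for s, match in candidates.items()}
    return max(scores, key=scores.get)
</n>```

Even when a novel two-item list is matched less well than its highly familiar individual items, the length advantage lets the whole-list node mask the item-sized nodes, which is the essence of the solution to the temporal chunking problem; if the longer grouping's match is sufficiently weak, a familiar short sublist still wins.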
A word length effect was reported in the word superiority experiments of Samuel, van Santen, and Johnston (1982, 1983), with longer words producing greater word superiority. Grossberg (1984a) analyses these and related data from the perspective of masking field design. Computer simulations that illustrate how a masking field can group temporal order information over item representations into sublist chunks are described in Cohen and Grossberg (1985b). The word length property is only one of several differences between properties of stages A3 and A4 and those of the stages in alternative theories. Instead of letting A3 and
Neural Dynamics of Word Recognition and Recall
A4 represent letters and words, as in the McClelland and Rumelhart (1981) theory, A3 and A4 represent items (more precisely, temporal order and item information in STM) and lists (more precisely, sublist parsings in STM), respectively. These properties do not require that individual nodes exist for all items and lists. Learning enables distributed item and list representations to be formed over a network substrate whose rules do not change through time. All familiar letters possess both item and list representations, not just letters such as A and I that are also words. This property helps to explain the data of Wheeler (1970) showing that letters such as A and I are not recognized more easily than letters such as E and F. By contrast, the McClelland and Rumelhart (1981) model postulates a letter level and a word level, instead of an item level and a list level. By their formulation, letters such as A and I must be represented on both the letter level and the word level, while letters such as E and F are represented only on the letter level. This choice of levels leads to both conceptual and data-related difficulties with the McClelland and Rumelhart (1981) model, including a difficulty in explaining the Wheeler (1970) data without being forced into further paradoxes (Grossberg, 1984b). More generally, any model whose nodes represent letters and words, and only these units, faces the problem of describing what the model nodes represented before a particular letter or word entered the subject's lexicon, or what happens to these nodes when such a verbal unit is forgotten. This type of issue hints at more serious problems concerning such a model's inability to self-organize. Such concerns are dealt with by a theory whose levels can learn to encode abstract item and list representations on a substrate of previously uncommitted nodes.
These abstract item and list processing units of A3 and A4 play an important role in the theory's explanation of how unfamiliar and unitized words are recalled. For example, suppose that a list has just been represented in STM across the item representations of A3. Before the items in the list can be rehearsed, the entire list begins to tune the A3 → A4 adaptive filter. The LTM traces within this adaptive filter learn to encode temporal order information in LTM. After this learning has occurred, the tuned filter activates unitized sublist representations across A4. These sublist representations contribute to the recognition of words but cannot, by themselves, elicit recall. This raises the issue of how short novel lists of familiar items can be recalled even before they are unitized. The fact that a verbal unit can have both an item representation and a list representation now becomes crucial. Recall of a short novel list of familiar items is triggered by a nonspecific rehearsal wave to A3. Such a wave opens an output gate that enables output signals of active items to be emitted from A3 to M3. As each item is read-out, it activates a negative feedback loop to itself that selectively inhibits its item representation, thereby enabling the next item representation to be read-out. Each item representation is recalled via the learned A3 → M3 → M2 → M1 sensory-motor map. This type of recall is immediate recall from STM, or working memory, of a list of unitized item representations. It is a type of "controlled" process. It is not "automatic" recall out of LTM. In order for a unitized list chunk in A4 to learn how to read-out its list of motor commands from LTM, the chunk must remain active long enough during the learning process to sample pathways to all of these motor commands. We will briefly sketch the simplest version of how learning and recall can occur in the correct order.
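The rehearsal-wave read-out just described (open an output gate, emit the most active item, inhibit it via its negative feedback loop, repeat) can be rendered as a minimal sketch. The self-inhibition is idealized here as complete, and the function name is ours:

```python
def rehearse(stm_activities):
    """Read items out of STM under a nonspecific rehearsal wave: the most
    active item representation is emitted first, then its negative
    feedback loop shuts it off so the next-most-active item can be
    read out, until no suprathreshold activity remains."""
    stm = dict(stm_activities)
    order = []
    while any(v > 0 for v in stm.values()):
        item = max(stm, key=stm.get)
        order.append(item)
        stm[item] = 0.0       # self-inhibitory feedback after read-out
    return order
```

Applied to a primacy gradient of STM activities, this scheme recalls the list in its presented order, which is why the order-encoding laws and the read-out mechanism must be designed together.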
It should be realized, however, that mechanisms which sometimes control recall in the correct order can also generate recall in an incorrect order. In fact, these mechanisms provide an explanation of the bowed and skewed serial position curve of serial verbal learning, as well as related breakdowns of temporal order information in LTM. Our review will not consider why these STM and LTM temporal order mechanisms cannot always encode veridical STM and LTM order information. See Grossberg (1982c, 1984a) for recent discussions of this issue. In the simplest example of how temporal order information across item representations is encoded and read-out of LTM, the top-down template from A4 to A3 learns this
information while the adaptive filter from A3 to A4 is being tuned. Thus the learning of temporal order information is part of the learning of an adaptive resonance. Later activation of a list chunk in A4 can read this LTM temporal order information into an STM pattern of order information across the item representations of A3. Activation of the rehearsal wave at this time enables the list to be read-out of STM. In sum, recall can occur via the learned A4 → A3 → M3 → M2 → M1 sensory-motor map. All the stages A1, A2, A3, and A4 are sensitive to phonetic information to different degrees. The next stage A5 can group the list representations of A4 into list representations which exceed the maximal list length that can be represented within A4 due to the finite STM capacity of A3. In other words, the list representations of A4 spontaneously parse a list into the most predictive sublist grouping that A4's prior experience permits, and A5 groups together the parsed components via associative mechanisms. Associative bonds also exist among the chunks of stage A5. The learned groupings from A5 to A4 can bind together multisyllable words as well as supraword verbal units. The learned interactions within A5 tend to associate verbal units which are highly correlated in the language. Since the verbal units which are capable of activating A5 are already of a rather high order, A5's associations encode semantic information, in addition to other highly correlated properties of these verbal units. The visual stage V* is not broken down in the present analysis because its several processing stages, such as boundary formation, featural filling-in, binocular matching, and object recognition (Carpenter and Grossberg, 1985; Cohen and Grossberg, 1984a, 1985a; Grossberg and Mingolla, 1985a, 1985b) go beyond the scope of this article.
The stages within V* that are used for visual object recognition (Carpenter and Grossberg, 1985), as distinct from visual form perception (Grossberg and Mingolla, 1985a), also use bottom-up adaptive filters and top-down learned expectancies. This is so because the problem of stabilizing a self-organizing code in a complex input environment imposes similar general design constraints on all sensory modalities (Grossberg, 1980). Not all of these visual processing stages input to the language system. We assume that associative maps from the object recognition stages in V* to A4 or A5 can lead to phonetic and semantic recognition as well as to motor recall of a visually presented letter or word via the sensory-motor paths previously described. Associative maps from A4 or A5 to V* can, in turn, match the correct visual template of a word, such as NURSE, against a phonetically similar target nonword, such as NERSE.

11. The Schvaneveldt-McDonald Lexical Decision Experiments: Template Feedback and List-Item Error Trade-off

Adaptive resonance theory predicts ordinal relationships between accuracy and reaction time. To illustrate these properties, we use the theory to analyse the lexical decision experiments of Schvaneveldt and McDonald (1981), which included both reaction time and tachistoscopic conditions. In these experiments, there were three types of primes (semantically related, neutral, and semantically unrelated) and two types of targets (normal words and altered words). The neutral primes were used to establish a baseline against which effects of related primes and unrelated primes could be evaluated. The altered words were formed by replacing one interior letter of each word with a different letter to form a nonword (e.g., TIGAR from TIGER). Assignment of non-words to the related or unrelated prime condition was based on the relationship between the prime and the word from which the non-word was constructed.
Subjects responded "word" or "non-word" manually by pressing a left-hand key or a right-hand key. "Each trial consisted of two events in the reaction time paradigm and three events in the tachistoscopic paradigm. In either case the first event was always the priming signal, which consisted of a string of x's or a valid English word. If the prime was neutral (x's), it was the same length...as the related word prime for the target on that trial. The prime remained on for 750 msec. and
was followed by a blank interval of 500 msec. No response to the prime was required, and subjects were only told that the first event was to prepare them to respond to the target. The second event, or target, appeared in the same location on the screen and was either an English word or an altered word, as defined by the task requirements. In the reaction time experiments, the target remained visible until the subject responded. The instructions were that the subject was to respond as rapidly and accurately as possible.
In the tachistoscopic experiments, the target was displayed for approximately 33.3 msec. and was followed by a masking pattern consisting of a string of number signs (#). Subjects were instructed to make as few errors as possible, and speed was not encouraged. The interval between target and mask, or interstimulus interval (ISI), was adjusted at the end of each block of trials in order to maintain an error rate of approximately .250" (Schvaneveldt and McDonald, 1981, p.678). The results of these manipulations are summarized in Figure 6. We will develop a systematic explanation of these data by comparing every pair of data points within
each graph. Due to the qualitative nature of the explanation, only the relative values of compared data points, not their absolute values, can be derived from such an analysis. The main point of this analysis is to compare and contrast the interactions between a visual list level and an auditory list level. The visual list level exists in the visual subsystem V*, which projects to the list levels A4 and/or A5 in the auditory system. Reciprocal connections from {A4, A5} to V* are also assumed to exist, and act as feedback templates. Under conditions of auditory presentation, an analogous analysis could be given of reciprocal interactions between A3 and A4. An issue of critical importance in our analysis concerns the criteria used by subjects to select a response. Our microtheory implies that an individual letter or word can be completely identified within its modality when the corresponding resonance equilibrates (Section 7). Many conditions of lexical decision experiments do not enable a fully blown resonance to evolve. Thus subjects are forced to use incomplete processing measures. Within our theory, the size and speed of an initial burst of activation at the appropriate list level correlates well with subject performance. The characteristics of such a burst depend upon factors such as whether or not a mask is imposed, or whether or not priming events, among other factors, lead to a match or mismatch situation. Our explanation therefore emphasizes the context-dependent nature of subject response criteria. An analysis of these transient dynamical events provides new insights into speed-accuracy trade-off (Pachella, 1974; Pew, 1969) and statistical decision-like performance under uncertainty (Green and Swets, 1966). We first consider the tachistoscopic condition (Figure 6a), whose analysis is simpler than the reaction time condition.
The lower curve in the tachistoscopic condition describes the error rate when word targets (W) followed related (R), neutral (N), or unrelated (U) primes. Unrelated primes caused more errors than neutral primes, whereas related primes caused fewer errors than neutral primes. The upper curve describes the error rate when nonword targets (Nw) followed R, N, or U primes. A nonword target is constructed from a word target that is in the designated relation R, N, or U to a prime by changing one of the word target's interior letters. With nonword targets, the reverse effect occurred: Related primes caused more errors than neutral primes, and unrelated primes caused fewer errors than neutral primes. These curves are consistent with two hypotheses: (1) The bottom-up pathways from V* to {A4, A5} are capable of activating the auditory list representations, but the action of the visual mask 33.3 msec. after the onset of the target obliterates target-induced item activation and prevents top-down template signals {A4, A5} → V* from causing resonant sharpening and equilibration.
Figure 6. Results from the Schvaneveldt and McDonald (1981) lexical decision experiments: (a) A tachistoscopic experiment with a backward pattern mask. Error rates in response to word (W) and nonword (Nw) targets that follow related word (R), neutral (N), or unrelated word (U) primes. (b) A reaction time experiment without a backward pattern mask. Reaction times in response to W and Nw targets that follow R, N, or U primes. (Reprinted with permission.)
Thus in this case, transient activations are the only dynamic events on which the model can base a decision. (2) On the average, R, N, and U primes cause equal amounts of interference in the bottom-up registration of word and nonword targets at the item level of V*. This hypothesis is compatible with the fact that the R, N, and U categories were defined by semantic relatedness to the targets, not by similarity of visual features. Independent experimental evidence for assumption (2) was provided by the gap detection experiment of Schvaneveldt and McDonald (1981). This experiment differed from the lexical decision experiments only in that the altered-word foils were constructed by introducing a gap in the letter which had been replaced in the lexical decision study. The task was to indicate whether or not a gap had been present. Detecting these gaps should not have required semantic information. In the tachistoscopic condition of this experiment, word targets (without gaps) were recognized with equal error rates after R, N, or U word primes. This result would not be expected if the different prime categories had caused unequal amounts of interference with the visual registration of the targets. The following discussion first compares priming effects on words, then on non-words. After that, we consider word-non-word comparisons to describe effects of response bias.

Word Target Comparisons--Tachistoscopic

Each tachistoscopic condition is identified by prime type followed by target type; for example, R/Nw denotes an R prime followed by an Nw target. The following discussion suggests why non-word responses to word targets increase in the order from R to N to U primes.

R/W - N/W: By hypothesis (2), the target word receives approximately equal interference on the item level from the prior R and N primes, because relatedness is defined semantically, not in terms of shared item features. By contrast,
the R prime strongly activates list nodes of the word target, due to their semantic relatedness. When the word target occurs after an R prime, its input to its list representation augments the prior activation that has lingered since the R prime presentation. The N prime does not significantly activate the list representations of target words. Hence the R prime facilitates recognition more than the N prime.

N/W - U/W: Both the N prime and the U prime cause equal amounts of interference on the item level. The N prime does not significantly activate the list representations of any target words. The U prime does not strongly activate list nodes of the word target. In fact, the U prime can activate list nodes that competitively inhibit the list nodes of the word target, due to the recurrent lateral inhibition between list representations that exists within a masking field (Section 10). Hence a target word is recognized more easily after an N prime than after a U prime.

Non-Word Target Comparisons--Tachistoscopic

The following discussion suggests why word responses to non-word targets decrease in the order from R to N to U primes.

R/Nw - N/Nw: By hypothesis (2), the target receives approximately equal interference on the item level from the prior R and N primes. The N prime does not significantly activate the list level. The R prime does. Moreover, the R prime significantly activates the same list representation that the subsequent target word activates. Thus the non-word target is more often misidentified as a word after an R prime than after an N prime.

N/Nw - U/Nw: The N prime does not significantly activate the list level. The U prime significantly activates word representations on the list level which inhibit the word representation that is activated by the nonword target. The nonword target can significantly activate its word representation on the list level. Template feedback cannot act to correct this misidentification.
Inhibition from the U prime can. Thus a nonword target is misidentified as a word more frequently after an N prime than after a U
prime. We now compare the transient bursts caused by word and non-word targets after the same type of prime. We show that both bursts are similar if the mask acts sufficiently quickly after target onset. The sizes of these bursts increase across the prime conditions from R to N to U. Thus the different error rates in response to word and non-word targets can be ascribed to response biases. For example, a subject who demands a fully blown resonance in order to respond "word" will be biased to respond "non-word" in all conditions.

R/W - R/Nw: Both word and nonword targets receive equal amounts of interference on the item level due to the R prime. Both word and non-word targets similarly activate the list representation of the word. This list representation received significant activation from the prior R prime. The target mask prevents template feedback from correcting activation of a list representation of a word by a non-word. Consequently, both word and non-word list representations generate similar activation bursts, both of which are amplified by the prior R prime.

U/W - U/Nw: By hypothesis (2), both the word target and the non-word target receive equal amounts of interference from the prior U word prime activating their item representations within V*. Due to the brief presentation of word and non-word targets, both activate similar list nodes in A4 and A5. In particular, letters that are interior to words or word syllables are less important in the adaptive filtering of items into lists than letters that are at the ends of words or syllables (Grossberg, 1978a, 1984a). This property is due to the primacy and recency gradients of temporal order information that form in the pattern of STM activation across active item representations. Due to the rapid onset of the target mask, template feedback from lists to items cannot correct the misclassification of a non-word target as a word.
Consequently, both word and non-word list representations generate similar activation bursts, both of which are attenuated by the prior U prime. Schvaneveldt and McDonald (1981) obtained a different pattern of results with the same design when no mask was presented and reaction times were recorded. We now trace the differences between the data of the reaction time condition (Figure 6b) and of the tachistoscopic condition (Figure 6a) to the action of the feedback template from list to item representations. In order to emphasize these differences, we compare pairs of data points in the reaction time condition with the corresponding pairs of data points in the tachistoscopic condition. Several of our explanations depend upon a trade-off that exists in these networks between an initial tendency to misidentify a word at the list level and the ability of this initial tendency to generate an error-correcting mismatch at the item level before false recognition can occur. We call this trade-off the List-Item Error Trade-Off. Two changes in the reaction time data are of particular interest. First, there was no increase in reaction time on word trials due to a U prime relative to an N prime, although a U prime increased error rate relative to an N prime in the tachistoscopic experiment. Second, an R prime decreased reaction time relative to an N prime, whereas an R prime increased error rate relative to an N prime in the tachistoscopic experiment. We will analyse the difference between the two types of experiments by making pairwise comparisons of the data points.

Non-Word Comparisons--Reaction Time

Each reaction time condition is again identified by prime type followed by target type, such that unmasked targets are indicated using an apostrophe; for example, N/W' denotes an N prime followed by an unmasked W target.

R/Nw' - U/Nw' vs.
R/Nw - U/Nw: These comparisons analyse the large difference between the error rates for R/Nw and U/Nw in the tachistoscopic condition, and the insignificant difference between both the error rates and the reaction times of R/Nw' and U/Nw' in the reaction time condition.
Compare R/Nw - R/Nw'. In condition R/Nw, both the R prime and the nonword target activate the list representation of the word from which the nonword was derived. Hence the nonword target generates a relatively large number of word misidentifications. The processing in condition R/Nw' starts out just as it does in condition R/Nw. In condition R/Nw', by contrast, the conjoint activation of the word's list representation by both the R prime and the nonword target generates large template feedback from the word list representation to the nonword item representation. Thus the very factor that caused many false word identifications of the nonword target in condition R/Nw leads to a relatively strong feedback signal with which to disconfirm this misidentification in condition R/Nw'. The mismatch between the nonword's item representation and the word's item representation causes a significant collapse in the item-to-list signals that were supporting the word's list representation. Thus the number of word misidentifications of the nonword target in condition R/Nw' is reduced relative to condition
R/Nw. Moreover, the fact that the word representation is still active due to the R prime when the nonword target is presented speeds up, as well as amplifies, the read-out of the word template, thereby causing a speed-up in reaction time. Compare R/Nw' - U/Nw'. By contrast with condition R/Nw', in condition U/Nw' the U prime inhibits the list representation of the word that the nonword target activates. Thus the net activation of the word's list representation by a nonword target is weaker in condition U/Nw' than in condition R/Nw'. Consequently the word template that is read-out by the list representation in condition U/Nw' is weaker than in condition R/Nw'. The U/Nw' template is therefore less effective than the R/Nw' template in mismatching the nonword item representation. Thus there exists a trade-off (List-Item Error Trade-off) between the initial tendency towards word misidentification at the list level and the ability of this initial tendency to trigger template feedback that can correct this tendency before erroneous recognition can occur. The two factors--degree of incorrect initial list activation and degree of item mismatch--tend to cancel each other out. This trade-off holds both for activity levels and for rates of activation. A large prior activation of the word's list representation by the R prime helps to cause a rapid read-out of the word template. This rapid reaction elicits a strong item mismatch that is capable of undercutting the already large initial activation of the word's list representation. The greater speed is hereby compensated for by the larger activation that must be undercut. Thus both in error rate and in reaction time, conditions R/Nw' and U/Nw' are similar despite the large difference between conditions R/Nw and U/Nw.

N/Nw' - U/Nw' vs.
N/Nw - U/Nw: The main points of interest concern why, in the reaction time paradigm, condition U/Nw' is reliably faster than condition N/Nw' even though their error rates are comparable, whereas in the tachistoscopic paradigm, the error rate in condition U/Nw is reliably less than that of condition N/Nw. Our task is to show how the feedback template in the N/Nw' - U/Nw' comparison alters the N/Nw - U/Nw dynamics of the tachistoscopic case. Once again, a List-Item Error Trade-off between the amount and timing of list representation activation and its effects on the amount and timing of item mismatch will form the core of our analysis. Our explanation will, moreover, differ from the hypothesis which Schvaneveldt and McDonald (1981) derived from these data. The N prime does not activate the list representations nearly as much as a word prime. Hence a nonword target can modestly activate its word representation without major interference or augmentation from the prior N prime. The word's list representation then reads-out a template whose ability to mismatch the nonword's item representation depends upon how strongly the nonword was able to activate the word's list representation. Thus the size of the initial tendency to misidentify the nonword target covaries with the ability of the feedback template to correct this error.
This type of balance between initial list activation and subsequent item mismatch also occurs in the U prime condition. In this condition, however, the U prime inhibits the
word list representation that the nonword target will activate. This list representation is thus less activated by the nonword after a U prime than after an N prime. The weaker tendency to misidentify the nonword as a word after a U prime leads to a weaker template read-out and a weaker item mismatch with the nonword's item representation. The similar error rates in the N and U nonword conditions can thus be traced to the List-Item Error Trade-off. Why, then, is the reaction time in the U prime condition reliably faster (36 ± 12 msec, p < .01) than in the N prime condition? We suggest that a major factor is the following one. In the U prime condition, the nonword target causes a relatively small initial activation of the word level due to the prior occurrence of the U prime. This relatively small initial activation tends to cause a relatively weak item mismatch which can only cause the initial list activation to become even smaller. We suggest that the absence of a large rate or amount of activation within the list level at any time provides a relatively rapid cue that a nonword has occurred. By contrast, within the N prime condition, the nonword target can cause a relatively large initial activation of the list level. Although the large template read-out that is caused by this activation can compensate for it, via the List-Item Error Trade-off, it takes more time to inhibit this initial activity surge than it does to register the absence of such a surge in the U prime condition. Thus the reaction time tends to be longer in the N prime condition than in the U prime condition. This explanation assumes that subjects in the reaction time condition tend to respond to the network's equilibrated activities rather than to momentary activity surges. Subjects who do respond to initial surges might respond faster and be more prone to make word misidentifications.
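The timing asymmetry between the N and U nonword conditions can be caricatured by the time needed to quench the initial list-level surge. This is an illustrative sketch, not the network's shunting dynamics; the decay rate, threshold, and function name are ours:

```python
def time_to_reject(initial_burst, feedback_rate=2.0, threshold=0.05, dt=0.001):
    """After template mismatch collapses item-to-list support, the list
    activation decays; a 'non-word' response becomes available once the
    activation falls below threshold.  A large initial burst (N prime
    condition) takes longer to quench than a small one (U prime
    condition), even though both end up below threshold."""
    x, t = float(initial_burst), 0.0
    while x > threshold:
        x -= feedback_rate * x * dt   # mismatch-driven collapse of the surge
        t += dt
    return t
```

Both conditions reach the same final state (hence comparable error rates), but the larger N-condition surge takes longer to inhibit than the absence of a surge takes to register, mirroring the reaction time advantage of the U prime condition.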
Schvaneveldt and McDonald (1981, p.681) discuss these data in terms of the "hypothesis that the priming event can lead to general activation of linguistic information-processing mechanisms" in response to an R prime or a U prime, but not an N prime. This hypothesis is used to explain why the reaction times to nonword targets after R or U primes are faster than after an N prime. This explanation does not seem to be sufficient to explain these data, at least not using a serial search model of verification. This is because the N prime should not generate any semantic set. Even if R and U primes can speed up search through their semantic sets, such a search would presumably take longer than no search at all. Our questioning of how a nonspecific activating mechanism could modulate serial search does not mean that we deny the existence of nonspecific activating mechanisms. Within the context of adaptive resonance theory, a level F_i can also nonspecifically activate a level F_{i+1}, in addition to specifically activating F_{i+1}. This nonspecific activation lowers the gain of F_{i+1} and thereby enables the F_i → F_{i+1} signals to supraliminally activate the STM traces of F_{i+1}. As summarized in Section 9, the need for such nonspecific gain control can be seen by considering how a higher stage F_{i+2} reads top-down templates into F_{i+1}. The theory suggests that F_{i+2} can actively read such template signals into F_{i+1} without supraliminally activating F_{i+1}. These templates subliminally prepare F_{i+1} for supraliminal bottom-up activation from F_i. When F_i does send bottom-up signals to F_{i+1}, it "opens the F_{i+1} gate" (that is, nonspecifically lowers the gain) to enable the bottom-up signals from F_i and the top-down template from F_{i+2} to begin to supraliminally match or mismatch, as the case might be.
A role for bottom-up nonspecific gain control within our theory would require that R and U primes are more vigorously processed by V* than are N primes, and hence can elicit larger nonspecific signals to {A_4, A_5}. This property does not occur in the outputs of the V* item level, because the R, N, and U primes are all constructed from equal numbers of letters. The property can, however, occur in the outputs of the V* list level, because a familiar word prime of a fixed length can generate greater activation within a masking field than can an unfamiliar prime with the same number of letters (Section 10). Thus,
Neural Dynamics of Word Recognition and Recall
the Schvaneveldt and McDonald (1981) hypothesis can be restated as the suggestion that V* contains a list representational stage that obeys the laws of a masking field. Then the present analysis still holds if lists in V* project to lists in A_4 and/or A_5.

R/Nw' - N/Nw' vs. R/Nw - N/Nw: This comparison is already implicit in the R/Nw' - U/Nw' vs. R/Nw - U/Nw and N/Nw' - U/Nw' vs. N/Nw - U/Nw comparisons. It is included to emphasize the importance of the List-Item Error Trade-off. In the tachistoscopic experiments, condition R/Nw has a higher error rate than condition N/Nw due to significant conjoint activation of word list representations by the R prime and the subsequent nonword target. In condition R/Nw', by contrast, the large initial activation of the nonword target due to the prior R prime elicits a faster and stronger template read-out and item mismatch than in the N prime condition. The faster-acting template in the R prime case than in the N prime case leads to a faster reaction time in condition R/Nw' than in condition N/Nw'. The stronger template read-out and item mismatch in the R prime case than in the N prime case compensates for the larger initial activation via the List-Item Error Trade-off, thereby reducing the relative error rates of R/Nw' to N/Nw' as compared with the error rates of R/Nw to N/Nw, by causing a significant collapse in the incorrect initial activation of word list representations. Our theory overcomes the objection we made to a serial search version of verification in the following way. At the moment when an Nw' target occurs, a word representation is already significantly active in case R/Nw' but not in case N/Nw'. Even if the word representation were capable of generating suprathreshold top-down signals to the item level at this time, these signals would not elicit suprathreshold item activation until the target occurred (Figure 3b).
In any case, the Nw' target causes the list representation of the corresponding word to exceed its output threshold faster when it follows an R prime than when it follows an N prime, since the N prime does not significantly activate this list representation. Thus the List-Item Error Trade-off begins to act sooner, and more vigorously, in the R/Nw' case than in the N/Nw' case.

Word Comparisons: Reaction Time

N/W' - U/W' vs. N/W - U/W vs. N/Nw' - U/Nw': The main points of interest are that, in the reaction time experiments, the reaction times to word targets after N and U primes were not significantly different (the U prime condition was slightly slower), while the error rate in the U prime condition was larger (.042) than in the N prime condition (.029), although this trend was not significant. A similar pattern of error rates was found in the tachistoscopic experiments (N/W - U/W). By contrast, the reaction time to a nonword target after a U prime was significantly less than the reaction time to a nonword target after an N prime (N/Nw' - U/Nw'). Why does letting a nonword-activated template act preserve relative error rates while producing approximately equal reaction times (N/W' - U/W' vs. N/W - U/W), whereas letting a word-activated template act does not produce approximately equal reaction times? When a word target occurs after an N prime, it can activate its representation on the list level without experiencing inhibition or facilitation due to the prior N prime. In the reaction time experiments, the word target can also cause read-out of a template capable of matching its item representation. By contrast, when a word target occurs after a U prime, its list representation is still experiencing residual inhibition due to the prior U prime. However, the word target remains on long enough for its list representation to be activated and to read out a template capable of matching its item representation.
Matching the item representation can, in turn, further amplify input to the list representation. In all, despite a slower initial start in being activated, if the word target stays on long enough, it generates equal levels of equilibrated list activation after both an N prime and a U prime. The initial difference in list activation levels, which is so important in the tachistoscopic experiments (N/W - U/W), becomes less so in the reaction time experiments (N/W' - U/W') due to the similar course of list-item
Chapter 7
equilibration after the initial activation difference is overcome. Nonetheless, a slightly longer reaction time can be caused by the inhibitory effect of the U prime on the initial course of processing. This explanation makes important use of the different effects of template-mediated item matching or mismatching on subsequent list activation. An item match due to a word target tends to strengthen the word list representation. An item mismatch due to a nonword target tends to weaken the word list representation according to the List-Item Error Trade-off. These different consequences of word targets and nonword targets are sufficient to explain the reaction time differences between N/W' - U/W' and N/Nw' - U/Nw'. It remains to say why the error rates are not the same in N/W' - U/W' whereas the reaction times are approximately the same. The upper limit on reaction time differences is set by the equal equilibration of list activation due to word targets that follow N primes and U primes. Even if subjects respond with a statistical distribution of reaction times that is concentrated throughout the time interval until equilibration is finalized, the difference in mean reaction times should not significantly exceed the brief interval needed to offset U prime inhibition. Nonetheless, if some subjects do respond at times before equilibration occurs, and use the level of list activation to determine word or nonword responses, then the initial U prime inhibition can cause a significant increase in nonword responses to the word target. This tendency should correlate with shorter reaction times.

R/W' - N/W' vs. R/W - N/W: The condition R/W error rate is less than the condition N/W error rate due to the priming of the correct word list representation after an R prime but not after an N prime. The same is true for the relative sizes of the condition R/W' and N/W' error rates, although this trend was not significant.
In conditions R/W' and N/W', a word target that follows an R prime can more quickly and strongly read out its template than a word target that follows an N prime. These templates tend to match and amplify the word item representation. Thus condition R/W' possesses a faster reaction time and a lower error rate than does condition N/W'. In summary, these lexical decision data can be qualitatively explained using the following properties: a) R, N, and U primes all generate comparable levels of interference to later targets at the item level. b) R primes subliminally activate semantically related word list representations via recurrent conditioned excitatory pathways within the list levels. U primes inhibit semantically unrelated word list representations via recurrent unconditioned inhibitory pathways within the list levels. N primes neither activate nor inhibit word list representations by a significant amount. These priming properties tacitly assume that the list representations of words with more than one letter can inhibit the list representations of their constituent letters, including the list representations of letters that form the N prime. This property, which is needed to learn selective word list representations, is achieved by designing the list levels as masking fields (Section 10). c) A larger activation of a word list representation due to a word target causes faster and stronger item matching and activity amplification at the item and list levels. d) A larger activation of a word list representation due to a nonword target causes faster and stronger item mismatch and activity suppression at the item and list levels. This compensatory property is called the List-Item Error Trade-off. It tacitly assumes that mismatch of a single letter in a word at the item level can cause a significant collapse in the activation of the word's list representation.
Thus, in order for our explanation to hold, activation of a list level representation must be selectively sensitive to word length in a manner that is consistent with the properties
of a masking field (Cohen and Grossberg, 1985).
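The dynamics summarized in properties (a)-(d) can be illustrated with a toy, discrete-time simulation. This is a sketch, not the model itself: the item and list levels are compressed into a single leaky "list node," and the prime biases, feedback gain, drive, and step size below are invented for illustration.

```python
# Toy sketch of properties (a)-(d): a single leaky "list node" whose
# starting activation is biased by the prime, plus template-mediated
# feedback whose sign depends on whether the target's items match
# (word) or mismatch (nonword) the read-out template. All parameter
# values are illustrative assumptions, not values from the theory.

PRIME_BIAS = {"R": 0.3, "N": 0.0, "U": -0.2}  # prime's effect on the list node
GAIN = 0.5    # strength of template-mediated feedback
STEPS = 20
DT = 0.2

def list_activation(prime, target_is_word, drive=1.0):
    """Iterate the node: bottom-up drive plus signed template feedback.

    Feedback is excitatory (item match) for a word target and
    inhibitory (item mismatch) for a nonword target. Because feedback
    strength grows with the node's own activation, a larger initial
    surge recruits a larger compensatory mismatch signal -- a crude
    version of the List-Item Error Trade-off of property (d).
    """
    x = max(0.0, PRIME_BIAS[prime])
    trajectory = [x]
    for _ in range(STEPS):
        feedback = GAIN * x if target_is_word else -GAIN * x
        x = max(0.0, x + DT * (-x + drive + feedback + PRIME_BIAS[prime]))
        trajectory.append(x)
    return trajectory

# Nonword targets: the R prime yields the largest initial surge, which
# the mismatch feedback then works against; the U prime's trajectory
# never surges, which the text treats as a fast cue that a nonword
# occurred.
for prime in ("R", "N", "U"):
    traj = list_activation(prime, target_is_word=False)
    print(prime, "initial:", round(traj[1], 3), "equilibrated:", round(traj[-1], 3))
```

In this toy the mismatch feedback only partially offsets the prime-induced surge; the full compensation described in the text depends on the masking-field and template machinery that the sketch omits.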
12. Word Frequency Effects in Recognition and Recall

Recognition events which occur during lexical decision experiments are often analysed as a world unto themselves. Relationships with other sorts of word recognition and recall phenomena are often neither noted nor used to provide additional constraints upon one's understanding of word recognition phenomena. The remaining sections of the article relate lexical decision data to another sort of word recognition and to recall. A number of experiments have demonstrated that word frequency manipulations can have different effects on recall than on recognition of prior occurrence. A unified explanation of these effects is suggested by our theory, and leads to interesting comparisons with previous explanations. Our explanation invokes unitization and inter-list associative reactions in a basic way. Thus whereas our explanation of lexical decision data did not require an analysis of LTM changes, our analysis of memory of previous occurrence does. Some of the main experimental phenomena will now be summarized. In lexical decision experiments, different effects of word frequency occur with and without the use of a backward pattern mask. Under conditions of backward masking, word frequency typically does not have a significant effect on accuracy of word recognition, although insignificant improvements in recognition have been noted as a function of increasing frequency (Manelis, 1977; Paap and Newsome, 1980). By contrast, if a backward mask is not used, then high frequency words are consistently classified faster than low frequency words (Landauer and Freedman, 1968; Rubenstein, Garfield, and Millikan, 1970; Scarborough, Cortese, and Scarborough, 1977). This difference has been used to support the verification model hypothesis that word frequency does not influence word encoding, but does influence the later stage of word verification.
A paradoxical pattern of data emerges when influences of word frequency on recognition and recall are contrasted. This data pattern, which is often called the word frequency effect, states that high-frequency words are recalled better than low-frequency words, but low-frequency words are recognized better than high-frequency words (Gorman, 1961; Glanzer and Bowles, 1976; Schulman, 1967; Underwood and Freund, 1970). In order to understand this effect, it is necessary to carefully define the relevant experimental procedures. Underwood and Freund (1970) used a two-alternative forced choice recognition procedure. In stage 1 of their experiment, subjects studied a list of 50 low-frequency (L) words or a list of 50 high-frequency (H) words. In stage 2 of the experiment, subjects were shown pairs of words. One word in each pair was chosen from the list of study words. The other word in each pair was chosen from a list of either H words or from a list of L words. Thus subjects fell into one of four conditions: L-L, L-H, H-L, H-H, in which the first letter designates whether the study list was composed of L or H words, and the second letter designates whether the distractor word in each pair was chosen from an L or H list. The subjects were instructed to identify the word in each pair which they had seen on the study trial. The results of Underwood and Freund (1970) are summarized in Figure 7. The main word frequency effect compares H-H with L-L; that is, when studied L words were paired with unstudied L words, recognition was better than when studied H words were paired with unstudied H words. This effect reversed in the H-L and L-H conditions. Studied H words in the H-L condition were recognized better than studied L words in either the L-H or the L-L conditions. To understand these results, one needs to consider the effect of word frequency on the study trial as well as the effect of word frequency differences on the test trial.
Underwood and Freund (1970) offer an interesting explanation of their results. Our explanation will be compared and contrasted with theirs below. This type of experiment raises fundamental questions about the processes that lead to judgments of recognition. Unlike a judgment between word and nonword, all items
Figure 7. Results from the Underwood and Freund (1970) experiment concerning word frequency influences on the recognition of previous occurrence (error rates plotted against the frequency of old words): Forced-choice recognition of (old-new) word pairs leads to more errors for old high frequency words paired with new high frequency words (H-H) than for old low frequency words paired with new low frequency words (L-L). By contrast, more errors occur for old low frequency words paired with new high frequency words (L-H) than the converse (H-L).
in the Underwood and Freund (1970) experiment are words. The task is to judge which of these words have recently been seen. This type of recognition has been called different names by different authors. Concepts such as "the judgment of previous occurrence" (Mandler, Pearlstone, and Koopmann, 1969; Mandler, 1980), "familiarity" (Juola, Fischler, Wood, and Atkinson, 1971; Kintsch, 1967; Mandler, 1980), "situational frequency" (Underwood and Freund, 1970), and "encoding specificity" (Tulving, 1976; Tulving and Thomson, 1973) have been used to distinguish this type of recognition from other types of recognition, such as the word-nonword recognition of lexical decision tasks, or recognition of the individual items in a list. The Underwood and Freund (1970) experiment underscores the difficulty of making such a concept precise, by showing that recent presentation can interact in a complex fashion with word frequency. Otherwise expressed, these data show that subjects may confuse the internal recognition indices that are due to recent presentation with internal recognition indices that are due to the cumulative effects of many past presentations. This is not, in itself, surprising when one considers that it is the long-term cumulative effects of many "recent presentations" which yield the internal representations that subserve word frequency effects. How one should proceed from such general observations is, however, far from obvious, as we shall see by reviewing two of the leading models of this type of recognition.
13. Analysis of the Underwood and Freund Theory

The explanation of Underwood and Freund (1970) of the data in Figure 7 is a prototype of later explanations of the word frequency effect. These authors assign a measure of "situational frequency" to each item. A study trial is assumed to increase the situational frequency of a studied word from 0 to 1. A second study trial is assumed to increase the situational frequency to 2. However, discriminability of an item increases as a slower-than-linear function of situational frequency, so that two study trials have little more effect than one study trial on discriminability. It is also assumed that a subject chooses that item in a pair of items with the higher discriminability. A critical assumption that differentiates the effects of H and L words concerns the role of implicit associational responses (IARs). If a studied item elicits an IAR, then the IAR also acquires a frequency of 1. It becomes an "old" item even if it does not explicitly appear in the study list. It is also assumed that H words have more IARs than L words, and that the IARs of H words tend to be other H words. Consequently, H-L recognition is best. H words directly receive an increment of 1 due to study. They may also indirectly achieve a greater discriminability than 1 by being the IARs of other studied H words. By contrast, the unstudied L words are unlikely to be the IARs of studied H words, so that the frequency differences between H and L words in H-L pairs will be maximal. By contrast, in the H-H situation, many of the unstudied IARs of studied H words may be the new words in the H-H pairs. Furthermore, studied H words that are IARs of other studied H words derive little extra advantage from this fact. Hence many of the H-H pairs will tend to have similar discriminability values, so that many errors will occur.
Condition L-H should produce fewer errors than condition H-H, because the studied L words acquire a situational frequency of 1 without increasing the frequency of the unstudied H words. Condition L-L should also produce fewer errors than H-H for a similar reason. The other comparisons follow less easily from this analysis. The advantage of condition H-L over condition L-L can only be attributed to the slight benefit received by studied H words that are IARs of other studied words. In the data, the error difference between H-L and L-L is at least a third of the difference between H-L and H-H. If studied words derive little benefit from being IARs of other studied H words, then this difference should be small. If, however, studied H words derive a great deal of benefit from being IARs of other studied H words, then condition H-H should not produce nearly so many errors. However, this difficulty would be less pronounced if it is the case that L words have more L word IARs than do H words.
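The mechanics of this account can be made concrete with a small Monte-Carlo sketch. Everything quantitative below is an invented assumption rather than a value from Underwood and Freund (1970): the IAR probability P_IAR_H, the Gaussian decision noise SIGMA, and the particular slower-than-linear discriminability function.

```python
import random

# Monte-Carlo toy of the Underwood-Freund account. A studied word has
# situational frequency 1; a studied H word may gain a second unit by
# being an IAR of other studied H words; a new H distractor may have
# gained a unit by occurring as an IAR during study of an H list. L
# words are assumed never to arise as IARs here. All parameters are
# illustrative assumptions.

random.seed(1)
SIGMA = 0.15      # assumed decision noise
P_IAR_H = 0.3     # assumed chance that a given H word was an IAR

def discrim(sf):
    # Slower than linear: the second unit of situational frequency
    # adds much less discriminability than the first.
    return sf / (1.0 + sf)

def trial(study_class, distractor_class):
    """One forced choice between a studied word and a new distractor."""
    sf_old = 1.0
    if study_class == "H" and random.random() < P_IAR_H:
        sf_old += 1.0    # studied H word is also an IAR of other studied H words
    sf_new = 0.0
    if study_class == "H" and distractor_class == "H" and random.random() < P_IAR_H:
        sf_new = 1.0     # new H distractor arose as an IAR during study
    old = discrim(sf_old) + random.gauss(0.0, SIGMA)
    new = discrim(sf_new) + random.gauss(0.0, SIGMA)
    return old > new     # correct if the studied word is chosen

def error_rate(study, distractor, n=20000):
    return 1.0 - sum(trial(study, distractor) for _ in range(n)) / n

for study, dist in (("H", "H"), ("L", "L"), ("L", "H"), ("H", "L")):
    print("%s-%s errors: %.3f" % (study, dist, error_rate(study, dist)))
```

Under these assumptions the toy reproduces the model's central predictions (H-H worst, H-L best), while the L-L and L-H conditions come out nearly identical, in line with the observation above that the remaining comparisons follow less easily from the analysis.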
A further difficulty concerns the model's implications concerning the real-time events that translate situational frequencies into decisions. The situational frequency changes may be likened to changes in LTM. The theory does not, however, explain how a new H item which was an IAR of a studied H item translates its LTM situational frequency value of 1 into a decision on a test trial. In particular, suppose that the frequency change between a studied H word and an IAR H word were due to a change in the LTM strengths within the conditionable pathways between internal representations of these words. Then these LTM traces would have no effect on the activation of the IAR H word on a test trial unless the studied H word with which it is paired is the word which activated it as an IAR on the study trial. Thus, something more than the strengthening of interword associative linkages is needed to explicate the situational frequency concept. Suppose, for example, that all the activated H words, both studied words and IAR words, form new LTM linkages with internal representations of contextual cues, notably with visual representations of the experimental context. Then these contextual cues could activate the list representations of the studied words and the IAR H words on recognition trials. Such a contextual contribution could save the formal properties of the "situational frequency" concept, but still could not explain the relatively large error difference between cases H-L and L-L. Despite these uncertain and incomplete aspects of the Underwood and Freund (1970) model, it has provided a seminal framework for later models of the word frequency effect.

14. Analysis of the Mandler Theory

Mandler (1980) and Mandler, Goodman, and Wilkes-Gibbs (1982) have further developed the theory and the data of these recognition and recall differences. In the Mandler et al. (1982) experiments, a lexical decision task provided the occasion for studying H and L words.
The subjects' task, as usual, was to identify words and nonwords. They were not told that they would be asked to remember these stimuli. Half the subjects were then given the same words and asked to define them. The remaining subjects were not. Then all subjects were asked to return in 24 hours. At that time, half of the subjects in each group underwent a recognition test in which old and new words were intermixed. More L words were recognized than H words. After the recognition test, these subjects were also given a recall test. Recall was better for H words than for L words. The other half of the subjects were tested for recall before recognition. A prior recognition test was found to increase recall significantly, but it also increased errors due to distractors from the recognition test. Mandler, Goodman, and Wilkes-Gibbs (1982) also did an analysis aimed at replicating the major results of Underwood and Freund (1970), although they did not use a two-alternative forced choice paradigm. This analysis was restricted to data from the definition task group, since this task provided a learning experience analogous to that in the Underwood and Freund (1970) study. Mandler et al. (1982) analysed the hit rates and false alarm rates for the L and H new and old words that were used during the recognition test. They showed that the d' for the L-L comparison was larger than that for the H-H comparison, whereas the d' for the H-L comparison was larger than that for the L-H comparison. Also the d' for the H-L comparison was larger than that for the H-H comparison. The model of Mandler (1980) introduces refinements and modifications of the Underwood and Freund (1970) model, but also new difficulties. Mandler replaces the notion of situational frequency and the slower-than-linear increase of discriminability with frequency by introducing his concept of familiarity. Mandler discusses how an event's familiarity and a retrieval process can work together to determine recognition.
He lets "F = the probability that an event will be called old on the basis of its familiarity value; R = the probability that an event will be called old as a result of retrieval processes; Rg = the probability that an event will be called old" (Mandler, 1980, p.257).
These probabilities obey the equation

Rg = F + R - FR.     (2)
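Equation (2) is algebraically identical to Rg = 1 - (1 - F)(1 - R), the probability that at least one of two independent routes (familiarity, retrieval) yields an "old" judgment; a miss requires both routes to fail. A quick check of the identity:

```python
# Equation (2), Rg = F + R - F*R, rewritten as the union of two
# independent routes to an "old" judgment: 1 - (1 - F)*(1 - R).

def rg(f, r):
    return f + r - f * r

assert abs(rg(0.6, 0.3) - (1 - (1 - 0.6) * (1 - 0.3))) < 1e-12
print(round(rg(0.6, 0.3), 2))  # prints 0.72: recognition exceeds either route alone
```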
Both Mandler (1980) and Mandler et al. (1982) argue that retrieval processes are not rate-limiting in determining the reversal that occurs during word recognition. Consequently they develop the properties of the familiarity concept to explain the word frequency effect. "Familiarity of an event is determined by the integration, perceptual distinctiveness, and internal structure of that event...and by the amount of attention expended on the event or item itself [italics ours]. Retrievability...is determined by interevent relationships and the elaboration of the target event in the context of other events or items" (Mandler et al., 1982, p.33). Thus intra-item changes in familiarity bear the total burden of explaining the recognition reversal in the Mandler (1980) model, unlike the role of IARs in the Underwood and Freund (1970) model. The actual implementation of the familiarity concept to explain the word frequency effect faces several types of difficulty. Despite these difficulties, the intuitions that led to the Mandler (1980) theory are instructive. Hence we will describe both the model's intuitive basis and its formal difficulties before suggesting how our theory overcomes these difficulties. Mandler (1980) assumes that every word has a base familiarity before it is presented in a recognition experiment. Typically the base familiarity F_0 of a high frequency word is larger than the base familiarity value f_0 of a low frequency word. In a theory which explains judgments of recent occurrence using a familiarity concept, a base familiarity value must be defined, or else one could not even compare the familiarities of old word targets and new word distractors. Mandler (1980) and Mandler et al. (1982) acknowledge this need with their discussions of the Underwood and Freund (1970) experiment and the Glanzer and Bowles (1976) experiment.
The latter experiment revealed the basic fact “that false alarms demonstrate the dominance of high frequency words; that is, hit rates are higher for low frequency words, but false alarm rates are higher for high frequency words. In other words, in the absence of retrievability the recognition judgement (for distractors) depends on the familiarity of the item” (Mandler, 1980, p.267). The crucial step in a theory based upon familiarity is to explain how base familiarity is altered when a word is presented, or how a new word becomes an old word. Since
F_0 > f_0,     (3)
one needs an operation that can reverse the effect of word frequency on the base familiarity value. Speaking formally, the question becomes: How do increments ΔF and Δf in familiarity alter the base familiarity values F_0 and f_0 to generate new familiarity values F_1 and f_1 such that

F_1 < f_1?     (4)

Mandler (1980) makes the plausible assumption that "the increment in familiarity (integration) for all words is a constant function of the amount of time that the item is presented" (Mandler, 1980, p.268). In other words,

d = ΔF = Δf.     (5)
Given assumption (5), no obvious additive model can convert inequality (3) into inequality (4). Mandler therefore chooses a ratio model. He suggests that "the operative F value for a word be d/(d + F), where F is the preexperimental base familiarity value of the word" (Mandler, 1980, p.268). In other words,

F_1 = d / (d + F_0)     (6)
and

f_1 = d / (d + f_0).     (7)
Mandler goes on to apply this definition by using his equation (2) to determine when

f_1 + r - r f_1 > F_1 + R - R F_1     (8)
(Mandler, 1980, p.268) under the condition that the retrievability of low frequency words (r) and of high frequency words (R) are equal. He notes that inequality (3) is the basis for deriving inequalities (4) and (8). Despite the ingenuity of this familiarity concept, it is not entirely satisfactory. For example, a comparison of (3) and (8) shows that a single study trial, no matter how short, must reverse the inequality (3) governing base familiarities, which are determined by a large number of prior word exposures under natural conditions. This paradoxical conclusion follows because, for any positive Δf = ΔF, no matter how small, F_1 < f_1 if F_0 > f_0. Another way to state this problem is as follows. One might expect the operative familiarity value to approach the base familiarity value as the increment in familiarity approaches zero; that is,
f_1 → f_0 as Δf → 0     (9)

and

F_1 → F_0 as ΔF → 0.     (10)
Instead, the definitions (6) and (7) imply that

f_1 → 0 as Δf → 0     (11)

and

F_1 → 0 as ΔF → 0.     (12)
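These formal properties can be checked numerically. The sketch below compares the ratio model (6)-(7) with the amended definitions (13)-(14) that the text proposes below; the base values F_0 = 0.8, f_0 = 0.2 and the constant A = 5 are illustrative choices only, not estimates from data.

```python
# A numerical look at the two familiarity schemes. ratio_model is
# Mandler's operative familiarity (6)-(7); amended_model is the
# modified definition (13)-(14) proposed in the text. F0 = 0.8,
# f0 = 0.2, and A = 5.0 are illustrative values.

def ratio_model(base, d):
    # eqs. (6)-(7): operative familiarity after an increment d
    return d / (d + base)

def amended_model(base, d, A):
    # eqs. (13)-(14): base value plus a saturating increment
    return base + A * d / (d + base)

F0, f0, A = 0.8, 0.2, 5.0

for d in (1.0, 0.1, 0.001):
    F1, f1 = ratio_model(F0, d), ratio_model(f0, d)
    # The ratio model reverses F0 > f0 for ANY positive d, and both
    # operative values collapse toward 0 (not toward F0, f0) as d -> 0.
    assert F1 < f1
    G1, g1 = amended_model(F0, d, A), amended_model(f0, d, A)
    print("d=%5.3f  ratio: F1=%.4f f1=%.4f   amended: F1=%.4f f1=%.4f"
          % (d, F1, f1, G1, g1))

# The amended model recovers the base values in the small-d limit, and
# reverses the ordering only when A*d exceeds (d + f0)*(d + F0).
assert abs(amended_model(F0, 1e-9, A) - F0) < 1e-6
assert amended_model(F0, 1.0, A) < amended_model(f0, 1.0, A)    # A*d = 5.0 > 2.16
assert amended_model(F0, 0.01, A) > amended_model(f0, 0.01, A)  # A*d = 0.05 < 0.17
```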
One might wish to salvage the situation by replying that, if an item has not been studied in the experiment, it has a zero familiarity value. However one cannot then explain how new distractors generate false alarms based on word frequency in the Underwood and Freund (1970) and Glanzer and Bowles (1976) experiments. Moreover, the Mandler et al. (1982) analysis of d' scores in their definition task experiment is inconsistent with the Underwood and Freund (1970) data in an important respect. Mandler et al. (1982) assume that familiarity is an intra-item variable and that forced choice responses are based upon selection of the more familiar word in each pair. Under such assumptions, a d' analysis requires that the lines between points (L-H, H-H) and points (L-L, H-L) be parallel. This is not true in the Underwood and Freund (1970) data (Figure 7). Thus, although the Mandler (1980) model escapes its worst difficulties when it does not combine old words with new words, it fails to be able to make such a comparison both formally and in important data. These difficulties suggest that the Mandler (1980) familiarity concept is insufficient to capture major properties of subjects' ability to judge previous occurrences. One can escape the limit problem expressed in (9) and (10) by redefining F_1 and f_1 as follows. Let

F_1 = F_0 + A d / (d + F_0)     (13)

and

f_1 = f_0 + A d / (d + f_0),     (14)
where A is a positive constant. Then the desired limit properties (9) and (10) hold. Moreover, if A is sufficiently large and d is not too small, then F_1 < f_1 even if F_0 > f_0. Using this new model, one can explain the Underwood and Freund (1970) data as follows. To explain the greater errors for H-H than for L-L, we need to show that
F_1 - F_0 < f_1 - f_0.     (15)

By (13) and (14), this reduces to the inequality

F_0 > f_0     (3)
for the base familiarities. To explain the greater errors for L-H than for H-L, we need to show that

F_1 - f_0 > f_1 - F_0.     (16)

This inequality reduces to 2(d + f_0)(d + F_0) > A d,
which is attainable for a range of d values, given any A, f_0, and F_0. Although the new definitions (13) and (14) of familiarity escape some problems of the Mandler (1980) model, they face challenges of a more subtle nature. Mandler himself has emphasized the cumulative nature of familiarity: "Each additional presentation and processing of an event adds some specified degree of integration to the target" (Mandler, 1980, p.267). "Repeated recognition tests (presentations) not only prevent loss of familiarity but actually increment it" (Mandler, 1980, p.269) [italics ours]. If presentations alter an item's familiarity by incrementing the integration of an item's internal representation, then where do incremental terms such as d/(d + F) leave off and new base familiarities begin? In other words, no matter how large d gets in a formula such as

F_1 = F_0 + A d / (d + F_0),     (13)
whether due to a single sustained presentation or to many brief presentations, the formula does not show how an old "base" familiarity F0 generates a new "base" familiarity F1. It makes no physical sense to arbitrarily write

F0 = F1
for the next round of recognition experiments. A formula such as (13) fails to explain how the cumulative effects of many presentations determine the influence of word frequency on the base familiarities F0 and f0. It is also difficult to understand how a subject could separately store, as part of an item's "integration," an increment d and a base value F0 for 24 hours before comparing them on later recognition trials. Due to the fundamental nature of these issues in explaining the word frequency effect, a different theory must be sought which captures the insights of the Underwood and Freund (1970) and Mandler (1980) models, but also escapes their pitfalls.

15. The Role of Intra-List Restructuring and Contextual Associations
Our explanation of the word frequency effect contains elements in common with both the Underwood and Freund (1970) model and the Mandler (1980) model, since we suggest that both inter-item and intra-item organizational changes subserve this effect. As in our interpretation of the IARs of the Underwood and Freund (1970) model, we note that contextual associations between V* and {A4, A5}, as well as between A4 and
A5, can form as a result of studying a list of old words. This conclusion does not require any new assumptions within our theory. Such LTM associations always form when the relevant item and list representations are simultaneously activated in STM. Such associations can also be quickly restructured by competing LTM associations or masked by competitive STM interactions unless their triggering environmental events can utilize or form distinctive list representations, buffered by their own top-down templates, between which to build these new associative bonds (Grossberg, 1978a). Thus a unique visual experience, via contextually mediated bonds between V* and {A4, A5}, can have an enduring effect on word recognition. In a similar way, embedding an item in a unique verbal list can generate strong contextual effects due to the formation of new list representations within {A4, A5}. Such contextual effects are not usually important in a lexical decision task because such a task does not define lists of old items and of new items. Contextual effects are, however, important in experiments studying the word frequency effect, serial verbal learning, and the like.

In order to explain how contextual associations can be differentially formed with old L words and old H words, we first need to understand how L words and H words differentially activate their internal representations in STM. We show how such differential STM activations can differentially alter the intra-list LTM organization, or "integration" to use Mandler's phrase, of the corresponding list representations. These differential STM activations can also differentially form new inter-list LTM associations with contextual representations in V* and, under appropriate experimental conditions, with A5. Such inter-list LTM reactions are closer to the Underwood and Freund concepts of IARs. Both of these types of LTM changes cooperate to alter the total reaction to old words on test trials.
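The claim that such associations form whenever item and list representations are simultaneously active in STM can be illustrated with a minimal associative sketch (a generic gated Hebbian rule in Python, our illustration rather than the chapter's learning equation (1); the variable names are ours):

```python
import numpy as np

# Minimal associative sketch (a generic gated Hebbian rule, our illustration
# rather than the chapter's learning equation (1); variable names are ours).

def hebbian_update(z, pre_stm, post_stm, rate=0.1):
    """Grow LTM traces z[i, j] in proportion to co-active STM levels pre[i] * post[j]."""
    return z + rate * np.outer(pre_stm, post_stm)

context = np.array([1.0])          # STM activity of a contextual representation (V*)
list_stm = np.array([0.8, 0.0])    # list codes: index 0 = studied old word, 1 = new word

z = np.zeros((1, 2))               # contextual LTM traces, initially silent
for _ in range(5):                 # five study-trial co-activations
    z = hebbian_update(z, context, list_stm)

print(z)                           # only the co-active (old word) association has grown
```

Only the co-activated pairing acquires an LTM trace, which is the sense in which studying a list of old words suffices to build contextual associations without further assumptions.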
Even in an experiment wherein old words are not divided into two classes, such as L and H, contextual associations can still contribute an increment in integration over and above the increment caused within the internal representations of the old words. This contextual increment helps to reduce the overall error rate in recognizing old words as distinct from new words. We suggest that subjects use relatively simple STM indices to make these recognition judgments. In experiments wherein presentation of a memory item can activate an informative contextual impression, say via an A4 → A5 → V* pathway, or via a V* → {A4, A5} → V* pathway, the differential STM activations of the contextual representations themselves may be used as cues. In experiments wherein the experimental context is the same on study trials and test trials, context can differentially act primarily via V* → {A4, A5} associations. Then the subject is reduced to using STM indices such as the differential sizes of STM bursts or equilibration values to judge old from new. These are the same types of STM indices that subjects use to make their judgments in a lexical decision task (Section 11). Our theory hereby unifies the explanation of lexical decision and word frequency data by showing how different types of experiments can differentially probe the same perceptual and cognitive mechanisms.

16. An Explanation of Recognition and Recall Differences

To start our explanation of the main recognition and recall differences described in Sections 12-14, we note the obvious fact that both L and H words can be recognized as words by experimental subjects. Both types of words possess unitized internal representations at the list level (Figure 4). Their differences, or lack of differences, in recognition and recall properties thus cannot be attributed to unitization or lack thereof per se.
Several quantitative differences in the learned encoding and interlist interactions between H and L words are, however, relevant to our discussion. The first difference can be understood by considering a property of the bottom-up feature tuning process that was described in Section 7. There we concluded that an
LTM vector z_j(t) equals a time-average of all the bottom-up signal patterns S_j^(k) that its node v_j can ever sample. If a particular pattern S_j^(M) appears with a high relative frequency, then its weight in (18) becomes relatively large. Once z_j(t) approximately equals S_j^(M), further presentations of S_j^(M) cause relatively small changes in z_j(t). The converse is also true. If a particular S_j^(M) has not been occurring frequently, then its reoccurrence can begin to cause a significant change in z_j(t) towards S_j^(M). Thus presentation of infrequent patterns can begin to significantly retune LTM, other things being equal. This conclusion depends upon the fact that the pattern of signals T_j = S_j^(M) · z_j which is caused by an infrequent pattern S_j^(M) is sharpened by contrast enhancement before being stored as a pattern of STM activities x_j. Thus relatively infrequent patterns can cause relatively large changes in the LTM vectors z_j of those encoding populations v_j whose STM activities survive the process of contrast enhancement. These surviving STM activities can then drive learning within the corresponding LTM vectors z_j via the learning equation (1).

The contrast enhancement property helps to explain the insignificant difference between H and L word recognition in lexical decision tasks using a backward mask. The differences that may exist between the tuning of the LTM vectors to H and L words tend to be offset by contrast enhancement in STM. Nonwords, by contrast, cannot easily activate any list representations. This conclusion no longer follows when the bottom-up activation process can read out top-down templates, as in lexical decision tasks which do not use backward masks. Then the same template-matching property which implies the List-Item Error Trade-off (Section 11) progressively amplifies the small differences between H and L word LTM tuning that exist in the bottom-up and top-down pathways to generate large STM differences in the list representations of H and L words. Thus, whereas initial contrast enhancement within a list level tends to reduce word frequency effects, asymptotic template match-mismatch differences between list and item levels tend to amplify word frequency effects.

A significant difference in the speed of H and L word recognition is also generated by the better top-down matching, and hence larger and faster STM activation, that occurs in response to H words than to L words. This difference in the size and degree of sharpening of the H word list representations also helps to generate better H word recall than L word recall. In recall, as opposed to recognition, the task is to generate old words, whether H words or L words. On a study trial, H words generate greater STM activations than L words.
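The frequency-dependent retuning described above can be sketched numerically (an illustrative discrete time-average, not the exact form of equation (18); the step size and patterns are our assumptions):

```python
import numpy as np

# Illustrative sketch of frequency-dependent LTM retuning: a discrete
# time-average that moves an LTM vector toward each sampled pattern.
# (Our toy version of the idea behind equation (18), not its exact form.)

def present(z, s, rate=0.3):
    """Move the LTM vector z a fixed fraction of the way toward pattern s."""
    return z + rate * (s - z)

s_high = np.array([1.0, 0.0])    # pattern of a high-frequency (H) word
s_low = np.array([0.0, 1.0])     # pattern of a low-frequency (L) word

z_high = np.zeros(2)
z_low = np.zeros(2)

for _ in range(20):              # H pattern sampled often: z_high equilibrates
    z_high = present(z_high, s_high)
z_low = present(z_low, s_low)    # L pattern sampled once: z_low still far off

# One further presentation retunes the L trace far more than the H trace.
delta_high = np.linalg.norm(present(z_high, s_high) - z_high)
delta_low = np.linalg.norm(present(z_low, s_low) - z_low)
print(delta_low > delta_high)    # prints: True
```

Because the H trace has already equilibrated to its frequently sampled pattern, a further presentation barely moves it, whereas the rarely sampled L trace is still far from equilibrium and retunes substantially.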
This activation difference is reflected on later recognition trials in the lower error rates of H-L than of L-L. During a study trial, the greater activation of H words facilitates the formation of inter-item chunks (with A5) and contextual associations (with V*). On later recall trials, those chunks and contextual associations that survive intervening competitive recoding can lead to better retrieval of old H words than of old L words. The better tuning of the sensory-motor associative map A4 → A3 → M3 (Section 10) to H words than to L words also contributes to this effect.

We can use these properties to begin our explanation of why old L words paired with new L words are recognized better than old H words paired with new H words, yet why old H words paired with new L words are recognized better than old L words paired
with new H words. Before turning to the role of contextual associations, we consider the interaction of two properties: (I) the greater LTM retuning of list representations caused by L word presentation than by H word presentation; (II) the greater STM activations, both of a word's list representation and of its inter-list associates, that are caused by an H word than by an L word.

Factor (I) may be compared with the Mandler (1980) concept of a change in familiarity. As in Mandler's model, a larger tuning change is caused by L words than by H words. Unlike Mandler's model, a very brief training trial need not reverse the base "familiarity" values. Factor (II) may be compared with the IAR concept of Underwood and Freund (1970). As in their model, we assert that learned inter-list interactions play a role. Unlike their model, we do not associate values of 1 with all the IARs of a study item. Instead we focus upon the effect of study conditions on the total STM activation generated by a new or old L word or H word.

Using these factors, we draw the following conclusions. The (old H)-(new H) comparison causes a relatively high number of errors, in part, because study of an H word causes relatively little LTM tuning of its chunk (Factor I). Hence on a later recognition trial, both the old H words and the new H words cause large, and similar, amounts of total STM activation (Factor II). The (old L)-(new L) case causes a relatively low number of errors because the study of an L word causes relatively rapid LTM tuning of its chunk (Factor I). Hence on a later recognition trial, an old L word causes a relatively large STM activation, whereas a new L word causes a relatively small STM activation (Factor II). The better recognition of an old H word paired with a new L word than of an old H word paired with a new H word is also easily understood from this perspective (Factor II).
In order to explain the reversal effect, we note that an old H word generates significantly more STM activation than a new L word (Factor II), whereas the additional STM activation caused by an old L word (Factor I) is offset by the large STM activation caused by a new H word (Factor II).

Factor (III) in our explanation is the formation of contextual associations to old L word and H word list representations. Contextual associations can form to all old words and their associates, but not to new words that are not associates of old words. Contextual associations can thereby lower the overall error rate in a recognition task by augmenting the STM activations of all old words.

Some finer learned interactions can occur among the list representations of study items and with contextual list representations when the study period is organized in a way that approximates serial verbal learning or paired associate learning conditions (Grossberg, 1969a, 1982a, 1982c; Grossberg and Pepe, 1971), as when lists of L words are studied together (Underwood and Freund, 1970). Although such learned interactions have not been needed to explain the main comparisons within the above data, they do provide a clearer understanding of how H items become associatively linked with many list representations, and they may eventually help to explain why certain H-H, L-L, H-L, and L-H comparisons are not invariant across experimental conditions. Finally, other mechanisms described in this paper may help to expand the data base explained by our theory. For example, the competitive interaction of all viable sublist representations in the masking field (Section 10) could explain why new distractors which are compounds of previously presented words produce increased false alarm rates (Ghatala, Levin, Bell, Truman, and Lodico, 1978).
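The joint action of Factors (I) and (II) on the four pair types can be caricatured with a toy computation (all numerical values below are illustrative assumptions, not quantities from the theory): baseline STM activation is larger for H words, the study increment is larger for L words, and errors decrease as the old-new activation gap grows.

```python
# Toy model of Factors (I) and (II); all numbers are illustrative assumptions,
# not parameters of the theory.
BASE = {"H": 1.0, "L": 0.4}         # Factor (II): H words activate STM more strongly
STUDY_GAIN = {"H": 0.1, "L": 0.5}   # Factor (I): studying an L word retunes LTM more

def activation(freq, old):
    """Total STM activation generated by a test word of a given frequency class."""
    return BASE[freq] + (STUDY_GAIN[freq] if old else 0.0)

def old_new_gap(old_freq, new_freq):
    """Recognition errors shrink as the old-new activation gap widens."""
    return activation(old_freq, True) - activation(new_freq, False)

pairs = [("H", "H"), ("L", "L"), ("H", "L"), ("L", "H")]
gaps = {pair: old_new_gap(*pair) for pair in pairs}
print(gaps)
# Smallest gap (most errors) for H-H, larger for L-L, largest for H-L; the
# L-H reversal appears because the old L word's study increment is offset by
# the strong baseline activation of the new H word.
```

Under these assumptions the gap ordering reproduces the qualitative error pattern discussed in the text, including the L-H reversal.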
17. Concluding Remarks

The present article describes a Macrotheory and a Microtheory capable of modifying and unifying several models of language-related behavior, and of characterizing the relationships between different types of language-related data that are often treated separately from one another. Development of circular reactions, analysis-by-synthesis, motor theory of speech perception, serial and paired associate verbal learning, free recall, categorical perception, selective adaptation, auditory contrast, word superiority effects, word frequency effects on recognition and recall, and lexical decision tasks can
now all be analysed using a single processing theory (Figure 4). The core of this theory consists of a few basic design principles, such as the temporal chunking problem, the LTM Invariance Principle, and the factorization of pattern and energy (Sections 9 and 10). These principles are realized by real-time networks that are built up from a few basic mechanisms, such as bottom-up adaptive filters, top-down learned templates, and cooperative-competitive interactions of one sort or another. No alternative theory has yet been shown to have a comparable explanatory and predictive range.

In particular, as the debate continues concerning the relative virtues of matrix models and convolution models (Anderson et al., 1977; Eich, 1985; Murdock, 1983, 1985; Pike, 1984), it should be realized that the embedding field theory, which is assimilated within the adaptive resonance theory, long ago suggested a detailed explanation of the classical bowed and skewed serial position curve and of the error distributions found in serial verbal learning (Grossberg, 1969a, 1982a; Grossberg and Pepe, 1971). These fundamental data have not yet been explained by either the matrix model or the convolution model. We trace this explanatory gap to the absence within these models of the very sorts of design principles and nonlinear mechanisms that we have used to explain data about word recognition and recall. We suggest that any future theory that may supplant the present one must also include such design principles and nonlinear mechanisms.

Superimposed upon these design principles and mechanisms are a number of new functional ideas that can be used in a model-independent way to think about difficult data. For example, the idea of adaptive resonance provides a new vantage point for understanding how learned codes are stabilized against chaotic recoding by the "blooming buzzing confusion" of irrelevant experience,
and for thinking about processing stages that interact via rapidly cycling feedback interactions. The concept of resonant equilibration provides a helpful way to think about verification and attentional focusing without being led into a serial processing metaphor that seems to have no plausible physical realization. The List-Item Error Trade-off provides a new perspective for analysing certain deviations from speed-accuracy trade-off, especially in situations wherein matching due to top-down feedback can compensate for initial error tendencies. The concepts of top-down subliminal control and bottom-up supraliminal control rationalize the distinction between attentional gain control and attentional priming, and indicate how to supplant the intuitive concepts of "automatic activation" and "conscious control" by a mechanistic understanding. Such known principles, mechanisms, and functional ideas enable a large data base to be integrated concerning how humans learn and perform simple language skills, and provide a foundation for future studies of the dynamical transformations whereby higher language skills are self-organized.
REFERENCES

Anderson, J.A., Silverstein, J.W., Ritz, S.R., and Jones, R.S., Distinctive features, categorical perception, and probability learning: Some applications of a neural model. Psychological Review, 1977, 84, 413-451.
Antos, S.J., Processing facilitation in a lexical decision task. Journal of Experimental Psychology: Human Perception and Performance, 1979, 5, 527-545.
Basar, E., Flohr, H., Haken, H., and Mandell, A.J. (Eds.), Synergetics of the brain. New York: Springer-Verlag, 1983.
Becker, C.A., Allocation of attention during visual word recognition. Journal of Experimental Psychology: Human Perception and Performance, 1976, 2, 556-566.
Becker, C.A. and Killion, T.H., Interaction of visual and cognitive effects in word recognition. Journal of Experimental Psychology: Human Perception and Performance, 1977, 3, 389-401.
Becker, C.A., Schvaneveldt, R.W., and Gomez, L., Semantic, graphemic, and phonetic factors in word recognition. Paper presented at the meeting of the Psychonomic Society, St. Louis, Missouri, November, 1973.
Carpenter, G.A. and Grossberg, S., Neural dynamics of adaptive pattern recognition: Attentional priming, search, category formation, and amnesia. Submitted for publication, 1985.
Cermak, L.S. and Craik, F.I.M. (Eds.), Levels of processing in human memory. Hillsdale, NJ: Erlbaum, 1979.
Cohen, M.A. and Grossberg, S., Some global properties of binocular resonances: Disparity matching, filling-in, and figure-ground synthesis. In P. Dodwell and T. Caelli (Eds.), Figural synthesis. Hillsdale, NJ: Erlbaum, 1984 (a).
Cohen, M.A. and Grossberg, S., Neural dynamics of brightness perception: Features, boundaries, diffusion, and resonance. Perception and Psychophysics, 1984, 36, 428-456 (b).
Cohen, M.A. and Grossberg, S., Neural dynamics of speech and language coding: Developmental programs, perceptual grouping, and competition for short term memory. Human Neurobiology, in press, 1985.
Coltheart, M., Davelaar, E., Jonasson, J.T., and Besner, D., Access to the internal lexicon. In S. Dornic (Ed.), Attention and performance VI. New York: Academic Press, 1977.
Cooper, W.E., Speech perception and production: Studies in selective adaptation. Norwood, NJ: Ablex, 1979.
Craik, F.I.M. and Lockhart, R.S., Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 1972, 11, 671-684.
Craik, F.I.M. and Tulving, E., Depth of processing and the retention of words in episodic memory. Journal of Experimental Psychology: General, 1975, 104, 268-294.
Eich, J.M., Levels of processing, encoding specificity, elaboration, and CHARM. Psychological Review, 1985, 92, 1-38.
Ghatala, E.S., Levin, J.R., Bell, J.A., Truman, D.L., and Lodico, M.G., The effect of semantic and nonsemantic factors on the integration of verbal units in recognition memory. Journal of Experimental Psychology: Human Learning and Memory, 1978, 4, 647-655.
Glanzer, M. and Bowles, N., Analysis of the word frequency effect in recognition memory. Journal of Experimental Psychology: Human Learning and Memory, 1976, 2, 21-31.
Gorman, A.M., Recognition memory for nouns as a function of abstractness and frequency. Journal of Experimental Psychology, 1961, 61, 23-29.
Green, D.M. and Swets, J.A., Signal detection theory and psychophysics. Huntington, NY: Robert E. Krieger, 1966.
Grossberg, S., On the serial learning of lists. Mathematical Biosciences, 1969, 4, 201-253 (a).
Grossberg, S., On learning, information, lateral inhibition, and transmitters. Mathematical Biosciences, 1969, 4, 255-310 (b).
Grossberg, S., On learning of spatiotemporal patterns by networks with ordered sensory and motor components, I: Excitatory components of the cerebellum. Studies in Applied Mathematics, 1969, 48, 105-132 (c).
Grossberg, S., On learning and energy-entropy dependence in recurrent and nonrecurrent signed networks. Journal of Statistical Physics, 1969, 1, 319-350 (d).
Grossberg, S., Some networks that can learn, remember, and reproduce any number of complicated space-time patterns, II. Studies in Applied Mathematics, 1970, 49, 135-166 (a).
Grossberg, S., Neural pattern discrimination. Journal of Theoretical Biology, 1970, 27, 291-337 (b).
Grossberg, S., Contour enhancement, short-term memory, and constancies in reverberating neural networks. Studies in Applied Mathematics, 1973, 52, 217-257.
Grossberg, S., A neural model of attention, reinforcement, and discrimination learning. International Review of Neurobiology, 1975, 18, 263-327.
Grossberg, S., Adaptive pattern classification and universal recoding, I: Parallel development and coding of neural feature detectors. Biological Cybernetics, 1976, 23, 121-134 (a).
Grossberg, S., Adaptive pattern classification and universal recoding, II: Feedback, expectation, olfaction, and illusions. Biological Cybernetics, 1976, 23, 187-202 (b).
Grossberg, S., A theory of human memory: Self-organization and performance of sensory-motor codes, maps, and plans. In R. Rosen and F. Snell (Eds.), Progress in theoretical biology, Vol. 5. New York: Academic Press, 1978 (a).
Grossberg, S., Behavioral contrast in short-term memory: Serial binary memory models or parallel continuous memory models? Journal of Mathematical Psychology, 1978, 17, 199-219 (b).
Grossberg, S., How does a brain build a cognitive code? Psychological Review, 1980, 87, 1-51.
Grossberg, S., Studies of mind and brain: Neural principles of learning, perception, development, cognition, and motor control. Boston: Reidel Press, 1982 (a).
Grossberg, S., Processing of expected and unexpected events during conditioning and attention: A psychophysiological theory. Psychological Review, 1982, 89, 529-572 (b).
Grossberg, S., Associative and competitive principles of learning and development: The temporal unfolding and stability of STM and LTM patterns. In S.I. Amari and M. Arbib (Eds.), Competition and cooperation in neural networks. New York: Springer-Verlag, 1982 (c).
Grossberg, S., The quantized geometry of visual space: The coherent computation of depth, form, and lightness. The Behavioral and Brain Sciences, 1983, 6, 625-692.
Grossberg, S., Unitization, automaticity, temporal order, and word recognition. Cognition and Brain Theory, 1984, 7, 263-283.
Grossberg, S., The adaptive brain, I: Learning, reinforcement, motivation, and rhythm. Amsterdam: North-Holland, 1985 (a).
Grossberg, S., The adaptive brain, II: Vision, speech, language, and motor control. Amsterdam: North-Holland, 1985 (b).
Grossberg, S., The adaptive self-organization of serial order in behavior: Speech, language, and motor control. In E.C. Schwab and H.C. Nusbaum (Eds.), Perception of speech and visual form: Theoretical issues, models, and research. New York: Academic Press, 1985 (c).
Grossberg, S. and Kuperstein, M., Neural dynamics of adaptive sensory-motor control: Ballistic eye movements. Amsterdam: North-Holland, 1985.
Grossberg, S. and Mingolla, E., Neural dynamics of form perception: Boundary completion, illusory figures, and neon color spreading. Psychological Review, 1985, 92, 173-211 (a).
Grossberg, S. and Mingolla, E., Neural dynamics of perceptual grouping: Textures, boundaries, and emergent segmentations. Submitted for publication, 1985 (b).
Grossberg, S. and Pepe, J., Spiking threshold and overarousal effects in serial learning. Journal of Statistical Physics, 1971, 3, 95-125.
Grossberg, S. and Stone, G., Neural dynamics of attention switching and temporal order information in short term memory. Submitted for publication, 1985.
Halle, M. and Stevens, K.N., Speech recognition: A model and a program for research. IRE Transactions on Information Theory, 1962, IT-8, 155-159.
Juola, J.F., Fishler, I., Wood, C.T., and Atkinson, R.C., Recognition time for information stored in long-term memory. Perception and Psychophysics, 1971, 10, 8-14.
Kintsch, W., Memory and decision aspects of recognition learning. Psychological Review, 1967, 74, 496-504.
Landauer, T. and Freedman, J., Information retrieval from long-term memory: Category size and recognition time. Journal of Verbal Learning and Verbal Behavior, 1968, 7, 291-295.
Lapinski, R.H.
and Tweedy, J.R., Associate-like nonwords in a lexical-decision task: Paradoxical semantic context effects. Paper presented at the meeting of the Society for Mathematical Psychology, New York University, August, 1976.
Lashley, K.S., The problem of serial order in behavior. In L.A. Jeffress (Ed.), Cerebral mechanisms in behavior. New York: Wiley, 1951.
Lewis, J., Semantic processing of unattended messages using dichotic listening. Journal of Experimental Psychology, 1970, 85, 225-228.
Liberman, A.M., Cooper, F.S., Shankweiler, D.S., and Studdert-Kennedy, M., Perception of the speech code. Psychological Review, 1967, 74, 431-461.
Liberman, A.M. and Studdert-Kennedy, M., Phonetic perception. In R. Held, H. Leibowitz, and H.-L. Teuber (Eds.), Handbook of sensory physiology (Vol. VIII). Heidelberg: Springer-Verlag, 1978.
Mandler, G., Recognizing: The judgement of previous occurrence. Psychological Review, 1980, 87, 252-271.
Mandler, G., Goodman, G.O., and Wilkes-Gibbs, D.L., The word-frequency paradox in recognition. Memory and Cognition, 1982, 10, 33-42.
Mandler, G., Pearlstone, Z., and Koopmans, H.J., Effects of organization and semantic similarity on recall and recognition. Journal of Verbal Learning and Verbal Behavior, 1969, 8, 410-423.
Manelis, J., Frequency and meaningfulness in tachistoscopic word perception. American Journal of Psychology, 1977, 99, 269-280.
Mann, V.A. and Repp, B.H., Influence of preceding fricative on stop consonant perception. Journal of the Acoustical Society of America, 1981, 69, 548-558.
Marcel, A., Conscious and preconscious recognition of polysemous words: Locating the selective effects of prior verbal context. In R. Nickerson (Ed.), Attention and performance VIII. Hillsdale, NJ: Erlbaum, 1980.
McClelland, J.L. and Rumelhart, D.E., An interactive activation model of context effects in letter perception, Part I: An account of basic findings. Psychological Review, 1981, 88, 375-407.
McDonald, J.E., Strategy in a lexical decision task. Unpublished Master's Thesis, New Mexico State University, 1977.
McDonald, J.E. and Schvaneveldt, R., Strategy in a lexical-decision task. Paper presented at the meeting of the Rocky Mountain Psychological Association, Denver, April, 1978.
McKay, D.G., Aspects of the theory of comprehension, memory, and attention. Quarterly Journal of Experimental Psychology, 1973, 25, 22-40.
Meyer, D.E., Schvaneveldt, R., and Ruddy, M.G., Loci of contextual effects on visual word recognition. In P. Rabbitt and S. Dornic (Eds.), Attention and performance, V. New York: Academic Press, 1974.
Morton, J., Interaction of information in word recognition. Psychological Review, 1969, 76, 165-178.
Morton, J., A functional model for memory. In D.A. Norman (Ed.), Models of human memory. New York: Academic Press, 1970.
Murdock, B.B. Jr., A distributed memory model for serial-order information. Psychological Review, 1983, 90, 316-338.
Murdock, B.B. Jr., Convolution and matrix systems: A reply to Pike. Psychological Review, 1985, 92, 130-132.
Neely, J.H., Semantic priming and retrieval from lexical memory: Evidence for facilitatory and inhibitory processes. Memory and Cognition, 1976, 4, 648-654.
Neely, J.H., Semantic priming and retrieval from lexical memory: The roles of inhibitionless spreading activation and limited capacity attention. Journal of Experimental Psychology: General, 1977, 106, 226-254.
Neisser, U., Cognitive psychology.
New York: Appleton-Century-Crofts, 1967.
Paap, K.R. and Newsome, S.L., A perceptual confusion account of the WSE in the target search paradigm. Perception and Psychophysics, 1980, 27, 444-456.
Paap, K.R., Newsome, S.L., McDonald, J.E., and Schvaneveldt, R.W., An activation-verification model for letter and word recognition: The word superiority effect. Psychological Review, 1982, 89, 573-594.
Pachella, R.G., The interpretation of reaction time in information-processing research. In B. Kantowitz (Ed.), Human information processing: Tutorials in performance and cognition. Hillsdale, NJ: Erlbaum, 1974.
Pew, R.W., The speed-accuracy operating characteristic. Acta Psychologica, 1969, 30, 16-26.
Piaget, J., The origins of intelligence in children. New York: Norton, 1963.
Pike, R., Comparison of convolution and matrix distributed memory systems for associative recall and recognition. Psychological Review, 1984, 91, 281-294.
Pitts, W. and McCulloch, W.S., How we know universals: The perception of auditory and visual forms. Bulletin of Mathematical Biophysics, 1947, 9, 127-147.
Posner, M.I. and Snyder, C.R.R., Attention and cognitive control. In R.L. Solso (Ed.), Information processing and cognition: The Loyola symposium. Hillsdale, NJ: Erlbaum, 1975 (a).
Posner, M.I. and Snyder, C.R.R., Facilitation and inhibition in the processing of signals. In P. Rabbitt and S. Dornic (Eds.), Attention and performance, V. New York: Academic Press, 1975 (b).
Ratliff, F., Mach bands: Quantitative studies on neural networks in the retina. New York: Holden-Day, 1965.
Repp, B.H. and Mann, V.A., Perceptual assessment of fricative-stop coarticulation. Journal of the Acoustical Society of America, 1981, 69, 1154-1163.
Rubenstein, H., Garfield, L., and Millikan, J., Homographic entries in the internal lexicon. Journal of Verbal Learning and Verbal Behavior, 1970, 9, 487-494.
Rumelhart, D.E. and McClelland, J.L., An interactive activation model of context effects in letter perception, Part 2: The contextual enhancement effect and some tests and extensions of the model. Psychological Review, 1982, 89, 60-94.
Salasoo, A., Shiffrin, R.M., and Feustel, T.C., Building permanent memory codes: Codification and repetition effects in word identification. Journal of Experimental Psychology: General, 1985, 114, 50-77.
Samuel, A.G., van Santen, J.P.H., and Johnston, J.C., Length effects in word perception: We is better than I but worse than you or them. Journal of Experimental Psychology: Human Perception and Performance, 1982, 8, 91-105.
Samuel, A.G., van Santen, J.P.H., and Johnston, J.C., Reply to Matthei: We really is worse than you or them, and so are ma and pa. Journal of Experimental Psychology: Human Perception and Performance, 1983, 9, 321-322.
Scarborough, D.L., Cortese, C., and Scarborough, H.S., Frequency and repetition effects in lexical memory. Journal of Experimental Psychology: Human Perception and Performance, 1977, 3, 1-17.
Schuberth, R.E., Context effects in a lexical-decision task. Paper presented at the 16th annual meeting of the Psychonomic Society, San Antonio, Texas, November, 1978.
Schulman, A.I., Word length and rarity in recognition memory. Psychonomic Science, 1967, 9, 211-212.
Schvaneveldt, R.W.
and McDonald, J.E., Semantic context and the encoding of words: Evidence for two modes of stimulus analysis. Journal of Experimental Psychology: Human Perception and Performance, 1981, 7, 673-687.
Schvaneveldt, R.W., Meyer, D.E., and Becker, C.A., Lexical ambiguity, semantic context, and visual word recognition. Journal of Experimental Psychology: Human Perception and Performance, 1976, 2, 243-256.
Sperling, G., The information available in brief visual presentations. Psychological Monographs, 1960, 74, 1-29.
Sternberg, S., The discovery of processing stages: Extensions of Donders' method. In W.G. Koster (Ed.), Attention and performance, II. Amsterdam: North-Holland, 1969.
Stevens, K.N., Segments, features, and analysis by synthesis. In J.F. Kavanagh and I.G. Mattingly (Eds.), Language by ear and by eye. Cambridge, MA: MIT Press, 1972.
Stevens, K.N. and Halle, M., Remarks on analysis by synthesis and distinctive features. In W. Wathen-Dunn (Ed.), Proceedings of the AFCRL symposium on models for the perception of speech and visual form. Cambridge, MA: MIT Press, 1964.
Studdert-Kennedy, M., Liberman, A.M., Harris, K.S., and Cooper, F.S., Motor theory of speech perception: A reply to Lane's critical review. Psychological Review, 1970, 77, 234-249.
Tulving, E., Ecphoric processes in recall and recognition. In J. Brown (Ed.), Recall and recognition. London: Wiley and Sons, 1976.
Neural Dynamics of Word Recognition and Recall
Tulving, E. and Thomson, D.M., Encoding specificity and retrieval processes in episodic memory. Psychological Review, 1973, 80, 352-373.
Underwood, B.J., Experimental psychology, Second edition. New York: Appleton-Century-Crofts, 1966.
Underwood, B.J. and Freund, J.S., Word frequency and short term recognition memory. American Journal of Psychology, 1970, 85, 343-351.
Wheeler, D.D., Processes in word recognition. Cognitive Psychology, 1970, 1, 59-85.
Young, R.K., Serial learning. In T.R. Dixon and D.L. Horton (Eds.), Verbal behavior and general behavior theory. Englewood Cliffs, NJ: Prentice-Hall, 1968.
Chapter 8
NEURAL DYNAMICS OF SPEECH AND LANGUAGE CODING: DEVELOPMENTAL PROGRAMS, PERCEPTUAL GROUPING, AND COMPETITION FOR SHORT TERM MEMORY

Preface

This Chapter describes computer simulations of the massively parallel architecture which we have called a masking field. The Chapter focuses primarily on the role of a masking field in speech and language recognition, where it instantiates a list code processing level. A masking field is also useful, however, in visual object recognition and general cognitive information processing. This is true because a masking field simultaneously detects multiple groupings within its input patterns and assigns weights to the codes for these groupings which are predictive with respect to the contextual information embedded within the patterns and the prior learning of the system. A masking field automatically rescales its sensitivity as the overall size of an input pattern changes, yet also remains sensitive to the microstructure within each input pattern. In this way, a masking field distinguishes between codes for pattern wholes and for pattern parts, yet amplifies the code for a pattern part when it becomes a pattern whole in a new input context. A masking field F2 performs a new type of multiple scale analysis in which unpredictive list codes are competitively masked, or inhibited, and predictive codes are amplified in direct response to trainable signals from an adaptive filter F1 → F2 that is activated by item codes over an input source F1. This recognition code becomes less distributed across F2 when the item representation across F1 contains more information upon which to base an unambiguous prediction of which input pattern is being processed. Thus a masking field suggests a solution of the credit assignment problem by embodying a real-time code for the predictive evidence contained within its input patterns.
The Chapter describes simple developmental rules whereby a masking field can grow and characterizes these rules mathematically. As yet unpublished computer simulations with Michael Cohen show how associative learning within the F1 → F2 adaptive filter can adaptively sharpen the unitized grouping code which the masking field F2 selects to represent the item code across F1. As in all of our work, the masking field project began with a qualitative analysis of a large data base (Chapters 6 and 7), and gradually led to a quantitative mathematical and simulation analysis aimed at completely characterizing the new design concepts as a real-time parallel architecture. Once characterized, such an architecture can be embodied in hardware, where it will run in real-time due to its direct interactions and parallel structure. Several such hardware projects are presently underway. As increasing numbers of modelers in the mind-brain sciences develop their stable architectures into a form suitable for hardware implementation, theoretical psychology and neurobiology will fully realize their potential as a valid basis for a biologically relevant artificial intelligence.
Human Neurobiology 5, 1-22 (1986). © 1986 Springer-Verlag, Inc. Reprinted by permission of the publisher.
NEURAL DYNAMICS OF SPEECH AND LANGUAGE CODING: DEVELOPMENTAL PROGRAMS, PERCEPTUAL GROUPING, AND COMPETITION FOR SHORT TERM MEMORY
Michael Cohen† and Stephen Grossberg‡
Abstract

A computational theory of how an observer parses a speech stream into context-sensitive language representations is described. It is shown how temporal lists of events can be chunked into unitized representations, how perceptual groupings of past item sublists can be reorganized due to information carried by newly occurring items, and how item information and temporal order information are bound together into context-sensitive codes. These language units are emergent properties due to intercellular interactions among large numbers of nerve cells. The controlling neural network, called a masking field, can arise through simple rules of neuronal development: random growth of connections along spatial gradients, activity-dependent self-similar cell growth, and competition for conserved synaptic sites. These growth rules generate a network architecture whose parallel interactions can directly activate correct sublist groupings or chunks without the need for prior search. The network accomplishes direct access by performing a new type of spatial frequency analysis of temporally evolving activity patterns. This analysis enhances correct list encodings and competitively masks inappropriate list encodings in short term memory. The enhanced short term memory activities embody a hypothesis, or code, which represents the input stream. This code can predict, or anticipate, subsequent events by assigning activities to groupings which have not yet fully occurred, based on the available evidence. No serial programs or cognitive rule structures exist within the network to accomplish these properties. The neurons obey membrane equations undergoing shunting recurrent on-center off-surround interactions. Several novel design principles are embodied by the network, such as the sequence masking principle, the long term memory invariance principle, and the principle of self-similar growth.
† Supported in part by the National Science Foundation (NSF IST-8417756) and the Office of Naval Research (ONR N00014-83-K0337).
‡ Supported in part by the Air Force Office of Scientific Research (AFOSR 85-0149) and the Office of Naval Research (ONR N00014-83-K0337).
1. Introduction: Context-Sensitivity of Self-Organizing Speech and Language Units

One of the fundamental problem areas in speech and language research concerns the characterization of the functional units into which speech sounds are integrated by a fluent speaker. A core issue concerns the context-sensitivity of these functional units, or the manner in which the perceptual grouping into functional units can depend upon the spatiotemporal patterning of the entire speech stream. Such context-sensitivity is evident on every level of speech and language organization. For example, a word such as Myself is used by a fluent speaker as a unitized verbal chunk. In different verbal contexts, however, the components My, Self, and Elf of Myself are all words in their own right. Moreover, although an utterance which ended at My would generate one grouping of the speech flow, an utterance which went on to include the entire word Myself could supplant this encoding with one appropriate to the longer word. Thus in order to understand how context-sensitive language units are perceived by a fluent speaker, one must analyse how all possible groupings of the speech flow are analysed through time, and how certain groupings can be chosen in one context without preventing other groupings from being chosen in a different context. This problem has been stated in different ways by different authors. Darwin (1976) has, for example, asked how "our conscious awareness...is driven to the highest level present in the stimulus." Repp (1982) has noted "that the perception of phonetic distinctions relies on the integration of multiple acoustic cues and is sensitive to the surrounding context in very specific ways...listeners make continuous use of their tacit knowledge of speech patterns." Studdert-Kennedy (1980) has written that "The view of speech perception that seems to be emerging...is of an active continuous process...
of perceptual integration across the syllable." The functional units into which a fluent speaker groups a speech stream are dependent upon the observer's prior language experiences. For example, a unitized representation for the word Myself does not exist in the brain of a speaker who is unfamiliar with this word. Thus an adequate theory of how an observer parses a speech stream into context-sensitive language units needs to analyse how developmental and learning processes bias the observer to experience some perceptual groupings above others. Such developmental and learning processes are often called processes of self-organization in theoretical biology and physics (Basar, Flohr, Haken, and Mandell, 1983). Lindblom, MacNeilage, and Studdert-Kennedy (1983) have recently suggested the importance of self-organizing processes in speech perception. The present article contributes to a theory of speech and language perception which arose from an analysis of how a language system self-organizes in real-time in response to its complex input environment (Grossberg, 1978a, 1982a). This approach emphasizes the moment-by-moment dynamical interactions that control language development, learning, and memory. Within this theory, properties of language performance emerge from an analysis of the system constraints that govern stable language learning. This analysis has led to the discovery of a small number of dynamical principles and mechanisms which have been used to unify and predict a large data base concerning speech and language. Data concerning lexical decisions, recognition and recall of previous occurrences, development of circular reactions, imitation and unitization of novel sounds, matching phonetic to articulatory requirements, serial and paired associate verbal learning, free recall, categorical perception, temporal order information in short term memory, selective adaptation,
auditory contrast, and word superiority effects have been analysed and predicted using this theoretical framework (Grossberg, 1969, 1978a, 1978b, 1982a, 1984a, 1985; Grossberg and Pepe, 1971; Grossberg and Stone, 1985a, 1985b). These articles should be consulted for analyses of relevant data and of alternative models. We believe that the unifying power of the theory is due to the fact that principles of self-organization, such as the laws regulating development, learning, and unitization,
are fundamental in determining the design of behavioral mechanisms. This perspective suggests that the lack of alternative unifying accounts of this data base is due to the use of models that do not sufficiently tap the principles of self-organization that govern behavioral designs.

2. Developmental Rules Imply Cognitive Rules as Emergent Properties of Neural Network Interactions
The present article quantitatively analyses and further develops a core process within this theory: the process whereby internal language representations encode a speech stream in a context-sensitive fashion. This process can be stated in several equivalent ways: how temporal lists of events are chunked into unitized representations; how perceptual groupings of past item sublists are reorganized due to information carried by newly occurring items; how item information and temporal order information are bound together to generate maximally predictive encodings of temporally occurring lists. This article briefly reviews the principles which Grossberg (1978a, 1985) proposed for this process, outlines real-time networks which we have further developed to instantiate the principles, and demonstrates the competence of the networks using massive computer simulations. These networks can be interpreted as networks of neurons whose interconnections arise through simple rules of neuronal growth and development. The context-sensitive speech and language representations which are activated within these networks are emergent properties due to intercellular interactions among large numbers of nerve cells. These properties are not built into the individual cells. Nor are there any serial algorithms or cognitive rule structures defined within the network. Instead, the networks illustrate how simple rules of neuronal development on the cellular level can give rise to a system whose parallel interactions act as if it obeys complex rules of context-sensitive encoding on the cognitive level. This is not the only way in which the theory relates different levels of behavioral organization. We also show how organizational principles which are critical in visual processing can be specialized for use in language processing. In other words, similar mechanisms can be used both for spatial processing and for temporal processing.
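The flavor of the developmental rules named in the Abstract (random growth of connections, self-similar cell growth, competition for conserved synaptic sites) can be conveyed by a minimal sketch. The code below is our illustration, not the article's simulation code; the function name, parameters, and discrete formulation are assumptions made for exposition.

```python
import random

def grow_masking_field(n_items, list_node_sizes, seed=0):
    """Illustrative growth of item-to-list connections.

    Random growth: each list node samples item nodes at random.
    Self-similar growth: a node of size s contacts s item nodes.
    Conserved synaptic sites: each node's incoming weights sum to a
    fixed total, so broader nodes have individually weaker contacts."""
    rng = random.Random(seed)
    field = []
    for size in list_node_sizes:
        contacts = rng.sample(range(n_items), size)  # random growth
        field.append({item: 1.0 / size for item in contacts})
    return field

# Three hypothetical list nodes coding groupings of 1, 2, and 3 items.
field = grow_masking_field(n_items=5, list_node_sizes=[1, 2, 3])
assert [len(node) for node in field] == [1, 2, 3]
assert all(abs(sum(node.values()) - 1.0) < 1e-9 for node in field)
```

Even this toy version shows the key structural consequence exploited later in the article: nodes that group longer lists sample broader regions of the item field while total synaptic strength per node stays conserved.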
The theory hereby illustrates how a small number of dynamical laws can unify data on several levels of organization, ranging from microscopic rules of neuronal development to macroscopic properties of cognitive coding, and across modalities such as vision and audition.

3. A Macrocircuit for the Self-Organization of Recognition and Recall
The encoding, or chunking, process which is analysed herein takes place within the macrocircuit depicted in Figure 1. This macrocircuit governs self-organization of language recognition and recall processes via a combination of auditorily-mediated language processes (the levels Ai), visual recognition processes (level V*), and motor control processes for language production (the levels Mj). These stages interact internally via conditionable pathways (black lines) and externally via environmentally-mediated auditory feedback of self-generated sounds (dotted lines). All the stages Ai and Mj within the theory obey similar general network laws. These laws describe cooperative and competitive interactions among the cells, or nodes, that exist at each level. Such cooperative-competitive interactions endow the network levels with properties of cellular activation and short term memory (STM). Different levels exhibit specialized properties of STM due to two types of factors: differences in the interconnections and other parameters of the cells at each level; and the very fact that the different levels, by occurring within different locations of the total network hierarchy, receive different types of inputs. One task of the theory is to show how a wide variety
[Figure 1 appears here. Its boxes are labeled: Visual Object Recognition System; Semantic Network; List Parsing in STM (Masking Field); Item and Order in Motor STM; Iconic Features; Motor Features; Inputs; Outputs; Self-Generated Auditory Feedback.]
Figure 1. A macrocircuit governing self-organization of recognition and recall processes: The text explains how auditorily mediated language processes (the Ai), visual recognition processes (V*), and motor control processes (the Mj) interact internally via conditionable pathways (black lines) and externally via environmental feedback (dotted lines) to self-organize the various processes which occur at the different network stages.
of STM properties can be generated from a small number of STM laws by choosing specialized intercellular wiring diagrams. All of the learning and long term memory (LTM) processes within the theory occur in its inter-level pathways. All of these learning processes also obey similar dynamical laws. They encode different types of information due to their different parameter choices and their different locations within the total network hierarchy.

4. Masking Fields
The present article focuses upon the design of level A4 in this network hierarchy (Figure 1). Level A4, which is called a masking field, generates a context-sensitive encoding of the activation patterns that flicker across level A3 through time. The activation patterns across A3 influence A4 via the conditionable pathways from A3 to A4. We will describe how developmental rules for growth of connections from A3 to A4 and for growth of connections within A4 enable A4 to achieve a context-sensitive parsing of A3's activity patterns. In order to understand this masking field design, we review the design problems and principles which led to its discovery.

5. The Temporal Chunking Problem: Seeking the Most Predictive Representation
The core problem leading to masking field design is called the temporal chunking problem (Grossberg, 1978a, 1984a, 1985). Consider the problem of unitizing an internal representation for an unfamiliar list of familiar items; e.g., a novel word composed of familiar items, such as phonemes or syllables. The most familiar groupings of the list are the items themselves. In order to even know what the novel list is, all of its individual items must first be presented. All of these items are more familiar than the list itself. What mechanisms prevent item familiarity from forcing the list always to be processed as a sequence of individual items, rather than eventually as a whole? How does a not-yet-established word representation overcome the salience of well-established phoneme or syllable representations? How does unitization of unfamiliar lists of familiar items even get started? If the temporal chunking problem is not solved, then unitized internal representations of lists with more than one item can never be learned. Another version of the temporal chunking problem becomes evident by noticing that every sublist of a list is a perfectly good list in its own right. Letters, syllables, and words are special sublists that have achieved a privileged status due to experience. In order to understand how this privileged status emerges, we need to analyse the processing substrate upon which all possible sublists are represented before learning occurs. This processing substrate exists within the theory at level A3. Then we need to examine how prewired network processes interact with network learning processes to determine which of these sublists will succeed in activating a unitized representation at level A4. The subtlety of this unitization process is reflected by even the trivial fact that novel words composed of familiar items can be learned. This fact
shows that not all sublists have equal prewired weights in the competitive struggle to be represented at A4. Such prewired weights include the number of coding sites in a sublist representation and the strength of the competitive intercellular signals that are emitted from each sublist's representation, as we describe in Section 13. A word as a whole can use such prewired processing biases to competitively inhibit, or to mask, the STM activities corresponding to its familiar constituent items as it initiates the learning of its own unitized representation. That is why the cooperative-competitive design of A4 that solves the temporal chunking problem is called a masking field. One property of the masking field design is that longer lists, up to some maximal length, can selectively activate cells that have a prewired competitive advantage over shorter sublists in the struggle for STM activation and storage. Such a competitive advantage enables a masking field to exploit the fact that longer sublists, other things
being equal, are better predictors of subsequent events than are shorter sublists because they embody a more unique temporal context. Thus a masking field is designed to generate STM representations which have the best a priori chance to correctly predict the activation patterns across A3. As an important side benefit, the a priori advantage of longer, but unfamiliar, sublists enables them to compete effectively for STM activity with shorter, but familiar, sublists, thereby providing a solution to the temporal chunking problem.
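The competitive masking described in this section can be illustrated with a toy simulation. The sketch below is ours, not the article's simulated masking field: three hypothetical sublist nodes obey the shunting recurrent on-center off-surround STM law named in the Abstract, with a faster-than-linear feedback signal, and the node coding the longest sublist is given the largest prewired input so that it comes to dominate the shorter sublist nodes. All parameter values are illustrative.

```python
def masking_competition(inputs, steps=2000, dt=0.01, A=1.0, B=1.0):
    """Euler-integrated shunting recurrent on-center off-surround STM law:

        dx_i/dt = -A*x_i + (B - x_i)*(I_i + f(x_i)) - x_i * sum_{k!=i} f(x_k)

    The faster-than-linear signal f(x) = x**2 contrast-enhances the
    pattern, so the node with the largest prewired input advantage
    suppresses, or masks, its competitors."""
    def f(v):
        return v * v
    x = [0.0] * len(inputs)
    for _ in range(steps):
        sig = [f(v) for v in x]
        total = sum(sig)
        x = [v + dt * (-A * v
                       + (B - v) * (inputs[i] + sig[i])   # on-center excitation
                       - v * (total - sig[i]))            # off-surround inhibition
             for i, v in enumerate(x)]
    return x

# Hypothetical nodes coding sublists of length 1, 2, and 3; the
# length-3 node receives the largest prewired input.
x = masking_competition([0.30, 0.35, 0.45])
assert x[2] > x[1] > x[0] > 0.0  # the longest-list node dominates STM
```

The ordering of the stored activities follows the prewired input advantage, which is the qualitative property the masking field exploits; the full architecture simulated in the article is, of course, far richer.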
6. The Word Length Effect

The postulate that longer sublists, up to some maximal length, have a competitive STM advantage led to the prediction of a word length effect in Grossberg (1978a, Section 41; reprinted in Grossberg, 1982a). A word length effect was reported in the word superiority studies of Samuel, van Santen, and Johnston (1982, 1983). In these experiments, a letter was better recognized when it was embedded in longer words of lengths from 1 to 4. As Samuel, van Santen, and Johnston (1983, p.322) have noted, other "lexical theories had not previously included mechanisms that were explicitly length dependent." We believe this is because other lexical theories did not state, nor attempt to solve, the temporal chunking problem. Further discussion of the word length effect and related data is provided in Grossberg (1984a, 1985). In the light of the word length effect, the conclusion that longer sublists have an a priori competitive advantage over shorter sublists, up to some maximal length, may seem to be self-contradictory. If prewired word biases can inhibit learned letter biases, then how is perception of letters facilitated by a word context, which is the main result of word superiority studies? This paradox can also be resolved through an analysis of how A4 encodes activity patterns across A3.
7. All Letters Are Sublists: Which Computational Units Can Self-Organize?
A resolution of this paradox can be derived by further considering what it means to say that every sublist of a list is also a list. In order for sublists of a list to struggle for representational status, sets of individual items of the list need first to be simultaneously represented at some level of processing, which we identify with A3. The theory shows how item representations that are simultaneously active in STM across A3 can be grouped, or chunked, into representations of sublists at the next level of processing A4. The sublist representations can then compete with each other for STM activation within A4. Once the two levels A3 and A4 are clearly distinguished, it becomes obvious that individual list items, being sublists, can be represented at A4 as well as at A3. In the special case of letters and words, this means that letters are represented at the item level, as well as at the list level. Prewired word biases can inhibit learned letter biases at the level A4, but not at the level A3. Excitatory top-down priming from A4 to A3 and from A4 to V* can then support the enhanced letter recognition that obtains during word superiority experiments. To clearly understand how the item representations at A3 differ from the sublist representations at A4, one must study the theory's processes in some detail. Even without such a study, one can conclude that "all letters are sublists." Indeed, all events capable of being represented at A4 exist on an equal dynamical footing. This conclusion clarifies how changes in the context of a verbal item can significantly alter the processing of that item, and why the problem of identifying the functional units of language has proved to be so perplexing (Darwin, 1976; Studdert-Kennedy, 1980; Young, 1968). In A4, no simple verbal description of the functional unit, such as phoneme, syllable, or word, has a privileged status.
Only the STM patterns that survive a context-sensitive interaction between associative and competitive rules have a concrete existence.
The conclusion that "all letters are sublists" implies the use of different computational units than one finds in many other models of language processing. In other models, levels such as A3 and A4 often represent letters and words, respectively. In the present theory, levels A3 and A4 represent items (more precisely, patterns of temporal order and item information in STM) and lists (more precisely, sublist parsings in STM), respectively. Thus in our theory, all familiar letters possess both item and list representations, not just letters such as A and I that are also words. This property helps to explain the data of Wheeler (1970) showing that letters such as A and I, which are also words, are not recognized more easily than letters such as E and F, which are not also words (Grossberg, 1984a, 1985). In a model which postulates a letter level and a word level, letters such as A and I are represented on both the letter level and the word level, whereas letters such as E and F are represented only on the letter level. In such a model, letters such as A and I might be expected to be better recognized than letters such as E and F. Choosing letter and word levels thus leads to serious data-related difficulties, including the inability to explain the Wheeler (1970) data and the Samuel, van Santen, and Johnston (1982, 1983) data without being forced into further paradoxes. More generally, any model whose nodes represent letters and words, and only these units, faces the problem of describing what the model nodes represented before a particular letter or word enters the subject's lexicon, or what happens to these nodes when such a verbal unit is forgotten. This issue hints at the core problem that such a model cannot self-organize (Grossberg, 1984a, 1985). The self-organization process which controls language processing hides the mechanistic substrate upon which it is built.
Concepts from lay language, such as letters and words, provide a misleading tool for articulating the computational units which are manipulated by a self-organizing language system. Repp (1981, p.1462) has made the similar point "that linguistic categories are abstract and have no physical properties, and that, therefore, their physical correlates in the speech wave are appropriately described in acoustic terms only." This problem can be dealt with using a theory whose levels can learn to encode abstract item and list representations within a substrate of previously uncommitted nodes or cells.

8. Self-Organization of Auditory-Motor Features, Items, and Synergies
The conclusion that increasingly abstract computational units are activated at higher levels of a self-organizing language system does not deny the fact that more concrete entities, such as auditory features and phonemes, are activated at earlier processing stages. Within our theory, however, even these entities are emergent properties due to intercellular interactions. Before describing A4 in detail, we briefly review properties of levels A1, A2, and A3 to clarify the meaning of the activity patterns across A3 that A4 can encode. The mechanisms which rigorously instantiate this intuitive review are described in detail in Grossberg (1978a) and used to explain word recognition data in Grossberg and Stone (1985a). At an early stage of development, the environmentally activated auditory patterns at stage A1 in Figure 1 start to tune the long-term memory (LTM) traces within the pathways from A1 to A2, and thus to alter the patterning of short-term memory (STM) auditory "feature detector" activation across A2. After this LTM tuning process begins, it can be supplemented by a "babbling" phase during which endogenous activations of the motor command stage M1 can elicit simple verbalizations. These verbalizations generate environmental feedback from M1 to A1 which can also tune the A1 → A2 pathways. The learning within the feedback pathway M1 → A1 → A2 helps to tune auditory sensitivities to articulatory requirements. This process clarifies aspects of the motor theory of speech perception (Cooper, 1979; Liberman, Cooper, Shankweiler, and Studdert-Kennedy, 1967; Liberman and Studdert-Kennedy, 1978; Mann and Repp, 1981; Repp and Mann, 1981; Studdert-Kennedy, Liberman, Harris, and Cooper, 1970).
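The LTM tuning process just described can be sketched in a few lines. The code below is an illustrative stand-in, not the article's learning equations: the function name, learning rate, and discrete update are our assumptions. The idea it captures is that when a postsynaptic "feature detector" node is active, the LTM traces in its incoming pathway drift toward the current input pattern, so repeated exposure gradually tunes the detector to that pattern.

```python
def tune_pathway(weights, pattern, postsyn_activity, rate=0.5, steps=50):
    """Gated associative tuning of LTM traces in a conditionable
    pathway (an instar-style sketch):

        dz_i/dt = rate * postsyn_activity * (pattern_i - z_i)

    Learning only proceeds while the target node is active; each
    trace z_i then tracks its input component."""
    z = list(weights)
    for _ in range(steps):
        for i in range(len(z)):
            z[i] += rate * postsyn_activity * (pattern[i] - z[i])
    return z

# A hypothetical auditory pattern across A1 tunes the traces feeding
# one A2 "feature detector" while that detector is fully active.
pattern = [0.8, 0.1, 0.1]
z = tune_pathway([1 / 3, 1 / 3, 1 / 3], pattern, postsyn_activity=1.0)
assert all(abs(z[i] - pattern[i]) < 1e-3 for i in range(3))
```

With `postsyn_activity=0.0` the traces would not change at all, which is the gating property that lets different pathways in the hierarchy learn different information from the same general law.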
Just as the auditory patterns across A1 tune the A1 → A2 LTM traces, the endogenously activated motor command patterns across M1 tune the M1 → M2 LTM traces. The activation patterns across M2 encode the endogenously activated motor commands across M1 into "motor features" using the same mechanisms by which the activation patterns across A2 encode the exogenously activated auditory patterns across A1 into "auditory features." The flow of adaptive signaling is not just bottom-up from A1 to A2 and from M1 to M2. Top-down conditionable signals from A2 to A1 and from M2 to M1 are also hypothesized to exist. These top-down signal patterns represent learned expectancies, or templates. Their most important role is to stabilize the learning that goes on within the adaptive pathways A1 → A2 and M1 → M2. In so doing, these top-down signal patterns also constitute the read-out of optimal templates in response to ambiguous or novel bottom-up signals. These optimal templates predict the patterns that the system expects to find at A1 or M1 based upon past experience. The predicted and actual patterns merge at A1 and M1 to form completed composite patterns which are a mixture of actual and expected information. Auditory and motor features are linked via an associative map from A2 to M2. When M1 is endogenously activated, it activates a motor representation at M2 via the adaptive pathway M1 → M2, as well as an auditory representation at A2 via environmental feedback M1 → A1 and the adaptive pathway A1 → A2. Since A2 and M2 are then simultaneously active, the associative map A2 → M2 can be learned. This map also links auditory and articulatory features. The associative map A2 → M2 enables the imitation of novel sounds, in particular of non-self-generated sounds, to get underway.
It does so by analysing a novel sound via the bottom-up auditory pathway A1 → A2, mapping the activation patterns of auditory feature detectors into activation patterns of motor feature detectors via the associative map A2 → M2, and then synthesizing the motor feature pattern into a net motor command at M1 via the top-down motor template M2 → M1. The motor command, or synergy, that is synthesized in this way generates a sound that is closer to the novel sound than are any of the sounds currently coded by the system. The properties whereby the learned map A1 → A2 → M2 → M1 enables imitation of novel sounds to occur clarify the analysis-by-synthesis approach to speech recognition (Halle and Stevens, 1962; Stevens, 1972; Stevens and Halle, 1964). The environmental feedback from M1 to A1 followed by the learned map A1 → A2 → M2 → M1 defines a closed feedback loop, or "circular reaction" (Piaget, 1963). The theory's explication of the developmental concept of circular reaction helps to clarify the speech performance concepts of motor theory and analysis-by-synthesis in the course of suggesting how an individual can begin to imitate non-self-generated speech sounds. The stages A2 and M2 can each process just one spatial pattern of auditory or motor features at a time. Thus A2 can process an auditory "feature code" that is derived from a narrow time slice of a speech spectrogram, and M2 can control a simple motor synergy of synchronously coordinated muscle contractions. These properties are consequences of the fact that spatial patterns, or distributed patterns of activity across a field of network nodes, are the computational units in these real-time networks. This computational unit is a mathematical consequence of the associative learning laws that govern these networks (Grossberg, 1982a).
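The merging of predicted and actual patterns into a completed composite, described in this section, can be sketched as follows. The additive mixing rule, gain parameter, and variable names are our illustrative assumptions, not the article's matching equations; the point is only that a learned top-down template can partially restore features that the bottom-up pattern left ambiguous or missing.

```python
def merge_with_template(bottom_up, template, gain=0.5):
    """Form a completed composite pattern: the actual bottom-up
    pattern is mixed with the pattern predicted by a learned
    top-down expectancy (illustrative additive rule)."""
    return [b + gain * t for b, t in zip(bottom_up, template)]

actual = [0.9, 0.0, 0.4]    # ambiguous input: feature 2 is absent
expected = [0.8, 0.6, 0.5]  # learned optimal template
composite = merge_with_template(actual, expected)
assert composite[1] > 0.0   # the expected feature is partially restored
assert composite[0] > actual[0]
```

A matched feature is reinforced and a missing but expected feature is filled in, which is the qualitative sense in which the composite is "a mixture of actual and expected information."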
The later stages Ai and Mj in Figure 1 are all devoted to building up recognition and recall representations for temporal groupings, or lists, of spatial pattern building blocks. A spatial pattern of activation across A2 encodes the relative importance of all the "feature detectors" of A2 which represent the auditory pattern that is momentarily activating A1. In order to encode temporal lists of auditory patterns, one needs to simultaneously encode a sequence of spatial patterns across A2's auditory feature detectors. The following way to accomplish this also addresses the fundamental problem that individual speech sounds, and thus their spatial patterns across A2, can be altered
Neural Dynamics of Speech and Language Coding
465
by the tcmporal context of other speech sounds in which they are embedded. In addition to activating the associative map from A2 to M 2 , each spatial pattern across also activates an adaptive pathway from A 2 to As. Although all the adaptive pathways of the theory obey the same laws, each pathway learns different information deprnding on its location in the network. Since the A2 -+ A3 pathway is activated by feature patterns across Az, it builds u p learned representations, or chunks, of these feature patterns. Each such representation is called an item representation. The item representations include the representations of phonemes. All new learning about item representations is encoded within the LTM traces of the A2 + A3 adaptive pathway. Although each item representation is expressed as a pattern of activation across A3, the learning of these item representations does not take place within A3. This flexible relationship between learning and activation is needed to understand how temporal codes for lists can be learned and performed. For example, as a sequence of sound patterns activates A l , the patterns of "auditory feature" activation across A2 can build u p and rapidly decay, via a type of iconic memory (Sperling, 1960). These A2 activation patterns, in turn, lead to activation of item representations across A3. The item reprrsentations are stored in STM, as a type of "working memory" (Cermak and Craik, 1979), due to the feedback interactions within A3. As a succession of itrm represrntations across A3 is stored in STM, the spatial pattern of STM activity across .43 represmts letnporal order in/ortnation across the item representations of A3. 9. Triiipornl Order Informatioii Across If ein Rcprcscutations: The Spatial R c c o d i n g of Tcinporal Order
As more items are presented, the evolving spatial patterns of activity across A3 include larger regions of the item field, up to some maximal length. Thus the temporal processing of items is converted into a succession of expanding spatial patterns within A3. This is the main reason why spatial mechanisms that are used in visual processing can also be used to design the masking field A4. Each activity pattern across A3 is a context-sensitive computational unit in its own right. In such a representation, changing any one activity changes the coded meaning of the entire list of items. The activity pattern "is" the code, and no further labels or algorithms are needed to define it. In order to understand how such a code works, it is necessary to specify laws for the unitized encoding and recognition of item sublists by A4, and laws for the rehearsal and recall of items before and after they are unitized by A4.
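A minimal way to picture this temporal-to-spatial recoding is sketched below. This is only a toy illustration, not the cooperative-competitive STM law derived in the following sections: each arriving item enters at full strength while all stored activities are multiplied by a common (assumed) factor OMEGA, so the pattern expands with each item and its activity gradient encodes order.

```python
# Sketch: temporal order recoded as a spatial STM activity pattern.
# OMEGA is an assumed constant, standing in for the network's STM dynamics.
OMEGA = 0.8  # common rescaling factor applied to already-stored items

def store_item(stm, item, strength=1.0):
    """Present one item: old activities repattern by a common factor,
    the new item enters at full strength, and the pattern expands."""
    new_stm = {k: v * OMEGA for k, v in stm.items()}
    new_stm[item] = strength
    return new_stm

stm = {}
for item in "MYSELF":
    stm = store_item(stm, item)

# The stored pattern now spans all six items; because every update multiplies
# earlier activities by the same factor, the ratio between any two earlier
# items is left unchanged by later arrivals.
print(sorted(stm.items(), key=lambda kv: -kv[1]))
```

With OMEGA < 1 this toy rule always yields a recency gradient; in the full theory, primacy, recency, and bowed gradients can all arise, depending on list length and network parameters.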
10. The LTM Invariance Principle

Before these tasks can be accomplished, it is first necessary to characterize the laws whereby items can reliably represent temporal order information via the spatial patterning of activation across A3. These laws can be derived from an analysis of the self-organization process. In particular, an incorrect choice of STM laws within A3 could cause an unstable breakdown of LTM within the conditionable pathways from A3 to A4. Grossberg (1978a, 1978b) introduced the LTM Invariance Principle in order to derive STM laws for A3 that are compatible with stable LTM encoding. This principle shows how to alter the STM activities of previous items in response to the presentation of new items so that the repatterning of STM activities that is caused by the new items does not inadvertently obliterate the LTM codes for old item groupings. For example, consider the word Myself from this perspective. We would not wish the LTM codes for My, Self, and Elf to be obliterated just because we are learning the new word Myself. On the other hand, the predictive importance of the groupings My, Self, and Elf may be reduced by their temporal embedding within the list Myself. We therefore assume that A3 is designed to satisfy the

LTM Invariance Principle: The spatial patterns of temporal order information in STM are generated by a sequentially presented list in such a way as to leave the A3 → A4 LTM codes of past event groupings invariant, even though the STM activations caused by these past groupings may change markedly across A4 as new items activate A3.

It turns out that a suitably designed cooperative-competitive interaction across A3 can mechanistically realize this principle. This A3 design has been used to analyse and predict data about free recall, serial verbal learning, intensity-time tradeoffs, backward coding effects, item grouping effects, and influences of presentation rate on recall order (Grossberg, 1978a, 1978b, 1982b, 1985; Grossberg and Stone, 1985b). For present purposes, we simply note that different STM activity patterns across the same set of item representations within A3 can encode different temporal orderings of these items.

11. The Emergence of Complex Speech and Language Units
The concept of temporal order information across item representations is necessary, but not sufficient, to explain how novel lists of items can be learned and performed. In addition, one needs to consider how the bottom-up conditionable pathway from M2 to M3 (Figure 1) learns unitized representations of motor items (synergies); how the top-down conditionable pathway from M3 to M2 learns motor templates or expectancies that can read-out coarticulated performance of these synergies; and how the conditionable pathway from A3 to M3 learns an associative intermodality map. Using these mechanisms, one can analyse how novel item representations are formed. For example, suppose that an analysis-by-synthesis of a novel sound has been accomplished by the composite map A1 → A2 → M2 → M1. Such a map generates a novel pattern of auditory features across A2 and a novel pattern of motor features across M2 (Section 8). These feature patterns can then trigger learning of unitized item representations at A3 and M3. These unitized representations can be learned even though the network never endogenously activated these feature patterns during its "babbling" phase. In this way, the network's learned item codes can continue to evolve into ever more complex configurations by a combination of imitation, self-generated vocalization, STM regrouping, and LTM unitization. An associative map A3 → M3 between new unitized item representations also continues to be learned. Using this background, we can now summarize how a unitized representation of an entire list, such as a word, can be learned and performed.

12. List Chunks, Recognition, and Recall
As the network processes a speech stream, it establishes an evolving STM pattern of temporal order information across the item representations of A3. Since every sublist of a list is also a list, the conditionable pathway from A3 to A4 simultaneously "looks at," or filters, all the sublist groupings to which it is sensitive as the speech stream is presented through time. The masking field within A4 then determines which of these sublist groupings will represent the list by being stored in STM at A4. These sublist representations contribute to the recognition of words (Grossberg and Stone, 1985a) but cannot, by themselves, elicit recall. This raises the issue of how short novel lists of familiar items can be recalled even before they are unitized. The fact that a verbal unit can have both an item representation and a list representation (Section 7) now plays an important role. Recall of a short novel list of familiar items is triggered by a nonspecific rehearsal wave to A3 (Grossberg, 1978a, 1978b). Such a wave opens an output gate that enables output signals of active items to be emitted from A3 to M3, with the most active item representations being read-out before less active item representations. As each item is read-out, it activates a negative feedback loop to itself that selectively inhibits its item representation, thereby enabling the next item representation to be read-out. Each item representation is recalled via the learned A3 → M3 → M2 → M1 sensory-motor map.
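The rehearsal-wave read-out just described — emit the most active item, then self-inhibit its representation so the next most active item can be emitted — can be sketched as follows. The item activities are assumed values chosen for illustration:

```python
def rehearse(stm):
    """Read items out of STM in order of descending activity.

    Each emitted item activates a negative feedback loop to itself,
    modeled here by deleting its entry, which lets the next most
    active item representation be read out."""
    stm = dict(stm)  # leave the caller's store untouched
    order = []
    while stm:
        item = max(stm, key=stm.get)  # rehearsal wave reads out the peak
        order.append(item)
        del stm[item]                 # self-inhibition of the emitted item
    return order

# A primacy gradient (assumed activities) is recalled in presentation order:
print(rehearse({"M": 1.0, "Y": 0.8, "S": 0.64}))  # -> ['M', 'Y', 'S']
```

Note that the recall order is determined entirely by the graded STM pattern: a recency gradient over the same items would be read out in the reverse order.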
This type of recall is immediate recall from STM, or working memory, of a list of unitized item representations. It is a type of "controlled" process, rather than being an "automatic" unitized recall out of LTM. In order for a unitized list chunk in A4 to learn how to read-out its list of motor commands from LTM, the chunk must remain active long enough during the learning process to sample pathways to all of these motor commands. In the simplest realization of how temporal order information across item representations is encoded and read-out of LTM, the top-down template from A4 to A3 learns this information while the conditionable pathway from A3 to A4 is being tuned. Later activation of a list chunk in A4 can read this LTM temporal order information into a pattern of STM temporal order information across the item representations of A3. Activation of the rehearsal wave at this time enables the list to be read-out of STM. Unitized recall can hereby occur via the learned A4 → A3 → M3 → M2 → M1 sensory-motor map. The order of recall due to read-out of temporal order information from LTM is not always the order in which the items have been presented. Thus although the network is designed to stabilize learning and LTM insofar as possible, its interactions occasionally force a breakdown of temporal order information in LTM; for example, as occurs during serial verbal learning. See Grossberg (1982b, 1985) and Grossberg and Stone (1985b) for recent analyses of how such breakdowns in temporal order information in LTM can occur.
13. The Design of a Masking Field: Spatial Frequency Analysis of Item-Order Information

With this background, we can now turn to the quantitative design of the masking field A4. As a sequence of items is temporally processed, the masking field updates its choice of list representation, parsing the item sequence into a predictive grouping of unitized sublist choices based on a combination of a priori parameter choices and past learning. A spatial pattern of STM activity across the item representations of A3 provides the inputs which are grouped by A4. As more items are presented, new spatial patterns are registered that include larger regions of the A3 item field, up to some maximum list length. Thus the temporal processing of items is converted by A3 into a succession of expanding spatial patterns. Given this property, the temporal chunking problem can be rephrased as follows. How do sublist chunks in A4 that encode broader regions of the item field mask sublist chunks that encode narrower regions of the item field? This insight can be rephrased as a principle of masking field design (Grossberg, 1984a, 1985):

Sequence Masking Principle: Broader regions of the item field A3 are filtered by the A3 → A4 pathway in such a way that they selectively excite nodes in A4 with stronger masking parameters.

In other words, A4 is sensitive to the spatial frequency of the input patterns that it receives from A3. We will show how nodes in A4 which are selectively sensitive to a prescribed spatial frequency range define a masking subfield. Each masking subfield is characterized by a different choice of numerical parameters, which are determined by simple growth rules. Subfields whose cell populations have broader spatial frequencies and/or more coding sites can competitively mask STM activities of subfields with narrower spatial frequencies and fewer coding sites (Figure 2).
This on-line list parsing capability must reconcile several properties that could be in conflict in a poorly designed system. For example, how does a short sublist activate one representation, yet an updated list that includes the sublist activate a different representation? Why does not the representation of the shorter list always inhibit the representation of the longer list? Why does not the converse hold? In short, how does the masking field automatically rescale itself to selectively respond to all list lengths and orderings, up to some maximal length?
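A toy selection rule suggests how these demands can coexist. This is a sketch only — the node inventory and the longest-eligible-list-wins rule are stand-ins for the competitive feedback dynamics specified later — but it captures the intended rescaling behavior:

```python
def masking_field_choice(active_items, list_nodes):
    """Pick the list node stored in STM after the competition.

    A node is eligible only if its entire trigger set is active
    (sequence selectivity); among eligible nodes, the one coding the
    longest list masks the rest (its masking parameters are stronger)."""
    active = set(active_items)
    eligible = [node for node in list_nodes if set(node) <= active]
    return max(eligible, key=len) if eligible else None

# Hypothetical inventory of item and sublist nodes:
nodes = [(0,), (1,), (2,), (0, 1), (0, 1, 2)]
print(masking_field_choice([0], nodes))        # -> (0,)
print(masking_field_choice([0, 1], nodes))     # -> (0, 1)
print(masking_field_choice([0, 1, 2], nodes))  # -> (0, 1, 2)
```

Note what the toy omits: it sees only the unordered item set, so it cannot distinguish different orderings of the same items. That sensitivity requires the graded STM patterns and tuned adaptive filters described in the sections that follow.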
Figure 2. Selective activation of a masking field. The nodes in a masking field are organized so that longer item sequences, up to some optimal length, activate nodes with more potent masking properties. Individual items, as well as item sequences, are represented in the masking field. The text describes how the desired relationship between item field, masking field, and the intervening adaptive filter can be self-organized using simple developmental rules.

Several properties are implicit in these design requirements; namely:

(A) Sequence Representation: All realizable item sequences, up to a maximal sequence length, can initially generate some differential reaction, however weak, in the masking field.

(B) Masking Parameters Increase with Sequence Length: Critical masking parameters of masking field nodes increase with the length of the item sequences that activate them. This rule holds until an optimal sequence length is reached.

(C) Masking Hierarchy: Nodes that are activated by a given item sequence can mask nodes that are activated by subsequences of this sequence.

(D) Sequence Selectivity: If a node's trigger sequence has length n, it cannot be supraliminally activated by sequences of length significantly less than n.

Properties (A) and (B) suggest that the A3 → A4 pathway contains a profusion of
connections that are scattered broadly over the masking field. Property (C) suggests that closely related sequences activate nearby cells in the masking field. Postulate (D) says that, despite the profusion of connections, the tuning of long-sequence cells prevents them from responding to short subsequences. The main problem is to resolve the design tension between profuse connections and sequence selectivity. This tension must be resolved both for short-sequence (e.g., letter) cells and long-sequence (e.g., word) cells: If connections are profuse, why are not short-sequence nodes unselective? In other words, what prevents many different item representations in A3 from converging on every short-sequence cell in A4 and thus being able to activate it? On the other hand, if many item representations from A3 do converge on long-sequence cells in A4, then aren't these long-sequence nodes activated by subsequences of the items? Somehow the growth rules that generate positional gradients in A3 → A4 pathways and the competitive interactions within A4 are properly balanced to achieve all of these properties. Grossberg (1985) suggested how a combination of random growth rules in A3 → A4 and activity-contingent self-similar growth rules within A4 could achieve such a balance. These concepts led to several predictions concerning the developmental events that may regulate growth of lateral neural connections in response to afferent neural signals. In the present work, we have further developed these ideas to the point where the desired properties can be obtained even if critical numerical parameters in some of our networks are altered by a factor of 10. This numerical stability is all the more remarkable when one considers that different orderings of the same items, as well as the same orderings of items in different lists, can be selectively coded by such a masking field.

14. Development of a Masking Field: Random Growth and Self-Similar Growth
The primary structure of a masking field can be understood in terms of two interacting growth rules: random growth of connections from A3 to A4, and self-similar growth of cells and connections within A4. We now explain these concepts. Suppose that each item node in A3 sends out a large number of randomly distributed pathways towards the list nodes in A4. Suppose that an item node randomly contacts a sequence node with a small probability p. This probability is small because there are many more list nodes than item nodes. Let λ be the mean number of such contacts across all of the sequence nodes. Then the probability that exactly k pathways contact a given sequence node is given by the Poisson distribution

P_k = (λ^k / k!) e^(−λ).    (1)

If K is chosen so that K < λ < K + 1, then P_k is an increasing function of k if 1 ≤ k ≤ K and a decreasing function of k if k ≥ K. If λ is sufficiently small (approximately 4), then (1) implies that sequences of length k ≤ K will be represented within the masking field, thereby satisfying properties (A) and (B). Related random growth rules, such as the hypergeometric distribution, also have analogous properties. Due to the broad and random distribution of pathways, list nodes will tend to be clustered near nodes corresponding to their sublists, thereby tending to satisfy property (C). A further property is also needed to satisfy property (C). Since a long-list node tends to mask all of its sublists, such a node must be able to send inhibitory signals to all the nodes which code these sublists. Thus the interaction range (viz., the axons) of an A4 node should increase with the length of the list to which it is maximally sensitive (Figure 2). This is called the Principle of Self-Similar Growth (Grossberg, 1982a, 1985). In order to realize property (D), an A4 node that receives k pathways from A3 somehow dilutes the input in each pathway so that (almost) all k pathways must be
active to generate a suprathreshold response. As k increases, the amount of dilution also increases. This property suggests that long-list cells may have larger cellular volumes, since a larger volume can more effectively dilute a signal due to a single output pathway. Larger volumes also permit more pathways to reach the cell's surface, other things being equal. The constraint that long-list nodes are associated with larger parameters, such as number of sites and spatial frequencies, is hereby extended to include larger surface areas. This conclusion reaffirms the importance of the self-similarity principle in designing a masking field: A cell has longer interactions (viz., axons) because it has a larger cell body to support these interactions. This discussion translates the four formal properties (A)-(D) into two growth rules: random A3 → A4 growth and self-similar A4 → A4 growth. It remains to say how these two types of rules are joined together, as is required by the Sequence Masking Principle. In other words, how do larger cell surfaces attract more pathways, whereas smaller cell surfaces attract fewer pathways? Without further argument, a cell surface that is densely encrusted with axon terminals might easily be fired by a small subset of these axons. To avoid this possibility, the number of allowable pathways must be tuned so that the cell is never overloaded by excitation.

15. Activity-Contingent Self-Similar Cell Growth
There exist two main ways to accomplish this property which have not yet been experimentally tested. A combination of the two ways is also possible:

A. Volume-Dependent Membrane Receptors: At an early stage of development, a spectrum of cell sizes is endogenously generated across the masking field by a developmental program. Each cell of a given size contains a fixed number of membrane organelles that can migrate and differentiate into mature membrane receptors in response to developing input pathways (Patterson and Purves, 1982). The number of membrane organelles covaries with cell size to prevent the internal level of cell excitation, say as measured by the maximum ratio of free internal Na+ to K+ ions, from becoming too large (Figure 3).

B. Activity-Dependent Self-Similar Cell Growth: Pathways from the item field grow to the list nodes via random growth rules. Before these pathways reach their target cells, these cells are of approximately the same size. As longer item lists begin to be processed by A3, these lists begin to activate their respective list nodes. The A4 cells which receive many A3 → A4 connections experience an abnormal internal cellular milieu (e.g., abnormally high internal Na+/K+ concentration ratios) due to the convergence of many active pathways on the small cell volumes. These large internal signals trigger self-similar cell growth that continues until the cell and its processes grow large enough to reduce the maximal internal signal to normal levels (Figure 4). The tuning of A4 cell volume to the number of pathways from A3 is thus predicted to be mediated by a self-similar use-and-disuse growth rule. The fact that internal cellular indices of membrane excitation trigger cell growth until these indices equilibrate to normal levels immediately shows why the mature cell needs simultaneous activation from most of its pathways before it can fire. A self-similar use-and-disuse growth rule has many appealing properties.
Most notably, only item sequences that occur in the speaker's language during the critical growth period may be well-represented by the chunks of the speaker's masking field. This fact may be related to properties of second language learning. In summary, the design of a masking field can be realized by a simple developmental program: profuse random growth along spatial gradients from A3 to A4, which induces activity-contingent self-similar growth within A4 that is constrained by competition for synaptic sites.
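The two quantitative ingredients above — the Poisson contact statistics of equation (1) and the dilution rule behind property (D) — can be checked numerically. This is an illustrative sketch only: the mean λ, the pathway count, and the firing margin are assumed values, not parameters from the simulations reported below.

```python
import math

def poisson(k, lam):
    """Equation (1): P_k = lam**k * exp(-lam) / k!"""
    return lam ** k * math.exp(-lam) / math.factorial(k)

# Random growth: with K < lam < K + 1, P_k rises for k <= K and falls
# for k >= K, so sequences of length up to about K get represented.
LAM, K = 3.5, 3  # assumed mean contact number; K = 3 < 3.5 < 4
probs = [poisson(k, LAM) for k in range(12)]
assert all(probs[k] < probs[k + 1] for k in range(K))      # increasing branch
assert all(probs[k] > probs[k + 1] for k in range(K, 11))  # decreasing branch
print("mode of P_k at k =", probs.index(max(probs)))       # -> 3

# Dilution (property D): a node receiving k pathways weights each input
# by 1/k, so nearly all k pathways must be active before it can fire.
def node_fires(k_pathways, k_active, margin=0.9):
    return k_active / k_pathways >= margin

assert not node_fires(k_pathways=6, k_active=3)  # short subsequence: silent
assert node_fires(k_pathways=6, k_active=6)      # full trigger set: fires
```

The asserts encode the qualitative claims of the text: the contact distribution peaks at intermediate list lengths, and a diluted long-list node stays silent when only a subsequence of its trigger set is active.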
Figure 3. Activity-dependent self-similar cell growth: (a) F2 cells are initially all approximately the same size. (b) Variable numbers of F1 cell connections across F2 cells generate variable levels of average F2 cell activation, which cause variable amounts of compensatory cell growth until a target average level of intracellular excitation is attained within all F2 cells.
Figure 4. Volume-dependent membrane receptors: (a) A spectrum of F2 cell sizes is generated such that the number of membrane synaptic sites covaries with cell size. (b) More F1 connections can arborize on the larger cells.
16. Sensitivity to Multiple Scales and Intrascale Variations
A masking field is sensitive to two different types of pattern changes.

A. Expanding Patterns: Temporal Updating. As a word like MYSELF is processed, a subword such as MY occurs before the entire word MYSELF is experienced. Figure 5a schematizes this type of informational change. As the word is presented, earlier STM activations are modified and supplemented by later STM activations. The STM pattern across A3 expands as the word is presented. After MYSELF is fully presented, parts such as MY, SELF, and ELF are still (at least partially) represented within the whole. The masking field can nonetheless update its initial response to MY as the remainder of MYSELF is presented. In this way, the masking field can react to the whole word rather than only its parts.

B. Internal Pattern Changes: Temporal Order Information. The second type of masking field sensitivity is illustrated by the two words LEFT and FELT. This comparison is meant to be illustrative, rather than attempting to characterize the many subtle differences in context-sensitive alterations of sound patterns or reading patterns. The words LEFT and FELT illustrate the issue that the same set of items may be activated by different item orderings. To distinguish two such patterns, sensitivity to different spatial scales is insufficient because both lists may activate the same spatial scale. Instead, sensitivity to different STM patterns which excite the same set of items is required (Figure 5b). The computer simulations summarized by Figures 6-13 below illustrate both types of masking field selectivity.
17. Hypothesis Formation, Anticipation, Evidence, and Prediction

A third property of a masking field is of such importance that it deserves special mention. We will describe a masking field that is capable of simultaneously discriminating more than one grouping within a list. For example, such a masking field might respond to the A3 representation of the word MYSELF by strongly activating an A4 population that is sensitive to the whole word and weakly activating A4 populations that are sensitive to the word's most salient parts. In such a representation, the total STM pattern across A4 represents the A3 STM pattern. The relative sizes of A4's STM activities weight the relative importance of the groupings which are coded by the respective cell populations. The suprathreshold STM activities across A4 are approximately normalized, or conserved, due to its competitive feedback interactions. The STM activities across A4 may thus be interpreted as a type of real-time probabilistic logic, or hypothesis-testing algorithm, or model of the evidence which A4 has about the pattern across A3. Such a masking field also possesses a predictive, or anticipatory, capability. In response to a single item across A3, the A4 population which is most vigorously activated may code that item. In addition, less vigorous activations may arise at those A4 populations which represent the most salient larger groupings of which the item forms a part. Such a masking field can anticipate, or predict, the larger groupings that may occur of which the item forms a part. As more items are stored in STM across A3, the set of possible groupings encoded by A4 changes. In response to additional items, different groupings are preferred within A4. Moreover, as more items are stored by A3, A4's uncertainty concerning the information represented at A3 may decrease, much as the prediction of what follows ABC is less ambiguous than the prediction of what follows C alone.
As A4's uncertainty decreases, the spatial distribution of STM activity across A4 becomes more focused, or spatially localized. This type of spatial sharpening is not merely due to contrast enhancement. Rather it measures the degree of informational uncertainty within the A4 code. These predictive, multiple-grouping properties of a masking field are illustrated by the computer simulations summarized in Figures 14-16.
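A toy normalization conveys the evidence reading of the A4 code. The activity values below are assumed, and simple division stands in for the normalizing competitive feedback:

```python
def evidence(stm):
    """Normalize suprathreshold STM activities into weights summing to 1,
    mimicking the approximately conserved total activity across A4."""
    total = sum(stm.values())
    return {chunk: a / total for chunk, a in stm.items()}

# Early in the word, evidence is spread over several candidate groupings
# (hypothetical chunk names and activities):
print(evidence({"MY": 3.0, "MYSELF": 1.5, "MYSTERY": 0.5}))

# Once MYSELF completes, the distribution sharpens onto one grouping:
print(evidence({"MY": 0.5, "SELF": 1.0, "MYSELF": 3.5}))
```

Because the weights always sum to 1, a drop in uncertainty shows up directly as concentration of the distribution on fewer chunks, which is the spatial sharpening described above.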
Figure 5. Two types of masking field sensitivity: (a) A masking field A4 can automatically rescale its sensitivity to differentially react to activity patterns which activate variable numbers of A3 cells. It hereby acts like a "multiple spatial frequency filter." (b) A masking field can differentially react to different A3 activity patterns which activate the same set of A3 cells. By (a) and (b), it acts like a spatial pattern discriminator which can compensate for changes in overall spatial scale without losing its sensitivity to pattern changes at the finest spatial scale.
18. Computer Simulations
Before describing a masking field with mathematical equations, we summarize some of our computer simulations of masking field properties. Figures 6-13 depict the simplest example of masking field dynamics. In this example, each distinct spatial pattern across A3 chooses a unique nodal population within A4. The same numerical parameters were used in all of these simulations. Only the input patterns varied. In Figures 14-16, a fixed but different set of parameters was chosen to illustrate how a masking field can generate spatially distributed and anticipatory sublist representations of spatial patterns across A3. In these representations, the masking field is maximally sensitive to the entire list across A3, but also generates partial activations to salient sublists and superlists of this list. These figures can be combined in several ways to provide different insights into masking field properties. Consider Figure 6 to start. In this figure, a single item in A3 (which is denoted by F1) is active. This item generates positive inputs to a large number of nodes in A4 (which is denoted by F2). The input sizes are depicted by the heights of the bars in the three rows labeled Input Pattern. Each row lists all F2 nodes which receive the same number of pathways from F1. The first row consists of F2 nodes which receive one pathway, the second row consists of F2 nodes which receive two pathways, and the third row consists of F2 nodes which receive three pathways. In row 1, each F2 node in the set labeled {i} receives a pathway from the F1 item node labeled {i}, i = 0, 1, 2, ..., 4. Note that four F2 nodes receive inputs from the {0} F1 node. In row 2, all F2 nodes labeled {0,1} receive pathways from the F1 nodes {0} and {1}. In row 3, all F2 nodes labeled {0,1,2} receive pathways from the F1 nodes {0}, {1}, and {2}. The inputs to all the F2 nodes which receive pathways from the F1 node {0} are positive. There are 44 such nodes in Figure 6.
Despite this fact, the only F2 nodes capable of becoming active in STM are the nodes which receive pathways only from the active item node {0}. These are the F2 nodes labeled {0}. The STM activities of all other F2 nodes are inhibited by the feedback competitive interaction within F2, despite the fact that many of these F2 nodes also receive large excitatory inputs from F1. The STM activities of the F2 nodes are listed in three rows under the heading List Code in STM. These are the activities which the nodes store in STM after the network equilibrates to the entire input pattern. Figure 6 illustrates how F2 can transform a widespread input pattern into a focal, and appropriate, STM activation. Figures 7 and 8 further illustrate this property. In each figure, a different item at F1 is activated. Each item generates a widespread input pattern to F2. Each input pattern is contrast-enhanced into a focal STM activation. This STM activation is restricted to the F2 nodes which receive pathways from only the active item node. A comparison of Figures 6, 7, and 9 discloses a different property of masking field dynamics. Suppose that the temporally ordered list of items {0}, {1} is received by F1. The list as a whole generates a different spatial pattern across F1 (Figure 9) than does its first item (Figure 6) or its second item (Figure 7) taken in isolation. The list as a whole also activates even more nodes than does either item taken separately: 82 nodes in all. Despite this fact, only a single node becomes active in STM. This node is, moreover, an appropriate node because it is one of the F2 {0,1} nodes that receive pathways only from the F1 items {0} and {1}. This comparison between Figures 6, 7, and 9 thus illustrates the following F2 properties: sequence selectivity, and the ability of F2 nodes which are activated by larger numbers of F1 nodes to mask the activity of F2 nodes which are activated by smaller subsets of F1 nodes.
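The order sensitivity examined next, in Figures 9 and 10, can be sketched as template matching: each F2 node filters the graded F1 pattern through its own weight vector, and the best match wins the competition. The templates and patterns below are invented for illustration; in the real network the filter is learned and the winner emerges from competitive feedback:

```python
def best_match(pattern, templates):
    """Return the node whose weight template best matches the graded
    STM pattern (a dot product plays the role of the adaptive filter;
    winner-take-all plays the role of the STM competition)."""
    def score(name):
        w = templates[name]
        return sum(pattern.get(i, 0.0) * w.get(i, 0.0) for i in w)
    return max(templates, key=score)

# Two hypothetical {0,1} nodes tuned to opposite activity gradients:
templates = {
    "node_01": {0: 1.0, 1: 0.5},  # prefers item 0 more active (list {0,1})
    "node_10": {0: 0.5, 1: 1.0},  # prefers item 1 more active (list {1,0})
}
primacy = {0: 1.0, 1: 0.6}  # assumed pattern for temporally ordered {0,1}
recency = {0: 0.6, 1: 1.0}  # assumed pattern for temporally ordered {1,0}
print(best_match(primacy, templates))  # -> node_01
print(best_match(recency, templates))  # -> node_10
```

The same unordered item set thus selects different F2 nodes depending only on the graded shape of the F1 activity pattern, which is the behavior the simulations below demonstrate.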
A comparison of Figures 9 and 10 reveals another important F2 property. In both of these figures, the same set of F1 items, {0} and {1}, is activated, but a different spatial pattern of activity exists across the items. The spatial pattern in Figure 9 may represent the temporally ordered list {0,1}, whereas the spatial pattern in Figure 10 may represent the temporally ordered list {1,0}. The simulations show that F2 is sensitive to the item pattern as a whole, because F2 can generate different STM responses to these
Figure 6. List coding of a single item: Network F1 encodes in short term memory (STM) the pattern of temporal order information over item representations. In this figure, the single item {0} is activated. Network F2 encodes in STM the pattern of sublist chunks that are activated by F1. The first three rows depict the inputs from F1 to F2. They are broadly distributed across F2. The List Code in STM depicts the STM response to these inputs. Only the {0} cells in F2 are stored in STM, despite the broad distribution of inputs.
Neural Dynamics of Speech and Language Coding
[Figure 7 panels: Item Field (F1) temporal order over items in STM; Masking Field (F2) input pattern; List Code in STM.]
Figure 7. List coding of a single item: In response to item {1} in F1, the masking field in F2 chooses the {1} cells in response to a broad distribution of inputs. Thus the List Code in STM responds selectively to individual items in F1. The same thing is true in the next figure.
[Figure 8 panels: Item Field (F1) temporal order over items in STM; Masking Field (F2) input pattern; List Code in STM.]
Figure 8. See caption for Figure 6.
[Figure 9 panels: Item Field (F1) temporal order over items in STM; Masking Field (F2) input pattern; List Code in STM.]
Figure 9. List coding of an STM primacy gradient across two items: A primacy gradient in STM across two items of F1 generates an even broader input pattern to F2. The List Code in STM no longer responds at either the {0} cells or the {1} cells. Instead, a choice occurs among the set of possible {0,1} cells. Comparison with Figure 6 shows that F2 can update its internal representation in a context-sensitive way.
[Figure 10 panels: Item Field (F1) temporal order over items in STM; Masking Field (F2) input pattern; List Code in STM.]
Figure 10. List coding of an STM recency gradient across two items: A recency gradient in STM occurs across the same two items of F1, rather than a primacy gradient. Again, the {0} cells and the {1} cells are suppressed. A different choice among the {0,1} cells occurs than in response to the primacy gradient of the preceding figure. Thus F2 can distinguish different temporal orderings of the same items.
patterns even though they activate the same unordered set of F1 nodes. In particular, in Figures 9 and 10, different F2 nodes become active within the set of F2 nodes which receives pathways only from items {0} and {1}. This comparison between Figures 9 and 10 clarifies what we mean by the assertions that the spatial pattern across F1 is the computational unit of the network, and that the differential STM responses of F2 to these computational units embody a context-sensitive list chunking process. A comparison of Figures 6, 7, 8, 9, and 11 illustrates a more demanding variant of these F2 properties. As a temporally ordered list of items {0}, {1}, {2} is processed by F1, all the items become individually active at F1 as the spatial patterns in Figures 6, 9, and 11 evolve through time. The final STM response in Figure 11 is, however, restricted to a single F2 node, which is one of the nodes receiving pathways only from items {0}, {1}, and {2}. A comparison of Figures 11-13 makes the same point as the comparison of Figures 9 and 10, but in a more demanding variation. In each of the Figures 11-13, the same unordered set of items, {0}, {1}, and {2}, is active across F1. The different spatial patterns across F1 represent different temporal orderings of these items: {0,1,2}, {1,2,0}, and {2,1,0}, respectively. In each figure, a different F2 node is activated. The active F2 node is, moreover, one of the nodes that receives pathways only from the item nodes {0}, {1}, and {2}. Figures 14-16 illustrate how presentation of a list through time can update the sublist chunks in an F2 field that is capable of simultaneously storing several sublist groupings in STM. In Figure 14, item {0} most strongly activates the {0} nodes of F2, but also weakly activates other F2 nodes that represent groupings which include {0}. The F2 nodes which receive an item pathway only from {0} have a maximal activity of .163.
The F2 nodes which receive two item pathways, including a pathway from {0}, have a maximal activity of .07. The F2 nodes which receive three item pathways, including a pathway from {0}, have a maximal activity of .007. These activity weights characterize the degree of "evidence" which the masking field possesses that each grouping is reflected in the input pattern. In Figure 15, the {0,1} spatial pattern across F1 most strongly activates a node within the {0,1} subfield of F2, but also weakly activates other nodes of F2 which receive inputs from {0}. The activity levels are .246 and .04, respectively. In Figure 16, the {0,1,2} spatial pattern across F1 most strongly activates a node within the {0,1,2} subfield of F2 (with activity .184) but also weakly activates the {0} subfield of F2 (with activity .004). Note that the STM activity pattern across F2 becomes more focused from Figure 14 to Figure 16, as increasing information reduces predictive uncertainty. These simulations illustrate how simple growth rules can generate a masking field with context-sensitive list parsing properties. The results do not show how such an initial encoding can be refined and corrected by learning and memory search processes. Such simulations would need to develop other mechanisms of the adaptive resonance theory (Carpenter and Grossberg, 1985; Grossberg, 1978a, 1980a, 1982a, 1984a) of which the masking field forms a part. We now provide a mathematical description of a masking field.

19. Shunting On-Center Off-Surround Networks

The cell populations v_i of a masking field have potentials x_i(t), or STM activities, which obey the membrane equations of neurophysiology; namely,
C dV/dt = (V^+ - V)g^+ + (V^- - V)g^- + (V^p - V)g^p.  (2)

In (2), V(t) is a variable voltage; C is a constant capacitance; the constants V^+, V^-, and V^p are excitatory, inhibitory, and passive saturation points, respectively; and the
[Figure 11 panels: Item Field (F1) temporal order over items in STM; Masking Field (F2) input pattern; List Code in STM.]
Figure 11. List coding of an STM primacy gradient across three items: In this figure, a primacy gradient in STM occurs across three items of F1. The input pattern to F2 is even broader than before. However, the STM response of F2 retains its selectivity. Network F2 suppresses all {0}, {1}, {2}, {0,1}, {0,2}, ... cells and chooses for STM storage a population from among the {0,1,2} cells.
[Figure 12 panels: Item Field (F1) temporal order over items in STM; Masking Field (F2) input pattern; List Code in STM.]
Figure 12. List codings of different temporal orderings across three items: In this and the next figure, different temporal orderings of the same three items generate selective STM responses among the {0,1,2} cells. Thus as future items activate an updated STM item code across F1, the STM list coding within F2 is also updated in a context-sensitive way.
[Figure 13 panels: Item Field (F1) temporal order over items in STM; Masking Field (F2) input pattern; List Code in STM.]
Figure 13. See caption for Figure 12.
[Figure 14 panels: Item Field (F1) temporal order over items in STM; Masking Field (F2) input pattern; List Code in STM.]
Figure 14. Distributed sublist encodings of one, two, and three items: This and the subsequent two figures illustrate the STM responses of F2 when numerical parameters are chosen outside of the STM choice range. Note that distributed STM reactions occur in every case, and that these STM reactions favor the populations that were chosen in the STM choice simulations.
[Figure 15 panels: Item Field (F1) temporal order over items in STM; Masking Field (F2) input pattern; List Code in STM.]
Figure 15. See caption for Figure 14.
[Figure 16 panels: Item Field (F1) temporal order over items in STM; Masking Field (F2) input pattern; List Code in STM.]
Figure 16. See caption for Figure 14.
Figure 17. Connections grow randomly from F1 to F2 along positionally defined gradients. Cells within F2 interact via a shunting on-center off-surround feedback network.

The terms g^+, g^-, and g^p are conductances which can vary through time as a function of input signals. Due to the multiplicative relationship between conductances and voltages in (2), a membrane equation is also said to describe a shunting interaction. In a masking field, the cells are linked together via recurrent, or feedback, on-center off-surround interactions. The properties of a masking field are thus part of the general theory of shunting recurrent on-center off-surround networks (Figure 17). Grossberg (1981, 1983) reviews the most important functional properties of this class of networks. Masking field properties may be viewed as an evolutionary specialization of these general functional properties. To emphasize the essential simplicity of the masking field equations, we will build them up in stages. We first rewrite equation (2) for the potential x_i(t) in the form
d/dt x_i = -A x_i + (B - x_i)P_i - (x_i + C)Q_i,  (3)
where 0 is the passive equilibrium point, B (> 0) is the excitatory saturation point, and -C (< 0) is the inhibitory saturation point. Term P_i is the total excitatory input and term Q_i is the total inhibitory input to v_i. Potential x_i(t) can vary between B and -C in equation (3) as the inputs P_i and Q_i fluctuate through time. In a masking field, the excitatory input P_i is a sum of two components: the total input from the item field plus a positive feedback signal from v_i to itself (Figure 17). Thus P_i can be written in the form

P_i = Σ_{j∈J} I_j p_{ji} z_{ji} + D f(x_i).  (4)
In (4), term I_j is the output from the item node {j}, p_{ji} is the connection strength of the pathway from v_j in F1 to v_i in F2, and z_{ji} is the LTM trace within this pathway. Each LTM trace was set equal to 1 in our simulations, since we did not investigate the effects of learning. The terms z_{ji} will thus be ignored in the subsequent discussion. Term D f(x_i) describes the positive feedback signal from v_i to itself. Such a feedback signal is needed so that v_i can store activities in STM after the inputs I_j terminate. The inhibitory input Q_i in (3) is a sum of feedback signals g(x_m) from other populations v_m in the masking field. Thus Q_i can be written in the form

Q_i = Σ_{m≠i} g(x_m) E_{mi}.  (5)
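To make the input terms concrete, the following sketch evaluates forms like (4) and (5) for a hypothetical configuration. The weights, activities, and the particular signal function below are our own illustrative choices, not values from the chapter:

```python
# Sketch of the excitatory input (4) and inhibitory input (5).
# All numerical values are illustrative, not the chapter's parameters.

def excitatory_input(I, p_i, z_i, D, f, x_i):
    # P_i = sum_j I_j p_ji z_ji + D f(x_i); here every LTM trace z_ji = 1,
    # as in the chapter's simulations (learning was not investigated).
    return sum(I[j] * p_i[j] * z_i[j] for j in I) + D * f(x_i)

def inhibitory_input(x, E_i, g, i):
    # Q_i = sum over m != i of g(x_m) E_mi.
    return sum(g(x[m]) * E_i[m] for m in x if m != i)

# A faster-than-linear signal that saturates at 1 (form assumed).
f = lambda w: max(w, 0.0) ** 2 / (1.0 + max(w, 0.0) ** 2)
g = f

I = {0: 1.5}                          # single active item node {0}
p_i, z_i = {0: 1.0}, {0: 1.0}
P = excitatory_input(I, p_i, z_i, D=4.0, f=f, x_i=0.5)
Q = inhibitory_input({0: 0.5, 1: 0.3}, {0: 0.2, 1: 0.2}, g, i=0)
```

Because the LTM traces are frozen at 1, the excitatory term reduces to the weighted item input plus self-feedback.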
20. Mass Action Interaction Rules
We now refine the notation in equations (3)-(5) to express the fact that the cells in different subfields of a masking field possess different parameters as a result of random growth and activity-dependent self-similar growth. A notation is needed to express the fact that an F2 population receives F1 pathways only from a prescribed (unordered) set J of items. Let x_i^{(J)} denote the STM activity of an F2 population v_i^{(J)} which receives input pathways only from the set J of F1 items. There may, in principle, be any number of different populations v_i^{(J)} in F2 corresponding to each fixed set J of F1 items. Equation (3) is then replaced by the equation
~ 1 ~ )
b
( J )
= - A z j J ) + ( B - Z ! ~ ) ) P ,-( (~z I )( ~ )
+ C)QiJ),
(6)
which holds for all unordered sets J of F1 items that can selectively send pathways to nodes in F2. Equation (4) for the excitatory input P_i is then replaced by

P_i^{(J)} = Σ_{j∈J} I_j p_{ji}^{(J)} + D_{|J|} f(x_i^{(J)}).  (7)
The only notable change in equation (7) is in term D_{|J|}. Notation |J| denotes the size of set J. Thus D_{|J|} depends upon the size of set J, but not upon the items in set J. This excitatory feedback coefficient is one of the self-similar parameters that is sensitive to the spatial scale of the population v_i^{(J)}. Equation (5) for the inhibitory input Q_i can be refined in several stages. First we note that Q_i^{(J)} obeys an equation of the form

Q_i^{(J)} = Σ_K Σ_m g(x_m^{(K)}) E_{KJ}.  (8)
Equation (8) can be interpreted as follows. Coefficient E_{KJ} determines the strength of the inhibitory feedback pathway from v_m^{(K)} to v_i^{(J)}. This path strength depends only upon the unordered sets K and J of items to which v_m^{(K)} and v_i^{(J)} respond. In particular, E_{KJ} can be written in a form which expresses the randomness of the self-similar growth process:
E_{KJ} = F_{|J|} G_{|K|} H_{|K∩J|}.  (9)
By (9), E_{KJ} is a product of three factors. Each factor depends only upon the size of an unordered set of items. These unordered sets are set K, set J, and their intersection K∩J. Equation (9) says that the inhibitory interaction strength from v_m^{(K)} to v_i^{(J)} is the result of a random process. The net strength E_{KJ} is due to a statistically independent interaction between growth factors that depend on the sizes of K, J, and their overlap. By putting together all of these constraints, we find the following Masking Field Equations:

d/dt x_i^{(J)} = -A x_i^{(J)} + (B - x_i^{(J)})[Σ_{j∈J} I_j p_{ji}^{(J)} + D_{|J|} f(x_i^{(J)})] - (x_i^{(J)} + C) Σ_K Σ_m g(x_m^{(K)}) F_{|J|} G_{|K|} H_{|K∩J|}.  (10)
All of the "intelligence" of a masking field is embodied in the parallel interactions defined by such a network of equations. It remains to define how the coefficients D_{|J|}, F_{|J|}, G_{|K|}, and H_{|K∩J|} depend upon the unordered sets K and J; how the positive and negative feedback functions f(w) and g(w) depend upon their activities w; how the path strengths p_{ji}^{(J)} from F1 to F2 express a random growth rule; and how numerical parameters were chosen.

21. Self-Similar Growth Within List Nodes

The coefficient D_{|J|} determines how the positive feedback from a node to itself varies with the node's self-similar scale. We assume that D_{|J|} increases with scale, thereby enabling nodes corresponding to longer sublists to gain a competitive advantage in STM, other things being equal. The simplest choice is made in our simulations, namely

D_{|J|} = D|J|,  (11)
where D is a positive constant. This rule is consistent with the possibility that, as an F2 cell (population) grows in response to high levels of F1 input, it also produces more excitatory synaptic sites for its own axon collaterals.
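A minimal numerical sketch may help fix ideas. The toy network below integrates an equation of the form (10) for two items and the three subsets {0}, {1}, {0,1}, with D_|J| = D|J| as in (11). The inhibitory coefficients and the forward-Euler scheme are simplified stand-ins of our own choosing, not the Appendix's parameters, so the sketch only illustrates the contrast-enhancing competition, not the published simulations:

```python
# Toy masking field: two item nodes, one list node per subset J.
# Coefficient and parameter choices are illustrative simplifications.
import itertools

items = [0, 1]
subsets = [frozenset(s) for n in (1, 2)
           for s in itertools.combinations(items, n)]   # {0}, {1}, {0,1}

A, B, C, D = 1.0, 1.0, 1.0, 4.0
f = lambda w: max(w, 0.0) ** 2 / (1.0 + max(w, 0.0) ** 2)
g = lambda w: max(w, 0.0) ** 2 / (0.16 + max(w, 0.0) ** 2)

def E(K, J, F=10.0):
    # E_KJ = F_|J| G_|K| H_|KnJ| with G_|K| = |K| (eq. 17) and a
    # hypothetical positive H that grows with the overlap |K n J|.
    return F * len(K) * (len(K & J) + 0.1)

def step(x, I, dt=0.01):
    new = {}
    for J in subsets:
        # p_ji = 1/|J| (deterministic limit of the random growth rule).
        P = sum(I.get(j, 0.0) / len(J) for j in J) + D * len(J) * f(x[J])
        Q = sum(g(x[K]) * E(K, J) for K in subsets if K != J)
        new[J] = x[J] + dt * (-A * x[J] + (B - x[J]) * P - (x[J] + C) * Q)
    return new

I = {0: 1.0, 1: 0.5}                  # a primacy gradient over the items
x = {J: 0.0 for J in subsets}
for _ in range(4000):                 # relax toward equilibrium
    x = step(x, I)
winner = max(x, key=x.get)            # the most active list node
```

Because of the shunting form, each activity is confined between -C and B however the competition resolves.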
22. Conservation of Synaptic Sites

The dependence of the internodal connection strengths p_{ji}^{(J)}, F_{|J|}, G_{|K|}, and H_{|K∩J|} on the sets K and J will now be described. The total connection strength to each population v_i^{(J)} from all cells in F1 and the total inhibitory connection strength to each population v_i^{(J)} from all cells in F2 are both chosen to be independent of K and J. This property is compatible with the interpretation that the size of each cell (population) is scaled to the total strength of its input pathways. If more pathways input to such a cell, then each input's effect is diluted more due to the larger size of the cell. This constraint may, in principle, be achieved by either of the mechanisms depicted in Figures 3 and 4. We call the net effect of matching cell (population) volume to its total number of afferents conservation of synaptic sites.
Conservation of synaptic sites enables the network to overcome the following possible problem. Due to the randomness of the growth rules, there may exist different numbers of cells in each of F2's masking subfields. For example, 10^3 F2 cells may receive inputs only from the F1 node {0}, 10^4 cells may receive inputs only from the F1 node {1}, 10^6 cells may receive inputs only from the F1 nodes {0} and {1}, and so on. As these F2 cells compete for STM activity, the competitive balance could be seriously biased by accidents of random growth. Some mechanism is needed to compensate for the possible uncontrolled proliferation of random connections. Conservation of synaptic sites is one effective mechanism. The present results suggest a new functional role for such a growth rule. Thus we impose the following constraints:

Synaptic Conservation Rule: Let

Σ_{j∈J} p_{ji}^{(J)} = constant = 1  (12)

and

Σ_K F_{|J|} G_{|K|} H_{|K∩J|} = constant.  (13)

In particular, we choose

F_{|J|} = F / (Σ_K G_{|K|} H_{|K∩J|})  (14)

to satisfy (13), where F is a positive constant.
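Operationally, the conservation rule (12) is a normalization of each list node's afferent weights. A sketch, in which the raw weights are arbitrary placeholders:

```python
# Conservation of synaptic sites, eq. (12): the total F1-to-list-node
# connection strength is normalized to 1, so a cell with more afferents
# dilutes each input's effect. Raw weights are arbitrary examples.
def conserve_sites(raw_weights):
    total = sum(raw_weights.values())
    return {j: w / total for j, w in raw_weights.items()}

p_small = conserve_sites({0: 0.9})                   # |J| = 1
p_large = conserve_sites({0: 0.4, 1: 0.7, 2: 0.2})   # |J| = 3
```

A one-item node keeps its full input, while each input to the three-item node is diluted by the other afferents.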
23. Random Growth from Item Nodes to List Nodes

The connections p_{ji}^{(J)} from F1 to F2 are chosen to satisfy the conservation law (12) as well as a random growth law. We therefore impose the following constraint:
Random Normalized Growth Rule: Let

p_{ji}^{(J)} = (1/|J|)(1 - p_{|J|}) + r_{ji}^{(J)} p_{|J|}.  (15)
The fluctuation coefficient p_{|J|} in (15) determines how random the growth is from F1 to F2. If p_{|J|} = 0, then growth is deterministic (but spatially distributed) because p_{ji}^{(J)} = 1/|J|. In this limiting case, all connection strengths from item nodes in F1 to a fixed list node in F2 are equal, and vary inversely with the number |J| of item nodes that contact the list node. If 0 < p_{|J|} ≤ 1, then the coefficients r_{ji}^{(J)} in (15) influence the connection strengths p_{ji}^{(J)}. The numbers {r_{ji}^{(J)} : j ∈ J} are chosen pseudo-randomly: They are uniformly distributed between 0 and 1 such that

Σ_{j∈J} r_{ji}^{(J)} = 1.  (16)
(See Appendix.) Equations (15) and (16) together imply the conservation rule (12). It remains to say how the fluctuation coefficients p_{|J|} depend upon the set size |J|. We choose these coefficients to keep the statistical variability of the connection strengths independent of |J|. In other words, we choose p_{|J|} so that the standard deviation of {r_{ji}^{(J)} : j ∈ J} divided by the mean of {r_{ji}^{(J)} : j ∈ J} is independent of |J|. (See Appendix.)
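The random normalized growth rule can be sketched directly from (15) and (16). The draw below normalizes independent uniforms, which sums to 1 but only approximates the exactly uniform simplex construction given in the Appendix; the fluctuation coefficient rho is an arbitrary illustrative value:

```python
# Random normalized growth, eq. (15):
#   p_ji = (1/|J|)(1 - rho) + r_ji * rho, with sum_j r_ji = 1 (eq. 16),
# so sum_j p_ji = 1 and the conservation rule (12) holds automatically.
import random

def growth_weights(J, rho, rng):
    n = len(J)
    raw = [rng.random() for _ in range(n)]
    s = sum(raw)
    r = [v / s for v in raw]      # normalized draw (approximate; the
                                  # Appendix gives the exact construction)
    return {j: (1.0 / n) * (1.0 - rho) + rk * rho
            for j, rk in zip(sorted(J), r)}

rng = random.Random(0)
p = growth_weights({0, 1, 2}, rho=0.5, rng=rng)
```

With rho = 0 the weights collapse to the deterministic value 1/|J|; with rho near 1 the randomness dominates.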
24. Self-Similar Competitive Growth Between List Nodes
Coefficient F_{|J|} in (10) describes the total number of inhibitory synaptic sites within a population v_i^{(J)}. By (14), this quantity is chosen to keep the number of synaptic sites constant across all the cells. Small random variations could also be allowed, but we have absorbed all of the effects of randomness into the coefficients p_{ji}^{(J)} in (15) for simplicity. Coefficient G_{|K|} in (10) measures the total number of inhibitory connections, or axons, emitted by each population v_m^{(K)} to all other F2 populations. Due to self-similar growth, G_{|K|} increases with |K|. In our simulations, we make the simplest choice.

Self-Similar Axon Generation: Let

G_{|K|} = |K|.  (17)

In particular, G_{|K|} = 0 if |K| = 0. Coefficient H_{|K∩J|} in (10) describes how well growing axons from a population v_m^{(K)} can compete for synaptic sites at a population v_i^{(J)}. In particular, coefficient G_{|K|} describes the number of emitted axons. Coefficient H_{|K∩J|} measures the fraction of these axons that can reach v_i^{(J)} and compete for synaptic space there. Due to self-similar growth, H_{|K∩J|} increases with |K∩J|. Consequently, if either set K or J increases, then H_{|K∩J|} also increases, other things being equal. Given fixed sizes of K and J, H_{|K∩J|} increases as the overlap, or intersection, of the sets increases. This last property reflects the fact that list nodes become list nodes due to random growth of connections from item nodes. Two list nodes therefore tend to be closer in F2 if they receive more input pathways from the same item nodes in F1. If a pair of list nodes in F2 is closer, then their axons can more easily contact each other, other things being equal. In our simulations, we choose H_{|K∩J|} as follows. Let

H_{|K∩J|} = H(|K∩J| + L),  (18)

where H and L are positive constants. By (18), H_{|K∩J|} increases linearly with |K∩J|. We also assume, however, that H_{|K∩J|} is always positive. When H_{|K∩J|} multiplies G_{|K|} in (10), this implies that every population v_m^{(K)} can send weak long-range inhibitory pathways across the whole of F2, but that these pathways tend to arborize with greater density at populations v_i^{(J)} which receive inputs from the same F1 nodes. In all, (14), (17), and (18) imply that

E_{KJ} = F_{|J|} |K| H(|K∩J| + L).  (19)
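The factorization (9) with these choices depends only on the three set sizes. In the sketch below, G_|K| = |K| follows (17), while the positive offset standing in for H, and the overall scale, are hypothetical values of ours:

```python
# E_KJ = F_|J| G_|K| H_|KnJ|: inhibition between list nodes depends only
# on |K|, |J|, and the overlap |K n J|. Scale and offset are illustrative.
def E(K, J, F_coeff=1.0, offset=0.1):
    G = len(K)                   # self-similar axon generation, eq. (17)
    H = len(K & J) + offset      # grows with overlap, always positive
    return F_coeff * G * H

K1, K2, J = {0, 1}, {2, 3}, {0, 1, 2}
strong = E(K1, J)    # overlap {0,1}: denser arborization
weak = E(K2, J)      # overlap {2}: weaker, longer-range inhibition
```

A node sharing two items with J inhibits it more strongly than one sharing a single item, and a node with no axons (|K| = 0) contributes nothing.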
25. Contrast Enhancement by Sigmoid Signal Functions
The positive and negative feedback signals f(x_i^{(J)}) and g(x_m^{(K)}) in (10) enable the network to contrast-enhance its input patterns before storing them in STM. The mathematical theory of how to design shunting on-center off-surround feedback networks with this property was introduced in Grossberg (1973), further developed in Ellias and Grossberg (1975) and Grossberg and Levine (1975), and led to a rather general mathematical theory in Grossberg (1978c, 1978d, 1980b) and Cohen and Grossberg (1983). Salient properties of these networks are reviewed in Grossberg (1983).
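The kind of signal function this analysis selects, zero below threshold, faster-than-linear at small activities, and saturating at 1, can be sketched as follows. The precise algebraic form, with the half-saturation constants f0 and g0 of the Appendix, is our assumption:

```python
# A threshold-quadratic sigmoid: zero for w <= 0, ~w^2 near threshold,
# saturating at 1. The exact form used in the chapter is assumed here.
def sigmoid_signal(w, c):
    r = max(w, 0.0)              # [w]^+ = max(w, 0)
    return r * r / (c + r * r)

f = lambda w: sigmoid_signal(w, 1.0)     # positive feedback signal
g = lambda w: sigmoid_signal(w, 0.16)    # inhibitory feedback signal
```

The smaller constant in g makes the inhibitory signal rise toward saturation at lower activity levels than f.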
Based on this analysis, we choose both f(w) and g(w) to be sigmoid, or S-shaped, functions of the activity level w. In particular, we let

f(w) = ([w]^+)^2 / (f_0 + ([w]^+)^2)  (20)

and

g(w) = ([w]^+)^2 / (g_0 + ([w]^+)^2).  (21)
The notation [w]^+ in (20) and (21) stands for max(w, 0). Thus f(w) and g(w) do not generate feedback signals if w is smaller than the signal threshold zero. As w increases above zero, both f(w) and g(w) grow quadratically with w until they begin to saturate at their maximum value 1. Sigmoid signal functions have been described in sensory neural processing regions (Freeman, 1979, 1981).

26. Concluding Remarks: Grouping and Recognition Without Algorithms or Search
This article shows how to design a masking field capable of encoding temporally occurring lists of items in a context-sensitive fashion. The STM activations within such a masking field are sensitive to different temporal orderings of the same list items as well as to the same temporal orderings of different item sublists. Our results have focused upon two different levels of language processing, both of which are context-sensitive, but in different ways. The spatial patterning of activity across an item field defines computational units which blend item information and temporal order information in a context-sensitive, but non-unitized, code. Such item representations can be recalled even before unitization can take place. The STM activations of a masking field perceptually group different portions of a spatial pattern across an item field into unitized sublist representations. We have illustrated how such properties of perceptual grouping, unitization, and recognition can be analysed as emergent properties of interactions within a large nonlinear network of recurrently interacting neurons. These networks do not incorporate any serial programs or cognitive rule structures. Moreover, the appropriate sublist representations are directly accessed without any prior search. We have furthermore shown how neural networks with these emergent properties can arise from simple developmental programs governing the manner in which the neurons grow and interconnect. In our present simulations, we have made one choice of these developmental rules. This choice suggests that F2 cells obey different synaptic rules for recognizing F2 excitatory signals, F2 inhibitory signals, and F1 excitatory signals. However, the principles of network design, such as the principle of activity-dependent self-similar growth, are much more general than this choice. This combination of general principles and rigorous examples provides a firm foundation for testing the theory experimentally.
In the present work, we have shown how a prewired developmental program can generate a network with the desired functional properties. These results place harsher demands upon the network than are, we believe, required in vivo. In the full Adaptive Resonance Theory of cognitive self-organization to which our results contribute, it is not necessary for the initial list encodings to be accurate. One only needs a sufficiently good processing substrate for top-down learned template-matching signals (from A4 to A3 in Figure 1) to drive an automatic memory search that can provide the occasion for learning a better encoding (Carpenter and Grossberg, 1985; Grossberg, 1980a, 1984b). With a quantitative understanding of how prior development can set the stage for later matching, memory search, and code learning events, we can now frontally attack the full problem of list code self-organization.
APPENDIX

This section describes some technical details of our simulations. First we list the input values that are used in the simulations. The inputs are listed by Figure number. Only positive inputs are listed. All other inputs equal zero.

A. Inputs: Figure 6 (I0 = 1.5); Figure 7 (I1 = 1.5); Figure 8 (I2 = 1.5); Figure 9 (I0 = 1.0, I1 = .5); Figure 10 (I0 = .5, I1 = 1.0); Figure 11 (I0 = .68, I1 = .48, I2 = .34); Figure 12 (I0 = .34, I1 = .68, I2 = .48); Figure 13 (I0 = .34, I1 = .48, I2 = .68); Figures 14-16 are the same as Figures 6, 9, and 11, respectively.

B. Connections from F1 to F2:
To produce a pseudorandom sequence of numbers {r_{ji}^{(J)} : j ∈ J} distributed uniformly over the simplex

S_n = {(y_1, y_2, ..., y_n) : y_i ≥ 0 and Σ_i y_i = 1},

we proceed as follows. By a standard algorithm (Knuth, 1981), we obtain a vector of numbers w = (w_1, w_2, ..., w_n) uniformly distributed over the n-cube I_n = Π_{i=1}^{n} [0,1]. Rearrange the numbers in w in order of increasing size to produce a new vector w' = (w'_1, w'_2, ..., w'_n) such that w'_1 ≤ w'_2 ≤ ... ≤ w'_n. The map w → w' from I_n into itself is determined by a permutation σ of the indices {1, 2, ..., n} such that w'_i = w_{σ(i)}. Each permutation σ can transform a different subset of I_n into vectors with increasing entries. Thus I_n can be decomposed into sets D_σ such that a single permutation σ can map all w ∈ D_σ into w' ∈ I_n. Hence the map w → w' transforms uniformly distributed vectors in I_n onto uniformly distributed vectors in I_n with elements in increasing order. We next map vectors w' in I_n with elements in increasing order onto vectors y in S_{n+1} via the one-to-one linear transformation y_1 = w'_1, y_2 = w'_2 - w'_1, ..., y_n = w'_n - w'_{n-1}, and y_{n+1} = 1 - w'_n. Since this linear transformation maps equal volumes onto equal surface areas, the vectors y are uniformly distributed on the simplex S_{n+1}. The coefficient of variation of {p_{ji}^{(J)} : j ∈ J} is made independent of
|J| (> 1) as follows. By the above construction, each r_{ji}^{(J)} in (15) is distributed with density function (|J| - 1)(1 - x)^{|J|-2}. The mean of this distribution is 1/|J|, and its standard deviation is

(1/|J|) sqrt((|J| - 1)/(|J| + 1)).

Thus the mean of p_{ji}^{(J)} is also 1/|J| and its standard deviation is

(p_{|J|}/|J|) sqrt((|J| - 1)/(|J| + 1)).

The coefficient of variation of p_{ji}^{(J)} is its standard deviation divided by its mean, which we set equal to a constant μ independent of |J|. Thus we chose

p_{|J|} = μ sqrt((|J| + 1)/(|J| - 1)).

In our simulations, μ was held at a fixed constant value.
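The sort-and-difference construction above is short to implement; a sketch:

```python
# Uniform sampling on the simplex S_{n+1}, following the Appendix:
# sort n uniforms, then take successive differences (with endpoints 0, 1).
import random

def uniform_simplex(n_plus_1, rng):
    n = n_plus_1 - 1
    w = sorted(rng.random() for _ in range(n))
    bounds = [0.0] + w + [1.0]
    return [b - a for a, b in zip(bounds, bounds[1:])]

rng = random.Random(42)
y = uniform_simplex(4, rng)      # y_i >= 0 and sum(y) = 1
```

Each coordinate is a gap between consecutive sorted uniforms, so the components are nonnegative and telescope to exactly 1.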
C. Interaction Constants: The following parameter choices were made: A = 1, B = 1, D = 4, f_0 = 1, and g_0 = .16. In the total choice runs (Figures 6-13), we let C = 1 and F = 1088. In the partial choice runs (Figures 14-16), we let C = .125 and F = 8704. Note that CF = 1088 in both cases. The behavior of a masking field has also been characterized over a wide range of other parameter choices.
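For reference, the two parameter regimes can be collected in one place; the invariant C·F = 1088 is the relation the text notes:

```python
# Appendix interaction constants for the two simulation regimes.
total_choice = dict(A=1.0, B=1.0, C=1.0, D=4.0, F=1088.0, f0=1.0, g0=0.16)
partial_choice = dict(A=1.0, B=1.0, C=0.125, D=4.0, F=8704.0, f0=1.0, g0=0.16)
```

Lowering C while raising F preserves the net inhibitory scale, which is what moves the network from total choice to partial (distributed) choice.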
REFERENCES

Basar, E., Flohr, H., Haken, H., and Mandell, A.J. (Eds.), Synergetics of the brain. New York: Springer-Verlag, 1983.
Carpenter, G.A. and Grossberg, S., Neural dynamics of adaptive pattern recognition: Attentional priming, search, and category formation. Submitted for publication, 1985.
Cermak, L.S. and Craik, F.I.M. (Eds.), Levels of processing in human memory. Hillsdale, NJ: Erlbaum, 1979.
Cohen, M.A. and Grossberg, S., Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Transactions on Systems, Man, and Cybernetics, 1983, SMC-13, 815-826.
Cooper, W.E., Speech perception and production: Studies in selective adaptation. Norwood, NJ: Ablex, 1979.
Darwin, C.J., The perception of speech. In E.C. Carterette and M.P. Friedman (Eds.), Handbook of perception, Vol. VII: Language and speech. New York: Academic Press, 1976.
Ellias, S.A. and Grossberg, S., Pattern formation, contrast control, and oscillations in the short term memory of shunting on-center off-surround networks. Biological Cybernetics, 1975, 20, 69-98.
Freeman, W.J., Nonlinear dynamics of paleocortex manifested in the olfactory EEG. Biological Cybernetics, 1979, 35, 21-37.
Freeman, W.J., A neural mechanism for generalization over equivalent stimuli in the olfactory system. In S. Grossberg (Ed.), Mathematical psychology and psychophysiology. Providence, RI: American Mathematical Society, 1981.
Grossberg, S., On the serial learning of lists. Mathematical Biosciences, 1969, 4, 201-253.
Grossberg, S., Contour enhancement, short-term memory, and constancies in reverberating neural networks. Studies in Applied Mathematics, 1973, 52, 217-257.
Grossberg, S., A theory of human memory: Self-organization and performance of sensory-motor codes, maps, and plans. In R. Rosen and F. Snell (Eds.), Progress in theoretical biology, Vol. 5. New York: Academic Press, 1978 (a).
Grossberg, S., Behavioral contrast in short-term memory: Serial binary memory models or parallel continuous memory models. Journal of Mathematical Psychology, 1978, 17, 199-219 (b).
Grossberg, S., Competition, decision, and consensus. Journal of Mathematical Analysis and Applications, 1978, 66, 470-493 (c).
Grossberg, S., Decisions, patterns, and oscillations in the dynamics of competitive systems with applications to Volterra-Lotka systems. Journal of Theoretical Biology, 1978, 73, 101-130 (d).
Grossberg, S., How does a brain build a cognitive code? Psychological Review, 1980, 87, 1-51 (a).
Grossberg, S., Biological competition: Decision rules, pattern formation, and oscillations. Proceedings of the National Academy of Sciences, 1980, 77, 2338-2342 (b).
Grossberg, S., Adaptive resonance in development, perception, and cognition. In S. Grossberg (Ed.), Mathematical psychology and psychophysiology. Providence, RI: American Mathematical Society, 1981.
Grossberg, S., Studies of mind and brain: Neural principles of learning, perception, development, cognition, and motor control. Boston: Reidel Press, 1982 (a).
Grossberg, S., Associative and competitive principles of learning and development: The temporal unfolding and stability of STM and LTM patterns. In S.I. Amari and M. Arbib (Eds.), Competition and cooperation in neural networks. New York: Springer-Verlag, 1982 (b).
Grossberg, S., The quantized geometry of visual space: The coherent computation of depth, form, and lightness. The Behavioral and Brain Sciences, 1983, 6, 625-692.
Grossberg, S., Unitization, automaticity, temporal order, and word recognition. Cognition and Brain Theory, 1984, 7, 263-283 (a).
Grossberg, S., Some psychophysiological and pharmacological correlates of a developmental, cognitive, and motivational theory. In R. Karrer, J. Cohen, and P. Tueting (Eds.), Brain and information: Event related potentials. New York: New York Academy of Sciences, 1984 (b).
Grossberg, S., The adaptive self-organization of serial order in behavior: Speech, language, and motor control. In E.C. Schwab and H.C. Nusbaum (Eds.), Perception of speech and visual form: Theoretical issues, models, and research. New York: Academic Press, 1985.
Grossberg, S. and Levine, D.S., Some developmental and attentional biases in the contrast enhancement and short term memory of recurrent neural networks. Journal of Theoretical Biology, 1975, 53, 341-380.
Grossberg, S. and Pepe, J., Spiking threshold and overarousal effects in serial learning. Journal of Statistical Physics, 1971, 3, 95-125.
Grossberg, S. and Stone, G.O., Neural dynamics of word recognition and recall: Priming, learning, and resonance. Psychological Review, in press, 1985 (a).
Grossberg, S. and Stone, G.O., Neural dynamics of attention switching and temporal order information in short term memory. In preparation, 1985 (b).
Halle, M. and Stevens, K.N., Speech recognition: A model and a program for research. IRE Transactions on Information Theory, 1962, IT-8, 155-159.
Knuth, D.E., Seminumerical algorithms: The art of computer programming, Vol. 2. Reading, MA: Addison-Wesley, 1981.
Liberman, A.M., Cooper, F.S., Shankweiler, D.S., and Studdert-Kennedy, M., Perception of the speech code. Psychological Review, 1967, 74, 431-461.
Liberman, A.M. and Studdert-Kennedy, M., Phonetic perception. In R. Held, H. Leibowitz, and H.-L. Teuber (Eds.), Handbook of sensory physiology (Vol. VIII). Heidelberg: Springer-Verlag, 1978.
Lindblom, B., MacNeilage, P., and Studdert-Kennedy, M., Self-organizing processes and the explanation of phonological universals. In Butterworth, Comrie, and Dahl (Eds.), Explanations of linguistic universals. The Hague: Mouton, 1983.
Mann, V.A. and Repp, B.H., Influence of preceding fricative on stop consonant perception. Journal of the Acoustical Society of America, 1981, 69, 548-558.
Patterson, P.H. and Purves, D. (Eds.), Readings in developmental neurobiology. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory, 1982.
Piaget, J., The origins of intelligence in children. New York: Norton, 1963.
Repp, B.H., On levels of description in speech research. Journal of the Acoustical Society of America, 1981, 69, 1462-1464.
Repp, B.H., Phonetic trading relations and context effects: New experimental evidence for a speech mode of perception. Psychological Bulletin, 1982, 92, 81-110.
Repp, B.H. and Mann, V.A., Perceptual assessment of fricative-stop coarticulation. Journal of the Acoustical Society of America, 1981, 69, 1154-1163.
Chapter 8
Samuel, A.G., van Santen, J.P.H., and Johnston, J.C., Length effects in word perception: We is better than I but worse than you or them. Journal of Experimental Psychology: Human Perception and Performance, 1982, 8, 91-105.
Samuel, A.G., van Santen, J.P.H., and Johnston, J.C., Reply to Matthei: We really is worse than you or them, and so are ma and pa. Journal of Experimental Psychology: Human Perception and Performance, 1983, 9, 321-322.
Sperling, G., The information available in brief visual presentations. Psychological Monographs, 1960, 74, 1-29.
Stevens, K.N., Segments, features, and analysis by synthesis. In J.F. Kavanagh and I.G. Mattingly (Eds.), Language by ear and by eye. Cambridge, MA: MIT Press, 1972.
Stevens, K.N. and Halle, M., Remarks on analysis by synthesis and distinctive features. In W. Wathen-Dunn (Ed.), Proceedings of the AFCRL symposium on models for the perception of speech and visual form. Cambridge, MA: MIT Press, 1967.
Studdert-Kennedy, M., Speech perception. Language and Speech, 1980, 23, 45-66.
Studdert-Kennedy, M., Liberman, A.M., Harris, K.S., and Cooper, F.S., Motor theory of speech perception: A reply to Lane's critical review. Psychological Review, 1970, 77, 234-249.
Wheeler, D.D., Processes in word recognition. Cognitive Psychology, 1970, 1, 59-85.
Young, R.K., Serial learning. In T.R. Dixon and D.L. Horton (Eds.), Verbal behavior and general behavior theory. Englewood Cliffs, NJ: Prentice-Hall, 1968.
AUTHOR INDEX
Adarns. P.A.. 16. 47. 58-59. 78 Akita. 51 , 151. 208 Albrecht. D.C.. 76. lB8, 193, 207 .4lbright. T..163. 210 Amari. S..4. 17. 6 i , 72, 395. 451, 4 9 i Anderson, E..396 Anderson, J.A.. 316-318. 354. 391, 449-450 .4nderson. J.R.. 315-316. 334. 391 .4nnines. P., 75 Antos. S.J.. 407. 450 Arbib. M..4.,4, 17. 67. i 2 . 395, 451. 4 9 i Arden. G . B . , 274. 309 Arend, L.E.. 28. 67. 83. 126. 138. 213-214. 225, 229, 233. 2 3 i . 267
.4schoff, J . , 391 .4tkinson, J.. 62. 76, 126. 141 Atkinson. R.C.. 315. 349. 391. 441. 452 Attneave. F., 7, 13, 67 Babloyanct. A.. 37. 74 Backstrom. A.-C.. 274, 309 Bacon, J.. 13. 57. 74 Banquet, J.-P., 350. 391 Barlow, H.B.. 67 Barroso, F., 13. 57, i 4 Barto. A.G.. 317-318. 400 Basar. E.. 139. 208. 268. 404. 450. 458, 496 Baumgartner. G . . 81. 85. 142, 158. 162, 193, 210. 214. 270
Baylor, D.A., 24, 67,
271. 273-274. 279. 287. 290. 292, 296, 300-301.105-309 Beck. J . . 4. 7. 61. 67. i 3 . 105. 138, 145-146. 163. 165. 167-16d. 170. 172-173. 177. 187. 189. 201. 2Oi Becker. C.A.. 407-409.450. 454 Bell. J.A.. 448. 450 Berbaurn. K..i , 45. 7'8 Bergen, J.R.. 18. 33. 5 i . 58 Bereer. T . W . . 318. 391 Bergstrom, S.S., 67. 126. 138. 212. 214. 229, 253. 23i. 261. 267 Besner. D., 408. 450 Btederman, I., 105, 138 Black. A.H.. 398 Black, P..397 Blake. R..16. 28. 56, 67. 126, 138, 214. 239. 247. 267 Blakernore, C.. 33. 67 Blank. A.A., 4. 67 Blank. M A . . 356.393 Bobrow. D.G.. 350. 363. 398 Boring. E.G.. 6, 23, 53
Bournan. M.A., 78 Bowles. N.. 438, 443-444. 450 Boynton, R.M., 6 i . 89. 138. 273. 309
Bridge, J.G., 170, 271, 376, 423 Bridgeman, B., 68
Brodhun, E.. i 4 Brooks. C Sl . 393 B r o w n . J.L.. 67. 454 Buchwald. S.E.. 365. 392 Buehler. J.N.. 67, 83. 138, 213. 267 BufTart. H.. 67-69. 74. 78. 395 Butters. N.,385. 391 Byzov, .4 L.. 76 Caelli. T.. 4. 68. 138, 145-146. 207,267. 450 Caldwell, P C.. 296, 309 Campbell, F.W.. 62:76. 126, 141 eapek, R..275. 309 Carney, 4.E.. 365. 391 Carpenter. C.A.. 24. 68. 83, 138. 146, 162. 187, 207, 213, 267. 273. 350. 3 5 i . 361. 385. 390391, 414, 419, 422. 430, 450, 481, 493. 496
Carpenter. P.A.. 74 Carpenter. R.H., 33. 67 Cerrnak. L.S., 355, 391, 427, 450, 465, 496 Chajcryk, D.. 350, 396 Chang, J.J.. 74 Chastain. C . . 3 ~ 2 392 , Chiprnan, S.,77 Christian, E.. 318, 400 Clark. E.A., 73 Cogan. A.L.. 16. 28. 56, 68, 126. 138, 214. 239. 243. 2 4 i , 251. 267
Cohen, J.. 72. 139, 208, 310, 363. 395-396.497 Cohen. M.A.. 2. 4. 47-48, 57. 64-65. 68, 83. 89, 126, 198, 264, 430,
132. 135-138. 140, 146. 151, 162, 165, 207. 212-214, 222, 224, 239, 251. 263267, 314, 346. 380. 382. 392, 414, 428, 438, 450, 456-457, 492. 496 Cohen, N.J.. 399 Cole, R.A., 74. 76, 328. 392. 596, 398, 400 Collins, .4.M..315. 592 Coltheart, M.. 408-409. 450 Conway, J.. 83. 141, 213. 269 Cooper, F.S.. 365. 397. 400. 425. 452, 454. 463. 49i-498 Cooper. W . E . . 392. 425. 450. 463, 196 Coren. S . . 6. 68-69. 126, 138. 214. 227, 229, 267 Cornsweet. J.C.. 83, 138, 141. 213, 269 Cornsweet. T.N.. 3-4, 6. 13, 18-19, 26, 28-29, 47. 68. 109. 138. 141. 210, 214. 237. 267. 269. 328, 392 Cortese, C., 438. 454 Cowan. J.D.. 78 Craik. F.I.M.. 3. 13, 19, 7 7 , 126, 141, 210. 212. 214. 225. 22i. 237. 269. 355, 361, 391. 393. 412, 427, 450, 465. 496 Creutafeldi. O.D., 128, 141, 188, 193, 210 Crick. F.H.C.. 68 Crowder. R.G.. 314, 392. 400 Curtis. D.W.. 68. 126. 1S8. 214. 239. 247, 267 Dalenoort. G.J , 68-69 Dallet. K . M . . 392 Darwin, C.J., 368. 315. 392. 458? 462. 496
Davelaar. E.. 408. 450 Daw, S . W . , 79 Day. R.H.. 6. 69, 127, 138, 213. 267 Deadwyler, S.A.. SIB, 392. 400 DeFrance. J.F.. S18. 392 DeHann. B.. 85. 159 DeLangc, H.. 69 Deregowski. J.B.. 69 Desimone, R., 165. 2Oi, 210 Dethier. V.G., 330, 392 Dev, P..4, 17-10, 57, 69, 146, 207 DeValois, K.K., 89, 138 DeValois, R.L.. 89, 138, 188, 193. 207 DeWeert, Ch.M.M., 69, 83, 95, 141, 151. 173, 210, 213, 270
Diehl. R.L..365, 392 Diner, D., 69 Dixon. T.R., 342, 392. 455, 498 Dodwell, P.C., 6. 62-63. 68-69, 138, 146, 201-208, 267, 356. 392, 450
Dowling, J.E., 215-274, 295,309-310' Eccardt. T., 369. 384, 398 Eccles. J.C., 296, 309. 316,329, 392 Eich. J.Y.. 449-450 Eijkman, E.G.J..6, 69 Ejima. Y..151. 208 Ellias, S.A.. 4, 23. 28, 31. 33-34, 37. 44-45, 48, 50, 69, 162. 208, 385, 392, 492, 496
Elrnan, J.H., 365, 392 Ernmert. E., 3, 6, 46. 59, 69 Engel, G.R., 69. 247, 267 Enroth-Cugell, C., 33, 69 Epstein, A.N., 77 Erickson, R.P., 328, 592 Esplin, D.W., 275. 309-510 Estes. W.K., 314-315, 323, 327, SS8, 393, 398 Fender. D., 28, 45. 69 Fesringer. L., 69 Feustal, T.C.. 454 Finlay, B.L..188, 193. 210 Fisher. D.F..140. 269 Fisher. R.P..561. 395 Fishler. 1.. 152 Fitts. P.M.. SS4, 395 Flerova. G.I., 76 Flohr. H.. 139, 208. 268, 404, 450. 458, 496 Foley. J.M.. 22, 69, 75 Foss. D.J., 356. 393 Foster, D.H.. 69 Foster, K.,318. 393 Fowler. C.A.. 369. 393 Fox, R.. 16. 33. 67, 69. 126, 138, 214. 239, 267 Freedmen. J., 438, 452 Freeman. W.J., 70,75, 328, 356, S93, 398, 493, 496 Freund. J.S., 401, 438. 441-446, 448, 455 Friedman. D., 69, 76-i7. 138, 392-595, 399, 496 Frisby. J.P., 70, 75 Fry, D.B.,365, S93 Fujimura, O., 365, S97
Fukushima, K., 356, 393
Gabriel, %I.. 318. 392 Gallistel. C.R.. 387. 393 Ganz. L.. 3 i 6 - 3 7 , 393 Garfield. L.. 438. 454 Gellatly. A.R.H.. 83. 138. 213, 207 Gelman. R.. 387, 393 Geman. D.. 180. 208 Geman, S., 180, 208 Georgeson, M.A., 33, 67 Gerrits. H.J.M., 70. 83, 139, 213, 224, 2 6 i Ghatala. E.S., 448, 450 Gibson, J . J . , 57, 70,351, 377, 595 Gilchrist, A.L., 13, 70 Glanzer, M., 438, 445-444, 450 Glass, L., 61. 70, 105. 1S9, 147: 177, 187-189. 193, 201, 2OP-209
Gogel, W.C., 6, 57, 70 Gomez, L., 407, 450 Gonzales-Estrada, M.T., 70 Goodman, D., 329, 396 Goodman, G.O., 442, 452 Gordon, J., 151, 210 Gorman, A.M., 438, 451 Gouras, P., 128, 139, 188, 193, 208
Grabowski. S.R.,273, 309 Graham, N., 4, 18, 28. 45, 56-58, 71, 57, 89. 139. 229, 267
Green. D.M.. 431, 451 Gregory, R.L.. 6. 28, 69, 71, 1S3, 139, 147, 162, 17i, 189-190, 19S, 201, 208. 225, 267, 403
Grillner, S., 385, 393 Grimson, W.E.L., 71 Gross, C.G., 163, 196, 210, 271 Grünau, M.W. von, 73 Hagen, M.A., 46, 73 Haken, H., 139, 208, 268, 404, 450, 458, 496 Halle, M., 368, 396, 400, 425, 452, 454, 464, 497-498
Hamada. J.. 1% i 3 . 126. 140, 212, 214, 237, 2S9, 261. 268
Hanson. ME.. 78 Harbcck. A,. 83. 141. 213, 269 Harris. C.S.. 78 Harris, J.R.. 63,74. 356. 596 Harris, K . S . , 365, 400, 425, 454. 463, 498 Hary, J.M.. 354. 396 Haymaker, W., 318, 398 Heard. P.. 147. 177, 189-190. I9S, 201. 208 Hebb, D.O.. i 3 , 83, 141, 213, 269, 3 l i Hecht. S.. i 3 Heggelund. P.. 126, 128, 140, 188, 193. 208, 214, 22i, 268
Held. R.. 25, 74. 17i, 397. 452, 497 Helmholtz. H.L.F. von, 16, 73, 84, 140, 149, 209, 213. 268
Helson, H., 364, 396 Hemilä, S., 274, 309-310 Hendrickson, A.E., 127, 140, 268
Hepler, N., 73 Hering, E., 16, 73, 78, 243, 268 Hermann, A., 73, 174, 210 Heron, W., 83, 141, 213, 269 Herrnstein, R.J., 207, 391-392 Hicks, R.E., 369, 396 Hildreth, E., 5, 19-21, 73, 75, 146, 209 Hochberg, J., 73 Hochstein, S., 193, 210 Hodgkin, A.L., 24, 67, 271, 273-274, 300-301, 305, 308-309
Hoffman, W.C., 4, 68. 146, 209 Holway, A.F.. 6, 23. 13 Horn. B.K.P., i 3 , 146, 209 Horton. D.L.. 342, 392. 455. 498 Horton. J . C . , 127, 140, 268 Hoyle, G.. 330, 396 Hubel, D.H., 23. 61, 73, 75, 85. 110. 127, 140-141. 154. 188. 193. 209. 215. 268-269
Hunt, S.P.. 127, 140. 268 Hurvich, L.M., 73, 400 Indow, T..73 Ito, M . . 329, 592 Jameson. D.. 73. 400 Jenkins. J.J., 365, 397 Johansson. G., 74 Johnston. J.C.. 315. 356, S76, 383. 396-391. 399, 128. 454. 462-463, 498
Jonasson. J.T...408. 450 Jones. R.S..316. 391, 450 Jongsma. H.J., 6. 69 Julesz. B.. 12, 16-11. 28, 45, 56, 59, 62. 69-10, 74, 105. 140, 145-146. 207, 209
Juola. J.F., 441, 452 Jusczyk, P.W., 355. 396 Kaczmarek. L.K.. 3 i . 74 Kahneman, D.. 350. 396 Kandel. E.R.. 76 Kanizsa. G.. 83. 86. 105, 108. 121. 128, 140. 145. 149, 151. 154. 159. 209. 213. 211, 227, 269 Kaplan. G .4.. 145. 209 Karrer. R.. i?. 139. 208. 510. 363. 395-396. 491 Kaufman. L.. 4. 6. 8-9. 11-13. 16-18. 22. 28. 34. 56-53. 54. 126. 152, 140. 214. 227. 229. 269
Kawabata. S . . 147, 209 Keemink. C.J., 78. 126. 111. 214. 227. 269 Kelso. J.A.S., S29. 396 Kennedy. D.. 83, 97, 99. 105, 140. 213, 227. 269. 330. 396
Killion, T.H., 407-409, 450 Kimura, D., 369, 396 Kinsbourne, M., 369, 396 Kintsch, W., 441, 452 Klatt, D.H., 27, 74, 328, 396 Kleinschmidt, J., 273-274, 295, 310 Knoll, R.L., 345, 387, 399-400 Knuth, D.E., 494, 497 Kocher, E.C., 76 Koffka, K., 47, 74
Konig. A,. 14 Kooprnans. H.J . 452 lirauskopf, J . . 83-84, 140. 149. 209 Krekling, S.. 126. 140. 214. 227. 268 Kruger. J . , 128. 139. 188, 193. 208 KuBer. S.W.. 316, 396 Kulikowski, J.J..8-9, 11-12, 18, 28, 34, 56, 74, 126. 132, 140. 214. 229, 269
Kunz, K.. 151, 209 Kuperstein. M.. 72. 133. 140. 312, 387. 395. 404. 452
LaBerge. D . , 383. 396, S99 Lamb, T.D..24. 67, 271, 273, 305, 308-309 Laming, D.R.J.. 74 Land. E.H.. 28. 74, 82, 84-85. 121. 126-127, 110. 149, 198. 209. 213, 225. 251. 269
Landauer, T., 438, 452 Lange, R . V . , 28, 6 i Lanze, M.. 63, 74, '356, 596 Lapinski, R.H., 40?, 452 Laquaniti. F., 329, 399 Lashley, K.S.. 313, 3S4, S96, 427, 452 Lawry. J.A., 383. 396 Leake. B., 75 Lee, B.B., 128. 141, 188, 193. 210 Leeper. R.,105. 140 Leeuwenbcrg, E.L.J., 67-69, 74-15. 78. 83, 142. 151. 210, 395
Legge, G.E.. 28. 75, 126, 141. 214, 247. 269 LeGrand, Y.. 15 Leibowitr, S.F..74, 397, 452, 497 Lenneberg, E.H.. 342, 396 Leshowitz. B.. 75 Lettvin. J.Y., 4, 75 Levelt, W.J.M.. 16. 69, 75, 126. 111, 214. 239, 245, 247, 269
Levey. J . , 83. 141, 213, 269 Levick. W.R.. 6 1 Levin. J.R.. 448. 450 Levine. D.S., 23. 33. 7s. 75, 162.208, 351,377,395. 3 9 i . 492, 497
Levinson. S.E.. 369, 397 Lewis. J.. 452 Liberman. A.M.. 365. 374, 384, 397-398. 400, 425. 452. 454, 462. 49T-49R
Liberman. M.Y..369. 397 Lindblom, B.. 369. 397, 458. 4 9 i Lindman. H..4. 68. 400 Livingstone. M.S..127, 140-141, 268-269 Lockhart. R.S.. 412. 450 Lockhead, G.R..6 i . 83, 138. 213, 267 Lodico. M.G.. 448, 450 Loftus. E.F..315. 592 Logan, B.F. Jr.. 75 Lorch. R.F.Jr., 361. 363, 398 Low. J.C.: 274. 309 Luneberg, R.K., 4, i 5 Mach. E..19. 76, 78. 126. 209. 212, 229. 233. 269. 454
Slacliay. D.C.. 315. 334. 369. 39i MacLean, P . D . . 318,397 MacNeilage. P.,458,497 blaguire. W . , 7,45,78 Mandell, A.J., 139,208, 268, 404,450. 458,496 Mandler, G.,401. 405,441-446,448. 452 Manelis, J.. 438. 452 Mann. V.A.. 365,397-398.425,453-454. 46:. 497 Mansfield, R.J.W.. 77 Martel. A., 405, 455 Marks, W.. 83, 141. 213, 269,423 Marler. P.A.. 365, 397 Marr, D . , 5, 12, 17,19-22, 50. 68,75-76, 146,209 Marslen-Wilson, W.D., 356,397 .Martin, L.K.,393 .Marvilya. M.P.. 365,397 MaPsaro, D . W . . 554,396 Matthei, E.H., 315, 39i,399,454,498 Matthews, M., i 8 Maudarbocus, A.Y., 75 Mayhew, J . E . W . , 75 McCann, J . J . , 74 hlcClelland, J.L.. 315, 350-351,356,369.375,3851, 396-397,399,401, 410-411,429, 453-454 McCourt, M.E..75, 191,209 .McCullough, W.S., 406,453 McDonald, J.E.. 405. 407,430-436. 453-454 Mclntyn, C..69-70 McKay. D.G..405. 453 McKoon. G..516. 398 Metzler. J., 65. 77 Meyer, D.E.. 407-453-454 Miller. G.A., 312, 375,378 Miller, J.F.Jr.. 6, 76 Miller. J.L.. 369,374,384.396-397 Millikan. J . , 438, 454 Mingolla, E..2.82-83,105,140-141, 144. 146. 149, 151, 154. 158-159, 165, 165, 17s. 17i. 193. 195, 198. 202-205,.205, 208, 213-214, 217. 227. 251. 257-258,268,404,430.452 Minor, A,\.'.. 76 Mitarai. G . . 129,141 Miyawaki. K.,365,397 Mollon. J.D..89, 141. 210 Monsell. S..345,387. 399-400 %loran,J.. 163. 207 Mori, T..76 .Morton. J.. 406,408,453 Mudock. B.B., 358, 342.59'1-398.449. 453 Myen. J.L., 361,365, 598 Nachmias, J.. 18. 56. 71, 76, 89. 139. 229, 26i Sadel. L..318. 355,398-399 h'auta, W.J.H., 318, 396 Keely, J.H.. 409-410,420,424,453 Neimark, E.D.. 327,398 Seisser, U.. 145.209,587, 398,411,453 Newell. A., 27, i6, 398 Newsome. S.L.. 407,458,453 Nicholls, J . G . , SIB. 396
Zorman. D .A,. 350. 363. 398.433 Norman. R . . 4 . . 273.310,315. 339.369. 399 Nusbaum. H.C.. 72, 139. 313. 363-364.369. 3 i 4 . 399,452. 497 O'Brien. I... 3. 13. 19. 76-7i,126, 141, 210. 212. 214. 225, 227.237. 269 O'Keefe. J . . 318. 398 Obusek. D.J.. 356. 400 Olds, J.. 318. S98 Orona, E.. 318,393 Osgood. C.E..28. 76,339,398 Over. R.,i7 Paap, K.R..407,438,453 Pachella. R . G . , 431.453 Pak, W.L., 309 Parks. T.E., 83, 141, 213,269 Pastore, R.E.. 554. 398 Patterson, P.H., 381. 398,4'10,49i Pearlstone, 2.. 441,452 Pepe, J . , 73. 338,341,395-396,406,448-449.452, 458, 49i Pesecsky, D., 369,384, 398 Peterhans, E.,81, 85, 142, 158, 162, 193.210. 214, 270 Peters, S., 365,397 Petry, S.:83, 127, 141. 213,269 Pew, R.W., 451. 453 Piaget. J., 365. 398,411, 427,453, 464.49i Pike. R., 419. 453 Pinto. L.H.. 309 Pitts, W..406,453 Poggio, T..12, li, 21-22,50, 68, 75-76. 146,209 Pollen, D.A., 76 Porac. C.. 68 Posner, M.I., 334.363.393.398,401,404,409-410, 419-420,424,453-454 Prandtl, A., 174,209 Prazdny, K..127. 138,141. 145,151, 163. 165,167, 187,189. 201, 207, 209 Preyer. N'.,174. 209 Pritchard. R.M., 83, 141,213, 269 Pulliam. K..76 Purves. D.. 381,398. 470, 497 Raab. D . H . , 75 Raaijmakers. J . G . W . . 42. 76. 315, 318. 320, 349, 398 Rall. W., 76 Rashevsky. N., 76. 400 Ratcliff. R.,316. 398 Ratliff. F.. 19, 31. 41. is, 83, 141, 146, 209, 213, 229. 269,406.454 Rauschecker, J.P.J.. 62,76, 126, 141 Reddy. D.R.. 328,392 Redies. C..83. 91,95,112, 141, 151. 173.208-209, 213, 269 Reeves. A,. 375. 398-399 Repp, B.H.. 385, 369,374. 384! 397-398,425,453454, 458. 463,497 Rescorla. R.A.. 318.398 Restle, F.,6,68. 76. 364, 398. 400
Author Index Richards. H .. 6.21, 23. 76 Richter. J . . 76 Riggs. L.A., 83, 141. 213. 269 Ripps, H..273. 309 Ritr. S.. 316.391,450 Rivers. G..69 Robinson. J.H.. 318. 392. 400 Robson. J G . . 4-5,i, 33, 45,56, 58.69,71. 76-77, 376, 399 Rock, I.. 7i Rodieck. R . W . . 33,i i Ronner. S.F..16 Rosenfeld. A,, 105, 138, 145-146, 163. 165. 167, I8i. 189. 201, 201 Rozental. S . , 11 Rubenstein, H . , 438,454 Rubin. G.S.. 28. 75. 126, 141,214. 247. 269 Ruddock. K.H.,75 Ruddy. M.G.. 401,453 Rudnicky. A . I . , 328.392 Rumelhart, D.E..315. 339. 350-351. 356. 369. 375. 383.391, 399,401,410-411. 429,453-454 Rushton. W . A . , 71 Sakakibara. M.$129. 141 Sakata, H.. 71 Salasoo. A., 404. 454 Salehmoghaddam. S..309 Saltwick. S.E.,318, 393 Samuel. A.G.. 315. 376, 383, 39i, 399. 428. 454. 462-463,498 Sandick, B.L., 28,6i Sawusch. J.R., 363-364, 369. 374, 399 Scarborough. D . L . . 438,454 Scarborough. H.S., 438,454 Schatr, B.R..165, 167,209 Schein. S.J.. 163.207 Schiller. P.H.. 188. 193. 210 Schneider. W . . 10. 225.269, 350, 399 Schriever, W.. ii Schrodinger. E..i i Schuberth. R.E., 40i,454 Schulman. .4.I., 438. 454 Schvaneveldt. R..405, 401. 430-436,450,152-454 Schwab. E.C.. 12. 139, 313. 364,369,314,399, 452. 49i Schrartz. E.L..4, 21. 77, 163.210 Scott. J.W..69 Sekuler. R . . 7, 16. 45,56, 68. ii. 126. 138.214,267 Semmes. J.,369, 399 ShrtTer. L.H.. 369.399 Shankweiler. D.S.. 365,397,452,463,49i Shapley, R., 151,210 Sharpe, L.T.. 89. 141. 210 Shepard. R.N.. 28. 77. 399 Shepherd, G . M . . ii ShitTrin. R.M., 42.16.225,269. 315. 318-320.349350.391,398-400, 404.454 Shipley. T..12. 77 Silverman. G . , 16,56.68. 126, 138. 214. 261
Silverstein. J.M .. 316,391. 450 Singer. &'.. 61.77 Sloane. M.. 16. 56,67. 126, 138.214. 229. 26; Srnale. S . . 314. 299 Smith. A.T.. i. 15. ii Smith, F.. 393 Snyder. C . R . R . .363. 398. 401,404. 409-410, 419420. 424,453-454 Soechting, J.F..329. 399 Sondhi. M.M..4. 24. i i Southard. D.L.. 329, 396 Spence, K . W . , 391 Sperling. G.,4, 11. 17-18,22, 24,57. 17, 145, 210. 375, 398-399.421, 454. 465,498 Spillmann, L., 83. 91. 93, 112, 141, 151, 173-174, 209-210, 213. 269 Spitzer, H., 193. 210 Squire. L.. 385,391,399 Slanton, .u., 318, 393 Stein, L . , 318. 399 Stein, P.S.G.,330. 399 Sternberg, S.. 345,385. 3E1, 399-400,40i.454 Stevens, C.F.,316.400 Srevens. K.N., 368, 396, 400, 425. 452, 454, 164. 497-498 Stevens. S.S.,29,l i Stone. G.O., 350. 359.375,383. 396.403. 421,452. 458, 463,466-467,491 Scone J., 33, ii Strange, w., 365,397 Stromeyer, C.F., 111 ii Studdert-Kennedy, M., 354. 365-366. 369. 375. 397, 400. 425. 452, 454, 458, 462-463, 497498 Suci. G . J . , 28, 76 Sutton. R.S.. 311-318.400 Swets. J . A . , ii.431.451 Switkes. E.. 61. i0, 105, IS9. 147, lii. 187-189, 193. 201,208 Szentagothai. J.. 329. 392 Takahashi, S.,151. 208 Tanaka. M.. 128. 141. 188, 193,210 Tangney. J., i8 Tannenbaum. P.H., 28, 76
Taub. H.B.. 75 Teghtsoonian. M.. 46,73 Thompson. R.F.. 318. 391 Thomson. D.M..441,455 Thorell. L.G.. 188, 193.205 Timmermann, J.C..M.E.N.. 10, 83. 139,213,261 TodoroviC. D.,126. 141, 149,210. 214, 227. 269 Truman. D.L., 448,450 Tschermak-Seysencgg. A., von 13, 16,7 i Tueting, P.. 72. 139, 208. 310. 363. 395-396,491 Tulving, E.. 405. 412. 441,450.454-455 Tweedy, J.R.. 401,452 Tynan. P.. I , 45. l i CiIlman. S., 76 Underwood, B.J., 401.4[11-405,438.441-446,448. 455
Ungerleider. L.C.. 163, 207 Usui. S.. 129. 141 C'ttal, W.. 67, 77 Van Nes. F.L., 78 Van Santen, J.P.H.. 515, 376. 583, 397, 399, 428. 454. 462-465, 498 Van Tuijl. H.F.J.M.,78. 83, 95, 141-142. 151, 173, 210, 213. 269-270 Van den Brink, C.. 78, 126, 141, 214. 227, 269 Vendrick. A.J.H., 70, 83. 139, 213, 224, 267 Verbrugge, R., 565. 397 Viemeister, N.F.,365, 591 Vincent, J., 6, 69 Volman. S.F.,188, 193, 210 Von BCkisy. C., i 8 Von der Heydt, R.,81, 85, 128-129, 142, 158, 162, 193. 210, 214, 270 Wagner, A.R., 318, 598 Wallach. H., 16, 47, 58-59, 78 Ward, L.M.. 68 Ware. C.. 97, 105, 140, 142 Warren, R.M., 556,400 Watkins, M.J., 514, 400 Watkins, O.C., 314, 400 Watson, A S . , 4, 6, 47, 78 Weber, B.A., 1-3, 5, 24, 26-50, 33, 67. 271. 276 Weisstein, N.. 7, 45, 63, 68, 74, 78, 356, 3773 596, 400 Welford. A.T., 334. 400 Welsh. A.. 556, 397 Werblin. F.S.,30, 78, 273, 310 Werner. H.,13. 78 Wertheimer, M., 145. 210 West, M.O.. 318. 392, 400 Wheeler. D.D., 429. 455, 463, 498 Whitten, D.N., 275. 509 Widen, G.P.. 365. 591 Wieael, T.N., 25, 61, 'IS, 85, 110. 127, 140, 154, 188. 193. 209, 215, 268 Wilkes-Gibbs. D.L.,442, 452 Williams. A.. 78 Willows. A.O.D., 330, 400 Wilson, H.R., 18, 55, 57, 78 Winston. P.H.. 79 Wolfe J.M., 177, 210 Wood, C.T., 441, 452 Wright. C.E.. 345, 587, 599-400 Wu, J.-Y..127, 140, 268 Wyatt. H.J.. 79 Ymbus. A.L.. 83-85, 126, 142. 149, 151, 210, 213, 270 Young. R.K.. 404, 455, 462, 498 Zablocka-Esplin. B..275, 309-510 Zeki, S.: 127, 129, 152, 142, 163, 210, 270 Zucker. S.W.. 79, 180, 210 Zue, V.W., 328, 392
SUBJECT INDEX
Access direct, 457 parallel, 410 Accuracy-speed trade-off, 403 Action, mass, 489 Activation automatic, 405, 409 chunk, 457 enzymatic, transmitter production, 281 interactive, 403, 410 spreading, 314 Activity -contingent growth, 470 -dependent growth, 457 chaotic, 325 grouping, 457 pattern, 212, 457 quantization, 3 resonant, 3 scale: see scale, activity
STM. 457 Adaptation intracellular, 276, 287 level, 3, 24 processes, 82, 112 real-time, 313 selective, 363,403 self-organization, 3 13 transmitter gating. 271, 273 visual. 82. 212 Adaptive filter, 351. 377. 379, 403 processes. 213 resonance, 359, 403.411 self-organization of serial order, 311. 313 template matching, 403 visual mechanisms, 83 Allelotropia, 3, 48 Ambiguity, uniform regions, 3. 21 Amnesia. 384 Anchors. 363 Antagonistic rebound. 287. 294 Anticipation: I e e expectancies Arousal nonspecific, 332 source, 385 Articulatory requirements, 403 space, auditory-, 365 undershoot. 368 Artificial intelligence, 144,313 Assimilation, 97 Association, contextual, 445
Associativr learning. 314. 320 memory. 359 Asymmetry. brightness and darkness, 23; Attention conscious. 403 gain control. 403. 420 limited capacity, 409 priming. 401, 403. 420 resonant equilibration, 419 set. 365 Auditory .articulatory space, 365 contrast. 363.403 motor features, 463 Automatic activation, 403. 409 parsing. 382 processes, 225, 314 self-tuning, 345 Automaticity, 349, S61 Avalanches, 329 Babbling, 365 Backward effects, 374 Barrier, filling-in. 56 Baylor, Hodgkin, and Lamb model: ree'BHL model BCS: gee boundary contour system BHL model of cone dynamics, 273,305 Binocular brightness. 2, 48. 212,247 development, 59 edges, pooled, 48 feedback, 3 fusion: ace fusion interactions, cooperative versus competitive, 23 length scaling. 48 matching. 3,9.64. 89, 212, 217, 263 miring. 16 processing, 3. 212 representation. 212 rivalry: see rivalry to monocular feedback, 64,263 Blobs. cytochrome oxydaJe staining, 82, 212 Border locking. 189 Boundary -feature tradeoff. 109, 153 completion, 80.82. 85-86.99.105. 114,158,215,
217 contour. 82. 85. 99,144, 151, 170, 193. 198,202,
212. 215. 258 contour system (BCS), 85. 144. 193, 198, 202.
212. 215 formation, I34 invisibility, 162-163 Brightness and contours. 82 averaging. 212. 265 binocular. 16.247 contrut, 108. 237
modulation. 212 rhunting. 345 paradoxes. 121. 261 stage. 144 perception, 3. 13. 82. 211-212 synaptic sites. 4 5 i summation, 3. 48, 212 versus automaticity. 349 theory, 82. 212 Completion Buffer. limited capacity without. 349 boundary. 80, 82, 85-86, 99. 105, 114, 158, 215. Caf6-wall illusion, 189 211 Capacity, STM.3, 41. 318. 349. 361, 403,409 contour, 105 Categorical perception, 314. 353-354,365.403,412 figure-ground, 3, 59 CC loop: aec cooperative-competitive loop pattern. 359,382 Center: r e e on-center off-surround visual. 3 Chaotic activity, 325 Computer Chunking simulations. 116. 144, 117,475 activation. 451 vision. 144 hierarchical. 335,351 Conductanre. 292 list, 457, 466 Cone self-organization, 313 BHL model, 213 temporal, 376, 461 potential, 273 Circular reaction, 365, 403 transmitter, 273 Climbing, hill, 350 turtle. 273 Coarticulation. 588 vertebrate. 275 Code Connections. growth, 457 context-sensitive. 457 Consciousness language, 456-451 attention. 405 list, 451 monocular activity, 212 specificity, 403 Conservation, synaptic sites. 457,490 stabilization, 356 Context STM. 369. 457 association. 445 Cognitive rules, -457.459 real-time, 3 13 Coherence Con text-sensit ive computation, 1, 3 codes. 451 data, 5 language representations, 457 factorizing. 325 language units, 458 filling-in. 3 segmentation, 144 percept, 83 STM code, 369 Color Contour edges, 149 -sensitive processes. 82, 212 estimates. 82 boundary: ace boundary. contour induction. complementary, 82,91 brightness, 82 represen tat ion. 2 12 completion. 105 spreading. neon: t e e neon color spreading feature: see feature, contour Command. velocity, 332 formation, 82 Comparison ilhsory. 112. 121, 147 non-word, 432-433 luminance. 229 word. 432. 4S6 real. 112 Competition spatial pattern. 212 and grouping, 456-457 subjective: m e illusion, contour binocular. 23 Contrast dipole, 135 amount. 82, 212 feedback. 9, 36, 41. 158 and orientation. 85. 
215 feedforward shunting, 24 assimilation and grouping, 97 interorientarional, 135 auditory. 365, 403 intraorientational. 134 brightness. 108,231 local. 99 darkness. 231 mask. 451 direction, 82, 212 masking field, 403 edge, 212 perpendicular subjective contours, 16; enhancement. 41, 346. 492 self-similar growth, 492 feature contour, 89. 219 short-range, 86. 144. 217 image, 144, l i 3
Subject Index orientation. 212 threshold, 3 Control bottom-up, 423 gain: nee gain control language development, 403 masking 6eld. 457 motor, 311, 313 process. 225 top-down. 423 Cooperation binocular. 23 colinear. 16s complex cells, 181 depth planes. 3 long-range, 144 long-range oriented. 86, 99. 114. 136, 217 stage. 144 Cooperative-competitive (ccjloop. 144, 158 Cornsweet effect, 3, 47. 212 Cortex architectures. 193 blobs, 82 d a b and predictions. 127 hypercolumns, 82 visual, 82. 144. 212 Counting stages. 419 Coupling, gated input, 290 Craik-O’Brien effect, 3. 212 luminance profile. 225 Crossings, zero, 3 Cue. normalized disinhibitory. 42 Current, offset. 287 Curvature, activity scale, 56 Cutting. end: gee end-cutting Cycles, grating, 3 Cytochrome oxydase staining blobs, 82 Darkness contrast. 237 Decoupling. order and rhythm, 532 Depletion, transmitter, 273 Depth coherent computation, 1, 3 form-in-, 3. 212, 222 perceived. 6. 13 planes. 3 rivalry, 3 without disparity. 56 Detection edge. 3 spatial frequency, 58 Development auditory-articulatory space, 365 binocular, 59 circular reactions. 403 dynamical interactions, 405 equilibration. 379 masking field. 4 6 9 neuronal, 4 5 i
rules. 4.59 serial behavior. 313 speech and language codmg. 456-457 Differential masking. 377 Diffusion. 89. 211-212. 219. 259 ree olso filling-in Dipole corn pet it ion, 135 field, 59 gated. 113 rebound, 287 Direct access, 457 Direction of contrast. 82, 212 Discounting, illuminant. 82, 149 Disparity. binocular, 3, 16. 56 Doubling, input, 294 Dynamics, cone, 273 Edge binocular, 48 color, 149 contrast, 212 detection, 3 fill-in. 20 fixations. 21 parallel induction, 154 processing, 30 scaling, S3 scenic, 82, 144. 149, 212 Embedding fields, 408 Emergence network properties, 459 segmentations. 143-144 speech and language units, 4 5 i , 466 Emmert’s law, J, 4 6 End, line, I 5 4 End-cutting. 109 Enhancement. contrast, 41. 346, 492 Enzymatic activation. transmitter production, 281 modulation, shift property, 293 Equidistance tendency, 3. 56 Equilibration developmencal. 3’19 resonant. 403. 419 Equivalence. scale. 56 Error trade-off. list-item, 403. 430 Expectancies. 356. 3 i 5 . 4 5 i , 461, 475 matching, 359 sensory, 328 top-down, 356 Expectation: ace expectancies Extraction. feature. 3 Factorization, 324-325 Fading. stabilized images. 212 Familiarity, 103 Featural filling-in: rec fillinpin Feature contour. 82. 89. 144, 151,.212. 219, 258 contour system (FCS). 85, 144, 212, 215
extraction, S filling-in. 82, 212 grouping, 144 trade-off. bounduy-, 109, 153 Fechner's paradox, 3, 56, 212, 24S, 264 Feedback binocular to monocular, 64, 265 competitive. 9, 36, 41 competitive-cooperative, 99. 144, 158 dipole field, 59
FIRE. S quantitative. 314 shunting, 457 sigmoid. 36 template. 430 Feedfomd distance-dependent, 30 linear, 19 shunting competitive, 24 Field dipole. 59, 113 embedding. 408 masking, 376-377, 379, 403, 457, 461, 40?, 469 match, 26s receptive, 3, SO, 144, 165 Figureground, 7, 56, 59, 239 completion, S feature extraction. S Filling-in, S. 20, 65, 82, 89. 149, 212, 219, 259 barrier, 56 dilemma. 20 resonant exchange: ace FIRE Filter adaptive, 351, 377, 379, 40s feedback. 263 oriented contrast (OC). 144, 165 FIRE (filling-in resonant exchange), S, 48,222, 259 Fixations. and e d g e , 21 Flash double. 284. 292 variable background, 279 Form -in-depth, 212, 222 coherent computation. 1. 3 perception, 3, 13, 80. 82, 162. 212 Fourier analysis. 3 models, 18 Frequency ri I uat jond , 403 spatial, 3. 30. 58, 212, 376. 457, 467 word, 403, 438 Functional ganrfeld, 259 length, 46 lightness, 47 scale, S, 42, 44 Fusion, S4 binocular, S4
scale. sensitive. 11 simultaneous. i Gain control, 103.420 Ganzfeld. 212. 224. 239. 266 Gate multiplicative, 273 transmitter, 271. 275, 275 Gated dipole, 11s input to photoreceptor, 290 Geometry axioms. 82, 144 quantized, 1, 3-4 Gestalt rules, 144, 170 Global sccnc configuration, 82, 144. 212 segmentation, 144 spatial scales, 12 visual representation, S Gradient, spatial, 355, 457 Grating, cycles, S Ground: ICC figure-ground Grouping perceptual, 97, 143-145, 177, 374. 456-457 statistics, 158 sublist, 457 textural, 105. 144 Growth random, 457. 469,'491 self-similar, 380, 457, 469-470, 490, 492 Hierarchy
BCS. 144 cascade, S14 chunking, JS5 Hill climbing, 350 Homunculus, S1S Hypercolumns, 82. 212 Hyperpolarization. 287, 294 Hysteresis, 3, 44 Illurninant, discounted. 82, 149 Illusion abstrsct, 83 cd4-wall. 189 contour. 3. 80. 82. 112. 127. 47, 15% 167 Craik-O'Erien, 225, 260 Image periodic, 144 randomly defined. 144 stabilited. 212, 224 texrured, 144 Imitation, 566, 40s Induction complementary color, 82,91 contour, 112 parallel, 154 perpendicular. 154 Inhibition lateral, S14 self. 338
Subject Index Instar. 351 Intelligence, artificial. 144,313 Intensity-time tradeoff. 374 Interior. functional scaling, 44 Interpolation, terminal motor map. 368 Intra-list restructuring, 445 Invariance principle, LTM. 589,465 Invisibility. BCS. 144 Keplerian view, 1 1 Laplacians, 3 Lateral inhibition. 314 Law Emmert's, 5, 40 power, 259 self-organization. 315 transmitter. 273 Weber, 5. 24. 28 Learning associative, 314,320 instar, 551 language, 403 outstar, 325 paired associate, 403 serial. 31S, S29.335,343 stability. 313 superiority effects, 382 word recognition. 401, 403 Length -luminancedfect. monocular, 47 effect. word, 462 functional, 46 scaling, binocular, 48 visual, 3 word, 370 Lesions. 384 Letters and words, 375 as sublists, 402 recognition, 314 Lexical decision. 405, 450 Lifting. resonant. 212. 222 Light gating. 273 visual, 273 Lightness computation. 1, 3 functional, 47 8cc also brightness Linear feedforward theories, 19 List -item error trade-off, 403, 430 chunks, 468 encoding. 457 letters and words, 575 nodes. 490-492 representation. 403 restructuring. intra-, 445 temporal. 457 Local
competition. 99 featural elements. 144 image properties, 144 receptive field properties. 3 spatial scales. 12 Locking, border. 189 Logarithms. shift property without, 28 Logogens. 403. 406 LOOP. CC: I C C cooperative-competitive loop LTM (long-term memory), 314,320,324,338-339, 309,465 Luminance background. 3 contour, 229 decrement, 212 differences. 144 monocular, 47 profile, 212 profile, Craik-O'Bricn, 225 step. 229 Mach bands, 212 ?hcrocucuit, 91,220,425,459 Magic number seven. 378 Map, terminal motor, 368 Mask competitive. 457 oriented: nee orientation. mask Masking differential. 37i field. S76-37i. 379,403. 457,461,407, 469 principle. sequence, 457 Mass action, 489 Match binocular, 3. 9. 64,89,212, 217,265 expectancy, 359, 412 false, 48 field, 263 pattern, 2i self-. 9 template, 403 !bfeasurement. quantum. 144 Membrane equations. 4 5 i Memory associative. 559 search, 314 sharpening, 354 short-term: rec STM span, transient. 374 .Metric. scaling without, 27 Microtheory, 412 Minimal model. 215 Mobilirarion. transmitter. 296 Monocular -binocular feedback, 64, 263 brightness, 16. 212 constraints, 6 length-luminance effect, 4'1 patterns, 3. 212 perceptual domains. 212
processing, 3 representations, 64,263 scaling, 34 self-matching, 9 .Multiplicative gate, 273 Multistability. 41 Neon color spreading, 80,82,91,173 Network architecture, 457 associative learning, 520 BCS. f44 direct access. 457 distance-dependent feedforward. 30 feedback competitive, 36. 41 feedforward shunting competitive, 24 global properties. 3. 144 interactions. 459 lesions, 384 masking field, 457 model, 314 nonlinear, 3 on-center of-surround, 481 quantization, 3 real-time, 405 Node item. 491 list, 490-492 unitization. 314 Noise retina, 8s suppression, 36 Nonlinear resonance. 3 Nonspecific arousal, 332 Sonwords, recognition and recall, 403 Normaluation, 41-42 Object permanence, 23 recognition system (ORS),144 OC (oriented contraat) filter: ace filter, oriented contrast Offset. current, 287 On-center OR-surround, 457, 181 Opposites attract rule, 380 Order item. 46'1 recall, 374 reversal, STM-LTM, 339 rhythm, 332 serial. 311. 313, 335 STM,538 tempord, 314, 328.SS8, 369.37i. 403, 457, 465 Organization. selE ~ L self-organization L Orientation competition, 134-135 contrast, 85, 215 mask, 134 sensitivity, 82,212 tuning, 144 uncertainty, 109
Oriented contrast (OC) filter: see filter, oriented contrast
ORS: see object recognition system
Outstar, 325
Overshoot, 278-279, 292
Paired associate verbal learning, 403
Paradox
  Bergstrom brightness, 261
  brightness, 121, 212, 261
  Fechner's, 3, 56, 212, 243, 284
  goal, 335
  Hamada brightness, 261
  internal, 313
Parallel
  access, 410
  contour completion, 105
  induction, 154
  interactions, 345
Parsing, 457
  automatic, 382
Pattern
  activity, 3, 212, 457
  completion, 359, 382
  factorization, 325
  grating, 3
  matching, 27
  monocular, 3, 212
  processing, 345
  recognition, 313
  spatial, 212, 324
  temporal, 376
  visual, 3
Peak, potential, 283
Perception
  brightness, 3, 15, 211-212
  categorical, 314, 353, 403
  coherent, 85
  correlations, 187
  depth, 6, 13, 222
  form, 3, 15, 80, 82, 182
  paradoxical, 213
  position, 15
  size, 6
  speech, 313
Performance, speed, 332
Permanence, object, 23
Phantoms, 3, 45
Phonetic requirements, 403
Photoreceptor
  potential, 290
  transmitter, 273
  vertebrate, 271, 273
Planes, depth, 3
Plans, 335; see also expectancies
Pooled binocular edges, 48
Position
  competition, 134-135
  perceived, 13
  serial, 382
Potential
  cone, 273
  peak, 283
  photoreceptor, 290
Power law, 239
Prediction: see expectancies
Preprocessing, monocular, 212
Prestriate cortex, 82, 144, 212
Prewired perceptual categories, 354
Primes, 361, 401, 403, 420
Priming
  attentional, 403
  STM probes, 314
Probes, 213, 314
Quantization
  filling-in, 63
  functional scales, 44
  geometry, 1, 3-4
  measurement, 144
  network activity, 3
Quenching, threshold, 346
Rate, recall, 374
Ratio scale, 239
Reaction
  circular, 365, 403
  time, 332, 385, 433, 436
Read-out, LTM, 338
Rebound
  antagonistic, 285, 294
  hyperpolarization, 287, 294
Receptive field, 3, 30, 144, 165
Recoding, spatial, 465
Recognition
  automaticity, 381
  letter, 314
  object, 105, 144, 162
  pattern, 313
  recall, 448, 466
  self-organization, 425, 459
  sensory, 384
  word, 401, 403, 458
Recurrence: see feedback
Reflectance, 24, 30, 33, 58
  rivalry, 3
Rehearsal, 344
Relaxation, 180
Representation
  binocular, 212
  code, 457
  item, 403, 465
  language, 457
  list, 403
  monocular, 64, 263
  predictive, 375, 481
  unitized, 457
  visual, 3, 82, 242
Reset, STM, 356
Resolution, uncertainty, 144
Resonance
  activity, 3
  adaptive, 359, 403, 411
  binocular, 212
  brightness perception, 211-212
  equilibration, 403, 419
  feedback, 59
  lifting, 222
  matching, 412
  nonlinear, 3
  recognition and recall, 401, 403
Resonant exchange, filling-in: see FIRE
Retina, noisy, 83
Retinex theory, 82, 121
Reversal, STM-LTM order, 339
Rhythm
  generators, 344
  order, 332
  types, 385
Ritualistic learning, 329
Rivalry
  binocular, 3, 7, 62, 212, 224
  reflectance, 3, 58
  scale-sensitive, 11
Rule
  boundary and feature, 151
  cognitive, 457, 459
  developmental, 459
  Gestalt, 144, 170
  growth, 457
  masking field, 457
  mass action interaction, 489
  opposites-attract, 380
  perceptual grouping, 145
  self-similar growth, 380
  2/3, 420
  universal, 144
Sampling, LTM, 324
Scale
  -sensitive fusion, 11
  activity, 3, 58
  binocular length, 48
  curvature, 56
  edges, 33
  equivalence, principle, 56
  functional, 3, 42, 44
  monocular, 34
  multidimensional, 27
  multiple, 7, 473
  ratio, 239
  spatial, 3, 12, 23, 108, 114
  structural, 33
Scotomas, 82
Search
  chunking, 457
  memory, 314, 359
  serial, 403, 407
Segment, boundary contour, 99
Segmentation
Subject Index
  emergent, 143-144
  scene, 144
  textural, 163
Selective adaptation, 403
Self
  -inhibition, 338
  -match, 9
  -organization, 311, 313, 335, 403, 425, 458-459, 462-463
  -similarity, 378, 380, 457, 469-470, 490, 492
  -tuning, 345
Sensitivity, shift, 3
Separation, figure-ground, 56
Sequence masking principle, 457
Serial
  associate verbal learning, 403
  order, 311, 313, 329, 335, 343
  position, 382
  programs, 457
  search, 403, 407
Set, attentional, 365
Shading, scenic, 144
Sharpening, memory tuning, 354
Shift
  property, 28, 293
  sensitivity, 3
Short
  -range competition, 86, 144, 217
  -term memory: see STM
Shunting, 24, 264, 345, 457, 481
Sigmoid signal function, 36, 192
Similarity, self-, 378, 380, 457, 469-470, 490, 492
Simple cells, 187
Sites, synaptic, 457, 490
Situational frequency, 403
Size, perception, 6
Slope, primacy, 165
Sounds, novel, 403
Space
  auditory-articulatory, 365
  visual, 1, 3-4
Span, memory, 374
Spatial
  frequency, 3, 30, 58, 212, 376, 457, 467
  gradient, 353, 457
  long-range cooperation, 144
  pattern, 212, 324
  recoding of temporal order, 465
  scale, 3, 12, 23, 108, 114
  short-range competition, 144
Speech, 311, 313, 456-458, 466
  motor control, 313
  stream, 457
Speed
  -accuracy trade-off, 403
  performance, 332
Spreading
  activation, 314
  FIRE, 48
  neon color: see neon color spreading
Stability, learning, 313, 403
Stabilization
  code, 356, 369
  edge, 149
Stabilized image, 212, 224
Stimulus generalization, 353
STM (short-term memory), 3, 41, 314, 320, 338-339, 345-346, 356, 369, 403, 456-457
Stochastic relaxation, 180
Striate cortex, 82, 144, 212
Subjective contour, 3, 105; see also illusion, contour
Subliminal control, 425
Sublist
  groupings, 457
  letters, 462
Superiority
  learned, 382
  word, 376, 403
Suppression, noise, 36
Supraliminal control, 425
Surround: see on-center off-surround
Synaptic sites, 457, 490
Synergy, motor, 328, 463
Tachistoscope, 432
Target
  masking field, 379
  non-word, 432
  word, 432
Template
  feedback, 450
  matching, 403
Temporal
  activity patterns, 457
  chunking, 376, 461
  order, 314, 328, 338, 369, 377, 403, 457, 465
  pattern, 376
Terminal motor map, 368
Texture
  grouping, 105, 143-144
  segmentation, 163
Threshold
  contrast, 3
  quenching, 346
Trace, ordered STM, 338
Trade-off
  boundary-feature, 109, 155
  intensity-time, 374
  list-item error, 403, 450
  speed-accuracy, 403
Training, attentional set, 365
Transducers, miniaturized, 281
Transient memory span, 374
Transmitter
  depletion, 275
  gating, 271, 273, 275
  mobilization, 296
  photoreceptor, 273
  production, 281
Tuning
  adaptation level, 3
  categories, 412
  network, 36, 41, 345
  orientation, 144
  prewired perceptual categories, 354
Turtle, cone, 273
Unblocking, 273
Uncertainty, 44, 109, 144
Undershoot, 368
Unexpected events: see expectancies
Uniform regions, 21
Unitization, 369, 405
Units
  language, 457
  self-organizing, 458, 462, 468
Universality of boundary contour system, 198
Velocity command, 332
Verbal learning, 403
Verification, 403
  by serial search, 407
  resonant equilibration, 419
Vertebrate photoreceptors, 271, 273
  cone, 273
Visibility, BCS, 144
Vocabularies, large, 356
Wall, café, illusion, 189
Waves, rehearsal, 344
Weber law, 3, 24, 28
Word
  frequency, 403
  frequency effect, 438
  length and superiority, 376
  length effect, 462
  recognition and recall, 401, 403
  superiority, 403
  target comparisons, 432-433, 436
Zero crossings, 3