Pipelined Processor Farms
WILEY SERIES ON PARALLEL AND DISTRIBUTED COMPUTING
Series Editor: Albert Y. Zomaya. Parallel and Distributed Simulation Systems / Richard Fujimoto. Surviving the Design of Microprocessor and Multimicroprocessor Systems: Lessons Learned / Veljko Milutinovic. Mobile Processing in Distributed and Open Environments / Peter Sapaty. Introduction to Parallel Algorithms / C. Xavier and S. S. Iyengar. Solutions to Parallel and Distributed Computing Problems: Lessons from Biological Sciences / Albert Y. Zomaya, Fikret Ercal, and Stephan Olariu (Editors). New Parallel Algorithms for Direct Solution of Linear Equations / C. Siva Ram Murthy, K. N. Balasubramanya Murthy, and Srinivas Aluru. Practical PRAM Programming / Joerg Keller, Christoph Kessler, and Jesper Larsson Traeff. Computational Collective Intelligence / Tadeusz M. Szuba. Parallel and Distributed Computing: A Survey of Models, Paradigms, and Approaches / Claudia Leopold. Fundamentals of Distributed Object Systems: A CORBA Perspective / Zahir Tari and Omran Bukhres. Pipelined Processor Farms: Structured Design for Embedded Parallel Systems / Martin Fleury and Andrew Downton
Pipelined Processor Farms Structured Design for Embedded Parallel Systems
Martin Fleury Andrew Downton
A Wiley-Interscience Publication JOHN WILEY & SONS, INC. New York / Chichester / Weinheim / Brisbane / Singapore / Toronto
This text is printed on acid-free paper. Copyright © 2001 by John Wiley & Sons, Inc. All rights reserved. Published simultaneously in Canada. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4744. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 605 Third Avenue, New York, NY 10158-0012, (212) 850-6011, fax (212) 850-6008, E-Mail: PERMREQ@WILEY.COM. For ordering and customer service, call 1-800-CALL-WILEY. Library of Congress Cataloging in Publication Data is available. ISBN 0-471-38860-2
Printed in the United States of America 10 9 8 7 6 5 4 3 2 1
Foreword
Parallel systems are typically difficult to construct, to analyse, and to optimize. One way forward is to focus on stylized forms. This is the approach taken here, for Pipelined Processor Farms (PPF). The target domain is that of embedded systems with continuous flow of data, often with real-time constraints. This volume brings together the results of ten years' study and development of the PPF approach and is the first comprehensive treatment beyond the original research papers. The overall methodology is illustrated throughout by a range of examples drawn from real applications. These show both the scope for practical application and the range of choices for parallelism, both in the pipelining and in the processor farms at each pipeline stage. Freedom to choose the numbers of processors for each stage is then a key factor for balancing the system and for optimizing performance characteristics such as system throughput and latency. Designs may also be optimized in other ways, e.g. for cost, or tuned for alternative choices of processor, including future ones, providing a high degree of future-proofing for PPF designs.
An important aspect is the ability to do "what if" analysis, assisted in part by a prototype toolkit, and founded on validation of predicted performance against real performance. As the exposition proceeds, the reader will get an emerging understanding of designs being crafted quantitatively for desired performance characteristics. This in turn feeds into larger scale issues and trade-offs between requirements, functionality, benefits, performance, and cost. The essence for me is captured by the phrase "engineering in the performance dimension".

CHRIS WADSWORTH
TECHNICAL CO-ORDINATOR, EPSRC PROGRAMME ON PORTABLE SOFTWARE TOOLS FOR PARALLEL ARCHITECTURES
Wantage, Nov. 2000
Preface
In the 1980s, the advent of the transputer led to widespread investigation of the potential of parallel computing in embedded applications. Application areas included signal processing, control, robotics, real-time systems, image processing, pattern analysis and computer vision. It quickly became apparent that although the transputer provided an effective parallel hardware component, and its associated language Occam provided useful low-level software tools, there was also a need for higher-level tools together with a systematic design methodology that addressed the additional design parameters introduced by parallelism. Our work at that time was concerned with implementing real-time document processing systems which included significant computer vision problems requiring multiple processors to meet throughput and latency constraints. Reviews of similar work highlighted the fact that processor farms were often favored as an effective practical parallel implementation architecture, and that many applications embodied an inherent pipeline processing structure. After analyzing a number of our own systems and those reported by others we concluded that a combination of the pipeline structure with a generalized processor farm implementation at each pipeline stage offered a flexible general-purpose architecture for soft real-time systems. We embarked upon a major project, PSTESPA (Portable Software Tools for Embedded Signal Processing Applications), to investigate the scope of the Pipeline Processor Farm (PPF) design model, both in terms of its application potential and the supporting software tools it required. Because the project focused mostly upon high-level
design issues, its outcome largely remains valid despite seismic changes within the parallel computing industry. By the end of our PSTESPA project, notwithstanding its successful outcome, the goalposts of parallel systems had moved, and it was becoming apparent that many of the ambitious and idealistic goals of general-purpose parallel computing had been tempered by the pragmatic reality of market forces. Companies such as Inmos, Meiko, Parsys and Parsytec (producing transputer-based machines), and ICL, AMT, MasPar and Thinking Machines (producing SIMD machines), found that the market for parallel applications was too fragmented to support high-volume sales of large-scale parallel machines based upon specialized processing elements, and that application development was slow and difficult with limited supporting software tools. Shared-memory machines produced by major uniprocessor manufacturers such as IBM, DEC, Intel and Silicon Graphics, and distributed Networks of Workstations (NOWs), had however established a foothold in the market, because they are based around high-volume commercial off-the-shelf (COTS) processors, and achieved penetration in markets such as database and file-serving where parallelism could be supported within the operating system.

In our own application field of embedded systems, NOWs and shared-memory machines have a significant part to play in supporting the parallel logic development process, but implementation is now increasingly geared towards hardware-software co-design. Co-design tools may currently be based around heterogeneous computing elements ranging from conventional RISC and DSP processors at one end of the spectrum, through embedded processor cores such as ARM, to FPGAs and ASICs at the other. Historically, such tools have been developed bottom-up, and therefore currently betray a strong hardware design ethos, and a correspondingly weak high-level software design model. Our current research (also funded by EPSRC) is investigating how to extend the PPF design methodology to address this rapidly developing embedded applications market using a software component-based approach, which we believe can provide a valuable method of unifying current disparate low-level hardware-software co-design models. Such solutions will surely become essential as complex multimedia embedded applications become widespread in consumer, commercial and industrial markets over the next decade.

ANDY DOWNTON
Colchester, Oct. 2000
Acknowledgments
Although this book has only two named authors, many others have contributed to its content, both by carrying out experimental work and by collaborating in writing the journal and conference papers from which the book is derived. Early work on real-time handwritten address recognition, which highlighted the problem to be addressed, was funded by the British Post Office, supported by Roger Powell, Robin Birch and Duncan Chapman. Algorithmic developments were carried out by Ehsan Kabir and Hendrawan, and initial investigations of parallel implementations were made by Robert Tregidgo and Aysegul Cuhadar, all of whom received doctorates for their work. In an effort to generalise the ideas thrown up by Robert's work in particular, further industrial contract work in a different field, image coding, was carried out, funded by BT Laboratories through the support of Mike Whybray. Many people at BT contributed to this work through the provision of H.261 image coding software, and (later) other application codes for speech recognition and microphone beam forming. Other software applications, including those for model-based coding, H.263, and Eigenfaces were also investigated in collaboration with BT. In addition to Mike Whybray, many others at BT laboratories provided valuable support for work there, including Pat Mulroy, Mike Nilsson, Bill Welsh, Mark Shackleton, John Talintyre, Simon Ringland and Alwyn Lewis. BT also donated equipment, including a Meiko CS2 and Texas TMS320C40 DSP systems to support our activities.
As a result of these early studies, funding was obtained from the EPSRC (the UK Engineering and Physical Sciences Research Council) to investigate the emergent PPF design methodology under a directed program on Portable Software Tools for Parallel Architectures (PSTPA). This project — PSTESPA (Parallel Software Tools for Embedded Signal Processing Applications) — enabled us not only to generalise the earlier work, but also to start investigating and prototyping software tools to support the PPF design process. Chris Wadsworth from Rutherford Appleton Laboratories was the technical coordinator of this program, and has our heartfelt thanks for the support and guidance he provided over a period of nearly four years. Adrian Clark, with extensive previous experience of parallel image processing libraries, acted as a consultant on the PSTESPA project, and Martin Fleury was appointed as our first research fellow, distinguishing himself so much that before the end of the project he had been appointed to the Department's academic staff. Several other research fellows also worked alongside Martin during the project: Herkole Sava, Nilufer Sarvan, Richard Durrant and Graeme Sweeney, and all contributed considerably to its successful outcome, as is evidenced by their co-authorship of many of the publications which were generated.

Publication of this book is possible not only because of the contributions of the many collaborators listed above, but also through the kind permission of the publishers of our journal papers, who have permitted us to revise our original publications to present a complete and coherent picture of our work here. We particularly wish to acknowledge the following sources of tables, figures and text extracts which are reproduced from previous publications:

The Institution of Electrical Engineers (IEE), for permission to reprint:

• portions of A. C. Downton, R. W. S. Tregidgo, and A. Cuhadar, Top-down structured parallelization of embedded image processing applications, IEE Proceedings Part I (Vision, Image, and Signal Processing), 141(6):438-445, 1994, as text in Chapter 1, as Figures 1.1 and A.1-A.4, and as Table A.1;

• portions of M. Fleury, A. C. Downton, and A. F. Clark, Scheduling schemes for data farming, IEE Proceedings Part E (Computers and Digital Techniques), in press at the time of writing, as text in Chapter 6, as Figures 6.1-6.9, and as Tables 6.1 and 6.2;

• portions of A. C. Downton, Generalised approach to parallelising image sequence coding algorithms, IEE Proceedings I (Vision, Image, and Signal Processing), 141(6):438-445, 1994, as text in Section 8.1, as Figures 8.6-8.12, and as Tables 8.1 and 8.2;

• portions of H. P. Sava, M. Fleury, A. C. Downton, and A. F. Clark, Parallel pipeline implementation of wavelet transforms, IEE Proceedings Part I (Vision, Image, and Signal Processing), 144(6):355-359, 1997, as text in Section 9.2, and as Figures 9.6-9.10;
• portions of M. Fleury, A. C. Downton, and A. F. Clark, Scheduling schemes for data farming, IEE Proceedings Part E (Computers and Digital Techniques), 146(5):227-234, 1994, as text in Section 11.9, as Figures 11.11-11.17, and as Table 11.6;

• portions of M. Fleury, H. Sava, A. C. Downton, and A. F. Clark, Design of a clock synchronization sub-system for parallel embedded systems, IEE Proceedings Part E (Computers and Digital Techniques), 144(2):65-73, 1997, as text in Chapter 12, as Figures 12.1-12.4, and as Tables 12.1 and 12.2.

Elsevier Science, for inclusion of the following:

• portions reprinted from Microprocessors and Microsystems, 21, A. Cuhadar, A. C. Downton, and M. Fleury, A structured parallel design for embedded vision systems: A case study, 131-141, Copyright 1997, with permission from Elsevier Science, as text in Chapter 3, as Figures 3.1-3.10, and as Tables 3.1 and 3.2;

• portions reprinted from Image and Vision Computing, M. Fleury, A. F. Clark, and A. C. Downton, Prototyping optical-flow algorithms on a parallel machine, in press at the time of writing, Copyright 2000, with permission from Elsevier Science, as text in Section 8.4, as Figures 8.19-8.28, and as Tables 8.8-8.12;

• portions of Signal Processing: Image Communications, 7, A. C. Downton, Speed-up trend analysis for H.261 and model-based image coding algorithms using a parallel-pipeline model, 489-502, Copyright 1995, with permission from Elsevier Science, as text in Section 10.2, Figures 10.5-10.7, and Table 10.2.

Springer Verlag, for permission to reprint:

• portions of H. P. Sava, M. Fleury, A. C. Downton, and A. F. Clark, A case study in pipeline processor farming: Parallelising the H.263 encoder, in UK Parallel'96, 196-205, 1996, as text in Section 8.2, as Figures 8.13-8.15, and as Tables 8.3-8.5;

• portions of M. Fleury, A. C. Downton, and A. F. Clark, Pipelined parallelization of face recognition, Machine Vision Applications, in press at the time of writing, as text in Section 8.3, Figures 5.1 and 5.2, Figures 8.16-8.18, and Tables 8.6 and 8.7;

• portions of M. Fleury, A. C. Downton, and A. F. Clark, Karhunen-Loeve transform: An exercise in simple image-processing parallel pipelines, in Euro-Par'97, 815-819, 1997, as text in Section 9.1, Figures 9.4-9.5;

• portions of M. Fleury, A. C. Downton, and A. F. Clark, Parallel structure in an integrated speech-recognition network, in Euro-Par'99, 995-1004, 1999, as text in Section 10.1, Figures 10.1-10.4, and Table 10.1.
Academic Press, for permission to reprint:

• portions of A. Cuhadar, D. G. Sampson, and A. C. Downton, A scalable parallel approach to vector quantization, Real-Time Imaging, 2:241-247, 1995, as text in Section 9.3, Figures 9.11-9.19, and Table 9.2.

The Institute of Electrical and Electronics Engineers (IEEE), for permission to reprint:

• portions of M. Fleury, A. C. Downton, and A. F. Clark, Performance metrics for embedded parallel pipelines, IEEE Transactions on Parallel and Distributed Systems, in press at the time of writing, as text in Chapter 11, Figures 2.2-2.4, Figures 11.1-11.10, and as Tables 11.1-11.5.

John Wiley & Sons Limited, for inclusion of:

• portions of Constructing generic data-farm templates, M. Fleury, A. C. Downton, and A. F. Clark, Concurrency: Practice and Experience, 11(9):1-20, 1999, © John Wiley & Sons Limited, reproduced with permission, as text in Chapter 7 and Figures 7.1-7.7.

The typescript of this book was typeset by the authors using LaTeX, MiKTeX and WinEdt.

A. C. D. and M. F.
Contents
Foreword
Preface
Acknowledgments
Acronyms

Part I  Introduction and Basic Concepts

1 Introduction
  1.1 Overview
  1.2 Origins
  1.3 Amdahl's Law and Structured Parallel Design
  1.4 Introduction to PPF Systems
  1.5 Conclusions
  Appendix
    A.1 Simple Design Example: The H.261 Decoder

2 Basic Concepts
  2.1 Pipelined Processing
  2.2 Pipeline Types
    2.2.1 Asynchronous PPF
    2.2.2 Synchronous PPF
  2.3 Data Farming and Demand-based Scheduling
  2.4 Data-farm Performance Criteria
  2.5 Conclusion
  Appendix
    A.1 Short case studies

3 PPF in Practice
  3.1 Application Overview
    3.1.1 Implementation issues
  3.2 Parallelization of the Postcode Recognizer
    3.2.1 Partitioning the postcode recognizer
    3.2.2 Scaling the postcode recognizer
    3.2.3 Performance achieved
  3.3 Parallelization of the Address Verifier
    3.3.1 Partitioning the address verifier
    3.3.2 Scaling the address verifier
    3.3.3 Address verification farms
    3.3.4 Overall performance achieved
  3.4 Meeting the Specification
  3.5 Conclusion
  Appendix
    A.1 Other Parallel Postcode Recognition Systems

4 Development of PPF Applications
  4.1 Analysis Tools
  4.2 Tool Characteristics
  4.3 Development Cycle
  4.4 Conclusion

Part II  Analysis and Partitioning of Sequential Applications

5 Initial Development of an Application
  5.1 Confidence Building
  5.2 Automatic and Semi-automatic Parallelization
  5.3 Language Proliferation
  5.4 Size of Applications
  5.5 Semi-automatic Partitioning
  5.6 Porting Code
  5.7 Checking a Decomposition
  5.8 Optimizing Compilers
  5.9 Conclusion

6 Graphical Simulation and Performance Analysis of PPFs
  6.1 Simulating Asynchronous Pipelines
  6.2 Simulation Implementation
  6.3 Graphical Representation
  6.4 Display Features
  6.5 Cross-architectural Comparison
  6.6 Conclusion

7 Template-based Implementation
  7.1 Template Design Principles
  7.2 Implementation Choices
  7.3 Parallel Logic Implementation
  7.4 Target Machine Implementation
    7.4.1 Common implementation issues
  7.5 NOW Implementation for Logic Debugging
  7.6 Target Machine Implementations for Performance Tuning
  7.7 Patterns and Templates
  7.8 Conclusion

Part III  Case Studies

8 Application Examples
  8.1 Case Study 1: H.261 Encoder
    8.1.1 Purpose of parallelization
    8.1.2 'Per macroblock' quantization without motion estimation
    8.1.3 'Per picture' quantization without motion estimation
    8.1.4 'Per picture' quantization with motion estimation
    8.1.5 Implementation of the parallel encoders
    8.1.6 H.261 encoders without motion estimation
    8.1.7 H.261 encoder with motion estimation
    8.1.8 Edge data exchange
  8.2 Case Study 2: H.263 Encoder/Decoder
    8.2.1 Static analysis of H.263 algorithm
    8.2.2 Results from parallelizing H.263
  8.3 Case Study 3: 'Eigenfaces' — Face Detection
    8.3.1 Background
    8.3.2 Eigenfaces algorithm
    8.3.3 Parallelization steps
    8.3.4 Introduction of second and third farms
  8.4 Case Study 4: Optical Flow
    8.4.1 Optical flow
    8.4.2 Existing sequential implementation
    8.4.3 Gradient-based routine
    8.4.4 Multi-resolution routine
    8.4.5 Phase-based routine
    8.4.6 LK results
    8.4.7 Other methods
    8.4.8 Evaluation
  8.5 Conclusion

9 Design Studies
  9.1 Case Study 1: Karhunen-Loeve Transform (KLT)
    9.1.1 Applications of the KLT
    9.1.2 Features of the KLT
    9.1.3 Parallelization of the KLT
    9.1.4 PPF parallelization
    9.1.5 Implementation
  9.2 Case Study 2: 2D Wavelet Transform
    9.2.1 Wavelet Transform
    9.2.2 Computational algorithms
    9.2.3 Parallel implementation of Discrete Wavelet Transform (DWT)
    9.2.4 Parallel implementation of oversampled WT
  9.3 Case Study 3: Vector Quantization
    9.3.1 Parallelization of VQ
    9.3.2 PPF schemes for VQ
    9.3.3 VQ implementation
  9.4 Conclusion

10 Counter Examples
  10.1 Case Study 1: Large Vocabulary Continuous-Speech Recognition
    10.1.1 Background
    10.1.2 Static analysis of the LVCR system
    10.1.3 Parallel design
    10.1.4 Implementation on an SMP
  10.2 Case Study 2: Model-based Coding
    10.2.1 Parallelization of the model-based coder
    10.2.2 Analysis of results
  10.3 Case Study 3: Microphone Beam Array
    10.3.1 Griffiths-Jim beam-former
    10.3.2 Sequential implementation
    10.3.3 Parallel implementation of the G-J Algorithm
  10.4 Conclusion

Part IV  Underlying Theory and Analysis

11 Performance of PPFs
  11.1 Naming Conventions
  11.2 Performance Metrics
    11.2.1 Order statistics
    11.2.2 Asymptotic distribution
    11.2.3 Characteristic maximum
    11.2.4 Sample estimate
  11.3 Gathering Performance Data
  11.4 Performance Prediction Equations
  11.5 Results
    11.5.1 Prediction results
  11.6 Simulation Results
  11.7 Asynchronous Pipeline Estimate
  11.8 Ordering Constraints
  11.9 Task Scheduling
    11.9.1 Uniform task size
    11.9.2 Decreasing task size
    11.9.3 Heuristic scheduling schemes
    11.9.4 Validity of Factoring
  11.10 Scheduling Results
    11.10.1 Timings
    11.10.2 Simulation results
  11.11 Conclusion
  Appendix
    A.1 Outline derivation of Kruskal-Weiss prediction equation
    A.2 Factoring regime derivation

12 Instrumentation of Templates
  12.1 Global Time
  12.2 Processor Model
  12.3 Local Clock Requirements
  12.4 Steady-state Behavior
  12.5 Establishing a Refresh Interval
  12.6 Local Clock Adjustment
  12.7 Implementation on the Paramid
  12.8 Conclusion

Part V  Future Trends

13 Future Trends
  13.1 Designing for Differing Embedded Hardware
  13.2 Adapting to Mobile Networked Computation
  13.3 Conclusion

References

Index
Acronyms
AGP     Advanced Graphics Protocol
API     Application Programming Interface
APTT    Analysis, Prediction, Template Toolkit
AR      Autoregressive
ASIC    Application Specific Integrated Circuits
ATR     Automatic Target Recognition
AWT     Abstract Window Toolkit
BSD     Berkeley Standard Distribution
BSP     Bulk Synchronous Parallel
CCITT   International Consultative Committee for Telephone and Telegraph
cdf     Cumulative Distribution Function
CDT     Categorical Data Type
CIF     Common Intermediate Format
COTS    Commercial Off-The-Shelf
CPU     Central Processing Unit
CSP     Communicating Sequential Processes
CSS     Central Synchronization Server
CWT     Continuous Wavelet Transform
DAG     Directed Acyclic Graph
DCOM    Distributed Component Object Model
DCT     Discrete Cosine Transform
DSP     Digital Signal Processor
DVD     Digital Versatile Disc
DWT     Discrete Wavelet Transform
FDDI    Fibre Distributed Data Interface
FFT     Fast Fourier Transform
FIFO    First-In-First-Out
FIR     Finite Impulse Response
FPGA    Field Programmable Gate Arrays
G-J     Griffiths-Jim
HMM     Hidden Markov Model
HP      Hewlett-Packard
HPF     High Performance Fortran
I/O     Input/Output
IBM     International Business Machines
IFFT    Inverse Fast Fourier Transform
IFR     Increasing Failure Rate
ISO     International Standards Organization
ITU     International Telecommunications Union
JIT     Just-in-Time
JPEG    Joint Photographic Experts Group
KLT     Karhunen-Loeve Transform
LAN     Local Area Network
LVCR    Large Vocabulary Continuous-Speech Recognition
LWP     Light-Weight Process
MAC     Multiply Accumulate Operation
ME      Motion Estimation
MIMD    Multiple Instruction Multiple Data Streams
MIT     Massachusetts Institute of Technology
MMX     Multimedia Extension
MPEG    Motion Picture Experts Group
MPI     Message-Passing Interface
NOW     Network of Workstations
NP      Non-polynomial
NT      New Technology
NUMA    Non-Uniform Memory Access
OCR     Optical Character Recognition
OF      Optical Flow
OOC     Object-oriented Coding
PC      Personal Computer
PCA     Principal Components Algorithm
pdf     Probability Distribution Function
PE      Processing Element
PK      Pollaczek-Khintchine
POSIX   Portable Operating System-IX
PPF     Pipelined Processor Farms
PSNR    Peak Signal-to-Noise Ratio
PSTN    Public System Telephone Network
PVM     Parallel Virtual Machine
RISC    Reduced Instruction Set Computer
RMI     Remote Method Invocation
RPC     Remote Procedure Call
RTE     Run-time Executive
RTOS    Real-time Operating System
SAR     Synthetic Aperture Radar
SCSI    Small Computer System Interface
SIMD    Single Instruction Multiple Data Streams
SMP     Symmetric Multiprocessor
SNN     Semantic Neural Network
SPG     Series Parallel Graph
SSD     Sum-of-Squared-Differences
SSS     Safe Self-Scheduling
STFT    Short-Time Fourier Transform
TM      Trademark
UTC     Universal Time Coordinated
VCS     Virtual Channel System
VLC     Variable Length Encoder
VLIW    Very-Large Instruction Word
VLSI    Very-Large Scale Integration
VQ      Vector Quantization
w.r.t.  with respect to
WS      Wavelet Series
WWW     World Wide Web
Part I
Introduction and Basic Concepts
1 Introduction

1.1 OVERVIEW
Much of the success of computers can be attributed to their generality, which allows different problems to be compiled and executed in different languages on the same or different processors. Parallel processing currently does not possess the generality of sequential processing¹ because new degrees of freedom, such as the programming paradigm, topology (the connection pattern between processors [170, 199]), and number of processors, have been introduced into the design process. It appears that the potential offered by these additional design choices has led to an insistence by designers on obtaining maximum performance, with a consequent loss of generality. This is not surprising, because parallel solutions are typically investigated for the very reason that conventional sequential systems do not provide sufficient performance, but it ignores the benefits of generality which are accepted by sequential programmers. The sequential programming paradigm, or rather the abstract model of a computer on which it rests, was introduced by von Neumann [45] and has persisted ever since despite the evident internal parallelism in most microprocessor designs (pipelined, vector, and superscalar [115]) and the obvious bottleneck if there is just one memory-access path from the central processing unit (CPU) for data and instructions alike. The model suits the way many programmers envisage the execution of their programs (a single step at a time), perhaps because errors are easier to find than when there is an interleaving of program order as in parallel or concurrent programming paradigms.²

The Pipelined Processor Farms (PPF) design model, the subject of this book, can be applied in its simplest form to any Multiple Instruction Multiple Data streams (MIMD) [114] multiprocessor system.³ Single Instruction Multiple Data streams (SIMD) computer architecture, though current at the very-large scale integration (VLSI) chip-level, and to a lesser extent in multimedia-extension (MMX) microprocessor instructions for graphics support at the processor level [212], is largely defunct at the processor level, with a few honorable exceptions such as Cambridge Memory System's DAP and the MasPar series of machines [13].⁴ Of the two categories of MIMD machines, the primary concentration is upon distributed-memory machines, where the address space is partitioned logically and physically between processors. However, it is equally possible to logically partition shared-memory machines, where there is a global address space. The boundaries between distributed and shared-memory machines have dissolved in recent times [70], a point to be returned to in Chapter 13.

¹ Strictly, the term serial processing is more appropriate, as processing takes place on a serial machine or processor. The term sequential processing implies that the algorithms being processed are inherently sequential, whereas in fact they may contain parallel components. However, this book retains common usage and takes sequential processing to be synonymous with serial processing.
² Shared-memory machines can also relax read-write access across the processor set ranging from strong to weak consistency, presenting a continuum of programming paradigms [259].
³ Categorization of processors by the multiplicity of parallel data and instruction streams supported is a well-known extension of von Neumann's model [65].
⁴ Systolic arrays are also used for fine-grained signal processing [200], though largely again at the VLSI level. In systolic designs, data are pumped synchronously across an array of processing elements (PEs). At each step a different stage in processing takes place. Wavefront processors are an asynchronous version of the systolic architecture. Other forms of instruction level parallelism are very-large instruction word (VLIW) DSPs (digital signal processors) and its variant explicitly parallel instruction computing (EPIC) [319]. The idea of transferring SIMD arrays such as the DAP to VLSI has also been mooted. The DIP chip [66] is an experimental and novel SIMD VLSI array.
1.2 ORIGINS
The origins of the PPF design method arose in the late 1980s as a result of research carried out at the University of Essex to design and implement a real-time postcode/address recognition system for the British Post Office (see Chapter 3 for a description of the outcome of this process). Initial investigation of the image analysis and pattern recognition problems demonstrated that significant research and development was needed before any kind of working demonstrator could be produced, and that, of necessity, the first demonstrator would need to be a non-real-time software simulation running on a workstation. This provided the flexibility to enable easy experimental evaluation and algorithm updates using offline databases of address images, and also a starting point for consideration of real-time implementation issues. In short, solving the problem at all was very difficult; generating a real-time solution (requiring a throughput of 10 envelope images/second, with a latency of no more than 8 seconds for processing each image) introduced an additional dimension of processing speed which was beyond the bounds of available workstations.

A literature survey of the field of parallel processing at that time showed that numerous papers had been published on parallelization of individual image processing, image coding and image analysis algorithms (see, e.g., [362]), many inspired by the success of the transputer [136]. Most of these papers were of limited generality however, since they reported bespoke parallelization of specific well-known algorithms such as 2-D filters, FFTs, DCTs, edge detectors, component labeling, Hough transforms, wavelets, segmentation algorithms, etc. Significantly, examination of many of these customized parallel algorithms revealed, in essence, the same solution; that of the single, demand-based, data farm. Practical image analysis and pattern recognition applications, however, typically contain a number of algorithms implemented together as a complete system. Like the postal address reading application, the CCITT H.261 encoder/decoder algorithm [49] is also a good illustration of this characteristic, since it includes algorithms for discrete cosine transformation (DCT), motion estimation and compensation, various filters, quantizers, variable length coding, and inverse versions of several of these algorithms. Very few papers addressed the issue of parallelizing complete systems, in which individual algorithm parallelization could be exploited as components. Therefore, a clue to an appropriate generic parallel architecture for embedded applications was to view the demand-based processor farm as a component within a higher-level system framework.

From our point of view, parallel processing was also simply a means to an end, rather than an end in itself. Our interest was in developing a general system design method for MIMD parallel processors, which could be applied after or during the initial iterative algorithm development phase. Too great a focus on performance at the expense of generality would inevitably have resulted in both implementations and design skills that rapidly became obsolete. We therefore aimed to support the early, architecture-independent stages of the design process, where parallelization of complete image processing applications is considered, by a process analogous to stepwise refinement in sequential program design [312, 335]. Among the advantages of the PPF design methodology which resulted are the following:

• Upper bound (idealized) throughput scaling of the application is easily defined, and aspects of the application which limit scaling are identified.

• Input/output latency is also defined and can be controlled.
• Performance is incrementally scalable up to the upper bound (i.e. there are no quantization restrictions on the number of processors which can be used), so that real-time performance requirements can be met exactly.

• The granularity of parallelism is maximized, thus minimizing the design effort required to move from the sequential to the parallel implementation.

• Design effort is focused on each performance bottleneck of each pipeline stage in turn, by identifying the throughput, latency, and scalability.
1.3 AMDAHL'S LAW AND STRUCTURED PARALLEL DESIGN
Amdahl's law [15, 16] is the Ohm's law of parallel computing. It predicts an upper bound to the performance of systems which contain both parallelizable and inherently sequential components. Amdahl's law states that the scaling performance of a parallel algorithm is limited by the number of inherently sequential operations in that algorithm. Consider a problem where a fraction f of the work must be performed sequentially. The speed-up, S, possible from a machine with N processors is:

$$ S = \frac{1}{f + \dfrac{1-f}{N}} $$

If f = 0.2 for example (i.e. 20% of the algorithm is inherently sequential), then the maximum speedup, however many processors are added, is 5. As will be shown in later chapters, applying Amdahl's law to multi-algorithm embedded systems demonstrates that the scaling which can be achieved is largely defined, not by the number of processors used, but by any residual sequential elements within the complete application algorithm. Thus effective system parallelization requires a method of minimizing the impact of residual sequential code, as well as of parallelizing the bulk of the application algorithm. In the PPF design methodology, pipelining is used to overlap residual sequential code execution with other forms of parallelism.
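A minimal C sketch (not part of the original text) that simply evaluates this equation confirms the figure quoted above: with f = 0.2 the speed-up approaches, but never exceeds, 5 as processors are added.

```c
#include <stdio.h>

/* Amdahl's law: speed-up S on N processors when a fraction f of the work
 * is inherently sequential. */
static double amdahl(double f, double n)
{
    return 1.0 / (f + (1.0 - f) / n);
}

int main(void)
{
    const double f = 0.2;        /* 20% sequential, as in the example above */
    int n;

    for (n = 1; n <= 1024; n *= 2)
        printf("N = %4d  S = %.3f\n", n, amdahl(f, (double)n));
    /* S tends to 1/f = 5 however many processors are added. */
    return 0;
}
```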
1.4 INTRODUCTION TO PPF SYSTEMS
A PPF is a software pipeline intended for recent, accessible, parallel machines. Examples of such lowly parallel machines [278], which now abound, are networks of workstations (NOW), processor farms, symmetric multiprocessors (SMP) and small-scale message-passing machines. A feature of such machines is that scalability is localized [93] and consequently the communication diameter is also restricted. The commercial off-the-shelf (COTS) processors used within such machines will outstrip the available interconnect bandwidth
if combined in large configurations since such processors were not designed with modularity in mind. To avoid this problem in PPF, a pipeline is partitioned into a number of stages, each one of which may be parallel. PPF is primarily aimed at continuous-flow systems in the field of signal processing, image-processing, and multimedia in general. A continuous-flow system is one in which data never cease to arrive, for example a radar processor which must always monitor air traffic. These systems frequently need to meet a variety of throughput, latency, and output-ordering specifications. It becomes necessary to be able to predict performance, and to provide a structure which permits performance scaling, by incremental addition of processors and/or transfer to higher performance hardware once the initial design is complete. The hard facts of achievable performance in a parallel system are further discussed in Section 2.4.

There are two basic or elementary types of pipeline components: asynchronous and synchronous, though many pipelined systems will contain some segments of each type. PPF caters for any type of pipeline, whether synchronous, asynchronous or mixed; their performance characteristics are discussed in detail in Section 2.2. Pipeline systems are a natural choice for some synchronous applications. For example, a systolic pipeline-partitioning methodology exists for signal-processing algorithms with a regular pattern [237]. Alternatively, [8] notice that there is an asynchronous pipeline structure to the mind's method of processing visual input which also maps onto computer hardware. If all information flow is in the forward direction [8] then the partitions of the pipeline mirror the peripheral, attentive, and cognitive stages of human vision [232]. The CMU Warp [18], the Cytocomputer [341], PETAL and VAP [56] are early examples of machines used in pipelined fashion for image processing.⁵ Input to the pipeline either takes the form of a succession of images grouped into a batch (medical slides, satellite images, video frames and the like) or raster-scan, in which a stream of pixels is input in the same order as a video camera scans a scene, that is, in horizontal, zigzag fashion. PPF generalizes the pipeline away from bespoke hardware and away to some extent from regular problems. Examples of applicable irregular, continuous-flow systems can be found in vision [50] (see Chapter 3), radar [97], speech-recognition processing [133], and data compression [52]. Chapters 8 and 9 give further detailed case studies where PPF has been consciously applied.

⁵ The common idea across these machines is to avoid the expense of a 2D systolic array by using a linear systolic array.

PPF is very much a systems approach to design, that is, it considers the entire system before the individual components. Another way of saying this is that PPF is a top-down as opposed to a bottom-up design methodology. For some years it has been noted [214] that many reported algorithm examples merely form a sub-system of a vision-processing system while it is a complete
system that forms a pipeline. Various systems approaches to pipeline implementation are then possible. With a problem-driven approach it may be difficult to assess the advantages and disadvantages of alternative architectures for any one stage of a problem. However, equally an architecture-driven design ties a system down to a restricted range of computer hardware. In PPF, the intention is to design a software structure that, when suitably parameterized, can map onto a variety of machines. Looking aside to a different field, Oracle has ported its relational database system to a number of apparently dissimilar parallel computers [337] including the Sequent Symmetry shared-memory machine and the nCube2 MIMD message-passing computer. Analogously to the database abstract machine, the software pipeline is a flexible structure for the PPF problem domain.

Having settled on a software pipeline, there are various forms of exploitable parallelism to be considered. The most obvious form of parallelism is temporal multiplexing, whereby several complete tasks are processed simultaneously, without decomposing individual tasks. However, simply increasing the degree of temporal multiplexing, though it can improve the mean throughput, does not change the latency experienced by an individual task. To reduce pipeline traversal latency, each task must be decomposed to allow the component parts to experience their latency in parallel. Geometric parallelism (decomposing by some partition of the data) or algorithmic parallelism (decomposition by function) are the two main possibilities available for irregularly structured code on medium-grained processors.⁶ After geometric decomposition, data must be multiplexed by a farmer process across the processor farm, which is why in PPF data parallelism is alternatively termed geometric multiplexing. When a processor farm utilizes geometric multiplexing, it is called a data farm, and certainly the term data farm is more common in the literature.⁷ This book does not include many examples of algorithmic parallelism, not by intent but because the practical opportunities of exploiting this form of parallelism are limited. An early analysis [277] in the field of single-algorithm image processing established both the difficulty of finding suitable algorithmic decompositions and the limited speed-up achievable by functional decomposition. However, algorithmic parallelism does have a role in certain applications, which is why it is not discounted in PPF. For example, pattern matching may employ a parallel search [202], a form of OR-parallelism, whereby alternative searches take place though only the results of successful searches are retained.⁸

⁶ Dataflow computers [340] have been proposed as a way of exploiting the parallelism inherent in irregularly structured code (i.e. code in which there are many decision points resulting in branching), but though there are research processors [79], no commercial dataflow computer has ever been produced.
⁷ The term data parallelism is an alternative to geometric parallelism, but this term has the difficulty that data parallelism is associated with parallel decomposition of regular code (i.e. code with few branch points) by a parallel compiler.
⁸ Divide-and-conquer search algorithms may be termed AND-parallelism, as the result of parallel searches may be combined through an AND-tree [294].
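As a rough illustration of the demand-based data farm just described, the following C/MPI sketch shows one farmer handing work packets to whichever worker falls idle first. It is an assumption-laden toy, not the PPF template developed in Chapter 7: the choice of MPI, the task count, and the squaring kernel are placeholders chosen for brevity.

```c
/* Minimal demand-based data farm: the farmer seeds each worker with one
 * packet and thereafter refills whichever worker returns a result first,
 * so faster workers automatically receive more work. */
#include <mpi.h>
#include <stdio.h>

#define NTASKS   100
#define TAG_WORK 1
#define TAG_STOP 2

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                              /* farmer process */
        double task = 0.0, result;
        int sent = 0, done = 0, w;

        for (w = 1; w < size && sent < NTASKS; w++, sent++) {
            task = (double)sent;                  /* seed every worker once */
            MPI_Send(&task, 1, MPI_DOUBLE, w, TAG_WORK, MPI_COMM_WORLD);
        }
        while (done < sent) {
            MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &st);
            done++;                               /* forward result to the next stage here */
            if (sent < NTASKS) {                  /* demand-based: refill the idle worker */
                task = (double)sent++;
                MPI_Send(&task, 1, MPI_DOUBLE, st.MPI_SOURCE, TAG_WORK,
                         MPI_COMM_WORLD);
            } else {
                MPI_Send(&task, 0, MPI_DOUBLE, st.MPI_SOURCE, TAG_STOP,
                         MPI_COMM_WORLD);
            }
        }
        for (; w < size; w++)                     /* stop any worker never seeded */
            MPI_Send(&task, 0, MPI_DOUBLE, w, TAG_STOP, MPI_COMM_WORLD);
        printf("farmer: %d tasks completed\n", done);
    } else {                                      /* worker process */
        double task, result;
        for (;;) {
            MPI_Recv(&task, 1, MPI_DOUBLE, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_STOP)
                break;
            result = task * task;                 /* stand-in for the real kernel */
            MPI_Send(&result, 1, MPI_DOUBLE, 0, TAG_WORK, MPI_COMM_WORLD);
        }
    }
    MPI_Finalize();
    return 0;
}
```

Under these assumptions the sketch would be built with an MPI compiler wrapper (e.g. mpicc) and launched with mpirun; the demand-based refill loop in the farmer is the part that carries over to a real data farm.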
Bringing together the preceding discussion, it can be stated that:

1. A data set can be subdivided over multiple processors (data parallelism or geometric multiplexing).
2. The algorithm can be partitioned over multiple processors (algorithmic parallelism).
3. Multiple processors can each process one complete task in parallel (processor farming or temporal multiplexing).
4. The algorithm can be partitioned serially over multiple processors (pipelining), pipelining being an instance of algorithmic parallelism.
5. The four basic approaches outlined above can be combined as appropriate.

The field of low-level image processing [74] illustrates how these forms of parallelism can be applied within a processor farm:

Geometric multiplexing  An example of geometric multiplexing is where a frame of image data is decomposed onto a grid of processors. Typical low-level image-processing operations such as convolution and filtering can then be carried out independently on each sub-image requiring reference only to the four nearest neighbor processors for boundary information. To adapt such operations to a processor farm, the required boundary information for each processor can be included in the original data packet sent to the processor.

Algorithmic parallelism  In the case of algorithmic parallelism, different parts of an algorithm which are capable of concurrent execution can be farmed to different processors, for example the two convolutions with horizontal and vertical masks could be executed on separate processors concurrently in the case of a Sobel edge detector [290, 75]. The advantage of a processor farm in this context is that no explicit synchronization of processors is required; however, the algorithm itself normally defines explicitly the possible degree of parallelism (i.e. incremental scaling is not possible).

Temporal multiplexing  Applying each of a sequence of images to a separate processor does not speed up the time to process an individual image, but enables the average system throughput to be scaled up in direct proportion to the number of processors used. The approach is limited by the allowable latency between the input and output of the system, which is not reduced by temporal parallelism.

Pipelining  Pure pipelining has the same effect as temporal multiplexing in speeding up overall application throughput without reducing the latency
for any particular image, but is achieved by sequentially subdividing the complete application algorithm and placing each component onto a separate processor. The throughput increase is constrained by the maximum processing time for any one stage within the pipeline. Thus, the pipeline of four processors shown in Fig. 1.1a increases the steady-state task throughput from 0.1 tasks/second for a single sequential processor to 0.25 tasks/second (limited by the slowest pipeline stage), a speedup of 2.5. Note however that the latency (delay between task input and task output) increases from 10 seconds for the sequential algorithm to 15 seconds (3 x 4 seconds + 3 seconds for the final stage) for the unbalanced pipeline shown in Fig. 1.1a. The role of pipelining within the PPF design philosophy is to increase throughput and reduce latency by allowing necessarily independent components of an application (some of which may be inherently sequential) to be overlapped. By combining the techniques described above, and mapping a PPF architecture onto the pipeline of stages which comprises any embedded application, both the throughput and the latency of the application can be scaled. Fig. 1.1b illustrates the effect of using temporal multiplexing alone to achieve throughput scaling: when the throughput of each pipeline stage is matched at 1 task/second, a speedup of 10 is achieved with the same latency as the original sequential algorithm. Of course, exactly the same throughput scaling (with unchanged latency) could be achieved using a single processor farm, with each processor executing a copy of the complete application. The reason for using a pipeline instead is to break down the overall application into its sub-components, so that data or algorithmic parallelism can be exploited to reduce latency as well as increase throughput. Finally, Fig. 1.1c illustrates the exploitation of data or algorithmic parallelism in each pipeline stage instead of temporal multiplexing: in this case, the same speedup of 10 is achieved, but with a reduction of latency to 4 seconds. Appendix A.1 below illustrates how basic profiling data, extracted from execution of a sequential image coding algorithm, can be used to guide the PPF design process to achieve a scalable parallel implementation of the algorithm with analytically defined performance bounds.
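The arithmetic behind these idealized examples can be written down in a few lines. The C sketch below (an illustration, not from the original text) uses the stage times quoted for Fig. 1.1a, namely 4, 4, 4 and 3 seconds against a 10-second sequential task, and reproduces the 0.25 tasks/second throughput, 15-second latency and speedup of 2.5 discussed above.

```c
#include <stdio.h>

/* Throughput and latency of a simple pipeline: steady-state throughput is
 * set by the slowest stage, latency by the sum of the stage times.  The
 * stage times are the 4 + 4 + 4 + 3 second example of Fig. 1.1(a); the
 * sequential version takes 10 seconds per task. */
int main(void)
{
    const double stage[] = { 4.0, 4.0, 4.0, 3.0 };
    const double t_seq = 10.0;
    const int n = sizeof stage / sizeof stage[0];
    double slowest = 0.0, latency = 0.0;
    int i;

    for (i = 0; i < n; i++) {
        if (stage[i] > slowest)
            slowest = stage[i];
        latency += stage[i];
    }
    printf("throughput: %.2f tasks/s (sequential: %.2f tasks/s)\n",
           1.0 / slowest, 1.0 / t_seq);
    printf("latency: %.0f s (sequential: %.0f s), throughput speed-up: %.1f\n",
           latency, t_seq, t_seq / slowest);
    return 0;
}
```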
Fig. 1.1 Idealized examples of PPF.

1.5 CONCLUSIONS

The primary requirement in parallelizing embedded applications is to meet a particular specification for throughput and latency. The Pipeline Processor Farm (PPF) design model maps conveniently onto the software structure of many continuous data flow embedded applications, provides incrementally scalable performance, and enables upper-bound scaling performance to be easily estimated from profiling data generated by the original sequential implementation. Using the PPF model, sequential sub-components of the complete application are identified from which data or algorithmic parallelism can be easily extracted. Where neither of these forms of parallelism is exploitable (i.e. the residual sequential components identified in Amdahl's law), temporal multiplexing can often be used to match pipeline throughput without reducing latency. Each pipeline stage will then normally map directly onto the major functional blocks of the software implementation, written in any procedural language. Furthermore, the exact degree of parallelization of each block required to balance the pipeline can be determined directly from its sequential execution time.
Appendix A.1 SIMPLE DESIGN EXAMPLE: THE H.261 DECODER
Image sequence coding algorithms are well known to be computationally intensive, due in part to the massive continuous input/output required to process up to 25 or 30 image frames per second, and in part to the computational complexity of the underlying algorithms. In fact, it was noted (in 1992) [380] that it was only just possible to implement the full H.261 encoder algorithm for quarter-CIF (176 x 144 pixels) images in real time on DSP chips such as the TMS 320C30. In this case study, a non-real-time H.261 decoder algorithm developed for standards work at BT Laboratories and written in C, was parallelized to speed up execution on an MIMD transputer-based Meiko Computing Surface. Results presented are based upon execution times measured when the H.261 algorithm was run on sequences of 352 x 288 pixel common intermediate format (CIF) images.

Fig. A.1 shows a simplified representation of the H.261 decoder architecture. The decoder consists of a 3-stage pipeline of processes, with feedback of the previous picture applied around the second stage. Feedback within a pipeline is a key constraint on parallelism, since it restricts the degree to which temporal multiplexing can be exploited: in the H.261 decoder, the reconstructed previous frame is used to construct the current frame from the decoded difference picture. Table A.1 summarizes the most computationally intensive functions within the BT H.261 decoder, and is derived from statistics generated by the Sun profiling tool gprof [138] while running the decoder on 30 image frames of data on a Sparc2 processor. To simplify interpretation, processing times have been normalized for one frame of data. The 10 functions listed in the table constitute 99.2% of total execution time. Program execution of the H.261 decoder can be broken down on a per-frame basis into a pipeline of three major components:

T1  frame initialization (functions 1 and 2 in Table A.1);
Fig. A.1 Simplified representation of the H.261 decoder execution timing.
Table A.1  Summary Execution Profile Statistics for the H.261 Decoder

Sequence  Function name                     Normalized Execution Time (s)
1         clear_picture                     0.098
2         h261_decode_picture               0.567
3         inverse_run_level_picture         0.284
4         tm_inverse_scan_change_picture    0.346
5         h261_inverse_quantize_picture     0.750
6         reconstruct_h261                  1.695
7         inverse_transform_picture         2.199
8         copy_macro_block                  0.239
9         write_picture_file                0.685
10        macro_block_to_line               0.240
T2  frame decoder loop (functions 3-8 in Table A.1); and
T3  frame output (functions 9 and 10 in Table A.1).

The first and last of these components are executed once for each image frame, whereas the middle component contains considerable data parallelism and involves a loop executed 396 times (once for each 16 x 16 pixel macroblock making up a CIF picture). It is therefore clear that considerable scope exists for speeding up the middle stage of the pipeline by exploiting data parallelism. Temporal multiplexing cannot be utilized because each image frame is reconstructed by means of a difference picture added to the motion-compensated previous frame (although it would be possible to partially overlap the decoding of consecutive frames). Since pipeline stages T1 and T3 are inherently sequential, direct application of Amdahl's law to the data in Fig. A.1 shows that f = 0.22, giving a maximum speedup of only 4.55. An asymptotic approach to this speedup could be obtained by parallelizing the decoder using a single processor farm, with the data-parallel component T2 farmed onto worker processors, and the remaining code executed on the master processor. The upper-bound predicted speedup for the PPF is presented graphically in Fig. A.2 and may be represented theoretically by the following piecewise approximation:

$$ S(n) = \begin{cases} \dfrac{T_1 + T_2 + T_3}{T_2/(n-2)}, & \text{if } \dfrac{T_2}{n-2} \ge T_3, \\[2ex] \dfrac{T_1 + T_2 + T_3}{T_3}, & \text{otherwise,} \end{cases} $$
where the first and last stages of the PPF contain a single processor, the second processor farm stage contains n - 2 processors, and T1-T3 are execution times of the three stages of the pipeline shown in Fig. A.2. As the throughput for a PPF is defined solely by the slowest pipeline stage, its speedup is given by the ratio of sequential application execution time to the execution time for this stage alone (this illustrates the advantage of the pipeline in overlapping execution of residual sequential components). Where (as in this case) the slowest stage is perfectly parallelizable (i.e. it contains no residual sequential elements and thus f = 0 in Amdahl's law), linear speedup is obtained up to the point where the scaled stage is no longer the slowest. The first equation defines this case where the performance is increasing linearly as the number of workers in the processor farm increases (S is proportional to n); this continues until the execution time for the processor farm drops below that of the next slowest stage, T3 in this case. The second equation then defines the fixed scaling achieved for any further increase in processor numbers (S is fixed and independent of n). It is assumed that the processor farm implementing the middle stage of the pipeline receives its work packets directly from the first stage and passes
Fig. A.2 Idealized and actual speedup for the H.261 decoder.
its results directly to the third stage, as in the topology of Fig. A.3, where an implementation with five worker processors is shown. The analysis is of course idealized, ignores communication overheads, and assumes static task characteristics. As can be seen from Fig. A.2, the performance is predicted to scale linearly up to six workers (8 processors total).
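For readers who want to reproduce the prediction, the short C sketch below (an illustration, not part of the original toolkit) evaluates the piecewise approximation with stage times summed from Table A.1 according to the grouping given above; it assumes, as the text states, that the slower of the farm stage and T3 sets the throughput, and prints the residual sequential fraction together with the predicted speed-up for three to twelve processors.

```c
#include <stdio.h>

/* Predicted PPF speed-up for the H.261 decoder, following the piecewise
 * approximation given above.  T1, T2 and T3 are per-frame times of the
 * three pipeline stages, summed from Table A.1 (functions 1-2, 3-8 and
 * 9-10 respectively); n is the total processor count, so the data farm
 * has n - 2 workers. */
int main(void)
{
    const double t1 = 0.098 + 0.567;                  /* frame initialization */
    const double t2 = 0.284 + 0.346 + 0.750 + 1.695
                    + 2.199 + 0.239;                  /* frame decoder loop */
    const double t3 = 0.685 + 0.240;                  /* frame output */
    const double total = t1 + t2 + t3;
    int n;

    /* Residual sequential fraction f (about 0.22) and the Amdahl-style
     * limit for a single farm, for comparison with the text. */
    printf("f = %.2f, single-farm limit = %.2f\n",
           (t1 + t3) / total, total / (t1 + t3));
    for (n = 3; n <= 12; n++) {
        double farm = t2 / (n - 2);                   /* per-worker share of T2 */
        double slowest = farm > t3 ? farm : t3;       /* T1 < T3, so T3 binds */
        printf("n = %2d  predicted speed-up = %.2f\n", n, total / slowest);
    }
    return 0;
}
```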
Fig. A.3 PPF topology for a 3-stage pipeline with 5 workers in the second stage.
Actual scaling performance results are also presented in Fig. A.2, for two different practical cases. In both cases, the scaling performance is less than that predicted in the idealized graph, due to communication overheads being neglected, but the general shape of the graphs is in other respects as predicted. The maximum speedup obtained (5.59) exceeds the limit predicted by Amdahl's law, thus demonstrating the advantage which the PPF has compared with a single processor farm implementation. In practice, transputer communication links do not provide sufficient bandwidth for real-time communication of H.261 CIF picture data structures, and therefore communication overheads substantially limit the performance scaling which can be achieved in a transputer-based system. On the AMD SHARC family of processors with six link ports, real-time parallel processing of image sequences is far more practicable. For example, the ADSP-21160 [14], running at 100 MHz, supports 'glueless' multiprocessing⁹ and floating point like the transputer, but now is superscalar, with a maximum of six issues per cycle. In the first implementation, each image was simply subdivided into a number of horizontal strips defined by the number of processor farm workers, in line with the idealized model of data parallelism presented earlier. As can be seen from Fig. A.4(a), this results in a series of black strips in the reconstructed image, where data adjacent to each worker's sub-image were not available for constructing the motion compensated previous image. In the second implementation, additional rows of macroblocks at the boundaries of the sub-image processed by each farm worker were exchanged in a second communication phase between the master and worker processors in the processor farm, after the difference image had been decoded. This enables the full motion compensated previous image to be reconstructed, as shown in Fig. A.4(b), but results in an additional communication overhead, which decreases scaling performance compared with the case where edge data are not exchanged.
⁹ Glueless refers to the absence of a requirement for auxiliary logic or chips to either connect processors by communication links or to mediate processor access to global memory.
Fig. A.4 Sample image output by the parallel H.261 decoder with 5 workers (a) without edge data exchange and (b) with edge data exchange.
2 Basic Concepts

PPF is primarily intended for embedded systems [313], which are not the traditional target for parallel processing, and, as will become apparent, are not well covered in the literature, where the emphasis is on general-purpose programming. Embedded systems are dedicated applications of computers running within some larger application framework or context. Most embedded systems are continually available, either processing data, or reacting to external events [35]. Increasingly, with the development of more complex computer applications and tighter deadlines, embedded systems cannot be designed without utilizing parallel processing.

Consider automatic target recognition (ATR) of aircraft found by Synthetic Aperture Radar (SAR) reported in [343], Fig. 2.1. The design of this application, though developed independently of PPF, has many of the characteristic design features of a PPF. There is: a single flow of processing control through a pipeline; interest in accelerating the target identification stage by some form of processor farm (or other topology); and use of Commercial-off-the-Shelf (COTS) hardware configured in parallel. The result is a cost-effective, scalable solution which will meet (presumably) latency and throughput targets. Additionally, rather like the two-level hardware employed in a number of the applications covered in later chapters of this book, a Myrinet high-speed network [103] is deployed independently of the processing nodes, which can be PowerPC RISC (reduced instruction set) microprocessors, AMD SHARC DSPs, or Lucent Orca FPGAs (field programmable gate arrays). There is also a need to coordinate the flow of data across the hierarchy of ATR algorithms, requiring a system-wide view.
Having arrived at a design for (say) a DSP processor at the computation layer, why consider an FPGA alternative? Why, in fact, partition an application between a communication structure and a computation layer? The key problem a parallel system designer must face is how to make a system scalable. Scalability is not simply a matter of being able to tackle larger (or smaller) problems by adding hardware incrementally, without otherwise changing the design, important though a modular design remains. Equally important is that uniprocessor performance increases in proportion to the number of transistors on a microchip, which has been observed to double approximately every eighteen months (the well-known Moore's law1). Therefore, a design tied to a specialized parallel machine may well be rapidly overtaken in terms of price and performance by a uniprocessor implementation. The principal reason for the shift to COTS hardware is to exploit the economies of scale that arise within the uniprocessor market, which lead to exponential gains in performance. In other words, by making the computation hardware within the design exchangeable, as well as modular, a design is made doubly scalable, and hopefully future-proof. As the life cycle of a typical commercial microprocessor is less than five years, while the lifetime of many embedded products is much longer (e.g. an avionics system has a lifetime greater than thirty years), system (or code) portability [23] is an important method of amortizing the investment in the original embedded software.

A PPF design is a pipeline of processor farms. The essence of a processor farm within PPF is one central farmer, manager, or controller process, and a set of worker or slave processes spread across the processor farm. Notice that there is no insistence in PPF on having a single worker process per processor, though in fact our farm template design (Chapter 7) does not exploit parallel slackness [329] by having more than one process to a processor. In a shared-memory MIMD machine, worker threads [189] replace worker processes, a thread being a single line of instruction control existing in a shared or global address space. The role of the farmer is not simply to coordinate the activity of the workers but additionally to pass partially-processed work on to the next stage of processing. By introducing modularity, each module being a farm, it becomes possible to cope with heterogeneous hardware, and to scale each farm separately as larger problems or versions of the original problem are tackled.

PPF is appropriate for all applications with continuous data input/output, a characteristic typical of soft, real-time, embedded systems.2 However, PPF is by no means a panacea for all such embedded systems, and in Chapter 10,
1 Moore's law is named after Gordon Moore, co-founder and Chairman of Intel, who first made the observation in 1965.
2 Soft, real-time systems, as opposed to hard, real-time systems, are those in which responsiveness to deadlines can be relaxed. Hard, real-time systems [216] usually involve the control of machinery, such as in fly-by-wire avionics and industrial manufacturing control, and are not the subject of this book.
Fig. 2.1 Sandia National Laboratory's Automatic Target Recognition System showing the Myricom Parallel Stage.
some counter-examples encountered by the authors are considered. In fact, it may always be possible for a better, optimized solution to be found for any application. However, first consider how long the development task will take; whether the solution will transfer easily to alternative hardware; whether late algorithmic changes can easily be accommodated; and so on. These are the problems of medium- and larger-scale systems which are expected to persist over more than one generation of hardware. To tackle such applications requires high-level, systems thinking, which, because of its generality, can be deceptively hard.

Another way of seeing the processor farm is as a client-server system [378] where there is one client (the farmer) and multiple servers (the workers). Workers only interact with the farmer, usually directly, though a per-farm multicast has proved useful (see the H.263 example in Chapter 8) and has become a feature of our farm template design, Chapter 7. When demand-based scheduling of work takes place, unlike the classic client-server model, the workers request further work from the farmer. The ability of a server to make a request introduces a circular path of communication. However, deadlock is avoided because the farmer will normally only send work when it is requested. An introduction to the demand-based method of processor farming is given in Section 2.3, and this form of scheduling is revisited in Chapter 11.

Workers for the most part perform identical tasks. However, in our definition of the processor farm, algorithmic parallelism can occur, whereby different functions of an algorithm are distributed among the worker processes. There are instances where divide-and-conquer is a way of distributing work to the farm [60]. In the divide-and-conquer paradigm, workers perform an identical algorithm but at varying levels of granularity, for example when decomposing an FFT (Fast Fourier Transform). However, in most cases, including the 2D Fast Fourier Transform (FFT) (a staple of many signal-processing algorithms), equal-sized tasks are easier to implement, at least as efficient, and portable.3
2.1 PIPELINED PROCESSING
The role of pipelining within the PPF design method is to increase throughput and reduce latency by allowing independent components of an application (some of which may be inherently sequential) to be overlapped. The performance of a pipeline will depend on its organization, which in turn is dependent on the application. The application may consist of multiple algorithms within which may exist varying types of parallelism. One common way to categorize algorithms is through granularity, which is the ratio of computation to communication. In other words, granularity is the amount of computation that can take place before a result is exchanged between processors. Two considerations occur: is there sufficient granularity within the algorithm, and can the multiprocessor architecture exploit that granularity? As was pointed out in Chapter 1, fine-grained algorithms with a regular structure are no longer catered for at the processor level. According to [121], most scientific and engineering applications lie in the middle of the granularity spectrum, and in the middle of a code-regularity spectrum, and therefore it makes sense to design general-purpose parallel machines for that market. Broadly, on current PPF hardware medium- and coarse-grained parallelism is exploitable, as befits the target hardware, that is, the modestly parallel configurations that are current in embedded systems.

3 Numerous FFT algorithms suitable for vector processors [348] or VLSI implementations [240] are available, but are not portable.

How can work be organized across a pipeline? The following per-stage work allocation patterns can be distinguished (Fig. 2.2):

• Macro-task data-flow is used when there is no need to load balance, as there is a steady work arrival rate combined with an approximately constant per-task processing time. Local buffering of tasks can be employed to ease short-term blockages, but then a scheduling regime becomes necessary, e.g. [219]. Data-flow may be appropriate if there is no suitable decomposition of jobs (temporal multiplexing in PPF nomenclature), e.g. for the parallelization of wavelet transforms [317].

• A data-farm is used to give closer control of task scheduling, particularly when processing time is data (content) dependent. Notice that the task processing time can include variable communication times, for point-to-point networks, and the effect of background processing. Load balancing can be automatic if returning processed work (or a token signifying task completion) forms the basis of a request for more work. A static scheduling phase (i.e. with no system feedback) will be necessary at load-up time. Presently only demand-based scheduling is used, as it is a general solution which can be captured in a data-farm template. Where the parallel hardware is known, other methods of load-balancing may be appropriate though not necessarily superior, e.g. linear programming based on mean-value statistics has been successful for farms operating in a store-and-forward communication regime [338].

• Feedback paths will exist particularly if an algorithm has been designed originally for a sequential machine. The H.261 and later H.263 hybrid video-stream encoder and decoder applications exhibit multiple feedback routes. The stage awaiting feedback is synchronous. Folded-back pipelines may be a way of coping with feedback. For example, in Fig. 2.2c, the time to complete stages one and three together may approximately balance the time to complete stage two. Therefore, the output from stage three is fed back to stage one, Fig. 2.2d. In the parallelization of H.261, the timings from profiling of the sequential code established the approximate equivalence between two stages, and a folded-back pipeline was used.

• Multiple farms (Fig. 2.3) are suitable when independent algorithms can be grouped in the same pipeline stage, with an obvious reduction in the pipeline traversal time. It may also become easier to balance or scale other stages of the pipeline. The Karhunen-Loeve transform (KLT)
parallelization, Chapter 9, is an example of a single algorithm where dual farms are possible. When differing communication patterns or job service rates are segregated, there is an increase in performance due to the decrease in variance. This is a standard result from queueing theory; consider, for instance, the Pollaczek-Khintchine (P-K) formula for the expected queue length in M/G/1 systems [187].4
Fig. 2.2 Pipeline work allocation: a) data-flow farm; b) data-farm; c) feed-back; d) folded-back pipeline.
• Centralization is necessary either when the granularity of a stage is such that, with the available hardware, it is not sensible to parallelize, or when a brief phase requiring global or irregular data dependencies exists. Otherwise, where regional data dependencies are present, data-flow and data-farming work allocation can be adapted by sending out bordering data, as is the case for spatial filtering of images. A general-purpose processor topology which can accomplish global data exchange is the uni-directional ring, since it can be embedded into many other topologies [336]. If centralization is necessary it is accomplished by a single processor or by a uni-ring stage. Alternative hardware may be employed to accelerate centralized stages. For example, histogram equalization of an image, which has two short-lived phases for moderately sized images separated by a global exchange of information, can be accomplished by an FPGA linked to the address bus of a host processor.

4 The P-K formula is $E[n] = \rho + \frac{\lambda^{2}E[S^{2}]}{2(1-\rho)}$, where $E[\cdot]$ is the expectation operator, n is the number in the queue, $\rho$ is the utilization, $\lambda$ is the task arrival rate, and S is the service time. Notice the effect of $E[S^{2}]$ in the numerator.
Fig. 2.3 Dual-farm pipeline stage.
In PPF systems, care should be taken that the farmer does not become an under-used resource, responsible only for scheduling. On machines such as the Meiko CS-2 or Transtech's Paramid, a dual-processor arrangement allows a farmer process to be placed on a communication coprocessor while a worker process resides on the computation processor [113]. On the Texas Instruments TMS320C80 [27], a control processor with access to floating-point facilities supervises four support processors, all on the same integrated circuit. If farmers are directly connected in the design, a fast bus can link the processors, while point-to-point communication is used to form the worker network. A farm becomes the smallest replaceable unit in fault-tolerance terms [97]. Data-flow pipelines and dual pipelines do not have this flexibility.
2.2 PIPELINE TYPES
Both fundamental types of pipeline, synchronous and asynchronous, are provided by PPF. A purely asynchronous pipeline is obviously one in which there are no synchronous stages. If a task can be decomposed then its individual parts can proceed independently along the pipeline and consequently be independently scheduled at each of the stages of the pipeline. If the task components, that is, processed data or composite results, only need to be reassembled at the output of the pipeline, the latency can be reduced considerably. In practice, a partial reassembly may take place at various stages within an asynchronous pipeline. The handwritten postcode recognition system in Chapter 3 is an asynchronous pipeline, but the characters of the postcodes still needed to be reassembled at the final stage. Furthermore, identified postcodes have to emerge from the pipeline in the same order that the original handwritten address was input (in order to be printed as a phosphorescent dot code on the correct envelope), an output 'ordering' constraint. The effect of an ordering constraint at the output can be limited if output buffering is provided, because there are no further stages of the pipeline. However, it is perfectly possible for a variety of ordering constraints to occur. To estimate the performance of an asynchronous pipeline with such a constraint, it may be possible to consider the pipeline as if no such constraint existed and then superimpose the percentage degradation which results from the constraint (Chapter 11).

Synchronous pipelines are those in which the next stage of processing cannot proceed before the previous stage has completed. A pipeline stage becomes synchronous through waiting for the completion of 'a job', or for feedback to arrive. Usually an algorithmic constraint causes the completion of a job to stall. For instance, the processing of columns in a row-column 2D Fourier transform cannot proceed until all the rows have been processed. Many multi-step numerical analysis and signal-processing applications have component algorithms with this characteristic. It is rare for all processing to stop at the same time even if the algorithm permits, as there will also be a synchronization overhead [90]. The presence of a single synchronous stage within a pipeline will affect all previous stages, in the sense that any reduction of latency will be lost at the synchronous stage. On the other hand, latency can be reduced if the remainder of a pipeline is asynchronous. Therefore, synchronous and asynchronous segments of a pipeline may be analyzed separately and the results combined.

Fig. 2.4 is a classification of pipeline types. The classification is meant as guidance only; indeed, some categories are omitted to avoid repetition. The category 'algorithmic' is not strictly a method of work allocation, as it is possible to structure a decomposition into a set of independent functions organized either as a farm or a flow. The farm will be chosen as a way of load-balancing provided there are enough functions of suitable granularity.
Fig. 2.4 Pipeline types.
Synchronization might be constrained by the quickest function to complete. Speculative searches have this characteristic, e.g. [202].

2.2.1 Asynchronous PPF
In Fig. 1.1a, a simple pipeline without parallelization has already been informally examined. A simple pipeline is defined as one in which a single processor appears at each stage. This structure is now analyzed more formally to identify its mathematical characteristics. Fig. 1.1a shows just one global maximum latency time, though there might be a finite number of local maxima. It is the global maximum and its stage index that are of interest. Formally,
with {T(i)} being the set of maximum per-stage latencies and s being the number of stages. The performance parameters for a simple pipeline are
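In standard form (given here only as a sketch consistent with the surrounding definitions, not as a reproduction of the text's own numbered equations), the dominant stage, worst-case latency, and throughput of a simple pipeline are:

```latex
T_{\max} = \max_{1 \le i \le s} T(i), \qquad
L = \sum_{i=1}^{s} T(i), \qquad
\text{throughput} \le \frac{1}{T_{\max}} .
```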
Fig. 1.1b is a PPF in which there is pseudo-parallelism: no parallel decomposition has taken place. If enough processors have been deployed to prevent waiting at any stage,
where p_i is the number of processors at stage i. Fig. 1.1c introduces parallel decomposition of individual tasks. For a pipeline in which no task waits
Equation (2.8) is most appropriate for geometric multiplexing. However, for some types of algorithmic parallelism the algorithm finishes when the first solution is found. If f(x) is the probability density function of the independent and identically distributed random variables {X_i, i = 1, 2, ..., p}, with distribution function $F(x) = \int_{-\infty}^{x} f(y)\,dy$, then the minimum expected finishing time is given as
which is a result from order statistics (see [269] or, for an advanced treatment, [25]).
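For reference, the standard order-statistics expression for the expected minimum of p such variables (assumed here to be the form intended) is:

```latex
E\left[\min_{1 \le i \le p} X_i\right]
  = \int_{-\infty}^{\infty} x \, p \, f(x) \bigl(1 - F(x)\bigr)^{p-1} \, dx .
```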
2.2.2 Synchronous PPF
For a synchronous pipeline in which there is no waiting, define
which allows us to write
with δ(·) the Kronecker delta function. Throughput remains dependent on the rate at which the pipeline can accept tasks.
2.3 DATA FARMING AND DEMAND-BASED SCHEDULING
As was remarked upon in the introductory section to this chapter, the data farm is the building block of PPF systems. A processor or data farm [235] is a programming paradigm involving message-passing in which a single task is repeatedly executed in parallel on a collection of initial data. Data farming is a commonly used paradigm in parallel processing [386] and appears in numerous guises: some NOW (network-of-workstations) based [318]; some based on dedicated multicomputers [373]; some constructed as algorithmic skeletons [272]; and some implicitly invoked in parallel extensions of C++ [142].

A staple scheduling method for data-farm tasks is demand-based farming, which gives the potential for automatic, run-time load balancing. To some extent a scheduling system is a compromise between generality and performance. Global optimization, which is a form of static scheduling, is at the performance end of the spectrum. Because global optimization is an NP-complete problem (mathematically intractable [139]), heuristics [101] are necessary, though all pertinent system characteristics should be taken into account. In some embedded systems, where the work-load is deterministic (constant), for example in repeatedly performing a Fast Fourier Transform, global optimization has a place within a data-farming regime. As optimization is time consuming, off-line partitioning of the work is usually required.

In contrast, demand-based farming is a generalized method of scheduling. In demand-based farming, for which the number of tasks should greatly exceed the number of processes (processors, if each processor runs a single process), returning processed work (or a suitable token) forms the basis of a request for more work from a central 'farmer'. The computation time of an individual task should exceed the maximum two-way data-transfer time from worker processor to farmer processor so as to hide communication latency. Initially, the central farmer loads all processors with some work, which may be buffered locally; this is a static-scheduling phase. To avoid subsequent work starvation on some processes, no worker processor should be able to complete initial processing before the loading phase is over. Subsequently, after making a request, the worker should be able to process locally-buffered work while its request is answered.

Linear programming has been used to predict an upper bound on the performance of data farms using demand-based scheduling [292], that is, the performance beyond which it is not possible to improve given the physical bounds of the system. The prediction assumes that store-and-forward communication is employed, in which message passing is point-to-point and in which there is a handling delay at each intermediary node before the message is forwarded. A further assumption was that task duration times are deterministic. However, in [373], varying task durations were represented by mean values but the linear-programming technique was retained. Maximum performance was more closely estimated by including a number of overhead terms. In one version of the scheme [338], tasks were placed on workers in advance as if the distribution of tasks had been formed by demand-based farming. In theory, task granularity can be found by optimizing the governing equations for the model. In many cases, predictions are within 5% of timings reported on unmodified regimes, which may well be more than adequate in our application domain, where generality and scalability are as important as optimal performance. In Chapter 11, an alternative, more generalized, scheme of task scheduling is considered.
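The behaviour of demand-based farming is easy to see in a toy simulation. The sketch below is illustrative only (invented task times, communication time ignored), so it is not the PPF farm template of Chapter 7; it simply always hands the next task to the worker that asks first, i.e. the one that finishes its current task earliest, so that data-dependent work spreads itself over the workers automatically.

```c
/* Toy simulation of demand-based farming: the next task always goes to the
 * worker that requests it first (the one whose current task finishes
 * earliest).  Task times are data-dependent dummy values; communication and
 * farmer overheads are ignored. */
#include <stdio.h>

#define WORKERS 4
#define TASKS   20

int main(void)
{
    double task_time[TASKS];
    for (int t = 0; t < TASKS; t++)
        task_time[t] = 1.0 + (t % 5);          /* 1..5 time units per task */

    double free_at[WORKERS] = {0};             /* when each worker next asks for work */
    int    done[WORKERS]    = {0};             /* tasks completed per worker */

    for (int t = 0; t < TASKS; t++) {
        int w = 0;                             /* find the earliest requester */
        for (int i = 1; i < WORKERS; i++)
            if (free_at[i] < free_at[w])
                w = i;
        free_at[w] += task_time[t];            /* farmer sends task t to worker w */
        done[w]++;
    }

    double makespan = 0;
    for (int i = 0; i < WORKERS; i++) {
        if (free_at[i] > makespan)
            makespan = free_at[i];
        printf("worker %d processed %2d tasks, busy for %.1f units\n",
               i, done[i], free_at[i]);
    }
    printf("makespan %.1f units\n", makespan);
    return 0;
}
```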
2.4 DATA-FARM PERFORMANCE CRITERIA
Introductory textbooks on parallel computing, e.g. [65], discuss Amdahl's law, and also point out that the fraction of sequential work f is a constant in the law, whereas in some applications f is a function whose value decreases with increasing problem size. Embarrassingly parallel, single-algorithm applications certainly have this characteristic. Algorithms in which there are multiple opportunities for parallel decomposition and a very small irremediably sequential component have attracted attention because of the 'Grand Challenge' awards [34]. However, in multi-algorithm embedded systems, while f may not be a constant function it certainly is not monotonically decreasing in value. Amdahl's law has been revisited in [185].

Speedup is not a normalized quantity, as [163] is at pains to point out. If a faster processor type is employed in the parallel system but the communication bandwidth remains fixed, then the speedup will decline though the absolute processing time may well decrease. Therefore, a speedup measurement implicitly assumes a balanced system. Nevertheless, Amdahl's law provides a useful 'back-of-the-envelope' or first-cut estimate of the performance of a data farm. The extended example in Chapter 3 combines data-farm estimates to form an idealized performance estimate for a complete postcode-recognition pipeline. In some pipeline systems, sequential blockages within a stage of the pipeline can be masked by running that stage with one set of data in parallel with other stages of the pipeline, each processing different data. Of course, escaping the consequences of Amdahl's law is only possible because of the flow of data in a continuous-flow system. The performance of the pipeline is then found by finding the dominant stages across the pipeline. There are also more pessimistic laws, such as Minsky's law or that in [213], which state that speedup grows only logarithmically, due to the added need to synchronize between parallel processes.

A weakness of Amdahl's law is that it assumes a perfect decomposition of any task, whereas it may not be possible to break a task into equal parts. If algorithmic parallelism is employed within a farm then it is unlikely that the computation of sub-algorithms will be balanced. A more likely scenario is that the same algorithm will run on each worker process but that data dependencies will result in different processing times. The different processing times may only be known in a statistical sense or indeed for some types of algorithm (e.g. searching algorithms) may
be unknowable in advance. At some point, the results of data processed in parallel have to be recombined. This may be after each stage of the pipeline (a synchronous pipeline), or the recombination may not need to occur until the results are finally output (an asynchronous pipeline), but recombination will surely take place. It is at this point that the synchronization overhead is incurred.

Once a first-cut estimate has been made, it can be refined by accounting for the cost of communication. In the data-farm paradigm there is no global communication. The only communication takes place between farmer and workers. Similarly, across the pipeline the only form of communication which is not point-to-point is through feedback paths. However, as feedback to a farmer process occurs from a single source, that is another farmer, congestion does not arise. Congestion can nevertheless occur within a single farm, particularly if the communication mode is store-and-forward. Because the design model is regular, however (all farm workers will normally use the same communication model), it is possible to derive analytic estimates of upper-bound static performance [354]. As was mentioned in Section 2.3, a linear-programming technique can be used to derive the communication overhead. For example, if a processor farm is implemented using a buffered linear chain of N workers, then the maximum throughput achievable, MT_N, is given by:

where T_CALC is the calculation time (assumed identical for all work packets) and T_SETUP is the CPU time required to set up the communication (the communication itself is assumed to occur autonomously). Similar expressions [353] can be derived for other topologies such as n-ary trees, resulting in a performance surface for varying numbers of processors, setup times, and calculation times. For small numbers of processors there is little difference in predicted performance for all forms of topology over a linear pipeline implementation. The linear-programming estimate is itself an idealization, as it assumes perfect buffering. However, it is not easy to fully buffer communications for image-processing applications where large data structures must be communicated, and this is the major reason for actual performance falling below the theoretical upper bounds. By contrast, the effect of setup times on the upper-bound performance is of secondary importance.

Topology was a source of considerable academic debate when store-and-forward communication was predominant amongst first-generation message-passing machines [291]. With the advent of generalized communication methods such as the routing switch [305], the 'fat' tree [215], wormhole routing
[256] and virtual channels [78], the situation is less critical.5 Non-linear communication latency as a consequence is becoming less of an issue. The Bulk Synchronous Parallelism (BSP) programming model [236], which has been widely ported to recent machines, adopts a linear model of piecewise execution time, characterized by a single network permeability constant. Message-latency variance can be reduced by a variety of techniques, such as the message aggregation used on the IBM SP2 SMP. Routing congestion can be alleviated either by a randomized routing step [364], or by a changing pattern of communication generated through a Latin square. The asynchronous equivalent of BSP, that is the LogP model [69], likewise employs a linear model of communication. Even if communication overhead were constant, however, there will always be a synchronization overhead. This issue is returned to in Chapter 11.

As real-time exigencies must be met, asynchronous PPF systems do not normally have the luxury of postponed communication [159]. In the BSP programming model all communication takes place at a point of barrier synchronization. While this has advantages in tailoring the pattern of communication to the network, that communication is not overlapped with computation. Latency tolerance by 'parallel slackness' is also not always assumed to be possible in PPF. Indeed, even if multiple threads of control are available, it would appear that the advantages of parallel slackness are bounded [7]. As in earlier systems [214], buffering is used extensively as a form of latency hiding in PPF.
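As a reminder of how the first-cut Amdahl estimate discussed above behaves, the following fragment (with a purely illustrative sequential fraction f, not a measurement from any application in this book) tabulates speedup and efficiency for increasing numbers of processors:

```c
/* First-cut data-farm estimate using Amdahl's law: S(p) = 1 / (f + (1-f)/p).
 * The sequential fraction f is an illustrative value only. */
#include <stdio.h>

int main(void)
{
    double f = 0.1;   /* assumed fraction of work that stays sequential */

    for (int p = 1; p <= 32; p *= 2) {
        double speedup = 1.0 / (f + (1.0 - f) / p);
        printf("p = %2d  speedup = %5.2f  efficiency = %4.1f%%\n",
               p, speedup, 100.0 * speedup / p);
    }
    return 0;
}
```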
2.5 CONCLUSION

As embedded applications grow in size and complexity, it has become apparent that a uniprocessor will be insufficient to cope with real-time processing requirements. However, there has been no systematic way of designing such applications to run across multiple processors. In contrast, there have been many attempts at making tasks already running on uniprocessors perform faster on parallel hardware. There are also parallel languages and environments, but few attempts at guiding the design process. In essence, this is the same dilemma presented to users of object-oriented languages: there is a passive set of facilities but no guidance as to how to combine those facilities. Though recent attempts have been made to remedy this situation, the target application domain is not usually real-time, continuous-flow systems. In contrast, Pipelined Processor Farming is a systematic design method that attempts to cut the Gordian knot by reducing the design freedoms inherent in parallelism in favor of simplicity, practicality and consistency. PPF designs always result in a pipeline of some sort. PPF accepts the need to be able to mask a sequential bottleneck within a stage of the pipeline. However, any stage of a pipeline can be internally parallel, so long as a processor farm is used. A processor farm can take various forms, in the PPF model of parallelism, but the staple form of processing is data farming, which should be considered before opting for an alternative method. This is because data farming is the most generalized, scalable, and portable parallel-programming paradigm available.

5 The 'fat' tree topology proportionally increases bandwidth towards the root of the tree, with usually only the tree leaves performing computation. In wormhole routing, a message is segmented, with the head of the message passing through an intermediary node without waiting as the remaining message segments follow, thus avoiding set-up time in intermediary nodes. Virtual channels emulate wormhole routing by software means on a store-and-forward multiprocessor.
Appendix
A.1 SHORT CASE STUDIES
A number of systems of the type suitable for a PPF approach are outlined here. Reference [97] is a study of a parallel radar processing system for a phased-array radar [330]. In such a system (Fig. A.1), by controlling the phase of individual elements of the array, beams can be formed at various elevations and directions. The Doppler effect is employed to detect motion in transverse directions, helping to remove clutter. As the beam rotates, a new sector of the data must be processed to meet tight latency deadlines, requiring sufficient processing power to be available. A complete pipeline (Fig. A.2) consists of: a phase-detection unit to extract phase and quadrature signals from incoming Doppler-shifted signals (hardware); an array of processors which process signals in order to remove clutter; and a final stage to identify the target types over time, requiring a global address space. The radar pipeline is largely asynchronous with a single flow of data (though some control information and correlation between adjacent beams is not shown). Multiple pipelines correspond to beams at varying elevations.

The second stage of the pipeline was originally implemented with transputers but later adapted for more powerful 200MHz TI 'C40 DSPs [349]. Multiple data farms are employed within the second stage. Data farmers are linked within this stage by a 100 Mbit/s Fibre Distributed Data Interface (FDDI) backbone [19] (the links of the transputer version operated at 20 Mbit/s), which allows scaling of communication bandwidth. Each data farm is responsible for a different range of returning pulses. Within a processor farm, the complete processing algorithm was performed by each worker, an example of temporal multiplexing in PPF terminology. The farm arrangement allowed the pipelines for lower beams to be scaled according to the lower processing requirements for these beams. However, the true advantage of this design is the ability to scale to differing clients, civilian or military. Traditional hardware designs with application specific integrated circuits (ASICs) are not only costly but also inflexible by comparison, though there is scope for ASICs, such as
GEC Plessey PDSP16510A FFT processor or PDSP16116/A complex multiplier, in processing regular algorithms [38] such as the Fast Fourier Transform (FFT).
Fig. A.1 Multi-elevation radar array.
Fig. A.2 Block diagram of the GEC Marconi Phased-Array Radar System.
In [133], the AT&T array multiprocessor is configured as a pipeline in order to perform near real-time (within a factor of about three of real time) speech recognition with a full-search algorithm. Though a pipeline, this design preferred a dataflow organization, not data farming. The pipeline is asynchronous with a single flow of data. In Chapter 10, a system based on an integrated recognition network is considered, though it is harder to achieve real-time performance and preserve full accuracy on such a system. The DSP3 multiprocessor has a cuboid node topology (Fig. A.3), each node consisting of a cluster of DSP32 DSPs from AT&T. Figure A.3 shows the topology of a node sub-system upon which the speech-recognition pipeline was run, though with the intention of later scaling to the larger 128 PE topology.
Fig. A.3 Topology of the DSP3 multiprocessor from AT&T.
A virtual pipeline with the communication topology of Fig. A.4 was superimposed upon a node within the DSP3. Feature extraction from raw speech frames is accomplished on DSPs through standard signal-processing algorithms [295, 179], and can be buffered if running ahead of subsequent stages in the pipeline. The compute-intensive parts of speech recognition are the calculation of the probabilities needed eventually to establish which of a number of candidates best matches the utterance. The latency of the system for real-time processing is given by the duration of speech frames, which are typically 10 ms in length to avoid variations over time. Each incoming frame must be matched against all phone (sub-syllable speech sound) possibilities, the number of which increases according to the vocabulary and the complexity of the grammar (perplexity). The AT&T system was tested on a vocabulary of 992 words with a perplexity of 60, but vocabularies of at least an order of magnitude greater can be expected. After training from a set of representative utterances, a Gaussian mixture model is constructed for each phone to reflect variations in intonation. Figure A.5 shows a triphone model with just three mixtures (24 or more is realistic), denoted by u_nn, with entry probability given by c_nn. Each of the probabilities of entering a state must be updated for all current candidate utterances as a new frame enters, and there are also fixed probabilities for moving between states to reflect variations in time. The candidate utterances are then scored against a grammar (an application-dependent set of possible phrases) at the phone, word, and phrase level. The throughput of the system is therefore given by the number of candidate utterances being examined at any one time, and will usually change with time as some candidates fall below an acceptable probability. Real-time speech recognition remains an interesting target for pipelined parallel systems.
Fig. A.4 The AT&T speech-recognition pipeline.
Fig. A.5 Simplified Gaussian mixture model.
The work in [53] is an early parallelization of the H.261 video codec by purely software means. Though specialized VLSI chip-sets have been developed for the standard codecs [37, 130], there is always a period of experimentation when the algorithms are prototyped within the context of a complete system. Testing is a lengthy process unless a parallel implementation is developed. Figure A.6 is a block diagram of the H.261 hybrid predictive/transform coder6. The H.261 encoder is given fuller treatment in Chapter 8, and a parallel implementation of the simpler H.261 decoder has already been described in Chapter 1. H.261 is typical in layout and algorithmic content of a series of other codecs stemming from motion JPEG, through MPEG-1 & -2, and now H.263+ and H.263L. MPEG-4 is of a different nature, building on model-based coding, and will no doubt go through a prototyping phase.
Fig. A.6 Block diagram of the structure of the H.261 encoder.
In such codecs, it is found that motion estimation (ME) takes up 60-70% of the computation [87], followed by calculation of the discrete cosine transform (DCT) across 8 x 8 pixel blocks of the video frame. Parallelization of ME by geometric multiplexing is also based on blocks, grouped in units of four blocks called macroblocks. Reference [53] adopted a single data farm with static work scheduling, rather than a pipeline, and rather than employing demand-based work scheduling. To avoid overflow of the output channel, the quantization coarseness is varied, normally requiring feedback. In fact, the quantization level is estimated by the farmer on receipt of processing-mode information on a per-macroblock basis from the worker processes. The estimate is broadcast so that the second phase of processing can proceed.

6 The decoder is incorporated into the encoder in order to reconstruct the encoded image, which is then subtracted from the new incoming frame, thus giving rise to a predictive error signal.
Therefore, this data farm works in synchronous fashion. However, notice that this implementation feeds back the quantization level without waiting for the variable-length coder (VLC) to complete. Figure A.7 shows the parallel implementation, which, with the technology of the time (transputers), had a throughput of 2 QCIF (176 x 144 pixel) frames/s.
Fig. A.7 H.261 video coding system.
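For orientation, the macroblock arithmetic behind these figures is easily checked; the small fragment below (illustrative only) counts the 16 x 16 macroblocks in a QCIF picture and the macroblock rate implied by the reported 2 frame/s throughput.

```c
/* Macroblock arithmetic for QCIF (176 x 144) pictures: a quick check of the
 * workload implied by the 2 frame/s figure quoted above (illustrative only). */
#include <stdio.h>

int main(void)
{
    const int width = 176, height = 144;          /* QCIF dimensions */
    const int mbs = (width / 16) * (height / 16); /* 11 x 9 = 99 macroblocks */
    const double fps = 2.0;                       /* reported throughput */

    printf("macroblocks per frame : %d\n", mbs);
    printf("macroblocks per second: %.0f\n", mbs * fps);
    /* Motion estimation accounts for roughly 60-70% of the per-macroblock
     * computation, so it dominates the work handed to each farm worker. */
    return 0;
}
```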
3 PPF in Practice

In this chapter, a detailed case study is provided. The automated recognition of handwritten postcodes is a classic soft, real-time system for which PPF is ideally suited. Chapters 8-10 provide further case studies illustrating the advantages and disadvantages of the PPF method.

In the United Kingdom (UK) postal system, postcodes are often handwritten onto envelopes ready for posting. When the envelopes arrive at the sorting centre, they are placed on a mechanical conveyor belt. The task of the recognition software is to identify the postcode so that a corresponding phosphorescent code can be stamped onto the envelope when it reaches the end of the belt. The phosphorescent code is subsequently used for sorting the envelopes in later sorting machines, right down to the level of the individual postman's delivery round. There are no hard deadlines to be met, because there are no safety-critical factors at play. Nor are there asynchronous interrupts to respond to from peripheral or sensor devices. Nevertheless, there are throughput and latency targets to be met, defined by the speed and length of the conveyor belt. In [258], this postcode recognition application is included as a typical soft, real-time system alongside classical hard, real-time system examples.

UK postcodes are variable-length words consisting of 5 to 7 alphabetic (A) and numeric (N) characters. The postcode can be subdivided into two parts: an initial outward code of two to four alphanumeric characters and a final three-character inward code. The outward codes correspond broadly to postal towns, but some large towns and cities might have several different postcodes defining different areas. The inward code refers to a part of a street or a building. The outward codes are of primary concern and can be in any of the following six formats: AN, AAN, ANN, ANA, AANN, AANA.
The inward code always has the format NAA. Unlike postcodes in use in other countries (such as the fixed-length numeric codes used in the United States and Australia and the alphanumeric, fixed-format codes used in the Netherlands and Canada), the variable-length, variable-format alphanumeric postcodes used in the United Kingdom are not easily distinguishable from some place names, and are therefore relatively difficult to extract from the other address fields.1 To avoid this problem, in this work the address and postcode were each written within pre-printed boxes on the envelopes, though pre-printed envelopes are not currently widely used in the United Kingdom.
3.1 APPLICATION OVERVIEW
Handwritten postal address recognition is a typical multi-level knowledge-based vision application. Real-time OCR systems for recognizing printed addresses are widely used by the postal services, but systems capable of recognizing handwritten postal addresses (15-20% of all mail [285]) have only recently achieved the recognition performance and the throughput required for commercial application (the work reported here was carried out in the early 1990s). Figure 3.1 shows a top-level block diagram of the handwritten postal address recognition system, which can be broken down into two independent and inherently parallel pipelines of components. One of these attempts to recognize the characters within the postcode (from which a corresponding address can be determined using the UK postal address database) while the other simultaneously extracts features which are used to verify the address predicted from the postcode. If the verification features match the postcode address prediction sufficiently accurately, the address is accepted; otherwise the mailpiece is rejected for manual sorting. Full details of the vision algorithms used within the postcode and address verification pipelines are in [89] and [156] respectively.

In this particular application, the system design specification is that 10 envelopes/second must be processed by the machine. Furthermore, the size of the computational pipeline is limited by the associated mechanical conveyor, which is specified as having a maximum capacity of 90 envelopes (i.e. 9 seconds latency). An ordering constraint also exists, in that results from the OCR system must emerge in the same sequence that the mailpieces were scanned, to ensure correct phosphorescent dot coding of each envelope with its corresponding postcode. Since the postcode recognition and address verification pipelines shown in Fig. 3.1 can operate concurrently and independently, each can be parallelized separately, and their processing speeds may be balanced to meet the overall throughput and latency specification.

1 As an example, the postcode 'E5 5EX' and the county name 'ESSEX' are, respectively, a valid postcode and a county name which could easily be confused.
Fig. 3.1 Postal OCR system.

3.1.1 Implementation issues
The address recognition system was originally implemented as a sequential simulation algorithm on a Sun SparcStation [155]. It was then ported to a single T800 transputer running on a Meiko Computing Surface (CS2), a transputer-based multiprocessor [356], which was used to obtain the execution profiling data presented below. All initial parallelizations were implemented using Meiko's CSTools parallel programming environment. This provides a library of C functions for implementing virtual channel communication between arbitrary pairs of processors, whether or not they are physically directly connected. A further library of functions (CSBuild), which runs on the host processor (a Sun workstation), allows a custom loader program to be written which configures and then loads and runs a network of processors. In the case study examples presented below, the CSBuild loader program reads parameters from the command line which define the overall pipeline configuration and specify the number of worker processors to be configured within each processor farm. This allowed practical results to be rapidly obtained for a large number of different PPF configurations, as shown in the results graphs below.
3.2 PARALLELIZATION OF THE POSTCODE RECOGNIZER
The OCR algorithm for handwritten postcodes utilizes character features proposed in the characteristic loci algorithm [190] (preprocessing stage) combined with a quadratic classifier (classification stage). Ranked lists of characters at
each postcode character position are then pruned by applying postcode syntax rules. The n (n ≤ 5) highest-ranked characters remaining in each postcode character position after applying the syntax rules are then permuted to form n^6 (6-character postcodes) or n^7 (7-character postcodes) possible postcodes, which are presented to the dictionary. Postcodes which are matched in the dictionary are sorted according to an overall match function derived by multiplying the matches for individual characters, and the addresses corresponding to one or more best-matched postcodes are retained for verification against features extracted from the handwritten address.

It is necessary to introduce some limited data parallelism (splitting the processing up by a data division, though subsequent load balancing can be dynamic or static) in order to meet the application latency specification of 9 seconds, since although speedup can be achieved using temporal multiplexing alone (i.e. retaining the original processing granularity), temporal multiplexing has no effect on latency, and the latency of the original sequential algorithm was 10.5 seconds. Conversely, however, temporal multiplexing requires less parallel design effort than either data or algorithmic parallelism, hence the design objective was to introduce only as much data or algorithmic parallelism as necessary to satisfy the specification. Since some parts of the application are inherently sequential, the overall application must be partitioned so as to separate these parts (which can only be speeded up using temporal multiplexing) from other parts to which data and/or algorithmic parallelism can be applied. In the following section, possible partitioning points for the postcode recognition algorithm are identified by considering what forms of parallelism can be applied to each algorithm component.
3.2.1 Partitioning the postcode recognizer
Table 3.1 summarizes the average processing times for the major functions within the postcode recognition algorithm, and is derived from sequential execution time statistics obtained while running the algorithm on 100 address images on a single processor. As can be seen from the table, the preprocessing and classification tasks exhibit almost the same computational complexity and are three times slower than the dictionary search algorithm (measured for n = 5). The decomposition of the system into a pipeline of processor farm stages can be carried out as follows:

• Preprocessing. This is composed of five basic tasks as shown in Table 3.1. Due to the sequential nature of the tasks, algorithmic parallelism is not feasible, but data parallelism can be implemented at the level of postcode characters or character features. Since the features for each of the characters need to be combined before the classification stage takes place, it is necessary to implement classification on a different processor farm from preprocessing if different levels of parallelism are exploited
in the two stages. In the implementation described below, both feature extraction and classification exploit character-level parallelism; however, each stage was implemented as a separate processor farm to allow a future upgrade to the use of feature-level parallelism within the preprocessing stage if necessary (this would potentially reduce latency further).

• Classification. This comprises three algorithms: 1. transformation of the features into quadratic space; 2. location of the region where the classes lie in quadratic space; and 3. production of a ranked list of postcode characters. Again algorithmic decomposition is not applicable, and in this case data parallelism can be implemented only at the character level. Since the ranked lists of characters need to be combined before presenting them to the dictionary stage, it is necessary to implement the classification and dictionary stages on different processor farms.

• Dictionary. This involves application of the syntax rules (a table lookup operation), generation of all possible postcodes from the remaining ranked lists of characters, and a trie search [191] in the postcode/address dictionary.2 A form of algorithmic parallelism can be applied here by dividing the full postcode dictionary into six sub-dictionaries corresponding to the six possible UK postcode formats, but in the implementation reported below temporal multiplexing alone was applied to this stage (a small sketch of the candidate-generation step is given at the end of this subsection).

By introducing character-level data parallelism to the preprocessing and classification stages, the latency of these stages should be reduced by a factor of between 6 and 7, leading to an overall mean latency of less than 3 seconds (ignoring communication overheads). The implementation below therefore describes a design which utilizes data parallelism in the first two pipeline stages and temporal multiplexing alone in the final dictionary stage. The performance for this implementation is then compared with that of an implementation which utilizes temporal multiplexing in all three stages.
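The candidate-generation step of the dictionary stage can be pictured with a small sketch. The code below is illustrative only: a single fixed format is hard-wired, the ranked classifier output is invented, and, unlike the real system, the syntax rules are applied to whole candidates rather than used to prune the ranked lists first.

```c
/* Illustrative sketch (not the book's code) of candidate generation in the
 * dictionary stage: the top-n ranked characters at each postcode position are
 * permuted and each candidate is kept only if it matches the assumed legal
 * format "AANN NAA" (A = alphabetic, N = numeric). */
#include <stdio.h>
#include <ctype.h>

#define POSITIONS 7
#define N_RANKED  2          /* top-n characters kept per position */

/* invented ranked classifier output, best candidate first */
static const char ranked[POSITIONS][N_RANKED] = {
    {'C','G'}, {'O','0'}, {'4','A'}, {'1','7'},   /* outward: A A N N */
    {'3','8'}, {'A','4'}, {'B','R'}               /* inward:  N A A  */
};
static const char format[POSITIONS + 1] = "AANNNAA";

static int legal(const char *code)
{
    for (int i = 0; i < POSITIONS; i++)
        if ((format[i] == 'A' && !isalpha((unsigned char)code[i])) ||
            (format[i] == 'N' && !isdigit((unsigned char)code[i])))
            return 0;
    return 1;
}

int main(void)
{
    char code[POSITIONS + 1] = {0};
    long total = 1, kept = 0;
    for (int i = 0; i < POSITIONS; i++) total *= N_RANKED;

    for (long k = 0; k < total; k++) {        /* enumerate all n^7 permutations */
        long idx = k;
        for (int i = 0; i < POSITIONS; i++) {
            code[i] = ranked[i][idx % N_RANKED];
            idx /= N_RANKED;
        }
        if (legal(code)) {                    /* syntax rules prune the rest */
            kept++;
            printf("candidate: %s\n", code);
        }
    }
    printf("%ld of %ld permutations passed the syntax rules\n", kept, total);
    return 0;
}
```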
Scaling the postcode recognizer
Parallel implementation of the system was performed in two steps. Firstly, parallel versions of each stage were implemented independently, to enable the 2 A trie search is an efficient character-by-character tree search. However, search speed has subsequently been improved by an order of magnitude through the use of a novel semantic neural network (SNN) [225] dictionary method. The replacement of the original dictionary search algorithm by the SNN alternative would of course require re-balancing the PPF, exactly the sort of iterative development which the methodology is intended to accommodate.
dynamic scaling performance of each stage to be measured. Then the overall system was integrated by configuring the stages into a pipeline. Fig. 3.2 shows a plot of throughput (postcodes per second) measured against the number of processors for each independent pipeline stage, for the case where character-level data parallelism has been utilized in the preprocessing and classification stages, and temporal multiplexing alone in the dictionary stage.

Table 3.1 Average Execution Times for the Functions in the Postcode Recognition Process and Data Packet Sizes Communicated Between the Stages

  Process          Function                         Average Time     Data Packet
                                                    (second/image)   Size (bytes)
  Preprocessing    Filtering                        0.650
                   Feature extraction               1.170
                   Feature concentration            0.220
                   Feature unification              1.640
                   Feature counting                 0.790
                   Total                            4.470            2112
  Classification   Quadratic space transformation   0.007
                   Character classification         4.400
                   Ranking calculation              0.040
                   Total                            4.447            33
  Dictionary       Search                           1.500
                   Sort                             0.004
                   Total                            1.504            82
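A back-of-the-envelope check of what the Table 3.1 timings imply for the 10 envelope/s target is given by the following fragment (it ignores communication overheads, farmer overheads and the execution-time variance discussed below, which is why the measured throughput in Fig. 3.2 falls short of what these ideal counts suggest).

```c
/* Ideal worker counts per stage from the Table 3.1 timings, ignoring all
 * overheads: workers >= (seconds per image) x (target images per second). */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const char  *stage[]  = {"preprocessing", "classification", "dictionary"};
    const double time_s[] = {4.470, 4.447, 1.504};  /* seconds/image, Table 3.1 */
    const double target   = 10.0;                   /* envelopes per second */

    for (int i = 0; i < 3; i++) {
        double workers = ceil(time_s[i] * target);
        printf("%-14s: %.3f s/image -> at least %2.0f workers for %.0f images/s\n",
               stage[i], time_s[i], workers, target);
    }
    return 0;
}
```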
The plots reveal a number of important points:

• The throughput achieved by each stage scales incrementally and fairly linearly up to the maximum number of processors available in the Meiko system (32). At this limit, the dictionary stage is at the required throughput level, and the other two stages are operating at just over half the specified throughput. Further increases in throughput could be achieved at each stage if more processors were available, as no stage is close to saturation of its communication links.

• The throughput of the dictionary stage scales less rapidly than might be expected from the static profiling statistics of Table 3.1. This is because the execution time for this stage is an average of a strongly bimodal distribution (see Fig. 3.3), as 7-character postcodes take about five times as long as 6-character postcodes to process in the dictionary. The large variation in execution times leads to queueing delays at the output of the dictionary, since postcode ordering within this stage must be preserved. This degradation in performance due to the wide distribution of processing times in the dictionary stage can also be predicted theoretically [353].

• For a given throughput, Fig. 3.2 allows the required number of processors in each stage to be estimated so as to achieve a balanced and efficient PPF implementation.

Fig. 3.2 Throughput graphs achieved by scaling individual stages of the postcode recognizer.

The full postcode recognizer PPF consists of a pipeline of three ternary-tree processor farms. Each processor farm comprises a farmer process on a master processor which receives data from the previous stage (from the input device in the case of preprocessing) and distributes it over the worker processes on the other processors allocated to that stage. As soon as a worker finishes processing, it returns its results to the farmer, which forwards them to the next stage and sends new data to the worker. In the first two stages, where data parallelism is exploited, each work packet comprises the data required to process a single postcode character; in the dictionary stage each work packet consists of the ranked list of character matches for a complete postcode.
3.2.3 Performance achieved
Speedup, throughput and latency of the overall PPF were measured by first choosing fixed and equal numbers of processors at the preprocessing and classification stages (as required from Fig. 3.2). The number of processors in the dictionary stage was then chosen as the independent variable because of the large divergence between static and dynamic scaling predictions for this stage.
Fig. 3.3 Distribution of execution times for the dictionary search stage of the postcode recognizer.
Example speedup graphs are shown in Fig. 3.4 for the cases of 3, 5, 7, 9 and 11 worker processors in each of the first two pipeline stages. Each graph shows how the speedup varies with the number of workers in the dictionary stage, for the specified number of workers in the other two stages. The ideal graph is a line of gradient 1 which represents a parallel implementation in which all processors operate at 100% efficiency. In reality, efficiency will be less than 100% due to processor overheads such as the farmer's housekeeping operations for each stage, variability of execution time for different work packets, and communication overheads. The general form of each speedup graph is similar: as the number of processors in the dictionary stage is increased, the achieved speedup also increases until saturation occurs when the dictionary stage no longer limits the pipeline throughput. Optimum efficiency occurs at the point where each graph most closely approaches the ideal graph; at this point, the pipeline is balanced. For the graphs shown, balanced pipelines are achieved with 9 (3+3+3), 15 (5+5+5), 19 (7+7+5), 25 (9+9+7) and 29 (11+11+7) workers, and correspond to efficiencies of 53.3%, 58.7%, 60%, 64.8% and 66.9%. The gradual increase in efficiency is a result of the farmer overhead decreasing as the size of the processor farm increases, and would eventually decline again as communication saturation is approached. Figure 3.5 compares the throughput of the complete 3—stage PPF postcode recogniser described above with a similar alternative implementation which utilizes only temporal multiplexing in each of the three stages. Both implementations achieve linear throughput scaling as long as the numbers of workers in each of the three processor farm stages are maintained in balance,
PARALLELIZATION OF THE POSTCODE RECOGNIZER
45
Fig. 3.4 Speed-up graphs for the postcode recognition pipeline.
but the PPF implemented solely using temporal multiplexing achieves slightly greater throughput for two reasons: 1. The fixed overhead of header information in each communication packet means that more data must be communicated in total when the data stream is divided into character-based packets than when it is divided into postcode-based packets. 2. Splitting the postcodes up into individual characters and processing them independently increases the ordering constraints, as the ranked list of characters at each postcode character position must be recombined into postcodes at the dictionary searching stage before any processing starts in this stage. The main advantage of introducing data parallelism into the implementation is that it decreases the latency of the pipeline, as is shown in Fig. 3.6 which indicates the latency measured (in seconds) for the two different implementations. The latency of the postcode-parallel implementation remains constant regardless of the number of processors since there is no sub-division of either the data or the algorithm in this implementation. In contrast, the latency of the character-parallel implementation decreases as more processors are added until sufficient processors are available to fully exploit the data parallelism in the design (7 workers in each of the first two pipeline stages). For PPFs with sufficient processors, the latency of the character-parallel implementation is 3.2 seconds whereas it is 10.8 seconds (i.e. the same as for the original sequential application, but with additional communication overheads) for the pure temporal multiplexing approach. Hence the introduction of some limited data parallelism into the implementation makes it possible
to satisfy the latency specification for the system, at the expense of a slight decrease in throughput and efficiency.
Fig. 3.6 Latency graphs for character-parallel and postcode-parallel postcode recognizers.
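For readers who want to reproduce the efficiency figures quoted above, the following reading (an assumption consistent with the numbers, rather than a definition spelled out in the text) treats processor efficiency as overall speedup divided by the total processor count, farmers included:

\[
E = \frac{S}{p}, \qquad E = \frac{21.4}{29 + 3} \approx 0.669 \approx 66.9\%,
\]

where S is the overall speedup and p the number of processors. The speedup of 21.4 is the figure reported in Section 3.4 for the same 29-worker, 3-farmer configuration whose efficiency is quoted above as 66.9%.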
3.3 PARALLELIZATION OF THE ADDRESS VERIFIER
The redundancy between the postcode and the remainder of the address is exploited by extracting features of the address which are then matched against one or more candidate addresses corresponding to the postulated postcode(s) and derived from the UK postcode/address database. The verification process consists of two major stages:
1. Preprocessing. Address images are processed to extract first the address lines, then the address words, and finally, a slant correction process is performed on each address word to minimize variations in the features relative to the slant of the handwriting.
2. Feature extraction. A number of algorithms are applied to the address word images to extract predefined global and local features, which include initial and final characters of words, loops, and ascender/descender sequences.
The final stage of the complete address recognition system matches features extracted from the address image against corresponding features derived from the reference addresses found by the dictionary search. If a sufficiently high match is found, then the postcode is accepted; otherwise it is rejected. This matching process is around 10³ times faster than the other algorithms described above, therefore parallelization effort was concentrated on the preprocessing and feature extraction stages of verification. Preprocessing and extraction of verification features is roughly 10 times more computationally expensive than postcode recognition, and the volumes of data communicated between stages are also correspondingly larger, because verification utilizes the full address image, whereas postcode recognition only operates on the constrained postcode image field.
3.3.1 Partitioning the address verifier
Figure 3.7 shows that the preprocessing and feature extraction stages of the address verification algorithm can be broken down into a pipeline of four independent stages, and gives the average sequential execution time on a single processor for each stage, and also the amount of data communicated between the stages.
1. Line extraction. At the line extraction stage, image processing operations are applied to the complete address image: parallelization can therefore best be achieved by temporal multiplexing. The input data to the task is the complete address binary image, and the output data is a set of up to five address line binary images.
2. Word extraction. Word extraction operations are applied to each line of words in the address separately, hence line-level data parallelism can
be readily exploited in this stage as well as temporal multiplexing. The output data from this stage is a set of up to five binary word images per line.
3. Slant correction. Slant correction is applied separately to each word image extracted by the previous two stages, hence word-level or line-level data parallelism is easily exploitable here in addition to temporal multiplexing. Both input and output data consist of a set of independent images of each word in the address.
4. Feature extraction. Feature extraction consists of four independent algorithms (character segmentation, word case classification and ascender/descender detection, character recognition and loop detection) which are configured as shown in Fig. 3.8. Word-level data parallelism is again available at this stage, but in addition algorithmic parallelism can also be exploited, since up to three tasks can be performed concurrently (corresponding to the three independent branches).
Fig. 3.7 Functional block diagram of the address verification system.
Fig. 3.8 Functional block diagram of address feature extraction.
3.3.2 Scaling the address verifier
The partitioning above initially suggests that a pipeline of four independent processor farms with worker processors provided for each stage in the rough ratio 3:1:2:5 will achieve approximate throughput balance within the pipeline, and provide maximum opportunity to exploit data parallelism, and thus minimize pipeline latency. In practice however, it was noted that by combining the word extraction and slant correction stages, a simpler three-stage pipeline could be achieved in which static balance occurs for worker processors in the ratio 3:3:5. The results reported below are for this configuration, in which temporal multiplexing alone is applied at the first stage, line-level data parallelism and temporal multiplexing at the second stage, and algorithmic parallelism, word-level data parallelism and temporal multiplexing at the final stage. It should be apparent that several other processor configurations are also possible.
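As an illustration of how a static balance such as the 3:1:2:5 ratio can be derived, the following C fragment allocates a fixed pool of workers to stages in proportion to their sequential execution times. It is a sketch only: the stage times are hypothetical placeholders, not the measured values of Fig. 3.7, and rounding is handled crudely by letting the last stage absorb the remainder.

    /* Illustrative sketch: allocate worker processors to pipeline stages in
     * proportion to their measured sequential execution times, as one way of
     * arriving at a static balance such as the 3:1:2:5 ratio quoted above.
     * The stage times below are hypothetical placeholders, not the values
     * taken from Fig. 3.7. */
    #include <stdio.h>

    int main(void)
    {
        const char *stage[] = { "line extraction", "word extraction",
                                "slant correction", "feature extraction" };
        double t[] = { 3.0, 1.0, 2.0, 5.0 };    /* hypothetical stage times (s) */
        int total_workers = 11;                  /* workers available            */
        double sum = 0.0;
        int i, allocated = 0;

        for (i = 0; i < 4; i++)
            sum += t[i];

        for (i = 0; i < 4; i++) {
            /* round to nearest; the last stage absorbs any remainder */
            int w = (i < 3) ? (int)(total_workers * t[i] / sum + 0.5)
                            : total_workers - allocated;
            if (i < 3)
                allocated += w;
            printf("%-18s : %d workers\n", stage[i], w);
        }
        return 0;
    }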
3.3.3 Address verification farms
The address line and address word segmentation farms exploit temporal multiplexing and data parallelism and are conceptually similar to the postcode recognition farms described earlier. However, the address feature extraction stage combines data parallelism (the complete address is divided into individual word images) with algorithmic parallelism (workers execute one of three different algorithms, as shown in Fig. 3.8), and therefore operates in a somewhat more complex way than any of the processor farms previously described. For simplicity, the algorithmic parallelism is distributed statically, by allocating one third of the total processors in the farm to each of the three required algorithms (each process has roughly similar static computational complexity). The farmer on the master processor therefore has to buffer the input word image data, since the same data is sent to two of the three algorithmic processes. The first worker process receives word images from the farmer and performs loop detection. The number of extracted loops is returned to the farmer. The second worker process receives the same word images from the farmer and performs the first character segmentation, word case classification and ascender/descender detection, last character segmentation and recognition tasks. It returns the ascender/descender sequence or the recognized last character in ranked order, depending on whether an upper case or mixed case word was detected. This worker process is also responsible for transmitting the first character of the word image after the segmentation process to the third worker process. Finally the third worker process receives the first character of the word image from the second worker and performs character recognition on it. Recognized characters in ranked order are returned to the farmer.
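The farmer's behaviour at this stage can be outlined as below. This is a purely illustrative sketch: the word-image structure, the group identifiers and the next_word()/send_to() routines are placeholders invented for the example, not the PPF template interface. It simply shows each word image being duplicated to the loop-detection and segmentation groups, with the character-recognition group fed by the segmentation workers rather than by the farmer.

    /* Purely illustrative outline of the feature-extraction farmer's inner
     * loop.  The message primitives and the word-image structure below are
     * placeholders invented for the sketch, not the PPF template API. */
    #include <stdio.h>

    typedef struct {
        int word_id;
        int width, height;          /* binary word image dimensions */
    } WordImage;

    enum { LOOP_GROUP, SEGMENT_GROUP, CHAR_GROUP };

    static int next_word(WordImage *w)            /* stub: fabricate two words */
    {
        static int produced = 0;
        if (produced == 2) return 0;
        w->word_id = produced++;
        w->width = 64;
        w->height = 32;
        return 1;
    }

    static void send_to(int group, const WordImage *w)   /* stub: print only */
    {
        printf("word %d sent to group %d\n", w->word_id, group);
    }

    int main(void)
    {
        WordImage w;
        while (next_word(&w)) {
            send_to(LOOP_GROUP, &w);     /* loop detection                     */
            send_to(SEGMENT_GROUP, &w);  /* segmentation, case classification  */
            /* the CHAR_GROUP receives the segmented first character directly
             * from a SEGMENT_GROUP worker, so the farmer sends nothing to it */
        }
        return 0;
    }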
3.3.4 Overall performance achieved
The full address verification PPF was built by connecting the three stages described above in a pipeline configuration. The performance of the verification pipeline was measured by fixing the number of processors in the first two stages and varying the number of processors at the feature extraction stage, as shown in Fig. 3.9. In this figure, the 'ideal' graph represents the speedup if perfect buffering between stages were obtainable and communication overheads could be ignored, but the residual sequential overhead of each pipeline stage's farmer distributing data to its workers is taken into account by applying Amdahl's law [16]. Three sample practical speedup graphs are also shown; these represent parallel configurations in which 3, 5 and 7 worker processors are utilized in each of the first two pipeline stages, and the number of workers in the final stage is then varied. As can be seen, for each configuration, the performance increases fairly linearly as additional processors are added to the final stage, until that stage no longer limits throughput, at which point adding further processors has no effect. Thus the speedup scales incrementally with
the number of processors used, achieving a maximum of about 15 with 26 workers (29 processors total), an efficiency of 52%. Figure 3.10 shows that the throughput achieved with optimal PPF configurations also scales fairly linearly up to 29 processors, but at this level it is still about 80 times slower than the required real-time performance specification.
Fig. 3.9 Speed-up graphs for the complete address verification pipeline.
3.4 MEETING THE SPECIFICATION
The maximum speedup achieved for the verification pipeline was 15 with 29 processors in total, and in this case the throughput (0.12 address/second) and latency (58.7 seconds) specifications are still far from being met. More significantly, only a limited increase in speedup could be achieved by adding more processors, because the communication channels saturate with 15-25 workers in each processor farm. In reality, it is not surprising that the specification has no chance of being met by the verification PPF, as this would require a speedup of around 10³ (the 29-processor configuration remains roughly 80 times slower than the specification at a speedup of 15, implying a required speedup of about 15 × 80 ≈ 1200), which is well beyond the practical scaling range feasible using this technique. Future attempts would require more recent generations of processors with higher communication-to-computation performance ratios. For the postcode recognition pipeline, an overall maximum throughput of just over 2 postcodes/second and mean latency of 3.2 seconds were achieved with a PPF of 32 processors (29 workers and 3 farmers). These results correspond to a speedup of 21.4 for the application as a whole, and an overall processor efficiency of 66.9%. As no stage is close to communication saturation, it was concluded that simply by introducing more powerful
computational engines to augment or replace the processors, the specification could be fully met. The remainder of this section describes a port to an eight-module Paramid machine [352], where each computational node consists of one Inmos T805 transputer and one Intel i860-XP microprocessor [22, 119, 286]. The i860 acts as a high-performance computational engine allowing the transputer to become a dedicated communications co-processor. In our implementations, transputers also took the role of master processors hosting the farmer processes, distributing work packets to workers, so that it was possible to configure a maximum of 3 pipeline stages with up to 8 workers in total.
Fig. 3.10 Throughput scaling graph for the complete address verification pipeline.
Unlike the prototype implementation on the Meiko CS, buffering was used both locally at the interface between each i860 and transputer and globally between each stage of the pipeline. The former buffers are needed to ensure that the i860 does not wait for the transputer and the latter are used to ease flows along the PPF. In particular this reduces the bottleneck at the dictionary stage where there is a bimodal computation time distribution (Fig. 3.3). The best number of slots in all local buffers was found to be at least ten. As in the earlier implementation, the Paramid implementation utilized data parallelism in the preprocessing and classification stages and temporal multiplexing in the dictionary classification stage. Table 3.2 shows the throughput and latency achieved with an optimal 4:3:1 configuration of 8 processors overall within the pipeline, and confirms that with this configuration both the throughput and latency specifications for postcode processing are met. The demand-based farming method timings now show an increase in throughput, because of the added flexibility.
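The local buffers mentioned above can be pictured as fixed-size rings of work-packet slots. The single-threaded sketch below is illustrative only (the real buffers sit between the transputer's communication process and the i860 compute process, and work packets are not plain integers), but it shows the ten-slot structure and the full/empty conditions under which one side must wait.

    /* Illustrative single-threaded sketch of a fixed-size local buffer of
     * the kind described above; at least ten slots were found to work well.
     * Work packets are reduced to integers for brevity. */
    #include <stdio.h>

    #define SLOTS 10

    typedef struct {
        int data[SLOTS];
        int head, tail, count;
    } RingBuffer;

    static int rb_put(RingBuffer *rb, int packet)
    {
        if (rb->count == SLOTS) return 0;        /* full: producer must wait  */
        rb->data[rb->tail] = packet;
        rb->tail = (rb->tail + 1) % SLOTS;
        rb->count++;
        return 1;
    }

    static int rb_get(RingBuffer *rb, int *packet)
    {
        if (rb->count == 0) return 0;            /* empty: consumer must wait */
        *packet = rb->data[rb->head];
        rb->head = (rb->head + 1) % SLOTS;
        rb->count--;
        return 1;
    }

    int main(void)
    {
        RingBuffer rb = { {0}, 0, 0, 0 };
        int i, p;

        for (i = 0; i < 12; i++)                 /* two puts will be refused  */
            if (!rb_put(&rb, i))
                printf("buffer full, packet %d deferred\n", i);

        while (rb_get(&rb, &p))
            printf("consumed packet %d\n", p);
        return 0;
    }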
Table 3.2 Throughput and Latency of the Postcode Recognition Pipeline.

Number of Postcodes   Time (s)   Throughput (postcode/s)   Mean Latency (s)   S.D. (s)
100                    9.23      10.83                      1.81               0.29
200                   17.93      11.16                      2.11               0.37
300                   25.89      11.58                      2.14               0.32
400                   35.11      11.39                      2.23               0.33

3.5 CONCLUSION
The postcode recognition example's key feature when viewed as a system is the comparative ease with which the design can be incrementally scaled so that a given performance can be attained. When performance limits of a particular hardware solution are encountered, the design can also be easily transferred to alternative hardware. In the present climate of technological change (by virtue of Moore's law), order of magnitude advances in processor performance will occur roughly every five years. By capturing the design as a generic PPF solution, hardware changes can be accommodated without completely rewriting the software.
Appendix A.1 OTHER PARALLEL POSTCODE RECOGNITION SYSTEMS
A number of other parallel handwritten postcode systems have been developed, and to some extent the PPF application is a synthesis of these systems. The commonality of these approaches reinforces the need for a standard development methodology. In [182] a transputer-based automatic handwritten postcode reading system is described. The distinctive feature of the Dutch postal system is that the postcode (4 digits followed by 2 letters) is normally written in boxes preprinted on envelopes or postcards. This makes it easier to locate the postcode and segment the individual postcode characters and thereby reduces the complexity of the image segmentation process. The postcode recognition algorithm consists of six steps:
1. Connected component labeling;
2. Location of postcode boxes;
3. Elimination of postcode boxes;
4. Segmentation of postcode characters;
5. Recognition of individual characters; and
6. Verification of the postcode's existence.
In order to achieve the required throughput, the algorithm was parallelized on a processor farm architecture of 36 T800 transputers. The input to the system is the scanned address image. If the postcode is identified, it is printed on the envelope as a fluorescent bar code by the machine. The mean processing time for one address image on a T800 is 3.3 seconds. By applying address-level parallelism, a throughput of around 10.8 items per second was obtained. This implies a speedup value of around 35 with an impressive efficiency factor of 97%. However, detailed information about the parallel implementation has not been described elsewhere. Reference [323] proposed a parallel pipeline architecture based on transputers for real-time OCR applications, such as recognizing scanned documents including handwriting, typescript and figures. The OCR machine consists of two basic modules: character recognition and linguistic analysis. The recognition unit comprises four stages: (1) preprocessing, (2) segmentation, (3) classification, and (4) recognition. In the linguistic analysis stage, the letter alternatives output from the recognizer are combined to form a number of letter strings. These candidate words are then checked against a trie dictionary which accommodates up to 70,000 words. After word look-up, syntax rules are applied to filter out the words which are not acceptable grammatically. Meanwhile, a semantic analysis is carried out to select from the list of words by using the information about the meaning of the words. The results from the semantic and syntax analyses are then combined to reconstruct the words. The system architecture is a pipeline of five tasks. It is reported that real-time performance using this system was achieved, but performance results such as speedup and efficiency were not quoted. Itamato et al. [174] introduced NEPLOS (NEC's Pipelined Parallel OCR System) for postal mail sorting. The system consists of eight basic functions: (1) address block location, (2) line segmentation, (3) character segmentation, (4) normalization, (5) character determination, (6) address collation, (7) address recognition, and (8) postal code recognition. A previously designed machine used a pipeline architecture with processors allocated to each function according to the load and the capacity. However, improvement in processing speed as well as the extension of readable content (e.g. handwritten addresses) was required. Therefore a new multiprocessing architecture was designed which consists of a main central processing unit (MCPU) and a number of recognition processing units (RCPU). Each RCPU has the capability to recognize the postcode and address on mail items independently of other RCPUs. The MCPU controls the status of the RCPUs and performs scheduling of RCPUs upon request. All functions requiring high-speed processing, such as histogramming, image labeling, feature extraction and pattern collation, are implemented using specially designed hardware. It is reported that up to 128 RCPUs can be connected in parallel, but further processing capacity can be
achieved by adding more OCR engines. With the new machine, a reading capacity of 20 times that of the previous system was achieved and a processing speed of approximately 9 items per second for address reading can be expected. Reference [43] presents a transputer-based system for the recognition of handwritten zipcodes. The system consists of preprocessing, zipcode location, zipcode segmentation, zipcode recognition, and zipcode decision. Using a pipeline architecture and implementing either geometric or algorithmic parallelism it was estimated that a processing speed of up to 9 items per second would be obtained with sufficient transputers. Reference [62] describes a parallel architecture called EMMA2E for postal applications developed at Elsag Bailey Inc. This is an improved version of the proprietary commercial EMMA2 multiprocessor which was delivered in 1987. The new architecture differed from its predecessor in that heterogeneous PEs were developed to satisfy the requirements imposed by the applications involved. Therefore, EMMA2 has a very different architecture to PPF systems. The architecture is built on various hierarchical levels. For each level, there is a physical communication channel (bus) which makes data exchange possible. At the lowest level there is the PE, which is the computing unit present on all active modules. It is stated that different types of custom and commercial co-processors (e.g. DSPs) can be used as PE co-processors. The next level is called PN32, which is the multiprocessor module of the architecture and has four PEs. Above this level there are two higher levels called FAMILY and REGION. These are multi-port levels and allow connection between PEs located at different modules. The architecture is designed to have a throughput rate of 10-13 mail pieces per second for typewritten addresses. The research reported in [350] describes a multi-grain parallel architecture, ArMenX, specifically designed for implementing neural networks. Again, this architecture is very different from a PPF architecture. The processing sequence of the system is letter digitization, zipcode extraction, zipcode segmentation, digit recognition, zipcode assembling and verification. The parallel architecture is intended to implement neural networks for the recognition of five-digit numeric French zipcodes. The architecture is organized into three processing layers. The upper layer is built using five Inmos T805 transputers. The middle layer, the so-called Programmable Logic Layer, consists of a network of FPGAs and offers very fine-grain parallelism structured as a high-bandwidth communication processing ring. The bottom layer is constructed using five DSPs. The first transputer receives the whole postcode, picks up the last postcode digit and sends the others to the next node. Each transputer proceeds in the same way. When the digit has been received in a transputer, it communicates with the DSP and the digit is loaded in the DSP's memory. Next, the DSP unpacks the data and performs the recognition algorithms. As the recognition is completed, the DSP generates an interrupt to the transputer. The transputer reads the result and passes it to the previous node. Finally, the head transputer concatenates the five digits.
4 Development of PPF Applications
Previous chapters have described the way by which a first-level PPF design is executed. However, software development requires more than a design. It requires a systematic way of constructing software. This chapter gives an overview of the PPF development cycle, while in Part II we examine each of the development stages in more detail. Software tools have become crucial to the communication of a software development method. In effect, the tools encapsulate the method, in some instances to such an extent that the design method itself becomes of secondary importance. Tools can guide the developer to a lesser or greater extent. We have taken the view that there are already sufficient excellent tools available for the early analysis of an application, provided it is already written as sequential code and not written directly as parallel code. There are also dangers in locking a development system into a toolset. For example, a software tool at the end of its lifecycle may no longer be supported, whereas a tool at the beginning of its lifecycle may involve the developers in product Beta testing. Nevertheless, tool construction is important and this chapter examines the special requirements for tools to aid the development of PPF systems. Reference [117] is a more general study of software tools for parallel systems.
Chapter 5 will consider the need, whenever larger-scale applications are constructed, for a development stage in which the code exists in a purely sequential form. Sequential code can either be legacy code or it can be code written with subsequent parallelization in mind. If the latter enterprise is undertaken it is wise to have design rules ready for the programming team to avoid needless work when the code is parallelized. Most legacy code in the real-time
domain is in the C programming language as many devices (notably DSP chips) only have a C compiler available.
4.1 ANALYSIS TOOLS
Analysis of larger parallel embedded systems may rely on pre-packaged schemes such as the Yourdon dataflow method [134]. The dataflow approach is bottom-up and avoids fixing the design prematurely. Dataflow is intended to approximate the way people see systems. The advantage to the designer is that the system might be applied to a sequential programming environment, a pseudo-parallel or concurrent system, or a parallel environment. The raw dataflow system does not capture timing information, however, raising questions about its suitability for real-time systems. The PARET environment [257] for multicomputers has similarities with dataflow schemes. The parallel application is represented by a network of nodes connected by arcs. Tokens, representing data or control flow, pass between nodes. Buffers can store tokens. State transitions occur according to node output and input policy, which bears a resemblance to modeling by timed Petri-nets [258], usually applied to small-scale systems. The user can zoom in at successive resolutions. The PARET system is suitable for a variety of purposes; examples given are a parallel simulator of MOS circuit timing, and a hypercuboid computer interconnect (a hypercube topology but with a cube of processors at each node rather than a single processor). The aim of the PPF development system has been to produce a machine-neutral system description, which will reflect linguistically based thought processes in developing a design. Computer environments frequently model an aspect of the world. In the PPF toolkit it is important to present such concepts as 'pipeline', 'data farm', 'flow' and 'hotspot', which do not necessarily exist in other visualizers. ParaGraph [153] is a well-known example of a tool which aims at the widest generality. ParaGraph provides 25 displays interpreting the same event-trace data in different ways. In retrospect, ParaGraph's displays lean overly towards the hypercube¹, which at the time the tool was developed was regarded as the most likely general topology, in part because other topologies could be embedded within a hypercube [326]. By its nature, ParaGraph does not specifically support pipelined systems.
¹ A two-dimensional hypercube is a square with processor nodes at each of the four vertices. A three-dimensional hypercube is formed by connecting two-dimensional hypercubes in such a way that corresponding vertices are connected by a link, with the result that each node has three links. Conflict-free routing can be achieved by routing along each dimension in turn. The most notable example of a hypercube topology is the early Connection Machine [160].
4.2 TOOL CHARACTERISTICS
In surveying existing tools, the following user requirements were identified:
Correctness checking. Soft real-time systems merely require verification of program behaviour. Safety-critical and hard real-time systems require something more than verification; either comprehensive traces or formal methods of proof should be considered.
Performance debugging. Slow-downs, bottlenecks or hold-ups should be identified. Processor utilization is important when costing a solution. Cross-architectural comparisons can also guide purchasing decisions.
Prediction models can be used in conjunction with analyses of event traces to satisfy these requirements. A prediction model aids the identification of untoward behaviour which emerges from analysis of the event trace. Changing the communication and computation parameters within the prediction model is a method that has been successfully applied [316] in order to project performance to other machine types from timings at the basic-block level on a development machine.
Should the tools be integrated? The term 'integrated' can be used in the sense that all tools have a similar 'look-and-feel'. Integration can also imply that the output from one tool feeds into another, a toolset as opposed to a toolkit. In the MAD environment [135], there are: EMU, which instruments and monitors the application; ATEMPT, which provides performance analysis and error detection through visualization; and PARASIT, which simulates the application in order to predict race conditions. We decided to avoid the attempt to lock the user into a toolset and possibly lock ourselves into a restrictive toolset environment. On the other hand, the user is guided through a set of actions by the development methodology and the core prediction and analysis tools are integrated. The advantages and disadvantages of integrated toolsets are further discussed in [111].
In a sense, the distinction lies between whether software design or system design is being undertaken. System design has a wider aspect as the software produced may be transferred to different hardware. However, just as the distinction between the development system and software development toolkit has become blurred, so has the distinction between system design and software design.
The form of the user interface needs to be considered. A number of display types for parallel systems have been influential:
• The space-time diagram is common to a number of event-trace systems [395]. Its advantage may stem from the persistence of information displayed, allowing the mind to build up a pattern of activity.
• The diagram-based display, such as SPY for the Paragon [173], is a way of showing process meters indicating parameters such as the instantaneous arrival and departure rates of messages, the activity status of a process, and link activity. A disadvantage of a diagram-based display may be confusion if large numbers of processes are involved, to counter which zooming is possible.
• The state-change display is well suited to showing pipeline activity. In the visual programming tool HeNCE [33] the nodes of a graph change their shading according to activity in the associated process.
• xpvm [124] has a run-time display system for networked computation using a space-time diagram. Bias introduced by the extra messages relaying the event trace makes 'on-the-fly' displays unsuitable for real-time systems.
4.3 DEVELOPMENT CYCLE
The PPF development cycle is shown in Fig. 4.1. Input to the cycle is a structured sequential program.
Fig. 4.1 PPF design cycle.
The term structured is used in the technical sense of a program constructed by top-down analysis of its functionality [397]. As discussed in Chapter 5, the programming language 'C' remains the lingua franca or common tongue of software for embedded RISCs and DSPs, and is entirely consonant with structured programming. Obviously, C++ is also employed now for construction of large-scale systems [207] for reasons of software reuse and as an object-oriented design framework. However, when a run of a program written in C++ is profiled (see Chapter 5), the same functional hierarchy is strongly apparent. Profiling tools become more necessary because the calling sequence is locked into the object structure. Existing sequential code is analyzed by existing run-time profilers. Examples of graphical displays from such a profiler can be found in Chapter 5. However, older profilers which produce tabular format timing information can in principle also be employed. When working with a large-scale software application in C++, a class browser is an invaluable aid to partitioning, as information hiding within an object is potentially at risk from an arbitrary partition. Transfer onto parallel hardware of sections or segments of sequential code may have unforeseen side-effects. It is always best to check the original sequential code with a memory debugger, which shows memory leaks, use of uninitialized variables, array boundary overruns and the like. The output from these analysis tools is a set of sequential code sections which can be incorporated into a template. The template is a high-level way of encapsulating the structure of the worker process. Unlike an object, a template is not a passive repository of the data and operations that can take place on the data. A template actively guides the usage of the software. In origin, PPF templates were programmers' guides but, at a cost in flexibility, a graphical
front end is being introduced. This makes a template resemble a software component [109] which only presents an interface to the user. Ideally, a parallel software component would only contain a parallel structure irrespective of granularity. There are also tentative ways of determining a necessary but sufficient set of operations that the component should support. However, the template represents the stage before that idealization. The design of PPF templates is further discussed in Chapter 7.
Timing data from making test runs on the sequential code is also output from the analysis stage into the performance predictor tool. Library calls to time sections of code can be made on most processors and environments, though these differ considerably in accuracy. In fact, the calls can give a false assurance of accuracy and/or what is being timed. The Unix call gettimeofday returns seconds and microseconds, yet the resolution of most workstation clocks is to the millisecond. Whether a clock records wall-clock time, process time, system time or user process time is also an issue. Reference [163] is a guide to the vagaries of benchmarking parallel machines which also considers timing in general. When a code section has been instrumented, the time within that section can be gathered as a mean or include the variance. For deterministic algorithms, where there are no branches, the variance will not be relevant. However, bear in mind that system noise can disturb deterministic processes to a lesser or greater extent [90]. Some algorithms are data-dependent and require a set of realistic test data to determine the long-term behaviour. The mean and variance (first- and second-order statistics) can in principle serve to determine a statistical distribution for that behaviour. In practice, the distribution may be difficult to determine. However, there are still analytic methods for judging the impact on a complete pipeline.
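A minimal sketch of the kind of section timing discussed above is given below, using gettimeofday to collect a mean and variance over repeated runs. The loop being timed is a stand-in for a real candidate stage function, and, as noted, the microsecond field of gettimeofday does not guarantee microsecond resolution.

    /* Minimal sketch of instrumenting a code section with gettimeofday().
     * The section being timed is a stand-in loop; in a real PPF analysis it
     * would be one of the candidate stage functions. */
    #include <stdio.h>
    #include <sys/time.h>

    static double elapsed_seconds(struct timeval t0, struct timeval t1)
    {
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) * 1e-6;
    }

    static void candidate_section(void)          /* stand-in for real work */
    {
        volatile double x = 0.0;
        int i;
        for (i = 0; i < 1000000; i++)
            x += i * 0.5;
    }

    int main(void)
    {
        enum { RUNS = 20 };
        double sum = 0.0, sumsq = 0.0;
        int i;

        for (i = 0; i < RUNS; i++) {
            struct timeval t0, t1;
            double t;
            gettimeofday(&t0, NULL);
            candidate_section();
            gettimeofday(&t1, NULL);
            t = elapsed_seconds(t0, t1);
            sum += t;
            sumsq += t * t;
        }
        printf("mean %.6f s, variance %.6g s^2\n",
               sum / RUNS, sumsq / RUNS - (sum / RUNS) * (sum / RUNS));
        return 0;
    }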
The performance predictor works by simulation. Simulation can be supplemented by analytic methods, as discussed in Chapter 11. Because it is desirable to compare predicted with actual performance, the simulation should set up a visual impression of activity within a pipeline segment which the designer can compare with recorded activity on the prototyping machine, using a similar display format. The display format of the simulator is deceptively simple:
• The PPF methodology restricts, in terms of the types of pipeline and the use of processor farms, the degrees of freedom in the parallel system development path;
• Extraneous detail is avoided so that the mapping between prototype design and target system(s) is not prematurely fixed.
By alternating simulation and analytic predictions the performance of a complete pipeline can be built up in a piecewise fashion. A configuration file is output from the predictor tool. There is little standardization among the file formats for parallel configurations. Therefore, a set of format drivers is provided. In addition, as configuration files merely provide a topology in the mathematical sense, a high-level description of the pipeline layout is also output.
PPF templates are instrumented, ideally transparently to the user code. This allows output in the form of event-trace files to drive a performance analysis tool. The analysis tool mimics the graphical display for the predictor tool in order to make comparisons between the expected performance and the actual performance. Instrumenting a parallel system is not a trivial affair as no global clock exists. The problems become worse if the instrumentation is also used for debugging purposes, to detect race conditions for example. In this case, instrumentation and the recording of event times may critically interfere with the accuracy of the data: the probe effect. Equally, when an event trace is visualized it shows a physical PPF with feedback routes and folded stages, whereas the predictor tool shows a generalized PPF which is abstracted from the physical implementation. These issues are discussed further in Chapter 6.
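One possible shape for an instrumentation record is sketched below. It assumes a simple binary per-process trace file and is not the actual PPF template format; the point is only that each record carries a local timestamp, a process identifier and a packet identifier, so that traces from different processors can later be reconciled in the absence of a global clock.

    /* Illustrative event-trace record; an assumed format, not the actual
     * PPF template instrumentation.  One record is appended to a per-process
     * trace file each time a work packet is sent, received, or completed. */
    #include <stdio.h>

    typedef enum { EV_SEND, EV_RECEIVE, EV_COMPLETE } EventType;

    typedef struct {
        unsigned long timestamp_us;   /* local clock, microseconds    */
        int process_id;               /* farmer or worker identifier  */
        int packet_id;                /* work packet sequence number  */
        EventType type;
    } TraceEvent;

    static void log_event(FILE *trace, const TraceEvent *ev)
    {
        fwrite(ev, sizeof *ev, 1, trace);
    }

    int main(void)
    {
        FILE *trace = fopen("worker3.trc", "wb");
        TraceEvent ev = { 123456UL, 3, 42, EV_RECEIVE };
        if (trace == NULL)
            return 1;
        log_event(trace, &ev);
        fclose(trace);
        return 0;
    }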
4.4 CONCLUSION
This short chapter has provided an overview of the PPF design cycle. Software tools are now considered an essential support to the design of a system. Careful thought should be given to the choice of tools, though it would be naive to think that there is a perfect solution for all facets of a system's development. By starting development with sequential code, existing tools for uniprocessor systems can be employed. Performance estimation is an important issue for real-time systems so at the heart of the design cycle should lie tools for estimating the likely performance and checking that the specification
has been met. Much of the structure of a PPF system has the potential to be boilerplated, by repeated application of software templates, and this both simplifies and focuses the design of the required software development tools.
Part II
Analysis and Partitioning of Sequential Applications
5 Initial Development of an Application
This chapter is concerned with the initial stage in development when a project is conceived, initiated, and the first steps towards partitioning are taken. As we explain, in PPF there is an emphasis on providing a separate stage of development built around sequential code. In other words, for the type of complex embedded applications which might require parallelization, we do not consider it a practical proposition to go directly to parallel code without first developing a sequential version of the algorithm. As PPF is also intended for legacy code, the form that sequential code takes is not overly prescribed, for example by eliminating unnecessary data dependencies. However, a minimum set of rules to aid the development of parallel code from sequential code could easily be extracted from a perusal of this chapter. The chapter also offers general development advice for parallelizing code.
5.1 CONFIDENCE BUILDING
Before an embedded system involving significant capital outlay is embarked upon 'for real' there is a period of confidence building. The function of the confidence building phase is to convince or persuade the project's backers that the project is feasible, and that the stated performance is achievable. A project's backers may variously be higher management within a business company, an outside governmental or institutional funding agency, or a venture capital organization. The implementors themselves will also need to edge towards a solution, as with any new endeavor the outcomes are by no means
assured. Confidence building is at the heart of the 'waterfall' model [312, 335] of the software life-cycle, in which clearly defined stages are passed through: requirement capture, specification, software design leading to implementation and testing. The 'waterfall' development process, though suitable for very large projects, can be rather ponderous, which is why exploratory programming or prototyping are preferred when time to market is critical. All methods of system analysis and development incorporate the idea of iteration, whereby the design is refined in the light of experience.
Confidence can also be established by some tangible demonstration such as a small-scale version of the intended system. The Marconi target-tracking parallel radar system [97] referred to in Chapter 1, Section A.1 consists of a set of pipelines, each stage of which is a processor node of forty or more modular processors. The very first step in constructing this system of many hundreds of processors was to construct a single node. Before software was written, the timings of the inner loops were found by reference to processor specifications. Subsequently, the performance could be timed by putting in delay loops with the perceived characteristics of the final, yet-to-be-written, software. Later, software tools such as Transim [149] took this idea further, producing a simulation environment in which inner loop timings or estimates could be inserted. Potentially, this approach avoids the problem of estimating performance before a parallel machine is available, as parallel structure is available in the simulator though the simulation itself is played out sequentially. At the heart of Communicating Sequential Processes (CSP) [161] is the idea that parallel systems are in essence compositions of sequential processes. Put another way, any parallel system can eventually be reduced to a set of sequential code segments.
The Marconi real-time example highlights a difference between the software development path of many large-scale systems and that of soft, real-time systems, where timing guarantees are especially important at an early stage. As a radar dish turns it receives a stream of reflected pulses which go through various signal-processing routines before the detection of moving targets. Failure to detect a particular target may not prevent the operation of the radar in the short-term but in the medium term would certainly soon be critical if the target were an aggressor. Developing (say) a database enquiry system for medical records has a lesser degree of real-time criticality and therefore the software functionality can first be ensured. One can conclude that confidence building in parallel systems is equally about establishing performance and functionality, whereas in other domains functionality has a more dominant role. Establishing performance limits for PPF applications will be covered in Chapter 11.
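The delay-loop stubs used in the Marconi node prototype, and later systematized by tools such as Transim, can be sketched as follows. The example is illustrative: the 12 ms per work packet is an invented placeholder standing in for an estimate taken from processor specifications, and a real stub would be dropped into the prototype pipeline rather than run as a bare loop.

    /* Sketch of the delay-loop idea: before the real stage code exists, a
     * stub with the estimated execution time of the yet-to-be-written stage
     * is run in its place so that end-to-end timing can be explored.
     * The 12 ms figure is a made-up placeholder, not a measured value. */
    #include <stdio.h>
    #include <time.h>

    static void stage_stub(double estimated_seconds)
    {
        clock_t start = clock();
        while ((double)(clock() - start) / CLOCKS_PER_SEC < estimated_seconds)
            ;                                /* burn time like the real stage */
    }

    int main(void)
    {
        int packet;
        clock_t t0 = clock();
        for (packet = 0; packet < 10; packet++)
            stage_stub(0.012);               /* assumed 12 ms per work packet */
        printf("10 packets emulated in %.3f s\n",
               (double)(clock() - t0) / CLOCKS_PER_SEC);
        return 0;
    }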
During the phase of anarchic development of parallel systems which occurred in the late eighties and nineties [88] it was unfortunate that many small-scale one-off solutions from research groups emerged but few large-scale commercial systems were built (the Marconi parallel radar system is an honorable exception). Often system analysis was neglected as there was little need to coordinate work within large projects or to build up confidence once initial backing had been secured. A consequence of this phase of stunted projects is that little thought was given as to how the development of parallel systems differed from that of sequential systems.
There are many difficulties in the development of large-scale parallel systems. For example, a popular method of system development is reuse, whereby systems are constructed from a set of pre-existing components. The variety of rapidly evolving parallel architectures [302, 121, 162, 332, 328, 334, 92] and languages [276, 384] inhibits reuse in parallel systems, when compared with the corresponding stability and slow evolution of conventional sequential processors and languages. One solution is to construct a set of parallel components that embody parallel structure but hide the implementation [60]. The solution adopted in PPF is to utilize existing sequential code as the raw material from which systems can be constructed, and also to provide a software template which embodies parallel structure in a general fashion. Given the historical record of parallelism, using a purely parallel solution with parallel algorithms and a parallel language may be perceived by potential backers as a leap in the dark, justifiable only for small-scale systems where little is lost by failure. Using existing sequential code before parallelization means that confidence is already established by the fact that the code works. Timing sequential code segments is also a way of gaining confidence in the performance of the system. In fact, writing code in a constrained sequential environment can be viewed as a necessary preliminary step before parallelization. Subsequent changes can first be applied to the sequential version of the code to ensure the correct results still occur. The changes are then transferred to the parallel version where any failures can be attributed to the parallel implementation and not to the code itself.
As an aside, it should be pointed out that there are reasons for the resilience of sequential programming that go beyond any possible advantages of the model. An essential task of any manager is to assign work tasks to individuals or teams, depending on the project size. However, in multimedia projects there is a clear divide between those software engineers who are algorithmic developers and those who are implementors. It is often difficult to divorce an algorithmic developer from the familiar, personal workstation environment. An implementor will, however, be more interested in the details of the technology than in the precise workings of an algorithm.
5.2 AUTOMATIC AND SEMI-AUTOMATIC PARALLELIZATION
Given that a sequential coding stage in system development may well be needed, in what ways can the transition from this stage to the corresponding parallel partition be automated? It would appear that automation is only possible for a limited range of problems; these do not currently include most
70
INITIAL DEVELOPMENT OF AN APPLICATION
embedded applications, which typically have a complex algorithm and/or data structure. A complex system is one in which multiple algorithms are combined. The algorithms may each involve processing of dissimilar data structures. In contrast, a linear algebra algorithm will involve similar data structures throughout (usually matrices). The parallelization of a single linear algebra algorithm may take place in isolation as such algorithms often form part of code libraries. Automatic parallelization of sequential code for linear algebra can be attempted by data-parallel versions of Fortran [251]. Parallelizing compilers (which convert automatically from sequential code to parallel code) are successful in extracting loop-based parallelism for well-behaved languages like Fortran [281], but less successful where there are intricate data dependencies. In [57], it is argued that parallelizing compilers are unlikely to work in the general case because no information in the code tells the compiler how to schedule sub-tasks, how to map tasks with an appropriate granularity, and how to specify the distribution of a data-structure for a particular architecture. On the other hand, a parallel language may also suffer in practice from being tied to one class of parallel architecture: distributed (partitioned) memory, or shared memory.1 The language Par [57] offers guidance to the compiler in the form of program annotations to direct the parallel implementation. In effect, annotations are an extension of compiler pragma directives. An early example of an annotated language is Kali [238]. The program itself captures the core parallel logic whereas the annotations can be changed according to circumstances. This approach may well work with numerical analysis algorithms, but may not be adequate to cope with the logistics of larger, multi-algorithm applications where any core parallel logic is often not apparent above the surrounding code detail. Annotations have also been taken up in High Performance Fortran (HPF), which was designed for portability. However, they do not directly provide a portable solution as different annotations will be needed for different machines. An alternative is to compile to an intermediate form (F-code) which may contain access invariants [331]. Again though, F-code is largely applicable to Fortran and as such to regular problems as found in linear algebra. Within the Fortran world, the problem that compilers assume worst-case data-dependency may be approached by eager or optimistic evaluation [24] and subsequent roll-back if necessary. Run-time scheduling of parallelization is also a feature of the Jade environment [307]. Jade detects static parallelism within 'C' programs and resolves data-dependencies during a run by means of supervisory software. Jade also employs code annotations. Unfortunately,
run-time parallelism mainly seems to be fine-grained, and as such is unlikely to be of general use in embedded systems. While automatic parallelization may not be feasible, semi-automatic parallelization has more potential. A novel development is the construction of a tool which will aggregate granularity [388] for those parallel algorithms or parallelizations which are too fine-grained (presumably having originally been intended for a machine with a different compute/communicate ratio, as a result of moving from shared- to distributed-memory). Granularity detectors may emerge as a semi-automatic aid to parallelization. The Tag tool [102] is part of an ongoing effort in this direction based on much earlier research [383]. It may also be possible at some future date to simulate the effect of different pipeline partitioning schemes. In [3] address tracing has been adapted in order to analyze potential communication patterns. An address-trace tool, DCompose, simulates the parallel behaviour of a sequential program. The sequential version is annotated with a possible method of data placement. The assembler code is instrumented around each data load or save. This is not a daunting problem on RISC microprocessors because of the absence of multiple addressing modes, since it is only the general pattern of data accesses that is of interest. The intercepting software calculates the processor which would store the data for a particular partitioning scheme were the application to run on a parallel machine. It is, however, unclear how the DCompose method would adapt from monolithic algorithms working on one large decomposed data structure to algorithms which have multiple data structures accessed by a number of algorithms.
¹ In theory, however, one should distinguish between the programming model presented by a language and the machine model onto which it is mapped. The Linda programming language extension, though ostensibly aimed at shared-memory machines, has been implemented on a variety of machines [10] including NOWs with a distributed memory model.
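The kind of calculation performed by an address-trace tool can be illustrated conceptually as below. The block placement, array size and access stream are invented for the example, and this is not the DCompose tool itself; it merely shows how, for a candidate data placement, each traced access can be attributed to an owning processor and counted as local or remote.

    /* Conceptual illustration of the calculation an address-trace tool
     * performs: for a candidate data placement, work out which processor
     * would own each element touched by the sequential program, and hence
     * which accesses would become remote.  All figures are invented. */
    #include <stdio.h>

    #define N          1024      /* elements in the decomposed array   */
    #define PROCESSORS 8         /* processors in the candidate layout */

    static int owner_block(int index)
    {
        int block = (N + PROCESSORS - 1) / PROCESSORS;   /* block placement */
        return index / block;
    }

    int main(void)
    {
        int remote = 0, i;
        int executing_processor = 0;    /* processor notionally running the code */

        for (i = 0; i < N; i++) {       /* stand-in for the traced access stream */
            if (owner_block(i) != executing_processor)
                remote++;
        }
        printf("%d of %d accesses would be remote under a block placement\n",
               remote, N);
        return 0;
    }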
5.3 LANGUAGE PROLIFERATION
The previous section discussed experimental approaches to parallelizing code, often using new languages, or language variants that would not compile on existing compilers. Unfortunately, there are serious difficulties with the concept of 'yet another computer language' for embedded systems. 'C' compilers are still the 'lowest common denominator' or common factor across DSP and real-time RISC. Another compiler for any processor represents a significant development effort. As a more viable and limited alternative, the Parallel 'C' [1] approach (from 3L Ltd.) of introducing parallel constructs into 'C' by library calls has been successful for the transputer, the i860, the Texas Instruments C40, and the Analog Devices SHARC. Java has had some support, for example in VxWorks [387], because of its popularity as an internet language. Many of the constructs of C++ have been employed, which eases the programmer training problem. Despite the similarities with C++, Java may be viewed as closer to a component-based language [169, 344] than to an object-based language. For example, Java prefers references or interfaces rather than pointers. Automatic garbage
collection nevertheless represents a barrier to Java as a real-time language as it introduces unpredictable delays. A number of measures, such as the provision of a Just-in-Time (JIT) or hotspot compiler, have been taken to improve the processing speed of the Java interpreter.² There is also interest in a parallel version of Java; see [47] from an issue devoted to this topic. Because of Java's established momentum, there is already support for software development (such as Symantec Cafe and JBuilder), on a par with that available for the 'C' programming language. The Jini distributed systems layer [99] (also now used in VxWorks) and JavaSpaces [118], with implications for embedded systems, have also been added to the numerous existing application-specific software libraries. With the existing commitment to these languages, it seems unlikely that another language beyond 'C' and Java will now emerge in the embedded sphere.
² A JIT compiler compiles a method's code just in time for its first use, whereas a hotspot compiler additionally introduces dynamic optimization of code such as function inlining [169]. Garbage collection is accelerated as well.
5.4 SIZE OF APPLICATIONS
An inherent weakness of research is that realistically sized applications are often not investigated until late in a project. It may also be difficult to obtain or construct larger scale applications. Equally, the manpower available in a research project is usually quite limited. To address these points, the approach in developing PPF was to utilize existing application codes written by other software engineers whenever possible. All codes were written in 'C'. Three initial applications were:
1. A handwritten postal address recognition application, circa 4500 lines of code, written by computer vision researchers at the University of Essex (see Chapter 3);
2. The Telenor (Norwegian Telecom) hybrid video encoder, H.263, with circa 7900 lines of code;
3. MIT's (Massachusetts Institute of Technology's) 'Eigenfaces' face identification system, with circa 8700 lines of code.
The number of lines of code in each of these applications approaches the limits of university attempts at parallelization, which is about 10,000 lines [268]. Commercial and financial systems are very much larger, up to 10⁶ lines of code, but it can be assumed that for embedded systems these applications approach the correct scale of problem, particularly since embedded systems will usually run on direct or hard memory, with no virtual memory support.
Various versions of the postal address recognition application were parallelized over a lengthy period of time, as part of the PPF research and development program, so the time spent parallelizing this application is not representative.³ The 'Eigenfaces' program took one man-month to parallelize, whereas the H.263 application took about three man-months. H.263 does not have a straightforward pipeline structure like the 'Eigenfaces' application. There are multiple feedback paths. Folding-back the pipeline could not be achieved in a convenient form as with the precursor encoder H.261 [87].
³ However, the final port from a transputer-based machine to an i860-based Paramid supercomputer, reported in Section 3.4, took less than one man-month.
5.5 SEMI-AUTOMATIC PARTITIONING
Semi-automatic partitioning of sequential code can be achieved by a top-down profiling tool. The profiler will identify the principal parts of the algorithm in terms of functions. It is assumed that the functions represent meaningful groupings of the application's activities. The dependencies between functions indicate where it may be possible to aggregate functions to form a computation process suitable for placement on a single worker processor. However, knowledge of data structures within an application is also needed to resolve dependencies. This is accomplished through familiarity with the algorithms involved and is confirmed by reference to the code, hence parallelization normally needs to be undertaken by a programmer who has a detailed understanding of the application algorithm, rather than being purely a parallel processing specialist. In Chapter 10, the speech recognition counter-example illustrates that an intimate knowledge of the data structures involved is essential, as although functions in one branch of the computation act independently of others, these functions also update a dominating global data-structure, the decoder network. Another purpose of profiling is to identify portions of code that take similar times. In the event of feedback, it may be possible to alternate between two sections of code in a particular stage of the PPF pipeline (refer to the H.261 encoder example in Chapter 8). The long-established profiling tool gprof [138] can be used to determine timings and a call sequence. For some types of code, especially recursive routines, there are doubts over the statistical sampling method employed by gprof [283]. Quantify [293], which works by object-code modification, has therefore been preferred for PPF development, and also provides extensive GUI support for interpreting profiling data. In Quantify, each basic block has a 'door-step' module attached which counts the number of entries to the block in the course of a run. The number of machine cycles for each block (less the cost of the entry code) is calculated at the point of object code
instrumentation.⁴ A further attraction of Quantify is that it will work with a multi-threaded application. Threads might be used to simulate parallel applications. An example of a call graph produced by Quantify for 'Eigenfaces' is given in Fig. 5.1.
Fig. 5.1 Call graph for the Eigenfaces application.
Profiling of sequential code indicates a static partition but does not take into account communication costs. If communication can be overlapped with computation then a static analysis is adequate. The same can be said if communication costs are proportionate throughout the proposed stages of a PPF. After static analysis, the pipeline can be adjusted heuristically by altering the number of processors at each stage should the communication costs be disproportionate at a particular stage. For example, increasing the number of processors might also disproportionately increase the communication cost. In this eventuality, it may prove difficult to balance the pipeline. However, no such instances have been found so far and it is safe to say that such cases would be obvious at an early stage.
⁴ Note that PPF does not require the timing accuracy that (say) might be necessary for improving a compiler or a random access hashing function, as the static timings will inevitably be altered once parallelization has taken place.
5.6 PORTING CODE
Once a tentative partition of the code has been achieved the code can be decomposed. In fact, before this happens performance estimation will normally take place. However, the porting details are included below, before the discussion of performance estimation in Chapter 11, because porting code to a parallel platform is actually initially concerned with sequential code. The following steps are required according to [41]. The first set of steps is for sequential code:
1. Check the consistency of the code.
2. Understand the control and data structures.
3. Improve the code quality.
4. Gather runtime statistics.
5. Optimize the most time-consuming parts.
The second set of steps concerns the parallelization:
1. Choose an appropriate data decomposition.
2. Select loops for parallelization.
3. Modify the code according to the data decomposition.
4. Insert communication statements into the source code.
5. Test the parallel program on the target machine.
6. Profile the parallel execution.
7. Optimize/restructure the parallel program.
With the 'Eigenfaces' application [110] almost all these steps were taken. The exceptions were that only obvious algorithmic mistakes were corrected as they were met; and the sequential code was not manually optimized (though the features of an optimizing compiler were used to considerable effect). Extensive changes to code are unwise if that code is at an early stage of development, because of the difficulty of maintaining consistency. In fact, it is preferable to remain faithful to the original sequential version in all but performance. Our experience from parallelizing a range of applications has been that some characteristics of conventional sequential software development make the porting/parallelization task more onerous than it might otherwise be. The principal problem is that distributed-memory machines naturally require data messages to be passed by value. However, in sequential code it is common
However, in sequential code it is common practice to bundle all the parameters that may or may not be needed together into structures (the 'C' equivalent of a record data structure). An address pointer to one or more of these structures is passed to a function. Within the structure there will be further pointers, and indeed chains of three pointers have been encountered in some examples. Unfortunately, programmers do not follow the advice of Ousterhout [266] and notate pointer variables with a suffix Ptr.5 Another bad programming habit (from the parallelization point of view) is to dynamically create a data-structure within a function, but free it elsewhere. One of three techniques might be tried to accomplish the decomposition of the code at function boundaries:
1. All calls by reference can be replaced by calls by value, which in turn are replaced by messages at the point that the sequential program is partitioned. A problem with this approach, apart from the loss of efficiency in the remaining sequential sections, is that the unfamiliarity of the code leads to elusive errors.
2. A more successful method is to change only the type of reference at the points of partition. From the point of view of future modifications it would be sensible to pass the pass-by-value structures as one message. However, if the software has not originally been written with message-passing in mind then an overhead from copying into the message structure arises.
3. Sending separate messages for each data-structure may therefore be preferred. A large number of different types of message are then generated (over 40 in the 'Eigenfaces' application). Systematic organization of the messages avoids problems from varying message lengths. However, verbose processes are a difficulty when later employing an event trace (Chapter 6). A verbose process is defined as a process that emits a large number of messages without necessarily having a large effect on the parallel logic. One verbose process can fill the visualizer display. If the event buffer needs to be flushed by any one process then the accuracy of the complete trace is compromised. However, this is a second-order problem that should be addressed by the event-tracer software rather than the application programmer.
Which of the three approaches for coding the message interface is chosen is a mixture of personal preference, the application's characteristics, and the perceived overheads.
5 Though it is true that a pointer is easily identified at the point of declaration by the * notation, e.g. int *ip;, in the body of the code pointers may occur without this notation. It is time-consuming and tedious to have to continually refer back to the declaration.
The problems identified in porting sequential code to parallel code are easily remedied if it is known in advance that code will be parallelized. It is then a straightforward matter to avoid the difficulties in resolving pointer references by always passing function parameters by value at potential partitioning points within the software.
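A minimal sketch of this ground rule, with hypothetical structure and function names rather than code from any of the example applications, is given below; the same by-value copy that crosses the function boundary can later be carried by a message without altering the callee:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical record that crosses a possible partition point. */
    typedef struct {
        int    label;
        double moment;
    } Feature;

    /* Written with parallelization in mind: the callee receives a copy,
       so the call can later become a message send/receive pair. */
    static int classify_feature(Feature f)
    {
        return (f.moment > 0.5) ? f.label : -1;
    }

    /* At the partition point the same copy semantics map directly onto a
       message buffer on a distributed-memory machine. */
    static int classify_via_message(const Feature *f, char *msg_buf)
    {
        Feature copy;
        memcpy(msg_buf, f, sizeof *f);        /* 'send': pack by value      */
        memcpy(&copy, msg_buf, sizeof copy);  /* 'receive': unpack remotely */
        return classify_feature(copy);
    }

    int main(void)
    {
        char buf[sizeof(Feature)];
        Feature f = { 7, 0.9 };
        printf("%d %d\n", classify_feature(f), classify_via_message(&f, buf));
        return 0;
    }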
5.7 CHECKING A DECOMPOSITION
Porting essentially involves dereferencing pointers whenever they are employed as parameters. When converting to arrays, errors may result due to overstepping array bounds. Purify [151] was helpful in detecting run-time errors of this type. Purify works by inserting extra instructions into the object code wherever there is a memory access instruction. The access is checked against a state table for every byte of heap memory. Clearly this will catch all access errors for dynamically allocated memory, including memory leaks. However, Purify does not detect array bound errors for static memory except when a read is made to uninitialized memory. When converting from big-endian processors (e.g. those from Sun) to little-endian processors (e.g. those from Inmos and Intel) errors may arise due to reversing the byte order. Unfortunately, if the access is to initialized memory one may still "get lucky with a wild pointer", so Purify, though useful, is not an infallible method of detecting memory referencing errors. Figure 5.2 is an example of a Purify display. The figure shows an array bound overstep and a memory leak (artificially generated). Purify will also show the location of errors in the source code.
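The classes of error referred to can be reproduced in a few lines of 'C'; the fragment below is deliberately faulty in the two ways described (a heap array-bound overstep and a memory leak) and is intended only to show what an access-checking tool of this kind should report:

    #include <stdlib.h>

    int main(void)
    {
        int i, sum = 0;
        int *a = malloc(8 * sizeof *a);   /* heap array of 8 elements */
        if (a == NULL)
            return 1;
        for (i = 0; i <= 8; i++)          /* bound overstep: writes a[8] */
            a[i] = i;
        for (i = 0; i < 8; i++)
            sum += a[i];
        /* missing free(a): reported as a memory leak at exit */
        return sum == 28 ? 0 : 1;
    }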
5.8 OPTIMIZING COMPILERS
It has already been suggested that source code should not be changed except where essential as part of parallelization: if the code to be parallelized is still subject to change, then changes to the parallel code should go in lockstep with changes to the sequential version and not be in advance, otherwise the two versions will be difficult to reconcile. It is, however, possible to improve the performance of code without changing the source code by using an optimizing compiler for the target processor. Code developed for parallel systems is, in this respect, no different from code developed for sequential systems: in both cases the advice is to avoid premature optimization [44], which will slow down compilation. Any improvement in such a ubiquitous process as compilation is important in a production environment, though greatly improved processor speed of late has ameliorated the situation. If repeated testing is taking place and/or lengthy compilations are occurring then compilation switches can always be removed temporarily. The sequential version of the code is always the reference version as the sequential environment offers the necessary constraints and range of tools to enable correct working to be easily verified.
Fig. 5.2 A test run of Purify.
Once the code has reached the Alpha testing stage, then optimization can be considered. At this stage, a profiler will highlight the most costly parts of the code and in particular inner loops. This should be well-known already to developers of sequential code. Optimizing compilers can be set by level or sometimes by individual feature. Differing types of code will respond to different optimization techniques. Each section of sequential code intended as a single computation process on the in-house Paramid i860-based parallel machine was compiled by the Portland optimizing compiler [284]. This compiler has a large number of options which can be turned on in order to tune the performance. Broadly the categories of optimization are: local block optimization; global optimization across blocks; software pipelining (overlap of instructions) within a countable loop; vectorization and loop unrolling; and function inlining. The i860 [22, 119], which is a RISC superscalar design (graphics, floating-point, adder, and multiplier units), was primarily designed for linear algebra and as such has features easily exploitable by an optimizing compiler: separate data (128-bit wide) and instruction (64-bit wide) caches, four-way vector processing, three-stage pipelines (and an external memory pipeline). The 'Eigenfaces' application did achieve significant timing improvements from the use of the optimizing compiler on the sequential parts of the routine. There was a 42% additional reduction in time for a single farm version
running with 7 processors. Fifteen per cent of the improvement came from software pipelining. On the other hand, in the postcode recognition application, software pipelining only gained 0.08% advantage in timing. This should not be surprising as there are repeated calculations of KL transforms and Mahalanobis distances within the Eigenfaces application which in turn require calculation of variances within loops, and rely on a library of subroutines available from the well-known 'Numerical Recipes in C' [288]. In contrast, the postcode application has many sections of code characterized by repeated decisions in order to classify characters. Finally, once an implementation is fixed, further improvements in speed can be obtained by substituting hand-coded assembly language routines such as those available for the i860 from Kuck and Associates [198].
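A loop of the general kind that rewards such optimization is sketched below; it is a generic variance calculation with arbitrary data, not the Eigenfaces code itself:

    #include <stdio.h>
    #define N 1024

    /* Countable loops with regular array accesses of this form are
       candidates for software pipelining, unrolling and vectorization. */
    static double variance(const double *x, int n)
    {
        int i;
        double sum = 0.0, sumsq = 0.0;
        for (i = 0; i < n; i++) {
            sum   += x[i];
            sumsq += x[i] * x[i];
        }
        return sumsq / n - (sum / n) * (sum / n);
    }

    int main(void)
    {
        static double x[N];
        int i;
        for (i = 0; i < N; i++)
            x[i] = (double)(i % 7);
        printf("variance = %f\n", variance(x, N));
        return 0;
    }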
5.9 CONCLUSION
Historically, little attention has been paid to the development process for larger-scale embedded parallel systems. Given the need to maintain a version of the software on which algorithm updates can be developed and debugged and which can also be transferred to various parallel machines, it is difficult to see how to avoid an initial stage in which the application is coded for a sequential environment. The constrained sequential version of the code allows a range of existing development, debugging and performance profiling tools to be utilized. Existing tools for automatic parallelization can only realistically be applied to a narrow range of algorithms. Profilers are however of considerable value in exploring potential partitions, and memory-access checking tools can be useful for debugging parameter-passing across parallel partitions. If legacy code is to be decomposed then a strategy for identifying the contents of messages is needed. However, if it is known at the outset that an application is to be parallelized then a set of ground rules for writing the initial sequential code can easily be specified, which will considerably simplify the process of parallel decomposition.
6 Graphical Simulation and Performance Analysis of PPFs
Chapter 4 gave an overview of the development process (summarized in Fig. 4.1), which shows the various stages considered in this chapter. Chapter 5 considered the early development of an application. This chapter considers subsequent stages of the PPF development, and example performance prediction and analysis tools (designed as part of our research and development of a PPF design methodology) are described. Prediction and analysis are linked because a similar graphical format is used in both the predictor and analyzer software tools. The input to the predictor tool is a set of timings, derived from test runs on the sequential code version of the software, whereas the input to the analyzer is a timed recording of communication events between the parallel processes making up the PPF. The predictor and analyzer tools are intended to be integrated with a templating tool (described in Chapter 7). Before a graphical format for a template tool was considered, programmers' templates were produced for various parallel and distributed setups. Recent work has examined a Java RMI (Remote Method Invocation) version of the template. The integrated software toolkit is called APTT (Analysis, Prediction, Template Toolkit). The predictor tool is based upon a discrete event simulator [131]. While it is possible to simulate both synchronous and asynchronous pipelines, only asynchronous pipelines cannot be completely solved analytically. Determination of analytic results for asynchronous pipelines requires the use of waiting-time distributions, which are available for exponential distributions [63] but require queueing approximations for other distributions. A discrete-event simulator was constructed without difficulty as an alternative to analytic prediction.
A simulation also enabled start-up and wind-down behaviour to be found, which analytic methods cannot provide.
6.1 SIMULATING ASYNCHRONOUS PIPELINES
Previous models of performance and scheduling [197] have supposed that work tasks are divided into jobs. The number of jobs within a task can be varied to reach a benign scheduling regime, depending on the statistical nature of the job service-time distribution. There may also be savings in message size by grouping jobs into a logical task. In image processing, for example, when spatial filtering of several image rows (jobs) is combined into one task, there is a reduction in border size. Conversely, where a task represents a physical unit, then splitting it into its constituent parts can reduce latency. This is because the latency of the subtasks can now be experienced in parallel. An example of a physical task is postcode recognition. Splitting the postcode into characters (jobs) reduces the postcode latency because the character recognition latencies can be played out in parallel. At present, the task size in our simulator may vary between stages of the pipeline but not within a stage. Single jobs are passed between stages in a pipeline. A task is assembled from jobs held in an inter-stage buffer. The latency shown in a running display is the per-job pipeline traversal latency. Job latencies are grouped into their output tasks, enabling the spread of latencies to be seen. The per-task latency metric is found at the final stage by selecting the highest latency within the set of jobs making up a task. Ordering constraints when grouping jobs within the pipeline are not simulated as they are application dependent, but such degradation can be estimated by the use of order statistics (Chapter 11). For example, in the postcode example of Chapter 3 there is an output ordering constraint as the postcodes must leave the pipeline in the order they entered, but breaking the postcode into characters may cause one character requiring extra processing time to hold up others.
6.2 SIMULATION IMPLEMENTATION
Our simulation tool was written in the Java™ 1.1 [64] programming language (version 1.2 appeared towards the end of this work) for reasons of portability. Java enables the portable distribution of applications without revealing the source code through the intermediate medium of byte-code. Java comes with a comprehensive graphical library, though implementational weaknesses have limited its usefulness. Using a somewhat slower semi-interpreted language with additional byte-code integrity checking in the virtual machine caused us to carefully review approaches to dynamic display. As mentioned in Section 5.3, improvements to Java's speed are on-going including JIT compilers
and now hot-spot compilers. It is also possible to use 'final' methods to aid the compiler to optimise, and to judiciously arrange when objects are instantiated. When verifying the simulation against analytical results, the simulation update order was chosen to suit the calculation algorithm. The simulation loop was subsequently modified to perform updates in the order that the display is updated. An array indexed by pipeline stage and worker process records the outstanding work left at each worker process. A further array records the latency to date experienced by each job. Advantage was taken of multi-dimensional arrays, each dimension of which could have a variable number of elements — a feature of Java. When a job passes to an inter-stage buffer the latency history is passed with it. Because inter-stage buffers can be unbounded they would normally be implemented as a linked list. However, Java avoids pointers ostensibly for security reasons and destructor methods are replaced by background garbage collection, preventing memory leakage. The utility vector class methods allow the storage of object references and hence dynamic data structures. On locating the object, its contents are unpacked. The capacity of the vector is covertly and automatically doubled when necessary. At each step of the simulation the global minimum remaining work time is located. All other work times are decremented on each worker process and similarly all jobs waiting in buffers have their latency incremented. If the global minimum is found on a worker processor at the output stage, the latency and throughput characteristics of the pipeline are updated on the display. Before updating, if bounded inter-stage buffering is set, a check is made to ensure that there are empty buffer slots at the next stage. If not, a new global minimum must be found, though the output stage will always remain unblocked. The simulator is implemented as a single thread, so as to allow the incorporation of other user threads at a future date. Figure 6.1 is a simplified snapshot of the simulation process. Jobs are grouped into tasks, which may differ in size between stages of the pipeline. Tasks being processed carry a cumulative latency. Similarly, jobs within interstage buffers accumulate latency, as do jobs waiting in local buffers (not shown in Fig. 6.1). Service times are selected from a specified distribution on a per-stage basis. The simulation update cycle consists of finding the global minimum time (after tie resolution) from among the worker service times. In Fig. 6.1, the minimum is two, which value is deducted from all service times and added to all waiting jobs, resulting in Fig. 6.2. Finished jobs are moved onto another stage or output.
Fig. 6.1 Simplified snapshot of PPF simulation.
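The update cycle just described can be sketched as follows. The sketch is written in 'C' for consistency with the other fragments in this book (the tool itself is in Java), uses fixed-size arrays in place of the tool's dynamic structures, and omits tie resolution, buffer bounds and task grouping:

    #include <stdio.h>

    #define STAGES   3
    #define WORKERS  2   /* workers per stage, fixed here for brevity */

    static double remaining[STAGES][WORKERS]; /* outstanding work per worker      */
    static double latency[STAGES][WORKERS];   /* latency of the job being served  */

    /* One update step: find the global minimum remaining service time,
       deduct it from every worker and add it to every job's latency. */
    static double update_step(void)
    {
        int s, w;
        double min = remaining[0][0];
        for (s = 0; s < STAGES; s++)
            for (w = 0; w < WORKERS; w++)
                if (remaining[s][w] < min)
                    min = remaining[s][w];
        for (s = 0; s < STAGES; s++)
            for (w = 0; w < WORKERS; w++) {
                remaining[s][w] -= min;
                latency[s][w]   += min;
            }
        return min;   /* simulated time advances by this amount */
    }

    int main(void)
    {
        double t = 0.0;
        remaining[0][0] = 3; remaining[0][1] = 2;
        remaining[1][0] = 5; remaining[1][1] = 4;
        remaining[2][0] = 6; remaining[2][1] = 7;
        t += update_step();   /* here the minimum of 2 is deducted, cf. Figs 6.1 and 6.2 */
        printf("time=%.1f stage0/worker0 remaining=%.1f\n", t, remaining[0][0]);
        return 0;
    }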
6.3 GRAPHICAL REPRESENTATION
A graphical interface [287, 86] is the user's view of a toolkit, and in terms of person hours of design effort has been the most expensive part of the APTT development. Our design aims to exploit familiar user interface paradigms for navigating data entry screens and utilizing simulation and trace tools in order to reduce the user's learning time. The interface was written in Java, which enabled a trivial port between the Windows NT and Unix™ operating systems which would not have been possible with X-window software, except in the not widely used instances of X implementations under Windows NT. A problem with previous analysis visualization tools, such as ParaGraph [153], is that a highly animated display occurs, rather like a cartoon film. The user may find it difficult to establish a pattern. Moreover, in seeking generality, with 24 ways of presenting data, no structure to the ParaGraph tool's usage was provided. An over-animated display also reinforces the sequentiality of the simulation whereas the pipeline in reality has both local and general parallelism.
Fig. 6.2 PPF simulation after update.
Figure 6.3, showing a summary of statistics entered, is taken from the APTT data-entry 'wizard' which has a familiar look-and-feel to ease user adaptation. Figure 6.5 shows a snapshot of the predictor running a simulation of the postcode recognition application described in Chapter 3. The pipeline backplane occupies the main window with details of the stage activity such as buffer and processor usage available from subsidiary windows. Processor activity is shown using color through analogy with stop/go displays. Again using the semantic associations of color, the communication arrows change color from black, through red to white to highlight communication 'hotspots'. Provided the shades range in tone then this type of display is suitable for monochrome as well as color displays. The arrows also widen and contract, which is a format well-known in static displays [357]. Mean bandwidth, rather than instantaneous bandwidth is displayed, so that the rate of display change is smoothed out allowing the viewer to establish a pattern. The color scaling is adjustable to center on and bracket critical data rates as the variation across the full bandwidth range might otherwise be too low to show up; see Fig 6.4. Latency is also indicated in a persistent display. Jobs are marked off at task boundaries, with the task latency determined by the slowest job. Though persistent displays convey more information, they need to be balanced with
features marking progress, which is why the processor activity diagram and message motion arrows are included. Qualitative, graphical information is balanced by the quantitative instantaneous information in the center of the display. The user can control the speed of simulation; customize the display; and select the random-number generator.
Fig. 6.3 APTT data-entry 'wizard'.
Fig. 6.4 APTT scaling window.
Also included in Fig. 6.5 are a couple of pop-up windows (activated by double-clicking on the farmer icons) which display the activity of individual farms. The pop-up windows themselves have further pop-up capability (not shown) which gives the number of jobs processed and the activity level of each worker process. In general, qualitative information is given priority but quantitative information is always available. Figure 6.6 shows the post-mortem analyzer window (used for reviewing actual performance after an instrumented run), with the postcode recognition application trace running, and the configuration set-up from an intermediary format file. As instrumentation is inserted transparently to the application the display does not try to infer performance statistics other than run-times.
Fig. 6.5 APTT predictor window.
Fig. 6.6 APTT analyzer window.
6.4 DISPLAY FEATURES
Animation using individual tokens for each message makes it difficult to distinguish between levels of activity, since the display is not persistent: after the passage of a token, it reverts to its previous state. The use of color in these types of display was mentioned in Section 6.3, and has been found to give a clearer indication of overall average activity levels. Because the passing of just one message is normally not enough to change the arc color state, screen flicker is reduced. The end result is a slowly changing display, which as desired establishes a pattern of activity in the user's mind because the display is persistent. The most difficult aspects of using a graduated color display are arranging step size and choice of color, and conveying when an arc's communication bandwidth has been saturated.
Another indicator can be used to show saturation. The worker and farmer states are also indicated by color, either active, idle, or blocked through full buffers. The internal state of any worker is accessed by a pop-up window. However, notice that true pop-ups are not part of the Java Abstract Window Toolkit (AWT)1 [123]. Arrangement of graphical objects on a display panel is time-consuming, though to maintain portability object placement should be relative. The grid-bag layout was found to be the most convenient pre-supplied format. The tool was tested on a Unix™-based workstation as well as a PC to confirm portability. Features of the display (Fig. 6.7) include:
• Fine control of each stage's communication parameters and work distribution;
• Zoom windowing on individual farms;
• Communication 'hotspot' indication through arc color and size;
• Color state change of farmer and worker activity (busy, idle and blocked);
• Access to individual idling times and work history;
• Inter-stage and local buffering monitoring;
• Running indication of simulation time and performance metrics;
• Controls over simulation display speed and state;
• Selection of computation rate and interconnect bandwidth scaling (relative to a base setting);
• Help through WWW pages.
6.5 CROSS-ARCHITECTURAL COMPARISON
To allow the performance within APTT on one machine to be extrapolated to another we sought a simple but widely recognized characterization. A two-parameter model of performance has now been applied to a variety of parallel architectures [163], though not apparently previously in a predictor tool. For example, in Fig. 6.8, which is a log-log plot, the Paramid reaches half its maximum bandwidth with messages of about 60 bytes (first parameter, established by linear regression) before reaching steady state (second parameter).
1 The Java AWT uses the peer graphical components of the underlying operating system at some loss in efficiency. Subsequently, Swing Components [374], which are written purely in Java, have been introduced as a longer-term solution to graphics programming in Java.
Fig. 6.7 Screen shot of the simulation predictor tool.
In this case, the user need only know the message length and the target processor to project results. Measurements on an individual Paramid processor, an i860, showed that a two-parameter characterization might be insufficient for computation as there was dependency on the computation kernel being performed, with additional cache effects evident. Figure 6.9, showing results for four out of seventeen test kernels at full compiler optimization, indicates two linear phases for some kernels where the vector length being computed stays within and steps outside the cache. However, it is not a difficult matter to store in a look-up table the results for each machine and for each kernel. The user then selects a kernel, vector size, and processor to enable the performance tool to give a first-order approximation by means of scaling the computation times. This is likely to be more helpful for regular computations such as orthogonal transforms. An alternative characterization is to use the computational intensity of the code, f, in units of flops per memory reference. Table 6.1 records steady-state performance (which is well below theoretically optimal performance), r∞_c and r∞_oc being respectively in-cache and out-of-cache performance in units of Mflop/s.2
The settings for compiler optimization level three show a reverse trend to the expected increase in performance as intensity increases; compare level two in Table 6.1. However, we use the higher figures in order to make a fair comparison between the two processors.3
Table 6.1 Paramid Computational Performance
f          1      2      4      5      6      8      9      10
r∞_c      25.6   18.3   15.9   15.6   15.4   15.1   13.2   13.3
r∞_oc     18.1   16.1   15.7   14.8   14.7   14.9   15.0   13.2
Fig. 6.8 Point-to-point communication performance for the Paramid.
Applying the out-of-cache computational intensity test (level three setting for the Paramid), a Dec Alpha (21064 at 175 MHz) server was found to scale over the i860 by a factor of 3.0 for f = 5 with a -fast compiler setting. As this is a load-dependent measurement, the arithmetic mean of five selected results was taken. Table 6.2 records the projected timings if 21064s were to be substituted for i860s, otherwise keeping the system the same.4
2 The out-of-cache measurements arise by using vector lengths designed to exceed the cache size, and by causing a cache flush between tests through repeatedly accessing a large array in random fashion.
3 Note that at f = 5 out-of-cache performance is similar for both compiler options.
Fig. 6.9 Selected computation performance for the Paramid.
The longer out-of-cache test figures were chosen because the lower-resolution clock would otherwise affect the accuracy,5 though in-cache timings indicated a larger scaling, particularly for higher values of f, indicating an efficient memory hierarchy. Low-resolution software clocks may be a deterrent to the use of a processor in some hard real-time systems, though probably not for the soft real-time systems considered here. Spatial filtering is an example of an image processing operation commonly performed with integer operations, whereas benchmarking kernels, being derived from the numerical analysis community, usually employ floating-point operations. There would therefore also appear to be a need for a set of agreed kernels specifically for image-processing and other integer/fixed-point operations, which are frequently required in embedded applications.
4 This is not a practical possibility as the choice of processor and coprocessor is dictated by cost and compatibility, the i860 and transputer both being little-endian [59] (the byte ordering is the same to allow exchange of complex data types, in particular floating-point numbers).
5 Clock resolution was ~0.01 s on the Dec Alphas as opposed to ~0.001 s on the Paramid, mean timing error ±6.1%.
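The projections in Table 6.2 can be thought of as simple scalings of this kind. One common form of two-parameter communication model expresses the transfer time for an n-byte message in terms of an asymptotic bandwidth and the message length at which half of that bandwidth is reached; the sketch below uses illustrative figures, not the measured Paramid or Alpha values:

    #include <stdio.h>

    /* One common two-parameter form: transfer time for an n-byte message,
       given an asymptotic bandwidth r_inf (bytes/s) and the half-performance
       message length n_half (bytes).  All figures below are illustrative. */
    static double transfer_time(double n, double r_inf, double n_half)
    {
        return (n + n_half) / r_inf;
    }

    int main(void)
    {
        double r_inf  = 1.0e6;   /* assumed asymptotic link bandwidth          */
        double n_half = 60.0;    /* half-peak message length, cf. Fig. 6.8     */
        double t_meas = 35.1;    /* example measured run-time, cf. Table 6.2   */
        double scale  = 3.0;     /* assumed computation scaling factor         */

        printf("1 KB transfer: %.2e s\n", transfer_time(1024.0, r_inf, n_half));
        printf("projected run-time: %.1f s\n", t_meas / scale);
        return 0;
    }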
Table 6.2 Simulated Results for a Variety of Pipelines

                        Paramid                     Dec Alpha (21064)
Pipeline                Run-time    Throughput      Run-time    Throughput
Worker Ratio            (s)         (pcodes/s)      (s)         (pcodes/s)
3:2:1                   35.1        8.6             11.9        23.3
2:3:1                   27.6        11.0            9.4         29.8
4:3:1                   24.3        12.4            8.1         34.5
3:4:1                   23.9        12.6            8.3         33.3
3:3:2                   23.5        12.8            8.0         34.7
3:4:2                   18.6        16.1            6.3         43.9
4:4:2                   17.7        17.0            6.0         46.3
2:5:1                   27.3        11.0            9.1         30.0
6.6 CONCLUSION
Tool-centric system design and development is well established. The most important part of tool design is the human-computer interface, which should present an easily assimilable model of the system's behaviour. For this reason, considerable attention was paid to PPF graphical representation, which has the inherent advantage of presenting a single consistent model for all applications. Representing parallelism on a serial machine is difficult. The APTT toolkit explores the use of color for this purpose, presented via the Java portable graphical environment. This enables a single software toolkit to be used both in the Unix and Windows environments. PPF pipelines should also be portable across different processor types, and for this reason a simple, two-parameter model of cross-architectural performance comparison has been utilized. The current research implementation of APTT is available at http://esewww.essex.ac.uk/research/vasa/pstespa/aptt/.6 A description of the PSTESTPA project and the APTT software can also be reached from http://esewww.essex.ac.uk/research/vasa/pstespa/index.html.
6 Note that this software is the outcome of a research project intended to investigate ideas rather than to produce commercial-strength software, and is made available for the benefit of the community. No guarantees can be given as to its performance.
7 Template-based Implementation
The leitmotiv or driving aim of PPF design is to introduce genericity into the creation of parallel systems. Given this aim it is not surprising that ways of automating PPF implementations have been explored. A high-level template for data farms seemed an obvious way of doing this, as it is relatively simple to link data-farm templates together to form a pipeline. The concept has been tested by implementing a template with the same functionality on several typical platforms. There are ideally two versions of the template: the first (parallel logic) is employed to gain confidence that the transition to a parallel implementation from the sequential code has worked; and the second (performance tuning) is instrumented (allowing timings to be made) in order to check that the desired performance goals can be reached. The reader should refer back to Fig. 4.1 to see how this stage fits into the PPF design cycle. Instrumentation perturbs a parallel system, potentially disguising data races and adding overheads. It is also not easy to instrument a system without a global clock, an aspect discussed in Chapter 12. A number of performance-tuning templates might be needed in the course of time as and when a software pipeline is transferred to new target hardware. Templates already crop up in various related settings. Reference [60] introduced algorithmic skeletons, which are a high-level template written in a functional language. The skeleton provides a parallel control structure but hides implementational detail from the programmer. A model for calculating the cost of the parallelization is provided. A number of similar structured approaches to parallel programming exist; for a comparative review refer to [329]. Coincidentally closest to the PPF approach, in the sense that farms and pipelines occur among the skeletons, is the Pisa Parallel Programming Language, for example in [366].
The categorical datatype (CDT) framework [329] for list programming is a related higher-level model which steps further back from the control structures necessary for a particular parallel architecture and, being polymorphic, is also without a preferred granularity. An attraction in principle of the skeleton/CDT approach is a formal and more complete software development scheme. More generally, the software component is a user-level variant of the programmer's template. Like the template, the software component encapsulates dynamic behavior, and is not passive as is an object in object-oriented design [239] or in an object-oriented language. The template is, therefore, at an equivalent level to the JavaBean or DOOM component [195] but with an informal interface, being available in source code form. A template implementation need not reproduce every feature of the design and can be built on existing software facilities, thus easing the implementational burden. Templates are used through a text editor in the manner of the Linda program builder [9]. The programmer can slot in sequential code sections, and form messages, provided the message-passing structure is preserved. Other parts of the structure are transparent to the programmer, such as message buffering and event-trace instrumentation.
7.1 TEMPLATE DESIGN PRINCIPLES
The abstract design of the data-farm template was motivated by engineering utility. Communicating Sequential Processes (CSP) [161] was selected as a model of parallelism, in part because it has been successfully disseminated amongst the programming community, which in itself is a practical consideration. CSP presents a static process structure, that is, there is no need to support dynamic process creation. A common characteristic of embedded applications is that the overall process structure is known in advance. Communication between two processes is solely via a channel, causing a synchronous rendezvous between sender and receiver. Otherwise processes can be scheduled in an arbitrary interleaving, though CSP provides a process algebra which in principle can establish correctness. CSP's process algebra implies two important features from the efficiency standpoint: low-overhead context switching by means of multi-threading; and the ability to alternate responses in a non-deterministic fashion. For ease of programming, communication between threads on the same or different processes should be transparent and symmetric. These aims were also engendered in the transputer design and associated programming language, occam. However, the intention of our template design was not to emulate the transputer virtual machine, as in [271], but to incorporate the model in a looser fashion, relaxing non-critical features, and adding new features to enable smooth operation of the data farm.
In CSP, channels are a means by which the normal semantics of a programming language may be extended in a seamless way to include communication between processes. In [158], the absence of a channel from programming languages is viewed as an historical accident due to memory costs. Channels form an implicit name space, without the need for a name-server. In a number of implementations of CSP there are no compiler checks to prevent a programmer writing to both ends of a CSP channel. Use of a template explicitly designs out this possibility. There is also a problem of excessive 'plumbing' inherent in the CSP channel, which results in the need for the programmer to keep a check on a large number of channels and ensure that messages correspond. Again, a template alleviates this problem. CSP, as implemented in occam2, is not sufficient for data-intensive applications such as low-level image processing, as excessive memory-to-memory data movements may be needed between threads. Unfortunately, on recent hardware, improvements in memory access significantly lag, and may obviate, gains in processor speed. An indication of the problem is that the following techniques are all aimed at ameliorating memory latency: multiple caches; interleaved memory; and decoupled architectures. Therefore, shared memory was added to our template design, while it is not present in the CSP model. Support for shared channels has also been added in occam3 [29], originally intended for the T9000 series transputer. Similarly, the need to relax the occam specification has been recognized in [379], where semaphores, resources, events, and buckets are alternative synchronization mechanisms to the channel. In our template design, counting semaphores have several advantages as a means of controlling access to shared memory. In PPF-style applications, semaphores are not needed extensively, so that the danger of unforeseen interactions from disparate parts of the program is not present. Access to a critical region is not denied by a semaphore if data will not be compromised. An implementation of semaphores requires one process queue, a counter and a locking mechanism. The monitor construct was not included for a logical reason: it allows only one active call at any one time; and for a practical reason: its operation may be hidden from the programmer [382]. Other contention access primitives, for example the serializer [157], though convenient for the programmer, were not considered suitable as resources are used inefficiently. CSP does not include complex communication structures. An asynchronous multicast from the farmer process to its worker processes was however deemed necessary in our template design to reduce message traffic. The multicast can also act as a means of synchronization for computational phases as well as physical reconfigurations. The multicast does not initiate any reply messages, thereby restricting circular message paths. Care was taken that normal communication could not overtake multicast communication, as a multicast will often contain start-up parameters. Message records were added as a useful structuring device, the equivalent of occam's protocols. To enable reuse, the communication structure was made transparent to the type of application messages.
A tag message precedes each data message, giving the length of the message for intermediate buffers and its type for the application code. For soft real-time applications, two-priority, pre-emptive context switching is both necessary and sufficient. In the data farm, the higher priority is needed to respond to communication events if it is possible to provide an asynchronous response. Once the communication event has been serviced the responding thread deschedules. As a base-level facility, implementable on most platforms, round-robin context switching was also supported within the template. Scheduling of the ready queue is by a FIFO mechanism. As two levels of priority and a FIFO queueing policy are not sufficient for hard real-time applications, [367] describes an alternative CSP-based real-time kernel. Round-robin scheduling may also be viewed [216] as unsuitable for such applications, as it reduces response time. If there are many potential inputs, alternation may be a hindrance because of the need to monitor all the inputs [386]. However, for embedded applications in the PPF pattern the number of inputs was not expected to become large enough to make necessary a tree of multiplexed requests, and unlike hard real-time systems, deterministic response was not required. The data-farm paradigm is deadlock-free [378], thus avoiding the principal disadvantage of non-determinism. Buffers are employed at the user process level to mask communication latency and to increase bandwidth. The buffers will normally be transparent to the application programmer. Input buffers reduce the time spent waiting for work and output buffers smooth out access to the return channel. Ideally, a one-slot local buffer is enough to mask communication but in practice a few more slots were needed because of variance in task computation time and communication hold-ups. CSP's synchronous communication was retained. However, buffers act as agents [244] for the application which make it appear to the application that there is an asynchronous send and a blocking receive. Additional communication structure is provided to enable data-farm template instantiations to be grouped in a pipeline, with options for I/O if the data-farm in question is a terminal stage. The same buffering module is employed between pipeline stages but more slots are normally necessary to smooth flow between the stages (than for local buffers). A similar buffered pipeline design methodology has already been developed in [214]. A method of predicting the number of buffer slots for local and inter-stage buffering is discussed in Chapter 11. Additional data-structures such as a linked list may be needed if arriving message data needs to be re-ordered before passing to the data farm. Demand-based data farming within the template is, in most cases, a way of scheduling work with limited loss of efficiency. At start-up time, a static scheduling phase is needed to fill buffers. Indeed, for constant task computation times, the time to fill all buffers should exceed single task computation time.
Instrumentation, recording communication events, is a built-in feature of the template. Experience [303] shows that instrumentation is difficult to include at a later stage and that a static design will need to be tuned after an initial implementation. In different circumstances, instrumentation has been included in a number of environments such as Jade [307]. Correct termination of the data-farm template is necessary for both the collection of outstanding results and the gathering in of trace files. It was anticipated that the farm might need to be reconfigured if the workload altered during the course of a run. On termination, the data farmer employs a sink process, which is broadly in line with the methods discussed in [377].
7.2 IMPLEMENTATION CHOICES
Implementation of the design, in an object-oriented sense, is a process of establishing idioms, that is low-level features not present in the underlying software that are desirable. At first sight, existing communication harnesses appear to be a natural way of implementing a data-farm template in a distributed and/or parallel environment. The de facto standard for such message-passing communication harnesses is PVM [124] whereas MPI [143] is the de jure standard, and continues to be developed as MPI-2 [144]. However, message-passing [21] is different in kind from CSP, being asynchronous, and moreover starts with the pragmatic premise that a wide range of facilities is preferable to a tightly constrained communication model. Reference [125] is a comparison between PVM and MPI, where PVM's support for heterogeneous processors within the virtual machine, and fault-tolerant features are pointed out. MPI-2 has introduced coordination of parallel I/O (on Unix-like systems) and remote memory access (through memory windows). As PVM has an underlying dynamic model of parallelism, not static as in CSP, daemons are also necessary to spawn additional tasks and to act as name servers. Where user-level daemon processes act as communication intermediaries a performance burden arises from the extra messaging needed to communicate between user application and daemon. Many embedded applications simply cannot support this extra software superstructure. A number of features of PVM militate against implementing the design model outlined in Section 7.1. PVM does not support internal threads as there is just one process per PVM task. Therefore, a PVM task is reliant on probes if non-blocking communication is to occur. PVM lacks CSP's nondeterministic operator and has no support for internal concurrency. PVM has a restricted set of message-passing primitives, but in version 3.3 a system of buffers involving memory-to-memory copying restricts performance, though this may be remedied. Broadcasts in PVM 3.3 are not true broadcasts but one-to-many transmissions, necessary because the generality of PVM must accommodate networks without true broadcast capabilities. Different versions of PVM may exist on target machines (vendor implementations rather than Oak
Ridge implementations) for compatibility reasons. For example, the version of PVM available for the Paramid machine used in many of the examples in this book is restricted to a host/worker configuration. As all communication is routed over a SCSI link, performance is limited. MPI has avoided daemons or servers in its design. Instead, groups of processes coordinate within the context of a communicator. MPI-1 has a static model of parallelism whereas MPI-2, as part of a convergence with PVM, has introduced dynamic process creation, but again through the communicator mechanism. Therefore, MPI is most aptly described as semi-dynamic. Obviously members of a group still have to generate traffic to coordinate among themselves. MPI is thread-safe but there is no support for internal thread creation in either version. Other features of MPI make the template design model difficult to implement. The message-passing standard MPI [143] does not supply a nondeterministic operator. In MPI, multicast is available but by way of its semi-dynamic process groups.1 MPI has a proliferation of communication primitives that can lead to confusion. Whatever the advantages in portability, it may be unclear which message-passing modes are efficiently implemented on the target machine. PETSc [26] is a remedy to this diversity for numerical software libraries. MPI's derived datatypes, intended to improve performance because they avoid copies out of user space, are too low level for many tastes. A real-time version of the MPI specification [250] is in the process of addressing performance concerns. Given the limitations of PVM and MPI, it may be preferable to examine what native facilities are available to implement the main features of a data-farm template efficiently. In particular, a customized implementation gives the option to use a true broadcast if the LAN supports that function.
7.3 PARALLEL LOGIC IMPLEMENTATION
One of the template implementations discussed in this chapter was on workstations running SunOS 4.1. On Unix systems, the cost of context switching for a heavyweight process in a worst-case scenario might include swapping the user context from disc and cache flushing. Threads (lightweight processes) are a way of reducing the response time in either interactive or real-time settings. Communication can either be by means of a virtual circuit or by datagrams. The socket is an abstraction through which the programmer can interact with the networking software, typically by binding the socket to a source and destination address, by establishing a connection when a circuit is required, and by sending messages via the designated socket. Remote procedure call (RPC) was not chosen as a basis for the implementation because of the known overheads, which in [40] were already shown to be an order of magnitude above a procedure call on the same machine.
1 PVM supports group communication through a further group manager daemon.
On the SunOS 4.1 o.s., the socket application programming interface (API) [342], included from BSD Unix, was combined with lightweight processes (LWP) or user-level threads.2 The result made possible implementation of most of the required features of the data farm. The BSD version of Unix implements the socket API directly in o.s. kernel space. The main weakness of the original SunOS thread system is that all threads, and indeed all processes, share the one kernel instance. Therefore, it was necessary to employ asynchronous communication to ensure that a LWP does not prevent a context switch by blocking on a communication call. However, asynchronous communication is reliant on signal handling, which in standard Unix implementations occurs as a result of a context switch internal to the process.
7.4 TARGET MACHINE IMPLEMENTATION
A template was also implemented for the Paramid machine. From the user perspective, the Paramid appears as a multi-transputer (each with attached i860 accelerator) workstation application co-processor, onto which jobs are allotted on a first-come-first-served basis by a host-based scheduler (a Sun workstation). Interprocessor communication is effected in the first instance by the i860 interrupting the transputer (via the transputer event pin) to signal a request. The transputer inspects a common memory area in order to service the request, releasing a software lock after fulfilling the request. In Inmos parallel 'C' [172] for the transputer there is a set of thread library calls. Interaction with the hardware communication-link engines from within a thread is well defined on the transputer and posed no special problems. There is a choice of point-to-point physical channels or virtual channels. The virtual channel system (VCS) [78] enables direct global communication at a small cost from link sentinel support software. An alternative version of the template was designed for a target distributed system, namely a set of processors running VxWorks [387] connected by a network. VxWorks is a Unix-like single-user o.s. for real-time development work, which comes into the class of priority-driven o.s. with enhanced time services [216]. VxWorks is the market leader in real-time operating systems. VxWorks includes a performance simulator, and Java modules (with Jini support).
2 'Lightweight process' was Sun Microsystems' name for a thread. IEEE POSIX standard (P1003.1c) threads (pthreads) [171] are implemented on the later Solaris 2 o.s. [218], also from Sun. The signal handling and scheduling schemes vary in pthreads but are not fundamentally different from the viewpoint of an implementation of a data farm. Note that POSIX threads are implemented in Solaris 2 [189] as user-level threads that act through an intermediary thread (also called a lightweight process) which multiplexes a number of threads on to a limited number of kernel threads.
102
TEMPLATE-BASED IMPLEMENTATION
port). However, inhouse kernels as opposed to commercial operating systems predominant in this market.3 In VxWorks, there are no heavyweight processes, only threads with optimized context-switching and response to events. The data-farm template module is written in 'C', cross-compiled on a PC running the Windows 95/NT o.s. and loaded and linked on attached 68030K boards. The 68030 microprocessor [345], has an instruction set with testand-set and compare-and-swap, suitable for implementing semaphores which indeed are a built-in feature of VxWorks. The 68030K boards are linked by an Ethernet LAN, with VxWorks providing a source-compatible BSD 4.3 socket API for using the network. There are versions of VxWorks for a variety of embedded microprocessors4, such as (in 1993) the MC68040, the SPARC and SPARClite, the Intel i960, MIPS R3000, by 1997, the Pentium Pro, and more recently for smart devices. A current large embedded system would consist of a rack of PowerPCs connected by Myrinet (already mentioned in Chapter 2). The relationship between such a system and a cluster of PCs (with FPGA accelerators) linked by Myrinet (the Tower of Power from Virginia Polytechnic Institute) is discussed in [181]. 7.4.1
7.4.1 Common implementation issues
Figure 7.1 is an exemplar multi-threaded structure which implements the design principles. All of the features were explicitly put in place on the Unix version of the data-farm template. Urgent messages were implemented solely on the Unix system as they are not an essential feature. As I/O is efficiently buffered on Unix systems [189], the provision of a separate I/O thread may be nugatory. On the Paramid system, thread scheduling is implicit. Further, there was no need for message recovery for multicast messages. The VxWorks system version of sockets does not implement network broadcasts, but communication is asynchronous and thread-specific. Thread scheduling is user selectable, either priority-based with pre-emptive options or round-robin. In designing the worker module an initial consideration was the nature of message traffic. Messages occur in two parts: a tag and the body of the message. The tag must include: the size of the message to follow; a type indicating whether a message is a broadcast or a request for processing; and a message number. The message number is intended to signal to the receiver which data structure to position for the accommodation of the second part of the message. The message number might also be used for other purposes. The body of the message should include a function number as the first field of the message, but otherwise the message record structure was undetermined. Compared to a sequential version of a program, excessive data movement may be involved in the formation of messages for a parallel implementation.
3 For a review of other real-time operating systems refer to [313].
4 A characteristic embedded microprocessor such as the i960 requires a minimum of auxiliary chips.
Fig. 7.1 Simplified layout of a single farm.
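A minimal sketch of the two-part message format described above, with hypothetical field names rather than the template's actual record layout:

    #include <stdio.h>

    /* Hypothetical tag preceding every data message. */
    typedef struct {
        unsigned length;     /* size of the body to follow                */
        int      type;       /* e.g. broadcast or request for processing  */
        int      msg_number; /* selects the receiving data structure      */
    } MsgTag;

    /* Hypothetical body: the first field is always the function number. */
    typedef struct {
        int    function;     /* index into the worker's dispatch table    */
        double payload[4];   /* application-defined remainder             */
    } MsgBody;

    int main(void)
    {
        MsgBody body = { 1, { 0.25, 0.5, 0.75, 1.0 } };
        MsgTag  tag  = { (unsigned)sizeof body, 0, 42 };
        printf("tag: %u bytes, type %d, number %d\n",
               tag.length, tag.type, tag.msg_number);
        return 0;
    }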
The application programmer will need to balance utility with complexity. The potential for a large number of different messages was the reason for the restriction to a rigid message format. Each work request message is serviced by one application thread function (Fig. 7.2). In 'C' it is possible to use an array of function pointers to which the function number forms an index. Though not entirely satisfactory, data and parameters are passed to the function as globals. The result is that each function can be referenced simply by a number. From the figure it will be seen that the application thread was divided between a public interface and a private part into which different functions can be slotted. This makes it possible to extract the functions from sequential code constructed with structured programming and place them into the slots. Circular incoming and outgoing buffers service each application process (Fig. 7.3). Access-contention control was set up through semaphores. Separate buffer slots are kept for tags, otherwise there is a danger of the small slots needed for tags being expanded to handle larger messages. Buffers are automatically enlarged by dynamic memory allocation.5 To avoid the possibility of deadlock if a series of broadcast messages were to arrive at asynchronous intervals, at least one extra buffer slot is needed over and above the number of messages sent out at loading time. This method is a variant of an algorithm which is proven in [311].
5 Systems with strict deadlines might wish to allocate all memory before the main processing cycle, as memory allocation can be a costly operation in terms of time, is not a fixed cost, and may occur at unforeseen times.
Fig. 7.2 Generic application thread.
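The dispatch arrangement can be sketched as follows; the worker functions and the single global parameter are illustrative only, not the template's actual interface:

    #include <stdio.h>

    /* Hypothetical global parameter area filled in from the message body. */
    static double g_param;

    static int do_filter(void)   { return (int)(g_param * 2.0); }
    static int do_classify(void) { return g_param > 0.5 ? 1 : 0; }

    /* The function number carried in the message indexes this table. */
    static int (*const dispatch[])(void) = { do_filter, do_classify };

    static int service_request(int function, double param)
    {
        g_param = param;                 /* parameters passed as globals */
        return dispatch[function]();
    }

    int main(void)
    {
        printf("%d %d\n", service_request(0, 3.0), service_request(1, 0.7));
        return 0;
    }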
7.5 'NOW' IMPLEMENTATION FOR LOGIC DEBUGGING
A thread system was first established on the worker and farmer modules using the LWP library. Context switching is within a Unix heavyweight process. A process's stack is multiplexed by the LWP library between the various LWPs. The LWP library maintains minimal state for each LWP, for instance a program counter and the LWP's priority, so that if a LWP's time-slice is interrupted it does not complete the remainder of the time slice when it is rescheduled. This seemed appropriate for the data farm, but other thread systems (e.g. VxWorks) maintain more state. To allow for the rudimentary nature of the thread system, the Unix version of the template caught any stack overflows between LWPs.6
Fig. 7.3 Generic buffer.
The data-farm design calls for pre-emptive communication threads and background round-robin thread scheduling. However, the SunOS LWP library is non-preemptive and priority-driven. A compromise was to set up all normal threads to be switched by a round-robin scheduler, but after polling for a communication event a communication thread immediately deschedules if no response occurs. Round-robin context switching was provided in the farm template by a software scheduler LWP:
while (TRUE)
    sleep(TIME_QUANTUM)
    reschedule active process queue
6 Stack partitions can be red-zone protected by the LWP library. A red area of memory is defined as being unallocated and uninitialized. Therefore, any attempts to read, write, or free a red zone will throw up an error. Other colors of memory have a subset of these restrictions.
A time quantum of 100 μs was satisfactorily used in tests. External communication was implemented by means of the BSD socket API. An effort was made to tune communication within the data farm by utilizing some of the specialist features of the API in coordination with the LWP library. The fcntl system call enables socket communication to be made non-blocking, as was required in the design model. Reliable stream communication occurs via the underlying TCP/IP transport-level protocol. However, care was still needed in coding the template channel primitives as message contents could be lost if either the message system call unblocked or if the data stream delivered an incomplete section of the intended message. Standard Unix has a global error number, errno, which in this instance was important as it indicates message status. Therefore, the global errno was mapped to a local LWP copy. Conveniently, the LWP context in the relevant template threads was augmented to include errno by means of a LWP library facility. The template sockets were set so that Nagle's algorithm [254] was not applied (whereby small messages are delayed in order to avoid congestion on long-haul networks). The fcntl call can also set a socket to give an asynchronous response and this facility was employed for broadcast and urgent sockets. The socket was awakened by a BSD Unix I/O signal. Where more than one reception socket is needed then it is necessary to inspect each socket in turn as an I/O signal occurs on a per-process basis. In order to map signals to threads an agent was needed. An LWP agent sleeps at the highest priority until a signal arrives. The agent then makes a rendezvous with its associated LWP. The template broadcast thread or urgent thread services all pending communication before descheduling. Internal communication between the template threads, principally between the buffer threads and an application thread, was implemented by wrapping the LWP rendezvous to resemble a CSP channel. Unlike CSP's channel, the LWP rendezvous is asymmetric. The designated sender passes the addresses of an input buffer and an output buffer. The receiver LWP is rescheduled when the sender and the receiver reach the rendezvous point. The sender is rescheduled by the receiver once it has processed the buffers in any way. In the template implementation, the rendezvous was used simply for synchronization, so as to avoid an extra memory-to-memory copy. Data were transferred not by the rendezvous buffer mechanism but by fast memory transfer through ANSI 'C' memcpy into global buffers. The template design specifies semaphore regulation of global buffers. However, Sun's LWP library supports a monitor data structure with associated condition variables, which can be signaled. The action of the monitor is hidden through mutex variables. Counting semaphore wait (p) and signal (v) were fashioned from this primitive, illustrated in pseudo-code:
wait:
    lock mutex
    wait:
        lock mutex
        while (semaphore count is zero)
            wait on semaphore condition variable
        decrement semaphore count
        unlock mutex
    signal:
        lock mutex
        if (semaphore count is zero)
            set contention
        else
            unset contention
        increment semaphore count
        if (contention is set)
            signal on semaphore condition variable
        unlock mutex
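For readers more familiar with POSIX threads than the SunOS LWP monitor, the same construction can be expressed with a pthread mutex and condition variable. The sketch below is illustrative only and is not the template's code; the type and function names are invented.

    #include <pthread.h>

    /* Hypothetical counting semaphore built from a mutex and condition variable. */
    typedef struct {
        pthread_mutex_t mutex;
        pthread_cond_t  cond;
        int             count;
    } counting_sem_t;

    void sem_init_count(counting_sem_t *s, int initial)
    {
        pthread_mutex_init(&s->mutex, NULL);
        pthread_cond_init(&s->cond, NULL);
        s->count = initial;
    }

    /* P operation: block while the count is zero, then decrement. */
    void sem_wait_count(counting_sem_t *s)
    {
        pthread_mutex_lock(&s->mutex);
        while (s->count == 0)
            pthread_cond_wait(&s->cond, &s->mutex);
        s->count--;
        pthread_mutex_unlock(&s->mutex);
    }

    /* V operation: wake a possible waiter and increment the count. */
    void sem_signal_count(counting_sem_t *s)
    {
        pthread_mutex_lock(&s->mutex);
        if (s->count == 0)                 /* a waiter may be blocked */
            pthread_cond_signal(&s->cond);
        s->count++;
        pthread_mutex_unlock(&s->mutex);
    }

The while-loop re-check in the wait operation plays the same role as the pseudo-code's loop on the semaphore condition variable: a woken thread only proceeds once the count is genuinely non-zero.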
No LWP is scheduled while another LWP is within either critical region, marked by the mutex, unless that LWP has been descheduled by wait. The queue to the semaphore is assumed to be FIFO.

The nondeterministic operator was simulated in the template by the socket API select system call. Blocking was avoided by using select only in its polling form. As more than one socket may become ready for communication, a routine was added to shuffle the order of selection according to a suitable, ideally random, re-ordering. The intention was to ensure fairness in the selection of inputs. An alternative which was easy to implement was to move the current socket tag to the end of a list and move all the other tags forward one step.

Message records were implemented by means of socket API vectored messages. Again, care was needed in writing routines for vectored messages, as it is possible for one or more parts of a vectored message to arrive incomplete or not at all. The template code identifies which part of the vectored message has been dropped and picks up the rest of the message stream.

Broadcasts under BSD Unix are restricted to datagrams, for which delivery is not guaranteed. In tests, it was found that if successive broadcasts were sent there was a distinct possibility that broadcast frames would be
dropped.7 Therefore, the template incorporates robust checking. A recovery mechanism was necessary whereby after a timed interval a repeat request is sent. The timer was implemented by an alarm-clock interrupt. Each work message is stamped with the sequence number of the last broadcast to be sent. The broadcast thread maintains the sequence number of the last consecutive broadcast. It will then be evident if either a broadcast has been dropped or a work message has overtaken a broadcast.8

The application programmer can prepare application code for incorporation into the template largely as if the worker processes and the farmer process were single-threaded. The main exception is non-reentrant system calls [189], which should be avoided. The programmer can check the working of the application by reference to an event trace which is time-stamped (or event-stamped) by a global clock. [112] gives a simple example of this approach, whereby the code for a one-dimensional FFT was inserted in a worker template. Two farms were formed for the row/column processing stages of a two-dimensional transform. The code for the intervening matrix transpose formed a centralized stage catered for in a single farmer. The whole formed a three-stage 2-D FFT parallel pipeline.

A scalar logical clock was not difficult to implement as it requires no extra message passing. The clock update algorithm required a minimum of calculation:

    Initialize:
        set logical_clock to zero
        set clock increment (usually to one)

    Logical-clock procedure:
        if (message is a receive)
            logical_clock = maximum(logical_clock, received_time_stamp)
        increment logical_clock
        if (message is a send)
            time-stamp message with logical_clock

The farm system does not have simultaneous messages arriving; were it otherwise, ties would be broken by the process identities. The scalar clock may be extended to a vector clock, which allows ordering of internal events as long as the processes concerned are causally related (by a message). The components of the vector are the sending process's most recent scalar clock values for all processes. The overhead from passing a vector of clocks, with the most recent timings at any one process for all other processes, grows with the
7 The reception socket was again in non-blocking and asynchronous mode.
8 Duplicate broadcasts pose no extra difficulty.
number of processes, while the scalar logical clock keeps a record with minimal perturbation. Generally, one should bear in mind that the pattern of message passing on a distributed system might be different to that of the target machine. The intention is to catch unexpected orderings. A scalar logical clock is also employed in the ATEMPT trace system [135].

At this stage in development, the trace display was via the post-mortem visualizer, ParaGraph [153]. A standard trace file format [392] was used, compatible with ParaGraph. The format includes a broadcast field but does not include multicast, which is understandable as the destinations are difficult to specify if the record size is restricted. However, if several farms are employed within the PPF pattern, multiple per-farm multicasts can take place. These were emulated by creating multiple message records in the trace file. Multicasts were stamped with the source and a message-type code not used elsewhere. Post-processing changed the multicast message to a set of messages with the same timestamps but different destinations (ParaGraph does not assume a monotonic clock). Initialization and termination messages could also be removed at trace-file post-processing time.

Figure 7.4 is a screen-shot showing a trace taken from a single-farm test run. The plotted lines represent messages from one process to another. The sign of the gradient (the slope direction) of the line indicates the direction of communication. Processor 0 hosts the farmer process, which initially loads three buffer slots with work on each of four worker processes. At each cycle of subsequent processing: a broadcast is sent; a request for work in the form of processed work is serviced; and an urgent signal is sent to one of the processes (on processor two).9 Because broadcast messages, urgent messages and normal work messages are handled by separate threads within the worker process, it is necessary to maintain distinct logical clocks for each of these threads.10 At the end of processing, the trace files for each thread and each process are merged and subsequently sorted into order.
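A scalar logical clock of the kind described above amounts to a few lines of C. The following sketch uses hypothetical names and is not the template's implementation.

    /* Hypothetical scalar (Lamport-style) logical clock, one per traced thread. */
    typedef struct {
        unsigned long ticks;      /* current logical time */
        unsigned long increment;  /* usually 1             */
    } logical_clock_t;

    static void clock_init(logical_clock_t *c)
    {
        c->ticks = 0;
        c->increment = 1;
    }

    /* Call on every traced event; for a receive, pass the sender's time-stamp. */
    static unsigned long clock_tick(logical_clock_t *c,
                                    int is_receive,
                                    unsigned long received_stamp)
    {
        if (is_receive && received_stamp > c->ticks)
            c->ticks = received_stamp;    /* take the later of the two clocks  */
        c->ticks += c->increment;         /* every event advances logical time */
        return c->ticks;                  /* value used to stamp a send        */
    }

Because the stamp travels inside messages that are already being sent, the clock adds no extra message passing, only a word or two per message record.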
7.6 TARGET MACHINE IMPLEMENTATIONS FOR PERFORMANCE TUNING
Apart from multicast, most of the template communication primitives were already present in Inmos parallel 'C'. The existing system software (Fig. 7.5) includes servicing of run-time I/O requests on all modules by means of a run-time executive (RTE). I/O is multiplexed onto the SCSI link to the host. The multiplexer channels are set up conveniently by VCS software. Only one process running on the i860 can communicate with the system interface
9 The urgent signal is an optional facility provided to enable a response to interrupts.
10 The size of each trace-record file data structure should be regulated in order not to perturb the application, which could occur, for example, through excessive paging.
Fig. 7.4 Logical clock trace of a farm test run.
program running on the transputer. Naturally, this process was the worker application thread, which was supplied with a public interface and a set of protected services, exactly as in the Unix version.

Instrumentation was the main sub-system missing from the Paramid system software (though console monitoring of link activity is available and useful). The existing interface program was enhanced with a trace recorder and synchronized clock process (Fig. 7.6). To substitute the new interface program, the application object code is booted onto the i860 network and in a second loading phase the transputers are booted up. The interface program then restarts the i860. The local clocks are updated by periodic pulses from a monitor process. On receipt of a message from the i860 application program, a trace record is generated, time-stamped by a call to the local clock. All the processes mentioned run at high priority, as it is important to service the i860. Where a trace is made on a transputer-based process, the clock should also run at high priority so as to reduce the interrupt latency, which for a single high-priority process is 58 processor cycles (2 μs). If need be, an additional process is run at low priority [32] with the purpose of monitoring processor activity. The process simply counts each time it is activated before descheduling itself. If the processor monitor is called relatively frequently, the processor can be assumed to be relatively idle. Internal monitoring of processes is not necessary if there is limited competition for the transputer's time. If the interface program could determine the destination or source of a message by its contents, these arrangements would be enough. At present, the communication primitive on the i860 is augmented to include these details.11

Many of the external communication features of the data-farm template were implemented in VxWorks exactly as in the Unix-based system.

11 The Paramid shared-memory data structure can usually be changed without disturbing the pre-compiled kernel routines.
Fig. 7.5 Paramid system software.
A data-farm worker module, consisting of application and buffering threads, is spawned from an initializing thread. Remote spawning was accomplished by writing an iterative server, as RPC daemons are unavailable in VxWorks. The internal state of the VxWorks system is almost completely user-accessible and in many cases user-modifiable. For example, the OS clock is programmable and cache flushing is specifiable. In fact, the data-farm template structure is needed to avoid anarchic use of the facilities. Internal channels were emulated in VxWorks by a message-queue primitive which, restricted to a single slot, fulfills the same purpose. A wrapper was provided to make it appear that the same channel is used for intra- and inter-processor communication. The queueing discipline on semaphores is selectable, though for compatibility with the data-farm template in the other environments a FIFO discipline was chosen.
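As a sketch of the channel emulation (assuming the standard VxWorks msgQLib interface; the wrapper names are invented and this is not the template's code):

    #include <vxWorks.h>
    #include <msgQLib.h>

    /* Illustrative CSP-like channel built on a single-slot VxWorks message queue. */
    typedef struct {
        MSG_Q_ID q;
    } chan_t;

    STATUS chan_init(chan_t *c, int maxMsgLen)
    {
        /* One slot, FIFO ordering: a second send blocks until the receiver has
         * drained the previous message, approximating channel synchronization. */
        c->q = msgQCreate(1, maxMsgLen, MSG_Q_FIFO);
        return (c->q == NULL) ? ERROR : OK;
    }

    STATUS chan_send(chan_t *c, char *buf, UINT nBytes)
    {
        return msgQSend(c->q, buf, nBytes, WAIT_FOREVER, MSG_PRI_NORMAL);
    }

    int chan_recv(chan_t *c, char *buf, UINT maxNBytes)
    {
        return msgQReceive(c->q, buf, maxNBytes, WAIT_FOREVER);
    }

The same wrapper can front either a local queue or a queue fed by a network task, which is how a single channel abstraction can serve both intra- and inter-processor communication.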
Fig. 7.6 The Paramid monitoring layout.
7.7 PATTERNS AND TEMPLATES
It is possible to construct larger software objects, frameworks, by grouping software components, just as a pipeline of data-farm templates forms a PPF. High-level architectural patterns with design reuse in mind [122] are a recent informal feature of the object-oriented approach to software development. A pattern is an abstract design which can be reapplied in a variety of situations. For example, the model-view-controller and the presentation-abstraction-controller [46] are two abstract patterns for organizing graphical user interfaces. However, adoption of pattern-oriented software has not been widespread in parallel computing, because of an emphasis on performance and hence on customized solutions. Nevertheless, a PPF can be seen as a pattern; see Fig. 7.7. Certainly, viewing software in architectural terms has been widely perceived as beneficial.
Fig. 7.7 PPF as a pattern.

7.8 CONCLUSION
The benefit of a clearly defined design process is increased software productivity, and there is no reason why parallel computing should not share in this gain. Embedded software for parallel systems can particularly benefit from this endeavor as embedded software is expensive to produce and in some cases, for example in avionics, is continually reused over very long periods of time. A way of capturing the features of the design that can be reproduced on distributed and parallel systems alike is required. Presently, the cluster or NOW running as a distributed system is a low-cost way of testing the parallel logic of an application, once it has been partitioned. A workstation holds the core sequential application algorithm, while the cluster implementation [194] holds the core parallel version of the application. Target versions running on performance-oriented hardware are spawned as and when they are needed. Again the template captures the same functionality as was present in the parallel logic version. This chapter has detailed how the farm design, which is a relaxed version of CSP, can be mapped onto two typical distributed development environments and also more generally onto a range of target embedded parallel systems.
Part III
Case Studies
8 Application Examples

The essence of PPF is practicality. In this chapter, four case studies are presented in which PPFs have been used to parallelize substantial applications. Here substantial means that the code lengths are significant and, moreover, there are a number of algorithmic components. To reiterate, while single-algorithm implementations make convenient exemplars, in many 'real-world' applications the interaction of multiple algorithms is more significant in performance terms than any one algorithm. Nevertheless, in Chapter 9 a few small embedded PPF applications based on a single algorithm are presented.

The case studies are related to image communication and vision. Two of the case studies are concerned with coders/decoders (codecs) for video sequences [347]. With the advent of Digital Versatile Disc (DVD) drives, video decoders are becoming widespread in consumer electronics. For example, the Accelerated Graphics Port (AGP) bus [96] on PCs has provision for a decoder in order to reduce bus bandwidth or to read from a DVD. Similarly, Sony intends to provide an embedded decoder in the Emotion Machine IC for the PlayStation 2 [82]. Both decoders follow the MPEG-2 standard, which in its general structure is closely related to H.261, discussed in the first case study.

Hybrid video codecs, that is those codecs which make use of both transform and stochastic algorithms, are computationally asymmetrical. The decoder is simpler than the encoder (indeed it is embedded within the encoder) and, therefore, as it has a regular structure, has been implemented in hardware at the VLSI level. Before the transfer to VLSI takes place, algorithms have to be tested against a range of sample video streams, necessitating the type of accelerated coding performance evaluation outlined in Chapter 1, Section A.1.
Fully featured encoders are more difficult to transfer to hardware. The H.261 encoder is also difficult to parallelize as a PPF because of the synchronous constraint imposed by previous-picture feedback and quantization-level feedback. Nevertheless it was possible to overcome this problem, in the case study, by employing a folded-back pipeline. Section 8.1 is a comprehensive study of an H.261 PPF. The H.263 encoder, considered in Section 8.2, is capable of a 6 dB peak signal-to-noise ratio (PSNR) improvement in performance over H.261 [130] when used in advanced mode. However, the improvement comes at a cost in software complexity.

The next case study, Section 8.3, is connected to the work on codecs in the sense that facial recognition could be applied to a stream of video surveillance images. In fact, for recognition purposes a single frame is searched for a head image. Subsequently, the face and face features are transformed to a feature space where identification is easier. Depending on the application, various real-time processing deadlines may exist.

Finally, Section 8.4 contains three case studies on optical flow, again using a stream of video frames to overcome reliance on any particular two-frame sequence. Because of the volume of data, and the computational intensity, real-time operation of optical flow is still far away for some of the proposed detection systems. Buffering of up to nineteen frames is required, whereas only the preceding frame is buffered for motion estimation in video codecs (with about eight frames in MPEG-like group-of-pictures buffers). Whereas motion estimation in standard codecs is usually based on a 16 x 16 pixel macroblock, in optical-flow algorithms a motion vector for every pixel is sought. Therefore, parallelization aims to reduce the runtime both to facilitate further research, and to provide tolerable throughput in live applications of optical flow.
8.1 CASE STUDY 1: H.261 ENCODER
Figure A.6 shows a block diagram of the structure of an H.261 encoder. The H.261 coding algorithm [37, 130] is a hybrid algorithm which utilizes motion-compensated differential pulse code modulation to generate an inter-frame difference picture, and then codes the picture differences using a discrete cosine transform (DCT). Motion estimation, motion compensation, and the DCT are all carried out with reference to 16 x 16 pixel picture 'macroblocks', each of which is subdivided into four 8 x 8 pixel luminance (Y) and two 8 x 8 pixel chrominance (U, V) blocks. The coefficients of the DCT (arising on a block-by-block basis) are quantized, thresholded and coded (along with x and y motion vectors for each macroblock) using a variable length coder, before being buffered to match the bit-rate of the constant bit-rate transmission channel. The fullness of the buffer is monitored and used to adjust the quantizer step size, which determines the resolution and number of DCT coefficients transmitted. Where motion cannot be reliably estimated (e.g. the first frame in a sequence,
or parts of an image with greater than 15 pixels translation compared with the previous frame) intra-frame coding is applied using the DCT alone: this obviously generates a much higher bit-rate. To minimize the run-time of the H.261 encoder algorithm simulation described in Sections 8.1.2 and 8.1.3, motion estimation was carried out as a separate off-line process using consecutive pairs of uncoded image frames. This generated a file of pre-computed motion vectors which were then read in at the same time as the corresponding image frame for subsequent encoding. The H.261 encoder algorithm described in Section 8.1.4 was subsequently developed as an enhancement of the earlier encoder, and included motion estimation within the algorithm implementation.

8.1.1 Purpose of parallelization
Image sequence coding algorithms are computationally intensive, due in part to the massive continuous input/output required to process up to 25 or 30 image frames per second, and in part to the computational complexity of the underlying algorithms. In fact, it is only just possible to implement the H.261 encoder algorithm [49] for quarter-CIF (Common Intermediate Format) (176 x 144 pixel) images in real time on present-generation DSP chips such as the TMS 320C30 [380]. In this case study, H.261 algorithm simulations developed for standards work and written in C were parallelized to speed up execution on an MIMD transputer-based Meiko Computing Surface. Results presented are based upon execution times measured when the H.261 algorithms were run on sequences of 352 x 288 pixel CIF images.

8.1.2 'Per macroblock' quantization without motion estimation
Table 8.1 summarizes the most computationally-intensive functions within the first implementation of the H.261 encoder and is derived from statistics generated by the Sun top-down execution profiler gprof [138] while running the encoder on 30 frames of image data on a Sparc2 processor. Processing times have been normalized for one frame of data. The functions listed in the table constitute 99.6% of total execution time. Figure 8.1 shows a simplified representation of Figure A.6, restructured to emphasize the forward pipeline exploited in the PPF design model and the feedback mechanisms. A restructuring of this form is generally straightforward if the algorithm has been implemented using a procedural language, since it corresponds directly with the top-down structure and sequential program listing. Fig. 8.2 shows how the functions of Table 8.1 are distributed among the five pipeline stages shown in Fig. 8.1:

T1 - this picture file input and frame initialization (functions 1-4 in Table 8.1);
Table 8.1 Execution Profile for Initial H.261 Encoder without Motion Estimation Sequence

      Function Name                      Normalized Exec. Time (s)
  1   clear_picture                      0.097
  2   copy_field                         0.102
  3   read_picture_file                  0.726
  4   read_vectors                       0.039
  5   copy_macro_block                   0.489
  6   encoder_decisions_h261             4.454
  7   forward_transform_picture          2.603
  8   h261_forward_quantize_picture      1.144
  9   tm_forward_scan_change_picture     0.350
 10   forward_run_level_picture          0.419
 11   h261_code_picture                  0.511
 12   inverse_run_level_picture          0.284
 13   tm_inverse_scan_change_picture     0.352
 14   h261_inverse_quantize_picture      0.734
 15   inverse_transform_picture          2.604
 16   reconstruct_h261                   1.786
 17   tm_calculate_snr                   0.609
 18   tm_display_statistics              0.158
 19   write_picture_file                 0.731
 20   macro_block_to_line                0.247
T2 - picture difference and forward transform (functions 20, 5-7);
T3 - quantization and bitstream coding (functions 8-11);
T4 - picture decoding (functions 12-16);
T5 - frame output (functions 17-19 in Table 8.1).

Stages T1 and T5 are executed once for each image frame, whereas stages T2-T4 execute a loop 396 times per frame (once for each macroblock making up the picture). The basic approach to speeding up the application is to exploit the macroblock-level data parallelism within stages T2-T4 by sub-dividing each frame of picture data among a number of processors using the processor farm parallel processing paradigm. Further speedup can be obtained by overlapping the execution of stages T1 and T5 (which are inherently sequential since they involve access to disk files) with stages T2-T4.

Close inspection of the code shows that a feedback path exists from the output of function 11 to the input of function 8, since function 8 tests the output bitstream buffer for every macroblock to determine its fullness, and updates the quantization value used accordingly. Thus feedback from the buffer to the quantizer (as shown in Fig. 8.2) occurs with a delay of only one macroblock, implying that functions 8-11 must be executed in a macroblock-sequential
Fig. 8.1 Simplified pipeline architecture of the H.261 encoder.
fashion so that the correct quantization value can be maintained throughout the picture coding process. In addition, function 11 generates the coded bitstream which is output to a file, and this is inherently a sequential operation. As a result, macroblock-level data parallelism can in practice only be exploited in stages T2 and T4, with stage T3 representing a residual sequential element. Figure 8.3(a) shows the effect of the pipelining proposed above using a processing execution time chart: in this figure stages T2-T4 are implemented sequentially on a single processor (P2), and stages T1 (on processor P1) and T5 (on processor P3) can be overlapped to increase throughput.
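The reason functions 8-11 serialize can be seen in miniature in the rate-control loop itself. The following C fragment is an illustrative sketch only: the update rule and constants are assumptions, not those of the H.261 reference encoder, but it shows how the quantizer chosen for each macroblock depends on the buffer state left by the previous one.

    /* Hypothetical per-macroblock rate-control state. */
    typedef struct {
        long buffer_bits;     /* bits currently queued for the channel   */
        long buffer_size;     /* notional channel buffer capacity (bits) */
        int  quant;           /* quantizer step, kept in [1, 31]         */
    } rate_ctrl_t;

    static void update_quant(rate_ctrl_t *rc, long bits_this_macroblock,
                             long bits_drained_per_macroblock)
    {
        rc->buffer_bits += bits_this_macroblock - bits_drained_per_macroblock;
        if (rc->buffer_bits < 0)
            rc->buffer_bits = 0;

        /* Fuller buffer -> coarser quantization -> fewer bits next time. */
        rc->quant = 1 + (int)((30 * rc->buffer_bits) / rc->buffer_size);
        if (rc->quant < 1)  rc->quant = 1;
        if (rc->quant > 31) rc->quant = 31;
    }

Because each call consumes the state produced by the previous macroblock, this chain cannot be farmed out across macroblocks without changing the control law, which is exactly the relaxation examined in Section 8.1.3.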
Fig. 8.2 Five stage encoder pipeline.
A further constraint imposed by feeding back the previous picture is that function 6, which compares the previous and current pictures, cannot be executed until the previous encoded and decoded picture is available. Thus, although it is possible to scale stages T2 and T4 using two independent processor farms, it is not possible to overlap their execution in the pipeline since
they are not independent of each other. In practice for the second and fourth pipeline stages, therefore, there is no advantage to using separate farms, and a more efficient implementation can be achieved by using the 'folded pipeline' structure shown in Fig. 8.4. In this structure the second and fourth pipeline stages are combined in a single processor farm and the third and fifth pipeline stages in a single processor. Data are passed from the first stage to the second, and after executing the DCT on the processor farm, the results are passed to the third stage, which sequentially quantizes and codes the picture and produces the bitstream output. The macroblock coefficients are then passed back to the processor farm, where data parallelism can again be exploited in decoding the picture. Finally, the encoded and decoded picture is reassembled from its distributed components by passing these back to the third pipeline stage, where the output picture file is written. Figure 8.3(b) shows the distribution of pipeline stages between processors for the 'folded pipeline' case, and confirms that the throughput with this configuration is identical to that of the original pipeline. As processors are added to the processor farm (P2), the initial speed-up achievable can be modeled as:
where n is the total number of processors in the pipeline, T2 is redefined as the scalable processing time for the second stage (i.e. excluding the function macro_block_to_line, which converts the current picture to previous-frame format and for convenience was not parallelized), and Ts is the residual sequential element. The speed-up is quite limited (see Fig. 8.8) because of the residual sequential execution time of stage T3. In fact, equation (8.1) is simply an alternative representation of Amdahl's law. Equation (8.1) models the speedup until the processing time for stage 2 is reduced to that for stage 5 (as is illustrated in Fig. 8.3(c)):
Solving this equation for the values given in Fig. 8.2 gives n = 8.03. As further processors continue to be added to P2, the speed-up can be modeled as:
where T4 is now the only scalable element and T3 + T5 the residual sequential elements. Although it would in principle be possible to achieve increased speed-up for n > 8 processors by moving T5 to a separate pipeline stage, in
Fig. 8.3 Image frame processing cycle with: (a) pipelining alone; (b) folded pipelining; (c) folded pipelining and parallelism in stage P2.
practice this was not worthwhile because communication overheads restrict achievable performance.

8.1.3 'Per picture' quantization without motion estimation
To increase the scaling, it is necessary to restructure the feedback and/or functions within the application so as to reduce the execution time of the residual sequential component. One way of achieving this is to relax the requirement to update the quantization level every macroblock. Practical implementations of H.261 may update the quantization value once every few macroblocks, and the coder will still work even if the quantization level is only updated once per picture, although this reduces its stability. Choosing to update the quantization value on a 'per picture' basis results in scaling performance at the other extreme from the 'per macroblock' case,
Fig. 8.4 'Folded pipeline' structure of the 'per macroblock' quantization H.261 encoder.

and so this case has been examined here. Figure 8.5 shows the modifications required to the feedback communication structure within the folded pipeline. If the quantization value is only updated once per picture, functions 8, 9 and 10 can be moved from the third (sequential) pipeline stage to the second (processor farm) pipeline stage, and parallelized. Thus, all stages of the encoder apart from the variable length code bitstream generation can now be parallelized on the processor farm.
Fig. 8.5 'Folded pipeline' structure of the 'per picture' quantization H.261 encoder.

Once encoding is complete, the processor farm workers can pass the macroblock coefficients to the third pipeline stage for bitstream coding and then
proceed independently (and in parallel with bitstream coding on the third pipeline stage) to decode the macroblock coefficients once again and obtain the decoded picture. Since there is no longer any need for the processor farm to wait for macroblock coefficients to be transmitted back from the third stage, the second pipeline stage is now fully decoupled from the third stage, and execution of these stages can therefore be properly overlapped (see Fig. 8.6). Figure 8.8 shows that the predicted scaling for this case, based upon the same execution profiling data as for the previous case, is much increased. The scaling is now broken down into three parts, as follows:
Equation (8.4) applies when the scaling performance is purely defined by the processor farming in stages T2 and T4; (8.5) applies when the overlapping execution times of stages T2 and T5 are dominated by T5; and (8.6) identifies the limiting case where stages T3 and T5 define the execution time and further farming has no effect. Equation (8.4) defines a region of near-linear performance scaling, where the residual sequential element Ts defines the divergence of the graph from linear scaling; this region would represent the desirable design region for parallelization purposes, since processor efficiency is high. Equation (8.6) defines the maximum scaling which can be achieved for this configuration, and (8.5) identifies the region where scaling T2 has no effect because T2 has to wait for T5 to complete processing of the previous picture before it can pass its data to T3. (Note that the values of T1-T5 used to generate the graph in Fig. 8.8 are different from those used earlier, because functions are allocated to pipeline stages differently.) The breakpoints between the regions defined by equations (8.4), (8.5) and (8.6) are at 9.56 and 13.27 processors, respectively, for the execution timings given in Fig. 8.5.

8.1.4 'Per picture' quantization with motion estimation
The introduction of motion estimation into the H.261 encoder implementation adds an additional function in the second stage of the five-stage pipeline shown in Fig. 8.1; the effect of this single function is to more than double the execution time per frame. Table 8.2 provides profiling data for the encoder
Fig. 8.6 Image frame processing cycle for the 'per picture' quantization H.261 encoder.
implementation with motion estimation, in a comparable format to that given in Table 8.1 for the first encoder implementation. (It should be noted that the data for Table 8.2, although based upon the same image sequence, were obtained upon a different workstation from that of Table 8.1.) Figure 8.7 shows how the function execution times are mapped onto the folded pipeline structure of the parallel implementation. For the execution time data given in Fig. 8.7, the piecewise scaling graphs are given by equations (8.7)-(8.9) below:
These equations are of the same form as (8.4)-(8.6), except that the limits are reversed, since in this case the ratio T4/T3 is smaller than T2/(T5 - Ts). For the data given in Fig. 8.7, the breakpoints between the piecewise regions occur at 14.11 and 25.14 processors. Figure 8.8 compares the predicted scaling performance for all three implementations up to 26 processors, and is derived from equations (8.1)-(8.9).

8.1.5 Implementation of the parallel encoders
The parallel versions of the H.261 encoder were implemented by a process of incremental development from a sequential version, obtained by porting the Sparc2 code to a single transputer within a Meiko Computing Surface. The parallel implementation proceeded in five stages:
Table 8.2 Summary Execution Profile Statistics for the H.261 Encoder with Motion Estimation Sequence

      Function Name                      Normalized Exec. Time (s)
  1   clear_picture                      0.147
  2   copy_field                         0.196
  3   read_picture_file                  1.200
  4   copy_macro_block_data              0.728
  5   encoder_motion_estimation_h261     24.306
  6   encoder_decisions_h261             6.392
  7   forward_transform_picture          2.230
  8   h261_forward_quantize_picture      1.398
  9   tm_forward_scan_change_picture     0.632
 10   forward_run_level_picture          0.712
 11   h261_code_picture                  0.591
 12   inverse_run_level_picture          0.486
 13   tm_inverse_scan_change_picture     0.634
 14   h261_inverse_quantize_picture      0.991
 15   inverse_transform_picture          2.232
 16   reconstruct_h261                   2.813
 17   macro_block_to_line                0.531
 18   tm_calculate_snr                   0.753
 19   tm_display_statistics              0.134
 20   write_picture_file                 1.217
1. Implementation of a communication harness for image data.1 (The same communication functions were used for all three parallel H.261 encoder implementations.) Under Meiko's CSTools parallel programming environment, a virtual channel can be established between any pair of processors, regardless of whether they are directly physically connected. Therefore the programmer does not need to provide through-routing communications functions, regardless of the size of the network.

2. Implementation of a minimal pipeline (3 processors) to test all communication functions and the pipeline operation.

3. Expansion of the single worker in the middle pipeline stage to a processor farm with an arbitrary number of workers, by exploiting data parallelism at the macroblock level. At this stage, the first full implementation, of the 'per macroblock' encoder without motion estimation, was complete.

4. Adaptation of the feedback structure to obtain the 'per picture' encoder without motion estimation.

1 A harness provides a library of functions to initialize communications and to communicate image data and image data structures between any arbitrary pair of processors.
Fig. 8.7 Execution times for stages in the H.261 encoder with motion estimation 'folded pipeline'.
5. Inclusion of motion estimation within the 'per picture' encoder implementation.
CSTools will automatically interconnect a network of transputers when a parallel application is loaded, using an algorithm which minimizes the average distance between processors; however, this algorithm takes no account of the distribution of communications between processors, and was found to be non-optimal for PPF configurations. Therefore in this work automatic interconnection was disabled, and optimal processor configurations, defined manually using a configuration file, were used instead. Fig. 8.9 shows the topology of the largest PPF configuration of 16 processors that was used. The design employs maximal connectivity, i.e. all four links of each transputer are engaged.

8.1.6 H.261 encoders without motion estimation
Figure 8.10 summarizes the scaling results obtained for the 'per macroblock' and 'per picture' implementations of the H.261 encoder without motion estimation. The corresponding idealized scaling performances predicted from Fig. 8.8 are shown alongside for comparison. As can be seen, the actual performance matches the general form predicted by the theoretical ideal cases, but with absolute scaling values somewhat less than those predicted. This is due to communications overheads (which were ignored in the theoretical analysis) reducing the efficiency with which execution in the different pipeline stages can be overlapped.
Fig. 8.8 Idealized predicted scaling performance for the different H.261 encoder parallelizations.
8.1.7 H.261 encoder with motion estimation
Figure 8.11 compares the scaling performance of the H.261 'per picture' encoders with and without motion estimation. Again, the corresponding idealized scaling performances predicted from Fig. 8.8 are shown alongside for comparison. The actual performance of the encoder with motion estimation again conforms well with the theoretical prediction, as the performance continues to scale fairly linearly up to 16 processors (14 workers), the maximum number available on the Meiko system at Essex University at the time. The gradual decline of performance from the idealized approximation is the expected effect of increasing communications overheads as the number of processors increases. The discrepancy from the idealized performance prediction is less marked for the encoder with motion estimation than for the encoder without motion estimation because the execution time of the former is twice that of the latter, while their communication requirements are identical; thus, the communication/computation ratio is much lower.
Fig. 8.9 Sixteen processor, 3 stage pipeline configuration, with stage 2 processor farm implemented using 14 workers.
The maximum scaling achieved by the encoder with motion estimation is roughly double that of the encoder without motion estimation. Since the sequential execution time of the encoder with motion estimation is also double that of the encoder without motion estimation, the asymptotic minimum execution time per frame of the two encoders is about the same, although this minimum is reached with twice as many worker processors for the encoder with motion estimation (Fig. 8.12). In practice, this means that any throughput achieved with the original encoder, regardless of the number of farm workers used, can also be achieved with the modified encoder, even though its basic sequential execution time is doubled, by increasing the size of the processor farm appropriately. This is an important point in practical applications,
Fig. 8.10 Scaling performance for the H.261 encoder without motion estimation, with quantization setting updated on a per-macroblock or per-picture basis.
where performance would normally be defined in terms of throughput rather than speedup, and the critical requirement would be to maintain a particular processing rate, in spite of making changes to the underlying algorithm.

8.1.8 Edge data exchange
Subdividing the picture data on the processor farm introduces the problem of data exchange at the edges of each sub-picture, as has previously been reported [1]. For simplicity, the effect on speedup of exchanging edge data has been ignored in the results reported in Figs. 8.10 and 8.11. Further experiments were carried out to determine the practical degradation in speedup obtained when additional edge data is exchanged. (The requirement to exchange additional edge data of course has no effect on the earlier analytical performance predictions as these ignore communication overheads.) For the 'per-macroblock' parallelization, the exchange of edge data has a negligible effect on speedup, since this is largely defined by the inherently sequential component for this implementation. For the 'per picture' parallelizations, limited results were obtained using the same approach. These
Fig. 8.11 Scaling performance for the H.261 encoder with and without motion estimation.
showed a progressive degradation in speedup compared with the results reported in Fig. 8.11 as the number of worker processors was increased, with a degradation of about 10% being measured for six worker processors.

8.2 CASE STUDY 2: H.263 ENCODER/DECODER

While H.261 is a standardized codec intended for bit rates in the range p x 64 kbps (p = 1, 2, ..., 30), H.263 is a standard codec for very low bit-rate video coding (< 64 kbps) [175]. The principal application of H.261 was intended to be teleconferencing, while H.263 is also intended for PSTN (Public Switched Telephone Network) videotelephony, that is videotelephony over normal analogue telephone lines. The H.263 standard has been developed collaboratively by researchers in telecommunications organizations around the world, and simulation algorithms for an H.263 decoder and encoder implemented by Telenor (Norwegian Telecom) are freely available over the Internet. As with most hybrid video codecs [37, 130], the decoder is also embedded in the encoder in a feedback loop. On a SparcStation 20, the decoder runs in real time, but the encoder only runs at about 2 frames/s. Initially, before
Fig. 8.12 Execution time per frame for the H.261 encoder with and without motion estimation.
real-time VLSI hardware implementations became widely available, there was therefore significant interest in parallelizing the encoder simulation algorithm to obtain real-time performance in distributed- or shared-memory multiprocessor environments.

Figure 8.13 is a block diagram of H.263, showing the normal inverse quantization and transform operations of a decoder contained within the coder. The basic coding method again uses 16 x 16 macro-blocks with 8 x 8 sub-blocks, motion estimation and compensation, DCT transform of prediction errors, run-length coding and variable length codewords. Also shown is the coding control provided by H.263, which has a number of additional modes in comparison to H.261. In addition to the basic video source coding algorithm, four negotiable coding options are available: (1) Unrestricted Motion Vectors, (2) Syntax-based Arithmetic Coding, (3) Advanced Prediction, and (4) PB-frames.2

2 There are three main types of picture frame, stemming from MPEG-1 and MPEG-2 codec usage: I-type are anchor (intra-coded) frames without motion estimation; P-type are intermediate (prediction) frames between I-frames, with motion estimation to reduce error rates; and B-frames are interpolated from surrounding P- and I-frames. Hence the term PB-frame refers to a P and a B frame coded as one unit.
Fig. 8.13 Block diagram of the structure of the H.263 encoder.
8.2.1 Static analysis of H.263 algorithm
Analysis of the execution profile of the H.263 encoder using the gprof profiler on test sequences such as "Mother and Daughter" and "Car Phone" (available from the Telenor site) showed that only three functions have execution times large enough to be measurable at the sampling accuracy used. One of these functions, CodeOneIntra, is called only once, to code the initial frame using intra-frame coding alone. Therefore, this function was excluded from the analysis, although it is structurally similar to the inter-frame coding used for other frames, and so could in practice be included in the encoder/decoder stage of the pipeline. The analysis of the other two major functions, which are CodeOneOrTwo and ComputeSNR, showed that 98% of the execution time is spent in CodeOneOrTwo regardless of the coding options chosen. Table 8.3 shows the breakdown of the execution time of the main function CodeOneOrTwo into sub-functions, and includes the eight most significant sub-functions, which constitute 97.9% of the execution time of CodeOneOrTwo. Nearly 64% of the execution time of CodeOneOrTwo is expended in the motion estimation function and the remaining time on the EncoderDecoder function.
Table 8.3 Top-down Profiling Data for the 8 Most Intensive Functions within the CodeOneOrTwo Function of the H.263 Coder.

Function Name             %CodeOneOrTwo Exec. Time    %CodeOneOrTwo Exec. Time
                          (Default Mode)              (Full Options)
MotionEstimatePicture     63.8                        64.8
MB_Decode                 13.1                         9.1
MB_Encode                 12.8                         9.4
Predict_P                  2.6                         5.2
InterpolateImage           1.9                         1.9
MB_Recon_P                 1.3                         5.1
Clip                       1.2                         1.1
ReconImage                 1.2                         1.3
From the above analysis it was concluded that these two functions should be placed in separate PPF pipeline stages within the frame feedback loop, and each image frame subdivided into half-frames for processing concurrently in the consecutive pipeline stages. Comparison of the execution times of the two stages shows that their static execution times are approximately in the ratio 2:1, hence a balanced pipeline can be achieved by populating the respective processor farms with workers in this ratio. An implementation based on this architecture is shown in Fig. 8.14. According to Amdahl's law, the speedup which can be achieved is largely defined by any residual sequential elements within an application algorithm. Hence, an implementation based upon this architecture would result in an overall speedup of up to six according to Amdahl's law. Table 8.4 shows comparative execution times of the sequential program on a Sparc 20, a Sparc 5, and a single i860 (on the Paramid multicomputer). The results presented in Table 8.4 suggest that a practical implementation using up to eight i860 nodes is unlikely to exceed the performance of a Sparc 20, as the i860 has lower computational performance than the Sparc 20. However, a Paramid implementation would give an insight into parallelization of H.263 on a more powerful machine. Another issue that required particular attention was that of distributing feedback information after each slice of macro-blocks. To handle this problem a per-farm data-broadcast primitive was needed.
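For reference, an Amdahl's law estimate of the kind quoted above can be computed with a one-line helper; the parallel fraction f is whatever profiling suggests, and the figure of six for this encoder is not re-derived here.

    /* Amdahl's law: f is the parallelizable fraction of the run-time and
     * n the number of processors applied to that fraction.  This is a
     * generic helper, not a model of the H.263 pipeline's residual
     * sequential elements. */
    double amdahl_speedup(double f, int n)
    {
        return 1.0 / ((1.0 - f) + f / (double)n);
    }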
8.2.2 Results from parallelizing H.263
Due to the complexity of the H.263 algorithm, an incremental approach was adopted during the parallel implementation of the algorithm. Figure 8.15 depicts the developing stages of the parallel implementation and Table 8.5
Fig. 8.14 Pipeline architecture of H.263 encoder.
Table 8.4 Comparative Overall Performance of Sparc 20, Sparc 5, and i860 for H.263 Algorithm.

Processor                        Execution Speed (s/frame)
Sparc 20 (passing pointers)      0.40
Sparc 5 (passing pointers)       3.12
i860 (passing pointers)          3.20
i860 (passing data)              4.47
presents the corresponding results on a time-per-frame basis. In all these cases the image frame is sub-divided, so that while one half-frame is being coded, motion estimation is simultaneously being carried out on the next half.

Table 8.5 Comparative Overall Performance of Several Parallel Topologies for H.263 Algorithm.

Parallel Topology                  Execution Speed (s/frame)
Single i860 (passing pointers)     3.20
Single i860 (passing data)         4.47
Fig. 8.15(a)                       2.04
Fig. 8.15(b)                       2.2
Fig. 8.15(c)                       2.05
Fig. 8.15(d)                       2.52
Fig. 8.15(e)                       2.6
Fig. 8.15(f)                       2.46
Fig. 8.15(g)                       2.48
Fig. 8.15(h)                       1.86
Fig. 8.15(i)                       1.89
From Table 8.5, it is easy to see that the topology proposed from the static analysis (Fig. 8.15(g)) does not match the speedup predicted according to Amdahl's law. Indeed, the speedup achieved in this case is about three times less than the theoretically predicted speedup. This is mainly caused by the large amount of data that has to be communicated between the farmers and their respective workers. From Table 8.5, it can also be seen that the extension to a second farmer, which pipelines the encode-decode part of the algorithm, degrades the performance of the parallel structure. This can easily be seen by comparing the time performance of Figs. 8.15(a)-(c) with Figs. 8.15(d) and (e). In the latter case a 25% increase in per-frame processing time is measured. This is mainly caused by the inherent restriction within the H.263 algorithm which requires a slice-by-slice update of the quantization variable. This constrains the encoder-decoder to process only on a slice-by-slice basis and increases the number of messages sent from the workers to the relevant farmer. Measurement of the processing and communication time spent in the second farmer showed that these two times are equal. This finding suggests that for the present hardware the extension to a second farmer is inappropriate. However, for another machine with higher communication bandwidth a second pipeline stage utilizing a greater number of processors would be worthwhile. Furthermore, the requirement for slice-by-slice update of quantization can be relaxed in many cases, and if this is possible, the communication overhead of using a second pipeline stage could be reduced further. However, as
Fig. 8.15 Parallel implemented topologies for H.263 encoder algorithm.
the work presented here is based entirely on the constraints of the Telenor software, this modification was not considered at this stage. According to Table 8.5, the best results are achieved by the topology described in Fig. 8.15(h). In this case the processors are entirely balanced and a further reduction in time is achieved from the introduction of the third farmer. However, it must be said that even for this case the time performance, as expected, falls well short of the speedup required for a real-time implementation.
8.3 CASE STUDY 3: 'EIGENFACES' — FACE DETECTION
This case study is a PPF solution for a well-known method of face inspection, 'Eigenfaces' [188, 358], capable of separating a face from a video scene and matching that face against a set of candidate faces taken from a pre-compiled database of faces. The variant of Eigenfaces described in [274, 246, 247] was the subject of the parallelization. The aim of the parallelization was to speed up an existing sequential processing pipeline originating from the MIT Media Laboratory. It is assumed that there is a continuous sequence of video images to be matched, as may typically occur in visual surveillance applications. The objective was to reduce the sequential Eigenfaces pipeline throughput of about 2 min/image on a SparcStation 2 to less than 10 s/image on the Paramid. In the first instance, this would enable evaluation and testing of the prototype software in a realistic setting. However, the processing structure is intended to be incrementally scalable so that with appropriate hardware a video-rate solution could be achieved. As the Eigenfaces application consists of circa 8700 lines of code, including options but not library code, considerable logistic organization is required (for a single programmer) to effect the parallelization.

8.3.1 Background
Though parallelization of face recognition has taken place before, a pipeline structure for parallelization has not been employed. This may have been because the motive for previous parallelizations was to speed up algorithmic development and not eventually to provide a real-time processing pipeline. The sequential version of Eigenfaces was, however, constructed as a pipeline, which immediately makes it a candidate for real-time implementation. The regular structure present in the head-search and feature-matching processing is suitable for data farming. The head-search processing is particularly computationally demanding, which implies a favorable compute/communicate granularity. There are no feedback loops which would otherwise impose a synchronous constraint on any pipeline.
Parallelization of face inspection software has previously been described in [203, 204], which is a graph-based approach on hardware with a similar architecture to the Paramid. Three independent processing tasks were identified with differing compute/communicate ratios, namely fast Fourier transform of image data; computation of Gabor filters in the frequency domain to form 'objects'; and comparison with stored 'objects' held as sparse graphs. A processing farm was used but the vital step of forming a pipeline was apparently not made. The implementation of the graph-based method is reported [203] to take 25 s with optimized code to make a match between face images of size 128 x 128 pixels (captured in controlled circumstances) and one of 87 stored 'objects'. Twenty-one active 20 MHz transputers were arranged in a tree topology with two other transputers providing system support. A more recent and enhanced version of elastic graph matching is described in [389], though without implementation details.

8.3.2 Eigenfaces algorithm
In the Eigenfaces method, faces are projected into Eigen-space by a Karhunen-Loève transform [252], avoiding reliance on semantic knowledge.3 Since identification appears to depend on good intensity correlation between images, substantial pre-processing is needed [210]. Changes in scale are a principal cause of error for this method, so corrections have to be made [358]. Allowance for lighting and contrast variation [246] and affine transformation of the head position are also made. Changes in facial expression and time variation in facial appearance are not addressed in the variant of Eigenfaces parallelized in this case study.

The bulk of the processing (the first stage) is taken up with finding a head within an image scene at different positions. (The candidate head, after normalization by the 'average' face from the existing database, is measured against its projection onto the database Eigen-space [358] by a maximum likelihood (ML) estimator.) This procedure is designed to screen out objects that are close to an individual Eigenface but not close to the class of heads. Only ten eigenvectors were used at this stage. A computational formula reducing the disparity error calculation to a set of correlations is employed. (The possibility that a person is present in an image scene can be ascertained by prior motion detection using spatiotemporal filtering.) In tests, the grey-scale scene was sized 512 x 512 pixels. Candidate heads, once located, are normalized to size 128 x 128 pixels. Since the size of the head is not known, an error map is compiled for candidate head sub-images at 21 different possible scales.

A second stage is feature detection. Once the best position and scale for a head is found, four facial features are sought within appropriate parts of the
3 Notice that the KLT approach has been shown to have some of the aspects of human vision, such as the ability to distinguish between individuals and genders, and to form affinities [265].
face. The features should conform to the known facial geometry (e.g. nose below eyes and mouth below nose). The feature detection stage also has some scope for parallelization, as each feature is sought independently from the others. Feature detection employs five eigenvectors in a similar procedure to the head-search. The eventual pipeline did not utilize this possibility in the interests of balancing the load throughout the pipeline. The location of the facial features serves to parameterize an affine warp which brings the head into standard alignment with the faces in the database. The head is also cropped to produce a face. The third and final stage also had the potential to be parallelized: the normalized face is projected into Eigen-space with 100 coefficients, where the nearest three matching images are output. The restriction to three is merely a test heuristic.
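As an illustration of the arithmetic in this final stage, the projection and nearest-neighbour match reduce to a handful of dot products. The sketch below is not the MIT code: the array layout, names and the Euclidean distance measure are assumptions, and only the single best match is returned rather than the three reported in the case study.

    #include <stddef.h>
    #include <float.h>

    #define NUM_EIGENVECTORS 100          /* coefficients per face  */
    #define FACE_PIXELS      (128 * 128)  /* normalized face size   */

    /* Project a mean-subtracted, normalized face onto the eigenvector basis. */
    void project_face(const float *face,
                      const float eigenvectors[][FACE_PIXELS],
                      float coeffs[NUM_EIGENVECTORS])
    {
        for (int k = 0; k < NUM_EIGENVECTORS; k++) {
            float dot = 0.0f;
            for (size_t i = 0; i < FACE_PIXELS; i++)
                dot += eigenvectors[k][i] * face[i];
            coeffs[k] = dot;
        }
    }

    /* Return the index of the database entry whose coefficient vector is
     * nearest (Euclidean distance) to the probe's coefficients. */
    int nearest_face(const float coeffs[NUM_EIGENVECTORS],
                     const float db[][NUM_EIGENVECTORS], int db_size)
    {
        int best = -1;
        float best_dist = FLT_MAX;
        for (int j = 0; j < db_size; j++) {
            float dist = 0.0f;
            for (int k = 0; k < NUM_EIGENVECTORS; k++) {
                float d = coeffs[k] - db[j][k];
                dist += d * d;
            }
            if (dist < best_dist) {
                best_dist = dist;
                best = j;
            }
        }
        return best;
    }

The work is dominated by the 100 dot products over 128 x 128 pixels, which is why this stage, though parallelizable, is light compared with the head search.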
8.3.3 Parallelization steps
The first step involved in porting the code was to check the correct behaviour of the application on Sun workstations. The libraries were then reconstructed for use on a single i860 processor (as the Paramid can also act as a throughput machine).

Table 8.6 Sequential Pipeline Timings for the Eigenfaces Application

Pipeline stage        Time     Proportion
Head Location         116 s    55%
Feature Detection      90 s    43%
Face Inspection         4 s     2%
The second step was to form one farm, Figure 8.16a, in which the first, computationally dominant, head-matching stage was parallelized, the remaining sequential code being performed centrally while the farm is stalled. The i860, being run in single-threaded mode, must change from data farmer to sequential processor, shown by the dashed circle in Figure 8.16a. Table 8.7 gives the recorded execution times, excluding the initial file loading time. "Farm 6" in Table 8.7 refers to six worker processors and one data-farmer. The worker processors were arranged as a binary tree. Using the features of the i860 Portland C optimizing compiler [284] considerably improved the performance. Level four optimization principally vectorizes the code.4 Figure 8.17 shows the proportion of time spent in the Eigenfaces parallel section compared to the overall time. The proportion of time in the head-inspection stage is larger than the original static times suggest. It is apparent
4 Other options subsumed in level 4 are pipelining, loop unrolling and function inlining, inter alia. Debugging code was turned off.
Fig. 8.16 Implementation topologies for the Eigenfaces application.
that beyond seven processor modules the application speed-up is starting to saturate. In order to preserve fidelity to the original tests 21 scales were farmed out. Purely from the point of view of sharing out work, one would want a larger number of scales, a number for which the number of worker processes was an exact divisor. The latter requirement reduces the proportion of time spent idling while waiting for the processing to finish. In fact, depending on the application, the range of scales will vary; for example: when heads are captured as 'mugshots'; when images are taken by a door-entry system; and
when external surveillance cameras are employed. This emphasizes the need for flexibility in the parallel implementation.

Table 8.7 Initial Timings for the Eigenfaces Application

Processor                        Time/Image (s)
Sun4 (Optimized)                 84
SPARCstation 5 (Optimized)       37
i860 (Level 2 Option)            33
i860 (Level 4 Option)            30
Farm 6 (Level 0 Option)          19
Farm 6 (Level 2 Option)          13
Farm 6 (Level 4 Option)          11
Most of the data I/O, which is considerable as it involves loading efface and feature databases, takes place at the start-up phase of the pipeline. For computational tests a reduced database of 60 known faces was searched5 though the Eigenfaces basis set had dimension 100. For long runs such overhead would be amortized over the lifetime of the pipeline. The original and scaled images must be broadcast during each processing round but otherwise communication is nugatory. 8.3.4
8.3.4 Introduction of second and third farms
A second farm for feature matching was constructed with the farmer process on a transputer and the workers on i860s. The basis of the parallelization is the matching of features within the normalized face image that is passed on from stage one. The face image therefore has to be broadcast to the worker processes at the onset of processing. Feature matching against an average feature taken from the database again takes place in Eigenspace. Due to the limited number of facial features, the potential for data farming is restricted. However, the parallelization went ahead in part because, if multiple candidate heads were later used to achieve greater identification accuracy, the parallel structure would already exist. Were this extended form of processing to be introduced, performance scaling of the second stage could take place. In Fig. 8.16 b), the second farmer process, hosted on a transputer, is shown sharing a processor module with a worker process, hosted by an i860. The two processes communicate by overlapped memory and not by a transputer link. The first farm solely used i860s since the farmer originally had not only to pass on data but also to perform processing. Provided there is enough memory
5 A 99% successful recognition rate over 155 individuals on the FERET database for the variant of the system used by us is reported in [245]. Encouraging results for the 1996 MIT system are also given in [280].
Fig. 8.17 Eigenfaces timings for one farm.
(4 Mbytes on the transputer as opposed to 16 Mbytes on the i860), then it may be possible to free another processor on the Paramid by also moving the farmer on farm one onto a transputer, which otherwise has no hardware support for memory management. To verify the behavior of the second farm, the output was fed back to the first farm where the remaining processing was completed. When the second farm was seen to work, the remaining processing was transferred to a third farm, which actually has only one 'worker' on an i860. In other words, the granularity of the pipeline as a whole did not justify partitioning between farmer and workers in the third farm. A three-farm pipeline was tested in the processor ratio 5:2:1 and then 6:1:1. The topology for the second pipeline arrangement is shown in Figure 8.16 c). A test was made processing 25 images (the five test images repeated), recording pipeline traversal latency and throughput, Figure 8.18. The latency of an image passing through the 5:2:1 pipeline was found to have increased from 11 s for the single farm to 11.96 s (mean of 25 images). However, the throughput increased from an image every 11 s to an image every 9.24 s. The 6:1:1 pipeline resulted in a further increase in latency (12.80 s/image) but a similar increase in throughput: an image every 7.48 s. Since the first farm is not in saturation with five processors (and four workers), increasing to six processors improves the rate at which images emerge from the first farm, freeing the farmer to load another image. The second farm will be depleted
by one worker, which consequently increases the latency. Clearly, on a larger machine one would increase the number of processors in farm two as well as farm one, in which case the latency would be reduced and the throughput further improved. The performance figures depend on the speed of I/O when loading from disk across a SCSI bus and on the system load at the time of the tests. I/O time for the original image, which is variable, is included to give realistic timing figures. If more images were tested, the start-up time of the pipeline would be shared over a longer time-span, again improving the results. The processing scales, which vary from 0.15 (scale 1) to 0.40 (scale 21) in the supplied test file, take increasing amounts of time to process as the scale rises. (A scale is a measure of the transformation scaling made to a search image before searching for a match within the target video image.) gprof records 1 s for scale 2 and 10 s for scale 21. If the scales are processed from 21 downwards (i.e. in descending size) the result is an improvement in latency and throughput to 12.2 s/image and an image every 6.92 s, respectively. The reason for this improvement over Fig. 8.18 is that any worker left idling for work at the end of processing an image will spend less time waiting if the final scales take less time to process.
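The gain from issuing the slowest scales first is an instance of a longest-processing-time-first ordering. A minimal sketch of how a farmer might sort its work packets before demand-based farming is given below; the work_t fields and the per-scale time estimates are illustrative assumptions rather than the code actually used.

/* Sketch: order the per-image scales by descending expected processing
 * time before farming them out, so that the cheap scales fall at the
 * tail of each image and end-of-image idle time is reduced.  The
 * structure fields are illustrative assumptions. */
#include <stdlib.h>

typedef struct {
    int   scale_index;    /* 1..21 in the supplied test file */
    float scale_factor;   /* e.g. 0.15 .. 0.40 */
    float est_time;       /* estimated processing time, e.g. from gprof */
} work_t;

static int by_descending_time(const void *a, const void *b)
{
    const work_t *wa = a, *wb = b;
    if (wa->est_time < wb->est_time) return  1;
    if (wa->est_time > wb->est_time) return -1;
    return 0;
}

void order_scales(work_t *work, int n_scales)
{
    /* Slowest scales first; demand-based farming then issues them to
     * the first free workers. */
    qsort(work, n_scales, sizeof(work_t), by_descending_time);
}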
8.4 CASE STUDY 4: OPTICAL FLOW
Optical-flow (OF) methods intended for pixel-level, video stream inter-frame motion estimation are complex, multi-component algorithms with lengthy execution times if processed on a workstation or PC. In an attempt to remedy the situation, three representative OF methods have been parallelized. Despite their complex structures, OF methods can be parallelized by mapping onto a PPF. Component algorithmic parts of an OF method are partitioned across the pipeline stages. After the OF field has been found, further processing stages may be needed. For example, motion segmentation or frame in-betweening are possible further steps, which can easily be added to the PPF. No one current OF method is suitable for all purposes, so a number were implemented on the Paramid to form an algorithmic testbed.
8.4.1 Optical flow
A 2-D OF dense vector field, akin to a disparity field, can be extracted from a sequence of video frames by processing the image irradiance over time, without the difficulty of tracking corresponding features in the 3-D scene structure [165]. However, due to the well-known aperture problem, only a single component of motion may be available from such processing, unless there is further amalgamation of results. Amorphous imagery, as found in weather images [393], can be tackled. The OF implementations were parameterized for small pixel motions (three or fewer pixels per frame) suitable for applications
Fig. 8.18 Implementations: 1) SparcStation 5, 2) i860, 3) Single farm, 4) 5:2:1 pipeline, 5) 6:1:1 pipeline.
such as monitoring heart motion [28] or plant growth [31]. Sub-pixel estimates of motion are required for applications such as plant growth. Inaccuracies, typically blurring at motion boundaries, are particularly visible in motion-adjusted film sequences, as occur in frame-rate inter-conversions.
8.4.2 Existing sequential implementation
Three OF routines, chosen for their representative nature, have been parallelized: a gradient-based method [327, 224] (LK); a correlation method using a Laplacian pyramid [17, 132] (AN); and a phase-based method [104, 106] (FJ). Public-domain sequential code first written for a comparative study [30], available from ftp://csd.uwo.ca/pub/vision/, served as the starting point for the parallelizations. All the original sequential code was written in the 'C' programming language.
8.4.3 Gradient-based routine
The LK algorithm assumes: that image-patch brightness is conserved over a small interval in time (normalized to a frame interlude), leading to the well-known optical-flow equation (OFE) [165]; and that the spatial gradient is also constant. The linear least-squares error is found for an over-determined set of equations resulting from these assumptions for each pixel from weighted values taken across a neighborhood patch. As the LK routine relies on numerical differentiation, it is essential first to remove high-frequency components, which would otherwise be amplified. Typically, Gaussian smoothing across 5 x 5 pixel regions as well as temporal smoothing across fifteen frames is performed. The processing cycle for the LK algorithm is summarized in Fig. 8.19. All processing is with real-valued numbers. Stage one of data processing is Gaussian temporal-spatial filtering, with a reduction in per-frame computational complexity from O(N^2 m^2 T) to O(N^2 (T + 2m)) multiply/accumulates (MACs) through separability, where N is the size of a square image, T is the filter's temporal extent, and m is the spatial region of support. Initially, the temporal filter (larger than shown) is applied, centered on a middle target frame. Spatial filtering can then take place on that target frame. Stage two in Fig. 8.19 is numerical differentiation, which is applied to the frames output from stage one. A tagged array allows frames to be manipulated without regard to storage order in a circular (finite) buffer. Differentiation is separately applied in each dimension to a stage two target frame, giving rise to three images, Fig. 8.19, with complexity O(N^2 m) MACs. For each pixel in a target frame, a normalized 2 x 2 measurement matrix is formed from a weighted 5 x 5 neighborhood. A weighted 1 x 2 time measurement matrix is likewise formed. Per-frame computational complexity in this stage is at least O(N^2 m^2 (M^2 + B)) MACs, where M is the size of the spatial measurement matrix and B is the number of columns in the time matrix. Finally, in stage four, the eigenvectors of the spatial measurement matrix are first calculated in order to determine confidence thresholds for acceptance of velocity vectors. The complexity is O(N^2 M^3) multiplications. Calculation of full velocity vectors for each pixel is by matrix multiplication of the spatial measurement matrix and the time measurement matrix.
Fig. 8.19 Data processing in the LK method.
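A minimal sketch of the stage-four computation for a single pixel follows, assuming the derivative images have already been formed. The neighborhood weights and the simple determinant test (standing in for the Eigenvalue-based confidence thresholds described above) are illustrative assumptions.

/* Sketch of stage four for one pixel: form the weighted 2 x 2 spatial
 * measurement matrix and the weighted time measurement vector from a
 * 5 x 5 neighborhood of derivative values, then solve for the full
 * velocity (u, v).  Weights and the singularity test are illustrative. */
#include <math.h>

#define PATCH 5

/* ix, iy, it hold the spatial and temporal derivatives around the
 * target pixel; w holds the neighborhood weights.  Returns 1 and
 * writes (u, v) if the matrix is well conditioned, else 0. */
int lk_velocity(const float ix[PATCH][PATCH], const float iy[PATCH][PATCH],
                const float it[PATCH][PATCH], const float w[PATCH][PATCH],
                float *u, float *v)
{
    double axx = 0, axy = 0, ayy = 0, bx = 0, by = 0;

    for (int r = 0; r < PATCH; r++)
        for (int c = 0; c < PATCH; c++) {
            double wt = w[r][c];
            axx += wt * ix[r][c] * ix[r][c];
            axy += wt * ix[r][c] * iy[r][c];
            ayy += wt * iy[r][c] * iy[r][c];
            bx  -= wt * ix[r][c] * it[r][c];
            by  -= wt * iy[r][c] * it[r][c];
        }

    double det = axx * ayy - axy * axy;
    if (fabs(det) < 1e-9)          /* near-singular: no full velocity */
        return 0;

    *u = (float)(( ayy * bx - axy * by) / det);
    *v = (float)((-axy * bx + axx * by) / det);
    return 1;
}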
The application structure is revealed through a call graph output by Quantify, Fig. 8.20, showing the top twenty calls stemming from the main function. The graph can be expanded, and further detail still is available from an annotated listing. A function list will show which functions are candidates for optimization, but more helpful to the parallel programmer is the function detail display, showing the percentage call time taken up by a function and its descendants. Forty-seven percent of time is taken up in successively calculating derivatives and velocities, for an image of size 252 x 316 pixels. Of the remaining functions, 29% of time is taken up by a candidate for data-farming, Gaussian
smoothing. About 5% variation in this proportion occurs if a smaller image is employed, because the ratios also include load time.6
Fig. 8.20 Call graph for the LK sequential code.
To load balance, the same number of processors were used in each stage of the pipeline, Fig. 8.21. Because of the size of each inter-stage message (at least 57 Kbytes for the 252 x 316 frame sequence) the disparity in processing time between the two stages is taken up by message transfer time. This is convenient as the size of buffers would be prohibitive on the machine available. Data farming of image strips in the first stage of the pipeline was straightforward, requiring limited modification of the data-farm template. Demand-based scheduling in stage one will result in out-of-order row arrivals to stage two. If the two pipeline stages can be decoupled, the LK algorithm becomes an asynchronous pipeline with potential gains in frame traversal latency. In the algorithm for input to stage two, Fig. 8.22, a linked-list data structure enables a message to be assembled in the correct order ready to transmit to a stage two worker. The linked list enables storage nodes to be discarded once a message is assembled. The messages are not immediately farmed out
6 System call time can alternatively be turned off. A number of other variable overheads are ignored, such as paging and window traps where present.
Fig. 8.21 LK farm layout.
but are transferred to a circular message buffer to preserve order. Worker requests from farm two are 'fairly' demultiplexed, though in alternation with the servicing of stage one arrivals. To combine numerical differentiation and calculation of output motion vectors in the second stage, an intermediate data structure was needed. In Fig. 8.23, the input message, sufficient in extent for 3D numerical differentiation, is transformed to an intermediate set of image strips, now reduced in size and ready for calculation of measurement matrices. Border regions are extended by zeroization rather than using wrap-around. Edge velocities are discarded because of possible sensor error. Seven rows across five frames were needed to perform four-point weighted central differencing7 for any one output row. A linear least-squares estimate was made across five rows of the differentiated data, finally resulting in a single row of output.8
7 See [107] on the accuracy of this numerical differentiation method.
8 Partial velocity is velocity in the normal direction, perpendicular to a patch edge; see [165]; raw velocity is before thresholding from the eigenvector spread; and the residual is an error term from the OFE [327].
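The reassembly idea can be sketched as follows; the node contents, the enqueue_for_stage2() call and the global state are illustrative assumptions rather than the data-farm template actually used.

/* Sketch of re-ordering out-of-order row arrivals from the stage-one
 * farm.  Arrivals are held on a list sorted by row index; whenever the
 * next expected row is at the head it is released, in order, towards
 * the stage-two farm. */
#include <stdlib.h>

typedef struct node {
    int          row;      /* global row index of this strip */
    float       *data;     /* smoothed rows produced by a stage-one worker */
    struct node *next;
} node_t;

static node_t *pending = NULL;   /* arrivals not yet releasable */
static int     next_row = 0;     /* next row index owed to stage two */

/* Insert an arrival in ascending row order. */
void buffer_arrival(int row, float *data)
{
    node_t *n = malloc(sizeof *n), **p = &pending;
    n->row = row; n->data = data;
    while (*p && (*p)->row < row) p = &(*p)->next;
    n->next = *p; *p = n;
}

/* Release as many in-order rows as possible; enqueue_for_stage2() stands
 * in for the transfer to the circular message buffer. */
extern void enqueue_for_stage2(int row, float *data);

void release_in_order(void)
{
    while (pending && pending->row == next_row) {
        node_t *n = pending;
        pending = n->next;
        enqueue_for_stage2(n->row, n->data);
        free(n);                  /* storage node discarded once released */
        next_row++;
    }
}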
8.4.4 Multi-resolution routine
The AN method, again assuming brightness conservation, correlates image patches across two frames through a coarse-to-fine strategy. In forming Laplacian pyramids (one for each frame), band-pass smoothing is performed but no temporal smoothing occurs. At the coarsest level, direct neighborhood corre-
Fig. 8.22 Buffering out-of-order arrivals.
lation takes place but at higher levels four parent estimates of the flow vector, when suitably projected, are employed to guide the search at each pixel site. The overlapped search avoids one erroneous estimate passing up several levels of the tree. Within the neighborhood the search is exhaustive, unlike corresponding motion estimation algorithms in image compression. The minimum of the estimates, using a sum-of-squared-differences (SSD) measure for ease of computation, forms a correlation or error surface. The principal curvatures of the surface along with the amplitude at each pixel site are the basis of a confidence measure. The confidence measure guides a global relaxation which
Fig. 8.23 Pipeline second-stage message structure.
propagates firm estimates to weak ones. The Gauss-Seidel relaxation method was used in this version of the AN routine. The call graph output by Quantify for the AN method, Fig. 8.24, shows that there is just one main processing stream (through compute-flow, shown qualitatively by the width of the connections). For small pixel motions, a two-step resolution pyramid is needed. Construction of a two-step Laplacian pyramid is a relatively small overhead, so parallelization was not implemented even though spatial smoothing is often parallelized. The coarse flow estimate formed a first data-farm stage. As the subsequent relaxation phase is over a decimated image, it is short-lived. Therefore, for ease of implementation, the second matching phase was also performed on the same data farm, Fig. 8.25. Finally, the fine level relaxation could be performed on a single-processor pipeline stage.
Fig. 8.24 Call graph for the AN sequential code.
The AN parallelization utilized two vertically-sliced image pyramids to enable data farming for the correlation search. Each row of flow vectors has four parents originating in two successive rows at a coarser level. In Fig. 8.26, the four-parent estimate sources for one particular fine-level pixel site are shown. An example projected flow vector (labeled 'a') for just one of the parents is included in Fig. 8.26. Projected vectors, previously adjusted for sub-pixel accuracy, have a maximum extension of three pixels in either direction. The fine-level search involves a further maximum one-pixel offset from the projected flow vector, for example, the vector labeled 'b' in Fig. 8.26. At each potential match point a 3 x 3 convolution patch was formed, for example, the dashed-line rectangle in Fig. 8.26. At each search point, the convolution patch, which was previously smoothed by a Gaussian filter, was correlated with a fixed correlation area in the second image; in the example, shown outlined by a bold line in Fig. 8.26. In a two-level pyramid, with a 3 x 3 search, to source one row of flow vectors, two rows at the coarse level were sent and eleven rows from each of the two image frames were needed at the fine level (Fig. 8.26 shows part of this strip for one image). Indexing was required to send the correct two coarse-level rows in any message as parity differed with the fine-level row index.
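A sketch of the fine-level search for one pixel site is given below; the fixed image width, the simplified indexing and the omission of border handling are illustrative assumptions.

/* Sketch of the fine-level SSD search: a 3 x 3 patch around the site in
 * frame one is compared, over a +/-1 pixel offset from the projected
 * flow vector, against frame two.  Indexing is simplified. */
#include <float.h>

#define W 316   /* image width, illustrative */

static float ssd3x3(const float *f1, const float *f2,
                    int x1, int y1, int x2, int y2)
{
    float sum = 0.0f;
    for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++) {
            float d = f1[(y1 + dy) * W + (x1 + dx)]
                    - f2[(y2 + dy) * W + (x2 + dx)];
            sum += d * d;
        }
    return sum;
}

/* Refine the projected flow (px, py) at site (x, y) by exhaustive
 * search over a one-pixel neighborhood, keeping the best offset. */
void refine_flow(const float *f1, const float *f2, int x, int y,
                 int px, int py, int *best_dx, int *best_dy)
{
    float best = FLT_MAX;
    for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++) {
            float e = ssd3x3(f1, f2, x, y, x + px + dx, y + py + dy);
            if (e < best) { best = e; *best_dx = dx; *best_dy = dy; }
        }
}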
Fig. 8.25 AN farm layout.
For pyramids greater than two levels in height and for larger images, the data-farm processing can be replicated across several pipeline stages. The data transfer burden was reduced relative to the LK algorithm, making the AN algorithm a more efficient parallelization. However, its accuracy over small motions was limited due to the quantized nature of the search. Though correlation is normally used if temporal differentiation is not possible because of a lack of frames, paradoxically an improvement may be to pre-process with temporal smoothing if such frames were available.
8.4.5 Phase-based routine
The FJ method consists of multiple stages of processing, Fig. 8.27, before full velocity estimates are reached.9 Timing tests on the sequential version revealed that 80% of computation time is taken up by filtering with Gabor pseudo-quadrature filters (to extract phase), which therefore formed the basis of the data-farming stage. Gabor filters are designed in 3D frequency space to match the features of interest but are implemented in the spatiotemporal
9 Phase component and full velocities are alternative estimates. The component estimate requires differentiating the phase.
Fig. 8.26 Example AN search area.
domain.10 Separate filters are needed for each spatial orientation, speed, and scale. Greater resolution is possible for filters corresponding to higher frequencies. Despite the use of the Heeger algorithm [154], based on a trigonometric identity, for making 3D Gabor filtering separable, the multiple passes through the data result in a computational load which is significantly greater than the other methods implemented. If each of orientation, speed, and scale were to be sampled n times, with a region of support m constant in all dimensions with separable filters, a square image size N, and T frames, then the time complexity is O(n^3 m N^2 T) multiplications per frame. It will be seen from Fig. 8.28 that there is also a significant data storage problem as typically twenty-two sets (one for each filter) of five frames each are needed after filtering. Five frames are needed later to perform numerical differentiation, which is a prelude to finding the phase. Had a greater range of velocities been expected then further sets of filters would be needed. The different filter estimates enable the detection of multiple motions in the same locality, as might occur at occlusion boundaries, through transparency, shadowing, and virtual motion arising from specularities (highlights). However, clustering of velocities at a single location was not performed. If one row is farmed out at any one time, as implemented, then parallel processing
10 In frequency space, the amplitude spectrum of an image patch moving at a constant velocity has the normal component of this velocity spectrum concentrated in a line around the origin, which feature is matched by a Gabor filter.
Fig. 8.27 FJ processing pipeline.
can proceed as far as component calculation of the OF vectors.11 If five rows were to be processed at any one time then all processing could be performed in parallel without the need for temporary storage of intermediate files, but with a corresponding increase in data communication.
11 Only the normal component of phase, that is, the component normal to the phase contour not to an image edge, is unambiguously resolved.
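For illustration, one 1-D tap set of a quadrature pair of Gabor filters might be generated as below; the parameter choices are assumptions, and separable application along each axis in turn is implied rather than shown.

/* Sketch of generating one 1-D Gabor quadrature tap pair for separable
 * spatiotemporal filtering.  Length, centre frequency and Gaussian
 * spread are illustrative, not those of the FJ code. */
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Fill even (cosine) and odd (sine) taps of length 2*half+1 for centre
 * frequency f0 (cycles/pixel) and Gaussian spread sigma. */
void gabor_taps(float *even, float *odd, int half, float f0, float sigma)
{
    for (int i = -half; i <= half; i++) {
        float g = expf(-(float)(i * i) / (2.0f * sigma * sigma));
        even[i + half] = g * cosf(2.0f * (float)M_PI * f0 * (float)i);
        odd [i + half] = g * sinf(2.0f * (float)M_PI * f0 * (float)i);
    }
}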
Fig. 8.28 Simplified FJ filtering data structure.
8.4.6 LK results
Initial work concentrated on the LK method because in [30] the method is reported to be among the most accurate for well-behaved image sequences. Of
the standard synthetic sequences, when the error is measurable, the Yosemite Valley scene is usually selected as most representative of natural images [221]. First tests took place on a single i860, and considered the effect of relaxing the accuracy. Table 8.8 shows that, for frame 9 of the sequence, no advantage in accuracy is to be gained from temporal smoothing over 15 frames (hence these results are not recorded), but dropping below 11 frames introduces significant error. It appears that a large temporal extent is needed, which brings a communication overhead on parallel machines. Notice that a general processing model that includes OF must account for non-causal filtering [249] (i.e. sufficient previous frames must be available). Some researchers [355, 369] have not used temporal smoothing, arguing that it is unnecessary for small pixel motions. Table 8.8 shows the error when no temporal smoothing was applied. The final row, parameter = 0.0, records the effect of turning off spatial smoothing as well.

Table 8.8 LK Method: Angular Error (Degrees) for Yosemite Sequence, Frame 9, Processing Times for One Processor

Filter   No. of   Angular   Standard    Density   Min.    Max.     Time
Param.   Frames   Error     Deviation   %         Error   Error    (s)
1.5      15       4.26      10.14       39.79     0.00    111.62   39.53
1.0      11       5.28      10.89       54.07     0.00    121.04   36.27
0.5      9        18.74     23.09       71.90     0.02    140.23   35.02

Without Temporal Smoothing
1.5      15       7.68      15.42       54.92     0.00    125.40   39.94
1.0      11       11.09     19.26       62.75     0.00    131.18   36.64
0.5      9        22.31     25.87       75.01     0.00    136.88   35.53
0.0      5        28.78     28.01       79.74     0.00    152.99   27.40
Paradoxically, the timings on the single i860, which exclude file I/O, record lower times for apparently more processing. When the smoothing loop is switched off, transfer of spatial results to the target array still occurs, implying that a memory interaction was responsible for the delay. In fact, in all sequential runs a swap file was stored on the host machine's local disc, requiring transfers over a SCSI bus. For filter parameter 1.5 a SparcStation 1+ takes 154.9 s, and 60.3 s with compiler optimization. Timings were taken, without I/O, for the LK method running with two active processors on each of the two pipeline stages. The message size was varied between pipeline stages by means of the data structures described in Section 8.4.3. The data size needed for each stage of the LK routine, Table 8.9, is somewhat different even if the number of image lines remained the same. However, the optimum times occurred, Table 8.10, when the message line sizes
were balanced, with little effect from changing between fifteen frames, a = 1.5, and eleven frames, a = 0.5. However, processing was not balanced between the two stages as only one of the processors in stage two was in constant use. This result was quickly determined by examining the event trace. The almost linear speed-up, compared to Table 8.8 timings, for four active processors is misleading, as the reduction in per-processor program size meant that no intermediate disc access was required. Importantly, increasing the number of processors in each pipeline stage did not result in any improvement in times, because the message-passing overhead escalated.
Table 8.9 LK Method: Message Sizes (bytes) with Filter Parameter = 1.5, Two Stages, 2 Processors per Stage

Lines per Message   Stage 1 Out   Stage 1 Return   Stage 2 Out   Stage 2 Return
1                   52144         6324             56888         10140
2                   56884         12644            63208         20240
3                   61624         18964            69528         30364
4                   66364         25284            78848         40476
5                   71104         31604            82168         50588

8.4.7 Other methods
Table 8.11 records timings for the AN method as additional stages of processing were incrementally introduced into a single farm pipeline. Considerable savings over workstation timings are possible. The two fixed overhead timings, from relaxation and from construction of the Laplacian pyramid, are also recorded. Pyramid construction precedes the parallel stage and in an implementation would be linked to I/O (not included, as I/O is a system-dependent overhead) on a single processor. The relaxation routine timing represents processing after the first and second stages of the coarse-to-fine processing, but the processing after the second stage dominates. The timings suggest that a balanced pipeline is possible for the AN method. FJ timings, Table 8.12, were taken for a full run on the workstation and for the parallelized Gabor filtering stage on the Paramid machine. However, the Paramid system proved inadequate to the task of processing larger images, although the remaining timings are unaffected by swap-file activity. From the timings taken, which indicate scaling, given an effective memory sub-system, using a parallel machine would pay off, if only to reduce the inordinately lengthy computation time on a workstation.
Table 8.10 Per-Frame LK Timings (s) with Various No. of Rows per Message for a 256 x 316 Sized Image, 2 Stages, 2 Processors per Stage

Filter Parameter a = 1.5, 15 Frames Temporal Smoothing

                           Stage Two
Stage One      1        2        3        4        5
1              18.05    17.42    17.66    17.47    17.55
2              18.02    12.12    12.66    12.33    13.40
3              17.88    12.56    10.70    11.36    11.91
4              17.92    12.10    11.22    10.12    11.61
5              17.01    12.56    11.50    11.19    13.87

Filter Parameter a = 0.5, 11 Frames Temporal Smoothing

Stage One      1        2        3        4        5
1              18.69    11.41    12.30    12.04    12.78
2              18.78    11.36    12.47    10.72    11.90
3              18.50    10.94    12.70    11.26    11.58
4              18.82    11.36    12.70    10.62    11.25
5              17.76    10.85    12.55    11.24    13.07
Table 8.11 Per-Frame AN Method Timings (s) Parallelizing: (a) Flow Vectors (b) Flow Vectors and Confidence Measures (c) with Coarse-level Calculations, and Sequential Overheads: (d) Relaxation (e) Pyramid Construction

                      Image Size (Pixels)
No. of Processors     100 x 100   150 x 150   252 x 316
SparcStation1+
1 (gcc)               171.3       458.8       1455.5
1 (opt. 3)            103.6       282.5       895.3
i860 workers
1                     23.5        60.0        192.2
2(a)                  15.2        34.3        122.7
2(b)                  13.2        29.7        107.2
2(c)                  12.3        27.8        100.1
4(a)                  12.2        27.5        98.1
4(b)                  9.4         20.9        75.6
4(c)                  8.2         18.2        65.2
4(d)                  3.8         8.4         29.8
4(e)                  0.3         0.8         0.9
8.4.8 Evaluation
Prototyping the three methods allowed a tentative evaluation to be reached. Other considerations being equal, the LK method is appealing to implement because it has the shortest processing time and because the complete structure can be parallelized. Given the regular nature of the processing, there is also the possibility of transfer to VLSI. However, further reductions in execution time could not be achieved on our general-purpose machine because of the data-transfer cost. The peak speed-up was 3.69 for four active processors processing a 252 x 316 image in 10.7 s with full accuracy. If detection of larger pixel motions were to be required, either a larger local patch would be needed, which would add to the data communication burden, or a multiresolution pyramid would need to be constructed. The correct method of extending the OF field to 100% density is not established. These difficulties are probably resolvable but would reduce the speed advantage that the LK method holds.

Table 8.12 Per-Frame FJ Method Timings (s)

              SparcStation 1+          Worker i860s
Size          gcc        opt. 3        1        2        4
100 x 100     834.1      257.9         152.7    105.8    51.4
150 x 150     2476.7     775.3         353.2    165.5    114.9
252 x 316     10500.1    3392.6        -        -        -
Correlation methods, especially without a search guided through an image pyramid, are prone to distortion from multi-modal results. There is also a question mark over how best to achieve sub-pixel accuracy. Conversely, correlation has a higher immunity to noise than methods involving numerical differentiation. However, because of the reduction in data transfer, this class of algorithm is worth persevering with. The per-row computational complexity of the AN method is O(d^4) while message sizes are O(d), implying that increasing the window size, d, will quickly bring a benign parallel regime. The peak speed-up for one stage using four processors was 2.95 at 65.2 s/frame for the AN method. The FJ method is presently simply too burdensome to consider in a production environment, probably being most suitable for research using small-scale images. Another difficulty is the low densities, reported as being below 50% [30]. Parallelization has some value in reducing job turnaround time (reducing the time for a 150 x 150 pixel image from 339.7 s to 114.9 s, a speed-up of 3.1 for four processors). It is not clear that simplified filters will significantly reduce the computation and storage overhead [105]. Reference [39] advocates determining orientation by finding the Eigenstructure of just one scatter matrix for a local patch in space-time (by analogy with forming an inertia tensor). How-
ever, the method shares a similar computational structure to the LK method. A link between the detection of spectral energy (and possibly phase) and spatial differentiation, via the Fourier derivative theorem, has long been noticed [6].
8.5 CONCLUSION
Four applications have been considered, all concerned in one way or another with processing video sequences. The Eigenfaces application stands out because a pipeline structure was imposed upon the software when the initial sequential code was developed. However, many systems are designed on workstations or PCs, where feedback loops in particular can be applied without imposing any extra execution time penalty. Yet it is balancing feedback which is the principal obstacle to efficient parallel solutions. Nevertheless, PPF is a real-world design system that can be pragmatically adjusted to accommodate different application constraints, and in particular will identify those constraints at the start of the design cycle, before a major investment in engineering design has been made. The study of H.263 is a case in point. Because these studies were on development hardware, to some extent I/O could be neglected. As all the case studies illustrate, handling large data bandwidths is essential in any working system and should be the first consideration, even before consideration of computational complexity. The value of PPF, which is evident in all the case studies, is that a clear path to performance enhancement is identified at the start. As deadlines are present in industry more than in research, the presence of a clear performance upgrade path is correspondingly more important. Naturally, PPF is not a panacea, and in Chapter 10 a number of counter-examples of systems for which a PPF solution is not suitable are presented.
9 Design Studies

This chapter contains three short case studies illustrating synchronous PPFs in the 2D signal processing domain. As each case study is centered on a single algorithm, the system analysis techniques developed in earlier chapters to balance varied and multiple algorithms are hardly in evidence. Nevertheless, the case studies show the strengths and weaknesses of the PPF approach, and are therefore useful. The first case study in Section 9.1 suggests various PPF solutions for the Karhunen-Loeve algorithm. This is a difficult algorithm to parallelize largely because of the need to centralize results before proceeding from the first phase to the second phase of the algorithm. The PPF is treated as a design tool for mapping from general-purpose hardware to specialist hardware. In the second study, Section 9.2, two varieties of wavelet transform are treated as PPFs. The filter-bank transform, using a Mallat architecture, is the more interesting parallelization, if large order filters are needed. The oversampled transform is almost embarrassingly parallel. A wavelet transform can provide the coefficients for vector quantization coding, the subject of the final case study, Section 9.3. Processor farms provide good speed-ups for the codebook-parallel algorithm, raising the interesting possibility of a combined wavelet transform, vector quantization coder PPF. The sharply asymmetric nature of vector quantization computation, between coder and decoder, has deterred inclusion in standard hybrid codecs. However, vector quantization codecs are used to decompress texture maps for PC graphics accelerators. Similarly, the lack of an equivalent method to zig-zag selection of transform (DCT) coefficients has deterred interest in the wavelet transform within standard codecs. However, zero-tree coding [321] does now make wavelet methods
possible within the MPEG-4 standard, which is expected to be implemented through a software toolkit.
9.1 CASE STUDY 1: KARHUNEN-LOEVE TRANSFORM (KLT)
The KLT differs from other common orthogonal transform algorithms [177], such as the Fourier transform, in two respects:
1. It is data-dependent; and
2. It is applied to an image ensemble.
In statistics, the columns of the matrix to be transformed represent realizations of a stochastic process. Therefore, it is legitimate to employ the KLT to reduce the dimensionality of the data. In image processing, each image can be viewed as a single realization of a stochastic process. Therefore, the transform should act on a sample set of images, from a possibly infinite population of images. The nomenclature of the Karhunen-Loeve Transform (KLT) [184, 222] is confused [128]. In statistics, the KLT is reserved for a transform that acts on any data set, while the term Principal Components Algorithm (PCA) is reserved for zero-meaned data. However, in this case study the term KLT refers to a transform acting on zero-meaned image data.
9.1.1 Applications of the KLT
The KLT is employed in multi-spectral analysis of satellite-gathered images [301] through the spectral signature of imaged regions. Significant data reductions are achieved in the storage of satellite images if the multi-spectral set are transformed to KLT space. The dimensionality in this application is relatively low. The KLT has also been applied to sets of face images [188], as was illustrated in Section 8.3. A reformulation of the KLT algorithm is utilized for the face recognition application, whereby the rows of each face image are stacked to form one vector per image. In [252], a way of reducing the computational complexity of the reformulation is demonstrated. In fact, the reformulation apparently is equivalent to the algorithm developed in Section 9.1.3. Unfortunately, the alternative KLT algorithm is not as clearly parallelizable as the algorithm of Section 9.1.3 because of the long vectors required. The face database and other databases are usually of high dimensionality. In this case, an iterative solution may be necessary [368]. The iterative solution relies on keeping the state of the KLT space, which does not suit a data-farming programming paradigm. The signal-to-noise ratio (SNR) will be improved by a KLT if additive Gaussian noise is present (see Fig. 9.1), resulting from incoherent sensors, as in
CASE STUDY 1: KARHUNEN-LOEVE TRANSFORM (KLT)
165
multi-spectral scanning. There is a variant of the KLT [211] suitable for coping with multiplicative noise such as speckle noise in multifrequency synthetic aperture radar. Finally, noise-dominated image sets may be analyzed through the low-component images.
9.1.2 Features of the KLT
The KLT has a number of features [80] which occur by virtue of the rotation of the data representation basis vectors (Fig. 9.1). Among the features relevant to the computation of a KLT are:
• The KLT transform achieves optimal data compression in the mean-square error sense.1
• The KLT projects the data onto a basis that results in complete decorrelation, though only if the data are first zero-meaned. Notice that the decorrelation is of statistical significance and does not correspond necessarily to a semantic decomposition.
• If the data are of high dimensionality, by reason of the above two properties it is possible to reduce the dimensionality.
• For some finite stationary Markov order-one processes with known boundary conditions — many natural scenes acquired by an appropriate sensor — the basis vectors are a priori harmonic sinusoids and hence a fast algorithm (the FFT-like sine transform) is available [176].
Another route to fast implementation is by neural nets employing Hebbian learning [262]. However, the lack of a general fast algorithm, because the covariance matrix Eigenvectors must be found in every case, makes it pressing to find a suitable parallel decomposition.
1 That is, e(k) = E[(x - x̂)^T (x - x̂)] is a minimum, where x̂ is the representation of x truncated to k terms. E is the mathematical expectation operator. The minimal orthonormal basis set is found by the method of Lagrangian multipliers, using the orthonormality of the basis vectors as the constraint.
9.1.3 Parallelization of the KLT
Consider a sample set of real-valued images from an ensemble of images. For example, these might be the same scene at different wavelengths or a collection of related images at the same wavelength. Create vectors with the equivalent pixel taken from each of the images, that is, if there are D images each of size M x N then form the column vectors x_k = (x_k^0, x_k^1, ..., x_k^(D-1))^T for k = 0, 1, ..., MN - 1, where x_k^d is the value of pixel k, at row i = 0, 1, ..., M - 1 and column j = 0, 1, ..., N - 1, in image d. Calculate the sample mean vector:

m_x = (1/MN) Σ_{k=0}^{MN-1} x_k    (9.1)
Fig. 9.1 The effect of the KLT change of basis on signal and noise (additive).
Use a computational formula to create the sample covariance matrix:

[C_x] = (1/MN) Σ_{k=0}^{MN-1} x_k x_k^T - m_x m_x^T    (9.2)

with superscript T representing the transpose. Equation (9.2) is appropriate if the image ensemble is formed by a stochastic process that is wide-sense stationary in time. Form the Eigenvector set:

[C_x] u_k = λ_k u_k,   k = 0, 1, ..., D - 1    (9.3)

where {u_k} are the Eigenvectors with associated Eigenvalue set {λ_k}. [C_x] is symmetric and non-negative definite, which implies that the {u_k} exist and are orthogonal. In fact, the Eigenvectors are orthonormal and therefore form
a well-behaved coordinate basis. The associated Eigenvalues are nonnegative. In any expansion of a data set onto the axes, the Eigenvalues index the variance of the data set about the associated Eigenvector.2 The KLT kernel is a unitary matrix, [V], whose columns, {u_k} (arranged in descending order of Eigenvalue amplitude), are used to transform each zero-meaned vector:

y_k = [V]^T (x_k - m_x)    (9.4)
The properties of [V] can serve as a check on the correct working of the algorithm. The time complexity of the operations is analyzed as follows, where no distinction is made between a multiplication and an add operation:
• Form the mean vector with O(MND) element-wise operations.
• Calculate the set of outer products and sum, Σ_{k=0}^{MN-1} x_k x_k^T, in O(MND^2) time.
• Form m_x m_x^T; subtract matrices to find [C_x]; and find the Eigenvectors of [C_x]. The Eigenvector calculation is O(D^3).
• Convert the {x_k} to zero-mean form in O(MND).
• Form the {y_k} by O(MND^2) operations.
Since the covariance matrix is, for the chosen multi-spectral application, too small to justify parallelization, the total parallelizable complexity is

O(MND) + O(MND^2)    (9.5)
Consider the KLT as applied to a single image in one-off mode. One way to parallelize the steps leading to (9.4) would be to send a cross-section through the images to each process, selecting the cross-section on the basis of image strips. The geometry is shown in Fig. 9.2. In a first phase, the mean vector of each cross-section image strip is found and returned to a central farmer along with a partial vector sum, forming the strip matrix:

[C_i] = Σ_{k in strip i} x_k x_k^T    (9.6)
for n strips. In a second phase, the farmer can find [V] from [C_x], which is now broadcast so that for each strip the calculation of {y_k} can go ahead. However, the duplication of sub-image distribution (once for the partial sums and once to compute the transform) is inefficient.
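A minimal sketch of the phase-one work on a single strip follows; the ensemble size, the array layout and the double-precision accumulators are illustrative assumptions.

/* Sketch of phase one on one image strip: accumulate the partial
 * pixel-vector sum and the partial outer-product sum that the farmer
 * later combines into the mean vector and covariance matrix. */
#define D 10   /* images per ensemble, illustrative */

/* strip[p*D + d] is component d of pixel-vector p within this strip. */
void klt_phase_one(const float *strip, int pixels,
                   double sum[D], double outer[D][D])
{
    for (int d = 0; d < D; d++) {
        sum[d] = 0.0;
        for (int e = 0; e < D; e++)
            outer[d][e] = 0.0;
    }
    for (int p = 0; p < pixels; p++) {
        const float *x = &strip[p * D];
        for (int d = 0; d < D; d++) {
            sum[d] += x[d];
            for (int e = 0; e < D; e++)
                outer[d][e] += (double)x[d] * x[e];   /* x x^T term */
        }
    }
    /* The farmer sums these partial results over all strips, divides
     * by MN, and subtracts m_x m_x^T to obtain [C_x] as in (9.2). */
}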
Fig. 9.2 Decomposition of the KLT processing.
A possibility is to retain the data that are farmed out in the first phase at the worker processes. On a distributed memory system with store-and-forward communication the first farming phase will have established an efficient distribution of the workload given the characteristics of the network. Therefore, the second phase will already have approximately the correct workload distribution. This is not a solution on a shared network of workstations as processor load and network load is time dependent. The solution is also not a general one since other two-phased low-level IP algorithms do not usually use the same data in both phases, though the time complexity can be similar. The method of finding a workload distribution by a demand-based method and then re-using the distribution for a static load-balance in subsequent independent runs may have wider applicability. An alternative static load-balancing scheme is to exchange partial results among the worker processes so that the calculation of matrix [V] can be replicated on each worker process. A suitable exchange algorithm for largegrained machines can be adapted from [336], if the processors can be organized in a uni-ring. 9.1.4
9.1.4 PPF parallelization
If a continuous-flow pipeline for KLTing image sets were to use temporal parallelism, by simply sending an image-set to each worker processor, then the communication and/or buffering overhead would be large given that it
would not be easy to overlap communication with computation. An idealized pipeline timing sequence of a decomposed KLT is given in Fig. 9.3. The covariance and transform threads are in fact the data-farmers, which in theory should use sufficient worker tasks to balance the time required to find the Eigenvectors of the covariance matrix. In a preliminary implementation of this elementary pipeline on a distributed system, both farmers were placed on the same processor (Fig. 9.4) since the same data are needed for forming the covariance matrix and for transforming to Eigenspace.3
Fig. 9.3 Ideal KLT pipeline timing.
In principle, double buffering of image sets allows the loading of one image set to proceed while the previous image set is transformed. However, for a VLSI implementation this implies a total buffer size of 5 Mbytes and upwards for (say) 10 images of 512 x 512 size. Additionally, a best-case estimate is made of the desired number of processors on the two farms to achieve a balance, based on (9.5). Given the relative time complexity of the Eigenvector calculation and the number of images in a set, the number of processors is impossibly large. The Eigenvector calculation may take relatively longer (compared to the other calculations) in a VLSI implementation. A suitable VLSI processor would combine a RISC core for the Eigenvector calculation with an array for regular calculations, as implemented in [396]. The array can work in systolic or SIMD mode. The first pipeline stage can be further partitioned into calculation of the mean vector and the outer-products, since the two calculations are independent and therefore can take place in parallel. Additionally, the second stage
3 A worker thread can also be placed on the same processor to soak up any spare processing capacity.
Fig. 9.4 KLT pipeline partitioning.
calculations can be split further between reducing the image set to zero-mean form and transforming the image set, though these calculations are not independent. However, the reduction to zero-mean form is independent of the Eigenvector calculation and could take place in parallel with that calculation. These partitioning possibilities are shown in Fig. 9.5. Assume that the two farms in the first pipeline partition can be operated in parallel, by means of two farmers on the same processor feeding from a common buffer. Since the maximum time complexity of each stage of the new pipeline is reduced from O(MND) + O(MND^2) to O(MND^2), the number of processors on any one farm that will increase pipeline throughput is reduced. However, the bandwidth requirements are increased. The pipeline of Fig. 9.5 is relevant as the basis of a VLSI scheme, possibly through a systolic array. For a large-grained parallelization, the arrangement shown in Fig. 9.4 of merging the Eigenvector calculation into the work of the second farmer is practical. The scheduling regime on the processor hosting the two farmers is round-robin for fairness. Since the time complexity of both stages of the pipeline is the same, it is now easily possible to scale the throughput in an incremental fashion.
Fig. 9.5 Alternative partitioning of the KLT pipeline.
9.1.5 Implementation
Table 9.1 records timings for two pipelines on the Paramid multicomputer, for a variety of image set sizes, that is the number of images in an ensemble. The size of a square image within any set was also varied. To discount I/O times, the same image set, loaded into main memory, was reused. The Paramid normally loads images via a SCSI link which would create an I/O bottleneck. Local buffers to store three image lines were placed at the workers. In the first pipeline, each farmer occupied its own processor, two workers were employed in each farm, and the Eigencalculator was also placed on a separate i860. In order to increase the size of the farms to three workers, the Eigenvector calculations were switched to the transputer associated with the first farmer. However, the second pipeline showed an appreciable drop in performance. Equivalent times were recorded (not shown in the table) when the Eigenvector calculations were shifted to the second farmer's transputer. The difficulty of improving the throughput illustrates the need to consider special-purpose hardware.
9.2 CASE STUDY 2: 2D-WAVELET TRANSFORM
Wavelets and wavelet transforms have been one of the important developments in signal processing and image analysis of late, with applications in data compression [231], image processing [230], and time-frequency spectral estimation [196, 11]. Although several efficient implementations of wavelet transforms have been derived, their computational burden is still considerable. This case study describes PPF parallelizations of two of the most common manifestations of the wavelet transform: the discrete wavelet transform
Table 9.1 Timings (s) for Parallel Pipelines on a Paramid

                 Pipeline (1)            Pipeline (2)
              Image Size (Pixels)     Image Size (Pixels)
Set Size        128       256           128       256
4               0.73      2.58          1.93      4.88
5               0.88      3.19          2.06      5.37
6               1.02      3.59          2.14      5.81
7               1.12      4.13          2.27      6.24
8               1.28      4.80          2.40      6.73
9               1.39      5.20          2.53      7.23
10              1.52      5.74          2.64      7.70
used in image processing and data compression, and the oversampled wavelet transform used in time-scale spectral estimation. The parallel environment in which the algorithms were implemented comprised two TMS320C40 boards with a total of six processors (one dual-C40B board and one quad-C40B module board) [223]. An alternative medium-grained parallelization for an adaptive transform is given in [360] and fine-grained implementations are surveyed in [51].
9.2.1 Wavelet Transform
Wavelet analysis is performed using a prototype function called a 'wavelet' which has the effect of a bandpass filter. Other bandpass filters (wavelets) are scaled versions of the prototype [370, 260]. The wavelet transform (WT) has been described in a number of different ways. It can be viewed as a modification of the short-time Fourier transform (STFT) [260], or as the decomposition of a signal s(t) into a set of basis functions [196], or as an equivalent of sub-band signal decomposition [36]. Throughout all these descriptions there is one constant, the equation for the wavelet transform:

WT(a, b) = (1/√a) ∫ s(t) h*((t - b)/a) dt    (9.7)
where h*(t) is the complex conjugate of the wavelet h(t), a is a term describing scale, b represents a time-shift value, and the term 1/√a preserves energy. There are several types of WT:
• The continuous wavelet transform (CWT), in which the time and time-scale parameters vary continuously;
• The wavelet series (WS) coefficients, in which time remains constant but time-scale parameters (b, a) are sampled on a so-called 'dyadic' grid [309]; and
• The discrete wavelet transform (DWT), in which both time and time-scale parameters are discrete.
Depending on the application, one of these types of WT may be selected over the others. The CWT is best suited to signal analysis [11]; WS and DWT have been used for signal coding applications, including image compression [20], and various tasks in computer vision [231, 230].
9.2.2 Computational algorithms
Several discrete algorithms have been devised for computing wavelet coefficients. The Mallat algorithm [231, 230] and the 'a trous' algorithm [164] have been known for some time. Shensa [322] originally provided a unified approach for the computation of discrete and continuous wavelet transforms. Rioul and Duhamel [309] proposed the use of fast convolution techniques (such as the FFT) or 'fast-running FIR (finite impulse response) filtering' techniques in order to reduce the computational complexity. For the case of large filter lengths, L, on the same order of magnitude as the data length, N, the FFT technique based on the 'a trous' algorithm reduces the computational complexity per computed coefficient from 2L to k log2 L. For the case of short filters, where L << N, fairly modest gains in efficiency are obtained. Clearly, the FFT-based fast wavelet transform algorithms provide significant computational savings for large filter lengths. However, DWTs have so far been mostly used with short filters [309]. Furthermore, for a long one-dimensional array or two-dimensional data, such as in image processing, the execution time of wavelet transforms, even in the case of 'fast' FFT-based implementations, is still large.
9.2.3 Parallel implementation of Discrete Wavelet Transform (DWT)
Borrowing from the field of sub-band filters, for example [333], the idea of implementing the DWT by means of a sub-band decomposition has emerged [230, 322, 309]. Sub-band decomposition maps onto a hierarchical structure (Fig. 9.6 [231, 36]) in which half the samples from the previous stage are processed in the current stage. The output of the tree-like structure is equivalent to a bank of adjacent band-pass filters, spread logarithmically over the frequency range, unlike the short-time Fourier transform (STFT), which is equivalent to a set of linearly dispersed filters [260]. The standard DWT algorithm, implemented directly as a filter bank, is already fast [309, 299], as a result of the decomposition of the computation into elementary cells and the sub-sampling operations ('decimation') which occur at each stage. The complexity of the DWT is linear in the number of input samples, with a constant factor that depends on the length of the filter.
Fig. 9.6 Block diagram of the DWT implemented as a sub-band decomposition using a filter bank tree.
The total complexity of the DWT is bounded by

C_0 N (1 + 1/2 + 1/4 + ... + 1/2^(J-1)) < 2 C_0 N    (9.8)

where C_0 is the number of operations per input sample required for the first filter bank [260]. A potential drawback for a straightforward parallelization of the DWT is that the computational complexity declines exponentially at each stage. However, (9.8) remains the key to a parallel implementation of the DWT based on a PPF. From (9.8) it is clear that a pipeline of J stages (where J is the number of octaves over which the analysis is performed), with each of them containing half as many workers as the previous farm, would lead to a totally balanced topology. However, this fine-grained topology would require a large number of processors and introduce significant communication overheads. A more practical solution, still based on pipeline processor farming, is the topology presented in Fig. 9.7(a). This can be regarded as two-stage pipeline processing:
• The first farmer is mainly responsible for reading the data and distributing it to the first stage workers, which compute the first N/2 filter coefficients (i.e., the first filter bank). This requires C_0 operations per input sample.
• The second stage of the pipeline collects the results from the first stage of the pipeline and computes the remainder of the coefficients. From (9.8) it can be seen that the total number of computations carried out by the second stage tends to C_0 as the number of octaves analyzed increases. Hence, the number of processors for the second farm should be the same as for the first farm to balance the pipeline.
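One analysis stage of the filter-bank DWT might be sketched as below; the wrap-around edge handling and the argument layout are illustrative assumptions, with h[] and g[] standing for the low-pass and high-pass (for example, Daubechies) coefficients.

/* Sketch of one analysis stage of the filter-bank DWT: convolve the
 * input with the low-pass and high-pass filters and decimate by two.
 * Edge handling by wrap-around is an illustrative choice. */
void dwt_stage(const float *in, int n,          /* n assumed even */
               const float *h, const float *g, int taps,
               float *approx, float *detail)    /* each of length n/2 */
{
    for (int k = 0; k < n / 2; k++) {
        float a = 0.0f, d = 0.0f;
        for (int t = 0; t < taps; t++) {
            int idx = (2 * k + t) % n;          /* wrap-around at the edge */
            a += h[t] * in[idx];
            d += g[t] * in[idx];
        }
        approx[k] = a;      /* fed to the next octave's stage              */
        detail[k] = d;      /* wavelet coefficients output for this octave */
    }
}

Successive octaves repeat the same call on the approximation output with n halved, which is the source of the geometric fall-off in work expressed by (9.8).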
Fig. 9.7 Pipeline architecture of DWT transform.
Figure 9.7(a) represents a general scalable topology in terms of the number of workers that can be connected to farmers F1 and F2. However, due to the limited number of processors available, only two workers were employed in each of the two stages of the pipeline. In this case the topology of Figure 9.7(a) can be modified slightly, bearing in mind that farmers F2 and F3 and one of the workers in the second pipeline stage can be implemented on the same processor without affecting the performance. This follows from the following analysis: F2 performs only the required permutations of the data outputs received from workers W21 and W22 after they have finished the octave analysis. Hence, F2 is idle during the working time of W21 and W22. Consequently, the topology of Fig. 9.7(b) with only five processors was used for the implementation. Figure 9.8 compares performance of the parallel topology shown in Fig. 9.7(b) with a sequential implementation on a single processor for different data and filter lengths. The ordinates record 'relative time performance', that is, all timings are on a scale in which sequential processing takes unit time. Timings include data loading from the local PC machine, which had local storage facilities, and the overhead from transferring data to worker processors. A set of Daubechies wavelet filters with coefficient lengths from 4 to 40 was implemented [289]. A sequence of five one-dimensional synthesized signal 'frames' was used as input. Due to memory limitations on the four C40 module boards, the data length was restricted to a maximum of 1024 points. For filter order 4 in particular the communication overhead is greater than the computation. Where computation is much smaller, for example for data length 128, filter order 4 performs approximately three times more slowly than a sequential implementation due to the poor ratio of compute time to communication time. From results in Fig. 9.8 it can be seen that the parallel implementation of the wavelet transform outperforms the sequential algorithm in terms of speed-up for N > 256 and L > 12. The best performance is obtained for N = 1024 and L = 40, in which case a speed-up of almost three is achieved with four worker processors. It is clear from Fig. 9.8 that the performance advantage of the parallel implementation increases compared to a sequential implemen-
Fig. 9.8 Comparative overall performance of parallel topology and sequential single processor as a function of data and filter length.
tation as the data length increases. This finding is particularly important if one bears in mind that a wavelet transform of a d-dimensional array is most easily obtained by transforming the array sequentially on its first index (for all values of its other indices), then on its second, and so on. Therefore this parallel implementation of the DWT is suitable for large filter orders, which are more 'regular', preserving image smoothness [58], while short filters should be processed sequentially. In computer vision, the use of differing-sized filters, including large-sized filters, stems from [233], though Laplacian of Gaussian filters were used in that instance. Low-order filters are commonly preferred in image compression because of the reduced computation, but in [20] a filter of order 15 is used as part of a biorthogonal coding/decoding scheme, which makes for smoothing and linear phase in the resulting images.
9.2.4 Parallel implementation of oversampled WT
Because of the reduction in computation, the octave band analysis, described in Section 9.2.3, is appropriate for image compression, and has also been used in image enhancement and image fusion [48]. For other purposes, for example [385], a less sparse coverage of the frequency range is required. Previously, spectrograms generated by the STFT have been employed for this purpose. In Fig. 9.9(a) [230, 260, 36], octave band time-frequency sampling is contrasted to the fuller scheme of Fig. 9.9(b). Equation (9.9) is a discrete version of (9.7), suitable for production of a spectrogram (where a normalized sampling interval is assumed):

WT(a, b) = (1/√a) Σ_{i=0}^{N-1} s(i) h*((i - b)/a)    (9.9)
Fig. 9.9 Sampling of the time-scale plane: (a) the discrete wavelet transform; (b) finer sampling in scale and no sub-sampling.
where N is the number of samples in the signal and i = 0, 1, ..., N - 1. The signal is now oversampled within each octave, unlike the subband decomposition, which is critically sampled. Additionally, within each of J octaves one can have M voices [309], resulting in an indexing of the scale factor, a, as:

a = 2^(j + m/M),   m = 0, 1, ..., M - 1    (9.10)
where j = 0, 1, ..., J. The discrete version of the CWT results in a magnitude and phase plot with N samples along the time-shift axis and JM samples along the scale axis. Using (9.9) to generate a grid with M voices per octave, as in Figure 9.9(b), requires 2NJM multiplications/point and 2MJ(N − 1) additions/point [309], with the 'à trous' algorithm in which a high- and a low-pass filter are used at each stage. The algorithm implemented in this case study originates from analyzing the CWT in the frequency domain. For even wavelets, by virtue of the Convolution theorem and as the Fourier transform is shift invariant:
where F[·] is the Fourier operator. The FFT is pre-calculated. Any suitable FFT is possible, but if N = 2^n then a split-radix FFT, which uses a four-way butterfly for odd terms of a radix-two transform, has minimal time complexity. The overall complexity for real-valued data when performed sequentially is given as [6 + (n − 3) + 2/N]MJ
multiplications/point and [6 + (3n − 5) + 4/N]MJ additions/point [309, 36], assuming three real multiplications and adds for each complex multiplication. The FFT-based algorithm is selected not only because it results in reduced complexity compared with the direct implementation given by (9.9), but also because it represents a straightforward parallel implementation, as shown in Fig. 9.10. Processor 1 reads the data and estimates the signal's FFT. The second layer of processors can be increased to as many as JM. Each processor performs a convolution between the signal's FFT and the respective wavelet's FFT and then calculates the IFFT in order to generate the corresponding wavelet transform coefficients. It should be noted that the wavelet FFT required by each worker can be pre-calculated and stored in a look-up table by that processor. The third-layer processor is responsible for arranging the data in the proper order and storing them in the appropriate format.
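A second-layer worker therefore reduces to a frequency-domain multiply followed by an inverse FFT per assigned voice. The sketch below illustrates that inner loop; the ifft() routine, the data layout, and the communication calls are placeholders rather than the library interfaces used in the original implementation.

/* Sketch of one second-layer worker in the FFT-based CWT pipeline.
 * Assumptions: the signal FFT X[] (N complex values) has been received
 * from the first-layer processor, and the FFT of each scaled wavelet
 * assigned to this worker, PSI[v][], has been pre-computed and stored
 * in a look-up table.  ifft() is a placeholder for any inverse FFT.
 */
#include <stddef.h>

typedef struct { float re, im; } cplx;

extern void ifft(cplx *buf, size_t n);           /* placeholder inverse FFT  */
extern void send_to_collector(int voice, const cplx *w, size_t n);

void worker_process_frame(const cplx *X,         /* FFT of the input frame   */
                          cplx *const *PSI,      /* pre-computed wavelet FFTs */
                          int num_voices, size_t N)
{
    cplx buf[1024];                              /* N <= 1024 in this study  */

    for (int v = 0; v < num_voices; v++) {
        /* Point-wise product X(w) * conj(PSI_v(w)) (Convolution theorem). */
        for (size_t k = 0; k < N; k++) {
            float a = X[k].re, b = X[k].im;
            float c = PSI[v][k].re, d = -PSI[v][k].im;   /* conjugate */
            buf[k].re = a * c - b * d;
            buf[k].im = a * d + b * c;
        }
        ifft(buf, N);                  /* back to the time (shift) domain */
        send_to_collector(v, buf, N);  /* third-layer processor reorders  */
    }
}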
Fig. 9.10 Pipeline architecture implementation of finer sampling in scale of WT.
Due to the number of processors available, the topology investigated in this case study included up to four processors in the second layer. Hence, the analysis was carried out for four octaves or two octaves with two voices. A stream of five one-dimensional 'frames' of synthetic data was used as the input signal. A Morlet wavelet (frequency-modulated Gaussian) was used because of its direct scale/frequency property4, making it pertinent to time-frequency analysis [36]. For the architecture of Fig. 9.10 with four worker processors
4 A Gaussian is the only waveform that fits Heisenberg's inequality with equality.
a speed-up of 4.32 was obtained compared to the sequential implementation. The wavelet length is not included as a parameter in this case because the algorithm uses zero padding on the sides of scaled wavelets to match the data length in order to perform the convolution in the frequency domain. Hence, the time performance is independent of wavelet length. This finding confirms that the efficiency of the parallel implementation in this case is very high, and that parallel implementation will therefore be especially useful in the analysis of large numbers of octaves and many voices.
9.3 CASE STUDY 3: VECTOR QUANTIZATION
Vector quantization (VQ) has been widely investigated for image and video data compression [255, 4, 314, 71]. Vector quantization is a generalization of scalar quantization, where a group of samples is jointly quantized instead of each sample being quantized individually [129]. This offers the advantage that dependencies between neighboring data can be directly exploited. In general, a vector quantizer, Q, of dimension k and size N can be defined as a mapping of the vectors or points of the k-dimensional Euclidean space R^k into a finite subset Y of N output points. That is,
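Writing the quantizer as Q (the symbol is introduced here for convenience), the mapping is

$$ Q : \mathcal{R}^{k} \rightarrow \mathcal{Y}, \qquad Q(\mathbf{x}) = \mathbf{y}_{i}, \quad \mathbf{x} \in \mathcal{R}^{k}, $$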
where Y = {y_i ∈ R^k : i = 1, 2, ..., N} is the set of reproduction vectors or code vectors, referred to as the reproduction alphabet or codebook. The coding rate or resolution of such a VQ is determined by r = ⌈log2 N⌉/k, measured in bits per sample, assuming that a fixed-length binary codeword is assigned to each code vector. Figure 9.11 illustrates the block diagram of a basic image compression system based on VQ. Typically, vector quantization operates using a pre-defined set of prototype code vectors (codebook), in such a way that each input vector is reconstructed from its best-matched code vector available in the codebook. Vector quantization offers a simple decoding process, where the index of the selected code vector is used to produce the output vector through a look-up table operation. On the other hand, the selection of the best-matched code vector typically involves expensive computations. The encoding complexity of full-codebook-search VQ increases exponentially with the vector dimension and the coding rate. The main drawback of vector quantization is the fact that the complexity of the encoder imposes restrictions on the size of the codebook that can be used in practice. This can restrict the efficiency of VQ-based compression systems for two main reasons:
1. Only blocks of small dimension (typically, 4x4) can be used. However, VQ operating on vectors of larger size (e.g. 8x8) can result in higher compression ratios due to the fact that dependencies between neighboring vectors can be exploited.
2. A large codebook is essential for applications where high quality coding (e.g. super-high-definition images) is required, or in image sequence coding where the VQ codebook should be able to respond to the changes in input statistics.
Fig. 9.11 Block diagram of a general image VQ coding scheme
9.3.1 Parallelization of VQ
Different methods have been suggested to reduce the encoding complexity at the expense of sub-optimal coding performance. Typically, these techniques involve imposing a certain structure on the VQ codebook, so that an unconstrained search of all effective code vectors is avoided. Examples of sub-optimal VQ techniques are tree-structured VQ, product-code VQ, and lattice-based VQ [129, 315]. An alternative approach has been to exploit parallelism in special-purpose VLSI implementations of VQ. Early architectures used a pipeline of fast processors, where each processor executes part of the distortion measure [5, 76], while more recently, data parallelism was exploited by partitioning the codebook over a number of devices [81]. Although these approaches have achieved real-time performance, the solutions are expensive (typically requiring up to 100 chips), and inflexible. An alternative is to utilize general-purpose processors in a PPF. The advantage of using general-purpose processors is that they perform the encoding task of the full-codebook search VQ, so that a high-throughput optimal vector quantizer can be realized, but at the same time provide the flexibility to allow any desired trade-off to be made between algorithm speed-up, peak signal-to-noise ratio (PSNR), and bit-rate. It is relatively straightforward to apply fast codebook search algorithms to processor farms (which exhibit automatic load
balancing between processors) to achieve further speed-ups, whereas this is often impractical for synchronized dedicated VLSI implementations. Parallel processing thus provides a mechanism whereby high-speed VQ-based compression systems operating with blocks of large dimension and/or codebooks of large population become practical.
9.3.2 PPF schemes for VQ
Two different schemes are possible for parallelizing the VQ encoding algorithm:
1. Image data parallelism. The entire image is partitioned into a number of sub-images which are distributed over worker processors. Each worker processor then needs to perform an exhaustive search of the entire codebook to select the best-matched available code vector. This scheme is illustrated in Fig. 9.12, assuming that four worker processors are employed in the configuration.
Fig. 9.12 Block diagram of image-parallel approach.
2. Codebook parallelism. Each worker processor can perform the encoding process on its own portion of the codebook. Upon receiving the same image block, each worker processor then needs to search a smaller part of the entire codebook to select the closest code vector in the corresponding sub-codebook. However, the partial encoding results from the worker processors need to be compared at a final stage, where the best-matched available code vector is computed according to the minimum distortion criterion. Figure 9.13 illustrates this scheme assuming four worker processors.
The first approach is straightforward to apply since there is no need to further process the encoding results received from the worker processors, but it has the disadvantage that the entire codebook needs to be stored at each
worker processor. This can impose a limitation on the size of the codebook that can be employed. In order to circumvent this drawback, we implemented the second parallelization scheme. To achieve further speed-up, the selection of the final code vector (through comparison of the intermediate encoding results) is assigned to a separate processor, referred to as the collector. The advantage is that the encoding task of the next input vector is overlapped with the final comparison process of the current input vector.
Fig. 9.13 Block diagram of codebook-parallel approach.
Hence, the parallelization of the VQ encoding algorithm comprises three processes which are mapped onto a three-stage pipeline configuration as follows:
Distributor. This process partitions the input image into rectangular blocks of m x n size (e.g. 4x4 or 8x8) and sends each block (input vector) to every worker processor.
Worker. This process performs the encoding task on the received input vector using its own sub-codebook and sends the index of the selected code vector and the corresponding distortion value for the particular input vector to the collector process. The worker process is duplicated S - 2 times, where S is the total number of processors in the configuration.
Collector. This process receives the indices and the corresponding distortion values for the particular image block from the worker processors and compares the partial results to find the best-matched coding index according to the minimum distortion criterion.
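A minimal sketch of the worker and collector roles is given below, assuming a mean-squared-error distortion measure; the function names, the sub-codebook layout, and the omitted message-passing calls are placeholders rather than the interfaces of the actual implementation.

/* Sketch of the codebook-parallel VQ encoder processes (hypothetical names).
 * Each worker searches only its own sub-codebook; the collector picks the
 * globally best match by the minimum-distortion criterion.
 */
#include <float.h>
#include <stddef.h>

/* Exhaustive search of a sub-codebook for the closest code vector. */
void worker_encode(const float *x, size_t k,          /* input vector (k samples) */
                   const float *subcb, int ncodes,    /* ncodes rows of k values  */
                   int *best_index, float *best_dist)
{
    *best_dist = FLT_MAX;
    *best_index = -1;
    for (int i = 0; i < ncodes; i++) {
        const float *y = subcb + (size_t)i * k;
        float d = 0.0f;
        for (size_t j = 0; j < k; j++) {
            float e = x[j] - y[j];
            d += e * e;                               /* squared-error distortion */
        }
        if (d < *best_dist) {
            *best_dist = d;
            *best_index = i;
        }
    }
}

/* Collector: combine the partial results from the S-2 workers. */
int collect_best(const int *index, const float *dist, int nworkers,
                 int codes_per_worker)
{
    int best = 0;
    for (int w = 1; w < nworkers; w++)
        if (dist[w] < dist[best])
            best = w;
    /* Map the winning local index back to a global codebook index. */
    return best * codes_per_worker + index[best];
}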
9.3.3 VQ implementation
The VQ encoder was parallelized in two steps. In the first step, a sequential Sparc2 implementation was ported to a single processor of a Meiko Computing Surface. Then, the implementation was decomposed into three different processes as outlined in Section 9.3.2. The parallel application is designed such that the number of the processors in the configuration is specified by the user as a runtime argument. Hence, the user does not need to modify the application as the size of the processor network is altered. Although the parallelization method for VQ is applicable to any data compression application that employs vector quantization, the results reported in this case study are based on the encoding of still images. The spatial resolution of the test images used was 512x512 pixels. Three different codebook populations, namely, N = 256, 1024 and 4096 for vector dimensions of 4x4 and 8x8 were used to evaluate the performance of the parallel implementation. Figure 9.14 illustrates the speed-up performance of the algorithm as the number of worker processors is increased when the vector dimension is set at 4x4. As can be seen, the performance of the implementation increases fairly linearly up to the point where the communication links become saturated. Saturation occurs when there are 10 workers and 20 workers in the parallel configuration for codebook populations of 256 and 1024 code vectors, respectively. As the communication requirements are fixed, but computations increase linearly with the codebook size (for the same vector dimension), if the codebook size is increased, then the load at the worker processors becomes larger and hence better speed-ups are obtained. The maximum speed-up achieved with a codebook population of 4096 and 30 workers was 25.6. Further increases in execution speed could be achieved for this codebook if more processors were available, as the communication links are not saturated. When the vector dimensions are increased to 8x8, the corresponding speedup performance as a function of the number of worker processors for different codebook populations is shown in Fig. 9.15. The graphs exhibit similar characteristics to the case of a 4x4 block size, however, better speed-up performance figures are obtained for an 8x8 block size due to the increased work load. As the task size increases, the execution time required to perform the subcodebook search by the workers increases, whereas the cost of transmitting the intermediate results to the collector remains the same. For a codebook population of 4096 codevectors, a maximum speed-up of 27.75 was obtained. Figures 9.16 and 9.17 illustrate the execution timings obtained by the parallel implementation for the 4x4 and 8x8 block size cases respectively. By selecting the points where the execution time is a minimum for a particular codebook population, the effect of increased codebook population on the execution time performance of both sequential and parallel implementations can be examined. Figures 9.18 and 9.19 show the execution time of the encoding process as the codebook population is increased for the sequential and the
Fig. 9.14 Speed-up graphs for k = 4x4.
Fig. 9.15 Speed-up graphs for k = 8x8.
32-processor parallel implementation. It can be seen that the execution time of the parallel encoding process even for the largest (e.g. N = 4096) codebook population is still well below the execution time of the sequential implementation with the smallest (e.g. N = 256) codebook population. In general, a larger VQ codebook population results in better quality of the compressed image at the expense of extra bit rate. There are applications, such as super-high-definition TV and medical images, where perceptually transparent quality is essential. For the well-known test image LENA 512x512x8 and vector dimension k = 4x4, using N = 4096 rather than N = 256 code vectors provided a PSNR of 33.78 dB instead of 30.11 dB.
Fig. 9.16 Execution timings for k = 4x4.
Fig. 9.17 Execution timings for k = 8x8.
Finally, Table 9.2 illustrates the advantage of using large-dimensional blocks in low bit-rate coding. In the table, two vector quantizers operating on blocks of different size are compared in terms of PSNR, compression ratio and execution time. It can be seen that the vector quantizer which operates on 8x8 blocks and N = 4096 code vectors gives similar PSNR results to the one operating on 4x4 blocks and N = 256 code vectors. However, the former leads to a compression ratio of 42:1 rather than 16:1 for the latter. This corresponds to a reduction by a factor of 2.625 in the total amount of data required to represent the compressed image. Although the sequential implementation of VQ 8x8 is 15.49 times slower than that of VQ 4x4, the parallel VQ 8x8 is 1.79 times
Fig. 9.18 Comparison of sequential and parallel implementations for k = 4x4.
Fig. 9.19 Comparison of sequential and parallel implementations for k = 8x8.
faster than the sequential VQ 4x4. Hence, it can be concluded that parallel processing can be used to enhance the overall performance of VQ-based compression systems, as well as to speed up their execution, and that by trading off between image compression, PSNR and speed-up, improvements to all three parameters can be achieved simultaneously.
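The bit rates in Table 9.2 follow directly from the coding-rate expression r = ⌈log2 N⌉/k given earlier; for 8-bit source pixels the compression ratio is 8/r. As a check on the table entries,

$$ r_{4\times 4,\;N=256} = \frac{\log_{2} 256}{16} = 0.5\ \text{bpp} \;\Rightarrow\; 16{:}1, \qquad r_{8\times 8,\;N=4096} = \frac{\log_{2} 4096}{64} = 0.1875\ \text{bpp} \;\Rightarrow\; \approx 42{:}1 . $$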
9.4 CONCLUSION

The three design studies in this chapter all illustrate the fact that, even with a single algorithm implementation, the need often arises for extensive experimental investigation into the trade-off between algorithm performance and execution speed.
Table 9.2 Performance Evaluation of Parallel VQ for Still Image Coding

Metric                             k = 4x4     k = 8x8
Codebook population                N = 256     N = 4096
PSNR (dB)                          30.114      29.105
Bit rate (bit per pixel)           0.50        0.1875
Compression ratio                  16:1        42:1
Execution time (s) - Sequential    591.36      9164.80
Execution time (s) - Parallel      88.32       328.96
In such circumstances, a generic parallel implementation using a PPF provides a useful framework that allows rapid reconfiguration of processors and optimization of all aspects of performance. In the KLT design study, PPFs provided the basis for the parallelization, but general-purpose processors remain a less than ideal implementation architecture. Due to the symmetry of the two phases of the algorithm, some interesting ways of balancing the workflow emerged. The wavelet transform design study identified two parallelizations which outperform the corresponding sequential implementation as long as the filter order is large. Though the VQ algorithm makes use of a single data farm in a dataflow PPF layout, when combined with the choice of vector-generating algorithm, a two- or three-stage synchronous PPF seems a legitimate architecture for this variety of coder. The VQ parallelization enables combined optimization of image compression and PSNR with speed-up, so that additional processors can be used to achieve better quality, and not simply faster, image coding.
10
Counter Examples

It is probably as important to recognize a system which does not fit a particular paradigm or pattern as the reverse. To this end, there now exist pattern books [122, 46], containing collections of examples of software architectures. This development would come as no surprise to building architects, as ever since Vitruvius, writing in the first century B.C.E. [372], patterns have been collected for re-use.
The first case study, on speech recognition (Section 10.1), shows one of the unsuitable cases for PPF. Though undoubtedly there are data farms to be extracted, there is no pipeline structure. The processing cycle revolves around the update of a large global data structure which is difficult to partition. Aside from the need to distribute large amounts of data, the data structure itself is dynamic, making static load-balancing problematic. A symmetric multiprocessor is a preferred solution for this type of system, as the global data structure can be accommodated in shared memory.
The second example, model-based image coding (Section 10.2), has a strong sequential structure. In addition, pipelining is prohibited by multiple feedback loops combined with global data dependencies. In the case of the H.261 coder, Section 8.1, a single feedback dependency could be coped with, but model-based coders do not have this advantage. The lack of a clear parallel structure may make these codecs equally unsuitable for VLSI implementation. There may only be scope for fine-grained parallelism, as exhibited by recent very long instruction word (VLIW) processors for multimedia such as the TMS320C62 [325] or Philips' TriMedia [275].
Finally, in the microphone beam-forming example (Section 10.3), there is a strongly fine-grained synchronous structure to processing. Though digital
signal processors (DSPs) with hardware assistance are the nearest available processor architecture, the resources of the DSP are scarcely used, so DSPs are not an ideal solution. Field-programmable gate arrays (FPGAs) may well be suitable for this application, provided that the clock speed is sufficient and that enough gates are available for fixed-point arithmetic on multiple input streams. Recently, the number of gates in FPGAs has increased to 1 million, for example within Xilinx's Virtex FPGA [394], encouraging this approach.
10.1 CASE STUDY 1: LARGE VOCABULARY CONTINUOUS-SPEECH RECOGNITION
Large-vocabulary continuous-speech recognition (LVCR) speaker-independent systems which integrate cross-word context-dependent acoustic models and n-gram language models are difficult to parallelize because of their interwoven structure, large dynamic data structures, and complex object-oriented software design. Two varieties of LVCR system exist: a pipelined structure in which the components of acoustic matching and language modeling are separated; and an approach which integrates cross-word context-dependent acoustic models and n-gram language models into the search. The former has been thought to be more computationally tractable [297], while the latter has delivered a low mean error rate, 8.2% per word in ARPA evaluation, for a 65 k vocabulary, tri-gram language model [391]. On a high-performance workstation, even after introducing efficient memory management of dynamic data structures, and optimizing inner loops, timings on a 20 k vocabulary application, perplexity1 145, indicate that a further fivefold increase in execution speed is needed to achieve real-time performance. Increasingly complex future applications are likely to maintain this requirement deficit even as uniprocessor performance increases through Moore's law.
10.1.1 Background
A standard stochastic modeling approach to speech recognition has both improved recognition accuracy and the speed of computation [248] over earlier simple pattern-matching approaches. Mel-frequency cepstrum acoustic feature vectors, hidden Markov models (HMMs) [296] to capture temporal and acoustic variance, tri-phone sub-word representation, and Gaussian probability distribution mixture sub-word models [220] are among the algorithmic components that have led to the emergence of LVCR. Any parallelization should seek to preserve an existing stable software architecture, so further algorithmic innovations can continue to be added.
1 Perplexity is a measure of average recognition network branching.
Tied states and modes within hidden Markov models (HMMs) for sub-word acoustic matching improve training accuracy for 'unseen' cross-word tri-phones but imply shared data. Such common data also reduce computation during a recognition run on a uniprocessor or a multiprocessor with a shared address space, but pose a problem for a distributed-memory parallel implementation. Achieving speaker-independent recognition in real time is significantly harder than for speaker-dependent systems. Speaker-independent systems must anticipate differences in speech intonation such as accent, dialect, age, and gender. Compare, for example, the speaker-dependent IBM Tangora PC system [73], which is able to use an iterative search to reach real-time performance, once the recognition network has been trained. Assuming a 10 ms frame acquisition window, processing on workstations is still an order of magnitude away from real time, if an n-best single-pass search is made. Formation of the initial feature vector from the speech frame is a task that is well understood and can be delegated to digital signal processors (DSPs). The Viterbi search algorithm [116], based on a simple maximal optimality condition, has made the subsequent network search, which matches feature vector sequences, at least feasible on uniprocessors. The Viterbi search is unfortunately breadth-first and synchronous, not asynchronous and depth-first, which might be more suited to parallel computation. A beam search [361] is a further pruning option, whereby available routes through the network are thresholded. Beam-pruning with two-tiered score thresholding, signatures [308], and path merging [167] are other heuristics to reduce the processing time. However, it is at the network decoding phase that more processing power still needs to be deployed if no further radical pruning heuristics are forthcoming. In any case, the pruning heuristics are often designed to reduce computation at the expense of slightly sub-optimal performance, and may thus be regarded as undesirable. The existing design, Fig. 10.1 [263], resists decomposition due to the close coupling of the network update procedures. 49-way acoustic feature vectors (frames), arriving every 10 ms, are applied to each active node of the recognition network. Real, noise, and null nodes embody models for, respectively, speech, noise, and word connections. The nodes are kept in global lists, necessary because a variety of update procedures are applied. In particular, dormant nodes are reused from application-maintained memory pools without variable delay due to system memory allocation. Large networks, for unconstrained speech or language models beyond bi-gram, are dynamically extended when a token reaches a network boundary. Network extension makes parallel decomposition by statically forming sub-networks problematic because of the need to load balance and hence repeatedly re-divide the network.
10.1.2 Static analysis of the LVCR system
The top-20 function call graph from Quantify, Fig. 10.2, for 97 utterances on the standard 20 k Wall Street Journal test with bi-grams, showed that 67% of total
Fig. 10.1 Network processing cycle.
computation time, including 3% load-time, was taken up by the 'feedforward' update. The branch of function calls, Fig. 10.4, resulting in the calculation of state output probabilities, bprobs, was uncharacteristically free of sub-function calls, which otherwise can give rise to unforeseen data dependencies. Other parameters, such as state transition probabilities, aprobs, remain fixed. The seemingly redundant level of indirection for mode-level checks enables future sharing of modes, which model variety in speech intonation. Forty-four percent of the time is spent calculating a quadratic part of the sum forming the mixture of unimodal Gaussian densities which comprise the core of any state. On average, 6,641 nodes were present for 395 frames, representing 4 s of speech.
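The dominant computation is therefore the state output probability (bprob) of a diagonal-covariance Gaussian mixture; a sketch of that inner loop, evaluated in the log domain, is shown below. The structure names and the log-add step are illustrative rather than taken from the original system.

/* Sketch of the state output probability (bprob) calculation that dominates
 * the profile: a mixture of unimodal (diagonal-covariance) Gaussians.
 * The quadratic term in the inner loop is where most of the time is spent.
 */
#include <math.h>

#define NDIM 49   /* dimensionality of the acoustic feature vector */

typedef struct {
    float weight;          /* log mixture weight                      */
    float mean[NDIM];
    float inv_var[NDIM];   /* 1 / variance, pre-computed              */
    float gconst;          /* log of the Gaussian normalizing term    */
} Mode;

static double log_add(double a, double b)    /* log(e^a + e^b), stable form */
{
    if (a < b) { double t = a; a = b; b = t; }
    return a + log1p(exp(b - a));
}

double state_bprob(const float *obs, const Mode *modes, int nmodes)
{
    double logb = -HUGE_VAL;
    for (int m = 0; m < nmodes; m++) {
        double q = 0.0;
        for (int d = 0; d < NDIM; d++) {          /* quadratic part */
            double diff = obs[d] - modes[m].mean[d];
            q += diff * diff * modes[m].inv_var[d];
        }
        double logp = modes[m].weight - 0.5 * (modes[m].gconst + q);
        logb = log_add(logb, logp);
    }
    return logb;
}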
Fig. 10.2 Extract from high-level call graph, showing call intensity by link width.
10.1.3 Parallel design
The resulting parallel design can be considered to be a pipeline, Fig. 10.3, though no overlapped processing takes place across the pipeline stages because of the synchronous nature of the processing. The first of the two pipeline stages employs a data farm. A data manager farms out the computationally intensive low-level probability calculations to a set of worker processes, with some work taking place locally at the data farmer while communication occurs. The standard PVM library of message-passing communication primitives [124] was used in the prototype, run over a network of HP Apollo 700 series workstations. Worker processes each hold copies of a pool of 1,954 tri-state models (and 50 one-state noise models). State-level parallelism requires broadcast of the current model identity (4 bytes), the prototype system needing no global knowledge if started at the 55% point in Fig. 10.4. By a small breach of object demarcation, whereby at the node object level (with embedded HMM) the state-object update history was inspected, the number of messages was
Fig. 10.3 Synchronous LVCR process pipeline.
sharply reduced, as only one in twelve state bprobs for any frame were newly calculated. A small overhead from manipulation of the node active list enables the parallel ratio to reach the 67% point, collection of thresholding levels then being centralized. Mode-level checking, when introduced, would check for replications at the local level, thus limiting the loss in efficiency. POSIX-standard pthreads are proposed for the second pipeline stage, whereby the residual system is parallelized. Propagation of null nodes and real nodes, sweeping up of nodes (thus avoiding over-use of the free store), and recognition network pruning functions all have a similar structure. For example, the prune function first establishes pruning levels, which are then globally available for all spawned threads. Once spawned, the host thread of control, that is the prune function, is descheduled until it is reawoken by the completion of its worker threads. Worker threads proceed by taking a node (or nodes) from the active list, deciding whether pruning should take place, and updating the trash list
and the active list if pruning takes place. The large number of active nodes allows the granularity to be adapted to circumstances, and the relatively few points of potential serialization, requiring locks, increase the potential scale-up.
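A sketch of the proposed second-stage threading is shown below, using POSIX pthread mutexes to guard the shared node lists; the list operations and the prune test are placeholders for the application's own data structures.

/* Sketch of one pruning worker thread in the proposed second pipeline stage.
 * Nodes are taken from the shared active list; pruned nodes move to the
 * trash list (memory pool), survivors to the list used for the next frame.
 */
#include <pthread.h>
#include <stddef.h>

typedef struct Node Node;

extern Node *active_list_pop(void);               /* NULL when list is empty */
extern void  survivor_list_push(Node *n);         /* kept for the next frame */
extern void  trash_list_push(Node *n);            /* recycled via memory pool */
extern int   below_threshold(const Node *n, float prune_level);

static pthread_mutex_t active_lock   = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t survivor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t trash_lock    = PTHREAD_MUTEX_INITIALIZER;

void *prune_worker(void *arg)
{
    float prune_level = *(const float *)arg;      /* set by the host thread  */

    for (;;) {
        pthread_mutex_lock(&active_lock);
        Node *n = active_list_pop();
        pthread_mutex_unlock(&active_lock);
        if (n == NULL)
            break;                                /* no more work            */

        if (below_threshold(n, prune_level)) {
            pthread_mutex_lock(&trash_lock);
            trash_list_push(n);
            pthread_mutex_unlock(&trash_lock);
        } else {
            pthread_mutex_lock(&survivor_lock);
            survivor_list_push(n);
            pthread_mutex_unlock(&survivor_lock);
        }
    }
    return NULL;
}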
Fig. 10.4 Probability calculation function hierarchy with call ratios.
10.1.4 Implementation on an SMP
Consideration was given to whether a widely available type of parallel machine would be sufficient to parallelize the complete system. On a symmetric multiprocessor (SMP), the thread manager would share one processor with the data manager. Efficient message-passing is available for SMPs [226] in addition to threads. Triphones, usual for continuous speech, restrict potential parallelism, but with node-level decomposition, Table 10.1, an eight-processor machine would approach the required fivefold speed-up, while a four-processor machine would reduce turn-around during testing. The estimate assumes conservatively that half of the residual system is parallel, while scaling of the system to this level is irrespective of the frame-processing workload distribution over time. Inlining of some functions is available as a further sequential optimization.

Table 10.1 Speed-up Estimate (Amdahl's Law)

Parallelization       Stage 1             Stages 1 & 2
Level / Processors    4         8         4         8
State                 1.58      1.92      1.88      1.94
Node                  2.01      2.42      2.68      3.71
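The stage-1, node-level entries in Table 10.1 are consistent with a direct application of Amdahl's law to the 67% parallel ratio identified above:

$$ S(n) = \frac{1}{(1-f) + f/n}, \qquad S(4)\Big|_{f=0.67} = \frac{1}{0.33 + 0.67/4} \approx 2.01, \qquad S(8)\Big|_{f=0.67} = \frac{1}{0.33 + 0.67/8} \approx 2.42 . $$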
10.2 CASE STUDY 2: MODEL-BASED CODING
Model-based and object-oriented coding algorithms are generally more computationally complex than current block-based image coding standards such as H.261 (Section 8.1), due primarily to the complexity of the image analysis they require. However, it has become apparent that some block-based coding algorithms are unlikely to produce satisfactory picture quality at the very low bit-rates required for transmission over analogue PSTN and mobile telephone networks. This has led to increasing interest in model-based image coding, and most recently to standardization efforts within Europe and internationally (MPEG-4 [130]) based upon the object-oriented coding (OOC) technique first developed at the University of Hannover [253]. An updated re-implementation of the Hannover object-oriented coder described in [166] has been parallelized. When both the H.261 and the OOC code were run on a SparcStation 2, a factor of ten difference in execution times was recorded for a QCIF-sized video frame sequence. The increase in execution time/frame for the model-based coders is largely due to the increased complexity of image analysis carried out by these coders. However, the H.261 coder ran 100 times slower than real time on this workstation. As a practical real-time H.261 encoder algorithm can just be implemented in software alone on a single TMS320C30 DSP (albeit with some simplification of the motion estimation algorithm) [380], it may be surmised that increases in execution speed of up to two orders of magnitude (100 times) can be achieved in the transition from the development to the application environment, without utilizing dedicated hardware. Achieving this sort of speed-up for model-based coding algorithms would therefore not be sufficient to permit real-time coding using a single current-generation processor.
10.2.1 Parallelization of the model-based coder
Figure 10.5 shows a simplified representation of the object-oriented coder. No quantization-level feedback is shown as the version tested did not include the variable-length bitstream. The execution times shown are for the QCIF 'Suzie' image sequence [347], and are averaged over 30 frames. The main stages within the frame feedback loop are:
Change detection determines which parts of the image differ by more than a specified threshold from the previous frame (unchanged areas are ignored in subsequent stages of image analysis);
Motion analysis is based upon a three-stage algorithm proposed by the University of Hannover [351];
Model construction consists of a number of stages which progressively build up the object model by specifying two types of model-compliant objects (MC1 and MC2) and residual model failure (MF) areas. For coding efficiency, each
object type is specified by shape parameters based upon fitting B-splines to the object boundary. Several important points can be deduced from Fig. 10.5:
• The major constraint on parallelization is that the previous frame is used in many processing functions of the current frame. Thus, full temporal multiplexing of functions within the frame feedback loop is not possible. The possibility of objects occupying any part (or all) of the image frame makes partial temporal multiplexing impractical for model-based coders.
• Only the motion analysis function of the OOC exhibits significant data parallelism (parallelism elsewhere depends largely upon the number of objects detected in the image).
• Although three separate pipeline stages are shown within the frame feedback loop of Fig. 10.5, this breakdown is for conceptual clarity and convenience only. In fact, there is little point in implementing these functions on separate processors, since their execution is not independent and thus cannot be overlapped.
• It is possible to pipeline the first and last functions, which are outside the frame feedback loop, as these are genuinely independent of each other and of the loop functions; however, they constitute only 0.34% of total execution time. Thus, overlapping these functions within a pipeline has a negligible effect on performance.
The conclusion from these points was that no practical benefit could be gained by pipelining in this case. Thus the only form of parallelism which could be exploited was to implement the full algorithm as a single processor farm. Within this farm, the master processor was responsible for reading and writing all files, execution of all functions except the motion analysis, and distributing motion analysis work packets to worker processors.
Fig. 10.5 Execution times for five pipeline stages of the object-oriented coder.
Practical parallelizations of the OOC were carried out both on a Meiko Computing Surface, and also on a network of workstations using PVM [124]. Figure 10.6 compares the performance achieved for each parallel implementation with the theoretical upper-bound prediction, which in the case of a single processor farm is given directly by Amdahl's law. The asymptotic speed-up performance limit in this case is only 1.98, since the motion analysis function constitutes slightly less than 50% of the total execution time. The performance of the Meiko implementation diverges as expected from the upper-bound prediction, achieving a maximum speed-up of about 1.5 for 20 processors. The PVM implementation gives slightly better performance, with a maximum speed-up of around 1.7 for 10 processors, beyond which performance declines. The disjointed nature of the graph for the PVM implementation is a result of the characteristics of the parallel resource, which consisted of a laboratory of Sun workstations being used simultaneously for this experiment and by other users. Thus the throughput of work packets varied for each workstation according to the additional load on that workstation, leading to some nondeterminism in the speed-up results obtained. In contrast, processors within the Meiko system are dedicated solely to the parallel tasks, and so respond in a consistent way.
10.2.2 Analysis of results
A subset of the profiling data which is summarized in Fig. 10.5 is shown in more detail in Table 10.2. Figure 10.5 is derived from analysis of the 22 most computationally intensive functions in the OOC, which together constitute 99.2% of total execution time. Table 10.2 shows a breakdown of the five most intensive functions, which together constitute 82.88% of total execution time. The practical parallelization results reported in Fig. 10.6 were achieved by parallelizing the function MVSearch on a processor farm. In theory, the maximum speed-up which could be achieved by parallelizing MVSearch alone would be 1.57, but in practice it was found that execution of the other sub-functions of MotionAnalysis was independent of MVSearch, and these functions could be overlapped with communication, so as to 'hide' their execution time. It would obviously be possible to increase speed-up by applying the same techniques to further functions within the object-oriented coder. Examination of the data in Table 10.2 shows that this would be difficult to achieve in practice, however. The effect of speeding up the MotionAnalysis function is significant because the function MVSearch which it calls constitutes 73.11% of the total execution time of MotionAnalysis and is executed 297 times/frame for the 'Suzie' image sequence. However, the remaining four functions listed, which represent the next highest priorities for parallelization in terms of their potential effect on overall application speed-up, are very difficult to parallelize effectively. This is because the execution time of each is made up of several components, none of which dominates the execution time of the function they are
Table 10.2 Top-down Profiling Data for the Five Most Intensive Functions Within the Object-Oriented Coder

Parent Function    % of Total    Functions Called           % Exec. Time of    Average No. of
                   Exec. Time                               Parent Function    Calls/Frame
MotionAnalysis     49.62         MVSearch                   73.11              297
                                 InterpolateYUVPelFrame     18.53              2
                                 FilterYUVPelFrame          7.52               2
                                 UpSampleMask               0.81               1
MFDetection        11.83         BinaryMedianValueFilter    33.05              1.2
                                 BlowChangedRegions         26.56              3.7
                                 ShrinkChangedRegions       19.44              3.7
                                 DeleteSmallRegions         4.71               1.2
ChangeDetection    9.72          BinaryMedianValueFilter    32.61              1
                                 BlowChangedRegions         26.20              3
                                 ShrinkChangedRegions       19.19              3
                                 CreateBinaryMask           16.11              1
                                 DeleteSmallRegions         4.65               1
MFShapeApprox      6.56          FindMFParams               77.55              1.6
                                 FillContours               21.01              1.6
MC1ShapeApprox     5.15          FindClosestPts             39.40              1.1
                                 EliminateSingleObject      25.95              1
                                 FillContours               17.24              1
                                 ExtractContourDesc         11.18              1.1
placed within. Furthermore, none of these functions exhibits substantial data parallelism which could be easily exploited. Although practical parallelization of these functions has not been attempted, the analytical techniques presented previously can be extended to give an upper bound prediction of the cumulative speed-up achievable as further functions are parallelized. Figure 10.7 illustrates the trends which result when this approach is applied to the data of Table 10.2. Since each function listed is largely independent of the others, the x-axis of Fig. 10.7 may be viewed as providing a first-order indicator of cumulative programming (or hardware design) effort required to achieve a particular speed-up. For each function, the grey bar indicates the original sequential execution time of the function, and the black bar the residual execution time once the function has been parallelized (refer to left axis). For the MotionAnalysis function, the maximum speed-up actually achieved has been used; for the other functions, the residual execution times take into account both the degree of data parallelism available (based on average calls/frame in Table 10.2, and assuming that sufficient processing elements are available to reduce each function to a single call/frame), and the scope for concurrent execution of
Fig. 10.6 Theoretical and practical speed-up for the object-oriented coder.
sub-functions of each parent function listed in Table 10.2. These assumptions are generous and would be difficult to achieve in practice, and so will tend to produce an upper bound performance estimate. By progressively replacing each sequential execution time element with its corresponding residual parallel execution time, it is then possible to determine both the individual speed-up achievable by parallelizing each element alone, and also the cumulative effect on speed-up when each reduction is subtracted from the overall residual sequential execution time. These speed-ups are also shown in Fig. 10.7 (refer to right axis). As expected, only parallelization of the MotionAnalysis function produces a significant degree of speed-up in isolation; the other functions are not only insignificant in their contribution to overall sequential processing time, but are also not very parallelizable. Although not shown in Fig. 10.7, the analysis presented above has been extended to cover all 22 functions included in the data of Fig. 10.5. Speed-up trends over this larger number of functions shows an overall (achievable) speed-up of less than 4, and confirms the law of diminishing returns visible in Fig. 10.7. To summarize, the execution characteristics of the function MVSearch are suitable for parallelization because the function dominates execution time of the function within which it is called, and also itself exhibits considerable
data parallelism. Unfortunately, there are no other functions among those constituting nearly 83% of total execution time which have these characteristics. Although it would be possible to examine further functions or lower levels of function call to seek data parallelism, Amdahl's law shows that any parallelism exploited would apply to such a small fraction of total execution time that its potential for speeding up the application as a whole would be minimal. It is encouraging to note, however, that most of the additional complexity is introduced by the image analysis stage, and thus applies to the coder but not the decoder. This suggests the possibility of defining, as with H.261, an open-ended standard which permits a range of compromises in the encoder design, but which all decoders must meet.
Fig. 10.7 Extended speed-up trends for the object-oriented coder.
10.3 CASE STUDY 3: MICROPHONE BEAM ARRAY
The traditional way of overcoming the problems of room acoustic interference, noise, and speaker separation in microphone recordings or communications is to use a single directional microphone placed near to the source, and to rely on close proximity to achieve a high signal-to-noise ratio. However, in situations where there are either a large number of speakers, or the acoustic transducer is required to be distant from the source, a system with a variable, and controllable, directivity pattern is an attractive proposition. Two conventional approaches have been used to solve this problem: speech enhancement signal processing, and adaptive microphone-array beam-formers [183]. This case study concerns a multi-TMS320C40 (C40) digital signal processor (DSP) network using synthetic data for microphone array beam-forming. The C40 has six serial point-to-point links capable of building a processing network in modular fashion. Inter-processor communication when using the links is by message-passing. 3L's Parallel 'C' for the C40 [2] was employed. 3L C presents a CSP model of parallelism through library calls to communication primitives. The purpose of the work recorded in this case study was to provide a benchmark for performance measurements of the processor farm model before proceeding to further stages of this project.
10.3.1 Griffiths-Jim beam-former
A beam-former is a processing system used in conjunction with an array of sensors to provide a versatile form of spatial filtering. The objective is to estimate the signal arriving from a desired direction in the presence of noise and interfering signals [365]. During the last decade various methods have been applied in the area of microphone-array beam-forming. Amongst these techniques, the Griffiths-Jim (G-J) algorithm has several advantages [140]. The most important one is that it implements a hard constraint in the look direction by using an unconstrained least-squares algorithm. Figure 10.8 gives a block diagram of the Griffiths-Jim beam-former. The beam-steering delays create the correct time alignment for the target-signal components arriving from the look direction. In the terminology of adaptive noise cancelling, the primary signal is a filtered version of the sum of these delayed antenna signals. Since the in-phase target signals from the individual omni-directional microphones are subtracted in pairs, the reference signals contain no target-signal components from the look direction. They contain only interference, and are applied to a bank of adaptive filters, then summed, and finally subtracted from the primary signal. If there are K microphone elements, then there are K − 1 adaptive filters. Since each of the L weights in each adaptive filter is not constrained, the number of degrees of freedom is (K − 1)L, the same as for other hard-constrained algorithms.
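A per-sample sketch of the Griffiths-Jim structure of Fig. 10.8 is given below, with an LMS update for the unconstrained adaptive filters; the channel count, filter length, step size and the unity-gain fixed target-signal filter are illustrative values rather than the parameters of the implementation described later.

/* Sketch of one sample period of a Griffiths-Jim beam-former with K
 * microphones and (K-1) unconstrained L-tap adaptive filters (LMS update).
 * Beam-steering delays are assumed to have been applied already, and the
 * fixed target-signal filter is taken as unity gain.
 */
#define K 4      /* number of microphone channels */
#define L 16     /* adaptive filter length        */

static float ref_hist[K - 1][L];   /* delay lines of the reference signals */
static float w[K - 1][L];          /* adaptive filter weights              */

float gj_beamformer_step(const float x[K], float mu)
{
    /* Primary: sum of the time-aligned channels (unity-gain fixed filter). */
    float primary = 0.0f;
    for (int k = 0; k < K; k++)
        primary += x[k];
    primary /= (float)K;

    /* References: pairwise differences cancel the look-direction signal.   */
    float interference = 0.0f;
    for (int i = 0; i < K - 1; i++) {
        for (int t = L - 1; t > 0; t--)          /* shift the delay line    */
            ref_hist[i][t] = ref_hist[i][t - 1];
        ref_hist[i][0] = x[i] - x[i + 1];

        for (int t = 0; t < L; t++)              /* adaptive filter output  */
            interference += w[i][t] * ref_hist[i][t];
    }

    float y = primary - interference;            /* beam-former output      */

    /* Unconstrained LMS weight update driven by the output error y.        */
    for (int i = 0; i < K - 1; i++)
        for (int t = 0; t < L; t++)
            w[i][t] += mu * y * ref_hist[i][t];

    return y;
}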
Fig. 10.8 Block diagram of the Griffiths-Jim beam-former.
10.3.2 Sequential implementation
The first stage was the implementation of a single-channel adaptive filter in order to instrument the algorithm. An autoregressive (AR) parametric model was used to generate the synthetic data. This model was selected as it is widely used to generate speech signals. First- and second-order processes were generated according to the following two equations [152]:
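The two generating processes have the generic first- and second-order autoregressive forms shown below; the particular coefficient values, which follow [152], are not reproduced here:

$$ x_{1}[n] = a_{1}\,x_{1}[n-1] + v[n], \qquad x_{2}[n] = a_{1}\,x_{2}[n-1] + a_{2}\,x_{2}[n-2] + v[n], $$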
where v[n] represents a white noise process of zero mean and unit variance. In the second stage, a four-channel G-J adaptive beam-former was investigated. Figure 10.9 gives the topology and results. A 1000-point synthetic second-order AR signal (S1 in Fig. 10.9) represents the target signal, whereas the signals in channels 2-4 were generated by adding Gaussian noise to S1. It is presumed that the arrival angle of the beam is 90°, hence there is no need for tap delays. The fixed target-signal filter is chosen to give unity array gain at all frequencies. As such, the array output will be a maximum-likelihood estimate of the target signal [381]. From the results in Fig. 10.9, it can be said
that the algorithm starts to converge midway through the data. However, the convergence point is subject to the signal-to-noise ratio characteristics of the additive noise [152].
10.3.3 Parallel implementation of the G-J algorithm
The parallel topology implemented was a processor farm. The farmer is responsible only for collecting the data from each worker and sending back the difference between the fixed target signals and the uncorrelated target signals received from each worker. In order to reduce the amount of communication overhead, the input and output data of each worker were packed in a single message. All other computation is performed by each worker on its own data. As only two C40 cards were available (one card with four DSPs and the other with two DSPs, memory, and interface boards), a topology of up to five workers was investigated. Two different versions were investigated: (a) one channel implemented on each processor, and (b) two channels implemented on each processor. Beam-formers with up to eight channels were investigated for each case. Figure 10.10 gives timing results obtained from several experiments. Since the hardware did not include any analogue I/O facilities, Fig. 10.10 deals only with computation time. In Fig. 10.10, the X axis represents the filter order and the Y axis the relative comparative time, normalized to the time performance of a sequential program for filter order 1. From Fig. 10.10 it can be seen that a single processor outperforms the two other parallel implementations. Although there is a slight increase in the execution time of the sequential program as the filter order rises, the sequential program is much faster than the parallel ones. This is mainly caused by the small amount of computation and the frequent synchronized communication required by this particular algorithm. Comparing the two parallel versions, it was found that the two-channel option performed four or five times faster than the single-channel one. This seems to suggest that implementing more channels on a single processor would speed up the application. However, this improvement will have a limited effect, as the total number of channels in this application is unlikely to exceed 32. Based on these results it appears that the various parallel topologies investigated are not appropriate for this application. However, one must bear in mind the fact that the current results do not take into account the I/O communications. Obviously, this is a matter that requires further investigation, as it is believed that the I/O communication will put more constraints on the sequential program than on the parallel one. Another factor that must be considered is the choice of fixed target-signal filter. In this implementation a unit-gain filter was chosen, but if other more complex filters are preferred, this choice might improve the time performance of the parallel version slightly. The main requirement of a DSP like the C40 is to provide a hardware multiplier sufficient to complete processing within the interval between the
Fig. 10.10 Performance of the parallel adaptive beam-former.
arrival of samples from the microphone array. For example, in the case of car telephones, as the input frequency range is 300-3200 Hz, the sampling frequency is at least twice 3200 Hz by the Nyquist sampling law (i.e. a 0.16 ms processing window, less about 8 μs for analogue-to-digital conversion (ADC)). Because of set-up time overheads, message passing eats into this interval. Therefore, a preferable implementation is to use global memory for data I/O, including passing adaptive filter coefficient updates. The synchronization role played by message passing is better performed by hardware interrupts. Reference [55] describes a comprehensive implementation along these lines for up to eight TMS320C25 DSPs. In general, data farming is not suited to highly synchronous processing.
10.4 CONCLUSION

The three studies in this chapter are examples of system analysis based upon the PPF design methodology rather than implementations. The examples were never constructed as working PPFs beyond the point of taking timings of prototypes, because the initial analysis clearly demonstrated that coarse-grained parallel processing based upon a distributed-memory pipelined model would be unable significantly to improve throughput. It is important in these
cases to be able to suggest alternative architectures to a PPF. Fortunately, due to the commercial database market, there has been a growth in the number and scale of systems offering some form of shared memory space [70], which may prove eminently suitable for speaker-independent large-vocabulary continuous speech recognition. Similarly, fine-grained parallelism for embedded systems is now feasible with FPGAs, which can emulate both SIMD and systolic architectures. The growth of multimedia and graphics applications has also led to specialist parallel microarchitectures as well as instruction set parallelism in general-purpose processors. Further consideration of these issues is given in Chapter 13.
Part IV
Underlying Theory and Analysis
11
Performance of PPFs

Performance is of critical importance in parallel programs; otherwise most developers would be content not to take their application out of the convenient cradle afforded by a serial programming model. Because of the dominant part played in embedded systems by short microprocessor life cycles, portability and scalability are equally important. Ideally, a simple way is required to characterize performance at the point of transfer to a parallel system. Already in BSP [236], the design process has been turned around, so that a 'bridging model' [363] with a simple linear set of performance characteristics is used. As Chapter 2 made clear, PPF also seeks a simplified characterization. Chapter 2 also referred to the transition from store-and-forward processor interconnects in the first generation of multicomputers to more efficient techniques such as wormhole routing [256]. In second-generation multicomputers, communication cost and variability are sharply reduced, especially for algorithms in which the communication diameter is also lower. With the new processor interconnect technologies [70] it appears that a single metric, the mean communication latency, may suitably characterize communication performance. For predicting the overall time of a job involving a large number of task-bearing messages, the mean latency is sufficient provided the statistical distribution of transfer times is infinitely divisible, as is the case for deterministic, normal, or exponential message latency distributions. The advantage of this situation for the system developer is that the behavior of the algorithm becomes central, with communication characteristics remaining stable and decoupled from the algorithm. It may appear that with this communication model there is no problem to consider, but in fact in [197], with a similar communication model, the maximum efficiency asymptotically approaches a value
of 0.4, a pessimistic conclusion. The non-asymptotic version of the model is discussed in Section 11.4. This chapter sets out to derive the performance PPF systems can expect on second-generation multicomputers. The aim is to find an analytical or mathematical performance model. On the Paramid multicomputer (introduced in Chapter 7), where wormhole communication is simulated by a virtual channel system [78], the measured communication time was found to be a linear function of message size. Packetization and message aggregation for a common destination are used to reduce message transit variance.1 The mathematical techniques for more precise performance modeling of PPF systems have also been employed to predict the effect of various scheduling regimes, and hence to select an appropriate scheduler. It turns out that in some circumstances data-farming latencies can be found, allowing a generalized system of scheduling. Section 11.9 concentrates on scheduling.
11.1 NAMING CONVENTIONS
For the purposes of this chapter, a job is defined as a finite set of tasks. For a continuous-flow system a job arises if, for the purpose of measurement, the flow is halted at some time. Tasks can frequently be combined into chunks (nomenclature adopted from the cited performance literature). For example: in handwritten postcode recognition, Chapter 3, characters (tasks) within a postcode (chunk) are drawn from a file of postcode images (job); in a hybrid video encoder, Chapter 8, macro-blocks (tasks) of a video row (chunk) are taken from a sequence of frames (job); and in the 2-D frequency transforms, Chapter 9, rows (tasks) in a sub-image (chunk) come from a batch of images (job).
11.2 PERFORMANCE METRICS
This section covers the mathematical groundwork necessary to understand the later results. The mathematical model is based on related work on loop scheduling on non-uniform memory access (NUMA) multiprocessors [95] and on earlier proposed models for performance prediction for pipelines [310]. These models all involve order statistics. Order statistics reflect the individual distributions of p random variables selected from a population and placed in ascending order. Order statistics have been employed in directed acyclic graph (DAG) models of parallelism,
1 The Paramid's interconnect bandwidth is limited to 20 Mbps links but high-bandwidth interconnects can be simulated by sending small messages, and including a notional communication latency.
stemming from [310].2 The general properties of series/parallel graphs (SPGs) of the DAG variety, with unconstrained numbers of nodes and probabilistic branching, have been studied from the standpoint of queueing theory in [126]. [54] is a practically oriented study of the SPG model for parallel pipelines, though not using queueing theory or order statistics. Queueing theory is not normally helpful for the performance of individual applications as it gives rise to means, not maxima. The linear form of pipelines in PPF means that the wider generality of the SPG model is not helpful. In PPF, a tight upper bound is sought to check that real-time constraints are met. However, the mean of the maximum, or other common averages such as the mode and median, is not necessarily the correct statistic when dealing with extremal statistics. For example, the characteristic maximum, considered in Section 11.2, may be a more suitable statistic. The maximum duration of any task, viewed stochastically, can be found from extremal statistics, which are concerned with probability distribution tails.3 A number of distribution-free estimates for the behavior of distribution tails are available [146]. The underlying notion is that the distribution of the tails may be grouped into common families. Distribution-specific estimates are also possible in PPF, though this involves extra statistical pre-processing to establish any distribution, Section 11.3. It is also not always the case that 'real-life' distributions can be confidently matched to any one classical distribution, though broad classifications of symmetry or asymmetry are of value. However, exact results are also useful for checking the accuracy of estimators.
11.2.1 Order statistics
In this section, the fundamental results of extremal statistics estimators are established. Certain equations are designated a name for easy reference in Sections 11.5 and 11.6. Consider a set of $p$ continuous independent and identically distributed (i.i.d.) random variables (r.v.), $\{X_{i:p}\}$, with common probability density function (pdf) $f(x)$, and constrained such that $-\infty < X_1 < X_2 < \cdots < X_p < \infty$.4 Consider further
$$F_{i:p}(x) = \sum_{j=i}^{p} \binom{p}{j}\,[F(x)]^{j}\,[1 - F(x)]^{p-j}, \qquad (11.1)$$
2 Earlier work [201] was also concerned with asynchronous algorithms on small-scale multiprocessors.
3 A distribution tail is a common informal term applied to unimodal distributions, for example [197], meaning: that portion of the area under the curve of a probability density function beyond some point at a significant distance from the mode.
4 The standard convention that $X$ is the r.v. and $x$ its value is used.
which is equivalent to $P(i \text{ or more variates} \le x)$, with $P(\cdot)$ returning the probability. $F_{i:p}(x)$ is the cumulative distribution function (cdf) of the $i$th order statistic out of $p$ with variate $x$. From (11.1) or otherwise, the pdf of the maximum order statistic [25] is derived as
$$f_{p:p}(x) = p\,[F(x)]^{p-1} f(x). \qquad (11.2)$$
Therefore, the mean is
$$\mu_p \equiv E[X_{p:p}] = \int_{-\infty}^{\infty} x\, p\,[F(x)]^{p-1} f(x)\, dx, \qquad (11.3)$$
where $E$ is the mathematical expectation operator, and similarly the mean of the minimum is
$$E[X_{1:p}] = \int_{-\infty}^{\infty} x\, p\,[1 - F(x)]^{p-1} f(x)\, dx. \qquad (11.4)$$
The range is the difference between the two extremes. Consequently, the mean of the range of $p$ random variables, $w_p$, is
$$w_p = E[X_{p:p}] - E[X_{1:p}]. \qquad (11.5)$$
Provided that the mean, $\mu$, and the standard deviation (s.d.), $\sigma$, of the common cdf $F(x)$ exist, it is possible to use the calculus of variations5 to find the maximum of $\mu_p$ [145] (with region of support for the pdf from 0 to 1):
$$\mu_p \le \mu + \sigma\,\frac{p-1}{\sqrt{2p-1}}. \qquad (11.6)$$
The variational constants are not resolved if the calculus (with the same region of support) is used to find the maximum of $\mu_{w(p)}$, but on converting to a standardized variate, and with $\sigma = 1$, $\mu = 0$, the result is
where $e_p = ((p-1)!)^2/(2p-2)!$, which vanishes as $2^{-2p}$. For a symmetrical distribution the mean of the largest value, which is $\max(\mu_{w(p)})/2$, is $1/\sqrt{2}$ down on the equivalent value given by (11.6). However, the mean grows more slowly, as $\sqrt{p}/2$ and not $\sqrt{p/2}$. The standardized upper bounds that arise for the mean of the maximum value are plotted in Fig. 11.1. Suppose that $p$ tasks are started at the same
5 There is an alternative derivation using the Cauchy-Schwartz inequality [150].
time, then the mean of the finishing time of all the tasks is asymptotically bounded either by $\mu_p$ or by $0.5\,\mu_{w(p)}$, depending on the type of the task duration cdf. However, it is not difficult to convert from an asymmetrical to a symmetrical distribution, since by grouping sufficient tasks into chunks the sum will approach a normal distribution. This observation follows from the well-known Central Limit theorem, that is, the distribution of $p^{-1/2}(S_p - p\mu)$ approaches a normal distribution, for $S_p$ the sum of a sequence of $p$ r.v. with finite variance and arbitrary distribution.
Fig. 11.1 The mean of the maximum for various distributions.
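The slow growth of the mean of the maximum with $p$ is easy to check numerically. The short Python sketch below is an illustrative aside rather than part of the original text: the distribution parameters and trial counts are arbitrary choices. It estimates $E[X_{p:p}]$ by Monte Carlo for exponential and normal task durations and, for the exponential case, prints the $\ln p + \gamma$ value quoted in the next paragraph as a cross-check.

```python
import numpy as np

GAMMA = 0.5772156649  # Euler's constant

def mean_of_maximum(sampler, p, trials=20000, rng=None):
    """Monte Carlo estimate of E[X_{p:p}] for p i.i.d. task durations."""
    rng = rng or np.random.default_rng(0)
    draws = sampler(rng, size=(trials, p))
    return draws.max(axis=1).mean()

if __name__ == "__main__":
    for p in (4, 16, 64, 256):
        exp_max = mean_of_maximum(lambda r, size: r.exponential(1.0, size), p)
        norm_max = mean_of_maximum(lambda r, size: r.normal(1.0, 0.1, size), p)
        print(f"p={p:4d}  exponential: {exp_max:6.3f} "
              f"(ln p + gamma = {np.log(p) + GAMMA:6.3f})  "
              f"normal(1, 0.1): {norm_max:6.3f}")
```

The exponential maximum grows logarithmically, while the normal maximum grows roughly as the square root of $\ln p$, which is why increasing the farm size raises worst-case latency only slowly.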
In fact, exact results are available by solving (11.3). Notably, for the exponential cdf ($1 - e^{-\lambda x}$, $x \ge 0$, here $\lambda = 1$),
$$E[X_{p:p}] = \sum_{i=1}^{p} \frac{1}{i} = \ln p + \gamma + O(1/p), \qquad (11.8)$$
with $\gamma$ being Euler's constant (0.5772157...),6 and where as usual $O(f(n))$ is the set of functions of growth rate order $f(n)$ for some variate $n$. Additionally,
6 Equation (11.8) will be recognized as a variant of Riemann's zeta function.
for the standard normal distribution [67, pp. 374-378]
designated 'max', and for the Uniform distribution on [0,1]
The distribution-specific standardized means are plotted in Fig. 11.1 so that the relationship to the maximized means is evident. The maximized means clearly represent upper bounds. In general, the distribution may be unknown or $\mu_p$ may be difficult to derive, and some distributions may approach the upper bounds slowly. Naturally, as $p$ increases the possibility increases of a large timing pushing the mean upwards, away from the majority of the timings.
11.2.2 Asymptotic distribution
Consider the probability that, out of $p$ observations, all are less than $x$. The asymptotic distribution of the maximum then has the 'stability' property that
since the form of the original distribution is not altered by applying a linear transformation. If $a_p$ is set to one, then after some work it is found that
which is the first asymptotic distribution, $G^{(1)}$. It can be shown [146] that the normal and exponential distributions have asymptotic distributions of this type, for which all moments exist (which is a necessary but not sufficient condition). By integrating (11.3) for $G^{(1)}(x)$ in (11.13) and converting to standard form, it is found that
which will be designated 'asymptotic'. It would appear that $\mu_G$ represents a suitable estimate for the maximum value. Unfortunately, the asymptotic behavior of the standard normal distribution at large values of $p$ converges slowly to that of the double exponential distribution (Fig. 11.2). By setting $b_p = 0$ in (11.12) the second and third asymptotic distributions are found as
Fig. 11.2 Behaviour of the asymptotic distribution's mean of the maximum.
for some constant $k > 0$. The second asymptotic distribution is a fit for a cdf with no or only a few moments, such as, respectively, the Cauchy distribution or the polynomial distribution, common in modeling bursty traffic but not often found in computing applications. The asymptotics of $F^p(x)$ where $F(x) = x$, that is the uniform distribution, is an example of $G^{(3)}$, for which the distribution is bounded in some way. Note that some distributions fall into none of the three asymptotic categories.
11.2.3 Characteristic maximum
Because $\mu_p$ may be difficult to find, and because both $\mu_p$ and $\mu_G$ may present too loose a bound, another measure, the characteristic maximum, $m_p$, may act as an estimate. However, the characteristic maximum is not an upper bound but is most closely associated with the mode, or most popular value, of the maximum. Previous work in applying these results to parallel computation has not emphasized this relationship. The similarity between the mode of the maximum and $m_p$ is very noticeable in the case of the normal distribution.
Define $m_p$ such that for a cdf $H$
$$p\,[1 - H(m_p)] = 1, \qquad (11.16)$$
i.e. out of $p$ timings just one is greater than $m_p$. Consider
where the last equality arises from L'Hopital's rule, provided $h(x)$ and $1 - H(x)$ are small for large $x$. By equating the derivative of the pdf of (11.2) to zero, the mode of the distribution of the maximum is found to satisfy
By substituting (11.17) into (11.18) it will be seen that
If $H$ is the distribution of the sum of a number of r.v. of a distribution with finite second moment, an estimate of $m_p$ for a normal distribution [205] might be used:
Inequality (11.20) arises because for some cdf
and in particular for a standard normal pdf, $\phi(\cdot)$, the right-hand side (R.H.S.) of (11.21) is $E[X_{p:p}] = p\,\phi(m_p)$. A number of potentially useful approximations for the normal distribution are also demonstrated in [186, pp. 136-138]. Since for large values of $x$ the normal distribution asymptotically approaches
using (11.18)
Solving for x,
which should be compared to (11.20). The characteristic maximum of the exponential distribution is easily derived from (11.16), giving $m_p = \ln p$ (designated 'mp'), which should be compared to (11.8), where $\lambda = 1$. In fact, $m_p = x_{mode} < x_{med} < E[H_{p:p}]$, where $x_{med}$ is the median of $H(x)$, an exponential distribution. The value of $E[H_{p:p}]$ is already given in standardized form in (11.8) and is almost within $\gamma$ of $m_p$. In [206] it is further proven that for large $p$, $m_p$ approaches $E[H_{p:p}]$ from below, provided: $H$ is a cdf with finite moments; $H$ is the cdf of a positive r.v.; and $H$ has an increasing failure rate (IFR) (cf. (11.21)). An IFR distribution, $H$, is defined as:
IFR distributions, which are further referred to in Section 11.4, are an alternative categorization to the three asymptotic categories.7 The variance of the first asymptotic distribution, (11.13), is given by
where $\alpha_p$ is the value of the intensity function at $x = m_p$ (i.e. $p\,h(m_p)$). Therefore, since the intensity function of the double exponential distribution is an increasing function of $p$, the estimate improves with $p$. However, the variance of the second asymptotic distribution, (11.15), is given by
provided $k > 2$, $k$ being a distribution-dependent parameter that can be estimated from the coefficient of variation [145, p. 266] ($\Gamma(\cdot)$ is the Gamma function). Since for a cdf bounded by (11.15) $m_p$ increases with $p$, the value of (11.15) as an estimate is limited.
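Because the characteristic maximum is defined only implicitly by (11.16), it is convenient to compute it numerically for a given cdf. The sketch below is an illustrative aside (the distribution parameters are arbitrary and the solver choice is not from the original text): it solves $p(1 - H(m_p)) = 1$ with SciPy for an exponential and a normal task-duration cdf, printing the closed form $\ln p / \lambda$ for the exponential case as a check.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def characteristic_maximum(cdf, p, lo=0.0, hi=1e6):
    """Solve p * (1 - H(m_p)) = 1 for m_p, given a cdf H and p tasks."""
    return brentq(lambda x: p * (1.0 - cdf(x)) - 1.0, lo, hi)

if __name__ == "__main__":
    lam, mu, sigma = 1.0, 1.0, 0.2
    for p in (8, 64, 256):
        m_exp = characteristic_maximum(lambda x: 1.0 - np.exp(-lam * x), p)
        m_norm = characteristic_maximum(norm(mu, sigma).cdf, p,
                                        lo=mu, hi=mu + 10 * sigma)
        print(f"p={p:3d}  exponential m_p={m_exp:.3f} "
              f"(ln p / lambda = {np.log(p) / lam:.3f})  normal m_p={m_norm:.3f}")
```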
11.2.4 Sample estimate
The foregoing estimates are based on population statistics. A simple bound on the sample statistics is easily derived:
7 Expression (11.25), a variant of the intensity function, is the probability function that given an event has occurred after x it will now occur. In general, the intensity function governs the convergence of a distribution's tail.
$\bar{x}$ and $s$ being, respectively, the sample mean and s.d., is rearranged to yield
which is designated 'sample'. The sample estimate is an upper bound to all previous estimates.
11.3 GATHERING PERFORMANCE DATA
PPF systems are data-dominated systems which have soft real-time targets. Real-time performance is dependent on maximum latency and minimum throughput specifications. In order to meet a specification these should ideally be population statistics, as is evident from Section 11.2. Otherwise, a set of timings from representative test data can be made. Given a sequential version of an application, sections of code are timed in an isolated environment. As Chapter 6 has described, sections of sequential code are preserved intact in the parallel version as the kernel of the worker processing tasks. Counter-based profilers can give a timing that is independent of system load. A partition is provided between user code and system-call code, which is useful when transferring between machines. However, the profiler available did not allow the global timing to be decomposed. Estimates of the time needed for small sections of code can also be made from source code, but only in restricted circumstances due to the effects of compiler optimization [270]. Due to advances in compiler technology, and because the code was to run on superscalar processors (i860s), the source-code method was not used. Instead, code was timed on a single processor within the parallel machine in order to cut out system load. Timings are assembled into a task duration histogram. The chi-square and Kolmogorov-Smirnov tests are well-known generalized methods to establish a fit to the histogram [192, pp. 39-52]. The use of such a method to fit a distribution is also reported in [95]. Notice from Section 11.2 that, for many task duration distributions, the maximum duration varies statistically with the number of processors as $O(c\sqrt{p})$, which may mean that increasing the mean throughput will increase the maximum latency, albeit at a slow rate; $c$ is an arbitrary factor that will vary with the task scheduling system. If it were desired to find the value of $c$, this would be done empirically by taking a set of measurements for varying values of $p$ and using nonlinear regression [178]. Demand-based scheduling can be optimized if tasks are grouped into uniform-sized chunks. One then minimizes an expression for the run time which includes the chunk size as a parameter. An alternative is to have a chunk size which decreases in time. However, for algorithms for which the data size increases with the problem size, buffering demands may make decreasing chunk sizes impractical.
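The distribution-fitting step mentioned above can be sketched in a few lines. The code below is an illustration only, not code from the original study: the timings file name and the set of candidate distributions are assumptions, and the Kolmogorov-Smirnov test is used simply because SciPy provides it directly.

```python
import numpy as np
from scipy import stats

CANDIDATES = {"normal": stats.norm, "exponential": stats.expon,
              "uniform": stats.uniform}

def classify_timings(samples):
    """Fit each candidate distribution; return (name, KS statistic, p-value)."""
    results = []
    for name, dist in CANDIDATES.items():
        params = dist.fit(samples)                  # maximum-likelihood fit
        ks = stats.kstest(samples, dist.cdf, args=params)
        results.append((name, ks.statistic, ks.pvalue))
    return sorted(results, key=lambda r: r[1])      # smallest KS statistic first

if __name__ == "__main__":
    # 'timings.txt' is a hypothetical file of per-task durations in seconds
    samples = np.loadtxt("timings.txt")
    for name, d, pval in classify_timings(samples):
        print(f"{name:12s} KS={d:.4f} p={pval:.3f}")
```

A broad classification (symmetrical versus asymmetrical, bounded versus unbounded) is usually all that the estimators of Section 11.2 require, so a poor fit to every candidate is not necessarily a problem.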
11.4 PERFORMANCE PREDICTION EQUATIONS
The raw estimates of the maximum task duration can be combined to form performance prediction equations. If there were a perfect parallel decomposition of the task durations, one would expect the running time to be
where $p$ is the number of processors in a farm, $n$ is the number of tasks, $k$ is the number of tasks in a chunk, and $\mu$ is the population mean of the task duration pdf. $h$ is a fixed per-chunk overhead, which would include the cost of communication and any central overhead; $h$ can be safely assumed to be fixed if the overhead has an infinitely divisible distribution.8 The first term is the total run-time for finitely large $n$ (the numerator), acceptable as this is a continuous-flow system, divided by the degree of parallelization with zero synchronization cost (the denominator), i.e. 'perfect parallelization'. $k$ is needed in (11.30) because, though there are $n$ tasks, only $n/k$ chunks are sent out. In [228], a distribution-free upper bound was proposed based on (11.6). When the first processor becomes idle there are naturally $p - 1$ active processors. The remaining time is
that is, as if the first processor finishes just when $p - 1$ chunks are assigned but have not been transferred to the remaining processors. Therefore,
which is designated 'M&S', after the names of the originators (M)adala(S)inclair. It may seem odd that when combining jobs in a task, varying the number of jobs would make a difference to the time taken. Yet this is exactly what was implied by task scheduling experiments on an early multicomputer [375], because of the changing statistical properties when adding (convolving) distributions. In [197], three main bounds occur based on finding mp for different chunking regimes and predicated on IFR distributions. mp is taken to be the time to finish after the last chunk has been taken. Notice that the s.d. of k tasks with common distribution is a^/k. An easily derived upper bound for the time up to when the last chunk is taken is
8 $z$ has an infinitely divisible distribution if, for each positive integer $n$, there are i.i.d. random variables $\{z_1^{n}, z_2^{n}, \ldots, z_n^{n}\}$ such that $z = z_1^{n} + \cdots + z_n^{n}$ [376, p. 252].
Now $d = E[T_{start}] + h$, where the extra $h$ arises from sending the last chunk. Where $k \approx n/p$,
$$E[T_{p:p}] \approx d + \sigma\sqrt{2k\ln p}, \qquad (11.34)$$
which is designated 'KWlarge', after the names of the originators, (K)ruskal and (W)eiss, and 'large' because $k/\log p$ should be large. The result (11.34) should come as no surprise in view of (11.23). In fact, (11.34) is derived via the theory of large deviations [141, p. 184], which perhaps obscures the relevance of the result. However, (11.34) should be applied with care since it is not standardized, yet it will be observed that there is no $\mu$ dependency in the remainder portion of (11.34). A tighter bound can be found if $k \ll n/p$. This is the normal regime for demand-based farming. If $\sqrt{k}/p$ is small,
which is called 'KW2'. This last bound would occur if, for large $n$, the chunk size $k$ were much larger than $p$, hence the lack of $k$-dependency in the remainder term of (11.36). The prediction equations of [197] are estimates which are only reliable for the appropriate regime. The bounds ideally require $n$, $p$ and $k$ all to be large. From the point of view of predicting the run-time, and not the optimal scheduling regime, $k$ is a scaling factor. If one sets $k = \mu = \sigma = 1$ as a form of normalization, it is found that equations (11.34) and (11.35) are the same. For an exponential distribution, which has coefficient of variation (c.o.v.) $\sigma/\mu = 1$, with $k = 1$ the two equations are also the same. This suggests that (11.35), since it is dependent on the c.o.v. as well as $p$ and $k$, is the most reliable. Normalized versions of the prediction equations are plotted in Fig. 11.3.
Fig. 11.3 Normalized prediction equations.
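A minimal sketch of how such a prediction might be coded is given below. It is not the authors' implementation: it assumes an ideal term of the form $n(\mu + h/k)/p$ for (11.30) and the $\sigma\sqrt{2k\ln p}$ remainder that (11.34) appears to express, so both forms should be checked against the original equations before the numbers are relied on.

```python
import math

def kw_large_estimate(n, p, k, mu, sigma, h, startup=0.0):
    """Hedged 'KWlarge'-style run-time prediction for a single data farm.

    n tasks of mean duration mu (s.d. sigma) are farmed to p processors in
    chunks of k tasks, each chunk incurring a fixed overhead h; startup is a
    measured farm start-up time added as an extra term (cf. Section 11.5).
    """
    ideal = (n / p) * (mu + h / k)                 # assumed form of the ideal term
    remainder = sigma * math.sqrt(2 * k * math.log(p))
    return startup + ideal + remainder

if __name__ == "__main__":
    # illustrative parameter values only
    print(kw_large_estimate(n=400, p=8, k=6, mu=0.125, sigma=0.02, h=0.001))
```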
11.5 RESULTS
Experiments were performed to relate the theoretical results of Section 11.2.1 to timings on the Paramid multicomputer. The r.v. considered in Section 11.2.1 provide a mapping from task durations to the real line, thus linking the prediction equations with the experiments.9 The timing tests obviously do not verify large-$p$ behavior, which is considered in Section 11.6. For these tests, the farmer was placed on a transputer, where it does not impede computation. By means of the farm template a variety of statistical regimes could be parameterized, amounting to a parallel test-bed. Four hundred tasks were sent out on demand for each job. Buffering was turned off and the message size was restricted to a 12 byte tag and 16 bytes of data. For larger task durations the i860, running at 50 MHz, waited on a 1 µs clock. Where the wait resolution could not be improved, the task duration was given as a number of loop iterations. The task durations were timed during the parallel
9 The random variables are i.i.d., which is appropriate to data farming (Section 1.4) as there are no inter-task dependencies.
run so that the sample mean and s.d. could be found. Similarly, the sample mean start-up time and the sample mean message delivery time were formed. A software-synchronized clock (Chapter 12), with worst-case error of 0.5 ms, was employed. The overall time was taken at the central task farmer. The prediction equations were then calculated using the mean start-up time as an extra term in the equations.
11.5.1 Prediction results
Tests ranged over a number of distributions because it is thought that this approach to testing widens the generality of the tests. An alternative strategy would have been to simulate worst-case task distributions such as linearly increasing task size. In the tests, the job duration level varies across distributions according to choice of distribution parameters. However, the results are consistent for any one distribution. The normal distribution is ubiquitous, for example, it models the sum of tasks in a chunk. Both the small and large duration normal distribution test job durations were accurately predicted in all cases. Standard normal variates were transformed in a way that preserves the distribution by
where $t$ is the task index. (All other variates were scaled by $t$ and, where necessary, an absolute value was taken.) Tables 11.1 and 11.2 show any small differences that may exist. The task index is either in seconds or in loop iterations. When $n$ is large, the results are dominated by the behavior determined by $\bar{x}$, with a small remainder arising from the influence of $p$ and $\sigma$. It is apparent that KWLarge, KW1, asymptotic and sample are consistently below or close to the actual job duration, while KW2, M&S and max are upper bounds. The worst-case error is about 1 s or 1%. The exponential distribution may occur whenever there is interactive use of the computer. Though exponentially distributed task sizes are not normally expected in PPF, the distribution is easily analyzed. Table 11.3 suggests that the metrics are slightly less accurate for an exponential distribution than for a normal distribution. The effect may result from a larger s.d., since it can now be seen that estimate KW2, which is proportional to $\sigma^2$, diverges. KWLarge and KW1 come near to the job duration, and M&S or one of the other metrics are suitable upper bounds. The Bernoulli distribution, which of course is discrete, does not have an IFR distribution but has finite moments. The Bernoulli distribution may act as a model even for deterministic task duration distributions, because periodic system delays may impose a bimodal distribution. Bimodal distributions occur when there are two alternative branches in computer code. Tests were conducted with prob($x = t$), where $t$ is a large delay, being 0.25 and prob($x = t/2$) naturally being 0.75. In all tests, the predictors accurately track the job duration (Fig. 11.4).
Table 11.1 Predictions for a Normal Distribution, Part (a)

Task Index | Job Duration | Prediction (s): KWLarge | KW1 | KW2 | M&S
2.0 | 106.2368 | 105.5765 | 105.5765 | 107.7293 | 107.7293
1.0 | 53.2737 | 53.1466 | 52.9386 | 54.0188 | 54.0188
0.5 | 26.8061 | 26.7416 | 26.6367 | 27.0751 | 27.1811
0.25 | 13.5591 | 13.4763 | 13.4763 | 13.6926 | 13.7523
0.125 | 6.9443 | 6.9239 | 6.8966 | 7.0014 | 7.0385
0.0625 | 3.6446 | 3.6298 | 3.6154 | 3.6655 | 3.6903
0.0313 | 1.9839 | 1.9780 | 1.9701 | 1.9932 | 2.0115
0.0156 | 1.1622 | 1.1563 | 1.1516 | 1.6260 | 1.1761
0.0078 | 0.7358 | 0.7344 | 0.7315 | 0.7376 | 0.7472
10k | 1.0646 | 1.0620 | 1.0562 | 1.0731 | 1.0778
9k | 0.9646 | 0.9596 | 0.9561 | 0.9713 | 0.9756
8k | 0.8622 | 0.8581 | 0.8549 | 0.8684 | 0.8723
7k | 0.7602 | 0.7567 | 0.7539 | 0.7657 | 0.7692
6k | 0.6616 | 0.6581 | 0.6556 | 0.6657 | 0.6690
5k | 0.5602 | 0.5568 | 0.5548 | 0.5632 | 0.5661
4k | 0.4631 | 0.4598 | 0.4582 | 0.4649 | 0.4675
3k | 0.3652 | 0.3628 | 0.3616 | 0.3666 | 0.3690
2k | 0.2777 | 0.2754 | 0.2745 | 0.2778 | 0.2801
1k | 0.2348 | 0.2332 | 0.2328 | 0.2343 | 0.2375
For small numbers of iterations, the overhead dominates in Fig. 11.4, causing the performance shelf below 4000 iterations. Predictor M&S bounds the result, whereas the two KW predictors are tight lower bounds. Normal, exponential, Bernoulli and deterministic distributions are common and accurately predicted. Symmetrical distributions are more likely to be applicable to image-processing applications [214].
11.6 SIMULATION RESULTS
A simulation was conducted to verify the large-$p$ behavior of the various estimators. To approach the population statistics, 20,000 tasks were distributed. A delay of 0.001 was set and the chunk size was one task. Experiments were made at intervals of eight processors, from 8 to 248 processors. Notice that the model does not account for any congestion introduced through scaling, by analogy with 'hot-spot' contention in loop scheduling [279]. A hierarchical scheme is possible [72]; sub-farmers within a large farm may also be possible. Figure 11.5 shows the estimators for an exponential cdf with parameter $\lambda = 1$ (i.e. $\mu = \sigma = 1/\lambda = 1$). The sample estimator and 'M&S' are clearly seen to be loose upper bounds. $E[X_{p:p}]$, (11.8), also tends to act as an upper bound to the simulation results. The double exponential asymptotic estimator
Table 11.2 Predictions for a Normal Distribution, Part (b)

Task Index | Job Duration | Prediction (s): Sample | Asymptotic | Max.
2.0 | 106.2368 | 106.5473 | 105.6076 | 107.3153
1.0 | 53.2737 | 53.4247 | 52.9550 | 53.8086
0.5 | 26.8061 | 26.8807 | 26.6458 | 27.0726
0.25 | 13.5591 | 13.5990 | 13.4815 | 13.6951
0.125 | 6.9443 | 6.95876 | 6.9000 | 7.0068
0.0625 | 3.6446 | 3.64734 | 3.6177 | 3.6716
0.0313 | 1.9839 | 1.9869 | 1.9719 | 1.9992
0.0156 | 1.1622 | 1.1610 | 1.1530 | 1.1676
0.0078 | 0.7358 | 0.7372 | 0.7325 | 0.7411
10k | 1.0646 | 1.0655 | 1.0565 | 1.0729
9k | 0.9646 | 0.9645 | 0.9564 | 0.9711
8k | 0.8622 | 0.8624 | 0.8552 | 0.8683
7k | 0.7602 | 0.7605 | 0.7542 | 0.7656
6k | 0.6616 | 0.6613 | 0.6559 | 0.6657
5k | 0.5602 | 0.5595 | 0.5560 | 0.5631
4k | 0.4631 | 0.2645 | 0.4584 | 0.4644
3k | 0.3652 | 0.3643 | 0.3618 | 0.3667
2k | 0.2777 | 0.2764 | 0.2746 | 0.2779
1k | 0.2348 | 0.2338 | 0.2329 | 0.2345
is a good estimator for the exponential cdf. Estimator 'KW2' is excluded as it distorts the scaling of the graph, but 'KW1' acts as a tight lower bound. A standard normal distribution was transformed by
in order to keep a high proportion of the results before truncating. Negative values are truncated, but, by the rule of thumb that 95% of a Gaussian density is found within two standard deviations of the mean, the negative values are first pushed accordingly into the positive region. Except for the two upper-bound estimators, the sample estimate and 'M&S', there is little to distinguish the estimators in practical value, according to the results of the simulation shown in Fig. 11.6. As indicated in Section 11.2.2, the asymptotic predictor is not to be relied upon for a normal distribution, though in the simulation it was a tighter upper bound than 'M&S' and 'sample'. The simulation for $U(0,1)$ (i.e. $\mu = 1/2$, $\sigma^2 = 1/12$) showed that all estimators considered act as upper bounds (Fig. 11.7). This should not be surprising, as the uniform cdf is bounded whereas the estimators are primarily intended for unbounded distributions. However, since the uniform distribution is IFR, one might have expected the 'KW' estimators to lie around the simulated value and not to act as upper bounds. Estimators for the Bernoulli cdf (Fig. 11.8), under the same conditions as Section 11.5.1 with $t = 1$, are well behaved. Despite the fact
Table 11.3 Predictions for an Exponential Distribution

Task Index | Job Duration | Prediction (s): KWLarge | KW2 | M&S | Sample | Asymptotic | m_p
2.0 | 46.9520 | 48.2911 | 55.4189 | 48.7672 | 48.8607 | 47.8005 | 47.3529
1.0 | 23.6496 | 24.3126 | 27.8468 | 24.5557 | 24.5990 | 24.0690 | 23.8421
0.5 | 11.9928 | 12.3234 | 14.0629 | 12.4499 | 12.4682 | 12.2032 | 12.0867
0.25 | 6.1452 | 6.3254 | 7.1672 | 6.3935 | 6.3993 | 6.2669 | 6.2057
0.125 | 3.2466 | 3.3353 | 3.7274 | 3.3746 | 3.3738 | 3.3078 | 3.2740
0.0625 | 1.8060 | 1.8505 | 2.0231 | 1.8755 | 1.8713 | 1.1838 | 1.8182
0.0313 | 1.0899 | 1.1060 | 1.1729 | 1.1237 | 1.1177 | 1.1015 | 1.0880
0.0156 | 0.7594 | 0.7617 | 0.7804 | 0.7764 | 0.7688 | 0.7611 | 0.7503
0.0078 | 0.6060 | 0.7617 | 0.5987 | 0.6115 | 0.6003 | 0.5995 | 0.5887
10k | 0.5002 | 0.5131 | 0.5807 | 0.5187 | 0.5186 | 0.5084 | 0.5040
9k | 0.4564 | 0.4678 | 0.5285 | 0.4730 | 0.4728 | 0.4636 | 0.4597
8k | 0.4142 | 0.4247 | 0.4785 | 0.4295 | 0.4292 | 0.4210 | 0.4175
7k | 0.3709 | 0.3796 | 0.4265 | 0.3839 | 0.3835 | 0.3763 | 0.3732
6k | 0.3327 | 0.3401 | 0.3801 | 0.3441 | 0.3434 | 0.3373 | 0.3346
5k | 0.2969 | 0.3018 | 0.3349 | 0.3055 | 0.3046 | 0.2995 | 0.2972
4k | 0.2626 | 0.2675 | 0.2935 | 0.2710 | 0.2697 | 0.2657 | 0.2638
3k | 0.2413 | 0.2446 | 0.2639 | 0.2481 | 0.2463 | 0.2432 | 0.2418
2k | 0.2337 | 0.2342 | 0.2466 | 0.2380 | 0.2353 | 0.2333 | 0.2323
1k | 0.2337 | 0.2783 | 0.2908 | 0.2830 | 0.2795 | 0.2775 | 0.2765
that the Bernoulli distribution is discrete and bounded, the estimators are all reliable predictors.
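The simulations in this section can be reproduced in outline with a very small event-driven model of a demand-driven farm. The sketch below is an illustration only (it is not the original simulator): each of $p$ workers repeatedly takes the next chunk of $k$ tasks, and the job duration is the time at which the last worker finishes.

```python
import heapq
import numpy as np

def simulate_farm(durations, p, k=1, overhead=0.001):
    """Job completion time for demand-based farming of the given task durations.

    durations: per-task service times; p: workers; k: chunk size;
    overhead: fixed per-chunk cost (communication plus farmer work).
    """
    chunks = [durations[i:i + k] for i in range(0, len(durations), k)]
    free_at = [0.0] * p            # min-heap: time at which each worker is next free
    heapq.heapify(free_at)
    finish = 0.0
    for chunk in chunks:
        t = heapq.heappop(free_at)                 # earliest idle worker takes the chunk
        t += overhead + float(sum(chunk))
        finish = max(finish, t)
        heapq.heappush(free_at, t)
    return finish

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tasks = rng.exponential(1.0, size=20000)       # lambda = 1, as in the simulation
    for p in (8, 64, 248):
        print(p, round(simulate_farm(tasks, p), 2))
```

Substituting normal, uniform or Bernoulli samples for the exponential draws reproduces the qualitative behavior of Figs. 11.5 to 11.8, although the congestion effects excluded from the analytical model are of course also absent here.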
11.7 ASYNCHRONOUS PIPELINE ESTIMATE
In a synchronous pipeline of farms, the various estimates of maximum latency developed in Section 11.4 are immediately applicable. If there are asynchronous stages, it will be necessary to account for waiting time in a queue before being served at the next stage of the pipeline. With no loss of generality, consider a two-stage asynchronous PPF. The maximum possible latency is made up of the maximum service time at each farm and the maximum waiting time between the two farms. Suppose the pipeline is in steady state and that the task duration distribution in the first farm is exponentially distributed; then, by Burke's theorem, its output is also so distributed [187], and delay-cycle analysis [63] can be used to find the waiting-time distribution. In practice, only the Laplace-Stieltjes transform of the waiting-time cdf is available, but this easily yields the moments of waiting time, which for the second
Fig. 11.4 Small duration Bernoulli distribution.
farm (treated as a single server) are given as:
where, as is conventional in queueing theory, $\lambda$ is the task arrival rate and $S$ is the task duration r.v. If, in conventional notation, $\mu$ is the task completion rate, then $\rho = \lambda/\mu$ is the availability.10 If the second farm also has an exponential task duration cdf then, following [264], the single-processor results can easily be adapted to the multiple-processor case by appropriately setting the task completion rate to $p\mu$. Figure 11.9 shows two results from a simulation in which farm two had fifty processors and farm one had either five or forty processors, reflecting
10 Equation (11.39) will be recognized as the Pollaczek-Khintchine equation for M/G/1 queues.
Fig. 11.5 Simulation of exponential distribution.
differing arrival rates. The task duration rate in farm two was fixed to 3.0 tasks/unit time to prevent saturation. The mean latency is plotted as well as the maximum latency recorded. Using the moment estimates of (11.39) and (11.40) and the characteristic maximum of an exponential distribution, (11.24), the maximum latency over 20,000 tasks is found. Notice that the maximum is from the number of tasks and not the number of processors. The estimates of Section 11.4 are less successful in finding the asynchronous maximum since they reflect the mode of a normal cdf and not an exponential cdf. In particular, such estimates appear erroneous when the task arrival rate from farm one is low but the task duration in both farms is large. The mean task duration for the first farm was used, reflecting (intuitively) the unlikelihood of all three maxima occurring together. The estimate remains an upper bound. If the sampled number of tasks is made lower, the estimates lose accuracy. From the large-scale test, it emerged that the mean latency is close to the sum of the two mean service times, which implies that system design based on means is likely to be accurate. However, in some conditions the maximum latency departs considerably from the mean which will be an important issue for time-critical systems.
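A hedged sketch of this kind of latency estimate follows. It assumes the standard Pollaczek-Khintchine expression $E[W] = \lambda E[S^2]/(2(1-\rho))$ for the mean waiting time, which the footnote identifies with (11.39), and it uses the characteristic maximum $\ln N / \mu$ of an exponential service time over $N$ tasks. The way the three terms are combined here, and the parameter values, are illustrative assumptions rather than the authors' exact procedure.

```python
import math

def pk_mean_wait(lam, mean_s, second_moment_s):
    """Pollaczek-Khintchine mean wait for an M/G/1 queue (rho = lam * mean_s)."""
    rho = lam * mean_s
    if rho >= 1.0:
        raise ValueError("queue is saturated")
    return lam * second_moment_s / (2.0 * (1.0 - rho))

def max_latency_estimate(lam, mu1, mu2, p2, n_tasks):
    """Two-stage sketch: mean service in farm 1 + mean wait + max service in farm 2.

    lam: arrival rate into farm 2; mu1, mu2: per-task completion rates of the farms;
    p2: processors in farm 2, treated as one fast server of rate p2 * mu2;
    n_tasks: number of tasks over which the maximum is taken.
    """
    rate2 = p2 * mu2
    mean_s2, second_s2 = 1.0 / rate2, 2.0 / rate2 ** 2   # exponential service moments
    wait = pk_mean_wait(lam, mean_s2, second_s2)
    service1 = 1.0 / mu1                                 # mean, not max (see text)
    service2_max = math.log(n_tasks) / rate2             # characteristic maximum
    return service1 + wait + service2_max

if __name__ == "__main__":
    print(max_latency_estimate(lam=10.0, mu1=3.0, mu2=3.0, p2=50, n_tasks=20000))
```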
Fig. 11.6 Simulation of normal distribution.
11.8 ORDERING CONSTRAINTS
On a hypothetically balanced pipeline, an ordering constraint will degrade the latency. With sufficient buffering, no change in throughput occurs, because when the order is restored all waiting tasks can be instantaneously output, provided sampling of throughput is at output instances. If sampling of throughput is by a random observer, then the output throughput will have the same form of degradation as is now discussed for latency. Ordering is a synchronization constraint over the whole of an asynchronous pipeline segment, as the order needs to be restored before leaving the asynchronous segment. Re-ordering can be implemented by forming a linked list to act as a buffer for out-of-order tasks at the pipeline stage where the ordering constraint applies, typically the last stage of an asynchronous segment. Suppose that an asynchronous segment of a pipeline has a capacity for $n$ tasks at the instant that any one of these tasks becomes ready for processing. Suppose also that (in the worst case) all $n$ tasks start at the same time and their finishing times are distributed as a set of ordered r.v. $\{X_{i:n}\}$, $i = 1, 2, \ldots, n$, with the conditions of Section 11.2.1. Though, particularly in the
Fig. 11.7 Simulation of uniform distribution.
case of the extremal value (Section 11.2.1), the mean may not be the best estimator, a practical approach might nonetheless be to use the mean of the ordered means to estimate the average time that any completing task waits:
or, if the mean latency without a constraint is $\mu$ and $E[X_{i:n}] = \mu_i$, then the percentage degradation is
The distribution of the means of all the order statistics, $i = 1, 2, \ldots, n$, is required. The uniform distribution, $U(0,1)$, has $E[U_{i:p}] = i/(p+1) = p_i$, which gives a triangular distribution. Unfortunately, other distributions do not have such a convenient formula. Therefore an estimate is proposed. In
Fig. 11.8 Simulation of Bernoulli distribution.
[25], from $X_{i:p} = F^{-1}(U_{i:p}) = G(U_{i:p})$, a Taylor expansion11 around $p_i$ is used to approximate $E[X_{i:p}]$, that is,
where $G'$ denotes $\frac{d}{du}G(u)\big|_{u = p_i}$, $u = F(x)$. A convenient example is the logistic cdf, $F(y) = (1 + e^{-y})^{-1}$, with $q_i = 1 - p_i = (p - i + 1)/(p + 1)$:
Taking expectations of both sides of (11.43), using the result for $E[U_{i:p}]$, gives
11 The inverse function relationship is also often used to generate random numbers. Unfortunately, the convergence of the expansion is slow for the extreme order statistic.
Fig. 11.9 Maximum latency for a two-stage asynchronous PPF.
Figure 11.10 uses expansion (11.44) to make a continuous estimate of $E[Y_{i:p}]$, $i$ odd. The form of (11.44) is approximately triangular, and other common distributions have the same shape for $E[X_{i:p}]$, bearing in mind that the logistic distribution, when suitably parameterized, approximates a normal distribution. This result is of course helpful in estimating latency or throughput. Measurements have been made on the handwritten postcode recognition application introduced in Chapter 3, implemented on a transputer-based Meiko CS-1. The CS-1 [356] uses proprietary routing hardware and software aside from the normal transputer links. The processing times for complete postcodes were found to have a symmetrical distribution for the preprocessing and classification stages. The final dictionary stage had a Bernoulli distribution, as UK postcodes are almost all 6 or 7 characters in length, the test-data mean length being 6.7 characters, calculated using a complete set for the city of Colchester, UK. Table 11.4 records the measured percentage throughput degradation, found on a per-stage basis by decoupling each stage with very large buffers. The range, $R$ (cf. Equation 11.5), is employed as a measure of variance. Perf. I is the theoretical performance after ordering degradation, and Perf. II is the degradation due to ordering cumulatively applied.
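The re-ordering buffer mentioned at the start of this section is also easy to realise with a small heap keyed on sequence number rather than a linked list. The sketch below is an illustrative alternative, not the implementation used in the study; the class and method names are invented for the example.

```python
import heapq

class ReorderBuffer:
    """Release out-of-order task results in their original sequence order."""

    def __init__(self, first_seq=0):
        self.next_seq = first_seq
        self.pending = []              # min-heap of (sequence number, result)

    def push(self, seq, result):
        """Accept a completed task; return the list of results now releasable."""
        heapq.heappush(self.pending, (seq, result))
        released = []
        while self.pending and self.pending[0][0] == self.next_seq:
            released.append(heapq.heappop(self.pending)[1])
            self.next_seq += 1
        return released

if __name__ == "__main__":
    buf = ReorderBuffer()
    for seq in (2, 0, 1, 4, 3):        # tasks complete out of order
        print(seq, buf.push(seq, f"task-{seq}"))
```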
Fig. 11.10 Approximate mean order statistics for the logistic distribution.
Table 11.4 Ordering Constraints on the Postcode Recognition PPF

Stage | PreProcessing | Classification | Dictionary
Mean proc. time (ms) | 487 | 773 | 2802
Min. time (ms) | 406 | 651 | 629
Max. time (ms) | 698 | 1060 | 5349
R/2 | 146 | 205 | 2360
(µ + R/2)/µ | 1.30 | 1.30 | 1.84
% degrad. | 17 | 18 | 36
Perf. I (pcode/s) | 9.1 | 9.0 | 7.0
Perf. II (pcode/s) | 9.1 | 7.5 | 4.8
In fact, there is no need to assemble complete postcodes until the dictionary stage, and additionally using the i860 processors in the Paramid increased performance by an order of magnitude as has been noted earlier. The dictionary stage was tested in isolation on the Paramid, since the scaling on the Paramid did not allow more than one worker task in the dictionary stage when testing the complete pipeline. The mean time to recognize a postcode of six or seven characters was found to be 0.027 s or 0.13 s respectively from a test file of 1945 characters. Message size was a maximum of 112 bytes. The Bernoulli probability for a six-character postcode was 0.517. The observed distribution, though closely resembling a Bernoulli distribution, was continuous and bimodal. The processors were arranged in a binary tree topology. Table 11.5 shows that again all estimates track the results well with small numbers of processors.
Table 11.5 Estimates for the Dictionary Stage of the Postcode Recognition PPF

Number of Processors | Actual Time (s) | Estimate (s): KWLarge | KW1 | KW2 | M&S | Sample
8 | 2.88 | 2.99 | 3.07 | 3.28 | 2.98 | 3.09
7 | 3.30 | 3.36 | 3.44 | 3.59 | 3.34 | 3.44
3 | 7.61 | 7.34 | 7.44 | 7.33 | 7.25 | 7.33
11.9 TASK SCHEDULING
In Section 11.4, $k$ was defined as the number of tasks in a chunk, where a chunk is a scheduling unit, when considering a single farm. However, should $k$ be fixed (uniform)12 or should it be varied? For general task duration distributions, a statistical approach appears necessary, rather than relying on heuristics, though such approaches are also mentioned in Section 11.9.3. In theory, it is possible to examine the distribution of all order statistics to find how task finishing times are distributed in the mean. The optimal general scheduling scheme may employ this information to minimize the idling time of processors waiting for the last tasks in a run to complete.
12 In the general case, though the number of tasks is fixed, due to the statistical nature of task durations the actual duration of computation will vary.
11.9.1 Uniform task size
An optimal value of k can be found by equating the derivative with respect to (w.r.t.) k to zero, though strictly this optimizes the bounding equation and not necessarily the task size. Unfortunately, the resulting expression may not give a closed-form solution, for example, from (11.32)
For embedded applications it may be possible to pre-calculate $k_{opt}$, as otherwise a lengthy calculation is required. From (11.34) the optimal value of $k$ is
which can be pre-calculated. Equation (11.46) is independent of $n$ and is therefore likely to be unreliable, even though $k_{opt_{kw}} \propto h/\sigma$ is intuitively correct. A prediction for the optimal time, based on substituting back $k_{opt_{kw}}$, is
11.9.2 Decreasing task size
Suppose that there are $i = 1, 2, \ldots, v$ rounds of task distribution, but that at each round tasks are dispatched in a batch of $p_i$ sets of $k_i$ tasks; then from
is a bound on the mean time for all processors, $p$, to complete, for all distributions in which the first two moments exist. In the worst case, just one processor will exceed $\mu k_i$. For conservative scheduling, enough work must still be available to keep the remaining processors active in most circumstances. The same considerations recursively apply until the predicted maximum duration for a task approaches the overhead time for sending a task. Figure 11.11 illustrates how the total run-time might approach $n\mu/p$ if the maximum departure from the mean old task duration is the same as the new task duration at each scheduling round, with $p_1 = p$, $p_{i>1} = p - 1$. The idle time experienced by $p - 1$ processors is that time at the end of the final round while the last processor finishes. Suppose
with fixed $y_i = 2$. With $k_i$ so set, the task size decreases exponentially and half the work is statically allocated in the first round, which is the 'Factoring'
regime [168]. Assume that the finishing times at the first scheduling round are distributed as $E[X_{i:p}]$, $i = 1, 2, \ldots, p$, with the r.v. $X$ being the execution times. An approximate solution to setting $y_i$, thus restricting the idle time in the general case, is shown in Section A.2.
Fig. 11.11 Decreasing task-size method of scheduling.
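The decreasing chunk-size rule can be written down directly once $y$ is chosen. The sketch below is an illustration (the rounding and termination rules are assumptions of the example, not taken from the original): it produces the per-round chunk sizes for $p$ workers, with $y = 2$ giving the Factoring regime in which half of the remaining work is allocated at each round.

```python
def decreasing_chunks(n, p, y=2.0, min_chunk=1):
    """Chunk size per scheduling round: k_i = remaining / (y * p), floored at min_chunk."""
    remaining, rounds = n, []
    while remaining > 0:
        k = max(min_chunk, int(remaining / (y * p)))
        batch = min(remaining, p * k)          # p workers each take a chunk of k tasks
        rounds.append(k)
        remaining -= batch
    return rounds

if __name__ == "__main__":
    # 400 tasks on 8 workers, as in the Paramid timings; y = 2 is Factoring
    print(decreasing_chunks(400, 8))           # e.g. [25, 12, 6, 3, 2, 1, 1]
```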
11.9.3 Heuristic scheduling schemes
In [101], for safe self-scheduling (SSS), in effect a variant of Factoring, it is shown that if the task duration cdf is modeled as Bernoulli, a value of $y_i < 2$ will limit the idle time to $p\,t_{max}/(p-1)$, where $t_{max}$ is the maximum time of any task.13 In [282], a similar limit to the idle time, guided self-scheduling (GSS), was proposed, but the rate of task-size descent, which is $(1 - 1/p)^i$, appears too steep.14 Reference [359] shows that for a monotonically decreasing task size GSS will behave badly, and proposes a linear rate of descent, 'trapezoidal
13 The Bernoulli distribution models long and short conditional branches (i.e. Binomial cdf, $B(1,q)$, where $q$ is the probability of a long branch).
14 After $p$ rounds, $(1 - 1/p)^p \to 1/e \approx 1/3$, which is approximately the equivalent of $y_i = 3/2$ in (11.49).
self-scheduling' (TSS). However, in [101], a large-scale test showed that TSS performed weakly if the task cdf was a Bernoulli distribution.
11.9.4 Validity of Factoring
For a uniform distribution, $U(0,1)$, $\{E[U_{i:p}] = i/(p+1) = p_i\}$ form a triangular distribution of finishing times (and hence a triangular distribution of idling times). Refer again to Fig. 11.10, a continuous estimate of $E[X_{i:p}]$, $i$ odd, for a logistic distribution. This plot can also be used to find idling times when task durations are so distributed. Again, the idling times roughly have a triangular distribution. In practical situations, symmetrical distributions are likely to have the same balanced ratio of idle time to active time (since the logistic distribution, when suitably parameterized, can model the ubiquitous normal distribution). It follows that, when using decreasing task-size scheduling, at most half the available work needs to be reserved (for symmetrical distributions). The Factoring method is predicated on a triangular distribution of job times and is therefore suitable for all symmetrical distributions.
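The argument can be checked numerically for other distributions: estimate the mean order statistics $E[X_{i:p}]$ by simulation, convert them to idle times relative to the last finisher, and see what fraction of the round's total processor-time is spent idle. The sketch below is an illustrative check with arbitrary distribution parameters, not part of the original analysis; for the uniform case it returns roughly $(p-1)/2p$, i.e. close to half, which is the basis of the Factoring rule.

```python
import numpy as np

def mean_order_stats(sampler, p, trials=20000, rng=None):
    """Monte Carlo estimate of E[X_{i:p}] for i = 1..p."""
    rng = rng or np.random.default_rng(0)
    draws = np.sort(sampler(rng, (trials, p)), axis=1)
    return draws.mean(axis=0)

def idle_fraction(sampler, p):
    """Mean idle time of the p workers, as a fraction of p * (last finishing time)."""
    finish = mean_order_stats(sampler, p)
    idle = finish[-1] - finish                  # time spent waiting for the last worker
    return idle.sum() / (p * finish[-1])

if __name__ == "__main__":
    p = 16
    print("uniform :", round(idle_fraction(lambda r, s: r.uniform(0, 1, s), p), 3))
    print("logistic:", round(idle_fraction(lambda r, s: r.logistic(1.0, 0.1, s), p), 3))
```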
11.10 SCHEDULING RESULTS
Again the theoretical forecasts were tested experimentally and by simulation.

11.10.1 Timings

The Paramid multicomputer was employed under the same conditions as for Section 11.5. Four hundred jobs were sent out on demand for each test. Buffering was turned off and the message size was restricted to a 12 byte tag and 16 bytes of data. The task durations were timed during the parallel run so that the sample mean and s.d. could be found. Similarly, the sample mean start-up time and the sample mean message delivery time were taken. These results are in a form suitable for the prediction equations. The overall time was taken at the central task farmer. A central per-message overhead was introduced by waiting on the low-priority transputer clock, one tick per 0.64 ms, according to a truncated normal distribution. The intention of the overhead was to model the global processing and/or the reassembly of messages before further processing, an activity often performed at the farmer. An exponential ($1 - e^{-\lambda x}$, $x \ge 0$) job duration distribution was used, which is not symmetrical but is mathematically tractable. Therefore, the Factoring results do not represent an ideal regime. In Fig. 11.12, a task size of six is in most cases a clear minimum. The relative depth of the minimum is governed by the extent of the potential saving. Larger task sizes than six cause the remainder portion, that is the time waiting for the last task to complete, to exceed any saving made by reducing the number
of messages. Smaller task sizes than six result in more central overhead. The message-passing overhead remains approximately constant across task size. As to sensitivity to task size, it can be seen from Fig. 11.12 that, within the range of task sizes surveyed and according to the central overhead, a difference of between about 3% and 10% is possible.
Fig. 11.12 Uniform task-size scheduling.
In Fig. 11.13, a factoring scheduling regime using (11.49) is compared to the worst and best case task-size regime over a wider range of central workloads. Factoring appears to be a minimal regime though it does not in this instance exceed the performance of uniform task-size scheduling. A simple estimate of the minimum time can be taken from
where $v$ is the number of scheduling rounds from Factoring. In Table 11.6, the minimal time estimates arising from Equations (11.47) and (11.50) are compared. The two estimates form upper and lower bounds for the timings. An exact result for the exponential cdf (here $\lambda = 1$) has already been given in (11.8). By using (11.32), with (11.8) substituted for $E[R_{m\&s}]$, it is also
Fig. 11.13 Comparison of scheduling regimes.
possible to use a distribution-specific estimate of the optimal task size
The resulting estimate of $k_{opt}$, in that it rises less steeply, appears to be more accurate than $k_{opt_{kw}}$, but the estimate of $t_{min_{ma}}$ is too pessimistic. Refer again to Table 11.6 for the timings.
11.10.2 Simulation results
A simulation was conducted to verify the large-$p$ behavior of the various estimators. To approach the population statistics, 20,000 jobs were distributed. Experiments were made at intervals of eight processors, from 8 to 248 processors. Figures 11.14-11.16 are the result, for a sample range of processor numbers, of varying the task size from 1 to 8 jobs with an exponential service distribution. In regard to uniform task sizes, for small per-task delays there is a minimum
Table 11.6 Predictions for Minimum Job Duration (s)
(Predicted task sizes $k_{opt_{kw}}$ and $k_{opt_{ma}}$; uniform task-size estimates $t_{min_{kw}}$ and $t_{min_{ma}}$; Factoring estimate $t_{min_{fact}}$.)

Central Workload Index | k_opt_kw | k_opt_ma | k_actual | Actual Min. Time | t_min_kw | t_min_ma | t_min_fact
1 | 2.1 | 1.5 | 2 | 0.8322 | 0.8489 | 0.8599 | 0.8004
5 | 2.9 | 2.0 | 6 | 0.8377 | 0.8570 | 0.8697 | 0.8061
10 | 3.7 | 2.6 | 6 | 0.8406 | 0.8647 | 0.8791 | 0.8093
20 | 5.1 | 3.5 | 6 | 0.8500 | 0.8762 | 0.8931 | 0.8157
30 | 6.3 | 4.4 | 6 | 0.8619 | 0.8850 | 0.9039 | 0.8221
40 | 7.4 | 5.2 | 6 | 0.8712 | 0.8923 | 0.9128 | 0.8285
50 | 8.5 | 5.9 | 6 | 0.8811 | 0.8986 | 0.9205 | 0.8349
shared between 1 and 7 tasks (not distinguishable on Figs. 11.14-11.15). The timings for 2, 3 and 6 tasks are bunched together and cannot be distinguished on the plots. The relative order changes with the number of processors deployed. At the largest delay the order changes to favor sizes 4 and 6 though the results vary considerably with the number of processors. From the simulation it is again apparent that it will be difficult to accurately predict the best task size on an a priori basis. Turning to Factoring, it is apparent that as the per-task delay increases factoring more clearly represents a minimal regime. In Fig. 11.17, the optimal time predictions of Section 11.10 are tested against the simulation results. For the smallest delay, 0.001, it is possible for the predictors to find a task size of less than one, with the result that the predictions for the optimal time are less than the simulated time. Conversely, for larger delays, here 0.1, the estimates move in the opposite direction.
11.11 CONCLUSION
Order statistics provide a general way to predict the latency experienced by a task in its journey across a pipeline. For real-time systems this method of estimating maximal times is advocated over queueing theory techniques. For asynchronous pipelines, there is the possibility of combining the two methods of prediction. Conveniently, the same equations can help predict the finishing times of a job when its tasks are scheduled on a single farm, representing a synchronous stage within a pipeline. Timings and simulations have indicated the validity of the various estimators, and lead to the conclusion that this generalized method is a useful way of making high-level performance predictions for PPF systems. The intention is to help plan portable embedded systems, since obviously the high-level predictions will need to be refined and honed for individual implementations.
Fig. 11.14 Simulation of uniform task-size scheduling, delay 0.001.

Appendix
A.1 OUTLINE DERIVATION OF KRUSKAL-WEISS PREDICTION EQUATION
The approximation to $m_p$, (11.34), is derived (in non-trivial fashion) from (11.16) by setting
where $S_k$ is the execution time of a task of size $k$, and $H$ is the cdf of the execution times. (The final approximation in (A.1) is found by manipulating the Taylor series approximation to $\ln(1 - x)$.) For large $k$
By the theory of large deviations [141, p. 184],
Fig. 11.15 Simulation of uniform task-size scheduling, delay 0.01.
where $G(\mu, \theta) = \int e^{\theta(x - \mu)}\, dF(x)$ is the moment-generating function of $X - \mu$. Finally, (11.34) is arrived at by a Taylor expansion of $G(\mu, \theta)$.
A.2 FACTORING REGIME DERIVATION
At round i, assuming p > 1,
where $r_i$ is the total number of tasks remaining at each scheduling round, $r_0 = n$, and $y$ is unknown. To achieve a balance requires, from (11.48),
If
Fig. 11.16 Simulation of uniform tasks, delay 0.1.
are substituted in (A.5), after rearrangement,
Notice that (A.7) depends on the c.o.v. The L.H.S. of (A.7) is at a minimum at y = 1, and can be approximated by a Taylor expansion close to one. A conservative estimate is
With y = 2 one has the Factoring regime.
Fig. 11.17 Predictions of optimal times.
12 Instrumentation of Templates

An application is instrumented in order to record its progress and/or performance. As debugging is achieved in a PPF templated system by firstly verifying the sequential algorithm on a workstation or PC, and secondly relying on the tried-and-tested template for the parallel logic, it is primarily performance instrumentation which is of concern. Hardware instrumentation of a (network of) processors and/or their communication links should have very little effect on real-time execution performance, but is specific to a particular machine and can be costly to implement. There may also be constraints on the volume of data which can be recorded. The PPF data-farm template was therefore instrumented by software means. For a fuller discussion of instrumentation of parallel computers in general, refer to [303]. The primary problem to be faced when instrumenting a parallel or distributed system is how to ensure a consistent global clock, so that reliable visualization of an application's progress becomes possible. Design of a global clock algorithm for event tracing is by no means a trivial task, as the maintenance of the clock should not significantly slow down processing relative to an uninstrumented run, or, worse still, result in a reordering of message arrival times due to inconsistencies in the distributed clocks associated with each separate processor. Large systems or long event traces also present another problem: how to restrict or select the flow of data. For instance, the designers of the Pablo [117, 304] data collection and visualization system, faced with the possibility of arbitrarily large parallel processors, are investigating how to cope with the flow of trace data. It appears that previous automatic approaches of semantic
compression [241] and output throttling are not adequate to limit the file size. A new approach using statistical clustering of related behaviour is one avenue of research. Since the role of the worker process is passive in the PPF methodology, it is not appropriate for the worker to initiate tracing when it has reached steady state. To provide event traces, a centralized clock synchronization system is therefore required, which differs from some of the solutions that have been proposed elsewhere for keeping global time.
12.1 GLOBAL TIME
If global time is not kept consistently when timestamping a record of events in a multi-computer application, then the ordering of significant events on different nodes (i.e. usually processors) can easily go awry. In particular, the phenomenon of 'tachyons' can occur, in which a message apparently arrives before it is sent. Various approaches to keeping a global clock exist. Solutions in which logical clocks [208, 300] are kept give a consistent global ordering but cannot in any sense be said to keep real time. Such systems are, however, applicable to distributed systems where debugging is the primary goal [217] rather than performance measurement. Fault-tolerant solutions [209, 324, 339] typically involve $O(n^2)$ messages for $n$ processes, even if all message journeys are confined to a single hop. In [77] an $O(n)$ message solution relies on the presence of an embedded ring topology, which excludes tree topologies. However, tree topologies are popular for data-farms. Other work [227], though important in a theoretical sense, relies on complete-graph processor topologies. Finally, statistical post-processing of the trace record [91] represents an alternative approach to ensuring global correlation of time. A simple method, interpolation between start and end timing signals [261], was initially explored as a way of instrumenting a template but, though suitable for gauging intra-node process activity, was not found to be sufficiently accurate. A number of convergence algorithms [234, 306] exist, but these may not provide accurate timing at the inception of an algorithm. However, unlike distributed or networked methods [193], it is possible to have an initial traffic-free phase; and one can assume that faults are rare, making a centralized algorithm feasible. As stated in the chapter introduction, low-level manipulation of the hardware, as for instance in the synchronized wave solution for transputers of [371], was not the aim. The requirements of the centralized, software design can thus be summarized as:
• A minimum of user intervention is needed in regard to the clocking mechanism.
• The design should not be dependent on any feature of the hardware such as would, for example, necessitate the use of assembly language. Solutions involving hardware interrupts [180] are by no means trivial to implement and equally run into the problem of application perturbation.
• The number of synchronization messages should be minimized.
• The mechanism should not rely on a particular architecture.
• It should not be assumed that the system is already in steady state or can converge slowly to synchronization.
• The times should conform to a suitable real-time reference.

In practice, two algorithms are required: an algorithm to bring the clocks on all nodes within some minimum error range, and an algorithm subsequently to maintain time against clock drift.
12.2 PROCESSOR MODEL
The synchronization method described in this chapter is suitable for processor nodes with at least two levels of thread or task priority and supporting internal concurrency, as is common for interrupt-driven applications. Only local physical clocks are assumed to be available (most multicomputers can provide high-priority monotonic clocks with a resolution of at least 1 µs). An intermediary process which acts as a monitor able to intercept communication is assumed in the model; the provision of the monitoring process is assumed to be an implementation-dependent feature of the template. In [94] there is an appropriately accurate method of supplying a global time reference, which was aimed at hypercube multicomputers. However, it is important for a trace mechanism within the real-time PPF environment to provide timestamps that are not biased by possible drift of the master clock. In particular, where interrupts occur, real time should be maintained. If real time is not required then it will be sufficient to synchronize to an uncorrected master clock. (The whole system will drift at the same linear rate.)
12.3 LOCAL CLOCK REQUIREMENTS
Experimental evidence [61] shows that crystal clocks drift linearly if the temperature regime is constant, and that in any case drift is small ($< 10^{-5}$), so that second-order terms are neglected in the following. (For runs longer than a few minutes temperature oscillations may occur on some processors [229], in which case second-order correction terms offer one solution.) It is therefore assumed that the accuracy of clocks should be such that a bound on the error is given by
250
INSTRUMENTATION OF TEMPLATES
$$(1 - f)(t - t') \pm p \;\le\; H(t) - H(t') \;\le\; (1 + f)(t - t') \pm p, \qquad (12.1)$$
where $f$ is the maximum clock frequency drift rate, $t$ is a notional reference time (real time) such as Universal Time Coordinated (UTC) [193], $p$ is the discrete clock precision or resolution, $H(t)$ is the reading of a clock at time $t$, and $(t - t') > p$. To clarify the explanation, Table 12.1 summarizes the notation used in this chapter.

Table 12.1 Notation Used in the Text
Variable | Definition
f | maximum clock frequency drift rate
p | precision or resolution of all physical clocks
o_i | measured offset of node i's clock from node 0's clock (at time t)
O_i | offset o_i corrected for drift from global real time
D_i | total measured round-trip time from node 0 to node i (at time t)
τ_i | the apparent round-trip time to node i, recording the start on the central clock and the end on the remote clock
m | median of clock differences (at time t)
d_0 | estimate of node 0's drift rate (at time t)
R | clock refresh interval, between synchronization pulses
c_i | estimated drift of central clock during the round trip from node 0 to node i (at time t)
h | number of stages or hops in a message's outward journey
τ_0(h) | theoretical minimum one-way journey time over h hops
e_d | theoretical error in the estimate of O_i
e_f | theoretical error in the estimate of the difference in clock rates (the skew-rate)
S | the sampling interval used to establish R
Definition maximum clockfrequencydrift rate precision or resolution of all physical clocks measured offset of node i's clock from node O's clock (at time t) offset a corrected for drift from global real time total measured round-trip time from node 0 to node i (at time t} the apparent round-trip time to node i recording the start on the central clock and the end on the remote clock median of clock differences (at time t} estimate of node O's drift rate (at time t) clock refresh interval, between synchronization pulses estimated drift of central clock during roundtrip from node 0 to node i (at time t) number of stages or hops in a message's outward journey theoretical minimum one-way journey time over h hops theoretical error in the estimate of Oi theoretical error in the estimate of the difference in clock rates (the skew-rate) the sampling interval used to establish R
STEADY-STATE BEHAVIOR
To aid the explanation, this section assumes that the processing system has already reached steady state. In particular, the appropriate time interval between refresh signals, R, the clock refresh interval, is assumed to have been established. Section 12.5 details the calculation of the clock refresh interval. Steady-state behavior proceeds as follows: The central synchronization server (CSS) on node i = 0 sends a request message to every other node by sending a synchronization 'pulse' and in each case immediately receives back a reading for the arrival time of the 'pulse', as shown in Fig. 12.1. 'Sending the request messages' is called polling. The central node in each case records
the time delay for the round-trip, as given by its local clock. The central node subsequently, and for every other node, makes the following calculations:

• The time delay for the round-trip is adjusted to account for intervening drift of the local clock on the central node;
• The corrected round-trip time delay is halved;
• The apparent one-way journey time to the remote node, $\tau_i$, is found by subtracting the synchronization departure time-stamp from the arrival time reading; and
• The difference between $\tau_i$ and the halved, corrected round-trip time serves as an offset estimate of the discrepancy between the central node's clock rate and the remote node's clock rate.

By using the mid-point of the corrected round-trip time as an estimate of the reading at the other node, the central node minimizes its error in the long run. In other words, the mid-point is an unbiased estimator of the arrival time of the pulse (i.e. its mean approaches the mean of the population of round-trip times).1 Formally, the offset estimate at time $t$ for node $i$ ($i = 1, 2, \ldots, n$), $n$ being the number of nodes, is
with Di the total round-trip time to node i recorded on the central node, Cj, the correction to the central node's recording of the round-trip time to node i, will be further explained in Equation (12.4). T{ is the interval between sending the pulse as recorded on the central node and receiving the pulse as recorded on node i's clock when the pulse arrives. There is no reason to give special significance to the central node's clock. Therefore, offset o; is further adjusted to account for the departure of the central node's local clock from some view of real time. A view of real time can be arrived at by an averaging procedure applied to the drift of all clocks. The average of the differences is found at each polling time t. In part, so as to reject outliers (particularly if the algorithm is also to be adapted for times close to boot-up time), the median is chosen as an average. Alternatively, the mean of the middle could be used but this would require sorting the differences, time complexity O(nlogn), in order to find the middle set of differences. In [147] the mean of the center is taken, by requiring all readings in the center to be within 4e + I f R of each other, where e is the uncertainty 1
in the message delay. In general, the selection algorithm for the center is not trivial. The procedure in [147] is intended to remove from consideration the occasional clearly faulty clock and, as such, the value of e is taken to be T_0, the known minimum one-way journey time. If a way of using a tighter value for e were to be found, there is still the possibility that the precision of the global time estimate could be reduced if too many differences were rejected. Once the median, m(t), is calculated, it is added to the offset for each of the nodes so that the final offset is
On the central node, the offset is simply m(t). By recording successive median estimates, the central node is able to estimate the drift of its local clock:
where again R is the clock refresh interval. Given this estimate, C_i = D_i d_0 (Equation (12.4)). Node 0 can correct its local clock for drift by means of d_0. Node 0 sends the median-adjusted offsets, {O_i}, to the nodes. On receipt of its offset, each node additionally corrects for the drift in its time since the synchronization pulse was sent. This node drift correction uses the same method as described for local clock adjustment in Section 12.6. Drift corrections are assumed to be by software means, as normally a node would not be able to alter its local physical clock. The polling is repeated by the central node after clock refresh interval R. Using the median of the clock differences rejects outliers but does not rely on a priori knowledge to reject them. It makes no assumptions about the nature of the aggregate message round-trip time distribution. Other methods that have been employed or suggested to increase the accuracy of readings include: using a recursive estimate of the round-trip time with a linear smoothing function [61]; and rejecting long delay times, as these are more likely to include asymmetrical journey times [242, 243]. These solutions rely on convergence of local clocks to a global time, whereas for event tracing one requires the accuracy to be within guaranteed bounds at a time signaled from the central farmer. This contrasts with toolkits that allow a trace to be initiated when a process ascertains it has reached a steady state. Note that for cooperative algorithms using the median may be unsuitable, since no new value is introduced [306] and convergence is therefore not guaranteed. The calculation of the median is not computationally intensive if the following algorithm is employed [288, pp. 476-479]:
ESTABLISHING A REFRESH INTERVAL
253
(with n data values x_i having median x_m) to form an iterative and logarithmically convergent equation, that is
(Other algorithms are at best O(n log(log n)) [191, pp. 216-220].) The situation is better than this: since only an estimate of the true median is needed, it is only necessary to make a few iterations. The possibility of ill-conditioning may anyway make it necessary to curtail the iterations. An advantage of using a median-adjusted correction term, particularly if the clocks were not to be brought together by the initialization phase, is that the correction jumps on each clock are reduced in aggregate. The possibility of global time appearing to jump forward abruptly, or even move negatively, is reduced.
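To make the round concrete, the following sketch shows how the central node's calculations might be coded. It is a minimal illustration only: the helper routines local_clock, poll_node and send_offset, the fixed node count N, and the use of a full sort to obtain the median (where the text recommends the cheaper iterative estimate) are all assumptions, and the offset arithmetic is one plausible reading of Equation (12.3) and the bullet points above rather than the Paramid implementation itself.

```c
/* Sketch of one steady-state synchronization round on the central node
 * (node 0).  All names and message-passing helpers are illustrative.    */
#include <stdlib.h>

#define N 8                               /* number of subsidiary nodes   */

extern double local_clock(void);              /* central node's clock     */
extern double poll_node(int i);               /* send pulse to node i and
                                                 return the arrival time
                                                 read on node i's clock   */
extern void   send_offset(int i, double O_i); /* deliver the final offset */

static int cmp(const void *a, const void *b)
{
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

void sync_round(double d0)  /* d0: estimated drift rate of node 0's clock */
{
    double o[N];                              /* per-node offset estimates */

    for (int i = 0; i < N; i++) {
        double depart = local_clock();
        double arrive = poll_node(i);         /* remote reading of arrival */
        double D_i = local_clock() - depart;  /* round trip, central clock */
        double T_i = arrive - depart;         /* apparent one-way journey  */
        double C_i = D_i * d0;                /* drift correction (12.4)   */
        o[i] = T_i - 0.5 * (D_i - C_i);       /* offset estimate, cf. (12.3) */
    }

    /* Median of the clock differences; a full sort is used here for
     * clarity, not the iterative estimate recommended in the text.       */
    double s[N];
    for (int i = 0; i < N; i++) s[i] = o[i];
    qsort(s, N, sizeof(double), cmp);
    double m = 0.5 * (s[N / 2 - 1] + s[N / 2]);

    for (int i = 0; i < N; i++)
        send_offset(i, o[i] + m);             /* median-adjusted offsets   */
}
```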
Fig. 12.1 Synchronization signal pattern.
12.5 ESTABLISHING A REFRESH INTERVAL
Since there is uncertainty in the estimate of the clock offset and clock drifts relative to a central reference, it is necessary to refresh the clocks by periodic synchronization pulses. The method described in this chapter does not seek to keep track of when to refresh each individual node with a separate synchronization pulse (or allow each node to request a synchronization from the
central server). Instead, the clock refresh interval R for all nodes is chosen as min({R_i}). An advantage of a single clock refresh interval is that all synchronization pulses are sent at the same time and not at irregularly spaced intervals. For simplicity, consider the refresh time, R, necessary to ensure communication events are correctly ordered between the central node and one other node [94], which is given by
where T_0(h) is the minimum possible one-way message time with zero-length data and h is the number of message hops. In a single bus system h = 1, but for our Paramid multicomputer, employing store-and-forward communication, h > 1. e_d is the error, as recorded by a notional global clock, in the offset estimate of (12.3) (caused by the variance of round-trip timings). An upper bound for e_d is given in (12.8). The number e_f is the error, as calculated from a notional global clock, in the skew-rate estimate. The skew is between the clock rates of the two nodes. An upper bound for e_f is given in (12.11). R is the minimum time taken for the clocks to drift apart to such an extent that a message between any two nodes might result in an ordering error. An ordering error would occur if a message were to be recorded as arriving before it was sent. The factors of two in (12.7) arise because, if the uncertainty between the central node and a subsidiary node is e, then the uncertainty in timing between any two subsidiary nodes, were they to pass a message, is 2e. T_0 can be estimated by a message-timing measurement of the intended computer system when no other message-generating applications are present. A long-tailed distribution of round-trip times is common to LAN networks [68] and multicomputers [94] alike. It follows that any one attempt at measurement of T_0 is most likely to occur at the most populated end of the distribution of T_0 measurements.
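On a quiet system, one simple way of obtaining that estimate is to take the smallest of a batch of round-trip measurements and halve it, on the assumption of a symmetrical journey. The routine below is a sketch along those lines; round_trip is an assumed helper, not part of the original instrumentation.

```c
/* Sketch: estimate T_0, the minimum one-way message time for one hop,
 * by repeated measurement on an otherwise empty system.                */
extern double round_trip(void);   /* assumed: minimal message to a
                                     neighbor, returns round-trip delay */

double estimate_t0(int samples)
{
    double min_rt = round_trip();
    for (int i = 1; i < samples; i++) {
        double rt = round_trip();
        if (rt < min_rt)
            min_rt = rt;          /* keep the smallest observation      */
    }
    return min_rt / 2.0;          /* assume a symmetrical out/back path */
}
```

Because the distribution is long-tailed, a single reading is likely to fall in the populated upper region, whereas the minimum of even a modest batch approaches T_0.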
f_i is the relative clock drift between node 0 and node i. D_i/2 - T_0(h) represents the uncertainty in the h-hop journey time to node i were the clocks at node 0 and node i running at the same rate; (D_i/2 - T_0(h)) f_i is the additional error due to clock skew; and f_i T_0(h) is the uncertainty in node 0's measurement of the minimum journey time to node i. For the purposes of tracing, an estimate of T_0(1) is sufficient, rather than keeping a table of estimated minimum journey times indexed by h. f_i is estimated by:
where O_i(t) is the measured clock offset at time t, as given by (12.3), and S is a suitable sample interval. The maximum error, e_m, that can be made is:
where f_m is the maximum frequency drift rate for one clock, substituted for f_i into Equation (12.8) in order to arrive at Equation (12.10). Notice that if f_m is the maximum frequency drift rate for any one clock, then the total possible drift is 2f_m. In fact, (12.10) is the same equation for maximum clock reading error as that developed in [68] by a different route and for a different purpose. The maximum possible error is determined by the maximum round-trip time, which naturally arises as the synchronization method depends on message passing. An upper bound to the error in the frequency drift can also be arrived at:
Though the S needed for a required e_f can be estimated (by estimating e_d for each subsidiary node through comparing successive values of D_i), this would mean that S might cause an arbitrary delay at the beginning of a test in order to bound e_f sufficiently. A simple expedient of not allowing e_f above a characteristic data-sheet value for f_m ensures that accuracy is preserved at a small cost in efficiency. A lower bound on S is that its duration should not be so short that successive measurements are not statistically independent. Equation (12.8) relies on a measurement of D_i to bound the reading error. If the same refresh pulse is used for all nodes, then one seeks the maximum error that can occur, which will arise from the maximum journey time. However, if successive sampling rounds are made to find the maximum journey time, then one can select the minimum maximum journey time. If the probability of finding a low enough value from any one round of samples is p, then the probability that the waiting time will be k rounds before finding that value is p(1 - p)^(k-1). The expected number of Bernoulli trials (including the successful trial) to achieve a low enough value is
with variance
If the probability of not getting a low enough value is 0.5, then the expected number of trials is two. It is necessary to include at least one standard deviation's worth of extra trials to account for the error in the estimate, which gives approximately four trials in all. The distribution of round-trip messages can also be examined in order to estimate the number of rounds necessary. Certainly, sufficient samples should be taken to bracket any system interrupts.
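These figures are simply the standard results for a geometric distribution; as a check, under the assumption that successive sampling rounds are independent Bernoulli trials with success probability p,

\[
E[k] \;=\; \sum_{k=1}^{\infty} k\,p\,(1-p)^{k-1} \;=\; \frac{1}{p},
\qquad
\operatorname{Var}[k] \;=\; \frac{1-p}{p^{2}} .
\]

With p = 0.5 this gives E[k] = 2 and a standard deviation of sqrt(2) ≈ 1.4, so allowing roughly one standard deviation of extra rounds beyond the expectation leads to the four trials quoted above.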
12.6 LOCAL CLOCK ADJUSTMENT
The method outlined in Section 12.4 does not correct for the drifting of local clocks between the synchronization pulses. For generality, a software correction method is described in this section, though on some machines it is possible to correct by hardware means (e.g. phase-locked loops on Internet machines serviced by the Network Time Protocol [243]). Each local node can estimate its clock drift relative to the averaged real time by finding the change in physical clock offsets over a number of synchronization rounds. Since drift is assumed to be linear, two observations are sufficient. A linear correction function ensures smoothness, avoiding discontinuities [371]. In Fig. 12.2, Part A, node i's local physical clock is shown drifting from global time in linear fashion. However, in practice offset adjustments are made to a local software clock, Fig. 12.2, Part B. If a local time enquiry is made, the returned value is the sum of the software clock time at the latest synchronization instance and the time interval measured on the physical clock. Therefore, this returned time will be in error by the amount marked 'local error' in Fig. 12.2, Part B. Formally,
where L_i(t) returns the local time for node i, adjusted to account for drift between synchronization pulses. H_i(s_k) is node i's centrally adjusted software clock reading at synchronization instance s_k. (H_i^p(t) - H_i^p(s_k)) is the time interval from s_k to t returned by node i's physical clock, and k = 1, 2, ... is an index of the synchronization instances. A_i(k) is the correction function given by:
where O_i(s_k) is the accumulated, centrally supplied offset to node i's physical clock. Assuming a constant clock refresh interval, R, (12.15) can be rewritten as
where O_i is given by (12.3). Adjusting a local clock reading between synchronization pulses by means of a linear correction function does not require regularly spaced synchronization intervals, and it will tolerate a time lag between the arrival of a synchronization pulse and the arrival of offset O_i(t). In some cases, it is possible to implement clock synchronization for our application domain without some of the features described. For example, local clock adjustment is not required if message events are the only timed events. Similarly, if no reference is made to events outside the system, then difference averaging is not needed.
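A sketch of such a drift-adjusted time enquiry is given below. The correction slope A_i(k), estimated here from two successive centrally supplied offsets over one refresh interval, is an assumption consistent with the linear-drift argument above rather than a transcription of Equations (12.14)-(12.16); all names are illustrative.

```c
/* Sketch of node i's local time enquiry with linear drift correction.     */
typedef struct {
    double H_sync;   /* software clock reading at the last sync, H_i(s_k)   */
    double Hp_sync;  /* physical clock reading at the last sync, H_i^p(s_k) */
    double O_prev;   /* offset supplied at synchronization instance k-1     */
    double O_curr;   /* offset supplied at synchronization instance k       */
    double R;        /* clock refresh interval                              */
} clock_state;

extern double physical_clock(void);   /* raw reading of the hardware clock  */

double local_time(const clock_state *c)
{
    /* Drift is assumed linear, so two successive offsets are enough to
     * estimate a correction slope applied between synchronization pulses.  */
    double slope   = 1.0 + (c->O_curr - c->O_prev) / c->R;
    double elapsed = physical_clock() - c->Hp_sync;
    return c->H_sync + slope * elapsed;
}
```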
Fig. 12.2 Local drift on physical and software clocks.
12.7 IMPLEMENTATION ON THE PARAMID
The global clock instrumentation was tested on our Paramid distributed-memory multicomputer. The Paramid usually employs a virtual channel system, but the underlying hardware is dependent on store-and-forward communication. Figure 12.3 shows the result of measurements on an empty system, from which it will be seen that the round-trip message time distribution is indeed long-tailed,
though the lower 'hump' may be caused by the low-priority process time-slice interrupt (every 1024 μs), which makes some readings appear longer.
Fig. 12.3 Frequency of timings (μs) for one hop (round-trip) on the Paramid.
A crude measure of overhead was made on the central node by recording (without clock drift adjustment) the time taken for synchronization, recording the trace and supplying a local clock. The overhead on other nodes will naturally be less. An analysis of the overhead is given in Table 12.2, which is for 1000 round-trip messages of approximately 200 bytes each. Note that the results in the table are consistent since in some cases, discounting the first four pulses at 0.1 second intervals, the central synchronization process was halted just before making a last pulse. The overheads recorded refer to a particular total job size regime, as Fig. 12.4 shows that the fixed overhead becomes a much greater burden for shorter job sizes. The communication overhead, aside from synchronization, was between 6% and 9%. The number of synchronization pulses includes those made mostly over shorter intervals at initiation time. The number of messages is restricted so that the trace file can be held entirely in main memory. Though a no-work case and a round-robin distribution of work at constant intervals were also tried, in this instance truncated, randomly quantized Gaussian distributions were employed for the workloads, with mean per-node work time equal to the variance (a sketch of such a workload generator follows Table 12.2). Experience of testing showed that using a Gaussian distribution was more likely to find
errors in recording message times, though once the appropriate parameters had been set no errors occurred.

Table 12.2 Timings (s) Recorded on the Central Node Using the Local Clock

Wall-clock   Sync.      No. of Sync.   Overhead From                Total %     Mean Work
Time         Interval   Timings        Syncs.   Time     Trace      Overhead    Time
 16.641      3.197        9            0.013    0.311    0.505       4.987       0.1
 32.125      3.197       13            0.063    0.316    0.503       2.745       0.2
 47.452      3.197       18            0.145    0.316    0.504       2.034       0.3
109.040      3.197       37            0.097    0.294    0.505       0.822       0.7
155.096      3.197       52            0.183    0.308    0.502       0.640       1.0
309.957      3.197      100            0.321    0.306    0.504       0.365       2.0
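The truncated, quantized Gaussian workloads referred to above might be generated along the following lines. The Box-Muller transform, the 1 ms quantization grid and the use of the C library rand() are illustrative assumptions; only the constraint that the mean per-node work time equals the variance is taken from the text.

```c
/* Sketch: draw one synthetic per-job work time (in seconds) from a
 * truncated, quantized Gaussian whose variance equals its mean.        */
#include <math.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

double work_time(double mean)
{
    double sigma = sqrt(mean);                 /* variance = mean          */
    double t;
    do {
        double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        double z  = sqrt(-2.0 * log(u1)) * cos(2.0 * M_PI * u2);
        t = mean + sigma * z;                  /* Gaussian sample          */
    } while (t <= 0.0);                        /* truncate at zero         */
    return floor(t * 1000.0) / 1000.0;         /* quantize to 1 ms steps   */
}
```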
The sampling interval was set to 0.1 s, the clock drift to 10^-5 and the minimum one-hop journey time for 1 byte was found to be 60 μs. With these parameters the clocks reached an acceptable state of accuracy after four synchronization pulses during an enforced quiescence for the eight processors used. Fewer initial synchronization pulses always led to ordering errors. The maximum error for varying round-trip times can be calculated from Equation (12.10) using the measured minimum round-trip time, 60 μs, a clock-drift rate of 10^-5 and a clock resolution of 10^-6 s. With a measured net data transmission rate of approximately 16 Mbit/s on any one Paramid communication link, and with the diameter of the network only three, the maximum error for our set-up is well within a 0.5 ms boundary. After four timings, the refresh period was set once and for all. It would probably be too disruptive to re-sample at a later time and would add to the complexity of the software. It is certainly necessary not to use a recursive method of finding the refresh times (by using the last refresh time as a sample interval) as this would lead to errors spiraling upwards.
12.8 CONCLUSION
The detailed design of a global clock system for instrumentation is a compromise between achieving accuracy and avoiding perturbation of the parallel application. At least three causes of perturbation can be isolated: 1. Excess number of messages; 2. Disturbance of the pattern of communication; 3. Excess calculations to achieve the synchronization.
Fig. 12.4 Overhead from tracing on the Paramid.
In the design described in this chapter, by using one centrally determined refresh time, the disruption to the pattern of communication is limited to one regularly occurring pulse. The selection of the refresh time is assisted by discarding inaccurate samples by a probabilistic criterion. Specifically, averaging by a median is used to find an estimate of real time. Local adjustments for the estimated local clock drift between refresh pulses reduce the central computation as does the selection of an iterative method of median calculation.
Part V
Future Trends
13 Future Trends

Mainstream computing is currently dominated by the ubiquity of the PC, and PC-based architectures increasingly compete with many traditional high-performance machines such as workstations, servers and mainframes. The driving force behind this trend is the economy of scale which PCs have established through their deep penetration of the office and, to a lesser extent, home environment. There is, however, increasing evidence that this situation may be changed by the current convergence between mobile computing and telecommunications, which should establish embedded networked computing as a new mass-market application area. A notable feature of such systems will be their much simpler user interface than current general-purpose PCs, mandated by their limited physical size and anticipated user population. As a result, embedded systems, in which any configurability will be implicit rather than explicitly controlled by the user, will have a much higher profile than at present. This, in turn, will significantly shift the balance in computer systems design from software-only solutions towards hardware-software co-design, in which an initial software application simulation will need to be mapped to a variety of heterogeneous hardware architectures, ranging from the conventional PC, through newly established standards for embedded mobile processors,¹ to dedicated hardware implementations for high-volume applications.
¹ For example, the combination of Symbian's EPOC real-time operating system for mobile devices [346] and the ARM low-power RISC processor core [120], which can be augmented with a variety of on-chip dedicated processing capabilities.
Single-processor embedded systems will continue pragmatically to utilize whatever hardware satisfies a particular cost-function in the marketplace of the time. This can and has resulted in numerous one-off, bespoke solutions for particular applications. As the scale of application increases, the development costs become so large that it becomes uneconomic not to reuse software. PPF is primarily aimed at large-scale embedded applications which rely on some form of parallel architecture. The key problem facing the developers of such applications is that the underlying technology continues to change at an alarming rate, often making software obsolete before it is completed. The current PPF model assumes a homogeneous network of general-purpose processing modules. While modular processors such as the transputer have been replaced first by the Texas Instruments TMS320C40, and then by the Analog Devices SHARC family of DSPs [14], there is also a growing variety of special-purpose hardware. The consensus in choice of parallel hardware for embedded systems that existed (at least in Europe) during the late 1980s and early 1990s no longer exists. At the same time, in the defense field sourcing concerns have led to a commercial-off-the-shelf (COTS) system-level design philosophy. Parallelism has not as yet taken the route originally anticipated, whereby increasingly large arrays of modular processors were deployed, perhaps due to concern over scalability. Instead, there is increased support for 'glueless' multiprocessing, even from mainstream manufacturers such as in Intel's Pentium P6 family. However, the power of these processors at present makes for a rather limited processor parallelism [93]. On the other hand, mainstream manufacturers have become increasingly confident in incorporating instruction-level parallelism: either superscalar designs such as the i860 and now within Pentium processors; or through very long instruction word (VLIW) processors such as Transmeta's Crusoe (TM5400) [127]; or through SIMD instruction set extensions for sub-word parallelism [273, 298] aimed at multimedia applications. Instruction-level parallelism is also prevalent in field-programmable gate arrays (FPGAs) when programmed by means of silicon compilation. In fact, CSP-like design methodologies [267, 83] are strongly represented in emergent FPGA applications, now being mapped to 1M gate arrays [394]. The point about this diversity is that no one hardware medium can be guaranteed to persist. However, the variety of hardware enables heterogeneous pipelines to be constructed, especially in applications such as optical flow (Chapter 8) where fine-grained parallelism forms one stage of the pipeline. It is fine-grained parallelism which is least supportable by general-purpose modular parallel processors. A software framework is more than ever required in order to mask technological diversity, either within the range of possible hardware implementations, or over time. Given the limited processor parallelism evident and/or the presence of internal parallelism, a coprocessor model of parallelism appears more appropriate than the data farm. Pipelines will still be apparent, but they will be pipelines of heterogeneous coprocessors, not
pipelines of homogeneous data-farms. In fact, this scenario is a generalization of PPF, as it is still possible for a pipeline stage to be a data-farm.
13.1 DESIGNING FOR DIFFERING EMBEDDED HARDWARE
Future large-scale embedded systems will be developed using commodity parallel computing hardware as an extension of the PC/workstation. Networks of workstations (NOWs) or symmetric multiprocessors (SMPs) will be utilized to reduce evaluation time and simulate real-time performance. Conversely, target architectures will include heterogeneous hardware such as field-programmable gate arrays (FPGAs) (which may be evolvable or reconfigurable during application execution) and application-specific integrated circuits (ASICs) integrated with RISC and DSP processor cores. To adapt the existing PPF design model to this diversity of target architectures, PPF can be subsumed within a more general coprocessor model of parallelism which retains the advantages of the previous approach by enforcing a constrained design cycle with limited degrees of freedom. Design of the coprocessor pipeline will be recast as a form of hardware/software parallel co-design [390], which integrates algorithm design and development closely with real-time prototyping and implementation. One way to achieve this goal is by using software components. The software component, common in commercial settings [84], is the software analogue of the coprocessor. The end-user should be oblivious of the hardware actually used, and generic serial, shared-memory and distributed-memory solutions can be supported alongside drivers for more esoteric accelerating hardware. In particular, the software component can mask the raw interface presented by low-level, hardware-bound approaches to prototyping, for example [98]. Unlike a class library, which contains static constructs, the software component encapsulates dynamic behavior. The user is able to specify code segments, message structures, and default actions to determine an instantiation of that dynamic behavior. Through polymorphism, the same dynamic structure can support different levels of granularity. A software component will enable complex parallel prototypes to be assembled rapidly, allowing alternative structures to be explored. The component will also encapsulate important infrastructure necessary for prototyping, in the form of: test data; records of results with an objective testing framework; structure to support a user library of common subroutines; and timings, which will allow performance costing of dynamic application behavior. In other words, breadth is as important as depth in component construction.
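One minimal way of picturing such a component in C is as a table of operations behind which a serial routine, a data farm or an accelerator driver can equally well sit. The interface below is purely illustrative, intended only to show the shape of the idea; it is not the PPF toolkit's actual API.

```c
/* Illustrative sketch of a software component masking the hardware
 * realization of one pipeline stage; all names are assumptions.         */
typedef struct component {
    const char *name;

    /* Bind the underlying resource: serial code, a processor farm,
     * an FPGA configuration or a DSP kernel, as appropriate.            */
    int    (*configure)(struct component *self, const void *params);

    /* Process one unit of work; the granularity and the degree of
     * internal parallelism are hidden behind this single call.          */
    int    (*process)(struct component *self, const void *in, void *out);

    /* Hooks for the prototyping infrastructure mentioned above:
     * built-in test data and timing capture for performance costing.    */
    int    (*self_test)(struct component *self);
    double (*last_cost)(struct component *self);

    void   *state;                    /* backend-specific private data    */
} component;

/* A pipeline is then an ordered collection of components, each of which
 * may be a data farm, a coprocessor or plain serial code.                */
typedef struct {
    component **stage;
    int         n_stages;
} pipeline;
```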
13.2 ADAPTING TO MOBILE NETWORKED COMPUTATION
It seems increasingly likely that fixed networks, while remaining as a core, will no longer be the norm, especially in consumer electronics [108]. Java
Remote Method Invocation (RMI) [85] is one of a number of distributed object communication harnesses which have the possibility of being extended to semi-dynamic mobile networks (ones with a fixed and a mobile component). RMI has a three-layered software architecture, with transport-level protocol handling (TCP/IP, circuit-oriented) at the lowest level, and a choice of communication model (unicast point-to-point) at the intermediate level below the application layer. RMI is specialized for Java applications, providing remote object loading, and dynamic class and stub loading.² Polymorphic loading can occur, whereby a sub-type of a declared type is loaded. On each processor, a registry must be run to act as a nameserver for remote object method invocation. The registry is not persistent, as remote object garbage collection takes place by means of reference counting. How would a PPF computation on a mobile network be initiated and synchronized? One plan is to start with a single master farmer module which is always present, and a pool of farmer and worker modules which can be chained together. As currently implemented, the first farm is formed dynamically to save time. When processed work is ready to go on to another pipeline stage, the first farmer inspects for farmer multicasts and selects a ready farmer. A more flexible arrangement is shown in Fig. 13.1. The pipeline manager (PM) is contacted from a pool of RMI elements on a known port/multicast address. The PM then ascribes the 'gender' (either farmer or worker) to an RMI element. Having set up the pipeline (Fig. 13.2), the PM reflects its behavior to become a monitor. Each instance of a farmer is characterized by the type of data manager: at the start and end of a pipeline a file data manager is assigned, while in intermediate stages a message reception and relay manager is needed. RMI can pass objects as parameters to a server provided that the objects are serializable, in effect allowing PPF to load code dynamically onto a worker, though there are issues of casting. For the purposes of automatic configuration, objects designated as remote can be passed from server to client, though in this case only the stub object arrives.³ In principle, an application can be assembled as a pipeline of data-farms from a central source, or the application can be extended by passing stubs (having first sent naming information). A further issue to address is security, as code distribution is much like virus distribution. The Jini model for distributed computation [100] would now appear to provide a software technology to support this sort of mobile computation. Jini is layered on RMI, but enhances the RMI nameserver with an automatic and generic way of discovering and forming an ad hoc computation network. Through leasing, Jini also enables such background federations of devices to
² Dynamic class loading is also a feature of mobile agent systems constructed under RMI.
³ As the stub object is strictly constrained, it becomes necessary to provide a customized security manager.
Fig. 13.1 Inchoate system.
be dissolved in due course. Jini provides an implementation repository which can be accessed via an HTTP server, maybe through a WAP portal, but not directly by RMI. The Bluetooth radio standard [148] is a low-level way of forming device federations, especially by short-range connections to the fixed network.
13.3 CONCLUSION
Large-scale embedded systems continue largely to use homogeneous processing hardware, for instance in the DARPA SLAAC project's sonar and radar systems. PPF systems remain entirely appropriate for this type of application. However, there is also acknowledgement that for such large-scale systems some form of future-proofing is needed, which in the case of the SLAAC project comes in the form of the two-level computer. In the two-level computer [320], processors hosting the communication network are separated from the computation processing elements, which can be changed according to technological circumstances. For example, the top level may consist of PowerPC processors linked by a Myrinet interconnect [42], while the lower layer might be filled by FPGAs or by SHARC DSPs. References [137] and [12] represent
Fig. 13.2 Dual data-farm pipeline after configuration.
alternative such solutions for a sonar beam-forming application. This computational form comes as no surprise because our earlier Paramid research machine is also a two-level computer with separate communication and computation processing elements. When heterogeneous hardware is integrated into a PPF-like system, further encapsulation becomes necessary. This chapter has proposed that the way this might be achieved is through software components. Software components are a further step beyond object-orientation, as they encapsulate dynamic behavior, not simply a static selection of reusable objects. In the extended PPF case, each component would incorporate parallel behavior, varying from the data-farm paradigm. The embodiment of each component would be a co-processor, with the resulting pipeline forming a heterogeneous collection of co-processors. Finally, PPF may be developed in the direction of mobile networks or a combination of mobile and fixed networks. A semi-dynamic solution has been proposed for the latter alternative.
References
1. 3L Ltd., Peel House, Ladywell, Livingston, Scotland. Parallel C User Guide, 1991. 2. 3L Ltd, 86/92 Causeway, Edinburgh, Scotland. Parallel C User Guide: Texas Instruments TMS320C40, 1992. 3. D. Abramson and G. Cameron. DCompose: a tool for measuring data decomposition on distributed-memory multiprocessors. In A. Y. Zomaya, editor, Parallel Computing: Paradigms and Applications, pages 411-374. Thomson, London, 1996. 4. H. Abut, editor. Vector Quantization, IEEE Reprint Collection. IEEE, NJ, 1990. 5. H. Abut, B. P. M. Tao, and J. M. Smith. Vector quantizer architecture for speech and image coding. In IEEE International Conference on Acoustics, Speech and Signal Processing, volume 2, pages 756-759, 1987. 6. E. H. Adelson and J. R. Bergen. The extraction of spatio-temporal energy in human and machine vision. In IEEE Workshop on Motion: Representation and Analysis, pages 151-155, 1986. 7. A. Agarwal. Performance tradeoffs in multithreaded processors. IEEE Transactions on Parallel and Distributed Systems, 3(5):525-539, September 1992.
8. D. P. Agrawal and R. Jain. A pipeline pseudoparallel system architecture for real-time dynamic scene analysis. IEEE Transactions on Computers, 31(10):952-962, 1982. 9. S. Ahmed, N. Carriero, and D. Gelernter. The Linda program builder. In A. Nicolau, D. Gelernter, T. Gross, and D. Padua, editors, Advances in Languages and Compilers for Parallel Processing, pages 71-87. Pitman, London, 1991. 10. S. Ahuja, N. J. Carriero, D. H. Gelernter, and V. Krishnaswamy. Matching language and hardware for parallel computation in the Linda machine. IEEE Transactions on Computers, 37(8):921-929, August 1988. 11. Time-frequency and wavelet analysis. IEEE Engineering in Medicine and Biology Magazine, Special Issue, 14(2), 1995. 12. G. E. Allen and B. L. Evans. Real-time sonar beamforming on workstations uisng process networks and POSIX threads. IEEE Transactions on Signal Processing, 48(3):921-926, 2000. 13. G. S. Almasi and A. Gottlieb. Highly Parallel Computing. Benjamin/Cummings, Redwood City, CA, 1994. 14. Analog devices'ADSP-21160 SHARC DSP vs. Texas Instruments TMS320C6x DSPs, 1995. White Paper available from http: //www. analog. com/publications/whitepapers/. 15. G. M. Amdahl. Validity of the single processors approach to achieving large scale computing capabilities. In Spring Joint Computing AFIPS Conference, volume 30, pages 483-485,1967. 16. G. M. Amdahl. Limits of expectation. International Journal of Supercomputing Applications, 2:88-94, 1988. 17. P. Anandan. A computational framework and an algorithm for the measurement of visual motion. International Journal of Computer Vision, 2:283-310,1989. 18. M. Annaratone, E. Arnould, T. Gross, H. T. Kung, S. M. Lam, 0. Mezilcioglu, K. Sarocky, and J. A. Webb. Warp architecture and implementation. In Proceedings of the 13— Annual International Symposium on Computer Architecture, pages 346-356, 1986. 19. Fiber distributed data interface (FDDI) token ring media access control (MAC) X3.239, 1987. 20. M. Antonini, M. Barlaud, P. Mathieu, and I. Daubechies. Imaging coding using vector quantisation in the wavelet transform domain. In IEEE
International Conference on Acoustics, Speech, and Signal Processing, pages 2297-2300, 1990. 21. W. S. Athas and C. L. Seitz. Multicomputers: Message passing concurrent computers. IEEE Computer, 21(8):9-24, August 1988. 22. M. Atkins. Performance and the i860 microprocessor. IEEE Micro, pages 24-27,72-78, October 1991. 23. N. Audsley, I. Bate, and A. Grigg. Portable code: Reducing the cost of obsolescence in embedded systems. Computer Control Engineering, 10(3):98-104, June 1999. 24. A. Back and S. Turner. Using optimistic execution techniques as a parallelizsation tool for general purpose computing. In B. Hertzberger and G. Serazzi, editors, High-Performance Computing and Networking International Conference, pages 21-26. Springer, Berlin, 1995. Lecture Notes in Computer Science Volume 919. 25. N. Balakrishnan and A. C. Cohen. Order Statistics and Inference. Academic Press, Boston, 1991. 26. S. Balay, W. D. Gropp, L. C. Mclnnes, and B. F. Smith. Efficient management of parallelism in object-oriented numerical software libraries. In E. Arge, A. M. Bruaset, and H. P. Langtangen, editors, Modern Software Tools in Scientific Computing. Birkhauser, 1997. 27. K. Balmer, N. Ing-Simmons, F. Moyse, I. Robertson, J. Keay, M. Hammes, E. Oakland, R. Simpson, G. Barr, and D. Roskell. A single chip multimedia video processor. In IEEE Custom Integrated Circuits Conference, 1994. 28. P. Baraldi, A. Sarti, C. Lamberti, A. Prandini, and F. Sgalli. Evaluation of differential optical flow techniques on synthesized echo images. IEEE Transactions on Biomedical Engineering, 43(3):259-271, March 1996. 29. G. Barrett. occamS reference manual. Technical report, Inmos Ltd., 1000 Aztec Way, Bristol, UK, 1992. 30. J. L. Barren, D. J. Fleet, and S. S. Beauchemin. Performance of optical flow techniques. International Journal of Computer Vision, 12(l):43-77, 1994. 31. J. L. Barren and A. Liptay. Optic flow to detect minute increments in plant growth. Biolmaging, 2:57-61, 1994. 32. A. Bauch, T. Kosch, E. Maehle, and Obeloer. The software-monitor DELTA-T and its use for performance measurements of some farming
variants on the multi-transputer system DAMP. Lecture Notes in Computer Science, 634:67-78, 1992. Proceedings of CONPAR '92 - VAPP V. 33. A. Beguelin, J. J. Dongarra, A. Geist, R. Manchek, K. Moore, and V. Sunderam. PVM and HeNCE: Tools for heterogeneous network computing. In J. J. Dongarra and B. Tourancheau, editors, Environments and Tools for Parallel Scientific Computing. North-Holland, Amsterdam, 1993. 34. G. Bell. Ultracomputer: A teraflop before its time. Communications of ACM, 35(8):27-47, 1992. 35. A. Beneviste and G. Berry. The synchronous approach to reactive and real-time systems. IEEE Proceedings, 79(9):1270-1282, September 1991. 36. P. M. Bentley and J. T. E. McDonnell. Wavelet transforms: an introduction. IEE Electronics and Communication Engineering Journal, 6(4): 175-186,1994. 37. V. Bhaskaran and K. Konstantinides. Image and Video Compression Standards: Algorithms and Architectures. Kluwer, Boston, UK, 2— edition, 1997. 38. L. H. J. Bierens. Hardware design for modern radar processing. Electronics & Communications Engineering Journal, pages 257-270, December 1997. 39. J. Bigiin, G. H. Granlund, and J. Wiklund. Multidimensional orientation estimation with applications to texture analysis and optical flow. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13 (8): 775790, August 1991. 40. A. D. Birrell and B. J. Nelson. Implementing remote procedure calls. ACM Transactions on Computer Systems, 2(l):39-59, 1984. 41. U. Block, F. Ferst, and W. Gensch. Software tools for developing and porting parallel programs. In J. S. Kowalik and L. Grandinetti, editors, Software for Parallel Computation, pages 62-75. Springer, Berlin, 1993. 42. N. Boden, D. Cohen, R. Felderman, A. Kuwalik, C. Seitz, J. Seizovic, and W. Su. Myrinet: A gigabit-per-second local area network. IEEE Micro, 15(l):29-38, 1995. 43. A. G. Boisdullier, Arcas-Luque G., V. Brahamcha, V. Buisson, and B. Dalgren. Towards a transputer based system for real time zip code recognition of handwritten addresses. In USPS Advanced Technology Conference, pages 560-583, 1990.
44. F. P. Brooks. The Mythical Man-month: Essays on Software Engineering. Addison-Wesley, Reading, MA, 1975. 45. A. Burks, H. H. Goldstein, and J. von Neumann. Preliminary design of the logical design of an electronic instrument, part 2. Datamation, 8(10):36-41, October 1962. 46. F. Buschmann, R. Meunier, H. Rohnert, P. Sommerlad, and M. Stal. A System of Patterns: Pattern-oriented Software Architecture. Wiley, Chichester, UK, 1996. 47. B. Carpenter, Y-J. Chang, G. Fox, D. Leskiw, and L. Xiaoming. Experiments with 'HP Java'. Concurrency: Practice and Experience, 9(6):633648, June 1997. 48. K. R. Castleman. Digital Image Processing. Prentice-Hall, Englewood Cliffs, NJ, 1996. 49. CCITT. Draft revision of recommendations H261: video codec for audiovisual services at px64bits/s. Signal Processing: Image Communication, 2(2):221-239, 1990. 50. A. Quhadar and A. C. Downton. Structured parallel design for embedded vision systems: An application case study. In Proceedings of IPA '95 IEE International Conference on Image Processing and Its Applications, pages 712-716, July 1995. IEE Conference Publication No. 410. 51. G. Chakraharti and M. Vishvanath. Efficient realizations of the discrete and continuous wavelet transforms: From single chip implementations to mappings on simd array computers. IEEE Signal Processing, 43(3):759771, 1995. 52. A-M Cheng. High speed video compression testbed. IEEE Transactions on Consumer Electronics, 40(3):538-548, 1994. 53. M. N. Chong. Multi-transputer video coding (MTVC) system. In IEEE TENCON'93, pages 394-397, 1993. 54. A. Choudhary, B. Narahari, D. M. Nicol, and R. Simha. Optimal processor assignment for a class of pipelined computations. IEEE Transactions on Parallel and Distributed Systems, 5(4):439-445, April 1994. 55. I. Claesson, S. E. Nordholm, B. A. Bengtsson, and P. Eriksson. A multiDSP implementation of a broad-band adaptive beamformer for use in a hands-free mobile radio telephone. IEEE Transactions on Vehicular Technology, 40(1):194-202, 1991. 56. A. F. Clark, K. Martinez, and W. Welsh. Parallel architectures for image processing. In D. Pearson, editor, Image Processing, pages 169189. McGraw-Hill, London, 1991.
57. M. H. Coffin. Parallel Programming: A New Approach. Silicon Press, NJ, 1992. 58. A. Cohen and A. D. Ryan. Wavelets and Multi-scale Signal Processing. Chapman & Hall, London, 1995. 59. D. Cohen. On holy wars and a plea for peace. IEEE Computer, 14(10):48-54, November 1981.
60. M. Cole. Algorithmic Skeletons: Structured Management of Parallel Computation. MIT, Cambridge, MA, 1989. 61. R. Cole and C. Foxcroft. An experiment in clock synchronization. Computer Journal, 31(6):496-502, 1988. 62. B. Conterno and G. Musso. Parallel architecture for computing intensive tasks in postal applications. In Jet Paste 93, pages 821-829, 1993. 63. R. W. Conway, W. L. Maxwell, and L. W. Miller. Theory of Scheduling. Addison-Wesley, Reading, MA, 1967. 64. G. Cornell and C. S. Horstmann. Core Java. SunSoft, Mountain View, CA, 1997. 65. M. Cosnard and D. Trystram. Parallel Algorithms and Architectures. Thomson, London, 1995. 66. P. Courtney, R. B. Yates, and P. A. Ivey. Mapping algorithms on to platforms: An approach to algorithm and hardware co-design. In British Machine Vision Conference, pages 801-810, 1994. 67. H. Cramer. Mathematical Methods of Statistics. Princeton U.P., Princeton, 1946. 68. F. Cristian. A probabilistic approach to distributed clock synchronization. In Proceedings of the Ninth IEEE International Conference on Distributed Computer Systems, pages 288-296, June 1989. 69. D. E. Culler, A. C. Dussea, R. P. Martin, and K. E. Schauser. Fast parallel sorting under LogP: from thoery to practice. In A. Hey and J. Ferrante, editors, Portability and Performance for Parallel Processing. Wiley, Chichester, UK, 1994. 70. D. E. Culler and J. P. Singh. Parallel Computer Architecture: A Hardware/Software Approach. Morgan Kaufmann, San Francisco, CA, 1999. 71. E. A. B. da Silva, D. G. Sampson, and M. Ghanbari. A successive approximation vector quantizer for wavelet transform image coding. IEEE Transactions on Image Processing, Special Issue in Vector Quantization, 6(1):293-310, 1996.
72. S. P. Dandamundi and S. P. Cheng. A hierarchical task organization for shared-memory multiprocessor systems. IEEE Transactions on Parallel and Distributed Systems, 6(l):l-6, January 1995. 73. S. K. Das and M. A. Picheny. Issues in practical large vocabulary isolated word recognition: The IBM Tangora system. In C-H. Lee, F. K. Soong, and K. K. Paliwal, editors, Automatic Speech and Speaker Recognition Advanced Topics, pages 457-479. Kluwer, Boston, 1996. 74. E. R. Davies. Machine Vision: Theory, Algorithms and Practicalities. Academic Press, London, 2^ edition, 1996. 75. R. Davies. Machine Vision. Academic, San Diego, 2— edition, 1997. 76. G. A. Davinson, P. R. Cappelo, and A. Gersho. Systolic architectures for vector quantization. IEEE Transactions on Acoustics, Speech and Signal Processing, 36:1651-1664,1988. 77. U. de Carlini and U. Villano. A simple algorithm for clock synchronization in transputer networks. Software—Practice and Experience, 18(4):331-347, April 1988. 78. M. Debbage, M. B. Hill, and D. A. Nicole. The virtual channel router. Transputer Communications, 1(1):3-18, August 1993. 79. J. B. Dennis. Dataflow computation for artificial intelligence. In K. Hwang and D. DeGroot, editors, Parallel Processing for Supercomputers and Artificial Intelligence, pages 525-560. McGraw-Hill, New York, 1989. 80. P. A. Devijver and J. Kittler. Pattern Recognition: A Statistical Approach. Prentice-Hall, London, 1982. 81. K. Dezhgosha, M. M. Jamali, and S. C. Kwatra. A VLSI architecture for real-time image coding using a vector quantization based algorithm. IEEE Transactions on Signal Processing, 40(1):181-189, 1992. 82. K. Diefendorff. Sony's emotionally charged chip. Microprocessor Report, 13(4):5-12, 1999. 83. O. Diesel and G. Milne. Compiling process algebraic descriptions into reconfigurable logic. In IPDPS 2000 Workshops, pages 916-923, 2000. Lecture Notes in Computer Science No. 1800. 84. T. Digre. Business object component architecture. 15(4):60-69, 1998.
IEEE Software,
85. T. B. Downing. Java RMI. IDG, Foster City, CA, 1998.
86. A. C. Downton, editor. Engineering the Human-Computer Interface. McGraw-Hill, London, 1991. 87. A. C. Downton. Speed-up trend analysis for H.261 and model-based image coding algorithms using a parallel-pipeline model. Signal Processing: Image Communications, 7:489-502, 1995. 88. A. C. Downton, R. W. S. Tregidgo, and A. Quhadar. Generalized parallelism for embedded vision applications. In A. Y. Zomaya, editor, Parallel Computing: Paradigms and Applications, pages 553-577. Thomson, London, 1996. 89. A C Downton, R W S Tregidgo, and E Kabir. Recognition and verification of handwritten and hand-printed british postal addresses. International Journal of Pattern Recognition and Artificial Intelligence, 5(1):265 - 291, 1991. 90. M. Dubois and F. A. Briggs. Performance of synchronized iterative processes in multiprocessor systems. IEEE Transactions on Software Engineering, 8(4):419-431, July 1988. 91. A. Duda, G. Harms, Y. Haddad, and G. Bernard. Estimating global time in distributed systems. In 7— Internatinal Conference on Distributed Computer Systems, pages 299-306. IEEE, 1987. 92. R. Duncan. A survey of parallel computer architectures. IEEE Computer, 23(2):5-16, February 1990. 93. R. Duncan. Developments in scalable MIMD architectures. In A. Y. Zomaya, editor, Parallel Computing: Paradigms and Applications, pages 3-24. Thomson, London, 1996. 94. T. H. Dunigan. Hypercube clock synchronization. Concurrency: Practice and Experience, 4(3):257-268, May 1992. 95. D. Durand, T. Montaut, L. Kervella, and W. Jalby. Impact of memory contention on dynamic scheduling on NUMA multiprocessors. IEEE Transactions on Parallel and Distributed Systems, 7(11): 1201-1214, November 1996. 96. D. Dzatko and T. Shanley. AGP System Architecture. Addison-Wesley, Reading, MA, 2^ edition, 2000. 97. M. N. Edward. Radar signal processing on a fault tolerant transputer array. In T. S. Durrani, W. A. Sandham, J. J. Soraghan, and S. M. Forbes, editors, Applications of Transputers 3. IOS, Amsterdam, 1991. 98. M. Edwards and J. Forest. A practical hardware architecture to support software acceleration. Microprocessors and Microsystems, 20:167-174, 1996.
99. W. K. Edwards. Core Jini. Prentice Hall, Upper Saddle River, NJ, 1999. 100. W. K. Edwards. Core Jini. Prentice Hall, Upper Saddle River, NJ, 1999. 101. H. El-Rewini, T. Lewis, and H. H. Ali. Task Scheduling in Parallel and Distributed Systems. Prentice-Hall, Englewood-Cliffs, NJ, 1994. 102. D. J. Evans and M. Y. M. Saman. Automatic determination of parallelism in programs. In J. S. Kowalik and L. Grandinetti, editors, Software for Parallel Computation, pages 163-175. Springer, Berlin, 1993. 103. R. Feldman, A. E. Kulawik, N. G. Bowden, D. Cohen, C. L. Seitz, and W-K. Su. Myrinet: A Gigabit per second local area network. IEEE Micro, 15(1):29-36, February 1995. 104. D. J. Fleet. Measurement of Image Velocity. Kluwer Academic, Norwell, MA, 1992. 105. D. J. Fleet and A. D. Jepson. Hierarchical construction of orientation and velocity selective filters. IEEE Transactions on Pattern Analysis and Machine Intelligence, ll(3):315-325, March 1989. 106. D. J. Fleet and A. D. Jepson. Computation of component image velocity from local phase information. International Journal of Computer Vision, 5(1):77-104, 1990. 107. D. J. Fleet and K. Langley. Recursive filters for optical flow. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(1) :6167, January 1995. 108. J. Fleischmann and K. Buchenreider. Integrated engineering. IEEE Computer, 32(2):116-119, 1999. 109. M. Fleury, A. C. Downton, and A. F. Clark. Constructing generic datafarm templates. Concurrency: Practice and Experience, 11 (9):509-528, 1999. 110. M. Fleury, A. C. Downton, A. F. Clark, and H. P. Sava. The design of a clock synchronization sub-system for parallel embedded systems. IEE Part I (Computer and Digital Techniques), 144(2):65-73, March 1997. 111. M. Fleury, N. Sarvan, A. C. Downton, and A. F. Clark. Methodology and tools for system analysis of parallel pipelines. Concurrency: Practice and Experience, 11(11):655-670, 1999. 112. M. Fleury, H. Sava, A. C. Downton, and A. F. Clark. Design of a clock synchronization sub-system for parallel embedded systems. IEE Proceedings Part E (Computers and Digital Techniques), 144(2):65-73, March 1997.
113. M. Fleury, H. P. Sava, A. C. Downton, and A. F. Clark. Designing and instrumenting a software template for embedded parallel systems. In UK Parallel '96, pages 163-180. Springer, London, 1996. 114. M. J. Flynn. Some computer organizations and their effectiveness. IEEE Transactions on Computers, 21(9):948-960,1972. 115. M. J. Flynn. Computer Architecture: Pipelined and Parallel Processor Design. Jones and Bartlett, Boston, MA, 1995. 116. G. D. Forney. The Viterbi algorithm. 61(3):268-278, March 1973.
Proceedings of the IEEE,
117. I. T. Foster. Designing and Building Parallel Programs. Addison-Wesley, Reading, MA, 1995. 118. E. Freeman, S. Hupfer, and K. Arnold. JavaSpaces: Principles, Patterns, and Practice. Addison-Wesley, Reading, MA, 1999. 119. S. S. Fried. Personal computing with the Intel i860. BYTE, pages 347358, January 1991. 120. S. Furber. ARM System-on- Chip Architecture. Addison-Wesley, Harlow, UK, 2000. 121. D. D. Gajski and J-K Peir. Comparison of five multiprocessor systems. Parallel Computing, 2:265-282, 1985. 122. E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design patterns: Abstraction and reuse of object-oriented design. In ECOOP'93 — ObjectOriented Programming, pages 406-421,1993. Lecture Notes in Computer Science volume 707. 123. D. M. Geary and A. L. McClellan. Graphic Java: Mastering the AWT. SunSoft, Mountain View, CA, 1997. 124. A. Geist, A. Beguelin, J. Dongarra, W. Jiang, R. Manchek, and V. Sunderam. PVM: Parallel Virtual Machine — A Users' Guide and Tutorial for Networked Parallel Computing. MIT, Cambridge, MA, 1994. 125. G. A. Geist, J. A. Kohl, and P. M. Papadopoulos. PVM and MPI: A comparison of features. Calculateurs Paralleles, 8(2), 1996. 126. E. Gelenbe. Multiprocessor Performance. Wiley, Chichester, UK, 1989. 127. L. Geppert and T. S. Perry. Magic show. IEEE Spectrum, pages 26-33, May 2000. 128. J. J. Gerbrands. On the relationship between SVD, KLT and PCA. Pattern Recognition, 14(l-6):375-381, 1981.
129. A. Gersho and R. M. Gray. Vector Quantization and Signal Compression. Kluwer, Boston, MA, 1991. 130. M. Ghanbari. Video Coding: An Introduction to Standard Codecs. IEE, London, 1999. 131. M. Ghanbari, C. J. Hughes, M. C. Sinclair, and J. P. Eade. Principles of Performance Engineering for Telecommunications Systems and Information Systems. IEE, London, 1997. 132. F. Glazer, G. Reynolds, and P. Anandan. Scene matching by hierarchical correlation. In IEEE Conference on Computer Vision and Pattern Recognition, pages 432-441, 1983. 133. S. Glinski and D. Roe. Spoken language recognition on a DSP array processor. IEEE Transactions on Parallel and Distributed Systems, 5(7):697-703, 1994. 134. S. Goldsmith. A Practical Guide to Real-Time Systems Development. Prentice Hall, New York, 1993. 135. S. Grabner, D. Kranzlmiiller, and J. Volkert. Debugging parallel programs using ATEMPT. In B. Hertzberger and G. Serazzi, editors, High-Performance Computing and Networking International Conference, pages 235-240. Springer, Berlin, 1995. Lecture Notes in Computer Science Volume 919. 136. I. Graham and T. King. The Transputer Handbook. Prentice Hall, New York, 1990. 137. P. Graham and B. Nelson. FPGA-based sonar processing. FPGA '98, pages 201-208, 1998.
In ACM
138. S. L. Graham, P. B. Kessler, and M. K. McKusick. gprof: a call graph execution profiler. ACM SIGPLAN notices, 17(6):120-126, June 1982. 139. R. Greenlaw and H. J. Hoover. Fundamentals of the Theory of Computation: Principles and Practice. Morgan Kaufmann, San Francisco, 1998. 140. L. J. Griffiths and J. C. Jim. An alternative approach to linearly constrained adaptive beamforming. IEEE Transactions on Antennas and Propagation, 30(l):27-34, 1982. 141. G. R. Grimmett and D. R. Stirzaker. Probability and Random Processes. OUP, Oxford, UK, 2^ edition, 1992. 142. A. S. Grimshaw, W. T. Strayer, and P. Narayan. Dynamic, objectoriented parallel processing. IEEE Computer, pages 33-46, May 1993.
143. W. Gropp, E. Lusk, and A. Skjellum. Using MPI: Portable Parallel Programming with the Message-Passing Interface. MIT, Cambridge, MA, 1994. 144. W. Gropp, E. Lusk, and R. Thakur. Using MPI-2. MIT, Cambridge, MA, 1999. 145. E. J. Gumbel. The maxima of the mean largest value and of the range. Annals of Mathematical Statistics, 25:76-84, 1954. 146. E. J. Gumbel. The Statistics of Extremes. Columbia University, New York, 1958. 147. R. Gusella and S. Zatti. Accuracy of the clock synchronization achieved by TEMPO in Berkeley Unix 43.BSD. IEEE Transactions on Software Engineering, 15(7):847-852, July 1989. 148. J. C. Haartsen. The Bluetooth radio system. IEEE Personal Communications, 7(l):28-36, 2000. 149. E. Hart. Transim user manual, 1993. Available from http://www.eeng.brad.ac.Uk/help/.packlangtool/.transput/.man.transim.htm 150. H. O. Hartley and H. A. David. Universal bounds for mean range and extreme observations. Annals of Mathematical Statistics, 25:85-99,1954. 151. R. Hastings and B. Joyce. Usenix white paper on Purify. In Winter Usenix '92 Conference, 1992. Preprint. 152. S. Haykin. Adaptive Filter Theory. Prentice Hall, London, 1991. 153. M. T. Heath and J. Etheridge. Visualizing the performance of parallel programs. IEEE Software, 8(5):29-39, 1991. 154. D. J. Heeger. Model for the extraction of image flow. Journal of the Optical Society of America A, 4(8):1455-1470, August 1987. 155. Hendrawan. Recognition and Verification of Handwritten Postal Addresses. PhD thesis, University of Essex, December 1994. 156. Hendrawan, A C Downton, and C G Leedham. A fuzzy approach to handwritten address verification. In Proceedings of the Third International Workshop on Frontiers in Handwriting Recognition, pages 409 416, Buffalo, New York, May 1993. 157. C. E. Hewitt and R. R. Atkinson. Specification and proof techniques for serializers. IEEE Transactions on Software Engineering, 5(l):10-23, January 1979.
158. G. Hilderink, J. Broenink, W. Vervoot, and A. Bakkers. Communicating Java threads. In A. Bakkers, editor, Parallel Programming and Java. IOS, Amsterdam, 1997. 159. J. M. Hill and D. B. Skillicorn. Lessons learned from implementing BSP. In High Performance Computing and Networking (HPCN'97), pages 762-771. Springer, Berlin, April 1997. 160. H. W. Hillis. The Connection Machine. MIT, 1985, 1985. 161. C. A. R. Hoare. Communicating Sequential Processes. Prentice-Hall, Englewood Cliffs, NJ, 1989. 162. R. Hockney. Classification and evaluation of parallel computer systems. Lecture Notes in Computer Science, 295:13-25, 1987. 4— International DFVLR Seminar. 163. R. W. Hockney. The Science of Computer Benchmarking. Philadelphia, 1996.
SIAM,
164. M. Holschneider, Kronland-Martinet R., and Ph. Tchamitchian. A realtime algorithm for signal analysis with the help of the wavelet transform. In J.M. Combes, A. Grossmann, and Ph. Tchamitchian, editors, Wavelets, Time-Frequency Methods and Phase Space, pages 286-297. Springer, Berlin, 1989. 165. B. K. P. Horn and B. G. Schunk. Determining optical flow. Artificial Intelligence, 17:185-203, 1981. 166. M. Hotter. Optimization and efficiency of an object-oriented analysissynthesis coder. IEEE Transactions on Circuits and Systems for Video Technology, 4(2):181-194, 1994. 167. S. Hovell. The incorporation of path merging in a dynamic network parser. In ESCA, EuroSpeech97, volume 1, pages 155-158, 1997. 168. S. F. Hummel, E. Schonberg, and L. E. Flynn. Factoring: A method for scheduling parallel loops. Communications of the ACM, 35(8):90-101, 1992. 169. J. E. Hunt and A. G. McManus. Key Java: Advanced Tips and Techniques. Springer, London, 1998. 170. K. Hwang. Advanced Computer Architecture: Parallelism, Scalability, Programmability. McGraw-Hill, New York, 1993. 171. Information Technology - Portable Operating System Interface (POSIX) - Part 1: System Application Program Interface (API) - Amendment 1 .-Realtime Extension (C Language). IEEE, New York, NY, 1995. Standard 1003.1c-1995, also ISO/IEC 9945-l:1990b.
172. Inmos Ltd., Planar House, Parkway, Globe Park, Marlow, Bucks., UK. ANSI C Toolset User Manual, 1990. 173. Intel Corporation. System Performance Visualization Tool User's Guide, 1993. Intel Corporation, Ref. No 312889-001. 174. Y. Itamato, K. Sudo, I. Kaneko, Y. Nishjima, and K. Egami. New pipelined parallel OCR engine architecture. In Jet Poste 93, pages 804811, 1993. 175. Draft ITU-T recommendation H.263: Video coding for low bit-rate communication. Technical report, ITU-T, Geneva, 1995. 176. A. K. Jain. A fast Karhunen-Loeve transform for a class of random processes. IEEE Transactions on Communications, 24:1023-1029, September 1976. 177. A. K. Jain. Fundamentals of Digital Image Processing. Prentice-Hall, London, 1989. 178. R. Jain. The Art of Computer Systems Performance Analysis. Wiley, New York, 1992. 179. F. Jelinek. Statistical Methods for Speech Recognition. MIT, Cambridge, MA, 1997. 180. J. Jiang, A. Wagner, and S. Chanson. Tmon: A real-time performance monitor for transputer-based multicomputers. In D. L. Fielding, editor, Transputer Research and Applications 4, pages 36-45. IOS, Amsterdam, 1990. 181. M. Jones, L. Scharf, J. Scott, C. Twaddle, M. Yaconis, K. Yao, P. Athanas, and B. Schott. Implementing an API for distributed adaptive-computing systems. In IEEE Symposium on Field Programmable Custim Computing, FCCM'99, 1999. 182. R. J. N. Kahlberg, de Jong J. B., and H. P. M. Essink. Automatic reading of handwritten postcodes in boxes. In Jet Poste 93, pages 20-25, 1993. 183. Y. Kaneda and J. Ohga. Adaptive microphone-array system for noise reduction. IEEE Transactions on Acoustics, Speech, and Signal Processing, 34(6):1391-1400, 1986. 184. K. Karhunen. Ueber lineare methoden in der wahrscheinlichtskeitsrechnung. Annals Acad. Sci. FennicteSeries A.I, 37, 1947. 185. A. H. Karp and H. P. Flatt. Measuring parallel processor performance. Communications of ACM, 33(5):539-549, 1990.
186. M. G. Kendall and A. Stuart. The Advanced Theory of Statistics, volume 1. Griffin, London, UK, 2nd edition, 1963.
187. P. J. B. King. Computer and Communication Performance Modelling. Prentice Hall, New York, 1990.
188. M. Kirby and L. Sirovich. Application of the Karhunen-Loeve transform procedure for the characterisation of human faces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(1):103-108, 1990.
189. S. Kleiman, D. Shah, and B. Smaalders. Programming with Threads. Prentice Hall, Upper Saddle River, NJ, 1996.
190. A. L. Knoll. Experiments with characteristic loci for recognition of handprinted addresses. IEEE Transactions on Computers, April 1969.
191. D. E. Knuth. The Art of Computer Programming, volume 3: Sorting and Searching. Addison-Wesley, Reading, MA, 1973.
192. D. E. Knuth. The Art of Computer Programming, volume 2: Seminumerical Algorithms. Addison-Wesley, Reading, MA, 2nd edition, 1981.
193. H. Kopetz and W. Ochsenreiter. Clock synchronization in distributed real-time systems. IEEE Transactions on Computers, 36(8):933-940, August 1987.
194. J. S. Kowalik and K. W. Neves. Software for parallel computing, key issues and research directions. In J. S. Kowalik and L. Grandinetti, editors, Software for Parallel Computation, pages 3-36. Springer, Berlin, 1993.
195. D. Krieger and R. M. Adler. The emergence of distributed component platforms. IEEE Computer, 31(3):43-53, March 1997.
196. R. Kronland-Martinet, J. Morlet, and A. Grossmann. Analysis of sound patterns through wavelet transforms. International Journal of Pattern Recognition and Artificial Intelligence, 1:273-302, 1987.
197. C. P. Kruskal and A. Weiss. Allocating independent subtasks on parallel processors. IEEE Transactions on Software Engineering, 11(10):1001-1016, October 1985.
198. Kuck & Associates, Inc., 1906 Fox Drive, Champaign, IL 61820. CLASSPACK Basic Math Library/C User's Guide 1.3, 1993.
199. V. Kumar, A. Grama, A. Gupta, and G. Karypis. Introduction to Parallel Computing: Design and Analysis of Algorithms. Benjamin/Cummings, Redwood City, CA, 1994.
200. H. T. Kung. Why systolic architectures? IEEE Computer, 15:37-46, 1982.
201. H. T. Kung, L. M. Ruane, and D. W. L. Yen. A two-level pipelined systolic array for convolutions. In H. T. Kung, B. Sproull, and G. Steele, editors, VLSI Systems and Computations, pages 255-264. Springer, Berlin, 1981.
202. T. L. Kunii, S. Asami, and K. Maeda. Information-driven parallel pattern recognition through communicating processes - a case study on classification of wallpaper groups. In M. Reeve and S. E. Zenith, editors, Parallel Processing and Artificial Intelligence, pages 37-49. Wiley, Chichester, UK, 1989.
203. M. Lades, J. C. Vorbrüggen, J. Buhmann, J. Lange, C. von der Malsburg, R. P. Würtz, and W. Konen. Distortion invariant object recognition in the dynamic link architecture. IEEE Transactions on Computers, 42:300-311, 1993.
204. M. Lades, J. C. Vorbrüggen, and R. P. Würtz. Recognizing faces with a transputer farm. In T. S. Durrani, W. A. Sandham, and J. J. Soraghan, editors, Applications of Transputers 3, volume 1, pages 148-153. IOS, Amsterdam, Holland, 1991.
205. T. L. Lai and H. Robbins. Maximally dependent random variables. Proceedings of the National Academy of Science, USA, 73(2):286-288, February 1976.
206. T. L. Lai and H. Robbins. A class of dependent random variables. Zeitschrift für Wahrscheinlichkeitstheorie, 42:89-111, 1978.
207. J. Lakos. Large-Scale C++ Software Design. Addison-Wesley, Reading, MA, 1996.
208. L. Lamport. Time, clocks, and the ordering of events in a distributed system. Communications of the ACM, 21(7):558-565, 1978.
209. L. Lamport and P. M. Melliar-Smith. Synchronizing clocks in the presence of faults. Journal of the Association for Computing Machinery, 32(1):52-78, January 1985.
210. S. Lawrence, C. L. Giles, A. C. Tsoi, and A. D. Back. Face recognition: A convolutional neural-network approach. IEEE Transactions on Neural Networks, 8(1):98-114, January 1997.
211. J-S. Lee and K. Hoppel. Principal components transformation of multifrequency polarimetric SAR imagery. IEEE Transactions on Geoscience and Remote Sensing, 30(4):686-696, 1992.
212. R. Lee. Subword parallelism with MAX2. IEEE Micro, 16(4):51-59, August 1996.
213. R. B. Lee. Performance Characterization of Parallel Processor Organizations. PhD thesis, Stanford University, 1980.
214. S-Y Lee and J. K. Aggarwal. A system design/scheduling strategy for parallel image processing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(2):194-204, 1990.
215. C. E. Leiserson. Fat trees: Universal networks for hardware-efficient supercomputing. IEEE Transactions on Computers, 34(10):892-901, October 1985.
216. S. Levi and A. K. Agrawala. Real-Time System Design. McGraw-Hill, New York, 1990.
217. L. J. Levrouw and K. M. R. Audenaert. Minimizing the log size for execution replay of shared-memory programs. In B. Buchberger and J. Volkert, editors, CONPAR'94-VAPP VI, pages 76-87. Springer, Berlin, 1994. Lecture Notes in Computer Science Volume 854.
218. B. Lewis and D. J. Berg. Multithreaded Programming with Pthreads. Prentice Hall, Upper Saddle River, NJ, 1998.
219. L. M. Li and K. Huang. Optimal load balancing in a multiple processing system with many job classes. IEEE Transactions on Software Engineering, 11(5):492-496, May 1985.
220. L. A. Liporace. Maximum likelihood estimation for multivariate observations of Markov sources. IEEE Transactions on Information Theory, 28(5):729-734, September 1982.
221. H. Liu, T-H. Hong, and M. Herman. A general motion model and spatiotemporal filters for computing optical flow. International Journal of Computer Vision, 22(2):141-172, 1997.
222. M. M. Loeve. Fonctions aléatoires de second ordre. In P. Lévy, editor, Processus Stochastiques et Mouvement Brownien. Hermann, Paris, 1948.
223. LSI Ltd. DPC/C40B Technical Reference Manual, 1993.
224. B. D. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In 7th International Joint Conference on Artificial Intelligence, pages 674-679, 1981.
225. S. M. Lucas. Rapid best-first retrieval from massive dictionaries. Pattern Recognition Letters, 17(14):1507-1512, December 1996.
226. S. S. Lumetta and D. E. Culler. Managing concurrent access for shared memory active messages. In IPPS/SPDP'98, 1998. 7 pages from http://now.CS.berkeley.EDU/Papers2.
227. J. Lundelius and N. Lynch. An upper and lower bound for clock synchronization. Information and Control, 62(2-3):190-204, 1984.
228. S. Madala and J. B. Sinclair. Performance of synchronous parallel algorithms with regular structures. IEEE Transactions on Parallel and Distributed Systems, 2(1):105-116, 1991.
229. E. Maillet. TAPE/PVM an Efficient Performance Monitor for PVM Applications - User Guide. LMC-IMAG, 46, av. Felix Viallet, F-38031 Grenoble, Fr., 1995.
230. S. Mallat. Multi-frequency channel decompositions of images and wavelet representation. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37:2091-2110, 1989.
231. S. Mallat. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(7):674-693, 1989.
232. D. Marr. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. W. H. Freeman, San Francisco, 1982.
233. D. Marr and E. Hildreth. Theory of edge detection. Proceedings of the Royal Society of London B, 207:187-217, 1980.
234. K. Marzullo and S. Owicki. Maintaining the time in a distributed system. ACM Operating System Review, 19(3):44-54, July 1983.
235. D. May, R. Shepherd, and C. Keane. Communicating process architectures: Transputers and occam. In P. Treleaven and M. Vanneschi, editors, Future Parallel Computers, pages 35-81. Springer, Berlin, 1987. Lecture Notes in Computer Science #272.
236. W. F. McColl. Scalability, portability and predictability: The BSP approach to parallel programming. Future Generation Computer Systems, 12(4):265-272, 1996.
237. J. G. McWhirter. Algorithmic engineering in adaptive signal processing. IEE Proceedings Part F (Radar and Signal Processing), 139(3):226-232, 1992.
238. P. Mehrotra and J. Van Rosendale. Programming distributed memory architectures using Kali. In A. Nicolau, D. Gelernter, T. Gross, and D. Padua, editors, Advances in Languages and Compilers for Parallel Processing, pages 364-384. Pitman, London, 1991.
239. B. Meyer. Object-oriented Software Construction. Prentice-Hall, New York, 1998.
240. R. Meyer and K. Schwarz. FFT implementation on DSP-chips - theory and practice. In Proceedings of the International Conference on ASSP, volume 3, pages 1503-1506, 1990.
241. B. P. Miller, M. Clark, J. Hollingsworth, S. Kierstead, S-S Lim, and T. Torzewski. IPS-2: The second generation of a parallel program measurement system. IEEE Transactions on Parallel and Distributed Systems, 1(2):206-217, 1990.
242. D. L. Mills. On the accuracy and stability of clocks synchronized by the Network Time Protocol in the Internet system. ACM Computer Communication Review, 20(1):65-75, January 1990.
243. D. L. Mills. Internet time synchronization: the Network Time Protocol. IEEE Transactions on Communications, 39(10):1482-1493, October 1991.
244. R. Milner. Communication and Concurrency. Prentice-Hall, Hemel Hempstead, UK, 1989.
245. B. Moghaddam and A. Pentland. Face recognition using view-based and modular eigenfaces. SPIE, 2277:12-21, 1994.
246. B. Moghaddam and A. Pentland. Maximum likelihood detection of faces and hands. In International Workshop on Automatic Face- and Gesture-Recognition, pages 122-128, 1995.
247. B. Moghaddam and A. Pentland. Probabilistic visual learning for face detection. In International Conference on Computer Vision and Pattern Recognition, pages 786-793, 1995.
248. R. Moore. Recognition - the stochastic modelling approach. In C. Rowden, editor, Speech Processing, pages 223-255. McGraw-Hill, London, 1993.
249. S. E. Moore, J. Sztipanovitz, G. Karsai, and J. Nichols. A model-integrated program synthesis environment for parallel/real-time image processing. In Parallel and Distributed Methods for Image Processing, pages 31-41, 1997. SPIE Volume 3166.
250. MPI/RT-1.0 draft standard, http://www.mpirt.org/drafts.html.
251. V. N. Muchnick and A. V. Shafarenko. Data-Parallel Computing: the Language Dimension. Thomson, London, 1996.
252. H. Murakami and B. V. K. V. Kumar. Efficient calculation of primary images from a set of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 4(5):511-515, September 1982.
253. H. G. Musmann, M. Hotter, and J. Osterman. Object-oriented analysis-synthesis coding of moving images. Signal Processing: Image Communication, 1:117-138, 1989.
254. J. Nagle. Congestion control in IP/TCP internetworks. Technical report, Ford Aerospace and Communications Corporation, 1984. RFC 896.
255. N. M. Nasrabadi and R. M. King. Image coding using vector quantization: a review. IEEE Transactions on Communications, 36:957-971, 1988.
256. L. M. Ni and P. K. McKinley. A survey of wormhole routing techniques in direct networks. IEEE Computer, 26(2):62-76, February 1993.
257. K. M. Nichols and J. T. Edmark. Modeling multicomputer systems with PARET. IEEE Computer, pages 39-47, May 1988.
258. N. Nissanke. Realtime Systems. Prentice Hall, London, 1997.
259. B. Nitzberg and V. Lo. Distributed shared memory: A survey of issues and algorithms. IEEE Computer, 24(8):52-60, 1991.
260. O. Rioul and P. Duhamel. Fast algorithms for discrete and continuous wavelet transforms. IEEE Transactions on Information Theory, 38(2):569-586, 1992.
261. W. Obeloer, H. Willeke, and E. Maehle. Performance measurement and visualization of multi-transputer systems with DELTA-T. In G. Haring and G. Kotsis, editors, Performance Measurement and Visualization of Parallel Systems, pages 119-143. Elsevier, Amsterdam, 1993.
262. E. Oja. Principal components, minor components, and linear neural networks. Neural Networks, 5:927-935, 1992.
263. D. Ollason, S. Hovell, and M. Wright. Requirements and design of the new continuous speech recognition parser - the Grid. Technical report, BT Laboratories, Martlesham Heath, Ipswich, IP5 3RE, UK, 1998.
264. K. Omahen and V. Marathe. Analysis and applications of the delay cycle for the M/M/c queueing system. Journal of the ACM, 25(2):283-303, April 1978.
265. A. O'Toole, H. Abdi, K. A. Deffenbacher, and D. Valentin. A perceptual learning theory of the information in faces. In T. Valentine, editor, Cognitive and Computational Aspects of Face Recognition, pages 159-182. Routledge, London, UK, 1995.
266. J. K. Ousterhout. Tcl/Tk engineering manual, 1994. Available as ftp://ftp.smli.com/pub/tcl/engManual.tar.Z.
267. I. Page. Constructing hardware-software systems from a single description. Journal of VLSI Signal Processing, 12:87-107, 1996.
268. C. M. Pancake. Where are we headed? Communications of the ACM, 34(11):53-64, November 1991.
269. A. Papoulis. Probability, Random Variables, and Stochastic Processes. McGraw-Hill, New York, 3rd edition, 1991.
270. C. Y. Park and A. C. Shaw. Experiments with a program timing tool based on source-level timing schema. IEEE Computer, 24(5):48-57, May 1991.
271. D. G. Patrick, P. R. Green, and T. A. York. A multiprocessor OCCAM development system for UNIX network clusters. In A. Bakkers, editor, Parallel Programming and Java. IOS, Amsterdam, 1997.
272. S. Pelagatti. Structured Development of Parallel Programs. Taylor & Francis, London, 1998.
273. A. Peleg and U. Weiser. MMX technology extension to the Intel architecture. IEEE Micro, 16(4):42-50, August 1996.
274. A. Pentland, B. Moghaddam, and T. Starner. View-based and modular eigenspaces for face recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 1994. Also MIT Technical Report #TR245.
275. C. Peplinski and T. Fink. A digital television receiver constructed using a media processor. In Audio Engineering Society Meeting, 1999.
276. R. Perrott. Parallel Programming. Addison-Wesley, Wokingham, UK, 1987.
277. R. H. Perrott and P. J. Morrow. The design and implementation of low level image processing algorithms on a transputer network. In I. Page, editor, Parallel Architecture and Computer Vision, pages 243-269. Oxford University Press, Oxford, UK, 1988.
278. G. F. Pfister. In Search of Clusters: the Coming Battle of Lowly Parallel Computing Systems. Prentice-Hall, Upper Saddle River, NJ, 1995.
279. G. F. Pfister and V. A. Norton. "Hot Spot" contention and combining in multistage interconnection networks. IEEE Transactions on Computers, 34(10):943-948, 1985.
280. P. J. Phillips, H. Moon, P. Rauss, and S. A. Rizvi. The FERET September 1996 database and evaluation procedure. In First International Conference on Audio and Video-based Biometric Person Authentication, pages 395-402. Springer, Berlin, March 1997. Lecture Notes in Computer Science 1206.
281. C. D. Polychronopoulos. Automatic restructuring of Fortran programs for parallel execution. Lecture Notes in Computer Science, 295:107-130, 1987. 4th International DFVLR Seminar.
282. C. D. Polychronopoulos and D. J. Kuck. Guided self-scheduling: A practical scheduling scheme for parallel supercomputers. IEEE Transactions on Computers, 36(12):1425-1439, December 1987.
283. C. Ponder and R. Fateman. Inaccuracies in program profilers. Software: Practice and Experience, 18(5):459-467, May 1988.
284. Portland Group, 9150 SW Pioneer Court, Suite H, Wilsonville, Oregon 97070. PGCC User's Guide, 1992. Release 2.3.
285. R. W. Powell. Optical recognition system in letter mechanisation. British Telecommunications Journal, 6:265-291, January 1988.
286. R. Pozo. Performance Models of Parallel Architectures for Scientific Computing. PhD thesis, University of Colorado, 1992.
287. J. Preece and L. Keller, editors. Human-Computer Interaction: Selected Readings: a Reader. Prentice Hall, Hemel Hempstead, UK, 1990.
288. W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling. Numerical Recipes in C. Cambridge University Press, Cambridge, UK, 1st edition, 1988.
289. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes in C. Cambridge University Press, Cambridge, UK, 2nd edition, 1992.
290. K. K. Pringle. Visual perception by a computer. In Automatic Interpretation and Classification of Images, pages 277-284. Academic, New York, 1969.
291. D. M. N. Prior, M. G. Norman, N. J. Radcliffe, and L. J. Clarke. What price regularity? Concurrency: Practice and Experience, 2(1):55-78, March 1990.
292. D. J. Pritchard. Mathematical models of distributed computation. In 7th Occam User Group Technical Meeting. IOS, Amsterdam, 1987.
293. Pure Software Inc., 1309 South Mary Ave., Sunnyvale, CA. Quantify User's Guide, 1992.
294. M. J. Quinn. Parallel Computing: Theory and Practice. McGraw-Hill, New York, 1994.
295. L. Rabiner and B.-H. Juang. Fundamentals of Speech Recognition. Prentice Hall, Englewood Cliffs, NJ, 1993.
296. L. R. Rabiner. A tutorial on Hidden Markov Models and selected applications in speech recognition. Proceedings of the IEEE, 77:257-285, February 1989.
297. L. R. Rabiner, B.-H. Juang, and C.-H. Lee. An overview of automatic speech recognition. In C.-H. Lee, F. K. Soong, and K. K. Paliwal, editors, Automatic Speech and Speaker Recognition: Advanced Topics. Kluwer, Boston, 1996.
298. S. K. Raman, V. Pentkovski, and J. Keshava. Implementing streaming SIMD extensions on the Pentium III processor. IEEE Micro, 20(4):47-57, August 2000.
299. T. A. Ramstad and T. Saramaki. Efficient multi-rate realisation for narrow transition-band FIR filters. In IEEE 1988 International Symposium on Circuits and Systems, pages 2019-2022, 1988.
300. M. Raynal and M. Singhal. Capturing causality in distributed systems. IEEE Computer, 29(2):49-56, February 1996.
301. P. J. Ready and P. A. Wintz. Information extraction, SNR improvement and data compression in multispectral imagery. IEEE Transactions on Communications, 31(10):1123-1130, 1973.
302. S. S. Reddi and E. A. Feurstel. A conceptual framework for computer architectures. Computing Surveys, 8(2):277-300, June 1976.
303. D. A. Reed. Performance instrumentation techniques for parallel systems. Lecture Notes in Computer Science, 729:463-490, 1993.
304. D. A. Reed. Pablo performance analysis environment: Version 4.0 release announcement, 1995. Documentation and software are available by ftp from www-pablo.cs.uiuc.edu.
305. D. A. Reed and R. M. Fujimoto. Multicomputer Networks: Message-Based Parallel Processing. MIT, Cambridge, MA, 1988.
306. N. W. Rickert. Non byzantine clock synchronization - a programming experiment. ACM Operating System Review, 22(1):73-78, January 1988.
307. M. C. Rinard, D. J. Scales, and M. S. Lam. Jade: A high-level machine-independent language for parallel programming. IEEE Computer, pages 28-38, June 1993.
308. S. P. A. Ringland. Application of grammar constraints to ASR using signature functions. In Speech Recognition and Coding, pages 260-263. Springer, Berlin, 1995. Volume 147 NATO ASI Series F.
309. O. Rioul and M. Vetterli. Wavelets and signal processing. IEEE Signal Processing Magazine, pages 14-38, 1991.
310. J. T. Robinson. Some analysis techniques for asynchronous multiprocessor algorithms. IEEE Transactions on Software Engineering, 5(1):24-31, January 1979.
311. A. W. Roscoe. Routing messages through networks: An exercise in deadlock avoidance. In T. Muntean, editor, 7th Occam User Group Technical Meeting, pages 55-79. IOS, Amsterdam, 1987.
312. W. W. Royce. Managing the development of large software systems. In WESTCON, CA, 1970.
313. S. Heath. Embedded Systems Design. Newnes, Oxford, UK, 1997.
314. D. G. Sampson, E. A. B. da Silva, and M. Ghanbari. Low bit rate video coding using wavelet vector quantization. IEE Proceedings - Vision, Image and Signal Processing, 142(3):141-148, 1995.
315. D. G. Sampson and M. Ghanbari. Fast lattice-based gain-shape vector quantization for image sequence coding. IEE Proceedings, Part-I, Communication, Speech and Vision, Special issue on Image Processing and its Applications, 140(1):56-66, 1993.
316. V. Sarkar. Determining average program execution times and their variance. In ACM SIGPLAN Conference on Programming Language Design and Implementation, pages 298-312, 1989.
317. H. Sava, M. Fleury, A. C. Downton, and A. F. Clark. Parallel pipeline implementation of wavelet transforms. In 6th International Conference on Image Processing and its Applications, pages 171-173, 1997.
318. J. Schaeffer, D. Szafron, G. Lobe, and I. Parsons. The Enterprise model for developing distributed applications. IEEE Parallel and Distributed Technology, pages 85-95, August 1993.
319. M. S. Schlansker and B. R. Rau. EPIC: Explicitly parallel instruction computing. IEEE Computer, 33(2):37-45, 2000.
320. C. Seitz. A prototype two-level multicomputer, 1996. Project information in DARPA/ITO project, available as http://www.myri.com/research/darpa/96summary.html.
321. J. M. Shapiro. Embedded image coding using zero-trees of wavelet coefficients. IEEE Transactions on Signal Processing, 41(12):3445-3462, 1993.
322. M. Shensa. The discrete wavelet transform: Wedding the à trous and Mallat algorithms. IEEE Transactions on Signal Processing, 40(10):2464-2482, 1992.
323. N. Sherkat, R. K. Powalka, and R. J. Whitrow. A parallel engine for real-time handwriting and optical character recognition. In Jet Poste 93, pages 830-837, 1993.
324. K. G. Shin and P. Ramanathan. Clock synchronization of a large multiprocessor system in the presence of malicious faults. IEEE Transactions on Computers, 36(1):2-12, January 1987.
325. R. Simar, Jr. DSP architectures, algorithms, and code-generation: Fission or fusion. In IEEE Workshop on Signal Processing Systems, pages 50-59, 1997.
326. H. D. Simon, editor. Scientific Applications of the Connection Machine. World Scientific, Singapore, 1988.
327. E. P. Simoncelli, E. H. Adelson, and D. J. Heeger. Probability distributions of optical flow. In IEEE Conference on Computer Vision and Pattern Recognition, pages 310-315, 1991.
328. D. B. Skillicorn. A taxonomy for computer architectures. IEEE Computer, 21(9):46-57, September 1988.
329. D. B. Skillicorn. Foundations of Parallel Programming. Cambridge University Press, Cambridge, UK, 1994.
330. M. I. Skolnik. Introduction to Radar Systems. McGraw-Hill, New York, 1962.
331. H. C. Slusanschi and C. Jesshope. A FORTRAN 90 to F-code compiler. In UK Parallel '96, pages 40-52. Springer, London, 1996.
332. B. P. Smith. Shared memory, vectors, message passing, and scalability. Lecture Notes in Computer Science, 295:29-34, 1987. 4th International DFVLR Seminar.
333. M. J. T. Smith and T. P. Barnwell. Exact reconstruction techniques for tree-structured subband coders. IEEE Transactions on Acoustics, Speech, and Signal Processing, 34(3):434-440, 1986.
334. L. Snyder. A taxonomy of synchronous parallel machines. In 17th International Conference on Parallel Processing, volume 1, pages 281-285, 1988.
335. I. Sommerville. Software Engineering. Addison-Wesley, Wokingham, UK, 3rd edition, 1989.
336. L. Sousa, J. Burrios, A. Costa, and M. Picdate. Parallel image processing for transputer-based systems. In IEE International Conference on Image Processing and its Applications, pages 33-36, 1992.
337. J. Spiers. Database management systems for parallel computers. Technical report, Oracle Corporation, 1990.
338. H. V. Sreekantaswamy, S. Chanson, and A. Wagner. Performance prediction modelling of multicomputers. In 12th International Conference on Distributed Computing Systems, pages 278-285. IEEE, 1992.
339. T. K. Srikanth and S. Toueg. Optimal clock synchronization. Journal of the ACM, 34(3):626-645, July 1987.
340. V. Srini. An architectural comparison of dataflow systems. IEEE Computer, 19(3):68-88, March 1986.
341. S. R. Sternberg. Pipeline architectures for image processing. In K. Preston Jr. and L. Uhr, editors, Multicomputers and Image Processing, pages 291-206. Academic Press, New York, 1982.
342. W. R. Stevens. Unix Network Programming. Prentice Hall, Englewood Cliffs, NJ, 1990.
343. W-K. Su, R. Siviloti, Y. Cho, and D. Cohen. Scalable, network-connected, reconfigurable hardware accelerator for automatic target recognition. Technical report, Myricom Inc., 1998. Available as http://www.myri.com/darpa/atr-report.pdf.
344. C. Szyperski. Component Software. Addison-Wesley, Harlow, UK, 1998.
345. D. Tabak. Multiprocessors. Prentice-Hall, Englewood Cliffs, NJ, 1990.
346. M. Tasker, J. Allin, J. Dixon, M. Shackman, and J. Forrest. Professional Symbian Programming: Mobile Solutions on the EPOC Platform. Wrox, London, 2000.
347. A. M. Tekalp. Digital Video Processing. Prentice Hall, Upper Saddle River, NJ, 1995.
348. C. Temperton. Self-sorting mixed-radix Fast Fourier Transforms. Journal of Computational Physics, 15:1-23, 1983.
349. Texas Instruments Inc. TMS320C4x User's Guide, 1992.
351. R. Thoma and M. Bierling. Motion compensating interpolation considering covered and uncovered background. Signal Processing: Image Communication, 1:191-212, 1989. 352. Transtech Parallel Systems Ltd., 17-19 Manor Court Yard, Hughenden Ave., High Wycombe, UK. Paramid User's Guide, 1993. 353. R. G. Tregidgo. Parallel Processing and Automatic Postal Address Recognition. PhD thesis, Essex University, May 1992. 354. R. W. S. Tregidgo and A. C. Downton. Processor farm analysis and simulation for embedded parallel processing systems. In S. J. Turner, editor, Tools and Techniques for Transputer Applications (OUG 12). IOS, Amsterdam, 1990. 355. O. Tretiak and L. Pastor. Velocity estimation from image sequences with second order differential operators. In IEEE International Conference on Pattern Recognition, volume 1, pages 16-19, 1984. 356. A. Trew and G. Wilson. Past, Present, Parallel: a Survey of Available Parallel Computing Systems. Springer, London, 1991. 357. E. R. Tufte. The Visual Display of Quantitative Information. Graphics Press, Cheshire, CO, 1983. 358. M. Turk and A. Pentland. Eigenfaces for recognition. Journal for Cognitive Neuroscience, 3:71-86, 1991. 359. T. H. Tzen and L. M. Ni. Trapezoid self-scheduling: A practical scheduling scheme for parallel compilers. IEEE Transactions on Parallel and Distributed Systems, 4(l):87-98, January 1993. 360. A. Uhl and A. Bruckman. A double-tree wavelet compression on parallel MIMD computers. In 6^- International Conference on Image Processing and Its Applications, pages 179-183, 1997. IEE Conference Publication 443. 361. R. Umbach and H. Ney. Improvements in beam search for 10,000—word continuous-speech recognition. IEEE Transactions on Speech and Audio Processing, 2(2):353-356, April 1994. 362. P. E. Undrill. Transputers in image processing. In M. R. Jane, R. J. Fawcett, and T. P. Mawby, editors, Transputer Applications - progress and prospects. IOS, Amsterdam, 1992. 363. L. G. Valiant. A bridging model for parallel computation. Communications of the ACM, 33(8): 103-111, August 1990.
364. L. G. Valiant. General purpose parallel architectures. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, volume A, pages 943-972. Elsevier, Amsterdam, 1990.
365. B. van Veen and K. M. Buckley. Beamforming: A versatile approach to spatial filtering. IEEE Acoustics, Speech, and Signal Processing Magazine, pages 4-24, April 1988.
366. M. Vanneschi. Heterogeneous HPC environments. In D. Pritchard and J. Reeve, editors, Euro-Par'98 Parallel Processing, pages 21-34. Springer, Berlin, 1998. Lecture Notes in Computer Science No. 1470.
367. E. Verhulst. Non-sequential processing: bridging the semantic gap left by the von Neumann architecture. In 1997 IEEE Workshop on Signal Processing Systems, pages 35-49. IEEE, Piscataway, NJ, 1997.
368. P. J. Vermeulen and D. P. Casasent. Karhunen-Loeve techniques for optimal processing of time-sequential imagery. Optical Engineering, 30(4):415-423, April 1991.
369. A. Verri, F. Girosi, and V. Torre. Differential techniques for optical flow. Journal of the Optical Society of America A, 7(5):912-922, 1990.
370. M. Vetterli and C. Herley. Wavelets and filter banks: Theory and design. IEEE Transactions on Signal Processing, 40(9):2207-2232, 1992.
371. U. Villano. Monitoring parallel programs running in transputer networks. In G. Haring and G. Kotsis, editors, Performance Measurement and Visualization of Parallel Systems, pages 67-96. Elsevier, Amsterdam, 1993.
372. Vitruvius. De Architectura, volumes 1 & 2. Harvard University, 1945 & 1970. Editor: F. Granger.
373. A. S. Wagner, H. V. Sreekantaswamy, and S. T. Chanson. Performance models for the processor farm paradigm. IEEE Transactions on Parallel and Distributed Systems, 8(5):475-489, May 1997.
374. K. Walrath and M. Campione. The JFC Swing Tutorial: A Guide to Constructing GUIs. Addison-Wesley, Reading, MA, 1999.
375. B. W. Weide. Analytical models to explain anomalous behaviour of parallel algorithms. In International Conference on Parallel Processing, pages 183-187, 1981.
376. A. Weiss. Large Deviations for Performance Analysis. Chapman & Hall, London, 1995.
377. P. H. Welch. Graceful termination - graceful resetting. In A. Bakkers, editor, 10th Occam User Group Technical Meeting. IOS, Amsterdam, 1989.
377. P. H. Welch. Graceful termination — graceful resetting. In Bakkers A., editor, 10— Occam User Group Technical Meeting. IOS, Amsterdam, 1989. 378. P. H. Welch, G. R. R. Justo, and C. Willcock. High-level paradigms for deadlock-free high-performance systems. In Transputer Applications and Systems '93, pages 981-1004. IOS, Amsterdam, 1993. 379. P. J. Welch and D. C. Wood. Higher levels of process synchronisation. In A. Bakkers, editor, Parallel Programming and Java, pages 104-129. IOS, Amsterdam, 1997. 380. M. W. Whybray. Video codec design using dsp devices. BT Technology Journal, 10(1):127-140, January 1992. 381. B. Widrow and S. D. Stearns. Adaptive Signal Processing. Prentice Hall, London, 1985. 382. H. Williams. Threads and multithreading. In Java 1.1 Unleashed, pages 121-163. Sams, Indianapolis, IN, 1997. 383. S. Williams. Approaches to the Determination of Parallelism for Computer Programs. PhD thesis, Loughborough University of Technology, 1978. 384. S. Williams. Programming Models for Parallel Systems. Wiley, Chichester, UK, 1990. 385. W. J. Willimas, H. P. Zaveri, and J. C. Sackellares. Time-frequency analysis of electrophysiological signals in epilepsy. IEEE Engineering in Medicine and Biology Magazine, pages 133-143,1995. 386. G. V. Wilson. Practical Parallel Processing. MIT, Cambridge, MA, 1995. 387. Wind Rivers Systems. VxWorks Programmer's Guide 5.1, 1993. Part #: DOC-100000-0002. 388. S. C. Winter. Research report — current projects. Technical report, University of Westminster, London, 1994. Serialization of parallel programs by G. R. Ribeiro Justo. 389. L. Wiskott, J. -L. Fellous, N. Kriiger, and Ch. von der Malsburg. Face recognition and gender determination. In International Workshop on Automatic Face- and Gesture-Recognition, pages 92-97, 1995. 390. W. H. Wolf. Hardware-software co-design of embedded systems. Proceedings of the IEEE, 82(7):967-989, 1994. 391. P. C. Woodland, C. J. Leggetter, J. J. Odell, V. Valtchev, and S. J. Young. The 1994 HTK large vocabulary speech recognition system. In ICASSP'95, volume I, pages 73-76, 1995.
392. P. H. Worley. A new PICL trace file format. Technical report, Oak Ridge National Laboratory, Oak Ridge, TN, USA, September 1992. Report ORNL/TM-12125.
393. Q. X. Wu. A correlation-relaxation-labeling framework for computing optical flow - template matching from a new perspective. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(8):843-853, September 1995.
394. Xilinx Inc., 2100 Logic Drive, San Jose, CA. Virtex 2.5V Field Programmable Gate Arrays Datasheet, 2000. Available as http://www.xilinx.com/partinfo/d2003.pdf.
395. J. C. Yan. Performance tuning with AIMS - an automated instrumentation and monitoring system for multicomputers. In 27th Annual Hawaii International Conference on System Sciences, 1994.
396. R. B. Yates, N. A. Thacker, S. J. Evans, S. N. Walker, and P. A. Ivey. An array processor for general purpose digital image compression. IEEE Journal of Solid-State Circuits, 30(3):244-249, March 1995.
397. E. N. Yourdon. Structured Design: Fundamentals of a Discipline of Computer Program and Systems Design. Yourdon, NY, 2nd edition, 1978.
Index
68030 microprocessor, 102 Acoustic interference, 202 Acoustic models, 190 Adaptive filters, 202 Address tracing, 71 Address verification, 38 ADSP-21160, 14 Affine transformation, 140 Affine warp, 141 Agents, 98 Alarm-clock interrupt, 108 Algorithmic parallelism, 7, 20, 55 Algorithmic skeletons, 27, 95 AMD SHARC, 14 AMD SHARC, 71 AMD SHARC DSPs, 17 Amdahl's law, 4, 28, 50, 135, 198 Anarchic development, 68 AND-parallelism, 7 AND-tree, 7 Animation, 88 Annotated language, 70 Aperture problem, 145 Application-specific integrated circuits, 265 APTT, 81, 85 Arithmetic Coding, 133 ARM, 263 ArMenX, 55 Ascender/descender sequences, 47 ASICs, 31
Asymptotic behavior, 216 Asymptotic distributions, 216 Asynchronous pipeline, 24 ATEMPT, 59 Autoregressive, 203 B-splines, 197 Bandpass filter, 172 Beam-former, 202 Beam-formers, 202 Beam search, 191 Benchmarking, 61 Bernoulli distribution, 224, 237 Bernoulli trials, 255 Beta testing, 57 Bi-gram, 191 Big-endian, 77 Bimodal, 235 Bimodal distribution, 42, 224 Biorthogonal, 176 BlueTooth, 267 BSP, 211 Buckets, 97 Bulk Synchronous Parallelism, 30 Burke's theory, 227 Byte-code, 82 C++, 27 C40, 71 Cache flushing, 100 Caches, 97 Calculus of variations, 214
Categorical data type, 96 Cauchy-Schwartz inequality, 214 Cauchy distribution, 217 Central differencing, 150 Central Limit theorem, 215 Cepstrum, 190 Channels, 97 Characteristic loci algorithm, 39 Characteristic maximum, 213, 217, 219 Chi-square, 220 Class browser, 60 Client-server, 20 Clock drift, 249 Closed-form solution, 236 Clutter, 31 CMU Warp, 5 Coarse-to-fine, 150, 158 Code vector, 179 Codebook, 179, 184 Coefficient of variation, 222 Commercial-off-the-Shelf, 17 Commercial-off-the-shelf, 264 Communicating Sequential Processes, 68, 96 Communication diameter, 4 Compiler optimization, 220 Computational intensity, 90 Confidence building, 68 Confidence building, 68 Confidence measure, 151 Connected component labeling, 53 Conservative scheduling, 236 Context-switching, 102 Continuous-flow, 5 Continuous-speech recognition, 190 Continuous wavelet transform, 172 Convergence, 252 Convolution, 7 Convolution patch, 153 Convolution theorem, 177 Convolving, 221 Coprocessor, 264 Correlation methods, 160 COTS, 4 Covariance matrix, 167 Critical region, 97 Crusoe, 264 Crystal clocks, 249 CSP, 96, 202 Cytocomputer, 5 Daemon, 99 DAG, 212 DARPA SLAAC project, 267 Data-dependent, 61 Data farming, 27 Data parallelism, 40, 180, 197
Data races, 95 Datagrams, 107 Daubechies wavelet, 175 DCOM, 96 DCompose, 71 DCT, 133 Deadlock, 20, 103 Decoupled architectures, 97 Delay-cycle analysis, 227 Demand-based, 21 Demand-based farming, 27 Derived datatypes, 100 Diagram-based display, 60 Dialect, 191 Directed acyclic graph, 212 Discrete event simulator, 81 Discrete wavelet transform, 171 Disparity field, 145 Distributed-memory, 75 Distribution-free, 221 Divide-and-conquer, 20 Doppler effect, 31 Double exponential distribution, 216, 219 DSP, 71 DSP3 multiprocessor, 33 Dynamic memory allocation, 103 Eigenfaces, 72, 139 Eigenvector set, 166 Eigenvector spread, 150 Eigenvectors, 141 Eigenvectors, 147 Embarrassingly parallel, 28 Embedded system, 67 EMMA2E, 55 EMU, 59 EPOC, 263 Error map, 140 Euclidean space, 179 Event pin, 101 Event trace, 62 Event tracing, 247 Events, 97 Expectation operator, 214 Explicitly parallel instruction computing, 2 Exponential distribution, 222, 224 Extremal statistics, 213 F-code, 70 Face images, 164 Facial geometry, 141 Factoring, 236 Farmer, 18 Fat tree, 29 Fault-tolerance, 23 Fault-tolerant, 248 Feature detection, 140 Feedback, 21
FERET database, 143 FFT, 20 Field-programmable gate arrays, 264 Fine-grained, 71 Fine-grained algorithms, 20 FIR, 173 First asymptotic distribution, 219 First character segmentation, 50 Fly-by-wire avionics, 18 Folded-back, 21 Formal methods, 59 Fortran, 70 Fourier derivative theorem, 161 Fourier transform, 24, 140 FPGA, 23 FPGAs, 55 Frame-rate inter-conversions, 146 Frame in-betweening, 145 Full velocities, 154 Function inlining, 78, 141 Functional language, 95 Funding agency, 67 Gabor filters, 140, 154 Gamma function, 219 Garbage collection, 71, 83, 266 Gaussian density, 226 Gaussian mixture model, 33 Gaussian smoothing, 147 Geometric multiplexing, 7 Geometric parallelism, 6 Global clock, 62 Global error number, 106 Gprof, 73, 134 Gradient-based, 147 Grand Challenge, 28 Granularity, 20, 24, 144 Granularity detectors, 71 Grid-bag, 89 Griffiths-Jim, 202 Guided self-scheduling, 237 H.261, 34, 73, 196 H.263, 72, 132 H.263 encoder, 134 Hand-coded assembly, 79 Handwritten postcode recognition, 233 Handwritten postcodes, 37 Hannover, 196 Harmonic sinusoids, 165 Hashing function, 74 Heap memory, 77 Heart motion, 146 Heavyweight process, 100 Hebbian learning, 165 Heeger algorithm, 155 HeNCE, 60
Heterogeneous PEs, 55 Heuristics, 235 Hidden Markov models, 190 Hierarchical scheme, 225 High Performance Fortran, 70 Higher management, 67 Histogram equalization, 22 Histogramming, 54 Hot-spot contention, 225 Hotspot compiler, 72 Human vision, 5 Hypercuboid, 58 I860, 52, 223 IBM Tangora, 191 Idioms, 99 IFR, 219 Ill-conditioning, 253 Incoherent sensors, 164 Increasing failure rate, 219 Industrial manufacturing control, 18 Inertia tensor, 160 Infinitely divisible, 211 Infinitely divisible distribution, 221 Instrumentation, 95, 98 Instrumentation, 247 Integration, 59 Intel i960, 102 Intensity function, 219 Interconnect technologies, 211 Interleaved memory, 97 Internal concurrency, 99 Interpolation, 248 Interrupt latency, 110 Irradiance, 145 Irregular data dependencies, 22 Irregularly structured, 6 Iterative server, 111 Jade, 70 Java, 71, 82, 265 Java RMI, 81 JavaBean, 96 JavaSpaces, 72 JBuilder, 72 Jini, 72, 266 JIT, 82 Kali, 70 Kernel, 101 KLT, 21, 140, 164 Kolmogorov-Smirnov, 220 Language modeling, 190 Laplace-Stieltjes transform, 227 Laplacian pyramid, 147, 152, 158 Last character segmentation, 50 Latency, 40 Latency hiding, 30 Latin square, 30
Lattice-based VQ, 180 Leasing, 266 Legacy code, 57 Lightweight processes, 100 Linda program builder, 96 Linear algebra, 70 Linear least-squares error, 147 Linear programming, 21 Linear programming, 27 Linguistic analysis, 54 Linked-list, 230 Linked list, 149 Little-endian, 77 Logical clocks, 248 Logistic distribution, 233, 238 LogP model, 30 Long-tailed, 254, 257 Look-and-feel, 85 Loop scheduling, 212 Loop unrolling, 78, 141 Lowly parallel, 4 Lucent Orca FPGAs, 17 MAD, 59 Mahalonobis, 79 Markov order-one, 165 MasPar series, 2 Maximum-likelihood estimate, 203 Maximum latency, 220 Maximum likelihood, 140 Mean-value statistics, 21 Mean latency, 211 Mean of the maximum, 213 Median, 219, 252 Meiko's CM-5, 23 Meiko, 198 Meiko Computing Surface, 39 Mel-frequency, 190 Memcpy, 106 Memory-to-memory copy, 106 Memory debugger, 60 Memory leaks, 60 Message-passing, 4 Message aggregation, 30 Message queue, 111 Message records, 97 Microphone recordings, 202 MIMD, 2 Minsky's law, 28 MIPS R3000, 102 MIT Media Laboratory, 139 MMX, 2 Model-based coders, 197 Model-view-controller, 112 Monitor, 97 Moore's law, 53, 190 Morlet wavelet, 178
MOS circuit, 58 Motion estimation, 35, 133 MPEG, 133 MPEG4, 35 MPI, 99 Multi-spectral analysis, 164 Multi-threaded, 74 Multicast, 20, 97 Multicasts, 266 Multimedia-extension, 2 Multiple Instruction Multiple Data, 2 Multiplicative noise, 165 Multiresolution pyramid, 160 Mutex, 106 Myrinet, 17, 102, 267 N-ary trees, 29 Nagle's algorithm, 106 Nameserver, 266 NCube2, 6 Network Time Protocol, 256 Networks of workstations, 4 Networks of workstations, 265 Neural nets, 165 Neural networks, 55 Noise cancelling, 202 Non-reentrant system calls, 108 Non-uniform memory access, 212 Nondeterministic operator, 99, 107 Nonlinear regression, 220 Normal component, 156 Normal distribution, 218 Normalization, 54 NP-complete, 27 NUMA, 212 Numerical analysis, 70 Numerical differentiation, 147 Nyquist sampling, 206 Object-oriented, 30, 196 Occam, 96 OCR, 38, 54 Optical-flow, 145 Optical-flow equation, 147 Optimization, 77 Optimizing compilers, 78 OR-parallelism, 6 Order statistics, 212 Ordering, 82 Ordering constraint, 230 Orientation, 160 Orthogonal transform, 164 Orthogonal transforms, 90 Outliers, 251 Overhead, 258 P6 family, 264 Pablo, 247 Paging, 149
Par, 70 ParaGraph, 58, 84 Parallel radar system, 68 Parallel slackness, 18 Parallelizing compilers, 70 Paramid, 171, 223, 238 Paramid machine, 52 PARASIT, 59 PARET, 58 Partitioning, 67 Path merging, 191 Pattern collation, 54 Patterns, 112 PB-frames, 133 Pentium, 264 Performance-tuning, 95 Perplexity, 33 PETAL, 5 Petri-nets, 58 Phase-based, 147 Phase-locked loops, 256 Phase component, 154 Phased-array radar, 31 Phosphorescent dot coding, 38 Pipeline architecture, 54 Pipelining, 8 Pisa Parallel Programming Language, 95 Plant growth, 146 Plumbing, 97 Point-to-point, 29 Pollaczek-Khintchine, 22 Pollaczek-Khintchine equation, 228 Polymorphic, 96 Polymorphism, 265 Polynomial distribution, 217 Pop-ups, 89 Population statistics, 219 Porting, 77 Post-processing, 248 PowerPCs, 102 Pre-emptive context switching, 98 Prediction model, 59 Presentation-abstraction-controller, 112 Principal Components Algorithm, 164 Probability density function, 213 Process algebra, 96 Processor farm, 18 Product-code VQ, 180 Profiler, 60 Profilers, 220 Prototype, 62 Prototyping, 265 Pseudo-parallel, 58 Pseudo-quadrature, 154 Pthreads, 101, 194 Purify, 77
PVM, 99, 193, 198 Quadratic classifier, 39 Quantify, 73, 148, 152 Quantisation, 137 Quantization, 196 Queueing theory, 213 Race conditions, 62 Radar, 267 Random-number generator, 86 Random variables, 213 Randomized routing, 30 Red-zone protected, 105 Refresh interval, 254 Relational database, 6 Relaxation, 158 Remote Method Invocation, 266 Remote procedure call, 100 Rendezvous, 96, 106 Resources, 97 Reuse, 69 RISC, 71 RISC core, 169 Roll-back, 70 Round-robin, 170 Round-robin context switching, 98 Round-trip, 251 Run-length coding, 133 Run-time executive, 109 Run-time scheduling, 70 Safe self-scheduling, 237 Sample statistics, 219 Satellite images, 164 Scalability, 264 Scalar logical clock, 108 Scatter matrix, 160 Screen flicker, 88 SCSI bus, 157 SCSI link, 100 Second asymptotic distribution, 219 Select, 107 Semantic neural network (SNN), 41 Semaphore, 97 Semi-dynamic process groups, 100 Sequent Symmetry, 6 Sequential overhead, 50 Serializer, 97 Series/parallel graphs, 213 Service-time distribution, 82 Shadowing, 155 SHARC, 264 Shared-memory, 2 Shift invariant, 177 Short-time Fourier transform, 172 Signal-processing, 5 Signal handling, 101 Signatures, 191
SIMD, 2, 169 Simulation, 62 Single Instruction Multiple Data, 2 Sink process, 99 Slant correction, 47 Sobel edge detector [290, 75], 7 Socket API, 106 Software component, 265 Software pipeline, 4, 6 Software pipelining, 78 Software tools, 57 Solaris 2 o.s., 101 Sonar, 267 Space-time diagram, 59 Sparc 20, 135 Sparc 5, 135 SparcStation 2, 139 SparcStation 20, 132 Sparse graphs, 140 Spatial filtering, 22, 82 Spatiotemporal filtering, 140 Speaker-dependent, 191 Spectrograms, 176 Spectral energy, 161 Spectral estimation, 171 Specularities, 155 Speculative searches, 25 Speech enhancement, 202 Speech recognition, 73 Speedup, 28 Speedup, 50 SPG, 213 SPY, 60 Standard deviation, 214 State-change display, 60 Static-scheduling, 27 Static profiling, 42 Store-and-forward, 21, 27 Stunted projects, 69 Sub-band filters, 173 Sub-pixel accuracy, 153 Sub-word parallelism, 264 Sum-of-squared-differences, 151 Superscalar, 14, 78, 220, 264 Symantec Café, 72 Symbian, 263 Symmetric multiprocessor, 195 Symmetric multiprocessors, 4, 265 Symmetrical distribution, 214 Symmetrical distributions, 238 Synchronization, 253 Synchronous applications, 5 Syntax rules, 54 Synthetic Aperture Radar, 17 Synthetic aperture radar, 165 Systolic, 5, 169
Systolic arrays, 2 T800 transputer, 39 Tachyons, 248 Tag, 71 Tagged array, 147 Taylor expansion, 232 Telenor, 132, 139 Template, 60 Templates, 95 Temporal multiplexing, 47, 197 Temporal parallelism, 8, 168 Temporal smoothing, 157 Texas Instruments TMS320C80, 23 Throughput, 42, 50 Timestamping, 248 TMS320C30, 196 TMS320C40, 202, 264 Topology, 1 Topology, 29 Trace file, 258 Trace recorder, 110 Transducer, 202 Transim, 68 Transparency, 155 Transputer, 238 Trapezoidal self-scheduling, 237 Traversal latency, 82 Tree-structured VQ, 180 Tri-phone, 190 Tri-state models, 193 Triangular distribution, 238 Trie dictionary, 54 Trie search, 41 Triphone model, 33 Two-level computer, 267 Typewritten addresses, 55 Unconstrained least-squares, 202 Uni-directional ring, 22 Uni-ring, 168 Uniform distribution, 217 Unimodal distributions, 213 Uniprocessor, 30 Unitary matrix, 167 Universal Time Coordinated, 250 Unix™, 84 VAP, 5 Variable-length encoder, 36 Variance, 61 Vector quantization, 179 Vectored messages, 107 Vectorization, 78 Venture capital, 67 Very-large instruction word, 2, 264 Video encoder, 72 Videotelephony, 132 Virtual channel system, 101, 212
Virtual memory, 72 Virtual motion, 155 Visual surveillance, 139 Viterbi search, 191 VLSI, 160 VLSI implementation, 169 VLSI implementations, 180 Von Neumann, 2 VxWorks, 71, 101, 110 Waiting-time distribution, 227 Waiting time distributions, 81 WAP, 267 Waterfall, 68 Wavefront processors, 2 Wavelet, 171
Weather images, 145 White noise process, 203 Wide-sense stationary, 166 Window traps, 149 Windows 95/NT, 102 Windows NT, 84 Word case classification, 50 Word extraction, 47 Wormhole routing, 29 X-window, 84 Yosemite Valley, 157 Yourdon dataflow method, 58 Zero padding, 179 Zeta function, 215 Zipcode, 55 Zooming, 60