Lecture Notes in Computer Science Edited by G. Goos, J. Hartmanis, and J. van Leeuwen
2958
Springer Berlin Heidelberg New York Hong Kong London Milan Paris Tokyo
Lawrence Rauchwerger (Ed.)
Languages and Compilers for Parallel Computing
16th International Workshop, LCPC 2003
College Station, TX, USA, October 2-4, 2003
Revised Papers
Springer
eBook ISBN: 3-540-24644-4
Print ISBN: 3-540-21199-3
©2005 Springer Science + Business Media, Inc. Print ©2004 Springer-Verlag Berlin Heidelberg. All rights reserved. No part of this eBook may be reproduced or transmitted in any form or by any means, electronic, mechanical, recording, or otherwise, without written consent from the Publisher. Created in the United States of America.
Visit Springer's eBookstore at: http://ebooks.springerlink.com
and the Springer Global Website Online at: http://www.springeronline.com
Preface
The 16th Workshop on Languages and Compilers for Parallel Computing was held in October 2003 at Texas A&M University in College Station, Texas. It was organized by the Parasol Lab and the Department of Computer Science at Texas A&M and brought together almost 100 researchers from academia and from corporate and government research institutions spanning three continents.

The program of 35 papers was selected from 48 submissions. Each paper was reviewed by at least two program committee members and, in many cases, by additional reviewers. Prior to the workshop, revised versions of accepted papers were informally published on the workshop's Web site and on a CD that was distributed at the meeting.

This year, the workshop was organized into sessions of papers on related topics, and each session consisted of an initial segment of 20-minute presentations followed by an informal 20-minute panel and discussion between the attendees and all the session's presenters. This new format both generated many interesting and lively discussions and reduced the overall time needed per paper. Based on feedback from the workshop, the papers were revised and submitted for inclusion in the formal proceedings published in this volume. The informal proceedings and presentations will remain available at the workshop Web site: parasol.tamu.edu/lcpc03.

This year's experience was enhanced by the pleasant environment offered by the Texas A&M campus. Different venues were selected for each day and meals were served at various campus locales, ranging from a fajitas lunch in the Kyle Field Press Box to a Texas barbeque dinner on the alumni center lawn. The banquet was held at Messina Hof, a local winery, and was preceded by a widely attended tour and wine tasting session.

The success of LCPC 2003 was due to many people. We would like to thank the Program Committee members for their timely and thorough reviews and the LCPC Steering Committee (especially David Padua) for providing invaluable advice and continuity for LCPC. The Parasol staff (especially Kay Jones) did an outstanding job with the local arrangements and workshop registration, and the Parasol students (especially Silvius Rus, Tim Smith, and Nathan Thomas) provided excellent technical services (wireless internet, presentation support, electronic submission, Web site, proceedings) and local transportation, and just generally made everyone feel at home. Last, but certainly not least, we are happy to thank Microsoft Research and Steve Waters from Microsoft University Relations for sponsoring the banquet and Dr. Frederica Darema's program at the National Science Foundation for providing a substantial travel grant for LCPC attendees.
December 2003
Lawrence Rauchwerger
Organization

The 16th Workshop on Languages and Compilers for Parallel Computing was hosted by the Parasol Laboratory and the Department of Computer Science at Texas A&M University in October 2003 and was sponsored by the National Science Foundation and Microsoft Research.
Steering Committee
Utpal Banerjee (Intel Corporation)
David Gelernter (Yale University)
Alex Nicolau (University of California at Irvine)
David Padua (University of Illinois at Urbana-Champaign)

General and Program Chair
Lawrence Rauchwerger (Texas A&M University)

Local Arrangements Chair
Nancy Amato (Texas A&M University)

Program Committee
Nancy Amato (Texas A&M University)
Hank Dietz (University of Kentucky)
Rudolf Eigenmann (Purdue University)
Zhiyuan Li (Purdue University)
Sam Midkiff (Purdue University)
Bill Pugh (University of Maryland)
Lawrence Rauchwerger (Texas A&M University)
Bjarne Stroustrup (Texas A&M University)
Chau-Wen Tseng (University of Maryland)
Table of Contents
Search Space Properties for Mapping Coarse-Grain Pipelined FPGA Applications (Heidi Ziegler, Mary Hall, and Byoungro So) 1
Adapting Convergent Scheduling Using Machine-Learning (Diego Puppin, Mark Stephenson, Saman Amarasinghe, Martin Martin, and Una-May O'Reilly) 17
TFP: Time-Sensitive, Flow-Specific Profiling at Runtime (Sagnik Nandy, Xiaofeng Gao, and Jeanne Ferrante) 32
A Hierarchical Model of Reference Affinity (Yutao Zhong, Xipeng Shen, and Chen Ding) 48
Cache Optimization for Coarse Grain Task Parallel Processing Using Inter-Array Padding (Kazuhisa Ishizaka, Motoki Obata, and Hironori Kasahara) 64
Compiler-Assisted Cache Replacement: Problem Formulation and Performance Evaluation (Hongbo Yang, R. Govindarajan, Guang R. Gao, and Ziang Hu) 77
Memory-Constrained Data Locality Optimization for Tensor Contractions (Alina Bibireata, Sandhya Krishnan, Gerald Baumgartner, Daniel Cociorva, Chi-Chung Lam, P. Sadayappan, J. Ramanujam, David E. Bernholdt, and Venkatesh Choppella) 93
Compositional Development of Parallel Programs (Nasim Mahmood, Guosheng Deng, and James C. Browne) 109
Supporting High-Level Abstractions through XML Technology (Xiaogang Li and Gagan Agrawal) 127
Applications of HPJava (Bryan Carpenter, Geoffrey Fox, Han-Ku Lee, and Sang Boem Lim) 147
Programming for Locality and Parallelism with Hierarchically Tiled Arrays (Gheorghe Almási, Luiz De Rose, Basilio B. Fraguela, José Moreira, and David Padua) 162
Co-array Fortran Performance and Potential: An NPB Experimental Study (Cristian Coarfa, Yuri Dotsenko, Jason Eckhardt, and John Mellor-Crummey) 177
Evaluating the Impact of Programming Language Features on the Performance of Parallel Applications on Cluster Architectures (Konstantin Berlin, Jun Huan, Mary Jacob, Garima Kochhar, Jan Prins, Bill Pugh, P. Sadayappan, Jaime Spacco, and Chau-Wen Tseng) 194
Putting Polyhedral Loop Transformations to Work (Cédric Bastoul, Albert Cohen, Sylvain Girbal, Saurabh Sharma, and Olivier Temam) 209
Index-Association Based Dependence Analysis and its Application in Automatic Parallelization (Yonghong Song and Xiangyun Kong) 226
Improving the Performance of Morton Layout by Array Alignment and Loop Unrolling (Reducing the Price of Naivety) (Jeyarajan Thiyagalingam, Olav Beckmann, and Paul H. J. Kelly) 241
Spatial Views: Space-Aware Programming for Networks of Embedded Systems (Yang Ni, Ulrich Kremer, and Liviu Iftode) 258
Operation Reuse on Handheld Devices (Extended Abstract) (Yonghua Ding and Zhiyuan Li) 273
Memory Redundancy Elimination to Improve Application Energy Efficiency (Keith D. Cooper and Li Xu) 288
Adaptive MPI (Chao Huang, Orion Lawlor, and L. V. Kalé) 306
MPJava: High-Performance Message Passing in Java Using java.nio (William Pugh and Jaime Spacco) 323
Polynomial-Time Algorithms for Enforcing Sequential Consistency in SPMD Programs with Arrays (Wei-Yu Chen, Arvind Krishnamurthy, and Katherine Yelick) 340
A System for Automating Application-Level Checkpointing of MPI Programs (Greg Bronevetsky, Daniel Marques, Keshav Pingali, and Paul Stodghill) 357
The Power of Belady's Algorithm in Register Allocation for Long Basic Blocks (Jia Guo, María Jesús Garzarán, and David Padua) 374
Load Elimination in the Presence of Side Effects, Concurrency and Precise Exceptions (Christoph von Praun, Florian Schneider, and Thomas R. Gross) 390
To Inline or Not to Inline? Enhanced Inlining Decisions (Peng Zhao and José Nelson Amaral) 405
A Preliminary Study on the Vectorization of Multimedia Applications for Multimedia Extensions (Gang Ren, Peng Wu, and David Padua) 420
A Data Cache with Dynamic Mapping (Paolo D'Alberto, Alexandru Nicolau, and Alexander Veidenbaum) 436
Compiler-Based Code Partitioning for Intelligent Embedded Disk Processing (Guilin Chen, Guangyu Chen, M. Kandemir, and A. Nadgir) 451
Much Ado about Almost Nothing: Compilation for Nanocontrollers (Henry G. Dietz, Shashi D. Arcot, and Sujana Gorantla) 466
Increasing the Accuracy of Shape and Safety Analysis of Pointer-Based Codes (Pedro C. Diniz) 481
Slice-Hoisting for Array-Size Inference in MATLAB (Arun Chauhan and Ken Kennedy) 495
Efficient Execution of Multi-query Data Analysis Batches Using Compiler Optimization Strategies (Henrique Andrade, Suresh Aryangat, Tahsin Kurc, Joel Saltz, and Alan Sussman) 509
Semantic-Driven Parallelization of Loops Operating on User-Defined Containers (Dan Quinlan, Markus Schordan, Qing Yi, and Bronis R. de Supinski) 524
Cetus – An Extensible Compiler Infrastructure for Source-to-Source Transformation (Sang-Ik Lee, Troy A. Johnson, and Rudolf Eigenmann) 539
Author Index 555
Search Space Properties for Mapping Coarse-Grain Pipelined FPGA Applications*
Heidi Ziegler, Mary Hall, and Byoungro So
University of Southern California / Information Sciences Institute, 4676 Admiralty Way, Suite 1001, Marina del Rey, California 90292
{ziegler, mhall, bso}@isi.edu
* This work is funded by the National Science Foundation (NSF) under Grant CCR-0209228, the Defense Advanced Research Project Agency under contract number F30603-98-2-0113, and the Intel Corporation.
Abstract. This paper describes an automated approach to hardware design space exploration, through a collaboration between parallelizing compiler technology and high-level synthesis tools. In previous work, we described a compiler algorithm that optimizes individual loop nests, expressed in C, to derive an efficient FPGA implementation. In this paper, we describe a global optimization strategy that maps multiple loop nests to a coarse-grain pipelined FPGA implementation. The global optimization algorithm automatically transforms the computation to incorporate explicit communication and data reorganization between pipeline stages, and uses metrics to guide design space exploration to consider the impact of communication and to achieve balance between producer and consumer data rates across pipeline stages. We illustrate the components of the algorithm with a case study, a machine vision kernel.
1 Introduction
The extreme flexibility of Field Programmable Gate Arrays (FPGAs), coupled with the widespread acceptance of hardware description languages such as VHDL or Verilog, has made FPGAs the medium of choice for fast hardware prototyping and a popular vehicle for the realization of custom computing machines that target multi-media applications. Unfortunately, developing programs that execute on FPGAs is extremely cumbersome, demanding that software developers also assume the role of hardware designers. In this paper, we describe a new strategy for automatically mapping from high-level algorithm specifications, written in C, to efficient coarse-grain pipelined FPGA designs. In previous work, we presented an overview of DEFACTO, the system upon which this work is based, which combines parallelizing compiler technology in the Stanford SUIF compiler with hardware synthesis tools [12]. In [21] we presented an algorithm for mapping a single loop nest to an FPGA and a case study [28] describing the communication and partitioning analysis *
This work is funded by the National Science Foundation (NSF) under Grant CCR0209228, the Defense Advanced Research Project Agency under contract number F30603-98-2-0113, and the Intel Corporation.
L. Rauchwerger (Ed.): LCPC 2003, LNCS 2958, pp. 1–16, 2004. © Springer-Verlag Berlin Heidelberg 2004
2
Heidi Ziegler et al.
necessary for mapping a multi-loop program to multiple FPGAs. In this paper, we combine the optimizations applied to individual loop nests with analyses and optimizations necessary to derive a globally optimized mapping for multiple loop nests. This paper focuses on the mapping to a single FPGA, incorporating more formally ideas from [28] such as the use of matching producer and consumer rates to prune the search space. As the logic, communication and storage are all configurable, there are many degrees of freedom in selecting the most appropriate implementation of a computation, which is also constrained by chip area. Further, due to the complexity of the hardware synthesis process, the performance and area of a particular design cannot be modelled accurately in a compiler. For this reason, the optimization algorithm involves an iterative cycle where the compiler generates a high-level specification, synthesis tools produce a partially synthesized result, and estimates from this result are used to either select the current design or guide generation of an alternative design. This process, which is commonly referred to as design space exploration, evaluates what is potentially an exponentially large search space of design alternatives. As in [21], the focus of this paper is a characterization of the properties of the search space such that exploration considers only a small fraction of the overall design space. To develop an efficient design space exploration algorithm for a pipelined application, this paper makes several contributions: Describes the integration of previously published communication and pipelining analyses [27] with the single loop nest design space exploration algorithm [21]. Defines and illustrates important properties of the design space for the global optimization problem of deriving a pipelined mapping for multiple loop nests. Exploits these properties to derive an efficient global optimization algorithm for coarse-grained pipelined FPGA designs. Presents the results of a case study of a machine vision kernel that demonstrate the impact of on-chip communication on improving the performance of FPGA designs. The remainder of the paper is organized as follows. In the next section we present some background on FPGAs and behavioral synthesis. In section 3, we provide an overview of the previously published communication analysis. In section 4, we describe the optimization goals of our design space exploration. In section 5 we discuss code transformations applied by our algorithm. We present the search space properties and a design space exploration algorithm in section 6. We map a sample application, a machine vision kernel in section 7. Related work is surveyed in section 8 and we conclude in section 9.
2 Background
We now describe the FPGA features of which we take advantage, and we compare hardware synthesis with the optimizations performed in parallelizing compilers. Then we outline our target application domain.
Fig. 1. MVIS Kernel with Scalar Replacement (S2) and Unroll and Jam (S1)
2.1 Field Programmable Gate Arrays and Behavioral Synthesis
FPGAs are a popular vehicle for rapid prototyping. Conceptually, FPGAs are sets of reprogrammable logic gates. Practically, for example, the Xilinx Spartan-3 family of devices consists of 33,280 device slices [26]; two slices form a configurable logic block. These blocks are interconnected in a 2-dimensional mesh. As with traditional architectures, bandwidth to external memory is a key performance bottleneck in FPGAs, since it is possible to compute orders of magnitude more data in a cycle than can be fetched from or stored to memory. However, unlike traditional architectures, FPGAs allow the flexibility to devote internal configurable resources either to storage or to computation.
Configuring an FPGA involves synthesizing the functionality of the slices and chip interconnect. Using hardware description languages such as VHDL or Verilog, designers specify desired functionality at a high level of abstraction known as a behavioral specification, as opposed to a low level or structural specification. The process of taking a behavioral specification and generating a low level hardware specification is called behavioral synthesis. While low level optimizations such as binding, allocation and scheduling are performed during synthesis, only a few high level, local optimizations, such as loop unrolling, may be performed when directed by the programmer. Subsequent synthesis phases produce a device configuration file.
2.2 Target Application Domain
Due to their customizability, FPGAs are commonly used for applications that have significant amounts of fine-grain parallelism and possibly can benefit from non-standard numeric formats. Specifically, multimedia applications, including image and signal processing on 8-bit and 16-bit data, respectively, are applications that map well to FPGAs. Fortunately, this domain of applications maps well to the capabilities of current parallelizing compiler analyses, which are most effective in the affine domain, where array subscript expressions are linear functions of the loop index variables and constants [25]. In this paper, we restrict input programs to loop nest computations on array and scalar variables (no pointers), where all subscript expressions are affine with a fixed stride. The loop bounds must be constant.¹ We support loops with control flow, but to simplify control and scheduling, the generated code always performs conditional memory accesses.

We illustrate the concepts discussed in this paper using a synthetic benchmark, a machine vision kernel, depicted in Figure 1. For clarity, we have omitted some initialization and termination code as well as some of the numerical complexity of the algorithm. The code is structured as three loop nests nested inside another control loop (not shown in the figure) that processes a sequence of image frames. The first loop nest extracts image features using the Prewitt edge detector. The second loop nest determines where the peaks of the identified features reside. The last loop nest computes a sum square-difference between two consecutive images. Using the data gathered for each image, another algorithm would estimate the position and velocity of the vehicle.

¹ Non-constant bounds could potentially be supported by the algorithm, but the generated code and resulting FPGA designs would be much more complex. For example, behavioral synthesis would transform a for loop with a non-constant bound to a while loop in the hardware implementation.
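Figure 1 itself is not reproduced in this text. To make the three-stage structure concrete, the C fragment below is our own schematic approximation of the kernel just described; the array names, frame size N, threshold, and the way stage S3 consumes the peak data are placeholders and do not come from the paper.

#define N      64     /* placeholder frame size */
#define THRESH 1000   /* placeholder peak threshold */

/* Stage S1: feature extraction with a Prewitt-style edge detector. */
void stage1(int img[N][N], int edge[N][N]) {
    for (int i = 1; i < N - 1; i++)
        for (int j = 1; j < N - 1; j++) {
            int gx = img[i-1][j+1] + img[i][j+1] + img[i+1][j+1]
                   - img[i-1][j-1] - img[i][j-1] - img[i+1][j-1];
            int gy = img[i+1][j-1] + img[i+1][j] + img[i+1][j+1]
                   - img[i-1][j-1] - img[i-1][j] - img[i-1][j+1];
            edge[i][j] = gx * gx + gy * gy;   /* squared gradient magnitude */
        }
}

/* Stage S2: mark where the peaks of the identified features reside. */
void stage2(int edge[N][N], int peak[N][N]) {
    for (int i = 1; i < N - 1; i++)
        for (int j = 1; j < N - 1; j++)
            peak[i][j] = (edge[i][j] > edge[i][j-1] &&
                          edge[i][j] > edge[i][j+1] &&
                          edge[i][j] > THRESH);
}

/* Stage S3: sum of squared differences between consecutive frames, here
 * accumulated only at detected peaks (an assumption about how the data
 * produced by S2 is consumed). */
long stage3(int cur[N][N], int prev[N][N], int peak[N][N]) {
    long ssd = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            if (peak[i][j]) {
                int d = cur[i][j] - prev[i][j];
                ssd += d * d;
            }
    return ssd;
}

All subscripts are affine with fixed stride and all bounds are constant, so the fragment stays inside the restricted input domain described above.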
3 Communication and Pipelining Analyses
A key advantage of parallelizing compiler technology over behavioral synthesis is the ability to perform data dependence analysis on array variables. Analyzing communication requirements involves characterizing the relationship between data producers and consumers. This characterization can be thought of as a data-flow analysis problem. Our compiler uses a specific array data-flow analysis, reaching definitions analysis [2], to characterize the relationship between array accesses in different pipeline stages [15]. This analysis is used for the following purposes:
- Mapping each loop nest or straight-line code segment to a pipeline stage.
- Determining which data must be communicated.
- Determining the possible granularities at which data may be communicated.
- Selecting the best granularity from this set.
- Determining the corresponding communication placement points within the program.

We combine reaching definitions information and array data-flow analysis for data parallelism [3] with task parallelism and pipelining information, and capture it in an analysis abstraction called a Reaching Definition Data Access Descriptor (RDAD). RDADs are a fundamental extension of Data Access Descriptors (DADs) [7], which were originally proposed to detect the presence of data dependences either for data parallelism or task parallelism. We have extended DADs to capture reaching definitions information as well as to summarize information about the read and write accesses for array variables in the high-level algorithm description, capturing sufficient information to automatically generate communication when dependences exist. Such RDAD sets are derived hierarchically by analysis at different program points, i.e., at the statement, basic block, loop and procedure level. Since we map each nested loop or intervening statements to a pipeline stage, we also associate RDADs with pipeline stages.

Definition 1. A Reaching Definition Data Access Descriptor, RDAD(A), defined as a set of 5-tuples, describes the data accessed in the m-dimensional array A at a program point s, where s is either a basic block, a loop or a pipeline stage. The five components of each tuple are: (1) an array section describing the accessed elements of array A, represented by a set of integer linear inequalities; (2) the traversal order, a vector with array dimensions as elements, ordered from the slowest to the fastest accessed dimension, in which a dimension traversed in reverse order is annotated as such and an entry may also be a set of dimensions traversed at the same rate; (3) a vector of length m containing the dominant induction variable for each dimension; (4) the set of definition or use points for which the tuple captures the access information; and (5) the set of reaching definitions. We refer to the tuples corresponding to reads of array A at program point s as the read set and to the tuples corresponding to writes of A as the write set. Since writes do not have associated reaching definitions, the reaching-definitions component is empty for all write tuples.

After calculating the set of RDADs for a program, we use the reaching definitions information to determine between which pipeline stages communication must occur. To generate communication between pipeline stages, we consider each pair of write and read RDAD tuples where an array definition point in the sending pipeline stage is among the reaching definitions in the receiving pipeline stage.
Fig. 2. MVIS Kernel Communication Analysis
The communication requirements, i.e., placement and data, are related to the granularity of communication. We calculate a set of valid granularities, based on the comparison of traversal order information from the communicating pipeline stages, and then evaluate the execution time for each granularity in the set to find the best choice. We define another abstraction, the Communication Edge Descriptor (CED), to describe the communication requirements on each edge connecting two pipeline stages.

Definition 2. A Communication Edge Descriptor, CED(A), defined as a set of 3-tuples, describes the communication of array A that must occur between two pipeline stages. The three components of each tuple are: (1) the array section, represented by a set of integer linear inequalities, that is transmitted per communication instance; and (2, 3) the communication placement points in the send and receive pipeline stages, respectively.

Figure 2 shows the calculated RDADs for pipeline stages S1 and S2 for array peak. The RDAD reaching definitions for array peak from pipeline stage S1 to S2 imply that communication must occur between these two stages. From the RDAD traversal order tuples, we see that both arrays are accessed in the same order in each stage, and we may choose from among all possible granularities, e.g., whole array, row, and element. We calculate a CED for each granularity, capturing the data to be communicated per instance and the communication placement. We choose the best granularity, based on total program execution time, and apply code transformations to reflect the results of the analysis. The details of the analysis are found in [27]. Figure 3 shows the set of CEDs representing communication between stages S1 and S2.
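As a concrete, if simplified, rendering of these two abstractions, the C structures below spell out the tuple components of Definitions 1 and 2. The field names, encodings, and fixed bounds are our own choices for illustration, not the compiler's actual representation.

#define MAX_DIMS  4
#define MAX_INEQS 16

/* One linear inequality a0*i0 + ... + a(m-1)*i(m-1) <= b over array indices. */
typedef struct { int coeff[MAX_DIMS]; int bound; } Ineq;

/* Array section: a conjunction of integer linear inequalities. */
typedef struct { int n_ineqs; Ineq ineqs[MAX_INEQS]; } Section;

/* One RDAD tuple (Definition 1). */
typedef struct {
    Section section;              /* accessed elements of the array              */
    int traversal[MAX_DIMS];      /* dims ordered slowest to fastest; a negative */
                                  /* entry marks reverse traversal (our encoding) */
    int induction_var[MAX_DIMS];  /* dominant induction variable per dimension   */
    int access_points[MAX_INEQS]; /* definition or use points covered            */
    int n_access_points;
    int reaching_defs[MAX_INEQS]; /* empty for write tuples                      */
    int n_reaching_defs;
} RDADTuple;

/* One CED tuple (Definition 2): communication on one pipeline edge. */
typedef struct {
    Section section;              /* data transmitted per communication instance */
    int send_point;               /* placement point in the sending stage        */
    int recv_point;               /* placement point in the receiving stage      */
} CEDTuple;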
4 Optimization Strategy
In this section, we set forth our strategy for solving the global optimization problem. We briefly describe the criteria, behavioral synthesis estimates, and metrics used for local optimization, as published in [21, 20] and then describe how we build upon these to find a global solution. A high-level design flow is shown in Figure 4. The shaded boxes represent a collection of transformations and analyses, discussed in the next section, that may be applied to the program.
Fig. 3. MVIS Kernel Communication Analysis

Fig. 4. High Level Optimization Algorithm
The design space exploration algorithm involves selecting parameters for a set of transformations for the loop nests in a program. By choosing specific unroll factors and communication granularities for each loop nest or pair of loop nests, we partition the chip capacity and ultimately the memory bandwidth among the pipeline stages. The generated VHDL is input into the behavioral synthesis compiler to derive performance and area estimates for each loop nest. From this information, we use balance and efficiency [21], along with our two optimization criteria, to tune the transformation parameters.
The two optimization criteria for mapping a single loop nest,
1. a design's execution time should be minimized, and
2. a design's space usage, for a given performance, should be minimized,
are still valid for mapping a pipelined computation to an FPGA, but the way in which we calculate the input and evaluate these criteria has changed. The area of a design, related to criterion 2, is a summation of the individual behavioral synthesis estimates of the FPGA area used for the data path, control and communication for each pipeline stage in the design. The execution time of a design, related to criterion 1, is a summation of the behavioral synthesis estimates, for each pipeline stage, of the number of cycles it takes to run to completion, including the time used to communicate data and excluding time saved by the overlap of communication and computation.
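Written as formulas, with symbols of our own choosing (the paper states these sums only in prose), the two quantities for a candidate design D with pipeline stages s_1, ..., s_n are:

\mathrm{Area}(D)=\sum_{i=1}^{n}\Bigl(A_{\mathrm{datapath}}(s_i)+A_{\mathrm{control}}(s_i)+A_{\mathrm{comm}}(s_i)\Bigr),
\qquad
\mathrm{Time}(D)=\sum_{i=1}^{n}\Bigl(T_{\mathrm{compute}}(s_i)+T_{\mathrm{comm}}(s_i)-T_{\mathrm{overlap}}(s_i)\Bigr).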
5 Transformations
We define a set of transformations, widely used in conventional computing, that permit us to adjust computational and memory parallelism in FPGA-based systems through a collaboration between parallelizing compiler technology and high-level synthesis. To meet the optimization criteria set forth in the previous section, we have reduced the optimization process to a tractable problem, that of selecting a set of parameters, for local transformations applied to a single loop nest or global transformations applied to the program as a whole, that lead to a high-performance, balanced, and efficient design.
5.1 Transformations for Local Optimization
Unroll and Jam Due to the lack of dependence analysis in synthesis tools, memory accesses and computations that are independent across multiple iterations must be executed in serial. Unroll and jam [9], where one or more loops in the iteration space are unrolled and the inner loop bodies are fused together, is used to expose fine-grain operator and memory parallelism by replicating the logical operations and their corresponding operands in the loop body. Following unroll-and-jam, the parallelism exploited by high-level synthesis is significantly improved.

Scalar Replacement This transformation replaces array references by accesses to temporary scalar variables, so that high-level synthesis will exploit reuse in registers. Our approach to scalar replacement closely matches previous work [9]. There are, however, two differences: (1) we also eliminate unnecessary memory writes on output dependences; and, (2) we exploit reuse across all loops in the nest, not just the innermost loop. We peel iterations of loops as necessary to initialize registers on array boundaries. Details can be found in [12].
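As a concrete illustration (a made-up fragment, not code from the MVIS kernel), the loop nest below is shown before and after unrolling the outer loop by a factor of two, jamming the copies, and scalar-replacing the reused b[j] reference:

#define N 64
#define M 64

/* Before: every iteration issues its own load of b[j]. */
void before(int c[N][M], int a[N][M], int b[M]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            c[i][j] = a[i][j] * b[j];
}

/* After unroll-and-jam by 2 plus scalar replacement: the two multiplies are
 * independent and share a single load of b[j], exposing operator and memory
 * parallelism to high-level synthesis. N is even, so no cleanup loop is needed. */
void after(int c[N][M], int a[N][M], int b[M]) {
    for (int i = 0; i < N; i += 2)
        for (int j = 0; j < M; j++) {
            int bj = b[j];                  /* reused value held in a register */
            c[i][j]     = a[i][j]     * bj;
            c[i + 1][j] = a[i + 1][j] * bj;
        }
}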
Custom Data Layout This code transformation lays out the data in the FPGA's external memories so as to maximize memory parallelism. The compiler performs a 1-to-1 mapping between array locations and virtual memories in order to customize accesses to each array according to their access patterns. The result of this mapping is a distribution of each array across the virtual memories such that opportunities for parallel memory accesses are exposed to high-level synthesis. Then the compiler binds virtual memories to physical memories, taking into consideration accesses by other arrays in the loop nest to avoid scheduling conflicts. Details can be found in [22].
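As a simplified illustration of the kind of layout this transformation produces (our own example; the algorithm in [22] also accounts for per-array access patterns and the final binding of virtual to physical memories), a one-dimensional array can be distributed cyclically across the board's external memories so that an unrolled loop touches a different bank in each replicated access:

#define N_MEMS 4   /* number of external memories on the board (placeholder) */

/* Cyclic distribution: element i of a virtual array lives in bank (i % N_MEMS)
 * at offset (i / N_MEMS). Unrolling a loop by N_MEMS then yields one access per
 * bank per unrolled iteration, which high-level synthesis can issue in parallel. */
static inline int bank_of(int i)   { return i % N_MEMS; }
static inline int offset_of(int i) { return i / N_MEMS; }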
5.2 Transformations for Global Optimization
Communication Granularity and Placement With multiple, pipelined tasks (i.e., loop nests), some of the input/output data for a task may be directly communicated on chip, rather than requiring reading and/or writing from/to memory. Thus, some of the memory accesses assumed in the optimization of a single loop nest may be eliminated as a result of communication analysis. The previously-described communication analysis selects the communication granularity that maximizes the overlap of communication and computation, while amortizing communication costs over the amount of data communicated. This granularity may not be ideal when other issues, such as on-chip space constraints, are taken into account. For example, if the space required for on-chip buffering is not available, we might need to choose a finer granularity of communication. In the worst case, we may move the communication off-chip altogether.

Data Reorganization On-Chip As part of the single loop solution, we calculated the best custom data layout for each accessed array variable, allowing for a pipeline stage to achieve its best performance. When combining stages that access the same data either via memory or on-chip communication on the same FPGA, the access patterns for each stage may be different and thus optimal data layouts may be incompatible. One strategy is to reorganize the data between loop nests to retain the locally optimal layouts. In conventional systems, data reorganization can be very expensive in both CPU cycles and cache or memory usage, and as a result, usually carries too much overhead to be profitable. In FPGAs, we recognize that the cost of data reorganization is in many cases quite low. For data communicated on-chip between pipeline stages that is already consuming buffer space, the additional cost of data reorganization is negligible in terms of additional storage, and because the reorganization can be performed completely in parallel on an FPGA, the execution time overhead may be hidden by the synchronization between pipeline stages. The implementation of on-chip reorganization involves modifying the control in the finite state machine for each pipeline stage, which is done automatically by behavioral synthesis; the set of registers containing the reorganized array will simply be accessed in a different order. The only true overhead is the increased complexity of routing associated with the reorganization; this in turn would lead to increased space used for routing as well as a potentially slower achieved clock rate.
6 Search Space Properties
The optimization involves selecting unroll factors, due to space and performance considerations, for the loops in the nest of each pipeline stage. Our search is guided by the following observations about the impact of the unroll factor and other optimizations for a single loop in the nest. In order to define the global design space, we discuss the following observations:

Observation 1 As a result of applying communication analysis, the number of memory accesses in a loop is non-increasing as compared to the single loop solution without communication. The goal of communication analysis is to identify data that may be communicated between pipeline stages either using an on- or off-chip method. The data that may now be communicated via on-chip buffers would have been communicated via off-chip memory prior to this analysis.

Observation 2 Starting from the design found by applying the single loop with communication solution, the unroll factors calculated during the global optimization phase will be non-increasing. We start by applying the single loop optimizations along with communication analysis. We assume that this is the best balanced solution in terms of memory bandwidth and chip capacity usage. We also assume that the ratio of performance to area has the best efficiency rating as compared to other designs investigated during the single loop exploration phase. Therefore, we take this result to be the worst case space estimate and the best case performance achievable by this stage in isolation; unrolling further would not be beneficial.

Observation 3 When the producer and consumer data rates for a given communication event are not equal, we may decrease the unroll factor of the faster pipeline stage to the point at which the rates are equal. We assume that reducing the unroll factor does not cause this pipeline stage to become the bottleneck. When comparing two pipeline stages between which communication occurs, if the rates are not matched, the implementation of the faster stage may be using an unnecessarily large amount of the chip capacity while not contributing to the overall performance of the program. This is due to the fact that performance is limited by the slower pipeline stage. We may choose a smaller unroll factor for the faster stage such that the rates match. Since the slower stage is the bottleneck, choosing a smaller unroll factor for the faster stage does not affect the overall performance of the pipeline until the point at which the faster stage becomes the slower stage. Finally, if a pipeline stage is involved in multiple communication events, we must take care to decrease the unroll factor based on the constraints imposed by all events. We do not reduce the unroll factor of a stage to the point that it becomes a bottleneck.
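Observation 3 can be restated with a simple rate model; the notation here is ours, not the paper's. If stage i at unroll factor u_i produces or consumes d_i(u_i) data items per firing and a firing takes T_i(u_i) cycles, then

r_i(u_i)=\frac{d_i(u_i)}{T_i(u_i)},
\qquad
u_{\mathrm{fast}} \leftarrow \min\bigl\{\, u \le u_{\mathrm{fast}} \;:\; r_{\mathrm{fast}}(u) \ge r_{\mathrm{slow}}(u_{\mathrm{slow}}) \,\bigr\}.

Reducing u_fast in this way frees chip capacity without lowering pipeline throughput, since the slower stage remains the bottleneck; a stage involved in several communication events must satisfy the constraint imposed by each of them.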
Fig. 5. MVIS Task Graph

6.1 Optimization Algorithm
At a high-level, the design space exploration algorithm involves selecting parameters for a set of transformations for the loop nests in a program. By choosing specific unroll factors and communication granularities for each loop nest or pair of loop nests, we partition the chip capacity and ultimately the memory bandwidth among the pipeline stages. The generated VHDL is input into the behavioral synthesis compiler to derive performance and area estimates for each loop nest. From this information, we can tune the transformation parameters to obtain the best performance. The algorithm represents a multiple loop nest computation as an acyclic task graph to be mapped onto a pipeline with no feedback. To simplify this discussion, we describe the task graph for a single procedure, although interprocedural task graphs are supported by our implementation. Each loop nest or computation between loop nests is represented as a node in the task graph. Each has a set of associated RDADs. Edges, each described by a CED, represent communication events between tasks. There is one producer and one consumer pipeline stage per edge. The task graph for the MVIS kernel is shown in Figure 5. Associated with each task is the unroll factor for the best hardware implementation, area and performance estimates, and balance and efficiency metrics.
1. We apply the communication and pipelining analyses to (1) define the stages of the pipeline, and thus the nodes of the task graph, and (2) identify data which could be communicated from one stage to another, and thus define the edges of the task graph.
2. In reverse topological order, we visit the nodes in the task graph to identify communication edges where producer and consumer rates do not match.
From Observation 3, if reducing a producer or consumer rate does not cause a task to become a bottleneck in the pipeline, we may modify it.
3. We compute the area of the resulting design, which we currently assume is the sum of the areas of the single loop nest designs, including the communication logic and buffers. If the space utilization exceeds the device capacity, we employ a greedy strategy to reduce the area of the design: we select the largest task in terms of area and reduce its unroll factor.
4. We repeat steps two and three until the design meets the space constraints of the target device.

Our initial algorithm employs a greedy strategy to reduce space constraints, but other heuristics may be considered in future work, such as reducing the space of tasks not on the critical path, or using the balance and efficiency metrics to suggest which tasks will be less impacted by reducing unroll factors.
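The C-style sketch below summarizes steps 2 through 4. It is our own condensation: the helper functions are placeholders, the halving of unroll factors is an assumption, and in the real system the area and rate figures come from behavioral synthesis estimates rather than from any analytical model.

/* One pipeline stage (task graph node) with its current unroll factor. */
typedef struct Task { int id; int unroll; } Task;
/* One communication edge between a producer and a consumer stage. */
typedef struct Edge { Task *producer; Task *consumer; } Edge;

/* Placeholder helpers standing in for behavioral synthesis estimates. */
int   estimate_area(const Task *t);
int   rates_match(const Edge *e);
int   creates_bottleneck(const Task *t);
Task *faster_stage(const Edge *e);
Task *largest_task(Task tasks[], int n);

void explore(Task tasks[], int n_tasks, Edge edges[], int n_edges, int capacity) {
    for (;;) {
        /* Step 2: for each mismatched edge (assumed already in reverse
         * topological order), lower the unroll factor of the faster side as
         * long as this does not turn that stage into a new bottleneck. */
        for (int e = 0; e < n_edges; e++) {
            Task *fast = faster_stage(&edges[e]);
            while (!rates_match(&edges[e]) && !creates_bottleneck(fast) &&
                   fast->unroll > 1)
                fast->unroll /= 2;
        }

        /* Step 3: total area is taken to be the sum of the per-stage areas,
         * including communication logic and buffers. */
        int area = 0;
        for (int i = 0; i < n_tasks; i++)
            area += estimate_area(&tasks[i]);
        if (area <= capacity)
            return;                    /* Step 4: stop once the design fits */

        /* Otherwise greedily shrink the largest stage and try again. */
        Task *big = largest_task(tasks, n_tasks);
        if (big->unroll > 1)
            big->unroll /= 2;
        else
            return;                    /* nothing left to shrink */
    }
}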
7 Experiments
We have implemented the loop unrolling, the communication analysis, scalar replacement, data layout, the single loop design space exploration and the translation from SUIF to behavioral VHDL such that these analyses and transformations are automated. Individual analysis passes are not fully integrated, requiring minimal hand intervention. We examine how the number of memory accesses has changed when comparing the results of the automated local optimization and design space exploration with and without applying the communication analyses. In Table 1 we show the number of memory accesses in each pipeline stage before and after applying communication analysis. The rows entitled Accesses Before and After are the results without and with communication analysis respectively. As a result of the communication analysis, the number of memory accesses greatly declines for all pipeline stages. In particular, for pipeline stage S2, the number of memory accesses goes to zero because all consumed data is communicated on-chip from stage S1 and all produced data is communicated on-chip to stage S3. This should have a large impact on the performance of the pipeline stage. For pipeline stages S1 and S3, the reduction in the number of memory accesses may be sufficient to transform the pipeline stage from a memory bound stage into a compute bound stage. This should also improve performance of each pipeline stage and ultimately the performance of the total program.
From the design space exploration for each single loop, we would choose unroll factors of 4, 4, and 2 for pipeline stages S1, S2, and S3. This is based on both the metrics and estimates, as explained in [28]. We then apply the design space exploration with global optimizations. Since the sum of the areas of the implementations of all three pipeline stages with the previously mentioned unroll factors, 306K Monet space units, is larger than the total area of the chip (150K), we must identify one or more pipeline stages for which to decrease the unroll factors. We apply the second step of our algorithm, which matches producer and consumer rates throughout the pipeline. Since S3 is the bottleneck when comparing the rates between stages S2 and S3, we know that we may reduce the unroll factor of stage S2 to 2 without affecting the pipeline performance. Then, our algorithm will detect a mismatch between stages S1 and S2. Again, we may decrease the unroll factor of stage S1 from 4 to 2 without affecting performance. Then we perform the analyses once again on each pipeline stage, using the new unroll factor of 2 for all pipeline stages. The size of the resulting solution is 103K Monet units. We are now within our space constraint. In summary, by eliminating memory accesses through scalar replacement and communication analysis, and by then matching producer and consumer data rates for each pipeline stage, we were able to achieve a good mapping while eliminating large parts of the search space.
8 Related Work
In this section we discuss related work in the areas of automatic synthesis of hardware circuits from high-level language constructs, array data-flow analysis, pipelining, and design space exploration using high-level loop transformations.

Synthesizing High-Level Constructs Languages such as VHDL and Verilog allow programmers to migrate to configurable architectures without having to learn a radically new programming paradigm. Efforts in the area of new languages include Handel-C [18]. Several researchers have developed tools that map computations to reconfigurable custom computing architectures [24], while others have developed approaches to mapping applications to their own reconfigurable architectures that are not FPGAs, e.g., RaPiD [10] and PipeRench [14]. The two projects most closely related to ours, the Nimble compiler and work by Babb et al. [6], map applications in C to FPGAs, but do not perform design space exploration.

Design Space Exploration In this discussion, we focus only on related work that has attempted to use loop transformations to explore a wide design space. Other work has addressed more general issues such as finding a suitable architecture (either reconfigurable or not) for a particular set of applications (e.g., [1]). Derrien/Rajopadhye [11] describe a tiling strategy for doubly nested loops. They model performance analytically and select a tile size that minimizes the iteration's execution time.
Cameron's estimation approach builds on their own internal data-flow representation using curve fitting techniques [17]. Qasem et al. [19] study the effects of array contraction and loop fusion.

Array Data-Flow Analysis Previous work on array data-flow analysis [7, 23, 3] focused on data dependence analysis, but not at the level of precision required to derive communication requirements for our platform. Parallelizing compiler communication analysis techniques [4, 16] exploited data parallelism.

Pipelining In [5] Arnold created a software environment to program a set of FPGAs connected to a workstation; Callahan and Wawrzynek [8] used a VLIW-like compilation scheme for the GARP project; both works exploit intra-loop pipelined execution techniques. Goldstein et al. [14] describe a custom device that implements an execution-time reconfigurable fabric. Weinhardt and Luk [24] describe a set of program transformations to map the pipelined execution of loops with loop-carried dependences onto custom machines. Du et al. [13] provide compiler support for exploiting coarse-grained pipelined parallelism in distributed systems.

Discussion The research presented in this paper differs from the efforts mentioned above in several respects. First, the focus of this research is on developing an algorithm that can explore a wide number of design points, rather than selecting a single implementation. Second, the proposed algorithm takes as input a sequential application description and does not require the programmer to control the compiler's transformations. Third, the proposed algorithm uses high-level compiler analysis and estimation techniques to guide the application of the transformations as well as to evaluate the various design points. Our algorithm supports multi-dimensional array variables, absent in previous analyses for the mapping of loop computations to FPGAs. Fourth, instead of focusing on intra-loop pipelining techniques that optimize resource utilization, we focus on increased throughput through task parallelism coupled with pipelining, which we believe is a natural match for data-intensive image processing and streaming applications. Within an FPGA, assuming the parallelism is achieved by the synthesis tool, we have more degrees of freedom by keeping loop bodies separate instead of fusing them. Finally, we use a commercially available behavioral synthesis tool to complement the parallelizing compiler techniques rather than creating an architecture-specific synthesis flow that partially replicates the functionality of existing commercial tools. Behavioral synthesis allows the design space exploration to extract more accurate performance metrics (time and area used) rather than relying on a compiler-derived performance model. Our approach greatly expands the capability of behavioral synthesis tools through more precise program analysis.
9 Conclusion
In this paper, we describe how parallelizing compiler technology can be adapted and integrated with hardware synthesis tools, to automatically derive, from sequential C programs, pipelined implementations for systems with multiple FPGAs and memories. We describe our implementation of these analyses in the DEFACTO system, and demonstrate this approach with a case study. We presented experimental results, derived, in part, automatically by our system. We show that we are able to reduce the size of the search space by reasoning about the maximum unroll factors, number of memory accesses and matching producer and consumer rates. While we employ a greedy search algorithm here, we plan to investigate trade-offs between and effects of adjusting unroll factors for pipeline stages both on and off the critical path. Once our design is within the space constraints of the chip capacity, we will continue to search for the best allocation of memory bandwidth.
References
[1] Santosh Abraham, Bob Rau, Robert Schreiber, Greg Snider, and Michael Schlansker. Efficient design space exploration in PICO. Technical report, HP Labs, 1999.
[2] A. Aho, R. Sethi, and J. Ullman. Compilers: Principles, Techniques, and Tools. Addison-Wesley Publishing, 1988.
[3] S. Amarasinghe. Parallelizing Compiler Techniques Based on Linear Inequalities. PhD thesis, Dept. of Electrical Engineering, Stanford University, Jan 1997.
[4] S. Amarasinghe and M. Lam. Communication optimization and code generation for distributed memory machines. In Proc. ACM Conf. Programming Languages Design and Implementation, pages 126–138, Albuquerque, 1993.
[5] J. Arnold. The Splash 2 software environment. In Proc. IEEE Symp. FPGAs for Custom Computing Machines, pages 88–93, 1993.
[6] J. Babb, M. Rinard, C. Moritz, W. Lee, M. Frank, R. Barua, and S. Amarasinghe. Parallelizing applications into silicon. In Proc. IEEE Symp. FPGAs for Custom Computing Machines, pages 70–81, 1999.
[7] V. Balasundaram and K. Kennedy. A technique for summarizing data access and its use in parallelism enhancing transformations. In Proc. ACM Conf. Programming Languages Design and Implementation, pages 41–53, 1989.
[8] T. Callahan and J. Wawrzynek. Adapting software pipelining for reconfigurable computing. In Proc. Intl. Conf. Compilers, Architectures and Synthesis for Embedded Systems, pages 57–64, Nov 2000.
[9] S. Carr and K. Kennedy. Improving the ratio of memory operations to floating-point operations in loops. ACM Transactions on Programming Languages and Systems, 16(6):400–462, 1994.
[10] D. Cronquist, P. Franklin, S. Berg, and C. Ebeling. Specifying and compiling applications for RaPiD. In Proc. IEEE Symp. FPGAs for Custom Computing Machines, pages 116–125, 1998.
[11] Steven Derrien, Sanjay Rajopadhye, and Susmita Sur-Kolay. Combined instruction and loop parallelism in array synthesis for FPGAs. In Proc. 14th Intl. Symp. System Synthesis, pages 165–170, 2001.
[12] P. Diniz, M. Hall, J. Park, B. So, and H. Ziegler. Bridging the gap between compilation and synthesis in the DEFACTO system. In Proc. 14th Workshop Languages and Compilers for Parallel Computing, LNCS. Springer-Verlag, 2001.
[13] Wei Du, Renato Ferreira, and Gagan Agrawal. Compiler support for exploiting coarse-grained pipelined parallelism. To appear in Proc. Supercomputing, ACM SIGPLAN Notices, Nov. 2003.
[14] S. Goldstein, H. Schmit, M. Moe, M. Budiu, S. Cadambi, R. Taylor, and R. Laufer. PipeRench: a coprocessor for streaming multimedia acceleration. In Proc. 26th Intl. Symp. Comp. Arch., pages 28–39, 1999.
[15] M. Hall, S. Amarasinghe, B. Murphy, S. Liao, and M. Lam. Detecting coarse-grain parallelism using an interprocedural parallelizing compiler. In Proc. Ninth Intl. Conf. Supercomputing, pages 1–26, 1995.
[16] S. Hiranandani, K. Kennedy, and C.-W. Tseng. Preliminary experiences with the Fortran D compiler. In Proc. Seventh Intl. Conf. Supercomputing, Portland, Nov 1993.
[17] W. Najjar, D. Draper, A. Bohm, and R. Beveridge. The Cameron project: high-level programming of image processing applications on reconfigurable computing machines. In Proc. 7th Intl. Conf. Parallel Architectures and Compilation Techniques - Workshop on Reconfigurable Computing, 1998.
[18] I. Page and W. Luk. Compiling OCCAM into FPGAs. In Field Programmable Gate Arrays, pages 271–283. Abingdon EE and CS Books, 1991.
[19] A. Qasem, G. Jin, and J. Mellor-Crummey. Improving performance with integrated program transformations. Manuscript, October 2003.
[20] B. So, P.C. Diniz, and M.W. Hall. Using estimates from behavioral synthesis tools in compiler-directed design space exploration. In Proc. 40th Design Automation Conference, June 2003.
[21] B. So, M. Hall, and P. Diniz. A compiler approach to fast design space exploration in FPGA-based systems. In Proc. ACM Conf. Programming Languages Design and Implementation, pages 165–176, June 2002.
[22] B. So, H. Ziegler, and M. Hall. A compiler approach for custom data layout. In Proc. 14th Workshop Languages and Compilers for Parallel Computing, July 2002.
[23] C.-W. Tseng. Compiler optimizations for eliminating barrier synchronization. In Proc. Fifth Symp. Principles and Practice of Parallel Programming, volume 30(8) of ACM SIGPLAN Notices, pages 144–155, 1995.
[24] M. Weinhardt and W. Luk. Pipelined vectorization for reconfigurable systems. In Proc. IEEE Symp. FPGAs for Custom Computing Machines, pages 52–62, 1999.
[25] M. Wolfe. Optimizing Supercompilers for Supercomputers. Addison, 1996.
[26] Xilinx Inc. Spartan-3 1.2V FPGA family: introduction and ordering information, DS099-1 (v1.1) edition, April 24, 2003.
[27] H. Ziegler, M. Hall, and P. Diniz. Compiler-generated communication for pipelined FPGA applications. In Proc. 40th Design Automation Conference, June 2003.
[28] H. Ziegler, B. So, M. Hall, and P. Diniz. Coarse-grain pipelining on multiple FPGA architectures. In Proc. IEEE Symp. FPGAs for Custom Computing Machines, April 2002.
Adapting Convergent Scheduling Using Machine-Learning
Diego Puppin¹, Mark Stephenson², Saman Amarasinghe², Martin Martin², and Una-May O'Reilly²
¹ Institute for Information Science and Technologies, ISTI - CNR, Pisa, Italy
[email protected]
² Massachusetts Institute of Technology
{mstephen,saman}@cag.lcs.mit.edu, {mcm,unamay}@ai.mit.edu
Abstract. Convergent scheduling is a general framework for instruction scheduling and cluster assignment for parallel, clustered architectures. A convergent scheduler is composed of many independent passes, each of which implements a specific compiler heuristic. Each of the passes shares a common interface, which allows them to be run multiple times, and in any order. Because of this, a convergent scheduler is presented with a vast number of legal pass orderings. In this work, we use machine-learning techniques to automatically search for good orderings. We do so by evolving, through genetic programming, s-expressions that describe a particular pass sequence. Our system has the flexibility to create dynamic sequences where the ordering of the passes is predicated upon characteristics of the program being compiled. In particular, we implemented a few tests on the present state of the code being compiled. We are able to find improved sequences for a range of clustered architectures. These sequences were tested with cross-validation, and generally outperform Desoli's PCC and UAS.
1 Introduction
Instruction scheduling on modern microprocessors is an increasingly difficult problem. In almost all practical instances, it is NP-complete, and it often faces multiple contradictory constraints. For superscalars and VLIWs, the two primary issues are parallelism and register pressure. Traditional scheduling frameworks handle conflicting constraints and heuristics in an ad hoc manner. One approach is to direct all efforts toward the most serious problem. For example, many RISC schedulers focus on finding ILP and ignore register pressure altogether. Another approach is to attempt to address all the problems together. For example, there have been reasonable attempts to perform instruction scheduling and register allocation at the same time [1]. The third, and most common, approach is to address the constraints one at a time in a sequence of passes. This approach, however, introduces pass ordering problems, as decisions made by early passes
are based on partial information and can adversely affect the quality of decisions made by subsequent passes.

Convergent Scheduling [2, 3] alleviates pass ordering problems by spreading scheduling decisions over the entire compilation. Each pass makes soft decisions about instruction placement: it asserts its preference of instruction placement, but does not impose a hard schedule on subsequent passes. All passes in the convergent scheduler share a common interface: the input and output to each one is a collection of spatial and temporal preferences of instructions; a pass operates by modifying these data. As the scheduler applies the passes in succession, the preference distribution will converge to a final schedule that incorporates the preferences of all the constraints and heuristics. Passes can be run multiple times, and in any order. Thus, while mitigating ordering problems due to hard constraints, a convergent scheduler is presented with a limitless number of legal pass orders.

In our previous work [3], we tediously hand-tuned the pass order. This paper builds upon it by using machine learning techniques to automatically find good orderings for a convergent scheduler. Because different parallel architectures have unique scheduling needs, the speedups our system is able to obtain by creating architecture-specific pass orderings are impressive. Equally impressive is the ease with which it finds effective sequences. Using a modestly sized cluster of workstations, our system is able to quickly find good convergent scheduling sequences. In less than two days, it discovers sequences that produce speedups ranging from 12% to 95% over our previous work, and generally outperform UAS [4] and PCC [5].

The remainder of the paper is organized as follows. Section 2 describes Genetic Programming, the machine-learning technique we use to explore the pass-order solution space. We describe our infrastructure and methodology in Section 3. Section 4 quickly describes the set of available heuristics. Section 5 follows with a description of the experimental results. Section 6 discusses related work, and finally, Section 7 concludes. Because of limited space, we refer you to [2, 3] for architecture and implementation details related to convergent scheduling.
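The following C sketch illustrates the shape of the common pass interface described above. It is our own schematic: the real convergent scheduler in [2, 3] maintains richer spatial and temporal preference information than this single weight matrix, and the array bounds are placeholders.

#define MAX_INSTRS   128
#define MAX_CLUSTERS 4
#define MAX_SLOTS    64

/* Preference map: weight[i][c][t] is the scheduler's current confidence that
 * instruction i should be placed on cluster c at time slot t. */
typedef struct {
    int   n_instrs, n_clusters, n_slots;
    float weight[MAX_INSTRS][MAX_CLUSTERS][MAX_SLOTS];
} Preferences;

/* Every pass shares the same signature: it nudges the preferences toward its
 * own heuristic (critical path, load balance, register pressure, ...) without
 * imposing a hard schedule, so passes can run repeatedly and in any order. */
typedef void (*ConvergentPass)(Preferences *prefs);

/* Once the chosen pass sequence has run, each instruction is committed to the
 * cluster/slot with the highest accumulated preference. */
void finalize_schedule(const Preferences *p, int cluster_of[], int slot_of[]) {
    for (int i = 0; i < p->n_instrs; i++) {
        float best = -1.0f;
        for (int c = 0; c < p->n_clusters; c++)
            for (int t = 0; t < p->n_slots; t++)
                if (p->weight[i][c][t] > best) {
                    best = p->weight[i][c][t];
                    cluster_of[i] = c;
                    slot_of[i] = t;
                }
    }
}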
2 Genetic Programming
From one generation to the next, architectures in the same processor family may have extremely different internal organizations. The Intel Pentium™ family of processors is a case in point. Even though the ISA has remained largely the same, the internal organization of the Pentium 4 is drastically different from that of the baseline Pentium. To help designers keep up with market pressure, it is necessary to automate as much of the design process as possible. In our first work with convergent scheduling, we tediously hand-tuned the sequence of passes. While the sequence works well for the processors we explored in our previous work, it does not generally apply to new architectural configurations.
Fig. 1. Flow of genetic programming. Genetic programming (GP) initially creates a population of expressions. Each expression is then assigned a fitness, which is a measure of how well it satisfies the end goal. In our case, fitness is proportional to the execution time of the compiled application(s). Until some user-defined cap on the number of generations is reached, the algorithm probabilistically chooses the best expressions for mating and continues. To guard against stagnation, some expressions undergo mutation
necessarily emphasize different grains of computation, and thus have unique compilation needs. We therefore developed a tool to automatically customize our convergent scheduler to any given architecture. The tool generates a sequence of passes from those described in section 4. This section describes genetic programming (GP), the machine-learning technique that our tool uses. Of the many available learning techniques, we chose to employ genetic programming because its attributes fit the needs of our application. GP [6] is one example of evolutionary algorithm (EA). The thesis behind evolutionary computation is that a computational version of fitness-based selection, reproductive inheritance and blind variation acting upon a population will lead the individuals in subsequent generations to adapt toward better performance in their environment. In the general GP framework, individuals are represented as parse trees (or equivalently, as lisp expressions) [6]. In our case, the parse trees represent a sequence of conditionally executed passes.The result of each subexpression is either a convergent scheduling pass, or a sequence of passes. Our system evaluates an individual in a pre-order traversal of the tree. Table 1 shows the grammar we use to describe pass orders. The
expression is used to extract pertinent information about the status of the schedule, and the shape of the block under analysis. This introspection allows the scheduler to run different passes based on schedule state. The four variables that our system considers are shown in Table 2.
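As an illustration of how such a genome might be evaluated, the sketch below (ours; node names, variables and thresholds are hypothetical stand-ins for the grammar of Table 1 and the variables of Table 2) walks the parse tree in pre-order and runs passes conditionally on schedule state.

    # A sketch of evaluating a genome of conditionally executed passes.
    # SEQ nodes run children in order; IF nodes test a schedule-state variable
    # against a threshold.  The node and variable names are hypothetical.

    def evaluate(node, state, run_pass):
        """Pre-order walk of the genome; leaves name convergent-scheduling passes."""
        kind = node[0]
        if kind == "PASS":                       # e.g. ("PASS", "comm")
            run_pass(node[1], state)
        elif kind == "SEQ":                      # ("SEQ", child, child, ...)
            for child in node[1:]:
                evaluate(child, state, run_pass)
        elif kind == "IF":                       # ("IF", variable, threshold, then, else)
            _, var, threshold, then_branch, else_branch = node
            branch = then_branch if state[var] > threshold else else_branch
            evaluate(branch, state, run_pass)

    # Example: run (dep) and (func), then (comm) only on communication-heavy blocks.
    genome = ("SEQ", ("PASS", "dep"), ("PASS", "func"),
              ("IF", "comm_load", 0.5, ("PASS", "comm"), ("PASS", "noise")))
    trace = []
    evaluate(genome, {"comm_load": 0.8}, lambda name, st: trace.append(name))
    assert trace == ["dep", "func", "comm"]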
Figure 1 shows the general flow of genetic programming. The algorithm starts by creating an initial population of random parse trees. It then compiles and runs each of the benchmarks in our training set for each individual in the population. Each individual is then assigned a fitness based on how fast each of the associated programs in the training set execute. In our case, the fitness is simply the average speedup (compared to the sequence used in previous work) over all the benchmarks in the training set. The fittest individuals are chosen for crossover, the GP analogy of sexual reproduction. Crossover begins by choosing two well-fit individuals. Our system then clones the selected individuals, chooses a random subexpression in each of them, and swaps them. The net result is two new individuals, composed of building blocks from two fit parents. To guard against stagnant populations, GP often uses mutation. Mutations simply replace a randomly chosen subtree with a new random expression. For details on the mutation operators we implemented, see [7, p. 242]. In our implementation, the GP algorithm halts when a user-defined number of iterations has been reached.
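The sketch below (ours) shows crossover and mutation on genomes encoded as nested tuples; the selection details are simplifications rather than the exact operators of [7].

    # A compact sketch of the crossover and mutation operators described above.
    import random

    def subtrees(node, path=()):
        """Yield (path, subtree) pairs for every node of the genome."""
        yield path, node
        if isinstance(node, tuple) and node[0] in ("SEQ", "IF"):
            for idx, child in enumerate(node[1:], start=1):
                if isinstance(child, tuple):
                    yield from subtrees(child, path + (idx,))

    def replace_at(node, path, new):
        if not path:
            return new
        head, rest = path[0], path[1:]
        return node[:head] + (replace_at(node[head], rest, new),) + node[head + 1:]

    def crossover(parent_a, parent_b):
        """Clone both parents and swap one randomly chosen subtree between them."""
        path_a, sub_a = random.choice(list(subtrees(parent_a)))
        path_b, sub_b = random.choice(list(subtrees(parent_b)))
        return replace_at(parent_a, path_a, sub_b), replace_at(parent_b, path_b, sub_a)

    def mutate(node, random_subtree):
        """Replace a randomly chosen subtree with a freshly generated one."""
        path, _ = random.choice(list(subtrees(node)))
        return replace_at(node, path, random_subtree())

    parent_1 = ("SEQ", ("PASS", "dep"), ("PASS", "func"), ("PASS", "comm"))
    parent_2 = ("SEQ", ("PASS", "path"), ("PASS", "noise"))
    child_a, child_b = crossover(parent_1, parent_2)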
We conclude this section by noting some of GP’s attractive features. First, it is capable of exploring high-dimensional spaces. It is also highly scalable, highly parallel and can run effectively on a distributed cluster of workstations. In addition, its solutions are human-readable, compared with other algorithms (e.g. neural networks) where the solution is embedded in a very complex state space.
3 Infrastructure and Methodology
This section describes our compilation framework as well as the methodology we used to collect results. We begin by describing the GP parameters we used to train the convergent scheduler, then we give an overview of our experimental compiler and VLIW simulator.
3.1 GP Parameters
We wrapped the GP framework depicted in Figure 1 around our compiler and simulator. For each individual in the population, our harness compiles the benchmarks in our training suite with the pass ordering described by its genome. All experiments maintain a population of 200 individuals, initially randomly chosen. After every generation we discard the weakest 20% of the population and replace them with new individuals. Of these new pass orderings, half are completely random, and the remainder are created via the crossover operator described in the last section. 5% of the individuals created via crossover are subject to mutation. Finally, we run each experiment for 40 generations. Fitness is measured as the average speedup (over all the benchmarks in our training suite) compared against the pass ordering that we used in our previous work [3]. We also reward parsimony by giving preference to the shorter of two otherwise equivalently fit sequences.
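A minimal sketch of the fitness computation just described, assuming speedup is taken as a ratio of simulated cycle counts and parsimony is a small tie-breaking term (the weight of that term is our assumption):

    def fitness(cycle_counts, baseline_cycles, genome_size):
        """cycle_counts and baseline_cycles map benchmark name -> simulated cycles."""
        speedups = [baseline_cycles[b] / cycle_counts[b] for b in cycle_counts]
        mean_speedup = sum(speedups) / len(speedups)
        return mean_speedup - 1e-6 * genome_size   # parsimony: a tiny tie-breaker

    base = {"fir": 1000, "yuv": 2000}                   # hypothetical cycle counts
    print(fitness({"fir": 800, "yuv": 1500}, base, genome_size=12))
    print(fitness({"fir": 800, "yuv": 1500}, base, genome_size=30))  # same speedups, ranks lower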
3.2 Compiler Flow and Simulation Environment
Our compilation process begins in the SUIF front-end [8]. In addition to performing alignment analysis [9], the front-end carries out traditional optimizations such as loop unrolling, constant propagation, copy propagation, and dead code elimination. Our Chours VLIW back-end follows [10]. Written using MachSUIF [11], the back-end allows us to easily vary the number of clusters, functional units, and registers in the target architecture. Instruction latencies, memory access latencies, and inter-cluster communication latencies are also configurable. The convergent scheduler uses such information, combined with data from alignment analysis, to generate effective code. Similarly, our register allocator must know the number of registers in each cluster.
The result of the compilation process is a compiled simulator that we use to collect performance numbers. The simulator accurately models the latency of each functional unit. We assume that all functional units are fully pipelined. Furthermore, the simulator enforces lock-step execution. Thus, if a memory instruction misses in the cache, all clusters will stall. The memory system is runtime configurable so we can easily isolate the performance of various memory topologies. In total, the back-end comprises nine compiler passes and a simulation library. The four target architectures on which we experimented are described below.

Baseline (4cl) The baseline architecture is a 4-cluster VLIW with rich interconnectivity. In this configuration, the clusters are fully connected with a 4x4 crossbar. Thus, the clusters can exchange up to four words every cycle. The delay for the communication is 1 cycle. Register file, functional units and L1 cache are split into the clusters – even though every address of the memory can be accessed by any cluster – with a penalty of 1 cycle for non-local addresses. The cache takes 6 cycles to access and the register file takes 2 cycles. In addition, memory writes take 1 cycle. Each cluster has 64 general-purpose registers and 64 floating-point registers.

Limited Bus (4cl-comm) This architecture is similar to the baseline architecture, the only difference being inter-cluster communication capabilities. This architecture only routes one word of data per cycle on a shared bus, which can be snooped, thus creating a basic broadcasting capability. Because this model has limited bandwidth, the space-time scheduler must be more conservative in splitting computation across clusters.

Limited Bus (2cl-comm) Another experiment uses an architecture that is substantially weaker than the baseline. It is the same as machine 4cl-comm, except it only has 2 clusters.

Limited Registers (4cl-regs) The final machine configuration on which we test our system is identical to the baseline architecture, except that each cluster has half the number of registers (32 general-purpose and 32 floating-point registers).
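For reference, the four configurations can be restated as machine-description records of the kind such a parameterizable back-end consumes; the field names below are ours, while the numbers are those quoted above.

    MACHINES = {
        "4cl":      dict(clusters=4, gpr=64, fpr=64, xbar_words_per_cycle=4),
        "4cl-comm": dict(clusters=4, gpr=64, fpr=64, bus_words_per_cycle=1, snoop=True),
        "2cl-comm": dict(clusters=2, gpr=64, fpr=64, bus_words_per_cycle=1, snoop=True),
        "4cl-regs": dict(clusters=4, gpr=32, fpr=32, xbar_words_per_cycle=4),
    }
    LATENCIES = dict(intercluster=1, cache=6, register_file=2, memory_write=1,
                     nonlocal_address_penalty=1)   # cycles, as given in the text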
4 Available Passes
In this section, we briefly describe the passes used in our experimental framework. Passes are divided into time heuristics, passes for placement and critical path, passes for communication and load balancing, and passes for register allocation. The miscellaneous passes help the convergence by breaking symmetry and strengthening the current assignment. For implementation details, we refer the reader to [2, 3].
4.1 Time Heuristics
Initial Time Assignment (INITTIME) initializes the weight matrix by squeezing to 0 all the time slots that are infeasible for a particular instruction. If the distance from an instruction to the farthest root of the data-dependency graph is d, the preference for that instruction to be scheduled at any cycle earlier than d is set to 0. The distance to the leaf is similarly used. Dependence Enforcement (DEP) verifies that no instruction is scheduled before an instruction on which it depends. This is done by reducing the preference for early time slots in the dependent instruction. Functional Units (FUNC) reduces the preference for overloaded time slots, i.e., slots for which the load is higher than the number of available functional units. Emphasize Critical Path Distance (EMPHCP) tries to schedule every instruction at the time indicated by its level, i.e., the distance from roots and leaves.
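A sketch (ours) of how time heuristics of this kind act on a per-instruction vector of per-cycle preferences; the data layout and damping rule are assumptions:

    def inittime(pref, dist_from_root, dist_to_leaf, last_cycle):
        """Zero out the preference for cycles an instruction cannot legally occupy."""
        for i, weights in pref.items():
            for t in range(len(weights)):
                if t < dist_from_root[i] or t > last_cycle - dist_to_leaf[i]:
                    weights[t] = 0.0
        return pref

    def func(pref, units_per_cycle):
        """Damp the preference for time slots whose expected load exceeds the
        number of available functional units."""
        cycles = len(next(iter(pref.values())))
        load = [sum(w[t] for w in pref.values()) for t in range(cycles)]
        for weights in pref.values():
            for t in range(cycles):
                if load[t] > units_per_cycle:
                    weights[t] *= units_per_cycle / load[t]
        return pref

    # Example: instruction "a" is at distance 1 from the root and 0 from the leaf.
    pref = {"a": [1.0, 1.0, 1.0, 1.0]}
    print(inittime(pref, {"a": 1}, {"a": 0}, last_cycle=3))   # {'a': [0.0, 1.0, 1.0, 1.0]}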
4.2 Placement and Critical Path
Push to First Cluster (FIRST) gives instructions a slight bias toward the first cluster, where our compiler guarantees the presence of all live registers at the end of each block (so less communication is needed for instructions in the first cluster). Preplacement (PLACE) increases, for preplaced instructions (see [9]), the preference for their home cluster. Preplacement Propagation (PLACEPROP) propagates the information about preplacement to neighbors in the data dependence graph. The preference for each cluster decreases with the distance (in the dependence graph) from the closest preplaced instruction in that cluster. Critical Path Strengthening (PATH) identifies one critical path in the schedule, and tries to keep it together in the least loaded cluster or in the home cluster of its preplaced instructions. Path Propagation (PATHPROP) identifies high-confidence instructions, and propagates their preferences to the neighbors in the critical path. Create Clusters (CLUSTER) creates small instruction clusters using Partial Component Clustering [5], and then allocates them to clusters trying to minimize communication. This is useful when the preplacement information is poor.
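The following sketch (ours) illustrates the PLACEPROP idea of letting cluster preference decay with dependence-graph distance from the nearest preplaced instruction; the decay factor and data layout are assumptions:

    from collections import deque

    def placeprop(neighbors, preplaced, clusters, decay=0.5):
        """neighbors: instr -> iterable of dependence-graph neighbors;
        preplaced: instr -> home cluster.  Returns instr -> per-cluster bias."""
        bias = {i: [0.0] * clusters for i in neighbors}
        for cluster in range(clusters):
            seeds = [i for i, c in preplaced.items() if c == cluster and i in neighbors]
            dist = {i: 0 for i in seeds}
            queue = deque(seeds)
            while queue:                          # BFS outward from preplaced instructions
                u = queue.popleft()
                for v in neighbors.get(u, ()):
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            for i, d in dist.items():
                if i in bias:
                    bias[i][cluster] = decay ** d
        return bias

    # Tiny example: load "ld" is preplaced on cluster 0; its consumer "add" leans there too.
    print(placeprop({"ld": ["add"], "add": ["ld"]}, {"ld": 0}, clusters=2))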
4.3 Communication and Load Balancing
Communication Minimization (COMM) tries to minimize communication by keeping in the same cluster instructions that are neighbors in the dependence graph.
Parallelism for Successors (SUCC) exploits the broadcast feature of some of our VLIW configurations by distributing across clusters the children of an instruction which is already communicating data on the bus. The other instructions can snoop the value, so no extra communication will be needed. Load Balance (LOAD) reduces the preferences for the cluster that has the highest preferences so far. Level Distribute (LEVEL) tries to put instructions that are at the same level (distance from roots and leaves) into different clusters if they do not communicate.
4.4 Register Allocation
Break Edges (EDGES) tries to reduce register pressure by breaking the data dependence edges that cross any specific time in the schedule (if there are more such edges than available registers). This is done by reducing the preference for the instructions on those edges to be scheduled around that point in time. Reduce Parallelism (SEQUENTIAL) emphasizes the sequential order of instructions in the basic block. This reduces parallelism and register pressure due to values with a long life span.
4.5 Miscellaneous
Noise Introduction (NOISE) adds noise to the distribution to break symmetry in subsequent choices. Assignment Strengthening (BEST) boosts the highest preference in the schedule so far.
5 Results
In this section, we compare the performance of convergent scheduling to two existing assignment/scheduling techniques for clustered VLIW architectures: UAS [4] and PCC [5]. We augmented each existing algorithm with preplacement information. For UAS, we modified the CPSC heuristic described in the original paper to give the highest priority to the home cluster of preplaced instructions. For PCC, the algorithm for estimating schedule lengths and communication costs properly accounts for preplacement information. It does so by modeling the extra costs incurred by the clustered VLIW machine for a non-local memory access. For simplicity, in the following, we will refer to the sequence (SEQ (PassA) (PassB)) simply as (PassA) (PassB), removing SEQ: when no variables are used, genomes reduce to a linear sequence of passes. Also, in all of our experiments, (inittime) is hardwired to be the first pass, as part of the initialization, and (place) is always run at the end of the sequence to guarantee semantics.
Fig. 2. Performance comparisons between PCC, UAS, and Convergent scheduling on a four-cluster VLIW architecture. Speedup is relative to a single-cluster machine
5.1 Baseline (4cl)
The baseline sequence was hand-tuned in our initial work with convergent scheduling. For the baseline architecture, our compiler used the following sequence:
As shown in Figure 2, convergent scheduling outperforms UAS and PCC by 14% and 28%, respectively, on a four-clustered VLIW machine. Convergent scheduling is able to use preplacement information to find good natural partitions for our dense matrix benchmarks.
5.2 Limited Bus (4cl-comm)
We use this configuration to perform many experiments. We evolved a sequence for 100 generations, with 200 individuals, over seven representative benchmarks. Figure 4 plots the fitness of the best creature over time. The fitness is measured as the average (across benchmarks) normalized completion time with respect to the sequence for our baseline architecture. The sequence improves quickly in the first 36 generations. After that, only minor and slow improvements in fitness could be observed. This is why, in our cross-validation tests (see section 5.5), we limit our evolution to 40 generations.
Fig. 3. Speedup on 4cl-comm compared with 1-cluster convergent scheduling (original sequence). In the graph, conv. is the baseline sequence, evolved is the new sequence for this architecture.
The evolved sequence is more conservative in communication. (dep) and (func) are important: (dep), as a side effect, increases the probability that two dependent instructions are scheduled next to each other in space and time; (func) reduces peaks on overloaded clusters, which could lead to high amounts of localized communication. Also, the (comm) pass is run twice, in order to limit the total communication load.
The plot in Figure 3 compares the evolved sequence with the original sequence and our reference schedulers. The evolved sequence performs about 10% better than UAS, and about 95% better than the sequence tuned for the baseline architecture. In this test, PCC performed extremely poorly, probably due to limitations in the modeling of communication done by our implementation of the internal simplified scheduler (see [5]).
5.3 Limited Bus (2cl-comm)
Similar to the previous tests, (comm), (dep) and (func) are important in creating a smooth schedule. We notice the strong presence of (noise) in the middle of the sequence. It appears as if the pass is intended to move away from local minima by shaking up the schedule. The evolved sequence outperforms UAS (about 4% better) and PCC (about 5% better). Here PCC does not show the same problems present with 4cl-comm (see Figure 5). We observe an improvement of 12% over the baseline sequence.
Fig. 4. Completion time for the set of benchmarks for the fittest individual, during evolution on 4cl-comm
Fig. 5. Speedup on 2cl-comm
5.4 Limited Registers (4cl-regs)
Figure 6 shows the performance of the evolved sequence when compared with our baseline and our reference schedulers. We measure an improvement of 68% over the baseline sequence. Here again, (func) is a very important pass. UAS outperforms convergent scheduling on this architecture by 6%, and PCC outperforms it by 2%. We believe this is due to the need for new, more expressive heuristics for register allocation. Future work will investigate this.
5.5 Leave-One-Out Cross Validation
We tested the robustness of our system by using leave-one-out cross validation on 4cl-comm. In essence, cross validation helps us quantify how applicable the
Fig. 6. Speedup on 4cl-regs.
sequences are when applied to benchmarks that were not in the training set. The evolution was rerun excluding one of the seven benchmarks, and the resulting sequence was then tested on the excluded benchmark. In Table 4, the results are shown as speedup compared with a one-cluster architecture. The seven cross-validation evolutions reached results very similar to those of the full evolution, even for the excluded benchmarks. In particular, the sequences evolved excluding one benchmark still outperform, on average, the comparison compilers, UAS and PCC. The seven evolved sequences (in Table 3) are all similar: (func) is the most important pass for this architecture.
5.6 Summary of Results
We verified that convergent scheduling is well suited to a set of different architectures. Running on 20 dual-processor Pentium 4 machines, evolution takes a couple of days. Sequences that contain conditional expressions never appeared in the best individuals. It turns out that running a pass is more beneficial than running
a test to condition its execution. This is largely because convergent scheduling passes are somewhat symbiotic by design. In other words, the results show that passes do not disrupt good schedules. So, running extra passes is usually not detrimental to the final result. We verified that running a complex measurement can take as much time as running a simple pass. Therefore, when measuring the complexity of resulting sequences, we assign equal weight to passes and tests. Our bias for shorter genomes (parsimony pressure) penalizes sequences with extra tests as well as sequences with useless passes. In the end, conditional tests were not used in the best sequences. Rather, all passes are unconditionally run. Nevertheless, we still believe in the potential of this approach, and leave further exploration to future work.
6 Related Work
Many researchers have used machine-learning techniques to solve hard compilation problems; here we discuss only the most relevant works. Cooper et al. use a genetic-algorithm solution to evolve the order of passes in an experimental compiler [12]. Our research extends theirs in several significant ways. First, our learning representation allows for conditional execution of passes, while theirs does not. In addition, we differ in the end goal; because they were targeting embedded microprocessors, they based fitness on code size. While this is a legitimate metric, code size is not a big issue for parallel architectures, nor does it necessarily correlate with wall-clock performance. We also simultaneously train on multiple benchmarks to create general-purpose solutions, whereas they use application-specific sequences to hand-craft a general-purpose solution. Finally, we believe the convergent scheduling solution space is more interesting than that of an ordinary back-end. The symmetry and unselfishness of convergent scheduling passes implies an interesting and immense solution space. Calder et al. used supervised learning techniques to fine-tune static branch prediction heuristics [13]. They employ two learning techniques — neural networks and decision trees — to search for effective static branch prediction heuristics. While our methodology is similar, our work differs in several important ways. Most importantly, we use unsupervised learning, while they use supervised learning. Unsupervised learning is used to capture inherent organization in data, and thus only input data is required for training. Supervised learning learns to match training inputs with known outcomes. This means that their learning techniques rely on knowing the optimal outcome, while ours does not. Our problem demands an unsupervised method since optimal compiler sequences are not known. The COGEN(t) compiler creatively uses genetic algorithms to map code to irregular DSPs [14]. This compiler, though interesting, evolves on a per-application basis. Nonetheless, the compile-once nature of DSP applications may warrant the long, iterative compilation process.
7 Conclusion
Time-to-market pressures make it difficult to effectively target next-generation processors. Convergent scheduling’s simple interface alleviates such constraints by facilitating rapid prototyping of passes. In addition, an architecture-specific pass is not as susceptible to bad decisions made by previously run passes as in ordinary compilers. Because the scheduler’s framework allows passes to be run in any order, there are countless legal pass orders to consider. This paper showed how machine-learning techniques could be used to automatically search the pass-order solution space. Our genetic programming technique allowed us to easily re-target new architectures. In this paper, we also experimented with learning dynamic policies. Instead of choosing a fixed static sequence of passes, our system is capable of dynamically choosing the best passes for each scheduling unit, based on the status of the schedule. Although the learning algorithm did not find sequences that conditionally executed passes, we still have reasons to believe that this is a promising approach. Future work will explore this in greater detail. In closing, our technique was able to find architecture-specific pass orders which improved execution time by 12% to 95%. Cross validation showed that the performance improvement is not limited to the benchmarks on which the sequence was trained.
Acknowledgements We want to thank Shane Swenson and Walt Lee for their contribution. This work has been partially supported by the Italian National Research Council (CNR) FIRB project GRID.it “Enabling platforms for high-performance computational grids oriented to scalable virtual organizations,” by a grant from DARPA (PCA F29601-04-2-0166), an award from NSF (CISE EIA-0071841) and fellowships from the Singapore-MIT Alliance and the MIT Oxygen Project.
References
[1] Motwani, R., Palem, K.V., Sarkar, V., Reyen, S.: Combining register allocation and instruction scheduling. Technical Report CS-TN-95-22, Stanford University, Department of Computer Science (1995)
[2] Puppin, D.: Convergent scheduling: A flexible and extensible scheduling framework for clustered VLIW architectures. Master’s thesis, Massachusetts Institute of Technology (2002)
[3] Lee, W., Puppin, D., Swenson, S., Amarasinghe, S.: Convergent scheduling. In: Proceedings of the 35th International Symposium on Microarchitecture, Istanbul, Turkey (2002)
[4] Ozer, E., Banerjia, S., Conte, T.M.: Unified assign and schedule: A new approach to scheduling for clustered register file microarchitectures. In: International Symposium on Microarchitecture. (1998) 308–315
[5] Desoli, G.: Instruction Assignment for Clustered VLIW DSP Compilers: A New Approach. Technical Report HPL-98-13, Hewlett Packard Laboratories (1998)
[6] Koza, J.: Genetic Programming: On the Programming of Computers by Means of Natural Selection. The MIT Press (1992)
[7] Banzhaf, W., Nordin, P., Keller, R., Francone, F.: Genetic Programming: An Introduction: On the Automatic Evolution of Computer Programs and Its Applications. Morgan Kaufmann (1998)
[8] Wilson, R.P., French, R.S., Wilson, C.S., Amarasinghe, S.P., Anderson, J.M., Tjiang, S.W.K., Liao, S.W., Tseng, C.W., Hall, M.W., Lam, M.S., Hennessy, J.L.: SUIF: An Infrastructure for Research on Parallelizing and Optimizing Compilers. SIGPLAN Notices 29 (1994) 31–37
[9] Larsen, S., Amarasinghe, S.: Increasing and detecting memory address congruence. In: Proceedings of the 11th International Conference on Parallel Architectures and Compilation Techniques (PACT), Charlottesville, VA (2002)
[10] Maze, D.: Compilation infrastructure for VLIW machines. Master’s thesis, Massachusetts Institute of Technology (2001)
[11] Smith, M.D.: Machine SUIF. In: National Compiler Infrastructure Tutorial at PLDI 2000. (2000) http://www.eecs.harvard.edu/hube
[12] Cooper, K.D., Schielke, P.J., Subramanian, D.: Optimizing for reduced code space using genetic algorithms. In: Proceedings of the ACM SIGPLAN Workshop on Languages, Compilers and Tools for Embedded Systems (LCTES). (1999)
[13] Calder, B., Grunwald, D., Jones, M., Lindsay, D., Martin, J., Mozer, M., Zorn, B.: Evidence-Based Static Branch Prediction Using Machine Learning. ACM Transactions on Programming Languages and Systems 19 (1997)
[14] Grewal, G.W., Wilson, C.T.: Mapping Reference Code to Irregular DSPs with the Retargetable, Optimizing Compiler COGEN(T). In: International Symposium on Microarchitecture. Volume 34. (2001) 192–202
TFP: Time-Sensitive, Flow-Specific Profiling at Runtime Sagnik Nandy, Xiaofeng Gao, and Jeanne Ferrante Department of Computer Science and Engineering University of California at San Diego {snandy,xgao,ferrante}@cs.ucsd.edu
Abstract. Program profiling can help performance prediction and compiler optimization. This paper describes the initial work behind TFP, a new profiling strategy that can gather and verify a range of flow-specific information at runtime. While TFP can collect more refined information than block, edge or path profiling, it is only 5.75% slower than a very fast runtime path-profiling technique. Statistics collected using TFP over the SPEC2000 benchmarks reveal possibilities for further flow-specific runtime optimizations. We also show how TFP can improve the overall performance of a real application. Keywords: Profiling, dynamic compilation, run-time optimization.
1 Introduction
Profiling a program can be used to predict the program’s performance [1], identify heavily executed code regions [2, 3, 4], perform additional code optimizations [5, 6], and locate data access patterns [7]. Traditionally, profiling has been used to gather information on one execution of the program, which is then used to improve its performance on subsequent runs. In the context of dynamic compilation and runtime optimizations, profiling information gathered in the same run can itself be used to improve the program’s performance. This creates a greater need for efficient profiling, since the runtime overheads might exceed any possible benefit achieved from its use. In addition, the information gathered by profiling must be relevant for runtime optimizations and should remain true while the optimized code is executed. In this paper we propose a new profiling framework, TFP (Time-Sensitive, Flow-Specific Profiling), that extracts temporal control flow patterns from the code at runtime that are persistent in nature, i.e., hold true for a given, selectable period of time. This information can then be used to guide possible optimizations from a dynamic perspective. This paper makes the following contributions:
1. Proposes a new profiling strategy that is both flow-specific and time-sensitive.
2. Provides a comparison of the profiling overheads of TFP with the dynamic path profiling of [8]. On the SPEC 2000 benchmarks, we show that TFP is
on average only 5.75% slower than the technique of [8] (which is well suited for a dynamic environment), while collecting a wider range of information.
3. Provides a case study of RNAfold [9] that demonstrates that information gathered by TFP can be used to improve the overall performance of an application.
The rest of this paper is organized as follows. Section 2 describes the background and motivation for our work. Section 3 discusses our framework in detail and how it can be used to collect a range of runtime information. In Section 4 we discuss some implementation details and how they can be changed to meet specific requirements. Section 5 presents experimental results and a case study using our framework. We conclude in Section 6 with possible future research directions.
2 Background and Motivation
Profiling code to gather information about the flow of control has received considerable attention over the years. Most existing profiling techniques are meant for off-line program analysis. However, with the advent of dynamic compilation and runtime optimizations, the use of profile data generated for runtime use has increased [10, 11, 7, 12, 13, 14, 15]. In [16, 7], a technique called Bursty Tracing is introduced that facilitates the use of runtime profiling. This technique allows the programmer to skip between profiled and un-profiled versions of the code as well as control the duration spent in either version. Such a technique allows the user to control the overheads involved in running profiled code to a far greater extent. Some of these techniques require hardware support while others rely completely on software. Our work falls in the latter category. Some of the more popular flow profiling techniques include block profiling, edge profiling [17, 18] and path profiling [2, 19]. These techniques differ in the granularity of the information they collect, with path profiling ⊇ edge profiling ⊇ block profiling, i.e., all the information gathered by block profiling can be gathered by edge profiling, while all the information gathered by edge profiling can be collected by path profiling. However, retrieving this information comes at a greater cost in terms of overheads, since one needs to maintain data structures to save this information and often requires multiple passes over these data structures to get the necessary granularity of information. In [8], Bala developed a profiling technique well suited to finding path profiles in a dynamic environment. This technique instruments each edge of a code segment with a 0 or 1, and represents each path as a starting block followed by a bit sequence of 0’s and 1’s. The easy implementation and simplicity of this technique makes it an attractive choice for runtime path profiling. With adequate support from the compiler and hardware this technique can provide near-zero overhead profiling, and it forms the basis of comparison for the work we develop here. However, several possible runtime optimizations such as dependence analysis and loop unrolling can benefit from block and edge profiling alone, and often
Fig. 1. Sample code snippets: (b) has a path with a 50-PFP while (a) has no such path
do not require more refined information. Even though this information might be retrieved from path profiles, it could require considerable additional processing (to store the blocks and edges a path corresponds to, and then scan through the paths again to retrieve the necessary information). A fundamental question to be addressed by our research is whether there is additional advantage in using more powerful profiling information at runtime. We also question whether a detailed analysis of the program’s execution pattern is useful for online analysis. For example, one might want to detect whether a single path is being executed persistently (thereby making it a possible target of optimizations [15]) or observe if certain pathological cases never occur [20]. This paper seeks to combine several benefits of block, edge and path profiling in a single unified profiling framework, providing easy and efficient access to a range of information at runtime.
3 The TFP Approach to Profiling
TFP can not only count frequencies of flow patterns but is also capable of capturing a variety of temporal trends in the code. These trends can then be used to guide runtime optimizations. To capture this idea we make use of persistence, i.e., flow patterns and information that continuously hold true for a period of time. We define the property of persistence as follows: A K-Persistent Flow Property (K-PFP) of a program segment is a property which holds true for the control flow of that segment for K consecutive executions of the segment. The motivation for such a technique lies in the assumption that if a PFP holds for a period K, it may continue to hold for some additional time. Additional optimizations could then be made assuming the trend remains persistent. For example, consider the two code snippets in Figure 1. Traditional frequency-based profilers will find the frequently taken paths in the two snippets to be equally hot [2]. The code in 1(a) is not suitable for runtime optimization, since the path in the loop body only lasts for one iteration. On the other hand, in 1(b) an optimization that is valid for only one path in the loop body would
remain valid longer, possibly making it worthwhile to perform the optimization. This shows that frequency is not the only parameter for locating hot paths; persistence can also be considered (similar distinctions about access patterns can also be found in [21] for the purpose of code layout). When using a PFP-guided approach, the code snippet in 1(b) will qualify as a 50-PFP but the one in 1(a) will not, allowing us to differentiate between them. Even if a sequence of code does not have a persistent path, we might still be interested in finding other PFPs. Each PFP might lead to a different kind and granularity of optimization. Listed below are some other possible PFPs and examples of runtime optimizations that can be based on them.
1. Persistently Taken Paths: This information can help the compiler identify a possibly smaller segment of code on which runtime path-specific optimizations might be conducted.
2. Persistently Untaken Basic Blocks: This information would allow one to form a smaller CFG by eliminating these blocks from the original CFG. As a result, one can eliminate dependences, loops, variables etc., leading to further optimizations.
3. Persistently Taken Path Segments: Even if persistent paths do not exist we might have sub-paths that are persistent. This can help in eliminating certain dependences and code regions.
4. Whether a Given Set of Edges Are Ever Taken: This information can be used to remove possible dependences at runtime.
Though some of the existing profiling techniques can be modified to incorporate persistence, they are aimed at gathering one kind of information efficiently. While path profiling can do a good job of example PFP 1, block profiling can perform 2 and edge profiling can collect 3 and 4 efficiently. Path profiling techniques like [2] and [8] can also be used to detect 2, 3 and 4, but would require maintaining additional data structures, storing additional data, and making multiple passes of the profiled information. TFP provides a unified framework that collects all the above mentioned PFPs with a small amount of instrumentation. The following section describes TFP in detail.
3.1 Detailed Description of TFP
TFP profiles acyclic code regions (we later describe in Section 4 how we can include nested loops) and is a hybrid between Bala’s method [8] of path profiling and block profiling. Instead of assigning each edge a 0 or 1 (as in Bala’s method), we represent each (profiled) block by a single bit position in a bit string. Conceptually, each block is assigned an integer which is a unique power of two (i.e., the block assigned bit position i is represented by the value 2^i). The initial block always sets the value of the path register to 0. At the end of each profiled block an instruction is inserted to perform a mathematical ADD (or bitwise OR) of this number to the register. The value of this register identifies the path taken, and Bookkeeping code is inserted in the exit block of the instrumented region. For acyclic code
Fig. 2. An example of profiling using TFP (the register values corresponding to the paths are also given)
regions with multiple exit blocks we add the Bookkeeping code in each of the exit blocks. The Bookkeeping code can vary with the kind of PFP(s) we wish to track, as illustrated in the sections to come. Figure 2 gives an example of our profiling method, showing a sample code region, the inserted instrumentation code and the register values associated with the various paths. The basic idea behind our approach is that each path will produce a unique value in the register, as well as give all the information about the blocks that form that path. Thus we get the benefits of both block and path profiling simultaneously (and some benefits of edge profiling as shown in Section 3.5). This idea is embodied in the following:
Theorem 1. With the register assignments inserted as described above, each different value of the register corresponds to a unique path.
Proof. Since each basic block is represented by a bit in the register, the only way in which the register can get a value is by traversing all the blocks that correspond to a 1 in the bitwise representation of the value. Thus, given a value in the register, we can determine the basic blocks in the corresponding path. To complete the proof, we have to show that no two paths can have the exact same set of blocks in them. The proof is by contradiction. Assume that X is the set of basic blocks that were traversed and that Y and Z are two different paths using X, i.e., Y and Z are two different permutations of X. All the elements of X must be unique, else X would have a loop, and thus an associated back edge, contradicting our assumption. Now let k be the first position where Y and Z differ, i.e., the i-th blocks of Y and Z are equal for i < k and differ at position k. Such a position must exist, or Y and Z would be identical. Since both Y and Z have the same set of elements, there is some position j > k at which the j-th block of Z equals the k-th block of Y. Thus there exists an edge in path Z from its (j-1)-th block to this block. Now the (j-1)-th block of Z has to appear in Y as well, and it can only appear at or after the k-th position. Thus in Y there is a path from the k-th block of Y to that block, and we also know that there is an edge from that block back to the k-th block of Y. This edge is therefore a back edge, contradicting our assumption for profiling candidates. Hence Y and Z cannot be different.
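A small executable sketch (ours) of the scheme just described, using the block names of the Figure 2 example; the particular bit assignment is illustrative, and only its uniqueness per block matters:

    BIT = {name: 1 << i for i, name in enumerate("BCDEF")}   # entry block A has no bit

    def run_path(path):
        """Simulate one execution of the acyclic region along the given path."""
        r = 0                      # entry block A resets the path register
        for block in path[1:]:
            r |= BIT[block]        # per-block instrumentation: a single OR
        return r

    print(format(run_path("ACEF"), "05b"))    # 11010
    print(format(run_path("ABCEF"), "05b"))   # 11011
    assert run_path("ACEF") != run_path("ABCEF")   # distinct paths, distinct values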
TFP provides two major benefits when compared to traditional path profiling techniques. First, it collects a wider range of information as a by-product of path profiling. Other path profiling techniques would require additional data structures (TFP uses just a few variables) and multiple passes over those data structures to find this information. The second advantage of TFP is that most other path profiling techniques instrument the edges, which can introduce additional branches into the program and affect overall performance. TFP instruments at the block level, and though this requires instrumenting every block of the region, it does not add further checks to the code. We now describe how TFP can be used to detect some of the PFPs mentioned earlier in this section. We first consider the parameter Persistence Factor (K), which represents a lower bound (threshold) on the persistence of interest. To gather various K-PFPs the TFP instrumented code is executed for K iterations (this can be achieved using [16]). The values of the Bookkeeping variables at the end of these iterations reveal the various K-PFPs observed.
3.2 Persistent Paths
The following Bookkeeping is inserted once at the end of the acyclic region, to track persistent paths using TFP.
Bookkeeping for Persistent Paths
After running the TFP instrumented code for K successive executions, if the two Bookkeeping variables are equal then we know that we have a K-persistent path. This follows from the fact that each path produces a unique value of the path register (from Theorem 1), and the only way the two variables can be equal is if the register value remained unchanged for the K iterations (the two variables are initially set to -1 and 0, respectively). If we detect a persistent path then we can expect the code to remain in the same path for a while and make further optimizations based on this assumption.
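The bookkeeping listing itself does not survive in this text; the following is a sketch of bookkeeping with the stated properties (two variables, initialized to -1 and 0, that are equal after K executions exactly when the path never changed). The AND/OR formulation is our assumption.

    class PersistentPathBookkeeping:
        def __init__(self):
            self.acc_and = -1      # initialized to all ones
            self.acc_or = 0        # initialized to all zeros

        def at_region_exit(self, r):
            """Called once per execution of the region with the path register r."""
            self.acc_and &= r
            self.acc_or |= r

        def k_persistent_path(self):
            """True iff every recorded execution followed the same path."""
            return self.acc_and == self.acc_or

    bk = PersistentPathBookkeeping()
    for r in (0b11010,) * 50:      # the same path for 50 executions
        bk.at_region_exit(r)
    print(bk.k_persistent_path())  # True
    bk.at_region_exit(0b11011)     # a different path breaks persistence
    print(bk.k_persistent_path())  # False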
3.3 Path Segments that Are Always Taken
Even if we do not find persistent paths using the method given in Section 3.2, we might still want to find the set of path segments or sub-paths that are always taken. To get this information using TFP, we use the same Bookkeeping code as in Section 3.2 but assign the numbers ADD/ORed to the register in each basic block in a topologically sorted manner (this need not be done at runtime if one uses a framework like [16] or if all regions of possible interest are instrumented at compile time itself). Thus, if each instrumented basic block ADD/ORs the value corresponding to its bit position to the register, then a block's bit position is smaller than that of any block that follows it in the topologically sorted order of the blocks. To gather the information about path segments that are always taken (during the K successive executions of the TFP instrumented code), we need to scan
through the bits that were set in every execution (from left to right or right to left) and join blocks corresponding to adjacent 1’s among them, unless some block in between the two has its bit set to 1 in any of the executions. Thus, if a sequence of blocks forms a persistent path segment, the bit locations of those blocks are 1 in every recorded register value, and the only bits ever set to 1 in between the first and last of them are those of the blocks in the segment. This follows from the fact that, since the blocks are topologically sorted, the edge between two such blocks is always taken if no block between them is ever taken. Consider the program graph shown in Figure 2. Assume that during a profiled run only ACEF and ABCEF are taken. At the end of the profiled run, the bits set in every execution form 11010 and the bits set in any execution form 11011. Following the technique given above we see that bit positions 1, 2 and 4 are set to 1 in every execution and none of the other intermediate positions (position 3) are ever set to 1. Thus we can conclude that the path segment connecting bit positions 1, 2 and 4 (i.e., CEF) is always taken, which is the case.
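A sketch (ours) of how the always-taken segments can be read off two accumulators, one holding the bits set in every execution and one holding the bits set in any execution; the numbers reproduce the ACEF/ABCEF example above.

    def always_taken_segments(acc_and, acc_or, n_bits):
        """Return lists of bit positions forming persistently taken segments."""
        segments, current = [], []
        for pos in range(n_bits):                 # scan in topological order
            if acc_and >> pos & 1:                # block taken in every execution
                current.append(pos)
            elif acc_or >> pos & 1:               # block taken in some execution:
                if len(current) > 1:              # it breaks the segment
                    segments.append(current)
                current = []
            # a never-taken block does not break the segment
        if len(current) > 1:
            segments.append(current)
        return segments

    # acc_and = value of the ACEF path, acc_or = ACEF | ABCEF.  With the bit
    # assignment B=0, C=1, D=2, E=3, F=4 used in the earlier sketch:
    print(always_taken_segments(0b11010, 0b11011, 5))   # [[1, 3, 4]] -> the blocks C, E, F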
3.4 Basic Blocks that Are Not Taken
To determine the basic blocks that are not taken, we could use block profiling and check each counter of the basic blocks to see if they are 0. However, for TFP, we do not need counters to gather this information. Using our method this information can be easily obtained using the following code for Bookkeeping.
Bookkeeping for Blocks Persistently Not Taken
On executing the instrumented code for K successive executions, the variable bblock has a 1 for all the blocks that are taken at any time during the execution of the instrumented code, and all bit positions that hold 0’s correspond to basic blocks that are not executed even once in those K executions. Note that TFP doesn’t gather the exact frequency of the blocks that are taken. It can be observed that the Bookkeeping for this PFP is a subset of the ones described in Sections 3.2 and 3.3, and need not be additionally inserted in case we are also instrumenting for persistent paths or sub-paths.
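A sketch (ours) of this bookkeeping: a single OR accumulator (the variable called bblock above) records every block seen, and bits still at zero after K executions mark blocks that were never executed.

    def untaken_blocks(path_registers, n_bits):
        bblock = 0
        for r in path_registers:           # bookkeeping at each region exit
            bblock |= r
        return [pos for pos in range(n_bits) if not bblock >> pos & 1]

    print(untaken_blocks([0b11010, 0b11011], 5))   # [2] -> block D was never taken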
3.5 Tracking If Specific Edges Are Taken
Several useful optimizations are impossible to verify statically because of possible dependences along different control flow paths. Our framework provides an easy way of tracking whether a specific set of edges is ever executed (or persistently not executed). The compiler can use this information to eliminate false dependences at runtime, enabling several optimizations (such as constant propagation, loop unrolling, code compaction, etc.). To achieve this using our framework, we assign blocks their additive values based on a topological sort, as described in Section 3.3. Thereafter, if we want to test whether an edge from the block at bit position i to the block at bit position j is ever
taken, we add a test in the Bookkeeping code to check if the bit positions i and j are ever simultaneously 1 with no other 1’s between them. This can be done by assigning two variables the initial values of an integer with all bits between positions i and j (inclusive) set to 1, and an integer with only bit positions i and j set to 1. These variables can be defined at compile time with their corresponding values. At runtime, the following code is added to Bookkeeping:
Bookkeeping Needed to Track If Specific Edges Are Taken
It is easy to see why this works. If the edge is ever taken, then the bit positions i and j of the path register will be 1 (by definition of our profiling technique). Moreover, all the intermediate bit positions between i and j will be 0 (otherwise the edge could not have been taken, since the blocks are topologically sorted). Thus, when the register is ANDed with the first variable, the only bit positions which will be 1 are i and j, making the profiled code call the optimizer. If after executing the TFP instrumented code for K executions the OPTIMIZER is not informed (we need not necessarily inform the OPTIMIZER but can just set a flag instead) we can conclude that the monitored edge is not taken persistently.
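The listing is not reproduced here; the sketch below is our rendering of the described test for an edge from the block at bit position i to the block at bit position j (i < j in topological order).

    def edge_masks(i, j):
        between = ((1 << (j + 1)) - 1) ^ ((1 << i) - 1)   # bits i..j inclusive
        endpoints = (1 << i) | (1 << j)                   # only bits i and j
        return between, endpoints

    def edge_taken(r, between, endpoints):
        """True iff bits i and j are set in r with no 1's strictly between them."""
        return r & between == endpoints

    # Example with the bit assignment used earlier (C=1, E=3): the C->E edge is
    # taken on path ACEF (D, bit 2, is 0) ...
    between, endpoints = edge_masks(1, 3)
    print(edge_taken(0b11010, between, endpoints))   # True
    # ... but would not be reported if the intermediate block D were also set.
    print(edge_taken(0b11110, between, endpoints))   # False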
3.6 TFP for Normal Path Profiles
TFP can be used to measure normal path frequencies as well. Each path in TFP produces a unique value in the path register. This value can be hashed into a counter array at the end of the profiled region to maintain the path frequencies. However, path profiling techniques like [2] will do a better job of maintaining such frequencies alone. The range of the path identifiers used by that technique is exactly equal to the total number of paths, making direct indexing into the counter array possible. Both TFP and Bala’s method [8] use path identifiers that do not reflect the actual number of paths in the instrumented region, thereby requiring hashing. To summarize, several dynamic optimizations might not need the “exact” frequency of paths. However, if needed, TFP can easily be modified to maintain these frequencies without adding to the overheads significantly ([8] required 3 cycles for their hashing phase). These are just some of the statistics we can gather using our profiling strategy. One can easily change the Bookkeeping segment to calculate further statistics like basic blocks that are always taken, the minimum amount of persistence between paths, etc. Moreover, we have already seen that some of the bookkeeping needed for different statistics overlaps, making the bookkeeping more efficient.
4 Implementation Issues for TFP
In this section we discuss some of the issues involved in implementing TFP and how the strategy can be modified in different situations.
Fig. 3. Cumulative distribution of the number of basic blocks present in the profiled code regions in (a) INT Benchmarks and (b) FP Benchmarks
4.1 Use of Variables and Registers
Much of TFP’s value relies on the fact that it uses only a few variables to achieve profiling as well as to maintain the information gathered. Traditional profilers can consume large amounts of memory to store profiled data, thereby affecting the runtime performance. TFP shows that it is possible to maintain a fairly wide and relevant range of runtime information by using only a few variables. This, however, is based on the assumption that the number of blocks in the instrumented region is not too large. If the number of blocks in a region instrumented by TFP is small, a single register can be used to represent all the blocks. This helps in reducing the overheads of TFP, as we avoid reloading values from memory every time profiling occurs and all the data needed for profiling can be maintained in a single register. The bookkeeping may need a few extra variables (depending on the amount of information we want to gather) but this would still be significantly less than using large arrays to store the frequencies of every path/block/edge. To test our assumption that a single register is sufficient to store temporal program behavior, we profiled the code regions covered by the most frequently executed back edges in the SPEC 2000 benchmarks (we did not consider some trivial two-block loops having just a single path; also, for eon and some FP benchmarks we considered fewer than 10 back edges, as there was a significant drop in the frequencies of the remaining ones) to see how many basic blocks they cover. The results are shown in Figure 3. Observe that more than 99% of these frequently executed code regions have less than 64 blocks in them. This implies that in nearly all cases a 64-bit register is sufficient to implement TFP efficiently. To use TFP for code regions having more than 64 blocks, we can use a new variable every time we finish instrumenting 64 blocks (assuming we are using a 64-bit register), i.e., instead of just using one profiling variable we use additional variables as needed. At the end, in the Bookkeeping section, instead of performing the check on a single variable we perform it on each of the variables and AND the results together, and we set all the variables to 0 after that. Thus we make up for not being able to store the bit stream corresponding to a path in a single variable by maintaining parts of the bit stream in separate variables. However, the number of extra variables needed is rarely more than one (of the 110 instrumented regions only one had more than 64 blocks in it). Thus at most we will need a few extra variables for these codes.
4.2 Nested Loops and Procedure Boundaries
Till now we have discussed how TFP can be applied to an acyclic region of code. TFP can also be applied to multiply-nested regions of code. A simple way to achieve this is to assign a separate variable to monitor each level of loop nesting. Since loops normally don’t have more than 2-3 levels of nesting, this should not be a problem (the same technique can also be used to perform inter-procedural profiling with TFP, by treating function calls as inner loops and using separate variables to profile them). Another trend of interest is to track PFPs across multiple procedure calls. For example, one might detect a K-PFP in a procedure even if the K runs of the instrumented code region are spread across multiple calls to the procedure. A simple way of achieving this is to declare the TFP variables used for profiling the procedure as static, so that they persist across multiple procedure calls.
5 Experimental Results
5.1 Overheads of Using TFP
We implemented TFP and ran it on 7 SPEC 2000 INT benchmarks and 6 FP benchmarks (the remaining benchmarks are omitted from our study since most of their dominant back-edges led to trivial single-path regions). Instrumentation was done using ATOM [22]. We instrumented the programs to detect the most frequently executed back edges, and then instrumented the code regions covered by these back edges. We omitted trivial two-block loops with a single path between them (there remained 4 regions, out of the total 110 regions we instrumented, with only one static control flow path; the compiler should have coalesced each of them into a single block but did not do so). Since ATOM itself added large overheads we decided to test the overheads of TFP by comparing it with our implementation of [8]. TFP did not maintain the path frequencies since the primary purpose of our experiments was to study the use of TFP in gathering PFPs. To be fair, we did not save the results of [8] as originally done (thus preventing it from making unnecessary stores) but just used it to ensure that the same path was persistently taken. TFP on the other hand not only tracked persistent paths but also tracked persistent sub-paths and untaken blocks (Sections 3.3 and 3.4). For our experiments, we wanted to ensure that the instrumented code kept running for the entire duration of the program (to study its overall overheads) and therefore set a very high value of K. The normalized results are shown in Figure 4.
Fig. 4. TFP vs Bala’s technique on the SPEC (a) INT Benchmarks (b) FP Benchmarks
On average, TFP was only 5.75% slower than Bala’s method, even though it gathered a wider range of information (persistent sub-paths and untaken blocks). For three of the FP benchmarks, TFP outperformed Bala’s method. This happens because Bala’s method needs two instrumentation statements (a bitwise OR and a register shift) at each conditional edge, and often additional conditional statements are needed to instrument conditional edges, while TFP requires a single instrumentation statement (a bitwise OR) at every block and adds no additional conditional statements. For the FP benchmarks the paths were small and the number of blocks in a path was comparable to the number of conditional edges along the path, making TFP more efficient. For the INT benchmarks we observed that several blocks that could be coalesced together were left separate. Since we did not have control over the compiler, we instrumented each of these blocks, though ideally they would have been one block (reducing our overhead). Since there were no conditional edges in these blocks, Bala’s method did not instrument them. We believe our 5.75% relative slowdown is a good result, since Bala’s technique achieves nearly zero-overhead profiling with adequate compiler support. We thus conclude that TFP is lightweight enough for runtime use on these benchmarks.
5.2 Statistics from TFP
In this section we present some runtime statistics collected by TFP on the SPEC 2000 benchmarks. These statistics reveal the presence of persistent trends in programs which can be used for dynamic compilation.
Persistent Paths We ran TFP over the SPEC 2000 INT and FP benchmarks and detected persistent paths with different values of K. The results from these experiments are shown in Table 1. We used static variables to track the paths as
mentioned in Section 4.2. Since we have considered the most frequently executed back edges, the instrumented code regions constitute a large fraction of the program’s actual running time. In summary, the regions of code we instrumented had 1961 static paths each on average. Of these, a small number of paths (16 for K=50 and 14 for K=100) account for a fairly large percentage (61% for K=50 and 59% for K=100) of the total iterations in these regions at runtime (since any path that is persistent for 100 executions is also persistent for 50, the similar numbers imply that most of the PFPs with persistence 50 also had a persistence of 100). These paths also have the property that the code continuously stays in these paths for at least 50/100 iterations on average without shifting to the other possible paths in the region. Thus it makes sense to perform path-specific runtime optimizations on these paths since (i) these paths constitute a fair fraction of the executed code and (ii) the path-specific optimizations will hold true for a while.
Persistently Untaken Blocks Information that might also be of use is the number of blocks that do not get executed persistently. One can temporarily remove these blocks from the code, which might lead to several subsequent optimizations. We used TFP to detect opportunities for such optimizations. The
total number of such untaken blocks is provided in Table 2. We have also provided the average number of blocks in the code regions we instrumented, to give an estimate of how many blocks one might actually eliminate temporarily. Since block reduction is a smaller subset of path reduction, we set higher values of K for these experiments (500 and 1000). To summarize these results: the instrumented code regions had on average 10.67 blocks each, of which 29.03% of the blocks were not executed for at least 500 consecutive runs of these regions and 27.94% were not executed for at least 1000 consecutive runs of these regions.
5.3 A Case Study: RNAfold
We studied if TFP could lead to improved program performance on RNAfold [9]. This computational biology application folds a given RNA sequence and returns its minimum free energy. The major part of the program is spent in a loop of the form:
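The loop itself is not reproduced in this text; purely as a hypothetical illustration of the control-flow shape discussed below (a running minimum whose update of decomp carries the loop dependence), it can be pictured as follows. The array names and bounds are invented, not RNAfold's actual source.

    # Purely illustrative stand-in (ours) for the loop shape discussed below: a
    # running minimum in which the update of decomp, the loop-carried
    # dependence, is taken only occasionally.  Names and bounds are hypothetical.
    def fold_inner(energy, i, j):
        decomp = float("inf")
        for k in range(i + 1, j):
            candidate = energy[i][k] + energy[k + 1][j]
            if candidate < decomp:      # the rarely taken path that carries the
                decomp = candidate      # true dependence between iterations
        return decomp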
Though this is a predominantly memory-intensive loop, one can get some benefits by unrolling the loop. However, there is a true dependence on decomp
between successive iterations of the loop. If we can implement aggressive unrolling and decomp seldom changes, then we can get a fair amount of additional parallelism. However, we observed that if decomp changed frequently, unrolling slowed down the execution by consuming additional resources (registers etc.). For RNAfold it is not possible to decide at compile time whether unrolling might be useful, since the decision is dependent on the data values of the input arrays. One can use TFP to detect PFPs in the loop (either a persistent path along the dependence-free path, or to see if the edge leading to the dependence is ever taken). If we notice that the path along which decomp doesn’t change is executed persistently we can decide to unroll the loop. Ideally the instrumentation and optimization would be done in the compiler. However, since we used an existing compiler (gcc-2.96) that we did not have full control over, we hand-coded the optimization. We manually implemented different unrolled versions of the loop (3-level and 4-level). The original loop was instrumented using TFP. The instrumentation searched for certain degrees of persistence along the path where decomp did not change and on finding such a trend it passed on control to the corresponding optimized, unrolled version. To test the usefulness of TFP in this experiment we also ran a separate version of the code with just the unrolled version of the loop. We ran the program with four different sizes of input sequences. The results are shown in Figure 5. The TFP-enabled unrolled version outperforms both the original code and the unrolled version (without TFP). This is because the unrolled version uses registers and is only useful if it manages to introduce additional parallelism. The TFP-enabled version uses the original loop till it finds a persistent trend, and then dynamically transfers control to the unrolled version, making the optimization more profitable. Though the improvements are small, the experiments show that time sensitive flow information can be used to improve overall performance at runtime.
Fig. 5. The normalized execution times for the three optimized versions of RNAfold
6 Conclusion
In this paper we presented a new profiling strategy, TFP, designed to be used in the context of dynamic compilation and optimization. In such a context, profiling must not only provide information useful in a dynamic setting, but do so with low runtime overhead. Our strategy, TFP, can collect a range of time-sensitive, control-flow-based information which is more detailed than that collected by block, edge or path profiling. Despite being more powerful, TFP’s additional overheads are negligible. Statistics gathered from the SPEC 2000 benchmarks revealed further opportunities for profile-directed, flow-specific optimizations at runtime. We also showed a case study that demonstrates the usefulness of the information collected by TFP for optimization at runtime. We plan on incorporating TFP in the context of a dynamic compiler to further explore its usefulness and actual overheads. Moreover, the amount of persistence (K) needed at runtime to actually produce benefit should be explored. Work is also going on to find efficient ways of using TFP to gather the exact path frequencies, if needed, at runtime. We plan to study if the definition of persistence can be relaxed (to accommodate a larger range of information) without adding to the overheads.
A Hierarchical Model of Reference Affinity
Yutao Zhong, Xipeng Shen, and Chen Ding
Computer Science Department, University of Rochester, Rochester, NY 14627, USA
{ytzhong,xshen,cding}@cs.rochester.edu
Abstract. To improve performance, data reorganization needs locality models to identify groups of data that have reference affinity. Much past work is based on access frequency and does not consider accessing time directly. In this paper, we propose a new model of reference affinity. This model considers the distance between data accesses in addition to the frequency. Affinity groups defined by this model are consistent and have a hierarchical structure. The former property ensures the profitability of data packing, while the latter supports data packing for storage units of different sizes. We then present a statistical clustering method that identifies affinity groups among structure fields and data arrays by analyzing training runs of a program. When used by structure splitting and array regrouping, the new method improves the performance of two test programs by up to 31%. The new data layout is significantly better than that produced by the programmer or by static compiler analysis.
1 Introduction
The widespread use of hierarchical memory on today's PCs and workstations is based on the assumption that programs have locality. In the early days of virtual memory design, Denning defined locality as "a concept that a program favors a subset of its segments during extended intervals (phases)" and the locality set as "the set of segments needed in a given program phase" [9]. The locality set measures the memory demand but does not suggest how to improve it. Abu-Sufah, working with Kuck, used data dependence information to estimate program locality and to reorder the program execution for better locality [1]. Thabit, working with Kennedy, analyzed the access affinity among data elements and used data placement to improve locality [32]. Subsequent research has examined a great number of locality models and their use in computation reordering, data reordering, or their combination. In this paper we restrict our attention to locality models that are used in data transformation. Data placement improves memory performance by grouping useful data into the same or adjacent cache blocks or memory pages. On today's high-end machines from IBM, SUN, and companies using Intel Itanium and AMD processors, the largest cache in the hierarchy is composed of blocks no smaller than 64 bytes. If only one four-byte integer is useful in each cache block, 94% of the cache space would be occupied by useless data, and only 6% of
cache is available for data reuse. A similar issue exists for memory pages, except that the utilization problem can be much worse. By grouping useful data together, data placement can significantly improve cache and memory utilization.

Data placement needs some model of reference affinity to tell which data are useful and should be grouped together. The past models are based on access frequency. Thabit and many others used the frequency of data pairs, called access affinity [32]. Chilimbi used the frequency of data "streams", which are subsequences of data accesses [4]. The frequency model does not consider the time of the accesses. For example, suppose that a program executes in three phases that frequently access three data pairs, say x and y, y and z, and x and z, respectively. If we use only frequency information, we may group all three elements into the same cache block, although they are never used together. The problem becomes worse when grouping for larger storage units such as a memory page, because the chance of grouping unrelated data is much greater. In 1999, Ding and Kennedy used a model to find arrays that are always accessed together [11]. However, the compiler-based model does not address locality in programs with general data and complex control flow.

In this paper, we describe a new model of reference affinity. A set of data have reference affinity if they are always used together by a program; we say that they are in the same affinity group. This reference affinity model has two unique properties that are important for data placement. The first is consistency: the data elements of a group are always accessed together, so placing the data of a group in the same cache block always guarantees high space utilization. We will later define what we mean by "accessed together" and show how the consistency requirement can be relaxed to consider partial utilization of cache. The second property is that the model has a hierarchical structure. The largest group is the set of all program data, if we treat the entire execution as one unit of time. As we change the granularity of time, we find a decomposition of program data into groups of smaller sizes, until the extreme case when each data element is a group by itself. Hierarchical groups allow us to fully utilize the cache hierarchy: an affinity group used in cache-block packing needs at most a dozen elements, while a group used for a memory page may need over one thousand elements. These two properties distinguish this reference affinity model from other existing models, especially frequency-based models.

The rest of this paper is organized as follows. We first define reference affinity and prove its consistency and hierarchical properties. We then describe a new method for analyzing reference affinity at the source level and use it to improve cache utilization. This research is still in progress. We have not formulated all extensions of the basic concepts, nor have we evaluated our method on a broad class of programs or against alternative approaches. This is a preliminary report of our current findings.
2 Reference Affinity
This section first defines three preliminary concepts and gives two examples of our reference affinity model. It then presents the formal definition and proves its properties, including consistent affinity and hierarchical organization.

An address trace or reference string is a sequence of accesses to a set of data elements. If we assign a logical time to each access, the address trace is a vector indexed by the logical time. We use letters such as x, y, and z to represent data elements, subscripted symbols such as a_x to represent an access to a particular data element x, and t(a_x) to represent the logical time of the access a_x on trace T.

The volume distance between two accesses a_x and a_y in a trace T, written dis(a_x, a_y), is the number of distinct data elements accessed from time t(a_x) up to, but not including, time t(a_y) when t(a_x) < t(a_y); if t(a_x) > t(a_y), we let dis(a_x, a_y) = dis(a_y, a_x), and if the two accesses coincide the distance is zero. The volume distance measures the volume of data accessed between two points of a trace. It is in contrast with the time distance, which is the difference between the logical times of two accesses. For example, the volume distance between the accesses to a and c in the trace abbbc is two, because two distinct elements are accessed from the access to a up to, but not including, the access to c. Given any three accesses in time order, a_x, a_y, and a_z, we have dis(a_x, a_z) ≤ dis(a_x, a_y) + dis(a_y, a_z), because the cardinality of the union of two sets is no greater than the sum of the cardinalities of the two sets. Mattson defined the volume distance between a pair of data reuses as the LRU stack distance [23]. Volume distance can be measured in the same way as stack distance. Ding and Zhong recently gave a fast analysis technique that can measure volume distance in traces with tens of billions of memory accesses to hundreds of millions of data [13]. We use Ding and Zhong's method in our experimental study, which will be presented in Section 4.

Based on the volume distance, we define a linked path on a trace. There is a linked path from a_x to a_y with link length k if and only if there exist accesses a_1, a_2, ..., a_t to data elements x_1, x_2, ..., x_t such that (1) dis(a_x, a_1) ≤ k, dis(a_1, a_2) ≤ k, ..., dis(a_t, a_y) ≤ k and (2) x_1, x_2, ..., x_t, x, and y are all different data elements. In other words, a linked path is a sequence of accesses to different data elements, and each link (between two consecutive members of the sequence) has a volume distance no greater than k. We call k the link length. We will later restrict x_1, ..., x_t to be members of some set S; if so, we say that there is a linked path from a_x to a_y with link length k for set S.

We now explain reference affinity with the two example address traces in Fig. 1. The "…" represents accesses to data elements other than x, y, and z. In the first example, accesses to x, y, and z occur in three time ranges. They have consistent affinity because they are always accessed together, so they belong to the same affinity group. The consistency is important for data placement: if two elements are not always used together, putting them into the same cache block would waste cache space whenever only one of the two is accessed. The example also shows that finding this consistency is not trivial.
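As a concrete illustration of the definition above, the sketch below counts distinct data elements between two trace positions. The integer encoding of the trace, the bound on element ids and the function name are assumptions made for the example.

```c
#include <stdio.h>

/* Volume distance between trace positions i and j (i < j): the number of
 * distinct data elements accessed at times i, i+1, ..., j-1.
 * Elements are encoded as small integer ids (an assumption of this sketch). */
static int volume_distance(const int *trace, int i, int j)
{
    int seen[256] = {0};    /* assumes element ids are smaller than 256 */
    int distinct = 0;
    for (int t = i; t < j; t++) {
        if (!seen[trace[t]]) {
            seen[trace[t]] = 1;
            distinct++;
        }
    }
    return distinct;
}

int main(void)
{
    /* the trace "abbbc" from the text, encoded as a=0, b=1, c=2 */
    int trace[] = {0, 1, 1, 1, 2};
    /* distance between the access to a (position 0) and to c (position 4) */
    printf("dis = %d\n", volume_distance(trace, 0, 4));   /* prints 2 */
    return 0;
}
```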
Fig. 1. Examples of reference affinity model and its properties
The accesses to the three data elements appear in different orders, with different frequency, and are mixed with accesses to other data. However, one property holds in all three time ranges: the accesses to the three elements are connected by a linked path with a link length of at most 2. As we will prove later, affinity groups are parameterized by the link length k, and for each k they form a partition of program data. The second example in Fig. 1 shows that the group partition has a hierarchical structure for different link lengths: the affinity group at a link length of 2 splits into two smaller groups when the link length is reduced to 1. The structure is hierarchical with respect to the link length: groups at a smaller link length are subsets of groups at a greater link length. The hierarchical structure is useful in data placement because it may find different-sized affinity groups that match the capacities of the multi-level cache hierarchy. We now present the formal definition of reference affinity.

Definition 1 (Strict Reference Affinity). Given an address trace, a set G of data elements is a strict affinity group (i.e., its elements have reference affinity) with link length k if and only if
1. for any element x in G, every access a_x has a linked path from a_x to some access a_y of every other member y of G, where all intermediate accesses on the path are to distinct elements of G;
2. adding any other element to G would make Condition (1) impossible to hold.
The following theorem proves that strict affinity groups are consistent because they form a partition of program data; in other words, each data element belongs to one and only one affinity group.

Theorem 1. Given an address trace and a link length k, the affinity groups defined by Definition 1 form a unique partition of program data.

Proof. We show that any element x of program data belongs to one and only one affinity group at link length k. For the "one" part, observe that Condition (1) in Definition 1 holds trivially when x is the only member of a group, so any element must belong to some affinity group.
We prove the "only-one" part by contradiction. Suppose an element x belongs to two different affinity groups G1 and G2. We show that G1 ∪ G2 also satisfies Condition (1). Consider any two elements y and z of G1 ∪ G2 and any access a_y. If y and z belong to the same group, Condition (1) holds directly. Without loss of generality, assume y is in G1 and z is in G2. Because x and y are both in G1, a_y has a linked path to some access a_x through elements of G1; because x and z are both in G2, that a_x has a linked path to some access a_z through elements of G2. If the elements visited by the two paths are all different, joining the paths gives a linked path from a_y to a_z within G1 ∪ G2. If some element appears on both paths, we can shortcut the combined path at an access to that shared element and repeat the argument on the shorter remaining path. Each step strictly shortens the path, so the process must terminate in a finite number of steps and yields a linked path from a_y to a_z within G1 ∪ G2. Therefore, Condition (1) always holds for G1 ∪ G2. Since G1 and G2 are proper subsets of G1 ∪ G2, they are not the largest sets that satisfy Condition (1), so Condition (2) does not hold for G1 or G2, which is a contradiction. Therefore, x belongs to only one affinity group, and the affinity groups form a partition. For a fixed link length, the partition is unique: if more than one partition could result from Definition 1, some element x would belong to a group G1 in one partition and a different group G2 in another. As we have just seen, this is not possible because G1 ∪ G2 would satisfy Condition (1), and therefore neither G1 nor G2 would be an affinity group.

As we just proved, reference affinity is consistent because all members of a group are always accessed together. The consistency means that packing data in an affinity group will always improve cache utilization. In addition, the group partition is unique because each data element belongs to one and only one group for a fixed k. The uniqueness removes any possible conflict, which would happen if a data element could appear in multiple affinity groups. Next we prove that strict reference affinity has a hierarchical structure: an affinity group with a shorter link length is a subset of an affinity group with a greater link length.

Theorem 2. Given an address trace and two link lengths k1 < k2, the affinity groups at k1 form a finer partition of the affinity groups at k2.

Proof. We show that any affinity group at k1 is a subset of some affinity group at k2. Let G be an affinity group at k1 and G' be the affinity group at k2 that overlaps with G. Since any two elements of G are connected by a linked path with link length k1, they are also connected by a linked path with the larger link length k2. According to the proof of Theorem 1, G ∪ G' is then an affinity group at k2. G must be a subset of G'; otherwise G' would not be an affinity group at k2, because it could be expanded while still guaranteeing Condition (1).

Finally, we show that elements of the same affinity group are always accessed together: when one element is accessed, all other elements are accessed within a time range of bounded volume distance.

Theorem 3. Given an address trace with an affinity group G at link length k, any time an element of G is accessed at some access a_x, there exists a time range that includes a_x and at least one access to every other member of G, and the volume distance of the time range is no greater than 2k(g-1)+1, where g is the number of elements in the affinity group.

Proof. According to Definition 1, for every other member y of G there is a linked path from a_x to some access a_y. Sort these accesses in time order, and let a_e be the earliest and a_l be the latest in the trace. There is a linked path from a_x to a_e; it has at most g-1 links, each with a volume distance no greater than k, so the volume distance from a_x to a_e is no greater than k(g-1). The bound on the volume distance from a_x to a_l is the same. Considering that a_x itself needs to be included in the time range, the total volume distance of the range is at most 2k(g-1)+1.

The strict affinity requires that members of an affinity group are always accessed together. In many cases, a group of data may often be accessed together but not always. We can relax the first condition to require a group member to be accessed with the other members a fraction p of the time instead of all the time. The formal definition is below; the only change is the first condition.

Definition 2 (Partial Reference Affinity). Given an address trace, a set G of data elements is a partial affinity group with link length k if and only if
1. for any element x in G, at least a fraction p of its accesses have a linked path from a_x to some access of every other member of G;
2. adding any other element to G would make Condition (1) impossible to hold.

Partial affinity groups do not produce a unique partition of program data, and the structure is not strictly hierarchical. The loss in consistency and organization depends on p. As on-going work, we are currently quantifying the bound of this loss as a function of p.
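To make the consistency property more tangible, the following brute-force sketch tests the necessary condition suggested by Theorem 3 as reconstructed above: around every access to a member of a candidate group G, some window of the trace must contain all members of G while touching at most 2k(g-1)+1 distinct elements. This is only an illustrative check on an integer-encoded trace, not the analysis used in the paper, and the bound, the encoding and the names are assumptions.

```c
#include <stdio.h>

#define MAX_ELEMS 64   /* assumed upper bound on element ids in the sketch */

/* Returns 1 if every access to a member of G lies in some window that contains
 * at least one access to every member of G and touches at most 2*k*(g-1)+1
 * distinct data elements; returns 0 otherwise. */
static int passes_theorem3(const int *trace, int n, const int *G, int g, int k)
{
    int bound = 2 * k * (g - 1) + 1;
    int in_group[MAX_ELEMS] = {0};
    for (int m = 0; m < g; m++) in_group[G[m]] = 1;

    for (int i = 0; i < n; i++) {
        if (!in_group[trace[i]]) continue;
        int found = 0;
        for (int lo = i; lo >= 0 && !found; lo--) {       /* try all window starts */
            int seen[MAX_ELEMS] = {0};
            int distinct = 0, members = 0;
            for (int hi = lo; hi < n; hi++) {             /* extend the window */
                if (!seen[trace[hi]]) {
                    seen[trace[hi]] = 1;
                    distinct++;
                    if (in_group[trace[hi]]) members++;
                }
                if (distinct > bound) break;              /* window got too "wide" */
                if (hi >= i && members == g) { found = 1; break; }
            }
        }
        if (!found) return 0;
    }
    return 1;
}

int main(void)
{
    int G[] = {0, 1, 2};   /* candidate group {x, y, z} encoded as ids 0, 1, 2 */
    int good[] = {0, 1, 2, 7, 8, 2, 0, 1, 9, 1, 2, 0};   /* members stay together */
    int bad[]  = {0, 1, 3, 4, 5, 6, 7, 8, 9, 10, 2};     /* z drifts far from x, y */

    printf("good trace: %d\n", passes_theorem3(good, 12, G, 3, 2));  /* prints 1 */
    printf("bad trace:  %d\n", passes_theorem3(bad, 11, G, 3, 2));   /* prints 0 */
    return 0;
}
```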
3 Clustering Analysis
The purpose of clustering analysis is to identify affinity groups: data elements that tend to be accessed simultaneously should be clustered into the same group. We use k-means and its extension, x-means, to statistically measure the similarity of the reuse behavior of individual data elements and to do the clustering.

K-means is a popular statistical clustering algorithm, first proposed by MacQueen [22] in 1967. The optimization criterion implied in k-means is the sum-of-squares criterion [16]: the aim is to minimize the total within-group sum of squares. The basic idea of the algorithm is an iterative regrouping of objects until a local minimum is reached [18]. A sketch of the algorithm is as follows:
1. Initialize with k arbitrarily selected centroids, one per group;
2. Assign each object to the closest centroid;
3. For each group, adjust the centroid to be the mean of all objects assigned to that group;
4. If there were any changes in step 2 or 3, go to step 2; otherwise, stop.
One problem with k-means is that the value of k needs to be specified at the beginning. For our affinity analysis, we may not know the optimal number of groups in the first place. Therefore, we also apply the extension of k-means, x-means [27], in our analysis. X-means relies on the BIC (Bayesian Information Criterion) to compare the clusterings formed for different values of k. Based on the BIC calculation, it approximately measures the probability of each clustering given the original data set, and the clustering with the highest probability is chosen.

According to the definition given in Section 2, an accurate way to identify affinity groups would be to record and compare the reference trace of each data element. However, the time and space overhead of this approach would be high. Therefore, we propose an approximate estimation of affinity groups according to the reuse distance distribution of individual data elements. Reuse distance is equivalent to the volume distance between two adjacent reuses of the same datum. We use the efficient measurement described in [13] to collect reuse distance information. For an array, we do not distinguish references to different array elements but view the whole array as a single data element. For a structure, we consider the accumulated reuse distance distributions of the accesses to each structure field over all instances of the structure. For example, a tree structure composed of two fields left and right will be considered two data elements: no matter how many objects of this structure type are dynamically allocated, the references to the first field of these objects are counted as accesses to the first data element, and likewise for the second field. For any datum, the whole reuse distance scope for one execution is from zero to the maximal reuse distance that occurs in the execution. We divide this scope into a set of ranges, each of the form [d1, d2), where d1 and d2 are two reuse distance values. Then for each range, we count the number of references whose reuse distance falls into that range and calculate the average distance of
these reuses. The set composed of the number of references within each range forms a counter vector, and the set of the average distances calculated for each range forms a distance vector. These two vectors are collected for every data element that we target to be grouped. Each of them describes the reuse behavior of the corresponding data element by locating it in an N-dimensional space, where N is the number of reuse distance ranges considered. Since references with a long reuse distance have a more significant effect on performance, we emphasize such references in the clustering analysis. For all experiments reported in this paper, only references with a reuse distance longer than 2048 are used in clustering, and the reuse distance ranges are divided linearly with a constant length of 2048. In other words, the reuse distance ranges we consider in the clustering analysis begin with [2048, 4096) and go on with [4096, 6144), [6144, 8192), and so forth. An example with the above setting is given in Table 1. Suppose there are 10 reuses of the left field of the instances of a tree structure, with reuse distances (sorted in increasing order): {2500, 3000, 3500, 4000, 4500, 5000, 5500, 6000, 6500, 7000}. We construct 3 reuse distance ranges for this field, and Table 1 describes the statistics collected for each range. The last two columns of the table list the two vectors to be clustered as (4, 4, 2) and (3250, 5250, 6750), respectively. Our overall algorithm is shown in Fig. 2.
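The statistics in Table 1 follow mechanically from the ten reuse distances. The short sketch below bins reuse distances into ranges of length 2048 starting at 2048 and prints the counter vector and distance vector; the constants and names are choices made for this example only.

```c
#include <stdio.h>

#define RANGE_LEN 2048
#define MIN_DIST  2048
#define NRANGES   3     /* [2048,4096), [4096,6144), [6144,8192) */

int main(void)
{
    /* reuse distances of the ten accesses to the "left" field, from the text */
    int dist[] = {2500, 3000, 3500, 4000, 4500, 5000, 5500, 6000, 6500, 7000};
    int n = sizeof(dist) / sizeof(dist[0]);

    int    counter[NRANGES] = {0};   /* number of reuses per range    */
    double sum[NRANGES]     = {0};   /* used for the average distance */

    for (int i = 0; i < n; i++) {
        if (dist[i] < MIN_DIST) continue;          /* short reuses are ignored */
        int r = (dist[i] - MIN_DIST) / RANGE_LEN;
        if (r >= NRANGES) continue;
        counter[r] += 1;
        sum[r] += dist[i];
    }
    for (int r = 0; r < NRANGES; r++)
        printf("range %d: count=%d avg=%.0f\n",
               r, counter[r], counter[r] ? sum[r] / counter[r] : 0.0);
    /* prints counts (4, 4, 2) and averages (3250, 5250, 6750) as in Table 1 */
    return 0;
}
```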
Fig. 2. Clustering analysis for data grouping
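For completeness, here is a compact k-means sketch in the spirit of the four steps listed above, applied to small fixed-length feature vectors such as the counter vectors. The data, dimensions and initialization are placeholders; the experiments in this paper use the k-means/x-means toolkit of Pelleg and Moore rather than code like this.

```c
#include <stdio.h>
#include <string.h>

#define NOBJ 6    /* number of data elements to cluster (placeholder data) */
#define DIM  3    /* number of reuse distance ranges, i.e., vector length  */
#define K    2    /* number of groups                                      */

static double sqdist(const double *a, const double *b)
{
    double s = 0.0;
    for (int d = 0; d < DIM; d++) s += (a[d] - b[d]) * (a[d] - b[d]);
    return s;
}

int main(void)
{
    /* placeholder feature vectors, e.g., counter vectors of six fields */
    double obj[NOBJ][DIM] = {
        {4, 4, 2}, {4, 3, 2}, {5, 4, 1},    /* behave alike              */
        {0, 1, 9}, {1, 0, 8}, {0, 2, 9}     /* behave alike, differently */
    };
    double cent[K][DIM];
    int    assign[NOBJ];

    /* 1. initialize centroids with arbitrarily selected objects */
    memcpy(cent[0], obj[0], sizeof cent[0]);
    memcpy(cent[1], obj[3], sizeof cent[1]);
    for (int i = 0; i < NOBJ; i++) assign[i] = -1;

    for (int changed = 1; changed; ) {
        changed = 0;
        /* 2. assign each object to the closest centroid */
        for (int i = 0; i < NOBJ; i++) {
            int best = 0;
            for (int c = 1; c < K; c++)
                if (sqdist(obj[i], cent[c]) < sqdist(obj[i], cent[best]))
                    best = c;
            if (assign[i] != best) { assign[i] = best; changed = 1; }
        }
        /* 3. move each centroid to the mean of its assigned objects */
        for (int c = 0; c < K; c++) {
            double mean[DIM] = {0};
            int cnt = 0;
            for (int i = 0; i < NOBJ; i++) {
                if (assign[i] != c) continue;
                cnt++;
                for (int d = 0; d < DIM; d++) mean[d] += obj[i][d];
            }
            if (cnt > 0)
                for (int d = 0; d < DIM; d++) cent[c][d] = mean[d] / cnt;
        }
        /* 4. repeat until no assignment changes */
    }

    for (int i = 0; i < NOBJ; i++)
        printf("object %d -> group %d\n", i, assign[i]);
    return 0;
}
```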
4 Evaluation
In this section, we measure the effect of the affinity group clustering by reorganizing the data layout according to the clustering results and comparing the performance changes.

Two Test Programs. We test on two different programs: Cheetah and Swim. Cheetah is a cache simulator written in C, included in the SimpleScalar suite 3.0. It constructs a splay tree to store and maintain the cache content. The tree structure of Cheetah is composed of seven fields. In our experiments, we check the reuse behavior of each individual field within the tree structure and cluster the fields into groups. According to the different clusterings, we implement different versions of structure splitting on the source file and compare their performance. Swim from Spec95 calculates finite difference approximations for the shallow water equation. The reuse distance distributions of the fourteen arrays of real type in this Fortran program are collected and used in clustering. Then, the source file is changed by merging arrays clustered into the same group into a single array. Again, the performance of the different versions is compared. These experiments also explore the potential uses and benefits of data clustering based on locality behavior.

Clustering Methods. We use the k-means and x-means analysis tool implemented by Pelleg and Moore at Carnegie Mellon University [27]. Each data object to be clustered is represented by a set of feature values, each collected for a given reuse distance range. Two types of feature are considered: the number of reuses and the average reuse distance. The reuse distance ranges have a fixed length of 2048, as described in Section 3.

Platforms. The experiments are performed on three different machines: a 250MHz MIPS R10K processor with the SGI MIPSpro compiler, a Sun Sparc U4500/336 processor, and a 2GHz Intel Pentium IV processor with the Linux gcc compiler. All programs are compiled with optimization flag -Ofast or -O3, respectively.

Structure Splitting. Table 2 lists the clustering results for the tree structure in Cheetah. The input to the Cheetah simulator is an access trace from JPEG encoding of images ranging from dozens to thousands of bytes. Although the tree structure consists of seven fields, rtwt, rt, lft, inum, addr, grpno and prty, the table only lists the first five of them. The reason is that the simulator for a fully associative LRU cache only accesses the first five fields, so we apply clustering only to them; clustering the other two is trivial. The first column of the table gives the size of the encoded image. Columns 2 and 3 describe how the clustering is applied. The fourth column contains the number of clusters identified, while the last column lists the clustering result. While k-means
gives results for two to four clusters, we only show groupings with two and three clusters here. The clustering results shown in Table 2 have two important features. First, the clustering on the five fields varies across different inputs or different clustering algorithms. Second, although there is no single winner, the clustering indicates a strong affinity among rtwt, rt, lft and inum. Therefore, we choose to reorganize the tree structure by grouping these four fields in different ways. Table 3 lists the structure splittings we tested. Each row of Table 3 describes a grouping of the seven fields of the tree structure. Version orig is an array-based version of the original Cheetah. In orig, the tree nodes are contained in a big pre-allocated array instead of dynamically allocated at run time. This array-based Cheetah simulator is faster than the original Cheetah (over 10% faster when tested on an SGI processor). All other versions modify version orig to implement structure splitting. V1 divides the structure into seven groups, each containing a single field. The other three versions group in different ways according to the similarity measured by the clustering analysis. We change the source files by hand to get different versions. The access trace of JPEG encoding a testing image size of 272 KB is used as the testing input.
Different versions of Cheetah are compiled and run on the three platforms described above, and the running times are collected by the standard time utility. Table 4 summarizes the experimental results: for each version listed, it gives the execution time on the three platforms. The user time reported by the time command is used as the execution time. The comparison between the first and second rows of the table shows that there is no clear benefit from simply dividing the original structure into individual fields; version V1 runs slower than orig on both the MIPS and Pentium machines. However, by grouping the fields with similar reuse behavior together, there is almost always a performance gain in the other versions. One version is consistently the best among all the groupings: it is up to 17.5% faster than the original version. This shows that the clustering analysis based on reuse distance distribution is effective in identifying affinity groups.

Array Grouping. Swim has fourteen arrays of the same size. We apply clustering analysis to these arrays and merge the arrays in the same cluster. Table 5 describes a subset of the clustering results, and we tested the performance of these groupings. The training data for the clustering analysis was collected by running Swim with an input matrix of size 32 × 32. The last row of Table 5 includes an array grouping based on static analysis, obtained by compiler analysis [11]. One restriction of array merging is that the original arrays must have exactly the same size in all dimensions; this can be checked manually or by a compiler. The transformation process to get different
grouping versions at the source level is semi-automatic. We tested Swim with an input matrix of size 512 × 512. Table 6 gives the execution times for the original Swim and all five grouping versions. It shows that the array groupings identified by clustering analysis outperform the grouping based on static analysis most of the time. The version identified as the optimal clustering by the x-means method is the best one on all machines: it reduces the execution time by up to 31.1% compared to the original version and by up to 13.2% compared to the static analysis version.
5 Related Work
An effective method for fully utilizing cache is to make data access contiguous. Instead of rearranging data, the early studies reordered loops so that the innermost loop traverses data contiguously within each array. Various loop permutation schemes were studied for perfect loop nests or loops that can be made perfect, including those by Abu-Sufah et al. [2], Gannon et al. [15], Wolf and Lam [33], and Ferrante et al. [14]. McKinley et al. developed an effective heuristic that permutes loops into memory order for both perfect or non-perfect nested loops [24]. Loop reordering, however, cannot always achieve contiguous data traversal because of data dependences. This observation led Cierniak and Li to combine data transformation with loop reordering [6]. Kremer developed a general formulation for finding the optimal data layout that is either static or dynamic for a program at the expense of being an NP-hard problem and showed that it is practical to use integer programming to find an optimal solution [20]. Computation reordering is powerful when applicable. However, in many programs, not all data accesses in all programs can be made contiguous. Alternatively, we can pack data that are used together. Early studies used the frequency of data access, measured by sample- and counter-based profiling by Knuth [21] and static probability analysis by Cocke and Kennedy [7] and by Sarkar [29]. Frequency information is frequently used in data placement, as we reviewed in the introduction. In addition, Chilimbi et al. split Java classes based on the access frequency of class members [5]. In addition to packing for cache
blocks, Seidel and Zorn packed dynamically allocated data in memory pages [30]. Access frequency does not distinguish the time of access: that a pair or a group of data are frequently accessed does not mean that they are frequently accessed together, and that a group of data are accessed together more often than other data does not mean the data group are accessed together always or most of the time. In a recent paper, Petrank and Rawitz formalized this observation and proved a harsh bound: with only pair-wise information, no algorithm can find a static data layout that is always within a certain factor of the optimal solution, where the factor is proportional to the size of the cache [28]. Unlike reference affinity, the frequency-based models do not find data groups that are always accessed together. Neither do they partition data in a hierarchy based on their access pattern. Eggers and Jeremiassen grouped data fields that were accessed by a parallel thread to reduce false sharing [19]. Ding and Kennedy regrouped Fortran arrays that are always accessed together to improve cache utilization [11]. Ding and Kennedy later extended it to group high-dimensional data at multiple granularities [12]. While the previous studies used compiler analysis, this work generalizes the concept of reference affinity to address traces. It also proposes a new profiling-based method for finding affinity groups among source-level data. Preliminary results show that the new method outperforms the data layout given by either the compiler or the programmer. The above data packing methods are static and therefore cannot fully optimize dynamic programs whose data access pattern changes during execution. Dynamic data placement was first studied under an inspector-executor framework [8]. Al-Furaih and Ranka examined graph-based clustering of irregular data for cache [3]. Other models include consecutive packing by Ding and Kennedy [10], space-filling curve by Mellor-Crummey et al. [25], graph partitioning by Han and Tseng [17], and bucket sorting by Mitchell et al. [26]. Several studies found that consecutive packing compared favorably with other models [25, 31].
6 Summary
We have defined a new reference affinity model and proved its three basic properties: consistent groups, hierarchical organization, and bounded reference distance. We have described a clustering method to identify affinity groups among source-level structure fields and data arrays. The method uses data reuse statistics collected from training runs. It uses k-means and x-means clustering algorithms as a sub-procedure and explores a smaller number of choices before determining the reference affinity. When used by structure splitting and array grouping, the new method reduces execution time by up to 31%. It outperforms previous compiler analysis by up to 13%. As on-going work, we are formulating partial reference affinity, studying more accurate ways of reference affinity analysis, and exploring other uses of this locality model in program optimization.
Acknowledgement
This work is supported by the National Science Foundation (Contract No. CCR-0238176, CCR-0219848, and EIA-0080124) and the Department of Energy (Contract No. DE-FG02-02ER25525). We would like to thank Dan Pelleg and Andrew Moore for their assistance with the k-means and x-means toolkit.
References [1] W. Abu-Sufah. Improving the Performance of Virtual Memory Computers. PhD thesis, Dept. of Computer Science, University of Illinois at Urbana-Champaign, 1979. [2] W. Abu-Sufah, D. Kuck, and D. Lawrie. On the performance enhancement of paging systems through program analysis and transformations. IEEE Transactions on Computers, C-30(5):341–356, May 1981. [3] I. Al-Furaih and S. Ranka. Memory hierarchy management for iterative graph structures. In Proceedings of International Parallel Prcessing Symposium and Symposium on Parallel and Distributed Processing, Orlando, Florida, April 1998. [4] T. M. Chilimbi. Efficient representations and abstractions for quantifying and exploiting data reference locality. In Proceedings of ACM SIGPLAN Conference on Programming Language Design and Implementation, Snowbird, Utah, June 2001. [5] T. M. Chilimbi, B. Davidson, and J. R. Larus. Cache-conscious structure definition. In Proceedings of SIGPLAN Conference on Programming Language Design and Implementation, Atlanta, Georgia, May 1999. [6] M. Cierniak and W. Li. Unifying data and control transformations for distributed shared-memory machines. In Proceedings of the SIGPLAN ’95 Conference on Programming Language Design and Implementation, La Jolla, California, 1995. [7] J. Cocke and K. Kennedy. Profitability computations on program flow graphs. Technical Report RC 5123, IBM, 1974. [8] R. Das, D. Mavriplis, J. Saltz, S. Gupta, and R. Ponnusamy. The design and implementation of a parallel unstructured euler solver using software primitives. In Proceedings of the 30th Aerospace Science Meeting, Reno, Navada, January 1992. [9] P. Denning. Working sets past and present. IEEE Transactions on Software Engineering, SE-6(1), January 1980. [10] C. Ding and K. Kennedy. Improving cache performance in dynamic applications through data and computation reorganization at run time. In Proceedings of the SIGPLAN ’99 Conference on Programming Language Design and Implementation, Atlanta, GA, May 1999. [11] C. Ding and K. Kennedy. Inter-array data regrouping. In Proceedings of The 12th International Workshop on Languages and Compilers for Parallel Computing, La Jolla, California, August 1999. [12] C. Ding and K. Kennedy. Improving effective bandwidth through compiler enhancement of global cache reuse. In Proceedings of International Parallel and Distributed Processing Symposium, San Francisco, CA, 2001. http://www.ipdps.org. [13] C. Ding and Y. Zhong. Predicting whole-program locality with reuse distance analysis. In Proceedings of ACM SIGPLAN Conference on Programming Language Design and Implementation, San Diego, CA, June 2003.
[14] J. Ferrante, V. Sarkar, and W. Thrash. On estimating and enhancing cache effectiveness. In U. Banerjee, D. Gelernter, A. Nicolau, and D. Padua, editors, Languages and Compilers for Parallel Computing, Fourth International Workshop, Santa Clara, CA, Aug. 1991. Springer-Verlag. [15] D. Gannon, W. Jalby, and K. Gallivan. Strategies for cache and local memory management by global program transformation. Journal of Parallel and Distributed Computing, 5(5):587–616, Oct. 1988. [16] A.D. Gordon. Classification. Chapman and Hall, 1981. [17] H. Han and C. W. Tseng. Locality optimizations for adaptive irregular scientific codes. Technical report, Department of Computer Science, University of Maryland, College Park, 2000. [18] J. A. Hartigan. Clustering Algorithms. John Wiley & Sons, 1975. [19] T. E. Jeremiassen and S. J. Eggers. Reducing false sharing on shared memory multiprocessors through compile time data transformations. In Proceedings of the Fifth ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, pages 179–188, Santa Barbara, CA, July 1995. [20] K. Kennedy and U. Kremer. Automatic data layout for distributed memory machines. ACM Transactions on Programming Languages and Systems, 20(4), 1998. [21] D. Knuth. An empirical study of FORTRAN programs. Software—Practice and Experience, 1:105–133, 1971. [22] J. MacQueen. Some methods for classification and analysis of multivariate observations. In Proceedings of 5th Berkeley Symposium on Mathematical Statisitics and Probability, pages 281–297, 1967. [23] R. L. Mattson, J. Gecsei, D. Slutz, and I. L. Traiger. Evaluation techniques for storage hierarchies. IBM System Journal, 9(2):78–117, 1970. [24] K. S. McKinley, S. Carr, and C.-W. Tseng. Improving data locality with loop transformations. ACM Transactions on Programming Languages and Systems, 18(4):424–453, July 1996. [25] J. Mellor-Crummey, D. Whalley, and K. Kennedy. Improving memory hierarchy performance for irregular applications. International Journal of Parallel Programming, 29(3), June 2001. [26] N. Mitchell, L. Carter, and J. Ferrante. Localizing non-affine array references. In Proceedings of International Conference on Parallel Architectures and Compilation Techniques, Newport Beach, California, October 1999. [27] D. Pelleg and A. Moore. X-means: Extending k-means with efficient estimaiton of the number of clusters. In Proceddings of the 17th International Conference on Machine Learning, pages 727–734, San Francisco, CA, 2000. [28] E. Petrank and D. Rawitz. The hardness of cache conscious data placement. In Proceedings of ACM Symposium on Principles of Programming Languages, Portland, Oregon, January 2002. [29] V. Sarkar. Determining average program execution times and their variance. In Proceedings of ACM SIGPLAN Conference on Programming Language Design and Implementation, Portland, Oregon, January 1989. [30] M. L. Seidl and B. G. Zorn. Segregating heap objects by reference behavior and lifetime. In Proceedings of the Eighth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS-VIII), San Jose, Oct 1998. [31] M. M. Strout, L. Carter, and J. Ferrante. Compile-time composition of run-time data and iteration reorderings. In Proceedings of ACM SIGPLAN Conference on Programming Language Design and Implementation, San Diego, CA, June 2003.
[32] K. O. Thabit. Cache Management by the Compiler. PhD thesis, Dept. of Computer Science, Rice University, 1981. [33] M. E. Wolf and M. Lam. A data locality optimizing algorithm. In Proceedings of the SIGPLAN ’91 Conference on Programming Language Design and Implementation, Toronto, Canada, June 1991.
Cache Optimization for Coarse Grain Task Parallel Processing Using Inter-Array Padding
Kazuhisa Ishizaka, Motoki Obata, and Hironori Kasahara
Department of Computer Science, Waseda University, 3-4-1 Ohkubo, Shinjuku-ku, Tokyo, 169-8555, Japan
{ishizaka,obata,kasahara}@oscar.elec.waseda.ac.jp
Abstract. The wide use of multiprocessor systems has been making automatic parallelizing compilers more important. To further improve the performance of multiprocessor systems by compilation, multigrain parallelization is important. In multigrain parallelization, coarse grain task parallelism among loops and subroutines and near fine grain parallelism among statements are used in addition to the traditional loop parallelism. In addition, locality optimization to use the cache effectively is also important for performance improvement. This paper describes inter-array padding to minimize cache conflict misses among macro-tasks, used with a data localization scheme which decomposes loops sharing the same arrays to fit the cache size and executes the decomposed loops consecutively on the same processor. In the performance evaluation on Sun Ultra 80 (4pe), OSCAR compiler, on which the proposed scheme is implemented, gave 2.5 times speedup against the maximum performance of the Sun Forte compiler's automatic loop parallelization on average over the SPEC CFP95 tomcatv, swim, hydro2d and turb3d programs. OSCAR compiler also showed 2.1 times speedup on IBM RS/6000 44p-270 (4pe) against the XLF compiler.
1 Introduction
Multiprocessor architectures are currently used in a wide range of computers, including high performance computers, entry level servers, and games embedding chip multiprocessors. To improve the usability and effective performance of multiprocessor systems, automatic parallelizing compilers are required, and such compilers have been researched extensively. For example, the Polaris[1] compiler exploits loop level parallelism by using symbolic analysis, runtime data dependence analysis, the range test and so on. Loop parallelization considering data locality optimization using unimodular transformations, affine partitioning and so on has been researched in the SUIF compiler [2]. Since various kinds of loops can be parallelized by those advanced compilers, to further improve the effective performance of multiprocessor systems, the use of different grains of parallelism, such as coarse grain task parallelism among loops and subroutines and fine grain parallelism among statements and instructions in addition to loop level parallelism, should be considered. NANOS
compiler[3] uses multi level parallelism by using the extended OpenMP API. PROMIS compiler [4] integrates loop level parallelism and instruction level parallelism using a common intermediate language. Multigrain parallel processing which has been realized in OSCAR compiler [5], APC compiler (Advance Parallelizing Compiler developed by Japanese millennium project IT21)[6] uses coarse grain task parallelism among loops and subroutines and near fine grain parallelism among statements. Also, optimization for memory hierarchy to minimize the memory access overhead that is getting larger with the speedup of a processor is important to improve the performance. Loop restructurings such as loop permutation, loop fusion and tiling to change data access pattern in a loop are researched as the cache optimization by the compiler. Data layout transformations including strip mining and array permutation to make data access pattern contiguous are also researched. Intra-array padding and inter-array padding to reduce conflict misses in a single loop or a fused loop are proposed[7]. Also, the loop fusion scheme using peeling and shifting of loop iteration to allow fusion and maintain loop parallelism has been used to enhance data locality[8]. Furthermore, after loop fusion, conflict misses can be reduced by cache partitioning[9]. The performance of physically-indexed cache depends on the page placement policy of operating system such as page coloring and bin hopping[10]. Runtime recoloring scheme using the extended hardware such as Cache Miss Lookaside buffer to traces cache conflict misses has been proposed [11]. Low overhead recoloring using extended TLB to record the cache color is also researched [12]. In addition to these approaches requiring the extended hardware, OS and compiler cooperative page coloring scheme without hardware extension using information on access pattern of program provided by compiler is proposed [13]. This paper proposes the padding scheme to reduce conflict misses to improve the performance of the coarse grain task parallel processing. In the cache optimization for coarse grain task parallel processing [14], at first, complier divides loops into smaller loops to fit data size accessed by loops to cache size. Next, the compiler analyzes parallelism among tasks including the divided loops using Earliest Executable Condition analysis and schedules tasks which shared the same data to the same processor so that the tasks can be executed consecutively accessing the shared data on the cache. After that, cache line conflict misses among tasks which are executed consecutively are reduced by padding proposed in this paper. Although ordinary cache optimizations by the compiler target a single loop or a fused loop, the proposed scheme optimizes cache performance over loops. The rest of this paper is organized as follows. In section 2, the coarse grain task parallel processing is described. Section 3 describes the cache optimization scheme using data localization for the coarse grain task parallel processing. Section 4 proposes the padding scheme to reduce conflict misses over loops. The effectiveness of the proposed schemes is evaluated on the commercial multiprocessors using several benchmarks in SPEC CFP95 in section 5. Finally, concluding remarks are described in section 6.
Fig. 1. An Example of Macro-Task Graph
2 Coarse Grain Task Parallel Processing
This section describes the coarse grain task parallel processing to which the proposed cache optimization scheme is applied. In coarse grain task parallel processing, a source program is decomposed into three kinds of coarse grain tasks, or macro-tasks, namely blocks of pseudo assignment statements (BPA), repetition blocks (RB), and subroutine blocks (SB). Macro-tasks are also generated hierarchically inside of a sequential repetition block and a subroutine block.
2.1 Generation of Macro-Task Graph
After the generation of macro-tasks, the compiler analyzes the data flow and control flow among macro-tasks in each layer, or each nested level. Next, to extract parallelism among macro-tasks, the compiler analyzes the Earliest Executable Condition (EEC)[5] of each macro-task. The EEC represents the conditions on which a macro-task may begin its execution earliest. The EECs of macro-tasks are represented in a macro-task graph (MTG) as shown in Fig.1. In a macro-task graph, nodes represent macro-tasks. A small circle inside a node represents a conditional branch. Solid edges represent data dependencies. Dotted edges represent extended control dependencies. An extended control dependency means an ordinary control dependency and the condition on which a data dependent predecessor macro-task is not executed. A solid arc represents that the edges connected by the arc are in an AND relationship. A dotted arc represents that the edges connected by the arc are in an OR relationship.
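One possible in-memory representation of a macro-task graph node is sketched below. The field names and the simplified readiness test are assumptions for illustration; the actual Earliest Executable Condition also evaluates the AND/OR-combined branch conditions described above, which the sketch omits.

```c
#include <stddef.h>

/* Kinds of macro-tasks described in the text. */
typedef enum { MT_BPA, MT_RB, MT_SB } mt_kind_t;

/* One node of a macro-task graph (illustrative layout, not OSCAR's). */
typedef struct macro_task {
    int        id;
    mt_kind_t  kind;
    int        ndata_pred;               /* data-dependence predecessors             */
    const struct macro_task **data_pred;
    int        nctrl_pred;               /* extended control-dependence predecessors */
    const struct macro_task **ctrl_pred;
    int        done;                     /* set when the task has finished           */
} macro_task;

/* Simplified readiness test: all data-dependence predecessors have finished.
 * The real EEC also handles OR-combined conditions from branches. */
static int is_ready(const macro_task *t)
{
    for (int i = 0; i < t->ndata_pred; i++)
        if (!t->data_pred[i]->done)
            return 0;
    return 1;
}

int main(void)
{
    macro_task a = { 1, MT_RB, 0, NULL, 0, NULL, 1 };
    const macro_task *preds[] = { &a };
    macro_task b = { 2, MT_RB, 1, preds, 0, NULL, 0 };
    return is_ready(&b) ? 0 : 1;   /* b is ready because a is done */
}
```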
2.2 Macro-Task Scheduling
In the coarse grain task parallel processing, static scheduling and dynamic scheduling are used for assignment of macro-tasks to processors.
If a macro-task graph has only data dependencies and is deterministic, static scheduling is selected. In static scheduling, the assignment of macro-tasks to processors is determined at compile time by the scheduler in the compiler. Static scheduling is useful since it allows us to minimize data transfer and synchronization overhead without runtime scheduling overhead. If a macro-task graph has control dependencies, dynamic scheduling is selected to cope with runtime uncertainties like conditional branches. Scheduling routines for dynamic scheduling are generated by the compiler and embedded into the parallelized program together with the macro-task code.
2.3 Code Generation
OSCAR compiler has several backends and generates the parallelized code for multiple target architectures. In this paper, OpenMP backend is used to generate OpenMP FORTRAN from sequential FORTRAN. OSCAR compiler generates the portable code for various shared memory multiprocessors by using “one-time single code generation” technique[5, 15]. Furthermore, by using native compiler as the backend of OSCAR compiler, general optimizations and machine specific optimizations provided by it are applied to the generated code. Therefore, the performance of OSCAR compiler can be used as a performance booster of the native compiler on the state of the art multiprocessor.
3 Cache Optimization for Coarse Grain Task Parallel Processing
If macro-tasks that access the same data are executed consecutively on the same processor, the shared data can be transferred among these macro-tasks through fast memory such as the cache. This section describes cache optimization using data localization [16] to enhance the performance of coarse grain task parallel processing.
3.1 Loop Aligned Decomposition
To avoid cache misses among macro-tasks, Loop Aligned Decomposition (LAD)[16] is applied to loops that access large data. LAD divides a loop into partial loops with a smaller number of iterations so that the data size accessed by each divided loop is smaller than the cache size. The partial loops are treated as coarse grain tasks, and the Earliest Executable Condition (EEC) analysis is applied to them. Partial loops connected by data dependence edges on the macro-task graph are grouped into a "Data Localization Group (DLG)". The partial loops, or macro-tasks, inside a DLG are assigned to the same processor as consecutively as possible by the static or dynamic scheduler. In the macro-task graph of Fig.2(a), it is assumed that macro-tasks 2, 3 and 7 are parallel loops, that they access the same shared variables, and that their size exceeds
Fig. 2. Example of Loop Aligned Decomposition
cache size. In this example, the loops are divided into four partial loops by the LAD. For example, macro-task 2 in Fig.2(a) is divided into macro-tasks 2_A through 2_D in Fig.2(b). Also, DLGs are defined; for example, 2_A, 3_A and 7_A are grouped into DLG_A.
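The division itself can be pictured with a small sketch that splits a loop into partial loops whose per-loop data volume fits under a cache-size budget. The array sizes, the contiguous-access assumption and the 4MB budget are placeholders rather than the compiler's actual analysis.

```c
#include <stdio.h>

#define CACHE_SIZE (4L * 1024 * 1024)   /* assumed 4 MB cache budget */

/* Original loop body: a[i] = b[i] + c[i]. */
static void partial_loop(double *a, const double *b, const double *c,
                         long lo, long hi)
{
    for (long i = lo; i < hi; i++)
        a[i] = b[i] + c[i];
}

int main(void)
{
    enum { N = 1 << 20 };                  /* 1M elements per array */
    static double a[N], b[N], c[N];

    /* data touched per iteration: three doubles (assuming contiguous access) */
    long bytes_per_iter = 3 * sizeof(double);
    long total_bytes    = (long)N * bytes_per_iter;

    /* minimum number of partial loops so that each one fits in the budget */
    long nparts = (total_bytes + CACHE_SIZE - 1) / CACHE_SIZE;
    long chunk  = (N + nparts - 1) / nparts;

    printf("dividing into %ld partial loops of about %ld iterations\n",
           nparts, chunk);
    for (long lo = 0; lo < N; lo += chunk)
        partial_loop(a, b, c, lo, lo + chunk < N ? lo + chunk : N);
    return 0;
}
```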
3.2 Scheduling for Consecutive Execution of Macro-Tasks
Macro-tasks are executed in the increasing order of the node number on the macro-task graph in the original program. For example, the execution order of macro-tasks 2_A to 3_D is 2_A, 2_B, 2_C, 2_D, 3_A, 3_B, 3_C, 3_D. In this order, macro-tasks in the same DLG are not executed consecutively. However, the earliest executable condition shown in Fig. 2(b) means that macro-task 3_B, for example, can be executed immediately after macro-task 2_B, because macro-task 3_B depends only on macro-task 2_B. In the proposed cache optimization scheme, the task scheduler for coarse grain tasks assigns macro-tasks inside a DLG to the same processor as consecutively as possible[14], in addition to using "critical path" priority. Fig.3 shows a schedule when the proposed cache optimization is applied to the macro-task graph in Fig.2(b) for a single processor. As shown in Fig.3, macro-tasks 2_B, 3_B and 7_B in DLG_B and macro-tasks 2_C, 3_C and 7_C in DLG_C are executed consecutively to use the cache effectively.
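The consecutive-execution policy can be viewed as a small change to the task selection of a list scheduler: among ready macro-tasks, prefer one from the same DLG as the task that just finished on the processor, and fall back to critical-path priority otherwise. The sketch below is a simplified single-processor illustration of that selection rule, not OSCAR's scheduler; the task names, DLG ids and priorities are made up, and dependences are ignored.

```c
#include <stdio.h>

typedef struct {
    const char *name;
    int dlg;        /* data localization group id, -1 if none           */
    int priority;   /* critical-path priority (larger = more urgent)    */
    int done;
} task_t;

/* Pick the next task: prefer the DLG of the previously executed task,
 * then fall back to the highest critical-path priority. */
static int pick_next(task_t *tasks, int n, int last_dlg)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (tasks[i].done) continue;
        if (best < 0) { best = i; continue; }
        int i_same = (tasks[i].dlg >= 0 && tasks[i].dlg == last_dlg);
        int b_same = (tasks[best].dlg >= 0 && tasks[best].dlg == last_dlg);
        if (i_same != b_same ? i_same
                             : tasks[i].priority > tasks[best].priority)
            best = i;
    }
    return best;
}

int main(void)
{
    /* e.g., partial loops 2_B, 3_B, 7_B share DLG 1; 2_C, 3_C, 7_C share DLG 2 */
    task_t tasks[] = {
        {"2_B", 1, 5, 0}, {"2_C", 2, 5, 0}, {"3_B", 1, 4, 0},
        {"3_C", 2, 4, 0}, {"7_B", 1, 3, 0}, {"7_C", 2, 3, 0},
    };
    int n = sizeof(tasks) / sizeof(tasks[0]), last_dlg = -1;

    for (int t = pick_next(tasks, n, last_dlg); t >= 0;
         t = pick_next(tasks, n, last_dlg)) {
        printf("run %s\n", tasks[t].name);   /* same-DLG tasks run back to back */
        tasks[t].done = 1;
        last_dlg = tasks[t].dlg;
    }
    return 0;
}
```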
4 Reduction of Cache Conflict Misses
This section describes the data layout transformation using padding to reduce conflict misses among macro-tasks in a DLG.
Fig. 3. Example of Scheduling Result on Single Processor
Fig. 4. Data Layout Image on Cache of Swim
4.1 Conflict Misses in a DLG
In the data localization, loops accessing the same shared variables larger than the cache size are divided into smaller loops, or macro-tasks. Furthermore, macro-tasks in the same DLG are executed consecutively on the same processor. This enables the shared data to be reused before it is evicted from the cache. However, if the data accessed by macro-tasks in a DLG share the same lines on the cache, data may be removed from the cache because of line conflict misses, even though the data size accessed in a DLG is not larger than the cache size. Conflict misses in a DLG are reduced by data layout transformation using inter-array padding. In this section, SPEC CFP95 swim is used as an example for the proposed padding scheme. Swim has 13 single precision 513x513 arrays, and each is about 1MB in size. Fig.4 shows the data layout image on cache when the 13 arrays are allocated to a 4MB direct map cache. In this figure, boxes framed by thick lines show arrays. The horizontal direction represents the 4MB cache space. Arrays at the same vertical position are allocated to the same cache lines and therefore cause line conflict misses; for example, arrays U, VNEW, POLD and H are allocated to the same part of the cache. Dotted lines in the figure show the partial arrays accessed by the loops divided by the LAD when each loop is divided into 4 smaller loops. The gray part of each array shows a partial array accessed by the divided loops in one DLG. As shown in the figure, conflict misses may be caused among the partial arrays accessed in a DLG, i.e., those at the same vertical position. These conflict misses interfere with the data reuse among the consecutively executed macro-tasks, so data layout transformation by padding is required for cache optimization across loops or macro-tasks. This section describes the padding scheme to reduce conflict misses in a DLG.
4.2 Inter-Array Padding
This section describes an inter-array padding procedure based on changing array declaration sizes.

Step 1: Select Target Arrays. Since OSCAR compiler, on which the proposed scheme is implemented, generates parallelized OpenMP FORTRAN, the actual data layout is determined by the machine native compiler which is used
as the back end of OSCAR compiler. Therefore, in the current implementation, OSCAR compiler chooses arrays of the same size as the targets of the proposed padding and changes the declaration size of the target arrays to realize inter-array padding. Arrays in a FORTRAN common block are also chosen as targets of inter-array padding if the common block has the same shape in all program modules, because changing the declaration size of such arrays does not break the program semantics. Padding for arrays in common blocks that have different shapes is described in Section 4.3.

Step 2: Generate the Data Layout Image on Cache. Next, the compiler calculates the addresses of the selected arrays and generates the data layout image on cache as shown in Fig.4. In this step, because all target arrays have the same size, the compiler can determine the data layout image regardless of the actual data layout chosen by the native compiler.

Step 3: Calculate the Minimum Division Number. The compiler calculates the minimum division number (div_num) needed to make the data size accessed in a DLG smaller than the cache size, by dividing the total size of the target arrays by the cache size. In the example in Fig.4, the total array size is 13MB and the cache size is 4MB, so div_num is ceil(13/4) = 4.

Step 4: Calculate the Maximum DLG Access Size. The maximum data size accessed in a DLG (part_size, the gray range in Fig.4) is calculated by dividing the array size by div_num. If there are overlaps among partial arrays of part_size in the data layout image on cache, conflicts may be caused among arrays accessed in a DLG. If there is no overlap, padding is not applied.

Step 5: Calculate the Padding Size. To remove the conflict, the distance on the cache between the base address of the first array (array U in Fig.4) and the base address of the first array past the cache size (array VNEW) should be part_size. The padding size needed to remove the conflict between U and VNEW is cache_size + part_size - base_address, where base_address is the base address of VNEW. Similarly, pads of the same size are inserted to remove all conflicts, as shown in Fig.5(a).

Step 6: Change the Array Sizes. In the proposed scheme, the pads inserted among certain arrays as shown in Fig.5(a) are distributed over all arrays so that the data layout does not depend on a specific order of arrays. In practice, the rightmost dimension of each array is enlarged to increase the array size by padding_size/narrays, where narrays is the number of arrays in the range from the beginning to cache_size + part_size (4 in this example). Fig.5(b) shows the data layout image on cache after the proposed padding by changing the array sizes.
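The numbers quoted in the steps above can be reproduced with a few lines of integer arithmetic. The sketch below applies Steps 3 to 6 to the 13 single-precision 513x513 arrays of swim on a 4MB cache and arrives at the 513x544 declaration mentioned later in the evaluation. The variable names are ours, and narrays is taken here to be the number of arrays laid out below the cache size, which gives the 4 of the example.

```c
#include <stdio.h>

int main(void)
{
    const long cache_size    = 4L * 1024 * 1024;      /* 4 MB direct-map cache    */
    const long rows = 513, cols = 513, elem = 4;      /* single-precision arrays  */
    const long array_size    = rows * cols * elem;    /* roughly 1 MB per array   */
    const long narrays_total = 13;                    /* the 13 arrays of swim    */

    /* Step 3: minimum division number so one DLG's data fits in the cache */
    long div_num = (narrays_total * array_size + cache_size - 1) / cache_size;

    /* Step 4: maximum data size per array accessed inside one DLG */
    long part_size = array_size / div_num;

    /* Step 5: the first array whose base address is past the cache size should
     * start part_size after the first array on the cache image */
    long first_past   = (cache_size + array_size - 1) / array_size;
    long base_address = first_past * array_size;
    long padding_size = cache_size + part_size - base_address;

    /* Step 6: distribute the pad over the arrays laid out below the cache size,
     * rounding up to whole columns of the rightmost dimension */
    long narrays   = (cache_size + array_size - 1) / array_size;   /* 4 here */
    long pad_bytes = padding_size / narrays;
    long pad_cols  = (pad_bytes + rows * elem - 1) / (rows * elem);

    printf("div_num=%ld part_size=%ld padding_size=%ld\n",
           div_num, part_size, padding_size);
    printf("declare each array as %ldx%ld instead of %ldx%ld\n",
           rows, cols + pad_cols, rows, cols);          /* 513x544 for swim */
    return 0;
}
```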
4.3 Padding for Common Block
Some program modules may have different array declaration sizes for a common block. Because padding among arrays in such a common block may change
Fig. 5. Inter-Array Padding for Swim
the program semantics, it is difficult to apply inter-array padding to such arrays. Therefore, the compiler merges such common blocks into a single large common block and inserts pads between the original common blocks, which preserves the program semantics while reducing conflict misses among the arrays in the common blocks.
4.4 Set Associative Cache
In the current implementation, the proposed padding targets the LRU replacement policy of a set-associative cache. A set-associative cache is treated as a direct-mapped cache of the same size. If padding removes the conflicts on this direct-mapped cache, the number of overlaps at any position of the n-way cache is smaller than n, because the data layout image on an n-way set-associative cache is the same as that of a direct-mapped cache of 1/n the size. Therefore, there is no conflict on the n-way cache, since a cache set of an n-way cache can hold n lines.
4.5 Page Placement Policy of Operating System
Data layout transformation by a compiler is performed on virtual addresses. Therefore, the page placement policy used by the operating system to map virtual addresses to physical addresses affects the transformation on a physically indexed cache. Simple page coloring maps sequential virtual pages to sequential physical pages, so a page conflicts with the pages that are a multiple of the cache size away from it. Data layout transformation by a compiler is effective under this policy because the contiguity of virtual addresses is preserved in physical addresses. In bin hopping, sequential physical pages are assigned to virtual pages in the order of page faults, irrespective of their virtual addresses. The contiguity of virtual addresses is not preserved in physical addresses under this policy. Therefore, it is difficult for a compiler to apply data layout transformation effectively beyond the page size on virtual addresses.
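The sketch below contrasts the two policies on a physically indexed cache; the page size, cache size, and the access order driving the page faults are illustrative assumptions.

```python
PAGE = 4096
CACHE = 4 * 2**20
COLORS = CACHE // PAGE        # distinct cache "colors" a page can map to

def color_page_coloring(vpage):
    # Sequential virtual pages get sequential physical pages, so the color is
    # fully determined by the virtual page number: compiler padding stays valid.
    return vpage % COLORS

_assigned, _next_frame = {}, 0
def color_bin_hopping(vpage):
    # Sequential physical frames are handed out in page-fault order, so the
    # color depends on the (data-dependent) order in which pages are first
    # touched, which the compiler cannot predict.
    global _next_frame
    if vpage not in _assigned:
        _assigned[vpage] = _next_frame
        _next_frame += 1
    return _assigned[vpage] % COLORS

a, b = 0, CACHE // PAGE       # two virtual pages exactly one cache size apart
for p in [b, 7, a, 3]:        # arbitrary page-fault order
    color_bin_hopping(p)
print(color_page_coloring(a) == color_page_coloring(b))  # True: conflict, but predictable
print(color_bin_hopping(a), color_bin_hopping(b))        # colors unrelated to virtual distance
```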
5 Performance Evaluation
This section describes the performance evaluation of the proposed scheme on a Sun Ultra 80 and an IBM RS/6000 44p-270. The Ultra 80 has four 450MHz UltraSPARC-IIs, each with a 4MB direct-mapped L2 cache, and the RS/6000 has four
375MHz Power3s with a 4MB 4-way set-associative L2 cache (LRU). Both caches are physically indexed. Solaris 8 on the Ultra 80 and AIX 4.3 on the RS/6000 support page coloring and bin hopping. In the evaluation, sequential FORTRAN programs are translated into parallelized OpenMP FORTRAN programs using the OSCAR compiler, in which the proposed scheme has been implemented. Three kinds of compilation are compared, namely OSCAR with the proposed padding, OSCAR without the padding, and automatic parallelization by the machine's native compiler. SPEC CFP95 tomcatv, swim, hydro2d and turb3d are used in this evaluation. The original SPEC source code is used by both the OSCAR and native compilers for tomcatv, swim and hydro2d. However, turb3d is preprocessed by the APC compiler [6] in order to parallelize some loops containing subroutine calls, because both the OSCAR and native compilers currently cannot parallelize such loops. Since the data sizes of the programs used in this evaluation are about ten MB, the target of the proposed padding with data localization is the L2 cache, which has a larger miss penalty and a larger impact on performance than the 32KB or 64KB L1 cache. In this evaluation, the number of loops generated from a loop by loop division is the same as the number of processors. Therefore, the performance improvement is obtained mainly by the proposed padding. The proposed inter-array padding extends the 513x513 two-dimensional arrays to 513x573 for tomcatv, 513x513 to 513x544 for swim, and 66x64x64 to 66x64x71 for turb3d. The padding for common blocks is applied to hydro2d: four common blocks, VAR1, VAR2, VARH and SCRA, are merged into one common block and a dummy array of 318696 bytes is inserted between VAR2 and VARH.
5.1 Performance on Sun Ultra 80
Solaris 8 supports Hashed VA, V.addr=P.addr and bin hopping as page placement policies. The V.addr=P.addr method preserves the contiguity of virtual addresses in physical addresses. Hashed VA is similar to V.addr=P.addr, but it inserts a small gap every L2 cache size (4MB) to avoid conflict misses between two addresses whose distance is exactly the L2 cache size. The default policy of Solaris 8 is Hashed VA. Speedups for 4 PEs against sequential execution compiled by the Sun Forte 6 update 2 compiler on the Sun Ultra 80 are shown in Fig. 6. The numbers above the bars in the figure show execution times. In addition, the number of cache misses measured by the CPU performance counters of the UltraSPARC-II is shown in Fig. 7. Speedups on Hashed VA by the automatic parallelization of Forte for tomcatv, swim and hydro2d are only 1.2, 1.7 and 1.8 times against sequential execution, respectively, as shown in Fig. 6(a). Also, speedups by OSCAR without padding are 1.4, 1.7 and 2.3 times, since conflict misses limit scalability. For example, the number of cache misses of swim by Forte automatic parallelization is 300 million, and that of OSCAR without padding is also 300 million, as shown in Fig. 7(a). These are not much reduced compared with that of the sequential execution (350 million), in spite of the quadrupled cache size on 4 PEs. On the other hand, turb3d has two kinds of loops. The first access pattern is sequential and
Fig. 6. Speedups on Sun Ultra 80
Fig. 7. L2 Cache Misses on Sun Ultra 80
it causes conflict misses as shown in Section 4. However, because the second access pattern is interleaved, cache performance is better than in the other three programs. Since the Ultra 80 used in this evaluation has a single memory bank, memory accesses are serialized and become the bottleneck for scalability. Therefore, reducing conflict misses to improve L2 cache performance is important. Speedups by OSCAR with padding on Hashed VA are 6.3 times for tomcatv, 9.4 for swim, 4.6 for hydro2d and 3.4 for turb3d on 4 PEs against the sequential execution. Padding thus improves the performance over OSCAR without padding by factors of 4.7, 5.5, 2.0 and 1.2, respectively. The number of cache misses is decreased by padding to 3.5% of that of OSCAR without padding for tomcatv, 4.2% for swim, 25% for hydro2d, and 61% for turb3d, as shown in Fig. 7. Speedups by OSCAR without padding against sequential execution on bin hopping are 2.4 times for tomcatv, 3.0 for swim, 3.2 for hydro2d and 3.2 for turb3d, as shown in Fig. 6(b). These are 1.8, 1.7, 1.4 and 1.1 times better than OSCAR without padding on Hashed VA. The reason is that conflict misses predicted on virtual addresses do not appear on physical addresses. Speedups by OSCAR with padding on bin hopping are 2.5, 3.0, 3.2 and 3.0 times for the respective programs, only a few percent better than OSCAR without padding.
Fig. 8. Speedups on RS/6000 44p-270
In this evaluation, the best performance on the Ultra 80 is given by OSCAR with padding on Hashed VA. Its execution times are 19 seconds for tomcatv, 11 seconds for swim, 31 seconds for hydro2d and 56 seconds for turb3d, while the minimum execution times on bin hopping are 47, 35, 44 and 60 seconds, respectively.
5.2 IBM RS/6000 44p-270
Fig. 8 shows speedups for 4 PEs against sequential execution on the IBM RS/6000 44p-270 with a 4-way set-associative L2 cache (LRU). The default page placement policy of AIX 4.3 is bin hopping, and page coloring is also supported. As shown in Fig. 8(a), speedups by OSCAR with padding against sequential execution on bin hopping are 2.6 times for tomcatv, 5.0 for swim, 4.6 for hydro2d and 3.2 for turb3d. They are 27%, 4.6%, 2.3% and 0.2% better than OSCAR without padding. Speedups by OSCAR without padding on page coloring are 1.6, 1.9, 2.9 and 3.0 times against sequential execution, lower than on bin hopping. Bin hopping shows 1.2 times better performance for XLF automatic parallelization and 1.5 times better performance for OSCAR without padding, compared with page coloring. However, OSCAR with padding on page coloring gave us 3.0 times speedup for tomcatv, 7.8 for swim, 4.3 for hydro2d and 3.2 for turb3d against sequential execution. Padding thus improves the performance over OSCAR without padding by factors of 2.0, 4.1, 1.5 and 1.1 for the respective programs. Execution times by OSCAR with padding on page coloring are 23 seconds for tomcatv, 8 seconds for swim, 17 seconds for hydro2d and 25 seconds for turb3d, while the minimum execution times on bin hopping are 27, 12, 16 and 25 seconds, respectively. OSCAR with padding on page coloring gave us the best performance on the RS/6000 44p-270.
6 Conclusions
This paper has described cache optimization with data localization for coarse grain task parallel processing on SMP machines. In the proposed scheme, loops are divided into smaller loops to fit the cache, and loops accessing the same shared data are executed on the same processor as consecutively as possible to improve temporal locality across different loops. Moreover, cache line conflicts among loops are reduced by inter-array padding. The proposed scheme is implemented in the OSCAR compiler, the core compiler of the APC compiler developed in the Japan METI Advanced Parallelizing Compiler project, a part of the Millennium Project IT21 [6], and it was evaluated on two commercial SMP workstations having different cache configurations and popular operating system page placement policies. In the evaluation on the Sun Ultra 80 (4 PEs), which has a 4MB direct-mapped L2 cache, the proposed padding scheme gave us a 5.9 times speedup against sequential execution on average over the 4 SPEC CFP95 programs tomcatv, swim, hydro2d and turb3d, with the default page placement policy called Hashed VA. OSCAR with padding on page coloring also gave us a 4.6 times speedup against sequential execution on the RS/6000 44p-270 (4 PEs), which has a 4MB 4-way set-associative L2 cache. The evaluation on the two multiprocessors shows that OSCAR with padding combined with a page-coloring-type policy (Hashed VA on the Ultra 80, page coloring on the RS/6000) gave the best performance on both machines.
Acknowledgements. This research is supported by the METI/NEDO Millennium Project IT21 "Advanced Parallelizing Compiler" and by STARC (Semiconductor Technology Academic Research Center).
References
[1] R. Eigenmann, J. Hoeflinger, and D. Padua. On the automatic parallelization of the Perfect benchmarks. IEEE Trans. on Parallel and Distributed Systems, 9(1),
Jan. 1998. [2] M.W. Hall, J.M. Anderson, S. P. Amarasinghe, B. R. Murphy, S. Liao, E. Bugnion, and M. S. Lam. Maximizing multiprocessor performance with the SUIF compiler. IEEE Computer, 1996. [3] X. Martorell, E. Ayguade, N. Navarro, J. Corbalan, M. Gonzalez, and J. Labarta. Thread fork/join techniques for multi-level parallelism exploitation in NUMA multiprocessors. Proc. of the 1999 International Conference on Supercomputing, June
1999. [4] C.J. Brownhill, A. Nicolau, S. Novack, and C. D. Polychronopoulos. Achieving multi-level parallelization. Proc. of the International Symposium on High Performance Computing, 1997. [5] H. Kasahara, M. Obata, and K. Ishizaka. Automatic coarse grain task parallel processing on SMP using OpenMP. Proc. of the 13th International Workshop on Languages and Compilers for Parallel Computing 2000, Aug. 2000.
[6] APC. Advanced parallelizing compiler project. http://www.apc.waseda.ac.jp. [7] G. Rivera and C-W. Tseng. Eliminating conflict misses for high performance architectures. Proc. of the 1998 ACM International Conference on Supercomputing, July 1998. [8] Naraig Manjikian and Tarek S. Abdelrahman. Fusion of loops for parallelism and locality. Proc. of the 24th International Conference on Parallel Processing, Aug. 1995. [9] Naraig Manjikian and Tarek S. Abdelrahman. Array data layout for the reduction of cache conflicts. Proc. of the 8th International Conference on Parallel and Distributed Computing Systems, Sep. 1995. [10] R. E. Kessler and Mark D. Hill. Page placement algorithms for large real-indexed caches. ACM Transactions on Computer Systems, Nov. 1992. [11] Brian N. Bershad, Dennis Lee, Theodore H. Romer, and J. Bradley Chen. Avoiding conflict misses dynamically in large direct-mapped caches. Proc. of the Sixth International Symposium on Architectural Support for Programming Languages and Operating Systems, Oct. 1994. [12] Timothy Sherwood, Brad Calder, and Joel Emer. Reducing cache misses using hardware and software page placement. In Proc. of the International Conference on Supercomputing, June 1999. [13] E. Bugnion, J. M. Anderson, T. C. Mowry, M. Rosenblum, and M.S. Lam. Compiler-directed page coloring for multiprocessors. Proc. of the Seventh International Symposium on Architectural Support for Programming Languages and Operating Systems, Oct. 1996. [14] K. Ishizaka, M. Obata, and H. Kasahara. Coarse grain task parallel processing with cache optimization on shared memory multiprocessor. In Proc. of the 14th International Workshop on Languages and Compilers for Parallel Computing, Aug. 2001. [15] H. Kasahara et al. Performance of multigrain parallelization in Japanese millennium project IT21 advanced parallelizing compiler. In Proc. of the 10th International Workshop on Compilers for Parallel Computers (CPC), Jan. 2003. [16] A. Yoshida, K. Koshizuka, and H. Kasahara. Data-localization using loop aligned decomposition for macro-dataflow processing. Proc. of the 9th Workshop on Languages and Compilers for Parallel Computing, Aug. 1996.
Compiler-Assisted Cache Replacement: Problem Formulation and Performance Evaluation*
Hongbo Yang1, R. Govindarajan2, Guang R. Gao1, and Ziang Hu1
1 Department of Electrical and Computer Engineering, University of Delaware, Newark, DE 19716, USA {hyang,ggao,hu}@capsl.udel.edu
2 Department of Computer Science and Automation, Indian Institute of Science, Bangalore, 560012, India [email protected]
* This research is funded by NSF, under the NGS grant 0103723, DOE, grant number DE-FC02-01ER25503, and Intel Corp.
Abstract. Recent research results show that conventional hardware-only cache solutions result in unsatisfactory cache utilization for both regular and irregular applications. To overcome this problem, a number of architectures introduce instruction hints to assist cache replacement. For example, the Intel Itanium architecture augments memory accessing instructions with cache hints to distinguish data that will be referenced in the near future from the rest. With the availability of such methods, the performance of the underlying cache architecture critically depends on the ability of the compiler to generate code with appropriate cache hints. In this paper we formulate this problem – giving cache hints to memory instructions such that the cache miss rate is minimized – as a 0/1 knapsack problem, which can be efficiently solved using a dynamic programming algorithm. The proposed approach has been implemented in our compiler testbed and evaluated on a set of scientific computing benchmarks. Initial results show that our approach is effective in reducing the cache miss rate and improving program performance.
1 Introduction
Over the last few decades, as processor performance has kept improving substantially, the gap between processor and memory speeds has been widening steadily. This problem, known as the "memory wall" problem, exists in both general-purpose high-performance computers [13] and embedded systems [17]. To bridge this performance gap, caches were introduced, which has ameliorated the "memory wall" problem to some extent. However, a conventional cache is typically designed in a hardware-only fashion, where data management, including cache line replacement, is decided purely by hardware. A consequence of this design approach is that the cache can make poor decisions in choosing the data to be replaced, which may lead to poor cache performance. The widely used LRU (least recently used) cache replacement algorithm makes replacement decisions based
on past reference behavior. This can cause data with good reuse to yield cache space to data that comes in later but has poor reuse. Research results reveal that a considerable fraction of cache lines are held by data that will not be reused again before it is displaced from the cache. This is true for both irregular [4] and regular applications [15]. This phenomenon, called cache pollution, severely degrades cache performance. There are a number of efforts in architecture design to address this problem, and the cache hint mechanism implemented in the Intel Itanium processor [9] is one of them. The memory accessing instructions of Itanium can be accompanied by an nt (stands for non-temporal) cache hint. In response, Itanium-2 implemented a modified LRU replacement algorithm honoring the nt cache hint [9]. In the Itanium-2 processor, the execution of memory accessing instructions with the nt cache hint differs from that of a normal memory instruction in the following way. For a set-associative cache, when a normal memory instruction is executed, a cache line is allocated for the accessed data, and the just-allocated cache line is given the highest rank in the set (to indicate that it is the most recently used). Thus it becomes the last to be replaced among all cache lines in the particular set. In contrast, the execution of a memory instruction with the nt cache hint does not change the rank of the touched cache line. In this modified LRU replacement mechanism, data accessed by instructions with the nt hint is more likely to be evicted on a subsequent cache miss. By relying on the compiler to give the nt hint to the instructions accessing data without temporal reuse, this architecture effectively prevents cache pollution and thus has the potential to achieve better cache locality. On this architecture, a good compiler algorithm to generate cache hints is essential, and this is the focus of this paper. Intuitively, two kinds of memory instructions should be given the nt hint: (i) those whose referenced data does not exhibit temporal reuse, and (ii) those whose referenced data does exhibit temporal reuse that cannot be realized under the particular cache configuration. It sounds as though the problem is pretty simple for regular applications, and existing techniques for analyzing data reuse [20] and estimating cache misses [11, 21, 12] suffice to solve this problem. This plausible statement, however, is not true, because a fundamental technique used in cache miss estimation — footprint analysis — is based on the assumption that all accessed data compete for cache space equally. However, in our target architecture, memory instructions are not homogeneous — those with cache hints have much less demand for cache space. This makes the approach derived from traditional footprint analysis very conservative. In summary, the following cyclic dependence exists: cache hint assignment must be known to achieve accurate cache miss estimation, while accurate cache miss estimation is only possible when cache hint assignment is finalized. In this paper, we develop a simple yet effective formulation to address the above problem. Our formulation is based on the observed relationship between cache miss rate and cache-residency of reference windows [10]. This is used to formulate the problem as a 0/1 knapsack problem [8]. For the case that all considered memory referencing instructions are enclosed by a perfect loop nest,
the formulated problem falls into a special category of the knapsack problem that can be solved in polynomial time. For the case that loops are imperfectly nested, this is a general 0/1 knapsack problem, which is known to be NP-complete [8]. In this case, good heuristic algorithms exist to achieve near-optimal results [5]. However, since the number of references in a loop nest is typically small, even obtaining the optimal result using a dynamic programming algorithm [8, 14] is quite inexpensive. We have evaluated the benefit of our approach in reducing cache misses on a set of loop kernels and a full SPEC benchmark program by simulating their execution using the SimpleScalar simulator [3]. Initial experimental results show that our approach reduces the number of data cache misses by up to 57.1%, and reduces execution time by up to 27%. The rest of the paper is organized as follows. Section 2 briefly reviews the basic concepts of data reuse and reference window. Section 3 illustrates, through an example, the relationship between reference window and cache miss rate, which sets up the rationale for our problem formulation. The heart of this paper — an elegant knapsack problem formulation — is derived in Section 4. Our implementation and experimental results are then presented in Section 5. Section 6 discusses related work. Section 7 concludes the paper and envisions possible future research directions.
2 Preliminaries
We review some basic concepts on data reuse and reference window that will be used in the rest of this paper. For an affine array reference in a loop nest of depth n, the subscripts can be represented as H·i + c, where H is the access matrix, i is the iteration vector and c is the offset vector. If two different executions of an array reference at iteration points i1 and i2 access the same array element, it must be true that H·i1 + c = H·i2 + c. Therefore, if the equation H·r = 0 has a nonzero solution r, the array reference with subscripts H·i + c exhibits self-temporal reuse, and the solution to H·r = 0 constitutes the self-temporal reuse vector. Two references to the same array, with the same access matrix but different offset vectors, say reference H·i + c1 and reference H·i + c2, may access the same data only if the equation H·i1 + c1 = H·i2 + c2 can be satisfied. Thus group-temporal reuse exists when H·r = c1 − c2 has a solution r, and the solution constitutes the group-temporal reuse vector. A uniformly generated reference set (UGS) is a set of references to the same array, with the same access matrix, that has group data reuse within the set [10]. By defining uniformly generated reference sets and partitioning all array references into UGSs, we can study data reuse on a per-UGS basis. Gannon et al.'s work introduced the term reference window, which is defined as the set of array elements that are accessed by the source reference of a reuse pair in the past and will be accessed in the future by the sink reference [10]. Consider the Fortran program shown in Figure 1 as an example. This is a small
Fig. 1. The MXM loop kernel from SPEC92. The values of M, N and L are 128, 64 and 256, respectively; A, B and C are two-dimensional arrays with 8-byte double-precision floating-point elements
Fig. 2. Reference windows for reuse pairs
kernel from the SPEC92 benchmark 093.nasa7. The reference windows associated with all loop-carried reuse pairs are listed in Figure 2. Let us explain why the reference windows are as given in Figure 2. For reference C(I,K), whose reuse is carried by the outermost loop, the entire array has been traversed by the previous iterations and all of its elements will be accessed again before the outermost loop advances to its next iteration; therefore the reference window is the entire array. For the self-reuse of array reference A(I,J), which is carried by the middle loop, all elements in the first dimension will be referenced in the future. The other reference windows given above can be derived similarly. A careful study reveals that the size of a reference window is determined by its reuse vector. By solving the reuse equations for each reference, we get the reuse vectors for references C(I,K), A(I,J) and B(J,K) as (1,0,0), (0,1,0), and (0,0,1), respectively. By the definition of the reuse vector, we know that C(I,K) accesses the same element at two iterations that differ by one iteration of the outermost loop; these two iterations are far apart, thus the number of different array elements accessed in between (i.e., the reference window) is large. Since the reuse of B(J,K) happens at the innermost loop, its reference window is much smaller. Gannon et al. gave a formula to compute the size of the reference window based on the reuse vector; we refer interested readers to [10] for more details.
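The reuse vectors quoted above can be derived mechanically by solving H·r = 0 for each reference. The sketch below does this with sympy; the access matrices and the (j, k, i) ordering of the iteration vector are assumptions chosen to be consistent with the reuse vectors stated in the text, not taken from the original kernel.

```python
from sympy import Matrix

# Assumed iteration vector order: (j, k, i), i.e., J outermost, I innermost.
access = {
    "C(I,K)": Matrix([[0, 0, 1],    # row subscript I
                      [0, 1, 0]]),  # column subscript K
    "A(I,J)": Matrix([[0, 0, 1],    # I
                      [1, 0, 0]]),  # J
    "B(J,K)": Matrix([[1, 0, 0],    # J
                      [0, 1, 0]]),  # K
}

for name, H in access.items():
    reuse_vectors = [tuple(v) for v in H.nullspace()]   # solutions of H*r = 0
    print(name, "self-temporal reuse vector(s):", reuse_vectors)
```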
Fig. 3. Cache occupancy and miss rate of array reference A(I,J) in the program shown in Figure 1
3 A Case Study
In this section, using the matrix multiply program shown in Figure 1, we illustrate the relationship between reference window and cache miss rate. First let us analyze the data reuse¹ of this program. We start with the array reference A(I,J): the data accessed by this reference at one iteration will be accessed again by the same array reference at a later iteration of the loop that carries its reuse, and the intervening data accesses by all array references during this interval do not interfere with it. This kind of reuse is named self-reuse [20]. Following the reuse analysis method given by Wolf and Lam [20], we can easily derive that the data reuse of all other array references of A is also self-reuse (there is no reuse between references A(I,J+1) and A(I,J), since the stride of loop J is 4). Since there is no data reuse relation between any two references of A, we can study each of the references A(I,J), A(I,J+1), A(I,J+2) and A(I,J+3) in isolation. Without loss of generality, we choose the reference A(I,J) and profile its cache behavior. Before giving the profiling result, we define the term cache occupancy to refer to the number of cache lines occupied by a particular array reference. We traced the cache occupancy and the cache miss rate for the reference A(I,J) on a 256-set, 4-way associative cache with a cache line size of 8 bytes. Both cache occupancy and cache miss rate are shown in Figure 3. In this figure, both cache occupancy and cache miss rate are obtained by averaging the respective values over the last 20 clock cycles. In the figure, the cache occupancy of the reference varies slightly, from 255.5 to 256, while the cache miss rate varies widely, from 0% to 100%.
¹ Data reuse is a term different from cache locality; data reuse leads to cache locality only when the reuse can be realized by the particular cache configuration.
We observe that the cache miss rate is tightly coupled with the cache occupancy, and is inversely related to it. When the average cache occupancy of the reference is 256 for the last 20 cycles, the cache miss rate is zero during this period. When the cache occupancy drops to 255.5–255.6 (due to competition with other array references), the cache miss rate rises to 100%. This is somewhat surprising, at least initially, as the decrease in the cache occupancy is only marginal (from 256 to 255.5). Let us go back to the source program and analyze why this happens. As we have discussed before, the array element accessed by reference A(I,J) at a given iteration will be accessed again by the same array reference one iteration later of the loop carrying the reuse. The number of distinct array elements accessed by A(I,J) in between (including the two bounding iterations) is 256. These 256 array elements are the reference window for the self-reuse vector of A(I,J) that we derived in Section 2. Hence we conclude that if the cache holds all elements of the reference window for a particular reuse pair, the data reuse is translated into cache locality at run time; otherwise, that reuse cannot be exploited by the cache. Based on this observation, we formulate the problem of giving the nt cache hint in Section 4.
4 Problem Formulation
In this section we give a problem formulation for generating nt hints for memory instructions. We start with the case that all memory references have self-reuse only and give the problem formulation in Section 4.1. The general case that includes group-reuse is discussed in Section 4.2.
4.1 Problem Formulation for Self-Reuse: Case I
The particular problem that we address in this section is as follows: Problem 1. Given a cache size and a perfect loop nest whose loop body has n array references, with no two references having data reuse between them, determine the subset of references that should be given the nt hint such that the cache miss rate of executing this loop nest on the given cache is minimized. As demonstrated by the profiling result of the matrix multiply program (shown in Figure 3), to realize a data reuse, the reference window of that data reuse must be accommodated by the cache. In reality, the cache size is limited and the reference windows it can hold are constrained by the cache capacity. We associate each array reference with a binary decision variable x_i, whose value is 1 if the reference is to realize its temporal reuse in the cache and 0 if it is given the nt hint; the variables x_1, ..., x_n constitute the decision variables of the problem. The constraint imposed by the cache capacity can be formulated as:

w_1·x_1 + w_2·x_2 + ... + w_n·x_n ≤ C    (1)
where w_i refers to the size of the reference window of array reference i, and C is the effective cache size [11, 18]. We use the effective cache size instead of the full cache size in the capacity constraint since strided access with a stride larger than 1 cannot exploit the full cache capacity, as shown in Gao et al.'s work [11]. The capacity constraint ensures that for an array reference whose corresponding decision variable has the value 1, its reference window will be fully accommodated by the cache. Hence its temporal reuse can be realized. Since our objective is to minimize the cache miss rate, it is desirable to have as many array references as possible achieve temporal locality. And since all array references are enclosed by a perfect loop nest, their execution frequencies are the same. Thus our objective function is formulated as:

maximize  x_1 + x_2 + ... + x_n    (2)
This problem, composed of the constraint specified by Inequality 1 and the objective function specified by Equation 2, is, in essence, a 0/1 knapsack problem [8]. For the problem formulation that we have given, the knapsack problem falls into a special category where the candidate items have different weights (the sizes of the reference windows) but the same value (1). For this special case, the knapsack problem can be solved using a greedy algorithm in polynomial time. We give the details of such an algorithm in [22]. For more complicated cases where the loops are imperfectly nested, the coefficients of x_i in the objective function will not be uniform, resulting in a more general 0/1 knapsack problem. For the general 0/1 knapsack problem, the optimal result can be obtained by using a dynamic programming algorithm in O(nC) time [8, 14], where n is the number of array references and C is the effective cache size. If the time complexity of the dynamic programming approach is unaffordable, heuristic algorithms also exist to obtain near-optimal results [5].
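A minimal sketch of both solution strategies is shown below; the function names and the way weights and values are passed in are illustrative, and the returned set contains the indices of references whose windows are kept in cache (all others would receive the nt hint).

```python
def greedy_equal_value(window_sizes, cache_size):
    """Special case (perfect loop nest): every reference has value 1,
    so keeping the smallest reference windows first is optimal."""
    kept, used = set(), 0
    for i in sorted(range(len(window_sizes)), key=lambda i: window_sizes[i]):
        if used + window_sizes[i] <= cache_size:
            kept.add(i)
            used += window_sizes[i]
    return kept

def knapsack_dp(window_sizes, values, cache_size):
    """General 0/1 knapsack, O(n*C) time, for imperfectly nested loops."""
    best = [0] * (cache_size + 1)
    taken = [[False] * (cache_size + 1) for _ in window_sizes]
    for i, (w, v) in enumerate(zip(window_sizes, values)):
        for c in range(cache_size, w - 1, -1):
            if best[c - w] + v > best[c]:
                best[c] = best[c - w] + v
                taken[i][c] = True
    kept, c = set(), cache_size        # backtrack to recover the chosen references
    for i in reversed(range(len(window_sizes))):
        if taken[i][c]:
            kept.add(i)
            c -= window_sizes[i]
    return kept

# References kept in cache get x_i = 1; the rest receive the nt hint.
print(greedy_equal_value([256, 16384, 2], cache_size=1024))    # -> {0, 2}
print(knapsack_dp([256, 16384, 2], [1, 1, 1], cache_size=1024))
```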
4.2 Problem Formulation for Group Reuse: Case II
Now we extend our approach to the general case in which group-reuse exists. The problem that we address in this section is: Problem 2. Given a cache size and a perfect loop nest whose loop body has array references with group data reuse, determine the subset of references that should be given the nt hint such that the cache miss rate of executing the loop nest is minimized. To address this problem, the group reuse among these array references must be determined first. Then we can formulate the problem in a similar way as in Case I. Our approach to this problem is therefore divided into the following three steps:
1. Partition the array references into UGSs.
2. Represent the reuse within each UGS using a reuse graph and prune the edges of the reuse graph to simplify the problem.
3. Form a 0/1 knapsack problem from the pruned reuse graph.
We illustrate these steps using an example program whose loop body contains the five array references A(I+1,J), A(I,J+1), A(I,J-1), A(I-1,J) and L(I,J). Step 1. Partitioning: In the first step, we partition the array references into UGSs such that group reuse exists only within each set. This step is the same as that documented in Wolf et al.'s paper [21] and Mowry's dissertation [16]. For the example program, the five array references are partitioned into two UGSs: {A(I+1,J), A(I,J+1), A(I,J-1), A(I-1,J)} and {L(I,J)}.
Step 2. Pruning: The nice feature of the target loops of Problem 1 that we dealt with in Section 4.1 is that data reuse occurs within each single reference, so the cost and benefit of realizing the reuse are clearly defined. The presence of group-reuse makes this feature disappear, and we have to deal with the case that data accessed by one array reference is reused by several other array references. We represent group data reuse using a reuse graph (as shown in Figure 4), where each edge (solid or dashed) represents a possible reuse. The reuse graph can be simplified such that each reference has only one successor and one predecessor. In the following paragraph we discuss how to prune the reuse graph. In Figure 4, edges remaining after pruning are shown as solid edges and edges that can be pruned are shown as dashed edges. For legibility reasons, we did not show all pruned edges; however, all solid edges that remain after pruning are shown. Consider the reuse between A(I+1,J) and A(I,J-1) as an example. Although reuse testing by solving the reuse equation gives us a reuse edge from A(I+1,J) to A(I,J-1), a careful analysis reveals that this reuse actually does not happen, because of the intervening access generated by A(I,J+1). Consider the element A(i+1,j), which is accessed by reference A(I+1,J) at iteration (i,j) and by reference A(I,J-1) at iteration (i+1,j+1). Before this reuse can be realized between these two references, a reuse by reference A(I,J+1) happens at iteration (i+1,j-1). Hence the reuse edges (A(I+1,J), A(I,J+1)) and (A(I,J+1), A(I,J-1)) together, transitively, represent the reuse information between A(I+1,J) and A(I,J-1). Therefore the edge (A(I+1,J), A(I,J-1)) can be pruned. In a similar way, all transitive edges can be pruned from the reuse graph. By pruning the transitive edges, we get a reuse graph in which each node has at most one successor and one predecessor. This nice feature of the pruned reuse graph facilitates our knapsack problem formulation since the cost and benefit of realizing each reuse can be easily identified.
Fig. 4. Data reuse graph for the example program. The vector adjacent to each reuse edge is the reuse vector
As seen, for multiple array references that possibly reuse data of a common parent, the pruning step chooses the one that reuses the data at the earliest time. Thus the rule for pruning is: for an array reference from which multiple reuse edges emanate, keep the edge that has the minimum reuse vector and prune all other edges. Reuse vectors are compared in lexicographic order [1]. Step 3. Formulation: After pruning we proceed to the last step, viz., formulating the problem. The cost of realizing a temporal reuse is the size of the reference window associated with the reuse. By realizing the reuse, the reference reusing the data will get its data from the cache instead of memory, thus saving a memory access per iteration. In the pruned reuse graph, the reference window sizes of the four reuse edges emanating from A(I+1,J), A(I,J+1), A(I,J-1) and A(I-1,J) are N-1, 2, N-1 and (M-2)·N, respectively. L(I,J) has self-reuse with a reference window size of M·N. Associating a 0/1 decision variable x_1, ..., x_5 with each of these five reuses, the problem for the example program can be formulated as: maximize

x_1 + x_2 + x_3 + x_4 + x_5

within the constraint:

(N-1)·x_1 + 2·x_2 + (N-1)·x_3 + (M-2)·N·x_4 + M·N·x_5 ≤ C
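The pruning rule of Step 2 is easy to express in code. In the sketch below, the edge list and reuse vectors are illustrative assumptions standing in for the reuse graph of Fig. 4; the rule itself (keep the lexicographically smallest reuse vector per source reference) is the one stated above.

```python
# Keep, for every reference, only the outgoing reuse edge with the
# lexicographically smallest reuse vector; all other (transitive) edges are pruned.
edges = {
    "A(I+1,J)": [("A(I,J+1)", (1, -1)), ("A(I,J-1)", (1, 1)), ("A(I-1,J)", (2, 0))],
    "A(I,J+1)": [("A(I,J-1)", (0, 2)), ("A(I-1,J)", (1, 1))],
    "A(I,J-1)": [("A(I-1,J)", (1, -1))],
}

pruned = {src: min(out, key=lambda edge: edge[1]) for src, out in edges.items()}
for src, (dst, vec) in pruned.items():
    print(f"{src} -> {dst}   reuse vector {vec}")
```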
5 Experimental Results
5.1 Experimental Platform
We have implemented our approach in the MIPSpro compiler and evaluated its performance by running SPEC benchmarks on the SimpleScalar simulator [3].
The MIPSpro compiler is the production-quality compiler developed by SGI for MIPS processors. We have re-engineered the code generator of the MIPSpro compiler to generate code for SimpleScalar. The MIPSpro compiler has a rich set of optimizations to maximize program performance, and we have enabled most of them in our experiment. As a first step of our work, we did not enable loop nest transformation in our experiment; studying the interaction between our technique and other locality-enhancing techniques like loop fusion, loop fission, loop permutation and loop tiling is our future work. However, optimizations applied to loop bodies, like strength reduction, induction variable elimination and cross-iteration common subexpression elimination, which do not change the loop nest structure, are still invoked. We implemented the algorithm for computing the reference window size given in Gannon et al.'s paper [10], which is used in the 0/1 knapsack problem. We have also implemented the knapsack problem formulation (i.e., generating the constraints) and a greedy algorithm to obtain the optimal solution for it in the MIPSpro compiler. A dynamic programming algorithm for the general 0/1 knapsack problem is interesting but was not required, since perfect loop nests dominate in the benchmarks we evaluated. We did not consider scalar references for the nt hint, as scalar variables only consume a small portion of the cache space. SimpleScalar uses the MIPS instruction set with a few minor differences. Each instruction word in SimpleScalar is 64 bits long, of which the most significant 16 bits are not used at present. This 16-bit field, called the annotation field in SimpleScalar, is used by us to carry the cache hint in our experiment. During code generation, memory instructions are given the nt hint according to the solution of the 0/1 knapsack problem. In response to this ISA modification, we modified the simulation mechanism as well: we implemented the modified LRU algorithm, which does not change the rank of the cache line for accesses with the nt hint. We chose two representative loop kernels, mxm, in which most data accesses are column-major, and vpenta, in which most data accesses are row-major. Both of them are from the SPEC92 093.nasa7 benchmark written in Fortran. In addition, to evaluate the effectiveness of our approach on large benchmarks, we also included one complete benchmark, tomcatv from SPEC95 with the train data set, in our workload. We evaluated our approach on caches of varying sizes (ranging from 4K bytes to 32K bytes) and varying associativity (2-way and 4-way). Note that for a direct-mapped cache, the replacement algorithm and the cache hint do not play any role. In our experimental work, we used a fixed cache line size of 16 bytes.
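The modified LRU policy simulated here can be modeled per cache set as below. This is a simplified sketch of the behavior described earlier (normal accesses promote the line to most-recently-used, nt accesses leave the ranks unchanged), not the exact Itanium-2 or SimpleScalar implementation.

```python
class CacheSet:
    """One set of a set-associative cache; self.lines is ordered LRU -> MRU."""
    def __init__(self, assoc):
        self.assoc = assoc
        self.lines = []

    def access(self, tag, nt=False):
        if tag in self.lines:
            if not nt:                       # normal hit: promote to MRU
                self.lines.remove(tag)
                self.lines.append(tag)
            return True                      # nt hit: ranks left unchanged
        if len(self.lines) == self.assoc:    # miss: evict the LRU line
            self.lines.pop(0)
        if nt:
            self.lines.insert(0, tag)        # nt fill keeps the evicted line's rank
        else:
            self.lines.append(tag)           # normal fill becomes MRU
        return False

s = CacheSet(assoc=4)
for t in "ABCD":
    s.access(t)
s.access("X", nt=True)       # evicts A, but X itself stays at the LRU position
print(s.lines)               # ['X', 'B', 'C', 'D']
```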
5.2 Performance Results
The cache miss rates of the conventional cache and of the nt-hint-assisted cache are compared in Table 1. The cache miss results for these two types of cache are obtained by running exactly the same program generated by our compiler on the SimpleScalar simulator (simulating, respectively, the LRU replacement algorithm and the modified LRU replacement algorithm).
Our approach shows most performance benefits on mxm for 8K byte 4-way cache. It reduces the cache miss rate from 35% to 15% (a 57.1% reduction on the number of cache misses). As elaborated in Section 3, the key to achieve satisfactory overall cache locality is to ensure that reuse of array references of A is materialized, since in this example, reference of C has distant reuse and references of B are loop-invariant. But, in a conventional cache, cache pollution caused by array C prevents array A from enjoying its temporal locality, leading to poor locality on a cache of size 4K and 8K bytes. For 8K byte cache, 41.3% of the executed memory instructions are given the nt cache hint by our approach. This ensures that data accessed by normal memory instructions (references of A in this case) stay in the cache for a relatively longer time which in turn results in better temporal locality. For 2-way 8K byte cache, our approach is also quite effective, reducing the number of cache misses in mxm by 30.6%. The percentage reduction achieved on a 2-way cache is lower than that achieved by a 4-way cache. Although this is counter-intuitive, we observe that, even for the conventional cache with the original LRU replacement, mxm achieves lower cache miss rates on a 2-way 8K byte cache than on a 4-way cache of the same size. This could be due to higher conflict misses as 4-way associativity results in fewer sets (128 sets) than 2-way associativity (256 sets) on a 8K byte cache. Our approach performs consistently better over conventional cache for larger cache sizes (16K and 32K bytes). For caches of relatively smaller sizes (especially 4K bytes), our approach performs marginally better than the conventional cache, but not consistently. The reason for this is that when data accessed by an instruction with nt hint is brought in, its life time in the cache is typically much shorter than that in a conventional cache. Although this is beneficial for other data with temporal locality, the short cache life-time of the accessed data
Fig. 5. Impact of our approach on locality of regular and nt-hint objects
jeopardizes spatial locality since it may be replaced before the adjacent data items are accessed. On a cache of small size, this happens more frequently. To verify the above conjecture, we designed an experiment in which each cache object is classified as a regular object or an nt-hint object, depending on whether the data object is brought into the cache by a regular memory instruction or by an nt-hint memory instruction. We measured the number of references to each cache block during its lifetime (from the time the cache block is brought in to the time it is replaced). Using this we compute the average number of references for regular objects and nt-hint objects. We compute these values for both classes of objects with the original as well as the modified LRU replacement algorithm. Note that in all our experiments the code run in the simulator is the same (one which includes nt-hint memory instructions); only the replacement policy used (original LRU or modified LRU) differs between the caches. Figure 5(a) shows the average number of references for nt-hint objects for the tomcatv benchmark. It can be seen that the average number of references remains the same between the original and the modified LRU replacement for the 32K-byte cache. However, for small cache sizes, there is a decrease in the average number of references. This shows that the spatial locality exploited in nt-hint objects is lower in nt-hint-assisted caches, especially when the cache size is smaller. In other words, the locality of the nt-hint objects is really sacrificed. For reference purposes, we also show the average number of references for regular objects in Figure 5(b). It can be seen that the modified LRU algorithm (with nt hints) improves the locality of regular objects for all cache sizes. These two graphs (refer to Figure 5) tell us that the key to reducing the cache miss ratio on the studied architecture is to avoid or minimize the degradation of the locality exploited in nt-hint objects while enhancing the locality exploited in regular objects. Fortunately, in most cases the benefit our approach achieves in the temporal locality exploited in regular objects dominates the possible loss of spatial locality exploited in nt-hint objects. This is evidenced by the positive average reduction in cache misses we achieved for all cache sizes we considered.
We observe that our approach is more effective on caches of higher associativity. As shown, our approach reduces the cache miss rate by a larger extent for 4-way associative caches than for 2-way associative caches. One possible reason for this is that our problem formulation does not take cache conflicts into account. In our problem formulation given in Section 4, we optimistically assumed that the residency of reference windows is only constrained by the effective cache size. This assumption gives us a simple problem formulation; but it suffers from not considering conflict misses which is non-negligible on caches of low associativity. Our future work will consider using conflict-avoiding techniques like data padding to improve the effectiveness of our approach. Next we report the impact of reduced cache misses (due to nt hint) on program performance. For this, we obtain program execution time, expressed in execution cycles, from SimpleScalar simulator. We simulate a superscalar processor which issues 2 instructions in a clock cycle and employs out-of-order instruction issue and out-of-order execution. We consider one level of cache: Icache of 16K bytes, and the size of D-cache varies between 4K and 32K bytes. The cache hit latency is 1 cycle, and the cache miss penalty is 40 cycles. Performance results for a conventional cache and a cache with nt hints are reported in Table 2. We observe that the reduction in cache misses (due to nt hints) does result in a corresponding reduction in the execution time, although not exactly by the same/similar amount. This is because cache miss rate is not the only factor affecting program performance, especially in out-of-order issue processors. In general, we observe that the cache miss rate reduction achieved by our approach is accompanied by a corresponding performance improvement. With the widening speed gap between processor and memory, our approach can have more performance impact on future microprocessors.
6 Related Work
Improving cache performance has attracted a lot of attention from both the architecture and compiler perspectives. Specifically, enhancing the instruction set with cache hints was pioneered by Chi and Dietz, who studied an architectural innovation that introduces cache-bypassing memory instructions [6, 7]. In their architecture model, data accessed by cache-bypassing memory instructions is not allocated a cache line. Their approach is helpful in avoiding cache pollution, but severely compromises spatial locality. By using cache hints, we can get better temporal locality without sacrificing spatial locality significantly. Wang et al. studied a hypothetical architecture similar to the one considered in this paper [19], and proposed a heuristic compiler algorithm for this architecture. However, our work differs from theirs in two major aspects: (i) we perform an in-depth study of the compiler algorithm while they focused on the architectural implementation; (ii) we present a systematic formulation while they used an ad-hoc algorithm. Lastly, their algorithm does have the cyclic dependency problem mentioned in Section 1. In future work, we plan to compare our approach with their heuristic method. Anantharaman and Pande studied the problem of optimizing loop execution on embedded systems with scratch-pad memory and without a cache [2]. Interestingly, they formulated the problem as a 0/1 knapsack problem as well. However, the problem they studied is different from ours, since scratch-pad memory differs from a cache in that it is free of hardware interference.
7 Conclusions
Improving cache performance is of significant importance in modern processors. In this paper we explored compiler-assisted cache management, which utilizes the cache more efficiently to achieve better performance. In particular, we studied the problem of determining the subset of references that should be given nt ("non-temporal") cache hints to minimize the cache miss rate. We observed the relationship between cache miss rate and cache-residency of reference windows in Section 3. This observation forms the basis of our formulation: in order for an array reference to realize its temporal reuse, its reference window must be fully accommodated in the cache. We then formulated the problem as a 0/1 knapsack problem for the following two cases: (i) only self-reuse exists, and (ii) group-reuse exists. To the best of our knowledge this is the first systematic formulation of this problem. We evaluated our approach by implementing it in a re-engineered MIPSpro compiler generating SimpleScalar instructions and running the generated code through the SimpleScalar simulator. Our simulation results show that our approach exploits the architecture's potential well: it reduced the number of data cache misses by up to 57%, and program execution time by up to 25.7%. Our plan for future work includes performing a comprehensive evaluation of the sensitivity of our approach to cache associativity and cache line size, integrating our approach with other locality-enhancing techniques, and comparing it with related work.
References [1] R. Allen and K. Kennedy. Optimizing Compilers for Modern Architectures. Morgan Kaufmann Publishers, 2002. [2] S. Anantharaman and S. Pande. Compiler optimization for real time execution of loops on limited memory embedded systems. In Proceedings of 1998 IEEE Real-Time Systems Symposium, Madrid, Spain, Dec 1998. [3] Doug Burger and Todd Austin. The SimpleScalar tool set, version 2.0. Technical Report 1342, Computer Sciences Department, Univ of Wisconsin, 1997. [4] Douglas C. Burger, James R. Goodman, and Alain Kägi. The declining effectiveness of dynamic caching for general-purpose microprocessors. Technical Report WMADISONCS CS-TR-95-1261, University of Wisconsin-Madison, Computer Sciences Department, 1995. [5] David Callahan, Steve Carr, and Ken Kennedy. Improving register allocation for subscripted variables. In Proc. of SIGPLAN PLDI ’90, pages 53–65, White Plains, N. Y., Jun. 1990. [6] C.-H. Chi and H. Dietz. Improving cache performance by selective cache bypass. In Twenty-Second Annual Hawaii International Conference on System Sciences, pages 277–285, 1989. [7] Chi-Hung Chi and Hank Dietz. Unified management of registers and cache using liveness and cache bypass. In Proc. of SIGPLAN PLDI ’89, pages 344–355, Portland, Ore., Jun. 1989. [8] T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to algorithms. MIT Press and McGraw-Hill Book Company, 1992. [9] Intel Corp. Intel Itanium 2 Processor Reference Manual for Software Development and Optimization, Jun 2002. [10] Dennis Gannon, William Jalby, and Kyle Gallivan. Strategies for cache and local memory management by global programming transformation. Journal of Parallel and Distributed Computing, 5(5):587–616, October 1988. [11] Guang R. Gao, Vivek Sarkar, and Shaohua Han. Locality analysis for distributed shared-memory multiprocesors. In Proc. of the 1996 International Workshop on Languages and Compilers for Parallel Computing(LCPC), San Jose, California, Aug 1996. [12] Somnath Ghosh, Margaret Martonosi, and Sharad Malik. Cache miss equations: An analytical representation of cache misses. In Conf. Proc., 1997 Intl. Conf. on Supercomputing, pages 317–324, Vienna, Austria, Jul. 1997. [13] John L. Hennessy and David A. Patterson. Computer Architecture: A Quantitative Approach. Morgan Kaufmann Pub., Inc., San Francisco, 2nd edition, 1996. [14] J. Lee, E. Shragowitz, and S. Sahni. A hypercube algorithm for the 0/1 knapsack problem. Journal of Parallel and Distributed Computing, 5(4):438–456, August 1988. [15] Kathryn S. McKinley and Olivier Temam. Quantifying loop nest locality using spec’95 and the perfect benchmarks. ACM Transactions on Computer Systems (TOCS), 17(4):288–336, 1999. [16] T. Mowry. Tolerating Latency Through Software-Controlled Data Prefetching. PhD thesis, Stanford University, 1994. [17] P. R. Panda, F. Catthoor, N.D. Dutt, K. Danckaert, E. Brockmeyer, C. Kulkarni, A. Vandercappelle, and P. G. Kjeldsberg. Data and memory optimization techniques for embedded systems. ACM Transactions on Design Automation of Electronic Systems (TODAES), 6(2):149–206, 2001.
[18] Vivek Sarkar and Nimrod Megdido. An analytical model for loop tiling and its solution. In Proceedings of IEEE 2000 International Symposium on Performance Analysis of Systems and Software, Austin, TX, Apr 2000. [19] Zhenlin Wang, Kathryn S. McKinley, Arnold L. Rosenberg, and Charles C. Weems. Using the compiler to improve cache replacement decisions. In Proceedings of the 11th International Conference on Parallel Architecture and Compilation Techniques(PACT’02), Charlottesville, Virginia, Sept 2002. [20] Michael E. Wolf and Monica S. Lam. A data locality optimizing algorithm. In Proc. of SIGPLAN PLDI ’91, pages 30–44, Toronto, Ont., Jun. 1991. [21] Michael E. Wolf, Dror E. Maydan, and Ding-Kai Chen. Combining loop transformations considering caches and scheduling. In Proc. of MICRO-29, pages 274–286, Paris, Dec. 1996. [22] Hongbo Yang, R. Govindarajan, Guang R. Gao, and Ziang Hu. A problem formulation of assisting cache replacement by compiler. Technical Report 47, Computer Architecture and Parallel Systems Laboratory, University of Delaware, 2003.
Memory-Constrained Data Locality Optimization for Tensor Contractions
Alina Bibireata1, Sandhya Krishnan1, Gerald Baumgartner1, Daniel Cociorva1, Chi-Chung Lam1, P. Sadayappan1, J. Ramanujam2, David E. Bernholdt3, and Venkatesh Choppella3
1 Department of Computer and Information Science, The Ohio State University, Columbus, OH 43210, USA {bibireat,krishnas,gb,cociorva,clam,saday}@cis.ohio-state.edu
2 Department of Electrical and Computer Engineering, Louisiana State University, Baton Rouge, LA 70803, USA [email protected]
3 Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA {bernholdtde,choppellav}@ornl.gov
Abstract. The accurate modeling of the electronic structure of atoms and molecules involves computationally intensive tensor contractions over large multi-dimensional arrays. Efficient computation of these contractions usually requires the generation of temporary intermediate arrays. These intermediates could be extremely large, requiring their storage on disk. However, the intermediates can often be generated and used in batches through appropriate loop fusion transformations. To optimize the performance of such computations a combination of loop fusion and loop tiling is required, so that the cost of disk I/O is minimized. In this paper, we address the memory-constrained data-locality optimization problem in the context of this class of computations. We develop an optimization framework to search among a space of fusion and tiling choices to minimize the data movement overhead. The effectiveness of the developed optimization approach is demonstrated on a computation representative of a component used in quantum chemistry suites.
1 Introduction
An increasing number of large-scale scientific and engineering applications are highly data intensive, operating on large data sets that range from gigabytes to terabytes, thus exceeding the physical memory of the machine. Scientific applications, in particular electronic structure codes widely employed in quantum chemistry [12, 13], computational physics, and material science, require elaborate interactions between subsets of data; data cannot be simply brought into the physical memory once, processed, and then over-written by new data. Subsets of data are repeatedly moved back and forth between a small memory pool, limited physical memory, and a large memory pool, the
unlimited disk. The cost introduced by these data movements has a large impact on the overall execution time of the computation. In such contexts, it is necessary to develop out-of-core algorithms that explicitly orchestrate the movement of subsets of data within the memory-disk hierarchy. These algorithms must ensure that data is processed in subsets small enough to fit the machine's main memory, but large enough to minimize the cost of moving data between disk and memory. This paper presents an approach to the automated synthesis of out-of-core programs, with particular emphasis on the Tensor Contraction Engine (TCE) program synthesis system [1, 3, 2, 5, 4]. The TCE targets a class of electronic structure calculations, which involve many computationally intensive components expressed as tensor contractions (essentially generalized matrix products involving higher dimensional matrices). Although the current implementation addresses tensor contraction expressions arising in quantum chemistry, the approach developed here has broader applicability; we believe it can be extended to automatically generate efficient out-of-core code for a range of computations expressible as imperfectly nested loop structures operating on arrays potentially larger than the physical memory size. The evaluation of such expressions involves explicit decisions about:
- the structure of loops, including tiling strategies
- the evaluation order of intermediate arrays
- memory operations (allocate, deallocate, reallocate)
- disk operations (read, write)
The fundamental compiler transforms that we apply here are loop fusion and loop tiling.
Loop Fusion: The evaluation of the tensor contraction expressions often results in the generation of large temporary arrays that would be too large to be produced entirely in main memory by a "producer" loop nest and then consumed by a "consumer" loop nest. By suitably fusing common loops in the producer and consumer loop nests, it is feasible to reduce the dimensionality of the buffer array used in memory and store the intermediate array on disk. Thus, a smaller in-memory array may be used to produce the full disk array in chunks (a small sketch of this effect is given at the end of this section).
Loop Tiling: Tiling enables data locality to be enhanced, so that the cost of moving data to/from disk is decreased.
For minimizing the disk access cost under a given memory constraint, the compiler needs to search among many possible loop fusion structures, tile sizes, and placements of temporaries on disk. Conceptually, it is necessary to search along all three dimensions simultaneously. A decoupled approach that first searches for a fusion structure that minimizes the memory usage, followed by tiling and disk placements [5], may produce code with a sub-optimal disk-access cost, as an example in the next section illustrates. A decoupled approach that first optimizes disk access by tiling the loops and placing temporaries on disk,
followed by loop fusion for reducing memory usage, may fail to find a solution that fits into memory, since the constraints imposed by tiling prohibit many possible fusions. However, a simultaneous search along all three dimensions is computationally infeasible. In this paper, we present an integrated approach in which we first search for possible fusion structures together with disk placements. The result of this search is a set of candidate loop structures with different memory requirements and different combinations of disk placements for the temporaries. For each of the solutions from this search, we then search for the tile sizes that minimize the disk access cost [6]. We present two algorithms for the combined fusion and disk placement search: an optimal algorithm that is guaranteed to find the solution with the minimum disk access cost after tiling, and a heuristic algorithm that is more efficient but may result in a suboptimal solution after tiling. The rest of the paper is organized as follows. In the next section, we describe the class of computations that we consider and discuss an example from computational chemistry. In Sec. 3, we introduce the main concepts used in this paper. Sec. 4 presents an optimal fusion plus tiling algorithm. Sec. 5 presents a suboptimal, but empirically efficient, fusion plus tiling algorithm. Sec. 6 presents experimental evidence of the performance of this algorithm, and conclusions are provided in Sec. 7.
2 The Computational Context
We consider the class of computations in which the final result to be computed can be expressed in terms of tensor contractions, essentially a collection of multidimensional summations of the product of several input arrays. There are many different ways to compute the final result due to commutativity, associativity and distributivity. The ways to compute the final result could differ widely in the number of floating point operations required, in the amount of memory needed, and in the amount of disk-to-memory traffic. As an example, consider a transformation often used in quantum chemistry codes to transform a set of two-electron integrals from an atomic orbital (AO) basis to a molecular orbital (MO) basis:
B(a, b, c, d) = \sum_{p,q,r,s} C1(a, p) C2(b, q) C3(c, r) C4(d, s) A(p, q, r, s)

Here, A is an input four-dimensional array (assumed to be initially stored on disk), and B is the output transformed array, which needs to be placed on disk at the end of the calculation. The arrays C1 through C4 are called transformation matrices. The indices p, q, r, and s range over the total number of orbitals and have the same range N equal to O + V, where O is the number of occupied orbitals in the chemistry problem and V is the number of unoccupied (virtual) orbitals. Similarly, the indices a, b, c, and d have the same range equal to V. Typical values for O range from 10 to 300, and the number of virtual orbitals V is usually between 50 and 1000.
The result array B is computed in four steps to reduce the number of floating point operations from O(N^8) in the initial formula (8 nested loops, one for each of the indices p, q, r, s, a, b, c, and d) to O(N^5), as shown below:

T1(a, q, r, s) = \sum_p C1(a, p) A(p, q, r, s)
T2(a, b, r, s) = \sum_q C2(b, q) T1(a, q, r, s)
T3(a, b, c, s) = \sum_r C3(c, r) T2(a, b, r, s)
B(a, b, c, d)  = \sum_s C4(d, s) T3(a, b, c, s)
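For concreteness, the four-step evaluation can be checked with a small in-memory NumPy sketch. This ignores the out-of-core aspect that is the subject of this paper, uses small made-up dimensions, and assumes the index pairing shown in the four-step equations above.

```python
import numpy as np

# Small, hypothetical dimensions: O occupied and V virtual orbitals.
O, V = 4, 6
N = O + V

rng = np.random.default_rng(0)
A = rng.standard_normal((N, N, N, N))        # input integrals A(p, q, r, s)
C1, C2, C3, C4 = (rng.standard_normal((V, N)) for _ in range(4))

# Operation-minimal four-step evaluation via the intermediates T1, T2, T3.
T1 = np.einsum('ap,pqrs->aqrs', C1, A)       # T1(a, q, r, s)
T2 = np.einsum('bq,aqrs->abrs', C2, T1)      # T2(a, b, r, s)
T3 = np.einsum('cr,abrs->abcs', C3, T2)      # T3(a, b, c, s)
B = np.einsum('ds,abcs->abcd', C4, T3)       # B(a, b, c, d)

# The same result computed directly from the single summation (O(N^8) work).
B_direct = np.einsum('ap,bq,cr,ds,pqrs->abcd', C1, C2, C3, C4, A)
assert np.allclose(B, B_direct)
```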
This operation-minimal approach creates the three temporary intermediate arrays T1, T2, and T3 defined above. Assuming that the available memory limit on the machine running this calculation is smaller than V^4 double-precision elements (about 3.3 TB for V = 800), none of the logical arrays A, T1, T2, T3, and B can fit entirely in memory. Therefore, if the computation is implemented as a succession of four independent steps, the intermediates T1, T2, and T3 have to be written to disk once they are produced, and read from disk before they are used in the next step. Furthermore, the disk access volume could be much larger than the total volume of the data on disk containing A, T1, T2, T3, and B. Since none of these arrays can be fully stored in memory, it may not be possible to perform all multiplication operations by reading each element of the input arrays from disk only once. We use loop fusion to reduce the memory requirements for the temporary arrays, and loop fusion together with loop tiling to reduce the disk access volume. To illustrate the interactions between fusion and tiling, consider the following simple example with only two contractions:
To prevent the intermediate array from having to be written to disk in case it does not fit in memory, we need to fuse loops between the producer and the consumer of the intermediate. This results in the intermediate array being formed and used in a pipelined fashion. For every loop that is fused between the producer and the consumer of an intermediate, the corresponding dimension can be removed from the intermediate. For example, in the loop structure in Fig. 1(a), the intermediate can be reduced to a scalar, while in the loop structure in Fig. 2(a), it can only be reduced to a vector. Notice that reducing the memory requirement of the temporary to a scalar in Fig. 1(a) requires the file read operations for B and C to be inside the innermost loops. This results in the input arrays being read redundantly multiple times: B is read once for every iteration of one enclosing loop, while C is read once for every iteration of another. The number of redundant read operations can be reduced by tiling the loops and reading entire tiles in one operation, as illustrated in Fig. 1(b). B, for example, is now read redundantly only once for every iteration of the corresponding tiling loop. In exchange,
Fig. 1. Illustration of the decoupled approach for a simple example
Fig. 2. Illustration of the integrated approach for a simple example
the memory requirement increases, since all fused array dimensions get expanded to tile size. The disk access volume for a given loop structure can therefore be minimized by increasing the tile sizes until the memory is exhausted. In our previous decoupled approach to fusion and tiling, we first fused the loops in order to minimize the memory usage. The memory-minimal loop structure was then tiled to minimize the disk access cost, as shown in Fig. 1. We found that for some examples this resulted in suboptimal solutions, since there were too many redundant read operations for the input arrays. Also, the memory-minimal loop structure often results in the summation loop being the outermost loop for a contraction. This requires the initialization of the result array to be outside the non-summation tiling loops, which then requires both a read and a write operation for the result array. This is illustrated with array D in Fig. 1(b).
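As an illustration of this trade-off, consider an assumed pair of contractions (not the arrays of the paper's figures): T(i, j) = \sum_k A(i, k) B(k, j) followed by D(i, l) = \sum_j T(i, j) C(j, l). In the fully fused form the intermediate shrinks to a scalar while slices of A, B, and C are re-read inside the inner loops; tiling the fused loops reads whole blocks at a time, trading a tile-sized buffer for fewer redundant reads. A minimal NumPy sketch of the two loop structures:

```python
import numpy as np

Ni, Nj, Nk, Nl, Ti, Tj = 8, 8, 8, 8, 4, 4
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal(s) for s in [(Ni, Nk), (Nk, Nj), (Nj, Nl)])

def fused_scalar():
    """Fully fused: the intermediate T(i, j) is reduced to a scalar t."""
    D = np.zeros((Ni, Nl))
    for i in range(Ni):
        for j in range(Nj):
            t = A[i, :] @ B[:, j]       # slices of A and B re-read per (i, j)
            D[i, :] += t * C[j, :]      # rows of C re-read for every i
    return D

def fused_tiled():
    """Fused and tiled: T expands to a Ti x Tj in-memory block."""
    D = np.zeros((Ni, Nl))
    for it in range(0, Ni, Ti):
        for jt in range(0, Nj, Tj):
            # read tile-sized chunks of A and B once per (it, jt) tile
            T_blk = A[it:it+Ti, :] @ B[:, jt:jt+Tj]
            D[it:it+Ti, :] += T_blk @ C[jt:jt+Tj, :]
    return D

assert np.allclose(fused_scalar(), fused_tiled())
assert np.allclose(fused_scalar(), (A @ B) @ C)
```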
Minimizing the disk access cost before fusion, by deciding which temporaries to put on disk, is not possible, since the resulting constraints on the loop structure might prevent the solution from fitting in memory. Also, since fusion can eliminate the need to write some temporaries to disk, it can help reduce the disk access cost. What is needed, therefore, is an integrated approach in which we minimize the disk access cost under a memory constraint. The loop structure in Fig. 2 is the result of such an integrated approach. It is not feasible to simultaneously search over all possible loop structures and all possible tile sizes. Instead, we first produce a set of candidate loop structures and decide which of the temporaries are written to disk for a given loop structure. For each candidate solution in this set, we then determine the tile sizes that minimize the disk access cost. Finally, we select the tiled loop structure with the minimal disk access cost. We have previously described the tile size search and the proper placement of I/O operations in the tiled loop structure [6]. In this paper, we concentrate on the algorithms for finding the candidate solutions for the tile size search.
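In outline, the integrated search can be pictured as the following driver, where enumerate_candidates and search_tile_sizes are hypothetical stand-ins for the candidate-generation algorithms of Sections 4 and 5 and for the tile-size search of [6], respectively.

```python
# Sketch of the integrated search: enumerate candidate (fusion structure,
# disk placement) pairs, then optimize tile sizes for each candidate under
# the memory limit, and keep the candidate with the lowest disk access cost.
def compose_out_of_core_program(expr_tree, memory_limit,
                                enumerate_candidates, search_tile_sizes):
    best = None
    for candidate in enumerate_candidates(expr_tree):
        # candidate: a fused loop structure plus a choice of which
        # intermediates are written to disk
        tiling = search_tile_sizes(candidate, memory_limit)
        if tiling is None:              # no tile sizes fit within memory_limit
            continue
        if best is None or tiling.disk_cost < best.disk_cost:
            best = tiling
    return best
```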
3 Preliminaries
Before describing the algorithms, we first need to present the notions of expression trees, fusions, and nestings. Since these concepts, as well as the algorithms, are not limited to tensor contraction expressions, we describe them in the context of arbitrary sums-of-products expressions. For more detailed explanations, readers are referred to [7, 8, 9, 10, 11]. As an example to illustrate the concepts, we use the multi-dimensional summation shown in Figure 3(a), represented by the expression tree in Figure 3(b). One way to fuse the loops is shown in Figure 3(c).

Indexset Sequence. To describe the relative scopes of a set of fused loops, we introduce the notion of an indexset sequence, which is defined as an ordered list of disjoint, non-empty sets of loop indices. For simplicity, we write each indexset in an indexset sequence as a string of its indices, and for an indexset sequence we also consider the union of all of its indexsets.

Fusion. We use the notion of an indexset sequence to define a fusion. Intuitively, the loops fused between a node and its parent are ranked by their fusion scopes in the subtree, from largest to smallest; two loops with the same fusion scope have the same rank (i.e., are in the same indexset).

Nesting. Similarly, a nesting of the loops at a node can be defined as an indexset sequence. Intuitively, the loops at a node are ranked by their scopes in the subtree; two loops have the same rank (i.e., are in the same indexset) if they have the same scope. In the example of Figure 3, the nesting at a node can differ from the fusion between that node and its parent, because the scope of a fused loop may cover one more node, namely C.
Fig. 3. An example multi-dimensional summation
The “More-Constraining” Relation on Nestings. A nesting at a node is said to be more or equally constraining than another nesting at the same node if any loop fusion configuration for the rest of the expression tree that works with the former also works with the latter. This relation allows us to do effective pruning among the large number of loop fusion configurations for a subtree.
4 Optimal Fusion + Tiling Algorithm
We derive the memory usage and the disk access volume of arrays in tiled, imperfectly nested loops as follows. Without tiling, the memory usage of an array is the product of the ranges of its unfused dimensions. With tiling, the tile sizes of the fused dimensions also contribute to the product. The disk access volume is the size of the array times the trip counts of the loops surrounding the read/write statement but not corresponding to the dimensions of the array. Without tiling, the trip counts of such extra loops are simply their index ranges. With tiling, the trip counts become their index ranges divided by their tile sizes. In addition, if partial sums are produced and written to disk, they need to be read back into memory, thus doubling the disk access volume.
In symbols, let N_i denote the range of loop index i and T_i its tile size, and let the relevant fusion for an array A be the fusion between read A and consume A, between produce A and consume A, or between produce A and write A, as appropriate. Then

MemUsage(A) = \prod_{i \in unfused(A)} N_i \cdot \prod_{i \in fused(A)} T_i
DiskCost(A) = w_A \cdot \prod_{i \in dims(A)} N_i \cdot \prod_{i \in extra(A)} (N_i / T_i)

where w_A = 2 if the summation loops are not contained in the fusion between produce A and write A (so that partial sums are written to disk and must be read back), and w_A = 1 otherwise; fused(A) and unfused(A) are the fused and unfused dimensions of A, and extra(A) is the set of loops surrounding the read/write statement for A that do not correspond to dimensions of A. As an example, for a disk-resident array X, the fusion between produce X and write X determines which dimensions of X are fused, and MemUsage(X) and DiskCost(X) then follow from the above equations.
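The per-array cost model just described can be phrased as a small helper; the dictionary-based representation of index ranges, tile sizes, and fused dimensions below is an illustrative assumption, not the paper's implementation.

```python
from math import prod

def mem_usage(array_dims, fused, ranges, tiles):
    """Memory usage of an array: unfused dimensions contribute their full
    index range, fused dimensions contribute only their tile size."""
    return prod(tiles[i] if i in fused else ranges[i] for i in array_dims)

def disk_cost(array_dims, extra_loops, ranges, tiles, partial_sums_on_disk):
    """Disk volume: array size times the trip counts (range / tile size) of
    surrounding loops that do not index the array, doubled if partial sums
    are written to disk and must be read back."""
    size = prod(ranges[i] for i in array_dims)
    trips = prod(ranges[i] // tiles[i] for i in extra_loops)
    return (2 if partial_sums_on_disk else 1) * size * trips

# Hypothetical example: a 2-D array X(a, q) produced inside loops a, q, p.
ranges = {'a': 800, 'q': 900, 'p': 900}
tiles = {'a': 20, 'q': 30, 'p': 30}
print(mem_usage(('a', 'q'), fused={'a', 'q'}, ranges=ranges, tiles=tiles))
print(disk_cost(('a', 'q'), extra_loops={'p'}, ranges=ranges, tiles=tiles,
                partial_sums_on_disk=True))
```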
Note that if an intermediate array is written to disk, it has two potentially different MemUsage values: one from before writing it to disk and one after reading it back from disk. Similarly, it has two DiskCost values: one for writing it and one for reading it. Since MemUsage and DiskCost depend on tile sizes, it may appear that we cannot compare MemUsage and DiskCost between different fusions without knowing the tile sizes. However, some comparison is still possible. Continuing with the above example, consider an alternative fusion between produce X and write X; applying the same equations yields a second pair of MemUsage and DiskCost expressions for X.
No matter what tile sizes are used for the alternative fusion, we can use the same tile sizes for the original fusion and be assured that its MemUsage and DiskCost are no larger. Hence, the alternative fusion for array X is inferior to the original fusion and can be pruned away. Generalizing from this example, we obtain sufficient conditions, LeqMemUsage and LeqDiskCost, for one fusion to result in less or equal MemUsage or DiskCost than another.
Informally, LeqMemUsage holds between two fusions for an array if the set of dimensions fused under the first is a superset of the set fused under the second, and LeqDiskCost holds if the set of extra loops under the first is a subset of that under the second. The first condition implies that, for the same set of tile sizes, the memory usage under the first fusion is no larger than under the second, because the tile size of any index is no larger than its range. Similarly, the second condition (for LeqDiskCost) implies that, for the same set of tile sizes, the disk access cost under the first fusion is no larger, because each extra loop contributes a trip-count factor of at least one. In our example, both LeqMemUsage and LeqDiskCost hold because the set of fused indices of the original fusion is a superset of that of the alternative, and its set of extra loops is a subset. To apply LeqMemUsage and LeqDiskCost to compare different solutions corresponding to different fusion configurations for a subtree, we need to consider the different combinations of whether each array is disk-resident or not.
To compare two solutions, the relevant quantities are, for each array A, whether A is disk-resident in the solution, the fusion between read A and consume A in the solution, the fusion between produce A and write A in the solution, and the fusion between produce A and consume A in the solution. For input or final-result arrays, where the read or write fusions do not apply, or for intermediate disk-resident arrays where the read fusion is yet to be decided, such fusions are considered empty sets. Making use of the above results, we can compare and prune solutions as follows. A solution that has higher or equal memory usage and disk access cost and a more or equally constraining nesting than another solution is considered inferior and can be pruned away safely. Between solutions for the entire tree, and between solutions for a subtree whose root array is disk-resident and whose fusion is undecided, pruning without the condition of a more or equally constraining nesting is also safe.
A dynamic programming, bottom-up algorithm using the Inferior condition as a pruning rule works as follows. For each leaf node (corresponding to an input array) in the tree, one solution is formed for each possible fusion with its parent (or for the corresponding fusion if it is not disk-resident), and then inferior solutions are pruned away. For each intermediate array A in the tree, all possible legal fusions, for writing A to disk or not respectively, are considered in deriving new solutions from the children of A. Solutions that write A to disk are pruned against each other before all possible legal fusions between read A and consume A are enumerated to derive new solutions. Then all inferior solutions for the subtree rooted at A, whether writing A to disk or not, are pruned away. For the root of the tree, if it is to be written to disk, all possible legal fusions are considered in deriving new solutions. Finally, all inferior solutions for the entire tree are pruned away. Although this approach is guaranteed to find an optimal solution, it could be expensive. The reason is that the Inferior condition requires each and every array in the subtree in one solution to have lower or equal memory usage than the corresponding array in the other solution, and similarly for disk access cost. If either the memory usage or the disk access cost of any array in one solution is incomparable to that of the corresponding array in the other, no solution derived from the first for a larger subtree would be comparable to any solution derived from the second. Thus, in the worst case, the number of unpruned solutions for the entire tree could grow exponentially in the number of arrays. Due to its exponential complexity, we have yet to implement this approach.
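A sketch of the shape of this bottom-up enumeration follows; the legal_fusions, combine, and inferior helpers are hypothetical placeholders for the constructions defined in this section.

```python
# Bottom-up dynamic programming enumeration with pruning.
# `legal_fusions(node)`, `combine(node, child_solution_sets, fusion, on_disk)`
# and `inferior(s, t)` stand in for the constructions described in the text;
# `node.children` gives the children of a node in the expression tree.

def prune(solutions, inferior):
    kept = []
    for s in solutions:
        if any(inferior(s, t) for t in kept):
            continue                         # s is inferior to a kept solution
        kept = [t for t in kept if not inferior(t, s)]
        kept.append(s)
    return kept

def enumerate_solutions(node, legal_fusions, combine, inferior):
    child_sets = [enumerate_solutions(c, legal_fusions, combine, inferior)
                  for c in node.children]
    solutions = []
    for fusion in legal_fusions(node):
        for on_disk in (False, True):        # write this array to disk or not
            solutions += combine(node, child_sets, fusion, on_disk)
    return prune(solutions, inferior)
```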
5 Efficient Fusion + Tiling Algorithm
Since the optimal fusion and tiling algorithm is impractical to implement due to its large number of unpruned solutions, we have devised a sub-optimal but efficient algorithm to solve the fusion and tiling problem. The central idea of this algorithm is to first fix a tile size T common to all the tiled loops and, based on this tile size, determine a set of candidate solutions by a bottom-up tree traversal. In the second part of the algorithm, the tile sizes are allowed to vary, and optimal tile sizes are determined for all candidate solutions. The candidate solution with the lowest disk cost is finally chosen as the best overall solution. Our current implementation of the first part of the algorithm uses T = 1. With the fusions defined as in Section 4, the memory usage and disk cost for an array A are obtained by substituting the common value T for every tile size in the formulas of Section 4,
where, as before, the relevant fusion is the fusion between read A and consume A, between produce A and consume A, or between produce A and write A. When an intermediate array is stored on disk, it has two MemUsage values: one from before writing it to disk and one after reading it back from disk. In this case, we define MemUsage as the maximum of the two values. Similarly, the array has two DiskCost values: one for writing it and one for reading it. We define the total disk cost of an intermediate array that is stored on disk as the sum of the disk costs for writing it and for reading it back. With these definitions, we calculate the memory usage and disk cost of a solution corresponding to a given fusion configuration for a subtree as the sums, over all arrays in the subtree, of the arrays' MemUsage and DiskCost values, respectively,
where each array's fusion (between read A and consume A, between produce A and consume A, or between produce A and write A) is the one given by the fusion configuration of the solution. Different solutions corresponding to different fusion configurations for a subtree are now easily comparable: one solution is less than or equal to another in memory usage (respectively, disk cost) when its summed MemUsage (respectively, DiskCost) is no larger.
Making use of the above results, we can introduce pruning rules similar to those of the optimal algorithm: a solution that has higher or equal memory usage and disk access cost and a more or equally constraining nesting than another solution is considered inferior and can be pruned away safely.
A dynamic programming, bottom-up algorithm using the Inferior condition as a pruning rule works in the same fashion as the optimal algorithm described in Section 4. The major difference between the optimal algorithm and the efficient
Fig. 4. Fused Structure with temporary T2 on disk
algorithm is that the Inferior condition is more relaxed in the latter: we no longer require that the MemUsage and DiskCost inequalities hold for every individual array in the subtree rooted at the current node. Instead, only the sums of MemUsage and DiskCost over the entire subtree need to be compared. The result of this approach is a set of candidate solutions, each characterized by a (MemUsage, DiskCost) pair. The algorithm described above prunes away all solutions that have higher MemUsage and DiskCost under the tile size constraint T = 1. For each candidate solution in the set, we then search for the tile sizes that minimize the disk access cost [6]. Increasing the tile sizes causes the disk access cost to decrease and the memory usage to increase, since array dimensions that have been eliminated by fusion get expanded to tile size. Finally, we select the solution with the least disk access cost.
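A toy illustration of this final step, assuming for simplicity a single common tile size T and monotone cost callbacks: memory use grows and disk volume shrinks as T increases, so the search keeps the largest T that still fits the memory limit, which here also has the least disk cost. The specific cost functions below are invented for the example.

```python
def pick_tile_size(mem_usage_at, disk_cost_at, memory_limit, max_tile):
    """Return (T, disk_cost) for the feasible tile size with least disk cost.
    `mem_usage_at(T)` and `disk_cost_at(T)` are hypothetical callbacks that
    evaluate a candidate solution's total memory use and disk volume at a
    common tile size T."""
    best = None
    for T in range(1, max_tile + 1):
        if mem_usage_at(T) > memory_limit:
            break                            # memory use only grows with T
        cost = disk_cost_at(T)
        if best is None or cost < best[1]:
            best = (T, cost)
    return best

# Toy cost model: memory growing as T^2 (a 2-D tile) and disk volume
# shrinking as 1/T (fewer redundant passes), with a 512 MB buffer budget.
mem = lambda T: 8 * (T ** 2) * 4096          # bytes of buffer space
io = lambda T: 2.0e12 / T                    # bytes moved to/from disk
print(pick_tile_size(mem, io, memory_limit=512 * 2**20, max_tile=4096))
```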
6 Experimental Evaluations
We used the algorithm from Sec. 5 to generate code for the AO-to-MO index transformation calculation described in Sec. 2. The algorithm generated 77 candidate solutions that were then run through the tiling algorithm. We present two representative solutions generated by this algorithm. The solution shown in Fig. 4 places only the temporary T2 on disk, while the solution shown in Fig. 5 places only the temporary T1 on disk. After the tile size search, the tiled code with the least disk access cost was the one based on the solution with T2 on disk. The optimal code is shown in Fig. 6. Measurements were taken on a Pentium II system with the configuration shown in Table 1. The codes were all compiled with the Intel Fortran Compiler
Fig. 5. Fused Structure with temporary T1 on disk
Fig. 6. Loop Structure after tiling
for Linux. Although this machine is now very old and much slower than PCs available today, it was convenient to use for our experiments in an uninterrupted mode, with no interference to the I/O subsystem from any other users. Table 2 shows the measured I/O time for the AO-to-MO transform for the tensor sizes (double precision) considered; we used 100 MB as the memory limit. The I/O time for each array was separately accumulated. The predicted values match quite well with the measured times. The match is better for the overall I/O time than for some individual arrays. This is because disk writes are asynchronous and may be overlapped with succeeding disk reads; hence the measurements of I/O time attributable to individual arrays are subject to error due to such overlap, but the total time should not be affected by the interleaving of writes
with succeeding reads. For these tensor sizes and an available memory of 100 MB, it is possible to choose fusion configurations so that the sizes of any two out of the three intermediate arrays can be reduced to fit completely in memory, but it is impossible to find a fusion configuration that fits all three intermediates within memory. Thus, it is necessary to keep at least one of them on disk and incur disk I/O cost for that array. Table 3 shows the predicted I/O times and the improvement factor of the integrated fusion+tiling algorithm over the decoupled algorithm for the AO-to-MO transformation example for different array sizes and memory limits. For some of the array sizes, actual measurements were performed using the 100 MB, 500 MB, and 2000 MB memory limits and, in all cases, the predicted results for the integrated algorithm matched the actual results. For the memory limits of 500 MB and 2000 MB and the small array sizes, both the decoupled and the integrated algorithm were able to fit all the temporaries in memory, and thus no significant improvement was achieved. Depending on the size of the problem, as the memory pressure increases, the improvement factor of the integrated algorithm over the decoupled algorithm increases significantly. This is to be expected, because the decoupled algorithm introduces more redundant reads and writes than the integrated algorithm. With high memory pressure, the tiles cannot be made very large, which results in an insufficient reduction of the redundant disk accesses. The measured results and the predicted results match well, and the integrated fusion+tiling algorithm outperforms the decoupled data-locality algorithm.
7 Conclusion
We have described an optimization approach for synthesizing efficient out-of-core algorithms in the context of the Tensor Contraction Engine. We have presented two algorithms for performing an integrated fusion and tiling search. Our algorithms produce a set of candidate solutions, each with a fused loop structure and read and write operations for temporaries. After determining the tile sizes that minimize the disk access cost, the optimal solution is chosen. We have demonstrated with experimental results that the integrated approach outperforms a decoupled approach of first determining the fused loop structure and then searching for the optimal tile sizes.
Acknowledgments We thank the National Science Foundation for its support of this research through the Information Technology Research program (CHE-0121676 and CHE-0121706), NSF grants CCR-0073800 and EIA-9986052, and the U.S. Department of Energy through award DE-AC05-00OR22725.
References
[1] G. Baumgartner, D.E. Bernholdt, D. Cociorva, R. Harrison, S. Hirata, C. Lam, M. Nooijen, R. Pitzer, J. Ramanujam, and P. Sadayappan. A High-Level Approach to Synthesis of High-Performance Codes for Quantum Chemistry. In Proc. Supercomputing 2002, Nov. 2002.
[2] D. Cociorva, G. Baumgartner, C. Lam, P. Sadayappan, J. Ramanujam, M. Nooijen, D. Bernholdt, and R. Harrison. Space-Time Trade-Off Optimization for a Class of Electronic Structure Calculations. In Proc. of the ACM SIGPLAN 2002 Conference on Programming Language Design and Implementation (PLDI), June 2002, pp. 177–186.
[3] D. Cociorva, X. Gao, S. Krishnan, G. Baumgartner, C. Lam, P. Sadayappan, and J. Ramanujam. Global Communication Optimization for Tensor Contraction Expressions under Memory Constraints. In Proc. of the 17th International Parallel & Distributed Processing Symposium (IPDPS), Apr. 2003.
[4] D. Cociorva, J. Wilkins, C.-C. Lam, G. Baumgartner, P. Sadayappan, and J. Ramanujam. Loop Optimization for a Class of Memory-Constrained Computations. In Proc. of the 15th ACM International Conference on Supercomputing (ICS'01), pp. 500–509, Sorrento, Italy, June 2001.
[5] D. Cociorva, J. Wilkins, G. Baumgartner, P. Sadayappan, J. Ramanujam, M. Nooijen, D. E. Bernholdt, and R. Harrison. Towards Automatic Synthesis of High-Performance Codes for Electronic Structure Calculations: Data Locality Optimization. In Proc. of the Intl. Conf. on High Performance Computing, Dec. 2001, Lecture Notes in Computer Science, Vol. 2228, pp. 237–248, Springer-Verlag, 2001.
[6] S. Krishnan, S. Krishnamoorthy, G. Baumgartner, D. Cociorva, C. Lam, P. Sadayappan, J. Ramanujam, D. E. Bernholdt, and V. Choppella. Data Locality Optimization for Synthesis of Efficient Out-of-Core Algorithms. In Proc. of the Intl. Conf. on High Performance Computing, Dec. 2003, Hyderabad, India.
[7] C. Lam. Performance Optimization of a Class of Loops Implementing Multi-Dimensional Integrals. Ph.D. Dissertation, The Ohio State University, Columbus, OH, August 1999.
[8] C. Lam, D. Cociorva, G. Baumgartner, and P. Sadayappan. Optimization of Memory Usage and Communication Requirements for a Class of Loops Implementing Multi-Dimensional Integrals. In Proc. of the 12th LCPC Workshop, San Diego, CA, Aug. 1999.
[9] C. Lam, D. Cociorva, G. Baumgartner, and P. Sadayappan. Memory-Optimal Evaluation of Expression Trees Involving Large Objects. In Proc. of the Intl. Conf. on High Performance Computing, Dec. 1999.
[10] C. Lam, P. Sadayappan, and R. Wenger. On Optimizing a Class of Multi-Dimensional Loops with Reductions for Parallel Execution. Parallel Processing Letters, 7(2), pp. 157–168, 1997.
[11] C. Lam, P. Sadayappan, and R. Wenger. Optimization of a Class of Multi-Dimensional Integrals on Parallel Machines. In Proc. of the Eighth SIAM Conf. on Parallel Processing for Scientific Computing, Minneapolis, MN, March 1997.
[12] T. J. Lee and G. E. Scuseria. Achieving Chemical Accuracy with Coupled Cluster Theory. In S. R. Langhoff (Ed.), Quantum Mechanical Electronic Structure Calculations with Chemical Accuracy, pp. 47–109, Kluwer Academic, 1997.
[13] J. M. L. Martin. In P. v. R. Schleyer, P. R. Schreiner, N. L. Allinger, T. Clark, J. Gasteiger, P. Kollman, and H. F. Schaefer III (Eds.), Encyclopedia of Computational Chemistry, Vol. 1, pp. 115–128, Wiley & Sons, Berne (Switzerland), 1998.
Compositional Development of Parallel Programs
Nasim Mahmood, Guosheng Deng, and James C. Browne
Department of Computer Sciences, University of Texas at Austin, Austin, Texas 78712
{nmtanim,gsdeng,browne}@cs.utexas.edu
Abstract. This paper presents a programming model, an interface definition language (P-COM2) and a compiler that composes parallel and distributed programs from independently written components. P-COM2 specifications incorporate information on behaviors and implementations of components to enable qualification of components for effectiveness in specific application instances and execution environments. The programming model targets development of families of related programs. One objective is to be able to compose programs which are near-optimal for given application instances and execution environments. Component-oriented development is motivated for parallel and distributed computations. The programming model is defined, described, and illustrated with a simple example. The compilation process is briefly defined and described. Experience with one more complex application, a generalized fast multipole solver, is sketched, including performance data, some of which was surprising.
1 Introduction

This paper presents a language, P-COM2 (Parallel COMposition from COMponents), and a compiler that composes parallel and distributed programs from independently written components, and illustrates their application. P-COM2 is an interface definition language which incorporates information on behaviors and implementations of components to enable qualification of components for effectiveness in specific application instances and execution environments. The general strategy is somewhat similar to composition of programs in the Web Services paradigm, but the goals are quite different. A component is a serial program which is encapsulated by an associative interface [8,11] which specifies the properties of the component. The composition implemented by the compiler is based on matching of associative interfaces and generates as final output either an MPI program or multi-threaded code for a shared memory multi-processor. The CODE [26] parallel programming system is used as an intermediate language and is the immediate target language of the compositional compiler. Component-oriented software development is one of the most active and significant threads of research in software engineering [1,10,15,29]. There are many motivations for raising the level of abstraction of program composition from individual statements to components with substantial semantics. It is often the case
that there is a family of applications which can be generated from a modest number of appropriately defined components. Optimization and adaptation for different execution environments is readily accomplished by creating and maintaining multiple versions of components rather than by direct modification of complete applications. Programs generated and maintained as compositions of components are much more understandable and thus much more readily modifiable and maintainable. Even though there are additional benefits to component-oriented development in the distributed and parallel domain (CORBA, Web Services, etc., which are very much component-oriented development systems, are not commonly used for development of parallel or high-performance applications), there has been relatively little research on component-based programming in the context of high-performance parallel and distributed programming. (Section 8 summarizes related work.) The execution environments for parallel programs are much more diverse than those for sequential programs. It is often necessary to maintain multiple versions of parallel programs for different execution environments. Program development by composition of components enables adaptation of parallel programs to different execution environments and optimization for different application instances by replacement of components. Adaptive control of parallel and distributed programs [3] is also enabled by replacement of components. Management of adaptations such as the degree of parallelism and load balancing is readily accomplished at the component level. Parallelism is most often determined by the number of instances of a component which are executing in parallel (SPMD parallelism). The language and the compiler explicitly make provision for dynamic SPMD parallelism. It has also been found that viewing programs as compositions of components tends to lead to programs with better structuring and better performance, even for sequential versions. P-COM2 approaches component-oriented development of parallel and distributed programs from a different perspective than most other projects. The principal concerns and goals for the project have been to enable automation, or at least partial automation, of composition through a compiler, to develop a mechanism enabling runtime adaptation of parallel and distributed programs at the component level [3], and to enable performance-oriented, evolutionary development of parallel and distributed programs. This paper covers the first topic, compiler-implemented composition. The interface definition language incorporates information on component properties and behaviors as well as function/procedure/method interfaces, including an implicit state machine to sequence invocations of components with internal state. Additionally, the system targets development of families of programs, with instances of the family targeting given application instances or given execution environments. The language and compiler have been used in implementing some substantial programs. One of the applications is to construct components and compose programs for solving linear equations using a fast multipole solver (FMM). The FMM code can be formulated in either a memory-intensive or a computation-intensive form, or at points in between. It is complex to write a parameterized program spanning these options, but they are readily composed from parameterized components. The compiler has also been applied in the composition of parallel
method of lines (MOL) codes for solving time-dependent partial differential equations. MOL also has a great number of possible configurations and runtime adaptations. The remainder of the paper is organized in the following way. Section 2 explains some terms and concepts used in the compiler. Next, the programming model, the language, and the compilation process are described in Sections 3, 4, and 5, respectively. Then a simple program, a macro-parallel FFT algorithm [32], is used to introduce the programming model, the programming language (which is an interface definition language), and the compilation process in Section 6. The components, the compilation process, and a short discussion of the FMM code are given in Section 7. Section 8 discusses related work in this area. The paper is concluded and some future directions are discussed in Section 9.
2 Definition of Terms and Concepts

Domain Analysis: Domain analysis [5] identifies the components from which a family of programs in the domain can be constructed and identifies a set of attributes in which the properties and behaviors of the components can be defined. It is usually the case that applications require components from multiple domains.

Component: A component is one or more sequential computations, an interface which specifies the information used for selection and matching of components, and a state machine which manages the interface, the interactions with other peers, and the invocation of the sequential computations. An interaction, which may be initiated as an incoming message (or set of messages) or as an invocation of a transaction, will trigger an action which is associated with some state of the state machine. The action may include execution of a sequential computation.

Sequential Computation: A computation is a unit of work that implements some atomic functionality. A computation is a sequential program which refers only to its own local variables and its input variables.

Associative Interface: An associative interface [8] encapsulates a component. It describes the behavior and functionality of a component. One of the most important properties of associative interfaces is that they enable differentiation among alternative implementations of the same component. These interfaces are called "associative" because selection and matching is similar to operations on content-addressable memories. An associative interface consists of an accepts specification and a requests specification.

Accepts Specification: An accepts interface specifies the set of interactions in which a component is willing to participate. The accepts interface for a component is a set of three-tuples (profile, transaction, protocol). A profile is a set of attribute/value pairs. Components have a priori agreement on the set of attributes and values which can appear on the accepts and requests interfaces of a component.
A transaction specification incorporates one or more function signatures, including the data types, functionality, and parameters of the unit of work to be executed, and a state machine which manages the order of execution of the units of work. The state machine is defined in the form of conditional expressions over states and function signatures. A transaction can be enabled or disabled based on its current state, and its current state can be used in runtime binding of the components. Multiple transactions controlled by the state machine can be used to represent complex interactions such as precedence of transactions, “and” relationships among transactions acting as a barrier, and “or” relationships between transactions representing alternative ways of executing the component. A protocol defines a sequence of simple interactions necessary to complete the interaction specified by the profile. The most basic protocol is data-flow (continuations), which is defined as executing the functionality of a component and transmitting the output to a successor defined by the selectors at that component without returning to the invoking component. More complex interaction protocols such as call-return and persistent transactions are planned but not yet implemented.

Requests Specification: A requests interface specifies the set of interactions which a component must initiate if it is to complete the interactions it has agreed to accept. The requests interface is a set of three-tuples (selector, transaction, protocol). A component can have multiple tuples in its requests interface to implement its required functionality. A selector is a conditional expression over the attributes of all the components in the domain. Transaction specifications are similar to those for accepts specifications. Protocol specifications are as given for accepts specifications.

Start Component: A start component is a component that has at least one requests interface and no accepts interface. Every program requires a start component. There can be only one start component in a program, which provides a starting point for the program.

Stop Component: A stop component is a component that has at least one accepts interface and no requests interface. A stop component is also a requirement for termination of a program. There can be more than one stop component of a program, denoting multiple ending points for the program.
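To make the accepts/requests machinery concrete, here is a much-simplified, hypothetical model in which a profile is a dictionary of attribute/value pairs, a selector is a predicate over such a dictionary, and a transaction is reduced to a name plus argument types; real P-COM2 matching also involves the state machine and the protocol. The attribute names below echo the FFT domain analysis discussed later, but the transaction name and types are invented for illustration.

```python
# Simplified model of associative-interface matching.
def selector_matches(selector, profile):
    try:
        return bool(selector(profile))
    except KeyError:            # profile lacks an attribute the selector needs
        return False

def transactions_match(requested, accepted):
    return (requested["name"] == accepted["name"]
            and requested["arg_types"] == accepted["arg_types"])

fft_row_profile = {"Domain": "fft", "Input": "matrix",
                   "Element_type": "complex", "Apply_per_row": True}

selector = lambda p: (p["Domain"] == "fft" and p["Element_type"] == "complex"
                      and p["Apply_per_row"])

requested_txn = {"name": "compute_fft", "arg_types": ("mat2", "mat2", "int")}
accepted_txn = {"name": "compute_fft", "arg_types": ("mat2", "mat2", "int")}

print(selector_matches(selector, fft_row_profile)
      and transactions_match(requested_txn, accepted_txn))    # True
```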
3 Programming Model
The domain-based, component-oriented programming model targets development of a family of programs rather than a single program. The programming model has two phases: development of families of components and specification of instances from the family of programs which can be instantiated from the sets of components.
3.1 Component Development

The set of components which enables construction of a family of application programs may include components which utilize different algorithms for different problem instances or different implementation strategies for different execution environments. A program for a given problem instance or given execution environment is composed from appropriate components by selecting desired properties for the components and the properties of the execution environment in the Start component. The steps for developing components are:
a. Domain Analysis – Execute the necessary domain analyses. It is usually the case that applications require components from multiple domains.
b. Component Development – Specify and either design and implement or discover in existing libraries the family of components identified in the domain analysis, in an appropriate sequential procedural language.
c. Encapsulate – Encapsulate the components in the interface definition language, using the attributes identified in the domain analysis to specify associative interfaces for the components. The interfaces must differentiate the components by identifying their properties in terms of the attributes defined in the domain analysis.
3.2 Program Instance Development

The steps in specifying a given instance of an application are:
a. Analyze the problem instance and the target execution environment. Identify the attributes and attribute values which characterize the components desired for this problem instance and execution environment.
b. Identify the components from which the application instance will be composed. If the needed components are not available, then some additional implementations of components may be necessary, together with an extension of the domain analysis.
c. Identify the dependence graph of the application instance. The dependence graph is expressed in terms of the components identified. Specify the number of replications desired for parallelism and for fault-tolerance. Incorporate these specifications into the component interfaces, or as parameters in the Start component if parameterized parallelism has been incorporated into the component interfaces.
d. Define a Start component which initializes the replication parameters and sets the attribute values needed to ensure that the desired components are selected and matched.
e. Define at least one Stop component.
4 The Interface Definition Language

The fundamental concepts underlying the interface definition language were given in Section 2. The syntax will be illustrated in the example in Section 6. Here we discuss what is expressed in the interfaces specifiable in the language. The language is rooted in the domain analyses for the program family. The domain analyses specify problem domain knowledge. It is expected that an application developer should be able, once familiar with the concepts of domain analysis, to generate domain analyses for a family of codes in her/his area of expertise. The associative interfaces define the behaviors of the components and will usually give properties of a given component's implementation of its functionality. Properties of desired implementations, such as the degree of parallelism for a given component, are also specified in the associative interface as runtime-determined parameters. It is often desirable for a component to retain state across executions. There may be precedence or sequencing relations among the transactions implemented by a component. Precedence and sequencing information is also specified in the interface as an implicit state machine implemented as a conditional expression over the states of the components and the transaction specifications. Finally, the protocol specification enables choice among interaction modes (although only one is currently implemented).
5 Compilation Process

The conditional expression of a selector is a template which has slots for attribute names and values. The names and values are specified in the profiles of other components of the domain. Each attribute name in the selector expression of a component behaves as a variable. The attribute variables in a selector are instantiated with the values defined in the profile of another component. The profile and the selector are said to match when the instantiated conditional expression evaluates to true. The source program for the compilation process is a start component with a sequential computation which implements initialization for the program and a requests interface which specifies the components implementing the first steps of the computation and one or more libraries to search for components. The libraries should include the components needed to compose a family of applications specified by a domain analysis. The set of components which is composed to form a program is primarily dependent on the requests interface of the start component. The target language for the compilation process is a generalized data flow graph as defined in [26]. A node in this data flow graph consists of an initialization, a firing rule, a sequential computation, and a routing rule for distribution of the outputs of the computation. There are two special node types, a start node and a stop node. Acceptable data flow graphs must begin with a start node and terminate on a stop node. The compilation process starts by parsing the associative interface of the start component. The compiler then searches a specified list of libraries for components
whose accepts interface matches with the requests interface of the start component. The matching process is actually not much more than a sophisticated type matching. If the matching between the selector of one component and the profile of another component is successful, the compiler tries to match the corresponding transactions of the requests and accepts interfaces. The transactions are said to match when all of the following conditions are true:
1) The names of the two transactions are the same.
2) The number of arguments of each of the two transactions is the same.
3) The data type of each argument in the requests transaction is the same as that of the corresponding argument in the accepts transaction.
4) The sequencing constraint given by the conditional expression in the accepts transaction specification (the state machine) is satisfied.
Finally, the protocol specifications must be consistent. When compilation of the start component is completed, it is converted into a start node [26] for the data flow graph which will represent the parallel program, and each match of a requests interface to an accepts interface results in the addition of a node to the data flow graph which is being incrementally constructed by the compilation process, along with an arc connecting this new node to the node which is currently being processed by the compiler. If there is a replication clause in a transaction specification, then at runtime the specified number of replicas of the matched component are instantiated and linked with data flow arcs. This searching and matching process for the requests interfaces is applied recursively to each of the components in the matched set. The composition process stops when no more matching of interfaces is possible, which will always occur with a Stop component since a Stop component has no requests interface. Compilation of a stop component results in the generation of a stop node for the data flow graph. The compiler will signal an error if a requests interface cannot be matched with an accepts interface of a desired component. The data flow graph which has been generated is then compiled to a parallel program for a specific architecture by compilation processes implemented in the CODE [26] parallel programming system.
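A simplified sketch of this recursive composition loop follows; the dictionary-based component and library representations are hypothetical, and the four matching conditions above are collapsed into a single signature comparison (replication clauses and protocols are omitted).

```python
# Sketch of the compositional compilation loop: starting from the start
# component, repeatedly match each requests interface against the accepts
# interfaces of library components, adding a data-flow node and arc per match.
def find_match(request, library):
    for component in library:
        for accept in component.get("accepts", []):
            if (request["selector"](accept["profile"])
                    and request["transaction"] == accept["transaction"]):
                return component
    return None

def compose(start, library):
    graph = {"nodes": [start["name"]], "arcs": []}
    worklist = [start]
    while worklist:
        current = worklist.pop()
        for request in current.get("requests", []):
            match = find_match(request, library)
            if match is None:
                raise ValueError(f"no component accepts {request['transaction']}")
            if match["name"] not in graph["nodes"]:
                graph["nodes"].append(match["name"])
                worklist.append(match)       # recurse on the new component
            graph["arcs"].append((current["name"], match["name"]))
    return graph
```

Because a Stop component has no requests interface, its node contributes no further matches and the worklist eventually drains, mirroring the termination condition described above.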
6 Example Program

This section presents an example program showing the complete process of developing a parallel program for the fast Fourier transformation (FFT) of a matrix in two dimensions from simple components. The algorithm presented is an adaptation of Swarztrauber's multiprocessor FFT algorithm [32]. This problem is simple enough to cover in detail and illustrates many of the important concepts, such as stateful components and precedence constraints. Given an N x M matrix of complex numbers where both N and M are powers of 2, we want to compute the 2D FFT of the complex matrix. This 2D FFT can be described in terms of 1D FFTs, which helps in parallelizing the algorithm. Let us assume that there are P available processors, where P is also a power of 2. In this case the domain analysis is straightforward and is an analysis of the algorithm itself. The steps of the algorithm are the following:
a) Partitioning the matrix row-wise (horizontally) into P submatrices, one for each processor.
b) Sending these submatrices to each of the P processors for computation. The size of each submatrix is N/P x M.
c) Each processor performs a 1D FFT on every row of the submatrix that it received.
d) Collecting these 1D FFTs and then transposing the N x M matrix. The resulting matrix is of size M x N.
e) Splitting the M x N matrix row-wise into P submatrices. The size of each submatrix is M/P x N.
f) Sending these submatrices to each of the P processors for computation.
g) Again, each processor performs a 1D FFT on every row of the submatrix that it received.
h) Collecting all the submatrices from the P processors and transposing the M x N matrix to get an N x M matrix.
The resulting N x M matrix is the 2D FFT of the original matrix. This simple analysis suggests that all of the instances of this algorithm can be created by composing instances of three components: a one-dimensional FFT component, a component which partitions and distributes matrices, and a component which merges rows or columns to recover a matrix and which may optionally transpose the recovered matrix. Let us name these components fft_row, distribute, and gather_transpose, respectively. One could as well formulate the algorithm with separate components for merge and transpose, but that could introduce additional communication. Moreover, the algorithm can use any 1D FFT algorithm to calculate the 2D FFT of the matrix. Additionally, the choice of implementation for transposition of an array may vary with the execution environment. Note that each of the components above can be reused, as each of them is actually used twice in the algorithm. These components could reasonably be expected to be found as “off the shelf” components which can be reused from linear algebra and FFT libraries. Other than the above three components, we need a component that will read/initialize the matrix and one component to print out the final result. Let us name these components initialize and print. The component to read/initialize the array may be the Start component and the print component may be the Stop component. The Start component will be written to specify the set of component instances which will be composed for a given data set and target execution environment.
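The algorithm itself can be checked in a few lines of NumPy; the sketch below mirrors the roles of distribute, fft_row, and gather_transpose (with the P "processors" simulated by simply splitting the matrix), not the actual P-COM2 components.

```python
import numpy as np

def fft2d_by_rows(X, P):
    """2-D FFT computed as two rounds of row-wise 1-D FFTs with a transpose
    in between, mimicking the distribute / fft_row / gather_transpose steps."""
    def round_of_row_ffts(M):
        # "distribute": split row-wise into P submatrices, one per processor
        parts = np.array_split(M, P, axis=0)
        # "fft_row": each processor performs 1-D FFTs on its rows
        parts = [np.fft.fft(p, axis=1) for p in parts]
        # "gather_transpose": collect the submatrices and transpose
        return np.vstack(parts).T
    return round_of_row_ffts(round_of_row_ffts(X))

rng = np.random.default_rng(2)
X = rng.standard_normal((8, 16)) + 1j * rng.standard_normal((8, 16))
assert np.allclose(fft2d_by_rows(X, P=4), np.fft.fft2(X))
```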
Fig. 1. Data Flow Graph of 2D FFT Computation
The dependence graph of the program in terms of these components is shown in Fig. 1. This data flow graph suggests an optimization: creating a new component which combines the functions of distribute and gather_transpose. This, depending on the mapping of nodes to processors, could eliminate two transmissions of the large matrix. As shown in Figure 1, parallelism can be achieved through the use of multiple fft_row components. Note that the gather_transpose component has to keep
track of its state, as it sends data to the distribute component on its first execution and to the print component after its second execution. Once we have identified the components, the next step is to complete the domain analysis by defining a list of attributes through which we can describe the functions, behaviors, and implementations of a component and their instantiations. When some service is required, it is described in terms of the attributes in the format of accepts and requests interfaces. The two domains from which this computation is composed are the matrix and fft domains. There is a generic attribute “Domain” which is required for multi-domain problems. The matrix domain has these distinct attributes:
a) Function: an attribute of type string. Describes the component's function.
b) Element_type: an attribute of type string. Describes the type information of the input matrix.
c) Distribute_by_row: an attribute of type boolean. Describes whether the component partitions the matrix by row or by column.
The fft domain has these attributes:
a) Input: an attribute of type string. Describes the input structure.
b) Element_type: an attribute of type string. Describes the type information of the input.
c) Algorithm: an attribute of type string.
d) Apply_per_row: an attribute of type boolean. Describes whether to apply the FFT function per row or per column.
The completed domain analysis for the components is shown in Figure 2. Once the domain analysis is done, we encapsulate the components in associative interfaces using the attributes and transactions. As shown in Figure 3, the requests interface of the initialize component specifies that it needs a component that can distribute a matrix row-wise. The interface passes the real and imaginary parts of the matrix, the dimension of the matrix, and the total number of processors to the distribute component using the transaction specification. The data type mat2 is defined as a two-dimensional array data type.
Fig. 2. Domain Analysis of the Components
Fig. 4.a shows the accepts interface of the distribute component. This distribute component assumes that the matrix which it partitions and distributes will be merged.
This is specified in Figure 4b. The first selector interfaces to the gather_transpose component, providing the size of each of the submatrices, the total number of submatrices to collect at the gather_transpose component, and also state information which is needed in the gather_transpose component. The second selector in Figure 4b specifies that it needs p instances of the fft_row component and distributes the submatrices to each of the replicated components along with their size. The construct “index [p]” is used to specify that multiple copies of the fft_row component are needed. The construct “[]” with the transaction argument is used to transmit different data to different copies of the component. For different transmission patterns, different constructs may be used in the language of the interface. Note that the number of instances of the fft_row component is determined at runtime.
Fig. 3. Requests Interface of Initialize Component
Fig. 4a. Accepts Interface of distribute component
Fig. 5a specifies that this implementation of the fft_row component uses the “Cooley-Tukey” algorithm [13]. The fft_row component requires no knowledge of how many copies of itself are being used. From Fig. 5b, we can see that the instance number of the fft_row component is passed to the gather_transpose component using the variable “me”. Figure 6a illustrates the use of the “>” operator between the transactions to describe the precedence relationship between the transactions. The second transaction cannot execute until the first transaction is completed. The gather_transpose component collects the submatrices one by one through the second transaction in the interface. P-COM2 incorporates precedence ordering operations sufficient to express simple state machines for the management of interactions among components. As shown in Fig. 6b, the first requests interface of the gather_transpose component is used to connect to the distribute component. The second interface connects to the print component. The variable “state” is used to enable one of the transactions based on the current state of the gather_transpose component.
Fig. 4b. Requests Interface of distribute component
Fig. 5a. Accepts Interface of Fft_row Component
Fig. 5b. Requests Interface of Fft_row Component
Fig. 6a. Accepts Interface of Gather_transpose Component
Fig. 6b. Requests Interface of Gather_transpose Component
7 Case Study - A Generalized Fast Multipole Solver

The Fast Multipole Method (FMM) [20,21], which solves the N-body electrostatics problem in O(N) rather than O(N^2) operations, is central to fast computational strategies for particle simulations. The FMM is also useful for iterative solution of linear algebraic equations associated with approximate solution of integral equations. There the FMM is used for O(N) matrix-vector multiplication. In order to adapt the FMM for applications in fluid and solid mechanics, the classical electrostatics problem must be replaced with a generalized electrostatics problem [17,18]. Such problems involve vector and tensor valued charges, which means that one generalized electrostatics problem is equivalent to several classical electrostatics problems which share the same geometry. In particular, the FLEMS code [17] relies on the generalized electrostatics problem that is equivalent to 13 classical electrostatics problems. We have performed a domain analysis for the FMM for generalized (multiple charge type) electrostatics. For example, the FMM tree has certain attributes, such as its depth and its number of charges per cell, and the application component has an attribute with values that select between classical and generalized electrostatics. For generalized electrostatics the number of charge types is an attribute. For each
attribute, the analysis defines a range of legal values. Components for a family of FMM codes for generalized electrostatics were derived from the FLEMS FMM implementation. These components were given associative interfaces that define their properties and behaviors and were annotated with domain attributes and architectural attributes. An instance of the component family can be specified by providing specific values for each attribute. An example of an attribute that would lead to different implementations is the number of charge types to be processed simultaneously. There is a family of space-computation tradeoffs which can be applied in the matrix-structured formulation [30] of the FMM algorithm and which can be chosen to optimize the code for a given execution environment and problem specification. These include:
- simultaneous computation of cell potentials for multiple charge types,
- use of optimized library routines for vector-matrix multiply,
- use of optimized library routines for matrix-matrix multiply,
- loop interchange over the two outer loops to improve locality (within a component), and
- the number of terms in the multipole expansion.
There are many variants of these structures and interactions among them. The original FMM implementation in the FLEMS code is approximately 4500 lines in length, with the logic distributed throughout the code. Manual construction of optimized versions for even a modest number of execution environments would lead to rather complex code. But a small number (eight) of components, characterized by the number of charges which are simultaneously computed and the number of terms in the multipole expansion, suffice to realize an important subset of execution-environment-optimized codes. The FMM includes five translation theorems:
- Particle charge to Multipole (P2M, applied at the finest partitioning level)
- Multipole to Multipole (M2M, applied at all partitioning levels, from the finest to the coarsest)
- Multipole to Local (M2L, applied at all partitioning levels)
- Local to Local (L2L, applied at all partitioning levels, from the coarsest to the finest)
- Local to Particle potential and forces (L2P, applied at the finest partitioning level)
Two kinds of components are needed to structure the FMM computation framework. The first category comes directly from the FMM algorithm: the five translation theorems (charges-to-multipole, multipole-to-multipole, multipole-to-local, local-to-local, and local-to-potential-and-force) and the direct-interaction calculation belong to this category. The second category contains the communication components, distribute and collect, which also derive from the FMM algorithm, since they implement distribution and collection according to the interaction lists for each partition of the domain. The data flow graph for the FMM code on two processors is shown in Fig. 7.
Fig. 7. Data flow Graph of FMM code
An extensive set of performance studies was made comparing the original and componentized sequential codes. Preliminary results are reported in [16], and a more detailed paper is in preparation. Contrary to conventional wisdom, the sequential componentized code is up to 15 times faster than the original implementation, which had itself been optimized by several generations of students and post-doctoral fellows. This surprising result is largely due to specialization of functionality based on selection of optimal components, and to replacing loop implementations of matrix-matrix multiply with BLAS implementations. Table 1 shows a small sample of the performance data obtained. The data was taken on a Linux cluster of 1.8 GHz Pentium III machines with a 100 Mbit Ethernet interconnect. There are approximately half a million charges in this system. Two factors should be noted: (i) speedup is near-linear for this small number of processors, and (ii) the time increases less than linearly with the number of charge types, due to optimizations local to components.
8 Related Research
There has been relatively little research on component-based programming in the context of parallel and distributed programs. Darwin [25] is a composition and configuration language for parallel and distributed programs. Darwin uses a
configuration script to compose programs from components. This composition process is effectively manual. In our approach, the composition information encapsulates the components themselves; as a result, the compiler can choose the required component automatically. The component-based software development environment [23,28] of the SciRun project features powerful graphical composition of data flow graphs of components, which are compiled to parallel programs. H2O [31] is a component-oriented framework for composition of distributed programs based on web services. Triana [33] is a graphical development environment for composing distributed programs from components, targeting peer-to-peer execution environments. G2 [24] composes distributed parallel programs from web services through Microsoft .NET. Armada [27] composes distributed/parallel programs specialized to data movement and filtering. The Common Component Architecture (CCA) project [6] is a major research and development project focused on composition of parallel programs from components. One primary goal of CCA is to enable composition of programs from components written in multiple languages. CCA has developed interface standards, and the implementations of the CCA interface specifications are object-oriented. Several tools, XCAT [19], Ccaffeine [14], and BABEL [7,9], implement the CCA interface specification. Component composition is either graphical or through scripts and makefiles. CCA components interact through two types of ports. The first type is the provides port, an interface that a component provides to other components. The second type is the uses port, an interface through which a component connects with other components that it requires. These port types exhibit some similarities to our accepts and requests transaction specifications. However, the details and implementations are quite different, as we have focused on incorporating the information necessary to enable composition by compilation. ArchJava [4] annotates ports with provides and requires methods, which helps the programmer to better understand the dependency relations among components by exposing them explicitly. The accepts and requests interfaces of a component incorporate signatures, as do ArchJava's provides and requires methods. The accepts and requests interfaces also include profiles and precedence specifications carrying semantic information and enabling automatic program composition. The attribute name/value pairs in profiles are used for both selecting and matching components, thereby providing semantics-based matching in addition to type checking of the matching interfaces. The use of associative interfaces has been reported earlier in the literature: associative interfaces are used in a broadcast-based coordination model [12], but that model uses run-time composition, whereas this paper presents compile-time composition. Associative interfaces have also been used in the composition of performance models [11].
9 Conclusion and Future Research
This paper has presented a programming model, a programming system, and a compiler for composing distributed and parallel programs from independently written components. The conceptual foundations are domain analysis, support for families of programs, integration and automation of discovery and linking, and management of components with state. The component-based development method described and illustrated in this paper is not intended for development of small or "one-off" applications. The investment of effort in domain model development and in characterization and encapsulation of components is not trivial, and these software engineering methods are not typically part of the development process for high performance applications. The target applications are those where several instances of an application are to be developed, where the application may need to be optimized for several different execution environments, or where the application is expected to evolve over a substantial period of time. In such cases the investment of effort in domain model development and in characterization and encapsulation of components can be expected to show a return. That being said, the parallel programs which have been developed to demonstrate and evaluate the method show good performance and are readily evolvable. We are currently investigating the feasibility of combining run-time [12] and compile-time composition of associative interfaces. We plan to implement a hybrid graphical-composition and compiler-based composition system. We also plan to integrate the compositional compiler with the Broadway annotational compiler [22] to overcome the problem of "too many components." Finally, we are working on additional applications, including an hp-adaptive finite element code.
Acknowledgements
This research was supported by the National Science Foundation under grant number 0103725, "Performance-Driven Adaptive Software Design and Control," and grant number 0205181, "A Computational Infrastructure for Reliable Computer Simulations." The experiments were run on the facilities of the Texas Advanced Computation Center.
References
[1] Achermann F., Lumpe M., et al., Piccola - a Small Composition Language, in Formal Methods for Distributed Processing - A Survey of Object-Oriented Approaches, pp. 403-426, Cambridge University Press, 2001.
[2] Adve V. S., Bagrodia R., et al., POEMS: End-to-end Performance Design of Large Parallel Adaptive Computational Systems, in IEEE Transactions on Software Engineering, vol. 26(11): pp. 1027-1049, November 2000.
[3] Adve V., Akinsanmi A., et al., Model-Based Control of Adaptive Applications: an Overview, in Proceedings of the 16th International Parallel and Distributed Processing Symposium (IPDPS 2002), April 2002.
[4] Aldrich J., Chambers C., et al., ArchJava: connecting software architecture to implementation, in Proceedings of the 22nd International Conference on Software Engineering, pp. 187-197, May 2002.
[5] Arango G., Domain Analysis: From Art Form to Engineering Discipline, in Proceedings of the Fifth International Workshop on Software Specification and Design, pp. 152-159, 1989.
[6] Armstrong R., Gannon D., et al., Toward a Common Component Architecture for High-performance Scientific Computing, in Proceedings of the 8th IEEE International Symposium on High Performance Distributed Computation, pp. 115-124, August 1999.
[7] Babel: Components @ LLNL, http://www.llnl.gov/CASC/components/babel.html.
[8] Bayerdorffer B., Associative Broadcast and the Communication Semantics of Naming in Concurrent Systems, Ph.D. Dissertation, Dept. of Computer Sciences, University of Texas at Austin, December 1993.
[9] Bernholdt D. E., Elwasif W. R., et al., A Component Architecture for High-Performance Computing, in Proceedings of the Workshop on Performance Optimization via High-Level Languages and Libraries (POHLL-02), June 2002.
[10] Birngruber D., Coml: Yet another, but simple component composition language, in Workshop on Composition Languages, WCL'01, pp. 1-13, September 2001.
[11] Browne J. C. and Dube A., Compositional Development of Performance Models in POEMS, in International Journal of High-Performance Computing Applications, vol. 14(4), Winter 2000.
[12] Browne J. C., Kane K., et al., An Associative Broadcast Based Coordination Model for Distributed Processes, in Proceedings of COORDINATION 2002, LNCS 2315, pp. 96-110, 2002.
[13] Cooley J. W. and Tukey J. W., An Algorithm for the Machine Computation of the Complex Fourier Series, in Mathematics of Computation, vol. 19, pp. 297-301, April 1965.
[14] Ccaffeine - a CCA component framework for parallel computing, http://www.ccaforum.org/ccafe/
[15] Czarnecki K. and Eisenecker U. W., Components and Generative Programming, in Proceedings of the Joint European Software Engineering Conference and ACM SIGSOFT International Symposium on the Foundations of Software Engineering, Springer-Verlag LNCS 1687, pp. 2-19, 1999.
[16] Deng G., New approaches for FMM implementation, Masters Thesis, Dept. of Manufacturing Systems Engineering, University of Texas at Austin, August 2002.
[17] Fu Y., Klimkowski K. J., et al., A fast solution method for three-dimensional many-particle problems of linear elasticity, in International Journal for Numerical Methods in Engineering, vol. 42(7): pp. 1215-1229, 1998.
[18] Fu Y. and Rodin G. J., Fast solution method for three-dimensional Stokesian many-particle problems, in Commun. Numer. Meth. Engng, vol. 16(2): pp. 145-149, 2000.
[19] Govindaraju M., Krishnan S., et al., Merging the CCA Component Model with the OGSI Framework, in Proceedings of the 3rd International Symposium on Cluster Computing and the Grid (CCGrid2003), pp. 182-189, May 2003.
[20] Greengard L. and Rokhlin V., A fast algorithm for particle simulations, in Journal of Computational Physics, vol. 73(2): pp. 325-348, 1987.
[21] Greengard L. and Rokhlin V., A new version of the fast multipole method for the Laplace equation in three dimensions, in Acta Numerica, vol. 6: pp. 229-270, 1997.
[22] Guyer S. and Lin C., An Annotation Language for Optimizing Software Libraries, in Proceedings of the Second Conference on Domain Specific Languages, pp. 39-53, October 1999.
[23] Johnson C. R., Parker S., et al., Component-Based Problem Solving Environments for Large-Scale Scientific Computing, in Journal on Concurrency and Computation: Practice and Experience, vol. 14, pp. 1337-1349, 2002.
[24] Kelly W., Roe P., et al., An Enhanced Programming Model for Internet Based Cycle Stealing, in Proceedings of the 2003 International Conference on Parallel and Distributed Processing Techniques and Applications, pp. 1649-1655, June 2003.
[25] Magee J., Dulay N., et al., Structuring parallel and distributed programs, in Software Engineering Journal, vol. 8(2): pp. 73-82, March 1993.
[26] Newton P. and Browne J. C., The CODE 2.0 Graphical Parallel Programming Language, in Proceedings of the ACM International Conference on Supercomputing, July 1992.
[27] Oldfield R. and Kotz D., Armada: a parallel I/O framework for computational grids, in Future Generation Computer Systems, vol. 18(4), pp. 501-523, 2002.
[28] Parker S. G., A Component-based Architecture for Parallel Multi-Physics PDE Simulation, in Proceedings of the International Conference on Computational Science, Springer-Verlag LNCS 2331, pp. 719-734, April 2002.
[29] Seiter L., Mezini M., et al., Dynamic component gluing, in OOPSLA Workshop on Multi-Dimensional Separation of Concerns in Object-Oriented Systems, November 1999.
[30] Sun X. and Pitsianis N., A Matrix Version of the Fast Multipole Method, in SIAM Review, vol. 43(2): pp. 289-300, 2001.
[31] Sunderam V. and Kurzyniec D., Lightweight Self-Organizing Frameworks for Metacomputing, in Proceedings of the 11th IEEE International Symposium on High Performance Distributed Computing HPDC-11 (HPDC'02), pp. 113-124, July 2002.
[32] Swarztrauber P. N., Multiprocessor FFTs, in Journal of Parallel Computing, vol. 5: pp. 197-210, 1987.
[33] Taylor I., Shields M., et al., Distributed P2P Computing within Triana: A Galaxy Visualization Test Case, in Proceedings of the International Parallel and Distributed Processing Symposium (IPDPS 2003), April 2003.
Supporting High-Level Abstractions through XML Technology Xiaogang Li and Gagan Agrawal Department of Computer and Information Sciences Ohio State University, Columbus OH 43210 {xgli,agrawal}@cis.ohio-state.edu
Abstract. Development of applications that process large scientific datasets is often complicated by complex and specialized data storage formats. In this paper, we describe the use of XML technologies for supporting high-level programming methodologies for processing scientific datasets. We show how XML Schemas can be used to give a high-level abstraction of a dataset to an application developer. A corresponding low-level Schema describes the actual layout of data and is used by the compiler for code generation. The compiler needs a systematic way for translating the high-level code to a low-level code. Then, it needs to transform the generated low-level code to achieve high locality and efficient execution. This paper describes our approach to these two problems. By using Active Data Repository as the underlying runtime system, we offer an XML based front-end for storing, retrieving, and processing flat-file based scientific datasets in a cluster environment.
1 Introduction
Processing and analyzing large volumes of data is playing an increasingly important role in many domains of scientific research. Large datasets are being created by scientific simulations, or arise from digitization of images and/or from data collected by sensors and other instruments. A variety of analyses can be performed on such datasets to better understand scientific processes. Development of applications that process large scientific datasets is often complicated by complex and specialized data storage formats. When the datasets are disk-resident, understanding the layout and maintaining high locality in accessing them is crucial for obtaining reasonable performance. While traditional relational database technology supports high-level abstractions and standard interfaces, it is more suitable for storing and retrieving datasets than for complex analyses on such datasets [12]. Recently, there has been a lot of interest in XML and other related technologies developed by the W3C consortium [5]. XML is a flexible exchange format that can represent many classes of data, including structured documents, heterogeneous and semi-structured records, data from scientific experiments and simulations, and digitized images. One of the key features of XML is XML Schemas,
which serve as a standard basis for describing the contents and structure of a dataset. In this paper, we describe the use of XML technologies for supporting highlevel programming methodologies for processing scientific datasets. We particularly show how XML Schemas can be used to give a high-level abstraction of a dataset to the application developers, who can use such a high-level Schema for developing the applications. A corresponding low-level Schema describes the actual layout of data, but is hidden from the programmers. The compiler can use the source code, the low-level Schema, and the mapping from the high-level Schema to the low-level Schema for code generation. Two key compiler techniques are required for supporting such an approach. First, we need a systematic way to translate the high-level code to the low-level code. Second, we need to transform the generated low-level code to achieve high locality and efficient execution. This paper describes our approach to these two problems. Our techniques have been implemented in a compilation system. By using Active Data Repository [6, 7] as the underlying runtime system, we offer an XML based front-end for storing, retrieving, and processing flat-file based scientific datasets in a cluster environment. As part of our system, we use the XML query language XQuery [4] for writing queries using high-level abstractions. XQuery is derived from declarative, database, as well as functional languages. Though XQuery significantly simplifies the specification of processing, compiling it to achieve efficient execution involves a number of new challenges. Our recent related paper has addressed two key issues, i.e, replacing recursive reductions by iterative constructs and type-inferencing to translate from XQuery to an imperative language [11].
2 Background: XML, XML Schemas, and XQuery
This section gives background on XML, XML Schemas, and XQuery.
2.1 XML and XML Schemas
XML provides a simple and general facility which is useful for data interchange. Though the initial development of XML was mostly for representing structured and semi-structured data on the web, XML is rapidly emerging as a general medium for exchanging information between organizations. For example, a hospital generating medical data may make it available to other health organizations in XML format. Similarly, researchers generating large data-sets from scientific simulations may make them available in XML format to other researchers needing them for further experiments. XML models data as a tree of elements. Arbitrary depth and width are allowed in such a tree, which facilitates storage of deeply nested data structures, as well as large collections of records or structures. Each element contains character data and can have attributes composed of name-value pairs. An XML document
Fig. 1. XML and XML Schema
represents elements, attributes, character data, and the relationships between them by simply using angle brackets. Note that XML does not specify the actual layout of large data on disk. Rather, if a system supports a certain data-set in an XML representation, it must allow any application expecting XML data to properly access this data-set. Applications that operate on XML data often need guarantees on the structure and content of the data. The XML Schema proposals [2, 3] give facilities for describing the structure and constraining the contents of XML documents. The example in Figure 1(a) shows an XML document containing records of students. The XML Schema describing this XML document is shown in Figure 1(b). For each student tuple in the XML file, it contains two string elements to specify the last and first names, one date element to specify the date of birth, and one element of float type for the student's GPA.
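Because the listing for Figure 1 is not reproduced here, the following fragment sketches one student tuple of the kind described above, written as an XQuery direct element constructor. The element names (student, lastname, firstname, dob, gpa) and the sample values are assumptions made for illustration, not the figure's exact content.

    <student>
      <lastname>Smith</lastname>
      <firstname>Jane</firstname>
      <dob>1985-04-01</dob>
      <gpa>3.75</gpa>
    </student>

A corresponding XML Schema would constrain lastname and firstname to xs:string, dob to xs:date, and gpa to xs:float, matching the description above.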
2.2 XML Query Language: XQuery
As stated previously, XQuery is a language currently being developed by the World Wide Web Consortium (W3C). It is designed to be a language in which queries are concise and easily understood, and to be flexible enough to query
Fig. 2. An Example Illustrating XQuery’s FLWR Expressions
a broad spectrum of information sources, including both databases and documents. XQuery is a functional language. The basic building block is an expression, and several types of expressions are possible. The two types of expressions important for our discussion are:
- FLWR expressions, which support iteration and binding of variables to intermediate results. FLWR stands for the keywords for, let, where, and return.
- Unordered expressions, which use the keyword unordered. The unordered expression takes any sequence of items as its argument, and returns the same sequence of items in a nondeterministic order.
We illustrate the XQuery language and the for, let, where, and return expressions by an example, shown in Figure 2. In this example, two XML documents, depts.xml and emps.xml, are processed to create a new document, which lists all departments with ten or more employees, and also lists the average salary of employees in each such department. In XQuery, a for clause contains one or more variables, each with an associated expression. The simplest form of for expression, such as the one used in the example here, contains only one variable and an associated expression. The evaluation of the expression typically results in a sequence. The for clause results in a loop being executed, in which the variable is bound to each item from the resulting sequence in turn. In our example, the sequence of distinct department numbers is created from the document depts.xml, and the loop iterates over each distinct department number. A let clause also contains one or more variables, each with an associated expression. However, each variable is bound to the result of the associated expression, without iteration. In our example, the let expression results in the variable $e being bound to the set or sequence of employees that belong to the department $d. The subsequent operations on $e apply to this sequence. For example, count($e) determines the length of this sequence.
Fig. 3. An Example Using XQuery’s Unordered Expression
A where clause serves as a filter for the tuples of variable bindings generated by the for and let clauses. The expression is evaluated once for each of these tuples; if the resulting value is true, the tuple is retained, otherwise it is discarded. A return clause is used to create an XML record after processing one iteration of the for loop. The details of the syntax are not important for our presentation. To illustrate the use of unordered, a modification of the example in Figure 2 is presented in Figure 3. By enclosing the for loop inside the unordered expression, we are not enforcing any order on the execution of the iterations of the for loop, or on the generation of the results. Without the use of unordered, the departments need to be processed in the order in which they occur in the document depts.xml. However, when unordered is used, the system is allowed to choose the order in which they are processed, or even to process the query in parallel.
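Since the listings for Figures 2 and 3 are not reproduced here, the following sketch gives the flavor of the query they describe. It follows the W3C XQuery use cases rather than the authors' exact code: the element names (deptno, emp, salary), the document URIs, and the output element names are assumptions, and the exact spelling of the unordered construct varies between drafts of the language. Without the unordered wrapper, this is a plain FLWR query in the style of Figure 2; with it, the iterations may be evaluated in any order, as in Figure 3.

    <big-depts>
    {
      unordered {
        for $d in distinct-values(doc("depts.xml")//deptno)
        let $e := doc("emps.xml")//emp[deptno = $d]
        where count($e) >= 10
        return
          <big-dept>
            <deptno>{ $d }</deptno>
            <headcount>{ count($e) }</headcount>
            <avgsal>{ avg($e/salary) }</avgsal>
          </big-dept>
      }
    }
    </big-depts>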
3 System Overview
In this section, we briefly introduce the overall architecture of our system. This discussion forms the basis for our description of the various compilation phases. An overview of the system is shown in Figure 4. Our target environment is a cluster of machines, each with an attached disk. To efficiently support processing on large disk-resident datasets and on a cluster architecture, our compiler generates code for a runtime system called Active Data Repository (ADR) [7, 6]. ADR run-time support has been developed as a set of modular services implemented in C++, which target processing of datasets that are stored as flat files. Our system does not directly process XML datasets. As a physical layout standard, XML involves several-fold storage overheads. Therefore, for scientific applications that involve large datasets, XML is only beneficial as a logical layout standard. Here, the key advantage of XML technologies is that XML Schemas allow the users to view the data at a high level.
Fig. 4. Overview of the System Architecture
Consequently, an XML query language like XQuery can be used for specifying the processing at a high level, i.e., keeping it independent of the details of the low-level layout of the data. In our system, XML files are mapped to flat files by an XML mapping service according to an XML Schema. This XML Schema is called the high-level XML Schema, because it describes a high-level abstraction of the dataset and does not expose any details of the physical layout of the dataset. The flat file generated by the XML mapping service is then distributed to the disks of a cluster architecture by using the data distribution service provided by ADR. A low-level XML Schema file reflecting the physical layout and meta-data information is also provided. High-level XML Schemas are known to the programmers when developing XQuery code, and are used by the compiler for XQuery type checking. Low-level XML Schemas guide the compiler in generating efficient code that executes on the disk-resident datasets. More details and examples of high-level and low-level Schemas are given in the next section.
4 High-Level and Low-Level Schemas and XQuery Representation
This section focuses on the interface for the system. We use two motivating examples, satellite data processing [7] and the multi-grid virtual microscope [1], for describing the notion of high-level and low-level schemas and XQuery representation of the processing.
Fig. 5. High-Level XML Schema for Satellite
4.1 Satellite Data Processing
The first application we focus on involves processing the data collected from satellites and creating composite images. A satellite orbiting the Earth collects data as a sequence of blocks. The satellite contains sensors for five different bands. The measurements produced by the satellite are short values (16 bits) for each band. The XML Schema shown in Figure 5 provides a high-level abstraction of the satellite data. The pixels captured by the satellite can be viewed as a sparse three-dimensional array, where time, latitude, and longitude are the three dimensions. Pixels for several, but not all, time values are available for any given latitude and longitude. Each pixel has 5 short integers to specify the sensor data. Also, latitude, longitude, and time are stored within each pixel. With this high-level XML Schema, a programmer can easily define computations processing the satellite data using XQuery. The typical computation on this satellite data is as follows. A portion of the Earth is specified through the latitudes and longitudes of its end points. A time range (typically 10 days to one year) is also specified. For any point on the Earth within the specified area, all available pixels within that time period are scanned and an application-dependent output value is computed. To produce such a value, the application performs a computation on the input bands to produce one output value for each input value, and then the multiple output values for the same point on the planet are combined by a reduction operation. For instance, the Normalized Difference Vegetation Index (NDVI) is computed based on bands one and two, and correlates to the "greenness" of the position at the surface of the Earth. Combining multiple NDVI values consists of executing a max operation over all of them, i.e., finding the "greenest" value for that particular position. The XQuery specification of such processing is shown in Figure 6. The code iterates over the two-dimensional space for which the output is desired. Since the order in which the points are processed is not important, we use the directive
Fig. 6. Satellite Data Processing Expressed in XQuery
unordered. Within an iteration of the nested for loop, the let statement is used to create a sequence of all pixels that correspond to those spatial coordinates. The desired result involves finding the pixel with the best NDVI value. In XQuery, such a reduction can only be computed recursively.
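The listing in Figure 6 is not reproduced here; the sketch below shows the kind of query the text describes. The document URI, the element names (pixel, latitude, longitude, time, band1, band2), the loop bounds, and the helper function names are assumptions made for illustration; only the overall structure (an unordered nested for loop, a let binding, and a linear-recursive reduction) follows the description above.

    declare function local:ndvi($p as element()?) as xs:double
    {
      (: NDVI computed from bands one and two; -2.0 marks "no pixel" :)
      if (empty($p)) then -2.0
      else (xs:double($p/band2) - xs:double($p/band1))
           div (xs:double($p/band2) + xs:double($p/band1))
    };

    declare function local:best($p as element()*) as element()?
    {
      (: linear-recursive reduction: the pixel with the largest NDVI :)
      if (empty($p)) then ()
      else
        let $rest := local:best(subsequence($p, 2))
        return if (empty($rest) or local:ndvi($p[1]) >= local:ndvi($rest))
               then $p[1] else $rest
    };

    unordered {
      for $lat in (0 to 99), $long in (0 to 99)   (: query window; bounds assumed :)
      let $p := doc("satellite.xml")//pixel[latitude = $lat and longitude = $long
                                            and time >= 0 and time <= 10]
      return <ndvi>{ local:ndvi(local:best($p)) }</ndvi>
    }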
4.2 Multi-grid Virtual Microscope
The Virtual Microscope [8] is an application to support the need to interactively view and process digitized data arising from tissue specimens. The raw data for such a system is captured by digitally scanning collections of full microscope slides at high power. In a typical dataset available when a virtual microscope is used in a distributed setting, the same portion of a slide may be available at different resolution levels, but the entire slide is not available at all resolution levels. A particular user is interested in viewing a rectangular section of the image at a specified resolution level. In computing each component of this rectangular section (output), it is first examined if that portion is already available at the specified resolution. If it is not available, then we next examine if it is available at a higher resolution (i.e., at a smaller granularity). If so, the output portion
Fig. 7. High-Level XML Schema for Virtual Microscope
is computed by averaging the pixels of the image at the next higher level of granularity. If it is only available at a lower resolution, then the pixels from the lower resolution image are used to create the output. The digitized microscope slides can also be viewed as a three-dimensional dataset: each pixel has x and y coordinates, and the resolution is the third dimension. The high-level XML Schema of the virtual microscope is shown in Figure 7. For each pixel in a slide, three short integers are used to represent the RGB colors. XQuery code for performing the computations is shown in Figure 8. We assume that the user is only interested in viewing the image at the highest possible resolution level, which means that averaging is never done to produce the output image. The structure of this code is quite similar to our previous example: inside an unordered for loop, we use the let statement to compute a sequence, and then apply a recursive reduction.
4.3 Low-Level XML Schema and XQuery
The above XQuery codes for the multi-grid virtual microscope and satellite data processing specify a query on a high-level abstraction of the actual datasets, which eases the development of applications. However, storing XML data in such a high-level format would result in unnecessary disk space usage as well as large overheads on query processing. For example, storing the x and y coordinates for each pixel in a regular digitized slide of the virtual microscope is not necessary, since these values can easily be computed from the meta-data and the offset of a pixel. In our system, XML files are mapped to flat files by a special mapping service. Pixels in each flat file are later partitioned and organized into chunks by data distribution and indexing services. A low-level XML Schema file is provided to the compiler after partitioning of the datasets to specify the actual data layout. Here, the pixels are divided into chunks. Each chunk is associated with
Fig. 8. Multigrid Virtual Microscope Using XQuery
a bounding box for all pixels it contains, which is specified by a lower bound and a higher bound. Within a chunk, the values of pixels are stored consecutively, with each pixel occupying three bytes for RGB colors. For each application whose XML data is transformed into an ADR dataset by the data distribution and indexing services, we provide several library functions written in XQuery to perform data retrieval. These library functions have a common name, getData, but the function parameters are different. Each getData function implements a unique selection operation based on its parameters. The getData functions are similar to the physical operators of a SQL query engine. A physical operator of a SQL engine takes as input one or more data streams and produces an output data stream. In our case, the default input data stream of a getData function is the entire dataset, while the output data stream is the result of filtering the input stream by the parameters of the getData function. For example, the getData function shown in Figure 9 (a) returns pixels whose x and y coordinates are equal to those specified by the parameters. The detailed implementation is based on the meta-data of the dataset, which is specified by the low-level XML Schemas.
Fig. 9. getData functions for Multigrid Virtual Microscope
The getData function in Figure 9 (b) requires only one parameter, and retrieves pixels with the specified coordinate. For space reasons, the detailed implementation of only one getData function is shown here. The XQuery code for the virtual microscope that calls a getData function is shown in Figure 10. This query code is called low-level XQuery and is typically generated automatically by our compiler. The XQuery codes described in the above section operate on high-level data abstractions and are called high-level XQuery. The recursive functions used in both the low-level and high-level XQuery are the same. The low-level XML Schemas and getData functions are expected to be invisible to the programmer writing the processing code. The goal is to provide a simplified view of the dataset to the application programmers, thereby easing the development of correct data processing applications. The compiler translating XQuery codes obviously has access to the source code of the getData functions, which enables it to generate efficient code. However, an experienced programmer can still have access to the getData functions and low-level Schemas. They can modify the low-level XQuery generated by the compiler, or even write their own versions of the getData functions and the low-level XQuery codes. This is the major reason why our compiler provides an intermediate low-level query format, instead of generating the final executable code directly from the high-level codes.
Fig. 10. Multigrid Virtual Microscope Using Low Level XQuery
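The listings in Figures 9 and 10 are likewise not reproduced here. The sketch below shows roughly what one getData variant and a low-level query that calls it might look like. The chunk and pixel element names, the bounding-box layout, the loop bounds, and the scale threshold are assumptions based on the description above, and the final selection of the best-resolution pixel is written with min() purely to keep the sketch short; in the paper it is expressed with the same recursive reduction as the high-level query.

    (: one getData variant: select the pixels whose x and y coordinates
       match the arguments, using the chunk bounding boxes from the
       low-level Schema :)
    declare function local:getData($x as xs:integer, $y as xs:integer) as element()*
    {
      for $c in doc("vscope.xml")//chunk
      where $c/low/x <= $x and $x <= $c/high/x
        and $c/low/y <= $y and $y <= $c/high/y
      return $c/pixel[x = $x and y = $y]
    };

    (: a low-level query in the style of Figure 10: the restriction on scale
       that the high-level query stated directly becomes a second, in-memory
       selection applied to the stream returned by getData :)
    unordered {
      for $i in (0 to 9999), $j in (0 to 9999)
      let $p := local:getData($i, $j)[scale <= 4]
      let $best := $p[scale = min($p/scale)][1]
      return $best
    }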
5 Compiler Analysis
In this section, we describe the various analyses, transformations, and code generation issues that are handled by our compiler.
5.1 Overview of the Compilation Problem
Because the high-level codes shown in Figures 6 and 8 do not reflect any information about the actual layout of the data, the first task for our compiler is to generate corresponding low-level XQuery codes. After such a high-level to low-level query transformation, we can generate correct code. However, there are still optimization issues that need to be considered. Consider the low-level XQuery code for the virtual microscope shown in Figure 10. Suppose we translate this code to an imperative language like C/C++, ignoring the unordered directive, and preserving the order of the computation otherwise. It is easy to see that the resulting code will be very inefficient, particularly when the datasets are large. This is primarily because of two reasons. First, each execution of the let expression will involve a complete scan over the dataset, since we need to find all data elements that belong to the sequence. Second, if this sequence involves n elements, then computing the result will require n recursive function calls, which again is very expensive. We can significantly simplify the computation if we recognize that the computation in the recursive function is a reduction operation involving associative and commutative operators only. This means that instead of creating a sequence and then applying the recursive function on it, we can initialize the output, process each element independently, and update the output using the identified associative and commutative operators. A direct benefit of this is that we can replace recursion by iteration, which reduces the overhead of function calls. However, a more significant advantage is that the iterations of the resulting loop can be executed in any order. Since such a loop is inside an unordered nested for loop,
powerful restructuring transformations can be applied. Particularly, the code resulting after applying the data-centric transformation [9, 10] will require only a single pass on the entire dataset. Thus, the three key compiler analysis and transformation tasks are:
1) transforming high-level XQuery codes to efficient low-level query codes,
2) recognizing that the recursive function involves a reduction computation with associative and commutative operations, and transforming such a recursive function into a foreach loop, i.e., a loop whose iterations can be executed in any order, and
3) restructuring the nested unordered loops to require only a single pass on the dataset.
An algorithm for the second task listed above was presented in our recent publication [11]. Therefore, we only briefly review this issue, and focus on the first and the third tasks in the rest of this section.
5.2 High-Level XQuery Transformation
High-level XQuery provides an easy way to specify operations on high-level abstractions of a dataset. If the low-level details of the dataset are hidden from a programmer, a correct application can be developed with ease. However, the performance of code written in this fashion is likely to be poor, since the programmer has no idea how the data is stored and indexed. To address this issue, our compiler needs to translate a program expressed in high-level XQuery to low-level XQuery. As described earlier, a low-level XQuery program operates on the descriptions of the dataset specified by the low-level XML Schemas. Although the recursive functions defined in both high-level and low-level XQuery are almost the same, the low-level XQuery calls one or more getData functions defined externally. getData functions specify how to retrieve data streams according to the meta-data of the dataset. A major task for the compiler is to choose a suitable getData function to rewrite the high-level query. The challenges for this transformation are the compatibility and the performance of the resulting code. This requires the compiler to determine: 1) which of the getData functions can be correctly integrated, i.e., whether a getData function is compatible or not, and 2) which of the compatible functions can achieve the best performance. We will use the virtual microscope as an example to further describe the problem. As shown in Figure 8, in each iteration, the high-level XQuery code first retrieves a desired set of elements from the dataset; then a recursive function is applied on this data stream to perform the reduction operation. There are three getData functions provided, each of which retrieves an output data stream from the entire dataset. The issue is if and how the output stream from a getData function can be used to construct the same data stream as used in the high-level query. For a given getData function G with actual arguments a_1, ..., a_n, we define the output stream of G(a_1, ..., a_n) to be Stream_G(a_1, ..., a_n). Similarly, for a given query Q with loop indices i_1, ..., i_m, we define the data stream that is processed in a given iteration to be Stream_Q(i_1, ..., i_m). Let the set of all possible iterations of Q be I. We say that a getData function G is compatible with the query Q if there exists an affine function f such that, for every iteration (i_1, ..., i_m) in I,

    Stream_Q(i_1, ..., i_m) ⊆ Stream_G(f(i_1, ..., i_m)).
If a getData function G is compatible with Q, it means that in any iteration of the query we can call this getData function to retrieve a data stream from the dataset. Since this data stream is a superset of the desired data stream, we can perform another selection on it to get the correct data stream. Here, the second selection can easily be performed in memory, without referring to the low-level disk layout of the dataset. For the three functions shown in Figure 9, it is easy to see that the first two functions are compatible: their selection criteria are either less restrictive than or equally restrictive as those used in the high-level query. Because of the similarities between the physical operators of a SQL engine and our getData functions, the technique we propose for translation from high-level XQuery to low-level XQuery is based on relational algebra. Relational algebra is an unambiguous notation for expressing queries and manipulating relations, and is widely used in the database community for query optimization. We use the following three-step approach. First, we compute the relational algebra of the high-level XQuery and of the getData functions. A typical high-level XQuery program retrieves desired tuples from an XML file and performs computations on these tuples; we focus on the data retrieval part. The relational algebras of the XQuery code and the getData functions are shown in Figure 11 (a). Here, we use σ_R(E) to denote the selection from the entire dataset E obtained by applying a restriction R. In the second step, we transform these relational algebras into an equivalent canonical form that is easier to compare and evaluate. The canonical form we choose is similar to disjunctive normal form (DNF), where the relations are expressed as unions of one or more intersections. Figure 11 (b) shows the equivalent canonical forms after this transformation. The actual canonical forms are internally represented by trees in our compiler. In the third step, we compare the canonical forms of the high-level query and the getData functions. For a given getData function, if its canonical form is
Fig. 11. Relational Algebra Based Approach for High-level to Low-level Transformation
an isomorphic subtree of the canonical form of the query, we can say that the getData function is compatible with the original query. This is because, when part of the relational algebra of the high-level query is replaced with such a getData function, the query semantics are maintained. From Figure 11 (b) it is easy to see that the first two getData functions are compatible. The third getData function is not compatible, because its selection restriction on one of the attributes is an equality, while the restriction that the query places on that attribute is an inequality. The next task is to choose the getData function which will result in the best performance. The algorithm we currently use is quite simple. Because applying restrictions early in a selection can reduce the number of tuples to be scanned in the next operation, a compatible getData function with the most parameters is preferred here. Formally, we select the function whose relational algebra in the canonical form is the largest isomorphic subtree. As shown in Figure 11 (c), the final function we choose is therefore the two-parameter getData function of Figure 9 (a), and the resulting relational algebra for the low-level XQuery is also shown in Figure 11 (c). Here, the pixels are retrieved by calling this getData function and then performing another selection on the output stream, applying the restriction on scale.
Fig. 12. Recursion Transformations for Virtual Microscope
5.3 Reduction Analysis and Transformation
Now we have a low-level XQuery code, either generated by our compiler or specified directly by an experienced programmer. Our next task is analyzing the reduction operation defined in the low-level query. The goal of this analysis is to generate efficient code that will execute on disk-resident datasets and on parallel machines. The reductions on tuples that satisfy user-defined conditions are specified through recursive functions. Our analysis requires the recursive function to be linear recursive, so that it can be transformed into an iterative version. Our algorithm examines the syntax tree of a recursive function to extract the desired nodes; these nodes represent associative and commutative operations. The details of the algorithm are described in a related paper [11]. After extracting the reduction operation, the recursive function can be transformed into a foreach loop. An example of this is shown in Figure 12. This foreach loop can be executed in parallel by initializing the output element on each processor. The reduction operation extracted by our algorithm can then be used for combining the values of the output created on each processor.
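As a concrete illustration of the kind of function this analysis accepts, the sketch below shows a linear-recursive XQuery reduction whose combining step (taking the larger of two values) is associative and commutative. It is an illustrative example under those assumptions, not a listing from the paper.

    declare function local:maxval($s as xs:double*) as xs:double
    {
      (: linear recursion: exactly one recursive call per invocation :)
      if (empty($s)) then xs:double('-INF')
      else
        let $rest := local:maxval(subsequence($s, 2))
        return if ($s[1] >= $rest) then $s[1] else $rest
    };

Because the recursion is linear and the combining operation is a max, the compiler can replace the recursion by a loop that scans the sequence once while updating an accumulator, and the same operation can later be used to combine the partial results computed on different processors.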
5.4 Data-Centric Transformation
Replacing the recursive computation by a foreach loop is only an enabling transformation for our next step. The key transformation that makes a significant difference in performance is the data-centric transformation, which is described in this section. In Figure 12, we show the outline of the virtual microscope code after replacing recursion by iteration. Within the nested for loops, the let statement and the recursive function are replaced by two foreach loops. The first of these loops iterates over all elements in the document and creates a sequence. The second foreach loop performs the reduction by iterating over this sequence.
Fig. 13. Data-Centric Transformations on Virtual Microscope Code
The code, as shown here, is very inefficient because of the need to iterate over the entire dataset a large number of times. If the dataset is disk-resident, this can mean extremely high overhead because of disk latencies. Even if the dataset is memory-resident, this code will have poor locality and, therefore, poor performance. Since the input dataset is never modified, it is clearly possible to execute such code with only a single pass over the dataset. However, the challenge is to perform such a transformation automatically. We apply the data-centric transformation that has previously been used for optimizing locality in scientific codes [9, 10]. The overall idea here is to iterate over the available data elements, and then find and execute the iterations of the nested loop in which each element is accessed. As part of our compiler, we apply this transformation to the intermediate code we obtain after removing recursion. The result of performing the data-centric transformation on the virtual microscope code is shown in Figure 13. This code requires only one scan of the entire dataset.
6 Experimental Results
This section reports experimental data from our current compilation system. We used the two real applications, satellite and mg-vscope, discussed earlier in this paper. The cluster we used had 700 MHz Pentium machines connected through Myrinet LANai 7.0. We ran our experiments on 1, 2, 4, and 8 nodes of the cluster. The goal of our experiments was to demonstrate that even with high-level abstractions and a high-level language like XQuery, our compiler is able to generate reasonably efficient code. The compiler-generated codes for our two applications were compared against versions whose performance was reported in earlier work [9].
Fig. 14. Parallel Performance of satellite
Fig. 15. Parallel Performance for mg-vscope
These versions were generated by a compiler starting from a data-parallel dialect of Java, and were further manually optimized. For our discussion, the versions generated by our current compiler are referred to as comp and the baseline versions are referred to as manual. For the mg-vscope application, the dataset we used contains an image of 29,238 × 28,800 pixels collected at 5 different magnification levels, which corresponds to 3.3 GB of data. The query we used processes a region of 10,000 × 10,000 pixels, which corresponds to reading 627 MB and generating an output of 400 MB. The entire dataset for the satellite application contains data for the entire earth at a resolution of a fraction of a degree in latitude and longitude, over a period of time that covers nearly 15,000 time steps. The size of the dataset is 2.7 GB. The query we used traverses a region of 15,000 × 10,000 × 10,000, which involves reading 446 MB to generate an output of 50 MB.
The results from satellite are presented in Figure 14, and the results from mg-vscope are presented in Figure 15. For both applications, on 1, 2, 4, and 8 nodes, the comp versions are slower. However, the difference in performance is only between 5% and 8% for satellite and between 18% and 22% for mg-vscope. The speedup on 8 nodes is around 6 for both versions of satellite and around 4 for both versions of mg-vscope. The reason for the limited speedups is the high communication volume. To understand the differences in performance, we carefully compared the comp and manual versions. Our analysis shows that a number of additional simple optimizations can be implemented in the compiler to bridge the performance difference. These optimizations are function inlining, loop-invariant code motion, and elimination of unnecessary copying of buffers.
7 Conclusions
In this paper, we have described a system that offers an XML-based front-end for storing, retrieving, and processing flat-file based scientific datasets. With the use of aggressive compiler transformations, we support high-level abstractions for a dataset, and hide the complexities of the low-level layout from the application developers. Processing on datasets can be expressed using XQuery, the recently developed XML query language. Our preliminary experimental results from two applications have shown that, despite using high-level abstractions and a high-level language like XQuery, the compiler can generate efficient code.
References
[1] Asmara Afework, Michael D. Beynon, Fabian Bustamante, Angelo Demarzo, Renato Ferreira, Robert Miller, Mark Silberman, Joel Saltz, Alan Sussman, and Hubert Tsang. Digital dynamic telepathology - the Virtual Microscope. In Proceedings of the 1998 AMIA Annual Fall Symposium. American Medical Informatics Association, November 1998.
[2] D. Beech, S. Lawrence, M. Maloney, N. Mendelsohn, and H. Thompson. XML Schema part 1: Structures, W3C working draft. Available at http://www.w3.org/TR/1999/xmlschema-1, May 1999.
[3] P. Biron and A. Malhotra. XML Schema part 2: Datatypes, W3C working draft. Available at http://www.w3.org/TR/1999/xmlschema-2, May 1999.
[4] S. Boag, D. Chamberlin, M. F. Fernandez, D. Florescu, J. Robie, and J. Simeon. XQuery 1.0: An XML Query Language. W3C Working Draft, available from http://www.w3.org/TR/xquery/, November 2002.
[5] T. Bray, J. Paoli, and C. Sperberg-McQueen. Extensible Markup Language (XML) 1.0. Available at http://www.w3.org/TR/REC-xml, February 1998.
[6] Chialin Chang, Renato Ferreira, Alan Sussman, and Joel Saltz. Infrastructure for building parallel database systems for multi-dimensional data. In Proceedings of the Second Merged IPPS/SPDP (13th International Parallel Processing Symposium & 10th Symposium on Parallel and Distributed Processing). IEEE Computer Society Press, April 1999.
[7] Chialin Chang, Bongki Moon, Anurag Acharya, Carter Shock, Alan Sussman, and Joel Saltz. Titan: A high performance remote-sensing database. In Proceedings of the 1997 International Conference on Data Engineering, pages 375-384. IEEE Computer Society Press, April 1997.
[8] R. Ferreira, B. Moon, J. Humphries, A. Sussman, J. Saltz, R. Miller, and A. Demarzo. The Virtual Microscope. In Proceedings of the 1997 AMIA Annual Fall Symposium, pages 449-453. American Medical Informatics Association, Hanley and Belfus, Inc., October 1997. Also available as University of Maryland Technical Report CS-TR-3777 and UMIACS-TR-97-35.
[9] Renato Ferreira, Gagan Agrawal, and Joel Saltz. Compiler supported high-level abstractions for sparse disk-resident datasets. In Proceedings of the International Conference on Supercomputing (ICS), June 2002.
[10] Induprakas Kodukula, Nawaaz Ahmed, and Keshav Pingali. Data-centric multi-level blocking. In Proceedings of the SIGPLAN '97 Conference on Programming Language Design and Implementation, pages 346-357, June 1997.
[11] Xiaogang Li, Renato Ferreira, and Gagan Agrawal. Compiler Support for Efficient Processing of XML Datasets. In Proceedings of the International Conference on Supercomputing (ICS), pages 67-77. ACM Press, June 2003.
[12] S. Sarawagi, S. Thomas, and R. Agrawal. Integrating association rule mining with databases: alternatives and implications. In Proceedings of ACM SIGMOD International Conference on Management of Data (SIGMOD). ACM Press, June 1998.
Abstract. We describe two applications of our HP Java language for parallel computing. The first is a multigrid solver for a Poisson equation, and the second is a CFD application that solves the Euler equations for inviscid flow. We illustrate how the features of the HP Java language allow these algorithms to be expressed in a straightforward and convenient way. Performance results on an IBM SP3 are presented.
1
Introduction
The HPJava project [10] has developed translator and libraries for a version of the Java language extended to support parallel and scientific computing. Version 1.0 of the HPJava software was released earlier this year as open source software. This paper reports experiences using HPJava for applications, with some benchmark results. A particular goal here is to argue the case that our programming model is flexible and convenient for writing non-trivial scientific applications. HPJava extends the standard Java language with support for “scientific” multidimensional arrays (multiarrays), and support for distributed arrays, familiar from High Performance Fortran (HPF) and related languages. Considerable work has been done on adding features like these to Java and C++ through class libraries (see for example [17], [8], [15]). This seems like a natural approach in an object oriented language, but the approach has some limits: most obviously the syntax tends to be inconvenient. Lately there has been widening interest in adding extra syntax to Java for multiarrays, often through preprocessors1. From a parallel computing point view of an interesting feature of HPJava is its spartan programming model. Although HPJava introduces special syntax for HPF-like distributed arrays, the language deliberately minimizes compiler intervention in manipulating distributed data structures. In HPF and similar languages, elements of distributed arrays can be accessed on essentially the same footing as elements of ordinary (sequential) arrays—if the element being accessed resides on a different processor, some run-time system is probably invoked transparently to “get” or “put” the remote element. HPJava does not have this feature. It was designed as a framework for development of explicit libraries operating on distributed data. In this mindset, the right way of accessing 1
See, for example, the minutes of recent meetings at [12].
L. Rauchwerger (Ed.): LCPC 2003, LNCS 2958, pp. 147–161, 2004. © Springer-Verlag Berlin Heidelberg 2004
148
Bryan Carpenter et al.
remote data is to explicitly invoke a communication library method to get or put the data. So HP Java provides some special syntax for accessing locally held elements of multiarrays and distributed arrays, but stops short of adding special syntax for accessing non-local elements. Non-local elements can only be accessed by making explicit library calls. The language attempts to capture the successful library-based approaches to SPMD parallel computing—it is in very much in the spirit of MPI, with its explicit point-to-point and collective communications. HPJava raises the level of abstraction a notch, and adds excellent support for development of libraries that manipulate distributed arrays. But it still exposes a multi-threaded, non-shared-memory, execution model to to programmer. Advantages of this approach include flexibility for the programmer, and ease of compilation, because the compiler does not have to analyse and optimize communication patterns. The basic features of HPJava have been described in several earlier publications. In this paper we will jump straight into a discussion of the implementation of some representative applications in HPJava. After briefly reviewing the compilation strategy in section 2, we illustrate typical patterns of HPJava programming through a multigrid algorithm in section 3. This section also serves to review basic features of the langauge. Section 4 describes another substantial HPJava application—a CFD code—and highlights additional common coding patterns. Section 5 collects together benchmark results from these applications. 1.1
1.1 Related Work
Other ongoing projects that extend the Java language to directly support scientific parallel computation include Titanium [3] from UC Berkeley, Timber/Spar [2] from Delft University of Technology, and Jade [6] from University of Illinois at Urbana-Champaign. Titanium adds a comprehensive set of parallel extensions to the Java language. For example it includes support for a shared address space, and does compile-time analysis of patterns of synchronization. This contrasts with our HPJava, which only adds new data types that can be implemented “locally”, and leaves all interprocess communication issues to the programmer and libraries. The Timber project extends Java with the Spar primitives for scientific programming, which include multidimensional arrays and tuples. It also adds task parallel constructs like a foreach construct. Jade focuses on message-driven parallelism extracted from interactions between a special kind of distributed object called a Chare. It introduces a kind of parallel array called a ChareArray. Jade also supports code migration. HPJava differs from these projects in emphasizing a lower-level (MPI-like) approach to parallelism and communication, and by importing HPF-like distribution formats for arrays. Another significant difference between HPJava and the other systems mentioned above is that HPJava translates to Java byte codes, relying on clusters of conventional JVMs for execution. The systems mentioned above typically translate to C or C++. While HPJava may pay some price in
performance for this approach, it tends to be more fully compliant with the standard Java platform (e.g. it allows local use of Java threads, and APIs that require Java threads).
2 Features of the HPJava System
HPJava adds to Java a concept of multi-dimensional arrays called "multiarrays" (consistent with proposals of the Java Grande Forum). To support parallel programming, these multiarrays are extended to "distributed arrays", very closely modelled on the arrays of High Performance Fortran. The new distributed data structures are cleanly integrated into the syntax of the language (in a way that doesn't interfere with the existing syntax and semantics of Java—for example ordinary Java arrays are left unaffected). In the current implementation, the source HPJava program is translated to an intermediate standard Java file. The preprocessor that performs this task is reasonably sophisticated. For example it performs a complete static semantic check of the source program, following rules that include all the static rules of the Java Language Specification [9]. So it shouldn't normally happen that a program accepted by the HPJava preprocessor would be rejected by the backend Java compiler. The translation scheme depends on type information, so we were essentially forced to do a complete type analysis for HPJava (which is a superset of standard Java). Moreover we wanted to produce a practical tool, and we felt users would not accept a simpler preprocessor that did not do full checking. The current version of the preprocessor also works hard to preserve line numbering in the conversion from HPJava to Java. This means that the line numbers in run-time exception messages accurately refer back to the HPJava source. Clearly this is very important for easy debugging. A translated and compiled HPJava program is a standard Java class file, ready for execution on a distributed collection of JIT-enabled Java Virtual Machines. All externally visible attributes of an HPJava class—e.g. existence of distributed-array-valued fields or method arguments—can be transparently reconstructed from Java signatures stored in the class file. This makes it possible to build libraries operating on distributed arrays, while maintaining the usual portability and compatibility features of Java. The libraries themselves can be implemented in HPJava, or in standard Java, or as JNI interfaces to other languages. The HPJava language specification documents the mapping between distributed arrays and the standard-Java components they translate to. Currently HPJava is supplied with one library for parallel computing—a Java version of the Adlib library of collective operations on distributed arrays [18]. A version of the mpiJava [1] binding of MPI can also be called directly from HPJava programs. Of course we would hope to see other libraries made available in the future.
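Because a translated HPJava program is a set of ordinary class files, standard Java code and the mpiJava binding just mentioned can be mixed in freely. The following is only a minimal sketch of a direct mpiJava call (it assumes the usual mpiJava entry points and is an illustration, not an excerpt from the HPJava distribution):

    import mpi.MPI;

    public class Identify {
      public static void main(String[] args) throws Exception {
        MPI.Init(args);                        // start the MPI runtime on every process
        int rank = MPI.COMM_WORLD.Rank();      // rank of this process
        int size = MPI.COMM_WORLD.Size();      // total number of cooperating processes
        System.out.println("Process " + rank + " of " + size);
        MPI.Finalize();
      }
    }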
3 A Multigrid Application
The multigrid method [5] is a fast algorithm for solution of linear and nonlinear problems. It uses a hierarchy or stack of grids of different granularity (typically with a geometric progression of grid-spacings, increasing by a factor of two up from finest to coarsest grid). Applied to a basic relaxation method, for example, multigrid hugely accelerates elimination of the residual by restricting a smoothed version of the error term to a coarser grid, computing a correction term on the coarse grid, then interpolating this term back to the original fine grid. Because computation of the correction term can itself be handled as a relaxation problem, the strategy can be applied recursively all the way up the stack of grids. In our example, we apply the multigrid scheme to solution of the two-dimensional Poisson equation. For the basic, unaccelerated, solution scheme we use red-black relaxation. An HPJava method for red-black relaxation is given in Figure 1. This looks something like an HPF program with different syntax. One obvious difference is that the base language is Java instead of Fortran. The HPJava type signature double [[-,-]] means a two-dimensional distributed array of double numbers. So the arguments passed to the method relax() will be distributed arrays. The inquiry rng() on the distributed array f returns the Range objects x, y. These describe the distribution format of the array index (for the two dimensions). The HPJava overall construct operates like a forall construct, with one important difference. In the HPJava construct one must specify how the iteration space of the parallel loop is distributed over processors. This is done by specifying a Range object in the header of the construct. The variables i, j in the figure are called distributed index symbols. Distributed indexes are scoped by the overall constructs that use them. They are not integer variables, and there is no syntax to declare a distributed index except through an overall construct (or an at construct—see later). The usual Java scoping rules for local variables apply: one can't for example use i as the index of an overall if there is already a local variable i in scope—the compiler doesn't allow it. An unusual feature of the HPJava programming model is that the subscripts in a distributed array element reference usually must be distributed index symbols. And these symbols must be distributed with essentially the same format as the arrays they subscript. As a special case, shifted index expressions like i+1 are allowed as subscripts, but only if the distributed array was created with ghost regions.
The main technical reason for using double brackets here is that it is useful to support an idea of rank-zero distributed arrays: these are "distributed scalars", which have a localization (a distribution group) but no index space. If we used single brackets for distributed array type signatures, then double [] could be ambiguously interpreted as either a rank-zero distributed array or an ordinary Java array of doubles.
Fig. 1. Red black relaxation on array u
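As a rough sketch only (pieced together from the syntax described in this section rather than copied from the authors' Figure 1; the grid size N and the stencil coefficients are assumptions), such a relaxation method might look like:

    static void relax(int itmax, double [[-,-]] u, double [[-,-]] f) {
      // N: edge length of the square grid (assumed to be known here)
      Range x = f.rng(0), y = f.rng(1);       // distribution formats of the two dimensions
      for (int it = 1; it <= itmax; it++) {
        for (int parity = 0; parity < 2; parity++) {
          Adlib.writeHalo(u);                 // refresh ghost regions before shifted accesses
          overall (i = x for 1 : N - 2)
            overall (j = y for 1 + (i` + parity) % 2 : N - 2 : 2)
              // schematic five-point update; u[i - 1, j], u[i + 1, j], etc.
              // read neighbouring values held in the ghost regions
              u[i, j] = 0.25 * (f[i, j] + u[i - 1, j] + u[i + 1, j]
                                        + u[i, j - 1] + u[i, j + 1]);
        }
      }
    }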
Information on ghost regions, along with other information about distribution format, is captured in the Range object associated with the array dimension or index. These requirements ensure that a subscripting operation in an overall construct only accesses locally held elements. They place very stringent limitations on what kind of expression can appear as a subscript of a distributed array. We justify this by noting that this restricted kind of data parallel loop is a frequently recurring pattern in SPMD programs in general, and it is convenient to have it captured in syntax. A glance at the full source of the applications described in this paper should make this claim more plausible. The method Adlib.writeHalo() is a communication method (from the library called Adlib). It performs the edge-exchange to fill in the ghost regions. As emphasized earlier, the compiler is not responsible for inserting communications—this is the programmer's responsibility. We assume this should be acceptable to programmers currently accustomed to using MPI and similar libraries for communication. Because of the special role of distributed index symbols in array subscripts, it is best not to think of the expressions i, j, i+1, etc., as having a numeric value: instead they are treated as a special kind of thing in the language.
When less regular patterns of access are necessary, the approach depends on the locality of access: if accesses are irregular but local one can extract the locally-held blocks of the distributed array by suitable inquiries, and operate on the blocks as in an ordinary SPMD program; if the accesses are non-local one must use suitable library methods for doing irregular remote accesses.
Fig. 2. Illustration of restrict operation
We use the notation i` to extract the numeric global index associated with i. In particular, use of this expression in the modulo 2 expression in the inner overall construct in Figure 1 implements the red-black pattern of accesses. This completes the description of most "non-obvious" features of HPJava syntax. Remaining examples in the paper either recycle these basic ideas, or just introduce new library routines; or they import relatively uncontroversial syntax, like a syntax for array sections. Figure 2 visualizes the "restrict" operation that is used to transfer the error term from a fine grid to a coarse grid. The HPJava code is given in Figure 3. The restrict operation here computes the residual term at sites of the fine grid with even coordinate values, then sends these values to the coarse grid. In multigrid the restricted residual from the fine grid becomes the RHS of a new equation on the coarse grid. The implementation uses a temporary array tf which should be aligned with the fine grid (only a subset of elements of this array are actually used). The last line introduces two new features: distributed array sections, and the library function Adlib.remap(). Sections work in HPJava in much the same way as in Fortran—one small syntactic difference is that they use double brackets. The bounds in the fc section ensure that edge values, corresponding to boundary conditions, are not modified. The stride in the tf section ensures only values with even subscripts are selected. The Adlib.remap() operation is needed because in general there is no simple relation between the distribution format of the fine and coarse grid—this function introduces the communications necessary to perform an assignment between any two distributed arrays with unrelated distribution formats. As another example, the interpolation code of Figure 4 performs the complementary transformation from the coarse grid to the fine grid.
Early versions of the language used a more conventional “pseudo-function” syntax rather than the “primed” notation. The current syntax arguably makes expressions more readable, and emphasizes the unique status of the distributed index in the language.
Fig. 3. HPJava code for restrict operation
Fig. 4. HPJava code for interpolate operation
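Purely as an indication of the section-plus-remap pattern just described (the array names, the sizes nc and nf, the section bounds, and the argument order of Adlib.remap() are assumptions, not the authors' code from Figures 3 or 4), the final assignment of a restrict operation might be written roughly as follows:

    // tf: temporary aligned with the fine grid, holding residuals at even sites;
    // fc: right-hand side on the nc x nc coarse grid.
    // The bounds on fc skip the boundary values; the stride 2 on tf selects
    // only even-subscript sites, one per interior coarse point.
    // Destination first, source second (assumed argument order for remap).
    Adlib.remap(fc[[1 : nc - 2, 1 : nc - 2]],
                tf[[2 : 2 * (nc - 2) : 2, 2 : 2 * (nc - 2) : 2]]);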
The basic pattern here depends only on the geometry of the problem. More complex (perhaps non-linear) equations with similar geometry could be tackled
by similar code. Problems with more dimensions can also be programmed in a similar way.
4 A CFD Application
In this section we discuss another significant HPJava application code. This code solves the Euler equations for inviscid fluid flow by a finite volume approach. One version of this code, viewable at http://www.hpjava.org/demo.html, also has a novel parallel GUI implemented in HPJava. The Euler equations are a family of conservation equations, relating the time rates of change of various densities to divergences of associated flow fields. In two dimensions there are four densities—the ordinary matter density, densities of the two components of momentum, and the energy density. The Euler equations can be summarized as a single conservation equation relating the time derivative of a four-component vector U to the divergences of associated flux vectors, as written out below.
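Written in standard conservation form (the flux labels F and G here are assumed notation for the two flow fields, not symbols quoted from the original), the two-dimensional system is:

    \frac{\partial U}{\partial t} + \frac{\partial F(U)}{\partial x} + \frac{\partial G(U)}{\partial y} = 0 .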
The flow variables are related to the dependent variables U by simple (but non-linear) algebraic equations, so the set of differential equations is closed. Two important quantities that figure in the equations are the pressure and the enthalpy per unit mass, H, which can be computed from the components of U using the equations of state for the fluid.
4.1 Discretization and Numerical Integration
The system of partial differential equations is discretized by a finite volume approach—see for example [7] or [11]. Space is divided into a series of quadrilateral (but not necessarily rectangular) cells. This reduces the PDEs to a large coupled system of ordinary differential equations, which are integrated by a variant of the well-known 4th order Runge-Kutta scheme. A single time-step involves four stages of the general form sketched below: each stage subtracts from U a fractional multiple, characteristic of the scheme, of the time step times a residual term. The residual for a cell is a sum of fluxes over the faces of the cell, divided by the cell volume, where the coordinate differences between the end-points of each face enter the flux sum.
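In reconstructed form (the symbols below, namely the fractional coefficients alpha_k, the time step, the cell volume V, and the face differences, are assumed notation for the quantities just described), one stage of the scheme and the residual for a cell read:

    U^{(k)} = U^{(0)} - \alpha_k \,\Delta t \, R\!\left(U^{(k-1)}\right), \qquad k = 1,\dots,4,

    R = \frac{1}{V} \sum_{\text{faces}} \left( F \,\Delta y - G \,\Delta x \right).

The second line corresponds to the face-flux sum that the text below refers to as equation 3.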
The code is adapted from a version of an original Java code by David Oh of MIT [16], modified by Saleh Elmohamed and Mike McMahon of Syracuse University. It is almost identical to the CFD benchmark in the Java Grande Benchmark suite, which came from the same original source.
Since the dependent variables and fluxes are defined at cell centers, their values at a cell face in equation 3 are approximated as the average of the values from the two cells meeting at the face. So at its most basic level the program for integrating the Euler equations consists of a series of steps like:
1. Calculate H from current U (via the equations of state).
2. Calculate the flux F from U, H.
3. Calculate the flux G from U, H.
4. Calculate R from the fluxes.
5. Update U.
To parallelize in HPJava, the discretized field variables are naturally stored in distributed arrays. All the steps above become overall nests. As a relatively simple case, the operation to calculate the flux F (step 2) takes a form like the sketch below:
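The following is a hypothetical sketch of the kind of overall nest meant here, not code taken from the application. It assumes distributed arrays u and f of Statevector elements (with the field names a, b, c, d described next), an assumed aligned pressure array p, an assumed grid size N, and the standard x-direction Euler flux formulas:

    overall (i = x for 0 : N - 1)
      overall (j = y for 0 : N - 1) {
        double rho  = u[i, j].a;                        // matter density
        double momx = u[i, j].b;                        // x-momentum density
        double momy = u[i, j].c;                        // y-momentum density
        double e    = u[i, j].d;                        // energy density

        f[i, j].a = momx;                               // mass flux
        f[i, j].b = momx * momx / rho + p[i, j];        // x-momentum flux
        f[i, j].c = momx * momy / rho;                  // y-momentum flux
        f[i, j].d = momx / rho * (e + p[i, j]);         // energy flux
      }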
The four fields a, b, c, d of Statevector correspond to the four conserved densities. A general observation is that the bodies of overall statements are now more complex than those in the (perhaps artificially simple) Poisson equation example of the previous section. We expect this will often happen in "real" applications. It is good for HPJava, because it means that various overheads associated with starting up a distributed loop are amortized better. Another noteworthy thing is that these overall statements work naturally with aligned data—no communication is needed here. Out of the five stages enumerated above, only computation of R involves non-local terms (formally because of the use of averages across adjacent cells for the flow values at the faces). The code can be written easily using ghost regions, shifted indices, and the writeHalo() operation. Again it involves a single overall nest with a long body. A much-elided outline is given in Figure 5. The optional arguments wlo, whi to Adlib.writeHalo() define the widths of the parts of the ghost regions that need updating (the default is to update the whole of the ghost regions of the array, whatever their width). In the current case these vectors both have value [1, 1]—because shifted indices displace one site in the positive and negative x and y directions.
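A sketch of the call being described (the exact overload taking width vectors is an assumption inferred from the description of wlo and whi):

    int [] wlo = {1, 1}, whi = {1, 1};   // one ghost layer per side, in each dimension
    Adlib.writeHalo(u, wlo, whi);        // refresh only that part of the ghost regions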
Fig. 5. Outline of computation of R
The arrays xnode and ynode hold coordinates of the cell vertices. Because these arrays are constant through the computation, the ghost regions of these arrays are initialized once during startup. We will briefly discuss two other interesting complications: handling of so-called artificial viscosity, and imposition of boundary conditions. Artificial viscosity (or artificial smoothing) is added to damp out a numerical instability in the Runge-Kutta time-stepping scheme, which otherwise causes unphysical oscillatory modes associated with the discretization to grow. An accepted scheme adds small terms proportional to 2nd and 4th order finite difference operators to the update of U. From the point of view of HPJava programming one interesting issue is that 4th order damping implies an update stencil requiring source elements offset two places from the destination element (unlike Figure 5, for example, where the maximum offset is one). This is handled by creating the U array with ghost regions of width 2. Implementing numerically stable boundary conditions for the Euler equations is non-trivial. In our implementation the domain of cells is rectangular, though the grid is distorted into an irregular pipe profile by the choice of physical coordinates attached to grid points (the xnode, ynode distributed arrays). HPJava has an additional control construct called at, which can be used to update edges (it has other uses). The at statement is a degenerate form of the overall statement.
It only "enumerates" a single location in its specified range. To work along one edge of the grid, for example, one may write code like the sketch below:
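This is an illustrative sketch only; the chosen edge, the ranges x and y, the size N, and the placeholder body are assumptions, and the real body performs the Riemann-invariant interpolation mentioned next:

    at (i = x[0])                        // fix i at one boundary location of range x
      overall (j = y for 0 : N - 1) {
        // placeholder update; the actual boundary condition is more involved.
        // U[i + 1, j] reads an interior neighbour via the ghost regions.
        U[i, j] = U[i + 1, j];
      }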
The actual code in the body is a fairly complicated interpolation based on Riemann invariants. In general access to U[i+1, j] here relies on ghost regions being up-to-date, exactly as for an index scoped by an overall statement.
5 Benchmark Results
For the two applications described above, we have sequential and parallel programs to compare performance. The sequential programs were written in Java and/or Fortran 95. The parallel programs, of course, were written in HPJava. For multigrid we also compare with an available HPF code (taken from [4]). The experiments were performed on the SP3 installation at Florida State University. The system environment for the SP3 runs was as follows:
System: IBM SP3 supercomputing system with AIX 4.3.3 operating system and 42 nodes.
CPU: A node has four processors (Power3 375 MHz) and 2 gigabytes of shared memory.
Network MPI settings: Shared "css0" adapter with User Space (US) communication mode.
Java VM: IBM's JIT.
Java compiler: IBM J2RE 1.3.1.
For best performance, all sequential and parallel Fortran and Java codes were compiled using -O5 or -O3 with -qhot or -O (i.e. maximum optimization) flags.
5.1 Multigrid Results
First we present some results for the computational kernel of the multigrid code, namely the unaccelerated red-black relaxation algorithm of Figure 1. Figure 6 gives our results for this kernel on a 512 by 512 matrix. The results are encouraging. The HPJava version scales well, and eventually comes quite close to the HPF code (absolute megaflop performances are modest, but this feature was observed for all our codes, and seems to be a property of the hardware). The flat lines at the bottom of the graph give the sequential Java and Fortran performances, for orientation. We did not use any auto parallelization feature here.
We do not know why the HPJava result on 25 processors appears to be below the general trend. However, the result was repeatable.
Fig. 6. Red-black relaxation of two-dimensional Laplace equation with size of 512 × 512
Corresponding results for the complete multigrid code are given in Figure 7. The results here are not as good as for simple red-black relaxation—both HPJava speed relative to HPF, and the parallel speedup of HPF and HPJava are less satisfactory. The poor performance of HPJava relative to Fortran in this case can be attributed largely to the naive nature of the translation scheme used by the current HPJava system. The overheads are especially significant when there are many very tight overall constructs (with short bodies). We saw several of these in section 3. Experiments done elsewhere [13] lead us to believe these overheads can be reduced by straightforward optimization strategies which, however, are not yet incorporated in our source-to-source translator. The modest parallel speedup of both HPJava and HPF is due to communication overheads. The fact that HPJava and HPF have similar scaling behavior, while absolute performance of HPJava is lower, suggests the communication library of HPJava is slower than the communications of the native SP3 HPF (otherwise the performance gap would close for larger numbers of processors). This is not too surprising because Adlib is built on top of a portability layer called mpjdev, which is in turn layered on MPI. We assume the SP3 HPF is more carefully optimized for the hardware. Of course the lower layers of Adlib
There are also likely to be inherent penalties in using a JVM vs an optimizing Fortran compiler, but other experiments suggest these overheads should be smaller than what we see here. The communication overheads are probably aggravated by a choice we made in the data distribution format in these experiments. All levels are distributed blockwise. A better choice may be to distribute only the finest levels, and keep the coarser levels sequential. This doesn't require any change to the main code—only to initialization of the grid stack. However, this wasn't what was done in these experiments.
Fig. 7. Multigrid solver with size of
could be ported to exploit low-level features of the hardware (we already did some experiments in this direction, interfacing Java to LAPI [14]).
5.2 CFD Results
Figure 8 gives some performance results for a version of the CFD code. The speedup results are quite reasonable, even for small problem sizes. Presumably this reflects the intrinsically greater granularity of this problem, compared with the multigrid case. (In this case unfortunately we don't have a Fortran version to compare with.)
6 Discussion
We illustrated, by a detailed discussion of the coding of two parallel applications, that the parallel primitives introduced in HPJava are a good match to the requirements of various applications. The limitations imposed on distributed control constructs like overall, and especially the strict rules for subscripting distributed arrays, may look strange from a language design perspective. But these features are motivated by patterns observed in practical parallel programs. In particular the language provides a good framework for the development of SPMD libraries operating on distributed arrays. The collective operations of high-level libraries like Adlib, operating directly on distributed arrays, abstract and generalize the popular collective operations of MPI and other SPMD libraries. They also follow in the spirit of the array intrinsics and libraries of Fortran 90/95 and HPF. The language resembles HPF in various ways. But the programming model is closer to the MPI style. MPI programming seems to have
Fig. 8. CFD with size of
been more popular in practice than HPF, perhaps because it gives the programmer control over communication, and it allows the programmer to estimate the cost of his program by looking at the code. We claim these as advantages for HPJava, too. In its current stage of development HPJava, like HPF, seems most naturally suited for problems with some regularity. This is not to say that more irregular problems can't be tackled. But doing so will at least need more specialized communication library support. We have also shown that the performance of the initial implementation of HPJava is quite promising. The current implementation provides full functionality, but it has not been seriously optimized. There is scope for dramatic improvements in efficiency [13].
7 Acknowledgement
This work was funded in part by National Science Foundation Division of Advanced Computational Infrastructure and Research, under contract number 9872125. We are very grateful to Saleh Elmohamed for donating the original Java version of the CFD code, and for help with understanding and parallelizing it. All software discussed in this article, including the demonstration codes, is freely available, with full source, from www.hpjava.org.
Java vs Fortran on the IBM machine is a relatively tough case. The IBM Fortran compilers tend to be better than those on important commodity platforms. On PCs the inherent performance of Java is typically more competitive with C and Fortran.
References
[1] mpiJava Home Page. http://www.hpjava.org/mpiJava.html.
[2] Timber Compiler Home Page. http://pds.twi.tudelft.nl/timber.
[3] Titanium Project Home Page. http://www.cs.berkeley.edu/projects/titanium.
[4] C. A. Addison, V. S. Getov, A. J. G. Hey, R. W. Hockney, and I. C. Wolton. The Genesis Distributed-Memory Benchmarks. Elsevier Science B. V., North-Holland, Amsterdam, 1993.
[5] William L. Briggs, Van Emden Henson, and Steve F. McCormick. A Multigrid Tutorial. The Society for Industrial and Applied Mathematics (SIAM), 2000.
[6] Jayant DeSouza and L. V. Kale. Jade: A parallel message-driven Java. In Proceedings of the 2003 Workshop on Java in Computational Science, Melbourne, Australia, 2003. Available from http://charm.cs.uiuc.edu/papers/ParJavaWJCSO3.shtml.
[7] E. Dick. Introduction to finite volume techniques in computational fluid dynamics. In John F. Wendt, editor, Computational Fluid Dynamics: An Introduction, pages 261–288. Springer-Verlag, 1992.
[8] Jose E. Moreira, Samuel P. Midkiff, and Manish Gupta. A standard Java array package for technical computing. Technical Report RC21313, IBM Research, 1999. Available from http://www.research.ibm.com/resources/.
[9] James Gosling, Bill Joy, Guy Steele, and Gilad Bracha. The Java Language Specification, Second Edition. Addison-Wesley, 2000.
[10] HPJava project home page. www.hpjava.org.
[11] A. Jameson, W. Schmidt, and E. Turkel. Numerical solutions of the Euler equations by finite volume methods using Runge-Kutta time-stepping schemes. In AIAA 14th Fluid and Plasma Dynamics Conference. American Institute of Aeronautics and Astronautics, June 1981.
[12] Java Grande Numerics Working Group home page. http://math.nist.gov/javanumerics/.
[13] Han-Ku Lee. Towards Efficient Compilation of the HPJava Language for High Performance Computing. PhD thesis, Florida State University, June 2003.
[14] Sang Boem Lim. Platforms for HPJava: Runtime Support for Scalable Programming in Java. PhD thesis, Florida State University, June 2003.
[15] J. E. Moreira, S. P. Midkiff, M. Gupta, and R. Lawrence. High Performance Computing with the Array Package for Java: A Case Study using Data Mining. In Supercomputing 99, November 1999.
[16] David Oh. The Java virtual wind tunnel. http://raphael.mit.edu/Java/.
[17] R. Parsons and D. Quinlan. A++/P++ array classes for architecture independent finite difference calculations. In Object Oriented Numerics Conference, 1994.
[18] Guansong Zhang, Bryan Carpenter, Geoffrey Fox, Xiaoming Li, Xinying Li, and Yuhong Wen. PCRC-based HPF compilation. In Zhiyuan Li et al., editor, 10th International Workshop on Languages and Compilers for Parallel Computing, volume 1366 of Lecture Notes in Computer Science. Springer, 1997. http://www.hpjava.org/pcrc/npacWork.html.
Programming for Locality and Parallelism with Hierarchically Tiled Arrays*
Gheorghe Almási¹, Luiz De Rose¹, Basilio B. Fraguela², José Moreira¹, and David Padua³
¹ IBM Thomas J. Watson Research Center, Yorktown Heights, NY 10598-0218
{gheorghe,laderose,jmoreira}@us.ibm.com
² Dept. de Electrónica e Sistemas, Universidade da Coruña, E-15071 A Coruña, Spain
[email protected]
³ Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801
[email protected]
Abstract. This paper introduces a new primitive data type, hierarchically tiled arrays (HTAs), which could be incorporated into conventional languages to facilitate parallel programming and programming for locality. It is argued that HTAs enable a natural representation for many algorithms with a high degree of locality. Also, the paper shows that, with HTAs, parallel computations and the associated communication operations can be expressed as array operations within single threaded programs. This, it is then argued, facilitates reasoning about the resulting programs and stimulates the development of code that is highly readable and easy to modify. The new data type is illustrated using examples written in an extended version of MATLAB.
1 Introduction
This paper introduces a new primitive data type which could be incorporated into conventional languages to facilitate parallel programming and programming for locality. This new data type facilitates the representation and manipulation of arrays that are organized as a hierarchy of tiles. These hierarchically tiled arrays (HTAs) are a generalization of the recursively blocked arrays arising in some linear algebra algorithms with a high degree of locality. Our proposal is to use HTAs to facilitate the expression of both locality and parallelism. In a nutshell, our idea is to distribute the outermost tiles of a hierarchically tiled array for parallelism, and use the inner tiles for locality and message aggregation. In the case of sequential programs all tile levels will be used for locality.
This work is supported in part by the Defense Advanced Research Projects Agency under contract NBCH30390004. This work is not necessarily representative of the positions or policies of the Army or Government.
The two main sources of inspiration for this project were the extensive body of work on blocked linear algebra algorithms [6, 2] and two recently proposed languages, Co-Array Fortran [12] and Unified Parallel C (UPC) [3]. Our proposal follows these two languages in that it represents communication explicitly as array assignments. The use of array assignments to represent communication has at least two advantages over the library-based approach of MPI [5]. First, thanks to APL [8] and Fortran 90 we have at our disposal a wealth of powerful array operators that can serve to unify and simplify the many communication and collective operations of MPI. Second, making the operations part of the language enables compiler support that simplifies the notation and improves error detection. We, however, do not follow Co-Array Fortran and UPC in the use of the SPMD programming paradigm. Instead, our proposal resembles the programming model of the old SIMD machines, but instead of limiting the parallelism to simple arithmetic or logic array operations, we take advantage of the MIMD nature of today's parallel machines and allow complex array operations, represented as user-defined functions, in the expression of parallelism. Abandoning the SPMD model has the drawback of removing some control over the parallelism from the programmer, but the single thread programming model has the great advantage of enforcing structure and leading to programs that are more readable and easier to develop and maintain. Furthermore, we expect that much of the potential loss of performance can be avoided with relatively simple compiler and run-time techniques. Our approach differs from that of High Performance Fortran [7, 9] in that it makes all communication and array distribution explicit and therefore it requires much less from the compiler than High Performance Fortran. Although making communication explicit complicates programming, there is no better alternative at this time given the failure of High Performance Fortran. Furthermore, languages for parallel programming with explicit communication will always be necessary, much in the same way that assembly language programming is still necessary today for conventional programming. The availability of a lower level language is useful as a fall back position whenever the compiler fails to do the right thing and as a means to experiment with alternative solutions that can later be incorporated into a compiler. Hierarchically tiled arrays can be easily incorporated into several programming languages including Fortran 90, APL, and MATLAB. In this paper we focus on extending MATLAB with hierarchically tiled arrays for two main reasons. The first is that an extended MATLAB system would make a great tool for prototyping parallel programs. Such a tool is sorely needed and although many MATLAB programmers may not be interested in parallelism, we believe that many parallel programmers would be interested in a good prototyping tool. The second reason is that MATLAB has many features that make it a convenient platform for a first implementation of our ideas.
Fig. 1. Two tiled arrays
Fig. 2. A partitioned array
In the rest of this paper, we describe hierarchically tiled arrays (Section 2), present mechanisms for their representation in memory (Section 3), and then illustrate their use in programming for locality (Section 4) and parallelism (Section 5).
2 Hierarchically Tiled Arrays
In this section we define hierarchically tiled arrays (Section 2.1), and discuss how to build them (Section 2.2), access their components (Section 2.3), and how they can be used in expressions and values assigned to them (Section 2.4).
2.1 Definition of Hierarchically Tiled Array
We define a tiled array as an array that is partitioned into subarrays in such a way that adjacent subarrays have the same size along the dimension of adjacency. Although the literature usually assumes that array tiles have the same shape (Fig. 1(a)), we do not require this in our definition because there are important cases where using tiles of different sizes (Fig. 1(b)) is advantageous. Notice that our definition implies that arrays are partitioned by hyperplanes that are perpendicular to one of the dimensions. Furthermore, "randomly" partitioned arrays such as that shown in Fig. 2 do not fall under our definition of tiled arrays. We define hierarchically tiled arrays (HTAs) as tiled arrays where each tile is either an unpartitioned array or a hierarchically tiled array. Although this definition allows different tiles to be partitioned in different ways, most often HTAs will be homogeneous, that is, adjacent submatrices at each level will not only have the same size as their neighbors along the dimension of adjacency, but they will also agree in the number and position of the partitions along that dimension. A two-level hierarchy where neighboring tiles are partitioned differently, and therefore depicts a non-homogeneous HTA, is shown in Fig. 3(a). In this figure, the outer tiles are separated by the dashed lines and the inner tiles by the dotted lines. There are three mismatches in Fig. 3(a).
Fig. 3. Two level tiled arrays
Fig. 4. Bottom-up tiling
One is between outermost tile {1, 2}, which is not partitioned at all, and tile {1, 1}, which is partitioned into two parts along the vertical dimension, which is the dimension of adjacency between these two tiles. The other two mismatches are between outermost tiles {1, 1} and {2, 1} and between tiles {2, 1} and {2, 2}. Fig. 3(b) is an example of a homogeneous HTA where the number of tiles and the sizes of all tiles match along the dimensions of adjacency.
2.2 Construction of HTAs
A simple way to obtain homogeneous HTAs is to tile the matrix at the lowest level of the hierarchy first and then proceed recursively by tiling the resulting array of tiles. This bottom-up process, illustrated in Fig. 4, always generates homogeneous HTAs. We can alternatively start from the top and successively refine each partition. The top down approach is more flexible than the bottom up approach in that it enables the generation of both homogeneous and non-homogeneous HTAs. In an interactive array language such as MATLAB, HTAs can be built following either approach if the appropriate functions are available. For the bottom up approach we define the function tile, which accepts as parameters an HTA or unpartitioned array together with one vector per dimension of the HTA, and returns an HTA partitioned by the hyperplanes defined by those vectors; each vector lists, for its dimension, the elements right after which the array is partitioned. For example, given a 10 × 12 matrix D, the statements
will generate the three HTAs shown in Fig. 4.
For the top down approach we define the function hta, which accepts natural numbers as parameters and returns an array of that shape whose elements are empty tiles that can hold HTAs or unpartitioned arrays. Before presenting an example of top down creation of HTAs, we need to describe how to address the tiles in an HTA. The outermost tiles of an HTA can be addressed using subscripts enclosed by curly brackets. An additional set of subscripts should be added for each level of the HTA that needs to be addressed. Thus, the tile containing element E(5, 4), if E is partitioned as shown in Fig. 1(a), would be accessed as E{3, 2}. Also, the inner tile containing element F(5, 4) in an array F with the shape shown in Fig. 3(b) would be addressed as F{2, 1}{1, 2}. We can now illustrate the top down creation of HTAs. The top two levels of an array E with the shape shown in Fig. 3(a) could be created as follows:
and the elements of the upper left quadrant could be filled with two-dimensional arrays of normally distributed random numbers as follows:
A drawback of the bottom up approach as illustrated in (2.1) is that it creates intermediate HTAs which are in most cases unnecessary. A reasonable compiler could have these temporary HTAs deleted after their only use in the creation sequence or could avoid their creation altogether by, for example, reversing the creation process into a top down form. As can be seen in the foregoing example, the top down approach does not suffer from this problem.
2.3 Addressing the Scalar Elements of an HTA
We discuss next how to address the scalar elements of an HTA. The simplest way to address an element is to ignore all tiling and address the elements using conventional subscripting. For example, element (4, 5) of an array, H, that has been tiled as shown in Fig. 1(b) can be addressed as H(4, 5). To use tiling for addressing a scalar element, we can use the curly bracket notation introduced above followed by conventional subscripts enclosed within parentheses. The conventional subscripts specify the location of the element within the innermost tile in the hierarchy. Thus, element H(4, 5) can also be addressed as H{2, 2}(2, 3). We call flattening the mechanism that allows addressing an array ignoring the tile structure. Thus, we say that flattening enables the use of H(4, 5) to access element (4, 5) of array H. Flattening can also be applied at an intermediate level of the hierarchy. For example element (5, 7) of an array A tiled as shown in Fig. 4 could be referenced as A(5, 7), or as A{1, 2}{2}{3}(1, 1) if all the levels of the
tiling hierarchy are taken into account. We could also flatten the last level of the hierarchy and address the same element as A{1, 2}{2}(5, 1) or flatten the second level to get A{1, 2}(5, 4).
2.4 Assignments and Expressions Involving HTAs
The last topic to be discussed in this section is the meaning of assignment statements and expressions involving HTAs. Our objective is to generalize the notion of conformable arrays of Fortran 90 and the semantics of assignments to undefined variables of MATLAB. Let us first present four definitions that we will need in this section. We call leaf elements the elements of an HTA that do not have any components. The leaf elements could be empty containers or arrays of scalars. In the spirit of MATLAB, scalars cannot appear in isolation within HTAs and will be represented as 1 × 1 arrays. We say that an HTA is complete when all of its leaf elements are arrays of scalars. Otherwise, when some of the leaf elements are empty containers, the HTA is said to be incomplete. For example, arrays A, B, and C in the sequence (2.1) are complete. On the other hand, array A right after the sequence (2.2) is incomplete and will remain incomplete after the statements in (2.3) are executed, because these statements do not fill the containers A{1, 2}, A{2, 1} and A{2, 2}. In Fortran 90, two arrays with the same shape (that is, the same number of dimensions and the same size in each dimension) are conformable. Also, scalars (and 1 × 1 arrays in our case) are conformable to arrays of any shape. Scalar binary operations such as add and multiply are extended in Fortran 90 to work on conformable objects. When both operands are arrays with the same shape, the operation is performed on corresponding pairs of scalars. When one of the objects is a scalar and the other an array, the scalar is operated with each of the elements of the array. Thus, c(1:10, 1:20:3)+d(1:10, 1:7) is a valid Fortran 90 operation since the operands are both 10 × 7 arrays. Here, corresponding elements of the operands are added to each other to produce an array that is conformable to the operands. The expression e(:, :)+5 is also valid and will add the scalar 5 to each element of array e producing an array with the shape of e. Two complete HTAs have the same topology if their outermost arrays of tiles have the same shape and corresponding outermost tiles are HTAs with the same topology or contain arrays of scalars that are conformable. This means that two HTAs will have the same topology if the only difference between them is on the leaves, where the arrays have to be conformable, but do not have to have identical shapes. We now proceed by discussing conformability, expressions, and assignment operations.
Fig. 5. Operating on a section of an HTA
Conformability. Two complete HTAs are conformable if they have the same topology or one of them is conformable to all elements at the top level of the other. Notice that the second part of the definition is recursive. That is, if the smaller HTA does not have the same topology as one of the top level elements, then it must have the same topology as all components of this top level element, and so on. Informally, this definition means that two HTAs of different sizes will be conformable if the smaller one has the same topology as all elements of the other that are a certain level above the leaf elements. Notice also that our definition implies that a scalar is conformable to any complete HTA. Expressions involving HTAs. Following Fortran 90, the meaning of scalar operations is extended so that when the operands are both HTAs with the same topology the operation will be performed between corresponding scalar elements and will return an HTA with the topology of the operands. When the operands have different topologies, the smaller one is operated with all matching objects at the bottom of the hierarchy of the larger one. For example, adding a 2 × 3 array M to the HTA A resulting from sequence (2.1) is a valid operation that would result in M being added to all 2 × 3 arrays of scalars that are at the bottom of the hierarchy of A. Similarly, A + 3 is valid and will add 3 to each scalar element in A. Notice that flattening changes the topology of an HTA. Thus, while the term B by itself represents the HTA computed in sequence (2.1) and therefore has two levels of tiling, the term B{:,:}(:,:) represents an HTA with a single level of tiling. It is also possible to operate on a section of an HTA. Thus, B{1,:}{2:3}+1 will operate on only one section of the HTA and will return an HTA with the shape of the section as illustrated in Fig. 5. Also following Fortran 90, scalar intrinsic functions are extended to operate on complete HTAs. These functions will operate on each scalar separately and will return an HTA with the topology of the operand. For example, sin(A) will return an HTA with the topology of A, but with each scalar replaced by its sine. Similarly, intrinsic array operations involving a single array will be extended in the natural way. For example, max(A) will return an HTA that will have the topology of A, except that every array of scalars will be replaced by a single scalar (which is a 1 × 1 array, as stated above) that contains the maximum value of the array it replaces.
Assignments. Next, we generalize the semantics of assignment operations. In MATLAB, when the name of an array X appears in an expression, it refers to the whole array, but on the left hand side of an assignment statement X refers to the variable name as a container. Thus, in MATLAB if X is the one-dimensional array [1 2], the expressions X+1 and X(1:2)+1 have the same meaning, adding one to each element. On the other hand, while X(1:2)=3 will change X into [3 3], X=3 will change X into the scalar 3. We extend this semantics to HTAs by assuming that references to containers that appear in expressions represent their content while on the left hand side of an assignment statement they represent the containers themselves. Thus, B=5, where B is the HTA constructed in (2.1), will replace B with the scalar 5. However, B{:,:}{:,:}=5 will replace each of the 2 × 3 arrays inside B by a 1 × 1 array containing 5 and B{:,:}{:,:}(:,:)=0 will replace each of the 2 × 3 arrays inside B with a 2 × 3 array of zeros.
3 Mapping HTAs onto Memory
To specify how an HTA is to be mapped onto the memory of a machine, we could add a parameter to the functions for building HTAs introduced in the foregoing section or create a function variant for each type of mapping. We will follow the second approach in this paper. We consider two classes of mappings. First, we discuss the mapping onto the main memory of a conventional uniprocessor or a shared memory multiprocessor. This mapping associates a unique memory location to each subscript value. Most programming languages assume a linear mapping, whose main advantage is that computation of the memory location is simple and successive elements of an array along any dimension can be computed by addition without the need for multiplication. Compilers take advantage of this property via the strength reduction optimization. Linear mapping can be done at any particular level of an HTA by laying out the tiles at this level in consecutive memory locations following a row major order or a column major order. We will assume that the functions tile and hta allocate objects in a row major order. To obtain column major order we would have to define new functions such as tileColumnMajor or htaColumnMajor. Other layout functions that are advantageous for some classes of algorithms, such as C-order, U-order, Hilbert order, and Z or Morton order, can be attained similarly by creating the appropriate functions (e.g. tileCOrder) and extending the compiler to generate the corresponding address expressions. Next, we discuss mapping onto the different nodes of a multicomputer or distributed-memory multiprocessor. To this end, we will assume that the nodes of the target machine form a mesh. The mesh (virtual) organization is by far the most frequently assumed topology in parallel programming. In our extension to MATLAB, we will use descriptors of node arrangements that are created by the nodes function and can be assigned to a variable. An invocation of nodes returns a descriptor of a mesh of nodes of the requested shape.
For parallel programming, the top level of an HTA can be distributed across the nodes of a distributed memory machine using the functions htaD and tileD. In the simplest case, when the top level of the HTA has the same shape as the node mesh, the meaning of the distribution operation is the obvious one: each tile at the topmost level of the HTA is allocated to a different node. For example, to distribute HTA A created in (2.1) across a 2 × 2 node mesh we could modify the sequence as follows:
Here, tile A{i, j} is allocated to node (i, j). Similarly, to distribute the top level of HTA G created in (2.2), one tile per node, we could modify sequence (2.2) as shown next. Here, again, tile G{i, j} is allocated to node (i, j).
If the top level of the HTA has the same number of dimensions, but fewer components than the mesh of nodes where it is to be distributed, allocation will take place on consecutive processors starting at node 1 on each dimension. If the top level of the HTA has fewer dimensions than the mesh, then the top level HTA is extended with additional dimensions of size one to match the number of dimensions of the mesh. It is invalid for the top level of the HTA to have more dimensions than the mesh where it is to be distributed. If the top level of the HTA has more elements along one of the dimensions than the number of processors along that dimension, we assume a cyclical distribution.
4 Programming for Locality with HTAs
Following the pioneering work of McKellar and Coffman [10], linear algebra computations are today usually organized to access arrays one tile at a time [2, 6, 13]. The same approach has been studied as a compiler optimization technique where loops are automatically restructured so that arrays are accessed by tiles rather than in the more natural but less efficient row or column order [1, 14, 11]. Although in some cases these algorithms require that the arrays to be manipulated be stored by tiles, in many cases this is not necessary and the reorganization of the computation usually suffices to significantly improve the memory hierarchy performance of the algorithms. Nevertheless, for large arrays, storage by tiles is desirable when the unit of transfer (page or cache line) is large [10] or to avoid cache collisions [13].
The HTA notation of this paper should produce significantly more readable code when programming for locality. At the same time, our notation enables the layout of arrays in block order, which should help performance for large arrays as was just mentioned. We now illustrate the benefit of HTAs when programming for locality using the simple case of matrix multiplication. The typical matrix multiplication algorithm with tiling in the three dimensions has the following form:
Here and in the following examples, we assume that c is initially all zeros. This loop is clearly much more complex than the version that does not use tiles, and would be even more complex had we not assumed that the size of the matrix, n, is a multiple of the block size, q. In contrast, the algorithm implemented on a tiled array stored as a single level HTA would have the following form:
This is a much simpler and easier to read form of the same algorithm. One reason for the simplicity is the use of the HTA notation. It also helps that in MATLAB * stands for the matrix-matrix multiply operator. Notice that, for the algorithm to work, not all tiles have to have the same size nor be square. Clearly, before code (4.1) executes, a, b and c must be created using functions such as hta or tile. Several levels of blocking can be useful in dealing with several levels of the memory hierarchy. A simple way to extend (4.1) to handle several levels of blocking is to replace a{I, K}*b{K, J} with an invocation to a user written function that uses recursion by calling itself to multiply the tiles of its operands when they are HTAs, and which stops the recursion when its parameters are arrays of scalars.
5 Parallel Programming with HTAs
The only constructs we use to express parallelism are array or collective operations on distributed HTAs. We assume that the main thread of our parallel programs will execute on a client sequential machine that could be a workstation. All variables, except distributed HTAs, will be assumed to reside in the memory of this client. The distributed HTAs on the other hand will be contained in the memory of a parallel server. All operations on elements of a distributed
HTA will take place in the server as dictated by a simple version of the owner-computes rule: the operations that compute values to be stored in an object must be performed in the node containing the object. We assume that the compiler will take care of generating the code that is executed in the server and of inserting message-passing primitives so that the needed values are moved to the location before they are needed for the computation. For example, assume that HTAs A and B have the topology of Fig. 1(a) and that their tiles are distributed across a two-dimensional array of nodes or processors. Consider then a statement that multiplies A and B tile by tile and stores the result back into A. This statement means that, for each i and j, tile A{i, j} and tile B{i, j} should be multiplied element by element (.* is the element-by-element multiplication operator in MATLAB) and the result should replace tile A{i, j}. Since the result of multiplying A{i, j} by B{i, j} is to be stored in A{i, j}, the multiplication must take place in the node containing A{i, j}. Also, these multiplications can proceed in parallel with each other since the operation appears in an array statement. Notice that in this case no communication is necessary to execute the statement. On the other hand, a statement whose right-hand side refers to shifted tiles such as B{i+1, j} requires communication. Therefore, the compiler must generate the appropriate message-passing primitives so that, for each i and j, tile B{i+1, j} is copied to a temporary in the node containing A{i, j} before the operation can take place. Consider finally a statement that combines a distributed HTA with a variable X residing in the client. In this case the compiler will have to generate a broadcast operation to send the value of X to all nodes before the operation can take place. Before proceeding with the examples, we need to make an additional extension to MATLAB. As mentioned above, in MATLAB when the operands are arrays, the * operator represents matrix multiplication and .* represents element by element multiplication. With the introduction of tiled arrays, we introduce an additional level in the data hierarchy and the meaning of * and .* must be extended. We will assume that * between two HTAs with the topology of Fig. 1(a) will produce the effect of matrix multiplication at the tile level. Thus, we will assume that c{:,:}=a{:,:}*b{:,:} or simply c=a*b will have the same effect as loop (4.1). If we just wanted to multiply corresponding submatrices, we would write c{:,:}=a{:,:}.*b{:,:} or c=a.*b. This will be equivalent to:
Notice that in the loop the operands of * are matrices and therefore the operator stands for matrix multiplication, the same meaning it has in MATLAB. Finally,
if we just want to multiply corresponding scalars in two-level HTAs, we would write c=a..*b. Next, we present two examples of parallel programs using HTAs. The first is a dense matrix-matrix multiply and the second is a matrix vector multiply where both the matrix and the vector are sparse. For our first example, we will implement the SUMMA algorithm. The algorithm has a very simple representation using HTAs. To explain the algorithm, consider first the matrix multiplication loop (4.1) with the innermost loop (loop K) moved to the outermost location:
The inner two loops increment the array c{:,:} so that, for all I and J, tile c{I, J} is incremented by a{I, K}*b{K, J} on each iteration of the outermost loop. Notice that for each I, J, and K, the tiles a{I, K} and b{K, J} are each used in the computation of m different tiles of c. Also, the inner two loops are a parallel operation on the two-dimensional array of tiles c which can be easily represented in array form if a and b are appropriately replicated. The introduction of the replication operations leads directly to the SUMMA algorithm. In our notation, we can achieve the replication by extending the MATLAB repmat function to HTAs. The first parameter of the MATLAB repmat function is the matrix to replicate, the second parameter is the number of copies to make in the first dimension, the third parameter is the number of copies in the second dimension, and so on. Our repmat is an overloaded version that has the same semantics as the original one except that it operates on distributed arrays of tiles instead of on arrays of scalars.
The repmat function when applied to distributed HTAs could be implemented in many different ways depending on the characteristics of the target machine and the mapping of the source HTA onto the parallel machine. The previous loop can be written in a simpler form:
This representation leaves the decision of how to implement the broadcasting of a and b to the compiler, while in the previous loop the programmer exercises some control by choosing the appropriate routine. The second example will be a matrix-vector multiplication where both the vector and the matrix are sparse. Coding is significantly simplified by the way
MATLAB handles sparse computations. In fact, sparse matrices are operated on in MATLAB using the same syntax used for dense computations. The MATLAB interpreter automatically selects the appropriate procedure to handle sparse data. Let us assume first that the data is originally in matrix a and vector b, both located in the client. Array a will be distributed by blocks of rows across the nodes of the target machine. To this end, a is assigned to HTA c, which is just a distributed linear arrangement of containers. Also, vector b will be distributed by blocks of elements using HTA v. There is a vector dista in the client that specifies which rows of a are to be assigned to c{I}. These are rows dista(I) to dista(I+1)-1. Similarly, array distb specifies which elements of b will be assigned to v{I}. The first step of the code would, therefore, contain the following statements:
If matrix a or vector b is too large to fit in the client, the previous loop could easily be replaced by an I/O function that reads the data directly into the components of c and v. The matrix-vector multiplication will be performed in chunks. In fact, each node I will compute a chunk of the vector by multiplying c{I} by v. However, only the elements of the vector corresponding to nonzero columns of c{I} are needed. If we provide each node with a copy of vector distb, then by analyzing c{I} and correlating the result with distb the node can easily determine, for each J, which elements of v{J} will be needed to perform the c{I}*v operation. The result of this analysis will be stored in HTA w. Node I will assign to each w{I}{J} a vector containing the indices of the elements needed from v{J}. We assume the existence of a function need that computes w{I}. This function should be easy to write by a programmer familiar with MATLAB. The second step of our algorithm is then to call the function need as follows:
Here, we have used the forall construct with the same meaning it has in Fortran 90. MATLAB does not have such a construct, but we have found it necessary in many cases to implement parallel algorithms. The next step in the algorithm is to send the data in w{I}{J} (contained in node I) to node J for all I and J. In this way, node J will know which elements of its vector block, v{J}, are needed by node I. We will store this information in HTA x so that x{I}{J} will contain a vector of the indices of the elements needed by node J from node I. Clearly, x is the transpose of w; therefore, step 3 of the computation will just be:
In the next step, each node I gathers, for all J, into y{I}{J} the elements of v{I} needed by node J. Then this data is sent to the appropriate node using another transpose operation:
Finally, each local vector is extended with the data that just arrived into z, and the matrix-vector multiplication can be performed:
6 Conclusions
The parallel programming approaches that have attracted most attention in the recent past fall at the two extremes of the range of possible designs. On one hand there is the SPMD, message-passing programming model. MPI is by far the most popular implementation of this model, but not the only one. Two parallel programming languages, Co-Array Fortran and UPC, are other examples of this model. The incorporation of communication primitives into programming languages like Co-Array Fortran and UPC significantly reduces the amount of detail that must be specified for each communication operation, compared with the library-based approach of MPI. However, in our opinion, Co-Array Fortran and UPC do not go far enough due to their adoption of the SPMD model, which can easily lead to unstructured code. This lack of structure could be the result of communication taking place between widely separated sections of code, with the additional complication that a given communication statement could interact with several different statements during the execution of a program. In theory at least, the lack of structure possible with SPMD programs could be much worse than anything possible with the use of goto statements in conventional programming. In other, perhaps more colorful, words, what we are saying is that the use of the SPMD programming model could lead to four-dimensional spaghetti code. The other class of parallel programming models in the spotlight mostly follows a single-threaded model. Languages in this class include the OpenMP directives [4] and High Performance Fortran. One difficulty with OpenMP is that it assumes shared-memory support that is not always available in the hardware of today's machines. The shared-memory model could be implemented in software, but that often leads to highly inefficient parallel programs. A second, and much more serious, limitation of OpenMP is that the directives do not explicitly represent the notion of locality. This is a very important notion for parallel programming since distributed memory is a physical necessity in large-scale multiprocessors. It could be said that the compiler could take care of rearranging
the code and distributing data to take care of locality, but a proven compiler technology for this purpose is not at hand today. High Performance Fortran is based on sequential source code complemented with directives mainly for specifying how data is to be distributed. It is the task of the compiler to transform the sequential code into SPMD form and generate all the necessary communication primitives. Unfortunately, from the poor reception given to HPF it seems that automatically producing highly efficient code from HPF source is beyond the capabilities of today's technology. Our proposal lies somewhere between these two extremes. The programming model is single-threaded, but communication and distribution are explicit. Therefore, the requirements placed on the compiler should be more modest than those of HPF. Our experience in programming both dense and sparse kernels is that the use of array notation and the incorporation of tiling in a native data type significantly improve readability when programming for locality and parallelism. This is clearly due to the importance of tiling for parallel programming and for locality, a fact that has become increasingly evident in the recent past.
References
[1] W. A. Abu-Sufah, D. J. Kuck, D. H. Lawrie: On the Performance Enhancement of Paging Systems Through Program Analysis and Transformations. IEEE Trans. on Computers 30(5): 341-356 (1981).
[2] E. Anderson, et al.: LAPACK Users' Guide. SIAM, Philadelphia, 1992.
[3] W. Carlson, et al.: Introduction to UPC and Language Specification. CCS-TR-99-157, IDA Center for Computing Sciences, 1999.
[4] R. Chandra, et al.: Parallel Programming in OpenMP. Morgan Kaufmann, 2000.
[5] W. Gropp, E. Lusk, and A. Skjellum: Using MPI. The MIT Press, 1999.
[6] F. G. Gustavson: Recursion Leads to Automatic Variable Blocking for Dense Linear-Algebra Algorithms. IBM J. Res. and Dev. 41, No. 6, 737-755 (Nov. 1997).
[7] S. Hiranandani, K. Kennedy and C. Tseng: Compiling Fortran D for MIMD Distributed-Memory Machines. CACM, 35(8):66-80, Aug. 1992.
[8] K. E. Iverson: A Programming Language. Wiley, 1962.
[9] C. Koelbel and P. Mehrotra: An Overview of High Performance Fortran. ACM SIGPLAN FORTRAN Forum, Vol. 11, No. 4, Dec. 1992.
[10] A. C. McKellar and Edward G. Coffman Jr.: Organizing Matrices and Matrix Operations for Paged Memory Systems. CACM 12(3): 153-165 (1969).
[11] K. S. McKinley, S. Carr, and C.-W. Tseng: Improving Data Locality with Loop Transformations. ACM TOPLAS, 18(4):424-453, July 1996.
[12] R. W. Numrich and J. Reid: Co-Array Fortran for Parallel Programming. ACM SIGPLAN FORTRAN Forum, Vol. 17 (1998) 1–31.
[13] R. Clint Whaley, Antoine Petitet, and Jack Dongarra: Automated Empirical Optimizations of Software and the ATLAS Project. Available as http://www.netlib.org/lapack/lawns/lawn147.ps, 19 Sept. 2000.
[14] M. E. Wolf and M. S. Lam: A Data Locality Optimizing Algorithm. SIGPLAN Notices, 26(6):30–44, June 1991. Proc. of the ACM SIGPLAN '91 PLDI.
Co-array Fortran Performance and Potential: An NPB Experimental Study*
Cristian Coarfa, Yuri Dotsenko, Jason Eckhardt, and John Mellor-Crummey
Rice University, Houston, TX 77005, USA
Abstract. Co-array Fortran (CAF) is an emerging model for scalable, global address space parallel programming that consists of a small set of extensions to the Fortran 90 programming language. Compared to MPI, the widely-used message-passing programming model, CAF's global address space programming model simplifies the development of single-program-multiple-data parallel programs by shifting the burden for choreographing and optimizing communication from developers to compilers. This paper describes an open-source, portable, and retargetable CAF compiler under development at Rice University that is well-suited for today's high-performance clusters. Our compiler translates CAF into Fortran 90 plus calls to one-sided communication primitives. Preliminary experiments comparing CAF and MPI versions of several of the NAS parallel benchmarks on an Itanium 2 cluster with a Myrinet 2000 interconnect show that our CAF compiler delivers performance that is roughly equal to or, in many cases, better than that of programs parallelized using MPI, even though support for global optimization of communication has not yet been implemented in our compiler.
1 Introduction
Parallel languages and parallelizing compilers have been a long-term focus of compiler research. To date, this research has not had the widespread impact on the development of parallel scientific applications that had been hoped. The two standard parallel programming models suited to scientific computation that have received industrial backing are OpenMP [1] and High Performance Fortran (HPF) [2]. However, both of these models have significant shortcomings that reduce their utility for writing portable, scalable, high-performance parallel programs. OpenMP programmers have little control over data layout; as a result,
This work was supported in part by the Department of Energy under Grant DE-FC03-01ER25504/A000, the Los Alamos Computer Science Institute (LACSI) through LANL contract number 03891-99-23 as part of the prime contract (W-7405ENG-36) between the DOE and the Regents of the University of California, Texas Advanced Technology Program under Grant 003604-0059-2001, and Compaq Computer Corporation under a cooperative research agreement. The Itanium cluster used in this work was purchased using funds from NSF under Grant EIA-0216467.
OpenMP programs are difficult to map efficiently to distributed memory platforms. In contrast, HPF enables programmers to explicitly control the mapping of data to processors; however, to date, commercial HPF compilers have failed to deliver high performance for a broad range of programs. As a result, the Message Passing Interface (MPI) [3] has become the de facto standard for parallel programming because it enables application developers to write portable, scalable, high-performance parallel programs using very sophisticated parallelizations under the programmer's control. Recently, there has been a significant interest in trying to improve the productivity of parallel programmers by using language-based parallel programming models that abstract away most of the complex details of high-performance communication (e.g. asynchronous calls), yet provide programmers with sufficient control to enable them to employ sophisticated parallelizations. Two languages in particular have been the focus of recent attention as promising near-term alternatives to MPI: Co-array Fortran (CAF) [4, 5] and Unified Parallel C (UPC) [6]. Both CAF and UPC support a global address space model for single-program-multiple-data (SPMD) parallel programming. Communication in these languages is simpler than MPI: one simply reads and writes shared variables. With communication and synchronization as part of the language, these languages are more amenable to compiler-directed communication optimization than MPI programs. To date, CAF has not appealed to application scientists as a model for developing scalable, portable codes, because the language is still somewhat immature and a fledgling compiler is only available on Cray platforms [7]. At Rice University, we are working to create an open-source, portable, retargetable, high-quality CAF compiler suitable for use with production codes. Our compiler translates CAF into Fortran 90 plus calls to ARMCI [8], a multi-platform library for one-sided communication. Recently, we completed implementation of the core CAF language features, enabling us to begin experimentation to assess the potential of CAF as a high-performance programming model. Preliminary experiments comparing CAF and MPI versions of the BT, MG, SP and CG NAS parallel benchmarks [9] on a large Itanium 2 cluster with a Myrinet 2000 interconnect show that our CAF compiler prototype already yields code with performance that is roughly equal to hand-tuned MPI. In the next section, we briefly describe the CAF language and the ARMCI library that serves as the communication substrate for our generated code. Section 3 proposes extensions to CAF to enable it to deliver portable high performance. In Section 4, we outline the implementation strategy of our source-to-source CAF compiler. Section 5 presents our recommendations for writing high-performance CAF programs. In Section 6, we describe experiments using versions of the NAS parallel benchmarks to compare the performance of CAF and MPI. Section 7 presents our conclusions and outlines our plans for future work.
2 Background
Co-Array Fortran (CAF) supports SPMD parallel programming through a small set of language extensions to Fortran 90. Like MPI programs, an executing CAF program consists of a static collection of asynchronous process images. CAF programs explicitly manage data locality and computation distribution; however, CAF is a global address space programming model. CAF supports distributed data using a natural extension to Fortran 90 syntax. For example, the declaration integer :: x(n,m)[*] declares a shared co-array with n × m integers local to each process image. The dimensions inside brackets are called co-dimensions. Co-arrays may also be declared for user-defined types as well as primitive types. A local section of a co-array may be a singleton instance of a type rather than an array of type instances. Instead of explicitly coding message exchanges to obtain data belonging to other processes, a CAF program can directly reference non-local values using an extension to Fortran 90 syntax for subscripted references. For instance, process p can read the first column of data in co-array x from process p+1 with the right-hand side reference to x(:,1)[p+1]. The CAF language includes several synchronization primitives; the most important of them are sync_all, which implements a synchronous barrier, sync_team, which is used for barrier-style synchronization among dynamically-formed teams of two or more processes, and start_critical/end_critical primitives for controlling entry to a single global critical section. Since both remote data access and synchronization are language primitives in CAF, communication and synchronization are amenable to compiler-based optimizing transformations. In contrast, communication in MPI programs is expressed in a more detailed form, which makes it more difficult to improve with a compiler. CAF also contains several features that improve the expressiveness and power of the language including dynamic allocation of co-arrays, co-arrays of user-defined types containing pointers, and fledgling support for parallel I/O. A more complete description of the CAF language can be found elsewhere [5].
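As a concrete but hedged illustration of the syntax just described, the minimal program below is our own sketch rather than code from the paper; it declares a co-array, writes the local block, and reads a column from the next process image with a single one-sided reference.

      program caf_sketch
        implicit none
        integer, parameter :: n = 4, m = 3
        integer :: x(n,m)[*]        ! each process image holds an n x m block of x
        integer :: col(n), p

        p = this_image()
        x = p                       ! fill the local block
        call sync_all()             ! barrier: every image has finished writing x

        if (p < num_images()) then
          col = x(:,1)[p+1]         ! one-sided read of the first column on image p+1
        end if
        call sync_all()
      end program caf_sketch

Note that no matching send is needed on image p+1; the bracketed reference alone expresses the communication.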
2.1 ARMCI
The CAF compiler we describe in this paper uses the Aggregate Remote Memory Copy Interface (ARMCI) [8]—a multi-platform library for high-performance one-sided (get and put) communication—as its implementation substrate for global address space communication. One-sided communication separates data movement from synchronization; this can be particularly useful for simplifying the coding of irregular applications. ARMCI provides both blocking and split-phase non-blocking primitives for one-sided communication. On some platforms, using split-phase primitives enables communication to be overlapped with computation. ARMCI provides an excellent implementation substrate for global address space languages making use of coarse-grain communication because it achieves high performance on a variety of networks (including Myrinet, Quadrics, and IBM's switch fabric for its SP systems) while insulating its clients from platform-specific implementation details such as shared memory, threads, and
DMA engines. A notable feature of ARMCI is its support for non-contiguous data transfers [10].
3 Towards Portable High-Performance CAF
The CAF programming model is still emerging. Prior to our compiler, the only existing CAF compiler implementation was for the Cray T3E and X1 platforms: tightly coupled shared-memory architectures with high-performance interconnects that support efficient fine-grain communication and global synchronization. The original CAF language specification [4, 5] was influenced by these architectural features, leading to CAF codes that would not perform well on less tightly-coupled architectures. Evaluating the performance of CAF codes written according to the original language specification on a Myrinet cluster helped us to identify several features of the specification that reduce the potential performance of CAF codes. Below we discuss some of these features along with approaches we propose to address the problems they cause. Memory fence semantics associated with CAF procedure calls. The sync_memory intrinsic is a memory fence that ensures the consistency of a process image's local memory by waiting for the completion of all of that process's outstanding communication events. To ensure a consistent state for co-array data accessed during or after a procedure call, the original CAF model requires implicit memory fences before and after every procedure invocation. We found this requirement to be overly restrictive since it prevents overlapping communication with a procedure call, which is often an important strategy for hiding communication latency. It should be possible for a sophisticated programmer to relax this requirement where it is unnecessary for correctness. We are in the process of exploring design alternatives that will make this possible. Overly restrictive synchronization primitives. An issue that arose during our application evaluation was that using the synchronization primitives in the original CAF language specification reduced the performance of the applications we studied. For example, the original CAF specification only supports collective synchronization (sync_all and sync_team); however, many applications require only unidirectional, point-to-point synchronization. Using collective synchronization where only point-to-point synchronization is needed degrades performance and in some cases makes programming harder. We propose sync_notify(q) and sync_wait(p) as two new intrinsics for point-to-point synchronization. When a process executes a sync_notify, it initiates notification of the specified process image and then can continue immediately. When a process executes a sync_wait, it must block until it is notified by the specified process image. When a notification from process p is delivered to process q, all pending communication events (both puts and gets) that p issued to q before p initiated the sync_notify have completed.
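A minimal sketch of how the proposed primitives might be used for a one-directional boundary exchange is shown below. The buffer and variable names are ours, and the call syntax for sync_notify and sync_wait is assumed from the description above rather than taken from a published specification.

      real    :: recvbuf(100)[*]        ! receive buffer, one instance per image
      real    :: edge(100)              ! local boundary values to send
      integer :: me, left, right

      me    = this_image()
      left  = me - 1
      right = me + 1

      if (me < num_images()) then
        recvbuf(:)[right] = edge(:)     ! one-sided put into the right neighbor
        call sync_notify(right)         ! notify the neighbor and continue immediately
      end if
      if (me > 1) then
        call sync_wait(left)            ! left's puts issued before its notify are complete
        ! recvbuf now safely holds the data put by the left neighbor
      end if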
Collective operations. The CAF language specification does not provide collective communication intrinsics. CAF is expressive enough so that users can write collective communication routines in CAF; however, this is likely to result in programs tailored to a particular architecture (and in many cases to a range of processor counts too) that are unlikely to deliver high performance on architectures with different communication latency and bandwidth characteristics. CAF should be extended to include collective communication intrinsics to give a CAF compiler the flexibility to choose an appropriate algorithm and implementation suited to the target architecture at hand. We are in the process of designing a set of CAF intrinsics for collective communication.
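For example, a broadcast written directly in CAF by a user might look like the sketch below (our own illustration, not code from the paper). It is simple to write but inherently linear in the number of images; a collective intrinsic would let the compiler substitute, say, a tree-based algorithm suited to the target interconnect.

      real    :: x(100)[*]
      integer :: i

      if (this_image() == 1) then
        do i = 2, num_images()
          x(:)[i] = x(:)              ! one-sided put of the whole vector to image i
        end do
      end if
      call sync_all()                 ! all images wait until the broadcast completes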
4 Compiler Implementation Strategy
We have implemented the core features of CAF, enabling us to express non-trivial CAF programs. Section 6 gives a description of some programs we have compiled and evaluated. Our compiler performs source-to-source transformation of CAF codes into F90 plus calls to a communication library (currently ARMCI). This strategy was designed to leverage the best back-end compiler available on the target platform to optimize local computation. Our CAF compiler is implemented on top of OPEN64/SL [11], a version of the OPEN64 compiler infrastructure [12] that we have modified to support source-to-source transformation of Fortran 90. Below we outline some of the principal compiler design issues that arose when implementing CAF. Memory management issues. Current operating systems do not usually allow for sharing of arbitrary memory allocated independently by different processes. For this reason, memory for co-arrays must be managed by the communication substrate separately from memory managed conventionally by an F90 compiler’s language runtime system. Having the communication library allocate co-array memory enables our generated code to use the most efficient communication strategy for a particular platform. For example, on an SMP machine the memory can be allocated in shared memory which would enable communication to be performed using processor load and store instructions. On a Myrinet-based cluster, allocating data for a communication event in pinned physical memory enables the library to perform data transfers on the memory directly using the Myrinet adapter’s DMA engine. For CAF programs to perform well, access to the local portions of co-arrays must be efficient. Since co-arrays are not supported in F90, we need to translate references to the local portion of a co-array into a valid F90 construct and this construct must be amenable to back-end compiler optimization. We believe that the best strategy is to use an F90 pointer to access local co-array data. However, the difficulty with this strategy is that we want to allocate co-array data outside F90-managed memory. To use an F90 pointer to access co-array data, we must initialize the pointer’s dope vector outside an F90 compiler’s
language runtime system. This requires compiler-dependent code for initializing F90 pointers, which poses a minor difficulty when retargeting. Co-array sequence association and reshaping. CAF explicitly provides sequence association between local parts of co-arrays in common blocks, but equivalence of co-array and non-co-array memory is prohibited. To support sequence association, our compiler allocates storage once for each common block at program launch and then sets up a procedure-level view for each common block containing co-arrays. Our CAF compiler implements this using a two-part strategy. First, at compile time, it generates a set of static initializers, which set up each procedure’s view of a common block containing co-arrays. Next, at link time, a global initialization routine is generated by collecting the static initializers. This routine allocates memory once for each common block and invokes the static initializer to create each procedure’s view in turn. CAF allows programmers to pass co-arrays as arguments to procedures. For each formal co-array parameter passed by reference, our implementation augments the subroutine prototype with a “hidden” parameter; each hidden parameter is a pointer to a runtime data structure describing the co-array to the callee. At each call site, every co-array actual parameter is replaced by an F90 pointer to its local co-array data and a pointer to the run-time data structure describing the co-array. Co-array communication generation. Communication events expressed with CAF’s bracket notation must be converted into F90; however, this is not straightforward because the remote memory is in a different address space. Although the language provides shared-memory semantics, the target architecture may not. A CAF compiler must provide transformations to bridge this semantic gap. On a hardware shared memory platform, the transformation is relatively straightforward since references to remote memory in CAF can be expressed as loads and stores to shared locations. The situation is more complicated for cluster-based systems with distributed memory. To perform the data movement, the compiler must generate calls to a communication library since the data resides on a remote node. Moreover, storage must be managed to temporarily hold off-processor data to perform a computation. Naive translation may lead to situations where excessive storage is used and superfluous copying is performed. Eventually, our compiler will automatically detect such situations and eliminate the extraneous storage and copying, when possible. Compare two statements where remote memory is updated:
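The two statements themselves are not reproduced in this copy; a pair that is consistent with the discussion that follows (the array names a, b, c and the image index p are illustrative) would be:

      a(:)[p] = b(:)                  ! remote update taken directly from local array b
      a(:)[p] = b(:) + c(:)           ! remote update of a locally computed expression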
In the first case, a separate communication buffer may not be necessary since the data to be sent to processor p is already available in b. On the other hand, the second statement calls for local computation; the result should be computed into a temporary communication buffer and then transferred to processor p. Now consider the case when remote data is used:
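Again the original statements are not shown here; two statements consistent with the discussion below (names are illustrative) would be:

      a(:) = b(:)[p]                  ! one remote read; a itself can serve as the buffer
      a(:) = b(:)[p] + c(:)[p]        ! two remote reads; one extra buffer is required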
In the first statement, no extra communication buffer is necessary because we can use a for temporarily storing b(:)[p] (a becomes dead) to evaluate the expression. But for the second case, one extra communication buffer is required because we need to transfer two vectors of off-processor data to evaluate the expression. Key missing features. There are a number of language features that are not yet implemented in our preliminary compiler. The most important of these are allocatable co-arrays, co-arrays of user-defined types (including those with pointer components), triplet notation in co-dimensions, and multiple co-dimensions.
5 Writing High Performance Co-array Fortran Code
Once we completed support for core CAF language features in our prototype CAF compiler, we undertook a study of several of the NAS parallel benchmarks to understand the interplay of CAF language, compiler, and runtime issues and their impact on the programmability, scalability, performance and performance portability of applications. From our colleagues Bob Numrich at the University of Minnesota and Allan Wallcraft at the Naval Research Lab, we received draft CAF versions of the MG, CG, SP, and BT NAS parallel benchmarks that they created from the MPI codes in the NPB version 2.3 release. Analyzing variants of these codes gave us a better understanding of how to develop high performance programs in CAF. All of the CAF code transformations we describe in this section represent manual source-level tuning we applied to CAF sources for the NAS benchmarks to best exploit CAF language features for performance. It is our goal to enhance the capabilities of our prototype CAF compiler to apply such transformations automatically. Our aim is to generate high-performance code that meets or exceeds the performance of hand-coded MPI parallelizations from easy-to-write CAF source programs. We are in the process of adding program analysis to our compiler to support automating such transformations. In our study, we found that there are several key coding strategies for writing high performance CAF code. We list them in decreasing order of importance: Communication aggregation and vectorization. This is a critical optimization for architectures in which the communication fabric does not support low-latency, fine-grain memory transactions. Analysis of the NAS benchmark loops revealed that all major communication could be vectorized manually using triplet notation for subscripts of co-array references. Once support for data flow and dependence analysis is in place in our CAF compiler, in most cases it should be straightforward to automate this transformation. Consider Figure 1(a), which is a simple code fragment from the conj_grad routine of a first-draft CAF
Fig. 1. NAS CG before and after communication vectorization
parallelization of NAS CG that we received from our colleagues. For this code, our prototype CAF compiler generates a get for every iteration, which is expensive. Using the triplet notation as in Figure 1(b) enables our present CAF compiler prototype to generate a single ARMCI communication event for such a statement, which is substantially faster than the original. We observed performance improvements of up to two orders of magnitude by applying this transformation, even for relatively small problem sizes of the NAS benchmarks we studied. In CAF, when the shapes of source and destination array sections are conformant, vectorized communication can be expressed using triplet notation. Otherwise, a buffer copy is necessary at the source or destination to yield conformant shapes or to pack the data on the sender and unpack the data on the receiver. The latter approach mimics the message packing and unpacking in MPI. Synchronization strength reduction. Analogous to the well-known operator strength reduction transformation, synchronization strength reduction involves transforming a strong synchronization primitive, e.g., a barrier, into a weaker one, e.g., point-to-point notify/wait, while preserving the meaning of the program, with the aim of improving performance.
Fig. 2. Communication in NAS MG before and after synchronization strength reduction
Others have previously employed similar optimizations with significant benefits [13]. This optimization was a key performance boost for each of the NAS benchmarks we studied. Figure 2 uses a fragment from NAS MG, a 3D multigrid solver, to illustrate this transformation. Figure 2(a) shows the original CAF version of the code, in which each processor performs a barrier synchronization before and after exchanging the boundary layers of its 3D block with its pair of neighbors along a coordinate dimension. This code was originally written for the Cray T3E, which has fast hardware support for barriers. However, a barrier provides much stronger synchronization than necessary; only synchronization with the adjacent neighbors is needed. On cluster interconnects that do not have fast hardware support for barriers, it is more efficient to use point-to-point synchronization. Figure 2(b) shows the code recast to use our new CAF one-way point-to-point synchronization primitives. On a Myrinet 2000 cluster, this transformation improved performance by about 30% for 64 processors. Conversion of Gets into Puts. On communication fabrics such as Myrinet, put operations are supported directly, whereas get operations require asking a server-side thread to supply the requested data with a put. For such an interconnect, when using regular algorithms, it is feasible and potentially profitable to transform each get operation into a put.
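Figure 1 is not reproduced in this copy. The hedged sketch below (variable names are ours) conveys the kind of rewrite described above for communication vectorization: a loop of fine-grain gets becomes a single bulk get expressed with triplet notation.

      ! before: one fine-grain get per loop iteration
      do i = 1, n
        q(i) = w(i)[nbr]
      end do

      ! after: a single bulk transfer
      q(1:n) = w(1:n)[nbr]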
6 Experiments and Discussion
In this section we compare the performance of the code our compiler generates from CAF with hand-coded MPI implementations of the MG, CG, BT and SP NAS parallel benchmark codes. For our study, we used MPI versions from the NPB 2.3 release. Sequential performance measurements used as a baseline were performed using the NPB 2.3-serial release. The NPB codes are widely regarded as useful for evaluating the performance of compilers on parallel systems. For each benchmark, we compare the parallel efficiency of the MPI and CAF implementations. We compute parallel efficiency as follows. For each parallelization ρ, the efficiency metric is computed as t_s/(P × t_ρ(P)). In this equation, t_s is the execution time of the original sequential version implemented by the NAS group at the NASA Ames Research Center; P is the number of processors; t_ρ(P) is the time for the parallel execution on P processors using parallelization ρ. Using this metric, perfect speedup would yield efficiency 1.0 for each processor configuration. We use efficiency rather than speedup or execution time as our comparison metric because it enables us to accurately gauge the relative performance of multiple benchmark implementations across the entire range of processor counts. All experiments were performed on a cluster of 92 HP zx6000 workstations interconnected with Myrinet 2000. Each workstation node contains two 900MHz Intel Itanium 2 processors with 32KB/256KB/1.5MB of L1/L2/L3 cache, 4-8GB of RAM, and the HP zx1 chipset. Our operating environment is the GNU/Linux
Fig. 3. A typical fragment of optimized CAF for NAS CG
operating system (kernel version 2.4.20 plus patches). Although this Linux kernel is SMP-capable, we used only one of the processors on each SMP node for our experiments (1) to avoid contention for the Myrinet and local memory, and (2) to avoid process ping-ponging since our kernel was not configured to support affinity scheduling. We used the Intel Fortran v7.0 for Itanium (efc) as the backend compiler for all F90 code generated by the CAF translator as well as for the MPI versions of the benchmarks. Optimization level 3 was used along with the override-limits option to prevent the compiler from automatically disabling certain expensive optimizations. CAF executables were linked against ARMCI 1.1-beta for Myrinet GM. All executables were linked against Myricom’s MPI implementation MPICH-GM 1.2.5..10 (compiled with Intel’s efc) running on Myricom’s GM 1.6.4 driver substrate. In the following sections, we briefly describe the NAS benchmarks used in our evaluation, the key features of their MPI and CAF parallelizations and compare the performance of the CAF and MPI implementations.
6.1 NAS CG
In the NAS CG parallel benchmark, a conjugate gradient method is used to compute an approximation to the smallest eigenvalue of a large, sparse, symmetric positive definite matrix [9]. This kernel is typical of unstructured grid computations in that it tests irregular long distance communication and employs sparse matrix-vector multiplication. The irregular communication requirement of this benchmark is evidently a challenge for all systems. On each iteration of loops involving communication, the MPI version initiates a non-blocking receive from the reduce_exch_proc(i) processor, followed by an MPI send to the same processor. After the send, the process waits until its MPI receive completes. Thus, no overlap of communication and computation is possible. Our tuned CAF version of NAS CG does not differ much from the MPI hand-coded version. In fact, we directly converted two-sided MPI communication into equivalent calls to notify/wait and a vectorized one-sided get communication event. Figure 3 shows a typical fragment of our CAF parallelization using notify/wait synchronization. Our experiments showed that for this code, replacing
the co-array read (get) operation with a co-array write (put) had a negligible effect on performance because of the amount of synchronization necessary to preserve data dependences. In initial experimentation with our CAF version of CG on various numbers of processors, we found that on fewer than eight processors, performance was significantly lower than that of its MPI counterpart. In our first CAF implementation of CG, the receive array q was a common block variable, allocated in the static data by the compiler and linker. To perform the communication shown in Figure 3, our CAF compiler prototype allocated a temporary buffer in memory registered with ARMCI so that the Myrinet hardware could initiate a DMA transfer. After the get was performed, data was copied from the temporary buffer into the q array. For runs on a small number of processors, the buffers are large. Moreover, the registered memory pool has a starting address independent of the addresses of the common blocks. Using this layout of memory and a temporary communication buffer caused the number of L3 cache misses in our CAF code to be up to a factor of three larger than for the corresponding MPI code, resulting in performance that was slower by a factor of five. Converting q (and other arrays used in co-array expressions) to co-arrays moved their storage allocation into the segment with co-array data (reducing the potential for conflict misses) and avoided the need for the temporary buffer. Overall, this change greatly reduced L3 cache misses and brought the performance of the CAF version back to the level of the MPI code. Our lesson from this experience is that the memory layout of communication buffers, co-arrays, and common block/save arrays might require thorough analysis and optimization. As Figure 4(a) shows, our CAF version of NAS CG achieves performance comparable to that of the MPI version. The parallel efficiencies of the CAF and MPI codes are almost indistinguishable across a range of processor numbers.
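Since Figure 3 is also not reproduced in this copy, the schematic fragment below (declarations omitted; partner, tag, and the other names are ours) suggests the shape of the conversion described above, from a two-sided MPI exchange to notify/wait plus a vectorized one-sided get.

      ! two-sided MPI exchange with a partner process
      call MPI_IRECV(q, n, MPI_DOUBLE_PRECISION, partner, tag, comm, req, ierr)
      call MPI_SEND (w, n, MPI_DOUBLE_PRECISION, partner, tag, comm, ierr)
      call MPI_WAIT (req, status, ierr)

      ! roughly equivalent CAF pattern
      call sync_notify(partner)       ! my w(:) may now be read by the partner
      call sync_wait(partner)         ! the partner's w(:) is ready for me
      q(1:n) = w(1:n)[partner]        ! vectorized one-sided get
      ! (a further notify/wait pair would be needed before w is overwritten)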
6.2 NAS MG
The MG multigrid kernel calculates an approximate solution to the discrete Poisson problem using four iterations of the V-cycle multigrid algorithm on a grid with periodic boundary conditions [9]. The communication is highly structured and goes through a fixed sequence of regular patterns. In the NAS MG benchmark, for each level of the grid, there are periodic updates of the border region of a three-dimensional rectangular data volume from neighboring processors in each of six spatial directions. Four buffers are used: two as receive buffers and two as send buffers. For each of the three spatial axes, two messages (except for the corner cases) are sent using basic MPI send to update the border regions on the left and right neighbors. Therefore, two buffers are used for each direction, one buffer to store data to be sent and the other to receive the data from the corresponding neighbor. Because two-sided communication is used, there is implicit two-way point-to-point synchronization between each pair of neighbors. The CAF version of MG mimics the MPI version. The communication buffers used in the MPI version are replaced by co-arrays; the communication is expressed using CAF syntax, as opposed to using MPI primitives.
Fig. 4. Comparison of MPI and CAF parallel efficiency for NAS CG and MG
This approach requires explicit synchronization. The example code is shown in Figure 2(b). The give3 procedure performs a one-sided put to the appropriate neighbor. Because of communication buffer reuse, two sync_notify calls are necessary to signal our left and right neighbors that our receive buffers are ready to receive data from them; the two following sync_wait calls ensure that the remote buffers on the left and the right neighbors are ready for us to send data. The sync_notify following each give3 call is matched by the neighbor's sync_wait and signals the completion of the put. Similarly, our sync_wait matches the neighbor's sync_notify signaling that the data transfer from the neighbor is complete and we can proceed to the unpacking phase in take3. As the performance graph in Figure 4(b) illustrates, our CAF version of NAS MG achieves performance comparable to that of the MPI version.
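Figure 2(b) itself is not reproduced in this copy; the following sketch of the exchange along one axis follows the description above, with the give3/take3 argument lists abbreviated and the neighbor indices left and right assumed to be computed elsewhere.

      call sync_notify(left)          ! my receive buffers are free again
      call sync_notify(right)
      call sync_wait(left)            ! the neighbors' receive buffers are ready for my puts
      call sync_wait(right)

      call give3(axis, +1, u)         ! one-sided put into the right neighbor's buffer
      call sync_notify(right)         ! matched by the right neighbor's sync_wait
      call give3(axis, -1, u)         ! one-sided put into the left neighbor's buffer
      call sync_notify(left)

      call sync_wait(left)            ! data from the left neighbor has arrived
      call sync_wait(right)           ! data from the right neighbor has arrived
      call take3(axis, -1, u)         ! unpack the received boundary data
      call take3(axis, +1, u)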
6.3 NAS SP and BT
As described in a NASA Ames technical report [9], the NAS benchmarks BT and SP are two simulated CFD applications that solve systems of equations resulting from an approximately factored implicit finite-difference discretization of three-dimensional Navier-Stokes equations. The principal difference between the codes is that BT solves block-tridiagonal systems of 5x5 blocks, whereas SP solves scalar penta-diagonal systems resulting from full diagonalization of the approximately factored scheme [9]. Both consist of an initialization phase followed by iterative computations over time steps. In each time step, boundary conditions are first calculated. Then the right hand sides of the equations are calculated. Next, banded systems are solved in three computationally intensive bi-directional sweeps along each of the x, y, and z directions. Finally, flow variables are updated. During each timestep, loosely-synchronous communication is required before the boundary computation, and tightly-coupled communication is required during the forward and backward line sweeps along each dimension.
Fig. 5. Forward sweep communication in NAS BT and NAS SP
Because of the line sweeps along each of the spatial dimensions, traditional block distributions in one or more dimensions would not yield good parallelism. For this reason, SP and BT use a skewed block distribution called multipartitioning [9, 14]. With multipartitioning, each processor handles several disjoint blocks in the data domain. Blocks are assigned to the processors so that there is an even distribution of work for each directional sweep, and so that each processor has a block on which it can compute in each step of every sweep. Using multipartitioning yields full parallelism with even load balance while requiring only coarse-grain communication. The MPI implementation of NAS BT and SP attempts to hide communication latency by overlapping communication with computation, using non-blocking communication primitives. For example, in the forward sweep, except for the last tile, non-blocking sends are initiated in order to update the ghost region on the next tile. Afterwards, each process advances to the next tile it is responsible for, posts a non-blocking receive, performs some local computation, then waits for the completion of both non-blocking send and receive. The same pattern is present in the backward sweep. The CAF implementation for BT and SP inherits the multipartitioning scheme used by the MPI version. In BT, the main working data resides in co-arrays, while in SP it resides in non-shared arrays. For BT, during the boundary condition computation and during the forward sweep for each of the axes, no buffers are used for packing and unpacking, as shown in Figure 5(a). In contrast, in SP all the communication is performed via co-array buffers (see Figure 5(b)). This shows that when an application is written in the spirit of the Co-array Fortran programming model, it might require fewer memory copies. In the backward sweep, both BT and SP use auxiliary co-array buffers to communicate data. In our CAF implementations, we had to consider the trade-off between the amount of memory used for buffers and the amount of necessary synchronization. By using more buffer storage we were able to eliminate both output and anti-dependences due to buffer reuse, thus obviating the need for extra synchronization. We used a dedicated buffer for each communication event during the sweeps, for a total buffer size increase by a factor of the square root of the number of processors. Experimentally we found that this was beneficial for performance while the memory increase was acceptable.
Fig. 6. Comparison of MPI and CAF parallel efficiency for NAS SP and BT
The performance graphs in Figure 6 show that the CAF version performs consistently better than the MPI version for BT, but is about 5% slower for SP. For both benchmarks, our compiler uses blocking communication primitives. By applying hand optimization to the code generated by our CAF compiler for SP, we discovered that using non-blocking communication enables us to achieve performance comparable to that of MPI. We observed that even though we used blocking communication for both BT and SP, we only paid a performance penalty for SP. This difference is due to the computation and communication characteristics of the benchmarks. Measurements showed that BT communicates half the number of messages that SP does, and that its communication volume is about 2/3 of the communication volume of SP. Therefore, in BT the communication is less frequent than in SP, and consists of larger messages. As a consequence, overlapping computation with communication is more critical for performance in SP.
6.4 Discussion
In the course of our experimentation on our Itanium2+Myrinet2000 cluster, we observed that allocating co-arrays and temporary communication buffers in registered memory provides a noticeable boost in performance. Myrinet is only able to perform DMA on registered (pinned) pages. If all local variables involved in communication are allocated in registered memory, they can be used by the communication library directly, without copying into temporary buffers allocated from a pool of registered memory. In our prototype compiler, we do not automatically migrate local variables involved in communication into pinned memory; instead, we accomplished this by modifying the source code to turn them into co-arrays that are never referenced remotely. Automatically migrating local variables into co-array storage can be complex because of the need to preserve sequence association among local variables in common blocks and Fortran data initialization statements.
Our experiments showed that using split-phase, non-blocking communication and overlapping computation with DMA transfers significantly boosts performance. Our prototype compiler implements communication by an in-place conversion of language-level communication constructs into blocking gets and puts. Once we have a framework for data-flow and dependence analysis in place, we will be able to automatically translate blocking communication into split-phase, non-blocking equivalents to effectively overlap communication with computation. At a higher level, the original semantics of CAF as defined by Numrich and Reid [4, 5] require an implicit sync_memory at each procedure call boundary to complete any outstanding gets or puts. This requirement makes it impossible to overlap communication with a procedure call that does not use any of the data involved in communication. Requiring this implicit sync_memory for SP would remove a significant opportunity for latency hiding that is exploited by the MPI hand-coded parallelization, so we believe that this language requirement should be dropped. In the first draft CAF versions of the NAS benchmarks, we found that there were frequent references to read-only data that was stored off-processor. For example, there were frequent off-processor references to the variable reduce_send_start in the initial CAF version of CG, to the communication buffer offsets for each face in SP, and to the cell size information of neighboring processors in BT. We improved code performance by fetching these values once after they had been initialized and storing them locally. With interprocedural analysis to determine that these variables are essentially run-time constants, we could potentially apply this transformation automatically.
7 Conclusions
This paper presents an overview of the issues that we have been grappling with as we work on (1) refinement of the CAF language to make it the programming model of choice for portable, high-performance scientific programming in Fortran and (2) design and implementation of a portable and retargetable CAF compiler. Preliminary performance results for several NAS benchmarks on a cluster of workstations show that CAF is capable of achieving good performance despite the current lack of automatic communication and synchronization optimizations in our prototype CAF compiler. We were able to achieve performance comparable to highly-tuned, hand-coded MPI versions of the same benchmarks. The expressive syntax and explicit, one-sided communication model enabled us to manually perform key optimizations such as communication vectorization and synchronization strength reduction in the CAF source code. The CAF model is expressive enough to allow the user to perform these transformations manually when a compiler cannot. This is in contrast to HPF, for example, where it is more difficult for a user to improve code performance through source code adjustments. While we performed optimizing transformations manually on our CAF source code in this preliminary work, it is our intention to improve our prototype CAF
compiler to perform automatically most of the optimizing transformations described in this paper. An advantage of writing parallel programs in CAF over MPI is that because communication and synchronization are expressed at the language level, it is possible for a compiler to analyze and tailor code to whatever target platform is at hand. On a shared memory architecture, CAF accesses to remote data can simply be turned into loads and stores; performing such a radical transformation on an MPI program would be exceedingly difficult. While it may be possible to annotate MPI libraries so that compilers could understand the semantics of the communication expressed by library calls, CAF offers a simpler, coherent model for parallel programming. Because CAF is amenable to automatic analysis and transformation, it is possible and desirable to express computation and communication in a natural and general way, leaving the burden of platform-specific code tuning to the compiler. This is important because user-applied optimizations that perform well on one architecture may actually be counter-productive on a different architecture.
Acknowledgements We thank D. Chavarría-Miranda for explaining the intricacies of the NAS benchmarks. We thank J. Nieplocha and V. Tipparaju for collaborating on the refinement and tuning of ARMCI on Myrinet. We thank R. Numrich and A. Wallcraft for providing us with draft CAF versions of the BT, CG, MG, and SP NAS parallel benchmarks. We thank F. Zhao for her work on the Open64/SL Fortran front-end.
References
[1] Mattson, T. G.: An introduction to OpenMP 2.0. In: High Performance Computing. Volume 1940 of Lecture Notes in Computer Science. Springer-Verlag (2000) 384–390
[2] Koelbel, C., Loveman, D., Schreiber, R., Steele, Jr., G., Zosel, M.: The High Performance Fortran Handbook. The MIT Press, Cambridge, MA (1994)
[3] Gropp, W., Snir, M., Nitzberg, B., Lusk, E.: MPI: The Complete Reference. Second edn. MIT Press (1998)
[4] Numrich, R. W., Reid, J. K.: Co-Array Fortran for parallel programming. Technical Report RAL-TR-1998-060, Rutherford Appleton Laboratory (1998)
[5] Numrich, R. W., Reid, J. K.: Co-Array Fortran for parallel programming. ACM Fortran Forum 17 (1998) 1–31
[6] Carlson, W. W., Draper, J. M., Culler, D. E., Yelick, K., Brooks, E., Warren, K.: Introduction to UPC and language specification. Technical Report CCS-TR-99-157, IDA Center for Computing Sciences (1999)
[7] Silicon Graphics: CF90 co-array programming manual. Technical Report SR-3908 3.1, Cray Computer (1994)
[8] Nieplocha, J., Carpenter, B.: ARMCI: A portable remote memory copy library for distributed array libraries and compiler run-time systems. In: Parallel and Distributed Processing. Volume 1586 of Lecture Notes in Computer Science. Springer-Verlag (1999) 533–546
[9] Bailey, D., Harris, T., Saphir, W., van der Wijngaart, R., Woo, A., Yarrow, M.: The NAS parallel benchmarks 2.0. Technical Report NAS-95-020, NASA Ames Research Center (1995)
[10] Nieplocha, J., Tipparaju, V., Saify, A., Panda, D.: Protocols and strategies for optimizing performance of remote memory operations on clusters. In: Proc. Workshop Communication Architecture for Clusters (CAC02) of IPDPS'02, Ft. Lauderdale, Florida (2002)
[11] Open64/SL Developers: Open64/SL compiler and tools. http://hipersoft.cs.rice.edu/open64 (2003)
[12] Open64 Developers: Open64 compiler and tools. http://sourceforge.net/projects/open64 (2001)
[13] Prakash, S., Dhagat, M., Bagrodia, R.: Synchronization issues in data-parallel languages. In: Proceedings of the 6th Workshop on Languages and Compilers for Parallel Computing. Volume 768. Springer-Verlag (1994) 76–95
[14] Naik, V.: A scalable implementation of the NAS parallel benchmark BT on distributed memory systems. IBM Systems Journal 34 (1995)
Evaluating the Impact of Programming Language Features on the Performance of Parallel Applications on Cluster Architectures
Konstantin Berlin1, Jun Huan2, Mary Jacob3, Garima Kochhar3, Jan Prins2, Bill Pugh1, P. Sadayappan3, Jaime Spacco1, and Chau-Wen Tseng1
1 Department of Computer Science, University of Maryland, College Park, MD 20742
2 Department of Computer Science, University of North Carolina, Chapel Hill, NC 27599
3 Department of Computer and Information Science, Ohio State University, Columbus, OH 43210
Abstract. We evaluate the impact of programming language features on the performance of parallel applications on modern parallel architectures, particularly for the demanding case of sparse integer codes. We compare a number of programming languages (Pthreads, OpenMP, MPI, UPC) on both shared and distributed-memory architectures. We find that language features can make parallel programs easier to write, but cannot hide the underlying communication costs for the target parallel architecture. Powerful compiler analysis and optimization can help reduce software overhead, but features such as fine-grain remote accesses are inherently expensive on clusters. To avoid large reductions in performance, language features must avoid degrading the performance of local computations.
1 Introduction
Parallel computing can potentially provide huge amounts of computation power for solving important problems in science and engineering. However, the difficulty of writing parallel programs poses a major barrier to exploiting the power of parallel architectures. Programming is especially difficult for applications with irregular, fine-grain memory access patterns, since current parallel programming languages, tools, and architectures are evolving in directions less suited for these codes. Three vital goals are in conflict when choosing a parallel programming paradigm for clusters of shared-memory multiprocessors:
Exploitation of maximum machine performance on a particular platform.
Portability of code and performance across various high performance computing platforms.
Programmability: easy creation of correct, reliable and efficient programs.
Parallel programming languages are designed by making different tradeoffs, depending on assumptions of the underlying compiler, runtime system, hardware support, target application characteristics, and acceptable user effort. For embarrassingly parallel applications with coarse-grain communication, the choice of a parallel programming language is less important since almost all languages can achieve good performance with low programmer effort. Unfortunately, no current parallel programming paradigm is satisfactory for more complex applications with fine-grain parallelism and irregular remote accesses. MPI is the most portable and achieves the best performance on distributed-memory machines for most codes, but is difficult to program and is inefficient for applications with many irregular fine-grained accesses. OpenMP and Pthreads are simple and efficient on shared-memory nodes, but do not work well (if at all) on clusters. HPF is portable but limited in its flexibility and applicability. Java is popular but does not yet have widely adopted libraries/APIs for efficient parallel execution on clusters. A promising approach for easing the task of writing codes with fine-grain parallel accesses is to use programming languages that provide flexible remote accesses and support for a shared address space, such as UPC and Co-Array Fortran. These hybrid languages simplify code development because programmers can rely on language support for fine-grain remote accesses to get a working version quickly, before selectively putting effort into modifying a small subset of the code for enhanced performance. In comparison, programming paradigms such as MPI require explicit communications to be inserted throughout the code for correctness. A problem with this hybrid approach is the architectural trend towards building high-end supercomputers from clusters of PCs or shared-memory multiprocessors (SMPs) using commodity parts, since this approach yields systems with expensive, high latency inter-processor communication. As a result users are gravitating towards parallel programming paradigms such as MPI that can efficiently support coarse-grain bulk communications. Parallel programming paradigms such as UPC that rely on fine-grained remote accesses may find it difficult to achieve good performance on clusters, because the underlying architecture does not efficiently support such operations. Our goal in this paper is to evaluate and quantify the performance of parallel language features based on experimental evaluations of a number of challenging parallel applications, particularly those requiring fine-grain remote accesses. We identify programming language features that can reduce programmer effort and quantify the overhead encountered when using such features. We attempt to determine the feasibility of using a hybrid fine and coarse-grain parallel programming model on cluster architectures. We pay special attention to the performance of UPC because it is the first widely available commercially supported high-level parallel programming language that provides flexible non-local accesses for both shared and distributed memory paradigms. We also attempt to place our evaluation in the context of ongoing trends in parallel architectures and applications. More specifically, the contributions of this paper include:
1. Experimental evaluation of language features for challenging irregular parallel applications.
2. Observations on programmability and performance for Pthreads, OpenMP, MPI, and UPC.
3. Suggestions for achieving both programmability and good performance in the future.
4. Predictions on the impact of architectural developments on the performance of parallel language features.
While our finding that fine-grained parallel applications perform poorly on cluster architectures is not surprising, our study quantifies the performance penalty for interesting programming languages using several challenging irregular benchmarks. In the remainder of the paper, we explain our choice of evaluation parameters (applications, parallel languages) and present our experimental results. We present our observations on programming language features and their impact on performance, followed by a number of suggestions for their usage in developing parallel applications. We conclude with a discussion of the impact of architecture trends and a comparison with related work.
2 Applications
Many scientific applications have very regular memory access patterns and can be easily parallelized and implemented efficiently for a large number of parallel architectures. We chose for our evaluation three application classes that are more complex and represent challenging test cases for parallel programming paradigms. The three types of parallel applications are:

Irregular table update. Many parallel database operations can be viewed as making irregular parallel accesses to a large distributed table of values. If the accesses perform associative reduction operations (e.g., summation), the application is similar to a large histogram and may be implemented using a coarse-grain bucket algorithm. Accesses may also perform arbitrary read-modify-write operations, in which case fine-grain algorithms are necessary. The amount of computation in table updates is static and may be distributed evenly at compile time. Table update has potentially very high communication requirements.

Irregular dynamic accesses. A second class of challenging parallel applications performs irregular parallel accesses to sparse data structures. The application may allow a limited amount of coarse-grained accesses. The amount of computation is static and may be distributed evenly at compile time, and the application has very high communication requirements.
Integer sort. Large in-memory sorting is a third parallel application class that is surprisingly difficult to perform efficiently on distributed-memory parallel architectures. Many parallel implementations are possible, including both coarse and fine-grained algorithms. Sorting has high communication requirements.

All three types of benchmarks are characterized by irregular memory accesses to large data structures. Depending on the benchmark, both coarse and fine-grained remote accesses may be necessary.
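To make the table-update class concrete, the following sequential C kernel is an illustrative sketch (the names and sizes are invented, and this is not one of the evaluated codes): each update read-modify-writes one table entry chosen by an irregular, data-dependent index. The parallel versions discussed later differ mainly in how the table and the update stream are distributed and in whether updates are applied point-wise or exchanged in buckets.

    /* Sequential table-update kernel (illustrative only). */
    #define TABLE_SIZE (1u << 22)

    void table_update(long table[TABLE_SIZE], const unsigned idx[],
                      const long val[], int n_updates) {
        for (int k = 0; k < n_updates; k++) {
            unsigned slot = idx[k] & (TABLE_SIZE - 1);  /* irregular access */
            table[slot] = table[slot] + val[k];         /* read-modify-write */
        }
    }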
3 Programming Paradigms
Broadly speaking, parallel paradigms can be classified as shared memory with explicit threads (Pthreads, Java threads), shared memory with task/data parallelism (OpenMP, HPF), distributed memory with explicit communication (MPI, SHMEM, Global Arrays), or distributed memory with special global accesses (Co-Array Fortran, UPC). We describe the paradigms used in our study in more detail.

Pthreads (POSIX threads) is a shared-memory programming model where parallelism takes the form of parallel function invocations [9]. A parallel function body is executed in parallel by many threads, which can all access shared global data. Pthreads is the underlying implementation of parallelism for many programming paradigms. Java is a general-purpose programming language that supports parallelism in the form of threads [10]. Parallel Java programs on SMPs resemble Pthreads programs. Pthreads and Java are available only on SMPs.

OpenMP is a shared-memory programming model where parallelism takes the form of parallel directives for loops and functions [4]. OpenMP directives specify loops whose iterations should be executed in parallel, as well as functions that may be invoked in parallel. Additional directives specify data that should be shared or private to each thread. Compilers translate OpenMP programs into code that resembles Pthreads programs, where parallel loop bodies are made into parallel functions. OpenMP is an industry standard and is supported in many languages and platforms. OpenMP is currently available only on SMPs.

MPI (Message Passing Interface) is a distributed-memory programming model where threads explicitly communicate using functions in the MPI run-time library to send and receive messages [8]. It also includes a large selection of efficient collective communication routines. MPI is widely available (on virtually every parallel platform) and well tuned for performance. Despite the programming effort required, MPI is the current programming paradigm of choice because of its portability and performance.

UPC (Unified Parallel C) is a shared-memory programming model based on a version of C extended with global pointers and data distribution declarations for shared data [3]. Accesses via global pointers are translated into interprocessor communication by the UPC compiler. A distinguishing feature of UPC
is that global pointers may be cast into local pointers for efficient local access. Explicit one-way communication similar to SHMEM is also supported in the UPC run-time library via routines such as upc_memput() and upc_memget(). It is the compiler's responsibility to translate memory addresses and insert inter-processor communication. UPC is the first commercially supported parallel paradigm that supports flexible remote accesses to a shared-memory abstraction.
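To make these features concrete, the following UPC fragment is a hedged sketch (the declarations and sizes are invented, not code from the paper). It shows a fine-grain remote update through a shared array, a cast of locally-affine shared data to an ordinary C pointer, and a bulk one-way transfer with upc_memget():

    #include <upc.h>

    #define N   (1 << 20)
    #define BLK 1024

    shared double table[N];                     /* default cyclic distribution */
    shared [BLK] double blocks[THREADS * BLK];  /* one contiguous block per thread */

    void upc_feature_sketch(void) {
        /* Fine-grain remote access: this element generally lives on another
           thread; the compiler turns the access into communication. */
        table[(MYTHREAD + 1) % N] += 1.0;

        /* Cast to local: table[MYTHREAD] has affinity to MYTHREAD under the
           cyclic layout, so it can be reached through a plain C pointer. */
        double *mine = (double *) &table[MYTHREAD];
        *mine += 1.0;

        /* Coarse-grain one-way communication: fetch a neighbor's whole block
           with a single bulk transfer. */
        double buf[BLK];
        int nbr = (MYTHREAD + 1) % THREADS;
        upc_memget(buf, &blocks[nbr * BLK], BLK * sizeof(double));
    }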
4 Performance Evaluation
We believe performance is a key factor (if not the key factor) determining the success of parallel programming paradigms. To gain insight into the factors underlying performance, we performed an experimental performance evaluation of a number of programming paradigms on the following parallel platforms.

Compaq AlphaServer SC. A 64-node cluster located at ORNL. Each node is an SMP with 2 GB of memory, four ES-40 processors, and a single Quadrics network adapter. The nodes run AlphaServer 2.0 OS; the MPI implementation is built on the native Quadrics libraries.

Sun SunFire 6800. A 24-processor Sun shared-memory multiprocessor located at the University of Maryland, with UltraSparc III processors, 24 GB of memory, and a crossbar interconnect, running SunOS 5.8.
4.1 Table Update
TableUpdate performs irregular updates on a large distributed hash table. Updates are commutative and may be reordered. Several different versions of TableUpdate are used:

MPI. The coarse-grain algorithm uses buckets to store updates destined for data on other processors. Buckets are synchronously exchanged between processors once they are filled. Upon receiving a bucket, a processor applies the updates it contains to the local portion of the table.

UPC. The fine-grained algorithm uses global pointers to update non-local table elements.

UPC (bucket). The coarse-grain algorithm uses the same bucketized scheme as the MPI code. One-way explicit communication is used to transfer buckets between processors.

C with Pthreads. The shared-memory code uses parallel function calls to update table elements. All threads directly access the table as a shared array.

C with OpenMP. The shared-memory code parallelizes the loops computing table elements using OpenMP annotations.

Java. The shared-memory code uses Java threads to update the shared global table.
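The bucketized exchange shared by the MPI and UPC (bucket) versions can be sketched as follows; this is a simplified, hypothetical rendering in C with MPI, with invented names, not the authors' code. Updates destined for each processor are accumulated into per-destination buckets, exchanged in bulk, and then applied to the local part of the table.

    #include <mpi.h>
    #include <stdlib.h>

    typedef struct { long slot; long val; } update_t;

    /* Exchange per-destination buckets with all ranks and apply the received
       updates to the local portion of the table. */
    void flush_buckets(long *local_table, update_t **bucket, int *send_count,
                       int nprocs) {
        MPI_Datatype upd_type;
        MPI_Type_contiguous(2, MPI_LONG, &upd_type);
        MPI_Type_commit(&upd_type);

        /* Tell every rank how many updates to expect from us. */
        int *recv_count = malloc(nprocs * sizeof(int));
        MPI_Alltoall(send_count, 1, MPI_INT, recv_count, 1, MPI_INT, MPI_COMM_WORLD);

        int *sdispl = malloc(nprocs * sizeof(int));
        int *rdispl = malloc(nprocs * sizeof(int));
        int nsend = 0, nrecv = 0;
        for (int p = 0; p < nprocs; p++) {
            sdispl[p] = nsend;  nsend += send_count[p];
            rdispl[p] = nrecv;  nrecv += recv_count[p];
        }

        /* Pack the buckets, exchange them in bulk, then apply locally. */
        update_t *sendbuf = malloc(nsend * sizeof(update_t));
        update_t *recvbuf = malloc(nrecv * sizeof(update_t));
        for (int p = 0; p < nprocs; p++)
            for (int k = 0; k < send_count[p]; k++)
                sendbuf[sdispl[p] + k] = bucket[p][k];

        MPI_Alltoallv(sendbuf, send_count, sdispl, upd_type,
                      recvbuf, recv_count, rdispl, upd_type, MPI_COMM_WORLD);

        for (int k = 0; k < nrecv; k++)
            local_table[recvbuf[k].slot] += recvbuf[k].val;  /* local indices */

        free(sendbuf); free(recvbuf); free(recv_count); free(sdispl); free(rdispl);
        MPI_Type_free(&upd_type);
    }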
Figure 1 presents the performance of TableUpdate for a table of size 2^22 on a Compaq AlphaServer SC for MPI, UPC, and UPC (bucket). Performance is measured in the number of table updates per millisecond per processor, and is presented using a log scale. Results show that MPI greatly outperforms UPC, though UPC using a coarse-grain bucket algorithm can approach the performance of MPI. UPC suffers significant performance degradation when using fine-grain access patterns because of the software and hardware overhead of making point-wise remote accesses. We next examine TableUpdate performance on a Sun SunFire SMP. Results in Figure 2, for a table of size 2^25, show that the Java, C with Pthreads, and C with OpenMP implementations of TableUpdate achieve comparable performance, though Java performance is slightly higher (possibly because it is better tuned for performance by the vendor). The Sun UPC compiler has significantly poorer performance because of the software overhead of translating point-wise accesses to shared data.
4.2 Conjugate Gradient
The conjugate gradient benchmark (NAS CG benchmark) finds the principal eigenvalue of a sparse real matrix A with a random pattern of nonzeros using the inverse power method [1]. This involves solving a linear system of the form Az = x for different vectors x. The solver uses the conjugate gradient method and repeatedly calculates the sparse matrix-vector product q = Ap, where p and q are dense vectors whose length is the order of A. This benchmark is widely used and stresses memory and communication performance. We evaluated the following versions of CG:

MPI. This Fortran 77 version was taken from the NAS 2.3 suite and uses explicit MPI communication operations. The implementation uses a (block, block) distribution of A, and replicates the appropriate section of the source vector for the dot product with the corresponding section of A. The total size of the implementation is 1800 lines.

OpenMP. This is a shared-memory implementation in C with OpenMP directives, derived from the NAS 2.3 serial code by the RWC in Japan, with a total size of 900 lines. This implementation uses a static partition across processors of the row loop of the matrix-vector product. A long-lived parallel region is used to reduce overheads between successive sparse matrix-vector products. OpenMP work-distribution directives are inserted for the initializations, the sparse matrix-vector product, and the dot products in the algorithm.

UPC (OpenMP). This UPC implementation was derived from the OpenMP shared-memory version. About 1/3 was rewritten from OpenMP, and 1/4 was added new. The total size of this version is 1300 lines. It distributes the matrix A using a block-cyclic distribution with a large block size. This is the best distribution for this problem that can be expressed directly in UPC without explicitly partitioning the matrix A. Work is partitioned between processors in the sparse matrix-vector product according to the portions of A held by each processor.
Fig. 1. Table Update (AlphaServer, 2^22 Table)
Fig. 2. Table Update (Sun SMP, 2^25 Table)
Fig. 3. Conjugate Gradient (AlphaServer, Class B)
The source vector is replicated to reduce communication; the default strategy of distributing the shared vector leads to run times that are two orders of magnitude larger, due to the repeated fine-grain random accesses to the vector in the sparse matrix-vector product.

UPC (MPI). This UPC implementation more closely follows the MPI algorithm. It uses an explicit (blocked,*) distribution of A and replicates the source vector. Coarse-grain data movement (e.g., upc_memget(), upc_memput()) is used to replicate the result vector. The total size of this version is 1600 lines.

Figure 3 presents our results for a class B problem size for CG on the AlphaServer SC. Results are reported in MFLOPS per processor. The total number of FLOPS required is defined by the problem size. OpenMP results are only available up to the four processors of a node and scale relatively poorly due to the replication of the source vector into the processor caches through misses occurring in random order. MPI outperforms both versions of UPC, though the UPC (MPI) implementation is closer in performance. The sequential performance of the UPC implementations is 50-60% of the single-processor MPI and OpenMP performance. The MPI implementation achieves a speedup of 10.4 with 16 processors and a speedup of 17.6 with 32 processors. The UPC (OpenMP) speedup is 4.0 with 16 processors and 5.0 with 32 processors; hence it performs at only 28% of the MPI implementation at 32 processors. The UPC (MPI) speedup is better at 7.0 with 16 processors and 9.1 with 32 processors; hence it performs at 52% of the MPI implementation at 32 processors. The performance of CG is heavily dependent on memory system performance. For comparison, a vectorized implementation of the CG benchmark achieves about 1,500 MFLOPS on a single processor of an NEC SX-6, and about 1,100 MFLOPS per processor using all eight processors of an SX-6 node.
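The computational core of all these CG versions is the sparse matrix-vector product. A minimal CSR sketch with the static row-loop partition used by the OpenMP version might look as follows (an illustrative fragment with an assumed data layout, not the NAS source):

    /* q = A * p for a sparse matrix in CSR format: rowptr[i]..rowptr[i+1]-1
       index the nonzeros of row i.  The row loop is statically partitioned
       across threads, as in the OpenMP version described above. */
    void spmv_csr(int nrows, const int *rowptr, const int *colidx,
                  const double *a, const double *p, double *q) {
        #pragma omp parallel for schedule(static)
        for (int i = 0; i < nrows; i++) {
            double sum = 0.0;
            for (int k = rowptr[i]; k < rowptr[i + 1]; k++)
                sum += a[k] * p[colidx[k]];   /* fine-grain random reads of p */
            q[i] = sum;
        }
    }

The random reads of p in the inner loop are exactly the accesses that make the distributed-vector strategy so expensive in UPC, and the reason the vector is replicated instead.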
4.3 Integer Sort
Integer sort performs a parallel radix sort of a large collection of integer data. We timed MPI and UPC implementations on an AlphaServer SC. Both implementations used coarse-grain parallel algorithms employing bulk explicit messages, since a fine-grain UPC implementation was found to be intolerably inefficient. A 128K key input data size is used. Performance is reported as efficiency. Results in Figure 4 show that MPI outperforms UPC slightly, with the difference increasing for larger numbers of processors.
4.4 UPC Microbenchmark
Our experimental results for entire applications showed that fine-grain algorithms were exceedingly inefficient on cluster architectures. We repeated our experiments using the Berkeley UPC compiler [2] on the AMD Athlon PC cluster at the Ohio Supercomputer Center, and UPC performance was only slightly improved relative to MPI. The problem, we believe, is caused by the overhead of fine-grained accesses in UPC. UPC provides global shared pointers that can easily access non-local
data, providing a convenient shared-memory abstraction for parallel programming. Though a shared data element can be accessed in a completely transparent fashion by any process executing on any processor, the overhead of direct point-wise access can be quite significant. To quantify both the hardware and software overheads in greater detail, we used UPC microbenchmarks to evaluate performance on a wide range of parallel architectures:

Compaq AlphaServer SC system (Falcon) at Oak Ridge National Laboratory, running Version 1.7 of the Compaq UPC compiler.
Single-node AlphaServer Marvel at the University of Florida, Gainesville (we thank Dr. Alan George for providing access to this machine), running Version 2.1 of the Compaq UPC compiler.
AMD Athlon cluster (64 dual-processor nodes) with Myrinet interconnect at the Ohio Supercomputer Center, running the Berkeley UPC compiler.
Sun SunFire 6800 system (24 processors) at the University of Maryland, running the Sun UPC compiler.
Cray T3E system at Michigan Tech University, running the original UPC compiler.
SGI Origin 2000 at the University of North Carolina, running the Intrepid UPC compiler.

We measured the cost of direct point-wise shared data accesses, using both private and shared pointers. Figure 5 shows the per-word access cost using a read-modify-write (increment-by-one) operation on floating-point doubles, for various modes of access:

Private: local shared data that is accessed as private data by casting a UPC shared pointer to a private pointer.
Shared-local: local shared data that is accessed directly using a UPC shared pointer.
Shared-same-node: non-local shared data that is local to another process on the same SMP node.
Shared-remote: non-local shared data that is on a different node.

It can be observed that on all systems there is a significant difference in the access time for private data and shared-local data, even though no data movement is involved in the latter. The difference represents the overhead of translating a shared UPC reference into a node-address pair. This overhead was over 500 times the local memory access cost on the Compaq AlphaServer with the earlier version (v1.7) of the Compaq UPC compiler. Compiler enhancements have reduced the overhead in the later version (v2.1) of the compiler to around 100 times the private data access cost. Another area where compiler optimization can reduce the software overhead of memory accesses is accessing non-local data located on the same node (belonging to another thread on the same node).
Fig. 4. Integer Sort (AlphaServer, 128K keys)
Fig. 5. UPC Point-wise Data Access Costs (read-modify-write double)
More powerful compiler optimizations can use more efficient local memory accesses in this situation, as demonstrated by the newer Compaq UPC compiler (v2.1). Nonetheless, even with both optimizations (for local shared data and same-node shared data), memory access costs are still two orders of magnitude higher than accesses to private memory for UPC on the AlphaServer Marvel system. Fine-grain non-local accesses must therefore be used sparingly, if at all, in performance-critical sections of a parallel UPC program.
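A stripped-down version of such a microbenchmark could look as follows (a hedged sketch with invented array names and sizes; the paper's timing harness is not shown). It performs the same read-modify-write through a private pointer, a shared pointer to local data, and a shared pointer to remote data:

    #include <upc.h>

    #define M (1 << 20)
    shared double data[M];   /* cyclic layout: data[i] has affinity to thread i % THREADS */

    void access_modes(int iters) {
        /* Private: local shared data accessed through a plain C pointer
           obtained by casting away the shared qualifier. */
        double *priv = (double *) &data[MYTHREAD];
        for (int k = 0; k < iters; k++) *priv += 1.0;

        /* Shared-local: the same element accessed through the shared pointer;
           the extra cost is address translation, not data movement. */
        for (int k = 0; k < iters; k++) data[MYTHREAD] += 1.0;

        /* Shared-remote: an element with affinity to another thread (on the
           same node or a different one, depending on the platform). */
        int other = (MYTHREAD + 1) % THREADS;
        for (int k = 0; k < iters; k++) data[other] += 1.0;
    }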
4.5 Evaluation Summary
Summarizing our results, we find that on SMPs, thread-based paradigms are closest to the underlying hardware and provide the best performance. On clusters, paradigms with explicit communication have the lowest overhead and achieve
the best performance. UPC programs can achieve good performance when written in a similar coarse-grain style using bulk communication routines; otherwise performance can be extremely poor.
5 Language Features
Based on our experimental evaluation, we present some observations and suggestions about high-level language features. A number of parallel programming languages provide features that create the illusion of shared memory. The UPC programming model provides access to cyclically distributed shared arrays through global pointers, though when accessing only local portions of a shared array, global pointers may be cast back into local pointers for greater efficiency. In addition, the UPC run-time library provides one-way, coarse-grained explicit communication primitives through functions such as upc_memget() and upc_memput(). We make the following observations about these language features:

A global shared memory programming model is easy to use. At the core of the UPC programming model is the ability to easily access non-local data in a parallel program simply through global pointers. Programmers need only specify the data that is to be distributed across processors and reference it through special global pointers. The fine-grained UPC programming model is very simple and easy to use. The resulting code is cleaner and more maintainable than in paradigms such as MPI that require explicit communication in the program.

User-level shared memory is not a good reflection of clusters. While the programming model may allow easy fine-grain access to non-local data, this is not supported by the underlying hardware architecture. The interconnect between nodes of a cluster typically provides high bandwidth but also long latencies, making aggregate coarse-grained communication much more efficient than many fine-grained remote accesses. This problem will only worsen as future parallel architectures continue to evolve towards clusters of SMPs. In comparison, the coarse-grain one-way communication primitives in many languages more accurately reflect the actual communication mechanisms supported by the hardware.

A shared-memory programming model can encourage poor performance on clusters. Because the fine-grained shared-memory programming model is so seductive, one can argue that it actually leads to poor performance by encouraging programmers to write fine-grain codes that execute poorly on clusters. Programmers can code around this problem, but usually only at the cost of complicating the programming model or changing their coarse-grain algorithm.
We are dubious that compiler techniques will solve this problem. Given the lack of hardware support for efficient fine-grain communication on clusters, we believe programmers will need to develop parallel algorithms with coarse-grain block data movement to achieve good performance. Compilers can remove some of the inefficiencies of fine-grain communication, but cannot robustly transform fine-grain parallel algorithms into efficient block parallel codes for clusters.

The (hybrid) programming model can combine fine-grain and coarse-grain accesses. One advantage of the UPC programming model is that it allows the integration of fine-grain remote accesses with global pointers and coarse-grain explicit communication using library routines such as upc_memput() and upc_memget(). As we stated previously, a hybrid programming paradigm such as UPC can ease the development and maintenance of parallel codes. Most of the program may be written cleanly using global pointers, inserting explicit coarse-grain communication only for performance-critical sections. Our experimental evaluation shows that when done well, the resulting codes can achieve performance close to MPI on clusters. However, programmers must be extremely careful because the cost of using global pointers for remote accesses is so high. Developing coarse-grain parallel algorithms for performance-critical sections of the program may also require extensive modifications to the algorithms and data structures used in the code.

Programming language features must avoid degrading local computations. Many computations in parallel programs can be performed on purely local or previously prefetched remote data. Parallel programming languages should be designed so that these local computations can be compiled (and optimized) by the native sequential compiler; otherwise performance can degrade, sometimes significantly. A great deal of the success of MPI can be attributed to following this rule, since all computations depend only on local data after calls to MPI communication functions return. In comparison, UPC requires user-inserted explicit copies of remote global data to local buffers (or casting global pointers to local pointers if the shared data is already local) to avoid excessive overhead. Simply accessing global shared data is too expensive, even when that data is located entirely locally: accessing data that is actually local through a global pointer in UPC can result in an over 100-fold slowdown.
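The following UPC fragment illustrates the coding pattern this observation suggests (an invented sketch, not code from the paper): remote data is fetched once with a bulk upc_memget(), and the subsequent computation runs on a private buffer that the native sequential compiler can optimize freely.

    #include <upc.h>

    #define CHUNK 1024
    shared [CHUNK] double v[THREADS * CHUNK];   /* one contiguous chunk per thread */

    double sum_neighbor_chunk(void) {
        /* Copy the remote chunk once, in bulk, into a private buffer ... */
        double buf[CHUNK];
        int nbr = (MYTHREAD + 1) % THREADS;
        upc_memget(buf, &v[nbr * CHUNK], sizeof buf);

        /* ... so that the compute loop below is ordinary sequential C code,
           with no shared-pointer overhead on each access. */
        double s = 0.0;
        for (int i = 0; i < CHUNK; i++)
            s += buf[i];
        return s;
    }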
5.1 Advice on Choosing Parallel Paradigms
We summarize our observations on the parallel language features as follows. Even though a language like UPC may support a fine-grain programming model, it can achieve respectable performance on clusters only if fine-grain remote accesses are used sparingly. Coarse-grain parallel algorithms and bulk communication are still essential for achieving good performance. For fine-grain parallel algorithms, even though language and compiler support can improve performance compared
to naive implementations, absolute performance on clusters is likely to be so poor that the differences will be insignificant. Based on our experiences, we believe that the prime factor in choosing a parallel paradigm is the nature of the algorithm. For coarse-grain parallel algorithms on clusters, many choices are possible. For peak performance, explicit message-passing paradigms such as MPI and SHMEM will likely provide the best performance. If program development time is an issue, choosing a hybrid UPC implementation and selectively using bulk and collective communication routines such as upc_memget() and upc_memput() in computationally intensive portions of the program can be useful. Programming effort can also be reduced by exploiting existing libraries where possible. For fine-grain parallel algorithms, there are fewer options. Implementations on clusters using only fine-grain language features are likely to be extremely slow. If the data size is small, these codes may be executed on SMPs. Otherwise, coarse-grain alternatives should be developed if possible. On the Cray T3E (the original platform for UPC), UPC appears to be an unqualified success and one of the best possible choices for a programming language/paradigm. However, the suitability of fine-grain programming languages for cluster environments, with their higher latencies and message overheads, is unclear. Obtaining good performance from a shared-memory abstraction in a cluster environment requires programming in specific and sometimes convoluted styles, discarding many of the ease-of-use features of the language. Advancing compiler technology can help in some cases, but still results in an environment with a complicated and opaque performance model. The ability of a programmer to write a complicated fine-grain parallel program and have confidence that it will achieve good performance across a range of platforms still seems a distant dream.
6 Impact of Trends in Parallel Architectures
We also wish to evaluate parallel language features in the context of ongoing architectural developments. Here we examine developments and trends in parallel computer architectures and their impact on parallel programming paradigms. Faster interconnects. High-speed cluster interconnects continue to improve in bandwidth and latency. Both proprietary interconnects (e.g., Quadrics Elan used in Compaq AlphaServer) and systems for connecting commodity processors (e.g., SCI, Dolphin, Myrinet, VIA, InfiniBand) are improving in performance. Such interconnects also offer better support for shared memory, small messages, and one-sided communication, and thus may improve fine-grain communication performance. On the other hand, while the absolute performance of inter-processor communication is steadily improving, the cost of communication relative to computation continues to increase due to ever faster nodes and processors. We see no technological developments that will reduce or even slow this gap in the near future.
Larger memories. Although memory latency is increasing relative to processor speeds, memory size is increasing due to greater chip densities. As memory prices continue to drop, it is becoming possible to construct parallel systems with much larger amounts of memory than in the past. Cluster and MPP systems can now be built with several terabytes of memory, and even SMPs can be purchased with 256 gigabytes or more of memory. Continuing increases in SMP memory size may allow them to run (commercial) applications previously limited to MPPs and clusters, reducing the demand and vendor support for more complicated programming models.

Processor/memory integration. Processor-in-memory (PIM) designs can potentially offer enormous improvements for specific problems by providing efficient parallel operations on data. However, they do not obviate the need for inter-processor communication, so the general utility of such designs will still depend on communication performance. Specific aspects of PIM designs may start to appear in memory controllers for conventional systems, but are probably still a few years away. In general, PIM-like systems will likely increase the cost of non-local memory accesses relative to computation, increasing rather than reducing the difficulty of efficient parallel programming.

Multithreading. Microprocessor design seems to be heading towards greater support for multithreading to tolerate increasing memory latencies. Increasing levels of task-level multithreading will start to make even single-processor nodes on MPP systems resemble SMPs, and will likely accelerate the shift to hybrid programming models suitable for cluster architectures.

The good news is that as parallel architectures improve, programs will be able to process larger irregular problems more quickly. The bad news is that the efficiency of parallel programs will continue to decrease.
7 Related Work
Obviously there is a tremendous amount of research on parallel language design and benchmarking. The most relevant to this paper is the recent work analyzing the performance of UPC. El-Ghazawi et al. have been developing and benchmarking UPC codes [5, 6, 7] and have found that performance can be respectable if a coarse-grain programming style is adopted. Yelick et al. have developed their own UPC translator/compiler [2]. Their experiments show similar results: fine-grain accesses are significantly more expensive, and performance improves if the compiler can aggregate remote accesses to reduce costs. In comparison, we study a wider range of parallel languages on a slightly different set of applications. Pugh and Spacco use similar benchmarks to evaluate MPJava, a method for developing high-performance parallel computations in Java [11].
8 Conclusions
In this paper, we evaluated features from a number of parallel programming languages (MPI, UPC, OpenMP, Java, C/Pthreads) for their performance and ease of use. We find that languages such as UPC that support a shared memory abstraction and flexible non-local accesses can reduce the difficulty of parallel programming. Unfortunately, parallel applications requiring fine-grain accesses still achieve poor performance on clusters because of the amount of inherent software and hardware overhead, regardless of the programming paradigm or language feature used. Language support for fine-grain non-local accesses can still prove useful by reducing the difficulty of parallel programming. Decent performance is achievable by using coarse-grain bulk communication in performance-critical sections of the code.
References

[1] D. Bailey, E. Barszcz, J. Barton, D. Browning, R. Carter, L. Dagum, R. Fatoohi, S. Fineberg, P. Frederickson, T. Lasinski, R. Schreiber, H. Simon, V. Venkatakrishnan, and S. Weeratunga, “The NAS Parallel Benchmarks,” Technical Report RNR-94-007, NASA Ames Research Center, March 1994.
[2] W. Chen, D. Bonachea, J. Duell, P. Husbands, C. Iancu, and K. Yelick, “A Performance Analysis of the Berkeley UPC Compiler,” Proceedings of the 17th Annual International Conference on Supercomputing (ICS’03), June 2003.
[3] W. Carlson, J. Draper, D. Culler, K. Yelick, E. Brooks, and K. Warren, “Introduction to UPC and Language Specification,” Center for Computing Sciences Technical Report CCS-TR-99-157, May 1999.
[4] R. Chandra, R. Menon, L. Dagum, D. Kohr, D. Maydan, and J. McDonald, “Parallel Programming in OpenMP,” Morgan Kaufmann Publishers, 2000.
[5] F. Cantonnet, Y. Yao, S. Annareddy, A. Mohamed, and T. El-Ghazawi, “Performance Monitoring and Evaluation of a UPC Implementation on a NUMA Architecture,” Proceedings of the International Parallel and Distributed Processing Symposium (IPDPS’03), April 2003.
[6] T. El-Ghazawi and S. Chauvin, “UPC Benchmarking Issues,” Proceedings of the International Conference on Parallel Processing (ICPP’01), September 2001.
[7] T. El-Ghazawi and F. Cantonnet, “UPC Performance and Potential: A NPB Experimental Study,” Proceedings of SC2002, Baltimore, November 2002.
[8] W. Gropp, E. Lusk, and A. Skjellum, “Using MPI: Portable Parallel Programming with the Message-Passing Interface,” MIT Press, Cambridge, MA, 1994.
[9] B. Lewis and D. J. Berg, “Multithreaded Programming with Pthreads,” Prentice Hall, 1998.
[10] S. Oaks and H. Wong, “Java Threads,” Nutshell Handbook, O’Reilly & Associates, Inc., 1997.
[11] B. Pugh and J. Spacco, “MPJava: High-Performance Message Passing in Java using Java.nio,” Proceedings of the Workshop on Languages and Compilers for Parallel Computing (LCPC’03), College Station, TX, October 2003.
Putting Polyhedral Loop Transformations to Work

Cédric Bastoul 1,3, Albert Cohen 1, Sylvain Girbal 1,2,4, Saurabh Sharma 1, and Olivier Temam 2

1 A3 group, INRIA Rocquencourt
2 LRI, Paris South University
3 PRiSM, University of Versailles
4 LIST, CEA Saclay
Abstract. We seek to extend the scope and efficiency of iterative compilation techniques by searching not only for program transformation parameters but for the most appropriate transformations themselves. For that purpose, we need a generic way to express program transformations and compositions of transformations. In this article, we introduce a framework for the polyhedral representation of a wide range of transformations in a unified way. We also show that it is possible to generate efficient code after the application of polyhedral program transformations. Finally, we demonstrate an implementation of the polyhedral representation and code generation techniques in the Open64/ORC compiler.
1 Introduction
Optimizing and parallelizing compilers face a tough challenge. Due to their impact on productivity and portability, programmers of high-performance applications want compilers to automatically produce quality code on a wide range of architectures. Simultaneously, Moore's law indirectly urges architects to build complex architectures with deeper pipelines and (non-uniform) memory hierarchies, and wider general-purpose and embedded cores with clustered units and speculative structures. Static cost models have a hard time coping with rapidly increasing architecture complexity. Recent research on iterative and feedback-directed optimization [17] suggests that practical approaches based on dynamic information can better harness complex architectures. Current approaches to iterative optimization usually choose a rather small set of program transformations, e.g., cache tiling and array padding, and focus on finding the best possible transformation parameters, e.g., tile size and padding size [17], using parameter search space techniques. However, a recent comparative study of model-based vs. empirical optimizations [22] stresses that many motivations for iterative, feedback-directed or dynamic optimizations are irrelevant when the proper transformations are not available. We want to extend the scope and efficiency of iterative compilation techniques by making the program transformation itself one of the parameters. Moreover, we want to search
for compositions of program transformations and not only single program transformations. For that purpose, we need a generic method for expressing program transformations and compositions of them. This article introduces a unified framework for the implementation and composition of generic program transformations. This framework relies on a polyhedral representation of loops and loop transformations. By separating the iteration domains from the statement and iteration schedules, and by enabling per-statement transformations, this representation avoids many of the limitations of iteration-based program transformations, widens the set of possible transformations, and enables parameterization. Few invariants constrain the search space, and our non-syntactic representation imposes no ordering and compatibility constraints. In addition, statements are named independently of their location and surrounding control structures: this greatly simplifies the practical description of transformation sequences. We believe this generic expression is appropriate for systematic search space techniques. The corresponding search techniques and performance evaluations are out of the scope of this work and will be investigated in a follow-up article. This work presents the principles of our unified framework and the first part of its implementation. Also, since polyhedral transformation techniques can better accommodate complex control structures than traditional loop-based transformations, we start with an empirical study of control structures within a set of benchmarks. The four key aspects of our research work are: (1) empirically evaluating the scope of polyhedral program transformations, (2) defining a practical transformation environment based on a polyhedral representation, (3) showing that it is possible to generate efficient code from a polyhedral transformation, and (4) implementing the polyhedral representation and code generation technique in a real compiler, Open64/ORC [18], with applications to real benchmarks. Eventually, our framework operates at an abstract semantical level to hide the details of control structures, rather than on a syntax tree. It allows per-statement and extended transformations that make few assumptions about control structures and loop bounds. Consequently, while our framework is initially geared toward iterative optimization techniques, it can also facilitate the implementation of statically driven program transformations in a traditional optimizing compiler. The paper is organized as follows. We present the empirical analysis of static control structures in Section 2 and discuss their significance in typical benchmarks. The unified transformation model is described in Section 3. Section 4 presents the code generation techniques used after polyhedral transformations. Finally, the implementation in Open64/ORC is described in Section 5.
2 Static Control Parts
Let us start with some related work. Since we did not directly contribute to the driving of optimization and parallelization techniques, we will not compare with the vast literature in the field of model-based and empirical optimization.
Well-known loop restructuring compilers proposed unified models and intermediate representations for loop transformations, but none of them addressed the general composition and parameterization problem of polyhedral techniques. ParaScope [6] is both a dependence-based framework and an interactive source-to-source compiler for Fortran; it implements classical loop transformations. SUIF [11] was designed as an intermediate representation and framework for automatic loop restructuring; it quickly became a standard platform for implementing virtually any optimization prototype, with multiple front-ends, machine-dependent back-ends and variants. Polaris [4] is an automatic parallelizing compiler for Fortran; it features a rich sequence of analyses and loop transformations applicable to real benchmarks. These three projects are based on a syntax-tree representation and ad-hoc dependence models, and implement polynomial algorithms. PIPS [12] is probably the most complete loop restructuring compiler, implementing polyhedral analyses and transformations (including affine scheduling) and interprocedural analyses (array regions, alias). PIPS uses an expressive intermediate representation, a syntax tree with polyhedral annotations. Within the Omega project [14], the Petit dependence analyzer and loop restructuring tool [13] is much closer to our work: it provides a unified polyhedral framework (space-time mappings) for iteration reordering only, and it shares our emphasis on per-statement transformations. It is intended as a research tool for small kernels only. The MARS compiler [16] is also very close to our work: its polyhedral representation allows several loop transformations to be unified, easing the application of long transformation sequences. Its successes in iterative optimization [17] make it the main comparison point and motivation for our work, although MARS lacks the expressivity of the affine schedules we use in our unified model. Two codesign projects have a lot in common with our semi-automatic optimization project. MMAlpha [10] is a domain-specific single-assignment language for systolic array computations, a polyhedral transformation framework, and a high-level circuit synthesis tool. The interactive and semi-automatic approach to polyhedral transformations was introduced by MMAlpha. The PICO project [20] is a more pragmatic approach to codesign, restricting the application domain to loop nests with uniform dependences and aiming at the selection and coordination of existing functional units to generate an application-specific VLIW processor. Both tools only target small kernels.
2.1 Decomposition into Static Control Parts
In the following, loops are normalized and split into two categories: loops from 0 to some bound expression with an integer stride, called do loops; and other kinds of loops, referred to as while loops. Early phases of the Open64 compiler perform most of this normalization, along with closed-form substitution of induction variables. Notice that some Fortran and C while loops may be normalized to do loops when the bound and stride can be discovered statically. The following definition is a slight extension of static control nests [8]. Within a function body, a static control part (SCoP) is a maximal set of consecutive
Fig. 1. Example of decomposition into static control parts
statements without while loops, where loop bounds and conditionals may only depend on invariants within this set of statements. These invariants include symbolic constants, formal function parameters and surrounding loop counters: they are called the global parameters of the SCoP, as is any invariant appearing in some array subscript within the SCoP. A static control part is called rich when it holds at least one non-empty loop; rich SCoPs are the natural candidates for polyhedral loop transformations. An example is shown in Figure 1. We will only consider rich SCoPs in the following. As such, a SCoP may hold arbitrary memory accesses and function calls; a SCoP is thus larger than a static control loop nest [8]. Interprocedural alias and array region analysis would be useful for precise dependence analysis. Nevertheless, our semi-automatic framework copes with crude dependence information by authorizing the expert user to override static analysis when applying transformations.
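As a concrete illustration (an invented C fragment, not one of the benchmark codes), the first loop nest below is a rich SCoP: its bounds, its conditional and its subscripts are affine in the surrounding counters and in the global parameters n and m. The while loop that follows has a data-dependent exit condition, so it is excluded and separates the SCoPs; the last loop has static control again, but its indirect subscript limits the precision of dependence analysis.

    void example(int n, int m, double A[n][n], double x[n], const int idx[n]) {
        /* Rich SCoP: affine loop bounds, an affine conditional on the global
           parameter m, and affine array subscripts. */
        for (int i = 0; i < n; i++)
            for (int j = 0; j <= i; j++)
                if (j < m)
                    A[i][j] = A[i][j] + x[j];

        /* The while loop has a data-dependent exit condition, so it cannot be
           part of any SCoP and separates the two loop nests. */
        int k = 0;
        while (k < n - 1 && x[k] > 0.0)
            k++;

        /* Static control again (a new SCoP), but the indirect subscript
           x[idx[i]] is not affine (idx values are assumed to lie in [0, n)). */
        for (int i = 0; i < n; i++)
            x[idx[i]] = (double) k;
    }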
2.2 Automatic Discovery of SCoPs
SCoP extraction is greatly simplified when implemented within a modern compiler infrastructure such as Open64/ORC. Previous phases include function inlining, constant propagation, loop normalization, integer comparison normalization, dead-code and goto elimination, and induction variable substitution, along with language-specific preprocessing: pointer arithmetic is replaced by arrays, pointer analysis information is available (but not yet used in our tool), etc. The algorithm for SCoP extraction is detailed in [2]; it outputs a list of SCoPs associated with any function in the syntax tree. Our implementation in Open64 is discussed in Section 5.
2.3 Significance Within Real Applications
Thanks to an implementation of the previous algorithm into Open64, we studied the applicability of our polyhedral framework to several benchmarks.
Fig. 2. Coverage of static control parts in high-performance applications
Figure 2 summarizes the results for the SpecFP 2000 and PerfectClub benchmarks handled by our tool (single-file programs only, for the time being). Construction of the polyhedral representation takes much less time than the preliminary analyses performed by Open64/ORC. All codes are in Fortran77, except art and quake, which are in C, and lucas, which is in Fortran90. The first column shows the number of functions (inlining was not applied in these experiments). The next two columns count the number of SCoPs with at least one global parameter and enclosing at least one conditional, respectively; the first advocates for parametric analysis and transformation techniques, and the second shows the need for techniques that handle static-control conditionals. The next two columns, in the "Statements" section, show that SCoPs cover a large majority of statements (many statements are enclosed in affine loops). The last two columns, in the "Array References" section, are very promising for dependence analysis: most subscripts are affine except for lucas and mg3d (the rate is over 99% in 7 benchmarks), but approximate array dependence analyses will be required for a good coverage of the 5 others. In accordance with earlier results using Polaris [7], the coverage of regular loop nests is strongly influenced by the quality of loop normalization and induction variable detection. Our tool also gathers detailed statistics about the number of parameters and statements per SCoP, and about statement depth (within a SCoP, not counting non-static enclosing loops). Figure 3 shows that almost all SCoPs are smaller than 100 statements, with a few exceptions, and that loop depth is rarely greater than 3. Moreover, deep loops also tend to be very small, except for applu, adm and mg3d, which contain depth-3 loop nests with tens of statements. This means that most polyhedral analyses and transformations will succeed and require reasonable resources. It also gives an estimate of the scalability required for worst-case exponential algorithms, like the code generation phase that converts the polyhedral representation back to source code.
Fig. 3. Distribution of statement depths and SCoP size
3 Unified Polyhedral Representation
In this section, we define the principles of polyhedral program transformations. The term polyhedron will be used in a broad sense to denote a convex set of points in a lattice (also called a Z-polyhedron or lattice polyhedron), i.e., a set of points in a vector space bounded by affine inequalities. Let us now introduce the representation of a SCoP and its elementary transformations. A static control part within the syntax tree is a pair consisting of the set of its consecutive statements — in their polyhedral representation — and the vector of global parameters of the SCoP. This vector is constant for the SCoP but statically unknown; yet its value is known at runtime, when entering the SCoP, and the number of global parameters is recorded with it. We will use a few specific linear algebra notations: matrices are always denoted by capital letters, while vectors and functions in vector spaces are not; a prefix of a vector is the vector built from its first components; a unit vector of a reference base of a space has the form (0, …, 0, 1, 0, …, 0); likewise, a unit matrix is filled with zeros except for a single element set to 1. A SCoP may also be decorated with static properties such as array dependences or regions, but this work does not address static analysis.
3.1 Domains, Schedules and Access Functions
The depth of a statement S is the number of nested loops enclosing S in the SCoP. A statement is a quadruple made of its iteration domain, two sets of polyhedral representations of array references (written and read), and the affine schedule of S, which defines the sequential execution ordering of the iterations of S.
Fig. 4. Running example
To represent arbitrary lattice polyhedra, each statement is provided with a number of local parameters that implement integer division and modulo operations via affine projection: e.g., the set of even values of a counter is described by means of an existentially quantified local parameter and a single affine equation. Let us describe these concepts in more detail and give some examples. The iteration domain of a statement is a convex polyhedron defined by a constraint matrix applied to the vector of loop counters, local parameters, global parameters, and the constant 1. Notice that the last matrix column is always multiplied by the constant 1; it corresponds to the homogeneous coordinate encoding of affine inequalities into linear form. The number of constraints in the matrix is not limited. Statements guarded by non-convex conditionals — such as disjunctions of affine conditions — are separated into convex domains in the polyhedral representation. Figure 4 shows an example that illustrates these definitions: the domains of its five statements range from the zero-dimensional vector, for statements enclosed in no loop, to polyhedra bounded by the loop counters and global parameters.
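As a generic illustration of this homogeneous encoding (an invented example, since the paper's running example of Figure 4 is not reproduced here), consider a statement nested in the triangular loop nest for (i = 0; i < n; i++) for (j = 0; j <= i; j++). Its iteration domain and the corresponding constraint system can be written as:

\[
\mathcal{D} = \{\, (i,j) \in \mathbb{Z}^2 \mid 0 \le i \le n-1,\ 0 \le j \le i \,\},
\qquad
\begin{pmatrix}
 1 &  0 & 0 &  0\\
-1 &  0 & 1 & -1\\
 0 &  1 & 0 &  0\\
 1 & -1 & 0 &  0
\end{pmatrix}
\begin{pmatrix} i \\ j \\ n \\ 1 \end{pmatrix}
\ge 0 .
\]

Each row encodes one affine inequality, and the last column multiplies the constant 1, matching the homogeneous encoding described above.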
The two sets of references associated with a statement describe the arrays written by S (left-hand side) and read by S (right-hand side), respectively; each is a set of pairs of an array variable A and an access function mapping iterations of S to locations of A. Each access function is affine and is defined by a matrix applied, as for domains, to the iteration vector, the global parameters, and the constant 1.

The affine schedule of S maps iterations of its domain to multidimensional time-stamps, i.e., logical execution dates [8]. Multidimensional time-stamps are compared through the lexicographic ordering over vectors: an iteration of S is executed before an iteration of another statement if and only if its time-stamp is lexicographically smaller. To facilitate code generation and to schedule iterations and statements independently, we use 2d+1 time dimensions instead of d (the minimum for a sequential schedule), where d is the depth of the statement. This encoding was first proposed by Feautrier [8] and used extensively by Kelly and Pugh [13]: the even dimensions encode the relative ordering of statements at each depth, and the odd dimensions encode the ordering of iterations in the loops at each depth. Eventually, the schedule is defined by a matrix applied to the iteration vector, the global parameters, and the constant 1. Notice that the schedule does not involve local parameters, since lattice polyhedra do not increase the expressivity of sequential schedules. The schedules of the statements in the running example of Figure 4 follow this encoding.
3.2 Invariants
Our representation makes a clear separation between the semantically meaningful transformations expressible on the polyhedral representation and the semantically safe transformations satisfying the statically checkable properties. The goal is of course to widen the range of meaningful transformations without relying on the accuracy of a static analyzer. Although classical transformations are hampered by the lack of information about loop bounds, they may be feasible in a polyhedral representation separating domains from affine schedules and authorizing per-statement operations. To reach this goal and to achieve
a high degree of transformation compositionality, the representation enforces a few invariants on the domains and schedules. There is only one domain invariant: to avoid integer overflows, the coefficients in each row of the domain constraint matrix must be relatively prime. This restriction has no effect on the expressible domains.

The first schedule invariant requires the schedule matrix to fit into a decomposition amenable to composition and code generation: it separates a square iteration-reordering matrix operating on iteration vectors, a parameterized matrix operating on the global parameters, and a statement-scattering vector. Statement scattering may not depend on loop counters or parameters, hence the zeroes in the "even dimensions" of the decomposition; notice that time dimensions are indexed from 0 to twice the statement depth. The schedule matrices of the running example split accordingly.

The second schedule invariant is the sequentiality one: two distinct statement iterations may not have the same time-stamp. Whether the iterations belong to the domains of the two statements does not matter for this invariant: we wish to be able to transform iteration domains without bothering about the sequentiality of the schedule. Because this invariant is hard to enforce directly, we introduce two additional, stronger invariants with no impact on schedule expressivity.

Finally, we add a density invariant to avoid integer overflow and to ease schedule comparison. The "odd dimensions" of the image of a schedule form a sub-space of the multidimensional time, since the iteration-reordering matrix is unimodular, but an additional requirement is needed to enforce that the "even dimensions" satisfy some form of dense encoding:
i.e., for a given prefix, the next dimension of the statement-scattering vectors spans an interval of non-negative integers.
3.3 Constructors
We define some elementary functions on SCoPs, called constructors. Many matrix operations consist in adding or removing a row or column. Given a vector and a matrix M with a compatible number of columns, the AddRow constructor inserts a new row at a given position in M and fills it with the values of the vector, whereas RemRow does the opposite transformation. Analogous constructors exist for columns: one inserts a new column at a given position in M and fills it with a given vector, whereas the other undoes the insertion. AddRow and RemRow are extended to operate on vectors. Displacement of a statement S is also a common operation. It only impacts the statement-scattering vectors of the statements sharing some common property with S: forward or backward movement of S at some depth triggers the same movement on every subsequent statement at that depth. Although rather intuitive, the precise definition in terms of prefixed blocks of statements is rather technical. The displacement constructor takes a statement-scattering prefix P, defining the depth at which statements should be displaced, a statement-scattering prefix Q — itself prefixed by P — marking the initial time-stamp of the statements to be displaced, and a displacement distance, i.e., the value to be added to or subtracted from the component at depth dim(P) of any statement-scattering vector prefixed by P and following Q; it leaves all other statements unchanged.
Constructors make no assumption about representation invariants and may violate them.
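As a rough illustration of the row constructors named above, the following C sketch assumes a dense row-major matrix representation; this data structure is invented for the example and is not the paper's.

    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        int rows, cols;
        long *data;              /* rows * cols entries, row-major */
    } Matrix;

    /* Insert row r (with cols entries) at position pos, shifting later rows down. */
    static void AddRow(Matrix *m, int pos, const long *r) {
        m->data = realloc(m->data, (size_t)(m->rows + 1) * m->cols * sizeof(long));
        memmove(m->data + (size_t)(pos + 1) * m->cols,
                m->data + (size_t)pos * m->cols,
                (size_t)(m->rows - pos) * m->cols * sizeof(long));
        memcpy(m->data + (size_t)pos * m->cols, r, (size_t)m->cols * sizeof(long));
        m->rows++;
    }

    /* Remove the row at position pos, shifting later rows up. */
    static void RemRow(Matrix *m, int pos) {
        memmove(m->data + (size_t)pos * m->cols,
                m->data + (size_t)(pos + 1) * m->cols,
                (size_t)(m->rows - pos - 1) * m->cols * sizeof(long));
        m->rows--;
    }

As the text notes, such constructors are purely mechanical edits of the representation: they may temporarily break the invariants, which are restored by the primitives built on top of them.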
3.4 Primitives
From the earlier constructors, we will now define transformation primitives that enforce the invariants and serve as building blocks for higher-level, semantically sound transformations. Most primitives correspond to simple polyhedral operations, but their formal definition is rather technical and will be described more extensively in a further paper. Figure 5 lists the main primitives affecting the polyhedral representation of a statement; many of these primitives can be extended to blocks of statements sharing a common statement-scattering prefix (like the fusion and split primitives). The parameters of the primitives are a unimodular matrix U, a matrix M implementing the parameterized shift (or translation) of the affine schedule of a statement, the depth of a statement insertion, iteration domain extension or restriction, and a vector implementing an additional domain constraint.
Fig. 5. Main transformation primitives
The last two primitives — fusion and split (or distribution) — show the benefit of designing loop transformations at the abstract semantical level of polyhedra. First of all, loop bounds are not an issue since the code generator will handle any overlapping of iteration domains. Next, these primitives do not directly operate on loops, but consider prefixes P of statement-scattering vectors. As a result, they may virtually be composed with any possible transformation. For the split primitive, a vector prefixes all statements concerned by the split, and a parameter indicates the position where statement delaying should occur. For the fusion primitive, one vector prefixes all statements that should be interleaved with the statements prefixed by a second vector. Eventually, notice that fusion followed by split (with the appropriate parameter) leaves the SCoP unchanged. This table is not complete: privatization, array contraction and copy propagation require operations on access functions.
3.5 Transformation Composition
We will illustrate the composition of primitives on a typical example: two-dimensional tiling. To define such a composed transformation, we first build
Fig. 6. Composition of transformation primitives
the strip-mining and interchange transformations from the primitives, as shown in Figure 6. Interchange swaps the roles of two loop counters in the schedule of S; it is a per-statement extension of the classical interchange. Strip-mining — for a known integer block size — prepends a new iterator to virtually unroll the schedule and iteration domain of S at a given depth. Finally, the tiling transformation combines the two to tile the loops at two given depths with rectangular blocks. This tiling transformation is a first step towards a higher-level combined transformation, integrating strip-mining and interchange with privatization, array copy propagation and hoisting for dependence removal. The only remaining parameters would be the statements and loops of interest and the tile size.
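At the source level, the effect of such a composed strip-mine-and-interchange (tiling) transformation on a simple loop nest can be sketched as follows; this is an invented C fragment with an assumed tile size B, whereas the actual transformation operates on the polyhedral representation rather than on syntax.

    #define B 64   /* assumed tile size */

    /* Original loop nest. */
    void original(int N, double A[N][N]) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                A[i][j] = A[i][j] + 1.0;
    }

    /* After strip-mining both loops by B and interchanging the middle loops
       (2D tiling): the outer loops enumerate tiles, the inner loops scan one tile. */
    void tiled(int N, double A[N][N]) {
        for (int ii = 0; ii < N; ii += B)
            for (int jj = 0; jj < N; jj += B)
                for (int i = ii; i < ii + B && i < N; i++)
                    for (int j = jj; j < jj + B && j < N; j++)
                        A[i][j] = A[i][j] + 1.0;
    }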
4 Code Generation
After polyhedral transformations, code generation is the last step to the final program. It is often ignored in spite of its impact on the quality of the target code. In particular, we must ensure that poor control management does not spoil performance, for instance by producing redundant guards or complex loop bounds. Ancourt and Irigoin [1] proposed the first solution, based on Fourier-Motzkin pair-wise elimination. The scope of their method was limited to a single polyhedron with unimodular transformation (scheduling) matrices. The basic idea was to apply the transformation function as a change of basis of the loop indices, then, for each new dimension, to project the polyhedron on the corresponding axis and thus find the loop bounds. The main drawback of this method was the large amount of redundant control. Most further work on code generation tried to extend this first technique in order to deal with non-unit strides [15, 21] or with non-invertible transformation matrices [9]. A few alternatives to Fourier-Motzkin were discussed, but without addressing the challenging problem of scanning more than one polyhedron at once. This problem was first solved and implemented in Omega by generating a naive perfectly nested code and then (partially) eliminating redundant
guards [14]. Another way was to generate the code for each polyhedron separately and then to merge the results [9, 5]; this generates a lot of redundant control, even when there are no redundancies in the separate codes. Quilleré et al. proposed to recursively separate unions of polyhedra into subsets of disjoint polyhedra and to generate the corresponding nests from the outermost to the innermost levels [19]. This approach currently provides the best solutions, since it totally eliminates redundant control. However, it suffers from some limitations, e.g. high complexity, code generation with unit strides only, and a rigid partial order on the polyhedra. Improvements are presented in the next section. This section presents the code generation problem, its resolution with a modern polyhedral-scanning technique, and its implementation.
4.1 The Code Generation Problem
In the polyhedral model, code generation amounts to a polyhedron scanning problem: finding a set of nested loops visiting each integral point, following a given scanning order. The quality of the generated code can be assessed using two criteria: the most important is the amount of duplicated control in the final code; the second is the code size, since a large code may pollute the instruction cache. We chose the recent Quilleré et al. method [19] with some additional improvements, which guarantees code generation without any duplicated control. The outline of the modified algorithm is presented in Section 4.2 and some useful optimizations are discussed in Section 4.3.
4.2 Outline of the Code Generation Algorithm
Our code generation process is divided into two main steps. First, we take the scheduling functions into account by modifying each polyhedron's lexicographic order. Next, we use an improved Quilleré et al. algorithm to perform the actual code generation. When no schedule is specified, the scanning order is the plain lexicographic order. Applying a new scanning order to a polyhedron amounts to adding new dimensions in leading positions. Thus, from each polyhedron and scheduling function we build another polyhedron whose lexicographic order matches the schedule. The algorithm is a recursive generation of the scanning code, maintaining a list of polyhedra from the outermost to the innermost loops:
1. intersect each polyhedron of the list with the context of the current loop (to restrict the scanning code to this loop);
2. project the resulting polyhedra onto the outermost dimensions, then separate the projections into disjoint polyhedra (see the sketch after this list);
3. sort the resulting polyhedra such that a polyhedron precedes another one if its scanning code has to precede the other's to respect the lexicographic order;
4. merge successive polyhedra having at least another loop level to generate a new list, and recursively generate the loops that scan this list;
5. compute the strides that the current dimension imposes on the outer dimensions.
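To make the separation step concrete, here is a small illustration (my own sketch, not CLooG output) of the kind of code this step aims to produce when scanning the union of two overlapping one-dimensional domains: each disjoint chunk gets its own loop, so no guards remain inside any loop body.

```c
#include <stdio.h>

/* Statement bodies are stubbed out so the sketch is runnable. */
static void S1(int i) { printf("S1(%d)\n", i); }
static void S2(int i) { printf("S2(%d)\n", i); }

/* Domains: S1 on 1 <= i <= N, S2 on 5 <= i <= N+5 (assume N >= 5).
 * Separation yields three disjoint chunks. */
void scan_union(int N) {
    for (int i = 1; i <= 4; i++)          /* S1 only           */
        S1(i);
    for (int i = 5; i <= N; i++) {        /* S1 and S2 overlap */
        S1(i);
        S2(i);
    }
    for (int i = N + 1; i <= N + 5; i++)  /* S2 only           */
        S2(i);
}

int main(void) { scan_union(6); return 0; }
```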
This algorithm is slightly different from the one presented by Quilleré et al. in [19]; our two main contributions are the support for non-unit strides (Step 5) and the exploitation of degrees of freedom (i.e., when some operations do not have a schedule) to produce a more effective code (Step 4). Let us describe this algorithm with a non-trivial example: the two polyhedral domains presented in Figure 7(a). Both statements have the same iteration vector, a local parameter vector and a global parameter vector. We first compute the intersections with the context, i.e., the known constraints on the global parameters. We project the polyhedra onto the first dimension, then separate them into disjoint polyhedra: we compute the domains associated with the first statement alone, with both statements, and with the second statement alone (as shown in Figure 7(b), this last domain is empty). We notice that a local parameter implies a non-unit stride; we can determine this stride and update the lower bound. We finally generate the scanning code for this first dimension. We now recurse on the next dimension, repeating the process for each polyhedron list (in this example, there are now two lists: one inside each generated outer loop). We intersect each polyhedron with the new context, now the outer loop iteration domains; then we project the resulting polyhedra onto the outer dimensions, and finally we separate these projections into disjoint polyhedra. This last processing is trivial for the second list but yields two domains for the first list, as shown in Figure 7(c). Finally, we generate the code associated with the new dimension.
Fig. 7. Step-by-step code generation example
4.3 Complexity Issues
The main computing kernel in the code generation process is the separation into disjoint polyhedra, with a worst-case complexity measured in polyhedral operations (which are themselves exponential). In addition, the memory usage is very high, since we have to allocate memory for each separated domain. For both issues, we propose a partial solution. First of all, we use pattern matching to reduce the number of polyhedral computations: at a given depth, the domains are often the same (this is a property of the input codes) or disjoint (this is a property of the statement-scattering vectors of the scheduling matrices). Second, to avoid memory problems, we detect high memory consumption and switch to a more naive algorithm when necessary, leading to less efficient code but using far less memory. Our implementation of this algorithm is called CLooG (Chunky Loop Generator) and was originally designed for a locality-improvement algorithm and software (Chunky) [3]. CLooG could regenerate code for all 12 benchmarks in Figure 2. Experiments were conducted on a 1 GHz Pentium III machine with 512 MB of memory; generation times range from 1 to 127 seconds (34 seconds on average). It produced optimal control for all but three SCoPs, in lucas, apsi and adm; the first SCoP has more than 1700 statements and could be optimally generated on a 1 GB Itanium machine in 22 minutes; the two other SCoPs have less than 50 statements but 16 parameters; since the current version of CLooG does not analyse the linear relations between variables, the variability of parameter interactions leads to an exponential growth of the generated code. Complexity improvements and studies of the generated code quality are under investigation.
5 WRaP-IT: An Open64 Plug-In for Polyhedral Transformations
Our main goal is to streamline the extraction of static control parts and the code generation, to ease the integration of polyhedral techniques into optimizing and parallelizing compilers. This interface tool is built on Open64/ORC. It converts the WHIRL — the compiler's hierarchical intermediate representation — into an augmented polyhedral representation, maintaining a correspondence between the matrices in SCoP descriptions and the symbol table and syntax tree. This representation is called the WRaP: WHIRL Represented as Polyhedra. It is the basis for any polyhedral analysis or transformation. The second part of the tool is a modified version of CLooG, which regenerates a WHIRL syntax tree from the WRaP. The whole interface tool is called WRaP-IT; it may be used in a normal compilation or source-to-source framework, see [2] for details. Although WRaP-IT is still a prototype, it proved to be very robust; the whole source-to-polyhedra-to-source transformation was successfully applied to all 12 benchmarks in Figure 2. See http://www-rocq.inria.fr/a3/wrap-it for further information.
6 Conclusion
We described a framework to streamline the design of polyhedral transformations, based on a unified polyhedral representation and a set of transformation primitives. It decouples transformations from static analyses. It is intended as a formal tool for semi-automatic optimization, where program transformations — with the associated static analyses for semantics preservation — are separated from the optimization or parallelization algorithm which drives the transformations and selects their parameters. We also described WRaP-IT, a robust tool to convert back and forth between Fortran or C and the polyhedral representation. This tool is implemented in Open64/ORC. The complexity of the code generation phase, when converting back to source code, has long been a deterrent for using polyhedral representations in optimizing or parallelizing compilers. However, our code generator (CLooG) can handle loops with more than 1700 statements. Moreover, the whole source-to-polyhedra-to-source transformation was successfully applied to the 12 benchmarks. This is a strong point in favor of polyhedral techniques, even in the context of real codes. Current and future work includes the design and implementation of a polyhedral transformation library, an iterative compilation scheme with a machine-learning algorithm and/or an empirical optimization methodology, and the optimization of the code generator to keep producing optimal code on larger codes.
References
[1] C. Ancourt and F. Irigoin. Scanning polyhedra with DO loops. In 3rd ACM SIGPLAN Symp. on Principles and Practice of Parallel Programming, pages 39–50, June 1991.
[2] C. Bastoul, A. Cohen, S. Girbal, S. Sharma, and O. Temam. Putting polyhedral loop transformations to work. Research report 4902, INRIA Rocquencourt, France, July 2003.
[3] C. Bastoul and P. Feautrier. Improving data locality by chunking. In CC'12 Intl. Conference on Compiler Construction, LNCS 2622, pages 320–335, Warsaw, Poland, April 2003.
[4] W. Blume, R. Eigenmann, K. Faigin, J. Grout, J. Hoeflinger, D. Padua, P. Petersen, W. Pottenger, L. Rauchwerger, P. Tu, and S. Weatherford. Parallel programming with Polaris. IEEE Computer, 29(12):78–82, December 1996.
[5] P. Boulet, A. Darte, G.-A. Silber, and F. Vivien. Loop parallelization algorithms: From parallelism extraction to code generation. Parallel Computing, 24(3):421–444, 1998.
[6] Keith D. Cooper, Mary W. Hall, Robert T. Hood, Ken Kennedy, Kathryn S. McKinley, John M. Mellor-Crummey, Linda Torczon, and Scott K. Warren. The ParaScope parallel programming environment. Proceedings of the IEEE, 81(2):244–263, 1993.
[7] R. Eigenmann, J. Hoeflinger, and D. Padua. On the automatic parallelization of the Perfect benchmarks. IEEE Trans. on Parallel and Distributed Systems, 9(1):5–23, January 1998.
[8] P. Feautrier. Some efficient solutions to the affine scheduling problem, Part II: multidimensional time. Int. Journal of Parallel Programming, 21(6):389–420, December 1992. See also Part I, one-dimensional time, 21(5):315–348.
[9] M. Griebl, C. Lengauer, and S. Wetzel. Code generation in the polytope model. In PACT'98 Intl. Conference on Parallel Architectures and Compilation Techniques, pages 106–111, 1998.
[10] A.-C. Guillou, F. Quilleré, P. Quinton, S. Rajopadhye, and T. Risset. Hardware design methodology with the Alpha language. In FDL'01, Lyon, France, September 2001.
[11] M. Hall et al. Maximizing multiprocessor performance with the SUIF compiler. IEEE Computer, 29(12):84–89, December 1996.
[12] F. Irigoin, P. Jouvelot, and R. Triolet. Semantical interprocedural parallelization: An overview of the PIPS project. In ACM Int. Conf. on Supercomputing (ICS'2), Cologne, Germany, June 1991.
[13] W. Kelly. Optimization within a unified transformation framework. Technical Report CS-TR-3725, University of Maryland, 1996.
[14] W. Kelly, W. Pugh, and E. Rosser. Code generation for multiple mappings. In Frontiers'95 Symp. on the Frontiers of Massively Parallel Computation, McLean, 1995.
[15] W. Li and K. Pingali. A singular loop transformation framework based on nonsingular matrices. Intl. J. of Parallel Programming, 22(2):183–205, April 1994.
[16] M. O'Boyle. MARS: a distributed memory approach to shared memory compilation. In Proc. Language, Compilers and Runtime Systems for Scalable Computing, Pittsburgh, May 1998. Springer-Verlag.
[17] M. O'Boyle, P. Knijnenburg, and G. Fursin. Feedback assisted iterative compilation. In Parallel Architectures and Compilation Techniques (PACT'01). IEEE Computer Society Press, October 2001.
[18] Open Research Compiler. http://ipf-orc.sourceforge.net.
[19] F. Quilleré, S. Rajopadhye, and D. Wilde. Generation of efficient nested loops from polyhedra. Intl. J. of Parallel Programming, 28(5):469–498, October 2000.
[20] R. Schreiber, S. Aditya, B. Rau, V. Kathail, S. Mahlke, S. Abraham, and G. Snider. High-level synthesis of nonprogrammable hardware accelerators. Technical report, Hewlett-Packard, May 2000.
[21] J. Xue. Automating non-unimodular loop transformations for massive parallelism. Parallel Computing, 20(5):711–728, 1994.
[22] K. Yotov, X. Li, G. Ren, M. Cibulskis, G. DeJong, M. Garzaran, D. Padua, K. Pingali, P. Stodghill, and P. Wu. A comparison of empirical and model-driven optimization. In ACM Symp. on Programming Language Design and Implementation (PLDI'03), San Diego, California, June 2003.
Index-Association Based Dependence Analysis and its Application in Automatic Parallelization
Yonghong Song and Xiangyun Kong
Sun Microsystems, Inc.
{yonghong.song,xiangyun.kong}@sun.com
Abstract. In this paper, we present a technique to perform dependence analysis on more complex array subscripts than the linear form of the enclosing loop indices. For such complex array subscripts, we decouple the original iteration space and the dependence test iteration space and link them through index-association functions. Dependence analysis is performed in the dependence test iteration space to determine whether the dependence exists in the original iteration space. The dependence distance in the original iteration space is determined by the distance in the dependence test iteration space and the property of index-association functions. For certain non-linear expressions, we show how to equivalently transform them to a set of linear expressions. The latter can be used in traditional dependence analysis techniques targeting subscripts which are linear forms of enclosing loop indices. We also show how our advanced dependence analysis technique can help parallelize some otherwise hard-to-parallelize loops.
1 Introduction
Multiprocessor and multi-core microprocessor machines demand good automatic parallelization to utilize precious machine resources. Accurate dependence analysis is essential for effective automatic parallelization. Traditional dependence analysis only considers array subscripts which are linear functions of the enclosing loop indices [6, 8, 13]. Various techniques, from a simple one like the GCD test to a complex one like the Fourier-Motzkin test, are applied to determine whether two array references could access the same memory location. For more complex subscripts, these techniques often consider them too complex and give up, assuming that a dependence exists. Figure 1(a) shows a simple example that these traditional techniques are not able to parallelize because they make the worst-case assumption. (In this paper, programs are written in Fortran format.) This paper tries to overcome this conservativeness. We apply a decoupled approach where a new dependence test iteration space is constructed for dependence testing purposes. The original iteration space is linked to the dependence test iteration space by the mapping through index-association functions. We call our approach index-association based dependence analysis. Dependence analysis
Fig. 1. Example 1
is performed in the dependence test iteration space. Whether the dependence exists in the original iteration space is determined by whether the dependence exists in the dependence test iteration space. If the dependence exists, the dependence distance in the original iteration space is determined by the dependence distance in the dependence test iteration space and the properties of the index-association functions. We also present a general approach to equivalently transform a non-linear expression, involving plus, minus, multiplication and division, into a set of linear expressions. The latter can be used in dependence testing with traditional techniques. When performing traditional dependence analysis and analyzing the index-association functions, our dependence analysis framework is also able to generate certain conditions under which no cross-iteration dependence exists in the original iteration space. Such a condition can often be used as a run-time test for parallelization vs. serialization of the target loop. With the combination of index-association based dependence analysis and such two-version code parallelization, the code in Figure 1(a) can be parallelized as the code in Figure 1(b). We have implemented the index-association based dependence analysis in our production compiler. Before this implementation, our compiler already implemented several dependence tests targeting subscripts which are linear functions of enclosing loop indices, which already enabled us to parallelize many loops. With this new implementation, our compiler is able to parallelize some loops which otherwise could not be parallelized. We select two well-known benchmarks from the SPEC CPU2000 suite; with our technique, several important loops inside these two benchmarks can be parallelized successfully. In the rest of the paper, we describe the previous work in Section 2. We present a program model in Section 3. We then describe our index-association based dependence analysis in Section 4. We present how to transform a non-linear expression to a set of linear expressions in Section 5. We show how our advanced dependence analysis helps automatic parallelization in Section 6. We present experimental results in Section 7. Finally, a conclusion is drawn in Section 8.
Fig. 2. Program Model
2 Previous Work
Dependence analysis has been studied extensively. Maydan et al. use a series of special-case dependence tests with the hope that they can catch the majority of cases in practice [6]. They use an expensive integer programming method as the backup in case all these special tests fail to determine whether the dependence exists or not. Goff et al. present practical dependence testing by classifying subscripts into different categories, where different dependence tests are used for different categories [3]. Pugh presents an integer programming method for exact dependence analysis with worst-case exponential complexity in terms of loop levels and the number of array dimensions [8]. Feautrier analyzes dependences using parametric integer programming [2]. His technique takes statement context into consideration so that the dependence test result is more accurate. All the above techniques focus on array subscripts which are linear functions of enclosing loop indices. Dependence analysis with array subscripts which are not linear functions of enclosing loop indices has also been studied. Blume and Eigenmann propose the range test, where the range of symbolic expressions is evaluated against the loop index value [1]. The loop can be parallelized if the range of elements accessed in one iteration does not overlap with the range of the elements accessed in other iterations. Haghighat and Polychronopoulos handle non-linear subscripts by using their mathematical properties [4]. They use symbolic analysis and constraint propagation to help achieve a mathematically easy-to-compare form for the subscripts. Hoeflinger and Paek present an access region dependence test [5]. They perform array region analysis and determine dependence based on whether array regions overlap with each other or not. All these works are complementary to ours and can be combined with our approach, as our dependence test iteration space can be extended to include more complex subscripts.
3 Program Model
Figure 2 illustrates our program model. Our target loop nest is a perfect nest of arbitrary depth. The loop lower bounds and upper bounds
are linear functions of the enclosing loop indices, and the loop steps are loop-nest invariants. At the beginning of the innermost loop body, index-association functions map the set of loop index values to a new set of values, and in the rest of the loop body only linear combinations of these new values are used in array subscripts. We call the iteration space defined by all possible values of the loop indices the original iteration space, and the iteration space defined by all possible values of the mapped indices the dependence test iteration space. We call such a mapping from the original iteration space to the dependence test iteration space index association, and the mapping functions index-association functions. In modern compilers, symbolic analysis is often applied before data dependence analysis to compute these functions. Traditional dependence analysis techniques are able to handle index-association functions that are linear: for such cases, the function can be forward-substituted into the subscript and traditional techniques apply. However, if any index-association function is non-linear (e.g., the tiny example in Figure 1), traditional techniques often consider the subscript too complex and conservatively assume the worst-case dependence. In the next section, we present the details of our index-association based dependence analysis, which tries to overcome such conservativeness.
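For illustration only, a minimal C sketch of this program model, assuming a single loop, a single index-association function J = I/2 and one array assignment (none of these names come from the paper):

```c
/* Hypothetical sketch of the program model: the index-association
 * function j = f(i) = i / 2 is computed at the top of the loop body,
 * and only linear combinations of j appear in the array subscripts. */
void kernel(double *a, const double *b, int n) {
    for (int i = 0; i < n; i++) {   /* original iteration space (0, n-1, 1) */
        int j = i / 2;              /* index-association function           */
        a[2 * j + 1] = b[j];        /* subscripts are linear in j           */
    }
}
```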
4 Dependence Analysis with Index Association
The index-association based dependence analysis can be partitioned into three steps. First, the dependence test iteration space is constructed. Second, dependence analysis is conducted in the dependence test iteration space. Finally, the dependence relation in the original iteration space is determined by the result in the dependence test iteration space and the properties of the index-association functions. We elaborate the details below.
4.1 Constructing Dependence Test Iteration Space
The original iteration space can be viewed as a multi-dimensional space: for each dimension we have a constraint given by a lower bound, an upper bound and a step value. To construct the dependence test iteration space, the compiler needs to analyze the index-association functions. Currently, our compiler requires the index-association functions to have the following two properties. First, each index-association function takes only one original loop index variable as its argument. (Note that different functions can take the same loop index variable as the argument.) For example, our compiler can handle index-association functions in which each mapped index depends on a single outer loop index, while it is not able to handle a function whose arguments are two different outer loop index variables.
It is possible to relax such a requirement for index-association functions in order to cover more cases. For certain cases, we can transform the index-association function to make it conform to the requirement: a function of two index variables can sometimes be split into two single-argument functions which, once propagated into the subscripts, satisfy the requirement. For more general cases, however, it is much more difficult to compute the dependence test iteration space; we leave such extensions for future work. Second, the operators in an index-association function must be plus, minus, multiplication, division or modulo, and a function can be composed from the permitted operators recursively. Given the original iteration space, our compiler tries to construct the corresponding dependence test iteration space for each mapped index in the form of a lower bound, an upper bound and a step. The bounds are obtained by applying the function to the original loop bounds, and the step represents the difference between two values mapped from two adjacent original index values; note that the step could be a sequence of values, including 0. Suppose that in Figure 2 there exists a dependence between two iterations of the original loop nest; we say the corresponding dependence distance is the difference of the two iteration vectors in the original iteration space, and the dependence distance in the dependence test iteration space is the difference of the corresponding mapped values. Table 1 illustrates our basic iteration space mapping from the original iteration space to the dependence test iteration space. For division, two different steps may result. For modulo, because of the wrap-around nature of the function, some negative steps may appear, which are represented by _others_ in the table. The dependence test iteration space is computed by recursively computing the iteration space for the sub-expressions of the index-association function, from the innermost sub-expression outwards. Here, we want to specially mention the following two scenarios:
Fig. 3. Example 2
Because a modulo operator may potentially generate many negative step values, a condition on the relation between its operands (see Table 1) is often generated, in order to limit the number of negative steps. It is possible to have different mapped indices associated with the same original loop index. The coupling relation between them is lost in the dependence test iteration space, which causes difficulties when the dependence distance in the dependence test iteration space is mapped back to the original iteration space. For such cases, if the functions involved are all linear forms, we perform forward substitution for these functions and keep a single index-association function; otherwise, we can still perform dependence analysis in the dependence test iteration space, but we are not able to compute the dependence distance in the original iteration space precisely. Figure 3 shows three examples. For Figure 3(a), the original iteration space is (1, N, 1), and the dependence test iteration space has bounds (0, N/2) with a variant step of 0 or 1. For Figures 3(b) and (c), the original iteration space is (1, N, 2) and the dependence test iteration space is (0, N/2, 1).
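The division case of this mapping can be sketched as follows; this is my own reconstruction of that single row, consistent with the Figure 3 examples above, and not a transcription of Table 1:

```c
/* For J = I / d with the original space (L, U, S), L >= 0, S > 0, d > 0,
 * the dependence test space has bounds L/d .. U/d and a step that varies
 * between floor(S/d) and ceil(S/d).  E.g., (1, N, 1) with d = 2 gives
 * bounds (0, N/2) and steps {0, 1}; (1, N, 2) with d = 2 gives step 1. */
typedef struct { int lo, hi, step_min, step_max; } dt_space;

dt_space dt_space_div(int L, int U, int S, int d) {
    dt_space j;
    j.lo = L / d;
    j.hi = U / d;
    j.step_min = S / d;                 /* may be 0 when S < d */
    j.step_max = S / d + (S % d != 0);  /* ceil(S/d)           */
    return j;
}
```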
4.2 Dependence Analysis in the Dependence Test Iteration Space
After the dependence test iteration space is constructed, dependence analysis can be done in the dependence test iteration space, where traditional techniques, which target subscripts that are linear forms of the enclosing loop indices, are applied. However, note that the dependence test iteration space could have multiple step values in a certain dimension. For such cases, traditional techniques have to assume a step value which is the greatest common divisor of all possible non-zero step values. If the step value could be 0, we also assume a step value of 0 during dependence analysis. With such assumptions, we may get conservative results. In Section 5, we describe a technique which can potentially give us better results for such cases. Given a pair of references, there are three possible results from the dependence test in the dependence test iteration space. If there exists no dependence in the dependence test iteration space, then there will be no dependence in the original iteration space. If there exists a dependence with a certain distance in the dependence test iteration space, then we compute the dependence distance in the original space based on that distance and the properties of the index-association functions. This will be further explored in the next subsection.
If there exists a dependence with an unknown distance in the dependence test iteration space, we simply regard that there exists an unknown-distance dependence in the original iteration space. In Figure 3(a), because the step can have a value of 0, the dependence distance from A(J) to itself could be 0 in the dependence test iteration space. In Figures 3(b) and (c), however, there exists no dependence from A(J) to itself in the dependence test iteration space. In Figure 3(b), there exists a dependence from A(J + 2) to A(J) with distance 2 in the dependence test iteration space. In Figure 3(c), because the dependence test iteration space for J is (0, N/2, 1), we can easily see that there exists no dependence between A(J) and A(J + 1 + N/2) in the dependence test iteration space.
4.3 Computing Dependence Distance in Original Iteration Space
Given a dependence distance in the dependence test iteration space, we need to analyze the properties of the index-association functions in order to get the proper dependence distance in the original iteration space. Table 2 illustrates how we compute the dependence distance in the original iteration space based on the index-association functions, where "org expr" and "org dist" represent the original expression and its associated distance, and "new expr" and "new dist" represent the sub-expression in the original expression and its associated distance. The dependence distance in the original iteration space is computed by recursively computing the distance for the sub-expressions of the index-association function. In Table 2, we want to particularly mention the dependence distance calculation for multiplication and division. Assume that two iterations have a dependence. For multiplication by a constant, the original distance is the test-space distance divided by that constant, provided the constant divides it evenly; otherwise, there will be no dependence between the two iterations. For division, we can only bound the range of the original distance; through mathematical manipulation, we can find this range for general cases, as illustrated in Table 2. For certain cases, however, we can get a more precise result, for example when the remainder involved is always equal to 0. In Figure 3(a), there exists a dependence from A(J) to itself with a distance 0 in the dependence test iteration space. Because of the index-association
Fig. 4. Top algorithm for index-association based dependence analysis
Fig. 5. Example 3
function DIV(I, 2), it is easy to see that the corresponding distance in the original iteration space is 0 or 1. (The -1 is an illegal distance and is ignored.) In Figure 3(b), there exists a dependence from A(J + 2) to A(J) with a distance 2 in the dependence test iteration space. Because of index-association function DIV(I, 2), the corresponding distance in the original iteration space would be 3 or 4 or 5.
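The division case can be captured by a small helper; again this is my own derivation, consistent with the two examples just given rather than a copy of Table 2:

```c
/* If J = I / d (d > 0) and a dependence has distance delta in the
 * dependence test iteration space, the distance in the original space
 * can only be bounded:  delta*d - (d-1) <= dist <= delta*d + (d-1).
 * Negative (illegal) distances are discarded by the caller.
 *   delta = 0, d = 2  ->  [-1, 1], i.e. legal distances {0, 1}
 *   delta = 2, d = 2  ->  [3, 5],  i.e. distances {3, 4, 5}           */
typedef struct { int lo, hi; } dist_range;

dist_range original_distance_div(int delta, int d) {
    dist_range r = { delta * d - (d - 1), delta * d + (d - 1) };
    return r;
}
```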
4.4 Overall Structure
Figure 4 shows our overall algorithm for index-association based dependence analysis. The first step of our index-association based dependence analysis is to construct the dependence test iteration space. If the dependence test space cannot be constructed due to complex index-association functions, we have to assume a worst-case dependence test iteration space, i.e., for each dimension the bounds are unknown and the step could be any integer value. As stated previously, if there exist multiple steps for a certain dimension in the dependence test iteration space, dependence analysis must assume a conservative step, often the greatest common divisor of all possible steps, in order to compute a correct dependence relation. The resultant dependence relation, however, might be conservative. For example, for the loop in Figure 5, the steps for the J values can be either 3 or 4, so our index-association based approach has to take the conservative step of 1 (the greatest common divisor of 3 and 4) in the dependence test iteration space. This forces the assumption that array references A(J + 9) and A(J) have cross-iteration dependences.
Hence, the original loop I cannot be parallelized. In the next section, we present a technique to handle certain index-association functions with division, which can be equivalently transformed to a set of linear expressions. The latter can be used to compute the dependence relation, including dependence distances, more precisely than with traditional techniques.
5 Accurate Dependence Analysis with Division
The basic idea here is to replace the non-linear expression with a set of linear expressions and then use these linear expressions during dependence testing with traditional techniques. Specifically, we want to find a set of linear expressions which are equivalent to the original expression J = f(I), where the index I has the iteration space (L, U, S) and the function f contains operations such as plus, minus, multiplication and division. Without loss of generality, we assume S > 0. Let T be the loop trip count for loop I. Let i_1, ..., i_T represent the loop index I values, from the smallest one to the largest one, and let j_1, ..., j_T be the corresponding J index values. First, let us take the loop in Figure 5 as an example. We want to express J = 5 * I/4 as a set of linear expressions. For the I value sequence (1, 4, 7, 10, 13, 16, 19, 22, ..., 97, 100), the corresponding J value sequence is (1, 5, 8, 12, 16, 20, 23, 27, ..., 121, 125). Clearly, the J value sequence is not a linear sequence, because the differences between adjacent values vary. However, note that the difference between every pair of J values four positions apart is a constant 15. Therefore, the original J value sequence can be represented as 4 linear sequences, each with a step of 15 and initial values 1, 5, 8 and 12, respectively. To generalize the above observation, for a sequence of J values we want to find the number of linear expressions needed to represent it and the step value for each individual linear expression. The differences between consecutive J values in the sequence are generally not constant;
with the semantics of f, finding that number is equivalent to finding the smallest period n_c such that j_{k+n_c} - j_k is the same constant for all k; when such an n_c holds, that constant is the common step of the resulting linear sequences.
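A small numerical sketch (mine, not the authors' algorithm) that finds this period and the common step for the running example J = 5 * I/4 with I in (1, 100, 3):

```c
#include <stdio.h>

int main(void) {
    enum { L = 1, U = 100, S = 3 };
    int j[64], trip = 0;
    for (int i = L; i <= U; i += S)
        j[trip++] = 5 * i / 4;                 /* the J value sequence  */

    for (int nc = 1; nc < trip; nc++) {        /* candidate periods     */
        int step = j[nc] - j[0], ok = 1;
        for (int k = 0; k + nc < trip; k++)
            if (j[k + nc] - j[k] != step) { ok = 0; break; }
        if (ok) {                              /* smallest period found */
            printf("n_c = %d, step = %d, initial values:", nc, step);
            for (int k = 0; k < nc; k++)
                printf(" %d", j[k]);
            printf("\n");                      /* prints: n_c = 4, step = 15,   */
            return 0;                          /* initial values 1 5 8 12       */
        }
    }
    return 1;
}
```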
Different index-association functions may require different amounts of work to compute such a period n_c. Conservative methods can also be applied if the compiler is not able to do sophisticated analysis and manipulation. The compiler has to make the
worst assumption if it cannot find a compile-time-known constant n_c, e.g., by falling back on the dependence analysis technique in Section 4. Now suppose n_c is available; for each linear expression, we can easily compute the corresponding step as the constant difference between J values that are n_c positions apart.
In this paper, we do not try to construct the exact trip count for the different linear expressions; rather, we conservatively assume for each of them the trip count of the linear expression with the initial value j_1, which also has the maximum trip count over all linear expressions. With n_c and the step available, f(I) can be expressed as
a linear expression in a fresh integer variable x, ranging over the assumed trip count, scaled by the step and offset by an initial value c taken from the set of discrete values {j_1, ..., j_{n_c}}. Since the set of linear expressions is equivalent to the original non-linear expression, whether a dependence exists with the original non-linear expression can be determined by whether a dependence exists with the transformed set of linear expressions. For any dependence distance value (with respect to x) computed with the transformed linear expressions, the dependence distance in the original I iteration space can be estimated, perhaps conservatively, from that distance and the difference between the corresponding initial values. As an example, consider computing n_c and the step for the expression J = C * I/D. If C * S is divisible by D, every difference is the constant C * S/D and a single linear expression suffices; otherwise, to make the differences constant we can let n_c = D/g, where g represents the greatest common divisor of C * S and D. Now, we show how our technique can determine whether a dependence exists between A(J + 9) and A(J) in Example 3 (Figure 5), i.e., whether the equation

J1 + 9 = J2, with J1 and J2 two instances of J, (4)
has a solution. With our technique, the non-linear expression J = 5 * I/4, where loop I's iteration space is (1, 100, 3), can be represented equivalently by

J = 15 * x + c, x = 0, 1, 2, ..., c ∈ {1, 5, 8, 12}. (5)
With the linear expression (5), equation (4) is equivalent to

15 * x1 + c1 + 9 = 15 * x2 + c2, (6)
where x1 and c1 are used for J1, and x2 and c2 for J2. To consider whether equation (6) has a solution or not, we rewrite it as 15 * (x2 - x1) = 9 + (c1 - c2), with c1, c2 ∈ {1, 5, 8, 12}.
All possible values of the right-hand side are not divisible by 15, so there exists no solution for (4) and no dependence between A(J+9) and A(J). Therefore, the loop I in Figure 5 can be parallelized successfully. Our index-association based dependence distance computation can help both general loop transformations and automatic parallelization, because it tries to provide a more accurate dependence test result. In the next section, we illustrate in particular how our technique helps automatic parallelization, i.e., in determining whether a certain loop level is a DOALL loop or not, and under what condition it is a DOALL loop. We do not explore how our technique helps general loop transformations in this paper.
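The conclusion is easy to check by brute force; the following throwaway sketch (mine, not the paper's) enumerates the J values of Example 3 and searches for a pair satisfying equation (4):

```c
#include <stdio.h>

int main(void) {
    int j[64], t = 0;
    for (int i = 1; i <= 100; i += 3)          /* I space (1, 100, 3) */
        j[t++] = 5 * i / 4;                    /* J = 5 * I / 4       */

    int found = 0;
    for (int a = 0; a < t; a++)                /* any J1 + 9 == J2 ?  */
        for (int b = 0; b < t; b++)
            if (j[a] + 9 == j[b])
                found = 1;

    printf(found ? "dependence possible\n"
                 : "no dependence between A(J+9) and A(J)\n");
    return 0;
}
```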
6 Automatic Parallelization with Index Association
For automatic parallelization, our index-association based dependence analysis can help determine whether a loop which conforms to our program model in Figure 2, with some non-linear index-association functions, is a DOALL loop or not. For those non-DOALL loops, previous work like [7] generates run-time conditionals, under which the loop will be a DOALL loop, to guard the parallelized code. Our compiler also has the ability to generate proper conditions under which a certain loop is a DOALL loop, such as the example in Figure 1. From Table 1, if the index-association function contains division and modulo operators, multiple step values may be generated in the dependence test iteration space, which makes dependence analysis conservative. To get more precise dependence analysis results, conditionals are often generated so that we can have fewer step values, often just one, in the dependence test iteration space for one index-association function. By combining index-association based dependence analysis and such two-version code parallelization, our compiler is able to parallelize some otherwise hard-to-parallelize loops. For example, based on the dependence analysis in Section 4, our compiler is able to determine that the loops in Figures 3(a) and (b) are not DOALL loops and that the loop in Figure 3(c) is a DOALL loop. We will now work through a more complex example to show how we combine index-association based dependence analysis and two-version code parallelization to successfully parallelize one outer loop. Figure 6(a) shows the original code, where several of the quantities involved are compile-time-known constants and another is a loop nest invariant. We also suppose that the right-hand sides of the assignments do not contain references to array A. With the property of the index-association function DIV, we can derive the dependence test iteration space for J (corresponding to the original inner loop), where the step is variant between two values. Therefore, if the derived condition holds, the loop is parallelizable.
Fig. 6. Example 4
Parallelizing the outer loop needs more analysis. Here, by analyzing the loop bounds and steps, our compiler is able to determine that if a divisibility condition holds, the two loops can actually be collapsed into one loop. Figure 6(b) shows the code after loop collapsing. The new loop in Figure 6(b) can be further parallelized if the condition analyzed in the previous paragraph holds. Figure 6(c) shows the final code, where the collapsed loop is parallelized under the conjunction of the two conditions. Our compiler is thus able to successfully parallelize the outer loop in Figure 6(a).
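The flavor of such two-version code can be sketched as follows; this is a hypothetical C/OpenMP rendering, not the paper's Figure 1 or Figure 6, and the run-time test shown is only one possible condition of the kind discussed above:

```c
/* Two-version code parallelization: the loop is run in parallel only when
 * a run-time test derived from the dependence analysis guarantees
 * independence.  Here the hypothetical test is that the divisor d evenly
 * divides the loop step s, so J = i/d advances by a single non-zero step
 * and every iteration writes a distinct element of a. */
void two_version(double *a, const double *b, int n, int s, int d) {
    if (d > 0 && s % d == 0 && s / d > 0) {
        /* run-time condition satisfied: no cross-iteration dependence */
        #pragma omp parallel for
        for (int i = 0; i < n; i += s)
            a[i / d] = b[i];
    } else {
        /* fall back to the original serial loop */
        for (int i = 0; i < n; i += s)
            a[i / d] = b[i];
    }
}
```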
7 Experimental Results
We have implemented our index-association based dependence analysis technique in the Sun ONE Studio [tm] 8 compiler collection [11], which will also be used in our experiments. (We have not implemented the technique presented in Section 5 yet. We plan to evaluate and experiment with it in future releases.) Our compiler has already implemented several dependence analysis techniques for subscripts which are linear forms of enclosing loop indices, such as GCD test, separability test, Banerjee test, etc. Our compiler also implements some sophisticated techniques for array/scalar privatization analysis, symbolic analysis, parallelization-oriented loop transformations including loop distribution/fusion, loop interchange, wavefront transformation [12], etc. Therefore, our compiler can already parallelize a lot of loops in practice. With our new index-association based dependence analysis, we extend our compiler’s ability to parallelize more loop nests which otherwise cannot be parallelized.
We choose two programs from the well-known SPEC CPU2000 suite [10], swim and lucas, which benefit from the technique developed in this paper. In the second quarter of 2003, we submitted automatic parallelization results for SPEC CPU2000 on a Sun Blade [tm] 2000 workstation with two 1200 MHz UltraSPARC III Cu [tm] processors to SPEC [10], which is the first such submission for SPEC CPU2000 on automatic parallelization. Compared to the results on a Sun Blade [tm] 2000 with just one 1200 MHz UltraSPARC III Cu [tm] processor [10], we achieve a speedup of 1.60 for swim and a speedup of 1.14 for lucas. To evaluate the effectiveness of our technique on more than two processors, we further experimented on a Sun Fire [tm] 6800 server with 24 1200 MHz UltraSPARC III Cu [tm] processors and the Solaris [tm] 9 operating system. For each program, we measure the best serial performance as well as the parallel performance with various numbers of processors, up to 23 processors. We do not report the result for 24 processors since, in general, due to system activity it may not bring any speedup over the result for 23 processors.
7.1 swim
The benchmark swim is a weather prediction program written in Fortran. It is a memory-bandwidth-limited program, and the tiling technique in [9], which has been implemented in our compiler, can improve data temporal cache locality, thus alleviating the bandwidth problem. For example, on one processor of our target machine, the code runs in 305 seconds without tiling and in 134 seconds with tiling. Tiling improves the performance of a single-processor run with a speedup of 2.28 because of the substantially improved cache locality. After tiling, however, some IF statements and MOD operators are introduced into the loop body because of aggressive loop fusion and circular loop skewing [9], which makes it impossible to reuse the same dependence information derived before tiling. To parallelize such loop nests, our dependence analysis phase correctly analyzes the effect of the IF statements and MOD operators, and generates proper conditions to parallelize all four most important loops. Figure 7(a) shows the speedup for swim with different numbers of processors, with and without our index-association based dependence analysis, represented by "With IA-DEP" and "Without IA-DEP" respectively. Without index-association based dependence analysis, the tiled code cannot be parallelized by our compiler. However, our compiler is still able to parallelize all four important loop nests if tiling is not applied; we regard the result of such parallelization as the "Without IA-DEP" result. For two processors, the actual "Without IA-DEP" parallel performance is worse than the performance of the tiled code on one processor, so we use the single-processor result of the tiled code as the two-processor "Without IA-DEP" result. From Figure 7(a), it is clear that our index-association based dependence analysis can greatly improve parallel performance for swim. Figure 7(a) also shows that parallelization with IA-DEP scales better than without IA-DEP. This is because swim is a memory-bandwidth-limited benchmark and tiling enables better scaling with most data accessed in the L2 cache, which
Fig. 7. Speedup on different number of processors for swim and lucas
is local to each processor, instead of in main memory. This is also true with large data sizes in the OpenMP version of swim. In March 2003, Sun submitted the performance results for 8/16/24 threads for SPEC OMPM2001 on the Sun Fire [tm] 6800 server [10]. The results show that without tiling, using OpenMP parallelization directives, the speedup from 8 threads to 16 threads is 1.33. With tiling, turning off OpenMP directive parallelization, however, the speedup is 1.44. The performance with tiling is also significantly better than without tiling, e.g., SPEC scores of 14199 vs. 8351 for 16 threads.
7.2 lucas
The benchmark lucas tests the primality of Mersenne numbers. There are mainly two classes of loop nests in the program. One class is similar to Example 4 in Figure 6, and the other contains indexed array references, i.e., array references appear in the subscripts. Currently, our compiler is not able to parallelize loops in the second class. However, with index-association based dependence analysis, it is able to parallelize all important loops in the first class. Figure 7(b) shows the speedup for lucas on different numbers of processors. Note that no speedup is achieved for multiple-processor runs without index-association based dependence analysis, since none of the important loops is parallelized.
8 Conclusion
In this paper, we have presented a new dependence analysis technique called index-association based dependence analysis. Our technique targets a special class of loop nests and uses a decoupled approach for dependence analysis of complex array subscripts. We also present a technique to transform a non-linear expression into a set of linear expressions; the latter can be used in dependence testing with traditional techniques. Experiments show that our technique is able to help parallelize some otherwise hard-to-parallelize loop nests.
Acknowledgements. The authors want to thank the entire compiler optimizer group for their efforts to build and continuously improve Sun's compilers, on which our work has relied. The authors also want to thank Partha Tirumalai for his helpful comments, which greatly improved this paper.
References
[1] William Blume and Rudolf Eigenmann. Non-linear and symbolic data dependence testing. IEEE Transactions on Parallel and Distributed Systems, 9(12):1180–1194, December 1998.
[2] Paul Feautrier. Dataflow analysis of array and scalar references. International Journal of Parallel Programming, 20(1):23–53, January 1991.
[3] Gina Goff, Ken Kennedy, and Chau-Wen Tseng. Practical dependence testing. In Proceedings of the ACM SIGPLAN'91 Conference on Programming Language Design and Implementation, pages 15–29, Toronto, Ontario, Canada, June 1991.
[4] Mohammad Haghighat and Constantine Polychronopoulos. Symbolic analysis for parallelizing compilers. ACM Transactions on Programming Languages and Systems, 18(4):477–518, July 1996.
[5] Jay Hoeflinger and Yunheung Paek. The access region test. In Proceedings of the Workshop on LCPC 1999, also in Lecture Notes in Computer Science, vol. 1863, Springer, La Jolla, California, August 1999.
[6] Dror Maydan, John Hennessy, and Monica Lam. Efficient and exact data dependence analysis. In Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation, pages 1–14, Toronto, Ontario, Canada, June 1991.
[7] Sungdo Moon and Mary Hall. Evaluation of predicated array data-flow analysis for automatic parallelization. In Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, pages 84–95, Atlanta, GA, May 1999.
[8] William Pugh. A practical algorithm for exact array dependence analysis. Communications of the ACM, 35(8):102–114, August 1992.
[9] Yonghong Song and Zhiyuan Li. New tiling techniques to improve cache temporal locality. In Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation, pages 215–228, Atlanta, GA, May 1999.
[10] Standard Performance Evaluation Corporation. The SPEC CPU2000 benchmark suite. http://www.specbench.org.
[11] Sun Microsystems, Inc. Sun ONE Studio 8 Compiler Collection. http://docs.sun.com.
[12] Michael Wolf. Improving Locality and Parallelism in Nested Loops. PhD thesis, Department of Computer Science, Stanford University, August 1992.
[13] Michael Wolfe. High Performance Compilers for Parallel Computing. Addison-Wesley Publishing Company, 1995.
Improving the Performance of Morton Layout by Array Alignment and Loop Unrolling
Reducing the Price of Naivety
Jeyarajan Thiyagalingam, Olav Beckmann, and Paul H. J. Kelly
Department of Computing, Imperial College London
180 Queen's Gate, London SW7 2AZ, UK
{jeyan,ob3,phjk}@doc.ic.ac.uk
Abstract. Hierarchically-blocked non-linear storage layouts, such as the Morton ordering, have been proposed as a compromise between row-major and column-major for two-dimensional arrays. Morton layout offers some spatial locality whether traversed row-wise or column-wise. The goal of this paper is to make this an attractive compromise, offering close to the performance of row-major traversal of row-major layout, while avoiding the pathological behaviour of column-major traversal. We explore how the spatial locality of Morton layout depends on the alignment of the array's base address, and how unrolling has to be aligned to reduce address calculation overhead. We conclude with extensive experimental results using five common processors and a small suite of benchmark kernels.
1 Introduction
Programming languages that offer support for multi-dimensional arrays generally use one of two linear mappings to translate from multi-dimensional array indices to locations in the machine's linear address space: row-major or column-major. Traversing an array in the same order as it is laid out in memory leads to excellent spatial locality; however, traversing a row-major array in column-major order, or vice-versa, can lead to an order-of-magnitude worse performance. Morton order is a hierarchical, non-linear mapping from array indices to memory locations which has been proposed by several authors as a possible means of overcoming some of the performance problems associated with lexicographic layouts [2, 4, 10, 12]. The key advantages of Morton layout are that the spatial locality of memory references when iterating over a Morton order array is not biased towards either the row-major or the column-major traversal order and that the resulting performance tends to be much smoother across problem sizes than with lexicographic arrays [2]. Storage layout transformations, such as using Morton layout, are always valid. These techniques complement other methods for improving locality of reference in scientific codes, such as tiling, which rely on accurate dependence and aliasing information to determine their validity for a particular loop nest.
Previous Work. In our investigation of Morton layout, we have thus far confined our attention to non-tiled codes. We have carried out an exhaustive investigation of the effect of poor memory layout and the feasibility of using Morton layout as a compromise between row-major and column-major [8]. Our main conclusions thus far were:
It is crucial to consider a full range of problem sizes. The fact that lexicographic layouts can suffer from severe interference problems for certain problem sizes means that it is important to consider a full range of randomly generated problem sizes when evaluating the effectiveness of Morton layout [8].
Morton address calculation: table lookup is a simple and effective solution. Production compilers currently do not support non-linear address calculations for multi-dimensional arrays. Wise et al. [12] investigate the effectiveness of the "dilated arithmetic" approach for performing the address calculation. We have found that a simple table lookup scheme works remarkably well [8].
Effectiveness of Morton layout. We found that Morton layout can be an attractive compromise on machines with large L2 caches, but the overall performance has thus far still been disappointing. However, we also observed that only a relatively small improvement in the performance of codes using Morton layout would be sufficient to make Morton storage layout an attractive compromise between row-major and column-major.
Contributions of this Paper. We make two contributions which can improve the effectiveness of the basic Morton scheme and which are both always valid transformations.
Aligning the Base Address of Morton Arrays (Section 2). A feature of lexicographic layouts is that the exact size of an array can influence the pattern of cache interference misses, resulting in severe performance degradation for some data sizes. This can be overcome by carefully padding the size of lexicographic arrays. In this paper, we show that for Morton layout arrays, the alignment of the base address of the array can have a significant impact on spatial locality when traversing the array. We show that aligning the base address of Morton arrays to page boundaries can result in significant performance improvements.
Unrolling Loops over Morton Arrays (Section 3). Most compilers unroll regular loops over lexicographic arrays. Unfortunately, current compilers cannot unroll loops over Morton arrays effectively due to the nature of address calculations: unlike with lexicographic layouts, there is no general straightforward (linear) way of expressing the relationship between array locations A[i][j] and A[i][j+1] which a compiler could exploit. We show that, provided loops are unrolled in a particular way, it is possible to express these relationships by simple integer increments, and we
demonstrate that using this technique can significantly improve the performance of Morton layout.
1.1 Background: Morton Storage Layout
Lexicographic Array Storage. For an M × N two-dimensional array A, a mapping is needed which gives the memory offset at which array element A[i, j] will be stored. Conventional solutions are the row-major (for example in C and Pascal) and column-major (as used by Fortran) mappings, expressed by S_rm(i, j) = N × i + j and S_cm(i, j) = M × j + i,
respectively. We refer to row-major and column-major as lexicographic, i.e. elements are arranged by the sort order of the two indices (another term is "canonical"). Blocked Array Storage. Traversing a row-major array in column-major order, or vice-versa, leads to poor performance due to poor spatial locality. An attractive strategy is to choose a storage layout which offers a compromise between row-major and column-major. For example, we could break the M × N array into small P × Q row-major subarrays, arranged as an M/P × N/Q row-major array. We define the blocked row-major mapping function (this is the 4D layout discussed in [2]) as S_bk(i, j) = (P × Q) × ((N/Q) × (i div P) + (j div Q)) + Q × (i mod P) + (j mod Q).
Fig. 1. Blocked row-major ("4D") layout with block size P = Q = 4. The diagram illustrates that with 16-word cache lines, illustrated by different shadings, the cache hit rate is 75% whether the array is traversed in row-major or column-major order
Fig. 2. Morton storage layout for an 8 × 8 array. The location of element A[5, 4] is calculated by interleaving "dilated" representations of 5 and 4 bitwise
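For illustration, the three mapping functions above can be written down directly in C (my own sketch, not the paper's code; it assumes 0-based indices and that P divides M and Q divides N):

```c
#include <stddef.h>

size_t offset_rm(size_t M, size_t N, size_t i, size_t j) {
    (void)M;
    return i * N + j;                       /* row-major    */
}

size_t offset_cm(size_t M, size_t N, size_t i, size_t j) {
    (void)N;
    return j * M + i;                       /* column-major */
}

size_t offset_4d(size_t M, size_t N, size_t P, size_t Q,
                 size_t i, size_t j) {
    (void)M;
    size_t block_row = i / P, block_col = j / Q;   /* which P x Q block */
    size_t in_row    = i % P, in_col    = j % Q;   /* position in block */
    return (block_row * (N / Q) + block_col) * (P * Q)
         + in_row * Q + in_col;
}
```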
For example, consider 16-word cache blocks and P = Q = 4, as illustrated in Figure 1. Each block holds a P × Q = 16-word subarray. In row-major traversal, the four iterations (0, 0), (0, 1), (0, 2) and (0, 3) access locations on the same block. The remaining 12 locations on this block are not accessed until later iterations of the outer loop. Thus, for a large array, the expected cache hit rate is 75%, since each block has to be loaded four times to satisfy 16 accesses. The same rate results with column-major traversal. Most systems have a deep memory hierarchy, with block size, capacity and access time increasing geometrically with depth [1]. Blocking should therefore be applied for each level. Note, however, that this becomes very awkward if larger block sizes are not whole multiples of the next smaller block size.
Bit-Interleaving and Morton Layout. Assume for the time being that, for an M × N array, M = 2^m and N = 2^n. Write the array indices i and j in binary as i = i_{m-1} ... i_1 i_0 and j = j_{n-1} ... j_1 j_0, respectively. From this point, we restrict our analysis to square arrays (where M = N). Now the lexicographic mappings can be expressed as bit-concatenation (written "||"): S_rm(i, j) = i || j and S_cm(i, j) = j || i. If P = 2^p and Q = 2^q, the blocked row-major mapping is S_bk(i, j) = (i_{n-1} ... i_p) || (j_{n-1} ... j_q) || (i_{p-1} ... i_0) || (j_{q-1} ... j_0). Now, choose P = Q = 2, and apply blocking recursively: S_mz(i, j) = i_{n-1} j_{n-1} i_{n-2} j_{n-2} ... i_1 j_1 i_0 j_0. This mapping is called the Morton Z-order, and is illustrated in Figure 2.
Morton Layout can be an Unbiased Compromise Between Row-Major and Column-Major. The key property which motivates our study of Morton layout is the following: given a cache with any even power-of-two block size, with an array mapped according to the Morton order mapping, the cache hit rate of a row-major traversal is the same as the cache hit rate of a column-major traversal. This applies given any cache hierarchy with an even power-of-two block size at each level. This is illustrated in Figure 2. The cache hit rate for a cache with a block size of 2^{2k} array elements is 1 - 2^{-k}. Examples. For cache blocks of 32 bytes (4 double words), this gives a hit rate of 50%. For cache blocks of 128 bytes, the hit rate is 75%, as illustrated earlier. For 8kB pages, the hit rate is 96.875%. In Table 1, we contrast these hit rates with the corresponding theoretical hit rates that would result from row-major and column-major layout. Notice that traversing the same array in column-major order would result in a swap of the row-major and column-major columns, but leave the hit rates for Morton layout unchanged. In Section 2, we show that this desirable property of Morton layout is conditional on choosing a suitable alignment for the base address of the array. Morton-Order Address Calculation Using Dilated Arithmetic or Table Lookup. Bit-interleaving is too complex to execute at every loop iteration. Wise et al. [12] explore an intriguing alternative: represent each loop control variable as a "dilated" integer, where the bits are interleaved with zeroes. Two dilation functions are defined, one placing the bits of its argument in the even bit positions and the other in the odd bit positions.
Now we can express the Morton address mapping as S_mz(i, j) = D_1(i) | D_0(j), where "|" denotes bitwise-or. At each loop iteration we increment the loop control variable; this is fairly straightforward. Let "&" denote bitwise-and, and let Ones_0 and Ones_1 be the constants with ones in the even and odd bit positions, respectively. Then D_0(j + 1) = ((D_0(j) | Ones_1) + 1) & Ones_0, and analogously for D_1(i + 1).
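A sketch of this scheme in C, assuming 32-bit indices (the identifier names are ours):

#include <stdint.h>

#define ONES_0 0x55555555u   /* ones in the even bit positions */
#define ONES_1 0xAAAAAAAAu   /* ones in the odd bit positions  */

/* Increment the underlying index of a dilated integer stored in the even
   positions: setting the odd bits first lets the carry ripple across the
   interleaved zeroes. */
static inline uint32_t dilated_incr_even(uint32_t d0)
{
    return ((d0 | ONES_1) + 1) & ONES_0;
}

static inline uint32_t dilated_incr_odd(uint32_t d1)
{
    return ((d1 | ONES_0) + 1) & ONES_1;
}

/* Example: traversal of one row of an n-by-n Morton array using a dilated
   induction variable for the column index. */
static double morton_row_sum(const double *A, uint32_t d1_i, uint32_t n)
{
    double s = 0.0;
    uint32_t d0_j = 0;
    for (uint32_t j = 0; j < n; j++) {
        s += A[d1_i | d0_j];
        d0_j = dilated_incr_even(d0_j);
    }
    return s;
}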
This approach works when the array is accessed using an induction variable which can be incremented using dilated addition. We found that a simpler scheme often works nearly as well: we simply pre-compute a table for each of the two mappings D_0 and D_1. Table accesses are likely to be cache hits, as their range is small and they are accessed with unit stride.
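A minimal sketch of the table-based variant (the table names and the table-building routine are illustrative):

#include <stdint.h>
#include <stdlib.h>

static uint32_t *D0_tab, *D1_tab;   /* D0_tab[x] = D0(x), D1_tab[x] = D1(x) */

static void build_dilation_tables(uint32_t n)   /* n = array extent */
{
    D0_tab = malloc(n * sizeof *D0_tab);
    D1_tab = malloc(n * sizeof *D1_tab);
    for (uint32_t x = 0; x < n; x++) {
        uint32_t d = 0;
        for (uint32_t k = 0; (x >> k) != 0; k++)
            d |= ((x >> k) & 1u) << (2 * k);
        D0_tab[x] = d;          /* bits of x in the even positions */
        D1_tab[x] = d << 1;     /* bits of x in the odd positions  */
    }
}

/* Element (i, j) of an n-by-n Morton array is then A[D1_tab[i] | D0_tab[j]]. */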
2 Alignment of the Base Address of Morton Arrays
With lexicographic layout, it is often important to pad the row or column length of an array to avoid associativity conflicts [7].
Fig. 3. Alignment of Morton-order Arrays. This figure shows the impact of misaligning the base address of a 4 × 4 Morton array from the alignment of a 4-word cache line. The numbers next to each row and below each column indicate the number of misses encountered when traversing a row (column) of the array in row-major (column-major) order, considering only spatial locality. Underneath each diagram, we show the average theoretical hit rate for the entire Morton array for both row-major (RM) and column-major (CM) traversal
With Morton layout, it turns out to be important to pad the base address of the array. In our discussion of the cache hit rate resulting from Morton order arrays in the previous section, we implicitly assumed that the base address of the array is mapped to the start of a cache line. For a 32-byte cache line, i.e. a 2 × 2 block of double words, this means that the base address of the Morton array is 32-byte aligned. As we illustrated in Section 1.1, such an allocation is unbiased towards any particular order of traversal. However, in Figure 3 we show that if the allocated array is offset from this "perfect" alignment, Morton layout may no longer be an unbiased compromise storage layout: the average miss rate of traversing the array, both in row- and in column-major order, is always worse when the alignment of the base address is offset from the alignment of a 4-word cache line. Further, when the array is misaligned, we lose the symmetry property of Morton order being an unbiased compromise between row- and column-major storage layout.
Fig. 4. Miss-rates for row-major and column-major traversal of Morton arrays. We show the best, worst and average miss-rates for different units of memory hierarchy (referred to as blocksizes), across all possible alignments of the base address of the Morton array. The top two graphs use a linear y-axis, whilst the graph underneath uses a logarithmic y-axis to illustrate that the pattern of miss-rates is in fact highly structured across all levels of the memory hierarchy
Systematic Study Across Different Levels of Memory Hierarchy. In order to investigate this effect further, we systematically calculated the resulting miss-rates for both row- and column-major traversal of Morton arrays, over a range of possible levels of memory hierarchy, and for each level, different misalignments of the base address of Morton arrays. The range of block sizes in the memory hierarchy we covered was from 4 double words, corresponding to a 32-byte cache line, to 1024 double words, corresponding to an 8 kB page. Architectural considerations imply that block sizes in the memory hierarchy such as cache lines or pages have a power-of-two size. For each block size, we calculated, over all possible alignments of the base address of a Morton array with respect to this block size, respectively the best, worst and average resulting miss-rates for both row-major and column-major traversal of the array. The standard C library malloc() function returns addresses which are double-word aligned. We therefore conducted our study at the resolution of double words. The results of our calculation are summarised in Figure 4. Based on those results, we offer the following conclusions.
1. The average miss-rate is the performance that might be expected when no special steps are taken to align the base address of a Morton array. We note that the miss rates resulting from such alignments are always suboptimal.
2. The best average hit rates for both row- and column-major traversal are always achieved by aligning the base address of Morton arrays to the largest significant block size of the memory hierarchy (e.g. page size).
3. The difference between the best and the worst miss-rates can be very significant, up to a factor of 2 for both row-major and column-major traversal.
4. We observe that the symmetry property which we mentioned in Section 1.1 is in fact only available when using the best alignment and for even power-of-two block sizes in the memory hierarchy. For odd power-of-two block sizes (such as 8 double words, corresponding to a 64-byte cache line), we find that the Z-Morton layout is still significantly biased towards row-major traversal. An alternative recursive layout such as Hilbert layout [6, 3] may have better properties in this respect.
5. The absolute miss-rates we observe drop exponentially through increasing levels of the memory hierarchy (see the graphs in Figure 4). However, if we assume that not only the block size but also the access time of different levels of the memory hierarchy increase exponentially [1], the penalty of misaligning Morton arrays does not diminish significantly for larger block sizes. From a theoretical point of view, we therefore recommend aligning the base address of all Morton arrays to the largest significant block size in the memory hierarchy, i.e. page size.
In real machines, there are conflicting performance issues apart from maximising spatial locality, such as aliasing of addresses that are identical modulo some power-of-two, and some of these could negate the benefits of increased spatial locality resulting from making the base address of Morton arrays page-aligned. Experimental Evaluation of Varying the Alignment of the Base Address of Morton Arrays. In our experimental evaluation, we have studied the impact on actual performance of the alignment of the base address of Morton arrays. For each architecture and each benchmark, we have measured the performance of Morton layout both when using the system’s default alignment (i.e. addresses as returned by malloc()) and when aligning arrays to each significant size of memory hierarchy. Our experimental methodology is described in Section 3.1. Detailed performance figures showing the impact of varying the alignment of the base address of Morton arrays over all significant levels of memory hierarchy are contained in an accompanying technical report [9]. Our theoretical assertion that aligning with the largest significant block size in the memory hierarchy, i.e. page size, should always be best is supported in most, but not all cases, and we assume that where this is not the case, this is due to interference effects. Figures 5–8 of this paper include performance results for Morton storage layout with default- and page-alignment of the array’s base address.
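For example, on a POSIX system the base address of a Morton array can be page-aligned at allocation time; the helper below is our own sketch, querying the actual page size rather than assuming one:

#include <stdlib.h>
#include <unistd.h>

/* Allocate an n-by-n array of doubles whose base address is page-aligned. */
static double *alloc_page_aligned_array(size_t n)
{
    void *p = NULL;
    size_t pagesize = (size_t)sysconf(_SC_PAGESIZE);
    if (posix_memalign(&p, pagesize, n * n * sizeof(double)) != 0)
        return NULL;
    return p;
}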
3 Unrolling Loops over Morton Arrays
Linear array layouts have the following property. Let S(v) be the address calculation function which returns the offset from the array base address at which the element identified by index vector v is stored. Then, for any offset vector d, we have S(v + d) = S(v) + S(d).
As an example, for a row-major array A, element A[i, j + 1] is stored at location S(i, j) + 1. Compilers can exploit this property when unrolling loops over arrays with linear array layouts by strength-reducing the address calculation for all except the first loop iteration in the unrolled loop body to simple addition of a constant.
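For instance, an unroll-by-four row-major summation might be strength-reduced as sketched below (a hand-written illustration, not the output of any particular compiler):

#include <stddef.h>

static double row_sum_rm(const double *A, size_t i, size_t n)
{
    double sum = 0.0;
    size_t j = 0;
    for (; j + 3 < n; j += 4) {
        const double *p = &A[i * n + j];    /* one address computation per group */
        sum += p[0] + p[1] + p[2] + p[3];   /* remaining offsets are constants   */
    }
    for (; j < n; j++)                      /* remainder iterations              */
        sum += A[i * n + j];
    return sum;
}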
As stated in Section 1.1, the Morton address mapping is S_mz(i, j) = D_1(i) | D_0(j), where "|" denotes bitwise-or, which can be implemented as addition since the two operands have no bits in common. Given an offset d in the j index, the problem is that there is no general way of simplifying D_0(j + d) to D_0(j) plus a constant, for all j and all d.
Proposition 1 (Strength-reduction of Morton address calculation). Let B = 2^b be some power-of-two number. Assume that j mod B = 0 and that d < B. Then D_0(j + d) = D_0(j) + D_0(d). This follows from the following observations: if j mod B = 0 then the least significant 2b bits of D_0(j) are zero; if d < B then all except the least significant 2b bits of D_0(d) are zero. Therefore, the dilated addition can be performed separately on the least significant 2b bits. As an example, assume that j mod 4 = 0. Then the following strength-reductions of Morton order address calculation are valid: S_mz(i, j + 1) = S_mz(i, j) + 1, S_mz(i, j + 2) = S_mz(i, j) + 4, and S_mz(i, j + 3) = S_mz(i, j) + 5.
An analogous result holds for the i index. Therefore, by carefully choosing the alignment of the starting loop iteration variable with respect to the array indices used in the loop body and by choosing a power-of-two unrolling factor, loops over Morton order arrays can benefit from strength-reduction in unrolled loops. In our implementation, this means that memory references for the Morton tables are replaced by simple addition of constants. Existing production compilers cannot find this transformation automatically. We therefore implemented this unrolling scheme by hand in order to quantify the possible benefit. We report very promising initial performance results in Section 3.1.
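A hand-unrolled sketch, assuming the dilation tables of Section 1.1, an extent n that is a multiple of 4, and a starting column aligned to 4:

#include <stdint.h>

/* Traversal of one row of a Morton array, unrolled by 4. Since j is a
   multiple of 4 at the top of the body, the three inner offsets are the
   compile-time constants D0(1) = 1, D0(2) = 4 and D0(3) = 5. */
static double morton_row_sum_unrolled(const double *A,
                                      const uint32_t *D1_tab,
                                      const uint32_t *D0_tab,
                                      uint32_t i, uint32_t n)
{
    double s = 0.0;
    for (uint32_t j = 0; j < n; j += 4) {
        uint32_t base = D1_tab[i] | D0_tab[j];   /* one table lookup per group */
        s += A[base] + A[base + 1] + A[base + 4] + A[base + 5];
    }
    return s;
}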
3.1 Experimental Evaluation
Benchmark Kernels and Architectures. To test experimentally our hypothesis that Morton layout is a useful compromise between row-major and column-major layout, we have collected a suite of simple implementations of standard numerical kernels operating on two-dimensional arrays and carried out experiments on five different architectures. The kernels used are shown in Table 2 and the platforms in Table 3. Performance Results. Figures 5–8 show our results in detail, and we make some comments directly in the figures. We have carried out extensive measurements over a full range of problem sizes: the data underlying the graphs in Figures 5–8 consist of more than 25 million individual measurements. For each experiment/architecture pair, we give a broad characterisation of whether Morton layout is a useful compromise between row-major and column-major in this setting by annotating the figures with win, lose, etc. Impact of Unrolling. By inspecting the assembly code, we established that at least the icc compiler on x86 architectures does automatically unroll our benchmark kernels for row-major layout. In Figures 5–8, we show that manually unrolling the loops over Morton arrays by a factor of four, using the technique described in Section 3, can result in a significant performance improvement of the Morton code: on several architectures, the unrolled Morton codes are, for part of the spectrum of problem sizes, very close to, or even better than, the performance of the best canonical code. We plan to explore this promising result further by investigating larger unrolling factors.
4 Related Work and Conclusions
Related Work. Chatterjee et al. [2] study Morton layout and a blocked "4D" layout. They focus on tiled implementations, for which they find that the 4D layout achieves higher performance than the Morton layout because the address calculation problem is easier, while much or all of the spatial locality is still exploited. Their work has similar goals to ours, but all their benchmark applications are tiled for temporal locality; they show impressive performance, with the further advantage that performance is less sensitive to small changes in tile size and problem size, which can result in cache associativity conflicts with conventional layouts. In contrast, the goal of our work is to evaluate whether Morton layout can simplify the performance programming model presented by compilers for languages with multi-dimensional arrays. Wise et al. [11] argue for compiler support for Morton-order matrices. They use a recursive implementation of loops over Morton arrays, with recursion unfolding and re-rolling into small loops. However, they find it hard to overcome the cost of addressing without recursion. Gustavson et al. [5] show that complementing a tiled implementation of BLAS-3 routines with a recursively blocked storage layout can lead to additional performance improvements. Conclusions. We believe that work on nonlinear storage layouts, such as Morton order, is applicable in a number of different areas.
Simplifying the performance-programming model offered to application programmers is one important objective of language design and compiler research. We believe that the work presented in this paper can reduce the price of the attractive properties offered by Morton layout over canonical layouts. Storage layout transformations are always valid and can be applied even in codes where tiling is not valid or hard to apply. Storage layout transformations can thus be additional and complementary to iteration space transformations. Future Work. We have reason to believe that unrolling loops over Morton arrays by factors larger than four is likely to yield greater benefits than we have measured thus far. We are also planning to investigate the performance of Morton layout in tiled codes and software-directed pre-fetching for loops over Morton arrays. We believe that the techniques we have presented in this paper facilitate an implementation of Morton layout for two-dimensional arrays that is beginning to fulfil its theoretical promise.
Acknowledgements This work was partly supported by mi2g Software, a Universities UK Overseas Research Scholarship and by the United Kingdom EPSRC-funded OSCAR project (GR/R21486). We also thank Imperial College Parallel Computing Centre (ICPC) for access to their equipment. We are grateful to David Padua and J. Ramanujam for suggesting that we investigate unrolling during discussions at the CPC 2003 workshop in Amsterdam.
References
[1] Bowen Alpern, Larry Carter, Ephraim Feig, and Ted Selker. The uniform memory hierarchy model of computation. Algorithmica, 12(2/3):72–109, August/September 1994.
[2] Siddhartha Chatterjee, Vibhor V. Jain, Alvin R. Lebeck, Shyam Mundhra, and Mithuna Thottethodi. Nonlinear array layouts for hierarchical memory systems. In ICS '99: Proceedings of the 1999 International Conference on Supercomputing, pages 444–453, June 20–25, 1999.
[3] Siddhartha Chatterjee, Alvin R. Lebeck, Praveen K. Patnala, and Mithuna Thottethodi. Recursive array layouts and fast parallel matrix multiplication. In SPAA '99: Eleventh Annual ACM Symposium on Parallel Algorithms and Architectures, pages 222–231, New York, June 1999.
[4] Peter Drakenberg, Fredrik Lundevall, and Björn Lisper. An efficient semi-hierarchical array layout. In Proceedings of the Workshop on Interaction between Compilers and Computer Architectures, Monterrey, Mexico, January 2001. Kluwer. Available via www.mrtc.mdh.se.
[5] F. Gustavson, A. Henriksson, I. Jonsson, and B. Kaagstroem. Recursive blocked data formats and BLAS's for dense linear algebra algorithms. In PARA '98: Applied Parallel Computing. Large Scale Scientific and Industrial Problems, volume 1541 of LNCS, pages 195–206, 1998.
[6] D. Hilbert. Über die stetige Abbildung einer Linie auf ein Flächenstück. Math. Ann., 38:459–460, 1891.
[7] Gabriel Rivera and Chau-Wen Tseng. Data transformations for eliminating conflict misses. In PLDI '98: Proceedings of the ACM SIGPLAN '98 Conference on Programming Language Design and Implementation, pages 38–49, Montreal, Canada, 17–19 June 1998.
[8] Jeyarajan Thiyagalingam, Olav Beckmann, and Paul H. J. Kelly. An exhaustive evaluation of row-major, column-major and Morton layouts for large two-dimensional arrays. In Stephen A. Jarvis, editor, Performance Engineering: Annual UK Performance Engineering Workshop, pages 340–351. University of Warwick, UK, July 2003.
[9] Jeyarajan Thiyagalingam, Olav Beckmann, and Paul H. J. Kelly. Improving the performance of basic Morton layout by array alignment and loop unrolling — towards a better compromise storage layout. Technical report, Department of Computing, Imperial College London, September 2003. Available via www.doc.ic.ac.uk/~jeyan/.
[10] Vinod Valsalam and Anthony Skjellum. A framework for high-performance matrix multiplication based on hierarchical abstractions, algorithms and optimized low-level kernels. Concurrency and Computation: Practice and Experience, 14(10):805–839, August 2002.
[11] David S. Wise and Jeremy D. Frens. Morton-order Matrices Deserve Compilers' Support. Technical Report TR533, November 1999.
[12] David S. Wise, Jeremy D. Frens, Yuhong Gu, and Gregory A. Alexander. Language support for Morton-order matrices. ACM SIGPLAN Notices, 36(7):24–33, July 2001. Proceedings of PPoPP 2001.
Notice that for Alpha, the upper limit is 1024 × 1024. For Alpha (Sun), the fall-off in RM performance occurs at 725 × 725 (1024 × 1024), when the total data size exceeds the direct-mapped L2 cache size of 4 MB (8 MB). This assumes a working set of 725 × 725 (1024 × 1024) doubles. Alignment significantly improves the performance of the default Morton scheme on P3. On other platforms, alignment also yields slight improvements.
Fig. 5. ADI performance in MFLOPs on different platforms. We compare row-major, column-major, Morton with default alignment of the base address of the array, Morton with page-aligned base address and unrolled-Morton with page-aligned base address and factor 4 loop unrolling
On Alpha, Sparc and P3, the page-aligned Morton version improves over the basic Morton scheme. Unrolling improves the performance of the best aligned Morton implementation, in particular on x86, where the unrolled Morton performance is within reach of the best canonical.
Fig. 6. Jacobi2D performance in MFLOPs on different platforms. We compare row-major, column-major, Morton with default alignment of the base address of the array, Morton with page-aligned base address and Morton with page-aligned base address and factor 4 loop unrolling
For Alpha and P3, notice that the upper limit is 1024 × 1024. On all platforms except Sparc, unrolling yields a significant improvement over the basic Morton scheme.
Fig. 7. MMikj performance in MFLOPs on different platforms. We compare row-major, column-major, Morton with default alignment of the base address of the array, Morton with page-aligned base address and Morton with page-aligned base address and factor 4 loop unrolling
For Alpha, notice that the upper limit is 1024 × 1024. Notice the sharp drop in RM and CM performance on the Alpha (around 360 × 360) and Sparc (around 700 × 700) platforms. On all platforms except Sparc, unrolling yields a significant improvement over the basic Morton scheme.
Fig. 8. MMijk performance in MFLOPs on different platforms. We compare row-major, column-major, Morton with default alignment of the base address of the array, Morton with page-aligned base address and Morton with page-aligned base address and factor 4 loop unrolling
Spatial Views: Space-Aware Programming for Networks of Embedded Systems*

Yang Ni, Ulrich Kremer, and Liviu Iftode

Department of Computer Science, Rutgers University, Piscataway, NJ 08854
{yangni,uli,iftode}@cs.rutgers.edu

* This research was partially supported by NSF ITR/SI award ANI-0121416.
Abstract. Networks of embedded systems, in the form of cell phones, PDAs, wearable computers, and sensors connected through wireless networking technology, are emerging as an important computing platform. The ubiquitous nature of such a platform promises exciting applications. This paper presents a new programming model for a network of embedded systems, called Spatial Views, targeting its dynamic, space-sensitive and resource-restrained characteristics. The core of the proposed model is iterative programming over a dynamic collection of nodes identified by the physical spaces they are in and the services they provide. Hidden in the iteration is execution migration as the main collaboration paradigm, constrained by user specified limits on resource usage such as response time and energy consumption. A Spatial Views prototype has been implemented, and first results are reported.
1 Introduction
The possibility of building massive networks of embedded systems (NES) has become a reality. For instance, cell phones, PDA's, and other gadgets carried by passengers on a train can form an ad hoc network through wireless connection. In addition to those volatile and dynamic nodes, the network may contain fixed nodes installed on the train, for instance public displays, keyboards, sensors, or Internet connections. Similar networks can be established across buildings, airports or even on highways among car-mounted computers. Any device with a processor, some memory and a network connection, probably integrated on a single chip, can join such a network. The applications of such a network are limited only by our imagination, given the right programming models and abstractions. Existing programming models do not address key issues for applications that will run on a network of embedded systems. Physical Locations: An application has a physical target space region, i.e., a space of interest in which it executes. The semantics of a program executing outside its target space is not defined. For instance, it makes a difference
if an application collects temperature readings within a building or outside a building, and whether all or only a subset of the temperature sensors are to be polled. A motion sensor reading may trigger the activation of other sensors, but only of those which are in the spatial proximity of the motion sensor. A programmer must be able to specify physical spaces and location constraints over these spaces. Volatile and Dynamic Networks: Nodes may join and leave at any time, because of movement or failure. Portable devices or sensors, carried by a person or an animal [1], may move out of the space of interest while they travel with their carriers. Battery-powered small devices may run out of power at any point. A node available at time t cannot be assumed to be available at time t + d or time t - d, where d can be very small relative to the application execution time. Resource Constraints: Resources like energy and execution time are limited in a network of embedded systems, due to the hardware form factor and application characteristics. Graceful degradation of the quality of results is necessary in such an environment. Instead of draining the batteries of the sensors, you might want to limit the total energy used by a program and accept a slightly worse answer. Or you may limit the response time of a query for traffic information 10 miles ahead on the highway, so you will have enough time to choose a detour after getting the answer. In those cases, energy-wasting or late answers are no better than, or even worse than, no answer. Programmers should be able to specify the amount of resources used during a program execution, so that trade-offs between quality of results and resource usage can be made. In this paper, we introduce Spatial Views, a novel programming model targeting networks of embedded systems. Spaces, services and resource constraints are explicit programming elements in Spatial Views. Spaces and services are combined to define dynamic collections of interesting nodes in a network, called Spatial Views. Iterators and Selectors specify code to execute in a view under a specified time constraint, and possibly additional user-specified resource constraints. These high-level program constructs are based on a migratory execution model, guided by the space and service of interest. However, Spatial Views does not exclude an implementation using other communication mechanisms, such as remote procedure calls, message passing or even socket programming, for performance or energy efficiency. Network or node failures are transparent to the programming model. However, there is no guarantee that the execution of an application will be able to complete successfully. Our proposed model is not fault tolerant, but allows answers of different qualities. In contrast, in a traditional programming model for a stable target system, any answer is considered to have perfect quality. In our programming model, it is the responsibility of the programmer to assess the quality of an answer. For example, if a user wants the average temperature calculated from readings of at least ten network nodes, he or she should report the average temperature together with the number of actually visited nodes to
assess the quality of the answer. A best-effort compiler and run-time system will try to visit as many nodes as possible, as long as no user-defined constraint is violated, assuming that visiting more nodes will produce a potentially better answer. A target space and a time constraint have to be specified for each program to confine its execution, including node discovery, to a space×time interval. Security and privacy issues are also important in a NES but are not currently part of our programming model. The same application will run on a secure network as well as on an insecure network. We assume that security-sensitive hosts will implement authentication and protection policies at a lower level than the programming model. Smart Messages[2] and Spatial Programming[3] are possible implementation platforms for our proposed Spatial Views programming model. A programming environment for execution migration that includes protection and encryption for Smart Messages is currently under investigation[4], which could be used as a secure infrastructure to implement our programming model. However, in this paper we describe an implementation of Spatial Views on top of Sun's K Virtual Machine (KVM) independent of Smart Messages. In the rest of this paper, we will present a survey of related work (section 2), the programming model (section 3), a discussion of the implementation of a prototype system (section 4) and experimental results (section 5).
2 Related Work
Our work is related to recent work on sensor networks[5, 6, 7, 8, 9] in that they all target ad hoc networks of wireless devices with limited resources. However, we broaden the spectrum of network nodes to include more computationally powerful devices like PDA's, cell phones and even workstations or servers. TinyOS[5] and nesC[7] provide a component-based, event-driven programming environment for Motes. Motes are small wireless computing devices that have processors of a couple of MHz, about 4 KB of RAM and 10 Kbps wireless communication capability. TinyOS and nesC use Active Messages as the communication paradigm. Active Messages has a similar flavor to the execution migration of Spatial Views, but uses non-migrating handlers instead of migrating code. Maté[6] is a tiny virtual machine built on top of TinyOS for sensor networks. It allows capsules, i.e. Maté programs in bytecode, to forward themselves through a network with a single instruction, which bears a resemblance to execution migration in Spatial Views. Self-forwarding enables on-line software upgrading, which is important in large-scale sensor networks. Next, we discuss related work on services and locations. We also discuss related work on execution migration, which is used in the implementation of the prototype for our programming model.
2.1 Service Discovery
Service discovery is a research area with a long history. A service is usually specified either as an interface (as in Jini [10]) or as a tuple of attribute-and-value pairs
(as in INS [11]). Attribute-and-value pairs describe a hierarchical service space by adding new attributes and corresponding values to a describing tuple. The same goal can be achieved through interface sub-typing. The Spatial Views programming model specifies services as interfaces. Applications and services agree on the semantics of the methods of the services. We assume that the operating system provides service discovery as a basic function. However, we did implement a simple service discovery in the Spatial Views runtime libraries using the random walk technique.
2.2 Location Technology
GPS[12] is the most developed positioning technology. It is available worldwide in all weather, with very high accuracy considering its scale: 16 meters for absolute positions and 1 meter for relative positions. In spite of its many advantages, GPS is only available outdoors and its accuracy is still not satisfactory for many mobile computing applications. In recent years, more accurate indoor positioning technologies have been developed by the mobile computing community. Active Badges and Bats[13, 14, 15] are tracking systems accurate to a few centimeters. Each object is tagged with an RF ID and tracked by a centralized system. Although accurate, Active Badges and Bats are costly and hard to deploy. User privacy is not protected, since everyone with a tag exposes his/her position by sending out radio signals. The central machine in charge of analyzing each user's position causes scalability problems and represents a single point of failure. Cricket[16, 17] tries to address those issues by using a distributed and passive architecture similar to GPS. Cricket is based on special beacons and receivers and uses the time of flight of radio and ultrasound signals for positioning. It provides precision to within a few meters. RADAR[18] is also a passive system like GPS, but it is based on the popular 802.11 technology and uses radio signal strength for positioning. The precision of RADAR is in the range of 2 or 3 meters.
2.3 Migratory Execution
Spatial Views is part of the Smart Messages project[19, 2]. The goal of Spatial Views is to build a high-level space-aware programming language over Smart Messages. We had a simple implementation of the migratory execution feature of Smart Messages for rapid prototyping and evaluation of Spatial Views. Migratory execution has been extensively studied in the literature, especially in the context of mobile agents[20, 21]. However, Spatial Views only supports implicit transparent migration hidden in its iteration operation, and names a node based on the services that it provides. Spatial Views/Smart Messages is different from mobile agents in terms of the design goal. We are designing a programming tool and infrastructure for cooperative computing on networks of embedded systems. The major network connection is assumed wireless. Spatial Views/Smart Messages uses content naming, and a migrating program is responsible for its own routing.
3 Programming Model
To program a network of embedded systems in Spatial Views, a programmer specifies the nodes in which he or she is interested based on the properties of the nodes. Then he or she specifies the task to be executed on those nodes. The properties used to identify interesting nodes include the services of the nodes and their locations. A program starts running on one node. Whenever it needs some service which the current node does not provide, it discovers another node that does, and migrates there to continue its execution. Spatial Views provides the necessary programming abstractions and constructs for this novel programming model. Node discovery, ad hoc network routing, and execution migration are transparently implemented by the compiler, runtime system, and operating system. A programmer is freed from dealing directly with the dynamic network. Figure 1 shows an example of a Spatial Views program. We will walk through this example in Section 3.3.
3.1 Services and Virtual Nodes
NES computing is cooperative computing[19]. Nodes participate in a common computing task by providing some service and, at the same time, using services provided by other nodes. A service is described, or named, with an interface in Spatial Views. Nodes provide services, which are discovered at run time and provided as objects implementing certain interfaces. In our programming model, discovery is assumed to be a basic function provided by the underlying middleware or OS, but we present a simple discovery implementation based on the "random walk" technique in Section 4. The discovery procedure looks for nodes hosting classes implementing the interface. When such a node is found, an object of the class is created. The program is then able to use the service through the object. The discovery may be confined to a certain physical space, as we will discuss in Section 3.2. The basic programming abstraction in Spatial Views is a virtual node, which is denoted as a pair (service, location), representing a physical node that provides the service and is located in the location. Concrete physical nodes with IP addresses or MAC addresses are replaced by virtual nodes. Depending on how many services it provides, a single physical node may be represented by multiple virtual nodes. More interestingly, if a physical node is mobile, it may be used as different virtual nodes at different points during the application execution. Uniquely identifying a particular physical node is not supported in Spatial Views. If an application needs to do so, the programmer can use some application-specific mechanism, for example MAC addresses.
3.2 Spatial Views, Iterators and Selectors
A spatial view is a dynamic collection of virtual nodes that provide a common service and are located in a common space. Here a space is a set of locations, which
can be a room, a floor, or a parking lot. Iterators and selectors describe actions to be performed over the nodes in a view. The instructions specified in the body of an iterator are executed on "all", or as many as possible, of the nodes in the view. In contrast, the body of a selector is executed on only one node in the view, if the view is not empty. The most important characteristic of a spatial view is its dynamic nature. It is a changing set of virtual nodes. A physical node may move away or run out of power, so a virtual node may simply disappear at an arbitrary point. On the other hand, new nodes may join at any time. For this reason, two consecutive invocations of the same iterator over the same view may lead to different results. A spatial view is defined as follows:
where Service is the name of an interface and Space is the space of interest. If the space is omitted, any node providing the service of interest is included in the view, no matter where it is. A spatial view is accessed through an iterator or selector.
TimeConstraint gives a time constraint, which is mandatory. ConstraintList gives a list of constraints on energy, money or other resources to apply to an iterator or a selector. At this point, only time constraints are supported. A time constraint demands that an iterator or selector finish in NumberOfMilliseconds. Time constraints are enforced following a best-effort semantics, with the iteration body as the minimal atomic unit of constraint control. This means an iteration will never be partially executed even when a time constraint is violated. A time constraint in Spatial Views is a soft deadline, a time budget rather than a real-time deadline. In other words, the time constraint does not ensure that a program terminates successfully within the deadline, but it ensures no further execution after the budget is exhausted.
3.3 Example
The example shown in Figure 1 illustrates a Spatial Views application that executes on a network that contains nodes with cameras and nodes that provide image processing services such as human face detection[22]. The program tries to find a person with a red shirt or sweater on the third floor of a building. An answer is expected back within 30 seconds (a soft deadline).
Fig. 1. Spatial Views example application of locating a person in red
A time limit is necessary because the computed answer may become "stale" if returned too late (the missing person may have left the building by the time the successful search result is reported). Static physical spaces such as buildings and floors within buildings may be defined as part of a Spatial Views space library. In the example, we assume that the package "SpaceDefinition.Rutgers.*" contains such definitions for the Rutgers University campuses. Line 6 defines a spatial view of cameras on the third floor of a building named CoRE (a building at Rutgers University). Lines 10-27 define the task to be performed on the cameras in the spatial view defined in line 6. It is an iterator, so the task will be executed on each camera discovered within the time constraint, 30 seconds, as defined in line 10. When the execution reaches line 11, the program will have migrated to a camera. Then a picture is taken. Line 12 tries to find a region in the picture that is mostly red. If such a red region is found, another spatial view, consisting of face detectors, is defined (line 16). Lines 20 and 21 use a face detector in the view defined in line 16 to find a face in the picture. (Because it is a selector, lines 20 and 21 finish right after the first face detector is discovered.) If the detected face is close to the red region in the picture, the program concludes that it is a person in a red shirt and remembers the location of the camera which took the picture. This location is reported at the end of the program (lines 29-32).
Fig. 2. Compilation of Spatial Views Programs
Fig. 3. Architecture of a Node
4 Implementation
The implementation itself is not the major contribution of this paper. The programming model is. The purpose of this implementation is to justify the programming model, and to provide an opportunity to study the abstractions and constructs proposed in the model. It is part of our on-going work to make this implementation faster, scalable, secure and economically acceptable. However, the current implementation has shown the feasibility of our programming model. Our prototype is an extension to Java 2 Platform, Micro Edition (J2ME)[23]. Figure 2 shows the basic structure of the Spatial Views compilation system. We are currently investigating optimization passes that improve the chances of a successful program execution in a highly volatile target network. The compiled bytecode runs on a network, each node of which has a Spatial Views virtual machine and a Spatial Views runtime library. Figure 3 shows the architecture of a single node. We build the Spatial Views compiler, virtual machine and runtime library based on Sun's J2ME technology [23]. J2ME is a Java runtime environment targeting extremely small commodity devices. KVM[24] is a key part of J2ME. It is a virtual machine designed for small-memory, limited-resource and networked devices like cell phones, which typically contain 16- or 32-bit processors and a minimum memory of about 128 kilobytes. We modified javac in Java 2 SDK 1.3.1 to support the new Spatial Views language structures, including the foreach and forany statements and space definition statements. We modified KVM 1.0.3 to support transparent process migration, and we extended CLDC 1.0.3 with new system classes to support Spatial Views language features. We ported our implementation to the x86 and ARM architectures, running Linux 2.4.x.
4.1 Spatial Views Iteration and Selection
At the beginning of an iteration, a new thread is created to discover interesting nodes and to migrate the process there. We call the new thread the Bus Thread. The Bus Thread implements a certain discovery/routing algorithm and respects the user-specified constraints. The Bus Thread migrates from one interesting node to another. An interesting node is a node that provides the service and is located in the space specified in the spatial view definition. On such a node, the Bus Thread blocks and switches to the user task thread, the code of which is specified in the iteration body. When an iteration step finishes, the user task thread blocks and switches back to the Bus Thread. The Bus Thread continues until no more interesting nodes can be found or the time budget is used up. In the case of selectors, the Bus Thread finishes right after the first interesting node is found. When the Bus Thread finishes, the corresponding spatial views iteration ends. The Bus Thread is like a bus carrying passengers (user task threads in our case), running across a region and stopping at certain interesting places, hence the name. This implementation with a Bus Thread provides a simple framework to iterate over a spatial view as a dynamic set of interesting nodes. Node discovery is transparent to the programmer and performed by the underlying middleware or by the OS using existing or customized discovery and routing algorithms. Such a framework does not limit the search algorithm a program uses to discover an interesting node. In the current implementation, we use the "random walk" technique, which randomly picks a neighbor of the current node and migrates there. On each node the Bus Thread checks for the service and location. If the interesting service is found in the specified space, it switches to the user task. The Bus Thread remembers the nodes that it has visited by recording their IDs (e.g. IP addresses and port numbers) and avoids visiting them again. Such an algorithm may be slow and not scalable, but one can hardly do better in an unstructured, dynamic network. However, if the network is not changing very fast or not changing at all, a static directory of services can be maintained to find interesting nodes. Another possible improvement is to allow the Bus Thread to clone itself and search the network in parallel. This optimization is currently under investigation. As for the constraints, so far we have implemented only the time constraint. The Bus Thread times each single iteration step, and checks the remaining time budget after each step finishes. If the budget drops below zero, the iteration is stopped. The time constraint is thus a soft deadline implemented with "best-effort" semantics. This soft deadline provides effective trade-offs between quality of results and time consumption, as shown in Section 5.3.
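The Bus Thread loop can be summarized by the following C-like sketch; the helper functions are hypothetical stand-ins for the runtime facilities described above, not actual KVM interfaces:

typedef struct node node_t;                 /* a (service, location) node handle */

extern node_t *discover_next_node(void);    /* random walk; NULL when exhausted  */
extern void    migrate_to(node_t *n);       /* transparent process migration     */
extern void    run_user_task_step(void);    /* one iteration body on this node   */
extern long    now_ms(void);

/* Best-effort iteration under a soft time budget; an iteration body is never
   partially executed. */
static void bus_thread(long budget_ms)
{
    node_t *n;
    while (budget_ms > 0 && (n = discover_next_node()) != NULL) {
        long start = now_ms();
        migrate_to(n);            /* resumes on the target node     */
        run_user_task_step();     /* switch to the user task thread */
        budget_ms -= now_ms() - start;
    }
}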
4.2 Transparent Process Migration
Transparent process migration is implemented as a native method, migrate, in a Spatial Views system class. It is used in the implementation of foreach and forany operations. migrate takes the destination node address as its parameter.
Fig. 4. (a) An iPAQ with a camera sleeve; (b) the network topology used in the experiment
When migrate is called, the Spatial Views KVM sends the whole heap to the destination, as well as the virtual machine status, including the thread queue, instruction counter, the execution stack pointer and other information. The KVM running on the destination node receives the heap contents and the KVM status and starts a new process. Instead of ordinary process initialization, the receiving KVM populates its heap with the contents received from the network and adjusts its registers and data structures with the KVM status received from the network. To make migrate more efficient, we enforce a garbage collection before each migration.
5 Experiments
We used 10 Compaq iPAQ PDA's (Models H3700 and H3800) as our test bed, 2 of which are equipped with camera sleeves developed as part of the Mercury project at HP Cambridge Research Laboratory (CRL) (Figure 4(a)). The iPAQ's were connected via 802.11b wireless technology. Since we had not implemented a location service based on GPS or other location technology, all node locations were statically configured in these experiments.
5.1 Application Example
We implemented the person search application discussed in Section 3.3. We timed the execution of the application on 10 iPAQ PDA's connected by an 802.11b wireless network. The network topology is shown in Figure 4(b).¹
¹ In this paper, "network topology" refers to the network topology observed by one program execution. Another execution is very likely to observe a different topology, because the network is changing.
Nodes "i" and "j" have cameras, shown as dark gray pentagons in the figure; nodes "b", "c", and "f" provide the face detection service, shown as light gray triangles in the figure. The program starts from node "a" and eventually visits all the nodes in the network in depth-first order. Once it finds a node with a camera, it takes a picture and checks if there is a red region in the picture. If there is, the program will look for a node providing the face detection service. It stops on the first node with the service and looks for a face in the picture. If a face is found, and it is close to the red region in the picture, the program records the location where the picture was taken. Once the program has finished with all the nodes, it migrates back to the starting node. We experimented with two situations. Situation 1: A red region is detected on both node "i" and node "j", but a face is found only in the picture from node "j". Situation 2: No red region is detected on either node "i" or node "j", so no face detection is triggered. We timed the executions in both situations. The program took on average 23.1 seconds in situation 1 and 10.0 seconds in situation 2. In both cases, the time constraint was not violated. It is important to note that all the iPAQ's use SA-1100 StrongARM processors running at 206 MHz, but the nodes that provide the face detection service offload the face detection computation to a PC. The execution time for the first situation was dramatically reduced by this offloading, as suggested in [25].
5.2 One-Hop Migration Time
To assess the efficiency of execution migration, we measured the one-hop migration time. We measured the overall execution time of two consecutive migrations, one migrating to a neighbor, followed by another one migrating back. The time taken by those two consecutive migrations is the round-trip time for one-hop migration, which is twice the migration time. We measured the time for different live data sizes (the heap size is 128 KB, but only live data are transferred). The result is shown in Figure 5 for a wired (100 Mbps Ethernet) and a wireless (11 Mbps 802.11b) connection. In the KVM heap, there is a permanent space which is not garbage collectible. For our test program, the size of the permanent space is 65 KB (66560 bytes). The contents of the permanent space include Java system classes, which are available on all the nodes, and strings, most of which are used only once in a program. The current implementation transfers the entire permanent space in a migration operation. We are making efforts to avoid this, which we expect would significantly speed up migration.
5.3 Effects of Timeout Constraints
To evaluate the effects of timeout constraints, we fake failures with certain probabilities for the network links. The test program iterates over temperature sensors and reads the temperatures to calculate the average temperature. After finishing on each node, the program tries to connect to a neighbor.
Fig. 5. One-Hop Migration Time
Fig. 6. Topologies for Experiment on Timeout Constraint
If none of the neighbors is reachable, the program waits for 10 ms and tries again, and it keeps trying until it successfully migrates to a neighbor. If the network link failure probability is high, the iteration time may become very long. In that case, the timeout constraints can significantly reduce the iteration time and still produce some result. We did the experiments with two different topologies, shown in Figures 6(a) and 6(b), with the experimental results shown in Figure 7. The time to wait before a successful migration depends on p, the probability that all the links of a node to its neighbors fail; in each topology, p is determined by the failure probability of a single link and the number of neighbors. The time of a single iteration step is this waiting time plus the one-hop migration time, where 400 ms is the maximum one-hop migration time (see Figure 5). If no time constraint is imposed, the expected execution time grows with the number of nodes visited. We omit the task execution time on each node, because the temperature reading is so fast that the time it takes is much less than the migration and waiting time. If a timeout is specified, the expected program execution time is bounded. For Topology (a), the resulting upper bound is 3100 ms, which is verified by the experimental result of 3095 ms (see Figure 7(a)). For Topology (b), the upper bound is 1850 ms, which is also verified by the experimental result of 1802 ms (see Figure 7(b)).
Fig. 7. Effects of Timeout
Using time constraints, a programmer is able to keep a decent quality of result for the program while significantly reducing the execution time. Instead of producing no answer (as happens when a user presses "Ctrl-C" in a traditional programming environment), the program reports a result of reduced quality (e.g. only two temperature readings) when the time budget is used up. The number of nodes visited in our experiments, as the criterion for quality of result, is shown in Figures 7(c) and 7(d).
6 Conclusion
Spatial Views is a programming model that allows the specification of programs to be executed on dynamic and resource-limited networks of embedded systems. In such environments, the physical location of nodes is crucial. Spatial Views allows a user to specify a virtual network based on common node characteristics and location. Nodes in such a virtual network can be visited using an iterator or selector. Execution migration, node discovery, and routing are done transparently. Time and other resource constraints allow the programmer to express quality-of-result trade-offs and to manage the inherent volatility of the underlying network. The Spatial Views programming model is simple and expressive. A prototype of Spatial Views, including a compiler, a runtime library and a virtual machine, has been implemented as an extension to J2ME.
Experimental results on a network of up to 10 iPAQ handheld computers running Linux are very encouraging for a person search application. In addition, the effectiveness of time constraints in allowing graceful degradation of the quality of a program's answer was experimentally evaluated for a temperature sensor network with two different network topologies. Spatial Views is the first spatial programming model with best-effort semantics. The model allows optimizations such as parallelization (multiple threads), and quality-of-result vs. resource-usage trade-offs.
References
[1] Juang, P., Oki, H., Wang, Y., Martonosi, M., Peh, L. S., Rubenstein, D.: Energy-efficient computing for wildlife tracking: Design tradeoffs and early experiences with ZebraNet. In: ASPLOS. (2002)
[2] Borcea, C., Intanagonwiwat, C., Saxena, A., Iftode, L.: Self-routing in pervasive computing environments using smart messages. In: the First IEEE Annual Conference on Pervasive Computing and Communications (PerCom). (2003)
[3] Iftode, L., Borcea, C., Iyer, D., Kang, P., Kremer, U., Saxena, A.: Spatial programming with Smart Messages for networks of embedded systems. Technical Report DCS-TR-490, Department of Computer Science, Rutgers University (2002)
[4] Xu, G., Borcea, C., Iftode, L.: Toward a security architecture for smart messages: Challenges, solutions, and open issues. In: Proceedings of the First International Workshop on Mobile Distributed Computing. (2003)
[5] Hill, J., Szewczyk, R., Woo, A., Hollar, S., Culler, D., Pister, K.: System architecture directions for network sensors. In: ASPLOS. (2000)
[6] Levis, P., Culler, D.: Maté: A tiny virtual machine for sensor networks. In: ASPLOS. (2002)
[7] Gay, D., Levis, P., von Behren, R., Welsh, M., Brewer, E., Culler, D.: The nesC language: A holistic approach to networked embedded systems. In: PLDI. (2003)
[8] Intanagonwiwat, C., Govindan, R., Estrin, D.: Directed diffusion: A scalable and robust communication paradigm for sensor networks. In: MobiCom. (2000)
[9] Kulik, J., Rabiner, W., Balakrishnan, H.: Adaptive protocols for information dissemination in wireless sensor networks. In: MobiCom. (1999)
[10] Waldo, J.: The Jini architecture for network-centric computing. ACM Communications (1999)
[11] Adjie-Winoto, W., Schwartz, E., Balakrishnan, H., Lilley, J.: The design and implementation of an intentional naming system. In: SOSP. (1999)
[12] Getting, I. A.: The global positioning system. IEEE Spectrum (1993)
[13] Addlesee, M., Curwen, R., Hodges, S., Newman, J., Steggles, P., Ward, A., Hopper, A.: Implementing a sentient computing system. IEEE Computer (2001)
[14] Harter, A., Hopper, A., Steggles, P., Ward, A., Webster, P.: The anatomy of a context-aware application. In: MobiCom. (1999)
[15] Harter, A., Hopper, A.: A distributed location system for the active office. IEEE Network 8 (1994)
[16] Priyantha, N. B., Miu, A. K. L., Balakrishnan, H., Teller, S. J.: The cricket compass for context-aware mobile applications. In: MobiCom. (2001)
[17] Priyantha, N. B., Chakraborty, A., Balakrishnan, H.: The cricket location-support system. In: MobiCom. (2000)
[18] Bahl, P., Padmanabhan, V. N.: RADAR: An in-building RF-based user location and tracking system. In: INFOCOM (2). (2000)
[19] Borcea, C., Iyer, D., Kang, P., Saxena, A., Iftode, L.: Cooperative computing for distributed embedded systems. In: Proceedings of the 22nd International Conference on Distributed Computing Systems (ICDCS). (2002)
[20] Gray, R. S., Cybenko, G., Kotz, D., Peterson, R. A., Rus, D.: D'Agents: Applications and performance of a mobile-agent system. Software: Practice and Experience (2002)
[21] Gray, R. S.: Agent Tcl: A flexible and secure mobile-agent system. PhD thesis, Dartmouth College (1997)
[22] Rowley, H. A., Baluja, S., Kanade, T.: Neural network-based face detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 20 (1998) 23–38
[23] Sun Microsystems, Inc.: Java 2 Platform, Micro Edition (J2ME). http://java.sun.com/j2me
[24] Sun Microsystems: KVM White Paper. Sun Microsystems, Inc. (2000)
[25] Kremer, U., Hicks, J., Rehg, J.: A compilation framework for power and energy management on mobile computers. In: International Workshop on Languages and Compilers for Parallel Computing (LCPC'01). (2001)
Operation Reuse on Handheld Devices (Extended Abstract)

Yonghua Ding and Zhiyuan Li

Department of Computer Sciences, Purdue University, West Lafayette, Indiana 47907
{ding,li}@cs.purdue.edu
Abstract. Compilers have long used redundancy removal to improve program execution speed. For handheld devices, redundancy removal is particularly attractive because it improves execution speed and energy efficiency at the same time. In a broad view, redundancy exists in many different forms, e.g., redundant computations and redundant branches. We briefly describe our recent efforts to expand the scope of redundancy removal. We attain computation reuse by replacing a code segment by a table look-up. We use IF-merging to merge conditional statements into a single conditional statement. We present part of our preliminary experimental results from an HP/Compaq iPAQ PDA.
1 Introduction
Compilers have long used redundancy removal to improve program execution speed. For handheld devices, which have limited energy resources, redundancy removal is particularly attractive because it improves execution speed and energy efficiency at the same time. In a broad sense, any reuse of a previous result can be viewed as a form of redundancy removal. Recently, our research group has investigated methods to expand the scope of redundancy removal. The investigation has resulted in two forms of operation reuse, namely computation reuse and branch reuse. Computation reuse can be viewed as an extension of common subexpression elimination (CSE). CSE looks for redundancy among expressions in different places of the program. Each such expression computes a single value. In contrast, computation reuse looks for redundancy among different instances of a code segment or several code segments which perform the same sequence of operations. In this paper, we shall discuss computation reuse for a single code segment which exploits value locality [1, 2, 3, 4] via pure software means. We exploit branch reuse through an IF-merging technique which reduces the number of conditional branches executed at run time. This technique does not require special hardware support and thus, unlike hardware techniques, it does not increase the power rate. The merger candidates include IF statements which have identical or similar IF conditions but nonetheless are separated by other statements. The idea of IF-merging can be implemented with various degrees of
274
Yonghua Ding and Zhiyuan Li
aggressiveness : the basic scheme, a more aggressive scheme to allow nonidentical IF conditions, and lastly, a scheme based on path profiling information. In the next two sections, we discuss these techniques respectively and compare each technique with related work. We make a conclusion in the last section.
2 Computation Reuse
Recent research has shown that programs often exhibit value locality [1, 2, 3, 4], a phenomenon in which a small number of values appear repeatedly in the same register or the same memory location. A number of hardware techniques [5, 6, 7, 1, 2, 8, 9, 4] have been proposed to exploit value locality by recording the inputs and outputs of a code segment in a reuse table implemented in hardware. The code segment can be as short as a single instruction. A subsequent instance of the code segment can be simplified to a table look-up if its input has appeared before. The hardware techniques require a nontrivial change to the processor design, typically by adding a special buffer which may contain one to sixteen entries. Each entry records an input (which may consist of several different variables) and its matching output. Such a special buffer increases the hardware design complexity and the hardware cost, and it remains unclear whether the cost is justified for embedded systems and handheld computing devices. Using a software scheme, the table size can be much more flexible, although a table look-up will take more time; the benefit and the overhead must therefore be weighed carefully. In our scheme, we use a series of filtering steps to identify stateless code segments which are good candidates for computation reuse. Figure 1 shows the main steps of our compiler scheme. For each selected code segment, the scheme creates a hashing table to continuously record the inputs and the matching outputs of the code segment. Based on factors such as the value repetition rate, the estimated computation granularity, and the hashing complexity, we develop a formula to estimate whether the table look-up will cost less than repeating the execution. The hashing complexity depends on the hash function and the input/output size. The hashing table can be as large as the number of different input patterns. This offers opportunities to reuse computations whose inputs and outputs do not fit in a special hardware buffer.
2.1 How to Reuse
Computation reuse is applied to a stateless code segment whose output depends entirely on its input variables, i.e. variables and array elements which have upwardly-exposed reads in the segment. The output variables are identified by liveness analysis. A variable computed by the code segment is an output variable if it remains live at the exit of the code segment. If we create a look-up hash table for the code segment, the input variables will form the hash key. An invariant never needs to be included in the hash key. Therefore, for convenience, we exclude invariants from the set of input variables.
Fig. 1. Framework of the compiler scheme
The code segment shown in Figure 2(a) has an input variable val which is upwardly exposed to the entry of the function quan. The array power2 is assumed to be invariant. The output variable is integer, which remains live at the exit of the function. Our scheme collects information on three factors which determine the performance gain or loss from computation reuse, namely the computation granularity, the hashing overhead, and the input reuse rate of the given code segment. With execution-frequency profiling information, it is relatively easy to estimate the computation granularity, defined as the number of operations performed by the code segment. To get the reuse rate, we estimate the number (D) of distinct sets of input values by value profiling and the number (N) of instances of the code segment executed. We define the reuse rate r by the following equation:

r = (N - D) / N
Based on the inputs and the outputs of the candidate code segment, we estimate the overhead of the hashing table used for computation reuse. The hashing overhead depends mainly on the complexity of the hash function and the size of each set of inputs and outputs. To produce a hash key for each code segment, we first define an order among the input variables. The bit pattern of each input value forms a part of the key; in the case of multiple input values, the key is composed by concatenating multiple bit strings. In common cases, the hash key can be quite simple. For example, the input of the code segment in Figure 2(a) is an integer scalar, so the hash key is simply the value of the input.
Fig. 2. An example code segment and its transformation by applying computation reuse
The hash index can simply be the hash key modulo the hash table size. Figure 2(b) shows the result of transforming the code segment in Figure 2(a). The hashing overhead depends on the size of the input and the output. The time to determine whether we have a hit is proportional to the size of the input. On a hit, the recorded output values are copied to the corresponding output variables; on a miss, the computed output values must be recorded in the hashing table. In both cases, the cost of copying is proportional to the size of the output. In our scheme, we count the number of extra operations performed during a hit or a miss (note that a hit and a miss incur the same number of extra operations). A hashing collision would increase the hashing overhead; however, we assume there are no hashing collisions.
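To make the mechanics concrete, the following is a minimal sketch of the table look-up code produced for a single-input, single-output segment in the spirit of quan in Figure 2; the table size, hash function, and helper names are illustrative assumptions rather than the actual generated code.

#define TABLE_SIZE 256                      /* illustrative table size */

struct reuse_entry {
    int valid;                              /* has this slot been filled? */
    int input;                              /* hash key: the single input value */
    int output;                             /* recorded output */
};

static struct reuse_entry reuse_tab[TABLE_SIZE];

/* the original computation (in the spirit of quan): scan an invariant table */
static int quan_compute(int val, const int *table, int size)
{
    int i;
    for (i = 0; i < size; i++)
        if (val < table[i])
            break;
    return i;
}

/* the transformed segment: probe the software reuse table first */
int quan_reuse(int val, const int *table, int size)
{
    unsigned idx = (unsigned)val % TABLE_SIZE;      /* hash index = key mod table size */
    struct reuse_entry *e = &reuse_tab[idx];

    if (e->valid && e->input == val)                /* hit: reuse the recorded output */
        return e->output;

    e->valid = 1;                                   /* miss: execute the segment and record */
    e->input = val;
    e->output = quan_compute(val, table, size);
    return e->output;
}

On a hit, the segment's work is replaced by one probe and a copy of the recorded output; on a miss, the segment executes as before and its result is recorded, which is the hit/miss cost accounting used in the next subsection.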
2.2 Cost-Benefit Analysis
For a specific code segment, suppose we know the computation granularity C, the hashing overhead O, and the reuse rate r. The cost of computation before the transformation equals C. The new cost of computation with computation reuse is specified by formula (1) below, and our scheme checks whether the gain from applying computation reuse, defined by formula (2), is positive or negative:

new_cost = r O + (1 - r)(C + O) = O + (1 - r) C    (1)
gain = C - new_cost = r C - O    (2)
In the above, computation reuse improves performance for the specific code segment if and only if the condition in formula (3) is satisfied:

r > O / C    (3)

Obviously, the reuse rate can never be greater than 1. This gives us another criterion to filter out code segments so as to reduce the complexity of value-set profiling: the compiler scheme removes from further consideration any code segment which does not satisfy O < C, since for such a segment even a reuse rate of 1 cannot yield a gain. For the remaining code segments, value profiling is performed to obtain r. After we obtain r, the compiler picks the code segments which satisfy formula (3) for computation reuse. Such code segments are transformed into code that performs table look-up.
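The profitability test that follows from this analysis can be written as a small helper; this is a sketch under the cost model above, with illustrative function and parameter names.

/* Profitability test for a candidate code segment.
   C: computation granularity (operations per execution of the segment)
   O: hashing overhead (extra operations on either a hit or a miss)
   r: reuse rate estimated by value-set profiling, 0 <= r <= 1 */
int worth_transforming(double C, double O, double r)
{
    double new_cost = O + (1.0 - r) * C;   /* a hit costs O; a miss costs C + O */
    double gain = C - new_cost;            /* equals r * C - O                  */
    return gain > 0.0;                     /* equivalent to r > O / C           */
}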
2.3 Value-Set Profiling
Our scheme requires information on the reuse rate, which measures the repetitiveness of a set of input values for a code segment. This is in contrast to single-variable value profiling [10], where one records the number of different values of a variable written by an instruction during the program execution; the ratio of this number to the total number of executions of the instruction defines the value locality at the instruction (the lower the ratio, the higher the locality). The locality of a set of values, unfortunately, cannot be directly derived from the locality of the member values. For example, suppose two input variables each have two distinct values; the set of their value pairs may have two, three, or four distinct combinations. Therefore, our scheme first needs to define the code segments for which we conduct value-set profiling. Given such a code segment, profiling code stubs can be inserted to record its distinct sets of input values. If we indiscriminately performed such value-set profiling for all possible code segments, the profiling cost would be prohibitive. To limit this cost, we confine the code segments of interest to frequently executed routines, loops, and IF branches. Such frequency information is available from well-known tools such as gprof and gcov.
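A value-set profiling stub can be as simple as a hash set keyed on the tuple of input values; the sketch below assumes a two-variable input set, and the names and table size are illustrative rather than the instrumentation our profiler actually emits.

#include <stdint.h>
#include <stdlib.h>

#define BUCKETS 4096                        /* illustrative table size */

struct seen_input {
    struct seen_input *next;
    int key[2];                             /* example: a two-variable input set */
};

static struct seen_input *buckets[BUCKETS];
static long distinct_inputs;                /* number of distinct input sets      */
static long total_instances;                /* number of executions observed      */

/* stub inserted at the entry of a candidate code segment */
void profile_input_set(int in0, int in1)
{
    uint32_t h = ((uint32_t)in0 * 31u + (uint32_t)in1) % BUCKETS;
    struct seen_input *p;

    total_instances++;
    for (p = buckets[h]; p != NULL; p = p->next)
        if (p->key[0] == in0 && p->key[1] == in1)
            return;                         /* this input set was seen before */

    p = malloc(sizeof *p);                  /* first occurrence: remember it */
    p->key[0] = in0;
    p->key[1] = in1;
    p->next = buckets[h];
    buckets[h] = p;
    distinct_inputs++;
}

At the end of the profiling run, the reuse rate follows from the two counters as (total_instances - distinct_inputs) / total_instances.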
2.4 Experimental Results
We use Compaq's iPAQ 3650 for the experiments. The iPAQ 3650 has a 206 MHz Intel StrongARM SA-1110 processor [11] and 32 MB RAM, and it has a 16 KB instruction cache and an 8 KB data cache, both 32-way set-associative. To measure the energy consumption of the handheld device, we connect an HP 3458a high-precision digital multimeter and record the actual current drawn by the handheld computer during program execution. We have experimented with six multimedia programs from Mediabench [12] and the GNU Go game. In our experiments, we use the default input parameters and input files as specified on the Mediabench web site. The results from these programs are described below. The two programs G721_encode and G721_decode perform voice compression and decompression, respectively, based on the G.721 standard. They both call a function quan which has a computation reuse rate of over 99%.
The programs MPEG2_encode and MPEG2_decode encode and decode MPEG data, respectively. Our scheme identifies the function fdct for computation reuse in MPEG2_encode and the function Reference_IDCT in MPEG2_decode. RASTA, which implements front-end algorithms of speech recognition, is a program for RASTA-PLP processing. Its most time-consuming function FR4TR contains a code segment with one input variable and six output variables; the input repetition rate is 99.6%. UNEPIC is an image decompression program. Its main function contains a loop to which our compiler scheme is applied. The loop body has a single input variable and a single output variable, both integers, and the input has a repetition rate of 65.1%. GNU Go is a Go-playing program. In our experiments, we use the input parameters "-b 6 -r 2", where "-b 6" means playing 6 steps in benchmark mode and "-r 2" means setting the random seed to 2 (to make it easier to verify results). The function accumulate_influence contains eight code segments for computation reuse, and the average repetition rate of their inputs is 98.2%. Tables 1 and 2 compare the performance and energy consumption, respectively, before and after the transformation. The machine codes (both before and after our transformations) are generated by the GCC compiler (pocket Linux version) with the most aggressive optimizations (O3). The energy is measured in Joules (J).
Since our computation reuse scheme is based on profiling, we test the effectiveness of the scheme with different input files. The program transformation is based on profiling with the default input files from the Mediabench web site, and we then run the transformed programs with other input files. We show the results in Table 3. GNU Go has no input files, so we change the parameter from 6-step to 9-step. For each other program, we arbitrarily collect one input file from the Internet or from another benchmark suite such as MiBench [13]. We list the sources of the input files in the second column of Table 3. For G721, we choose the input file small.pcm from the MiBench program ADPCM. We select tens_015.m2v, which plays table tennis, from the Tektronix web site, and extract the first 6 frames as the input of MPEG2 encode and decode. For RASTA, we choose the input file phone.pcmbe.wav in the 1998 RASTA test suite from ICSI. For UNEPIC, we take the input file baboon.tif of EPIC and generate its UNEPIC input file by running EPIC with baboon.tif as input. The last column of Table 3 shows the effectiveness of our scheme. Based on the profiling information obtained with the default input files, the transformed programs achieve substantial performance improvements on these other input files.
2.5 Related Work
Since Michie introduced the concept of memoization [8], the idea of computation reuse had been used mainly in the context of declarative languages until the early 90’s. In the past decade, many researchers have applied this concept to reuse the intermediate computation results of previously executed instructions [5, 6, 7, 1, 2, 9, 4]. Richardson applies computation reuse to two applications by recording the previous computation results in a result cache [9]. However, he does not specify how the technique was implemented, and the result cache in his paper is a special hardware cache. Sodani and Sohi [4] propose an instruction reuse method. The performance improvement of instruction level reuse is not significant, due to the small reuse granularity [14]. In the block and sub-block reuse schemes [1, 2], hardware mechanisms are proposed to exploit computation reuse in a basic block or sub-block. The reuse granularity on basic block level seems still too small, and
the hardware needs to handle a large number of basic blocks for computation reuse. Connors and Hwu propose a hybrid technique [7] which combines software and hardware for reusing the intermediate computation results of code regions. The compiler identifies the candidate code segments with value profiling. During execution, the computation results of these reusable code regions are recorded into hardware buffers for potential reuse. Their compiler analysis can identify large reuse code regions and feed the analysis results to the hardware through an extended instruction set architecture. In the design of the hardware buffer, they limit the buffer size to 8 entries for each code segment.
3 IF-Merging
Modern microprocessors use deep instruction pipelining to increase the number of instructions processed per clock cycle. Branch instructions, however, tend to degrade the efficiency of deep pipelining. Further, conditional branches reduce the size of basic blocks, introduce control dependences between instructions, and hence may hamper the compiler's ability to perform code improvement techniques such as redundancy removal, software pipelining, and so on [15, 16, 17]. To reduce the penalty due to branch instructions, researchers have proposed many techniques, including static and dynamic branch prediction [18, 19], predicated execution [20, 21], branch reordering [17], branch alignment [22], and branch elimination [15, 16, 23]. Among these, branch prediction, especially dynamic branch prediction, has been extensively studied and is widely used in modern high-performance microarchitectures. Branch prediction predicts the outcome of a branch in advance so that the instruction at the target address can be fetched without delay. However, if the prediction is incorrect, the instructions fetched after the branch have to be squashed, which wastes CPU cycles and power. Hence, a high prediction rate is critical to the performance of high-performance microprocessors, and almost all of them today employ some form of hardware support for dynamic branch prediction. In contrast, processors designed for power-aware systems, such as mobile wireless computing and embedded systems, must take both program speed and power consumption into consideration, and the concern for the latter may often be greater than for the former. A branch predictor dissipates a non-trivial amount of power, which can be 10% or more of the total processor power dissipation. Such a predictor, therefore, may not be found on microprocessors that have more stringent power constraints [24]. Hardware support for predicated execution [25] of instructions has been used on certain microprocessors, such as the Intel XScale. Predicated execution removes forward conditional branches by attaching flags to instructions. The instructions are always fetched and decoded, but if the predicate evaluates to false, a predicated instruction does not commit.
Fig. 3. An example code shows the opportunity of basic IF-merging
Obviously, the effectiveness of predicated execution highly depends on the rate at which the predicates evaluate to true. If that rate is low, the waste in CPU cycles and power can be rather high. It is also worth noting that branch prediction, as a run-time technique, generally does not enhance the compiler's ability to improve code. Recently proposed speculative load/store instructions expose the control of speculative execution to the software, which may increase the compiler's ability to pursue more aggressive code improvement techniques [26]. However, with today's technology, hardware support for speculative execution tends to increase power consumption considerably, so such support is not available on microprocessors which have more stringent power constraints. In order to reduce the number of conditional branches executed at run time, we perform a source-level program transformation called IF-merging. This technique does not require special hardware support and it does not increase the power dissipation rate. Using this technique, the compiler identifies IF statements which can be merged to increase the size of the basic blocks, such that more instruction-level parallelism (ILP) may be exposed to the compiler back end and, at run time, fewer branch instructions are executed. The merger candidates include IF statements which have identical or similar IF conditions but are separated by other statements; programmers usually leave them as separate IF statements to make the program more readable. The idea of IF-merging can be implemented with various degrees of aggressiveness: the basic scheme, a more aggressive scheme that allows nonidentical IF conditions, and, lastly, a scheme based on path profiling information.
3.1 A Basic IF-Merging Scheme
In the basic scheme, we merge IF statements with identical IF conditions to reduce the number of branches and condition comparisons. Figure 3(a) shows an example extracted from the Mediabench suite. In the example code, two IF
statements with identical conditions are separated by other statements, which we call intermediate statements. Based on the data dependence information, we find that these intermediate statements have data dependences with the two merger candidates. Hence, we cannot move any of the intermediate statements before or after the new IF statement. Instead, we duplicate the intermediate statements and place one copy in the then-component of the merged IF statement and another in the else-component. Figure 3(b) shows the code obtained by applying IF-merging to the code in Figure 3(a). Throughout this section, we assume the source program is structured. Thus we can view the function body as a tree of code segments, such that each node may represent a loop, a compound IF statement, a then-component, an else-component, or simply a block of assignment statements and function calls. The function body is the root of the tree. If a node A is the parent of another node B, then the code segment represented by B is nested in the code segment represented by A. Unless stated otherwise, the merger candidates must always have a common parent in such a tree. Obviously, in all of our IF-merging schemes, we need to be able to identify identical IF conditions, which requires symbolic analysis of IF conditions. To facilitate such analysis, we perform alias analysis [27] and global value numbering [28], and transform the program into static single assignment (SSA) form [29], so that variables with identical values can be clearly identified. We then apply a set of normalization rules to the predicate trees of IF conditions, including the sub-trees that represent the arithmetic expressions in those conditions. Such normalization rules and the ensuing symbolic comparisons have been discussed extensively in the literature on software engineering and parallelizing compilers.
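The following sketch illustrates the basic scheme on made-up code rather than the Mediabench fragment of Figure 3: two IF statements with an identical condition are merged, and the intermediate statement, which has data dependences with both merger candidates, is duplicated into the then- and else-components.

/* Before: two IF statements with an identical condition, separated by an
   intermediate statement that depends on both (illustrative code only). */
void before(int cond, int *a, int *b, int *c)
{
    if (cond)
        *a = *b + 1;
    *c = *a * 2;          /* intermediate statement, depends on *a */
    if (cond)
        *b = *c - 3;
}

/* After basic IF-merging: one branch; the intermediate statement is
   duplicated into both the then- and else-components because it has
   data dependences with the merger candidates. */
void after(int cond, int *a, int *b, int *c)
{
    if (cond) {
        *a = *b + 1;
        *c = *a * 2;      /* duplicated copy */
        *b = *c - 3;
    } else {
        *c = *a * 2;      /* duplicated copy */
    }
}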
Fig. 4. Nonidentical conditions with common sub-predicates and its transformation
Fig. 5. Example code shows IF-merging with profiling
3.2 IF-Condition Factoring
The basic IF-merging scheme only identifies IF statements with identical IF conditions for IF-merging. Suppose the conditions are nonidentical but have common sub-predicates. By factoring the conditions we can also reduce the number of branches. The left-hand side of Figure 4 shows an example code extracted from Mediabench, and the right-hand side of Figure 4 shows the transformed code. Our factoring scheme identifies IF statements with conditions containing common sub-predicates, and it factors the common sub-predicates out of the conditions to construct a common IF statement, which encloses the original IF statements with the remaining sub-predicates as conditions.
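The sketch below illustrates factoring on made-up conditions rather than the fragment of Figure 4: the common sub-predicate is hoisted into an enclosing IF statement, and the remaining sub-predicates guard the original bodies.

/* Before: two IF statements whose conditions share the common
   sub-predicate (x > 0); illustrative code only. */
void before(int x, int y, int z, int *out)
{
    if (x > 0 && y > 0)
        out[0] = x + y;
    if (x > 0 && z > 0)
        out[1] = x + z;
}

/* After factoring: the common sub-predicate is tested once in an
   enclosing IF; the remaining sub-predicates guard the original bodies. */
void after(int x, int y, int z, int *out)
{
    if (x > 0) {
        if (y > 0)
            out[0] = x + y;
        if (z > 0)
            out[1] = x + z;
    }
}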
3.3 IF-Merging with Path Profiling
With path profiling information [30], we can make the IF-merging technique even more aggressive. For example, in the case of the code in the left-hand side of Figure 5, if the path profiling shows that the majority of executions take both S1 and S2, then we can transform the code into that shown in the right-hand side of Figure 5. Let p denote the probability that both IF statements are taken. If p is greater than 0.5, merging the two IF statements will reduce the number of branches: the original code executes two branches, while the merged code executes 1 + 2(1 - p) < 2 branches. The number of comparison operations in the transformed code is defined by formula (4) below, where q is the probability that the first IF statement is taken. Hence, the number of comparison operations in the transformed code ranges from 2 to 2.5 when p is greater than 0.5, while the original code has two comparison operations. Although the number of condition comparisons is increased after merging, the performance has a net gain. Further, the then-component of the merged IF statement may present more opportunities for other optimizations.

Fig. 6. Nested IF statements and the transformation of IF-exchange

Another case for consideration is nested IF statements whose conditions are dependent. For example, the condition (or its negation) of the inner IF statement may imply the condition of the outer IF statement. (Obviously, the opposite is normally false; otherwise we could remove the inner IF statement.) Given such nested IF statements, with profiling information on the taken probabilities, we can decide whether it is beneficial to exchange the nesting. Figure 6 shows an example of nested IF statements on the left-hand side, and the code after the IF-exchange transformation on the right-hand side. In this example, we suppose that the negation of the inner condition implies the outer condition (for example, the outer condition might be X > 0 while the inner condition is X < 100, whose negation X >= 100 implies X > 0). We further suppose that, based on profiling information, the taken probability p1 of the outer IF statement is greater than the taken probability p2 of the inner IF statement. In the original code, both the number of branches and the number of comparisons are 1 + p1, while in the transformed code both are 1 + p2. Since p1 is greater than p2, the IF-exchange reduces both the number of branches and the number of comparisons.
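The following sketch illustrates IF-exchange on made-up conditions rather than the code of Figure 6, assuming that the negation of the inner condition implies the outer condition.

/* Before: nested IF statements where the negation of the inner condition
   (x >= 100) implies the outer condition (x > 0); illustrative code only. */
void before(int x, int *a, int *b)
{
    if (x > 0) {              /* outer condition */
        *a = *a + 1;
        if (x < 100)          /* inner condition */
            *b = *b + 1;
    }
}

/* After IF-exchange: the inner condition is tested first; when it is
   false, the outer condition is known to hold, so the outer body runs
   without a second test on that path. */
void after(int x, int *a, int *b)
{
    if (x < 100) {
        if (x > 0) {
            *a = *a + 1;
            *b = *b + 1;
        }
    } else {                  /* x >= 100 implies x > 0 */
        *a = *a + 1;
    }
}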
3.4 Experimental Results
We have experimented with eight multimedia programs from Mediabench [12]. Tables 4 and 5 show the performance and energy consumption, respectively, before and after IF-merging. The machine codes (both before and after our transformations) are generated by GCC (pocket Linux version) with the most aggressive optimizations (O3). Due to the space limit, detailed explanations are omitted.
3.5 Related Work
To reduce branch cost, many branch reduction techniques have been proposed, which include branch reordering [17], conditional branch elimination [15, 23],
branch alignment [22], and predicated execution [20, 21], etc. As we finished writing this paper, we discovered that part of our work in Section 3.3 is similar to a recent independent effort by Kreahling et al. [16]. They present a profile-based condition-merging technique to replace the execution of multiple branches, which have different conditions, with a single branch. Their technique, however, does not consider branches separated by intermediate statements, nor does it consider nested IF statements, which we consider in Section 3.3. We have also given an analysis of the trade-off, which is missing in [16]. Moreover, they restrict the conditions in the candidate IF statements to be comparisons between variables and constants; we do not have such restrictions. Calder and Grunwald propose an improved branch alignment based on an architectural cost model and the branch prediction architecture [22]. Their branch alignment algorithm can improve a broad range of static and dynamic branch prediction architectures. In [23], Mueller and Whalley describe an optimization to avoid conditional branches by replicating code. They perform a program analysis to determine the conditional branches in a loop which can be avoided by code replication. They do not merge branches separated by intermediate statements. In [17], Yang et al. describe reordering sequences of conditional branches using profiling data. By branch reordering, the number of branches executed at run time is reduced. These techniques seem orthogonal to our IF-merging.
4 Conclusion
In this extended abstract, we use computation reuse and IF-merging as two examples of expanding the scope of redundancy removal. We show that both program execution time and energy consumption can be reduced quite substantially via such operation reuse techniques. It is clear that profile information is important in both examples. We believe that a general model for redundancy detection can be highly useful for uncovering more opportunities of redundancy removal. As our next step, our research group is investigating alternative models for this purpose.
Acknowledgements This work is sponsored by the National Science Foundation through grants CCR-0208760, ACI/ITR-0082834, and CCR-9975309.
References [1] Huang, J., Lilja, D.: Exploiting basic block value locality with block reuse. In The 5th Int. Symp. on High-Performance Computer Architecture (1999) [2] Huang, J., Lilja, D.: Balancing reuse opportunities and performance gains with sub-block value reuse. Technical Report, University of Minnesota (2002) [3] Sastry, S., Bodik, R., Smith, J.: Characterizing coarse-grained reuse of computation. 3rd ACM Workshop on Feedback Directed and Dynamic Optimization (2000) [4] Sodani, A., Sohi, G.: Dynamic instruction reuse. Proc. of the 24th Int. Symp. on Computer Architecture (1997) 194–205 [5] Citron, D., Feitelson, D.: Hardware memoization of mathematical and trigonometric functions. Technical Report, Hebrew University of Jerusalem (2000) [6] Connors, D., Hunter, H., Cheng, B., Hwu, W.: Hardware support for dynamic activation of compiler-directed computation reuse. Proc. of the 9th Int. Conf. on Architecture Support for Programming Languages and Operating Systems (2000) [7] Connors, D., Hwu, W.: Compiler-directed dynamic computation reuse: Rationale and initial results. Proc. of 32nd Int. Symp. on Microarchitecture (1999) 158–169 [8] Michie, D.: Memo functions and machine learning. Nature 218 (1968) 19–22 [9] Richardson, S.: Exploiting trivial and redundant computation. Proc. of the 11th Symp. on Computer Arithmetic (1993) 220–227 [10] Calder, B., Feller, P., Eustace, A.: Value profiling. Proc. of the 30th Int. Symp. on Microarchitecture (1997) 259–269 [11] : Intel StrongARM SA-1110 Microprocessor Developer’s Manual. (2001) [12] Lee, C., Potkonjak, M., Mangione-Smith, W.: Mediabench: A tool for evaluating and synthesizing multimedia and communications systems. Proc. of the 30th Int. Symp. on Microarchitecture (1997) 330–335 [13] Guthaus, M., Ringenberg, J., Ernst, D., Austin, T., Mudge, T., Brown, R.: Mibench: A free, commercially representative embedded benchmark suite. IEEE 4th Annual Workshop on Workload Characterization (2001) 3–14
[14] Sodani, A., Sohi, G.: Understanding the differences between value prediction and instruction reuse. Proc. of the 31th Int. Symp. on Computer Architecture (1998) 205–215 [15] Bodik, R., Gupta, R., Soffa, M.: Interprocedural conditional branch elimination. Proc. of the Conference on Programming Language Design and Implementation (1997) [16] Kreahling, W., Whalley, D., Bailey, M., Yuan, X., Uh, G., Engelen, R.: Branch elimination via multi-variable condition merging. Proc. of the European Conference on Parallel and Distributed Computing (2003) [17] Yang, M., Uh, G., Whalley, D.: Efficient and effective branch reordering using profile data. Trans. on Programming Languages and Systems 24 (2002) [18] Ball, T., Larus, J.: Branch prediction for free. Proc. of the Conference on Programming Language Design and Implementation (1993) [19] Smith, J.: A study of branch prediction strategies. Proc. of the 4th International Symposium on Computer Architecture (1981) [20] Park, J., Schlansker, M.: On predicated execution. Technical Report. HPL-91-58, Hewlett Packard Laboratories (1991) [21] Sias, J., August, D., , Hwu, W.: Accurate and efficient predicate analysis with binary decision diagrams. Proc. of the 33rd International Symposium on Microarchitecture (2000) [22] Calder, B., Grunwald, D.: Reducing branch costs via branch alignment. Proc. of the 6th International Conference on Architectural Support for Programming Languages and Operating Systems (1994) [23] Mueller, F., Whalley, D.: Avoiding conditional branches by code replication. Proc. of the Conference on Programming Language Design and Implementation (1995) [24] Parikh, D., Skadron, K., Zhang, Y., Barcella, M., Stan, M.: Power issues related to branch prediction. Proc. of the 8th International symposium on High-Performance Computer Architecture (2002) [25] Hennessy, J., Patterson, D.: Computer architecture: A quantitative approach. (Second Edition, Morgan Kaufmann) [26] Wu, Y., Lee, Y.: Comprehensive redundant load elimination for the ia-64 architecture. 12th International Workshop, LCPC’99 (1999) [27] Hind, M., Burke, M., Carini, P., Choi, J.: Interprocedural pointer alias analysis. ACM Trans. on Programming Languages and Systems 21 (1999) [28] Simpson, T.: Global value numbering. Technical report, Rice University (1994) [29] Wolfe, M.: High performance compilers for parallel computing. Addison-Wesley Publishing Company (1996) [30] Ball, T., Larus, J.: Efficient path profiling. Proc. of the 29th International Symposium on Microarchitecture (1996)
Memory Redundancy Elimination to Improve Application Energy Efficiency
Keith D. Cooper and Li Xu
Department of Computer Science, Rice University, Houston, Texas, USA
Abstract. Application energy consumption has become an increasingly important issue for both high-end microprocessors and mobile and embedded devices. A multitude of circuit- and architecture-level techniques have been developed to improve application energy efficiency. However, relatively little work studies the effects of compiler transformations on application energy efficiency. In this paper, we use energy-estimation tools to profile the execution of benchmark applications. The results show that energy consumption due to memory instructions accounts for a large share of total energy. An effective compiler technique that can improve energy efficiency is memory redundancy elimination, which reduces both application execution cycles and the number of cache accesses. We evaluate the energy improvement over 12 benchmark applications from SPEC2000 and MediaBench. The results show that memory redundancy elimination can significantly reduce energy in the processor clocking network and the instruction and data caches. The overall application energy consumption can be reduced by up to 15%, and the reduction in terms of energy-delay product is up to 24%.
1 Introduction
Application energy consumption has become an increasingly important issue for the whole array of microprocessors spanning from high-end processors used in data centers to those inside mobile and embedded devices. Energy conservation is currently the target of intense research efforts. A multitude of circuit and architecture-level techniques have been proposed and developed to reduce processor energy consumption [1, 2, 3]. However, many of these research efforts focus on hardware techniques, such as dynamic voltage scaling (DVS) [1, 4, 5] and low-energy cache design [2, 6, 3]. Equally important, application-level techniques are necessary to make program execution more energy efficient, as ultimately, it is the applications executed by the processors that determine the total energy consumption. In this paper, we look at how compiler techniques can be used to improve application energy efficiency. In Section 2, we use energy profiling to identify top energy consuming micro-architecture components and motivate the study of memory redundancy elimination as a potential technique to reduce energy. Section 3 overviews a new algorithm for memory redundancy detection and presents
two frameworks to remove redundant memory instructions. Section 4 presents the experimental results. Section 5 summarizes related work and Section 6 concludes the paper.
2 Energy Profiling
Optimizing compilers have been successful at improving program performance [7]. One key reason is that accurate performance models are used to evaluate various code transformations. Similarly, automatic optimization of program energy efficiency requires accurate modeling of energy dissipation. Unfortunately, most energy estimation tools work at the circuit and transistor level and require detailed information from circuit design. Recently, researchers have started to build higher-level energy modeling tools, such as Wattch [8] and SimplePower [9], which can estimate the power and energy dissipation of various micro-architecture components. When combined with instruction-level performance simulators, these tools provide an ideal infrastructure to evaluate compiler optimizations targeting energy efficiency. In our work, we use the Wattch [8] tool along with the SimpleScalar [10] simulators to study compiler techniques in terms of application energy consumption. Our approach is as follows: we first profile application execution and measure the energy consumption breakdown by major processor components. This step reveals how energy is dissipated. Based on the application energy profile, we can then identify promising code optimizations to improve energy efficiency. In CMOS circuits, dynamic power consumption accounts for the major share of power dissipation. We use Wattch to get the dynamic power breakdown of superscalar processors. The processor configuration is shown in Table 1, which is similar to that of the Alpha 21264.
Fig. 1. Active dynamic power consumption by micro-architecture components
The Wattch tool is configured to use the parameters of a 0.35 um process at 600 MHz with a supply voltage of 2.5 V. The pie chart in Figure 1 shows the percentage of dynamic power dissipation for the micro-architecture components, assuming each component is fully active. Figure 1 shows that the top dynamic power-dissipating components are the global clocking network and the on-chip caches. Combined, they account for more than 70% of the total dynamic power dissipation. This suggests that the clocking network and caches should be the primary targets for compiler techniques to improve energy efficiency. The processor energy of dynamic switching can be defined as:
E_dynamic = a C V_dd^2

In the above equation, C is the load capacitance, V_dd is the supply voltage, and a is the switching activity factor indicating how often logic transitions from low to high take place [11]. C and V_dd depend on the particular process technology and circuit design, while the activity factor a is related to the code being executed [11]. The main leverage for the compiler to minimize E_dynamic is to reduce a. We ran 12 benchmark applications and profiled their energy consumption. The benchmarks are chosen from the widely used SPEC2000 and MediaBench [12] suites. Table 2 gives descriptions of the benchmark applications. We compiled the benchmark applications using the GNU GCC compiler with -O4 level optimization. The compiled executables are then run on the out-of-order superscalar simulator with Wattch to collect run-time and energy statistics. Table 3 shows the total energy consumption and the energy in the clocking network, top-level I-Cache, and D-Cache. The results show that the energy distribution over the micro-architecture components is very similar to the power distribution in Figure 1. The major difference is that all applications exhibit good cache locality, and the L2 cache is
rarely accessed due to the low number of top level cache misses. Therefore, compared to energy in other components, energy in L2 cache is negligible due to infrequent access activities. As shown in Table 3, energy consumption in the clocking network and top level cache accounts for a large share of total application energy. For the 12 applications, the clocking network and L1 cache account for more than 58% (geometric mean) of total energy. Table 4 shows the dynamic instruction count and dynamic load and store count. The results show that memory instructions account for about 24% (geometric mean) of dynamic instructions, and for the more sophisticated SPEC2000 applications, the percentage is even higher, with a geometric mean of 36%. The large number of dynamic memory instructions have the following consequences: first, these instructions must be fetched from the I-Cache before execution, thus costing energy in the I-Cache; second, instruction execution also costs energy, including that in the clocking network; and thirdly, the execution of memory instructions also requires D-Cache access, and this is the major cause of D-Cache energy consumption. As both the clocking network and caches are top power-
dissipating components, memory instructions thus have a significant impact on total energy consumption. The above energy profiling data indicate that memory instructions are a good target for improving application energy efficiency. Redundant memory instructions represent wasted energy; removing them [14, 40] should reduce energy costs. The dominant modern processor architecture is the load-store architecture, in which most instructions operate on data in the register file, and only loads and stores can access memory. Between the processor core and main memory, the I-Cache stores the instructions to be fetched and executed, and the D-Cache serves as a local copy of memory data, so loads and stores can access data faster. When redundant memory instructions are removed, the traffic from memory through the I-Cache to the CPU core is reduced because fewer instructions are fetched. This saves energy in the I-Cache. Data accesses in the D-Cache are also reduced, saving energy in the D-Cache. Finally, removing memory instructions speeds up the application and saves energy in the clocking network. As shown by the analysis above, the clocking network and cache structures are among the top energy-consuming components in the processor, so energy savings in these components can significantly reduce total energy consumption. The rest of this paper presents compile-time memory redundancy elimination and evaluates its effectiveness at improving energy efficiency.
3 Memory Redundancy Elimination
Memory redundancy elimination is a compile-time technique to remove unnecessary memory instructions. Consider the sample C code in Figure 2: in the functions full_red, par_cond and par_loop, the struct field accesses p->x and p->y are generally compiled into loads. However, the loads in lines 11 and 12 are fully redundant with those in line 10, as they always load the same values at run time; similarly, the loads in line 19 are partially redundant with those in line 17 when the conditional statement is executed; and the loads in line 25 are partially redundant, as the values need to be loaded only in the first loop iteration and all the remaining iterations load the same values. These redundant loads can be detected and removed at compile time. As we discussed in Section 2, memory instructions incur significant dynamic energy consumption, so memory redundancy elimination can be an effective energy-saving transformation.

Fig. 2. Memory Redundancy Example. The loads in lines 11 and 12 for p->x and p->y are fully redundant; the loads in line 19 for p->x and p->y are partially redundant due to the conditional statement in line 16; the loads for p->x and p->y in line 25 are partially redundant as they are loop invariant

In our prior work [13], we presented a new static analysis algorithm to detect memory redundancy. This algorithm uses value numbering on memory operations, and it is the basis for the memory redundancy removal techniques described in this paper. This paper extends the work in [13] by providing a more powerful removal framework which is capable of eliminating a larger set of memory redundancies; furthermore, this paper focuses on energy efficiency benefits, while the previous work concerned performance improvements. In Section 3.1, we first give an overview of this memory redundancy detection algorithm, and in Section 3.2, we present code transformations which use the analysis results of the detection algorithm to remove fully and partially redundant memory instructions.
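The fragment below sketches code of the kind Figure 2 describes; the function names follow the text, but the bodies are illustrative and the line numbers quoted above refer to the original figure, not to this sketch.

struct point { int x, y; };

/* every use of p->x and p->y below compiles to a load */
int full_red(struct point *p)
{
    int a = p->x + p->y;      /* first loads of p->x and p->y        */
    int b = p->x - p->y;      /* fully redundant reloads             */
    int c = p->x * p->y;      /* fully redundant reloads             */
    return a + b + c;
}

int par_cond(struct point *p, int cond)
{
    int a = 0;
    if (cond)
        a = p->x + p->y;      /* loads on the taken path             */
    return a + p->x * p->y;   /* partially redundant: redundant only
                                 when the conditional was executed   */
}

int par_loop(struct point *p, int n)
{
    int s = 0;
    for (int i = 0; i < n; i++)
        s += p->x + p->y;     /* loop-invariant loads: redundant in
                                 every iteration after the first     */
    return s;
}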
3.1 Finding Memory Redundancy
In [13], we presented a new static analysis algorithm to detect memory redundancy. We extended Simpson’s optimistic global value-numbering algorithm [14, 15] to value number memory instructions. SCCVN is a powerful procedure-scope scalar redundancy (i.e. non-memory instructions) detection algorithm. It discovers value-based identical scalar instructions (as opposed to lexical identities), performs optimistic constant propagation, and handles a broad set of algebraic identities. To extend SCCVN so that it can also detect identities for loads and stores, we annotated the memory instructions in the compiler’s intermediate representation (IR) with M-LISTS – lists of the names of memory objects that are potentially defined by the instruction (an M-DEF list) and the names of those that are potentially used by the instruction (an M-USE list). The M-LISTS are computed by a flow-insensitive, context-insensitive, Andersen-style pointer analysis [16]. Our compiler uses a low-level, RISC-style, three-address IR, called ILOC. All memory accesses in ILOC occur on load and store instructions. The other instructions work from an unlimited set of virtual registers. The ILOC load and store code with M-LISTS for line 10 in Figure 2 is shown in the following:
As an example, the M-USE annotation in iLD r1 => r4 M-use[@pa_x @pb_x] means the integer load will load from address r1, put the result in r4, and may access the memory object pa.x or pb.x; this corresponds to p->x in the source code. In the annotated IR with M-LISTs, loads have only an M-USE list, as loads do not change the states of the referenced memory objects, and during value numbering the value numbers of the names in M-USE indicate both the before- and after-states of the memory objects affected by the loads. Stores are annotated with both M-USE and M-DEF, as stores may write new values to memory objects; during value numbering of stores, the value numbers of the names in M-USE indicate the states before the execution of the stores, and the value numbers in M-DEF indicate the states after the execution of the stores. Using M-LISTs, we can value number memory instructions along with scalar instructions and detect instruction identities. To value number a memory instruction, both the normal instruction operands (base address, offset, and result) and the M-LIST names are used as a combined hash key to look up values in the hash table. If there is a match, the memory instructions access the same address with the same value and change the affected memory objects into identical states; therefore the matching instructions are redundant. For example, after value numbering, the three loads which correspond to p->x in the function full_red in Figure 2 all have the same form, iLD r1_vn => r4_vn M-use[@pa_x_vn @pb_x_vn]; therefore the three loads are identities, and the last two are redundant and can be removed to reuse the value in register r4_vn. Also in Figure 2, memory value numbering detects that the loads of p->x and p->y in the functions par_cond and par_loop are redundant.
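A combined hash key for a load can be formed from the value numbers of its operands together with the value numbers of the names on its M-USE list; the following sketch uses illustrative structure and field names, not the compiler's actual data structures.

#include <stdint.h>

struct mem_inst {
    uint32_t addr_vn;           /* value number of the base address operand */
    uint32_t offset_vn;         /* value number of the offset               */
    int      n_muse;
    uint32_t muse_vn[8];        /* value numbers of the M-USE names         */
};

/* combined hash key for value numbering a load: two loads receive the
   same key only if their address, offset, and M-USE states all match */
uint32_t memop_hash_key(const struct mem_inst *m)
{
    uint32_t h = m->addr_vn * 31u + m->offset_vn;
    for (int i = 0; i < m->n_muse; i++)
        h = h * 31u + m->muse_vn[i];
    return h;
}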
Fig. 3. CSE Data Flow Equation System
3.2 Removing Memory Redundancy
After memory redundancies are detected, code transformations are used to eliminate the redundant instructions. We have used two different techniques to perform the elimination phase: traditional common subexpression elimination (CSE) [17] and partial redundancy elimination (PRE) [18, 19, 20]. Using memory value numbering results, we can easily extend scalar CSE and PRE and build unified frameworks that remove both scalar and memory-based redundancies. Memory CSE was first described in [13] and is briefly recapitulated in this paper for completeness; memory PRE is a more powerful removal framework and can eliminate a larger set of memory redundancies. This section shows the two frameworks, extended to include memory redundancy removal.

Available Expressions Traditional common subexpression elimination (CSE) finds and removes redundant scalar expressions (sometimes called fully redundant expressions). It computes the set of expressions that are available on entry to each block as a data-flow problem. An expression is available on entry to a block if every control-flow path that reaches the block contains a computation of that expression. Any expression in the block that is also available on entry to the block (in AVIN) is redundant and can be removed. Figure 3 shows the equations used for value-based CSE. To identify fully redundant memory instructions, we assign each set of equivalent memory instructions a unique ID number. The local set for a block is computed by adding the scalar values and memory IDs defined in the block. When the equations in Figure 3 are solved, the AVIN set contains the scalar values and memory IDs available at the entry of the block. Fully redundant instructions (including redundant memory instructions) can then be detected and removed by scanning the instructions of the block in execution order: if a scalar instruction computes a value that is already available, it is redundant and removed; if a memory instruction's ID is already available, it is redundant and removed. For the example in Figure 2, the new memory CSE removes the four redundant loads in lines 11 and 12, as they are assigned the same IDs as those in line 10.
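Once value numbers and memory IDs are in place, the removal scan over a block is straightforward; the sketch below shows one way to scan a block against its AVIN set, using illustrative types rather than the compiler's actual IR.

#include <stdbool.h>

enum inst_kind { SCALAR, LOAD, STORE, OTHER };

struct inst {
    enum inst_kind kind;
    int value_id;               /* value number (scalar) or shared memory ID (load) */
    bool removed;
};

/* avin[v] is true when value/ID v is available on entry to the block,
   as computed by the data-flow equations of Figure 3 */
void cse_scan_block(struct inst *insts, int n, bool *avin)
{
    for (int i = 0; i < n; i++) {
        struct inst *in = &insts[i];
        if (in->kind != SCALAR && in->kind != LOAD)
            continue;                       /* stores and other instructions are left alone */
        if (avin[in->value_id])
            in->removed = true;             /* fully redundant: reuse the earlier result */
        else
            avin[in->value_id] = true;      /* first occurrence: value becomes available */
    }
}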
Fig. 4. PRE Data Flow Equation System
Partial Redundancy Elimination The key idea behind partial redundancy elimination (PRE) and lazy code motion is to find computations that are redundant on some, but not all, paths [18, 19, 20]. Given an expression at a point where it is redundant on some subset of the paths that reach that point, the transformation inserts evaluations of the expression on the paths where it had not been evaluated, to make the evaluation at that point redundant on all paths. Our transformation is based on the formulation due to Drechsler and Stadel [20], which computes the sets INSERT and DELETE for scalar expressions. The INSERT set for an edge contains those partially redundant expressions that must be duplicated along that edge; the DELETE set for a block contains the expressions in that block that are redundant and can be removed. The data-flow equations are shown in Figure 4. PRE is, essentially, a code motion transformation. Thus, it must preserve data dependences during the transformation.
(The flow, anti, and output dependences of the original program must be preserved [7].) The results from our new memory redundancy detection algorithm let us model dependence relations involving memory instructions and remove redundant loads.¹ To encode the constraints of load motion into the equations for PRE, we must consider both the load address and the states of the memory objects in the M-USE list of the load. Specifically, a load cannot be moved past the instructions of its addressing computation; in addition, other memory instructions might change the states of the memory objects that the load may read from, so a load cannot be moved past any memory instruction which assigns a new value number (i.e., defines a new state) to a memory object in the M-USE list of the load. Using memory value numbering results, we can build a value dependence graph that encodes the dependence relationships among the value numbers of the results of scalar instructions and the value numbers of the M-DEF and M-USE lists of memory instructions. In particular, 1) each scalar instruction becomes the DEF node that defines the value number of its result, and we add a dependence edge from each DEF node of its source operands to the scalar instruction node; 2) each store becomes the DEF node that defines the value numbers of any objects on its M-DEF list that are assigned new value numbers; 3) each load becomes the DEF node for the load result, and we add edges from the DEF nodes for the load address and for the value numbers of any memory objects on the load's M-USE list. Intuitively, the value numbers of scalar operands and M-LIST objects capture the DEF-USE relations among scalar and memory instructions. Stores can be thought of as DEF points for the values stored in the memory objects on their M-DEF lists; the value dependence edges between stores and loads which share common memory objects represent the flow dependences between store and load instructions. Thus, using the value numbers assigned by the memory redundancy detection algorithm, we can build a value dependence graph that represents the dependence relations for both scalar and memory instructions. Once the value dependence graph has been built, the compiler can build the local sets for each block. The first local set contains the instructions whose source operands would change values due to the execution of the block; if an instruction is in this set, code motion must not move it backward beyond the block, as that would violate the dependence rule. We set it to include all instructions in the block other than scalar and load instructions, which prevents the algorithm from moving those instructions. Furthermore, any instructions that depend transitively on these instructions are also included; this can be computed by taking the transitive closure in the value dependence graph with respect to the DEF nodes of the instructions already in the set. Another local set contains the candidate instructions for PRE to remove. In traditional applications of PRE, this candidate set contains only scalar instructions.
¹ We exclude stores from PRE for two reasons. First, loads do not create anti and output dependences. Fixing the positions of stores greatly simplifies dependence graph construction. Second, and equally important, our experiments show that opportunities to remove redundant stores are quite limited [21].
Fig. 5. ILOC Execution Model
Using the value numbers for M-LISTs, we can model memory dependences and put loads into this candidate set as well. We set it to contain both the scalar and the load instructions in the block that are not in the first local set; in other words, it contains the scalars and loads whose movement is not restricted. The last local set in the PRE framework contains all the scalars and loads in the block. By treating memory instructions in this way, we force the data-flow system to consider them. When the data-flow system is solved, the INSERT and DELETE sets contain the scalar instructions and loads that are partially redundant and can be removed. In the example in Figure 2, the partially redundant loads of p->x and p->y in line 19 are in the DELETE set, and copies of these loads are in the INSERT set of the block where the test conditional is false. Similarly, the loads in the loop body in line 25 are removed and copies of these loads are inserted in the loop header. In summary, memory PRE can successfully remove the partial memory redundancies in Figure 2.
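One way to form the local sets described above is sketched below; the graph representation and field names are illustrative assumptions, and the value dependence graph is assumed to be acyclic.

#include <stdbool.h>

struct vdg_node {
    bool is_scalar_or_load;
    int ndeps;
    struct vdg_node **deps;      /* DEF nodes this instruction depends on */
    bool pinned;                 /* must not be moved by PRE              */
    bool candidate;              /* may be moved or removed by PRE        */
};

/* true if n depends, directly or transitively, on a pinned node
   (the value dependence graph is assumed to be acyclic) */
static bool depends_on_pinned(const struct vdg_node *n)
{
    for (int i = 0; i < n->ndeps; i++)
        if (n->deps[i]->pinned || depends_on_pinned(n->deps[i]))
            return true;
    return false;
}

void build_local_sets(struct vdg_node **block, int n)
{
    /* pass 1: everything that is not a scalar or a load stays in place */
    for (int i = 0; i < n; i++)
        block[i]->pinned = !block[i]->is_scalar_or_load;

    /* pass 2: pin whatever transitively depends on a pinned instruction;
       the remaining scalars and loads become the PRE candidates */
    for (int i = 0; i < n; i++) {
        if (!block[i]->pinned && depends_on_pinned(block[i]))
            block[i]->pinned = true;
        block[i]->candidate = block[i]->is_scalar_or_load && !block[i]->pinned;
    }
}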
4 Experimental Results
Figure 5 shows the execution model for our compiler. The C front end (c2i) converts the program into ILOC. The compiler applies multiple analysis and optimization passes to the ILOC code. Finally, the back end (i2ss) generates SimpleScalar executables. To evaluate the energy efficiency improvement of memory redundancy elimination, we implemented memory CSE and memory PRE as ILOC passes, referred to as M-CSE and M-PRE. As the memory versions of CSE and PRE subsume scalar CSE and PRE, to evaluate the effects of memory redundancy removal we also implemented the scalar versions of CSE and PRE, referred to as S-CSE and S-PRE. We use the same benchmarks as in Table 2. The benchmarks are first translated into ILOC, and then multiple passes of traditional compiler optimizations are run on the ILOC code, including constant propagation, dead code elimination, copy coalescing, and control-flow simplification. We then run the whole-program pointer analysis to annotate the ILOC code with M-LISTs. We run separately the S-CSE, M-CSE, S-PRE, and M-PRE passes on the ILOC code, followed by the SimpleScalar back end i2ss, to create the SimpleScalar executables.
Fig. 6. Normalized execution cycles
We then run the generated executables on the out-of-order superscalar simulator with the Wattch tool and collect the run-time performance and energy statistics.

Dynamic Load Count and Cycle Count Table 5 shows the dynamic load count for the benchmarks. The ratio columns show the load count ratio between the memory and scalar versions of CSE and PRE. For the majority of the benchmark applications, both M-CSE and M-PRE significantly reduce the dynamic load count, with a geometric mean reduction of 16.6% for M-CSE and 21.6% for M-PRE. As M-PRE removes memory redundancies from conditionals and loops, it removes a larger number of memory redundancies than M-CSE. Furthermore, the
data show that M-CSE and M-PRE have more opportunities in the SPEC2000 programs than in MediaBench: the dynamic load ratios between M-PRE and S-PRE are 73% for SPEC2000 and 84.2% for MediaBench, and the ratios between M-CSE and S-CSE are 75.6% for SPEC2000 and 91.9% for MediaBench. The cause of this difference is that the SPEC2000 applications are generally larger and more complex than those in MediaBench, and more data references are compiled as memory instructions, which provides more opportunities for memory redundancy elimination. Figure 6 shows the impact of memory redundancy elimination on application execution cycles. As expected, M-PRE achieves the best results as it is the most powerful redundancy elimination technique.² The reduction in execution cycle count leads to energy savings in the clocking network. Figure 7 shows the normalized clocking network energy consumption of M-CSE, S-PRE, and M-PRE with S-CSE as the base. The curves are mostly identical to those in Figure 6. Like the execution count results, the benchmarks 300.twolf, 175.vpr, 256.bzip2, and gsm have the largest energy savings with M-PRE.

Cache Energy As we discussed in Section 2, memory redundancy elimination reduces cache accesses in both the L1 I-Cache and the D-Cache, and thus it saves energy in the cache structures. Figure 8 shows the normalized L1 I-Cache energy consumption for M-CSE, S-PRE, and M-PRE with S-CSE as the base. Figure 9 shows the normalized L1 D-Cache energy for the four versions. In Figures 8 and 9, the curves for M-PRE are the lowest, as M-PRE generally incurs the fewest I-Cache and D-Cache accesses, thus achieving the largest energy savings. The energy consumption diagrams of the I-Cache and D-Cache also show that memory redundancy elimination is more effective at reducing the D-Cache energy: both M-CSE and M-PRE achieve more than 10% energy savings in the D-Cache for pegwit, 164.gzip, 256.bzip2, 175.vpr, and 300.twolf, while the energy savings in the I-Cache are relatively smaller.

Total Energy and Energy-Delay Product Figure 10 shows the normalized total application energy consumption. Among the redundancy elimination techniques, M-PRE produces the best energy efficiency. A useful metric that measures both application performance and energy efficiency is the energy-delay product [11]: the smaller the energy-delay product, the better the application energy efficiency and performance. Figure 11 shows the normalized energy-delay product with S-CSE as the base. As memory redundancy elimination reduces both application execution cycles and total energy consumption, the energy-delay product for M-CSE and M-PRE is smaller. In contrast to other techniques, such as dynamic voltage scaling, which trade application execution speed for reduced energy consumption, memory redundancy elimination boosts both application performance and energy efficiency, making it a desirable compiler transformation to save energy without loss in performance.
The large execution cycle count for S-PRE in 300.twolf is due to abnormally high L1 I-Cache misses. For other cache configurations, the S-PRE cycle count is generally comparable to that of S-CSE.
Memory Redundancy Elimination
Fig. 7. Normalized clocking network energy consumption
Fig. 8. Normalized L1 I-Cache energy consumption
Fig. 9. Normalized L1 D-Cache energy consumption
301
302
Keith D. Cooper and Li Xu
Fig. 10.
Fig. 11.
Normalized total energy consumption
Normalized energy-delay product
elimination boosts both application performance and energy efficiency, making it a desirable compiler transformation to save energy without loss in performance. Application Energy Breakdown We also studied the micro-architecture component energy contribution for total application energy consumption. Figures 12 and 13 show the component energy breakdown for 256.bzip2 and 175.vpr – the two applications which have the largest energy efficiency improvement. The major energy savings for these two applications come from the clocking network and top level instruction and data cache. In 256.bzip2, the clocking network energy savings for M-CSE and M-PRE are 12% and 15% respectively, the L1 I-Cache savings are 8% and 10%, and the L1 D-Cache savings are 23%
Memory Redundancy Elimination
Fig. 12.
Energy breakdown of 256.bzip2
Fig. 13.
303
Energy breakdown of 175.vpr
and 24%. The final energy savings are 12% for M-CSE and 15% for M-PRE. Similarly, in 175.vpr, the clocking network energy savings for M-CSE and MPRE are 13% and 15% each, the L1 I-Cache savings are 10% and 12% each, and the L1 D-Cache savings are 25% and 26%. The final energy savings on 175.vpr are 14% for M-CSE and 15% for M-PRE.
5 Related Work
Recently, power and energy issues have become critical design constraints for both high-end processors and battery-powered embedded digital devices. Researchers have developed many hardware-based techniques to reduce power and energy consumption in these systems. Dynamic voltage scaling (DVS), which dynamically varies processor clock frequency and voltage to save energy, is described in [1, 4, 5]. The work in [2, 3, 6] discusses ways to reduce cache energy consumption. However, all of these are circuit- and architecture-level techniques; relatively little attention has been paid to application-level energy-saving techniques. In [22], Kandemir et al. studied the energy effects of loop-level compiler optimizations using array-based scientific codes. In contrast to their work, we first profiled total application energy consumption to identify the top energy-consuming components, and then evaluated one compiler technique, memory redundancy elimination, which can significantly reduce energy consumption in these components. Furthermore, our technique targets more complicated general-purpose and multimedia applications. Recently, researchers have also been studying compile-time management of hardware-based energy-saving mechanisms such as DVS. Hsu et al. described a compiler algorithm to identify program regions where the CPU can be slowed down with negligible performance loss [23]. Kremer summarized compiler-based energy management methods in [24]. These methods are orthogonal to the techniques in this paper. Both scalar [17, 18, 19, 25] and memory [26, 27, 28, 29, 30] redundancy detection and removal have been studied in the literature. The redundancy detection algorithm used in our work is described in [13]. Compared to other methods, this algorithm unifies the process of scalar and memory redundancy detection and is
able to find more redundancies. Most of the previous work concerns application run-time speed, while our work targets the benefits of energy savings, though the results show that performance is also improved.
6 Conclusion
Most of the recent work on low-power and low-energy systems focuses on circuit- and architecture-level techniques. However, more energy savings are possible by optimizing the behavior of the applications. We profiled the energy consumption of a suite of benchmarks. The energy statistics identify the clocking network and the first-level caches as the top energy-consuming components. With this insight, we investigated the energy savings of a particular compiler technique, memory redundancy elimination. We present two redundancy elimination frameworks and evaluate their energy improvements. The results indicate that memory redundancy elimination can reduce both execution cycles and the number of top-level cache accesses, thus saving energy in the clocking network and the instruction and data caches. For our benchmarks, memory redundancy elimination achieves up to a 15% reduction in total energy consumption and up to a 24% reduction in the energy-delay product.
Acknowledgements We would like to thank Tim Harvey and the anonymous reviewers, whose comments greatly helped improve the presentation of the paper.
References
[1] Pering, T., et al.: The simulation and evaluation of dynamic voltage scaling algorithms. Int. Symp. on Low Power Electronics and Design (1998) 76–81.
[2] Ghose, K., Kamble, M.: Reducing power in superscalar processor caches using subbanking, multiple line buffers and bit-line segmentation. Int. Symp. on Low Power Electronics and Design (1999) 70–75.
[3] Kin, J., et al.: The filter cache: An energy efficient memory structure. Int. Symp. on Microarchitecture (1997) 184–193.
[4] Simunic, T., Benini, L., Acquaviva, A., Glynn, P., Micheli, G.D.: Dynamic voltage scaling for portable systems. Design Automation Conf. (2001).
[5] Lorch, J.R., Smith, A.J.: Improving dynamic voltage scaling algorithms with PACE. In: SIGMETRICS/Performance (2001) 50–61.
[6] Albonesi, D.H.: Selective cache ways: On-demand cache resource allocation. Int. Symp. on Microarchitecture (1999).
[7] Allen, R., Kennedy, K.: Optimizing Compilers for Modern Architectures: A Dependence-based Approach. Morgan Kaufmann (2002).
[8] Brooks, D., Tiwari, V., Martonosi, M.: Wattch: a framework for architectural-level power analysis and optimizations. In: ISCA (2000) 83–94.
[9] Ye, W., Vijaykrishnan, N., Kandemir, M.T., Irwin, M.J.: The design and use of SimplePower: a cycle-accurate energy estimation tool. In: Design Automation Conf. (2000) 340–345.
[10] Burger, D., Austin, T.: The SimpleScalar tool set, version 2.0. Computer Architecture News (1997) 13–25.
[11] Gonzalez, R., Horowitz, M.: Energy dissipation in general purpose microprocessors. IEEE Journal of Solid-State Circuits (1996) 1277–1283.
[12] Lee, C., Potkonjak, M., Mangione-Smith, W.H.: MediaBench: A tool for evaluating and synthesizing multimedia and communications systems. In: Int. Symp. on Microarchitecture (1997) 330–335.
[13] Cooper, K.D., Xu, L.: An efficient static analysis algorithm to detect redundant memory operations. In: ACM Workshop on Memory Systems Performance (2002).
[14] Simpson, T.: Value-Driven Redundancy Elimination. PhD thesis, Rice University (1996).
[15] Gargi, K.: A sparse algorithm for predicated global value numbering. In: ACM SIGPLAN PLDI (2002) 45–56.
[16] Andersen, L.O.: Program Analysis and Specialization for the C Programming Language. PhD thesis, University of Copenhagen (1994).
[17] Cocke, J.: Global common subexpression elimination. Symp. on Compiler Construction (1970) 20–24.
[18] Morel, E., Renvoise, C.: Global optimization by suppression of partial redundancies. Commun. ACM (1979) 96–103.
[19] Knoop, J., Rüthing, O., Steffen, B.: Lazy code motion. ACM SIGPLAN PLDI (1992) 224–234.
[20] Drechsler, K.H., Stadel, M.P.: A variation of Knoop, Rüthing, and Steffen's lazy code motion. SIGPLAN Notices (1993) 29–38.
[21] Xu, L.: Program Redundancy Analysis and Optimization to Improve Memory Performance. PhD thesis, Rice University (2003).
[22] Kandemir, M.T., Vijaykrishnan, N., et al.: Influence of compiler optimizations on system power. In: Design Automation Conf. (2000) 304–307.
[23] Hsu, C.H., Kremer, U.: The design, implementation, and evaluation of a compiler algorithm for CPU energy reduction. ACM SIGPLAN PLDI (2003) 38–48.
[24] Kremer, U.: Compilers for power and energy management. PLDI Tutorial (2003).
[25] Briggs, P., Cooper, K.D., Simpson, L.T.: Value numbering. Software: Practice and Experience (1997) 710–724.
[26] Lu, J., Cooper, K.: Register promotion in C programs. ACM SIGPLAN PLDI (1997) 308–319.
[27] Lo, R., Chow, F., Kennedy, R., Liu, S.M., Tu, P.: Register promotion by sparse partial redundancy elimination of loads and stores. ACM SIGPLAN PLDI (1998) 26–37.
[28] Sastry, A., Ju, R.D.: A new algorithm for scalar register promotion based on SSA form. ACM SIGPLAN PLDI (1998) 15–25.
[29] Bodik, R., Gupta, R., Soffa, M.L.: Load-reuse analysis: Design and evaluation. ACM SIGPLAN PLDI (1999) 64–76.
[30] Callahan, D., Carr, S., Kennedy, K.: Improving register allocation for subscripted variables. ACM SIGPLAN PLDI (1990) 53–65.
Adaptive MPI

Chao Huang, Orion Lawlor, and L. V. Kalé
Parallel Programming Laboratory, University of Illinois at Urbana-Champaign
{chuang10,l-kale1}@uiuc.edu
Abstract. Processor virtualization is a powerful technique that enables the runtime system to carry out intelligent adaptive optimizations like dynamic resource management. Charm++ is an early language/system that supports processor virtualization. This paper describes Adaptive MPI, or AMPI, an MPI implementation and extension that supports processor virtualization. AMPI implements virtual MPI processes (VPs), several of which may be mapped to a single physical processor. AMPI includes a powerful runtime support system that takes advantage of the freedom it is given to assign VPs to processors. With this runtime system, AMPI supports such features as automatic adaptive overlap of communication and computation and automatic load balancing. It can also support other features such as checkpointing without additional user code, and the ability to shrink and expand the set of processors used by a job at runtime. This paper describes AMPI, its features, benchmarks that illustrate the performance advantages and tradeoffs offered by AMPI, and application experiences.
1 Introduction
The new generation of parallel applications is complex: these applications involve simulation of dynamically varying systems, use adaptive techniques such as multiple timestepping and adaptive refinement, and often involve multiple parallel modules. Typical implementations of MPI do not support the dynamic nature of these applications well. As a result, programming productivity and parallel efficiency suffer. We present AMPI, an adaptive implementation of MPI that is better suited for such applications, while still retaining the familiar programming model of MPI. The basic idea behind AMPI is to separate the issue of mapping work to processors from that of identifying work to be done in parallel. Standard MPI programs divide the computation into P processes, one for each of the P processors. In contrast, an AMPI programmer divides the computation into a large number V of virtual processors, independent of the number of physical processors. The virtual processors are programmed in MPI as before. Physical processors are no longer visible to the programmer, as the responsibility for assigning virtual processors to physical processors is taken over by the runtime system. This provides an effective division of labor between the system and the programmer: the
programmer decides what to do in parallel, and the runtime system decides where and when to do it. This division allows the programmer to use the most natural decomposition for their problems, rather than being restricted by the physical machine. For example, algorithmic considerations often restrict the number of processors to a power of 2, or a cube, but with AMPI, V can still be a cube even though P is prime. Note that the number of virtual processors V is typically much larger than P. Using multiple virtual processors per physical processor brings several additional benefits.
1.1 Related Work
The virtualization concept embodied by AMPI is very old, and Fox et al. [1] make a convincing case for virtualizing parallel programs. Unlike Fox's work, AMPI virtualizes at the runtime layer rather than manually at the user level, and AMPI can use adaptive load balancers. Virtualization is also supported in DRMS [2] for data-parallel, array-based applications. CHARM++ is one of the earliest, if not the first, processor-virtualization systems implemented on parallel machines [3, 4]. AMPI builds on top of CHARM++ and shares its runtime system. There are several excellent, complete, publicly available non-virtualized implementations of MPI, such as MPICH [5] and LAM/MPI [6]. Many researchers have described implementations of fault tolerance via checkpoint/restart, often built on top of one of the free implementations of MPI, such as CoCheck [7] and StarFish [8]. AMPI differs from these efforts in that it provides full virtualization to improve performance and allow load balancing, rather than solely for checkpointing or fault tolerance. Meanwhile, there have been several efforts to implement MPI processes on top of light-weight threads; MPI-Lite [9] and TMPI [10] are two good examples. They have successfully used threaded execution to improve the performance of message passing programs, especially on SMP machines. Adaptive MPI, however, enables extra optimizations through its capability of migrating the user-level threads on which MPI processes execute. The CHARM++/AMPI approach is to let the runtime system change the assignment of VPs to physical processors at runtime, thereby enabling a broad set of optimizations. In the next section, we motivate the project, providing an overview of the benefits. In Section 3 we describe how our virtual processors are implemented and migrated. Section 4 describes the design and implementation strategies for specific features, such as checkpointing and load balancing. We then present performance data showing that these adaptive features are beneficial in complex applications, and affordable (i.e., they present low overhead) in general. We will summarize our experience in using AMPI in several large applications.
2 Benefits of Virtualization
In [11], the author has discussed in detail the benefits of processor virtualization in parallel programming, and CHARM++ has indeed taken full advantage of these benefits. Adaptive MPI inherits most of these merits from CHARM++, while furnishing the common MPI programming environment. Here is a list of the benefits that we detail in this paper.

Adaptive Overlap of Communication and Computation: If one of the virtual processors is blocked on a receive, another virtual processor on the same physical processor can run. This largely eliminates the need for the programmer to manually specify static computation/communication overlap, as is often required in MPI.

Automatic Load Balancing: If some of the physical processors become overloaded, the runtime system can migrate a few of their virtual processors to relatively underloaded physical processors. Our runtime system can make this kind of load balancing decision based on automatic instrumentation, as explained in Section 4.1.

Asynchronous Interfaces to Collective Operations: AMPI supports asynchronous, or non-blocking, interfaces to collective communication operations, allowing time-consuming collective operations to be overlapped with other useful computation. Section 4.2 describes this in detail.

Automatic Checkpointing: AMPI's virtualization allows applications to be checkpointed without additional user programming, as described in Section 4.3.

Better Cache Performance: A virtual processor handles a smaller set of data than a physical processor, so a virtual processor will have better memory locality. This blocking effect is the same method many serial cache optimizations employ.

Flexible Usage of Available Processors: The ability to migrate virtual processors can be used to adapt the computation if the available part of the physical machine changes. See Section 4.5 for details.
3 Adaptive MPI

3.1 AMPI Implementation
AMPI is built on CHARM++, and uses its communication facilities, load balancing strategies and threading model. CHARM++ uses an object-based model: programs consist of a collection of message-driven objects mapped onto physical processors by the CHARM++ runtime system. The objects communicate with other objects by invoking asynchronous entry methods on the remote objects. Upon each such asynchronous invocation, a message is generated and sent to the destination processor where the remote object resides. Adaptive MPI implements its MPI processors as CHARM++ "user-level" threads bound to CHARM++ communicating objects.
Fig. 1. An MPI process is implemented as a user-level thread, several of which can be mapped to one single physical processor. This virtualization enables several powerful features including automatic load balancing and adaptive overlapping
Message passing between AMPI virtual processors is implemented as communication among these CHARM++ objects, and the underlying messages are handled by the CHARM++ runtime system. Even with object migration, CHARM++ supports efficient routing and forwarding of the messages: objects are migrated via efficient data migration, with message forwarding used when necessary. Migration presents interesting problems for basic and collective communication, which are effectively solved by the CHARM++ runtime system [12]. Migration can be used by the built-in measurement-based load balancing [13], for adapting to changing load on workstation clusters [14], and even for shrinking/expanding jobs on timeshared machines [15]. The threads used by AMPI are user-level threads; they are created and scheduled by user-level code rather than by the operating system kernel. The advantages of user-level threads are fast context switching (on a 1.8 GHz AMD AthlonXP, the overhead of a suspend/schedule/resume operation is 0.45 microseconds), control over scheduling, and control over stack allocation. Thus, it is feasible to run thousands of such threads on one physical processor (see, e.g., [16]). CHARM++'s user-level threads are scheduled non-preemptively.
3.2 Writing an AMPI Program
Writing an AMPI program is barely different from writing an ordinary MPI program. In fact, a legal MPI program is also a legal AMPI program. To take full advantage of the migration mechanism, however, there is one more issue to address: global variables. A global variable is any variable that is stored at a fixed, preallocated location in memory. Although not specified by the MPI standard, many actual MPI
programs assume that global variables can be used independently on each processor; i.e., a global variable on processor 1 can have a different value than the same global variable on processor 2. However, in AMPI, all the threads on one processor share a single address space and thus a single set of global variables; moreover, when a thread migrates, it leaves its global variables behind. Another problem is that global variables shared on the same processor might be changed by other threads. Therefore, global variables are disallowed in AMPI programs.
3.3 Converting MPI Programs to AMPI
If an MPI program uses global variables, it cannot run unmodified under AMPI, and we need to convert it to fit AMPI. As discussed in Section 3.2, for thread safety, global variables need to be either removed or "privatized". To remove the global variables from the code, one can collect all of the globals into a single structure (an allocated "type" in F90) named, say, "GlobalVars", which is then passed into each function. Manually removing all the global variables is sometimes cumbersome, though mechanical. Fortunately, this can be automated. AMPIzer [17] is our source-to-source translator, based on Polaris [18], that privatizes global variables in arbitrary FORTRAN77 or FORTRAN90 code and generates the necessary code for moving the data across processors.
4 Features
In this section, we discuss in detail the key features that help achieve higher parallel performance and alleviate the complexity of parallel programming.
4.1 Automatic Load Balancing
Achieving automatic dynamic load balancing without introducing an excessive amount of overhead poses significant challenges. CHARM++ addresses this issue with its integrated load balancing strategies, or Load Balancers [13]. The common mechanism they share is the following: during the execution of the program, a load balancing framework collects workload information on each physical processor in the background, and when the program hands over control to a load balancer, it uses this information to redistribute the workload and migrate the parallel objects between the processors as necessary. As there are different answers to the questions of (1) what information to collect, (2) where the information is processed, and (3) how to design the redistribution scheme, there are different types of load balancing strategies. For the first question, some load balancers look at computation workload only, while others take inter-processor communication into consideration. For the second question, some load balancers contribute the information to a central agent in
the system for processing, whereas others only have objects exchange information with their neighbors and make decisions locally. For the last question, some load balancers randomly redistribute the workload and hope for the best, as opposed to using deliberate algorithms to determine a new distribution with better balance. For more detail, please refer to [13] and the CHARM++ manuals. A key issue in automatic load balancing is to cleanly move objects from one processor to another. CHARM++ natively supports object migration, but in the context of AMPI, thread migration required several interesting additions to the runtime system, as described in the following sections.

Isomalloc Stacks: A user-level thread, when suspended, consists of a stack and a set of preserved machine registers. During migration, the machine registers are simply copied to the new processor. The stack, unfortunately, is very difficult to move. In a distributed-memory parallel machine, if the stack is moved to a new machine, it will almost undoubtedly be allocated at a different location, so existing pointers to addresses in the original stack would become invalid when the stack moves. We cannot reliably update all the pointers to stack-allocated variables, because these pointers are stored in machine registers and stack frames, whose layout is highly machine- and compiler-dependent. Our solution is to ensure that even after a migration, a thread's stack stays at the same address in memory that it had on the old processor. This means all the pointers embedded in the stack still work properly. Luckily, any operating system with virtual memory support has the ability to map arbitrary pages in and out of memory. Therefore we merely need to mmap the appropriate address range into memory on the new machine and use it for our stack. To ensure that each thread allocates its stack at a globally unique range of addresses, the available virtual address space is divided into regions, one for each thread. This "isomalloc" approach to thread migration is based on [19].

Isomalloc Heaps: Another obvious problem with migrating an arbitrary program is dynamically allocated storage. Unlike the thread stack, which the system allocates, dynamically allocated locations are known only to the user program. The "isomalloc" strategy available in the latest version of AMPI uses the same virtual address allocation method used for stacks to allocate all heap data. Similarly, the user's heap data is given globally unique virtual addresses, so it can be moved to any running processor without changing its address. Thus migration is transparent to the user code, even for arbitrarily interlinked, dynamically allocated data structures. To do this, AMPI must intercept and handle all memory allocations done by the user code; on many UNIX systems, this can be done by providing our own implementation of malloc. Machines with 64-bit pointers, which are becoming increasingly common, support a large virtual address space and hence can fully benefit from isomalloc heaps.
Limitations: During migration, we do not preserve a thread's open files and sockets, environment variables, or signals. However, threads are only migrated when they call the special API routine MPI_Migrate, so currently these non-migration-safe features can be used at any other time. The intention is to support these operations in the future via a thread-safe, AMPI-specific API that works with migration. Thread migration between different architectures on a heterogeneous parallel machine is also not supported; this would require extensive compiler support or a common virtual machine. (Alternatively, stack-copying threads along with user-supplied pack/unpack code could be used to support AMPI in a heterogeneous environment.)
4.2 Collective Communication Optimization
Collective communications are required in many scientific applications, as they are used in many basic operations like high-dimensional FFTs, LU factorization and linear algebra operations. These communications involve many or all processors in the system, which makes them complex and time-consuming. AMPI uses the CHARM++ communication library [20, 21] to optimize its collective communication. This library uses two intelligent techniques to optimize collective communication. For small messages, messages are combined and routed via intermediate processors to reduce the software overhead. For large messages, network contention, the dominant factor in the total cost, is lowered by smart sequencing of the messages based on the underlying network topology. Besides the above optimizations inherited from CHARM++, AMPI has its own improvements to the collective communication operations. If we take a closer look at the time spent on collective communications, only a small portion of the total time is software overhead, namely the time the CPU spends on communication operations. In particular, a modern NIC with a communication co-processor performs message management through remote DMA, so these operations require very little CPU involvement. On the other hand, the MPI standard defines collective operations like MPI_Alltoall and MPI_Allgather to be blocking, wasting CPU time on waiting for the communication calls to return. To better utilize the computing power of the CPU, we can make the collective operations non-blocking, allowing useful computation while other MPI processors are waiting for slower collective operations. In IBM MPI for AIX [22], similar non-blocking collectives were implemented but not well benchmarked or documented. Our approach differs from IBM's in that we have more flexibility in overlapping, since the light-weight threads we use are easier to schedule to make full use of the physical processors.
4.3 Checkpoint and Restart
As Stellner describes in his paper on his checkpointing framework [23], process migration can easily be layered on top of any checkpointing system by simply rearranging the checkpoint files before restart. AMPI implements checkpointing
in exactly the opposite way: in AMPI, rather than migration being a special kind of checkpoint/restart, checkpoint/restart is seen as a special kind of migration, namely migration to and from disk. A running AMPI thread checkpoints itself by calling MPI_Checkpoint with a directory name. Each thread drains its network queue, migrates a copy of itself into a file in that directory, and then continues normally. The checkpoint time is dominated by the cost of the I/O, since very little communication is required. There are currently two ways to organize the checkpoint files: (1) all threads on the same physical processor are grouped into one single disk file, to reduce the number of files created, or (2) each thread has its own file. With the second option, because the AMPI system checkpoints threads rather than physical processors, an AMPI program may be restored on a larger or smaller number of physical processors than it was started on. Thus a checkpoint taken on 1000 processors can easily be restarted on 999 processors if, for example, a processor fails during the run.
4.4 Multi-module AMPI
Large scientific programs are often written in a modular fashion by combining multiple MPI modules into a single program. These MPI modules are often derived from independent MPI programs. Current MPI programs transfer control from one module to another strictly via subroutine calls. Even if two modules are independent, idle time in one cannot be overlapped with computations in the other without breaking the abstraction boundaries between the two modules. In contrast, AMPI allows multiple separately developed modules to interleave execution based on the availability of messages. Each module may have its own “main”, and its own flow of control. AMPI provides cross-communicators to communicate between such modules.
4.5 Shrink-Expand Capability
AMPI normally migrates virtual processors for load balance, but this capability can also be used to respond to the changing properties of the parallel machine. For example, Figure 2 shows the conjugate gradient solver responding to the availability of several new processors. The time per step drops dramatically as virtual processors are migrated onto the new physical processors.

Fig. 2. Time per step for the million-row conjugate gradient solver on a workstation cluster. Initially, the application runs on 16 machines. 16 new machines are made available at step 600, which immediately improves the throughput
5 AMPI Benchmarks
In this section we use several benchmarks to illustrate the kinds of performance improvement that AMPI makes possible. One of the basic benchmarks here is a 2D grid-based stencil-type calculation. It is a multiple-timestepping calculation involving a group of objects in a mesh. At each timestep, every object exchanges part of its data with its neighbors and does some computation based on the neighbors' data. The objects can be organized in a 2D or 3D mesh, and 1-away or 2-away neighbors may be involved. Depending on these choices, the number of points in the stencil computation can range from 5 to 13. Although this is a simplified model of many applications, like fluid dynamics or heat dispersion simulation, it serves the purpose of demonstration well. We have chosen Lemieux, the supercomputer at the Pittsburgh Supercomputing Center [24], as the major benchmark platform.
5.1 Adaptive Overlapping
In Adaptive MPI, virtual processors are message-driven objects mapped onto physical processors. Several VPs can be mapped onto one physical processor, and message passing among VPs is really communication between these objects, as explained in Section 3.1. Now we show the first benefit of virtualization: adaptive overlap of computation with communication, which can improve the utilization of the CPUs. Figures 3 and 4 are timelines from Projections, the visualization tool for CHARM++ (manual available at http://finesse.cs.uiuc.edu/manuals/). In the timelines, the x direction is time and the y direction shows the 8 physical processors. For each processor, a solid block means it is in use, while a gap between blocks is idle time. The figures shown are from two separate runs of the 2D 5-point stencil calculation. In the first run, only one VP is created on each physical processor, so there is effectively no virtualization. In the second run, 8 VPs are created on each physical processor, with each VP doing a smaller amount of computation; the total problem size is the same. In the displayed portion of the execution, in Figure 3 we can see obvious gaps between blocks, and the overall utilization is around 70%. This illustrates the CPU time wasted while waiting for blocking communication to return. In Figure 4, however, the gaps left by communication are filled with smaller chunks of computation: when one object is waiting for its communication to return, other objects on the processor automatically take over and do their computation, eliminating the need for manual arrangement. With this adaptive overlap of communication and computation, the average utilization of the CPUs is boosted to around 80%.

Fig. 3. Timeline of 2D 5-point stencil calculation on Lemieux. No virtualization is used in this case: one VP per processor
Fig. 4. Timeline of 2D 5-point stencil calculation on Lemieux. Virtualization ratio is 8: eight VPs created on each processor
5.2 Automatic Load Balancing
In parallel programming, load imbalance must be carefully avoided. Unfortunately, load imbalance, especially dynamic load imbalance, appears frequently and is difficult to remove. For instance, consider a simulation on a mesh where part of the mesh has a more complicated structure than the rest; the load within this mesh is imbalanced. As another example, when adaptive mesh refinement (AMR) is in use, hot spots can arise where the mesh structure is highly refined. This dynamic type of load imbalance requires more programmer or system intervention to remove. AMPI, using the automatic load balancing mechanism integrated in the CHARM++ system, accomplishes the task of removing static and dynamic load imbalance automatically.

As a simple benchmark, we modified the 5-point stencil program by dividing the mesh in a 2D stencil calculation into two parts: in the first 1/16 of the mesh, all objects do a 2-away (13-point) calculation, while the rest do a 1-away (5-point) calculation. The load on that first 1/16 of the mesh is thus much heavier than on the remaining 15/16. The program used 128 AMPI VPs on 16 processors. Although this is an artificial benchmark, it represents a common situation: a very small fraction of overloaded processors can ruin the overall performance of all processors. The load balancer is employed to solve this problem, as shown in Figures 5 and 6. According to Figure 5, one of the 16 processors is overloaded while the others are underloaded, with average utilization less than 60% before load balancing; after load balancing, the variation in the workload is diminished and the overall utilization is about 20% higher. Correspondingly, the average time per iteration drops from 1.15 ms to 0.85 ms. Figure 6 shows how utilization increases from approximately 55% to 85% when the load balancer is activated. Note that this load balancing is done entirely automatically by the system; no programmer intervention is needed at all.

Fig. 5. Utilization of 16 processors before (left) and after (right) automatic load balancing in a non-uniform stencil calculation
Fig. 6. Overall CPU utilization before and after automatic load balancing in a non-uniform stencil calculation
5.3 Collective Communication Optimization
The MPI standard defines the collective operations as blocking, which makes it impossible to overlap them with computation, because many or all processors are blocked waiting for the collective operation to return. In Section 4.2 we discussed the optimization of supporting non-blocking collective operations to allow overlapping. Now we illustrate how this feature can save execution time in parallel applications.

In [25], a parallel algorithm for Quantum Molecular Dynamics is discussed. One complexity in the algorithm arises from 128 independent and concurrent sets of 3D FFTs. Although each of the FFTs can be parallelized, overlapping between different sets of FFTs is difficult due to the all-to-all operation required for transposing data in each FFT. However, AMPI's non-blocking all-to-all operation allows the programmer to overlap the communication and computation from consecutive sets of FFTs and save execution time. To make a benchmark based on this application, we simplified the above problem: we do two independent sets of 2D FFT, each consisting of one 1D FFT, a transpose, and another 1D FFT. To pipeline the operations, we move the second 1D FFT of the first set to after the transpose of the first set. In the blocking version, however, this pipelining does not gain any performance, because the transpose, implemented as a blocking all-to-all communication, stops any other computation from being done. In the non-blocking version, the second set is able to do real computation while the first set is waiting for its communication to complete.

Figure 7 demonstrates the effect of overlapping collective communication with computation. The y axis shows different numbers of processors, for the blocking version (labeled MPI) and the non-blocking version (labeled AMPI) respectively, and the x axis is execution time. Using distinct colors in the stacked bars, we show the breakdown of the overhead into 1D FFT (computation), communication, and, for the non-blocking version, the waiting time for the non-blocking operation, as discussed in Section 4.2. It can be observed that the two versions have similar amounts of computation, but in terms of communication the non-blocking version has an advantage, because part of its waiting time is reduced by overlapping it with computation. The AMPI bar is 10% - 20% shorter than the MPI bar, with the amount of savings depending on the amount of possible overlap. The savings could be even larger if there were more computation available for overlap.

Fig. 7. Breakdown of execution time of 2D FFT benchmark on 4, 8, and 16 processors, with comparison between blocking (MPI) and non-blocking (AMPI) all-to-all operations
5.4 Flexibility and Overhead
In this section we show the flexibility that virtualization provides, as well as the overhead that virtualization incurs. Our benchmark is a 3D 7-point stencil calculation. First we run it with native MPI on Lemieux. Because the model of the program divides the job into K-cubed partitions, not surprisingly, the program runs only on a cube number of processors. On Adaptive MPI with virtualization, the program runs transparently on any given number of processors, exhibiting the flexibility that virtualization offers. The comparison between these two runs is visualized in Table 2. The performance on native MPI and on Adaptive MPI shows very little difference. Note that on some "random" numbers of PEs, like 19 and 140, the native MPI program is not able to run at all, while AMPI handles the situation perfectly.

Now let us take a closer look at the speedup data for the same program running on native MPI, on AMPI with 1 VP per processor, and on AMPI with multiple (K = 4 to 10) VPs per processor. Table 1 displays the execution time of the same size problem running on an increasing number of processors, with the best K values shown in the AMPI(K) column. Comparing the execution time of native MPI against AMPI, we find that although native MPI outperforms AMPI in many cases, as expected, it does so by only a small amount. Thus, the flexibility and load balancing advantages of AMPI do not come at an undue price in basic performance. (A microbenchmark shows only a small average cost for a context switch between the threads with which AMPI VPs are associated, even on a 400 MHz PIII Xeon processor.) In some cases, nevertheless, AMPI does a little better. For example, AMPI(K) is faster than native MPI when the number of processors is small. This is due to the caching effect: many VPs grouped on one processor increase the locality of data as well as instructions. The advantage of this caching effect is shown in Table 1, where AMPI with virtualization outperforms AMPI(1) on smaller numbers of processors. When many processors are involved, the cost of coordinating the VPs takes over and offsets the caching effect. Two results (marked by "*" in Table 1) are anomalous, and we have not identified the underlying causes yet.
6 AMPI Experience: Rocket Simulation
The Center for Simulation of Advanced Rockets (CSAR) is an academic research organization funded by the Department of Energy and affiliated with the University of Illinois. The focus of CSAR is the accurate physical simulation of solid-propellant rockets, such as the Space Shuttle's solid rocket boosters. CSAR consists of several dozen faculty from ten different engineering and science departments, as well as 18 professional staff. The main CSAR simulation code consists of four major components: a fluid dynamics simulation, for the hot gas flowing through and out of the rocket; a surface burning model for the solid propellant; a nonmatching but fully-coupled fluid/solid interface; and finally a finite-element solid mechanics simulation for the solid propellant and rocket casing. Each of these components - fluids, burning, interface, and solids - began as an independently developed parallel MPI program.

One of the most important early benefits CSAR found in using AMPI is the ability to run a partitioned set of input files on a different number of virtual processors than physical processors. For example, a CSAR developer was faced with an error in mesh motion that only appeared when a particular problem was partitioned for 480 processors. Finding and fixing the error was difficult, because a job for 480 physical processors can only be run after a long wait in the batch queue at a supercomputer center. Using AMPI, the developer was able to debug the problem interactively, using 480 virtual processors distributed over 32 physical processors of a local cluster, which made resolving the error much faster and easier.

Because each of the CSAR simulation components is developed independently, and each has its own parallel input format, there are difficult practical problems involved in simply preparing input meshes that are partitioned for the correct number of physical processors available. Using AMPI, CSAR developers often simply use a fixed number of virtual processors, which allows a wide range of physical processor counts to be used without repartitioning the problem's input files.

As the solid propellant burns away, each processor's portion of the problem domain changes, which changes the CPU and communication time required by that processor. The most important long-term benefit that the CSAR codes will derive from AMPI is the ability to adapt to this changing computation by migrating work between processors, taking advantage of the CHARM++ load balancing framework's demonstrated ability to optimize for load balance and communication efficiency. Because the CSAR components do not yet change
the mesh structure during a run, and merely distort the existing mesh, the computation and communication patterns of the virtual MPI processors do not yet change. However, this mesh distortion breaks down after a relatively small amount of motion, so the ability to adjust the mesh to the changing problem domain is scheduled to be added soon. Finally, the CSAR simulator’s current main loop consists of one call to each of the simulation components in turn, in a one-at-a-time lockstep fashion. This means, for example, the fluid simulation must finish its timestep before the solids can begin its own. But because each component runs independently except at well-defined interface points, and AMPI allows multiple independent threads of execution, we will be able to improve performance by splitting the main loop into a set of cooperating threads. This would allow, for example, the fluid simulation thread to use the processor while the solid thread is blocked waiting for remote data or a solids synchronization. Separating each component should also improve our ability to optimize the communication balance across the machine, since currently the fluids processor has no physical correspondence with the solids processor. In summary, AMPI has proven a useful tool for the CSAR simulation, from debugging to day-to-day operations to future plans.
7 Conclusions
We have presented AMPI, an adaptive implementation of MPI on top of CHARM++. AMPI implements migratable, virtual, light-weight MPI processors, and assigns several virtual processors to each physical processor. This efficient virtualization provides a number of benefits, such as the ability to automatically load balance arbitrary computations, automatically overlap computation and communication, emulate large machines on small ones, and respond to a changing physical machine. Several applications are being developed using AMPI, including those in rocket simulation. AMPI is an active research project; much future work is planned. We expect to achieve full MPI-1.1 standards conformance soon, and MPI-2 conformance thereafter. We are rapidly improving the performance of AMPI, which should soon be quite near that of non-migratable MPI. The CHARM++ performance analysis tools are being updated to provide more direct support for AMPI programs. Finally, we plan to extend our suite of automatic load balancing strategies to provide machine-topology-specific strategies, useful for future machines such as BlueGene/L.
References [1] Fox, G., Williams, R., Messina, P.: Parallel Computing Works. Morgan Kaufman (1994) [2] V.K.Naik, Setia, S.K., Squillante, M.S.: Processor allocation in multiprogrammed distributed-memory parallel computer systems. Journal of Parallel and Distributed Computing (1997)
[3] Kalé, L., Krishnan, S.: CHARM++: A Portable Concurrent Object Oriented System Based on C++. In Paepcke, A., ed.: Proceedings of OOPSLA’93, ACM Press (1993) 91-108 [4] Kale, L.V., Krishnan, S.: Charm++: Parallel Programming with Message-Driven Objects. In Wilson, G.V., Lu, P., eds.: Parallel Programming using C++. MIT Press (1996) 175–213 [5] Gropp, W., Lusk, E., Doss, N., Skjellum, A.: Mpich: A high-performance, portable implementation of the mpi message passing interface standard. Parallel Computing 22 (1996) 789–828 [6] Burns, G., Daoud, R., Vaigl, J.: Lam: An open cluster environment for mpi. In: Proceedings of Supercomputing Symposium 1994, Toronto, Canada. (1994) [7] Stellner, G.: CoCheck: Checkpointing and Process Migration for MPI. In: Proceedings of the 10th International Parallel Processing Symposium (IPPS ’96), Honolulu, Hawaii (1996) [8] Agbaria, A., Friedman, R.: StarFish: Fault-tolerant dynamic mpi programs on clusters of workstations. In: 8th IEEE International Symposium on High Performance Distributed Computing. (1999) [9] : (MPI-Lite, Parallel Computing Lab, University of California) http://may.cs. ucla.edu/projects/sesame/mpi_lite/mpi_lite.html. [10] Tang, H., Shen, K., Yang, T.: Program transformation and runtime support for threaded MPI execution on shared-memory machines. ACM Transactions on Programming Languages and Systems 22 (2000) 673–700 [11] Kalé, L.V.: The virtualization model of parallel programming : Runtime optimizations and the state of art. In: LACSI 2002, Albuquerque (2002) [12] Lawlor, O., Kalé, L.V.: Supporting dynamic parallel object arrays. In: Proceedings of ACM 2001 Java Grande/ISCOPE Conference, Stanford, CA (2001) 21–29 [13] Kale, L.V., Bhandarkar, M., Brunner, R.: Run-time Support for Adaptive Load Balancing. In Rolim, J., ed.: Lecture Notes in Computer Science, Proceedings of 4th Workshop on Runtime Systems for Parallel Programming (RTSPP) Cancun - Mexico. Volume 1800. (2000) 1152–1159 [14] Brunner, R.K., Kalé, L.V.: Adapting to load on workstation clusters. In: The Seventh Symposium on the Frontiers of Massively Parallel Computation, IEEE Computer Society Press (1999) 106–112 [15] Kalé, L.V., Kumar, S., DeSouza, J.: An adaptive job scheduler for timeshared parallel machines. Technical Report 00-02, Parallel Programming Laboratory, Department of Computer Science, University of Illinois at Urbana-Champaign (2000) [16] Saboo, N., Singla, A.K., Unger, J.M., Kalé, L.V.: Emulating petaflops machines and blue gene. In: Workshop on Massively Parallel Processing (IPDPS’01), San Francisco, CA (2001) [17] Mahesh, K.: Ampizer: An mpi-ampi translator. Master’s thesis, Computer Science Department, University of Illinois at Urbana-Champiagn (2001) [18] Blume, W., Eigenmann, R., Faigin, K., Grout, J., Hoeflinger, J., Padua, D., Petersen, P., Pottenger, B., Rauchwerger, L., Tu, P., Weatherford, S.: Polaris: Improving the effectiveness of parallelizing compilers. In: Proceedings of 7th International Workshop on Languages and Compilers for Parallel Computing. Number 892 in Lecture Notes in Computer Science, Ithaca, NY, USA, Springer-Verlag (1994) 141-154 [19] Antoniu, G., Bouge, L., Namyst, R.: An efficient and transparent thread migration scheme in the runtime system. In: Proc. 3rd Workshop on Runtime Systems for Parallel Programming (RTSPP) San Juan, Puerto Rico. Lecture Notes in Computer Science 1586, Springer-Verlag (1999) 496–510
[20] Kale, L.V., Kumar, S., Vardarajan, K.: A framework for collective personalized communication, communicated to IPDPS 2003. Technical Report 02-10, Parallel Programming Laboratory, Department of Computer Science, University of Illinois at Urbana-Champaign (2002) [21] Kale, L.V., Kumar, S.: Scaling collective multicast on high performance clusters. Technical Report 03-04, Parallel Programming Laboratory, Department of Computer Science, University of Illinois at Urbana-Champaign (2003) [22] IBM Parallel Environment for AIX, MPI Subroutine Reference. http://publib.boulder.ibm.com/doc_link/en_US/a_doc_lib/sp34/pe/html/am107mst.html. [23] Stellner, G.: CoCheck: Checkpointing and process migration for MPI. In: Proceedings of the International Parallel and Distributed Processing Symposium, IEEE Computer Society Press, Los Alamitos, CA (1996) 526–531 [24] Lemieux, Pittsburgh Supercomputing Center. http://www.psc.edu/machines/tcs/lemieux.html. [25] Vadali, R., Kale, L.V., Martyna, G., Tuckerman, M.: Scalable parallelization of ab initio molecular dynamics. Technical report, UIUC, Dept. of Computer Science (2003)
MPJava: High-Performance Message Passing in Java Using Java.nio

William Pugh and Jaime Spacco
University of Maryland, College Park, MD 20740, USA
{pugh,jspacco}@cs.umd.edu
Abstract. We explore advances in Java Virtual Machine (JVM) technology along with new high-performance I/O libraries in Java 1.4, and find that Java is increasingly an attractive platform for scientific cluster-based message passing codes. We report that these new technologies allow a pure Java implementation of a cluster communication library that performs competitively with standard C-based MPI implementations.
1 Introduction
Previous efforts at Java-based message-passing frameworks have focused on making the functionality of the Message Passing Interface (MPI) [1] available in Java, either through native-code wrappers to existing MPI libraries (mpiJava [2], JavaMPI [3]) or pure Java implementations (MPIJ [4]). Previous work showed that both pure Java and Java/native MPI hybrid approaches offered substantially worse performance than MPI applications written in C or Fortran with MPI bindings. We have built Message Passing Java, or MPJava, a pure-Java message passing framework. We make extensive use of the java.nio package introduced in Java 1.4. Currently, our framework provides a subset of the functionality available in MPI. MPJava does not use the Java Native Interface (JNI). The JNI, while convenient and occasionally necessary, violates type safety, incurs a performance penalty due to additional data copies between the Java and C heaps, and prevents the JVM's Just-In-Time (JIT) compiler from fully optimizing methods that make native calls. MPJava offers promising results for the future of high-performance message passing in pure Java. On a cluster of Linux workstations, MPJava provides performance that is competitive with LAM-MPI [5] for the Java Grande Forum's Ping-Pong and All-to-All microbenchmarks. Our framework also provides performance that is comparable to the Fortran/LAM-MPI implementation of a Conjugate Gradient benchmark taken from the NASA Advanced Supercomputing Parallel Benchmarks (NAS PB) suite.
2 Design and Implementation
We have designed MPJava as an MPI-like message passing library implemented in pure Java, making use of the improved I/O capabilities of the java.nio package. MPJava adheres to the Single Program Multiple Data (SPMD) model used by MPI. Each MPJava instance knows how many total nodes are in use for the computation, as well as its own unique processor identification tag (PID). Using this information, the programmer can decide how to split up shared data. For example, if 10 nodes are being used for one MPJava computation, a shared array with 100 elements can store elements 0-9 on node 0, 10-19 on node 1, etc. Data can be exchanged between nodes using point-to-point send() and recv() operations, or with collective communications such as all-to-all broadcast. Distributing data in this manner and using communication routines is typical of MPI, OpenMP, and other parallel programming paradigms.
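A small self-contained sketch of this owner-computes arithmetic is shown below; the class and variable names are ours, chosen for illustration, and are not part of the MPJava API.

```java
// Block distribution of a shared "global" array across nodes, matching the
// example above (100 elements over 10 nodes -> elements 0-9 on node 0, etc.).
public class BlockDistribution {
    public static void main(String[] args) {
        int totalElements = 100;
        int totalNodes = 10;
        int pid = 3;  // this node's processor identification tag

        int blockSize = totalElements / totalNodes;   // 10 elements per node
        int begin = pid * blockSize;                  // 30 for node 3
        int end = (pid == totalNodes - 1)             // last node takes any remainder
                ? totalElements
                : begin + blockSize;
        System.out.println("node " + pid + " owns elements [" + begin + ", " + end + ")");
    }
}
```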
2.1 Functionality
The MPJava API provides point-to-point send() and recv() functions:
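A sketch of one plausible shape for these calls is shown below; the interface name, parameter list, and element type are assumptions made for illustration rather than MPJava's published signatures.

```java
// Hypothetical point-to-point interface sketch; only the method names send()
// and recv() come from the text above, the rest is assumed for illustration.
public interface PointToPoint {
    // Send count doubles starting at data[offset] to the node with PID destPid.
    void send(int destPid, double[] data, int offset, int count);

    // Receive count doubles from the node with PID srcPid into data at offset.
    void recv(int srcPid, double[] data, int offset, int count);
}
```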
These high-level functions abstract away the messy details related to TCP, allowing the user to focus on the application rather than the message passing details. MPJava also provides a subset of the collective communication operations typically available in a message passing library such as MPI. For example, if an array with 100 elements is distributed between 10 nodes, an all-to-all broadcast routine can be used to recreate the entire array of 100 elements on each node; a sketch of such a routine follows this paragraph. The distribution parameter is a constant that tells MPJava how the data is distributed between nodes. The default setting we use is as follows: an array with N elements is split across the nodes in equal-sized blocks of contiguous elements, with the last node holding any remaining elements. Other distribution patterns are possible, though we employ this simple default setting for our experiments.
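One plausible shape for such a collective routine is sketched below; the method name, the distribution constant, and the parameter order are assumptions for illustration, not the actual MPJava signature.

```java
// Hypothetical collective interface sketch (names and constant are assumed).
public interface Collectives {
    int BLOCK_DISTRIBUTION = 0;  // stands in for the default block layout described above

    // After this call returns on every node, 'whole' contains the concatenation
    // of each node's 'local' block, ordered by PID.
    void allToAllBroadcast(double[] local, double[] whole, int distribution);
}
```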
2.2 Bootstrapping
MPJava provides a series of start-up scripts that read a list of hostnames, perform the necessary remote logins to each machine, and start MPJava processes on each machine with special arguments that allow each MPJava process to find the others. The result of the bootstrap process is a network of MPJava processes where each process has TCP connections to every other process in the network. These TCP connections are used by the nodes for point-to-point as well as collective communications.
2.3 Collective Communication Algorithms
We explored two different all-to-all broadcast algorithms: a multi-threaded concurrent algorithm in which all pairs of nodes exchange data in parallel, and a parallel prefix algorithm that uses only a single thread. In the concurrent algorithm, each node has a separate send thread and receive thread, and the select() mechanism is used to multiplex communication to all the other processors. In the parallel prefix implementation, data exchange proceeds in rounds, with 2^(i-1) pieces of data sent in round i, where i is the current round number. For example, if there were 16 total nodes, node 0 would broadcast according to the following schedule:
Example broadcast schedule for node 0 with 16 total nodes

round   partner   data
  1        1      0
  2        2      0,1
  3        4      0-3
  4        8      0-7
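This schedule is the standard recursive-doubling pattern: in round i a node exchanges everything it currently holds with the node whose PID differs in bit i-1, so the amount of data held doubles each round. The following small program, written against that description rather than MPJava's sources, prints the schedule for any power-of-two node count.

```java
// Prints, for one node, its exchange partner and the block range it sends in
// each round of the recursive-doubling all-to-all broadcast described above.
public class PrefixSchedule {
    public static void main(String[] args) {
        int nodes = 16;   // must be a power of two for this simple sketch
        int me = 0;       // node whose schedule we print
        int rounds = Integer.numberOfTrailingZeros(nodes);  // log2(nodes)

        int blockStart = me, blockSize = 1;  // initially a node holds only its own block
        for (int round = 1; round <= rounds; round++) {
            int partner = me ^ (1 << (round - 1));           // partner differs in bit round-1
            System.out.printf("round %d: partner %d, data %d-%d%n",
                    round, partner, blockStart, blockStart + blockSize - 1);
            // After the exchange we also hold the partner's block, which starts at
            // blockStart ^ 2^(round-1); keep the lower of the two starts.
            blockStart = Math.min(blockStart, blockStart ^ (1 << (round - 1)));
            blockSize *= 2;                                  // data held doubles each round
        }
    }
}
```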
3 Introduction to Java.nio
Java's New I/O APIs (java.nio) are defined in Java Specification Request (JSR) 51 [6]. These New I/O, or NIO, libraries were heavily influenced by, and address a number of issues exposed by, the pioneering work of Matt Welsh et al. on JAGUAR [7] and of Chi-Chao Chang and Thorsten von Eicken on JAVIA [8].
3.1 Inefficiencies of Java.io and Java.net
The original java.io and java.net libraries available prior to JDK 1.4 perform well enough for client-server codes based on Remote Method Invocation (RMI) in a WAN environment. The performance of these libraries is not suitable, however, for high-performance communication in a LAN environment, due to several key inefficiencies in their design.

Under the java.io libraries, the process of converting between bytes and other primitive types (such as doubles) is inefficient. First, a native method is used that allows a double to be treated as a 64-bit long integer (JNI is required because type coercions from double to long are not allowed under Java's strong type system). Next, bit-shifts and bit-masks are used to strip 8-bit segments from the 64-bit integer and write these 8-bit segments into a byte array. java.nio buffers, in contrast, allow direct copies of doubles and other values to/from buffers, and also support bulk operations for copying between Java arrays and java.nio buffers.
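The contrast can be made concrete with a short sketch: the first method mimics the pre-NIO conversion of doubles to bytes by hand, while the second uses a java.nio bulk transfer. This is our own illustration of the point, not code taken from MPJava.

```java
import java.nio.ByteBuffer;

public class DoublePacking {
    // Pre-NIO style: turn each double into 8 bytes by hand.
    static void packByHand(double[] values, byte[] out) {
        for (int i = 0; i < values.length; i++) {
            long bits = Double.doubleToLongBits(values[i]);      // native call to reinterpret the bits
            for (int b = 0; b < 8; b++) {
                out[i * 8 + b] = (byte) (bits >>> (56 - 8 * b)); // strip 8-bit segments, big-endian
            }
        }
    }

    // NIO style: one bulk copy through a DoubleBuffer view, no per-element bit twiddling.
    static ByteBuffer packWithNio(double[] values) {
        ByteBuffer buf = ByteBuffer.allocate(values.length * 8);
        buf.asDoubleBuffer().put(values);   // bulk transfer of the whole array
        return buf;
    }
}
```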
The java.io operations work out of an array of bytes allocated in the Java heap. Java cannot pass references to arrays allocated in the Java heap to system-level I/O operations, because objects in the Java heap can be moved by the garbage collector. Instead, another array must be allocated in the C heap and the data must be copied back and forth. Alternatively, to avoid this extra overhead, some JVM implementations "pin" the byte array in the Java heap during I/O operations. java.nio buffers can be allocated as DirectBuffers, which are allocated in the C heap and are therefore not subject to garbage collection. This allows I/O operations with no more copying than what is required by the operating system for any programming language.

Prior to NIO, Java lacked a way for a single thread to poll multiple sockets, as well as the ability to make non-blocking I/O requests on a socket. The workaround of using a separate thread to poll each socket introduces unacceptable overhead for a high-performance application and simply does not scale well as the number of sockets increases. java.nio adds a Unix-like select() mechanism in addition to non-blocking sockets.
3.2 Java.nio for High-Performance Computing
MPJava demonstrates that Java can deliver performance competitive with MPI for message-passing applications. To maximize the performance of our framework we have made careful use of several java.nio features critical for high-performance computing: channels, select(), and buffers. Channels such as SocketChannel are a new abstraction for TCP sockets that complement the Socket class available in java.net. The major differences between channels and sockets are that channels allow non-blocking I/O calls, can be polled and selected by calls to java.nio's select() mechanism, and operate on java.nio.ByteBuffers rather than byte arrays. In general, channels are more efficient than sockets, and their use, as well as the use of select(), is fairly simple, fulfills an obvious need, and is self-explanatory. The use of java.nio.Buffers, on the other hand, is slightly more complicated, and we have found that careful use of buffers is necessary to ensure maximal performance of MPJava. We detail some of our experiences with buffers below.

One useful new abstraction provided by NIO is a Buffer, which is a container for a primitive type. Buffers maintain a position, and provide relative put() and get() methods that operate on the element specified by the current position. In addition, buffers provide absolute put(int index, byte val) and get(int index) methods that operate on the element specified by the additional index parameter, as well as bulk put() and get() methods that transfer a range of elements between arrays or other buffers. ByteBuffers allocated as DirectByteBuffers use a backing store allocated from the C heap that is not subject to relocation by the garbage collector.
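The following self-contained sketch shows the channel and select() facilities just described: a non-blocking SocketChannel registered with a Selector and read into a direct buffer. It is an illustration of the java.nio API, not MPJava's communication layer; the host and port are placeholders.

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioPeerLoop {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        // Connect to one peer non-blockingly; a real framework would do this
        // for every peer discovered during bootstrap.
        SocketChannel ch = SocketChannel.open();
        ch.configureBlocking(false);
        ch.connect(new InetSocketAddress("localhost", 9000)); // placeholder peer
        ch.register(selector, SelectionKey.OP_CONNECT | SelectionKey.OP_READ);

        ByteBuffer buf = ByteBuffer.allocateDirect(8 * 1024);
        while (selector.select() > 0) {          // block until a channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                SocketChannel c = (SocketChannel) key.channel();
                if (key.isConnectable()) c.finishConnect();
                if (key.isReadable()) c.read(buf); // fill the direct buffer
            }
        }
    }
}
```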
The contents of a DirectByteBuffer can be passed directly as arguments to system-level calls, with no additional copying required by the JVM. Because direct buffers are expensive to allocate and garbage collect, we preallocate all of the required buffers. The user sees a collective communication API call much like an MPI function; our framework handles behind the scenes any necessary copies of data between the user's arrays and our pre-allocated direct buffers.

For the best performance, it is important to ensure that all buffers are set to the native endianness of the hardware. Because Java is platform independent, it is possible to create buffers that use either big-endian or little-endian formats for storing multi-byte values. Furthermore, the default byte order of all buffers is big-endian, regardless of the native byte order of the machine where the JVM is executing. To communicate among a set of heterogeneous platforms with mixed byte orders, one would need to perform some extra bookkeeping and incur some performance overhead in the process. In our experience, this has never been an issue, as most clusters consist of a homogeneous set of machines.

ByteBuffer provides an asDoubleBuffer() method that returns a DoubleBuffer, which is a "view" of the chunk of backing data that is shared with the ByteBuffer. Maintaining multiple "views" of the same piece of data is important for three reasons. First, while ByteBuffer supports operations to read or write other primitive types such as doubles or longs, each operation requires checks for alignment and endianness in addition to the bounds checks typical of Java. Next, ByteBuffer does not provide bulk transfer operations for other primitive types. Finally, all socket I/O calls require ByteBuffer parameters. NIO solves all of these issues with multiple views: DoubleBuffer provides bulk transfer methods for doubles that do not require checks for alignment and endianness. Furthermore, these transfers are visible to the ByteBuffer "view" without the need for expensive conversions, since the ByteBuffer shares the same backing storage as the DoubleBuffer.

Maintaining two views of each buffer is cumbersome but manageable. We map each DoubleBuffer to its corresponding ByteBuffer with an IdentityHashMap and take care when changing the position of one of these buffers, as changes to the position of one buffer are not visible to other "views" of the same backing data. Furthermore, we are careful to prevent overlapping simultaneous I/O calls on the same chunk of backing data, as the resulting race condition leads to nasty, unpredictable bugs.

The MPJava API calls in our framework take normal Java arrays as parameters. This approach requires that data be copied from arrays into buffers before the data can be passed to system-level OS calls. To avoid these extra copy operations, we initially implemented our framework with an eye towards performing all calculations directly in buffers. Not only does this strategy require a more complicated syntax (buffers must be manipulated via put() and get() methods rather than the cleaner square bracket notation used with Java arrays), but the performance penalty for repeated put() and get() calls on a buffer is as much as an order of magnitude worse than similar code that uses Java arrays.
It turns out that the cost of copying large amounts of data from arrays into buffers before every send (and from buffers to arrays after each receive) is less than the cost of the put() and get() methods required to perform computations in the buffers.
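A minimal sketch of the pattern described above: a pre-allocated direct ByteBuffer in native byte order, a DoubleBuffer view mapped back to it with an IdentityHashMap, and a bulk copy from a user array before each channel write. The class and buffer size are illustrative assumptions, not MPJava's implementation.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.DoubleBuffer;
import java.nio.channels.SocketChannel;
import java.util.IdentityHashMap;
import java.util.Map;

public class BufferViews {
    // Pre-allocated direct buffer set to the native byte order, plus a double view.
    private final ByteBuffer bytes =
        ByteBuffer.allocateDirect(1 << 20).order(ByteOrder.nativeOrder());
    private final DoubleBuffer doubles = bytes.asDoubleBuffer();
    // Map each view back to its backing ByteBuffer, as described in the text.
    private final Map<DoubleBuffer, ByteBuffer> backing = new IdentityHashMap<>();
    { backing.put(doubles, bytes); }

    // Copy a user array into the pre-allocated buffer and write it to a channel.
    // Assumes the message fits in the pre-allocated buffer.
    void send(double[] data, SocketChannel channel) throws java.io.IOException {
        doubles.clear();
        doubles.put(data, 0, data.length);        // bulk copy, no per-element checks
        ByteBuffer out = backing.get(doubles);
        out.position(0).limit(8 * data.length);   // positions are per-view
        while (out.hasRemaining())
            channel.write(out);                   // channel I/O needs a ByteBuffer
    }
}
```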
4 Performance Results
We conducted these experiments on a cluster of Pentium III 650 MHz machines with 768 MB RAM, running Redhat Linux 7.3. They are connected by two channel-bonded 100 Mbps links through a Cisco 4000 switch capable of switching at a maximum of 45 million packets/s or 64 GB/s. We compared Fortran codes compiled with g77 2.96 and linked with LAM-MPI 6.5.8 against MPJava compiled with JDK 1.4.2-b04 and mpiJava 1.2.3 linked with mpich 1.2.4. We use mpich as the underlying MPI implementation for mpiJava because mpiJava supports mpich but not LAM. We chose LAM over mpich for our other experiments because LAM (designed for performance) delivers better performance than mpich (designed primarily for portability).
4.1 Ping-Pong
First we compare our MPJava framework with LAM-MPI and java.io for a ping-pong benchmark. The benchmark, based on the Java Grande Forum’s ping-pong benchmark, measures the maximum sustainable throughput between two nodes by copying data from an array of doubles on one processor into an array of doubles on the other processor and back again. The results are given in Figure 1. The horizontal axis represents the number of doubles swapped between each pair of nodes. To avoid any performance anomalies occurring at the powers of two in the OS or networking stack, we adopt the Java Grande Forum’s convention of using values that are similar to the powers of two. The vertical axis, labeled Mbits/s, shows bandwidth calculated as the total number of bits exchanged between a pair of nodes, divided by the total time for the send and receive operations. We only report results for the node that initiates the send first, followed by the receive, to ensure timing the entire round-trip transit time. Thus, the maximum bandwidth for this benchmark is 100 Mbps, or half of the hardware maximum. We report the median of five runs because a single slow outlier can impact the mean value by a significant amount, especially for small message sizes where the overall transmission time is dominated by latency. We used two different java.io implementations: java.io (doubles), which performs the necessary conversions from doubles to bytes and vice versa, and java.io (bytes), which sends an equivalent amount of data between byte arrays without conversions. The java.io (doubles) implementation highlights the tremendous overhead imposed by conversions under the old I/O model, while the results for java.io (bytes) represent an upper bound for performance of the old Java I/O model.
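Schematically, the measurement works as follows; the Transport interface is a stand-in for whichever implementation (MPJava, java.io, or MPI) is being timed, and the bandwidth is computed by counting the payload once and dividing by the full round-trip time, which is our reading of why the benchmark's ceiling is half the hardware maximum.

```java
// Schematic of the ping-pong measurement: the initiating node sends a block
// of doubles, waits for it to come back, and converts the elapsed time into
// Mbits/s. 'Transport' is a hypothetical stand-in for the library under test.
interface Transport {
    void send(double[] data) throws Exception;
    void receive(double[] data) throws Exception;
}

class PingPong {
    static double pingPongMbps(int nDoubles, Transport t) throws Exception {
        double[] data = new double[nDoubles];
        long start = System.nanoTime();
        t.send(data);          // outbound copy
        t.receive(data);       // wait for the echo
        long elapsed = System.nanoTime() - start;
        // Payload counted once, divided by the full round trip,
        // so the ceiling is half the link rate.
        double bits = 8.0 * 8 * nDoubles;
        return bits / (elapsed / 1e9) / 1e6;   // megabits per second
    }
}
```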
Fig. 1. Ping-Pong performance for MPJava, LAM-MPI, mpiJava, and java.io. Note that java.io (doubles) performs conversions between doubles and bytes, while java.io (bytes) does not
It is not surprising that our java.nio-enabled MPJava framework outperforms the java.io (doubles) implementation, because the conversions are extremely inefficient. However, MPJava also outperforms the java.io (bytes) implementation for data sizes larger than about 2000 doubles. We surmise that this is due to inefficiencies in java.io's buffering of data. Although both implementations need to copy data from the Java heap into the C heap, MPJava copies data from a Java array into a pre-allocated direct buffer that does not need to be cleaned up, while the java.io (bytes) implementation needs to allocate and then clean up space in the C heap. This may be an expensive operation on some JVMs. The native LAM-MPI implementation provides better performance than MPJava for message sizes up to about 1000 doubles, while MPJava provides superior performance for sizes larger than 7000 doubles. The main contribution of this particular experiment is the empirical evidence it provides that Java is capable of delivering sustained data transfer rates competitive with available MPI implementations on this common microbenchmark.
4.2 All-to-All
The next microbenchmark we implemented was an all-to-all bandwidth utilization microbenchmark based on the Java Grande Forum’s JGFAlltoAllBench.java. The all-to-all microbenchmark measures bandwidth utilization in a more realistic manner than ping-pong. An all-to-all communication is necessary when a vector shared between many nodes needs to be distributed, with each node sending its portion to every other node. Thus, if there are n nodes and the vector has v total elements, each node must communicate its v/n elements to n - 1 peers.
Fig. 2. All-To-All performance for MPJava, prefix algorithm
Fig. 3. All-To-All performance for LAM-MPI
Figure 2 presents the results of our framework using the parallel prefix algorithm, while Figure 4 shows the results for the concurrent algorithm. Figure 3 illustrates the performance of the same microbenchmark application written in C using the LAM-MPI library, and Figure 5 shows the results for mpiJava with bindings to mpich. Note that we do not chart mpiJava's performance for 32 nodes because the performance achieved was under 1 Mbps. The values on the X-axis represent the number of doubles exchanged between each pair of nodes. A value v on the X-axis means that a total of v * (n - 1) bytes were transmitted, where n is the number of nodes used. The actual values selected for the X-axis are the same as those used in the ping-pong microbenchmark, for the same reason. The Y-axis charts the performance in megabits/s (Mbps). We chose the median value of many runs because a single slow outlier can negatively impact the mean value, especially for small message sizes where overall runtimes are dominated by latency.
Fig. 4. All-To-All performance for MPJava, concurrent algorithm
Fig. 5. All-To-All performance for mpiJava
Thus, the "dips" and other irregularities are repeatable. Note that Figures 2, 3 and 4 have the same scale on the Y-axis, and the theoretical hardware maximum for this experiment is 200 Mbps. The MPJava concurrent broadcast algorithm occasionally outperforms the parallel prefix algorithm; however, the performance of the concurrent algorithm is not consistent enough to be useful. We believe this is due at least in part to sub-optimal thread scheduling in the OS and/or JVM. In addition, we were not able to achieve true concurrency for this experiment because the machines we used have only one CPU. MPJava's parallel prefix algorithm outperformed the LAM-MPI implementation for large message sizes. We ascribe this difference to the broadcast algorithms used: parallel prefix has a predictable send/receive schedule, while LAM-MPI uses a naïve all-to-all algorithm that exchanges data between each pair of nodes.
The comparison with mpiJava is somewhat unfair because mpich, the underlying native MPI library for mpiJava, gave substantially worse performance than LAM-MPI. However, the comparison does provide evidence of some of the performance hurdles that must be overcome for Java to gain acceptance as a viable platform for clustered scientific codes. While it is possible that a C-based MPI implementation could use a more sophisticated broadcast strategy that outperforms our current implementation, there is no reason why that strategy could not be incorporated into a java.nio implementation that would achieve similar performance.
4.3 CG
Our final performance results are for the NAS Parallel Benchmarks (NPB) Conjugate Gradient (CG) benchmark [9]. The CG benchmark provides a more realistic evaluation of the suitability of Java for high-performance scientific computation because it contains significant floating point arithmetic. The CG algorithm uses the inverse power method to find an estimate of the largest eigenvalue of a symmetric positive definite sparse matrix with a random pattern of nonzero values. The kernel of the CG algorithm consists of a multiplication of the sparse matrix A with a vector p, followed by two reductions of a double, then a broadcast of the vector p before the next iteration. These four core operations comprise over 80% of the runtime of the calculation. This kernel iterates 25 times, and is called by the CG benchmark 75 times to approximate a solution with the desired precision. We have evaluated the CG benchmark for the Class B and Class C sizes:

Class  rows of A  total nonzeroes in A  avg. nonzeroes/row
B      75,000     13,708,072            183
C      150,000    36,121,058            241

The data used by the CG benchmark is stored in Compressed Row Storage (CRS) format. The naïve way to parallelize this algorithm is to divide the m rows of the A matrix between n nodes when performing the matrix-vector multiplication, then use an all-to-all broadcast to recreate the entire p vector on each node. We implemented this approach in Fortran with MPI and also in MPJava, and provide results for this approach in Figure 6. Because g77 does not always adequately optimize code, we also ran the NAS CG benchmark using pgf90, the Portland Group's optimizing Fortran compiler. The performance was nearly identical to the g77 results. It is likely that even a sophisticated compiler cannot optimize in the face of the extra layer of indirection required by the CRS storage format for the sparse matrix A. The NPB implementation of CG performs a clever two-dimensional decomposition of the sparse matrix A that replaces the all-to-all broadcasts with reductions across rows of the decomposed matrix. The resulting communication pattern can be implemented with only send() and recv() primitives, and is more efficient than using collective communications.
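For reference, the kernel's sparse matrix-vector product over CRS storage has the following shape; this is a generic sketch with illustrative array names, not the benchmark's Fortran or MPJava source.

```java
// Sparse matrix-vector product y = A*p with A in Compressed Row Storage:
// rowPtr[i]..rowPtr[i+1]-1 index the nonzeroes of row i, colIdx holds their
// column numbers, and val holds their values. The indirect access into p is
// the extra level of indirection that compilers struggle to optimize.
static void crsMatVec(int[] rowPtr, int[] colIdx, double[] val,
                      double[] p, double[] y) {
    for (int i = 0; i < y.length; i++) {
        double sum = 0.0;
        for (int k = rowPtr[i]; k < rowPtr[i + 1]; k++)
            sum += val[k] * p[colIdx[k]];   // indirect access into p
        y[i] = sum;
    }
}
```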
Fig. 6. Conjugate Gradient, Class B: MPJava (mpj), Simple Fortran (for). Note that for each pair of stacked bar charts, MPJava is the leftmost, simple Fortran is the rightmost
Fig. 7. Conjugate Gradient, Class B: MPJava (mpj), Original NAS Fortran (for). Note that for each pair of stacked bar charts, MPJava is the leftmost, NAS Fortran is the rightmost
We implemented the more sophisticated decomposition algorithm used by the NAS CG implementation in MPJava, and provide results in Figure 7. We instrumented the codes to time the three major contributors to the runtime of the computation: the multiplication of the sparse matrix A with the vector p, the all-to-all broadcast of the vector p, and the two reductions required in the inner loop of the CG kernel. All four versions of the code perform the same number of floating point operations. We report results for four versions: naïve Fortran (Fortran), naïve MPJava (MPJava), Fortran with the 2D decomposition (Fortran 2D), and MPJava with the 2D decomposition (MPJava 2D). These results are given in Table 1 for the Class B problem size and Table 2 for the Class C problem size. The results of the naïve algorithm presented in Figure 6 show that MPJava is capable of delivering performance that is very competitive with popular, freely available, widely deployed Fortran and MPI technology.
The poor performance observable at 32 nodes for the Fortran code reflects the fact that LAM-MPI's all-to-all collective communication primitive does not scale well. These results highlight the importance of choosing a collective communication algorithm appropriate to the characteristics of the code being executed and the hardware configuration employed.
The results of the 2D decomposition algorithm presented in Figure 7 also show MPJava to be competitive with Fortran and MPI. Although the MPJava performance is slightly worse, it is within 10% of the Fortran/MPI results. Popular wisdom suggests that Java performs at least a factor of 2 slower than Fortran. While there is much work left to do in the field of high-performance Java computing, we hope that our results help bolster Java's case as a viable platform for scientific computing. The results of this benchmark suggest that MPJava is capable of delivering performance comparable to, or in excess of, the performance achievable by native MPI/C applications. In addition, this benchmark provides promising results for the current state of Java Virtual Machine (JVM) technologies. The results of the A.p sparse matrix-vector multiplications are nearly identical between the Simple Fortran and Simple MPJava versions, and MPJava remains close to Fortran for the 2D versions as well. The only optimization we performed on the A.p sparse matrix-vector multiplication code was unrolling the loop by a factor of 8, which accounted for an improvement of about 17% for the Simple MPJava implementation. We assume that the Fortran compilers already perform this optimization, as loop unrolling by hand had no effect on Fortran code compiled with either g77 or pgf90.
5 Related Work
There is a large body of work dealing with message passing in Java. Previous approaches can be loosely divided into two categories: Java/native hybrids and java.io approaches. JavaMPI [3] and mpiJava [2] are two efforts to provide native method wrappers to existing MPI libraries. The resulting programming style of JavaMPI is more complicated, and mpiJava is generally better supported. Both approaches provide the Java programmer access to the complete functionality of a well-supported MPI library such as MPICH [10]. This hybrid approach, while simple, does have a number of limitations. First, mpiJava relies on proper installation of an additional library. Next, the overhead of the Java Native Interface (JNI) imposes a performance penalty on native code, which will likely make the performance of an application worse than if it were directly implemented in C with MPI bindings. Furthermore, the JIT compiler must make maximally conservative assumptions in the presence of native code and may miss potential optimizations. Most java.io implementations are based on the proposed MPJ standard of Carpenter et al. [11]. However, there is no official set of MPI bindings for Java, so each implementation has its own particular advantages and disadvantages. MPIJ, part of the Distributed Object Groups Metacomputing Architecture (DOGMA) project at BYU [4], is a pure-Java implementation of a large subset of MPI features. Their implementation is based on the proposed MPI bindings of Carpenter et al. [11]. The MPIJ codebase was not available for public download at the time of publication. Steve Morin provides an excellent overview of MPIJ's design [12].
We were unable to find any published results on the performance of MPIJ. The Manta project [13] supports several interesting flavors of message-passing codes in Java, including Collective Communication Java (CCJ) [14], Group Method Invocation (GMI) [15], and Ibis [16]. CCJ is an RMI-based collective communication library written entirely in Java. It provides many features, but does not provide the all-to-all broadcast necessary for many scientific codes such as Conjugate Gradient. GMI is a generalization of Java RMI in which methods can be invoked on a single object or on a group of objects, and results can be discarded, returned normally, or combined into a single result. This work is extremely interesting from a high-level programmatic perspective, as its fully orthogonal group-based design potentially allows programs that break from the SPMD model so dominant in MPI codes. Ibis harnesses many of the techniques and infrastructures developed through CCJ and GMI to provide a flexible GRID programming environment. MPI Soft Tech Inc. announced a commercial endeavor called JMPI, an effort to provide MPI functionality in Java. However, they have yet to deliver a product; all we have are their design goals [17]. JCluster [18] is a message-passing library that provides PVM- and MPI-like functionality in Java. The library uses threads and UDP for improved performance, but does not utilize java.nio, so its communications are subject to the inefficiencies of the older java.io package. At the time of publication, an alpha version of the library was available for Windows but did not work properly under Linux. JPVM [19] is a port of PVM to Java, with syntactic and semantic modifications better suited to Java's capabilities and programming style. The port is elegant, full-featured, and provides additional novel features not available to PVM implementations in C or Fortran. However, the lackluster performance of JPVM, due in large part to the older I/O libraries, has proved a limiting factor to its wide adoption. KARmi [20] presents a native-code mechanism for the serialization of primitive types in Java. While extremely efficient, native serialization of primitive types into byte arrays violates type safety and cannot benefit from java.nio SocketChannels. Titanium [21] is a dialect of Java that provides new features useful for high-performance computation in Java, such as immutable classes, multidimensional arrays, and zone-based memory management. Titanium's backend produces C code with MPI calls, so its performance is unlikely to exceed that of native MPI/C applications, and could be substantially worse. Al-Jaroodi et al. provide a very useful overview of the state of distributed Java endeavors [22]. Much work has been done on GRID computing. Our work does not directly address issues important to the GRID environment, such as adaptive dynamic scheduling or automatic parallelism. Rather, we focus on developing an efficient set of communication primitives that any GRID-aware library can be built on top of.
6 Conclusion
We have built a pure Java message-passing framework using NIO. We demonstrate that a message passing framework that harnesses the high-performance communication capabilities of NIO can deliver performance competitive with native MPI codes. We also provide empirical evidence that current Java virtual machines can produce code competitive with static Fortran compilers for scientific applications rich in floating point arithmetic.
7 Future Work
Though MPI supports asynchronous messages, it typically does so without the benefit of threads, and in a way that is cumbersome for the programmer. We have a modified version of our framework that provides the abstraction of asynchronous pipes. This is accomplished through separate send and receive threads that make callbacks to user-defined functions. We would like to evaluate the performance of our asynchronous message-passing framework for problems that do not easily fit into an SPMD model, such as distributed work-stealing and work-sharing. Clusters are increasingly composed of interconnected SMPs. It is typically not useful to schedule multiple MPI tasks on an SMP node, as the additional processes will fight over shared resources such as bandwidth and memory. A Java framework that supports the interleaving of computation and communication through send, receive, and compute threads can better utilize extra processors, because the JVM is free to schedule its threads on all available processors. We have developed a threaded version of our MPJava framework that maintains a send, a receive, and a computation thread. In the CG algorithm, since each node only needs the entire p vector for the A.p portion of any iteration, and the broadcast and matrix-vector multiply steps are both significant contributors to the total runtime, we use threads to interleave the communication and computation of these steps. Our preliminary results were worse than the single-threaded results, most likely due to poor scheduling of threads by the OS and the JVM. The notion of interleaving computation and communication, especially on an SMP, is still very appealing and requires more study. Multi-threading is an area where a pure-Java framework can offer substantial advantages over MPI-based codes, as many MPI implementations are not fully thread-safe. Although interest in an MPI-like Java library was high several years ago, interest seems to have waned, perhaps due to the poor performance reported for previous implementations. Now that NIO enables high-performance communication, it is time to reassess the interest in MPI-Java.
Finally, we would like to investigate the high-level question of whether a high-performance message-passing framework in Java should target MPI, or should adhere to its own standard.
Acknowledgements This work was supported by the NSA, the NSF and DARPA.
References

[1] The Message Passing Interface Standard.
[2] Baker, M., Carpenter, B., Ko, S., Li, X.: mpiJava: A Java interface to MPI (1998)
[3] Mintchev, S.: Writing programs in JavaMPI. Technical Report MAN-CSPE-02, School of Computer Science, University of Westminster, London, UK (1997)
[4] Judd, G., Clement, M., Snell, Q.: DOGMA: Distributed Object Group Management Architecture. In: Concurrency: Practice and Experience (1998)
[5] LAM (Local Area Multicomputer), http://www.lam-mpi.org (2002)
[6] JSR 51 - New I/O APIs for the Java Platform, http://www.jcp.org/jsr/detail/51.jsp (2002)
[7] Welsh, M., Culler, D.: Jaguar: enabling efficient communication and I/O in Java. Concurrency: Practice and Experience 12 (2000) 519–538
[8] Chang, C.C., von Eicken, T.: Interfacing Java with the Virtual Interface Architecture. ACM Java Grande (1999)
[9] The NAS Parallel Benchmarks, http://www.nas.nasa.gov/nas/npb/
[10] MPICH - A Portable Implementation of MPI, http://www-unix.mcs.anl.gov/mpi/mpich/ (2002)
[11] Carpenter, B., Getov, V., Judd, G., Skjellum, A., Fox, G.: MPJ: MPI-like message passing for Java. Concurrency: Practice and Experience 12 (2000) 1019–1038
[12] Morin, S.R., Koren, I., Krishna, C.M.: Jmpi: Implementing the Message Passing Interface Standard in Java. In: IPDPS Workshop on Java for Parallel and Distributed Computing (2002)
[13] Manta: Fast Parallel Java, http://www.cs.vu.nl/manta/
[14] Nelisse, A., Kielman, T., Bal, H., Maassen, J.: Object Based Collective Communication in Java. Joint ACM Java Grande - ISCOPE Conference (2001)
[15] Maassen, J., Kielmann, T., Bal, H.E.: GMI: Flexible and Efficient Group Method Invocation for Parallel Programming. In: Languages, Compilers, and Runtime Systems (2002) 1–6
[16] van Nieuwpoort, R.V., Nelisse, A., Kielman, T., Bal, H., Maassen, J.: Ibis: an Efficient Java-based Grid Programming Environment. Joint ACM Java Grande - ISCOPE Conference (2002)
[17] JMPI, http://www.mpi-softtech.com/publications/jmpi121797.html
[18] JCluster, http://vip.6to23.com/jcluster/ (2002)
[19] Ferrari, A.: JPVM: network parallel computing in Java. Concurrency: Practice and Experience 10 (1998) 985–992
[20] Nester, C., Philippsen, M., Haumacher, B.: A More Efficient RMI for Java. In: Java Grande (1999) 152–159
[21] Yelick, K., Semenzato, L., Pike, G., Miyamoto, C., Liblit, B., Krishnamurthy, A., Hilfinger, P., Graham, S., Gay, D., Colella, P., Aiken, A.: Titanium: A high-performance Java dialect. In: ACM 1998 Workshop on Java for High-Performance Network Computing, New York, NY, USA, ACM Press (1998)
[22] Al-Jaroodi, Mohamed, Jiang, Swanson: A Comparative Study of Parallel and Distributed Java Projects. In: IPDPS Workshop on Java for Parallel and Distributed Computing (2002)
Polynomial-Time Algorithms for Enforcing Sequential Consistency in SPMD Programs with Arrays

Wei-Yu Chen¹, Arvind Krishnamurthy², and Katherine Yelick¹

¹ Computer Science Division, University of California, Berkeley
{wychen,yelick}@cs.berkeley.edu
² Department of Computer Science, Yale University
[email protected]
Abstract. The simplest semantics for parallel shared memory programs is sequential consistency, in which memory operations appear to take place in the order specified by the program. But many compiler optimizations and hardware features explicitly reorder memory operations or make use of overlapping memory operations, which may violate this constraint. To ensure sequential consistency while allowing for these optimizations, traditional data dependence analysis is augmented with a parallel analysis called cycle detection. In this paper, we present new algorithms to enforce sequential consistency for the special case of the Single Program Multiple Data (SPMD) model of parallelism. First, we present an algorithm for the basic cycle detection problem, which lowers the running time from O(n³) to O(n²). Next, we present three polynomial-time methods that more accurately support programs with array accesses. These results are a step toward making sequentially consistent shared memory programming a practical model across a wide range of languages and hardware platforms.
1 Introduction
In a uniprocessor environment, compiler and hardware transformations must adhere to a simple data dependency constraint: the orders of all pairs of conflicting accesses (accesses to the same memory location, with at least one a write) must be preserved. The execution model for parallel programs is considerably more complicated, since each thread executes its own portion of the program asynchronously, and there is no predetermined ordering among accesses issued by different threads to shared memory locations. A memory consistency model defines the memory semantics and restricts the possible execution orders of memory operations. Of the various memory models that have been proposed, the most intuitive is sequential consistency, which states that a parallel execution must behave as if it were an interleaving of the serial executions by individual threads, with each execution sequence preserving the program order [1]. Sequential consistency is a natural extension of the uniprocessor execution model and is violated when the reordering of memory operations performed by one thread can be observed by another
thread, and thus potentially visible to the user.

Fig. 1. Violation of Sequential Consistency: The actual execution may produce results that would not happen if execution follows program order

Figure 1 shows a violation of sequential consistency due to reordering of memory operations. Although there are no dependencies between the two write operations in one thread or the two read operations in the other, if either pair is reordered, a surprising behavior may result, which does not satisfy sequential consistency. Despite its advantage in making parallel programs easier to understand, sequential consistency can be expensive to enforce. A naive implementation would forbid any reordering of shared memory operations, by both restricting compile-time optimizations and inserting a memory fence between every consecutive pair of shared memory accesses from a given thread. The fence instructions are often expensive, and the optimization restrictions may prevent code motion, prefetching, and pipelining [2]. Rather than restricting reordering between all pairs of accesses, a more practical approach computes a subset that is sufficient to ensure sequential consistency. This set is called a delay set, because the second access will be delayed until the first has completed. Several researchers have proposed algorithms for finding a minimal delay set, which is the set of pairs of memory accesses whose order must be preserved in order to guarantee sequential consistency [3, 4, 5]. The problem of computing delay sets is relevant to any programming model that is explicitly parallel and allows processors to access shared variables, including serial languages extended with a thread library and languages like Java with a built-in notion of threads. It is especially relevant to global address space languages like UPC [6], Titanium [7], and Co-Array Fortran [8], which are designed to run on machines with physically distributed memory but allow one processor to read and write the remote memory on another processor. For these languages, the equivalent of a memory barrier may thus be a round-trip event.
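Since the code in Figure 1 itself does not survive extraction, the following is a conventional Java illustration of the same phenomenon (two writes on one thread, two reads on the other), and not necessarily the figure's exact code.

```java
// Without the delays/fences that the analysis computes, reordering within
// either thread can produce a result no interleaving of the program orders allows.
public class ScViolation {
    static int data = 0, flag = 0;   // deliberately not volatile

    public static void main(String[] args) throws InterruptedException {
        int[] r = new int[2];
        Thread writer = new Thread(() -> { data = 1; flag = 1; });       // two writes
        Thread reader = new Thread(() -> { r[0] = flag; r[1] = data; }); // two reads
        writer.start(); reader.start();
        writer.join();  reader.join();
        // Under sequential consistency, r[0] == 1 implies r[1] == 1.
        // If either pair of accesses is reordered, r[0] == 1 && r[1] == 0
        // may be observed on some platforms and JITs.
        System.out.println("flag=" + r[0] + " data=" + r[1]);
    }
}
```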
In this paper, we focus on efficient algorithms to compute the delay sets for various types of Single Program Multiple Data (SPMD) programs. For example, given the sample code in Figure 1, the analysis would determine that neither pair of accesses can be reordered without violating sequential consistency. Our analysis framework is based on the cycle detection problem first described by Shasha and Snir [3]; previous work [9] showed that such analysis for SPMD programs can be performed in polynomial time. In this paper we substantially improve both the speed and the accuracy of the SPMD cycle detection algorithm described in [9]. By utilizing the concept of strongly connected components, we improve the running time of the analysis asymptotically from O(n³) to O(n²), where n is the number of shared memory accesses in the program. We then present three methods that extend SPMD cycle detection to handle programs with array accesses by incorporating into our analysis data dependence information from array indices. All three methods significantly improve the accuracy of the analysis for programs with loops; each differs in its relative precision and offers varying degrees of applicability and speed, so developers can efficiently exploit their tradeoffs. The rest of the paper is organized as follows. We formally define the problem in Section 2 and summarize the earlier work on it in Section 3. Section 4 describes our improvements to the running time of the analysis, while Section 5 presents extensions to the cycle detection analysis that significantly improve the quality of the results for programs with array accesses. Section 6 concludes the paper.
2 Problem Formulation
Our analysis is designed for shared memory (or global address space) programs with an SPMD model of parallelism. An SPMD program is specified by a single program text, which defines an individual program order for each thread. Threads communicate by explicitly issuing reads and writes to shared variables. For simplicity, we consider the program to be represented by its control flow graph, P. An execution of an SPMD program for a given number of threads is a set of sequences of operations, one per thread, each of which is consistent with P. An execution defines a partial order, which is the union of those sequences.

Definition 1 (Sequential Consistency). An execution is sequentially consistent if there exists a total order consistent with the execution's partial order, such that the total order is a correct serial execution.

We are interested only in the behavior of the shared memory operations, and thus restrict our attention to the subgraphs containing only such operations. In general, parallel hardware and conventional compilers will allow memory operations to execute out of order as long as they preserve the program dependencies. We model this by relaxing the program orders for each thread, and instead use a subset of P called the delay set, D.

Definition 2 (Sufficient Delay Set). Given a program graph P and a subgraph D, D is a sufficient delay set if all executions of D are equivalent to a sequentially consistent execution of P.
All executions must now observe only the program dependencies within each thread and the orderings given in D. Intuitively, the delay set contains pairs of memory operations that execute in order. They are implemented by preventing program transformations that would lead to reordering and by inserting memory fences during code generation to ensure that the hardware preserves the order. A naive algorithm will take D to be the entire program ordering P, forcing compilers and hardware to strictly follow program order. A delay set is considered minimal if no strict subset is sufficient. We are now ready to state the problem in its most general form: Given a program graph P for an SPMD parallel program, find a sufficient minimal delay set D for P.
3 Background

3.1 Related Work
Shasha and Snir [3] pioneered the study of correct execution of explicitly parallel programs and characterized the minimal set of delays required to preserve sequential consistency. Their results apply to an arbitrary set of parallel threads (not necessarily an SPMD program), but do not address programs with branches, aliases, or array accesses. Midkiff and Padua [2] further demonstrated that the delay set computation is necessary for performing a large number of standard compile-time optimizations. They also extended Shasha and Snir's characterization to work for programs with array accesses, but did not provide a polynomial-time algorithm for performing the analysis. Krishnamurthy and Yelick [4, 9] later showed that Shasha and Snir's framework for computing the delay set results in an intractable NP-hard problem for MIMD programs and proposed a polynomial-time algorithm for analyzing SPMD programs. They also improved the accuracy of the analysis by treating synchronization operations as special accesses whose semantics is known to the compiler. They further demonstrated that the analysis enables a number of techniques for optimizing communication, such as message pipelining and prefetching. Once the delay set has been computed, sequential consistency can be enforced by inserting memory barriers into the program to satisfy the delays. Lee and Padua [10] presented a compiler technique that reduces the number of fence instructions for a given delay set by exploiting the properties of fence and synchronization operations. Their work is complementary to ours, as it assumes the delay set is already available, while we focus on the earlier problem of computing the minimal set itself. Recent studies have focused on data structures for correct and efficient application of standard compile-time optimizations for explicitly parallel programs. Lee et al. [5] introduced a concurrent CFG representation for summarizing control flow of parallel code, and a concurrent SSA form that encodes sequential data flow information as well as information about possibly conflicting accesses from concurrent threads. They also showed how several classical analyses and
optimizations can be extended to work on the CSSA form to optimize parallel code without violating sequential consistency. Knoop and Steffen [11] showed that unidirectional bitvector analyses can be performed for parallel programs to enable optimizations such as code motion and dead code elimination without violating sequential consistency.
3.2 Cycle Detection
Analyses in this paper are based on Shasha and Snir's [3] cycle detection algorithm, which we briefly describe here. All violations of sequential consistency can be attributed to conflicting accesses:

Definition 3 (Conflicting Accesses). Two shared memory operations from different threads are said to conflict if they access the same memory location, and at least one of them is a write.

Conflicting accesses are the mechanism by which parallel threads communicate, and also the means by which one thread can observe memory operations reordered by another. The program order P defines a partial order on individual threads' memory accesses, but does not impose any restrictions on how operations from different threads should be interleaved, so there is not a single program behavior against which we can define correct reorderings. Instead, a happens-before relation for shared memory accesses originating from different threads is defined at run time, based on the time of their occurrences, to fully capture the essence of a parallel execution. Due to its nondeterministic nature, each instance of parallel execution defines a different happens-before relation, which may affect execution results depending on how it orders conflicting accesses. For a given parallel execution, let E be the partial order on conflicting accesses that is exhibited at runtime, which is determined by the values returned by reads from writes. The graph given by P together with E captures all information necessary to reproduce the results of a parallel execution: P orders accesses on the same thread, while E orders accesses from different threads to the same memory location. If there is a violation of sequential consistency, then for two accesses u and v issued by the same thread, both (u, v) and (v, u) are related in the transitive closure of the combined graph. Viewed as a graph, such a situation occurs exactly when the combined graph contains a cycle that includes E edges.¹ Since we cannot predict at compilation time which access in a conflicting pair will happen first, we approximate E by C, the conflict relation, which is a superset of E and contains all pairs of conflicting accesses. The conflict relation is irreflexive, symmetric, and not transitive, and can be represented in a graph as bidirectional edges between two conflicting accesses. The goal of Shasha and Snir's analysis is thus to perform cycle detection on the combined graph of P and C. Their algorithm uses the notion of a critical cycle to find the minimal delay set necessary for sequential consistency:

Definition 4 (Critical Cycle). A critical cycle is a simple cycle in the combined graph with the property that for any two non-adjacent nodes u and v in the cycle, there is no edge between u and v in the graph.

¹ Intrinsic cycles in P due to loops are ruled out.
Fig. 2. Computing the Delay Set
In other words, when detecting cycles we always attempt to find minimal cycles, and a critical cycle can have at most two (successive) nodes in any thread. Shasha and Snir proved the following theorem [3], which states that the P edges occurring in critical cycles form a delay set that guarantees sequential consistency:

Theorem 1 (Existence of a Minimal Delay Set for SC). Let D be the set of P edges in straight-line code that are part of some critical cycle. Then any execution order that preserves the delays in D is sequentially consistent; furthermore, the set D is minimal.

Figure 2 shows how a critical cycle can be used to compute the minimal delay set for sequential consistency, for the sample code from Figure 1.
3.3 Cycle Detection for SPMD Programs
Detecting critical cycles for an arbitrary program order P, unfortunately, is NP-hard, as the running time is exponential in the number of threads. Krishnamurthy and Yelick [9] proposed a polynomial-time algorithm for the common special case of SPMD programs, taking advantage of the fact that all threads execute identical code. Their algorithm, explained in detail in [12], works as follows:
Definition 5 (Conflict Graphs for SPMD Programs). Consider a left copy and a right copy of the original program P, each containing the same accesses as P. Define C to be the set of conflicting accesses. The conflict graph CG contains the nodes of both copies, retains the P edges only in the right copy, and adds a conflict edge between a node in the left copy and a node in the right copy whenever the corresponding pair of accesses is in C.
The graph CG, named the conflict graph, will also be used in other analyses described later in this paper. The right side of the conflict graph is identical to P, while the left side has no internal edges and connects to the right side via the conflict edges.
Algorithm 1: Krishnamurthy and Yelick's Algorithm for SPMD Cycle Detection

Krishnamurthy and Yelick described an algorithm that computes the delay set by detecting a back-path in the transformed graph for each P edge, and proved the following theorem in [9]:

Theorem 2 (Cycle Detection for SPMD Programs). For an edge (u, v) in P, if there exists a path from v back to u in CG, then (u, v) belongs to the minimal delay set. Furthermore, the delay set computed is the same as the one defined in Theorem 1.

Based on the above theorem, they claimed that cycle detection for SPMD programs can be performed in O(n³) time (Algorithm 1), where n is the number of shared memory accesses in P.
4 A Faster Algorithm for SPMD Cycle Detection
In this section, we show a slight modification of Krishnamurthy and Yelick's algorithm that computes the identical delay set in O(n²) time. Algorithm 1 is easy to understand but inefficient, due to the breadth-first search required for each node. Instead, we can improve its running time by using strongly connected components (SCCs) to avoid the redundant computations performed for each node. Note that proofs of the theorems presented in this paper have been omitted due to space constraints; interested readers can find them in our technical report [13], which contains the full version of the paper. Our algorithm is similar to the one proposed in [14] in that both rely on the concept of strong connectivity; an important distinction, however, is that we do not require initialization writes for every variable. If all accesses are read-only, step 3 fails due to the absence of conflicts, and no edges will be added to the delay set. This difference is vital if we want to combine the algorithm with synchronization analysis of barriers, since it is common for SPMD variables to be read-only in some phases of the program. Before proving Algorithm 2, we first explain the claim in step 3 that all conflicting accesses of a node belong to the same strongly connected component. Consider a node u and any two of its conflicting accesses v1 and v2. Since there exist bidirectional edges between u and v1 and between u and v2 (the C edges), it is clear that all three belong to the same SCC.
Algorithm 2: An O(n²) Algorithm for Computing Delay Set
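The algorithm box itself is not reproduced here. The sketch below illustrates the underlying idea: compute strongly connected components of the conflict graph once (Kosaraju's algorithm), so that per-edge questions such as the step-3 claim above can be answered without a fresh search per edge. It is an illustration of the approach, not the authors' pseudocode, and the graph representation is an assumption.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Deque;
import java.util.List;

public class SccSketch {
    // comp[i] = strongly connected component id of node i (Kosaraju's algorithm).
    static int[] components(List<List<Integer>> adj) {
        int n = adj.size();
        boolean[] seen = new boolean[n];
        Deque<Integer> finish = new ArrayDeque<>();
        for (int s = 0; s < n; s++) firstPass(s, adj, seen, finish);

        List<List<Integer>> rev = new ArrayList<>();
        for (int i = 0; i < n; i++) rev.add(new ArrayList<>());
        for (int u = 0; u < n; u++)
            for (int v : adj.get(u)) rev.get(v).add(u);

        int[] comp = new int[n];
        Arrays.fill(comp, -1);
        int c = 0;
        while (!finish.isEmpty()) {
            int s = finish.pop();
            if (comp[s] == -1) secondPass(s, rev, comp, c++);
        }
        return comp;
    }

    private static void firstPass(int u, List<List<Integer>> adj,
                                  boolean[] seen, Deque<Integer> finish) {
        if (seen[u]) return;
        seen[u] = true;
        for (int v : adj.get(u)) firstPass(v, adj, seen, finish);
        finish.push(u);   // popped later in decreasing finish time
    }

    private static void secondPass(int u, List<List<Integer>> rev,
                                   int[] comp, int c) {
        comp[u] = c;
        for (int v : rev.get(u))
            if (comp[v] == -1) secondPass(v, rev, comp, c);
    }

    // Accesses joined by bidirectional C edges always share a component,
    // which is the property used in step 3 of the algorithm.
    static boolean sameComponent(int[] comp, int u, int v) {
        return comp[u] == comp[v];
    }
}
```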
We can now show that for an SPMD program this modified algorithm is equivalent to Algorithm 1, and calculate its running time.

Theorem 3. Algorithm 2 returns the same delay set as Algorithm 1 for any SPMD program.

Proof. The proof can be found in [13].

Theorem 4. Algorithm 2 runs in
O(n²) time, where n is the number of shared accesses in P.

Proof. The proof can be found in [13].
5 Extending SPMD Cycle Detection for Array Accesses
Another area in which the SPMD cycle detection algorithm can be improved is the quality of the delay set for array accesses. Although Theorem 2 states that the delay set computed by the algorithm is "minimal", the claim holds only for straight-line code with perfect alias information. The algorithm is therefore overly conservative when analyzing array accesses in loops; every P edge inside a loop can be included in the delay set, as a back-path can be constructed using the loop's back edge. This has an undesirable effect on performance, as the false delays can thwart useful loop optimizations such as loop-invariant code motion and software pipelining. In this section, we present an analysis framework that extends SPMD cycle detection to handle array accesses. After describing an existing approach that requires exponential time in Section 5.1, we present three polynomial-time algorithms that can significantly reduce the number of delays inside loops. While all three techniques collect information from array subscripts to make the analysis more precise, they differ in their approaches to representing and propagating information: classical graph theory algorithms, data-flow analysis, and integer programming methods. The choice among the three methods largely depends on the amount of information that can be statically extracted from the array subscripts;
Fig. 3. Conflict Graph with Corresponding Constraints
for instance, the data-flow analysis approach sacrifices some precision for a more efficient algorithm, and the integer programming technique supports complex affine array expressions at the cost of increased complexity. For simplicity, we consider nested well-behaved loops in C, where each dimension of the loop is a for loop that increments its index from a lower bound to an upper bound by a fixed stride, with the following provisions: the loop bounds and loop_body may be different for each thread, the loop index is not modified in the loop body, and array subscripts are affine expressions of the loop indices. While the definition may seem restrictive, in practice loops in scientific applications with regular access patterns frequently exhibit this characteristic. We further assume that the base address of the array access is a constant, and that different array variables do not overlap (i.e., arrays but not pointers in C).
5.1 Existing Approach
Midkiff et al. [15] proposed a technique that extends Shasha and Snir's analysis to support array accesses. Under their approach, every edge of the conflict graph is associated with a linear constraint that relates the array subscripts of its two nodes. A conflict edge generates an equality constraint, since the existence of a conflict implies that the subscripts must be equal. Also, the constraint of each conflict edge uses a fresh variable for the loop index, as in general the conflicts can happen in different iterations. The constraint for a P edge is only slightly more complicated. Consider a P edge whose source and target accesses have subscripts that are affine functions of possibly different loop index values. From the definition of P we can immediately derive that the difference between the two index values is a multiple of the loop increment, since the source access happens first by program order. The inequality constraint depends on the context of the P edge: the target's index value must be strictly greater than the source's if the edge is the back edge, and greater than or equal to it otherwise. Figure 3 shows the constraints generated by each edge in the sample graph.
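Figure 3 does not survive extraction, so the constraints below are purely illustrative stand-ins of the shapes just described, assuming accesses A[i+1] and A[j] and a unit loop increment; they are not the figure's actual constraints.

```latex
% Hypothetical constraint shapes for one P edge and one conflict edge,
% assuming accesses A[i+1] and A[j] and a unit loop increment.
\begin{align*}
\text{P edge (forward):}   &\quad i' \ge i \\
\text{P edge (back edge):} &\quad i' \ge i + 1 \\
\text{conflict edge:}      &\quad i + 1 = j \quad \text{(subscripts must be equal)}
\end{align*}
```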
Fig. 4. Adding Edge Weights for Cycle Detection
Once the constraints are specified, the next step is to generate all cycles in the graph that may correspond to critical cycles (Definition 4). Midkiff et al. showed that the problem can be reduced to finding solutions to the linear system formed by the constraints of every edge in the cycle; a delay is necessary only for edges on a cycle with a consistent linear system. If a cycle satisfies the criteria, a final filtering step is applied to see if it can be discarded because it shares common solutions with another, smaller cycle. While their technique successfully incorporates dependence information to improve the accuracy of the analysis, its applicability appears limited due to two factors. First, it does not specify how to generate the cycles in the conflict graph; the total number of (simple and non-simple) cycles is exponential in the number of nodes, so a brute force method that examines all of them is clearly not practical. Another limitation is the cost of solving each linear system, which is equivalent to integer linear programming, a well-known NP-complete problem. Since a cycle can contain a number of edges, and thus constraints, that grows with the program size, solving the system again requires exponential time in the worst case. As a result, in the next section we present several polynomial-time algorithms that make cycle detection practical for programs with loops.
5.2 Polynomial-Time Cycle Detection for Array Accesses
Our analysis framework combines array dependence information with the conflict graph, except that we assign each P edge an integer weight equal to the difference between the array subscripts of its two nodes. Scalars can be considered as array references with a zero subscript. Also, an optional preprocessing step can apply affine memory disambiguation techniques [16] to eliminate conflict edges between independent array accesses. Figure 4 illustrates this construction², where the two edges in the loop body receive weights of 1 and -1, and the back edges are assigned the values 0 and 2, reflecting both the difference between the array subscripts and the increment of the loop index variable after each iteration.
² We showed only the right part of the conflict graph, as the left part remains unchanged.
Algorithm 3: Handling Array Accesses Through Zero Cycle Detection

Conflict edges always have zero weight, as the presence of a conflict implies that the two array subscripts must be equal. For an edge (u, v), the goal of the analysis is not only to detect a path from v back to u in the conflict graph, but also to verify that the back-path together with the edge forms a (not necessarily simple) cycle with zero weight:

Theorem 5 (Cycle Detection with Weighted Edges). With the above construction, an edge (u, v) is in the delay set if it satisfies the conditions in Theorem 2 and the edge weights along the corresponding cycle sum to zero.

Proof. The proof can be found in [13].

Zero Cycle Detection: If all edge weights are compile-time constants, Theorem 5 can be reduced to the problem of finding zero cycles in the conflict graph. On the surface the reduced problem still seems difficult to solve, as finding a simple cycle with zero total weight is known to be NP-complete. For our purposes, however, we are interested in finding zero cycles that need not be simple, as a zero cycle that visits a node multiple times conveys a delay due to conflicts among array accesses in different iterations. Several studies [17, 18] have presented recurrence equations and linear programming techniques to solve the general form of the ZERO-CYCLE problem, which determines, for a graph G with k-dimensional vector edge weights, whether it contains a cycle whose weights sum to the zero vector. In particular, Cohen and Megiddo [19] proved that zero cycle detection for a graph with fixed k can be performed in polynomial time; they further showed that the special case of k = 1 can be answered in O(n³) time using a modified all-pairs shortest path algorithm, where n is the number of nodes. Algorithm 3 computes the delay set based on this result. Because each candidate edge requires its own invocation of the zero cycle detection algorithm, Algorithm 3 is unfortunately asymptotically more expensive than Algorithm 2. The loss in efficiency is compensated, however, by obtaining a much more accurate delay set.
Fig. 5. SPMD Code for which Algorithm 3 Is More Accurate Than Algorithm 1
Figure 5 demonstrates the analysis's benefit: while plain SPMD cycle detection (Algorithm 1) would incorrectly include every P edge in the delay set, due to spurious cycles created by the loop back edge, Algorithm 3 accurately eliminates these unnecessary delays. Another benefit of this algorithm is that it can be easily extended to support multidimensional arrays. For a k-dimensional iteration space, we simply construct CG using k-dimensional vectors as its edge weights, with each element corresponding to one loop index variable. As the level of loop nesting in real programs rarely exceeds 4 or 5, this more complex scenario can still be solved in the same asymptotic time as the scalar-weight case.

Data-Flow Analysis Approximation: The major limitation of Algorithm 3 is that edge weights in general may not be compile-time constants; for example, it is common in scientific code for a loop to perform strided array accesses with a stride that is dynamic or a run-time constant. The signs of the weights, however, are usually statically known, and using abstract interpretation techniques [20] we can deduce the sign of a cycle's weight sum. If every edge of the cycle has the same sign, it can never be a zero cycle; otherwise we conservatively assume that it may satisfy the conditions in Theorem 5. Algorithm 4 generalizes this notion by applying data-flow analysis, with the lattice and flow equations in Figure 6, to estimate the weight sum of each potential cycle. For each P edge (u, v), the analysis computes the possible sign of the weight of any path from v back to u; therefore, if u is reachable from v (indicating a back-path) and the computed sign is either + or -, the edge by definition will not be part of any zero cycle. This approach is a sound but conservative approximation of the zero cycle detection problem, and thus may compute some false positive delays. While it gives the same result as Algorithm 3 for Figure 4 (delays) and Figure 5 (no delays), the more complicated example in Figure 3 illustrates their differences. Although the zero cycle detection algorithm correctly concludes that sequential consistency can never be violated there, due to the absence of zero cycles, Algorithm 4, affected by the negative edge from S3 to S4, will conservatively place every P edge in the delay set.
352
Wei-Yu Chen et al.
Fig. 6. Lattice and Flow Equations for Algorithm 4
edge in the delay set. For the common cases of loops with monotonic array subscripts, however, this analysis is as accurate as the one in the previous section. Since the lattice has a height of two, the data-flow analysis step will finish in at most time. As the analysis step needs to be done for each P edge, it appears that we have a algorithm. The insight here, however, is that when initializing the data-flow analysis for an edge can take only one of the three different values; it thus suffices to run the data-flow analysis three times for each node in the graph to cover all possible initial conditions of the analysis. So Algorithm 4 has a worst-case time bound. Extensions of this approach to support nested loops is straightforward; we can run the analysis separately for each dimension, and add an edge to the delay set only when all dimensions return a sign of either 0 or T. Integer Programming Based Method: In the most general case, array subscripts will be affine expressions with arbitrary constant coefficients and symbolic terms, so the previous methods are no longer applicable as neither the value nor
Algorithm 4: Handling Array Accesses Using Data-flow Analysis
Polynomial-Time Algorithms
353
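As a concrete illustration of this sign-based reasoning, the minimal sketch below shows how abstract signs can be joined and propagated along a path of weighted edges, and how a strictly positive or strictly negative back-path rules out a zero cycle. The enum names and functions are our own; the paper's actual lattice and flow equations are those of Figure 6, which this sketch only approximates.

    #include <stdio.h>

    /* A five-element sign lattice: BOT (unreachable), NEG, ZERO, POS, TOP (unknown). */
    typedef enum { BOT, NEG, ZERO, POS, TOP } Sign;

    /* Least upper bound (join), used when two paths merge at a node. */
    static Sign join(Sign a, Sign b) {
        if (a == BOT) return b;
        if (b == BOT) return a;
        return (a == b) ? a : TOP;
    }

    /* Abstract addition: the sign of (x + w) given the sign of x and of the
       edge weight w.  Same-sign additions keep the sign; mixing a positive
       and a negative value yields TOP (the sum could be anything). */
    static Sign add(Sign path, Sign weight) {
        if (path == BOT || weight == BOT) return BOT;
        if (weight == ZERO) return path;
        if (path == ZERO) return weight;
        if (path == weight) return path;   /* POS+POS or NEG+NEG */
        return TOP;                        /* mixed signs: sum unknown */
    }

    /* A P edge may be part of a zero cycle only if some back-path's
       abstract weight sum is ZERO or TOP. */
    static int may_be_zero_cycle(Sign back_path_sum) {
        return back_path_sum == ZERO || back_path_sum == TOP;
    }

    int main(void) {
        /* Example: a back-path made of two positive-stride edges can never
           close a zero cycle, so the corresponding P edge needs no delay. */
        Sign s = join(BOT, add(add(ZERO, POS), POS));
        printf("needs delay? %s\n", may_be_zero_cycle(s) ? "yes" : "no");
        return 0;
    }

A full implementation would run such a propagation from each node once per possible initial sign, as the text above describes, rather than once per P edge.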
Algorithm 5: Handling Array Accesses Using Integer Programming

Integer Programming Based Method: In the most general case, array subscripts will be affine expressions with arbitrary constant coefficients and symbolic terms, so the previous methods are no longer applicable, as neither the value nor the sign of the edge weights is statically known. In this case, we can still attempt to perform cycle detection by adapting the technique from Section 5.1 to convert it into an integer programming problem. To avoid the exponential cost of exhaustively searching for cycles and solving linear systems, we can take advantage of the properties of our conflict graph representation. A P edge can be a delay only if it has a back-path such that the generated cycle has a consistent system. While the number of back-paths may be exponential, they all share the same structure: the first and last edges of the back-path are C edges crossing between the left and right parts of the graph. If the internal path contains no C edges, it can be viewed as a single P edge and represented as one constraint on the subscripts of its two endpoints. We have thus significantly reduced the number of cycles that need to be considered for each P edge; since a node can participate in only a limited number of conflicts, the number of such cycles is at most quadratic in the number of C edges. Furthermore, since each cycle examined can now have only four edges, the cost of solving a linear system is constant, independent of problem size. This technique is in a sense a conservative approximation of Section 5.1, as it ignores the C edges in the internal path; taking them into account would add constraints that might make the system inconsistent. Such an approximation, however, is necessary for soundness anyway, since it may be possible to construct a back-path without using internal conflict edges for loops of pure SPMD programs. Algorithm 5 describes how to compute the delay set by solving the set of linear constraints. For each P edge we need to verify whether there exists a back-path whose corresponding linear system has a solution; the solution of the system gives the iterations that are involved in the conflict. As an example, for the edge (S1, S2) in Figure 3 we can identify its only pair of conflict edges, (S2, S3) and (S4, S1), which generates a small system of linear constraints on the loop index variables of the two iterations.
Simple arithmetic reveals that the system has no integer solution, and we therefore conclude that the edge is not part of the delay set. In the worst case the running time of this algorithm is dominated by enumerating the Cartesian product of C edges, which may have quadratic cardinality. Like the previous methods, this algorithm can also be adapted to support multidimensional arrays; each dimension of the access is handled independently, and an edge is added to the delay set only if all of its dimensions have identified a linear system that is consistent.
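Because each cycle considered by Algorithm 5 contributes only a handful of affine constraints, even a naive integer feasibility test is cheap. The sketch below checks a hypothetical two-conflict system by bounded enumeration; the coefficients and the iteration bound are placeholders of our own (the actual constraint system for Figure 3 is not reproduced above), and a real implementation would use a proper linear integer solver instead of brute force.

    #include <stdio.h>

    /* Check whether the placeholder system
     *     a1*i + b1 == a2*j + b2   (conflict on the first C edge)
     *     a3*j + b3 == a4*i + b4   (conflict on the second C edge)
     *     i < j                     (the two iterations are distinct)
     * has an integer solution with 0 <= i, j < n. */
    static int has_integer_solution(int a1, int b1, int a2, int b2,
                                    int a3, int b3, int a4, int b4, int n) {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (i < j &&
                    a1 * i + b1 == a2 * j + b2 &&
                    a3 * j + b3 == a4 * i + b4)
                    return 1;
        return 0;
    }

    int main(void) {
        /* e.g. subscripts 2*i and 2*j+1 can never be equal (parity differs),
           so the corresponding P edge would not be added to the delay set. */
        int feasible = has_integer_solution(2, 0, 2, 1, 2, 1, 2, 0, 100);
        printf("%s\n", feasible ? "possible delay" : "no delay needed");
        return 0;
    }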
5.3 Algorithm Evaluation
We have presented three polynomial-time algorithms in this section that extend SPMD cycle detection analysis to support array accesses. Here we compare the three techniques using the following criteria: applicability, accuracy, running time, and implementation difficulty. In terms of applicability, the data-flow analysis method is the clear winner, as it can be applied even to subscripts with non-affine terms, provided that the sign of the edge weights is still computable. The integer programming technique is also general enough to handle any affine array accesses, while zero cycle detection applies only to simple subscript expressions. What the zero cycle algorithm lacks in generality, however, it compensates with greater accuracy by computing the correct and smallest delay set. Integer programming also offers good accuracy, especially when the loop bounds can be calculated statically so that the linear system can incorporate inequality constraints between loop index variables and the loop bounds. The data-flow analysis method is, as expected, the least accurate of the three and is ill-suited to loops with non-monotonic access patterns; its accuracy, however, can easily be improved by introducing more constant values into the lattice, at the cost of increased analysis time. With regard to running time, the data-flow analysis technique is clearly the most efficient, and is also likely to be most easily incorporated into a compiler's optimization framework. The integer programming method, on the other hand, is the most difficult to implement due to the construction and solving of linear systems. This suggests the following implementation strategy. In normal cases data-flow analysis will be the method of choice, while the more accurate zero cycle algorithm is applied to hot-spots in the program where aggressive optimization is desired; the integer programming technique is used for complex affine terms where neither zero cycle detection nor data-flow analysis is applicable.
6 Conclusion
In a multiprocessor environment, most standard sequential compiler optimizations could result in unexpected changes to program behavior because they may reorder shared memory operations. In this paper, we presented an efficient algorithm that computes the minimal delay set required to enforce sequential consistency for parallel SPMD programs. This analysis can be used to implement a sequentially consistent programming model on a machine that has a weaker
model. In particular, implementing a global address space language on a machine with remote memory accesses can be done by issuing nonblocking memory operations by default, except when the compiler has determined that a delay between memory operations is needed. For machines with a remote latency of thousands of machine cycles, the ability to overlap in this fashion is critical. Our algorithm is based on the concept of cycle detection and has an improved asymptotic running time compared with a previous algorithm. We have also described techniques for combining array analysis with the SPMD cycle detection algorithm; this further minimizes the delay set that guarantees sequential consistency without greatly slowing down the analysis. The analyses are based on classical graph algorithms, data-flow analyses, and integer programming methods. In practice, we expect the data-flow analysis method to be most applicable. The proposed algorithms have made cycle detection more practical for SPMD programs, thus opening the door for optimizations of parallel programs that do not violate sequential consistency.
References

[1] Lamport, L.: How to make a multiprocessor computer that correctly executes multiprocess programs. IEEE Transactions on Computers (1976)
[2] Midkiff, S., Padua, D.: Issues in the compile-time optimization of parallel programs. In: Proceedings of the 19th International Conference on Parallel Processing. (1990)
[3] Shasha, D., Snir, M.: Efficient and correct execution of parallel programs that share memory. ACM Transactions on Programming Languages and Systems (1988)
[4] Krishnamurthy, A., Yelick, K.A.: Optimizing parallel programs with explicit synchronization. In: SIGPLAN Conference on Programming Language Design and Implementation. (1995) 196–204
[5] Lee, J., Padua, D., Midkiff, S.: Basic compiler algorithms for parallel programs. In: 7th ACM SIGPLAN Symposium on Principles and Practices of Parallel Programming. (1999)
[6] El-Ghazawi, T., Carlson, W., Draper, J.: UPC specification. (2001) http://www.gwu.edu/~upc/documentation.html
[7] Hilfinger, P., et al.: Titanium language reference manual. Technical Report CSD-01-1163, University of California, Berkeley (2001)
[8] Numrich, R., Reid, J.: Co-Array Fortran for parallel programming. Technical Report RAL-TR-1998-060, Rutherford Appleton Laboratory (1998)
[9] Krishnamurthy, A., Yelick, K.: Analyses and optimizations for shared address space programs. Journal of Parallel and Distributed Computing (1996)
[10] Lee, J., Padua, D.: Hiding relaxed memory consistency with compilers. In: IEEE International Conference on Parallel Architectures and Compilation Techniques (PACT). (2001)
[11] Knoop, J., Steffen, B.: Code motion for explicitly parallel programs. In: 7th ACM SIGPLAN Symposium on Principles and Practices of Parallel Programming. (1999)
[12] Krishnamurthy, A.: Compiler Analyses and System Support for Optimizing Shared Address Space Programs. PhD thesis, U.C. Berkeley (1998)
[13] Chen, W., Krishnamurthy, A., Yelick, K.: Enforcing sequential consistency for SPMD programs. Technical Report CSD-03-1272, University of California, Berkeley (2003)
[14] Kurhekar, M., Barik, R., Kumar, U.: An efficient algorithm for computing delay set in SPMD programs. In: International Conference on High Performance Computing (HiPC). (2003)
[15] Midkiff, S., Padua, D., Cytron, R.: Compiling programs with user parallelism. In: Proceedings of the 2nd Workshop on Languages and Compilers for Parallel Computing. (1989)
[16] Maydan, D.E.: Accurate Analysis of Array References. PhD thesis, Stanford University (1992)
[17] Karp, R.M., Miller, R.E., Winograd, S.: The organization of computations for uniform recurrence equations. Journal of the ACM (JACM) 14 (1967) 563–590
[18] Iwano, K., Steiglitz, K.: Testing for cycles in infinite graphs with periodic structure. In: Proceedings of the Nineteenth Annual ACM Conference on Theory of Computing, ACM Press (1987) 46–55
[19] Cohen, E., Megiddo, N.: Strongly polynomial-time and NC algorithms for detecting cycles in periodic graphs. Journal of the ACM (1993)
[20] Abramsky, S., Hankin, C.: Abstract Interpretation of Declarative Languages. Halsted Press (1987) 63–102
A System for Automating Application-Level Checkpointing of MPI Programs* Greg Bronevetsky, Daniel Marques, Keshav Pingali, and Paul Stodghill Department of Computer Science, Cornell University Ithaca, NY 14853
Abstract. Fault-tolerance is becoming necessary on high-performance platforms. Checkpointing techniques make programs fault-tolerant by saving their state periodically and restoring this state after failure. System-level checkpointing saves the state of the entire machine on stable storage, but this usually has too much overhead. In practice, programmers do manual application-level checkpointing by writing code to (i) save the values of key program variables at critical points in the program, and (ii) restore the entire computational state from these values during recovery. However, this can be difficult to do in general MPI programs. In ([1],[2]) we have presented a distributed checkpoint coordination protocol which handles MPI's point-to-point and collective constructs, while dealing with the unique challenges of application-level checkpointing. We have implemented our protocols as part of a thin software layer that sits between the application program and the MPI library, so it does not require any modifications to the MPI library. This thin layer is used by the Cornell Checkpoint (pre-)Compiler, a tool that automatically converts an MPI application into an equivalent fault-tolerant version. In this paper, we summarize our work on this system to date. We also present experimental results that show that the overhead introduced by the protocols is small. We also discuss a number of future areas of research.
1 Introduction
The problem of implementing software systems that can tolerate hardware failures has been studied extensively by the distributed systems community [3]. In contrast, the parallel computing community has largely ignored this problem because until recently, most parallel computing was done on relatively reliable big-iron machines whose mean-time-between-failures (MTBF) was much longer than the execution time of most programs. However, trends in high-performance computing, such as the popularity of commodity clusters, the increasing complexity of parallel machines, and the dawn of Grid computing, are increasing the
* This work was supported by NSF grants ACI-9870687, EIA-9972853, ACI-0085969, ACI-0090217, ACI-0103723, and ACI-0121401.
probability of hardware failures, making it imperative that parallel programs tolerate such failures. One solution that has been employed successfully for parallel programs is application-level checkpointing. In this approach, the programmer is responsible for saving computational state periodically, and for restoring this state after failure. In many programs, it is possible to recover the full computational state from relatively small amounts of data saved at key places in the program. For example, in an ab initio protein-folding application, it is sufficient to periodically save the positions and velocities of the bases of the protein; this is a few megabytes of information, in contrast to the hundreds of gigabytes of information that would be saved by a system-level checkpoint. This kind of manual application-level checkpointing is feasible if the parallel program is written in a bulk-synchronous manner, but it is not clear how it can be applied to a general MIMD program without global barriers. Without global synchronization, it is not obvious when the state of each process should be saved so as to obtain a global snapshot of the parallel computation. Protocols such as the Chandy-Lamport [4] protocol have been designed by the distributed systems community to address this problem, but these protocols were designed for system-level checkpointing, and cannot be applied to application-level checkpointing, as we explain in Section 4. In two previous papers ([1],[2]), we presented non-blocking, coordinated, application-level checkpointing protocols for the point-to-point and collective constructs of MPI. We have implemented these protocols in the (Cornell Checkpoint (pre-) Compiler), a system that uses program transformation technology to automatically insert application-level checkpointing features into an application’s source code. Using our system, it is possible to automatically convert an MPI application to an equivalent fault-tolerant version. The rest of this paper is organized as follows. In Section 2, we present background for and define the problem. In Section 3, we define some terminology and describe our basic approach. In Section 4, we discuss some of the difficulties of adding fault-tolerance to MPI programs. In Sections 5 and 6 we present non-blocking checkpointing protocols for point-to-point and collective communication, respectively. In Section 7, we discuss how our system saves the sequential state of each process. In Section 8, we present performance results of our system. In Section 9 we discuss related work, and in Section 10 we describe future work. In Section 11, we offer some conclusions.
2 Background
To address the problem of fault tolerance, it is necessary to define the fault model. We focus our attention on stopping faults, in which a faulty process hangs and stops responding to the rest of the system, neither sending nor receiving messages. This model captures many failures that occur in practice and is a useful mechanism in addressing more general problems.
We make the standard assumption that there is a reliable transport layer for delivering application messages, and we build our solutions on top of that abstraction. One such reliable implementation of the MPI communication library is Los Alamos MPI [5].

We can now state the problem we address in this paper. We are given a long-running MPI program that must run on a machine that has (i) a reliable message delivery system, (ii) unreliable processors which can fail silently at any time, and (iii) a mechanism such as a distributed failure detector [6] for detecting failed processes. How do we ensure that the program makes progress in spite of these faults?

There are two basic approaches to providing fault-tolerance for distributed applications. Message-logging techniques require restarting only the computation performed by the failed process. Surviving processes are not rolled back but must help the restarted process by replaying messages that were sent to it before it failed. Our experience is that the overhead of saving or regenerating messages tends to be so overwhelming that the technique is not practical for scientific applications. Therefore, we focus on checkpointing techniques, which periodically save a description of the state of a computation to stable storage; if any process fails, all processes are rolled back to a previously saved checkpoint (not necessarily the last), and the computation is restarted from there.

Checkpointing techniques can be classified along two independent dimensions. (1) The first dimension is the abstraction level at which the state of a process is saved. In system-level checkpointing (e.g., [7], [8]), the raw process state, including the contents of the program counter, registers and memory, is saved on stable storage. Unfortunately, complete system-level checkpointing of parallel machines with thousands of processors can be impractical because each global checkpoint can require saving terabytes of data to stable storage. For this reason, system-level checkpointing is not done on large machines such as the IBM Blue Gene or the ASCI machines. One popular alternative is application-level checkpointing, in which the application is written such that it correctly restarts from various positions in the code by storing certain information to a restart file. The benefit of this technique is that the programmer needs to save only the minimum amount of data necessary to recover the program state. In this paper, we explore the use of compiler technology to automate application-level checkpointing.

(2) The second dimension along which checkpointing techniques can be classified is the technique used to coordinate parallel processes when checkpoints need to be taken. In [1], we argue that the best approach for our problem is to use non-blocking coordinated checkpointing. This means that all of the processes participate in taking each checkpoint, but they do not stop the computation while they do so. A survey of the other approaches to checkpointing can be found in [3].
Fig. 1. Epochs and message classification

3 Our Approach

3.1 Terminology
We assume that a distinguished process called the initiator triggers the creation of global checkpoints periodically. We assume that it does not initiate the creation of a global checkpoint before any previous global checkpoint has been created and committed to stable storage. The execution of an application process can therefore be divided into a succession of epochs, where an epoch is the period between two successive local checkpoints (by convention, the start of the program is assumed to begin the first epoch). Epochs are labeled successively by integers starting at zero, as shown in Figure 1. Application messages can be classified depending upon whether or not they are sent and received in the same epoch.

Definition 1. Given an application message from process A to process B, let the send epoch be the epoch number of A at the point in the application program execution when the send command is executed, and let the receive epoch be the epoch number of B at the point when the message is delivered to the application.

Late message: If the send epoch is smaller than the receive epoch, the message is said to be a late message.
Intra-epoch message: If the send epoch equals the receive epoch, the message is said to be an intra-epoch message.
Early message: If the send epoch is greater than the receive epoch, the message is said to be an early message.
Figure 1 shows examples of the three kinds of messages, using the execution trace of three processes named P, Q and R. MPI has several kinds of send and receive commands, so it is important to understand what the message arrows mean in the context of MPI programs. Consider the late message in Figure 1. The source of the arrow represents the point in the execution of the sending process at which control returns from the MPI routine that was invoked to send this message. Note that if this routine is a non-blocking send, the message may not make it to the communication network until much later in execution; nevertheless, what is important for us is that if the application tries to recover from global checkpoint 2, it will not reissue the MPI send. Similarly, the destination of the arrow represents the delivery of the message to the application program. In particular, if an MPI_Irecv is used by the receiving process to get the message, the destination of the arrow represents the point at which an MPI_Wait for the message would have returned, not the point where control returns from the MPI_Irecv routine. In the literature, late messages are sometimes called in-flight messages, and early messages are sometimes called inconsistent messages. This terminology was developed in the context of system-level checkpointing protocols but, in our opinion, it is misleading in the context of application-level checkpointing.
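The following minimal C/MPI fragment makes the distinction concrete (the buffer size, tag, and function name are ours): the protocol treats the return of MPI_Wait, not the return of MPI_Irecv, as the point where the message reaches the application.

    #include <mpi.h>

    void receive_example(int src, MPI_Comm comm) {
        double buf[1024];
        MPI_Request req;
        MPI_Status status;

        /* Posting the receive only grants MPI permission to fill buf;
           for the protocol this is NOT the point where the message arrives. */
        MPI_Irecv(buf, 1024, MPI_DOUBLE, src, /* tag */ 0, comm, &req);

        /* ... unrelated computation may overlap with the transfer ... */

        /* The message is considered delivered to the application here,
           when the wait completes; this is where the arrow of Figure 1 ends. */
        MPI_Wait(&req, &status);
    }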
3.2 System Architecture
Fig. 2. System Architecture

Figure 2 is an overview of our approach. The system reads almost-unmodified single-threaded C/MPI source files and instruments them to perform application-level state-saving; the only additional requirement on the programmer is to insert a pragma statement, #pragma ccc Potential Checkpoint, at the locations in the application where checkpointing should occur. The output of this precompiler is compiled with the native compiler on the hardware platform, and is linked with a library that constitutes a co-ordination layer for implementing the non-blocking coordination. This layer sits between the application and the MPI library, and intercepts all calls from the instrumented application program to the MPI library. Note that MPI can bypass the co-ordination layer to read and write message buffers in the application space directly. Such manipulations, however, are not invisible to the protocol layer. MPI may not begin to access
a message buffer until after it has been given specific permission to do so by the application (e.g. via a call to MPI_Irecv). Similarly, once the application has granted such permission to MPI, it should not access that buffer until MPI has informed it that doing so is safe (e.g. with the return of a call to MPI_Wait). The calls to, and returns from, those functions are intercepted by the protocol layer. This design permits us to implement the coordination protocol without modifying the underlying MPI library, which promotes modularity and eliminates the need for access to MPI library code, which is proprietary on some systems. Further, it allows us to easily migrate from one MPI implementation to another.
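One common way to build such an interposition layer is the MPI profiling interface, in which the layer defines the MPI entry points itself and forwards to the underlying library through the PMPI names after doing its own bookkeeping. The sketch below illustrates the idea only; it is not the system's actual code, and the bookkeeping variables are our own.

    #include <mpi.h>

    /* Per-process protocol state maintained by the coordination layer
       (names are ours, not necessarily those used by the real system). */
    static int current_epoch = 0;
    static int messages_sent_this_epoch = 0;

    /* Intercept MPI_Send: record protocol information, then forward the
       call to the real library via the profiling entry point PMPI_Send. */
    int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
                 int dest, int tag, MPI_Comm comm) {
        messages_sent_this_epoch++;
        /* A real layer would also piggyback epoch/recording bits here. */
        return PMPI_Send(buf, count, datatype, dest, tag, comm);
    }

Because only the MPI profiling interface is relied on, this style of interposition requires no changes to, or source for, the MPI library itself, which is consistent with the portability goal described above.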
4 Difficulties in Application-Level Checkpointing of MPI Programs
In this section, we briefly describe the difficulties with implementing application-level, coordinated, non-blocking checkpointing for MPI programs.

Delayed State-Saving A fundamental difference between system-level checkpointing and application-level checkpointing is that a system-level checkpoint may be taken at any time during a program's execution, while an application-level checkpoint can only be taken when a program reaches a location that had been marked by a Potential Checkpoint pragma statement. System-level checkpointing protocols, such as the Chandy-Lamport distributed snapshot protocol, exploit this flexibility with checkpoint scheduling to avoid the creation of early messages. This strategy does not work for application-level checkpointing, because, after being notified to take a checkpoint, a process might need to communicate with other processes before arriving at a point where it may take a checkpoint.

Handling Late and Early Messages Suppose that an application is restored to Global Checkpoint 2 in Figure 1. On restart, some processes will expect to receive late messages that were sent prior to failure. Therefore, we need mechanisms for (i) identifying late messages and saving them along with the global checkpoint, and (ii) replaying these messages to the receiving process during recovery. Late messages must be handled by non-blocking system-level checkpointing protocols as well. Similarly, on recovery, some processes will expect to send early messages that were received prior to failure. To handle this, we need mechanisms for (i) identifying early messages, and (ii) ensuring that they are not resent during recovery. Early messages also pose a separate and more subtle problem: if a nondeterministic event occurs between a checkpoint and an early message send, then on restart the event may occur differently and, hence, the message may be different. In general, we must ensure that if a global checkpoint depends on a non-deterministic event, the event will re-occur exactly the same way after
restart. Therefore, mechanisms are needed to (i) record the non-deterministic events that a global checkpoint depends on, so that (ii) these events can be replayed the same way during recovery.

Non-FIFO Message Delivery at Application Level In an MPI application, a process P can use tag matching to receive messages from Q in a different order than they were sent. Therefore, a protocol that works at the application level, as would be the case for application-level checkpointing, cannot assume FIFO communication.

Collective Communication The MPI standard includes collective communication functions such as MPI_Bcast and MPI_Alltoall, which involve the exchange of data among a number of processors. The difficulty presented by such functions occurs when some processes make a collective communication call before taking their checkpoints, and others after. We need to ensure that on restart, the processes that reexecute the calls do not deadlock or receive incorrect information. Furthermore, MPI_Barrier guarantees specific synchronization semantics, which must be preserved on restart.

Problems Checkpointing MPI Library State The entire state of the MPI library is not exposed to the application program. Things like the contents of message buffers and request objects are not directly accessible. Our system must be able to reconstruct this hidden state on recovery.
5 Protocol for Point-to-Point Operations
We now sketch the coordination protocol for global checkpointing for point-to-point communication. A complete description of the protocol can be found in [1].
5.1 High-Level Description of Protocol
Initiation As with other non-blocking coordinated checkpointing protocols, we assume the existence of an initiator that is responsible for deciding when the checkpointing process should begin. In our system, the processor with rank 0 in MPI_COMM_WORLD serves as the initiator, and starts the protocol when a certain amount of time has elapsed since the last checkpoint was taken.

Phase #1 The initiator sends a control message called pleaseCheckpoint to all application processes. After receiving this message, each process can send and receive messages normally.
Phase #2 When an application process reaches its next potentialCheckpoint location, it takes a local checkpoint using the techniques described in Section 7. It also saves the identities of any early messages on stable storage. It then starts recording (i) every late message it receives, and (ii) the result of every non-deterministic decision it makes. Once a process has received all of its late messages¹, it sends a control message called readyToStopRecording back to the initiator, but continues recording.

Phase #3 When the initiator gets a readyToStopRecording message from all processes, it sends a control message called stopRecording to all other processes.

Phase #4 An application process stops recording when (i) it receives a stopRecording message from the initiator, or (ii) it receives a message from a process that has stopped its recording. The second condition is required because we make no assumptions about message delivery order. In particular, it is possible for a recording process to receive a message from a non-recording process before receiving the stopRecording message. In this case, the saved state might depend upon an unrecorded nondeterministic event. The second condition prevents this situation from occurring. Once the process has saved its record on disk, it sends a stoppedRecording message back to the initiator. When the initiator receives a stoppedRecording message from all processes, it commits the checkpoint that was just created as the one to be used for recovery, saves this decision on stable storage, and terminates the protocol.
5.2 Piggybacked Information on Messages
To implement this protocol, the protocol layer must piggyback a small amount of information on each application message. The receiver of a message uses this piggybacked information to answer the following questions.

1. Is the message a late, intra-epoch, or early message?
2. Has the sending process stopped recording?
3. Which messages should not be resent during recovery?
The piggybacked values on a message are derived from the following values maintained on each process by the protocol layer.

epoch: This integer keeps track of the process epoch. It is initialized to 0 at start of execution, and incremented whenever that process takes a local checkpoint.

amRecording: This boolean is true when the process is recording, and false otherwise.
¹ We assume the application code receives all messages that it sends.
Fig. 3. Possible Patterns of Point-to-Point Communication
nextMessageID: This integer is initialized to 0 at the beginning of each epoch, and is incremented whenever the process sends a message. Piggybacking this value on each application message in an epoch ensures that each message sent by a given process in a particular epoch has a unique ID.

A simple implementation of the protocol can piggyback all three values on each message that is sent by the application. When a message is received, the protocol layer at the receiver examines the piggybacked epoch number and compares it with the epoch number of the receiver to determine if the message is late, intra-epoch, or early. By looking at the piggybacked boolean, it determines whether the sender is still recording. Finally, if the message is an early message, the receiver adds the pair <sender, messageID> to its suppressList. Each process saves its suppressList to stable storage when it takes its local checkpoint. During recovery, each process passes relevant portions of its list of messageID's to other processes so that resending of these messages can be suppressed. By exploiting properties of the protocol, the size of the piggybacked information can be reduced to two booleans and an integer. By exploiting the semantics of MPI message tags, it is possible to eliminate the integer altogether, and piggyback only two boolean values, one to represent epoch and the other amRecording.
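A straightforward (and deliberately unoptimized) way to carry these values is to prepend a small header to every payload, as in the sketch below. The struct and function names are ours, and the real layer reduces the header as described above; the sketch only illustrates how a receiver classifies a message from the piggybacked epoch.

    #include <mpi.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        int epoch;          /* sender's epoch at the time of the send  */
        int amRecording;    /* was the sender still recording?         */
        int messageID;      /* per-epoch sequence number at the sender */
    } PiggybackHeader;

    /* Send: prepend the header to the payload and ship both as bytes. */
    int pb_send(const void *payload, int bytes, int dest, int tag, MPI_Comm comm,
                int my_epoch, int recording, int *next_message_id) {
        int total = (int)sizeof(PiggybackHeader) + bytes;
        char *buf = malloc(total);
        PiggybackHeader h = { my_epoch, recording, (*next_message_id)++ };
        memcpy(buf, &h, sizeof h);
        memcpy(buf + sizeof h, payload, bytes);
        int rc = MPI_Send(buf, total, MPI_BYTE, dest, tag, comm);
        free(buf);
        return rc;
    }

    typedef enum { LATE, INTRA_EPOCH, EARLY } MsgClass;

    /* Receive: strip the header and classify the message. */
    MsgClass pb_recv(void *payload, int bytes, int src, int tag, MPI_Comm comm,
                     int my_epoch, PiggybackHeader *h) {
        int total = (int)sizeof(PiggybackHeader) + bytes;
        char *buf = malloc(total);
        MPI_Recv(buf, total, MPI_BYTE, src, tag, comm, MPI_STATUS_IGNORE);
        memcpy(h, buf, sizeof *h);
        memcpy(payload, buf + sizeof *h, bytes);
        free(buf);
        if (h->epoch < my_epoch) return LATE;    /* record it during Phase #2 */
        if (h->epoch > my_epoch) return EARLY;   /* add to the suppressList   */
        return INTRA_EPOCH;
    }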
5.3 Completing the Reception of Late Messages
Finally, we need a mechanism for allowing an application process in one epoch to determine when it has received all the late messages sent in the previous epoch. The solution we have implemented is straightforward. In every epoch, each process P remembers how many messages it sent to every other process Q (its send count for Q). Each process Q also remembers how many messages it received from every other process P (its receive count for P). When a process P takes its local checkpoint, it sends a mySendCount message to the other processes, which contains the number of messages it sent to them in the previous epoch. When process Q receives this control message, it can compare the send count it carries with its own receive count to determine how many more messages to wait for. Since the send count is itself sent in a control message, how does Q know how many of these control messages it should wait for? A simple solution is for each process to send its sendCount to every other process in the system. This solution works, but requires quadratic communication. More efficient solutions can be obtained by requiring processes that communicate with one another to explicitly open and close communication “channels”.

Fig. 4. Collective Communication
5.4 Guarantees Provided by the Protocol
It can be shown that this protocol provides the following guarantees, which are useful for reasoning about correctness.

Claim.
1. No process stops recording until all processes have taken their local checkpoints.
2. A process that has stopped recording cannot receive a late message; in Figure 3, messages of this form cannot occur.
3. A message sent by a process after it has stopped recording can only be received by a process that has itself stopped recording; in Figure 3, messages of these forms cannot occur.
Figure 3 shows the possible communication patterns, given these guarantees.
6 Protocol for Collective Operations
In this section, we build on the mechanisms of the point-to-point protocol in order to implement a protocol for collective communication. A complete description of our protocols can be found in [2]. There are two basic approaches to handling MPI's collective communication functions. The most obvious is to implement these functions on top of our point-to-point protocol. However, because this approach does not use the low-level network layer directly, it is likely to be less efficient than the collective functions provided by the native MPI library. Instead, what we have chosen to do is to use the basic concepts and mechanisms of our point-to-point protocol in order to provide fault-tolerant versions of the collective communication functions that are implemented entirely in terms of the native MPI collective communication functions.
We will use MPI_Allreduce to illustrate how collective communication is handled. In Figure 4, collective communication call A shows an MPI_Allreduce call in which processes P and Q execute the call after taking local checkpoints, and process R executes the call before taking the checkpoint. During recovery, processes P and Q will reexecute this collective communication call, but process R will not. Unless something is done, the program will not recover correctly. Our solution is to use the record to save the result of the MPI_Allreduce call at processes P and Q. During recovery, when the processes reexecute the collective communication call, the result is read from the record and returned to the application program. Process R does not reexecute the collective communication call.

To make this intuitive idea precise, we need to specify when the result of a collective communication call like MPI_Allreduce should be recorded. A simple solution is to require a process to record the result of every collective communication call it makes during the time it is recording. Collective communication call B in Figure 4 illustrates a subtle problem with this solution: process R executes the MPI_Allreduce after it has stopped recording, so it would be incorrect for processes P and Q to record the results of their call. This problem is similar to the problem encountered in the point-to-point message case, and the solution is similar (and simpler). Each process piggybacks its amRecording bit on the application data, and the function invoked by MPI_Allreduce computes the conjunction of these bits. If any process involved in the collective communication call has stopped recording, all the other processes will learn this fact, and they will also stop recording. As a result, no process will record the result of the call.

Most of the other collective communication calls can be handled in this way. Ironically, the only one that requires special treatment is MPI_Barrier, and the reason is that the MPI standard requires that no processor finishes a call to MPI_Barrier until every processor has started a call to MPI_Barrier. Suppose that the collective communication call A in Figure 4 is an MPI_Barrier. Upon recovery, processors P and Q will have returned from their calls to MPI_Barrier, while R has not yet started its call. This is a clear violation of the required behavior. The solution is to ensure that all processes involved in a barrier execute it in the same epoch. In other words, barriers cannot be allowed to cross recovery lines. A simple implementation is the following. All processes involved in the barrier execute an all-reduce communication just before the barrier to determine if they are all in the same epoch. If not, processes that have not yet taken their local checkpoints do so, ensuring that the barrier is executed by all processes in the same epoch. This solution requires the precompiler to insert the all-reduce communication and the potential checkpointing locations before each barrier. As shown in [2], the overhead of this addition is very small in practice.
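The sketch below illustrates both ideas with plain MPI collectives. It is our own simplification: the real protocol piggybacks the recording bit on the application data of the same call rather than issuing a separate reduction, and the helper names and globals are invented for the example.

    #include <mpi.h>

    extern int amRecording;          /* this process's recording flag */
    extern int epoch;                /* this process's current epoch  */
    extern void stop_recording(void);
    extern void take_local_checkpoint(void);

    /* MPI_Allreduce wrapper: compute the conjunction of the recording bits.
       If any participant has stopped recording, everyone stops, and no one
       records the result of this call. */
    int ft_allreduce_int_sum(const int *sendbuf, int *recvbuf, int count,
                             MPI_Comm comm) {
        int all_recording = 0;
        MPI_Allreduce(&amRecording, &all_recording, 1, MPI_INT, MPI_LAND, comm);
        if (!all_recording && amRecording)
            stop_recording();
        return MPI_Allreduce(sendbuf, recvbuf, count, MPI_INT, MPI_SUM, comm);
    }

    /* MPI_Barrier wrapper: make sure all participants execute the barrier in
       the same epoch, by checkpointing first if any peer is already ahead. */
    int ft_barrier(MPI_Comm comm) {
        int max_epoch = 0;
        MPI_Allreduce(&epoch, &max_epoch, 1, MPI_INT, MPI_MAX, comm);
        if (epoch < max_epoch)
            take_local_checkpoint();  /* advances epoch to match the others */
        return MPI_Barrier(comm);
    }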
7 State Saving
The protocols described in the previous sections assume that there is a mechanism for taking and restoring a local checkpoint on each processor, which we describe in this section.

Application State-Saving The state of the application running on each node consists of its position in the static text of the program, its position in the dynamic execution of the program, its local and global variables, and its heap-allocated structures. Our precompiler modifies the application source so that this state is correctly saved, and can be restarted, at the Potential Checkpoint positions in the original code. A previous paper [1] discussed the transformations that our precompiler makes to an application. Some of these are similar to those used by the PORCH system [9]. Currently, this approach is only slightly more efficient than system-level checkpointing; however, it offers two significant advantages over that approach. First, it is a starting point for work on optimizing the amount of state that is saved at a checkpoint. In Section 10, we describe our ongoing efforts in this area. Second, it is much simpler and more portable than system-level checkpointing, which very often requires modifying the operating system and native MPI library.

MPI Library State-Saving As was already mentioned, our protocol layer intercepts all calls that the application makes to the MPI library. Using this mechanism our system is able to record the direct state changes that the application makes (e.g., calls to MPI_Buffer_attach). In addition, some MPI functions take or return handles to opaque objects. The protocol layer introduces a level of indirection so that the application only sees handles to objects in the protocol layer (hereafter referred to as pseudo-handles), which contain the actual handles to the MPI opaque objects. On recovery, the protocol layer reinitializes the pseudo-handles in such a way that they are functionally identical to their counterparts in the original process.
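The indirection can be as simple as a table mapping small integer pseudo-handles to the library's real request handles, as in the following illustrative sketch (our own names and a fixed-size table, not the system's actual data structures); on recovery the layer can re-post operations behind the same slots, so the application's handles remain valid.

    #include <mpi.h>

    #define MAX_REQS 1024

    /* The application only ever sees small integer pseudo-handles; the real
       MPI_Request lives in this table and can be re-created after restart. */
    static MPI_Request real_requests[MAX_REQS];
    static int         in_use[MAX_REQS];

    typedef int PseudoRequest;

    static PseudoRequest alloc_pseudo(void) {
        for (int i = 0; i < MAX_REQS; i++)
            if (!in_use[i]) { in_use[i] = 1; return i; }
        return -1;  /* table full */
    }

    /* Wrapper for MPI_Irecv that hands back a pseudo-handle. */
    int ft_irecv(void *buf, int count, MPI_Datatype type, int src, int tag,
                 MPI_Comm comm, PseudoRequest *preq) {
        *preq = alloc_pseudo();
        if (*preq < 0) return MPI_ERR_OTHER;
        return MPI_Irecv(buf, count, type, src, tag, comm, &real_requests[*preq]);
    }

    /* Wrapper for MPI_Wait: translate the pseudo-handle, then free the slot. */
    int ft_wait(PseudoRequest preq, MPI_Status *status) {
        int rc = MPI_Wait(&real_requests[preq], status);
        in_use[preq] = 0;
        return rc;
    }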
8 Performance
In this section, we present an overview of the full experimental results that can be found in [1] and [2]. We performed our experimental evaluation on the CMI cluster at the Cornell Velocity supercomputer. This cluster is composed of 64 2-way Pentium III 1GHz nodes, featuring 2GB of RAM and connected by a Giganet switch. The nodes have 40MB/sec bandwidth to local disk. The point-to-point experiments were conducted on 16 nodes, and the collective experiments were conducted on 32 nodes. On each node, we used only one of the processors.
Fig. 5. Point-to-point Overheads

Fig. 6. MPI_Allgather

8.1 Point-to-Point
We evaluated the performance of the point-to-point protocol on three codes: a dense Conjugate Gradient code, a Laplace solver, and Neurosys, a neuron simulator. All the checkpoints in our experiments are written to the local disk, with a checkpoint interval of 30 seconds². The performance of our protocol was measured by recording the runtimes of each of four versions of the above codes.

1. The unmodified program
2. Version #1 + code to piggyback data on messages
3. Version #2 + protocol's records and saving the MPI library state
4. Version #3 + saving the application state
Experimental results are shown in Figure 5. We observe in the results that the overhead of using our system is small, except in a few instances. In dense CG, the overhead of saving the application state rises dramatically for the largest problem size. This is a result of the large amount of state that must be written to disk. The other extreme is Neurosys, which has a very high communication to computation ratio for the small problem size. In this case, the overhead of using the protocol becomes evident. For the larger problems it is less so.
² We chose such a small interval in order to amplify the overheads for the purposes of measurement. In practice, users would choose checkpoint intervals on the order of hours or days, depending upon the underlying system.
In our experiments, we initiated a new checkpoint 30 seconds after the last checkpoint was committed. For real applications on real machines, the developer will want to select a checkpoint frequency that carefully balances the overhead against the need to make progress. Since our protocol only incurs overhead during the interval in which a checkpoint is being taken, the developer can arbitrarily reduce the protocol overhead by reducing the frequency at which checkpoints are taken.
8.2 Collective
MPI supports a very large number of collective communication calls. Here, we compared the performance of the native version of MPI_Allgather with the performance of a version modified to utilize our protocol. Those modifications include sending the necessary protocol data (color and logging bits) and performing the protocol logic. There are two natural ways to send the protocol data: either via a separate collective operation that precedes the data operation, or by “piggy-backing” the control data onto the message data and sending both with one operation. We have measured the overhead of both methods. The time for the separate operation case includes the time to send both messages. For the combined case, it includes the time to copy the control and message data to a contiguous region, to send the combined message, and to separate the message and protocol data on receipt. The top graph in Figure 6 shows the absolute time taken by the native and protocol (both the separate and combined message) versions of MPI_Allgather for data messages ranging in size from 4 bytes to 4 MB. The bottom graph shows the overhead, in seconds, that the two versions of the protocol add to the communication. Examining the graphs, we see that for small messages, the relative overhead (percentage) might be high but the absolute overhead is small. For large message sizes, the absolute overhead might be large, but relative to the cost of the native version, the cost is very small. This is the expected behavior. The net effect is that the observed overhead for real applications will be negligible.
9 Existing Work
While much theoretical work has been done in the field of distributed fault-tolerance, few systems have been implemented for actual distributed application environments. One such system is CoCheck [10], which provides fault-tolerance for MPI applications. CoCheck provides only the functionality for the coordination of distributed checkpoints, relying on the Condor [7] system to take system-level checkpoints of each process. In contrast to our approach, CoCheck is integrated with its own MPI implementation, and assumes that collective communications
are implemented as point-to-point messages. We believe that our ability to interoperate with any MPI implementation is a significant advantage. Another distributed fault-tolerance implementation is the Manetho [11] system, which uses causal message logging to provide for system recovery. Because a Manetho process logs both the data of the messages that it sends and the non-deterministic events that these messages depend on, the size of those logs may grow very large if used with a program that generates a high volume of large messages, as is the case for many scientific programs. While Manetho can bound the size of these logs by occasionally checkpointing process state to disk, programs that perform a large amount of communication would require very frequent checkpointing to avoid running out of log space. Furthermore, since the system requires a process to take a checkpoint whenever these logs get too large, it is not clear how to use this approach in the context of application-level checkpointing. Note that although our protocol, like the Chandy-Lamport protocol, also records message data, recording happens only during checkpointing. Another difference is that Manetho was not designed to work with any standard message passing API, and thus does not need to deal with the complex constructs - such as non-blocking and collective communication – found in MPI. The Egida [12] system is another fault-tolerant system for MPI. Like CoCheck, it provides system-level checkpointing, and it has been implemented directly in the MPI layer. Like Manetho, it is primarily based upon message logging, and uses checkpointing to flush the logs when they grow too large.
10 Future Work

10.1 State Savings
A goal of our project is to provide a highly efficient checkpointing mechanism for MPI applications. One way to minimize checkpoint overhead is to reduce the amount of data that must be saved when taking a checkpoint. Previous work in the compiler literature has looked at analysis techniques for avoiding the checkpointing of dead and read-only variables [13]. This work focused on statically allocated data structures in FORTRAN programs. We would like to extend this work to handle the dynamically created memory objects in C/MPI applications. We are also studying incremental checkpointing approaches for reducing the amount of saved state. Another technique we are developing is the detection of distributed redundant data. If multiple nodes each have a copy of the same data structure, only one of the nodes needs to include it in its checkpoint. On restart, the other nodes will obtain their copy from the one that saved it. Another powerful optimization is to trade off state-saving for recomputation. In many applications, the state of the entire computation at a global checkpoint can be recovered from a small subset of the saved state in that checkpoint. The simplest example of this optimization is provided by a computation in which we need to save two variables; if the second is some simple function of the first, it is sufficient
to save only the first and recompute the second during recovery, thereby trading off the cost of saving one variable against the cost of recomputing it during recovery. Real codes provide many opportunities for applying this optimization. For example, in protein-folding using ab initio methods, it is sufficient to save the positions and velocities of the bases in the protein at the end of a time-step because the state of the entire computation can be recovered from that data.
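As a schematic illustration with made-up variables (not code from the system), a checkpoint can store only the independent value and let the restart code recompute the dependent one:

    #include <stdio.h>

    /* y is a simple function of x, so the checkpoint stores only x. */
    static double f(double x) { return 3.0 * x * x + 1.0; }

    void checkpoint(FILE *ckpt, double x) {
        fwrite(&x, sizeof x, 1, ckpt);      /* y deliberately not saved */
    }

    void restart(FILE *ckpt, double *x, double *y) {
        fread(x, sizeof *x, 1, ckpt);
        *y = f(*x);                         /* recompute instead of reload */
    }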
10.2 Extending the Protocols
In our current work, we are investigating the scalability of the protocol on large high-performance platforms with thousands of processors. We are also extending the protocol to other types of parallel systems. One API of particular interest is OpenMP [14], which is an API for shared-memory programming. Many high-performance platforms consist of clusters in which each node is a shared-memory symmetric multiprocessor. Applications programmers are using a combination of MPI and OpenMP to program such clusters, so we need to extend our protocol for this hybrid model. On a different note, we plan to investigate the overheads of piggybacking control data on top of application messages. Such piggybacking techniques are very common in distributed protocols, but the overheads associated with the piggybacking of data can be very complex, as our performance numbers demonstrate. Therefore, we believe that a detailed, cross-platform study of such overheads would be of great use for parallel and distributed protocol designers and implementors.
11 Conclusions
In this paper, we have shown that application-level non-blocking coordinated checkpointing can be used to add fault-tolerance to C/MPI programs. We have argued that existing checkpointing protocols are not adequate for this purpose, and we have developed protocols for both point-to-point [1] and collective [2] operations to meet the need. These protocols can be used to provide fault tolerance for MPI programs without making any demands on, or having knowledge of, the underlying MPI implementation. Used in conjunction with the method for automatically saving uniprocessor state described in [1], we have built a system that can be used to add fault-tolerance to C/MPI programs. We have shown how the state of the underlying MPI library can be reconstructed by the implementation of our protocol. Experimental measurements show that the overhead introduced by the protocol implementation layer and program transformations is small.
Acknowledgments This work was inspired by a sabbatical visit by Keshav Pingali to the IBM Blue Gene project. We would like to thank the IBM Corporation for its support, and
Marc Snir, Pratap Pattnaik, Manish Gupta, K. Ekanadham, and Jose Moreira for many valuable discussions on fault-tolerance.
References

[1] Bronevetsky, G., Marques, D., Pingali, K., Stodghill, P.: Automated application-level checkpointing of MPI programs. In: Principles and Practices of Parallel Programming, San Diego, CA (2003)
[2] Bronevetsky, G., Marques, D., Pingali, K., Stodghill, P.: Collective operations in an application-level fault tolerant MPI system. In: International Conference on Supercomputing (ICS) 2003, San Francisco, CA (2003)
[3] Elnozahy, M., Alvisi, L., Wang, Y.M., Johnson, D.B.: A survey of rollback-recovery protocols in message passing systems. Technical Report CMU-CS-96-181, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA (1996)
[4] Chandy, M., Lamport, L.: Distributed snapshots: Determining global states of distributed systems. ACM Transactions on Computing Systems 3 (1985) 63–75
[5] Graham, R., Choi, S.E., Daniel, D., Desai, N., Minnich, R., Rasmussen, C., Risinger, D., Sukalski, M.: A network-failure-tolerant message-passing system for tera-scale clusters. In: Proceedings of the International Conference on Supercomputing 2002. (2002)
[6] Gupta, I., Chandra, T., Goldszmidt, G.: On scalable and efficient distributed failure detectors. In: Proc. 20th Annual ACM Symp. on Principles of Distributed Computing. (2001) 170–179
[7] Litzkow, M., Tannenbaum, T., Basney, J., Livny, M.: Checkpoint and migration of UNIX processes in the Condor distributed processing system. Technical Report 1346, University of Wisconsin-Madison (1997)
[8] Plank, J.S., Beck, M., Kingsley, G., Li, K.: Libckpt: Transparent checkpointing under UNIX. Technical Report UT-CS-94-242, Dept. of Computer Science, University of Tennessee (1994)
[9] Ramkumar, B., Strumpen, V.: Portable checkpointing for heterogeneous architectures. In: Symposium on Fault-Tolerant Computing. (1997) 58–67
[10] Stellner, G.: CoCheck: Checkpointing and Process Migration for MPI. In: Proceedings of the 10th International Parallel Processing Symposium (IPPS '96), Honolulu, Hawaii (1996)
[11] Elnozahy, E.N., Zwaenepoel, W.: Manetho: Transparent rollback-recovery with low overhead, limited rollback and fast output. IEEE Transactions on Computers 41 (1992)
[12] Rao, S., Alvisi, L., Vin, H.M.: Egida: An extensible toolkit for low-overhead fault-tolerance. In: Twenty-Ninth Annual International Symposium on Fault-Tolerant Computing, Madison, Wisconsin (1999)
[13] Beck, M., Plank, J.S., Kingsley, G.: Compiler-assisted checkpointing. Technical Report UT-CS-94-269, Dept. of Computer Science, University of Tennessee (1994)
[14] OpenMP: Overview of the OpenMP standard. Online at http://www.openmp.org/ (2003)
The Power of Belady’s Algorithm in Register Allocation for Long Basic Blocks* Jia Guo, María Jesús Garzarán, and David Padua Department of Computer Science University of Illinois at Urbana-Champaign {jiaguo,garzaran,padua}@cs.uiuc.edu http://polaris.cs.uiuc.edu
Abstract. Optimization techniques such as loop-unrolling and trace-scheduling can result in long straight-line codes. It is, however, unclear how well the register allocation algorithms of current compilers perform on these codes. Compilers may well have been optimized for human-written codes, which are likely to have short basic blocks. To evaluate how state-of-the-art compilers behave on long straight-line codes, we wrote a compiler that implements the simple Belady's MIN algorithm. The main contribution of this paper is the evaluation of Belady's MIN algorithm when used for register allocation for long straight-line codes. These codes were executed on a MIPS R12000 processor. Our results show that applications compiled using Belady's MIN algorithm run faster than when compiled with the MIPSPro or GCC compiler. In particular, Fast Fourier Transforms (FFTs) of size 32 and 64 run 12% and 33% faster than when compiled using the MIPSPro compiler.
1 Introduction
In modern processors, optimizations such as loop-unrolling and trace-scheduling help increase the Instruction Level Parallelism (ILP). These techniques have traditionally been applied by compilers, and in the recent past, have also been incorporated into library generators. Two examples of the latter are SPIRAL [1, 2] that generates Digital Signal Processing (DSP) libraries and ATLAS [3] that generates linear algebra subroutines. These systems use empirical search to determine the best shape of the transformations to be applied. One of the values they search for is the degree of loop unrolling. Usually large degrees of unrolling improve the performance. The body of these unrolled loops takes the form of long basic blocks. ATLAS and SPIRAL produce high-level code incorporating program transformations. A compiler is then used to translate the high-level code into machine code. The compiler is thus responsible for the low-level optimizations such as instruction scheduling and register allocation. However, given that the code produced by these library generators may contain long basic blocks (with sizes
* This work was supported by the NSF under grant CCR 01-21401 ITR.
ranging from 300 to 5000 statements), it is unknown whether the register allocation algorithms in today's compilers are effective. If they are not, we could be missing the benefit of one of the most important compiler optimization techniques [4]. A standard solution to the problem of register allocation is based on graph coloring [5], and it tries to assign registers so that simultaneously live variables are not assigned to the same register. Coloring produces an optimal solution when there are no spills. However, when the number of variables is larger than the number of registers, some registers need to be spilled. Since coloring and spilling is an NP-complete problem [6], heuristics are used to select the register to spill [7, 8, 5, 9]. It was argued in [10] that these heuristics work well in most cases because programs usually contain small procedures and basic blocks where register spilling is unlikely to happen. The task of assigning values to registers over an entire block of straight-line code is usually done by local register allocation algorithms. Horwitz [11] published the first algorithm on register allocation for straight-line codes that minimizes the number of loads and stores. Faster algorithms [10, 12, 13] were later proposed to achieve the same goal. A simpler algorithm that can be applied for register allocation in long basic blocks is based on Belady's MIN [14] algorithm. The idea is that, on a register replacement, the register to replace is the one that contains the variable with the farthest next use. This heuristic guarantees the minimum number of reloads. However, it does not guarantee the minimum number of stores because it does not take into account whether the register to replace needs to be written back to memory. In this paper, we evaluate the performance of Belady's MIN algorithm for register allocation on long basic blocks. We developed a back-end compiler that implements Belady's MIN algorithm and used it to compile codes implementing FFTs and Matrix Multiplication (MM) that were generated by SPIRAL and ATLAS, respectively. The main contribution of this paper is an evaluation of Belady's MIN algorithm for register allocation for long basic blocks (more than 1,000 lines). We measured performance by running the codes on a MIPS R12000 processor, and we compared the performance obtained with the performance of the codes generated by the MIPSPro and GCC compilers. To the best of our knowledge this is the first report of the performance of Belady's MIN algorithm for a modern out-of-order superscalar processor. In fact, previous papers have evaluated the performance of this algorithm primarily by measuring the number of loads and stores, and rarely report the execution time on the real machine. Our results show that our compiler always performs better than GCC. In addition, the FFT codes of size 32 and 64 generated by SPIRAL, when compiled using Belady's MIN algorithm, run 12% and 33% faster than when compiled using the MIPSPro compiler. For the MM generated by ATLAS, Belady's MIN algorithm can also execute faster than the MIPSPro compiler by an average of 10% for highly unrolled loops. However, for MM, the best performance on the MIPS processor is achieved by limiting the amount of unrolling. For the limited unrolling, our compiler and MIPSPro obtain the same performance. Our experiments
Our experiments show that when the number of live variables is smaller than the number of registers, MIPSPro and our compiler have similar performance. However, as the number of live variables increases, register allocation seems to become more important. Under high register pressure, the simple Belady's MIN algorithm performs better than the MIPSPro compiler, even though MIN is not an optimal register allocation algorithm.

This paper is organized as follows. Section 2 outlines some characteristics of the code that we used for register allocation and shows the performance benefit obtained when unrolling is applied. Section 3 explains the implementation of our compiler. Section 4 evaluates performance. Section 5 presents related work, and Section 6 concludes.
2
Long Straight-Line Code
In modern processors, loop unrolling is a well-known transformation that compilers apply in order to increase Instruction Level Parallelism (ILP). Loop unrolling has the additional benefit of reducing the number of bookkeeping operations that are executed. Although unrolling is usually beneficial, too much unrolling can overflow the instruction cache or increase register spilling; if either of these happens, performance degrades. Thus, different methods are used to control the amount of unrolling. One approach, applied by many compilers, is to use heuristics to decide the degree of unrolling. Another approach, taken by SPIRAL [1, 2], ATLAS [3], and several compilers, is to empirically search for the best amount of unrolling.

As mentioned above, SPIRAL and ATLAS use empirical search to find the best values for important transformation parameters. Empirical search generates different versions of the program with different forms of the transformation, runs the code after applying each transformation, and chooses the version that achieves the best performance. When compilers perform empirical search, they usually estimate execution time instead of actually executing the code. One of the transformations tested is the degree of unrolling of loops.

The codes tested are implementations of FFT in the case of SPIRAL and of MM in the case of ATLAS. Figure 1 shows the speedup of the unrolled versions over the non-unrolled versions for FFT codes of sizes 16-64 and for MM. In each case, two bars are shown. The first one (SPARC) corresponds to the speedup obtained when the codes were executed on a 900 MHz UltraSPARC III and compiled with the Workshop SPARC compiler, version 5.0. For the second bar (MIPS), the codes were executed on a 270 MHz R12000 processor and compiled with the MIPSPro compiler, version 7.3.3.1m. For the MIPSPro compiler, the compilation flags were set to the values specified in Table 2. For the SPARC compiler, the flags were set to "-fast -O5 -silent". The results for the FFT bars were collected using SPIRAL, which applies dynamic programming as a heuristic to find the formula that results in the best performance for a particular FFT size.
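The search strategy itself is easy to picture. The following is a minimal, self-contained sketch of an empirical search over unroll factors in the spirit of ATLAS and SPIRAL; the toy kernel, the candidate factors, and the use of wall-clock timing are illustrative assumptions, not the tools' actual implementation.

```java
import java.util.List;

// Illustrative empirical search: try several candidate unroll degrees,
// time each version, and keep the fastest one.
public class UnrollSearch {

    // Toy "generated" kernel: sums an array. The inner loop with a runtime
    // bound stands in for a genuinely unrolled straight-line body.
    static long run(int[] a, int unroll) {
        long sum = 0;
        int i = 0;
        for (; i + unroll <= a.length; i += unroll) {
            for (int u = 0; u < unroll; u++) {
                sum += a[i + u];
            }
        }
        for (; i < a.length; i++) sum += a[i];   // remainder loop
        return sum;
    }

    public static void main(String[] args) {
        int[] a = new int[1 << 20];
        for (int i = 0; i < a.length; i++) a[i] = i;

        int bestUnroll = 1;
        long bestTime = Long.MAX_VALUE;
        for (int unroll : List.of(1, 2, 4, 8, 16)) {   // candidate degrees of unrolling
            long t0 = System.nanoTime();
            run(a, unroll);                            // execute the candidate version
            long t = System.nanoTime() - t0;
            if (t < bestTime) { bestTime = t; bestUnroll = unroll; }
        }
        System.out.println("best unroll factor: " + bestUnroll);
    }
}
```

A production searcher would, of course, emit genuinely unrolled straight-line code for each candidate and repeat the measurements to reduce timing noise.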
Fig. 1. Speedup obtained by unrolling FFTs of sizes 16, 32 and 64 and Matrix Multiplication (MM)
We use SPIRAL to find the formula that leads to the best performance among all fully unrolled implementations, and the formula that leads to the best performance when partial unrolling is applied. We used these two formulas to compute the speedup shown in Figure 1. Notice that these two formulas can be different; what the plot shows is the performance benefit obtained from unrolling. Figure 1 shows that the best unrolled FFT formula runs between 1.4 and 2.4 times faster than the best non-unrolled formula for sizes 16, 32 and 64. In all cases, the version that achieved the best performance was the totally unrolled one. Notice that the speedups obtained for SPARC and MIPS are quite similar.

Figure 1 also presents the benefits of unrolling for Matrix Multiplication (MM). These results were collected using ATLAS [3]. On MIPS, we used a matrix size of 64x64; on SPARC, we used a matrix size of 80x80. These are the optimal tile sizes that ATLAS found for these platforms. ATLAS also performs register tiling: the resulting code is produced by applying unroll and jam to the three innermost loops. The optimal degree of unrolling for the two outer loops depends on the number of registers of the machine, since too much unrolling can result in register spilling. The degree of unrolling that obtained the best performance is 4x4x64 for MIPS and 2x2x80 for SPARC [15]. Figure 1 shows that the unrolled version is 2.5 times faster than the non-unrolled version.

The code generated by SPIRAL and ATLAS was compiled using a conventional compiler. However, these codes have basic blocks with sizes that range from 300 to 5,000 statements, far larger than what any programmer would write by hand. Although the fully unrolled versions perform much better than the same versions without unrolling, it is not known how good the register allocation applied by these compilers is for very long basic blocks, where register replacements are likely to happen frequently. In the next section, we explain the compiler we designed to perform register allocation on the FFT and MM codes.
3
A Compiler for Long Straight-Line Code
Belady’s MIN [14] is a replacement algorithm for pages in virtual memory. On a replacement, the page to replace is the one with the farthest next use. The MIN algorithm is optimal because it generates the minimal number of physical memory block replacements. However, in the context of virtual memory the MIN algorithm is usually impractical because in most cases it is usually not known which memory block will be referenced in the future. Belady’s MIN algorithm has been proposed for use in register allocation of long basic blocks, where the compiler knows exactly the values that will be used in the future. In this context, the MIN algorithm is also known as Farthest First(FF) [16] since, on a register replacement, the register to replace first is the one holding the value with the farthest next use. The MIN algorithm is not optimal for register allocation since the replacement decision is simply based on the distance and not on whether the register has been modified. When a register holds a value that is not consistent with the value in memory, we say that the register is dirty. Otherwise, we say that the register is clean. If the register to be replaced is dirty, the value of the register needs to be stored back to memory before a new value can be loaded into it. Thus, for a given instruction scheduling, the MIN algorithm guarantees the minimum number of register replacements, but it does not guarantee the minimum traffic with memory, that is, the minimum number of load/stores. In our implementation, when there are several candidates for replacement with the same distance, our compiler chooses the one with the clean state to avoid an extra store. In order to further reduce the number of stores, another simple heuristic called Clean First (CF) was proposed in [16]. With this heuristic, when a live register needs to be replaced, CF first searches in the clean registers. The clean register which contains the value with the farthest next use is chosen. If there are no clean registers, the most distant dirty one is chosen. We implemented a back-end compiler that uses the MIN and the CF algorithms for register allocation. Next, we describe the implementation details of our compiler.
3.1
Implementation Details
We built a simple compiler that translates high-level code into MIPS assembly code. Our compiler assumes that all optimizations other than register allocation have already been applied to the high-level code; it only performs register allocation, using Belady's MIN algorithm or the CF heuristic explained above.

The compiler has two steps. In the first step, we transform the long straight-line code into static single-assignment (SSA) form and build the definition-use chain for all the variables. In the second step, we do register allocation using two data structures: (1) the register file and (2) the definition-use chain. The register file is an array of registers, each with three fields: current_var, state, and addr. The task is to assign a register to a variable.
Fig. 2. An example of register allocation using Belady’s MIN algorithm. (a) Source code for FFT of size 4. (b) The resulting assembly code using Belady’s MIN algorithm
To assign a register, our register allocator [17] first checks three cases, in order: (1) whether the variable is already in a register, (2) whether there is a free register, and (3) whether some register holds a variable that will never be used again. If one of these easy cases is satisfied, it returns that register. Otherwise, it computes the next use of every variable in the register file from the definition-use chain and chooses the register with the farthest next use. If there are several candidates, it prefers registers that have not been modified since loading (clean). If the only option is a dirty register, the compiler generates a store instruction before loading the new variable into the register.

For efficiency, the register file can be a priority queue implemented with a binary heap, where higher priority is given to the farther next use. Registers not in use are assigned a next-use distance of infinity and therefore have the highest priority. Operations such as extracting the register with the farthest next use can then be executed in O(log R) time, where R is the number of registers, so the time complexity of the MIN algorithm is O(n log R), where n is the number of references to variables in the program.
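A minimal sketch of this selection step, assuming the register-file fields described above (current variable, dirty state, next use) and a max-heap keyed by next-use distance, might look as follows. It is our own code, not the authors': it omits the clean-register tie-break and the lazy re-keying that a strict O(log R) bound per operation would require, and the MIPS mnemonics in the emitted strings are placeholders only.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;

// Sketch of MIN-based register selection for a use of a variable.
public class MinAllocator {

    static class Reg {
        final int id;                          // physical register number
        String currentVar;                     // null means the register is free
        boolean dirty;
        int nextUse = Integer.MAX_VALUE;       // "infinity": free registers are evicted first
        Reg(int id) { this.id = id; }
    }

    private final PriorityQueue<Reg> heap =
            new PriorityQueue<>((a, b) -> Integer.compare(b.nextUse, a.nextUse));
    private final Map<String, Reg> byVar = new HashMap<>();

    public MinAllocator(int numRegs) {
        for (int i = 0; i < numRegs; i++) heap.add(new Reg(i));
    }

    // Handles a use of 'var'; 'nextUse' is the position of the following reference
    // to 'var', taken from the definition-use chain (Integer.MAX_VALUE if none).
    public Reg ensureLoaded(String var, int nextUse, List<String> code) {
        Reg r = byVar.get(var);
        if (r == null) {
            // Not resident: take the register with the farthest next use
            // (free and never-used-again registers have distance MAX_VALUE).
            r = heap.poll();
            if (r.currentVar != null) {
                byVar.remove(r.currentVar);
                if (r.dirty) {                  // dirty victim: spill store before reuse
                    code.add("swc1 $f" + r.id + ", " + r.currentVar);
                }
            }
            code.add("lwc1 $f" + r.id + ", " + var);   // (re)load the requested value
            r.currentVar = var;
            r.dirty = false;
            byVar.put(var, r);
        } else {
            heap.remove(r);                     // already resident: just re-key its priority
        }
        r.nextUse = nextUse;
        heap.add(r);
        return r;
    }
}
```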
Figure 2 gives an example of how our compiler applies the MIN algorithm. The code in Figure 2-(a) corresponds to the code for the FFT transform of size 4 generated by SPIRAL [1, 2]. Suppose the number of floating-point (FP) registers is 6. At statement 5, a register needs to be replaced for the first time. Table 1 gives a snapshot of the data structures at statement 5. The variable t1 has the farthest next use; as a result, register $f5 is replaced. Since $f5 is dirty, there is a register spill. Figure 2-(b) shows the assembly code after register allocation for the code in Figure 2-(a).

Notice that our compiler follows the instruction scheduling embedded in the source code; that is, it schedules the arithmetic operations in the same order as they appear in the source code. Also, while most compilers do aggressive scheduling of load operations, ours does not. Compilers try to hoist loads a significant distance above their uses so that cache latency can be hidden on a miss. Our compiler, in contrast, loads values into registers immediately before they are used. Although load hoisting can in some cases increase register pressure, some loads could be moved ahead of their use without increasing it. Adding load hoisting would simply require an additional pass in our compiler; however, we have not implemented it.

We placed each long basic block in a procedure in a separate file, and then called our compiler to do the register allocation and generate the assembly code. When the generated code uses registers that contain values from outside the procedure, we save them at the beginning of the procedure and restore them at the end.
4
Evaluation
4.1
Environmental Setup
In this section, we compare our compiler against the GCC and MIPSPro compilers and evaluate how well they behave on long straight-line codes. For the evaluation, we used the already optimized, unrolled codes obtained from SPIRAL and ATLAS; SPIRAL produces Fortran code, while ATLAS produces C code. Table 2 shows the versions and flags that we used for the MIPSPro and GCC compilers. Our compiler is the one described in Section 3.1, which implements the MIN and CF algorithms. Remember that our compiler schedules operations in the same order as they appear in the source code generated by ATLAS and SPIRAL. Both ATLAS and SPIRAL perform some kind of instruction scheduling in the source code. MIPSPro and GCC, however, rearrange the C or Fortran code, so the instruction schedule they generate is different from ours.

All the experiments were done on a MIPS R12000 processor with a frequency of 270 MHz. The machine has 32 floating-point registers, a 32 KB L1 instruction cache, and a 32 KB L1 data cache. In all the experiments that use MIPSPro and our compiler, the code fits into the L1 instruction cache, just as the data fit into the L1 data cache. However, in a few cases where the code was
compiled with GCC, it did not fit into the instruction cache (we point this out in the evaluation in the next section). Finally, note that integer registers are not an issue in the FFT or MM codes, because these codes only use floating-point registers.

Next, we study the effectiveness of our compiler on the long straight-line code of FFT (Section 4.2) and MM (Section 4.3). Finally, in Section 4.4, we summarize our results.
4.2
FFT
The FFT code that we use is the code generated by SPIRAL. In this section, we first study the characteristics of this code and then evaluate its performance.

SPIRAL and FFT Code. SPIRAL translates formulas representing signal processing transforms into efficient Fortran programs. It uses intelligent search strategies to automatically generate optimized DSP libraries. In the case of FFT, SPIRAL first searches for a good implementation for small-size transforms, 2 to 64, and then searches for a good implementation for larger-size transforms that use the small-size results as their components. For these small sizes, SPIRAL assumes that straight-line code achieves the best performance, since loop control overheads are eliminated and all the temporary variables can be scalars.

To better understand the performance results, we first study the patterns that appear in the FFT code. Some of these patterns are due to the way SPIRAL generates code, while others are due to the nature of FFT. Patterns that come from SPIRAL are: 1) each variable is defined only once; that is, every variable holds only one value during its lifetime; 2) if a variable has two uses, at most one statement lies between the two uses. Patterns due to the nature of FFT are: 3) each variable is used at most twice; 4) if two variables appear on the RHS of an expression, then they always appear together, and they appear twice.
Fig. 3. Performance of the best formula for FFTs 4 - 64
Thus, in Figure 2-(a), one array holds the input, another array holds the output, and the remaining variables (such as t1) are temporaries. Each element of the input array has two uses. The uses of the temporary variables always appear in pairs, and there is at most one statement between the two uses. This FFT code generated by SPIRAL is the input to our compiler.

Given the proximity of the two uses of each variable in the SPIRAL code, any compiler would minimize register replacements by keeping the variable in the same register across the two uses. As a result, the two uses of a variable can be treated as a single use, and the problem of register allocation for the FFT code generated by SPIRAL reduces to register allocation where each variable is defined once and used once. In this simplified model, a register replacement can only occur between the definition and the use of a variable. One consequence is that the MIN and CF algorithms behave identically: they always choose the same register to replace. In addition, since the MIN algorithm implemented in our compiler is known to produce the minimum number of register replacements, we can claim that for this FFT code and the SPIRAL scheduling, our compiler generates the optimal solution, the one with the minimum number of loads and stores. We evaluate next the performance difference between this optimal solution and the MIPSPro and G77 compilers.

Performance Evaluation. SPIRAL does an exhaustive search to find the fastest FFT formula for a given platform. We studied the performance obtained by the best formula when the code was compiled using the MIPSPro compiler, G77, or our compiler. Figure 3 shows the best performance obtained for FFTs of sizes 4 to 64 using the MIPSPro compiler (MIPSPro), the G77 compiler (G77), or our compiler (MIN). Performance is measured in "pseudo MFlops", computed as 5N log2(N)/t, where N is the size of the FFT and t is the execution time in microseconds. Notice that the formula achieving the best performance can be different in each case. We focus on MIPSPro and MIN, since G77 always produces much slower code. To help understand the results, Table 3 shows the number of lines of assembly code (LOC), spills, and reloads for each point in Figure 3.
A spill is a store of a value that needs to be loaded again later. A reload is a load of a value that was previously in a register. The data in Table 3 show that, using MIN, the number of spills and reloads is always the same; this is a consequence of the SPIRAL scheduling. As Table 3 shows, for FFTs of size 4 and 8, the 32 FP registers are enough to hold all the values in the program, and as a result there is no register replacement. Thus, the difference in performance between MIPSPro and MIN comes from differences in instruction scheduling. Starting with FFTs of size 16, we see some spills and reloads, and MIN overcomes the effects of instruction scheduling and obtains the same performance as MIPSPro. Finally, for FFTs of size 32 and 64, the amount of spilling is larger, the effect of instruction scheduling becomes less important, and MIN outperforms MIPSPro: MIN performs 12% and 33% better than MIPSPro for FFTs of size 32 and 64, respectively.

In Figure 4, we show the execution time of several FFT codes of size 64 that SPIRAL produced using different formulas. For each formula we show two points.
Fig. 4. Performance of the different formulas for FFTs of size 64
Fig. 5. Empirical optimizer in ATLAS
One point corresponds to the performance obtained when the SPIRAL code for that formula was compiled using the MIPSPro compiler (MIPSPro), and the other to the performance obtained using our compiler (MIN). On average, MIN runs 18% faster than MIPSPro, and the figure shows that our compiler always performs better. As before, as register pressure increases, register allocation becomes the dominant factor.
4.3
Matrix Multiplication
In this section, we study the performance of register allocation for the matrix multiplication code produced by ATLAS. We first describe ATLAS and then present the performance evaluation.

Overview of ATLAS. ATLAS is an empirical optimizer whose structure is shown in Figure 5. ATLAS is composed of (i) a Search Engine that performs empirical search over certain optimization parameter values and (ii) a Code Generator that generates C code given these values. The generated C code is compiled and executed, and its performance is measured. The system keeps track of the parameter values that produce the best performance; these are used to generate a highly tuned matrix multiplication routine.

For the search process, ATLAS generates a matrix multiplication of size TileSize that we call a mini-MMM. The mini-MMM code is itself tiled to make better use of the registers: each of these small matrix multiplications multiplies an MUx1 sub-matrix of A with a 1xNU sub-matrix of B, and accumulates the result in an MUxNU sub-matrix of C. We call these micro-MMMs. Figure 6 shows a pictorial view and the pseudo-code corresponding to the mini-MMM after register tiling and unrolling of the micro-MMM. The code of a micro-MMM is unrolled to produce straight-line code; after register tiling, the K loop in Figure 6 is unrolled by a factor KU. The result is a straight-line code that contains KU copies of the micro-MMM code. Figure 8-(a) shows two copies of the micro-MMM corresponding to the unroll factors MU=4 and NU=2 shown in Figure 6. Notice that the degree of unroll MUxNU determines the number of FP registers required: MUxNU registers to hold the C values, MU to hold the A values, and NU to hold the B values.
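The register-tiling structure is easiest to see from a tiny generator. The following illustrative Java program (ours, not ATLAS's code generator) prints the statements of a single micro-MMM for given MU and NU and reports the implied register count; the array and variable names are invented.

```java
// Emits the statements of one micro-MMM for unroll factors MU and NU,
// showing why MUxNU + MU + NU floating-point registers are needed.
public class MicroMMM {
    public static void main(String[] args) {
        int mu = 4, nu = 2;   // example unroll factors, as in Figures 6 and 8
        for (int i = 0; i < mu; i++)
            System.out.printf("a%d = A[i+%d][k];%n", i, i);      // MU loads of A
        for (int j = 0; j < nu; j++)
            System.out.printf("b%d = B[k][j+%d];%n", j, j);      // NU loads of B
        for (int j = 0; j < nu; j++)
            for (int i = 0; i < mu; i++)                          // MUxNU multiply-adds
                System.out.printf("c%d_%d += a%d * b%d;%n", i, j, i, j);
        System.out.printf("// registers: %d for C, %d for A, %d for B (total %d)%n",
                mu * nu, mu, nu, mu * nu + mu + nu);
    }
}
```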
Fig. 6. Mini-MMM after register tiling
Fig. 7. Performance versus MUxNU unroll for the mini-MMM
Performance Evaluation. To evaluate our compiler, we ran it on the mini-MMM code generated by ATLAS, which contains the straight-line code explained above. Figure 7 compares the performance in MFlops for different values of the MUxNU unroll using the MIPSPro compiler (MIPSPro), the GCC compiler (GCC), or our compiler with the MIN algorithm (MIN); the line MINSched in the figure will be explained later. For this experiment, the rest of the mini-MMM parameters were set to the values that ATLAS found to be optimal [15]. In particular, TileSize and KU were set to 64 (ATLAS only tries square tiles); that is, the innermost k loop in Figure 6 is totally unrolled. Notice that while unrolling along the k dimension does reduce loop overhead, it does not increase register pressure.

Figure 7 shows that, as before, GCC produces the code with the worst performance. In particular, the sharp drop for unrolls 6x8 and larger is due to the size of the code, which overflows the 32 KB instruction cache of the MIPS R12000 processor. Figure 7 also shows that MIN behaves almost like MIPSPro when MU and NU are small and, as a result, there is no register replacement; the slightly better performance of MIN is mostly due to differences in instruction scheduling. Register replacement occurs for unrolls 4x4 and larger with MIPSPro, and for unrolls 4x5 and larger with MIN. For unrolls 4x6, where there are more values than registers and register replacement occurs, MIN performs worse than MIPSPro.
The MIN algorithm performs worse because of the particular scheduling of the operations in the MM code generated by ATLAS. Figure 8-(a) shows the micro-MMM generated by ATLAS for an unroll of 4x2, which is the input to our compiler. The code in Figure 8-(b) is the resulting assembly code after our compiler does register allocation for the first few instructions; for this example we have assumed only 6 FP registers. It can be seen that when register replacement starts (line 3 of Figure 8-(a)), the value with the farthest next use is the one that has just been computed. With this scheduling, the just-computed values always have the farthest next use. The registers holding them are in the dirty state, and consequently their contents need to be written back to memory. This results in an increase in memory traffic because, at each register replacement, we have a register spill that generates one store. In addition to the spills, the compiler introduces additional dependences by allocating the same register to independent instructions (storage-related dependences). For instance, although all the madds are independent, the farthest-next-use rule of the MIN algorithm always spills register $f4, creating a chain of dependences along the instructions that use $f4, as shown in Figure 8-(b). As a result, the performance of MIN decreases, as shown in Figure 7.

We also tried our compiler with the CF heuristic, but the performance was even worse: CF always replaces the registers containing the ai and bi values, which tend to be needed again shortly.

We then looked at the instruction scheduling in the MIPSPro assembly code. Since the MIPSPro assembly is the result of both instruction scheduling and register allocation, we extracted the scheduling of the madd instructions from it and obtained the code shown in Figure 8-(c). We ran our compiler on the code with this new scheduling to do the register allocation; the resulting assembly code is shown in Figure 8-(d). We executed this code, and the performance obtained is the line MINSched in Figure 7. Now, for unrolls larger than 4x6, our compiler behaves better. As Figure 8-(d) shows, with the new scheduling the variables have a higher reuse rate. Since the registers containing these variables are in the dirty state, this new scheduling helps to reduce register spilling.
Fig. 8. Two micro-MMM codes for a mini-MMM of size 64, MU=4 and NU=2
Table 4 helps to understand the results in Figure 7. For each degree of unrolling, we show the number of lines of assembly code (LOC), spills, and reloads. Table 4 shows that MINSched always has fewer spills and reloads than MIPSPro. As the unroll grows and register pressure becomes more prominent, fewer spills and reloads translate into better performance for MINSched (Figure 7). On average, for unrolls larger than 4x6, MINSched performs 10% better than MIPSPro. Finally, we also tried the CF algorithm with the new scheduling, but it performed worse than MIN, so we do not show results for it.

We have used the long straight-line code of the MM in ATLAS as an example where we apply register allocation, and we have shown that MINSched performs better than MIPSPro for large degrees of unroll. However, this improvement is not useful in practice: the unroll that obtains the best performance is the largest unroll before register replacement starts (4x4 in Figure 7). As a result, ATLAS will select this unroll, for which register replacement heuristics are not used.
4.4
Analysis
Next we summarize our results. When the straight-line code is such that the number of simultaneously live values is smaller than the number of registers, there is no need for register replacement. In that case, instruction scheduling is the dominant factor in optimization. This is the case for FFTs of size 4 and 8, as well as for MM, where the best performance is obtained for a degree of unrolling without register spills or reloads.

When the number of simultaneously live values is larger than the number of registers, register replacement becomes important. For FFT, we observed that as register pressure increases, register allocation becomes more important than instruction scheduling. For FFTs of size 32 and 64, where the number of spills
and reloads is larger, register allocation becomes important and our compiler achieves higher performance. The higher performance of our compiler may also be due to the use of the SPIRAL scheduling together with the MIN algorithm, which results in an optimal register allocation for that scheduling. On the other hand, for degrees of unrolling larger than 4x6, when register pressure is high in the MM code, the use of the ATLAS scheduling with the MIN algorithm introduces additional dependences; as a result, MIPSPro performed better than our compiler. It is unclear to us whether, using the ATLAS scheduling, we could have found a register replacement better than the one obtained with the MIPSPro instruction scheduling. However, it is clear that there are schedulings that can reduce register pressure, and these schedulings should be used when register spills and reloads become important.

In summary, by using our compiler with the simple MIN algorithm we have improved on the performance obtained by the MIPSPro compiler for long straight-line code when register pressure is high. Today's compilers, such as MIPSPro or GCC, are not tuned to handle this type of code and, as a result, highly optimized codes such as those produced with loop unrolling and trace scheduling can achieve sub-optimal performance. Performance could be improved by an appropriate register allocator, and perhaps by an instruction scheduling chosen to minimize register pressure.
5
Related Work
Local register allocation is the task of assigning values to registers over a basic block so that the traffic between registers and memory is minimized. Belady's MIN [14] and Horwitz's algorithm [11] are often used in local register allocation. Belady's MIN [14] minimizes the number of register replacements, not the number of loads and stores; as a result, it may not find the optimal solution. Horwitz's algorithm minimizes the number of loads and stores, but it targets index registers, not general-purpose registers. Later algorithms [10, 12, 13] are mainly improvements in compilation efficiency; they are still exponential in time and space. Belady's MIN algorithm, on the other hand, runs in polynomial time, although it may not find the optimal solution.

The combined problem of register allocation and instruction scheduling in straight-line code has also been studied in the literature. In particular, Goodman and Hsu [18] propose two different scheduling algorithms: one tries to minimize pipeline stalls, and the other tries to reduce register pressure; the algorithm is chosen based on the register pressure. This agrees with our observation in Section 4.4.
6
Conclusion
In this paper, we have shown that a simple algorithm like Belady's MIN can beat the performance of state-of-the-art compilers such as MIPSPro or GCC on long straight-line codes. We applied Belady's MIN algorithm to codes corresponding to FFT transforms and Matrix Multiplication produced
by SPIRAL and ATLAS, respectively. We measured performance by running these codes on a real machine (a MIPS R12000 processor). Our results show that code produced with Belady's MIN algorithm is about 12% and 33% faster for FFTs of size 32 and 64, respectively. In the case of Matrix Multiplication, it can also execute faster than the code generated by the MIPSPro compiler, by an average of 10%. However, in this application the unroll that achieves the best performance is the one without register spilling, and for that unroll our compiler and MIPSPro perform similarly. Our experiments show that when the number of live variables is smaller than the number of registers, MIPSPro and our compiler have similar performance. However, as the number of live variables increases, register allocation seems to become more important. We believe that, in this case of high register pressure, instruction scheduling needs to be considered in concert with register allocation so that the number of register spills and reloads can be minimized.
References
[1] M. Püschel et al.: SPIRAL: A Generator for Platform-Adapted Libraries of Signal Processing Algorithms. Journal of HPCA (2002)
[2] Xiong, J., Johnson, J., Johnson, R., Padua, D.: SPL: A Language and a Compiler for DSP Algorithms. In: Proc. of PLDI (2001) 298-308
[3] Whaley, R., Dongarra, J.: Automatically Tuned Linear Algebra Software. Technical Report UT CS-97-366, University of Tennessee (1997)
[4] Hennessy, J.L., Patterson, D.A.: Computer Architecture: A Quantitative Approach. Morgan Kaufmann Publishers, San Francisco, CA (1996)
[5] Chaitin, G.: Register Allocation and Spilling Via Graph Coloring. In: Proc. of the SIGPLAN Symp. on Compiler Construction (1982) 98-105
[6] Garey, M., Johnson, D.: Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman and Company, New York (1989)
[7] Bergner, P., Dahl, P., Engebretsen, D., O'Keefe, M.T.: Spill Code Minimization via Interference Region Spilling. In: SIGPLAN Conf. on PLDI (1997) 287-295
[8] P. Briggs et al.: Improvements to Graph Coloring Register Allocation. ACM Trans. on Programming Languages and Systems 6 (1994) 428-455
[9] George, L., Appel, A.: Iterated Register Coalescing. ACM Trans. on Programming Languages and Systems 18 (1996) 300-324
[10] Hsu, W., Fischer, C., Goodman, J.: On the Minimization of Loads/Stores in Local Register Allocation. IEEE TSE 15 (1989) 1252-1260
[11] Horwitz, L., Karp, R.M., Miller, R.E., Winograd, S.: Index Register Allocation. Journal of the ACM 13 (1966) 43-61
[12] Kennedy, K.: Index Register Allocation in Straight Line Code and Simple Loops. In: Design and Optimization of Compilers. Prentice Hall, Englewood Cliffs, NJ (1972)
[13] Luccio, F.: A Comment on Index Register Allocation. CACM 10 (1967) 572-574
[14] Belady, L.: A Study of Replacement Algorithms for a Virtual-Storage Computer. IBM Systems Journal 5 (1966) 78-101
[15] K. Yotov et al.: A Comparison of Empirical and Model-driven Optimization. In: Proc. of PLDI (2003) 63-76
[16] Fischer, C., LeBlanc, T.: Crafting a Compiler. Benjamin Cummings (1987)
[17] Aho, A., Sethi, R., Ullman, J.D.: Compilers: Principles, Techniques, and Tools. Addison-Wesley Publishing Company (1985)
[18] Goodman, J.R., Hsu, W.C.: Code Scheduling and Register Allocation in Large Basic Blocks. In: Proc. of the 2nd ICS, ACM Press (1988) 442-452
Load Elimination in the Presence of Side Effects, Concurrency and Precise Exceptions
Christoph von Praun, Florian Schneider, and Thomas R. Gross
Laboratory for Software Technology, ETH Zürich, 8092 Zürich, Switzerland
Abstract. Partial redundancy elimination can reduce the number of loads corresponding to field and array accesses in Java programs. The reuse of values loaded from memory at subsequent occurrences of load expressions must be done with care: Precise exceptions and the potential of side effects through method invocations and concurrent modifications in multi-threaded programs must be considered. This work focuses on the effect of concurrency on the load optimization. Unlike previous approaches, our system determines accurate information about side effects and concurrency through a whole-program analysis. Partial redundancy elimination is extended to exploit this information and to broaden the optimization scope. There are three main results: (1) Load elimination is effective even in the most conservative variant without side effect and concurrency analysis (avg. dynamic reduction of loads 23.4%, max. 55.6%). (2) Accurate side effect information can significantly increase the number of optimized expressions (avg. dynamic reduction of loads 28.6%, max. 66.1%). (3) Information about concurrency can make the optimization independent of the memory model, enables aggressive optimization across synchronization statements, and can improve the number of optimization opportunities compared to an uninformed optimizer that is guided by a (weak) memory model (avg. dynamic reduction of loads 28.5%, max. 70.3%).
1
Introduction
A common storage model for object-oriented programs is to allocate objects explicitly and access them indirectly through references. While this model is convenient for the programmer and for memory management (garbage collection), indirect memory accesses can become a performance bottleneck during program execution.

Objects and arrays are accessed through access expressions. Objects can refer to other objects, resulting in sequences of indirect loads, so-called path expressions (e.g., o.f1.f2.f3). The evaluation of a path expression results in a pointer traversal, a common runtime phenomenon in object-oriented programs. Current processor architectures and memory subsystems are not designed to handle pointer traversals at peak rates, and hence pointer traversals may cause a performance bottleneck. Standard optimization techniques such as common subexpression elimination (CSE) and partial redundancy elimination (PRE) can be applied to reduce the number of (indirect) loads that are evaluated at runtime. The technique is also known as register promotion because non-stack variables are 'promoted' to faster
register memory during a certain period of the program execution. However, the elimination and motion of loads must be done with special care to account for constraints imposed by the programming language:

Aliasing: An assignment to a reference field may invalidate a loaded value in the assigning method.
Side effects: Method calls may modify objects and hence invalidate loaded values in the caller.
Precise exception semantics: The evaluation of an indirect load may raise an exception. Hence, code motion that involves indirect loads must not lead to untimely exceptions.
Concurrency: In multi-threaded programs, objects may be modified concurrently. The elimination of a load is only admissible if the visibility of concurrent updates, as prescribed by the thread and memory model of the language, is not violated.

Java programs may be affected by all of these aspects. Previous work [6] employed a simple type-based alias analysis to account for some of these impediments to load elimination, but situations that could not be resolved through simple alias information were handled in the most conservative manner. This paper improves upon previous studies by employing a detailed side effect and concurrency analysis. We classify redundant load expressions according to their interaction with aliasing, side effects, exception handling and concurrency, and quantify the consequences of the individual aspects on the effectiveness of PRE.

In the context of parallel programs, Lee and Padua [9] define a program transformation as correct if "the set of possible observable behaviors of a transformed program is a subset of the possible observable behaviors of the original program". The possible observable behaviors of a parallel program (and consequently the permissible program transformations) are determined by the memory model. A restrictive memory model like sequential consistency (SC) would defeat a number of standard reordering optimizations due to the potential of data races [12]. One way to handle the problem is to make the memory model as weak as possible, allow a large number of behaviors, and hence designate standard transformations that are known from the optimization of sequential programs as "correct" in the context of parallel programs. The design of the revised Java memory model (we refer to this model as the JMM) follows this strategy [11, Appendix A].

Our technique pursues a different strategy that is independent of the memory model, i.e., its correctness does not rely on a consistency weaker than SC. Concurrency analysis determines a conservative set of variables and access sites with access conflicts. A conflict is reported if the analysis cannot determine start/join or monitor-style synchronization for read/write accesses to the same data from different threads. The number of conflicting variables and access sites is typically moderate, and such sites are exempted from the optimization. For the remaining accesses, the synchronization strategy is known (determined by the concurrency analysis), and a number of aggressive optimizations are possible that would not be performed if the optimizer only followed the minimal constraints of the memory model.
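As a small illustration of the first two constraints (our own fragment, not taken from the paper), consider the following method. Whether the second load of this.balance can reuse the first depends on whether other may alias this and on whether the call has a side effect on the field; a conflicting concurrent access to balance would additionally rule the reuse out.

```java
// Illustrative fragment: a candidate for load elimination and two ways it can fail.
class Account {
    int balance;

    int twice(Account other) {
        int a = this.balance;      // first load of this.balance
        other.balance = 0;         // may alias 'this' -- would kill the loaded value
        log(other);                // callee side effects may also invalidate it
        int b = this.balance;      // redundant only if neither of the above interferes
        return a + b;
    }

    private void log(Account acc) {
        // a call whose side effects must be approximated by the analysis
        System.out.println(acc.balance);
    }
}
```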
Fig. 1. Program fragment illustrating uninvolved synchronization
The main difference between (1) the conventional application of standard optimizations constrained by the memory model and (2) our approach based on concurrency analysis is as follows: Approach (1) applies the optimization to all loads, disregarding the potential of sharing or access conflicts; accesses to volatile variables and synchronization kill the availability of previous loads. Approach (2) is conservative about loads of variables with access conflicts but puts fewer constraints on the availability of load expressions. Hence, approach (2) enables a number of optimizations that are rejected by approach (1).

Consider the program in Figure 1. Approach (1) would abstain from optimizing the second load expression because of the intervening synchronization (kill). Approach (2) can determine that the object referenced by s1 is not conflicting (it is thread-local, or accesses to it are protected by enclosing monitor synchronization); hence the synchronization that occurs in the call is uninvolved in protecting the variable s1.f, and the second load can be optimized.
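Since Figure 1 is not reproduced here, the following fragment is a hypothetical reconstruction in its spirit: the call performs synchronization, but on a lock that is uninvolved in protecting s1.f.

```java
// Hypothetical fragment in the spirit of Figure 1 (not the actual figure).
class S { int f; }

class Worker {
    private final Object otherLock = new Object();

    int compute(S s1) {           // s1 is thread-local or monitor-protected by the caller
        int a = s1.f;             // first load of s1.f
        tick();                   // contains synchronization, but not on s1
        int b = s1.f;             // approach (1): killed by the sync; approach (2): still available
        return a + b;
    }

    private void tick() {
        synchronized (otherLock) {   // "uninvolved" synchronization
            // ... unrelated bookkeeping ...
        }
    }
}
```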
2
Example
Figure 2 reviews several simple control-flow graphs that illustrate a classification of load redundancies. We assume that local variables follow static single assignment constraints and that, unless explicitly specified, there are no side effects on the involved objects. The classification applies to occurrences of expressions and is not exclusive, i.e., a single expression occurrence can fall into several categories. Partial and full redundancies can be affected by updates due to aliasing, side effects, or concurrency:
(a) Loss of redundancy due to aliasing: The expression in block (4) is fully redundant wrt. the syntactically equivalent expression in block (1). However, the aliasing of local variables p and o, in combination with the update in block (3), invalidates this redundancy.
(b) Loss of redundancy due to side effects: Assume that the call in block (3) has a potential side effect on the object referenced by o. The redundancy of the load in block (4) wrt. the load in block (1) is then lost due to a side effect, since the call occurs on some control-flow path between the redundant expressions.
Fig. 2. Possible losses of full and partial redundancies
(c) Loss of redundancy due to precise exceptions: The load in block (4) is partially redundant wrt. the load in block (2). This redundancy could be removed by hoisting the expression from block (4) to the end of block (3). Such code motion would, however, violate precise exception semantics: the evaluation of the access expression o.f may throw a NullPointerException. Assume that the code motion is performed (o.f is hoisted above the assignment to variable a) and the hoisted expression throws an exception at runtime; then the update to a would not be performed and hence would not be visible in the handler. Thus, code motion to avoid partial redundancies must not bypass updates that should be visible in some handler. The occurrence of o.f in block (4) is hence a partial redundancy that is lost due to precise exceptions and must not be optimized (a code sketch of this situation is given after this classification). Redundancies like the load in block (4) can be optimized in combination with speculative code motion and compensation code that is executed in the case of an exception [5]; this approach is further discussed in Section 5.
(d) Loss of redundancy due to a concurrent update: Let v be a volatile variable. The load in block (4) is fully redundant wrt. the load in block (1). There is, however, an intervening load of a volatile variable that forces updates of thread 2 to become visible to thread 1. This includes an update of field f of the object referenced by s1 and s2, and hence the elimination of the load expression in block (4) must not be performed. The redundancy in block (4) is lost due to a concurrent update, because its elimination might lead to the phenomenon that the update of thread 2 does not become visible to thread 1.
(e) Loss of redundancy due to a monitor boundary: The second load in block (1) and the load in block (2) are fully redundant wrt. the first load in block (1). Accesses to the object referenced by s1 and s2 are guarded by locks, hence there is no access conflict. The elimination of loads must nevertheless be done with special care: the second load in block (1) can be optimized, but the load in block (2) must not be eliminated due to the potential update done in thread 2.
In cases (a), (b), (d) and (e), a full or partial redundancy is lost. In case (c), it is the opportunity to optimize a partial redundancy that is lost. In cases (d) and (e), the redundancy is lost irrespective of the memory model, i.e., the optimization would not be permissible under SC or weaker models.
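The precise-exceptions case (c) is the subtlest one; the following sketch (our reconstruction, which collapses blocks (3) and (4) of the figure into a single try body) shows why hoisting the load is forbidden.

```java
// Sketch of case (c): a partially redundant load that must not be hoisted.
class PreciseExceptions {
    static class Thing { int f; }

    static int sum(Thing o, boolean cond) {
        int r = 0;
        if (cond) {
            r = o.f;              // block (2): o.f loaded on this path only
        }
        int a = 0;
        try {
            a = 1;                // update that must be visible in the handler
            // Hoisting o.f above "a = 1" would be illegal under precise exceptions:
            // if o is null, the exception would fire before the assignment,
            // and the handler would observe the stale value of a.
            r += o.f;             // block (4): partially redundant, must not be hoisted
        } catch (NullPointerException e) {
            r = a;                // relies on a == 1 if the load threw
        }
        return r;
    }
}
```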
3
PRE of Path Expressions
PRE is a well-known technique to reduce the number of load expressions, and Chow et al. [2] describe a practical approach (SSAPRE) to performing PRE on an SSA intermediate representation. SSAPRE was originally developed in the context of translating C/C++ programs. Java programs must obey precise exception semantics, and the language prescribes a thread model, so a compiler that wants to eliminate (some) path expressions for this language must handle the situations illustrated in Section 2. We describe here an algorithm that builds on the SSAPRE algorithm of [2] to remove load operations. This algorithm requires additional whole-program analyses that provide information about aliasing, side effects and concurrency. The outline of the algorithm is as follows:
1. Transform the program such that covert redundancies are revealed (Section 3.1).
2. Compute alias information (Section 3.2).
3. Determine side effects at all call sites (Section 3.3).
4. Compute escape information and conflicting fields (Section 3.4).
5. Perform partial redundancy elimination (Section 3.6).
3.1
Program Transformations
The compiler's ability to detect redundancies is enhanced by two program transformations: First, method inlining allows the intraprocedural algorithm to operate to some degree across method boundaries. Second, loop peeling makes it possible to hoist loop-invariant expressions and can significantly reduce the dynamic frequency of loads inside loops.
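A small illustrative example (ours, not from the paper) of the effect of loop peeling combined with load elimination:

```java
// Before peeling, o.f is loaded in every iteration; after peeling the first
// iteration, the remaining loop can reuse the value loaded there, as long as
// nothing in the loop body invalidates it.
class Peeling {
    static class Obj { int f; }

    static int before(Obj o, int n) {
        int s = 0;
        for (int i = 0; i < n; i++) {
            s += o.f;              // indirect load executed n times
        }
        return s;
    }

    static int afterPeeling(Obj o, int n) {
        int s = 0;
        if (n > 0) {
            int t = o.f;           // peeled first iteration: load once
            s += t;
            for (int i = 1; i < n; i++) {
                s += t;            // reuse of the promoted value (register promotion)
            }
        }
        return s;
    }
}
```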
3.2
Alias Analysis
The alias analysis is based on global value numbering, where value sets are associated with (1) local variables and parameters, (2) fields, and (3) array variables. A value number is created at an allocation site and flows into the value set of the variable holding the reference to the allocated object. Value numbers are propagated in a flow-insensitive, interprocedural manner along a variable-type analysis (VTA), which is described in detail in [14]. Value numbers approximate may-alias information: two reference variables may refer to the same object at runtime if the intersection of their value number sets is not empty.
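The resulting may-alias test is simple; the following sketch (not the authors' implementation) shows it over explicit value-number sets, with allocation-site numbers chosen arbitrarily.

```java
import java.util.Map;
import java.util.Set;

// Each reference variable maps to the set of value numbers (allocation sites)
// that may flow into it; two variables may alias iff the sets intersect.
class ValueNumberAliasing {
    static boolean mayAlias(Map<String, Set<Integer>> valueSets, String v1, String v2) {
        Set<Integer> s1 = valueSets.getOrDefault(v1, Set.of());
        for (Integer vn : valueSets.getOrDefault(v2, Set.of())) {
            if (s1.contains(vn)) return true;   // a common allocation site
        }
        return false;
    }

    public static void main(String[] args) {
        Map<String, Set<Integer>> vs = Map.of(
                "o", Set.of(1, 2),
                "p", Set.of(2, 3),
                "q", Set.of(3));
        System.out.println(mayAlias(vs, "o", "p"));  // true  (share site 2)
        System.out.println(mayAlias(vs, "o", "q"));  // false (disjoint)
    }
}
```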
3.3
Side Effect Analysis
The purpose of the side effect analysis is to determine, at a specific call site, whether the callee updates objects that are referenced by available load expressions in the caller. If such a side effect is detected, the call site kills the availability of the respective load expression. The update effects and the aliasing relationships introduced by the callee are encoded in a method summary, which is computed separately for each method in a bottom-up traversal of the call graph. A summary contains abstractions of the objects that are accessed or allocated in the dynamic scope of a method and the reference relationships among those objects induced through field variables. Updates are specified per object and do not differentiate individual fields. The concept and computation of method summaries are described in more detail by Ruf [13]; the method also accounts for recursion.

A method summary encodes updates in a generic form, and this information has to be adapted to the individual contexts in which the method is called. At a call site, the actual parameters and the objects reachable through them are unified with the corresponding formal parameters of the summary. Caller-side aliasing is approximated through the may-alias information computed in Section 3.2. At this point, the embedded method summary provides a conservative approximation of the objects that are modified by the callee at this specific call site.
3.4
Concurrency Analysis
The concurrency analysis determines, for a specific access expression, whether the accessed object is shared and whether there are potential access conflicts on field variables. A variable is subject to a conflict if there are two accesses that are not ordered through enclosing monitor synchronization and at least one of the accesses is an update. In our model, we limit the optimization to loads that access data without conflicts. In addition to access conflicts, certain statements may necessitate a reload of a variable even if the variable is not subject to an access conflict. Such statements are called killing statements because they kill the availability of preceding load expressions. We first discuss two methods for determining the absence of access conflicts, and then define the notion of killing statements and their computation.

Stack-Locality. In a simple approximation, concurrent access can be excluded for those objects that remain confined to the scope of their allocating method and hence are not made available to other threads. Such abstract objects are called stack-local. Stack-locality can be computed similarly to the update effects (Section 3.3): instead of the abstract objects that are updated, the sets of objects that escape from the stack are noted in the method summary. Stack-locality is, however, a very conservative approximation of the property of 'having no conflicting access': first, the definition applies to whole objects instead of individual field variables; second, there is typically a significant number of objects that escape the stack and are nevertheless not shared or not subject to conflicting access [16].
Object Use Analysis. The object use analysis determines the set of field variables that are regarded as conflicting. A context-sensitive symbolic program execution is used to track accesses to fields in different object and locking contexts. Synchronization patterns like "init then shared read", thread start/join, and monitors can be recognized and allow the analysis to establish the absence of conflicts for a large number of variables and statements. A detailed description of the object use analysis is presented in [16]. Given the set of fields on which potential conflicts arise, the load elimination refrains from optimizing loads of such fields through reference variables that are stack-escaping. If arrays are subject to conflicts, accesses to escaping arrays are not optimized.

Kill Analysis. Kill information specifies whether a statement necessitates reloading previously loaded values from memory. Our load optimization is intraprocedural and only targets objects that have no conflicts. For shared variables that are protected by a monitor, a reload is necessary if the scope of the protecting monitor is temporarily quit (other threads could enter the monitor and update the loaded variable). In Java, there are two cases in which a monitor can be temporarily left at the method scope: first, at the boundary of a block monitor; second, at a call site of Object::wait or of callers of it. Access ordering on shared variables can also be guaranteed by thread start and join; hence calls to Thread::start and Thread::join are also considered killing. The kill analysis determines such killing statements in a single pass over the caller hierarchy of the program.
3.5
Exceptions
The potential of exceptions narrows the flexibility of code motion when eliminating partial redundancies. Precise exception semantics in Java demand the following behavior in the case of an exception: all updates prior to the excepting statement must appear to have taken place; updates that follow the excepting statement must not appear to have taken place; and program transformation must not change the order of thrown exceptions.

To account for these semantics, a simple program traversal identifies potentially excepting instructions (PEIs) [5]: explicit raise of an exception, indirect loads, memory allocation, synchronization, type checks, and calls. We assume that PEIs are the only source of exceptions (synchronous exceptions) and do not account for Java's asynchronous exceptions, which are raised at very severe error conditions (e.g., machine error, lack of memory) and often hinder the further execution of a thread or program. Information about PEIs is used to restrain load elimination, such that indirect load expressions are not hoisted above PEIs or above assignments to local variables if the current method defines an exception handler.
3.6
PRE
This section elaborates on the modifications to the SSAPRE algorithm [2] that account for side effects, concurrency and precise exception semantics. The algorithm is driven by a worklist of expressions and optimizes one expression at a time; compound expressions (e.g., access path expressions) are split and handled in the appropriate order. SSAPRE performs the following steps:

Initialize the worklist with candidate expressions.
Insert availability barriers.
While the worklist is not empty, do:
1. Φ-Insertion
2. Rename
3. DownSafety
4. WillBeAvail
5. Finalize
6. CodeMotion
7. If there are new expressions: add them to the worklist and determine their availability barriers.

The detailed description of each of these steps can be found in [2]. We describe the changes to the original algorithm in the following paragraphs; steps (5), (6) and (7) remain unchanged.

Candidate Expressions. The PRE implementation described here optimizes three types of expressions: arithmetic expressions, scalar loads (static field accesses), and indirect loads (non-static field and array accesses). Optimization candidates are determined during the collection phase, where expressions that cannot be optimized due to an access conflict are filtered out. If a program is single-threaded, there are no further constraints. For multi-threaded programs, the results of the escape and concurrency analysis inhibit the optimization of a load expression if the base object is globally visible (for direct loads this is always the case; for indirect loads, all stack-escaping objects are assumed to be globally visible) and there is a conflict on the accessed field. Accesses to volatile variables are also excluded from the optimization.
Availability Barriers. A second pass over the program determines statements that kill the availability of an expression, i.e., call sites with side effects or potentially aliased stores (may-defs). There are two events that render an access expression invalid: (1) an update of the base reference variable, and (2) an update of the accessed field variable. Updates by the callee that are visible in the caller are determined from the method summary information (Section 3.3). If a side effect is detected, the affected expression must be invalidated, and a so-called kill occurrence of that expression is inserted at the call site; this is done for both cases of invalidation, (1) and (2). Similarly, kill occurrences are inserted at stores that are potentially aliased with the base reference variable of an expression (case (1)). The availability of a load expression is also killed by a store through the same reference variable to the same field (must-def, case (2)). In this case, however, a so-called left occurrence [10] is inserted. Since such a store is a definition, a left occurrence makes an expression available, so that subsequent loads of the same variable are redundant wrt. this store. For PEIs and other statements with side effects that are potentially visible in a local exception handler or outside the current method, an exception-side-effect occurrence is inserted; such occurrences do not specify a potential update, but only serve to prohibit optimization. Φs for an expression are inserted at the iterated dominance frontier (IDF) of each real occurrence. Additionally, Φs are inserted at the IDF of left occurrences and kill occurrences, since these may change the value of an expression.

Rename. The rename step builds the factored redundancy graph (FRG) [2] and assigns version numbers to each occurrence of an expression. This is done in a pre-order pass over the dominator tree. Left occurrences always get a new version number, whereas kill occurrences simply invalidate the current version. PEIs have no impact on the version numbers. Multiple expression occurrences with the same version number indicate redundancy.
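The distinction between kill occurrences and left occurrences can be illustrated on a small fragment (ours, not from the paper):

```java
// A kill occurrence (may-def through a potential alias) versus a left
// occurrence (must-def through the same reference variable).
class Occurrences {
    static class Node { int f; }

    static int demo(Node o, Node p) {
        int a = o.f;      // real occurrence: o.f becomes available
        p.f = 7;          // may-def: if p may alias o, a kill occurrence of o.f is
                          // inserted here and the availability of o.f is lost
        int b = o.f;      // redundant only if the alias analysis proves p != o
        o.f = 9;          // must-def: a left occurrence -- o.f is available again
        int c = o.f;      // redundant wrt. the store above; can reuse the value 9
        return a + b + c;
    }
}
```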
Exception Safety. To account for the restrictions of Java's exception semantics, the computation of safe insertion points for expressions needs to be extended. In the original algorithm, down-safety is a sufficient criterion for the insertion of an expression E at a position S in the program. Here, we add the concept of exception-safety, which must be satisfied in addition to down-safety for expressions E that contain PEIs (e.g., indirect loads, division). A position T in a program is exception-safe with respect to an expression occurrence E iff there is no critical statement on any path from T to E. Critical statements are: PEIs, stores to escaping objects, and stores to local variables that are visible inside a local exception handler.

The exception-safe flag is initialized during the rename step. For each critical statement, the immediately dominating Φ is determined and marked as not exception-safe. In a second step, the exception-safe flag is propagated upward, beginning at the Φs that are initially not exception-safe: if an operand of such a Φ is defined by another Φ, we mark the defining Φ as not exception-safe. This is done recursively until no more Φs are reached. For example, the insertion point at the end of block (3) in Figure 2-(c) is not exception-safe with respect to the expression occurrence in block (4), because there is an assignment with a side effect that is visible inside the exception handler. In the scenario of Figure 2-(b), the occurrence of o.f in block (4) is fully redundant; in that case a thrown exception does not impose any restrictions, because there is no insertion of o.f on a new path.

WillBeAvail. This step determines at which points an expression will be made available by inserting code. There are two modifications related to precise exception semantics in this step: First, for partially available Φs that are marked as not exception-safe, the willBeAvail flag is reset, so that no code motion violates the exception semantics. Second, expressions that are partially available at the beginning of an exception handler must be invalidated; hence the willBeAvail flag of Φs that are partially available at the beginning of an exception handler is reset.
4 Evaluation
We implemented the modified SSAPRE in a Java-to-x86 way-ahead compiler and report here on its effectiveness. The runtime system is based on GNU libgcj version 2.96 [4]. The numbers we present in the static and dynamic assessment refer to the overall program, including library classes but excluding native code. The effect of native code on aliasing and object access is modeled explicitly in the compiler. The efficiency of the optimization has been evaluated for several single- and multi-threaded benchmarks, including programs from the SPEC JVM98 [15] and the Java Grande Forum [7] suites (Table 1). The benchmarks have been compiled in four variants:
(A) No side effect and no concurrency analysis: Every method call is assumed to have side effects and all loads are optimized according to the JMM, i.e., all synchronization barriers invalidate the availability of loads. Alias information is used to disambiguate the value of reference variables. This is
the most conservative configuration of our algorithm and could, e.g., be implemented in an optimizing JIT compiler. This configuration resembles the analysis in [6].
(B) Side effect but no concurrency analysis: Side effects are determined at method calls, and all loads are optimized according to the JMM.
(C) Side effect and concurrency analysis: Precise information about side effects and concurrency guides the optimization. Loads of variables that are not conflicting are aggressively optimized across synchronization statements; loads of conflicting variables are not optimized (illustrated by the sketch following this list). The resulting optimization is correct with respect to an SC memory model.
(D) Upper limit for concurrency analysis: In case (C), some optimizations might be defeated because conservatism in the conflict analysis may classify too many variables as conflicting. The artificial configuration (D) constitutes an upper bound on the optimization potential that could be achieved by an "ideal synchronization analysis": All variables are assumed to be free of conflicts and there are no synchronization barriers. The optimization might be incorrect in this configuration and hence we present only static counters (Table 2).
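As a rough illustration of what variant (C) permits beyond variant (B) (a hand-written C/pthreads sketch, not taken from the benchmarks; all names are hypothetical), consider a loop that reads a thread-local field inside a critical section:

```c
#include <pthread.h>

struct Counter { long local_total; };          /* never shared across threads        */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_sum;                        /* conflicting: accessed by all threads */

long tally(struct Counter *c, const long *vals, int n) {
    long t = 0;
    for (int i = 0; i < n; i++) {
        pthread_mutex_lock(&lock);
        /* Under a JMM-style model (variant B), the lock acquire acts as a
         * synchronization barrier and kills the availability of c->local_total,
         * forcing a reload on every iteration.  If a concurrency analysis proves
         * that *c is thread-local (variant C), loads of c->local_total can be
         * kept in a register across the barrier; only shared_sum must be
         * re-read and re-written inside the critical section. */
        shared_sum += vals[i];
        c->local_total += vals[i];
        pthread_mutex_unlock(&lock);
        t = c->local_total;
    }
    return t;
}
```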
4.1 Number of Optimized Expressions
The comparison of columns (A) and (B) in Table 2 shows the improvement due to the side effect analysis: For mtrt, montecarlo, and compress the analysis is most effective, and many calls to short methods that have only local effects can be identified. The increase of optimization opportunities for PRE ranges between 9.3% for moldyn and 92.0% for mtrt, with an average of 31.7%. The concurrency analysis (Table 2, column (C)) increases the number of optimization opportunities for all but one benchmark in comparison to the version that optimizes according to the JMM (Table 2, column (B)). For the single-threaded programs the improvement is generally somewhat higher (15.7% for compress to 52.7% for jess) because there are no conflicting accesses that prevent load elimination. For the multi-threaded benchmarks the improvement is between 5.4% (tsp) and 11% (montecarlo). The average improvement due to concurrency analysis is 9.6%. This average value is reduced through a loss of
optimization opportunities in moldyn in variant (C): A cycle in the heap shape abstraction leads to unnecessary conservatism in the conflict analysis, such that a number of actually thread-local objects are classified as conflicting. A simple change in the source code of the program could improve the precision of the analysis such that variant (C) would provide 9% more optimization opportunities than (B). Column (D) shows the theoretical upper bound for a perfect concurrency analysis. Apart from moldyn, the results of variant (C) for the multi-threaded benchmarks are within 5% of that upper bound. For the single-threaded benchmarks the concurrency analysis already provides "perfect" information, hence there is no further improvement. Note that the concurrency analysis is only effective in combination with the side effect analysis: If method calls kill all available expressions, then not much is gained through the reduction of synchronization barriers provided by the concurrency analysis. Overall, the combined side effect and concurrency analysis increases the number of optimized occurrences for all programs but moldyn, ranging from 27.8% (tsp) up to 102.6% (mtrt), with an overall average of 45.3%.
4.2 Dynamic Count of Load Operations
Table 3 specifies the dynamic number of loads in the different variants of the benchmarks. The base variant (A) is already quite successful in reducing the number of loads (avg. 23.4%, max. 55.6%). Side effect information improves the reduction further (avg. 28.6%, max. 66.1%). Most successful is again variant (C), which achieves a reduction from 9.1% for mtrt up to 70.3% for montecarlo (avg. 28.5%). For single-threaded benchmarks, the concurrency analysis determines the absence of concurrency and hence creates additional optimization opportunities compared to variant (B). The benefit is most pronounced for db, where variant (C) executes only 76.3% (last column of Table 3) of the loads of the variant without concurrency information (B). For the benchmarks mtrt, compress, and jess, Table 2 shows an increase in the number of optimized expressions between variants (B) and (C). This effect is not manifested at runtime, i.e., the last column in Table 3 shows no reduction in the number of dynamic loads. There are two reasons for this behavior: First, the execution frequency of optimized expressions can vary greatly, i.e., a few optimized expressions can contribute the majority of the dynamic savings. Second, variant
(C) might optimize other expressions than variant (B), and consequently the overall dynamic effect can be different. Similarly for tsp: Side effect analysis produces only an insignificant reduction, and concurrency analysis even results in a slight increase of dynamic load operations. In this benchmark, the concurrency analysis classifies a frequently accessed variable as conflicting and hence configuration (C) is conservative about the corresponding loads. Configuration (B), in contrast, allows the respective load expressions to be optimized, resulting in an overall dynamic benefit over configuration (C). moldyn is again hampered by the loss of optimization opportunities due to spurious conflict reports. The dynamic situation is worse than the static situation because exactly those fields that are falsely assumed to be conflicting (and hence are not optimized in variant (C)) are most frequently accessed. The speedup of execution time compared to the unoptimized version ranges between 0% and 12% (avg. 8%).
5 Related Work
There are numerous contributions in the field of program analysis and optimization for Java and related programming languages. We discuss only a selection of approaches that are closely related to our work.

Side Effect Analysis Modification side effect analysis for C programs has been done by Landi, Ryder, and Zhang [8]. The analysis computes a precise set of modified abstract storage locations for call sites and indirect store operations. It is difficult to compare this work with ours: On the one hand, Java has a more uniform storage model; on the other hand, polymorphism and loose type information in object-oriented languages necessitate conservatism when approximating side effects at compile time. Clausen [3] developed an interprocedural side effect analysis for Java bytecodes and demonstrates its utility for various optimizations like dead code removal and common subexpression elimination. Side effects are specified by the field variables that a method and its callees might modify. The computational complexity of this analysis is lower than ours; however, the analysis is not sensitive to the heap context of a method call and hence is less precise. In contrast to Clausen's work, our analysis does not distinguish individual fields, but merely specifies updates for specific abstract objects (including field information would, however, be a straightforward extension).

Optimization in the Presence of Precise Exceptions Optimizing Java programs in the presence of exceptions has been studied by Gupta, Choi, and Hind [5]. Since common Java programs contain many PEIs (about 40%), many optimization opportunities are prohibited by the dependencies created through Java's precise exception model. However, [5] observes that the visibility of updates in exception handlers can be limited to the few variables that are live. Hence liveness information helps to significantly reduce the number of dependencies and enables reordering transformations. Reordering of PEIs remains critical, however, as, in case an exception occurs, the order of thrown exceptions must not
be altered. To preserve the correct order, compensation code is introduced that triggers the exception that would have been thrown by the unoptimized code. [5] is orthogonal to our work, because liveness analysis could also be used to enable optimizations that our algorithm neglects due to conservative assumptions. Gupta, Choi, and Hind use the relaxed dependency model to enable loop transformations with aggressive code motion; our focus is on PRE for load elimination, where code motion is only required to handle partial redundancies. Hence the overall limitations of precise exception semantics in our work are not as pronounced as in [5].

Load Elimination Lo et al. [10] have developed an algorithm for eliminating direct and indirect load operations in C programs, promoting memory that is accessed through these operations to registers. The algorithm is based on their previous work on SSAPRE [2], which we also use as a foundation. The authors employ aggressive code motion, relying on speculative execution and hardware support to mask potential exceptions; this is possible because the C programming language does not require precise exception semantics. In addition to eliminating loads, the authors have defined the SSU (static single use) form, which allows stores to be eliminated through the dual of the SSAPRE algorithm. Bodik, Gupta, and Soffa also explore PRE-based load elimination in [1] and consider, besides syntactic information, also value-number and symbolic information to capture equivalent loads. The focus of our study is on object-oriented programs (Section 4). Compared to the C routines that have been investigated in [10, 1], the size of methods is usually smaller and the call interaction among methods is more intense. Hence our work emphasizes the modeling of side effects through procedure interaction and concurrency. PRE has also been used by Hosking et al. [6] to eliminate access path expressions in Java; program transformation is done at the level of bytecodes. Similar to our evaluation, the authors achieve a clear reduction of load operations (both static and dynamic counts). Contrary to our work, the authors do not use whole-program information to determine interprocedural side effects. Moreover, our evaluation clarifies the impact of precise exceptions, interprocedural side effects, and concurrency on the effectiveness of load elimination. Lee and Padua [9] adapt standard reordering transformations to parallel programs and describe caveats and limitations. Similar to our approach, a particular program analysis and IR are used to determine the interaction of threads on shared data (concurrent static single assignment form, CSSA) and to derive restrictions on the optimization. Their algorithms handle programs with structured parallelism (SPMD) and hence allow for a precise analysis and selective optimization of accesses to variables with access conflicts. Our approach addresses general programs with unstructured parallelism but is more conservative in the treatment of loads from conflicting variables.
6 Conclusions
There are three main results: (1) Load elimination is effective even in the most conservative variant without side effect and concurrency analysis (avg. dynamic reduction of loads 23.4%, max. 55.6%). (2) Accurate side effect information
can significantly increase the number of optimized expressions (avg. dynamic reduction of loads 28.6%, max. 66.1%). (3) Information about concurrency can make the optimization independent of the memory model, enables aggressive optimization across synchronization statements, and can improve the number of optimization opportunities compared to an uninformed optimizer that is guided by a (weak) memory model (avg. dynamic reduction of loads 28.5%, max. 70.3%).
7 Acknowledgements
We thank Matteo Corti for his contributions to our compiler infrastructure and the anonymous referees for their useful comments. This research was supported, in part, by the NCCR “Mobile Information and Communication Systems”, a research program of the Swiss National Science Foundation, and by a gift from the Microprocessor Research Lab (MRL) of Intel Corporation.
References
[1] R. Bodik, R. Gupta, and M. Soffa. Load-reuse analysis: Design and evaluation. In Proc. PLDI'99, pages 64–76, 1999.
[2] F. Chow, S. Chan, R. Kennedy, S. Liu, R. Lo, and P. Tu. Partial redundancy elimination in SSA form. ACM TOPLAS, 21(3):627–676, May 1999.
[3] L. R. Clausen. A Java bytecode optimizer using side-effect analysis. In ACM Workshop on Java for Science and Engineering Computation, June 1997.
[4] GNU Software. gcj - The GNU compiler for the Java programming language. http://gcc.gnu.org/java, 2000.
[5] M. Gupta, J. Choi, and M. Hind. Optimizing Java programs in the presence of exceptions. In Proc. ECOOP'00, pages 422–446, June 2000. LNCS 1850.
[6] A. Hosking, N. Nystrom, D. Whitlock, Q. Cutts, and A. Diwan. Partial redundancy elimination for access path expressions. Software Practice and Experience, 31(6):577–600, May 2001.
[7] Java Grande Forum. Multi-threaded benchmark suite. http://www.epcc.ed.ac.uk/javagrande/, 1999.
[8] W. Landi, B. G. Ryder, and S. Zhang. Interprocedural modification side effect analysis with pointer aliasing. ACM SIGPLAN Notices, 28(6):56–67, 1993.
[9] J. Lee, D. Padua, and S. Midkiff. Basic compiler algorithms for parallel programs. In Proc. PPoPP'99, pages 1–12, May 1999.
[10] R. Lo, F. Chow, R. Kennedy, S. Liu, and P. Tu. Register promotion by sparse partial redundancy elimination of loads and stores. In Proc. PLDI'98, pages 26–37, 1998.
[11] J. Manson and B. Pugh. JSR-133: Java Memory Model and Thread Specification. http://www.cs.umd.edu/~pugh/java/memoryModel, 2003.
[12] S. Midkiff and D. Padua. Issues in the optimization of parallel programs. In D. Padua, editor, Proc. ICPP'90, pages 105–113, Aug. 1990.
[13] E. Ruf. Effective synchronization removal for Java. In Proc. PLDI'00, pages 208–218, June 2000.
[14] V. Sundaresan, L. Hendren, C. Razafimahefa, R. Vallée-Rai, P. Lam, E. Gagnon, and C. Godin. Practical virtual method call resolution for Java. In OOPSLA '00, pages 264–280, Oct. 2000.
[15] The Standard Performance Evaluation Corporation. SPEC JVM98 Benchmarks. http://www.spec.org/osg/jvm98, 1996.
[16] C. von Praun and T. Gross. Static conflict analysis for multi-threaded object-oriented programs. In Proc. PLDI'03, pages 115–128, June 2003.
To Inline or Not to Inline? Enhanced Inlining Decisions
Peng Zhao and José Nelson Amaral
Department of Computing Science, University of Alberta, Edmonton, Canada
{pengzhao,amaral}@cs.ualberta.ca
Abstract. The decision to inline a procedure in the Open Research Compiler (ORC) was based on a temperature heuristic that takes into consideration the time spent in a procedure and the size of the procedure. In this paper we describe the trade-offs that have to be worked out to make correct inlining decisions. We introduce two new heuristics to enhance the ORC inlining heuristics: adaptation and cycle_density. Adaptation lets us vary the temperature threshold and thus avoid penalizing small benchmarks. Cycle_density prevents the inlining of procedures that have a high temperature in spite of being called infrequently. Experiments show that while adaptation improves the speedup obtained with inlining across the SPEC2000 suite, cycle_density significantly reduces both the code growth and the compilation time increase caused by inlining. We then characterize the SPEC INT2000 benchmarks according to the inlining potential of their function calls. Our enhancement is released in ORC 2.0.
1 Introduction
Function inlining is a very important optimization technique that replaces a function call with the body of the function [2, 5, 6, 7, 8, 10, 13, 14, 17]. One advantage of inlining is that it eliminates the overhead resulting from function calls. The savings are especially pronounced for applications where only a few call sites are responsible for the bulk of the function invocations, because inlining those call sites significantly reduces the function invocation overhead. Inlining also expands the context of static analysis. This wider-scoped analysis creates opportunities for other optimizations. However, inlining has negative effects. One problem with inlining is the growth of the code, also known as code bloat. As functions grow because of inlining, the compilation time and the memory space consumption may become intolerable, because some of the algorithms used for static analysis have super-linear complexity. Besides the time and memory resource cost, inlining might also have the adverse effect of increasing the execution time of the application. After inlining, register pressure may become a limitation because the caller now contains more code, more variables, and more intermediate values. This additional storage requirement may not fit in the register set available in
the machine. Thus, inlining may increase the number of register spills, resulting in a larger number of load and store instructions executed at runtime. The above discussion of the benefits and drawbacks of inlining leads to intuitive criteria to decide which call sites are good candidates for profitable inlining. The benefits of inlining (elimination of function call overhead and enabling of more optimization opportunities) depend on the execution frequency of the call site. The more frequently a call site is invoked, the more promising its inlining is. On the other hand, the negative effects of inlining relate to the size of the caller and the size of the callee. Inlining large callees results in more serious code bloat and, probably, performance degradation due to additional memory spills or conflict cache misses. Thus, we have two basic guidelines for inlining: first, the call site must be very frequent, and, second, neither the callee nor the caller should be too large. Most of the papers that address inlining take these two factors into consideration in their inlining analysis. In this paper we describe our experience in tuning the inlining heuristics for the Open Research Compiler (ORC). The main contributions of this paper are:
We propose adaptive inlining to enable aggressive inlining for small benchmarks. Usually, small benchmarks are amenable to aggressive inlining, as shown in Section 4. Adaptive inlining becomes conservative for large benchmarks such as GCC because the negative effects of aggressive inlining are often more pronounced in such benchmarks.
We introduce the concept of cycle_density to control the code bloat and compilation time increase.
Our detailed experimental results show the potential of inlining. We investigate the impediments to beneficial inlining and reveal further research opportunities.
The rest of the paper is organized as follows: Section 2 describes the existing inlining analysis in ORC. Section 3 describes our enhancements of the inlining analysis (the adaptive inlining and cycle_density heuristics) and Section 4 presents the performance study. Section 5 reviews related work. Section 6 quantifies impediments to inlining and discusses our ongoing research.
2 Overview of ORC Inlining
In order to control the negative effects of inlining, we should inline selectively. The problem of selecting the most beneficial call sites while satisfying the code bloat constraints can be mapped to the knapsack problem, which has been shown to be NP-complete [11, 15]. Thus, we need heuristics to estimate the gains and the costs of each potential inlining. ORC used profiling information to calculate the temperature of a call site to approximate the potential benefit of inlining an edge e_fg^i (i.e., a call site i in function f which calls function g) in the call graph:1

  temperature(e_fg^i) = cycle_ratio(e_fg^i) / size_ratio(g)

where:

  cycle_ratio(e_fg^i) = (freq(e_fg^i) / freq(g)) × cycle_count(g) / Total_cycle_count

freq(e_fg^i) is the frequency of the edge and freq(g) is the overall execution frequency of function g in the training execution. Total_cycle_count is the estimated total execution time of the application:

  Total_cycle_count = Σ_{f ∈ PU_set} cycle_count(f)

PU_set is the set of all program units (i.e., functions) in the program, and cycle_count(f) is the estimated number of cycles spent on function f:

  cycle_count(f) = Σ_{s ∈ stmts(f)} cycles(s) × freq(s)

where stmts(f) is the set of all statements of function f, cycles(s) is the estimated cycle cost of statement s, and freq(s) is the frequency of execution of statement s in the training run. Furthermore, the overall frequency of execution of the callee g is computed by:

  freq(g) = Σ_{f ∈ callers(g)} Σ_i freq(e_fg^i)

where callers(g) is the set of all functions that contain a call to g. Essentially, cycle_ratio is the contribution of a call graph edge to the execution time of the whole application. A function's cycle count is the execution time spent in that function, including all its invocations. The term (freq(e_fg^i)/freq(g)) × cycle_count(g) is the number of cycles contributed by the callee g when it is invoked through the edge e_fg^i. Thus, cycle_ratio(e_fg^i) is the contribution of the cycles resulting from the call site to the application's total cycle count. The larger cycle_ratio(e_fg^i) is, the more important the call graph edge.

Total_application_size is the estimated size of the application. It is the sum of the estimated sizes of all the functions in the application. size(f), the estimated size of function f, is computed by:

  size(f) = bb_count(f) + stmt_count(f) + call_count(f)

1 Because function f may call g at different call sites, the pair (f, g) does not define a unique call site. Thus, we add the index i to uniquely identify the i-th call site from f to g.
Fig. 1. Temperature Distribution of BZIP2
where bb_count(f) is the number of basic blocks in function f and reflects the complexity of the control flow in the PU, stmt_count(f) is the number of statements in f, excluding non-executable statements such as labels, parameters, pragmas, and so on, and call_count(f) is the number of call sites in f. The ratio size_ratio(g) = size(g) / Total_application_size is the callee's contribution to the whole application's size, and Total_application_size is given by:

  Total_application_size = Σ_{f ∈ PU_set} size(f)
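A small C sketch of the temperature computation as reconstructed above (the struct layout and helper names are invented for illustration; ORC's actual IPO data structures differ):

```c
/* Profile summary for one call-graph edge e_fg^i and its callee g. */
struct EdgeProfile {
    double edge_freq;        /* freq(e_fg^i): times this call site executed   */
    double callee_freq;      /* freq(g): total invocations of the callee      */
    double callee_cycles;    /* cycle_count(g): cycles spent in g (all calls) */
    double callee_size;      /* size(g): estimated size of the callee         */
};

/* cycle_ratio(e) = (freq(e) / freq(g)) * cycle_count(g) / Total_cycle_count */
static double cycle_ratio(const struct EdgeProfile *e, double total_cycles) {
    if (e->callee_freq == 0.0 || total_cycles == 0.0)
        return 0.0;
    return (e->edge_freq / e->callee_freq) * e->callee_cycles / total_cycles;
}

/* temperature(e) = cycle_ratio(e) / (size(g) / Total_application_size) */
double temperature(const struct EdgeProfile *e,
                   double total_cycles, double total_app_size) {
    double size_ratio = e->callee_size / total_app_size;
    if (size_ratio == 0.0)
        return 0.0;
    return cycle_ratio(e, total_cycles) / size_ratio;
}
```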
With careful selection of a threshold, ORC can use temperature to find cycle-heavy calling edges whose callee is small compared to the whole application. For instance, Figure 1 shows the distribution of the temperature for the BZIP2 benchmark.2 The horizontal axis shows the calling frequency and the vertical axis the temperature. Each dot in the graph represents an edge in the call graph. The temperature varies in a wide range: from 0 to 3000. The calling frequency is shown in reverse order: the most frequently called edges appear to the left of the graph and the least frequently called are toward the right. From left to right, the temperature usually decreases as the frequency of the call sites also decreases. It is reasonable that the temperature does not go straight down because, besides the call site frequency, the temperature heuristic also takes the callee's size into consideration. Procedure size negatively influences the temperature. Thus, frequently invoked call sites might be "cold" simply because they are too large. In the original ORC inlining heuristic, an edge (call site) is rejected for inlining if its temperature is less than a specified threshold. The intuition for this

2 To make it easy to read, the two axes of the graphs are drawn in log scale; thus some call sites whose frequencies or temperatures are 0 are not shown in the graph. The same situation exists in Figure 3.
Fig. 2. Frequency accumulation of GCC (Only the top 2750 of all the 19,000 call sites are plotted.)
heuristic is that edges with high temperature are call sites that are invoked frequently and whose callee is small compared to the entire application.
3 Inlining Tuning
We improve the inlining heuristics of ORC in two ways. First, adaptive inlining is employed to make the inlining heuristics more flexible. Second, a new cycle_density heuristic is introduced to restrict the inlining of "hot" but infrequent procedures.
3.1 Adaptive Inlining
The original inlining heuristic in ORC used a fixed temperature threshold (120) for inlining decisions. This threshold was chosen as a trade-off among the compilation time, executable sizes, and performance results of different benchmarks. However, a fixed threshold turns out to be very inflexible for applications with very different characteristics. For example, a high threshold (e.g., 120) is reasonable for large benchmarks because they are more vulnerable to the negative effects of code explosion resulting from inlining. However, the same threshold might not be good for small applications such as MCF, BZIP2, and GZIP. We will use GCC, which is a typical large application, and BZIP2, which is a representative small application, to illustrate this problem. Figure 2 shows the frequency accumulation for the GCC benchmark and Figure 3 shows its temperature distribution. In Figure 2, the X-axis represents the call sites sorted by invocation frequency from high to low, and each point represents the accumulated percentage of function calls covered by the most frequently invoked call sites up to that position. GCC has a very complex function call hierarchy and the function invocations are distributed amongst a large number of call sites: there are more than 19,000 call sites in GCC. In the standard SPEC2000 training execution, there are more
Fig. 3. Temperature Distribution of GCC
than 42,000,000 function invocations, and the most frequent call site is called no more than 800,000 times. Figure 2 shows that the top 10% (about 2,000) most frequently invoked call sites account for more than 95% of all the function calls. Inlining these 2,000 call sites would result in substantial compilation cost and code bloat. In Figure 3, according to the frequency of execution, we should inline the call sites on the left-hand side of the graph and avoid inlining the call sites on the right-hand side. Notice that several call sites on the right-hand side are hot, and thus are inlined by the original heuristic of ORC. For large applications, the improvement from inlining is usually very limited (as we will see in Section 4). On one hand, it is impossible to eliminate most of the function call overhead without wholesale inlining. On the other hand, if we use the same temperature threshold as for small benchmarks, we might end up with the problem of over-inlining, i.e., too many procedures are inlined and the negative effects of inlining are more pronounced than the positive ones. For example, if the temperature threshold is set to 1, more than 1,700 call sites are inlined in GCC. Such aggressive inlining makes the compilation time much longer without any performance improvement, as our experiments show. The high temperature threshold (120) in the original ORC was chosen to avoid over-inlining in large applications. However, this conservative strategy impedes aggressive inlining for small benchmarks, where code bloat is not as prominent. For instance, Figure 1 and Figure 4 show the temperature distribution and frequency accumulation of the BZIP2 benchmark. There are only 239 call sites and about 3,900 lines of C code in BZIP2. This implies that the program is quite small (compared to more than 19,000 call sites and 190,000 lines of C code in the GCC benchmark). Moreover, in BZIP2 the top ten most frequently invoked call sites (about 4.2% of the total number of call sites) account for nearly 97% of all the function calls (Figure 4).
Fig. 4. Frequency accumulation of BZIP2 (Only the top 38 of all the 239 call sites are plotted.)
As we will see in Section 4, aggressive inlining is good for small benchmarks such as BZIP2: inlining the 10 most frequently invoked call sites in BZIP2 eliminates almost all the function calls. However, the inflexible temperature threshold often prevents the inlining of the most frequent call sites (the points in the shadowed area in Figure 1) because their temperatures are lower than the fixed threshold (120). Thus, it is desirable that the temperature threshold for small benchmarks be lowered, because many of the call sites that have performance potential do not reach the conservative temperature threshold used to prevent code bloat in large applications. The contradiction between the threshold distributions of large benchmarks and small ones naturally motivates adaptive inlining: we use a high temperature threshold for large applications because they tend to have many "hot" call sites, and we enable more aggressive inlining for small applications by lowering the temperature threshold for them. Adapting the inlining temperature threshold according to application size is quite simple in ORC. Because the estimated size of each procedure in ORC is available in the Inter-Procedural Optimization (IPO) phase, their sum is the estimated size of the application.3 We classify applications into three categories: large applications, medium applications, and small applications. In the compilation, we use the proper temperature threshold according to the estimated application size. If an application is a large application, its temperature threshold is 120. If it is a medium application, its temperature threshold is 50. Otherwise, the temperature threshold is lowered to 1. The threshold values were obtained by a detailed empirical study of the SPEC2000 benchmarks.4 This division of applications into three categories produces better results than any single threshold applied to all benchmarks.

3 We ignore library functions and dynamic shared objects because we cannot acquire this information at compilation time.
4 This approach is not unlike the application of machine learning to tune compilers used in [16]. However, in our case we chose the parameters through manual tuning.
3.2 Cycle_density
The intuition behind the definition of temperature is that hot procedures should be frequently invoked and not too large. However, as we have seen in Figure 3 and Figure 1, some of the procedures with high temperature are not actually “hot”, i.e. some infrequently invoked call sites also have high temperatures (those points in the top-right part of the graphs). These call sites correspond to functions that
Fig. 5. Adaptive inlining in ORC
Fig. 6. Cycle Density VS. Temperature (BZIP2)
are not called frequently, but contain high trip-count loops that contribute to their high cycle_ratio, which results in a high temperature (see the definition of cycle_ratio in Section 2). We call the functions that are called infrequently but have high temperatures heavy functions. Inlining heavy functions results in little performance improvement. First, very few runtime function calls are eliminated. Second, the path from the caller to a heavy function is not a hot path at all, and thus will not benefit from post-inlining optimization. Third, inlining heavy functions might prevent frequent edges from being inlined once the code growth budget is spent. To handle this problem, we introduce cycle_density to filter out heavy functions:

  cycle_density(g) = cycle_count(g) / freq(g)

where cycle_count(g) is the number of cycles spent on procedure g and freq(g) is the number of times that the procedure is invoked. When a call site fulfills the temperature threshold, the cycle_density of the callee is computed. If the callee has a large cycle count but a small frequency, i.e., its cycle_density is high, it must contain loops with a high trip count. These heavy procedures are not inlined. cycle_density has little impact on performance because it only filters out infrequent call sites. However, cycle_density can significantly reduce the compilation time and the executable sizes, which is important in some application contexts, such as embedded computing. Figure 6 compares the temperature against the cycle_density for each call site in BZIP2. For call sites that are actually "hot", the temperature is indeed high while the cycle_density is low (for BZIP2 it is always less than 0.5). These call sites are the ones that will benefit from inlining. Infrequently invoked call sites fall into two categories according to their temperatures. Infrequently invoked call sites with low temperature are eliminated by the temperature threshold. Infrequently invoked call sites with high temperature always have a very high cycle_density. Thus we can prevent the inlining of these
Fig. 7. Overall performance comparison
sites by choosing a proper cycle_density threshold. In our tuning, we use a fixed cycle_density threshold of 10 that works well for the SPEC2000 benchmarks, as we will see in the next section. We implemented these enhanced inlining decision criteria and contributed them to the ORC-2.0 release. Figure 5 shows the C-style pseudo code for the improved inlining analysis in ORC. Notice that a procedure that has a single call site in the entire application will always be inlined. The reasoning is that the inlining of that single call site will render the callee dead and allow the elimination of the callee; therefore, this inlining will save function invocations without causing code growth.
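The decision rules described in this section can be summarized by the following C sketch (a rough approximation of the pseudo code of Figure 5; the profile structure, helper names, and the application-size boundaries are hypothetical, while the thresholds 120, 50, 1, and 10 and the single-call-site rule come from the text):

```c
#include <stdbool.h>

struct CallSiteInfo {
    double temperature;       /* as defined in Section 2                  */
    double callee_cycles;     /* cycle_count of the callee                */
    double callee_freq;       /* number of invocations of the callee      */
    int    callee_call_sites; /* how many call sites target this callee   */
};

/* Adaptive temperature threshold: 120 for large applications, 50 for
 * medium ones, 1 for small ones.  The size boundaries are not given in
 * the text and are placeholders here. */
static double temperature_threshold(double app_size) {
    const double LARGE_APP  = 250000.0;  /* placeholder boundary */
    const double MEDIUM_APP = 50000.0;   /* placeholder boundary */
    if (app_size >= LARGE_APP)  return 120.0;
    if (app_size >= MEDIUM_APP) return 50.0;
    return 1.0;
}

bool should_inline(const struct CallSiteInfo *cs, double app_size) {
    /* A callee with a single call site is always inlined: the callee
     * becomes dead after inlining, so there is no code growth. */
    if (cs->callee_call_sites == 1)
        return true;

    if (cs->temperature < temperature_threshold(app_size))
        return false;

    /* cycle_density = cycle_count / freq; a fixed threshold of 10
     * rejects "heavy" callees that are hot only because of
     * high trip-count loops. */
    double cycle_density = cs->callee_freq > 0.0
                         ? cs->callee_cycles / cs->callee_freq
                         : 0.0;
    if (cycle_density > 10.0)
        return false;

    return true;
}
```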
4 Results
4.1 Experimental Environment
We investigate the effects of adaptive inlining and of the introduction of the cycle_density heuristic on the performance, compilation time, and final executable size of the SPEC INT2000 benchmarks. We use a cross-compilation method: we run ORC on an IA32 machine (an SMP machine with two Pentium-III 600 MHz processors and 512 MB memory) to generate an IA64 executable, which is run on an Itanium machine (733 MHz Itanium-I processor, 1 GB memory). Thus our performance comparison is conducted on the IA64 system and our compilation time comparison on the IA32 system. All direct measurements are the average result of three independent runs.
4.2 Performance Analysis
Figure 7 shows the performance improvement when different inlining strategies are used. T120 represents a fixed temperature threshold of 120, T1 a fixed temperature threshold of 1, and similarly for the other T labels. In adaptive the
Fig. 8. Final Performance Comparison
temperature threshold varies according to the adaptation heuristic described in Section 3.1. In the adaptive+density compiler, both the adaptation and the cycle_density heuristics are used. Except for PERLBMK, the adaptation heuristic results in a positive speedup for inlining in all benchmarks.5 These results suggest that our adaptive temperature threshold is properly selected. In some cases the difference between a fixed threshold and the threshold chosen with adaptation is very significant (see BZIP2 and TWOLF). Note also that the addition of cycle_density to adaptation does not have much effect on performance. This result is explained by the fact that cycle_density only prevents heavy and infrequently invoked functions from being inlined. We arranged the benchmarks in Figure 7 according to their sizes, with the smaller benchmarks on the left and the larger ones on the right. Comparatively, in general, inlining yields better speedups for small benchmarks than for large benchmarks. This observation can be made by examining the maximum performance improvement over all the strategies. Excluding TWOLF and VORTEX, the maximum performance improvement decreases from left to right (from small benchmarks to large benchmarks). This trend suggests a loose correlation between the application size and the potential performance improvements that can be obtained from inlining. Figure 8 compares the performance improvements of the different strategies more explicitly. Each bar represents the average performance speedup for the 11 benchmarks studied. The baseline is the average performance of the 11 benchmarks compiled without inlining. The two rightmost bars are for adaptive inlining without and with the cycle_density heuristic. The adaptive inlining strategy speeds up the benchmarks by 5.28%, while the best average performance gain of all other
Inlining seems to always have a slight negative effect on the performance of PERLBMK. We are currently investigating this benchmark in more detail.
strategies is 4.45%, obtained with a fixed temperature threshold of 50. Notice also that the performance influence of the cycle_density heuristic is negligible.
4.3 Compilation Time and Executable Size Analysis
In this section, we study the effect of the cycle_density heuristic on the compilation time and on the executable size. Because cycle_density filters out procedures that have high temperatures but are invoked from infrequent call sites, we expected that its use should reduce both the compilation time and the final executable size. Table 1 shows the executable size, measured in bytes, and the compilation time, measured in seconds, for all benchmarks when no inlining is performed. Then, for the compiler with adaptive inlining and the compiler with adaptive inlining plus cycle_density, the table displays the percentage increase in the executable size and in the compilation time. The table also shows, under the "calls" columns, the number of call sites that were inlined in each case. The cycle_density heuristic significantly reduces the code bloat and compilation time problem. On average, adaptive inlining increases the code size by 21.9% and the compilation time by 34.3%. When cycle_density is used to screen out heavy procedures, these numbers are reduced to 14.8% and 24%, respectively. It is also interesting to compare the actual number of inlined call sites: the cycle_density heuristic only eliminates a few call sites. Except for GZIP and PARSER, cycle_density prevents the inlining of no more than 2 call sites in each benchmark. Table 1 also shows some curious results. Although cycle_density prevents the inlining of a single call site for BZIP2, the code growth is reduced from 54.1% to 26.9%. A close examination of BZIP2 reveals that the procedure doReversibleTransformation calls sortIt infrequently (only 22 times in the
Fig. 9. Call Sites Breakdown
standard training run). However, ORC performs bottom-up inlining, in which the edges at the bottom of the call graph are analyzed and inlined first. In the BZIP2 case, sortIt absorbs many functions and becomes very large and heavy before it is analyzed as the callee. When ORC analyzes the call sites that have sortIt as the callee, the estimated number of cycles spent in sortIt is huge, which contributes to its high temperature. However, sortIt is called infrequently and its inlining does not produce measurable performance benefits. cycle_density filters these heavy functions successfully. Finally, cycle_density only eliminates a few call sites because it is not applied to callees that are only called at one call site in the entire application (see Figure 5).
5 Related Work
Ayers et al. [2] and Chang et al. [5, 13] demonstrate impressive performance improvements from aggressive inlining and cloning. Their inlining facility is very much like that of ORC: inlining happens on a high-level intermediate representation, and both use feedback information and apply cross-module analysis. Without feedback information, Allen and Johnson perform inlining at the source level [1]. Besides reporting impressive speedups (12% on average), they also show that inlining might have a negative impact on performance. A series of specialized inlining approaches has been developed to improve the performance of applications that make intensive use of indirect function calls or virtual function calls [3, 4, 9, 10, 12].
6 Ongoing Work
Figure 9 shows how many dynamic function calls we can eliminate using our adaptive inlining technique. We divided the function calls into five different categories:
Inlined: Call sites that can be inlined with our adaptive inlining technique. These call sites have high temperature and low cycle_density.
NotHot: Call sites that are not frequently invoked. It brings no benefit to inline these call sites.
Recursive: ORC does not inline call sites that are in a cycle in the call graph.
Large: Call sites that have high temperature but cannot be inlined because either the callee, the caller, or their combination is too large. GCC, PERLBMK, CRAFTY, and GAP have some large call sites.
Other: Call sites that cannot be inlined due to other special reasons, for example, because the actual parameters at the call site do not match the formal parameters of the callee. As Figure 9 shows, these call sites are very rare.
With our enhanced inlining framework, we were able to eliminate most of the dynamic function calls for small benchmarks such as MCF, BZIP2, and GZIP. However, we eliminated only about 30% of the dynamic function invocations for GCC and 57% for PERLBMK. Examining the graph in Figure 9, to obtain further benefits from inlining we need to address inlining in these large benchmarks. The categories that are the most promising are the recursive function calls and call sites with large callers or callees. This motivates us to investigate the potential of partial inlining and recursive call inlining in the future.
Acknowledgements
We had a lot of help to perform this work. Most of the heuristics analysis and performance tuning was done during Peng Zhao's internship at the Intel China Research Center (ICRC). We thank the ICRC and the ORC team in the Institute of Computing Technology, Chinese Academy of Sciences, for building the ORC research infrastructure. Intel generously donated the Itanium machine used in the experiments. Sincere thanks to Sun C. Chan and Roy Ju for their help and discussion on the ORC inlining tuning. This research is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).
References
[1] Randy Allen and Steve Johnson. Compiling C for vectorization, parallelization, and inline expansion. In ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), pages 241–249, 1988.
[2] Andrew Ayers, Robert Gottlieb, and Richard Schooler. Aggressive inlining. In ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), May 1997.
[3] David F. Bacon and Peter F. Sweeney. Fast static analysis of C++ virtual function calls. In Object-Oriented Programming Systems, Languages and Applications (OOPSLA), pages 324–341, 1996.
[4] Brad Calder and Dirk Grunwald. Reducing indirect function call overhead in C++ programs. In ACM SIGPLAN Symposium on Principles of Programming Languages (POPL), pages 397–408, Portland, Oregon, 1994.
[5] Pohua P. Chang, Scott A. Mahlke, William Y. Chen, and Wen-mei W. Hwu. Profile-guided automatic inline expansion for C programs. Software - Practice and Experience, 22(5):349–369, 1992.
[6] J. W. Davidson and A. M. Holler. A model of subprogram inlining. Computer Science Technical Report TR-89-04, Department of Computer Science, University of Virginia, July 1989.
[7] Jack W. Davidson and Anne M. Holler. A study of a C function inliner. Software - Practice and Experience (SPE), 18(8):775–790, 1989.
[8] Jack W. Davidson and Anne M. Holler. Subprogram inlining: A study of its effects on program execution time. IEEE Transactions on Software Engineering (TSE), 18(2):89–102, 1992.
[9] Jeffrey Dean, David Grove, and Craig Chambers. Optimization of object-oriented programs using static class hierarchy analysis. In European Conference on Object-Oriented Programming (ECOOP), pages 77–101, Arhus, Denmark, August 1995.
[10] David Detlefs and Ole Agesen. Inlining of virtual methods. In 13th European Conference on Object-Oriented Programming (ECOOP), June 1999.
[11] M. R. Garey and D. S. Johnson. Computers and Intractability, A Guide to the Theory of NP-Completeness. W. H. Freeman, 1979.
[12] K. Hazelwood and D. Grove. Adaptive online context-sensitive inlining. In International Symposium on Code Generation and Optimization, pages 253–264, San Francisco, CA, March 2003.
[13] W. W. Hwu and P. P. Chang. Inline function expansion for compiling realistic C programs. In ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), 1989.
[14] Rainer Leupers and Peter Marwedel. Function inlining under code size constraints for embedded processors. In International Conference on Computer-Aided Design (ICCAD), Nov 1999.
[15] Robert W. Scheifler. An analysis of inline substitution for a structured programming language. Communications of the ACM, 20(9):647–654, 1977.
[16] M. Stephenson, S. Amarasinghe, M. Martin, and U. O'Reilly. Meta-optimization: Improving compiler heuristics with machine learning. In ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), pages 77–90, 2003.
[17] Toshio Suganuma, Toshiaki Yasue, and Toshio Nakatani. An empirical study of method inlining for a Java just-in-time compiler. In 2nd Java Virtual Machine Research and Technology Symposium (JVM '02), Aug 2002.
A Preliminary Study on the Vectorization of Multimedia Applications for Multimedia Extensions*
Gang Ren (1), Peng Wu (2), and David Padua (1)
(1) Department of Computer Science, University of Illinois at Urbana-Champaign, 1304 W Springfield Ave, Urbana, IL 61801
{gangren,padua}@cs.uiuc.edu
(2) IBM T.J. Watson Research Center, Yorktown Heights, NY 10598
[email protected]
Abstract. In 1994, the first multimedia extension, MAX-1, was introduced to general-purpose processors by HP. Almost ten years have passed, yet the means of accessing the computing power of multimedia extensions are still mostly limited to assembly programming, intrinsic functions, and the use of system libraries. Because of the similarities between multimedia extensions and vector processors, it is believed that traditional vectorization can be used to compile for multimedia extensions. Can traditional vectorization effectively vectorize multimedia applications for multimedia extensions? If not, what additional techniques are needed? To answer these two questions, we conducted a code study on the Berkeley Multimedia Workload. Through this study, we identified several new challenges that arise in vectorizing for multimedia extensions and propose some solutions to these challenges.
1 Introduction
The past decade has witnessed multimedia processing become one of the most important computing workloads, especially on personal computing systems. To respond to the ever-growing performance demand of multimedia workloads, multimedia extensions (MME) have been added to general-purpose microprocessors to accelerate these workloads [1]. The multimedia extensions of most processors have a simple SIMD architecture based on short, fixed-length vectors, a large register file, and an instruction set targeted at the very specific multimedia application domain. Although the first multimedia extension, MAX-1, was introduced by HP almost ten years ago, today multimedia extensions are usually programmed in assembly languages, with intrinsic functions, or through libraries [12].
* This work is supported by the National Science Foundation under Grant CCR-0121401 ITR. This work is not necessarily representative of the positions or policies of the Government.
A promising alternative is to compile programs written in high-level languages directly to MME instructions. Because of the similarities between multimedia extensions and vector processors, one may naturally consider applying traditional vectorization techniques to multimedia applications. However, satisfactory results are yet to be obtained for the vectorization of realistic multimedia programs on MMEs. Therefore, this paper sets out to answer two questions: 1) Can traditional vectorization techniques effectively vectorize for multimedia extensions? If not, 2) what additional techniques are needed? To answer these questions, we conducted a code study on the Berkeley Multimedia Workload (BMW) benchmark, which is a set of multimedia programs written in C/C++ [2]. During the code study, we identified the differences between compilation for MMEs and traditional vectorization, and we discuss new analyses and transformations to bridge the gap between the two. The rest of the paper is organized as follows: Section 2 gives an overview of MME architectures and the BMW benchmark. In Section 3, we survey current programming models and existing compiler support for multimedia extensions. Section 4 discusses the differences between vectorizing for MMEs and for traditional vector machines, and presents solutions to address some of these differences. Section 5 concludes and outlines future work.
2 Background
2.1 Multimedia Extensions (MME)
Because of the increasing importance of multimedia workloads, most major microprocessor vendors have added multimedia extensions to their microarchitectures. Multimedia extensions that are available today include MMX/SSE/SSE2 from Intel [21][23], VMX/AltiVec from IBM/Motorola [24], 3DNow! from AMD [22], MAX-1/2 from HP [18], VIS from SUN [17], DVI from DEC [19], and MDMX/MIPS-3D from MIPS [20]. Most multimedia extensions are vector units that support operations on short, fixed-length vectors, typically no longer than 16 bytes. The purpose of the SIMD design is to exploit the data parallelism inherent in multimedia processing. Multimedia extensions have evolved rapidly in recent years. Early MMEs often provided very limited instruction sets. For example, the very first multimedia extension, MAX-1, offers only 9 instructions for processing 64-bit vectors of 16-bit integers [18]. Today's MMEs support wider vector lengths, more vector types, and a much more comprehensive instruction set architecture (ISA). VMX, for instance, supports 128-bit vectors of 8-, 16-, and 32-bit integers, or 32-bit single-precision floats, with an ISA of 162 instructions [24]. We project that future multimedia extensions will support a more extensive ISA, especially with better support for floating-point computations. As MMEs become more powerful and more general-purpose, we foresee that many traditional applications (e.g., numerical codes) will be able to leverage the computing power of MMEs.
Fig. 1. Streaming SIMD Extensions 2 [13]
An Example of MME: Intel's SSE2 Announced in 2000 with the Pentium 4 processor, SSE2 evolved from SSE (Streaming SIMD Extensions) by incorporating double-precision floating-point support and more instructions [21]. SSE2 supports 128-bit vectors of almost all data types, including single- and double-precision floating-point numbers and 8-, 16-, and 32-bit integers, as shown in Figure 1. It provides 144 instructions that can be grouped into arithmetic, compare, conversion, logical, shift or shuffle, and data movement instructions. The SSE2 instruction set is non-uniform; that is, not all vector types are equally supported by the ISA. For example, SSE2 provides max and min operations for vectors of signed 16-bit integers and unsigned 8-bit integers, but not for vectors of other integer types.

Multimedia Extensions vs. Vector Processors Despite the general similarity between multimedia extensions and traditional vector processors, there are three key differences between the two architectures. First, a multimedia extension instruction only processes a small number of data elements, limited by its register width, often no longer than 16 bytes. This is in contrast with the very long vectors typical of traditional vector machines. Second, multimedia extensions provide much weaker memory units. For cost reasons, multimedia extensions do not support gather/scatter type memory operations as vector machines usually do. In addition, many multimedia extensions, such as VMX, can access memory only at vector-length-aligned boundaries. Others, like SSE2, allow misaligned memory accesses, but such accesses incur additional overhead. For example, in SSE2, a misaligned load involves two loads and the execution of several micro-ops [23]. Finally, multimedia extension ISAs tend to be less general-purpose, less uniform, and more diversified. Many operations are very specialized and are only supported for specific vector types. A good example is SSE2's max/min operation mentioned before.
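For instance, with Intel's SSE2 intrinsics (a small sketch; compile with a compiler that provides emmintrin.h), max is available as a single instruction for signed 16-bit and unsigned 8-bit elements, but a 32-bit integer max must be synthesized:

```c
#include <emmintrin.h>

/* Directly supported by SSE2: element-wise max of signed 16-bit integers. */
__m128i max_i16(__m128i a, __m128i b) {
    return _mm_max_epi16(a, b);
}

/* Directly supported by SSE2: element-wise max of unsigned 8-bit integers. */
__m128i max_u8(__m128i a, __m128i b) {
    return _mm_max_epu8(a, b);
}

/* Not supported as a single SSE2 instruction: max of signed 32-bit integers
 * has to be synthesized from a compare and a bitwise selection. */
__m128i max_i32(__m128i a, __m128i b) {
    __m128i mask = _mm_cmpgt_epi32(a, b);   /* a > b ? all-ones : zero */
    return _mm_or_si128(_mm_and_si128(mask, a),
                        _mm_andnot_si128(mask, b));
}
```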
2.2 Berkeley Multimedia Workload
Our code study is based on the Berkeley Multimedia Workload (BMW) benchmark [2]. The BMW benchmark is written in C/C++ and evolves from MediaBench [3]. Table 1 lists the BMW benchmark programs used in our study. One characteristic of multimedia applications is that a few core procedures take up most of the execution time. In fact, the hotspot behavior is much more pronounced in multimedia workloads than in integer programs or even floating point programs. This characteristic makes multimedia programs suitable for both hand and compiler optimizations. Table 2 gives the execution time distribution of several representative multimedia, integer, and floating-point workloads on a 2.0 GHz Pentium IV processor. In Table 2, the column “#Proc” gives the number of procedures that take up more than 10% of the total execution time, “%Exec” gives the total percentage of execution time spent on these procedures (excluding the time spent on the procedures they call), and “%Line” gives the percentage of total source lines of these procedures.
3 Overview of MME Compilation
3.1 Programming Multimedia Extensions
For a long time, assembly language programming or embedding inline assembly in C programs has been the dominant means of accessing multimedia extensions. Due to the difficulties in programming, debugging, and maintaining assembly programs, usually only the most important processing kernels are off-loaded to multimedia extensions. As multimedia extensions become more powerful, the need for more efficient programming methods grows in importance. Some computer vendors provide high-level language interfaces to multimedia extensions through intrinsic functions to facilitate access to MMEs. Intrinsic functions embedded in high-level programming languages are translated to MME instructions by native compilers. Gcc v3.1, for instance, supports intrinsic functions for several multimedia extensions, including AltiVec, SSE2, and 3DNow! [4]. Compared to assembly coding, the intrinsic-function programming model achieves better productivity, readability, and portability without sacrificing much performance. Programming in standard high-level languages and relying on the compiler to produce optimized code offers programmers a much easier way to utilize multimedia extensions. However, this approach is only feasible if the compiled code matches the performance of the previous two approaches.
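As an example of the intrinsic-function style (a sketch using SSE2 intrinsics from emmintrin.h; it assumes the array length is a multiple of eight and the pointers are 16-byte aligned), a saturating 16-bit addition can be written as follows:

```c
#include <emmintrin.h>

/* Plain C version the programmer would normally write. */
void sat_add_scalar(short *dst, const short *a, const short *b, int n) {
    for (int i = 0; i < n; i++) {
        int s = a[i] + b[i];
        if (s > 32767)  s = 32767;
        if (s < -32768) s = -32768;
        dst[i] = (short)s;
    }
}

/* Intrinsic version: eight 16-bit saturating adds per instruction. */
void sat_add_sse2(short *dst, const short *a, const short *b, int n) {
    for (int i = 0; i < n; i += 8) {
        __m128i va = _mm_load_si128((const __m128i *)(a + i));
        __m128i vb = _mm_load_si128((const __m128i *)(b + i));
        _mm_store_si128((__m128i *)(dst + i), _mm_adds_epi16(va, vb));
    }
}
```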
3.2 Compilers for Multimedia Extensions
Automatically compiling C programs to multimedia extension instructions has been tried in both academia and industry. Because of the architectural similarities between vector processors and multimedia extensions, traditional vectorization was naturally considered to compile programs for multimedia extensions. Traditional vectorization techniques, such as the Allen-Kennedy algorithm [6], were developed for vector processors mainly at the end of the 1980s and the beginning of the 1990s [5]. They are based on the notion of data dependence [7]; an overview of vectorizing compiler technology is given in [8]. In 1997, Cheong and Lam [9] developed an optimizer for VIS, the SUN multimedia extension, based on a SUIF vectorizer from MERL [10]. The focus of this work was to address alignment issues during code generation. Krall and Lelait [12] also applied traditional vectorization to generate VIS code. Sreraman and Govindarajan [11] developed a vectorizer for the Intel MMX. However, only experiments with small kernels were reported in [9, 10, 11, 12]. Larsen and Amarasinghe [15] proposed the SLP algorithm to perform vectorization within a basic block. Instead of vectorizing across loop iterations, the SLP algorithm packs isomorphic instructions from the same basic block into vector instructions. The vectorizer was implemented in SUIF and targeted AltiVec. Speedups were reported on some kernels and a few programs from SPECfp. In [14], a domain-specific C-like language, SWARC (SIMD-within-a-register), was developed to provide a portable way of programming for MMEs.
Fig. 2. Pointer Access Example from BMW/LAME
To date, only a few commercial compilers that support automatic vectorization for multimedia extensions are available. Crescent Bay Software extends VAST to generate code for the AltiVec extension [30]. The Portland Group offers the PGI Workstation Fortran/C/C++ compilers that support automatic use of the SSE/SSE2 extensions [31]. Codeplay offers the VectorC compiler for all x86 extensions [32]. Also, Intel has extended its own production compiler to vectorize for MMX/SSE/SSE2 [23].
4 Gaps between MME and Traditional Vectorization
Our studies show that despite the success of vectorization for traditional vector machines, vectorization for multimedia extensions still has a long way to go. In this section, we identify the key differences between traditional and MME vectorization. These differences are the natural result of differences in programming style (Section 4.1), in common data types and operations (Section 4.2), in application code patterns (Section 4.3), and in the architectures (Section 4.4).
4.1 Difference in Programming Styles
Use of Pointers vs. Arrays Traditional vectorization is most effective for programs where most cycles are spent in tight loops involving mostly array accesses. Multimedia applications, on the other hand, rely on pointers and pointer arithmetic to access data in computationally intensive loops. Figure 2 gives an example of such pointer accesses extracted from LAME, an MPEG audio encoding application from the BMW benchmark suite. In this example, two pointers point to the input buffer, two point to the output buffers, and all buffers are initially passed into the procedure as parameters. All twelve programs in the BMW benchmark use pointers in their core procedures, and six of them also use pointer arithmetic.
Fig. 3. BMW/LAME after Transforming Pointers to Closed-form Expressions
Fig. 4. Non-closed-form Pointer Access Example from BMW/Mesa
Such pervasive use of pointers and pointer arithmetic has a great impact on vectorization in terms of memory disambiguation and dependence testing. Using Figure 2 as an example, before conducting any dependence analysis, the compiler needs to determine whether the regions accessed through the pointer variables overlap during the iterations. This is not exactly a pointer aliasing problem; the complication comes from the fact that the pointers change their values within the loop. A conventional alias analysis may determine whether two pointers are aliased at a particular iteration, but not whether one pointer at some iteration may be aliased to another pointer at any other iteration. One may observe that the pointers change their values in a regular way: not only does each variable change monotonically (either increasing or decreasing), but its value also changes by a constant per iteration. In fact, these pointers are induction variables and can be represented by closed-form expressions of the iteration counter. In Figure 3, we present the loop after replacing the pointers by their closed-form expressions. To avoid confusion, we use separate names to represent the values of the pointers before entering the loop.
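The fragment below is a hypothetical sketch in the spirit of Figures 2 and 3 (the actual LAME code and its variable names are not reproduced here); it shows a loop whose pointers are induction variables, and the same loop after the pointers are replaced by closed-form expressions of the iteration counter:

    /* Hypothetical sketch (not the LAME code): two pointers walk toward each
     * other; both are induction variables of the loop counter i.            */
    void before(double *in, double *out, int n)
    {
        double *p = in;            /* moves forward  */
        double *q = in + n - 1;    /* moves backward */
        for (int i = 0; i < n / 2; i++) {
            out[i] = *p + *q;
            p++;                   /* p = in + i          (closed form) */
            q--;                   /* q = in + n - 1 - i  (closed form) */
        }
    }

    /* After induction-variable substitution: p and q are gone, and the
     * accessed regions in[0..n/2-1] and in[n/2..n-1] can be compared directly. */
    void after(double *in, double *out, int n)
    {
        for (int i = 0; i < n / 2; i++)
            out[i] = in[i] + in[n - 1 - i];
    }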
Fig. 5. Manually Unrolled Loop from BMW/MPEG2
One must keep in mind that, although represented in array syntax, the transformed references are still pointer-based, which means that accesses through them can still be aliased. In fact, in this example, the two base pointers point to the first and the last element of an array of 1024 double elements, respectively. If the compiler knows that one access moves up to 512 elements forward and the other up to 512 elements backward during the loop, and that the two starting locations are 1024 elements apart, the compiler can prove that the two sets of accesses are non-overlapping. The region access information can be obtained by analyzing subscripts and loop bounds [25]. The task of pointer analysis is then to find out the distance between the memory locations the two pointers point to. Once able to disambiguate the accessed regions, we can apply traditional dependence analysis to resolve the dependences in the transformed loop in Figure 3. There may be loops that contain pointers with no closed-form expressions, as shown in Figure 4. In this case, we can still exploit the monotonicity of the pointers to estimate the access region as well as to conduct dependence analysis [26]. Manually Unrolled Loops Because of the high performance demand of multimedia workloads, many multimedia programs are hand-optimized. One typical example is manually unrolled loops. For example, four of the BMW benchmark programs contain unrolled inner loops. Figure 5 gives an example of a manually unrolled loop extracted from MPEG2, a video encoder application, where an inner loop has been completely unrolled 16 times to accumulate the absolute difference of two input arrays. In this example, it is difficult and expensive to vectorize statements across loop iterations because the accesses are non-contiguous across iterations. The opportunity lies in vectorizing the 16 unrolled statements within the loop body. One solution is to first reroll the loop body and then apply vectorization. Figure 6 shows the code after rerolling the loop in Figure 5. Fortunately, most of the unrolled loops we have seen in the BMW benchmark are quite simple.
Fig. 6. Rerolled Loop from Figure 5
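The sketch below is a hypothetical reconstruction of the pattern Figures 5 and 6 describe, unrolled 4 times here for brevity instead of the 16 times of the original MPEG2 code; the function and variable names are assumptions:

    #include <stdlib.h>   /* abs() */

    /* Manually unrolled (4x here for brevity; the MPEG2 loop is unrolled 16x). */
    int sad_unrolled(const unsigned char *p1, const unsigned char *p2,
                     int rows, int stride)
    {
        int sum = 0;
        for (int i = 0; i < rows; i++) {
            sum += abs(p1[0] - p2[0]);
            sum += abs(p1[1] - p2[1]);
            sum += abs(p1[2] - p2[2]);
            sum += abs(p1[3] - p2[3]);
            p1 += stride;
            p2 += stride;
        }
        return sum;
    }

    /* Rerolled: the inner loop is reconstructed so that a vectorizer can treat
     * the isomorphic statements as one vectorizable inner loop.               */
    int sad_rerolled(const unsigned char *p1, const unsigned char *p2,
                     int rows, int stride)
    {
        int sum = 0;
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < 4; j++)
                sum += abs(p1[j] - p2[j]);
            p1 += stride;
            p2 += stride;
        }
        return sum;
    }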
Another approach is to vectorize unrolled loops directly. The SLP algorithm mentioned in Section 3.2 offers such a solution by identifying isomorphic operations within a basic block and grouping them to form MME SIMD instructions. In graphics codes, a common data structure used to represent colors is a struct with three fields, such as RGB or YUV. These fields are stored in contiguous memory and are often processed together with similar operation sequences. This coding pattern also makes them a perfect candidate for SLP-style vectorization.
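A hypothetical example of this coding pattern (not taken from a BMW program) is shown below: three isomorphic statements on contiguous struct fields that an SLP vectorizer could pack into one SIMD operation.

    /* Hypothetical example: isomorphic operations on contiguous struct fields,
     * a natural candidate for SLP-style packing.                              */
    struct rgb { unsigned char r, g, b; };

    void scale(struct rgb *px, int n, int s)
    {
        for (int i = 0; i < n; i++) {
            px[i].r = (unsigned char)((px[i].r * s) >> 8);
            px[i].g = (unsigned char)((px[i].g * s) >> 8);
            px[i].b = (unsigned char)((px[i].b * s) >> 8);
        }
    }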
4.2 Limitations of the C Language
The mismatch between the C language and the underlying MME architecture also widens the gap between traditional and MME vectorization.
Integral Promotion and Subword Types In ANSI C semantics, all char or short types (i.e., subword data types) are automatically promoted to integer type before any arithmetic operation is performed. This is known as integral promotion [29]. In essence, ANSI C supports the storage of subword data types but not operations on them. This design is a perfect match for general-purpose architectures because general-purpose ISAs often support only integer operations on whole registers. In addition, integer extensions are often combined with load operations and incur no additional overhead. On the other hand, since 8- or 16-bit integers are the natural representation of many types of media data, subword types are widely used in multimedia applications. As a result, it is very common to see an MME ISA that provides better support for subword operations than for 32-bit operations. From the vectorization point of view, when dealing with subword data types, following the integral promotion rule means wasting more than half of the total computation bandwidth and incurring additional overhead due to type extension. Vectorization of subword operations to word operations may even introduce slowdowns if the underlying ISA provides native support for the former but not for the latter. Therefore, the issue is to automatically avoid unnecessary integral promotion without affecting program semantics. We need a backward dataflow analysis that traces the effective width of the result of any operation based on how the result is consumed. The effective width of the result operand is propagated to the source operands according to the operations. We can then safely convert a word operation to a subword operation if all the source operands of the word operation have a subword effective width.

Fig. 7. Saturated Add Implemented in C from BMW/GSM

Fig. 8. Another C Implementation of Saturated Add from BMW/MPEG2

Saturated Operations Saturated arithmetic is widely used in multimedia programs, especially in audio and image processing applications. Since C semantics does not support saturated arithmetic as native operators, programmers must express saturated operations in native C operations. Figure 7 gives one such example. With the help of if-conversion, the code sequence in Figure 7 can be vectorized into a sequence of compare, mask, subtract, and add. However, for an MME that directly supports saturated add, the best performance can only be achieved by recognizing the sequence and transforming it into a saturated add instruction. Idiom recognition, which has been used to identify max and min operations in scientific applications, can be extended to identify these saturated operations [27]. Interestingly, within the BMW benchmark, there are other implementations of saturated add. In the example of Figure 8, the array Clip is generated on the fly and maps a subscript to its corresponding saturated 8-bit value. In this case, it becomes more difficult for a compiler to recognize the pattern.
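The fragment below is a hypothetical sketch of the kind of C-level saturated add that Figure 7 refers to (the actual GSM code is not reproduced here); an idiom recognizer would map the compare-and-clamp sequence to a single saturating add instruction:

    /* Hypothetical sketch of the idiom: a 16-bit saturated add in plain C. */
    short add_sat16(short a, short b)
    {
        int sum = (int)a + (int)b;          /* integral promotion to int     */
        if (sum >  32767) sum =  32767;     /* clamp to the 16-bit range     */
        if (sum < -32768) sum = -32768;
        return (short)sum;                  /* an MME can do all of this in  */
    }                                       /* one saturating add            */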
Fig. 9. Reduction on Bit-wise Operations from BMW/Mesa

4.3 Code Patterns
Bit-Wise Operations Due to the nature of multimedia processing, bit-wise operations are often used in multimedia applications. Figure 9 gives an example of bit-wise operations extracted from Mesa, an OpenGL 3D graphics library. To vectorize this code, the key techniques are if-conversion and recognizing tmpOrMask and tmpAndMask as bit-wise OR and AND reductions, respectively.
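The following is a hypothetical sketch of the pattern Figure 9 describes (the Mesa code and its surrounding context are not reproduced here); the loop carries two reduction variables that must be recognized after if-conversion:

    /* Hypothetical sketch: tmpOrMask and tmpAndMask are reduction variables;
     * after if-conversion the loop is a pair of bit-wise OR/AND reductions.  */
    void clip_masks(const unsigned *flags, int n,
                    unsigned *orMask, unsigned *andMask)
    {
        unsigned tmpOrMask  = 0;
        unsigned tmpAndMask = ~0u;
        for (int i = 0; i < n; i++) {
            if (flags[i]) {
                tmpOrMask  |= flags[i];
                tmpAndMask &= flags[i];
            }
        }
        *orMask  = tmpOrMask;
        *andMask = tmpAndMask;
    }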
Mapping Arrays In some applications from the BMW benchmark, mapping arrays are used in the kernel loops for different purposes. For example, as we described in Section 4.2.2, the application mpeg2 uses a mapping array to obtain the saturated results from the original ones. Another common use of mapping arrays is to replace expensive math functions, such as the pow function. As shown in Figure 10, the array lutab is generated to store the results of the pow function for integers 0 to LUTABSIZE. It can therefore be used in the kernel loop to obtain the result directly without calling pow.
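A hypothetical sketch of the mapping-array pattern Figure 10 refers to is shown below; LUTABSIZE and the exponent are assumptions made for illustration only:

    /* Hypothetical sketch: lutab caches pow() for small integer arguments so
     * the kernel loop becomes a table lookup.  LUTABSIZE and the 4/3 exponent
     * are assumptions, not taken from the LAME source.                       */
    #include <math.h>

    #define LUTABSIZE 10000
    static double lutab[LUTABSIZE];

    void init_lutab(void)
    {
        for (int i = 0; i < LUTABSIZE; i++)
            lutab[i] = pow((double)i, 4.0 / 3.0);
    }

    void quantize(const int *ix, double *xr, int n)
    {
        for (int i = 0; i < n; i++)
            xr[i] = lutab[ix[i]];           /* no call to pow() in the kernel */
    }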
4.4 Limitations of the MME Architecture
In Section 2.1.2, we thoroughly discussed the architectural differences between MMEs and traditional vector architectures. To summarize, a multimedia extension uses short, fixed-length vectors, has a much weaker memory unit, and provides a less uniform and general-purpose ISA. We believe that these architectural differences lead to many differences between MME and traditional vectorization, which oftentimes make the former more difficult. Some of the new challenges are still open questions. The short fixed-length SIMD architecture (typically with a vector length of 16 bytes or less) implies that we can vectorize not only across iterations but also within an iteration or even within a basic block. For the latter, a Superword Level Parallelism (SLP) approach may be more effective [15].

Fig. 10. Use of a Mapping Array from BMW/LAME

Fig. 11. Strided Memory Access from BMW/MPEG2

The weak memory unit imposes a significant challenge on MME vectorization. The lack of native support for gather/scatter-type memory operations makes it very difficult to vectorize code with non-contiguous memory accesses. Figure 11 gives a simple example of such strided accesses. In addition, many multimedia extensions support only vector-aligned loads and stores. Precise alignment information not only benefits vectorization but also simplifies code generation. There are two aspects of alignment optimization for vectorization purposes: obtaining alignment information and improving alignment by program transformation, such as loop unrolling [28]. Because of the pervasive use of pointers in multimedia applications, alignment analysis is in essence alignment analysis of pointers, and it may require whole-program analysis. An alternative is to generate different vectorized versions of the program according to different alignment assumptions. The non-uniform and domain-specific ISA complicates code generation. When we identify an expression that satisfies all the dependence, contiguous-access, and alignment requirements, we may still find that the expression does not have a direct mapping in the underlying ISA. Very likely this is because the operands of the expression are of less-supported data types. For the vectorization to be successful, the vector code generator must be able to map a non-supported vectorizable expression into a sequence of native vector instructions.
In essence, the code generator serves as a layer that hides the difference between the underlying non-uniform, domain-specific ISA and the uniform, general-purpose “ISA” of the high-level programming language.
4.5 Summary
All the features discussed in this section are summarized in Table 4 for the core procedures from the Berkeley Multimedia Workload. In summary, more than 70 of the 82 important loops in these kernel procedures can be vectorized if the issues discussed in this section are handled by the compiler. Meanwhile, at least 7 loops are ineligible for full vectorization because of inherent dependence cycles or the use of function pointers in the loop body.
5 Conclusion
Our study showed that despite the success of vectorizing for traditional vector processors, vectorization for multimedia extensions still has a long way to go. The gap between MME vectorization and traditional vectorization is the natural result of both the architectural differences between multimedia extensions and traditional vector processors and the differences between multimedia applications and numerical applications. In this paper, we conducted an in-depth study of the BMW benchmark suite. Based on the code study, we identified the key differences between MME and traditional vectorization, code patterns that are common in multimedia applications, and new issues that arise in MME vectorization. We also discussed solutions to some of them. This work is only the first step towards unleashing the power of multimedia extensions through vectorization. Our study focuses more on identifying the new requirements and challenges faced by MME vectorization than on providing the actual solutions. Therefore, our immediate future work is to propose new techniques that address the issues we identified and to measure the effectiveness of these techniques on the BMW benchmark. At the same time, we would also like to extend our study to other application domains, such as numerical applications. It would be interesting to see how numerical programs can be vectorized and benefit from multimedia extensions.
References

[1] Diefendorff, K., Dubey, P.: How Multimedia Workloads Will Change Processor Design. IEEE Computer, Vol. 30 (1997) 43–45
[2] Slingerland, N., Smith, A.: Design and characterization of the Berkeley multimedia workload. Multimedia Systems, Vol. 8 (2002) 315–327
[3] Lee, C., Potkonjak, M., Mangione-Smith, W.: MediaBench: A Tool for Evaluating and Synthesizing Multimedia and Communications Systems. Proceedings of Micro ’97 (1997) 330–335
[4] Free Software Foundation: Using the GNU Compiler Collection (GCC). Boston, MA (2002)
[5] Patterson, D., Hennessy, J.: Computer Architecture: A Quantitative Approach. Morgan Kaufmann Publishers, San Mateo, California (1996)
[6] Allen, R., Kennedy, K.: Automatic translation of FORTRAN programs to vector form. ACM Transactions on Programming Languages and Systems (TOPLAS), Vol. 9 (1987) 491–542
[7] Kuck, D., et al.: Measurements of Parallelism in Ordinary FORTRAN Programs. IEEE Computer, Vol. 7 (1974) 37–46
[8] Padua, D., Wolfe, M.: Advanced Compiler Optimizations for Supercomputers. Communications of the ACM, Vol. 29 (1986) 1184–1201
[9] Cheong, G., Lam, M.: An Optimizer for Multimedia Instruction Sets. Second SUIF Compiler Workshop, Stanford (1997)
[10] Konda, V., Lauer, H., Muroi, K., Tanaka, K., Tsubota, H., Xu, E., Wilson, C.: A SIMDizing C Compiler for the Mitsubishi Electric Neuro4 Processor Array. First SUIF Compiler Workshop, Stanford (1996)
[11] Sreraman, N., Govindarajan, R.: A Vectorizing Compiler for Multimedia Extensions. International Journal of Parallel Programming, Vol. 28 (2000) 363–400
[12] Krall, A., Lelait, S.: Compilation Techniques for Multimedia Processors. International Journal of Parallel Programming, Vol. 18 (2000) 347–361
[13] Bik, A., Girkar, M., Grey, P., Tian, X.: Automatic Intra-Register Vectorization for the Intel Architecture. International Journal of Parallel Programming, Vol. 30 (2002) 65–98
[14] Fisher, R., Dietz, H.: Compiling for SIMD within a Register. 1998 Workshop on Languages and Compilers for Parallel Computing, University of North Carolina at Chapel Hill, North Carolina (1998)
[15] Larsen, S., Amarasinghe, S.: Exploiting Superword Level Parallelism with Multimedia Instruction Sets. Proceedings of the SIGPLAN Conference on Programming Language Design and Implementation, Vancouver, B.C. (2000)
[16] Callahan, D., Dongarra, J., Levine, D.: Vectorizing Compilers: A Test Suite and Results. Proceedings of the 1988 ACM/IEEE Conference on Supercomputing, Orlando, Florida (1988)
[17] Kohn, L., Maturana, G., Tremblay, M., Prabhu, A., Zyner, G.: The Visual Instruction Set (VIS) in UltraSPARC. Proc. of Compcon ’95, San Francisco, California (1995)
[18] Lee, R., McMahan, L.: Mapping of Application Software to the Multimedia Instructions of General Purpose Microprocessors. IS&T/SPIE Symp. on Electronic Imaging: Science and Technology, San Jose, California (1997)
[19] Carlson, D., Castelino, R., Mueller, R.: Multimedia Extensions for a 550-MHz RISC Microprocessor. IEEE Journal of Solid-State Circuits, Vol. 32 (1997) 1618–1624
[20] MIPS Technologies, Inc.: MIPS Extension for Digital Media with 3D. White Paper (1997)
[21] Intel Corporation: IA32 Intel Architecture Software Developer’s Manual with Preliminary Intel Pentium 4 Processor Information, Volume 1: Basic Architecture.
[22] Oberman, S., Favor, S., Weber, F.: AMD 3DNow! Technology: Architecture and Implementations. IEEE Micro, Vol. 19 (1999) 37–48
[23] Intel Corporation: Intel Architecture Optimization Reference Manual.
[24] Fuller, S.: Motorola’s AltiVec Technology. White Paper (1998)
[25] Blume, W., Eigenmann, R.: The Range Test: A Dependence Test for Symbolic, Non-linear Expressions. Proceedings of Supercomputing ’94, Washington D.C. (1994) 528–537
[26] Wu, P., Cohen, A., Hoeflinger, J., Padua, D.: Monotonic Evolution: An Alternative to Induction Variable Substitution for Dependence Analysis. Proceedings of the 15th International Conference on Supercomputing, Sorrento, Italy (2001)
[27] Bik, A., Girkar, M., Grey, P., Tian, X.: Automatic Detection of Saturation and Clipping Idioms. Proceedings of the 15th International Workshop on Languages and Compilers for Parallel Computing (2002)
[28] Larsen, S., Witchel, E., Amarasinghe, S.: Increasing and Detecting Memory Address Congruence. Proceedings of the 11th International Conference on Parallel Architectures and Compilation Techniques (PACT), Charlottesville, VA (2002)
[29] International Standard Organization: Programming Languages - C, ISO/IEC 9899 (1999)
[30] Crescent Bay Software Corp.: http://www.psrv.com/vast_altivec.html
[31] The Portland Group Compiler Technology: http://www.pgroup.com/products/
[32] Codeplay Software Limited: http://www.codeplay.com/vectorc/features.html
A Data Cache with Dynamic Mapping*
Paolo D’Alberto, Alexandru Nicolau, and Alexander Veidenbaum
Department of Computer Science, University of California, Irvine
{paolo.nicolau,alexv}@ics.uci.edu
Abstract. Dynamic Mapping is an approach to cope with the loss of performance due to cache interference and to improve the performance predictability of blocked algorithms on modern architectures. An example is matrix multiply: tiling matrix multiply for a data cache of 16KB using the optimal tile size achieves an average data-cache miss rate of 3%, but with peaks of 16% due to interference. Dynamic Mapping is a software-hardware approach in which the mapping in cache is determined at compile time by manipulating the address used by the data cache. The reduction of cache misses translates into a 2-fold speed-up for matrix multiply and FFT by eliminating data-cache miss spikes. Dynamic mapping has the same goal as other proposed approaches, but it determines the cache mapping before issuing a load. It uses the computational power of the processor - instead of the memory controller or the data cache mapping - and it has no effect on the access time of memory and cache. It is an approach that combines several concepts, such as non-standard cache mapping functions and data layout reorganization, potentially without any overhead.
1 Introduction
The increasing gap between memory access latency and CPU clock cycle time makes it extremely hard to feed a processor with useful instructions or data. This problem is exacerbated in multiprocessor systems and distributed systems because of the extremely high demand for data and instructions, and because of communication through relatively slow devices. To avoid CPU stalls due to data and instruction starvation, several approaches have been proposed. We can divide them into three broad and, somewhat arbitrary, classes.1 Memory hierarchies and utilization bounds: in this class, we consider hardware implementations of memory systems, interconnection of memory systems, and performance bounds for families of algorithms on architectures with a memory hierarchy. For examples of hardware implementations, Smith offers a survey on caches [26].
* This work is supported in part by NSF, Contract Number ACI 0204028.
1 Arbitrary because approaches in different classes may share several goals and properties, and incomplete because the literature is so rich that it is difficult to keep a record.
An excellent reference for uniprocessor systems is Hennessy and Patterson [16]. For examples of performance bounds, Hong and Kung propose lower bounds for simple memory architectures with one level of cache [17]; the bounds and the methodology to estimate bounds have been extended to multiple levels of cache later on by Aggarwal et al. [3, 4] and more recently by Vitter et al. [29, 30] and Bilardi et al. [2].
Application reorganization: in this class, we consider projects related to the trade-off between portability and performance across different architectures with multiple levels of cache. For example, we consider in this class optimizations such as code reorganization through tiling [1, 32], selective copying [19], tailoring the application at installation time [31, 22], and memory-hierarchy-independent optimizations [7].
Hardware-software adaptive approaches: in this class, we consider on-the-fly hardware and software adaptations. An example of run-time hardware adaptation is cache-associativity adaptation, and an example of software adaptation is the run-time reorganization of register allocation to reduce register file power dissipation [10, 15, 6]. Another example of algorithm adaptation is the work by Gatlin et al. [13], where data-copy strategies are applied to exploit cache locality.
Our approach is a software-hardware approach, and we propose it to solve the problem of cache interference in blocked algorithms. Blocked algorithms, such as blocked matrix multiply and FFT, achieve good cache performance on average; however, we notice quite erratic cache behavior on individual input sets due to cache interference. We propose an approach that minimizes cache misses due to cache interference by changing the cache mapping for some memory references dynamically. We aim at the optimization of blocked algorithms because of their data locality. A blocked algorithm applies a divide-and-conquer approach: it divides the problem into smaller problems and solves them locally. The advantage is clear when the decomposition of the problem exploits the available parallelism and the memory hierarchy (i.e., the data of every subproblem fit the local memory or the data cache of a processor). The divide-and-conquer approach improves data reuse and reduces communication among processors as well as between cache and memory. Avoiding data interference in cache is the last, and final, step to exploit data locality fully. We illustrate the problem quantitatively with an example. We implement an optimal blocked implementation of matrix multiply for an architecture with a direct-mapped data cache of size 16KB. We opt for matrices stored in row-major format, which is commonly used. We design the algorithm with no pre-fetching - pre-fetching hides the latency but does not reduce cache misses. The blocked algorithm can be the result of tiling to exploit cache locality on a uniprocessor system, or the result of a parallelizing compiler for shared-memory multiprocessor systems (or both). We achieve on a uniprocessor system an average 3% data cache miss rate. The average miss rate is close to the optimal cache performance (roughly 0.5%, [17]). When we observe the cache performance for square matrices whose size is a power of two, the miss rate soars because of data cache interference.
For example, for a square matrix of size 2048 × 2048, the data cache miss rate is 16%. We propose a software-hardware approach that removes data cache miss spikes by changing the cache mapping only when needed. The name of our approach is Dynamic Mapping:
1. We produce a blocked algorithm, either by tiling a loop nest or by a recursive implementation, so that we maximize temporal locality - for one or more cache levels.
2. The blocked algorithm has each elementary block computation (i.e., loop tile) accessing rectangular tiles of data (i.e., tiles of matrices).
3. For each memory reference in the elementary block computation, we determine a physical address and an alternative - and unique - address, the twin address. The physical address is used to map the element in memory; the twin address is used to map the element in cache (how twin addresses are determined and used is explained through an example in Section 2.1). The twin address space does not need to be physically present and, in practice, the twin space is larger than the physical space. A 64-bit register can address 2^64 bytes of memory, of which relatively few bytes are physically available.
4. The physical address is used whenever there is a miss in cache to access the second level of cache or memory; we assume the cache is physically tagged, and that we can modify the processor and the load queue for our purpose (we shall give more details in Section 2.3).
We can apply our approach to a family of blocked algorithms. A blocked algorithm in this family must have the following properties:
Fixed decomposition. The algorithm does not change at runtime.
Convexity. The algorithm solves a problem by dividing it into smaller problems. The problem and each sub-problem work on data with similar shape and properties; that is, if we solve a problem on square matrices, every subproblem works on square tiles - with the property that elements in different tiles cannot use the same cache line.
Total reuse. If a datum is used by two different subproblems, both use the common data consistently: every element in a tile is always mapped with the same twin address to the same location in cache.
Leaf size. The decomposition stops as soon as each sub-problem fits a target cache. We can copy all tiles, without overlapping, into a common space no larger than the cache size.
Our approach analyzes the application at compile time. The analysis is used to determine a data cache mapping that minimizes data cache misses. Such a mapping is introduced into the code as affine functions. The affine functions are computed at run time and their results are used as alternative addresses. These addresses are used to map the data in cache. This has the same effect as reorganizing the data layout in memory at runtime [11],
using the computational power of the processor, with no data movement [19] and no overhead or extra accesses. It is, in practice, a data mapping in cache that uses only partial information about the schedule of the instructions (for an example of cache mapping using a DAG computation, see Bilardi and Preparata [2]; for a heuristic, see D’Alberto [8]). Dynamic mapping differs from the IMPULSE project [18, 24], which introduces a new memory controller and leaves the memory hierarchy untouched. IMPULSE supports a configurable physical address mapping and pre-fetching at the memory controller. Our approach is simpler in the sense that it does not require an operating system layer or any changes to the memory controller. The cache mapping is defined completely by the application, and it can be driven automatically by a compiler. Dynamic mapping does not need any profile-based approach or dynamic computation changes [13]; it improves portability and lets the developer focus on the solution of the original problem. We differ from the work of Johnson et al. [20, 21] because we do not use any dedicated hardware to keep track of memory references; the reference pattern is recognized statically. Dynamic mapping does not change the physical data cache mapping [25, 14, 33] and, potentially, it incurs no increase in data-cache access latency. Dynamic mapping is not a bypass technique: we are able to exploit data locality fully - for a level of cache - when algorithms have data locality; the processor does not need to bypass a cache entirely. Cache bypassing is an efficient technique designed to increase the bandwidth between processor and memory hierarchy. In general, cache bypassing increases traffic on larger caches, which are slower and more energy demanding (see processors such as the R5k), and it does not aim to reduce data cache misses. Furthermore, cache bypassing makes the design more complicated and suitable for a general-purpose, high-performance processor. Dynamic mapping is a 1-1 mapping among spaces; therefore it assures cache mapping consistency for any loads and writes to/from the same memory location. Hardware verification approaches for stale data in registers - used by processors and compilers that allow speculative loads, for example the IA64 microprocessor [9] - can safely be applied. The paper is organized as follows. Our software-hardware approach is presented in Section 2. In Section 2.2, we apply our approach to recursive FFT algorithms; FFT does not satisfy the convexity property of the decomposition, but dynamic mapping can still be applied successfully. In Section 2.3 we propose the potential architectural modifications needed to support our approach. We present the experimental setup and results in Section 3. We conclude in Section 4.
2 Dynamic Mapping
In this section, we investigate a software-hardware approach to minimize data cache interference for perfect loop nests with memory references expressed by affine functions of the loop indexes. This is a common scenario, where other approaches have been presented and powerful analysis techniques can be applied; for example, the analysis techniques proposed by Ghosh et al. [27], by Clauss et al. [5],
by D’Alberto et al. [28], or other approaches based on the Omega Test [23] are worth applying even though they are time expensive. In Section 2.1, we propose our approach in conjunction with tiling for perfect loop nests, and in Section 2.2, we show that the approach can be successfully applied to recursive algorithms such as FFT, which does not satisfy the convexity property - because the algorithm cannot always access contiguous elements.
2.1 Matrix Multiply
In matrix multiply, every memory reference in the loop body is determined by an affine function of the loop indexes, an index function for short (note: scalar references, if any, are array references with a constant index function). An index function determines an address used to access the memory and the cache at a certain loop iteration. The idea is to compute, in parallel to a regular index function, a twin function. The twin function is an affine function of the indexes and it maps a regular address to an alternative address space, or shadow address. The index function is used to access the memory; the twin function is used to access the cache. In practice, a compiler can determine the twin functions as a result of an index function analysis, and it can tailor the data cache mapping for each load in the inner loop.
Example 1. We consider square matrices of size N × N.
The reference to the output matrix is constant in the inner loop; its index function, like those of the two input matrices, is an affine function of the loop indexes. The matrix multiply loop nest can be reorganized to exploit temporal locality.
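Because the inline formulas of Example 1 are garbled in this copy, the sketch below restates the loop nest and its index functions under the usual assumptions of an i-j-k loop order and row-major N × N matrices:

    /* Sketch of Example 1 (assumptions: i-j-k loop order, row-major storage).
     * The reference to C is invariant in the inner loop, and every access has
     * an affine index function of the loop indexes.                           */
    void mm(double *A, double *B, double *C, int N)
    {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                for (int k = 0; k < N; k++)
                    /* index functions: C -> i*N + j (constant in k),
                     *                  A -> i*N + k,  B -> k*N + j            */
                    C[i * N + j] += A[i * N + k] * B[k * N + j];
    }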
Example 2. Let us tile the loop nest by square tiles; we assume that N is a multiple of the tile size, that the matrices are aligned to the line size, and that the tile size is a multiple of the line size L.
With a properly chosen tile size, we achieve the expected number of memory accesses (cache misses); the actual number of cache misses may be larger, due to interference.
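A sketch of the tiling described in Example 2 follows; the tile size b is a placeholder assumed to divide N and to be a multiple of the cache line size:

    /* Sketch of Example 2: square b x b tiles, b an assumed placeholder value. */
    void mm_tiled(double *A, double *B, double *C, int N)
    {
        const int b = 64;   /* tile size: an assumption; must divide N */
        for (int ii = 0; ii < N; ii += b)
          for (int jj = 0; jj < N; jj += b)
            for (int kk = 0; kk < N; kk += b)
              /* elementary block computation on three b x b tiles */
              for (int i = ii; i < ii + b; i++)
                for (int j = jj; j < jj + b; j++)
                  for (int k = kk; k < kk + b; k++)
                    C[i * N + j] += A[i * N + k] * B[k * N + j];
    }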
When we tile the loop as in Example 2, the index functions can be described concisely by three vectors (projections onto a 1-dimensional space), one for each matrix; applied to an iteration of the loop nest, they yield the addresses accessed for A, B, and C. When the matrices are stored contiguously one after the other, there is cross interference between references to different tiles, and there is self interference between rows of the same tile. When we tile the loop nest, we tile each matrix as well: every tile is a square, like the matrix itself. When the matrices are aligned to the cache line, all tiles are aligned to the cache line and we can change the data-cache mapping for all memory references safely. An element in the 6-dimensional iteration space is associated with a twin element in a 6-dimensional space. The difference is that we enforce the projection of a twin tile onto the 1-dimensional address space to be convex: the twin tile is stored contiguously in memory - but the tile with which it is associated need not be. Any two twin tiles are spaced at intervals of S elements, so different twin tiles are mapped into the same cache portion. The twin functions for A, B and C are built from the index functions by rescaling their components. Consider the construction for matrix A: the components that move the computation from tile to tile (along a column of tiles and along a row of tiles) are rescaled so that consecutive twin tiles are S elements apart, and the coefficients that move the computation within a tile (along a column and along a row of the tile) are rescaled so that the twin tile is stored contiguously. The original unit-stride coefficient is left unchanged, so a line in memory is a line in cache.
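Since the paper's exact twin-function formulas are garbled in this copy, the following is only a minimal sketch of one plausible twin-address computation for matrix A in the tiled loop above, assuming a tile side b, a spacing of S elements between consecutive twin tiles, and a contiguous layout of each twin tile:

    /* Minimal sketch of one plausible twin-address scheme for matrix A (the
     * paper's exact formulas are not reproduced here).  Assumptions: tile side
     * b, twin tiles of the same matrix spaced S elements apart so that they
     * alias to the same cache portion, and a dense in-tile layout.            */
    long twin_A(long twin_base,   /* start of A's slot in the shadow space     */
                int ii, int kk,   /* tile origin (multiples of b)              */
                int i,  int k,    /* element coordinates within the matrix     */
                int N, int b, long S)
    {
        long tile_id = (long)(ii / b) * (N / b) + (kk / b); /* which tile of A */
        long offset  = (long)(i - ii) * b + (k - kk);       /* dense offset    */
                                                            /* inside the tile */
        return twin_base + tile_id * S + offset;            /* tiles spaced S  */
    }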
2.2 Fast Fourier Transform
A Fourier transform can be represented as the product of a matrix by a vector: y = F_N x. Each component of y is the sum y_k = sum_j w_N^(kj) x_j, where w_N is called the twiddle factor. When N is the product of two factors N1 and N2 (i.e., N = N1 N2), we can apply the Cooley-Tukey algorithm. The input vector x can be seen as a matrix X stored in row-major order. The output vector y can be seen as a matrix Y, but stored in column-major order. We can write the algorithm as follows:
1. for every column of X we compute its Fourier transform - this is a computation on the columns of matrix X;
2. we distribute the twiddle factors over the intermediate results;
3. for every row of the intermediate matrix we compute its Fourier transform;
4. the output Y is read from the result.
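A sketch of this decomposition is shown below; it is not the paper's implementation, and fft_small() and twiddle() are hypothetical helpers (e.g., FFTW codelets and a twiddle-factor routine):

    /* Sketch of the decomposition listed above (not the paper's code).
     * fft_small() computes a small FFT over len elements with a given stride;
     * twiddle() returns w_N^(r*c).  X is n1 x n2, stored row-major.           */
    #include <complex.h>

    extern void fft_small(double complex *v, int len, int stride);
    extern double complex twiddle(int n, int r, int c);

    void fft_cooley_tukey(double complex *X, int n1, int n2)
    {
        for (int c = 0; c < n2; c++)            /* 1. FFT on every column      */
            fft_small(X + c, n1, n2);
        for (int r = 0; r < n1; r++)            /* 2. distribute twiddles      */
            for (int c = 0; c < n2; c++)
                X[r * n2 + c] *= twiddle(n1 * n2, r, c);
        for (int r = 0; r < n1; r++)            /* 3. FFT on every row         */
            fft_small(X + r * n2, n2, 1);
        /* 4. the output is read in transposed (column-major) order            */
    }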
Algorithms implementing FFTs on power-of-two numbers of points are well known, and attractive, because the designer can reduce the number of computations (twiddle factor reductions). However, they are inefficient when the cache size is also a power of two, due to their intrinsic self interference. Implementations such as FFTW [12] may exploit temporal locality by copying the input data to a temporary work space. Nonetheless, the spatial locality across column computations is not exploited fully, because two elements in the same column of X interfere in cache, preventing spatial reuse. Even though cache interference is responsible for relatively few misses, it affects every level of the memory hierarchy - memory pages too. Any improvement in the number of misses at the first level of cache, even a small one, is very beneficial for a multilevel cache system. Sub-problems of the FFT access data in non-convex sets, therefore our algorithm cannot be applied as is. We follow a very simple implementation of a recursive algorithm. When we execute the algorithm on the columns of X, the original input matrix X is associated with a twin image whose size accounts for the number of matrix elements that can be stored in a cache line. Elements in the last and first columns of X may share the same cache line; however, if the input vector does not fit the cache, the first and the last columns of X are not accessed at the same time, and cache coherence is unaffected.
2.3 Architecture
To describe the effects of the dynamic mapping approach on the architecture design, we use the block structure of the MIPS R10K microprocessor, Figure 1. We use MIPS, but these modifications can be applied to other processors, such as SPARC64 processors, as well. SPARC64 has a large integer register file (RF), with 32 registers directly addressable but a total of 56 for register renaming; it is a true 64-bit architecture, and it has only one level of cache. The twin function is computed in parallel with the regular index function. The computations share the same resources, integer units and register file. The twin function result is stored in the register file, but it is not really an address: the address calculation unit (ACU) and TLB do not process the twin function result. The twin functions need not be valid memory addresses at all. The architecture is as follows. The instruction set is augmented with a new load instruction with three operands, or registers: the destination register, the index function register, and the twin function register. The load instruction becomes like any other instruction, with two source operands and one destination.
Fig. 1. Proposed architecture, designed based on MIPS R12K
To improve performance, a load can be issued as soon as the twin index function is computed, to speed up the access in cache. The twin function can be used directly by the cache without further manipulation. The regular index function is really necessary only in case of a miss, when the ACU and TLB must process it. One possible implementation may therefore decide to compute the regular index function only when a miss happens. We can see there is potential to change the cache mapping without increasing the cache hit latency. The design remains simple: the functional units communicate with the register file only, and the register file with the first level of cache only. The design can be applied even when a cache is multi-ported, that is, when multiple loads and writes can be issued to the cache. The proposed approach increases integer register pressure and issues more integer instructions because of the index computations. The compiler may introduce register spills on the stack but, in general, they are not misses in cache (having temporal and spatial locality). The index function and twin index function are independent, and we can issue them in parallel. If the number of pipelined ALUs does not suffice for the available parallelism, the index function computation can lead to a slowdown. The slowdown factor is independent of the number of index functions and is no larger than 2. (The slowdown factor and the total work can be reduced by issuing the index function only in case of a miss; this increases only the cache miss latency.)
To avoid consistency problems, the data cache is virtually indexed and physically tagged. The new type of load does not affect how many load instructions can be issued or executed per cycle. We assume that an additional RF output port has a negligible effect on the register file access time. (Otherwise, the twin functions can be processed in parallel with the index functions and a dedicated RF can be used.)
3 Experimental Results
Dynamic Mapping is applied to two applications, Matrix Multiply (Section 3.1, Example 2) and FFT (Section 3.2). We show the potential performance on 5 different systems. Indeed, the algorithms are performed using only the twin function (no regular index function is computed). Here we describe the 5 architectures. The Sun Ultra 5 is based on a Sparc-Ultra-IIi processor at 333MHz and the Sun Enterprise 250 is based on two Sparc-Ultra-IIe processors at 300MHz; both implement the 32-bit V8+ instruction set. Their memory hierarchy is: L1 composed of I1=16KB 2-way 32B line and D1=16KB 1-way 32B line with 16B sub-block, and L2 unified off-chip U2=1MB 1-way. We have compiled our code with gcc/2.95.3 (g77) and cc (Sun Workshop 6 update 1 FORTRAN 95 6.1 2000/09/11); we present the best performance. The Sun Blade 100 is based on a Sparc-Ultra-IIe processor at 500MHz. The Blade implements the 64-bit V9 instruction set. The memory hierarchy is: L1 composed of I1=16KB 2-way 32B line composed of two blocks of size 16B and D1=16KB 1-way 32B line with 16B sub-block, and L2 unified U2=256KB 4-way on-chip. We have compiled our code with gcc version 2.95.3 20010315 (release). The Silicon Graphics O2 is based on a MIPS R12K IP32 at 333MHz; the O2 implements the 64-bit MIPS IV instruction set. The memory hierarchy is: L1 composed of I1=32KB 2-way 64B line and D1=32KB 2-way 32B line, and L2 unified U2=1MB 2-way. We have used the f90/cc MIPSpro Compilers Version 7.30. The Fujitsu HAL Station 300 is based on a SPARC64 processor at 100MHz; HAL implements a V9 instruction set. Its memory hierarchy is composed of one level of cache, with an instruction cache of 128KB 4-way associative and an identical data cache. We have compiled our code with the native HaL SPARC C/C++/Fortran Compiler Version 1.0 Feb.
3.1 Matrix Multiply
We implemented the matrix multiply as described in Example 2 in Section 2.1. The application has spatial and temporal locality. Our goal is to show the cache improvements due to dynamic mapping, Figure 2. We measure the cache performance of matrix multiplication with either the standard index function or dynamic mapping - not both. The two algorithms have the same number of operations and memory accesses, and most probably the same instruction schedule. The only difference is the access pattern.
Fig. 2. Matrix Multiply on Blade 100, miss rate comparison
Dynamic mapping allows a stable miss ratio across different input sizes (2%, which is close to the expected rate), roughly removing cache miss spikes altogether. We always improve cache performance. We see a cache miss reduction of 30% on average and up to an 8-fold cache miss reduction for power-of-two matrices. In this section, we do not take into account the effects of register allocation, which can improve cache and overall performance. An optimal register allocation, for both tiled and recursive algorithms, can reduce the number of memory accesses and exploit better data reuse at the register level. In general, register allocation is not designed to reduce cache interference and it is machine dependent - different libraries use different register allocations; see [7, 31] for examples of register allocation for matrix multiply kernels.
3.2 FFT

We implemented our variation of the Cooley-Tukey algorithm as follows. The Cooley-Tukey algorithm finds a factorization of the input size into two factors. When two factors are found, the problem is decomposed into two subproblems. We can represent this divide-and-conquer algorithm by a binary tree: an internal node represents a problem of a given size and its children the two factors - and the two subproblems. The leaves of the binary tree are the codelets from FFTW [12] - codelets are small FFTs of size between 2 and 16 written as a sequence of straight-line code. The height of the binary tree depends on the factorization: the height is 1 when the input size is prime, and in that case a quadratic number of operations must be performed.
When one of the two factors is always between 2 and 32, the tree height - and hence the number of operations - is bounded accordingly. Our algorithm aims at a balanced decomposition into two factors whose difference is as small as possible; this bounds both the height of the binary tree and the number of operations to be executed. We show the performance of our FFT with and without dynamic mapping in Figures 3 and 4. The bars represent normalized MFLOPS: given an input of size n, we show a nominal operation count divided by timeOneFFT, where timeOneFFT is the average running time for one FFT. Since our implementation does not have the same number of floating-point instructions as FFTW, we opted to measure the execution time and determine a normalized number of FLOPS - even when n is prime. Every complex point is composed of two floating-point numbers of 4 bytes each (we want to have spatial locality for a 16B cache line size). We choose to show performance in MFLOPS instead of data-cache miss rate for two reasons. First, the reduction of cache misses using dynamic mapping is small, but the performance improvement is extremely significant. Second, the implementation is the same for all architectures, so we may compare performance across architectures. We collected experimental results for four algorithms: we identify with "our FFT" our implementation of the Cooley-Tukey algorithm; "Dynamic Mapping" is our FFT when dynamic mapping is applied; we identify with "Upper Bound" a recursive algorithm for FFT that accesses the cache with an ideal (but invalid) pattern. When present, FFTW is the Fastest Fourier Transform in the West [12]. FFTW is used as a reference to understand the relation between the performance of our implementation, the potential performance of dynamic mapping, and the performance of a well-known FFT. Dynamic mapping could be applied to FFTW as well, and the improvements would be proportional to the ones shown in this section. In Figure 3, we show that our implementation is efficient for large power-of-two inputs, but also that its performance is not steady - as is the case for FFTW. The performance is a function of the input size; indeed, the performance is a function of the input decomposition. Large leaves allow fewer computations, and therefore better performance. Since the decomposition does not assure that all leaves have the best size, this behavior is expected. We notice that for small inputs the performance of our implementation may be faster than expected (or slower). This is due to several reasons; one of them is an accuracy problem: the execution of other processes affects the execution-time measurement under investigation (e.g., cache, register file, ALU and FPU pipeline trashing). For fairly large to very large problems, where the memory hierarchy is utilized intensely, dynamic mapping lies between its upper bound and our implementation of FFT, as expected. The characterization of the worst and best performance is twofold: first, it shows the performance we can achieve, the performance we can achieve with dynamic mapping, and the ideal performance if cache locality is fully exploited; second, when there is no reference for execution time, we are still able to obtain an estimation of execution time and potential performance.
Fig. 3. FFT on Ultra 5 (330MHz), Enterprise 250 (300MHz) and Blade 100 (500MHz). Normalized performance, where timeOneFFT is the average running time for one FFT. The bars, from left to right: Upper Bound, Dynamic Mapping, our FFT, and FFTW
Fig. 4. FFT on Silicon Graphics O2 and Fujitsu HAL 300. Normalized performance, where timeOneFFT is the average running time for one FFT. The higher the bar, the better the performance. The bars, from left to right: Upper Bound, Dynamic Mapping, our FFT
FFT is an excellent example of an application with high self interference. Even caches with large associativity cannot cope with the loss of performance due to interference for large numbers of points. In Figure 4, we present performance for two systems with associative caches: the Silicon Graphics O2 system and the Fujitsu HAL 300.
4 Conclusions
We have presented a software-hardware approach to cope with the loss of performance due to cache interference. Dynamic Mapping exploits spatial and temporal locality, tailoring the cache mapping through a manipulation of the memory addresses; the cache mapping itself does not change. In general, the approach is effective when interference is present; otherwise some slowdown may appear. We present two test cases: matrix multiply and FFT. For matrix multiply we achieve a 30% data cache miss reduction on average and an 8-fold reduction for matrices whose size is a power of two. For FFT the data cache miss reductions are significant for large problems with high interference. We achieve a 4-fold cache miss reduction, and such a reduction translates into an equal performance speedup.
References

[1] U. Banerjee. Loop Transformations for Restructuring Compilers: The Foundations. Kluwer Academic Publishers, 1993.
[2] Gianfranco Bilardi and Franco P. Preparata. Processor - time tradeoffs under bounded-speed message propagation: Part II, lower bounds. Theory of Computing Systems, 32(5):531–559, 1999.
[3] A. Aggarwal, A. K. Chandra, and M. Snir. Hierarchical memory with block transfer. In 28th Annual Symposium on Foundations of Computer Science, pages 204–216, Los Angeles, California, October 1987.
[4] A. Aggarwal, B. Alpern, A. K. Chandra, and M. Snir. A model for hierarchical memory. In Proceedings of the 19th Annual ACM Symposium on the Theory of Computing, pages 305–314, New York, 1987.
[5] P. Clauss and B. Meister. Automatic memory layout transformation to optimize spatial locality in parameterized loop nests. In 4th Annual Workshop on Interaction between Compilers and Computer Architectures, INTERACT-4, Toulouse, France, January 2000.
[6] M. L. C. Cabeza, M. I. G. Clemente, and M. L. Rubio. CacheSim: a cache simulator for teaching memory hierarchy behavior. In Proceedings of the 4th Annual SIGCSE/SIGCUE Conference on Innovation and Technology in Computer Science Education, page 181, 1999.
[7] G. Bilardi, P. D’Alberto, and A. Nicolau. Fractal matrix multiplication: a case study on portability of cache performance. In Workshop on Algorithm Engineering 2001, Aarhus, Denmark, 2001.
[8] Paolo D’Alberto. Performance evaluation of data locality exploitation. Technical report.
[9] Stephane Eranian David. The making of Linux/ia64. Technical report.
[10] F. Catthoor, N. D. Dutt, and C. E. Kozyrakis. How to solve the current memory access and data transfer bottlenecks: at the processor architecture or at the compiler level? In DATE, March 2000.
[11] P. R. Panda, H. Nakamura, N. D. Dutt, and A. Nicolau. Augmenting loop tiling with data alignment for improved cache performance. IEEE Transactions on Computers, 48(2):142–9, Feb 1999.
[12] M. Frigo and S. G. Johnson. The fastest Fourier transform in the West. Technical Report MIT-LCS-TR-728, Massachusetts Institute of Technology, Sep 1997.
[13] Kang Su Gatlin and Larry Carter. Memory hierarchy considerations for fast transpose and bit-reversals. In HPCA, pages 33–, 1999.
[14] Antonio González, Mateo Valero, Nigel Topham, and Joan M. Parcerisa. Eliminating cache conflict misses through xor-based placement functions. In Proceedings of the 11th International Conference on Supercomputing, pages 76–83. ACM Press, 1997.
[15] R. Gupta. Architectural adaptation in AMRM machines. In Proceedings of the IEEE Computer Society Workshop on VLSI 2000, pages 75–79, Los Alamitos, CA, USA, 2000.
[16] J. L. Hennessy and D. A. Patterson. Computer Architecture: A Quantitative Approach. Morgan Kaufmann, 2nd edition, 1996.
[17] J. Hong and H. T. Kung. I/O complexity: the red-blue pebble game. In Proceedings of the 13th Annual ACM Symposium on Theory of Computing, pages 326–333, Oct 1981.
[18] L. Zhang, Z. Fang, M. Parker, B. K. Mathew, L. Schaelicke, J. B. Carter, W. C. Hsieh, and S. A. McKee. The Impulse memory controller. IEEE Transactions on Computers, Special Issue on Advances in High Performance Memory Systems, pages 1117–1132, November 2001.
[19] E. D. Granston, W. Jalby, and O. Temam. To copy or not to copy: a compile-time technique for assessing when data copying should be used to eliminate cache conflicts. In Proceedings of Supercomputing, pages 410–419, Nov 1993.
[20] Teresa L. Johnson and Wen-mei Hwu. Run-time adaptive cache hierarchy management via reference analysis. In Proceedings of the 24th Annual International Symposium on Computer Architecture, 1997.
[21] T. L. Johnson and W. W. Hwu. Run-time adaptive cache hierarchy management via reference analysis. In 24th Annual International Symposium on Computer Architecture (ISCA ’97), pages 315–326, May 1997.
[22] M. Frigo, C. E. Leiserson, H. Prokop, and S. Ramachandran. Cache oblivious algorithms. In Proceedings of the 40th Annual Symposium on Foundations of Computer Science, 1999.
[23] W. Pugh. Counting solutions to Presburger formulas: how and why. In SIGPLAN Programming Language Issues in Software Systems, pages 94–6, Orlando, Florida, USA, 1994.
[24] J. B. Carter, W. C. Hsieh, L. B. Stoller, M. R. Swanson, L. Zhang, E. L. Brunvand, A. Davis, C. C. Kuo, R. Kuramkote, M. A. Parker, L. Schaelicke, and T. Tateyama. Impulse: building a smarter memory controller. In Proceedings of the Fifth International Symposium on High Performance Computer Architecture (HPCA-5), pages 70–79, January 1999.
[25] Andre Seznec. A case for two-way skewed-associative caches. In Proc. 20th Annual Symposium on Computer Architecture, pages 169–178, June 1993.
[26] Alan Jay Smith. Cache Memories. ACM Computing Surveys, 14(3):473–530, September 1982.
[27] Somnath Ghosh, Margaret Martonosi, and Sharad Malik. Cache miss equations: a compiler framework for analyzing and tuning memory behavior. ACM Transactions on Programming Languages and Systems, 21(4):703–746, July 1999.
[28] P. D’Alberto, A. Nicolau, A. Veidenbaum, and R. Gupta. Static analysis of parameterized loop nests for energy efficient use of data caches. In Proceedings of the Workshop on Compilers and Operating Systems for Low Power 2001 (COLP’01), September 2001.
[29] Jeffrey Scott Vitter and Elizabeth A. M. Shriver. Algorithms for parallel memory I: two-level memories. Algorithmica, 12(2/3):110–147, August and September 1994.
[30] Jeffrey Scott Vitter and Elizabeth A. M. Shriver. Algorithms for parallel memory II: hierarchical multilevel memories. Algorithmica, 12(2/3):148–169, August and September 1994.
[31] R. Clint Whaley and Jack J. Dongarra. Automatically tuned linear algebra software. Technical Report UT-CS-97-366, 1997.
[32] M. Wolf and M. Lam. A data locality optimizing algorithm. In Proceedings of the ACM SIGPLAN ’91 Conference on Programming Language Design and Implementation, Toronto, Ontario, Canada, June 1991.
[33] Zhao Zhang and Xiaodong Zhang. Cache-optimal methods for bit-reversals. In Proceedings of the 1999 ACM/IEEE Conference on Supercomputing (CDROM), page 26. ACM Press, 1999.
Compiler-Based Code Partitioning for Intelligent Embedded Disk Processing
Guilin Chen, Guangyu Chen, M. Kandemir, and A. Nadgir
The Pennsylvania State University, University Park, PA 16802, USA
{guilchen,gchen,kandemir,nadgir}@cse.psu.edu
Abstract. Recent trends indicate that system intelligence is moving from main computational units to peripherals. In particular, several studies show the feasibility of building an intelligent disk architecture by executing some parts of the application code on an embedded processor attached to the disk system. This paper focuses on such an architecture and addresses the problem of which parts of the application code should be executed on the embedded processor attached to the disk system. Our focus is on image and video processing applications where large data sets (mostly arrays) need to be processed. To decide the work division between the disk system and the host system, we use an optimizing compiler to identify computations that exhibit a filtering characteristic; i.e., their output data sets are much smaller than their input data sets. By performing such computations on the disk, we substantially reduce the data volume that needs to be communicated from the disk to the host system. Our experimental results show significant improvements in the execution cycles of six applications.
1
Introduction
Recent years have witnessed several efforts towards making the disk storage system more intelligent by exploiting available computing power within the disk subsystem. A common characteristic of these proposals (e.g., active disks [22, 1], intelligent disks [11], smart disks [15]) is to use computing power at the disk (provided by an embedded processor attached to the disk) to perform some filtering type of computations on the storage device itself. For example, [18] demonstrates how several database operations can be performed by the embedded processor attached to the storage device. A similar project at IBM [9, 8] attempts to embed processing power near the data (e.g., on the disk adapter) to handle general purpose processing offloaded from the host system. Thus, an intelligent disk-based computation can significantly reduce the demand on the communication bandwidth between the storage device and the rest of the system. Uysal et al. propose an active disk architecture that allows more powerful on-disk processing and large on-disk memory [22]. To address the software design and implementation for active disks, Acharya et al. describe a stream-based programming model, whereby host-resident code interacts with disk-resident code using streams [1]. The active disk concept proposed by Riedel et al. [18] helps us
investigate the behavior of scan-based algorithms for databases, nearest neighbor search, frequent sets, and edge detection of images on such architectures. The authors use these applications to show performance improvements brought by active disks over conventional architectures. Sivathanu et al. [21] propose the concept of a semantically-smart disk system, wherein the disk system obtains from the file system information about its on-disk data structures and policies. It then exploits this information by transparently improving performance. Most of these prior studies focus on the application programming model and operating system (OS) support for intelligent disk architectures. While this support is critical for the successful deployment of such architectures, for a large application, it would be very difficult for an average programmer to decide what to execute on the embedded processor on the disk and what to execute on the host system. In this paper, we address this important problem and propose a compiler-based strategy that automatically divides an application between the disk system (the embedded processor) and the host system. Our focus is on image and video applications where large data sets (arrays) need to be processed. To decide the work division between the disk system and the host system, we identify computations that exhibit a filtering characteristic; i.e., their output data sets are much smaller than their input data sets. By performing such computations on the disk, we reduce the data volume that needs to be communicated from the disk to the host system significantly. In fact, our experiments with several applications reveal communication volume reductions around 50%. Obviously, such a reduction in communication volume between the disk and the host system helps reduce power consumption and enhances overall system performance. It should be emphasized that such intelligent disk architectures are actually being built by disk drive and chip manufacturers. For example, Infineon markets a chip called TriCore that includes a 100 MHz micro-controller, up to 2MB of main memory, and some custom logic that implements disk drive-specific functions [18]. Considering the drops in costs of embedded processors and the growing demand for data-intensive computing, we can expect that such architectures will be more prevalent in the future. It should also be mentioned that, while we present our approach, which exploits the filtering characteristic of a computation to reduce communication demands, in the context of a disk-host pair, the compiler analysis presented here is general enough to be employed in other circumstances where filtering data before communication is desirable (e.g., in a sensor network based environment where sensors process the collected data before passing it to a central base station).
2
Architecture and Programming Model
The architecture we focus on in this study has two major components: the host system and the disk system. The host system is the unit where computations are normally performed. The disk system is the storage subsystem that consists of a disk (which might be a RAID) and an embedded processor which can be used to perform some of the computation that would normally be performed by the host
system (i.e., a system without an embedded processor on the disk would perform all computations in the host system). A sketch of the architecture considered in this paper is given in Figure 1. To make use of the embedded processor on the disk, we need instructions (or compiler directives) to map some application code portions to the disk system. In this paper, we assume the existence of two compiler directives, called begin{map} and end{map}, that enclose a code portion which will be executed on the disk system. Note that, in a given application, these directives can be used a number of times. Also, the code portions (fragments) that can be enclosed by these directives can be of different granularities (e.g., a loop, a loop nest, or an entire procedure). However, since our focus in this work is on array-intensive applications, we work at a loop nest granularity. In other words, for each nest of a given application, our approach decides whether to execute that nest on the host system or on the disk system. In this work, the begin{map} and end{map} directives are automatically inserted in the application code by the compiler. It should be noted that in this architecture a given data set can be in memory or on the disk. We use the term disk-resident to indicate that the data set (array) in question resides in the disk system. To enable efficient compiler analysis, we assume that the disk-resident arrays are annotated using a special compiler directive. Note that the data transfers between the disk-resident and memory-resident data sets are explicit. That is, to copy a disk-resident data set (or a portion of it) to a memory-resident data set, one needs to perform an explicit file operation. However, to make our presentation clear, in the code fragments considered in this paper, we mix disk-resident and memory-resident data set (array) accesses, assuming implicitly that each access to a disk-resident array involves a file operation. Our approach can operate with cases where only some of the arrays are disk-resident and also with cases where all of the arrays are disk-resident. It should be mentioned that user-inserted compiler directives have been employed in the past in the context of parallel programming to govern data distributions across memories of multiple processors [13].
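To make the directive-based mapping concrete, the sketch below is an illustration rather than an example from the paper: the rendering of begin{map}/end{map} and of the disk-resident annotation as pragmas, the array names, and the problem size are all assumptions.

```c
#define N 64                       /* problem size (arbitrary for illustration) */

double W[N][N][N];                 /* disk-resident input */
double X[N][N];                    /* memory-resident output */
#pragma disk_resident(W)           /* assumed spelling of the annotation directive */

void filter_on_disk(void)
{
    #pragma begin{map}             /* this nest is mapped to the disk system */
    for (int I = 0; I < N; I++)
        for (int J = 0; J < N; J++) {
            X[I][J] = 0.0;
            for (int K = 0; K < N; K++)
                X[I][J] += W[I][J][K];   /* 3D input reduced to a 2D output */
        }
    #pragma end{map}
}
```

Only the much smaller result array X would then need to travel from the disk system to the host.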
3
Work Division
An important issue that needs to be addressed for extracting the maximum benefit from our storage architecture is how to divide application execution between the host system and the disk system. To accomplish computation partitioning between the disk system and the host (also called work division), our compiler analyzes the entire application to extract its data access pattern. It then inserts begin{map} and end{map} calls in the code to perform work division. In our implementation, a code fragment is mapped onto the disk system if it satisfies the following two criteria:
- It should perform input/output (I/O). While the embedded processor in our architecture can be exploited for performing non-I/O related functionalities as well, in this study, we consider only I/O-intensive code fragments for
Fig. 1.
The sketch of the architecture considered in this work
potential candidates to be executed in the disk system. Since we require disk-resident data sets to be explicitly identified by the programmer, we can easily check whether a given computation performs I/O or not (i.e., we just check whether it involves a disk-resident data set).
- It should exhibit a filtering characteristic. A code fragment is said to exhibit a filtering characteristic if the size of its input data sets (arrays) is much larger than that of its output data sets (arrays).
As an example, consider the following code fragment that consists of two separate nests (written in a pseudolanguage):
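The listing itself did not survive extraction; a sketch consistent with the surrounding description (the array names U, V, W, X and the loop names I, J, K come from the text, everything else is assumed) might look like the following:

```c
/* Nest 1: no filtering -- 2D input U, 2D output V. */
for (I = 0; I < N; I++)
  for (J = 0; J < N; J++)
    V[I][J] = f(U[I][J]);

/* Nest 2: filtering -- 3D input W, 2D output X. */
for (I = 0; I < N; I++)
  for (J = 0; J < N; J++) {
    X[I][J] = 0;
    for (K = 0; K < N; K++)
      X[I][J] += W[I][J][K];
  }
```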
Assuming that arrays U and W are disk-resident, the first nest above does not have any filtering characteristic since it takes a two-dimensional array (U) and generates another two-dimensional array (V). In contrast, the second nest exhibits filtering. This is because it takes a three-dimensional array (W) and generates a two-dimensional array (X). Therefore, it is a better candidate to be executed on the disk system. It should be noticed that, if we do not execute this second nest on the disk system, it needs to be executed in the host system. But, in this case, to perform the required computation, we need to transfer the entire data set W from the disk to the host system, resulting in tremendous network traffic. This obviously will eat up lots of
execution cycles and waste the storage bandwidth (it also increases system-wide energy consumption). This is exactly the overhead that we want to eliminate. Instead, if we can execute this nest on the disk system, we need to transfer only the resulting data set (X) to the host system (so that it can be used by the rest of the application). In this way, data is filtered in the disk system before it is shifted to the host system, thereby leading to an improvement in overall performance. The only drawback of performing the computation on the disk system (instead of the host) is that it will take longer, as the embedded processor on the disk is typically less powerful than the host processor.
3.1
Detecting the Filtering Characteristic
In this work, we experiment with two different strategies for detecting whether or not a given computation exhibits a filtering characteristic. The first strategy (called Strategy I) is easy to implement and (as will be shown later in the paper) generates very good results in practice. It checks (for each array) the number of dimensions and their extents (i.e., dimension sizes). Let us consider the following loop nest and the assignment statement shown.
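The generic loop nest referred to here is not reproduced in this extraction; based on the notation introduced in the next paragraph, it presumably has a form along these lines (placing U on the right-hand side and V on the left-hand side is an assumption, chosen to match the earlier example in which U is the input):

```c
/* Schematic reconstruction of the generic nest, not the original figure. */
for (i1 = ...; ...; ...)
  for (i2 = ...; ...; ...)
    ...
      for (is = ...; ...; ...)
        V[g1(i1,...,is)]...[gm(i1,...,is)] = F(U[f1(i1,...,is)]...[fn(i1,...,is)]);
```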
Here, we assume that f1, f2, ..., fn and g1, g2, ..., gm are the subscript expressions (array index functions), and each fi and gj is an affine function of the loop indices i1, i2, ..., is and loop-independent variables. Assuming further that arrays U (n-dimensional) and V (m-dimensional) are declared as type U[N1][N2]...[Nn] and type V[M1][M2]...[Mm], where type can be any (data) type such as integer or float, Strategy I decides that the assignment statement in the loop shown above exhibits a filtering characteristic if:
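(The inequality itself did not survive extraction. Given the role of c described immediately below, and assuming U is the input, right-hand-side array and V is the output, left-hand-side array, it presumably has the form)

$$N_1 \times N_2 \times \cdots \times N_n \;\ge\; c \times (M_1 \times M_2 \times \cdots \times M_m).$$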
In this last expression, c is a constant to make sure that the difference in the sizes of the input (right-hand-side) and output (left-hand-side) arrays is large enough so that shifting the computation (the statement) to the disk system will be really beneficial (note that if c=1, this corresponds to the informal description of the concept of “exhibiting a filtering characteristic” that we have been using so far). In most of the cases (of array-based applications) encountered in practice, it is possible to check the above condition statically (at compile time). In cases where this is not possible, we have at least two choices. First, we can employ profile data (e.g., by instrumenting the code) to see whether the condition holds for typical data sets. Second, we can insert a conditional statement (if-statement) into the code that chooses between performing the computation on the host side and
performing it on the disk side, depending on the outcome of the condition. It is to be noted that selecting a suitable c value is critical. This is because a small c value can force aggressive computation mapping to the disk system. This in turn can result in some unsuitable computation being mapped to the embedded processor, thereby reducing overall performance. On the other hand, a very large c value can be overly conservative and can result in a code mapping that does not exercise the embedded processor at all. Since the best value for c is both application and architecture dependent, it is not possible to determine an optimal value statically (at compile time). As a result, in this paper, we experimented with different c values (instead of fixing it at a specific value). If, in a given loop, there is at least one statement that exhibits a filtering characteristic, we mark the entire loop to be executed on the disk system (i.e., we assume that the loop has the filtering characteristic). Later in the paper, we demonstrate how loop transformations can be used for improving the effectiveness of our optimization strategy. As an example, let us consider again the code example shown above in Section 3 (which consists of two separate nests). Assuming that all array dimensions are of the same size (extent), using the approach summarized in the previous paragraph, one can easily see that only the second nest is identified to be executed on the disk system (assuming c = 1). While it might be possible to have more elaborate strategies for identifying the loops that need to be mapped to the disk system, as the experimental results (presented later) show, Strategy I performs well in practice. Our second strategy (called Strategy II) is more sophisticated but can also be expected to generate better results than Strategy I. Considering the loop nest and the assignment statement shown above, this strategy decides that the assignment statement exhibits a filtering characteristic if:
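(The condition itself did not survive extraction; from the description that follows, it presumably compares counts of distinct elements accessed rather than declared array sizes, e.g.)

$$G\{U[f_1]\ldots[f_n]\} \;\ge\; c \times G\{V[g_1]\ldots[g_m]\},$$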
where c is the same as described earlier and G{E} gives the number of distinct array elements accessed by array reference E. In other words, instead of just checking the bounds of the arrays involved in the computation, Strategy II checks the actual number of elements accessed. Consequently, in general, it can be more accurate than the first strategy (since not all the loops access all the elements of the arrays they manipulate). The drawback is that determining the exact number of elements accessed by an affine expression is a costly operation [19, 6]. In this paper, we adopt the first strategy as our default strategy; but, we also perform experiments with the second strategy to demonstrate its potential in some applications. In implementing Strategy II, we represent the set to be counted using Presburger formulas and use the technique proposed in [17].
3.2
Reducing Communication Between the Host System and the Disk System
It should be clear that mapping large code fragments to the disk system is preferred to mapping smaller ones, as the former implies less communication between the host code fragments and the fragments mapped to the disk system. To determine whether two neighboring code fragments, say Frag1 and Frag2, should be mapped to the disk system as a whole or not, we adopt the following strategy. Suppose that Frag1 generates an output data set X that will subsequently be used by Frag2. If X is also requested by the host code fragment (in addition to Frag2), then we need to transfer X to the host system. In this case, Frag1 and Frag2 are treated independently (i.e., they are not combined). On the other hand, if Frag2 is the only consumer of X, then these two fragments can be combined together (i.e., they can be mapped to the disk using the same begin{map}-end{map} construct; no communication is necessary when execution moves from Frag1 to Frag2, or vice versa), and X does not need to be transferred to the host system at all (saving bandwidth as well as latency). Many array-based applications exhibit such producer-consumer relationships. In particular, in array-intensive applications, given two nests, an optimizing compiler can test whether they should be mapped together or not. Our current implementation uses data-flow analysis for this purpose. Data-flow analysis is a program analysis technique that is mainly used to collect information about how data flows through program statements/blocks [16]. In our context, we use data-flow analysis to determine the nests that will be mapped to the disk system together. Our approach can be summarized as follows. First, using the strategy explained above (Section 3.1), we determine the set of nests that should be mapped to the disk system. This set represents the minimum set of nests to be mapped to the disk. After that, using the strategy explained in the previous paragraph, we determine the additional nests to be mapped to the disk. These are typically the nests that are the only consumers of the data generated by a nest from the set determined in the first step. After this process, each loop nest in the application is assigned to be executed either on the disk or on the host system, and the corresponding begin{map} and end{map} directives are inserted in the code. We omit the formal description of our data-flow algorithm due to space concerns.
4
Parallel Processing on the Disk System
In the architecture considered so far, we have assumed only a single embedded processor. However, in many array-intensive applications, the computations mapped to the disk system have some degree of loop-level parallelism. That is, the loop iterations can be executed in parallel. Therefore, it makes sense to consider compiler support for a more aggressive architecture that consists of multiple embedded processors on the disk system. This would lead to the following additional constraint (in addition to the two criteria described earlier in Section 3) to map a code fragment onto the disk system:
- The code fragment should take advantage of the parallel embedded processors on the disk system. In other words, the code portions mapped to the disk system should be parallelizable. This parallelization can be achieved in two
ways. First, for array-intensive applications (which are the focus of this work), the compiler can analyze data dependences between the loop iterations [16] and can detect whether the loop can be parallelized. Second, for other types of codes (e.g., those that make heavy use of pointer arithmetic), the user can annotate parallel code fragments, and this can help our compiler in deciding which code portions must be mapped to the disk system. As an example of the first type of scenario, the second nest of the fragment shown in Section 3 exhibits loop-level parallelism. More specifically, the I and J loops can be parallelized across the embedded processors on the disk. It should also be noted that sometimes it might be beneficial to relax our requirements for mapping data onto the disk system, and still take advantage of our storage architecture. For example, in some cases, the I/O portion of the application code may not be parallelizable; but, mapping it to the disk system can lead to large reductions in communication volume due to the filtering type of computation on disk-resident data. Similarly, in some cases, there may not be any data filtering activity, but we may have a large amount of I/O parallelism. Again, mapping this I/O (and the associated computation) to the disk system can reduce I/O and execution time. In our current implementation, we can work with two different styles of parallelism. First, we can allow the programmer to annotate the loops in the program that are to execute in parallel. Second, we have developed a strategy that parallelizes a sequential application based on data reuse analysis. The approach used tries to put as much data reuse as possible into the innermost loop positions, and hence leaves dependence-free outer loops to be parallelized. The details of this approach are beyond the scope of this paper and can be found elsewhere [10]. In the next section, we experimentally (quantitatively) evaluate our approach to see whether it improves performance in practice.
5
Experiments
5.1
Simulation Environment
We designed and implemented a custom simulation environment to perform our experiments. This environment can simulate systems with different numbers of host processors, disks, and embedded processors. Our simulator uses DiskSim [7] for simulating the disk behavior. DiskSim is an accurate and highly-configurable disk system simulator developed at the University of Michigan and enhanced at CMU to support research into various aspects of storage subsystem architecture. It includes modules for most secondary storage components of interest, including device drivers, buses, controllers, adapters, and disk drives. The detailed disk module employed in DiskSim has been carefully validated against ten different disk models from five different manufacturers. The accuracy demonstrated exceeds that of any other publicly-available disk simulator [7]. The simulation of the host processor(s) and embedded processor(s) has been performed using the SimpleScalar [3] infrastructure. To simulate communication between processors,
we adopted a simple strategy based on the number of communication messages and the available bandwidth. The default simulation parameters used in the experiments for processors, disk and communication subsystems are listed in Table 1. Unless stated otherwise, all experimental results to be presented have been obtained using the simulation parameters in this table. It is to be noted that the parameters given in the cache and memory hierarchy part are the same for both the host and embedded processors; the cases where they differ are specified explicitly.
Fig. 2. Simulation process
We conducted experiments with two configurations: one without the embedded processor attached to the disk, and one with the embedded processor. The first configuration is called the base configuration and has a host processor of 1GHz with a 128MB memory space. In the second configuration, the host processor speed is again 1GHz, but we also have a 200MHz embedded processor with a 16MB memory space (Texas Instruments’ C27x series, for example, has this memory capacity) attached to the disk system. The I/O interconnect between the disk system and the host system is assumed to be 160 MB/s (a reasonable value for a typical disk-based architecture). When we have multiple host processors (or embedded processors), they communicate with each other using respective communication networks. Figure 2 illustrates how the simulations have been performed. First, the application code is divided between the host system and the disk system (as discussed earlier in detail). Then, the host program is simulated using the CPU simulator and the communication simulator. Similarly, the disk program (i.e., the code portion mapped to the disk system) is simulated using the CPU simulator, the communication simulator, and the disk simulator. The last phase collects the statistics and combines them.
5.2
Benchmarks and Code Versions
To evaluate the performance of our strategy, we conducted experiments with six array-intensive benchmark programs.
Feature: This is a speech processing program that implements perceptual linear prediction (PLP). PLP is based on the short-term spectrum of speech. In contrast to pure linear predictive analysis of speech, PLP modifies the short-term spectrum of the speech by several psychophysically based transformations.
ImgComp: This code implements a wavelet transform-based image coder for gray-scale images. While the coder itself is not very sophisticated, each individual piece of the transform coder has been chosen for high performance.
Restore: This is a high-order iterative method for image restoration. As compared to pure iterative methods, it is much faster and converges after two dozen iterations.
SMT: This code implements a video smoothing algorithm using temporal multiplexing. The algorithm smoothes out the rate variability of the data transmission from a server to a client so that the network utilization can be improved.
T-Image: This program controls the outputs of several cameras connected to a single display. It implements a simple automatic video image processing system which outputs statistics such as detection of intrusion in forbidden areas and detection of abnormal lack of movement or counterflow movements.
Vehicle-V: This code implements an algorithm that employs a predictive Kalman filter to track motion through occlusions (2D rigid motion). It also calculates the gradient of the error to adjust the estimation.
The number of C lines in the sources of these applications ranges from 465 to 3,128. Their important characteristics are given in Table 2. The third column shows the total size of the disk-resident data manipulated by each application. The fourth column gives the execution cycles for the original codes on our base configuration (i.e., without the embedded processor on the disk system). The next three columns give the distribution (breakdown) of execution cycles into three categories: the cycles spent in I/O; the cycles spent in computation (on the host); and the cycles spent in communication between the host system and the disk system. We see that a significant number of cycles is spent in communication, which means that minimizing communication volume can bring large performance benefits in practice. The reason that we used these specific applications is that they were available to us from the University of Manchester, and that they represent a good mix of real-life applications as far as the filtering capability and I/O parallelism are concerned (that is, some of them do not have much filtering (I/O parallelism), whereas the others do). This last point is indicated in the last two columns of Table 2, where we give the percentage of statements with the filtering characteristic for the two strategies (Strategy I and Strategy II) explained in Section 3.1. One can see from these last two columns that our benchmarks include codes with low filtering opportunity (e.g., Vehicle-V) as well
as codes with high filtering opportunity (e.g., T-Image). We also see from these two columns that using Strategy I (which is based on array sizes) and Strategy II (which is based on the number of distinct elements accessed) results in, respectively, 34.9% and 36.8% of the statements being identified as exhibiting the filtering characteristic. This shows that there exists scope to perform intelligent computation on the disk system. For each application, we used two different code versions. The base version is the one that runs on the architecture without the embedded processor (i.e., the base configuration), and the optimized version is the one that runs on the architecture with the embedded processor. To evaluate the impact of the number of processors, we also performed experiments with different numbers of host and embedded processors. All the results presented in this section are normalized with respect to the base version with a single host processor (i.e., the base configuration). Unless stated otherwise, no loop distribution is applied in the optimized version, Strategy I is used, and the c value (see Section 3) is set to the largest dimension size of the largest array in the application. More specifically, the default c values for Feature, ImgComp, Restore, SMT, T-Image, and Vehicle-V are 1000, 512, 712, 256, 256, and 512, respectively. All code modifications (when loop distribution is employed) have been automated within the SUIF (Stanford University Intermediate Format) infrastructure from Stanford University [2].
5.3
Results
The graph given in Figure 3 shows the normalized overall execution cycles for both the original and optimized versions. All bars are normalized with respect to the first bar, which gives, for each application, the distribution of execution cycles, and is divided into three parts: (i) the time spent in computation (on the host or on the disk system); (ii) the time spent in communication (between the host and the disk); and (iii) the time spent in I/O on the disk. The second bar gives the execution cycle breakdown for the optimized version when a single embedded processor is used in the disk system. When we compare these first two bars, we observe that our compiler-based approach reduces the execution cycles spent in communication by 41.2% on average. This results in a 10.1% reduction, on average, in overall execution cycles. The third bar for each application in Figure 3 gives the reduction in execution time when the number of embedded processors is increased to 4 (each is 200MHz). One can clearly see that this brings large reductions in the I/O time as well as the computation time spent by the embedded processors. This is due to exploiting parallelism on the disk system. As a result, we obtain a 49.1% execution cycle reduction on average across all six applications. What this means is that the code portions shifted to the disk system take advantage of parallelism to a large extent. One might argue that increasing the number of host processors could also bring comparable benefits. The fourth bar in Figure 3 for each application gives the normalized execution cycles when the number of host processors on the host system is increased to 4, while keeping the number of embedded processors at 1. When we compare the third and the fourth bars,
Fig. 3.
Normalized execution cycles for different versions
we see that the (overall) execution time reductions brought by increasing the number of embedded processors are, in general, higher than those obtained by increasing the number of host processors. In other words, using three additional (cheap, 200MHz) embedded processors on the disk is more beneficial than employing three additional (powerful, 1GHz) processors on the host. This is mainly because embedded processors can reduce I/O, computation, and communication times, whereas the host processor can reduce only I/O and computation times. Although the reductions in I/O and computation times due to an increased number of host processors are higher than those due to an increased number of embedded processors (as host processors are faster), this effect is overshadowed by the decrease in communication cycles as a result of the increased number of embedded processors. It is also conceivable that in the future the embedded processors will become very powerful, and it might be possible to put such powerful processors on the disk system. To quantify the benefits that could come from such systems, the last bar for each application in Figure 3 gives the normalized execution cycles when 4 powerful (1GHz) embedded processors are used on the disk system (with only one host processor). These results clearly show that employing powerful processors on the disk system (rather than on the host system) is much more beneficial, reducing the overall execution cycles by 59.8% on average. To study the scalability of parallel processing on the disk system, we also conducted further experiments with different numbers of embedded processors (in conjunction with a single host processor). The results, normalized with respect to the original version (with one host processor), are shown in Figure 4. We can clearly see that increasing the number of embedded processors generates good scalability; that is, where available, we are able to take advantage of the large number of embedded processors. For example, when we have 16 embedded processors, the average reduction in overall execution cycles is 67% across
Fig. 4. Normalized execution cycles with different numbers of embedded processors
all benchmarks. It should also be observed that the additional benefits of our approach get reduced as one increases the number of processors to large values, as a result of the contention on the disk system.
6
Concluding Remarks
Intelligent disk systems with large storage capacities and fast interconnects are expected to become prevalent in the next decade. This is due to the trends that try to bring computation to where data resides (instead of the more traditional approach, where the data is brought to where computation is normally executed). An important problem that needs to be addressed in such architectures is how to divide the computation between the disk system (embedded processor) and the host system (processor). This paper has proposed and evaluated a compiler-based work division (computation partitioning) strategy for array-intensive applications. Our strategy is based on the idea that the computations (loop nests) that filter their input data sets should be mapped onto the disk system. The experimental results with six application codes have indicated that the proposed approach is very successful in practice.
References
[1] A. Acharya, M. Uysal, and J. Saltz, “Active Disks: Programming Model, Algorithms and Evaluation”. In Proc. the 8th International Conference on Architectural Support for Programming Languages and Operating Systems, October 1998.
[2] http://suif.stanford.edu/
[3] D. C. Burger and T. M. Austin. “The SimpleScalar Toolset”, Version 2.0, Technical Report 1342, Dept. of Computer Science, UW, June 1997.
[4] R. Chandra, D. Chen, R. Cox, D. Maydan, N. Nedeljkovic, and J. Anderson. “Data Distribution Support on Distributed-Shared Memory Multiprocessors.” In Proc. Programming Language Design and Implementation, Las Vegas, NV, 1997.
[5] A. Chandrakasan, W. J. Bowhill, and F. Fox. “Design of High-Performance Microprocessor Circuits,” IEEE Press, 2001.
[6] P. Clauss. “Counting Solutions to Linear and Nonlinear Constraints through Ehrhart Polynomials: Applications to Analyze and Transform Scientific Programs”. In Proc. the 10th International Conference on Supercomputing, pp. 278–285, May 25–28, 1996, PA.
[7] G. Ganger. “System-Oriented Evaluation of I/O Subsystem Performance”, Technical Report CMU-TR-243-95, University of Michigan, 1995.
[8] W. Hsu, A. Smith, and H. Young, “Projecting the Performance of Decision Support Workloads on Systems with Smart Storage (SmartSTOR)”. Report No. UCB/CSD–99–1057, 1999.
[9] IBM Automatic Locality-Improving Storage (ALIS). http://www.almaden.ibm.com/cs/storagesystems/alis/index.html
[10] M. Kandemir, A. Choudhary, J. Ramanujam, and P. Banerjee. “Improving Locality Using Loop and Data Transformations in an Integrated Framework.” In Proc. International Symposium on Microarchitecture, Dallas, TX, December 1998.
[11] K. Keeton, D. Patterson, and J. Hellerstein, “A Case for Intelligent Disks (IDISKs)”. In SIGMOD Record, 27(3), 1998.
[12] I. Kodukula, N. Ahmed, and K. Pingali. “Data-centric multi-level blocking.” In Proc. SIGPLAN Conf. Programming Language Design and Implementation, June 1997.
[13] C. H. Koelbel, D. B. Loveman, R. S. Schreiber, G. L. Steele, and M. E. Zosel. “The High-Performance Fortran Handbook”, MIT Press, Cambridge, MA, 1994.
[14] K. McKinley, S. Carr, and C. W. Tseng. “Improving Data Locality with Loop Transformations.” ACM Transactions on Programming Languages and Systems, 1996.
[15] G. Memik, M. Kandemir, and A. Choudhary, “Design and Evaluation of Smart Disk Architecture for DSS Commercial Workloads”. In Proc. International Conference on Parallel Processing, September 2000.
[16] S. S. Muchnick. “Advanced Compiler Design and Implementation.” Morgan Kaufmann Publishers, San Francisco, California, 1997.
[17] W. Pugh. “Counting Solutions to Presburger Formulas: How and Why”. In Proc. the ACM Conference on Programming Language Design and Implementation, Orlando, Florida, 1994.
[18] E. Riedel, C. Faloutsos, G. Gibson, and D. Nagle, “Active Disks for Large-Scale Data Processing”. IEEE Computer, June 2001, pp. 68–74.
[19] A. Schrijver. “Theory of Linear and Integer Programming”, John Wiley and Sons, Inc., New York, NY, 1986.
[20] S. Singhai and K. S. McKinley. “A Parameterized Loop Fusion Algorithm for Improving Parallelism and Cache Locality.” The Computer Journal, 40(6):340–355, 1999.
[21] M. Sivathanu, V. Prabhakaran, F. I. Popovici, T. E. Denehy, A. C. Arpaci-Dusseau, and R. H. Arpaci-Dusseau, “Semantically-Smart Disk Systems,” Technical Report 1445, Computer Sciences Department, UW, Madison, 2002.
[22] M. Uysal, A. Acharya, and J. Saltz, “Evaluation of Active Disks for Decision Support Databases”. In Proc. International Conference on High Performance Computing Architecture, January 2000.
Much Ado about Almost Nothing: Compilation for Nanocontrollers
Henry G. Dietz, Shashi D. Arcot, and Sujana Gorantla
Electrical and Computer Engineering Department
University of Kentucky, Lexington, KY 40506-0046
[email protected]
Abstract. Advances in nanotechnology have made it possible to assemble nanostructures into a wide range of micrometer-scale sensors, actuators, and other novel devices... and to place thousands of such devices on a single chip. Most of these devices can benefit from intelligent control, but the control often requires full programmability for each device’s controller. This paper presents a combination of programming language, compiler technology, and target architecture that together provide full MIMD-style programmability with per-processor circuit complexity low enough to allow each nanotechnology-based device to be accompanied by its own nanocontroller.
1
Introduction
Although the dominant trend in the computing industry has been to use higher transistor counts to build more complex processors and memory hierarchies, there always have been applications for which a parallel system using processing elements with simpler, smaller, circuits is preferable. SIMD (Single Instruction stream, Multiple Data stream) has been the clear winner in the quest for lower circuit complexity per processing element. Examples of SIMD machines using very simple processing elements include STARAN [2], the Goodyear MPP [3], the NCR GAPP [6], the AMT DAP 510 and 610, the Thinking Machines CM-1 and CM-2 [19], the MasPar MP1 and MP2 [4], and Terasys [13]. SIMD processing element circuit complexity is less than for an otherwise comparable MIMD processor because instruction decode, addressing, and sequencing logic does not need to be replicated; the SIMD control unit decodes each instruction and broadcasts the control signals. However, that savings in logic complexity is negligible in comparison to the savings from not needing to replicate the program in the local memory of each MIMD processor. No matter how simple the processor, a long and complex program still requires a very large number of bits of local storage. Program length is somewhat affected by the choice of instruction set architecture, but even the most dense encodings only reduce program size by a small constant factor.
1.1
Meta-state Conversion (MSC)
The ideal would be the simplicity of SIMD hardware with the independent programmability of MIMD. Interpretation of a MIMD instruction set using SIMD hardware is an obvious approach, with a number of non-obvious optimizations required to achieve good efficiency [18] [20] [7]. One proposal even adds additional hardware broadcast structures to a basic SIMD design to increase the efficiency of these techniques [1]. However, any type of interpretation requires a copy of the program local to each processing element, so the size of local program memory brings hardware complexity close to that of a MIMD. In the early 1990s, primarily targeting the MasPar MP-1, we developed basic compiler technology that performs a state-space conversion of a set of MIMD programs into a pure SIMD program with similar relative timing properties: Meta-State Conversion (MSC) [11]. For MSC, it is fairly easy to show that the minimum amount of local memory needed is just enough bits to hold a unique identifier for each processing element, no matter how large the MIMD program may be. MSC is a state space conversion closely resembling NFA to DFA conversion (as used in constructing lexical analyzers). A MIMD program can be viewed as a state transition diagram in which a particular processor can follow any path it chooses, being in only one state at any given point in time. The MSC algorithm constructs a state transition diagram in which each meta state represents a possible set of original MIMD states that could be held simultaneously by different processors (threads), each executing its own path in the MIMD code. The code within each meta state is guarded (predicated) by testing if the processor was in the original state that would have executed that code. Thus, if one processor might be executing {A} while another is executing {B}, MSC would represent this by placing both chunks of code into a meta state structured like: if (in_A) {A} if (in_B) {B}. The result is conversion of the MIMD program into pure SIMD code, although the SIMD code will run slower unless code for different threads can be factored, i.e., the code becomes something like: AintersectB; if (in_A) {Aunique} if (in_B) {Bunique}. Thus, factoring code within a meta state is an important step after MSC. At the time MSC was introduced, we viewed it primarily as a novel way to obtain some additional functionality from a SIMD supercomputer, literally allowing us to program our 16,384 processing element MasPar MP-1 using a shared memory MIMD model while achieving a significant fraction of peak native (distributed memory SIMD) performance. Now, we realize it is actually a fundamentally different model of computation. Von Neumann Architecture places code and data in the same memory system; Harvard Architecture places code and data in separate but similar memory systems; what one might now call “Kentucky Architecture” places data in memory and factors out code, implementing control entirely by selection. The point is that a fully programmable processor can be made very small using this model, on the order of 100 transistors: small enough to be used to add fully-programmable intelligence to micrometer-scale devices fabricated using nanotechnology.
1.2
Common Subexpression Induction (CSI)
Although MSC is the enabling compiler technology, the efficiency of code resulting from MSC critically depends on the quality of a second technique applied to each block of code: the factoring mentioned above. The Common Subexpression Induction (CSI) [8] problem is, given a block of code consisting of K separate instruction sequences each with a different selection predicate, to minimize the number of instruction broadcast cycles necessary by factoring-out instructions that can be shared by multiple predicates. This is a very difficult problem. In many ways, the primary contribution of this paper is an operation model better suited to CSI. Clearly, the simplest processing element design will be bit-serial, so applying CSI at the word level, as was originally presented, is likely to miss bit-level opportunities to factor-out operations. There are inherently fewer different types of operations on single-bit values than there are on multi-bit values, so there is an increase in the probability that an operation can be shared by multiple predicates; we can further improve the odds by reducing the number of bit operations available, simultaneously reducing the hardware complexity. We also can simplify the process by making enable be simulated by masking, which makes cost significantly less sequence dependent. The result is that hardware logic minimization techniques can be adapted to perform CSI. In summary, by adapting the if-then-else operation model used in Binary Decision Diagrams (BDDs) [5] [17] [15] to be both the compilation model and the only instruction in the target Instruction Set Architecture (ISA), the analysis, compilation, and target architecture are all made significantly simpler. The following section describes the target architecture, which is deliberately chosen to be a close match for the operation model used by the new compiler technology. Section 3 describes BitC, a simple C dialect designed to facilitate authoring nanocontroller programs. The compilation process is described in Section 4, with close attention paid to the CSI algorithm. Preliminary results are summarized in Section 5. The conclusion is given in Section 6.
2
Nanoprocessor/Nanocontroller Architecture: KITE
We define a nanoprocessor to be a Kentucky Architecture processor that has MIMD node circuit complexity several orders of magnitude lower than that of a minimal von Neumann or Harvard microprocessor. A nanocontroller is essentially a nanoprocessor with two additional features:
- The ability to perform local digital and/or analog input/output operations, typically monitoring or controlling another device constructed on the same circuit substrate. For low-temperature nanotechnology fabrication methods, the nanocontroller array might be constructed first and then the nanotechnology devices built on top of their corresponding nanocontrollers.
- Provision for ensuring that real-time timing constraints for specific operations (local input/output) will be met as specified by the nanoprocessor
Fig. 1.
The KITE Architecture
program. Real-time control requires predictable real-time behavior, and analog input/output generally would be done using programmed timing and an RC circuit, because separate analog converter (ADC or DAC) circuits would be too large. Normally, the MSC process preserves only approximate relative timing between processors, not absolute timing of operations. Rather than discussing general architectural design constraints for nanocontrollers, the current work is focused on a very simple new architecture suitable for both nanoprocessors and nanocontrollers: KITE (Kentucky If-Then-Else). There are at least two component modules in the KITE architecture, but more can be added to create a hierarchy of clock rates, thus avoiding running each nanoprocessor at a slow global broadcast clock speed. The complete block-level design using 3 levels is summarized in Figure 1.
2.1
The Control Unit
In KITE, the Control Unit at first appears to be a conventional SIMD control unit. However, it would be more accurate to say that it controls the program memory interface, not the processors.
MSC yields a potentially very large meta-state automaton, which can be viewed as a single sequential program. Each basic block in that program is generally large, and the compiler technology discussed here (see Section 4.2) can make basic blocks even larger, if desired. Unlike basic blocks generated for traditional sequential programs, MSC-generated basic blocks typically end in multiway branches rather than binary branches; the next meta state is selected by examining the Global OR (GOR) of votes from all the processors. Essentially, state transitions in meta-state programs resemble those in code for a VLIW (Very Long Instruction Word) architecture [12]. Given the expectation that meta-state programs will be large, it is appropriate to use off-chip DRAM or PROM, interfaced by conventional address (A) and data (D) busses. Off-chip memory would be loaded with the meta-state program by a conventional computer host or other mechanism. A conventional SIMD control unit fetches an instruction at a time, but it is actually more efficient to fetch a block of code as a single logical entity. For KITE programs, the instruction fetch bandwidth required can be significantly reduced by storing a compressed representation of each basic block in memory. The controller would perform decompression, branch prediction, and instruction cache management, treating each basic block as a single unit. This allows the control unit to intelligently prefetch code chunks using a relatively slow clock (C0) determined by the external memory system, while internally broadcasting partially decoded instructions (SITEs) from cache at a significantly faster rate (C1).
2.2
The Sequencers
Although broadcast of decoded signals sounds easy, this has been the clock-speed-limiting factor for most SIMD machines. Even in an on-chip network, huge fanout means being either very slow or very large; relative to nanotechnology fabrication, wires even need to get thick. The purpose of the sequencers is to make a slow broadcast not imply a slow nanoprocessor. Thus, there would be many sequencers, each hosting a moderate number of nanoprocessors. The SITE representation of an instruction actually is a compact form that generates four consecutive clock cycles' worth of control information for the nanoprocessors. Thus, the input clock (C1) to a sequencer can be as much as four times slower than the nanoprocessor clock. More precisely, a particular sequencer's control line outputs imply a "clock" for the nanoprocessors, but nanoprocessors are only loosely synchronized across sequencers. Put another way, clock skew is managed by higher-level hardware structures working with longer clock cycles that effectively hide the skew of lower levels. Incorporating additional nanoprocessors and sequencers could also provide a means for fault tolerance by disabling the sequencer above each faulty component.
2.3
The Nanoprocessors/Nanocontrollers
The nanoprocessor itself is exceedingly simple: it consists of 1-bit registers, a single register-number decoder, and a 2-to-1 multiplexor. The operation of a multiplexor can be described by analogy to the software concept of an if-then-else; if the value in i is true, return t, else return e. The value returned by the multiplexor can be stored in any register selected. The SITE representation of an instruction is literally four register numbers: the register to store into, the one to load i with, the one to load t with, and the one to load e with. The sequencer simply converts that into a four-cycle sequence using RN to specify the register number for the decoder and using the other lines to latch a value into the corresponding register. For reasons which will become obvious in Section 4, "registers" 0 and 1 are not registers, but respectively generate the read-only constants 0 and 1. Similarly, for each application, a KITE nanoprocessor will require specific network connections and local input and output registers; these are addressed like registers starting with register number 2. A minor but important detail is that all network and local input or output registers should be controlled by the values stored, not by the act of decoding their address. (The address decode trick is used in many microprocessor systems, but our nanoprocessors lack true hardware enable/disable logic, so storing the same value that was already present in a register must have no effect.) The minimum number of bits in a KITE register file is thus the sum of 2 constant registers, the number of additional registers needed for network and local input and output, the ceiling of the base-2 logarithm of the total number of nanoprocessors in the system, and the maximum number of ordinary data bits required in any nanoprocessor. Given the above, a slightly smarter sequencer could be used to opportunistically reduce the total number of clock cycles required from 4 per SITE to as few as one (the result store cycle). For example, if the same register number is used to load both i and t, the loading of both can be accomplished in a single clock cycle. Further, if the current SITE duplicates fields from the previous one, and those fields do not correspond to network or local input or output accesses, the sequencer can skip loading of any of i, t, or e. Such a sequencer would need to buffer incoming SITEs to compensate for variability in the rate at which it processes SITEs, but execution time would still be predictable because the optimization opportunities depend only on the SITE sequence coming from the control unit.
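To make the SITE semantics concrete, the following C sketch (our illustration, not from the paper; the register-file size and structure names are assumptions, and the sequencer optimizations described above are ignored) interprets one SITE as the four-cycle sequence just described:

```c
#include <stdint.h>

#define NUM_REGS 64                          /* assumed register file size */

typedef struct { uint8_t d, i, t, e; } SITE; /* four register numbers */

static uint8_t reg[NUM_REGS];                /* 1-bit registers; reg[0]=0, reg[1]=1 */

/* Execute one SITE: load i, t, and e from the named registers, then store
   the multiplexor result (i ? t : e) into destination register d. */
void execute_site(SITE s)
{
    uint8_t i = reg[s.i];                    /* cycle 1: load i */
    uint8_t t = reg[s.t];                    /* cycle 2: load t */
    uint8_t e = reg[s.e];                    /* cycle 3: load e */
    reg[s.d] = i ? t : e;                    /* cycle 4: store result */
    reg[0] = 0;  reg[1] = 1;                 /* constants remain read-only */
}
```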
3
Programming Language: BitC
The programming language that we have created for nanocontrollers like KITE is a very small dialect of C that we call BitC. It is essentially a sequential programming language, but also supports barrier synchronization, real-time operations, and inter-processor communication. Like SWARC [10], BitC extends the notion of C types to allow explicit declaration and/or type casting of arbitrary bit precisions using the bitfield syntax
from C. For example, int:3 a; declares a to be a signed integer value that can be stored in 3 bits. The type system is generally consistent with that of C, although precision of intermediate results is carefully tracked; for example, logical and comparison operators like == always generate a 1-bit unsigned integer result rather than a “full precision” integer value of 0 or 1. C operators are supported with the standard precedences; a few additional operators like minimum, maximum, and population count also are provided. BitC supports the usual conditional and looping constructs. However, due to the severely limited amount of data memory associated with each nanoprocessor, stack manipulation for function calls is not supported in the current implementation. Input and output are accomplished using application-specific reserved registers, which are accessed using variable names assigned by a user-specified mapping. Normally, reservation of special registers would be done before allocation of any ordinary variables. For example, int:1 adc@5; defines adc to be allocated starting at bit register 5, simultaneously reserving register 5 so that ordinary variables and temporaries will not be allocated there. Interprocessor communications also are implemented using reserved registers.
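As a small illustration assembled from the constructs described above (it is not an example from the paper, and details such as the reserved register numbers are arbitrary), a BitC fragment might look like:

```c
int:1 adc@5;        /* local input, mapped to reserved bit register 5 */
int:1 alarm@6;      /* local output, reserved bit register 6 */
int:3 count;        /* 3-bit signed counter in ordinary registers */

count = 0;
while (count < 3) {
    if (adc == 1)               /* == yields a 1-bit unsigned result */
        count = count + 1;
}
alarm = (count >= 3);           /* drive the output once the threshold is reached */
```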
4
Compilation
The compilation of BitC for KITE is a complicated process involving a number of transformations. The first step in compiling BitC code is the transformation of word-level operations into simple operations producing single-bit results. These bit-slice operations are optimized and simplified using a variety of techniques from both conventional compiler optimization and hardware logic minimization. The optimized bit-slice versions of all the programs are then logically merged into a single SPMD (Single Program, Multiple Data) program which is Meta-State Converted into guarded SIMD code. Common Subexpression Induction (CSI) is the next step; as discussed in Section 4.3, the use of ITEs dramatically simplifies the CSI algorithm. After CSI, the process is essentially complete, unless too many registers are needed. Thus, the final step is to order the instructions to ensure that max live does not exceed the number of registers available, and then to allocate registers on that basis. The internal form of the instructions after register allocation is identical to the instruction format used by KITE's sequencers.
4.1
Transformation of Word-Level Operations into ITEs
The transformation of word-level operations to bit-level operations is conceptually simple enough, but tedious. The BitC compiler’s front end manipulates data at the word level using a data structure for each word value that contains the basic object type (e.g., signed or unsigned), the number of valid bits in the word, and a vector of pointers to bit-level operations or storage cells that yield each bit of the value. Each word-level operation is converted to bit-slice form
Fig. 2. ITE Equivalents for Familiar Logic Operations
by invoking a routine that takes word value descriptions as inputs, generates bit-level operations internally, and then returns a description of the results in the form of a word value structure. Consider transforming the expression c=a+b; where a, b, and c have been declared as unsigned int:2 a, b, c;. Notationally, let x_i denote bit i of the word value x. Thus, our result value, c, can be written as the word-level function:
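(The function itself did not survive extraction; for the two-bit declaration above it is presumably just the two-bit, modulo-4 sum)

$$c = (a + b) \bmod 2^{2}.$$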
At the bit level, this turns into two separate functions, one to generate each of the bits of c. Using the same logic expressions one would find in a typical textbook discussion of constructing a 2’s-complement adder circuit, the bit-level representation would be something like:
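(The expressions are missing from this extraction; the standard textbook form of a two-bit add, which is what the text describes, is)

$$c_0 = a_0 \oplus b_0, \qquad c_1 = a_1 \oplus b_1 \oplus (a_0 \wedge b_0).$$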
However, this is not a very desirable bit-level format for our purposes. The problem is that even this very short example uses two different operators: XOR and AND. It is not difficult to construct hardware capable of performing any of the 16 possible logic functions of two inputs; a 4-bit lookup table and a 1-of-4 multiplexor suffice. However, having different opcodes complicates both CSI and the logic minimization process. A potentially simpler alternative, commonly used in hardware logic minimization, is to express all functions as if-then-else selections, i.e., as 1-of-2 multiplexors. Throughout the rest of this paper and the entire BitC and KITE system, an if-then-else tuple is referred to as an ITE and, for convenience, we will use C's ternary operator syntax to show the contents of an ITE. Equivalents for several familiar logic operations are summarized in Figure 2. In the BitC compiler, the ITE table data structure is used not only to represent ITE operations, but also nanoprocessor registers. In effect, for a KITE target, the BitC compiler uses the first ITE index values (one per register in the register file) to represent the registers. Thus, ITE index 0 represents the constant 0 and ITE index 1 represents the constant 1. ITEs starting at index 2 represent network registers connecting to other nanoprocessors, local input and output device registers, and bits of user-defined variables. ITEs representing operations start at
Fig. 3. Normalized ITEs for unsigned int:2 a,b,c; c=a+b;
the index just past the register indices; although fewer registers may have been allocated at this stage, register allocation will make use of these unallocated registers to hold temporary values. In fact, the BitC compiler can be used to determine the minimum usable register count for a particular application, thus serving as a design tool for customizing a KITE hardware implementation. Rewriting our two-bit addition example using ITEs:
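A sketch of what the rewritten example might look like, again using the ternary notation; the intermediate names (carry, t) are illustrative and are not necessarily the ITE indices the compiler would assign.

void add2(int a0, int a1, int b0, int b1, int *c0, int *c1)
{
    int carry, t;
    *c0   = a0 ? (b0 ? 0 : 1) : b0;        /* a0 XOR b0                 */
    carry = a0 ? b0 : 0;                   /* a0 AND b0                 */
    t     = a1 ? (b1 ? 0 : 1) : b1;        /* a1 XOR b1                 */
    *c1   = t ? (carry ? 0 : 1) : carry;   /* (a1 XOR b1) XOR the carry */
}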
The multi-valued multi-level logic minimization process using ITEs is very similar to the conventional compiler Common Subexpression Elimination (CSE) process. Just as normalizing the form of tuples for CSE simplifies the search for matches, the logic minimization process can profit greatly from normalizing the ITEs. A detailed discussion of normalization of a similar form for logic minimization is given in [17]; as each ITE is generated, we normalize such that the i component is always a register and registers used in the t and e parts have lower numbers. The result of processing the above example is shown graphically in Figure 3. Registers 0 and 1 hold those constants; registers 2-7 hold a, b, and c.
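A minimal sketch of how normalized ITE generation with reuse of identical ITEs (the CSE-like part of the process) is commonly implemented; the reordering that keeps the i component a register with t and e drawn from lower-numbered entries is omitted, and none of these names or sizes are the BitC compiler's.

#include <assert.h>

enum { NREGS = 64, MAX_ITES = 4096 };     /* sizes chosen arbitrarily          */

typedef struct { int i, t, e; } Ite;      /* indices into the ITE table        */

static Ite table[MAX_ITES];
static int n_ites = NREGS;                /* 0..NREGS-1 denote registers;      */
                                          /* registers 0 and 1 hold 0 and 1    */
int make_ite(int i, int t, int e)
{
    if (t == e) return t;                 /* i ? x : x  is just x              */
    if (i == 1) return t;                 /* constant-true selector            */
    if (i == 0) return e;                 /* constant-false selector           */
    for (int k = NREGS; k < n_ites; k++)  /* reuse an identical ITE if present */
        if (table[k].i == i && table[k].t == t && table[k].e == e)
            return k;
    assert(n_ites < MAX_ITES);
    table[n_ites] = (Ite){ i, t, e };
    return n_ites++;
}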
4.2 Predication Using ITEs
Given that an ITE is really the most basic guarded/predicated operation, it is not surprising that conversion of control flow constructs into ITEs is straightforward and efficient. In [9], we discussed how speculative predication could be supported across arbitrary control flow including conditionals, loops, and even recursive function calls. Although speculative execution using KITE would not obtain the same benefits as it does targeting IA64, the same techniques can be used to generate larger basic blocks of ITEs that, hopefully, will lead to more effective CSI and a reduction in the total number of control states after the single-nanoprocessor speculative programs have been combined using MSC. Consider a very simple example:
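A hypothetical fragment of the kind being discussed, with nested conditionals over single-bit values; the names are illustrative, not the paper's original listing.

int example(int a, int b, int x, int y, int z)
{
    if (a) {
        if (b) x = y;
        else   x = z;
    } else {
        x = 0;
    }
    return x;
}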
The nested conditionals can be speculatively executed by nothing more than nested application of if-conversion:
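Applied to the hypothetical fragment above, if-conversion collapses the control flow into a single guarded assignment, i.e., one nest of ITEs:

int example_ite(int a, int b, int x, int y, int z)
{
    x = a ? (b ? y : z) : 0;   /* the guards become the i components of the ITEs */
    return x;
}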
This same transformation also is used to implement the guards created by the MSC process, which normally look like single-level if sequences testing local state variables. The interesting thing about this transformation is that ITEs created as predication are indistinguishable from ITEs implementing portions of word operations. Thus, the ITE optimization process will naturally minimize both simultaneously.
4.3 Common Subexpression Induction Using ITEs
The basic CSI algorithm was first described in [8]. That algorithm was implemented and shown to be effective, with reasonable compile times, when targeting the MasPar MP-1 [4]. Thus, in describing our new approach using ITEs, it is useful to begin by summarizing the original approach and the assumptions that it makes. There were 8 steps: Step 1: Construct Guarded DAG. The first step in the CSI algorithm is the construction of a Guarded DAG (Directed Acyclic Graph) for the assembly-level operations. The basic DAG structure is typical of those generally used to represent basic blocks in optimizing compilers. The guards are bit masks that indicate to which “thread” each instruction belongs. Step 2: Inter-thread CSE. Given a guarded DAG, the next step is essentially standard compiler technology for recognizing and factoring-out common subexpressions, but operations with different guards can be factored-out as common subexpressions. Portions of the DAG which, ignoring the guards, would be
common subexpressions are in fact equivalent code sequences if the guards say they belong to mutually-exclusive threads. The ITE logic minimization process effectively accomplishes this as a side effect. Step 3: Earliest and Latest Computation. The CSI search for factoring additional instructions, as proposed in [8], is based on permutation of instructions within a linear schedule. A linear schedule was used for two reasons: 1. On machines like the MasPar MP-1, changing the enable status of a processing element is a modal operation requiring execution of at least one instruction. The result is that the precise cost of a potential CSI solution is a function of the number of times the execution sequence changes from one set of threads being enabled to a different set being enabled. For example, an instruction sequence with guards ordered { thread0; thread1; thread0; } is significantly more expensive to execute than { thread1; thread0; thread0; }. Without constructing the order, CSI might combine operations in such a way that execution time is actually increased by factoring! 2. Only instructions that could be physically adjacent in some sequential order could potentially be merged, thus, by searching linear orders it becomes possible to consider only potential pair factorings of instructions adjacent to each other in the schedule under consideration.
A full permutation search on linear schedules is clearly feasible only for very small blocks. Instead, the original CSI algorithm used a permutation-in-range search. Each instruction can appear only between its earliest and latest possible schedule slot, the positions of which are computed from the DAG in this step. Using ITEs, the search need not be based on a linear schedule because reason 1 above does not apply. Because guards are applied directly in each instruction, the cost of a schedule is not directly dependent on the number of times the guard being applied is changed. There are only “second order” effects that may favor one schedule over another, primarily concerning register allocation and opportunistic optimizations made by the sequencers. Step 4: Classification. The next step in the original CSI algorithm was classification of instructions for potential pairings. This was a fairly complex process involving checking opcodes, immediate operands, ordering constraints (first approximately using earliest and latest and then precisely using the DAG itself), and guard overlap. Classes are used to further prune the search for potential pairings in the linear schedules. For ITEs, there is only one opcode and no immediate values are embedded in the instruction; there are only two constants, 0 and 1, and they are accessed from registers. Neither is guard overlap a concern, since application of guards is effectively embedded within each instruction. Step 5: Theoretical Lower Bound. Using the classes and expected execution times for each type of operation, it is possible to compute a good estimate of the lower bound on minimum execution time. This estimate can be used to determine if performing the CSI search is worthwhile — i.e., if the potential for improvement in code execution time by CSI is small, then one might abort
the search. The same algorithm is used to evaluate partial schedules to aid in pruning the search. Step 6: Creation of An Initial Schedule. A viable initial linear schedule is created. In the linear schedule, the Nth operation in a schedule is either executed at the same time as the (N-1)th instruction or in the next “tick.” Step 7: Improving the Initial Linear Schedule. The original CSI technique next applied a heuristic sort to obtain an improved initial schedule. Step 8: The Search. This is the final, and most complex and time-consuming, step of the original CSI algorithm. It is a heavily pruned permutation-in-range search using pairwise-exchanges of instructions and incrementally updating evaluation of partial schedules.
4.4 The New CSI Algorithm for ITEs
As described above, the guards can be directly absorbed in the computation performed by the ITEs; it then becomes trivial to perform inter-thread CSE. In fact, inter-thread CSE for ITEs is ordinary CSE for ITEs, which also can be viewed as a weaker equivalent to the multi-level logic optimization methods published for BDDs. For ITEs, CSI can be accomplished using logic minimization on the ITE DAG: steps 3 through 8 essentially disappear! Thus, the new CSI algorithm for ITEs has only three steps: Step 1: Generate ITEs. Generate the ITEs, encoding guards for the threads directly, as discussed in section 4.2. ITEs computing the meta-state next state information, as per [11], also are generated. Step 2: Perform Logic Minimization. Perform essentially standard logic minimization. The goal is simply minimization of the total number of ITEs needed to compute all bit values stored in the block that are live at exit from the block. Thus, this is a multi-valued (each bit stored is a value) multi-level (ITEs can be nested to any depth, as opposed to 2-level AND/OR trees) logic optimization problem. A multitude of logic minimization techniques have been developed since Quine McCluskey surfaced in the 1950s; a good overview of various approaches appears in [16]. Logic optimization is not an easy problem, but there are a variety of efficient algorithms that yield good, if not optimal, results. Our current compiler uses an approach based on the improvements [17] made to [5]. Step 3: Allocate Registers and Schedule Code. Allocate registers for all intermediate values using a linear code schedule that satisfies the constraint that max live never exceeds the number of registers available. The current BitC compiler has effective implementations of steps 1 and 2, but step 3 requires further study. In many cases, it is easy to find a usable linear schedule and to allocate registers; we are unsure what to do when it is not. One possibility is cracking basic blocks to make smaller DAGs. In summary, the BitC compiler’s handling of too-complex basic blocks of ITEs can and should be significantly improved.
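As an illustration of the constraint that step 3 must satisfy, a check along the following lines could be used to reject a candidate linear schedule. This is only a sketch under assumed data structures, since the paper does not detail the BitC compiler's scheduler.

#include <stdbool.h>

/* Value i is defined by instruction i and is last used at instruction
 * last_use[i] (last_use[i] == i means the value is never used again). */
bool schedule_fits(const int last_use[], int n, int nregs)
{
    int live = 0;
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < i; j++)    /* values whose last use is here die   */
            if (last_use[j] == i)
                live--;
        if (last_use[i] > i)
            live++;                    /* the value defined here is now live  */
        if (live > nregs)
            return false;              /* max live exceeds the register count */
    }
    return true;
}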
5 Results
Although the BitC compiler is still far from generating compressed code blocks for KITE, it is sufficiently complete to allow a wide range of experiments. To facilitate such experiments, the BitC compiler is capable of generating code in a variety of formats, including one suitable for generating graphs using dot [14] (figure 2 was created in this way). From a large number of example codes, here are our preliminary observations:
Compiler speed is not a problem. Simple test programs are processed in small fractions of a second using a Linux PC.
Complexity of higher-precision arithmetic operations is a problem. Currently, all the operations appearing within a single basic block are performed by a DAG which is equivalent to a fully combinatorial hardware logic implementation. Further, the normalized form used is inefficient in representing logic involving XOR operations, such as those occurring in binary addition and multiplication. Empirically, these operations on values containing more than 12 bits generate too many ITEs. While many nanocontroller applications can be effective using no more than 12-bit precision, a better approach would be for the BitC compiler to introduce additional basic blocks — the equivalent of using multiple clock cycles in a hardware logic implementation.
Although the BitC compiler does not explicitly perform word-level optimizations, they are very effectively performed as a result of the bit-level analysis. For example, int:8 a; a=a+1; a=a-1; generates 60 ITEs, but not a single one is left live by the end of the bit-level analysis! Of course, bit-level optimizations within word computations also are recognized.
The normalized ITE form used in BitC was originally proposed not for logic minimization, but for checking equivalence of expressions: equivalent expressions generate the same ITE DAG. Although the BitC compiler currently does not support non-constant-indexed array references, perhaps this type of equivalence checking could be used for dependence analysis so that arrays could be efficiently supported despite the lack of a hardware indexing method? In the past, SIMD architectures which lacked hardware indexing (e.g., the Thinking Machines CM-2 [19]) generally scanned through all array elements to select the one indexed by a non-constant expression.
The precise basic block representation that KITE will use to encode compressed SITE code, multiple exit arcs (multiway branches), and explicit cache management is not yet finalized, so we have not yet simulated whole-system performance. Given the large number of ITEs resulting from arithmetic operations, total processing speed for a KITE system might not be greatly superior to that of a conventional uniprocessor with comparable circuit complexity. However, there is no practical method by which a uniprocessor design could implement the huge number of I/O operations necessary for control of thousands to millions of nanofabricated devices on the same chip.
6 Conclusion
Even using existing micro-scale technology, there are many arrays of devices that could profit from smart control, but for which smart control has been infeasible. It is not feasible to carry thousands of signals to a conventional processor off-chip or even on-chip; nor are conventional processor+memory combinations small enough to be placed with each device. As nanotechnology develops, there will be an increasing need for local intelligent monitoring and control of the new devices. There have been a number of exotic new computational models suggested for using nanotechnology to build computers, but none of these provides a straightforwardly programmable solution to the nanocontroller problem. It is well known that a SIMD architecture can dramatically lower the circuit complexity per computational node. It also was known that, using meta-state conversion (MSC) [11] and common subexpression induction (CSI) [8], SIMD hardware can efficiently execute MIMD programs. The primary difficulty was the complexity of the CSI algorithm and interactions between instruction selection and the effectiveness of CSI. The primary contribution of this paper is the recognition that, by using the bit-level ITE (if-then-else) construct, circuit complexity is reduced and compiler analysis, especially CSI, is dramatically simplified. The ITE representation also facilitates use of existing hardware-oriented logic minimization techniques to simplify the final code. Preliminary measurements of performance of the BitC compiler clearly demonstrate that low-precision integer control can be implemented efficiently using MSC with an ITE-based target architecture, such as KITE. The circuit complexity of a KITE nanoprocessor is essentially that of a small register file (1-bit SRAM/DRAM or I/O cells plus a possibly shared address decoder), three staging registers (each 1 bit), and a 1-of-2 multiplexor (possibly implemented more like two connected tri-state outputs) with all other resources shared by many nanoprocessors. Thus, given programs that need only a few bytes of local data, complexity can be on the order of 100 transistors per nanocontroller. This design is less suited for general-purpose parallel computing; a somewhat more complex design, with a more powerful function unit, could yield higher performance per unit circuit complexity. There is much more work to be done to optimize performance of the language, compiler, and details of the KITE architecture and hardware implementation. Much will depend on what control new nanotechnology devices need. This paper marks the start of our research in developing practical nanocontrollers, not the end.
References
[1] Nael B. Abu-Ghazaleh, Shared Control Multiprocessors - A Paradigm for Supporting Control Parallelism on SIMD-like Architectures, PhD Dissertation, University of Cincinnati, July 1997.
[2] K. Batcher, “STARAN Parallel Processor System Hardware,” Proc. of the 1974 National Computer Conference, AFIPS Conference Proceedings, vol. 43, pp. 405410. [3] K. Batcher, “Architecture of a Massively Parallel Processor,” Proc. of IEEE/ACM International Conference on Computer Architecture, 1980, pp. 168-173. [4] T. Blank, “The MasPar MP-1 Architecture,” 35th IEEE Computer Society International Conference (COMPCON), February 1990, pp. 20-24. [5] R. E. Bryant, “Graph-Based Algorithms for Boolean Function Manipulation,” IEEE Transactions on Computers, vol. C35, no. 8, pp. 677-691, 1986. [6] R. Davis and D. Thomas, “Systolic Array Chip Matches the Pace of High-Speed Processing,” reprint from Electronic Design, October 31, 1984. [7] H. G. Dietz and W. E. Cohen, “A Control-Parallel Programming Model Implemented On SIMD Hardware,” Languages and Compilers for Parallel Computing, edited by U. Banerjee, D. Gelernter, A. Nicolau, and D. Padua, Springer-Verlag, New York, New York, pp. 311-325, 1993. [8] H. G. Dietz, “Common Subexpression Induction,” Proceedings of the 1992 International Conference on Parallel Processing, Saint Charles, Illinois, August 1992, vol. II, pp. 174-182. [9] H. G. Dietz, “Speculative Predication Across Arbitrary Interprocedural Control Flow,” Languages and Compilers for Parallel Computing, edited by L. Carter and J. Ferrante, Springer-Verlag, New York, New York, pp. 432-446, 2000. [10] H. G. Dietz and R. J. Fisher, “Compiling for SIMD Within A Register,” Languages and Compilers for Parallel Computing, edited by S. Chatterjee, J,. F. Prins, L. Carter, J. Ferrante, Z. Li, D. Sehr, and P-C Yew, Springer-Verlag, New York, New York, pp. 290-304, 1999. [11] H. G. Dietz and G. Krishnamurthy, “Meta-State Conversion,” Proceedings of the 1993 International Conference on Parallel Processing, vol. II, pp. 47-56, Saint Charles, Illinois, August 1993. [12] J.R. Ellis, Bulldog: A compiler for VLIW Architectures, ACM Doctoral Dissertation Award, MIT Press, 1985. [13] R. F. Erbacher, Implementing an Interactive Visualization System on a SIMD Architecture, University of Massachusetts at Lowell Technical Report, Lowell, MA 01854. [14] E. Ganser, E. Koutsofios, and S. North, Drawing graphs with dot (dot user’s manual), ATT Research, February 4, 2002. [15] C. Gropl, Binary Decision Diagrams for Random Boolean Functions, Ph.D. Dissertation, Humboldt University, Berlin, Germany, May 1999. [16] G. D. Hachtel and F. Somenzi, Logic Synthesis and Verification Algorithms, Kluwer Academic Publishers, June 1996. [17] K. Karplus, Representing Boolean Functions with If-Then-Else DAGs, Technical Report UCSC-CRL-88-28, University of California at Santa Cruz, Nov. 1, 1988. [18] M. Nilsson and H. Tanaka, “MIMD Execution by SIMD Computers,” Journal of Information Processing, Information Processing Society of Japan, vol. 13, no. 1, 1990, pp. 58-61. [19] Thinking Machines Corporation, Connection Machine Model CM-2 Technical Summary, Version 5.1, May 1989. [20] P.A. Wilsey, D.A. Hensgen, C.E. Slusher, N.B. Abu-Ghazaleh, and D.Y. Hollinden, “Exploiting SIMD Computers for Mutant Program Execution,” Technical Report No. TR 133-11-91, Department of Electrical and Computer Engineering, University of Cincinnati, Cincinnati, Ohio, November 1991.
Increasing the Accuracy of Shape and Safety Analysis of Pointer-Based Codes
Pedro C. Diniz
University of Southern California / Information Sciences Institute
4676 Admiralty Way, Suite 1001, Marina del Rey, California, 90292
[email protected]
Abstract. Analyses and transformations of programs that manipulate pointer-based data structures rely on understanding the topological relationships between the nodes i.e., the overall shape of the data structures. Current static shape analyses either assume correctness of the code or trade-off accuracy for analysis performance, leading in most cases to shape information that is of little use for practical purposes. This paper introduces four novel analysis techniques, namely structural fields, scan loops, assumed/verified shape properties and context tracing. Analysis of structural fields allows compilers to uncover node configurations that play key roles in the data structure. Analysis of scan loops allows compilers to establish accurate relationship between pointer variables while traversing the data structures. Assumed/verified property analysis derives sufficient shape properties that guarantee termination of scan loops. These properties must then be verified during shape analysis for consistency. Context tracing allows the analyses to isolate data structure nodes by tracing relationships between pointer variables along control-flow paths in the program. We believe that future static shape and safety analysis algorithms will have to include some if not all of these techniques to attain a high level of accuracy. In this paper we illustrate the application of the proposed techniques to codes that build (correctly as well as incorrectly) data structures that are beyond the reach of current approaches.
1 Introduction
Codes that directly manipulate pointers can construct arbitrarily sophisticated data structures. To analyze and transform such codes compilers must understand the topological relationships between the nodes of these structures. For example, nodes might be organized as trees, directed acyclic graphs (DAGs) or general cyclic graphs. Even when the overall structure has cycles, it might be important to understand that the induced data structure topology traversing only specific fields is acyclic or even a tree. Statically uncovering the shape of pointer-based data structures is an extremely difficult problem. Current approaches interpret the statements in the program (ignoring safety issues) against an abstract representation of the data
structure. As pointer-based data structures have no predefined dimensions, compilers must summarize (or abstract) many nodes into a finite set of summary nodes in their internal representation of the data structure. The need to summarize nodes and symbolic relationships between pointers that manipulate the data structures leads to conservative, and often incorrect, determinations of the shape of data structures, e.g., reporting that a data structure has a cycle when in reality it has not. We believe the key to address many of the shortcomings of current shape analysis algorithms is to exploit the information that can be derived from both the predicates of conditional statements and from looping constructs. These constructs, as the examples in this paper illustrate, can help compilers to derive accurate symbolic relationships between pointer variables. For example, the while loop code below (left) scans a data structure along the next field. If the body of the loop executes, on exit we are guaranteed that {t != NULL} holds, but more importantly that {p = t->next}. This fact is critical to verify the correct insertion of an element in a linked-list. The code below (right) corresponds to an insertion in a doubly-linked list where the predicate clearly identifies the node denoted by p as the last node in the list by testing its next field. The node with the configuration {next = NULL} therefore plays the important role of signaling the list’s end. The loop code also reveals that a sufficient¹ condition for its termination is that the data structure be acyclic along next. An analysis algorithm can operate under this acyclicity assumption to ascertain termination and properties of other constructs and later verify the original acyclicity assumption.
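The two fragments being referred to can be pictured roughly as follows; the node layout, loop condition, and names are assumptions consistent with the properties stated above, not the paper's original listings.

#include <stddef.h>

typedef struct node { struct node *next, *prev; int key; } node;

/* Left: a scan loop along next.  If the body executes at least once, then on
 * exit t != NULL and p == t->next. */
void scan(node *p, node **tp, node **pp)
{
    node *t = NULL;
    while (p != NULL && p->key != 0) {
        t = p;
        p = p->next;
    }
    *tp = t;
    *pp = p;
}

/* Right: insertion into a doubly-linked list, guarded by a test of p->next;
 * the configuration {next = NULL} identifies p as the last node. */
void insert_last(node *p, node *e)
{
    if (p->next == NULL) {
        e->prev = p;
        e->next = NULL;
        p->next = e;
    }
}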
These examples illustrate that programmers fundamentally encode “state” in their programs via conditionals and loop constructs. Many loop constructs are used to scan the structures to position pointer variables at nodes that should be modified. Conditional statements define which operations should be performed. The fact that programmers use them to encode “state” and reason about the relative position of pointer variables and consequently nodes in the data structure is a clear indication that shape and safety analysis algorithms should exploit the information conveyed in these statements.
¹ Although not a necessary condition, as the programmer might insert sentinel values that prevent the code from ever reaching a section of the data structure with a cycle.
This paper presents a set of symbolic analysis techniques that we believe will extend the reach of current static shape and safety analysis algorithms for codes that manipulate pointer-based data structures, namely:
Structural Fields: Uncovering value configurations or “states” of nodes that potentially play key roles in the data structure.
Scan Loops: Symbolic execution of loops that only traverse (but do not modify) data structures. These loops will enable the extraction of all possible bindings of pointers to nodes in the abstract shape representation using the relationships imposed by the loop statements.
Assumed/verified Properties: Derivation of sufficient shape properties that guarantee termination of scan loops. These properties must be verified during shape analysis.
Context Tracing: The compiler can propagate contexts, i.e., the set of bindings of variables to nodes in the data structure, throughout the program and use conditionals to prune the sets of nodes pointer variables can point to at particular program points.
While the integration and effective exploitation of the knowledge gained by the techniques presented in this paper with actual shape and safety analysis algorithms are beyond the scope of this paper, this paper presents a set of techniques we believe will enhance the applicability and accuracy of these analysis algorithms. For example, identifying nodes with selected configurations can help summarization and materialization algorithms to retain particular nodes of the data structure. Retaining precise symbolic relationships between pointer variables can also allow materialization algorithms to preserve structural invariants. This paper is organized as follows. The next section describes a specific example that illustrates the potential of the proposed approach. Section 3 describes the set of basic symbolic analyses our algorithm relies on. We present experimental evidence of the success of this approach for both correct and incorrect codes in section 4. We survey related work in section 5 and then conclude.
2 Example
We now illustrate how a compiler can use the techniques presented in this paper to increase the accuracy of shape analysis and safety information for codes that manipulate sophisticated pointer-based data structures. This code, depicted in figure 1, builds a data structure by the successive invocation of the insert function. In this code the function new_node(int) allocates through the malloc function a node that is unaliased with any of the nodes in the data structure. In this example we assume an initial binding of the node argument to a single node with both link and next pointer fields equal to NULL. The code starts by scanning (via what we call a scan loop) the data structure along the link field searching for the appropriate insertion point. Next it allocates the storage for a new node and inserts it “forward of” the node pointed to by b along the next field. It then conditionally links the node pointed to by
Fig. 1. Pointer-Based Data Structure Insertion C Code
Fig. 2. Skip-List Pointer-Based Data Structure Example
b to the node denoted by b->next->next along the link field. This relinking step effectively splits a long sequence of nodes into two shorter sequences along the link field and resets the value of the nchild field to 0. Figure 2 illustrates an instance with 8 nodes of a structure this code builds. To understand the topology of the created data structure a compiler must be able to deduce the following:
1. The nodes are linked (linearly) along the next field.
2. At each insertion, the code increments the value of the nchild field starting with a 0 value.
3. Nodes with nchild = 1 and link == NULL have one child node along next.
4. Nodes with nchild = 1 and link != NULL have 2 child nodes along next. The nodes satisfy the structural identity link = next.next.next.
5. When the field nchild transitions from 1 to 2 the node has 3 child nodes along next, all of which have nchild set to 0.
6. When the condition in line 13 evaluates to true the code resets nchild to 0 while relinking the node along link. This link field jumps over 2 nodes along next and the node satisfies the structural identity link = next.next.
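A reconstruction of the insertion code, based only on the prose description above, is sketched below; field names follow the text, but the scan condition, the helper, and the statement ordering are assumptions and do not match the line numbers cited in this section.

#include <stdlib.h>

typedef struct node { struct node *next, *link; int nchild, val; } node;

/* assumed helper: malloc a fresh, unaliased node */
static node *new_node(int v)
{
    node *t = malloc(sizeof *t);
    t->next = t->link = NULL;
    t->nchild = 0;
    t->val = v;
    return t;
}

void insert(node *n, int v)
{
    node *b = n;
    while (b->link != NULL)          /* scan loop along the link field        */
        b = b->link;
    node *t = new_node(v);
    t->next = b->next;               /* insert "forward of" b along next      */
    b->next = t;
    b->nchild = b->nchild + 1;
    if (b->nchild == 2) {            /* the conditional relinking step        */
        b->link = b->next->next;     /* link jumps over nodes along next      */
        b->nchild = 0;               /* reset nchild, as described in fact 6  */
    }
}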
Using these facts, and more importantly retaining them in the internal abstract representation for the shape analysis, a compiler can recognize that the data structure is both acyclic along next and link but following both fields leads to non-disjoint sets of nodes. Based on the structural properties the compiler can
prove the while loop always terminates and that the statements in lines 14 and 15 are always safe. The key to allow an analysis to deduce the facts enumerated above lies in its ability to track the various “configurations” or combinations of values for nchild, link and next, in effect building a Finite-State-Machine (FSM) of transitions for the various configurations and their properties. To accomplish this the analysis must be able to trace all possible mapping contexts of b to nodes of the data structure and record, after the data structure has been manipulated, the configuration and identities the newly modified nodes meet. In this process the compiler uses scan loop analysis to recognize which sets of nodes in the internal shape analysis abstraction the various variables can point to. For this example the compiler determines that for the contexts reaching line 10 b must point to nodes in the abstraction following only the link field and starting from the binding of the variable n. The compiler then uses the bindings to isolate and capture the modifications to the data structure. These updates to the internal shape representation generate other possible contexts the compiler needs to examine in subsequent iterations. For this example the compiler can exhaustively prove that for nodes pointed to by b that satisfy b->link != NULL, only two contexts can reach line 10, as shown below. In this illustration the field values denote node identifiers in the internal shape analysis abstract representation and structural identities are denoted by path expressions.
Using this data the compiler can propagate only the matching context through the conditional section of the code in lines 14 through 16, leading to the creation of a new context. The compiler reaches a point where the bindings of contexts to the variables in the program are fixed. Since in any case the symbolic execution of these contexts preserves both acyclicity along next and link, the compiler can therefore ensure termination of the while loop and preserve the abstract information regarding the data structure topology. For the example in figure 1 a possible abstract representation using a storage graph approach is depicted in figure 3. This representation is complemented with several boolean functions per node indicating acyclicity and field interdependences. For example for node s1 the predicate link = next.next.next holds. This example suggests the inclusion of the knowledge derived from the basic techniques described in this paper into a shape and safety analysis algorithm. First, the analysis identifies which fields of the nodes of the data structure have a potential for affecting the topology. Next the analysis tracks the values of the various fields exhaustively across all possible execution paths using symbolic contexts. These symbolic contexts make references to the internal shape representation to capture acyclicity and structural identities. Finally, the algorithm
Fig. 3. Shape Graph Representation for Code in Figure 1
assumes properties about its data structure that guarantee termination of loop constructs. A compiler must later verify the properties uncovered during shape analysis to ensure they logically imply the properties assumed to derive them.
3 Basic Analyses
We now describe the set of analysis techniques and begin by outlining the set of assumptions they rely on. As with other approaches, we assume that the input program is type safe. For simplicity we also enforce that pointer handles hold addresses of user-defined data structure types and not the address of fields within. We do not handle pointer arithmetic nor, for tractability, arrays of pointers. All the data structures are built with heap-allocated nodes, each of which has a discrete set of field names. For simplicity we also assume the data structures to be constructed by the repeated invocation of a set of identifiable functions.²
3.1 Structural Fields and Value Analysis
Certain fields of nodes of the data structure contain data that is intimately related to its structural identities. This case reveals itself when predicates that guard code that modifies the data structures test specific values for these fields. As such we define a field of a type declaration as structural if there exists at least one conditional statement in which the value of the field is tested, and the corresponding conditional statement guards the execution of a straight-line code segment that (possibly) modifies the data structure. These conditions capture most of the relevant pointer fields of a data structure that are actively manipulated in the construction of the data structure. For non-pointer fields we require that a structural field must have all of its predicates in the form ptr->field == c, where c is an integer constant.
² This constraint can be lifted in most cases by performing a call-graph analysis and identifying the functions that allocate and manipulate the data structures.
In many cases it is also useful, and possible, to determine statically the range of values a given structural field assumes. While for pointer fields we are only concerned with the two values {nil, non-nil}, for numeric integer fields we are interested in determining the set of values, and possibly the sequence(s) in which they are assumed. In figure 4 we outline an algorithm that tracks the set of values for a given integer field and attempts to establish the “cycles” of sequences of values it assumes, in essence trying to uncover the FSM of the values of each field. As with the symbolic analysis technique described in a later section, this algorithm assumes all modifications to fields in the data structures are performed via variables local to each procedure that manipulates the data structure. The algorithm uses several auxiliary functions, namely:
The first function computes the control-flow region of statements in which the local variable is used to update a numerical field but is not redefined. Such a region can encompass many control-flow paths that do not update the variable or any of the numerical fields of interest. In these cases the control flow can be contracted for algorithmic performance.
The second function computes a symbolic transfer function along the control-flow paths in the region for a given variable and field. This function symbolically evaluates the statements in the region, folding relevant conditional statements into the symbolic transfer function. As a summarization result this function returns a tuple with the following elements: an increment, a limit, and a reset value. When unable to derive constant values for the elements of this tuple, the algorithm simply returns an unknown symbolic value. When a given element is absent, however, the algorithm returns an empty indication. In its simplest implementation this function evaluates statements such as var->field++ and predicates such as {var->field == c} and conservatively returns unknown when the updates and tests involve computations using temporary variables.
The extractFSM function determines the actual set of values the field can assume given the input set of symbolic transfer functions that update that particular field and the initial field values. The extracted FSM, other than the values and corresponding transitions, also labels the values of the field as persistent if across invocations of the code the field can retain that specific value. Without accurate data the FSM consists of a simple node with the value unknown. Again, in its simplest implementation this function can enforce that all transfer functions have the same increment value and non-empty limit and reset values. A transfer function without a limit means that there is possibly a control flow path in the procedure that can increment the value of the field without bound. In this case the extractFSM function returns an FSM with unknown elements.
Fig. 4. Structural Field Values Analysis Algorithm
For the example in section 2 the algorithm outlined above would uncover a single-path region defined by the statements in lines 12 through 17. For this region the algorithm could extract a single symbolic transfer function for the nchild field, denoted by nchild ← nchild + 1; if (nchild == 2) nchild ← 0; else nchild ← nchild, resulting in a tuple with incr = 1, limit = 2, reset = 0. Using this data, and with init(nchild) = {0}, the algorithm would uncover an FSM with 3 states corresponding to the values {0,1,2}, of which {0,1} are persistent, and with transitions 0 → 1, 1 → 2, 2 → 0.
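A small sketch of the tuple summarization and FSM enumeration just described, under the simplifying assumptions the text mentions (a single increment, constant limit and reset); the names are illustrative rather than the paper's.

typedef struct { int incr, limit, reset; } xferfn;   /* (increment, limit, reset) */

/* Enumerate the values a field can take starting from init, following the
 * "increment until the limit is reached, then reset" pattern; returns the
 * number of distinct states found. */
int extract_fsm(xferfn f, int init, int states[], int max_states)
{
    int n = 0, v = init;
    while (n < max_states) {
        states[n++] = v;
        int next = (v == f.limit) ? f.reset : v + f.incr;
        for (int k = 0; k < n; k++)      /* stop once the value cycle closes */
            if (states[k] == next)
                return n;
        v = next;
    }
    return n;
}

/* For the nchild example, extract_fsm with incr = 1, limit = 2, reset = 0 and
 * init = 0 yields the three states {0, 1, 2} with transitions 0 -> 1, 1 -> 2,
 * 2 -> 0, matching the worked example above. */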
3.2 Node Configurations
It is often the case that nodes with different configurations in terms of nil and non-nil values for pointer fields or simple key numerical values occupy key places in the data structure. For this reason we define the configuration of a node as a specific combination of values of pointer and structural fields. We rely on the analysis in section 3.1 to define the set of persistent values for each field and hence define the maximum number of configurations. If the value analysis algorithm is incapable of uncovering a small number of values for a given non-pointer field the shape analysis algorithm ignores the corresponding field. This leads to a reduced number of configurations but possibly to inaccurate shape information.
3.3 Scan Loop Analysis
The body of a scan loop can only contain simple assignment statements of scalar variables (i.e., non-pointer variables) or pointer assignments of the form var = var->field. A scan loop can have nested conditional statements and/or break or continue statements, as illustrated below. One can trivially extend the definition of scan loops to include calls to functions such as printf or any functions whose call-chains include functions that do not modify the data structure.
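A hypothetical scan loop of this form (the search condition and names are assumptions) illustrates the definition: it only reads the structure, advances a pointer with var = var->field assignments, and may exit early from a nested conditional.

#include <stddef.h>

typedef struct node { struct node *next; int key; } node;

node *scan_for(node *p, int key, node **trail)
{
    node *t = NULL;
    while (p->next != NULL) {        /* only the first header dereference     */
        if (p->key == key)           /*   is potentially unsafe (section 3.4) */
            break;                   /* early exit is allowed in a scan loop  */
        t = p;
        p = p->next;                 /* var = var->field                      */
    }
    *trail = t;                      /* after at least one full iteration,    */
    return p;                        /*   t != NULL and p == t->next hold     */
}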
Scan loops are pervasive in programs that construct sophisticated pointer-based data structures. Programmers typically use scan loops to position a small number of pointer handles into the data structures before performing updates. For example the sparse pointer-based code available from the McCat Group [6] has 17 scan loops with at most 2 pointer variables. All these 17 loops can be statically analyzed for safety and termination as described in the next section. The fundamental property of scan loops is that they do not modify the data structure. This property allows the compiler to summarize their effects and “execute” them by simply matching the path expressions extracted from symbolic analysis against the abstract representation of the data structure. From the viewpoint of abstract interpretation, scan loops behave as multi-valued operations. We use symbolic composition techniques to derive path expressions that reflect the symbolic bindings of pointer variables in scan loops. The algorithm derives symbolic path expressions for all possible entry and exit paths of the loop for the case of zero iterations and computes symbolic closed-form expressions for one or more loop iterations. The number of these expressions is exponential in the nesting depth of the loop body and linear in the number of loop exit points. We use conservative symbolic path expression merge functions to limit the number of bindings for each pointer variable to 1. For the example above our path expression extraction algorithm derives a binding for p in terms of p_in, where p_in represents the value of p on entry to the loop, and derives a separate binding for the zero-iteration case. More important, however, is that the symbolic analysis can uncover the precise relationship between p and t as {p = t->next} on exit for the non-zero-iteration case.
3.4 Assumed Properties for Termination
Using the symbolic analysis of scan loops it is also possible to develop an algorithm that extracts conditions that guarantee the termination and safety of scan loops. We examine all the zero-iteration execution paths for initial safety conditions. Next we use inductive reasoning to validate the safety of a generic iteration based on safety assumptions for the previous iteration. To reason about the safety requirements of an iteration we extract the set of non-nil predicates each statement requires. In the case of conditionals we can also use the results of the test to generate new predicates that can ensure the safety of other statements. For the code sample above the algorithm can derive that for the safe execution of the entire loop only the predicate {p_in != NULL} needs to hold at the loop entry. The algorithm can also determine that only the dereference in the loop header predicate {p->next != NULL} in the first iteration is potentially unsafe. Subsequent iterations use a new value of p assigned to the previous p->next expression, which by control flow has to be non-null. Finally the algorithm derives, whenever possible, shape properties that are guaranteed to imply the termination of the loop for all possible control flow paths. For the example above the property Acyclic(next) would guarantee termination of the loop. As a by-product of the symbolic evaluation of each statement in the loop, a compiler can extract a termination condition by deriving a symbolic transfer
function of the scan loop. This transfer function takes into account copies through temporary local variables and exposes which fields the loop uses for advancing through the data structures. On a typical null-checking termination the algorithm derives conservative acyclicity properties along all of the traversed fields. For other termination conditions such as p->next != p (which indicates a self-loop-terminated structure), the algorithm could hint to the shape analysis algorithm that a node in the abstraction with a self-loop should be prevented from being summarized with other nodes.
3.5 Domain of Applicability - Segmented Codes
The techniques presented here are geared towards codes that are segmented, i.e., they consist of a sequence of assignment or conditional statements and scan loops. Segmented codes allow our approach to handle the codes as if they were a sequence of conditional codes with straight-line sequences of statements, some of which are multi-valued, as is the case for scan loops. Fortunately codes that manipulate pointer-based data structures tend to be segmented. This is not surprising as programmers naturally structure their codes in phases that correspond to logical steps of the insertion/removal of nodes. Furthermore, both procedural abstraction and the object-oriented paradigm promote this model.
4 Application Examples
We now describe the application of the base analysis techniques and suggest a shape representation using the information gathered by our techniques for the jump-list C code presented in section 2. We present results for both correct and “incorrect” constructions of the data structure. We assume the code repeatedly invokes the function insert and that the initial value for its argument n points to a node with both nil link and next fields. For the correct construction code the various techniques presented here would uncover the information presented in figure 5. We manually performed a shape analysis algorithm (not presented here) that exploits this information for the materialization and abstraction steps and were able to capture the abstract shape graph presented in Section 2 (figure 3). Because of the need to trace all of the execution context the algorithmic complexity of this approach is expected to be high. In this experiment we have generated and traced 53 contexts and required 8 iterations to reach a fixed-point solution. We now examine the algorithm’s behavior for the trivial case where the programmer would simply remove the conditional statement in line 13. In this case a shape analysis algorithm would immediately recognize that for the first analysis context the dereferencing of the statement in line 14 would definitely generate an execution error. We now examine the case where the programmer has swapped link with next in line 14 and instead used the sequence of instructions below.
Fig. 5. Validation Analysis for Skip-List Example
Fig. 6. Unintended Skip-List Construction Example
With this code a compiler can recognize that, on some invocation, the program creates a cycle in the data structure along the link field. More surprising is that this true cycle along link would not cause the program to go into an infinite loop! Rather, the program would construct the ASG as outlined in figure 6 and with the properties shown. The analyses described in this paper would allow a shape analysis algorithm to realize this fact: while tracing the set of valid contexts it would verify that b = n.(link)+ never visits the nodes involved in the cycle but only the set of nodes represented in the abstraction by a summary node. As such the compiler could verify that the scan loop assumption is still valid.
5 Related Work
We focus on related work for shape analysis of C programs both automated and with programmer assistance. We do not address work in pointer aliasing analysis (see e.g., [3, 15]) or semi-automated program verification (see e.g., [1, 11]).
5.1 Shape Analysis
The methods described by other researchers differ in terms of the precision in which nodes and their relationships are represented in abstract-storage-graphs (ASGs)³. Larus and Hilfinger [10] describe, possibly, the first automated shape analysis algorithm. This approach was refined by Zadeck et al. [2] by aggregating only nodes generated from the same heap allocation site. Plevyak et al. [12] address the issue of cycles by representing simple invariants between pointer fields. Sagiv et al. [13, 14] describe a series of refinements to the naming scheme for nodes in the abstract storage as well as more sophisticated materialization and summarization operations. This refinement allows their approach to handle list-reversal type of operations and doubly-linked lists assuming those properties were known a-priori, that is before the reversal operations would be executed. Ghiya and Hendren [6] describe an approach that sacrifices precision for time and space complexity. They describe an interprocedural dataflow analysis in which for every pair of pointer variables, pointer handles, the analysis keeps track of connectivity, direction and acyclicity. Even in cases where the analysis yields incorrect shape, the information is still useful for application of transformations other than parallelism. Corbera et al. [4] have proposed a representation by allowing each statement in the program to be associated with more than one ASG, using invariant and property predicates at each program point to retain connectivity, direction and cycle information. Such an expanded representation could
³ For an example of a storeless alias approach see [5], where pointer relations are represented solely via path expressions.
use the wealth of symbolic information the analysis proposed in this paper has to offer to maintain the accuracy and reduce the possibly large space overhead multiple ASGs per program execution point imply.
5.2 Shape Specification and Verification
Hummel et al. [7, 8] describe an approach in which the programmer specifies, in the ADDS and ASAP languages, a set of data structure properties using direction and orthogonality attributes as well as structural invariants. The compiler is left with the task of checking if any of the properties is violated at every point of the execution of the program. The expressive power of both the ADDS and ASAP languages is limited to the specification of uniform properties that must hold for all nodes of a given data structure. It is therefore not possible to isolate specific nodes of the data structure (assumed connected) and define properties that are different from properties of other nodes, as is the case of a cycle-terminated linked list. This expressiveness constraint is due to decidability limitations of theorem proving in the presence of both universal and existential quantifiers. Kuncak et al. [9] describe a language that allows programmers to describe the referencing relationships of heap objects. The relationships determine the role of the various nodes of the data structures and allow an analysis tool to verify if the nodes comply with the legal alias relationships. Programmers explicitly augment the original code with role specifications at particular points in the code, in effect indicating to the analysis tool precisely where the role specification should be tested. This point-specific insertion is equivalent to choosing where to perform abstraction of the shape analysis, in itself traditionally a hard problem.
6 Conclusion
In this paper we described four novel analysis techniques, namely, structural fields, scan loops, assumed/verified shape properties and context tracing to attain a high level of accuracy regarding the relationship between pointer handles that traverse pointer-based data structures. We have illustrated the application of the proposed analysis techniques to codes that build (correctly as well as incorrectly) sophisticated data structures that are beyond the reach of current approaches. This paper supports the thesis that compiler analysis algorithms must uncover and exploit information derived from conditional statements in the form of the techniques presented here if they are to substantially increase their accuracy.
References
[1] T. Ball, R. Majumdar, T. Millstein and S. Rajamani. Automatic Predicate Abstraction of C Programs. In Proc. of the ACM Conference on Programming Language Design and Implementation, pages 203–213, ACM Press, New York, NY, June 2001.
[2] D. Chase, M. Wegman and F. K. Zadeck. Analysis of Pointers and Structures. In Proc. of the ACM Conference on Program Language Design and Implementation, pages 296–310, ACM Press, New York, NY, June 1990.
[3] J. Choi, M. Burke, and P. Carini. Efficient Flow-Sensitive Interprocedural Computation of Pointer-induced Aliases and Side Effects. In Proc. of the Twentieth Annual ACM Symp. on the Principles of Programming Languages, ACM Press, pages 232–245, New York, NY, January 1993. [4] F. Corbera, R. Asenjo, and E. Zapata. Accurate Shape Analysis for Recursive Data Structures. In Proc. of the Thirteenth Workshop on Languages and Compilers for Parallel Computing, August 2000. [5] A. Deutsh. Interprocedural may-alias analysis for pointers: Beyond k-limiting. In Proc. of the ACM Conference on Program Language Design and Implementation, pages 230–241, ACM Press, New York, NY, June 1994. [6] R. Ghiya and L. Hendren. Is it a Tree, a DAG, or a Cyclic Graph? a Shape Analysis for Heap-directed Pointers in C. In Proc. of the Twenty-third Annual ACM Symp. on the Principles of Programming Languages, ACM Press, pages 1–15, New York, NY, January 1996. [7] L. Hendren, J. Hummel, and A. Nicolau. A General Data Dependence Test for Dynamic, Pointer-based Data Structures. In Proc. of the ACM Conference on Program Language Design and Implementation, ACM Press, pages 218–229, New York, NY, June 1994. [8] J.Hummel, L. Hendren, and A. Nicolau. A language for conveying the aliasing properties of pointer-based data structures. In Proc. of the 8th International Parallel Processing Symposium, pages 218–229, IEEE Computer Society Press, Los Alamitos, CA, April 1994. [9] V. Kuncak, P. Lam, and M. Rinard. Role Analysis. In Proc. of the Twenty-nineth Annual ACM Symp. on the Principles of Programming Languages, ACM Press, pages 17–32, New York, NY, 2002. [10] J. Larus and P. Hilfinger. Detecting Conflicts between Structure Accesses. In Proc. of the ACM Conference on Program Language Design and Implementation, ACM Press, pages 21–34, New York, NY, June 1988. [11] G. Necula and P. Lee. The Design and Implementation of a Certifying Compiler. In Proc. of the ACM Conference on Programming Language Design and Implementation, ACM Press, pages 333–344, New York, NY, 1998. [12] J. Plevyak, V. Karamcheti, and A. Chien. Analysis of Dynamic Structures for Efficient Parallel Execution. In Proc. of the Sixth Workshop on Languages and Compilers for Parallel Computing, Published as Lecture Notes in Computer Science (LNCS) Vol. 768, pages 37–57. Springer-Verlag, 1993. [13] M. Sagiv, T. Reps and R. Wilhelm. Parametric Shape Analysis via 3-valued Logic. In Proc. of the Twenty-sixth Annual ACM Symp. on the Principles of Programming Languages, ACM Press, New York, NY, January 1999. [14] M. Sagiv, T. Reps, and R. Wilhelm. Solving Shape-Analysis Problems in Languages with Destructive Updating. ACM Transactions on Programming Languages and Systems, 20(1): 1–50, January 1998. [15] R. Wilson and M. Lam. Efficient Context-Sensitive Pointer Analysis for C Programs. In Proc. of the ACM Conference on Programming Language Design and Implementation, ACM Press, pages 1–12, New York, NY, June 1995.
Slice-Hoisting for Array-Size Inference in MATLAB
Arun Chauhan and Ken Kennedy
Department of Computer Science, Rice University, Houston, TX 77005
{achauhan,ken}@cs.rice.edu
Abstract. Inferring variable types precisely is very important to be able to compile MATLAB libraries effectively in the context of the telescoping languages framework being developed at Rice. Past studies have demonstrated the value of type information in optimizing MATLAB [4]. The variable types are inferred through a static approach based on writing propositional constraints on program statements [11]. The static approach has certain limitations with respect to inferring array-sizes. Imprecise inference of array-sizes can have a drastic effect on the performance of the generated code, especially in those cases where arrays are resized dynamically. The impact of appropriate array allocation is also borne out of earlier studies [3]. This paper presents a new approach to inferring array-sizes, called slice-hoisting. The approach is based on simple code transformations and is easy to implement in a practical compiler. Experimental evaluation shows that slice-hoisting, along with the constraints-based static algorithm, can result in a very high level of precision in inferring MATLAB array sizes.
1 Introduction
There is a growing trend among the scientific and engineering community of computer users to use high-level domain-specific languages, such as MATLAB®, R, Python, Perl, etc. Unfortunately, these languages continue to be used primarily for prototyping. The final code is still written in lower-level languages, such as C or Fortran. This has profound implications for programmers’ productivity. The reason behind the huge popularity of domain-specific languages is the ease of programming afforded by these languages. We refer to these languages as high-level scripting languages. The users programming in scripting languages are usually analytically oriented and have no trouble in writing high-level algorithms to solve their computational problems. These languages provide direct mappings of high-level operations onto primitive operations or domain-specific libraries. Unfortunately, the compilation and the runtime systems of high-level scripting languages often fall far short of users’ requirements. As a result, the users are forced to rewrite their applications in lower-level languages. We and our colleagues at Rice have been developing a strategy, called telescoping languages, to address the issue of compiling high-level scripting languages
MATLAB is a registered trademark of MathWorks Inc.
Fig. 1. The telescoping languages strategy
efficiently and effectively [9]. The idea is to perform extensive offline processing of libraries that constitute the primary computation sites in high-level scripting languages. The end-user programs, called the scripts, are passed through an efficient script compiler that has access to the knowledge-base built by the library compiler. The script compiler utilizes this knowledge-base to rapidly compile end-user scripts into effective object code. The strategy is outlined in Fig. 1. This strategy enables extending the language in a hierarchical manner by repeating the process of library building, which is the origin of the term “telescoping”. A part of the current effort in telescoping languages is towards compiling MATLAB. The telescoping languages strategy envisions generating the output code in an intermediate language, such as C or Fortran. Emitting code in an intermediate low-level language, rather than the binary, has the advantage of leveraging the excellent C or Fortran compilers that are often available from vendors for specific platforms. One performance related problem that arises in compiling high-level scripting languages is that of inferring variable types. MATLAB is a weakly typed language, treating every variable as an array and performing runtime resolution of the actual type of the variable. This imposes an enormous performance overhead. It is possible to infer variable types statically, thereby eliminating this overhead [11]. Knowing precise variable types can improve code generation by emitting code that uses the primitive operations in the target language, whenever possible—primitive operations in lower-level languages can be orders of magnitude faster than calling library functions that operate on a generic userdefined type. Earlier studies have found type-based optimizations to be highly rewarding [4]. In order to demonstrate the effect that type inference can have on code generation, consider the Fig. 2 that shows the performance improvements in a Digital Signal Processing procedure, called jakes, after it has been translated into Fortran based on the inferred types from the original MATLAB code. No other optimization was performed on this code. There are no results for MATLAB 5.3
Fig. 2. Value of type inference
There are no results for MATLAB 5.3 on the Apple PowerBook because version 5.3 is not available on the PowerBook. The MATLAB times have been obtained under the MATLAB interpreter. The running times for the stand-alone version that is obtained by converting the MATLAB code into equivalent C using the MathWorks mcc compiler are even higher than the interpreted version (possibly because of startup overheads). The “compilation” performed by mcc is a very straightforward one in which each variable has the most general possible type and all operations translate to calls to library procedures that can handle arbitrary argument types. The speed improvements directly show the value of knowing precise variable types.

It is not always possible to do a complete static resolution of variable types, especially the size of an array, which is a component of a variable’s type. In such a case the compiler may need to generate code to perform expensive operations such as resizing an array dynamically. We present a new strategy to compute array-sizes based on slicing the code that computes them and hoisting it to before the first use of the array. This strategy can enable resolution of several cases that the static analysis fails to handle. Moreover, slice-hoisting becomes more precise in the presence of advanced dependence analysis while still being useful without it. Experimental results show that slice-hoisting results in substantial gains in the precision of type inference of MATLAB variables for code taken from the domain of Digital Signal Processing (DSP).
2 Types and Array-Sizes
MATLAB is an array-processing language. Most numerical programs written in MATLAB rely heavily on array manipulations. Therefore, it is useful to define the type of a MATLAB variable such that the type can describe the relevant properties of an array. A variable’s type is defined to be a four-tuple whose components are:
- the intrinsic type of the variable (integer, real, complex, etc.),
- the dimensionality δ, which is 0 for scalar variables,
- a tuple σ that denotes the size of the array along each dimension, and
- the structure of the array (e.g., whether the array is dense, triangular, diagonal, etc.).
This definition is motivated by de Rose’s work [6]. McCosh uses a framework based on propositional logic to write “constraints” for the operands of each operation in a MATLAB procedure being compiled [11]. The constraints are then “solved” to compute valid combinations of types, called type-configurations, that preserve the meaning of the original program. This process is carried out for each component of the type-tuple defined above. For size, solving the constraints results in a set of linear equations whose solutions give the array sizes, i.e., the σ values for each type-configuration.

The constraints-based static analysis technique does a good job of computing all possible configurations and taking care of forward and backward propagation of types. However, it has certain limitations:
1. Constraints-based static analysis does not handle array-sizes that are defined by indexed array expressions, e.g., by changing the size of an array by indexing past its current extent along any dimension.
2. The control join-points are ignored for determining array-sizes, which can lead to a failure in determining some array-sizes.
3. Some constraints may contain symbolic expressions involving program variables whose values are not known at compile time. These values cannot be resolved statically.
As a result of these limitations, if a purely constraints-based approach is used to infer array-sizes, some of them may not be inferred at all. This can result in generated code that might have to perform expensive array resizing operations at runtime.

Consider Fig. 3, which shows a code snippet from a DSP procedure. The array vcos is initialized to be an empty matrix in the outer loop and then extended by one element in each iteration of the inner loop. At the end of the inner loop vcos is a vector of size sin_num. The variable mcos is initialized to be an empty matrix once outside the outer loop. In every iteration of the outer loop it is extended by the vector vcos. At the end of the outer loop mcos is a sin_num by sin_num two-dimensional array. This is a frequently occurring pattern of code in DSP applications, and the constraints-based static analysis fails to infer the correct sizes for vcos and mcos.
There is no straightforward way to write constraints that would accurately capture the sizes of vcos and mcos. The problem here is that the static analysis ignores control join-points, while those are crucial in this case to determine the sizes. Past studies have shown that array resizing is an extremely expensive operation and that pre-allocating arrays can lead to substantial performance gains [3, 12]. Therefore, even if the array size has to be computed at runtime, computing it once at the beginning of the scope where the array is live and allocating the entire array once will be profitable.

Fig. 3. Implicit array resizing in a DSP procedure
3 Slice-Hoisting
Slice hoisting is a novel technique that enables size inference for the three cases that are not handled by the static analysis: array sizes changing due to index expressions, array sizes determined by control join-points, and array sizes involving symbolic values not known at compile time. An example of DSP code that resizes an array in a loop was shown in the previous section. Another way to resize an array in MATLAB is to index past its current maximum index. The keyword end refers to the last element of an array, and indexing past it automatically increases the array size. A hypothetical code sequence can resize the array A several times using this method, for example by assigning to A(end+1) repeatedly or to A(N) for some index N beyond the array’s current extent.
This type of code occurs very often in DSP programs. Notice that the array dimensionality can also be increased by this method, but it is still easily inferred. However, the propositional constraint language used for the static analysis does not allow writing array sizes that change. The only way to handle this within that framework is to rename the array every time there is an assignment to any of its elements, and then later perform a merge before emitting code, to minimize array copying.
Fig. 4. Three examples of slice-hoisting
If an array cannot be merged, then a copy must be inserted. This is the traditional approach to handling arrays in doing SSA renaming [5]. Array SSA could be useful in this approach [10]. Finally, if the value of N is not known until run time, such as when it is computed from other unknown symbolic values, then the final expression for the size of A will have symbolic values. Further processing would be needed before this symbolic value could be used to declare A.

Slice-hoisting handles these cases very simply through code transformations. It can easily be used in conjunction with the static analysis to handle only those arrays whose sizes the static analysis fails to infer. The basic idea behind slice hoisting is to identify the slice of computation that participates in computing the size of an array and to hoist the slice to before the first use of the array. It suffices to know the size of an array before its first use even if the size cannot be completely computed statically. Once the size is known, the array can be allocated either statically, if the size can be computed at compile time, or dynamically.

The size of an array is affected by one of the following three types of statements:
- A direct definition defines a new array in terms of the right-hand side. Since everything about the right-hand side must be known at this statement, the size of the array can be computed in terms of the sizes of the right-hand side in most cases.
- For an indexed expression on the left-hand side, the new size is the maximum of the current size and that implied by the indices.
- For a concatenation operation, the new size of the array is the sum of the current size and that of the concatenated portion.

The size of a variable v is denoted by σ_v. A σ value is a tuple ⟨s_1, ..., s_δ⟩, where δ is the dimensionality of the variable and s_i denotes the size of v along dimension i. The goal of the exercise is to compute the σ value for each variable and hoist the computation involved in doing that to before the first use of the variable. The final value of σ_v is the size of the array v. This process involves the following four steps:
1. transform the given code into SSA form,
2. insert σ statements and transform these into SSA form as well,
3. identify the slice involved in computing the σ values, and
4. hoist the slice.
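To make the four steps concrete, the following Python sketch applies them to straight-line code. The statement representation, the sigma_* naming convention, and the example program are illustrative assumptions, not the authors' implementation; branches and loops, which require the splitting discussed below, are not handled.

```python
# A minimal sketch of slice-hoisting on straight-line code (illustrative only).
from dataclasses import dataclass
from typing import FrozenSet, List, Set

@dataclass(frozen=True)
class Stmt:
    text: str             # printable form of the statement
    defs: FrozenSet[str]  # variables written by the statement
    uses: FrozenSet[str]  # variables read by the statement

def backward_slice(stmts: List[Stmt], seeds: Set[str]) -> Set[int]:
    """Step 3: indices of the statements that feed the seed variables,
    found by a simple backward def-use walk."""
    needed, picked = set(seeds), set()
    for i in range(len(stmts) - 1, -1, -1):
        if stmts[i].defs & needed:
            picked.add(i)
            needed |= set(stmts[i].uses)
    return picked

def hoist_size_slice(stmts: List[Stmt], sigma_var: str) -> List[Stmt]:
    """Step 4: hoist the slice that computes the final size value to the top
    of the region, which trivially places it before the array's first use."""
    idx = backward_slice(stmts, {sigma_var})
    hoisted = [stmts[i] for i in sorted(idx)]
    rest = [s for i, s in enumerate(stmts) if i not in idx]
    return hoisted + rest

# Steps 1 and 2 are assumed to have already produced SSA code in which the two
# implicit resizings of A are mirrored by explicit sigma_A* size statements.
prog = [
    Stmt("n = read_input()",                   frozenset({"n"}),        frozenset()),
    Stmt("sigma_A1 = [1, n]",                  frozenset({"sigma_A1"}), frozenset({"n"})),
    Stmt("A(n) = 0",                           frozenset({"A"}),        frozenset({"n"})),
    Stmt("sigma_A2 = max(sigma_A1, [1, 2*n])", frozenset({"sigma_A2"}), frozenset({"sigma_A1", "n"})),
    Stmt("A(2*n) = 1",                         frozenset({"A"}),        frozenset({"A", "n"})),
]
for s in hoist_size_slice(prog, "sigma_A2"):
    print(s.text)
```

In the output, the entire size computation (n, sigma_A1, sigma_A2) ends up before the first use of A, so the array can be allocated once, at its final size, before any element is assigned.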
These steps are illustrated with three examples in Fig. 4. Steps 1, 2 and 3 have been combined in the figure for clarity. The top row in the figure demonstrates the idea with simple straight-line code. The next row shows how a branch can be handled. Notice that some of the code inside the branch is part of the slice that computes the size of A. Therefore, the branch must be split into two while making sure that the conditional c is not recomputed, especially if it can have side effects. Finally, the bottom row of Fig. 4 illustrates the application of slice hoisting to a loop.
Fig. 5. Dependencies can cause slice hoisting to fail
In this case, again, the loop needs to be split. The loop that is hoisted is very simple, and induction variable analysis would be able to detect the hoisted size value to be an auxiliary loop induction variable, thereby eliminating the loop. If eliminating the loop is not possible, then the split loop reduces to the inspector-executor strategy. Notice that in slice-hoisting, concatenation to an array, or assignment to an element of the array, does not create a new SSA name.

This approach has several advantages:
- It is very simple and fast, requiring only basic SSA analysis in its simplest form.
- It can leverage more advanced analyses, if available. For example, advanced dependence analysis can enable slice hoisting in cases where simple SSA-based analysis might fail. Similarly, symbolic analysis can complement the approach by simplifying the hoisted slice. Other compiler optimization phases – constant propagation, auxiliary induction variable analysis, invariant code motion, common subexpression elimination – all benefit slice hoisting without having to be modified in any way.
- It subsumes the inspector-executor style.
- The approach works very well with, and benefits from, the telescoping languages framework. In particular, transformations such as procedure strength reduction and procedure vectorization can remove certain dependences, making it easier to hoist slices [2].
- Most common cases can be handled without any complicated analysis.

In some cases it may not be possible to hoist the slice before the first use of the array. Figure 5 shows an example where a dependence prevents the identified slice from being hoisted before the array’s first use. Such cases are likely to occur infrequently. Moreover, a more refined dependence analysis, or procedure specialization (such as procedure strength reduction), can cause such dependences to disappear. When the slice cannot be hoisted, the compiler must emit code to resize the array dynamically.
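A hedged sketch of the check this failure case suggests: before moving the slice, the compiler can verify that no slice statement that must move reads a value produced, at or after the array's first use, by code that is not itself part of the slice. The representation below assumes SSA form (one definition per name) and straight-line code, and it is not the authors' algorithm.

```python
# Illustrative hoistability test (assumes SSA form, straight-line code).
from dataclasses import dataclass
from typing import FrozenSet, List, Set

@dataclass(frozen=True)
class Stmt:
    text: str
    defs: FrozenSet[str]   # variables written
    uses: FrozenSet[str]   # variables read

def can_hoist(stmts: List[Stmt], slice_idx: Set[int], first_use: int) -> bool:
    """slice_idx: indices of the size slice; first_use: index of the array's
    first use.  Hoisting fails if a slice statement that must move (one located
    at or after first_use) reads a value defined by a non-slice statement that
    stays at or after first_use."""
    stuck_defs = set()
    for i in range(first_use, len(stmts)):
        if i not in slice_idx:
            stuck_defs |= set(stmts[i].defs)
    return all(not (set(stmts[i].uses) & stuck_defs)
               for i in slice_idx if i >= first_use)
```

When such a test fails, the compiler keeps the slice in place and emits dynamic resizing code, exactly as described above; a sharper dependence analysis could prove more slices movable.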
Fig. 6. Precision of the constraints-based type inference
When slice-hoisting is applied to compute an array size it may be necessary to insert code to keep track of the actual current size of the array, which would be used in order to preserve the semantics of any operations on the array in the original program.
4 Experimental Evaluation
To evaluate the effect of slice-hoisting we conducted experiments on a set of Digital Signal Processing (DSP) procedures that are a part of a larger collection of procedures written in MATLAB. The collection of procedures constitutes an informal library. Some of these procedures have been developed at the Electrical and Computer Engineering department at Rice for DSP related research work. Others have been selected from the contributed DSP code that is available for download at the MathWorks web-site.
4.1 Precision of Constraints-Based Inference
We first evaluated the precision of the static constraints-based type inference algorithm. Due to the heavy overloading of operators, MATLAB code is often valid for more than one combination of variable types. For example, a MATLAB function written to perform FFT might be completely valid even if a scalar value is passed as an argument that is expected to be a one-dimensional vector. The
results might not be mathematically correct, but the MATLAB operations performed inside the function may make sense individually. As a result, the static type inference algorithm can come up with multiple valid type-configurations. Additionally, the limitations enumerated earlier in Section 2 can cause the number of configurations to be greater than what would be valid for the given code. This does not affect correctness, since only the generated code corresponding to the right configurations will get used – the extra configurations simply represent wasted compiler effort. In the case of the DSP procedures studied, it turns out that if argument types are pinned down through annotations on the argument variables then exactly one type-configuration is valid. Figure 6 shows the number of type-configurations generated for five different DSP procedures by the constraints-based inference algorithm. The left, darker bars indicate the number of configurations generated without any annotations on the arguments. The lighter bars indicate the number of type-configurations generated when the arguments have been annotated with the precise types that are expected by the library writer. The fact that the lighter bars are not all one (only the leftmost, for acf, is one) shows that the static constraints-based algorithm does have limitations that translate into more than the necessary number of type-configurations. However, these numbers are not very large – all, except fourier_by_jump, are smaller than 10 – showing that the static analysis performs reasonably well in most cases.

Another important observation here is that annotations on the libraries serve as a very important aid to the compiler. The substantial difference in the precision of the algorithm with and without annotations indicates that the hints from the library writer can go a long way in nudging the compiler in the right direction. This conclusion also validates the strategy of making library writers’ annotations an important part of the library compilation process in the telescoping languages approach.
4.2 Effectiveness of Slice-Hoisting
Having verified that there is a need to plug the hole left by the limitations in the constraints-based inference algorithm, we conducted another set of experiments on the same procedures to evaluate the effectiveness of slice-hoisting. Figure 7 shows the percentages of the total number of variables that are inferred by various mechanisms. In each case, exactly one type-configuration is produced, which is the only valid configuration once argument types have been determined through the library writer’s annotations. In the case of acf, all the arguments can be inferred without the need for any annotations. The results clearly show that, for the evaluated procedures, slice-hoisting successfully inferred all the array-sizes that were not handled by the static analysis.
Fig. 7. Value of slice-hoisting

4.3 Implementation
The constraints-based static type inference has been implemented as a type-inference engine in the library compiler for MATLAB that is being developed at Rice as a part of a telescoping languages compiler. The numbers of configurations shown in Fig. 6 are based on this engine. Slice-hoisting is under implementation at the time of this writing, and the numbers of variables shown to be inferred through slice-hoisting in Fig. 7 are based on a hand-simulation of the slice-hoisting algorithm.
5 Related Work
Conceptually, slice-hoisting is closely related to the idea of the inspector-executor style pioneered in the Chaos project at the University of Maryland, College Park by Saltz [13]. That style was used to replicate loops to perform array index calculations for irregular applications in order to improve the performance of the computation loops. In certain cases, hoisted slices can reduce to an inspector-executor style computation of array sizes to avoid the cost of array resizing in loops. However, the idea of slice-hoisting applies in a very different context and is used to handle a much wider set of situations.

Type inference for MATLAB was carried out in the FALCON project at the University of Illinois, Urbana-Champaign [7, 6]. A simplified version of FALCON’s type inference was later used in the MaJIC compiler [1]. The FALCON
compiler uses a strategy based on dataflow analysis to infer MATLAB variable types. To perform array-size inference, that strategy relies on shadow variables to track array sizes dynamically. In order to minimize the dynamic reallocation overheads, it uses a complicated symbolic analysis algorithm to propagate symbolic values of array-sizes [14]. Slice-hoisting, on the other hand, can achieve similar goals through a much simpler use-def analysis. Moreover, if an advanced symbolic or dependence analysis is available in the compiler, then it can be used to make slice-hoisting more effective. Finally, even very advanced symbolic analysis might not be able to determine sizes that depend on complicated loops, while slice-hoisting can handle such cases by converting them to the inspector-executor style.

An issue related to inferring array-sizes is that of storage management. Joisha and Banerjee developed a static algorithm, based on the classic register allocation algorithm, to minimize the footprint of a MATLAB application by reusing memory [8]. Reducing an application’s footprint can improve performance by making better use of the cache. If a hoisted slice must be executed at runtime to compute the size of an array, then the array will be allocated on the heap by Joisha and Banerjee’s algorithm. Their algorithm can work independently of – and even complement – slice-hoisting.

Type inference, in general, is a topic that has been researched well in the programming languages community, especially in the context of functional programming languages. However, inferring array sizes in weakly typed or untyped languages is undecidable in general and difficult to solve in practice. Some attempts have been made at inferring array sizes by utilizing dependent types in the language theory community. One such example is eliminating array-bound checking [15].
6 Conclusion
Type inference is an important step in compiling MATLAB. Precise type information can greatly improve the generated code, resulting in substantial performance improvement. Inferring array-sizes turns out to be a difficult problem, while not having precise array-size information can lead to very expensive array-copy operations at runtime. This paper has presented a new technique to perform array-size inference that complements the constraints-based static type-inference approach. The technique, called slice-hoisting, relies on very simple code transformations without requiring advanced analyses. At the same time, the availability of advanced analyses can improve slice-hoisting, either by making it possible to hoist slices that might otherwise have been deemed unhoistable due to imprecise dependence information, or by improving the static evaluation of the hoisted slice. Evaluation of the technique on a selection of DSP procedures has demonstrated its effectiveness in plugging the holes that are left by a purely static constraints-based approach to inferring array-sizes.
Acknowledgements

We thank Randy Allen for making some of the MATLAB procedures available from the MathWorks web-site in a ready-to-use form. Thanks to Vinay Ribeiro, Justin Romberg, and Ramesh Neelamani for making their MATLAB code available for our study, and to Behnaam Aazhang, who heads the Center for Multimedia Applications that has an ongoing collaboration with the telescoping languages effort.
References

[1] George Almási and David Padua. MaJIC: Compiling MATLAB for speed and responsiveness. In Proceedings of ACM SIGPLAN Conference on Programming Language Design and Implementation, pages 294–303, June 2002.
[2] Arun Chauhan and Ken Kennedy. Procedure strength reduction and procedure vectorization: Optimization strategies for telescoping languages. In Proceedings of ACM-SIGARCH International Conference on Supercomputing, June 2001.
[3] Arun Chauhan and Ken Kennedy. Reducing and vectorizing procedures for telescoping languages. International Journal of Parallel Programming, 30(4):289–313, August 2002.
[4] Arun Chauhan, Cheryl McCosh, Ken Kennedy, and Richard Hanson. Automatic type-driven library generation for telescoping languages. To appear in the Proceedings of SC: High Performance Networking and Computing Conference, 2003.
[5] Ron Cytron, Jeanne Ferrante, Barry K. Rosen, Mark N. Wegman, and F. Kenneth Zadeck. Efficiently computing static single assignment form and the control dependence graph. ACM Transactions on Programming Languages and Systems, 13(4):451–490, October 1991.
[6] Luiz DeRose and David Padua. Techniques for the translation of MATLAB programs into Fortran 90. ACM Transactions on Programming Languages and Systems, 21(2):286–323, March 1999.
[7] Luiz Antônio DeRose. Compiler Techniques for Matlab Programs. PhD thesis, University of Illinois at Urbana-Champaign, 1996.
[8] Pramod G. Joisha and Prithviraj Banerjee. Static array storage optimization in MATLAB. In Proceedings of ACM SIGPLAN Conference on Programming Language Design and Implementation, June 2003.
[9] Ken Kennedy, Bradley Broom, Keith Cooper, Jack Dongarra, Rob Fowler, Dennis Gannon, Lennart Johnson, John Mellor-Crummey, and Linda Torczon. Telescoping Languages: A strategy for automatic generation of scientific problem-solving systems from annotated libraries. Journal of Parallel and Distributed Computing, 61(12):1803–1826, December 2001.
[10] Kathleen Knobe and Vivek Sarkar. Array SSA form and its use in parallelization. In Proceedings of the 25th ACM SIGACT-SIGPLAN Symposium on Principles of Programming Languages, January 1998.
[11] Cheryl McCosh. Type-based specialization in a telescoping compiler for MATLAB. Master’s thesis, Rice University, Houston, Texas, 2002.
[12] Vijay Menon and Keshav Pingali. A case for source level transformations in MATLAB. In Proceedings of the ACM SIGPLAN / USENIX Conference on Domain Specific Languages, 1999.
[13] Shamik Sharma, Ravi Ponnusamy, Bongki Moon, Yuan-Shin Hwang, Raja Das, and Joel Saltz. Run-time and compile-time support for adaptive irregular problems. In Proceedings of SC: High Performance Networking and Computing Conference, November 1994.
[14] Peng Tu and David A. Padua. Gated SSA-based demand-driven symbolic analysis for parallelizing compilers. In Proceedings of ACM-SIGARCH International Conference on Supercomputing, pages 414–423, 1995.
[15] Hongwei Xi and Frank Pfenning. Eliminating array bound checking through dependent types. In Proceedings of ACM SIGPLAN Conference on Programming Language Design and Implementation, pages 249–257, June 1998.
Efficient Execution of Multi-query Data Analysis Batches Using Compiler Optimization Strategies*

Henrique Andrade¹,², Suresh Aryangat¹, Tahsin Kurc², Joel Saltz², and Alan Sussman¹

¹ Dept. of Computer Science, University of Maryland, College Park, MD 20742
{hcma,suresha,als}@cs.umd.edu
² Dept. of Biomedical Informatics, The Ohio State University, Columbus, OH 43210
{kurc.1,saltz.3}@osu.edu
Abstract. This work investigates the leverage that can be obtained from compiler optimization techniques for efficient execution of multi-query workloads in data analysis applications. Our approach is to address multi-query optimization at the algorithmic level, by transforming a declarative specification of scientific data analysis queries into a high-level imperative program that can be made more efficient by applying compiler optimization techniques. These techniques – including loop fusion, common subexpression elimination and dead code elimination – are employed to allow data and computation reuse across queries. We describe a preliminary experimental analysis on a real remote sensing application that analyzes very large quantities of satellite data. The results show that our techniques achieve sizable reductions in the amount of computation and I/O necessary for executing query batches and in average execution times for the individual queries in a given batch.
1 Introduction
Multi-query optimization has been investigated by several researchers, mostly in the realm of relational databases [5, 7, 13, 17, 18, 20]. We have devised a database architecture that allows efficiently handling multi-query workloads where user-defined operations are also part of the query plan [2, 4]. The architecture builds on a data and computation reuse model that can be employed to systematically expose reuse sites in the query plan when application-specific aggregation methods are employed.
* This research was supported by the National Science Foundation under Grants #EIA-0121161, #EIA-0121177, #ACI-9619020 (UC Subcontract #10152408), #ACI-0130437, #ACI-0203846, and #ACI-9982087, and Lawrence Livermore National Laboratory under Grant #B517095 and #B500288, NIH NIBIB BISTI #P20EB000591, Ohio Board of Regents BRTTC #BRTT02-0003.
This model relies on an active semantic cache, in which semantic information is attached to prior computed aggregates that are cached by the system. This permits the query optimizer to retrieve the matching aggregates based on the metadata description of a new query. The cache is active in that it allows application-specific transformations to be performed on the cached aggregates so that they can be reused to speed up the evaluation of the query at hand. The reuse model and active semantic caching have been shown to effectively decrease the average turnaround time for a query, as well as to increase the database system throughput [2, 3, 4].

Our earlier approach leverages data and computation reuse for queries submitted to the system over an extended period of time. For a batch of queries, on the other hand, a global query plan that accommodates all the queries can be more profitable than scheduling queries based on individual query plans, especially if information at the algorithmic level for each of the query plans is exposed. A similar observation was the motivation for a study done by Kang et al. [13] for relational operators. The need to handle query batches arises in many situations. In a data server concurrently accessed by many clients, there can be multiple queries awaiting execution. A typical example is the daily execution of a set of queries for detecting the probability of wildfire occurring in Southern California. In this context, a system could issue multiple queries in batch mode to analyze the current (or close to current) set of remotely sensed data at regular intervals and trigger a response by a fire brigade. In such a scenario, a pre-optimized batch of queries can result in better resource allocation and scheduling decisions by employing a single comprehensive query plan.

Many projects have worked on database support for scientific datasets [8, 19]. Optimizing query processing for scientific applications using compiler optimization techniques has attracted the attention of several researchers, including those in our own group. Ferreira et al. [9, 10] have done extensive studies on using compiler and runtime analysis to speed up processing for scientific queries. They have investigated compiler optimization issues related to single queries with spatio-temporal predicates, which are similar to the ones we target [10]. In this work, we investigate the application of compiler optimization strategies to execute a batch of queries for scientific data analysis applications, as opposed to a single query. Our approach is a multi-step process consisting of the following tasks: 1) converting a declarative data analysis query into an imperative description; 2) handing off the set of imperative descriptions for the queries in the batch to the query planner; and 3) employing traditional compiler optimization strategies in the planner, such as common subexpression elimination, dead code elimination, and loop fusion, to generate a single, global, efficient query plan.
2 Query Optimization Using Compiler Techniques
In this section, we describe the class of data analysis queries targeted in this work, and present an overview of the optimization phases for a batch of queries.
Fig. 1. General Data Reduction Loop

2.1 Data Analysis Queries
Queries in many data analysis applications can be defined as range-aggregation queries (RAGs) [6]. The datasets for range-aggregation queries can be classified as input, output, or temporary. Input (I) datasets correspond to the data to be processed. Output (O) datasets are the final results from applying one or more operations to the input datasets. Temporary (T) datasets (temporaries) are created during query processing to store intermediate results. A user-defined data structure is usually employed to describe and store a temporary dataset. Temporary and output datasets are tagged with the operations employed to compute them and also with the query metadata information (i.e., the parameters specified for the query). Temporaries are also referred to as aggregates, and we use the two terms interchangeably.

A RAG query typically has both spatial and temporal predicates, namely a multi-dimensional bounding box in the underlying multi-dimensional attribute space of the dataset. Only data elements whose associated coordinates fall within the multidimensional box must be retrieved and processed. The selected data elements are mapped to the corresponding output dataset elements. The mapping operation is an application-specific function that often involves finding a collection of data items using a specific spatial relationship (such as intersection), possibly after applying a geometric transformation. An input element can map to multiple output elements. Similarly, multiple input elements can map to the same output element. An application-specific aggregation operation (e.g., sum over selected elements) is applied to the input data elements that map to the same output element.

Borrowing from a formalism proposed by Ferreira [9], a range-aggregation query can be specified in the general loop format shown in Figure 1. A Select function identifies the subdomain that intersects the query metadata for a query. The subdomain can be defined in the input attribute space or in the output space. For the sake of discussion, we can view the input and output datasets as being composed of collections of objects. An object can be a single data element or a data chunk containing multiple data elements. The objects whose elements are updated in the loop are referred to as left hand side, or LHS, objects. The objects whose elements are only read in the loop are considered right hand side, or RHS, objects. During query processing, the subdomain selected for the query is traversed by the foreach loop. Each point in the subdomain and the corresponding subscript functions are used to access the input and output data elements for the loop.
In the figure, we assume that there are several RHS collections of objects contributing the values used to update a LHS object. It is not required that all RHS collections be different, since different subscript functions can be used to access the same collection. In each iteration of the loop, the value of an output element is updated using an application-specific function that takes one or more of the RHS values, and possibly other scalar values that are inputs to the function, to compute an aggregate result value. The aggregation operations typically implement generalized reductions [11], which must be commutative and associative operations.
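For illustration, the canonical loop just described can be rendered as the following Python sketch; the parameter names (select, subscript functions, aggregate) are placeholders rather than the paper's notation, and real datasets would of course be chunked and disk-resident.

```python
# Illustrative rendering of the general data reduction loop of Figure 1.
from typing import Callable, Dict, Iterable, List, Tuple

Point = Tuple[int, ...]

def range_aggregate(query_meta: dict,
                    select: Callable[[dict], Iterable[Point]],
                    lhs_subscript: Callable[[Point], Point],
                    rhs_subscripts: List[Callable[[Point], Point]],
                    rhs_datasets: List[Dict[Point, float]],
                    aggregate: Callable[..., float],
                    output: Dict[Point, float]) -> Dict[Point, float]:
    """foreach point e in the subdomain selected for the query, update the
    output element that e maps to with an application-specific aggregation of
    the RHS elements that map to it.  The aggregation is assumed to be a
    commutative and associative generalized reduction."""
    for e in select(query_meta):                       # subdomain intersecting the query
        out_idx = lhs_subscript(e)                     # LHS element updated at this point
        rhs_vals = [ds[sub(e)] for sub, ds in zip(rhs_subscripts, rhs_datasets)]
        output[out_idx] = aggregate(output.get(out_idx, 0.0), *rhs_vals)
    return output

# Example: sum the input values that map to the same (coarser) output cell.
inp = {(i,): float(i) for i in range(8)}
out = range_aggregate(
    query_meta={"lo": 0, "hi": 8},
    select=lambda q: ((i,) for i in range(q["lo"], q["hi"])),
    lhs_subscript=lambda e: (e[0] // 4,),              # 4 input points per output point
    rhs_subscripts=[lambda e: e],
    rhs_datasets=[inp],
    aggregate=lambda acc, v: acc + v,
    output={})
print(out)    # {(0,): 6.0, (1,): 22.0}
```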
2.2 Case Study Application – Kronos
Before we present our approach and system support for multi-query optimization for query batches, we briefly describe the Kronos application used as a case study in this paper. Remote sensing has become a very powerful tool for geographical, meteorological, and environmental studies [12]. Usually systems processing remotely sensed data provide on-demand access to raw data and user-specified data product generation. Kronos [12] is an example of such a class of applications. It targets datasets composed of remotely sensed AVHRR GAC level 1B (Advanced Very High Resolution Radiometer – Global Area Coverage) orbit data [15]. The raw data is continuously collected by multiple satellites and the volume of data for a single day is about 1 GB. The processing structure of Kronos can be divided into several basic primitives that form a processing chain on the sensor data. The primitives are: Retrieval, Atmospheric Correction, Composite Generator, Subsampler, and Cartographic Projection. More details about these primitives can be found in the technical report version of this paper [1]. All the primitives (with the exception of Retrieval) may employ different algorithms (i.e., multiple atmospheric correction methods) that are specified as a parameter to the actual primitive (e.g., Correction(T0,Rayleigh/Ozone), where Rayleigh/Ozone is an existing algorithm and T0 is the aggregate used as input). In fact, Kronos implements 3 algorithms for atmospheric correction, 3 different composite generator algorithms, and more than 60 different cartographic projections.
2.3 Solving the Multi-Query Optimization Problem
The objective of multi-query optimization is to take a batch of queries, expressed by a set of declarative query definitions (e.g., using the SQL extensions of PostgreSQL [16]), and generate a set of optimized data parallel reduction loops that represent the global query plan for the queries in the batch. Queries in a declarative language express what the desired result of a query should be, without prescribing exactly how the desired result is to be computed. Previous researchers have already postulated and verified the strengths of using declarative languages from the perspective of end-users, essentially because the process
of accessing the data and generating the data product does not need to be specified. Optimization of declarative queries is then a multi-phase process, in which query definitions are first converted into imperative loops that conform to the canonical data reduction loop of Figure 1, and then those loops are optimized using various compiler techniques.

Consider Kronos queries as examples. For our study, queries are defined as a 3-tuple: [spatio-temporal bounding box and spatio-temporal resolution, correction method, compositing method]. The spatio-temporal bounding box (in the SQL WHERE clause) specifies the spatial and temporal coordinates for the data of interest. The spatio-temporal resolution (or output discretization level) describes the amount of data to be aggregated per output point (i.e., each output pixel is composed from multiple input points, so that an output pixel corresponds to a larger geographic area). The correction method (in the SQL FROM clause) specifies the atmospheric correction algorithm to be applied to the raw data to approximate the values for each input point to the ideal corrected values. Finally, the compositing method (also in the SQL FROM clause) defines the aggregation level and function to be employed to coalesce multiple input grid points into a single output grid point.

Two sample Kronos queries specified in PostgreSQL are illustrated in Figure 2. Query 1 selects the raw AVHRR data from a data collection named AVHRR_DC, for the spatio-temporal boundaries stated in the WHERE clause (within the boundaries for latitude, longitude, and day). The data is subsampled in such a way that each output pixel represents a fixed area of data (with the discretization levels defined by deltalat, deltalon and deltaday). Pixels are also corrected for atmospheric distortions using the WaterVapor method and composited to find the maximum value of the Normalized Difference Vegetation Index (MaxNDVI).

Figure 2 also presents an overview of the optimization process. The goal is to detect commonalities between Queries 1 and 2, in terms of the common spatio-temporal domains and the primitives they require. In order to achieve this goal, the first step in the optimization process is to parse and convert these queries into imperative loops conforming to the loop in Figure 1. The resulting loop presents the high-level description of the same queries, with the spatio-temporal boundaries translated into input data points (via index lookup operations). Therefore, loops can iterate on points, blocks, or chunks depending on how the raw data is stored, declustered, and indexed. We should note that we have omitted the calls to the subscript mapping functions in order to simplify the presentation. These functions enable both finding an input data element in the input dataset and determining where it is placed in the output dataset (or temporary dataset). In some cases, mapping from an absolute set of multidimensional coordinates (given in the WHERE clause of the query) into a relative set of coordinates (the locations of the data elements) may take a considerable amount of time. Thus, minimizing the number of calls to the mapping operations can also improve performance.

As seen in Figure 2, once the loops have been generated, the following steps are carried out to transform them into a global query plan.
Fig. 2. An overview of the entire optimization process for two queries. MaxNDVI and MinCh1 are different compositing methods and WaterVapor designates an atmospheric correction algorithm. All temporaries have local scope with respect to the loop. The discretization values are not shown as part of the loop iteration domains for a clearer presentation
First, the imperative descriptions are concatenated into a single workload program. Second, the domains for each of the foreach loops are inspected for multidimensional overlaps. Loops with domains that overlap are fused by moving the individual loop bodies into one or more combined loops. Loops corresponding to the non-overlapping domain regions are also created. An intermediate program is generated with two parts: combined loops for the overlapping areas and individual loops for the non-overlapping areas. Third, for each combined loop, common subexpression elimination and dead code elimination techniques are employed. That is, redundant RHS function calls are eliminated, redundant subscript function calls are deleted, and multiple retrievals of the same input data elements are eliminated.
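The input to these steps is the imperative description produced for each query. As a rough illustration (the field names and coordinate values below are invented, not the system's actual intermediate representation), a parsed Kronos query might be carried by the planner in a form like this:

```python
# Hypothetical in-memory form of a parsed query; names and values are illustrative.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class BoundingBox:
    lo: Tuple[float, float, float]          # (lat, lon, day) lower corner
    hi: Tuple[float, float, float]          # (lat, lon, day) upper corner

@dataclass
class PrimitiveCall:
    name: str                               # e.g. "Retrieval", "Correction", "Composite"
    params: Tuple[str, ...] = ()            # e.g. ("WaterVapor",) or ("MaxNDVI",)

@dataclass
class QueryLoop:
    domain: BoundingBox                     # iteration domain from the WHERE clause
    resolution: Tuple[float, float, float]  # deltalat, deltalon, deltaday
    body: List[PrimitiveCall] = field(default_factory=list)   # chain from the FROM clause

# Query 1 of the running example, with made-up boundaries.
q1 = QueryLoop(
    domain=BoundingBox(lo=(0.0, -120.0, 1.0), hi=(40.0, -80.0, 10.0)),
    resolution=(0.1, 0.1, 1.0),
    body=[PrimitiveCall("Retrieval"),
          PrimitiveCall("Correction", ("WaterVapor",)),
          PrimitiveCall("Composite", ("MaxNDVI",))])
```

Loop fusion then compares the domain boxes of such descriptors, while redundancy elimination compares the primitive names and parameters in their bodies.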
3 System Support
In this section, we describe the runtime system that supports the multi-query optimization phases presented in Section 2. The runtime system is built on
a database engine we have specifically developed for efficiently executing multi-query loads from scientific data analysis applications in parallel and distributed environments [2, 3]. The compiler approach described in this work has been implemented as a front-end to the Query Server component of the database engine. The Query Server is responsible for receiving declarative queries from the clients, generating an imperative query plan, and dispatching the queries for execution. It invokes the Query Planner every time a new query is received for processing, and continually computes the best query plan for the queries in the waiting queue, which essentially form a query batch. Given the limitations of SQL-2, we have employed PostgreSQL [16] as the declarative language of choice for our system. PostgreSQL has language constructs for creating new data types (CREATE TYPE) and new data processing routines, called user-defined functions (CREATE FUNCTION). The only relevant part of PostgreSQL to our system is its parser, since the other data processing services are all handled within our existing database engine.
3.1 The Multi-Query Planner
The multi-query planner is the system module that receives an imperative query description from the Query Server and iteratively generates an optimized query plan for the queries received, until the system is ready to process the next query batch.

The loop body of a query may consist of multiple function primitives registered in the database catalog. In this work, a function primitive is an application-specific, user-defined, minimal, and indivisible part of the data processing [4]. A primitive consists of a function call that can take multiple parameters, with the restriction that one of them is the input data to be processed and the return value is the processed output value. An important assumption is that the function has no side effects. The function primitives in a query loop form a chain of operations transforming the input data elements into the output data elements. A primitive at level i of a processing chain in the loop body has the dual role of consuming the temporary dataset generated by the primitive immediately before it (at level i−1) and generating the temporary dataset for the primitive immediately after it (at level i+1). Figure 2 shows two sample Kronos queries that contain multiple function primitives. In the figure, the spatio-temporal bounding box is described by a pair of 3-dimensional coordinates in the input dataset domain. Retrieval, Correction, and Composite are the user-defined primitives. I designates the portion of the input domain (i.e., the raw data) being processed in the current iteration of the foreach loop, and T0 and T1 designate the results of the computation performed by the Retrieval and Correction primitive calls. O1 and O2 designate the output for Query 1 and Query 2, respectively.

Optimization for a query in a query batch occurs in a two-phase process in which the query is first integrated into the current plan, and then redundancies are eliminated. The integration of a query into the current plan is a recursive process, defined by the spatio-temporal boundaries of the query, which describe the loop iteration domain.
The details of this process are explained in the next sections and in more detail in the technical report version of this paper [1].

Loop Fusion. The first stage of the optimization mainly employs the bounding boxes for the new query, as well as the bounding boxes for the set of already optimized loops in the query plan. The optimization essentially consists of loop fusion operations – merging and fusing the bodies of loops representing queries that iterate at least partially over the same domain. The intuition behind this optimization goes beyond the traditional reasons for performing loop fusion, namely reducing the cost of the loops by combining overheads and exposing more instructions for parallel execution. The main goal of this phase is to expose opportunities for subsequent common subexpression elimination and dead code elimination.

Two distinct tasks are performed when a new loop (newl) is integrated into the current query batch plan. First, the query domain for the new loop is compared against the iteration domains for all the loops already in the query plan. The loop with the largest amount of multidimensional overlap is selected to incorporate the statements from the body of the new loop. The second task is to modify the current plan appropriately, based on three possible scenarios:
1) The new query represented by newl does not overlap with any of the existing loops, so newl is added to the plan as is.
2) The iteration domain for the new loop newl is exactly equal to that of a loop already in the query plan (loop bestl). In this case, the body of bestl is merged with that of newl.
3) The iteration domain for newl is either subsumed by that of bestl, or subsumes that of bestl, or there is a partial overlap between the two iteration domains. This case requires computing several new loops to replace the original bestl. The first new loop iterates only on the common, overlapping domain of newl and bestl. The body of newl is merged with that of bestl and the resulting loop is added to the query plan (i.e., bestl is replaced by updatedl). Second, loops covering the rest of the domain originally covered by bestl are added to the current plan. Finally, the additional loops representing the rest of the domain for newl are computed, and the new loops become candidates to be added to the updated query plan. They are considered candidates because those loops may also overlap with other loops already in the plan. Each of the new loops is recursively inserted into the optimized plan using the same algorithm, as sketched below. This last step guarantees that there will be no iteration space overlap across the loops in the final query batch plan.
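The following Python sketch shows this insertion step for one-dimensional iteration intervals; the real planner works on multidimensional spatio-temporal boxes, and the class and function names here are illustrative rather than taken from the system.

```python
# Simplified 1-D sketch of recursive loop insertion into the batch plan.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Loop:
    domain: Tuple[int, int]    # half-open iteration interval [lo, hi)
    body: List[str]            # statements (primitive calls) in the loop body

def overlap(a: Tuple[int, int], b: Tuple[int, int]) -> Optional[Tuple[int, int]]:
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo < hi else None

def insert_loop(plan: List[Loop], newl: Loop) -> None:
    """Integrate newl so that no two loops in the plan have overlapping domains."""
    best, best_ov = None, None
    for loop in plan:                                   # pick the largest overlap
        ov = overlap(loop.domain, newl.domain)
        if ov and (best_ov is None or ov[1] - ov[0] > best_ov[1] - best_ov[0]):
            best, best_ov = loop, ov
    if best is None:                                    # scenario 1: no overlap
        plan.append(newl)
        return
    if best.domain == newl.domain:                      # scenario 2: identical domains
        best.body.extend(newl.body)
        return
    # scenario 3: containment or partial overlap -- split around the overlap
    plan.remove(best)
    plan.append(Loop(best_ov, best.body + newl.body))   # fused loop on the overlap
    for lo, hi in [(best.domain[0], best_ov[0]), (best_ov[1], best.domain[1])]:
        if lo < hi:                                     # leftover pieces of best
            plan.append(Loop((lo, hi), list(best.body)))
    for lo, hi in [(newl.domain[0], best_ov[0]), (best_ov[1], newl.domain[1])]:
        if lo < hi:                                     # leftover pieces of newl are
            insert_loop(plan, Loop((lo, hi), list(newl.body)))  # inserted recursively

# Two queries over partially overlapping day ranges.
plan: List[Loop] = []
insert_loop(plan, Loop((1, 10), ["O1 += q1_body"]))
insert_loop(plan, Loop((5, 15), ["O2 += q2_body"]))
for l in plan:
    print(l.domain, l.body)
```

Extending this to several dimensions changes only the overlap test and the enumeration of the leftover pieces, each of which may again be split recursively, which is also why heavily correlated batches take longer to plan, as noted in Section 4.2.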
Redundancy Elimination. After the loops for all the queries in the batch are added to the query plan, redundancies in the loop bodies can be removed, employing straightforward optimizations – common subexpression elimination and dead code elimination. In our case, common subexpression elimination consists of identifying computations and data retrieval operations that are performed multiple times in the loop body, eliminating all but the first occurrence [14]. Each statement in a loop body creates a new available expression (i.e., represented by the right hand side of the assignment), which can be accessed through a reference to the temporary aggregate on the left hand side of the assignment. The common subexpression algorithm [1] performs detection of new available expressions and substitutes a call to a primitive by a copy from the temporary aggregate containing the redundant expression. The equivalence of the results generated by two statements is determined by inspecting the call sites of the primitive function invocations. Equivalence is established when, in addition to using the same (or equivalent) input data, the parameters for the primitives are also the same or equivalent. Because the primitive invocation is replaced by a copy operation, primitive functions are required to not have any side effects.

The removal of redundant expressions often causes the creation of useless code – assignments that generate dead variables that are no longer needed to compute the output results of a loop. We extend the definition of dead variable to also accommodate situations in which a statement has the form x = y, where x and y are both temporaries. In this case, all uses of x can be replaced by y. We employ the standard dead code elimination algorithm, which requires marking all instructions that compute essential values. Our algorithm computes the def-use chain (connections between a definition of a variable and all its uses) for all the temporaries in the loop body. The dead code elimination algorithm [1] makes two passes over the statements that are part of a loop in the query plan. The first pass detects the statements that define a temporary and the ones that use it. A second pass over the statements looks for statements that define a temporary value, checks whether that value is utilized, and removes the unneeded statements. Both the common subexpression elimination and the dead code elimination algorithms must be invoked multiple times, until the query plan remains stable, meaning that all redundancies and unneeded statements are eliminated.

Although similar to standard compiler optimization algorithms, all of the algorithms were implemented in the Query Planner to handle an intermediate code representation we devised to represent the query plan. We emphasize that we are not compiling C or C++ code, but rather the query plan representation. Indeed, the runtime system implements a virtual machine that can take either the unoptimized query plan or the final optimized plan and execute it, leveraging any, possibly parallel, infrastructure available for that purpose.
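The two clean-up passes, plus the copy folding implied by the extended dead-variable rule, can be sketched as follows. Statements are (target, operation, arguments) triples; the function names and the iterate-to-fixpoint driver are illustrative, not the planner's actual code.

```python
# Illustrative CSE / copy propagation / DCE over a fused loop body.
from typing import List, Set, Tuple

Stmt = Tuple[str, str, Tuple[str, ...]]   # (target, primitive or "copy", arguments)

def cse(body: List[Stmt]) -> List[Stmt]:
    """Replace repeated primitive calls on equivalent inputs and parameters
    with copies from the temporary that already holds the result."""
    available, out = {}, []
    for target, op, args in body:
        key = (op, args)
        if op != "copy" and key in available:
            out.append((target, "copy", (available[key],)))
        else:
            available.setdefault(key, target)
            out.append((target, op, args))
    return out

def copy_propagate(body: List[Stmt]) -> List[Stmt]:
    """For x = copy(y) between temporaries, replace later uses of x by y."""
    repl, out = {}, []
    for target, op, args in body:
        args = tuple(repl.get(a, a) for a in args)
        if op == "copy":
            repl[target] = args[0]
        out.append((target, op, args))
    return out

def dce(body: List[Stmt], outputs: Set[str]) -> List[Stmt]:
    """Drop statements whose targets are temporaries never used later."""
    used, kept = set(outputs), []
    for target, op, args in reversed(body):
        if target in used:
            kept.append((target, op, args))
            used |= set(args)
    return list(reversed(kept))

def optimize(body: List[Stmt], outputs: Set[str]) -> List[Stmt]:
    """Reapply the passes until the loop body stops changing."""
    while True:
        new = dce(copy_propagate(cse(body)), outputs)
        if new == body:
            return new
        body = new

# Fused body for the two running-example queries, before clean-up.
body = [
    ("T0", "Retrieval",  ("I",)),
    ("T1", "Correction", ("T0", "WaterVapor")),
    ("O1", "Composite",  ("T1", "MaxNDVI")),
    ("T2", "Retrieval",  ("I",)),                  # redundant with T0
    ("T3", "Correction", ("T2", "WaterVapor")),    # redundant once T2 = copy(T0)
    ("O2", "Composite",  ("T3", "MinCh1")),
]
print(optimize(body, outputs={"O1", "O2"}))
```

The duplicated Retrieval and Correction calls disappear; only the two Composite calls that produce O1 and O2 remain, sharing T1, which is the kind of reuse the fused loop exposes.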
4 Experimental Evaluation
The evaluation of the techniques presented in this paper was carried out on the Kronos application (see Section 2.2). It was necessary to re-implement the Kronos primitives to conform to the interfaces of our database system. However, employing a real application ensures a more realistic scenario for obtaining experimental results. On the other hand, we had to employ synthetic workloads to perform a parameter sweep of the optimization space. We utilized a statistical workload model based on how real users interact with the Kronos system, which we describe in Section 4.1.
We designed several experiments to illustrate the impact of the compiler optimizations on the overall batch processing performance, using AVHRR datasets and a mix of synthetic workloads. All the experiments were run on a 24-processor SunFire 6800 machine with 24 GB of main memory running Solaris 2.8. We used a single processor of this machine to execute queries, as our main goal in this paper is to evaluate the impact of the various compiler optimization techniques on the performance of query batches. Leverage from running in a multi-processor environment will be investigated in future work, to obtain further decreases in query batch execution time. A dataset containing one month (January 1992) of AVHRR data was used, totaling about 30 GB.
4.1 A Query Workload Model
In order to create the queries that are part of a query batch, we employed a variation of the Customer Behavior Model Graph (CBMG) technique. CBMG is utilized, for example, by researchers analyzing performance aspects of e-business applications and website capacity planning. A CBMG can be characterized by a set of states, a set of transitions between states, and a matrix of transition probabilities between the states. In our model, the first query in a batch specifies a geographical region, a set of temporal coordinates (a continuous period of days), a resolution level (both vertical and horizontal), a correction algorithm (from 3 possibilities), and a compositing operator (also from 3 different algorithms). The subsequent queries in the batch are generated based on the following operations: another new point of interest, spatial movement, temporal movement, resolution increase or decrease, applying a different correction algorithm, or applying a different compositing operator. In our experiments, we used the probabilities shown in Table 1 to generate multiple queries for a batch with different workload profiles. For each workload profile, we created batches of 2, 4, 8, 16, 24, and 32 queries. A 2-query batch requires processing around 50 MB of input data and a 32-query batch requires around 800 MB, given that there is no redundancy in the queries forming the batch and also that no optimization is performed. There are 16 available points of interest; for example, Southern California, the Chesapeake Bay, the Amazon Forest, etc. This way, depending on the workload profile, subsequent queries after the first one in the batch may either remain around that point (moving around its neighborhood and generating new data products with other types of atmospheric correction and compositing algorithms) or move on to a different point. These transitions are controlled according to the transition probabilities in Table 1. More details about the workload model can be found in [4].

For the results shown in this paper, each query returns a data product for a 256 × 256 pixel window. We have also produced results for larger queries – 512 × 512 data products. The results from those queries are consistent with the ones we show here. In fact, in absolute terms the performance improvements are even larger.
However, for the larger data products we had to restrict the experiments to smaller batches of up to 16 queries, because the memory footprint exceeded 2 GB (the amount of addressable memory using 32-bit addresses available when utilizing gcc 2.95.3 on Solaris).
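A hedged sketch of how such a CBMG-style generator might be written follows; the operation names mirror the list above, but the probability values and the way each operation perturbs a query are placeholders, not the contents of Table 1.

```python
# Illustrative CBMG-style batch generator (probabilities and perturbations are made up).
import random

OPERATIONS = ["new_point_of_interest", "spatial_movement", "temporal_movement",
              "resolution_change", "new_correction", "new_compositing"]

def generate_batch(size: int, probs: dict, seed: int = 0) -> list:
    """Start from a random initial query; derive each subsequent query by
    applying one operation drawn from the transition probabilities."""
    rng = random.Random(seed)
    batch = [{"point": rng.randrange(16),        # 16 points of interest
              "days": (1, 10),
              "resolution": 1,
              "correction": rng.randrange(3),    # 3 correction algorithms
              "compositing": rng.randrange(3)}]  # 3 compositing operators
    for _ in range(size - 1):
        q = dict(batch[-1])
        op = rng.choices(OPERATIONS, weights=[probs[o] for o in OPERATIONS])[0]
        if op == "new_point_of_interest":
            q["point"] = rng.randrange(16)
        elif op == "spatial_movement":
            q["point"] = (q["point"] + 1) % 16   # crude stand-in for a nearby region
        elif op == "temporal_movement":
            q["days"] = (q["days"][0] + 1, q["days"][1] + 1)
        elif op == "resolution_change":
            q["resolution"] = rng.choice([1, 2, 4])
        elif op == "new_correction":
            q["correction"] = rng.randrange(3)
        else:
            q["compositing"] = rng.randrange(3)
        batch.append(q)
    return batch

# A high-locality profile in the spirit of Workload 1 (numbers are invented).
profile = {"new_point_of_interest": 0.05, "spatial_movement": 0.10,
           "temporal_movement": 0.25, "resolution_change": 0.10,
           "new_correction": 0.25, "new_compositing": 0.25}
print(generate_batch(4, profile))
```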
4.2 Experimental Study
We studied the impact of the proposed optimizations varying the following quantities: 1) the number of queries in a batch (from a 2-query batch up to a 32-query batch); 2) the optimizations that are turned on (none; only common subexpression elimination and loop fusion – CSE-LF; or common subexpression elimination, dead code elimination, and loop fusion – CSE-DCE-LF); and 3) the workload profile for a batch. Workload 1 represents a profile with high probability of reuse across the queries. In this workload profile, there is high overlap in regions of interest across queries. This is achieved by a low probability for the New Point-of-Interest and Movement values, as seen in the table. Moreover, the probabilities of choosing new correction, compositing, and resolution values are low. Workload 4, on the other hand, describes a profile with the lowest probability of data and computation reuse. The other profiles – 2 and 3 – are in between the two extremes in terms of the likelihood of data and computation reuse. Our study collected five different performance metrics: batch execution time, number of statements executed (loop body statements), average query turnaround time¹, average query response time², and plan generation time (i.e., the amount of time from when the parser calls the query planner until the time the plan is fully computed).

Batch Execution Time. The amount of time required for processing a batch of queries is the most important metric, since that is the main goal of the optimizations we employ. Figure 3 (a) shows the reduction in execution time for different batches and workload profiles, comparing against executing the batch without any optimizations. The results show that reductions in the range of 20% to 70% in execution time are achieved.
¹ Query turnaround time is the time from when a query is submitted until when it is completed.
² Query response time is the time between when a query is submitted and the time the first results are returned.
Fig. 3. (a) The reduction in batch execution time. (b) Average query turnaround time
Greater reductions are observed for larger batches using workload profile 1, which shows high locality of interest. In this profile, there is a very low chance of selecting a new point of interest or performing spatial movement (which implies high spatial and temporal locality, as seen in Table 1). Therefore, once some data is retrieved and computed over, most queries will reuse at least the input data, even if they require different atmospheric correction and compositing algorithms. Additionally, there are only 16 points of interest, as we previously stated, which means that across the 32 queries at least some of the queries will be near the same point of interest, which again implies high locality. On the other hand, when a batch has only 2 queries, the chance of having spatio-temporal locality is small, so the optimizations have little effect. The 2-query batches for workload profiles 1 and 3 show this behavior (note that the y-axis in the chart starts at -10% improvement). In some experiments we observe that the percent reduction in execution time decreases when the number of queries in a batch is increased (e.g., going from a 4-query batch to an 8-query batch for Workload 3). We attribute this to the fact that queries in different batches are generated randomly and independently. Hence, although a workload is designed to have a certain level of locality, it is possible that different batches in the same workload may have different amounts of locality, due to the distribution of queries. The important observation is that the proposed approach takes advantage of locality when it is present.

Query Turnaround Time. A query batch may be formed while the system is busy processing other queries, and interactive clients continue to send new queries that are stored in a waiting queue. In this scenario, it is also important for a database system to decrease the average execution time per query so that interactive clients experience less delay between submitting a query and seeing its results. Although the optimizations are targeted at improving batch execution time, Figure 3 (b) shows that they also improve average query turnaround time. In these experiments, queries are added to the batch as long as the system is busy. The query batch is executed as soon as the system becomes available for processing it. As seen from the figure, for the workload profiles with higher locality (1 and 2), execution time decreases by up to 55%.
Fig. 4. Time to generate a batch execution plan
locality (1 and 2), execution time decreases by up to 55%. Conversely, for batches with low locality there is little decrease in execution time, as expected.

Plan Generation Time. The application of compiler optimization strategies introduces costs for computing the optimized query plan for the query batch. Figure 4 illustrates how much time is needed to obtain the execution plan for a query batch. There are two key observations here. The first observation is that the planning time depends on the number of exploitable optimization opportunities that exist in the batch (i.e., locality across queries). Hence, if there is no locality in the query batch, the time to generate the optimized plan (which should be the same as the unoptimized plan) is roughly equivalent to the time to compute the non-optimized plan. The second observation is that the time to compute a plan for batches that have heavily correlated queries increases exponentially (due to the fact that each spatio-temporal overlap detected produces multiple new loops that must be recursively inserted into the optimized plan). However, even though much more time is spent in computing the plan, executing the query batch is several orders of magnitude more expensive than computing the plan. As seen from Figures 3 and 4, query batch planning takes milliseconds, while query batch execution time can be hundreds of seconds, depending on the complexity and size of the data products being computed.

Finally, a somewhat surprising observation is that adding dead code elimination to the set of optimizations slightly decreases the time needed to compute the plan. The reason is that the loop merging operation and subsequent common subexpression elimination operations become simpler if useless statements are removed from the loop body. This additional improvement is doubly beneficial because dead code elimination also decreases batch execution time, as seen in Figures 3 (a) and (b).
5 Conclusions
In this paper we have described a framework for optimizing the execution of scientific data analysis query batches that employs well-understood compiler optimization strategies. The queries are described using a declarative representation – PostgreSQL – which in itself represents an improvement in how easily queries can be formulated by end users. This representation is transformed into an imperative representation using loops that iterate over a multidimensional spatio-temporal bounding box. The imperative representation lends itself to various compiler optimization techniques, such as loop fusion, common subexpression elimination, and dead code elimination. Our experimental results using a real application show that the optimization process is relatively inexpensive and that, when there is some locality across the queries in a batch, the benefits of the optimizations greatly outweigh the costs.

Two important issues we plan to address in the near future are batch scheduling for parallel execution and resource management. Use of loop fusion techniques not only reduces loop overheads, but also exposes more operations for parallel execution and local optimization. In fact, because of the nature of our target queries (i.e., queries involving primitives with no side effects and generalized reduction operations), each statement of the loop body can be carried out in parallel. This means that scheduling the loop iterations in a multithreaded environment or across a cluster of workstations can improve performance, assuming that synchronization and communication issues are appropriately handled. With respect to resource utilization, there are complex issues to be addressed, in particular with regard to memory utilization. When two or more queries are fused into the same loop, all the output buffers for the queries need to be allocated (at least partially) to hold the results produced by the loop iteration. Moreover, those buffers may need to be maintained in memory for a long time, since all the iterations required to complete a query may be spread across a large collection of loops that may be executed over a long time period (i.e., the first and last loop for a query may be widely separated in the batch plan).

Another extension we plan to investigate in a future prototype is to integrate the active caching system and the batch optimizer. In that case, the batch optimizer can also leverage the cache contents when performing common subexpression eliminations.
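To make the fused imperative representation concrete, the following is a minimal illustrative sketch, not the system's actual generated plan; the data types and primitive names (correct, compositeMax, compositeMin) are hypothetical stand-ins for the atmospheric correction and compositing primitives.

    #include <cstddef>
    #include <vector>

    struct Pixel { float raw; };

    // Hypothetical primitives standing in for correction and compositing steps.
    float correct(const Pixel& p)          { return p.raw * 0.98f; }
    float compositeMax(float acc, float v) { return v > acc ? v : acc; }
    float compositeMin(float acc, float v) { return v < acc ? v : acc; }

    // Two queries over the same bounding box fused into a single loop: the input
    // tile is read and corrected once (common subexpression elimination), while
    // each query keeps its own output buffer, as discussed for memory utilization.
    void fusedBatch(const std::vector<Pixel>& tile,
                    std::vector<float>& outQuery1, std::vector<float>& outQuery2) {
      for (std::size_t i = 0; i < tile.size(); ++i) {
        float corrected = correct(tile[i]);                    // shared subexpression
        outQuery1[i] = compositeMax(outQuery1[i], corrected);  // query 1: max composite
        outQuery2[i] = compositeMin(outQuery2[i], corrected);  // query 2: min composite
      }
    }

In a sketch like this, each statement in the fused loop body writes only to its own query's output buffer, which is what makes the per-statement parallel execution mentioned above possible.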
Semantic-Driven Parallelization of Loops Operating on User-Defined Containers
Dan Quinlan, Markus Schordan, Qing Yi, and Bronis R. de Supinski
Lawrence Livermore National Laboratory, USA
{dquinlan,schordan1,yi4,bronis}@llnl.gov
Abstract. We describe ROSE, a C++ infrastructure for source-to-source translation that provides an interface for programmers to easily write their own translators for optimizing the use of high-level abstractions. Utilizing the semantics of these high-level abstractions, we demonstrate the automatic parallelization of loops that iterate over user-defined containers that have interfaces similar to the lists, vectors and sets in the Standard Template Library (STL). The parallelization is realized in two phases. First, we insert OpenMP directives into a serial program, driven by the recognition of the high-level abstractions, containers, that are thread-safe. Then, we translate the OpenMP directives into library routines that explicitly create and manage parallelism. By providing an interface for the programmer to classify the semantics of their abstractions, we are able to automatically parallelize operations on containers, such as linked-lists, without resorting to complex loop dependence analysis techniques. Our approach is consistent with general goals within telescoping languages.
1 Introduction
In object-oriented languages such as C++, abstractions are a key aspect of library design, sharing aspects of language design, which aims to provide the application developer with an efficient and convenient interface. For example, the C++ Standard Template Library (STL), parts of which are standardized within the C++ standard libraries, includes a collection of template classes that can be used as containers for elements of a user-specified type. Although each of these containers provides different means to access their elements, they all provide a unified sequential access interface and thus can all be used in the code fragment in Figure 1. This design strategy permits all containers to be used interchangeably in algorithms that process a sequence of elements. Although part of the C++ standard libraries, STL containers are not defined as part of the C++ language explicitly and thus can be considered user-defined. At this level, library design greatly resembles language design, but without increasing the complexity of the compiler. The term telescoping languages was coined by Kennedy [14] in 2000. Within telescoping languages, a base language is chosen and domain-specific types are constructed entirely within the base language with no language extension. The iterative progression of a library of abstractions to a higher-level language comes only with compile-time support for its user-defined types.
Fig. 1. Example: a code fragment processing a user-defined container
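Since the figure itself is not reproduced here, the following hypothetical fragment only sketches the kind of code Fig. 1 refers to; the names MyContainer and foo are taken from the discussion below, while the element type and loop body are guesses.

    #include <list>

    typedef std::list<int> MyContainer;   // any container with an STL-like iterator interface

    // Assumed side-effect free: modifies only the element it is given, no globals.
    void foo(int& element) { element *= 2; }

    void processAll(MyContainer& c) {
      // The same sequential-access idiom works for list, vector, set, or a
      // user-defined container exposing begin()/end() iterators.
      for (MyContainer::iterator it = c.begin(); it != c.end(); ++it)
        foo(*it);
    }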
The result can alternatively be thought of as a domain-specific language implemented via a library without formal language extension. The telescoping aspect relates to the optional compile-time optimizations for the library's abstractions, which are defined entirely within the base language (without language extension). The optimizations are often expressed as lower-level code being generated in place of the high-level abstractions. It is clarifying to note that we don't optimize the high-level abstractions directly within the library, but instead optimize the use of the abstractions within applications of the library. As a special case of this general strategy, in this paper we utilize the semantics of the user-defined containers and generate parallelized code. The idea of higher-level languages driving the generation of lower-level C++ code was originally discussed by Stroustrup in 1994 [23].

Due to the increasing popularity of the STL library, more libraries now provide containers that conform to the STL interface. Since the library developer knows the semantics of the library's containers and of each element in the containers, he is in a unique position to write a source-to-source translator that optimizes the performance of every program that uses his library. For the simplicity of the application developers using the library, the translator is an optional part of their development process, since the translator only performs optimizations. For example, in Figure 1, if the library writer knows that none of the elements in MyContainer can be aliased and that the function foo is side-effect free (i.e., it does not modify any global variables), he can safely parallelize the surrounding loop and thus achieve better performance for the application using the library. Due to the undecidability of precise alias and control-flow analysis, it could be impossible for a compiler to automatically figure out this semantic information. Thus, our approach can better optimize any application code that uses the library, since we allow the library developer to communicate this semantic information to the source-to-source translator. The application developer sees only an automated process.

We present ROSE, a C++ source-to-source infrastructure especially for this purpose [18, 19]. Within ROSE, we can use specific type information (including semantics) about the high-level abstractions as a basis for optimizing applications. Essentially, the compiler has more information, thus enabling greater levels of optimization. In the case of parallelizing user-defined containers, for example, we can automate the introduction of OpenMP directives into otherwise serial code because the library writer guarantees the required semantics. Based on the additional semantics of the user-defined containers, this approach permits parallel execution of appropriate fundamentally serial code.
Fig. 2. ROSE source-to-source infrastructure with front-end/back-end reinvocation
Section 2 presents the ROSE infrastructure in more detail. Using the ROSE approach for processing high-level abstractions, we present a source-to-source translator that automatically introduces OpenMP directives in loop computations on STL-like container classes such as the one in Figure 1. The only additional information that needs to be provided by the library writer is the set of container classes that disallow aliased elements and the side-effects of library functions. We then invoke another translator within ROSE to recognize specific OpenMP pragma directives and to translate these directives (along with their associated code fragments). The final result is a parallel program that explicitly creates and manages parallelism.
2 Infrastructure
The ROSE infrastructure offers several components to the library writer to build a source-to-source translator. The translator is then used to read in the sequential user code, parallelize it, and generate code with OpenMP directives explicitly expressing parallelism. A complete C++ front-end is available that generates an object-oriented annotated abstract syntax tree (AST) as an intermediate representation. Several different components can be used to build the mid-end of a translator that operates on the AST to implement transformations: a predefined traversal mechanism; a restructuring mechanism; and an attribute evaluation mechanism. Other features include parsing of OpenMP directives and integrating these directives into the AST. A C++ back-end can be used to unparse the AST and generate C++ code (see Figure 2).

2.1 Front-End
We use the Edison Design Group C++ front-end (EDG) [11] to parse C++ programs. The EDG front-end performs a full type evaluation of the C++ program and then generates an AST, which is represented as a C data structure. We translate this data structure into an object-oriented abstract syntax tree (AST) which is used by the mid-end as an intermediate representation. We use Sage III
as an intermediate representation, which we have developed as a revision of the Sage II [7] AST restructuring tool.

2.2 Mid-End
The mid-end supports restructuring of the AST. The programmer can add code to the AST by specifying a source string using C++ syntax, or by constructing subtrees node by node. A program transformation consists of a series of AST restructuring operations, each of which specifies a location in the AST where a code fragment (specified as a C++ source string or as an AST subtree) should be inserted, deleted, or replaced. The order of the restructuring operations is based on a pre-defined traversal. A transformation traverses the AST and invokes multiple restructuring operations on the AST. To address the problem of restructuring the AST while traversing it, we make restructuring operations side-effect free functions that define a mapping from one subtree of the AST to another subtree. The new subtree is not inserted until after the complete traversal of the original subtree. We provide interfaces for invoking restructuring operations that buffer these operations to ensure that no subtrees are replaced while they are being traversed.

The mid-end also provides an attribute evaluation mechanism that allows the computation of arbitrary attribute values for AST nodes. During traversal, context information can be passed down the AST as inherited attributes, and results of transforming a subtree can be passed up the tree as synthesized attributes. Examples of inherited and synthesized attributes include the type information of objects, the sizes of arrays, the nesting levels of loops, and the scopes of associated pragma statements. These attributes can then be used to compute constraints on transformations — for example, to decide whether to apply a restructuring operation on a particular AST node.

Our infrastructure supports the use of C++ source strings to define code fragments. Any source string that represents a valid declaration, statement list, or expression can specify a code pattern to be inserted into the AST. The translation of a source code string into an AST fragment is performed by reinvoking the front-end: our system extends the string to form a complete program, which it then parses into an AST. From this AST, it finally extracts the AST fragment that corresponds to the original string. This AST fragment is inserted into the AST of the original program.

Further, we provide an abstract C++ grammar that covers all of C++ and defines the set of all abstract syntax trees. The grammar has 165 production rules. It is abstract with respect to the concrete C++ grammar and does not contain any C++ syntax. We have integrated the attribute grammar tool Coco [16], ported to C++ by Frankie Arzu; this allows the use of the abstract C++ grammar. In the semantic actions, source strings and restructuring operators can be used to specify the source code transformation. In Section 3.4 we show how a transformation can be specified using the abstract grammar, source strings, and AST restructuring operations.
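As a schematic illustration of this traversal and attribute scheme, the sketch below uses hypothetical class names, not the actual ROSE or Sage III API: an inherited attribute carries the loop nesting level down the tree, while restructuring requests are only buffered and applied after the traversal completes.

    #include <cstddef>
    #include <string>
    #include <vector>

    // Hypothetical AST node and buffered-replacement types, for illustration only.
    struct AstNode {
      std::string kind;                  // e.g., "ForStatement", "Block"
      std::vector<AstNode*> children;
    };

    struct Replacement { AstNode* target; std::string newSourceString; };

    class ParallelizeTraversal {
    public:
      // loopNesting is the inherited attribute: it flows from parent to child.
      void visit(AstNode* n, int loopNesting) {
        if (n->kind == "ForStatement") {
          ++loopNesting;
          if (loopNesting == 1) {        // e.g., only consider outermost loops
            Replacement r;
            r.target = n;
            r.newSourceString = "#pragma omp parallel for ...";
            pending_.push_back(r);
          }
        }
        for (std::size_t i = 0; i < n->children.size(); ++i)
          visit(n->children[i], loopNesting);
      }
      // Buffered operations are applied only after the traversal, so no subtree
      // is replaced while it is still being traversed.
      const std::vector<Replacement>& pending() const { return pending_; }
    private:
      std::vector<Replacement> pending_;
    };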
2.3 Back-End
The back-end unparses the AST and generates C++ source code. It can unparse either all included (header) files or only the source file(s) specified on the command line. This feature is important when transforming user-defined data types, for example, when adding compiler-generated methods. Using this feature preserves all C preprocessor (cpp) control structures (including comments). Except where transformations have been applied, output code from the back-end appears nearly indistinguishable from the input code, which simplifies acceptance by users. The back-end can also be invoked during a transformation to obtain the source code string that corresponds to a subtree of the AST. Such a string can be combined with new code (also represented as a source string) and inserted into the AST. Both phases, the introduction of OpenMP directives and the translation of OpenMP directives, can be automated using the above mechanisms, as described in the following sections.
3 Parallelizing User-Defined Containers Using OpenMP
The OpenMP standard provides a convenient mechanism for achieving high performance on modern parallel machine architectures. By extending traditional languages such as Fortran, C, and C++, OpenMP allows developers to write parallel applications without the explicit management of threads or communication. Introducing OpenMP directives into a sequential program thus requires substantially less work than using distributed memory programming models like MPI. In addition, current use of distributed memory programming models only extends to a subset of the processors available on IBM machines at LLNL. Specifically, the limit on the number of MPI tasks requires a hybrid programming model that combines message passing and shared memory programming in order to use all of the machine's processors. These hybrid programming models significantly increase the complexity of the already difficult task of developing scientific applications. Thus, our approach is particularly useful in extending existing distributed memory applications to use these modern computer architectures effectively. By automating (or simplifying) the introduction of parallelism to leverage the shared memory nodes and, thus, a larger part of these machines, we can significantly improve programmer productivity. The use of dual shared memory and distributed memory programming models is a more general issue within cluster computing (using a connected set of shared memory nodes).

Current compiler technology [1, 24, 5, 4, 17] can efficiently automate the introduction of OpenMP directives to regular loops that iterate over random-access arrays as defined by Fortran or C. However, because most C++ programs, including many scientific applications, use higher-level abstractions for which semantics are unknown to the compiler, these abstractions are left unoptimized by most parallelizing compilers. By providing mechanisms to optimize object-oriented library abstractions, we thus allow the efficient tailoring of the
programming environment as essentially a programming language that is more domain-specific than a general-purpose language could allow, thereby improving programmer productivity without degrading application performance. The ROSE infrastructure provides support for generating source-to-source translators that essentially act as compilers for these domain-specific languages. The designer of the high-level abstractions captures the semantics of those abstractions so that the source-to-source translators can generate high-performance code for the user of the domain-specific language. Generally, the designer of the abstractions will be a library writer, although nothing prevents the end user from designing clean interfaces and capturing the semantics for his specific abstractions. In this section, we present a mechanism to automatically introduce OpenMP directives for parallelization of iterators that operate on user-defined containers.

3.1 User-Defined Containers
Scientific applications are increasingly using STL, but at present there is no path toward automated shared memory parallelization of sequential STL usage in application programs. Clearly, our goal in addressing the parallelization of user-defined container classes includes eventually processing STL containers. Such work would have broad impact on how STL could be used within scientific programming. At present, the ROSE infrastructure does not handle templates sufficiently well to address STL optimization directly. Figure 3 presents a compromise, an example container class that is similar to the STL list class. It has an identical iterator interface, but does not use templates. The example list class accurately reproduces the same iterator interface as is used in STL and more general user-defined containers. The exact details of the iterator interface are not particularly important; our approach could be used to parallelize alternative methods of traversing the elements of containers. Further, the easy construction of compile-time transformations with ROSE could use even more precise semantics of domain-specific containers if necessary.

Figure 4 defines a class to support the automated transformation of iteration on user-defined containers. The automated transformation process introduces new code that uses this supporting class into the application. The SupportingOmpContainer_list class builds an array of fixed size, internally, containing pointers to the container's elements and provides a random access iterator. The generated OpenMP parallel for loop uses this random access iterator instead of the original bidirectional iterator.
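The actual supporting class is the one shown in the paper's Figure 4; purely as an illustrative sketch of the idea, such a class might look like the following (written here as a template for brevity, whereas the paper's version avoids templates; the class name is hypothetical).

    #include <cstddef>
    #include <vector>

    // Snapshots pointers to a container's elements so that an OpenMP parallel
    // for can index them in O(1) from any thread.
    template <typename Container>
    class OmpRandomAccessView {
    public:
      explicit OmpRandomAccessView(Container& c) {
        for (typename Container::iterator it = c.begin(); it != c.end(); ++it)
          elems_.push_back(&*it);
      }
      std::size_t size() const { return elems_.size(); }
      typename Container::value_type& operator[](std::size_t i) { return *elems_[i]; }
    private:
      std::vector<typename Container::value_type*> elems_;
    };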
3.2 Collecting Domain-Specific Information
Our goal is to parallelize loops that iterate over user-defined containers. Given a candidate loop, we must ensure parallelization safety, that is, dependences cannot exist between different iterations of the loop body [2]. In determining
Fig. 3. Example: Code fragment showing list class using iterators
this constraint, our algorithm differs from traditional compiler approaches in that we ask the library developer to supply the following domain-specific information to drive the analysis.

known_containers: A set of user-defined containers for which the library writer guarantees element uniqueness, i.e., instances of the container class include no aliased or overlapping elements. All of these containers must have a forward iterator interface as shown in Figure 1. Since the elements cannot be aliased to each other, our analysis can safely conclude that it is safe to parallelize a loop that uses the iterator interface of the container, as long as the loop body does not carry cross-iteration dependences.

known_functions: A set of user-defined functions whose side effects are known to the library writer. These functions can include both global functions and the member functions of user-defined abstractions. The side effects of each function are declared by the library writer; specifically, for each function, the writer lists which of its parameters and which global variables it can modify. This information allows us to compute the set of variables modified by an arbitrary statement without resorting to inter-procedural side effect analysis.
Fig. 4. Example: Code fragment showing the implementation of supporting abstraction for OpenMP translation
To collect the above information, we ask the library writer to supply two files: one contains a list of known_containers, each container specified by a string representing its class name; the other file contains a list of known_functions, each function specified by a string representing its name (for class member functions, the class name is specified as part of the function name), a list of strings representing the names of global variables modified by the function, and a list of integers representing the indices of the function parameters being modified. Our compiler reads these two files to construct a user-specification class object (variable libSpec in Figure 5), which then uses the collected information to answer queries from the parallelization analysis algorithm shown in Figure 5.

Note that by using type names to recognize the parallelizable containers and iterators, we are able to collect sufficient information without going into details of describing specific properties, such as the specific interface required from the container and iteration classes. Similarly, by describing the side effects of functions using function and variable names, library writers do not need to change their code. This is especially useful if the programmer does not have the source code of the functions for annotations.
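The exact file syntax is not given here, so the following is only a hypothetical illustration of what the two annotation files might contain; all entries and the column layout are invented for the example.

    # known_containers: one container class name per line (hypothetical layout)
    list
    MyContainer

    # known_functions: name, globals it modifies, indices of modified parameters
    Foo::f        {}            {}     # side-effect free member function
    logMessage    {globalLog}   {}     # writes one global variable
    fillBuffer    {}            {0}    # modifies its first parameter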
3.3 Safety of Parallelization
Figure 5 presents our algorithm for the parallelization safety analysis of user-defined containers, where TestParallelLoop is the top-level function, and function get_modified_locs is invoked to compute the set of memory locations modified by a list of arbitrary statements.
Fig. 5. Algorithm for safety analysis of parallelization
The domain-specific information described in Section 3.2 is represented as the libSpec input parameter. In Figure 5, the function get_modified_locs is used to compute the set of memory locations modified by each iteration of the loop body. For each function invocation within the loop body, if the invoked function does not belong to the annotated functions in libSpec, we assume that it could induce unknown side effects and thus conservatively disallow the loop parallelization; otherwise, we summarize the locations modified by the function and add them into modLocs, the result of get_modified_locs. We then add to modLocs all the memory locations modified directly by each statement. Note that get_modified_locs returns not only variable names, which represent storage locations allocated either statically or on the runtime stack; it also returns dynamically allocated heap locations, which are accessed through pointer and reference variables, such as *p (where p is a pointer variable) and r (where r is a reference variable in C++). Because we don't yet have an alias analysis implementation, we conservatively disallow loop parallelization whenever get_modified_locs returns such indirect memory references.

Applying get_modified_locs, the function TestParallelLoop determines whether a candidate loop can be safely parallelized. First, we examine the loop header to see if it iterates over one of the annotated parallel containers in libSpec. If the answer is 'yes', we invoke get_modified_locs to summarize the complete side effect of the loop body. To guarantee the safety of parallelization, we conservatively disallow all possible dependences across different iterations of the loop body. For each modified memory location loc returned from get_modified_locs, we require that loc satisfy one of the following two conditions: either loc is exactly the current element inside the parallel container (that is, the container element accessed by the current loop iteration), or loc is a variable locally declared within the loop body (which means that the variable is private to the current iteration and thus cannot induce cross-iteration dependences). Otherwise, we assume that either global or dynamically allocated memory could be modified and disallow the parallelization.
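The authoritative statement of the analysis is the paper's Figure 5; the following compact sketch, with simplified and hypothetical data structures (parameter-index side effects are omitted), is only meant to convey its conservative flavor.

    #include <cstddef>
    #include <map>
    #include <set>
    #include <string>
    #include <vector>

    // Hypothetical stand-ins for the analysis inputs.
    struct Statement {
      std::vector<std::string> calledFunctions;
      std::vector<std::string> locallyModifiedLocations;
    };
    struct Loop {
      std::string containerClass;              // class of the container being iterated
      std::string currentElement;              // name denoting the current container element
      std::vector<std::string> localVariables;
      std::vector<Statement> body;
    };
    struct LibSpec {
      std::set<std::string> knownContainers;
      // function name -> global variables it may modify
      std::map<std::string, std::vector<std::string> > knownFunctionSideEffects;
    };

    bool testParallelLoop(const Loop& loop, const LibSpec& spec) {
      if (spec.knownContainers.count(loop.containerClass) == 0)
        return false;                                    // not an annotated container
      std::set<std::string> locals(loop.localVariables.begin(), loop.localVariables.end());
      std::vector<std::string> modLocs;                  // roughly, get_modified_locs
      for (std::size_t s = 0; s < loop.body.size(); ++s) {
        const Statement& st = loop.body[s];
        for (std::size_t f = 0; f < st.calledFunctions.size(); ++f) {
          std::map<std::string, std::vector<std::string> >::const_iterator it =
              spec.knownFunctionSideEffects.find(st.calledFunctions[f]);
          if (it == spec.knownFunctionSideEffects.end())
            return false;                                // unknown side effects
          modLocs.insert(modLocs.end(), it->second.begin(), it->second.end());
        }
        modLocs.insert(modLocs.end(), st.locallyModifiedLocations.begin(),
                       st.locallyModifiedLocations.end());
      }
      for (std::size_t m = 0; m < modLocs.size(); ++m)   // conservative dependence test
        if (modLocs[m] != loop.currentElement && locals.count(modLocs[m]) == 0)
          return false;
      return true;
    }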
Note that the algorithm in Figure 5 is more conservative than traditional dependence-based approaches in several ways. For example, we perform no standard privatizable array analysis, aliasing analysis, interprocedural analysis, or traditional array dependence analysis [2]. Instead, we utilize the C++ variable declaration syntax (a variable is privatizable only if it is locally declared) and domain-specific semantic information from library writers to drive the analysis. However, by configuring our system with library-specific type information, we are able to optimize user-defined objects more effectively than traditional compiler techniques in many cases.
3.4 OpenMP Transformation
OpenMP transformations are specified as source-to-source translations. The input program is a sequential C++ program; the output is a parallelized program with OpenMP directives. A transformation is specified as semantic actions of our abstract C++ attribute grammar. In the following example we show how the attribute grammar, in combination with source strings and AST replacement operations, allows us to specify the introduction of OpenMP pragmas and the transformation of for-loops to conform to the required canonical form of an omp parallel for. In the example we show how to parallelize a for-loop that uses a bidirectional iterator on a container list. The generated code uses a random access iterator of a supporting class, SupportingOmpContainer_list, which allows indexed access to the elements of the list from different threads in an OpenMP parallel for.
Fig. 6. An iteration on a user-defined container l which provides an iterator interface. The object f is an instance of the user-defined class Foo. Object l is of type list. In the optimization the iterator is replaced by code conforming to the required canonical form of an OpenMP parallel for
In the example source in Fig. 6 we show an iteration on a user-defined container with a bidirectional iterator. This pattern is frequently used in applications using C++98 standard container classes. The object f is an instance of the user-defined class Foo. The transformation we present takes into account the semantics of the type Foo and the semantics of class list; it is therefore specific to these classes and their semantics. The class list offers a bidirectional iterator for accessing the elements of the list. The class Foo offers a method f which is thread-safe. Based on these semantics of the classes list and Foo, the OpenMP transformation is specified. We show the core of the transformation, which rewrites the code into the canonical form of an OpenMP for-loop as required by the OpenMP standard. Note that the variable i in the transformed code is implicitly private according to the OpenMP standard 2.0.

In the example in Fig. 7 the SgScopeStatement production is shown. The grammar symbols (excluding the suffix NT) correspond to names of classes implementing AST nodes. The semantic actions specify a transformation based on the structure of the AST. Methods of the object subst insert new source code and delete subtrees in the AST. The substitution object subst buffers pairs of target location and string. The substitution is not performed until the semantic actions of all subtrees of the target location node have been performed. The object query is of type AstQuery and provides frequently used methods for obtaining information stored in annotations of the AST. These methods are also implemented as attribute evaluations. The inherited attribute forNestingLevel is used to handle the nesting of for-loops. Whether we want to parallelize inner for statements or only the outer for statement depends on how an OpenMP compiler supports nested parallelism. In the example, isUserDefIteratorForStatement is a boolean function that determines whether a for-loop should be parallelized or not. It uses the algorithm TestParallelLoop (see Fig. 5) and additional information that can be provided by using attributes. In the example we only use the nesting level of for-loops as additional information. The object query of type AstQuery offers methods that provide information on subtrees and that have proven to be useful in different transformations. In the example we use it to obtain variable names and type names. The example shows how we can decompose different aspects of a transformation into separate attribute evaluations. The methods of the query object are implemented using the attribute evaluation mechanism.

In Fig. 6 the generated code is shown. The generated code uses a random access iterator of the supporting class SupportingOmpContainer_list. This supporting class is used to generate an array of pointers to all elements of the list to achieve a complexity of O(1) for the element access. The list of pointers is generated when the supporting container l2 is created at run time. When the generated code is compiled with an OpenMP compiler, the body can be executed in parallel at run time.
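As a rough, hypothetical illustration of the rewrite just described (the actual before/after code is in Fig. 6, and the list element type here is a guess), a bidirectional-iterator loop can be brought into the canonical OpenMP form by first materializing random-access handles to the elements, which is the role the supporting container l2 plays:

    #include <list>
    #include <vector>

    struct Foo { void f(int& element) { element *= 2; }  /* thread-safe */ };

    // Before: bidirectional iterator, not in OpenMP canonical form.
    void serialLoop(std::list<int>& l, Foo& f) {
      for (std::list<int>::iterator it = l.begin(); it != l.end(); ++it)
        f.f(*it);
    }

    // After: pointers to the elements are gathered once so the loop index gives
    // O(1) access (the job of SupportingOmpContainer_list in the paper); the
    // loop variable i is implicitly private under the OpenMP 2.0 rules cited above.
    void parallelLoop(std::list<int>& l, Foo& f) {
      std::vector<int*> l2;                      // stand-in for the supporting container
      for (std::list<int>::iterator it = l.begin(); it != l.end(); ++it)
        l2.push_back(&*it);
      const int n = static_cast<int>(l2.size());
      #pragma omp parallel for
      for (int i = 0; i < n; ++i)
        f.f(*l2[i]);
    }

Under the safety rules of Section 3.3, such a rewrite is only applied when the container is annotated in known_containers and the called method is known to have no conflicting side effects.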
Fig. 7. A part of the SgScopeStatement rule of the abstract C++ grammar with the semantic action specifying the transformation of a SgForStatement
4 Related Work
The research community has developed many automatic parallelizing compilers, examples of which include the DSystem [1], the Fx compiler [24], the Vienna Fortran Compiler [5], the Paradigm compiler [4], the Polaris compiler [17], and the SUIF compiler [21]. However, except for SUIF, which has front-ends for Fortran, C, and C++, the others listed above optimize only Fortran applications. By providing a C++ front-end for automatic parallelization, we complement previous research in providing support for higher-level object-oriented languages. In addition, we extend previous techniques by utilizing the semantic information of user-defined containers and thus allowing user-defined abstractions to be treated as part of a domain-specific language.

As more programmers now use OpenMP to express parallelism, many OpenMP compilers have been developed, including both research projects [8, 22, 3, 15] and commercial compilers (SGI, IBM, Intel, Fujitsu). In addition to OpenMP directive translation, many research compilers also investigate techniques to automatically generate OpenMP directives and to optimize the parallel execution of OpenMP applications. However, these research compilers only support applications written in C or Fortran, while existing commercial C++ compilers target only specific machine architectures and do not provide an open source-to-source transformation interface to the outside world. By providing a flexible source-to-source translator, we complement previous research by presenting an open research infrastructure for optimizing C++ constructs and OpenMP directives.
A relatively large body of work uses parallel libraries or language extensions, or both, to allow the user to parallelize their code. The Parallel Standard Library [13] uses parallel iterators as a parallel equivalent to STL iterators and provides some parallel algorithms and containers. NESL [6], CILK [10], and SPLIT-C [9] are extended programming languages, with NESL providing a library of algorithms. STAPL [20] borrows from the STL philosophy, i.e., containers, iterators, and algorithms. The user must use pContainers, pRange (similar to iterators), and pAlgorithms to express parallelism. STAPL is further distinguished in that it emphasizes both automatic support and user-specified policies for scheduling, data composition, and data dependence enforcement. In contrast, with our approach the application developer does not need to learn language extensions, nor does he need to use a parallel library. It is the library writer who needs to provide additional information, such as side effects, aliasing, etc., about the abstractions used in the library. He then builds a translator using the infrastructure presented in Section 2. This translator is used by the application developer to automatically parallelize the sequential user code.

Wu and Padua [25] originated the research on compiler parallelization of general-purpose containers. They studied three standard Java container classes: Vector, LinkedList, and Hashtable, and proposed analysis and transformation techniques that enable safe parallelization of Java applications in the presence of container-induced dependences. They also manually implemented the transformations and provided experimental results for several Java applications. In contrast, we study a more general class of user-defined containers in C++ but lack as sophisticated a dependence analysis mechanism. We have also automated the parallelization transformation through annotation mechanisms.

The Broadway Compiler system [12] is in some respects similar to our approach. It uses an annotation language and a compiler that together can customize a library implementation for specific application needs. The annotation language used in the Broadway Compiler is more sophisticated. However, it addresses optimizations of C programs only, which does not allow as great a flexibility in the expression of high-level abstractions as C++.
5 Conclusions and Future Work
This paper presents a C++ infrastructure for semantic-driven parallelization of computations that operate on user-defined containers that have an access interface similar to that provided by the Standard Template Library in C++. First, we provide an interface for library developers to inform our compiler about the semantics of their containers and the side-effects of their library functions. Then, we use this information to parallelize loops that iterate over these containers automatically when it is safe to do so.
Our analysis algorithm conservatively disallows the parallelization of loops that modify non-local memory locations, that is, memory locations that are not elements of the user-defined container and are defined outside of the loop. In the future, we will extend our algorithm to be more precise by incorporating global alias analysis and array dependence analysis techniques [2]. This more sophisticated algorithm will be as precise as those used by other automatic parallelizing compilers [1, 24, 4, 5, 17, 21], while still being more aggressive for user-defined abstractions by optimizing them as part of a domain-specific language.
References
[1] V. Adve, G. Jin, J. Mellor-Crummey, and Q. Yi. High Performance Fortran compilation techniques for parallelizing scientific codes. In Proceedings of SC98: High Performance Computing and Networking, Nov 1998.
[2] R. Allen and Ken Kennedy. Optimizing Compilers for Modern Architectures. Morgan Kaufmann, San Francisco, October 2001.
[3] Eduard Ayguade, Marc Gonzalez, and Jesus Labarta. NanosCompiler: A research platform for OpenMP extensions. In European Workshop on OpenMP, September 1999.
[4] P. Banerjee, J. A. Chandy, M. Gupta, J. G. Holm, A. Lain, D. J. Palermo, S. Ramaswamy, and E. Su. The Paradigm compiler for distributed-memory message passing multicomputers. In Proceedings of the First International Workshop on Parallel Processing, Bangalore, India, Dec 1994.
[5] S. Benkner. VFC: The Vienna Fortran Compiler. Scientific Programming, 7(1):67-81, 1999.
[6] Guy E. Blelloch. NESL: A nested data-parallel language. Technical Report CMU-CS-93-129, Carnegie Mellon University, April 1993.
[7] Francois Bodin, Peter Beckman, Dennis Gannon, Jacob Gotwals, Srinivas Narayana, Suresh Srinivas, and Beata Winnicka. Sage++: An object-oriented toolkit and class library for building Fortran and C++ restructuring tools. In Proceedings of OONSKI '94, Oregon, 1994.
[8] Christian Brunschen and Mats Brorsson. OdinMP/CCp - a portable implementation of OpenMP for C. In European Workshop on OpenMP, September 1999.
[9] David E. Culler, Andrea Dusseau, Seth Copen Goldstein, Arvind Krishnamurthy, Steven Lumetta, Thorsten von Eicken, and Katherine Yelick. Parallel programming in Split-C. In International Conference on Supercomputing, November 1993.
[10] Matteo Frigo, Charles E. Leiserson, and Keith H. Randall. The implementation of the Cilk-5 multithreaded language. In Proceedings of the ACM SIGPLAN '98 Conference on Programming Language Design and Implementation, pages 212-223, 1998.
[11] Edison Design Group. http://www.edg.com.
[12] Samuel Z. Guyer and Calvin Lin. An annotation language for optimizing software libraries. ACM SIGPLAN Notices, 35(1):39-52, January 2000.
[13] E. Johnson, D. Gannon, and P. Beckman. HPC++: Experiments with the parallel standard template library. In Proceedings of the 11th International Conference on Supercomputing (ICS-97), pages 124-131, New York, July 7-11 1997. ACM Press.
[14] Ken Kennedy, Bradley Broom, Keith Cooper, Jack Dongarra, Rob Fowler, Dennis Gannon, Lennart Johnsson, John Mellor-Crummey, and Linda Torczon. Telescoping languages: A strategy for automatic generation of scientific problem-solving systems from annotated libraries. Journal of Parallel and Distributed Computing, 61(12):1803-1826, December 2001.
[15] Seung Jai Min, Seon Wook Kim, Michael Voss, Sang Ik Lee, and Rudolf Eigenmann. Portable compilers for OpenMP. In Workshop on OpenMP Applications and Tools, July 2001.
[16] Hanspeter Moessenboeck. Coco/R - A generator for fast compiler front-ends. Technical report, ETH Zurich, February 1990.
[17] D. Padua, R. Eigenmann, J. Hoeflinger, P. Petersen, P. Tu, S. Weatherford, and K. Faigin. Polaris: A new-generation parallelizing compiler for MPPs. Technical Report 1306, Univ. of Illinois at Urbana-Champaign, Center for Supercomputing Res. and Dev., June 1993.
[18] Daniel Quinlan, Brian Miller, Bobby Philip, and Markus Schordan. Treating a user-defined parallel library as a domain-specific language. In 16th International Parallel and Distributed Processing Symposium (IPDPS, IPPS, SPDP), pages 105-114. IEEE, April 2002.
[19] Daniel Quinlan, Markus Schordan, Brian Miller, and Markus Kowarschik. Parallel object-oriented framework optimization. Concurrency and Computation: Practice and Experience, 2003, to appear.
[20] L. Rauchwerger, F. Arzu, and K. Ouchi. Standard Templates Adaptive Parallel Library (STAPL). Lecture Notes in Computer Science, 1511:402-412, 1998.
[21] S. P. Amarasinghe, J. M. Anderson, M. S. Lam, and C. W. Tseng. The SUIF compiler for scalable parallel machines. In Proceedings of the Seventh SIAM Conference on Parallel Processing for Scientific Computing, Feb 1995.
[22] Mitsuhisa Sato, Shigehisa Satoh, Kazuhiro Kusano, and Yoshio Tanaka. Design of OpenMP compiler for an SMP cluster. In European Workshop on OpenMP, September 1999.
[23] Bjarne Stroustrup. The Design and Evolution of C++. Addison-Wesley, 1994.
[24] J. Subhlok, J. Stichnoth, D. O'Hallaron, and T. Gross. Exploiting task and data parallelism on a multicomputer. In Proc. of the Sixth ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP), San Diego, May 1993.
[25] Peng Wu and David Padua. Containers on the parallelization of general-purpose Java programs. In Proceedings of the International Conference on Parallel Architectures and Compilation Techniques, Oct 1999.
Cetus – An Extensible Compiler Infrastructure for Source-to-Source Transformation*
Sang-Ik Lee, Troy A. Johnson, and Rudolf Eigenmann
Purdue University, West Lafayette, IN 47906, USA
{sangik,troyj,eigenman}@ecn.purdue.edu
http://paramount.www.ecn.purdue.edu
Abstract. Cetus is a compiler infrastructure for the source-to-source transformation of programs. We created Cetus out of the need for a compiler research environment that facilitates the development of interprocedural analysis and parallelization techniques for C, C++, and Java programs. We will describe our rationale for creating a new compiler infrastructure and give an overview of the Cetus architecture. The design is intended to be extensible for multiple languages and will become more flexible as we incorporate feedback from any difficulties we encounter introducing other languages. We will characterize Cetus’ runtime behavior of parsing and IR generation in terms of execution time, memory usage, and parallel speedup of parsing, as well as motivate its usefulness through examples of projects that use Cetus. We will then compare these results with those of the Polaris Fortran translator.
1 Introduction
Parallelizing compiler technology is most mature for the Fortran 77 language [4, 5, 17, 19]. The simplicity of the language without pointers or user-defined types makes it easy to analyze and to develop many advanced compiler passes. By contrast, parallelization technology for modern languages, such as Java, C++, or even C, is still in its infancy. When trying to engage in such research, we were faced with a serious challenge. We were unable to find a parallelizing compiler infrastructure that supports interprocedural analysis, provides an advanced software engineering environment for pass writers, and allows us to compile large, realistic applications. However, we feel these properties are of paramount importance. They enable a compiler writer to develop "production strength" passes, which can successfully transform and be evaluated with realistic benchmarks. The lack of such thorough evaluations in many current research papers has been observed and criticized by many. The availability of an easy-to-use compiler infrastructure would help improve this situation significantly. Hence, continuous research and development in this area are among the most important tasks of the compiler community. Our paper contributes to this goal.
* This material is based upon work supported in part by the National Science Foundation under Grant No. 9703180, 9975275, 9986020, and 9974976.
During an early development stage, Cetus was used in a class project. Ten students of a graduate compiler class were challenged to create a source-to-source C compiler with a number of passes fundamental to parallelization, including induction variable substitution, dependence analysis, and privatization. The students were free to choose the compiler infrastructure. Among the serious contenders were the GNU Compiler Collection (GCC) [22], the SUIF 2 [25] compiler (a.k.a. the National Compiler Infrastructure), and a "from-scratch" design building on Cetus. After an initial feasibility study, half of the students decided to pursue the GCC option and the other half the Cetus option. This provided an excellent opportunity to see if Cetus could meet our goals. The discussion of Cetus versus GCC will reflect some of the findings of the class, given in the final project review. The success of the class project led to a new infrastructure that will be made available to the research community.

Cetus has the following goals, which will be explored throughout this paper:
- The Internal Representation (IR) is visible to the pass writer (the user) through an interface, which we will refer to as the IR-API. Designing a simple, easy-to-use IR-API that is extensible for future capabilities – especially to support other languages – is the most difficult engineering task.
- It must be easy to write source-to-source transformations and optimization passes. The implementation is an object-oriented class hierarchy with a minimal number of IR-API method names (using virtual functions and consistent naming), easy-to-use IR traversal methods, and information that can be inferred from other data strictly hidden from the user.
- Ease of debugging can be decisive for the success of any compiler project that makes use of the infrastructure. The IR-API should make it impossible to create inconsistent program representations, but we still need tools that catch common mistakes and environments that make it easy to track down bugs if problems occur.
- Cetus should run on multiple platforms with no or minimal modification. Portability of the infrastructure to a wide variety of platforms will make Cetus useful to a larger community.
2 Design Rationale and Comparison with Existing Infrastructures
From a substantial list of compiler infrastructures, we choose to discuss three open-source projects that most closely match our goals. The goals are to create a source-to-source infrastructure that supports C and is extensible to other languages. The three projects are the Polaris, SUIF, and GNU compilers. We explain our reasons for not using these infrastructures as our basis, and also discuss important features of these compilers that we want to adopt in Cetus.

2.1 The Polaris Compiler
The Polaris [5, 15] compiler, which we have co-developed in prior work, was an important influence on the design of our new infrastructure. Polaris is written
in C++ and operates on Fortran 77 programs. So far, no extensions have been made to handle Fortran 90, which provides a user-defined type system and other modern programming language features. Polaris' IR is Fortran-oriented [7] and extending it to other languages would require substantial modification. In general, Polaris is representative of compilers that are designed for one particular language, serve their purpose well, but are difficult to extend. Cetus should not be thought of as "Polaris for C" because it is designed to avoid that problem. However, there are still several Polaris features that we wanted to adopt in Cetus. Polaris' IR can be printed in the form of code that is similar to the source program. This property makes it easy for a user to review and understand the steps involved in Polaris-generated transformations. Also, Polaris' API is such that the IR is in a consistent state after each call. Common mistakes that pass writers make can be avoided in this way.

2.2 SUIF – National Compiler Infrastructure
The SUIF [25] compiler is part of the National Compiler Infrastructure (NCI), along with Zephyr [3], whose design began almost a decade ago. The infrastructure was intended as a general compiler framework for multiple languages. It is written in C++, like Polaris, and the currently available version supports analysis of C programs. SUIF 1 is a parallelizing compiler and SUIF 2 performs interprocedural analysis [2]. Both SUIF and Cetus fall into the category of extensible source-to-source compilers, so at first SUIF looked like the natural choice for our infrastructure. Three main reasons eliminated our pursuit of this option. The first was the perception that the project is no longer active – the last major release was in 2001 and does not appear to have been updated recently. The second reason was that, although SUIF intends to support multiple languages, we could not find complete front ends other than for C and an old version of Java. Work began on front ends for Fortran and C++ [1, 2, 11], but they are not available in the current release. Hence, as is, SUIF essentially supports a single language, C. Finally, we had a strong preference for using Java as the compiler implementation language. Java offers several features conducive to good software engineering. It provides good debugging support, high portability, garbage collection (contributing to the ease of writing passes), and its own automatic documentation system. These facts prompted us to pursue other compiler infrastructures.

2.3 GNU Compiler Collection
GCC [22] is one of the most robust compiler infrastructures available to the research community. GCC generates highly-optimized code for a variety of architectures, which rivals in many cases the quality generated by the machine vendor’s compiler. Its open-source distribution and continuous updates make it attractive. However, GCC was not designed for source-to-source transformations. Most of its passes operate on the lower-level RTL representation. Only
recent versions of GCC (version 3.0 onward) include an actual syntax tree representation. This representation was used in our class project for implementing a number of compiler passes. Other limitations are GCC compiles one source file at a time, performs separate analysis of procedures, and requires extensive modification to support interprocedural analysis across multiple files. The most difficult problem faced by the students was that GCC does not provide a friendly API for pass writers. The API consists largely of macros. Passes need to be written in C and operations lack logical grouping (classes, namespaces, etc), as would be expected from a compiler developed in an objectoriented language. GCC’s IR [21] has an ad-hoc type system, which is not reflected in its implementation language (C). The type system is encoded into integers that must be decoded and manipulated by applying a series of macros. It is difficult to determine the purpose of fields in the IR from looking at the source code, since in general every field is represented by the same type. This also makes it difficult for debuggers to provide meaningful information to the user. Documentation for GCC is abundant. The difficulty is that the sheer amount ([22] and [21] combined approach 1000 pages) easily overwhelms the user. Generally, we have found that there is a very steep learning curve in modifying GCC, with a big time investment to implement even trivial transformations. The above difficulties were considered primarily responsible for the fact that the students using GCC advanced more slowly than those creating a new compiler design. The demonstrated higher efficiency of implementation was the ultimate reason for the decision to pursue the full design of Cetus. 2.4
2.4 Cetus
Among the most important Cetus design choices were the implementation language, the parser, and the internal representation with its pass-writer interface. We will not present any language discussion in this paper. As mentioned above, the language of choice for the new infrastructure is Java. Cetus does not contain any proprietary code and relies on freely available tools. For creating a Cetus parser we considered using the parser generators Yacc [13] and Bison [9], which use lex [14] or flex [10] for scanning, and Antlr [18], which is bundled with its own scanner generator. Yacc and Bison generate efficient code in C for an LALR(1) parser, which handles most languages of interest. However, neither generates Java code. By contrast, Antlr generates code in C, C++, Java, or C#. It is an LL(k) parser, which can be more restrictive; however, there is good support for influencing parse decisions using semantic information. Antlr grammars for C and Java exist, but to our knowledge there have not been any successful attempts to use Antlr for parsing arbitrary C++ programs, though Antlr has successfully been used to parse subsets of the language. We selected Antlr for the C front end, because it generates Java code that easily interfaces with Cetus’ Java code. Extending Cetus with front ends for other languages is discussed in Section 3.4.
Fig. 1. Cetus components and interfaces: Components of Cetus only call methods of the components beneath them. The driver interprets command-line arguments and initiates the appropriate parser for the input language, which in turn uses the high-level interface to build the IR. The driver then initiates analysis and transformation passes. Utilities are provided to perform complex operations in order to keep the base and interface as uncluttered as possible
Instead of implementing our own preprocessor for the C language, we rely on external preprocessors. The preprocessed files are given to the parser, which builds the IR. For the design of the IR we chose an abstract representation, implemented in the form of a class hierarchy and accessed through the class member functions. The next section describes this architecture. We consider a strong separation between the implementation and the interface to be very important. In this way, a change to the implementation can be made while maintaining the API for its users. It also permits passes to be written before the implementation is ready. These concepts had already proved their value in the implementation of the Polaris infrastructure – the Polaris base was rewritten three to four times over its lifetime while keeping the interface, and hence all compilation passes, nearly unmodified [7]. Cetus has a similar design, shown in Figure 1, where the high-level interface insulates the pass writer from changes in the base.
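To make this separation concrete, the following Java sketch shows how a pass can be written purely against a small interface while the base underneath remains replaceable. The names (IrNode, PassApi, ExamplePass) are illustrative assumptions and are not Cetus’ actual API.

// Hypothetical sketch of separating the pass-writer interface from the base.
import java.util.List;

interface IrNode {
    List<IrNode> getChildren();   // structural access only
    IrNode getParent();
}

// The high-level interface a pass is written against.
interface PassApi {
    IrNode parse(String filename);              // front-end entry point
    void addBefore(IrNode anchor, IrNode node); // structural editing
    void remove(IrNode node);
}

// A pass sees only PassApi and IrNode, so the base below them can be
// rewritten without touching the pass, as happened with Polaris.
final class ExamplePass {
    void run(PassApi api, IrNode program) {
        for (IrNode child : program.getChildren()) {
            // analysis or transformation through the interface only
        }
    }
}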
3 Implementation
3.1 IR Class Hierarchy
Our design goal was a simple IR class hierarchy that is easily understood by users. It should also be easy to maintain, while being rich enough to enable future extension without major modification. The basic building blocks of a program are the translation units, which represent the content of a source file, and procedures, which represent individual functions. Procedures include a list
of simple or compound statements, representing the program control flow in a hierarchical way. That is, compound statements, such as IF constructs and FOR loops, include inner (simple or compound) statements, representing then and else blocks or loop bodies, respectively. Expressions represent the operations being performed on variables, including assignments to variables. Cetus’ IR contrasts with the Polaris Fortran translator’s IR in that it uses a hierarchical statement structure. The Cetus IR directly reflects the block structure of a program, whereas Polaris lists the statements of each procedure in a flat way, with a reference to the outer statement being the only means of determining the block structure. There are also important differences in the representation of expressions, which further reflect the differences between C and Fortran. The Polaris IR includes assignment statements, whereas Cetus represents assignments in the form of expressions. This corresponds to the C language feature of allowing assignment side effects within any expression. The IR is structured such that the original source program can be reproduced, but this is where source-to-source translators face an intrinsic dilemma. Keeping the IR and output similar to the input makes it easy for the user to recognize the transformations applied by the compiler. On the other hand, keeping the IR language-independent leads to a simpler compiler architecture, but may make it impossible to reproduce the original source code as output. In Cetus, the concepts of statements and expressions are closely related to the syntax of the C language, facilitating easy source-to-source translation. The correspondence between syntax and IR is shown in Figure 2. However, the drawback is increased complexity for pass writers (since they must think in terms of C syntax) and limited extensibility of Cetus to additional languages. That problem is mitigated by the provision of several abstract classes, which represent generic control constructs. Generic passes can then be written using the abstract interface, while more language-specific passes can use the derived classes. We feel it is important to work with multiple languages at an early stage, so that the result is a design that is extensible not only in theory but also in practice. Toward this goal, we have begun adding a C++ front end and generalizing the IR so that we can evaluate these design trade-offs. Preliminary work in this area is discussed below in Section 3.4. Other potential front ends are Java and Fortran 90.
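The following Java sketch illustrates the flavor of such a hierarchical statement/expression IR. The class names are assumptions made for illustration and do not reproduce Cetus’ actual hierarchy.

// Illustrative sketch of a hierarchical statement/expression IR.
import java.util.ArrayList;
import java.util.List;

abstract class Statement { }

abstract class Expression { }

// Assignments are expressions, mirroring C's assignment side effects.
class AssignmentExpression extends Expression {
    Expression lhs, rhs;
    AssignmentExpression(Expression lhs, Expression rhs) { this.lhs = lhs; this.rhs = rhs; }
}

class ExpressionStatement extends Statement {
    Expression expr;
    ExpressionStatement(Expression expr) { this.expr = expr; }
}

// Compound statements own their inner statements, so the IR has the same
// block structure as the source program (unlike a flat statement list).
class CompoundStatement extends Statement {
    final List<Statement> children = new ArrayList<Statement>();
}

class IfStatement extends Statement {
    Expression condition;
    Statement thenPart, elsePart;   // typically CompoundStatements
}

class Procedure {
    String name;
    CompoundStatement body;         // the statements of one function
}

class TranslationUnit {
    final List<Procedure> procedures = new ArrayList<Procedure>();  // one source file
}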
3.2 Navigating the IR
Traversing the IR is a fundamental operation that will be used by every compiler pass. For a block-structured IR, one important question is whether to support flat or deep traversal of the statement lists. In a flat traversal the compiler pass steps through a list of statements at a specific block level, where each statement is either simple or compound. Moving into inner or outer blocks must be done explicitly, after inspecting the type of a compound statement. By contrast, a deep traversal visits each statement and expression in lexical order, regardless of the block structure. Deep traversal is useful for tasks that need to inspect all expressions, independent of the statements they are in. An example is flow-insensitive analysis of defined and used variables in a procedure.
Fig. 2. A program fragment and its IR in Cetus. IR relationships are similar to the program structure and a symbol table is associated with each block scope
Flat traversal is needed by all passes whose actions depend on the type of statements encountered. Most passes belong to this latter category. Therefore, the Cetus base supports flat traversal. The IR-API is the interface presented by Cetus’ base. In general the Cetus base is kept minimal and free of redundant functionality, so as to make it easy to learn about its basic operation and easy to debug. Cetus also provides a utility package that offers convenience functions to pass writers. The utility package provides additional functions where they are needed by more than a single compiler pass. Obviously, this criterion depends on the passes that will be written in the future; hence, the utilities will evolve, while we expect the base to remain stable. The utility functions operate using only the IR-API. Deep traversal is an example of a utility function.
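The contrast between the two traversal styles can be sketched as follows, reusing the illustrative IR classes from the previous sketch; this is not Cetus’ actual IR-API.

// Flat traversal lives in the base; deep traversal is a utility built on it.
final class TraversalSketch {
    // Flat traversal: iterate over one block level only; a pass inspects each
    // statement's type and descends into inner blocks explicitly if it chooses.
    static int countTopLevelIfs(CompoundStatement block) {
        int count = 0;
        for (Statement s : block.children) {
            if (s instanceof IfStatement) count++;
        }
        return count;
    }

    // Deep traversal: collect every statement in lexical order, regardless of
    // block structure -- the kind of helper that belongs in the utility package.
    static java.util.List<Statement> allStatements(Statement s) {
        java.util.List<Statement> result = new java.util.ArrayList<Statement>();
        result.add(s);
        if (s instanceof CompoundStatement) {
            for (Statement child : ((CompoundStatement) s).children)
                result.addAll(allStatements(child));
        } else if (s instanceof IfStatement) {
            IfStatement i = (IfStatement) s;
            if (i.thenPart != null) result.addAll(allStatements(i.thenPart));
            if (i.elsePart != null) result.addAll(allStatements(i.elsePart));
        }
        return result;
    }
}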
3.3 Type System and Symbol Table
Modern programming languages provide rich type systems. In order to keep the Cetus type system flexible, we divided the elements of a type into three concepts: base types, extenders, and modifiers. A complete type is described by a combination of these three elements. Base types include built-in primitive types and user-defined types. Built-in types have a predefined meaning in programming languages. User-defined types are introduced into the program by composing a new structure – typedef, struct, union, and enum types in C. Base types are often combined with type extenders. Examples of type extenders are arrays, pointers, and functions. Modifiers express an attribute of a type, such as const and volatile in C. They can decorate any part of the type definition. Types are understood by decoding the description, one element at a time. We use a list
structure to hold type information, so that a type can be understood by looking at the elements of the list one at a time. Another important concept is the symbol, which represents the declaration of a variable in the program. Symbol information is kept in symbol tables, pointed to by the IR tree. Our concept of a symbol table is a repository of type information for the variables declared in a certain scope. The scope must always be considered when dealing with symbols. Cetus also treats structs in C as scopes, and their members are represented as local symbols within that scope. A compiler may use one large symbol table with hashing to locate symbols [6]. However, since source transformations can move, add, or remove scopes, we chose a distributed symbol table organization, where each scope has a separate physical symbol table. The logical symbol table for a scope includes its physical symbol table and the physical symbol tables of the enclosing scopes, with inner declarations hiding outer declarations. Although there are certain drawbacks to this approach – the need to search through the full hierarchy of symbol tables to reach a global symbol [8] – we find it to be convenient. For example, all the declarations in a scope can be manipulated as a group simply by manipulating that scope’s symbol table. It is especially convenient in allowing Cetus to support object-oriented languages, where classes and namespaces may introduce numerous scopes whose relationships can be expressed through the symbol table hierarchy. Another benefit is reduced symbol table contention during parallel parsing, which we discuss in Section 5.2.
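A minimal sketch of these two ideas, a type held as a list of elements and a distributed symbol table whose logical lookup walks outward through enclosing scopes, is shown below. The class names (TypeList, Symbol, SymbolTable) are illustrative assumptions, not Cetus’ actual classes.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A complete type as a list of elements (modifiers, extenders, base type),
// decoded one element at a time, e.g. "const", "pointer-to", "int".
class TypeList {
    final List<String> elements = new ArrayList<String>();
}

class Symbol {
    final String name;
    final TypeList type;
    Symbol(String name, TypeList type) { this.name = name; this.type = type; }
}

class SymbolTable {
    private final SymbolTable parent;                 // enclosing scope, or null
    private final Map<String, Symbol> entries = new HashMap<String, Symbol>();

    SymbolTable(SymbolTable parent) { this.parent = parent; }

    void declare(Symbol s) { entries.put(s.name, s); }

    // Inner declarations hide outer ones; reaching a global symbol may walk
    // the whole chain, the main cost of the distributed organization.
    Symbol lookup(String name) {
        for (SymbolTable t = this; t != null; t = t.parent) {
            Symbol s = t.entries.get(name);
            if (s != null) return s;
        }
        return null;
    }
}

Because each scope owns its physical table, moving or deleting a scope during a transformation simply moves or deletes that one table, which is the convenience argued for above.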
3.4 Extensions
Cetus is designed to handle additional languages. We have begun adding support for C++ and plan to add support for Java. Cetus’ IR can represent expressions and statements in these languages with the addition of new IR nodes to represent exceptions. The type system supports user-defined types that can include both data and functions. Coupled with the distributed symbol table, Cetus can represent classes and their inheritance relationships. Additional analysis and transformation passes are written using the same IR-API, so they can interoperate with existing passes. The standard IR-API makes common operations among passes more obvious, and the most useful operations can be moved into the utilities module. Future passes then become easier to write because they can make use of the new utilities. The Cetus parser generates a parse tree, which is then turned into the IR, using the same interface available to other passes. This modular structure facilitates adding other front ends. When adding a C++ front end we encountered a number of challenges. The grammar does not fit easily into any of the grammar classes supported by standard generators. The GNU C++ compiler was able to use an LALR(1) grammar, but it looks very different from the ISO C++ grammar. If any rules must be rearranged to add actions in a particular location, it must be done with extreme care to avoid introducing inconsistencies. Another challenge is that C++ has more complex rules than C for distinguishing identifiers
from type names. Because of this, substantial symbol table information must be maintained while parsing [23], so as to resolve ambiguities. We are extending Cetus for C++ by using a Generalized LR (GLR, also called stack-forking or Tomita parsing) parser generator [24]. Such parsers allow grammars that accept any language and defer semantic analysis to a later pass. GLR support has recently been added to GNU Bison [9] and provides a way to create a C++ parser that accepts the entire language without using a symbol table [12]. An important benefit is that the grammar can be kept close to the ISO grammar. We have developed a parser for the complete C++ language plus some GCC extensions using Bison; the parser is written in C++ and interfaces with Cetus by writing the parse tree to a file. We believe it is due to the language’s complexity that there are fewer research papers dealing with C++ than with other languages, despite C++’s wide use in industry. For the above reasons, Cetus should be able to provide an easy-to-use C++ infrastructure, making it a very important research tool.
4 Cetus Features
In this section we discuss a number of features that may become important for users of this new infrastructure. They deal with debugging support, readability of the transformed source code, expression manipulation capabilities, and the parallel execution of Cetus.
4.1 Debugging Aids
One important aspect that makes an infrastructure useful is providing a good set of tools to help debug future compiler passes. Cetus provides basic debugging support through the Java language, which contains exceptions and assertions as built-in features. Cetus executes within a Java virtual machine, so a full stack trace including source line numbers is available whenever an exception is caught or the compiler terminates abnormally. Furthermore, the IR-API is designed to prevent programmers from corrupting the program representation. For instance, the IR-API will throw an exception if a compiler pass illegally uses the same nodes for representing two similar but separate expressions. Internally, Cetus will detect a cycle in the IR, indicating an illegal operation. Other errors may be detected through language-specific semantic checks.
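The kind of structural check involved can be sketched as below: attaching a node that already has a parent, or that would create a cycle, raises an exception at the point of the illegal operation. This is an illustration only, not Cetus’ actual IR-API.

final class IrCheckSketch {
    static class Node {
        Node parent;
        java.util.List<Node> children = new java.util.ArrayList<Node>();

        void addChild(Node child) {
            if (child.parent != null)
                throw new IllegalStateException("node is already attached elsewhere");
            // walking up from 'this' must never reach 'child', or the tree
            // would contain a cycle
            for (Node n = this; n != null; n = n.parent)
                if (n == child)
                    throw new IllegalStateException("adding this node would create a cycle");
            child.parent = this;
            children.add(child);
        }
    }

    public static void main(String[] args) {
        Node a = new Node(), b = new Node();
        a.addChild(b);
        try {
            b.addChild(a);            // would create a cycle
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}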
4.2 Readability of the Transformed Code
An important consideration is the presentation of header files in the output. Internally, header files are expanded, resulting in a program that is much larger than the original source. This form is more difficult to read. By default, Cetus
detects code sections that were included from header files and replaces them with the original #include directives. Similarly, undoing macro substitutions would make the output code more readable; however, the current IR does not store macro definitions, so Cetus prints the expanded macros in the output. Cetus also “pretty prints” the output with appropriate spacing and indentation, potentially improving the structure of the original source program.
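One possible way to detect such sections, assuming a GNU-style preprocessor, is to read the '# lineno "file"' markers that cpp leaves in its output. The sketch below illustrates that idea; it is an assumption for illustration and not necessarily how Cetus implements the feature.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

final class HeaderFoldSketch {
    // Fold lines that came from header files back into #include directives.
    // Nested headers are emitted as top-level includes, a simplification.
    static List<String> fold(List<String> preprocessedLines, String mainFile) {
        List<String> out = new ArrayList<String>();
        Set<String> alreadyIncluded = new HashSet<String>();
        String currentFile = mainFile;
        for (String line : preprocessedLines) {
            if (line.startsWith("# ")) {               // a cpp line marker
                String[] parts = line.split("\"");
                if (parts.length >= 2) currentFile = parts[1];
                continue;
            }
            if (currentFile.equals(mainFile)) {
                out.add(line);                         // original user code
            } else if (!currentFile.startsWith("<")    // skip <built-in> etc.
                       && alreadyIncluded.add(currentFile)) {
                out.add("#include \"" + currentFile + "\"");
            }
        }
        return out;
    }
}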
4.3 Expression Simplifier
The expression simplifier provides a very important service to pass writers. Our experience with the class project showed that source-to-source transformations implemented within GCC often resulted in large, unreadable expressions. GCC does not provide a symbolic expression simplifier, and the students of our class project decided that adding such a capability would be very involved. The Cetus API, however, made it possible to add a powerful expression simplifier with a modest effort. While it is not as powerful as the simplifiers provided by math packages, such as Maple, Matlab, or Mathematica, it does reduce the expressions to a canonical form and has been able to eliminate the redundancy in the expressions we have encountered in our experiments. Expression simplification enhances the readability of the compiler output and enables other optimizations because it transforms the program into a canonical form. For instance, idiom recognition and induction variable substitution benefited most from the expression simplifier [20]. Recognition is easier because there are fewer complicated expressions to analyze and all expressions have a consistent structure. Substitution can create very long expressions that make later passes less effective unless it is immediately followed by simplification.
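As a toy illustration of what reduction to a canonical form means, the sketch below flattens nested sums and folds integer constants, so that (x + 2) + (3 + y) becomes x + y + 5. It is far simpler than Cetus’ simplifier and shares none of its code.

import java.util.ArrayList;
import java.util.List;

final class SimplifySketch {
    // Canonical form of a sum: a list of symbolic terms plus one folded constant.
    static class Sum {
        final List<String> terms = new ArrayList<String>();
        long constant = 0;
    }

    // Combine two sums the way "+" would, keeping the canonical form.
    static Sum add(Sum a, Sum b) {
        Sum r = new Sum();
        r.terms.addAll(a.terms);
        r.terms.addAll(b.terms);
        r.constant = a.constant + b.constant;
        return r;
    }

    static Sum term(String name) { Sum s = new Sum(); s.terms.add(name); return s; }
    static Sum constant(long c)  { Sum s = new Sum(); s.constant = c;    return s; }

    public static void main(String[] args) {
        Sum e = add(add(term("x"), constant(2)), add(constant(3), term("y")));
        System.out.println(e.terms + " + " + e.constant);   // prints [x, y] + 5
    }
}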
4.4 Parallel Parsing
Cetus is written in Java, which generally executes more slowly and requires more memory than C or C++. These factors contributed to Cetus taking a noticeably longer time to process its input than, for instance, the GCC compiler. The Antlr parser is reentrant, so we use Java threads to parse and generate IR for several input files at once. Some interesting observations about this approach appear next in the evaluation section.
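The structure of such a per-file parsing scheme is sketched below with one Java thread per input file; parseFile() is a hypothetical stand-in for the real front end and simply represents the per-file work.

import java.util.ArrayList;
import java.util.List;

final class ParallelParseSketch {
    static void parseFile(String filename) {
        // placeholder for: run the reentrant parser and build IR for this file
        System.out.println("parsed " + filename + " on " + Thread.currentThread().getName());
    }

    public static void main(String[] args) throws InterruptedException {
        String[] files = args.length > 0 ? args : new String[] {"a.c", "b.c", "c.c"};
        List<Thread> workers = new ArrayList<Thread>();
        for (final String f : files) {
            Thread t = new Thread(new Runnable() {
                public void run() { parseFile(f); }
            });
            workers.add(t);
            t.start();
        }
        for (Thread t : workers) t.join();   // wait for all translation units
    }
}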
5 Evaluation
In this section, we first qualitatively evaluate Cetus by discussing its use in writing an OpenMP translator. Next, we measure the efficiency of Cetus using several quantitative metrics.
5.1 Using Cetus for Translation of OpenMP Applications
OpenMP is currently one of the most popular paradigms for programming shared-memory parallel applications.
Fig. 3. Code excerpt from an OpenMP to POSIX threads translator
Unlike MPI, where programmers insert library calls, OpenMP programmers use directives, whose semantics are understood by the compiler. Compiler functionality for translating OpenMP falls into two broad categories. The first category deals with the translation of the OpenMP work-sharing constructs into a micro-tasking form. This entails extracting the work-sharing code into separate microtasking subroutines and inserting the corresponding function calls and synchronization. Cetus provides an API sufficient for these transformations. The second category deals with the translation of the data clauses, which requires support for accessing and modifying symbol table entries. Cetus provides several ways in which the pass writer can access the symbol table to add and delete symbols or change their scope. Figure 3 shows a section of the code used to handle the private data clause in OpenMP. Two different OpenMP translators have so far been implemented using Cetus; both use the same OpenMP front end. One translator generates code for shared-memory systems using the POSIX threads API. The other targets software distributed shared memory systems and was developed as part of a project to extend OpenMP to cluster systems [16]. Although the entire OpenMP 2.0 specification is not yet supported, the translators are powerful enough to handle benchmarks such as 330.art_m and 320.equake_m from the SPEC OMPM2001 suite.
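To convey the category of symbol table manipulation involved, the sketch below shows what handling a private(x) clause amounts to: declaring a fresh symbol in the microtask’s scope so that references there bind to the private copy. It is illustrative only (not the code of Figure 3) and reuses the SymbolTable, Symbol, and TypeList classes sketched in Section 3.3.

final class PrivateClauseSketch {
    static void makePrivate(SymbolTable enclosingScope,
                            SymbolTable microtaskScope,
                            String variableName) {
        Symbol shared = enclosingScope.lookup(variableName);
        if (shared == null)
            throw new IllegalArgumentException(variableName + " is not visible here");
        // A fresh declaration with the same name and type; because the
        // microtask scope is searched first, it hides the shared symbol.
        microtaskScope.declare(new Symbol(shared.name, shared.type));
    }
}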
5.2 Cetus Efficiency
Parsing and IR Construction Time. Cetus is able to parse all of the SPEC CPU2000 benchmarks that are written in C. Parsing and IR construction times for some of them are shown in the left graph of Figure 4. Parsing time is not a serious issue for modest-size programs, but it can be a problem for large benchmarks, such as 176.gcc, which requires 410 seconds on the SUN platform to parse and completely generate the IR. In this case, parallel parsing and IR generation are useful for reducing the time overhead.
Fig. 4. Parse time and speedup of the compiler for some SPEC CPU2000 benchmarks. SUN is a four-processor, 480MHz Sun Enterprise 450 running Solaris and AMD is a two-processor, 1.533GHz AMD Athlon system running Linux. The JVM is version 1.4.1_01 from Sun Microsystems
Fig. 5. Memory usage and efficiency of Cetus for some SPEC CPU2000 benchmarks
Cetus can parse multiple files at a time using Java threads and Antlr’s reentrant parser. The right graph in Figure 4 shows the speedup for 176.gcc using up to 4 threads on our SUN platform. Cetus does not show ideal speedup, one reason being that symbol table accesses are synchronized.
Memory Usage. Efficient memory usage is important for source-to-source compilers because the entire program must be kept in memory for interprocedural analysis and transformation. Figure 5 shows the memory usage of Cetus. The left graph shows the size of the Java heap after IR construction and the right graph shows the ratio of the Java heap size to the preprocessed input source file size (excluding comments). All measurements were done on the SUN platform. Currently, Cetus requires around 10 times more memory than the size of the input source.
Fig. 6. Memory usage and efficiency of Polaris for some SPEC CPU2000 benchmarks
In addition to optimizing the usage of Java collection classes, such as using a smaller initial size for each collection to save memory, we applied the following improvements in order to reduce Cetus’ working set. Taken together, these modifications reduced the memory requirement of 176.gcc from 250MB to 64MB.
Optimizing the Symbol Table for Header Files. The largest reduction in memory usage was achieved by merging symbols from multiple uses of header files. Initially, if two different source files included the same header file, Cetus’ IR would contain two copies of the same symbol information. Keeping a single copy saved a substantial amount of memory. Parallel parsing remains possible because Java hash tables are thread safe, so multiple parser threads entering symbols into the same symbol table do not interfere with each other.
Streamlining Internal Data Structures. Another improvement was the elimination of temporary objects. To this end, we rewrote the Cetus code to avoid the need for temporary data structures, and we eliminated or reused internal data structures in many places. Together, these changes had a significant impact on the overall memory usage. As a result of the reduced need for garbage collection, they also improved speed.
Comparison with Polaris. Figure 6 shows the heap memory usage of Polaris after parsing and constructing the IR on our SUN platform. Directly comparing the heap sizes of Cetus and Polaris is difficult since they are written in different languages and translate different languages. However, comparing Figure 5 and Figure 6 indicates that Cetus uses a reasonable amount of memory. The ratio of IR size to program size is an order of magnitude smaller in Cetus than in Polaris, which suggests that Cetus has a more memory-efficient IR. Another interesting comparison is parsing speed. Cetus parses about 20K characters per second while Polaris handles 10K characters per second on average for the benchmarks in Figure 5 and Figure 6, running on the same system.
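The header-symbol merging can be pictured as a thread-safe cache that hands every translation unit the same canonical scope object for a given header. This is an illustrative sketch under that assumption, reusing the SymbolTable class from Section 3.3; it is not Cetus’ actual implementation.

import java.util.Hashtable;
import java.util.Map;

final class HeaderSymbolCache {
    // Hashtable's methods are synchronized, so parser threads may share it.
    private final Map<String, SymbolTable> byHeader = new Hashtable<String, SymbolTable>();

    SymbolTable scopeFor(String headerFile) {
        synchronized (byHeader) {               // make check-then-insert atomic
            SymbolTable scope = byHeader.get(headerFile);
            if (scope == null) {
                scope = new SymbolTable(null);  // single canonical scope per header
                byHeader.put(headerFile, scope);
            }
            return scope;
        }
    }
}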
6 Conclusion
We have presented an extensible compiler infrastructure, named Cetus, that has proved useful in dealing with C programs. In particular, Cetus has been used for source transformations on the SPEC OMPM2001 benchmark suite. The infrastructure’s design is such that adding support for other languages, analysis passes, or transformations will not require a large effort. Preliminary work on extending Cetus for C++ was used as an example of how we have prepared Cetus for future growth. Future work involves finishing the other front ends and providing more abstractions for pass writers. We consider the high-level interface and the utility functions to be a kind of programming language for pass writers. The motivation behind expanding and generalizing that language is the need to bring the code written by a pass writer closer to the pseudocode found in a textbook or research paper. By providing more ways to abstract away the details of the source language and more high-level operations for pass writers, large portions of the passes should become reusable. Starting to add other languages early in the development process is vital to proving this hypothesis.
References
[1] Portland Group Homepage. http://nci.pgroup.com.
[2] SUIF Homepage. http://suif.stanford.edu.
[3] Andrew Appel, Jack Davidson, and Norman Ramsey. The Zephyr Compiler Infrastructure. 1998.
[4] P. Banerjee, J.A. Chandy, M. Gupta, et al. The PARADIGM Compiler for Distributed-Memory Multicomputers. IEEE Computer, 28(10):37–47, October 1995.
[5] William Blume, Rudolf Eigenmann, et al. Restructuring Programs for High-Speed Computers with Polaris. In ICPP Workshop, pages 149–161, 1996.
[6] Robert P. Cook and Thomas J. LeBlanc. A Symbol Table Abstraction to Implement Languages with Explicit Scope Control. IEEE Transactions on Software Engineering, 9(1):8–12, January 1983.
[7] Keith A. Faigin, Stephen A. Weatherford, Jay P. Hoeflinger, David A. Padua, and Paul M. Petersen. The Polaris Internal Representation. International Journal of Parallel Programming, 22(5):553–586, 1994.
[8] Charles N. Fischer and Richard J. LeBlanc Jr. Crafting a Compiler. Benjamin/Cummings, 1988.
[9] Free Software Foundation. GNU Bison 1.875a Manual, January 2003.
[10] Free Software Foundation. GNU Flex 2.5.31 Manual, March 2003.
[11] David L. Heine and Monica S. Lam. A Practical Flow-Sensitive and Context-Sensitive C and C++ Memory Leak Detector. PLDI, 2003.
[12] Warwick Irwin and Neville Churcher. A Generated Parser of C++. 2001.
[13] Steven C. Johnson. Yacc: Yet Another Compiler Compiler. In UNIX Programmer’s Manual, volume 2, pages 353–387. Holt, Rinehart, and Winston, New York, NY, USA, 1979.
[14] M. Lesk and E. Schmidt. Lex - A Lexical Analyzer Generator. Technical report, AT&T Bell Laboratories, 1975.
[15] Seung-Jai Min, Seon Wook Kim, Michael Voss, Sang-Ik Lee, and Rudolf Eigenmann. Portable Compilers for OpenMP. In OpenMP Shared-Memory Parallel Programming, Lecture Notes in Computer Science #2104, pages 11–19, Springer Verlag, Heidelberg, Germany, July 2001.
[16] Seung-Jai Min, Ayon Basumallik, and Rudolf Eigenmann. Supporting Realistic OpenMP Applications on a Commodity Cluster of Workstations. WOMPAT, 2003.
[17] Trung N. Nguyen, Junjie Gu, and Zhiyuan Li. An Interprocedural Parallelizing Compiler and Its Support for Memory Hierarchy Research. In Proceedings of the International Workshop on Languages and Compilers for Parallel Computing (LCPC), pages 96–110, 1995.
[18] Terence J. Parr and Russell W. Quong. ANTLR: A Predicated-LL(k) Parser Generator. Software - Practice and Experience, 25(7):789–810, 1995.
[19] Constantine Polychronopoulos, Milind B. Girkar, et al. The Structure of Parafrase-2: An Advanced Parallelizing Compiler for C and Fortran. In Languages and Compilers for Parallel Computing. MIT Press, 1990.
[20] Bill Pottenger and Rudolf Eigenmann. Idiom Recognition in the Polaris Parallelizing Compiler. International Conference on Supercomputing, 1995.
[21] Richard M. Stallman. GNU Compiler Collection Internals. Free Software Foundation, December 2002.
[22] Richard M. Stallman. Using and Porting the GNU Compiler Collection. Free Software Foundation, December 2002.
[23] Bjarne Stroustrup. The C++ Programming Language, 3rd Edition. Addison-Wesley, 1997.
[24] Masaru Tomita. Efficient Parsing for Natural Language. Kluwer Academic Publishers, 1986.
[25] Robert P. Wilson, Robert S. French, et al. SUIF: An Infrastructure for Research on Parallelizing and Optimizing Compilers. SIGPLAN Notices, 29(12):31–37, 1994.
Author Index

Agrawal, Gagan 127
Almási, Gheorghe 162
Amaral, José Nelson 405
Amarasinghe, Saman 17
Andrade, Henrique 509
Arcot, Shashi D. 466
Aryangat, Suresh 509
Bastoul, Cédric 209
Baumgartner, Gerald 93
Beckmann, Olav 241
Berlin, Konstantin 194
Bernholdt, David E. 93
Bibireata, Alina 93
Bronevetsky, Greg 357
Browne, James C. 109
Carpenter, Bryan 147
Chauhan, Arun 495
Chen, Guangyu 451
Chen, Guilin 451
Chen, Wei-Yu 340
Choppella, Venkatesh 93
Coarfa, Cristian 177
Cociorva, Daniel 93
Cohen, Albert 209
Cooper, Keith D. 288
D’Alberto, Paolo 436
Deng, Guosheng 109
Dietz, Henry G. 466
Ding, Chen 48
Ding, Yonghua 273
Diniz, Pedro C. 481
Dotsenko, Yuri 177
Eckhardt, Jason 177
Eigenmann, Rudolf 539
Ferrante, Jeanne 32
Fox, Geoffrey 147
Fraguela, Basilio B. 162
Gao, Guang R. 77
Gao, Xiaofeng 32
Garzarán, María Jesús 374
Girbal, Sylvain 209
Gorantla, Sujana 466
Govindarajan, R. 77
Gross, Thomas R. 390
Guo, Jia 374
Hall, Mary 1
Hu, Ziang 77
Huan, Jun 194
Huang, Chao 306
Iftode, Liviu 258
Ishizaka, Kazuhisa 64
Jacob, Mary 194
Johnson, Troy A. 539
Kalé, L. V. 306
Kandemir, M. 451
Kasahara, Hironori 64
Kelly, Paul H. J. 241
Kennedy, Ken 495
Kochhar, Garima 194
Kong, Xiangyun 226
Kremer, Ulrich 258
Krishnamurthy, Arvind 340
Krishnan, Sandhya 93
Kurc, Tahsin 509
Lam, Chi-Chung 93
Lawlor, Orion 306
Lee, Han-Ku 147
Lee, Sang-Ik 539
Li, Xiaogang 127
Li, Zhiyuan 273
Lim, Sang Boem 147
Mahmood, Nasim 109
Marques, Daniel 357
Martin, Martin 17
Mellor-Crummey, John 177
Moreira, José 162
Nadgir, A. 451
Nandy, Sagnik 32
Ni, Yang 258
Nicolau, Alexandru 436
O’Reilly, Una-May 17
Obata, Motoki 64
Padua, David 162, 374, 420
Pingali, Keshav 357
Praun, Christoph von 390
Prins, Jan 194
Pugh, Bill 194
Pugh, William 323
Puppin, Diego 17
Quinlan, Dan 524
Ramanujam, J. 93
Ren, Gang 420
Rose, Luiz De 162
Sadayappan, P. 93, 194
Saltz, Joel 509
Schneider, Florian 390
Schordan, Markus 524
Sharma, Saurabh 209
Shen, Xipeng 48
So, Byoungro 1
Song, Yonghong 226
Spacco, Jaime 194, 323
Stephenson, Mark 17
Stodghill, Paul 357
Supinski, Bronis R. de 524
Sussman, Alan 509
Temam, Olivier 209
Thiyagalingam, Jeyarajan 241
Tseng, Chau-Wen 194
Veidenbaum, Alexander 436
Wu, Peng 420
Xu, Li 288
Yang, Hongbo 77
Yelick, Katherine 340
Yi, Qing 524
Zhao, Peng 405
Zhong, Yutao 48
Ziegler, Heidi 1