CBMS-NSF REGIONAL CONFERENCE SERIES IN APPLIED MATHEMATICS

A series of lectures on topics of current research interest in applied mathematics under the direction of the Conference Board of the Mathematical Sciences, supported by the National Science Foundation and published by SIAM.

GARRETT BIRKHOFF, The Numerical Solution of Elliptic Equations
D. V. LINDLEY, Bayesian Statistics, A Review
R. S. VARGA, Functional Analysis and Approximation Theory in Numerical Analysis
R. R. BAHADUR, Some Limit Theorems in Statistics
PATRICK BILLINGSLEY, Weak Convergence of Measures: Applications in Probability
J. L. LIONS, Some Aspects of the Optimal Control of Distributed Parameter Systems
ROGER PENROSE, Techniques of Differential Topology in Relativity
HERMAN CHERNOFF, Sequential Analysis and Optimal Design
J. DURBIN, Distribution Theory for Tests Based on the Sample Distribution Function
SOL I. RUBINOW, Mathematical Problems in the Biological Sciences
P. D. LAX, Hyperbolic Systems of Conservation Laws and the Mathematical Theory of Shock Waves
I. J. SCHOENBERG, Cardinal Spline Interpolation
IVAN SINGER, The Theory of Best Approximation and Functional Analysis
WERNER C. RHEINBOLDT, Methods of Solving Systems of Nonlinear Equations
HANS F. WEINBERGER, Variational Methods for Eigenvalue Approximation
R. TYRRELL ROCKAFELLAR, Conjugate Duality and Optimization
SIR JAMES LIGHTHILL, Mathematical Biofluiddynamics
GERARD SALTON, Theory of Indexing
CATHLEEN S. MORAWETZ, Notes on Time Decay and Scattering for Some Hyperbolic Problems
F. HOPPENSTEADT, Mathematical Theories of Populations: Demographics, Genetics and Epidemics
RICHARD ASKEY, Orthogonal Polynomials and Special Functions
L. E. PAYNE, Improperly Posed Problems in Partial Differential Equations
S. ROSEN, Lectures on the Measurement and Evaluation of the Performance of Computing Systems
HERBERT B. KELLER, Numerical Solution of Two Point Boundary Value Problems
J. P. LASALLE, The Stability of Dynamical Systems, with Appendix A, "Limiting Equations and Stability of Nonautonomous Ordinary Differential Equations," by Z. ARTSTEIN
D. GOTTLIEB AND S. A. ORSZAG, Numerical Analysis of Spectral Methods: Theory and Applications
PETER J. HUBER, Robust Statistical Procedures
HERBERT SOLOMON, Geometric Probability
FRED S. ROBERTS, Graph Theory and Its Applications to Problems of Society
JURIS HARTMANIS, Feasible Computations and Provable Complexity Properties
ZOHAR MANNA, Lectures on the Logic of Computer Programming
ELLIS L. JOHNSON, Integer Programming: Facets, Subadditivity, and Duality for Group and Semi-Group Problems
SHMUEL WINOGRAD, Arithmetic Complexity of Computations
(continued on inside back cover)
Lectures on the Logic of Computer Programming
ZOHAR MANNA Stanford University and Weizmann Institute of Science
SOCIETY for INDUSTRIAL and APPLIED MATHEMATICS
1980
PHILADELPHIA, PENNSYLVANIA 19103
Copyright © 1980 by the Society for Industrial and Applied Mathematics. 1098765432 Library of Congress Catalog Card Number: 79-93153. ISBN: 0-89871-164-9
Contents

Introduction

Chapter 1. Partial Correctness
  A. Invariant method
  B. Subgoal method
  C. Subgoal method versus invariant method

Chapter 2. Termination
  A. Well-founded ordering method
  B. The multiset ordering

Chapter 3. Total Correctness
  A. Intermittent method

Chapter 4. Systematic Program Annotation
  A. Range of individual variables
  B. Relation between variables
  C. Control invariants
  D. Debugging
  E. Termination and run-time analysis

Chapter 5. Synthesis of Programs
  A. The weakest precondition operator
  B. Transformation rules
  C. Simultaneous-goal principle
  D. Conditional-formation principle
  E. Recursion-formation principle
  F. Generalization
  G. Program modification
  H. Comparison with structured programming

Chapter 6. Termination of Production Systems
  A. Example 1: Associativity
  B. Example 2: Distribution system
  C. Example 3: Differentiation system
  D. Nested multisets

References
Introduction

Techniques derived from mathematical logic have been applied to many aspects of the programming process. The following lecture notes deal with six of these aspects:
partial correctness of programs: proving that a given program produces the intended results whenever it halts;
termination of programs: proving that a given program will eventually halt;
total correctness of programs: proving both that a given program is partially correct and that it terminates;
systematic program annotation: describing the intermediate behavior of a given program;
synthesis of programs: constructing a program to meet given specifications;
termination of production systems: proving that a system of rewriting rules always halts.
CHAPTER 1
Partial Correctness

We use the following conventions. Let
  P be a program;
  x be all the variables in P;
  x0 be the initial values of x (when computation starts);
  xh be the final values of x (upon termination of computation).
Let L0, L1, ..., Lh be a designated set of labels (cutpoints) in P, where L0 is the entrance and Lh is the exit. (It is assumed that each loop passes through at least one of the designated labels.) A path α between Li and Lj is said to be basic if there are no designated labels between Li and Lj. Let
  tα(x) be the condition for path α to be traversed;
  gα(x) be a function expressing the change in values along path α.
Let
  Φ(x0) be an input specification, giving the range of legal initial values;
  Ψ(x0, xh) be an output specification, giving the desired relation between the initial and final values of x.
We define the following:
  P is partially correct w.r.t. (with respect to) Φ and Ψ: for every x0 such that Φ(x0) holds, if the computation reaches the exit Lh then Ψ(x0, xh) holds.
  P terminates w.r.t. Φ: for every x0 such that Φ(x0) holds, the computation reaches the exit Lh.
  P is totally correct w.r.t. Φ and Ψ: P is partially correct w.r.t. Φ and Ψ and P terminates w.r.t. Φ.

A. Invariant method (Floyd (1967), Hoare (1969)). To prove the partial correctness of P w.r.t. Φ and Ψ, find a set of predicates q0(x0, x), q1(x0, x), ..., qh(x0, x), corresponding to the labels L0, L1, ..., Lh, such that the
following verification conditions are satisfied for all x, x0, and xh (universally quantified):
(I1) Φ(x0) ⇒ q0(x0, x0) (the input specification implies the initial predicate),
(I2) qh(x0, xh) ⇒ Ψ(x0, xh) (the final predicate implies the output specification),
and for each basic path α from Li to Lj
(I3) tα(x) and qi(x0, x) ⇒ qj(x0, gα(x)) (the predicate before the path implies the predicate after, whenever the path can be taken).
Verification conditions (I1) and (I3) imply that for each i, qi(x0, x) is an invariant assertion at Li, i.e., each time we pass through Li, qi(x0, x) is true for the initial value x0 and for the current value of x. In particular, qh(x0, x) is an invariant assertion at Lh, which, by verification condition (I2), implies the partial correctness of the program. In practice, we usually take q0(x0, x0) to be Φ(x0) and qh(x0, x) to be Ψ(x0, x); then (I1) and (I2) are guaranteed to hold.
Example: integer square-root program. The following program computes b = ⌊√a⌋ for every nonnegative integer a; that is, the final value of b is the largest integer k such that k ≤ √a, i.e., b² ≤ a < (b + 1)².
input(a)
L0: (b, c, d) ← (0, 1, 1)
L1: if c > a
    then L2: output(b)
    else (b, d) ← (b + 1, d + 2)
         c ← c + d
         goto L1.
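The program can be transcribed directly into executable form. The following Python sketch (representation ours) checks, at each pass through L1, the invariant assertion q1 discussed in this section, and checks the output specification at L2:

```python
# Integer square-root program, transcribed from the flowchart text.
# The invariant q1: a = a0 and b*b <= a and c = (b+1)^2 and d = 2b+1
# is checked at every pass through L1.
def isqrt(a):
    assert a >= 0                      # input specification: a >= 0
    a0 = a
    b, c, d = 0, 1, 1                  # L0
    while True:                        # L1
        assert a == a0 and b * b <= a
        assert c == (b + 1) ** 2 and d == 2 * b + 1
        if c > a:
            assert b * b <= a0 < (b + 1) ** 2   # output specification
            return b                   # L2
        b, d = b + 1, d + 2
        c = c + d
```

For instance, isqrt(15) returns 3, since 3² ≤ 15 < 4².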
(Note that the value of a is unchanged by the program and that b, c, and d have undefined values at L0.) To prove partial correctness w.r.t.
  Φ(a0): a0 ≥ 0,
let
  Ψ(a0, bh): bh² ≤ a0 < (bh + 1)²,
and find a predicate q1(a0, a, b, c, d) such that the verification conditions (I1), (I2), and (I3) hold for all a0, a, b, c, and d. The invariant assertion
  q1(a0, a, b, c, d): a = a0 and b² ≤ a and c = (b + 1)² and d = 2b + 1
will do.
The verification of a program with respect to given input-output specifications consists of three phases: finding appropriate invariant assertions, generating the corresponding verification conditions, and proving that the verification conditions are true. Finding the invariant assertions requires a deep understanding of the principles behind the program, and we therefore assume at this stage that they are provided by the programmer. Thus, the verification process can be summarized as follows:
  program + input/output specifications + invariant assertions
    (verification-conditions generation) → verification conditions
    (theorem proving) → yes, no, or don't know.

B. Subgoal method (Manna (1971), Morris and Wegbreit (1977)). This method is also used for proving the partial correctness of programs, but there are two basic differences between the invariant method and the subgoal method:
(1) An invariant assertion qi(x0, x) relates the current value of x to the initial value x0. A subgoal assertion qi*(x, xh) relates the current value of x to the final value xh. Thus qi(x0, x) describes what has been done so far, while qi*(x, xh) describes what remains to be done.
(2) The invariant method uses forward induction from L0 to Lh:
while the subgoal method uses backward induction from Lh to L0:
To prove the partial correctness of P w.r.t. Φ and Ψ by the subgoal method, find a set of predicates q0*(x, xh), q1*(x, xh), ..., qh*(x, xh), corresponding to the designated labels L0, L1, ..., Lh, such that the following verification conditions are satisfied for all x, x0, and xh:
(S1) qh*(xh, xh) (the final predicate always holds for the final value of x),
(S2) Φ(x0) and q0*(x0, xh) ⇒ Ψ(x0, xh) (the input specification and the initial predicate imply the output specification),
and for each basic path α from Li to Lj
(S3) tα(x) and qj*(gα(x), xh) ⇒ qi*(x, xh) (the predicate after the path implies the predicate before).
Verification conditions (S1) and (S3) imply that for each i, qi*(x, xh) is a subgoal assertion at Li; i.e., each time we pass through Li, qi*(x, xh) is true for the current value of x and the final value xh. In particular, q0*(x, xh) is a subgoal assertion at L0, which, by verification condition (S2), implies the partial correctness of the program. In practice, we usually take qh*(x, xh) to be x = xh.
Example: greatest-common-divisor program. The following program computes the greatest common divisor of two positive integers a0 and b0:
input(a0, b0)
L0: (a, b) ← (a0, b0)
L1: if a = 0
    then L2: output(b)
    else (a, b) ← (rem(b, a), a)
         goto L1,
where rem(b, a) stands for the remainder of dividing b by a. Let
  Φ(a0, b0): a0 > 0 and b0 > 0,
  Ψ(a0, b0, bh): bh = gcd(a0, b0).
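The two proof styles can both be animated on this program. The Python sketch below (our rendering; the assertions shown are the standard ones for this example, with gcd(0, b) taken to be b) checks an invariant assertion at L1 during the run, and checks a subgoal assertion over the recorded trace once the final value bh is known:

```python
# Greatest-common-divisor program with both kinds of assertions checked.
from math import gcd as _gcd   # reference gcd, used only in the assertions

def gcd_program(a0, b0):
    assert a0 > 0 and b0 > 0           # input specification
    a, b = a0, b0                      # L0
    trace = []
    while True:                        # L1
        # invariant: current values related to the initial values
        assert a >= 0 and b > 0 and _gcd(a, b) == _gcd(a0, b0)
        trace.append((a, b))
        if a == 0:
            bh = b                     # L2
            break
        a, b = b % a, a
    # subgoal: at every pass through L1, gcd of the current values
    # already equals the final value bh
    assert all(_gcd(x, y) == bh for (x, y) in trace)
    return bh
```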
To prove partial correctness w.r.t. Φ and Ψ by the invariant method, take the invariant assertion
  q1(a0, b0, a, b): a ≥ 0 and b > 0 and gcd(a, b) = gcd(a0, b0),
and use the corresponding verification conditions (I1), (I2), and (I3). Here, q1(a0, b0, a, b) relates the program variables a and b to their initial values a0 and b0.
To prove partial correctness w.r.t. Φ and Ψ by the subgoal method, take the subgoal assertion
  q1*(a, b, bh): gcd(a, b) = bh,
and use the corresponding verification conditions (S1), (S2), and (S3). Here, q1*(a, b, bh) relates the current values of the program variables a and b to the final value bh of b.

C. Subgoal method versus invariant method.
THEOREM. Any invariant proof can effectively be transformed into a subgoal proof, and vice versa.
Proof. (invariant proof ⇒ subgoal proof). Suppose we have an invariant proof showing that P is partially correct w.r.t. Φ and Ψ, i.e., we have a set of invariant assertions qi(x0, x) satisfying (I1), (I2), and (I3). Let us take qi*(x, y) to be ∀x0[qi(x0, x) ⇒ Ψ(x0, y)]. We have to show that (S1), (S2), and (S3) are satisfied.
(S1) We must show qh*(xh, xh), i.e., ∀x0[qh(x0, xh) ⇒ Ψ(x0, xh)]. But this follows from (I2).
(S2) We must show Φ(x0) and q0*(x0, xh) ⇒ Ψ(x0, xh).
1. Φ(x0) is given;
2. q0*(x0, xh) is given;
3. q0(x0, x0) holds by 1 and (I1);
4. Ψ(x0, xh) holds by 2 and 3.
(S3) We must show tα(x) and qj*(gα(x), xh) ⇒ qi*(x, xh) for each basic path α from Li to Lj.
1. tα(x) is given;
2. qj*(gα(x), xh) is given;
3. qi(x0, x) holds by assumption;
4. qj(x0, gα(x)) holds by 1, 3 and (I3);
5. Ψ(x0, xh) holds by 2 and 4.
(subgoal proof ⇒ invariant proof). Suppose we have a subgoal proof showing that P is partially correct w.r.t. Φ and Ψ, i.e., we have a set of subgoal assertions q0*(x, xh), q1*(x, xh), ..., qh*(x, xh) satisfying (S1), (S2), and (S3). Let us take qi(x0, x) to be ∀y[qi*(x, y) ⇒ Ψ(x0, y)], and show that (I1), (I2), and (I3) are satisfied.
(I1) We must show Φ(x0) ⇒ q0(x0, x0), i.e., Φ(x0) ⇒ ∀y[q0*(x0, y) ⇒ Ψ(x0, y)].
1. Φ(x0) is given;
2. q0*(x0, y) holds by assumption;
3. Ψ(x0, y) holds by 1, 2, and (S2).
(I2) We must show qh(x0, xh) ⇒ Ψ(x0, xh), i.e., ∀y[qh*(xh, y) ⇒ Ψ(x0, y)] ⇒ Ψ(x0, xh).
1. ∀y[qh*(xh, y) ⇒ Ψ(x0, y)] is given;
2. qh*(xh, xh) holds by (S1);
3. Ψ(x0, xh) holds by 1 and 2.
(I3) We must show tα(x) and qi(x0, x) ⇒ qj(x0, gα(x)), i.e., tα(x) and ∀y[qi*(x, y) ⇒ Ψ(x0, y)] ⇒ ∀y[qj*(gα(x), y) ⇒ Ψ(x0, y)].
1. tα(x) is given;
2. ∀y[qi*(x, y) ⇒ Ψ(x0, y)] is given;
3. qj*(gα(x), y) holds by assumption;
4. qi*(x, y) holds by 1, 3 and (S3);
5. Ψ(x0, y) holds by 2 and 4. □
CHAPTER 2
Termination

A. Well-founded ordering method (Floyd (1967)).
DEFINITION. A well-founded set (W, >) consists of a nonempty set W and a strict partial ordering > on W that has no infinite decreasing sequences. That is, > is a transitive, asymmetric, and irreflexive binary relation on W such that for no infinite sequence a0, a1, a2, ... of elements of W do we have a0 > a1 > a2 > ⋯. Note that there may exist distinct elements a and b such that neither a > b nor b > a. The set of nonnegative integers N with the regular > ordering, i.e. (N, >), is well-founded; but the set of all integers Z with the same ordering, i.e. (Z, >), is not well-founded.
To prove the termination of program P w.r.t. Φ(x0):
(1) Find a set of predicates q0(x0, x), q1(x0, x), ..., qh-1(x0, x), associated with the designated labels L0, L1, ..., Lh-1, such that for every x0 and x:
(W1) Φ(x0) ⇒ q0(x0, x0) (the input specification implies the initial predicate), and
(W2) tα(x) and qi(x0, x) ⇒ qj(x0, gα(x)) for every basic path α from Li to Lj (the predicate before the path implies the predicate after).
(2) Find a well-founded set (W, >) and a set of expressions e1(x0, x), ..., eh-1(x0, x), called termination functions, such that
(W3) qi(x0, x) ⇒ ei(x0, x) ∈ W for each label Li that is on some loop (the value of the expression belongs to W when control passes through Li), and
(W4) tα(x) and qi(x0, x) ⇒ ei(x0, x) > ej(x0, gα(x)) for every basic path α from Li to Lj which is on some loop (as control passes from Li to Lj, the value of the corresponding expression is reduced).
Note that (W1) and (W2) imply that each qi(x0, x) is an invariant assertion at Li; it is used in (W3) and (W4) to restrict the domain of ei.
Termination conditions (W1) to (W4) imply the termination of the program. For, suppose the program does not terminate for some x0 such that Φ(x0). Then the computation for x0 passes through an infinite sequence of labels Lj0, Lj1, Lj2, ...; the corresponding values of x are x⁰, x¹, x², .... Then by (W1) and (W2),
  qj0(x0, x⁰), qj1(x0, x¹), qj2(x0, x²), ...
hold, and by (W3) and (W4),
  ej0(x0, x⁰) > ej1(x0, x¹) > ej2(x0, x²) > ⋯.
We have obtained an infinite decreasing sequence of elements of W, which contradicts the well-foundedness of W.
Example: integer square-root program. Let us consider again
input(a)
L0: (b, c, d) ← (0, 1, 1)
L1: if c > a
    then L2: output(b)
    else (b, d) ← (b + 1, d + 2)
         c ← c + d
         goto L1.
To prove termination of the program w.r.t. a0 ≥ 0, take
  q0(a0) to be a0 ≥ 0,
  q1(a0, a, b, c, d) to be a = a0 and c − d ≤ a and d > 0,
and show
(W1) a0 ≥ 0 ⇒ a0 ≥ 0;
(W2) a0 ≥ 0 ⇒ q1(a0, a0, 0, 1, 1), and
     c ≤ a and q1(a0, a, b, c, d) ⇒ q1(a0, a, b + 1, c + d + 2, d + 2).
Take (W, >) to be (N, >), e1(a0, a, b, c, d) to be a − c + d, and show
(W3) q1(a0, a, b, c, d) ⇒ a − c + d ≥ 0;
(W4) c ≤ a and q1(a0, a, b, c, d) ⇒ a − c + d > a − (c + d + 2) + (d + 2). □

B. The multiset ordering (Dershowitz and Manna (1979)). Let us consider a special case of well-founded ordering that is particularly useful for termination proofs. A multiset, or bag, over N is a "set" of natural numbers in which multiple occurrences are allowed. For example, {4, 2, 2, 2, 0, 0} is a multiset over N. The set of all finite multisets over N is denoted by M(N). Under the multiset ordering, M ≫ M′, where M, M′ ∈ M(N), if M′ may be obtained from M by the removal of at least one element from M and/or the replacement of at least one element of M with any finite number of smaller elements. Thus
{4, 2, 2, 2, 0, 0}
  ≫ {4, 2, 2}                 (2, 0, 0 are removed)
  ≫ {4, 2, 1, 1, 1, 0, 0}     (2 is replaced by 1, 1, 1, 0, 0)
  ≫ {3, 3, 2, 2, 2}           (4 is replaced by 3, 3, 2, 2, and 1, 1, 1, 0, 0 are removed).
Since (M(N), ≫) can be shown to be a well-founded set, it can be used for proving the termination of programs.
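The ordering itself is easy to make executable. The sketch below (finite multisets given as lists of natural numbers) uses a standard equivalent characterization of the Dershowitz–Manna ordering: M ≫ M′ iff M ≠ M′ and every value with more copies in M′ than in M is dominated by some larger value with more copies in M than in M′.

```python
# Dershowitz-Manna multiset ordering over the natural numbers (a sketch).
from collections import Counter

def multiset_greater(m, m_prime):
    cm, cp = Counter(m), Counter(m_prime)
    if cm == cp:
        return False                      # the ordering is irreflexive
    for y in cp:
        if cp[y] > cm[y]:
            # y gained copies; it must be covered by a larger lost element
            if not any(x > y and cm[x] > cp[x] for x in cm):
                return False
    return True
```

On the chain above, multiset_greater confirms each step, e.g. multiset_greater([4, 2, 2, 2, 0, 0], [4, 2, 2]) holds.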
Example: shunting-yard program. Consider the program
loop until the shunting yard is empty
  select a train
  if the train consists of a single car
    then remove it (from the yard)
    else split it into two shorter trains
repeat
This program (due to Dijkstra) is nondeterministic, since "select" and "split" are not uniquely determined. Let trains(yard) be the number of trains in the yard, cars(yard) be the total number of cars in the yard, and for any train ∈ yard let cars(train) be the number of cars it contains. We present two proofs of termination.
A regular termination proof uses the
  well-founded set: (N, >),
  termination function: 2 · cars(yard) − trains(yard).
Each iteration of the program loop clearly reduces the value of the termination function: removing a one-car train from the yard decreases 2 · cars(yard) by 2 and increases −trains(yard) by 1, thereby decreasing the termination function by 1; splitting a train conserves the number of cars in the yard and increases the number of trains in the yard by 1, thereby decreasing the value of the termination function by 1.
The multiset termination proof uses the
  well-founded set: (M(N), ≫),
  termination function: {cars(train) : train ∈ yard}.
Each iteration of the program loop decreases the value of the termination function under the multiset ordering: removing a train from the yard reduces the multiset by removing one element; splitting a train replaces one element with two smaller ones, corresponding to the two shorter trains.
In general, a multiset over S is a "set" allowing multiple occurrences of elements of S. The set of all finite multisets over S is denoted by M(S). For a given partially ordered set (S, >), the multiset ordering ≫ between multisets over S is defined as follows: M ≫ M′ if for some X, Y, Z ∈ M(S), X not empty,
  M = Z ∪ X and M′ = Z ∪ Y and (∀y ∈ Y)(∃x ∈ X) x > y
(where ∪ denotes multiset union). In words, a multiset is reduced by the removal of one or more elements (those in X) and their replacement with any finite number (possibly zero) of elements (those in Y), each of which is smaller than one of the elements that have been removed.
THEOREM. (M(S), ≫) is well founded ⇔ (S, >) is well founded.
Example: tips program. Let tips(t) = the number of tips (terminal nodes) of a full binary tree t; tips is defined recursively by tips(t)
⇐ if t is a tip then 1 else tips(left(t)) + tips(right(t)).
Let us consider the iterative program for computing tips(t0):
input(t0)
L0: S ← (t0)
    C ← 0
L1: if S = ( )
    then L2: output(C)
    else y ← head(S)
         S ← tail(S)
         if y is a tip
         then C ← C + 1
              goto L1
         else S ← left(y) · right(y) · S
              goto L1.
This program initially inserts the given tree t0 as the single element of the stack S of trees. With each iteration, the first element is removed from the stack. If it is a tip, the element is counted; otherwise, its left and right subtrees are inserted as the first and second elements of the stack. The process terminates when the stack is empty; the counter C is then the number of tips in the given tree.
Regular termination proof: Take (N, >), or (N × N, >), where > is the lexicographic ordering between pairs of nonnegative integers, with a termination function measuring the contents of the stack.
Multiset-ordering proof: Since in each iteration head(S) is either removed or replaced by two smaller subtrees, take
  (M(T), ≫),
where T is the set of all full binary trees and > is the subtree ordering.
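Both the invariant behind the program and a termination measure can be checked at run time. In the Python sketch below, trees are nested pairs (a tip is any non-tuple value, an interior node a pair (left, right)); the measure Σ (2·tips(y) − 1) over the stack is our own candidate termination function for (N, >), not necessarily the one used in the text:

```python
# The tips program, with the invariant C + sum of tips over S = tips(t0)
# checked at L1, and a candidate (N, >) termination function checked to
# decrease strictly at every iteration.
def tips(t):
    return 1 if not isinstance(t, tuple) else tips(t[0]) + tips(t[1])

def tips_program(t0):
    S = [t0]                                       # L0
    C = 0
    while S:                                       # L1
        assert C + sum(tips(y) for y in S) == tips(t0)
        measure = sum(2 * tips(y) - 1 for y in S)  # assumed termination fn
        y = S.pop(0)                               # y <- head(S); S <- tail(S)
        if not isinstance(y, tuple):
            C += 1                                 # y is a tip
        else:
            S = [y[0], y[1]] + S                   # push left(y), right(y)
        assert sum(2 * tips(w) - 1 for w in S) < measure
    return C                                       # L2
```

Removing a tip lowers the measure by 1; splitting a tree replaces 2·tips(y) − 1 by (2a − 1) + (2b − 1) with a + b = tips(y), which also lowers it by 1.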
Example: Ackermann's program. Ackermann's function a(m, n) over the nonnegative integers is defined recursively as follows:
a(m, n) ⇐ if m = 0 then n + 1
          else if n = 0 then a(m − 1, 1)
          else a(m − 1, a(m, n − 1)).
The following is an iterative program to compute the function:
S ← (m)
z ← n
loop L1: assert a(m, n) = a(s|S|, a(s|S|−1, ..., a(s2, a(s1, z)) ⋯ ))
  if s1 = 0 then S ← (s|S|, ..., s2)
                 z ← z + 1
  else if z = 0 then S ← (s|S|, ..., s2, s1 − 1)
                     z ← 1
  else S ← (s|S|, ..., s2, s1 − 1, s1)
       z ← z − 1
until S = ( )
repeat
assert z = a(m, n),
where |S| denotes the number of elements in the stack S. To prove termination of this program, consider the set (N × N, >) of lexicographically ordered pairs of natural numbers, and use the corresponding multiset ordering over N × N, i.e., (M(N × N), ≫). Take as termination function the multiset
  {(s1, z), (s2 + 1, 0), ..., (s|S| + 1, 0)}.
The proof considers the three cases, corresponding to the three branches of the conditional in the loop:
(1) Case s1 = 0: the termination function decreases, since (s2 + 1, 0) > (s2, z + 1) and (s1, z) has been removed.
(2) Case s1 ≠ 0 and z = 0: the termination function decreases, since (s1, z) > (s1 − 1, 1).
(3) Case s1 ≠ 0 and z ≠ 0: the termination function decreases, since (s1, z) > (s1, 0) and (s1, z) > (s1, z − 1).
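The iterative program translates directly into Python (for small arguments only, since the function grows explosively); the recursive definition serves as the reference against which the stack version is checked:

```python
# Ackermann's function: recursive definition and iterative stack program.
def ack_rec(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ack_rec(m - 1, 1)
    return ack_rec(m - 1, ack_rec(m, n - 1))

def ack_iter(m, n):
    S, z = [m], n                 # S[-1] plays the role of s1, the stack top
    while S:
        s1 = S[-1]
        if s1 == 0:
            S.pop()               # a(0, z) = z + 1
            z += 1
        elif z == 0:
            S[-1] = s1 - 1        # a(s1, 0) = a(s1 - 1, 1)
            z = 1
        else:
            S[-1] = s1 - 1        # a(s1, z) = a(s1 - 1, a(s1, z - 1))
            S.append(s1)
            z -= 1
    return z
```

Each branch shrinks the multiset {(s1, z), (s2 + 1, 0), ...} under the lexicographic pair ordering, exactly as in the three cases above.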
CHAPTER 3
Total Correctness

A. Intermittent method (Burstall (1974), Manna and Waldinger (1978)). The two main characteristics of this method are the following:
(1) We prove total correctness, i.e., partial correctness and termination, w.r.t. Φ(x0) and Ψ(x0, xh) in a single proof.
(2) An invariant assertion means
  always qi(x0, x) at Li;
a subgoal assertion means
  always qi*(x, xh) at Li;
while an intermittent assertion means
  sometime Qi(x0, x) at Li.
That is, sometime (eventually) control will pass through Li and satisfy Qi(x0, x) for the current value of x and the initial value x0. Thus, control may pass through Li many times without satisfying Qi, but control must pass through Li at least once with Qi(x0, x) true.
Example: tips program. Let us consider again the iterative program for computing the number of tips (terminal nodes) of a full binary tree:
input(t0)
L0: S ← (t0)
    C ← 0
L1: if S = ( )
    then L2: output(C)
    else y ← head(S)
         S ← tail(S)
         if y is a tip
         then C ← C + 1
              goto L1
         else S ← left(y) · right(y) · S
              goto L1.
We can express the total correctness of this program as the following:
THEOREM. if sometime at L0, then sometime C = tips(t0) at L2.
This theorem states the termination of the program in addition to its partial correctness, because it implies that control must eventually reach the program's exit L2 and satisfy the output specification. To prove the total correctness of the program we must supply a lemma describing the behavior of the loop, using an intermittent assertion at L1.
LEMMA. if sometime C = c and S = t · s at L1, then sometime C = c + tips(t) and S = s at L1.
That is, if we enter the loop with some subtree t at the head of the stack, then eventually the tips of t will be counted and t will be removed from the stack. Note that we may need to return to L1 many times before the tips of t are counted.
Proof of Theorem (Lemma ⇒ Theorem). Suppose sometime at L0. By computation,
  sometime C = 0 and S = (t0) = t0 · ( ) at L1.
By the lemma,
  sometime C = 0 + tips(t0) and S = ( ) at L1.
By computation,
  sometime C = tips(t0) at L2.
Proof of Lemma. The proof is by complete induction on the structure of head(S) = t. Assume the lemma holds whenever head(S) = t′, where t′ is any subtree of t. Suppose
  sometime C = c and S = t · s at L1.
Case t is a tip: By computation,
  sometime C = c + 1 and S = s at L1,
i.e., by the definition of tips,
  sometime C = c + tips(t) and S = s at L1.
Case t is not a tip: By computation,
  sometime C = c and S = left(t) · right(t) · s at L1.
By the induction hypothesis with t′ = left(t),
  sometime C = c + tips(left(t)) and S = right(t) · s at L1.
By the induction hypothesis with t′ = right(t),
  sometime C = c + tips(left(t)) + tips(right(t)) and S = s at L1,
i.e., by the definition of tips,
  sometime C = c + tips(t) and S = s at L1. □
Note that once the lemma was formulated (this required some ingenuity) and the basis for the induction decided, the proofs proceeded in a fairly mechanical manner.
An invariant proof of the partial correctness of the program will use the invariant assertion
  C + Σ{tips(y) : y ∈ S} = tips(t0) at L1.
The intermittent method allows us to relate the point at which we are about to count the tips of a subtree t to the point at which we have completed the counting, and to consider the many executions of the body of the loop between these points as a single unit, which corresponds naturally to a single recursive call tips(t). The invariant method, on the other hand, requires that we identify a condition that relates the situation before and after each single execution of the body of the loop.
The intermittent method is as powerful as any one of the other methods in the following sense:
THEOREM. There exist effective procedures for the following transformations of proofs:
(a) Invariant proof ⇒ Intermittent proof (assuming termination);
(b) Subgoal proof ⇒ Intermittent proof (assuming termination);
(c) Well-founded ordering proof ⇒ Intermittent proof.
In other words, each proof by one of the conventional methods can be translated into an intermittent proof. The translation process is purely mechanical and does not increase the complexity of the original proof. Is it possible that a similar translation could be performed in the other direction? We believe not. We have seen no invariant proof for the tips program that does not require consideration of the sum of the tips of all the elements in the stack. To make such an argument rigorous, we would have to formulate a precise notion of "complexity of proofs."
Application: correctness of continuously operating programs. So far we have considered only programs that are expected to terminate, and proved their behavior (correctness) when they terminate. Some programs, such as operating systems, airline-reservation systems, and management information systems, are never expected to terminate: they are continuously operating programs. The correctness of such programs cannot be expressed by output specifications; rather, their intended behavior while running must be specified.
The specification often involves stating that some event A is inevitably followed by some other event B. Such a relationship connects two different states of the program and, generally, cannot be phrased as an invariant assertion. In other words, the standard tools for proving correctness of terminating programs are not appropriate for continuously operating programs. The intermittent method is a natural approach here.
Let us consider the following sequential "operating system":
L0: read(requests)
L1: if requests = ( )
    then goto L0
    else job ← head(requests)
         requests ← tail(requests)
L2:      process(job)
         goto L1.
At each iteration, the program reads a list requests of jobs to be processed. If requests is empty, the program will continue reading indefinitely until a nonempty requests list is read. The system will then process the jobs one by one; when they are all processed, the system will again read a requests list. The correctness of this program can be expressed as
  if sometime j ∈ requests at L1, then sometime job = j at L2,
i.e., if a job is read into the requests list, it will eventually be processed. (It is assumed that process itself terminates.)
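A toy executable model makes the property concrete; here finitely many batches stand in for the unbounded stream of read(requests), and the result shows that every job read at L1 eventually becomes the value of job at L2 (this framing and the function name are ours):

```python
# Toy model of the sequential "operating system".
def run(batches):
    processed = []
    for requests in batches:        # L0: read(requests); empty lists are skipped
        queue = list(requests)      # L1
        while queue:
            job = queue.pop(0)
            processed.append(job)   # L2: process(job)
    return processed
```

For example, run([[1, 2], [], [3]]) processes every submitted job, in order.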
CHAPTER 4
Systematic Program Annotation

(Dershowitz and Manna (1977)). The invariant method for proving the partial correctness of programs requires that the program be annotated with intermediate invariant assertions. Normally, it is expected that these invariants will be supplied by the programmer. Let us examine, in the context of a particular example, to what extent this can be done automatically.
We are given a real-division program P that is supposed to approximate the quotient c/d of two real numbers c and d, where 0 ≤ c < d, within a tolerance e > 0:
Input specification: 0 ≤ c < d and 0 < e.
Output specification: q ≤ c/d < q + e.
(q, qq, r, rr) ← (0, 0, 1, d)
loop L: until r ≤ e
  if qq + rr ≤ c then (q, qq) ← (q + r, qq + rr)
  (r, rr) ← (r/2, rr/2)
repeat
E:
A. Range of individual variables. We derive global invariants by considering only assignment statements.
(i) r ← 1 or r ← r/2 in P; therefore
(1) (∃n ∈ N) r = 1/2^n in P.
One can then deduce the global invariant 0 < r ≤ 1 in P.
(ii) rr ← d or rr ← rr/2 in P; therefore
(2) (∃n ∈ N) rr = d/2^n in P.
One can deduce the global invariant 0 < rr ≤ d in P.
(iii) q ← 0 or q ← q + r in P. Since (∃n ∈ N) r = 1/2^n by (1), we have
(3) 0 ≤ q in P.
(iv) qq ← 0 or qq ← qq + rr in P. Since (∃n ∈ N) rr = d/2^n by (2), we have
(4) 0 ≤ qq in P.
B. Relation between variables. Again, we consider only assignment statements and derive global invariants.
(i) (r, rr) ← (1, d) or (r, rr) ← (r/2, rr/2); the original proportion between r and rr is preserved, therefore
  r · d = 1 · rr in P, or simply
(5) rr = d · r in P.
(ii) (q, qq) ← (0, 0) or (q, qq) ← (q + r, qq + rr). Since rr = d · r by (5), we have
  (q, qq) ← (0, 0) or (q, qq) ← (q + r, qq + d · r);
thus
(6) qq = d · q in P.
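The derived invariants can be checked dynamically. The sketch below runs the division program with the range and relation invariants (and the control invariants derived in the next section) asserted on every pass through L; exact rational arithmetic (Python's Fraction, our choice) makes the equalities hold exactly:

```python
# Real-division program with the derived global invariants checked at L.
from fractions import Fraction

def divide(c, d, e):
    c, d, e = Fraction(c), Fraction(d), Fraction(e)
    assert 0 <= c < d and 0 < e            # input specification
    q, qq, r, rr = Fraction(0), Fraction(0), Fraction(1), d
    while not r <= e:                      # loop L: until r <= e
        assert 0 < r <= 1 and 0 < rr <= d  # range invariants (1), (2)
        assert rr == d * r and qq == d * q # relation invariants (5), (6)
        assert qq <= c < qq + 2 * rr       # control invariants (8), (9)
        if qq + rr <= c:
            q, qq = q + r, qq + rr
        r, rr = r / 2, rr / 2
    return q
```

For c = 1, d = 3, e = 1/16, the program returns q = 1/4, which satisfies q ≤ c/d < q + 2e but not c/d < q + e; this is exactly the bug uncovered in the debugging section below.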
C. Control invariants. Analyzing the effect of the exit test r ≤ e, we get
(q, qq, r, rr) ← (0, 0, 1, d)
  assert r = 1
loop L: until r ≤ e
  assert r > e
  if qq + rr ≤ c then (q, qq) ← (q + r, qq + rr)
  assert r > e
  (r, rr) ← (r/2, rr/2)
  assert 2r > e
repeat
  assert r ≤ e.
Thus
(7) r = 1 or 2r > e at L.
Analyzing the effect of the conditional test qq + rr ≤ c, we get
(q, qq, r, rr) ← (0, 0, 1, d)
  assert (q, qq, r, rr) = (0, 0, 1, d)
loop L: until r ≤ e
  if qq + rr ≤ c
    then assert qq + rr ≤ c
         (q, qq) ← (q + r, qq + rr)
         assert qq ≤ c
    else assert c < qq + rr
  (r, rr) ← (r/2, rr/2)
repeat.
These assertions
  suggest (q, qq, r, rr) = (0, 0, 1, d) or qq ≤ c at L,
  suggest (q, qq, r, rr) = (0, 0, 1, d) or c < qq + 2rr at L.
In both cases the first disjunct implies the second, i.e.,
  qq = 0 ⇒ qq ≤ c (since 0 ≤ c),
and
  (qq = 0 and rr = d) ⇒ c < qq + 2rr (since c < d),
and we have therefore
  suggest qq ≤ c at L,
  suggest c < qq + 2rr at L.
Both candidates can be verified, and we get
(8) qq ≤ c at L,
(9) c < qq + 2rr at L.
D. Debugging. We have invariants (1) to (9) at L. Thus we can derive invariants (1) to (9) and r ≤ e at E. (Note that we have not used the output specification at all.) We can conclude at E:
  qq = d · q (by (6)) and qq ≤ c (by (8))
    ⇒ d · q ≤ c
    ⇒ q ≤ c/d,
and
  rr = d · r (by (5)) and qq = d · q (by (6)) and c < qq + 2rr (by (9))
    ⇒ c < d · q + 2d · r
    ⇒ c/d < q + 2r,
and
  r ≤ e
    ⇒ c/d < q + 2e.
So we have q ≤ c/d as desired, but c/d < q + 2e rather than the desired c/d < q + e: to meet the output specification, the exit test of the program should be 2r ≤ e rather than r ≤ e. Note that the output specification is now asserted rather than suggested.
E. Termination and run-time analysis. Let us consider the corrected program, augmented with a counter n:
B: assert 0 ≤ c < d and 0 < e
(q, qq, r, rr) ← (0, 0, 1, d)
n ← 0
loop L: ⋯
  until 2r ≤ e
  if qq + rr ≤ c then (q, qq) ← (q + r, qq + rr)
  (r, rr) ← (r/2, rr/2)
  n ← n + 1
repeat.
We have the following additional global invariants:
(i) n ← 0 or n ← n + 1; therefore
(10) 0 ≤ n in P.
(ii) (r, n) ← (1, 0) or (r, n) ← (r/2, n + 1); therefore
(11) r = 1/2^n in P.
(iii) (rr, n) ← (d, 0) or (rr, n) ← (rr/2, n + 1); therefore
(12) rr = d/2^n in P.
We have at E:
  r = 1/2^n (by (11)) and 2r ≤ e and (r = 1 or 4r > e) (by (7))
    ⇒ 2 · 1/2^n ≤ e and (1/2^n = 1 or 4 · 1/2^n > e)
    ⇒ 1 − log2 e ≤ n and (n = 0 or n < 2 − log2 e).
Therefore
  1 − log2 e is a lower bound on n at E;
  2 − log2 e is an upper bound on n at L ⇒ termination.
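Running the corrected program bears the analysis out; the sketch below (exact rational arithmetic, our choice) checks the output specification and both run-time bounds on exit:

```python
# Corrected real-division program (exit test 2r <= e) with counter n;
# the derived run-time bounds on n are checked on exit.
from fractions import Fraction
from math import log2

def divide_corrected(c, d, e):
    c, d, e = Fraction(c), Fraction(d), Fraction(e)
    assert 0 <= c < d and 0 < e
    q, qq, r, rr = Fraction(0), Fraction(0), Fraction(1), d
    n = 0
    while not 2 * r <= e:
        if qq + rr <= c:
            q, qq = q + r, qq + rr
        r, rr = r / 2, rr / 2
        n += 1
    assert q <= c / d < q + e          # output specification now met
    assert 1 - log2(e) <= n            # lower bound on n
    assert n == 0 or n < 2 - log2(e)   # upper bound on n
    return q, n
```

For c = 1, d = 3, e = 1/16, the corrected program returns q = 5/16 after n = 5 iterations, and indeed 1 − log2(1/16) = 5 ≤ n < 6 = 2 − log2(1/16).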
CHAPTER 5
Synthesis of Programs

(Manna and Waldinger (1979)). The goal: constructing programs to meet given specifications. The specification language consists of high-level constructs that enable us to describe in a very direct, precise, and natural way what is in the user's mind. For example, a program to find the maximum z of an array of numbers a is specified as follows:
max(a, n) ⇐ achieve all(a[0:n]) ≤ z and z ∈ a[0:n]
                    and only z changed              (output specification)
            where a is an array of numbers
                    and n is an integer and 0 ≤ n   (input specification).
Our target language will be a LISP-like language (if-then-else, recursion) with arrays and assignments. The specification language consists of the target-language constructs (primitive constructs) and high-level constructs (nonprimitive constructs) that are not in the target language.
where P and P' are conditions and F is a program segment that always terminates. (1) {P1} F {P} means if P' is true before executing F, then P is true after executing F. (2) wp(F,P) = P' means P' is true before executing F, if and only if P is true after executing F. P' is called the weakest precondition of F and P. 25
For example, for the program segment x ← x + 1 we have

    {x ≥ 0} x ← x + 1 {x ≥ 1},
    {x ≥ 1} x ← x + 1 {x ≥ 1},
    {x ≥ 2} x ← x + 1 {x ≥ 1},

but

    wp(x ← x + 1, x ≥ 1) = (x ≥ 0).

In general,

    wp(x ← t, P(x)) = P(t),

because P(t) is true before executing x ← t if and only if P(x) is true afterwards. We also have

    wp(Λ, P) = P, where Λ is the empty program,
    wp(if q then S₁ else S₂, P) = (q ⇒ wp(S₁, P)) and (not q ⇒ wp(S₂, P)),
    wp(if q then S, P) = (q ⇒ wp(S, P)) and (not q ⇒ P),
    wp(S₁; S₂, P) = wp(S₁, wp(S₂, P)),
    wp(achieve Q, P) = true if Q ⇒ P,
    wp(interchange(u, v), P(u, v)) = P(v, u).

For recursive calls we have the following rule: Suppose f(x) ⇐ B(x) is the definition of a recursive function f, f(s) is a recursive call, and > is a well-founded ordering on the domain of f. Then

    P′(s) = wp(f(s), P(s)),
if we can prove

    {P′(x)} B(x) {P(x)}

under the inductive assumption {P′(t)} f(t) {P(t)} for any t such that x > t.

B. Transformation rules. The basic tools we use for synthesis are transformation rules. A transformation rule is of the form

    t ⇒ t′.

It denotes that an expression t, which can be any part of the current program description, may be replaced by an expression t′. A transformation rule can also be of the form

    t ⇒ t′ if P,

which means that the transformation may be applied only if condition P is true. Here are some transformation rules that will be used in the following example:
(1) the creation rule:
    achieve P ⇒ [achieve wp(F, P); F]    for any statement F,
    e.g. the assignment-formation rule:
    achieve P(x) ⇒ [achieve P(t); x ← t]    for any term t;
(2) the achieve-elimination rule:
    achieve P ⇒ prove P;
(3) the prove-elimination rule:
    prove true ⇒ Λ;
(4) the composition-identity rule:
    F Λ ⇒ F,    Λ F ⇒ F;
(5) the reflexivity rule for ≤:
    u ≤ u ⇒ true.
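The assignment rule wp(x ← t, P(x)) = P(t) above is easy to mechanize; a minimal Python sketch (states as dictionaries; the helper name wp_assign is illustrative, not from the text) confirms that wp(x ← x + 1, x ≥ 1) agrees with x ≥ 0:

```python
def wp_assign(var, expr, post):
    """wp(var <- expr, post): post must hold in the state where
    var has been replaced by the value of expr (substitution rule)."""
    return lambda s: post({**s, var: expr(s)})

post = lambda s: s["x"] >= 1                        # P: x >= 1
pre = wp_assign("x", lambda s: s["x"] + 1, post)    # wp(x <- x+1, x >= 1)

# pre coincides with x >= 0 on every tested state:
assert all(pre({"x": v}) == (v >= 0) for v in range(-5, 6))
```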
Example: max1 program. Specification:

    max1(a) ⇐ achieve a ≤ z where a is a number.
The synthesis:

    achieve a ≤ z
⇒  achieve a ≤ a; z ← a    (by assignment formation)
⇒  prove a ≤ a; z ← a      (by achieve-elimination)
⇒  prove true; z ← a       (by reflexivity)
⇒  Λ; z ← a                (by prove-elimination)
⇒  z ← a                   (by composition-identity).
C. Simultaneous-goal principle. In general, we cannot achieve a simultaneous goal by decomposing it into a sequence of goals, i.e.

    achieve P and Q ⇏ [achieve P; achieve Q]
or
    achieve P and Q ⇏ [achieve Q; achieve P],

because in the course of making the second condition true we may very well make the first false. We really need the simultaneous-goal principle

    achieve P and Q ⇒ [achieve P; achieve Q; prove P].
Here, "prove P" assures us that P is protected while Q is achieved. For this purpose, it is very convenient to use the weakest-precondition operator with the regression rules:

    [F; achieve P] ⇒ [achieve wp(F, P); F]
and
    [F; prove P] ⇒ [prove wp(F, P); F].

Thus, the goal

    F₁; F₂; achieve P; prove Q
can be transformed to

    F₁; achieve wp(F₂, P); prove wp(F₂, Q); F₂

and then to

    achieve wp(F₁, wp(F₂, P)); prove wp(F₁, wp(F₂, Q)); F₁; F₂.
For example, suppose our goal is

    y ← x
    y ← y + 1
    achieve y ≥ 2
    prove x < y.

An attempt to achieve the relation y ≥ 2 by the assignment y ← 2 will fail, since we cannot prove x < 2:

    y ← x
    y ← y + 1
    achieve y ≥ 2   ⇒   y ← 2
    prove x < y          (failure).

However, it can be transformed to

    y ← x
    achieve wp(y ← y + 1, y ≥ 2) = achieve y ≥ 1
    prove wp(y ← y + 1, x < y) = prove x < y + 1
    y ← y + 1,

where the attempt y ← 1 fails again, since we cannot prove x < 1 + 1; and then again to

    achieve wp(y ← x, y ≥ 1) = achieve x ≥ 1
    prove wp(y ← x, x < y + 1) = prove x < x + 1
    y ← x
    y ← y + 1,

where the attempt x ← 1 succeeds, since x < x + 1 always holds. Therefore, the following program segment achieves the goal:

    x ← 1
    y ← x
    y ← y + 1.
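The derived segment can be checked by executing it and asserting, at each point, the regressed condition computed above (a concrete check, not part of the original derivation):

```python
x = 1                    # achieves x >= 1; prove x < x + 1 holds trivially
assert x >= 1            # wp(y <- x, y >= 1)
y = x
assert y >= 1            # wp(y <- y + 1, y >= 2)
y = y + 1
assert y >= 2 and x < y  # the original goal
```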
D. Conditional-formation principle. Example: max2 program. The goal:

    max2(a, b) ⇐ achieve a ≤ z and b ≤ z where a and b are numbers.

One derivation achieves the goal with z ← a, provided we can prove b ≤ a; another achieves it with z ← b, provided we can prove a ≤ b. The last step is justified by the fact that in order to prove the goal a ≤ b, when it is already known that a ≤ z, it suffices to prove z ≤ b.
We are going to use the conditional-formation principle: if we have two derivations of the same goal, one yielding a program F under the condition prove P and the other yielding a program F′ under the condition prove P′, where P or P′ is true and P is primitive, we may then transform the goal to

    if P then F else F′.

In the max2 example we therefore have P: a ≤ b with F: z ← b, and P′: b ≤ a with F′: z ← a. Thus

    max2(a, b) ⇐ if a ≤ b then z ← b else z ← a.
E. Recursion-formation principle. Example: simplified max program.

    max(a, n) ⇐ achieve all(a[0:n]) ≤ z and z ∈ a[0:n] and only z changed
                where a is an array of numbers, n is an integer, and 0 ≤ n.
Since, in this example, we only attempt to illustrate how recursive calls are introduced, and not how the simultaneous-goal principle is applied, we consider a simplified goal

    max(a, n) ⇐ achieve all(a[0:n]) ≤ z
                where a is an array of numbers, n is an integer, and 0 ≤ n.
Here we have the new all construct, where P(all(a[u:w])) means ∀i, u ≤ i ≤ w, P(a[i]). Some of the all rules are
(1) the vacuous rule:
    P(all(a[u:w])) ⇒ true    if u > w;
(2) the singleton rule:
    P(all(a[u:w])) ⇒ P(a[u])    if u = w;
(3) the right-decomposition rule:
    P(all(a[u:w])) ⇒ P(all(a[u:w−1])) and P(a[w])    if u < w;
(4) the left-decomposition rule:
    P(all(a[u:w])) ⇒ P(a[u]) and P(all(a[u+1:w]))    if u < w.
The derivation starts as follows:

    max(a, n) ⇐ achieve all(a[0:n]) ≤ z.
The recursion-formation principle is of the form: If

    f(x) ⇐ achieve P(x) where Q(x),

with top-level goal achieve P(x) and subgoal achieve P(t), then

    achieve P(t) ⇒ f(t),

provided t is primitive, Q(t) is satisfied (input condition), and x > t in some well-founded ordering > (termination condition). Proceeding with our example, by the right-decomposition rule we obtain

    prove 0 < n
    achieve all(a[0:n−1]) ≤ z
    achieve a[n] ≤ z
    prove all(a[0:n−1]) ≤ z,

and by recursion formation the subgoal achieve all(a[0:n−1]) ≤ z becomes the recursive call max(a, n−1).
Thus, applying the conditional-formation rule, we obtain

    max(a, n) ⇐ achieve all(a[0:n]) ≤ z
⇒
    max(a, n) ⇐ if n = 0
                  then z ← a[0]
                  else max(a, n−1)
                       if z < a[n] then z ← a[n].
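The synthesized program transcribes directly to Python (a sketch; a[0:n] is read inclusively, as in the text):

```python
def max_syn(a, n):
    """Synthesized max: establishes all(a[0:n]) <= z and returns z."""
    if n == 0:
        return a[0]
    z = max_syn(a, n - 1)        # all(a[0:n-1]) <= z holds here
    if z < a[n]:
        z = a[n]
    return z
```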
F. Generalization. In the above example we have used the right-decomposition rule of all:

    P(all(a[u:w])) ⇒ P(all(a[u:w−1])) and P(a[w])    if u < w.

Now, suppose we attempt to use the left-decomposition rule instead:

    P(all(a[u:w])) ⇒ P(a[u]) and P(all(a[u+1:w]))    if u < w.

Then we obtain

    max(a, n) ⇐ achieve all(a[0:n]) ≤ z
⇒
    prove 0 < n
    achieve all(a[1:n]) ≤ z
    achieve a[0] ≤ z
    prove all(a[1:n]) ≤ z.
We cannot apply the recursion-formation principle, since the top-level goal is all(a[0:n]) ≤ z, while the subgoal is all(a[1:n]) ≤ z. However, by generalization (0 becomes i and 1 becomes i + 1) we obtain

    maxgen(a, i, n) ⇐ achieve all(a[i:n]) ≤ z
⇒
    prove i < n
    achieve all(a[i+1:n]) ≤ z
    achieve a[i] ≤ z
    prove all(a[i+1:n]) ≤ z
⇒  (recursion formation)
    prove i < n
    maxgen(a, i+1, n)
    achieve a[i] ≤ z
    prove all(a[i+1:n]) ≤ z.
The final program is

    max(a, n) ⇐ maxgen(a, 0, n)
where
    maxgen(a, i, n) ⇐ if i = n
                        then z ← a[i]
                        else maxgen(a, i+1, n)
                             if z < a[i] then z ← a[i].
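The generalized program, transcribed directly (again with inclusive index ranges):

```python
def maxgen(a, i, n):
    """Achieves all(a[i:n]) <= z for the inclusive range a[i..n]."""
    if i == n:
        return a[i]
    z = maxgen(a, i + 1, n)
    if z < a[i]:
        z = a[i]
    return z

def max_left(a, n):
    """max(a, n) <= maxgen(a, 0, n)."""
    return maxgen(a, 0, n)
```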
G. Program modification. Suppose we also want to find the index y of the maximal element in the example max above. Then, by modifying the max(a, n) program we obtain

    max(a, n) ⇐ achieve all(a[0:n]) ≤ z and z = a[y]
⇒  (simultaneous goal)
    achieve all(a[0:n]) ≤ z
    achieve z = a[y]
    prove all(a[0:n]) ≤ z
⇒
    max(a, n)
    assert all(a[0:n]) ≤ z
    achieve z = a[y]
    prove all(a[0:n]) ≤ z
⇒
    if n = 0 then z ← a[0]
                  assert z = a[0]
                  achieve z = a[y]
                  prove a[0] ≤ z
             else max(a, n − 1)
                  assert all(a[0:n−1]) ≤ z
                  if z < a[n] then z ← a[n]
                  achieve z = a[y]
                  prove all(a[0:n]) ≤ z
⇒
    max(a, n) ⇐ if n = 0 then z ← a[0]
                              y ← 0
                          else max(a, n − 1)
                              if z < a[n] then (z, y) ← (a[n], n).
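The modified program, which also computes the index y of the maximal element, transcribes to:

```python
def max_with_index(a, n):
    """Achieves all(a[0:n]) <= z and z = a[y]; returns (z, y)."""
    if n == 0:
        return a[0], 0
    z, y = max_with_index(a, n - 1)
    if z < a[n]:
        z, y = a[n], n
    return z, y
```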
H. Comparison with structured programming. Let us consider the synthesis of an exp(x, y) program which sets the value of a variable z to the exponential z = x^y of two integers x and y, where x > 0 and y ≥ 0. We are given a number of properties of the exponential function:

    u^v = 1                 if u ≠ 0 and v = 0,
    u^v = (u·u)^(v÷2)       if v is even, and
    u^v = u · (u·u)^(v÷2)   if v is odd,

for any integers u and v. Here, ÷ denotes integer division.
1. Constructing an iterative program (by the structured-programming approach). Written in our notation, the top-level goal of a structured-programming derivation is

    Goal A: achieve z = x^y,

assuming that x^y is a nonprimitive construct which may not appear in the final program. This goal can be decomposed into the conjunction of two conditions:

    Goal B: achieve z · xx^yy = x^y and yy = 0.

The motivation for this step is that, initially, we can achieve the first condition z · xx^yy = x^y easily enough (by setting xx to x, yy to y, and z to 1); thus, if we manage to achieve the second condition yy = 0 subsequently, while maintaining the first condition, we will have achieved our goal. For this purpose, we establish an iterative loop, whose invariant is z · xx^yy = x^y and whose exit condition is yy = 0; the body of the loop must bring yy closer to zero while maintaining the invariant:

    (xx, yy, z) ← (x, y, 1)
    while yy ≠ 0 do
        assert z · xx^yy = x^y
        achieve reduction in yy while maintaining invariant.

By exploiting the known properties of the exponential and other arithmetic functions, we are led ultimately to the final program

    exp(x, y) ⇐ (xx, yy, z) ← (x, y, 1)
                while yy ≠ 0 do
                    assert z · xx^yy = x^y
                    if even(yy) then (xx, yy) ← (xx · xx, yy ÷ 2)
                                else (xx, yy, z) ← (xx · xx, yy ÷ 2, xx · z).
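A direct Python transcription of the iterative program, with the loop invariant checked by an assertion:

```python
def exp_iter(x, y):
    """Iterative exponentiation from the structured-programming derivation."""
    xx, yy, z = x, y, 1
    while yy != 0:
        assert z * xx**yy == x**y        # loop invariant
        if yy % 2 == 0:                  # even(yy)
            xx, yy = xx * xx, yy // 2
        else:
            xx, yy, z = xx * xx, yy // 2, xx * z
    return z
```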
The weak point in this derivation seems to be the passage from Goal A to Goal B. This step is necessary to provide the invariant for the loop of the ultimate program, but requires considerable insight. 2. Constructing a recursive program (by the above synthesis approach). The derivation of the corresponding recursive program by the program-synthesis technique is straightforward. By using the same properties of the arithmetic
functions that were exploited in the structured-programming derivation, we can derive the corresponding recursive program. Thus, only after we observe that the subexpression (x·x)^(y÷2) is an instance of the expression x^y do we actually decide to introduce a recursive call exp(x·x, y÷2) to compute this subexpression. Since 0 ≤ y÷2 < y for y > 0, the resulting exp(x, y) program must terminate:

    exp(x, y) ⇐ if y = 0 then 1
                else if even(y) then exp(x·x, y÷2)
                else x · exp(x·x, y÷2).
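And the recursive program, transcribed directly:

```python
def exp_rec(x, y):
    """Recursive exponentiation; terminates since 0 <= y // 2 < y for y > 0."""
    if y == 0:
        return 1
    if y % 2 == 0:                       # even(y)
        return exp_rec(x * x, y // 2)
    return x * exp_rec(x * x, y // 2)
```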
CHAPTER 6
Termination of Production Systems (Manna and Ness (1970), Dershowitz and Manna (1979)). The following is a production system that differentiates with respect to x an expression containing + and ·:

    1. Dx → 1
    2. Dy → 0
    3. D(α + β) → Dα + Dβ
    4. D(α · β) → (β · Dα) + (α · Dβ),

where y can be any constant or variable other than x. Let us consider the expression

    D(D(x · x) + y).

We could either apply the third production to the outer D, or else we could apply the fourth production to the inner D, and so on, obtaining Fig. 1. The left path, for example, is denoted by

    D(D(x·x) + y) → D((x·Dx + x·Dx) + y) → D(x·Dx + x·Dx) + Dy → · · ·

In general, at each stage of the computation there are many ways to proceed, and the choice is made nondeterministically. In our case, all choices eventually lead to the expression

    ((1·1 + x·0) + (1·1 + x·0)) + 0,

for which no further application of a production is possible.

DEFINITION. A production system π over a set of expressions E terminates if there exists no infinite sequence e₁ → e₂ → e₃ → · · · of expressions in E such that every eᵢ₊₁ results from eᵢ by the application of a production. In other words, given any expression e₁ in E, all possible executions that start with e₁ reach a state for which there is no way to continue applying productions.

It is difficult to prove the termination of such a system because
(1) a production (e.g. the last two in the differentiation example above) may increase the size of an expression;
(2) a production (e.g. the fourth) may actually duplicate occurrences of subexpressions;
(3) a production may affect not only the structure of the subexpression it is applied to, but it may also change the set of productions which match its superexpressions.
We must take into consideration the many different possible sequences, generated by the nondeterministic choice of productions and subexpressions.
FIG. 1
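The four productions can be animated on expressions represented as nested tuples; this is a sketch in which the representation and the leftmost application strategy are implementation choices, not part of the text (the system is nondeterministic, but here every choice leads to the same final expression):

```python
def step(e):
    """Apply one production somewhere in e; return the result, or None
    if no production applies. 'D' marks differentiation; atoms are
    'x', other names, or numbers."""
    if isinstance(e, tuple):
        if e[0] == 'D':
            a = e[1]
            if a == 'x':
                return 1                                    # 1. Dx -> 1
            if not isinstance(a, tuple):
                return 0                                    # 2. Dy -> 0
            if a[0] == '+':
                return ('+', ('D', a[1]), ('D', a[2]))      # 3.
            if a[0] == '*':
                return ('+', ('*', a[2], ('D', a[1])),      # 4.
                             ('*', a[1], ('D', a[2])))
        for i in range(1, len(e)):          # otherwise rewrite a subexpression
            s = step(e[i])
            if s is not None:
                return e[:i] + (s,) + e[i + 1:]
    return None

def normalize(e):
    """Apply productions until none is applicable."""
    while (s := step(e)) is not None:
        e = s
    return e
```

For D(D(x·x) + y) this reaches ((1·1 + x·0) + (1·1 + x·0)) + 0, the expression given above.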
THEOREM. A production system π over E terminates if and only if there exists a well-founded set (W, >) and a termination function τ: E → W such that for any e, e′ ∈ E,

    e → e′ ⇒ τ(e) > τ(e′).
A. Example 1: Associativity. Consider the production system

    (α + β) + γ → α + (β + γ).

Note that the size of the expression is unchanged.

Regular termination proof: Take (N, >) and τ: E → N such that

    τ(α + β) = 2·τ(α) + τ(β),
    τ(atom) = 1.

Thus, τ((a+3) + ((b+1) + c)) = 2(2·1 + 1) + (2(2·1 + 1) + 1) = 13.
1. τ decreases:

    τ((α + β) + γ) = 4·τ(α) + 2·τ(β) + τ(γ) > 2·τ(α) + 2·τ(β) + τ(γ) = τ(α + (β + γ)),

since τ(α) ≥ 1.
2. τ is monotonic in each operand. Thus, if e → e′ for the outer expression e, then some subexpression (α + β) + γ of e has been replaced by α + (β + γ) to obtain e′. We have τ((α + β) + γ) > τ(α + (β + γ)) by the first property. Therefore, by the monotonicity property we get τ(e) > τ(e′) for the outer expression. That is,

    e → e′ ⇒ τ(e) > τ(e′).
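The termination function can be checked mechanically (expressions as tuples ('+', l, r), atoms otherwise; a sketch):

```python
def tau(e):
    """tau(alpha + beta) = 2*tau(alpha) + tau(beta); tau(atom) = 1."""
    if not isinstance(e, tuple):
        return 1
    _, l, r = e
    return 2 * tau(l) + tau(r)

# tau((a+3) + ((b+1) + c)) = 13, as computed in the text:
e = ('+', ('+', 'a', '3'), ('+', ('+', 'b', '1'), 'c'))
assert tau(e) == 13

# one application of (alpha+beta)+gamma -> alpha+(beta+gamma) decreases tau:
lhs = ('+', ('+', 'a', 'b'), 'c')
rhs = ('+', 'a', ('+', 'b', 'c'))
assert tau(lhs) > tau(rhs)
```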
Multiset proof (see Chapter 2): Take (M(N), ≫) and

    τ(e) = {|α| : α is the left operand of an occurrence of + in e},

i.e., τ(e) is the multiset of the sizes (= number of symbols) of all left components of + in e. Thus

    τ((a+3) + ((b+1) + c)) = {1, 3, 1, 3}.
1. τ decreases. That is, τ((α + β) + γ) is the multiset consisting of |α|, |α + β|, and the sizes of left components of all inner +'s in α, β and γ; while τ(α + (β + γ)) is the same but with |α + β| replaced by the smaller element |β|.
2. Size is maintained; therefore the sizes of the left components of higher-level +'s are not changed. Thus,

    e → e′ ⇒ τ(e) ≫ τ(e′).
B. Example 2: Distribution system. Consider

    α · (β + γ) → (α · β) + (α · γ),
    (β + γ) · α → (β · α) + (γ · α).

Note that the size of the expression is increased in both cases.

Regular proof: Take (N, >) and τ: E → N, where

    τ(α + β) = τ(α) + τ(β) + 1,
    τ(α · β) = τ(α) · τ(β),
    τ(atom) = 2.

Then

    τ((a+3) · ((b+1) · c)) = (2 + 2 + 1) · ((2 + 2 + 1) · 2) = 50.

1. τ decreases:

    τ(α · (β + γ)) = τ(α)·τ(β) + τ(α)·τ(γ) + τ(α) > τ(α)·τ(β) + τ(α)·τ(γ) + 1 = τ((α · β) + (α · γ)),

since τ(α) > 1; and similarly for the other production.
2. τ is monotonic in each operand.
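Again this can be checked mechanically (tuples ('+', l, r) and ('*', l, r); a sketch):

```python
def tau(e):
    """tau(a+b) = tau(a)+tau(b)+1; tau(a*b) = tau(a)*tau(b); tau(atom) = 2."""
    if not isinstance(e, tuple):
        return 2
    op, l, r = e
    return tau(l) + tau(r) + 1 if op == '+' else tau(l) * tau(r)

# tau((a+3) * ((b+1) * c)) = 50, as in the text:
e = ('*', ('+', 'a', '3'), ('*', ('+', 'b', '1'), 'c'))
assert tau(e) == 50

# one application of alpha*(beta+gamma) -> (alpha*beta)+(alpha*gamma) decreases tau:
lhs = ('*', 'a', ('+', 'b', 'c'))
rhs = ('+', ('*', 'a', 'b'), ('*', 'a', 'c'))
assert tau(lhs) > tau(rhs)
```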
Therefore

    e → e′ ⇒ τ(e) > τ(e′).

Multiset proof: Take (M(N), ≫) and

    τ(e) = {val(α · β) : α · β in e},

where val(e) is the arithmetic value of e with 1 for all atoms. Thus

    τ((a·3) · ((b+1)·c)) = {val(a·3), val((a·3)·((b+1)·c)), val((b+1)·c)} = {1·1, 1·2, 2·1} = {1, 2, 2}.

1. τ decreases: the element val(α·(β+γ)) is replaced by the two smaller elements val(α·β) and val(α·γ).
2. Value is maintained:

    val(α · (β + γ)) = val((α · β) + (α · γ)),

and similarly for the other production. Therefore

    e → e′ ⇒ τ(e) ≫ τ(e′).
C. Example 3: Differentiation system. Consider again the differentiation system given at the beginning of this chapter:

    1. Dx → 1
    2. Dy → 0
    3. D(α + β) → Dα + Dβ
    4. D(α · β) → (β · Dα) + (α · Dβ).
Regular proof: Take (N, >) and τ: E → N, where

    τ(Dα) = τ(α)²,
    τ(α ⊗ β) = τ(α) + τ(β),
    τ(atom) = 4,

where ⊗ stands for any of the binary operations.
1. τ decreases for every production. For example, for the fourth production,

    τ(D(α·β)) = (τ(α) + τ(β))² > τ(α)² + τ(β)² + τ(α) + τ(β) = τ((β·Dα) + (α·Dβ)),

since τ(α), τ(β) ≥ 4.
2. τ is monotonic in each operand. Therefore

    e → e′ ⇒ τ(e) > τ(e′).
Multiset proof: Since each production replaces an occurrence of D by one or more occurrences of D with shorter arguments, let us take (M(N), ≫) and

    τ(e) = {|α| : Dα in e}.

1. τ decreases for every production. For example, for the fourth production, the element |α · β| is replaced by the two smaller elements |α| and |β|.
2. However, a production applied within the argument of an outer D may increase the size of that argument (the last two productions increase the size of an expression), so the multiset element corresponding to the outer D grows; therefore the proof does not work. We need a stronger mathematical tool, since we are losing vital information: the D's are nested in the expression, while our multiset is linear.
D. Nested multisets. DEFINITION. A nested multiset over a set S is either an element of S, or else it is a finite multiset of nested multisets over S. We denote by M*(S) the set of nested multisets over S. For example, {{1, 1}, {{0}, 1, 2}, 0} is a nested multiset over N.

Each element of a nested multiset is assigned a depth, corresponding to the number of braces surrounding it. Thus

    {{1, 1}, {{0}, 1, 2}, 5}
      2  2    3   2  2   1      ← depth

This multiset is said to be of depth 3, since it contains at least one element of depth 3 and no element of greater depth.

For a given partially-ordered set (S, >), the nested-multiset ordering ≫* on M*(S) is the recursive version of the standard multiset ordering: For two elements M, M′ ∈ M*(S) we say that M ≫* M′ if
1. M, M′ ∈ S and M > M′ (two elements of the base set are compared using >), or else
2. M ∉ S and M′ ∈ S (any multiset is greater than any element of the base set), or else
3. M, M′ ∉ S, and for some X, Y, Z ∈ M*(S), where X is not empty,

    M = X ∪ Z and M′ = Y ∪ Z,

and for every y ∈ Y there exists some x ∈ X such that x ≫* y.

For example, the nested multiset {{1, 1}, {{1}, 1, 2}, 0} is greater than {{1, 0, 0}, 5, {1, 1, 2}, 0}, since {1, 1} is greater than both {1, 0, 0} and 5, and {{1}, 1, 2} is greater than {1, 1, 2}. The nested multiset {{1, 1}, {{1}, 1, 2}, 0}
is also greater than {{{0, 0}, 7}, {5, 5, 2}, 5}, since {1, 1} and 0 were removed, and {{1}, 1, 2} is greater than each of the three elements {{0, 0}, 7}, {5, 5, 2}, and 5.

THEOREM. (M*(S), ≫*) is well-founded if and only if (S, >) is well-founded.

Let us consider again the differentiation system. For a nested-multiset proof, we take (M*(N), ≫*) and τ: E → M*(N), where with each occurrence of D in e we associate the size of its argument, with the appropriate nesting. For example,

    τ(D(D(Dx · Dy) + Dy)/Dx) = {{9, {5, {1}, {1}}, {1}}, {1}}.

1. Outer D's have lesser depth: they are dominated by the inner D's.
2. τ decreases for every production.
For example, for the fourth production, {|α · β|, · · ·} on the left-hand side is greater than any one of the components of the right-hand side. Thus,

    e → e′ ⇒ τ(e) ≫* τ(e′).
Acknowledgments. I would like to thank N. Dershowitz and R. Waldinger for their help in the preparation of these lecture notes. D. Dolev and P. Wolper provided valuable comments on the manuscript.
References

This list includes only the papers that were used directly in the preparation of these lecture notes. The reader can find an extensive bibliography in the survey by Manna and Waldinger (1978a).

R. M. BURSTALL (1974), Program proving as hand simulation with a little induction, Information Processing 1974, North-Holland, Amsterdam, pp. 308-312.
N. DERSHOWITZ AND Z. MANNA (1977), Inference rules for program annotation, technical report, Computer Science Dept., Stanford University, Stanford, CA.
N. DERSHOWITZ AND Z. MANNA (1979), Proving termination with multiset orderings, Comm. ACM, 22, pp. 465-476.
R. W. FLOYD (1967), Assigning meanings to programs, Proceedings of Symposium in Applied Mathematics, vol. 19, J. T. Schwartz, ed., American Mathematical Society, Providence, RI, pp. 19-32.
C. A. R. HOARE (1969), An axiomatic basis for computer programming, Comm. ACM, 12, pp. 576-580, 583.
Z. MANNA (1971), Mathematical theory of partial correctness, J. Comput. System Sci., 5, pp. 239-253.
Z. MANNA AND S. NESS (1970), On the termination of Markov algorithms, Proc. Third Hawaii International Conference on System Sciences, Honolulu, HI, pp. 789-792.
Z. MANNA AND R. WALDINGER (1979), Synthesis: dreams ⇒ programs, IEEE Trans. Software Engineering, SE-5, pp. 294-328.
Z. MANNA AND R. WALDINGER (1978), Is "sometime" sometimes better than "always"? Intermittent assertions in proving program correctness, Comm. ACM, 21, pp. 159-172.
Z. MANNA AND R. WALDINGER (1978a), The logic of computer programming, IEEE Trans. Software Engineering, SE-4, pp. 199-229.
J. H. MORRIS AND B. WEGBREIT (1977), Subgoal induction, Comm. ACM, 20, pp. 209-222.