Numerical Linear Approximation in C
CHAPMAN & HALL/CRC
Numerical Analysis and Scientific Computing

Aims and scope:
Scientific computing and numerical analysis provide invaluable tools for the sciences and engineering. This series aims to capture new developments and summarize state-of-the-art methods over the whole spectrum of these fields. It will include a broad range of textbooks, monographs and handbooks. Volumes in theory, including discretisation techniques, numerical algorithms, multiscale techniques, parallel and distributed algorithms, as well as applications of these methods in multidisciplinary fields, are welcome. The inclusion of concrete real-world examples is highly encouraged. This series is meant to appeal to students and researchers in mathematics, engineering and computational science.
Editors

Choi-Hong Lai, School of Computing and Mathematical Sciences, University of Greenwich
Frédéric Magoulès, Applied Mathematics and Systems Laboratory, Ecole Centrale Paris

Editorial Advisory Board

Mark Ainsworth, Mathematics Department, Strathclyde University
Todd Arbogast, Institute for Computational Engineering and Sciences, The University of Texas at Austin
Craig C. Douglas, Computer Science Department, University of Kentucky
Ivan Graham, Department of Mathematical Sciences, University of Bath
Peter Jimack, School of Computing, University of Leeds
Takashi Kako, Department of Computer Science, The University of Electro-Communications
Peter Monk, Department of Mathematical Sciences, University of Delaware
Francois-Xavier Roux, ONERA
Arthur E.P. Veldman, Institute of Mathematics and Computing Science, University of Groningen
Proposals for the series should be submitted to one of the series editors above or directly to:

CRC Press, Taylor & Francis Group
24-25 Blades Court, Deodar Road
London SW15 2NU, UK
Numerical Linear Approximation in C
Nabih N. Abdelmalek, Ph.D. William A. Malek, M.Eng.
Chapman & Hall/CRC
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2008 by Taylor & Francis Group, LLC
Chapman & Hall/CRC is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works
Printed in the United States of America on acid-free paper
10 9 8 7 6 5 4 3 2 1

International Standard Book Number-13: 978-1-58488-978-6 (Hardcover)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged, please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data

Abdelmalek, Nabih N.
Numerical linear approximation in C / Nabih Abdelmalek and William A. Malek.
p. cm. -- (CRC numerical analysis and scientific computing)
Includes bibliographical references and index.
ISBN 978-1-58488-978-6 (alk. paper)
1. Chebyshev approximation. 2. Numerical analysis. 3. Approximation theory. I. Malek, William A. II. Title. III. Series.
QA297.A23 2008
518--dc22    2008002447

Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com
Contents

List of figures
Preface
Acknowledgments
Warranties
About the authors

PART 1  Preliminaries and Tutorials

Chapter 1  Applications of Linear Approximation
1.1 Introduction
1.2 Applications to social sciences and economics
    1.2.1 Systolic blood pressure and age
    1.2.2 Annual teacher salaries
    1.2.3 Factors affecting survival of island species
    1.2.4 Factors affecting fuel consumption
    1.2.5 Examining factors affecting the mortality rate
    1.2.6 Effects of forecasting
    1.2.7 Factors affecting gross national products
1.3 Applications to industry
    1.3.1 Windmill generating electricity
    1.3.2 A chemical process
1.4 Applications to digital images
    1.4.1 Smoothing of random noise in digital images
    1.4.2 Filtering of impulse noise in digital images
    1.4.3 Applications to pattern classification
    1.4.4 Restoring images with missing high-frequency components
    1.4.5 De-blurring digital images using the Ridge equation
    1.4.6 De-blurring images using truncated eigensystem
    1.4.7 De-blurring images using quadratic programming

Chapter 2  Preliminaries
2.1 Introduction
2.2 Discrete linear approximation and solution of overdetermined linear equations
2.3 Comparison between the L1, the L2 and the L∞ norms by a practical example
    2.3.1 Some characteristics of the L1 and the Chebyshev approximations
2.4 Error tolerances in the calculation
2.5 Representation of vectors and matrices in C
2.6 Outliers and dealing with them
    2.6.1 Data editing and residual analysis

Chapter 3  Linear Programming and the Simplex Algorithm
3.1 Introduction
    3.1.1 Exceptional linear programming problems
3.2 Notations and definitions
3.3 The simplex algorithm
    3.3.1 Initial basic feasible solution
    3.3.2 Improving the initial basic feasible solution
3.4 The simplex tableau
3.5 The two-phase method
    3.5.1 Phase 1
    3.5.2 Phase 2
    3.5.3 Detection of the exceptional cases
3.6 Duality theory in linear programming
    3.6.1 Fundamental properties of the dual problems
    3.6.2 Dual problems with mixed constraints
    3.6.3 The dual simplex algorithm
3.7 Degeneracy in linear programming and its resolution
    3.7.1 Degeneracy in the simplex method
    3.7.2 Avoiding initial degeneracy in the simplex algorithm
    3.7.3 Resolving degeneracy resulting from equal θmin
    3.7.4 Resolving degeneracy in the dual simplex method
3.8 Linear programming and linear approximation
    3.8.1 Linear programming and the L1 approximation
    3.8.2 Linear programming and Chebyshev approximation
3.9 Stability of the solution in linear programming

Chapter 4  Efficient Solutions of Linear Equations
4.1 Introduction
4.2 Vector and matrix norms and relevant theorems
    4.2.1 Vector norms
    4.2.2 Matrix norms
    4.2.3 Hermitian matrices and vectors
    4.2.4 Other matrix norms
    4.2.5 Euclidean and the spectral matrix norms
    4.2.6 Euclidean norm and the singular values
    4.2.7 Eigenvalues and the singular values of the sum and the product of two matrices
    4.2.8 Accuracy of the solution of linear equations
4.3 Elementary matrices
4.4 Gauss LU factorization with complete pivoting
    4.4.1 Importance of pivoting
    4.4.2 Using complete pivoting
    4.4.3 Pivoting and the rank of matrix A
4.5 Orthogonal factorization methods
    4.5.1 The elementary orthogonal matrix H
    4.5.2 Householder’s QR factorization with pivoting
    4.5.3 Pivoting in Householder’s method
    4.5.4 Calculation of the matrix inverse A–1
4.6 Gauss-Jordan method
4.7 Rounding errors in arithmetic operations
    4.7.1 Normalized floating-point representation
    4.7.2 Overflow and underflow in arithmetic operations
    4.7.3 Arithmetic operations in a d.p. accumulator
    4.7.4 Computation of the square root of a s.p. number
    4.7.5 Arithmetic operations in a s.p. accumulator
    4.7.6 Arithmetic operations with two d.p. numbers
    4.7.7 Extended simple s.p. operations in a d.p. accumulator
    4.7.8 Alternative expressions for summations and inner-product operations
    4.7.9 More conservative error bounds
    4.7.10 D.p. summations and inner-product operations
    4.7.11 Rounding error in matrix computation
    4.7.12 Forward and backward round-off error analysis
    4.7.13 Statistical error bounds and concluding remarks

PART 2  The L1 Approximation

Chapter 5  Linear L1 Approximation
5.1 Introduction
    5.1.1 Characterization of the L1 solution
5.2 Linear programming formulation of the problem
5.3 Description of the algorithm
5.4 The dual simplex method
5.5 Modification to the algorithm
5.6 Occurrence of degeneracy
5.7 A significant property of the L1 approximation
5.8 Triangular decomposition of the basis matrix
5.9 Arithmetic operations count
5.10 Numerical results and comments
5.11 DR_L1
5.12 LA_L1
5.13 DR_Lone
5.14 LA_Lone

Chapter 6  One-Sided L1 Approximation
6.1 Introduction
    6.1.1 Applications of the algorithm
    6.1.2 Characterization and uniqueness
6.2 A special problem of a general constrained one
6.3 Linear programming formulation of the problem
6.4 Description of the algorithm
    6.4.1 Obtaining an initial basic feasible solution
    6.4.2 One-sided L1 solution from above
    6.4.3 The interpolation property
6.5 Numerical results and comments
6.6 DR_Loneside
6.7 LA_Loneside

Chapter 7  L1 Approximation with Bounded Variables
7.1 Introduction
    7.1.1 Linear L1 approximation with non-negative parameters (NNL1)
7.2 A special problem of a general constrained one
7.3 Linear programming formulation of the problem
    7.3.1 Properties of the matrix of constraints
7.4 Description of the algorithm
7.5 Numerical results and comments
7.6 DR_Lonebv
7.7 LA_Lonebv

Chapter 8  L1 Polygonal Approximation of Plane Curves
8.1 Introduction
    8.1.1 Two basic issues
    8.1.2 Approaches for polygonal approximation
    8.1.3 Other unique approaches
    8.1.4 Criteria by which error norm is chosen
    8.1.5 Direction of error measure
    8.1.6 Comparison and stability of polygonal approximations
    8.1.7 Applications of the algorithm
8.2 The L1 approximation problem
8.3 Description of the algorithm
8.4 Linear programming technique
    8.4.1 The algorithm using linear programming
8.5 Numerical results and comments
8.6 DR_L1pol
8.7 LA_L1pol

Chapter 9  Piecewise L1 Approximation of Plane Curves
9.1 Introduction
    9.1.1 Applications of piecewise approximation
9.2 Characteristics of the piecewise approximation
9.3 The discrete linear L1 approximation problem
9.4 Description of the algorithms
    9.4.1 Piecewise linear L1 approximation with pre-assigned tolerance
    9.4.2 Piecewise linear approximation with near-balanced L1 norms
9.5 Numerical results and comments
9.6 DR_L1pw1
9.7 LA_L1pw1
9.8 DR_L1pw2
9.9 LA_L1pw2

PART 3  The Chebyshev Approximation

Chapter 10  Linear Chebyshev Approximation
10.1 Introduction
    10.1.1 Characterization of the Chebyshev solution
10.2 Linear programming formulation of the problem
    10.2.1 Property of the matrix of constraints
10.3 Description of the algorithm
10.4 A significant property of the Chebyshev approximation
    10.4.1 The equioscillation property of the Chebyshev norm
10.5 Numerical results and comments
10.6 DR_Linf
10.7 LA_Linf

Chapter 11  One-Sided Chebyshev Approximation
11.1 Introduction
    11.1.1 Applications of the algorithm
11.2 A special problem of a general constrained one
11.3 Linear programming formulation of the problem
    11.3.1 Properties of the matrix of constraints
11.4 Description of the algorithm
    11.4.1 One-sided Chebyshev solution from below
11.5 Numerical results and comments
    11.5.1 Simple relationships between the Chebyshev and one-sided Chebyshev approximations
11.6 DR_Linfside
11.7 LA_Linfside

Chapter 12  Chebyshev Approximation with Bounded Variables
12.1 Introduction
    12.1.1 Linear Chebyshev approximation with non-negative parameters (NNLI)
12.2 A special problem of a general constrained one
12.3 Linear programming formulation of the problem
    12.3.1 Properties of the matrix of constraints
12.4 Description of the algorithm
12.5 Numerical results and comments
12.6 DR_Linfbv
12.7 LA_Linfbv

Chapter 13  Restricted Chebyshev Approximation
13.1 Introduction
    13.1.1 The semi-infinite programming problem
    13.1.2 Special cases
    13.1.3 Applications of the restricted Chebyshev algorithm
13.2 A special problem of general constrained algorithms
13.3 Linear programming formulation of the problem
    13.3.1 Properties of the matrix of constraints
13.4 Description of the algorithm
13.5 Triangular decomposition of the basis matrix
13.6 Arithmetic operations count
13.7 Numerical results and comments
13.8 DR_Restch
13.9 LA_Restch

Chapter 14  Strict Chebyshev Approximation
14.1 Introduction
14.2 The problem as presented by Descloux
14.3 Linear programming analysis of the problem
    14.3.1 The characteristic set R1 and how to obtain it
    14.3.2 Calculating matrix (DT)–1
    14.3.3 The case of a rank deficient coefficient matrix
14.4 Numerical results and comments
14.5 DR_Strict
14.6 LA_Strict

Chapter 15  Piecewise Chebyshev Approximation
15.1 Introduction
    15.1.1 Applications of piecewise approximation
15.2 Characteristic properties of piecewise approximation
15.3 The discrete linear Chebyshev approximation problem
15.4 Description of the algorithms
    15.4.1 Piecewise linear Chebyshev approximation with pre-assigned tolerance
    15.4.2 Piecewise linear Chebyshev approximation with near-balanced Chebyshev norms
15.5 Numerical results and comments
15.6 DR_Linfpw1
15.7 LA_Linfpw1
15.8 DR_Linfpw2
15.9 LA_Linfpw2

Chapter 16  Solution of Linear Inequalities
16.1 Introduction
    16.1.1 Linear programming techniques
16.2 Pattern classification problem
16.3 Solution of the system of linear inequalities Ca > 0
16.4 Linear one-sided Chebyshev approximation algorithm
16.5 Linear one-sided L1 approximation algorithm
16.6 Numerical results and comments
16.7 DR_Chineq
16.8 DR_L1ineq

PART 4  The Least Squares Approximation

Chapter 17  Least Squares and Pseudo-Inverses of Matrices
17.1 Introduction
17.2 Least squares solution of linear equations
    17.2.1 Minimal-length least squares solution
17.3 Factorization of matrix A
    17.3.1 Gauss LU factorization
    17.3.2 Householder’s factorization
    17.3.3 Givens’ transformation (plane rotations)
    17.3.4 Classical and modified Gram-Schmidt methods
17.4 Explicit expression for the pseudo-inverse
    17.4.1 A+ in terms of Gauss factorization
    17.4.2 A+ in terms of Householder’s factorization
17.5 The singular value decomposition (SVD)
    17.5.1 Spectral condition number of matrix A
    17.5.2 Main properties of the pseudo-inverse A+
17.6 Practical considerations in computing
    17.6.1 Cholesky’s decomposition
    17.6.2 Solution of the normal equation
    17.6.3 Solution via Gauss LU factorization method
    17.6.4 Solution via Householder’s method
    17.6.5 Calculation of A+
    17.6.6 Influence of the round-off error
17.7 Linear spaces and the pseudo-inverses
    17.7.1 Definitions, notations and related theorems
    17.7.2 Subspaces and their dimensions
    17.7.3 Gram-Schmidt orthogonalization
    17.7.4 Range spaces of A and AT and their orthogonal complements
    17.7.5 Representation of vectors in Vm
    17.7.6 Orthogonal projection onto range and null spaces
    17.7.7 Singular values of the orthogonal projection matrices
17.8 Multicollinearity, collinearity or the ill-conditioning of matrix A
    17.8.1 Sources of multicollinearity
    17.8.2 Detection of multicollinearity
17.9 Principal components analysis (PCA)
    17.9.1 Derivation of the principal components
17.10 Partial least squares method (PLS)
    17.10.1 Model building for the PLS method
17.11 Ridge equation
    17.11.1 Estimating the Ridge parameter
    17.11.2 Ridge equation and variable selection
17.12 Numerical results and comments
17.13 DR_Eluls
17.14 LA_Eluls
17.15 DR_Hhls
17.16 LA_Hhls

Chapter 18  Piecewise Linear Least Squares Approximation
18.1 Introduction
    18.1.1 Preliminaries and notation
18.2 Characteristics of the approximation
18.3 The discrete linear least squares approximation problem
18.4 Description of the algorithms
    18.4.1 Piecewise linear L2 approximation with pre-assigned tolerance
    18.4.2 Piecewise linear L2 approximation with near-balanced L2 norms
18.5 Numerical results and comments
18.6 The updating and downdating techniques
    18.6.1 The updating algorithm
    18.6.2 The downdating algorithm
    18.6.3 Updating and downdating in the L1 norm
18.7 DR_L2pw1
18.8 LA_L2pw1
18.9 DR_L2pw2
18.10 LA_L2pw2

Chapter 19  Solution of Ill-Posed Linear Systems
19.1 Introduction
19.2 Solution of ill-posed linear systems
19.3 Estimation of the free parameter
19.4 Description of the new algorithm
    19.4.1 Steps of the algorithm
19.5 Optimum value of the rank
    19.5.1 The parameters TOLER and EPS
19.6 Use of linear programming techniques
19.7 Numerical results and comments
19.8 DR_Mls
19.9 LA_Mls

PART 5  Solution of Underdetermined Systems of Linear Equations

Chapter 20  L1 Solution of Underdetermined Linear Equations
20.1 Introduction
    20.1.1 Applications of the algorithm
20.2 Linear programming formulation of the problem
    20.2.1 Properties of the matrix of constraints
20.3 Description of the algorithm
20.4 Numerical results and comments
20.5 DR_Fuel
20.6 LA_Fuel

Chapter 21  Bounded and L1 Bounded Solutions of Underdetermined Linear Equations
21.1 Introduction
    21.1.1 Applications of the algorithms
21.2 Linear programming formulation of the two problems
    21.2.1 Properties of the matrix of constraints
21.3 Description of the algorithms
    21.3.1 Occurrence of degeneracy
    21.3.2 Uniqueness of the solution
21.4 Numerical results and comments
21.5 DR_Tmfuel
21.6 LA_Tmfuel

Chapter 22  Chebyshev Solution of Underdetermined Linear Equations
22.1 Introduction
    22.1.1 Applications of the algorithm
22.2 The linear programming problem
    22.2.1 Properties of the matrix of constraints
22.3 Description of the algorithm
    22.3.1 The reduced tableaux
22.4 Numerical results and comments
22.5 DR_Effort
22.6 LA_Effort

Chapter 23  Bounded Least Squares Solution of Underdetermined Linear Equations
23.1 Introduction
    23.1.1 Applications of the algorithm
23.2 Quadratic programming formulation of the problems
23.3 Solution of problem (E0)
23.4 Solution of problem (E)
    23.4.1 Asymmetries in the simplex tableau
    23.4.2 The condensed tableau for problem (E)
    23.4.3 The dual method of solution
    23.4.4 The reduced tableau for problem (E)
    23.4.5 The method of solution of problem (E)
23.5 Numerical results and comments
23.6 DR_Energy
23.7 LA_Energy

Appendices

Appendix A  References
Appendix B  Main Program
Appendix C  Constants, Types and Function Prototypes
Appendix D  Utilities and Common Functions
List of Figures

Figure 1-1: Relative coordinates of pixels inside a 3 by 3 window
Figure 1-2: Pixel labels inside the 3 by 3 window
Figure 2-1: Curve fitting a set of 8 points, including a wild point, using the L1, L2 and L∞ norms
Figure 3-1: Feasibility Region
Figure 3-2: Unbounded Solution
Figure 3-3: Inconsistent Constraints
Figure 4-1: General floating-point format
Figure 6-1: Curve fitting with vertical parabolas of a set of 8 points using L1 approximation and one-sided L1 approximations
Figure 7-1: Curve fitting with vertical parabolas of a set of 8 points using L1 approximation and L1 approximation with bounded variables between –1 and 1
Figure 8-1: L1 polygonal approximation for a waveform. The L1 norm in any segment is ≤ 0.6
Figure 8-2: L1 polygonal approximation for a contour. The L1 norm in any segment is ≤ 0.4
Figure 9-1: Disconnected linear L1 piecewise approximation with vertical parabolas. The L1 residual norm in any segment ≤ 6.2
Figure 9-2: Connected linear L1 piecewise approximation with vertical parabolas. The L1 residual norm in any segment ≤ 6.2
Figure 9-3: Near-balanced residual norm solution. Disconnected linear L1 piecewise approximation with vertical parabolas. Number of segments = 4
Figure 11-1: Curve fitting with vertical parabolas of a set of 8 points using Chebyshev and one-sided Chebyshev approximations
Figure 12-1: Curve fitting a set of 8 points using Chebyshev approximation and Chebyshev approximation with bounded variables between –1 and 1
Figure 13-1: Curve fitting a set of 8 points using restricted Chebyshev approximations with arbitrary ranges
Figure 15-1: Disconnected linear Chebyshev piecewise approximation with vertical parabolas. Chebyshev residual norm in any segment ≤ 1.3
Figure 15-2: Connected linear Chebyshev piecewise approximation with vertical parabolas. Chebyshev residual norm in any segment ≤ 1.3
Figure 15-3: Near-balanced residual norm solution. Disconnected linear Chebyshev piecewise approximation with vertical parabolas. Number of segments = 4
Figure 16-1: Two curves that each separate the patterns of class A from the patterns of class B
Figure 18-1: Disconnected linear L2 piecewise approximation with vertical parabolas. The L2 residual norm in any segment ≤ 3
Figure 18-2: Connected linear L2 piecewise approximation with vertical parabolas. The L2 residual norm in any segment ≤ 3
Figure 18-3: Near-balanced residual norm solution. Disconnected linear L2 piecewise approximation with vertical parabolas. Number of segments = 4
Preface

Discrete linear approximation is one of the most frequently used techniques in all areas of science and engineering. Linear approximation of a continuous function is typically done by digitizing the function, then applying discrete linear approximation to the resulting data. Discrete linear approximation is equivalent to the solution of an overdetermined system of linear equations in an appropriate measure or norm, with or without some constraints. This book describes algorithms for the solution of overdetermined and underdetermined systems of linear equations. Software implementations of the algorithms and test drivers, all written in the programming language C, are provided at the end of each chapter. Also included are piecewise linear approximations to plane curves in the three main norms, as well as solutions of overdetermined linear inequalities and of ill-posed linear systems. It is assumed that the reader has basic knowledge of elementary functional analysis and the programming language C.

All the algorithms in this book are based on linear programming techniques, except the least squares ones. In addition, Chapter 23 uses a quadratic programming technique. Linear programming proved to be a powerful tool in solving linear approximation problems. The solution, if it exists, is obtained in a finite number of iterations, and no conditions are imposed on the coefficient matrix, such as the Haar condition or the full rank condition. The issue of the uniqueness of the solution is determined by methods that use linear programming. Characterization of the solution may also be deduced from the final tableau of the linear programming problem. The algorithms presented in this book are compared with existing algorithms and are found to be the best or among the best of them.

The first author developed and published almost all the algorithms presented in this book while he was with the National Research Council of Canada (NRC), in Ottawa, Canada. While at the NRC, the algorithms were applied successfully in several engineering projects. Since then, the software was converted to FORTRAN 77, and then to C for this book.

The second author is the son of the first author. He has been instrumental in converting the software from FORTRAN 77 to C. He has worked for over 25 years as a software engineer in the high-tech industry in Ottawa, and has spent 4 years teaching courses in software engineering at Carleton University in Ottawa, Canada.

This book may be considered a companion to the books Numerical Recipes in C, the Art of Scientific Computing, second edition, W.H. Press et al., Cambridge University Press, 1992, and A Numerical Library in C for Scientists and Engineers, H.T. Lau, CRC Press, 1994. Except for solving the least squares approximation by the normal equation and by the Singular Value Decomposition methods (Section 15.4 in the former and 3.4.2 in the latter), they lack the subject of numerical linear approximation. Also, the generic routine on linear programming and the simplex method in the former reference (Section 10.8) has little or no application to linear approximation. Standard subroutine packages, such as LINPACK, LAPACK, IMSL and NAG, may contain a few linear approximation routines, but not in C.

This book is divided into 5 main parts. Part 1 includes introductory chapters 1 and 2, and tutorial chapters 3 and 4. Chapter 1 describes some diverse applications of linear approximation. In Chapter 2, the relationship between discrete linear approximation in a certain norm and the solution of overdetermined linear equations in the same norm is established. The L1, the least squares (L2) and the Chebyshev (minimax or L∞) norms are considered. A comparison is made between the approximations in the three norms, using a simple example. An explanation of single- and double-precision computation, tolerance parameters and the programmatic representation of vectors and matrices is also given. The chapter concludes with a study of outliers or odd points in data, how to identify them, and some methods of dealing with them. Chapter 3 gives an overall description of the subject of linear programming and the simplex method. Also, the (ordinary or two-sided) L1 and the (ordinary or two-sided) Chebyshev approximation problems are formulated as linear programming problems. Chapter 4 starts with the familiar subject of vector and matrix norms and some relevant theorems. Then, elementary matrices, which are used to perform elementary operations on a matrix equation, are described. The solution of square linear systems using the Gauss LU factorization method with complete pivoting and the Householder QR factorization method with pivoting follows. A note on the Gauss-Jordan elimination method for an underdetermined system of linear equations is given. This chapter ends with a presentation of rounding error analysis for simple and extended arithmetic operations. Chapter 4 serves as an introduction to Chapter 17 on the pseudo-inverses of matrices and the solution of linear least squares problems.

Part 2 of this book includes chapters 5 through 9. Chapters 5, 6 and 7 present the ordinary L1 approximation problem, the one-sided L1 approximation, and the L1 approximation with bounded variables, respectively. Chapters 8 and 9 present, respectively, the L1 polygonal approximation and the linear L1 piecewise approximation of plane curves.

Part 3 contains chapters 10 through 16. Chapters 10 through 15 present, respectively, the ordinary Chebyshev approximation problem, the one-sided Chebyshev approximation, the Chebyshev approximation with bounded variables, the restricted Chebyshev approximation, the strict Chebyshev approximation and the piecewise Chebyshev approximation of plane curves. Chapter 16 presents the solution of overdetermined linear inequalities and its application to pattern classification, using the one-sided Chebyshev and the one-sided L1 approximation algorithms.

Part 4 contains chapters 17 through 19. Chapter 17 introduces the pseudo-inverses of matrices and the minimal length least squares solution. It then describes the least squares approximation by the Gauss LU factorization method with complete pivoting and by Householder's QR factorization method with pivoting. An introduction to linear spaces and the pseudo-inverses is given next. The interesting subject of multicollinearity, or the ill-conditioning of the coefficient matrix, is presented. This leads to the subject of dealing with multicollinearity via the Principal Components Analysis (PCA), the Partial Least Squares (PLS) method and the Ridge equation technique. Chapter 18 presents the piecewise approximation of plane curves in the L2 norm. Chapter 19 presents the solution of ill-posed systems such as those arising from the discretization of the Fredholm integral equation of the first kind.

Part 5 contains Chapters 20 through 23. They present the solution of underdetermined systems of consistent linear equations subject to different constraints on the elements of the unknown solution vector (not on the residuals, since the residuals are zeros). Chapter 20 constitutes the L1 approximation problem for the elements of the solution vector. Chapter 21 describes (a) the solution of underdetermined linear equations subject to bounds on the elements of the solution vector and (b) the L1 approximation problem for these elements that satisfy bounds on them. Chapter 22 describes the L∞ approximation problem for the elements of the solution vector. Finally, Chapter 23 describes the bounded L2 approximation problem for the elements of the solution vector.

Chapters 5 through 23 include the C functions that implement the linear approximation algorithms, along with sample drivers. Each driver contains a number of test case examples. The results of one or more of the test cases are given in the text. Most of these chapters also include a numerical example solved in detail. To use the software provided in this book, it is not necessary to understand the theory behind each one of the functions. The examples given in each driver act as a guide to their usage. The code was compiled and tested using Microsoft™ Visual C++™ 6.0, Standard Edition. To ensure maximum portability, only the most basic features of the ANSI C standard were used, and no C++ features were employed.

It is hoped that this book will be of benefit to scientists, engineers and university students.
Acknowledgments

The first author wishes to thank his friends and colleagues at the Institute for Informatics of the National Research Council of Canada, particularly Stuart Baxter, Brice Wightman, Frank Farrell, Bert Lewis and Tom Bach, for their unfailing help during his stay with the Institute. From the Institute for Information Technology, he wishes to thank his friend and colleague Tony Kasvand, who got him interested in the fields of image processing and computer vision, to which numerical techniques were applied.

We also wish to sincerely thank Robert B. Stern, Executive Editor at Taylor & Francis Group, Chapman and Hall, and CRC Press, for his support and interest in this book. Our thanks also go to the reviewers for their constructive comments. Last but not least, we thank our families for their love and encouragement.

Warranties

The authors and the publisher of this book do not assume any responsibility for any damage to person or property caused directly or indirectly by the use, misuse or any other application of any of the algorithms contained in this book. We also make no warranties, in any way or form, that the programs contained in this book are free of errors or are consistent or inconsistent with any type of machine.

The contents of this book, including the software on a CD, are subject to copyright laws. The individual owner of this book may copy the contents of the CD to their own personal computer for their own exclusive use, but they should not allow it to be copied by any third parties.
About the authors

Nabih N. Abdelmalek
The first author was born in Egypt in 1929. He received a B.Sc. in electrical engineering in 1951 and a B.Sc. in mathematics in 1954, both from Cairo University, Egypt, and a Ph.D. in mathematical physics in 1958 from Manchester University, England. From 1959 to 1965 he was an associate professor of mathematics in the faculty of engineering, Cairo University, Egypt. From 1965 to 1967 he was a member of scientific staff at Northern Electric (now Nortel Networks) in Ottawa, Canada. From 1967 until his retirement in 1990, he was with the National Research Council of Canada, in Ottawa, where he was a senior research officer. His interest was in the application of numerical analysis techniques to image processing, particularly image restoration, data compression, pattern classification and segmentation of 3-D range images.

William A. Malek
The second author was born in Egypt in 1960. He received a B.Eng. in electrical engineering in 1982 and an M.Eng. in computer engineering in 1984, both from Carleton University in Ottawa, Canada. From 1984 to 1988 he was a lecturer in the Department of Systems and Computer Engineering at Carleton University, teaching undergraduate and graduate courses in software engineering and real-time systems. Since 1982, he has worked in Ottawa as a software engineer for networking, avionics and military applications.
“Every good gift and every perfect gift is from above, and comes down from the Father of lights.” (James 1:17 NKJV)
PART 1
Preliminaries and Tutorials
Chapter 1  Applications of Linear Approximation
Chapter 2  Preliminaries
Chapter 3  Linear Programming and the Simplex Algorithm
Chapter 4  Efficient Solutions of Linear Equations
Chapter 1 Applications of Linear Approximation
1.1 Introduction
In this chapter, we present a number of everyday life problems whose solutions require the application of linear approximation, the subject of this book. The problems presented here occur in the social sciences, economics, industry and digital image processing. In the following, we assume that a mathematical model has been agreed upon, that observed data have been collected constituting the columns of a matrix A, and that the response data constitute a vector b. The solution of the problem, vector x, is the solution vector of the matrix equation

    Ax = b

The n by m, n > m, matrix A consists of n observations in m parameters. Because of measuring errors in the elements of matrix A and in the response n-vector b, a larger number of observations than parameters is collected. Thus the system Ax = b is an overdetermined system of linear equations. Equation Ax = b may also be written as

    x1a1 + x2a2 + … + xmam = b

where a1, a2, …, am are the m columns of matrix A and the (xi) are the elements of the solution m-vector x. In many instances, the mathematical model represented by Ax = b is an approximate or even incorrect one. In other instances, matrix A is badly conditioned and needs to be stabilized; a badly conditioned matrix results in a solution vector x that is inaccurate or even wrong.
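As a concrete illustration of this notation, the following C fragment forms Ax as the column combination x1a1 + x2a2 + … + xmam and then the residual r = Ax – b. It is a minimal sketch only, independent of the library routines presented in later chapters; the storage convention assumed here (matrix A kept as an array of m column vectors) is chosen purely for illustration.

/* Form the residual r = A*x - b, where the n by m matrix A is
 * stored column-wise: a[j] points to the (j+1)-th column, of
 * length n. Illustrative sketch only, not one of the book's
 * LA_ routines. */
void residual(int n, int m, double **a, double *x, double *b, double *r)
{
    int i, j;

    for (i = 0; i < n; i++)
        r[i] = -b[i];               /* start from -b            */
    for (j = 0; j < m; j++)         /* add x_j times column a_j */
        for (i = 0; i < n; i++)
            r[i] += x[j] * a[j][i];
}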
Since Ax = b is an overdetermined system, in most cases it has no exact solution, but it may have an approximate solution. For calculating an approximate solution, usually the residual, or error vector

    r = Ax – b

is minimized in a certain norm. The p-vector norm of the residual r is given by

    ||r||p = [ Σ(i=1 to n) |ri(x)|^p ]^(1/p),   1 ≤ p ≤ ∞
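For the three norms used throughout this book, this formula reduces to a sum of absolute values (p = 1), the square root of a sum of squares (p = 2), and a maximum absolute value (p = ∞). The following C functions, a minimal sketch that assumes the residual vector r has already been formed, compute the three norms; they are illustrative only and are not part of the book's software.

#include <math.h>

/* L1 norm: the sum of |r_i| */
double norm_l1(int n, double *r)
{
    double s = 0.0;
    int i;
    for (i = 0; i < n; i++)
        s += fabs(r[i]);
    return s;
}

/* L2 (least squares) norm: the square root of the sum of r_i^2 */
double norm_l2(int n, double *r)
{
    double s = 0.0;
    int i;
    for (i = 0; i < n; i++)
        s += r[i] * r[i];
    return sqrt(s);
}

/* L-infinity (Chebyshev) norm: the maximum of |r_i| */
double norm_linf(int n, double *r)
{
    double s = 0.0;
    int i;
    for (i = 0; i < n; i++)
        if (fabs(r[i]) > s)
            s = fabs(r[i]);
    return s;
}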
There are three main vector norms, namely for p = 1 (the L1 approximation), p = 2 (the L2 or the least squares approximation) and for p = ∞ (the L∞, the minimax or the Chebyshev approximation). This notion of norms is repeated in Chapter 2. The approximate solution of Ax = b may be achieved as follows:
(a) One may minimize the norm of the residual vector r.
(b) One may also minimize the norm of the residual vector r subject to the condition that all the elements of r are either non-negative, that is, ri ≥ 0, or non-positive, ri ≤ 0, i = 1, 2, …, n. This is known as the linear one-sided approximation problem.
(c) The solution of Ax = b may also be obtained by minimizing the norm of r subject to constraints on the solution vector x. The elements (xi) may be required to be bounded,

    bi ≤ xi ≤ ci, i = 1, 2, …, m

where the (bi) and (ci) are given parameters. This problem is known as the linear approximation with bounded variables. If all the ci are very large positive numbers and all the bi are 0's, then the approximation has a non-negative solution vector x. (The side conditions of items (b) and (c) are illustrated by a short sketch after this list.)
(d) One may also minimize the norm of the residual vector r in the L∞ norm subject to the constraint that the l.h.s. Ax is bounded; that is,

    l ≤ Ax ≤ u

where l and u are given n-vectors. This approximation is known as the restricted Chebyshev approximation. If vector l is 0 and all the elements of vector u are very large numbers, the approximation is known as that with non-negative functions.
(e) Instead of solving an overdetermined system of linear equations, we may have an overdetermined system of linear inequalities of the form

    Ax ≥ b or Ax ≥ 0

where 0 is a zero n-vector. Such systems of linear inequalities have applications to digital pattern classification.
(f) Once more, in some cases, particularly in control theory, equation Ax = b is an underdetermined system; that is, n < m. The residual vector r = 0 and system Ax = b has an infinite number of solutions. In this case, one minimizes the norm of the solution vector x (not the residual vector r, since r = 0), and may set bounds on the elements of x.
In Section 1.2, applications to social sciences and economics are presented. In Section 1.3, we present two applications to industrial problems, and in Section 1.4, applications to digital image processing are presented. For all the problems in Section 1.2, the matrix equations are overdetermined and are solved in the least squares sense. However, they may also be solved in the L1 or the Chebyshev sense. These problems are taken from the published literature.

1.2 Applications to social sciences and economics
Let the response vector be b and the columns of matrix A be (ai), i = 1, 2, …, m. As mentioned before, in all of these applications, the linear model Ax = b is a convenient way to formulate the problem mathematically. In no way is this model assumed to be the right one. Whenever possible, for each problem, comments are provided on the given data and/or on the obtained solution.

Note 1.1
The minimum L2 norm ||r||2 of a given problem is of importance in assessing the quality of the least squares fit. Obviously, if this norm is zero, there is a perfect fit by the approximating curve (surface) for the given data. On the other hand, if ||r||2 is too large, there might be large errors in the measured data that cause this norm to be high, or the model used may not be suitable for the given data and a new model is to be sought.

1.2.1 Systolic blood pressure and age
The data of this problem is given in Kleinbaum et al. ([18], p. 52). It gives the observed systolic blood pressure (SBP) and the age for a sample of n = 30 individuals:

b = The observed systolic blood pressure (SBP)
a1 = 1
a2 = The ages

Equation Ax = b is of the form

x0a1 + x1a2 = b

where 1 is an n-column of 1's and x0 and x1 are the elements of x. The straight line fit reflects the trend that the SBP increases with age. It is observed that one of the given points appears odd relative to the other points. This is known as an outlier. Because outliers can affect the solution parameters x0 and x1, it is important to decide if the outlier should be removed from the data and a new least squares solution calculated. In this problem, the outlier was removed and the new line fit was slightly below the one obtained by using all the data. On the other hand, if this example is solved in the L1 norm instead of the least squares, the fitted straight line would have ignored the odd point and would have interpolated (passed through) two of the given data points (Section 2.3).
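As a small illustration of this fit, the following C sketch computes x0 and x1 for the straight-line model from the closed-form normal equations of simple least squares; the data arrays are placeholders, not the Kleinbaum et al. sample.

/* Fit SBP = x0 + x1*age by least squares (closed form for a line). */
void fit_line(const double *age, const double *sbp, int n,
              double *x0, double *x1)
{
    double sx = 0.0, sy = 0.0, sxx = 0.0, sxy = 0.0;
    int i;
    for (i = 0; i < n; i++) {
        sx  += age[i];
        sy  += sbp[i];
        sxx += age[i] * age[i];
        sxy += age[i] * sbp[i];
    }
    *x1 = (n * sxy - sx * sy) / (n * sxx - sx * sx);  /* slope     */
    *x0 = (sy - *x1 * sx) / n;                        /* intercept */
}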
1.2.2 Annual teacher salaries
The data of this problem is given in Gunst and Mason ([16], p. 221). It gives the average annual salaries of particular teachers in the USA during the 11-year period from 1964-65 to 1974-75. By examining the average salaries vs. time, three different curve fitting methods were used. Here, we consider only the first two, a linear fit and a quadratic fit

x01 + x1t = b and x01 + x1t + x2t² = b

where b is the average salary, a1 = 1, an n-column of 1's, and a2 = (t1, t2, …, t11)T represents the time (years). For the quadratic fit, b, a1 and a2 are the same and a3 = (t1², t2², …, t11²)T.

The quadratic equation gives the better fit, since the residuals are the smallest for almost all the points and are randomly centered around zero. The linear fit is not as good, as it yields larger residuals that are not as randomly distributed around the zero value. Hence, the quadratic fit seems the best. However, it is observed that the quadratic curve is an inverted parabola. It reaches a maximum value and decreases after this value. This is unrealistic, as it implies that the salaries of teachers decrease in the latest years!

1.2.3 Factors affecting survival of island species
The Galapagos, 29 islands off the coast of Ecuador, provide data for studying the factors that influence the survival of different life species. The data of this problem is given in Weisberg ([26], pp. 224, 225).

b = Number of species
a1 = Endemics
a2 = Area of the island in square km
a3 = Elevation of the island in meters
a4 = Distance in km from nearest island
a5 = Distance in km from Santa Cruz
a6 = Area of adjacent island in square km

In this data, one complicating factor is that the elevations of 6 of the islands (elements of a3) were not recorded, so provisions must be made for this; delete these 6 islands from the data or substitute a plausible value for the missing data.
1.2.4 Factors affecting fuel consumption
For a whole year, the fuel consumption in millions of gallons was measured in 48 states in the United States as a function of fuel tax, average income and other factors. The data of this problem is given in Weisberg ([26], pp. 35, 36). The model Ax = b is as follows:

b = Fuel consumption in millions of gallons
a1 = 1, an n-column of 1's
a2 = Fuel tax in cents per gallon
a3 = Percentage of population with drivers licenses
a4 = Average income in thousands of dollars
a5 = Length in miles of the highway network

In this problem all the variables have been scaled to be roughly of the same magnitude. This scaling does not affect the relationship between the measured values. For example, it does not matter if the elements of column a3 are expressed as fractions or as percentages.

1.2.5 Examining factors affecting the mortality rate
In 1973, a study was conducted to examine the effect of air pollution, environmental and other factors, 15 of them, on the death rate for sixty (60) metropolitan areas in the United States. The data is given in Gunst and Mason ([16], Appendix B, pp. 368-372). In this problem

b = Age adjusted mortality rate, or deaths per 100,000 population
a1 = Average annual precipitation in inches
a2 = Average January temperature in degrees F
a3 = Average July temperature in degrees F
a4 = Percentage of 1960 population 65 years and older
a5 = Population per household, 1960
a6 = Median school years completed to those over 22 years
a7 = Percentage of housing units that are sound
a8 = Population per square mile in urbanized area, 1960
a9 = Percentage of 1960 urbanized non-white population
a10 = Percentage employed in white collar in 1960
a11 = Percentage of families with annual income < $3000
a12 = Relative pollution potential of Hydrocarbons
a13 = Relative pollution potential of Oxides of Nitrogen
a14 = Relative pollution potential of Sulfur Dioxide
a15 = Percentage relative humidity, annual average at 1 pm
One purpose of this study was to determine whether the pollution variables defined by a12, a13 and a14 were influential in the results, once the other factors (the other ai) were included.

1.2.6 Effects of forecasting
This example displays a case of mild multicollinearity (ill-conditioning of matrix A). For the subject of multicollinearity, see Section 17.8. The problem discusses the factors of domestic production, stock formation and domestic consumption of the imports, all measured in millions of French francs, for the years 1949 through 1966. The data for this problem is given in Chatterjee and Price ([14], p. 152).

b = Imports
a1 = 1, an n-column of 1's
a2 = Domestic production
a3 = Stock formation
a4 = Domestic consumption

Statistically, this model is not well specified. The solution of this problem gives a negative value to x1, the coefficient of a2. The reason is that the problem has collinear data. In other words, matrix A is badly conditioned. It means that one or more of the columns of matrix A are mildly dependent on the others. This example is solved again as Example 17.5 in Section 17.11, using the Ridge equation.

1.2.7 Factors affecting gross national products
This example displays a case of strong multicollinearity. The gross national products for 49 countries were presented as a function of six socioeconomic factors. The data for this problem is given in Gunst and Mason ([16], Appendix A, p. 358).

b = Gross national products
a1 = Infant death rate
a2 = Physician/population ratio
a3 = Population density
a4 = Density as a function of agricultural land area
a5 = Literacy measure
a6 = An index of higher education

The results of this data indicate a strong multicollinearity (dependence) between the population density and density as a function of agricultural land area (columns a3 and a4). Mason and Gunst ([16], pp. 253-259) solved this example and gave the results for a few options, of which we report the following:
(1) They obtained the least squares solution for the full data by ignoring the strong multicollinearity (dependence) between a3 and a4; as a result, matrix (ATA) is nearly singular.
(2) They used a Ridge technique (Section 17.11) to overcome the near singularity of matrix (ATA); reasonable results were obtained.
(3) Two points that influenced the outcome of the results were deleted. These two points were outliers (odd points) and their influences masked one another in one of the statistical tests. When these two points were deleted from the data, satisfactory results for certain estimates were obtained. See Mason and Gunst [20].

Once more, solving this example in the L1 sense instead of the least squares sense would resolve the issue of the odd points.

1.3 Applications to industry

1.3.1 Windmill generating electricity
A research engineer was investigating the use of a windmill to generate DC (direct current) electricity. The data is the DC output vs. wind velocity and is given in Montgomery and Peck ([21], p. 92). The suggested model was

(1.3.1) x11 + x2V = DC

where DC is the current output, 1 is an n-column of 1's and V is the wind velocity.

Inspection of the plot of DC (the y-axis) vs. V (the x-axis) suggests that as the velocity of the wind increases, the DC output approaches an upper limit, and the linear model of (1.3.1) is not suitable. A quadratic model (inverted parabola) was then suggested, namely

x01 + x1V + x2V² = DC

However, this model is also not suitable, because the inverted parabola will eventually reach a maximum point and then bend downwards. A third model was suggested, which is

x01 + x1V' = DC

Here, V' = (1/V). A plot of DC (y-axis) vs. V' (x-axis) is a straight line with negative slope, and it seems more appropriate.

1.3.2 A chemical process
This example displays a case of severe multicollinearity. It concerns the percentage of conversion P of n-Heptane to acetylene and is given in Montgomery and Peck ([21], pp. 311, 312). There are three main factors: reactor temperature T, ratio of H2 to n-Heptane H, and contact time C. Data is collected for 16 observations. The given model is quadratic, as follows

(1.3.2) P = x01 + x1T + x2H + x3C + x4TH + x5TC + x6HC + x7T² + x8H² + x9C²

That is, in (1.3.2), b = P, a1 = 1, a2 = T, a3 = H, etc. TH means each element of T is multiplied by the corresponding element of H. Similarly, T² means each element of T is squared, etc.

In this example, matrix A is highly ill-conditioned, or one says there is severe multicollinearity, due to strong near linear dependency in the data. Column a2 = T is highly interrelated with a4 = C: the longer the time of operation, the higher the temperature. There are also quadratic terms and cross product terms. Model (1.3.2) was replaced by 5 other models, which are 5 subsets of (1.3.2). We cite two of them here: (a) a linear model, where the first 4 columns of A are kept and the other 6 columns are eliminated, and (b) a model in which all the columns with C are eliminated and the equation is a full quadratic in T and H only. The first model still maintains multicollinearity. The second model appears more satisfactory, since the causes of ill-conditioning of matrix A are eliminated. This problem is studied further in Example 17.6 of Section 17.11, using the Ridge equation.
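To make the construction of the coefficient matrix concrete, the following C sketch (our own illustration, not from the book's library) fills one row of A for the full model (1.3.2) from a single observation (T, H, C).

/* Column order follows (1.3.2): 1, T, H, C, TH, TC, HC, T^2, H^2, C^2. */
void quadratic_row(double T, double H, double C, double row[10])
{
    row[0] = 1.0;            /* constant column a1 = 1 */
    row[1] = T;  row[2] = H;  row[3] = C;
    row[4] = T * H;          /* cross-product terms    */
    row[5] = T * C;
    row[6] = H * C;
    row[7] = T * T;          /* pure quadratic terms   */
    row[8] = H * H;
    row[9] = C * C;
}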
1.4 Applications to digital images

1.4.1 Smoothing of random noise in digital images
Digital images are usually corrupted by random noise. A gray level image may be enhanced by a smoothing operation. Each pixel in the image is replaced by the average of the pixel values of all the pixels inside a square window, or mask, centered at the pixel. Let the window be of size L by L, where L is an odd integer. We show here that the smoothing operation is equivalent to fitting a plane in the least squares sense to all the pixels in the window. The ordinate of the fitting plane at the centre of the window is the new pixel value in the enhanced image, which equals the average of the pixel values inside the window. This idea was given first by Graham [15] and was rediscovered later by Haralick and Watson [17]. We present this idea here and elaborate on it.

Let N = L² be the number of pixels inside the window. Let us take N = 9, i.e., L = 3. Let the coordinates of the pixel at hand be (0, 0). Let the pixels around the centre have coordinates as shown in Figure 1-1, and let these pixels be labeled 1 to 9 as shown in Figure 1-2.

     → j
↓ i   (–1, –1)  (–1, 0)  (–1, 1)
      (0, –1)   (0, 0)   (0, 1)
      (1, –1)   (1, 0)   (1, 1)

Figure 1-1: Relative coordinates of pixels inside a 3 by 3 window

1  2  3
4  5  6
7  8  9

Figure 1-2: Pixel labels inside the 3 by 3 window
Let the pixel values inside the window be given by zk, k = 1, 2, …, 9, and let the fitting plane for the pixels be

(1.4.1) x11 + x2i + x3j = z

where 1 is a 9-column of 1's, i and j are the coordinates of the pixels, and x1, x2 and x3 are to be calculated. By substituting the pixel coordinates from Figures 1-1 and 1-2 into (1.4.1), we get the equation

(1.4.2) Ax = z

which is

    | 1  –1  –1 |             | z1 |
    | 1  –1   0 |             | z2 |
    | 1  –1   1 |   | x1 |    | z3 |
    | 1   0  –1 |   | x2 |    | z4 |
    | 1   0   0 |   | x3 | =  | z5 |
    | 1   0   1 |             | z6 |
    | 1   1  –1 |             | z7 |
    | 1   1   0 |             | z8 |
    | 1   1   1 |             | z9 |

The least squares solution of this equation is obtained by solving the so-called normal equation of (1.4.2), which is obtained by pre-multiplying equation (1.4.2) by AT, the transpose of A (equation (17.2.3)), namely ATAx = ATz, which gives

    | 9  0  0 |   | x1 |    | Z1 |
    | 0  6  0 |   | x2 | =  | Z2 |
    | 0  0  6 |   | x3 |    | Z3 |

where
Z1 = (z1 + z2 + … + z9)
Z2 = (–z1 – z2 – z3 + z7 + z8 + z9)
Z3 = (–z1 + z3 – z4 + z6 – z7 + z9)
From equation (1.4.1), x1 is the value at the centre of the window, (0, 0), which is

x1 = (1/9) ∑k=1..9 zk

or, for any other size of window N = L²,

x1 = (1/N) ∑k=1..N zk
The obtained result proves the argument concerning smoothing of random noise in digital images. Smoothing of random noise by applying non-square masks, such as pentagonal and hexagonal windows, was suggested by Nagao and Matsuyama [22]. Such operations are explained in a manner similar to the above in [3].
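The following C sketch illustrates the smoothing operation just derived; it is our own minimal illustration, with the image stored row-wise in a plain array and border pixels simply copied.

/* Replace each interior pixel by the mean of the L by L window
 * centred on it (L odd); by the argument above this is the least
 * squares plane's value at the window centre. */
void mean_smooth(const double *in, double *out, int rows, int cols, int L)
{
    int h = L / 2, i, j, di, dj;   /* h is the window half-width */
    for (i = 0; i < rows; i++)
        for (j = 0; j < cols; j++) {
            if (i < h || i >= rows - h || j < h || j >= cols - h) {
                out[i * cols + j] = in[i * cols + j];   /* border copy */
            } else {
                double sum = 0.0;
                for (di = -h; di <= h; di++)
                    for (dj = -h; dj <= h; dj++)
                        sum += in[(i + di) * cols + (j + dj)];
                out[i * cols + j] = sum / (double)(L * L);
            }
        }
}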
1.4.2 Filtering of impulse noise in digital images
Another type of noise that contaminates a gray level digital image is the impulse noise known as salt and/or pepper noise. The noisy image, in this case, has white and/or black dots infesting the whole image. Filtering the impulse noise may be done by applying a simple technique using L1 approximation.

The salt and pepper noise dots are identified as follows. Let the gray level intensity of the image be measured from 0 to 256. A white (salt) noise pixel has the intensity 256 and a black (pepper) noise pixel has the intensity zero.

The L1 approximation has an interesting property: the L1 approximating curve to a given point set interpolates (passes through) a number of points of the given point set. The number of interpolated points equals at least the number of the unknown parameters of the approximating curve. As a result, the L1 approximation of a given point set that contains an odd (wild) point almost entirely ignores the odd point. See Figure 2-1. Each of the salt and pepper points in the gray image is an odd or wild point.

Let us have a set of three points in the x-y plane of which the middle point is a wild point, and let us fit the straight line y = a1 + a2x in the L1 norm to these three points. According to the property described above, the best fit in the L1 norm would be the straight line that interpolates the first and the third points. In this case, the approximated y value on the straight line at the middle (wild) point is (y1 + y3)/2, where y1 and y3 are the y-coordinates of the first and third points respectively.

This idea is now utilized to filter the impulse noise corrupting a digital gray level image. Let the image in the x-y plane be of size N by M pixels. We first scan the digital noisy image horizontally, one row at a time, and identify the impulse noise pixels (those that have gray intensity zero or 256). We scan the image from row 1 to row N, starting from the second pixel to the last pixel but one in each row. A horizontal window of size 1 by 3 is centered at each noise pixel. Let the three pixel values inside the window be denoted in succession by z1, z2 and z3. An L1 fit of the three pixel values would interpolate pixels 1 and 3 of the window. The approximated pixel value of the middle (noise) pixel would be z2 = (z1 + z3)/2. This filters the impulse noise pixel.

At the end of this process, the noisy image is only partially filtered, as the impulse noise pixels may not all be removed; any two adjacent white pixels or any two adjacent black pixels in any row would not be filtered. We then reapply the same process to the partially filtered image in a column-wise manner. At the end of the second process, most of the impulse noise pixels are filtered. Experimental results [3] show that this technique is powerful in filtering impulse noise.
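A minimal C sketch of one horizontal pass of this filter follows; it is our own illustration, assuming the intensities 0 and 256 mark pepper and salt pixels as described above. The column-wise second pass is analogous.

/* One row-wise pass: a flagged pixel is replaced by the average of
 * its left and right neighbours, i.e., the L1 line through them. */
void filter_impulse_rows(double *img, int rows, int cols,
                         double pepper, double salt)
{
    int i, j;
    for (i = 0; i < rows; i++)
        for (j = 1; j < cols - 1; j++) {   /* 2nd to last-but-one pixel */
            double z2 = img[i * cols + j];
            if (z2 == pepper || z2 == salt) {
                double z1 = img[i * cols + j - 1];
                double z3 = img[i * cols + j + 1];
                img[i * cols + j] = (z1 + z3) / 2.0;
            }
        }
}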
1.4.3 Applications to pattern classification
We solve here overdetermined systems of linear inequalities. This problem is discussed in detail in Chapter 16; we describe it here for completeness. Given is a class A of s patterns and a class B of t patterns, where each pattern is a point in an m-dimensional Euclidean space. Let n = (s + t); usually n >> m. It is required to find a surface in the m-dimensional space such that all points of class A lie on one side of this surface and all points of class B lie on the other side. Let the equation of this separating surface be

a1φ1(x) + a2φ2(x) + … + am+1φm+1(x) = 0

where a = (a1, a2, …, am+1)T is a vector of parameters, or weights, to be calculated and {φ1(x), φ2(x), …, φm+1(x)} is a set of linearly independent functions to be specified according to the geometry of the problem. Following Tou and Gonzalez ([25], pp. 40-41, 48-49), a decision function d(x) of the form

d(x) = a1φ1(x) + a2φ2(x) + … + am+1φm+1(x)

is established. This function has the property that

(1.4.3a) d(xi) < 0, xi ∈ A
(1.4.3b) d(xi) > 0, xi ∈ B

By multiplying the first set of inequalities by a negative sign, we get

(1.4.3c) –d(xi) > 0, xi ∈ A

The problem may now be posed as follows. Using (1.4.3c) and (1.4.3b), let C be an n by (m + 1) matrix whose ith row Ci is

Ci = (–φ1(xi), –φ2(xi), …, –φm(xi), –φm+1(xi)), 1 ≤ i ≤ s

and

Ci = (φ1(xi), φ2(xi), …, φm(xi), φm+1(xi)), (s + 1) ≤ i ≤ n

It is required to calculate the elements of the (m + 1)-vector a that satisfies the inequalities

Ca > 0

We now have a system of linear inequalities, which is solved by linear one-sided approximation algorithms [2]. See Chapter 16.
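As an illustration of this construction, the following C sketch (ours, assuming a linear decision function φk(x) = xk, k = 1, …, m, and φm+1(x) = 1) assembles matrix C from the two pattern classes.

/* A is s by m and B is t by m, stored row-wise; C is (s + t) by
 * (m + 1). Rows built from class A patterns are negated, as in
 * (1.4.3c). */
void build_C(const double *A, int s, const double *B, int t,
             int m, double *C)
{
    int i, k, n = s + t;
    for (i = 0; i < n; i++) {
        const double *p = (i < s) ? A + i * m : B + (i - s) * m;
        double sign = (i < s) ? -1.0 : 1.0;   /* negate class A rows */
        for (k = 0; k < m; k++)
            C[i * (m + 1) + k] = sign * p[k];
        C[i * (m + 1) + m] = sign;            /* the phi_{m+1} = 1 column */
    }
}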
1.4.4 Restoring images with missing high-frequency components
We solve here a constrained underdetermined system of linear equations. Let the algebraic image restoration problem in the one-dimensional case be defined by the system of linear equations

(1.4.4) Ax = b

where A is the discrete Fourier transform low-pass filter matrix and b is the observed image vector. Matrix A is an N by L matrix, N ≥ L. Equation (1.4.4) is an inconsistent equation and thus has no exact solution, as vector b is often contaminated with noise. By inconsistent, we mean rank(A|b) > rank(A). It is made consistent by pre-multiplying it by AT, the transpose of A. We get

(1.4.5) ATAx = ATb

This equation may be written as Cx = c, where C = (ATA) and c = (ATb). If matrix A is of rank L, equation (1.4.5) is the normal equation and its solution is the least squares solution of Ax = b, given by x = (ATA)–1ATb (Chapter 17). However, in general, matrix A is rank deficient, of rank k, k ≤ L. As a result, matrix (ATA) is singular and has no inverse. In other words, system (1.4.5), or equation Cx = c, has k ≤ L linearly independent equations and (L – k) redundant equations. Let these k linearly independent equations be given by (see also equation (1.4.13) in Section 1.4.7)

(1.4.5a) C(k)x = c(k)

which is an underdetermined system of k equations in L unknowns. One may obtain equation (1.4.5a) from equation Cx = c by applying Gauss-Jordan elimination steps (Section 4.6) to matrix C and its updates with partial pivoting. Since C = (ATA) is symmetric positive semi-definite, one may pivot on the diagonal elements of matrix C and its updates and exchange (permute) the equations of Cx = c and the columns of C. After k ≤ L steps, the pivot elements become small enough and are replaced by 0's. We thus get the k linearly independent equations (1.4.5a).

The physical problem identifies the solution vector x of (1.4.5a) as having non-negative and bounded elements

(1.4.6) 0 ≤ xi ≤ xmax, i = 1, 2, …, L

The underdetermined system (1.4.5a) may now be solved by minimizing its solution vector x in the L1 norm subject to the constraints (1.4.6). This problem is known as the minimum fuel problem for discrete linear admissible control systems, and it is solved in [9] by the algorithm described in Chapter 21. One may also solve equation (1.4.5a) subject to the constraints (1.4.6) in the least squares sense (L2 norm) [1]. This problem is known as the minimum energy problem for discrete linear admissible control systems, and it is solved by the algorithm described in Chapter 23. See Section 1.4.7. It was found, however, that solving equation (1.4.5a) in the L1 norm subject to the constraints (1.4.6) is much faster than obtaining the solution in the L2 norm [10].

1.4.5 De-blurring digital images using the Ridge equation
In the linear model for image restoration of digital images, the problem is described by the Fredholm integral equation of the first kind (Chapter 19). The discretization of this equation gives a system of linear equations of the form

(1.4.7) [H]f = g

where g is a stacked real n-vector representing the known or given degraded (blurred) image and f is a stacked real m-vector representing the unknown or un-degraded image. [H] is an n by m real matrix. If the degraded image g is represented by an I by J matrix, n = I × J. Also, if the unknown un-degraded image f is represented by a K by L matrix, m = K × L. Without loss of generality, we assume that n ≥ m.

It is common to solve equation (1.4.7) in the least squares sense. However, this equation is ill-posed in the sense that a small error in vector g may cause a large error in the solution vector f. A successful technique for overcoming the ill-posedness of equation (1.4.7) is to regularize, or dampen, its least squares solution. This method is also known as the Ridge technique (Section 17.11). Variations of the Ridge equation were given by some authors, such as Phillips [23]. The dampened least squares solution to system (1.4.7) is obtained from the normal equation

(1.4.8) ([H]T[H] + ε[I])f = [H]Tg

where [H]T is the transpose of [H] and [I] is an m-unit matrix. The parameter ε, 0 ≤ ε ≤ 1, is known as the regularization parameter. Adding the term ε[I] in the above equation results in adding the positive quantity ε to each of the diagonal elements of [H]T[H], which are themselves non-negative and real. It also means adding the positive quantity ε to each of the eigenvalues of [H]T[H]. Hence, assuming that the matrix on the l.h.s. of (1.4.8) is non-singular, an approximate solution to (1.4.8) is given by

f = ([H]T[H] + ε[I])–1[H]Tg

The parameter ε is increased or decreased, and a new solution is calculated each time. This is usually done a few times until a visually acceptable solution is obtained. The cost of these repeated solutions in terms of computation time is prohibitive if the problem is to be solved from scratch each time. However, in this problem, the regularization parameter for the best or near-best solution may be obtained by an inverse interpolation method. This is explained in detail in [8]. The inverse interpolation method is described by Ralston ([24], p. 657).
As in the previous example, in the linear model for image restoration of digital images, the problem is described by the Fredholm integral equation of the first kind. The discretization of this equation gives a system of linear equations of the form (1.4.9)
[H]f = g
The definitions and the dimensions of matrix [H] and vectors g and f are exactly as are given at the beginning of the previous example. We consider here the case where matrix [H] is the discretization of the so-called space-invariant point-spread function (SIPSF). Equation (1.4.9) is ill-posed in the sense that small changes in vector g result in large changes in the solution vector f. There are two main approaches for obtaining a numerically stable and physically acceptable solution to this equation, using direct (non-iterative) methods. The first approach is demonstrated by the Ridge technique described in Section 1.4.5. The second approach is the technique of a truncated singular value decomposition (SVD – Chapter 17), or truncated eigensystem of matrix [H]. A major drawback to using the SVD expansion is the high cost in terms of the number of arithmetic operations of computing the © 2008 by Taylor & Francis Group, LLC
20
Numerical Linear Approximation in C
singular value system, especially for large size matrices. However, in this example, an efficient method is used since matrix [H] has a structure similar to that of the so-called circulant matrix. Circulant matrices have special properties [13]. Hence, [H] may be replaced by its approximate circulant matrix. As a result, the border regions are the only regions in the unknown image that are affected. The obtained system of linear equations may be solved efficiently by using the Fast Fourier Transform (FFT) techniques. See for example, Andrews and Hunt ([11], Chapter 8). Matrix [H] in (1.4.9) should be a square and symmetric one, so as to have real eigenvalues. If [H] is not square and symmetric, we pre-multiply (1.4.9) by [H]T, and get a square symmetric coefficient matrix of the equation [H]T[H]f = [H]Tg. Without loss of generality, we assume that the coefficient matrix in (1.4.9) is m by m and symmetric. We now replace matrix [H] in (1.4.9) by its approximate circulant matrix, denoted by [Hc]. We get [Hc]f = g Once more, following the approach of Baker et al. [12], matrix [Hc] may be approximated by another circulant matrix of smaller rank r. This is done by retaining the largest r diagonal elements in absolute value, in the eigensystem of [Hc], r ≤ m, and replacing the remaining (m – r) diagonal elements by 0’s. Hence, this is called the truncated eigensystem technique. A method for finding out the optimum value of the rank r of matrix [Hc] is described in detail in [7]. Also in [7], analysis is provided for the case when the point-spread function is spatially-separable. That is, matrix [Hc] may be written in the form [Hc] = [A]⊗[B], where ⊗ is the Kronecker product operator. We note here that some experimentation was done with a truncated LU factorization for restoring blurred images [5]. However, results were not as successful as with the truncated eigensystem technique. 1.4.7
De-blurring images using quadratic programming
As in the previous two examples, in the linear model for image restoration of digital images, the problem is described by the
© 2008 by Taylor & Francis Group, LLC
Chapter 1: Applications of Linear Approximation
21
Fredholm integral equation of the first kind. The discretization of this equation gives a system of linear equations of the form (1.4.10)
[H]f = g
The definitions and the dimensions of matrix [H] and vectors g and f are exactly as those given in the previous two examples. However, in this example, for a realistic solution, equation (1.4.10) is solved under the conditions that the elements of vector f satisfy (1.4.11)
0 ≤ fj ≤ fmax
where fmax is a specified pixel value, usually equal to 256. Equation (1.4.10) is ill-posed. As in the previous example, the ill-posedness of equation (1.4.10) is dealt with by one of two ways. The first is by a dampening technique such as that demonstrated by the Ridge technique described in Section 1.4.5. The second approach is by using a rank reduction technique to matrix [H] or equivalently to matrix [H]T[H]. One way of doing this is to use a truncated eigensystem of matrix [H], as demonstrated by the previous example. In this method, we apply another rank reduction technique to matrix [H]T[H]. That is besides taking into account the physical conditions (1.4.11). To start, convert system (1.4.10) to a consistent system of linear equations by pre-multiplying by [H]T. We get (1.4.12)
[H]T[H]f = [H]Tg
Let [D] = [H]T[H] and b = [H]Tg. Then (1.4.12) becomes (1.4.13)
[D]f = b
Assume the m by m matrix [H] in (1.4.12) is of rank r. Then system (1.4.13) has r linearly independent equations and (m – r) dependent (redundant) equations. Assume that the equations in (1.4.13) are permuted such that the first r equations are linearly independent and the last (m – r) are the linearly dependent equations. Let the first r equations be given by (1.4.14)
[C]f = p
where [C] is an r by m matrix r ≤ m, and p is an r-vector. Equation (1.4.14) is a consistent underdetermined system of linear equations and thus the residual r = [C]f – p = 0 and it has an infinite number of © 2008 by Taylor & Francis Group, LLC
22
Numerical Linear Approximation in C
solutions (Chapter 4). In order to obtain a unique least squares solution, the problem is formulated as find f that minimizes (f, f), subject to the condition [C]f – p = 0. This is known as the minimal length least squares solution of (1.4.14) (Theorem 17.3). Let us now take condition (1.4.11) into account. Let ai = fi/fmax, i = 1, 2, …, m and dj = pj/fmax, j = 1, 2, …, r where ai and dj are the elements of vectors a and d respectively. This problem is now formulated as a quadratic programming problem with bounded variables, as follows minimize aTa subject to 0 ≤ ai ≤ 1, i = 1, 2, …, m and [C]a = d As in Section 1.4.4, this problem is also known as the Minimum energy problem for discrete linear admissible control systems (Chapter 23). In the current example, for estimating the rank r of system [C]a =d, which gives a best or near-best solution, we may use a special technique, based on the knowledge of the unbiased estimate of the variance and the mean of the noise in the blurred image. The detail of this method is given in [6]. The above cited applications are a small fraction of those in the field of linear approximation. The author also applied linear approximation to segmentation of 3-D range images [4, 19] and similar problems.
References

1. Abdelmalek, N.N., Restoration of images with missing high-frequency components using quadratic programming, Applied Optics, 22(1983)2182-2188.
2. Abdelmalek, N.N., Linear one-sided approximation algorithms for the solution of overdetermined systems of linear inequalities, International Journal of Systems Science, 15(1984)1-8.
3. Abdelmalek, N.N., Noise filtering in digital images and approximation theory, Pattern Recognition, 19(1986)417-424.
4. Abdelmalek, N.N., Heuristic procedure for segmentation of 3-D range images, International Journal of Systems Science, 21(1990)225-239.
5. Abdelmalek, N.N. and Kasvand, T., Image restoration by Gauss LU decomposition, Applied Optics, 18(1979)1684-1686.
6. Abdelmalek, N.N. and Kasvand, T., Digital image restoration using quadratic programming, Applied Optics, 19(1980)3407-3415.
7. Abdelmalek, N.N., Kasvand, T. and Croteau, J.P., Image restoration for space invariant point-spread functions, Applied Optics, 19(1980)1184-1189.
8. Abdelmalek, N.N., Kasvand, T., Olmstead, J. and Tremblay, M.M., Direct algorithm for digital image restoration, Applied Optics, 20(1981)4227-4233.
9. Abdelmalek, N.N. and Otsu, N., Restoration of images with missing high-frequency components by minimizing the L1 norm of the solution vector, Applied Optics, 24(1985)1415-1420.
10. Abdelmalek, N.N. and Otsu, N., Speed comparison among methods for restoring signals with missing high-frequency components using two different low-pass-filter matrix dimensions, Optics Letters, 10(1985)372-374.
11. Andrews, H.C. and Hunt, B.R., Digital Image Restoration, Prentice-Hall, Englewood Cliffs, NJ, 1977.
12. Baker, C.T.H., Fox, L., Mayers, D.F. and Wright, K., Numerical solution of Fredholm integral equations of the first kind, Computer Journal, 7(1964)141-148.
13. Bellman, R., Introduction to Matrix Analysis, McGraw-Hill, New York, 1970.
14. Chatterjee, S. and Price, B., Regression Analysis by Example, John Wiley & Sons, New York, 1977.
15. Graham, R.E., Snow removal – A noise-stripping process for picture signals, IEEE Transactions on Information Theory, IT-8(1962)129-144.
16. Gunst, R.F. and Mason, R.L., Regression Analysis and its Application: A Data Oriented Approach, Marcel Dekker, Inc., New York, 1980.
17. Haralick, M.R. and Watson, L., A facet model for image data, Computer Graphics and Image Processing, 15(1981)113-129.
18. Kleinbaum, D.G., Kupper, L.L. and Muller, K.E., Applied Regression Analysis and Other Multivariate Methods, Second Edition, PWS-Kent Publishing Company, Boston, 1988.
19. Kurita, T. and Abdelmalek, N.N., An edge based approach for the segmentation of 3-D range images of small industrial-like objects, International Journal of Systems Science, 23(1992)1449-1461.
20. Mason, R.L. and Gunst, R.F., Outlier-induced collinearities, Technometrics, 27(1985)401-407.
21. Montgomery, D.C. and Peck, E.A., Introduction to Linear Regression Analysis, John Wiley & Sons, New York, 1992.
22. Nagao, M. and Matsuyama, T., Edge preserving smoothing, Computer Graphics and Image Processing, 9(1979)394-407.
23. Phillips, D.L., A technique for the numerical solution of certain integral equations, Journal of ACM, 9(1962)84-97.
24. Ralston, A., A First Course in Numerical Analysis, McGraw-Hill, New York, 1965.
25. Tou, J.L. and Gonzalez, R.C., Pattern Recognition Principles, Addison-Wesley, Reading, MA, 1974.
26. Weisberg, S., Applied Linear Regression, Second Edition, John Wiley & Sons, New York, 1985.
Chapter 2 Preliminaries
2.1 Introduction
Each algorithm described in this book is implemented by a C function that typically has several child functions to perform sub-tasks in the computation. Every algorithm is accompanied by a driver function that provides examples of how to use the algorithm. All code is provided at the end of each chapter. As a naming convention, algorithm and child function names are prefixed with "LA_" (for Linear Approximation) and driver function names are prefixed with "DR_". Source file naming follows a similar convention. For example, the linear L1 approximation algorithm (Chapter 5) is implemented by LA_L1() and its child functions, all of which are in the source file LA_L1.c. Similarly, DR_L1() is in the source file DR_L1.c. For ease of use, the header file hierarchy is kept to a minimum. An application that exercises these algorithms must include the header file LA_Prototypes.h to access all the algorithms.

In this book, we adopt vector-matrix notation and use bold letters for vectors and matrices. In this chapter, we discuss the following issues:
(1) To start with, we show that the discrete linear approximation in a certain residual (error) norm is equivalent to the solution of an overdetermined system of linear equations in the same norm [1, 2].
(2) A comparison is made between approximation in the 3 main residual norms, the L1, the L2 and the L∞, using a simple practical example. We also describe some main properties concerning the L1 and the L∞ approximations.
(3) In dealing with approximation problems using digital computers with finite word length, we have to specify tolerance parameters to account for round-off error. The values of the tolerance parameters are set according to the word lengths of single- and double-precision numbers.
(4) The C implementation of vectors and matrices in this book facilitates array indexing from 1 to n, instead of from 0 to (n – 1), the latter being the convention in C. This allows indexing, such as that found in "for" loops ("DO" loops in FORTRAN), to match the mathematical convention used to describe the algorithms. The algorithms dynamically allocate vectors and matrices according to the size of the data passed to them, then relinquish that memory upon termination.
(5) Outliers, spikes, wild or odd points in given data are identified, and some methods of dealing with them are described.

2.2 Discrete linear approximation and solution of overdetermined linear equations
To clarify the relationship between discrete linear approximation and the solution of overdetermined systems of linear equations, consider the following example. Let us have the set of 8 points in the x-y plane: (1, 2), (2, 2.5), (3, 2), (4, 6.5), (5, 3.5), (6, 4.5), (7, 6), (8, 7). It is required to approximate this discrete set of points by the vertical parabola

(2.2.1) y = a1 + a2x + a3x²

where a1, a2 and a3 are unknowns to be calculated. If we substitute the 8 given points in equation (2.2.1), we get
(2.2.2)
a1 + a2 + a3 = 2
a1 + 2a2 + 4a3 = 2.5
a1 + 3a2 + 9a3 = 2
a1 + 4a2 + 16a3 = 6.5
a1 + 5a2 + 25a3 = 3.5
a1 + 6a2 + 36a3 = 4.5
a1 + 7a2 + 49a3 = 6
a1 + 8a2 + 64a3 = 7
In vector-matrix form, the set of equations in (2.2.2) is written as

(2.2.3) Ca = f

where C is the matrix on the l.h.s. of (2.2.2), a is the solution vector and f is the vector on the right hand side (r.h.s.); that is,

    | 1  1   1 |             |  2  |
    | 1  2   4 |   | a1 |    | 2.5 |
    | 1  3   9 |   | a2 |    |  2  |
    | 1  4  16 |   | a3 | =  | 6.5 |
    | 1  5  25 |             | 3.5 |
    | 1  6  36 |             | 4.5 |
    | 1  7  49 |             |  6  |
    | 1  8  64 |             |  7  |
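The mechanics of setting up this small system in C are illustrated below; the trial coefficient vector a is arbitrary and serves only to show how the residuals are formed. The algorithms of later chapters compute the optimal a in each norm.

#include <stdio.h>

int main(void)
{
    double x[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    double f[8] = {2, 2.5, 2, 6.5, 3.5, 4.5, 6, 7};
    double C[8][3], a[3] = {1.0, 0.5, 0.05};   /* trial values only */
    double r[8];
    int i;

    for (i = 0; i < 8; i++) {
        C[i][0] = 1.0;                /* column of 1's        */
        C[i][1] = x[i];               /* column of xi         */
        C[i][2] = x[i] * x[i];        /* column of xi squared */
        r[i] = C[i][0]*a[0] + C[i][1]*a[1] + C[i][2]*a[2] - f[i];
        printf("r(%d) = %g\n", i + 1, r[i]);
    }
    return 0;
}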
This set of 8 equations in the 3 unknowns (a1, a2, a3) has no exact solution and can only be solved approximately. The approximate solution is done with respect to a certain measure, or norm, of the error vector. The error, or residual vector, is the difference between the l.h.s. and the r.h.s. in (2.2.3)

r = Ca – f

As indicated in Chapter 1, the (approximate) solution vector a of problem (2.2.2) or (2.2.3) requires that the norm of vector r be as small as possible. The p-measure or p-norm of vector r is denoted by Lp or by ||r||p and is given by

z = ||r||p = [∑i=1..n |r(xi)|^p]^(1/p), 1 ≤ p ≤ ∞

where n is the number of elements of vector r. Let r = (r(xi)), where the r(xi) are the elements of vector r. In (2.2.2) it is given by

r(xi) = a1 + a2xi + a3xi² – yi, i = 1, 2, …, n

Then for p = 1, the approximate solution of (2.2.2) or (2.2.3) requires that the norm

z = ∑i=1..n |r(xi)| = ∑i=1..n |a1 + a2xi + a3xi² – yi|
be as small as possible, where for this example n = 8. We call the obtained (approximate) solution vector a the L1 solution of system (2.2.2). For p = 2, the approximate solution of (2.2.2) requires that the function
z = sqrt(∑i=1..n [r(xi)]²) = sqrt(∑i=1..n [a1 + a2xi + a3xi² – yi]²)

be as small as possible.
The obtained solution vector a is the L2 or the least squares solution of system (2.2.2). For p = ∞, we require that

z = max |r(xi)| = max |a1 + a2xi + a3xi² – yi|, i = 1, 2, …, n

be as small as possible. We call the solution vector a the Chebyshev, the L∞, or the minimax solution of system (2.2.2) or (2.2.3).

Now consider the following two problems and assume that all the functions are real valued.

Problem (a)
Let f(x) be a given function defined on a finite subset X = {x1, x2, …, xn} of an interval I on the real line. Let the set of arbitrary linearly independent continuous functions {φ1(x), φ2(x), …, φm(x)}, m < n, be defined on I. We define the polynomial P(a1, a2, …, am, x) as
P(a1, a2, …, am, x) = a1φ1(x) + … + amφm(x)
or simply the function P(a, x), where a denotes the parameter vector (a1, …, am)T in the Em space. By comparing (2.2.4) and (2.2.1), φ1(x) = 1, an n-vector of 1’s, φ2(x) = x and φ3(x) = x2. The linear L1 approximation problem for f(x) on X is to determine vector a that minimizes the function
© 2008 by Taylor & Francis Group, LLC
Chapter 2: Preliminaries
29
n
∑
z =
r ( xi )
i=1
where the residuals r(xi) are given by (2.2.5)
r(xi) = P(a, xi) – f(xi), i = 1, 2, …, n
The linear L2 or the linear least squares approximation problem for f(x) on X is to determine vector a that minimizes the function n
z = sqrt
∑ [ r ( xi ) ]
2
i=1
The linear Chebyshev, L∞ or minimax approximation problem for f(x) on X is to determine vector a that minimizes the function z = max|r(xi)|, i = 1, 2, …, n Problem (b) Consider now the overdetermined system of linear equations (2.2.6)
c11a1 + c12a2 + … + c1mam = f1 … … … cn1a1 + cn2a2 + … + cnmam = fn
where C = (cij) is an n by m constant matrix of rank m, m < n, and f = (fi) and a = (ai) are n- and m-vectors in the Euclidean m- and n-spaces respectively. The linear L1 solution to system (2.2.6) is to determine vector a that minimizes the function n
z =
∑
ri
i=1
where (2.2.7)
ri = ci1a1 + … + cimam – fi, i = 1, 2, …, n
In the same manner, the L2 and the L∞ approximations for the system of linear equations (2.2.6) are defined. The symbols used for problem (b) are chosen to match those of © 2008 by Taylor & Francis Group, LLC
30
Numerical Linear Approximation in C
problem (a). In (2.2.6), f = (fi) and a = (ai) correspond to (f(xi)) and (ai) of problem (a) respectively. Also, matrix C = (cij) corresponds to (φj(xi)). Consequently (ri) of (2.2.7) corresponds to (r(xi)) of (2.2.5). It is clear, as demonstrated, that problem (a) is equivalent to problem (b); that is, the discrete linear approximation problem in a certain norm is equivalent to the solution of an overdetermined system of linear equations in the same norm. Throughout this book, the above-mentioned 3 norms are used. Note 2.1 The approximation for each of the 3 norms is called linear because the residuals in (2.2.5) or in (2.2.7) depend linearly on the solution vector a. Note 2.2 In most algorithms in this book, we define the residuals r(xi), as in (2.2.5), or ri, as in (2.2.7). In some algorithms, the residuals are defined as the negative of the r.h.s. of (2.2.5) or (2.2.7). This choice is arbitrary and does not affect the analysis of the algorithm. Hence, if r = Ca – f, then Ca = f + r, and if r = f – Ca, then Ca = f – r. Note 2.3 From now on, the expressions discrete linear approximation in a certain norm and the solution of an overdetermined system of linear equations in the same norm, would be used alternatively. 2.3
Comparison between the L1, the L2 and the L∞ norms by a practical example
The motivation behind this section is summarized by the following. Suppose we are given a set of experimental data points in the x-y plane that contains spikes, odd or wild points. Let this data be approximated by a curve in the Lp norm, where p ≥ 1. It is known that the L1 norm is recommended over other norms for fitting a curve to data that contains odd or wild points [11]. We are given the set of 8 points of Section 2.2, which contains the wild point (4, 6.5). This set is approximated by vertical parabolas of the form y = a1 + a2x + a3x² in the L1, the L2 and the L∞ norms. The parameters a1, a2 and a3 are calculated for each case. The results are shown in Figure 2-1.
We observe in this figure that the L1 approximation almost entirely ignores the wild point (4, 6.5), while the L2 and especially the L∞ approximation curves bend towards the wild point. Barrodale [4] presented numerical evidence showing that the L1 approximation is superior to the L2 approximation for data that contains some very inaccurate points. Rosen et al. [12], in a practical application of signal identification, also illustrated the robustness of a solution based on minimizing the L1 error norm over minimizing the L2 error norm. The wild point (4, 6.5) is called an outlier, since its residual in the L1 approximation is very large compared with the residuals of the other points. As explained in the next section, the fitting curve in the L1 norm passes through at least m points of the given set, where m = rank(C). On the other hand, the Chebyshev approximation curve bends towards the wild point since (m + 1) residuals in the L∞ approximation have the same absolute maximum L∞ norm. The least squares approximation curve also bends towards the wild point, but not as much as the Chebyshev approximation curve.

Figure 2-1: Curve fitting a set of 8 points, including a wild point, using the L1, L2 and L∞ norms (plot omitted)
In Figure 2-1, the solid curve is the L1 approximation, the dashed curve is the L2 (least squares) approximation and the dotted curve is the L∞ (Chebyshev) approximation.

2.3.1 Some characteristics of the L1 and the Chebyshev approximations

(a) The L1 approximation
A main property is that the L1 approximation curve (see Figure 2-1) passes through m of the given points, where m is the number of unknowns of the approximating curve; here, for y = a1 + a2x + a3x², m = 3. It passes through points 1, 6 and 8. This is known as the interpolatory characteristic of the L1 approximation. This property is explained further in Chapter 5. If the coefficient matrix in (2.2.2) is of rank k, k < m, the fitting curve will pass through at least k points. In fact, the interpolatory characteristic of the L1 approximation results in keeping the L1 approximation curve far away from the wild point(s). See also Sposito et al. [14].

(b) The Chebyshev approximation
A main property of the Chebyshev approximation is that each of (m + 1) residuals for the given points equals ±z, where z is the optimum Chebyshev norm and m is the number of unknowns of the approximating curve, which equals 3 in this example. We observe in Figure 2-1 that the 4 residuals for points 3, 4, 5 and 8 are equal in magnitude and oscillating in sign; that is, r3 = +z, r4 = –z, r5 = +z and r8 = –z. This characteristic is known as the equioscillatory characteristic of the Chebyshev approximation. This property will be discussed further in Chapter 10.

2.4 Error tolerances in the calculation
Since the programs in this book are for linear approximation, two tolerance parameters named "PREC" and "EPS" are specified in LA_Defs.h.

PREC stands for precision, or the round-off error that occurs in simple floating-point arithmetic operations [15]. Single-precision (s.p.) floating-point numbers typically have a precision of around 6 decimal places. This means that the figure after the 6th decimal place is either truncated if it is < 5, or rounded up if it is ≥ 5. Hence, for s.p. computation we set PREC = 1.0E–06. EPS is the tolerance to be used during computation; a calculated number x is considered to be 0 if |x| < EPS. For s.p. computation we set EPS = 1.0E–04.

Double-precision (d.p.) floating-point numbers typically have a precision of around 16 decimal places. Thus the figure after the 16th decimal place is either truncated or rounded up. For d.p. computation we set PREC = 1.0E–16 and EPS = 1.0E–11. See Section 4.7.1 for a description of normalized floating-point representation.

The values of the parameters PREC and EPS are controlled by the conditional compile variable DOUBLE_PRECISION, which is used to toggle PREC and EPS between single- and double-precision values. The formats of s.p. and d.p. numbers can vary between computer systems, so the values of PREC and EPS need to be adjusted accordingly. Floating-point processors perform all simple arithmetic operations in at least double-precision (often higher than d.p.), regardless of whether the result is stored in single- or double-precision. All computation in our C implementation is performed in d.p. To simulate s.p. computation, as indicated in several sample problems in the book, we change PREC and EPS to s.p. values.

Note 2.4
Some programs in this book do not employ PREC, but all employ EPS.
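One plausible arrangement of these definitions is sketched below; the actual definitions live in the book's LA_Defs.h.

/* Hypothetical sketch of the tolerance definitions described above. */
#ifdef DOUBLE_PRECISION
#define PREC 1.0E-16    /* d.p. round-off level                 */
#define EPS  1.0E-11    /* d.p. zero tolerance: x is 0 if |x| < EPS */
#else
#define PREC 1.0E-06    /* s.p. round-off level                 */
#define EPS  1.0E-04    /* s.p. zero tolerance                  */
#endif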
2.5 Representation of vectors and matrices in C
Arrays in C are zero-offset, or zero-origin, meaning that an array of size n is indexed from [0] to [n – 1]. In FORTRAN (and the mathematical convention of linear approximation), arrays are indexed from [1] to [n]. In C, a vector v = (1, 2, 3, 4)T of length 4 is accessed as v[0] = 1, v[1] = 2, v[2] = 3 and v[3] = 4, whereas in FORTRAN (and the mathematical notation) it is accessed as v[1] = 1, …, v[4] = 4.

In order to counteract the zero-origin nature of C, after memory for a vector variable has been allocated (via malloc()), the pointer to the memory is decremented to allow indexing to start from 1 instead of 0. Statically-initialized vectors (used only for initial data in drivers) cannot have their pointers manipulated, so the value at the 0th index is set to 0 and ignored, and data initialization is from indices [1] to [n] such that v = (0, 1, 2, 3, 4)T, or v[1] = 1, …, v[4] = 4. See also [8] and [10].

An n by m matrix variable is allocated using similar pointer manipulation such that it can be indexed from [1][1] to [n][m]. However, unlike statically-initialized vectors, statically-initialized matrices (also used only for initial data in drivers) are not padded with 0 elements, as this would necessitate the entire first column and the entire first row of each matrix to be set to 0's. Instead, a statically-initialized matrix, indexed from [0][0] to [n – 1][m – 1], is copied to a dynamically-allocated matrix variable indexed from [1][1] to [n][m].

LA_Utils.c contains a collection of utility functions for dynamic allocation, deallocation, copying and printing of vector and matrix data. See any driver function for examples of how these utilities are used.
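The pointer manipulation may be sketched as follows; the function names are ours, and the book's own utilities are those in LA_Utils.c.

#include <stdlib.h>

/* Allocate a vector indexable from v[1] to v[n]. */
double *alloc_vector_1(int n)
{
    double *v = (double *) malloc(n * sizeof(double));
    return (v == NULL) ? NULL : v - 1;   /* shift so indexing starts at 1 */
}

void free_vector_1(double *v)
{
    if (v != NULL) free(v + 1);          /* undo the shift before freeing */
}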
2.6 Outliers and dealing with them
In Section 2.2, we gave 8 data points, one of them being an odd point, or outlier. When the L1 approximation was used to fit this data (Figure 2-1), the residual of the fourth point was much larger than the residuals of the other 7 points, so we denoted this point an outlier. The L1 curve fitting almost entirely ignores the outliers. As mentioned earlier, the reason is that in the L1 curve fitting, the curve has to interpolate (pass through) at least k of the given points, where k is the rank of the coefficient matrix. On the other hand, in the L∞ (Chebyshev) approximation for the same data, (k + 1) residuals have the same absolute maximum value and the curve bends towards this outlier. The L2 (least squares) approximation also leans towards the outlier, though not as much as the L∞ approximation curve. As a result, the identification of these outliers is difficult, as their residuals in the L2 norm have been made small.

Hence, for the L1 approximation, the fourth point of this data set is an outlier, while for the L∞ approximation this fourth point is not, and thus the notion of outliers depends on the fitting strategy. Statisticians mostly use the L2 (least squares) approximation, for which an outlier has a larger residual compared with the residuals of the other points. They would like to use statistical measures to detect the outliers and discard them before the curve fitting takes place.

It is important to detect outliers, but it is equally important to realize that outliers should be discarded only if it is determined that they are not valid data points. Outliers that are valid data points usually suggest that the used model is incorrect. If the model is correct, an outlier that is a valid point might be given a smaller weight to suit the model. Deleting odd points as outliers from the data should be done only if their presence can be attributed to a provable error in the data collection. Ryan ([13], p. 408) gives an example of 10 data points that were fitted by a least squares method, in which the last point was an outlier. When using only the first 9 points, the residual estimator was reduced by 50%. The last point caused high correlation (dependence) between two of the columns of the coefficient matrix.

However, it is reasonable to discard an outlier that does not suggest the unsuitability of the model. Consider the fourth point in Figure 2-1. This point is not extreme in either its x-coordinate or its y-coordinate, but it has an influence on the equation of the L2 and the L∞ fit. In fact, when this outlier is removed, the L2 and the L∞ curve fits for the remaining 7 points almost coincide with the L1 curve fit (see an almost identical example in [3], p. 420).

According to Ryan ([13], p. 350), there are 5 types of outliers:
(1) Regression outlier: a point that deviates from the rest of the points, or at least from the majority of them.
(2) Residual outlier: a point that has a large calculated residual.
(3) x-outlier: a point that is outlying in regard to its x-coordinate only.
(4) y-outlier: a point that is outlying in regard to its y-coordinate only.
(5) x- and y-outlier: a point that is outlying in both coordinates.

Each one of these types of outliers may severely distort the coefficients of the approximating curve. These types of outliers may also be detected by statistical tests, which are outside the scope of this book. For a statistical study of the problem, see for example Belsley et al. [5], Cook and Weisberg [6] and Ryan [13].
2.6.1 Data editing and residual analysis
As suggested by Gunst and Mason [7], data screening before approximation calculations may eliminate costly errors that computer calculation may not detect. Because the approximating curve attempts to fit all the data points, the residuals might all be large, while if one or two odd points were eliminated before the calculation, a better fit with smaller residuals would result. One of the easiest ways of spotting irregularities in given data is to visually scan it. Also, by routinely plotting the given data, one may specify the approximating curve, be it a straight line, a quadratic, or a higher-degree polynomial. By plotting the given data points, one may distinguish the outlier in the data and determine if it belongs to any one of the types described above. One important point when examining plots of data points is to look at the overall trend of the data, not at small perturbations of a few points.

In some problems, plotting the data points does not clearly identify the pattern of the data. Reducing the variability of the data may be done by a smoothing technique, which enables the trend of the data to be recognized. Gunst and Mason ([7], pp. 39-41) presented an example where they did just that. Given the raw data of a set of points, the y-value of each data point (apart from the first and the last ones) is replaced by the median of the y-values of three points: the point itself and the points before and after it. This kind of smoothing technique may be repeated once or twice. Even after smoothing, one should examine the overall trend of the smoothed data rather than the localized trends. Median smoothing is not the only type of smoothing used to reduce the variability of raw data; moving averages and exponential smoothing have also been used effectively.
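A minimal C sketch of this three-point median smoothing follows; the names are ours.

static double median3(double a, double b, double c)
{
    if ((a <= b && b <= c) || (c <= b && b <= a)) return b;
    if ((b <= a && a <= c) || (c <= a && a <= b)) return a;
    return c;
}

/* Replace each interior y-value by the median of itself and its two
 * neighbours; the first and last points are left unchanged. */
void median_smooth(const double *y, double *ys, int n)
{
    int i;
    ys[0] = y[0];
    ys[n - 1] = y[n - 1];
    for (i = 1; i < n - 1; i++)
        ys[i] = median3(y[i - 1], y[i], y[i + 1]);
}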
© 2008 by Taylor & Francis Group, LLC
Chapter 2: Preliminaries
37
other points to the data. Montgomery and Peck ([9], pp. 74-79) suggest plotting the residuals against the calculated values of y (or against x). If the plot indicates that the residuals are contained in a horizontal band, then there are no obvious model defects. Otherwise there might be symptomatic model deficiencies. Also, the effect of outliers on the fitting equation may be easily checked by dropping the outliers and re-calculating the fitting equation again. If the coefficients of the fitting equation are over sensitive to the removal of the outliers, then the chosen fitting equation may not be suitable for the given data, or that the outliers are to be removed to give a better fit for the remaining data points.
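The three-point median smoothing described above is easy to code. The following C sketch is our illustration, not code from the book's software; the function name and the in-place convention are assumptions.

```c
#include <stdlib.h>
#include <string.h>

/* Median of three values */
static double median3(double a, double b, double c)
{
    if (a > b) { double t = a; a = b; b = t; }
    if (b > c) { double t = b; b = c; c = t; }
    if (a > b) { double t = a; a = b; b = t; }
    return b;   /* middle value after sorting a <= b <= c */
}

/* One pass of three-point median smoothing of y[0..n-1].
   The first and last points are left unchanged. */
void median_smooth(double *y, int n)
{
    if (n < 3) return;
    double *w = malloc(n * sizeof(double));
    if (w == NULL) return;
    memcpy(w, y, n * sizeof(double));   /* smooth from the raw values */
    for (int i = 1; i < n - 1; i++)
        y[i] = median3(w[i - 1], w[i], w[i + 1]);
    free(w);
}
```

As suggested above, calling median_smooth() once or twice usually suffices before examining the overall trend.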
References
1. Abdelmalek, N.N., Linear L1 approximation for a discrete point set and L1 solutions of overdetermined linear equations, Journal of ACM, 18(1971)41-47.
2. Abdelmalek, N.N., On the discrete L1 approximation and L1 solutions of overdetermined linear equations, Journal of Approximation Theory, 11(1974)38-53.
3. Abdelmalek, N.N., Noise filtering in digital images and approximation theory, Pattern Recognition, 19(1986)417-424.
4. Barrodale, I., L1 approximation and analysis of data, Applied Statistics, 17(1968)51-57.
5. Belsley, D.A., Kuh, E. and Welch, R.E., Regression Diagnostics: Identifying Influential Data and Sources of Collinearity, John Wiley & Sons, New York, 1980.
6. Cook, R.D. and Weisberg, S., Residuals and Influence in Regression, Chapman-Hall, London, 1982.
7. Gunst, R.F. and Mason, R.L., Regression Analysis and its Application: A Data Oriented Approach, Marcel Dekker, Inc., New York, 1980.
8. Lau, H.T., A Numerical Library in C for Scientists and Engineers, CRC Press, Ann Arbor, 1995.
9. Montgomery, D.C. and Peck, E.A., Introduction to Linear Regression Analysis, John Wiley & Sons, New York, 1992.
10. Press, W.H., Flannery, B.P., Teukolsky, S.A. and Vetterling, W.T., Numerical Recipes in C: The Art of Scientific Computing, Second Edition, Cambridge University Press, Cambridge, 1992.
11. Rice, J.R. and White, J.S., Norms for smoothing and estimation, SIAM Review, 6(1964)243-256.
12. Rosen, J.B., Park, H. and Glick, J., Signal identification using a least L1 norm algorithm, Optimization and Engineering, 1(2000)51-65.
13. Ryan, T.P., Modern Regression Methods, John Wiley & Sons, New York, 1997.
14. Sposito, V.A., Kennedy, W.J. and Gentle, J.E., Useful generalized properties of L1 estimators, Communications in Statistics-Theory and Methods, A9(1980)1309-1315.
15. Wilkinson, J.H., Rounding Errors in Algebraic Processes, Prentice-Hall, Englewood Cliffs, NJ, 1963.
Chapter 3
Linear Programming and the Simplex Algorithm

3.1 Introduction
This chapter is a tutorial one. Its purpose is to introduce the reader to the subject of Linear Programming, which is used as the main tool for solving all the approximation problems in this book, with the exception of some least squares approximation problems. In general, linear programs deal with real-life optimization problems, such as those found in industry, transportation, economics, numerical analysis and other fields [6, 8, 10, 11]. A linear programming problem is an optimization problem in which a linear function of a set of variables is to be maximized (or minimized) subject to a set of linear constraints. It may be formulated as follows. Find the n variables x1, x2, …, xn that maximize the linear function

(3.1.1)
z = c1x1 + c2x2 + … + cnxn
subject to the m linear constraints (conditions)
(3.1.2)
a11x1 + a12x2 + … + a1nxn  η1  b1
a21x1 + a22x2 + … + a2nxn  η2  b2
  …                        …   …
am1x1 + am2x2 + … + amnxn  ηm  bm

where ηi, i = 1, 2, …, m, is a ≤, ≥, or = sign. The following n constraints are also specified

(3.1.3)
x1 ≥ 0, x2 ≥ 0, …, xn ≥ 0
The following is a common linear programming problem that occurs in industry. The so-called simplex method solves the problem via Gauss-Jordan elimination processes.

Example 3.1
A firm has 3 workshops, one for parts, one for wiring and one for testing. The firm produces 2 different products, A and B. Each product has to undergo an operation in each workshop consecutively. One unit of product A requires 1, 1 and 2 hours in the 3 workshops respectively. One unit of product B requires 1, 3 and 1 hours in the 3 workshops respectively. The workshops are available 20, 50 and 30 hours per week respectively. The profit in selling one unit of product A is $20 and in selling one unit of product B is $30. What is the number of weekly output units x1 and x2 of products A and B respectively that will maximize profit?

The formulation of the problem is

maximize z = 20x1 + 30x2

subject to the constraints

(3.1.4)
x1 + x2 ≤ 20
x1 + 3x2 ≤ 50
2x1 + x2 ≤ 30
and (3.1.5)
x1 ≥ 0 and x2 ≥ 0
Conditions (3.1.5) indicate that we cannot produce a negative number of goods. This problem has two independent variables x1 and x2 and may be solved graphically. It is known that a straight line, say ax1 + bx2 + c = 0, divides the Euclidean plane into two halves; the quantity (ax1 + bx2 + c) < 0 in one half of the plane and (ax1 + bx2 + c) > 0 in the other half. The intersection of the five half planes satisfying the above five inequalities (3.1.4) and (3.1.5) constitutes the feasibility region for the solution (x1, x2). For this problem, it is a polygonal region that has 5 sides and 5 corners, as shown in Figure 3-1. The value z = 20x1 + 30x2 defines a straight line that moves parallel to itself as z increases or decreases. The maximum value of z, zmax, will be obtained when this line touches the region of feasible solution, that is, when the line z = 20x1 + 30x2 passes through the furthest corner of this region. The solution of the problem is (x1, x2) = (5, 15) and zmax = 550, as depicted in Figure 3-1.

[Figure 3-1: Feasibility Region. The polygon bounded by the lines x1 + x2 = 20, x1 + 3x2 = 50, 2x1 + x2 = 30 and the coordinate axes, with the line 20x1 + 30x2 = 550 touching it at the corner (5, 15). Points P and Q mark the segment of x1 + x2 = 20 referred to in Example 3.2.]
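As a quick numerical check of this graphical solution, the short C program below (our illustration, not code from the book's library) evaluates z = 20x1 + 30x2 at each corner of the feasibility region; the corner list is read off Figure 3-1.

```c
#include <stdio.h>

int main(void)
{
    /* Corners of the feasibility region of Example 3.1 */
    double corners[5][2] = {
        {0, 0}, {15, 0}, {10, 10}, {5, 15}, {0, 50.0 / 3.0}
    };
    double zmax = -1e30, x1max = 0, x2max = 0;

    for (int i = 0; i < 5; i++) {
        double z = 20.0 * corners[i][0] + 30.0 * corners[i][1];
        if (z > zmax) {
            zmax = z; x1max = corners[i][0]; x2max = corners[i][1];
        }
    }
    printf("zmax = %g at (x1, x2) = (%g, %g)\n", zmax, x1max, x2max);
    return 0;   /* prints: zmax = 550 at (5, 15) */
}
```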
In this example, the region of feasible solution has as boundaries the lines given by the set of constraints (3.1.4) and (3.1.5). One vertex (corner) of this region is the optimal solution. This result is always true for any general linear programming problem with the number of variables ≥ 2. We see from this example that since all the functions in linear programming problems are linear, the feasible region defined by problem (3.1.1-3) is a convex set; thus the linear programming problem is convex. Again, the optimizer of a linear program must lie on the boundary of the feasible region. In this problem the solution is said to be feasible and unique. However, there are some exceptional cases, which are discussed next.

3.1.1 Exceptional linear programming problems
Various possibilities may arise in linear programming problems. A problem may have a unique solution, an infinite number of solutions, or no solution at all. In the last case, the problem may have an unbounded solution or it may have inconsistent constraints; in both cases, we say that the problem has no solution. Consider the following examples.

Example 3.2: (non-unique solution)
Example 3.1, illustrated by Figure 3-1, has the unique solution (x1, x2) = (5, 15). Consider the same example and assume instead that z is given by z = 20x1 + 20x2. Again, zmax is obtained when the line z = 20x1 + 20x2 touches the region of feasible solution. However, in this case, this line coincides with one of the constraint lines, namely the line x1 + x2 = 20 (Figure 3-1). Any point on the portion PQ of this line is a solution to this problem. The solution is thus not unique.

Example 3.3: (unbounded solution)
Consider the example

maximize z = x1 + x2

subject to the conditions
x1 – x2 ≥ –1
–x1 + 3x2 ≤ 5
x1 ≥ 0, x2 ≥ 0
[Figure 3-2: Unbounded Solution. The feasibility region, bounded by the lines x1 – x2 = –1 and –x1 + 3x2 = 5, is open in the direction of increasing z = x1 + x2.]
For this example the region of feasible solution is given by the shaded area in Figure 3-2. The line representing z can be moved parallel to itself in the direction of increasing z, and z can be made as large as one wishes. The problem thus has no solution, as the solution is unbounded.

Example 3.4: (inconsistent constraints)
Consider the example

maximize z = x1 – x2

subject to the constraints
x1 + 2x2 ≤ 2
x1 + 2x2 ≥ 3
x1 ≥ 0, x2 ≥ 0
[Figure 3-3: Inconsistent Constraints. The half planes x1 + 2x2 ≤ 2 and x1 + 2x2 ≥ 3 do not intersect.]
In this example the constraints are inconsistent, as there is no common feasibility region between them, as shown in Figure 3-3.

In Section 3.2, some notations and definitions are given. In Section 3.3, a well-known version of the simplex method is described, and in Section 3.4 the simplex tableau is illustrated with a solved example. In Section 3.5, the two-phase method of the simplex algorithm is introduced. In Section 3.6, the duality theory in linear programming is presented, and in Section 3.7, degeneracy, which often occurs in linear programming, is described and its resolution discussed. In Section 3.8, the relationships between linear programming and the discrete linear L1 and Chebyshev approximations are established. Finally, in Section 3.9, the stability of linear programs is considered.

3.2 Notations and definitions
Let us recall the linear programming problem formulated by (3.1.1-3). The following are well-known notations and definitions:
(a) The objective function: Function z of (3.1.1) is known as the objective function.
(b) The prices: Constants (ci) in (3.1.1) are known as the prices associated with the variables (xi).
(c) The requirement vector: Vector b, whose elements are (b1, …, bm) of (3.1.2), is often known as the requirement vector. It is preferable to have the bi all positive. If one or more of the bi is negative, the corresponding inequality may be multiplied by –1 and the inequality sign reversed.
(d) Slack and surplus variables: The simplex method described in Section 3.3, in effect, deals with constraints in the form of equalities, not in the form of inequalities such as those of (3.1.4). It is thus necessary to convert these inequalities into equalities. This is done by using the slack and surplus variables. An inequality of the form
ai1x1 + ai2x2 + … + ainxn ≤ bi
is transferred to the equality
ai1x1 + ai2x2 + … + ainxn + xi+n = bi
by adding to the l.h.s. a new variable, say xi+n ≥ 0, known as a slack variable. Similarly, the inequality
ai1x1 + ai2x2 + … + ainxn ≥ bi
is transferred to the equality
ai1x1 + ai2x2 + … + ainxn – xi+n = bi
by subtracting from the l.h.s. a new variable, say xi+n ≥ 0, known as a surplus variable.
(e) Artificial variables: It is sometimes desirable, as in the two-phase method of Section 3.5, to add other variables to the problem. Such variables are neither slack nor surplus. They are known as artificial variables.
(f) A hyperplane: Any one of the resulting equalities in (3.1.2) is known as a hyperplane. This is the generalization of a plane to more than three variables.
(g) A solution: Any set of variables x1, x2, …, xn that satisfies the set of constraints (3.1.2) is known as a solution to the given programming problem.
(h) A basic solution: Let us now assume that the m constraints (3.1.2) are converted, by using the slack and surplus variables, to m equations in the form Ax = b, where A is an m by N matrix, m < N, of rank m. Then a basic solution to this set is obtained by equating (N – m) variables (xj) to 0's and solving for the remaining m variables (xi). These m variables are known as the basic variables, or the basic solution.
(i) Basis matrix: The m linearly independent columns of matrix A, in the system Ax = b, associated with the m basic variables form an m by m matrix, known as the basis matrix. If the basis matrix is denoted by B, the basic solution xB is given by
xB = B–1b
and each of the other (N – m) variables (xj) is 0.
(j) A basic feasible solution: Any basic solution that also satisfies the non-negativity constraints (3.1.3) is called a basic feasible solution.
(k) A vertex in the region of feasible solution: It is easy to show that a basic feasible solution is a vector whose elements are the coordinates of a vertex (or a corner) of the region of feasible solution.
(l) Degenerate solution: If one or more basic variables xi is 0, then the basic solution to Ax = b is known as a degenerate solution. Often, degeneracy in linear programming should be resolved. See Section 3.7.
(m) Optimal basic feasible solution: Any basic feasible solution that maximizes (or minimizes) the function z of (3.1.1) is called an optimal basic feasible solution.

3.3 The simplex algorithm

By adding the necessary slack variables and by subtracting the
necessary surplus variables in the set of constraints (3.1.2), we get an underdetermined set of linear equations. Problem (3.1.1-3) may be reformulated as follows. Maximize the objective function

(3.3.1)
z = c1x1 + … + cnxn + 0xn+1 + … + 0xN
subject to the constraints

(3.3.2)
Σ(i=1 to n) api xi = bp,            p = 1, 2, …, w
Σ(i=1 to n) aji xi + xn+j = bj,     j = 1, 2, …, u
Σ(i=1 to n) aki xi – xn+u+k = bk,   k = 1, 2, …, v

and

(3.3.3)
x1, …, xn ≥ 0, xn+1, …, xN ≥ 0
In writing (3.3.1-3), we have assumed that in (3.1.2) there are w equalities, u inequalities with ≤ signs and v inequalities with ≥ signs. Also, in (3.3.1), N = n + u + v. Note that the coefficients (the prices) of the slack and surplus variables in (3.3.1) are 0's. In vector-matrix notation, this formulation reduces to

(3.3.4)
maximize z = (c, x)
subject to the constraints (3.3.5)
Ax = b
and (3.3.6)
x ≥ 0
Vector c in (3.3.4) is vector c appearing in (3.3.1), namely c = (c1, …, cn, 0, …, 0)T and vector b in (3.3.5) is the requirement vector (bi) of (3.3.2) which is assumed to have non-negative elements. Matrix A is an m by N matrix given by (assume that u = 3, v = 1 and w = 1)
(3.3.7)
     | a11 a12 a13 a14 … a1,n–1 a1,n   1  0  0   0 |
     | a21 a22 a23 a24 … a2,n–1 a2,n   0  1  0   0 |
A =  | a31 a32 a33 a34 … a3,n–1 a3,n   0  0  1   0 |
     | a41 a42 a43 a44 … a4,n–1 a4,n   0  0  0  –1 |
     | a51 a52 a53 a54 … a5,n–1 a5,n   0  0  0   0 |
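Appending the slack and surplus columns mechanically, as in (3.3.7), is straightforward. The C sketch below is our illustration (function name, '<'/'>'/'=' sign encoding and the dense row-major layout are assumptions); equality rows receive no extra column, so N = n + u + v as above.

```c
#include <string.h>

/* Build the augmented matrix A (m rows, N = n + u + v columns) of (3.3.7).
   sign[i] is '<' for a <= constraint (slack column, +1),
   '>' for a >= constraint (surplus column, -1), '=' for an equality.
   a is the original m x n array, aug the m x N result, both row-major. */
void augment(const double *a, double *aug, const char *sign,
             int m, int n, int N)
{
    memset(aug, 0, (size_t)m * N * sizeof(double));
    int next = n;                        /* next free slack/surplus column */
    for (int i = 0; i < m; i++) {
        memcpy(&aug[i * N], &a[i * n], n * sizeof(double));
        if (sign[i] == '<')
            aug[i * N + next++] = 1.0;   /* slack variable */
        else if (sign[i] == '>')
            aug[i * N + next++] = -1.0;  /* surplus variable */
    }
}
```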
Problem (3.3.4-6) is solved by the simplex algorithm, described next. The simplex algorithm is an iterative one. It starts by calculating the coordinates of one of the vertices (corners) of the region of feasible solution. It then goes from one vertex to a neighboring one in such a way that at each step the objective function z increases (or decreases), until the optimal solution is reached. This process takes a finite number of steps, usually between m and 2m steps, where m is the number of constraints in (3.3.2). If the programming problem has an unbounded solution or if it has an inconsistent set of constraints, the simplex algorithm will detect this in the course of the solution. The simplex method will also indicate whether the solution is unique. Another important merit is that the simplex algorithm detects any redundancy in the set of constraints (3.1.2). It removes such redundant constraints and proceeds to find the solution after discarding them. By redundant constraints, we mean that one or more equations in (3.3.5) are linearly dependent on other equations in (3.3.5).

Consider the linear programming problem given by (3.3.4-6). Let us further assume that rank(A|b) = rank(A), where (A|b) denotes the m by (N + 1) matrix with b as its (N + 1)th column.

3.3.1 Initial basic feasible solution
As indicated, the simplex method solves the problem by first finding any basic feasible solution to the problem, i.e., by first finding m non-negative coordinates of a vertex of the region of feasible solution. In the case where a slack variable exists in each constraint equation in (3.3.5), matrix A of (3.3.7) has the form

(3.3.8)
A = (R|I)
where I is an m-unit matrix. In this case, the slack variables themselves form an initial basic feasible solution. Let us write x = (x0, xs)T, where x0 contains the original variables and xs contains the m slack variables. Then it is obvious that by setting x0 = 0, we have

Ixs = b

The slack variables xs = b and they form an initial basic feasible solution. The parameters of interest in the simplex algorithm are:
(i) Vectors (yj), given by
yj = B–1aj = aj, j = 1, 2, …, n
since B–1 = I here.
(ii) Scalars (zj – cj), known as the marginal costs, given by
zj – cj = (cB, yj) – cj = –cj, j = 1, 2, …, n
since the prices associated with the slack variables (the elements of cB here) are 0's. See (3.3.13) and (3.3.14).

Hence, for this initial basic feasible solution, no computation is needed to obtain the initial parameters of interest in the simplex algorithm.

3.3.2 Improving the initial basic feasible solution
Once an initial vertex is available, one iteration in the simplex algorithm is used to find a neighboring vertex associated with a larger value of z. This is done in a straightforward manner using what is known as the simplex tableau. Let the columns of matrix A in (3.3.5) be denoted by a1, a2, …, aN. Then equation (3.3.5) may be written as (3.3.9)
x1a1 + x2a2 + … + xmam + … + xNaN = b
Let the initial basic feasible solution be the m non-negative (xi), denoted by x1(1), x2(1), …, xm(1). The superscripts on the xi denote that the xi are those of the first vertex. Let the columns ai associated with the xi be denoted by ai(1). These columns form the initial basis matrix B. The initial basic solution is given by

(3.3.10)
x1(1)a1(1) + x2(1)a2(1) + … + xm(1)am(1) = b
The initial value z of (3.3.4), denoted by z(1) is also given by (3.3.11)
z(1) = c1(1)x1(1) + c2(1)x2(1) + … + cm(1)xm(1)
The columns in (3.3.10) are assumed to be linearly independent. Thus these columns (vectors) form a basis for an m-dimensional real space Em, and therefore any other m-dimensional real vector can be expressed as a linear combination of the m-vectors ai(1). In particular the (N – m) non-basic columns aj in (3.3.9) may be expressed in terms of the basis columns ai(1) of (3.3.10) in the form (3.3.12)
aj(1) = y1j(1)a1(1) + y2j(1)a2(1) + … + ymj(1)am(1) j = m+1, m+2, …, N
where the yij(1) are none other than the elements of vector yj(1) given by (3.3.13)
yj(1) = B–1aj(1), j = m+1, m+2, …, N
where B is the basis matrix whose columns are (a1(1), a2(1), …, am(1)). For j = m+1, m+2, …, N, let us denote the scalar product zj(1) by zj(1) = (cB, yj(1)) or (3.3.14)
zj(1) = c1(1)y1j(1) + c2(1)y2j(1) + … + cm(1)ymj(1)
where the elements of cB are those of (3.3.11), associated with the basic columns (a1(1), a2(1), …, am(1)). Our task now is to replace one of the basic columns ai(1) by a non-basic column aj(1) in such a way that the new basic variables xi(2) will all be non-negative and also that the new value z(2) will be greater than z(1). This is done by manipulating equations (3.3.10-14). Let us multiply (3.3.12) and (3.3.14) by a positive parameter θ and subtract the resulting equations from (3.3.10) and (3.3.11) respectively. We thus get

(3.3.15)
(x1(1) – θy1j(1))a1(1) + (x2(1) – θy2j(1))a2(1) + … + (xm(1) – θymj(1))am(1) + θaj(1) = b

and

(3.3.16)
(x1(1) – θy1j(1))c1(1) + … + (xm(1) – θymj(1))cm(1) + θcj(1) = z(1) + θ(cj(1) – zj(1)), j = m+1, m+2, …, N
All the coefficients but one of the ai(1) in (3.3.15) will form the new basic variables. Also, the r.h.s. of (3.3.16) will be the value z(2). Therefore, to achieve our goal, we search for an index j for which (cj(1) – zj(1)) in the r.h.s. of (3.3.16) is positive. Also, for this value of j, θ is chosen so that

(3.3.17)
θ = θmin = mini(xi(1)/yij(1)) = xu(1)/yuj(1) (say), yij(1) > 0

This choice of θ reduces one of the coefficients in (3.3.15) to 0 and leaves the remaining m coefficients positive. Such coefficients are the coordinates of the new vertex, and are denoted by x1(2), x2(2), …, xm(2). The value z(2) is now greater than z(1) and is given by

(3.3.18)
z(2) = z(1) + θmin(cj(1) – zj(1))
In general, there may be several values of j for which (cj(1) – zj(1)) > 0. It would be reasonable to choose θ for which z(2) in (3.3.18) yields the greatest increase over z(1). However, this requires some extra computational effort. Instead, one usually selects the j for which (cj(1) – zj(1)) is the largest, and thus the increase in z(1) will be

δz = (mini(xi(1)/yij(1))) maxj(cj(1) – zj(1)), (cj(1) – zj(1)) > 0, yij(1) > 0

The coordinates of the new vertex are now the coefficients of the vectors ai(1) in (3.3.15), where θ is given by (3.3.17). The new values of (cj(2) – zj(2)) are given by

(cu(2) – zu(2)) = 0
(cj(2) – zj(2)) = (cj(1) – zj(1)) – θ(cu(1) – zu(1)), j ≠ u

where u is defined in (3.3.17). This iteration is repeated several times until the optimal value zmax is obtained. The value zmax is achieved when (cj – zj) ≤ 0 for all j. The above calculation is greatly simplified if we arrange the parameters of interest in the form of a table called the simplex tableau.
3.4 The simplex tableau
Without loss of generality, let the basis matrix B consist of the first m columns of matrix A in (3.3.5) and let us pre-multiply (3.3.5) by B–1. We shall then get the equation (for m = 5)

| 1 0 0 0 0  y1,m+1(1) … y1,N(1) |   | x1 |   | b1(1) |
| 0 1 0 0 0  y2,m+1(1) … y2,N(1) |   | x2 |   | b2(1) |
| 0 0 1 0 0  y3,m+1(1) … y3,N(1) | . | .  | = | b3(1) |
| 0 0 0 1 0  y4,m+1(1) … y4,N(1) |   | .  |   | b4(1) |
| 0 0 0 0 1  y5,m+1(1) … y5,N(1) |   | xN |   | b5(1) |
and xm+1 = xm+2 = … = xN = 0. In this matrix equation, it is obvious that the bi(1) are the basic variables. Also, for the non-basic vectors, aij(1) = yij(1), from which the parameters zj(1) may be calculated. In other words, this equation gives the information needed for the next iteration. Once an iteration has been performed, it would be advantageous to obtain a new set of parameters from which the information needed for the next iteration is obtained. These concepts of the simplex tableau are best illustrated by an example. Consider Example 3.1, which was solved geometrically in Section 3.1. By adding the slack variables to the inequalities (3.1.4), we get the formulation (3.4.1a)
maximize z = 20x1 + 30x2 + 0x3 + 0x4 + 0x5
subject to the constraints (3.4.1b)
x1 + x2 + x3           = 20
x1 + 3x2      + x4     = 50
2x1 + x2           + x5 = 30
(3.4.1c)
x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, x4 ≥ 0, x5 ≥ 0
Tableau 3.4.1 is the simplex tableau for the first iteration.
Tableau 3.4.1
                    cT     20    30    0     0    0
  cB    B    bB            a1    a2    a3    a4   a5     θi
  ─────────────────────────────────────────────────────────
   0    a3    20            1     1    1     0    0      20
   0    a4    50            1    (3)   0     1    0    (50/3)
   0    a5    30            2     1    0     0    1      30
  ─────────────────────────────────────────────────────────
  z = 0   (zi – ci)       –20  (–30)   0     0    0
© 2008 by Taylor & Francis Group, LLC
Chapter 3: Linear Programming and the Simplex Algorithm
53
this case. The possible θi are recorded in the last column to the right of the tableau, from which it is found that θ2 is the minimum of the θi. Therefore, if a2 replaces a4 in the basis, the new basis matrix will have a3, a2 and a5 as its columns. The new basic solution will be feasible (all its elements are non-negative) and the value of z will increase. The so-called pivotal entry in the tableau is now decided by the intersection of the key column (of minimum (zj – cj)) and the key row (of minimum θi). The pivotal entry is then used to establish the simplex tableau for the next iteration, which results in a larger value of z. Step (1) Divide all the entries in the key row by the pivotal entry. This step is equivalent to dividing one of the equations in a linear set of equations by a constant and it does not result in any change in the solution. Step (2) Reduce all the entries in the key column, except the pivotal entry itself, which is now unity, to zero. This is done by subtracting an appropriate multiplier of the altered key row from each of the other rows of the column bB and the ai. This step is equivalent to one step of the Gauss-Jordan (elimination step) method, outlined in Section 4.6. The calculated entries in the (zi – ci) row of the simplex tableau, including the z entry, are obtained by extending step (2) to the (zi – ci) row also. This is done by subtracting an appropriate multiplier of the altered key row from the (zi – ci) row that reduces the (zj – cj) entry of the key column to 0. The results are recorded in Tableau 3.4.2. In Tableau 3.4.2, the key column is that of a1 and the key row is the first row, and thus the pivotal entry is determined. It is used to construct the next tableau and this is done by observing the two steps described above. The tableau for the next iteration is Tableau 3.4.3. The end row of Tableau 3.4.3 contains non-negative entries and this indicates that no further improvement to the solution is possible and the programming problem is now solved. The optimal solution is x1 = 5, x2 = 15 and zmax = 550. © 2008 by Taylor & Francis Group, LLC
54
Numerical Linear Approximation in C
Tableau 3.4.2 cT cB B bB ––––––––––––– 0 a3 10/3 30 a2 50/3 0 a5 40/3 ––––––––––––– z = 500 (zi – ci)
20 30 0 0 0 a1 a2 a3 a4 a5 θi –––––––––––––––––––––– (2/3) 0 1 –1/3 0 (5) 1/3 1 0 1/3 0 50 5/3 0 0 –1/3 1 8 –––––––––––––––––––––– (–10) 0 0 10 0
Tableau 3.4.3 cT cB B bB ––––––––––––– 20 a1 5 30 a2 15 0 a5 5 ––––––––––––– z = 550 (zi – ci)
20 30 0 0 0 a1 a2 a3 a4 a5 –––––––––––––––––––––– 1 0 3/2 –1/2 0 0 1 –1/2 1/2 0 0 0 –5/2 1/2 1 –––––––––––––––––––––– 0 0 15 5 0
It is easy to see that the increase in the value of z after each iteration is given by [–θmin(zi – ci)min]. That is, if z(1) and z(2) are the values of z in tableaux k and (k + 1) respectively z(2) = z(1) – θmin(zi – ci)min where θmin and (zi – ci)min are those of tableau k. In this example, if z(1) and z(2) are those of Tableaux 3.4.1 and 3.4.2 respectively z(2) = 0 + (50/3)×30 = 500 Also z(3) = z(2) + (5)×10 = 550 where z(3) is that in Tableau 3.4.3. In this example, we found that because of the slack variables, an identity matrix appears in matrix A and as a result, the initial calculation is an easy task. However, in many cases no identity matrix appears in A. This will be the case when some constraints have
© 2008 by Taylor & Francis Group, LLC
Chapter 3: Linear Programming and the Simplex Algorithm
55
equality signs or ≥ signs. In such cases, the initial basic feasible solution is constructed by the so-called two-phase method, which is described in the next section. 3.5
The two-phase method
In this method, artificial variables are added first to every equality constraint and also to every constraint having a surplus variable, so that matrix A of (3.3.5) may have the form (3.3.8). The problem is solved in two-phases. In phase 1, we try to derive all the artificial variables to 0. In phase 2, we try to maximize the actual objective function z, starting from a basic feasible solution that either contains no artificial vectors or that contains some artificial vectors at zero level; that is, whose associated xi = 0. 3.5.1
Phase 1
In phase 1, we assign to each artificial variable a price ci = –1. To all other variables, which include the original n variables, the slack and surplus variables, we assign the price ci = 0. We then use the simplex method techniques to maximize the function z = – 1x1a – 1x2a – … – 1xka where the xia are the artificial variables. Since in the simplex method any variable is not allowed to be negative, the xia are also non-negative and hence z would be non-positive. The maximum value of z will be 0. If there is no redundancy in the constraints, maximum z = 0 and no artificial variable will appear in the basis. Furthermore, z may become 0 before the optimality criterion is satisfied. In this case, some artificial vectors appear in the basis at a zero level. This case may result from the presence of redundancy in the original constraints. In any case, if z is 0 before the optimality criterion is satisfied, we end phase 1 and start working on phase 2. On the other hand, if maximum z < 0, then the artificial variables in the basis cannot be driven to 0 and the original problem has no feasible solution.
© 2008 by Taylor & Francis Group, LLC
56
3.5.2
Numerical Linear Approximation in C
Phase 2
In phase 2, we assign the actual prices ci to each legitimate variable and a price 0 to each slack and surplus variable. Also, we assign 0 to each artificial variable that may appear in the basis at zero level. The simplex method is then used to maximize the original function z. The first tableau in phase 2 is itself the last tableau in phase 1. The only difference is that the (zi – ci) for the last tableau in phase 1 are altered to account for change in the prices. If at the beginning of phase 2, one or more artificial variables appear at zero level, we ensure that such artificial variables remain at zero level at each iteration in phase 2. See the second example in Hadley ([7], pp. 154-158). 3.5.3
Detection of the exceptional cases
In Section 3.1, different exceptional cases of a linear programming problem were described. Such cases are detected easily from the simplex tableaux in the course of the solution of the problem. At the end of phase 1, we found that the following possibilities exist: (a) The presence of redundancy in the original constraints. (b) The possibility of inconsistent constraints. (c) Feasible problem. If no artificial vectors appear in the basis, then we have found an initial basic feasible solution, and thus the original problem is feasible. The original constraints are consistent and none of them is redundant. However, in phase 2, the following may occur: (d) An unbounded solution. Suppose that in a simplex tableau in phase 2, it was found that for some non-basic variable j, (zj – cj) is negative, but all the elements of the corresponding yj are non-positive. Then the solution to this problem is unbounded. See Hadley ([7], pp. 93, 94). (e) Non-unique optimal solution. If the optimal solution is degenerate, i.e., one or more elements in the basic feasible solution is 0, then we may have a non-unique optimal solution.
© 2008 by Taylor & Francis Group, LLC
Chapter 3: Linear Programming and the Simplex Algorithm
3.6
57
Duality theory in linear programming
The dual problem If a problem has a dual, the dual problem is usually given in terms of the same variables of the original problem, with the roles of certain variables being interchanged. The original problem, in this case, is known as the primal problem. Consider the primal problem (3.6.1a)
maximize z = (c, x)
subject to the constraints Ax ≤ b
(3.6.1b) and
x≥0
(3.6.1c) The dual problem to (3.6.1) is (3.6.2a)
minimize Z = (b, w)
subject to the constraints ATw ≥ c
(3.6.2b) and
w≥0
(3.6.2c)
Note in (3.6.1) and (3.6.2) that the roles of b and c are interchanged and the matrices of the constraints are the transpose of one another. Hence, if we consider Example 3.1, also given in (3.4.1), as a primal problem, its dual will be (3.6.3a)
minimize Z = 20w1 + 50w2 + 30w3
subject to the constraints (3.6.3b)
w1 + w2 + 2w3 ≥ 20 w1 + 3w2 + w3 ≥ 30
and (3.6.3c)
w1 ≥ 0, w2 ≥ 0 and w3 ≥ 0
The solution of a dual problem is related in some defined way to
© 2008 by Taylor & Francis Group, LLC
58
Numerical Linear Approximation in C
the solution of its primal problem. We convert (3.6.3a) to a maximization problem and also subtract surplus variables in (3.6.3b). We get (3.6.4a)
maximize Z = –20w1 – 50w2 – 30w3
subject to the constraints w1 + w2 + 2w3 – w4 = 20 (3.6.4b) w1 + 3w2 + w3 – w5 = 30 and (3.6.4c)
w1 ≥ 0, w2 ≥ 0, w3 ≥ 0, w4 ≥ 0 and w5 ≥ 0
We only write the initial and final tableaux for each of the primal and the dual and also omit the rows and the columns corresponding to the artificial variables. The primal problem (3.4.1) is Initial
Final
cT 20 30 0 0 0 cT 20 30 0 0 0 ––––––– –––––––––––––––––– ––––––– –––––––––––––––––– 0 20 1 1 1 0 0 20 5 1 0 3/2 –1/2 0 0 50 1 3 0 1 0 30 15 0 1 –1/2 1/2 0 0 30 2 1 0 0 1 0 5 0 0 –5/2 1/2 1 ––––––– –––––––––––––––––– ––––––– –––––––––––––––––– z=0 –20 –30 0 0 0 z = 550 0 0 15 5 0 The dual problem (3.6.4) is Initial bT ––––––– 0 20 0 30 ––––––– Z=0
Final bT
–20 –50 –30 0 0 –20 –50 –30 0 0 –––––––––––––––––– ––––––– –––––––––––––––––– 1 1 2 –1 0 –20 15 1 0 5/2 –3/2 1/2 1 3 1 0 –1 –50 5 0 1 –1/2 1/2 –1/2 –––––––––––––––––– ––––––– –––––––––––––––––– 20 50 30 0 0 Z = –550 0 0 5 5 15
Let us examine the final tableaux of the two problems. We notice that the basic variables for the dual, namely 15 and 5, appear in the last row of the primal. Also, the basic variables in the primal, namely 5, 15 and 5, appear in the last row of the primal.
© 2008 by Taylor & Francis Group, LLC
Chapter 3: Linear Programming and the Simplex Algorithm
59
Also, the columns of the non-basic vectors in the dual are found in the rows of the primal with negative signs. It is sometimes easier to solve the dual problem rather than the primal. If a linear programming problem contains many constraints and only a few variables, then the dual to this problem contains only a few constraints and many variables. In this case, it is much easier to solve the dual problem, which has a smaller basis matrix. For this reason, for all the algorithms of the L1 and Chebyshev solutions of overdetermined linear equations in this book, we found that solving the dual forms of the linear programming problems are easier than solving the primal forms. Moreover, a knowledge of the properties of the dual problems also leads to a much better understanding of all aspects of linear programming theory. It led, for example, to the discovery of the dual simplex algorithm and to the primal dual algorithm. See for example, Hadley [7]. 3.6.1
Fundamental properties of the dual problems
Theorem 3.1 The dual of the dual is the primal. Theorem 3.2 If x is any feasible solution to (3.6.1) and w is any feasible solution to (3.6.2), then (c, x) ≤ (b, w), that is, z ≤ Ζ Theorem 3.3 If x is a feasible solution to (3.6.1) and w is a feasible solution to (3.6.2) such that (c, x) = (b, w) then x is an optimal solution to (3.6.1) and w is an optimal solution to (3.6.2). Theorem 3.4 If one of the set of problems (3.6.1) and (3.6.2) has an optimal solution, then the other also has an optimal solution. © 2008 by Taylor & Francis Group, LLC
60
Numerical Linear Approximation in C
Theorem 3.5 If the maximum value of z is unbounded, then the dual has no feasible solution. Note 3.1 The converse of Theorem 5.5 is not true; that is, if the dual has no feasible solution, this does not imply that maximum z is unbounded. Neither problem has a feasible solution. 3.6.2
Dual problems with mixed constraints
The primal problem (3.6.1) has all its constraints in (3.6.1b) with the same ≤ sign. Consequently all the constraints in the dual in (3.6.2b) have the ≥ signs. However, a primal problem may have both kinds of inequality signs and also may have some equality signs. We first observe that an ≥ sign in the primal is reversed by multiplying the whole inequality by –1. We also observe that a constraint with an equality sign Σaijxj = bi may be replaced by the two inequalities (3.6.5a)
Σaijxj ≤ bi Σaijxj ≥ bi
When the last inequality is multiplied by –1, it is converted to (3.6.5b)
–Σaijxj ≤ –bi
In other words, a constraint with an equality sign is replaced by two constraints each having ≤ signs. Therefore, a programming problem with mixed constraints may always be converted to the constraints with ≤ signs. Theorem 3.6 An equality constraint in the primal corresponds to an unrestricted (in sign) variable in the dual. Proof: An unrestricted (in sign) variable say w, may be written as the difference of two non-negative variables © 2008 by Taylor & Francis Group, LLC
Chapter 3: Linear Programming and the Simplex Algorithm
(3.6.6)
61
w = w1 – w2, where w1 ≥ 0 and w2 ≥ 0
On the other hand, an equality constraint in the primal, may be replaced by the two inequalities (3.6.5a) and (3.6.5b), which yield two variables w1 and w2 in the dual. The dual column and the dual price corresponding to w1 are the negative to those of w2. Thus w1 and w2 may be replaced by one unrestricted variable w, as in (3.6.6). We conclude that the column corresponding to an unrestricted variable is the one obtained had the original equation in the primal was not replaced by two inequalities. 3.6.3
The dual simplex algorithm
It is observed in the simplex algorithm that any basic feasible solution with all the (zi – ci) ≥ 0 is an optimal solution. It is also observed that the zi are completely independent of the requirement vector bB. These observations present an interesting alternative to the method of solution. We may start the simplex tableau in the dual with a basic, but not feasible solution (not all the elements of bB are non-negative) to the linear programming problem which has all the (zi – ci) ≥ 0. If we then move from this basic solution to another by changing one basic vector at a time in such a way that we keep all the (zi – ci) ≥ 0, an optimal solution would be obtained in a finite number of steps. This constitutes the idea of the dual simplex algorithm. The dual simplex algorithm does not have the general applicability of the simplex algorithm because it is not always easy to start with all the (zi – ci) ≥ 0. The algorithm is given this name since the criterion for changing the basis is that for the dual, not for the primal problem. In the dual simplex algorithm, one first determines the vector that leaves the basis and then the vector that enters the basis. This is the reverse of what is done in the simplex algorithm. The criteria for changing the basis are as follows: (a) For the vector to remove from the basis, choose bBr = mini(bBi), bBi < 0 (b)
Column ar is removed from the basis and xBr is driven to 0. For the vector to enter the basis, choose
© 2008 by Taylor & Francis Group, LLC
62
Numerical Linear Approximation in C
θ = (zk – ck)/yrk = maxj((zj – cj)/yrj), yrj < 0 The increase in z is z = bBr(zk – ck)/yrk. This algorithm is particularly useful in solving bounded linear programming problems, as it eliminates the necessity for the introduction of the artificial variables. A bounded linear programming problem has the elements of its vector x bounded between upper and lower bounds. Instead of x ≥ 0 in (3.6.1c), we have di ≥ xi ≥ 0, i = 1, 2, ..., n where the (di) are given constants. A numerical example is solved in detail in Chapter 5; Example 5.1, for a bounded linear programming problem using the dual simplex algorithm. 3.7 3.7.1
Degeneracy in linear programming and its resolution Degeneracy in the simplex method
It is noted that in the simplex method, if one or more of the elements of the basic solution bB is 0, then the solution is known to be degenerate. Degeneracy may occur in the initial basic feasible solution, or it may occur in the course of the solution. The initial basic feasible solution is degenerate if and only if one or more of the elements in the initial bB column is 0. Degeneracy occurs in the course of the solution if there are two or more values of i which have the same θmin = mini(xi/yij), yij > 0 In this case, in view of (3.3.17), two or more of the nonzero xBi of the current solution are reduced to zero for the next iteration, while only one of the xi that is zero in the current solution becomes positive for the next iteration. When degeneracy occurs, the value of θmin for the iteration that follows will certainly be 0 and in view of (3.3.18), the new solution z(2) = z(1); that is, the objective function does not increase. Moreover, as noted earlier, this degeneracy may persist for several successive iterations and there is a possibility of cycling; that is, a set of bases may be introduced again every few iterations. Hadley ([7], p. 190) © 2008 by Taylor & Francis Group, LLC
Chapter 3: Linear Programming and the Simplex Algorithm
63
presents an (artificial) example in which cycling occurred. In that example there were two equal θmin and the wrong basis vector was removed. Hadley however states that no actual problem has ever cycled. Resolution of degeneracy is done, in effect, by perturbing the elements of the bB column as explained next. In the next two sections we deal with degeneracy in the simplex method. In the third section we mention two cases where degeneracy was resolved in the dual simplex algorithm. 3.7.2
Avoiding initial degeneracy in the simplex algorithm
Assume that k elements in the initial bB column are 0’s. Select a small number ε such that kε << 1. Replace the first zero element in bB by ε, the second zero element by 2ε, … and the kth zero element in bB by kε. At the end of the problem, if necessary let ε → 0. This would resolve the problem of initial degeneracy. 3.7.3
Resolving degeneracy resulting from equal θmin
We replace the set of variables xi by another set xi', as follows x1' = x1 – ε1 x2' = x2 – ε2 … … … xN' = xN – εN where ε1, ε2, …, εN are very small positive numbers and that ε1 ≠ ε2 ≠ … ≠ εN We might instead make this procedure systematic by taking ε1 = ε, ε2 = ε2, …, εN = εN and define the variables xj' = xj – εj, j = 1, 2, …, N The ith constraint now becomes bi = ai1x1' + ai2x2' + … + aiNxN' or bi = ai1(x1 – ε) + ai2(x2 – ε2) + … + aiN(xN – εN)
© 2008 by Taylor & Francis Group, LLC
64
Numerical Linear Approximation in C
Or in terms of the original variables, the new variables bi' are bi' = bi + ai1ε + ai2ε2 + … + ainεn This means that the perturbation, in effect, is to the bB column and the degeneracy is resolved. In practice, it is not necessary to follow this procedure literally. It is observed that the biggest change in the elements bBi is caused by the ε term in (3.7.1), as the higher orders of ε have negligible effects. We can thus predict how the tie in θmin will be broken without actually calculating bBi'. 3.7.4
Resolving degeneracy in the dual simplex method
In the dual simplex algorithm, degeneracy occurs when one or more of the marginal costs is zero. A method for resolving this kind of degeneracy is described in Section 5.6. This method is also used in Section 21.3 for resolving the same kind of degeneracy. 3.8
Linear programming and linear approximation
Wagner [13] was the first to successfully formulate both the linear L1 and the Chebyshev approximation problems as linear programming problems in both the primal and dual forms. Given, as usual, is the overdetermined system of linear equations Ca = f C = (cij) is a given real n by m matrix of rank k, k ≤ m < n and f = (fi) is a given real n-vector. The residual vector r is r = Ca – f Its ith element, ri is given by m
ri =
∑ cij aj – fi ,
i = 1, 2, …, n
j=1
3.8.1
Linear programming and the L1 approximation
The L1 approximation problem is defined as to find the solution vector a for the equation Ca = f such that the L1 norm of the residual r © 2008 by Taylor & Francis Group, LLC
Chapter 3: Linear Programming and the Simplex Algorithm
65
be as small as possible. That is n
∑
minimize Z =
ri
i=1
Since the residuals (ri) are unrestricted in sign; that is, ri may be >, = or < 0, i = 1, 2, …, n, we write ri = ui – vi Hence |ri| = ui + vi (ui) and (vi) are the elements of the n-vectors u and v respectively and that ui > 0 and vi > 0, i = 1, 2, …, n The primal form of the linear programming problem for the L1 approximation problem is now n
(3.8.1a)
minimize Z =
∑
n
ui +
i=1
∑ vi i=1
subject to the constraints r = u – v = Ca – f or Ca – u + v = f (3.8.1b)
ai unrestricted in sign, i = 1, 2, …, m
(3.8.1c)
ui ≥ 0, vi ≥ 0, i = 1, 2, …, n
From Section 3.6, the dual form to problem (3.8.1) is given by (3.8.2a)
maximize z = fTw
subject to C Tw = 0
(3.8.2b)
© 2008 by Taylor & Francis Group, LLC
wi ≤ 1,
i = 1, 2, …, n
wi ≥ –1,
i = 1, 2, …, n
66
Numerical Linear Approximation in C
where CT is the transpose of C and the n-vector w = (wi). The last two sets of constraints reduce to the constraints –1 ≤ wi ≤ 1, i = 1, 2, …, n
(3.8.2c)
Problem (3.8.2) is a linear programming problem with bounded variables (wi). That is, by defining bi = wi + 1, i = 1, 2, …, n, we get the following formulation of the problem maximize z = fT(b – e)
(3.8.3a)
subject to the constraints (3.8.3b)
CTb = CTe
(3.8.3c)
0 ≤ bi ≤ 2, i = 1, 2, …, n
e is an n-vector of 1’s and the elements of the vector b = (bi) are bounded. Problem (3.8.3) is a linear programming problem with non-negative bounded variables [7]. In Chapter 5, this problem is solved by the dual simplex algorithm. 3.8.2
Linear programming and Chebyshev approximation
The Chebyshev approximation problem is defined as to find the solution vector a for the equation Ca = f such that the L∞ norm of the residual r be as small as possible. Let z = maxi|ri|, i = 1, 2, …, n Let h ≥ 0, be the value z = maxi|ri|. Hence, since ri is unrestricted in sign, i = 1, 2, …, n, the primal form of the linear programming problem is minimize h subject to
ri ≤ h ri ≥ –h
The above two inequalities reduce to m
–h ≤
∑ cij aj – fi ≤ h , j=1
© 2008 by Taylor & Francis Group, LLC
i = 1, 2, …, n
Chapter 3: Linear Programming and the Simplex Algorithm
67
In vector-matrix form, these inequalities become Ca + he ≥ f –Ca + he ≥ –f h ≥ 0 and aj, j = 1, 2, …, m, unrestricted in sign. From Section 3.6, the dual form of this formulation is (3.8.4a)
maximize z = [fT –fT]b
subject to T
T
(3.8.4b)
C –C b = e m+1 T T e e
(3.8.4c)
bi > 0, i = 1, 2, …, 2n
em+1 is an (m + 1)-vector, which is the last column in an (m + 1)-unit matrix. The 2n-vector b = (bi). In Chapter 10, problem (3.8.4) is solved by the simplex method. Relevant linear programming formulations are also made for other problems in this book, such as the one-sided and the bounded L1 and Chebyshev solutions of overdetermined linear equations and the formulations for the solutions of the underdetermined linear equations of Chapters 20-23. 3.9
Stability of the solution in linear programming
In practice, it is common to solve linear programming problems that have several hundred equations. Consequently, round-off error plays a significant role in the accuracy of the result. The basis matrix for a simplex tableau differs from that of a preceding tableau in only one column. The inverse of the basis matrix is updated at each simplex step by the Gauss-Jordan method rather than by inverting a new matrix. While this generally gives satisfactory results, it is susceptible to round-off error in two respects. First, if the basis matrix is continually updated, computational errors are propagated in the problem from step to step. Second, because the Gauss-Jordan method is applied without pivoting, inaccurate results are obtained when small pivots © 2008 by Taylor & Francis Group, LLC
68
Numerical Linear Approximation in C
are met. Wolfe [15] described some methods of round-off control. This include (1) conditioning of the linear programming problem and (2) precautions to observe in using the simplex algorithm. Pierre [10] described in detail some scaling techniques applied to the linear programming problem before the simplex algorithm is applied. This process has the effect of conditioning the problem; that is, by minimizing the condition number of the basis matrix. In this respect, see also Noble ([9], Section 13.5). Clasen [5] gave a criterion by which tolerances in linear programming problems are calculated. Any number that is smaller in absolute value than a specified tolerance is considered as round-off error and is replaced by 0. Storoy [12] described a simple method for error control in the simplex algorithm. He also presented a method for improving the obtained solutions. Bartels [1] discussed the stability of Gauss-Jordan elimination method by a round-off error analysis. He then implemented the simplex method based on the Hessenberg LU decomposition of the basis matrices. In order to be able to select large pivots, Bartels and Golub [2] implemented the LU factorization method with row interchanges of the basis matrix. Additional accuracy is obtained by iteratively refining the optimal solution, using a technique due to Wilkinson [14]. Bartels, Golub and Saunders [3] described computational methods for updating the inverse of the basis matrix by both the LU decomposition method and by the orthogonalization method of Householder’s transformation. Bartels, Stoer and Zenger [4] used a triangular decomposition method for the basis matrix, which ensures numerical stability of the results for ill-conditioned problems. A variation of this technique has been used in the algorithm for calculating the L1 approximation problem in Chapter 5 and another variation has been used in the algorithm for the restricted Chebyshev approximation problem in Chapter 13.
© 2008 by Taylor & Francis Group, LLC
Chapter 3: Linear Programming and the Simplex Algorithm
69
References 1. 2. 3.
4.
5. 6. 7. 8. 9. 10. 11. 12.
Bartels, R.H., A stabilization of the simplex method, Numerische Mathematik, 16(1971)414-434. Bartels, R.H. and Golub, G.H., The simplex method of linear programming using LU decomposition, Communications of ACM, 12(1969)266-268. Bartels, R.H., Golub, G.H. and Saunders, M.A., Numerical techniques in mathematical programming, Nonlinear Programming, Rosen, J.B., Mangasarian, O.L. and Ritter, K. (eds.), Academic Press, New York, pp. 123-176, 1970. Bartels, R.H, Stoer, J. and Zenger, Ch., A realization of the simplex method based on triangular decomposition, Handbook for Automatic Computation, Vol. II: Linear Algebra, Wilkinson, J.H. and Reinsch, C. (eds.), Springer-Verlag, New York, pp. 152-190, 1971. Clasen, R.J., Techniques for automatic tolerance control in linear programming, Communications of ACM, 9(1966)802803. Faigle, U., Kern, W. and Still, G., Algorithmic Principles of Mathematical Programming, Kluwer Academic Publishers, London, 2002. Hadley, G., Linear Programming, Addison-Wesley, Reading, MA, 1962. Ignizio, J.P. and Cavalier, T.M., Linear Programming, Prentice Hall, Englewood Cliffs, NJ, 1993. Noble, B., Applied Linear Algebra, Prentice-Hall, Englewood Cliffs, NJ, 1969. Pierre, D.A., Optimization Theory with Applications, John Wiley & Sons, New York, 1969. Sierksma, G., Linear and Integer Programming, Theory and Practice, Second Edition, Marcel Dekker Inc., New York, 2002. Storoy, S., Error control in the simplex technique, BIT, 7(1967)216-225.
© 2008 by Taylor & Francis Group, LLC
70
13. 14. 15.
Numerical Linear Approximation in C
Wagner, H.M., Linear programming techniques for regression analysis, Journal of American Statistical Association, 54(1959)206-212. Wilkinson, J.H., Rounding Errors in Algebraic Processes, Prentice-Hall, Englewood Cliffs, NJ, 1963. Wolfe, P., Error in the solution of linear programming problems, Proceedings of a symposium conducted by the MRC and the University of Wisconsin, Vol. 2, Rall, L.B., (ed.), John Wiley, New York, pp. 271-284, 1966.
© 2008 by Taylor & Francis Group, LLC
71
Chapter 4 Efficient Solutions of Linear Equations
4.1
Introduction
This is another tutorial chapter. It deals with the solution of real non-singular systems of linear equations and inversion of matrices. This chapter is intended to be an introduction to Chapter 17, on the least squares problem and the pseudo-inversion of matrices. It is required to solve the system of linear equations Ax = b A = (aij) is an n by n real non-singular matrix, b = (bi) an n-real vector and x = (xj) is the solution n-vector. We assume that matrix A is of a reasonable size, and that it is not sparse. We start in Section 4.2 with the familiar subject of vector and matrix norms and some relevant theorems. A norm for either a vector or a matrix gives an assessment to the size of the vector or the matrix. In Section 4.3, elementary matrices, which are used to perform elementary operations on a matrix equation, are introduced. In Sections 4.4 and 4.5, two of the direct methods for solving linear equations, which are among the most efficient known methods are described. These are the Gauss LU factorization method with complete pivoting and the Householder's QR factorization method with pivoting. These two methods have been studied extensively by many authors. However, we adopt here the approach of Wilkinson [16]. The inverse of a square non-singular matrix is calculated. In Section 4.6, a note on the Gauss-Jordan elimination method for a set of underdetermined system of linear equations is given. Gauss-Jordan method is the key elimination method for updating simplex tableaux in linear programming. We end this chapter with
© 2008 by Taylor & Francis Group, LLC
72
Numerical Linear Approximation in C
Section 4.7, where a presentation of rounding error analysis for simple and extended arithmetic operations is given. 4.2
Vector and matrix norms and relevant theorems
Given an n-dimensional vector x, real or complex, it is useful in mathematical analysis to have a single non-negative number which gives an assessment of the size of x. This number is known as the norm of x and is denoted by ||x||. We already defined and used vector norms in Chapter 2. We elaborate here on this subject. 4.2.1
Vector norms
Let vector x = (x1, x2, …, xn)T. The norm of x plays the same role as the modulus in the case of a complex number. It satisfies (i) ||x|| > 0, unless x = 0; ||x|| = 0 implies x = 0, (ii) ||cx|| = |c| ||x||, c is a complex scalar, and (iii) ||x + y|| ≤ ||x|| + ||y|| (known as the triangle inequality) and as a result, ||x – y|| ≥ ||x|| – ||y||. As described in Chapter 2, there are three vector norms in common use. They are derived from the p or the Holder’s norm n
(4.2.1)
x p =
∑
xi p
1⁄p
, 1≤p≤∞
i=1
Then for p = 1, the L1 norm of x is n
(4.2.2)
x
1
=
∑
xi
i=1
For p = 2, the L2 or the Euclidean vector norm of x is n
(4.2.3)
x
2
= sqrt
∑
xi
2
i=1
Again, for p = ∞, the L∞ or the Chebyshev vector norm of x is (4.2.4)
© 2008 by Taylor & Francis Group, LLC
||x||∞ = max|xi|, i = 1, 2, …, n
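These three norms are computed in a few lines of C; the sketch below is our illustration (function names are assumptions).

```c
#include <math.h>

double norm_l1(const double *x, int n)      /* (4.2.2) */
{
    double s = 0.0;
    for (int i = 0; i < n; i++) s += fabs(x[i]);
    return s;
}

double norm_l2(const double *x, int n)      /* (4.2.3) */
{
    double s = 0.0;
    for (int i = 0; i < n; i++) s += x[i] * x[i];
    return sqrt(s);
}

double norm_linf(const double *x, int n)    /* (4.2.4) */
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        if (fabs(x[i]) > s) s = fabs(x[i]);
    return s;
}
```

Note that the naive sum of squares in norm_l2() may overflow or underflow for extreme data; a scaled accumulation is preferable in production code.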
It is easy to show that these three norms are related by the following inequalities:

(1/sqrt(n))||x||1 ≤ ||x||2 ≤ ||x||1,    (1/n)||x||1 ≤ ||x||∞ ≤ ||x||1
||x||∞ ≤ ||x||1 ≤ n||x||∞,              ||x||∞ ≤ ||x||2 ≤ sqrt(n)||x||∞
(1/sqrt(n))||x||2 ≤ ||x||∞ ≤ ||x||2,    ||x||2 ≤ ||x||1 ≤ sqrt(n)||x||2

A useful inequality known as Schwartz's inequality is

|(x, y)| ≤ ||x||2 ||y||2

where (x, y) denotes the inner product of vectors x and y.

4.2.2 Matrix norms
Similarly, given an n by n real or complex matrix A = (aij), the norm of A satisfies the conditions (i) ||A|| > 0, unless A = 0; ||A|| = 0 implies A = 0, (ii) ||cA|| = |c| ||A||; c is a complex scalar, (iii) ||A + B|| ≤ ||A|| + ||B||; (A and B are of the same size), and (iv) ||AB|| ≤ ||A|| ||B||. There are matrix norms that are said to be natural norms, i.e., that are associated with (subordinate to or induced by) vector norms. Since for any vector norm, we expect that ||Ax|| ≤ ||A|| ||x||, the subordinate matrix norm associated with a vector norm is defined by ||A||p = max(||Ax||p/||x||p), x ≠ 0 or equally ||A||p = max||Ax||p, ||x||p = 1 4.2.3
Hermitian matrices and vectors
Definition 4.1 A square matrix A is known as a Hermitian matrix if AH = A, where AH is the complex conjugate transpose of A. This class of matrices includes symmetric matrices when the elements of A are real. The following are Hermitian matrices
© 2008 by Taylor & Francis Group, LLC
74
Numerical Linear Approximation in C
a b b d
and
a b + ci b – ci d
where a, b, c and d are real elements and i = sqrt(–1). Similarly let x be a column vector whose elements are complex. Then xH is a row vector whose elements are the complex conjugates of those of x. It follows that (xH)H = x, (AH)H = A and (AB)H = BHAH and xHx is real and positive, x ≠ 0. Ιn other words, Hermitian transposes have similar properties to those of ordinary transposes. Theorem 4.1 (i) (ii)
Let A be a Hermitian matrix. Then A has real eigenvalues AHA has real non-negative eigenvalues.
Proof: Let λ be an eigenvalue of a Hermitian matrix A and x the corresponding eigenvector Ax = λx Hence xHAx = λxHx Now xHx is real and positive, for x ≠ 0. Again, xHAx is scalar and thus real, since (xHAx)H = xHAHx = xHAx proving (i) that is λ real. The proof of (ii) follows since (4.2.5)
xHAHAx = (Ax, Ax) = xHAHAx = (Ax)H(Ax) = λ2xHx = σxHx
Let the eigenvalues of (AHA) be σi(AHA). Then sqrt(σi(AHA)) are known as the singular values of A and are denoted by si(A), i.e., si2(A) = sqrt(σi(AHA)), i = 1, 2, …, n © 2008 by Taylor & Francis Group, LLC
Chapter 4: Efficient Solutions of Linear Equations
75
Theorem 4.2 Let A be an n by n matrix. Corresponding to the given vector norms of Section 4.2.1, there are matrix norms ||A||p, for p =1, 2 and ∞, that are subordinate norms: (i) ||A||1 = maximum absolute column sum of A (ii) ||A||2 = maxi si(A) (maximum singular value of A) (iii) ||A||∞ = maximum absolute row sum of A. Proof: To prove (i), from (4.2.2) n
Ax
=
1
n
n
∑ ∑ aij xj i=1 j=1
n
∑∑
≤
a ij x j
i = 1j = 1
and by re-arranging the summations n
Ax
1
≤
∑ i=1
n
a ij
∑ j=1
n ⎛ ⎞⎛ n ⎞ x j ≤ ⎜ max j ∑ a ij ⎟ ⎜ ∑ x j ⎟ ⎝ ⎠ ⎝j = 1 ⎠ i=1
Since the summation of the far right = ||x||1 n
A
1
= max ( Ax
1
⁄ x 1 ) ≤ max j ∑ a ij i=1
Suppose that the maximum sum is for column k. Choose xk = 1 and xi = 0, for i ≠ k. For this choice of x, equality is obtained. To prove (ii), pre-multiply matrix A by AH. Then (AHA), being Hermitian, has an orthogonal set of n eigenvectors {yi}, each associated with a non-negative eigenvalue (Theorem 4.1 (ii)). Let x be written as a linear combination of (yi), i.e., n
x =
∑ cj yj j=1
where the cj are constants. Then from (4.2.5) ||Ax||22 / ||x||22 = (xHAHAx) / (x, x)
© 2008 by Taylor & Francis Group, LLC
76
Numerical Linear Approximation in C
n
=
∑
2 2 cj sj ( A )
n
⁄
∑
cj
2
j=1
j=1
or ||Ax||2 / ||x||2 ≤ smax(A) Moreover, by choosing x = yi associated with σmax(AHA), an equality is obtained. To prove (iii), from (4.2.4) n
A
∞
= max i
∑
n
n
j=1
j=1
a ij x j ≤ max i ∑ a ij x j ≤ max i ∑ a ij max x j
j=1
Hence n
A
∞
= max Ax
∞
⁄ x
∞
≤ max i ∑ a ij j=1
Again, assume that i = k gives the maximum sum on the right. Construct a vector x with xj = 1 if akj > 0 and xj = –1 if akj < 0. For this x, an equality is attained. The theorem is thus proved. As a result of this theorem ||I|| = 1 where I is an n-unit matrix and the norm is any natural norm. 4.2.4
Other matrix norms
There are other matrix norms [9, 11] that satisfy the four conditions of Section 4.2.2 and are defined by A
q
=
∑ aij
q 1⁄q
, 1
i,j
For the case q = 2, the norm is known as the Schur or the Euclidean matrix norm. It is easily calculated and hence often used
© 2008 by Taylor & Francis Group, LLC
Chapter 4: Efficient Solutions of Linear Equations
A
E
=
∑
a ij
77
2 1⁄2
i,j
This matrix norm is consistent with the Euclidean vector norm ||x||2 but it is not subordinate to any vector norm, since for an n-unit matrix I, ||I||E = sqrt(n). 4.2.5
Euclidean and the spectral matrix norms
An important property shared by both the Euclidean and spectral matrix norms is given by the following. Theorem 4.3 The Euclidean matrix norm and the 2 (the spectral) matrix norm have invariant properties under unitary transformation. The same is true for the 2-vector norm. For unitary matrices U and V, we have ||x||2 = ||Ux||2 ||A|| = ||UAV|| = ||UA|| = ||AV|| (U is said to be unitary if UHU = UUH = I; that is, UH = U–1. A real unitary matrix is called orthonormal). Proof: To prove the first part of the theorem, since U is unitary, 2 = ||I||2 = 1
||UHU||
||Ux||22 = (Ux)H(Ux) = xHUHUx = xHx = ||x||22 To prove the second part of the theorem, we have for example ||UA||22 = ||(UA)HUA||2 = ||AHUHUA||2 = ||AHA||2 = ||A||22 For the Euclidean matrix norm, again consider for example UA, then ||UA||E = ||A||E The proof follows from the fact that the Euclidean length of each column of UA equals the Euclidean length of that column and the theorem is proved.
© 2008 by Taylor & Francis Group, LLC
78
4.2.6
Numerical Linear Approximation in C
Euclidean norm and the singular values
Definition 4.2 The trace of an n square matrix A, denoted by tr(A), is the sum of its diagonal elements. For two square matrices A and B, tr(AB) = tr(BA). As a result: (i) tr(A) = the sum of the eigenvalues of A. (ii) The sum of the eigenvalues of AB = sum of the eigenvalues of BA. In fact, AB and BA have identical eigenvalues. See also Theorem 4.10. (iii) ||A||E2 = tr(AHA). Since ||A||E2 equals the trace of AHA which equals the sum of the eigenvalues of (AHA), Σi σi(AHA), and again, since σi(AHA) = si2(A), we have A
2 E
n
=
2
∑ si ( A ) i=1
Then since ||A||2 = s1(A), the largest singular value of A, we have the relations ||A||2 ≤ ||A||E ≤ sqrt(n)||A||2
(4.2.6) Theorem 4.4
Let ||A||α and ||A||β be any two matrix norms. Then there exist positive numbers a and b such that a ≤ ||A||α / ||A||β ≤ b Assume that A is an n by n matrix. The following relations between matrix norms exist (1/sqrt(n))||A||E≤||A||∞≤sqrt(n)||A||E (1/sqrt(n))||A||E≤||A||2≤||A||E, (1/sqrt(n))||A||E≤||A||1≤sqrt(n)||A||E, (1/sqrt(n))||A||2≤||A||1≤sqrt(n)||A||2 (1/sqrt(n))||A||2≤||A||∞ ≤sqrt(n)||A||2,
(1/n)||A||∞≤||A||1≤n||A||∞
Theorem 4.5 ([13], p. 133) ||AB||E ≤ ||A||E ||B||2 and ||AB||E ≤ ||A||2 ||B||E © 2008 by Taylor & Francis Group, LLC
Chapter 4: Efficient Solutions of Linear Equations
79
Theorem 4.6 For any matrix norm, we have || I || ≥ 1, ||A–1|| ≥ 1/||A||, ||Ak|| ≤ ||A||k 4.2.7
Eigenvalues and the singular values of the sum and the product of two matrices
Theorem 4.7: Wielandt-Hoffman theorem ([16], p. 104) Let C = A + B, where A, B and C are real symmetric n-matrices having eigenvalues αi, βi and γi respectively, arranged in a non-increasing order; that is, α1 ≥ α2 ≥ ...≥ αn, etc. Then n
∑ ( γi – αi ) i=1
2
≤ B
2 E
n
=
2
∑ βi i=1
This theorem relates the perturbations in the eigenvalues to the Euclidean norm of the perturbation matrix B. Theorem 4.8 Let C = A + B, where A, B and C are real symmetric n-matrices having singular values si(A), si(B) and si(C) respectively, arranged in a non-decreasing order; α1 ≤ α2 ≤ … ≤ αn. Then si(A) – s1(B) ≤ si(C) ≤ si(A) + sn(B), i = 1, 2, …, n In particular, since ||B||2 = sn(B) and since from (4.2.6), ||B||2 ≤ ||B||E si(C) ≤ si(A) + ||B||E, i = 1, 2, …, n Definitions 4.3 Given an n by m matrix A, then ([11], p. 25) (i) A k(≤n) by s(≤m) submatrix of A is obtained from A by deleting certain rows and columns from A. (ii) A principal submatrix of A is a submatrix, the diagonal elements of which are diagonal elements of A. (iii) A leading submatrix of A is a submatrix (not necessary a square one) that lies in the upper left hand corner of A. (vi) A leading principal submatrix of A is a square submatrix that lies in the upper left hand corner of A. © 2008 by Taylor & Francis Group, LLC
80
Numerical Linear Approximation in C
Theorem 4.9 ([11], p. 317) Let A be a real symmetric n-matrix and B be a principal (n – 1)-submatrix of A. Let (αi) be the eigenvalues of A and (βi) be the eigenvalues of B, arranged in a non-decreasing order. Then α1 ≤ β1 ≤ α2 ≤ β2 … ≤ αn–1 ≤ βn–1 ≤ αn Theorem 4.10 ([16], p. 54) (i)
Let A and B be square real n-matrices. Then the eigenvalues of AB and of BA are identical, Let A and B be an n by m and an m by n real martices respectively, n ≥ m. Then AB and BA have m identical eigenvalues and the extra (n – m) eigenvalues of AB are 0’s.
(ii)
4.2.8
Accuracy of the solution of linear equations
Theorem 4.11 Assume that ||dA|| < 1, where dA is a perturbation to matrix A. Then (I + dA) is non-singular and –1 1 1 ------------------------- ≤ ( I + dA ) ≤ ------------------------( 1 + dA ) ( 1 – dA )
Proof: Let us write I = (I + dA)(I + dA)–1
(4.2.7) Then by taking norms
1 ≤ ||(I + dA)|| ||(I + dA)–1|| ≤ (1 + ||dA||)||(I + dA)–1|| which proves the left inequality of the theorem. To prove the right inequality, write (4.2.7) in the form (I + dA)–1 = I – dA(I + dA)–1 By taking norms, we get ||(I + dA)–1|| ≤ 1 + ||dA(I + dA)–1|| ≤ 1 + ||dA|| ||(I + dA)–1|| By dividing by ||(I + dA)–1|| and arranging the terms, we get the right inequality of the theorem. © 2008 by Taylor & Francis Group, LLC
Chapter 4: Efficient Solutions of Linear Equations
81
Suppose that in a certain problem a small change dα in the data α causes a change dx in the result x. If it is possible to write |dx/x| = K|dα/α| then K is called the condition number for the relative change in x caused by a relative change in α. Let us consider the matrix equation Ax = b If vector b is perturbed by db and matrix A is perturbed by dA, the solution has to be given by (4.2.8)
(A + dA)(x + dx) = (b + db)
Theorem 4.12 Assume that A is a square non-singular matrix and that ||dA|| < 1. Then if x ≠ 0
||A–1||
dx - ≤ CK ( A ) ----------dA - + ---------db ---------x A b where C = 1 / [1 – K(A)||dA||/||A||] and K(A) is the condition number of A. The spectral condition number of A is given by K(A) = ||A||2 ||A–1||2 See also Section 17.5.1. Proof: Subtract Ax = b, from (4.2.8) (4.2.9)
(A + dA)dx = db – dA x
Then dx = (A + dA)–1(db – dA x) = (I + A–1dA)–1A–1(db – dA x) and by taking norms ||dx|| ≤ ||(I + A–1dA)–1|| ||A–1||(||db|| + ||dA|| ||x||) By using the right inequality of Theorem 4.11, after replacing dA A–1dA and since ||A–1|| ||dA|| < 1
© 2008 by Taylor & Francis Group, LLC
82
Numerical Linear Approximation in C
–1
( I + A dA )
–1
1 1 ≤ ----------------------------------≤ --------------------------------------–1 –1 ( 1 – A dA ) ( 1 – A dA )
Since ||A–1|| ||dA|| = K(A)||dA||/||A||, we get ||(I + A–1dA)–1|| ≤ C where C is defined above. Then by dividing by ||x|| and noting that ||b|| ≤ ||A|| ||x||, the theorem is proved. Again, since in the denominator of C, the quantity K(A)||dA||/||A|| = ||A–1|| ||dA||, if ||A–1|| ||dA|| << 1, C ≈ 1 and we get dx - ≤ K ( A ) ----------dA - + ---------db ---------x A b
(4.2.10)
If db = 0, we get from (4.2.9) dx = –(I + A–1dA)–1A–1dA x Then by taking norms and using the right inequality of Theorem 4.11, after replacing dA by A–1dA ||dx|| ≤ ||(I + A–1dA)–1|| ||A–1dA|| ||x|| and –1
dx - ≤ -----------------------------A dA ---------–1 x 1 – A dA This inequality is used in estimating the relative error in the solution vector x, due to rounding error in some matrix computations, using backward error analysis. It is itself inequality (4.7.12) below. Note 4.1 It is noted that if the set of equations Ax = b is ill-conditioned, resulting in a large relative error ||dx||/||x||, we expect K(A) in (4.2.10) to be large. Yet, the converse is not necessarily true. A large K(A) does not necessarily imply that Ax = b is ill-conditioned. The reason for this is that Theorem 4.12 gives an upper bound for the ratio ||dx||/||x|| and this upper bound may not be attained.
© 2008 by Taylor & Francis Group, LLC
Chapter 4: Efficient Solutions of Linear Equations
4.3
83
Elementary matrices
Elementary operations to a given matrix equation involving matrix A are mostly based on pre-multiplying and/or post-multiplying A progressively by one or more of the matrices known as elementary matrices. We describe here 3 of the elementary matrices. (a) Diagonal matrix: A diagonal matrix D = diag(α, β, γ, …) is an elementary matrix. Pre-multiplying matrix A by D results in multiplying each element of the first row of A by α and each element of the second row of A by β, …, etc. Post-multiplying A by D results in multiplying each element of the first column of A by α and each element of the second column of A by β, …, etc. (b) Permutation matrix: A permutation matrix Pij is an n-unit matrix I, except that its i and j columns are interchanged. Pre-multiplying A by Pij results in the interchange of rows i and j of A. Post-multiplying A by Pij results in the interchange of columns i and j of A. We note that Pij = PijT = Pij–1 and PijPij = I. (c) A matrix Mi, which is an n-unit matrix I, except for its column i which has the form (for the case n = 5 and i = 2) (m12, 1, m32, m42, m52)T is another elementary matrix. Pre-multiplying A by M2 results in a matrix B, B = M2A. The second row of B equals the second row of A. But the first row of B is the first row of A + m12 times the second row of A. The third row of B is the third row of A + m32 times the second row of A, …, etc. Post-multiplying A by M2T, where the superscript T refers to the transpose, gives a similar result with the columns of A instead. Note 4.2 It is noted that the elementary matrices described above are never stored in the computer. They are described here in order to elucidate the operations that equation Ax = b undergoes. In particular, the operations of the permutation matrices are recorded by index vectors. For example, an index vector IR(j) = (1, 2, 5, 4, 3) indicates that rows 3 and 5 of matrix A have © 2008 by Taylor & Francis Group, LLC
84
Numerical Linear Approximation in C
been interchanged. Note 4.3 Elementary operations on matrices are performed on matrices of any size, not only on square matrices (Chapter 17). 4.4
Gauss LU factorization with complete pivoting
Given a system of linear equations Ax = b, the Gauss elimination method or the Gauss LU factorization is described as follows. The method applies elementary operations successively to matrix A and vector b, which finally reduce matrix A to an upper triangular matrix U while vector b becomes vector L–1b, where L is a unit lower triangular matrix (its diagonal elements are 1’s). Gauss LU factorization without pivoting is as follows. Let A = A(1) and b = b(1). Then (i) Add multiples of the first row of equation A(1)x = b(1) to each of the second, third, …, nth rows of equation A(1)x = b(1) so as to eliminate x1 from such equations. This results in the equation A(2)x = b(2). (ii) Add multiples of the second row of the new equation A(2)x = b(2) to each of the new third, forth, …, etc. rows of A(2)x = b(2), so as to eliminate x2 from such equations. This results in the equation A(3)x = b(3). (iii) Continue this process until an upper triangular set of equations is obtained as shown (for n = 4)
(4.4.1)
a11(1)x1 + a12(1)x2 + a13(1)x3 + a14(1)x4 a22(2)x2 + a23(2)x3 + a24(2)x4 a33(3)x3 + a34(3)x4 a44(4)x4
= = = =
b1(1) b2(2) b3(3) b4(4)
In (4.4.1) the a1j(1) are the elements of the first row of A(1) and a2j(2) are the elements of the second row of A(2) and so on. Again, b1(1) is the first element of b(1) and so on. The solution is obtained by back substitution in the triangular system (4.4.1) starting with x4. The first element a11(1) of matrix A(1) is the pivot element in process (i) above. Likewise the element a22(2) of matrix A(2) is the pivot element in process (ii) above and so on. We now show that this method gives the factorization © 2008 by Taylor & Francis Group, LLC
Chapter 4: Efficient Solutions of Linear Equations
85
A = LU As indicated above, U is the upper triangular matrix appearing on the l.h.s. of (4.4.1) and L–1b is the vector appearing on the r.h.s. of (4.4.1). 4.4.1
Importance of pivoting
In the Gauss elimination method, complete pivoting is achieved by means of interchanging the rows of the system of equations A(i)x = b(i), and/or the columns of the matrices A(i), where i = 1, 2, …, n – 1 such that the pivot element a11(1) is the largest element in absolute value in the whole of matrix A(1), while a22(2) is the largest element in absolute value among the current aij(2) of A(2), with i, j ≥ 2, …, and so on. Partial pivoting is achieved by interchanging the columns of A(1), A(2), … such that a11(1) is the largest element in absolute value in first row of A(1), a22(2) is the largest element in absolute value among the current a2j(2) of A(2), with j ≥ 2…, and so on. By doing this, we avoid pivots that are small in absolute value. Otherwise, due to the finite precision of the computer word, inaccurate or even incorrect results are obtained. Example 4.1 This is a well-known classical example [3] which illustrates the importance of pivoting in the Gauss elimination process. Assume that all the results are rounded to four decimal places. Let us solve the system of two equations (4.4.2)
0.0001x1 + 1.00x2 = 1.00 1.00x1 + 1.00x2 = 2.00
The true result rounded to 4 decimals is x1 = 10000/9999 = 1.0001 and x2 = 0.9999 Yet, Gauss elimination without partial pivoting (without exchanging columns) is done by using the first equation in (4.4.2) to eliminate x1 from the second equation. That is by subtracting 10000 times the first equation from the second equation and rounding the result to 4 decimal places, which gives
© 2008 by Taylor & Francis Group, LLC
86
Numerical Linear Approximation in C
0.0001x1 + 1.00x2 = 1.00 –9999x2 = –9998 from which x2 = 1.0 and x1 = 0.0 (wrong result). On the other hand, with pivoting (by first exchanging the two columns on the l.h.s. of (4.4.2)), and subtracting the first equation from the second, we get the triangular system 1.00x2 + 1.00x1 = 1.00 (1.0 – 0.0001)x1 = 1.00 giving x1 = 1.00/0.9999 = 1.0001 and x2 = 0.9999 (correct result). 4.4.2
Using complete pivoting
Let (A|b) denote the n by (n + 1) matrix with vector b situated to the right side of matrix A. The Gauss elimination process with complete pivoting consists of (n – 1) major steps in which matrix (A|b) = (A(1)|b(1)) is reduced successively to (A(2)|b(2)), (A(3)|b(3)), …, (A(n)|b(n)) as follows: (i) Choose the element of aij(1) of matrix A(1) of maximum absolute value among all the elements of A(1), as the pivot. Suppose this element is auv(1). Interchange rows 1 and u in the complete n by (n + 1) matrix (A(1)|b(1)) and interchange columns 1 and v of A(1). This may be done by pre-multiplying (A(1)|b(1)) by a permutation matrix S1 and post-multiplying A(1) by another permutation matrix P1. (ii) Calculate and record the (n – 1) multipliers mi1 mi1 = ai1(1)/a11(1), i = 2, 3, …, n
(iii)
where ai1(1) and a11(1) are the elements of the permuted matrix A(1). Calculate the matrix (A(2)|b(2)) as follows for i = 2, …, n aij(2) = aij(1) – mi1a1j(1),
j = 2, 3, …, n
bi(2) = bi(1) – mi1b1(1) We remark that (4.4.3) S1 = S1T = S1–1, P1 = P1T = P1–1, S1S1 = I, P1P1 = I © 2008 by Taylor & Francis Group, LLC
Chapter 4: Efficient Solutions of Linear Equations
87
Then in vector-matrix notation, step (i) is equivalent to changing equation A(1)x = b(1) to S1A(1)P1P1x = S1b(1) Or in terms of the permuted matrix A(1) and permuted vectors x and b(1), where A(1) = S1A(1)P1, x = P1x and b(1) = S1b(1) A(1)x = b(1) Step (iii) is equivalent to pre-multiplying (A(1)|b(1)) by matrix M1 in order to obtain matrix (A(2)|b(2)). M1 is a unit matrix, except for its first column, which is given by (1, –m21, –m31, …, –mn1)T where m21, m31, …, mn1 are given in step (ii) above. Hence, we get (A(2)|b(2)) = M1(A(1)|b(1)) Or in other words A(2) = M1S1A(1)P1 and b(2) = M1S1b(1) The second major operation is similar. Determine the element of maximum absolute value among the current aij(2) with i, j ≥ 2. Suppose this element is ars(2). Interchange rows 2 and r in the complete n by (n + 1) matrix (A(2)|b(2)) and columns 2 and s of A(2). Calculate the (n – 2) multipliers mi2 = ai2(2)/a22(2), i = 3, 4, …, n Again ai2(2) and a22(2) are the elements of the permuted matrix A(2). Calculate the elements of (A(3)|b(3)) in the same manner as before. This is equivalent to changing equation A(2)x = b(2) to S2A(2)P2P2x = S2b(2) Then we pre-multiply it by M2, where M2 is a unit matrix, except for its second column, which is given by (0, 1, –m32, –m42, …, –mn2)T The subsequent operations follow easily until we finally get Ux = A(n)x = b(n) where
© 2008 by Taylor & Francis Group, LLC
88
Numerical Linear Approximation in C
(4.4.4)
M(n–1)S(n–1) … M2S2M1S1AP1P2 … P(n–1) = U
By pre-multiplying (4.4.4) successively by M(n–1)–1, S(n–1), …, M2–1, S2, M1–1, S1, …, and making use of (4.4.3), we get (4.4.5) AP1P2 … P(n–1) = S1M1–1S2M2–1 … S(n-1)M(n–1)–1U It is not difficult to show that (a) Mi–1 is simply Mi with the signs of the off-diagonal elements reversed, and S1M1–1S2M2–1 … S(n–1)M(n–1)–1
(b)
= S1S2 … S(n–1)M1–1M2–1 … M(n–1)–1 Also, M1–1M2–1 … M(n–1)–1 = L, a unit lower triangular matrix, whose diagonal elements are 1’s and the sub-diagonal elements are the calculated mij elements. Hence
(c)
S1M1–1S2M2–1 … S(n–1)M(n–1)–1 = S1S2 … S(n–1)L We may thus write (4.4.5) in the form AP = SLU where S = S1S2 … S(n–1) pre-multiplying by S we get (4.4.6)
and
P = P1P2 … P(n–1)
and
by
A = SAP = LU
As a result of pivoting, the LU factorization in (4.4.6) is not for matrix A but for matrix A = SAP, which is the permuted matrix A. Equation Ax = b thus reduces to (4.4.7)
LUx = b
where x = Px and b = Sb The solution of (4.4.7) is obtained in two steps, i.e., by solving the two triangular systems, Ly = b and Ux = y, from which x is obtained. Finally, the solution of Ax = b is x = Px. 4.4.3
Pivoting and the rank of matrix A
We noted earlier that elementary operations may be performed on
© 2008 by Taylor & Francis Group, LLC
Chapter 4: Efficient Solutions of Linear Equations
89
any matrix systems, not only on square non-singular systems. Besides the importance of pivoting illustrated by Example 4.1, an equally important purpose of pivoting is to determine the rank of matrix A. Let us assume that complete pivoting has been used in deriving the triangular equations (4.4.1) from the given set of equations Ax = b. If rank(A) < n, we expect one or more rows of A to be linearly dependent on the other rows. However, after each step of the Gauss elimination method, such dependent rows will depend on one row less than in the previous step. Hence, if rank(A) = k < n, after step k in the Gauss elimination method, the set of equations (4.4.1) will instead have the form (for n = 4 and k = 2)
(4.4.8)
a11(1)x1 + a12(1)x2 + a13(1)x3 + a14(1)x4 a22(2)x2 + a23(2)x3 + a24(2)x4 0 + 0 0
= = = =
b1(1) b2(2) b3(3) b4(4)
In (4.4.8), if b3(2) and b4(2) are 0’s, then rank(A|b) = rank(A) = 2 and system (4.4.8), being an underdeterminerd system, has an infinte number of solutions But if b3(2) and/or b4(2 is not 0, rank(A|b) > rank(A), and (4.4.8) is inconsistent and has no solution. Theorem 4.13 ([10], Section 3.5) Let A be an n by n matrix and b be an n-vector. Then: System Ax = b has a solution if and only if rank(A|b) = rank(A). (ii) If rank(A|b) = rank(A) = n, the solution is unique. (iii) If rank(A|b) = rank(A) < n, the solution is not unique. (iv) If rank(A|b) > rank(A), system Ax = b is inconsistent and it has no solution. (i)
Example 4.2 A simple example of inconsistent set of two equations is 4x1 + 6x2 = 6 2x1 + 3x2 = 4 A Gauss elimination step produces 4x1 + 6x2 = 6 0 = 1
© 2008 by Taylor & Francis Group, LLC
90
Numerical Linear Approximation in C
The l.h.s. of the second equation is half the l.h.s. of the first equation. That is, rank(A) = 1, while rank(A|b) = 2. The two equations represent two parallel lines that will never intersect. This system is inconsistent and has no solution. The following is a special case of the previous theorem. Theorem 4.14 Consider the solution of the homogeneous set of equations Ax = 0 (i.e., b = 0), where A is an n by n matrix. Then: (i) If rank(A) = n, the equations have the unique solution x = 0, (ii) If rank(A) = k < n, the solution is not unique. 4.5
Orthogonal factorization methods
Orthogonal factorization methods are used in factorizing an n by n matrix A in the form of A = QR. They include Householder, Givens and the Gram-Schmidt methods [5, 6, 8]. We shall consider here the first method only and also assume that A and b are real. For Householder's method, we introduce the following elementary orthogonal matrices. 4.5.1
The elementary orthogonal matrix H
Let H = I – 2wwT
(4.5.1)
where w is an n-dimensional vector such that (w, w) = wTw = 1. Then H is symmetric and also orthonormal; HTH = I, since HTH = (I – 2wwT)(I – 2wwT) = I – 4wwT + 4w(w, w)wT = I H is known as an elementary orthogonal matrix. 4.5.2
Householder’s QR factorization with pivoting
This method gives the factorization of a square matrix A into
© 2008 by Taylor & Francis Group, LLC
Chapter 4: Efficient Solutions of Linear Equations
(4.5.2)
91
A = QR
Q is an orthonormal matrix, QTQ = I, and R an upper triangular. The transformation of matrix A into an upper triangular matrix R is done by pre-multiplying A respectively by the (n – 1) elementary orthogonal matrices H(1), H(2), …, H(n–1). Let A = A(1) and let A(2), A(3), …, A(n) be defined by A(k+1) = H(k)A(k), k = 1, 2, …, n – 1 The transformation A(2) = H(1)A(1) produces (n – 1) 0’s below the first element in column 1 of A(2). Likewise, the transformation A(3) = H(2)A(2), produces (n – 2) 0’s below the second element in column 2 of A(3) and also leaves the 0’s obtained in the previous step unchanged. The process continues until we obtain an upper triangular matrix A(n). To illustrate this process, we consider the 4 by 4 matrix A = (aij). We wish to determine the elements of the vector w(1)= (w1(1), w2(1), w3(1), w4(1))T such that A(2) = H(1)A(1) has zero elements in the positions (2, 1), (3, 1) and (4, 1). We have from (4.5.1) A(2) = A(1) – 2w(1)w(1)TA(1) Let w(1)TA(1) = (d1, d2, d3, d4). The elements in the first column of A(2) are thus (4.5.3) (a11(1) – 2w1(1)d1), (a21(1) – 2w2(1)d1), (a31(1) – 2w3(1)d1), and (a41(1) – 2w4(1)d1) Each of the last 3 elements in (4.5.3) is equated to 0 a21(1) – 2w2(1)d1 = 0 a31(1) – 2w3(1)d1 = 0
(4.5.4)
a41(1) – 2w4(1)d1 = 0 We also observe that since H(1) is orthogonal, the sum of the squares of the elements of any column of A(1) is invariant. Thus from (4.5.3) and (4.5.4), (a11(1) – 2w1(1)d1)2 + 0 + 0 + 0 = the sum of the squares of the 4 elements of the first column of A(1) = C2. Hence (4.5.5)
a11(1) – 2w1(1)d1 = ±C or 2w1(1)d1 = a11(1) ± C
By squaring equation (4.5.5) as well as the three equations of (4.5.4) © 2008 by Taylor & Francis Group, LLC
92
Numerical Linear Approximation in C
and adding the results, we get 2d12 = C2 ± 2Ca11(1)
(4.5.6)
To ensure numerical stability, the + or – sign in (4.5.6) is chosen according to whether a11(1) is > 0 or < 0 respectively. The elements of w(1) are thus obtained from (4.5.5) and (4.5.4) by substituting d1 from (4.5.6), from which H(1) is easily calculated. To calculate the elements of w(2) of H(2) = I – 2w(2)w(2)T, we proceed in the same manner. We observe that the first element of w(2) is 0, i.e., w(2) = (0, w2(2), w3(2), w4(2))T This will ensure that the first row and also the 0’s in the first column of A(2) are left unaltered in the transformation A(3) = H(2)A(2). Likewise, in calculating w(3), its first two elements are 0’s. This also ensures that the first two rows as well as the 0’s obtained in the first two columns of A(3) are left unaltered in the subsequent transformation, and so on. For computational purposes, to summarize and simplify the above calculation of w(k) and H(k) for k = 1, 2, …, n – 1, let H(k) = I – βkw(k)w(k)T For k = 1, 2, …, n – 1, H(k) and A(k+1) are generated as follows [1]
(4.5.7)
σk
= sqrt(Σi |aik(k)|2) (sum from i = k to n)
βk
= [σk(σk + |akk(k)|)]–1
wi(k) = 0, for i < k wk(k) = sgn(akk(k))(σk + |akk(k)|) wi(k) = aik(k), for i > k
H(k) may thus be computed from (4.5.7) and consequently A(k+1) = H(k)A(k) is computed. 4.5.3
Pivoting in Householder’s method
At the beginning of the kth step, k = 1, 2, …, n – 1, we compute σj from (4.5.7) for each value of j from k to n. If σp2 is the maximum sum, interchange columns k and p of A(k) in the full array. This is 2
© 2008 by Taylor & Francis Group, LLC
Chapter 4: Efficient Solutions of Linear Equations
93
done by post-multiplying A(k) by the permutation matrix Pk, and in order not to change the structure of the equation A(k)x = b(k) we pre-multiply x by Pk. That is A(k)PkPkx = b(k) This equation is then pre-multiplied by H(k). Hence, at the end of the (n – 1)th step we get A(n)x = b(n)
(4.5.8)
By denoting R as an upper triangular matrix A(n) = HAP = R, x = Px and b(n) = Hb Equation (4.5.8) is re-written as (4.5.9)
HAPx = Rx = Hb
where H = H(n–1) … H(2)H(1) and P = P1P2 … Pn–1 Since H is orthonormal, H–1 = HT, and by pre-multiplying (4.5.9) by HT we get QRx = b, where Q = HT is an orthonormal matrix. The QR factorization is thus not for matrix A but for the permuted matrix A = AP. The solution x is obtained by solving the triangular system Rx = QTb, and finally x is obtained from x = Px. 4.5.4
Calculation of the matrix inverse A–1
We may use either Gauss’ or Householder’s method in calculating A–1. In the equation Ax = b, we take successively b = e1, e2, …, en, where ei is the ith column of the n-unit matrix I. The calculated solution xi is the ith column of the inverse A–1. Proof: Let Ax1 = e1, Ax2 = e2, Axn = en Then since [e1 e2 … en] = I [x1 x2 … xn] = A–1
© 2008 by Taylor & Francis Group, LLC
94
4.6
Numerical Linear Approximation in C
Gauss-Jordan method
Gauss-Jordan elimination method is of importance in the solution of linear programming problems [3, 7]. This method proceeds as in the Gauss elimination method, described at the beginning of Section 4.4, but without pivoting. The only difference is that at the kth step, xk is eliminated not only from all equations below the kth equation, but also from all equations above the kth equation. If we have a system of n equations in n unknowns, after step n, we get (for n = 4) a11(1)x1
a22(2)x2
a33(3)x3
a44(4)x4
= = = =
b1(1) b2(2) b3(3) b4(4)
Then the solution of the system is obtained by dividing equation i by the coefficient aii(i), i = 1, 2, …, n. We are interested here in the solutions of an underdetermined system of linear equations by the Gauss-Jordan elimination method. Let us suppose that we have applied this method to a system of 4 equation in 6 unknown and that the system is of rank 4. Let us assume that we have also divided each equation i by the coefficient aii x1 (4.6.1)
x2
x3
+ + + x4 +
a15x5 a25x5 a35x5 a45x5
+ + + +
a16x6 a26x6 a36x6 a46x6
= = = =
b1(1) b2(2) b3(3) b4(4)
Hence, if we assign any prescribed values to x5 and x6, 0’s say, x1, x2, x3 and x4 in (4.6.1) are known. Let us assume now that we want to solve the given system not for x1, x2, x3 and x4 but for x1, x2, x6 and x4 instead. Rather than solving the original set of equations from the beginning, one iteration of the Gauss-Jordan method is enough to give us the required result. This is done in two steps, as follows: (1) Divide the third equation (which contains x3) by a36, so that the coefficient of x6 in this equation is unity, (2) Subtract a suitable multiple of the obtained equation from each of the other three equations so that x6 is eliminated from such equations. That is, apply a Gauss-Jordan elimination step to eliminate x6 from all equations except the third one. © 2008 by Taylor & Francis Group, LLC
Chapter 4: Efficient Solutions of Linear Equations
95
The new set of equations will now have the form x1
+ a13x3 x2 + a23x3 a33x3 a43x3
(4.6.2)
+ + + +
a15x5 a25x5 a35x5 + x4 + a45x5
x6
= = = =
b1(1) b2(2) b3(3) b4(4)
from which the solution for x1, x2, x6 and x4 is obtained. Obviously, the coefficients and the r.h.s. of (4.6.1) and of (4.6.2) are not the same. The method described above is that of changing basis in the simplex algorithm, which is used for solving linear programming problems. Note 4.4 We observe in (4.6.1), by assigning 0’s to x5 and x6, the values of x1, x2, x3 and x4 are themselves the r.h.s. of these equations, namely = b1(1), b2(2), b3(3) and b4(4) respectively. Hence, we name vector b as the basic solution. We end this chapter with an analysis of rounding errors in arithmetic operations. 4.7 4.7.1
Rounding errors in arithmetic operations Normalized floating-point representation
The general format of a normalized floating-point number consists of a sign bit, an exponent and a mantissa, as shown in Figure 4-1. Single- and double-precision numbers differ only in the number of bits allocated to their exponent and mantissa. ±
Exponent
Mantissa
Figure 4-1: General floating-point format
Most present day implementations use the IEEE (Institute of Electrical and Electronics Engineers) normalized floating-point representation. The single-precision type “float” in C occupies 32 bits (binary digits), consisting of a sign bit, an 8-bit exponent and a 23-bit mantissa. The double-precision type “double” occupies 64 bits, consisting of a sign bit, an 11-bit exponent and a 52-bit mantissa. There is an implied leading “1.” in the mantissa, so s.p and d.p. mantissas are actually 24 and 53 bits long respectively (even though © 2008 by Taylor & Francis Group, LLC
96
Numerical Linear Approximation in C
the most-significant bit is not stored in memory) resulting in 1 ≤ mantissa < 2. We will simplify our examination of round-off error in floating-point arithmetic operations by eliminating the implied leading bit in the mantissa. Instead, the mantissa is normalized by constraining the most-significant bit to be nonzero. This format was typical of pre-IEEE floating-point implementations. We can generalize the normalized floating-point representation to any base, such as binary, octal, decimal, hexadecimal, etc. We denote the base by β = 2, 8, 10, 16, etc. [2, 3, 15]. Let t1 be the number of digits in a s.p. mantissa and t2 be the number of digits in a d.p. mantissa. A normalized s.p. floating-point number x is interpreted as x = ±βb(0.d1d2 … dt1) where b is the exponent, the “.” is the decimal point and d1, d2, …, dt1 are the digits of the mantissa. Because d1 ≠ 0, we get (1/β) ≤ 0.d1d2…dt1 < 1
(4.7.1)
so (½) ≤ 0.d1d2…dt1 < 1 (binary), (1/10) ≤ 0.d1d2…dt1 < 1 (decimal), (1/16) ≤ 0.d1d2…dt1 < 1 (hexadecimal), etc. The value of x is thus calculated by d b⎛ d x = ± β ⎜ ----1- + ----2- + ⎝ β β2
d t1⎞ ... + ------⎟ t1 β ⎠
For example, the decimal numbers 4983 and 0.004983 are expressed in normalized floating-point representation as 104(0.4983) and 10–2(.4983) respectively. Due to the finite length of floating-point representation, numbers are rounded-off or truncated after each arithmetic operation. In this chapter, round-off error is calculated for simple floating-point arithmetic operations. Bounds for round-off errors are then obtained for extended simple operations and for simple matrix calculations. Backward round-off error analysis is introduced by an example. 4.7.2
Overflow and underflow in arithmetic operations
In an IEEE-format s.p. number, the exponent b has the range -127 ≤ b ≤ 128, the stored value of which is offset by 127. This means © 2008 by Taylor & Francis Group, LLC
Chapter 4: Efficient Solutions of Linear Equations
97
that a s.p. number has a range of the order of 2-127 ≤ x ≤ 2128; specifically, 1.175494351×10-38 ≤ x ≤ 3.402823466×1038. A d.p. number has a range of the order of 2-1023 ≤ x ≤ 21024; specifically, 2.2250738585072014×10-308 ≤ x ≤ 1.7976931348623158×10308. Beyond the bounds of the given floating-point representation, overflow or underflow occurs. 4.7.3
Arithmetic operations in a d.p. accumulator
Floating-point processors perform all simple arithmetic operations in at-least double-precision (often higher than d.p.), regardless of whether the result is stored in single- or double-precision. Consider the following simplified examples in which we assume a s.p. representation supporting 4 decimal places of precision, a d.p. of 8 decimal places and a d.p. accumulator also of 8 decimal places: (a) Addition (subtraction) of two s.p. floating-point numbers. Let x and y each be a 4-digit floating-point decimal number and let us calculate z = (x + y). Before the arithmetic operation, the smaller number is given an exponent equal to that of the larger number. In the following three examples, we show how round-off error can be affected by the numerical signs and the relative magnitudes of the two numbers. (1a) Consider 10–6(0.1015) – 10–7(0.9852): 10–6(0.1 0 1 5 0 0 0 0) –10–6(0.0 9 8 5 3 0 0 0) ––––––––––––––––––– 10–6(0.0 0 2 9 7 0 0 0) We denote the difference z, normalized in d.p. and then stored in single-precision as z = fl(x – y) = 10–8(0.2970)
(1b)
where fl denotes the floating point calculation in s.p. In this example, we see that there is no round-off error. Consider the same numbers as above, but in an addition operation rather than a subtraction:
© 2008 by Taylor & Francis Group, LLC
98
Numerical Linear Approximation in C
10–6(0.1 0 1 5 0 0 0 0) +10–6(0.0 9 8 5 3 0 0 0) ––––––––––––––––––– 10–6(0.2 0 0 0 3 0 0 0) The rounded sum is stored in single-precision as z = fl(x + y) = 10–6(0.2000)
(2)
giving a round-off error of 10–10(0.3000), which was realized by simply changing the sign of the arithmetic operation. Consider 104(0.8314) + 101(0.5241): There is a difference of 3 between the exponents of the two numbers. Hence, the second number is shifted to the right 3 places before the addition takes place 104(0.8 3 1 4 0 0 0 0) +104(0.0 0 0 5 2 4 1 0) ––––––––––––––––––– 104(0.8 3 1 9 2 4 1 0)
(b)
The rounded sum z = fl(x + y) = 104(0.8319), giving a round-off error of 10–4(0.2410). (3) Consider 10–6(0.3145) + 104(0.6758): The difference in exponent between the two numbers is 10, (i.e., > the 8 decimal places in the accumulator), and thus the computed sum z = fl(x + y) = 104 (0.6758) = y, giving a round-off error equal to the whole of the smaller number. Multiplication (division) of two s.p. floating-point numbers. Let x and y each be a 4-digit floating-point decimal number and let us calculate z = xy. Let us consider the following two examples: (1) Consider two numbers of similar magnitude: 10–4(0.1714) × 10–3(0.1213) = 10–8(0.20790820)
(2)
The computed result z = fl(xy) = 10–8(0.2079), giving a round-off error of 10–13(0.8200). Consider two numbers of dissimilar magnitude:
© 2008 by Taylor & Francis Group, LLC
Chapter 4: Efficient Solutions of Linear Equations
99
10–4(0.8204) × 106(0.3325) = 102(0.27278300) The product is then rounded to z = fl(xy) = 102(0.2728). The least-significant digit is rounded up from 7 to 8, giving a round-off error of 10–2(0.1700), which is much greater than the smaller number. Theorem 4.15 Let us take two normalized floating-point s.p. numbers x and y and let ‘op’ denote any of the arithmetic operations +, –, × or /. Let (x‘op’y) be computed in d.p. before rounding it back to s.p. Let the calculated result be denoted by z. Then z = fl(x‘op’y) = (x‘op’y)(1 + ε) |ε| ≤ (1/2)β(1–t1) Proof: Let z before and after rounding be respectively βb(0.d1d2…dt1dt1+1…) and βb(0.d1d2…dt1) where dt1 is the rounded digit. Hence, the error resulting from the rounding operation is error = βb|0.d1d2…dt1dt1+1… – 0.d1d2…dt1| ≤ βb[(1/2)β–t1] The relative error (R.E.) = error/true value, is given by R.E. ≤ βb[(1/2)β–t1)]/βb(0.d1d2…dt1dt1+1…) Yet, we know from (4.7.1) that (1/β) ≤ (0.d1d2…dt1dt1+1…) and thus R.E. ≤ [(1/2)β–t1)]/(1/β) ≤ (1/2)β(1–t1) and the theorem is proved. This theorem tells us that (binary) |ε| ≤ 2–t1 (1–t1) |ε| ≤ (1/2)10 (decimal) |ε| ≤ (1/2)16(1–t1) (hexadecimal) Truncating of the result In some computers, the result z in the previous theorem is truncated to s.p. t1 digits rather than rounded. In this case, we have
© 2008 by Taylor & Francis Group, LLC
100
Numerical Linear Approximation in C
Theorem 4.16 (4.7.2)
z(truncated) = z(1 + ε), where |ε| ≤ β(1–t1)
It is also possible to prove the following alternative to the above two theorems. Theorem 4.17 Let ‘op’ denote any of the arithmetic operations +, –, × or /. Then z = fl(x‘op’y) = (x‘op’y)/(1 + ε) where
4.7.4
|ε| ≤ (1/2)β(1–t1) |ε| ≤ β(1–t1)
(rounded operation) (truncated operation)
Computation of the square root of a s.p. number
The bound for the error made in obtaining the square root depends on the algorithm used to extract the root of the number. However, it will always be assumed that [15] fl[sqrt(x)] = sqrt(x)(1 + ε), where |ε| ≤ 1.00001(1/2)β(1–t1) 4.7.5
Arithmetic operations in a s.p. accumulator
Let us see what happens when arithmetic operations are instead performed in a s.p. accumulator. Addition and subtraction operations are more affected than multiplication or division. In the former, either none, one or two rounding operations are performed for the single arithmetic operation. Let us illustrate the worst case by the example 103(0.9741) + 102(0.4936) When a d.p. accumulator is used, the result is computed as follows 103(0.97410000) + 103(0.04936000) = 104(0.1023) Yet, in s.p. we have 103(0.9741) + 103(0.0494) = 103(1.0235) and the normalized result is 104(0.1024); that is, the smaller number is rounded up and the final result is also rounded up. In this case, let the error in these two rounding operations be respectively ξ and η. Let y © 2008 by Taylor & Francis Group, LLC
Chapter 4: Efficient Solutions of Linear Equations
101
be the smaller number, which is rounded off first; then ξ and η may be given by |ξ| ≤ βb(1/2)β(–1–t1) and |η| ≤ βb(1/2)β–t1 Thus we have |ξ + η| ≤ |ξ| + |η| ≤ (1 + (1/β))βb(1/2)β–t1 Theorem 4.18 For a single-precision accumulator z = fl(x + y) = (x + y)(1 + δ) |δ| ≤ (1/2)(1 + (1/β))β1–t1 Hence, |δ| ≤ (1.5)2–t1 (binary), |δ| ≤ (0.55)101–t1 (decimal) and for hexadecimal |δ| ≤ (1/2)(1 + (1/16))161–t1. For multiplication or division, the error depends on how the operation is performed for the given floating-point processor. In any case, it is unlikely that an error greater than 1 in the least significant digit of the final result will occur, so the result of the previous theorem will still be valid for the multiplication and division case. 4.7.6
Arithmetic operations with two d.p. numbers
In case there is no quadruple word for two d.p. numbers in which simple operations take place, we get fl2(x‘op’y) = (x‘op’y)(1 + δ), |δ| ≤ (1/2)(1 + (1/β))β1–t2 where t2 = the number of digits in the mantissa in a d.p. word. From now on, ε1 and ε2 denote the round-off error in a s.p. operation performed first in a d.p. accumulator, and the d.p. operation performed in a d.p. accumulator respectively, where (4.7.3)
ε1 = (1/2)β(1–t1) and ε2 = (1/2)(1 + (1/β))β(1–t2)
Let us now consider extended simple arithmetic operations of s.p. numbers. 4.7.7
Extended simple s.p. operations in a d.p. accumulator
Consider the following operations:
© 2008 by Taylor & Francis Group, LLC
102
Numerical Linear Approximation in C
(a)
Extended multiplication: Let us calculate w = xyz. This is done in two steps, namely by calculating fl(xy) then fl(fl(xy)z). Thus w = fl((xy)(1 + δ1)z) = (xyz)(1 + δ1)(1 + δ2), |δ1|, |δ2| ≤ ε1 where ε1 is defined in (4.7.3). In general, by assuming |δi| ≤ ε1 w = fl(x1 x2…xn) = (x1 x2…xn)(1 + δ1)(1 + δ2)…(1 + δn–1)
(b)
Extended addition (summation): Consider w = (x + y + z). In the same manner, this is done in two steps, by calculating fl(x+y) then fl(fl(x + y) + z). We get w = fl((x + y)(1 + δ1) + z) = ((x + y)(1 + δ1) + z)(1 + δ2) = (x + y)(1 + δ1)(1 + δ2) + z(1 + δ2) In general w = fl(x1 + x2 + … + xn) = (x1 + x2)(1 + δ1)(1 + δ2)…(1 + δn–1) + x3(1 + δ2)…(1 + δn–1) +…+ xn–1(1 + δn–2)(1 + δn–1) + xn(1 + δn–1) |δi| ≤ ε1
(c)
Inner products: Let us compute w = (x1y1 + x2y2) w = (x1y1(1 + δ1) + x2y2(1 + δ2))(1 + δ3), |δi| ≤ ε1 This result may also be extended.
Because the product of the factors (1 + δi) appear so often, it is useful for practical purposes to obtain some rounding error bounds for such products. Lemma 4.1 Assume that |δi| ≤ ε1, i = 1, 2, …, n, and that nε1 < 0.01. Then
© 2008 by Taylor & Francis Group, LLC
Chapter 4: Efficient Solutions of Linear Equations
103
1 – nε1 ≤ Π1n(1 + δi) ≤ 1 + 1.01nε1
(4.7.4) Proof:
From the assumption that |δi| < ε1 (1 – ε1)n ≤ Π1n(1 + δi) ≤ (1 + ε1)n It is easy to establish that 1 – nε1 ≤ (1 – ε1)n Also, assuming that nε1 < 0.01, it is easy to show that (1 + ε1)n < 1 + 1.01nε1 and by applying (4.7.5a, b) to (4.7.4), the lemma is proved. By using (4.7.4), we get the following bounds for the extended multiplication, extended summation and inner products respectively. (a)
w = fl(x1 x2 … xn) = (x1 x2 … xn)(1 + τ), where 1 – (n – 1)ε1 ≤ 1 + τ ≤ 1 + 1.01(n – 1)ε1
(4.7.6) (b)
w = fl(x1 + x2 + … + xn) = fl(x1 + x2)(1 + τ2) + x3(1 + τ3) +…+ xn(1 + τn), where
(4.7.7)1 – (n + 1 – r)ε1 ≤ (1 + τr) ≤ 1 + 1.01(n + 1 – r)ε1, r = 2, 3,…, n (c)
w = fl(x, y) = (x1y1)(1+τ1) + x2y2(1+τ 2)+…+ xn yn (1+τn) where 1 – nε1 ≤ (1 + τ1) ≤ 1 + 1.01nε1
1 – (n + 2 – r)ε1 ≤ (1 + τr) ≤ 1 + 1.01(n + 2 – r)ε1, r = 2, 3, …, n Remark 4.1 The error in the extended-product case depends on the order in which the numbers are multiplied. Yet, the upper bound of the error as given in (4.7.6) is independent of such order. Also, the relative error (R.E.) is appreciably small and from (4.7.6) is |R.E.| < 1.01(n – 1)ε1 Remark 4.2 In the summation operation, both the error and the upper bound in (4.7.7) depend on the order in which the numbers are added. The © 2008 by Taylor & Francis Group, LLC
104
Numerical Linear Approximation in C
upper bound of this error will be smaller if the numbers were added in order of increasing absolute magnitude. That is because the largest factor (1+τi) will be associated with the smallest xi. However, in this process there is no guarantee that the relative error will be small. R.E. ≤
n
n
i=1
i=1
∑ xi τi ⁄ ∑ xi
It may happen that the positive and negative terms in the summation of the denominator cancel each other in such a way that the summation is very small. It may also happen that the terms in the numerator will support each other. In such a case the relative error will be very high. The same situation may occur in the inner product case. This phenomenon of cancellation is very crucial and it magnifies the harmful effect of the round-off error. 4.7.8 (a)
Alternative expressions for summations and inner-product operations The expression for summations may take the form w = fl(x1 + x2 + … + xn)
where
= (x1 + x2 + … + xn) + e1
(4.7.8) |e1| ≤ 1.01ε1[(n – 1)|x1 + x2| + (n – 2)|x3| + … + 2|xn–1| + |xn|] (b)
Also, for the inner product operation, we may have w = fl(x, y) = (x, y) + e2, where
(4.7.9)
|e2| ≤ 1.01ε1[n|x1| |y1| + n|x2| |y2| + (n – 1)|x3| |y3| +…+ 2|xn| |yn|]
4.7.9 (a)
More conservative error bounds The summation case, from (4.7.8) w = fl(x1 + x2 + … + xn) = (x1 + x2 + … + xn) + e3, where
© 2008 by Taylor & Francis Group, LLC
Chapter 4: Efficient Solutions of Linear Equations
105
|e3| ≤ 1.01(n – 1)ε1(|x1| + |x2| + … + |xn|) (b)
The inner product case, from (4.7.9) w = fl(x, y) = (x, y) + e4, where |e4| ≤ 1.01nε1(|x|, |y|) where |x| and |y| are the two vectors whose elements are respectively the absolute elements of x and y.
4.7.10
D.p. summations and inner-product operations
In most computers there are facilities for accumulating the sum of several terms in a d.p. accumulator for each partial sum and the whole terms are added. The final result is then rounded to s.p. (a) The summation operation. First we obtain an expression similar to (4.7.7), namely fl2(x1 + x2 + … + xn) = (x1 + x2)(1 + η2) + x3(1 + η3) + … + xn(1 + ηn) and 1 – (n + 1 – r)ε2 ≤ (1 + ηr) ≤ 1 + 1.01(n + 1 – r)ε2, r = 2, 3, …, n and from (4.7.3) ε2 = (1/2)(1 + (1 + (1/β)))β(1–t2) Then when this result is rounded to s.p. we get ⎛ n ⎞ w = fl fl 2 ⎜ ∑ x i⎟ ⎝i = 1 ⎠ = [(x1 + x2)(1 + η2) + x3(1 + η3) + … + xn(1 + ηn)](1 + ε), |ε| ≤ ε1 From this result we write w = (x1 + x2 + … + xn) + E1, where
© 2008 by Taylor & Francis Group, LLC
106
Numerical Linear Approximation in C
n
∑ xi
E1 ≤ ε1
n
+ 1.01 ( n – 1 )ε 2 ∑ x i
i=1
(b)
i=1
The inner product case. In the same manner as above w = fl(fl2(x, y)) = (x, y) + E2, where |E2| ≤ ε1|(x, y)| + 1.01nε2(|x|, |y|) We observe that the second term on the right for both |E1| and |E2| is a second order term, compared with the first term.
4.7.11
Rounding error in matrix computation
The obtained results so far are now applied to a simple matrix computation. Let A = (aij) be an n by n matrix and let x be an n-vector. Let us compute the following: (i) y = Ax. The computed ith element of y is n
y i = fl
∑ j=1
n
a ij x j =
∑ aij xj + Ei j=1
From (4.7.9) | Ei | ≤ 1.01ε1[n|ai1| |x1| + n|ai2| |x2| + (n – 1)|ai3| |x3| + … + 2|ain| |xn|] We may write y = Ax + E1 | E1| ≤ 1.01 |A| D |x| ε1 and D is a diagonal matrix given by D = diag(n, n, n – 1, …, 2) Again, |A| denotes the matrix whose elements are the absolute elements of matrix A. Thus by taking the Euclidean norms of the above inequality, we get the following bound © 2008 by Taylor & Francis Group, LLC
Chapter 4: Efficient Solutions of Linear Equations
107
||E1||2 ≤ 1.01 nε1||A||E ||x||2 (ii)
C = AB, where each of A, B and C is an n by n real matrix. In the same manner we write down C = fl(AB) = AB + E2 where ||E2||E ≤ 1.01 nε1||A||E ||B||E See for example Wilkinson ([15], p. 83).
However, if d.p. accumulations of inner products are made, the results for (i) and (ii) are (i) y = fl(fl2(Ax)) = Ax + E3, where ||E3||2 ≤ ε1||Ax||2 + 1.01nε2||A||E ||x||2 (ii)
C = fl(fl2(AB)) = AB + E4, where ||E4||E ≤ ε1||AB||E + 1.01nε2||A||E ||B||2
4.7.12
Forward and backward round-off error analysis
Let us now assume that we are calculating the value of x = g(a1, a2, …, an) where the ai are given. As a result of the round-off error, the computed value of x, namely x will differ from the exact value x by dx = x – x. In the forward error analysis, we attempt to obtain some bound on dx. In the backward error analysis, we do not concern ourselves with the value of dx. Instead, we try to show that the computed value x is exactly equal to x = g(a1 + da1, a2 + da2, …, an + dan) for some values of da1, da2, …, dan, and we calculate the bounds for the dai. The analysis has to be continued after this step to obtain some bounds for ||dx||, usually by using some perturbation technique [3, 14]. In practice, it is found that the backward round-off error analysis is much simpler than the forward analysis, in particular in the floating-point computation. We give here a simple example to illustrate the idea of the backward error analysis. Suppose that we are computing the elements xi from the lower triangular set of equations © 2008 by Taylor & Francis Group, LLC
108
Numerical Linear Approximation in C
a11x1 a21x1 + a22x2 a31x1 + a32x2 + a33x3 a41x1 + a42x2 + a43x3 + a44x4
(4.7.10)
= = = =
b1 b2 b3 b4
The variables x1, x2, …, are computed in succession x1 = fl(b1/a11) xi = fl[(–ai1x1 – ai2x2 … + bi)/aii], i = 2, 3, 4 From Theorem 4.17 x1 = b1/[a11(1 + δ11)] and xi = fl[(–ai1(1 + δi1)x1 – … – ai, i–1(1 + δi, i–1)xi–1 + bi)] / [aii(1 + δii)(1 + εii)], i = 2, 3, 4 where |δii|, |εii| ≤ ε1, i = 1, 2, 3, 4,
|δi1| ≤ 1.01(i – 1)ε1, i = 2, 3, 4
|δij| ≤ 1.01(i + 1 – j)ε1, i = 2, 3, 4, j < i
and Hence, we get
a11(1 + δ11)x1 = b1 a21(1 + δ21)x1 + a22(1 + δ22)(1 + ε22)x2 = b2, etc. or in matrix form (A + dA)x = b where a 11 dA ≤ 1.01ε 1
0
a 21 2 a 22
0
0
0
0
2 a 31 2 a 32 2 a 33
0
3 a 41 3 a 42 2 a 43 2 a 44 Hence, by taking the L∞ norm of dA, (maximum absolute row sum) ||dA||∞ ≤ (1/2) × 1.01 × 4 × 5ε1 × maxij|aij| If instead, A is n by n (not 4 by 4), the error bound would be © 2008 by Taylor & Francis Group, LLC
Chapter 4: Efficient Solutions of Linear Equations
(4.7.11)
109
||dA||∞ ≤ (1/2)1.01n(n + 1)ε1 × maxij|aij|
In other words, the computed solution to (4.7.10) is the exact solution of (A + dA)x = b, where ||dA||∞ is given above and the error dx = (x – x) is calculated from Theorem 4.12 by taking db = 0 and assuming ||A–1dA|| < 1 –1
dx A dA ---------≤ -----------------------------–1 x 1 – A dA
(4.7.12) 4.7.13
Statistical error bounds and concluding remarks
We conclude that the upper bounds of round-off error may be reached only in very special cases. The round-off error in a single arithmetic operation lies between –ε1 and ε1. To realize the worst-case estimated upper bound in the error of an accumulation of simple arithmetic operations, every single round-off error would have to be at either extreme –ε1 or ε1 and the data of the problem would have to be arranged in such a form that the errors all accumulate in the same direction. In the above, the obtained error bound for ||dA||∞ is given by (4.7.11). In practice, due to the statistical distribution of the round-off errors the factor n(n + 1) in (4.7.11) is more likely to be n. We note that the first detailed round-off error analysis in fixed point arithmetic operations (not discussed in this section) was given by von Neumann and Goldstine [12]. The corresponding analysis in floating-point operations was first given by Wilkinson [14]. The backward round-off error analysis was first introduced by Givens [4]. The pioneering contribution to the subject of round-off error analysis in algebraic processes in both fixed and floating-point computation is due to Wilkinson. Most of his published and unpublished work is summarized in his two books [15, 16].
References 1.
Businger, P. and Golub, G., Linear least squares solution by Householder transformation, Numerische Mathematik, 7(1965)269-276.
© 2008 by Taylor & Francis Group, LLC
110
2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16.
Numerical Linear Approximation in C
Fisher, M.E., Introductory Numerical Methods with the NAG Software Library, The University of Western Australia, Crawley, 1988. Forsythe, G.E. and Moler, C.B., Computer Solution of Linear Algebraic Systems, Prentice-Hall, Englewood Cliffs, NJ, 1967. Givens, J.W., Numerical computation of the characteristic values of a real matrix, Oak Ridge National Laboratory, ORNL-1574, 1954. Givens, J.W., Computation of plane unitary rotations transforming a general matrix to triangular form, Journal of SIAM, 6(1958)26-50. Golub, G.H. and Van Loan, C.F., Matrix Computation, Third Edition, The Johns Hopkins University Press, Baltimore, 1996. Hadley, G., Linear Programming, Addison-Wesley, Reading, MA, 1962. Householder, A.S., Unitary triangularization of a nonsymmetric matrix, Journal of ACM, 5(1958)339-342. Lancaster, P., Theory of Matrices, Academic Press, New York, 1969. Noble, B., Applied Linear Algebra, Prentice-Hall, Englewood Cliffs, NJ, 1969. Stewart, G.W., Introduction to Matrix Computations, Academic Press, New York, 1973. von Neumann, J. and Goldstine, H.H., Numerical inverting of matrices of high order, Bulletine of American Mathematical Society, 53(1947)1021-1099. Westlake, J.R., A Handbook of Numerical Matrix Inversion and Solution of Linear Equations, John Wiley & Sons, New York, 1968. Wilkinson, J.H., Error analysis of floating-point computation, Numerische Mathematik, 2(1960)319-340. Wilkinson, J.H., Rounding Errors in Algebraic Processes, Prentice-Hall, Englewood Cliffs, NJ, 1963. Wilkinson, J.H., The Algebraic Eigenvalue Problem, Clarendon Press, Oxford, 1965.
© 2008 by Taylor & Francis Group, LLC
PART 2
The L1 Approximation
© 2008 by Taylor & Francis Group, LLC
112
Numerical Linear Approximation in C
Chapter 5
Linear L1 Approximation
113
Chapter 6
One-Sided L1 Approximation
183
Chapter 7
L1 Approximation with Bounded Variables
213
Chapter 8
L1 Polygonal Approximation of Plane Curves
245
Chapter 9
Piecewise L1 Approximation of Plane Curves
275
© 2008 by Taylor & Francis Group, LLC
113
Chapter 5 Linear L1 Approximation
5.1
Introduction Given is the overdetermined system of linear equations
(5.1.1)
Ca = f
where C = (cij) is a real n by m matrix of rank k, k ≤ m ≤ n, and f = (fi) is a real n-vector. In general, this linear system is inconsistent and has no exact solution, only an approximate one. The residual vector for the system Ca = f, denoted by r, is

r = Ca – f

In this chapter, we consider the linear L1 solution of Ca = f, which requires that the L1 norm of the errors (the residuals) in the approximation problem be as small as possible.

Since the early seventies, there has been an increasing interest in the linear L1 approximation. According to Bloomfield and Steiger ([14], pp. 33, 34), it has been given many names, such as discrete L1 approximation [1, 2, 8], L1 solution of overdetermined linear equations [1, 3, 4], linear discrete L1 norm problem [6], L1 norm minimization [22], least absolute deviations (LAD) [14], minimum sum of absolute errors (MSAE) [17], L1-approximation [18], least absolute value (LAV) [23], L1 linear regression [24], least absolute value regression [28] and others.

The robustness of the L1 solution against odd points or outliers (one or more inaccurate elements in vector f) was illustrated in Figure 2-1. In this context, Rosen et al. [22] give necessary and sufficient conditions under which the correct solution is obtained when there are errors not only in vector f but also in the coefficient matrix C.
The L1 solution of system Ca = f, or the discrete L1 approximation, is the m-vector a = (ai) that minimizes the L1 norm of the residuals; that is,

(5.1.2)    minimize Z = Σ(i=1 to n) |ri|

where ri is the ith element of vector r and is given by

(5.1.3)    ri = Σ(j=1 to m) cij aj – fi,    i = 1, 2, …, n
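Computing r and Z from (5.1.2) and (5.1.3) is a two-line loop. The following is a minimal sketch in plain C (0-based indexing and an invented function name, not the book's tNumber_R conventions):

#include <math.h>

/* Compute r = C*a - f and return the L1 norm z = sum of |r[i]|.
   C is stored row-wise as an n*m array; plain 0-based indexing. */
double l1_norm_residuals (int n, int m, const double *C,
                          const double *a, const double *f, double *r)
{
    double z = 0.0;
    for (int i = 0; i < n; i++) {
        double s = -f[i];
        for (int j = 0; j < m; j++)
            s += C[i * m + j] * a[j];     /* (5.1.3) */
        r[i] = s;
        z += fabs (s);                    /* accumulates (5.1.2) */
    }
    return z;
}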
Abdelmalek [1] showed that the discrete linear L1 approximation is equivalent to the solution of the overdetermined linear system Ca = f in the L1 norm, as explained in Chapter 2. For the case when matrix C satisfies the Haar condition, Usow [25] treated the discrete L1 approximation by solving a geometric problem equivalent to solving Ca = f in the L1 norm. Abdelmalek [1] generalized Usow's algorithm to the non-Haar case and showed that his algorithm is completely equivalent to a dual simplex method applied to a linear programming problem with non-negative bounded variables. However, one iteration of Usow's algorithm is equivalent to one or more iterations of the latter.

The most widely used methods for solving the L1 approximation employ linear programming techniques, in the primal or the dual form. Wagner [26] was the first to successfully formulate the L1 approximation problem (5.1.1-3) as a linear programming problem, in both the primal and dual forms. Barrodale and Roberts [8, 9] used a modification of the simplex method of linear programming to solve the primal problem. Their routine is able to skip certain intermediate simplex iterations. Matrix C of (5.1.1) need not be of full rank, minimum computer storage is required and an initial basic feasible solution is easily computed.

To advance the starting basis in the linear programming solution for the L1 approximation problem, Sklar and Armstrong [23] solved the equation Ca = f first in the least squares (LS) sense. They computed and ordered the absolute LS residuals. Then the initial basis
for the L1 problem corresponds to the columns related to the m smallest absolute LS residuals. This is because the L1 approximation is supposed to interpolate at least m of the given data points, m being the rank of matrix C. Their method requires solving the problem in the LS sense first and assumes that the coefficient matrix C is of full rank m. See Section 5.1.1.

Bloomfield and Steiger [13] presented an algorithm identical to that of Barrodale and Roberts [8, 9], except that the two differ in the start-up of the algorithm. They assume that rank(C) = m and use the characterization theorem outlined in Section 5.1.1. They consider all m combinations with zero residuals and use an exchange method that is a variation of the simplex method.

Among other methods that use linear programming is that of Narula and Wellington [17]. They developed one algorithm that solves both the L1 approximation problem, calling it the minimum sum of absolute errors (MSAE) regression, and the Chebyshev approximation problem, calling it minimization of the maximum absolute error (MMAE) regression. They maintain that the algorithm for MSAE regression is that of Barrodale and Roberts [8] and that the algorithm for MMAE regression retains the basic elements of Barrodale and Phillips [7].

Following the analysis of Usow's method, Abdelmalek [2, 3, 4] next solved the linear programming problem in the dual form, in such a way that certain intermediate simplex iterations are skipped. Numerical results [3] on a large number of test cases show that this method is comparable to that of Barrodale and Roberts [8, 9]. This is not surprising, since the two algorithms are identical, as shown by Armstrong and Godfrey [6]. However, the two methods differ in obtaining the initial basic solution and in the way a vector leaves and enters the basis. Another difference is that in Abdelmalek [3, 4], for the purpose of numerical stability for ill-conditioned problems, a triangular decomposition of the basis matrix is used. Nevertheless, a program without triangular decomposition of the basis, which is faster, is also included here.

Robers and Ben-Israel [20] and Robers and Robers [21] described an interval programming technique to solve the bounded variables dual linear programming problem (5.2.3) below. No intermediate simplex iterations are skipped in their method.
Techniques for solving the L1 approximation problem other than linear programming include that of Bartels et al. [10], who used a projected gradient method for choosing a descent direction to minimize a piecewise differentiable function. Wesolowsky [28] used a technique closely related to that of Bartels et al. [10] with different rules of descent. Bartels and Conn [11] used a descent method, which is a variant of the simplex method.

In the following, we describe the solution using the dual form of the linear programming formulation of the L1 approximation problem. It reduces to a programming problem with non-negative bounded variables. No artificial variables are needed in the algorithm and, as we noted earlier, it deals with rank deficient cases. Also, minimum computer storage is used and an initial basic feasible solution is easily computed. We are able to skip certain intermediate simplex iterations and hence the algorithm is an efficient one.

In Section 3.8.1, we outlined the formulation of the L1 problem as a linear programming problem. For the sake of completeness, we formulate this problem again here.

5.1.1 Characterization of the L1 solution
For characterization theorems of the L1 optimal solution, see, for example, Madsen et al. [16], Pinkus [18], Powell [19] and Watson [27]. The most important characterization is the following. For practical purposes, for an optimal solution, if matrix C in (5.1.1) has rank k, k ≤ m, then at least k of the residuals are 0's. This characterization (theorem) is a direct result of the solution of the dual linear programming problem of the L1 approximation. See Section 5.7. Simplex methods calculate optimal solutions having this property, but not every optimal solution has this characterization ([24], p. 60).

5.2 Linear programming formulation of the problem
Since the residuals (ri) in (5.1.3) are unrestricted in sign; that is, ri may be >, =, or < 0, i = 1, 2, …, n, we write

(5.2.1a)    ri = ui – vi

Hence

(5.2.1b)    |ri| = ui + vi

where ui and vi are such that

(5.2.1c)    ui ≥ 0 and vi ≥ 0, i = 1, 2, …, n

When ri > 0, we have ui > 0 and vi = 0, and when ri < 0, we have vi > 0 and ui = 0. Obviously, when ri = 0, ui = vi = 0. Hence, from (5.2.1b), (5.1.2) becomes

minimize Z = Σ(i=1 to n) ui + Σ(i=1 to n) vi

In vector-matrix notation, let vector u = (uj) and vector v = (vj). Hence, the residual vector r is

r = u – v = Ca – f

The primal form of the linear programming problem is

(5.2.2a)    minimize Z = eTu + eTv

subject to the constraints

(5.2.2b)    [C  –In  In] (a, u, v)T = f

(5.2.2c)    ai unrestricted in sign, i = 1, 2, …, m

(5.2.2d)    ui ≥ 0, vi ≥ 0, i = 1, 2, …, n
In (5.2.2a), e is an n-vector, each element of which equals 1, and the superscript T denotes the transpose. In (5.2.2b), In is an n-unit matrix. The dual form of problem (5.2.2) is

(5.2.3a)    maximize z = fTw

subject to

(5.2.3b)    CTw = 0

(5.2.3c)    w ≤ e and w ≥ –e
where the n-vector w = (wi). The last two sets of constraints reduce to the following constraints

–1 ≤ wi ≤ 1, i = 1, 2, …, n

Then by defining bi = wi + 1, i = 1, 2, …, n, we get the following formulation of the problem

(5.2.4a)    maximize z = fT(b – e)

subject to the constraints CTb = CTe, or

(5.2.4b)    CTb = Σ(i=1 to n) CiT

(5.2.4c)    0 ≤ bi ≤ 2, i = 1, 2, …, n

CiT is the ith column of matrix CT and b = (bi). We see that (5.2.4) is a linear programming problem with non-negative bounded variables [15]. It is this dual form (5.2.4) that we solve here.
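For illustration, the r.h.s. of (5.2.4b) is just the sum of the columns CiT, i.e., the vector of column sums of C, and the bounds (5.2.4c) are uniform. A small sketch in plain C (0-based indexing; the function name is invented for this example):

/* Build the r.h.s. of (5.2.4b): rhs[j] = sum over i of C[i][j],
   i.e., CT*e, with C stored row-wise as an n*m array. The bounds
   of (5.2.4c) are simply 0 <= b[i] <= 2 for every i. */
void dual_rhs (int n, int m, const double *C, double *rhs)
{
    for (int j = 0; j < m; j++) {
        rhs[j] = 0.0;
        for (int i = 0; i < n; i++)
            rhs[j] += C[i * m + j];
    }
}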
5.3 Description of the algorithm
In (5.2.4c), for an optimum solution, the lower bounds of the parameters bi, i = 1, 2, …, n, are 0's and their upper bounds are 2. A dual simplex algorithm for solving this problem is described here. Without loss of generality, assume that matrix C is of rank m.

Theorem 5.1
A necessary and sufficient condition for a nonzero program for system (5.2.4) to be optimal is that (n – m) elements of vector b each have the value 0 (lower bound) or 2 (upper bound), and that the other m elements of b are basic variables [15].

Let a basis indicator set for vector b be the index set I(b) ⊂ {1, 2, …, n} with the property that the variables {bi | i ∈ I(b)} are basic variables. Let also the index sets L(b) and U(b) be indicators for the non-basic variables {bi | i ∉ I(b)} that are respectively at their lower bounds (= 0) and at their upper bounds (= 2). Let B denote the basis matrix and the basic solution be bB = (bBj), j = 1, 2, …, m. As usual, the simplex tableau is formed by calculating the non-basic vectors yi and their marginal costs (zi – fi), i = 1, 2, …, n. Then

(5.3.1)    yi = B–1CiT

and

(5.3.2)    zi – fi = fBTyi – fi
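For concreteness, (5.3.2) is one inner product per non-basic column; a small sketch in plain C (invented name, 0-based indexing):

/* Marginal cost of column i from (5.3.2): z_i - f_i = fB.y_i - f_i,
   where y_i = Binv * CiT from (5.3.1) has already been formed. */
double marginal_cost (int m, const double *fB, const double *yi,
                      double fi)
{
    double z = 0.0;
    for (int k = 0; k < m; k++)
        z += fB[k] * yi[k];
    return z - fi;
}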
The elements of fB = (fi), i ∈ I(b). Since some of the non-basic variables may be at their upper bound (= 2), from (5.2.4b), the basic solution is

(5.3.3)    bB = B–1 [Σ(i=1 to n) CiT – 2 Σ(i ∈ U(b)) CiT]

By denoting the first term on the right by bB0, and by (5.3.1),

(5.3.4)    bB = bB0 – 2 Σ(i ∈ U(b)) yi
Theorem 5.2
A basic feasible solution is maximal if the marginal costs for the non-basic variables (zi – fi), i ∉ I(b), satisfy the relations [15]

zi – fi ≥ 0, i ∈ L(b)
zi – fi ≤ 0, i ∈ U(b)

The solution of this dual problem is summarized as follows. A non-basic column may replace a basic column, may go from a zero bound to an upper bound, or may go from an upper bound to zero. The optimal solution is characterized by Theorem 5.1.

Theorem 5.3
At any stage of the computation, the residuals (ri) of (5.1.3) are themselves the marginal costs (zi – fi) for the same reference

zi – fi = ri, i = 1, 2, …, n

Proof:
For any i ∉ I(b), from (5.3.1), (5.3.2) is given by

zi – fi = fBTB–1CiT – fi = CiB–TfB – fi

where Ci is the ith row of C. Also, the elements of fB = (fj), j ∈ I(b). Thus B–TfB = (a1, …, am)T, and hence

zi – fi = Σ(j=1 to m) cij aj – fi = ri,    i = 1, 2, …, n

For i ∈ I(b); that is, for the reference equations, (zi – fi) = 0 and the residual ri = 0, which proves the theorem. As a result, the objective function z of (5.1.2) is given by

z = Σ(i=1 to n) |zi – fi|
Theorem 5.4
The solution vector of the L1 approximation problem is given by

(5.3.5)    aT = fBTB–1

where B–1 is the inverse of the basis matrix for the optimum solution of the programming problem and fB is associated with the optimal solution. The proof follows also from the fact that B–TfB = (a1, …, am)T.
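Recovering a from (5.3.5) is a plain vector-matrix product; a hedged sketch in plain C (invented name, dense row-wise storage, 0-based):

/* From (5.3.5), aT = fBT * Binv, i.e., a_j = sum_i fB[i]*Binv[i][j].
   Binv is the m by m inverse of the basis matrix, stored row-wise. */
void solution_from_basis (int m, const double *Binv,
                          const double *fB, double *a)
{
    for (int j = 0; j < m; j++) {
        a[j] = 0.0;
        for (int i = 0; i < m; i++)
            a[j] += fB[i] * Binv[i * m + j];
    }
}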
5.4 The dual simplex method
We start solving (5.2.4) by choosing any m linearly independent columns of CT to form the basis matrix B. The simplex tableau is then formed by calculating the vectors (yi) from (5.3.1) and the marginal costs (zi – fi) from (5.3.2), for i = 1, 2, …, n. The following steps constitute none other than a dual simplex algorithm for the non-negative bounded variables (bi), as described, for example, in Hadley ([15], pp. 387-394). The choice of the vector that leaves the basis is made first. Then the vector that enters the basis is determined. At first, any non-basic variable bi is given the lower bound 0.

(1) Scan the marginal costs. For all marginal costs (zi – fi) < 0, let their bi take the value 2 (upper bound). Indicate that by putting a mark "x" above the corresponding columns in the simplex tableau. Then bB is calculated from (5.3.4).
(2) Scan the elements of the basic solution bB. If all the elements bBs, s = 1, 2, …, m, are bounded, i.e., 0 ≤ bBs ≤ 2, an optimal solution is reached. Otherwise go to step (3).
(3) The first element bBi that is < 0 or > 2 is considered. The corresponding column in the basis is to be replaced by a non-basic column according to one of the steps (3.1-4) below. The scanning proceeds from element bBi+1 and back again from bB1.

Let, at any iteration, CjT be associated with bBi, and let CrT replace CjT in the basis. The following steps are employed.
Case (1) If bBi < 0, CrT is determined from

θr = max(θ1, θ2) < 0

where

θ1 = (zr – fr)/yir = maxs ((zs – fs)/yis), yis < 0, s ∈ L(b)

and

θ2 = (zr – fr)/yir = maxs ((zs – fs)/yis), yis > 0, s ∈ U(b)

(3.1) If θr = θ1, transfer the simplex tableau and go to step (2) above.
(3.2) If θr = θ2, transfer the simplex tableau and add 2 (upper bound) to the new bBi. Remove the mark "x" from column r, since br is no longer at its upper bound. Go to step (2).

Case (2) If bBi > 2, CrT is determined from

τr = min(τ1, τ2) > 0

where

τ1 = (zr – fr)/yir = mins ((zs – fs)/yis), yis > 0, s ∈ L(b)

and

τ2 = (zr – fr)/yir = mins ((zs – fs)/yis), yis < 0, s ∈ U(b)

(3.3) If τr = τ1, transfer the simplex tableau. Mark column CjT with "x" to indicate that bj is now at its upper bound (= 2). Subtract 2yj from bB and go to step (2).
(3.4) If τr = τ2, transfer the simplex tableau as usual. Remove the mark "x" from column CrT and place an "x" on column CjT. Add 2 to the new bBi and subtract 2yj from bB. Go to step (2).
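As an illustration of the Case (1) ratio test, here is a sketch only (plain C, 0-based arrays, all names invented for this example, not the book's routines):

#include <math.h>

/* Case (1): among non-basic columns s, pick the entering column r
   that maximizes the (negative) ratio (z_s - f_s)/y_is over the
   eligible signs: y_is < 0 for s in L(b), y_is > 0 for s in U(b).
   at_upper[s] is nonzero for s in U(b); returns -1 if none found. */
int theta_test (int n, const int *nonbasic, const int *at_upper,
                const double *zmf, const double *yi)
{
    int r = -1;
    double best = -HUGE_VAL;
    for (int s = 0; s < n; s++) {
        if (!nonbasic[s]) continue;
        int eligible = at_upper[s] ? (yi[s] > 0.0) : (yi[s] < 0.0);
        if (eligible) {
            double ratio = zmf[s] / yi[s];   /* theta_s, negative */
            if (ratio > best) { best = ratio; r = s; }
        }
    }
    return r;
}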
In view of (5.3.4), steps (3.1-4) are analyzed as follows. When a non-basic column CrT at its upper bound enters the basis, as in steps (3.2) and (3.4), it is no longer at its upper bound and, according to (5.3.4), we add 2yr to bB, or in effect, we add 2 to bBi. Also, when a basic column CjT leaves the basis to its upper bound, as in steps (3.3) and (3.4), we subtract 2yj from bB.

To illustrate the steps in this algorithm so far, consider this example.

Example 5.1
Solve the following system of equations in the L1 norm

(5.4.1)     a1 +  a2 =  3
            a1 –  a2 =  1
            a1 + 2a2 =  7
           2a1 + 4a2 = 11.1
           3a1 +  a2 =  7.2
In the initial data, column bB0 is the sum of the 5 columns of CT. Matrix B–1 is the inverse of the matrix formed by the first 2 columns of CT.

Initial Data
fT                           3     1     7   11.1   7.2
B–1            bB0          C1T   C2T   C3T   C4T   C5T
––––––––––––––––––––––––––––––––––––––––––––––––––––––––
0.5   0.5       8            1     1     1     2     3
0.5  –0.5       7            1    –1     2     4     1
––––––––––––––––––––––––––––––––––––––––––––––––––––––––

In the initial data, B–1 is not calculated yet and, in view of (5.3.3), vector bB0 is the algebraic sum of the five columns CiT. Tableau 5.4.1 is formed by multiplying the initial data by B–1.
However, we actually obtain Tableau 5.4.1 by pivoting over the first element in row 1 of the initial data and applying a Gauss-Jordan elimination step, as explained in Chapter 4. Then we pivot over the second element in row 2 of the obtained tableau and apply a Gauss-Jordan elimination step. We then calculate the marginal costs (zi – fi), i ∉ I(b). We find in Tableau 5.4.1 that (zi – fi) < 0 for i = 3, 4 and 5. Hence, the corresponding bi is given the upper bound 2, and this is identified by marks "x" on these columns. Vector bB is modified by subtracting 2(y3 + y4 + y5) from it. In Tableau 5.4.1, the elements of column bB are –5.5 and 1.5. The first element violates the inequalities 0 ≤ bB1 ≤ 2 of (5.2.4c). Column C1T is the column in the basis that is associated with this element. Hence, we replace this column in the basis.
Tableau 5.4.2 x x 3 1 7 11.1 7.2 T T T T bB B C1 C2 C3 C4 C5T ––––––––––––––––––––––––– –––––––––––––––––––––––––––– –2.75+2 = –0.75 C5T 0.5 0 0.75 1.5 1 –0.5 1 –1.25 (–2.5) 0 4.25 C2T ––––––––––––––––––––––––– –––––––––––––––––––––––––––– z = 5.75 zi – f i 0.1 0 –2.85 –2.8 0 2.28 1.12 τr fT
The vector that enters the basis is calculated from steps (3.1-4)
© 2008 by Taylor & Francis Group, LLC
124
Numerical Linear Approximation in C
above via calculating the ratios θr or τr. In Tableau 5.4.1 the parameters θr represent the ratios of the elements of (zi – fi) over the corresponding elements of the first row in that tableau. For this Tableau we apply step (3.2) and the pivot element is shown between brackets; that is, column C5T will replace column C1T in the basis. We calculate Tableau 5.4.2, by applying a Gauss-Jordan step and remove the mark “x” from the top of column C5T. For Tableau 5.4.2, both elements of bB violate the inequalities (5.2.4c), so we have the choice of replacing C5T or C2T in the basis. We shall replace C2T by C4T by observing step (3.4) above. Again, in Tableau 5.4.2, the parameters τr represent the ratios of the elements of (zi – fi) over the corresponding elements of the second row in that tableau. We get Tableau 5.4.3, in which both elements of bB satisfy the inequalities (5.2.4c). Hence, Tableau 5.4.3 gives the optimal solution for Example 5.1. Tableau 5.4.3 fT bB B ––––––––––––––––––––––––– 1.8 –2(0.6) = 0.6 C5T –1.7 +2–2(–0.4) = 1.1 C4T ––––––––––––––––––––––––– z = 3.23 zi – f i
x x 3 1 7 11.1 7.2 T T T T C1 C2 C3 C4 C5T –––––––––––––––––––––––––––– 0.2 0.6 0 0 1 0.2 –0.4 0.5 1 0 –––––––––––––––––––––––––––– 0.66 –1.12 –1.45 0 0
Note also that the elements of the last row in Tableau 5.4.3 are the residuals of system (5.4.1). From Theorem 5.3, the sum of their absolute vales = 0.66 + 1.12 + 1.45 + 0 + 0 = 3.23 = z. From (5.3.5) or by solving the fifth and fourth equations of (5.4.1) of Example 5.1, we get the solution of the problem; a = (a1, a2)T = (1.77, 1.89)T. 5.5
Modification to the algorithm
We note in the above section that the process of adding 2yr and/or subtracting 2yj vectors is done after the simplex tableau has been changed by applying a Gauss-Jordan elimination step. The efficiency of the algorithm is improved by skipping certain © 2008 by Taylor & Francis Group, LLC
Chapter 5: Linear L1 Approximation
125
intermediate Gauss-Jordan iterations [2]. This is done with the following modifications: (1) Reverse the order of the two processes. The process of adding the 2yr and/or subtracting the 2yj vectors from bB is done first. Then it is followed by the Gauss-Jordan step. (2) More importantly, the changing of the tableau may be postponed until some non-basic columns each enter then leave the basis. This is because the corresponding bBi does not yet satisfy the bounds 0 ≤ bBi ≤ 2 of (5.2.4c). This process continues until the last non-basic column for which bBi satisfies this inequality is found. This ensures that in replacing a basic column, the maximum decrease in the objective function z has been achieved. Let us assume that we start the procedure of the previous paragraph from a basic solution given by a simplex tableau, say tableau 1. Let us also assume that the (k – 1) non-basic columns have each entered, then left the basis for which bBi violated the inequalities (5.2.4c). Assume also that a kth column entered the basis without leaving, because bBi satisfied the inequalities (5.2.4c). Hence, in this method, we have skipped calculating the (k – 1) intermediate tableaux 2, 3, …, k that correspond to the (k – 1) non-basic columns that each entered, then left the basis. Only tableau k + 1 is calculated. All necessary data needed for pursuing this procedure is contained in the ith row of tableau 1 and the marginal costs of this tableau. We need to know the (bBi) and the pivot elements (yir) for the (k – 1) intermediate iterations. We also note that each intermediate tableau is obtained by pivoting over an element yir in row i of the previous (intermediate) tableau. As a result, the ith element of the basic solution and the pivot elements for intermediate tableaux t = 2, 3, …, k, are (bBi/yi,t–1) and (yit/yi,t–1) respectively. We also know that the (k + 1)th tableau is the tableau that we would have obtained, had we changed tableau 1 once, by pivoting over yik. Let (5.5.1a) (5.5.1b)
b'Bi = bBi and yi,t = yi,t, t = 1 b'Bi = b'Bi/yi,t–1 and yi,t = yi,t/yi,t–1, t = 2, 3, …, k
where b'Bi and yi,t are respectively the ith element of b'B and the pivot © 2008 by Taylor & Francis Group, LLC
126
Numerical Linear Approximation in C
element in tableaux t = 1, 2, …, k. Also, b'B is bB with vector(s) 2yr added to it and/or vector(s) 2yj subtracted from it. Again, let us start an iteration from a basic solution given by, say, tableau 1. This algorithm modifies the algorithm of Section 5.4 as follows. Starting step Scan bBk for k = 1, 2, …, m. Consider the smallest of parameters bBk < 0 and (2 – bBk). Let CrT replaces CjT and each time calculate b'Bi from (5.5.1). This is done until the calculated b'Bi satisfies 0 ≤ b'Bi ≤ 2. Change the simplex tableau and re-scan the elements of bB. This is repeated until all the elements bBk satisfy 0 ≤ bBk ≤ 2. Steps (3.1-4) of Section 5.4 are now modified as follows. Case (1') If bBi < 0, the sequence of the non-basic columns CrT that enter the basis and may leave the basis, corresponds to the parameters θr = (zr – fr)/yir < 0, r ∉ I(b), starting from the algebraically larger ratio. (3.1') If yir < 0, b'B is not changed. Go to step (3.5'). (3.2') If yir > 0, add 2yr to b'B. Remove the “x” from CrT indicating that br is no more at its upper bound. Go to step (3.5'). Case (2') If bBi > 2, the sequence of the non-basic columns CrT that enter the basis and may leave the basis corresponds to the parameters τr = (zr – fr)/yir > 0, r ∉ I(b), starting from the smallest ratio. (3.3') If yir > 0, subtract 2yj from b'B and place a mark “x” over CjT indicating that it is now at its upper bound. Go to step (3.5'). (3.4') If yir < 0, add 2yr and subtract 2yj from b'B. Remove the “x” from CrT and place an “x” over CjT. Go to step (3.5'). (3.5') Calculate b'Bi from (5.5.1a, b). If the resulting b'Bi still violates 0 ≤ b'Bi ≤ 2, go to either case (1') or case (2'), according to whether b'Bi < 0 or > 2 respectively. If the resulting b'Bi satisfies 0 ≤ b'Bi ≤ 2, change the tableau. Go to the starting step. 5.6
Occurrence of degeneracy In the dual simplex method degeneracy arises when there exist
© 2008 by Taylor & Francis Group, LLC
Chapter 5: Linear L1 Approximation
127
one or more non-basic column r, for which the ratio (zr – fr)/yir = 0. It should be decided whether it is τr = 0 or θr = 0. This is resolved as follows. If br = 0, we replace (zr – fr) by a small number δ, and if br = 2, we replace (zr – fr) by –δ. We then calculate (zr – fr)/yir, and it will definitely be either > or < 0, and the degeneracy is resolved. The small number δ represents the round-off error in floating-point arithmetic operations, denoted in our software by PREC, and set to 10–6 and 10–16 for single- and double-precision computation respectively (see Sections 2.4 and 4.7.1). Example 5.2 Solve the following system of equations in the L1 norm. –2a1 8a1 –8a1 21a1 12a1 –32a1
(5.6.1)
= 6 9a2 = 6 = 24 + 18a2 = 3 – 9a2 = –6 – 13.5a2 = –9 +
For simplicity of presentation, we are not using pivoting in obtaining part 1 of the solution. As indicated above, in part 1 of this problem, columns 1 and 2 form the initial basis matrix. Initial Data ΣiCiT
fT
––––––––––––– –1 4.5 –––––––––––––
6 6 24 3 –6 –9 T T T T C1 C2 C3 C4 C5T C6T ––––––––––––––––––––––––––– –2 8 –8 21 12 –32 0 9 0 18 –9 –13.5 –––––––––––––––––––––––––––
Tableau 5.6.2 (part 1) fT bB0 ––––––––––––– 2.5 0.5 ––––––––––––– © 2008 by Taylor & Francis Group, LLC
6 6 24 3 –6 –9 C1T C2T C3T C4T C5T C6T ––––––––––––––––––––––––––– 1 0 4 –2.5 –10 10 0 1 0 2 –1 –1.5 –––––––––––––––––––––––––––
128
Numerical Linear Approximation in C
Tableau 5.6.1 (not shown) is obtained by pivoting over the first nonzero element in the first row of the initial data and applying a Gauss-Jordan step. Tableau 5.6.2 is obtained by pivoting over the first nonzero element in the second row in Tableau 5.6.1 and applying a Gauss-Jordan step. Tableau 5.6.3 is Tableau 5.6.2, with the marginal costs (zi – fi), i = 1, 2, …, 6, added to it, and with bB calculated. We have an initial degeneracy, as τr = (zi – fi)/yir = 0, for r = 3. Since b3 = 0, we replace the 0 value of (z3 – f3) by +δ and (z3 – f3)/y13 > 0. From τr = (zi – fi)/yir in Tableau 5.6.3, the sequence of the columns that enter the basis and may then leave the basis is C3T, C4T, C5T, C6T. The pivot y13 = 4 > 0, and according to step (3.3'), 2y1 is subtracted from bB, and a mark “x” is placed above C1T. The new b'B1 = (27.5 – 2)/4 = 25.5/4 > 2, and we still have case (2') of the algorithm. Now C3T leaves the basis and C4T enters the basis. From (5.5.1b), the pivot y14 = –2.5/4 < 0. Hence, from step (3.4'), 2y3 is subtracted from and 2y4 is added to the previously calculated b'B. A mark “x” is placed over C3T and the “x” over C4T is removed. The new b'B1 = (25.5 – 8 – 5)/(–2.5), which = 12.5/(–2.5) < 0, and we now have case (1') of the algorithm. Tableau 5.6.3 (part 2) x x 6 6 24 3 –6 –9 T T T T C1 C2 C3 C4 C5T C6T bB ––––––––––––– ––––––––––––––––––––––––––– 2.5+5+20 = 27.5 1 0 4 –2.5 –10 10 0.5–4+2 = –1.5 0 1 0 2 –1 –1.5 ––––––––––––– ––––––––––––––––––––––––––– z = 126 zi – f i 0 0 0 –6 –60 60 τr δ/4 2.4 6 6 fT
As C4T leaves the basis, either C5T or C6T enters the basis as they both have the same ratio τr. Let C5T enter the basis. The pivot y15 = –10/(–2.5) > 0, and from step (3.2') of case (1') we add 2y5 to the previously obtained b'B and remove the “x” from C5T. The obtained b'B1 = (12.5 – 20)/(–10) = 0.75, which is between 0 and 2. We thus change the tableau and get Tableau 5.6.4. That ends this iteration. © 2008 by Taylor & Francis Group, LLC
Chapter 5: Linear L1 Approximation
129
Tableau 5.6.4 fT bB ––––––––––––– 0.75 1.25 ––––––––––––– z = 39 zi – f i
x x 6 6 24 3 –6 –9 C1T C2T C3T C4T C5T C6T ––––––––––––––––––––––––––– –0.1 0 –0.4 0.25 1 –1 –0.1 1 –0.4 2.25 0 –2.5 ––––––––––––––––––––––––––– –6 0 –24 9 0 0
In Tableau 5.6.4, both the elements of bB are between 0 and 2 and Tableau 5.6.4 is optimal. Hence, this example requires 3 iterations, 2 in part 1 and 1 in part 2. The solution vector a is calculated from the fifth and second equations of system (5.6.1). These two equations correspond to the final basis (in Tableau 5.6.4) a1 = 0 and a2 = 2/3 From Theorem 5.3, the residuals are the marginal costs r = (–6, 0, –24, 9, 0, 0)T and z = 39. 5.7
A significant property of the L1 approximation
Assume that the n by m matrix C of Ca = f is of rank m, m < n. Then the best L1 approximation of system Ca = f has m equations with zero residuals ri. This property is a direct consequence of using the dual form of the linear programming for this problem, since the m equations in Ca = f associated with the basis matrix have zero marginal costs, (zi – fi), i.e., zero residuals ri. This means that if n discrete points in the x-y plane are being approximated by a plane curve in the L1 norm, the curve will pass through m of the discrete points. This is known as the interpolation property of the L1 approximation, described in Section 2.3.1. If the rank of matrix C is k, k ≤ m < n, there exists an L1 solution to system Ca = f such that there are k equations in Ca = f for which the residuals are 0’s. However, as noted in Section 5.1.1, Spath ([24], p. 60) argues that while optimal solutions calculated by simplex © 2008 by Taylor & Francis Group, LLC
130
Numerical Linear Approximation in C
methods produce this property, not every optimal solution has this characterization. For the purpose of numerical stability for ill conditioned systems Ca = f, we employ a triangular decomposition to the basis matrix. This is described in the next section. 5.8
Triangular decomposition of the basis matrix
In our modification to part 1 of the algorithm (Section 5.5), we apply m Gauss-Jordan elimination steps with complete pivoting. In part 2, a triangular decomposition method for the basis matrix is used [3, 4]. This method is a variation of the method of Bartels et al. [12]. Without loss of generality, let rank(C) = m. Let at the end of part 1, the basis be denoted by I0(b), the inverse of the basis matrix be B0–1, the basic solution be bB0, and the columns of the simplex tableau be Y0. In other words, the programming problem (5.2.4) is described by the 5-tuple {I0(b), I, B0–1, bB0, Y0} where an m-unit matrix I is added. For part 2, the inverse of the basis matrix as well as the other parameters in the 5-tuple are updated. The inverse of the basis matrix B–1 may be updated as B-1 = EB0–1 E is a nonsingular m-square matrix and E–1 is decomposed into LE–1 = P L is a nonsingular m-square matrix and P is a nonsingular upper triangular m-matrix. Now, P and B0–1 are stored and updated. For each iteration in part 2, the above 5-tuple change to {I(b), P, G–1, VB, X}
(5.8.1) In (5.8.1) (5.8.1a)
G–1 = LB0–1, VB = LbB0, X = LY0
(5.8.2a)
B–1 = P–1G–1
The basic solution is given by © 2008 by Taylor & Francis Group, LLC
Chapter 5: Linear L1 Approximation
(5.8.2b)
bB = P
–1
131
–1
VB – 2 ∑ xj + 2 ∑ xr = P VB j
r
The summation over j is for vectors that go from zero bound to an upper bound, and the summation over r are for vectors that go from upper bound to zero bound. Vector bB is calculated from PbB = VB by back substitution. Vectors (yi) in the simplex tableau are given by yi = P–1xi, i = 1, 2, …, n
(5.8.2c)
The marginal costs which are the residuals are, as usual, given by (5.8.2d)
ri = fBTyi – fi, i = 1, 2, …, n
where fB is associated with the basis matrix. In part 2, the non-basic column r that enters the basis is determined from

(5.8.3)    αr = ±mins |rs/yis|, s ∉ I(b)

where αr denotes τr or θr of Section 5.4. If |αr| ≤ 1, the residuals in (5.8.2d) are updated as

r'j = rj – αr yij, j ∉ I(b)

However, if |αr| > 1, the r'j are calculated from (5.8.2c) and (5.8.2d); that is, r'j = uTxj – fj, where u is the solution of PTu = fB. For the elements yis, s ∉ I(b), from (5.8.2c), yis = wTxs, where wT is row i of P–1 and PTw = ei, where ei is column i of an m-unit matrix.

Let now {I'(b), P', G'–1, V'B, X'} be the update of (5.8.1) for which r ∉ I(b) replaces s ∈ I(b). Then

I'(b) = {i1, …, is–1, is+1, …, im, r}

and

P' = {p1, …, ps–1, ps+1, …, pm, xr}

P' is an upper Hessenberg matrix. Its (m – s) nonzero sub-diagonal
elements are annihilated by (m – s) Gauss elimination steps with row permutation. Finally, the solution of the L1 problem is given by Theorem 5.4

(5.8.4)    aT = fBTB–1

where B–1 is the inverse of the basis matrix given by (5.8.2a), for the optimum solution of the programming problem.
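Since P is upper triangular, the system PbB = VB in (5.8.2b) is solved by simple back substitution. The book's LA_l1_pslv() below does this with a packed one-dimensional copy of P; here is an unpacked sketch for clarity, with an invented name:

/* Solve P*b = v by back substitution; P is m by m upper
   triangular, stored row-wise in a flat array of length m*m. */
void back_substitute (int m, const double *P, const double *v,
                      double *b)
{
    for (int i = m - 1; i >= 0; i--) {
        double s = v[i];
        for (int j = i + 1; j < m; j++)
            s -= P[i * m + j] * b[j];   /* subtract known terms */
        b[i] = s / P[i * m + i];        /* divide by the diagonal */
    }
}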
5.9 Arithmetic operations count
In the majority of algorithms, the number of additions/subtractions in the arithmetic operations is comparable to the number of multiplications/divisions (m/d). Yet the CPU time of an addition or a subtraction is very small compared with the CPU time taken to execute a multiplication or a division. Hence, in this section we shall count only the number of (m/d) for this algorithm.

For part 1 of this algorithm, each iteration requires an average of (nm + 0.5m2) m/d. This is to apply a Gauss-Jordan elimination step to the simplex tableau and to calculate the inverse of the basis B0–1. At the end of part 1, from (5.8.2d), the residuals (ri), i ∉ I(b), are calculated, which requires m(n – m) m/d. Then bB0 is calculated, which requires m×n additions and subtractions, which are small compared with other m/d and may be neglected.

For part 2, from (5.8.2b), bB is calculated and it requires 0.5m2 m/d. If the residuals (ri), i ∉ I(b), are calculated from (5.8.3), (n – m) m/d are required. Otherwise, approximately (mn – 0.5m2) m/d are required to calculate vector u and the inner product uTxi, for i ∉ I(b). The (n – m) elements yis, s ∉ I(b), need an average of 0.5mn m/d to calculate vector w and the inner product wTxs. The parameter αr requires (n – m) divisions. For more detail, see [3].

We come to the following conclusion. Each iteration in part 2 needs an average of about mn + 1.17m2 m/d if |αr| ≤ 1 and about 2mn + 0.67m2 m/d if |αr| > 1. Numerical experience shows that for most test examples conducted in [3], most of the αr satisfy |αr| < 1. Very few examples have |αr| > 1 for the first few iterations; the |αr| then become ≤ 1 for the remaining iterations. Hence, in practice, the arithmetic operations count per iteration in part 2 is comparable to that of part 1. At last, the solution of the L1
problem as given by (5.8.4), where matrix B–1 and vector fB are associated with the optimal solution, needs 1.5m2 m/d.

Our final conclusion is that in part 1 of the algorithm the arithmetic operations count per iteration is [m(n – m)] m/d. In part 2, the arithmetic operations count per iteration is on the average approximately [mn + 1.17m2] m/d if |αr| ≤ 1 and [2mn + 0.67m2] m/d if |αr| > 1. That means that the arithmetic operations count per iteration, in either part 1 or part 2 of the algorithm, is a linear function of n and m2. It is also known that in linear programming problems, the number of iterations is proportional to m.

A number of examples were solved in [3] on the IBM 360/67 computer and their CPU seconds were recorded (Table 1 in [3], p. 226). For problem 4 in that table, matrix C(n×m) has the values n = 23, 53, 103 and 203, for m = 6. In problem 5, matrix C(n×m) has the values n = 21, 51, 101 and 201, for m = 11.

Table 5.1 (problem 4 in [3])
––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Example  C(n×m)  Iterations  CPU      CPU/iter  (CPU/iter)/n
                             seconds            in 10–3 sec.
––––––––––––––––––––––––––––––––––––––––––––––––––––––––
1         23×6       9       0.08     0.0089    0.386
2         53×6      10       0.21     0.0210    0.396
3        103×6      12       0.48     0.0400    0.388
4        203×6      11       1.03     0.0936    0.461
––––––––––––––––––––––––––––––––––––––––––––––––––––––––

Table 5.2 (problem 5 in [3])
––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Example  C(n×m)  Iterations  CPU      CPU/iter  (CPU/iter)/n
                             seconds            in 10–3 sec.
––––––––––––––––––––––––––––––––––––––––––––––––––––––––
1         21×11     18       0.24     0.0133    0.633
2         51×11     27       0.88     0.0326    0.639
3        101×11     29       1.71     0.0590    0.584
4        201×11     36       5.12     0.1422    0.708
––––––––––––––––––––––––––––––––––––––––––––––––––––––––

In Tables 5.1 and 5.2, we show that the CPU times/iteration for these two problems are approximately proportional to the parameter n.
In other words, (CPU times/iteration)/n is almost the same for the 4 cases in each problem.

5.10 Numerical results and comments
The functions LA_L1() and LA_Lone() both implement this algorithm. LA_Lone() is the same as LA_L1(), except that LA_Lone() does not use a triangular decomposition of the basis matrix. LA_Lone() implements the algorithm described in [2] and LA_L1() implements the algorithm [3] given in FORTRAN IV in [4]. LA_Lone() is faster than LA_L1(), and it is used by other programs in this book. DR_L1() and DR_Lone() both test the following examples.

Example 5.3
This example is the same as that solved in the L1 norm in ([2], pp. 848-849). The obtained results are z = 90, the residual vector r = (–5.6, –4.0, 48.0, –22.4, 0, 0, 10)T and a = (–0.2, 0.4, 0.0)T. LA_Lone() and LA_L1() yield the same values of z and r as obtained in [2], in 3 iterations. However, they compute a different solution vector, a = (0, 0.6, –0.2)T, indicating that the solution is not unique.

Example 5.4
For 0 ≤ x ≤ 1, approximate in the L1 norm the function f(x) = exp(x), for x ≤ 0.5, and f(x) = exp(0.5), for x > 0.5. The approximating function is

a1 + a2 sin(x) + a3 cos(x) + a4 sin(2x) + a5 cos(2x) + a6 sin(3x) + a7 cos(3x) + a8 sin(4x) + a9 cos(4x) + a10 sin(5x) + a11 cos(5x)

This problem is the same as problem 2 of example 5 in [3]. The coordinates of x are taken at 0.02 intervals, i.e., (0.0, 0.02, 0.04, …, 1.0). Matrix C in Ca = f is a 51 by 11 matrix and is ill-conditioned. When the solution is computed in single-precision (s.p.), both LA_L1() and LA_Lone() give rank(C) = 8 and z = 0.3242. However, the elements of vector a agree only to 2 decimal places; a = (–7.97, 0, 0, 7.40, 18.72, 0, –19.64, –3.83, 15.03, 0.39, –5.15)T. This indicates that for ill-conditioned problems in s.p., LA_L1() gives more accurate results. In double-precision (d.p.), LA_L1() and LA_Lone() give identical results (a and r), in 27 iterations; rank(C) = 11 and z = 0.1564. It is
observed, however, that although the value of z in d.p. is about half the value of z in s.p., the values of the elements of a in d.p. are very large compared with those of a in s.p., again indicating that matrix C for this problem is ill-conditioned.

Example 5.5
This is the curve fitting Example 2.1. We have the set of 8 points in the x-y plane: (1, 2), (2, 2.5), (3, 2), (4, 6.5), (5, 3.5), (6, 4.5), (7, 6), (8, 7), with the fourth point as an outlier. It is required to approximate this discrete set in the L1 norm by the vertical parabola y = a1 + a2x + a3x2, where a1, a2 and a3 are unknowns. In this example, the equation Ca = f is given by (2.2.2). The result, computed by both LA_L1() and LA_Lone(), is z = 4.857 and r = (0, –0.429, 0.357, –3.643, 0.071, 0, –0.357, 0)T. The solution vector a = (2.143, –0.25, 0.107)T is unique. The fitting parabola is given by the solid curve in Figure 2-1.

This example is solved again where the given 8 points are approximated by the curve y = a1 + a2x + a3x2 + a4x2, where a1, a2, a3 and a4 are to be calculated. Matrix C is an 8 by 4 matrix, where the first 3 columns are the same as in equation (2.2.2) and the fourth column is the same as the third column; that is, matrix C is rank deficient. The results r and z are the same as in the previous paragraph, but the solution vector a = (2.143, –0.25, 0.107, 0.0)T is not unique, due to the rank deficiency of matrix C. We shall be using this example again in other chapters of this book.

Finally, we note that in our experience, cycling (as a result of the occurrence of degeneracy) never occurred, and no failure of our programs has been recorded. We also observe that our algorithm compares favorably with that of Barrodale and Roberts [8, 9]. See the comments later. The technique used in this algorithm, i.e., solving a linear programming problem with non-negative bounded variables, has proved to be valuable in solving other problems, such as the L1 solution of overdetermined systems of linear equations with bounded variables of Chapter 7 and the bounded and L1 bounded solutions of underdetermined systems of linear equations of Chapter 21.

A large number of examples have been tested by the current
method [3, 4] and compared with those of Barrodale and Roberts (BR) [9]. A typical result is that the total CPU time for the current method is about 15% higher than that of (BR). Spath ([24], pp. 74, 82) argues that this is partially true. We also note that when we used partial pivoting in part 1 of our algorithm LA_L1(), instead of complete pivoting, our method was slightly faster. As noted earlier, it was concluded by Armstrong and Godfrey [6] that the two algorithms, ours and that of Barrodale and Roberts (BR) [9], are identical except for the starting basis.

Bloomfield and Steiger [14] presented a unified treatment of the role of the L1 approximation in many areas, such as statistics and numerical analysis. They also discussed and described different algorithms for solving the L1 approximation problem. In their book ([14], pp. 219-236), they compared the CPU times and the iteration counts of three algorithms: those of Barrodale and Roberts (BR) [9], Bartels et al. (BCS) [10] and Bloomfield and Steiger (BS) [13]. The comparison was over a variety of problems, some deterministic and some random. The coefficient matrices have row values n ranging from 50 to 600, and some from 100 to 900, in increments, and column values k = 3, 4, …, 8. A characteristic difference was observed between (BR) and the other two: for all k, as n increases, the CPU times of (BR) increase almost proportionally to n2, while for the other two the CPU times increase linearly. A comparison was also made between the (BCS) and (BS) algorithms on some other data. The CPU time for the (BS) algorithm was between one half and one third of that of (BCS).

Spath [24] collected data (matrix C and vector f) for 42 examples that were tested on a number of routines, after converting them to FORTRAN 77, on the IBM PC AT 102. He compared them with respect to computer storage, CPU time and accuracy of results. The routines were those of Robers and Robers (RR) [21] (believed to be the oldest published routine and needing the most computer storage), Barrodale and Roberts (BR) [8], Armstrong et al. (AFK) [5], ours (NA) [4], Bloomfield and Steiger (BS) [13] and Bartels and Conn (BC) [11]. The last algorithm is for the constrained L1 approximation problem, and was modified for the case when the constraints do not exist. Spath displayed the results of the last 5 methods ([24], pp. 79-82).
He found that the algorithm of (BS) needs two arrays of size nm. However, unlike other methods, matrix C is not overwritten. The (BR) method needs the smallest storage and the fewest lines of code. He also found that the (AFK) method was the fastest of all available algorithms. The (BS) method is the fastest of all algorithms only if n is at least two orders of magnitude larger than m. Due to the number of lines of code executed by the (BC) method, it is slower than the (BS) and (BR) methods. For realistic sizes of n and m, (BR) does best after (AFK) and (BS). Spath chose the method of Armstrong et al. [5] as the best, followed by Abdelmalek [4] and Bloomfield and Steiger [13], in comparison with Barrodale and Roberts [9]. Moreover, among the algorithms tested by Spath, only the methods of Barrodale and Roberts [8, 9] and of Abdelmalek [3, 4] can deal with the case when matrix C is rank deficient, i.e., has rank(C) < m. Spath concluded, however, that the results of the different algorithms are sensitive to the value of the tolerance parameter EPS.
References
1. Abdelmalek, N.N., On the discrete L1 approximation and L1 solutions of overdetermined linear equations, Journal of Approximation Theory, 11(1974)38-53.
2. Abdelmalek, N.N., An efficient method for the discrete linear L1 approximation problem, Mathematics of Computation, 29(1975)844-850.
3. Abdelmalek, N.N., L1 solution of overdetermined systems of linear equations, ACM Transactions on Mathematical Software, 6(1980)220-227.
4. Abdelmalek, N.N., Algorithm 551: A FORTRAN subroutine for the L1 solution of overdetermined systems of linear equations [F4], ACM Transactions on Mathematical Software, 6(1980)228-230.
5. Armstrong, R.D., Frome, E.L. and Kung, D.S., Algorithm 79-01: A revised simplex algorithm for the absolute deviation curve fitting problem, Communications in Statistics – Simulation and Computation, B8(1979)175-190.
6. Armstrong, R.D. and Godfrey, J., Two linear programming algorithms for the linear discrete L1 norm problem, Mathematics of Computation, 33(1979)289-300.
7. Barrodale, I. and Phillips, C., An improved algorithm for discrete Chebyshev linear approximation, Proceedings of the Fourth Manitoba Conference on Numerical Mathematics, Hartnell, B.L. and Williams, H.C. (eds.), Winnipeg, Manitoba, Canada, pp. 177-190, 1975.
8. Barrodale, I. and Roberts, F.D.K., An improved algorithm for discrete l1 approximation, SIAM Journal on Numerical Analysis, 10(1973)839-848.
9. Barrodale, I. and Roberts, F.D.K., Algorithm 478: Solution of an overdetermined system of equations in the l1 norm [F4], Communications of ACM, 17(1974)319-320.
10. Bartels, R.H., Conn, A.R. and Sinclair, J.W., Minimization techniques for piecewise differentiable functions: The l1 solution of an overdetermined linear system, SIAM Journal on Numerical Analysis, 15(1978)224-241.
11. Bartels, R.H. and Conn, A.R., Algorithm 563: A program for linearly constrained discrete l1 problems, ACM Transactions on Mathematical Software, 6(1980)609-614.
12. Bartels, R.H., Stoer, J. and Zenger, Ch., A realization of the simplex method based on triangular decomposition, Handbook for Automatic Computation, Vol. II: Linear Algebra, Wilkinson, J.H. and Reinsch, C. (eds.), Springer-Verlag, New York, pp. 152-190, 1971.
13. Bloomfield, P. and Steiger, W.L., Least absolute deviations curve-fitting, SIAM Journal on Scientific and Statistical Computing, 1(1980)290-301.
14. Bloomfield, P. and Steiger, W.L., Least Absolute Deviations: Theory, Applications, and Algorithms, Birkhauser, Boston, 1983.
15. Hadley, G., Linear Programming, Addison-Wesley, Reading, MA, 1962.
16. Madsen, K., Nielsen, H.B. and Pinar, M.C., New characterizations of l1 solutions to overdetermined systems of linear equations, Operations Research Letters, 16(1994)159-166.
17. Narula, S.C. and Wellington, J.F., An efficient algorithm for the MSAE and the MMAE regression problems, SIAM Journal on Scientific and Statistical Computing, 9(1988)717-727.
18. Pinkus, A.M., On L1-Approximation, Cambridge University Press, London, 1989.
19. Powell, M.J.D., Approximation Theory and Methods, Cambridge University Press, London, 1981.
20. Robers, P.D. and Ben-Israel, A., An interval programming algorithm for discrete linear L1 approximation problems, Journal of Approximation Theory, 2(1969)323-336.
21. Robers, P.D. and Robers, S.S., Algorithm 458: Discrete linear L1 approximation by interval linear programming, Communications of ACM, 16(1973)629-631.
22. Rosen, J.B., Park, H., Glick, J. and Zhang, L., Accurate solution to overdetermined linear equations with errors using L1 norm minimization, Computational Optimization and Applications, 17(2000)329-341.
23. Sklar, M.G. and Armstrong, R.D., Least absolute value and Chebyshev estimation utilizing least squares results, Mathematical Programming, 24(1982)346-352.
24. Spath, H., Mathematical Algorithms for Linear Regression, Academic Press, English Edition, London, 1991.
25. Usow, K.H., On L1 approximation II: Computation for discrete functions and discretization effects, SIAM Journal on Numerical Analysis, 4(1967)233-244.
26. Wagner, H.M., Linear programming techniques for regression analysis, Journal of American Statistical Association, 54(1959)206-212.
27. Watson, G.A., Approximation Theory and Numerical Methods, John Wiley & Sons, New York, 1980.
28. Wesolowsky, G.O., A new descent algorithm for the least absolute value regression problem, Communications in Statistics – Simulation and Computation, B10(1981)479-491.
5.11 DR_L1
/*-------------------------------------------------------------------
DR_L1
---------------------------------------------------------------------
This program is a driver for the function LA_L1(), which solves an
overdetermined system of linear equations in the L1 (L-One) norm.
It uses a dual simplex method and a triangular factorization of the
basis matrix.

The overdetermined system has the form
    c*a = f

"c" is a given real n by m matrix of rank k, k <= m <= n.
"f" is a given real n vector.
"a" is the solution m vector.

This program contains 3 examples whose results appear in the text.
-------------------------------------------------------------------*/
#include "DR_Defs.h"
#include "LA_Prototypes.h"

#define Na 7
#define Ma 3
#define Nb 51
#define Mb 11
#define Nc 8
#define Mc 4
void DR_L1 (void)
{
    /*----------------------------------------
      Constant matrices/vectors
    ----------------------------------------*/
    static tNumber_R c1init[Na][Ma] =
    {
        {  -2.0,   0.0,  -2.0 },
        {   8.0,   9.0,  17.0 },
        {  36.0,  18.0,  54.0 },
        {  -8.0,   0.0,  -8.0 },
        {  21.0,  18.0,  39.0 },
        {  12.0,  -9.0,   3.0 },
        { -32.0, -13.5, -45.5 }
    };
    static tNumber_R f1[Na+1] = { NIL, 6.0, 6.0, -48.0, 24.0, 3.0,
        -6.0, -9.0 };

    static tNumber_R c3init[Nc][Mc] =
    {
        { 1.0, 1.0,  1.0,  1.0 },
        { 1.0, 2.0,  4.0,  4.0 },
        { 1.0, 3.0,  9.0,  9.0 },
        { 1.0, 4.0, 16.0, 16.0 },
        { 1.0, 5.0, 25.0, 25.0 },
        { 1.0, 6.0, 36.0, 36.0 },
        { 1.0, 7.0, 49.0, 49.0 },
        { 1.0, 8.0, 64.0, 64.0 }
    };
    static tNumber_R f3[Nc+1] = { NIL, 2.0, 2.5, 2.0, 6.5, 3.5, 4.5,
        6.0, 7.0 };

    /*----------------------------------------
      Variable matrices/vectors
    ----------------------------------------*/
    tMatrix_R ct = alloc_Matrix_R (MM_COLS, NN_ROWS);
    tVector_R f  = alloc_Vector_R (NN_ROWS);
    tVector_R r  = alloc_Vector_R (NN_ROWS);
    tVector_R a  = alloc_Vector_R (MM_COLS);
c1 c3
= init_Matrix_R (&(c1init[0][0]), Na, Ma); = init_Matrix_R (&(c3init[0][0]), Nc, Mc);
int int tNumber_R
m, n, irank, iter; i, j, Iexmpl; dx, x1, x2, x3, x4, x5, g, g1, z;
eLaRc
rc = LaRcOk;
prn_dr_bnr ("DR_L1, L1 Solution of an Overdetermined System " " of Linear Equations"); © 2008 by Taylor & Francis Group, LLC
142
Numerical Linear Approximation in C
for (Iexmpl = 1; Iexmpl <= 3; Iexmpl++) { switch (Iexmpl) { case 1: n = Na; m = Ma; for (i = 1; i <= n; i++) { f[i] = f1[i]; for (j = 1; j <= m; j++) ct[j][i] = c1[i][j]; } break; case 2: n = Nb; m = Mb; dx = 0.02; g1 = exp (0.5); for (i = 1; i <= n; i++) { x1 = dx * (tNumber_R)(i-1); x2 = x1 + x1; x3 = x1 + x2; x4 = x2 + x2; x5 = x2 + x3; ct[1][i] = 1.0; ct[2][i] = sin (x1); ct[3][i] = cos (x1); ct[4][i] = sin (x2); ct[5][i] = cos (x2); ct[6][i] = sin (x3); ct[7][i] = cos (x3); ct[8][i] = sin (x4); ct[9][i] = cos (x4); ct[10][i] = sin (x5); ct[11][i] = cos (x5); g = exp (x1); f[i] = g1; if (g < g1) f[i] = g; } break; case 3: © 2008 by Taylor & Francis Group, LLC
Chapter 5: DR_L1 n = Nc; m = Mc; for (i = 1; i <= n; i++) { f[i] = f3[i]; for (j = 1; j <= m; j++) ct[j][i] = c3[i][j]; } break; default: break; } prn_algo_bnr ("L1"); prn_example_delim(); PRN ("Example #%d: Size of Matrix \"c\", %d by %d\n", Iexmpl, n, m); prn_example_delim(); PRN ("L1 Solution of an Overdetermined System\n"); prn_example_delim(); PRN ("r.h.s. Vector \"f\"\n"); prn_Vector_R (f, n); PRN ("Transpose of Coefficient Matrix, \"ct\"\n"); prn_Matrix_R (ct, m, n); rc = LA_L1 (m, n, ct, f, &irank, &iter, r, a, &z); if (rc >= LaRcOk) { PRN ("\n"); PRN ("Results of the L1 Approximation\n"); PRN ("L1 solution vector \"a\"\n"); prn_Vector_R (a, m); PRN ("L1 residual vector \"r\"\n"); prn_Vector_R (r, n); PRN ("L1 norm \"z\" = %8.4f\n", z); PRN ("Rank of matrix \"c\" = %d, No. of Iterations " "= %d\n", irank, iter); } prn_la_rc (rc); } free_Matrix_R (ct, MM_COLS); © 2008 by Taylor & Francis Group, LLC
143
144
Numerical Linear Approximation in C free_Vector_R (f); free_Vector_R (r); free_Vector_R (a); uninit_Matrix_R (c1); uninit_Matrix_R (c3);
}
© 2008 by Taylor & Francis Group, LLC
Chapter 5: LA_L1
5.12
145
LA_L1
Number of columns of matrix "c" in the system c*a = f. Number of rows of matrix "c" in the system c*a = f. An integer = (m* (m+3))/2 - 1. A real m by n matrix containing the transpose of matrix "c" of the system c*a = f. A real n vector containing the r.h.s. of the system c*a = f.
Local Variables ginv A real m square matrix that contains the inverse of the basis matrix in the linear programming problem. vb A real m vector containing the basic solution in the linear programming problem. ic An integer m vector containing the indices of the columns of
ir ib
p
bp xp
Numerical Linear Approximation in C "ct" that form the columns of the basis matrix. An integer m vector containing the row indices of of matrix "ct". A sign n vector. Its elements have the values +1 or -1. ib[j] = +1 indicates that column j of matrix "ct" is in the basis or is at its lower bound 0. ib[j] = -1 indicates that column j is at its upper bound 2. A real (((m+1)*((m+1)+3))/2-1) vector whose first ((irank*(irank+3))/2)-1 elements contain the ((irank*(irank+1))/2) elements of an upper triangular matrix + extra (irank-1) working locations. See the comments in LA_l1_pslv(). A real m vector whose first "irank" elements are the r.h.s. of the triangular equation p*xp = bp. A real m vector whose first "irank" elements are the solution of the triangular equation p*xp = bp or the triangular equation p(transpose)*xp = bp.
Outputs irank The calculated rank of matrix "c". iter Number of iterations or the number of times the simplex tableau is changed by a Gauss-Jordon elimination step. a A real m vector containing the L1 solution of the system c*a = f. r A real n vector containing the residual vector r = (c*a - f). z The minimum L1 norm of the residual vector "r". Returns one of LaRcSolutionUnique LaRcSolutionProbNotUnique LaRcSolutionDefNotUniqueRD LaRcNoFeasibleSolution LaRcErrBounds LaRcErrNullPtr LaRcErrAlloc -------------------------------------------------------------------*/ #include "LA_Prototypes.h" eLaRc LA_L1 (int m, int n, tMatrix_R ct, tVector_R f, int *pIrank, int *pIter, tVector_R r, tVector_R a, tNumber_R *pZ) { tVector_R t = alloc_Vector_R (n); tVector_R uf = alloc_Vector_R (m); © 2008 by Taylor & Francis Group, LLC
    tVector_R bp   = alloc_Vector_R (m);
    tVector_R xp   = alloc_Vector_R (m);
    tMatrix_R ginv = alloc_Matrix_R (m, m);
    tVector_R vb   = alloc_Vector_R (m);
    tVector_I ib   = alloc_Vector_I (n);
    tVector_I ic   = alloc_Vector_I (n);
    tVector_I ir   = alloc_Vector_I (m);
    tVector_R alfa = alloc_Vector_R (n);
    tVector_R p    = alloc_Vector_R (((m + 1) * ((m + 1) + 3)) / 2 - 1);

    int       iout = 0, jin = 0, icj = 0, iciout = 0, icjin = 0,
              ivo = 0, itest = 0;
    int       i = 0, j = 0, kk = 0, ijk = 0;
    int       nmm = 0, irank1 = 0, irnkm1 = 0;
    tNumber_R tpeps = 0.0;
    tNumber_R alpha = 0.0, pivot = 0.0, pivotn = 0.0, pivoto = 0.0,
              xb = 0.0, bxb = 0.0;

    /* Validation of the data before executing the algorithm */
    eLaRc rc = LaRcSolutionUnique;
    VALIDATE_BOUNDS ((0 < m) && (m <= n) && !((n == 1) && (m == 1)));
    VALIDATE_PTRS (ct && f && pIrank && pIter && r && a && pZ);
    VALIDATE_ALLOC (t && uf && bp && xp && ginv && vb && ib && ic &&
        ir && alfa && p);

    /* Initialization */
    tpeps = 2.0 + EPS;
    *pIrank = m;
    nmm = (m * (m + 3))/2;
    *pIter = 0;
    for (j = 1; j <= n; j++)
    {
        ib[j] = 1;
        ic[j] = j;
    }
    for (j = 1; j <= m; j++)
    {
        ir[j] = j;
        vb[j] = 0.0;
        a[j] = 0.0;
        for (i = 1; i <= m; i++)
        {
            ginv[i][j] = 0.0;
        }
Numerical Linear Approximation in C ginv[j][j] = 1.0; } iout = 0; /* Part 1 of the algorithm */ LA_l1_part_1 (m, n, ct, ic, ir, ginv, pIrank, pIter); /* Part 2 of the algorithm */ irank1 = *pIrank + 1; irnkm1 = *pIrank - 1; /* Initial residuals and basic solution */ LA_l1_part_2 (n, ct, f, ic, ib, uf, vb, pIrank, r); /* Initializing the triiangular matrix */ LA_l1_triang_matrix (m, p, pIrank); for (ijk = 1; ijk <= n; ijk++) { ivo = 0; LA_l1_pslv (1, pIrank, p, vb, xp); /* Determine the vector that leaves the basis */ LA_l1_vleav (&ivo, &iout, &xb, xp, pIrank); if (ivo == 0) { LA_l1_pslv (2, pIrank, p, uf, vb); /* Calculate the results */ rc = LA_l1_res (pIrank, m, n, r, ginv, vb, ir, xp, a, pZ); GOTO_CLEANUP_RC (rc); } iciout = ic[iout]; t[iciout] = 1.0; bxb = xb; for (i = 1; i <= *pIrank; i++) { bp[i] = 0.0; } bp[iout] = 1.0; LA_l1_pslv (2, pIrank, p, bp, xp);
© 2008 by Taylor & Francis Group, LLC
Chapter 5: LA_L1 /* Determine the alfa ratios */ LA_l1_alfa (iout, n, ct, ic, ib, xp, t, alfa, pIrank, r); pivoto = 1.0; itest = 0; for (kk = 1; kk <= n; kk++) { /* Determine the vector that enters the basis */ LA_l1_vent (ivo, &jin, &itest, n, ic, alfa, pIrank); /* No feasible solution has been found */ if (itest != 1) { GOTO_CLEANUP_RC (LaRcNoFeasibleSolution); } icjin = ic[jin]; pivot = t[icjin]; alpha = alfa[icjin]; pivotn = pivot/pivoto; /* Skipping simplex iteration */ LA_l1_skip_iters (iciout, icjin, xb, &bxb, pivotn, ct, ib, t, vb, pIrank); xb = bxb/pivot; if (xb < -EPS || xb > tpeps) itest = 0; alfa[icjin] = 0.0; if (itest == 1) break; pivoto = pivot; iciout = icjin; } r[icjin] = 0.0; *pIter = *pIter + 1; /* Update vector p */ LA_l1_update_p (iout, jin, ct, ic, p, pIrank); if (iout != *pIrank) { for (i = iout; i <= irnkm1; i++) { /* Update (ginv, vb, ct) */ LA_l1_update_ginv (i, n, ct, ir, p, ginv, vb, © 2008 by Taylor & Francis Group, LLC
149
150
Numerical Linear Approximation in C pIrank); } } for (j = 1; j <= *pIrank; j++) { icj = ic[j]; uf[j] = f[icj]; } LA_l1_calcul_r (alpha, n, ct, f, ic, ib, uf, xp, t, p, pIrank, r); }
CLEANUP: free_Vector_R free_Vector_R free_Vector_R free_Vector_R free_Matrix_R free_Vector_R free_Vector_I free_Vector_I free_Vector_I free_Vector_R free_Vector_R
(t); (uf); (bp); (xp); (ginv, m); (vb); (ib); (ic); (ir); (alfa); (p);
return rc; } /*------------------------------------------------------------------Square non-singular system --------------------------------------------------------------------This function solves the square real non-singular system of linear equations p*xp = vb or the square real non-singular system of linear equations p(transpose)*xp = vb "p" is an upper triangular matrix. "vb" is the right hand side vector. "xp" is the solution vector.
© 2008 by Taylor & Francis Group, LLC
Chapter 5: LA_L1
151
Inputs id An integer specifying the action to be performed. If id = 1 the equation "p*xp=vb" is solved. if id = any integer other than 1, the equation "p(transpose)*xp=vb" is solved. k The number of equations of the given system. p An (((m+1)*((m+1)+3))/2-1) vector whose first (k+1) elements contain the first k elements of row 1 of the upper triangular matrix plus an extra location to the right. Its next k elements contain the (k-1) elements of row 2 of the upper triangular matrix plus an extra location to the right, ..., etc. vb An m vector whose first k elements contain the r.h.s. vector of the given system. Outputs xp A real m vector whose first k elements contain the solution to the given system. -------------------------------------------------------------------*/ void LA_l1_pslv (int id, int *pIrank, tVector_R p, tVector_R vb, tVector_R xp) { int i, l, ll, j, jj, kk, kd, km1, kkd, kkm1; tNumber_R s; /* Solution of the upper triangular system */ if (id == 1) { l = (*pIrank-1) + (*pIrank * (*pIrank+1))/2; xp[*pIrank] = vb[*pIrank]/p[l]; if (*pIrank != 1) { kd = 3; km1 = *pIrank - 1; for (i = 1; i <= km1; i++) { j = *pIrank - i; l = l - kd; kd = kd + 1; s = vb[j]; ll = l; jj = j; for (kk = 1; kk <= i; kk++) { ll = ll + 1; © 2008 by Taylor & Francis Group, LLC
152
Numerical Linear Approximation in C jj = jj + 1; s = s - p[ll] * (xp[jj]); } xp[j] = s/p[l]; } } } /* Solution of the lower triangular system */ if (id != 1) { xp[1] = vb[1]/p[1]; if (*pIrank != 1) { l = 1; kd = *pIrank + 1; for (i = 2; i <= *pIrank; i++) { l = l + kd; kd = kd - 1; s = vb[i]; kk = i; kkm1 = i - 1; kkd = *pIrank; for (j = 1; j <= kkm1; j++) { s = s - p[kk] * xp[j]; kk = kk + kkd; kkd = kkd - 1; } xp[i] = s/p[l]; } } }
} /*------------------------------------------------------------------A Gauss-Jordan elimination step in LA_L1() -------------------------------------------------------------------*/ void LA_l1_gauss_jordn (int iout, int jin, int lj, int *pIrank, int n, tMatrix_R ct, tMatrix_R ginv, tVector_I ic) { int i, j; tNumber_R pivot, d;
© 2008 by Taylor & Francis Group, LLC
Chapter 5: LA_L1
153
pivot = ct[iout][jin]; for (j = 1; j <= n; j++) { ct[iout][j] = ct[iout][j]/pivot; } for (j = 1; j <= iout; j++) { ginv[iout][j] = ginv[iout][j]/pivot; } for (i = 1; i <= *pIrank; i++) { if (i != iout) { d = ct[i][jin]; for (j = 1; j <= n; j++) { ct[i][j] = ct[i][j] - d * ct[iout][j]; } for (j = 1; j <= iout; j++) { ginv[i][j] = ginv[i][j] - d * ginv[iout][j]; } } } /* Swap two elements of vector "ic" */ swap_elems_Vector_I (ic, iout, lj); } /*------------------------------------------------------------------Calculate the results of LA_L1() -------------------------------------------------------------------*/ eLaRc LA_l1_res (int *pIrank, int m, int n, tVector_R r, tMatrix_R ginv, tVector_R vb, tVector_I ir, tVector_R xp, tVector_R a, tNumber_R *pZ) { int i, j, k, ij; tNumber_R d, e, s; for (i = 1; i <= *pIrank; i++) { s = 0.0; for (k = 1; k <= *pIrank; k++) { s = s + vb[k] * ginv[k][i]; © 2008 by Taylor & Francis Group, LLC
154
Numerical Linear Approximation in C } ij = ir[i]; a[ij] = s; } s = 0.0; for (j = 1; j <= n; j++) { s = s + fabs (r[j]); } *pZ = s; if (*pIrank < m) return LaRcSolutionDefNotUniqueRD; e = 2.0 - EPS; for (i = 1; i <= m; i++) { d = xp[i]; if (d < -EPS || d > e) return LaRcSolutionProbNotUnique; } return LaRcSolutionUnique;
} /*------------------------------------------------------------------Part 1 of LA_L1() -------------------------------------------------------------------*/ void LA_l1_part_1 (int m, int n, tMatrix_R ct, tVector_I ic, tVector_I ir, tMatrix_R ginv, int *pIrank, int *pIter) { int i, j, k, lj = 0, li = 0; int iout, jin = 0, icj; tNumber_R d, g, piv; for (iout = 1; iout <= m; iout++) { if (iout <= *pIrank) { piv = 0.0; for (j = iout; j <= n; j++) { icj = ic[j]; for (i = iout; i <= *pIrank; i++) { © 2008 by Taylor & Francis Group, LLC
Chapter 5: LA_L1
155 d = ct[i][icj]; if (d < 0.0) d = -d; if (d > piv) { li = i; jin = icj; lj = j; piv = d; }
} } /* Detection of rank deficiency of matrix "c"*/ if (piv < EPS) { *pIrank = iout - 1; } if (piv > EPS) { if (li != iout) { for (j = 1; j <= n; j++) { g = ct[iout][j]; ct[iout][j] = ct[li][j]; ct[li][j] = g; } /* Swap two elements of vector "ir" */ swap_elems_Vector_I (ir, iout, li); /* Swap two rows of matrix "ginv" */ if (iout != 1) { k = iout - 1; for (j = 1; j <= k; j++) { d = ginv[li][j]; ginv[li][j] = ginv[iout][j]; ginv[iout][j] = d; } } }
© 2008 by Taylor & Francis Group, LLC
156
Numerical Linear Approximation in C LA_l1_gauss_jordn (iout, jin, lj, pIrank, n, ct, ginv, ic); *pIter = *pIter + 1; } } }
} /*------------------------------------------------------------------Part 2 of LA_L1() -------------------------------------------------------------------*/ void LA_l1_part_2 (int n, tMatrix_R ct, tVector_R f, tVector_I ic, tVector_I ib, tVector_R uf, tVector_R vb, int *pIrank, tVector_R r) { int i, j, icj, irank1; tNumber_R s, sa; /* Calculating initial residuals r[] */ irank1 = *pIrank + 1; for (j = 1; j <= *pIrank; j++) { icj = ic[j]; r[icj] = 0.0; uf[j] = f[icj]; } for (j = irank1; j <= n; j++) { icj = ic[j]; s = -f[icj]; for (i = 1; i <= *pIrank; i++) { s = s + uf[i] * ct[i][icj]; } r[icj] = s; if (s < 0.0) ib[icj] = -1; } /* Calculating the initial basic solution vb[] */ for (i = 1; i <= *pIrank; i++) { s = 0.0; for (j = 1; j <= n; j++) { sa = ct[i][j]; © 2008 by Taylor & Francis Group, LLC
Chapter 5: LA_L1
157
if (ib[j] == -1) sa = -sa; s = s + sa; } vb[i] = s; } } /*------------------------------------------------------------------Initializing the triangular matrix in LA_L1() -------------------------------------------------------------------*/ void LA_l1_triang_matrix (int m, tVector_R p, int *pIrank) { int i, k, kd, nmm; nmm = (m * (m + 3))/2; for (i = 1; i <= nmm; i++) { p[i] = 0.0; } k = 1; kd = *pIrank + 1; for (i = 1; i <= *pIrank; i++) { p[k] = 1.0; k = k + kd; kd = kd - 1; } } /*------------------------------------------------------------------Determine the vector that leaves the basis in LA_L1() -------------------------------------------------------------------*/ void LA_l1_vleav (int *pIvo, int *pIout, tNumber_R *pXb, tVector_R xp, int *pIrank) { int i; tNumber_R d, e, g, tpeps; tpeps = 2.0 + EPS; g = 1.0; for (i = 1; i <= *pIrank; i++) { e = xp[i]; if (e > tpeps || e < -EPS) { © 2008 by Taylor & Francis Group, LLC
158
Numerical Linear Approximation in C if (e > tpeps) { d = 2.0 - e; if (d < g) { g = d; *pIvo = 1; *pIout = i; *pXb = e; } } if (e < -EPS) { d = e; if (d < g) { g = d; *pIvo = -1; *pIout = i; *pXb = e; } } } }
} /*------------------------------------------------------------------Determine the alfa ratios for LA_L1() -------------------------------------------------------------------*/ void LA_l1_alfa (int iout, int n, tMatrix_R ct, tVector_I ic, tVector_I ib, tVector_R xp, tVector_R t, tVector_R alfa, int *pIrank, tVector_R r) { int i, j, icj, irank1; tNumber_R d, e, s; tNumber_R alfmx; irank1 = *pIrank + 1; alfmx = 0.0; for (j = irank1; j <= n; j++) { icj = ic[j]; alfa[icj] = 0.0; s = 0.0; for (i = iout; i <= *pIrank; i++) © 2008 by Taylor & Francis Group, LLC
Chapter 5: LA_L1
159
{ s = s + xp[i] * (ct[i][icj]); } e = s; t[icj] = e; if (fabs (e) > EPS) { d = r[icj]; if (fabs (d) < PREC) d = PREC * (ib[icj]); alfa[icj] = d/e; } } } /*------------------------------------------------------------------Determine the vector that enters the basis for LA_L1() -------------------------------------------------------------------*/ void LA_l1_vent (int ivo, int *pJin, int *pItest, int n, tVector_I ic, tVector_R alfa, int *pIrank) { int j, icj, irank1; tNumber_R d, e, alfmn, alfmx; irank1 = *pIrank + 1; alfmx = 1.0/EPS; alfmn = -alfmx; for (j = irank1; j <= n; j++) { icj = ic[j]; e = alfa[icj]; d = e * ivo; if (d > 0.0) { if (ivo == -1) { if (e > alfmn) { alfmn = e; *pJin = j; *pItest = 1; } } if (ivo == 1) { © 2008 by Taylor & Francis Group, LLC
160
Numerical Linear Approximation in C if (e < alfmx) { alfmx = e; *pJin = j; *pItest = 1; } } } }
} /*------------------------------------------------------------------Skipping simplex iterations in LA_L1() -------------------------------------------------------------------*/ void LA_l1_skip_iters (int iciout, int icjin, tNumber_R xb, tNumber_R *pBxb, tNumber_R pivotn, tMatrix_R ct, tVector_I ib, tVector_R t, tVector_R vb, int *pIrank) { int i; tNumber_R tpeps; tpeps = 2.0 + EPS; if (xb < -EPS) { if (pivotn > 0.0) { for (i = 1; i <= *pIrank; i++) { vb[i] = vb[i] + ct[i][icjin] + ct[i][icjin]; } *pBxb = *pBxb + t[icjin] + t[icjin]; ib[icjin] = 1; } } if (xb > tpeps) { for (i = 1; i <= *pIrank; i++) { vb[i] = vb[i] - ct[i][iciout] - ct[i][iciout]; } *pBxb = *pBxb - t[iciout] - t[iciout]; ib[iciout] = -1;
© 2008 by Taylor & Francis Group, LLC
Chapter 5: LA_L1
161
if (pivotn < 0.0) { for (i = 1; i <= *pIrank; i++) { vb[i] = vb[i] + ct[i][icjin] + ct[i][icjin]; } *pBxb = *pBxb + t[icjin] + t[icjin]; ib[icjin] = 1; } } } /*------------------------------------------------------------------Update vector p for LA_L1() -------------------------------------------------------------------*/ void LA_l1_update_p (int iout, int jin, tMatrix_R ct, tVector_I ic, tVector_R p, int *pIrank) { int i, j, k, k1, kd; int icjin, irnkm1; icjin = ic[jin]; irnkm1 = *pIrank - 1; if (iout != *pIrank) { for (j = iout; j <= irnkm1; j++) { k = j; k1 = k + 1; kd = *pIrank; /* Swap two elements of vector "ic" */ swap_elems_Vector_I (ic, k, k1); for (i = 1; i <= k1; i++) { p[k] = p[k + 1]; k = k + kd; kd = kd - 1; } } /* Swap elements "irank" and "jin" of vector "ic" */ swap_elems_Vector_I (ic, jin, *pIrank); © 2008 by Taylor & Francis Group, LLC
162
Numerical Linear Approximation in C
k = *pIrank; kd = *pIrank; for (i = 1; i <= *pIrank; i++) { p[k] = ct[i][icjin]; k = k + kd; kd = kd - 1; } } } /*------------------------------------------------------------------Update matrix ginv in LA_L1() -------------------------------------------------------------------*/ void LA_l1_update_ginv (int i, int n, tMatrix_R ct, tVector_I ir, tVector_R p, tMatrix_R ginv, tVector_R vb, int *pIrank) { int j, k, l; int i1, ii, kd, kk, kl, irank1; tNumber_R
d, e, g;
irank1 = *pIrank + 1; ii = i; i1 = i + 1; k = 0; kd = irank1; for (j = 1; j <= ii; j++) { k = k + kd; kd = kd - 1; } kk = k; kl = k - kd; l = kl; g = p[k]; d = p[l]; if (g < 0.0) g = - g; if (d < 0.0) d = - d; if (g > d) { for (j = ii; j <= *pIrank; j++) { /* Swap two elements of vector "p" */ © 2008 by Taylor & Francis Group, LLC
Chapter 5: LA_L1 swap_elems_Vector_R (p, k, l); k = k + 1; l = l + 1; } /* Swap two elements of vector "ir" */ swap_elems_Vector_I (ir, i, i1); /* Swap two rows of matrix "ct" */ swap_rows_Matrix_R (ct, i, i1); /* Swap two rows of matrix "ginv" */ swap_rows_Matrix_R (ginv, i, i1); /* Swap two columns of matrix "ginv" */ for (j = 1; j <= *pIrank; j++) { e = ginv[j][i]; ginv[j][i] = ginv[j][i1]; ginv[j][i1] = e; } /* Swap two elements in real vector "vb" */ swap_elems_Vector_R (vb, i, i1); } e = k = l = for {
p[kk]/p[kl]; kk; kl; (j = ii; j <= *pIrank; j++) p[k] = p[k] - e * p[l]; k = k + 1; l = l + 1;
} for (j = 1; j <= n; j++) { ct[i1][j] = ct[i1][j] - e * (ct[i][j]); } for (j = 1; j <= *pIrank; j++) { ginv[i1][j] = ginv[i1][j] - e * ginv[i][j]; } vb[i1] = vb[i1] - e * vb[i]; }
© 2008 by Taylor & Francis Group, LLC
163
164
Numerical Linear Approximation in C
/*------------------------------------------------------------------Calculate the residual vector "r" for LA_L1() -------------------------------------------------------------------*/ void LA_l1_calcul_r (tNumber_R alpha, int n, tMatrix_R ct, tVector_R f, tVector_I ic, tVector_I ib, tVector_R uf, tVector_R xp, tVector_R t, tVector_R p, int *pIrank, tVector_R r) { int i, j, icj, irank1; tNumber_R d, s; irank1 = *pIrank + 1; if (fabs (alpha) <= 1.0) { for (j = irank1; j <= n; j++) { icj = ic[j]; r[icj] = r[icj] - alpha * t[icj]; } } if (fabs (alpha) > 1.0) { LA_l1_pslv (2, pIrank, p, uf, xp); for (j = irank1; j <= n; j++) { icj = ic[j]; s = -f[icj]; for (i = 1; i <= *pIrank; i++) { s = s + xp[i] * (ct[i][icj]); } r[icj] = s; } } for (j = irank1; j <= n; j++) { icj = ic[j]; d = r[icj] * ib[icj]; if (d < 0.0) r[icj] = 0.0; } }
© 2008 by Taylor & Francis Group, LLC
Chapter 5: DR_Lone
5.13
165
DR_Lone
/*------------------------------------------------------------------DR_Lone --------------------------------------------------------------------This program is a driver for the function LA_Lone(), which solves an overdetermined system of linear equations in the L1 (L-One) norm, using a dual simplex method. The overdetermined system has the form c*a = f "c" is a given real n by m matrix of rank k, k <= m <= n. "f" is a given real n vector. "a" is the solution m vector. This program contains the 3 examples whose results appear in the text. -------------------------------------------------------------------*/ #include "DR_Defs.h" #include "LA_Prototypes.h" #define #define #define #define #define #define
Na Ma Nb Mb Nc Mc
7 3 51 11 8 4
void DR_Lone (void) { /*---------------------------------------Constant matrices/vectors ----------------------------------------*/ static tNumber_R c1init[Na][Ma] = { { -2.0, 0.0, -2.0 }, { 8.0, 9.0, 17.0 }, { 36.0, 18.0, 54.0 }, { -8.0, 0.0, -8.0 }, { 21.0, 18.0, 39.0 }, { 12.0, -9.0, 3.0 },
© 2008 by Taylor & Francis Group, LLC
166
Numerical Linear Approximation in C { -32.0, -13.5, -45.5 } }; static tNumber_R f1[Na+1] = { NIL, 6.0, 6.0, -48.0, 24.0, 3.0, -6.0, -9.0 }; static tNumber_R c3init[Nc][Mc] = { { 1.0, 1.0, 1.0, 1.0 }, { 1.0, 2.0, 4.0, 4.0 }, { 1.0, 3.0, 9.0, 9.0 }, { 1.0, 4.0, 16.0, 16.0 }, { 1.0, 5.0, 25.0, 25.0 }, { 1.0, 6.0, 36.0, 36.0 }, { 1.0, 7.0, 49.0, 49.0 }, { 1.0, 8.0, 64.0, 64.0 } }; static tNumber_R f3[Nc+1] = { NIL, 2.0, 2.5, 2.0, 6.5, 3.5, 4.5, 6.0, 7.0 }; /*---------------------------------------Variable matrices/vectors ----------------------------------------*/ tMatrix_R ct = alloc_Matrix_R (MM_COLS, NN_ROWS); tVector_R f = alloc_Vector_R (NN_ROWS); tVector_R r = alloc_Vector_R (NN_ROWS); tVector_R a = alloc_Vector_R (MM_COLS); tMatrix_R tMatrix_R
c1 c3
= init_Matrix_R (&(c1init[0][0]), Na, Ma); = init_Matrix_R (&(c3init[0][0]), Nc, Mc);
int int
irank, iter; i, j, m, n, Iexmpl;
tNumber_R
dx, x1, x2, x3, x4, x5, g, g1, z;
eLaRc
rc = LaRcOk;
prn_dr_bnr ("DR_Lone, L-One Solution of an Overdetermined " "System of Linear Equations"); © 2008 by Taylor & Francis Group, LLC
Chapter 5: DR_Lone
z = 0.0; for (Iexmpl = 1; Iexmpl <= 3; Iexmpl++) { switch (Iexmpl) { case 1: n = Na; m = Ma; for (i = 1; i <= n; i++) { f[i] = f1[i]; for (j = 1; j <= m; j++) ct[j][i] = c1[i][j]; } break; case 2: n = Nb; m = Mb; dx = 0.02; g1 = exp (0.5); for (i = 1; i <= n; i++) { x1 = dx * (tNumber_R)(i-1); x2 = x1 + x1; x3 = x1 + x2; x4 = x2 + x2; x5 = x2 + x3; ct[1][i] = 1.0; ct[2][i] = sin (x1); ct[3][i] = cos (x1); ct[4][i] = sin (x2); ct[5][i] = cos (x2); ct[6][i] = sin (x3); ct[7][i] = cos (x3); ct[8][i] = sin (x4); ct[9][i] = cos (x4); ct[10][i] = sin (x5); ct[11][i] = cos (x5); g = exp (x1); f[i] = g1; if (g < g1) f[i] = g; } break; © 2008 by Taylor & Francis Group, LLC
167
168
Numerical Linear Approximation in C
case 3: n = Nc; m = Mc; for (i = 1; i <= n; i++) { f[i] = f3[i]; for (j = 1; j <= m; j++) { ct[j][i] = c3[i][j]; } } break; default: break; } prn_algo_bnr ("Lone"); prn_example_delim(); PRN ("Example #%d: Size of matrix \"c\", %d by %d\n", Iexmpl, n, m); prn_example_delim(); PRN ("L-One Solution of an Overdetermined System" " of Linear Equations\n"); prn_example_delim(); PRN ("r.h.s. Vector \"f\"\n"); prn_Vector_R (f, n); PRN ("Transpose of Coefficient Matrix, \"ct\"\n"); prn_Matrix_R (ct, m, n); rc = LA_Lone (m, n, ct, f, &irank, &iter, r, a, &z); if (rc >= LaRcOk) { PRN ("\n"); PRN ("Results of the L-One Approximation\n"); PRN ("L-One solution vector, \"a\"\n"); prn_Vector_R (a, m); PRN ("L-One residual vector \"r\"\n"); prn_Vector_R (r, n); PRN ("L-One norm \"z\" = %8.4f\n", z); PRN ("Rank of matrix \"c\" = %d, No. of Iterations =" " %d\n", irank, iter); © 2008 by Taylor & Francis Group, LLC
Chapter 5: DR_Lone } prn_la_rc (rc); } free_Matrix_R free_Vector_R free_Vector_R free_Vector_R
(ct, MM_COLS); (f); (r); (a);
uninit_Matrix_R (c1); uninit_Matrix_R (c3); }
© 2008 by Taylor & Francis Group, LLC
169
170
5.14
Numerical Linear Approximation in C
LA_Lone
/*------------------------------------------------------------------LA_Lone --------------------------------------------------------------------This program solves an overdetermined system of linear equations in the L1 (L-One) norm. It uses a dual simplex method to the dual linear programming formulation of the problem. In this program certain intermediate simplex iterations are skipped. The system of linear equations has the form c*a = f "c" is a given real n by m matrix of rank k, k <= m <= n. "f" is a given real n vector. The problem is to calculate the elements of the real m vector "a" that gives the minimum L1 residual norm z. z = |r[1]| + |r[2]| + ... + |r[n]| where r[i] is the ith residual and is given by r[i] = c[i][1]*a[1] + c[i][2]*a[2] + ... + c[i][m]*a[m] - f[i], i = 1, 2, ..., n Inputs m Number of columns of matrix "c" in the system c*a = f. n Number of rows of matrix "c" in the system c*a = f. ct A real m by n matrix containing the transpose of matrix "c" of the system c*a = f. f A real n vector containing the r.h.s. of the system c*a = f. Local Variables binv A real m square matrix containing the inverse of the basis matrix in the linear programming problem. bv A real m vector containing the basic solution in the linear programming problem. th A real n vector containing the ratios th[j] = r[j]/ct[iout][j] "iout" corresponds to the basic vector that leaves the
© 2008 by Taylor & Francis Group, LLC
Chapter 5: LA_Lone
171
basis. An integer m vector containing the indices of the columns of "ct" that form the columns of the basis matrix. irbas An integer m vector containing the indices of the rows of "ct". ibound An n sign vector. Its elements have the values +1 or -1. ibound[j] = +1 indicates that column j of matrix "ct" is in the basis or is at its lower bound 0. ibound[j] = -1 indicates that column j is at its upper bound 2. icbas
Outputs irank The calculated rank of matrix "c". iter Number of iterations, or the number of times the simplex tableau is changed by a Gauss-Jordon elimination step. a A real m vector containing the L1 solution of the system c*a = f. r A real n vector containing the residual vector r = (c*a - f). z The minimum L1 norm of the residual vector "r". Returns one of LaRcSolutionUnique LaRcSolutionProbNotUnique LaRcSolutionDefNotUniqueRD LaRcNoFeasibleSolution LaRcErrBounds LaRcErrNullPtr LaRcErrAlloc -------------------------------------------------------------------*/ #include "LA_Prototypes.h" eLaRc LA_Lone (int m, int int *pIter, tVector_R { tVector_I icbas = tVector_I irbas = tVector_R th = tMatrix_R binv = tVector_R bv = tVector_I ibound = int int tNumber_R
n, tMatrix_R ct, tVector_R f, int *pIrank, r, tVector_R a, tNumber_R *pZ) alloc_Vector_I alloc_Vector_I alloc_Vector_R alloc_Matrix_R alloc_Vector_R alloc_Vector_I
(m); (m); (n); (m, m); (m); (n);
iout = 0, jin = 0, jout = 0, ivo = 0; i = 0, j = 0, ij = 0, kk = 0, itest = 0, ipart = 0; tpeps = 0.0, xb = 0.0;
© 2008 by Taylor & Francis Group, LLC
172
Numerical Linear Approximation in C tNumber_R
pivot = 0.0, pivotn = 0.0, pivoto = 0.0;
/* Validation of the data before executing the algorithm */ eLaRc rc = LaRcSolutionUnique; VALIDATE_BOUNDS ((0 < m) && (m <= n) && !((n == 1) && (m == 1))); VALIDATE_PTRS (ct && f && pIrank && pIter && r && a && pZ); VALIDATE_ALLOC (icbas && irbas && th && binv && bv && ibound); /* Initialization */ xb = 0.0; tpeps = 2.0 + EPS; *pIrank = m; *pIter = 0; *pZ = 0.0; ipart = 1; for (j = 1; j <= n; j++) { r[j] = 0.0; ibound[j] = 1; } for (j = 1; j <= m; j++) { a[j] = 0.0; bv[j] = 0.0; irbas[j] = j; icbas[j] = 0; for (i = 1; i <= m; i++) binv[i][j] = 0.0; binv[j][j] = 1.0; } /* Part 1 of the algorithm */ LA_lone_part_1 (ipart, n, ct, icbas, irbas, binv, bv, ibound, pIrank, pIter, r); /* Part 2 of the algorithm */ /* Calculating the initial residuals (marginal costs) and the initial basic solution */ ipart = 2; LA_lone_part_2 (n, ct, f, icbas, bv, ibound, pIrank, r); for (kk = 1; kk <= n; kk++) { ivo = 0;
© 2008 by Taylor & Francis Group, LLC
Chapter 5: LA_Lone /* Determine the vector that leaves the basis */ LA_lone_vleav (&ivo, &iout, pIrank, &xb, bv); /* Calculate the results */ if (ivo == 0) { rc = LA_lone_res (m, n, f, icbas, irbas, binv, bv, pIrank, r, a, pZ); GOTO_CLEANUP_RC (rc); } jout = icbas[iout]; /* Calculation of the possible parameters th[j] */ LA_lone_th (iout, n, ct, icbas, ibound, th, pIrank, r); pivoto = 1.0; itest = 0; for (ij = 1; ij <= n; ij++) { /* Determine the vector that enters the basis */ LA_lone_vent (ivo, &itest, &jin, n, th); /* Solution is not feasible */ if (itest != 1) { GOTO_CLEANUP_RC (LaRcNoFeasibleSolution); } pivot = ct[iout][jin]; pivotn = pivot/pivoto; if (xb < -EPS) { if (pivotn > 0.0) { for (i = 1; i <= *pIrank; i++) { bv[i] = bv[i] + ct[i][jin] + ct[i][jin]; } } ibound[jin] = 1; } if (xb > tpeps) { for (i = 1; i <= *pIrank; i++) © 2008 by Taylor & Francis Group, LLC
173
174
Numerical Linear Approximation in C { bv[i] = bv[i] - ct[i][jout] - ct[i][jout]; } ibound[jout] = -1; if (pivotn < 0.0) { for (i = 1; i <= *pIrank; i++) { bv[i] = bv[i] + ct[i][jin] + ct[i][jin]; } ibound[jin] = 1; } } xb = bv[iout]/pivot; if (xb < -EPS || xb > tpeps) itest = 0; th[jin] = 0.0; icbas[iout] = jin; if (itest == 1) break; pivoto = pivot; jout = jin; } /* A Gauss-Jordan elimination step */ LA_lone_gauss_jordn (ipart, iout, jin, n, ct, icbas, binv, bv, ibound, pIrank, r); *pIter = *pIter + 1; }
CLEANUP: free_Vector_I free_Vector_I free_Vector_R free_Matrix_R free_Vector_R free_Vector_I
(icbas); (irbas); (th); (binv, m); (bv); (ibound);
return rc; } /*------------------------------------------------------------------Part 1 of LA_Lone() -------------------------------------------------------------------*/ void LA_lone_part_1 (int ipart, int n, tMatrix_R ct, tVector_I icbas, tVector_I irbas, tMatrix_R binv, tVector_R bv, tVector_I ibound, int *pIrank, int *pIter, tVector_R r) © 2008 by Taylor & Francis Group, LLC
Chapter 5: LA_Lone
175
{ int int
i, j, li = 0, lj; iout, jin = 0;
tNumber_R
d, piv;
for (iout = 1; iout <= *pIrank; iout++) { if (iout <= *pIrank) { piv = 0.0; for (j = 1; j <= n; j++) { for (i = iout; i <= *pIrank; i++) { d = ct[i][j]; if (d < 0.0) d = -d; if (d > piv) { li = i; jin = j; piv = d; } } } /* Detection of rank deficiency of matrix "ct" */ if (piv < EPS) { *pIrank = iout - 1; ipart = 2; } if (ipart == 2) break; if (li != iout) { /* Swap two elements of vector "irbas" */ swap_elems_Vector_I (irbas, li, iout); /* Swap two rows of matrix "ct" */ swap_rows_Matrix_R (ct, li, iout); if (iout != 1) { lj = iout - 1; for (j = 1; j <= lj; j++) © 2008 by Taylor & Francis Group, LLC
176
Numerical Linear Approximation in C { d = binv[li][j]; binv[li][j] = binv[iout][j]; binv[iout][j] = d; } } } if (ipart == 2) break; /* A Gauss-Jordan elimination step */ LA_lone_gauss_jordn (ipart, iout, jin, n, ct, icbas, binv, bv, ibound, pIrank, r); *pIter = *pIter + 1; } }
} /*------------------------------------------------------------------Part 2 of LA_Lone() -------------------------------------------------------------------*/ void LA_lone_part_2 (int n, tMatrix_R ct, tVector_R f, tVector_I icbas, tVector_R bv, tVector_I ibound, int *pIrank, tVector_R r) { int i, j, k, ic; tNumber_R
s, sa;
for (j = 1; j <= n; j++) { ic = 0; for (i = 1; i <= *pIrank; i++) if (j == icbas[i]) ic = 1; if (ic == 0) { s = - f[j]; for (i = 1; i <= *pIrank; i++) { k = icbas[i]; s = s + f[k]*ct[i][j]; } r[j] = s; if (s <= 0.0) ibound[j] = - 1; } } for (i = 1; i <= *pIrank; i++) { © 2008 by Taylor & Francis Group, LLC
Chapter 5: LA_Lone
177
s = 0.0; for (j = 1; j <= n; j++) { sa = ct[i][j]; if (ibound[j] == -1) sa = - sa; s = s + sa; } bv[i] = s; } } /*------------------------------------------------------------------Calculate the "th" ratios in LA_lone() -------------------------------------------------------------------*/ void LA_lone_th (int iout, int n, tMatrix_R ct, tVector_I icbas, tVector_I ibound, tVector_R th, int *pIrank, tVector_R r) { int i, j, ic; tNumber_R d, e, gg, thmax; thmax = 0.0; /* Calculation of the possible parameters th[j] */ for (j = 1; j <= n; j++) { th[j] = 0.0; ic = 0; for (i = 1; i <= *pIrank; i++) { if (j == icbas[i]) ic = 1; } if (ic == 0) { e = ct[iout][j]; if (fabs (e) > EPS) { d = r[j]; if (fabs (d) < PREC) d = PREC*ibound[j]; th[j] = d/e; gg = th[j]; if (gg <0.0) gg = - gg; if (gg > thmax) thmax = gg; } } } © 2008 by Taylor & Francis Group, LLC
178
Numerical Linear Approximation in C
} /*------------------------------------------------------------------Determine the vector that enters the basis in LA_Lone() -------------------------------------------------------------------*/ void LA_lone_vent (int ivo, int *pItest, int *pJin, int n, tVector_R th) { int j, ij; tNumber_R
d, e, thmax, thmin;
for (ij = 1; ij <= n; ij++) { thmax = 1.0/ (EPS*EPS); thmin = -thmax; for (j = 1; j <= n; j++) { e = th[j]; d = e * ivo; if (d > 0.0) { if (ivo == -1) { if (e > thmin) { thmin = e; *pJin = j; *pItest = 1; } } if (ivo == 1) { if (e < thmax) { thmax = e; *pJin = j; *pItest = 1; } } } } } }
© 2008 by Taylor & Francis Group, LLC
Chapter 5: LA_Lone
179
/*------------------------------------------------------------------Determine the vector that leaves the basis in LA_Lone() -------------------------------------------------------------------*/ void LA_lone_vleav (int *pIvo, int *pIout, int *pIrank, tNumber_R *pXb, tVector_R bv) { int i; tNumber_R d, e, g, tpeps; tpeps = 2.0 + EPS; g = 1.0; for (i = 1; i <= *pIrank; i++) { e = bv[i]; if (e > tpeps || e < -EPS) { if (e > tpeps) { d = 2.0 - e; if (d < g) { g = d; *pIvo = 1; *pIout = i; *pXb = e; } } if (e < -EPS) { d = e; if (d < g) { g = d; *pIvo = -1; *pIout = i; *pXb = e; } } } } } /*------------------------------------------------------------------A Gauss-Jordan elimination step in LA_Lone() -------------------------------------------------------------------*/ © 2008 by Taylor & Francis Group, LLC
180
Numerical Linear Approximation in C
void LA_lone_gauss_jordn (int ipart, int iout, int jin, int n, tMatrix_R ct, tVector_I icbas, tMatrix_R binv, tVector_R bv, tVector_I ibound, int *pIrank, tVector_R r) { int i, j, ic; tNumber_R pivot, e, d; pivot = ct[iout][jin]; icbas[iout] = jin; for (j = 1; j <= n; j++) { ct[iout][j] = ct[iout][j]/pivot; } for (j = 1; j <= *pIrank; j++) { binv[iout][j] = binv[iout][j]/pivot; } if (ipart != 1) bv[iout] = bv[iout]/pivot; for (i = 1; i <= *pIrank; i++) { if (i != iout) { e = ct[i][jin]; for (j = 1; j <= n; j++) { ct[i][j] = ct[i][j] - e*ct[iout][j]; } for (j = 1; j <= *pIrank; j++) { binv[i][j] = binv[i][j] - e*binv[iout][j]; } if (ipart != 1) { bv[i] = bv[i] - e*bv[iout]; } } } if (ipart != 1) { e = r[jin]; for (j = 1; j <= n; j++) r[j] = r[j] - e*ct[iout][j]; for (j = 1; j <= n; j++) { ic = 0; © 2008 by Taylor & Francis Group, LLC
Chapter 5: LA_Lone
181
for (i = 1; i <= *pIrank; i++) { if (j == icbas[i]) ic = 1; if (ic == 0) { d = r[j]*ibound[j]; if (d < 0.0) r[j] = 0.0; } } } } } /*------------------------------------------------------------------Calculate the results of LA_Lone() -------------------------------------------------------------------*/ eLaRc LA_lone_res (int m, int n, tVector_R f, tVector_I icbas, tVector_I irbas, tMatrix_R binv, tVector_R bv, int *pIrank, tVector_R r, tVector_R a, tNumber_R *pZ) { int i, j, k; tNumber_R s, sa; for (j = 1; j <= *pIrank; j++) { s = 0.0; for (i = 1; i <= *pIrank; i++) { k = icbas[i]; s = s + f[k] * (binv[i][j]); } k = irbas[j]; a[k] = s; } s = 0.0; for (j = 1; j <= n; j++) { sa = r[j]; if (sa < 0.0) sa = - sa; s = s + sa; } *pZ = s; if (*pIrank < m) © 2008 by Taylor & Francis Group, LLC
182
Numerical Linear Approximation in C return LaRcSolutionDefNotUniqueRD; for (i = 1; i <= m; i++) { if (bv[i] <= EPS || bv[i] >= 2.0 - EPS) return LaRcSolutionProbNotUnique; } return LaRcSolutionUnique;
}
© 2008 by Taylor & Francis Group, LLC
183
Chapter 6 One-Sided L1 Approximation
6.1
Introduction
In the previous chapter, an algorithm for obtaining the L1 solution of an overdetermined system of linear equations is given. That L1 solution is a double-sided one, meaning that some of the elements of the residual vector have values ≥ 0 and others have values < 0. In other words, for the discrete linear L1 approximation, some of the points lie above or on the approximating surface (curve) and some lie below the approximating surface. Hence, the approximation is the ordinary or double-sided L1 approximation. In this chapter, we present the linear one-sided L1 approximation problem. In this approximation, all the given discrete points lie either above or on, or below and on the approximating surface. When all the discrete points lie above or on the approximating surface, this is known as the one-sided L1 approximation from above. When all the discrete points lie below or on the approximating surface, the approximation is known as the one-sided L1 approximation from below. In two dimensional case, this is illustrated by Figure 6-1. We shall consider the problem of the one-sided L1 approximation from below. However, the analysis and presentation of the problem from above are almost identical. The described algorithm is manipulated slightly so that it can be applied to the latter case as well. The problem is presented here as a linear programming problem and we pursue the analysis of the dual form of the linear programming presentation. The algorithm is much simpler than the algorithm for the ordinary L1 approximation in Chapter 5. We should note that there are problems that have (ordinary) L1 © 2008 by Taylor & Francis Group, LLC
184
Numerical Linear Approximation in C
approximation but do not have one-sided L1 approximation from above and/or one-sided L1 approximation from below. See the numerical examples in Section 6.5. See also the practical example, Example 16.2. Consider the overdetermined system of linear equations (6.1.1a)
Ca = f
C = (cij), is a given real n by m matrix of rank k, k ≤ m ≤ n and f = (fi) is a given real n-vector. The (ordinary) L1 solution of system Ca = f is the m-vector a = (ai) that minimizes the L1 norm z of the residuals n
(6.1.1b)
z =
∑
ri
i=1
where ri is the ith residual and is given by m
ri =
(6.1.1c)
∑ cij aj – fi ,
i = 1, 2, …, n
j=1
8 7 6 5 4 3 2 1 0 0
1
2
3
4
5
6
7
8
9
Figure 6-1: Curve fitting with vertical parabolas of a set of 8 points using L1 approximation and one-sided L1 approximations
This figure gives curve fitting with vertical parabolas of the set of © 2008 by Taylor & Francis Group, LLC
Chapter 6: One-Sided L1 Approximation
185
8 points shown in Figure 2-1. The solid curve is the ordinary L1 approximation. The dashed curve is the one-sided L1 approximation from above and the dotted curve is the one-sided L1 approximation from below. The special case when the system of equation Ca = f, is consistent, i.e., the residual r = Ca – f = 0, is not of interest here. We thus assume that r ≠ 0. When the elements of the residual vector r satisfy the additional conditions (6.1.1d)
ri ≤ 0, i = 1, 2, …, n
or in effect (6.1.1e)
Ca ≤ f
we have the one-sided L1 solution from above of system Ca = f; that is, for any equation i, i = 1, …, n, in Ca = f, the observed value fi is greater than (or equal to) the calculated value (ci1a1 + ci2a2 + … + cimam). If the inequalities (6.1.1d) are reversed, i.e., ri ≥ 0, i = 1, 2, …, n we have the one-sided L1 solution from below. As indicated above, we formulate here the problem of the one-sided L1 solution from below as a linear programming one. We use the dual formulation of the problem as we did for the ordinary L1 approximation in the previous chapter. However, we use here the simplex method, not the dual simplex method that we used in the previous chapter. In this algorithm no conditions are imposed on the coefficient matrix. It may be a rank deficient one. An initial basic solution is obtained with a small effort. The described algorithm applies as well to the one-sided L1 solution from above. In Section 6.2, the problem is presented as a special case of a general constrained L1 approximation problem. In Section 6.3, the linear programming formulation of the problem is given. In Section 6.4, the algorithm is described and a numerical example is solved. A note on the linear one-sided L1 solution from above is also given and the interpolating properties of the one-sided L1 approximation is described. In Section 6.5, numerical results are presented and compared with other techniques for solving the same problem. © 2008 by Taylor & Francis Group, LLC
186
Numerical Linear Approximation in C
6.1.1
Applications of the algorithm
One-sided Lp approximations have applications to the numerical solution of operator equations, to ordinary differential equations and to integral equations. See Watson [23, 24]. The one-sided approximation in the L1 norm is applied to the degree reduction of interval polynomial of the so-called Bézier curves in computer aided design. See Deng et al. [13]. The one-sided L1 and the one-sided Chebyshev solutions of overdetermined systems are also applied to the solution of overdetermined linear inequalities [2]. The latter is a basic problem in pattern classification. See Tou and Gonzalez ([22], pp. 40-41, 48-49) and also Chapter 16. 6.1.2
Characterization and uniqueness
For the characterization and uniqueness of the best one-sided L1 approximation of a continuous function, see Pinkus [21] and for harmonic functions see Armitage et al. [3]. For the uniqueness of the best one-sided L1 approximation of continuous differentiable functions see Babenko and Glushko [5] and also Lenze [19] 6.2
A special problem of a general constrained one
A number of authors developed algorithms for general constrained L1 approximation problems. By manipulating the constraints, each problem reduces to a one-sided L1 approximation problem. In other words, their algorithms are general purpose algorithms, while ours is a special purpose one. Using our notation, let C and E be matrices of appropriate dimensions and let f be the vector associated with C, and ea and eb be two vectors associated with E respectively. Armstrong and Hultz (AH) [4] seek a solution vector a that satisfies the problem (6.2.1a)
minimize ||Ca – f||1
subject to (6.2.1b)
ea ≤ Ea ≤ eb
If in (6.2.1b) we take E = C, ea = f, and eb is a very large vector, we get the one-sided L1 approximation from below. © 2008 by Taylor & Francis Group, LLC
Chapter 6: One-Sided L1 Approximation
187
Each of Barrodale and Roberts (BR) [7, 9], Bartels and Conn (BC) [10, 11] and Dax (DA) [12] also proposed to solve a class of problems that includes the problem given by Armstrong and Hultz (AH) [4] as a special case. They minimize the L1 norm of the residual subject to a mixture of linear equality and inequality constraints. Using our notation, let C, G and D be matrices of appropriate dimensions and let f, g and d be vectors associated with C, G and D respectively. Then they seek a solution vector a that satisfies (6.2.2a)
minimize ||Ca – f||1
subject to (6.2.2b)
Ga = g and Da ≥ d
They allow for the possibility that some but not all of the arrays (G, g) and (D, d) be vacuous. Barrodale and Roberts [7, 9] use a primal linear programming technique that is an extension of their method for the L1 approximation without linear constraints [6]. Bartels and Conn [10, 11], however, use a penalty-function method, which solves a constrained optimization problem, while Dax [12] uses a steepest descent search direction method by first solving a linear least squares sub-problem. Hence, if in (6.2.2b) we take G = 0, g = 0, D = C and d = f, problem (6.2.2) reduces to the one-sided L1 problem from below. We shall comment on these methods in Section 6.5. 6.3
Linear programming formulation of the problem
In linear programming terminology [18], problem (6.1.1) is formulated as follows. See Lewis [20] and Watson [23, 24]. Since the residual vector r for Ca = f is given by r = Ca – f, and since all the elements of r, ri ≥ 0, i = 1, 2, …, n, in vector-matrix notation, we have minimize Z = eT(Ca – f) where e is an n-vector, each element of which is 1. Now since eTf is just a constant, this reduces to (6.3.1a)
minimize Z = eT(Ca)
subject to Ca – f > 0, which is (6.3.1b)
© 2008 by Taylor & Francis Group, LLC
Ca ≥ f
188
Numerical Linear Approximation in C
and (6.3.1c)
aj unrestricted in sign, j = 1, 2, …, m
It is easier to deal with the dual of problem (6.3.1), namely maximize z = fTb
(6.3.2a)
subject to CTb = CTe, which is T
(6.3.2b)
C b =
n
T
∑ Cj i=1
and bi ≥ 0, i = 1, 2, …, n
(6.3.2c)
In (6.3.2b) CjT is the jth column of matrix CT and the r.h.s. of (6.3.2b) is the sum of the columns of CT. 6.4
Description of the algorithm
This problem may be solved by the two-phase method of linear programming, as described in Section 3.5. However, we should start with vector bB whose elements are non-negative. This is easily done because of the simple structure of the problem, as explained next. 6.4.1
Obtaining an initial basic feasible solution
We note that the main body (the matrix of constraints) in the initial data in the programming problem is matrix CT and from (6.3.2b), the basic solution vector bB is given by n
bB =
T
∑ Cj i=1
The elements of vector bB may or may not be non-negative, i.e., one or more of its elements may be < 0. Let the element bBi < 0. If we now multiply bBi and the whole of row i in the initial data by –1, this amounts to multiplying column i of matrix C in the system Ca = f by –1. Then the calculated element ai of the solution vector a will not be ai but –ai. An index i would then be stored in an index vector and, at © 2008 by Taylor & Francis Group, LLC
Chapter 6: One-Sided L1 Approximation
189
the end of the program, element ai would be multiplied by –1. At the end of phase 1, we have an initial basic feasible solution bB. If rank(C) = k < m, then only k Gauss-Jordan steps are needed. We then calculate the marginal costs (zj – fj) zj – fj = fBTyj – fj, j = 1, …, n Phase 2 of the algorithm is the ordinary simplex method. If at any iteration, a pivot element is not found, or the ratio bBi/yij is < EPS, the problem has no solution and the program terminates. Lemma 6.1 (Theorem 5.3) At any stage of the computation, the residuals (ri) of (6.1.1c) are themselves the marginal costs (zi – fi) for the same reference zi – fi = ri, i = 1, 2, …, n As a result, for the optimum solution, the objective function z = the sum of the marginal costs, being all ≥ 0 n
z =
∑ ( zi – fi ) i=1
See Theorem 5.3. Lemma 6.2 The solution vector of the one-sided L1 approximation problem is given by (6.4.1)
aT = fBTB–1
where B–1 is the inverse of the basis matrix for the optimum solution and fB is associated with the optimal solution. See Theorem 5.4. Example 6.1 Obtain the L1 solution from below of the following system = 2 –a1 – a2 a1 + 3a2 = 1 (6.4.2) a1 + 2a2 = 1 = –3 a2 a3 = 0 In the tableau for the Initial Data, the left hand side is the algebraic © 2008 by Taylor & Francis Group, LLC
190
Numerical Linear Approximation in C
sum of the columns of matrix CT. Tableau 6.4.1 represents the end of part 1. It is obtained by applying 3 Gauss-Jordan steps to the Initial Data. The basic solution is feasible; all elements of bB are ≥ 0. Initial Data ΣiCiT
fT
––––––––––––– 1 5 1 –––––––––––––
2 1 1 –3 0 T T T T C1 C2 C3 C4 C5T –––––––––––––––––––––– –1 1 1 0 0 –1 3 2 1 0 0 0 0 0 1 ––––––––––––––––––––––
Tableau 6.4.1 (end of part 1) fT f BT bB ––––––––––––– 2 1 1 2 0 1 –––––––––––––
2 1 1 –3 0 T T T T C1 C2 C3 C4 C5T –––––––––––––––––––––– 1 0 –1/2 1/2 0 0 1 (1/2) 1/2 0 0 0 0 0 1 –––––––––––––––––––––– 0 0 –3/2 9/2 0
Tableau 6.4.2 (part 2) fT f BT bB ––––––––––––– 2 3 1 4 0 1 ––––––––––––– z=9
2 1 1 –3 0 T T T T C1 C2 C3 C4 C5T –––––––––––––––––––––– 1 1 0 1 0 0 2 1 1 0 0 0 0 0 1 –––––––––––––––––––––– 0 3 0 6 0
It took one Gauss-Jordan iteration to obtain the optimum solution of the problem. The residuals of the problem are given as the marginal costs (Lemma 6.1) in the last tableau and are r = (0, 0, 3, 6, 0)T from which z = 9. From (6.4.1) or by solving the first, the third and the fifth equations in the set (6.4.2), we get the solution vector a = (–5, 3, 0)T. © 2008 by Taylor & Francis Group, LLC
Chapter 6: One-Sided L1 Approximation
6.4.2
191
One-sided L1 solution from above
For the linear one-sided L1 solution from above, as indicated earlier, the inequalities for the residuals are given by ri ≤ 0, i = 1, 2, …, n The algorithm described here for one-sided L1 solution from below may be applied as well to the one-sided L1 solution from above as follows. We multiply each element of matrix C and each element of vector f in Ca = f by –1. We get the equation Ca = f, where C = –C and f = –f. We then apply the current algorithm to the equation Ca = f. Then the elements ri of the residual vector r = Ca – f, satisfy ri ≥ 0, i = 1, 2, …, n which implies that for the given equation Ca = f ri ≤ 0, i = 1, 2, …, n meaning the one-sided solution from above. The obtained solution vector a would be that for the L1 solution from above, and the elements of the obtained residual vector r are to be multiplied by –1. 6.4.3
The interpolation property
A certain property is shared between the (ordinary) L1 solution [1] and the one-sided L1 solutions of overdetermined systems of linear equations. Let rank(C) = k ≤ m. Then at least k equations of Ca = f, each has zero residual ri. This property is a direct result of using the dual form of the linear programming formulation for both problems. The reason is that k equations in Ca = f, associated with the basis matrix have zero marginal costs (residuals). This also means the following. If n discrete points in the x-y plane are being approximated by a plane curve, in the one-sided L1 sense, the curve will pass through at least k of the discrete points. See Figure 6-1. 6.5
Numerical results and comments
LA_Loneside() implements this algorithm. DR_Loneside() tests 8 examples in which the data is taken from [14, 15, 16, 17]. © 2008 by Taylor & Francis Group, LLC
192
Numerical Linear Approximation in C
Table 6.1 shows the results of 3 of the examples, computed in single-precision. For comparison purposes, the results for the (ordinary or the two-sided) L1 solution for each example are included. The L1 solution is calculated by LA_Lone() of Chapter 5. Table 6.1 One-sided L1 L1 One-sided L1 from above solution from below –––––––––––––––––––––––––––––––––––––––––––––––––––––––– Example C(n×m) Iter z Iter z Iter z –––––––––––––––––––––––––––––––––––––––––––––––––––––––– 1 4×2 no solution 2 18.4 no solution 2 10× 5 7 10.00 6 10.00 no solution 3 25×10 20 0.1408 21 0.0878 19 0.139 For each example, the number of iterations and the optimum L1 norms z for the 3 cases are shown. In this table, “no solution” indicates that the problem has no one-sided L1 solution. The results indicate that the number of iterations for the one-sided L1 approximation are comparable to those for the ordinary L1 approximation. The current algorithm is a special purpose one for solving the one-sided L1 approximation problem. However, the algorithms mentioned in Section 6.2, those of Armstrong and Hultz (AH) [4], Barrodale and Roberts (BR) [7-9], Bartels and Conn (BC) [10, 11] and Dax (DA) [12], are general purpose algorithms that may by used to solve the current problem. Each of those algorithms necessitates that matrix C and vector f be stored twice in computer memory, once for (6.2.1a) and once for (6.2.1b), or once for (6.2.2a) and once for (6.2.2b), replacing D and d respectively. That would nearly double the number of arithmetic operations in their algorithms. Our algorithm, as a special purpose one, would thus be more efficient. The same observations are made at the end of Chapter 11 for the one-sided Chebyshev approximation problem.
© 2008 by Taylor & Francis Group, LLC
Chapter 6: One-Sided L1 Approximation
193
References 1. 2.
3. 4. 5.
6. 7. 8. 9. 10. 11.
Abdelmalek, N.N., On the discrete L1 approximation and L1 solutions of overdetermined linear equations, Journal of Approximation Theory, 11(1974)38-53. Abdelmalek, N.N., Linear one-sided approximation algorithms for the solution of overdetermined systems of linear inequalities, International Journal of Systems Science, 15(1984)1-8. Armitage, D.H., Gardiner, S.J., Haussmann, W. and Rogge, L., Best one-sided L1 – approximation by harmonic functions, Manuscripta Mathematica, 96(1998)181-194. Armstrong, R.D. and Hultz, J.W., An algorithm for a restricted discrete approximation problem in the L1 norm, SIAM Journal on Numerical Analysis, 14(1977)555-565. Babenko, V.F. and Glushko, V.N., On the uniqueness of elements of the best approximation and the best one-sided approximation in the space L1, Ukrainian Mathematical Journal, 46(1994)503-513. Barrodale, I. and Roberts, F.D.K., An improved algorithm for discrete l1 approximation, SIAM Journal on Numerical Analysis, 10(1973)839-848. Barrodale, I. and Roberts, F.D.K., Algorithms for restricted least absolute value estimation, Communications on Statistics - Simulation and Computation, B6(1977)353-363. Barrodale, I. and Roberts, F.D.K., An efficient algorithm for discrete l1 linear approximation with linear constraints, SIAM Journal on Numerical Analysis, 15(1978)603-611. Barrodale, I. and Roberts, F.D.K., Algorithm 552: Solution of the constrained L1 linear approximation problem, ACM Transactions on Mathematical Software, 6(1980)231-235. Bartels, R.H. and Conn, A.R., Linearly constrained discrete l1 problems, ACM Transactions on Mathematical Software, 6(1980)594-608. Bartels, R.H. and Conn, A.R., Algorithm 563: A program for linearly constrained discrete l1 problems, ACM Transactions on Mathematical Software, 6(1980)609-614.
© 2008 by Taylor & Francis Group, LLC
194
Numerical Linear Approximation in C
12.
Dax, A., The l1 solution of linear equations subject to linear constraints, SIAM Journal on Scientific and Statistical Computation, 10(1989)328-340. Deng, J., Feng, Y. and Chen, F., Best one-sided approximation of polynomials under L1 norm, Journal of Computational and Applied Mathematics, 144(2002)161-174. Duris, C.S., An exchange method for solving Haar and non-Haar overdetermined linear equations in the sense of Chebyshev, Proceedings of Summer ACM Computer Conference, (1968)61-65. Duris, C.S. and Sreedharan, V.P., Chebyshev and l1-solutions of linear equations using least squares solutions, SIAM Journal on Numerical Analysis, 5(1968)491-505. Easton, M.C., A fixed point method for Tchebycheff solution of inconsistent linear equations, Journal of Institute of Mathematics and Applications, 12(1973)137-159. Goldstein, A.A., Levine, N. and Hereshoff, J.B., On the best and least qth approximation of an overdetermined system of linear equations, Journal of ACM, 4(1957)341-347. Hadley, G., Linear Programming, Addison-Wesley, Reading, MA, 1962. Lenze, B., Uniqueness in best one-sided L1 – approximation by algebraic polynomials on unbounded intervals, Journal of Approximation Theory, 57(1989)169-177. Lewis, J.T., Computation of best one-sided L1 approximation, Mathematics of Computation, 24(1970)529-536. Pinkus, A.M., On L1-Approximation, Cambridge University Press, London, 1989. Tou, J.T. and Gonzalez, R.C., Pattern Recognition Principles, Addison-Wesley, Reading, MA, 1974. Watson, G.A., The calculation of best linear one-sided Lp approximations, Mathematics of Computation, 27(1973)607620. Watson, G.A., One-sided approximation and operator equations, Journal of Institute of Mathematics and Applications, 12(1973)197-208.
13. 14.
15. 16. 17. 18. 19. 20. 21. 22. 23. 24.
© 2008 by Taylor & Francis Group, LLC
Chapter 6: DR_Loneside
6.6
195
DR_Loneside
/*------------------------------------------------------------------DR_Loneside --------------------------------------------------------------------This program is a driver for the function LA_Loneside(), which calculates the one-sided L-One solution from above or from below of an overdetermined system of linear equations. The overdetermined system has the form c*a = f "c" is a given real n by m matrix of rank k, k <= m <= n. "f" is a given real n vector. "a" is the solution m vector. This driver contains 8 examples from which the results of examples 1, 6 and 7 are given in the text. All the example are solved twice; once for the one-sided L-One approximation from above and once for the one-sided L-One approximation from below. -------------------------------------------------------------------*/ #include "DR_Defs.h" #include "LA_Prototypes.h" #define #define #define #define #define #define #define #define #define #define #define #define #define #define #define #define
N1 M1 N2 M2 N3 M3 N4 M4 N5 M5 N6 M6 N7 M7 N8 M8
4 2 5 3 6 3 7 3 8 4 10 5 25 10 8 4
void DR_Loneside (void)
© 2008 by Taylor & Francis Group, LLC
196
Numerical Linear Approximation in C
{ /*---------------------------------------Constant matrices/vectors ----------------------------------------*/ static tNumber_R c1init[N1][M1] = { { 0.0, -2.0 }, { 0.0, -4.0 }, { 1.0, 10.0 }, {-1.0, 15.0 } }; static tNumber_R { { 1.0, 2.0, {-1.0, -1.0, { 1.0, 3.0, { 0.0, 1.0, { 0.0, 0.0, };
c2init[N2][M2] =
static tNumber_R { { 0.0, -1.0, { 1.0, 3.0, { 1.0, 0.0, { 0.0, 0.0, {-1.0, 1.0, { 1.0, 1.0, };
c3init[N3][M3] =
0.0 0.0 0.0 0.0 1.0
0.0 -4.0 0.0 1.0 2.0 1.0
}, }, }, }, }
}, }, }, }, }, }
static tNumber_R c4init[N4][M4] = { { 1.0, 0.0, 1.0 }, { 1.0, 2.0, 2.0 }, { 1.0, 2.0, 0.0 }, { 1.0, 1.0, 0.0 }, { 1.0, 0.0, -1.0 }, { 1.0, 0.0, 0.0 }, { 1.0, 1.0, 1.0 } }; static tNumber_R c5init[N5][M5] = { { 1.0, -3.0, 9.0, -27.0 }, © 2008 by Taylor & Francis Group, LLC
Chapter 6: DR_Loneside { { { { { { {
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
-2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0,
4.0, 1.0, 0.0, 1.0, 4.0, 9.0, 16.0,
197 -8.0 -1.0 0.0 1.0 8.0 27.0 64.0
}, }, }, }, }, }, }
}; static tNumber_R c6init[N6][M6] = { { 1.0, 0.0, 0.0, 0.0, 0.0 }, { 0.0, 1.0, 0.0, 0.0, 0.0 }, { 0.0, 0.0, 1.0, 0.0, 0.0 }, { 0.0, 0.0, 0.0, 1.0, 0.0 }, { 0.0, 0.0, 0.0, 0.0, 1.0 }, { 1.0, 1.0, 1.0, 1.0, 1.0 }, { 0.0, 1.0, 1.0, 1.0, 1.0 }, {-1.0, 0.0, -1.0, -1.0, -1.0 }, { 1.0, 1.0, 0.0, 1.0, 1.0 }, { 1.0, 1.0, 1.0, 0.0, 1.0 } }; static tNumber_R { { 1.0, 1.0, { 1.0, 2.0, { 1.0, 3.0, { 1.0, 4.0, { 1.0, 5.0, { 1.0, 6.0, { 1.0, 7.0, { 1.0, 8.0, };
c8init[N8][M8] = 1.0, 4.0, 9.0, 16.0, 25.0, 36.0, 49.0, 64.0,
1.0 4.0 9.0 16.0 25.0 36.0 49.0 64.0
static tNumber_R f1[N1+1] = { NIL, -12.0, 6.0, 0.0, 5.0 };
}, }, }, }, }, }, }, }
static tNumber_R f2[N2+1] = { NIL, 1.0, 2.0, 1.0, -3.0, 0.0 };
© 2008 by Taylor & Francis Group, LLC
198
Numerical Linear Approximation in C static tNumber_R f3[N3+1] = { NIL, 1.0, 2.0, 3.0, 2.0, 2.0, 4.0 }; static tNumber_R f4[N4+1] = { NIL, 0.0, -2.0, 1.0, -1.0, 5.0, 7.0, 0.0 }; static tNumber_R f5[N5+1] = { NIL, 3.0, -3.0, -2.0, 0.0, 7.0, -1.0, 5.0, 2.0 }; static tNumber_R f6[N6+1] = { NIL, 1.0, -1.0, 0.0, -1.0, 1.0, 0.0, 2.0, 3.0, -3.0, -2.0 }; static tNumber_R f7[N7+1] { NIL, 0.0872673, 0.0872794, 0.3491184, 0.3498802, 0.6111334, 0.6150641, 0.8733883, 0.8841621, 1.135895, 1.157550, };
= 0.0873029, 0.3513824, 0.6230824, 0.9071868, 1.206257,
0.0873315, 0.3532572, 0.6336395, 0.9400757, 1.283258,
0.0873576, 0.3550109, 0.6441493, 0.9766021, 1.384432
static tNumber_R f8[N8+1] = { NIL, 2.0, 2.5, 2.0, 6.5, 3.5, 4.5, 6.0, 7.0 }; /*---------------------------------------Variable matrices/vectors ----------------------------------------*/ tMatrix_R ct = alloc_Matrix_R (MM_COLS, NN_ROWS); tVector_R f = alloc_Vector_R (NN_ROWS); tVector_R r = alloc_Vector_R (NN_ROWS); tVector_R a = alloc_Vector_R (MM_COLS); tMatrix_R c7 = alloc_Matrix_R (N7, M7); tMatrix_R tMatrix_R
c1 c2
© 2008 by Taylor & Francis Group, LLC
= init_Matrix_R (&(c1init[0][0]), N1, M1); = init_Matrix_R (&(c2init[0][0]), N2, M2);
Chapter 6: DR_Loneside
199
tMatrix_R tMatrix_R tMatrix_R tMatrix_R tMatrix_R
c3 c4 c5 c6 c8
= = = = =
init_Matrix_R init_Matrix_R init_Matrix_R init_Matrix_R init_Matrix_R
(&(c3init[0][0]), (&(c4init[0][0]), (&(c5init[0][0]), (&(c6init[0][0]), (&(c8init[0][0]),
int int tNumber_R
irank, iter, iside, kase; i, j, k, m, n, Iexmpl; d, dd, ddd, e, ee, eee, z;
eLaRc
rc = LaRcOk;
N3, N4, N5, N6, N8,
M3); M4); M5); M6); M8);
for (j = 1; j <= 5; j++) { d = 0.15* (j-3); dd = d*d; ddd = d*dd; for (i = 1; i <= 5; i++) { e = 0.15* (i-3); ee = e*e; eee = e*ee; k = 5* (j-1) + i; c7[k][1] = 1.0; c7[k][2] = d; c7[k][3] = e; c7[k][4] = dd; c7[k][5] = ee; c7[k][6] = e*d; c7[k][7] = ddd; c7[k][8] = eee; c7[k][9] = dd*e; c7[k][10] = ee*d; } } prn_dr_bnr ("DR_Loneside, One-Sided L-One Solutions of an " "Overdetermined System of Linear Equations"); for (kase = 1; kase <= 2; kase++) { if (kase == 1) iside = 1; if (kase == 2) iside = 0; for (Iexmpl = 1; Iexmpl <= 8; Iexmpl++) { © 2008 by Taylor & Francis Group, LLC
200
Numerical Linear Approximation in C switch (Iexmpl) { case 1: n = N1; m = M1; for (i = 1; i <= n; i++) { f[i] = f1[i]; for (j = 1; j <= m; j++) ct[j][i] = c1[i][j]; } break; case 2: n = N2; m = M2; for (i = 1; i <= n; i++) { f[i] = f2[i]; for (j = 1; j <= m; j++) ct[j][i] = c2[i][j]; } break; case 3: n = N3; m = M3; for (i = 1; i <= n; i++) { f[i] = f3[i]; for (j = 1; j <= m; j++) ct[j][i] = c3[i][j]; } break; case 4: n = N4; m = M4; for (i = 1; i <= n; i++) { f[i] = f4[i]; for (j = 1; j <= m; j++) ct[j][i] = c4[i][j]; } break; case 5: n = N5; m = M5;
© 2008 by Taylor & Francis Group, LLC
Chapter 6: DR_Loneside
201
for (i = 1; i <= n; i++) { f[i] = f5[i]; for (j = 1; j <= m; j++) ct[j][i] = c5[i][j]; } break; case 6: n = N6; m = M6; for (i = 1; i <= n; i++) { f[i] = f6[i]; for (j = 1; j <= m; j++) ct[j][i] = c6[i][j]; } break; case 7: n = N7; m = M7; for (i = 1; i <= n; i++) { f[i] = f7[i]; for (j = 1; j <= m; j++) ct[j][i] = c7[i][j]; } break; case 8: n = N8; m = M8; for (i = 1; i <= n; i++) { f[i] = f8[i]; for (j = 1; j <= m; j++) ct[j][i] = c8[i][j]; } break; default: break; } prn_algo_bnr ("Loneside"); prn_example_delim(); PRN ("Example #%d: Size of matrix \"c\", %d by %d\n", © 2008 by Taylor & Francis Group, LLC
202
Numerical Linear Approximation in C Iexmpl, n, m); prn_example_delim(); if (iside == 1) PRN ("One-sided L-One Solution from Above\n"); else PRN ("One-sided L-One Solution from Below\n"); prn_example_delim(); PRN ("r.h.s. Vector \"f\"\n"); prn_Vector_R (f, n); PRN ("Transpose of Coefficient Matrix, \"ct\"\n"); prn_Matrix_R (ct, m, n); rc = LA_Loneside (iside, m, n, ct, f, &irank, &iter, r, a, &z); if (rc >= LaRcOk) { PRN ("\n"); PRN ("Results of the One-sided L1 Approximation\n"); PRN ("One-sided L-One solution vector, \"a\"\n"); prn_Vector_R (a, m); PRN ("One-sided L-One residual vector, \"r\"\n"); prn_Vector_R (r, n); PRN ("One-sided L-One norm \"z\" = %8.4f\n", z); PRN ("Rank of Matrix \"c\" = %d," " Number of Iterations = %d\n", irank, iter); } prn_la_rc (rc); } } free_Matrix_R free_Vector_R free_Vector_R free_Vector_R free_Matrix_R
(ct, MM_COLS); (f); (r); (a); (c7, N7);
uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R
(c1); (c2); (c3); (c4); (c5); (c6); (c8);
} © 2008 by Taylor & Francis Group, LLC
Chapter 6: LA_Loneside
6.7
203
LA_Loneside
/*------------------------------------------------------------------LA_Loneside --------------------------------------------------------------------This program calculates the one-sided L-One solution from above or from below of an overdetermined system of linear equations. It uses a modified simplex method to the linear programming formulation of the problem. The system of linear equations has the form c*a = f "c" is a given real n by m matrix of rank k <= m <= n. "f" is a given real n vector. The problem is to calculate the elements of the real m vector "a" that gives the minimum L1 residual norm z. z = |r[1]| + |r[2]| + ... + |r[n]| where r[i] is the ith residual and is given by r[i] = c[i][1]*a[1] + c[i][2]*a[2] + ... + c[i][m]*a[m] - f[i], i = 1, 2, ..., n subject to the conditions r[i] => 0, for the one-sided L-One solution from below or r[i] =< 0, for the one-sided L-One solution from above. Inputs iside An integer specifying the action to be performed. If iside = 1, the one-sided L-One solution from above is calculated. If iside != 1, the one-sided L-One solution from below is calculated. m Number of columns of matrix "c" in the system c*a = f. n Number of rows of matrix "c" in the system c*a = f. ct A real m by n matrix containing the transpose of matrix "c" of the system c*a = f. f A real n vector containing the r.h.s. of the system c*a = f.
© 2008 by Taylor & Francis Group, LLC
204
Numerical Linear Approximation in C
Local Variables binv A real m square matrix containing the inverse of the basis matrix in the linear programming problem. bv A real m vector containing the basic solution in the linear programming problem. icbas An integer m vector containing the indices of the columns of "ct" that form the columns of the basis matrix. irbas An integer m vector containing the indices of the rows of "ct". Outputs irank The calculated rank of matrix "c". iter Number of iterations, or the number of times the simplex tableau is changed by a Gauss-Jordon elimination step. a A real m vector containing the one-sided L-One solution of the system c*a = f. r A real n vector containing the one-sided L-One residual vector r = (c*a - f). z The optimum one-sided L-One norm of the problem. Returns one of LaRcSolutionUnique LaRcSolutionProbNotUnique LaRcSolutionDefNotUniqueRD LaRcNoFeasibleSolution LaRcErrBounds LaRcErrNullPtr LaRcErrAlloc -------------------------------------------------------------------*/ #include "LA_Prototypes.h" eLaRc LA_Loneside (int iside, int m, int n, tMatrix_R ct, tVector_R f, int *pIrank, int *pIter, tVector_R r, tVector_R a, tNumber_R *pZ) { tMatrix_R binv = alloc_Matrix_R (m, m); tVector_R bv = alloc_Vector_R (m); tVector_I icbas = alloc_Vector_I (m); tVector_I irbas = alloc_Vector_I (m); int int
i = 0, j = 0, kk = 0; iout = 0, jin = 0, ivo = 0, itest = 0;
© 2008 by Taylor & Francis Group, LLC
Chapter 6: LA_Loneside
205
/* Validation of data before executing the algorithm */ eLaRc rc = LaRcSolutionUnique; VALIDATE_BOUNDS ((0 < m) && (m <= n) && !((n == 1) && (m == 1))); VALIDATE_PTRS (ct && f && pIrank && pIter && r && a && pZ); VALIDATE_ALLOC (binv && bv && icbas && irbas); /* Initialization */ *pIrank = m; *pIter = 0; for (j = 1; j <= m; j++) { a[j] = 0.0; icbas[j] = 0; irbas[j] = j; for (i = 1; i <= m; i++) { binv[i][j] = 0.0; } binv[j][j] = 1.0; } for (j = 1; j <= n; j++) { r[j] = 0.0; } /* One-sided L-One solution from above */ if (iside == 1) { for (j = 1; j <= n; j++) { f[j] = -f[j]; for (i = 1; i <= m; i++) ct[i][j] = - ct[i][j]; } } /* Calculate the initial basic solution */ LA_loneside_basic_sol (m, n, ct, irbas, bv); /* Determine the rank of matrix "ct" */ LA_loneside_part_1 (m, n, ct, icbas, irbas, binv, bv, pIrank, pIter, r); /* Part 2 of the algorithm */ /* Calculate the marginal costs */ LA_loneside_marg_costs (m, n, ct, f, icbas, r); © 2008 by Taylor & Francis Group, LLC
206
Numerical Linear Approximation in C
for (kk = 1; kk <= n*n; kk++) { /* Determine the vector that enters the basis */ LA_loneside_vent (&ivo, &jin, m, n, icbas, r); if (ivo == 0) { /* Calculate the results */ rc = LA_loneside_res (iside, m, n, f, icbas, irbas, binv, bv, pIrank, r, a, pZ); GOTO_CLEANUP_RC (rc); } itest = 0; /* Determine the vector that leaves the basis */ LA_loneside_vleav (jin, &iout, &itest, m, ct, bv); if (itest != 1) { GOTO_CLEANUP_RC (LaRcNoFeasibleSolution); } /* A Gauss-Jordan elimination step */ LA_loneside_gauss_jordn (iout, jin, m, n, ct, icbas, binv, bv, r); *pIter = *pIter + 1; } CLEANUP: free_Matrix_R free_Vector_R free_Vector_I free_Vector_I
(binv, m); (bv); (icbas); (irbas);
return rc; } /*------------------------------------------------------------------Determine the rank of matrix "c" in LA_Loneside() -------------------------------------------------------------------*/ void LA_loneside_part_1 (int m, int n, tMatrix_R ct, tVector_I icbas, tVector_I irbas, tMatrix_R binv, tVector_R bv, int *pIrank, int *pIter, tVector_R r) { © 2008 by Taylor & Francis Group, LLC
Chapter 6: LA_Loneside int tNumber_R
i, j, k, li = 0, jin = 0, iout; d, g, piv;
for (iout = 1; iout <= m; iout++) { if (iout <= *pIrank) { piv = 0.0; for (j = 1; j <= n; j++) { for (i = iout; i <= *pIrank; i++) { d = ct[i][j]; if (d < 0.0) d = -d; if (d > piv) { li = i; jin = j; piv = d; } } } /* Detection of rank deficiency */ if (piv > 0.0) { if (li != iout) { /* Swap of two elements of vector "irbas" */ swap_elems_Vector_I (irbas, li, iout); /* Swap of two elements of vector "bv" */ swap_elems_Vector_R (bv, li, iout); /* Swap of two rows of matrix "ct" */ swap_rows_Matrix_R (ct, li, iout); /* Swap parts of two rows of matrix "binv" */ if (iout != 1) { k = iout - 1; for (j = 1; j <= k; j++) { g = binv[li][j]; binv[li][j] = binv[iout][j]; © 2008 by Taylor & Francis Group, LLC
207
208
Numerical Linear Approximation in C binv[iout][j] = g; } } } LA_loneside_gauss_jordn (iout, jin, m, n, ct, icbas, binv, bv, r); *pIter = *pIter + 1; } if (piv < EPS) { /* Solution is not unique */ *pIrank = iout - 1; } } }
} /*------------------------------------------------------------------Calculation of the initial marginal costs in LA_Loneside() -------------------------------------------------------------------*/ void LA_loneside_marg_costs (int m, int n, tMatrix_R ct, tVector_R f, tVector_I icbas, tVector_R r) { int i, j, k, ibc; tNumber_R s; for (j = 1; j <= n; j++) { r[j] = 0.0; ibc = 0; for (i = 1; i <= m; i++) { if (j == icbas[i]) ibc = 1; } if (ibc == 0) { s = - f[j]; for (i = 1; i <= m; i++) { k = icbas[i]; s = s + f[k] * ct[i][j]; } r[j] = s; } © 2008 by Taylor & Francis Group, LLC
Chapter 6: LA_Loneside
209
} } /*------------------------------------------------------------------Calculation of the initial basic solution in LA_Loneside() -------------------------------------------------------------------*/ void LA_loneside_basic_sol (int m, int n, tMatrix_R ct, tVector_I irbas, tVector_R bv) { int i, j; tNumber_R s; for (i = 1; i <= m; i++) { s = 0.0; for (j = 1; j <= n; j++) { s = s + ct[i][j]; } bv[i] = s; } for (i = 1; i <= m; i++) { if (bv[i] < -EPS) { bv[i] = -bv[i]; irbas[i] = -i; for (j = 1; j <= n; j++) { ct[i][j] = -ct[i][j]; } } } } /*------------------------------------------------------------------Determine the vector that enters the basis in LA_Loneside() -------------------------------------------------------------------*/ void LA_loneside_vent (int *pIvo, int *pJin, int m, int n, tVector_I icbas, tVector_R r) { int i, j, ic; tNumber_R d, g; *pIvo = 0; © 2008 by Taylor & Francis Group, LLC
210
Numerical Linear Approximation in C g = 1.0/ (EPS*EPS); for (j = 1; j <= n; j++) { ic = 0; for (i = 1; i <= m; i++) { if (j == icbas[i]) ic = 1; } if (ic == 0) { d = r[j]; if (d < 0.0) { if (d < g) { *pIvo = 1; g = d; *pJin = j; } } } }
} /*------------------------------------------------------------------Determine the vector that leaves the basis in LA_Loneside() -------------------------------------------------------------------*/ void LA_loneside_vleav (int jin, int *pIout, int *pItest, int m, tMatrix_R ct, tVector_R bv) { int i; tNumber_R d, g, thmax; thmax = 1.0/ (EPS*EPS); for (i = 1; i <= m; i++) { d = ct[i][jin]; if (d > EPS) { g = bv[i]/d; if (g <= thmax) { thmax = g; *pIout = i; *pItest = 1; © 2008 by Taylor & Francis Group, LLC
Chapter 6: LA_Loneside
211
} } } } /*------------------------------------------------------------------A Gauss-Jordan elimination step in LA_Loneside() -------------------------------------------------------------------*/ void LA_loneside_gauss_jordn (int iout, int jin, int m, int n, tMatrix_R ct, tVector_I icbas, tMatrix_R binv, tVector_R bv, tVector_R r) { int i, j; tNumber_R pivot, d; pivot = ct[iout][jin]; for (j = 1; j <= n; j++) ct[iout][j] = ct[iout][j]/pivot; for (j = 1; j <= m; j++) binv[iout][j] = binv[iout][j]/pivot; bv[iout] = bv[iout]/pivot; for (i = 1; i <= m; i++) { if (i != iout) { d = ct[i][jin]; for (j = 1; j <= n; j++) { ct[i][j] = ct[i][j] - d * (ct[iout][j]); } for (j = 1; j <= m; j++) { binv[i][j] = binv[i][j] - d * (binv[iout][j]); } bv[i] = bv[i] - d * (bv[iout]); } } icbas[iout] = jin; d = r[jin]; for (j = 1; j <= n; j++) r[j] = r[j] - d * (ct[iout][j]); } /*------------------------------------------------------------------Calculate the results of LA_Loneside() -------------------------------------------------------------------*/ eLaRc LA_loneside_res (int iside, int m, int n, tVector_R f, tVector_I icbas, tVector_I irbas, tMatrix_R binv, tVector_R bv, © 2008 by Taylor & Francis Group, LLC
212
Numerical Linear Approximation in C int *pIrank, tVector_R r, tVector_R a, tNumber_R *pZ)
{ int tNumber_R
i, j, k; s;
for (j = 1; j <= *pIrank; j++) { s = 0.0; for (i = 1; i <= *pIrank; i++) { k = icbas[i]; s = s + f[k]*binv[i][j]; } k = irbas[j]; if (k < 0) k = -k; a[k] = s; if (irbas[j] < 0) a[k] = -s; } s = 0.0; for (j = 1; j <= n; j++) { s = s + r[j]; } *pZ = s; if (iside == 1) { for (j = 1; j <= n; j++) r[j] = -r[j]; } if (*pIrank < m) return LaRcSolutionDefNotUniqueRD; for (i = 1; i <= m; i++) { if (bv[i] < EPS) return LaRcSolutionProbNotUnique; } return LaRcSolutionUnique; }
© 2008 by Taylor & Francis Group, LLC
213
Chapter 7 L1 Approximation with Bounded Variables
7.1
Introduction
In Chapter 5, an algorithm for calculating the L1 solution of overdetermined systems of linear equations is given. In this solution the L1 norm of the residual vector, is as small as possible. In Chapter 6, the one-sided L1 solution of overdetermined linear equations was presented, in which the L1 norm of the residual vector is as small as possible and such that all the elements of the residual vector are either non-positive or non-negative. In this chapter we present yet another kind of the linear L1 approximation, known as L1 approximation with bounded variables [5]. Here, the additional constraints are on the elements of the solution vector, not on the elements of the residual vector as in the previous chapter. The elements of the solution vector are to be bounded between –1 and 1. For example, given the (x, y) data of Figure 2-1, the solid curve in Figure 7-1, is for the ordinary L1 approximation, and is given by y = 2.143 – 0.25x + 0.107x2. The elements of the solution vector are (2.143, –0.25, 0.107) and they are not all bounded between –1 and 1. Yet for the same data, for the L1 approximation with bounded variables between –1 and 1, the approximating curve is given by y = 1 + 0.083x + 0.083x2, where the elements of the solution vector are between –1 and 1. For comparison purposes, the two approximating curves are shown in Figure 7-1. In this chapter, the problem is solved by linear programming techniques, where initial basic solution is obtained with very little computational effort. Minimum computer storage is required and no conditions are imposed on the coefficient matrix. It could be a rank © 2008 by Taylor & Francis Group, LLC
214
Numerical Linear Approximation in C
deficient one. Consider the overdetermined system of linear equations Ca = f C = (cij) is a given real n by m matrix of rank k, k ≤ m ≤ n, and f = (fi) is a given real n-vector. The L1 solution of system Ca = f is the m-vector a = (ai) that minimizes the L1 norm z of the residuals n
z =
(7.1.1)
∑
ri
i=1
where ri is the ith element of the residual vector r = Ca – f When the elements of the solution vector a are required to satisfy the additional conditions –1 ≤ ai ≤ 1, i = 1, 2, …, m
(7.1.2)
we have the problem of calculating the L1 solution of system Ca = f with bounded variables between –1 and 1. If instead of the constraints (7.1.2), we require the elements of vector a to satisfy the constraints ci ≤ ai ≤ di, i = 1, 2, …, m
(7.1.3)
where vectors c = (ci) and d = (di) are given m-vectors, by substituting variables, these constraints reduce to the constraints (7.1.2) in the new variables. Let (7.1.4)
ai = 0.5[(di – ci)zi + (di + ci)], i = 1, 2, …, m
That is zi = [2ai – (di + ci)]/(di – ci), i = 1, 2, …, m Hence, when ai = di, zi = 1 and when ai = ci, zi = –1. In vector-matrix form, (7.1.4) is given by (7.1.5)
a = Gz + g
G is a diagonal m-matrix whose ith diagonal element is 0.5(di – ci) and g is an m-vector whose ith element is 0.5(di + ci). By substituting (7.1.5) into Ca = f, one gets © 2008 by Taylor & Francis Group, LLC
Chapter 7: L1 Approximation with Bounded Variables
215
Dz = h where D = CG and h = f – Cg and the elements of the new solution vector z are bounded between –1 and 1 –1 ≤ zi ≤ 1, i = 1, 2, …, m Once the solution vector z is obtained, the solution vector a of the given system Ca = f is calculated by substituting z into (7.1.5). The linear L1 approximation with non-negative parameters may be formulated as an L1 approximation with bounded variables, as explained in Section 7.1.1. 8 7 6 5 4 3 2 1 0 0
1
2
3
4
5
6
7
8
9
Figure 7-1: Curve fitting with vertical parabolas of a set of 8 points using L1 approximation and L1 approximation with bounded variables between –1 and 1
The solid curve is the L1 approximation and the dashed curve is the L1 approximation with bounded variables. In Section 7.2, the bounded L1 approximation problem would be treated as a special case of some general purpose algorithms. We shall comment on these algorithms in Section 7.5. In Section 7.3, the linear
© 2008 by Taylor & Francis Group, LLC
216
Numerical Linear Approximation in C
programming formulation of the problem is presented and necessary lemmas are given. In Section 7.4, the algorithm is described, and in Section 7.5, numerical results and comments are given. 7.1.1
Linear L1 approximation with non-negative parameters (NNL1)
If in (7.1.3) we take (ci) = 0 and (di) = Big, i = 1, 2, …, m, where Big is a large number, we get the non-negative linear L1 (NNL1) approximation. Spath ([13], p. 250) suggested to take Big = 100 × max|fi|, i = 1, 2, …, n, where (fi) are the elements of vector f in the system Ca = f. See Chapter 12.1.1 for the non-negative L-infinity approximation (NNLI). 7.2
A special problem of a general constrained one
As in Chapter 6, this problem would be treated as a special case of one of the four general purpose algorithms. These are of Armstrong and Hultz [6], Barrodale and Roberts [7, 8], Bartels and Conn [9, 10] and Dax [11]. Using our notation, let C and E be matrices of appropriate dimensions and let f be the vector associated with C, and e1 and e2 be two vectors associated with E respectively. Using a special purpose primal linear programming method, Armstrong and Hultz (AH) [6] seek a solution vector a that satisfies the problem (7.2.1a)
minimize ||Ca – f||1
subject to (7.2.1b)
e1 ≤ Ea ≤ e2
Hence, if we take in (7.2.1b), matrix E = Im, an m-unit matrix, and take e1 = –em and e2 = em, where each element of em is 1, (7.2.1a, b) reduce to the problem of bounded L1 approximation between –1 and 1. Once more, consider equations (6.2.2a-b) using our notation. Let C, G and D be matrices of appropriate dimensions and let f, g and d be vectors associated with C, G and D respectively. Each of Barrodale and Roberts (BR) [7, 8], Bartels and Conn (BC) [9, 10] and Dax (DA) [11] seek a vector a that satisfies © 2008 by Taylor & Francis Group, LLC
Chapter 7: L1 Approximation with Bounded Variables
(7.2.2a)
217
minimize ||Ca – f||1
subject to Ga = g and Da ≥ d
(7.2.2b)
They allow the possibility that some but not all of the arrays (G, g) and (D, d) be vacuous. Hence, if in (7.2.2b), we take G = 0, g = 0, D = [Im, –Im]T and d = [–em, –em]T, (7.2.2a, b) reduce to the problem of the L1 approximation with bounded variables. As indicated earlier, we shall comment on these methods in Section 7.5. In the following, we describe an algorithm for the L1 approximation with bounded variables between –1 and 1, using linear programming techniques. The algorithm is an extension of the algorithm for the ordinary L1 solution of overdetermined linear equations. In the linear programming formulation, the initial basic feasible solution is obtained with little computational effort, and intermediate simplex iterations are skipped [2, 3]. 7.3
Linear programming formulation of the problem
Since the elements of the residual vector r = (ri) are unrestricted in sign, we may write the residual vector as r = r1 – r2 where vectors r1, r2 ≥ 0. It is understood that when ri > 0, r1i > 0 and r2i = 0 and when ri < 0, r1i = 0 and r2i > 0. When ri = 0, r1i = r2i = 0, where r1i and r2i are the ith elements of vectors r1 and r2 respectively. This problem may be reduced to a linear programming problem in the primal form as follows. minimize Z = enTr1 + enTr2 subject to Ca – Inr1 + Inr2 = f –Ima ≥ –em Ima ≥ –em aj, j = 1, 2, …, m, unrestricted in sign, r1, r2 ≥ 0 Here, en and em are n and m-vectors respectively, each element of © 2008 by Taylor & Francis Group, LLC
218
Numerical Linear Approximation in C
which is 1. Also, In and Im are n- and m-unit matrices respectively. It is more beneficial to work with the dual form of this primal problem, namely (Chapter 3) maximize z = fTw – emTu – emTv subject to CTw – Imu + Imv = 0 w ≤ en w ≥ –en ui, vi ≥ 0, i = 1, 2, …, m The last two vector inequalities reduce to –1 ≤ wi ≤ 1, i = 1, 2, …, n where w, u and v are respectively n-, m- and m-vectors for the dual linear programming problem. As in [1, 2], by letting bi = wi + 1, i = 1, 2, …, n, this problem reduces to (in vector-matrix form) b T maximize z = f T – e T – e T u – f e n m m v or since fTen is just a constant term, the above reduces to (7.3.1a)
b maximize z = f T – e T – e T u m m v
subject to b T u = C en v
(7.3.1b)
C –Im Im
(7.3.1c)
0 ≤ bi ≤ 2,
i = 1, 2, …, n
(7.3.1d)
ui, vi ≥ 0,
i = 1, 2, …, m
© 2008 by Taylor & Francis Group, LLC
T
Chapter 7: L1 Approximation with Bounded Variables
219
Let in (7.3.1a) g = [f T –emT –emT]
(7.3.2a) and in (7.3.1b)
D = [CT –Im Im]
(7.3.2b)
The r.h.s. of (7.3.1b) is none other that the sum of the columns of matrix CT. Compare the r.h.s. of (7.3.1b) with those of (5.2.4b) and of (6.3.2b). Problem (7.3.1) is solved by using a dual simplex algorithm with non-negative bounded variables [1-4, 12]. Since matrix D in (7.3.2b) is of dimension m by (n + 2m), a simplex tableau for m constraints in (n + 2m) variables is constructed for problem (7.3.1). Note also that the matrix of constraints D in (7.3.2b) is of full rank; rank(D) = m. though matrix C may be rank deficient. The basis matrix, denoted by B has m linearly independent columns in the simplex tableau. Vectors yj are given by (7.3.3a)
yj = B–1Dj, j = 1, 2, …, n+2m
where Dj is the jth column of matrix D in (7.3.2b). Let bB denote the initial basic solution. Since in (7.3.1c), variables bi, i = 1, 2, …, n, are bounded between 0 and 2, as in [2]. In this problem, bB is given by (7.3.3b)
bB =
∑
iεI ( b )
yi +
∑
iεL ( b )
yi –
∑
iεU ( b )
yi
The summations are respectively over the basic variables bi, the non-basic variables bi at their lower bound (= 0) and the non-basic variables bi at their upper bound (= 2). The technique used here is very similar to that used for the (ordinary) linear L1 approximation problem of Chapter 5. The marginal costs, denoted by (zj – gj), are given by (7.3.3c)
zj – gj = gBTyj – gj, j = 1, 2, …, n+2m
where the elements of the m-vector gB are associated with the basic variables and vector g is defined by (7.3.2a).
© 2008 by Taylor & Francis Group, LLC
220
7.3.1
Numerical Linear Approximation in C
Properties of the matrix of constraints
We will make use of the fact that there is a kind of asymmetry in parts of the matrix of constraints D. There exist the two matrices –Im and Im as part of D in (7.3.2b). We take the m-unit matrix Im as the initial basis matrix B. Definition Let i and j, (n + 1) ≤ i, j ≤ (n + 2m), be the indices of any two columns in matrix D such that |i – j| = m. We define columns i and j as two corresponding columns. Consider the following lemmas. Lemma 7.1 Any two corresponding columns should not appear together in any basis matrix. Otherwise, the basis matrix would be singular. Lemma 7.2 At any stage of the computation, the corresponding columns of the simplex tableau are related. If one column is known, the other is easily derived. The same is true about their marginal costs. From (7.3.3a) and (7.3.3c) respectively we get (7.3.4a)
yi + yi+m = 0, i = n+1, n+2, …, n+m
(7.3.4 b) (zi – gi) + (zi+m – gi+m) = 2, i = n+1, n+2, …, n+m These two lemmas indicate that only m columns of the last 2m columns in matrix D need to be stored by the program, and no corresponding columns exist in these m columns. Hence, from the matrix of constraints D, we need to store m constraints in only (n + m) variables, since the y vectors of the other m variables and their marginal costs would be known from (7.3.4a, b). Lemma 7.3 At any stage of the computation, the last m columns in the simplex m by (n + 2m) tableau are themselves the m columns of the inverse of the basis matrix; B–1. Lemma 7.4 (Lemma 2 in [1]) The residuals ri, i = 1, …, n, of (7.2.1c) are themselves the marginal costs for the first n columns in the simplex tableau © 2008 by Taylor & Francis Group, LLC
Chapter 7: L1 Approximation with Bounded Variables
221
ri = (zi – gi), i = 1, 2, …, n Hence, the objective function z of (7.1.1) is given by n
n
z =
(7.3.5)
∑ i=1
ri =
∑
zi – gi
i=1
Lemma 7.5 At any stage of the computation, the solution of the L1 problem of the bounded variables is given by aT = gBTB–1 See Lemmas 5.4 and 6.2. The steps taken to solve problem (7.3.1) are almost identical to those used in solving (5.2.4). However, we make use here of the asymmetry that exists in the matrix of constraints D, as indicated above. This is explained in the following section. 7.4
Description of the algorithm
Lemma 7.3 indicates that the last m columns in the simplex tableau are themselves the m columns of matrix B–1. Since matrix B–1 is always available, and from the discussion following (7.3.4a, b), in the initial tableau, we only need to store the first n columns of matrix D of (7.3.2b). We call the initial tableau and the following tableaux, the condensed tableaux. We consider the columns of matrix B–1 (and their corresponding columns) together with their marginal costs as part of the simplex tableau. We need an (n + m)-index indicator vector whose elements are +1 or –1. For the first n elements of this indicator, if bj, 1 ≤ j ≤ n, of (7.3.1c), is a basic variable, or if it is at its lower bound (= 0), index j has the value +1. If bj is at its upper bound (= 2), index j has value –1. For the remaining m elements of this index vector, if column j of matrix B–1, 1 ≤ j ≤ m, has its marginal cost stored, the index (n + j) has the value +1. Else, it has the value –1. The algorithm is in 2 parts. In part 1, the last m columns of matrix D; columns (n + m + 1), …, (n + 2m), form an m-unit matrix Im and
© 2008 by Taylor & Francis Group, LLC
222
Numerical Linear Approximation in C
they form the inverse of the initial basis matrix, B–1. This does not require changing the simplex tableau being columns of an m-unit matrix. In part 2, one calculates the basic solution bB from (7.3.3b) and the marginal costs (zi – gi), i = 1, 2, …, n, from (7.3.3c) for the condensed tableau. We also calculate the marginal costs for B–1. From (7.3.1a) and from the above discussion, vector (–em) would be the price vector for the columns of B–1. The algorithm proceeds almost exactly as in part 2 in Chapter 5, for the ordinary L1 solution, where intermediate simplex iterations are skipped. The only difference between the algorithm of Chapter 5 and this one is as follows. From (7.3.1c), if bj, 1 ≤ j ≤ n, is in the basis for the optimum solution, bj has to be bounded; i.e., 0 ≤ bj ≤ 2. While, from (7.3.1d), if uj or vj, 1 ≤ j ≤ m, is in the basis, uj or vj should only satisfy the non-negativity conditions; i.e., uj, vj ≥ 0. From Lemma 7.4, the residuals ri, i = 1, 2, …, n, are themselves the marginal costs for the first n columns in the simplex tableau. The bounded L1 error norm z is calculated from (7.3.5). The optimal solution vector a is calculated from Lemma 7.5. 7.5
Numerical results and comments
LA_Lonebv() is an extension to LA_Lone() of Chapter 5, where again certain intermediate simplex iterations are skipped. This algorithm has been tested with many examples, including examples with rank deficient matrices C. No failures were encountered. DR_Lonebv() tests the 8 examples that were solved in Chapter 6 for the one-sided L1 approximation problem. Table 7.1 L1 L1 solution with Solution bounded variables –––––––––––––––––––––––––––––––––––––––––––––––––––––––– Example C(n×m) Iterations z Iterations z –––––––––––––––––––––––––––––––––––––––––––––––––––––––– 1 4×2 2 18.40 1 20.2 2 10× 5 6 10.00 2 11.00 3 25×10 21 0.0878 12 3.548 © 2008 by Taylor & Francis Group, LLC
Chapter 7: L1 Approximation with Bounded Variables
223
The results satisfy the inequalities (7.1.2), namely, –1 ≤ ai ≤ 1. Table 7.1 shows the results for 3 of the 8 examples. For each example, the number of iterations and the optimum norm for the L1 solution with bounded variables are shown. For comparison purposes, the results for the ordinary L1 approximation problem are also given. We observe that the number of iterations for this algorithm are, in general, smaller than those for the corresponding ordinary L1 case. Analogous to the comments at the end of Chapter 6, our algorithm in this chapter is a special purpose algorithm compared with the algorithms of Armstrong and Hultz (AH) [6], Barrodale and Roberts (BR) [7, 8], Bartels and Conn (BC) [9, 10] and Dax (DA) [11] mentioned in Section 7.2. Their algorithms are general purpose ones. They need more computer storage than ours and as a result, more computation. This point was explained in detail at the end of Chapter 6.
References 1. 2. 3. 4.
5.
Abdelmalek, N.N., On the discrete linear L1 approximation and L1 solutions of overdetermined linear equations, Journal of Approximation Theory, 11(1974) 38-53. Abdelmalek, N.N., An efficient method for the discrete linear L1 approximation problem, Mathematics of Computation, 29(1975) 844-850. Abdelmalek, N.N., L1 solution of overdetermined systems of linear equations, ACM Transactions on Mathematical Software, 6(1980) 220-227. Abdelmalek, N.N., Algorithm 551: A FORTRAN subroutine for the L1 solution of overdetermined systems of linear equations [F4], ACM Transactions on Mathematical Software, 6(1980)228-230. Abdelmalek, N.N., Chebyshev and L1 solutions of overdetermined systems of linear equations with bounded variables, Numerical Functional Analysis and Optimization, 8(1985-86)399-418.
© 2008 by Taylor & Francis Group, LLC
224
6. 7. 8. 9. 10. 11. 12. 13.
Numerical Linear Approximation in C
Armstrong, R.D. and Hultz, J.W., An algorithm for a restricted discrete approximation in the L1 norm, SIAM Journal on Numerical Analysis, 14(1977)555-565. Barrodale, I. and Roberts, F.D.K., An efficient algorithm for discrete l1 linear approximation with linear constraints, SIAM Journal on Numerical Analysis, 15(1978)603-611. Barrodale, I. and Roberts, F.D.K., Algorithm 552: Solution of the constrained L1 linear approximation problem, ACM Transactions on Mathematical Software, 6(1980)231-235. Bartels, R.H. and Conn, A.R., Linearly constrained discrete l1 problems, ACM Transactions on Mathematical Software, 6(1980)594-608. Bartels, R.H. and Conn, A.R., Algorithm 563: A program for linearly constrained discrete l1 problems, ACM Transactions on Mathematical Software, 6(1980)609-614. Dax, A., The l1 solution of linear equations subject to linear constraints, SIAM Journal on Scientific and Statistical Computation, 10(1989)328-340. Hadley, G., Linear Programming, Addison-Wesley, Reading, MA, 1962. Spath, H., Mathematical Algorithms for Linear Regression, Academic Press, English Edition, London, 1991.
© 2008 by Taylor & Francis Group, LLC
Chapter 7: DR_Lonebv
7.6
225
DR_Lonebv
/*------------------------------------------------------------------DR_Lonebv --------------------------------------------------------------------This program is a driver for the function LA_Lonebv(), which solves an overdetermined system of linear equations in the L-One norm subject to the constraints that each element of the solution vector "a" is bounded between -1 and 1; -1 <= a[j] <= 1,
j = 1, 2, ..., m
The overdetermined system has the form c*a = f "c" is a given real n by m matrix of rank k, k <= m <= n. "f" is a given real n vector. "a" is the solution m vector. This driver contains the 8 examples whose results are given in the text. -------------------------------------------------------------------*/ #include "DR_Defs.h" #include "LA_Prototypes.h" #define #define #define #define #define #define #define #define #define #define #define #define #define #define #define #define #define
NN_MM_ROWS N1 M1 N2 M2 N3 M3 N4 M4 N5 M5 N6 M6 N7 M7 N8 M8
© 2008 by Taylor & Francis Group, LLC
(NN_ROWS + MM_COLS) 4 2 5 3 6 3 7 3 8 4 10 5 25 10 8 4
226
Numerical Linear Approximation in C
void DR_Lonebv (void) { /*---------------------------------------Constant matrices/vectors ----------------------------------------*/ static tNumber_R c1init[N1][M1] = { { 0.0, -2.0 }, { 0.0, -4.0 }, { 1.0, 10.0 }, {-1.0, 15.0 } }; static tNumber_R { { 1.0, 2.0, {-1.0, -1.0, { 1.0, 3.0, { 0.0, 1.0, { 0.0, 0.0, };
c2init[N2][M2] =
static tNumber_R { { 0.0, -1.0, { 1.0, 3.0, { 1.0, 0.0, { 0.0, 0.0, {-1.0, 1.0, { 1.0, 1.0, };
c3init[N3][M3] =
0.0 0.0 0.0 0.0 1.0
0.0 -4.0 0.0 1.0 2.0 1.0
}, }, }, }, }
}, }, }, }, }, }
static tNumber_R c4init[N4][M4] = { { 1.0, 0.0, 1.0 }, { 1.0, 2.0, 2.0 }, { 1.0, 2.0, 0.0 }, { 1.0, 1.0, 0.0 }, { 1.0, 0.0, -1.0 }, { 1.0, 0.0, 0.0 }, { 1.0, 1.0, 1.0 } }; static tNumber_R c5init[N5][M5] = © 2008 by Taylor & Francis Group, LLC
Chapter 7: DR_Lonebv
227
{ { { { { { { { {
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0,
9.0, 4.0, 1.0, 0.0, 1.0, 4.0, 9.0, 16.0,
-27.0 -8.0 -1.0 0.0 1.0 8.0 27.0 64.0
}, }, }, }, }, }, }, }
}; static tNumber_R c6init[N6][M6] = { { 1.0, 0.0, 0.0, 0.0, 0.0 }, { 0.0, 1.0, 0.0, 0.0, 0.0 }, { 0.0, 0.0, 1.0, 0.0, 0.0 }, { 0.0, 0.0, 0.0, 1.0, 0.0 }, { 0.0, 0.0, 0.0, 0.0, 1.0 }, { 1.0, 1.0, 1.0, 1.0, 1.0 }, { 0.0, 1.0, 1.0, 1.0, 1.0 }, {-1.0, 0.0, -1.0, -1.0, -1.0 }, { 1.0, 1.0, 0.0, 1.0, 1.0 }, { 1.0, 1.0, 1.0, 0.0, 1.0 } }; static tNumber_R { { 1.0, 1.0, { 1.0, 2.0, { 1.0, 3.0, { 1.0, 4.0, { 1.0, 5.0, { 1.0, 6.0, { 1.0, 7.0, { 1.0, 8.0, };
c8init[N8][M8] = 1.0, 4.0, 9.0, 16.0, 25.0, 36.0, 49.0, 64.0,
1.0 4.0 9.0 16.0 25.0 36.0 49.0 64.0
static tNumber_R f1[N1+1] = { NIL, -12.0, 6.0, 0.0, 5.0 };
}, }, }, }, }, }, }, }
static tNumber_R f2[N2+1] = { NIL, 1.0, 2.0, 1.0, -3.0, 0.0 © 2008 by Taylor & Francis Group, LLC
228
Numerical Linear Approximation in C }; static tNumber_R f3[N3+1] = { NIL, 1.0, 2.0, 3.0, 2.0, 2.0, 4.0 }; static tNumber_R f4[N4+1] = { NIL, 0.0, -2.0, 1.0, -1.0, 5.0, 7.0, 0.0 }; static tNumber_R f5[N5+1] = { NIL, 3.0, -3.0, -2.0, 0.0, 7.0, -1.0, 5.0, 2.0 }; static tNumber_R f6[N6+1] = { NIL, 1.0, -1.0, 0.0, -1.0, 1.0, 0.0, 2.0, 3.0, -3.0, -2.0 }; static tNumber_R f7[N7+1] { NIL, 0.0872673, 0.0872794, 0.3491184, 0.3498802, 0.6111334, 0.6150641, 0.8733883, 0.8841621, 1.135895, 1.157550, };
= 0.0873029, 0.3513824, 0.6230824, 0.9071868, 1.206257,
0.0873315, 0.3532572, 0.6336395, 0.9400757, 1.283258,
0.0873576, 0.3550109, 0.6441493, 0.9766021, 1.384432
static tNumber_R f8[N8+1] = { NIL, 2.0, 2.5, 2.0, 6.5, 3.5, 4.5, 6.0, 7.0 }; /*---------------------------------------Variable matrices/vectors ----------------------------------------*/ tMatrix_R ct = alloc_Matrix_R (MM_COLS, NN_ROWS); tVector_R f = alloc_Vector_R (NN_ROWS); tVector_R rbv = alloc_Vector_R (NN_MM_ROWS); tVector_R a = alloc_Vector_R (MM_COLS); tMatrix_R binv = alloc_Matrix_R (MM_COLS, MM_COLS); tVector_R bv = alloc_Vector_R (MM_COLS); © 2008 by Taylor & Francis Group, LLC
Chapter 7: DR_Lonebv
229
tVector_R tVector_I tVector_I tMatrix_R
thbv icbas ibbv c7
= = = =
alloc_Vector_R alloc_Vector_I alloc_Vector_I alloc_Matrix_R
(NN_MM_ROWS); (MM_COLS); (NN_MM_ROWS); (N7, M7);
tMatrix_R tMatrix_R tMatrix_R tMatrix_R tMatrix_R tMatrix_R tMatrix_R
c1 c2 c3 c4 c5 c6 c8
= = = = = = =
init_Matrix_R init_Matrix_R init_Matrix_R init_Matrix_R init_Matrix_R init_Matrix_R init_Matrix_R
int int tNumber_R
iter; i, j, k, m, n, Iexmpl; d, dd, ddd, e, ee, eee, z;
eLaRc
rc = LaRcOk;
(&(c1init[0][0]), (&(c2init[0][0]), (&(c3init[0][0]), (&(c4init[0][0]), (&(c5init[0][0]), (&(c6init[0][0]), (&(c8init[0][0]),
N1, N2, N3, N4, N5, N6, N8,
prn_dr_bnr ("DR_Lonebv, Bounded L-One Solution of an " "Overdetermined System of Linear Equations"); z = 0.0; for (j = 1; j <= 5; j++) { d = 0.15* (j-3); dd = d*d; ddd = d*dd; for (i = 1; i <= 5; i++) { e = 0.15* (i-3); ee = e*e; eee = e*ee; k = 5* (j-1) + i; c7[k][1] = 1.0; c7[k][2] = d; c7[k][3] = e; c7[k][4] = dd; c7[k][5] = ee; c7[k][6] = e*d; c7[k][7] = ddd; c7[k][8] = eee; c7[k][9] = dd*e; c7[k][10] = ee*d; © 2008 by Taylor & Francis Group, LLC
M1); M2); M3); M4); M5); M6); M8);
230
Numerical Linear Approximation in C } } for (Iexmpl = 1; Iexmpl <= 8; Iexmpl++) { switch (Iexmpl) { case 1: n = N1; m = M1; for (i = 1; i <= n; i++) { f[i] = f1[i]; for (j = 1; j <= m; j++) ct[j][i] } break; case 2: n = N2; m = M2; for (i = 1; i <= n; i++) { f[i] = f2[i]; for (j = 1; j <= m; j++) ct[j][i] } break; case 3: n = N3; m = M3; for (i = 1; i <= n; i++) { f[i] = f3[i]; for (j = 1; j <= m; j++) ct[j][i] } break; case 4: n = N4; m = M4; for (i = 1; i <= n; i++) { f[i] = f4[i]; for (j = 1; j <= m; j++) ct[j][i] } break; case 5: n = N5;
© 2008 by Taylor & Francis Group, LLC
= c1[i][j];
= c2[i][j];
= c3[i][j];
= c4[i][j];
Chapter 7: DR_Lonebv m = M5; for (i = 1; i <= n; i++) { f[i] = f5[i]; for (j = 1; j <= m; j++) } break; case 6: n = N6; m = M6; for (i = 1; i <= n; i++) { f[i] = f6[i]; for (j = 1; j <= m; j++) } break; case 7: n = N7; m = M7; for (i = 1; i <= n; i++) { f[i] = f7[i]; for (j = 1; j <= m; j++) } break; case 8: n = N8; m = M8; for (i = 1; i <= n; i++) { f[i] = f8[i]; for (j = 1; j <= m; j++) } break; default: break; }
231
ct[j][i] = c5[i][j];
ct[j][i] = c6[i][j];
ct[j][i] = c7[i][j];
ct[j][i] = c8[i][j];
prn_algo_bnr ("Lonebv"); prn_example_delim(); PRN ("Example #%d: Size of matrix \"c\" %d by %d\n", Iexmpl, n, m); prn_example_delim(); PRN ("Bounded L-One Solution of an Overdetermined " "Equations\n"); © 2008 by Taylor & Francis Group, LLC
232
Numerical Linear Approximation in C prn_example_delim(); PRN ("r.h.s. Vector \"f\"\n"); prn_Vector_R (f, n); PRN ("Transpose of Coefficient Matrix, \"ct\"\n"); prn_Matrix_R (ct, m, n); rc = LA_Lonebv (m, n, ct, f, icbas, binv, bv, ibbv, thbv, &iter, rbv, a, &z); if (rc >= LaRcOk) { PRN ("\n"); PRN ("Results of the Bounded L-One Solution\n"); PRN ("Bounded L-One solution vector \"a\"\n"); prn_Vector_R (a, m); PRN ("Bounded L-One residual vector \"r\"\n"); prn_Vector_R (rbv, n); PRN ("Bounded L-One norm \"z\" = %8.4f\n", z); PRN ("No. of Iterations = %d\n", iter); } prn_la_rc (rc); } free_Matrix_R free_Vector_R free_Vector_R free_Vector_R free_Matrix_R free_Vector_R free_Vector_R free_Vector_I free_Vector_I free_Matrix_R
(ct, MM_COLS); (f); (rbv); (a); (binv, MM_COLS); (bv); (thbv); (icbas); (ibbv); (c7, N7);
uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R
(c1); (c2); (c3); (c4); (c5); (c6); (c8);
}
© 2008 by Taylor & Francis Group, LLC
Chapter 7: LA_Lonebv
7.7
233
LA_Lonebv
/*------------------------------------------------------------------LA_Lonebv --------------------------------------------------------------------This program calculates the L-One solution of an overdetermined system of linear equations subject to the conditions that the elements of the solution vector be bounded between -1 and +1. This program uses a modified simplex method to the linear programming formulation of the problem. In this method certain intermediate simplex iterations are skipped. The system of linear equations has the form c*a = f "c" is a given real n by m matrix of rank k, k <= m <= n. "f" is a given real n vector. The problem is to calculate the elements of the solution vector "a" that minimizes the L-One norm z z = |rbv[1]| + |rbv[2]| + ... + |rbv[n]| subject to the constraints -1 <= a[j] <= 1,
j = 1, 2, ..., m
rbv[i] is the ith residual and is given by rbv[i] = c[i][1]*a[1] + c[i][2]*a[2] + ... + c[i][m]*a[m] - f[i], i = 1, 2, ..., n Inputs m Number of columns of matrix "c" of the system c*a = f. n Number of rows of matrix "c" of the system c*a = f. ct A real m by n matrix containing the transpose of matrix "c" of the system c*a = f. f A real n vector containing the r.h.s. of the system c*a = f. Other Parameters binv A real m square matrix containing the inverse of the basis matrix in the linear programming problem.
© 2008 by Taylor & Francis Group, LLC
234
Numerical Linear Approximation in C
bv
A real m vector containing the basic solution in the linear programming problem. An (n + m) vector containing the ratios
thbv
thbv[j] = rbv[j]/ct[iout][j]
ibbv
icbas
"iout" corresponds to the basic vector that leaves the basis. A sign (n + m) vector. Its first n elements have the values 1 or -1. ibbv[j] = 1 indicates that column j of matrix "ct" is in the basis or at its lower bound 0. ibbv[j] = -1 indicates that column "j" is at its upper bound 2. An integer m vector containing the indices of the columns of "ct" forming the basis matrix.
Outputs iter Number of iterations, or the number of times the simplex tableau is changed by a Gauss-Jordan step. a A real m vector containing the bounded L-One solution of the system c*a = f. rbv An (n + m) vector. Its first n elements are the bounded L-One residual vector r = c*a - f. z The minimum bounded L-One norm of the residuals "rbv". Returns one of LaRcSolutionFound LaRcNoFeasibleSolution LaRcErrBounds LaRcErrNullPtr -------------------------------------------------------------------*/ #include "LA_Prototypes.h" eLaRc LA_Lonebv (int m, int n, tMatrix_R ct, tVector_R f, tVector_I icbas, tMatrix_R binv, tVector_R bv, tVector_I ibbv, tVector_R thbv, int *pIter, tVector_R rbv, tVector_R a, tNumber_R *pZ) { int ij = 0, kk = 0, n1 = 0, nm = 0; int iout = 0, jout = 0, jin = 0, ivo = 0, itest = 0; tNumber_R pivot = 0.0, pivoto = 0.0, tpeps = 0.0, xb = 0.0; /* Validation of the data before executing the algorithm */ © 2008 by Taylor & Francis Group, LLC
Chapter 7: LA_Lonebv
235
eLaRc rc = LaRcSolutionFound; VALIDATE_BOUNDS ((0 < m) && (m <= n) && !((n == 1) && (m == 1))); VALIDATE_PTRS (ct && f && icbas && binv && bv && ibbv && thbv && pIter && rbv && a && pZ); nm = n + m; n1 = n + 1; tpeps = 2.0 + EPS; *pIter = 0; /* Part 1 of the algorithm. Obtain a feasible basic solution */ LA_lonebv_part_1 (m, n, ct, f, icbas, binv, bv, ibbv, rbv); /* Part 2 of the algorithm */ for (kk = 1; kk <= n*n; kk++) { ivo = 0; xb = 0.0; /* Determine the vector that leaves the basis */ LA_lonebv_vleav (&ivo, &iout, &xb, m, n, icbas, bv); /* Calculate the results */ if (ivo == 0) { LA_lonebv_res (m, n, f, icbas, binv, rbv, a, pZ); GOTO_CLEANUP_RC (LaRcSolutionFound); } jout = icbas[iout]; /* Calculate the possible ratios thbv[j] */ LA_lonebv_thbv (ivo, iout, m, n, ct, icbas, binv, ibbv, thbv, rbv); pivoto = 1.0; itest = 0; for (ij = 1; ij <= n*n; ij++) { /* Determine the vector that enters the basis */ LA_lonebv_vent (ivo, &jin, &itest, m, n, thbv); /* Solution is not feasible */ if (itest != 1) © 2008 by Taylor & Francis Group, LLC
236
Numerical Linear Approximation in C { GOTO_CLEANUP_RC (LaRcNoFeasibleSolution); } /* Update the solution vector "bv" */ LA_lonebv_update_bv (iout, jout, jin, &pivot, pivoto, &xb, m, n, ct, binv, bv, ibbv); if ((jin > n && xb > -EPS) || (xb >= -EPS && xb<= tpeps)) { thbv[jin] = 0.0; } else { itest = 0; thbv[jin] = 0.0; } icbas[iout] = jin; if (itest == 1) break; pivoto = pivot; jout = jin; } /* A Gauss-Jordan elimination step */ LA_lonebv_gauss_jordn (jin, iout, m, n, ct, icbas, binv, bv, ibbv, rbv); *pIter = *pIter + 1; }
CLEANUP: return rc; } /*------------------------------------------------------------------Part 1; Initialization for LA_Lonebv() -------------------------------------------------------------------*/ void LA_lonebv_part_1 (int m, int n, tMatrix_R ct, tVector_R f, tVector_I icbas, tMatrix_R binv, tVector_R bv, tVector_I ibbv, tVector_R rbv) { int i, j, k; tNumber_R s, sa; /* Part 1 of the algorithm. Obtain a feasible basic solution */ © 2008 by Taylor & Francis Group, LLC
Chapter 7: LA_Lonebv
237
for (j = 1; j <= m; j++) { k = n + j; icbas[j] = k; ibbv[k] = -1; rbv[k] = 0.0; for (i = 1; i <= m; i++) binv[i][j] = 0.0; binv[j][j] = 1.0; } /* Calculate the residuals (marginal costs) */ for (j = 1; j <= n; j++) { ibbv[j] = 1; s = - f[j]; for (i = 1; i <= m; i++) s = s - ct[i][j]; rbv[j] = s; if (s < -EPS) ibbv[j] = -1; } /* Calculate the initial basic solution */ for (i = 1; i <= m; i++) { s = 0.0; for (j = 1; j <= n; j++) { sa = ct[i][j]; if (ibbv[j] == -1) sa = -sa; s = s + sa; } bv[i] = s; } } /*------------------------------------------------------------------Determine the vector that leaves the basis in LA_Lonebv() -------------------------------------------------------------------*/ void LA_lonebv_vleav (int *pIvo, int *pIout, tNumber_R *pXb, int m, int n, tVector_I icbas, tVector_R bv) { int i, k; tNumber_R e, d, g, tpeps; tpeps = 2.0 + EPS; g = 1.0; © 2008 by Taylor & Francis Group, LLC
238
Numerical Linear Approximation in C for (i = 1; i <= m; i++) { e = bv[i]; k = icbas[i]; if (e < -EPS) { d = e; if (d < g) { *pIvo = -1; g = d; *pIout = i; *pXb = e; } } if (k <= n && e > tpeps) { d = 2.0 - e; if (d < g) { *pIvo = 1; g = d; *pIout = i; *pXb = e; } } }
} /*------------------------------------------------------------------Calculate the parameters of vector "thbv" in LA_Lonebv() -------------------------------------------------------------------*/ void LA_lonebv_thbv (int ivo, int iout, int m, int n, tMatrix_R ct, tVector_I icbas, tMatrix_R binv, tVector_I ibbv, tVector_R thbv, tVector_R rbv) { int i, j, ii, i0 = 0, k, nm; tNumber_R e = 0, d, gg, thmax; nm = n + m; thmax = 0.0; for (j = 1; j <= nm; j++) { thbv[j] = 0.0; © 2008 by Taylor & Francis Group, LLC
Chapter 7: LA_Lonebv
239
ii = 0; for (i = 1; i <= m; i++) { if (j == icbas[i]) { ii = 1; i0 = i; } } if (ii == 1) { if (j > n && i0 == iout) { thbv[j] = -2.0; rbv[j] = 2.0; ibbv[j] = -ibbv[j]; gg = thbv[j]; if (gg < 0.0) gg = -gg; if (gg > thmax) thmax = gg; } } else if (ii == 0) { if (j <= n) e = ct[iout][j]; if (j > n) { k = j - n; d = -1.0; if (ibbv[j] == -1) d = -d; e = d * binv[iout][k]; if ((ivo == -1 && e > EPS) || (ivo == 1 && e < -EPS)) { e = -e; rbv[j] = 2.0 - rbv[j]; ibbv[j] = -ibbv[j]; } } if (fabs (e) > EPS) { d = rbv[j]; if (fabs (d) < PREC) d = PREC * ibbv[j]; if (fabs (d) < PREC && j > n) d = PREC; thbv[j] = d/e; } © 2008 by Taylor & Francis Group, LLC
240
Numerical Linear Approximation in C } }
} /*------------------------------------------------------------------Determine the vector that enters the basis in LA_Lonebv() -------------------------------------------------------------------*/ void LA_lonebv_vent (int ivo, int *pJin, int *pItest, int m, int n, tVector_R thbv) { int j, nm; tNumber_R e, d, thmax, thmin; nm = n + m; thmax = 1.0/EPS; thmin = -thmax; for (j = 1; j <= nm; j++) { e = thbv[j]; d = e * ivo; if (d > 0.0) { if (ivo == -1) { if (e > thmin) { thmin = e; *pJin = j; *pItest = 1; } } if (ivo == 1) { if (e < thmax) { thmax = e; *pJin = j; *pItest = 1; } } } } } /*------------------------------------------------------------------© 2008 by Taylor & Francis Group, LLC
Chapter 7: LA_Lonebv
241
Update the basic solution bv in LA_Lonebv() -------------------------------------------------------------------*/ void LA_lonebv_update_bv (int iout, int jout, int jin, tNumber_R *pPivot, tNumber_R pivoto, tNumber_R *pXb, int m, int n, tMatrix_R ct, tMatrix_R binv, tVector_R bv, tVector_I ibbv) { int i, kin, nm; tNumber_R e, pivotn, tpeps; nm = n + m; tpeps = 2.0 + EPS; if (jin > n) { kin = jin - n; e = -1.0; if (ibbv[jin] == -1) e = -e; *pPivot = e * (binv[iout][kin]); } if (jin <= n) *pPivot = ct[iout][jin]; pivotn = *pPivot/pivoto; if ((jin > n && *pXb > -EPS) || *pXb > tpeps) { if (jout <= n) { for (i = 1; i <= m; i++) { bv[i] = bv[i] - ct[i][jout] - ct[i][jout]; } ibbv[jout] = -1; if (pivotn <= 0.0) { for (i = 1; i <= m; i++) { bv[i] = bv[i] + ct[i][jin] + ct[i][jin]; } ibbv[jin] = 1; } } } else if (pivotn > 0.0) { for (i = 1; i <= m; i++) { © 2008 by Taylor & Francis Group, LLC
242
Numerical Linear Approximation in C bv[i] = bv[i] + ct[i][jin] + ct[i][jin]; } ibbv[jin] = 1; } *pXb = bv[iout]/ (*pPivot);
} /*------------------------------------------------------------------A Gauss-Jordan elimination step in LA_Lonebv() -------------------------------------------------------------------*/ void LA_lonebv_gauss_jordn (int jin, int iout, int m, int n, tMatrix_R ct, tVector_I icbas, tMatrix_R binv, tVector_R bv, tVector_I ibbv, tVector_R rbv) { tNumber_R pivot, e = 0, d; int i, ic = 0, j, k, kin = 0, n1, nm; nm = n + m; n1 = n + 1; pivot = ct[iout][jin]; if (jin > n) { kin = jin - n; e = -1.0; if (ibbv[jin] == -1) e = -e; pivot = e * binv[iout][kin]; } for (j = 1; j <= n; j++) ct[iout][j] = ct[iout][j]/pivot; for (j = 1; j <= m; j++) binv[iout][j] = binv[iout][j]/pivot; bv[iout] = bv[iout]/pivot; for (i = 1; i <= m; i++) { if (i != iout) { d = ct[i][jin]; if (jin > n) d = e * binv[i][kin]; for (j = 1; j <= n; j++) { ct[i][j] = ct[i][j] - d * ct[iout][j]; } for (j = 1; j <= m; j++) { binv[i][j] = binv[i][j] - d * binv[iout][j]; } bv[i] = bv[i] - d * bv[iout]; © 2008 by Taylor & Francis Group, LLC
Chapter 7: LA_Lonebv
243
} } d = rbv[jin]; for (j = 1; j <= n; j++) { rbv[j] = rbv[j] - d * ct[iout][j]; } for (j = n1; j <= nm; j++) { e = -1.0; if (ibbv[j] == -1) e = -e; k = j - n; rbv[j] = rbv[j] - e * d * binv[iout][k]; } for (j = 1; j <= n; j++) { for (i = 1; i <= m; i++) { ic = 0; if (j == icbas[i]) ic = 1; } if (ic == 0) { d = rbv[j] * ibbv[j]; if (d >= 0.0) break; rbv[j] = 0.0; } } } /*------------------------------------------------------------------Calculate the results of LA_Lonebv() -------------------------------------------------------------------*/ void LA_lonebv_res (int m, int n, tVector_R f, tVector_I icbas, tMatrix_R binv, tVector_R rbv, tVector_R a, tNumber_R *pZ) { tNumber_R e, s; int i, j, k; for (j = 1; j <= m; j++) { s = 0.0; for (i = 1; i <= m; i++) { k = icbas[i]; © 2008 by Taylor & Francis Group, LLC
244
Numerical Linear Approximation in C e = -1.0; if (k <= n) e = f[k]; s = s + e * binv[i][j]; } a[j] = s; } s = 0.0; for (j = 1; j <= n; j++) s = s + fabs (rbv[j]); *pZ = s;
}
© 2008 by Taylor & Francis Group, LLC
245
Chapter 8 L1 Polygonal Approximation of Plane Curves
8.1
Introduction
Polygonal approximation of a given plane curve is done by first digitizing the given curve into discrete points and then approximating the digitized curve by a polygon of connected straight lines. The points at which the lines join are usually, but not necessarily, a subset of the digitized points of the curve. There are certain particulars about polygonal approximation that are summarized in the following sections. 8.1.1
Two basic issues
There are two basic issues associated with piecewise approximation in general, including polygonal approximation: (1) Find a polygonal approximation of the given curve such that the measured residual or error norm between the discrete points of the curve and the straight line in each segment does not exceed a specified tolerance ε. (2) Given a specified number of segments, find the polygonal approximation of the given curve such that the error norms in all the segments are nearly equal. This is known as the near balances norm approximation. There might be a third issue, which is: (3) Find a polygonal approximation of the given curve such that the error norms in all the segments are nearly equal and that each does not exceed a specified tolerance ε. This outcome could be achieved by assuming a number of segments n, and calculating the polygonal approximation for near balanced © 2008 by Taylor & Francis Group, LLC
246
Numerical Linear Approximation in C
norms. If the calculated norm is > ε, increase n by 1 at a time and the process is repeated until the obtained norm is ≤ ε. If the calculated norm is < ε, the process is reversed by decreasing n by 1, and repeated until the calculated norm starts to be > ε. 8.1.2
Approaches for polygonal approximation
In the literature, a large number of polygonal approximation methods have been proposed. The various approaches of these algorithms can be classified into categories such as: (a) Sequential or scan-along approach: This approach is utilized by Kurozumi and Davis [18], Sklansky and Gonzalez [35] and by Wall and Danielsson [39] as well as the approach of our algorithm [2]. In the sequential approach, one starts at a point Pa, and scans sequentially along the following points of the digitized curve. The scan stops at point Pb, when the norm of the errors between the intermediate points and the line joining Pa and Pb begins to exceed a specified tolerance ε. The method is repeated by starting at point Pb and so on. (b) Split approach: This approach has been used by Ramer [25] and by Duda and Hart [9]. In this approach, the digitized points of a curve segment is approximated by a straight line connecting the first and the last points of the segment. If the calculated error norm is larger than the specified value ε, the curve segment is broken into two segments at the curve point most distant from the straight line segment and the new error norm is calculated. This process is repeated until each curve segment is approximated by a straight line connecting its end points and the error norm for each segment is ≤ the specified value ε. (c) Split or merge approach: This is demonstrated by Pavlidis [21] and by Ansari and Delp [4]. In this approach, one starts from an initial segmentation of the given digitized curve and calculates the error norms between the given points and the straight lines. The segmentation is then split or merged by adjusting segment end points until the error norms are driven under the pre-specified bound.
© 2008 by Taylor & Francis Group, LLC
Chapter 8: L1 Polygonal Approximation of Plane Curves
(d)
(e)
(f)
247
Dominant point or significant vertices detection approach: This is explained by Teh and Chin [37], Ansari and Delp [4], and by Ray and Ray [27]. See also Sarkar [33] for polygonal approximation of chain-coded curves. This method extracts the most significant points in the curve before approximating it. These are the peak points, positive maximum and negative minimum curvature points. The method then partitions the curves between the dominant points. Hamann and Chen [13] presented a method for selecting a subset of points from the finite set of curve points. The selected subset is based on assigning weights to all the initial data points. Only the most significant points are used in the piecewise approximation. Sato [34] presented a point choice function that relates the original points and the chosen points. Sarkar et al. [32] presented a genetic algorithm based approach to locate a specified number of significant points. Johnson and Vogt [16] presented a geometric method for polygonal approximation of convex arcs. The method tends to produce many points where the curvature is high and few points where the curvature is low. Pei and Lin [22] introduced a technique for detecting the dominant points on a digital curve by a scale-space filtering with a Gaussian kernel. K-means based approach: This method was elaborated upon by Yin [41]. In this approach, the digitized points are first partitioned into k connected arbitrary clusters and their principal axes are computed. The perpendicular distances from the points in the cluster to the principal axis of the cluster are calculated. Points are reassigned to a neighboring cluster if they have shorter distances to the principal axis of the neighboring cluster. This process is repeated until the points of each cluster have the shortest distances to the principal axis of their own cluster. Genetic algorithms approach, particularly for closed digitized curves: Such algorithms are described by Yin [42], Huang and Sun [15], Sun and Huang [36] and by Ho and Chen [14]. A disadvantage of all the methods outlined in (a) to (e) above is that the final polygonal approximation results depend on the selection of the initial points and the arbitrary initial solution.
© 2008 by Taylor & Francis Group, LLC
248
Numerical Linear Approximation in C
See for example, Kolesnikov and Franti [17]. Genetic algorithms are supposed to rectify this disadvantage. Genetic algorithms can be viewed as stochastic search algorithms based on natural genetic systems, by stimulating the biological model of evolution. These algorithms simulate the survival of the fittest elements, which reproduce and then compete with each other. That produces an optimal or near optimal solution in a search space for optimization problems. See for example, Golden [11] and Michalewicz [20]. Yin [42] presented three genetic algorithms for polygonal approximation of digital curves. The first one minimizes the number of sides of the polygon such that the approximation error norms, each does not exceed a specified value. The second one minimizes the error norm for a given number of sides. The third one determines the approximating polygon automatically without any given condition. 8.1.3
Other unique approaches
There are other unique approaches to polygonal approximations of plane curves. (a) Lopes et al. [19] considered the problem of computing a polygonal approximation for a plane curve presented implicitly by a given function. A robust and adaptive polygonal approximation was proposed. By robust they mean that the algorithm captures both the topology and the geometry of the curve. By adaptive, they mean that the polygonal approximation is adapted to the geometry of the curve; having longer edges for the flat parts of the curve where the curvature is low, and shorter edges where the curvature is high. (b) Badi’i and Peikari [6] presented a linear functional approximation of planar curves based on adaptive segmentation procedure. Their method alleviates the need for specifying the number of segments and minimizing the error norm. (c) Rannou and Gregor [26] presented a polygonal approximation algorithm for closed contours with a unique feature. Their algorithm creates polygons whose edges are all of the same
© 2008 by Taylor & Francis Group, LLC
Chapter 8: L1 Polygonal Approximation of Plane Curves
249
length. Then a one-dimensional description of the given shape is possible by using the interior angles between the approximating lines. 8.1.4
Criteria by which error norm is chosen
Polygonal approximation may also be classified according to the criterion by which the error norm ε between the straight line and the digitized points of a segment is chosen. (1) The mean square (the L2) error norm was used by Cantoni [7], Pavlidis [21], Salotti [31] and Sarkar et al. [32]. (2) The uniform or the Chebyshev norm was used by Ramer [25], Tomek [38], Williams [40], Kurozumi and Davis [18], Dunham [10], Sklansky and Gonzalez [35] and Zhu and Seneviratne [43]. (3) The L1 error norm was used by a number of authors. Johnson and Vogt [16] used a geometric method for polygonal approximation of a planar convex curve. Their method minimizes the area between the curve and the polygonal arc. This method is a kind of an L1 piecewise approximation method. Ray and Ray [28] presented a technique that uses the L1 error norm criterion. Wall and Danielsson [39] also used the area deviation for each line segment, which is itself the L1 error norm criterion. We shall comment in Section 8.5 on these 3 algorithms that also use the L1 error norm criterion. (4) The city block metric criterion was used by Pikaz and Dinstein [24] to define the maximal distance of a point (x, y) between the line and the curve as |x| + |y|. That is, the sum of the sides of the triangle whose diagonal is the perpendicular distance between the point (x, y) and the approximating line. 8.1.5
Direction of error measure
One more issue by which the errors between the digitized points and the straight line approximating them are measured is considered: (a) The error may be measured in the direction perpendicular to the approximating line, known as the Euclidean distance, such as in the case of Pavlidis [21], Sarkar et al. [32], Ramer [25],
© 2008 by Taylor & Francis Group, LLC
250
(b)
8.1.6
Numerical Linear Approximation in C
Williams [40], Kurozumi and Davis [18], Sklansky and Gonzalez [35], Dunham [10] and Zhu and Seneviratne [43], or The error may be measured in the direction along the y-axis, such as in the case of Cantoni [7], Tomek [38] and our technique. Comparison and stability of polygonal approximations
Arkin et al. [5] developed a method for comparing two polygons, one is stored as a model for a particular object and the other as an approximate one of the object. The method is based on the L2 distance between certain functions of the two polygons. Rosin [30] developed several measures to assess the stability of polygonal approximation algorithms under perturbation of the given data and changes of the algorithms scale parameters. We present here an algorithm for obtaining a polygonal approximation in the L1 norm of a plane curve of an arbitrary shape [2]. In the presence of spikes and wild points in a given set of discrete points, the L1 approximation is usually recommended over other norms to approximate this set of points. See Section 2.3, Rice and White [29] and Abdelmalek [3]. In the current algorithm, the L1 (error) norm in any segment is not to exceed a pre-assigned value. The errors between the discrete points and the approximating lines are measured along the y-axis direction. We shall discuss this point in Section 8.5. The given curve may be an open curve, as in Figure 8-1 or a closed one, as in Figure 8-2. The given curve is first digitized into a number of discrete points, not necessarily at equal distances. The algorithm is then applied to the discrete points. The vertices of the polygon are a subset of the points of the given digitized curve. In other words, the approximating straight line to any segment of the given curve passes through the two end points of the segment. The algorithm uses linear programming techniques [12], making it more efficient than other recorded methods. In Section 8.2, the L1 approximation problem is outlined. In Section 8.3, the algorithm is described. In Section 8.4, the linear programming techniques are implemented. In Section 8.5, numerical
© 2008 by Taylor & Francis Group, LLC
Chapter 8: L1 Polygonal Approximation of Plane Curves
251
results and also comments on related methods to ours for calculating the polygonal approximation of plane curves in the L1 norm are given. 8.1.7
Applications of the algorithm
Piecewise approximation of plane curves, including polygonal approximation, represent the data in waveforms and the boundaries of digital images. This facilitates feature extraction, shape recognition [8] and pattern classification of the given waveforms or images [21]. Besides, polygonal representation of a digital curve reduces the amount of data needed to process the given shape. Polygonal approximation is also used in map simplification in cartography, pre-processing in electrocardiography and in electroencephalography, transient analysis of speech signals, and other applications. For this see the references in Perez and Vidal [23]. 8.2
The L1 approximation problem
Let us be given a two-dimensional open or closed curve. Let this curve be digitized and let the digitized set be ordered at K consecutive points. The first and the last points coincide for a closed curve. The digitized points may not necessarily be equally spaced. Let n be the number of pieces (segments) in the approximation and let (zj), j = 1, 2, …, n, be the L1 residual (error) norms for the n pieces. The number of pieces n is not known beforehand. Consider any segment j, 1 ≤ j ≤ n of the given curve. Let this segment consist of N digitized points, with coordinates (xi, f(xi)), i = 1, 2, …, N, N ≥ 2. Let the N points be approximated by the straight line y = a1 + a2x
(8.2.1)
that minimizes the L1 norm zj (or briefly z), of the residuals ri N
(8.2.2a)
z =
∑
ri
i=1
where (8.2.2b)
ri = a1 + a2 xi – fi, i = 1, 2, …, N
© 2008 by Taylor & Francis Group, LLC
252
Numerical Linear Approximation in C
and fi denotes f(xi). It is known that this problem reduces to the problem of obtaining the L1 solution of an overdetermined system of linear equations. See Chapter 2. In vector-matrix notation, the linear system is (8.2.3)
Ca = f
C is the coefficient matrix in (8.2.3), vector a = (a1, a2)T and vector f = (f1, f2, …, fN)T. Or we have 1 x1
f1
1 x2
f2
. . . 1
. a1 = . . a2 . . . xN fN
The L1 solution to system (8.2.3) is the real vector a that minimizes the norm z given by (8.2.2a). As indicated above, the approximating straight line that approximates segment j passes through the end points; points 1 and N of segment j. 8.3
Description of the algorithm
Let the piecewise approximation for any segment j, 1 ≤ j ≤ n, be such that its residual or error norm zj ≤ ε, a pre-assigned value. This algorithm employs the following steps: (1) Take j = 1 (the first segment). (2) Take N = 2, which is the minimum initial number of points for segment j to be approximated by the straight line (8.2.1). Calculate the solution vector (a1, a2)T for the line that passes through the 2 points. The norm z = 0, since it corresponds to a perfect fit to the 2 points. Go to step (3). In case points 1 and N lie on a vertical or a nearly vertical line, a2 (the slope of the line) would be a very large number and the computation may break down. In this case, increase the number of points N by 1, as many times as necessary, until the end points 1 and N are not on a vertical or a nearly vertical line.
© 2008 by Taylor & Francis Group, LLC
Chapter 8: L1 Polygonal Approximation of Plane Curves
(3)
(4)
253
Add the point (N + 1) of the curve to segment j and calculate the new solution (a1, a2) for the straight line that passes through the points 1 and (N + 1). Calculate z from (8.2.2a). Take N = N + 1. If z < ε, go to step (3). If z > ε, take N = N – 1, which is the final number of points for segment j, with the solution (a1, a2) as the required L1 solution for segment j. Take j = j + 1 and go to step (2) for segment (j + 1).
Since this algorithm is for a polygonal approximation, the last end point of segment j is the first end point of segment (j + 1). 8.4
Linear programming technique
Implementing linear programming techniques enables us to calculate, in an efficient way, the norm z each time we add a new point to the N points of segment j. Let CT form the initial data of a linear programming problem and let f be the cost vector to the initial data. C and f are those of equation (8.2.3). For N ≥ 2, let a 2 by 2 unit matrix I2 be the initial basis matrix B. Let B be an extension to the initial data, given by Tableau 8.4.1. Hence initially, B–1 = I2. As the tableau is updated, B–1 is also updated. Tableau 8.4.1 (Initial Data) –––––––––––––––––––––– f1 f2 f3 … fN –––––––––––––––––––––– 1 1 1 … 1 x2 x3 … xN x1 –––––––––––––––––––––– We update Tableau 8.4.1 by applying two Gauss-Jordan elimination steps, which reduce columns 1 and N of CT to the first and second columns of I2 respectively. We get Tableau 8.4.2. If points 1 and N lie on a vertical or near vertical line, the pivot element for the second elimination step in Tableau 8.4.1 would be 0, or a very small number, and the computation would break down (Lemma 8.3). In this case, as indicated earlier, we increase the number
© 2008 by Taylor & Francis Group, LLC
254
Numerical Linear Approximation in C
of points N by 1, more than once if necessary, until the pivot element for the added point is greater than a specified tolerance. Tableau 8.4.2 –––––––––––––––––––––– f1 f2 f3 … fN –––––––––––––––––––––– 1 y12 y13 … 0 0 y22 y23 … 1 –––––––––––––––––––––– 0 c2–f2 c3–f3 0 In Tableau 8.4.2, vectors (yi), i = 1, 2, …, N, are given by (8.4.1)
yi = B
–1
1 , i = 2, 3, …, N – 1 xi
The marginal costs in Tableau 8.4.2 are given by (8.4.2)
ci – fi = f1y1i + fNy2i – fi, i = 2, 3, …, N
Lemma 8.1 The marginal costs denoted here by (ci – fi), i = 1, 2, …, N, in any tableau are themselves the residuals, given by (8.2.2b) ci – fi = ri, i = 1, 2, …, N and since r1 = rN = 0, the L1 norm z of (8.2.2a) is N–1
(8.4.3)
z =
∑
ci – fi
i=2
See Theorem 5.3 and also [1]. Lemma 8.2 The solution vector a of the interpolating straight line is given by (8.4.4)
(a1, a2) = (f1, fN) B–1
See Theorem 5.4 and [1].
© 2008 by Taylor & Francis Group, LLC
Chapter 8: L1 Polygonal Approximation of Plane Curves
255
Lemma 8.3 If a new added point (N + 1) lies on a vertical line with point 1 of the segment at hand, then the two points would have the same x-coordinate, and we would have yN+1 = (1, 0)T. The next elimination step breaks down, since the pivot element is 0. In view of this lemma, the users of the software should ensure that no two points of the digitized curve have the same x-coordinate. 8.4.1
The algorithm using linear programming
The following steps are employed: Take j = 1 (the first segment). Take N = 2. Form Tableau 8.4.1 and Tableau 8.4.2 for the two points. For N = 2, z = 0. Go to step (3). (3) Add the point (N + 1) of the curve to segment j. From (8.4.1) and (8.4.2) calculate the vector yN+1 from (8.4.1) and the marginal cost (cN+1 – fN+1) from (8.4.2). Append yN+1, fN+1 and (cN+1 – fN+1) to the right of Tableau 8.4.2. Apply an elimination step, which reduces vector yN+1 to the second column of the unit matrix I2 and also reduces (cN+1 – fN+1) to 0. Take N = N + 1. Go to step (4). (4) Calculate the norm z from (8.4.3). If z < ε, the pre-assigned norm, go to step (3). If z > ε, take N = N – 1, which is the final number of points for segment j. From (8.4.4) calculate the vector (a1, a2)T as the L1 solution for segment j. Take j = j + 1 and go to step (2) for segment (j + 1). (1) (2)
The solution vector a is calculated only once; i.e., at the end of step (4). 8.5
Numerical results and comments
LA_L1pol() calculates the number of segments in the polygonal approximation in the L1 error norm, the end points and the parameters (a1 and a2) of each straight line. It also calculates the residuals (errors) at all the points of the digitized curve. From the end points of the n segments or from the parameters of the straight lines (a1, a2), we draw the polygonal approximation. © 2008 by Taylor & Francis Group, LLC
256
Numerical Linear Approximation in C
DR_L1pol() tests 4 examples, 2 of which are presented here. The first example is with unequal x-intervals. The second example is a closed figure and its data is not equally spaced. 4 3.5 3 2.5 2 1.5 1 0.5 0 0
10
20
30
40
50
60
Figure 8-1: L1 polygonal approximation for a waveform. The L1 norm in any segment is ≤ 0.6
2.5 2 1.5 1 0.5 0 0
5
10
15
20
25
30
Figure 8-2: L1 polygonal approximation for a contour. The L1 norm in any segment is ≤ 0.4
© 2008 by Taylor & Francis Group, LLC
Chapter 8: L1 Polygonal Approximation of Plane Curves
257
In calculating the polygonal approximation of plane curves in the Chebyshev norm, Dunham [10], Kurozumi and Davis [18], Ramer [25], Williams [40] and Zhu and Seneviratne [43] defined the errors as the Euclidean distances between the given points and the approximating line, i.e., perpendicular to the approximating line in each segment. This requirement is not needed in the L1 polygonal approximation such as in our algorithm. The reason is that the L1 error norm for a digitized curve segment is approximately proportional to the area between the segment and the approximating straight line. This area is approximately the same when the residuals are measured along the y-axis or when the error is measured perpendicular to the approximating straight line using the straight line as the base, as in Ray and Ray [28] and in Wall and Danielsson [39]. We now comment on 3 of the algorithms that deal with the same problem as ours. These are the algorithms of Ray and Ray [28], Wall and Danielsson [39] and Johnson and Vogt [16]. The number of arithmetic operations in the method of Ray and Ray [28] including the calculation of square roots, is much higher than ours. The method of Wall and Danielsson [39] encounters some difficulties when the curvature of the given curve changes sign. Once more, Johnson and Vogt [16] use a geometric method that is a generalization of a method that minimizes the area between the curve and the polygonal arc. This method is a kind of an L1 piecewise approximation method. In contrast to our algorithm, in their method the given curve has to be a convex one.
References 1. 2. 3.
Abdelmalek, N.N., On the discrete linear L1 approximation and L1 solutions of overdetermined linear equations, Journal of Approximation Theory, 11(1974)38-53. Abdelmalek, N.N., Polygonal approximation of planar curves in the L1 norm, International Journal of Systems Science, 17(1986)1601-1608. Abdelmalek, N.N., Noise filtering in digital images and approximation theory, Pattern Recognition, 19(1986)417-424.
© 2008 by Taylor & Francis Group, LLC
258
4. 5.
6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. 17.
Numerical Linear Approximation in C
Ansari, N. and Delp, E.J., On detecting dominant points, Pattern Recognition, 24(1991)441-451. Arkin, E.M., Chew, L.P., Huttenlocher, D.P., Kedem, K. and Mitchell, J.S.B., An efficient computable metric for comparing polygonal shapes, IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(1991)209-215. Badi’i, F. and Peikari, B., Functional approximation of planar curves via adaptive segmentation, International Journal of Systems Science, 13(1982)667-674. Cantoni, A., Optimal curve fitting with piecewise linear functions, IEEE Transactions on Computers, 20(1971)59-67. Davis, L.S., Shape matching using relaxation techniques, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1(1979)60-72. Duda, R.O. and Hart, P.E., Pattern Classification and Scene Analysis, John Wiley & Sons, New York, 1973. Dunham, J.G., Optimum uniform piecewise linear approximation of planar curves, IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(1986)67-75. Golden, D.E., Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, MA, 1989. Hadley, G., Linear Programming, Addison-Wesley, Reading, MA, 1962. Hamann, B. and Chen, J-L., Data point selection for piecewise linear curve approximation, Computer Aided Geometric Design, 11(1994)289-301. Ho, S-Y. and Chen, Y-C., An efficient evolutionary algorithm for accurate polygonal approximation, Pattern Recognition, 34(2001)2305-2317. Huang, S-C. and Sun, Y-N., Polygonal approximation using genetic algorithms, Pattern Recognition, 32(1999)1409-1420. Johnson, H.H. and Vogt, A., A geometric method for approximating convex arcs, SIAM Journal on Applied Mathematics, 38(1980)317-325. Kolesnikov, A. and Franti, P., Polygonal Approximation of Closed Contours, Lecture Notes in Computer Science, 2749(2003)778-785.
© 2008 by Taylor & Francis Group, LLC
Chapter 8: L1 Polygonal Approximation of Plane Curves
18. 19. 20. 21. 22. 23. 24. 25. 26. 27. 28. 29. 30. 31.
259
Kurozumi, Y. and Davis, W.A., Polygonal approximation by the minimax method, Computer Graphics and Image Processing, 19(1982)248-264. Lopes, H., Oliveira, J.B. and de Figueiredo, L.H., Robust adaptive polygonal approximation of implicit curves, Computers and Graphics, 26(2002)841-852. Michalewicz, Z., Genetic Algorithms + Data Structures = Evolution Programs, Second Extended Edition, SpringerVerlag, New York, 1992. Pavlidis, T., Polygonal approximations by Newton’s method, IEEE Transactions on Computers, 26(1977)800-807. Pei, S-C. and Lin, C-N., The detection of dominant points on digital curves by scale-space filtering, Pattern Recognition, 25(1992)1307-1314. Perez, J-C. and Vidal, E., Optimum polygonal approximation of digitized curves, Pattern Recognition Letters, 15(1994)743750. Pikaz, A. and Dinstein, I.H., Optimal polygonal approximation of digital curves, Pattern Recognition, 28(1995)373379. Ramer, U., An iterative procedure for the polygonal approximation of plane curves, Computer Graphics and Image Processing, 1(1972)244-256. Rannou, F. and Gregor, J., Equilateral polygon approximation of closed contours, Pattern Recognition, 29(1996)1105-1115. Ray, B.K. and Ray, K.S., An algorithm for detection of dominant points and polygonal approximation of digitized curves, Pattern Recognition Letters, 13(1992)849-856. Ray, B.K. and Ray, K.S., Determination of optimal polygon from digital curve using L1 norm, Pattern Recognition, 26(1993)505-509. Rice, J.R. and White, J.S., Norms for smoothing and estimation, SIAM Review, 6(1964)243-256. Rosin, P.L., Assessing the behavior of polygonal approximation algorithms, Pattern Recognition, 36(2003)505-518. Salotti, M., Optimal polygonal approximation of digitized curves using the sum of square deviations criterion, Pattern Recognition, 35(2002)435-443.
© 2008 by Taylor & Francis Group, LLC
260
Numerical Linear Approximation in C
32.
Sarkar, B., Singh, L.K. and Sarkar, D., A genetic algorithmbased approach for detection of significant vertices for polygonal approximation of digital curves, International Journal of Image and Graphics, 4(2004)223-239. Sarkar, D., A simple algorithm for detection of significant vertices for polygonal approximation of chain-coded curves, Pattern Recognition Letters, 14(1993)959-964. Sato, Y., Piecewise linear approximation of plane curves by perimeter optimization, Pattern Recognition, 25(1992)15351543. Sklansky, J. and Gonzalez, V., Fast polygonal approximation of digitized curves, Pattern Recognition, 12(1980)327-331. Sun, Y-N. and Huang, S-C., Genetic algorithms for errorbounded polygonal approximation, International Journal of Pattern Recognition and Artificial Intelligence, 14(2000)297314. Teh, C-H. and Chin, R.T., On the detection of dominant points on digital curves, IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(1989)859-872. Tomek, I., Two algorithms for piecewise-linear continuous approximation of functions of one variable, IEEE Transactions on Computers, 23(1974)445-448. Wall, K. and Danielsson, P-E., A fast sequential method for polygonal approximation of digitized curves, Computer Vision, Graphics, and Image Processing, 28(1984)220-227. Williams, C.M., An efficient algorithm for the piecewise linear approximation of planar curves, Computer Graphics and Image Processing, 8(1978)286-293. Yin, P-Y., Algorithms for straight line fitting using k-means, Pattern Recognition Letters, 19(1998)31-41. Yin, P-Y., Genetic algorithms for polygonal approximation of digital curves, International Journal of Pattern Recognition and Artificial Intelligence, 13(1999)1061-1082. Zhu, Y. and Seneviratne, L.D., Optimal polygonal approximation of digitized curves, IEEE Proceedings on Vision, Image and Signal Processing, 144(1997)8-14.
33. 34. 35. 36.
37. 38. 39. 40. 41. 42. 43.
© 2008 by Taylor & Francis Group, LLC
Chapter 8: DR_L1pol
8.6
261
DR_L1pol
/*------------------------------------------------------------------DR_L1pol --------------------------------------------------------------------This is a driver for the function LA_L1pol(), which calculates the polygonal straight line approximation of a given data point set {x,y} that results from discretizing a given plane curve y = f(x). The approximation by LA_L1pol() is such that the approximating straight line interpolates the end points of the segment at hand and also such that the L-One residual norm for any segment not to exceed a pre-assigned tolerance denoted by "enorm". From an n data points (x,y) we form the overdetermined system of linear equations c*a = f "c" is a real n by 2 matrix. Each element of the first column of "c" is 1. The second column of "c" contains the x-coordinates of the set {x,y} arranged in a sequential order. "f" is a real n vector whose elements are the y coordinates of the given data set {x,y}. "a" is the solution 2 vector (for each segment there is a different solution vector "a"). This driver contains 4 test examples. The results of 2 of them are given in the text. -------------------------------------------------------------------*/ #include "DR_Defs.h" #include "LA_Prototypes.h" #define #define #define #define
Ncp Ndp Nep Ngp
28 44 49 64
void DR_L1pol (void) { /*----------------------------------------
© 2008 by Taylor & Francis Group, LLC
262
Numerical Linear Approximation in C Constant matrices/vectors ----------------------------------------*/ static tNumber_R fc[Ncp+1] = { NIL, 8.0, 11.0, 13.0, 11.2, 9.1, 10.8, 14.8, 16.0, 15.1, 14.0, 14.7, 15.8, 16.8, 15.6, 13.0, 14.3, 13.8, 10.6, 9.3, 9.6, 10.8, 11.2, 9.0, 7.0, 5.8, 6.8, 4.8, 3.9 }; static tNumber_R xd[Ndp+1] = { NIL, 6.0, 7.0, 8.5, 9.5, 10.2, 17.8, 18.0, 19.3, 20.1, 21.0, 25.5, 26.8, 28.0, 29.2, 30.2, 36.6, 37.3, 37.8, 39.7, 40.8, 48.1, 49.0, 49.9, 50.2 };
12.9, 21.5, 32.0, 41.9,
13.6, 21.9, 33.1, 42.8,
static tNumber_R fd[Ndp+1] = { NIL, 1.6, 1.65, 1.75, 1.78, 1.7, 2.6, 2.32, 2.4, 2.38, 1.8, 0.39, 0.68, 0.67, 0.31, 0.31, 1.35, 1.35, 1.95, 2.67, 2.68, 3.03, 3.15, 3.12, 1.58 };
1.88, 1.15, 0.51, 2.58,
2.28, 1.12, 0.86, 2.57,
13.5, 22.0, 15.5, 9.0, 6.0,
15.0, 23.5, 15.0, 8.0, 5.1,
15.6, 24.1, 15.9, 7.5, 4.0,
15.7, 25.2, 16.6, 7.0, 3.8
15.8, 24.8, 17.0, 6.0,
0.27, 1.02, 1.55, 2.11, 0.73,
0.28, 1.23, 1.6, 2.11, 0.72,
0.5, 1.38, 1.6, 2.02, 0.6,
0.55, 1.57, 1.65, 1.95, 0.48
0.6, 2.0, 1.78, 1.89,
static tNumber_R xe[Nep+1] = { NIL, 3.8, 6.6, 8.8, 10.9, 12.3, 15.9, 18.0, 19.0, 19.7, 20.4, 22.5, 21.3, 20.5, 18.0, 16.0, 17.8, 16.0, 15.3, 14.9, 10.0, 5.0, 5.8, 6.5, 7.0, 8.0, }; static tNumber_R fe[Nep+1] = { NIL, 0.48, 0.48, 0.39, 0.46, 0.38, 0.7, 0.89, 0.89, 0.96, 0.96, 1.79, 1.77, 1.5, 1.26, 1.26, 1.98, 2.1, 2.17, 2.23, 2.22, 1.74, 1.5, 1.41, 1.19, 0.9, };
© 2008 by Taylor & Francis Group, LLC
15.9, 23.2, 34.0, 46.0,
2.23, 1.13, 0.87, 3.33,
16.4, 24.2, 34.9, 47.0,
2.38, 0.72, 0.79, 3.36,
16.8, 24.5, 35.7, 47.6,
2.63, 0.4, 0.85, 3.08,
Chapter 8: DR_L1pol
263
static tNumber_R xg[Ngp+1] = { NIL, 2.8, 3.9, 5.6, 6.7, 8.2, 14.1, 15.0, 19.0, 19.8, 21.0, 32.5, 32.1, 32.6, 31.5, 32.0, 19.9, 19.0, 18.7, 19.0, 18.1, 17.6, 17.4, 16.9, 16.8, 17.0, 15.2, 13.3, 13.0, 12.5, 12.1, 8.1, 8.5, 5.5, 2.8 };
9.5, 20.9, 30.1, 18.1, 17.1, 11.5,
11.3, 25.2, 26.1, 18.7, 16.5, 10.5,
12.4, 28.0, 24.2, 18.9, 15.9, 10.0,
12.4, 32.5, 23.1, 18.1, 16.0, 9.5,
14.1, 31.6, 21.7, 18.1, 16.3, 9.0,
static tNumber_R fg[Ngp+1] = { NIL, 1.18, 0.9, 0.8, 0.6, 0.51, 0.98, 0.92, 1.58, 1.58, 1.89, 1.8, 1.7, 1.52, 1.3, 1.02, 0.6, 0.66, 0.8, 0.88, 0.88, 1.38, 1.46, 1.46, 1.58, 1.58, 2.1, 2.08, 2.09, 2.09, 2.03, 1.75, 1.6, 1.1, 1.18 };
0.6, 1.95, 0.82, 0.99, 1.73, 2.1,
0.52, 2.3, 0.5, 0.99, 1.7, 2.0,
0.6, 2.32, 0.48, 1.15, 1.8, 2.05,
0.73, 1.92, 0.52, 1.13, 1.9, 1.92,
0.8, 1.8, 0.5, 1.38, 1.98, 1.9,
/*---------------------------------------Variable matrices/vectors ----------------------------------------*/ tMatrix_R ct = alloc_Matrix_R (2, NN_ROWS); tMatrix_R ctn = alloc_Matrix_R (2, NN_ROWS); tMatrix_R binv = alloc_Matrix_R (2, 2); tMatrix_R ap = alloc_Matrix_R (KK_PIECES, 2); tVector_R f = alloc_Vector_R (NN_ROWS); tVector_R r = alloc_Vector_R (NN_ROWS); tVector_R rp = alloc_Vector_R (NN_ROWS); tVector_R zp = alloc_Vector_R (KK_PIECES); tVector_R a = alloc_Vector_R (2); tVector_R v = alloc_Vector_R (2); tVector_I icbas = alloc_Vector_I (2); tVector_I ixl = alloc_Vector_I (KK_PIECES); int int
m, n, npiece, Iexmpl; j;
tNumber_R
enorm;
eLaRc
rc = LaRcOk;
© 2008 by Taylor & Francis Group, LLC
264
Numerical Linear Approximation in C prn_dr_bnr ("DR_L1pol, Polygonal L1 Approximation of a Plane" " Curve"); m = 2; for (Iexmpl = 1; Iexmpl <= 4; Iexmpl++) { switch (Iexmpl) { case 1: enorm = 1.8; n = Ncp; for (j = 1; j <= n; j++) { f[j] = fc[j]; ct[1][j] = 1.0; ct[2][j] = j; } break; case 2: enorm = 0.6; n = Ndp; for (j = 1; j <= n; j++) { f[j] = fd[j]; ct[1][j] = 1.0; ct[2][j] = xd[j]; } break; case 3: enorm = 0.48; n = Nep; for (j = 1; j <= n; j++) { f[j] = fe[j]; ct[1][j] = 1.0; ct[2][j] = xe[j]; } break; case 4: enorm = 0.8; n = Ngp; for (j = 1; j <= n; j++)
© 2008 by Taylor & Francis Group, LLC
Chapter 8: DR_L1pol
265
{ f[j] = fg[j]; ct[1][j] = 1.0; ct[2][j] = xg[j]; } break; default: break; } prn_algo_bnr ("L1pol"); prn_example_delim(); PRN ("Example #%d: Size of Matrix \"c\", %d by %d\n", Iexmpl, n, m); prn_example_delim(); PRN ("Polygonal L1 Approximation of a Plane Curve\n"); prn_example_delim(); PRN ("r.h.s. Vector \"f\"\n"); prn_Vector_R (f, n); PRN ("Transpose of Coefficient Matrix, \"ct\"\n"); prn_Matrix_R (ct, m, n); rc = LA_L1pol (m, n, enorm, ct, f, ctn, binv, icbas, r, a, ixl, v, rp, ap, zp, &npiece); if (rc >= LaRcOk) { PRN ("\n"); PRN ("Results of Polygonal L1 Approximation\n"); PRN ("Pre-assigned tolerance 'enorm' = %8.4f\n", enorm); PRN ("Number of lines (pieces) = %d\n", npiece); PRN ("Starting point of each line\n"); prn_Vector_I (ixl, npiece); PRN ("Solution Vector \"a\" for each line\n"); prn_Matrix_R (ap, npiece, 2); PRN ("Residual Vector \"r\" for the n points\n"); prn_Vector_R (rp, n); PRN ("L-One norms of the residuals for the " " \"npiece\" lines\n"); prn_Vector_R (zp, npiece); } prn_la_rc (rc); © 2008 by Taylor & Francis Group, LLC
266
Numerical Linear Approximation in C } free_Matrix_R free_Matrix_R free_Matrix_R free_Matrix_R free_Vector_R free_Vector_R free_Vector_R free_Vector_R free_Vector_R free_Vector_R free_Vector_I free_Vector_I
(ct, 2); (ctn, 2); (binv, 2); (ap, KK_PIECES); (f); (r); (rp); (zp); (a); (v); (icbas); (ixl);
}
© 2008 by Taylor & Francis Group, LLC
Chapter 8: LA_L1pol
8.7
267
LA_L1pol
/*------------------------------------------------------------------LA_L1pol --------------------------------------------------------------------This program calculates straight line polygon to a discrete point set {x,y}. The approximation is such that the L-One error norm for any line not to exceed a pre-assigned tolerance denoted by "enorm". The number of lines in the polygon is not known before hand. The polygon may be an open or a closed one. Given is a set of points {x,y} resulting from digitizing a given plane curve of the form y = f(x). The points may not be equally spaced. We form the overdetermined system of linear equations c*a = f "c" is a real n by 2 matrix. Each element of its first column is 1 and its second column contains the x-coordinates of the points (x,y) in a sequential order. "f" is a real n vector whose elements are the y coordinates of the points (x,y). Inputs m An integer = 2 which is the number of unknowns in the straight line y = a[1] + a[2]*x. n The number of points of the set {x,y}. ct A real 2 by n matrix containing the transpose of matrix "c" of system c*a = f. "ct" is not destroyed in the computation. f A real n vector containing the r.h.s. of the system c*a = f. This vector also is not destroyed in the computation. enorm A given real parameter; a pre-assigned tolerance such that the L-One residual norm for any segment is <= enorm. Outputs npiece The number of straight lines of the polygonal approximation. ixl An integer "npiece" vector containing the indices of the first elements of the "npiece" segments. For example, if ixl = (1,5,12,22,...), then the first segment contains points 1 to 5, the second segment contains points 5 to 12, and so on. ap A real "npiece" by 2 matrix. The 2 elements of its first row contain the coefficients of the first line. The 2 elements of the second row contain the coefficients of
© 2008 by Taylor & Francis Group, LLC
268
rp zp
Numerical Linear Approximation in C the second line and so on. A real n vector containing the residual values at the n points of the set {x,y}. A real "npiece" vector containing the npiece optimum values of the L-One norms for the "npiece" segments.
Returns one of LaRcSolutionFound LaRcErrBounds LaRcErrNullPtr -------------------------------------------------------------------*/ #include "LA_Prototypes.h" eLaRc LA_L1pol (int m, int n, tNumber_R enorm, tMatrix_R ct, tVector_R f, tMatrix_R ctn, tMatrix_R binv, tVector_I icbas, tVector_R r, tVector_R a, tVector_I ixl, tVector_R v, tVector_R rp, tMatrix_R ap, tVector_R zp, int *pNpiece) { int is = 0, ie = 0, nu = 0, je = 0; int i = 0, j = 0, k = 0, ij = 0, ijk = 0, kase = 0; tNumber_R z = 0.0, piv = 0.0; /* Validation of the data before executing the algorithm */ eLaRc rc = LaRcSolutionFound; VALIDATE_BOUNDS ((m == 2) && (m <= n) && !((n == 1) && (m == 1))); VALIDATE_PTRS (ct && f && ctn && binv && icbas && r && a && ixl && v && rp && ap && zp && pNpiece); /* Initialization */ *pNpiece = 1; /* "is" means i(start) and "ie" means i(end) */ is = 1; for (ijk = 1; ijk <= n; ijk++) { /* kase == 0: The start of the algorithm */ je = 0; kase = 0; z = 0.0, zp[*pNpiece] = 0.0; ie = is + m - 1; for (j = is; j <= ie; j++) { rp[j] = 0.0; } © 2008 by Taylor & Francis Group, LLC
Chapter 8: LA_L1pol for (i = 1; i <= m; i++) { ap[*pNpiece][i] = 0.0; } if (ie > n) ie = n; ixl[*pNpiece] = is; nu = ie - is + 1; if (nu < m) { GOTO_CLEANUP_RC (LaRcSolutionFound); } if (kase == 0) { for (j = is; j <= ie; j++) { for (i = 1; i <=m; i++) { ctn[i][j] = ct[i][j]; } } } for (ij = 1; ij <= n; ij++) { LA_l1pol_residuals (m, is, ie, ctn, f, icbas, binv, r, a, &z, kase); if (z > enorm) { is = ie - 1; *pNpiece = *pNpiece + 1; break; } zp[*pNpiece] = z; for (j = 1; j <= m; j++) { ap[*pNpiece][j] = a[j]; } for (j = is; j <= ie; j++) { rp[j] = r[j]; } kase = -1; je = ie + 1; © 2008 by Taylor & Francis Group, LLC
269
270
Numerical Linear Approximation in C
if (je > n) { GOTO_CLEANUP_RC (LaRcSolutionFound); } /* Check if the first and last points lie on a vertical or near vertical line */ LA_l1pol_vertic_line (je, m, ct, ctn, f, r, binv, a, v, &piv); for (k = 1; k <= n; k++) { if (fabs (piv) > EPS) break; if (fabs (piv) <= EPS) { je = ie + 1; if (je > n) { GOTO_CLEANUP_RC (LaRcSolutionFound); } LA_l1pol_vertic_line (je, m, ct, ctn, f, r, binv, a, v, &piv); } } ie = je; } } CLEANUP: return rc; } /*------------------------------------------------------------------This function solves the first and the last equations of an overdetermined system. It uses a linear programming method, such that if an extra equation is added to the system, it solves the first and the new last equation. It does that in one simplex step. It also calculates the L-One norm "z" of the residual vector of the segment at hand from the simplex tableau. -------------------------------------------------------------------*/ void LA_l1pol_residuals (int m, int n1, int n2, tMatrix_R ct, tVector_R f, tVector_I icbas, tMatrix_R binv, tVector_R r, tVector_R a, tNumber_R *pZ, int kase) © 2008 by Taylor & Francis Group, LLC
Chapter 8: LA_L1pol
271
{ int tNumber_R
i, j, k, jin, ibc, iout; e, s, z, pivot;
/* kase = 0 indicates the start of a new segment */ if (kase == 0) { for (j = 1; j <= m; j++) { a[j] = 0.0; icbas[j] = 0; for (i = 1; i <= m; i++) { binv[i][j] = 0.0; } binv[j][j] = 1.0; } z = 0.0; iout = 1; jin = n1; /* A Gauss-Jordan elimination step */ LA_l1pol_gauss_jordn (n1, n2, m, iout, jin, ct, binv, icbas); } /* kase != 0 The segment at hand is enlarged by adding to its end an extra point */ iout = 2; jin = n2; pivot = ct[iout][jin]; /* Detection of a singular basis matrix */ if (fabs (pivot) >= EPS) { LA_l1pol_gauss_jordn (n1, n2, m, iout, jin, ct, binv, icbas); if (kase != 0) { e = r[jin]; for (j = n1; j <= n2; j++) { r[j] = r[j] - e * (ct[iout][j]); } } } © 2008 by Taylor & Francis Group, LLC
272
Numerical Linear Approximation in C else return; /* Calculate the residuals (marginal costs) */ for (j = n1; j <= n2; j++) { r[j] = 0.0; ibc = 0; for (i = 1; i <= m; i++) { if (j == icbas[i]) ibc = 1; } if (ibc == 0) { s = -f[j]; for (i = 1; i <= m; i++) { k = icbas[i]; s = s + f[k] * ct[i][j]; } r[j] = s; } } LA_l1pol_gauss_jordn (n1, n2, m, iout, jin, ct, binv, icbas); LA_l1pol_res (n1, n2, m, f, r, a, binv, icbas, pZ);
} /*------------------------------------------------------------------A Gauss-Jordan elimination step in LA_L1pol() -------------------------------------------------------------------*/ void LA_l1pol_gauss_jordn (int n1, int n2, int m, int iout, int jin, tMatrix_R ct, tMatrix_R binv, tVector_I icbas) { int i, j; tNumber_R e, pivot; pivot = ct[iout][jin]; icbas[iout] = jin; for (j = n1; j <= n2; j++) { ct[iout][j] = ct[iout][j]/pivot; } for (j = 1; j <= m; j++) { © 2008 by Taylor & Francis Group, LLC
Chapter 8: LA_L1pol
273
binv[iout][j] = binv[iout][j]/pivot; } for (i = 1; i <= m; i++) { if (i != iout) { e = ct[i][jin]; for (j = n1; j <= n2; j++) { ct[i][j] = ct[i][j] - e * (ct[iout][j]); } for (j = 1; j <= m; j++) { binv[i][j] = binv[i][j] - e * (binv[iout][j]); } } } } /*------------------------------------------------------------------Calculate the results of LA_L1pol() -------------------------------------------------------------------*/ void LA_l1pol_res (int n1, int n2, int m, tVector_R f, tVector_R r, tVector_R a, tMatrix_R binv, tVector_I icbas, tNumber_R *pZ) { int i, j, k; tNumber_R s; for (j = 1; j <= m; j++) { s = 0.0; for (i = 1; i <= m; i++) { k = icbas[i]; s = s + f[k] * binv[i][j]; } a[j] = s; } s = 0.0; for (j = n1; j <= n2; j++) { s = s + fabs (r[j]); } *pZ = s; © 2008 by Taylor & Francis Group, LLC
274
Numerical Linear Approximation in C
} /*------------------------------------------------------------------Test if the 2 end points lie on a vertical or near vertical line in LA_L1pol() -------------------------------------------------------------------*/ void LA_l1pol_vertic_line (int je, int m, tMatrix_R ct, tMatrix_R ctn, tVector_R f, tVector_R r, tMatrix_R binv, tVector_R a, tVector_R v, tNumber_R *pPiv) { int i, k; tNumber_R s; for (i = 1; i <= m; i++) { v[i] = ct[i][je]; } /* Calculating the marginal costs (the residuals) */ for (i = 1; i <= m; i++) { s = 0.0; for (k = 1; k <= m; k++) { s = s + binv[i][k]*v[k]; } ctn[i][je] = s; } s = -f[je]; for (i = 1; i <= m; i++) { s = s + a[i] * ct[i][je]; } r[je] = s; *pPiv = ctn[2][je]; }
© 2008 by Taylor & Francis Group, LLC
275
Chapter 9 Piecewise L1 Approximation of Plane Curves
9.1
Introduction
Piecewise linear approximation of a plane curve implies that the plane curve is divided into segments, or pieces, and each piece is approximated by a simple curve. Unlike polygonal approximation, where the approximating lines are joined to form a polygon, the approximating curves in piecewise approximations are not joined to one another. In this respect, Pavlidis [8] argues that the continuity requirement at the joints does not appear to be essential for some applications, and even desirable for other applications. Discontinuity is desirable in the detection of feature selection for pattern recognition and picture processing. It is also desirable in detecting jumps in waveforms. Besides that, calculating the approximations for separate segments reduces the computation complexity and time. In the majority of the published works on piecewise linear approximation, the approximating curves are either constants (horizontal lines) or straight lines. However, the approximations could be by any kind of polynomials. In the presence of spikes in the given curve, the approximation in the L1 norm is usually recommended over other norms. This is because the approximating curve in the L1 norm almost totally ignores the presence of the spikes, as explained in Section 2.3. The problem of piecewise approximation in the L1 norm has not received much attention. The obvious reason is that the segmentation requires the use of algorithms for solving the L1 approximation problem that were not available to many authors. On the other hand, piecewise approximations in the Chebyshev and the L2 norms © 2008 by Taylor & Francis Group, LLC
276
Numerical Linear Approximation in C
received considerable attention. See the references in Chapters 15 and 18 respectively. We note here that Sklar and Armstrong [10] described a procedure for piecewise linear approximation for Lp norm curve fitting, where 1 < p < 2. In this chapter, two algorithms for the piecewise linear L1 approximation of plane curves are presented [4] for the two cases: (a) when the L1 norm in any segment is not to exceed a pre-assigned value, and (b) when the number of segments is given and a balanced (equal) L1 norm solution for all the segments is required. The problem is solved by first digitizing the given curve. Then either algorithm is applied to the discrete points. This work is analogous to that presented in Chapters 15 and 18 for the Chebyshev [3] and the L2 [5] norms respectively. In Section 9.2, the characteristics of the piecewise approximation are presented. In Section 9.3, the discrete linear L1 approximation problem is outlined. In Section 9.4, the two piecewise linear L1 approximation algorithms are described. In Section 9.5, numerical results and comments are given. 9.1.1
Applications of piecewise approximation
As indicated in the previous chapter, piecewise approximation of plane curves has several applications. It is used in the area of image processing such as image segmentation, image compression and segmentation/reconstruction of range images. It is also used in feature extraction, noise filtering, speech recognition and numerous other fields. See the references cited in Section 8.1.1. 9.2
Characteristics of the piecewise approximation
Piecewise approximation possesses certain characteristics that were laid down by Lawson [7]. These characteristic properties are for segmented rational Chebyshev approximation. Yet, they apply as well for linear approximation and for norms other than the Chebyshev norm. See also Pavlidis and Maika [9]. Let us be given the plane curve y = f(x) and let it be defined on the interval [a, b]. Let this curve be digitized at the K points (xi, f(xi)), © 2008 by Taylor & Francis Group, LLC
Chapter 9: Piecewise L1 Approximation of Plane Curves
277
i = 1, 2, …, K, where x1 = a and xK = b. Let n be the number of pieces (segments) in the approximation and let (zj), j = 1, 2, …, n, be the L1 residual or error norms for the n pieces. For the first algorithm, the number of segments is not known beforehand. Assume that f(x) is continuous and satisfies Lipschitz condition on [a, b]. Hence, the error norm zj, 1 ≤ j ≤ n, has the following characteristics. Characteristic 1 (a) (b) (c)
zj is a continuous function of the end points of the segment, zj is non-increasing in the segment left end point, and zj is non-decreasing in the segment right end point.
Characteristic (b) means that if a point is deleted from the left end of segment j, zj, the error norm for segment j, will not increase, but is likely to decease or remain unchanged. Characteristic (c) means that if a point is added to the right end of segment j, zj will not decrease, but is likely to increase or remain unchanged. Characteristic 2 If a piecewise approximation is calculated for the curve f(x), (not for the digitized points of the curve), and if z1 = z2 = … = zn, then the solution is optimal. Such a solution always exists and is known as a balanced (equal) error norm solution. However, in the case of the approximation of the discrete points of the digitized curve, a balanced error norm solution may not exist. One attempts to obtain a near-balanced piecewise approximation instead. Lawson [7] suggested an iterative technique for obtaining such a balanced error norm solution. The technique performs well on smooth data, but it might fail (cycle) if at some intermediate step, a segment has a zero-error norm. This happened when segmentation by straight lines were used. Pavlidis [8] was able to rectify this point. We should also note that this algorithm does not result in a unique near equal error norm solution. The final result depends on the initial segmentation of the given digitized curve. We have met this situation in our experimentation. Pavlidis [8] encountered the same situation in his experimentation. As indicated above, the problem is solved by first digitizing the given curve into discrete points. Then either algorithm is applied to
© 2008 by Taylor & Francis Group, LLC
278
Numerical Linear Approximation in C
the discrete data. The two algorithms use the discrete linear L1 approximation function LA_Lone() of Chapter 5 [2]. The first algorithm has the option of calculating connected or disconnected piecewise linear L1 approximations, while the second algorithm calculates disconnected piecewise L1 approximations only. In the connected piecewise approximation, the x-coordinate of the right end point of segment j is the x-coordinate of the starting (left) point of segment (j + 1). In the disconnected piecewise approximation, the x-coordinate of the adjacent point to the right end point of segment j is the x-coordinate of the starting left point of segment (j + 1). See Figures 9.1-3 below. 9.3
The discrete linear L1 approximation problem
Consider any segment j, 1 ≤ j ≤ n, of the given curve f(x). Let this segment consist of N digitized points with coordinates (xi, f(xi)), i = 1, 2, …, N. Let these N discrete points be approximated by M
∑ ai φi ( x )
L ( a, x ) =
i=1
which minimizes the L1 norm zj (or briefly z), of the residuals r(xi) N
z =
∑
r ( xi )
i=1
where (9.3.1)
r(xi) = L(a, xi) – f(xi), i = 1, 2, …, N
The functions (φi(x)), i = 1, 2, …, M, M ≤ N, are given real linearly independent approximating functions and a = (ai) is an M real vector to be calculated. This problem reduces to the problem of obtaining the L1 solution of the overdetermined system of equations [1] (Section 2.2) Ca = f C is an N by M matrix given by C = (cij) = (φj(xi)), i = 1, 2, …, N, j = 1, 2, …, M. and f is the N-vector f = (fi) = (f(xi)). The L1 solution
© 2008 by Taylor & Francis Group, LLC
Chapter 9: Piecewise L1 Approximation of Plane Curves
279
to system Ca = f is the real M-vector a that minimizes the L1 norm N
z =
∑
ri
i=1
where ri is the ith residual given by (9.3.1), or by M
ri = r ( xi ) =
∑ cij aj – fi ,
i = 1, 2, …, N
j=1
This problem has been efficiently solved as a linear programming problem in the dual form [2, 6] (Chapter 5). The implementation of the algorithm for the discrete linear L1 approximation to this problem is described in the next section. 9.4 9.4.1
Description of the algorithms Piecewise linear L1 approximation with pre-assigned tolerance
Let the piecewise approximation for any segment j, 1 ≤ j ≤ n, be such that the L1 error norm zj, be zj ≤ ε, where ε is a pre-assigned parameter. The number of segments n is not known yet. For this algorithm, we use the sequential or scan-along approach. See Section 8.1.2. It is described in the following steps: (1) Take j = 1 (first segment). (2) Starting from the first point in segment j in the digitized curve, take a number of points N = M, which is the minimum initial number of points for segment j. M is the number of terms in the approximating function L(a, x). Use the function L(a, x) to calculate matrix C and vector f, as in Ca = f. From C, f, and the parameters M and N calculate the L1 approximation using LA_Lone() of Chapter 5. This corresponds to a perfect fit, with zj = 0, assuming that the N by M matrix C is of rank M. In fact, since we assumed that the functions (φi(x)), i = 1, …, M, are linearly independent, matrix C in Ca = f would be of rank M. Go to step (3).
© 2008 by Taylor & Francis Group, LLC
280
Numerical Linear Approximation in C
(3)
Add point (N + 1) to the right of the N points of segment j. Call the L1 approximation function LA_Lone() and calculate the L1 norm for the (N + 1) points. Go to step (4). If the value of the norm zj is ≤ ε, where ε is the pre-assigned value, go to step (3). If zj > ε, take N = N – 1, which is the final number of points for segment j, with the corresponding zj as the L1 solution for segment j. Take j = j + 1 and go to step (2) for segment (j + 1).
(4)
If the number of points in the last segment is ≤ M, we get a perfect fit for this segment with zn = 0. 9.4.2
Piecewise linear approximation with near-balanced L1 norms
Given is the number of segments n. It is required that the calculated L1 residual norms for the n segments be equal; i.e., z1 = z2 = … = zn. As indicated earlier, it may be possible to fulfill this requirement for the piecewise approximation of the given curve itself. However, in the case of the approximation of the discrete points of the digitized curve, a balanced residual norm solution may not exist, so a near-balanced solution is obtained instead. The initial indices of the first point in each segment are calculated. This is done by dividing the total number of the digitized points of the given curve into n approximate numbers. The steps of this algorithm are similar to those of Pavlidis [8]: (1) Call the L1 approximation function LA_Lone() to calculate the L1 approximation for the n segments, where n is specified. Let their optimum L1 residual norms be (z1, …, zn). (2) Set iflag(j) = 1, for j = 1, 2, …, n. iflag(j) is an index indicator for segment j such that if both iflag(j) = 0 and iflag(j + 1) = 0, we skip step (3) for segments j and (j + 1). This is indicated shortly. (3) For j = 1, 3, 5, …, call LA_Lone() to calculate the L1 residual norms zj and zj+1. Compare the norms zj and zj+1. If zj < zj+1, go to step (3.1). Otherwise if zj > zj+1, go to step (4.1). (3.1) If zj < zj+1, add the left end point of segment (j + 1) to the right end of segment j. Call LA_Lone() to calculate the L1 approximation for the enlarged segment j and for the © 2008 by Taylor & Francis Group, LLC
Chapter 9: Piecewise L1 Approximation of Plane Curves
281
(4.2)
shortened segment (j + 1). Call the new L1 residual norms zjnew and zj+1new. Go to step (3.2). If |zjnew – zj+1new| < |zj – zj+1|, set zj = zjnew and zj+1 = zj+1new. Also set iflag(j) = 0 and iflag(j + 1) = 0. Repeat step (3) for j = 2, 4, 6, … If iflag(j) = 1, 1 ≤ j ≤ n, repeat step (3), otherwise terminate. If in repeating step (3), iflag = 0 for any two adjacent segments j and (j + 1), skip step (3) for segments j and (j + 1). If zj > zj+1, delete the right end point of segment j by adding it to the left end of segment (j + 1). Go to step (4.2). This step uses equivalent logic to step (3.2).
9.5
Numerical results and comments
(3.2)
(4.1)
Each of the functions LA_L1pw1() and LA_L1pw2() calculate the number of segments in the piecewise approximation (for LA_L1pw2(), n is given), the starting points of the n segments, the coefficients of the approximating curves for the n segments, the residuals at each point of the digitized curve and finally the L1 residual norms for the n segments. LA_L1pw1() computes for the case when the L1 residual norm in any segment is not to exceed a pre-assigned value ε. LA_L1pw2() computes for the case when the number of segments is given and a near-balanced L1 error norm solution is required. Both of these functions use LA_Lone() of Chapter 5. LA_L1pw1() has the option of calculating connected or disconnected piecewise L1 approximations, while LA_L1pw2() can only calculate disconnected piecewise L1 approximations. This was explained at the end of Section 9.2. DR_L1pw1() and DR_L1pw2() test several examples. In order to compare the results of these two algorithms with those of the algorithms for the Chebyshev norm [2] and for the L2 norm [3], we chose the same curve. This curve is digitized with equal x-intervals into 28 points, and the data points are fitted with vertical parabolas. Each is of the form y = a1 + a2x + a3x2. The results of LA_L1pw1() are shown in Figures 9-1 and 9-2. The results of LA_L1pw2() are shown in Figure 9-3, where the number of segments was set to n = 4.
© 2008 by Taylor & Francis Group, LLC
282
Numerical Linear Approximation in C
The L1 norms of Figures 9-1 and 9-2 are (5.940, 4.543, 5.480, 4.414) and n = 4, and (5.940, 4.820, 5.213, 3.038, 1.367) and n = 5, respectively. The L1 norms of Figure 9-3 are (5.940, 4.543, 5.480, 4.414), for n = 4. 18 16 14 12 10 8 6 4 2 0 0
5
10
15
20
25
30
Figure 9-1: Disconnected linear L1 piecewise approximation with vertical parabolas. The L1 residual norm in any segment ≤ 6.2
18 16 14 12 10 8 6 4 2 0 0
5
10
15
20
25
30
Figure 9-2: Connected linear L1 piecewise approximation with vertical parabolas. The L1 residual norm in any segment ≤ 6.2
© 2008 by Taylor & Francis Group, LLC
Chapter 9: Piecewise L1 Approximation of Plane Curves
283
18 16 14 12 10 8 6 4 2 0 0
5
10
15
20
25
30
Figure 9-3: Near-balanced residual norm solution. Disconnected linear L1 piecewise approximation with vertical parabolas. Number of segments = 4
We observe that LA_L1pw2() did not produce a balanced norm solution (equal L1 norms for all the segments) in Figure 9-3. Also, by mere chance in this example, Figures 9-1 and 9-3 give identical results. We should note that for each segment, the error between the digitized points and the approximating curve is measured along the y-axis direction. That is, not in the direction perpendicular to the approximating curve. The requirement that the error between the digitized points and the approximate curve be measured in a direction perpendicular to the approximating curve is not needed in the L1 piecewise approximation, such as in our algorithm. The reason is that the L1 error norm for a digitized curve segment is approximately proportional to the area between the digitized points and the approximating curve. This area is approximately the same when the residuals (errors) are measured in the y-direction or when measured perpendicular to the approximating curve. This same point was made in the previous chapter for polygonal approximation of plane curves in the L1 norm. In a previous version of our algorithms [4], we utilized parametric linear programming techniques [6] and made use of the interpolating property of the L1 approximation (Chapters 2 and 5). In these © 2008 by Taylor & Francis Group, LLC
284
Numerical Linear Approximation in C
techniques, information from a previous simplex iteration in step (3) of Section 9.4.1 or Section 9.4.2 are stored and used in a current iteration. This increases the code complexity but reduces the number of iterations.
References 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.
Abdelmalek, N.N., On the discrete linear L1 approximation and L1 solutions of overdetermined linear equations, Journal of Approximation Theory, 11(1974)38-53. Abdelmalek, N.N., An efficient method for the discrete linear L1 approximation problem, Mathematics of Computation 29(1975)844-850. Abdelmalek, N.N., Piecewise linear Chebyshev approximation of planar curves, International Journal of Systems Science, 14(1983)425-435. Abdelmalek, N.N., Piecewise linear L1 approximation of planar curves, International Journal of Systems Science, 16(1985)447-455. Abdelmalek, N.N., Piecewise linear least-squares approximation of planar curves, International Journal of Systems Science, 21(1990)1393-1403. Hadley, G., Linear Programming, Addison-Wesley, Reading, MA, 1962. Lawson, C.L., Characteristic properties of the segmented rational minimax approximation problem, Numerische Mathematik, 6(1964)293-301. Pavlidis, T., Waveform segmentation through functional approximation, IEEE Transactions on Computers, 22(1973)689697. Pavlidis, T. and Maika, A.P., Uniform piecewise polynomial approximation with variable joints, Journal of Approximation Theory, 12(1974)61-69. Sklar, M.G. and Armstrong, R.D., A piecewise linear approximation procedure for Lp norm curve fitting, Journal of Statistical Computation and Simulation, 52(1995)323-335.
© 2008 by Taylor & Francis Group, LLC
Chapter 9: DR_L1pw1
9.6
285
DR_L1pw1
/*------------------------------------------------------------------DR_L1pw1 --------------------------------------------------------------------This is a driver for the function LA_L1pw1(), which calculates a linear piecewise L-One approximation of a given data point set {x,y} that results from discretizing a given plane curve y = f(x). The points of the set might not be equally spaced. The approximation by LA_L1pw1() is such that the L-One residual norm for any segment is not to exceed a pre-assigned tolerance denoted by "enorm". LA_L1pw1() calculates the connected or the disconnected linear piecewise L-One approximation, according to the value of an integer parameter "konect" set by the user. See comments in program LA_L1pw1(). From the approximating curve we form the overdetermined system of linear equations c*a = f "c" is a real n by m matrix of rank k, k <= m < n. n is the number of digitized points of the given plane curve. m is the number of terms in the approximating curves. If for example, the approximating curves are vertical parabolas of the form y = a1 + a2*x + a3*x*x then m = 3. "f" is a real n vector whose elements are the y coordinates of the data set {x,y}. "a" is the solution m vector. There are different "a" solution vectors for the different segments. This driver contains 1 test example. A given curve is digitized into 28 points at equal x intervals. The points are piecewise approximated by vertical parabolas of the form y = a1 + a2*x + a3*x*x
© 2008 by Taylor & Francis Group, LLC
286
Numerical Linear Approximation in C
The results for the disconnected and of the connected piecewise L-One approximation are given in the text. -------------------------------------------------------------------*/ #include "DR_Defs.h" #include "LA_Prototypes.h" #define Nc
28
void DR_L1pw1 (void) { /*---------------------------------------Constant matrices/vectors ----------------------------------------*/ static tNumber_R fc[Nc+1] = { NIL, 8.0, 11.0, 13.0, 11.2, 9.1, 10.8, 14.8, 16.0, 15.1, 14.0, 14.7, 15.8, 16.8, 15.6, 13.0, 14.3, 13.8, 10.6, 9.3, 9.6, 10.8, 11.2, 9.0, 7.0, 5.8, 6.8, 4.8, 3.9 }; /*---------------------------------------Variable matrices/vectors ----------------------------------------*/ tMatrix_R ct = alloc_Matrix_R (MM_COLS, NN_ROWS); tMatrix_R rp1 = alloc_Matrix_R (KK_PIECES, NN_ROWS); tMatrix_R ap = alloc_Matrix_R (KK_PIECES, MM_COLS); tVector_R f = alloc_Vector_R (NN_ROWS); tVector_R zp = alloc_Vector_R (KK_PIECES); tVector_I irankp = alloc_Vector_I (KK_PIECES); tVector_I ixl = alloc_Vector_I (KK_PIECES); int int tNumber_R eLaRc
j, k, m, n; konect, npiece; enorm; prnRc;
eLaRc
rc = LaRcOk;
prn_dr_bnr ("DR_L1pw1, L1 Piecewise Approximation of a Plane " "with Pre-assigned Norm"); konect = 0; for (k = 1; k <= 2; k++) © 2008 by Taylor & Francis Group, LLC
Chapter 9: DR_L1pw1
287
{ switch (k) { case 1: enorm = 6.2; n = Nc; m = 3; for (j = 1; j <= n; j++) { f[j] = fc[j]; ct[1][j] = 1.0; ct[2][j] = j; ct[3][j] = j*j; } break; default: break; } prn_algo_bnr ("L1pw1"); if (k == 1) konect = 0; if (k == 2) konect = 1; prn_example_delim(); PRN ("konect = %d: Size of matrix \"c\" %d by %d\n", konect, n, m); PRN ("L1 Piecewise Approximation with Pre-assigned Norm\n"); PRN ("Pre-assigned Norm \"enorm\" = %8.4f\n", enorm); prn_example_delim(); if (konect == 1) PRN ("Connected L1 Piecewise Approximation\n"); else PRN ("Disconnected L1 Piecewise Approximation\n"); prn_example_delim(); PRN ("r.h.s. Vector \"f\"\n"); prn_Vector_R (f, n); PRN ("Transpose of Coefficient Matrix, \"ct\"\n"); prn_Matrix_R (ct, m, n); rc = LA_L1pw1 (m, n, enorm, konect, ct, f, ixl, irankp, rp1, ap, zp, &npiece); if (rc >= LaRcOk) { PRN ("\n"); PRN ("Results of the L1 Piecewise Approximation\n"); © 2008 by Taylor & Francis Group, LLC
288
Numerical Linear Approximation in C PRN ("Calculated number of segments (pieces) = %d\n", npiece); PRN ("Staring points of the \"npiece\" segments\n"); prn_Vector_I (ixl, npiece); PRN ("Coefficients of the approximating curves\n"); prn_Matrix_R (ap, npiece, m); PRN ("Residual vectors for the \"npiece\" segments\n"); prnRc = LA_pw1_prn_rp1 (konect, npiece, n, ixl, rp1); PRN ("L1 residual norms for the \"npiece\" segments\n"); prn_Vector_R (zp, npiece); if (prnRc < LaRcOk) { PRN ("Error printing PW1 results\n"); } } prn_la_rc (rc); } free_Matrix_R free_Matrix_R free_Matrix_R free_Vector_R free_Vector_R free_Vector_I free_Vector_I
(ct, MM_COLS); (rp1, KK_PIECES); (ap, KK_PIECES); (f); (zp); (irankp); (ixl);
}
© 2008 by Taylor & Francis Group, LLC
Chapter 9: LA_L1pw1
9.7
289
LA_L1pw1
/*------------------------------------------------------------------LA_L1pw1 --------------------------------------------------------------------This program calculates a linear piecewise L1 (L-One) approximation to a discrete point set {x,y}. The approximation is such that the L1 residual (error) norm for any segment is not to exceed a given tolerance "enorm". The number of segments (pieces) is not known before hand. Given is a set of points {x,y}. From the approximating functions of the piecewise approximation, one forms the overdetermined system of linear equations c*a = f This program uses LA_Lone() for obtaining the L-One solution of overdetermined system of linear equations. LA_L1pw1() has the option of calculating connected or disconnected piecewise L1 approximation, according to the value of an integer parameter "konect". In the connected piecewise approximation the x-coordinate of the end point of segment j say, is the x-coordinate of the starting point of segment (j+1). In the disconnected piecewise approximation, the x-coordinate of the adjacent point to the end point of segment j is the x-coordinate of the starting point of segment (j+1). See the comments on "ixl" below. Inputs m Number of terms in the approximating functions. n Number of points to be piecewise approximated. ct An m by n real matrix containing the transpose of matrix "c" of the system c*a = f. Matrix "ct" is not destroyed in the computation. f An real n vector containing the r.h.s. of the system c*a = f. This vector contains the y-coordinates of the given point set. This vector is not destroyed in the computation. enorm A real pre-assigned parameter, such that the L-One residual norm for any segment is <= enorm. konect An integer specifying the action to be performed. If konect = 1, the program calculates the connected L-One
© 2008 by Taylor & Francis Group, LLC
290
Numerical Linear Approximation in C piecewise approximation. If konect != 1, the program calculates the disconnected L-One piecewise approximation.
Outputs npiece Obtained number of segments or pieces of the approximation. ixl An integer "npiece" vector containing the indices of the first elements of the "npiece" segments. For example, if ixl = (1,5,12,22,...), and if konect = 1, then the first segment contains points 1 to 5, the second segment contains points 5 to 12, the third segment contains points 12 to 22 and so on. Again if ixl = (1,5,12,22,...), and if konect !=1, then the first segment contains points 1 to 4, the second segment contains points 5 to 11, the third segment contains points 12 to 21 ..., etc. ap A real "npiece" by m matrix. Its first row contains the the coefficients of the approximating curve for the first segment. The second row contains the coefficients of the approximating curve for the second segment and so on. If any row j is all zeros, this indicates that vector "a" of segment j is not calculated as the number of points of segment j is <= m and there is a perfect fit by the approximating curve for segment j. rp1 A real "npiece" by n matrix. Its first row contains the residuals for the points of the first segment. Its second row contains the residuals for the points of the second segment, and so on. zp A real "npiece" vector containing the "npiece" optimum values of the L-One residual norms for the "npiece" segments. If zp[j] == 0.0, it indicates that there is a perfect fit for segment j. Returns one of LaRcSolutionFound LaRcErrBounds LaRcErrNullPtr LaRcErrAlloc -------------------------------------------------------------------*/ #include "LA_Prototypes.h" eLaRc LA_L1pw1 (int m, int n, tNumber_R enorm, int konect, tMatrix_R ct, tVector_R f, tVector_I ixl, tVector_I irankp, tMatrix_R rp1, tMatrix_R ap, tVector_R zp, int *pNpiece) © 2008 by Taylor & Francis Group, LLC
{
    tMatrix_R ctp = alloc_Matrix_R (m, n);
    tVector_R fp  = alloc_Vector_R (n);
    tVector_R r   = alloc_Vector_R (n);
    tVector_R a   = alloc_Vector_R (m);

    int       i = 0, j = 0, je = 0, ji = 0, jj = 0, is = 0, ie = 0,
              nu = 0, irank = 0;
    int       ijk = 0, iter = 0;
    tNumber_R z = 0.0;
    /* Validation of the data before executing the algorithm */
    eLaRc rc = LaRcSolutionFound;
    VALIDATE_BOUNDS ((0 < m) && (m <= n) && !((n == 1) && (m == 1))
        && (0.0 < enorm));
    VALIDATE_PTRS (ct && f && ixl && irankp && rp1 && ap && zp
        && pNpiece);
    VALIDATE_ALLOC (ctp && fp && r && a);

    *pNpiece = 1;
    /* "is" means i(start) for the segment at hand */
    is = 1;

    for (ijk = 1; ijk <= n; ijk++)
    {
        /* Initializing the data for "npiece" */
        ie = is + m - 1;
        LA_pw1_init (pNpiece, is, ie, m, rp1, ap, zp);

        irank = m;
        z = 0;
        for (j = is; j <= ie; j++)
        {
            ji = j - is + 1;
            fp[ji] = f[j];
            for (i = 1; i <= m; i++)
            {
                ctp[i][ji] = ct[i][j];
            }
        }
        for (jj = 1; jj <= n; jj++)
        {
            nu = ie - is + 1;
            rc = LA_Lone (m, nu, ctp, fp, &irank, &iter, r, a, &z);
            if (rc < LaRcOk)
            {
                GOTO_CLEANUP_RC (rc);
            }

            if (z > enorm + EPS)
            {
                break;
            }
            else
            {
                LA_pw1_map (m, nu, r, a, z, rp1, ap, zp, pNpiece);
                ixl[*pNpiece] = is;

                je = ie + 1;
                if (je > n)
                {
                    GOTO_CLEANUP_RC (LaRcSolutionFound);
                }
                ie = je;
                nu = ie - is + 1;
                for (j = is; j <= ie; j++)
                {
                    ji = j - is + 1;
                    fp[ji] = f[j];
                    for (i = 1; i <= m; i++)
                    {
                        ctp[i][ji] = ct[i][j];
                    }
                }
            }
        }
        is = ie;
        if (konect == 1) is = ie - 1;

        *pNpiece = *pNpiece + 1;
        ixl[*pNpiece] = is;
    }
CLEANUP:
    free_Matrix_R (ctp, m);
    free_Vector_R (fp);
    free_Vector_R (r);
    free_Vector_R (a);

    return rc;
}
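As an illustration only (not part of the book's library), the following sketch shows a typical call to LA_L1pw1(). The data and workspace sizes are hypothetical; the allocation helpers and return-code idiom are those used by the drivers in this book. Since "npiece" is not known beforehand, the output arrays are allocated generously with n rows.

#include "DR_Defs.h"
#include "LA_Prototypes.h"

void example_L1pw1 (void)
{
    int       m = 2, n = 8, npiece = 0, j;
    tNumber_R enorm = 0.5;                /* L1 norm tolerance */
    tMatrix_R ct     = alloc_Matrix_R (m, n);
    tVector_R f      = alloc_Vector_R (n);
    tVector_I ixl    = alloc_Vector_I (n + 1);
    tVector_I irankp = alloc_Vector_I (n);
    tMatrix_R rp1    = alloc_Matrix_R (n, n);
    tMatrix_R ap     = alloc_Matrix_R (n, m);
    tVector_R zp     = alloc_Vector_R (n);

    /* straight-line pieces y = a1 + a2*x; hypothetical sample data,
       a ramp up followed by a ramp down */
    for (j = 1; j <= n; j++)
    {
        ct[1][j] = 1.0;
        ct[2][j] = j;
        f[j] = (j <= 4) ? 1.0 + 0.1*j : 5.0 - 0.1*j;
    }

    /* konect = 1 requests the connected piecewise approximation */
    if (LA_L1pw1 (m, n, enorm, 1, ct, f, ixl, irankp, rp1, ap, zp,
        &npiece) >= LaRcOk)
    {
        /* ixl[1..npiece] now holds each segment's starting index,
           ap[k][1..m] the coefficients of segment k, and zp[k] its
           L1 residual norm (each <= enorm) */
    }

    free_Matrix_R (ct, m);
    free_Vector_R (f);
    free_Vector_I (ixl);
    free_Vector_I (irankp);
    free_Matrix_R (rp1, n);
    free_Matrix_R (ap, n);
    free_Vector_R (zp);
}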
9.8  DR_L1pw2
/*------------------------------------------------------------------DR_L1pw2 --------------------------------------------------------------------This is a driver for the function LA_L1pw2() which calculates the "near balanced" piecewise linear L-One approximation of a given data point set (x,y) resulting from the discretization of a plane curve y = f(x). Given is an integer number "npiece" which is the number of segments (pieces) in the approximation. The approximation by LA_L1pw2() is such that the L-One residual norms for all segments are nearly equal, hence the name "near balanced" piecewise approximation. From the approximating curves we form the overdetermined system of linear equations c*a = f "c" is a real n by m matrix of rank k, k <= m < n. n is the number of digitized points of the given plane curve. m is the number of terms in the approximating curves. If for example, the piecewise approximating curves are vertical parabolas of the form y = a1 + a2*x + a3*x*x then m = 3. "f" is a real n vector whose elements are the y coordinates of the data set {x,y}. "a" is the solution m vector. There are different "a" solution vectors for the different segments. This driver contains 1 test example. A given curve is digitized into 28 points at equal x intervals. The points are piecewise approximated by vertical parabolas of the form y = a1 + a2*x + a3*x*x The results for piecewise L-One approximation are given in the text. -------------------------------------------------------------------*/
#include "DR_Defs.h"
#include "LA_Prototypes.h"

#define Nc    28
void DR_L1pw2 (void)
{
    /*----------------------------------------
      Constant matrices/vectors
     ----------------------------------------*/
    static tNumber_R fc[Nc+1] =
    {
        NIL,
        8.0,  11.0, 13.0, 11.2, 9.1,  10.8, 14.8, 16.0, 15.1, 14.0,
        14.7, 15.8, 16.8, 15.6, 13.0, 14.3, 13.8, 10.6, 9.3,  9.6,
        10.8, 11.2, 9.0,  7.0,  5.8,  6.8,  4.8,  3.9
    };

    /*----------------------------------------
      Variable matrices/vectors
     ----------------------------------------*/
    tMatrix_R ct  = alloc_Matrix_R (MM_COLS, NN_ROWS);
    tVector_R rp2 = alloc_Vector_R (NN_ROWS);
    tMatrix_R ap  = alloc_Matrix_R (KK_PIECES, MM_COLS);
    tVector_R f   = alloc_Vector_R (NN_ROWS);
    tVector_R zp  = alloc_Vector_R (KK_PIECES);
    tVector_I ixl = alloc_Vector_I (KK_PIECES);

    int   j, m, n;
    int   Iexmpl, npiece;
    eLaRc prnRc;
    eLaRc rc = LaRcOk;
    prn_dr_bnr ("DR_L1pw2, L1 Piecewise Approximation of a Plane "
        "Curve with Near Equal Residual Norms");

    for (Iexmpl = 1; Iexmpl <= 1; Iexmpl++)
    {
        switch (Iexmpl)
        {
            case 1:
                npiece = 4;
                n = Nc;
                m = 3;
                for (j = 1; j <= n; j++)
                {
                    f[j] = fc[j];
                    ct[1][j] = 1.0;
                    ct[2][j] = j;
                    ct[3][j] = j*j;
                }
                break;
            default:
                break;
        }

        prn_algo_bnr ("L1pw2");
        prn_example_delim();
        PRN ("Size of matrix \"c\" %d by %d\n", n, m);
        prn_example_delim();
        PRN ("L1 Piecewise Approximation with Near Equal Norms\n");
        PRN ("Given number of segments (pieces) = %d\n", npiece);
        prn_example_delim();
        PRN ("r.h.s. Vector \"f\"\n");
        prn_Vector_R (f, n);
        PRN ("Transpose of Coefficient Matrix, \"ct\"\n");
        prn_Matrix_R (ct, m, n);

        rc = LA_L1pw2 (m, n, npiece, ct, f, ap, rp2, zp, ixl);

        if (rc >= LaRcOk)
        {
            PRN ("\n");
            PRN ("Results of the L1 Piecewise Approximation\n");
            PRN ("Starting points of the \"npiece\" segments\n");
            prn_Vector_I (ixl, npiece);
            PRN ("Coefficients of the \"npiece\" approximating "
                "curves\n");
            prn_Matrix_R (ap, npiece, m);
            PRN ("Residuals at the given points\n");
            prnRc = LA_pw2_prn_rp2 (npiece, n, ixl, rp2);
            PRN ("L1 residual norms for the \"npiece\" segments\n");
            prn_Vector_R (zp, npiece);
            if (prnRc < LaRcOk)
            {
                PRN ("Error printing PW2 results: ");
            }
        }
        prn_la_rc (rc);
    }

    free_Matrix_R (ct, MM_COLS);
    free_Vector_R (rp2);
    free_Matrix_R (ap, KK_PIECES);
    free_Vector_R (f);
    free_Vector_R (zp);
    free_Vector_I (ixl);
}
9.9  LA_L1pw2
/*-------------------------------------------------------------------
 LA_L1pw2
---------------------------------------------------------------------
 This program calculates the "near balanced" piecewise linear L1
 (L-One) approximation of a given data point set {x,y} resulting
 from the discretization of a plane curve y = f(x). Given is an
 integer number "npiece" which is the number of segments in the
 approximation.

 The approximation by LA_L1pw2() is such that the L1 residual norms
 for all segments are nearly equal, hence the name "near balanced"
 piecewise approximation. From the approximating functions (curves)
 one forms the overdetermined system of linear equations
                          c*a = f

 Inputs
  npiece Given number of segments (pieces) of the approximation.
  m      Number of terms in the approximating curves.
  n      Number of points to be piecewise approximated.
  ct     A real m by n matrix containing the transpose of matrix "c"
         of the system c*a = f. This matrix is not destroyed in the
         computation.
  f      A real n vector containing the r.h.s. of the system
         c*a = f. This vector contains the y-coordinates of the
         given point set. This vector is not destroyed in the
         computation.

 Outputs
  ixl    An integer "npiece" vector containing the indices of the
         first elements of the "npiece" segments. For example, if
         ixl = (1,5,12,22,...), then the first segment contains
         points 1 to 4, the second segment contains points 5 to 11,
         the third segment contains points 12 to 21, etc.
  ap     A real "npiece" by m matrix. Its first row contains the
         coefficients of the approximating curve for the first
         segment. The second row contains the coefficients of the
         approximating curve for the second segment, and so on.
  rp2    A real n vector containing the residual values of the n
         points of the given set {x,y}.
  zp     A real "npiece" vector containing the optimum L-One
         residual norms for the "npiece" segments.
 Returns one of
  LaRcSolutionFound
  LaRcErrBounds
  LaRcErrNullPtr
  LaRcErrAlloc
-------------------------------------------------------------------*/

#include "LA_Prototypes.h"

eLaRc LA_L1pw2 (int m, int n, int npiece, tMatrix_R ct, tVector_R f,
    tMatrix_R ap, tVector_R rp2, tVector_R zp, tVector_I ixl)
{
    tVector_R al     = alloc_Vector_R (m);
    tVector_R rl     = alloc_Vector_R (n);
    tVector_R bv     = alloc_Vector_R (m);
    tMatrix_R binv   = alloc_Matrix_R (m, m);
    tVector_R th     = alloc_Vector_R (n);
    tVector_I icbas  = alloc_Vector_I (m);
    tVector_I irbas  = alloc_Vector_I (m);
    tVector_I ibound = alloc_Vector_I (n);
    tVector_R ar     = alloc_Vector_R (m);
    tVector_R rr     = alloc_Vector_R (n);
    tMatrix_R ctp    = alloc_Matrix_R (m, n);
    tVector_R fp     = alloc_Vector_R (n);
    tVector_I iflag  = alloc_Vector_I (npiece);

    int       i = 0, j = 0, k = 0, is = 0, ie = 0, ji = 0, nu = 0,
              ibc = 0, kl = 0, klp1 = 0;
    int       iend = 0, iter = 0, irank = 0, kase = 0;
    int       ijk = 0, isl = 0, isr = 0, isrn = 0, iel = 0,
              ieln = 0, ier = 0;
    int       ipcp1 = 0, istart = 0;
    tNumber_R zl = 0.0, zln = 0.0, zrn = 0.0;
    tNumber_R con = 0.0, conn = 0.0;
    /* Validation of the data before executing the algorithm */
    eLaRc rc = LaRcSolutionFound;
    VALIDATE_BOUNDS ((0 < m) && (m <= n) && !((n == 1) && (m == 1))
        && (1 < npiece));
    VALIDATE_PTRS (ct && f && ap && rp2 && zp && ixl);
    VALIDATE_ALLOC (al && rl && bv && binv && th && icbas && irbas
        && ibound && ar && rr && ctp && fp && iflag);

    kase = 0;
    for (k = 1; k <= npiece; k++)
    {
        iflag[k] = 1;

        /* Initializing L1pw2 */
        LA_pw2_init (k, npiece, m, n, &is, &ie, ct, f, ctp, fp, ixl);

        /* Calculating the L1 approximations of the "npiece"
           segments */
        nu = ie - is + 1;
        rc = LA_Lone (m, nu, ctp, fp, &irank, &iter, rl, al, &zl);
        if (rc < LaRcOk)
        {
            GOTO_CLEANUP_RC (rc);
        }
        zp[k] = zl;

        /* Mapping initial data for the "npiece" segments */
        for (i = 1; i <= m; i++)
        {
            ap[k][i] = al[i];
        }
        for (j = is; j <= ie; j++)
        {
            ji = j - is + 1;
            rp2[j] = rl[ji];
        }
    }

    /*---- Process of balancing the L-One norms ----*/
    istart = 1;
    iend = npiece - 1;
    ipcp1 = npiece + 1;
    ixl[ipcp1] = n + 1;

    for (ijk = 1; ijk < n*n; ijk++)
    {
        for (kl = istart; kl <= iend; kl = kl + 2)
        {
            klp1 = kl + 1;
            con = fabs (zp[klp1] - zp[kl]);
            isl = ixl[kl];
            isr = ixl[klp1];
            iel = isr - 1;
            if (kl < iend)  ier = ixl[kl + 2] - 1;
            if (kl == iend) ier = n;

            /* The case where z[i] < z[i+1].  (This branch was lost
               to a page-extraction gap; it is reconstructed here by
               symmetry with the z[i] > z[i+1] branch below: the
               common boundary moves one point to the right.) */
            if (zp[kl] < zp[klp1])
            {
                isrn = isr + 1;
                ieln = isrn - 1;
                nu = ieln - isl + 1;
                for (j = isl; j <= ieln; j++)
                {
                    ji = j - isl + 1;
                    fp[ji] = f[j];
                    for (i = 1; i <= m; i++)
                    {
                        ctp[i][ji] = ct[i][j];
                    }
                }
                rc = LA_Lone (m, nu, ctp, fp, &irank, &iter, rl,
                    al, &zln);
                if (rc < LaRcOk)
                {
                    GOTO_CLEANUP_RC (rc);
                }
                nu = ier - isrn + 1;
                for (j = isrn; j <= ier; j++)
                {
                    ji = j - isrn + 1;
                    fp[ji] = f[j];
                    for (i = 1; i <= m; i++)
                    {
                        ctp[i][ji] = ct[i][j];
                    }
                }
                rc = LA_Lone (m, nu, ctp, fp, &irank, &iter, rr,
                    ar, &zrn);
                if (rc < LaRcOk)
                {
                    GOTO_CLEANUP_RC (rc);
                }
                conn = fabs (zrn - zln);
                iflag[kl] = 1;
                iflag[klp1] = 1;
                if (conn > con)
                {
                    iflag[kl] = 0;
                    iflag[klp1] = 0;
                    continue;
                }
            }
            /* The case where z[i] > z[i+1] */
            else if (zp[kl] > zp[klp1])
            {
                isrn = isr - 1;
                ieln = isrn - 1;
                nu = ieln - isl + 1;
                for (j = isl; j <= ieln; j++)
                {
                    ji = j - isl + 1;
                    fp[ji] = f[j];
                    for (i = 1; i <= m; i++)
                    {
                        ctp[i][ji] = ct[i][j];
                    }
                }
                rc = LA_Lone (m, nu, ctp, fp, &irank, &iter, rl,
                    al, &zln);
                if (rc < LaRcOk)
                {
                    GOTO_CLEANUP_RC (rc);
                }
                nu = ier - isrn + 1;
                for (j = isrn; j <= ier; j++)
                {
                    ji = j - isrn + 1;
                    fp[ji] = f[j];
                    for (i = 1; i <= m; i++)
                    {
                        ctp[i][ji] = ct[i][j];
                    }
                }
                rc = LA_Lone (m, nu, ctp, fp, &irank, &iter, rr,
                    ar, &zrn);
                if (rc < LaRcOk)
                {
                    GOTO_CLEANUP_RC (rc);
                }
                conn = fabs (zrn - zln);
                iflag[kl] = 1;
                iflag[klp1] = 1;
                if (conn > con)
                {
                    iflag[kl] = 0;
                    iflag[klp1] = 0;
                    continue;
                }
            }

            for (j = isl; j <= ieln; j++)
            {
                ji = j - isl + 1;
                rp2[j] = rl[ji];
            }
            for (j = isrn; j <= ier; j++)
            {
                ji = j - isrn + 1;
                rp2[j] = rr[ji];
            }
            for (i = 1; i <= m; i++)
            {
                ap[kl][i] = al[i];
                ap[klp1][i] = ar[i];
            }
            zp[kl] = zln;
            zp[klp1] = zrn;
            ixl[klp1] = isrn;
            isr = isrn;
            iel = isr - 1;
        }

        is = 2;
        if (istart == 2) is = 1;
        istart = is;

        ibc = 0;
        for (j = 1; j <= npiece; j++)
        {
            if (iflag[j] != 0) ibc = 1;
        }
        if (ibc == 0)
        {
            GOTO_CLEANUP_RC (LaRcSolutionFound);
        }
    }
CLEANUP:
    free_Vector_R (al);
    free_Vector_R (rl);
    free_Vector_R (bv);
    free_Matrix_R (binv, m);
    free_Vector_R (th);
    free_Vector_I (icbas);
    free_Vector_I (irbas);
    free_Vector_I (ibound);
    free_Vector_R (ar);
    free_Vector_R (rr);
    free_Matrix_R (ctp, m);
    free_Vector_R (fp);
    free_Vector_I (iflag);

    return rc;
}
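To make the balancing strategy concrete, here is a small standalone sketch (not the book's code; it is a simplified stand-in in which each segment is fitted by a constant in the L1 sense, i.e., m = 1 and the fit is the median, instead of calling LA_Lone()). It moves the common boundary of two segments one point at a time toward the segment with the larger L1 norm, and keeps a move only while the gap between the two norms shrinks, mirroring the accept/reject test on "conn" and "con" above.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static int cmp_d (const void *a, const void *b)
{
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

/* L1 norm of the best constant (median) fit to y[lo..hi], 0-based */
static double seg_norm (const double *y, int lo, int hi)
{
    int n = hi - lo + 1, i;
    double s = 0.0, med, *w = malloc (n * sizeof *w);
    for (i = 0; i < n; i++) w[i] = y[lo + i];
    qsort (w, n, sizeof *w, cmp_d);
    med = w[n / 2];
    for (i = 0; i < n; i++) s += fabs (w[i] - med);
    free (w);
    return s;
}

int main (void)
{
    /* hypothetical data with a jump at index 4 */
    double y[12] = { 1, 1, 2, 1, 9, 9, 8, 9, 9, 8, 9, 9 };
    int b = 2;        /* boundary: left = [0..b-1], right = [b..11] */
    double zl = seg_norm (y, 0, b - 1), zr = seg_norm (y, b, 11);

    for (;;)
    {
        int bn = (zl < zr) ? b + 1 : b - 1;  /* toward larger norm */
        double zln, zrn;
        if (bn < 1 || bn > 11) break;
        zln = seg_norm (y, 0, bn - 1);
        zrn = seg_norm (y, bn, 11);
        if (fabs (zrn - zln) >= fabs (zr - zl)) break; /* reject */
        b = bn; zl = zln; zr = zrn;                    /* accept */
    }
    /* ends with the boundary at the jump: b = 4, zl = 1, zr = 2 */
    printf ("boundary = %d, zl = %g, zr = %g\n", b, zl, zr);
    return 0;
}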
PART 3

The Chebyshev Approximation

Chapter 10    Linear Chebyshev Approximation
Chapter 11    One-Sided Chebyshev Approximation
Chapter 12    Chebyshev Approximation with Bounded Variables
Chapter 13    Restricted Chebyshev Approximation
Chapter 14    Strict Chebyshev Approximation
Chapter 15    Piecewise Chebyshev Approximation
Chapter 16    Solution of Linear Inequalities
Chapter 10 Linear Chebyshev Approximation
10.1  Introduction
In this chapter and the following four chapters, 5 algorithms for the discrete linear Chebyshev (L∞ or minimax) approximation are described. In this chapter, we consider the first of these algorithms, the (ordinary) linear Chebyshev approximation, which requires that the Chebyshev or L∞ norm of the residuals (errors) in the approximation be as small as possible [1, 2]. In real life, it is important to know the largest error between the approximation of a given function and the true value of the function itself. Hence, it is called the minimax approximation.
Consider the overdetermined system of linear equations

(10.1.1)        Ca = f

C = (cij) is a given real n by m matrix of rank k, k ≤ m < n, and f = (fi) is a given real n-vector. The Chebyshev solution of system (10.1.1) is the m-vector a = (aj) that minimizes the Chebyshev or the L∞ norm z

(10.1.2)        z = maxi |ri|

where ri is the ith residual in (10.1.1), given by

(10.1.3)        ri = ∑j=1..m cij aj – fi ,    i = 1, 2, …, n
The first algorithm we are aware of for solving the Chebyshev approximation problem is that of Stiefel [19], for the case when matrix C in the system Ca = f satisfies the Haar condition (every m by m sub-matrix of C is nonsingular). This algorithm is known as the
exchange algorithm. It is also known by Cheney ([10], p. 45) as the ascent method. Later, both Stiefel [20] and Osborne and Watson [15] showed that the exchange algorithm is exactly equivalent to a simplex algorithm of a linear programming problem. Barrodale and Phillips [6, 7] implemented a simplex algorithm for obtaining the Chebyshev solution of Ca = f using the dual form of the linear programming formulation of the problem.
Among the other methods that solve the linear Chebyshev approximation problem is that of Bartels and Golub [8, 9]. They presented a numerically stable version of Stiefel's exchange algorithm [20] and used an LU decomposition technique for the basis matrix, together with an iterative improvement. In their method, matrix C has to be of full rank.
Narula and Wellington [14] developed one algorithm that solves both the L1 approximation problem (minimum sum of absolute errors (MSAE) regression) and the Chebyshev approximation problem (minimization of the maximum absolute error (MMAE) regression). Their algorithm exploits the similarities between the two problems.
Armstrong and Sklar [5] used the revised simplex method with multiple pivoting to solve the dual form of the linear programming problem. They also used an LU decomposition technique, and in their method matrix C is a full rank matrix. To advance the starting basis in the linear programming solution for the Chebyshev approximation, Sklar and Armstrong [17] solved Ca = f first in the L2 (least squares) sense. Then, using the characterization theorem (Section 10.1.1), the initial basis in the linear programming problem corresponds to the (m + 1) largest residuals in absolute value of the L2 solution. This method would be advantageous when the least squares estimate is already available.
Coleman and Li [11] presented a new iterative quadratically convergent algorithm for solving the linear Chebyshev approximation problem. In their method, the required number of iterations to solve the problem is insensitive to the problem size, and being quadratically convergent, a solution can be obtained with high accuracy. However, in their method, at each iteration, a weighted least squares problem is solved. They showed that their method converges in fewer steps than that of Barrodale and Phillips [7]. Pinar and Elhedhli [16] presented an algorithm based on the application of quadratic penalty functions to a primal linear
programming formulation of the Chebyshev approximation problem. Comparisons were made between the Barrodale and Phillips [7] and the predictor-corrector primal-dual interior point algorithms.
We obtain here the Chebyshev solution of the overdetermined system of linear equations Ca = f using the dual form of the linear programming formulation of the problem. Our method differs in significant points from that of Barrodale and Phillips [6], as explained in Section 10.5. However, like [6], our algorithm needs minimum computer storage and imposes no conditions on the coefficient matrix, such as the Haar condition or the full rank condition.
The algorithm presented here is in 3 parts. In part 1, a numerically stable initial basic solution is obtained. The initial basic solution may be feasible or not. Also, if C is not of full rank, the rank of matrix C is determined. In part 2, if the initial basic solution is not feasible, it is made feasible by applying a number of Gauss-Jordan elimination steps. To end part 2, the marginal costs as well as the objective function z are calculated. The objective function z may not be > 0. If z < 0, it is made positive with a small effort. Part 3 consists of a modified simplex algorithm that makes use of the kind of asymmetry of the simplex tableau.
In Section 10.1.1, the characterization of the Chebyshev solution is outlined. In Section 3.8.2, we outlined the formulation of the Chebyshev problem as a linear programming problem. For the sake of completeness, we present this formulation again in Section 10.2. In Section 10.3, the algorithm is described and a numerical example is solved in detail. In Section 10.4, a significant property of the Chebyshev approximation is described. In Section 10.5, numerical results and comments are given.
10.1.1  Characterization of the Chebyshev solution
For the characterization theorems of the residuals for an optimum solution, see, for example, Appa and Smith [3] and Watson [22]. The most important characteristic is described as follows. If matrix C in Ca = f has rank k, k ≤ m, then there are at least (k + 1) residuals having values ±z, where z is the optimum norm solution. See Sections 2.3.1 and 10.4.
10.2  Linear programming formulation of the problem
The Chebyshev solution problem of system Ca = f was first formulated as a linear programming problem in both the primal and the dual formats by Wagner [21]. Let h ≥ 0 be the value z = maxi|ri| in (10.1.2). The primal form of the linear programming problem is

    minimize h
    subject to ri ≤ h and ri ≥ –h,  i = 1, 2, …, n

From (10.1.3) the above two inequalities reduce to

    –h ≤ ∑j=1..m cij aj – fi ≤ h ,    i = 1, 2, …, n

In vector-matrix form, they become

    Ca + he ≥ f
    –Ca + he ≥ –f
    h ≥ 0 and aj, j = 1, 2, …, m, unrestricted in sign

where e is an n-vector, each element of which is 1. That is

(10.2.1)        minimize h
                subject to
                [ C  e] [a]  ≥  [ f]
                [–C  e] [h]     [–f]
                h ≥ 0 and aj, j = 1, 2, …, m, unrestricted in sign

It is much easier to solve the dual form of this formulation

(10.2.2a)       maximize z = [fT –fT]b

                subject to
(10.2.2b)       [CT  –CT] b  =  em+1
                [eT   eT]

(10.2.2c)       bi ≥ 0,  i = 1, 2, …, 2n
where em+1 is an (m + 1)-vector, which is the last column of an (m + 1)-unit matrix, eT is an n-row vector whose elements are 1's, and the 2n-vector b = (bi). For simplicity of presentation, we write (10.2.2b) in the form

(10.2.2d)       Db = em+1
and let the columns of matrix D be denoted by Dj, j = 1, 2, …, 2n. To simplify the analysis, assume that rank(C) = m and thus rank(D) = (m + 1).
We construct a simplex tableau for problem (10.2.2), which is for (m + 1) constraints in 2n variables. We call this the large tableau, because it has 2n variables. This is to distinguish it from the condensed tableaux of (m + 1) constraints in only n variables, described later.
Let B denote the basis matrix for problem (10.2.2). It would be of rank (m + 1). Let also (yi) be the columns forming the simplex tableau, (zj – fj) the marginal costs for column (yj), bB the basic solution and z the objective function. Let also the elements of the 2n-vector [fT –fT] of (10.2.2a) associated with the basic variables be the (m + 1)-vector fB. Then as usual, we have for j = 1, 2, …, 2n

(10.2.3a)       yj = B–1Dj
(10.2.3b)       (zj – fj) = fBTyj – fj
(10.2.3c)       bB = B–1em+1
(10.2.3d)       z = fBTbB
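In code, relations (10.2.3a, b) amount to a matrix-vector product followed by an inner product. The function below is a minimal sketch (plain 0-based C arrays rather than the book's tMatrix_R convention; binv holds B–1 row-major with mp1 = m + 1):

static double marginal_cost (int mp1, const double *binv,
    const double *dj, const double *fb, double fj, double *yj)
{
    int i, k;
    double zj = 0.0;
    for (i = 0; i < mp1; i++)
    {
        yj[i] = 0.0;
        for (k = 0; k < mp1; k++)
        {
            yj[i] += binv[i * mp1 + k] * dj[k];  /* yj = B^{-1} Dj */
        }
        zj += fb[i] * yj[i];                     /* zj = fB' yj    */
    }
    return zj - fj;                              /* marginal cost  */
}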
10.2.1  Property of the matrix of constraints
Note the kind of asymmetry in the right and left halves of matrix D in (10.2.2d). This kind of asymmetry enables us to use a condensed simplex tableau for this problem, as explained in Section 10.3. From (10.2.2b), we note that for j = 1, 2, …, n

(10.2.4)        Dj = [CjT]    and    Dj+n = [–CjT]
                     [ 1 ]                  [  1 ]

where CjT is column j of CT. We have CjT as part of vector Dj, and –CjT as part of vector Dj+n, for j = 1, 2, …, n. Hence, from (10.2.4)

(10.2.5)        Dj + Dj+n = 2em+1,  1 ≤ j ≤ n
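A short standalone check of (10.2.4) and (10.2.5) (illustrative only; 0-based plain C arrays, with a sample column CjT assumed):

#include <stdio.h>

#define M 2                         /* number of columns of C */

int main (void)
{
    double cj[M] = { 3.0, 1.0 };    /* a sample column CjT */
    double dj[M + 1], djn[M + 1];
    int i;

    for (i = 0; i < M; i++)
    {
        dj[i]  =  cj[i];            /* Dj    = ( CjT, 1) */
        djn[i] = -cj[i];            /* Dj+n  = (-CjT, 1) */
    }
    dj[M] = djn[M] = 1.0;           /* the appended row of ones */

    for (i = 0; i <= M; i++)
    {
        printf ("%g ", dj[i] + djn[i]);  /* prints 0 0 2 = 2*em+1 */
    }
    printf ("\n");
    return 0;
}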
Definition
Because there is a kind of asymmetry between the two halves of the matrix of constraints D in (10.2.2b), we define any column j, 1 ≤ j ≤ n, and the column (j + n) in this matrix as two corresponding columns. Lemmas 10.4 and 10.5 below give useful relations between any two corresponding columns in the simplex tableau and between their marginal costs. Thus we use a condensed simplex tableau having (m + 1) constraints in only n variables.

Lemma 10.1
At any stage of the computation, the basic solution vector bB equals the (m + 1)th column of B–1. This follows directly from (10.2.3c), since em+1 is the (m + 1)th column of an (m + 1)-unit matrix.

Lemma 10.2
The optimum solution a and z for problem (10.2.2) is given by

    BT (aT z)T = fB

or in effect

    (aT z) = fBTB–1

Proof:
To prove the first equation, we observe that the (m + 1) linearly independent rows of matrix BT are themselves (m + 1) rows from the l.h.s. matrix of (10.2.1), and that the (m + 1) elements of fB are the corresponding elements from the r.h.s. of (10.2.1). We also observe that when the optimum solution is reached, h = z and the inequalities in (10.2.1) become equalities. By taking the transpose of the first equation, the second equation follows.
Lemma 10.3
At any stage of the computation, the algebraic sum of the elements of the basic solution bB equals 1.
Proof:
Consider the last equation in the system (10.2.2b), which is

    (eT, eT)b = 1

Since the elements of the vector (eT, eT) are 1's, this inner product equals the sum of the elements of vector b. Then since the non-basic elements of b are all 0's, the lemma is proved. This lemma is useful in checking the simplex tableau in the process of the solution of the problem. See Tableaux 10.3.1-5 below.

Lemma 10.4
Any two corresponding columns, i and j, |i – j| = n, where 1 ≤ i, j ≤ 2n, should not appear together in any basis. See, for example, Lemma 4.3 in [15].

Lemma 10.5
For any two corresponding columns i and j in the simplex tableau

(10.2.6a)       yi + yj = 2bB

and

(10.2.6b)       (zi – fi) + (zj – fj) = 2z
Proof:
By multiplying (10.2.5) by B–1, from (10.2.3a, c), (10.2.6a) is proved. Also, from (10.2.3a-d) and the fact that fi = –fj, (10.2.6b) is proved.

Let us assume that we have obtained an initial basic solution that is not feasible; that is, one or more elements of the basic solution bB is negative. Consider the following lemma.

Lemma 10.6
Let there be one or more elements of the basic solution bB that are < 0. Let bBs, 1 ≤ s ≤ (m + 1), be one of those elements and let i be the basic column associated with bBs. Let j be the corresponding column to the basic column i, i.e., |i – j| = n. Then if column i is replaced by column j in the basis, the new element bBs(n) will be > 0. Moreover,
the elements of the new basic solution bBk(n), k ≠ s, each will keep the sign of its old value.
Proof:
Since we assume that column i is the basic column associated with bBs

(10.2.7)        yi = (0, …, 1, …, 0)T

where the 1 is in the s position. If now column j replaces column i in the basis, and we apply a Gauss-Jordan step to the simplex tableau, we get the following. The pivot to change the tableau is ysj. Then after the Gauss-Jordan step we get

(10.2.8a)       bBs(n) = bBs/ysj
(10.2.8b)       bBk(n) = bBk – (ykj/ysj)bBs,  k ≠ s

From (10.2.6a) and (10.2.7), ysj = 2bBs – 1 and ykj = 2bBk, k ≠ s. Then by substituting the values of ysj and ykj into (10.2.8b), it is easy to show that (10.2.8a, b) become

(10.2.9a)       bBs(n) = bBs/ysj
(10.2.9b)       bBk(n) = –bBk/ysj,  k ≠ s
As bBs is assumed negative, ysj = 2bBs – 1 is also negative. Thus in (10.2.8a), bBs(n) > 0. In (10.2.9b), the parameters bBk(n), k ≠ s, each has the same sign as its old value bBk. The lemma is thus proved.

Lemma 10.7
(a) Consider any basic solution, feasible or not. Then there correspond two basis matrices, denoted by B(1) and B(2), each giving the same basic solution. Every column in one has its corresponding column in the other basis, arranged in the same order.
(b) The corresponding two values of z are equal in magnitude but opposite in sign.
Proof:
Let B(1) be given in partitioned form as B(1) = (H/h), where H is an m by (m + 1) matrix and h is an (m + 1)-row vector. Then from (10.2.2b), and since no two corresponding columns appear together in any basis, B(2) = (–H/h). Hence, if B(1)–1 = (G | g), where G is an
(m + 1) by m matrix and g an (m + 1)-vector, then B(2)–1 = (–G | g). For B(1), from (10.2.3c), the basic solution is bB(1) = (G | g)em+1 = g. For B(2), the basic solution is bB(2) = (–G | g)em+1 = g, which proves part (a) of the lemma.
To prove part (b) of the lemma, if fB(1) is associated with B(1), then from (10.2.3b), –fB(1) would be associated with B(2). For B(1), z(1) = fB(1)Tg, and for B(2), z(2) = –fB(1)Tg = –z(1), which proves (b) of the lemma.

Lemma 10.8
Let us use (10.2.3a, b) to construct two simplex tableaux T(1) and T(2) that correspond respectively to the bases B(1) and B(2) defined in Lemma 10.7. Let also j be the corresponding column to column i, where 1 ≤ i, j ≤ 2n. Then we have the following: (1) yi in T(1) equals yj in T(2), and (2) the marginal cost (zi – fi) in T(1) = –(zj – fj) in T(2).
Proof:
For T(1), B(1)–1 = (G | g), and for T(2), B(2)–1 = (–G | g). From (10.2.4) and (10.2.3a), for T(1)

    yi = GCiT + g

and for T(2)

    yj = GCiT + g = yi

which proves the first part of the lemma. The second part is established by (10.2.3b), the above equation and the facts that fB(2) = –fB(1) and fj = –fi.
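These lemmas are exactly what the software exploits. Below is a minimal sketch of the corresponding-column replacement, written with the book's types (compare the identical update ct[k][jin] = bv[k] + bv[k] - ct[k][jin] inside LA_linf_part_2() later in this chapter; kl..m+1 is the active row range of the condensed tableau):

/* Replace column "jin" of the condensed tableau by its corresponding
   column, using y_corr = 2*bB - y (10.2.6a), and flip the sign of
   its cost coefficient and of its +1/-1 indicator.  A Gauss-Jordan
   step on column "jin" then makes the negative basic variable
   positive (Lemma 10.6). */
static void swap_to_corresponding (int kl, int m, int jin,
    tMatrix_R ct, tVector_R bv, tVector_R f, tVector_I ibound)
{
    int k;
    for (k = kl; k <= m + 1; k++)
    {
        ct[k][jin] = bv[k] + bv[k] - ct[k][jin];  /* y := 2*bB - y */
    }
    f[jin] = -f[jin];                  /* f_{j+n} = -f_j           */
    ibound[jin] = -ibound[jin];        /* flip the indicator entry */
}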
10.3  Description of the algorithm
We construct a condensed simplex tableau for problem (10.2.2), for (m + 1) constraints in only n variables. An n-index indicator vector whose elements are +1 or –1 is needed. If column i, 1 ≤ i ≤ n, is in the condensed tableau, index i of this vector has the value +1. If the corresponding column to column i is in the condensed tableau, index i has the value –1.
The algorithm is in 3 parts. In part 1, we obtain an initial basic solution, feasible or not. This is
simply done by performing a finite number of Gauss-Jordan elimination steps on the initial data. We use partial pivoting, where the pivot is the largest element in absolute value in row i of tableau (i – 1), i = 1, 2, …, m+1. Tableau 0 denotes the initial data. If rank(C) = k < m, a row (or more) of CT is linearly dependent on one or more of the preceding rows; a zero row in the tableau is then obtained and is discarded from the following tableaux. An exception from this rule is the last row in the tableau. The simplex tableau then has (k + 1) constraints in n variables.
In part 2 of the algorithm, we examine the elements of the basic solution bB. If one or more elements in bB is < 0, the initial basic solution is not feasible. For each basic column i associated with a negative element of bB, we apply Lemma 10.6. We make use of (10.2.6a) and replace the column yi by its corresponding column yj. We reverse the sign of the element in vector fB associated with this column. According to Lemma 10.6, by applying a Gauss-Jordan step after each replacement, the new basic solution will be feasible. The calculation of the marginal costs (zi – fi) follows from (10.2.3b), and the objective function z from (10.2.3d). If z < 0, we use Lemmas 10.7 and 10.8, and replace the columns of the basis matrix by their corresponding columns. We mainly keep the simplex tableau unchanged, except for the fi and fB values, the marginal costs and z; such parameters have their signs reversed. We now have a basic feasible solution and z > 0. This ends part 2 of the algorithm.
Part 3 is the ordinary simplex algorithm. The difference is in the choice of the non-basic column that enters the basis. It has the most negative marginal cost among the non-basic columns in the condensed tableau and their corresponding columns. Relation (10.2.6b) is used for calculating the marginal costs of the corresponding columns.
Consider the following numerical example.

Example 10.1
This example is, in effect, Example 5.1 solved in Chapter 5 for the L1 approximation. Matrix C is a 5 by 2 matrix of rank 2. Obtain the Chebyshev solution for the system
(10.3.1)         a1 +  a2 =  –3
                 a1 –  a2 =  –1
                 a1 + 2a2 =  –7
                2a1 + 4a2 = –11.1
                3a1 +  a2 =  –7.2
The initial data and the condensed tableaux are shown for this example. In the initial data and in Tableaux 10.3.1 and 10.3.2, the pivot is chosen as the largest element in absolute value in row i of tableau (i – 1). The pivot element in each tableau is bracketed.

 Initial Data
 fB      B    bB   |  fT:   –3     –1     –7    –11.1   –7.2
                   |        D1     D2     D3     D4      D5
 ------------------------------------------------------------
              0    |         1      1      1      2     (3)
              0    |         1     –1      2      4      1
              1    |         1      1      1      1      1
 ------------------------------------------------------------
 Tableau 10.3.1 (part 1)
 fB      B    bB   |  fT:   –3     –1     –7    –11.1   –7.2
                   |        D1     D2     D3     D4      D5
 ------------------------------------------------------------
 –7.2    D5   0    |        1/3    1/3    1/3    2/3     1
              0    |        2/3   –4/3    5/3  (10/3)    0
              1    |        2/3    2/3    2/3    1/3     0
 ------------------------------------------------------------

 Tableau 10.3.2
 fB      B    bB   |  fT:   –3     –1     –7    –11.1   –7.2
                   |        D1     D2     D3     D4      D5
 ------------------------------------------------------------
 –7.2    D5   0    |        0.2    0.6    0      0       1
 –11.1   D4   0    |        0.2   –0.4    0.5    1       0
              1    |        0.6   (0.8)   0.5    0       0
 ------------------------------------------------------------
Tableau 10.3.3 gives an initial basic solution bB that is not feasible,
since the first element of bB is < 0. We make use of Lemma 10.6. Hence, y5, which is associated with this element, is replaced by its corresponding vector y10, and f5 is replaced by f10 = –f5. From (10.2.6a), y10 = 2bB – y5, or y10 = (–5/2, 1, 5/2)T. A Gauss-Jordan step is performed, giving Tableau 10.3.4. In this tableau the marginal costs and z are also calculated. Though the new basic solution bB is feasible, z < 0. We use Lemma 10.7 and get Tableau 10.3.4*, where z > 0.

 Tableau 10.3.3
 fB      B    bB   |  fT:   –3     –1     –7    –11.1   –7.2
                   |        D1     D2     D3     D4      D5
 ------------------------------------------------------------
 –7.2    D5  –3/4  |       –1/4    0     –3/8    0       1
 –11.1   D4   1/2  |        1/2    0      3/4    1       0
 –1      D2   5/4  |        3/4    1      5/8    0       0
 ------------------------------------------------------------
 Tableau 10.3.4 (part 2)
 fB      B     bB  |  fT:   –3     –1     –7    –11.1   –7.2
                   |        D1     D2     D3     D4      D10
 ------------------------------------------------------------
  7.2    D10   0.3 |        0.1    0      3/20   0       1
 –11.1   D4    0.2 |        0.4    0      3/5    1       0
 –1      D2    0.5 |        0.5    1      1/4    0       0
 ------------------------------------------------------------
 z = –0.56         |       –1.22   0      1.17   0       0

 Tableau 10.3.4*
 fB      B     bB  |  fT:   –3     –1     –7    –11.1   –7.2
                   |        D6     D7     D8     D9      D5
 ------------------------------------------------------------
 –7.2    D5    0.3 |        0.1    0      3/20   0       1
  11.1   D9    0.2 |        0.4    0     (3/5)   1       0
  1      D7    0.5 |        0.5    1      1/4    0       0
 ------------------------------------------------------------
 z = 0.56          |        1.22   0     –1.17   0       0
In Tableau 10.3.4* and its corresponding part, (z8 – f8) is the most negative marginal cost, and D8 replaces D9 in the basis, which gives Tableau 10.3.5. In each of Tableau 10.3.5 and its corresponding part, we search for the most negative marginal cost. We find that (z6 – f6) is the most negative marginal cost, which from (10.2.6b) equals 2z – (z1 – f1) = –0.1. Hence, in Tableau 10.3.5, D1 replaces D6 and a Gauss-Jordan step is performed, giving Tableau 10.3.6, which gives the optimum Chebyshev norm with z = 1. Note that in all the tableaux, the algebraic sum of the elements of bB equals 1 (Lemma 10.3).

 Tableau 10.3.5 (part 3)
 fB      B    bB   |  fT:   –3     –1     –7    –11.1   –7.2
                   |        D6     D7     D8     D9      D5
 ------------------------------------------------------------
 –7.2    D5   1/4  |        0      0      0     –1/4     1
  7      D8   1/3  |        2/3    0      1      5/3     0
  1      D7   0.42 |        1/3    1      0     –0.42    0
 ------------------------------------------------------------
 z = 0.95          |        2      0      0      1.95    0

 Tableau 10.3.6
 fB      B    bB   |  fT:   –3     –1     –7    –11.1   –7.2
                   |        D1     D7     D8     D9      D5
 ------------------------------------------------------------
 –3      D1   1/2  |        1      0      0     –1/2     2
  7      D8   1/3  |        0      0      1      5/3     0
  1      D7   1/6  |        0      1      0     –1/6    –1
 ------------------------------------------------------------
 z = 1             |        0      0      0      1.9     0.2
The Chebyshev solution of system (10.3.1) is obtained by solving the equations that correspond to the basis in tableau 10.3.6, namely D1, D8 and D7. We observe that D8 and D7 are the corresponding columns of D3 and D2 respectively; that is, from Lemma 10.2
    a1 +  a2 + z = –3
   –a1 – 2a2 + z =  7
   –a1 +  a2 + z =  1

or

(10.3.2)     a1 +  a2 + 3 = –z
            –a1 – 2a2 – 7 = –z
            –a1 +  a2 – 1 = –z
A significant property of the Chebyshev approximation
Assume that matrix C of Ca = f in (10.1.1) is of rank m. It was noted that the basis matrix B is of rank (m + 1); that is, (m + 1) linearly independent columns in the simplex tableau for problem (10.2.2) form the basis matrix B. These columns correspond to (m + 1) linearly independent equations in system (10.2.1) (with the inequality sign made an equality sign), known as the reference equation set. As noted in the previous section, the equation in Lemma 10.2, which is equation (10.3.2), is a reference equation set. A significant property of the Chebyshev approximation is stated as follows. The residuals ri of the (m + 1) equations in the reference equation set each has the value ±z, arranged in any order, where z is the optimum Chebyshev norm. The residuals of all other equations of the set Ca = f, in absolute value are ≤ z. If rank(C) = k < m, then the reference equation set consists of (k + 1) equations and the residuals of (k + 1) equations each has the value ±z.
© 2008 by Taylor & Francis Group, LLC
Chapter 10: Linear Chebyshev Approximation
10.4.1
321
The equioscillation property of the Chebyshev norm
If n (> m) discrete points in the x-y plane are being approximated by a plane curve in the Chebyshev norm, the residuals for (m + 1) of these points will be ±z in an oscillating order, where z is the optimum Chebyshev norm. A residual ri = +z, is followed by residual ri+1 = –z, followed by ri+2 = –z and so on, where i, i+1, i+2, …, are consecutive points that correspond to the reference set. The residuals of all other points in the discrete set in absolute value ≤ z. The above is illustrated in Figure 2-1, where z = 1.797 and the optimum values r3 = +1.797, r4 = –1.797, r5 = +1.797, r8 = –1.797 and the reference equation set involves equations 3, 4, 5 and 8 of system describe by equation (2.2.2). Hence, this property is known as the equioscillation property of the Chebyshev norm. 10.5
Numerical results and comments
LA_Linf() implements this algorithm [2]. DR_Linf() tests the same 8 examples that were solved in Chapter 6 for the one-sided L1 approximation problem. Table 10.5.1 shows the results of 3 of the examples, computed in single-precision. These 3 results are typical of the other test cases. Table 10.5.1 –––––––––––––––––––––––––––––––––––––––––––––––––––––––– Example C(n×m) Iterations z Unique –––––––––––––––––––––––––––––––––––––––––––––––––––––––– 1 4×2 5 10.0 no 2 10× 5 9 1.7778 yes 3 25×10 23 0.0071 no For each example, the number of iterations and the optimum Chebyshev norm z are shown. The Chebyshev solution is not unique when matrix C is a rank deficient matrix. Also, when the optimum basic feasible solution bB is degenerate, i.e., bB has one or more zero elements, the Chebyshev solution is most probably not unique. The algorithm of Barrodale and Phillips [6, 7] is the nearest to ours. Yet, it differs from ours in significant points. They introduced © 2008 by Taylor & Francis Group, LLC
322
Numerical Linear Approximation in C
(m + 2) artificial variables in the linear programming problem, which were used as slack variables. In our algorithm no artificial variables are needed. In part 1 of our algorithm it is possible to obtain a numerically stable basic solution, which is always desirable in any iterative procedure. Also, we do not need to calculate the marginal costs and objective function z until the end of part 2 of the algorithm. In our algorithm, if rank(C) = k < m, the simplex tableaux would correspond to (k + 1) constraints in n variables and the amount of computation is considerably reduced. On the other hand, in this case in [6], some of the slack variables will persist in all the tableaux. Spath [18] collected data (matrix C and vector f) for 42 examples. He tested these examples using different routines, after converting them to FORTRAN 77. He compared the routines with respect to computer storage, computation time and accuracy of results. Among these algorithms are those of Bartels and Golub [9], Barrodale and Phillips [7], ours [2] and Armstrong and Kung [4]. Spath concluded that the method of Armstrong and Kung [4] contains some errors, which yielded incorrect results for some test cases. According to Grant and Hopkins [12], if working correctly, it is faster than that of Barrodale and Phillips [7] only for small values of m. The total CPU time on the IBM PC AT 102, for the 42 examples were 34, 46, 56 and 73 seconds respectively for the methods of Barrodale and Phillips [7], ours [2], Bartels and Golub [9] and Armstrong and Kung [4]. For pseudo-randomly generated data, the method of Barrodale and Phillips [7] took longer CPU time than the other 3 methods in 3 out of 4 cases (values of m and n), and nearly the same value for the 4th case. Finally, only the methods of Barrodale and Phillips [7] and ours [2] accommodate rank deficient coefficient matrix C. Spath also stated that the results are sensitive to the value of the computer tolerance parameter EPS.
References 1.
Abdelmalek, N.N., Chebyshev solution of overdetermined systems of linear equations, BIT, 15(1975)117-129.
© 2008 by Taylor & Francis Group, LLC
Chapter 10: Linear Chebyshev Approximation
2.
3. 4. 5. 6.
7.
8.
9. 10. 11. 12. 13.
323
Abdelmalek, N.N., A computer program for the Chebyshev solution of overdetermined systems of linear equations, International Journal for Numerical Methods in Engineering, 10(1976)1197-1202. Appa, G. and Smith, C., On L1 and Chebyshev estimation, Mathematical Programming, 5(1973)73-87. Armstrong, R.D. and Kung, D.S., Algorithm AS 135: MinMax estimates for a linear multiple regression problem, Applied Statistics, 28(1979)93-100. Armstrong, R.D. and Sklar, M.G., A linear programming algorithm for curve fitting in the L∞ norm, Numerical Functional Analysis and Optimization, 2(1980)187-218. Barrodale, I. and Phillips, C., An improved algorithm for discrete Chebyshev linear approximation, Proceedings of the Fourth Manitoba conference on Numerical Mathematics, Hartnell, B.L. and Williams, H.C. (eds.), Winnipeg, Manitoba, Canada, pp. 177-190, 1975. Barrodale, I. and Phillips, C., Algorithm 495: Solution of an overdetermined system of linear equations in the Chebyshev norm, ACM Transactions on Mathematical Software, 1(1975)264-270. Bartels, R.H. and Golub, G.H., Stable numerical methods for obtaining the Chebyshev solution of an overdetermined system of equations, Communications of ACM, 11(1968)401406. Bartels, R.H. and Golub, G.H., Algorithm 328: Chebyshev solution to an overdetermined linear system, Communications of ACM, 11(1968)428-430. Cheney, E.W., Introduction to Approximation Theory, McGraw-Hill, New York, 1966. Coleman, T.F. and Li, Y., A global and quadratically convergent method for linear L∞ problems, SIAM Journal on Numerical Analysis, 29(1992)1166-1186. Grant, P.M. and Hopkins, T.R., A remark on algorithm AS 135: Min-Max estimates for linear multiple regression problems, Applied Statistics, 32(1983)345-347. Hadley, G., Linear Programming, Addison-Wesley, Reading, MA, 1962.
© 2008 by Taylor & Francis Group, LLC
324
Numerical Linear Approximation in C
14.
Narula, S.C. and Wellington, J.F., An efficient algorithm for the MSAE and the MMAE regression problems, SIAM Journal on Scientific and Statistical Computing, 9(1988)717727. Osborne, M.R. and Watson, G.A., On the best linear Chebyshev approximation, Computer Journal, 10(1967)172177. Pinar, M.C., and Elhedhli, S., A penalty continuation method for the l∞ solution of overdetermined linear systems, BIT – Numerical Mathematics, 38(1998)127-150. Sklar, M.G. and Armstrong, R.D., Least absolute value and Chebyshev estimation utilizing least squares results, Mathematical Programming, 24(1982)346-352. Spath, H., Mathematical Algorithms for Linear Regression, Academic Press, English Edition, London, 1991. Stiefel, E., Uber diskrete und lineare Tschebyscheffapproximation, Numerische Mathematik, 1(1959)1-28. Stiefel, E., Note on Jordan elimination, linear programming and Tchebycheff approximation. Numerische Mathematik, 2(1960)1-17. Wagner, H.M., Linear programming techniques for regression analysis, Journal of American Statistical Association, 54(1959)206-212. Watson, G.A., Approximation Theory and Numerical Methods, John Wiley & Sons, New York, 1980.
15. 16. 17. 18. 19. 20. 21. 22.
10.6  DR_Linf
/*-------------------------------------------------------------------
 DR_Linf
---------------------------------------------------------------------
 This is a driver for the function LA_Linf(), which solves an
 overdetermined system of linear equations in the L-infinity or the
 Chebyshev norm. The overdetermined system has the form
                          c*a = f
 "c" is a given real n by m matrix of rank k, k <= m < n.
 "f" is a given real n vector.
 "a" is the solution m vector.

 This driver contains the 8 examples whose results are given in the
 text.
-------------------------------------------------------------------*/

#include "DR_Defs.h"
#include "LA_Prototypes.h"

#define N1    4
#define M1    2
#define N2    5
#define M2    3
#define N3    6
#define M3    3
#define N4    7
#define M4    3
#define N5    8
#define M5    4
#define N6    10
#define M6    5
#define N7    25
#define M7    10
#define N8    8
#define M8    4
void DR_Linf (void) { /*----------------------------------------
      Constant matrices/vectors
     ----------------------------------------*/
    static tNumber_R c1init[N1][M1] =
    {
        {  0.0,  -2.0 },
        {  0.0,  -4.0 },
        {  1.0,  10.0 },
        { -1.0,  15.0 }
    };
    static tNumber_R c2init[N2][M2] =
    {
        {  1.0,  2.0, 0.0 },
        { -1.0, -1.0, 0.0 },
        {  1.0,  3.0, 0.0 },
        {  0.0,  1.0, 0.0 },
        {  0.0,  0.0, 1.0 }
    };
    static tNumber_R c3init[N3][M3] =
    {
        {  0.0, -1.0,  0.0 },
        {  1.0,  3.0, -4.0 },
        {  1.0,  0.0,  0.0 },
        {  0.0,  0.0,  1.0 },
        { -1.0,  1.0,  2.0 },
        {  1.0,  1.0,  1.0 }
    };
static tNumber_R c4init[N4][M4] = { { 1.0, 0.0, 1.0 }, { 1.0, 2.0, 2.0 }, { 1.0, 2.0, 0.0 }, { 1.0, 1.0, 0.0 }, { 1.0, 0.0, -1.0 }, { 1.0, 0.0, 0.0 }, { 1.0, 1.0, 1.0 } }; static tNumber_R { { 1.0, -3.0, { 1.0, -2.0, { 1.0, -1.0, © 2008 by Taylor & Francis Group, LLC
c5init[N5][M5] = 9.0, -27.0 }, 4.0, -8.0 }, 1.0, -1.0 },
Chapter 10: DR_Linf { { { { {
1.0, 1.0, 1.0, 1.0, 1.0,
0.0, 1.0, 2.0, 3.0, 4.0,
327 0.0, 0.0 }, 1.0, 1.0 }, 4.0, 8.0 }, 9.0, 27.0 }, 16.0, 64.0 }
}; static tNumber_R c6init[N6][M6] = { { 1.0, 0.0, 0.0, 0.0, 0.0 }, { 0.0, 1.0, 0.0, 0.0, 0.0 }, { 0.0, 0.0, 1.0, 0.0, 0.0 }, { 0.0, 0.0, 0.0, 1.0, 0.0 }, { 0.0, 0.0, 0.0, 0.0, 1.0 }, { 1.0, 1.0, 1.0, 1.0, 1.0 }, { 0.0, 1.0, 1.0, 1.0, 1.0 }, {-1.0, 0.0, -1.0, -1.0, -1.0 }, { 1.0, 1.0, 0.0, 1.0, 1.0 }, { 1.0, 1.0, 1.0, 0.0, 1.0 } }; static tNumber_R { { 1.0, 1.0, { 1.0, 2.0, { 1.0, 3.0, { 1.0, 4.0, { 1.0, 5.0, { 1.0, 6.0, { 1.0, 7.0, { 1.0, 8.0, };
c8init[N8][M8] = 1.0, 4.0, 9.0, 16.0, 25.0, 36.0, 49.0, 64.0,
1.0 4.0 9.0 16.0 25.0 36.0 49.0 64.0
static tNumber_R f1[N1+1] = { NIL, -12.0, 6.0, 0.0, 5.0 };
}, }, }, }, }, }, }, }
static tNumber_R f2[N2+1] = { NIL, 1.0, 2.0, 1.0, -3.0, 0.0 }; static tNumber_R f3[N3+1] = { NIL, © 2008 by Taylor & Francis Group, LLC
328
Numerical Linear Approximation in C 1.0, 2.0, 3.0, 2.0, 2.0, 4.0 }; static tNumber_R f4[N4+1] = { NIL, 0.0, -2.0, 1.0, -1.0, 5.0, 7.0, 0.0 }; static tNumber_R f5[N5+1] = { NIL, 3.0, -3.0, -2.0, 0.0, 7.0, -1.0, 5.0, 2.0 }; static tNumber_R f6[N6+1] = { NIL, 1.0, -1.0, 0.0, -1.0, 1.0, 0.0, 2.0, 3.0, -3.0, -2.0 }; static tNumber_R f7[N7+1] { NIL, 0.0872673, 0.0872794, 0.3491184, 0.3498802, 0.6111334, 0.6150641, 0.8733883, 0.8841621, 1.135895, 1.157550, };
= 0.0873029, 0.3513824, 0.6230824, 0.9071868, 1.206257,
0.0873315, 0.3532572, 0.6336395, 0.9400757, 1.283258,
0.0873576, 0.3550109, 0.6441493, 0.9766021, 1.384432
static tNumber_R f8[N8+1] = { NIL, 2.0, 2.5, 2.0, 6.5, 3.5, 4.5, 6.0, 7.0 }; /*---------------------------------------Variable matrices/vectors ----------------------------------------*/ tMatrix_R ct = alloc_Matrix_R (MMc_COLS, NN_ROWS); tVector_R f = alloc_Vector_R (NN_ROWS); tVector_R r = alloc_Vector_R (NN_ROWS); tVector_R a = alloc_Vector_R (MMc_COLS); tMatrix_R binv = alloc_Matrix_R (MMc_COLS, MMc_COLS); tVector_R bv = alloc_Vector_R (MMc_COLS); tVector_I ibound = alloc_Vector_I (NN_ROWS); tVector_I icbas = alloc_Vector_I (MMc_COLS); tVector_I irbas = alloc_Vector_I (MMc_COLS); tMatrix_R c7 = alloc_Matrix_R (N7, M7 + 1); © 2008 by Taylor & Francis Group, LLC
Chapter 10: DR_Linf
329
tMatrix_R tMatrix_R tMatrix_R tMatrix_R tMatrix_R tMatrix_R tMatrix_R
c1 c2 c3 c4 c5 c6 c8
= = = = = = =
init_Matrix_R init_Matrix_R init_Matrix_R init_Matrix_R init_Matrix_R init_Matrix_R init_Matrix_R
(&(c1init[0][0]), (&(c2init[0][0]), (&(c3init[0][0]), (&(c4init[0][0]), (&(c5init[0][0]), (&(c6init[0][0]), (&(c8init[0][0]),
int int tNumber_R
iter, irank; i, j, k, m, n, Iexmpl; d, dd, ddd, e, ee, eee, z;
eLaRc
rc = LaRcOk;
N1, N2, N3, N4, N5, N6, N8,
M1); M2); M3); M4); M5); M6); M8);
for (j = 1; j <= 5; j++) { d = 0.15* (j-3); dd = d*d; ddd = d*dd; for (i = 1; i <= 5; i++) { e = 0.15* (i-3); ee = e*e; eee = e*ee; k = 5* (j-1) + i; c7[k][1] = 1.0; c7[k][2] = d; c7[k][3] = e; c7[k][4] = dd; c7[k][5] = ee; c7[k][6] = e*d; c7[k][7] = ddd; c7[k][8] = eee; c7[k][9] = dd*e; c7[k][10] = ee*d; } } prn_dr_bnr ("DR_Linf, Chebyshev Solution of an Overdetermined " "System of Linear Equations"); for (Iexmpl = 1; Iexmpl <= 8; Iexmpl++) { switch (Iexmpl) © 2008 by Taylor & Francis Group, LLC
330
Numerical Linear Approximation in C { case 1: n = N1; m = M1; for (i = 1; i <= n; i++) { f[i] = f1[i]; for (j = 1; j <= m; j++) ct[j][i] = c1[i][j]; } break; case 2: n = N2; m = M2; for (i = 1; i <= n; i++) { f[i] = f2[i]; for (j = 1; j <= m; j++) ct[j][i] = c2[i][j]; } break; case 3: n = N3; m = M3; for (i = 1; i <= n; i++) { f[i] = f3[i]; for (j = 1; j <= m; j++) ct[j][i] = c3[i][j]; } break; case 4: n = N4; m = M4; for (i = 1; i <= n; i++) { f[i] = f4[i]; for (j = 1; j <= m; j++) ct[j][i] = c4[i][j]; } break; case 5: n = N5; m = M5;
© 2008 by Taylor & Francis Group, LLC
Chapter 10: DR_Linf for (i = 1; i <= n; i++) { f[i] = f5[i]; for (j = 1; j <= m; j++) ct[j][i] = c5[i][j]; } break; case 6: n = N6; m = M6; for (i = 1; i <= n; i++) { f[i] = f6[i]; for (j = 1; j <= m; j++) ct[j][i] = c6[i][j]; } break; case 7: n = N7; m = M7; for (i = 1; i <= n; i++) { f[i] = f7[i]; for (j = 1; j <= m; j++) ct[j][i] = c7[i][j]; } break; case 8: n = N8; m = M8; for (i = 1; i <= n; i++) { f[i] = f8[i]; for (j = 1; j <= m; j++) ct[j][i] = c8[i][j]; } break; default: break; } prn_algo_bnr ("Linf"); prn_example_delim(); PRN ("Example #%d: Size of matrix \"c\" %d by %d\n", © 2008 by Taylor & Francis Group, LLC
331
332
Numerical Linear Approximation in C Iexmpl, n, m); prn_example_delim(); PRN ("Chebyshev Solution of an Overdetermined System\n"); prn_example_delim(); PRN ("r.h.s. Vector \"f\"\n"); prn_Vector_R (f, n); PRN ("Transpose of Coefficient Matrix, \"ct\"\n"); prn_Matrix_R (ct, m, n); rc = LA_Linf (m, n, ct, f, &irank, &iter, r, a, &z); if (rc >= LaRcOk) { PRN ("\n"); PRN ("Results of the Chebyshev Approximation\n"); PRN ("Chebyshev solution vector, \"a\"\n"); prn_Vector_R (a, m); PRN ("Chebyshev residual vector \"r\"\n"); prn_Vector_R (r, n); PRN ("Chebyshev norm \"z\" = %8.4f\n", z); PRN ("Rank of matrix \"c\" = %d, No. of Iterations " "= %d\n", irank, iter); } prn_la_rc (rc); } free_Matrix_R free_Vector_R free_Vector_R free_Vector_R free_Matrix_R free_Vector_R free_Vector_I free_Vector_I free_Vector_I free_Matrix_R
(ct, MMc_COLS); (f); (r); (a); (binv, MMc_COLS); (bv); (ibound); (icbas); (irbas); (c7, N7);
uninit_Matrix_R (c1); uninit_Matrix_R (c2); © 2008 by Taylor & Francis Group, LLC
Chapter 10: DR_Linf uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R
(c3); (c4); (c5); (c6); (c8);
}
© 2008 by Taylor & Francis Group, LLC
333
334
10.7
Numerical Linear Approximation in C
LA_Linf
/*------------------------------------------------------------------LA_Linf /*------------------------------------------------------------------This program solves an overdetermined system of linear equations in the Chebyshev (L-infinity) norm. It uses a modified simplex method to the linear programming formulation of the problem. The system of linear equations has the form c*a = f "c" is a given real n by m matrix of rank k <= m < n. "f" is a given real n vector. The problem is to calculate the elements of the m vector "a" that gives the minimum Chebyshev norm z. z = max|r[i]|,
i=1,2,...,n
where r[i] is the ith residual and is given by r[i] = c[i][1]*a[1] + c[i][2]*a[2] + ... + c[i][m]*a[m] - f[i], i = 1, 2, . . ., n Inputs m Number of columns of matrix "c" in the system c*a = f. n Number of rows of matrix "c" in the system c*a = f. ct A real (m+1) by n matrix. Its first m rows and its n columns contain the transpose of matrix "c" of the system c*a = f. Its (m+1)th row will be filled with ones by the program. f A real n vector containing the r.h.s. of the system c*a = f. Local Variables binv A real (m+1) square matrix containing the inverse of the basis matrix in the linear programming problem. bv A real (m+1) vector containing the basic solution in the linear programming problem. icbas An integer (m+1) vector containing the indices of the columns of matrix "ct" forming the basis matrix. irbas A integer (m+1) vector containing the indices of the rows of matrix "ct".
© 2008 by Taylor & Francis Group, LLC
Chapter 10: LA_Linf
335
Outputs irank Calculated rank of matrix "c". iter Number of iterations, or the number of times the simplex tableau is changed by a Gauss-Jordon elimination step.. a A real (m + 1) vector. Its first m elements are the Chebyshev solution of the system c*a = f. r A real n vector containing the residual vector r = (c*a - f). z The minimum Chebyshev norm of the residual vector "r". Returns one of LaRcSolutionUnique LaRcSolutionProbNotUnique LaRcSolutionDefNotUniqueRD LaRcNoFeasibleSolution LaRcErrBounds LaRcErrNullPtr LaRcErrAlloc -------------------------------------------------------------------*/ #include "LA_Prototypes.h" eLaRc LA_Linf (int m, int int *pIter, tVector_R { tVector_I icbas = tVector_I irbas = tVector_R th = tMatrix_R binv = tVector_R bv = tVector_I ibound = int int tNumber_R
n, tMatrix_R ct, tVector_R f, int *pIrank, r, tVector_R a, tNumber_R *pZ) alloc_Vector_I alloc_Vector_I alloc_Vector_R alloc_Matrix_R alloc_Vector_R alloc_Vector_I
(m + (m + (n); (m + (m + (n);
1); 1); 1, m + 1); 1);
i = 0, ij = 0, j = 0, kl = 0, m1 = 0; itest = 0, iout = 0, jin = 0, ivo = 0; d = 0.0;
/* Validation of data before executing the algorithm */ eLaRc rc = LaRcSolutionUnique; VALIDATE_BOUNDS ((0 < m) && (m < n)); VALIDATE_PTRS (ct && f && pIrank && pIter && r && a && pZ); VALIDATE_ALLOC (icbas && irbas && th && binv && bv && ibound); /* Part 1 of the algorithm */ m1 = m + 1; © 2008 by Taylor & Francis Group, LLC
336
Numerical Linear Approximation in C kl = 1; *pIter = 0; *pIrank = m; *pZ = 0.0; /* Initializing the data */ LA_linf_init (m, n, ct, icbas, irbas, binv, ibound, r, a); iout = 0; /* Part 1 of the algorithm. Detecting the rank of matrix "c" */ LA_linf_part_1 (&kl, m, n, ct, f, icbas, irbas, binv, bv, ibound, pIrank, pIter); /* Part 2 of the algorithm. Obtaining a basic feasible solution */ LA_linf_part_2 (kl, m, n, ct, f, icbas, binv, bv, ibound, pIter); /* Part 3 of the algorithm. Calculating the initial marginal costs and the norm z */ LA_linf_part_3 (kl, m, n, ct, f, icbas, binv, bv, ibound, r, pZ); for (ij = 1; ij <= n*n; ij++) { ivo = 0; /* Determine the vector that enters the basis */ LA_linf_vent (&ivo, &jin, kl, m, n, icbas, r, pZ); if (ivo == 0) { /* Calculate the results */ rc = LA_linf_res (kl, m, n, f, icbas, irbas, binv, bv, ibound, r, a, *pIrank, pZ); GOTO_CLEANUP_RC (rc); } if (ivo != 1) { for (i = kl; i <= m1; i++) { ct[i][jin] = bv[i] + bv[i] - ct[i][jin]; } r[jin] = *pZ + *pZ - r[jin]; f[jin] = -f[jin]; ibound[jin] = -ibound[jin];
© 2008 by Taylor & Francis Group, LLC
Chapter 10: LA_Linf
337
} itest = 0; /* Determine the vector that leaves the basis */ LA_linf_vleav (&itest, &iout, jin, kl, m, ct, bv); /* No feasible solution is available */ if (itest != 1) { GOTO_CLEANUP_RC (LaRcNoFeasibleSolution); } /* A Gauss-Jordan elimination step */ LA_linf_gauss_jordn (iout, jin, kl, m, n, ct, icbas, binv, bv, pIter); d = r[jin]; for (j = 1; j <= n; j++) { r[j] = r[j] - d * (ct[iout][j]); } *pZ = *pZ - d * (bv[iout]); } CLEANUP: free_Vector_I free_Vector_I free_Vector_R free_Matrix_R free_Vector_R free_Vector_I
(icbas); (irbas); (th); (binv, m + 1); (bv); (ibound);
return rc; } /*------------------------------------------------------------------Initializing the data of LA_Linf() -------------------------------------------------------------------*/ void LA_linf_init (int m, int n, tMatrix_R ct, tVector_I icbas, tVector_I irbas, tMatrix_R binv, tVector_I ibound, tVector_R r, tVector_R a) { int i, j, m1; m1 = m + 1; © 2008 by Taylor & Francis Group, LLC
338
Numerical Linear Approximation in C for (j = 1; j <= m1; j++) { a[j] = 0.0; icbas[j] = 0; irbas[j] = j; for (i = 1; i <= m1; i++) { binv[i][j] = 0.0; } binv[j][j] = 1.0; } for (j = 1; j <= n; j++) { ct[m1][j] = 1.0; r[j] = 0.0; ibound[j] = 1; }
} /*------------------------------------------------------------------Part 1 of the algorithm LA_Linf() -------------------------------------------------------------------*/ void LA_linf_part_1 (int *pKl, int m, int n, tMatrix_R ct, tVector_R f, tVector_I icbas, tVector_I irbas, tMatrix_R binv, tVector_R bv, tVector_I ibound, int *pIrank, int *pIter) { int j, iout, jin = 0; tNumber_R d, piv, m1 = m + 1; for (iout = 1; iout <= m1; iout++) { piv = 0.0; for (j = 1; j <= n; j++) { d = ct[iout][j]; if (d < 0.0) d = -d; if (d > piv) { jin = j; piv = d; } } if (piv > EPS) { © 2008 by Taylor & Francis Group, LLC
Chapter 10: LA_Linf
339
/* A Gauss-Jordan elimination step */ LA_linf_gauss_jordn (iout, jin, *pKl, m, n, ct, icbas, binv, bv, pIter); } /* Detect the rank of matrix "c" */ LA_linf_detect_rank (pKl, iout, jin, m, n, piv, ct, f, icbas, irbas, binv, bv, ibound, pIrank, pIter); } } /*------------------------------------------------------------------Detection of rank deficiency of matrix "c" in LA_Linf() -------------------------------------------------------------------*/ void LA_linf_detect_rank (int *pKl, int iout, int jin, int m, int n, tNumber_R piv, tMatrix_R ct, tVector_R f, tVector_I icbas, tVector_I irbas, tMatrix_R binv, tVector_R bv, tVector_I ibound, int *pIrank, int *pIter) { int i, j, k, m1, icb; m1 = m + 1; if ((piv < EPS) && iout < m1) { swap_rows_Matrix_R (ct, *pKl, iout); k = irbas[iout]; irbas[iout] = irbas[*pKl]; irbas[*pKl] = 0; for (j = *pKl; j <= m; j ++) { binv[iout][j] = binv[*pKl][j]; binv[*pKl][j] = 0.0; } icbas[iout] = icbas[*pKl]; icbas[*pKl] = 0; for (i = *pKl; i <= m1; i++) { binv[i][iout] = binv[i][*pKl]; binv[i][*pKl] = 0.0; } *pIrank = *pIrank - 1; *pKl = *pKl + 1; } if ((piv < EPS) && iout == m1) { for (j = 1; j <= n; j++) © 2008 by Taylor & Francis Group, LLC
        {
            icb = 0;
            for (i = *pKl; i <= m; i++)
            {
                if (j == icbas[i]) icb = 1;
            }
            if (icb == 0)
            {
                jin = j;
                break;
            }
        }
        f[jin] = -f[jin];
        ibound[jin] = -ibound[jin];
        for (i = *pKl; i <= m; i++)
        {
            ct[i][jin] = -ct[i][jin];
        }
        ct[m1][jin] = 2.0 - ct[m1][jin];

        /* A Gauss-Jordan elimination step */
        LA_linf_gauss_jordn (iout, jin, *pKl, m, n, ct, icbas,
            binv, bv, pIter);
    }
} /*------------------------------------------------------------------Part 2 of the algorithm. Obtaining a basic feasible solution in LA_Linf() -------------------------------------------------------------------*/ void LA_linf_part_2 (int kl, int m, int n, tMatrix_R ct, tVector_R f, tVector_I icbas, tMatrix_R binv, tVector_R bv, tVector_I ibound, int *pIter) { int i, k, m1; int iout, jin; jin = 0; iout = 0; m1 = m + 1; for (i = kl; i <= m; i++) { if (bv[i] < 0.0) { iout = i; © 2008 by Taylor & Francis Group, LLC
jin = icbas[i]; f[jin] = -f[jin]; ibound[jin] = -ibound[jin]; for (k = kl; k <= m1; k++) { ct[k][jin] = bv[k] + bv[k] - ct[k][jin]; } /* A Gauss-Jordan elimination step */ LA_linf_gauss_jordn (iout, jin, kl, m, n, ct, icbas, binv, bv, pIter); } } } /*------------------------------------------------------------------Part 3 of the algorithm. Calculating the initial marginal costs and the norm z in LA_Linf() -------------------------------------------------------------------*/ void LA_linf_part_3 (int kl, int m, int n, tMatrix_R ct, tVector_R f, tVector_I icbas, tMatrix_R binv, tVector_R bv, tVector_I ibound, tVector_R r, tNumber_R *pZ) { int i, j, k, m1, icb; tNumber_R s; m1 = m + 1; for (j = 1; j <= n; j++) { r[j] = 0.0; icb = 0; for (i = kl; i <= m1; i++) { if (j == icbas[i]) icb = 1; } if (icb == 0) { s = -f[j]; for (i = kl; i <= m1; i++) { k = icbas[i]; s = s + ct[i][j] * (f[k]); } r[j] = s; } } © 2008 by Taylor & Francis Group, LLC
    s = 0.0;
    for (i = kl; i <= m1; i++)
    {
        k = icbas[i];
        s = s + bv[i] * (f[k]);
    }
    *pZ = s;
    if (*pZ < 0.0)
    {
        for (j = 1; j <= n; j++)
        {
            f[j] = -f[j];
            ibound[j] = -ibound[j];
            r[j] = -r[j];
        }
        *pZ = -*pZ;
        for (j = kl; j <= m; j++)
        {
            for (i = kl; i <= m1; i++)
            {
                binv[i][j] = -binv[i][j];
            }
        }
    }
} /*------------------------------------------------------------------Determine the vector that enters the basis in LA_Linf() -------------------------------------------------------------------*/ void LA_linf_vent (int *pIvo, int *pJin, int kl, int m, int n, tVector_I icbas, tVector_R r, tNumber_R *pZ) { int i, j, m1, icb; tNumber_R d, e, g, tz; m1 = m + 1; g = 1.0/ (EPS*EPS); tz = *pZ + *pZ + EPS; for (j = 1; j <= n; j++) { icb = 0; for (i = kl; i <= m1; i++) { if (j == icbas[i]) icb = 1; } © 2008 by Taylor & Francis Group, LLC
if (icb == 0) { d = r[j]; if (d < -EPS) { e = d; if (e < g) { g = e; *pJin = j; *pIvo = 1; } } else if (d >= tz) { e = tz - d; if (e < g) { g = e; *pJin = j; *pIvo = -1; } } } } } /*------------------------------------------------------------------Determine the vector that leaves the basis in LA_Linf() -------------------------------------------------------------------*/ void LA_linf_vleav (int *pItest, int *pIout, int jin, int kl, int m, tMatrix_R ct, tVector_R bv) { int i, m1; tNumber_R d, g, thmax; m1 = m + 1; thmax = 1.0/ (EPS*EPS); for (i = kl; i <= m1; i++) { d = ct[i][jin]; if (d > EPS) { g = bv[i]/d; if (g <= thmax) © 2008 by Taylor & Francis Group, LLC
            {
                thmax = g;
                *pIout = i;
                *pItest = 1;
            }
        }
    }
} /*------------------------------------------------------------------A Gauss-Jordan elimination step in LA_Linf() -------------------------------------------------------------------*/ void LA_linf_gauss_jordn (int iout, int jin, int kl, int m, int n, tMatrix_R ct, tVector_I icbas, tMatrix_R binv, tVector_R bv, int *pIter) { int i, j, l, m1; tNumber_R d, pivot; m1 = m + 1; pivot = ct[iout][jin]; for (j = 1; j <= n; j++) { ct[iout][j] = ct[iout][j]/pivot; } l = m1; /*ko = kl + *pIter; if (ko < m1) l = ko; */ for (j = kl; j <= m1; j++) { binv[iout][j] = binv[iout][j]/pivot; } for (i = kl; i <= m1; i++) { if (i != iout) { d = ct[i][jin]; for (j = 1; j <= n; j++) { ct[i][j] = ct[i][j] - d * (ct[iout][j]); } for (j =kl; j <= m1; j++) { binv[i][j] = binv[i][j] - d * (binv[iout][j]); © 2008 by Taylor & Francis Group, LLC
} } } for (i = kl; i <= m1; i++) { bv[i] = binv[i][m1]; } *pIter = *pIter + 1; icbas[iout] = jin; } /*------------------------------------------------------------------Calculate the results of LA_Linf() -------------------------------------------------------------------*/ eLaRc LA_linf_res (int kl, int m, int n, tVector_R f, tVector_I icbas, tVector_I irbas, tMatrix_R binv, tVector_R bv, tVector_I ibound, tVector_R r, tVector_R a, int irank, tNumber_R *pZ) { int i, j, k, m1; tNumber_R s, d; m1 = m + 1; for (j = kl; j <= m1; j++) { s = 0.0; for (i = kl; i <= m1; i++) { k = icbas[i]; s = s + f[k]*binv[i][j]; } k = irbas[j]; a[k] = s; } for (j = 1; j <= n; j++) { d = r[j] - *pZ; if (ibound[j] == -1) d = -d; r[j] = d; } if (irank < m) return LaRcSolutionDefNotUniqueRD; © 2008 by Taylor & Francis Group, LLC
for (i = 1; i <= m1; i++) { if (bv[i] < EPS) return LaRcSolutionProbNotUnique; } return LaRcSolutionUnique; }
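A minimal usage sketch may be helpful at this point. The calling sequence below is an assumption on our part, inferred from the parameter lists of the routines above (it mirrors LA_Linfside() of Chapter 11 without the iside flag); consult the driver DR_Linf() for the authoritative usage.

/*------------------------------------------------------------------
Sketch: calling LA_Linf() (assumed calling sequence, illustrative)
-------------------------------------------------------------------*/
    int       m = 2, n = 4, irank, iter;
    tNumber_R z;
    eLaRc     rc;
    tMatrix_R ct = alloc_Matrix_R (m + 1, n);
    tVector_R f  = alloc_Vector_R (n);
    tVector_R r  = alloc_Vector_R (n);
    tVector_R a  = alloc_Vector_R (m + 1);

    /* fill the first m rows of "ct" with the transpose of matrix "c",
       fill "f" with the right-hand side, then call */
    rc = LA_Linf (m, n, ct, f, &irank, &iter, r, a, &z);
    if (rc >= LaRcOk)
    {
        /* "a" holds the Chebyshev solution, "r" the residuals and
           "z" the minimum Chebyshev norm */
    }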
Chapter 11 One-Sided Chebyshev Approximation
11.1
Introduction
In the previous chapter, an algorithm for obtaining the linear (ordinary) Chebyshev approximation is given [1]. The linear Chebyshev approximation is a double-sided one, meaning that some of the elements of the residual vector have values ≥ 0 and the others have values < 0. In other words, for the discrete linear Chebyshev approximation, some of the discrete points are above or on, and some are below the approximating curve (surface). Hence, the approximation is the ordinary or the double-sided one. In this chapter, we study the linear one-sided Chebyshev approximation problem. In this problem, all the residuals are either non-positive or non-negative. In other words, for the Chebyshev approximation of discrete points, all the discrete points are either above or on the approximating curve (surface), or below or on the approximating curve. In the former case, when all the discrete points are above or on the approximating curve, we have the one-sided Chebyshev approximation from above. The one-sided Chebyshev approximation from below is described in a similar manner. This is illustrated by Figure 11-1, which is analogous to Figure 6-1 for the linear one-sided L1 approximations. Consider the overdetermined system of linear equations Ca = f C = (cij), is a given real n by m matrix of rank k, k ≤ m < n and f = (fi) is a given real n-vector. The L∞ or the Chebyshev solution of system Ca = f is the m-vector a = (ai) that minimizes the Chebyshev norm z of the residuals
(11.1.1)    z = max|ri|, i = 1, 2, …, n

where ri is the ith residual and is given by

(11.1.2)    ri = fi – ∑j=1,…,m cij aj,  i = 1, 2, …, n
The special case when the system of equations Ca = f is consistent, i.e., the residual r = f – Ca = 0, is not of interest here. We thus assume that r ≠ 0. Note that the residuals in (11.1.2) are chosen to be the negative of the residuals in Chapter 10. This is to facilitate the analysis in this chapter. As noted in Chapter 2, such a choice of the definition of the residuals is arbitrary. When the solution vector satisfies the additional conditions (11.1.3)
ri ≥ 0, i = 1, 2, …, n
which in vector-matrix form is (11.1.3a)
f ≥ Ca
we have the one-sided Chebyshev solution from above of system Ca = f; that is, from (11.1.3a), for any equation i in Ca = f, i = 1, 2, …, n, the observed value fi of vector f is ≥ the calculated element i of Ca. If the inequalities in (11.1.3) are reversed, we have the problem of the one-sided Chebyshev solution from below. We also note that there are problems that have an (ordinary) Chebyshev approximation but do not have one-sided Chebyshev approximation from above and/or from below. See the numerical examples in Section 11.5. We consider here the problem of the one-sided Chebyshev approximation from above. However, the analysis and presentation of the problem from below are almost identical and the described algorithm applies to it as well. The problem is presented here as a linear programming problem and we pursue the analysis of the dual form of the linear programming presentation. In this algorithm, minimum computer storage is required, no artificial variables are needed and no conditions are imposed on the coefficient matrix. In Section 11.2, the problem is presented as a special case of general constrained Chebyshev approximation problems. We shall comment on these algorithms in Section 11.5. In Section 11.3, the
linear programming formulation of the problem is given and necessary lemmas are derived. In Section 11.4, the algorithm is described and a numerical example is solved. A note on the linear one-sided Chebyshev solution from below is also presented. In Section 11.5, numerical results are given. Simple relationships between the one-sided and the ordinary Chebyshev approximations are observed. Comparisons with other methods for solving the same problem are also given.

Figure 11-1: Curve fitting with vertical parabolas of a set of 8 points using Chebyshev and one-sided Chebyshev approximations
This figure shows curve fitting with vertical parabolas of the set of 8 points shown in Figure 2-1. The solid curve is the ordinary Chebyshev approximation. The dashed curve is the one-sided Chebyshev approximation from above and the dotted curve is the one-sided Chebyshev approximation from below.
11.1.1
Applications of the algorithm
As indicated in Chapter 6, the discrete linear one-sided approximation problems arise in the discretization of the continuous
one-sided approximation problems. The latter problems arise, for example, in the solution of operator equations [13, 14]. They are also applied to the solution of overdetermined linear inequalities. The latter is a basic problem in pattern classification ([12], pp. 40-41, 48-49). See Chapter 16.
11.2
A special problem of a general constrained one
There exist algorithms for constrained Chebyshev approximation problems. By manipulating the constraints, the problem reduces to a one-sided Chebyshev approximation one. This was the case with the one-sided L1 approximation problem presented in Section 6.2. Using our notation, we start with Dax [6], who presented an algorithm for solving the one-sided Chebyshev approximation from below, namely

minimize ||Ca – f||∞ subject to Ca ≥ f

In his method, a search direction is obtained via the solution of a least squares problem, which means that a least squares solution should be available first. Joe and Bartels [8], using a penalty linear programming approach, presented an algorithm to solve the problem

(11.2.1a)    minimize ||Ca – f||∞

subject to

(11.2.1b)    Ga = g and Ea ≥ e

If in (11.2.1b) we take G = 0 and g = 0, and take E = C and e = f, (11.2.1b) reduces to Ca ≥ f and we get the one-sided Chebyshev solution from below. Sklar [11] presented a linear programming algorithm with multiple pivoting and an LU decomposition technique to solve the problem

(11.2.2a)    minimize ||Ca – f||∞

subject to

(11.2.2b)    ea ≤ Ea ≤ eb

His algorithm is a generalization of that of Armstrong and Sklar [3]. If we take E = C, ea = –∞ and eb = f, we get the one-sided Chebyshev solution from above. By –∞, we mean a large negative number, such as say –10^6. Roberts and Barrodale [10] presented an algorithm to solve the problem

(11.2.3a)    minimize ||Ca – f||∞

subject to

(11.2.3b)    Ga = g and ea ≤ Ea ≤ eb

Their algorithm is the extension of the algorithm of Barrodale and Phillips for the ordinary Chebyshev solution [4, 5]. If in (11.2.3b) we take G = 0, g = 0, problem (11.2.3) reduces to problem (11.2.2).
11.3
Linear programming formulation of the problem
Watson [14, 15] reduced the problem of the one-sided Chebyshev approximation from above to a linear programming problem as follows. Let in (11.1.1), h = max|ri|, i = 1, 2, …, n. Then from (11.1.2) and (11.1.3), in vector-matrix form, the primal linear programming formulation of this problem is

minimize h

subject to

0 ≤ f – Ca ≤ he

where e is an n-vector each element of which equals 1 and 0 is an n-zero vector. The above two inequalities reduce to

Ca + he ≥ f
–Ca ≥ –f

This problem may be stated as

minimize Z = em+1T (aT h)T

subject to

    |  C   e | |a|    |  f |
    | –C   0 | |h| ≥  | –f |

where em+1 is an (m + 1)-vector that is the (m + 1)th column of an (m + 1)-unit matrix. It is more efficient to go to the dual of this problem [7, 15]; i.e.,

(11.3.1a)    maximize z = [fT –fT]b

subject to the constraints

(11.3.1b)    | CT  –CT | b = em+1
             | eT   0T |

and b is a 2n-vector with

(11.3.1c)    bi ≥ 0, i = 1, 2, …, 2n

Let us write (11.3.1b) as

(11.3.1d)    Db = em+1

where D is the matrix on the l.h.s. of (11.3.1b). As usual, for problem (11.3.1), a simplex tableau for (m + 1) constraints in 2n variables is constructed. This is denoted by the large tableau, in order to distinguish it from the condensed tableaux, which we shall use in this algorithm. Without loss of generality, we assume that rank(C) = m. Let the basis matrix at any stage of the computation be denoted by B, which is an (m + 1) non-singular square matrix. For any column j in the simplex tableau, the vector yj and its marginal cost are given by

(11.3.2a)    yj = B–1Dj, j = 1, 2, …, 2n
(11.3.2b)    (zj – fj) = fBTyj – fj, j = 1, 2, …, 2n

The elements of fB are those associated with the basic variables.
The basic solution, denoted by bB, is given by

(11.3.2c)    bB = B–1em+1

and the objective function by

(11.3.2d)    z = fBTbB

11.3.1
Properties of the matrix of constraints
The analysis of the linear programming problem is initiated by the kind of asymmetry in the matrix of constraints D in (11.3.1d). Let us denote the jth column of matrix D by Dj and the jth column of matrix CT by CjT. Then from (11.3.1b), for j = 1, 2, …, n

(11.3.3)    Dj = | CjT |   and   Dj+n = | –CjT |
                 |  1  |                |   0  |

We have CjT as part of vector Dj and –CjT as part of vector Dj+n, for j = 1, 2, …, n. See also Section 10.2.1. This kind of asymmetry in the matrix of constraints D enables us to use a condensed simplex tableau of (m + 1) constraints in only n variables. From (11.3.3)

(11.3.4)    Dj + Dj+n = em+1
Definition
As in Section 10.2, we define any two columns j and (j + n), 1 ≤ j ≤ n, in the matrix of constraints D in (11.3.1d) as two corresponding columns. The following 8 lemmas correspond respectively to the 8 lemmas in Chapter 10.

Lemma 11.1
At any stage of the computation, the (m + 1)th column of B–1 is the basic solution vector bB. This follows directly from (11.3.2c).

Lemma 11.2
The optimum solution vector a and z for problem (11.3.1) are given by
    BT (aT z)T = fB

or

    (aT z) = fBTB–1

Lemma 11.3
Assume that bB is a basic feasible solution. Then the algebraic sum of the elements of bB ≥ 1.

Proof:
Consider the last equation in (11.3.1b), which is

    (eT 0T)b = 1

The elements of the vector (eT 0T) are 1's and 0's. Also, as bB is assumed feasible, each element of bB is ≥ 0. Since the non-basic elements of vector b are all 0's, the algebraic sum of the elements of bB ≥ (eT 0T)b = 1 and the lemma is proved.

Lemma 11.4
Any two corresponding columns i and j, |i – j| = n, 1 ≤ i, j < 2n, should not appear together in any basis matrix. For the proof, see Lemma 3 in [15].

Lemma 11.5
Let bB be any basic solution, and z be the corresponding objective function for the programming problem (11.3.1). Let also columns j and (j + n), 1 ≤ j ≤ n, be two corresponding columns in the matrix of constraints in (11.3.1b). Then at any stage of the computation

(11.3.5a)    yj + yj+n = bB, 1 ≤ j ≤ n

and

(11.3.5b)    (zj – fj) + (zj+n – fj+n) = z, 1 ≤ j ≤ n
Proof: The proof of this lemma follows the proof of Lemma 10.5. That is,
by multiplying (11.3.4) by B–1, from (11.3.2a) and Lemma 11.1, we get (11.3.5a). From (11.3.5a), (11.3.2a, b), Lemma 11.1, and that fj+n = –fj, (11.3.5b) is obtained. Lemma 11.6 Assume that we have obtained an initial basic solution that is not feasible; there exist one or more elements of the basic solution bB that is < 0. Let bBs, 1 ≤ s ≤ (m + 1), be one of these elements. Let i be the basic column associated with bBs. Let j be the corresponding column to the basic column i; |i – j| = n. Then if column i is replaced by column j in the basis, and a Gauss-Jordan step is performed on the simplex tableau, the new element bBs of the basic solution will be > 0. Moreover, the elements of the new basic solution bBk, k ≠ s, k = 1, 2, …, m+1, each will have the sign of its old value. The proof here follows the steps of the proof of Lemma 10.6. Lemma 11.7 (Lemma 5 in [2]) Let a basis matrix B(1) be formed by choosing (m + 1) linearly independent columns out of the 2n columns in the matrix of constraints in (11.3.1b) such that no two corresponding columns appear together in B(1). Let another basis matrix B(2) be formed, the columns of which are the corresponding columns of B(1) arranged in the same order and assume that B(2) is non-singular. Let bB(1) and z(1) be respectively the basic solution and the objective function associated with B(1) and bB(2) and z(2) be the respective parameters associated with B(2). Then bB(2) = cbB(1) and z(2) = –cz(1) where c is a constant given by (11.3.6)
c = 1/(sm+1 – 1)
where sm+1 is the algebraic sum of the elements of bB(1). If bB(1) is feasible, from Lemma 11.3, sm+1 > 1 and then c > 0. Lemma 11.8 (Lemma 6 in [2]) Consider the two bases B(1) and B(2) of the previous lemma. Let © 2008 by Taylor & Francis Group, LLC
(B(1))i and (B(2))i be respectively column i of B(1) and of B(2). Let also column i of B(1)–1 and of B(2)–1 be respectively (B(1)–1)i and (B(2)–1)i. Let us construct two simplex tableaux T(1) and T(2) for B(1) and B(2) respectively, where the columns of T(2) are the corresponding columns of T(1) arranged in the same order. Then

    (B(2)–1)i = –(B(1)–1)i + c si bB(1)
    yi in T(2) = yi in T(1) + (1 – ci) c bB(1)

where si is the algebraic sum of the elements of column (B(1)–1)i, ci is the algebraic sum of the elements of yi in T(1) and c is given by (11.3.6).

Lemma 11.9
Unlike the ordinary linear Chebyshev approximation problem (Chapter 10), there are problems whose one-sided Chebyshev solution(s) do not exist. The corresponding linear programming problem (11.3.1) would have an unbounded solution. See Lemma 7 in [2], the numerical results in Section 11.5 and also Example 16.2.
11.4
Description of the algorithm
The algorithm is similar to that of the ordinary Chebyshev approximation of Chapter 10. It is again in three parts. In part 1, a numerically stable initial basic solution, feasible or not, is obtained. Also, the rank of matrix C is determined. In part 2, if the initial basic solution is not feasible, it is made feasible. Then the objective function z and the marginal costs are calculated. Part 3 is a slightly modified simplex algorithm. We start by constructing a condensed simplex tableau for (m + 1) constraints in only n variables, namely, the first n variables. An n-index indicator vector whose elements are +1 or –1 is needed. If column j, 1 ≤ j ≤ n, is in the condensed tableau, index j of this vector has the value +1. If column (j + n), 1 ≤ j ≤ n, is in the condensed tableau, index j of this vector has the value –1. An initial basic solution, feasible or not, is obtained. This is done by simply applying a finite number of Gauss-Jordan elimination steps to the initial data. We use partial pivoting. The pivot is the largest
element in absolute value in row i of tableau (i – 1), i = 1, 2, …, m+1. Tableau 0 is the initial data. If one of the first m rows of matrix CT is linearly dependent on one or more of the preceding rows, i.e., rank(C) = k < m, during the elimination processes in part 1 of the algorithm, we get a zero row in the tableau. Such row(s) are discarded from the following tableaux. From (11.3.1b), the continuation of any of these rows in the large tableau will also be a row consisting of zero elements. In this case, the simplex tableau is for (k + 1) constraints in n variables. An exception from this rule is the last row in the tableau, as this row cannot be linearly dependent on any of the preceding rows. If the last row in the condensed tableau consists of all zero elements, the continuation of this row in the large tableau will not contain all zero elements. For the (k + 1)th iteration in part 1, we chose the pivot as the largest positive element in the last row of the tableau and the extension of this row (in the large tableau). This ends part 1 of the algorithm. If one or more elements of the obtained basic solution bB is < 0, the initial basic solution is not feasible. For each basic column j associated with a negative element in the basic solution, we utilize Lemma 11.6. Column yj in the simplex tableau is replaced by its corresponding column. Then by applying a Gauss-Jordan step after each replacement, the new basic solution will be feasible. The objective function z is then calculated from (11.3.2c, d). Then if z < 0, we calculate the algebraic sum of the elements of bB. If this algebraic sum > 1, we construct the simplex tableau of the corresponding columns of the current tableau. We also construct the inverse of the corresponding basis matrix. This is done in the manner given by Lemma 11.7 and Lemma 11.8. Hence, z becomes > 0. The marginal costs (zi – fi) are then calculated from (11.3.5b). If z < 0 and the algebraic sum of the elements of bB = 1, B(2)–1 would be singular, and we proceed to calculate the marginal costs and go to part 3 of the algorithm. In this case, in part 3, z is bound to increase algebraically and will eventually change to a positive quantity and if the solution exists, z will eventually increase to its optimum value. This ends part 2 of the algorithm, as we now have an initial basic feasible solution and z ≥ 0. Part 3 is the ordinary simplex algorithm. The only difference is in
the choice of the non-basic column that enters the basis. The column to enter the basis is that which has the most negative marginal cost among the non-basic columns in the current condensed tableau and their corresponding columns. Relation (11.3.5b) is used for calculating the marginal costs of the corresponding columns. The above steps are illustrated by the following detailed numerical example. Example 11.1 Obtain the one-sided Chebyshev following system. a1 + 2a2 = –a1 – a2 = (11.4.1) a1 + 3a2 = a2 =
solution from above of the following system.

(11.4.1)    a1 + 2a2 = –1
            –a1 – a2 = –2
            a1 + 3a2 = –1
                  a2 = –4
The initial data are shown in the condensed tableau. The pivot is chosen as the largest element in absolute value in row i of tableau (i – 1). The pivot element in each tableau is bracketed.

Initial Data
 fT             –1    –2    –1    –4
 fB   B   bB    D1    D2    D3    D4
 ------------------------------------
           0    (1)   –1     1     0
           0     2    –1     3     1
           1     1     1     1     1
Tableau 11.4.1 (part 1)
 fT             –1    –2    –1    –4
 fB   B   bB    D1    D2    D3    D4
 ------------------------------------
 –1   D1   0     1    –1     1     0
           0     0    (1)    1     1
           1     0     2     0     1
In Tableau 11.4.2 and its corresponding part, the pivot is chosen as the largest positive element, which is y37. For this reason we write down Tableau 11.4.2*, where y7 replaces y3 in the basis.

Tableau 11.4.2
 fT             –1    –2    –1    –4
 fB   B   bB    D1    D2    D3    D4
 ------------------------------------
 –1   D1   0     1     0     2     1
 –2   D2   0     0     1     1     1
           1     0     0    –2    –1

Tableau 11.4.2*
 fT             –1    –2     1    –4
 fB   B   bB    D1    D2    D7    D4
 ------------------------------------
 –1   D1   0     1     0    –2     1
 –2   D2   0     0     1    –1     1
           1     0     0    (3)   –1
Tableau 11.4.3 gives an initial basic feasible solution but z < 0. Meanwhile, the algebraic sum of the elements of bB = 1.33 > 1. Hence, we make use of Lemmas 11.7 and 11.8 and replace Tableau 11.4.3 by Tableau 11.4.3*. In this Tableau, z > 0 and we calculate the marginal costs from (11.3.5b).

Tableau 11.4.3
 fT               –1    –2     1    –4
 fB   B   bB      D1    D2    D7    D4
 --------------------------------------
 –1   D1   2/3     1     0     0    1/3
 –2   D2   1/3     0     1     0    2/3
  1   D7   1/3     0     0     1   –1/3
 z = –1

Tableau 11.4.3* (part 2)
 fT                1     2    –1     4
 fB   B   bB      D5    D6    D3    D8
 --------------------------------------
  1   D5   2       1     0     0     1
  2   D6   1       0     1     0    (1)
 –1   D3   1       0     0     1     0
 --------------------------------------
 z = 3             0     0     0    –1

Tableau 11.4.4 (part 3)
 fT                1     2    –1     4
 fB   B   bB      D5    D6    D3    D8
 --------------------------------------
  1   D5   1       1    –1     0     0
  4   D8   1       0     1     0     1
 –1   D3   1       0     0     1     0
 --------------------------------------
 z = 4             0     1     0     0
One-sided Chebyshev solution from below
For the one-sided Chebyshev solution from below, the problem
© 2008 by Taylor & Francis Group, LLC
Chapter 11: One-Sided Chebyshev Approximation
361
would be (11.4.2a)
minimize ||Ca – f||∞
subject to Ca ≥ f
(11.4.2.b)
The formulation of the dual form of the linear programming problem is straightforward and is given by maximize z = [fT –fT]b subject to the constraints T
T
C –C b = e m+1 T T 0 e and bi > 0, i = 1, 2, …, 2n Our algorithm for one-sided Chebyshev from above can also be applied to the one-sided Chebyshev from below problem. The function LA_Linfside() calculates either the one-sided Chebyshev solution from above or from below, depending on an indicator specified by the user. For the solution from below, we follow the steps of Section 6.4.2. We multiply both matrix C and vector f by –1. The inequality (11.4.2b) reduces to [–C]a ≤ [–f] and the problem is solved as a one-sided Chebyshev approximation from above. The solution vector a would be for the one-sided from below. However, the residual vector r, would be multiplied by –1. 11.5
Numerical results and comments
LA_Linfside() implements this algorithm. DR_Linfside() tests the same 8 examples that were solved in Chapter 6 for the one-sided L1 approximation problem. The driver tests these examples twice; once for the one-sided Chebyshev solution from above and once for the
© 2008 by Taylor & Francis Group, LLC
362
Numerical Linear Approximation in C
one-sided Chebyshev solution from below. Table 11.5.1 shows the results of 3 of the examples, computed in single-precision. For each example, the number of iterations and the optimum norm for the one-sided Chebyshev solution from above, the ordinary Chebyshev solution, and the one-sided Chebyshev solution from below are shown. In this table, “no solution” indicates that the programming problem has an unbounded solution. The results show that the number of iterations for the one-sided Chebyshev solution from above (or from below), if either exists, is of the same order of that of the (ordinary) Chebyshev solution of the given problems. Table 11.5.1 One-sided Chebyshev One-sided from above solution from below –––––––––––––––––––––––––––––––––––––––––––––––––––––––– Example C(n×m) Iter z Iter z Iter z –––––––––––––––––––––––––––––––––––––––––––––––––––––––– 1 4×2 no solution 5 10.0 no solution 2 10× 5 12 3.00 9 1.7778 no solution 3 25×10 23 0.0142 23 0.0071 23 0.0142 Example 1 has no one-sided Chebyshev solution from above nor from below. Example 2 has one-sided Chebyshev solution from above but not from below. The third example has both one-sided Chebyshev solutions. For Example 3, consider the next section. 11.5.1
Simple relationships between the Chebyshev and one-sided Chebyshev approximations
It is observed in example 3 that the value of z for either the one-sided Chebyshev approximation from above or from below, is exactly twice that of the ordinary Chebyshev approximation. This relationship exists when one of the approximating functions is a constant term. In this case, the one-sided Chebyshev approximations are just shifted ordinary Chebyshev approximations and the optimum value of z for the one-sided Chebyshev approximation is twice the optimum value of z for the ordinary Chebyshev approximation. © 2008 by Taylor & Francis Group, LLC
Chapter 11: One-Sided Chebyshev Approximation
363
In the example of Figure 11-1, the approximating functions for the one-sided Chebyshev approximation from above, the Chebyshev approximation and the one-sided Chebyshev approximation from below are (–2.6 + 2x – 0.156x2), (–0.8 + 2x – 0.156x2) and (1 + 2x – 0.156x2) respectively. The three approximating functions differ only in the constant terms. The difference between the constant terms in the second and the first functions = the difference between the constant terms in the third and the second functions = the Chebyshev deviation 1.8. This and other relationships between the one-sided and the ordinary approximations for certain classes of approximating functions, for certain types of norms, have been addressed by Phillips [9]. We now comment on the algorithms of Dax [6], Joe and Bartels [8], Sklar [11] and Roberts and Barrodale [10] mentioned in Section 11.2. Dax’s algorithm is more involved than ours, as it requires the least squares solution of the system Ca = f. In the other three algorithms, one would store matrix C twice; once for (11.2.1a), (11.2.2a) or (11.2.3a) and once for (11.2.1b), (11.2.2b) or (11.2.3b), where E is replaced by C. The same can be said about storing vector f twice. Hence, their algorithms need more computer storage than ours and nearly double the number of arithmetic operations. We claim that for this problem, a special purpose program such as ours would be more efficient than a general purpose one such as those of [8], [11] and [10]. See similar comments at the end of Chapter 6, for the one-sided L1 approximation problem.
References 1. 2. 3.
Abdelmalek, N.N., Chebyshev solution of overdetermined systems of linear equations, BIT, 15(1975)117-129. Abdelmalek, N.N., The discrete linear one-sided Chebyshev approximation, Journal of Institute of Mathematics and Applications, 18(1976)361-370. Armstrong, R.D. and Sklar, M.G., A linear programming algorithm for curve fitting in the L∞ norm, Numerical Functional Analysis and Optimization, 2(1980)187-218.
© 2008 by Taylor & Francis Group, LLC
364
4.
5.
6. 7. 8. 9. 10.
11. 12. 13. 14. 15.
Numerical Linear Approximation in C
4. Barrodale, I. and Phillips, C., An improved algorithm for discrete Chebyshev linear approximation, Proceedings of the Fourth Manitoba Conference on Numerical Mathematics, Hartnell, B.L. and Williams, H.C. (eds.), Winnipeg, Manitoba, Canada, pp. 177-190, 1975.
5. Barrodale, I. and Phillips, C., Algorithm 495: Solution of an overdetermined system of linear equations in the Chebyshev norm, ACM Transactions on Mathematical Software, 1(1975)264-270.
6. Dax, A., The minimax solution of linear equations subject to linear constraints, IMA Journal of Numerical Analysis, 9(1989)95-109.
7. Hadley, G., Linear Programming, Addison-Wesley, Reading, MA, 1962.
8. Joe, B. and Bartels, R., An exact penalty method for constrained, discrete, linear l∞ data fitting, SIAM Journal on Scientific and Statistical Computation, 4(1983)76-84.
9. Phillips, D.L., A note on best one-sided approximations, Communications of ACM, 14(1971)598-600.
10. Roberts, F.D.K. and Barrodale, I., An algorithm for discrete Chebyshev linear approximation with linear constraints, International Journal for Numerical Methods in Engineering, 15(1980)797-807.
11. Sklar, M.G., L∞ norm estimation with linear restrictions on the parameters, Numerical Functional Analysis and Optimization, 3(1981)53-68.
12. Tou, J.T. and Gonzalez, R.C., Pattern Recognition Principles, Addison-Wesley, Reading, MA, 1974.
13. Watson, G.A., One-sided approximation and operator equations, Journal of Institute of Mathematics and Applications, 12(1973)197-208.
14. Watson, G.A., The calculation of best linear one-sided Lp approximations, Mathematics of Computation, 27(1973)607-620.
15. Watson, G.A., On the best linear one-sided Chebyshev approximation, Journal of Approximation Theory, 7(1973)48-58.
Chapter 11: DR_Linfside
11.6
365
DR_Linfside
/*------------------------------------------------------------------
DR_Linfside
--------------------------------------------------------------------
This program is a driver for the function LA_Linfside(), which
calculates the one-sided Chebyshev solution from above or from below
of an overdetermined system of linear equations.

The overdetermined system has the form

c*a = f

"c" is a given real n by m matrix of rank k, k <= m < n.
"f" is a given real n vector.
"a" is the solution m vector.

This driver contains 8 examples from which the results of examples
1, 6 and 7 are given in the text. All the examples are solved twice;
once for the one-sided Chebyshev approximation from above and once
for the one-sided Chebyshev approximation from below.
-------------------------------------------------------------------*/

#include "DR_Defs.h"
#include "LA_Prototypes.h"

#define N1  4
#define M1  2
#define N2  5
#define M2  3
#define N3  6
#define M3  3
#define N4  7
#define M4  3
#define N5  8
#define M5  4
#define N6  10
#define M6  5
#define N7  25
#define M7  10
#define N8  8
#define M8  4
void DR_Linfside (void)
© 2008 by Taylor & Francis Group, LLC
366
Numerical Linear Approximation in C
{
    /*----------------------------------------
      Constant matrices/vectors
    ----------------------------------------*/
    static tNumber_R c1init[N1][M1] =
    {
        { 0.0,  -2.0 },
        { 0.0,  -4.0 },
        { 1.0,  10.0 },
        {-1.0,  15.0 }
    };
    static tNumber_R c2init[N2][M2] =
    {
        { 1.0,  2.0,  0.0 },
        {-1.0, -1.0,  0.0 },
        { 1.0,  3.0,  0.0 },
        { 0.0,  1.0,  0.0 },
        { 0.0,  0.0,  1.0 }
    };
    static tNumber_R c3init[N3][M3] =
    {
        { 0.0, -1.0,  0.0 },
        { 1.0,  3.0, -4.0 },
        { 1.0,  0.0,  0.0 },
        { 0.0,  0.0,  1.0 },
        {-1.0,  1.0,  2.0 },
        { 1.0,  1.0,  1.0 }
    };
static tNumber_R c4init[N4][M4] = { { 1.0, 0.0, 1.0 }, { 1.0, 2.0, 2.0 }, { 1.0, 2.0, 0.0 }, { 1.0, 1.0, 0.0 }, { 1.0, 0.0, -1.0 }, { 1.0, 0.0, 0.0 }, { 1.0, 1.0, 1.0 } }; static tNumber_R c5init[N5][M5] = { { 1.0, -3.0, 9.0, -27.0 }, © 2008 by Taylor & Francis Group, LLC
Chapter 11: DR_Linfside { { { { { { {
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
-2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0,
4.0, 1.0, 0.0, 1.0, 4.0, 9.0, 16.0,
367 -8.0 -1.0 0.0 1.0 8.0 27.0 64.0
}, }, }, }, }, }, }
}; static tNumber_R c6init[N6][M6] = { { 1.0, 0.0, 0.0, 0.0, 0.0 }, { 0.0, 1.0, 0.0, 0.0, 0.0 }, { 0.0, 0.0, 1.0, 0.0, 0.0 }, { 0.0, 0.0, 0.0, 1.0, 0.0 }, { 0.0, 0.0, 0.0, 0.0, 1.0 }, { 1.0, 1.0, 1.0, 1.0, 1.0 }, { 0.0, 1.0, 1.0, 1.0, 1.0 }, {-1.0, 0.0, -1.0, -1.0, -1.0 }, { 1.0, 1.0, 0.0, 1.0, 1.0 }, { 1.0, 1.0, 1.0, 0.0, 1.0 } }; static tNumber_R { { 1.0, 1.0, { 1.0, 2.0, { 1.0, 3.0, { 1.0, 4.0, { 1.0, 5.0, { 1.0, 6.0, { 1.0, 7.0, { 1.0, 8.0, };
c8init[N8][M8] = 1.0, 4.0, 9.0, 16.0, 25.0, 36.0, 49.0, 64.0,
1.0 4.0 9.0 16.0 25.0 36.0 49.0 64.0
static tNumber_R f1[N1+1] = { NIL, -12.0, 6.0, 0.0, 5.0 };
}, }, }, }, }, }, }, }
static tNumber_R f2[N2+1] = { NIL, 1.0, 2.0, 1.0, -3.0, 0.0 };
© 2008 by Taylor & Francis Group, LLC
368
Numerical Linear Approximation in C static tNumber_R f3[N3+1] = { NIL, 1.0, 2.0, 3.0, 2.0, 2.0, 4.0 }; static tNumber_R f4[N4+1] = { NIL, 0.0, -2.0, 1.0, -1.0, 5.0, 7.0, 0.0 }; static tNumber_R f5[N5+1] = { NIL, 3.0, -3.0, -2.0, 0.0, 7.0, -1.0, 5.0, 2.0 }; static tNumber_R f6[N6+1] = { NIL, 1.0, -1.0, 0.0, -1.0, 1.0, 0.0, 2.0, 3.0, -3.0, -2.0 }; static tNumber_R f7[N7+1] { NIL, 0.0872673, 0.0872794, 0.3491184, 0.3498802, 0.6111334, 0.6150641, 0.8733883, 0.8841621, 1.135895, 1.157550, };
= 0.0873029, 0.3513824, 0.6230824, 0.9071868, 1.206257,
0.0873315, 0.3532572, 0.6336395, 0.9400757, 1.283258,
0.0873576, 0.3550109, 0.6441493, 0.9766021, 1.384432
static tNumber_R f8[N8+1] = { NIL, 2.0, 2.5, 2.0, 6.5, 3.5, 4.5, 6.0, 7.0 }; /*---------------------------------------Variable matrices/vectors ----------------------------------------*/ tMatrix_R ct = alloc_Matrix_R (MMc_COLS, NN_ROWS); tVector_R f = alloc_Vector_R (NN_ROWS); tVector_R r = alloc_Vector_R (NN_ROWS); tVector_R a = alloc_Vector_R (MMc_COLS); tMatrix_R c7 = alloc_Matrix_R (N7, M7); tMatrix_R tMatrix_R
c1 c2
© 2008 by Taylor & Francis Group, LLC
= init_Matrix_R (&(c1init[0][0]), N1, M1); = init_Matrix_R (&(c2init[0][0]), N2, M2);
Chapter 11: DR_Linfside
369
tMatrix_R tMatrix_R tMatrix_R tMatrix_R tMatrix_R
c3 c4 c5 c6 c8
= = = = =
init_Matrix_R init_Matrix_R init_Matrix_R init_Matrix_R init_Matrix_R
(&(c3init[0][0]), (&(c4init[0][0]), (&(c5init[0][0]), (&(c6init[0][0]), (&(c8init[0][0]),
int int tNumber_R
irank, iter, iside, kase; i, j, k, m, n, Iexmpl; d, dd, ddd, e, ee, eee, z;
eLaRc
rc = LaRcOk;
N3, N4, N5, N6, N8,
M3); M4); M5); M6); M8);
iside = 1; for (j = 1; j <= 5; j++) { d = 0.15* (j-3); dd = d*d; ddd = d*dd; for (i = 1; i <= 5; i++) { e = 0.15* (i-3); ee = e*e; eee = e*ee; k = 5* (j-1) + i; c7[k][1] = 1.0; c7[k][2] = d; c7[k][3] = e; c7[k][4] = dd; c7[k][5] = ee; c7[k][6] = e*d; c7[k][7] = ddd; c7[k][8] = eee; c7[k][9] = dd*e; c7[k][10] = ee*d; } } prn_dr_bnr ("DR_Linfside, One-Sided Chebyshev Solutions of an " "Overdetermined System of Linear Equations"); for (kase = 1; kase <= 2; kase++) { iside = 2 - kase; for (Iexmpl = 1; Iexmpl <= 8; Iexmpl++) { © 2008 by Taylor & Francis Group, LLC
370
Numerical Linear Approximation in C switch (Iexmpl) { case 1: n = N1; m = M1; for (i = 1; i <= n; i++) { f[i] = f1[i]; for (j = 1; j <= m; j++) ct[j][i] = c1[i][j]; } break; case 2: n = N2; m = M2; for (i = 1; i <= n; i++) { f[i] = f2[i]; for (j = 1; j <= m; j++) ct[j][i] = c2[i][j]; } break; case 3: n = N3; m = M3; for (i = 1; i <= n; i++) { f[i] = f3[i]; for (j = 1; j <= m; j++) ct[j][i] = c3[i][j]; } break; case 4: n = N4; m = M4; for (i = 1; i <= n; i++) { f[i] = f4[i]; for (j = 1; j <= m; j++) ct[j][i] = c4[i][j]; } break; case 5: n = N5; m = M5;
© 2008 by Taylor & Francis Group, LLC
Chapter 11: DR_Linfside
371
for (i = 1; i <= n; i++) { f[i] = f5[i]; for (j = 1; j <= m; j++) ct[j][i] = c5[i][j]; } break; case 6: n = N6; m = M6; for (i = 1; i <= n; i++) { f[i] = f6[i]; for (j = 1; j <= m; j++) ct[j][i] = c6[i][j]; } break; case 7: n = N7; m = M7; for (i = 1; i <= n; i++) { f[i] = f7[i]; for (j = 1; j <= m; j++) ct[j][i] = c7[i][j]; } break; case 8: n = N8; m = M8; for (i = 1; i <= n; i++) { f[i] = f8[i]; for (j = 1; j <= m; j++) ct[j][i] = c8[i][j]; } break; default: break; } prn_algo_bnr ("Linfside"); prn_example_delim(); PRN ("Example #%d: Size of matrix \"c\", %d by %d\n", © 2008 by Taylor & Francis Group, LLC
372
Numerical Linear Approximation in C Iexmpl, n, m); prn_example_delim(); if (iside == 1) PRN ("One-sided Chebyshev Solution from above\n"); else PRN ("One-sided Chebyshev Solution from below\n"); prn_example_delim(); PRN ("r.h.s. Vector \"f\"\n"); prn_Vector_R (f, n); PRN ("Transpose of Coefficient Matrix, \"ct\"\n"); prn_Matrix_R (ct, m, n); rc = LA_Linfside (iside, m, n, ct, f, &irank, &iter, r, a, &z); if (rc >= LaRcOk) { PRN ("\n"); PRN ("Results of the " "One-sided Chebyshev Approximation\n"); PRN ("One-sided Chebyshev solution vector, \"a\"\n"); prn_Vector_R (a, m); PRN ("One-sided Chebyshev residual vector \"r\"\n"); prn_Vector_R (r, n); PRN ("One-sided Chebyshev norm \"z\" = %8.4f\n", z); PRN ("Rank of matrix \"c\" = %d, No. of Iterations =" " %d\n", irank, iter); } prn_la_rc (rc); } } free_Matrix_R (ct, MMc_COLS); free_Vector_R (f); free_Vector_R (r); free_Vector_R (a); free_Matrix_R (c7, N7); uninit_Matrix_R (c1); uninit_Matrix_R (c2); uninit_Matrix_R (c3); uninit_Matrix_R (c4); uninit_Matrix_R (c5); uninit_Matrix_R (c6); uninit_Matrix_R (c8);
} © 2008 by Taylor & Francis Group, LLC
Chapter 11: LA_Linfside
11.7
373
LA_Linfside
/*------------------------------------------------------------------LA_Linfside --------------------------------------------------------------------This program calculates the one-sided Chebyshev solution from above or from below of an overdetermined system of linear equations. It uses a modified simplex method to the linear programming formulation of the problem. The system of linear equations has the form c*a = f "c" is a given real n by m matrix of rank k, k <= m < n. "f" is a given real n vector. The problem is to calculate the elements of the m vector "a" that gives the minimum Chebyshev residual norm z. z=max|r[i]|,
i = 1, 2, ..., n
where r[i] is the ith residual and is given by r[i] = f[i] - (c[i][1]*a[1] + c[i][2]*a[2] + ... + c[i][m]*a[m]), i = 1, 2, ..., n subject to the conditions r[i] => 0, for the one-sided L-One solution from above. or r[i] =< 0, for the one-sided L-One solution from below. [Note: For LA_Linfside vector r = f - ca, while for LA_Loneside, r = ca - f]. Inputs iside An integer specifying the action to be performed. If iside = 1, the one-sided Chebyshev solution from above is calculated. If iside != 1, the one-sided Chebyshev solution from below is calculated. m Number of columns of matrix "c" in the system c*a = f. n Number of rows of matrix "c" in the system c*a = f. ct A real (m+1) by n matrix. Its first m rows and its n columns contain the transpose of matrix "c" of the system c*a = f. Its (m+1)th row will be filled with ones by the program.
© 2008 by Taylor & Francis Group, LLC
374 f
Numerical Linear Approximation in C A real n vector containing the r.h.s. of the system c*a = f.
Local Variables binv A real (m + 1) square matrix containing the inverse basis matrix in the linear programming problem. bv A real (m + 1) vector containing the basic solution linear programming problem. icbas An integer (m + 1) vector containing the indices of columns of matrix "ct" forming the basis matrix. irbas An integer (m + 1) vector containing the indices of rows of matrix "ct".
of the in the the the
Outputs irank Calculated rank of matrix "c". iter Number of iterations, or the number of times the simplex tableau is changed by a Gauss-Jordan step. a A real (m + 1) vector. Its first m elements are the one-sided Chebyshev solution to the system c*a = f. r A real n vector containing the one-sided Chebyshev residual vector r = (f - c*a). z The optimum one-sided Chebyshev norm of the problem. Returns one of LaRcSolutionUnique LaRcSolutionProbNotUnique LaRcSolutionDefNotUniqueRD LaRcNoFeasibleSolution LaRcErrBounds LaRcErrNullPtr LaRcErrAlloc -------------------------------------------------------------------*/ #include "LA_Prototypes.h" eLaRc LA_Linfside (int iside, int m, int n, tMatrix_R ct, tVector_R f, int *pIrank, int *pIter, tVector_R r, tVector_R a, tNumber_R *pZ) { tMatrix_R binv = alloc_Matrix_R (m + 1, m + 1); tVector_R bv = alloc_Vector_R (m + 1); tVector_I icbas = alloc_Vector_I (m + 1); tVector_I irbas = alloc_Vector_I (m + 1); tVector_I ibound = alloc_Vector_I (n); int int tNumber_R
i = 0, j = 0, kl = 0, m1 = 0; ijk = 0, iout = 0, jin = 0, ivo = 0, itest = 0; d = 0.0, piv = 0.0;
/* Validation of the data before executing the algorithm */
© 2008 by Taylor & Francis Group, LLC
Chapter 11: LA_Linfside eLaRc rc = LaRcSolutionUnique; VALIDATE_BOUNDS ((0 < m) && (m < n)); VALIDATE_PTRS (ct && f && pIrank && pIter && r && a && pZ); VALIDATE_ALLOC (binv && bv && icbas && irbas && ibound); *pIrank = m; *pIter = 0; m1 = m + 1; kl = 1; /* Initializing the data */ LA_linfside_init (iside, m, n, ct, irbas, binv, ibound, a); iout = 0; /* Part 1 of the algorithm */ for (iout = 1; iout <= m1; iout++) { piv = 0.0; for (j = 1; j <= n; j++) { d = fabs (ct[iout][j]); if (d > piv) { jin = j; piv = d; } } if (piv > EPS) { /* A Gauss-Jordan elimination step, */ LA_linfside_gauss_jordn (iout, jin, kl, m, n, ct, icbas, binv, bv); *pIter = *pIter + 1; } /* Detection of rank deficiency of matrix "c" */ LA_linfside_detect_rank (piv, iout, jin, &kl, m, n, ct, f, icbas, irbas, binv, bv, ibound, pIrank, pIter); } /* Part 2; obtaining a basic feasible solution */ LA_linfside_part_2 (kl, m, n, ct, f, icbas, binv, bv, ibound, pIter); /* Calculating the initial residual vector r and norm z */ LA_linfside_resid_norm (kl, m, n, ct, f, icbas, binv, bv, ibound, r, pZ);
© 2008 by Taylor & Francis Group, LLC
375
376
Numerical Linear Approximation in C
/* Part 3 of the algorithm */ for (ijk = 1; ijk <= n*n; ijk++) { ivo = 0; /* Determine the vector that enters the basis */ LA_linfside_vent (&ivo, &jin, kl, m, n, icbas, r, pZ); /* Calculate the results */ if (ivo == 0) { rc = LA_linfside_res (iside, m, n, kl, iout, f, icbas, irbas, binv, bv, ibound, r, a, *pIrank, pZ); GOTO_CLEANUP_RC (rc); } if (ivo != 1) { for (i = kl; i <= m1; i++) { ct[i][jin] = bv[i] - ct[i][jin]; } r[jin] = *pZ - r[jin]; f[jin] = -f[jin]; ibound[jin] = -ibound[jin]; } itest = 0; /* Determine the vector that leaves the basis */ LA_linfside_vleav (&itest, &iout, jin, kl, m, ct, bv); if (itest != 1) { GOTO_CLEANUP_RC (LaRcNoFeasibleSolution); } /* A Gauss-Jordan elimination step */ LA_linfside_gauss_jordn (iout, jin, kl, m, n, ct, icbas, binv, bv); *pIter = *pIter + 1; d = r[jin]; for (j = 1; j <= n; j++) { r[j] = r[j] - d * (ct[iout][j]); } *pZ = *pZ - d * (bv[iout]); }
© 2008 by Taylor & Francis Group, LLC
Chapter 11: LA_Linfside CLEANUP: free_Matrix_R free_Vector_R free_Vector_I free_Vector_I free_Vector_I
(binv, m + 1); (bv); (icbas); (irbas); (ibound);
return rc; } /*------------------------------------------------------------------Initializing the program data for LA_Linfside() -------------------------------------------------------------------*/ void LA_linfside_init (int iside, int m, int n, tMatrix_R ct, tVector_I irbas, tMatrix_R binv, tVector_I ibound, tVector_R a) { int i, j, m1; tNumber_R e; m1 = m + 1; for (i = 1; i <= m1; i++) { a[i] = 0.0; irbas[i] = i; } for (j = 1; j <= m1; j++) { for (i = 1; i <=m1; i++) { binv[i][j] = 0.0; } binv[j][j] = 1.0; } e = 1.0; if (iside != 1) e = 0.0; for (j = 1; j <= n; j++) { ct[m1][j] = e; ibound[j] = 1; } } /*------------------------------------------------------------------Detection of the rank of matrix "c" in LA_Linfside() -------------------------------------------------------------------*/ void LA_linfside_detect_rank (tNumber_R piv, int iout, int jin, int *pKl, int m, int n, tMatrix_R ct, tVector_R f, tVector_I icbas, tVector_I irbas, tMatrix_R binv, tVector_R bv,
© 2008 by Taylor & Francis Group, LLC
377
378
Numerical Linear Approximation in C tVector_I ibound, int *pIrank, int *pIter)
{ int
i, j, k, m1, icb;
m1 = m + 1; if (piv < EPS && iout < m1) { swap_rows_Matrix_R (ct, *pKl, iout); k = irbas[iout]; irbas[iout] = irbas[*pKl]; irbas[*pKl] = 0; for (j = *pKl; j <= m; j++) { binv[iout][j] = binv[*pKl][j]; binv[*pKl][j] = 0.0; } bv[*pKl] = 0.0; icbas[iout] = icbas[*pKl]; icbas[*pKl] = 0; for (i = *pKl; i <= m1; i++) { binv[i][iout] = binv[i][*pKl]; binv[i][*pKl] = 0.0; } *pIrank = *pIrank - 1; *pKl = *pKl + 1; } if (piv < EPS && iout == m1) { for (j = 1; j <= n; j++) { icb = 0; for (i = *pKl; i <= m; i++) { if (j == icbas[i]) icb = 1; } if (icb == 0) { jin = j; break; } } f[jin] = -f[jin]; ibound[jin] = -ibound[jin]; for (i = *pKl; i <= m; i++) { ct[i][jin] = -ct[i][jin];
© 2008 by Taylor & Francis Group, LLC
Chapter 11: LA_Linfside } ct[m1][jin] = 1.0 - ct[m1][jin]; /* A Gauss-Jordan elimination step, */ LA_linfside_gauss_jordn (iout, jin, *pKl, m, n, ct, icbas, binv, bv); *pIter = *pIter + 1; } } /*------------------------------------------------------------------Obtaining a basic feasible solution of the linear programming problem in LA_Linfside() -------------------------------------------------------------------*/ void LA_linfside_part_2 (int kl, int m, int n, tMatrix_R ct, tVector_R f, tVector_I icbas, tMatrix_R binv, tVector_R bv, tVector_I ibound, int *pIter) { int i, k, m1; int iout, jin; m1 = m + 1; for (i = kl; i <= m1; i++) { if (bv[i] < 0.0) { iout = i; jin = icbas[i]; f[jin] = -f[jin]; ibound[jin] = -ibound[jin]; for (k = kl; k <= m1; k++) { ct[k][jin] = bv[k] - ct[k][jin]; } /* A Gauss-Jordan elimination step, */ LA_linfside_gauss_jordn (iout, jin, kl, m, n, ct, icbas, binv, bv); *pIter = *pIter + 1; } } } /*------------------------------------------------------------------Calculating initial residual vector r and norm z in LA_Linfside() -------------------------------------------------------------------*/ void LA_linfside_resid_norm (int kl, int m, int n, tMatrix_R ct, tVector_R f, tVector_I icbas, tMatrix_R binv, tVector_R bv, tVector_I ibound, tVector_R r, tNumber_R *pZ) {
© 2008 by Taylor & Francis Group, LLC
379
380
Numerical Linear Approximation in C int tNumber_R
i, j, k, m1, icb; d, g, s;
m1 = m + 1; s = 0.0; for (i = kl; i <= m1; i++) { k = icbas[i]; s = s + bv[i] * (f[k]); } *pZ = s; if (*pZ < 0.0) { s = 0.0; for (i = kl; i <= m1; i++) { s = s + bv[i]; } d = s - 1.0; if (fabs (d) > EPS) { g = 1.0/d; for (i = kl; i <= m1; i++) { bv[i] = g * (bv[i]); } *pZ = - g * (*pZ); for (j = 1; j <= n; j++) { f[j] = -f[j]; ibound[j] = -ibound[j]; icb = 0; for (i = kl; i <= m1; i++) { if (j == icbas[i]) icb = 1; } if (icb == 0) { s = 0.0; for (i = kl; i <= m1; i++) { s = s + ct[i][j]; } d = 1.0 - s; for (i = kl; i <= m1; i++) { ct[i][j] = ct[i][j] + d * (bv[i]); }
© 2008 by Taylor & Francis Group, LLC
Chapter 11: LA_Linfside } } for (j = kl; j <= m; j++) { s = 0.0; for (i = kl; i <= m1; i++) { s = s + binv[i][j]; } d = s; for (i = kl; i <= m1; i++) { binv[i][j] = -binv[i][j] + d * (bv[i]); } } for (i = kl; i <= m1; i++) { binv[i][m1] = bv[i]; } } } for (j = 1; j <= n; j++) { r[j] = 0.0; icb = 0; for (i = kl; i <= m1; i++) { if (j == icbas[i]) icb = 1; } if (icb == 0) { s = -f[j]; for (i = kl; i <= m1; i++) { k = icbas[i]; s = s + ct[i][j] * (f[k]); } r[j] = s; } } } /*------------------------------------------------------------------Determine the vector that leaves the basis in LA_Linfside() -------------------------------------------------------------------*/ void LA_linfside_vleav (int *pItest, int *pIout, int jin, int kl, int m, tMatrix_R ct, tVector_R bv) { int i, m1;
© 2008 by Taylor & Francis Group, LLC
381
382
Numerical Linear Approximation in C tNumber_R
d, g, thmax;
m1 = m + 1; thmax = 1.0/ (EPS * EPS); for (i = kl; i <= m1; i++) { d = ct[i][jin]; if (d > EPS) { g = bv[i]/d; if (g < thmax) { thmax = g; *pIout = i; *pItest = 1; } } } } /*------------------------------------------------------------------Determine the vector that enters the basis in LA_Linfside() -------------------------------------------------------------------*/ void LA_linfside_vent (int *pIvo, int *pJin, int kl, int m, int n, tVector_I icbas, tVector_R r, tNumber_R *pZ) { int i, j, m1, icb; tNumber_R d, e, g, tz; m1 = m + 1; g = 1.0/ (EPS*EPS); tz = *pZ + EPS; for (j = 1; j <= n; j++) { icb = 0; for (i = kl; i <= m1; i++) { if (j == icbas[i]) icb = 1; } if (icb == 0) { d = r[j]; if (d < -EPS) { e = d; if (e < g) { g = e; *pJin = j;
© 2008 by Taylor & Francis Group, LLC
Chapter 11: LA_Linfside *pIvo = 1; } } else if (d >= tz) { e = tz - d; if (e < g) { g = e; *pJin = j; *pIvo = -1; } } } } } /*------------------------------------------------------------------A Gauss-Jordan elimination step for LA_Linfside() -------------------------------------------------------------------*/ void LA_linfside_gauss_jordn (int iout, int jin, int kl, int m, int n, tMatrix_R ct, tVector_I icbas, tMatrix_R binv, tVector_R bv) { int i, j, k, m1; tNumber_R d, pivot; m1 = m + 1; pivot = ct[iout][jin]; for (j = 1; j <= n; j++) { ct[iout][j] = ct[iout][j]/pivot; } k = m1; /* kj = kl + *pIter; if (kj < m1) k = kj; for (j = kl; j <= k; j++) */ for (j = kl; j <= m1; j++) { binv[iout][j] = binv[iout][j]/pivot; } for (i = kl; i <= m1; i++) { if (i != iout) { d = ct[i][jin];
© 2008 by Taylor & Francis Group, LLC
383
384
Numerical Linear Approximation in C for (j = 1; j <= n; j++) { ct[i][j] = ct[i][j] - d * (ct[iout][j]); } for (j = kl; j <= k; j++) { binv[i][j] = binv[i][j] - d * (binv[iout][j]); } } } for (i = kl; i <= m1; i++) { bv[i] = binv[i][m1]; } icbas[iout] = jin;
} /*------------------------------------------------------------------Calculate the results of LA_Linfside() -------------------------------------------------------------------*/ eLaRc LA_linfside_res (int iside, int m, int n, int kl, int iout, tVector_R f, tVector_I icbas, tVector_I irbas, tMatrix_R binv, tVector_R bv, tVector_I ibound, tVector_R r, tVector_R a, int irank, tNumber_R *pZ) { int i, j, k, m1; tNumber_R d, g, s, piv, pivot; m1 = m + 1; piv = 0.0; for (i = kl; i <= m1; i++) { d = bv[i]; if (d > piv) { piv = d; iout = i; } } if (iout != m1) { for (j = kl; j <= m1; j++) { d = binv[iout][j]; binv[iout][j] = binv[m1][j]; binv[m1][j] = d; } k = icbas[iout]; icbas[iout] = icbas[m1];
© 2008 by Taylor & Francis Group, LLC
Chapter 11: LA_Linfside icbas[m1] = k; } pivot = binv[m1][m1]; for (j = kl; j <= m1; j++) { binv[m1][j] = binv[m1][j]/pivot; } for (i = kl; i <= m; i++) { d = binv[i][m1]; for (j = kl; j <= m1; j++) { binv[i][j] = binv[i][j] - d * (binv[m1][j]); } } for (j = kl; j <= m; j++) { k = icbas[j]; g = f[k]; if (iside == 1 && ibound[k] == 1) g = g - *pZ; if (iside != 1 && ibound[k] == -1) g = g - *pZ; binv[j][m1] = g; } for (j = kl; j <= m; j++) { s = 0.0; for (i = kl; i <= m; i++) { s = s + binv[i][m1] * (binv[i][j]); } k = irbas[j]; a[k] = s; } /* Calculation of the one-sided residuals */ for (j = 1; j <= n; j++) { d = r[j]; if (iside == 1) { if (ibound[j] == 1) d = *pZ - d; r[j] = d; } else if (iside != 1) { if (ibound[j] == -1) d = *pZ - d; r[j] = - d; } }
© 2008 by Taylor & Francis Group, LLC
385
386
Numerical Linear Approximation in C
if (irank < m) return LaRcSolutionDefNotUniqueRD; if (kl > 1) return LaRcSolutionProbNotUnique; for (i = 1; i <= m1; i++) { if (bv[i] < EPS) return LaRcSolutionProbNotUnique; } return LaRcSolutionUnique; }
Chapter 12 Chebyshev Approximation with Bounded Variables
12.1
Introduction
In Chapter 10, an algorithm for calculating the Chebyshev solution of overdetermined systems of linear equations is given. In that algorithm, the Chebyshev norm of the residual vector is made as small as possible. In Chapter 11, an algorithm for the one-sided Chebyshev solution of overdetermined linear equations is presented. In that algorithm, the Chebyshev solution is subject to the additional constraints that all the elements of the residual vector are either non-positive or non-negative. In this chapter, we present the Chebyshev approximation with bounded variables [3], where the additional constraints are on the elements of the solution vector, not on the elements of the residual vector. Typically, the elements of the solution vector are to be bounded between –1 and 1. This chapter is analogous to Chapter 7 for the linear bounded L1 approximation. The solid curve in Figure 12-1 is for the ordinary Chebyshev approximation, for the 8 points of Figure 2-1, and is given by y = –0.8 + 2x – 0.156x². The elements of the solution vector for this approximating curve are (–0.8, 2, –0.156). They are not all bounded between –1 and 1. For the same 8 points, the Chebyshev approximation with bounded variables between –1 and 1 is given by y = 1 + x – 0.024x². The dotted curve in Figure 12-1 shows this approximation. In this chapter, linear programming techniques [7] are used to solve this problem, where minimum computer storage is required and no conditions are imposed on the coefficient matrix. The coefficient matrix could be a rank deficient one.
Consider the overdetermined system of linear equations (12.1.1)
Ca = f
C = (cij) is a given real n by m matrix of rank k, k ≤ m < n, and f = (fi) is a given real n-vector. The Chebyshev solution to system Ca = f is the m-vector a = (aj) that minimizes the Chebyshev norm z of the residuals

z = max|ri|, i = 1, 2, …, n

where ri is the ith residual and is given by

(12.1.2)    r_i = \sum_{j=1}^{m} c_{ij} a_j - f_i, \qquad i = 1, 2, …, n
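For concreteness, the norm in (12.1.2) can be evaluated directly. The sketch below is illustrative only, using plain C with double instead of the library's tNumber_R and 0-based indexing:

    #include <math.h>

    /* Chebyshev norm z = max|r_i| of the residuals in (12.1.2),
       r_i = sum_j c_ij*a_j - f_i. Row-major c, 0-based arrays. */
    double linf_norm(int n, int m, const double *c, const double *a,
                     const double *f)
    {
        double z = 0.0;
        for (int i = 0; i < n; i++) {
            double r = -f[i];
            for (int j = 0; j < m; j++)
                r += c[i * m + j] * a[j];
            if (fabs(r) > z)
                z = fabs(r);
        }
        return z;
    }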
When the elements of the solution vector a are required to satisfy the additional conditions (12.1.3)
–1 ≤ ai ≤ 1, i = 1, 2, …, m
we have the problem of calculating the Chebyshev solution of (12.1.1) with bounded variables between –1 and 1. Figure 12-1 shows curve fitting with vertical parabolas for the same 8 points shown in Figure 2-1. The solid curve is the ordinary Chebyshev approximation and the dotted curve is the Chebyshev approximation with bounded variables between –1 and 1. If, instead of the constraints (12.1.3), we require the elements of vector a to satisfy the constraints (12.1.4)
ci ≤ ai ≤ di, i = 1, 2, …, m
where (ci) and (di) are elements of given m-vectors c and d, then by substituting variables, (12.1.4) reduces to the constraints (12.1.3) in the new variables. This has been given in Section 7.1. The Chebyshev approximation with non-negative parameters may be formulated as a Chebyshev approximation with bounded variables as well. See Section 12.1.1. In Section 12.2, the Chebyshev approximation with bounded variables is formulated as a special problem of some general constrained problems. We shall comment on these methods in Section 12.5. In Section 12.3, the linear programming formulation of the problem is presented. In Section 12.4, the algorithm is described and
in Section 12.5, numerical results and comments are given.
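The substitution of Section 7.1 that reduces (12.1.4) to (12.1.3) is mechanical enough to sketch in code. The helper below is hypothetical (not part of the library); it rescales the columns of C and shifts f so that the bounded solver can be applied, after which the solution is mapped back:

    /* Reduce lo_j <= a_j <= hi_j of (12.1.4) to -1 <= a'_j <= 1 via
       a_j = 0.5*(hi_j - lo_j)*a'_j + 0.5*(hi_j + lo_j).
       Columns of c are scaled and f is shifted in place. */
    void reduce_bounds(int n, int m, double *c, double *f,
                       const double *lo, const double *hi)
    {
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < m; j++) {
                f[i] -= c[i * m + j] * 0.5 * (hi[j] + lo[j]); /* shift r.h.s.  */
                c[i * m + j] *= 0.5 * (hi[j] - lo[j]);        /* scale column  */
            }
        }
    }
    /* After solving the scaled system with the bounded algorithm, recover
       a_j = 0.5*(hi_j - lo_j)*a'_j + 0.5*(hi_j + lo_j). */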
[Figure 12-1: Curve fitting a set of 8 points using Chebyshev approximation and Chebyshev approximation with bounded variables between –1 and 1.]
12.1.1  Linear Chebyshev approximation with non-negative parameters (NNLI)
If in (12.1.4) we take (ci) = 0 and (di) = Big, for i = 1, 2, …, m, where Big is a large number, we have the linear L∞ approximation with non-negative parameters, or non-negative L-infinity (NNLI). See Section 7.1.1 for the L1 approximation with non-negative parameters, or non-negative L1 approximation (NNL1).

12.2  A special problem of a general constrained one
Sklar [11] presented an algorithm to solve the problem (expressed here in our notation) minimize ||Ca – f||∞ subject to
ea ≤ Ea ≤ eb

His algorithm is a generalization of that of Armstrong and Sklar [4]. Also, Roberts and Barrodale [10] presented an algorithm to solve the problem

minimize ||Ca – f||∞
subject to Ga = g and ea ≤ Ea ≤ eb

Their algorithm is the extension of the algorithm of Barrodale and Phillips for the ordinary Chebyshev solution [5, 6]. In the above, C, G and E are matrices of appropriate dimensions, f is the vector associated with C, g is the vector associated with G, and ea and eb are two vectors associated with E. Also, not all of the arrays (G, g) and (E, ea, eb) are vacuous. If we take G = 0, g = 0 and take E = Im, ea = –em and eb = em, where Im and em are respectively an m-unit matrix and an m-vector each element of which equals 1, we get the Chebyshev solution with bounded variables between –1 and 1. As mentioned earlier, we shall comment on these algorithms in Section 12.5.

12.3  Linear programming formulation of the problem
The problem given by (12.1.1-3) may be reduced to a linear programming problem as follows. Let h = max_i|r_i| in (12.1.2), h ≥ 0. Hence, the primal form of the programming problem is

minimize h

subject to

-h \le \sum_{j=1}^{m} c_{ij} a_j - f_i \le h, \qquad i = 1, 2, …, n

and

–1 ≤ ai ≤ 1, i = 1, 2, …, m

In vector-matrix notation, the above inequalities become
Ca + h e_n ≥ f
–Ca + h e_n ≥ –f
–a ≥ –e_m
a ≥ –e_m

where aj, j = 1, 2, …, m, are unrestricted in sign and h ≥ 0; e_n and e_m are an n-vector and an m-vector, each element of which is 1. This formulation may be written more conveniently as

\text{minimize } Z = \mathbf{d}_{m+1}^T \begin{pmatrix} \mathbf{a} \\ h \end{pmatrix}

subject to

\begin{pmatrix} C & \mathbf{e}_n \\ -C & \mathbf{e}_n \\ -I_m & \mathbf{0} \\ I_m & \mathbf{0} \end{pmatrix} \begin{pmatrix} \mathbf{a} \\ h \end{pmatrix} \ge \begin{pmatrix} \mathbf{f} \\ -\mathbf{f} \\ -\mathbf{e}_m \\ -\mathbf{e}_m \end{pmatrix}

aj, j = 1, 2, …, m, unrestricted in sign, h ≥ 0

Here, dm+1 is an (m + 1)-vector whose first m elements are 0's and whose (m + 1)th element is 1; that is, dm+1 is the (m + 1)th column of an (m + 1)-unit matrix. Here, dm+1 is the same as em+1 in Chapters 10 and 11. Again, Im is an m-unit matrix and 0 is an m-zero vector. By rearranging the terms in the primal formulation, the dual form is (Section 3.6)

(12.3.1a)    maximize z = [f^T  –e_m^T  –f^T  –e_m^T] b

subject to

(12.3.1b)    \begin{pmatrix} C^T & -I_m & -C^T & I_m \\ \mathbf{e}_n^T & \mathbf{0}^T & \mathbf{e}_n^T & \mathbf{0}^T \end{pmatrix} \mathbf{b} = \mathbf{d}_{m+1}

(12.3.1c)    b_i ≥ 0, i = 1, 2, …, 2(n + m)
Let us write (12.3.1b) in the form (12.3.1d)
Db = dm+1
where D is the coefficient matrix on the l.h.s. of (12.3.1b). D is an (m + 1) by 2(n + m) matrix and is of rank (m + 1), despite the fact that matrix C may be rank deficient. As usual, let B denote the non-singular basis matrix for the linear programming problem (12.3.1). Since matrix D is of rank (m + 1), B is an (m + 1) square matrix. Let also (yj) be the columns forming the simplex tableau, (zj – gj) be the marginal costs for column (yj), bB be the basic solution, and z be the objective function. Let also the elements of the 2(n + m)-vector g^T = [f^T –e_m^T –f^T –e_m^T] of (12.3.1a) associated with the basic variables be the (m + 1)-vector gB. Then for j = 1, 2, …, 2(n + m), we have

(12.3.2a)    \mathbf{y}_j = B^{-1} D_j
(12.3.2b)    (z_j - g_j) = \mathbf{g}_B^T \mathbf{y}_j - g_j
(12.3.2c)    \mathbf{b}_B = B^{-1} \mathbf{d}_{m+1}
(12.3.2d)    z = \mathbf{g}_B^T \mathbf{b}_B
where Dj is the jth column of matrix D.

12.3.1  Properties of the matrix of constraints
Consider the two halves of matrix D in (12.3.1b, d), each of (n + m) columns. While matrix CT exists in the first n columns of the left half of D, matrix –CT exists in the first n columns of the right half. The last m columns of the left half are the negatives of the last m columns of the right half. We also make use of the fact that a unit matrix exists in the last m columns of each of the two halves of D. This kind of asymmetry and the existence of unit matrices in D will enable us to use a simplex tableau for this problem of only (m + 1) constraints in n variables, instead of a simplex tableau of (m + 1) constraints in 2(n + m) variables. This will be apparent from the coming analysis. From the definition of matrix D in (12.3.1d), we have for j = 1, 2, …, n
(12.3.3a)    D_j = \begin{pmatrix} C_j^T \\ 1 \end{pmatrix} \quad\text{and}\quad D_{j+n+m} = \begin{pmatrix} -C_j^T \\ 1 \end{pmatrix}
where again, Dj and CjT are respectively the jth column of D and the jth column of CT. Then we have (12.3.3b)
Dj + Dj+n+m = 2dm+1, j = 1, …, n
and also we have (12.3.3c)
Dj + Dj+n+m = 0, j = n + 1, …, n + m
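Since each of the first n columns of D is just a column of CT with a 1 appended, per (12.3.3a), the condensed tableau can be built directly. A minimal sketch with 0-based arrays follows; the library's LA_linfbv_init() performs the equivalent with its own 1-based conventions:

    /* Build the first n columns of D in (12.3.1d): column j is C_j^T
       with a 1 appended. c is n*m, ct is (m+1)*n, both row-major. */
    void build_condensed(int n, int m, const double *c, double *ct)
    {
        for (int j = 0; j < n; j++) {
            for (int i = 0; i < m; i++)
                ct[i * n + j] = c[j * m + i]; /* row i of ct = col i of C */
            ct[m * n + j] = 1.0;              /* the (m + 1)th row of ones */
        }
    }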
Definition. Let i and j, 1 ≤ i, j ≤ 2(n + m), be the indices of any two columns in matrix D such that |i – j| = (n + m). We define columns i and j as two corresponding columns. Consider the following lemmas.

Lemma 12.1
At any stage of the computation, the (m + 1)th column of matrix B–1 is the basic solution vector bB. This follows directly from (12.3.2c), since dm+1 is the (m + 1)th column of an (m + 1)-unit matrix.

Lemma 12.2
At any stage of the computation, the solution vector a and the objective function z are given by (Lemma 10.2)

(12.3.4a)    B^T \begin{pmatrix} \mathbf{a} \\ z \end{pmatrix} = \mathbf{g}_B

or

(12.3.4b)    (\mathbf{a}^T\ z) = \mathbf{g}_B^T B^{-1}
Lemma 12.3 Any two corresponding columns should not appear together in any basis matrix.
Lemma 12.4
For any two corresponding columns in the simplex tableau we have

y_i + y_{i+n+m} = 2 b_B,  i = 1, …, n
y_i + y_{i+n+m} = 0,  i = n + 1, …, n + m
(z_i – g_i) + (z_{i+n+m} – g_{i+n+m}) = 2z,  i = 1, 2, …, n
(z_i – g_i) + (z_{i+n+m} – g_{i+n+m}) = 2,  i = n + 1, …, n + m

Proof: By multiplying (12.3.3b, c) by B–1 and using (12.3.2a, c), the first two equations are proved. From (12.3.2a-c), the last two equations are proved.

Lemma 12.5
The residuals (ri), i = 1, 2, …, n, are given by one of the following relations

r_i = (z_i – g_i) – z,  i = 1, 2, …, n
r_i = z – (z_i – g_i),  i = n + m + 1, …, 2n + m
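Lemma 12.4 is what lets the algorithm store only one column of each corresponding pair. A hedged sketch of the recovery step (illustrative names; m1 = m + 1, 0-based arrays):

    /* Recover the tableau column and marginal cost of the corresponding
       column j + n + m, 1 <= j <= n, from the stored ones, per Lemma 12.4. */
    void corresponding_col(int m1, const double *y, const double *bB,
                           double mc, double z,
                           double *y_corr, double *mc_corr)
    {
        for (int i = 0; i < m1; i++)
            y_corr[i] = 2.0 * bB[i] - y[i]; /* y_j + y_{j+n+m} = 2*bB  */
        *mc_corr = 2.0 * z - mc;            /* marginal costs sum to 2z */
    }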
It is sufficient to prove the first relation. Let 1 ≤ j ≤ n. Then since CjT is the jth column of CT, i.e., the jth row of C, we write (12.1.2) as rj = aTCjT – gj, or, from the definition of Dj in (12.3.3a), rj = (aT z)Dj – gj – z. By substituting (12.3.4b), (12.3.2a) and (12.3.2d) in succession in the last expression of rj, we get rj = gBTB–1Dj – gj – z = (zj – gj) – z. This proves the first relation in the lemma. The second relation is proved in the same manner. See also Lemma 13.4 and [2].

Lemma 12.6
The last m columns in the (m + 1) by 2(n + m) simplex tableau are the first m columns of the inverse of the basis matrix, B–1. Initially, they are the first m columns of an (m + 1)-unit matrix.

12.4  Description of the algorithm
We only need to store in the initial simplex tableau the first n columns of matrix D of (12.3.1b, d). We call this the condensed tableau. From Lemma 12.4, if any column in the simplex tableau and
its marginal cost are known, the corresponding column and its marginal cost are easily derived. From Lemma 12.6, the first m columns of the (m + 1) matrix B–1 are available. We consider the first m columns of matrix B–1 (and their corresponding columns) as part of the simplex tableau. The algorithm is in 3 parts. In part 1, the simplex tableau is changed once when any one (usually the first column) of the first n columns of matrix D may form the (m + 1)th column of the basis matrix B. After changing the simplex tableau once, the initial basic solution bB is now available. From Lemma 12.1, bB is itself the (m + 1)th column of B–1. We remark that for this Gauss-Jordan step, each of the pivot element and the (m + 1)th element of bB is 1; that is, the (m + 1)th element of bB is ≥ 0. An (n + m)-index indicator vector whose elements are +1 or –1 is needed. If column j, 1 ≤ j ≤ n, is in the condensed tableau, index j of this vector has the value +1. If the corresponding column to column j is in the condensed tableau, index j has a value –1. Similarly, if column j of matrix B–1, 1 ≤ j ≤ m, has its marginal cost stored, the index (n + j) has the value +1. Otherwise, it has the value –1. In part 2, if one or more elements i, 1 ≤ i ≤ m, of the basic solution bB is negative, the initial basic solution is not feasible. For each basic column j, (2n + m + 1) ≤ j ≤ 2(n + m), associated with a negative element i in the basic solution, we make use of the second equation of Lemma 12.4 and replace column j by its corresponding column. Again, this does not require changing the simplex tableau, except that row i, which corresponds to the negative element of bB, is multiplied by –1. Row i of B–1 and the ith element of bB are also multiplied by –1. At the end of part 2, the initial objective function z is calculated from (12.3.2d) and z may have a positive or a negative value. If z is negative in part 3 of the algorithm, z increases monotonically until it acquires its optimum (positive) value. Part 3 is almost identical to part 3 of the algorithm for the ordinary Chebyshev solution [1] described in Chapter 10. The only difference is the following. At the beginning of part 3, we calculate the marginal costs not only for the columns of the condensed tableau, but also for the first m columns of matrix B–1. From (12.3.1a) and the above discussion, the prices for the first m columns of B–1 would be the
elements of the vector (–em). As in Chapter 10, the non-basic column to enter the basis is that which has the most negative marginal cost among: (a) the non-basic columns in the current condensed tableau, (b) the first m columns of matrix B–1, and (c) the corresponding columns of (a) and (b). The optimum residual vector r is calculated from Lemma 12.5 and the optimum (aT z) values are obtained from Lemma 12.2.

12.5  Numerical results and comments
LA_Linfbv() implements this algorithm. DR_Linfbv() tests 8 examples, including some with rank deficient matrices. All of the results satisfy the inequalities (12.1.3). Table 12.5.1 shows the results of 3 of the examples.

Table 12.5.1
                      Chebyshev          Chebyshev solution with
                      solution           bounded variables
Example   C(n×m)   Iterations    z       Iterations    z
   1       4×2          5      10.0           5      10.0
   2      10×5          9       1.778         6       1.778
   3      25×10        22       0.0071       19       0.2329

For each example, the number of iterations and the optimum norm for the Chebyshev solution with bounded variables are shown. For comparison purposes, the results of the ordinary Chebyshev approximation problem are also given. The number of iterations for this algorithm are, in general, smaller than those for the corresponding ordinary Chebyshev case.

We now comment on other algorithms that solve this problem. Using an extension of the well-known exchange algorithm, Powell [9] presented an algorithm for obtaining the Chebyshev solution of system Ca = f subject to the constraints (12.1.3). FORTRAN IV code for Powell's method was given by Madsen and Powell [8]. Excluding comments, the FORTRAN implementation consists of 289 lines of
code, as compared with the FORTRAN IV implementation of our algorithm, which consists of 165 lines of code. As noted in Section 12.2, both Sklar [11] and Roberts and Barrodale [10] presented general constrained Chebyshev algorithms that would solve the bounded Chebyshev problem as a special case. We observe that our algorithm requires less computer storage and, as a special purpose algorithm, would be more efficient than a general purpose algorithm, such as [9] and [11]. It is also noted by Roberts and Barrodale ([10], p. 798) that Powell's algorithm [8, 9] is more efficient than theirs.
References

1.  Abdelmalek, N.N., Chebyshev solution of overdetermined systems of linear equations, BIT, 15(1975)117-129.
2.  Abdelmalek, N.N., The discrete linear restricted Chebyshev approximation, BIT, 17(1977)249-261.
3.  Abdelmalek, N.N., Chebyshev and L1 solutions of overdetermined systems of linear equations with bounded variables, Numerical Functional Analysis and Optimization, 8(1985-86)399-418.
4.  Armstrong, R.D. and Sklar, M.G., A linear programming algorithm for curve fitting in the L∞ norm, Numerical Functional Analysis and Optimization, 2(1980)187-218.
5.  Barrodale, I. and Phillips, C., An improved algorithm for discrete Chebyshev linear approximation, Proceedings of the Fourth Manitoba Conference on Numerical Mathematics, Hartnell, B.L. and Williams, H.C. (eds.), Winnipeg, Manitoba, Canada, pp. 177-190, 1975.
6.  Barrodale, I. and Phillips, C., Algorithm 495: Solution of an overdetermined system of linear equations in the Chebyshev norm, ACM Transactions on Mathematical Software, 1(1975)264-270.
7.  Hadley, G., Linear Programming, Addison-Wesley, Reading, MA, 1962.
8.  Madsen, K. and Powell, M.J.D., A FORTRAN subroutine that calculates the minimax solution of linear equations subject to bounds on the variables, United Kingdom Atomic Energy Research Establishment, AERE-R7954, February 1975.
9.  Powell, M.J.D., The minimax solution of linear equations subject to bounds on the variables, Proceedings of the Fourth Manitoba Conference on Numerical Mathematics, Hartnell, B.L. and Williams, H.C. (eds.), Winnipeg, Manitoba, Canada, pp. 53-107, 1975.
10. Roberts, F.D.K. and Barrodale, I., An algorithm for discrete Chebyshev linear approximation with linear constraints, International Journal for Numerical Methods in Engineering, 15(1980)797-807.
11. Sklar, M.G., L∞ norm estimation with linear restrictions on the parameters, Numerical Functional Analysis and Optimization, 3(1981)53-68.
12.6  DR_Linfbv
/*------------------------------------------------------------------DR_Linfbv --------------------------------------------------------------------This program is a driver for the function LA_Linfbv(), which solves an overdetermined system of linear equations in the Chebyshev norm subject to the constraints that each element of the solution vector is bounded between -1 and 1; -1 <= a[j] <= 1,
j = 1, 2, ..., m
The overdetermined system has the form c*a = f. "c" is a given real n by m matrix of rank k, k <= m < n. "f" is a given real n vector. "a" is the solution m vector. This driver contains the 8 examples whose results are given in the text.
-------------------------------------------------------------------*/
#include "DR_Defs.h"
#include "LA_Prototypes.h"

#define NNMMc_ROWS  (NN_ROWS + MMc_COLS)
#define N1  4
#define M1  2
#define N2  5
#define M2  3
#define N3  6
#define M3  3
#define N4  7
#define M4  3
#define N5  8
#define M5  4
#define N6  10
#define M6  5
#define N7  25
#define M7  10
#define N8  8
#define M8  4
void DR_Linfbv (void)
{
    /*----------------------------------------
      Constant matrices/vectors
    ----------------------------------------*/
    static tNumber_R c1init[N1][M1] =
    {
        { 0.0,  -2.0 },
        { 0.0,  -4.0 },
        { 1.0,  10.0 },
        {-1.0,  15.0 }
    };
    static tNumber_R c2init[N2][M2] =
    {
        { 1.0,  2.0, 0.0 },
        {-1.0, -1.0, 0.0 },
        { 1.0,  3.0, 0.0 },
        { 0.0,  1.0, 0.0 },
        { 0.0,  0.0, 1.0 }
    };
    static tNumber_R c3init[N3][M3] =
    {
        { 0.0, -1.0,  0.0 },
        { 1.0,  3.0, -4.0 },
        { 1.0,  0.0,  0.0 },
        { 0.0,  0.0,  1.0 },
        {-1.0,  1.0,  2.0 },
        { 1.0,  1.0,  1.0 }
    };
    static tNumber_R c4init[N4][M4] =
    {
        { 1.0, 0.0,  1.0 },
        { 1.0, 2.0,  2.0 },
        { 1.0, 2.0,  0.0 },
        { 1.0, 1.0,  0.0 },
        { 1.0, 0.0, -1.0 },
        { 1.0, 0.0,  0.0 },
        { 1.0, 1.0,  1.0 }
    };
    static tNumber_R c5init[N5][M5] =
    {
        { 1.0, -3.0,  9.0, -27.0 },
        { 1.0, -2.0,  4.0,  -8.0 },
        { 1.0, -1.0,  1.0,  -1.0 },
        { 1.0,  0.0,  0.0,   0.0 },
        { 1.0,  1.0,  1.0,   1.0 },
        { 1.0,  2.0,  4.0,   8.0 },
        { 1.0,  3.0,  9.0,  27.0 },
        { 1.0,  4.0, 16.0,  64.0 }
    };
    static tNumber_R c6init[N6][M6] =
    {
        { 1.0, 0.0,  0.0,  0.0,  0.0 },
        { 0.0, 1.0,  0.0,  0.0,  0.0 },
        { 0.0, 0.0,  1.0,  0.0,  0.0 },
        { 0.0, 0.0,  0.0,  1.0,  0.0 },
        { 0.0, 0.0,  0.0,  0.0,  1.0 },
        { 1.0, 1.0,  1.0,  1.0,  1.0 },
        { 0.0, 1.0,  1.0,  1.0,  1.0 },
        {-1.0, 0.0, -1.0, -1.0, -1.0 },
        { 1.0, 1.0,  0.0,  1.0,  1.0 },
        { 1.0, 1.0,  1.0,  0.0,  1.0 }
    };
c8init[N8][M8] = 1.0, 4.0, 9.0, 16.0, 25.0, 36.0, 49.0, 64.0,
1.0 4.0 9.0 16.0 25.0 36.0 49.0 64.0
static tNumber_R f1[N1+1] = { NIL, -12.0, 6.0, 0.0, 5.0 };
}, }, }, }, }, }, }, }
static tNumber_R f2[N2+1] = { NIL, 1.0, 2.0, 1.0, -3.0, 0.0 © 2008 by Taylor & Francis Group, LLC
402
Numerical Linear Approximation in C }; static tNumber_R f3[N3+1] = { NIL, 1.0, 2.0, 3.0, 2.0, 2.0, 4.0 }; static tNumber_R f4[N4+1] = { NIL, 0.0, -2.0, 1.0, -1.0, 5.0, 7.0, 0.0 }; static tNumber_R f5[N5+1] = { NIL, 3.0, -3.0, -2.0, 0.0, 7.0, -1.0, 5.0, 2.0 }; static tNumber_R f6[N6+1] = { NIL, 1.0, -1.0, 0.0, -1.0, 1.0, 0.0, 2.0, 3.0, -3.0, -2.0 }; static tNumber_R f7[N7+1] { NIL, 0.0872673, 0.0872794, 0.3491184, 0.3498802, 0.6111334, 0.6150641, 0.8733883, 0.8841621, 1.135895, 1.157550, };
= 0.0873029, 0.3513824, 0.6230824, 0.9071868, 1.206257,
0.0873315, 0.3532572, 0.6336395, 0.9400757, 1.283258,
0.0873576, 0.3550109, 0.6441493, 0.9766021, 1.384432
static tNumber_R f8[N8+1] = { NIL, 2.0, 2.5, 2.0, 6.5, 3.5, 4.5, 6.0, 7.0 }; /*---------------------------------------Variable matrices/vectors ----------------------------------------*/ tMatrix_R ct = alloc_Matrix_R (MMc_COLS, NN_ROWS); tVector_R f = alloc_Vector_R (NN_ROWS); tVector_R r = alloc_Vector_R (NNMMc_ROWS); tVector_R a = alloc_Vector_R (MMc_COLS); tMatrix_R binv = alloc_Matrix_R (MMc_COLS, MMc_COLS); tVector_R bv = alloc_Vector_R (MMc_COLS); tVector_I ibound = alloc_Vector_I (NNMMc_ROWS); © 2008 by Taylor & Francis Group, LLC
Chapter 12: DR_Linfbv
403
tVector_I tVector_I tMatrix_R
icbas irbas c7
= alloc_Vector_I (MMc_COLS); = alloc_Vector_I (MMc_COLS); = alloc_Matrix_R (N7, M7);
tMatrix_R tMatrix_R tMatrix_R tMatrix_R tMatrix_R tMatrix_R tMatrix_R
c1 c2 c3 c4 c5 c6 c8
= = = = = = =
int tNumber_R
iter, i, j, k, m, n, Iexmpl; d, dd, ddd, e, ee, eee, z;
eLaRc
rc = LaRcOk;
init_Matrix_R init_Matrix_R init_Matrix_R init_Matrix_R init_Matrix_R init_Matrix_R init_Matrix_R
(&(c1init[0][0]), (&(c2init[0][0]), (&(c3init[0][0]), (&(c4init[0][0]), (&(c5init[0][0]), (&(c6init[0][0]), (&(c8init[0][0]),
N1, N2, N3, N4, N5, N6, N8,
for (j = 1; j <= 5; j++) { d = 0.15* (j-3); dd = d*d; ddd = d*dd; for (i = 1; i <= 5; i++) { e = 0.15* (i-3); ee = e*e; eee = e*ee; k = 5* (j-1) + i; c7[k][1] = 1.0; c7[k][2] = d; c7[k][3] = e; c7[k][4] = dd; c7[k][5] = ee; c7[k][6] = e*d; c7[k][7] = ddd; c7[k][8] = eee; c7[k][9] = dd*e; c7[k][10] = ee*d; } }
M1); M2); M3); M4); M5); M6); M8);
prn_dr_bnr ("DR_Linfbv, Bounded Chebyshev Solution of an " "Overdetermined System of Linear Equations");
© 2008 by Taylor & Francis Group, LLC
404
Numerical Linear Approximation in C for (Iexmpl = 1; Iexmpl <= 8; Iexmpl++) { switch (Iexmpl) { case 1: n = N1; m = M1; for (i = 1; i <= n; i++) { f[i] = f1[i]; for (j = 1; j <= m; j++) ct[j][i] = c1[i][j]; } break; case 2: n = N2; m = M2; for (i = 1; i <= n; i++) { f[i] = f2[i]; for (j = 1; j <= m; j++) ct[j][i] = c2[i][j]; } break; case 3: n = N3; m = M3; for (i = 1; i <= n; i++) { f[i] = f3[i]; for (j = 1; j <= m; j++) ct[j][i] = c3[i][j]; } break; case 4: n = N4; m = M4; for (i = 1; i <= n; i++) { f[i] = f4[i]; for (j = 1; j <= m; j++) ct[j][i] = c4[i][j]; } break; case 5:
© 2008 by Taylor & Francis Group, LLC
Chapter 12: DR_Linfbv n = N5; m = M5; for (i = 1; i <= n; i++) { f[i] = f5[i]; for (j = 1; j <= m; j++) ct[j][i] = c5[i][j]; } break; case 6: n = N6; m = M6; for (i = 1; i <= n; i++) { f[i] = f6[i]; for (j = 1; j <= m; j++) ct[j][i] = c6[i][j]; } break; case 7: n = N7; m = M7; for (i = 1; i <= n; i++) { f[i] = f7[i]; for (j = 1; j <= m; j++) ct[j][i] = c7[i][j]; } break; case 8: n = N8; m = M8; for (i = 1; i <= n; i++) { f[i] = f8[i]; for (j = 1; j <= m; j++) ct[j][i] = c8[i][j]; } break; default: break; } prn_algo_bnr ("Linfbv"); prn_example_delim(); © 2008 by Taylor & Francis Group, LLC
405
406
Numerical Linear Approximation in C PRN ("Example #%d: Size of matrix \"c\" %d by %d\n", Iexmpl, n, m); prn_example_delim(); PRN ("Bounded Chebyshev Solution " "of an Overdetermined System\n"); prn_example_delim(); PRN ("r.h.s. Vector \"f\"\n"); prn_Vector_R (f, n); PRN ("Transpose of Coefficient Matrix, \"ct\"\n"); prn_Matrix_R (ct, m, n); rc = LA_Linfbv (m, n, ct, f, icbas, irbas, binv, bv, ibound, &iter, r, a, &z); if (rc >= LaRcOk) { PRN ("\n"); PRN ("Results of the Bounded Chebyshev Solution\n"); PRN ("Bounded Chebyshev solution vector, \"a\"\n"); prn_Vector_R (a, m); PRN ("Bounded Chebyshev residual vector \"r\"\n"); prn_Vector_R (r, n); PRN ("Bounded Chebyshev norm \"z\" = %8.4f\n", z); PRN ("No. of Iterations = %d\n", iter); } prn_la_rc (rc); } free_Matrix_R free_Vector_R free_Vector_R free_Vector_R free_Matrix_R free_Vector_R free_Vector_I free_Vector_I free_Vector_I free_Matrix_R
(ct, MMc_COLS); (f); (r); (a); (binv, MMc_COLS); (bv); (ibound); (icbas); (irbas); (c7, N7);
uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R
(c1); (c2); (c3); (c4); (c5);
© 2008 by Taylor & Francis Group, LLC
Chapter 12: DR_Linfbv uninit_Matrix_R (c6); uninit_Matrix_R (c8); }
© 2008 by Taylor & Francis Group, LLC
407
408
12.7
Numerical Linear Approximation in C
LA_Linfbv
/*------------------------------------------------------------------LA_Linfbv --------------------------------------------------------------------This program calculates the Chebyshev solution of an overdetermined system of linear equations subject to the conditions that the elements of the solution vector be bounded between -1 and +1. The system of linear equations has the form c*a = f "c" is a given real n by m matrix of rank k <= m < n. "f" is a given real n vector. The problem is to calculate the solution vector "a" that gives the minimum Chebyshev norm z, z = max|r[i]|,
i = 1, 2, ..., n
subject to the constraints that the elements of vector "a" are bounded between -1 and 1. That is -1 <= a[j] <= 1,
j = 1, 2, ..., m
r[i] is the ith residual and is given by r[i] = c[i][1]*a[1] + c[i][2]*a[2] + ... + c[i][m]*a[m] - f[i], i = 1, 2, ..., n Inputs m Number of columns of matrix "c" of the system c*a = f. n Number of rows of matrix "c" of the system c*a = f. ct A real (m+1) by n matrix. Its first m rows and its n columns contain the transpose of matrix "c" of the system c*a = f. Its (m+1)th row will be filled with ones by the program. f A real n vector containing the r.h.s. of the system c*a = f. Other Parameters binv An (m+1) real square matrix containing the inverse of the basis matrix in the linear programming problem. bv An (m+1) real vector containing the basic solution in the
© 2008 by Taylor & Francis Group, LLC
Chapter 12: LA_Linfbv
icbas irbas
409
linear programming problem. An (m+1) integer vector containing the indices of the columns of "ct" forming the basis. An (m+1) integer vector containing the indices of the rows of matrix "ct".
Outputs iter Number of iterations, or the number of times the simplex tableau is changed by a Gauss-Jordan step. a An (m + 1) vector whose first m elements contain the bounded Chebyshev solution vector "a" to the system c*a = f. r An (n + m + 1) vector whose first n elements constitute the residual vector r = (c*a - f). z The minimum bounded Chebyshev norm of the residual vector r. Returns one of LaRcSolutionFound LaRcNoFeasibleSolution LaRcErrBounds LaRcErrNullPtr -------------------------------------------------------------------*/ #include "LA_Prototypes.h" eLaRc LA_Linfbv (int m, int n, tMatrix_R ct, tVector_R f, tVector_I icbas, tVector_I irbas, tMatrix_R binv, tVector_R bv, tVector_I ibound, int *pIter, tVector_R r, tVector_R a, tNumber_R *pZ) { int i = 0, j = 0, ijk = 0, k = 0, m1 = 0, n1 = 0, nm = 0; int iout = 0, jin = 0, ivo = 0, itest = 0; tNumber_R d = 0.0, e = 0.0; /* Validation of the data before executing the algorithm */ eLaRc rc = LaRcSolutionFound; VALIDATE_BOUNDS ((0 < m) && (m < n)); VALIDATE_PTRS (ct && f && icbas && irbas && binv && bv && ibound && pIter && r && a && pZ); /* Initialization */ m1 = m + 1; n1 = n + 1; nm = n + m; *pIter = 0;
© 2008 by Taylor & Francis Group, LLC
410
Numerical Linear Approximation in C /* Part 1 of the algorithm Initializing program data */ LA_linfbv_init (m, n, ct, icbas, irbas, binv, ibound, r, a); iout = m1; jin = 1; r[1] = 0.0; /* A Gauss-Jordan elimination step */ LA_linfbv_gauss_jordn (m, n, iout, jin, ct, icbas, binv, bv, ibound); *pIter = *pIter + 1; /* Part 2 of the algorithm. Obtaining a feasible basic solution */ LA_linfbv_part_2 (m, n, ct, binv, bv, ibound); /* Calculating the initial residual vector "r" and norm "z" */ LA_linfbv_resid_norm (m, n, ct, f, icbas, bv, r, pZ); /* Part 3 of the algorithm */ for (ijk = 1; ijk < n*n; ijk++) { ivo = 0; /* Determine the vector that enters the basis */ LA_linfbv_vent (&ivo, &jin, m, n, icbas, r, pZ); if (ivo == 0) { /* Calculate the results */ LA_linfbv_res (m, n, f, icbas, irbas, binv, ibound, r, a, pZ); GOTO_CLEANUP_RC (LaRcSolutionFound); } if (ivo == -1) { ibound[jin] = -ibound[jin]; if (jin <= n) { for (i = 1; i <= m1; i++) { ct[i][jin] = bv[i] + bv[i] - ct[i][jin]; }
© 2008 by Taylor & Francis Group, LLC
Chapter 12: LA_Linfbv
411
r[jin] = *pZ + *pZ - r[jin]; f[jin] = -f[jin]; } else if (jin > n) { r[jin] = 2.0 - r[jin]; } } itest = 0; /* Determine the vector that leaves the basis */ LA_linfbv_vleav (jin, &iout, &itest, m, n, ct, binv, bv, ibound); /* Solution is not feasible */ if (itest != 1) { GOTO_CLEANUP_RC (LaRcNoFeasibleSolution); } /* A Gauss-Jordan elimination step */ LA_linfbv_gauss_jordn (m, n, iout, jin, ct, icbas, binv, bv, ibound); *pIter = *pIter + 1; d = r[jin]; for (j = 1; j <= n; j++) { r[j] = r[j] - d * (ct[iout][j]); } for (j = n1; j <= nm; j++) { e = -1.0; if (ibound[j] == -1) e = -e; k = j - n; r[j] = r[j] - e * (d) * (binv[iout][k]); } *pZ = *pZ - d * (bv[iout]); } CLEANUP: return rc; }
© 2008 by Taylor & Francis Group, LLC
412
Numerical Linear Approximation in C
/*------------------------------------------------------------------A Gauss-Jordan elimination step in LA_Linfbv() -------------------------------------------------------------------*/ void LA_linfbv_gauss_jordn (int m, int n, int iout, int jin, tMatrix_R ct, tVector_I icbas, tMatrix_R binv, tVector_R bv, tVector_I ibound) { int i, j, kin = 0, m1; tNumber_R d, e = 0, pivot = 0; m1 = m + 1; if (jin <= n) { pivot = ct[iout][jin]; } else if (jin > n) { kin = jin - n; e = -1.0; if (ibound[jin] == -1) e = 1.0; pivot = e * binv[iout][kin]; } for (j = 1; j <= n; j++) { ct[iout][j] = ct[iout][j]/pivot; } for (j = 1; j <= m1; j++) { binv[iout][j] = binv[iout][j]/pivot; } for (i = 1; i <= m1; i++) { if (i != iout) { d = ct[i][jin]; if (jin > n) d = e * (binv[i][kin]); for (j = 1; j <= n; j++) { ct[i][j] = ct[i][j] - d * (ct[iout][j]); } for (j = 1; j <= m1; j++) { binv[i][j] = binv[i][j] - d * (binv[iout][j]); } © 2008 by Taylor & Francis Group, LLC
Chapter 12: LA_Linfbv
413
} } for (i = 1; i <= m1; i++) { bv[i] = binv[i][m1]; } icbas[iout] = jin; } /*------------------------------------------------------------------Calculate the results of LA_Linfbv() -------------------------------------------------------------------*/ void LA_linfbv_res (int m, int n, tVector_R f, tVector_I icbas, tVector_I irbas, tMatrix_R binv, tVector_I ibound, tVector_R r, tVector_R a, tNumber_R *pZ) { int i, j, k, m1; tNumber_R d, e, s; m1 = m + 1; for (j = 1; j <= m1; j++) { s = 0.0; for (i = 1; i <= m1; i++) { k = icbas[i]; e = -1.0; if (k <= n) e = f[k]; s = s + e * (binv[i][j]); } k = irbas[j]; a[k] = s; } for (j = 1; j <= n; j++) { d = r[j] - *pZ; if (ibound[j] == -1) d = -d; r[j] = d; } } /*------------------------------------------------------------------Initializing program data of LA_Linfbv() -------------------------------------------------------------------*/ void LA_linfbv_init (int m, int n, tMatrix_R ct, tVector_I icbas, © 2008 by Taylor & Francis Group, LLC
414
Numerical Linear Approximation in C tVector_I irbas, tMatrix_R binv, tVector_I ibound, tVector_R r, tVector_R a)
{ int
i, j, k, m1;
m1 = m + 1; for (j = 1; j <= m; j++) { a[j] = -1.0; k = n + j; icbas[j] = k; ibound[k] = -1; r[k] = 0.0; } for (j = 1; j <= m1; j++) { irbas[j] = j; for (i = 1; i <= m1; i++) { binv[i][j] = 0.0; } binv[j][j] = 1.0; } for (j = 1; j <= n; j++) { ct[m1][j] = 1.0; ibound[j] = 1; } } /*------------------------------------------------------------------Obtaining a feasible basic solution of LA_Linfbv() -------------------------------------------------------------------*/ void LA_linfbv_part_2 (int m, int n, tMatrix_R ct, tMatrix_R binv, tVector_R bv, tVector_I ibound) { int i, j, k, m1; m1 = m + 1; for (i = 1; i <= m; i++) { if (bv[i] < 0.0) { k = n + i; ibound[k] = -ibound[k]; © 2008 by Taylor & Francis Group, LLC
Chapter 12: LA_Linfbv
415
for (j = 1; j <= n; j++) { ct[i][j] = -ct[i][j]; } for (j = 1; j <= m1; j++) { binv[i][j] = - binv[i][j]; } bv[i] = - bv[i]; } } } /*------------------------------------------------------------------Calculating the initial residual vector "r" and norm "z" in LA_Linfbv() -------------------------------------------------------------------*/ void LA_linfbv_resid_norm (int m, int n, tMatrix_R ct, tVector_R f, tVector_I icbas, tVector_R bv, tVector_R r, tNumber_R *pZ) { int i, j, m1, icb; tNumber_R s; m1 = m + 1; for (j = 1; j <= n; j++) { icb = 0; for (i = 1; i <= m1; i++) { if (j == icbas[i]) icb = 1; } if (icb == 0) { s = - f[j] + f[1] * (ct[m1][j]); for (i = 1; i <= m; i++) { s = s - ct[i][j]; } r[j] = s; } } s = f[1] * (bv[m1]); for (i = 1; i <= m; i++) { s = s - bv[i]; © 2008 by Taylor & Francis Group, LLC
416
Numerical Linear Approximation in C } *pZ = s;
} /*------------------------------------------------------------------Determine the vector that enters the basis in LA_Linfbv() -------------------------------------------------------------------*/ void LA_linfbv_vent (int *pIvo, int *pJin, int m, int n, tVector_I icbas, tVector_R r, tNumber_R *pZ) { int i, j, m1, nm, icb; tNumber_R d, e, g, tz; g = 1.0/ (EPS*EPS); m1 = m + 1; nm = n + m; for (j = 1; j <= nm; j++) { icb = 0; for (i = 1; i <= m1; i++) { if (j == icbas[i]) icb = 1; } if (icb == 0) { tz = *pZ + *pZ + EPS; if (j > n) tz = 2.0 + EPS; d = r[j]; if (d < 0.0) { e = d; if (e < g) { g = e; *pJin = j; *pIvo = 1; } } else if (d >= tz) { e = tz - d; if (e < g) { g = e; *pJin = j; © 2008 by Taylor & Francis Group, LLC
Chapter 12: LA_Linfbv
417
*pIvo = -1; } } } } } /*------------------------------------------------------------------Determine the vector that leaves the basis in LA_Linfbv() -------------------------------------------------------------------*/ void LA_linfbv_vleav (int jin, int *pIout, int *pItest, int m, int n, tMatrix_R ct, tMatrix_R binv, tVector_R bv, tVector_I ibound) { int i, m1, kin; tNumber_R d, e, g, thmax; m1 = m + 1; thmax = 1.0/ (EPS*EPS); if (jin <= n) { for (i = 1; i <= m1; i++) { d = ct[i][jin]; if (d > EPS) { g = bv[i]/d; if (g <= thmax) { thmax = g; *pIout = i; *pItest = 1; } } } } else if (jin > n) { e = -1.0; if (ibound[jin] == -1) e = 1.0; for (i = 1; i <= m1; i++) { kin = jin - n; d = e * (binv[i][kin]); if (d > EPS) { © 2008 by Taylor & Francis Group, LLC
418
Numerical Linear Approximation in C g = bv[i]/d; if (g <= thmax) { thmax = g; *pIout = i; *pItest = 1; } } } }
}
© 2008 by Taylor & Francis Group, LLC
419
Chapter 13 Restricted Chebyshev Approximation
13.1
Introduction
Consider the overdetermined system of linear equations Ca = f C = (cij) is a given real n by m matrix of rank k, k ≤ m < n and f = (fi) is a given real n-vector. The Chebyshev solution of this system is the m-vector a = (ai) that minimizes the Chebyshev norm z of the residual vector r (13.1.1)
z = max|ri|, i = 1, 2, …, n
where ri is the ith residual (error) and is given by m
(13.1.2)
ri = fi –
∑ cij aj ,
i = 1, 2, …, n
j=1
Note that ri here is –ri in (10.1.3). In the last three chapters, algorithms are presented for three different kinds of the linear Chebyshev approximations. In Chapter 10, the Chebyshev solution of system Ca = f, has the Chebyshev norm of vector r as small as possible. In Chapter 11, the one-sided Chebyshev solution of Ca = f requires the additional constraints that the elements of the residual vector r be either non-positive or non-negative. In Chapter 12, for the bounded Chebyshev approximation, the additional constraints are that the elements of the Chebyshev solution vector a, each be bounded between –1 and 1. In this chapter, an algorithm for yet another kind of the linear Chebyshev approximation, known as the restricted Chebyshev approximation, is © 2008 by Taylor & Francis Group, LLC
420
Numerical Linear Approximation in C
presented. In this algorithm, the additional constraint is that the left hand side of Ca = f be bounded between 2 given n-vectors, namely l ≤ Ca ≤ u
(13.1.3) or m
(13.1.3a)
li ≤
∑ cij aj ≤ ui ,
i = 1, 2, …, n
i=1
where l = (li) and u = (ui). That is, the elements of Ca are restricted between upper and lower ranges. In deriving existence and characterization theorems to this problem, an extra condition to (13.1.3) is often imposed [17, 18, 22], namely (13.1.4)
l≤f≤u
If this condition is not met, a solution for the problem may not exist. In our work, this condition is not imposed. However, if a solution does not exist, it could be because this condition is not met or because the linear programming problem has an unbounded solution. In either case, the computation is terminated. Usually the assumption that matrix C satisfies the Haar condition is required [12, 18, 23]. This condition is not required in our work. Matrix C may even be a rank deficient matrix. In this chapter, a numerically stable linear programming algorithm for calculating the restricted Chebyshev solution of overdetermined systems of linear equations is described. Minimum computer storage is required and no conditions are imposed on the coefficient matrix C. The algorithm consists of two main parts. In part 1, a simplex algorithm is described. This part is analogous to that of obtaining the Chebyshev solution of system Ca = f given in Chapter 10. However, for the purpose of numerical stability, part 2 constitutes a triangular decomposition of the basis matrix. The ordinary Chebyshev solution [1], the one-sided Chebyshev solutions [2] and the Chebyshev approximation by non-negative functions [17] are obtained as special cases in this algorithm. In Section 13.2, we state that this problem may be solved as a special case by algorithms of other constrained Chebyshev approximation problems. We shall comment on these algorithms in Section 13.7. © 2008 by Taylor & Francis Group, LLC
Chapter 13: Restricted Chebyshev Approximation
421
In Section 13.3, the linear programming formulation of the problem is given, together with necessary lemmas. In Section 13.4, the new algorithm is described and in Section 13.5, the triangular decomposition of the basis matrix is presented. In Section 13.6, arithmetic operations count for the algorithm are presented and in Section 13.7, numerical results and comments are given. 13.1.1
The semi-infinite programming problem
A interesting, related problem to the restricted Chebyshev approximation problem is the semi-infinite programming (SIP) problem, in which a given function f(x) is approximated in the Chebyshev norm over a set X that contains infinite elements. The Chebyshev approximation by the function Ca is subject to the constraints (13.1.3) together with the additional constraints bi ≤ ai ≤ ci, i = 1, 2, …, m where b = (bi) and c = (ci) are given m-vectors. That is, the SIP is a restricted problem with bounded variables as well. The SIP problems occur in technical applications such as the study of propagation of water and air pollution [13, 14, 16]. 13.1.2
Special cases
The following are special cases of the restricted Chebyshev approximation problem: (a) If for all i, we take li = –∞ and ui = ∞, we get the ordinary Chebyshev solution [1] to system the Ca = f (Chapter 10). (b) If for all i, we take li = –∞ and ui = fi, we get the one-sided Chebyshev solution from above [2] (Chapter 11) to Ca = f. The one-sided Chebyshev solution from below is obtained in a similar way. (c) If for all i, we take li = 0 and ui = ∞, we have the problem of calculating the Chebyshev approximation by a non-negative linear function [7]. By ∞, we mean a large number, such as 100×max|fi|, i = 1, 2, …, n, where (fi) are the elements of vector f in the system Ca = f.
© 2008 by Taylor & Francis Group, LLC
422
Numerical Linear Approximation in C
(d)
Arbitrary choices of li and ui may also be made. Figure 13-1 shows curve fittings with vertical parabolas for the set of 8 points of Figure 2-1, as calculated by this algorithm. 8 7 6 5 4 3 2 1 0 -1
0
1
2
3
4
5
6
7
8
9
-2
Figure 13-1: Curve fitting a set of 8 points using restricted Chebyshev approximations with arbitrary ranges
The solid curve is the (ordinary) Chebyshev approximation. The dashed curve is the one-sided Chebyshev approximation from above. The dotted curve is the Chebyshev approximation with arbitrary lower and upper ranges; li = fi – 0.5zC.S. and ui = a, very large number, where zC.S. is the optimum Chebyshev norm. 13.1.3
Applications of the restricted Chebyshev algorithm
The restricted Chebyshev approximation problem arises in several engineering applications. See the references in [12, 18]. See also [23, 24]. 13.2
A special problem of general constrained algorithms
As in the previous two chapters, some authors developed constrained Chebyshev approximation algorithms that would solve this problem as a special case. Using our notation, Sklar [21] presented an algorithm to solve the © 2008 by Taylor & Francis Group, LLC
Chapter 13: Restricted Chebyshev Approximation
423
problem (13.2.1a)
minimize ||Ca – f||∞
subject to ea ≤ Ea ≤ eb
(13.2.1b)
His algorithm is an extension of that of Armstrong and Sklar [6]. Also, Roberts and Barrodale [20] presented the algorithm (13.2.2a)
minimize ||Ca – f||∞
subject to Ga = g and ea ≤ Ea ≤ eb
(13.2.2b)
Their algorithm is an extension of the algorithm of Barrodale and Phillips for the ordinary Chebyshev solution [7, 8]. In the above C, G and E are matrices of appropriate dimensions, f is the vector associated with C, g is the vector associated with G and ea and eb are two vectors associated with E. Also, not all of the arrays (G, g) and (E, ea, eb) are vacuous. If in (13.2.1b) we take E = C and ea = l and eb = u, we get the restricted Chebyshev approximation problem. Also, if in (13.2.2b) we take G = 0, g = 0, we get problem (13.2.1), which reduces to the restricted Chebyshev approximation problem. 13.3
Linear programming formulation of the problem
This problem is formulated as a linear programming problem as follows. Let in (13.1.1), h = maxi|ri|, h ≥ 0. The primal form is [24] minimize h subject to m
–h ≤ fi –
∑ cij aj ≤ h ,
i = 1, 2, …, n
j=1
and m
li ≤
∑ cij aj ≤ ui , j=1
© 2008 by Taylor & Francis Group, LLC
i = 1, 2, …, n
424
Numerical Linear Approximation in C
In vector-matrix notation, the above inequalities become Ca + he ≥ f –Ca + he ≥ –f Ca ≥ l –Ca ≥ –u h ≥ 0 and aj, j = 1, 2, …, m, unrestricted in sign This may be written more conveniently as T minimize Z = e m + 1 a h
subject to C –C C –C
e f e a ≥ –f 0 h l 0 –u
h ≥ 0 and aj, j = 1, 2, …, m, unrestricted in sign where em+1 is an (m + 1)-vector that is the (m + 1)th column of an (m + 1)-unit matrix. Also, e is an n-vector, each element of which equals 1, and the 0 is an n-zero vector. The two n-vectors l = (li) and u = (ui). It is more efficient to solve the dual of this problem, which is (13.3.1a)
maximize z = [f T –f T lT –uT] b = gTb
subject to T
T
T
T
(13.3.1b)
C –C C –C b = e m+1 T T T T e e 0 0
(13.3.1c)
bi ≥ 0, i = 1, 2, …, 4n
Let (13.3.1b) be written in the form (13.3.1d) © 2008 by Taylor & Francis Group, LLC
Db = em+1
Chapter 13: Restricted Chebyshev Approximation
425
D is the (m + 1) by 4n coefficient matrix on the l.h.s. of (13.3.1b), and is the matrix of constraints in this linear programming problem. To simplify the analysis, assume that rank(C) = m. Let for the linear programming problem (13.3.1), the basis matrix B be an (m + 1) square nonsingular matrix. In the simplex tableau, vectors (yj) are calculated, as usual, as yj = B–1Dj, j = 1, 2, …, 4n
(13.3.2a)
where Dj is the jth column of matrix D. The basic solution bB is given by bB = B–1em+1
(13.3.2b)
Let the (m + 1)-vector gB be the vector whose elements are the prices for bB. Then for the marginal costs, denoted by (zj – gj), we have (13.3.2c)
(zj – gj) = gBTyj – gj, j = 1, 2, …, 4n
The objective function z is z = gBTbB
(13.3.2d)
Vector g is defined in (13.3.1a). 13.3.1
Properties of the matrix of constraints
As we encountered in the previous 3 chapters, we observe asymmetries in the matrix of constraints D. The description of the algorithm is prompted by such kind of asymmetries. Consider the two halves of matrix D, in (13.3.1b, d), each is of 2n columns. While matrix CT exists in the first n columns of the left half of D, matrix –CT exists in the second n columns of this half. The first n columns of the right half are the negative of the last n columns of the right half. From (13.3.1b), the following relationships exist between any column i, 1 ≤ i ≤ n and the columns i + n, i + 2n and i + 3n of matrix D in (13.3.1d). (13.3.3a) Di + Di+n = 2em+1, Di – Di+2n = em+1, Di + Di+3n = em+1 and (13.3.3b) Di+n + Di+2n = em+1, Di+n – Di+3n = em+1, Di+2n + Di+3n = 0 © 2008 by Taylor & Francis Group, LLC
426
Numerical Linear Approximation in C
Definition Let i and j, 1 ≤ i, j ≤ 4n, be two columns in the matrix of constraints D such that |i – j| = n or 2n or 3n. We define columns i and j as two corresponding columns. From Lemmas 13.3, 13.4 and 13.5 below, we find that in the simplex algorithm, we need tableaux of (m + 1) constraints in only n variables (not 4n variables). We call such tableaux the condensed tableaux. Lemma 13.1 From (13.3.2b), bB is the (m + 1)th column of B–1 (Lemma 10.1). Lemma 13.2 The solution vector a and objective function z at any stage of the computation are given by Lemma 10.2 (aT z) = gBTB–1 Lemma 13.3 For any two corresponding columns, the following relations between the marginal costs are obtained: (1) For 1 ≤ i ≤ n
(13.3.4)
(2)
(zi (zi (zi (zi+n (zi+n (zi+2n
– – – – – –
gi ) gi ) gi ) gi+n ) gi+n ) gi+2n)
+ – – + – +
(zj+n (zi+2n (zi+3n (zi+2n (zi+3n (zi+3n
– – – – – –
gj+n ) gi+2n) gi+3n) gi+2n) gi+3n) gi+3n)
= = = = = =
2z z – (fi – z + (ui – z + (fi – z – (ui – (ui – li)
li) fi) li) fi)
For 1 ≤ i ≤ n
(13.3.5)
yi + yi+n = 2bB, yi – yi+2n = bB, yi + yi+3n = bB, …
Relations (13.3.4) are established from (13.3.2a-d) since in (13.3.1a), for 1 ≤ i ≤ n, gi = fi, gi+n = –fi, gi+2n = li and gi+3n = –ui. Also, relations (13.3.5) are proved from (13.3.3a, b) and (13.3.2a, b). Lemma 13.4 Let z be the objective function and (zi – gi), i = 1, 2, …, 4n, be the marginal costs for a basic solution for problem (13.3.1). Then the
© 2008 by Taylor & Francis Group, LLC
Chapter 13: Restricted Chebyshev Approximation
427
residuals ri, 1 ≤ i ≤ n of (13.1.2) are given by any one of the relations ri = z – (zi – gi) ri = (zi+n – gi+n) – z ri = (fi – li) – (zi+2n – gi+2n) ri = (zi+3n – gi+3n) + (fi – ui) The proof of the first relation follows the proof of Lemma 12.5. The proof of the other 3 relations are established from the first 3 relations in (13.3.4). Lemma 13.5 (Lemma 3 in [3]) Any two corresponding columns in the simplex tableau can not appear together in any basis. Lemma 13.6 (Lemma 4 in [3]) Let us have a basic feasible solution to problem (13.3.2). Then a non-basic vector i may replace a corresponding basic column j in the basis only if |i – j| = 2n. Lemma 13.7 Not every problem has a restricted Chebyshev solution. As noted earlier, this is due to one of two cases: (a) An optimal solution can not be reached, which may occur if the condition (13.1.4) is not met, or (b) The corresponding linear programming problem (13.3.2) would have an unbounded solution. 13.4
Description of the algorithm
As in Chapter 10, the algorithm for solving problem (13.3.1) is in 3 parts. In part 1, an initial basic solution, feasible or not, is obtained. We note that the left half of matrix D in (13.3.1d) is the same as matrix D in equation (10.2.2b) and the right hand side of (13.3.1d) is itself the right hand side of (10.2.2b). Hence, part 1 here is made identical to part 1 in Chapter 10. We should remark that in our work, the simplex tableau is not calculated explicitly, as in Chapter 10. This means that required vectors and elements of the tableau are calculated when they are needed. The given data of the problem, namely C and f of Ca = f are © 2008 by Taylor & Francis Group, LLC
428
Numerical Linear Approximation in C
not destroyed (changed) in the computation, except perhaps for a –ve sign multiplication to some of the equations in Ca = f. As indicated, we calculate the (condensed) tableaux of (m + 1) constraints in only n variables (not in 4n variables). A 4n-index indicator vector whose elements are 1, 2, 3 or 4 is needed. If column i, 1 ≤ i ≤ n, is in the condensed tableau, index i has the value 1. If a column j, corresponding to column i, (n + 1) ≤ j ≤ 2n, is in the condensed tableau, index j has a value 2, and so on. In part 2, the basic solution is made feasible and an objective function z ≥ 0 is calculated, as explained in detail in Chapter 10. However, for part 2 here, we have developed a simple technique [5] by which B–1, obtained at the end of part 1, is modified in one step by a matrix of rank 1. That is, no simplex steps are used in part 2. At the end of part 2, the marginal costs for the n stored columns are calculated. Part 3 is the ordinary simplex method in which the column to enter the basis is that which has the most negative marginal cost among the non-basic columns. This is done in accordance with Lemma 13.6. For calculating the parameters of the corresponding columns, relations (13.3.4) are used. The results of the problem, a and z, are calculated from Lemma 13.2. In part 3 also, a triangular decomposition method for the basis matrix is used. The details are briefly given in the next section. 13.5
Triangular decomposition of the basis matrix
A variation of the triangular decomposition method of Bartels et al. [10] is described here. Assume without loss of generality that rank(C) = m. Let B0 denote the basis matrix at the end of part 2 and B denote the basis matrix in part 3. Then we may write (13.5.1)
B–1 = EB0–1
where E is a nonsingular (m + 1) square matrix that is the product of Frobenius matrices ([15], p. 48). Let E–1 be decomposed into (13.5.2)
LE–1 = P
where L is an (m + 1) nonsingular square matrix and P is an (m + 1) nonsingular upper triangular matrix. This is analogous to the decomposition (1.3.3) in [10]. Here, P and B0–1 are stored and © 2008 by Taylor & Francis Group, LLC
Chapter 13: Restricted Chebyshev Approximation
429
updated. This is done as follows. Let, in our algorithm, at the end of part 2, problem (13.3.2) be described by the 3-tuple {I0(b), I, B0–1}
(13.5.3)
where I0(b) represents the basis at the end of part 2, I is an (m + 1)-unit matrix and B0–1 is the inverse of the basis matrix at the end of part 2. In each iteration in part 3, the above 3-tuple is updated to {I(b), P, G–1}
(13.5.4)
In (13.5.4), G–1 would be given by G–1 = LB0–1. The relation between the main parameters in Section 13.3 and the parameters in (13.5.1) are (13.5.5a)
B–1 = P–1G–1
(13.5.5b)
yj = P–1G–1Dj, j = 1, 2, …, 4n
The updates are given in detail in [3]. 13.6
Arithmetic operations count
As noted in Section 5.8, we shall only count the number of multiplications/divisions (m/d) per iteration for the algorithm. In part 1, matrix B–1 is constructed from the Frobenius matrices Ei, i = 1, 2, …, (m + 1), where B–1 = Em+1….E2E1. See for example, Hadley ([15], pp. 48-50). For i = 1, we construct column 1 of E1. This step needs (m + 1) m/d. For i = 2, row 2 of the first tableau is calculated using E1 and rows 1 and 2 of matrix D of (13.3.1d), and E2E1. This step needs (n – 1) + (m + 1) + 2(m + 1) = (n – 1) + 3(m + 1) m/d for calculating respectively row 2 of the first tableau (minus the pivot element), the columns of the pivot element and the first 2 columns of E21. The remaining steps of part 1 follow in the same way. Thus part 1 requires about 0.67(m + 1)3 + 0.5n(m + 1)2 m/d. These operation counts are slightly less than those of part 1 in [1] obtained by applying the Gauss-Jordan elimination steps to the simplex tableau. Part 2 of our algorithm requires (m + 1)(m + 2) m/d. For part 3, the calculation of vectors yr needs about 1.5(m + 1)2 m/d; that is, to calculate vectors xr = G–1Dr and Pyr = xr by backward
© 2008 by Taylor & Francis Group, LLC
430
Numerical Linear Approximation in C
substitution. The marginal costs for the (n – m – 1) nonbasic columns of matrix D are calculated from Lemma 13.4, where the residuals are calculated from (13.1.2). Vector (aT z) is calculated by calculating vector w from PTw = gB by forward substitution and (aT z) = wTG–1. These operations need respectively about 0.5(m + 1)2 + (m + 1)2 = 1.5(m + 1)2 m/d. The m/d counts for the other parameters in part 3 follow easily. Part 3 meeds an average of n(m+1) + 3.17(m + 1)2 m/d per iteration. The arithmetic operations count/iteration is a linear function of n and m3 in part 1, and of n and m2 in part 3. We have reached a similar conclusion about the operations count in the L1 approximation case (Chapter 5). The above result compares favorably with those of related algorithms of Bartels et al. [9] and of Cline [11]. Also, numerical experience reported in [9] indicates that direct methods, in general, converge in a far greater number of iterations, compared with iterative methods such as ours. Among the solved examples in table 1 in [3], 4 of the examples were solved by Bartels et al. [9] for the ordinary Chebyshev case. The reported number of iterations for their methods are very high, compared with the number of iterations of ours. The method of Cline [11] is very similar to that of Bartels et al. [9]. 13.7
Numerical results and comments
LA_Restch() implements this algorithm. DR_Restch() has 40 test cases. Let z be a large positive number, say z = 100×max|fi|. Then for each example, for i = 1, 2, …, n: (a) li = –z and ui = z, the (ordinary) Chebyshev solution results calculated in Chapter 10 are obtained. (b) li = fi – 0.5zC.S. and ui = z, where zC.S. is the Chebyshev norm calculated from case (a). (c) li = –z and ui = fi, the results for the one-sided Chebyshev solutions from above in Chapter 11 are obtained. (d) li = c1 and ui = c2, where c1 = min(fi) and c2 = max(fi), i = 1, 2, …, n.
© 2008 by Taylor & Francis Group, LLC
Chapter 13: Restricted Chebyshev Approximation
(e)
431
li = 0 and ui = z, (this is the approximation by non-negative functions).
Table 13.7.1 shows the results of 8 of the test cases, computed in single-precision. Table 13.7.1 Chebyshev One-sided Solution from above (a) (b) (c) (d) –––––––––––––––––––––––––––––––––––––––––––––––––––––––– Ex C(n×m) Iter z Iter z Iter z Iter z –––––––––––––––––––––––––––––––––––––––––––––––––––––––– 1 4 × 2 5 10.0 no solution no solution 8 11.04 2 5×3 6 2.25 10 3.375 7 4.5 6 2.25 3 6 × 3 5 1.556 no solution 5 1.556 no solution 4 7×3 7 2.625 10 3.938 8 5.25 8 3.0 5 8×4 7 3.786 10 5.679 8 7.571 7 3.7857 6 10× 5 9 1.778 13 4.889 13 3.0 9 1.778 7 25×10 21 0.0071 28 0.0106 28 0.0142 21 0.008 8 8×3 7 1.797 9 2.695 12 3.594 8 1.853 For each example, the optimum norms and the number of iterations for cases (a) to (d) are shown. In this table, “no solution” indicates that no feasible solution was obtained for these cases. Cases (a), (b) and (c) of the results of example 8 are displayed in Figure 13-1. For all the examples, the number of iterations needed for the solution is comparable to the number of iterations required for obtaining the Chebyshev solution for the same example. It is observed that for examples 2, 4, 7 and 8, the norms z for case (b) are 1.5zC.S., and the norms z for case (c) are 2zC.S. These relationships with zC.S were proved by Phillips [19] for certain approximations, including the one-sided Chebyshev approximations. See section 11.5. As in Chapters 10, 11 and 12, our algorithm does not need any artificial variables and no conditions are imposed on the coefficient matrix C. Rank(C) is determined in part 1 of the algorithm and if rank(C) = k < m, the amount of calculation is considerably reduced. The triangular decomposition method adopted here makes use of
© 2008 by Taylor & Francis Group, LLC
432
Numerical Linear Approximation in C
the numerically stable calculation of parts 1 and 2. As indicated at the end of Section 13.6, the arithmetic operation counts for this algorithm [3, 4] compare favorably with related algorithms, such as those of Bartels et al. [9] and Cline [11]. Iterative refinement of the solutions of the two triangular systems occurring in the calculation in part 3 may be implemented if the problem is very badly conditioned. Again, the given data of the problem, namely C and f of Ca = f, are not changed, except perhaps for a –ve sign multiplication applied to some of the equations in Ca = f. Hence, iterative refinement to the final solution of the problem may be used. Finally, our algorithm is a general one; that is, it calculates the restricted Chebyshev solution, including the ordinary and the one-sided Chebyshev solutions, as well as the approximation by non-negative functions, as special cases. We now comment on the algorithms of Sklar [21] and Roberts and Barrodale [20] discussed in Section 13.2. As noted at the end of Chapter 11, in these algorithms one would store matrix C twice; once for (13.2.5a) and once for (13.2.5b), or once for (13.2.6a) and once for (13.2.6b), where E is replaced by C. Hence, their algorithms when compared with ours, need more computer storage and require nearly double the number of arithmetic operations. We claim that for this problem, a special purpose program such as ours would be more efficient than a general purpose one such as [21] and [20].
References 1. 2. 3. 4.
Abdelmalek, N.N., Chebyshev solution of overdetermined systems of linear equations, BIT, 15(1975)117-129. Abdelmalek, N.N., The discrete linear one-sided Chebyshev approximation, Journal of Institute of Mathematics and Applications, 18(1976)361-370. Abdelmalek, N.N., The discrete linear restricted Chebyshev approximation, BIT, 17(1977)249-261. Abdelmalek, N.N., Computer program for the discrete linear restricted Chebyshev approximation, Journal of Computational and Applied Mathematics, 7(1981)141-150.
© 2008 by Taylor & Francis Group, LLC
Chapter 13: Restricted Chebyshev Approximation
5. 6. 7.
8.
9.
10.
11. 12.
13. 14.
433
Abdelmalek, N.N., An exchange algorithm for the Chebyshev solution of overdetermined systems of linear equations, unpublished work. Armstrong, R.D. and Sklar, M.G., A linear programming algorithm for curve fitting in the L∞ norm, Numerical Functional Analysis and Optimization, 2(1980)187-218. Barrodale, I. and Phillips, C., An improved algorithm for discrete Chebyshev linear approximation, Proceedings of the Fourth Manitoba Conference on Numerical Mathematics, Hartnell, B.L. and Williams, H.C. (eds.), Winnipeg, Manitoba, Canada, pp. 177-190, 1975. Barrodale, I. and Phillips, C., Algorithm 495: Solution of an overdetermined system of linear equations in the Chebyshev norm, ACM Transactions on Mathematical Software, 1(1975)264-270. Bartels, R.H, Conn, A.R. and Charalambous, C., Minimization techniques for piecewise differentiable functions: The l∞ solution to overdetermined linear system, The Johns Hopkins University, Baltimore, MD, Technical report no. 247, May 1976. Bartels, R.H, Stoer, J. and Zenger, Ch., A realization of the simplex method based on triangular decomposition, Handbook for Automatic Computation, Vol. II: Linear Algebra, Wilkinson, J.H. and Reinsch, C. (eds.), Springer-Verlag, New York, pp. 152-190, 1971. Cline, A.K., A descent method for the uniform solution to overdetermined systems of linear equations, SIAM Journal on Numerical Analysis, 13(1976)293-309. Gimlin, D.R., Cavin, III R.K. and Budge, Jr., M.C., A multiple exchange algorithm for calculation of best restricted approximations, SIAM Journal on Numerical Analysis, 11(1974)219-231. Glashoff, K. and Gustafson, S.A., Numerical treatment of a parabolic boundary-value control problem, Journal of Optimization Theory and Applications, 19(1976)645-663. Gustafson, S.A. and Kortanek, K.O., Numerical treatment of a class of semi-infinite programming problems, Naval Research Logistics Quarterly, 20(1973)477-504.
© 2008 by Taylor & Francis Group, LLC
434
Numerical Linear Approximation in C
15.
Hadley, G., Linear Programming, Addison-Wesley, Reading, MA, 1962. Hettich, R., A Newton method for nonlinear Chebyshev approximation, Lecture Notes in Mathematics No. 556, Dold, A. and Eckmann, B. (eds.), Springer-Verlag, Berlin, pp. 222236 (1976). Jones, R.C. and Karlovitz, L.A., Iterative construction of constrained Chebyshev approximation of continuous functions, SIAM Journal on Numerical Analysis, 5(1968)574-585. Lewis, J.T., Restricted range approximation and its application to digital filter design, Mathematics of Computation, 29(1975)522-539. Phillips, D.L., A note on best one-sided approximations, Communications of ACM, 14(1971)598-600. Roberts, F.D.K. and Barrodale, I., An algorithm for discrete Chebyshev linear approximation with linear constraints, International Journal for Numerical Methods in Engineering, 15(1980)797-807. Sklar, M.G., L∞ norm estimation with linear restriction on the parameters, Numerical Functional Analysis and Optimization, 3(1981)53-68. Taylor, G.D., Approximation by functions having restricted ranges III, Journal of Mathematical Analysis and Applications, 27(1969)241-248. Taylor, G.D. and Winter, M.J., Calculation of best restricted approximations, SIAM Journal on Numerical Analysis, 7(1970)248-255. Watson, G.A., The calculation of best restricted approximations, SIAM Journal on Numerical Analysis, 11(1974)693699.
16.
17. 18. 19. 20.
21. 22. 23. 24.
© 2008 by Taylor & Francis Group, LLC
Chapter 13: DR_Restch
13.8
435
DR_Restch
/*------------------------------------------------------------------DR_Restch --------------------------------------------------------------------This program is a driver for the function LA_Restch(), which calculates the restricted Chebyshev solution of an overdetermined systems of linear equations. It uses a modified simplex method with a triangular factorization of the basis matrix. The overdetermined system has the form c*a = f "c" is a given real n by m matrix of rank k, k <= m < n. "f" is a given real n vector. The problem is to calculate the elements of the m vector "a" that gives the minimum Chebyshev norm z. z=max|r[i]|,
i = 1, 2, ..., n
where r[i] is the ith residual and is given by r[i] = f[i] - (c[i][1]*a[1] + c[i][2]*a[2] + ... + c[i][m]*a[m]) subject to the conditions el[i] <= c[i][1]*a[1] + ... + c[i][m]*a[m] <= ue[i] i = 1, 2, ...,n where el[i] and ue[i] are given Also the following conditions should be satisfied el[i]<= f[i] <= ue[i], i=1,...,n This driver contains the 8 examples whose results are given in the text. Let "big" denote a large real number, say 1000 and "z" be the Chebyshev solution of the given problem. Then all the examples are solved for the 4 cases: 1) el[i] = - big and ue[i] = big, i = 1, 2, ..., n The output will be the ordinary Chebyshev solution.
© 2008 by Taylor & Francis Group, LLC
436
Numerical Linear Approximation in C
2) el[i] = f[i] - 0.5*z and ue[i] = big, i = 1, 2, ..., n The output is the solution for arbitrary lower and upper ranges. 3) el[i] = -big and ue[i] = f[i], i = 1, 2, ..., n The output is the one-sided Chebyshev solution from above. 4) el[i] = c1 and ue[i] = c2 where c1 is min{f[i]} and c2 = max{f[i]}; i = 1, 2, ..., n. -------------------------------------------------------------------*/ #include "DR_Defs.h" #include "LA_Prototypes.h" #define #define #define #define #define #define #define #define #define #define #define #define #define #define #define #define
N1 M1 N2 M2 N3 M3 N4 M4 N5 M5 N6 M6 N7 M7 N8 M8
4 2 5 3 6 3 7 3 8 4 10 5 25 10 8 4
void DR_Restch (void) { /*---------------------------------------Constant matrices/vectors ----------------------------------------*/ static tNumber_R c1init[N1][M1] = { { 0.0, -2.0 }, { 0.0, -4.0 }, { 1.0, 10.0 }, {-1.0, 15.0 } }; static tNumber_R c2init[N2][M2] = © 2008 by Taylor & Francis Group, LLC
Chapter 13: DR_Restch
437
{ { 1.0, {-1.0, { 1.0, { 0.0, { 0.0,
2.0, -1.0, 3.0, 1.0, 0.0,
0.0 0.0 0.0 0.0 1.0
}, }, }, }, }
}; static tNumber_R { { 0.0, -1.0, { 1.0, 3.0, { 1.0, 0.0, { 0.0, 0.0, {-1.0, 1.0, { 1.0, 1.0, };
c3init[N3][M3] = 0.0 -4.0 0.0 1.0 2.0 1.0
}, }, }, }, }, }
static tNumber_R c4init[N4][M4] = { { 1.0, 0.0, 1.0 }, { 1.0, 2.0, 2.0 }, { 1.0, 2.0, 0.0 }, { 1.0, 1.0, 0.0 }, { 1.0, 0.0, -1.0 }, { 1.0, 0.0, 0.0 }, { 1.0, 1.0, 1.0 } }; static tNumber_R { { 1.0, -3.0, { 1.0, -2.0, { 1.0, -1.0, { 1.0, 0.0, { 1.0, 1.0, { 1.0, 2.0, { 1.0, 3.0, { 1.0, 4.0, };
c5init[N5][M5] = 9.0, 4.0, 1.0, 0.0, 1.0, 4.0, 9.0, 16.0,
-27.0 -8.0 -1.0 0.0 1.0 8.0 27.0 64.0
}, }, }, }, }, }, }, }
static tNumber_R c6init[N6][M6] = { { 1.0, 0.0, 0.0, 0.0, 0.0 }, { 0.0, 1.0, 0.0, 0.0, 0.0 }, © 2008 by Taylor & Francis Group, LLC
438
Numerical Linear Approximation in C { 0.0, { 0.0, { 0.0, { 1.0, { 0.0, {-1.0, { 1.0, { 1.0,
0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0,
1.0, 0.0, 0.0, 1.0, 1.0, -1.0, 0.0, 1.0,
0.0, 1.0, 0.0, 1.0, 1.0, -1.0, 1.0, 0.0,
0.0 0.0 1.0 1.0 1.0 -1.0 1.0 1.0
}, }, }, }, }, }, }, }
}; static tNumber_R { { 1.0, 1.0, { 1.0, 2.0, { 1.0, 3.0, { 1.0, 4.0, { 1.0, 5.0, { 1.0, 6.0, { 1.0, 7.0, { 1.0, 8.0, };
c8init[N8][M8] = 1.0, 4.0, 9.0, 16.0, 25.0, 36.0, 49.0, 64.0,
1.0 4.0 9.0 16.0 25.0 36.0 49.0 64.0
}, }, }, }, }, }, }, }
static tNumber_R f1[N1+1] = { NIL, -12.0, 6.0, 0.0, 5.0 }; static tNumber_R f2[N2+1] = { NIL, 1.0, 2.0, 1.0, -3.0, 0.0 }; static tNumber_R f3[N3+1] = { NIL, 1.0, 2.0, 3.0, 2.0, 2.0, 4.0 }; static tNumber_R f4[N4+1] = { NIL, 0.0, -2.0, 1.0, -1.0, 5.0, 7.0, 0.0 }; static tNumber_R f5[N5+1] = { NIL, 3.0, -3.0, -2.0, 0.0, 7.0, -1.0, 5.0, 2.0 © 2008 by Taylor & Francis Group, LLC
Chapter 13: DR_Restch
439
}; static tNumber_R f6[N6+1] = { NIL, 1.0, -1.0, 0.0, -1.0, 1.0, 0.0, 2.0, 3.0, -3.0, -2.0 }; static tNumber_R f7[N7+1] { NIL, 0.0872673, 0.0872794, 0.3491184, 0.3498802, 0.6111334, 0.6150641, 0.8733883, 0.8841621, 1.135895, 1.157550, };
= 0.0873029, 0.3513824, 0.6230824, 0.9071868, 1.206257,
0.0873315, 0.3532572, 0.6336395, 0.9400757, 1.283258,
0.0873576, 0.3550109, 0.6441493, 0.9766021, 1.384432
static tNumber_R f8[N8+1] = { NIL, 2.0, 2.5, 2.0, 6.5, 3.5, 4.5, 6.0, 7.0 }; /* The elements of vector zz are the optimum Chebyshev norms for the 8 examples */ static tNumber_R zz[9] = { NIL, 10.0, 2.25, 1.5556, 2.625, 3.786, 1.7778, 0.0071, 1.9 }; /*---------------------------------------Variable matrices/vectors ----------------------------------------*/ tMatrix_R ct = alloc_Matrix_R (MMc_COLS, NN_ROWS); tVector_R f = alloc_Vector_R (NN_ROWS); tVector_R r = alloc_Vector_R (NN_ROWS); tVector_R a = alloc_Vector_R (MMc_COLS); tVector_R el = alloc_Vector_R (NN_ROWS); tVector_R ue = alloc_Vector_R (NN_ROWS); tMatrix_R c7 = alloc_Matrix_R (N7, M7); tMatrix_R tMatrix_R tMatrix_R tMatrix_R tMatrix_R tMatrix_R
c1 c2 c3 c4 c5 c6
© 2008 by Taylor & Francis Group, LLC
= = = = = =
init_Matrix_R init_Matrix_R init_Matrix_R init_Matrix_R init_Matrix_R init_Matrix_R
(&(c1init[0][0]), (&(c2init[0][0]), (&(c3init[0][0]), (&(c4init[0][0]), (&(c5init[0][0]), (&(c6init[0][0]),
N1, N2, N3, N4, N5, N6,
M1); M2); M3); M4); M5); M6);
440
Numerical Linear Approximation in C tMatrix_R
c8
= init_Matrix_R (&(c8init[0][0]), N8, M8);
int int tNumber_R
irank, iter, kase; i, j, k, m, n, Iexmpl; cc1, cc2, d, dd, ddd, e, ee, eee, big, z;
eLaRc
rc = LaRcOk;
big = 1.0/ (EPS * EPS); for (j = 1; j <= 5; j++) { d = 0.15* (j-3); dd = d*d; ddd = d*dd; for (i = 1; i <= 5; i++) { e = 0.15* (i-3); ee = e*e; eee = e*ee; k = 5* (j-1) + i; c7[k][1] = 1.0; c7[k][2] = d; c7[k][3] = e; c7[k][4] = dd; c7[k][5] = ee; c7[k][6] = e*d; c7[k][7] = ddd; c7[k][8] = eee; c7[k][9] = dd*e; c7[k][10] = ee*d; } } prn_dr_bnr ("DR_Restch, Restricted Chebyshev Solution of an " "Overdetermined System of Linear Equations"); for (kase = 1; kase <= 4; kase++) { for (Iexmpl = 1; Iexmpl <= 8; Iexmpl++) { switch (Iexmpl) { case 1: n = N1; m = M1; © 2008 by Taylor & Francis Group, LLC
Chapter 13: DR_Restch
441
z = zz[1]; for (i = 1; i <= n; i++) { f[i] = f1[i]; for (j = 1; j <= m; j++) ct[j][i] = c1[i][j]; } break; case 2: n = m = z = for {
N2; M2; zz[2]; (i = 1; i <= n; i++) f[i] = f2[i]; for (j = 1; j <= m; j++) ct[j][i] = c2[i][j];
} break; case 3: n = m = z = for {
N3; M3; zz[3]; (i = 1; i <= n; i++) f[i] = f3[i]; for (j = 1; j <= m; j++) ct[j][i] = c3[i][j];
} break; case 4: n = m = z = for {
N4; M4; zz[4]; (i = 1; i <= n; i++) f[i] = f4[i]; for (j = 1; j <= m; j++) ct[j][i] = c4[i][j];
} break; case 5: n = N5; m = M5; z = zz[5]; © 2008 by Taylor & Francis Group, LLC
442
Numerical Linear Approximation in C for (i = 1; i <= n; i++) { f[i] = f5[i]; for (j = 1; j <= m; j++) ct[j][i] = c5[i][j]; } break; case 6: n = m = z = for {
N6; M6; zz[6]; (i = 1; i <= n; i++) f[i] = f6[i]; for (j = 1; j <= m; j++) ct[j][i] = c6[i][j];
} break; case 7: n = m = z = for {
N7; M7; zz[7]; (i = 1; i <= n; i++) f[i] = f7[i]; for (j = 1; j <= m; j++) ct[j][i] = c7[i][j];
} break; case 8: n = m = z = for {
N8; M8; zz[8]; (i = 1; i <= n; i++) f[i] = f8[i]; for (j = 1; j <= m; j++) ct[j][i] = c8[i][j];
} break; default: break; } if (kase == 1) © 2008 by Taylor & Francis Group, LLC
Chapter 13: DR_Restch { for (i = 1; i <= n; i++) { el[i] = - big; ue[i] = big; } } if (kase == 2) { for (i = 1; i <= n; i++) { el[i] = f[i] - z/2.0; ue[i] = big; } } if (kase == 3) { for (i = 1; i <= n; i++) { el[i] = -big; ue[i] = f[i]; } } if (kase == 4) { cc1 = big; cc2 = -big; for (j = 1; j <= n; j++) { if (f[j] < cc1) cc1 = f[j]; if (f[j] > cc2) cc2 = f[j]; } for (i = 1; i <= n; i++) { el[i] = cc1; ue[i] = cc2; } } prn_algo_bnr ("Restch"); prn_example_delim(); PRN ("Example #%d: Size of matrix \"c\" %d by %d\n", Iexmpl, n, m); prn_example_delim(); © 2008 by Taylor & Francis Group, LLC
443
444
Numerical Linear Approximation in C if (kase == 1) PRN ("Ordinary Chebyshev Solution\n"); else if (kase == 2) PRN ("Solution for arbitrary lower and upper " "ranges\n"); else if (kase == 3) PRN ("One-sided Chebyshev solution from above.\n"); else if (kase == 4) PRN ("Solution for arbitrary lower and upper " "ranges\n"); prn_example_delim(); PRN ("r.h.s. Vector \"f\"\n"); prn_Vector_R (f, n); PRN ("Transpose of Coefficient Matrix, \"ct\"\n"); prn_Matrix_R (ct, m, n); rc = LA_Restch (m, n, ct, f, el, ue, &irank, &iter, r, a, &z); if (rc >= LaRcOk) { PRN ("\n"); PRN ("Results of the Restricted Approximation\n"); PRN ("Restricted solution vector, \"a\"\n"); prn_Vector_R (a, m); PRN ("Restricted residual vector, \"r\"\n"); prn_Vector_R (r, n); PRN ("Restricted Chebyshev norm, \"z\" = %8.4f\n", z); PRN ("Rank of matrix \"c\" = %d, No. of " "iterations = %d\n", irank, iter); } prn_la_rc (rc); } } free_Matrix_R free_Vector_R free_Vector_R free_Vector_R free_Vector_R free_Vector_R free_Matrix_R
(ct, MMc_COLS); (f); (r); (a); (el); (ue); (c7, N7);
© 2008 by Taylor & Francis Group, LLC
Chapter 13: DR_Restch uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R
(c1); (c2); (c3); (c4); (c5); (c6); (c8);
}
© 2008 by Taylor & Francis Group, LLC
445
446
Numerical Linear Approximation in C
13.9
LA_Restch
/*------------------------------------------------------------------LA_Restch --------------------------------------------------------------------This program calculates the restricted Chebyshev solution of an overdetermined system of linear equation. It uses a modified simplex method to the linear programming formulation of the problem. For purpose of numerical stability, this program uses a triangular decomposition to the basis matrix. The system of linear equations has the form c*a = f "c" is a given real n by m matrix of rank <= m < n. "f" is a given real n vector. The problem is to calculate the elements of the m vector "a" that gives the minimum Chebyshev norm z, z = max|r[i]|,
i = 1, 2, ..., n
where r[i] is the ith residual and is given by r[i] = f[i] - (c[i][1]*a[1] + c[i][2]*a[2] +...+ c[i][m]*a[m]), subject to the conditions el[i] <= c[i][1]*a[1] + ... + c[i][m]*a[m] <= ue[i] where el[i] and ue[i] are given parameters for i = 1, 2, ..., n. Also the following conditions should be satisfied. el[i] <= f[i] <= ue[i],
i = 1, 2, ..., n
Inputs m Number of columns of matrix "c" in the system c*a = f. n Number of rows of matrix "c" in the system c*a = f. ct A real (m+1) by n matrix. Its first m rows and its n columns contain the transpose of matrix "c" of the system c*a = f. Its (m+1)th row will be filled with ones by the program.
© 2008 by Taylor & Francis Group, LLC
Chapter 13: LA_Restch f el ue
447
A real n vector containing the r.h.s. of the system c*a = f. A real n vector containing the lower range vector "el". A real n vector containing the upper range vector "ue".
Local Variables ginv A real (m+1) square matrix that is the inverse of the basis matrix. vb A real (m+1) vector containing the basic solution in the linear programming problem. ic An integer m vector containing the column indices of matrix "ct" that form the columns of the basis matrix. ir An integer (m+1) vector containing the indices of the rows of matrix "ct". ib An integer n vector, each element of which has the value 1, 2, 3 or 4. p An (((m+1)*((m+1)+3))/2) vector whose first (((irank+1)*(irank+4))/2)-1 elements contain the ((irank+1)*(irank+2))/2 elements of an upper triangular matrix + extra "irank" working locations. See the comments in LA_pslvc(). Outputs irank The calculated rank of matrix "c". iter The number of iterations needed for the problem. a A real (m+1) vector whose first m elements are the solution vector "a". r A real n vector containing the residual vector r = f - c*a. z The restricted Chebyshev norm of the residual vector. Returns one of LaRcSolutionUnique LaRcSolutionProbNotUnique LaRcSolutionDefNotUniqueRD LaRcNoFeasibleSolution LaRcInconsistentConstraints LaRcErrBounds LaRcErrNullPtr LaRcErrAlloc -------------------------------------------------------------------*/ #include "LA_Prototypes.h" eLaRc LA_Restch (int m, int n, tMatrix_R ct, tVector_R f, tVector_R el, tVector_R ue, int *pIrank, int *pIter, tVector_R r, tVector_R a, tNumber_R *pZ) © 2008 by Taylor & Francis Group, LLC
448
Numerical Linear Approximation in C
{ tMatrix_R tVector_R tVector_R tVector_R
ginv vb zc p
= = = =
alloc_Matrix_R (m alloc_Vector_R (m alloc_Vector_R (n alloc_Vector_R (((m + 1) * ((m + alloc_Vector_R (n alloc_Vector_R (m alloc_Vector_R (m alloc_Vector_R (m alloc_Vector_I (m alloc_Vector_I (m alloc_Vector_I (n
+ 1, m + 1); + 1); + 1);
tVector_R tVector_R tVector_R tVector_R tVector_I tVector_I tVector_I
g u v w ic ir ib
= = = = = = =
int int
i = 0, k = 0, kd = 0, kk = 0, ko = 0, m1 = 0; ijk = 0, itest = 0, iout = 0, ivo = 0, jin = 0, nmm = 0, irank1 = 0;
tNumber_R
ggg = 0.0, piv = 0.0;
eLaRc
tempRc;
1) + 3)) / 2); + 1); + 1); + 1); + 1); + 1); + 1); + 1);
/* Validation of the data before executing the algorithm */ eLaRc rc = LaRcSolutionUnique; VALIDATE_BOUNDS ((0 < m) && (m < n)); VALIDATE_PTRS (ct && f && el && ue && pIrank && pIter && r && a && pZ); VALIDATE_ALLOC (ginv && vb && zc && p && g && u && v && w && ic && ir && ib); /* Initialization & checking some constraints */ m1 = m + 1; nmm = (m1* (m1 + 3))/2; *pIrank = m; irank1 = *pIrank + 1; *pIter = 0; *pZ = 0.0; /* Initializing program data */ tempRc = LA_restch_init (m, n, ct, f, el, ue, ir, ib, ic, g, ginv, a); if ( tempRc < LaRcOk ) { GOTO_CLEANUP_RC (tempRc); } © 2008 by Taylor & Francis Group, LLC
Chapter 13: LA_Restch
449
/* Part 1 of the algorithm */ iout = 0; for (kk = 1; kk <= m1; kk++) { iout = iout + 1; if (iout <= irank1) { /* Part 1 of the program */ LA_restch_part_1 (&piv, iout, &jin, n, ct, ic, ginv); /* Detect the rank of matrix "c" */ LA_restch_detect_rank (&piv, &iout, &jin, n, ct, ic, ir, ib, g, v, ginv, pIrank, pIter); } } /* Part 2 of the algorithm. Initial feasible basic solution and a positive norm z */ LA_restch_part_2 (n, ct, ic, ib, g, v, ginv, vb, pIrank); /* Initializing the triangular matrix */ LA_restch_init_p (m, p, pIrank); /* Calculating the marginal costs */ LA_restch_marg_costs (m, n, ct, f, el, ue, ic, ir, ib, g, u, v, w, zc, p, ginv, pIrank, r, a, pZ); /* Part 3 */ for (ijk = 1; ijk <= n*n; ijk++) { irank1 = *pIrank + 1; ivo = 0; /* Determine the vector that enters the basis */ LA_restch_vent (&ivo, &jin, &ggg, n, f, el, ue, ic, ib, zc, pIrank, pZ); /* Calculate the results */ if (ivo == 0) { tempRc = LA_restch_res (m, f, el, ue, ic, ib, v, p, vb, pIrank, r, pZ); GOTO_CLEANUP_RC ( tempRc == LaRcOk ? rc : tempRc ); © 2008 by Taylor & Francis Group, LLC
450
Numerical Linear Approximation in C } ko = ib[jin]; if (ko != ivo) { kk = ko + ivo - 2; if (kk != 2 && kk != 4) { for (i = 1; i <= irank1; i++) { ct[i][jin] = -ct[i][jin]; } } for (i = 1; i <= m1; i++) { k = ir[i]; if (k == m1) ko = i; } if (ivo <= 2) ct[ko][jin] = 1.0; if (ivo > 2) ct[ko][jin] = 0.0; } /* Determine the vector that leaves the basis */ itest = 0; ib[jin] = ivo; g[jin] = ggg; LA_pslvc (1, irank1, p, vb, v); LA_restch_vleav (jin, &iout, &itest, ct, u, v, w, p, ginv, pIrank); /* No feasible solution has been found */ if (itest != 1) { GOTO_CLEANUP_RC (LaRcNoFeasibleSolution); } *pIter = *pIter + 1; if (iout != irank1) { /* Updating of the triangular factorization */ LA_restch_update_p (iout, ic, p, pIrank); }
© 2008 by Taylor & Francis Group, LLC
Chapter 13: LA_Restch
451
ic[irank1] = jin; k = irank1; kd = irank1; for (i = 1; i <= irank1; i++) { p[k] = u[i]; k = k + kd; kd = kd - 1; } if (iout != irank1) { /* Update matrix ginv */ LA_restch_update_ginv (iout, ct, ir, p, ginv, vb, pIrank); } /* Calculate the results; vector "a" and z */ LA_restch_marg_costs (m, n, ct, f, el, ue, ic, ir, ib, g, u, v, w, zc, p, ginv, pIrank, r, a, pZ); } CLEANUP: free_Matrix_R free_Vector_R free_Vector_R free_Vector_R free_Vector_R free_Vector_R free_Vector_R free_Vector_R free_Vector_I free_Vector_I free_Vector_I
(ginv, m + 1); (vb); (zc); (p); (g); (u); (v); (w); (ic); (ir); (ib);
return rc; } /*------------------------------------------------------------------Initializing data for LA_Restch() -------------------------------------------------------------------*/ eLaRc LA_restch_init (int m, int n, tMatrix_R ct, tVector_R f, tVector_R el, tVector_R ue, tVector_I ir, tVector_I ib, tVector_I ic, tVector_R g, tMatrix_R ginv, tVector_R a) © 2008 by Taylor & Francis Group, LLC
452
Numerical Linear Approximation in C
{ int
i, j, m1;
m1 = m + 1; for (j = 1; j <= n; j++) { if (el[j] > f[j] || ue[j] < f[j]) return LaRcInconsistentConstraints; g[j] = f[j]; ib[j] = 1; ct[m1][j] = 1.0; } for (j = 1; j <= m1; j++) { for (i = 1; i <= m1; i++) { ginv[i][j] = 0.0; } ginv[j][j] = 1.0; ic[j] = 0; ir[j] = j; a[j] = 0.0; } return LaRcOk; } /*------------------------------------------------------------------Part 1 of for LA_Restch() -------------------------------------------------------------------*/ void LA_restch_part_1 (tNumber_R *pPiv, int iout, int *pJin, int n, tMatrix_R ct, tVector_I ic, tMatrix_R ginv) { int j, k, icb, ioutm1; tNumber_R d, s; *pPiv = 0.0; if (iout == 1) { for (j = 1; j <= n; j++) { d = ct[iout][j]; if (d < 0.0) d = -d; if (d > *pPiv) © 2008 by Taylor & Francis Group, LLC
Chapter 13: LA_Restch
453
{ *pJin = j; *pPiv = d; } } } else if (iout > 1) { ioutm1 = iout - 1; for (j = 1; j <= n; j++) { icb = 0; for (k = 1; k <= ioutm1; k++) { if (j == ic[k]) icb = 1; } if (icb == 0) { s = ct[iout][j]; for (k = 1; k <= ioutm1; k++) { s = s + ginv[iout][k] * (ct[k][j]); } d = s; if (d < 0.0) d = -d; if (d > *pPiv) { *pJin = j; *pPiv = d; } } } } } /*------------------------------------------------------------------Detect the rank of matrix "c" in LA_Restch() -------------------------------------------------------------------*/ void LA_restch_detect_rank (tNumber_R *pPiv, int *pIout, int *pJin, int n, tMatrix_R ct, tVector_I ic, tVector_I ir, tVector_I ib, tVector_R g, tVector_R v, tMatrix_R ginv, int *pIrank, int *pIter) { int i, j, k, icb, ioutm1, irank1; tNumber_R s, pivot; © 2008 by Taylor & Francis Group, LLC
454
Numerical Linear Approximation in C
/* Detection of rank deficiency */ irank1 = *pIrank + 1; if ((*pPiv < EPS) && *pIout != irank1) { for (j = 1; j <= n; j++) { ct[*pIout][j] = ct[*pIrank][j]; ct[*pIrank][j] = 1.0; ct[irank1][j] = 0.0; } ioutm1 = *pIout - 1; for (j = 1; j <= ioutm1; j++) { ginv[*pIout][j] = ginv[*pIrank][j]; ginv[*pIrank][j] = ginv[irank1][j]; ginv[irank1][j] = 0.0; } ir[*pIout] = ir[*pIrank]; ir[*pIrank] = ir[irank1]; ir[irank1] = 0; *pIrank = *pIrank - 1; irank1 = *pIrank + 1; *pIout = *pIout - 1; } if ((*pPiv <= EPS) && *pIout == irank1) { for (j = 1; j <= n; j++) { icb = 0; for (i = 1; i <= *pIrank; i++) { if (j == ic[i]) icb = 1; } if (icb == 0) { *pJin = j; *pPiv = 2.0; break; } } ib[*pJin] = 2; g[*pJin] = -g[*pJin]; for (i = 1; i <= *pIrank; i++) © 2008 by Taylor & Francis Group, LLC
Chapter 13: LA_Restch
455
{ ct[i][*pJin] = -ct[i][*pJin]; } } if (*pPiv > EPS) { for (i = 1; i <= irank1; i++) { s = ct[i][*pJin]; if (*pIout != 1) { ioutm1 = *pIout -1; if (i < *pIout) s = 0.0; for (k = 1; k <= ioutm1; k++) { s = s + ginv[i][k] * (ct[k][*pJin]); } } v[i] = s; } pivot = v[*pIout]; for (j = 1; j <= *pIout; j++) { ginv[*pIout][j] = ginv[*pIout][j]/pivot; } for (i = 1; i <= irank1; i++) { if (i != *pIout) { for (j = 1; j <= *pIout; j++) { ginv[i][j] = ginv[i][j] - v[i] * (ginv[*pIout][j]); } } } ic[*pIout] = *pJin; *pIter = *pIter + 1; } } /*------------------------------------------------------------------Part 2 of the algorithm for LA_Restch(). Obtaining a basic feasible solution and a positive norm z. © 2008 by Taylor & Francis Group, LLC
456
Numerical Linear Approximation in C
-------------------------------------------------------------------*/ void LA_restch_part_2 (int n, tMatrix_R ct, tVector_I ic, tVector_I ib, tVector_R g, tVector_R v, tMatrix_R ginv, tVector_R vb, int *pIrank) { int i, j, k, kk, l, ibv, jin, irank1; tNumber_R d, s, z; irank1 = *pIrank + 1; for (i = 1; i <= irank1; i++) { v[i] = 0.0; vb[i] = ginv[i][irank1]; } ibv = 0; d = 0.5; for (i = 1; i <= irank1; i++) { if (vb[i] <= -EPS) { vb[i] = -vb[i]; ibv = ibv + 1; jin = ic[i]; g[jin] = -g[jin]; kk = 2; if (ib[jin] == 2) kk = 1; ib[jin] = kk; for (k = 1; k <= *pIrank; k++) { ct[k][jin] = -ct[k][jin]; } d = d + vb[i]; for (j = 1; j <= irank1; j++) { ginv[i][j] = -ginv[i][j]; v[j] = v[j] + ginv[i][j]; } } } if (ibv > 0) { for (j = 1; j <= irank1; j++) { v[j] = v[j]/d; for (i = 1; i <= irank1; i++) © 2008 by Taylor & Francis Group, LLC
Chapter 13: LA_Restch
457
{ ginv[i][j] = ginv[i][j] - v[j]*vb[i]; } } for (j = 1; j <= irank1; j++) { vb[j] = ginv[j][irank1]; } } s = 0.0; for (i = 1; i <= irank1; i++) { k = ic[i]; s = s + g[k] * (vb[i]); } z = s; if (z <= -EPS) { for (j = 1; j <= n; j++) { g[j] = -g[j]; kk = 2; if (ib[j] == 2) kk = 1; ib[j] = kk; for (i = 1; i <= *pIrank; i++) { ct[i][j] = -ct[i][j]; } z = -z; for (l = 1; l <= *pIrank; l++) { for (i = 1; i <= irank1; i++) { ginv[i][l] = -ginv[i][l]; } } } } } /*------------------------------------------------------------------Initializing the triangular matrix for LA_Restch() -------------------------------------------------------------------*/ void LA_restch_init_p (int m, tVector_R p, int *pIrank) { © 2008 by Taylor & Francis Group, LLC
458
Numerical Linear Approximation in C int
i, k, kd, m1, nmm, irank1;
m1 = m + 1; nmm = (m1 * (m1 + 3))/2; irank1 = *pIrank + 1; for (i = 1; i <= nmm; i++) { p[i] = 0.0; } k = 1; kd = *pIrank + 2; for (i = 1; i <= irank1; i++) { p[k] = 1.0; k = k + kd; kd = kd - 1; } } /*------------------------------------------------------------------Determine the vector that enters the basis in LA_Restch() -------------------------------------------------------------------*/ void LA_restch_vent (int *pIvo, int *pJin, tNumber_R *pGgg, int n, tVector_R f, tVector_R el, tVector_R ue, tVector_I ic, tVector_I ib, tVector_R zc, int *pIrank, tNumber_R *pZ) { int i, j, kk; int icb, irank1; tNumber_R
gg, ga, gb, gc, gd;
irank1 = *pIrank + 1; gg = 1.0/ (EPS*EPS); for (j = 1; j <= n; j++) { ga = 0.0; gb = 0.0; gc = 0.0; gd = 0.0; icb = 0; kk = ib[j]; for (i = 1; i <= irank1; i++) { if (j == ic[i]) icb = 1; © 2008 by Taylor & Francis Group, LLC
Chapter 13: LA_Restch } if (icb == 0) { if (kk == 1) { ga = zc[j]; gb = *pZ + *pZ - ga; gc = ga - *pZ + f[j] gd = *pZ - ga - f[j] } else if (kk == 2) { gb = zc[j]; ga = *pZ + *pZ - gb; gc = *pZ - gb + f[j] gd = gb - *pZ - f[j] } else if (kk == 3) { gc = zc[j]; ga = *pZ + gc - f[j] gb = *pZ - gc + f[j] gd = ue[j] - el[j] } else if (kk == 4) { gd = zc[j]; ga = *pZ - gd - f[j] gb = *pZ+ *pZ - ga; gc = ue[j] - el[j] } if ((ga <= -EPS) && ga < { gg = ga; *pGgg = f[j]; *pJin = j; *pIvo = 1; } else if ((gb <= -EPS) && { gg = gb; *pGgg = -f[j]; *pJin = j; *pIvo = 2; } © 2008 by Taylor & Francis Group, LLC
459
- el[j]; + ue[j];
- el[j]; + ue[j];
+ el[j]; - el[j]; gc;
+ ue[j]; gd; gg)
gb < gg)
460
Numerical Linear Approximation in C else if ((gc <= -EPS) && gc < gg) { gg = gc; *pGgg = el[j]; *pJin = j; *pIvo = 3; } else if ((gd <= -EPS) && gd < gg) { gg = gd; *pGgg = -ue[j]; *pJin = j; *pIvo = 4; } } else if (icb == 1) { if (kk == 1) { gc = -*pZ + f[j] - el[j]; if ((gc <= -EPS) && gc < gg) { gg = gc; *pGgg = el[j]; *pJin = j; *pIvo = 3; } } else if (kk == 2) { gd = -*pZ - f[j] + ue[j]; if ((gd <= -EPS) && gd < gg) { gg = gd; *pGgg = -ue[j]; *pJin = j; *pIvo = 4; } } else if (kk == 3) { ga = *pZ - f[j] + el[j]; if ((ga <= -EPS) && ga < gg) {
© 2008 by Taylor & Francis Group, LLC
Chapter 13: LA_Restch
461
gg = ga; *pGgg = f[j]; *pJin = j; *pIvo = 1; } } else if (kk == 4) { gb = *pZ + f[j] - ue[j]; if ((gb <= -EPS) && gb < gg) { gg = gb; *pGgg = -f[j]; *pJin = j; *pIvo = 2; } } } } } /*------------------------------------------------------------------Determine the vector that leaves the basis in LA_Restch() -------------------------------------------------------------------*/ void LA_restch_vleav (int jin, int *pIout, int *pItest, tMatrix_R ct, tVector_R u, tVector_R v, tVector_R w, tVector_R p, tMatrix_R ginv, int *pIrank) { int i, k, irank1; tNumber_R d, s, gg, thmax; thmax = 1.0/ (EPS*EPS); irank1 = *pIrank + 1; for (i = 1; i <= irank1; i++) { s = 0.0; for (k = 1; k <= irank1; k++) { s = s + ginv[i][k] * (ct[k][jin]); } u[i] = s; } LA_pslvc (1, irank1, p, u, w);
© 2008 by Taylor & Francis Group, LLC
462
Numerical Linear Approximation in C for (i = 1; i <= irank1; i++) { d = w[i]; if (d >= EPS) { gg = v[i]/d; if (gg <= thmax) { thmax = gg; *pIout = i; *pItest = 1; } } }
} /*------------------------------------------------------------------Update vector p in LA_Restch() -------------------------------------------------------------------*/ void LA_restch_update_p (int iout, tVector_I ic, tVector_R p, int *pIrank) { int i, j, k, kp1, kd, irank1; irank1 = *pIrank + 1; for (j = iout; j <= *pIrank; j++) { k = j; kp1 = k + 1; kd = irank1; /* Swap two elements of vector "ic" */ swap_elems_Vector_I (ic, k, kp1); for (i = 1; i <= kp1; i++) { p[k] = p[k+1]; k = k + kd; kd = kd - 1; } } } /*------------------------------------------------------------------Update matrix ginv in LA_Restch() -------------------------------------------------------------------*/ © 2008 by Taylor & Francis Group, LLC
Chapter 13: LA_Restch void LA_restch_update_ginv (int iout, tMatrix_R ct, tVector_I ir, tVector_R p, tMatrix_R ginv, tVector_R vb, int *pIrank) { int i, j, k, l, kl, kd, kk, irank1; int ii, ip1; tNumber_R
d, e;
irank1 = *pIrank + 1; for (i = iout; i <= *pIrank; i++) { ii = i; ip1 = i + 1; k = 0; kd = *pIrank + 2; for (j = 1; j <= ii; j++) { k = k + kd; kd = kd - 1; } kk = k; kl = k - kd; l = kl; d = p[k]; if (d < 0.0) d = -d; e = p[l]; if (e < 0.0) e = -e; if (d > e) { for (j = ii; j <= irank1; j++) { /* Swap two elements of a real vector */ swap_elems_Vector_R (p, l, k); k = k + 1; l = l + 1; } /* Swap two elements of an integer vector */ swap_elems_Vector_I (ir, i, ip1); /* Swap two rows of matrix "ct" */ swap_rows_Matrix_R (ct, i, ip1); /* Swap two rows of matrix "ginv" */ swap_rows_Matrix_R (ginv, i, ip1); © 2008 by Taylor & Francis Group, LLC
463
464
Numerical Linear Approximation in C
/* Swap two columns of matrix "ginv" */ for (k = 1; k <= irank1; k++) { d = ginv[k][i]; ginv[k][i] = ginv[k][ip1]; ginv[k][ip1] = d; } /* Swap two elements of a real vector */ swap_elems_Vector_R (vb, i, ip1); } e = k = l = for {
p[kk]/p[kl]; kk; kl; (j = ii; j <= irank1; j++) p[k] = p[k] - e * (p[l]); k = k + 1; l = l + 1;
} for (j = 1; j <= irank1; j++) { ginv[ip1][j] = ginv[ip1][j] - e * (ginv[i][j]); } vb[ip1] = vb[ip1] - e * (vb[i]); } } /*------------------------------------------------------------------Calculate the results of LA_Restch() -------------------------------------------------------------------*/ void LA_restch_marg_costs (int m, int n, tMatrix_R ct, tVector_R f, tVector_R el, tVector_R ue, tVector_I ic, tVector_I ir, tVector_I ib, tVector_R g, tVector_R u, tVector_R v, tVector_R w, tVector_R zc, tVector_R p, tMatrix_R ginv, int *pIrank, tVector_R r, tVector_R a, tNumber_R *pZ) { int i, j, k, kk, m1, icb, irank1; tNumber_R d, s; irank1 = *pIrank + 1; m1 = m + 1; for (j = 1; j <= irank1; j++) { © 2008 by Taylor & Francis Group, LLC
Chapter 13: LA_Restch k = ic[j]; u[j] = g[k]; } LA_pslvc (2, irank1, p, u, w); for (j = 1; j <= irank1; j++) { s = 0.0; for (k = 1; k <= irank1; k++) { s = s + w[k] * (ginv[k][j]); } v[j] = s; k = ir[j]; a[k] = s; } *pZ = a[m1]; for (j = 1; j <= n; j++) { zc[j] = 0.0; icb = 0; for (i = 1; i <= irank1; i++) { if (j == ic[i]) icb = 1; } if (icb == 0) { kk = ib[j]; s = f[j] + *pZ; if (kk == 2) s = f[j] - *pZ; if (kk == 3) s = f[j]; d = -1.0; if (kk == 2 || kk == 4) d = 1.0; for (i = 1; i <= irank1; i++) { s = s + d * (v[i]) * (ct[i][j]); } r[j] = s; if (kk == 1) zc[j] = *pZ - r[j]; if (kk == 2) zc[j] = r[j] + *pZ; if (kk == 3) zc[j] = f[j] - el[j] - r[j]; if (kk == 4) zc[j] = r[j] + ue[j] - f[j]; } } } © 2008 by Taylor & Francis Group, LLC
465
466
Numerical Linear Approximation in C
/*------------------------------------------------------------------Calculating the residuals in LA_Restch() -------------------------------------------------------------------*/ eLaRc LA_restch_res (int m, tVector_R f, tVector_R el, tVector_R ue, tVector_I ic, tVector_I ib, tVector_R v, tVector_R p, tVector_R vb, int *pIrank, tVector_R r, tNumber_R *pZ) { int i, j, kk, m1, irank1; eLaRc
rc = LaRcOk;
irank1 = *pIrank m1 = m + 1; for (i = 1; i <= { j = ic[i]; kk = ib[j]; if (kk == 1) if (kk == 2) if (kk == 3) if (kk == 4) }
+ 1; irank1; i++)
r[j] r[j] r[j] r[j]
= = = =
*pZ; -*pZ; f[j] - el[j]; f[j] - ue[j];
if (*pIrank < m) { rc = LaRcSolutionDefNotUniqueRD; } else if (*pIrank == m) { LA_pslvc (1, irank1, p, vb, v); for (i = 1; i <= m1; i++) { if (v[i] < EPS) rc = LaRcSolutionProbNotUnique; } } return rc; } /*------------------------------------------------------------------LA_pslvc --------------------------------------------------------------------This function solves the square non-singular system of linear equations. © 2008 by Taylor & Francis Group, LLC
Chapter 13: LA_Restch
467 p*x = b
or the square non-singular system of linear equations p(transpose)*x = b where "p" is an upper triangular matrix, "b" is the right hand side vector and "x" is the solution vector. Inputs id An integer specifying the action to be performed. If id = 1 the equation "p*x = b" is solved. if id= any integer other than 1, the equation "p(transpose)*x = b" is solved. k The number of equations of the given system. p An (((m+1)*((m+1)+3))/2) vector. Its first (k+1) elements contain the first k elements of row 1 of the upper triangular matrix + an extra location to the right. Its next k elements contain the (k-1) elements of row 2 of the upper triangular matrix + an extra location to the right,..., etc. b An (m+1) vector. Its first "k" elements contain the r.h.s. vector "b" of the given system. Outputs x An (m+1) vector whose first k elements contain the solution to the given system. -------------------------------------------------------------------*/ void LA_pslvc (int id, int k, tVector_R p, tVector_R b, tVector_R x) { int i, j, jj, l, ll, kd, kk; int kkd, km1, kkm1; tNumber_R s; /* Solution of the upper triangular system */ if (id == 1) { l = (k-1) + (k* (k+1))/2; x[k] = b[k]/p[l]; if (k > 1) { kd = 3; km1 = k - 1; for (i = 1; i <= km1; i++) { j = k - i; © 2008 by Taylor & Francis Group, LLC
468
Numerical Linear Approximation in C l = l - kd; kd = kd + 1; s = b[j]; ll = l; jj = j; for (kk= 1; kk <= i; kk++) { ll = ll + 1; jj = jj + 1; s = s - p[ll] * (x[jj]); } x[j] = s/p[l]; } } } /* Solution of the lower triangular system */ else if (id != 1) { x[1] = b[1]/p[1]; if (k > 1) { l = 1; kd = k + 1; for (i = 2; i <= k; i++) { l = l + kd; kd = kd - 1; s = b[i]; kk = i; kkm1 = i - 1; kkd = k; for (j = 1; j <= kkm1; j++) { s = s - p[kk] * (x[j]); kk = kk + kkd; kkd = kkd - 1; } x[i] = s/p[l]; } } }
}
© 2008 by Taylor & Francis Group, LLC
469
Chapter 14 Strict Chebyshev Approximation
14.1
Introduction
Consider the overdetermined system of linear equations Ca = f C = (cij) is a given real n by m matrix of rank k, k ≤ m < n, f = (fi) is a given real n-vector and a = (aj) is the m-solution vector. The residual vector for this system is given by r = Ca – f In the last four chapters, algorithms are presented for four kinds of linear Chebyshev approximations. In Chapter 10, the ordinary Chebyshev approximation of system Ca = f is presented, where the Chebyshev norm of the residual vector r is minimum [1]. In Chapter 11, the one-sided Chebyshev approximation of system Ca = f requires the additional constraints that all the elements of the residual vector r be either non-positive or non-negative. In Chapter 12, the bounded Chebyshev approximation is presented, where the additional constraints are that each element of the solution vector a is bounded between 1 and –1. In Chapter 13, different additional constraints are that the left hand side of system Ca = f, i.e., vector Ca be bounded between lower and upper rages. The solution vector a in any of the aforementioned Chebyshev algorithms, if it exists, may or may not be unique. In this chapter, we study the uniqueness issue of the ordinary Chebyshev approximation. We consider what is known as the Strict Chebyshev solution of overdetermined systems of linear equations. A Strict Chebyshev solution is calculated only when the ordinary
© 2008 by Taylor & Francis Group, LLC
470
Numerical Linear Approximation in C
Chebyshev solution is not unique. The Strict Chebyshev solution is always unique. Let us denote the Chebyshev solution of Ca = f by C.S., and the Strict Chebyshev solution by S.C.S. Descloux [6] proved the important result that the Lp solution of system Ca = f converges to the S.C.S. as p → ∞. Rice [10] introduced a certain C.S. defined on a finite set, that he called the S.C.S, which is always unique. In this chapter, we use linear programming techniques to solve the Strict Chebyshev approximation problem. In Section 14.2, the problem is described as it was presented by Descloux. In Section 14.3, the linear programming analysis of the problem is given. This analysis provides a way to determine, for the majority of cases, all the equations that belong to the so-called characteristic set. It also gives an efficient method to calculate the inverse of the matrix needed to calculate the strict Chebyshev solution. In addition, it provides a way to recognize the elements of the solution vector of the ordinary Chebyshev solution that equal the corresponding elements of the strict Chebyshev solution. Necessary lemmas are presented. In Section 14.4, numerical results and comments on other algorithms to solve the same problem are given. Let E denote the set of the n equations Ca = f. The C.S. to this system is the m-vector a = (ai) that minimizes the Chebyshev norm of the residuals z = max|ri|, i ∈ E where ri is the
ith
residual and is given by m
(14.1.1)
ri =
∑ cij aj – fi ,
i∈E
j=1
Let us denote the C.S. a by (a)C.S. It is known that if C satisfies the Haar condition, where every m by m sub-matrix of matrix C is nonsingular, the C.S. vector (a)C.S. is unique. Otherwise it may not be unique. When the C.S. is not unique, there is a certain degree of freedom for some, but not all, of the residuals (14.1.1). For these residuals, the maximum absolute value is minimized over the C.S. and the resulting solution is the S.C.S. This is explained in the next section.
© 2008 by Taylor & Francis Group, LLC
Chapter 14: Strict Chebyshev Approximation
14.2
471
The problem as presented by Descloux
The following presentation of the problem is due to Descloux [6]. Assume that matrix C is of rank m and let the Chebyshev norm to the given system Ca = f be z1 = z. The solution (a)C.S. and z1 are obtained from (m + 1) equations of E known as the reference equation set (RS). Equation (10.3.2) is a RS; let it be given by m
∑ cij aj + δi z
(14.2.1)
= f i , i ∈ RS
j=1
where δi is either +1 or –1. It is better to write this equation as m
∑ cij aj
(14.2.1a)
= fi – δi z
, i ∈ RS
j=1
Assume that (a)C.S. is not unique and let W1 be the set of all Chebyshev solutions (a)C.S. to the system Ca = f. Let R1 be the collection of all equations i ∈ E for which |ri| = z1. R1 is denoted by Descloux as the characteristic set of f relative to matrix C. m
(14.2.2)
∑ cij aj
= f i – δ i z , i ∈ R1
j=1
System (14.2.2) is of rank s1, s1 ≤ m and thus W1 is a subset of the solutions to (14.2.2). It is required to obtain the C.S. of the system (E – R1) subject to the s1 constraints (14.2.2). This may be done by eliminating certain s1 elements of vector a, using (14.2.2). Then the C.S. of the obtained reduced system (E – R1) is calculated. To illustrate this point, see Example 14.2 below. System Ca = f thus reduces to the system of (E – R1) equations in (m – s1) unknowns, where C(2) is of rank (m – s1) (14.2.3)
C(2)a(2) = f(2)
The procedure is repeated for system (14.2.3). If the C.S. of (14.2.3) is not unique, the characteristic set R2 of (14.2.3) is obtained, and we eliminate s2 elements of a(2) from (E – R1) and obtain the C.S.
© 2008 by Taylor & Francis Group, LLC
472
Numerical Linear Approximation in C
of the further reduced system (E – R1 – R2). This process is repeated if necessary a finite number of times, until the C.S. of the most reduced system is unique. At the end, an m equations in m unknowns is obtained. It consists of s1 equations of R1 + s2 equations of R2 + …, namely (14.2.4)
Da = d
Duris and Temple [7] were the first to describe an algorithm for calculating the S.C.S. of Ca = f, following the presentation of Descloux. Thiran and Thiry [12] did not follow the presentation of Descloux. They rather used an ascent exchange algorithm for computing the strict Chebyshev solution to overdetermined systems of equations. Branning [4, 5] proposed a direct method for computing the same problem. Comments on the works Thiran and Thiry [12] and of Branning [4, 5] are given in Section 14.4. We present here an algorithm that is essentially that of Duris and Temple [7]. By using linear programming techniques, one obtains the solution in an efficient manner. Our computational scheme [2] also differs in several significant respects with the result that normally, the computational effort is reduced considerably. Our method deals also with full rank as well as rank deficient coefficient matrix C. 14.3
Linear programming analysis of the problem
The C.S. of system Ca = f is obtained by using the linear programming algorithm described in Chapter 10. Let in the final tableau for the Chebyshev approximation matrix B be the basis matrix. Let also bB be the optimal basic solution and (zi – fi), i = 1, 2, …, 2n, be the marginal costs for the optimum solution. By examining the final tableau of the linear programming problem, a simple procedure is used by which, for the majority of cases, we determine all the equations that belong to the characteristic set R1. If this procedure is not followed, many Gauss-Jordan iterations may be needed to obtain system (14.2.3) from system Ca = f. From the equation in Lemma 10.2, matrix B is the transpose of the coefficient matrix on the left hand side of the reference set (14.2.1a)
© 2008 by Taylor & Francis Group, LLC
Chapter 14: Strict Chebyshev Approximation
473
above [9]. Also, the residuals (14.1.1) in the RS are given by ri = ±(zi – fi – z). See for example, the first equation in Lemma 12.5. So that if (zi – fi) = 0 or 2z, ri is given by |ri| = z. From the final tableau of the programming problem for the Chebyshev approximation, we find out whether the C.S. of Ca = f is unique or not. Then, as indicated earlier, determine the characteristic set R1. 14.3.1
The characteristic set R1 and how to obtain it
The following lemma was proved in [2]. Lemma 14.1 If bB has no zero components, i.e., bB is non-degenerate, the C.S. of Ca = f is unique and the C.S. = S.C.S. The proof also follows from the fact that no non-basic column can replace any one of the basic columns. The inverse is not always true. There are problems with degenerate basic solutions but the C.S. solution is unique. See Example 14.1 below. Assume that C.S. of Ca = f is not unique. Assume that we have obtained all the optimal basic solutions bB of the linear programming problem for system Ca = f. Let bB(1), bB(2), …, be such solutions. We expect that each of these solutions is degenerate, i.e., each has one or more zero elements. We deduce from the definition of R1 that R1 consists of the union of the equations in the reference sets (14.2.1) associated with the nonzero elements of the corresponding bB(i), i = 1, 2, … . How to obtain the characteristic set R1 To obtain the characteristic set R1, that is, to obtain all the optimal basic solutions, a simple procedure is followed, requiring the calculation of some of the optimal solutions without the need to change the simplex tableau. To start with, the equations in RS of (14.2.1) corresponding to the nonzero elements of the bB at hand belong to R1. The non-basic columns are then examined and those that have zero marginal costs or marginal costs of 2z are considered. Remember that if the marginal cost (zi – fi) = 0 or 2z, ri is given by |ri| = z. Let i be one of such columns with zero marginal cost. Check in © 2008 by Taylor & Francis Group, LLC
474
Numerical Linear Approximation in C
the usual manner, if column i may enter the basis with positive level. If so, calculate the new optimal solution bB' without changing the simplex tableau. Then the equations of the new RS (14.2.1) that correspond to the nonzero elements of bB', belong to R1. This of course includes column i itself, since it entered the basis with nonzero (positive) level. This procedure is repeated for every non-basic column having zero marginal cost or a marginal cost of 2z. The reduced system (14.2.3) is then calculated as described. We call this the simplified procedure. An alternative procedure is given by Hadley ([8], pp. 166-168), which requires changing of the tableau. Our simplified procedure does not, in general, calculate all the optimal solutions, but it is successful in the majority of cases in finding the set R1. In Table 1 in [2], this procedure worked for all examples, with the exception of one (Example 3a) in which two major iterations, instead of one, were needed to determine R1. The following lemma has also been proved in [2]. Lemma 14.2 Assume that the optimal solutions bB(1), …, bB(r) have q zero components in common. We mean that each bB(i) has its jth component = 0. Then the equations corresponding to the nonzero elements of these solutions have rank (m – q). The proof follows directly from the fact that if the bB(i) have q zeros in common, the columns in the simplex tableau associated with the nonzero components of these bB(i), each has those q zero elements in common. It follows that if q = 0, the C.S. is unique. Example 14.1 Consider the C.S. of the equations a1 – 15a2 –0.5a1 + 7.5a2 2a2 –4a2
= –5 = 17.5 = 12 = 6
Two of the optimal basic solutions, bB(1) and bB(2), obtained by the simplified procedure are (0, 2/3, 1/3)T and (1/3, 0, 2/3)T and there is no zero component in common between bB(1) and bB(2) and thus the © 2008 by Taylor & Francis Group, LLC
Chapter 14: Strict Chebyshev Approximation
475
C.S. is unique; (a)C.S. = (0, 1)T = (a)S.C.S. and z = 10. We note here that the two basic solutions bB(1) and bB(2) correspond respectively to equations (1, 3 and 4) and (1, 3 and 2), and that all 4 equations form R1 of (14.2.2) which is of rank 2. 14.3.2
Calculating matrix (DT)–1
Instead of calculating matrix D of (14.2.4), we rather calculate the inverse of its transpose (DT)–1. This is done by successively modifying matrix B–1, as we see shortly. In the product form, B–1 may be given by the matrices Ei, i = 1, 2, …, m + 1. B–1 = Em+1Em…E1
(14.3.1)
The Ei are (m + 1)-square matrices and are given, for example, in ([8], p. 48). We need to calculate the matrices (Em+1–1B–1), (Em–1Em+1–1B–1), …, (Em+1–q–1 … Em+1–1B–1) If necessary we exchange the rows of B–1 before calculating the matrix Em+1–1 and the columns of (Em+1–1B–1) before calculating Em-1, … . This is to achieve maximum numerical stability. Let us assume that m = 5 and q = 2. Then the end result of this process is that the modified B–1 will have its right 3 columns become the right three columns of a 6-unit matrix. It is not difficult to write (see the second equation in Lemma 11.2) (a)C.ST = fT[Em+1–1B–1]m×m where the subscript m×m denotes the upper left sub-matrix. If we assume that m = 5 and q = 2, and that s1 = 3 and s2 = 2, then the system Da = d consists of s1 (= 3) equations of R1 + s2 (= 2) equations of R2. The elements of a(1) are permuted such that a(2) is obtained by eliminating the first s1 elements of a(1). As in the case of B–1 given in the product form (14.3.1), matrix (DT)–1 may also be given in the form (14.3.2) (DT)–1 = (Em …) … (Es1+s2 … Es1+1)(Es1 … E1) where the Ei are suitable m by m matrices. We write (14.3.2) in the form (DT)–1 = Em … Es1+1[(Es1 … E1)]m×m = Em … Es1+1[(G1)]m×m
© 2008 by Taylor & Francis Group, LLC
476
Numerical Linear Approximation in C
from which, for j ≤ s1, we write m
( a j ) S.C.S. =
∑ i=1
T –1
d i ( D ) ij =
s1
∑ fi [ ( Gi )ij ]m × m i=1
where fi are the s1 elements of (14.2.2). For j ≤ s2, a similar equation to the above is obtained, and so on. For more details, see [2]. The following lemma is proved in [2], and provides the important means to identify which elements of (a)S.C.S. equal their corresponding elements of (a)C.S.. This is also illustrated by Example 14.2. Lemma 14.3 Consider the first s1 columns of matrix G1. Then if in column j ≤ s1, there exists q zero elements in the position of the q zero elements in bB, then (aj)S.C.S. = (aj)C.S. 14.3.3
The case of a rank deficient coefficient matrix
For a system Ca = f, where matrix C is rank deficient, the columns of matrix C that are linearly dependent on the other columns are detected while obtaining the C.S. by the algorithm of Chapter 10, and are deleted. The parameters ai associated with these columns are set equal to 0. In such cases, the calculated S.C.S. would be for the overdetermined system whose coefficient matrix C consists of the linearly independent columns of the given coefficient matrix. 14.4
Numerical results and comments
The algorithm described above may be illustrated by the following example from Duris and Temple ([7], p. 697). Example 14.2 Obtain the strict Chebyshev solution of the following seven equations.
© 2008 by Taylor & Francis Group, LLC
Chapter 14: Strict Chebyshev Approximation
a1
477
+
a1 – a1 – 2a1 –
a3 = 1 a2 = 1 a2 + a3 = 1 a3 = 3 2a3 = 0 a2 – a3 = –4 a2 = 1
This example is solved by the linear programming technique of Chapter 10, which is incorporated in [2]. The inverse of the basis matrix and the basic solution in the final simplex tableau are shown for this problem. B–1 ––––––––––––––––––––––– –0.33 0.33 0.33 0.33 0 –1 –1 0 0 0 –1 0 0.33 0.67 1.67 0.67 –––––––––––––––––––––––
bB ––––– 0.33 0 0 0.67 –––––
Here, bB is degenerate. Columns 5, 2, 6 and 4 in the final tableau form the basis. By examining the simplex tableau, we find that no non-basic column with zero marginal cost or a marginal cost of 2z, can enter the basis with a positive level. Thus R1 consists of equations 5 and 4, which correspond to the nonzero elements of bB. It has rank s1 = 1 and the C.S. of Ca = f is not unique with (a)C.S. = (2, 3, 1)T and z1 = 2. It is observed that there exist two 0’s in column 1 of B–1 which correspond to the two 0’s of bB. Column 1 of B-1 correspond to a3. Hence, according to Lemma 14.3, (a3)S.C.S. = (a3)C.S. = 1. Since s1 = 1 and one element of (a)S.C.S. is now known, system (14.2.3), namely C(2)a(2) = f(2) is easily derived, as explained in Section 14.3.4. It is given by a1
a2 a1 – a2 a1 – a2 2a1 – a2
© 2008 by Taylor & Francis Group, LLC
= = = = =
1 – 1 = 0 1 1 – 1 = 0 –4 + 1 = –3 1
478
Numerical Linear Approximation in C
Vector bB of this system has one zero element with columns 7, 6 and 3 forming the basis and z2 = 1.5. However, the non-basic column 2 has a zero marginal cost and can replace column 3 in the basis, with a positive level. The obtained optimal solution bB' is not degenerate. Hence, the solution of this reduced system is unique. Matrix (DT)–1 is then calculated as described at the end of Section 14.3.2, and the S.C.S. is obtained from (14.2.4). The final result is (a)S.C.S = (1, 2.5, 1)T and the Chebyshev solution has two systems with z1 = 2.0 and z2 = 1.5. LA_Strict() implements this algorithm [3]. DR_Strict() tests 5 examples. Examples 2 and 5 each have 3 systems. Examples 1 and 4 each have 2 systems, and example 3 has one system; i.e., it has a unique C.S. Table 14.1 shows the results. Table 14.1 –––––––––––––––––––––––––––––––––––––––––––––––––––––––– Example C(n×m) Iterations # systems z1, z2, z3, … –––––––––––––––––––––––––––––––––––––––––––––––––––––––– 1 25×10 24 2 0.0071, 0.0064 2 5×3 9 3 2.25, 1.5, 0.0 3 7×3 7 1 2.625 4 7×4 10 2 2.0, 1.5 5 25× 6 26 3 1.0, 0.5, 0.25 For each example, the number of iterations; i.e., the total number of times the simplex tableau is changed, the number of systems and their corresponding z values are given. Example 1 is the same as example 2 in ([7], p. 698), solved by Duris and Temple. Example 4 is the same as Example 14.2. In Example 4, the 7 by 4 matrix C is of rank 3 and we get the same final vector r, number of systems and corresponding z values, as those for Ca = f when the linearly dependent column in C is discarded. In [2] we experimented with 14 examples (Table 1 in [2]). The data and the number of points were taken from Watson ([13], Table 2), who used this data for a different purpose. The results in Table 1 in [2] were compared with the results of Duris and Temple [7]. The number of major iterations is the same as those of ours for all but two of the examples.
© 2008 by Taylor & Francis Group, LLC
Chapter 14: Strict Chebyshev Approximation
479
As we noted earlier, Thiran and Thiry [12] used an ascent exchange algorithm for computing the strict Chebyshev solution to overdetermined systems of equations. The numbers of major iterations agree with ours for all but two (examples 3a and 4a of Table 1 in [2]), where their numbers are slightly smaller. Their algorithm is adaptive to improvements in order to deal with very ill-conditioned systems ([12], p. 724). Branning [4, 5] proposed a direct method for computing the same problem. However, as reported by Thiran and Thiry [12], Branning’s algorithm might lead to cycling, and convergence is not guaranteed.
References 1. 2. 3.
4. 5. 6. 7. 8.
Abdelmalek, N.N., Chebyshev solution of overdetermined systems of linear equations, BIT, 15(1975)117-129. Abdelmalek, N.N., Computing the strict Chebyshev solution of overdetermined linear equations, Mathematics of Computation, 31(1977)974-983. Abdelmalek, N.N., A computer program for the strict Chebyshev solution of overdetermined systems of linear equations, International Journal for Numerical Methods in Engineering, 13(1979)1715-1725. Brannigan, M., Theory and computation of best strict constrained Chebyshev approximation of discrete data, IMA Journal of Numerical Analysis, 1(1980)169-184. Brannigan, M., The strict Chebyshev solution of overdetermined systems of linear equations with rank deficient matrix, Numerische Mathematik, 40(1982)307-318. Descloux, J., Approximations in LP and Chebyshev approximations, Journal of Society of Industrial and Applied Mathematics, 11(1963)1017-1026. Duris, C.S. and Temple, M.G., A finite step algorithm for determining the “strict” Chebyshev solution to Ax = b, SIAM Journal on Numerical Analysis, 10(1973)690-699. Hadley, G., Linear Programming, Addison-Wesley, Reading, MA, 1962.
© 2008 by Taylor & Francis Group, LLC
480
9. 10. 11. 12. 13.
Numerical Linear Approximation in C
Osborne, M.R. and Watson, G.A., On the best linear Chebyshev approximation, Computer Journal, 10(1967)172177. Rice, J.R., Tchebycheff approximation in a compact metric space, Bulletin of American Mathematical Society, 68(1962)405-410. Rice, J.R., The Approximation of Functions, Vol. 2, AddisonWesley, Reading, MA, 1969. Thiran, J.P. and Thiry, S., Strict Chebyshev approximation for general systems of linear equations, Numerische Mathematik, 51(1987)701-725. Watson, G.A., A multiple exchange algorithm for multivariate Chebyshev approximation, SIAM Journal on Numerical Analysis, 12(1975)46-52.
© 2008 by Taylor & Francis Group, LLC
Chapter 14: DR_Strict
14.5
481
DR_Strict
/*------------------------------------------------------------------DR_Strict --------------------------------------------------------------------This program is a driver for the function LA_Strict(), which calculates the "Strict" Chebyshev solution of an overdetermined system of linear equations. It uses a linear programming method. The overdetermined system has the form c*a = f "c" is a given real n by m matrix of rank k, k <= m < n. "f" is a given real n vector. "a" is the solution m vector. This driver contains the 5 examples whose results are given in the text. Example 3 is the one solved in detail in the text. -------------------------------------------------------------------*/ #include "DR_Defs.h" #include "LA_Prototypes.h" #define #define #define #define #define #define #define #define #define #define
N1s M1s N2s M2s N3s M3s N4s M4s N5s M5s
25 10 5 3 7 3 7 4 25 6
void DR_Strict (void)
{
    /*----------------------------------------
        Constant matrices/vectors
    ----------------------------------------*/
    static tNumber_R c2init[N2s][M2s] =
    {
        {  1.0,  2.0,  0.0 },
        { -1.0, -1.0,  0.0 },
        {  1.0,  3.0,  0.0 },
        {  0.0,  1.0,  0.0 },
        {  0.0,  0.0,  1.0 }
    };
    static tNumber_R c3init[N3s][M3s] =
    {
        { 1.0, 0.0,  1.0 },
        { 1.0, 2.0,  2.0 },
        { 1.0, 2.0,  0.0 },
        { 1.0, 1.0,  0.0 },
        { 1.0, 0.0, -1.0 },
        { 1.0, 0.0,  0.0 },
        { 1.0, 1.0,  1.0 }
    };
    static tNumber_R c4init[N4s][M4s] =
    {
        { 1.0,  0.0,  1.0, 1.0 },
        { 0.0,  1.0,  0.0, 0.0 },
        { 1.0, -1.0,  1.0, 1.0 },
        { 0.0,  0.0,  1.0, 0.0 },
        { 0.0,  0.0,  2.0, 0.0 },
        { 1.0, -1.0, -1.0, 1.0 },
        { 2.0, -1.0,  0.0, 2.0 }
    };
    static tNumber_R c5init[N5s][M5s] =
    {
        { 1.0, -2.0, -2.0, 4.0,  4.0, 4.0 },
        { 1.0, -1.0, -2.0, 1.0,  2.0, 4.0 },
        { 1.0,  0.0, -2.0, 0.0,  0.0, 4.0 },
        { 1.0,  1.0, -2.0, 1.0, -2.0, 4.0 },
        { 1.0,  2.0, -2.0, 4.0, -4.0, 4.0 },
        { 1.0, -2.0, -1.0, 4.0,  2.0, 1.0 },
        { 1.0, -1.0, -1.0, 1.0,  1.0, 1.0 },
        { 1.0,  0.0, -1.0, 0.0,  0.0, 1.0 },
        { 1.0,  1.0, -1.0, 1.0, -1.0, 1.0 },
        { 1.0,  2.0, -1.0, 4.0, -2.0, 1.0 },
        { 1.0, -2.0,  0.0, 4.0,  0.0, 0.0 },
        { 1.0, -1.0,  0.0, 1.0,  0.0, 0.0 },
        { 1.0,  0.0,  0.0, 0.0,  0.0, 0.0 },
        { 1.0,  1.0,  0.0, 1.0,  0.0, 0.0 },
        { 1.0,  2.0,  0.0, 4.0,  0.0, 0.0 },
        { 1.0, -2.0,  1.0, 4.0, -2.0, 1.0 },
        { 1.0, -1.0,  1.0, 1.0, -1.0, 1.0 },
        { 1.0,  0.0,  1.0, 0.0,  0.0, 1.0 },
        { 1.0,  1.0,  1.0, 1.0,  1.0, 1.0 },
        { 1.0,  2.0,  1.0, 4.0,  2.0, 1.0 },
        { 1.0, -2.0,  2.0, 4.0, -4.0, 4.0 },
        { 1.0, -1.0,  2.0, 1.0, -2.0, 4.0 },
        { 1.0,  0.0,  2.0, 0.0,  0.0, 4.0 },
        { 1.0,  1.0,  2.0, 1.0,  2.0, 4.0 },
        { 1.0,  2.0,  2.0, 4.0,  4.0, 4.0 }
    };
    static tNumber_R f1[N1s+1] =
    {
        NIL,
        0.0872673, 0.0872794, 0.0873029, 0.0873315, 0.0873576,
        0.3491184, 0.3498802, 0.3513824, 0.3532572, 0.3550109,
        0.6111334, 0.6150641, 0.6230824, 0.6336395, 0.6441493,
        0.8733883, 0.8841621, 0.9071868, 0.9400757, 0.9766021,
        1.135895,  1.157550,  1.206257,  1.283258,  1.384432
    };
    static tNumber_R f2[N2s+1] = { NIL, 1.0, 2.0, 1.0, -3.0, 0.0 };
    static tNumber_R f3[N3s+1] = { NIL, 0.0, -2.0, 1.0, -1.0, 5.0,
        7.0, 0.0 };
    static tNumber_R f4[N4s+1] = { NIL, 1.0, 1.0, 1.0, 3.0, 0.0,
        -4.0, 1.0 };
    static tNumber_R f5[N5s+1] =
    {
        NIL,
        -1.0,    0.0,    1.0,   -1.0,    1.0,
        -1.5,   -0.875, -0.25,  -0.25,   0.375,
        -1.5,    0.0,    1.5,    2.25,   2.5,
         1.375,  3.0,    4.625,  6.0,    7.375,
         5.75,   8.0,   10.125, 12.125, 13.875
    };
    /*----------------------------------------
        Variable matrices/vectors
    ----------------------------------------*/
    tMatrix_R c   = alloc_Matrix_R (NN_ROWS, MMc_COLS);
    tVector_R f   = alloc_Vector_R (NN_ROWS);
    tMatrix_R cc  = alloc_Matrix_R (NN_ROWS, MMc_COLS);
    tVector_R fc  = alloc_Vector_R (NN_ROWS);
    tVector_R r   = alloc_Vector_R (NN_ROWS);
    tVector_R rch = alloc_Vector_R (NN_ROWS);
    tVector_R z   = alloc_Vector_R (NN_ROWS);
    tVector_R a   = alloc_Vector_R (MMc_COLS);
    tMatrix_R c1  = alloc_Matrix_R (N1s, M1s);
    tMatrix_R c2  = init_Matrix_R (&(c2init[0][0]), N2s, M2s);
    tMatrix_R c3  = init_Matrix_R (&(c3init[0][0]), N3s, M3s);
    tMatrix_R c4  = init_Matrix_R (&(c4init[0][0]), N4s, M4s);
    tMatrix_R c5  = init_Matrix_R (&(c5init[0][0]), N5s, M5s);

    int        iter, irank, ksys;
    int        i, j, k, m, n, Iexmpl;
    tNumber_R  d, dd, ddd, e, ee, eee;

    eLaRc      rc = LaRcOk;

    for (j = 1; j <= 5; j++)
    {
        d = 0.15* (j-3);
        dd = d*d;
        ddd = d*dd;
        for (i = 1; i <= 5; i++)
        {
            e = 0.15* (i-3);
            ee = e*e;
            eee = e*ee;
            k = 5* (j-1) + i;
            c1[k][1] = 1.0;
            c1[k][2] = d;
            c1[k][3] = e;
            c1[k][4] = dd;
            c1[k][5] = ee;
            c1[k][6] = e*d;
            c1[k][7] = ddd;
            c1[k][8] = eee;
            c1[k][9] = dd*e;
            c1[k][10] = ee*d;
        }
    }

    prn_dr_bnr ("DR_Strict, Strict Chebyshev Solution "
        "of Overdetermined Systems");

    for (Iexmpl = 1; Iexmpl <= 5; Iexmpl++)
    {
        switch (Iexmpl)
        {
            case 1:
                n = N1s;
                m = M1s;
                for (i = 1; i <= n; i++)
                {
                    f[i] = f1[i];
                    fc[i] = f1[i];
                    for (j = 1; j <= m; j++)
                    {
                        c[i][j] = c1[i][j];
                        cc[i][j] = c1[i][j];
                    }
                }
                break;
            case 2:
                n = N2s;
                m = M2s;
                for (i = 1; i <= n; i++)
                {
                    f[i] = f2[i];
                    fc[i] = f2[i];
                    for (j = 1; j <= m; j++)
                    {
                        c[i][j] = c2[i][j];
                        cc[i][j] = c2[i][j];
                    }
                }
                break;
            case 3:
                n = N3s;
                m = M3s;
                for (i = 1; i <= n; i++)
                {
                    f[i] = f3[i];
                    fc[i] = f3[i];
                    for (j = 1; j <= m; j++)
                    {
                        c[i][j] = c3[i][j];
                        cc[i][j] = c3[i][j];
                    }
                }
                break;
            case 4:
                n = N4s;
                m = M4s;
                for (i = 1; i <= n; i++)
                {
                    f[i] = f4[i];
                    fc[i] = f4[i];
                    for (j = 1; j <= m; j++)
                    {
                        c[i][j] = c4[i][j];
                        cc[i][j] = c4[i][j];
                    }
                }
                break;
            case 5:
                n = N5s;
                m = M5s;
                for (i = 1; i <= n; i++)
                {
                    f[i] = f5[i];
                    fc[i] = f5[i];
                    for (j = 1; j <= m; j++)
                    {
                        c[i][j] = c5[i][j];
                        cc[i][j] = c5[i][j];
                    }
                }
                break;
            default:
                break;
        }

        prn_algo_bnr ("Strict");
        prn_example_delim();
        PRN ("Example #%d: Size %d by %d\n", Iexmpl, n, m);
        prn_example_delim();
        PRN ("Strict Chebyshev Solution of an Overdetermined System "
            "of Linear Equations\n");
        prn_example_delim();
        PRN ("r.h.s. Vector \"f\"\n");
        prn_Vector_R (f, n);
        PRN ("Coefficient Matrix, \"c\"\n");
        prn_Matrix_R (c, n, m);

        rc = LA_Strict (m, n, c, f, &ksys, &irank, &iter, r, a, z);

        if (rc >= LaRcOk)
        {
            PRN ("\n");
            PRN ("Results of the Strict Chebyshev Solution\n");
            PRN ("Number of systems = %d, Iterations = %d\n",
                ksys, iter);
            PRN ("Solution vector, \"a\"\n");
            prn_Vector_R_nDec (a, m, 4);
            PRN ("Residual vector \"r\"\n");
            prn_Vector_R_nDec (r, n, 4);
            PRN ("System norms \"z\"\n");
            prn_Vector_R_nDec (z, ksys, 4);

            /* Here the residual vector is recalculated from the
               obtained vector "a", matrix "c" and vector "f", as a
               check on the calculated results. */
            for (j = 1; j <= n; j++)
            {
                d = -fc[j];
                for (i = 1; i <= m; i++)
                {
                    d = d + a[i] * cc[j][i];
                }
                rch[j] = d;
            }
            PRN ("Residual vector 'rc'\n");
            prn_Vector_R_nDec (rch, n, 4);
        }
        prn_la_rc (rc);
    }

    free_Matrix_R (c, NN_ROWS);
    free_Vector_R (f);
    free_Matrix_R (cc, NN_ROWS);
    free_Vector_R (fc);
    free_Vector_R (r);
    free_Vector_R (rch);
    free_Vector_R (z);
    free_Vector_R (a);
    free_Matrix_R (c1, N1s);

    uninit_Matrix_R (c2);
    uninit_Matrix_R (c3);
    uninit_Matrix_R (c4);
    uninit_Matrix_R (c5);
}
14.6 LA_Strict
/*------------------------------------------------------------------
LA_Strict
---------------------------------------------------------------------
This program calculates the "Strict" Chebyshev solution of an
overdetermined system of linear equations. It applies a modified
simplex method to the linear programming formulation of the problem.

The system of linear equations has the form

    c*a = f

"c" is a given real n by m matrix of rank k, k <= m < n.
"f" is a given real n vector.

The strict Chebyshev solution of the system c*a = f is the m vector
"a" that minimizes the Chebyshev norm

    z = max|r[i]|, i = 1, 2, ..., n

in the strict sense. r[i] is the ith residual of the system c*a = f
and is given by

    r[i] = c[i][1]*a[1] + c[i][2]*a[2] + ... + c[i][m]*a[m] - f[i],
                                                i = 1, 2, ..., n

Inputs
m       Number of columns of matrix "c" in the system c*a = f.
n       Number of rows of matrix "c" in the system c*a = f.
c       A real n by (m+1) matrix. Its n rows and first m columns
        contain matrix "c" of the system c*a = f.
f       A real n vector containing the r.h.s. of the system c*a = f.

Local Variables
ct      A real (m+1) by n matrix. Its first m rows and its n columns
        contain the transpose of matrix "c" of the system c*a = f.
        Its (m+1)th row will be filled with ones by the program.
binv    An (m+1) square matrix containing the inverse of the basis
        matrix in the linear programming problem.
bv      An (m+1) vector containing the basic solution in the linear
        programming problem.
icbas   An (m+1) vector containing the indices of the columns of
        matrix "ct" that form the columns of the basis matrix.
irbas   An (m+1) vector containing the row indices of "ct".

Outputs
irank   The calculated rank of matrix "c".
ksys    Number of involved systems. ksys = 1 indicates that one
        system, namely c*a = f, is solved. The obtained solution is
        the ordinary Chebyshev solution and it is unique. ksys = 2
        indicates that 2 systems are involved, and so on.
iter    Total number of iterations, or total number of times the
        simplex tableau is changed by a Gauss-Jordan step.
a       A real (m+1) vector whose first m elements are the solution
        vector "a" of the problem.
r       A real n vector containing the residual vector r = c*a - f.
z       A real n vector whose first "ksys" elements are the
        Chebyshev "norms" of the "ksys" systems involved.

Returns one of
        LaRcSolutionUnique
        LaRcNoFeasibleSolution
        LaRcErrBounds
        LaRcErrNullPtr
        LaRcErrAlloc
-------------------------------------------------------------------*/
#include "LA_Prototypes.h"

eLaRc LA_Strict (int m, int n, tMatrix_R c, tVector_R f, int *pKsys,
    int *pIrank, int *pIter, tVector_R r, tVector_R a, tVector_R z)
{
    tMatrix_R ct     = alloc_Matrix_R (m + 1, n);
    tMatrix_R binv   = alloc_Matrix_R (m + 1, m + 1);
    tVector_R bv     = alloc_Vector_R (m + 1);
    tVector_R v      = alloc_Vector_R (m + 1);
    tVector_R fs     = alloc_Vector_R (m + 1);
    tVector_I ibound = alloc_Vector_I (n);
    tVector_I icbas  = alloc_Vector_I (m + 1);
    tVector_I irbas  = alloc_Vector_I (m + 1);
    tVector_I ib     = alloc_Vector_I (n);
    tVector_I iv     = alloc_Vector_I (m + 1);
    tVector_I ih     = alloc_Vector_I (m + 1);

    int        i = 0, j = 0;
    int        ij = 0, ji = 0, kn = 0, kj = 0, kl = 0, km = 0,
               ld = 0, m1 = 0, ma = 0;
    int        ivo = 0, jin = 0, ijk = 0, kln = 0, iter = 0,
               iout = 0, itest = 0;
    int        ild1 = 0, klnm = 0;
    tNumber_R  d = 0.0, zz = 0.0, piv = 0.0, bignum = 0.0;
    /* Validation of data before executing the algorithm */
    eLaRc rc = LaRcSolutionUnique;
    VALIDATE_BOUNDS ((0 < m) && (m < n));
    VALIDATE_PTRS (c && f && pKsys && pIrank && pIter && r && a
        && z);
    VALIDATE_ALLOC (ct && binv && bv && v && fs && ibound && icbas
        && irbas && ib && iv && ih);

    bignum = 1./ (EPS*EPS);
    m1 = m + 1;
    ma = m;
    kn = n;
    kl = 1;
    zz = 0.0;
    *pIrank = m;
    *pIter = 0;
    *pKsys = 0;

    /* Initializing data (1) */
    LA_strict_init_1 (m, n, binv, ib, iv, irbas, v, fs, a, r, z);

    /* Loop for "ksys" */
    for (ijk = 1; ijk <= n; ijk++)
    {
        *pKsys = *pKsys + 1;

        /* Initializing data (2) */
        LA_strict_init_2 (kl, m, n, c, ct, ib, ibound, irbas);

        itest = 1;
        iout = kl - 1;
        LA_strict_part_1 (&kl, kn, &ma, m, n, c, ct, f, binv, bv, v,
            fs, ib, ih, iv, ibound, icbas, irbas, pIrank, pKsys,
            pIter, r, a, z, rc);

        /* Part 2 of the Chebyshev algorithm. Obtaining a basic
           feasible solution */
        LA_strict_part_2 (kl, m, n, ct, f, binv, bv, ib, ibound,
            icbas, &iter);

        /* Part 3 of the algorithm for the Chebyshev solution.
           Calculating the residuals and the Chebyshev norm zz */
        LA_strict_part_3 (kl, m, n, c, ct, f, binv, bv, ib, ibound,
            icbas, &zz);

        /* Calculating the optimum Chebyshev solution of the
           system at hand */
        for (ij = 1; ij <= n*n; ij++)
        {
            ivo = 0;
            jin = 0;

            /* Determine the vector that enters the basis */
            LA_strict_vent (&ivo, &jin, kl, m, n, c, ib, icbas, zz);
            if (ivo == 0)
            {
                break;
            }
            if (ivo == -1)
            {
                for (i = kl; i <= m1; i++)
                {
                    ct[i][jin] = bv[i] + bv[i] - ct[i][jin];
                }
                c[jin][m1] = zz + zz - c[jin][m1];
                f[jin] = - f[jin];
                ibound[jin] = - ibound[jin];
            }

            /* Determine the vector that leaves the basis */
            itest = 0;
            LA_strict_vleav (kl, &iout, jin, &itest, m, ct, bv);

            /* No feasible solution */
            if (itest != 1)
            {
                GOTO_CLEANUP_RC (LaRcNoFeasibleSolution);
            }

            /* A Gauss-Jordan elimination step to matrix "ct" */
            LA_strict_gauss_jordn (iout, jin, kl, m, n, ct, binv,
                bv, ib, icbas);
            *pIter = *pIter + 1;
d = c[jin][m1]; /* Updating the residuals and zz of the system at hand */ for (j = 1; j <= n; j++) { if (ib[j] != 0) { c[j][m1] = c[j][m1] - d * (ct[iout][j]); } } zz = zz - d * (bv[iout]); } kj = kl - 1; ld = 0; /* Number of zero elements in the optimum basic solution vector "bv" */ for (i = kl; i <= m1; i++) { ih[i] = 1; if (bv[i] < EPS) { ih[i] = 0; ld = ld + 1; } } /* Check for non-uniqueness of the Chebyshev solution */ if (ld != 0) { LA_strict_uniquens (&ivo, &iout, kl, &kn, &ld, m, n, c, ct, f, bv, v, ib, ih, ibound, icbas, r, zz); } /* Eliminating zz */ LA_strict_eliminat_zz (&iout, kl, m, c, binv, bv, ib, ih, ibound, icbas, pKsys, r, z, zz); for (ji = 1; ji <= n; ji++) { ild1 = 0; /* A Gauss-Jordan step to matrix binv */ LA_strict_gauss_jordn_binv (iout, kl, m, binv, iv); if (iout == m1) © 2008 by Taylor & Francis Group, LLC
Numerical Linear Approximation in C { if (ld != 0) { kln = m1 - ld; klnm = kln - 1; kn = kn + kl + ld - m1 - 1; km = m; LA_strict_permute_binv (kj, kl, &km, m, binv, bv, ih, icbas); } LA_strict_map (kl, m, f, fs, ih, icbas, r, zz); /* Calculate the solution vector "a" */ LA_strict_calcul_a(m, fs, binv, iv, icbas, irbas, a); if ((ld == 0) && (*pKsys == 1) || (rc == LaRcSolutionProbNotUnique)) { LA_strict_calcul_r_1(m, n, c, ib, ibound, r, zz); GOTO_CLEANUP_RC (LaRcSolutionUnique); } if (ld == 0) { ild1 = 1; break; } } iout = iout - 1; if (iout < kln) break; piv = 0.0; for (j = kl; j <= iout; j++) { d = binv[iout][j]; if (d < 0.0) d = -d; if (d <= piv) continue; piv = d; jin = j; } if (jin == iout) continue; for (i = kl; i <= m1; i++) { d = binv[i][jin]; binv[i][jin] = binv[i][iout]; binv[i][iout] = d; }
/* Swap two elements of an integer vector */ swap_elems_Vector_I (irbas, jin, iout); if (kj == 0) continue; for (j = 1; j <= kj; j++) { if (iv[j] == 0) continue; d = binv[jin][j]; binv[jin][j] = binv[iout][j]; binv[iout][j] = d; } } if (ild1 == 0) { for (j = 1; j <= n; j++) { if (ib[j] == 0) continue; if (ibound[j] == 1) continue; f[j] = -f[j]; } /* calculating the reduced system */ if (kl < kln) { /* Calculating the reduced system */ LA_strict_reduce_sys (kl, kln, &ma, m, n, c, f, binv, ib, iv, irbas, a); } } if ((*pKsys != 1) && (kj != 0)) { LA_strict_modify_binv (kl, kj, m, binv, v, iv); } if (ld == 0) { rc = LaRcSolutionProbNotUnique; } if (rc == LaRcSolutionProbNotUnique) { /* Calculating vector a */ LA_strict_calcul_a (m, fs, binv, iv, icbas, irbas, a); break; } if (ma != ld) { if (kl != kln) © 2008 by Taylor & Francis Group, LLC
Numerical Linear Approximation in C { LA_strict_eliminate_ll (kl, kln, &ma, ld, m, n, c, f, ib, iv, ibound, icbas, irbas, r, zz); } } /* Calculating elements of residual vector r, part (c) */ LA_strict_calcul_r_3 (kl, kln, m, c, ib, ibound, icbas, r, zz); LA_strict_calcul_r_2 (&kn, kln, m, n, c, ib, ibound, irbas, r, zz); if (kn != 0) { kl = kln; } } LA_strict_calcul_r_1 (m, n, c, ib, ibound, r, zz); rc = LaRcSolutionUnique;
CLEANUP:
    free_Matrix_R (ct, m + 1);
    free_Matrix_R (binv, m + 1);
    free_Vector_R (bv);
    free_Vector_R (v);
    free_Vector_R (fs);
    free_Vector_I (ibound);
    free_Vector_I (icbas);
    free_Vector_I (irbas);
    free_Vector_I (ib);
    free_Vector_I (iv);
    free_Vector_I (ih);
return rc; } /*------------------------------------------------------------------Initializing the data in LA_Strict() -------------------------------------------------------------------*/ void LA_strict_init_1 (int m, int n, tMatrix_R binv, tVector_I ib, tVector_I iv, tVector_I irbas, tVector_R v, tVector_R fs, © 2008 by Taylor & Francis Group, LLC
    tVector_R a, tVector_R r, tVector_R z)
{
    int i, j, m1;
m1 = m + 1; for (j = 1; j <= m1; j++) { for (i = 1; i <= m1; i++) { binv[i][j] = 0.0; } binv[j][j] = 1.0; iv[j] = 1; irbas[j] = j; v[j] = 0.0; a[j] = 0.0; fs[j] = 0.0; } for (j = 1; j <= n; j++) { r[j] = 0.0; ib[j] = 1; z[j] = -1.0; } } /*------------------------------------------------------------------Initializing data (2) in LA_Strict() -------------------------------------------------------------------*/ void LA_strict_init_2 (int kl, int m, int n, tMatrix_R c, tMatrix_R ct, tVector_I ib, tVector_I ibound, tVector_I irbas) { int i, j, k, m1; m1 = m + 1; for (j = 1; j <= n; j++) { if (ib[j] == 0) continue; ibound[j] = 1; for (i = kl; i <= m; i++) { k = irbas[i]; ct[i][j] = c[j][k]; } ct[m1][j] = 1.0; © 2008 by Taylor & Francis Group, LLC
    }
} /*------------------------------------------------------------------Part 1 in LA_Strict() -------------------------------------------------------------------*/ void LA_strict_part_1 (int *pKl, int kn, int *pMa, int m, int n, tMatrix_R c, tMatrix_R ct, tVector_R f, tMatrix_R binv, tVector_R bv, tVector_R v, tVector_R fs, tVector_I ib, tVector_I ih, tVector_I iv, tVector_I ibound, tVector_I icbas, tVector_I irbas, int *pIrank, int *pKsys, int *pIter, tVector_R r, tVector_R a, tVector_R z, eLaRc rc) { int kj, ld, m1; int iout, jin = 0; tNumber_R zz, piv; m1 = m + 1; for (iout = *pKl; iout <= m1; iout++) { if ((kn == *pMa) && iout == m1) { zz = 0.0; z[*pKsys] = 0.0; kj = *pKl - 1; ld = 0; /* Mapping some data */ LA_strict_map (*pKl, m, f, fs, ih, icbas, r, zz); /* Calculating vector a */ LA_strict_calcul_a (m, fs, binv, iv, icbas, irbas, a); if ((ld == 0 && *pKsys == 1) || (rc == LaRcSolutionProbNotUnique)) { /* Calculating the residual vector r, part (c) */ LA_strict_calcul_r_1 (m, n, c, ib, ibound, r, zz); break; } /* Modifying matrix binv */ if (ld == 0) { /* Modifying matrix binv */ © 2008 by Taylor & Francis Group, LLC
LA_strict_modify_binv (*pKl, kj, m, binv, v, iv); } } if (iout <= m1) { piv = 0.0; /* Calculate pivot element */ LA_strict_piv (iout, &jin, n, &piv, ct, ib); if (piv > EPS) { /* A Gauss-Jordan elimination step to matrix "ct" */ LA_strict_gauss_jordn (iout, jin, *pKl, m, n, ct, binv, bv, ib, icbas); *pIter = *pIter + 1; } /* Detection of rank deficiency of matrix "c" */ if (piv < EPS) { if (iout < m1) { /* Swapping process */ LA_strict_swapping (*pKl, iout, m, n, ct, binv, ib, icbas, irbas); *pIrank = *pIrank - 1; *pMa = *pMa - 1; iv[*pKl] = 0; *pKl = *pKl + 1; } if (iout == m1) { /* Rank of Coefficient matrix "c" */ LA_strict_detect_rank (*pKl, &jin, m, n, ct, f, ib, ibound, icbas); /* A Gauss-Jordan elimination step to matrix "ct" */ LA_strict_gauss_jordn (iout, jin, *pKl, m, n, ct, binv, bv, ib, icbas); *pIter = *pIter + 1; } } } } © 2008 by Taylor & Francis Group, LLC
} /*------------------------------------------------------------------Mapping some data in LA_Strict() -------------------------------------------------------------------*/ void LA_strict_map (int kl, int m, tVector_R f, tVector_R fs, tVector_I ih, tVector_I icbas, tVector_R r, tNumber_R zz) { int i, k; for (i = kl; i <= m; i++) { k = icbas[i]; r[k] = 0.0; } for (i = kl; i <= m; i++) { k = icbas[i]; fs[i] = f[k] - zz; if (ih[i] == 1) f[k] = fs[i]; } } /*------------------------------------------------------------------Calculating the pivot element in LA_Strict() -------------------------------------------------------------------*/ void LA_strict_piv (int iout, int *pJin, int n, tNumber_R *pPiv, tMatrix_R ct, tVector_I ib) { int j; tNumber_R
d;
for (j = 1; j <= n; j++) { if (ib[j] == 0) continue; d = ct[iout][j]; if (d < 0.0) d = -d; if (d <= *pPiv) continue; *pJin = j; *pPiv = d; } } /*------------------------------------------------------------------© 2008 by Taylor & Francis Group, LLC
Swapping processes in LA_Strict() -------------------------------------------------------------------*/ void LA_strict_swapping (int kl, int iout, int m, int n, tMatrix_R ct, tMatrix_R binv, tVector_I ib, tVector_I icbas, tVector_I irbas) { int i, j, m1; tNumber_R
d;
    m1 = m + 1;
    irbas[iout] = irbas[kl];
    irbas[kl] = 0;
    icbas[iout] = icbas[kl];
    icbas[kl] = 0;
for (j = 1; j <= n; j++) { if (ib[j] == 0) continue; d = ct[iout][j]; ct[iout][j] = ct[kl][j]; ct[kl][j] = d; } for (j = kl; j <= m; j++) { binv[iout][j] = binv[kl][j]; binv[kl][j] = 0.0; } for (i = kl; i <= m1; i++) { binv[i][iout] = binv[i][kl]; binv[i][kl] = 0.0; } } /*------------------------------------------------------------------Rank of matrix "c" in LA_Strict() -------------------------------------------------------------------*/ void LA_strict_detect_rank (int kl, int *pJin, int m, int n, tMatrix_R ct, tVector_R f, tVector_I ib, tVector_I ibound, tVector_I icbas) { int i, j, m1, ibc; m1 = m + 1; © 2008 by Taylor & Francis Group, LLC
Numerical Linear Approximation in C for (j = 1; j <= n; j++) { if (ib[j] == 0) continue; ibc = 0; for (i = kl; i <= m; i++) { if (j == icbas[i]) ibc = 1; } if (ibc == 0) { *pJin = j; break; } } f[*pJin] = -f[*pJin]; ibound[*pJin] = -ibound[*pJin]; for (i = kl; i <= m; i++) { ct[i][*pJin] = -ct[i][*pJin]; } ct[m1][*pJin] = 2.0 - ct[m1][*pJin];
} /*------------------------------------------------------------------Part 2 of the Chebyshev algorithm. Obtaining a basic feasible solution in LA_Strict(). -------------------------------------------------------------------*/ void LA_strict_part_2 (int kl, int m, int n, tMatrix_R ct, tVector_R f, tMatrix_R binv, tVector_R bv, tVector_I ib, tVector_I ibound, tVector_I icbas, int *pIter) { int i, k, m1, iout, jin; m1 = m + 1; for (i = kl; i <= m; i++) { if (bv[i] >= 0.0) continue; iout = i; jin = icbas[i]; f[jin] = -f[jin]; ibound[jin] = -ibound[jin]; for (k = kl; k <= m1; k++) { ct[k][jin] = bv[k] + bv[k] - ct[k][jin]; } © 2008 by Taylor & Francis Group, LLC
/* A Gauss-Jordan elimination step to matrix "ct" for Part 2 of the Chebyshev algorithm */ LA_strict_gauss_jordn (iout, jin, kl, m, n, ct, binv, bv, ib, icbas); *pIter = *pIter + 1; } } /*------------------------------------------------------------------Part 3 of the Chebyshev algorithm in LA_Strict() -------------------------------------------------------------------*/ void LA_strict_part_3 (int kl, int m, int n, tMatrix_R c, tMatrix_R ct, tVector_R f, tMatrix_R binv, tVector_R bv, tVector_I ib, tVector_I ibound, tVector_I icbas, tNumber_R *pZz) { int i, j, k, m1, icb; tNumber_R s; m1 = m + 1; for (j = 1; j <= n; j++) { if (ib[j] == 0) continue; icb = 0; for (i = kl; i <= m1; i++) { if (j == icbas[i]) icb = 1; } if (icb == 0) { s = -f[j]; for (i = kl; i <= m1; i++) { k = icbas[i]; s = s + ct[i][j] * (f[k]); } /* The residual of equation j is stored in c[j][m1] */ c[j][m1] = s; } else if (icb == 1) c[j][m1] = 0.0; } s = 0.0; for (i = kl; i <= m1; i++) { © 2008 by Taylor & Francis Group, LLC
Numerical Linear Approximation in C k = icbas[i]; s = s + bv[i] * (f[k]); } if (fabs (s) < EPS) s = 0.0; *pZz = s; if (*pZz < 0.0) { for (j = 1; j <= n; j++) { if (ib[j] == 0) continue; f[j] = -f[j]; ibound[j] = -ibound[j]; c[j][m1] = -c[j][m1]; } *pZz = -*pZz; for (j = kl; j <= m; j++) { for (i = kl; i <= m1; i++) { binv[i][j] = -binv[i][j]; } } }
} /*------------------------------------------------------------------Determine the vector that enters the basis in LA_Strict() -------------------------------------------------------------------*/ void LA_strict_vent (int *pIvo, int *pJin, int kl, int m, int n, tMatrix_R c, tVector_I ib, tVector_I icbas, tNumber_R zz) { int i, icb, j, m1; tNumber_R d, e, g, tz; m1 = m + 1; g = 1.0/ (EPS*EPS); tz = zz + zz + EPS; for (j = 1; j <= n; j++) { if (ib[j] == 0) continue; icb = 0; for (i = kl; i <= m1; i++) { if (j == icbas[i]) icb = 1; } © 2008 by Taylor & Francis Group, LLC
if (icb == 1) continue; d = c[j][m1]; if (d < -EPS) { e = d; if (e < g) { g = e; *pJin = j; *pIvo = 1; } } if (d >= tz) { e = tz - d; if (e < g) { g = e; *pJin = j; *pIvo = -1; } } } } /*------------------------------------------------------------------Determine the vector that leaves the basis in LA_Strict() -------------------------------------------------------------------*/ void LA_strict_vleav (int kl, int *pIout, int jin, int *pItest, int m, tMatrix_R ct, tVector_R bv) { int i, m1; tNumber_R d, g, thmax; m1 = m + 1; thmax = 1.0/ (EPS*EPS); for (i = kl; i <= m1; i++) { d = ct[i][jin]; if (d > EPS) { g = bv[i]/d; if (g <= thmax) { © 2008 by Taylor & Francis Group, LLC
Numerical Linear Approximation in C thmax = g; *pIout = i; *pItest = 1; } } }
} /*------------------------------------------------------------------A Gauss-Jordan elimination step to matrix "ct" in LA_Strict() -------------------------------------------------------------------*/ void LA_strict_gauss_jordn (int iout, int jin, int kl, int m, int n, tMatrix_R ct, tMatrix_R binv, tVector_R bv, tVector_I ib, tVector_I icbas) { int i, j, m1; tNumber_R d, pivot; m1 = m + 1; pivot = ct[iout][jin]; for (j = 1; j <= n; j++) { if (ib[j] == 0) continue; ct[iout][j] = ct[iout][j]/pivot; } for (j = kl; j <= m1; j++) { binv[iout][j] = binv[iout][j]/pivot; } for (i = kl; i <= m1; i++) { if (i == iout) continue; d = ct[i][jin]; for (j = 1; j <= n; j++) { if (ib[j] == 0) continue; ct[i][j] = ct[i][j] - d * (ct[iout][j]); } for (j = kl; j <= m1; j++) { binv[i][j] = binv[i][j] - d * (binv[iout][j]); } } icbas[iout] = jin; for (i = kl; i <= m1; i++) © 2008 by Taylor & Francis Group, LLC
{ bv[i] = binv[i][m1]; } } /*------------------------------------------------------------------Check for non-uniqueness of the Chebyshev solution in LA_Strict() -------------------------------------------------------------------*/ void LA_strict_uniquens (int *pIvo, int *pIout, int kl, int *pKn, int *pLd, int m, int n, tMatrix_R c, tMatrix_R ct, tVector_R f, tVector_R bv, tVector_R v, tVector_I ib, tVector_I ih, tVector_I ibound, tVector_I icbas, tVector_R r, tNumber_R zz) { int i, j, k, icb, m1; tNumber_R d, e, piv, bignum; m1 = m + 1; bignum = 1.0/ (EPS*EPS); for (j = 1; j <= n; j++) { if (ib[j] == 0) continue; icb = 0; for (i = kl; i <= m1; i++) { if (j == icbas[i]) icb = 1; } if (icb == 1) continue; d = c[j][m1]; if (d >= EPS) { e = zz + zz - d; if (e > EPS) continue; for (i = kl; i <= m1; i++) { ct[i][j] = bv[i] + bv[i] - ct[i][j]; } f[j] = -f[j]; ibound[j] = -ibound[j]; c[j][m1] = e; } piv = bignum; *pIvo = 0; for (i = kl; i <= m1; i++) { if ((bv[i] < EPS) || ct[i][j] < EPS) continue; © 2008 by Taylor & Francis Group, LLC
Numerical Linear Approximation in C d = bv[i]/ct[i][j]; if (d > piv) continue; *pIvo = 1; *pIout = i; piv = d; } if (*pIvo == 0) continue; v[*pIout] = bv[*pIout]/ct[*pIout][j]; for (i = kl; i <= m1; i++) { if (i == *pIout) continue; icb = 0; v[i] = bv[i] - v[*pIout] * (ct[i][j]); if (v[i] < -EPS) { icb = -1; break; } } if (icb == -1) continue; k = 0; for (i = kl; i <= m1; i++) { if (bv[i] > EPS) continue; if (v[i] < EPS) continue; ih[i] = 1; k = 1; } if (k == 0) continue; d = c[j][m1] - zz; if (ibound[j] == -1) d = -d; r[j] = d; ib[j] = 0; *pKn = *pKn - 1; } *pLd = 0; for (i = kl; i <= m1; i++) { if (ih[i] == 0) *pLd = *pLd + 1; }
} /*------------------------------------------------------------------Eliminating zz in LA_Strict() -------------------------------------------------------------------*/ © 2008 by Taylor & Francis Group, LLC
void LA_strict_eliminat_zz (int *pIout, int kl, int m, tMatrix_R c, tMatrix_R binv, tVector_R bv, tVector_I ib, tVector_I ih, tVector_I ibound, tVector_I icbas, int *pKsys, tVector_R r, tVector_R z, tNumber_R zz) { int i, j, k, m1; tNumber_R d, piv; m1 = m + 1; piv = 0.0; z[*pKsys] = zz; for (i = kl; i <= m1; i++) { d = bv[i]; if (d > piv) { piv = d; *pIout = i; } } k = icbas[*pIout]; d = c[k][m1] - zz; if (ibound[k] == -1) d = -d; r[k] = d; ib[k] = 0; if (*pIout != m1) { /* Swap two rows of matrix "binv" */ for (j = kl; j <= m1; j++) { d = binv[*pIout][j]; binv[*pIout][j] = binv[m1][j]; binv[m1][j] = d; } bv[*pIout] = binv[*pIout][m1]; bv[m1] = binv[m1][m1]; /* Swap two elements of vector "icbas" */ swap_elems_Vector_I (icbas, *pIout, m1); /* Swap two elements of vector "ih" */ swap_elems_Vector_I (ih, *pIout, m1); } *pIout = m1; } © 2008 by Taylor & Francis Group, LLC
/*------------------------------------------------------------------A Gauss-Jordan step to matrix "binv" in LA_Strict() -------------------------------------------------------------------*/ void LA_strict_gauss_jordn_binv (int iout, int kl, int m, tMatrix_R binv, tVector_I iv) { int i, j, m1; tNumber_R d, pivot; m1 = m + 1; pivot = binv[iout][iout]; for (j = kl; j <= iout; j++) { if (iv[j] == 0) continue; binv[iout][j] = binv[iout][j]/pivot; } for (i = kl; i <= m1; i++) { if (i == iout) continue; d = binv[i][iout]; for (j = kl; j <= iout; j++) { if (iv[j] == 0) continue; binv[i][j] = binv[i][j] - d * (binv[iout][j]); } } } /*------------------------------------------------------------------Permuting matrix "binv" in LA_Strict() -------------------------------------------------------------------*/ void LA_strict_permute_binv (int kj, int kl, int *pKm, int m, tMatrix_R binv, tVector_R bv, tVector_I ih, tVector_I icbas) { int j, l, ij, ji, m1; tNumber_R d; m1 = m + 1; l = kj; l = l + 1; for (ij = 1; ij <= m1; ij++) { if (l > *pKm) break; © 2008 by Taylor & Francis Group, LLC
for (ji = l; ji <= *pKm - 1; ji++) { if (ih[ji] == 1) l = l + 1; if (ih[ji] != 1) break; } if ((ih[*pKm] != 0) && l != *pKm) { for (j = kl; j <= m; j++) { d = binv[l][j]; binv[l][j] = binv[*pKm][j]; binv[*pKm][j] = d; } swap_elems_Vector_R (bv, l, *pKm); swap_elems_Vector_I (icbas, l, *pKm); swap_elems_Vector_I (ih, l, *pKm); } *pKm = *pKm - 1; } } /*------------------------------------------------------------------Calculating vector "a" in LA_Strict() -------------------------------------------------------------------*/ void LA_strict_calcul_a (int m, tVector_R fs, tMatrix_R binv, tVector_I iv, tVector_I icbas, tVector_I irbas, tVector_R a) { int i, j, k; tNumber_R s; for (j = 1; j <= m; j++) { if (iv[j] != 0) { s = 0.0; for (i = 1; i <= m; i++) { if (icbas[i] != 0) { s = s + fs[i] * (binv[i][j]); } } k = irbas[j]; © 2008 by Taylor & Francis Group, LLC
Numerical Linear Approximation in C a[k] = s; } }
} /*------------------------------------------------------------------Calculating the reduced system in LA_Strict() -------------------------------------------------------------------*/ void LA_strict_reduce_sys (int kl, int kln, int *pMa, int m, int n, tMatrix_R c, tVector_R f, tMatrix_R binv, tVector_I ib, tVector_I iv, tVector_I irbas, tVector_R a) { int i, j, k, l, ii; int klnm; tNumber_R d; klnm = kln - 1; if (kln > kl) { for (l = kl; l <= klnm; l++) { ii = 0; for (i = kln; i <= m; i++) { d = binv[i][l]; if (d < 0.0) d = -d; if (d < EPS) continue; ii = 1; break; } if (ii == 1) continue; k = irbas[l]; for (j = 1; j <= n; j++) { if (ib[j] == 0) continue; f[j] = f[j] - a[k] * (c[j][k]); } iv[l] = 0; irbas[l] = 0; *pMa = *pMa - 1; } } } /*------------------------------------------------------------------© 2008 by Taylor & Francis Group, LLC
Modifying matrix "binv" in LA_Strict() -------------------------------------------------------------------*/ void LA_strict_modify_binv (int kl, int kj, int m, tMatrix_R binv, tVector_R v, tVector_I iv) { int i, j, k; tNumber_R s; for (j = 1; j <= kj; j++) { if (iv[j] == 0) continue; for (i = kl; i <= m; i++) { s = 0.0; for (k = kl; k <= m; k++) { s = s + binv[i][k] * (binv[k][j]); } v[i] = s; } for (i = kl; i <= m; i++) { binv[i][j] = v[i]; } } } /*------------------------------------------------------------------Equation ll eliminates element a[k] in LA_Strict() -------------------------------------------------------------------*/ void LA_strict_eliminate_ll (int kl, int kln, int *pMa, int ld, int m, int n, tMatrix_R c, tVector_R f, tVector_I ib, tVector_I iv, tVector_I ibound, tVector_I icbas, tVector_I irbas, tVector_R r, tNumber_R zz) { int i, j, k, l, ll = 0, kb, m1, klnm; tNumber_R d, e, g, piv, pivot = 0; m1 = m + 1; klnm = kln - 1; for (i = kl; i <= klnm; i++) { if (iv[i] == 0) continue; k = irbas[i]; piv = 0.0; © 2008 by Taylor & Francis Group, LLC
Numerical Linear Approximation in C for (j = kl; j <= klnm; j++) { l = icbas[j]; if (ib[l] == 0) continue; g = c[l][k]; d = g; if (d < 0.0) d = -d; if (d <= piv) continue; piv = d; pivot = g; ll = l; } /* Equation ll is used to eliminate element a[k] */ if (piv < EPS) continue; for (j = kl; j <= m; j++) { l = irbas[j]; if (l == 0) continue; c[ll][l] = c[ll][l]/pivot; } f[ll] = f[ll]/pivot; d = c[ll][m1] - zz; if (ibound[ll] == -1) d = -d; r[ll] = d; ib[ll] = 0; for (j = 1; j <= n; j++) { if (ib[j] == 0) continue; d = c[j][k]; e = d; if (e < 0.0) e = -e; if (e < EPS) continue; for (l = 1; l <= m; l++) { kb = irbas[l]; if (kb == 0) continue; c[j][kb] = c[j][kb] - d * (c[ll][kb]); } f[j] = f[j] - d * (f[ll]); } *pMa = *pMa - 1; if (*pMa == ld) break; }
}
/*------------------------------------------------------------------Calculating elements of residual vector r, part (c) in LA_Strict() -------------------------------------------------------------------*/ void LA_strict_calcul_r_3 (int kl, int kln, int m, tMatrix_R c, tVector_I ib, tVector_I ibound, tVector_I icbas, tVector_R r, tNumber_R zz) { int i, k, m1, klnm; tNumber_R d; m1 = m + 1; klnm = kln - 1; for (i = kl; i <= klnm; i++) { k = icbas[i]; if (ib[k] == 0) continue; d = c[k][m1] - zz; if (ibound[k] == -1) d = -d; r[k] = d; ib[k] = 0; } } /*------------------------------------------------------------------Calculating elements of residual vector r, part (b) in LA_Strict() -------------------------------------------------------------------*/ void LA_strict_calcul_r_2 (int *pKn, int kln, int m, int n, tMatrix_R c, tVector_I ib, tVector_I ibound, tVector_I irbas, tVector_R r, tNumber_R zz) { int i, j, k, ii, m1; tNumber_R e, d, g; m1 = m + 1; for (j = 1; j <= n; j++) { ii = 0; if (ib[j] == 0) continue; d = c[j][m1]; if (d >= EPS) { e = zz + zz - d; if (e >= EPS) continue; } for (i = kln; i <= m; i++) © 2008 by Taylor & Francis Group, LLC
Numerical Linear Approximation in C { k = irbas[i]; g = c[j][k]; if (g < 0.0) g = -g; if (g >= EPS) { ii = 1; break; } } if (ii == 1) continue; d = d - zz; if (ibound[j] == -1) d = -d; r[j] = d; ib[j] = 0; *pKn = *pKn - 1; }
} /*------------------------------------------------------------------Calculating elements of residual vector r, part (a) in LA_Strict() -------------------------------------------------------------------*/ void LA_strict_calcul_r_1 (int m, int n, tMatrix_R c, tVector_I ib, tVector_I ibound, tVector_R r, tNumber_R zz) { int j, m1; tNumber_R d; m1 = m + 1; for (j = 1; j <= n; j++) { if (ib[j] == 0) continue; d = c[j][m1] - zz; if (ibound[j] == -1) d = -d; r[j] = d; } }
Chapter 15
Piecewise Chebyshev Approximation

15.1 Introduction
In Chapter 9, two algorithms for the piecewise linear approximation of plane curves in the L1 norm are described. In this chapter, we describe two corresponding algorithms for the piecewise linear approximation of plane curves in the Chebyshev norm. The problem of piecewise approximation of plane curves in the Chebyshev norm, including polygonal approximation, has received considerable attention. Although the number of published works on the polygonal approximation of plane curves is considerably large (see the references in Chapter 8), there are relatively few references on piecewise approximation in the Chebyshev norm.
Lawson [8] derived characteristic properties of the segmented Chebyshev approximation problem and proposed an iterative algorithm for finding these approximations. Phillips [13] presented two simple algorithms for approximating a convex function (one whose second derivative is positive): the first for the case where the Chebyshev residual (error) norm in any segment is not to exceed a pre-assigned value, and the second for the case where the number of segments is given and a balanced (equal) Chebyshev error norm solution is required. The Chebyshev residual was measured along the direction of the y-axis.
Pavlidis [9] reviewed various algorithms for piecewise linear Chebyshev segmentation and proposed a new one based on discrete optimization. His algorithm is for the near-balanced solution case. The Chebyshev residual norm is measured along the direction of the y-axis. Pavlidis and Horowitz [11] then presented another algorithm for segmenting a digitized plane curve in the Chebyshev norm. They attempted to determine the minimum number of segments in the approximation. After an arbitrary initial segmentation of the given digitized curve, segments are split and merged to drive the Chebyshev residual norm below a specified value. The Chebyshev norm for each segment is measured perpendicular to the straight line approximating the segment.
Tomek [15] described two simple heuristic algorithms, using a parallel-lines technique, for Chebyshev piecewise approximation by straight lines. One algorithm works well for smooth functions. Both algorithms are for the case where the Chebyshev residual norm for each segment is not to exceed a pre-assigned value. The Chebyshev error norm is measured along the y-axis direction. Using functional iteration, Pavlidis and Maika [12] proposed a procedure for solving the near-balanced polynomial Chebyshev approximation problem. Their method compares favorably with Lawson's algorithm [8]. Again, the Chebyshev error norm is measured along the direction of the y-axis. Kioustelidis [6] discussed the existence of segmented approximations with free knots for a given continuous function.
The majority of published methods consider only the special cases of piecewise approximation by constants (horizontal lines) or by straight lines. In this chapter, the approximating functions may be any kind of polynomials, not necessarily constants or straight lines. The Chebyshev error norm is measured along the direction of the y-axis.
Two algorithms for the piecewise linear Chebyshev approximation of plane curves are presented [1]. They are for the two cases: (a) when the Chebyshev error norm in any segment is not to exceed a pre-assigned value and (b) when the number of segments is given and a near-balanced Chebyshev error norm solution is required. The problem follows the same steps taken in Chapter 9 for the piecewise linear approximation of plane curves in the L1 norm. The problem is solved by first digitizing the given curve into discrete points. Then each of the algorithms is applied to the discrete points. Both algorithms use the discrete linear Chebyshev approximation function LA_Linf() of Chapter 10.
In Section 15.2, the characteristic properties of the piecewise approximation are outlined. In Section 15.3, the Chebyshev
approximation is described. In Section 15.4, the two Chebyshev piecewise approximation algorithms are presented. In Section 15.5, numerical results and comments are given.

15.1.1 Applications of piecewise approximation
We identify here some applications of piecewise approximation that were not presented in Chapters 8 and 9. In the field of numerical analysis, it is used in the reduction of non-linear problems to approximately equivalent linear problems. It is also used in the approximate solution of linear differential equations with time-varying parameters [14], and in the design of electronic analog and hybrid computers [7]. In the fields of image processing and pattern recognition, it is used in feature extraction, noise filtering and data compression [4, 10].

15.2 Characteristic properties of piecewise approximation
Let y = f(x) be a given plane curve defined on the interval [a, b]. Let this curve be digitized at the K points (xi, f(xi)), i = 1, 2, …, K, where x1 = a and xK = b. Let n be the number of pieces (segments) in the approximation and let (zj), j = 1, 2, …, n, be the Chebyshev error norms for the n pieces. For the first algorithm, the number of segments n is not known beforehand. Assume f(x) is continuous and satisfies a Lipschitz condition on [a, b]. Lawson [8] showed that the norm zj, 1 ≤ j ≤ n, has certain characteristic properties. These characteristics are given in Section 9.2 for the piecewise linear approximation of plane curves in the L1 norm. The same characteristics apply as well to the piecewise linear approximation in the Chebyshev norm and in the L2 or least squares norm (Chapter 18).

15.3 The discrete linear Chebyshev approximation problem
Consider any segment j, 1 ≤ j ≤ n, of the given curve f(x). Let this segment consist of N digitized points with coordinates (xi, f(xi)), i = 1, 2, …, N. Let these N points be approximated by the linear function
    L(a, x) = a1φ1(x) + … + aMφM(x)

which minimizes the Chebyshev norm zj (or briefly z) of the residuals r(xi), where z = max|r(xi)| and

(15.3.1)    r(xi) = L(a, xi) – f(xi), i = 1, 2, …, N

Here, {φj(x)}, j = 1, 2, …, M, M < N, is a given set of real linearly independent approximating functions and a is an M real vector to be calculated. For example, if the approximating function is a vertical parabola, L(a, x) = a1 + a2x + a3x2, then a1, a2 and a3 are the elements of vector a, which are to be calculated. This problem reduces to the problem of obtaining the Chebyshev solution of the overdetermined system of linear equations

    Ca = f

C = (cij) is an N by M matrix given by C = (φj(xi)), i = 1, 2, …, N and j = 1, 2, …, M, and f is the N-vector (f(xi)). The Chebyshev solution to this system is the real M-vector a that minimizes the Chebyshev norm

    z = max|ri|, i = 1, 2, …, N

where ri is the ith residual, given by (15.3.1) or by

    ri = ci1a1 + … + ciMaM – fi, i = 1, 2, …, N
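For illustration, the following fragment (a sketch only, not part of the library; the helper name build_system and the plain C arrays are ours) assembles the matrix C and the vector f for the vertical parabola above:

    /* Build the N by 3 matrix C = (phi_j(x_i)) and the right hand
       side f = (f(x_i)) for L(a,x) = a1 + a2*x + a3*x*x. */
    void build_system (int N, const double x[], const double y[],
        double C[][3], double f[])
    {
        int i;
        for (i = 0; i < N; i++)
        {
            C[i][0] = 1.0;           /* phi_1(x_i) = 1     */
            C[i][1] = x[i];          /* phi_2(x_i) = x_i   */
            C[i][2] = x[i] * x[i];   /* phi_3(x_i) = x_i^2 */
            f[i] = y[i];
        }
    }

The resulting overdetermined system is then handed to a discrete Chebyshev solver such as LA_Linf() of Chapter 10.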
Description of the algorithms Piecewise linear Chebyshev approximation with pre-assigned tolerance
This algorithm uses the same steps as those of Section 9.4.1. However, each time a point is added to segment say j, we use the function LA_Linf() of Chapter 10 to re-calculate the new Chebyshev norm of the segment.
© 2008 by Taylor & Francis Group, LLC
Chapter 15: Piecewise Chebyshev Approximation
15.4.2
521
Piecewise linear Chebyshev approximation with near-balanced Chebyshev norms
This algorithm uses the same steps as those of Section 9.4.2. Again, each time we add a point or delete a point from segment j, we use the function LA_Linf() of Chapter 10 to re-calculate the new Chebyshev norm of the segment. 15.5
Numerical results and comments
Each of the functions LA_Linfpw1() and LA_Linfpw2() calculate the number of segments in the piecewise approximation (for LA_Linfpw2(), n is given), the starting points of the n segments, the coefficients of the approximating curves for the n segments, the residuals at each point of the digitized curve and finally the Chebyshev residual norms for the n segments. LA_Linfpw1() computes for the case when the Chebyshev error norm in any segment is not to exceed a pre-assigned value. LA_Linfpw2() computes for the case when the number of segments is given and a near-balanced Chebyshev error norm solution is required. Each of these functions uses LA_Linf() [2, 3] of Chapter 10. DR_Linfpw1() and DR_Linfpw2() were used to test the two algorithms in single-precision. Recall that LA_Linfpw1() has the option of calculating connected or disconnected piecewise Chebyshev approximations, while LA_Linfpw2() can only calculate disconnected piecewise Chebyshev approximations. As explained at Section 9.2, in the connected piecewise approximation, the x-coordinate of the right end point of segment j is the x-coordinate of the starting (left) end point of segment (j + 1). In the disconnected piecewise approximation, the x-coordinate of the adjacent point to the right end point of segment j is the x-coordinate of the starting left point of segment (j + 1). The numerical results of one example are presented here. They are for the same curve used in Chapter 9 for the piecewise linear approximation in the L1 norm, and also for the L2 norm of Chapter 18. The given curve is digitized with equal x-intervals into 28 points and the data points are fitted with vertical parabolas, each is of the form y = a1 + a2x + a3x2.
© 2008 by Taylor & Francis Group, LLC
522
Numerical Linear Approximation in C
The results of the first algorithm are shown in Figures 15-1 and 15-2. The results of the second algorithm are shown in Figure 15-3, where the number of segments was set to n = 4. 18 16 14 12 10 8 6 4 2 0 0
5
10
15
20
25
30
Figure 15-1: Disconnected linear Chebyshev piecewise approximation with vertical parabolas. Chebyshev residual norm in any segment ≤ 1.3
18 16 14 12 10 8 6 4 2 0 0
5
10
15
20
25
30
Figure 15-2: Connected linear Chebyshev piecewise approximation with vertical parabolas. Chebyshev residual norm in any segment ≤ 1.3 © 2008 by Taylor & Francis Group, LLC
Chapter 15: Piecewise Chebyshev Approximation
523
18 16 14 12 10 8 6 4 2 0 0
5
10
15
20
25
30
Figure 15-3: Near-balanced residual norm solution. Disconnected linear Chebyshev piecewise approximation with vertical parabolas. Number of segments = 4
The Chebyshev norms of Figures 15-1 and 15-2 are (0.438, 0.981, 1.238, 0.937) and n = 4, and (0.438, 0.981, 1.203, 1.285, 0.842) and n = 5, respectively. The Chebyshev norms of Figure 15-3 are (1.553, 0.920, 1.050, 0.937), for n = 4. We observe that the second algorithm did not produce a balanced norm solution. However, by comparing Figures 15-1 and 15-3, we see that it gives an improved solution to that of the first algorithm. In a previous version of our algorithms [1], we used parametric linear programming techniques [5] and made use of the equioscillation property of the Chebyshev approximation. This technique increases the code complexity but reduces the number of iterations.
References 1.
Abdelmalek, N.N., Chebyshev solution of overdetermined systems of linear equations, BIT, 15(1975)117-129.
© 2008 by Taylor & Francis Group, LLC
524
2.
3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15.
Numerical Linear Approximation in C
Abdelmalek, N.N., A computer program for the Chebyshev solution of overdetermined systems of linear equations, International Journal for Numerical Methods in Engineering, 10(1976)1197-1202. Abdelmalek, N.N., Piecewise linear Chebyshev approximation of planar curves, International Journal of Systems Science, 14(1983)425-435. Davison, L.D., Data compression using straight line interpolation, IEEE Transactions on Information Theory, IT-14(1968)390-394. Hadley, G., Linear Programming, Addison-Wesley, Reading, MA, 1962. Kioustelidis, J.B., Optical segmented approximations, Computing, 24(1980)1-8. Korn, G.A. and Korn, T.M., Electronic Analog and Hybrid Computers, McGraw-Hill, New York, 1964. Lawson, C.L., Characteristic properties of the segmented rational minimax approximation problem, Numerische Mathematik, 6(1964)293-301. Pavlidis, T., Waveform segmentation through functional approximation, IEEE Transactions on Computers, 22(1973)689697. Pavlidis, T., The use of algorithms of piecewise approximations for picture processing applications, ACM Transactions on Mathematical Software, 2(1976)305-321. Pavlidis, T. and Horowitz, S.L., Segmentation of plane curves, IEEE Transactions on Computers, 23(1974)860-870. Pavlidis, T. and Maika, A.P., Uniform piecewise polynomial approximation with variable joints, Journal of Approximation Theory, 12(1974)61-69. Phillips, G.M., Algorithms for piecewise straight line approximation, Computer Journal, 11(1968)211-212. Solodov, A.V., Linear Automatic Control Systems with Varying Parameters, Blackie, London, 1966. Tomek, I., Two algorithms for piecewise-linear continuous approximation of functions of one variable, IEEE Transactions on Computers, 23(1974)445-448.
© 2008 by Taylor & Francis Group, LLC
Chapter 15: DR_Linfpw1
15.6
525
DR_Linfpw1
/*------------------------------------------------------------------DR_Linfpw1 --------------------------------------------------------------------This is a driver for the function LA_Linfpw1(), which calculates a linear piecewise Chebyshev approximation of a given data point set {x,y} that results from discretizing a given plane curve y = f(x). The points of the set might not be equally spaced. The approximation by LA_Linfpw1() is such that the Chebyshev norm for any segment is not to exceed a pre-assigned tolerance donated by "enorm". LA_Linfpw1() calculates the connected or the disconnected piecewise linear Chebyshev approximation according to the value of an integer parameter denoted by "konect" set by the user. See comments in LA_Linfpw1(). From the approximating curve we form the overdetermined system of linear equations c*a = f "c" is a real n by m matrix of rank k, k <= m < n. n is the number of digitized points of the given plane curve. m is the number of terms in the approximating curves. If for example, the approximating curves are vertical parabolas of the form y = a1 + a2*x + a3*x*x then m = 3. "f" is a real n vector whose elements are the y coordinates of the data set {x,y}. "a" is the solution m vector. There are different "a" solution vectors for the different segments. This driver contains 1 test example. A given curve is digitized into 28 points at equal x intervals. The points are piecewise approximated by vertical parabolas of the form y = a1 + a2*x + a3*x*x
© 2008 by Taylor & Francis Group, LLC
526
Numerical Linear Approximation in C
The results for the disconnected and of the connected piecewise Chebyshev approximation are given in the text. -------------------------------------------------------------------*/ #include "DR_Defs.h" #include "LA_Prototypes.h" #define Nc
28
void DR_Linfpw1 (void) { /*---------------------------------------Constant matrices/vectors ----------------------------------------*/ static tNumber_R fc[Nc+1] = { NIL, 8.0, 11.0, 13.0, 11.2, 9.1, 10.8, 14.8, 16.0, 15.1, 14.0, 14.7, 15.8, 16.8, 15.6, 13.0, 14.3, 13.8, 10.6, 9.3, 9.6, 10.8, 11.2, 9.0, 7.0, 5.8, 6.8, 4.8, 3.9 }; /*---------------------------------------Variable matrices/vectors ----------------------------------------*/ tMatrix_R ct = alloc_Matrix_R (MM_COLS, NN_ROWS); tMatrix_R rp1 = alloc_Matrix_R (KK_PIECES, NN_ROWS); tMatrix_R ap = alloc_Matrix_R (KK_PIECES, MM_COLS); tVector_R f = alloc_Vector_R (NN_ROWS); tVector_R zp = alloc_Vector_R (KK_PIECES); tVector_I irankp = alloc_Vector_I (KK_PIECES); tVector_I ixl = alloc_Vector_I (KK_PIECES); int int tNumber_R eLaRc
j, k, m, n; konect, npiece; enorm; prnRc;
eLaRc
rc = LaRcOk;
prn_dr_bnr ("DR_Linfpw1, Chebyshev Piecewise Approximation of " "a Plane with Pre-assigned Norm"); for (k = 1; k <= 2; k++) { © 2008 by Taylor & Francis Group, LLC
Chapter 15: DR_Linfpw1 switch (k) { case 1: enorm = 1.3; n = Nc; m = 3; for (j = 1; j <= n; j++) { f[j] = fc[j]; ct[1][j] = 1.0; ct[2][j] = j; ct[3][j] = j*j; } break; default: break; } prn_algo_bnr ("Linfpw1"); if (k == 1) konect = 0; if (k == 2) konect = 1; prn_example_delim(); PRN ("konect #%d: Size of matrix \"c\" %d by %d\n", konect, n, m); PRN ("Chebyshev Piecewise Approximation with " "Pre-assigned Norm\n"); PRN ("Pre-assigned Norm \"enorm\" = %8.4f\n", enorm); prn_example_delim(); if (konect == 1) PRN ("Connected Piecewise Approximation\n"); else PRN ("Disconnected Piecewise Approximation\n"); prn_example_delim(); PRN ("r.h.s. Vector \"f\"\n"); prn_Vector_R (f, n); PRN ("Transpose of Coefficient Matrix, \"ct\"\n"); prn_Matrix_R (ct, m, n); rc = LA_Linfpw1 (m, n, enorm, konect, ct, f, ixl, irankp, rp1, ap, zp, &npiece); if (rc >= LaRcOk) { PRN ("\n"); PRN ("Results of the Chebyshev Piecewise " © 2008 by Taylor & Francis Group, LLC
527
528
Numerical Linear Approximation in C "Approximation\n"); PRN ("Calculated number of segments (pieces) = %d\n", npiece); PRN ("Staring points of the \"npiece\" segments\n"); prn_Vector_I (ixl, npiece); PRN ("Coefficients of the approximating curves\n"); prn_Matrix_R (ap, npiece, m); PRN ("Residual vectors for the \"npiece\" segments\n"); prnRc = LA_pw1_prn_rp1 (konect, npiece, n, ixl, rp1); PRN ("Chebyshev residuals for the \"npiece\" " "segments\n"); prn_Vector_R (zp, npiece); if (prnRc < LaRcOk) { PRN ("Error printing PW1 results\n"); } } prn_la_rc (rc); } free_Matrix_R free_Matrix_R free_Matrix_R free_Vector_R free_Vector_R free_Vector_I free_Vector_I
(ct, MM_COLS); (rp1, KK_PIECES); (ap, KK_PIECES); (f); (zp); (irankp); (ixl);
}
15.7 LA_Linfpw1
/*------------------------------------------------------------------LA_Linfpw1 --------------------------------------------------------------------This program calculates a linear piecewise Chebyshev (L-infinity) approximation to a discrete point set {x,y}. The approximation is such that the Chebyshev residual (error) norm for any segment is not to exceed a given tolerance "enorm". The number of segments (pieces) is not known before hand. Given is a set of points {x,y}. From the approximating functions (curves) one forms the overdetermined system of linear equations c*a = f This program uses program LA_Linf() for obtaining the Chebyshev solution of overdetermined system of linear equations. LA_Linfpw1() has the option of calculating connected or disconnected piecewise Chebyshev approximation, according to the value of an integer parameter "konect". In the connected piecewise approximation the x-coordinate of the end point of segment j say, is the x-coordinate of the starting point of segment (j+1). In the disconnected piecewise approximation, the x-coordinate of the adjacent point to the end point of segment j is the x-coordinate of the starting point of segment (j+1). See the comments on "ixl" below. Inputs m Number of terms in the approximating functions. n Number of points to be piecewise approximated. ct A real (m+1) by n matrix. Its forst m rows and its n columns contain the transpose of matrix "c" of the system c*a = f. The (m + 1)th row of "ct" is a row of ones. Matrix "ct" is not destroyed in the computation. f An real n vector containing the r.h.s. of the system c*a = f. This vector contains the y-coordinates of the given point set. This vector is not destroyed in the computation. enorm A real pre-assigned parameter, such that the Chebyshev residual norm for any segment is <= enorm. konect An integer specifying the action to be performed.
If konect = 1, the program calculates the connected piecewise
Chebyshev approximation.
If konect != 1, the program calculates the disconnected piecewise
Chebyshev approximation.
Outputs npiece Obtained number of segments or pieces of the approximation. ixl An integer "npiece" vector containing the indices of the first elements of the "npiece" segments. For example, if ixl = (1,5,12,22,...), and if konect = 1, then the first segment contains points 1 to 5, the second segment contains points 5 to 12, the third segment contains points 12 to 22 and so on. Again if ixl = (1,5,12,22,...), and if konect !=1, then the first segment contains points 1 to 4, the second segment contains points 5 to 11, the third segment contains points 12 to 21 ..., etc. ap A real "npiece" by m matrix. Its first row contains the the coefficients of the approximating curve for the first segment. The second row contains the coefficients of the approximating curve for the second segment and so on. If any row j is all zeros, this indicates that vector "a" of segment j is not calculated as the number of points of segment j is <= m and there is a perfect fit by the approximating curve for segment j. rp1 A real "npiece" by n matrix. Its first row contains the residuals for the points of the first segment. Its second row contains the residuals for the points of the second segment, and so on. zp A "npiece" real vector containing the "npiece" optimum values of the Chebyshev residual norms for the "npiece" segments. If zp[j] == 0.0, it indicates that there is a perfect fit for segment j. Returns one of LaRcSolutionFound LaRcErrBounds LaRcErrNullPtr LaRcErrAlloc -------------------------------------------------------------------*/ #include "LA_Prototypes.h" eLaRc LA_Linfpw1 (int m, int n, tNumber_R enorm, int konect, tMatrix_R ct, tVector_R f, tVector_I ixl, tVector_I irankp, © 2008 by Taylor & Francis Group, LLC
tMatrix_R rp1, tMatrix_R ap, tVector_R zp, int *pNpiece) { tMatrix_R tVector_R tVector_R tVector_R
ctp fp r a
int
i = 0, j = 0, je = 0, ji = 0, jj = 0, is = 0, ie = 0, m1 = 0, nu = 0, irank = 0; ijk = 0, iter = 0; z = 0.0;
int tNumber_R
= = = =
alloc_Matrix_R alloc_Vector_R alloc_Vector_R alloc_Vector_R
(m + 1, n); (n); (n); (m + 1);
/* Validation of data before executing algorithm */ eLaRc rc = LaRcSolutionFound; VALIDATE_BOUNDS ((0 < m) && (m < n) && (0.0 < enorm)); VALIDATE_PTRS (ct && f && ixl && irankp && rp1 && ap && zp && pNpiece); VALIDATE_ALLOC (ctp && fp && r && a); m1 = m + 1; *pNpiece = 1; /* "is" means i(start) for the segment at hand */ is = 1; for (ijk = 1; ijk <= n; ijk++) { /* Initializing the data for "npiece". is means i (start) ie means i (end) for the segment at hand */ ie = is + m1 - 1; LA_pw1_init (pNpiece, is, ie, m, rp1, ap, zp); if (ie > n) { GOTO_CLEANUP_RC (LaRcSolutionFound); } irank = m; z = 0; for (j = is; { ji = j fp[ji] = for (i = { © 2008 by Taylor & Francis Group, LLC
j <= ie; j++) is + 1; f[j]; 1; i <= m; i++)
532
Numerical Linear Approximation in C ctp[i][ji] = ct[i][j]; } } for (jj = 1; jj <= n; jj++) { nu = ie - is + 1; rc = LA_Linf (m, nu, ctp, fp, &irank, &iter, r, a, &z); if (rc < LaRcOk) { GOTO_CLEANUP_RC (rc); } if (z > enorm + EPS) break; else { LA_pw1_map (m, nu, r, a, z, rp1, ap, zp, pNpiece); ixl[*pNpiece] = is; je = ie + 1; if (je > n) { GOTO_CLEANUP_RC (LaRcSolutionFound); } ie = je; nu = ie - is + 1; if (nu < m1) { GOTO_CLEANUP_RC (LaRcSolutionFound); } for (j = is; j <= ie; j++) { ji = j - is + 1; fp[ji] = f[j]; for (i = 1; i <= m; i++) { ctp[i][ji] = ct[i][j]; } } } } is = ie; if (konect == 1) is = ie - 1; *pNpiece = *pNpiece + 1; ixl[*pNpiece] = is; }
© 2008 by Taylor & Francis Group, LLC
Chapter 15: LA_Linfpw1
CLEANUP: free_Matrix_R free_Vector_R free_Vector_R free_Vector_R
(ctp, m + 1); (fp); (r); (a);
return rc; }
© 2008 by Taylor & Francis Group, LLC
533
534
15.8
Numerical Linear Approximation in C
DR_Linfpw2
/*------------------------------------------------------------------DR_Linfpw2 --------------------------------------------------------------------This is a driver for the function LA_Linfpw2() which calculates the "near balanced" piecewise linear Chebyshev approximation of a given data point set {x,y} resulting from the discretization of a plane curve y=f(x). Given is an integer number "npiece" which is the number of segments in the approximation. The approximation by LA_Linfpw2() is such that the Chebyshev residual norms for all segments are nearly equal, hence the name "near balanced" piecewise approximation. From the approximating curves we form the overdetermined system of linear equations c*a = f "c" is a real n by m matrix of rank k, k <= m < n. n is the number of digitized points of the given plane curve. m is the number of terms in the approximating curves. If for example, the piecewise approximating curves are vertical parabolas of the form y = a1 + a2*x + a3*x*x then m = 3. "f" is a real n vector whose elements are the y coordinates of the data set {x,y}. "a" is the solution m vector. There are different "a" solution vectors for the different segments. This driver contains 1 test example. A given curve is digitized into 28 points at equal x intervals. The points are piecewise approximated by vertical parabolas of the form y = a1 + a2*x + a3*x*x The results for piecewise Chebyshev approximation are given in the text.
© 2008 by Taylor & Francis Group, LLC
Chapter 15: DR_Linfpw2
535
-------------------------------------------------------------------*/ #include "DR_Defs.h" #include "LA_Prototypes.h" #define Nc
28
void DR_Linfpw2 (void) { /*---------------------------------------Constant matrices/vectors ----------------------------------------*/ static tNumber_R fc[Nc+1] = { NIL, 8.0, 11.0, 13.0, 11.2, 9.1, 10.8, 14.8, 16.0, 15.1, 14.0, 14.7, 15.8, 16.8, 15.6, 13.0, 14.3, 13.8, 10.6, 9.3, 9.6, 10.8, 11.2, 9.0, 7.0, 5.8, 6.8, 4.8, 3.9 }; /*---------------------------------------Variable matrices/vectors ----------------------------------------*/ tMatrix_R ct = alloc_Matrix_R (MM_COLS, NN_ROWS); tVector_R f = alloc_Vector_R (NN_ROWS); tVector_R rp2 = alloc_Vector_R (NN_ROWS); tMatrix_R ap = alloc_Matrix_R (KK_PIECES, MM_COLS); tVector_R zp = alloc_Vector_R (KK_PIECES); tVector_I ixl = alloc_Vector_I (KK_PIECES); tVector_R w = alloc_Vector_R (NN_ROWS); int int eLaRc
j, m, n; Iexmpl, npiece; prnRc;
eLaRc
rc = LaRcOk;
prn_dr_bnr ("DR_Linfpw2, Chebyshev Piecewise Approximation of " "a Plane Curve with Near Equal Residual Norms"); for (Iexmpl = 1; Iexmpl <= 1; Iexmpl++) { switch (Iexmpl) { case 1: npiece = 4; © 2008 by Taylor & Francis Group, LLC
536
Numerical Linear Approximation in C n = Nc; m = 3; for (j = 1; j <= n; j++) { f[j] = fc[j]; ct[1][j] = 1.0; ct[2][j] = j; ct[3][j] = j*j; } break; default: break; } prn_algo_bnr ("Linfpw2"); prn_example_delim(); PRN ("Size of matrix \"c\" %d by %d\n", n, m); prn_example_delim(); PRN ("Chebyshev Piecewise Approximation " "with Near Equal Norms\n"); PRN ("Given number of segments (pieces) = %d\n", npiece); prn_example_delim(); PRN ("r.h.s. Vector \"f\"\n"); prn_Vector_R (f, n); PRN ("Transpose of Coefficient Matrix, \"ct\"\n"); prn_Matrix_R (ct, m, n); rc = LA_Linfpw2 (m, n, npiece, ct, f, ap, rp2, zp, ixl); if (rc >= LaRcOk) { PRN ("\n"); PRN ("Results of the Chebyshev Piecewise " "Approximation\n"); PRN ("Starting points of the \"npiece\" segments\n"); prn_Vector_I (ixl, npiece); PRN ("Chebyshev residual norms for the" " \"npiece\" segments\n"); prn_Vector_R (zp, npiece); PRN ("Coefficients of the \"npiece\" approximating " " curves\n"); prn_Matrix_R (ap, npiece, m); PRN ("Residuals at the given points\n"); prnRc = LA_pw2_prn_rp2 (npiece, n, ixl, rp2); if (prnRc < LaRcOk)
© 2008 by Taylor & Francis Group, LLC
Chapter 15: DR_Linfpw2 { PRN ("Error printing PW2 results: "); } } prn_la_rc (rc); } free_Matrix_R free_Vector_R free_Vector_R free_Matrix_R free_Vector_R free_Vector_I free_Vector_R
(ct, MM_COLS); (f); (rp2); (ap, KK_PIECES); (zp); (ixl); (w);
}
© 2008 by Taylor & Francis Group, LLC
537
538
15.9
Numerical Linear Approximation in C
LA_Linfpw2
/*------------------------------------------------------------------LA_Linfpw2 --------------------------------------------------------------------This program calculates the "near balanced" piecewise linear Chebyshev approximation of a given data point set {x,y} resulting from the discretization of a plane curve y = f(x). Given is an integer number "npiece" which is the number of segments in the approximation. The approximation by LA_Linfpw2() is such that the Chebyshev residual norms for all segments are nearly equal, hence the name "near balanced" piecewise approximation. From the approximating functions (curves) one forms the overdetermined system of linear equations c*a = f Inputs npiece m n ct
f
Given umber of segments (pieces) of the approximation. Number of terms in the approximating functions. Number of points to be piecewise approximated A real (m+1) by n matrix. Its first m rows and its n columns contain the transpose of matrix "c" of the system c*a = f. The (m+1)th row of "ct" is a row of ones. Matrix "ct" is not destroyed in the computation. A real n vector containing the r.h.s. of the system c*a = f. This vector contains the y-coordinates of the given point set. This vector is not destroyed in the computation.
Outputs ixl An integer "npiece" vector containing the indices of the first elements of the "npiece" segments. For example, if ixl = (1,5,12,22,...), then the first segment contains points 1 to 4, the second segment contains points 5 to 11, the third segment contains points 12 to 21 ..., etc. ap A real "npiece" by m matrix. Its first row contains the coefficients of the approximating curve for the first segment. The second row contains the coefficients of the approximating curve for the second segment and so on.
© 2008 by Taylor & Francis Group, LLC
Chapter 15: LA_Linfpw2 rp2
539
A real n vector containing the residual values of the n points of the given set {x,y}. A real npiece vector containing the optimum Chebyshev residual norms for the "npiece" segments.
zp
Returns one of LaRcSolutionFound LaRcErrBounds LaRcErrNullPtr LaRcErrAlloc -------------------------------------------------------------------*/ #include "LA_Prototypes.h" eLaRc LA_Linfpw2 (int m, int n, int npiece, tMatrix_R ct, tVector_R f, tMatrix_R ap, tVector_R rp2, tVector_R zp, tVector_I ixl) { tVector_R al = alloc_Vector_R (m + 1); tVector_R rl = alloc_Vector_R (n); tVector_R bv = alloc_Vector_R (m + 1); tMatrix_R binv = alloc_Matrix_R (m + 1, m + 1); tVector_I icbas = alloc_Vector_I (m + 1); tVector_I irbas = alloc_Vector_I (m + 1); tVector_I ibound = alloc_Vector_I (n); tVector_R ar = alloc_Vector_R (m + 1); tVector_R rr = alloc_Vector_R (n); tMatrix_R ctp = alloc_Matrix_R (m + 1, n); tVector_R fp = alloc_Vector_R (n); tVector_I iflag = alloc_Vector_I (npiece); int int int int tNumber_R
i = 0, j = 0, k = 0, is = 0, ie = 0, ji = 0, nu = 0, kl = 0, klp1 = 0, ijk = 0; icb = 0, isl = 0, isr = 0, iel = 0, ier = 0, ieln = 0, isrn = 0; istart = 0, iend = 0, ipcp1 = 0, iterl = 0, iterr = 0; irankl = 0, irankr = 0; con = 0.0, conn = 0.0, zl = 0.0, zln = 0.0, zrn = 0.0;
/* Validation of the data before executing the algorithm */ eLaRc rc = LaRcSolutionFound; VALIDATE_BOUNDS ((0 < m) && (m < n) && (1 < npiece)); VALIDATE_PTRS (ct && f && ap && rp2 && zp && ixl); © 2008 by Taylor & Francis Group, LLC
540
Numerical Linear Approximation in C VALIDATE_ALLOC (al && rl && bv && binv && icbas && irbas && ibound && ar && rr && ctp && fp && iflag); for (k = 1; k <= npiece; k++) { iflag[k] = 1; /* Initializing LA_Linfpw2 */ LA_pw2_init (k, npiece, m, n, &is, &ie, ct, f, ctp, fp, ixl); /* Calculating the Chebyshev solution of the "npiece" segment */ nu = ie - is + 1; rc = LA_Linf (m, nu, ctp, fp, &irankl, &iterl, rl, al, &zl); if (rc < LaRcOk) { GOTO_CLEANUP_RC (rc); } zp[k] = zl; /* Mapping initial data for the "npiece" segments */ for (i = 1; i <= m; i++) { ap[k][i] = al[i]; } for (j = is; j <= ie; j++) { ji = j - is + 1; rp2[j] = rl[ji]; } } /* Process of balancing the Chebyshev norms */ istart = 1; iend = npiece - 1; ipcp1 = npiece + 1; ixl[ipcp1] = n + 1; for (ijk = 1; ijk < n*n; ijk++) { for (kl = istart; kl <= iend; kl=kl+2) { klp1 = kl + 1;
© 2008 by Taylor & Francis Group, LLC
Chapter 15: LA_Linfpw2 con = fabs (zp[klp1] - zp[kl]); isl = ixl[kl]; isr = ixl[klp1]; iel = isr - 1; if (kl != iend) ier = ixl[kl + 2] - 1; if (kl == iend) ier = n; /* The case where : -----z[i]
541
542
Numerical Linear Approximation in C GOTO_CLEANUP_RC (rc); } conn = fabs (zrn - zln); iflag[kl] = 1; iflag[klp1] = 1; if (conn > con) { iflag[kl] = 0; iflag[klp1] = 0; continue; } } /* The case where : -----z[i]>z[i+1] */ else if (zp[kl] > zp[klp1]) { isrn = isr - 1; ieln = isrn - 1; nu = ieln - isl + 1; for (j = isl; j <= ieln; j++) { ji = j - isl + 1; fp[ji] = f[j]; for (i = 1; i <= m; i++) { ctp[i][ji] = ct[i][j]; } } rc = LA_Linf (m, nu, ctp, fp, &irankl, &iterl, rl, al, &zln); if (rc < LaRcOk) { GOTO_CLEANUP_RC (rc); } nu = ier - isrn + 1; for (j = isrn; j <= ier; j++) { ji = j - isrn + 1; fp[ji] = f[j]; for (i = 1; i <= m; i++) { ctp[i][ji] = ct[i][j];
© 2008 by Taylor & Francis Group, LLC
Chapter 15: LA_Linfpw2 } } rc = LA_Linf (m, nu, ctp, fp, &irankr, &iterr, rr, ar, &zrn); if (rc < LaRcOk) { GOTO_CLEANUP_RC (rc); } conn = fabs (zrn - zln); iflag[kl] = 1; iflag[klp1] = 1; if (conn > con) { iflag[kl] = 0; iflag[klp1] = 0; continue; } } for (j = isl; j <= ieln; j++) { ji = j - isl + 1; rp2[j] = rl[ji]; } for (j = isrn; j <= ier; j++) { ji = j - isrn + 1; rp2[j] = rr[ji]; } for (i = 1; i <= m; i++) { ap[kl][i] = al[i]; ap[klp1][i] = ar[i]; } zp[kl] = zln; zp[klp1] = zrn; ixl[klp1] = isrn; isr = isrn; iel = isr - 1; } is = 2; if (istart == 2) is = 1; istart = is; © 2008 by Taylor & Francis Group, LLC
543
544
Numerical Linear Approximation in C icb = 0; for (j = 1; j <= npiece; j++) { if (iflag[j] != 0) icb = 1; } if (icb == 0) { GOTO_CLEANUP_RC (LaRcSolutionFound); } }
CLEANUP: free_Vector_R free_Vector_R free_Vector_R free_Matrix_R
(al); (rl); (bv); (binv, m + 1);
free_Vector_I (icbas); free_Vector_I (irbas); free_Vector_I (ibound); free_Vector_R (ar); free_Vector_R (rr); free_Matrix_R (ctp, m + 1); free_Vector_R (fp); free_Vector_I (iflag); return rc; }
© 2008 by Taylor & Francis Group, LLC
545
Chapter 16 Solution of Linear Inequalities
16.1
Introduction
Until now, we have dealt with different solutions of overdetermined systems of linear equations, in the L1 and in the Chebyshev norms. In this chapter, we deal with the solution of overdetermined systems of linear inequalities of the form (16.1.1)
Ca > 0
C is a given real N by M matrix, and the 0 is a zero N-vector, N > M. It is required to calculate the M-vector a that satisfies this inequality. Solution of linear systems of inequalities such as (16.1.1) has interesting applications for pattern classification problems, as described in detail in Section 16.2. In short we will show in Section 16.4 that the solution of system (16.1.1) is none other than the one-sided solution of a system of linear equations of the form Ca = f, where f is a strictly positive vector. As early as 1952, Hoffman [20] considered the solution of the consistent system of linear inequalities (using our notation) (16.1.2)
Ca ≤ f
By consistent, it was meant that a solution exists that satisfies the system of inequalities. He gave a quantitative formulation of the assertion that if vector a almost satisfies (16.1.2), then a is close to a solution. Over forty years later, Guler et al. [15] considered the result of Hoffman and obtained a particularly simple proof of Hoffman’s existence theorem. They also obtained a new representation for the corresponding Lipschitz bound in that theorem and provided
© 2008 by Taylor & Francis Group, LLC
546
Numerical Linear Approximation in C
geometric representation of these bounds. In 1954, both Agmon [5] and Motzkin and Schoenberg [24] discussed an iterative method called the relaxation-projection method for solving the consistent system (16.1.3)
Ca ≥ f
Associating these inequalities to half-spaces, in which a point corresponding to a feasible solution lies, they proved that such a point can be reached from some arbitrary outside point. Forty-three years later, Labonte [21] examined the implementation of the relaxation projection method to solve sets of linear inequalities using artificial neural networks. He described the different versions of this method and reported on tests he ran with simulated realizations of neural networks. Goffin [14] studied the rate of conversion of the relaxation method by Agmon [5] and Motzkin and Schoenberg [24] and the possible finiteness of the method. Censor and Elfving [8] considered the iterative method of Cimmino for solving linear equations and generalized it to solve a general system of linear inequalities of the form Ca ≤ f. They also showed how to modify a Richardson-type iterative least-squares algorithm for computing a solution for the linear inequalities. De Pierro and Iusem [11] proved the convergence of that method starting from any point, both for consistent and inconsistent systems. The convergence is to a feasible solution in the first case and to a weighted least squares type solution in the second case. He [18] presented a new method based on an iterative contraction technique to a convex minimization problem. A system of linear inequalities may be translated to an equivalent unconstrained smooth convex minimization problem to which the contraction method is applied. None of the aforementioned authors presented an algorithm for solving the inequalities (16.1.2) or (16.1.3). Interesting enough, a solution to these equations is none other than a solution to the one-sided overdetermined system of equations of the form Ca = f from above and from below respectively. We shall introduce that in Section 16.3. Nagaraja and Krishna [25] developed an algorithm for solving the system of linear inequalities Ca > 0 using the method of conjugate
© 2008 by Taylor & Francis Group, LLC
Chapter 16: Solution of Linear Inequalities
547
gradients for function minimization. They showed that the algorithm converges to a solution in a finite number of steps for both consistent and inconsistent cases. They stated that their algorithm converges faster than that of Ho and Kashyap [19] and the accelerated relaxation algorithms (which will be discussed shortly). Nie and Xu [26] dealt with the inequality system Ca ≥ f. If it is an inconsistent system, i.e., having an infeasible solution, they determine which inequality in the system causes the inconsistency, and correct its right hand side. They use the isometric plane method of linear programming. Cohen and Megiddo [10] considered the linear systems of inequalities Ca ≤ f, where each inequality involves, at-most, two out of the M elements (using our notation), i.e., each row of matrix C contains, at-most, two nonzero elements. They state that their algorithm is faster than previous algorithms. Faigle et al. in their book ([13] Section 2.4) attempted to solve the linear system of inequalities Ca ≤ f, by eliminating one variable after the other, until a solution is obtained, or decide that no solution is feasible. A solution, if it exists, is obtained by back substitution. However, in the elementary row operations, only multiples with strictly positive scalars are allowed. This is because multiplication of an inequality by a negative scalar reverses the inequality sign. They use the Fourier-Motzkin elimination method, which can be viewed as Gauss elimination with respect to the set of non-negative scalars. As a result, the Fourier-Motzkin elimination may considerably increase the number of inequalities in every elimination step. Han [17] described an algorithm for solving the system of inequalities Ca ≤ f using a least squares solution technique. The algorithm employs a singular value decomposition to sub-matrices of matrix C. Later, Bramley and Winnicka [7] improved over Han’s algorithm. They used the computationally efficient QR factorization instead, which allows updating and downdating of matrix R. As a result, their algorithm is much faster than that of Han [17]. For the QR factorization, see Chapter 17. For the updating and downdating techniques see Stewart [30] and for an algorithm using updating and downdating techniques, see Abdelmalek [4]. Ho and Kashyap [19] also used a least squares solution technique for solving the system Ca > 0. They considered the following
© 2008 by Taylor & Francis Group, LLC
548
Numerical Linear Approximation in C
equivalent problem instead. It is required to calculate the M weight vector a and an N-vector f such that Ca = f, f > 0
(16.1.4)
The residual vector of (16.1.4) is (16.1.5)
r = Ca – f
Starting from a guess vector f = f(0) > 0, they used an iterative least squares minimization method, where the residual vector (16.1.5) is minimized in the Euclidean norm. In each iteration the r.h.s. vector f is changed to f + δf and a new least squares solution vector is calculated. If there is a solution to the system Ca > 0, the algorithm converges in a finite number of steps, due to the fact that in each iteration both vectors a and f change. Ho-Kashyap algorithm needs the calculation of the pseudo-inverse C+ of matrix C. Abdelmalek [2] presented an algorithm analogous to that of Ho and Kashyap, except that the residual vector (16.1.5) is minimized in the Chebyshev norm rather than in the least squares sense. In the iterations of this method, parametric linear programming techniques [16] were used with the Chebyshev approximation algorithm. This resulted in a speed improvement to this method, and an algorithm that converges faster than that of Ho and Kashyap [19]. In this method, matrix C need not be a full rank matrix. Pinar and Chen [27] presented an algorithm that calculated the L1 solution for the system Ca = f instead, where f is an n-vector each element of which is 1. In their method, the L1 norm minimization problem is approximated by a piecewise quadratic smooth function. Bahi and Sreedharan [6] presented an algorithm for the solution of the linear system of inequalities Ca ≥ f, in the Lp norm, where, 1 < p < ∞. They first characterized the solutions for the problem and then introduced a dual to this problem. They presented the results of their algorithm for 2 examples. Their numerical results for some cases show the number of iterations to be very large. 16.1.1
Linear programming techniques
Linear programming techniques have been applied for some time to the solution of the linear inequality Ca > 0. Minnick [23] showed how linear programming could be used in the solution of the linear © 2008 by Taylor & Francis Group, LLC
Chapter 16: Solution of Linear Inequalities
549
input logic problem, a linearly separable switching function, which reduces to a linear inequality problem. Mangasarian [22] suggested the use of linear programming to the solution of the problem, without actually solving it. Smith [29] presented a linear programming formulation of discriminant function design, which reduces to a pattern classifier design. Duda and Hart ([12] pp. 167-169) presented two different linear programming formulations to the problem Ca ≥ f, without implementing them. In our work, we solve problem (16.1.1) as a linear one-sided approximation problem in the Lp norm, for p = ∞ and for p = 1. We make use of the definition of the one-sided solution of overdetermined linear equations in the Chebyshev norm of Chapter 11 and in the L1 norm of Chapter 6 respectively. We formulate the problem in such a way that we may apply either of these two algorithms to it. Both algorithms use linear programming techniques. We also observe that there are one-sided problems whose solutions do not exist. Hence, there are linear systems of inequalities that do not have feasible solutions. See Example 16.2 in Section 16.6. In Section 16.2, the pattern classification problem in formulated as a problem of linear inequalities. In Section 16.3, the solution of the system of linear inequalities Ca > 0 is presented as a one-sided solution of the system of linear equations Ca = f, where f is a strictly positive vector. In Sections 16.4 and 16.5, the linear one-sided Chebyshev approximation algorithm and the linear one-sided L1 approximation algorithm are outlined respectively. In Section 16.6, two numerical examples for solving the pattern classification problems are given with comments. 16.2
Pattern classification problem
Overdetermined linear inequalities form a basic problem in pattern classification. Given is a class A of s patterns and a class B of t patterns, where each pattern is a point in an m-dimensional Euclidean space. Let n = (s + t) and usually n >> m. The pattern classification problem is summarized as follows. It is required to find a surface in the m-dimensional space such that all points of class A be on one side of this surface and all points of class B be on the other side of the surface. Let the equation of this
© 2008 by Taylor & Francis Group, LLC
550
Numerical Linear Approximation in C
separating surface be a1φ1(x) + a2φ2(x) + … + am+1φm+1(x) = 0 Vector a = (a1, a2, …, am+1)T is to be calculated and {φ1(x), φ2(x), …, φm+1(x)} is a set of linearly independent functions to be specified according to the geometry of the problem. Without loss of generality, we may take φm+1(x) = 1. Following Tou and Gonzalez ([31], pp. 40-41, 48-49), a decision function d(x) of the form (taking φm+1(x) = 1) d(x) = a1φ1(x) + a2φ2(x) + … + amφm(x) + am+1 is established. This function has the property that d(xi) < 0, xi ∈ A
(16.2.1) and
d(xi) > 0, xi ∈ B
(16.2.2)
The two classes of patterns A and B are linearly separable if and only if there exists an (m + 1)-vector a such that the above two inequalities are satisfied. If no such vector exists, then classes A and B are linearly inseparable. By multiplying the first set of inequalities by a –ve signs, we get –d(xi) > 0, xi ∈ A
(16.2.3)
The problem may now be posed as follows. Using (16.2.3) and (16.2.2), let C be an n by (m + 1) matrix whose ith row Ci is Ci = (–φ1(xi), –φ2(xi), …, –φm(xi), –1), 1 ≤ i ≤ s and Ci = (φ1(xi), φ2(xi), …, φm(xi), 1), (s + 1) ≤ i ≤ n It is required to calculate the (m + 1)-vector a such that the system of inequalities Ca > 0 is satisfied. This is the same system of inequalities (16.1.1) above. In this case, N = n and M = (m + 1). Clark and Gonzalez [9] presented an algorithm based on a search procedure where matrix C has to satisfy the Haar condition. That is,
© 2008 by Taylor & Francis Group, LLC
Chapter 16: Solution of Linear Inequalities
551
every (m + 1) by (m + 1) sub-matrix of C is of rank (m + 1). Sklansky and Michelotti [28] described an algorithm, called a locally trained piecewise linear classifier, for use with multi-dimensional data. 16.3
Solution of the system of linear inequalities Ca > 0
A common practice is to replace the inequalities Ca > 0 by the inequalities Ca ≥ f where f is an N-vector, each element of which is positive. It is easier to manipulate the system Ca ≥ f and that its solution is also a solution to the system Ca > 0. The solution of the system of linear inequalities Ca ≥ f, reminds us of the one-sided solutions of overdetermined systems of linear equations in the Chebyshev norm (Chapter 11) or in the L1 norm (Chapter 6). Here, two algorithms for solving the overdetermined system of linear inequalities Ca > 0 are presented [3]. The first algorithm calculates the one-sided Chebyshev solution from below of the system Ca = f, where f is a strictly positive vector. The second algorithm calculates the one-sided L1 solution from below of the same system Ca = f, where f is a strictly positive vector. If a solution exists to either of the one-sided problems, it would be a solution to the given system of inequalities Ca > 0. 16.4
Linear one-sided Chebyshev approximation algorithm
Consider the overdetermined system of linear equations derived from Ca ≥ f, namely (16.4.1)
Ca = f
where C = (cij) is an N by M real matrix and f = (fi) is an N real vector, N > M. The Chebyshev solution of (16.4.1) is vector a that minimizes the Chebyshev norm of the residuals z = max|ri|, i = 1, 2, …, N where ri is the ith residual and is given by
© 2008 by Taylor & Francis Group, LLC
552
Numerical Linear Approximation in C
m
ri =
∑ cij aj – fi ,
i = 1, 2, …, n
j=1
As in Chapter 11, when the solution vector a satisfies the additional conditions ri ≥ 0, i = 1, 2, …, N we have the one-sided Chebyshev solution from below [1] of system Ca = f. Let h = maxi|ri|, then this problem is formulated in linear programming as follows minimize h subject to 0 ≤ Ca – f ≤ he where e is an N-vector, each element of which is 1 and the 0 is an N-zero vector. From the left inequality we have Ca ≥ f Since we assume that f is a strictly positive vector, the solution of the inequalities Ca ≥ f, is a solution of the inequalities Ca > 0. In Chapter 11, an algorithm for the one-sided Chebyshev solution of overdetermined systems of linear equations is presented. 16.5
Linear one-sided L1 approximation algorithm
Consider the overdetermined system of linear equations Ca = f in (16.4.1). The L1 solution of this equation is the solution vector a that minimizes the L1 norm of the residuals N
z =
∑
ri
i=1
where ri is the ith residual, as defined in section 16.4. As in Chapter 6, when the solution vector a satisfies the additional conditions
© 2008 by Taylor & Francis Group, LLC
Chapter 16: Solution of Linear Inequalities
553
ri ≥ 0, i = 1, 2, …, N we have the one-sided L1 solution from below of system Ca = f; that is, for any equation i, i = 1, …, N, the observed value fi is not greater than the calculated value of element i of Ca. In other words, the inequalities ri ≥ 0, i = 1, 2, …, N, are translated to M
∑ cij aj ≥ fi ,
i = 1, 2, …, N
j=1
Or in vector-matrix form Ca ≥ f Since we assume that f is a strictly positive vector, the solution vector a of Ca ≥ f is a solution to the inequalities Ca > 0. In Chapter 6, an algorithm for the one-sided L1 solution of overdetermined systems of linear equations was presented. 16.6
Numerical results and comments
DR_Chineq() calls the one-sided Chebyshev solution function LA_Linfside() of Chapter 11, and DR_L1ineq() calls the one-sided L1 function LA_Loneside() of Chapter 6. Two pattern classification examples are solved here. Example 16.1 Consider the two dimensional problem of 16 points that belong to the classes A and B. Their coordinates are as follows. Class A: {(1, 3), (1.5, 5), (2, 6), (2.5, 7), (–1, 2), (–1.5, 4), (–2, 6), (–2.5, 7)} Class B: {(2, 2), (4, 4), (5, 6), (6, 7), (–3, –1), (–4, 3), (–5, 4), (–6, 6)} We assume that the separating curve of the two classes is the vertical parabola (16.6.1)
© 2008 by Taylor & Francis Group, LLC
a1 + a2x + a3x2 + a4y = 0
554
Numerical Linear Approximation in C
We should remember that the decision function of (16.2.1) for class A is d(xi) < 0, xi ∈ A. The decision function for this example is the l.h.s. of the equation of the vertical parabola in (16.6.1). In order to reverse the < sign for class A, we multiply d(xi) < 0, xi ∈ A by –1. Also, it is customary to take each element of vector f as 1. Hence, from Ca ≥ f, the following equation is formed a11 + a2x + a3x2 + a4y = f where 1 is a vector of 16 elements, each of which is 1. That is –1 –1 –1 –1 –1 –1 –1 –1 1 1 1 1 1 1 1 1
–1 – 1.5 –2 – 2.5 1 1.5 2 2.5 2 4 5 6 –3 –4 –5 –6
–1 – 2.25 –4 – 6.25 –1 – 2.25 –4 – 6.25 4 16 25 36 9 16 25 36
–3 –5 –6 –7 –2 –4 –6 –7 2 4 6 7 –1 3 4 6
a1 a2 a3 a4
1 1 1 1 1 1 1 = 1 1 1 1 1 1 1 1 1
The solution by the first algorithm, the one-sided Chebyshev approximation from below, is a = (1.42, 0.38, 0.287, –1.164)T, which is y = 1.22 + 0.327x + 0.247x2. The solution by the second algorithm, the one-side L1 approximation from below, is a = (0.733, 0.4, 0.267, –0.8)T, which is y = 0.92 + 0.5x + 0.33x2. The separating curves for the two algorithms are shown in Figure 16-1.
© 2008 by Taylor & Francis Group, LLC
Chapter 16: Solution of Linear Inequalities
555
18 16 14 12 10 8 6 4 2 -8
-6
-4
-2
0 -2 0
2
4
6
8
Figure 16-1: Two curves that each separate the patterns of class A from the patterns of class B
Algorithm 1 produces the thick parabola and algorithm 2 produces the thin parabola. Each parabola separates the patterns of class A, denoted by “.”, from the patterns of class B, denoted by “×”. Example 16.2 Consider the two dimensional problem of 12 points that belong to the two classes A and B. Their coordinates are Class A: {(2, 1), (2, –1), (4, 1), (4, –1), (–3, 1), (–3, –1)} Class B: {(3, –3), (5, –3), (4, –4), (–1, 3), (–3, 3), (–2, 5)} We assume that the separating curve of the two classes is the ellipse a1x2 + a2y2 + a3 = 0 Vector f is again a 12-vector, each element of which is 1. Hence, from the inequality Ca ≥ f, the following equation is formed a1x2 + a2y2 + a3 = f
© 2008 by Taylor & Francis Group, LLC
556
Numerical Linear Approximation in C
–4 –4 – 16 – 16 –9 –9 9 25 16 1 9 4
–1 –1 –1 –1 –1 –1 9 9 16 9 9 25
–1 1 –1 1 –1 1 –1 1 –1 a 1 1 –1 1 a2 = 1 1 a3 1 1 1 1 1 1 1 1 1 1
The solution vector by either of the two algorithms is a = (0.0, 0.25, –1.25)T, which gives the result y2 = 5, or y = ±sqrt(5). Certainly, the two straight lines y = ± sqrt(5) is a solution to the inequality Ca > 0. For this example the second point in class B is then changed from (5, –3) to (5, 1) and the problem is re-solved. The obtained result by both algorithms is a = (0.222, 0.667, –5.222)T. This is an ellipse of semi-major and semi-minor axes 4.85 and 2.8 respectively. Once more, for this example, the second point in class B is changed from (5, –3) to (4, 1). Both algorithms return, indicating that no feasible solution is obtained; that is, the two classes A and B are inseparable. It should be noted that if we replace vector f in (16.4.1) by Kf, where K is a positive scalar, the obtained solution vector would be Ka and we would get the same separating surface. Our two algorithms are comparable in speed. They are faster and simpler than other existing algorithms including a previous one by the author [2]. They also converge in a finite number of iterations. Being linear programs, the number of iterations is of the order of 2 to 3 times the smaller dimension of matrix C in Ca > 0. Another important points is the following. Finally, since we are using linear programming techniques, matrix C need not be of full rank. © 2008 by Taylor & Francis Group, LLC
Chapter 16: Solution of Linear Inequalities
557
References 1. 2. 3.
4. 5. 6. 7. 8. 9.
10. 11. 12.
Abdelmalek, N.N., The discrete linear one-sided Chebyshev approximation, Journal of Institute of Mathematics and Applications, 18(1976)361-370. Abdelmalek, N.N., Chebyshev approximation algorithm for linear inequalities and its applications to pattern recognition, International Journal of Systems Science, 12(1981)963-975. Abdelmalek, N.N., Linear one-sided approximation algorithms for the solution of overdetermined systems of linear inequalities, International Journal of Systems Science, 15(1984)1-8. Abdelmalek, N.N., Piecewise linear least-squares approximation of planar curves, International Journal of Systems Science, 21(1990)1393-1403. Agmon, S., The relaxation method for linear inequalities, Canadian Journal of Mathematics, 6(1954)382-392. Bahi, S. and Sreedharan, V.P., An algorithm for a minimum norm solution of a system of linear inequalities, International Journal of Computer Mathematics, 80(2003)639-647. Bramley, R. and Winnicka, B., Solving linear inequalities in a least squares sense, SIAM Journal on Scientific Computation, 17(1996)275-286. Censor, Y. and Elfving, T., New methods for linear inequalities, Linear Algebra and its Applications, 42(1982)199-211. Clark, D.C. and Gonzalez, R.C., Optimal solution of linear inequalities with applications to pattern recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(1981)643-655. Cohen, E. and Megiddo, N., Improved algorithms for linear inequalities with two variables per inequality, SIAM Journal on Computing, 23(1994)1313-1347. De Pierro, A.R. and Iusem, A.N., A simultaneous projections method for linear inequalities, Linear Algebra and its Applications, 64(1985)243-253. Duda, R.O. and Hart, P.E., Pattern Classification and Scene Analysis, John Wiley & Sons, New York, 1973.
© 2008 by Taylor & Francis Group, LLC
558
Numerical Linear Approximation in C
13.
Faigle, U., Kern, W. and Still, G., Algorithmic Principles of Mathematical Programming, Kluwer Academic Publishers, London, 2002. Goffin, J.L., The relaxation method for solving systems of linear inequalities, Mathematics of Operations Research, 5(1980)388-414. Guler, O., Hoffman, A.J. and Rothblum, U.G., Approximations to solutions to systems of linear inequalities, SIAM Journal on Matrix Analysis and Applications, 16(1995)688696. Hadley, G., Linear Programming, Addison-Wesley, Reading, MA, 1962. Han, S-P., Least squares solution of linear inequalities, Technical Report TR-2141, Mathematics Research Center, University of Wisconsin-Madison, 1980. He, B., New contraction methods for linear inequalities, Linear Algebra and its Applications, 207(1994)115-133. Ho, Y-C. and Kashyap, R.L., An algorithm for linear inequalities and its applications, IEEE Transactions on Electronic Computers, 14(1965)683-688. Hoffman, A.J., On approximate solutions of systems of linear inequalities, Journal of Research of the National Bureau of Standards, 49(1952)263-265. Labonte, G., On solving systems of linear inequalities with artificial neural networks, IEEE Transactions on Neural Networks, 8(1997)590-600. Mangasarian, O.L., Iterative solution of linear programs, SIAM Journal on Numerical Analysis, 18(1981)606-614. Minnick, R.C., Linear-input logic, IRE Transactions on Electronic Computers, 10(1961)6-16. Motzkin, T.S. and Schoenberg, I.J., The relaxation method for linear inequalities, Canadian Journal of Mathematics, 6(1954)393-404. Nagaraja, G. and Krishna, G., An algorithm for the solution of linear inequalities, IEEE Transactions on Computers, 23(1974)421-427.
14. 15.
16. 17. 18. 19. 20. 21. 22. 23. 24. 25.
© 2008 by Taylor & Francis Group, LLC
Chapter 16: Solution of Linear Inequalities
26. 27. 28. 29. 30. 31.
559
Nie, Y.Y. and Xu, S.R., Determination and correction of an inconsistent system of linear inequalities, Journal of Computational Mathematics, 13(1995)211-217. Pinar, M.C. and Chen, B., l1 solution of linear inequalities, IMA Journal of Numerical Analysis, 19(1999)19-37. Sklansky, J. and Michelotti, L., Locally trained piecewise linear classifier, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2(1980)101-110. Smith, F.W., Pattern classifier design by linear programming, IEEE Transactions on Computers, 17(1968)367-372. Stewart, G.W., The effects of rounding residual on an algorithm for downdating a Cholesky factorization, Journal of Institute of Mathematics and Applications, 23(1979)203-213. Tou, J.L. and Gonzalez, R.C., Pattern Recognition Principles, Addison-Wesley, Reading, MA, 1974.
© 2008 by Taylor & Francis Group, LLC
560
16.7
Numerical Linear Approximation in C
DR_Chineq
/*------------------------------------------------------------------DR_Chineq --------------------------------------------------------------------Given is an overdetermined system of linear inequalities of the form (1)
c*a > 0
"c" is a given real n by m matrix of rank k, k <= m < n. Usually, n>>m. It is required to obtain the m solution vector "a" for this system. System (1) is first replaced by the overdetermined system of linear equations (2)
c*a = f
where "f" is a strictly real positive n vector. This program is a driver for the function LA_Chineq(), which calculates the one-sided "Chebyshev" solution from below of system (2). This solution itself would be a solution of the inequality (1). This driver contains examples for problems of pattern classification in which there are two classes of patterns. It is required to calculate the parameters of a plane curve that separates the two classes. The results of these examples appear in the text. Example 1: class a: { (1,3), (1.5,5), (2,6), (2.5,7), (-1,2), (-1.5,4), (-2,6), (-2.5,7) } class b: { (2,2), (4,4), (5,6), (6,7), (-3,-1), (-4,3), (-5,4), (-6,6) } separating curve is the parabola a1 + a2*x + a3*x*x + a4*y = 0 Example 2: class A: { (2,1), (2,-1), (4,1), (4,-1), (-3,1), (-3,-1) } class B: { (3,-3), (5,-3), (4,-4), (-1,3), (-3,3), (-2,5) } Separating curve is the ellipse a1*x*x + a2*y*y + a3 = 0
© 2008 by Taylor & Francis Group, LLC
Chapter 16: DR_Chineq
561
Example 3: class A: { (2,1), (2,-1), (4,1), (4,-1), (-3,1), (-3,-1) } class B: { (3,-3), (5,1), (4,-4), (-1,3), (-3,3), (-2,5) } Separating curve is the ellipse a1*x*x + a2*y*y + a3 = 0 This example is itself example 2, but with the second point in B (5,-3) replaced by (5,1). Example 4: class A: { (2,1), (2,-1), (4,1), (4,-1), (-3,1), (-3,-1) } class B: { (3,-3), (4,1), (4,-4), (-1,3), (-3,3), (-2,5) } Separating curve is the ellipse a1*x*x + a2*y*y + a3 = 0 This example is itself example 2, but with the second point in B (5,-3) replaced by (4,1). -------------------------------------------------------------------*/ #include "DR_Defs.h" #include "LA_Prototypes.h" #define #define #define #define
N1q M1q N2q M2q
16 4 12 3
void DR_Chineq (void) { /*---------------------------------------Constant matrices/vectors ----------------------------------------*/ static tNumber_R Cainit[N1q][M1q] = { { -1.0, -1.0, -1.0, -3.0 }, { -1.0, -1.5, -2.25, -5.0 }, { -1.0, -2.0, -4.0, -6.0 }, { -1.0, -2.5, -6.25, -7.0 }, { -1.0, 1.0, -1.0, -2.0 }, { -1.0, 1.5, -2.25, -4.0 }, { -1.0, 2.0, -4.0, -6.0 }, { -1.0, 2.5, -6.25, -7.0 }, © 2008 by Taylor & Francis Group, LLC
562
Numerical Linear Approximation in C { { { { { { { {
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
2.0, 4.0, 5.0, 6.0, -3.0, -4.0, -5.0, -6.0,
4.0, 2.0 }, 16.0, 4.0 }, 25.0, 6.0 }, 36.0, 7.0 }, 9.0, -1.0 }, 16.0, 3.0 }, 25.0, 4.0 }, 36.0, 6.0 }
}; static tNumber_R Cbinit[N2q][M2q] = { { -4.0, -1.0, -1.0 }, { -4.0, -1.0, -1.0 }, { -16.0, -1.0, -1.0 }, { -16.0, -1.0, -1.0 }, { -9.0, -1.0, -1.0 }, { -9.0, -1.0, -1.0 }, { 9.0, 9.0, 1.0 }, { 25.0, 9.0, 1.0 }, { 16.0, 16.0, 1.0 }, { 1.0, 9.0, 1.0 }, { 9.0, 9.0, 1.0 }, { 4.0, 25.0, 1.0 } }; static tNumber_R fa[N1q+1] = { NIL, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0 }; static tNumber_R fb[N2q+1] = { NIL, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0 }; /*---------------------------------------Variable matrices/vectors ----------------------------------------*/ tMatrix_R ct = alloc_Matrix_R (MMc_COLS, NN_ROWS); tVector_R f = alloc_Vector_R (NN_ROWS); tVector_R r = alloc_Vector_R (NN_ROWS); tVector_R a = alloc_Vector_R (MMc_COLS); © 2008 by Taylor & Francis Group, LLC
Chapter 16: DR_Chineq
tMatrix_R tMatrix_R
Ca = init_Matrix_R (&(Cainit[0][0]), N1q, M1q); Cb = init_Matrix_R (&(Cbinit[0][0]), N2q, M2q);
int int tNumber_R
irank, iter, iside; i, j, m, n, Iexmpl; z = 0.0;
eLaRc
rc = LaRcOk;
prn_dr_bnr ("DR_Chineq, Solving an Overdetermined System of" " Linear Inequalities"); for (Iexmpl = 1; Iexmpl <= 4; Iexmpl++) { switch (Iexmpl) { case 1: n = N1q; m = M1q; for (i = 1; i <= n; i++) { f[i] = fa[i]; for (j = 1; j <= m; j++) { ct[j][i] = Ca[i][j]; } } break; case 2: n = N2q; m = M2q; for (i = 1; i <= n; i++) { f[i] = fb[i]; for (j = 1; j <= m; j++) { ct[j][i] = Cb[i][j]; } } break; case 3: n = N2q; © 2008 by Taylor & Francis Group, LLC
563
564
Numerical Linear Approximation in C m = M2q; for (i = 1; i <= n; i++) { f[i] = fb[i]; for (j = 1; j <= m; j++) { ct[j][i] = Cb[i][j]; } ct[2][8] = 1.0; } break; case 4: n = N2q; m = M2q; for (i = 1; i <= n; i++) { f[i] = fb[i]; for (j = 1; j <= m; j++) { ct[j][i] = Cb[i][j]; } ct[1][8] = 16.0; ct[2][8] = 1.0; } break; default: break; } prn_algo_bnr("Chineq"); prn_example_delim(); PRN ("Example #%d: Size of matrix \"c\" %d by %d\n", Iexmpl, n, m); prn_example_delim(); PRN ("\"Linfside\" for Solving an Overdetermined System of" " Linear Inequalities\n"); prn_example_delim(); PRN ("r.h.s. Vector \"f\"\n"); prn_Vector_R (f, n); PRN ("Transpose of Coefficient Matrix, \"ct\"\n"); prn_Matrix_R (ct, m, n); iside = 0; rc = LA_Linfside (iside, m, n, ct, f, &irank, &iter, r, a,
© 2008 by Taylor & Francis Group, LLC
Chapter 16: DR_Chineq
565 &z);
if (rc >= LaRcOk) { PRN ("\n"); PRN ("Results of the Solution of Linear Inequalities\n"); PRN ("One-sided Chebyshev solution vector \"a\"\n"); prn_Vector_R (a, m); PRN ("One-sided Chebyshev residual vector \"r\"\n"); prn_Vector_R (r, n); PRN ("One-sided Chebyshev norm \"z\" = %8.4f\n", z); PRN ("Rank of of matrix \"c\" = %d, No. of " " Iterations = %d\n", irank, iter); } prn_la_rc (rc); } free_Matrix_R free_Vector_R free_Vector_R free_Vector_R
(ct, MMc_COLS); (f); (r); (a);
uninit_Matrix_R (Ca); uninit_Matrix_R (Cb); }
© 2008 by Taylor & Francis Group, LLC
566
16.8
Numerical Linear Approximation in C
DR_L1ineq
/*------------------------------------------------------------------DR_L1ineq --------------------------------------------------------------------Given is an overdetermined system of linear inequalities of the form (1)
c*a > 0
"c" is a given real n by m matrix of rank k, k <= m < n. Usually, n>>m. It is required to obtain the m solution vector "a" for this system. System (1) is first replaced by the overdetermined system of linear equations (2)
c*a = f
where "f" is a strictly real positive n vector. This program is a driver for the function LA_L1ineq(), which calculates the one-sided L-One solution from below of system (2). This solution itself would be a solution of the inequality (1). This driver contains examples for problems of pattern classification in which there are two classes of patterns. It is required to calculate the parameters of a plane curve that separates the two classes. The results of these examples appear in the text. Example 1: class a: { (1,3), (1.5,5), (2,6), (2.5,7), (-1,2), (-1.5,4), (-2,6), (-2.5,7) } class b: { (2,2), (4,4), (5,6), (6,7), (-3,-1), (-4,3), (-5,4), (-6,6) } separating curve is the parabola a1 + a2*x + a3*x*x + a4*y = 0 Example 2: class A: { (2,1), (2,-1), (4,1), (4,-1), (-3,1), (-3,-1) } class B: { (3,-3), (5,-3), (4,-4), (-1,3), (-3,3), (-2,5) } Separating curve is the ellipse a1*x*x + a2*y*y + a3 = 0
© 2008 by Taylor & Francis Group, LLC
Chapter 16: DR_L1ineq
567
Example 3: class A: { (2,1), (2,-1), (4,1), (4,-1), (-3,1), (-3,-1) } class B: { (3,-3), (5,1), (4,-4), (-1,3), (-3,3), (-2,5) } Separating curve is the ellipse a1*x*x + a2*y*y + a3 = 0 This example is itself example 2, but with the second point in B (5,-3) replaced by (5,1). Example 4: class A: { (2,1), (2,-1), (4,1), (4,-1), (-3,1), (-3,-1) } class B: { (3,-3), (4,1), (4,-4), (-1,3), (-3,3), (-2,5) } Separating curve is the ellipse a1*x*x + a2*y*y + a3 = 0. This example is itself example 2, but with the second point in B (5,-3) replaced by (4,1). -------------------------------------------------------------------*/ #include "DR_Defs.h" #include "LA_Prototypes.h" #define #define #define #define
Na Ma Nb Mb
16 4 12 3
void DR_L1ineq (void) { /*---------------------------------------Constant matrices/vectors ----------------------------------------*/ static tNumber_R Cainit[Na][Ma] = { { -1.0, -1.0, -1.0, -3.0 }, { -1.0, -1.5, -2.25, -5.0 }, { -1.0, -2.0, -4.0, -6.0 }, { -1.0, -2.5, -6.25, -7.0 }, { -1.0, 1.0, -1.0, -2.0 }, { -1.0, 1.5, -2.25, -4.0 }, { -1.0, 2.0, -4.0, -6.0 }, { -1.0, 2.5, -6.25, -7.0 }, © 2008 by Taylor & Francis Group, LLC
568
Numerical Linear Approximation in C { { { { { { { {
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
2.0, 4.0, 5.0, 6.0, -3.0, -4.0, -5.0, -6.0,
4.0, 2.0 }, 16.0, 4.0 }, 25.0, 6.0 }, 36.0, 7.0 }, 9.0, -1.0 }, 16.0, 3.0 }, 25.0, 4.0 }, 36.0, 6.0 }
}; static tNumber_R Cbinit[Nb][Mb] = { { -4.0, -1.0, -1.0 }, { -4.0, -1.0, -1.0 }, { -16.0, -1.0, -1.0 }, { -16.0, -1.0, -1.0 }, { -9.0, -1.0, -1.0 }, { -9.0, -1.0, -1.0 }, { 9.0, 9.0, 1.0 }, { 25.0, 9.0, 1.0 }, { 16.0, 16.0, 1.0 }, { 1.0, 9.0, 1.0 }, { 9.0, 9.0, 1.0 }, { 4.0, 25.0, 1.0 } }; static tNumber_R fa[Na+1] = { NIL, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0 }; static tNumber_R fb[Nb+1] = { NIL, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0 }; /*---------------------------------------Variable matrices/vectors ----------------------------------------*/ tMatrix_R ct = alloc_Matrix_R (MM_COLS, NN_ROWS); tVector_R f = alloc_Vector_R (NN_ROWS); tVector_R r = alloc_Vector_R (NN_ROWS); tVector_R a = alloc_Vector_R (MM_COLS);
© 2008 by Taylor & Francis Group, LLC
Chapter 16: DR_L1ineq tMatrix_R tMatrix_R
Ca = init_Matrix_R (&(Cainit[0][0]), Na, Ma); Cb = init_Matrix_R (&(Cbinit[0][0]), Nb, Mb);
int int tNumber_R
irank, iter, iside; i, j, m, n, Iexmpl; z;
eLaRc
rc = LaRcOk;
prn_dr_bnr ("DR_L1ineq, Solving an Overdetermined System of" " Linear Inequalities"); for (Iexmpl = 1; Iexmpl <= 4; Iexmpl++) { switch (Iexmpl) { case 1: n = Na; m = Ma; for (i = 1; i <= n; i++) { f[i] = fa[i]; for (j = 1; j <= m; j++) { ct[j][i] = Ca[i][j]; } } break; case 2: n = Nb; m = Mb; for (i = 1; i <= n; i++) { f[i] = fb[i]; for (j = 1; j <= m; j++) { ct[j][i] = Cb[i][j]; } } break; case 3: n = Nb; m = Mb; © 2008 by Taylor & Francis Group, LLC
569
570
Numerical Linear Approximation in C for (i = 1; i <= n; i++) { f[i] = fb[i]; for (j = 1; j <= m; j++) { ct[j][i] = Cb[i][j]; } ct[2][8] = 1.0; } break; case 4: n = Nb; m = Mb; for (i = 1; i <= n; i++) { f[i] = fb[i]; for (j = 1; j <= m; j++) { ct[j][i] = Cb[i][j]; } ct[1][8] = 16.0; ct[2][8] = 1.0; } break; default: break; } prn_algo_bnr ("L1ineq"); prn_example_delim(); PRN ("Example #%d: Size of matrix \"c\": %d by %d\n", Iexmpl, n, m); prn_example_delim(); PRN ("\"Loneside\" for Solving an Overdetermined System of" " Linear Inequalities\n"); prn_example_delim(); PRN ("r.h.s. Vector \"f\"\n"); prn_Vector_R (f, n); PRN ("Transpose of Coefficient Matrix, \"ct\"\n"); prn_Matrix_R (ct, m, n); iside = 0; rc = LA_Loneside (iside, m, n, ct, f, &irank, &iter, r, a,
© 2008 by Taylor & Francis Group, LLC
Chapter 16: DR_L1ineq
571 &z);
if (rc >= LaRcOk) { PRN ("\n"); PRN ("Results of the Solution of Linear Inequalities\n"); PRN ("One-sided L-One solution vector \"a\"\n"); prn_Vector_R (a, m); PRN ("One-sided L-One residual vector \"r\"\n"); prn_Vector_R (r, n); PRN ("One-sided L-One norm \"z\" = %8.4f\n", z); PRN ("Rank of of matrix \"c\" = %d, No. of " " Iterations = %d\n", irank, iter); } prn_la_rc (rc); } free_Matrix_R free_Vector_R free_Vector_R free_Vector_R
(ct, MM_COLS); (f); (r); (a);
uninit_Matrix_R (Ca); uninit_Matrix_R (Cb); }
© 2008 by Taylor & Francis Group, LLC
PART 4
The Least Squares Approximation
© 2008 by Taylor & Francis Group, LLC
574
Chapter 17
Chapter 18
Chapter 19
Numerical Linear Approximation in C
Least Squares and Pseudo-Inverses of Matrices
575
Piecewise Linear Least Squares Approximation
671
Solution of Ill-Posed Linear Systems
703
© 2008 by Taylor & Francis Group, LLC
575
Chapter 17 Least Squares and Pseudo-Inverses of Matrices
17.1
Introduction
It is expected that the reader of this chapter is familiar with the concept of the least squares solution of systems of linear equations. This chapter is the extension of Chapter 4, and is also a tutorial one. Consider the systems of linear equations Ax = b A is a real n by m matrix, b is a real n-vector and x is the solution m-vector. The residual vector for the system Ax = b, is given here by r = b – Ax It is known that A has an inverse A–1 if and only if A is a square nonsingular matrix. The solution of the system Ax = b, is given by x = A–1b. However, if A is a rectangular matrix, or a square singular matrix, Ax = b may have an approximate solution. A least squares solution x minimizes the L2 norm ||r||2 of the residual vector. The least squares solution is given by x = A+b, where A+ is known as the pseudo-inverse of A. We state here some theorems concerning the least squares solution of a system of linear equations whose coefficient matrix is a real general one. Such theorems make it easy to understand the term unique least squares solution or minimal length least squares solution of a system of linear equations. The pseudo-inverse of A is calculated by factorizing matrix A into the product of 2 or 3 matrices, each is of full rank. The pseudo-inverse of A is calculated in terms of the pseudo-inverses of the factorized matrices.
© 2008 by Taylor & Francis Group, LLC
576
Numerical Linear Approximation in C
Two efficient matrix factorization methods are considered. These are the Gauss LU factorization with complete pivoting and the Householder’s QR factorization method with pivoting. In Section 17.2, theorems are given concerning necessary and sufficient conditions for the square of the residual vector to have a minimum value. This leads to the definition and calculation of the pseudo-inverse of matrix A and the definition of the minimal length least squares solution. In Section 17.3, the two mentioned factorizations of matrix A are presented. A review of other factorization methods, such as Givens’ plane rotation and the classical and modified Gram-Schmidt methods, is also presented. In Section 17.4, explicit expressions for the pseudo-inverse in terms of the factorization matrices are presented. The singular value decomposition (SVD) of matrix A is given in Section 17.5, together with the spectral condition number of A and some properties of the pseudo-inverses of A. In Section 17.6, practical considerations in computing the linear least squares solution of Ax = b are given. An introduction of linear spaces and the pseudo-inverses is given in Section 17.7. The interesting subject of multicollinearity, collinearity of the columns of matrix A, or the ill-conditioning of matrix A, is presented in Section 17.8. This leads to the subject of dealing with multicollinearity via the Principal components analysis (PCA), the Partial least squares method (PLS) and the Ridge regression or Ridge equation technique. These are given in Sections 17.9-17.11. In Section 17.12, numerical results using the Gauss LU factorization method with complete pivoting and the Householder’s QR factorization method with pivoting are presented with comments. 17.2
Least squares solution of linear equations
Let us assume that we have real matrices and vectors. Let AT and to the transpose of matrix A and of vector x respectively. The vector norm ||.|| denotes the L2 norm. xT refer
© 2008 by Taylor & Francis Group, LLC
Chapter 17: Least Squares and Pseudo-Inverses of Matrices
577
Theorem 17.1 A necessary and sufficient condition for the square of the residual vector, (r, r) = ||r||2, to be minimum is that [18] AT(b – Ax) = 0
(17.2.1) Proof:
Sufficiency: From the definition of r, we have (r, r) = ((b – Ax), (b – Ax)) = (b, b) – (b, Ax) – (Ax, b) + (Ax, Ax) Let us differentiate (r, r) w.r.t. x and equate to 0. This gives ATAx = ATb or AT(b – Ax) = 0 Necessity: Suppose x gives the minimum (r, r) but AT(b – Ax) = w ≠ 0 The square of the residual corresponding to x + εw, where ε is a small positive parameter is easily found to be ||b – Ax – εΑw||2 = ||b – Ax||2 – 2ε||w||2 + ε2||Αw||2 For sufficiently small ε, the r.h.s. is < ||b – Ax||2. The hypothesis is thus contradicted and we should have w = 0 and the theorem is proved. Theorem 17.2 Let A be an n by m matrix (n > m) of rank m. Then the linear least squares solution to the system Ax = b is given by (17.2.2)
x = (ATA)–1ATb
Proof: From (17.2.1), we write (17.2.3)
ATAx = ATb
Since A is of rank m, the m by m matrix (ATA) is nonsingular and equation (17.2.3) has the solution x = (ATA)–1ATb and the theorem is proved. Equation (17.2.3) is known as the normal equation to the linear least squares problem of Ax = b. In (17.2.2), we may write x = A+b, where we define
© 2008 by Taylor & Francis Group, LLC
578
Numerical Linear Approximation in C
A+ = (ATA)–1AT
(17.2.4)
as the pseudo-inverse of A. The motivation behind this is that if A is square nonsingular, A+ in (17.2.4) reduces to A–1 and x = A–1b. 17.2.1
Minimal-length least squares solution
In (17.2.2) the solution x = A+b, where A+ is given by (17.2.4) is unique, since A is an n by m matrix, n > m, of rank m and thus ATA is nonsingular. However, in the general case, when n < m or when rank(A) = k < min(n, m), the least squares solution of Ax = b is not unique. It is desired that the solution given by x = A+b, where A+ has a suitable definition, for a general n by m matrix, be unique. On the other hand, it was found that the solution of Ax = b that gives the minimum residual norm ||r||2 and also the minimum norm ||x||2, is unique. It is known as the minimal-length (or minimum norm) least squares solution. For this reason, the general derivation of A+ is based on the fact that the least squares solution x of Ax = b is of minimum length. This point is the basis of the next two theorems. Theorem 17.3 If A is an n by m matrix (n < m) of rank n, then the minimal length least squares solution x of Ax = b is given by (17.2.5)
x = AT(AAT)–1b
Proof: Since n < m, the system of equations Ax = b is an underdetermined system and thus has an infinite number of solutions (Chapter 4). Hence, in order to obtain the unique minimal length least squares solution, the problem is formulated as follows. Find x that minimizes the inner product (x, x) subject to the condition b – Ax = 0. This problem is solved by the method of Lagrange multipliers. Let c = (c1, c2, …, cn)T be the Lagrange multiplier vector. Let the function F be F = (x, x) + (c, (b – Ax)) For minimum F, we differentiate F w.r.t. xi, i = 1, …, m and also w.r.t. © 2008 by Taylor & Francis Group, LLC
Chapter 17: Least Squares and Pseudo-Inverses of Matrices
579
ci, i = 1, 2, …, n, and equate each of the m + n equations to 0. This gives the two vector equations ATc – 2x = 0 and b – Ax = 0 Thus (AAT)c = 2b or c = 2(AAT)–1b (since (AAT) is non-singular). Substituting this value of c into the first equation gives x = AT(AAT)–1b and the theorem is proved. We write the solution (17.2.5) in the form x = A+b, where A+ = AT(AAT)–1 Also, A+ reduces to A–1 if A is square nonsingular. As indicated in Section 17.1, in order to be able to calculate the pseudo-inverse A+ efficiently, we first factorize matrix A into the product of 2 or 3 matrices, each of full rank. Then we get the pseudo-inverse of A in terms of the pseudo-inverses of the factorized matrices. 17.3
Factorization of matrix A
Assume that A is a real n by m matrix of rank k, k ≤ min(n, m). Then A may be factorized into two matrices, each of rank k. We shall consider the factorization by the Gauss LU and by Householder’s QR methods. 17.3.1
Gauss LU factorization
Let (A|b) denote the n by (m + 1) matrix whose first m columns are matrix A and its (m + 1)th column is vector b. For maximum numerical stability and also in order to help determine the rank of (A|b), we obtain the LU factorization of (A|b) with complete pivoting. This is done by permuting the rows of (A|b) and/or the columns of A. The LU factorization will not be for matrix A, but rather for matrix A, where, if S and P are permutation matrices, A = SAP. See Chapter 4. The formal factorization proceeds exactly as in the nonsingular case in Chapter 4. However, in the current case, if matrix A is of rank k, after the kth step we shall have the factorization (17.3.1)
A = L1U1
L1 is an n by n-unit lower triangular matrix, while U1 is an n by m
© 2008 by Taylor & Francis Group, LLC
580
Numerical Linear Approximation in C
matrix. The upper k rows of U1 form an upper trapezoidal matrix and its remaining rows consist of zero elements. See the structure of L1 and of U1 as illustrated by the following, where n = 6, m = 4 and k = 2. It is clear that the last 4 columns of L1 and the last 4 rows of U1, in this case, play no part in the factorization. They might be discarded and instead we get the two trapezoidal matrices L and U. Obviously, the elements aij of U1 or of U are not the elements of matrix A; that is, we get the factorization (17.3.2)
A = LU
where L is an n by k-unit lower trapezoidal (its diagonal elements are 1’s) and U is a k by m upper trapezoidal matrix L1 1 m 21 1 L1 U1 =
U1 a 11 a 12 a 13 a 14 a 22 a 23 a 24
m 31 m 32 1 m 41 m 42 0 1 m 51 m 52 0 0 1 m 61 m 62 0 0 0 1 L
U
1 m 21 1 LU =
m 31 m 32 a 11 a 12 a 13 a 14 m 41 m 42
a 22 a 23 a 23
m 51 m 52 m 61 m 62 Note 17.1 For practical considerations whenever matrix U is a trapezoidal (not a triangular) matrix, U may be factorized into U = DU
© 2008 by Taylor & Francis Group, LLC
Chapter 17: Least Squares and Pseudo-Inverses of Matrices
581
and thus (17.3.3)
A = LDU
where D is a k by k diagonal matrix whose elements are the diagonal elements of U and U is a unit trapezoidal matrix (with unit diagonal elements). 17.3.2
Householder’s factorization
This factorization proceeds in the same manner as in the nonsingular case of Chapter 4. Once more, the factorization is not for matrix A, but for matrix A = AP, and P is a permutation matrix. If matrix A is of rank k, k < min(n, m), then after the kth step, we shall get (17.3.4)
A = Q1T1
Here, Q1 is an n by n orthonormal matrix (Q1TQ1 = In), while T1 is an n by m matrix whose first k rows form an upper trapezoidal and the remaining (n – k) rows consist of zero elements. The last (n – k) columns of Q1 and the last (m – k) rows of T1 play no part in the factorization of A and should be discarded. We get (17.3.5)
A = QT
where Q is an n by k orthonormal matrix (QTQ = Ik) and T is a k by m upper trapezoidal. In (17.3.5), we might also factorize T into T = RWT where R is a k by k upper triangular and W is an m by k orthonormal matrix. This factorization might be achieved by a modified version of the Householder’s method [10]. In this case, (17.3.5) becomes (17.3.6)
A = QRWT
where QTQ = Ik and WTW = Ik. To simplify the notation, we shall assume from now on that the permuted A is denoted by A, A+ is denoted by A+, where A+ is the pseudo-inverse of A and the permuted x is denoted by x. There are other orthogonal factorization methods that give equally-accurate results as the Gauss LU factorization and Householder's factorization methods, and also have their own merits. © 2008 by Taylor & Francis Group, LLC
582
Numerical Linear Approximation in C
These are the Givens’ transformation (known as plane rotations) and the Gram-Schmidt methods. For the next two sections, we shall assume that the n by m matrix A, n > m, is of rank m. These methods are described here briefly and they give the QR factorization of the un-permuted matrix A as the Householder's factorization method, apart possibly from numerical signs [1]. 17.3.3
Givens’ transformation (plane rotations)
Givens’ transformation factorizes A into, A = QR. To achieve this, matrix A is pre-multiplied by the so-called rotation matrices, producing a triangular matrix R. Givens’ method makes 0’s of desired elements of matrix A while preserving existing 0 elements. Pre-multiplying A by rotation matrices produces 0’s below the diagonal of A. This is done column by column or row by row. As in Householder’s factorization, the orthonormal matrix Q is not calculated explicitly; rather, vector QTb is calculated. There are two advantages for this method. It is best suited for sparse matrices, where the existing 0’s in matrix A are preserved and need not be treated [9]. Another advantage is that Givens’ transformation is best suited for updating the least squares solution of the problem when equations are added to or deleted from the matrix equation Ax = b. Stewart calls such a technique, the updating and downdating of a system of linear equations [3, 23, 24]. 17.3.4
Classical and modified Gram-Schmidt methods
Let vectors a1, a2, …, am be the columns of matrix A, assumed linearly independent. It is required to construct a set of orthonormal columns q1, q2, …, qm of a matrix Q such that for each k ≤ m, qk is a linear combination of a1, a2, …, ak. Again, by orthonormal, we mean, (qs, qs) = 1 and (qs, qk) = 0, s ≠ k. This is done by the Gram-Schmidt algorithm. As a result, A is factorized as A = QR, where QTQ = Im, and R is an upper triangular m by m matrix. For the classical Gram-Schmidt algorithm, the elements of R are calculated one column at a time. For the modified Gram-Schmidt, the elements of R are instead calculated one row at a time [1, 5]. For the
© 2008 by Taylor & Francis Group, LLC
Chapter 17: Least Squares and Pseudo-Inverses of Matrices
583
classical Gram-Schmidt with re-orthogonalization, intermediate vectors are formed and the re-orthogonalization is then carried out. Such extra computation is a drawback for this last method. The classical Gram-Schmidt algorithm is derived in Section 17.7.3. In [1], following the presentation of Bjorck [5], a round-off error analysis for the classical Gram-Schmidt algorithm with re-orthogonalization was presented. A numerical example for an ill-conditioned case was solved by the Householder’s factorization (without pivoting), the classical Gram-Schmidt, the modified Gram-Schmidt and the classical Gram-Schmidt with reorthogonalization. The last two methods compared favorably with Householder’s method, while the classical Gram-Schmidt lacks accuracy for the solution vector x, matrix R and for the orthogonality of the columns of matrix Q, meaning QTQ ≈ Im, instead of QTQ = Im (neglecting the rounding error). Eventually, Longley [15] modified the presentation of Bjorck for both the classical and the modified Gram-Schmidt methods. A merit for the Gram-Schmidt algorithms is that in calculating the least squares solution, the number of basis functions can be added without recalculating the problem. Barrodale and Stuart [4] gave a FORTRAN program for the modified Gram-Schmidt method that allows the number of basis functions to be increased without recalculating the problem form scratch. 17.4
Explicit expression for the pseudo-inverse
Let A be a real n by m matrix of rank k, k ≤ min(n, m). Then in general, A may be factorized into the form A = BC where B is an n by k matrix and C is a k by m matrix, each of rank k. Theorem 17.4 The minimal-length least squares solution of Ax = b, is given by x = A+b, where (17.4.1)
A+ = C+B+ = CT(CCT)–1(BTB)–1BT
Proof:
© 2008 by Taylor & Francis Group, LLC
584
Numerical Linear Approximation in C
From Theorem 17.1, the linear least squares solution x satisfies – Ax) = 0, or since AT = CTBT
AT(b
CTBTBCx = CTBTb From this we have CCTBTBCx = CCTBTb and since each of (CCT) and (BTB) is a k by k square nonsingular matrix, by pre-multiplying both sides by (BTB)–1(CCT)–1, we get Cx = (BTB)–1BTb Finally, by applying Theorem 17.3, (17.4.1) is obtained and the theorem is proved. 17.4.1
A+ in terms of Gauss factorization
In view of (17.4.1), for A = LU by Gauss factorization, in terms of L and U, we get A+ = UT(UUT)–1(LTL)–1LT
(17.4.2)
and thus the least squares solution x is given by x = UT(UUT)–1(LTL)–1LTb
(17.4.3)
If A is factorized into A = LDU, as in (17.3.3), we get respectively A+ = UT(UUT)–1D–1(LTL)–1LT and x = UT(UUT)–1D–1(LTL)–1LTb For the case k = m < n, in the factorization A = LU, U is an upper triangular matrix and instead of (17.4.2) we get A+ = U–1L+ = U–1(LTL)–1LT For the case k = n < m, L is a lower triangular matrix. We have A+ = U+L–1 = UT(UUT)–1L–1 17.4.2
A+ in terms of Householder’s factorization
For the factorization A = QT of (17.3.5), in view of (17.4.1) and the fact that Q is orthonormal, Q+= QT
© 2008 by Taylor & Francis Group, LLC
Chapter 17: Least Squares and Pseudo-Inverses of Matrices
585
A+ = TT(TTT)–1QT
(17.4.4) and
x = TT(TTT)–1QTb For the case k = m < n, T is an upper triangular matrix and instead of (17.4.4), we get A+ = T–1Q+ = T–1QT Also, from (17.3.6) A+ = WT+R–1Q+ = WR–1QT Hence (17.4.5) 17.5
x = WR–1QTb
The singular value decomposition (SVD)
One of the most important factorizations for an n by m matrix A of rank k, k ≤ min(n, m), is the singular value decomposition, given by (17.5.1)
A = VSWT
V is a real n by n orthonormal matrix, VTV = In, W is a real m by m orthonormal matrix, WTW = Im and S is a n by m diagonal matrix, S = diag(si). [Matrix W in (17.5.1) is different from matrix W in (17.3.6)]. The positive real numbers si, i = 1, 2, …, k are known as the singular values of A. They are often ordered so that s1 ≥ s2 ≥ … ≥ sk > 0. The singular value decomposition of A is closely related to the eigenvalues and eigenvectors of matrix (ATA), where from (17.5.1), (ATA) is decomposed as (17.5.2)
(ATA) = WS2WT
Theorem 17.5 The orthonormal matrix W diagonalizes (ATA) and hence the diagonal elements of S2 are the eigenvalues of (ATA) and are the squares of the singular values of A. Proof: Matrix W is orthonormal. From (17.5.2) © 2008 by Taylor & Francis Group, LLC
586
Numerical Linear Approximation in C
(ATA)W = WS2 If we now write W = [w1, w2, …, wm], where the (wi) are the m columns of W, the above equation becomes (ATA)[w1, w2, …, wm] = [s12w1, s22w2, …, sm2wm] and the theorem is proved. Though, the SVD of matrix A provides information about the eigensystem of (ATA). More-importantly, it provides information that relates directly to matrix A itself; that is, concerning the notion of the (spectral) condition number of A, which is given in terms of the largest and smallest singular values of A. For the n by m matrix A of rank k, k ≤ min(n, m), since V and W are each orthonormal, and hence are of full rank, rank(A) = rank(S) and the SVD in (17.5.1) may be partitioned as T
A = VSW = V
S 11 0
W
T
0 0 where S11 is a k by k diagonal non-singular matrix. The zeros denoted by “0” are zero matrices. In this case, both V and W may be partitioned as follows V = [V1 V2] and W = [W1 W2] where V1 is an n by k orthonormal matrix and V2 is an n by (n – k) zero matrix. W1 is an m by k orthonormal matrix and W2 is an m by (m – k) zero matrix. By post multiplying A by W, we deduce that AW1 = V1S11 and AW2 = 0 from which the n by k matrix V1 provides an orthonormal basis for the range of A (Section 17.7.4). If we take the transpose of A, we get similar relations for AT, namely ATV1 = W1S11 and ATV2 = 0 from which the m by k matrix W1 provides an orthonormal basis for the range of AT. From now on, the SVD given by (17.5.1) is used, where, S, V and W denote respectively S11, V1 and W1. © 2008 by Taylor & Francis Group, LLC
Chapter 17: Least Squares and Pseudo-Inverses of Matrices
17.5.1
587
Spectral condition number of matrix A
Because each of V and W is an orthonormal matrix and S is diagonal, from (17.5.1), the pseudo-inverse A+ = WS–1VT
(17.5.3)
Since V and W are orthonormal, the spectral norms of A and A+ are given respectively by (see Theorem 4.3) ||A||2 = ||S||2 = s1 and ||A+||2 = ||S–1||2 = sk–1 The spectral condition number for A, denoted by K is defined as K = ||A||2 ||A+||2 = [s1]/[sk]
(17.5.4)
The condition number of a matrix defines the condition of the matrix with respect to a computing problem. When the condition number is very large, in most cases, sk is very small. As a result, the round-off error, or a small variation of vector b and/or of matrix A, will affect the accuracy of the obtained solution x of Ax = b. The resulting error in the solution x is proportional to the condition number of the matrix. See Section 4.2.8 and also [2]. The least squares solution of the system Ax = b, by the singular value decomposition, where rank(A) = k ≤ min(m, n) is given by x = A+b, or x = WS–1VTb 17.5.2
Main properties of the pseudo-inverse A+
Theorem 17.6 A+ satisfies the following relations (i) (ii) (iii) (iv)
A+AA+ AA+A (AA+)T (A+A)T
= = = =
A+ A (AA+) (A+A)
The proof of each of these four equalities follow easily from the definition of A and A+ in (17.5.1) and (17.5.3) respectively. If the n by m matrix A is of full rank, i.e., rank(A) = k = min(n, m), then: © 2008 by Taylor & Francis Group, LLC
588
(i) (ii)
Numerical Linear Approximation in C
If k = m < n, A+ = (ATA)–1AT, A+A = Ik If k = n < m, A+ = AT(AAT)–1, AA+ = Ik
where Ik is a k-unit matrix. Finally, if A is a square nonsingular matrix A+ = A–1 the ordinary inverse of A. 17.6
Practical considerations in computing
When writing program code, a sequence of intermediate calculations are performed in computing a final solution. The following sections describe some practical considerations in this regard. Assume that A and b are real. 17.6.1
Cholesky’s decomposition
Given a real k by k symmetric positive definite matrix B, Cholesky's decomposition method decomposes B into B = LLT where L is a k by k lower triangular matrix. As it is assumed that rank(B) = k, the Cholesky's decomposition will not break down in the process of the decomposition of B. See for example, Lau ([14], p. 101). 17.6.2
Solution of the normal equation
To calculate the least squares solution of Ax = b from the normal equation ATAx = ATb (equation (17.2.3)), the following practical steps are taken. We assume that the n by m matrix A is of rank m. Then the m by m matrix (ATA) is symmetric positive definite and may be decomposed by Cholesky's decomposition into ATA = GGT, where G is m by m lower triangular. The normal equation becomes GGTx = ATb We write u = ATb, y = GTx and we have Gy = u. Then y is obtained by forward substitution and from GTx = y, x is obtained by backward substitution. Spath ([22], pp. 22-24) presented a FORTRAN routine for this method and stated that it is very fast. However, if the columns of
© 2008 by Taylor & Francis Group, LLC
Chapter 17: Least Squares and Pseudo-Inverses of Matrices
589
matrix A are nearly linearly dependent, the obtained solution x would not be accurate. 17.6.3
Solution via Gauss LU factorization method
To calculate the least squares solution x of Ax = b, given by x = UT(UUT)–1(LTL)–1LTb as in (17.4.3), we calculate the intermediate vectors u, y and z first, as follows (assuming that matrix A is of rank k) (17.6.1) u = LTb, (LTL)y = u, (UUT)z = y and x = UTz Each of LTL and UUT is a positive definite k by k symmetric matrix. They may be decomposed by Cholesky's decomposition into (LTL) = YYT and (UUT) = ZZT, where each of Y and Z is a k by k lower triangular matrix. To calculate y and z we calculate the intermediate vectors x1 and x2 first as follows. The solution of (LTL)y = u is done in two steps, by solving the two triangular systems Yx1 = u and YTy = x1. The solution of (UUT)z = y is given by solving the two triangular systems Zx2 = y and ZTz = x2. Triangular systems are solved by backward or by forward substitutions, depending on the triangular matrix at hand. 17.6.4
Solution via Householder’s method
The solution x of Ax = b by the Householder’s method x = WR–1QTb as given by (17.4.5) is obtained by calculating the intermediate vectors u and v, where (17.6.2)
u = QTb, Rv = u and x = Wv
We note that Rv = u is a triangular system and the elements of v are calculated by back substitution. 17.6.5
Calculation of A+
To calculate the m by n pseudo-matrix A+, we calculate its columns x1, x2, …, xn. That by taking in (17.6.1) (or in (17.6.2)), © 2008 by Taylor & Francis Group, LLC
590
Numerical Linear Approximation in C
b = e1, e2, …, en, in succession, where ei is the ith column of the unit matrix In. The proof follows the proof given in Section 4.5.4, for calculating the matrix inverse A–1. 17.6.6
Influence of the round-off error
In the presence of round-off error, the computed parameters will differ slightly from the actual (correct) parameters. Equations (17.3.1) and (17.3.4) may be given by (A is assumed of rank k) (17.6.3)
A + F1 = L1U1 and A + G1 = Q1T1
where F1 and G1 are two error matrices. Also, the matrices on the r.h.s. of each equation are not the matrices that correspond to the exact factorizations of A. Expected zero elements in the last (m – k) rows of U1 and of T1 are small numbers. Let H denote either F1 or G1. The singular values si of matrices A and (A + H), i = 1, 2, …, are related by the following inequalities ([27], p. 102 - see also Theorem 4.8) si(A) – ||H||2 ≤ si(A + H) ≤ si(A) + ||H||2 In particular, for i = k, |sk(A + H) – sk(A)| ≤ ||H||2 and for i = k + 1, since sk+1(A) = 0, sk+1(A + H) ≤ ||H||2. When H is a perturbation (slight error) matrix, sk(A) >> ||H||2, sk(A + H) ≈ sk(A) and sk+1(A + H) << sk(A), we have a well division line in each of the factorizations (17.6.3) between the very small numbers and the other numbers in U1 and T1. When these small numbers are detected and discarded, instead of (17.6.3), we shall have (17.6.3a)
A + F = L1U1 and A + G = Q1T1
where ||F|| < ||F1|| and ||G|| < ||G1||. Let dA, db and dx be perturbation terms. Then the computed solution of Ax = b will satisfy (A + dA)(x + dx) = (b + db) For error analysis for the Gauss LU factorization method and for the Householder’s method, the reader is referred to a detailed analysis in [2].
© 2008 by Taylor & Francis Group, LLC
Chapter 17: Least Squares and Pseudo-Inverses of Matrices
17.7
591
Linear spaces and the pseudo-inverses
The simple idea of vector spaces, known as linear spaces, is presented. The null and range spaces of a general n by m matrix A are introduced. Finally, the orthogonal projection operators onto the range and null spaces of A and of the pseudo-inverse A+ are derived. The following definitions are needed to introduce this subject 17.7.1
Definitions, notations and related theorems
A linear combination of vectors: A vector x is said to be a linear combination of vectors x1, x2, …, xk, if it can be written in the form x = c1x1 + c2x2 + … + ckxk where the ci are scalars. Obviously, vectors x and (xi) are of the same dimensions. A vector space (or a linear space): denoted by V, is the collection of all vectors that are closed under the operations of addition and multiplication by a scalar. That is, if vectors x1 and x2 belong to V, so do vectors (x1 + x2), cx1 and dx2, where c and d are scalars. The vectors –x1, –x2 and the zero vector 0 also belong to V. Symbolically we write x∈V meaning x is a vector in V or x belongs to V. Vectors x1, x2, …, in V are also known as points in the space V. Let x1, x2, …, xk, be m-dimensional vectors. Then all possible combinations of this set of vectors form a vector space V. Spanning of a vector space: If a vector space consists of all linear combinations of a set of vectors x1, x2, …, xk, this set of vectors is said to span the linear space. Linear dependence and linear independence: Let x1, x2, …, xj, be a set of vectors. Then these vectors are linearly dependent if there exist scalars c1, c2, …, cj, not all 0’s such that c1x1 + c2x2 + … + cjxj = 0 If this equality is satisfied only when all the ci are 0’s, the given set of vectors are linearly independent.
© 2008 by Taylor & Francis Group, LLC
592
Numerical Linear Approximation in C
If x1, x2, …, xk, span a vector space and one or more of the xi is linearly dependent on the others, then the vector space is spanned by the given set of vectors after omitting the dependent vectors. Basis: Given a set of vectors x1, x2, …, xk, they form a basis for the vector space if they are linearly independent and they span the vector space. Theorem 17.7 Every vector in a vector space may be expressed uniquely as a linear combination of a given basis. Dimension of a linear space: The dimension of a linear space equals the number of vectors in its basis. If a space has a basis consisting of a finite number of vectors, the space is known to be of finite dimensions. Theorem 17.8 The maximum number of linearly independent m-dimensional vectors in the linear space V is m. In this case, we denote V by Vm. 17.7.2
Subspaces and their dimensions
Theorem 17.9 Let Vm be a vector space of dimension m and let x1, x2, …, xr, r < m, be linearly independent vectors in Vm. Then there exist vectors xr+1, xr+2, …, xm such that x1, x2, …, xr, xr+1, …, xm, form a basis for Vm. Let x1, x2, …, xr, span the space U and xr+1, xr+2, …, xm span the space W. Subspaces: Each of U and W is called a subspace of the linear space Vm. The dimension of U is r and the dimension of W is (m – r). Every element of U is an element of Vm, but the converse is not necessarily true. It is said that U is contained in Vm or Vm contains U. This is symbolized by U ⊆ Vm or Vm ⊇ U. A proper subspace: If U ≠ Vm, we write U ⊂ Vm or Vm ⊃ U and U is called a proper subspace of Vm. The same is said about the subspace W. Direct sum of subspaces: In Theorem 17.9, we say that Vm is the © 2008 by Taylor & Francis Group, LLC
Chapter 17: Least Squares and Pseudo-Inverses of Matrices
593
direct sum of U and W. This is symbolized by Vm = U ⊕ W In this case, the following conditions are satisfied: (i) For every vector x ∈ Vm, there exists x1∈ U and x2 ∈ W such that x = x1 + x2, and (ii) If x ∈ U and x ∈ W, then x = 0. This condition indicates that the only vector common to U and W is the zero vector. Complements of subspaces: If Vm = U ⊕ W, it is said that U and W are complements of each other. Inner product spaces: If in a vector space V, the inner product of two of its vectors satisfies: (i) (x, cy+dz) = c(x, y) + d(x, z), (ii) (x, y) = (y, x), and (iii) (x, x) > 0, if x ≠ 0, where c and d are scalars, V is known as an inner product space. Orthogonality: If q1 and q2 are two vectors in an inner product vector space V, then q1 and q2 are orthogonal if the inner product (q1, q2) = 0. Orthogonal set of vectors: If q1, q2, …, qk is a set of vectors in V, then they form an orthogonal set if (qi, qj) = 0, i ≠ j. This set of vectors are also known as orthonormal set if ||qi||2 = 1, i = 1, 2, …, k. Theorem 17.10 Let V be an inner product space and q1, q2, …, qk form an orthonormal set in V. Then assuming this set is real: (i) q1, q2, …, qk are linearly independent, (ii) k ≤ the dimension of V, and (iii) if a vector x is a linear combination of this set, then k
k
x =
∑ i=1
( q i ,x )q i =
T
∑ qi qi x i=1
Proof: The proof of (i) and (ii) is immediate since any qi cannot be a linear combination of the rest. To prove (iii), we write x in the form
© 2008 by Taylor & Francis Group, LLC
594
Numerical Linear Approximation in C
x = c1q1 + c2q2 + … + ckqk Then by taking the inner products with qi, i = 1, 2, …, k, ci = (qi, x) = qiTx and the theorem is proved. 17.7.3
Gram-Schmidt orthogonalization
Theorem 17.11 (i)
Given a set of linearly independent vectors a1, a2, …, ak, in an inner product space Vm, k ≤ m, the Gram-Schmidt orthogonalization algorithm constructs a set of orthonormal vectors q1, q2, …, qk such that for each i ≤ k, qi is a linear combination of a1, a2, …, ai. Every m-dimensional inner product space Vm has an orthonormal basis.
(ii) Proof:
We start by choosing q1 in the direction of a1 and write t11q1 = a1 t11 is a normalization factor, chosen such that ||q1|| = 1, thus t11 = ||a1||. That is q1 = a1/||a1|| Next, we write q2 as a linear combination of a1 and a2, or in other words, in terms of q1 and a2. We write t22q2 = a2 – t12q1 We calculate t12 such that q2 is orthogonal to q1. Also, t22 is a normalization factor. By taking the inner product of this equation with q1, we get (q1, t22q2) = 0 = (q1, a2) – t12(q1, q1) Hence, since (q1, q1) = 1, t12 = (q1, a2) and t22 = ||a2 –t12q1||. In general, for i = 2, 3, …, k, the vectors qi are given by i–1
t ii q i = a i –
∑ tji qj ,
j=1
© 2008 by Taylor & Francis Group, LLC
2≤i≤m
Chapter 17: Least Squares and Pseudo-Inverses of Matrices
595
where tji = (qj, ai), j = 1, 2, …, i – 1 and i–1
t ii = a i –
∑ tji qj
j=1
The proof of (ii) follows immediately. The algorithm described above is in fact a factorization method. Let the vectors a1, a2, …, ak be the columns of the m by k matrix A and the vectors q1, q2, …, qk be the columns of the m by k matrix Q. Then we have A = QT, where QTQ = Ik and T an upper triangular k by k matrix whose elements are the tji parameters. As noted earlier, the Gram-Schmidt QT factorization of A gives the QR factorization of the un-permuted matrix A, as does the Householder's factorization method, apart possibly from numerical signs. The difference also is in the accuracies of the calculated parameters, as indicated at the end of Section 17.3.4. Orthonormal basis: If further q1, q2, …, qk form the basis of a vector space V, then the vector space is spanned by an orthonormal basis. The Euclidean space: An m-dimensional Euclidean space denoted by Em is an inner product space associated with any two vectors x and y in Em. A non-negative number called the distance between x and y is given by ||x – y|| = [(x – y), (x – y)]1/2 The familiar 2 and 3 dimensional geometric spaces are Euclidean spaces. Let the 3 dimensional vectors x1, x2 and x3, be linearly independent. Then they span E3. The two vectors x1 and x2 will span a plane passing through the origin. This plane is a subspace of E3. Likewise the vector x3 spans a straight line through the origin. This line is also a subspace of E3. The vectors e1, e2 and e3, where ei is the ith column of the unit matrix I3 may be taken as an orthonormal basis for E3. Orthogonal complements: If, in Section 17.7.2, for every vector u ∈ U and for every vector w ∈ W, the inner product (u, w) = 0, U and W are known as orthogonal complements of each other. In this case, we write © 2008 by Taylor & Francis Group, LLC
596
Numerical Linear Approximation in C
W = U⊥ or U = W⊥ and also Vm = U ⊕ U⊥ and Vm = W ⊕ W⊥ Null space of a matrix A: Consider the solution of the system of linear equations Ax = 0, where A is an m by m matrix and x is an m-vector. It is known that if rank(A) < m, Ax = 0 has an infinite number of solutions (Theorem 4.14). Theorem 17.12 The set of all solutions x of Ax = 0 is a vector space. Proof: If the nonzero vectors x1 and x2 are two solutions to Ax = 0, i.e., Ax1 = 0 and Ax2 = 0, then A(cx1 + dx2) = 0. Thus (cx1 + dx2) is also a solution to Ax = 0. All such solutions with different values of the scalars c and d constitute a vector space. The vector space containing all the solutions x of Ax = 0, is known as the null space of A and is denoted by N(A). Symbolically we write N(A) = {x ∈ Em | Ax = 0} 17.7.4
Range spaces of A and AT and their orthogonal complements
The following definitions apply to range spaces and their orthogonal complements: (1) Range space of a matrix A: Let A be a general n by m matrix of rank k. Then the vector space spanned by the columns of A is known as the column space or the range space of A and is denoted by R(A). Obviously, R(A) is a subspace of the Euclidean space En and is of dimension k. Symbolically we write R(A) = {y ∈ En | y = Ax, x ∈ Em} The space of all vectors that are orthogonal to the columns of A is the orthogonal complement of R(A) and is denoted by R(A)⊥. © 2008 by Taylor & Francis Group, LLC
Chapter 17: Least Squares and Pseudo-Inverses of Matrices
597
Let us recall from Section 17.3 that a general n by m matrix of rank k, k ≤ min(n, m) may be factorized in different forms. In the orthogonal factorization of equation (17.3.4), we get (A denotes the permuted A) (17.7.1)
A = Q1T1
where Q1 is an n by n orthonormal and T1 is an n by m matrix whose first k rows form an upper trapezoidal and its last (m – k) rows consist of zero elements. When the last (n – k) columns of Q1 and the last (m – k) rows of T1 are discarded, we get (17.7.2)
A = QT
where Q is an n by k orthonormal matrix and T is a k by m upper trapezoidal matrix. Factorization (17.7.2) indicates that every column ai of matrix A may be written as a linear combination of the k columns qi of Q. That is, ai = Σj tjiqj, where tji are the elements of matrix T. Hence, the k columns of Q span R(A) and may be taken as its basis. In the singular value decomposition of Section 17.5 for matrix A of rank k, we have (17.7.3)
(2)
(3)
A = VSWT
The columns of V span R(A) and may be taken as another basis for R(A). The orthogonal complement of R(A): In (17.7.1) the first k columns of Q1 form matrix Q of (17.7.2) and span R(A). Then since Q1 is orthonormal, the last (n – k) columns of Q1 that form say Q2, span R(A)⊥; the orthogonal complement of R(A). Range space of matrix AT: We note that by taking the transpose of (17.7.3) for instance, the columns of W span R(AT).
Let us now revisit a variation of Theorem 4.13. Let A be an n by m matrix and b be an n-vector. Then [17]: (i) The system of linear equations Ax = b has a solution if and only if rank(A|b) = rank(A).
© 2008 by Taylor & Francis Group, LLC
598
Numerical Linear Approximation in C
(ii) (iii)
If rank(A|b) = rank(A) = m, the solution is unique. If rank(A|b) = rank(A) < m, the solution is not unique.
We may now restate (i), (ii) and (iii) of this theorem as follows: The system Ax = b has a solution if and only if b ∈ R(A), If b ∈ R(A), and the columns of A form a basis for R(A), i.e., they are linearly independent, the solution is unique, and (iii) If b ∈ R(A), and the columns of A are linearly dependent, the solution is not unique.
(i) (ii)
We may also state the following theorem. Theorem 17.13 Assuming that we are dealing with real matrices. Then: (i) N(AT) = R(A)⊥ (ii) N(A) = R(AT)⊥ As a consequence, we have: (iii) En = R(A) + R(A)⊥ (iv) Em = R(AT) + R(AT)⊥ Proof: Let Q1 in (17.7.1) be written as Q1 = [Q|Q2] where the columns of Q span R(A), the columns of Q2 span R(A)⊥ and QTQ2 = 0. To prove (i), let x ∈ R(A)⊥; then there exists a vector y such that x = Q2y. Hence from (17.7.2), ATx = TTQTQ2y = 0 and thus x ∈ N(AT). Therefore, N(AT) ⊆ R(A)⊥. On the other hand, if x ∈ N(AT), TTQTx = 0 and x is orthogonal to the columns of Q. That is, x may be written as x = Q2y. Hence, x ∈ R(A)⊥ and therefore, R(A)⊥ ⊆ N(AT). We conclude that R(A)⊥ = N(AT). In the same way we prove (ii). Then (iii) and (iv) follow. 17.7.5
Representation of vectors in Vm
It is shown in Theorem 17.10 that if Vm is an inner product space of dimension m and the m-vectors q1, q2, …, qm form an orthonormal basis for Vm, then any vector x ∈ Vm may be expressed as a linear © 2008 by Taylor & Francis Group, LLC
Chapter 17: Least Squares and Pseudo-Inverses of Matrices
599
combination of this basis, in the form m
m
x =
∑
( q i, x )q i =
T
∑ qi qi x i=1
i=1
Hence, if Q is an m by m matrix that has the qi as its columns, we have the alternative expression to this summation as x = QQTx 17.7.6
Orthogonal projection onto range and null spaces
Let Vm be expressed as the sum of two spaces U and W; that is, Vm = U ⊕ W. Let q1, q2, …, qk span the subspace U ⊆ Vm and qk+1, …, qm, span the subspace W ⊆ Vm. Hence, if x ∈ Vm, then we may write k
x =
m
∑ αi qi + ∑ i=1
αi qi = x1 + x2
i = k+1
Then x1 ∈ U and x2 ∈ W. Theorem 17.14 Let Q1 be the m by k matrix whose columns are q1, q2, …, qk and Q2 be the m by (m – k) matrix whose columns are qk+1, …, qm. Let also Im be an m-unit matrix. Then: (i) Q1Q1T = (Im – Q2Q2T) (ii) Q2Q2T = (Im – Q1Q1T) (iii) x1 = Q1Q1Tx = (Im – Q2Q2T)x (iv) x2 = Q2Q2Tx = (Im – Q1Q1T)x Proof: Since Q is an m by m orthonormal, QTQ = QQT = Im and we have Im = Q1Q1T + Q2Q2T and the proof follows. Definition 17.1 We define Q1Q1T as the orthogonal projection operator that projects any vector x ∈ Vm onto the space U. Likewise let Q2Q2T be the orthogonal projection operator that projects any vector x ∈ Vm © 2008 by Taylor & Francis Group, LLC
600
Numerical Linear Approximation in C
onto the space W. This notion is continued as follows. Orthogonal projection operators onto R(A) and R(AT) From (17.7.3), A = VSWT and thus A+ = WS–1VT from which AA+ = VVT and A+A = WWT Theorem 17.15 AA+, (In – AA+), A+A and (Im – A+A) or respectively VVT, (In – VVT), WWT and (Im – WWT) are (i) (ii)
Orthogonal projector operators, They project any vector x onto R(A), R(A)⊥, R(AT) and R(AT)⊥ respectively. Consider AA+A = (AA+)A= (VVT)A = A
This is interpreted as the columns of A belong to the range space of A, which is a trivial result. Similarly A+AA+ = (A+A)A+ = (WWT)A+ = A+ is interpreted as the columns of A+ belong to the range space of A+. The other two operators are interpreted in the same way. Theorem 17.16 Let Ax = b, and b be resolved into two vectors b = b1 + b2, where b1 ∈ R(A) and b2 ∈ R(A)⊥. Then the least squares solution x is the exact solution of Ax = b1, and the residual r = b2. Proof: The least squares solution vector x of Ax = b is given by x = A+b and by pre-multiplying by A, we get Ax = AA+b and from the previous theorem, the right hand side is the projection of b onto R(A), i.e., = b1. Then b2 = b – b1 = (In – AA+)b = b – Ax = r and the theorem is proved, emphasizing that r ∈ R(A)⊥.
© 2008 by Taylor & Francis Group, LLC
Chapter 17: Least Squares and Pseudo-Inverses of Matrices
17.7.7
601
Singular values of the orthogonal projection matrices
Theorem 17.17 Assuming that A is of rank k, the singular values of AA+ are k 1’s and (n – k) 0’s. The singular values of (In – AA+) are (n – k) 1’s and k 0’s. Similar results are for the two other operators A+A, and (Im – A+A). Proof: From Theorem 4.10 (Wilkinson ([27], p. 54), given C as an m by k matrix and D as a k by m matrix, m ≥ k, CD and DC have k identical eigenvalues and the remaining (m – k) eigenvalues of CD are 0’s. The proof of the theorem follows from the fact that the eigenvalues of AA+ are themselves their singular values and by observing that AA+ = VVT and that VTV = Ik. Corollary ||AA+||2 = 1, ||In – AA+||2 = 1, etc. Let us now consider the interesting subject of the ill-conditioning of the coefficient matrix A in the matrix equation Ax = b. 17.8
Multicollinearity, collinearity or the ill-conditioning of matrix A
In analyzing their data, statisticians build a linear model of the form Ax = b A is a given n by m matrix containing the data of the problem, where in most cases, n > m, and b is a given observation n-vector. The least squares solution of this equation is mostly obtained by forming the normal equation ATAx = ATb In case matrix A is of rank m, ATA is non-singular and the least squares solution is given by x = (ATA)–1ATb
© 2008 by Taylor & Francis Group, LLC
602
Numerical Linear Approximation in C
When matrix A is well conditioned, its columns are almost orthogonal. However, this is not the case in most applications. In many problems the lack of orthogonality is not serious enough to affect the analysis of the result. The state of strong non-orthogonality is referred to as the problem of collinear data, also known as multicollinearity, and is a condition of deficient matrix A. The calculated elements of x are very sensitive to perturbations (slight errors) in the data. Hence, it is important to know when multicollinearity occurs and how to remedy it. Let us consider the following example by Chatterjee and Price ([6], pp. 144-151). Example 17.1 The congress of the United Sates ordered a survey concerning the lack of availability of equal educational opportunities for students. Data were collected from 70 schools across the country. A linear model of the form Ax = b is constructed with the solution vector x to be calculated. Vector b is the measure of the level of student achievement. Matrix A consists of 4 columns; a1, a2, a3 and a4. Column a1 is a constant column of 1’s, a2 is a measure of the school facilities, a3 consists of measures of home environments and column a4 is a measure of the school credentials. Equation Ax = b, may written as x11 + x2a2 + x3a3 + x4a4 = b The results of this equation were disappointing. Some of the calculated coefficients had negative values, which was not expected. That is besides the failed statistical tests carried out by the analysts. These results indicate the existence of extreme multicollinearity between the last three columns of A. There exist simple relationships between every pair of the three columns a2, a3 and a4. It is concluded that these three columns may be replaced by only one of them. In Section 17.7.1, we defined linear dependence and linear independence of a given set of vectors. We define that here again for the columns of a given matrix A. Definition 17.2 Let a1, a2, …, am denote the m columns of matrix A. Then these
© 2008 by Taylor & Francis Group, LLC
Chapter 17: Least Squares and Pseudo-Inverses of Matrices
603
columns are linearly dependent if there exist a vector c of constants whose elements are c1, c2, …, cm, not all 0’s, such that m
∑ ci ai
= 0
i=1
Definition 17.3 Collinearity exists among the columns of matrix A if for some specified small constant ρ > 0 m
∑ ci ai
= d,
d ≤ρ c
i=1
where c is defined above. If definition 17.2 holds for a subset of the columns of A, then rank(A) < m and (ATA) is singular and its inverse does not exist. However, if definition 17.3 holds for a subset of the columns of A, we have near linear dependency among these columns and collinearity exists as matrix (ATA) is ill-conditioned. 17.8.1
Sources of multicollinearity
Montgomery and Peck ([16], p. 306) cited two primary sources of multicollinearity. The first is an inappropriate choice of columns of matrix A, thus causing ill-conditioning of (ATA). They also suggested how to remedy this situation. For example, if columns a1, a2 and a3 of A are nearly linearly dependent, one might replace the three columns by one column whose ith element is the product of the ith elements of the three columns. This preserves the information given by the three original columns. This method is a kind of re-specifying the given problem. Another technique of re-specifying the problem is to eliminate one of the columns a1, a2 and a3. Such techniques might not be totally satisfactory. The second source of multicollinearity, is the improbably-defined model, which means that there are not enough data points. In this case, the usual approach is to increase the data points (increase n) or eliminate some of the columns of matrix A (decrease m). © 2008 by Taylor & Francis Group, LLC
604
17.8.2
Numerical Linear Approximation in C
Detection of multicollinearity
Multicollinearity may be detected when there occurs instability in the elements of the solution vector x. Large changes occur in the elements of x when a variable is added or is deleted in search of a better matrix equation, i.e., when a column is added to or a column is deleted from matrix A. A second indication of multicollinearity is large changes in the elements of x when a data point is added or is dropped; i.e., when n is increased or decreased by one. A third indication is the numerical signs and/or the magnitude of the elements of x that are not consistent with the physical problem. A calculated number of students in a school, for example, is expected to be a positive number. A statistical detection of the presence of multicollinearity is the size of correlation coefficients (measure of dependence) that exist between the columns of matrix A. A large correlation between two of the columns indicates a strong linear relationship between those two columns. More reliable means are needed to detect the existence of multicollinearity. One of the powerful means is by calculating the singular values of matrix (ATA), where we get the factorization (ATA) = WS2WT(equations (17.5.2)). S2 is a diagonal matrix, whose elements s12 ≥ s22 ≥ … ≥ sm2 are the singular values of (ATA). If one or more of the singular values are very small, that implies the near linear dependency among a subset of the columns of matrix A. However, some authors argue by saying, we are not informed what “small” is and there is a natural tendency to compare “small” to 0. The condition number of matrix (ATA); κ = s12/sm2, is a better indication of the presence of multicollinearity. Montgomery and Peck ([16], p. 319) argue saying, if this condition number is < 100, there is no serious problem with multicollinearity. Yet, if this condition number is between 100 and 1000, moderate to strong multicollinearity exists and if it exceeds 1000, severe multicollinearity exists. They also defined the condition indices of (ATA) as κi = s12/si2, i = 1, 2, …, m
© 2008 by Taylor & Francis Group, LLC
Chapter 17: Least Squares and Pseudo-Inverses of Matrices
605
Example 17.2 Montgomery and Peck ([16], pp. 319) recorded the condition indices of (ATA) for an example of m = 9 as κ1 = 1, …, κ6 = 84.96, κ7 = 309.18, κ8 = 824.47 and κ9 = 42,048. Since one of the condition indices exceed 1000, it is concluded that there exists strong near dependency in the data of this example. Existence of multicollinearity may also be detected by standard statistical means. In the following sections, we describe three non-statistical methods to combat multicollinearity; principal components analysis (PCA), partial least squares (PLS) and the Ridge equation. 17.9
Principal components analysis (PCA)
Principal components analysis is a method that rewrites matrix A in the equation Ax = b in terms of a set of orthogonal columns. These new columns are obtained as linear combinations of the columns of A. They are known as the principal components of the columns of A. 17.9.1
Derivation of the principal components
Consider the equations Ax = b. The principal components are realized by the help of the singular value decomposition of the n by m matrix A of rank m ([6], pp. 172-174). From (17.5.1) A = VSWT From this we write (17.9.1)
VS = AW = (a1, a2, …, am)W
Let us write VS = V whose columns are (v1, v2, …, vm) and the diagonal elements of S be (s1, s2, …, sm). The columns of vector V are orthogonal, (vi, vi) = si2 and (vi, vj) = 0, i ≠ j. The columns vi are referred to as the “principal components”. Equation Ax = b may be rewritten in terms of the principal components as follows. Since WWT = Im, Ax = b is AWWTx = b Let © 2008 by Taylor & Francis Group, LLC
606
Numerical Linear Approximation in C
y = WTx
(17.9.2) then from (17.9.1), we get (17.9.3)
VSy = b
Thus (17.9.4)
m
–1 T
y = S V b or y =
–1 T vi bi
∑ si i=1
Equation (17.9.3) is a re-presentation of equation Ax = b in terms of the orthogonal columns of V = VS. Also, from (17.9.1), the ith principal component vi = visi, i = 1, 2, …, m, is a linear function of the columns aj, namely m
∑ wij aj
vi si =
j=1
It follows that when si = 0, the l.h.s. of the above equation is 0. From definition 17.2 linear dependences exists among the columns of A. If si is very small, from definition 17.3, there is an approximate linear dependence among the given columns of A. Suppose that the first k singular values of A are significant and the last (m – k) are either 0’s or are considered 0’s. The last (m – k) principal components are then considered 0’s. The solution of (17.9.4) will instead be k
y =
–1 T vi bi
∑ si i=1
and from (17.9.2) x = Wy, or k
x =
–1
∑ si
T
wi vi bi
i=1
where (wi) are the columns of matrix W. The principal components analysis may be used as a linear dimensionality reduction technique. It determines a set of orthogonal vectors, which are the principal components, ordered by the largest © 2008 by Taylor & Francis Group, LLC
Chapter 17: Least Squares and Pseudo-Inverses of Matrices
607
singular values which describe most of the state of the problem. The portion of the PCA space corresponding to the smaller singular values describe the random noise. By properly determining the number of principal components, to maintain the PCA model, the system can be decoupled from the random noise. The principal components, corresponding to the large singular values of (ATA) are retained and the smallest singular values are set equal to 0’s. To illustrate this point, consider the following example. Example 17.3 Data was collected by Cheng et al. ([7], p. 107) for pattern analysis in industry and was used by Russell et al. ([20], pp. 36, 37). For this data, the n by m matrix A in the equation Ax = b is a 50 by 4 matrix. The 4 by 4 matrix (ATA) is decomposed by the singular value decomposition into (ATA) = WS2WT, where the elements of S2 are the singular values of (ATA). The singular values of (ATA) are (1.92, 0.96, 0.88, 0.24) and their sum = 4. If the first two columns of the principal components matrix V are retained, the ratio of the sum of their singular values to the sum of the total singular values is (1.92 + 0.96)/4 = 72%, contributed to this problem. That means dimension reduction to the problem with advantages explained by Russell et al. ([20], p. 37). An alternative to the principal components analysis is the partial least squares (PLS) technique, presented next. 17.10 Partial least squares method (PLS) Given is the linear model or the overdetermined matrix equation Ax = b. Like the PCA method, the partial least squares (PLS) method is a dimensionality reduction technique. It is particularly appealing because the calculation is performed on the given data A (not on ATA). This makes it suitable for large problems. There is a number of variations of the algorithm for this method. The most popular one is known as the non-iterative partial least squares (NIPALS). It is described by Geladi and Kowalski [8] and by Hoskuldsson [13]. See also Wold et al. [28] and Russell et al. [20]. The NIPALS does not calculate all the principal components at once. In the matrix equation Ax = b, there might be more than one right © 2008 by Taylor & Francis Group, LLC
608
Numerical Linear Approximation in C
hand side vector b; b1, b2, …, br, and in this case, there will be more than one solution vector x; x1, x2, …, xr. Then we may write Ax = b in the form (17.10.1)
AX = B
where B = [b1, b2, …, br] is an n by r matrix and X = [x1, x2, …, xr] is an m by r matrix. It is important that A and B be mean-centered and scaled [8]. Matrix A may be given in terms of the sum of m matrices, each is of rank 1. A = t1p1T+ t2p2T + … + tmpmT Each of the tspsT is an outer product of ts (known as score vector) and ps (known as loading vector). We may write A as k
T
A = TP + E =
T
∑ tj pj
+E
j=1
where T = [t1, t2, …, tk] and P = [p1, p2, …, pk]. E is a residual matrix. Similarly, B in (17.10.1) may be decomposed into a score matrix U and a loading matrix Q added to a residual matrix F* T
k
B = UQ + F* =
T
∑ uj pj
+ F*
j=1
Let k be the rank of matrix A. If k is set equal to min(m, n), then E and F* are 0’s and the PLS reduces to the ordinary least squares solution. Setting k < min(m, n) reduces noise and colliniarity. The aim of using the PLS is to describe matrix A in terms of a smaller number of the tjpjT components. The aim of the following analysis is to get a useful relation between matrices A and B. The simplest relation is via taking uh = bhth where bh =
uhTth/||th||.
© 2008 by Taylor & Francis Group, LLC
Chapter 17: Least Squares and Pseudo-Inverses of Matrices
609
17.10.1 Model building for the PLS method The PLS model may be considered as consisting of outer products of matrices in A and in B and inner relations linking such matrices. A simplified model is as follows: For matrix A, calculating the outer product matrices is described in 5 steps, below. To start with, one calculates t1 and p1T from matrix A. Then t1p1T is subtracted from A and the residual E1 is calculated. Then E1 is used to calculate t2 and p2T and so on. Thus E1 = A – t1p1T, E2 = E1 – t2p2T, …, Eh = Eh–1 – thphT, … For matrix A, the algorithm is: (1) tstart = a column aj of A (2) pold = ATt/||t|| (=ATu/||u||) (3) Scale p to be of length one; pnew = pold/||pold||, p = pnew (4) t = Ap/||p|| (5) Compare t used in step (2) with t obtained in step (4). If the difference is of the order of rounding error, the iteration has converged. If not go to step (2). For matrix B, the procedure is similar, namely (1) ustart = a column bj of B (2) qold = BTu/||u|| (= BTt/||t||) (3) Scale q to be of length one; qnew = qold/||qold||, q = qnew (4) u = Bq/||q|| (5) Compare u used in step (2) with u obtained in step (4). If the difference is of the order of rounding error, the iteration has converged. If not go to step (2). The above two separate algorithms are applied to A and B respectively. However, in order that each can get information about the other, let t and u change places in step (2) and the two algorithms would be written in sequence. The following algorithm is due to Geladi and Kowalski [8] and also to Hoskuldsson [13]. For each major iteration (1) ustart = a column bj of B For matrix A (2) w = ATu/||u|| (3) Normalize w, wnew = wold/||wold|| © 2008 by Taylor & Francis Group, LLC
610
(4)
Numerical Linear Approximation in C
t = Aw/||w||
For matrix B (5) q = BTt/||t|| (6) Normalize q; qnew = qold/||qold|| (7) u = Bq/||q|| (8) Compare t in step (4) with the one in the previous iteration. If the difference is of the order of rounding error, the iteration has converged and go to step (9). If not, go to step (2). If B consists of one column only, steps (5)-(8) can be omitted and no more iterations are required. (9) p = ATt/||t|| (10) Normalize p; pnew = pold/||pold|| (11) tnew = told||pold|| (12) wnew = wold||pold|| Obtain the regression coefficient b for the inner relation (13) b = uTt/||t|| The residual for the A matrix (block) for component h is Eh = Eh–1 – th–1ph–1T, E0 = A and for the B matrix (block), for component h is Fh = Fh–1 – bh–1th–1qh–1T, F0 = B For the next major iteration, meaning for the next component, go to step (1). However, after the first component, A is replaced by Eh and B is replaced Fh. The iterations continue until a stopping criteria is used or Eh becomes the zero matrix. The following analysis is due to Hoskuldsson [13]. Using steps (7), (5), (4) and (2) above in succession, we get uh = Bqh/||qh|| = BBTth/[||qh|| ||th||] = BBTAwh/[||qh|| ||th|| ||w||] = BBTAATuh–1/[||qh|| ||th|| ||wh|| ||uh–1||] These equations show that the algorithm described above performs like calculating the largest eigenvalue of a matrix by the power point ([12], pp. 330-332). In most practical cases, convergence
© 2008 by Taylor & Francis Group, LLC
Chapter 17: Least Squares and Pseudo-Inverses of Matrices
611
is obtained in less than ten iterations, unless there are equal eigenvalues that are the largest. Similar equations are derived for qh, th and wh. At convergence we may write BBTAATu = au BTAATB q = aq AATBBTt = at ATBBTAw = aw where a is the largest eigenvalue and vectors u, q, t and w are eigenvectors of the appropriate matrices corresponding to the largest eigenvalue. The next equation relates a residual matrix to its previous residual matrix Eh = Eh–1 – th–1ph–1T = Eh–1 – th–1t h–1TEh–1/||th–1|| = [I – th–1th–1T/||th–1||] Eh–1 The basic properties of vectors w, t and p are derived from the residual matrices [13] (i) The vectors wi are mutually orthogonal, (wi, wj) = 0, i ≠ j. (ii) The vectors ti are mutually orthogonal, (ti, tj) = 0, i ≠ j. (iii) The vectors wi are orthogonal to the vectors pj, (wi, pj), i < j. What is important is that these properties do not depend on the way a new t vector is constructed. There are other more interesting properties for the PLS algorithm. For this see the tutorial paper by Hoskuldsson [13]. The subject of PLS is a large one and the analysis given above is a brief description for the non-iterative partial least squares (NIPALS) version. Example 17.4 The data of Example 17.3, which illustrated the use of the principal components analysis (PCA), was solved again by Russell et al. ([20], p. 57) to illustrate the use of the PLS method. For the NIPALS algorithm, they took E0 = A, F0* = B. After 12 iterations for the first component t1, convergence was obtained with an error © 2008 by Taylor & Francis Group, LLC
612
Numerical Linear Approximation in C
< 10–10. One more major iteration was needed to obtain a satisfactory result for the problem. In the next section, we describe a third technique to overcome the problem of multicollinearity or the ill-conditioning of matrix A; the Ridge equation, which is a stabilized normal equation. 17.11 Ridge equation As indicated in Section 17.5, the condition number of a matrix defines the condition of the matrix with respect to a computing problem. The spectral condition number K of matrix A of rank k is given in terms of its largest and smallest singular values s1 and sk, respectively K = ||A||2 ||A+||2 = [s1]/[sk] When the condition number is very large, that implies in most cases that sk is very small. On the other hand, a singular value sk that is very small is an indicator of multicollinearity. Given the system of linear equations Ax = b, the original idea behind the Ridge equation is to reduce the condition number of matrix ATA in the normal equation ATAx = ATb. Normally, 0 ≤ ε ≤ 1 and the Ridge equation is given by (ATA + εΙ)x = ATb
(17.11.1)
I is an m-unit matrix. The solution of this equation is x = (ATA + εΙ)–1ATb It can be shown ([22], pp. 207-208, [25], p. 102) that (17.11.1) gives x = minx(||Ax – b||2 + ε||x||2) The singular values of the m by m matrix ATA are non-negative and if they are denoted by σ1, σ2, …, σm, the singular values of (ATA + εΙ) are (σ1 + ε), (σ2+ ε), …, (σm + ε). The condition number of (ATA + ε Ι) is [(σ1 + ε)/(σm + ε)] < (σ1/σm) As a result, the solution vector x of the Ridge equation is stable with respect to slight perturbations in the given data A. A modified version of (17.11.1) was used by Varah [25, 26] in the © 2008 by Taylor & Francis Group, LLC
Chapter 17: Least Squares and Pseudo-Inverses of Matrices
613
solution of the Fredholm integral equation of the first kind (Chapter 19), namely (ATA + εLTL)x = ATb L is some discrete approximation to the first or the second derivative operator. However, we shall only consider equation (17.11.1). Example 17. 5 The data for this example is given in Chatterjee and Price ([6], p. 152). This example is also presented in Section 1.2.6. This example is now solved by the Ridge equation. The linear model is Ax = b, or (17.11.2)
x01 + x1a1 + x2a2 + x3a3 = b
The 1 denotes an n-column of 1’s, a1 refers to domestic production, a2 refers to stock formation and a3 refers to domestic consumption, while b refers to the imports, all measured in millions of French francs. The Ridge equation (17.11.1) for this model was solved many times for values of ε from ε = 0 to ε = 1 in increments of 0.001. The results of x1, x2 and x3 were plotted and also were given as a function of ε ([6], pp. 184, 185). The value of x1 changes –0.339 for ε = 0 to a stable value of 0.42 for ε = 0.14, x3 changes from 1.302 for ε = 0 to 0.525 for ε = 0.14, which are big changes for x1 and x3. The value of x2 was not affected and remains stable at about 0.21. 17.11.1 Estimating the Ridge parameter One of the early ways to estimate ε that is still in-use today is to construct a ridge trace (graph). In this graph, the value of each element of the solution vector x is plotted against ε, 0 ≤ ε ≤ 1, at small increments, as was done in the above example. The value ε at which the elements of x seem to stabilize is selected. This is of course a subjective way of selection. There are other techniques that use statistical factors. For these techniques, see for example, Montgomery and Peck ([16], pp. 339-343) and the references in Ryan ([21], pp. 400, 401).
© 2008 by Taylor & Francis Group, LLC
614
Numerical Linear Approximation in C
17.11.2 Ridge equation and variable selection In some linear models Ax = b, a subset of the columns of A cause multicollinearity. It is thus desirable to find these columns and to delete some of them. This process is known as variable selection. This is mainly done by examining the ridge trace mentioned above. It is done by removing the columns of matrix A that correspond to unstable x element, which changes sign as ε increase from 0 towards 1, and also delete the columns of A that correspond to the x elements that decrease in value towards 0, again as ε increase from 0 towards 1. This is illustrated by the following example, due to Montgomery and Peck ([16], pp. 343-345). This example is also discussed in Section 1.3.2. Example 17.6 The matrix equation for this example is given by P = x01 + x1T + x2H + x3C + x4TH + x5TC + x6HC + x7T2 + x8H2 + x9C2 Here, b = P, a1 = 1, an n-column of 1’s, a1 = T, a2 = H, …, etc. TH means each element of T is multiplied by the corresponding element of H. Similarly, in T2 each element of T is squared, …, etc. (see the notations in Section 1.3.2). The ridge trace is plotted as a function of the ridge variable ε, from ε = 0 to ε = 1, at small increments. As ε increases the coefficients x5 and x9 decease rapidly toward 0. Coefficients x6 changed sign at ε = 0.32 and x8 decreased towards 0 but not so rapidly. The four columns of matrix A that correspond to the above 4 coefficients are deleted and the ridge model is calculated again for the remaining 5 parameters, x1, x2, x3, x4 and x7. The new ridge trace seems more stable than when all 9 terms were considered. Also, an increase of the ridge parameter ε from ε = 0 to ε = 1, did not change the values of the remaining 5 variables much. 17.12 Numerical results and comments The purpose of this final section is to show that the Gauss LU factorization and the Householder’s QR factorization methods are
© 2008 by Taylor & Francis Group, LLC
Chapter 17: Least Squares and Pseudo-Inverses of Matrices
615
among the most efficient methods for calculating the least squares solution of the system Ax = b and the pseudo-inverse matrix A+. The two methods give results of comparable accuracy. See also [1].
LA_Eluls() computes the least squares solution and the pseudo-inverse of matrices using the Gauss LU factorization method with complete pivoting. LA_Hhls() implements the Householder's QR factorization method with pivoting. We note that both Lau [15] and Press et al. [19] presented C programs for the QR factorization method, but for a matrix of full rank only.
Both DR_Eluls() and DR_Hhls() test the same 3 examples. We also noted earlier that the factorizations by the Gauss LU method and by Householder's QR method are not for matrix A, but for the permuted matrix A defined in these methods. Yet, the final results computed by LA_Eluls() and LA_Hhls() are for the given non-permuted equation Ax = b.
Note 17.2
Both LA_Eluls() and LA_Hhls() are designed to solve linear systems with several r.h.s. vectors b(s).
Example 17.7
This example was solved by Golub and Reinsch ([11], p. 412) by the singular value decomposition method. The system of linear equations AX = B is

        22  10   2    3   7                 -1   1   0
        14   7  10    0   8                  2  -1   1
        -1  13  -1  -11   3                  1  10  11
    A = -3  -2  13   -2   4            B =   4   0   4
         9   8   1   -2   4                  0  -6  -6
         9   1  -7    5  -1                 -3   6   3
         2  -6   6    5   1                  1  11  12
         4   5   0   -2   2                  0  -5  -5

where X is the 5 by 3 matrix of unknowns (xij), i = 1, …, 5, j = 1, 2, 3, whose columns are the solution vectors x1, x2 and x3.
Matrix A is a well-conditioned 8 by 5 matrix of rank 3. The 3 r.h.s. vectors b(s) are chosen so that the minimum norm least squares solutions x1, x2 and x3 are respectively x1 = (–1/12, 0, 1/4, –1/12, 1/12)T, x2 = (0, 0, 0, 0, 0)T and x3 = x1.
As expected, matrix A is found by the two routines to be of rank 3. Results obtained in single-precision by the two methods are as follows.
Solution by Gauss LU factorization method
x1 = (–0.083333, 1.1E–06, 0.25, –0.083333, 0.083333)T
x2 = (9.4E–07, 1.8E–06, –3.5E–07, 2.1E–06, 4.9E–07)T
x3 = (–0.083333, 1.1176E–06, 0.25, –0.083333, 0.083333)T
A measure of the accuracy of the calculation of the pseudo-inverse A+ is the expression ||AA+A – A||E/||A||E, where ||.||E refers to the Euclidean matrix norm. The calculated ||AA+A – A||E/||A||E = 1.3E–05.
Solution by Householder's QR factorization
x1 = (–0.083333, 9.8E–07, 0.25, –0.083333, 0.083333)T
x2 = (–1.54168E–06, 4.8E–06, –8.9E–07, –4.6E–06, 7.2E–07)T
x3 = (–0.083333, 5.68441E–06, 0.25, –0.083333, 0.083333)T
The calculated ||AA+A – A||E/||A||E = 9.7E–06.
Example 17.8
This example was solved by Golub ([10], p. 211) using an iterative scheme. A is a 6 by 5 badly conditioned matrix. It consists of the first 5 columns of the inverse of the 6×6 Hilbert matrix. There are 2 r.h.s. columns b(s). Each column b is chosen such that the exact minimum norm least squares solution for each of them is x = (1, 1/2, 1/3, 1/4, 1/5)T.

         0.360D02  -0.630D03   0.336D04   -0.756D04    0.756D04
        -0.630D03   0.147D05  -0.882D05    0.21168D06 -0.2205D06
    A =  0.336D04  -0.882D05   0.56448D06 -0.14112D07  0.1512D07
        -0.756D04   0.21168D06 -0.14112D07 0.36288D07 -0.3969D07
         0.756D04  -0.2205D06  0.1512D07  -0.3969D07   0.441D07
        -0.2772D04  0.8316D05 -0.58212D06  0.155232D07 -0.174636D07
b1 = (0.463D03, –0.1386D05, 0.9702D05, –0.25872D06, 0.29106D06, –0.116424D06)T b2 = (–0.4157D04, –0.1782D05, 0.93555D05, –0.2618D06, 0.288288D06, –0.118944D06)T
The computation is performed in double-precision. Rank(A) is found by the two routines to be 5.
Solution by Gauss LU factorization method
The calculated solutions are x1 = x2 = (1, 0.5, 0.3333333333, 0.25, 0.2)T for each of b1 and b2. The measure of accuracy for the calculated pseudo-inverse A+ is ||AA+A – A||E/||A||E = 0.1633277774E–11.
Solution by Householder's QR factorization
x = (1, 0.5, 0.3333333333, 0.25, 0.2)T for each of b1 and b2. The measure of accuracy for the calculated pseudo-inverse A+ is ||AA+A – A||E/||A||E = 0.2715803665E–11.
Note 17.3
As noted earlier, the calculated triangular matrix R in the QR factorization is exactly as displayed by Golub ([10], p. 211) but different from matrix R in ([1], p. 364). The reason is that in [1] the QR factorization was done without pivoting, whereas in [10], and in our implementation for this chapter, pivoting was used.
Example 17.9
In this example, matrix A is a badly conditioned 4 by 5 matrix. There is no r.h.s. vector b. The pseudo-inverse A+ was computed in both single- and double-precision.

        0.4087  0.1593  0.6594  0.4302  0.3516
    A = 0.6246  0.3383  0.6591  0.9342  0.9038
        0.0661  0.9112  0.6898  0.1931  0.1498
        0.2112  0.8150  0.7983  0.3406  0.2803
Solution by Gauss LU factorization method In single-precision, the calculated rank(A) = 3 and ||AA+A – A||E/||A||E = 0.1141155E–04. In double-precision, the calculated rank(A) = 4 and ||AA+A – A||E/||A||E = 0.8727612201E–11.
Solution by Householder's QR factorization
In single-precision, the calculated rank(A) = 3 and ||AA+A – A||E/||A||E = 0.8185643E–05. In double-precision, the calculated rank(A) = 4 and ||AA+A – A||E/||A||E = 0.1190011279E–11.
Recall that LA_Eluls() and LA_Hhls() can also solve non-singular square systems.
The following paragraphs also confirm the efficiency of the Householder's QR factorization (and the Gauss LU factorization) methods. Spath [22] collected 42 test examples, which he used to evaluate different existing algorithms, after converting them to FORTRAN 77. For the linear least squares solution of the system Ax = b, he compares the solutions by the normal equation (NGL) method using Cholesky's decomposition of ATA, the modified Gram-Schmidt (MGS), Givens' (GIVR), Householder's transformation (HFTI) and the singular value decomposition (SVD) methods.
As for computer storage requirements, for n > m (using our notation), all subroutines need more or less the same amount of storage except the SVD, which needs almost 3 times the storage. Spath ([22], pp. 43-46) compares the behavior of the different subroutines with respect to the accuracy of the results. As expected, the results by the NGL method are not very accurate for ill-conditioned cases (only a few decimal places agree with the results of the other subroutines). In one example, only the MGS and the HFTI gave reasonable results, while the others failed. In another example, the MGS, GIVR and the HFTI gave the most accurate results. As for the CPU times for the 42 examples, and for randomly generated matrices, the fastest was the NGL, followed by the MGS, GIVR, HFTI and the SVD.
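For reference, the accuracy measure quoted throughout these examples can be computed directly. The following is a minimal sketch, assuming 0-based row-major C arrays rather than the 1-based tMatrix_R type used by the library routines; the function name pinv_accuracy() is ours and is not part of the library. The driver programs DR_Eluls() and DR_Hhls() below compute the same quantity.

#include <math.h>

/* Euclidean-norm accuracy measure ||a*apsudo*a - a||_E / ||a||_E
   for a computed pseudo-inverse. "a" is n by m and "apsudo" is
   m by n, both stored row major. */
double pinv_accuracy (int n, int m, const double *a,
    const double *apsudo)
{
    double num = 0.0, den = 0.0;
    int i, j, k, p;

    for (i = 0; i < n; i++) {
        for (j = 0; j < m; j++) {
            /* e = (a*apsudo*a)[i][j] - a[i][j] */
            double e = -a[i*m + j];
            for (k = 0; k < m; k++) {
                double aap = 0.0;        /* (apsudo*a)[k][j] */
                for (p = 0; p < n; p++)
                    aap += apsudo[k*n + p] * a[p*m + j];
                e += a[i*m + k] * aap;
            }
            num += e * e;
            den += a[i*m + j] * a[i*m + j];
        }
    }
    return sqrt (num)/sqrt (den);
}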
References
1. Abdelmalek, N.N., Round-off error analysis for Gram-Schmidt method and solution of linear least squares problems, BIT, 11(1971)345-367.
2. Abdelmalek, N.N., On the solution of linear least squares problems and pseudo-inverses, Computing, 13(1974)215-228.
3. Abdelmalek, N.N., Piecewise linear least-squares approximation of planar curves, International Journal of Systems Science, 21(1990)1393-1403.
4. Barrodale, I. and Stuart, G.F., A Fortran program for linear least squares problems of variable degree, Proceedings of the Fourth Manitoba Conference on Numerical Mathematics, Hartnell, B.L. and Williams, H.C. (eds.), Winnipeg, Manitoba, Canada, pp. 191-204, 1975.
5. Bjorck, A., Solving linear least squares problems by Gram-Schmidt orthogonalization, BIT, 7(1967)1-21.
6. Chatterjee, S. and Price, B., Regression Analysis by Example, John Wiley & Sons, New York, 1977.
7. Cheng, Y-Q., Zhuang, Y-M. and Yang, J-Y., Optimal Fisher discriminant analysis using the rank decomposition, Pattern Recognition, 25(1992)101-111.
8. Geladi, P. and Kowalski, B.R., Partial least-squares regression: A tutorial, Analytica Chimica Acta, 185(1986)1-17.
9. Gentleman, W.M., Algorithm AS 75: Basic procedures for large, sparse or weighted linear least squares problems, Applied Statistics, 22(1974)448-454.
10. Golub, G.H., Numerical methods for solving linear least squares problems, Numerische Mathematik, 7(1965)206-216.
11. Golub, G.H. and Reinsch, C., Singular value decomposition and the least squares solutions, Numerische Mathematik, 14(1970)403-420.
12. Golub, G.H. and Van Loan, C.F., Matrix Computation, The Johns Hopkins University Press, Third Edition, London, 1996.
13. Hoskuldsson, A., PLS regression methods, Journal of Chemometrics, 2(1988)211-228.
14. Lau, H.T., A Numerical Library in C for Scientists and Engineers, CRC Press, Ann Arbor, 1995.
15. Longley, J.M., Least Squares Computations Using Orthogonalization Methods, Marcel Dekker, New York, 1984.
16. Montgomery, D.C. and Peck, E.A., Introduction to Linear Regression Analysis, John Wiley & Sons, New York, 1992.
17. Noble, B., Applied Linear Algebra, Prentice-Hall, Englewood Cliffs, NJ, 1969.
18. Peters, G. and Wilkinson, J.H., The least squares problem and pseudo-inverses, Computer Journal, 13(1970)309-316.
19. Press, W.H., Flannery, B.P., Teukolsky, S.A. and Vetterling, W.T., Numerical Recipes in C, The Art of Scientific Computing, Cambridge University Press, Second Edition, Cambridge, 1992.
20. Russell, E., Chiang, L.H. and Braatz, R.D., Data-Driven Methods for Fault Detection and Diagnosis in Chemical Processes, Springer-Verlag, London, 2000.
21. Ryan, T.P., Modern Regression Methods, John Wiley & Sons, New York, 1997.
22. Spath, H., Mathematical Algorithms for Linear Regression, Academic Press, English Edition, London, 1991.
23. Stewart, G.W., Introduction to Matrix Computations, Academic Press, New York, 1973.
24. Stewart, G.W., The effects of rounding residual on an algorithm for downdating a Cholesky factorization, Journal of the Institute of Mathematics and its Applications, 23(1979)203-213.
25. Varah, J.M., A practical examination of some numerical methods for linear discrete ill-posed problems, SIAM Review, 21(1979)100-111.
26. Varah, J.M., Pitfalls in the numerical solution of linear ill-posed problems, SIAM Journal on Scientific and Statistical Computing, 4(1983)164-176.
27. Wilkinson, J.H., The Algebraic Eigenvalue Problem, Clarendon Press, Oxford, 1965.
28. Wold, S., Ruhe, A., Wold, H. and Dunn III, W.J., The collinearity problem in linear regression. The partial least squares (PLS) approach to generalized inverses, SIAM Journal on Scientific and Statistical Computing, 5(1984)735-743.
17.13 DR_Eluls /*------------------------------------------------------------------DR_Eluls --------------------------------------------------------------------This program is a driver for the function LA_Eluls(), which calculates the minimal-length least squares solution of a system of linear equations and/or calculates the pseudo-inverse of the coefficient matrix. It uses Gauss "LU" decomposition method with complete pivoting. The system of linear equations is of the form a*xs = bs "a" is a given real n by m matrix of rank k, k <= [min(n,m)]. "bs" is (are) given real r.h.s. n vector(s). "xs" is (are) the m solution vector(s). The system of linear equations might be overdetermined, determined or underdetermined. The required results are obtained according to a parameter "ientry" specified by the user: 0 LA_Eluls() calculates matrices "l" and "u" in the "lu" decomposition of (the permuted) matrix "a(permuted)" as a(permuted) = l*u. or as a(permuted) = l*diag*u; "diag" is a diagonal matrix and "u" has unit diagonal elements. 1 LA_Eluls() calculates matrices "l" and "u" in the "lu" decomposition + the least squares solution vector(s) "xs" (if "bs" != 0). 2 LA_Eluls() calculates matrices "l" and "u" in the "lu" decomposition + the pseudo-inverse of "a", "apsudo". 3 LA_Eluls() calculates matrices "l" and "u" in the "lu" decomposition + the pseudo-inverse of matrix "a" + the least squares solution(s) (if "bs" != 0). This program contains 3 examples whose results appear in the text. Example 1: matrix "a" is 8 by 5 well conditioned of rank 3. There are 3 r.h.s. vectors "bs". It is solved in double precision.
Example 2: matrix "a" is 6 by 5 badly conditioned of rank 5.
           There are 2 r.h.s. vectors "bs".
           It is solved in double precision.
Example 3: matrix "a" is 4 by 5 badly conditioned.
           There are no r.h.s. vectors "bs".
           In single precision, calculated rank (a) = 3.
           In double precision, calculated rank (a) = 4.
-------------------------------------------------------------------*/
#include "DR_Defs.h"
#include "LA_Prototypes.h"

#define Na2      8
#define Ma2      5
#define Irhsa    3
#define Nb2      6
#define Mb2      5
#define Irhsb    2
#define Nc2      4
#define Mc2      5
#define Irhsc    0
void DR_Eluls (void)
{
    /*----------------------------------------
      Constant matrices/vectors
    ----------------------------------------*/
    static tNumber_R a1init[Na2][Ma2] =
    {
        { 22.0, 10.0,  2.0,   3.0,  7.0 },
        { 14.0,  7.0, 10.0,   0.0,  8.0 },
        { -1.0, 13.0, -1.0, -11.0,  3.0 },
        { -3.0, -2.0, 13.0,  -2.0,  4.0 },
        {  9.0,  8.0,  1.0,  -2.0,  4.0 },
        {  9.0,  1.0, -7.0,   5.0, -1.0 },
        {  2.0, -6.0,  6.0,   5.0,  1.0 },
        {  4.0,  5.0,  0.0,  -2.0,  2.0 }
    };
    static tNumber_R bs1init[Na2][Irhsa] =
    {
        { -1.0,  1.0,  0.0 },
        {  2.0, -1.0,  1.0 },
        {  1.0, 10.0, 11.0 },
        {  4.0,  0.0,  4.0 },
        {  0.0, -6.0, -6.0 },
        { -3.0,  6.0,  3.0 },
        {  1.0, 11.0, 12.0 },
        {  0.0, -5.0, -5.0 }
    };
    static tNumber_R a2init[Nb2][Mb2] =
    {
        {    36.0,    -630.0,     3360.0,    -7560.0,     7560.0 },
        {  -630.0,   14700.0,   -88200.0,   211680.0,  -220500.0 },
        {  3360.0,  -88200.0,   564480.0, -1411200.0,  1512000.0 },
        { -7560.0,  211680.0, -1411200.0,  3628800.0, -3969000.0 },
        {  7560.0, -220500.0,  1512000.0, -3969000.0,  4410000.0 },
        { -2772.0,   83160.0,  -582120.0,  1552320.0, -1746360.0 }
    };
    static tNumber_R bs2init[Nb2][Irhsb] =
    {
        {     463.0,   -4157.0 },
        {  -13860.0,  -17820.0 },
        {   97020.0,   93555.0 },
        { -258720.0, -261800.0 },
        {  291060.0,  288288.0 },
        { -116424.0, -118944.0 }
    };
    static tNumber_R a3init[Nc2][Mc2] =
    {
        { 0.4087, 0.1593, 0.6594, 0.4302, 0.3516 },
        { 0.6246, 0.3383, 0.6591, 0.9342, 0.9038 },
        { 0.0661, 0.9112, 0.6898, 0.1931, 0.1498 },
        { 0.2112, 0.8150, 0.7983, 0.3406, 0.2803 }
    };

    /*----------------------------------------
      Variable matrices/vectors
    ----------------------------------------*/
    tMatrix_R aa     = alloc_Matrix_R (NN2_ROWS, MM2_COLS);
    tMatrix_R bs     = alloc_Matrix_R (NN2_ROWS, KK2_COLS);
    tMatrix_R xs     = alloc_Matrix_R (MM2_COLS, KK2_COLS);
    tMatrix_R l      = alloc_Matrix_R (NN2_ROWS, NN2_ROWS);
    tMatrix_R u      = alloc_Matrix_R (NN2_ROWS, MM2_COLS);
    tMatrix_R apsudo = alloc_Matrix_R (MM2_COLS, NN2_ROWS);
    tMatrix_R rres   = alloc_Matrix_R (NN2_ROWS, KK2_COLS);
    tMatrix_R aux    = alloc_Matrix_R (NN2_ROWS, NN2_ROWS);
    tMatrix_R auy    = alloc_Matrix_R (NN2_ROWS, MM2_COLS);
    tVector_R diag   = alloc_Vector_R (MM2_COLS);
    tVector_R bb     = alloc_Vector_R (NN2_ROWS);

    tMatrix_R a1  = init_Matrix_R (&(a1init[0][0]), Na2, Ma2);
    tMatrix_R bs1 = init_Matrix_R (&(bs1init[0][0]), Na2, Irhsa);
    tMatrix_R a2  = init_Matrix_R (&(a2init[0][0]), Nb2, Mb2);
    tMatrix_R bs2 = init_Matrix_R (&(bs2init[0][0]), Nb2, Irhsb);
    tMatrix_R a3  = init_Matrix_R (&(a3init[0][0]), Nc2, Mc2);

    int irank, irhs, ientry;
    int i, j, k, m, n, Iexmpl;

    tNumber_R s, sa, sum1, sum2, aerr;

    eLaRc rc = LaRcOk;
prn_dr_bnr ("DR_Eluls, Minimal Length L2 Solution of a System of" " Linear Equations by Gauss \"LU\" Factorization"); for (Iexmpl = 1; Iexmpl <= 3; Iexmpl++) { switch (Iexmpl) { case 1: ientry = 3; n = Na2; m = Ma2; irhs = Irhsa; for (i = 1; i <= n; i++) { for (j = 1; j <= m; j++) { aa[i][j] = a1[i][j]; } } for (i = 1; i <= n; i++) { for (j = 1; j <= irhs; j++) { bs[i][j] = bs1[i][j]; } } break; © 2008 by Taylor & Francis Group, LLC
Chapter 17: DR_Eluls case 2: ientry = 3; n = Nb2; m = Mb2; irhs = Irhsb; for (i = 1; i <= n; i++) { for (j = 1; j <= m; j++) { aa[i][j] = a2[i][j]; } } for (i = 1; i <= n; i++) { for (j = 1; j <= irhs; j++) { bs[i][j] = bs2[i][j]; } } break; case 3: ientry = 2; n = Nc2; m = Mc2; irhs = 0; for (i = 1; i <= n; i++) { for (j = 1; j <= m; j++) { aa[i][j] = a3[i][j]; } } break; default: break; } prn_algo_bnr ("LA_Eluls"); prn_example_delim(); PRN ("Example #%d: Size of coefficient matrix " "\"a\" %d by %d\n", Iexmpl, n, m); prn_example_delim(); PRN ("Minimal Least Squares Solution(s) of a System of" © 2008 by Taylor & Francis Group, LLC
625
626
Numerical Linear Approximation in C " Linear Equations Using Gauss\"LU\" Decomposition\n"); prn_example_delim(); PRN ("Coefficient Matrix, \"a\"\n"); prn_Matrix_R (aa, n, m); if (irhs != 0) { PRN ("\n"); PRN ("Right Hand Vector(s), \"bs\"\n"); prn_Matrix_R (bs, n, irhs); } rc = LA_Eluls (ientry, n, m, irhs, aa, bs, l, u, diag, bb, &irank, apsudo, xs, rres); if (rc >= LaRcOk) { PRN ("\n"); PRN ("Results of the Least Squares Problem\n"); PRN ("Rank of the coefficient matrix = %d\n\n", irank); PRN ("Lower Triangular (Trapezoidal) Matrix \"l\"\n"); prn_Matrix_R (l, n, irank); if (irank != m) { PRN ("Elements of Diagonal Vector \"diag\"\n"); prn_Vector_R (diag, irank); } PRN ("Upper Triangular (Trapezoidal) Matrix \"u\"\n"); prn_Matrix_R (u, irank, m); if (ientry >= 2) { PRN ("Pseudo-inverse Matrix \"apsudo\"\n"); prn_Matrix_R (apsudo, m, n); s = 0.0; for (j = 1; j <= m; j++) { for (i = 1; i <= n; i++) { s = s + aa[i][j] * (aa[i][j]); } } sum1 = sqrt (s); for (j = 1; j <= n; j++)
© 2008 by Taylor & Francis Group, LLC
Chapter 17: DR_Eluls
627
{ for (i = 1; i <= n; i++) { s = 0.0; for (k = 1; k <= m; k++) { s = s + aa[i][k] * (apsudo[k][j]); } aux[i][j] = s; } } for (j = 1; j <= m; j++) { for (i = 1; i <= n; i++) { s = 0.0; for (k = 1; k <= n; k++) { s = s + aux[i][k] * (aa[k][j]); } auy[i][j] = s - aa[i][j]; } } sa = 0.0; for (j = 1; j <= m; j++) { for (i = 1; i <= n; i++) { sa = sa + auy[i][j] * (auy[i][j]); } } sum2 = sqrt (sa); aerr = sum2/sum1; PRN ("||a*apsudo*a - a||/||a|| = %22.15f\n", aerr); } if ((ientry == 1) || (ientry == 3)) { if (irhs != 0) { PRN ("\n"); PRN ("The Least Squares Solution(s)\n"); for (i = 1; i <= irhs; i++) { for (j = 1; j <= n; j++) © 2008 by Taylor & Francis Group, LLC
628
Numerical Linear Approximation in C { bb[j] = bs[j][i]; } PRN ("Right Hand Vector \"b\"\n"); prn_Vector_R (bb, n); for (j = 1; j <= m; j++) { bb[j] = xs[j][i]; } PRN ("Solution vector \"x\"\n"); prn_Vector_R (bb, m); for (j = 1; j <= n; j++) { bb[j] = rres[j][i]; } s = 0.0; for (j = 1; j <= n; j++) { s = s + bb[j] * (bb[j]); } sum1 = sqrt (s); PRN ("Residual vector \"rres\"\n"); prn_Vector_R (bb, n); } } } } prn_la_rc (rc); } free_Matrix_R free_Matrix_R free_Matrix_R free_Matrix_R free_Matrix_R free_Matrix_R free_Matrix_R free_Matrix_R free_Matrix_R free_Vector_R free_Vector_R
(aa, NN2_ROWS); (bs, NN2_ROWS); (xs, MM2_COLS); (l, NN2_ROWS); (u, NN2_ROWS); (apsudo, MM2_COLS); (rres, NN2_ROWS); (aux, NN2_ROWS); (auy, NN2_ROWS); (diag); (bb);
uninit_Matrix_R (a1); uninit_Matrix_R (bs1); © 2008 by Taylor & Francis Group, LLC
Chapter 17: DR_Eluls uninit_Matrix_R (a2); uninit_Matrix_R (bs2); uninit_Matrix_R (a3); }
© 2008 by Taylor & Francis Group, LLC
629
630
Numerical Linear Approximation in C
17.14 LA_Eluls /*------------------------------------------------------------------LA_Eluls --------------------------------------------------------------------This program calculates the minimal-length least squares solution(s) of a system of linear equations using the Gauss "LU" factorization method with complete pivoting. The system of linear equations is of the form a*xs = bs "a" is a given real n by m matrix of rank k, k <= [min (n,m)]. "bs" is (are) the given r.h.s. n vector(s). "xs" is (are) the m solution vector(s). The problem is to calculate the elements of the "irhs" vector(s) xs[i][j], i = 1, 2, ..., m, j = 1, 2, ..., "irhs" of the system a*xs = bs. Inputs n m irhs a bs ientry
Number of rows of matrix "a" in the system a*xs = bs. Number of columns of matrix "a" in the system a*xs = bs. Number of columns in matrix "bs" in the system a*xs = bs. A real n by m matrix of the system a*xs = bs. An n rows by "irhs" columns matrix in the system a*xs = bs. An integer specifying the action by the user: 0 LA_Eluls() calculates matrices "l" and "u" in the "lu" decomposition of (the permuted) matrix "a(permuted)" into a(permuted) = l*u or a(permuted) =l*diag*u; "diag" is a diagonal matrix and "u" has unit diagonal elements. 1 LA_Eluls() calculates matrices "l" and "u" in the "lu" decomposition + the least squares solution vector(s) "xs" (if "bs" != 0). 2 LA_Eluls() calculates matrices "l" and "u" in the "lu" decomposition + the pseudo-inverse of "a", "apsudo". 3 LA_Eluls() calculates matrices "l" and "u" in the "lu" decomposition + the pseudo-inverse of matrix "a" + the least squares solution(s) (if "bs" != 0).
Local Data t, v, tempp
Real working matrices. b, x, w, y temp Real working vectors. ir An n vector containing the indices of the rows of the permuted matrix "a". ic An m vector containing the indices of the columns of the permuted matrix "a". Outputs irank The calculated rank of matrix "a". xs An m by "irhs" matrix containing the "irhs" columns that are the "irhs" solution m vector(s) xs. rres An n by "irhs" matrix. (rres[1][j], rres[2][j], ..., rres[n][j]) is the least squares residual vector for the jth solution, j = 1, ..., "irhs". rres[i][j] = a[i][1]*xs[1][j] + ... + a[i][m]*xs[m][j] - bs[i][j], for j = 1, 2,..., "irhs". l An n by n real lower triangular matrix whose n rows and first "irank" columns contain the lower triangular (trapezoidal) matrix "l" in the decomposition of a(permuted) = l*u or a(permuted) = l*diag*u. diag An m real vector whose first "irank" elements contain the diagonal elements of matrix "u". u An m by m real matrix whose first "irank" rows and m columns contain the upper triangular (trapezoidal) matrix "u" in the decomposition of a(permuted) = l*u or = l*diag*u. apsudo An m by n real matrix containing the pseudo inverse of matrix "a". NOTE: The calculated results "xs" and "apsudo" are for the given equation a*xs = bs, (not for the permuted one). Returns one of LaRcSolutionFound LaRcErrBounds LaRcErrNullPtr LaRcErrAlloc -------------------------------------------------------------------*/ #include "LA_Prototypes.h" eLaRc LA_Eluls (int ientry, int n, int m, int irhs, tMatrix_R aa, tMatrix_R bs, tMatrix_R l, tMatrix_R u, tVector_R diag, tVector_R b, int *pIrank, tMatrix_R apsudo, tMatrix_R xs, tMatrix_R rres) © 2008 by Taylor & Francis Group, LLC
{
    tMatrix_R t     = alloc_Matrix_R (m, m);
    tMatrix_R v     = alloc_Matrix_R (m, m);
    tMatrix_R tempp = alloc_Matrix_R (m, m);
    tVector_R x     = alloc_Vector_R (m);
    tVector_R y     = alloc_Vector_R (m);
    tVector_R w     = alloc_Vector_R (m);
    tVector_R temp  = alloc_Vector_R (m);
    tVector_I ir    = alloc_Vector_I (n);
    tVector_I ic    = alloc_Vector_I (m);

    int iend = 0, iout = 0, iwish = 0, ifirst = 0, isecnd = 0;
    int i = 0, j = 0, ij = 0, irank = 0;
/* Validation of the data before executing the algorithm */ eLaRc rc = LaRcSolutionFound; VALIDATE_BOUNDS ((0 < n) && (0 < m) && !((n == 1) && (m == 1)) && (0 <= irhs) && IN_BOUNDS (ientry, 0, 3)); VALIDATE_PTRS (aa && bs && l && u && diag && b && pIrank && apsudo && xs && rres); VALIDATE_ALLOC (t && v && tempp && x && y && w && temp && ir && ic ); iwish = ientry; iout = 1; irank = n; if (m <= n) irank = m; /* lu decomposition */ LA_eluls_lu_decomp (&iout, n, m, aa, l, u, diag, ir, ic); *pIrank = iout - 1; if (*pIrank != n) { /* l factorizing */ LA_eluls_l_decomp (n, m, tempp, l, u, pIrank); /* Cholesky decomposition of matrix tempp */ LA_chols (irank, tempp, t); } if (*pIrank != m) { /* u factorizing */ LA_eluls_u_decomp (m, tempp, u, diag, pIrank); © 2008 by Taylor & Francis Group, LLC
/* Cholesky decomposition of matrix tempp */ LA_chols (irank, tempp, v); } /* End of l-u factorizations */ if (iwish == 0) { GOTO_CLEANUP_RC (LaRcSolutionFound); } iend = n; if (iwish != 2) { iend = irhs; } /* Calculating the solution vector(s) and the psudo-inverse of the coefficient matrix "a" */ for (ij = 1; ij <= 2; ij++) { for (ifirst = 1; ifirst <= iend; ifirst++) { if (iwish == 2) { for (isecnd = 1; isecnd <= n; isecnd++) { b[isecnd] = 0.0; } b[ifirst] = 1.0; } else if (iwish != 2) { for (i = 1; i <= n; i++) { j = ir[i]; b[i] = bs[j][ifirst]; } } /* Calculating vector y */ LA_calcul_y (pIrank, n, t, y, w, temp, l, b); /* Calculating vector x */ LA_calcul_x (pIrank, m, v, y, w, temp, u, diag, x); © 2008 by Taylor & Francis Group, LLC
            /* Permuting elements of vector x and calculating
               pseudo-inverse matrix "apsudo" */
            LA_permute_x (iwish, ifirst, isecnd, n, m, aa, bs, w, x,
                ir, ic, apsudo, xs, rres);
        }
        if ((iwish == 1) || (iwish == 2))
        {
            GOTO_CLEANUP_RC (LaRcSolutionFound);
        }
        iwish = 2;
        iend = n;
    }

CLEANUP:
    free_Matrix_R (t, m);
    free_Matrix_R (v, m);
    free_Matrix_R (tempp, m);
    free_Vector_R (x);
    free_Vector_R (y);
    free_Vector_R (w);
    free_Vector_R (temp);
    free_Vector_I (ir);
    free_Vector_I (ic);
return rc; } /*------------------------------------------------------------------Part 1 of LA_Eluls() -------------------------------------------------------------------*/ void LA_eluls_lu_decomp (int *pIout, int n, int m, tMatrix_R aa, tMatrix_R l, tMatrix_R u, tVector_R diag, tVector_I ir, tVector_I ic) { int i, j, ij, iout1, kp = 0, kp1, kdp = 0, nless; tNumber_R e, gh, piv, pivot; for (j = 1; j <= m; j++) { ic[j] = j; for (i = 1; i <= n; i++) { u[i][j] = aa[i][j]; © 2008 by Taylor & Francis Group, LLC
Chapter 17: LA_Eluls } } for (i = 1; i <= n; i++) { for (j = 1; j <= n; j++) { l[i][j] = 0.0; } ir[i] = i; l[i][i] = 1.0; } *pIout = 1; nless = n; if (m < n) nless = m; for (ij = 1; ij <= nless; ij++) { piv = 0.0; /* Complete pivoting for elements of matrix u */ for (j = *pIout; j <= m; j++) { for (i = *pIout; i <= n; i++) { e = u[i][j]; if (e < 0.0) e = -e; if (e > piv) { kp = i; kdp = j; piv = e; } } } if (piv < EPS) return; else { if (*pIout != kdp) { /* Swap two elements of integer vector "ic" */ swap_elems_Vector_I (ic, kdp, *pIout); /* Swap two columns of real matrix "u" */ for (i = 1; i <= n; i++) © 2008 by Taylor & Francis Group, LLC
Numerical Linear Approximation in C { e = u[i][*pIout]; u[i][*pIout] = u[i][kdp]; u[i][kdp] = e; } } if (*pIout != kp) { /* Swap two elements of integer vector "ir" */ swap_elems_Vector_I (ir, kp, *pIout); /* Swap two partial rows of real matrix "u" */ for (j = *pIout; j <= m; j++) { e = u[*pIout][j]; u[*pIout][j] = u[kp][j]; u[kp][j] = e; } if (*pIout != 1) { iout1 = *pIout - 1; for (j = 1; j <= iout1; j++) { e = l[*pIout][j]; l[*pIout][j] = l[kp][j]; l[kp][j] = e; } } } pivot = u[*pIout][*pIout]; diag[*pIout] = pivot; if (*pIout != n) { kp1 = *pIout + 1; for (i = kp1; i <= n; i++) { e = u[i][*pIout]; gh = e/pivot; u[i][*pIout] = 0.0; l[i][*pIout] = gh; for (j = kp1; j <= m; j++) { u[i][j] = u[i][j] - gh * (u[*pIout][j]); }
} } *pIout = *pIout + 1; } } } /*------------------------------------------------------------------Calculating vector y for LA_Eluls() -------------------------------------------------------------------*/ void LA_calcul_y (int *pIrank, int n, tMatrix_R t, tVector_R y, tVector_R w, tVector_R temp, tMatrix_R l, tVector_R b) { int i, j, k, ki, im1, ip1; tNumber_R sm; if (*pIrank == n) { y[1] = b[1]; if (*pIrank == 1) return; for (i = 2; i <= *pIrank; i++) { im1 = i - 1; sm = -b[i]; for (k = 1; k <= im1; k++) { sm = sm + l[i][k] * (y[k]); } y[i] = -sm; } return; } else if (*pIrank != n) { for (i = 1; i <= *pIrank; i++) { sm = 0.0; for (k = 1; k <= n; k++) { sm = sm + l[k][i] * (b[k]); } temp[i] = sm; } w[1] = temp[1]/t[1][1]; if (*pIrank > 1) © 2008 by Taylor & Francis Group, LLC
Numerical Linear Approximation in C { for (i = 2; i <= *pIrank; i++) { im1 = i - 1; sm = -temp[i]; for (k = 1; k <= im1; k++) { sm = sm + t[i][k] * (w[k]); } w[i] = -sm/t[i][i]; } } y[*pIrank] = w[*pIrank]/t[*pIrank][*pIrank]; if (*pIrank == 1) return; for (j = 2; j <= *pIrank; j++) { i = *pIrank - j + 1; ip1 = i + 1; sm = -w[i]; ki = j - 1 + ip1 - 1; for (k = ip1; k <= ki; k++) { sm = sm + t[k][i] * (y[k]); } y[i] = -sm/t[i][i]; } }
} /*------------------------------------------------------------------Calculating vector x for LA_Eluls() -------------------------------------------------------------------*/ void LA_calcul_x (int *pIrank, int m, tMatrix_R v, tVector_R y, tVector_R w, tVector_R temp, tMatrix_R u, tVector_R diag, tVector_R x) { int i, j, im1, ip1, k, ki; tNumber_R
sm;
if (*pIrank == m) { x[*pIrank] = y[*pIrank]/u[*pIrank][*pIrank]; if (*pIrank == 1) return; for (j = 2; j <= *pIrank; j++) © 2008 by Taylor & Francis Group, LLC
Chapter 17: LA_Eluls { i = *pIrank - j + 1; ip1 = i + 1; sm = -y[i]; ki = j - 1 + ip1 - 1; for (k = ip1; k <= ki; k++) { sm = sm + u[i][k] * (x[k]); } x[i] = -sm/u[i][i]; } return; } else if (*pIrank != m) { for (i = 1; i <= *pIrank; i++) { y[i] = y[i]/diag[i]; } w[1] = y[1]/v[1][1]; if (*pIrank > 1) { for (i = 2; i <= *pIrank; i++) { im1 = i - 1; sm = -y[i]; for (k = 1; k <= im1; k++) { sm = sm + v[i][k] * (w[k]); } w[i] = -sm/v[i][i]; } } temp[*pIrank] = w[*pIrank]/v[*pIrank][*pIrank]; if (*pIrank > 1) { for (j = 2; j <= *pIrank; j++) { i = *pIrank - j + 1; ip1 = i + 1; sm = -w[i]; ki = j - 1 + ip1 - 1; for (k = ip1; k <= ki; k++) { sm = sm + v[k][i] * (temp[k]); © 2008 by Taylor & Francis Group, LLC
Numerical Linear Approximation in C } temp[i] = -sm/v[i][i]; } } for (i = 1; i <= m; i++) { sm = 0.0; for (k = 1; k <= *pIrank; k++) { sm = sm + u[k][i] * (temp[k]); } x[i] = sm; } }
} /*------------------------------------------------------------------Permute elements of vector x and calculate apsudo for LA_Eluls() -------------------------------------------------------------------*/ void LA_permute_x (int iwish, int ifirst, int isecnd, int n, int m, tMatrix_R aa, tMatrix_R bs, tVector_R w, tVector_R x, tVector_I ir, tVector_I ic, tMatrix_R apsudo, tMatrix_R xs, tMatrix_R rres) { int i, j, k; tNumber_R
sm;
for (i = 1; i <= m; i++) { k = ic[i]; w[k] = x[i]; } if (iwish != 2) { for (i = 1; i <= m; i++) { xs[i][ifirst] = w[i]; } for (j = 1; j <= n; j++) { sm = -bs[j][ifirst]; for (k = 1; k <= m; k++) { sm = sm + aa[j][k] * (w[k]); © 2008 by Taylor & Francis Group, LLC
} rres[j][ifirst] = -sm; } } else if (iwish == 2) { for (isecnd= 1; isecnd <= m; isecnd++) { k = ir[ifirst]; apsudo[isecnd][k] = w[isecnd]; } } } /*------------------------------------------------------------------lu factorization for LA_Eluls -------------------------------------------------------------------*/ void LA_eluls_l_decomp (int n, int m, tMatrix_R tempp, tMatrix_R l, tMatrix_R u, int *pIrank) { int i, j, k, irank1; tNumber_R sm; if (*pIrank < n) { irank1 = *pIrank + 1; for (i = irank1; i <= n; i++) { for (j = *pIrank; j <= m; j++) { u[i][j] = 0.0; } } } for (i = 1; i <= *pIrank; i++) { for (j = 1; j <= *pIrank; j ++) { sm = 0.0; for (k = 1; k <= n; k++) { sm = sm + l[k][i] * (l[k][j]); } tempp[i][j] = sm; } © 2008 by Taylor & Francis Group, LLC
}
} /*------------------------------------------------------------------Factorizing matrix u for LA_Eluls() -------------------------------------------------------------------*/ void LA_eluls_u_decomp (int m, tMatrix_R tempp, tMatrix_R u, tVector_R diag, int *pIrank) { int i, j, k; tNumber_R con, sm; for (i = 1; i <= *pIrank; i++) { u[i][i] = 1.0; k = i + 1; con = diag[i]; for (j = k; j <= m; j++) { u[i][j] = u[i][j]/con; } } for (i = 1; i <= *pIrank; i++) { for (j = 1; j <= *pIrank; j++) { sm = 0.0; for (k = 1; k <= m; k++) { sm = sm + u[i][k] * (u[j][k]); } tempp[i][j] = sm; } } } /*------------------------------------------------------------------LA_chols --------------------------------------------------------------------This program calculates the Cholesky decomposition of a positive definite symmetric matrix q. Matrix "q" is decomposed into q = el*el(transpose) where "el" is a lower triangular matrix.
© 2008 by Taylor & Francis Group, LLC
Inputs irank The dimension of the square matrix q. q An m by m matrix whose first "irank" rows and "irank" columns contain the positive definite symmetric matrix "q". Outputs el An "irank" by "irank" matrix whose "irank" rows and "irank" columns contain the lower triangular matrix "el" of the decomposition q = el*el (transpose) -------------------------------------------------------------------*/ void LA_chols (int irank, tMatrix_R q, tMatrix_R el) { int i, j, k, im1, jp1; tNumber_R c, sm; j = 1; c = q[1][1]; el[1][1] = sqrt (c); if (irank == 1) return; for (j = 1; j <= irank-1; j++) { jp1 = j + 1; el[jp1][1] = q[1][jp1]/el[1][1]; if (j > 1) { for (i = 2; i <= j; i++) { im1 = i - 1; c = -q[i][jp1]; sm = c; for (k = 1; k <= im1; k++) { sm = sm + el[i][k] * (el[jp1][k]); } el[jp1][i] = -sm/el[i][i]; } } for (i = 1; i <= j; i++) { el[i][jp1] = 0.0; } c = -q[jp1][jp1]; sm = c; for (k = 1; k <= j; k++) © 2008 by Taylor & Francis Group, LLC
        {
            sm = sm + el[jp1][k] * (el[jp1][k]);
        }
        el[jp1][jp1] = sqrt (-sm);
    }
}
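As a quick illustration of LA_chols(), the following is a minimal usage sketch, not part of the library listing. It assumes the alloc_Matrix_R()/free_Matrix_R() helpers used by the driver programs; tMatrix_R indexing is 1-based, as in the routines above.

/* Sketch: decompose the 2 by 2 positive definite matrix
   q = {{4, 2}, {2, 3}} into q = el*el(transpose). */
void DR_chols_sketch (void)
{
    tMatrix_R q  = alloc_Matrix_R (2, 2);
    tMatrix_R el = alloc_Matrix_R (2, 2);

    q[1][1] = 4.0;  q[1][2] = 2.0;
    q[2][1] = 2.0;  q[2][2] = 3.0;

    LA_chols (2, q, el);
    /* Expected: el[1][1] = 2, el[2][1] = 1, el[2][2] = sqrt(2) */

    free_Matrix_R (el, 2);
    free_Matrix_R (q, 2);
}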
17.15 DR_Hhls /*------------------------------------------------------------------DR_Hhls --------------------------------------------------------------------This program is a driver for the function LA_Hhls(), which calculates the minimal-length least squares solution of a system of linear equations and/or calculates the pseudo-inverse of the coefficient matrix. It uses Householder's "qrp" decomposition method with pivoting. The system of linear equations is of the form a*xs = bs "a" is a given real n by m matrix of rank k, k <= [min (n,m)] "bs" is (are) given real r.h.s. n vector(s). "xs" is (are) the m solution vector(s). The system of linear equations might be overdetermined, determined or underdetermined. The required results are obtained according to a parameter "ientry" specified by the user: 0 LA_Hhls() calculates the factorization matrices q, r and p of "a(permuted)" = q * r * p(transpose). 1 LA_Hhls() calculates the matrices "q", "r" and "p" + the least squares solution vector(s) "xs" (if "bs" != 0). 2 LA_Hhls() calculates the matrices q", "r" and "p" + the pseudo-inverse of "a", "apsudo". 3 LA_Hhls() calculates the matrices q", "r" and "p" + the pseudo-inverse of matrix "a" + the least squares solution(s) (if "bs" != 0). This program contains 3 examples whose results appear in the text. Example 1: matrix "a" is 8 by 5 well conditioned matrix of rank 3. There are 3 r.h.s. vectors "bs". Example 2: matrix "a" is 6 by 5 badly conditioned matrix of rank 5. There are 2 r.h.s. vectors "bs".
Example 3: matrix "a" is 4 by 5 badly conditioned matrix.
           There are no r.h.s. vectors "bs".
           In single precision, the calculated rank of
           coefficient matrix "a" = 3.
           In double precision, the calculated rank of
           coefficient matrix "a" = 4.
-------------------------------------------------------------------*/
#include "DR_Defs.h"
#include "LA_Prototypes.h"

#define Na2      8
#define Ma2      5
#define irhsa    3
#define Nb2      6
#define Mb2      5
#define irhsb    2
#define Nc2      4
#define Mc2      5
#define irhsc    0
void DR_Hhls (void)
{
    /*----------------------------------------
      Constant matrices/vectors
    ----------------------------------------*/
    static tNumber_R a1init[Na2][Ma2] =
    {
        { 22.0, 10.0,  2.0,   3.0,  7.0 },
        { 14.0,  7.0, 10.0,   0.0,  8.0 },
        { -1.0, 13.0, -1.0, -11.0,  3.0 },
        { -3.0, -2.0, 13.0,  -2.0,  4.0 },
        {  9.0,  8.0,  1.0,  -2.0,  4.0 },
        {  9.0,  1.0, -7.0,   5.0, -1.0 },
        {  2.0, -6.0,  6.0,   5.0,  1.0 },
        {  4.0,  5.0,  0.0,  -2.0,  2.0 }
    };
    static tNumber_R bs1init[Na2][irhsa] =
    {
        { -1.0,  1.0,  0.0 },
        {  2.0, -1.0,  1.0 },
        {  1.0, 10.0, 11.0 },
        {  4.0,  0.0,  4.0 },
        {  0.0, -6.0, -6.0 },
        { -3.0,  6.0,  3.0 },
        {  1.0, 11.0, 12.0 },
        {  0.0, -5.0, -5.0 }
    };
    static tNumber_R a2init[Nb2][Mb2] =
    {
        {    36.0,    -630.0,     3360.0,    -7560.0,     7560.0 },
        {  -630.0,   14700.0,   -88200.0,   211680.0,  -220500.0 },
        {  3360.0,  -88200.0,   564480.0, -1411200.0,  1512000.0 },
        { -7560.0,  211680.0, -1411200.0,  3628800.0, -3969000.0 },
        {  7560.0, -220500.0,  1512000.0, -3969000.0,  4410000.0 },
        { -2772.0,   83160.0,  -582120.0,  1552320.0, -1746360.0 }
    };
    static tNumber_R bs2init[Nb2][irhsb] =
    {
        {     463.0,   -4157.0 },
        {  -13860.0,  -17820.0 },
        {   97020.0,   93555.0 },
        { -258720.0, -261800.0 },
        {  291060.0,  288288.0 },
        { -116424.0, -118944.0 }
    };
    static tNumber_R a3init[Nc2][Mc2] =
    {
        { 0.4087, 0.1593, 0.6594, 0.4302, 0.3516 },
        { 0.6246, 0.3383, 0.6591, 0.9342, 0.9038 },
        { 0.0661, 0.9112, 0.6898, 0.1931, 0.1498 },
        { 0.2112, 0.8150, 0.7983, 0.3406, 0.2803 }
    };

    /*----------------------------------------
      Variable matrices/vectors
    ----------------------------------------*/
    tMatrix_R aa     = alloc_Matrix_R (NN2_ROWS, MM2_COLS);
    tMatrix_R bs     = alloc_Matrix_R (NN2_ROWS, KK2_COLS);
    tMatrix_R xs     = alloc_Matrix_R (MM2_COLS, KK2_COLS);
    tMatrix_R q      = alloc_Matrix_R (NN2_ROWS, NN2_ROWS);
    tMatrix_R r      = alloc_Matrix_R (MM2_COLS, MM2_COLS);
    tMatrix_R p      = alloc_Matrix_R (MM2_COLS, MM2_COLS);
    tMatrix_R apsudo = alloc_Matrix_R (MM2_COLS, NN2_ROWS);
    tMatrix_R res    = alloc_Matrix_R (NN2_ROWS, KK2_COLS);
    tMatrix_R aux    = alloc_Matrix_R (NN2_ROWS, NN2_ROWS);
    tMatrix_R auy    = alloc_Matrix_R (NN2_ROWS, MM2_COLS);
    tVector_R bb     = alloc_Vector_R (NN2_ROWS);

    tMatrix_R a1  = init_Matrix_R (&(a1init[0][0]), Na2, Ma2);
    tMatrix_R bs1 = init_Matrix_R (&(bs1init[0][0]), Na2, irhsa);
    tMatrix_R a2  = init_Matrix_R (&(a2init[0][0]), Nb2, Mb2);
    tMatrix_R bs2 = init_Matrix_R (&(bs2init[0][0]), Nb2, irhsb);
    tMatrix_R a3  = init_Matrix_R (&(a3init[0][0]), Nc2, Mc2);

    int irank, irhs, ientry;
    int i, j, k, m, n, Iexmpl;

    tNumber_R s, sa, sum1, sum2, aerr;

    eLaRc rc = LaRcOk;
(&(a1init[0][0]), Na2, Ma2); (&(bs1init[0][0]), Na2,irhsa); (&(a2init[0][0]), Nb2, Mb2); (&(bs2init[0][0]), Nb2,irhsb); (&(a3init[0][0]), Nc2, Mc2);
prn_dr_bnr ("DR_Hhls, Minimal Length L2 Solution of a System of" " Linear Equations by Householder's \"QR\"" " Decomposition"); for (Iexmpl = 1; Iexmpl <= 3; Iexmpl++) { switch (Iexmpl) { case 1: ientry = 3; n = Na2; m = Ma2; irhs = irhsa; for (i = 1; i <= n; i++) { for (j = 1; j <= m; j++) { aa[i][j] = a1[i][j]; } } for (i = 1; i <= n; i++) { for (j = 1; j <= irhs; j++) { bs[i][j] = bs1[i][j]; } } break; © 2008 by Taylor & Francis Group, LLC
case 2: ientry = 3; n = Nb2; m = Mb2; irhs = irhsb; for (i = 1; i <= n; i++) { for (j = 1; j <= m; j++) { aa[i][j] = a2[i][j]; } } for (i = 1; i <= n; i++) { for (j = 1; j <= irhs; j++) { bs[i][j] = bs2[i][j]; } } break; case 3: ientry = 2; n = Nc2; m = Mc2; irhs = 0; for (i = 1; i <= n; i++) { for (j = 1; j <= m; j++) { aa[i][j] = a3[i][j]; } } break; default: break; } prn_algo_bnr ("Hhls"); prn_example_delim(); PRN ("Example #%d: Size of matrix \"a\" %d by %d\n", Iexmpl, n, m); © 2008 by Taylor & Francis Group, LLC
prn_example_delim(); PRN ("Minimal Least Squares Solution(s) of a System of" " Linear Equations Using Householder's \"QR\"" " Decomposition\n"); prn_example_delim(); PRN ("Coefficient Matrix \"a\"\n"); prn_Matrix_R (aa, n, m); if (irhs != 0) { PRN ("\n"); PRN ("Right Hand Vector(s) \"bs\"\n"); prn_Matrix_R (bs, n, irhs); } rc = LA_Hhls (ientry, n, m, irhs, aa, bs, q, r, p, &irank, apsudo, xs, res); if (rc >= LaRcOk) { PRN ("\n"); PRN ("Results of the Least Squares Problem\n"); PRN ("Rank of the coefficient matrix = %d\n\n", irank); PRN ("Orthogonal Matrix \"q\"\n"); prn_Matrix_R (q, n, irank); PRN ("Upper Triangular Matrix \"r\"\n"); prn_Matrix_R (r, irank, irank); PRN ("Orthogonal Matrix \"p\"\n"); prn_Matrix_R (p, m, irank); if (ientry >= 2) { PRN ("Pseudo-inverse Matrix \"apsudo\"\n"); prn_Matrix_R (apsudo, m, n); s = 0.0; for (j = 1; j <= m; j++) { for (i = 1; i <= n; i++) { s = s + aa[i][j] * (aa[i][j]); © 2008 by Taylor & Francis Group, LLC
Chapter 17: DR_Hhls
651
} } sum1 = sqrt (s); for (j = 1; j <= n; j++) { for (i = 1; i <= n; i++) { s = 0.0; for (k = 1; k <= m; k++) { s = s + aa[i][k] * (apsudo[k][j]); } aux[i][j] = s; } } for (j = 1; j <= m; j++) { for (i = 1; i <= n; i++) { s = 0.0; for (k = 1; k <= n; k++) { s = s + aux[i][k] * (aa[k][j]); } auy[i][j] = s - aa[i][j]; } } sa = 0.0; for (j = 1; j <= m; j++) { for (i = 1; i <= n; i++) { sa = sa + auy[i][j] * (auy[i][j]); } } sum2 = sqrt (sa); aerr = sum2/sum1; PRN ("||a*apsudo*a - a||/||a|| = %22.15f\n", aerr); } if ((ientry == 1) || (ientry == 3)) { © 2008 by Taylor & Francis Group, LLC
652
Numerical Linear Approximation in C if (irhs != 0) { PRN ("\n"); PRN ("The Least Squares Solution(s)\n"); for (i = 1; i <= irhs; i++) { for (j = 1; j <= n; j++) { bb[j] = bs[j][i]; } PRN ("Right Hand Vector \"b\"\n"); prn_Vector_R (bb, n); for (j = 1; j <= m; j++) { bb[j] = xs[j][i]; } PRN ("Solution vector \"x\"\n"); prn_Vector_R (bb, m); for (j = 1; j <= n; j++) { bb[j] = res[j][i]; } s = 0.0; for (j = 1; j <= n; j++) { s = s + bb[j] * (bb[j]); } sum1 = sqrt (s); PRN ("Residual vector \"res\"\n"); prn_Vector_R (bb, n); } } } } prn_la_rc (rc); } free_Matrix_R free_Matrix_R free_Matrix_R free_Matrix_R
(aa, NN2_ROWS); (bs, NN2_ROWS); (xs, MM2_COLS); (q, NN2_ROWS);
    free_Matrix_R (r, MM2_COLS);
    free_Matrix_R (p, MM2_COLS);
    free_Matrix_R (apsudo, MM2_COLS);
    free_Matrix_R (res, NN2_ROWS);
    free_Matrix_R (aux, NN2_ROWS);
    free_Matrix_R (auy, NN2_ROWS);
    free_Vector_R (bb);

    uninit_Matrix_R (a1);
    uninit_Matrix_R (bs1);
    uninit_Matrix_R (a2);
    uninit_Matrix_R (bs2);
    uninit_Matrix_R (a3);
}
17.16 LA_Hhls /*------------------------------------------------------------------LA_Hhls --------------------------------------------------------------------This program calculates the minimal-length least squares solution(s) of a system of linear equations and/or the pseudo-inverse of the coefficient matrix. It uses the Householder's "QR" decomposition method with pivoting. The system of linear equations is of the form a*xs = bs "a" is a given real n by m matrix of rank k, k <= [min (n,m)]. "bs" is (are) the given r.h.s. n vector(s). "xs" is (are) the m solution vector(s). In this method the permuted matrix "a(permuted)" is factorized into (1)
"a(permuted)" = q*r*p(transpose)
See description of "q", "r" and "p" below". Inputs n m irhs a
Number of rows of matrix "a" in the system a*xs = bs. Number of columns of matrix "a" in the system a*xs = bs. Number of columns of the r.h.s matrix "bs". A real n by m matrix of the given equation a*xs = bs. This matrix is not destroyed in the computation. bs A real n by "irhs" matrix of the r.h.s. of a*xs = bs. ientry An integer specifying the action by the user: 0 LA_Hhls() calculates the factorization matrices q, r and p of "a(permuted)" = q * r * p(transpose). 1 LA_Hhls() calculates the matrices "q", "r" and "p" + the least squares solution vector(s) "xs" (if "bs" != 0). 2 LA_Hhls() calculates the matrices q", "r" and "p" + the pseudo-inverse of "a", "apsudo". 3 LA_Hhls() calculates the matrices q", "r" and "p" + the pseudo-inverse of matrix "a" + the least squares solution(s) (if "bs" != 0). Local Data ic An integer n vector containing the column permutation of
© 2008 by Taylor & Francis Group, LLC
Chapter 17: LA_Hhls
655
matrix "a". t A real working matrix. b, x, w, dm Real working vectors. Outputs irank The calculated rank of matrix "a". q An n by n matrix whose first n rows and "irank" columns contain the orthogonal matrix "q" in the factorization (1) above. r A real m by m matrix whose first "irank" rows and "irank" columns contain the triangular matrix "r" in the factorization (1) above. p An m by m matrix whose first "irank" rows and m columns contain matrix "p(transpose)" of the factorization (1) above. xs A real m by "irhs" matrix containing the minimal length least squares solution matrix "xs" of system a*xs = bs. res A real n by "irhs" matrix containing the residual vectors given by (bs - a*xs) for the "irhs" solutions of a*xs = bs. apsudo A real m by n matrix containing the pseudo-inverse of matrix "a". NOTE: The calculated results "xs" and "apsudo" are for the given equation a*xs = bs, (not for the permuted one). Returns one of LaRcSolutionFound LaRcErrBounds LaRcErrNullPtr LaRcErrAlloc -------------------------------------------------------------------*/ #include "LA_Prototypes.h" eLaRc LA_Hhls (int ientry, int n, int m, int irhs, tMatrix_R aa, tMatrix_R bs, tMatrix_R q, tMatrix_R r, tMatrix_R p, int *pIrank, tMatrix_R apsudo, tMatrix_R xs, tMatrix_R res) { tMatrix_R t = alloc_Matrix_R (n, m); tVector_R x = alloc_Vector_R (m); tVector_R w = alloc_Vector_R (n + 1); tVector_R dm = alloc_Vector_R (n + 1); tVector_I ic = alloc_Vector_I (m); tVector_R b = alloc_Vector_R (n);
© 2008 by Taylor & Francis Group, LLC
    int       iout = 0, iwish = 0, in = 0, ifirst = 0, isecnd = 0,
              irank1 = 0;
    int       i = 0, ij = 0;
    tNumber_R eps2 = 0.0;
/* Validation of the data before executing the algorithm */ eLaRc rc = LaRcSolutionFound; VALIDATE_BOUNDS ((0 < n) && (0 < m) && !((n == 1) && (m == 1)) && (0 <= irhs) && IN_BOUNDS (ientry, 0, 3)); VALIDATE_PTRS (aa && bs && q && r && p && pIrank && apsudo && xs && res); VALIDATE_ALLOC (t && x && w && dm && ic && b); /* Initialization */ eps2 = EPS*EPS; iwish = ientry; LA_hhls_init (n, m, aa, q, t, ic); iout = 1; /* Calculation of matrix q */ LA_hhls_calcul_q (&iout, n, m, q, t, ic, w, dm); *pIrank = iout - 1; irank1 = *pIrank + 1; /* Intialize matrix p */ LA_hhls_init_p (n, m, pIrank, r, t, p); /* Calculation of matrix r */ LA_hhls_calcul_r_p (pIrank, m, r, p, w, dm); if (iwish == 0) { GOTO_CLEANUP_RC (LaRcSolutionFound); } in = n; if (iwish != 2) in = irhs; for (ij = 1; ij <= 2; ij++) { for (ifirst = 1; ifirst <= in; ifirst++) { © 2008 by Taylor & Francis Group, LLC
if (iwish != 2) { for (i = 1; i <= n; i++) { b[i] = bs[i][ifirst]; } } else if (iwish == 2) { for (isecnd = 1; isecnd <= n; isecnd++) { b[isecnd] = 0.0; } b[ifirst] = 1.0; } /* Calculate the results */ LA_hhls_calcul_res (iwish, ifirst, isecnd, pIrank, n, m, aa, bs, xs, apsudo, q, r, p, ic, b, x, w, dm, res); } if ((iwish == 1) || (iwish == 2)) { GOTO_CLEANUP_RC (LaRcSolutionFound); } iwish = 2; in = n; } CLEANUP: free_Matrix_R free_Vector_R free_Vector_R free_Vector_R free_Vector_I free_Vector_R
(t, n); (x); (w); (dm); (ic); (b);
return rc; } /*------------------------------------------------------------------Calculation of matrix q in LA_Hhls() -------------------------------------------------------------------*/ void LA_hhls_calcul_q (int *pIout, int n, int m, tMatrix_R q, © 2008 by Taylor & Francis Group, LLC
    tMatrix_R t, tVector_I ic, tVector_R w, tVector_R dm)
{
    int       nless, kdp, iout1;
    int       i, ij, j, k;

    tNumber_R beta, e, eps2, piv, s, sqsg;
eps2 = EPS*EPS; nless = n; if (m < n) nless = m; /* Calculation of matrix q */ for (ij = 1; ij <= nless; ij++) { /* Permuting the columns of matrix t */ piv = 0.0; kdp = *pIout; for (i = *pIout; i <= m; i++) { s = 0.0; for (k = *pIout; k <= n; k++) { s = s + t[k][i] * t[k][i]; } if (s >= piv) { piv = s; kdp = i; } } if (piv >= eps2) { if (*pIout != kdp) { /* Swap two elements of integer vector "ic" */ swap_elems_Vector_I (ic, kdp, *pIout); /* Swap two columns of real matrix "t" */ for (i = 1; i <= n; i++) { e = t[i][*pIout]; t[i][*pIout] = t[i][kdp]; t[i][kdp] = e; } © 2008 by Taylor & Francis Group, LLC
Chapter 17: LA_Hhls } if (*pIout < n) { e = t[*pIout][*pIout]; if (e < 0.0) e = -e; sqsg = sqrt (piv); beta = 1.0/ (piv + sqsg * e); for (i = 1; i <= n; i++) { w[i] = 0.0; if (i > *pIout) { w[i] = t[i][*pIout]; } else if (i == *pIout) { e = sqsg; if (t[*pIout][*pIout] < 0.0) e = -e; w[i] = t[*pIout][*pIout] + e; } } for (j = *pIout; j <= m; j++) { s = 0.0; for (k = *pIout; k <= n; k++) { s = s + w[k] * (t[k][j]); } dm[j] = s * beta; } for (j = *pIout; j <= m; j++) { for (i = *pIout; i <= n; i++) { t[i][j] = t[i][j] - w[i] * (dm[j]); } } iout1 = *pIout + 1; for (i = iout1; i <= n; i++) { t[i][*pIout] = 0.0; © 2008 by Taylor & Francis Group, LLC
}
} /*------------------------------------------------------------------Intialize matrix p in LA_Hhls() -------------------------------------------------------------------*/ void LA_hhls_init_p (int n, int m, int *pIrank, tMatrix_R r, tMatrix_R t, tMatrix_R p) { int i, j, irank1; irank1 = *pIrank + 1; if (*pIrank != n) { for (j = *pIrank; j <= m; j++) { for (i = irank1; i <= n; i++) { t[i][j] = 0.0; } } } © 2008 by Taylor & Francis Group, LLC
for (j = 1; j <= m; j++) { for (i = 1; i <= *pIrank; i++) { r[i][j] = t[i][j]; } } for (j = 1; j <= m; j++) { for (i = 1; i <= m; i ++) { p[i][j] = 0.0; } p[j][j] = 1.0; } } /*------------------------------------------------------------------Calculating matrix r in LA_Hhls() -------------------------------------------------------------------*/ void LA_hhls_calcul_r_p (int *pIrank, int m, tMatrix_R r, tMatrix_R p, tVector_R w, tVector_R dm) { int iout, irank1; int i, ij, j, k; tNumber_R
e, s, abass, beta, sigma, sqsg;
irank1 = *pIrank + 1; if (*pIrank != m) { iout = *pIrank; for (ij = 1; ij <= *pIrank; ij++) { e = r[iout][iout]; if (e < 0.0) e = -e; abass = e; s = abass*abass; for (k = irank1; k <= m; k++) { s = s + r[iout][k] * (r[iout][k]); } © 2008 by Taylor & Francis Group, LLC
sigma = s; sqsg = sqrt (sigma); beta = 1.0/ (sigma + abass * sqsg); for (j = 1; j <= m; j++) { w[j] = 0.0; if (j > *pIrank) { w[j] = r[iout][j]; } else if (j == *pIrank) { e = sqsg; if (r[iout][iout] < 0.0) e = -e; w[iout] = r[iout][iout] + e; } } for (i = 1; i <= m; i++) { s = 0.0; for (k = 1; k <= m; k++) { s = s + p[i][k] * (w[k]); } dm[i] = s * beta; } for (j = 1; j <= m; j++) { for (i = 1; i <= m; i++) { p[i][j] = p[i][j] - dm[i] * (w[j]); } } for (i = 1; i <= *pIrank; i++) { s = 0.0; for (k = 1; k <= m; k++) { s = s + r[i][k] * (w[k]); } © 2008 by Taylor & Francis Group, LLC
dm[i] = s * beta; } for (j = 1; j <= m; j++) { for (i = 1; i <= *pIrank; i++) { r[i][j] = r[i][j] - dm[i] * (w[j]); } } iout = iout - 1; if (iout < 1) { return; } } } } /*------------------------------------------------------------------Calculate the results of LA_Hhls() -------------------------------------------------------------------*/ void LA_hhls_calcul_res (int iwish, int ifirst, int isecnd, int *pIrank, int n, int m, tMatrix_R aa, tMatrix_R bs, tMatrix_R xs, tMatrix_R apsudo, tMatrix_R q, tMatrix_R r, tMatrix_R p, tVector_I ic, tVector_R b, tVector_R x, tVector_R w, tVector_R dm, tMatrix_R res) { int ip1; int i, ii, j, k; tNumber_R
s;
for (i = 1; i <= *pIrank; i++) { s = 0.0; for (k = 1; k <= n; k++) { s = s + b[k] * (q[k][i]); } dm[i] = s; } w[*pIrank] = dm[*pIrank]/r[*pIrank][*pIrank]; © 2008 by Taylor & Francis Group, LLC
    if (*pIrank != 1)
    {
        for (ii = 2; ii <= *pIrank; ii++)
        {
            i = *pIrank - ii + 1;
            ip1 = i + 1;
            s = -dm[i];
            for (k = ip1; k <= *pIrank; k++)
            {
s = s + r[i][k] * (w[k]); } w[i] = -s/r[i][i]; } } for (i = 1; i <= m; i++) { s = 0.0; for (k = 1; k <= *pIrank; k++) { s = s + w[k] * (p[i][k]); } dm[i] = s; k = ic[i]; x[k] = dm[i]; } if (iwish == 2) { for (isecnd = 1; isecnd <= m; isecnd++) { apsudo[isecnd][ifirst] = x[isecnd]; } } else if (iwish != 2) { for (i = 1; i <= m; i++) { xs[i][ifirst] = x[i]; } for (j = 1; j <= n; j++) { s = -bs[j][ifirst]; for (k = 1; k <= m; k++) © 2008 by Taylor & Francis Group, LLC
Chapter 17: LA_Hhls
665
{ s = s + aa[j][k] * (x[k]); } res[j][ifirst] = -s; } } } /*------------------------------------------------------------------Initialization of LA_Hhls() -------------------------------------------------------------------*/ void LA_hhls_init (int n, int m, tMatrix_R aa, tMatrix_R q, tMatrix_R t, tVector_I ic) { int i, j; for (j = 1; j <= m; j++) { ic[j] = j; for (i = 1; i <= n; i++) { t[i][j] = aa[i][j]; } } for (j = 1; j <= n; j++) { for (i = 1; i <= n; i++) { q[i][j] = 0.0; } q[j][j] = 1.0; } }
/*------------------------------------------------------------------LA_Hhlsro --------------------------------------------------------------------Given is the real n by m matrix "c" of rank m <= n and a real n vector "f" for the equation (a)
c*a = f
This program applies Householder's transformation to obtain the
upper triangular matrix, which is the r.h.s. in the orthogonal
factorization

                          [ r | h ]
(b)  q(transpose)*[c|f] = [---|---]
                          [ 0 |rho]

where "q" is a real n by n orthogonal matrix,
q(transpose)*q = I (unit matrix). "q" is neither calculated nor
stored. The matrix on the r.h.s. of (b) is an upper (m+1) by (m+1)
matrix. "r" is a real m by m upper triangular matrix. "h" is a real
m vector given by q(transpose)*f = h. "rho" is a real scalar that
equals plus or minus the L-Two norm of the residual vector
res = (f - c*a), i.e., |rho| = ||c*a - f||, where ||.|| denotes the
L-Two norm.
This program then calculates the least squares solution of the
overdetermined system (a) above, given by
     a = (r**-1)*h
Returns one of
     LaRcSolutionFound
     LaRcInconsistentSystem
     LaRcErrBounds
     LaRcErrNullPtr
-------------------------------------------------------------------*/
eLaRc LA_Hhlsro (int n, int m, tMatrix_R c, tVector_R f, tVector_R a,
    tMatrix_R r, tVector_R res, tMatrix_R t, tVector_R dm,
    tVector_R w, tNumber_R *pRho)
{
    int iend, mp1;
    int i, j;
    tNumber_R e, eps2;
    eLaRc tempRc;

    eLaRc rc = LaRcSolutionFound;
    VALIDATE_BOUNDS ((0 < n) && (0 < m) && !((n == 1) && (m == 1)));

    mp1 = m + 1;
    eps2 = EPS * EPS;
    /* Initialization, mapping matrix "c" on matrix t */
    for (j = 1; j <= m; j++)
    {
        for (i = 1; i <= n; i++)
        {
            t[i][j] = c[i][j];
        }
    }
    for (i = 1; i <= n; i++)
    {
        t[i][mp1] = f[i];
    }

    iend = mp1;
    if (n == mp1)
    {
        iend = m;
    }
    if (n == m)
    {
        iend = m - 1;
    }
/* Householder"s transformation of matrix "t" This will never return a failure code */ tempRc = LA_hhlsro_hh_t (iend, n, m, t, dm, w); if (tempRc < LaRcOk) { return tempRc; } for (j = 1; j <= mp1; j++) { for (i = 1; i <= mp1; i++) { r[i][j] = t[i][j]; } } e = r[m][m]; if (e < 0.0) { e = -e; © 2008 by Taylor & Francis Group, LLC
Numerical Linear Approximation in C } if (e < EPS) { GOTO_CLEANUP_RC (LaRcInconsistentSystem); } /* Calculating the least squares solution vector "a" */ LA_hhlsro_x_res (n, m, c, f, a, r, res, pRho);
CLEANUP: return rc; } /*------------------------------------------------------------------Householder"s transformation of matrix "t" in LA_Hhlsro() -------------------------------------------------------------------*/ eLaRc LA_hhlsro_hh_t (int iend, int n, int m, tMatrix_R t, tVector_R dm, tVector_R w) { int i, j, k, iout, kdp, mp1; tNumber_R e, s, beta, piv, eps2, sqsg; eps2 = EPS * EPS; mp1 = m + 1; for (iout = 1; iout <= iend; iout++) { piv = 0.0; kdp = iout; s = 0.0; for (k = iout; k <= n; k++) { s = s + t[k][iout] * (t[k][iout]); } piv = s; /* Pemature exit due to rank deficiency of coefficient matrix */ if ((iout <= m) && (piv < eps2)) return LaRcNoFeasibleSolution; /* Householder"s transformation of matrix "t" */ e = t[iout][iout]; if (e < 0.0) e = - e; © 2008 by Taylor & Francis Group, LLC
Chapter 17: LA_Hhls
669
sqsg = sqrt (piv); beta = 1.0 / (piv + sqsg * e); for (i = 1; i <= n; i++) { w[i] = 0.0; if (i >= iout) { if (i > iout) w[i] = t[i][iout]; if (i == iout) { e = sqsg; if (t[iout][iout] < 0.0) e = - e; w[i] = t[iout][iout] + e; } } } for (j = iout; j <= mp1; j++) { s = 0.0; for (k = iout; k <= n; k++) { s = s + w[k] * (t[k][j]); } dm[j] = s * beta; } for (j = iout; j <= mp1; j++) { for (i = iout; i <= n; i++) { t[i][j] = t[i][j] - w[i] * (dm[j]); } } } return LaRcOk; } /*------------------------------------------------------------------Calculating the least squares solution vector in LA_Hhlsro() -------------------------------------------------------------------*/ void LA_hhlsro_x_res (int n, int m, tMatrix_R c, tVector_R f, tVector_R a, tMatrix_R r, tVector_R res, tNumber_R *pRho) { © 2008 by Taylor & Francis Group, LLC
    int       i, j, k, ii, ip1, mp1;
    tNumber_R s;
    mp1 = m + 1;
    a[m] = r[m][mp1]/r[m][m];
    if (m != 1)
    {
        for (ii = 2; ii <= m; ii++)
        {
            i = m - ii + 1;
            ip1 = i + 1;
            s = -r[i][mp1];
            for (k = ip1; k <= m; k++)
            {
s = s + r[i][k] * a[k]; } a[i] = -s/r[i][i]; } } for (j = 1; j <= n; j++) { s = -f[j]; for (k = 1; k <= m; k++) { s = s + c[j][k] * (a[k]); } res[j] = -s; } if (n == m) r[mp1][mp1] = 0.0; /* L2 norm of residual vector |rho| */ *pRho = r[mp1][mp1]; if (*pRho < 0.0) *pRho = -*pRho; }
Chapter 18 Piecewise Linear Least Squares Approximation
18.1 Introduction
In Chapter 9, two algorithms for the piecewise linear approximation of plane curves in the L1 norm are described [2]. In Chapter 15, two corresponding algorithms in the Chebyshev norm are given [1]. In this chapter, we describe two corresponding algorithms for the piecewise linear approximation of plane curves in the L2 or the least squares norm [4].
The problem of piecewise linear approximation of plane curves in the least squares sense has been dealt with by a number of authors. As early as 1961, Stone [18] presented an algorithm for finding the best approximation to a given convex function f(x) on a finite interval by K straight line segments using the least squares norm. He formulated the problem, giving a closed form solution when the approximated function is quadratic. Bellman [5] showed that when the approximated function is quadratic, dynamic programming also provides a solution. For both Stone and Bellman, the L2 errors between an approximated segment and the approximating straight line are measured in the y-axis direction. Cantoni [6] described a method for finding the break points for segment approximation using the weighted least squares norm.
Pavlidis [11] derived necessary and sufficient conditions and suggested simple functional iteration algorithms for locating the break points of the L2 piecewise approximation of continuous differentiable functions of one and two variables. Pavlidis [13] also used Newton's method to optimally locate the break points for a continuous differentiable function for L2 piecewise linear approximation. In [11], Pavlidis measured the errors
between the approximated segments and the approximating straight lines in the y-axis direction, while in [13] Pavlidis measured the errors in the Euclidean distance, i.e., perpendicular to the approximating line for each segment. Salotti [15] presented a new algorithm for an optimal polygonal approximation of digitized curves using the least squares norm criterion. By optimal is meant using the minimum number of segments. Sarkar et al. [16] presented a genetic algorithm-based approach for the detection of significant vertices for polygonal approximation of digital curves in the L2 norm. The error is the Euclidean distance between the given points and the line segments. For the applications and the merits of piecewise linear approximation of plane curves, see Section 9.1.1 and also Pavlidis [10, 12]. In our work, the following two algorithms for the piecewise linear approximation in the least squares sense are presented: (a) when the L2 residual norm in any segment is not to exceed a pre-assigned value, and (b) when the number of segments is given and a near-balanced L2 residual norm solution is required; that is, all the segments would have nearly the same L2 residual norm. The errors are measured along the y-axis direction. As usual, the problem is solved by first digitizing the given curve into discrete points. Then each algorithm is applied to the discrete points. No conditions are imposed on the approximating functions; they may be arbitrary polynomials and need not be constants or straight lines, as is the case in the majority of the published works. In Section 18.2, the characteristics of the piecewise approximation are outlined. In Section 18.3, the discrete linear L2 approximation problem is presented. In Section 18.4, the two algorithms listed above are outlined. In Section 18.5, numerical results and comments are given. Section 18.6 offers a brief tutorial describing the technique of updating and downdating systems of linear equations, used for calculating the piecewise linear approximation of plane curves in the least squares sense.
18.1.1 Preliminaries and notation
For convenience, the following notation is used again. Let us be given the plane curve y = f(x) defined on the interval [a, b]. Let this curve be digitized at the K points (xi, f(xi)), i = 1, 2, …, K, where x1 = a and xK = b. Let n be the number of segments (pieces) in the piecewise approximation. For the first algorithm, n is not known beforehand. Finally, let (zj), j = 1, 2, …, n, be the L2 residual norms of the n segments.
18.2 Characteristics of the approximation
As indicated in Chapters 9 and 15, assume that f(x) is continuous and satisfies a Lipschitz condition on [a, b]. Following the analysis of Lawson [9] for the Chebyshev norm, one shows that for segment k, 1 ≤ k ≤ n, the L2 norm zk has the following properties: (a) zk is a continuous function of the end points of the segment, (b) zk is non-increasing in the segment's left end point, and (c) zk is non-decreasing in the segment's right end point. For the meanings of these characteristics, see Section 9.2. One also shows that if a piecewise L2 approximation is calculated for the curve f(x) (not for the discrete points of the curve) and if z1 = z2 = … = zn, then the solution is optimal. Such a solution always exists and is known as a balanced residual norm solution. However, in the case of the approximation of the digitized curve, a balanced residual norm solution may not exist. One may in this case attempt to minimize the variance of the residual norms (zk) and get a near-balanced L2 solution.
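As an illustration of this criterion, the following small helper computes the variance of the segment norms (zk) that a near-balanced solution tries to drive down. It is a sketch only, using plain 0-based C arrays rather than the 1-based vectors of the library, and it is not part of the library's interface.

/* Illustration only: variance of the segment L2 norms z[0..nseg-1].
   A near-balanced solution is one for which this value is small. */
double norm_variance (int nseg, const double *z)
{
    double mean = 0.0, var = 0.0;
    int k;

    for (k = 0; k < nseg; k++)
        mean += z[k];
    mean /= nseg;
    for (k = 0; k < nseg; k++)
        var += (z[k] - mean) * (z[k] - mean);
    return var / nseg;
}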
18.3 The discrete linear least squares approximation problem
Consider any segment k, 1 ≤ k ≤ n, of the given curve f(x). Let this segment consist of N digitized points with coordinates (xi, f(xi)), i = 1, 2, …, N. Let these N points be approximated by the function L(a, x) = Σj ajφj(x), where {φj(x)}, j = 1, 2, …, M, is a given set of real linearly independent approximating functions. The number of points N is
usually far more than the number of the M parameters (aj); N > M. The linear least squares (approximate) solution to the data set by this curve is the vector a that minimizes the L2 norm

    z = sqrt( Σ_{i=1}^{N} [r(xi)]^2 )

where r(xi) is the ith residual and is given by

(18.3.1)    r(xi) = f(xi) – L(a, xi),    i = 1, 2, …, N
Note that the residual r(xi) in (18.3.1) is the negative of that defined in equation (9.3.1) and also in equation (15.3.1). Such definitions are arbitrary. As in Section 2.2, this problem reduces to the problem of obtaining the least squares solution of the overdetermined system of linear equations

    Ca = f

C is an N by M matrix given by C = (φj(xi)), i = 1, 2, …, N and j = 1, 2, …, M, and f is the N-vector (f(x1), …, f(xN))T. The least squares solution to system Ca = f is the real M-vector a that minimizes the L2 norm

    z = sqrt( Σ_{i=1}^{N} ri^2 )
where ri is residual i, given by (18.3.1), which is

    ri = fi – Σ_{j=1}^{M} cij aj,    i = 1, 2, …, N
and cij are the elements of matrix C.
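As an illustration of how the system Ca = f is set up and how the norm z is evaluated, consider the following sketch for the vertical parabola basis {1, x, x^2} used in the numerical examples of this chapter. It is a self-contained sketch with plain 0-based arrays; the library itself solves for a with LA_Hhls() of Chapter 17.

#include <math.h>

/* Sketch: form matrix C for the basis {1, x, x*x} and evaluate the
   L2 residual norm z for a given coefficient vector a, per (18.3.1).
   Plain 0-based arrays are used here, unlike the 1-based tMatrix_R
   and tVector_R types of the library. */
void build_parabola_matrix (int N, const double *x, double (*C)[3])
{
    int i;
    for (i = 0; i < N; i++)
    {
        C[i][0] = 1.0;            /* phi_1(x) = 1   */
        C[i][1] = x[i];           /* phi_2(x) = x   */
        C[i][2] = x[i] * x[i];    /* phi_3(x) = x*x */
    }
}

double residual_norm (int N, const double (*C)[3], const double *f,
    const double *a)
{
    int i, j;
    double z = 0.0;

    for (i = 0; i < N; i++)
    {
        double Li = 0.0;          /* L(a, x_i) = sum_j a_j*phi_j(x_i) */
        for (j = 0; j < 3; j++)
            Li += C[i][j] * a[j];
        z += (f[i] - Li) * (f[i] - Li);   /* r(x_i)^2, from (18.3.1) */
    }
    return sqrt (z);
}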
18.4 Description of the algorithms
The C implementations of the algorithms are similar to those of Chapters 9 and 15, as follows.
18.4.1 Piecewise linear L2 approximation with pre-assigned tolerance
This algorithm uses the same steps as those of Section 9.4.1. However, each time a point is added to segment j, say, we use the function LA_Hhls() of Chapter 17 to re-calculate the new L2 norm of the segment.
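The control flow of this first algorithm may be sketched as follows, assuming a hypothetical helper solve_L2(is, ie) that fits one approximating curve to points is..ie in the least squares sense and returns the L2 norm of that segment (the role played by LA_Hhls() in the actual code). The sketch uses 0-based indices and omits the rank-deficiency and feasibility checks of the real implementation.

/* Sketch of the pre-assigned tolerance strategy of Section 18.4.1.
   solve_L2() is a hypothetical helper, not a library routine. */
int piecewise_tolerance (int n, int m, double enorm,
    double (*solve_L2)(int is, int ie))
{
    int npiece = 0;
    int is = 0;                       /* start of current segment   */

    while (is < n)
    {
        int ie = is + m - 1;          /* smallest segment: m points */
        if (ie >= n) ie = n - 1;

        /* add one point at a time, re-solving the least squares
           problem, while the segment norm stays within enorm */
        while (ie + 1 < n && solve_L2 (is, ie + 1) <= enorm)
            ie = ie + 1;

        npiece = npiece + 1;          /* segment [is, ie] accepted  */
        is = ie + 1;                  /* disconnected pieces; for
                                         the connected option the
                                         next segment starts at ie  */
    }
    return npiece;
}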
18.4.2 Piecewise linear L2 approximation with near-balanced L2 norms
This algorithm uses the same steps as those of Section 9.4.2. Again, each time we add a point to or delete a point from segment j, say, we use the function LA_Hhls() of Chapter 17 to re-calculate the new L2 norm of the segment.
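One balancing pass of this second algorithm may be sketched as follows, again with the hypothetical helper solve_L2() introduced above. The real code (LA_L2pw2() below) re-uses LA_Hhlsro() and carries more bookkeeping, such as the iflag vector and alternating sweeps; boundary and minimum-size checks are omitted here.

#include <math.h>

/* Sketch of one balancing pass of Section 18.4.2: for each pair of
   adjacent segments, try moving the break point by one data point
   and keep the move only if it reduces the difference of the two
   segment L2 norms. ixl[] holds the 0-based starting indices of the
   segments; solve_L2() is the hypothetical helper above. */
void balance_pass (int npiece, int n, int *ixl,
    double (*solve_L2)(int is, int ie))
{
    int k;
    for (k = 0; k + 1 < npiece; k++)
    {
        int isl = ixl[k];                      /* left segment start  */
        int isr = ixl[k + 1];                  /* right segment start */
        int ier = (k + 2 < npiece) ? ixl[k + 2] - 1 : n - 1;

        double zl = solve_L2 (isl, isr - 1);
        double zr = solve_L2 (isr, ier);
        int shift = (zl < zr) ? 1 : -1;        /* grow the weaker side */

        double zln = solve_L2 (isl, isr + shift - 1);
        double zrn = solve_L2 (isr + shift, ier);
        if (fabs (zrn - zln) < fabs (zr - zl))
            ixl[k + 1] = isr + shift;          /* keep the better break */
    }
}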
18.5 Numerical results and comments
Each of the functions LA_L2pw1() and LA_L2pw2() calculates the number of segments in the piecewise approximation (for LA_L2pw2(), n is given), the starting points of the n segments, the coefficients of the approximating curves for the n segments, the residuals at each point of the digitized curve and finally the L2 residual norms for the n segments. LA_L2pw1() computes the case when the L2 error norm in any segment is not to exceed a pre-assigned value. LA_L2pw2() computes the case when the number of segments is given and a near-balanced L2 error norm solution is required. DR_L2pw1() and DR_L2pw2() were used to test the algorithms on a number of problems in single-precision. Again, as in Chapters 9 and 15, LA_L2pw1() has the option of calculating connected or disconnected piecewise linear L2 approximations, while LA_L2pw2() can only calculate disconnected piecewise linear L2 approximations. This is explained at the end of Section 9.2. In order to compare the results of these algorithms with those of the algorithms of Chapters 9 and 15 for the L1 and the Chebyshev norms respectively, we chose the same curve used in those chapters.
The curve is digitized with equal x-intervals into 28 points and the data points are fitted with vertical parabolas, each of the form y = a1 + a2x + a3x^2. The results of LA_L2pw1() are shown in Figures 18-1 and 18-2. The results of LA_L2pw2() are shown in Figure 18-3, where the number of segments was set to n = 4.
Figure 18-1: Disconnected linear L2 piecewise approximation with vertical parabolas. The L2 residual norm in any segment ≤ 3
Figure 18-2: Connected linear L2 piecewise approximation with vertical parabolas. The L2 residual norm in any segment ≤ 3
The L2 norms of Figures 18-1 and 18-2 are (2.905, 3.000, 2.867, 1.165) with n = 4, and (2.905, 2.984, 2.762, 2.756) with n = 4, respectively. The L2 norms of Figure 18-3 are (2.905, 1.990, 2.390, 2.028), for n = 4. We observe that LA_L2pw2() did not produce a balanced norm solution, but it did give an improved approximation over that of Figure 18-1.
Figure 18-3: Near-balanced residual norm solution. Disconnected linear L2 piecewise approximation with vertical parabolas. Number of segments = 4
In a previous version of these algorithms [4], we utilized the techniques of updating and downdating the systems of linear equations that solve the piecewise approximation in the least squares sense. These techniques take information from a current iteration and carry it into the calculation of the next iteration, without recomputing the least squares solution of the modified system from scratch. This increases the code complexity but reduces the number of iterations. In the next section we present a brief review of the updating and downdating techniques.
18.6 The updating and downdating techniques
For any segment at hand in the piecewise approximation, let the system of linear equations be Ca = f, where as usual, C is an N by M
matrix, N > M, and f is an N-vector. The augmented matrix [C|f] of this system is factorized by Householder's QR factorization method [14, 19] (Chapter 17). The least squares solutions are then calculated in terms of the R matrices [8]. The Q matrices are not explicitly calculated. The algorithms work in an iterative manner by updating the least squares solutions via updating the R matrices of the segments. Assume that the least squares solution of system Ca = f has been obtained. Let one new data point be added to, or an old point be deleted from, the existing point set, and a new least squares solution be re-calculated for the new set. Stewart [17] denotes these two problems as the problems of updating and of downdating the least squares problem, respectively. These two problems are equivalent to updating the least squares solution vector a when an extra equation is augmented to, or an existing equation is deleted from, the system Ca = f. For the updating problem, we make use of the scheme of Gill et al. [7], and for the downdating problem, we use the technique of Stewart [17]. This is explained here as follows. As indicated earlier, the least squares solution of system Ca = f may be calculated without explicitly storing matrix Q. We pre-multiply the matrix [C|f] by Householder's elementary orthogonal matrices (Section 5.4.1). We get the upper triangular matrix

(18.6.1a)    [ R  h ]
             [ 0  ρ ]

where

(18.6.1b)    h = QTf  and  |ρ| = ||r||
R and Q are the factors of matrix C in Householder's decomposition C = QR. The least squares solution to system Ca = f is given by

(18.6.1c)    a = R^(–1)h

18.6.1 The updating algorithm
Assume that we have added an extra equation to the system Ca = f
and that we want to re-calculate the least squares solution of the new system. Assume without loss of generality that the added equation is augmented to the bottom of the given system Ca = f. That is, the new system is given by

(18.6.2)    [ C ] a = [ f ]
            [ c ]     [ η ]

c is a row vector and η is the corresponding element of f. Calculating the least squares solution of system (18.6.2) amounts to calculating the upper triangular matrix R' in the Q'R' factorization of the augmented system. This is done by updating matrix R of (18.6.1a) in the QR factorization of matrix [C|f], as follows. From (18.6.2), it is easy to verify that

(18.6.3)    [ Q  0 ]T [ C  f ]   [ R  h ]
            [ 0  1 ]  [ c  η ] = [ 0  ρ ]
                                 [ c  η ]
By considering (18.6.3), we first augment [c|η] to the bottom of matrix R in (18.6.1a). Then we pre-multiply the augmented matrix by a product of either Householder or Givens matrices to reduce the r.h.s. matrix in (18.6.3) to an upper triangular matrix. The resulting matrix is

(18.6.4)    [ R'  h' ]
            [ 0   ρ' ]

The least squares solution for the new system is a' = R'^(–1)h', where |ρ'| = ||r'||; r' is the new residual vector.
18.6.2 The downdating algorithm
Downdating matrix R after deleting an existing data point is done as follows. Let the row in matrix [C|f] that corresponds to this deleted point be [c|η]. Stewart [17] shows that the algorithm is stable in the presence of rounding errors. However, the new matrix R' can be a very ill-conditioned function of R and c. The algorithm by Stewart
reduces the obtained system to (18.6.1a) or (18.6.4) and the result is given by (18.6.1b, c). See also [4]. We applied the updating and downdating techniques described in this section [4] to the problems that we solved by the algorithms of Section 18.4, and obtained the same results.
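A sketch of this downdating step, in the spirit of Stewart's scheme [17] (the LINPACK-style algorithm), is given below for the triangle R alone; in practice the same rotations are also applied to the augmented column h. The fixed buffer size and the helper itself are assumptions of the sketch, not the book's routines.

#include <math.h>

#define MAXM 64   /* sketch only: assumes m < MAXM */

/* Sketch: delete the data row c from the factorization. R is m by m
   upper triangular; on success it is overwritten by the downdated
   triangle R' (rows of R' may need sign normalization afterwards).
   Returns 0 on success, -1 if the downdate is ill-conditioned. */
int downdate (int m, double **R, const double *c)
{
    double q[MAXM], cs[MAXM], sn[MAXM], w[MAXM];
    double s, d, rr, t, qq;
    int j, k;

    /* solve R^T q = c by forward substitution */
    for (j = 0; j < m; j++)
    {
        s = c[j];
        for (k = 0; k < j; k++)
            s -= R[k][j] * q[k];
        q[j] = s / R[j][j];
    }

    /* d = sqrt(1 - ||q||^2) must be real; otherwise R' would be a
       very ill-conditioned function of R and c */
    qq = 0.0;
    for (j = 0; j < m; j++)
        qq += q[j] * q[j];
    if (qq >= 1.0)
        return -1;
    d = sqrt (1.0 - qq);

    /* rotations reducing the unit vector (q, d) to (0, ..., 0, 1) */
    for (j = m - 1; j >= 0; j--)
    {
        rr = sqrt (d * d + q[j] * q[j]);
        cs[j] = d / rr;
        sn[j] = q[j] / rr;
        d = rr;
    }

    /* apply the rotations to [R; 0]; the phantom bottom row w turns
       into the deleted row c while R becomes R' */
    for (k = 0; k < m; k++)
        w[k] = 0.0;
    for (j = m - 1; j >= 0; j--)
    {
        for (k = j; k < m; k++)
        {
            t = cs[j] * w[k] + sn[j] * R[j][k];
            R[j][k] = cs[j] * R[j][k] - sn[j] * w[k];
            w[k] = t;
        }
    }
    return 0;
}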
18.6.3 Updating and downdating in the L1 norm
We note that updating and downdating techniques for linear equations in the L1 norm can also be devised. Such techniques use parametric linear programming, together with the characterization of the solution of the L1 approximation problem. For details, see [3].
References
1.  Abdelmalek, N.N., Piecewise linear Chebyshev approximation of planar curves, International Journal of Systems Science, 14(1983)425-435.
2.  Abdelmalek, N.N., Piecewise linear L1 approximation of planar curves, International Journal of Systems Science, 16(1985)447-455.
3.  Abdelmalek, N.N., A recursive algorithm for discrete L1 linear estimation using the dual simplex method, IEEE Transactions on Systems, Man and Cybernetics, SMC-15(1985)737-742.
4.  Abdelmalek, N.N., Piecewise linear least-squares approximation of planar curves, International Journal of Systems Science, 21(1990)1393-1403.
5.  Bellman, R., On the approximation of curves by line segments using dynamic programming, Communications of the ACM, 4(1961)284.
6.  Cantoni, A., Optimal curve fitting with piecewise linear functions, IEEE Transactions on Computers, 20(1971)59-67.
7.  Gill, P.E., Golub, G.H., Murray, W. and Saunders, M.A., Methods for modifying matrix factorizations, Mathematics of Computation, 28(1974)505-535.
8.  Golub, G., Numerical methods for solving linear least squares problems, Numerische Mathematik, 7(1965)206-216.
9.  Lawson, C.L., Characteristic properties of the segmented rational minimax approximation problem, Numerische Mathematik, 6(1964)293-301.
10. Pavlidis, T., Waveform segmentation through functional approximation, IEEE Transactions on Computers, 22(1973)689-697.
11. Pavlidis, T., Optimal piecewise polygonal L2 approximation of functions of one and two variables, IEEE Transactions on Computers, 24(1975)98-102.
12. Pavlidis, T., The use of algorithms of piecewise approximations for picture processing, ACM Transactions on Mathematical Software, 2(1976)305-321.
13. Pavlidis, T., Polygonal approximation by Newton's method, IEEE Transactions on Computers, 26(1977)800-807.
14. Peters, G. and Wilkinson, J.H., The least squares problem and pseudo-inverses, Computer Journal, 13(1970)309-316.
15. Salotti, M., Optimal polygonal approximation of digitized curves using the sum of squares deviation criterion, Pattern Recognition, 35(2002)435-443.
16. Sarkar, B., Singh, L.K. and Sarkar, D., A genetic algorithm-based approach for detection of significant vertices for polygonal approximation of digital curves, International Journal of Image and Graphics, 4(2004)223-239.
17. Stewart, G.W., The effects of rounding error on an algorithm for downdating a Cholesky factorization, Journal of the Institute of Mathematics and its Applications, 23(1979)203-213.
18. Stone, H., Approximation of curves by line segments, Mathematics of Computation, 15(1961)40-47.
19. Wilkinson, J.H., The Algebraic Eigenvalue Problem, Clarendon Press, Oxford, 1965.
682
18.7
Numerical Linear Approximation in C
DR_L2pw1
/*------------------------------------------------------------------DR_L2pw1 --------------------------------------------------------------------This is a driver for the function LA_L2pw1(), which calculates a linear piecewise L2 (L-Two) approximation of given data point set {x,y} that results from the discretization of a given plane curve y = f(x). The points of the set might not be equally spaced. The approximation by LA_L2pw1() is such that the L2 norm for any segment is not to exceed a pre-assigned tolerance denoted by "enorm". LA_L2pw1() calculates the connected or the disconnected linear piecewise L2 approximation, according to the value of an integer parameter "konect" set by the user. See comments in program LA_L2pw1(). From the approximating curve we form the overdetermined system of linear equations c*a = f "c" is a real n by m matrix of rank k, k <= m < n. n is the number of digitized points of the given plane curve. m is the number of terms in the approximating curves. If for example, the approximating curves are vertical parabolas of the form y = a1 + a2*x + a3*x*x then m = 3. "f" is a real n vector whose elements are the y coordinates of the data set {x,y}. "a" is the solution m vector. There are different "a" solution vectors for the different segments. This driver contains 1 test example. A given curve is digitized into 28 points at equal x intervals. The points are piecewise approximated by vertical parabolas of the form y = a1 + a2*x + a3*x*x
The results for the disconnected and the connected piecewise L2
approximation are given in the text.
-------------------------------------------------------------------*/

#include "DR_Defs.h"
#include "LA_Prototypes.h"

#define Nc  28
void DR_L2pw1 (void) { /*---------------------------------------Constant matrices/vectors ----------------------------------------*/ static tNumber_R fc[Nc+1] = { NIL, 8.0, 11.0, 13.0, 11.2, 9.1, 10.8, 14.8, 16.0, 15.1, 14.0, 14.7, 15.8, 16.8, 15.6, 13.0, 14.3, 13.8, 10.6, 9.3, 9.6, 10.8, 11.2, 9.0, 7.0, 5.8, 6.8, 4.8, 3.9 }; /*---------------------------------------Variable matrices/vectors ----------------------------------------*/ tMatrix_R c = alloc_Matrix_R (NN_ROWS, MM_COLS); tMatrix_R ap = alloc_Matrix_R (KK_PIECES, MM_COLS); tMatrix_R rp1 = alloc_Matrix_R (KK_PIECES, NN_ROWS); tVector_R f = alloc_Vector_R (NN_ROWS); tVector_R zp = alloc_Vector_R (KK_PIECES); tVector_I ixl = alloc_Vector_I (KK_PIECES); int tNumber_R eLaRc
    int        j, k, n, m, konect, npiece;
    tNumber_R  enorm;
    eLaRc      prnRc;
    eLaRc      rc = LaRcOk;
prn_dr_bnr ("DR_L2pw1, L2 Piecewise Approximation of a Plane " "with Pre-assigned Norm"); for (k = 1; k <= 2; k++) { switch (k) { © 2008 by Taylor & Francis Group, LLC
Numerical Linear Approximation in C case 1: enorm = 3.0; n = Nc; m = 3; for (j = 1; j <= n; j++) { f[j] = fc[j]; c[j][1] = 1.0; c[j][2] = j; c[j][3] = j*j; } break; default: break; } prn_algo_bnr ("L2pw1"); if (k == 1) konect = 0; if (k == 2) konect = 1; prn_example_delim(); PRN ("konect #%d: Size of matrix \"c\" %d by %d\n", konect, n, m); PRN ("L2 Piecewise Approximation with Pre-assigned Norm\n"); PRN ("Pre-assigned Norm \"enorm\" = %8.4f\n", enorm); prn_example_delim(); if (konect == 1) PRN ("Connected Piecewise Approximation\n"); else PRN ("Disconnected Piecewise Approximation\n"); prn_example_delim(); PRN ("r.h.s. Vector \"f\"\n"); prn_Vector_R (f, n); PRN ("Coefficient Matrix, \"c\"\n"); prn_Matrix_R (c, n, m); rc = LA_L2pw1 (n, m, enorm, konect, c, f, ixl, rp1, ap, zp, &npiece); if (rc >= LaRcOk) { PRN ("\n"); PRN ("Results of the L2 Piecewise Approximation\n"); PRN ("Calculated number of segments (pieces) = %d\n", npiece); PRN ("Staring points of the \"npiece\" segments\n");
prn_Vector_I (ixl, npiece); PRN ("Coefficients of the approximating curves\n"); prn_Matrix_R (ap, npiece, m); PRN ("Residual vectors for the \"npiece\" segments\n"); prnRc = LA_pw1_prn_rp1 (konect, npiece, n, ixl, rp1); PRN ("L2 residual norms for the \"npiece\" segments\n"); prn_Vector_R (zp, npiece); if (prnRc < LaRcOk) { PRN ("Error printing PW1 results\n"); } } prn_la_rc (rc); } free_Matrix_R free_Matrix_R free_Matrix_R free_Vector_R free_Vector_R free_Vector_I
(c, NN_ROWS); (ap, KK_PIECES); (rp1, KK_PIECES); (f); (zp); (ixl);
}
18.8
LA_L2pw1
/*------------------------------------------------------------------LA_L2pw1 --------------------------------------------------------------------This program calculates a linear piecewise L2 (L-Two) approximation to a discrete point set {x,y}. The approximation is such that the L2 residual (error) norm for any segment is not to exceed a given tolerance "enorm". The number of segments (pieces) is not known before hand. Given is a set of points {x,y}. From the approximating functions of the piecewise approximation, one forms the overdetermined system of linear equations c*a = f This program uses LA_Hhls() for obtaining the L2 solution of overdetermined system of linear equations. LA_L2pw1() has the option of calculating connected or disconnected piecewise L2 approximation, according to the value of an integer parameter "konect". In the connected piecewise approximation the x-coordinate of the end point of segment j say, is the x-coordinate of the starting point of segment (j+1). In the disconnected piecewise approximation, the x-coordinate of the adjacent point to the end point of segment j is the x-coordinate of the starting point of segment (j+1). See the comments on "ixl" below. Inputs m Number of terms in the approximating functions. n Number of points to be piecewise approximated. c A real n by m matrix of containing matric "c" of the system c*a = f. Matrix "c" is not destroyed in the computation. f A real n vector containing the r.h.s. of the system c*a = f. This vector contains the y-coordinates of the given point set. This vector is not destroyed in the computation. enorm enorm A real pre-assigned parameter, such that the L2 residual norm for any segment is <= enorm. konect An integer specifying the action to be performed.
If konect = 1, the program calculates the connected L2 piecewise approximation. If konect != 1, the program calculates the disconnected L2 piecewise approximation. Outputs npiece Obtained number of segments or pieces of the approximation. ixl An integer "npiece" vector containing the indices of the first elements of the "npiece" segments. For example, if ixl = (1,5,12,22,...), and if konect = 1, then the first segment contains points 1 to 5, the second segment contains points 5 to 12, the third segment contains points 12 to 22 and so on. Again if ixl = (1,5,12,22,...), and if konect !=1, then the first segment contains points 1 to 4, the second segment contains points 5 to 11, the third segment contains points 12 to 21 ..., etc. irankp An integer "npiece" vector containing the rank values of the approximating curves for the "npiece" segments. ap A real "npiece" by m matrix. Its first row contains the the coefficients of the approximating curve for the first segment. The second row contains the coefficients of the approximating curve for the second segment and so on. If any row j is all zeros, this indicates that vector "a" of segment j is not calculated as the number of points of segment j is <= m and there is a perfect fit by the approximating curve for segment j. rp1 A real "npiece" by n matrix. Its first row contains the residuals for the points of the first segment. Its second row contains the residuals for the points of the second segment, and so on. zp A real "npiece" vector containing the "npiece" optimum values of the L2 residual norms for the "npiece" segments. If zp[j] == 0.0, it indicates that there is a perfect fit for segment j. Returns one of LaRcSolutionFound LaRcErrBounds LaRcErrNullPtr LaRcErrAlloc -------------------------------------------------------------------*/ #include "LA_Prototypes.h"
eLaRc LA_L2pw1 (int n, int m, tNumber_R enorm, int konect, tMatrix_R c, tVector_R f, tVector_I ixl, tMatrix_R rp1, tMatrix_R ap, tVector_R zp, int *pNpiece) { tMatrix_R cp = alloc_Matrix_R (n, m); tMatrix_R t = alloc_Matrix_R (n, m + 1); tMatrix_R r = alloc_Matrix_R (m + 1, m + 1); tVector_R fp = alloc_Vector_R (n); tVector_R res = alloc_Vector_R (n); tVector_R dm = alloc_Vector_R (n); tVector_R w = alloc_Vector_R (n); tVector_R a = alloc_Vector_R (m); int int tNumber_R
i = 0, j = 0, je = 0, ii = 0, jj = 0, is = 0, ie = 0, nu = 0, irank = 0; ijk = 0; z = 0.0;
/* Validation of data before executing algorithm */ eLaRc rc = LaRcSolutionFound; VALIDATE_BOUNDS ((0 < m) && (m <= n) && !((n == 1) && (m == 1)) && (0.0 < enorm)); VALIDATE_PTRS (c && f && ixl && rp1 && ap && zp && pNpiece); VALIDATE_ALLOC (cp && t && r && fp && res && dm && w && a); *pNpiece = 1; /* "is" means i(start) for the segment at hand */ is = 1; for (ijk = 1; ijk <= n; ijk++) { /* Initializing the data for "npiece" */ ie = is + m - 1; LA_pw1_init (pNpiece, is, ie, m, rp1, ap, zp); irank = m; z = 0; for (i = is; i <= ie; i++) { ii = i - is + 1; fp[ii] = f[i]; for (j = 1; j <= m; j++) { cp[ii][j] = c[i][j]; } } © 2008 by Taylor & Francis Group, LLC
for (jj = 1; jj <= n; jj++) { nu = ie - is + 1; if (nu < m) { GOTO_CLEANUP_RC (LaRcSolutionFound); } rc = LA_Hhlsro (nu, m, cp, fp, a, r, res, t, dm, w, &z); if (rc < LaRcOk) { GOTO_CLEANUP_RC (rc); } if (z > enorm + EPS) { break; } else { LA_pw1_map (m, nu, res, a, z, rp1, ap, zp, pNpiece); ixl[*pNpiece] = is; je = ie + 1; if (je > n) { GOTO_CLEANUP_RC (LaRcSolutionFound); } ie = je; nu = ie - is + 1; for (i = is; i <= ie; i++) { ii = i - is + 1; fp[ii] = f[i]; for (j = 1; j <= m; j++) { cp[ii][j] = c[i][j]; } } } } is = ie; if (konect == 1) is = ie - 1; © 2008 by Taylor & Francis Group, LLC
        *pNpiece = *pNpiece + 1;
        ixl[*pNpiece] = is;
    }
CLEANUP:
    free_Matrix_R (cp, n);
    free_Matrix_R (t, n);
    free_Matrix_R (r, m + 1);
    free_Vector_R (fp);
    free_Vector_R (res);
    free_Vector_R (dm);
    free_Vector_R (w);
    free_Vector_R (a);
return rc; }
18.9
DR_L2pw2
/*------------------------------------------------------------------DR_L2pw2 --------------------------------------------------------------------This is a driver for the function LA_L2pw2() which calculates the "near balanced" piecewise linear L2 approximation problem of a given data point set {x,y} resulting from the discretization of a plane curve y=f(x). Given is an integer number "npiece" which is the number of segments in the approximation. The approximation by LA_L2pw2() is such that the L2 residual norms for all segments are nearly equal, hence the name "near balanced" piecewise approximation. From the approximating curves we form the overdetermined system of linear equations c*a = f "c" is a real n by m matrix of rank k, k <= m < n. n is the number of digitized points of the given plane curve. m is the number of terms in the approximating curves. If for example, the piecewise approximating curves are vertical parabolas of the form y = a1 + a2*x + a3*x*x then m = 3. "f" is a real n vector whose elements are the y coordinates of the data set {x,y}. "a" is the solution m vector. There are different "a" solution vectors for the different segments. This driver contains 1 test example. A given curve is digitized into 28 points at equal x intervals. The points are piecewise approximated by vertical parabolas of the form y = a1 + a2*x + a3*x*x The results for piecewise L2 approximation are given in the text. -------------------------------------------------------------------*/
#include "DR_Defs.h"
#include "LA_Prototypes.h"

#define Nc  28
void DR_L2pw2 (void) { /*---------------------------------------Constant matrices/vectors ----------------------------------------*/ static tNumber_R fc[Nc+1] = { NIL, 8.0, 11.0, 13.0, 11.2, 9.1, 10.8, 14.8, 16.0, 15.1, 14.0, 14.7, 15.8, 16.8, 15.6, 13.0, 14.3, 13.8, 10.6, 9.3, 9.6, 10.8, 11.2, 9.0, 7.0, 5.8, 6.8, 4.8, 3.9 }; /*---------------------------------------Variable matrices/vectors ----------------------------------------*/ tMatrix_R c = alloc_Matrix_R (NN_ROWS, MM_COLS); tVector_R f = alloc_Vector_R (NN_ROWS); tMatrix_R ap = alloc_Matrix_R (KK_PIECES, MM_COLS); tVector_R rp2 = alloc_Vector_R (NN_ROWS); tVector_R zp = alloc_Vector_R (KK_PIECES); tVector_I ixl = alloc_Vector_I (KK_PIECES); int int eLaRc
    int    j, m, n;
    int    Iexmpl, npiece;
    eLaRc  prnRc;
    eLaRc  rc = LaRcOk;
prn_dr_bnr ("DR_L2pw2, L2 Piecewise Approximation of a Plane " "Curve with Near Equal Residual Norms"); for (Iexmpl = 1; Iexmpl <= 1; Iexmpl++) { switch (Iexmpl) { case 1: npiece = 4; n = Nc; m = 3; © 2008 by Taylor & Francis Group, LLC
for (j = 1; j <= n; j++) { f[j] = fc[j]; c[j][1] = 1.0; c[j][2] = j; c[j][3] = j*j; } break; default: break; } prn_algo_bnr ("L2pw2"); prn_example_delim(); PRN ("Size of matrix \"c\" %d by %d\n", n, m); prn_example_delim(); PRN ("L2 Piecewise Approximation with Near Equal Norms\n"); PRN ("Given number of segments (pieces) = %d\n", npiece); prn_example_delim(); PRN ("r.h.s. Vector \"f\"\n"); prn_Vector_R (f, n); PRN ("Coefficient Matrix, \"c\"\n"); prn_Matrix_R (c, n, m); rc = LA_L2pw2 (m, n, npiece, c, f, ap, rp2, zp, ixl); if (rc >= LaRcOk) { PRN ("\n"); PRN ("Results of the L2 Piecewise Approximation\n"); PRN ("Starting points of the \"npiece\" segments\n"); prn_Vector_I (ixl, npiece); PRN ("L2 residual norms for the \"npiece\" segments\n"); prn_Vector_R (zp, npiece); PRN ("Coefficients of the \"npiece\" approximating " "curves\n"); prn_Matrix_R (ap, npiece, m); PRN ("Residuals at the given points\n"); prnRc = LA_pw2_prn_rp2 (npiece, n, ixl, rp2); if (prnRc < LaRcOk) { PRN ("Error printing PW2 results: "); } } © 2008 by Taylor & Francis Group, LLC
        prn_la_rc (rc);
    }

    free_Matrix_R (c, NN_ROWS);
    free_Vector_R (f);
    free_Matrix_R (ap, KK_PIECES);
    free_Vector_R (rp2);
    free_Vector_R (zp);
    free_Vector_I (ixl);
}
18.10 LA_L2pw2

/*------------------------------------------------------------------
LA_L2pw2
--------------------------------------------------------------------
This program calculates the "near balanced" piecewise linear L2
(L-Two) approximation of a given data point set {x,y} resulting from
the discretization of a plane curve y = f(x). Given is an integer
number "npiece" which is the number of segments in the approximation.
The approximation by LA_L2pw2() is such that the L2 residual norms
for all segments are nearly equal, hence the name "near balanced"
piecewise approximation. From the approximating functions (curves)
one forms the overdetermined system of linear equations

    c*a = f

Inputs
 npiece Given number of segments (pieces) of the approximation.
 m      Number of terms in the approximating functions.
 n      Number of points to be piecewise approximated.
 c      A real n by m matrix of the system c*a = f. This matrix is
        not destroyed in the computation.
 f      A real n vector containing the r.h.s. of the system c*a = f.
        This vector contains the y-coordinates of the given point
        set.
Outputs
 ixl    An integer "npiece" vector containing the indices of the
        first elements of the "npiece" segments. For example, if
        ixl = (1,5,12,22,...), then the first segment contains
        points 1 to 4, the second segment contains points 5 to 11,
        the third segment contains points 12 to 21, etc.
 ap     A real "npiece" by m matrix. Its first row contains the
        coefficients of the approximating curve for the first
        segment. The second row contains the coefficients of the
        approximating curve for the second segment and so on.
 rp2    A real n vector containing the residual values of the n
        points of the given set {x,y}.
 zp     A real "npiece" vector containing the "npiece" optimum
        values of the L2 residual norms for the "npiece" segments.
Returns one of LaRcSolutionFound LaRcErrBounds LaRcErrNullPtr LaRcErrAlloc -------------------------------------------------------------------*/ #include "LA_Prototypes.h" eLaRc LA_L2pw2 (int m, int n, int npiece, tMatrix_R c, tVector_R f, tMatrix_R ap, tVector_R rp2, tVector_R zp, tVector_I ixl) { tMatrix_R cp = alloc_Matrix_R (n, m); tMatrix_R t = alloc_Matrix_R (n + 1, m + 1); tVector_R fp = alloc_Vector_R (n); tVector_R a = alloc_Vector_R (m); tVector_R al = alloc_Vector_R (m); tVector_R ar = alloc_Vector_R (m); tVector_R dm = alloc_Vector_R (n); tVector_R w = alloc_Vector_R (n); tVector_I iflag = alloc_Vector_I (npiece); tVector_R resl = alloc_Vector_R (n); tVector_R resr = alloc_Vector_R (n); tMatrix_R r = alloc_Matrix_R (m + 1, m + 1); tVector_I ir = alloc_Vector_I (n); int int int tNumber_R
i = 0, j = 0, k = 0, ii = 0, ji = 0, is = 0, ie = 0, isl = 0, iel = 0, isr = 0, ier = 0, nu = 0; istart = 0, iend = 0, kl = 0, ijk = 0, klp1 = 0, ipcp1 = 0, ibc = 0; ieln = 0, isrn = 0; con = 0.0, conn = 0.0, zl = 0.0, zln = 0.0, zrn = 0.0;
/* Validation of the data before executing the algorithm */ eLaRc rc = LaRcSolutionFound; VALIDATE_BOUNDS ((0 < m) && (m <= n) && !((n == 1) && (m == 1)) && (1 < npiece)); VALIDATE_PTRS (c && f && ap && rp2 && zp && ixl); VALIDATE_ALLOC (cp && t && fp && a && al && ar && dm && w && iflag && resl && resr && r && ir);
/* Calculating the residuals for initial segments */ for (k = 1; k <= npiece; k++) { iflag[k] = 1; LA_l2pw2_init (k, npiece, n, m, &is, &ie, c, f, cp, fp, ixl); /* Triangularizing the coefficient matrix using Householder's transformation */ nu = ie - is + 1; rc = LA_Hhlsro (nu, m, cp, fp, al, r, resl, t, dm, w, &zl); if (rc < LaRcOk) { GOTO_CLEANUP_RC (rc); } zp[k] = fabs (zl); /* Map initial data for the "npiece" segments */ for (j = 1; j <= m; j++) { ap[k][j] = al[j]; } for (j = is; j <= ie; j++) { ji = j - is + 1; rp2[j] = resl[ji]; } } /* Process of balancing the L2 norms */ istart = 1; iend = npiece - 1; ipcp1 = npiece + 1; ixl[ipcp1] = n + 1; for (ijk = 1; ijk <= n; ijk++) { for (kl = istart; kl <= iend; kl=kl+2) { klp1 = kl + 1; con = fabs (zp[klp1] - zp[kl]); isl = ixl[kl]; isr = ixl[klp1]; iel = isr - 1; if (kl != iend) ier = ixl[kl+2] - 1; © 2008 by Taylor & Francis Group, LLC
        if (kl == iend) ier = n;

        /* The case where z[i] < z[i+1] */
zln = fabs (zln); zrn = fabs (zrn); conn = fabs (zrn - zln); iflag[kl] = 1; iflag[klp1] = 1; if (conn >= con) { iflag[kl] = 0; iflag[klp1] = 0; continue; } } /* The case where : ---------- z[i]>z[i+1] */ else if (zp[kl] > zp[klp1]) { isrn = isr - 1; ieln = isrn - 1; for (i = isl; i <= ieln; i++) { ii = i - isl + 1; fp[ii] = f[i]; for (j = 1; j <= m; j++) { cp[ii][j] = c[i][j]; } } /* Updating the l2 approximation for left segment */ nu = ieln - isl + 1; rc = LA_Hhlsro (nu, m, cp, fp, al, r, resl, t, dm, w, &zln); if (rc < LaRcOk) { GOTO_CLEANUP_RC (rc); } for (i = isrn; i <= ier; i++) { ii = i - isrn + 1; fp[ii] = f[i]; for (j = 1; j <= m; j++) { cp[ii][j] = c[i][j]; } } /* Updating the L2 approximation for right segment */ © 2008 by Taylor & Francis Group, LLC
Numerical Linear Approximation in C nu = ier - isrn + 1; rc = LA_Hhlsro (nu, m, cp, fp, ar, r, resr, t, dm, w, &zrn); if (rc < LaRcOk) { GOTO_CLEANUP_RC (rc); } zln = fabs (zln); zrn = fabs (zrn); conn = fabs (zrn - zln); iflag[kl] = 1; iflag[klp1] = 1; if (conn >= con) { iflag[kl] = 0; iflag[klp1] = 0; continue; } } for (j = isl; j <= ieln; j++) { ji = j - isl + 1; rp2[j] = resl[ji]; } for (j = isrn; j <= ier; j++) { ji = j - isrn + 1; rp2[j] = resr[ji]; } for (i = 1; i <= m; i++) { ap[kl][i] = al[i]; ap[klp1][i] = ar[i]; } zp[kl] = zln; zp[klp1] = zrn; ixl[klp1] = isrn; isr = isrn; iel = isr - 1; } is = 2; if (istart == 2) is = 1; istart = is;
ibc = 0; for (j = 1; j <= npiece; j++) { if (iflag[j] != 0) ibc = 1; } if (ibc == 0) { GOTO_CLEANUP_RC (LaRcSolutionFound); } } CLEANUP: free_Matrix_R free_Matrix_R free_Vector_R free_Vector_R free_Vector_R free_Vector_R free_Vector_R free_Vector_R free_Vector_I
(cp, n); (t, n + 1); (fp); (a); (al); (ar); (dm); (w); (iflag);
free_Vector_R (resl); free_Vector_R (resr); free_Matrix_R (r, m + 1); free_Vector_I (ir); return rc; } /*------------------------------------------------------------------Initializing LA_L2pw2() -------------------------------------------------------------------*/ void LA_l2pw2_init (int k, int npiece, int n, int m, int *pIs, int *pIe, tMatrix_R c, tVector_R f, tMatrix_R cp, tVector_R fp, tVector_I ixl) { int i, j, ii, jj, kp1; jj = n/npiece; ixl[k] = (k-1) * jj + 1; *pIs = ixl[k]; if (k == npiece) *pIe = n; © 2008 by Taylor & Francis Group, LLC
    if (k != npiece)
    {
        kp1 = k + 1;
        ixl[kp1] = k * jj + 1;
        *pIe = ixl[kp1] - 1;
    }
    for (i = *pIs; i <= *pIe; i++)
    {
        ii = i - *pIs + 1;
        fp[ii] = f[i];
        for (j = 1; j <= m; j++)
        {
            cp[ii][j] = c[i][j];
        }
    }
}
Chapter 19 Solution of Ill-Posed Linear Systems
19.1 Introduction
This chapter presents the solution of ill-posed linear systems such as those arising from the discretization of the Fredholm integral equation of the first kind. The Fredholm integral equation of the first kind arises in the mathematical analysis of many physics, chemistry and biology problems. It also arises in the restoration of blurred images [4, 5, 6] and in several classical mathematical problems, such as the numerical inversion of the Laplace transform, and many other problems [28, 30]. Consider the Fredholm integral equation of the first kind

    ∫[α, β] A(t, t')x(t') dt' = b(t)

A is a continuous kernel, and the equation is ill-posed in the sense that the solution x does not depend continuously on the data b. In other words, the problem is not stable with respect to small perturbations of vector b. In a real situation, the elements of b are the output of some measurements contaminated with errors. Using numerical integration, the discretization of the above integral equation results in the system of linear equations

    Ax = b

A is a real n by m matrix of rank k, k ≤ min(n, m), and b is a real n-vector. Assume that n ≥ m. Again, system Ax = b is ill-posed, and ill-posed systems are also ill-conditioned. In an ill-conditioned system such as Ax = b, the smallest singular values of A are very small. The elements of the solution
vector x would then be large, with alternating positive and negative values. Computational difficulties arise when one attempts to solve this system. We shall only consider the case where the kernel A is smooth. As a result, one expects reasonable approximate solutions to this problem. Varah [30] examined the existence and uniqueness questions of such reasonable approximate solutions. He defined a reasonable solution as follows. Vector x is a reasonable solution to Ax = b with respect to noise level ε if the norm of the residual satisfies ||Ax – b|| = O(ε), provided that both A and b are properly scaled. O(ε) denotes of the order of ε. In Section 19.2, different methods of solution of ill-posed systems of linear equations are discussed. For each method of solution, a free parameter is to be estimated. In Section 19.3, the estimation of the free parameter is outlined. In Section 19.4, our method of solution of ill-posed systems is described. This method utilizes linear programming techniques, and the free parameter to be estimated is the rank of the system of equations. The optimum value of the rank is estimated in Section 19.5, and the linear programming techniques are described in Section 19.6. In Section 19.7, numerical results and comments are given.
19.2 Solution of ill-posed linear systems
There exist several methods for solving ill-posed linear systems. These include the methods of steepest descent and conjugate gradients by Squire [24], without using any regularization technique (explained below). As reported by Squire, when proper termination criteria are used, the conjugate gradient method requires less computational time than steepest descent. He also noted that a noticeable increase in the residual during the calculation of the conjugate gradients indicates that the limiting accuracy of the method has been reached. Kammerer and Nashed [18] discussed the convergence of the conjugate gradient method. They then studied iterative solutions for integral equations of the first and second kinds [19]. Hanson and Phillips [17] presented a method where the approximate solution of Ax = b is expressed as a continuous
piecewise linear spline function. The procedure may be used iteratively to improve the accuracy of the approximate solution. However, the most widely used approaches for solving the ill-posed system Ax = b are the following:
(1) The first approach is to use regularization methods, such as the original ones of Tikhonov [27] and Phillips [23] and the modified regularization method [30].
(2) The second approach is to replace matrix A in system Ax = b by an approximate matrix of smaller rank k, k < m. This approach is illustrated by Hanson [16] and by Varah [28], using a truncated singular value decomposition (SVD) expansion of matrix A. This approach also includes the truncated QR method [29], where Q is an orthogonal matrix and R an upper triangular matrix. The method used in this chapter is of this nature, but is quite a different one.
The methods listed in (1) and (2) both require the estimation of a free parameter. We shall give a brief description of these methods:
(1a) The original regularization methods [27, 23] are called damped least squares by Varah [29, 30]. Assuming that the n by m matrix A is of rank k, k < m ≤ n, these methods calculate a solution xα such that (||.|| refers to the Euclidean vector norm)

    xα = min over x of (||Ax – b||^2 + α^2 ||x||^2)

The free parameter in this method is α, which is to be estimated. This method is equivalent to the solution of the modified normal equation in the form of the Ridge equation [29] (discussed in Chapter 17)

    (ATA + α^2 Im)xα = ATb

where Im is an m-unit matrix. Matrix (ATA + α^2 Im) is matrix ATA with α^2 added to each of its singular values, and thus (ATA + α^2 Im) is square and nonsingular. The solution of this equation is obtained efficiently via the singular value decomposition of A, A = VSWT, where V is a real n by m orthonormal matrix, VTV = Im, W is a real m by m orthonormal matrix, WTW = Im, and S is an m by m diagonal
matrix, S = diag(s1, s2, …, sm), where the si are the singular values of A, ordered as s1 ≥ s2 ≥ … ≥ sm ≥ 0. With little manipulation

(19.2.1)    xα = Σ_{i=1}^{m} [si βi / (si^2 + α^2)] wi
where β = VTb and the (wi) are the columns of W. For a proper choice of the parameter α, most of the terms in the above summation are damped and the summation would effectively run from 1 to say k, not from 1 to m, where k < m. The solution would then be very close to the truncated SVD solution, given by (19.2.2) in approach (2a) below. To see this, for i > k, si → 0 and |βi| → ε; so if we choose α ≥ ε in (19.2.1), then si βi/(si^2 + α^2) ≈ 0, since si → 0. For i ≤ k, if we choose α ≈ ε in (19.2.1), then since si > ε, si βi/(si^2 + α^2) ≈ βi/si. Thus (19.2.1) becomes

    xα = Σ_{i=1}^{k} (βi / si) wi
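The following sketch shows how (19.2.1) and the truncated solution (19.2.2) below are evaluated once an SVD of A is available. The SVD itself is assumed to come from an external routine, and the 0-based storage conventions here (w[j] holding column j of W) are assumptions of the sketch. With alpha = 0 and a cutoff k < m it gives the truncated solution xk of (19.2.2); with k = m and alpha > 0 it gives the damped solution xα of (19.2.1).

/* Sketch: given singular values s[0..m-1], beta = V^T b, and the
   columns of W, accumulate the damped or truncated solution x.
   No SVD routine is provided here; this is an illustration only. */
void svd_regularized_solution (int m, int k, double alpha,
    const double *s, const double *beta, double **w, double *x)
{
    int i, j;

    for (i = 0; i < m; i++)
        x[i] = 0.0;
    for (j = 0; j < k; j++)            /* k = m gives (19.2.1) */
    {
        double factor =
            s[j] * beta[j] / (s[j] * s[j] + alpha * alpha);
        for (i = 0; i < m; i++)
            x[i] += factor * w[j][i];  /* x += factor * column j of W */
    }
}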
Compare this result with (19.2.2) of the truncated SVD in approach (2a).
(1b) The modified regularization or the modified least squares method produces the modified normal equation [29, 30]

    (ATA + α^2 LTL)xα = ATb
L is usually some discrete approximation to the first or the second derivative operator. The solution xα has the same form as the xα of the previous method, except that it is given in terms of some orthogonal vectors other than the (wi). te Riele [26] presented a program with numerical results for the solution of the Fredholm integral equation of the first kind using this method.
(2a) The truncated SVD method [16] is described as follows. Again using the SVD of A = VSWT, the system Ax = b reduces to

    VSWTx = b
In this method, the smallest (m – k) singular values of matrix A are replaced by 0's. The free parameter in this method is k, and the solution is

(19.2.2)    xk = Σ_{i=1}^{k} (βi / si) wi
This approach requires the estimation of the rank k of the matrix that approximates matrix A. The truncated SVD method is an appropriate method, but it is computationally expensive.
(2b) The truncated QR method [29] is an alternative to the truncated SVD method. As we observed in the above three approaches, the solutions xα or xk are given in the form of an expansion in terms of orthogonal vectors, as in (19.2.2) for example. Now, assume some expansion of the form

    x = Σ_{i=1}^{m} ci yi

where {yi} is a set of m orthogonal vectors, the columns of some orthogonal matrix Y. The factors (ci), the elements of an m-vector c, are to be calculated by solving a least squares problem. Let us write this equation as

    x = Yc

Varah [29] suggested the factorization AY = QR, where Q is an orthogonal matrix and R an upper triangular matrix [22]. Then by solving the first k equations of

    Rc = QTb

we get the k-vector c. However, Varah pointed out that the singular values of AY are not always reflected in the size of the diagonal elements of R. That is, the truncated QR system may not be a good approximation to the system AY whose rank is estimated by k < m. Hansen [13] compared the truncated SVD method with the regularization method and defined necessary conditions for which the
two methods give similar results.

19.3 Estimation of the free parameter
Each of the above methods involves a free parameter: α for the regularization techniques, and the rank k for the truncated SVD and the truncated QR methods. It is extremely difficult to calculate the value of this free parameter that results in a "good" approximate solution to the "true" one. It is a trade-off between accuracy and smoothness of the solution. Among the methods for calculating this free parameter are cross-validation, the discrepancy principle and the L-curve methods. See also Section 19.5. For the cross-validation method for choosing α, see Wahba [32], and for the truncated SVD method when data are noisy, see Vogel [31]. The latter method is based on statistical assumptions about the data error. For the discrepancy principle method, see Morozov [20]. It is based on the coupling of the regularization parameter α and the noise level in the given data vector b. The L-curve method [14, 15] seems to strike a balance between the size of the solution ||xα|| and the corresponding residual norm ||Axα – b||. In our work [2, 3], we use an alternative to the truncated SVD method. The ill-posed system Ax = b is first replaced by an equivalent consistent system of linear equations. The algorithm calculates the minimal length least squares solution of the consistent system. Starting from rank k = 1 of the consistent system, the rank k is increased by one in succession and a new solution is calculated. This is repeated until a simple criterion is satisfied. Then the obtained minimal length least squares solution would be the approximate (smooth) solution of the system Ax = b. Our method may also be viewed as the analog of the so-called stepwise regression, described by Albert ([8], Section 4.4). Linear programming techniques are used, for which the successive solutions of the consistent system are themselves the basic solutions in the successive simplex tableaux. The algorithm is numerically stable. Results show that this method gives comparable accuracy to the truncated SVD method, yet it is 2 to 5 times faster [2].
19.4 Description of the new algorithm
Let the system Ax = b be pre-multiplied by AT. One gets the consistent system of linear equations

(19.4.1)    ATAx = ATb

Let (19.4.1) be written as

(19.4.1a)   C(m)x(m) = c(m)
where C(m) = ATA and c(m) = ATb. If rank(A) = k, k ≤ m, (19.4.1a) would have k linearly independent equations and (m – k) linearly dependent ones. Assume that the equations in (19.4.1a) are properly permuted and that the first k equations, k ≤ m, are linearly independent. Let C(k) denote the first k rows of matrix C(m) and c(k) the first k elements of vector c(m). Then the first k equations in (19.4.1a) are

(19.4.2)    C(k)x(k) = c(k)
System (19.4.2) is an underdetermined system of rank k. The pseudo-inverse of C(k) is given by (Chapter 17)

    [C(k)]+ = [C(k)]T([C(k)][C(k)]T)^(–1)

and its minimal length least squares solution is

(19.4.3)    x(k) = [C(k)]+c(k)

19.4.1 Steps of the algorithm
(1) First we show that the least squares solution of Ax = b is itself the least squares solution of ATAx = ATb of (19.4.1) (Lemma 19.1).
(2) We then show that matrix C(k) of (19.4.2) is a good approximation to matrix (ATA), which is assumed to be of rank k, k ≤ m (Lemma 19.2). As a result, x(k) of (19.4.3) is the least squares solution of ATAx = ATb, which is the least squares solution of Ax = b (Lemma 19.3).
(3) We measure the error between matrix (ATA)2 and the matrix that approximates it (Lemma 19.4). As a result, we illustrate that |pivot(i)|^(1/4) is of the order of magnitude of the singular
Steps of the algorithm First we show that the least squares solution of Ax = b is itself the least squares solution of ATAx = ATb of (19.4.1) (Lemma 19.1). We then show that matrix C(k) of (19.4.2) is a good approximation to matrix (ATA), which is assumed of rank k, k ≤ m (Lemma 19.2). As a result, x(k) of (19.4.3) is the least squares solution of ATAx = ATb, which is the least squares solution of Ax = b (Lemma 19.3). We measure the error between matrix (ATA)2 and the matrix that approximates it (Lemma 19.4). As a result, we illustrate that |pivot(i)|1/4 is of the order of magnitude of the singular
© 2008 by Taylor & Francis Group, LLC
710
Numerical Linear Approximation in C
(4)
value si(A), where pivot(i) is the pivot element in the ith simplex step (Lemma 19.5). Meanwhile, for estimating the rank value k, we need to show that the norm of residual (b – Ax) decreases monotonically as k increases (Lemma 19.6).
Lemma 19.1 The least squares solution of Ax = b is itself the least squares solution of ATAx = ATb. Proof: The proof follows directly from the SVD of matrix A, namely A = VSWT and its pseudo-inverse, A+ = WS–1VT. The solution of Ax = b is x(1) = A+b and the solution of ATAx = ATb is x(2) = (ATA)+(ATb). By substituting the expressions of A and A+ in the two equations, x(1) = x(2) and the lemma is proved. Lemma 19.2 The following relations between the singular values of (ATA)2 and (Ck[Ck]T) exist sk(ATA)2 ≤ sk(Ck[Ck]T) ≤ sk–1(ATA)2 Proof: Each of (ATA)2 and (C(k)[C(k)]T) is Hermitian (symmetric), positive semi-definite (Theorem 4.1). Their eigenvalues are themselves their singular values. The lemma is proved from the fact that (C(k)[C(k)]T) is the k by k leading principal sub-matrix of matrix (ATA)2 ([25], pp. 317). Note that in [25], the eigenvalues of the Hermitian matrices are ordered in an increasing manner while in our case the singular values are ordered in a decreasing manner. This lemma means that if sk+1(ATA)2 and the smaller singular values of (ATA)2 are very small and considered 0’s, (Ck[Ck]T) would be good approximation of (ATA)2, meaning Ck is a good approximation of (ATA). Lemma 19.3 The minimal length least squares solution x(k) of the system of k equations C(k)x(k) = c(k) of (19.4.2), is itself the least squares solution © 2008 by Taylor & Francis Group, LLC
Chapter 19: Solution of Ill-Posed Linear Systems
711
of C(m)x(m) = c(m), assuming that rank(A) = k. Proof: Since C(m) = (ATA) is assumed of rank k, it has k linearly independent rows and its last (m – k) rows are linearly dependent on the first k rows. The same is said for vector c(m). We partition each of C(m) and c(m) and we may write Cx(m) = c(m) as (Ik/P)C(k)x = (Ik/P)c(k) where Ik is a k-unit matrix and P is an (m – k) by k matrix. See for example, Noble ([21], pp. 144, 145). The minimal length least squares solution of this equation is x = [C(k)]+(Ik/P)+(Ik/P)c(k) = [C(k)]c(k) = x(k) since (Ik/P) is of full rank and (Ik/P)+(Ik/P) = Ik. This proves the lemma. This lemma shows that system C(k)x(k)=c(k) is equivalent to ATAx = ATb in the sense that the two systems have the same minimal length least squares solution, assuming rank(A) = k. Our algorithm begins as follows. We calculate x(k) for k = 1 of system (19.4.2). That is, system C(k)x(k) = c(k) consists of only one equation. We then increase the rank k by 1 at a time so that C(k)x(k) = c(k) consists of 2, 3, …, equations in succession. In each case the solution x(k) is calculated. This is repeated until a certain simple criterion is satisfied. Linear programming techniques are used for which the successive solution vectors x(k), for k = 1, 2, …, appear as the basic solutions in the successive simplex tableaux. The criterion for choosing the rank value k is described in Section 19.5. Before every simplex iteration, the program pivots on the diagonal elements of matrix –([C(m)][C(m)]T) = –(ATA)2 and its updates in the simplex tableaux (see Tableau (a) in Section 19.6). That is, complete pivoting is used before each simplex iteration. Since this matrix is symmetric semi-definite, the algorithm is numerically stable. It is easy to illustrate that the pivot elements in the Gauss-Jordan steps are themselves the diagonal element of matrix D in the Cholesky’s factorization –(ATA)2 = –LDLT, where L is a lower triangular matrix with unit diagonals and D is diagonal, assuming that rank(A) = m ([10], pp. 146, 147). Let us illustrate this point by © 2008 by Taylor & Francis Group, LLC
712
Numerical Linear Approximation in C
following example. Let a matrix C be decomposed by Cholesky’s factorization into C = LDLT as follows 1 C = 2 1 1
2 8 4 4
1 4 11 5
1 1 1 4 4 = 2 1 5 1 0.5 1 9 19 1 0.5 0.33 1 16
1 2 1 1 1 0.5 0.5 1 0.33 1
This matrix is a modification of that of ([11], p. 26). Let us now perform 4 Gauss-Jordan steps to matrix C (without pivoting). It is easy to show that the pivot elements are going to be (1, 4, 9, 16), which are the diagonal elements of D. This idea would allow us to obtain an upper bound to sk+1[C(k+1)][C(k+1)]T, had we stopped the simplex method after step (k + 1) instead of step k. Consider now matrix [C(m)][C(m)]T = (ATA)2 and assume that we decompose it by Cholesky’s factorization with complete pivoting. We get (ATA)2 = LDLT + B
(19.4.4)
L is a lower m by k trapezoidal matrix with unit diagonal elements and D is k-diagonal matrix of diagonal elements d1 ≥ d2 ≥ … ≥ dk > 0. The sum of the singular values of LDLT is the trace (sum of diagonal elements) of LDLT or of D1/2LTLD1/2 (Section 4.2.6). The factorization is stopped after step k, when there is no significant addition to the trace ([10], p. 146); in other words, when dk+1 is not significant. The next lemma gives a measure of error matrix B. Lemma 19.4 ||B||2 ≤ (m – k) dk+1
(19.4.5) Proof:
In (19.4.4), each of (ATA)2 and LDLT is symmetric positive semi-definite of the same dimensions and their eigenvalues are their singular values. We have ([25], p. 315) sk(ATA)2 ≤ sk(LDLT) + ||B||2
© 2008 by Taylor & Francis Group, LLC
Chapter 19: Solution of Ill-Posed Linear Systems
713
Yet ||B||2 ≤ ||B||E ≤ (m – k)dk+1 where ||.||E denotes the Euclidean matrix norm, and the lemma is proved. In obtaining the upper bound of ||B||E, we assumed that the absolute value of each element in the right lower (m – k) square submatrix B is dk+1. That is unrealistic; a more realistic factor is (m – k)1/2, and (19.4.5) would be replaced by ||B||2 ≤ (m – k)1/2dk+1
(19.4.6)
This lemma shows that the diagonal elements (di) of D, i = 1, 2, …, in the factorization (ATA)2 ≈ LDLT, are good measures of the size of si(ATA)2. Or that the pivot elements in the Gauss-Jordan elimination steps are good measures of the size of si(ATA)2. Thus we shall replace the parameter dk+1 in the above inequality by |pivot(k + 1)|, where pivot(i) is the ith pivot element in the simplex iteration. We get ||B||E ≤ (m – k)1/2|pivot(k + 1)| Lemma 19.5 The value |pivot(i)|1/4 is of the order of magnitude of si(A). Proof: From Lemma 19.2, sk+1(Ck+1[Ck+1]T) ≤ sk(ATA)2 and we have sk(A) = (sk(ATA)2)1/4. By taking the fourth power of (m – k)1/2|pivot(k + 1)| in (19.4.6), we get (m – k)1/8|pivot(k)|1/4, k = 1, 2, …, a measure of sk(A). Consider now the factor (m – k)1/8. For all practical purposes, we have (m – k)1/8 < 2. For example, for large m, say m = 200 and k = 30, (m – k)1/8 ≈ 1.9 and for smaller m, say m = 15 and k = 3, we get (m – k)1/8 ≈ 1.36. The lemma is thus proved. Lemma 19.6 Let x(k) and x(k+1) be the respective minimal length least squares solutions of the systems C(k)x(k) = c(k) and C(k+1)x(k+1)= c(k+1) of (19.4.2). Then ||r(k+1)|| ≤ ||r(k)||, where r(k+1) and r(k) are the residuals
© 2008 by Taylor & Francis Group, LLC
714
Numerical Linear Approximation in C
(b – Ax(k+1)) and (b – Ax(k)) respectively. That is, the residual of Ax(k) = b decreases monotonically as k increases. 19.5
Optimum value of the rank
For the estimation of the rank k of matrix A, which gives best or near-best solution to system Ax = b, we adopt a simple criterion similar to that used by Hanson [16] and also by Squire [24]. It is based on Lemma 19.6 that the Euclidean norm of the residual vector of equation Ax = b; ||b – Ax(k)|| decreases monotonically as k increases. For each solution vector x(k), k = 1, 2, …, the program calculates the residual norm ||b – Ax(k)||. The program exits when this residual norm is less than a specified tolerance TOLER, or if the magnitude of the pivot element for the next linear programming tableau is less than a machine-dependent tolerance EPS. 19.5.1
The parameters TOLER and EPS
The parameter TOLER is estimated as follows. If the right hand side vector b in system Ax = b is expected to be contaminated by an error vector δb, then an appropriate value of the parameter TOLER is of the order of the Euclidean norm ||δb||. If δb = 0, a suitable value is TOLER = 1.0E–04 or 1.0E–03, the justification for which is supported only by numerical experimentation. As described in Chapter 2, EPS, on the other hand, is a machine-dependent tolerance such that a calculated parameter z is considered 0 if |z| < EPS. EPS is set to 10–4 for single-precision and 10–11 for double-precision (see Section 4.7.1). We might be able to improve the accuracy of the solution vector x by making a better estimate of the parameter EPS. This depends on the order of magnitude of the pivot elements in the simplex tableaux. In such cases we might let EPS take smaller values. We examine the absolute values of the pivot elements in the simplex tableaux that are say < 1.0E–03. We take EPS as the mean of the absolute values of two consecutive pivot elements where there is a large separation between them. See numerical experimentation for a similar problem ([6], p. 1418) and also ([7], p. 374).
© 2008 by Taylor & Francis Group, LLC
Chapter 19: Solution of Ill-Posed Linear Systems
715
In example 2 in [2], the absolute values of the pivot elements of the simplex tableaux, accurate to 3 decimal places, are 0.304E+04, 0.242E+01, 0.148E–03, 0.272E–06, 0.107E–08, … . There is a large separation between the third and the fourth values. We take EPS as the mean of these 2 values; 0.741E–04. This gives the calculated rank(ATA)2 = 3 and ||x(exact) – x(calculated)|| = 0.299 which is a slight improvement to the result given in [2], which is 0.377 for rank(A) = 5. This example is solved as Example 19.1, using a different discretization method to the Fredholm integral equation of the first kind. 19.6
Use of linear programming techniques
The method described here uses linear programming techniques. It achieves the result that the calculated successive minimal length least squares solutions are themselves the basic solutions in the successive simplex tableaux. It is a hybrid of a technique by the author for a different problem [1]. See also Chapter 23. Assume that we have the set of the 2m underdetermined linear equations, where v, x and y are m unknown vectors. (19.6.1a) (19.6.1b)
[C(m)]Tv + Imx + 0 = 0 –C(m)[C(m)]Tv + 0 + Imy = c(m)
C(m) and c(m) are those of (19.4.1a) and Im is an m-unit matrix. A solution to system (19.6.1) is v = x = 0 and y = c(m). Let us write down this system in a simplex tableau format [1, 12]. Tableau (a) vT xT yT B bB –––––––––––––––––––– ––––––––––––––––––––––––––––– x 0 [C(m)]T Im 0 y c(m) –C(m)[C(m)]T 0 Im –––––––––––––––––––– ––––––––––––––––––––––––––––– In Tableau (a), B and bB denote respectively the initial basis matrix and the initial basic solution. In linear programming terminology, the elements of vectors x and y are basic variables. Assume that rank(C(m)) = m. Then C(m)[C(m)]T is nonsingular.
© 2008 by Taylor & Francis Group, LLC
716
Numerical Linear Approximation in C
Let now the elements of vector v replace the corresponding elements of vector y as basic variables. This gives Tableau (b). Tableau (b) B bB vT xT yT –––––––––––––––––––– ––––––––––––––––––––––––––––– x [C(m)]+c(m) 0 Im [C(m)]+ v –(C(m)[C(m)]T)–1c(m) Im 0 –(C(m)[C(m)]T)–1 –––––––––––––––––––– ––––––––––––––––––––––––––––– In effect, Tableau (b) is obtained by premultiplying Tableau (a) by matrix E, where E =
Im
[ C( m ) ]
–1
T
0 –C( m ) [ C( m ) ]
T
=
Im
[ C( m ) ]
+ T –1
0 –( C( m ) [ C( m ) ] )
However, in practice, Tableau (b) is obtained by applying m Gauss-Jordan elimination steps to Tableau (a) and its updates. Since under vT in Tableau (a), –C(m)[C(m)]T is symmetric semi-definite, before each Gauss-Jordan elimination step, we pivot over the diagonal elements of –C(m)[C(m)]T in Tableau (a) and its updates, in the successive tableaux between Tableaux (a) and (b). Under the basic solution bB in Tableau (b), we find that x = [C(m)]+c(m), which is the required minimal length least squares solution (19.4.3) of the system Ax = b, for k = m. However, if rank(C(m)) = k < m, the absolute value of the pivot element in the (k + 1)th elimination step is assumed very small < EPS and is replaced by 0. The columns of [C(m)]T under vT in Tableau (a), used in the calculation so far, are matrix [C(k)]T. The obtained basic solution in this case is x(k) = [C(k)]+c(k), which is the approximate least squares solution of system Ax = b. Let us denote the successive tableaux between Tableau (a) and Tableau (b), as Tableau (1), Tableau (2), …, Tableau (k). Tableau (0) is Tableau (a) and Tableau (k) is Tableau (b). Then the columns under yT in Tableau (p), p = 1, 2, …, is [C(p)]+ and the first m elements of bB is x(p) = [C(p)]+c(p). In other words, in Tableaux (1), (2), …, (k), the calculated basic solutions are the repeated solutions © 2008 by Taylor & Francis Group, LLC
Chapter 19: Solution of Ill-Posed Linear Systems
717
x(1), x(2), …, x(k) respectively. The calculation may also be done in a smaller (condensed) tableaux, as we did in [1]. 19.7
Numerical results and comments
In this section, we show that our algorithm obtains a smooth solution to the Fredholm integral equation of the first kind and also may be applied in solving other ill-conditioned linear systems. LA_Mls() implements the Minimum norm Least Squares algorithm. DR_Mls() tests two examples, the results of which were computed in double-precision, as follows. Example 19.1 Solve the following Fredholm integral equation of the first kind 1
∫0 ( x
2
2 0.5
+y )
2 1.5
f ( y )dy = [ ( 1 + x )
3
–x ]⁄3
This example was solved by Baker ([9], pp. 664-667) and also by Baker et al. [10] and in [2, 3]. The exact solution is f(x) = x. Table 19.1 f(x) x(exact) x(calculated) error –––––––––––––––––––––––––––––––––––––– 0.0000 0.0000 –0.0066 0.0066 0.0714 0.0714 0.1249 –0.0535 0.1429 0.1429 0.1135 0.0294 0.2143 0.2143 0.1628 0.0515 0.2857 0.2857 0.2590 0.0267 0.3571 0.3571 0.3690 –0.0119 0.4286 0.4286 0.4720 –0.0434 0.5000 0.5000 0.5585 –0.0585 0.5714 0.5714 0.6259 –0.0544 0.6429 0.6429 0.6749 –0.0321 0.7143 0.7143 0.7080 0.0063 0.7857 0.7857 0.7279 0.0578 0.8571 0.8571 0.7372 0.1200 0.9286 0.9286 0.7382 0.1904 1.0000 1.0000 0.7329 0.2671
© 2008 by Taylor & Francis Group, LLC
718
Numerical Linear Approximation in C
In discretizing this equation, we use here the rectangular rule rather than the Chebyshev rule, as in [2, 3]. We used the parameters n = 15, m = 15, EPS = 10–11, TOLER = 10–4. The calculated rank(A) = 5. The results are shown in Table 19.1. Residual vector (b – Ax) = (–0.00001515, 0.00009018, –0.00008601, –0.00007357, 0.00000561, 0.00005994, 0.00006877, 0.00004385, 0.00000466, –0.00003197, –0.00005458, –0.00005681, –0.00003611, 0.00000753, 0.00007266)T ||b – Ax|| = 0.21293071E–03 ||x(exact) – x(calculated)|| = 0.37677 The pivots in the simplex tableaux to 8 decimal places are (–3039.73005711, –2.41835358, –0.00014840, –0.00000027, 0.00000000) |pivot|1/4 = (7.42521023, 1.24703875, 0.11037181, 0.02283408, 0.00571954) The pivots and |pivot|1/4 shown here differ slightly from those in ([2], p. 108) because here we use the rectangular rule in the discretization of the given equation while in [2], we used the Chebyshev rule. It is well-known (Baker [9]) that different quadrature rules for discretizing a given integral equation give results with different accuracies. In ([2], p. 107), where the discretization of this example is done by the Chebyshev rule, slightly better results are obtained and the estimated rank(A) = 3. In ([2], p. 108) we displayed the results of this problem together with the results by the truncated SVD method. The calculated values of the elements of x were almost the same for both methods. However, our method was 4 times faster than the truncated SVD method. Example 19.2 Obtain the least squares solution of the ill-conditioned system Ax = b, where matrix A (a scaled version of Hilbert matrix) and vector b are given such that the exact solution is x = (1, 1, 1, 1, 1, 1)T. We have n = 7 and m = 6. The results are shown in Table 19.2. © 2008 by Taylor & Francis Group, LLC
Chapter 19: Solution of Ill-Posed Linear Systems
719
Table 19.2 x(exact) x(calculated) error –––––––––––––––––––––––––––– 1.0000 0.9999 0.0001 1.0000 1.0016 –0.0016 1.0000 0.9950 0.0050 1.0000 1.0032 –0.0032 1.0000 1.0044 –0.0044 1.0000 0.9959 0.0041 Residual vector (b – Ax) = (0.00000001, –0.00000016, 0.00000046, –0.00000019, –0.00000036, 0.00000001, 0.00000025)T ||b – Ax|| = 0.6798635653E–06 ||x(exact) – x(calculated)|| = 0.008639 Matrix A and vector b are given in ([3], p. 292) and they are not multiplied by 1.0E05. Their elements should be of reasonable sizes. The pivots in the simplex tableaux to 8 decimal places are (–673.99536685, –0.28826497, –0.00001107, 0.00000000) |pivot|1/4 = (5.09523510, 0.73273674, 0.05767962, 0.00366241) For these results, we used EPS = 10–11 and TOLER = 10–4. This example was also solved in ([22], p. 315). We now illustrate that |pivot(i)|1/4 in the simplex tableau are of the order of magnitude of the elements of si(A). In example 2 in ([2], p. 108), the values (si(A)) and (|pivot(i)|1/4) were compared, where rank(A) was found to be 3 (si (A)) = (12.16, 1.435, 0.100) which is compared with (|pivot(i)|1/4) = (7.275, 1.132, 0.088) Compare also |pivot(i)|1/4 and si(A) in example 1 in ([2], p. 107) and the numerical example given in Baker et al. ([10], p. 147). This makes our method a good alternative to the truncated singular value decomposition method. It is also concluded that matrix C(k) in C(k)x = c(k) of (19.4.2) is a (good) approximation to matrix © 2008 by Taylor & Francis Group, LLC
720
Numerical Linear Approximation in C
C(m) = (ATA), where it is assumed that rank(A) = k. For this example, the results here agree with those Wilkinson ([22], p. 315), where TOLER was set rank(A) = 4. We solved this example also for EPS TOLER = 10–3. The results were less accurate, giving rank(A) = 3.
of Peters and to 10–4 and = 10–11 and the estimated
Finally, one observes that the matrix from which we chose the pivot elements in the linear programming problem is matrix –C(m)[C(m)]T and its updates in the simplex tableaux. Thus if A has large elements, the tableaux will contain very large numbers. Also, if A has many elements of very small sizes, this may result in some very small pivot elements of the order of EPS. A proper scaling of system Ax = b before calling LA_Mls() would solve this problem. In conclusion, the singular values of matrix A give an accurate representation of the condition of matrix A. In our method, however, the absolute values of the pivot elements in the successive simplex tableaux, raised to the power (1/4), reflect the condition of matrix A in the system Ax = b, (Lemma 19.5). In the simplex tableaux, the number of arithmetic operations for each iteration is of the order of m2 multiplications/divisions. The computation time is approximately proportional to m2(n + k/2), where k is the estimated rank of matrix A. The actual code of our subroutine (in FORTRAN IV, not including comment lines) consists of less than 100 lines.
References 1. 2.
3.
Abdelmalek, N.N., Minimum energy problem for discrete linear admissible control systems, International Journal of Systems Science, 10(1979)77-88. Abdelmalek, N.N., An algorithm for the solution of ill-posed linear systems arising from the discretization of Fredholm integral equation of the first kind, Journal of Mathematical Analysis and Applications, 97(1983)95-111. Abdelmalek, N.N., A program for the solution of ill-posed linear systems arising from the discretization of Fredholm
© 2008 by Taylor & Francis Group, LLC
Chapter 19: Solution of Ill-Posed Linear Systems
4. 5. 6.
7.
8. 9. 10. 11. 12. 13. 14. 15. 16.
721
integral equation of the first kind, Computer Physics Communications, 58(1990)285-292. Abdelmalek, N.N., Kasvand, T. and Croteau, J.P., Image restoration for space invariant pointspread functions, Applied Optics, 19(1980)1184-1189. Abdelmalek, N.N., Kasvand, T., Olmstead, J. and Tremblay, M.M., Direct algorithm for digital image restoration, Applied Optics, 20(1981)4227-4233. Abdelmalek, N.N. and Otsu, N., Restoration of images with missing high-frequency components by minimizing the L1 norm of the solution vector, Applied Optics, 24(1985)14151420. Abdelmalek, N.N. and Otsu, N., Speed comparison among methods for restoring signals with missing high-frequency components using two different low-pass-filter matrix dimensions, Optics Letters, 10(1985)372-374. Albert, A., Regression and More-Penrose Pseudoinverse, Academic Press, New York, 1972. Baker, C.T.H., The Numerical Treatment of Integral Equations, Oxford University Press, Clarendon, Oxford, 1977. Baker, C.T.H., Fox, L., Mayers, D.F. and Wright, K., Numerical solution of Fredholm integral equations of the first kind, Computer Journal, 7(1964)141-148. Fisher, M.E., Introductory Numerical Methods with the NAG Software Library, The University of Western Australia, Crawley, 1988. Hadley, G., Linear Programming, Addison-Wesley, Reading, MA, 1962. Hansen, P.C., The truncated SVD as a method for regularization, BIT, 27(1987)534-553. Hansen, P.C., Analysis of discrete ill-posed problems by means of the L-curve, SIAM Review, 34(1992)561-580. Hansen, P.C. and O’Leary, D., The use of the L-curve in the regularization of discrete ill-posed problems, SIAM Journal on Scientific Computation, 14(1993)1487-1502. Hanson, R.J., A numerical method for solving Fredholm integral equations of the first kind using singular values, SIAM Journal on Numerical Analysis, 8(1971)616-622.
© 2008 by Taylor & Francis Group, LLC
722
Numerical Linear Approximation in C
17.
Hanson, R.J. and Phillips, J.L., An adaptive numerical method for solving linear Fredholm integral equations of the first kind, Numerische Mathematik, 24(1975)291-307. Kammerer, W.J. and Nashed, M.Z., On the convergence of the conjugate gradient method for singular linear equations, SIAM Journal on Numerical Analysis, 8(1971)65-101. Kammerer, W.J. and Nashed, M.Z., Iterative methods for best approximate solution of linear integral equations of the first and second kinds, Journal of Mathematical Analysis and Applications, 40(1972)547-573. Morozov, V.A., Methods for Solving Incorrectly Posed Problems, Springer-Verlag, New York, 1984. Noble, B., Applied Linear Algebra, Prentice-Hall, Englewood Cliffs, NJ, 1969. Peters, G. and Wilkinson, J.H., The least squares problem and pseudo-inverses, Computer Journal, 13(1970)309-316. Phillips, D.L., A technique for the numerical solution of certain integral equations of the first kind, Journal of ACM, 9(1962)84-97. Squire, W., The solution of ill-conditioned linear systems arising from Fredholm equations of the first kind by steepest descents and conjugate gradients, International Journal for Numerical Methods in Engineering, 10(1976)607-617. Stewart, G.W., Introduction to Matrix Computations, Academic Press, New York, 1973. te Riele, H.J.J., A program for solving first kind Fredholm integral equations by means of regularization, Computer Physics Communications, 36(1985)423-432. Tikhonov, A.N., Solution of incorrectly formulated problems and method of regularization, Soviet Mathematics, 4(1963)1035-1038. Varah, J.M., On the numerical solution of ill-conditioned linear systems with applications to ill-posed problems, SIAM Journal on Numerical Analysis, 10(1973)257-267. Varah, J.M., A practical examination of some numerical methods for linear discrete ill-posed problems, SIAM Review, 21(1979)100-111.
18. 19.
20. 21. 22. 23. 24.
25. 26. 27. 28. 29.
© 2008 by Taylor & Francis Group, LLC
Chapter 19: Solution of Ill-Posed Linear Systems
30. 31.
32.
723
Varah, J.M., Pitfalls in the numerical solution of linear ill-posed problems, SIAM Journal on Scientific Statistical Computation, 4(1983)164-176. Vogel, C.R., Optimal choice of a truncation level for the truncated SVD solution of linear first kind integral equations when data are noisy, SIAM Journal on Numerical Analysis, 23(1986)109-134. Wahba, G., Practical approximate solutions to linear operator equations when the data are noisy, SIAM Journal on Numerical Analysis, 14(1977)651-667.
© 2008 by Taylor & Francis Group, LLC
724
19.8
Numerical Linear Approximation in C
DR_Mls
/*------------------------------------------------------------------DR_Mls --------------------------------------------------------------------This program is a driver for the function LA_Mls(), which calculates a smooth solution vector "x" for an ill-posed / ill-conditioned system of linear equations a*x = b "a" is a real n by m matrix of rank k, k <= m <= n. "b" is a real n vector. Remark 1: It is recommended to solve this program in double precision only. That is due to the ill-conditioning of the derived system of equations a*x = b. This program contains 2 examples. Example 1: This example is for getting the least squares solution of Fredholm integral equation of the first kind. For this example the real solution is y = x. The integration is done by the rectangular rule. Example 2: This example is for getting the least squares solution of an ill-conditioned system of linear equations. Each example is solved twice, once for the parameter "toler" = 1.0e-04 and once for "toler" = 1.0e-03 -------------------------------------------------------------------*/ #include "DR_Defs.h" #include "LA_Prototypes.h" #define TOLER1 #define TOLER2
1.0E-04 1.0E-03
#define NN1_ROWS #define MM1_COLS
60 40
© 2008 by Taylor & Francis Group, LLC
Chapter 19: DR_Mls
#define #define #define #define
N1m M1m N2m M2m
725
15 15 7 6
void DR_Mls (void) { /*---------------------------------------Constant matrices/vectors ----------------------------------------*/ static tNumber_R a2init[N2m][M2m] = { { 3.6036, 1.8018, 1.2012, 0.9009, 0.72072, { 1.8018, 1.2012, 0.9009, 0.72072, 0.6006, { 1.2012, 0.9009, 0.72072, 0.6006, 0.5148, { 0.9009, 0.72072, 0.6006, 0.5148, 0.45045, { 0.72072, 0.6006, 0.5148, 0.45045, 0.4004, { 0.6006, 0.5148, 0.45045, 0.4004, 0.36036, { 0.5148, 0.45045, 0.4004, 0.36036, 0.3276, };
0.6006 0.5148 0.45045 0.4004 0.36036 0.3276 0.3003
}, }, }, }, }, }, }
static tNumber_R b2[N2m+1] = { NIL, 8.82882, 5.74002, 4.38867, 3.58787, 3.04733, 2.65421, 2.35391 }; /*---------------------------------------Variable matrices/vectors ----------------------------------------*/ tMatrix_R a = alloc_Matrix_R (NN1_ROWS, MM1_COLS); tVector_R b = alloc_Vector_R (NN1_ROWS); tVector_R x = alloc_Vector_R (MM1_COLS); tVector_R xp = alloc_Vector_R (M1m + 1); tVector_R yp = alloc_Vector_R (M1m + 1); tVector_R xexact = alloc_Vector_R (M1m + 1); tVector_R xerror = alloc_Vector_R (M1m + 1); tVector_R r = alloc_Vector_R (N1m + 1); tMatrix_R
a2
int int tNumber_R
krun, irank; i, j, k, m, n, Iexmpl; pi, dx, dy, e, e1, s, sum, toler;
© 2008 by Taylor & Francis Group, LLC
= init_Matrix_R (&(a2init[0][0]), N2m, M2m);
726
Numerical Linear Approximation in C eLaRc
rc = LaRcOk;
pi = 4.0*atan (1.0); dx = 2.*pi/99; pi = 4.0*atan (1.0); prn_dr_bnr ("DR_Mls, " "Solution of Ill-posed Systems of Equations"); for (krun = 1; krun <= 2; krun++) { if (krun == 1) toler = TOLER1; if (krun == 2) toler = TOLER2; PRN ("Test run output # = %d, toler = %8.4f\n", krun, toler); for (Iexmpl = 1; Iexmpl <= 2; Iexmpl++) { switch (Iexmpl) { case 1: n = N1m; m = M1m; dx = 1.0 / (tNumber_R)(n - 1); for (i = 1; i <= n; i++) { xp[i] = dx * ((tNumber_R)(i - 1)); } dy = 1.0 / (tNumber_R)(m - 1); for (j = 1; j <= m; j++) { yp[j] = dy * (tNumber_R)(j - 1); } for (i = 1; i <= n; i++) { e = xp[i]; xexact[i] = e; e1 = (pow ((1.0 + e*e), 1.5) - e*e*e) / 3.0; b[i] = e1/dx; for (j = 1; j <= m; j++) { a[i][j] = pow ((xp[i]*xp[i] + yp[j]*yp[j]), 0.50); } © 2008 by Taylor & Francis Group, LLC
Chapter 19: DR_Mls
727
} break; case 2: n = N2m; m = M2m; for (i = 1; i <= n; i++) { b[i] = b2[i]; xexact[i] = 1.0; for (j = 1; j <= m; j++) { a[i][j] = a2[i][j]; } } break; default: break; } prn_algo_bnr ("Mls"); prn_example_delim(); PRN ("Example #%d: Size of coefficient matrix " "%d by %d\n", Iexmpl, n, m); prn_example_delim(); if (Iexmpl == 1) { PRN ("Least Squares Solution of a Fredholm " "Integral Equation of the First Kind\n"); prn_example_delim(); PRN ("Exact solution is f(x) = x\n"); } if (Iexmpl == 2) { PRN ("Least Squares Solution of an Ill-conditioned " "System of Linear Equations\n"); prn_example_delim(); PRN ("r.h.s. Vector \"b\"\n"); prn_Vector_R (b, n); PRN ("Coefficient Matrix, \"a\"\n"); prn_Matrix_R (a, n, m); } PRN ("Parameters \"toler\" = %11.4f, \"EPS\" " "= %13.11f\n", toler, EPS); © 2008 by Taylor & Francis Group, LLC
728
Numerical Linear Approximation in C
rc = LA_Mls (m, n, a, b, toler, x, &irank); if (rc >= LaRcOk) { PRN ("Calculated rank of the system of equations " " = %d\n", irank); PRN ("Calculated solution vector, \"x\"\n"); prn_Vector_R (x, m); PRN ("Exact solution vector, \"xexact\"\n"); prn_Vector_R (xexact, m); sum = 0.0; for (i = 1; i <= m; i++) { xerror[i] = xexact[i] - x[i]; sum = sum + xerror[i] * (xerror[i]); } sum = pow (sum, 0.5); if (Iexmpl == 1) { PRN (" Coordinate x(exact) " "x(calculated) xerror\n"); for (i = 1; i <= m; i++) { PRN (" %8.4f, %8.4f," " %8.4f, %8.4f\n", xp[i], xexact[i], x[i], xerror[i]); } } if (Iexmpl == 2) { PRN (" x(exact) x(calculated) " "xerror\n"); for (i = 1; i <= m; i++) { PRN (" %8.4f, %8.4f, %8.4f\n", xexact[i], x[i], xerror[i]); } } sum = 0.0; for (i = 1; i <= m; i++) { xerror[i] = xexact[i] - x[i]; © 2008 by Taylor & Francis Group, LLC
Chapter 19: DR_Mls sum = sum + xerror[i] * (xerror[i]); } sum = pow (sum, 0.5); PRN ("L2 Norm ||x(exact) - x(calculated)|| " "= %17.10f\n", sum); sum = 0.0; for (i = 1; i <= n; i++) { s = b[i]; for (k = 1; k <= m; k++) { s = s - a[i][k] * (x[k]); } r[i] = s; sum = sum + s*(s); } sum = pow (sum, 0.5); PRN ("Residual vector, (b-a*x)\n"); prn_Vector_R_exp (r, m); PRN ("L2 Norm ||b-a*x|| = %17.10f\n", sum); } prn_la_rc (rc); } } free_Matrix_R free_Vector_R free_Vector_R free_Vector_R free_Vector_R free_Vector_R free_Vector_R free_Vector_R
(a, NN1_ROWS); (b); (x); (xp); (yp); (xexact); (xerror); (r);
uninit_Matrix_R (a2); }
© 2008 by Taylor & Francis Group, LLC
729
730
19.9
Numerical Linear Approximation in C
LA_Mls
/*------------------------------------------------------------------LA_Mls --------------------------------------------------------------------Given is the system of linear equations a*x = b "a" is a given real n by m matrix of rank k, k <= m <= n. "b" is a given real n vector. The system a*x = b may result from the discretization of "Fredholm integral equation of the first kind". In this case, it is known that a*x = b is ill-posed; a small perturbation in the r.h.s. vector "b" results in a large change to the solution vector "x". An ill-posed system is also ill-conditioned. The problem is to calculate a smooth solution m vector "x" to the (ill-posed / ill-conditioned) system a*x = b. The program pre-multiplies the equation a*x = b by a(transpose). One gets the system of linear equations c*x = f The m by m matrix c = a(transpose)*a and the m vector f = a(transpose)*b. The program then, using linear programming techniques calculates the minimum length least squares solution x(k) of k equations of the system c*x=f, where k = 1, 2, ... . Meanwhile after each simplex iteration (solution) the Euclidean norm ||b-a*x|| is calculated. Computation stops when this norm is less than a given tolerance "toler" or when the absolute value of the pivot element for the next simplex iteration is less than "EPS", a computer-dependent parameter. The obtained solution vector x[k] is itself the smooth solution "x" of the system a*x = b. Inputs m Number of columns of matrix "a" of the system a*x = b.
© 2008 by Taylor & Francis Group, LLC
Chapter 19: LA_Mls n a b toler
731
Number of rows of matrix "a" of a*x = b. An n by m real matrix "a" of the system a*x = b. An n real vector of the system a*x = b. Both matrix "a" and vector "b" are not destroyed in the computation. A real specified tolerance; Euclidian ||b - a*x|| < toler.
Local Variables c An (m + m) by n real matrix. f An m real vector. Outputs irank The estimated rank of matrix "c" of c*x = f that results in a best solution "x" for the system a*x = b. Also, "irank" is the number of simplex iterations in the linear programming problem. x Am m real vector containing the minimum length least squares solution to the "irank" equations of the system c*x = f. "x" is the smooth solution to the system a*x = b. diag A real m vector containing the pivot elements in the "irank" simplex iterations. ic An m integer vector whose first "irank" elements are the indices of the "irank" linearly independent equations of the system c*x = f. Returns one of LaRcSolutionFound LaRcErrBounds LaRcErrNullPtr LaRcErrAlloc -------------------------------------------------------------------*/ #include "LA_Prototypes.h" eLaRc LA_Mls (int m, int n, tMatrix_R a, tVector_R b, tNumber_R toler, tVector_R x, int *pIrank) { tVector_R f = alloc_Vector_R (m); tMatrix_R c = alloc_Matrix_R (m + m, m); tVector_I ic = alloc_Vector_I (m); tVector_R diag = alloc_Vector_R (m); int int tNumber_R
i = 0, j = 0, k = 0, kd = 0, kdp = 0, mpi = 0, mpj = 0, mpm = 0; iout = 0, mout = 0; d = 0.0, piv = 0.0, pivot = 0.0, s = 0.0, sum = 0.0;
© 2008 by Taylor & Francis Group, LLC
732
Numerical Linear Approximation in C
/* Validation of the data before executing the algorithm */ eLaRc rc = LaRcSolutionFound; VALIDATE_BOUNDS ((0 < m) && (0 < n) && !((n == 1) && (m == 1))); VALIDATE_PTRS (a && b && toler && x && pIrank); VALIDATE_ALLOC (f && c && ic && diag); /* Initialization */ mpm = m + m; *pIrank = m; for (j = 1; j <= m; j++) { ic[j] = j; diag[j] = 0.0; x[j] = 0.0; s = 0.0; for (k = 1; k <= n; k++) { s = s + b[k] * (a[k][j]); } f[j] = s; } for (j = 1; j <= m; j++) { for (i = j; i <= m; i++) { sum = 0.0; for (k = 1; k <= n; k++) { sum = sum + a[k][i] * (a[k][j]); } c[j][i] = sum; c[i][j] = sum; } } for (j = 1; j <= m; j++) { mpj = m + j; for (i = j; i <= m; i++) { mpi = m + i; s = 0.0; for (k = 1; k <= m; k++) { © 2008 by Taylor & Francis Group, LLC
Chapter 19: LA_Mls s = s + c[k][i] * (c[k][j]); } c[mpj][i] = - s; c[mpi][j] = - s; } } for (iout = 1; iout <= m; iout++) { mout = m + iout; piv = 0.0; /* Pivoting along the diagonal elements of matrix "c" */ for (j = iout; j <= m; j++) { mpj = m + j; d = fabs (c[mpj][j]); if (d > piv) { kd = j; kdp = mpj; piv = d; } } if (piv < EPS) { *pIrank = iout - 1; GOTO_CLEANUP_RC (LaRcSolutionFound); } if (kdp != mout) { /* Swap two rows of matrix "c" */ swap_rows_Matrix_R (c, kdp, mout); /* Swap of two elements of vector "f" */ swap_elems_Vector_R (f, kd, iout); /* Swap two columns of matrix "c" */ for (i = 1; i <= mpm; i++) { d = c[i][iout]; c[i][iout] = c[i][kd]; c[i][kd] = d; } © 2008 by Taylor & Francis Group, LLC
733
734
Numerical Linear Approximation in C swap_elems_Vector_I (ic, kd, iout); } /* A Gauss-Jorden elimination step */ pivot = c[mout][iout]; diag[iout] = pivot; for (j = 1; j <= m; j++) { c[mout][j] = c[mout][j]/pivot; } f[iout] = f[iout]/pivot; for (i = 1; i <= mpm; i++) { if (i != mout) { d = c[i][iout]; for (j = 1; j <= m; j++) { if (j != iout) { c[i][j] = c[i][j] - d * (c[mout][j]); } } c[i][iout] = - c[i][iout]/pivot; if (i > m) { k = i - m; f[k] = f[k] - d * (f[iout]); } else if (i <= m) { x[i] = x[i] - d * (f[iout]); } } } c[mout][iout] = 1.0/pivot; sum = 0.0; for (i = 1; i <= n; i++) { s = b[i]; for (k = 1; k <= m; k++) { s = s - a[i][k] * (x[k]); } sum = sum + s*s;
© 2008 by Taylor & Francis Group, LLC
Chapter 19: LA_Mls } sum = sqrt (sum); if (sum < toler) { *pIrank = iout; break; } } CLEANUP: free_Vector_R free_Matrix_R free_Vector_I free_Vector_R
(f); (c, m + m); (ic); (diag);
return rc; }
© 2008 by Taylor & Francis Group, LLC
735
PART 5
Solution of Underdetermined Systems Of Linear Equations
© 2008 by Taylor & Francis Group, LLC
738
Chapter 20
Chapter 21
Chapter 22
Chapter 23
Numerical Linear Approximation in C
L1 Solution of Underdetermined Linear Equations
739
Bounded and L1 Bounded Solutions of Underdetermined Linear Equations
765
Chebyshev Solution of Underdetermined Linear Equations
799
Bounded Least Squares Solution of Underdetermined Linear Equations
833
© 2008 by Taylor & Francis Group, LLC
739
Chapter 20 L1 Solution of Underdetermined Linear Equations
20.1
Introduction
This is the first of four chapters on the solution of underdetermined consistent systems of linear equations subject to certain types of constraints. For consistent underdetermined systems of linear equations, the residual vector is zero. Hence, the constraints are on the solution vector of the system, not on the residual vector. In this chapter, the constraint is that the L1 norm of the solution vector be as small as possible. Consider the underdetermined system of linear equations Ca = f C = (cij) is a given real n by m matrix of rank k, k ≤ n < m, and f = (fi) is a given real n-vector. It is required to calculate a solution m-vector a = (aj) for this system. It is known that system Ca = f has a solution if and only if rank(C|f) = rank(C). If rank(C|f) > rank(C), the system is inconsistent and it has no solution (Theorem 4.13). We shall assume throughout our work that the system Ca = f is consistent. Also, because the number of equations is less than the number of unknowns, system Ca = f has an infinite number of solutions. In this problem, among these infinite solutions, we seek a solution vector a whose L1 norm m
(20.1.1)
z = a
1
=
∑ i=1
© 2008 by Taylor & Francis Group, LLC
ai
740
Numerical Linear Approximation in C
is as small as possible. Using basic theorems of functional analysis, Cadzow [4] developed algorithmic procedures for solving both the problem of the minimum L1 solution and the problem of the minimum L∞ solution (Chapter 22), of the underdetermined system Ca = f. Kolev [7] presented an iterative algorithm to solve both these problems based on the steepest descent method for constrained optimization. His algorithm results in a procedure of theoretically infinite number of iterations. Kolev [8] then developed his algorithm such that it required a finite number of iterations. In both of Kolev’s methods, the minimum energy solution (Chapter 23) is used to start the iterations. A third algorithm was presented by Kolev [9] based on the simplex method of linear programming [6], but unfortunately, this method was not properly developed. See the comments in [2]. In this context, Dax [5] presented a method for calculating the Lp norm of the solution vector of the consistent system Ca = f, where 1 < p < ∞. He presented a primal Newton method for p > 2 and a dual Newton method for 1 < p < 2. In our work, this problem is formulated as a linear programming problem [2] for which a modified simplex algorithm is described. In this algorithm minimum computer storage is required, where an initial basic feasible solution is available with no artificial variables needed. This method applies to full rank as well as rank deficient cases i.e., when rank(C|f) = rank(C) < n. Our method is initiated by the method that we described for the Chebyshev solution of overdetermined systems of linear equations [1]. In Section 20.2, the linear programming formulation of the problem is given. In Section 20.3, the algorithm is described and in Section 20.4, numerical results and comments are given. 20.1.1
Applications of the algorithm
In control theory this problem is known as the Minimum fuel problem for discrete linear control systems [11, 9, 3]. The parameter ai represents the rate of fuel consumption in time interval i and thus z in (20.1.1) represents the total fuel consumption per unit time.
© 2008 by Taylor & Francis Group, LLC
Chapter 20: L1 Solution of Underdetermined Linear Equations
20.2
741
Linear programming formulation of the problem
This problem is reduced to a linear programming problem as follows m
minimize z =
∑
aj
j=1
subject to Ca = f aj, j = 1, 2, …, m, unrestricted in sign Since the variables aj are unrestricted in sign, we may write aj = vj – wj Hence |aj| = vj + wj vj ≥ 0 and wj ≥ 0, j = 1, 2, …, m Let the m-vectors v = (vj) and w = (wj). Hence, the problem now reduces to (20.2.1a)
minimize z =
(20.2.1b) (20.2.1c)
C –C
m
m
j=1
j=1
∑ vj + ∑ wj v = f w
vj ≥ 0 and wj ≥ 0, j = 1, 2, …, m
Let us denote the matrix on the l.h.s. of (20.2.1b) by D and let the vectors v and w be augmented to form the 2m-vector b. Problem (20.2.1) may be rewritten as (20.2.2a)
minimize z = e2mTb
subject to (20.2.2b) © 2008 by Taylor & Francis Group, LLC
Db = f
742
Numerical Linear Approximation in C
bj ≥ 0, j = 1, 2, …, 2m
(20.2.2c)
where e2m is a 2m-vector each element of which is 1. Problem (20.2.2) is a linear programming problem of n constraints in 2m variables. Assume without loss of generality that rank(C) = n. Let the basis matrix for this problem be denoted by the n-square matrix B. Let us, as usual, construct the simplex tableau for problem (20.2.2) by calculating the vectors (yj) yj = B–1Dj, j = 1, 2, …, 2m
(20.2.3)
where Dj is the jth column of matrix D. From (20.2.2a), the marginal costs are given by (zj – 1) zj – 1 = enTyj – 1, j = 1, 2, …, 2m
(20.2.4)
where en is an n-vector each element of which is 1. Let bB denote the basic solution to problem (20.2.2) bB = B–1f The objective function denoted by z is given by z = enTbB 20.2.1
Properties of the matrix of constraints
The analysis of this problem is initiated by the asymmetry matrix D of (20.2.2b) has. For any column j, 1 ≤ j ≤ m, we have (20.2.5)
Dj = Cj and Dj+m = –Cj, j = 1, 2, …, m
where again Dj and Cj are the jth columns of matrix D and matrix C respectively. This asymmetry enables us to use a condensed simplex tableau of n constraints in only m variables. Definition Because of this asymmetry in matrix D, we define any column j, 1 ≤ j ≤ m, of D and the column (j + m) as two corresponding columns. Consider the following lemmas [2]. Lemma 20.1 Any two corresponding columns could not appear together in any basis. Otherwise, the basis matrix would be singular.
© 2008 by Taylor & Francis Group, LLC
Chapter 20: L1 Solution of Underdetermined Linear Equations
743
Lemma 20.2 Let i and j be any two corresponding columns in the simplex tableau, i.e., |i – j| = m, 1 ≤ i, j ≤ 2m. Then from (20.2.3-5) yi + yj = 0 (zi – 1) + (zj – 1) = –2 Lemma 20.3 Assume that we have obtained an infeasible basic solution bB, to the linear programming problem (20.2.2); that is, one or more elements of bB is < 0. Let bBs, 1 ≤ s ≤ n, be one of such elements. Let i be the column associated with bBs. Let j be the corresponding column to column i. Then if column j replaces column i in the basis, the new basic solution has the new bBs = –bBs and all the other elements of bB are unchanged. The proof of this lemma is similar to the proof of Lemma 10.6. It is easily seen, in this case, that the simplex tableau is changed as follows. Row s of the tableau and the element bBs are multiplied by –1, while the rest of the tableau as well as the other elements of bB are left unchanged. Lemma 20.4 The elements of the solution vector a to the system Ca = f are given in terms of the elements of the optimal basic solution bB to problem (20.2.2) as follows. For any element bBs, s = 1, 2, …, n, of the optimal basic solution, let j be the column in the simplex tableau associated with bBs. Then aj
= bBs, if
1 ≤ j ≤ m
aj – m = –bBs, if instead m < j ≤ 2m The remaining (m – n) elements of a are zero elements and thus the L1 norm z of (20.1.1) is the sum of the elements of bB, being all non-negative. Proof: The proof follows directly from aj = vj – wj and b = [vT wT]T. Also, since the non-basic elements of vector b are 0’s, the remaining elements of vector a are 0’s and the objective function z = the sum of © 2008 by Taylor & Francis Group, LLC
744
Numerical Linear Approximation in C
the elements of the basic solution bB. Note 1 If rank(C|f) = rank(C) = k < n, then (m – k) elements of a are zero elements instead. Lemma 20.5 If the system of equation Ca = f is not consistent, the linear programming solution would have one or more zero rows in the simplex tableau but the corresponding bB elements would be nonzero. The solution of the programming problem is not feasible. 20.3
Description of the algorithm
From Lemma 20.2, the corresponding columns yi and yj are the negative of one another, |i – j| = m. Also, the marginal costs of the corresponding columns are related to one another. Hence, we need to construct simplex tableaux for problem (20.2.2), for n constraints in only m variables, and we call this the condensed tableaux. We start with the first half of matrix D of (20.2.2b). In part 1 of this algorithm, we obtain a basic solution, feasible or not, without needing any artificial variables. This is simply done by performing a finite number of Gauss-Jordan elimination steps to the initial data, with partial pivoting. We choose the pivot as the largest element in absolute value in row i in tableau (i – 1), i = 1, 2, …, n. If rank(C|f) = rank(C) = k < n, this is indicated by the presence of 0 rows and corresponding 0 bB elements in the simplex tableaux. These rows are deleted from the following tableaux. If one or more elements of the basic solution is < 0, the basic solution is not feasible. For each of these elements bBs, we apply Lemma 20.3, which implies replacing column i associated with bBs by its corresponding column. If Ca = f is inconsistent, this is detected, as explained by Lemma 20.5. In this case the calculation is terminated as the solution is not feasible, with an indication that the system is inconsistent. We end part 1 by calculating the marginal costs in the condensed tableau using (20.2.4). Part 2 of the algorithm is the ordinary simplex method. The choice of the non-basic column that enters the basis would be the column © 2008 by Taylor & Francis Group, LLC
Chapter 20: L1 Solution of Underdetermined Linear Equations
745
with the largest positive marginal cost of the non-basic columns and their corresponding columns. That is, by using Lemma 20.2 for calculating the marginal costs of the corresponding columns. The solution vector a of system Ca = f and the optimum norm z of (20.1.1) are obtained from Lemma 20.4. The steps described here are explained by the following detailed numerical example. This example is a simplified version of the example solved by Cadzow ([3], pp. 489, 490) and by Kolev ([7], p. 783). Example 20.1 Obtain the minimum L1 norm of the solution vector a of the following underdetermined system (20.3.1)
2a1 – a2 – 4a3 – 3a4 + a5 = 2 2a1 – a2 – 4a3 – 3a4 + a5 = 2 5a1 + a2 + 3a3 – 2a4 + 0 = 1
Here, matrix C is a 3 by 5 matrix of rank 2 and system (20.3.1) is consistent. The second equation in (20.3.1) is the same as the first one. The Initial Data and the condensed simplex tableaux are shown. The pivot in each tableau is bracketed. Initial Data B bB ––––––––––––––– 2 2 1 –––––––––––––––
D1 D2 D3 D4 D5 ––––––––––––––––––––––––––––––– 2 –1 (–4) –3 1 2 –1 –4 –3 1 5 1 3 –2 0 ––––––––––––––––––––––––––––––– Tableau 20.3.1 (part 1)
B bB ––––––––––––––– D3 –0.5 0 2.5 –––––––––––––––
© 2008 by Taylor & Francis Group, LLC
D1 D2 D3 D4 D5 ––––––––––––––––––––––––––––––– –0.5 0.25 1 0.75 –0.25 0 0 0 0 0 (6.5) 0.25 0 –4.25 0.75 –––––––––––––––––––––––––––––––
746
Numerical Linear Approximation in C
Tableau 20.3.2 B bB ––––––––––––––– D3 –0.308 D1 0.385 –––––––––––––––
D1 D2 D3 D4 D5 ––––––––––––––––––––––––––––––– 0 0.270 1 0.423 –0.192 1 0.039 0 –0.654 0.115 ––––––––––––––––––––––––––––––– Tableau 20.3.2*
B bB ––––––––––––––– D8 0.308 D1 0.385 –––––––––––––––
D1 D2 D8 D4 D5 ––––––––––––––––––––––––––––––– 0 –0.270 1 –0.423 0.192 1 0.039 0 –0.654 0.115 ––––––––––––––––––––––––––––––– 0 –1.231 0 –2.077 –0.693 Tableau 20.3.3 (part 2)
B bB ––––––––––––––– D8 0.308 0.385 D10 –––––––––––––––
D1 D2 D8 D9 D5 ––––––––––––––––––––––––––––––– 0 –0.270 1 0.423 0.192 1 0.039 0 (0.654) 0.115 ––––––––––––––––––––––––––––––– 0 –1.231 0 0.077 –0.693 Tableau 20.3.4
B bB ––––––––––––––– 0.059 D8 D10 0.588 ––––––––––––––– z = 0.647
D1 D2 D8 D9 D5 ––––––––––––––––––––––––––––––– –0.647 –0.295 1 0 –0.266 1.529 0.060 0 1 0.176 ––––––––––––––––––––––––––––––– –0.118 –1.235 0 0 –1.090
Note that the prices are the coefficients of vector b in (20.2.2a), which are the elements of e2m, which are all 1’s, so we need not write these prices on the top of each tableau. In Tableau 20.3.1, the whole of the second row consists of zero element, indicating rank deficiency. This row is deleted from the following tableaux. Tableau 20.3.2 gives a basic solution that is not feasible, since bB1 < 0. © 2008 by Taylor & Francis Group, LLC
Chapter 20: L1 Solution of Underdetermined Linear Equations
747
In Tableau 20.3.2*, y8 replaces its corresponding column y3 (which is associated with bB1) in the basis. In effect, the simplex tableau is left unchanged except for the first row and the first element in bB. They are multiplied by –1. The element y18 is corrected to +1. This gives Tableau 20.3.2* in which the marginal costs (zj – 1) are also calculated from (20.2.4). In Tableau 20.3.2*, y9, the corresponding column to y4, has the largest positive marginal cost. We exchange y4 by y9 (Tableau 20.3.3) and perform a Gauss-Jordan elimination step. This gives Tableau 20.3.4, which is optimal. From Lemma 20.4, the solutions a and z of (20.1.1) are obtained, a = (0, 0, –0.059, –0.588, 0)T and z = (0.059 + 0.588) = 0.647. Since rank(C) = 2, only 2 elements of a are nonzeros and the other 3 are zeros (Lemma 20.4). Finally, since bB is not degenerate and none of the marginal costs for the non-basic columns is zero, a is unique ([6], p. 166). 20.4
Numerical results and comments
LA_Fuel() implements the minimum fuel algorithm, and DR_Fuel() demonstrates 7 test cases. Table 20.4.1 shows the results, calculated in single-precision. Table 20.4.1 –––––––––––––––––––––––––––––––––––––––––––––––––––––––– Example C(n×m) Iterations z –––––––––––––––––––––––––––––––––––––––––––––––––––––––– 1 4×5 2 2.12 2 4×8 7 2.055 3 2×3 2 4.975 4 4×6 4 1.359 5 4×100 13 10.930 6 5×100 19 14.778 7 7×100 22 20.464 For each example, the size of matrix C, the number of iterations and the optimum L1 norm z are given. This method can be characterized as follows. All the calculations are done in the condensed simplex tableaux. The inverse of the basis © 2008 by Taylor & Francis Group, LLC
748
Numerical Linear Approximation in C
matrix, i.e., matrix B–1, is never calculated. The elements of the solution vector a and the optimum norm z are obtained directly from the optimal solution bB.
References 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11.
Abdelmalek, N.N., Chebyshev solution of overdetermined systems of linear equations, BIT, 15(1975)117-129. Abdelmalek, N.N., A simplex algorithm for minimum fuel problems of linear discrete control systems, International Journal of Control, 26(1977)635-642. Cadzow, J.A., Functional analysis and the optimal control of linear discrete systems, International Journal of Control, 17(1973)481-495. Canon, M.D., Cullum Jr., C.D. and Polak, E., Theory of Optimal Control and Mathematical Programming, McgrawHill, New York, 1970. Dax, A., Methods for calculating lp-minimum norm solutions of consistent linear systems, Journal of Optimization Theory and Applications, 83(1994)333-354. Hadley, G., Linear Programming, Addison-Wesley, Reading, MA, 1962. Kolev, L.V., Iterative algorithm for the minimum fuel and minimum amplitude problems, International Journal of Control, 21(1975)779-784. Kolev, L.V., Algorithm of finite number of iterations for the minimum fuel and minimum amplitude control problems, International Journal of Control, 22(1975)97-102. Kolev, L.V., Minimum-fuel control of linear discrete systems, International Journal of Control, 23(1976)207-216. Luenberger, D.G., Optimization by Vector Space Methods, John Wiley, New York, 1969. Porter, W.A., Modern Foundations of System Engineering, Macmillan, New York, 1966.
© 2008 by Taylor & Francis Group, LLC
Chapter 20: DR_Fuel
20.5
749
DR_Fuel
/*------------------------------------------------------------------DR_Fuel --------------------------------------------------------------------This program is a driver for the function LA_Fuel(), which calculates the L-One solution of an underdetermined system of consistent linear equations. Given is the underdetermined consistent system c*a = f "c" is a given real n by m matrix of rank k <= n <= m. "f" is a given real n vector. It is required to calculate the m vector "a" for this system such that the L-One norm z of "a" z = |a[1]|+ |a[2]|+ ... + |a[m]| is as small as possible. In control theory, this problem is known as the "Minimum Fuel" problem for linear discrete control systems. This program has 7 examples whose results appear in the text. -------------------------------------------------------------------*/ #include "DR_Defs.h" #include "LA_Prototypes.h" #define #define #define #define #define #define #define #define #define #define #define #define
N1e M1e N2e M2e N3e M3e N4e M4e N5e M5e N6e M6e
4 5 4 8 2 3 4 6 4 100 5 100
© 2008 by Taylor & Francis Group, LLC
750
Numerical Linear Approximation in C
#define N7e #define M7e
7 100
void DR_Fuel (void) { /*---------------------------------------Constant matrices/vectors ----------------------------------------*/ static tNumber_R b1init[N1e][M1e] = { { 2.0, 1.0, -3.0, 5.0, 3.0 }, { 2.0, 1.0, -3.0, 5.0, 3.0 }, { 2.0, 1.0, -3.0, 5.0, 3.0 }, {-8.0, -4.0, 2.0, 5.0, 6.0 } }; static tNumber_R f1[N1e+1] = { NIL, 10.0, 10.0, 10.0, 8.0 }; static tNumber_R { {-6.0, 4.0, { 7.0, -3.0, { 9.0, 7.0, { 9.0, -3.0, };
b2init[N2e][M2e] = 2.0, 9.0, -5.0, 2.0,
-8.0, 5.0, 2.0, 4.0,
5.0, -9.0, 7.0, 0.0,
-1.0, 8.0, 0.0, 3.0,
static tNumber_R f2[N2e+1] = { NIL, 5.0, 7.0, -9.0, 1.0 }; static tNumber_R b3init[N3e][M3e] = { {-0.7182, -3.6706, -11.6961 }, {-1.7182, -4.6706, -12.6961 } }; static tNumber_R f3[N3e+1] = { NIL, 24.8218, 24.4218 };
© 2008 by Taylor & Francis Group, LLC
6.0, 7.0, 1.0, 0.0,
3.0 -4.0 8.0 1.0
}, }, }, }
Chapter 20: DR_Fuel
751
static tNumber_R b4init[N4e][M4e] { {2.0, -1.0, -4.0, 0.0, -3.0, {2.0, -1.0, -4.0, 0.0, -3.0, {5.0, 1.0, 3.0, 1.0, -2.0, {1.0, -2.0, -1.0, -5.0, 1.0, };
= 1.0 1.0 0.0 4.0
}, }, }, }
static tNumber_R f4[N4e+1] = { NIL, 2.0, 2.0, 1.0, -4.0 }; /*---------------------------------------Variable matrices/vectors ----------------------------------------*/ tMatrix_R c = alloc_Matrix_R (Ne_ROWS, Me_COLS); tVector_R f = alloc_Vector_R (Ne_ROWS); tVector_R a = alloc_Vector_R (Me_COLS); tMatrix_R fay = alloc_Matrix_R (N7e, M7e); tVector_R fy = alloc_Vector_R (N7e + 1); tMatrix_R tMatrix_R tMatrix_R tMatrix_R
b1 b2 b3 b4
= = = =
init_Matrix_R init_Matrix_R init_Matrix_R init_Matrix_R
int int tNumber_R
irank, iter; i, j, m, n, Iexmpl; pi, dx, x1, x2, x3, z;
eLaRc
rc = LaRcOk;
pi = 4.0*atan (1.0); dx = 2.*pi/99; for (j = 1; j <= 100; j++) { x1 = (j - 1)* dx; x2 = x1 + x1; x3 = x2 + x1; fay[1][j] = 1.0; fay[2][j] = sin (x1); fay[3][j] = cos (x1); fay[4][j] = sin (x2); fay[5][j] = cos (x2); © 2008 by Taylor & Francis Group, LLC
(&(b1init[0][0]), (&(b2init[0][0]), (&(b3init[0][0]), (&(b4init[0][0]),
N1e, N2e, N3e, N4e,
M1e); M2e); M3e); M4e);
752
Numerical Linear Approximation in C fay[6][j] = sin (x3); fay[7][j] = cos (x3); } for (i = 1; i <= 7; i++) { fy[i] = (10.0+15.0*fay[2][i]-7.0*fay[3][i]+9.0*fay[4][i]) * (1.0 + 0.01*fay[4][i]); } prn_dr_bnr ("DR_Fuel, L1 Solution of an Underdetermined System"); for (Iexmpl = 1; Iexmpl <= 7; Iexmpl++) { switch (Iexmpl) { case 1: n = N1e; m = M1e; for (i = 1; i <= n; i++) { f[i] = f1[i]; for (j = 1; j <= m; j++) c[i][j] = b1[i][j]; } break; case 2: n = N2e; m = M2e; for (i = 1; i <= n; i++) { f[i] = f2[i]; for (j = 1; j <= m; j++) c[i][j] = b2[i][j]; } break; case 3: n = N3e; m = M3e; for (i = 1; i <= n; i++) { f[i] = f3[i]; for (j = 1; j <= m; j++) c[i][j] = b3[i][j];
© 2008 by Taylor & Francis Group, LLC
Chapter 20: DR_Fuel } break; case 4: n = N4e; m = M4e; for (i = 1; i <= n; i++) { f[i] = f4[i]; for (j = 1; j <= m; j++) c[i][j] = b4[i][j]; } break; case 5: n = N5e; m = M5e; for (i = 1; i <= n; i++) { f[i] = fy[i]; for (j = 1; j <= m; j++) c[i][j] = fay[i][j]; } break; case 6: n = N6e; m = M6e; for (i = 1; i <= n; i++) { f[i] = fy[i]; for (j = 1; j <= m; j++) c[i][j] = fay[i][j]; } break; case 7: n = N7e; m = M7e; for (i = 1; i <= n; i++) { f[i] = fy[i]; for (j = 1; j <= m; j++) c[i][j] = fay[i][j]; } break; default: break; } © 2008 by Taylor & Francis Group, LLC
753
754
Numerical Linear Approximation in C prn_algo_bnr ("Fuel"); prn_example_delim(); PRN ("Example #%d: Size of matrix \"c\" %d by %d\n", Iexmpl, n, m); prn_example_delim(); PRN ("L1 Solution of an Underdetermined System\n"); prn_example_delim(); PRN ("r.h.s. Vector \"f\"\n"); prn_Vector_R (f, n); PRN ("Coefficient Matrix, \"c\"\n"); prn_Matrix_R (c, n, m); rc = LA_Fuel (m, n, c, f, &irank, &iter, a, &z); if (rc >= LaRcOk) { PRN ("\n"); PRN ("Results of the Minimum Fuel Solution\n"); PRN ("L1 solution vector \"a\"\n"); prn_Vector_R (a, m); PRN ("L1 Norm of vector \"a\", ||a|| = %8.4f\n", z); PRN ("Rank of matrix \"c\" = %d, No. of Iterations " "= %d\n", irank, iter); } LA_check_rank_def (n, irank); prn_la_rc (rc); } free_Matrix_R free_Vector_R free_Vector_R free_Matrix_R free_Vector_R
(c, Ne_ROWS); (f); (a); (fay, N7e); (fy);
uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R
(b1); (b2); (b3); (b4);
}
© 2008 by Taylor & Francis Group, LLC
Chapter 20: LA_Fuel
20.6
755
LA_Fuel
/*------------------------------------------------------------------LA_Fuel --------------------------------------------------------------------This program solves an underdetermined system of consistent linear equations whose solution vector has a minimum L1 norm. The underdetermined system is of the form c*a = f "c" is a given real n by m matrix of rank k, k <= n <= m. "f" is a given real n vector. It is required to calculate the m vector "a" for this system such that the L-One norm "z" of vector "a" z = |a[1]| + |a[2]| + ... + |a[m]| is as small as possible. In control theory, this problem is known as the "Minimum Fuel" problem for linear discrete control systems. Inputs m n c f
Number Number A real A real
of columns of matrix "c" in the system c*a = f. of rows of matrix "c" in the system c*a = f. n by m matrix containing matrix "c" of system c*a = f. n vector containing the r.h.s. of the system c*a = f.
Outputs irank The calculated rank of matrix "c". iter The number of iterations or the number of times the simplex tableau is changed by a Gauss-Jordan step. a A real m vector whose elements are the solution of system c*a = f. z The minimum L-One norm of the solution vector "a". Local Data icbas An m vector containing the indices of the columns of matrix "c" that form the basis matrix. Returns one of LaRcSolutionUnique
© 2008 by Taylor & Francis Group, LLC
756
Numerical Linear Approximation in C
LaRcSolutionProbNotUnique LaRcNoFeasibleSolution LaRcInconsistentSystem LaRcErrBounds LaRcErrNullPtr LaRcErrAlloc -------------------------------------------------------------------*/ #include "LA_Prototypes.h" eLaRc LA_Fuel (int m, int int *pIter, tVector_R { tVector_I icbas = tVector_I irbas = tVector_I ibound = tVector_R zc = int tNumber_R eLaRc
n, tMatrix_R c, tVector_R f, int *pIrank, a, tNumber_R *pZ) alloc_Vector_I alloc_Vector_I alloc_Vector_I alloc_Vector_R
(n); (n); (m); (m);
itest = 0, i = 0, ij = 0, iout = 0, ivo = 0, j = 0, jin = 0, kl = 0; d = 0.0; tempRc;
/* Validation of data data before executing the algorithm */ eLaRc rc = LaRcSolutionUnique; VALIDATE_BOUNDS ((0 < n) && (n <= m) && !((n == 1) && (m == 1))); VALIDATE_PTRS (c && f && pIrank && pIter && a && pZ); VALIDATE_ALLOC (icbas && irbas && ibound && zc); /* Initialization */ kl = 1; *pZ = 0.0; *pIter = 0; *pIrank = n; LA_fuel_init (m, n, icbas, irbas, ibound, a); iout= 0; /* Part 1 of the algorithm */ for (ij = 1; ij <= n; ij++) { iout = iout + 1; tempRc = LA_fuel_part_1 (iout, &jin, &kl, m, n, c, f, icbas, irbas, pIrank, pIter); © 2008 by Taylor & Francis Group, LLC
Chapter 20: LA_Fuel if (tempRc < LaRcOk) { GOTO_CLEANUP_RC (tempRc); } } for (i = kl; i <= n; i++) { if (f[i] <= -EPS) { jin = icbas[i]; ibound[jin] = -ibound[jin]; f[i] = -f[i]; for (j = 1; j <= m; j++) { c[i][j] = -c[i][j]; } c[i][jin] = 1.0; } } /* Part 2 of the algorithm. Calculating the marginal costs */ LA_fuel_part_2 (kl, m, n, c, icbas, zc); /* Part 3 of the algorithm, */ for (ij = 1; ij <= m*m; ij++) { ivo = 0; jin = 0; /* Determine the vector that enters the basis */ LA_fuel_vent (&ivo, &jin, kl, m, n, icbas, zc); /* Calculate the results */ if (ivo == 0) { rc = LA_fuel_res (kl, n, f, icbas, ibound, a, pZ); GOTO_CLEANUP_RC (rc); } if (ivo != 1) { for (i = kl; i <= n; i++) { c[i][jin] = -c[i][jin]; © 2008 by Taylor & Francis Group, LLC
757
758
Numerical Linear Approximation in C } zc[jin] = -2.0 - zc[jin]; ibound[jin] = -ibound[jin]; } itest= 0; /* Determine the vector that leaves the basis */ LA_fuel_leav (&itest, jin, &iout, kl, n, c, f); /* No feasible solution is possible */ if (itest != 1) { GOTO_CLEANUP_RC (LaRcNoFeasibleSolution); } LA_fuel_gauss_jordn (iout, jin, kl, m, n, c, f, icbas); *pIter = *pIter + 1; d = zc[jin]; for (j = 1; j <= m; j++) { zc[j] = zc[j] - d*c[iout][j]; } }
CLEANUP: free_Vector_I free_Vector_I free_Vector_I free_Vector_R
(icbas); (irbas); (ibound); (zc);
return rc; } /*------------------------------------------------------------------Initialization of LA_fuel() -------------------------------------------------------------------*/ void LA_fuel_init (int m, int n, tVector_I icbas, tVector_I irbas, tVector_I ibound, tVector_R a) { int i, j; for (j = 1; j <= m; j++) { © 2008 by Taylor & Francis Group, LLC
Chapter 20: LA_Fuel
759
a[j] = 0.0; ibound[j] = 1; } for (i = 1; i <= n; i++) { icbas[i] = 0; irbas[i] = i; } } /*------------------------------------------------------------------Part 1 of LA_Fuel() -------------------------------------------------------------------*/ eLaRc LA_fuel_part_1 (int iout, int *pJin, int *pKl, int m, int n, tMatrix_R c, tVector_R f, tVector_I icbas, tVector_I irbas, int *pIrank, int *pIter) { int j; tNumber_R d, piv; piv = 0.0; for (j = 1; j <= m; j++) { d = fabs (c[iout][j]); if (d > piv) { *pJin = j; piv = d; } } /* Detection of rank deficiency of matrix "c" */ if (piv < EPS) { /* Inconsistent system */ if (fabs (f[iout]) > EPS) return LaRcInconsistentSystem; /* Swap two rows of matrix "c" */ swap_rows_Matrix_R (c, *pKl, iout); /* Swap two elements of vector "f" */ swap_elems_Vector_R (f, *pKl, iout); irbas[iout] = irbas[*pKl]; irbas[*pKl] = 0; © 2008 by Taylor & Francis Group, LLC
760
Numerical Linear Approximation in C
icbas[iout] = icbas[*pKl]; icbas[*pKl] = 0; *pIrank = *pIrank - 1; *pKl = *pKl + 1; } else { icbas[iout] = *pJin; /* A Gauss-Jordon elimination step */ LA_fuel_gauss_jordn (iout, *pJin, *pKl, m, n, c, f, icbas); *pIter = *pIter + 1; } return LaRcOk; } /*------------------------------------------------------------------A Gauss-Jordan elimination step in LA_Fuel() -------------------------------------------------------------------*/ void LA_fuel_gauss_jordn (int iout, int jin, int kl, int m, int n, tMatrix_R c, tVector_R f, tVector_I icbas) { tNumber_R pivot, d; int i, j; pivot = c[iout][jin]; for (j = 1; j <= m; j++) c[iout][j] = c[iout][j]/pivot; f[iout] = f[iout]/pivot; for (i = kl; i <= n; i++) { if (i != iout) { d = c[i][jin]; for (j = 1; j <= m; j++) { c[i][j] = c[i][j] - d * (c[iout][j]); } f[i] = f[i] - d * (f[iout]); } } icbas [iout] = jin; © 2008 by Taylor & Francis Group, LLC
Chapter 20: LA_Fuel
761
} /*------------------------------------------------------------------Part 2 of LA_Fuel() -------------------------------------------------------------------*/ void LA_fuel_part_2 (int kl, int m, int n, tMatrix_R c, tVector_I icbas, tVector_R zc) { int i, j, icb; tNumber_R s; for (j = 1; j <= m; j++) { zc[j] = 0.0; icb = 0; for (i = kl; i <= n; i++) if (j == icbas[i]) icb = 1; if (icb == 0) { s = -1.0; for (i = kl; i <= n; i++) { s = s + c[i][j]; } zc[j] = s; } } } /*------------------------------------------------------------------Determine the vector that enters the basis in LA_Fuel() -------------------------------------------------------------------*/ void LA_fuel_vent (int *pIvo, int *pJin, int kl, int m, int n, tVector_I icbas, tVector_R zc) { int i, j, icb; tNumber_R d, e, g; g = - 1.0/ (EPS*EPS); for (j = 1; j <= m; j++) { icb = 0; for (i = kl; i <= n; i++) { if (j == icbas[i]) icb = 1; } © 2008 by Taylor & Francis Group, LLC
762
Numerical Linear Approximation in C if (icb == 0) { d = zc[j]; if (d > EPS) { e = d; if (e > g) { *pIvo = 1; g = e; *pJin = j; } } else { e = -2.0 - d; if (e > EPS && e > g) { *pIvo = -1; g = e; *pJin = j; } } } }
} /*------------------------------------------------------------------Determine the vector that leaves the basis in LA_Fuel() -------------------------------------------------------------------*/ void LA_fuel_leav (int *pItest, int jin, int *pIout, int kl, int n, tMatrix_R c, tVector_R f) { int i; tNumber_R d, g, thmax; thmax = 1.0/ (EPS*EPS); for (i = kl; i <= n; i++) { d = c[i][jin]; if (d > EPS) { g = f[i]/d; if (g <= thmax) { © 2008 by Taylor & Francis Group, LLC
Chapter 20: LA_Fuel
763
thmax = g; *pIout = i; *pItest = 1; } } } } /*------------------------------------------------------------------Calculate the results of LA_fuel() -------------------------------------------------------------------*/ eLaRc LA_fuel_res (int kl, int n, tVector_R f, tVector_I icbas, tVector_I ibound, tVector_R a, tNumber_R *pZ) { int i, k; tNumber_R e; *pZ = 0.0; for (i = kl; i <= n; i++) { k = icbas[i]; e = f[i]; *pZ = *pZ + e; if (ibound[k] == -1) e = -e; a[k] = e; if (fabs (a[k]) < EPS) return LaRcSolutionProbNotUnique; } return LaRcSolutionUnique; }
© 2008 by Taylor & Francis Group, LLC
765
Chapter 21 Bounded and L1 Bounded Solutions of Underdetermined Linear Equations
21.1
Introduction
In the previous chapter, the L1 solution of an underdetermined system of consistent linear equations was obtained. The underdetermined system was solved subject to the constraint that the L1 norm of the solution vector be as small as possible. In this chapter, the constraints are (a) each element of the solution vector would be bounded between –1 and +1 or (b) each element of the solution vector would be bounded between –1 and +1 and that the L1 norm of the solution vector be as small as possible. Consider the underdetermined consistent system Ca = f C = (cij) is a given real n by m matrix of rank k, k ≤ n < m, f = (fi) is a given real n-vector and a = (ai) is the required m-solution vector. The first problem may be formulated as follows. Among the infinite number of solutions that the underdetermined system Ca = f has, a solution satisfying the conditions (21.1.1)
–1 ≤ ai ≤ 1, i = 1, 2, …, m
may or may not exist. We call this problem (A). The second problem may be formulated as follows. Given the system Ca = f, find vector a that satisfies (21.1.1) and such that the L1 norm of the solution vector a m
z = a
1
=
∑ i=1
© 2008 by Taylor & Francis Group, LLC
ai
766
Numerical Linear Approximation in C
be as small as possible. We call this problem (B). If instead of the constraints (21.1.1), we require the elements of vector a to satisfy the constraints bi ≤ ai ≤ ci, i = 1, 2, …, m where vectors b = (bi) and c = (ci) are given m-vectors, by substituting variables, the above constraints reduce to the constraints (21.1.1) in the new variables. See Section 7.1. Lin [8] attempted to construct boundary hyper-planes of a reachable set for problem (A). Outrata [9] stated necessary conditions for the solution of problem (A) and developed a simple algorithm to get an approximate solution to it. Weischedel [11] used the minimum energy problem (Chapter 23), as a subsidiary problem, with an iterative technique to solve problem (A). Torng [10] used the simplex method of linear programming and solved both problems (A) and (B). Bashein [5] also used an efficient simplex method to solve problem (A). We shall comment on both Torng’s and Bashein’s methods in Section 21.4. Here, we formulate both problems (A) and (B) as one linear programming problem but with two different objective functions [1]. This linear programming problem is very similar to the dual form of the linear programming problem for the discrete linear L1 approximation problem [1] given in Chapter 5. The algorithm for solving the L1 approximation problem is modified and used here to obtain the solution of either problem (A) or problem (B). Unlike the problem of the previous chapter, problems (A) and (B) may have no solution. If problem (A) has no solution, then problem (B) also has no solution. Condition (21.1.1), which requires the elements of the solution vector to be bounded, may not be achieved for some given problems. If problem (A) and problem (B) have no solution, this is determined by the algorithm. In this case, the corresponding linear programming problem has an unbounded solution. In Section 21.2, the linear programming formulation of the problem is presented. In Section 21.3, the algorithm is described and the problem of degeneracy in this algorithm is discussed in detail. Also, the uniqueness of the solution for problems (A) and (B) is outlined. In Section 21.4, numerical results and comments are given.
© 2008 by Taylor & Francis Group, LLC
Chapter 21: Bounded and L1 Bounded Solutions
21.1.1
767
Applications of the algorithms
Our algorithm for problem (B) has been applied to problems of restoration of images with missing high frequency components [3, 4]. However, the two algorithms have applications in control theory, where problems (A) and (B) are known respectively as the minimum time problem and the minimum fuel problem for discrete linear admissible control systems [6]. 21.2
Linear programming formulation of the two problems
Problem (A) is formulated as follows. Obtain a basic feasible solution to system Ca = f subject to the conditions |aj| ≤ 1, j = 1, 2, …, m. By introducing n artificial variables aSi ≥ 0, i = 1, 2, …, n, problem (A) is stated as n
∑ aSi
minimize Z =
i=1
subject to the conditions Ca = f and |aj| ≤ 1, j = 1, …, m. Problem (B) is stated as m
(21.2.1a)
minimize z = a
1
=
∑
aj
j=1
subject to Ca = f and |aj| ≤ 1, j = 1, …, m. The elements of vector a in Ca = f are unrestricted in sign. We may write (21.2.1b)
ai = vi – wi
Hence |ai| = vi + wi
(21.2.1c) and (21.2.1d)
0 ≤ vi ≤ 1 and 0 ≤ wi ≤ 1, i = 1, 2, …, m
We shall now reformulate problem (B) first. Using (21.2.1), problem (B) may be reformulated as
© 2008 by Taylor & Francis Group, LLC
768
Numerical Linear Approximation in C
minimize z =
m
m
j=1
j=1
∑ vj + ∑ wj
subject to C –C
v = f w
0 ≤ vi ≤ 1 and 0 ≤ wi ≤ 1, i = 1, 2, …, m Let us rewrite this problem as 2m
(21.2.2a)
maximize z = – ∑ b i i=1
subject to (21.2.2b)
Db = f
(21.2.2c)
0 ≤ bi ≤ 1, i = 1, 2, …, 2m
where D = [C –C] and the 2m augmented vector b = [vT wT]T. A negative sign is introduced in (21.2.2a) so that the problem is posed as a maximization problem. Reformulation of problem (A) is n
(21.2.2a')
maximize Z = – ∑ a Si i=1
subject to Db = f and 0 ≤ bi ≤ 1, i = 1, 2, …, 2m Problem (21.2.2) is very similar to the problem described by equations (5.2.3). Both problems are linear programming problems with bounded variables. The following theorem states the necessary and sufficient conditions for the existence of a solution to problem (21.2.2).
© 2008 by Taylor & Francis Group, LLC
Chapter 21: Bounded and L1 Bounded Solutions
769
Theorem 21.1 A necessary and sufficient condition for a nonzero program for problem (A) or for problem (B) to be optimal, is that m elements of the 2m-vector b each has the value 0 (lower bound), (m – n) elements of b, each has the value 0 (lower bound) or 1 (upper bound) and the other n elements of b are basic variables ([7], pp. 387-394). See also Lemma 21.3 below. Without loss of generality, assume that matrix C is of rank n. Hence, matrix D of (21.2.2b) is also of rank n. A simplex tableau of n constraints in 2m variables for problem (A) or problem (B) is constructed as if the elements of b were not bounded from above. Let B denote a basis matrix. Then vectors (yj) in the simplex tableau are given by yj = B–1Dj, j = 1, 2, …, 2m
(21.2.3)
where Dj is the jth column of matrix D. Denote the marginal costs in this problem by (zcj), j = 1, 2, …, 2m. When the artificial variables aSi of (21.2.2a') are driven out, for problem (A), the marginal costs zcj = 0, for all j. For problem (B), in part 2 of the algorithm, since from (21.2.2a) the prices are all 1’s n
(21.2.4)
zc j = –
∑ yij + 1 ,
j = 1, 2, …, 2m
i=1
where yij is the ith element of yj. 21.2.1
Properties of the matrix of constraints
Again, as in the previous chapter, the analysis for this problem is initiated by the asymmetry matrix D has. From (21.2.2b) we have Dj = Cj and Dj+m = –Cj, j = 1, 2, …, m That is (21.2.5)
Dj + Dj+m = 0, j = 1, 2, …, m
where again Dj and Cj are the jth columns of D and C respectively. This asymmetry enables us to use a condensed simplex tableau of n constraints in only m variables.
© 2008 by Taylor & Francis Group, LLC
770
Numerical Linear Approximation in C
Definition Let us define any column j, 1 ≤ j ≤ m, and the column (j + m) of matrix D as two corresponding columns. Lemma 21.1 Any two corresponding columns could not appear together in any basis. The proof is obvious. If two corresponding columns appear together in the basis matrix B, the basis matrix would be singular, since one column is the negative of the other. Lemma 21.2 yj + yj+m = 0, j = 1, 2, …, m and for their marginal costs we have the following. For problem (A), as indicated earlier, in part 2 of the algorithm zcj = zcj+m = 0, j = 1, 2, …, m and for problem (B), in part 2 of the algorithm, we have zcj + zcj+m = 2, j = 1, 2, …, m The lemma is proved from (21.2.3-5). Lemma 21.3 Let i and j be two corresponding columns, meaning |i – j| = m, 1 ≤ i, j ≤ 2m. Then if Di is in the basis, Dj is at zero level, i.e., bj = 0. Lemma 21.4 The elements of the solution vector a of the system Ca = f are obtained as follows. As in Lemma 20.4, for any element bBs, s = 1, 2, …, n, of the optimal basic solution, let j be the column in the simplex tableau associated with bBs. Then aj
= bBs, if
1 ≤ j ≤ m
aj – m = –bBs, if instead m < j ≤ 2m However, the remaining (m – n) elements of a are either at their lower bound (zero) or at their upper bound (one). The upper bound means aj = 1 for 1 ≤ j ≤ m and aj – m = –1 for m < j ≤ 2m. The L1 norm z is the sum of the absolute values of all of these elements. © 2008 by Taylor & Francis Group, LLC
Chapter 21: Bounded and L1 Bounded Solutions
21.3
771
Description of the algorithms
Since from Lemma 21.2, if any vector yj, j = 1, 2, …, m, and its marginal cost in the simplex tableau are known, the corresponding vector yj+m and its marginal cost are easily derived. Accordingly, we shall use a simplex tableau in n constraints in only m variables. We call this the condensed tableau. We start this tableau with columns 1, 2, …, m, of matrix D. The computation for either problem (A) or (B) is divided into two parts. In part 1, a numerically stable initial basic solution, feasible or not, is obtained. This is done simply by performing to the initial data a finite number of Gauss-Jordan elimination steps with partial pivoting. As in the previous chapter, the program will detect the cases when rank(C|f) = rank(C) = k < n, i.e., the consistent rank deficient case. It will also detect an inconsistent system, i.e., when rank(C|f) > rank(C). For problem (A), part 1 is equivalent to driving the artificial variables aSi of (21.2.2a') out, but having an initial basic solution that might not be feasible. At the end of part 1 for problem (A), zcj = 0, for all j, and the objective function Z = 0. The (zcj) and Z remain 0’s in part 2. For problem (B), the marginal costs (zcj), j = 1, 2, …, m, are calculated from (21.2.4). Part 2 of the algorithm is almost identical to the algorithm for the discrete linear L1 approximation problem of Chapter 5 [1]. In this algorithm certain intermediate iterations are skipped. Since in our algorithm, only one half of the simplex tableau is stored, we account for the second un-stored half from Lemma 21.2. 21.3.1
Occurrence of degeneracy
In part 2, for problem (A), the marginal costs are all 0’s. Also, for problem (B), some non-basic marginal costs may acquire zero values. As in Chapter 5, the degeneracy is resolved as follows. Every zero non-basic marginal cost zcj is replaced by either δ or –δ according to whether bj is 0 or 1 respectively. The parameter δ is the round-off error of the computer; δ is set to 10–6 and 10–16 for single- and double-precision computation respectively. However, for problem (A), another difficulty exists. This is when two non-basic corresponding columns, one at its zero level and the
© 2008 by Taylor & Francis Group, LLC
772
Numerical Linear Approximation in C
other at its upper level (= 1), are both eligible to replace a column in the basis. It should be decided which one enters the basis so that Lemma 21.3 is not violated. Let us consider Di and Dj as two non-basic corresponding columns, |i – j| = m such that bi = 0 and bj = 1. According to the previous paragraph we replace respectively zci by δ and replace zcj by –δ. Yet, from Lemma 21.2, yi = –yj and according to the selection rules for the column to enter the basis, we find that either Di or Dj may enter the basis. However, if Di (with bi = 0) enters the basis, Lemma 21.3 would be violated and the algorithm breaks down since its corresponding column Dj (with bj = 1) is at its upper bound. This lemma states that if a column is in (or enters) the basis, its corresponding column is (should be) at zero level, i.e., bj = 0. Hence, we always chose the column which is at its upper bound to enter the basis. 21.3.2
Uniqueness of the solution
For problem (A), since in the final condensed tableau, the non-basic marginal costs are all 0’s, it is almost certain that the solution vector a is not unique. Problem (B) has a unique solution a if in the final condensed tableau none of the elements of the basic solution bB is 0 (lower bound) or 1 (upper bound) and none of the non-basic marginal costs is 0 or 1. Otherwise the solution a might not be unique. 21.4
Numerical results and comments
LA_Tmfuel() implements the Minimum Time Minimum Fuel algorithm. DR_Tmfuel() demonstrates 7 test cases. These are the same test cases shown in Table 20.4.1, for the L1 solution of underdetermined linear systems. For each of the test cases, DR_Tmfuel() solves the Minimum Time case, problem (A), then the Minimum Fuel case, problem (B). Table 21.4.1 shows the results of the 7 test examples calculated in single-precision.
© 2008 by Taylor & Francis Group, LLC
Chapter 21: Bounded and L1 Bounded Solutions
773
Table 21.4.1 Minimum Time
Minimum Fuel
–––––––––––––––––––––––––––––––––––––––––––––––––––––––– Example C(n×m) Iterations z Iterations z –––––––––––––––––––––––––––––––––––––––––––––––––––––––– 1 4× 5 3 2.750 4 2.667 2 4× 8 6 2.851 7 2.055 3 2× 3 no solution no solution 4 4× 6 4 1.546 5 1.359 5 4×100 9 14.492 14 11.094 6 5×100 9 18.864 17 15.286 7 7×100 22 27.647 23 23.130 For each example, the size of matrix C, the number of iterations and the L1 norm z are given. Also, “no solution” refers to the case when the given problem has no feasible solution. Again, the obtained optimum z and the number of iterations for each example are comparable to the results of Table 20.4.1. The results of the first example in Table 21.4.1 are given here in more detail in Example 21.1. Example 21.1 Find the bounded and L1 bounded solutions of the system 2a1 2a1 2a1 –8a1
+ a2 + a2 + a2 – 4a2
– – – +
3a3 3a3 3a3 2a3
+ + + +
5a4 5a4 5a4 5a4
+ + + +
3a5 3a5 3a5 6a5
= 10 = 10 = 10 = 8
This system of 4 equations is consistent but of rank 2. Each of the second and third equations is the same as the first one. The solution for problem (A) is a = (0.25, 0, –0.5, 1, 1)T and z = 2.75 The solution for problem (B) is a = (0, 0, –0.875, 1, 0.7917)T and z = 2.6667 We observe that the L1 norm z for problem (B) is smaller than that of problem (A). This is besides the fact that each element of a is bounded between –1 and +1. © 2008 by Taylor & Francis Group, LLC
774
Numerical Linear Approximation in C
Finally, we note that the nearest method to our algorithm is that of Torng [10] and of Bashein [5]. Torng's method uses m slack variables and no intermediate simplex iterations are skipped. Bashein’s method, which applies to a problem similar to problem (A), preserves the minimum computer storage, but no intermediate iterations could be skipped.
References 1. 2.
3.
4.
5. 6. 7. 8.
Abdelmalek, N.N., Efficient solution for the discrete linear L1 approximation problem, Mathematics of Computation, 29(1975)844-850. Abdelmalek, N.N., Solutions of minimum time problem and minimum fuel problem for discrete linear admissible control systems, International Journal of Systems Science, 9(1978)849-855. Abdelmalek, N.N. and Otsu, N., Restoration of Images with missing high-frequency components by minimizing the L1 norm of the solution vector, Applied Optics, 24(1985)14151420. Abdelmalek, N.N. and Otsu, N., Speed comparison among methods for restoring signals with missing high-frequency components using two different low-pass-filter matrix dimensions, Optics Letters, 10(1985)372-374. Bashein, G., A simplex algorithm for on-line computation of time optimal controls, IEEE Transactions on Automatic Control, 16(1971)479-482. Canon, M.D., Cullum Jr., C.D. and Polak, E., Theory of Optimal Control and Mathematical Programming, McGrawHill, New York, 1970. Hadley, G., Linear Programming, Addison-Wesley, Reading, MA, 1962. Lin, J.N., Determination of reachable set for a linear discrete system, IEEE Transactions on Automatic Control, 15(1970)339-342.
© 2008 by Taylor & Francis Group, LLC
Chapter 21: Bounded and L1 Bounded Solutions
9. 10. 11.
775
Outrata, J.V., On the minimum time problem in linear discrete systems with the discrete set of admissible controls, Kybernetika, 11(1975)368-374. Torng, H.C., Optimization of discrete control systems through linear programming, Journal of Franklin Institute, 278(1964)28-44. Weischedel, H.R., A solution of the discrete minimum-time control problem, Journal of Optimization Theory and Applications, 5(1970)81-96.
© 2008 by Taylor & Francis Group, LLC
776
21.5
Numerical Linear Approximation in C
DR_Tmfuel
/*------------------------------------------------------------------DR_Tmfuel --------------------------------------------------------------------This program is a driver for the function LA_Tmfuel(), which calculates the "bounded" or the "L-One" bounded solutions of an underdetermined consistent system of linear equations. Given is the underdetermined consistent system c*a = f "c" is a given real n by m matrix of rank k, k <=n <= m. "f" is a given real n vector. It is required to calculate the m vector "a" for either of the following two problems: Problem (a): The m elements a[i] of "a" satisfy the conditions |a[i]| < 1, i = 1, 2, ..., m or Problem (b): The m elements of "a" satisfy (1) and the L1 norm z of vector "a" z = |a[1]| + |a[2]| + ... + |a[m]| is as small as possible. In control theory, problem (a) is known as the "minimum time" problem and problem (b) is known as the "minimum fuel problem" for linear discrete admissible control systems. This program has 7 examples whose results are given in the text. The program solves the 7 examples twice, once for the minimum time case and once for the minimum fuel case. -------------------------------------------------------------------*/ #include "DR_Defs.h" #include "LA_Prototypes.h" #define N1e #define M1e
4 5
© 2008 by Taylor & Francis Group, LLC
Chapter 21: DR_Tmfuel #define #define #define #define #define #define #define #define #define #define #define #define
N2e M2e N3e M3e N4e M4e N5e M5e N6e M6e N7e M7e
777
4 8 2 3 4 6 4 100 5 100 7 100
void DR_Tmfuel (void) { /*---------------------------------------Constant matrices/vectors ----------------------------------------*/ static tNumber_R b1init[N1e][M1e] = { { 2.0, 1.0, -3.0, 5.0, 3.0 }, { 2.0, 1.0, -3.0, 5.0, 3.0 }, { 2.0, 1.0, -3.0, 5.0, 3.0 }, {-8.0, -4.0, 2.0, 5.0, 6.0 } }; static tNumber_R f1[N1e+1] = { NIL, 10.0, 10.0, 10.0, 8.0 }; static tNumber_R { {-6.0, 4.0, { 7.0, -3.0, { 9.0, 7.0, { 9.0, -3.0, };
b2init[N2e][M2e] = 2.0, 9.0, -5.0, 2.0,
-8.0, 5.0, 2.0, 4.0,
5.0, -9.0, 7.0, 0.0,
-1.0, 8.0, 0.0, 3.0,
static tNumber_R f2[N2e+1] = { NIL, 5.0, 7.0, -9.0, 1.0 }; static tNumber_R b3init[N3e][M3e] = © 2008 by Taylor & Francis Group, LLC
6.0, 7.0, 1.0, 0.0,
3.0 -4.0 8.0 1.0
}, }, }, }
778
Numerical Linear Approximation in C { {-0.7182, -3.6706, -11.6961 }, {-1.7182, -4.6706, -12.6961 } }; static tNumber_R f3[N3e+1] = {NIL, 24.8218, 24.4218 }; static tNumber_R b4init[N4e][M4e] { {2.0, -1.0, -4.0, 0.0, -3.0, {2.0, -1.0, -4.0, 0.0, -3.0, {5.0, 1.0, 3.0, 1.0, -2.0, {1.0, -2.0, -1.0, -5.0, 1.0, };
= 1.0 1.0 0.0 4.0
}, }, }, }
static tNumber_R f4[N4e+1] = { NIL, 2.0, 2.0, 1.0, -4.0 }; /*---------------------------------------Variable matrices/vectors ----------------------------------------*/ tMatrix_R c = alloc_Matrix_R (N_ROWS, M_COLS); tVector_R f = alloc_Vector_R (N_ROWS); tVector_R a = alloc_Vector_R (M_COLS); tMatrix_R fay = alloc_Matrix_R (N7e, M7e); tVector_R fy = alloc_Vector_R (N7e + 1); tMatrix_R tMatrix_R tMatrix_R tMatrix_R
b1 b2 b3 b4
int int tNumber_R
irank, iter, itf; i, j, kase, m, n, Iexmpl; pi, dx, x1, x2, x3, znorm;
eLaRc
rc = LaRcOk;
pi = 4.0*atan (1.0); dx = 2.*pi/99;
= = = =
© 2008 by Taylor & Francis Group, LLC
init_Matrix_R init_Matrix_R init_Matrix_R init_Matrix_R
(&(b1init[0][0]), (&(b2init[0][0]), (&(b3init[0][0]), (&(b4init[0][0]),
N1e, N2e, N3e, N4e,
M1e); M2e); M3e); M4e);
Chapter 21: DR_Tmfuel
779
for (j = 1; j <= 100; j++) { x1 = (j - 1)* dx; x2 = x1 + x1; x3 = x2 + x1; fay[1][j] = 1.0; fay[2][j] = sin (x1); fay[3][j] = cos (x1); fay[4][j] = sin (x2); fay[5][j] = cos (x2); fay[6][j] = sin (x3); fay[7][j] = cos (x3); } for (i = 1; i <= 7; i++) { fy[i] = (10.0+15.0*fay[2][i]-7.0*fay[3][i]+9.0*fay[4][i]) * (1.0+0.01*fay[4][i]); } prn_dr_bnr ("DR_Tmfuel, Bounded & L1 Bounded Solutions of " "an Underdetermined System of Linear Equations"); for (kase = 1; kase <= 2; kase++) { if (kase == 1) itf = 0; if (kase == 2) itf = 1; for (Iexmpl = 1; Iexmpl <= 7; Iexmpl++) { switch (Iexmpl) { case 1: n = N1e; m = M1e; for (i = 1; i <= n; i++) { f[i] = f1[i]; for (j = 1; j <= m; j++) c[i][j] = b1[i][j]; } break; case 2: n = N2e; m = M2e; for (i = 1; i <= n; i++) { f[i] = f2[i]; © 2008 by Taylor & Francis Group, LLC
780
Numerical Linear Approximation in C for (j = 1; j <= m; j++) c[i][j] = b2[i][j]; } break; case 3: n = N3e; m = M3e; for (i = 1; i <= n; i++) { f[i] = f3[i]; for (j = 1; j <= m; j++) c[i][j] = b3[i][j]; } break; case 4: n = N4e; m = M4e; for (i = 1; i <= n; i++) { f[i] = f4[i]; for (j = 1; j <= m; j++) c[i][j] = b4[i][j]; } break; case 5: n = N5e; m = M5e; for (i = 1; i <= n; i++) { f[i] = fy[i]; for (j = 1; j <= m; j++) c[i][j] = fay[i][j]; } break; case 6: n = N6e; m = M6e; for (i = 1; i <= n; i++) { f[i] = fy[i]; for (j = 1; j <= m; j++) c[i][j] = fay[i][j]; } break; case 7: n = N7e; m = M7e; for (i = 1; i <= n; i++)
© 2008 by Taylor & Francis Group, LLC
Chapter 21: DR_Tmfuel
781
{ f[i] = fy[i]; for (j = 1; j <= m; j++) c[i][j] = fay[i][j]; } break; default: break; } prn_algo_bnr ("Tmfuel"); prn_example_delim(); PRN ("Example #%d: Size of matrix \"c\" %d by %d\n", Iexmpl, n, m); prn_example_delim(); if (itf != 1) PRN ("Minimum Time Problem (Bounded Solution) " " for an Underdetermined System\n"); else PRN ("Minimum Fuel Problem (L1 Bounded Solution) " " for an Underdetermined System\n"); prn_example_delim(); PRN ("r.h.s. Vector \"f\"\n"); prn_Vector_R (f, n); PRN ("Coefficient Matrix, \"c\"\n"); prn_Matrix_R (c, n, m); rc = LA_Tmfuel (itf, m, n, c, f, &irank, &iter, a, &znorm); if (rc >= LaRcOk) { PRN ("\n"); PRN ("Results of the Problem\n"); PRN ("L1 Solution vector, \"a\"\n"); prn_Vector_R (a, m); PRN ("L1 Norm of vector \"a\", ||a|| = %8.4f\n", znorm); PRN ("Rank of matrix \"c\" = %d, No. of" "iterations = %d\n", irank, iter); } LA_check_rank_def (n, irank); prn_la_rc (rc); } } © 2008 by Taylor & Francis Group, LLC
782
Numerical Linear Approximation in C
free_Matrix_R free_Vector_R free_Vector_R free_Matrix_R free_Vector_R
(c, N_ROWS); (f); (a); (fay, N7e); (fy);
uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R
(b1); (b2); (b3); (b4);
}
© 2008 by Taylor & Francis Group, LLC
Chapter 21: LA_Tmfuel
21.6
783
LA_Tmfuel
/*------------------------------------------------------------------LA_Tmfuel --------------------------------------------------------------------Given is the underdetermined consistent system of linear equations c*a = f "c" is a given real n by m matrix of rank k, k <= n <= m. "f" is a given real n vector. This program calculates the m vector "a" for either of the following two problems: Problem (a): The m elements a[i] of "a" satisfy the conditions |a[i]| < 1, i = 1, 2, ..., m or Problem (b): The m elements of "a" satisfy (1) and the L1 norm z of vector "a" z = |a[1]| + |a[2]| + ... + |a[m]| is as small as possible. In control theory, problem (a) is known as the "minimum time" problem and problem (b) is known as the "minimum fuel problem" for linear discrete admissible control systems. This program uses a dual simplex algorithm to the linear programming formulation of the given problem. In this algorithm certain simplex intermediate iterations are skipped. Inputs itf An integer set by the user. If itf = 1, the program calculates a solution to the minimum fuel problem; problem (b). If itf != 1, the program calculates a solution to the minimum time problem; problem (a). m Number of columns of matrix "c" in the system c*a = f. n Number of rows of matrix "c" in the system c*a = f. c A real n by m matrix containing matrix "c" of system c*a = f. f A real n vector containing the r.h.s. of the system c*a = f.
© 2008 by Taylor & Francis Group, LLC
784
Numerical Linear Approximation in C
Local Variables icbs An integer n vector whose elements are the indices of the columns of matrix "c" that form the columns of the basis matrix. irbs An integer n vector whose elements are the row indices of matrix "c". Outputs irank The calculated rank of matrix "c". iter The number of iterations, or the number of times the simplex tableau is changed by a Gauss-Jordan step. a A real m vector whose elements are the solution of c*a = f. znorm The L1 norm of the solution vector "a". Returns one of LaRcSolutionUnique LaRcSolutionProbNotUnique LaRcNoFeasibleSolution LaRcInconsistentSystem LaRcErrBounds LaRcErrNullPtr LaRcErrAlloc -------------------------------------------------------------------*/ #include "LA_Prototypes.h" eLaRc LA_Tmfuel (int itf, int m, int n, tMatrix_R c, tVector_R f, int *pIrank, int *pIter, tVector_R a, tNumber_R *pZnorm) { tVector_I icbs = alloc_Vector_I (n); tVector_I irbs = alloc_Vector_I (n); tVector_I ibnd = alloc_Vector_I (m + m); tVector_I kbnd = alloc_Vector_I (m + m); tVector_I ib = alloc_Vector_I (m); tVector_R th = alloc_Vector_R (m); tVector_R tu = alloc_Vector_R (m); tVector_R zc = alloc_Vector_R (m); int int tNumber_R eLaRc
ii = 0, ij = 0, ijk = 0, iout = 0, itest = 0, ivo = 0; j = 0, jin = 0, jout = 0, ktest = 0; d = 0, oneps = 0, pivot = 0, pivoto = 0, xb = 0; tempRc;
© 2008 by Taylor & Francis Group, LLC
Chapter 21: LA_Tmfuel
785
/* Validation of the data before executing the algorithm */ eLaRc rc = LaRcSolutionUnique; VALIDATE_BOUNDS ((0 < n) && (n <= m) && !((n == 1) && (m == 1))); VALIDATE_PTRS (c && f && pIrank && pIter && a && pZnorm); VALIDATE_ALLOC (icbs && irbs && ibnd && kbnd && ib && th && tu && zc); *pIter = 0; *pIrank = n; oneps = 1.0 + EPS; *pZnorm = 0.0; iout = 0; LA_tmfuel_init (m, n, icbs, irbs, ibnd, kbnd, ib, zc, a); /* Part 1 of the algorithm */ for (ij = 1; ij <= n; ij++) { iout = iout + 1; if (iout <= *pIrank) { jin = 0; tempRc = LA_tmfuel_part_1 (iout, &jin, m, c, f, icbs, irbs, pIrank, pIter); if (tempRc < LaRcOk) { GOTO_CLEANUP_RC (tempRc); } } } /* Obtain a basic feasible solution */ if (itf == 1) { /* Calculating the marginal costs */ LA_tmfuel_marg_costs (m, c, f, icbs, ibnd, kbnd, zc, pIrank); } /* Part 2 of the algorithm */ for (ijk = 1; ijk <= m; ijk++) { /* Determine the vector that leaves the basis */ ivo = 0; xb = 0.0; LA_tmfuel_vleav (pIrank, &iout, &ivo, f, &xb); © 2008 by Taylor & Francis Group, LLC
786
Numerical Linear Approximation in C
/* Calculate the results */ if (ivo == 0) { rc = LA_tmfuel_res (itf, m, f, icbs, ibnd, kbnd, ib, zc, pIrank, a, pZnorm); GOTO_CLEANUP_RC (rc); } /* Calculating of the possible ratios th[j] and tu[j] */ LA_tmfuel_th_tu (itf, m, ivo, iout, &jout, c, icbs, ibnd, kbnd, th, tu, zc); pivoto = 1.0; ktest = 1; for (ii = 1; ii <=m; ii++) { if (ktest == 0) { break; } itest = 0; /* Determine the vector that enters the basis */ LA_tmfuel_vent (m, &jin, ivo, &itest, th, tu); /* No feasible solution has been found */ if (itest == 0) { GOTO_CLEANUP_RC (LaRcNoFeasibleSolution); } if (kbnd[jin] != 1) { if (jin > m) { jin = jin - m; } if (ibnd[jin] == 1) { LA_tmfuel_swap (itf, pIrank, &jin, c, ibnd, kbnd, ib, th, tu, zc); } } © 2008 by Taylor & Francis Group, LLC
Chapter 21: LA_Tmfuel
787
/* Cascading vectors, each enters then leaves the basis*/ LA_tmfuel_cascade (iout, jin, &jout, xb, &pivot, pivoto, c, f, ibnd, pIrank); LA_tmfuel_test (iout, jin, &itest, &xb, pivot, f, icbs, th); if (itest == 0) { pivoto = pivot; jout = jin; ktest = 1; } if (itest == 1) { ktest = 0; /* A Gauss-Jordan elimination step */ LA_tmfuel_gauss_jordn (iout, jin, pIrank, m, c, f); *pIter = *pIter + 1; if (itf == 1) { d = zc[jin]; for (j = 1; j <= m; j++) { zc[j] = zc[j] - d * (c[iout][j]); } } } } } CLEANUP: free_Vector_I free_Vector_I free_Vector_I free_Vector_I free_Vector_I free_Vector_R free_Vector_R free_Vector_R
(icbs); (irbs); (ibnd); (kbnd); (ib); (th); (tu); (zc);
return rc; © 2008 by Taylor & Francis Group, LLC
788
Numerical Linear Approximation in C
} /*------------------------------------------------------------------LA_Tmfuel() initialization -------------------------------------------------------------------*/ void LA_tmfuel_init (int m, int n, tVector_I icbs, tVector_I irbs, tVector_I ibnd, tVector_I kbnd, tVector_I ib, tVector_R zc, tVector_R a) { int i, j; for (j = 1; j <= m; j++) { ib[j] = 1; a[j] = 0.0; zc[j] = 0.0; ibnd[j] = 1; kbnd[j] = 1; } for (i = 1; i <= n; i++) { icbs[i] = 0; irbs[i] = i; } } /*------------------------------------------------------------------Part 1 of LA_Tmfuel() -------------------------------------------------------------------*/ eLaRc LA_tmfuel_part_1 (int iout, int *pJin, int m, tMatrix_R c, tVector_R f, tVector_I icbs, tVector_I irbs, int *pIrank, int *pIter) { int i, j, li = 0; tNumber_R e, d, piv, pivot; piv = 0.0; for (j = 1; j <= m; j++) { for (i = iout; i <= *pIrank; i++) { d = fabs (c[i][j]); if (d > piv) { © 2008 by Taylor & Francis Group, LLC
Chapter 21: LA_Tmfuel
789
li = i; *pJin = j; piv = d; } } } if (piv < EPS) { /* Detection of rank deficiency of matrix "c" */ for (i = iout; i <= *pIrank; i++) { e = f[i]; if (fabs (e) > EPS) return LaRcInconsistentSystem; } *pIrank = iout - 1; } else { pivot = c[li][*pJin]; icbs[iout] = *pJin; if (li != iout) { /* Swap two rows of matrix "c" */ swap_rows_Matrix_R (c, li, iout); /* Swap two elements of vectors "f" and "irbs" */ swap_elems_Vector_R (f, li, iout); swap_elems_Vector_I (irbs, li, iout); } /* A Gauss-Jordan elimination step */ LA_tmfuel_gauss_jordn (iout, *pJin, pIrank, m, c, f); *pIter = *pIter + 1; } return LaRcOk; } /*------------------------------------------------------------------Calculating the marginal costs in LA_Tmfuel() -------------------------------------------------------------------*/ void LA_tmfuel_marg_costs (int m, tMatrix_R c, tVector_R f, tVector_I icbs, tVector_I ibnd, tVector_I kbnd, tVector_R zc, int *pIrank) © 2008 by Taylor & Francis Group, LLC
790
Numerical Linear Approximation in C
{ int tNumber_R
i, j, icb; d, g, s;
for (j = 1; j <= m; j++) { icb = 0; for (i = 1; i <= *pIrank; i++) { if (j == icbs[i]) icb = 1; } if (icb == 0) { s = -1.0; for (i = 1; i <= *pIrank; i++) { s = s + c[i][j]; } zc[j] = -s; d = -s; if (d < 0.0) { for (i = 1; i <= *pIrank; i++) { f[i] = f[i] - c[i][j]; } ibnd[j] = -1; } else { g = 2.0 - d; if (g < 0.0) { for (i = 1; i <= *pIrank; i++) { f[i] = f[i] + c[i][j]; } kbnd[j] = -1; } } } } } /*------------------------------------------------------------------© 2008 by Taylor & Francis Group, LLC
Chapter 21: LA_Tmfuel
791
Determine the vector that leaves the basis in LA_Tmfuel() -------------------------------------------------------------------*/ void LA_tmfuel_vleav (int *pIrank, int *pIout,int *pIvo, tVector_R f, tNumber_R *pXb) { int i; tNumber_R e, d, g, oneps; oneps = 1.0 + EPS; *pIvo = 0; g = 1.0; for (i = 1; i <= *pIrank; i++) { e = f[i]; if (e >= oneps) { d = 1.0 - e; if (d < g) { g = d; *pIvo = 1; *pIout = i; *pXb = e; } } if (e < -EPS) { d = e; if (d < g) { g = d; *pIvo = -1; *pIout = i; *pXb = e; } } } } /*------------------------------------------------------------------Calculating possible ratios th[j] and tu[j] in LA_Tmfuel() -------------------------------------------------------------------*/ void LA_tmfuel_th_tu (int itf, int m, int ivo, int iout, int *pJout, tMatrix_R c, tVector_I icbs, tVector_I ibnd, © 2008 by Taylor & Francis Group, LLC
792
Numerical Linear Approximation in C tVector_I kbnd, tVector_R th, tVector_R tu, tVector_R zc)
{ tNumber_R int
d, e, g, gg, thmax; j;
*pJout = icbs[iout]; thmax = 0.0; for (j = 1; j <= m; j++) { tu[j] = 0.0; th[j] = 0.0; e = c[iout][j]; if (fabs (e) > EPS) { d = zc[j]; g = 2.0 - d; if (itf != 1) { g = 0.0; } if (fabs (g) <= EPS) { g = PREC * (kbnd[j]); if (kbnd[j] == 1) g = g + g; } tu[j] = -g/e; g = ivo * (tu[j]); if (g < 0.0) tu[j] = 0.0; if (j != *pJout) { gg = tu[j]; if (gg < 0.0) gg = -gg; if (gg > thmax) { thmax = gg; } if (fabs (d) < EPS) { d = PREC * (ibnd[j]); if (ibnd[j] == 1) d = d + d; }
© 2008 by Taylor & Francis Group, LLC
Chapter 21: LA_Tmfuel
793
th[j] = d/e; d = ivo * (th[j]); if (d < 0.0) th[j] = 0.0; gg = th[j]; if (gg < 0.0) gg = -gg; if (gg > thmax) thmax = gg; } } } if (itf == 1 && ivo == -1) tu[*pJout] = -2.0; if (itf != 1 && ivo == -1) tu[*pJout] = -PREC; } /*------------------------------------------------------------------Determine the vector that enters the basis in LA_Tmfuel() -------------------------------------------------------------------*/ void LA_tmfuel_vent (int m, int *pJin, int ivo, int *pItest, tVector_R th, tVector_R tu) { tNumber_R e, thmax, thmin; int j; thmax = 1.0/ (EPS * EPS); thmin = -thmax; for (j = 1; j <= m; j++) { e = th[j]; if (e != 0.0) { if (ivo == 1) { if (e < thmax) { thmax = e; *pJin = j; *pItest = 1; } } if (ivo != 1) { if (e > thmin) { thmin = e; *pJin = j; *pItest = 1; © 2008 by Taylor & Francis Group, LLC
794
Numerical Linear Approximation in C } } } } for (j = 1; j <= m; j++) { e = tu[j]; if (e != 0.0) { if (ivo == 1) { if (e < thmax) { thmax = e; *pJin = j + m; *pItest = 1; } } if (ivo != 1) { if (e > thmin) { thmin = e; *pJin = j + m; *pItest = 1; } } } }
} /*------------------------------------------------------------------Swap elements and vectors in LA_Tmfuel() -------------------------------------------------------------------*/ void LA_tmfuel_swap (int itf, int *pIrank, int *pJin, tMatrix_R c, tVector_I ibnd, tVector_I kbnd, tVector_I ib, tVector_R th, tVector_R tu, tVector_R zc) { tNumber_R d; int i, k; k = ibnd[*pJin]; ibnd[*pJin] = kbnd[*pJin]; kbnd[*pJin] = k; © 2008 by Taylor & Francis Group, LLC
Chapter 21: LA_Tmfuel
795
d = th[*pJin]; th[*pJin] = tu[*pJin]; tu[*pJin] = d; ib[*pJin] = -ib[*pJin]; for (i = 1; i <= *pIrank; i++) { c[i][*pJin] = -c[i][*pJin]; } zc[*pJin] = 2.0 - zc[*pJin]; if (itf != 1) zc[*pJin] = 0.0; } /*------------------------------------------------------------------Cascades vectors, each enters and then leaves the basis in LA_Tmfuel() -------------------------------------------------------------------*/ void LA_tmfuel_cascade (int iout, int jin, int *pJout, tNumber_R xb, tNumber_R *pPivot, tNumber_R pivoto, tMatrix_R c, tVector_R f, tVector_I ibnd, int *pIrank) { tNumber_R oneps, pivotn; int i; oneps = 1.0 + EPS; *pPivot = c[iout][jin]; pivotn = *pPivot/pivoto; if (xb > oneps) { for (i = 1; i <= *pIrank; i++) { f[i] = f[i] - c[i][*pJout]; } ibnd[*pJout] = -1; if (pivotn <= 0.0) { for (i = 1; i <= *pIrank; i++) { f[i] = f[i] + c[i][jin]; } ibnd[jin] = 1; } } else { © 2008 by Taylor & Francis Group, LLC
796
Numerical Linear Approximation in C if (pivotn > 0.0) { for (i = 1; i <= *pIrank; i++) { f[i] = f[i] + c[i][jin]; } ibnd[jin] = 1; } }
} /*------------------------------------------------------------------Test if xb is not bounded in LA_Tmfuel() -------------------------------------------------------------------*/ void LA_tmfuel_test (int iout, int jin, int *pItest, tNumber_R *pXb, tNumber_R pivot, tVector_R f, tVector_I icbs, tVector_R th) { tNumber_R oneps; oneps = 1.0 + EPS; *pXb = f[iout]/ (pivot); if (*pXb <= -EPS || *pXb >= oneps) { *pItest = 0; } th[jin] = 0.0; icbs[iout] = jin; } /*------------------------------------------------------------------A Gauss-Jordan elimination step in LA_Tmfuel() -------------------------------------------------------------------*/ void LA_tmfuel_gauss_jordn (int iout, int jin, int *pIrank, int m, tMatrix_R c, tVector_R f) { int i, j; tNumber_R d, pivot; pivot = c[iout][jin]; for (j = 1; j <= m; j++) { c[iout][j] = c[iout][j]/pivot; } f[iout] = f[iout]/pivot; © 2008 by Taylor & Francis Group, LLC
Chapter 21: LA_Tmfuel
797
for (i = 1; i <= *pIrank; i++) { if (i != iout) { d = c[i][jin]; for (j = 1; j <= m; j++) { c[i][j] = c[i][j] - d * (c[iout][j]); } f[i] = f[i] - d * (f[iout]); } } } /*------------------------------------------------------------------Calculate the results of LA_Tmfuel() -------------------------------------------------------------------*/ eLaRc LA_tmfuel_res (int itf, int m, tVector_R f, tVector_I icbs, tVector_I ibnd, tVector_I kbnd, tVector_I ib, tVector_R zc, int *pIrank, tVector_R a, tNumber_R *pZnorm) { int i, j, i0, icb; tNumber_R e, d; eLaRc rc = LaRcSolutionUnique; e = 1.0 - EPS; for (j = 1; j <= m; j++) { a[j] = 0.0; i0 = 0; for (i = 1; i <= *pIrank; i++) { if (j == icbs[i]) i0 = i; } if (i0 == 0) { d = 0.0; if (ibnd[j] == -1) d = 1.0; if (kbnd[j] == -1) d = -1.0; } else { d = f[i0]; if (d < EPS || d > e) © 2008 by Taylor & Francis Group, LLC
798
Numerical Linear Approximation in C { rc = LaRcSolutionProbNotUnique; } } if (ib[j] == -1) d = -d; a[j] = d; if (d < 0.0) d = -d; *pZnorm = *pZnorm + d; } if (itf != 1) { rc = LaRcSolutionProbNotUnique; } for (j = 1; j <= m; j++) { icb = 0; for (i = 1; i <= *pIrank; i++) { if (j == icbs[i]) { icb = 1; } } if (icb == 0) { e = zc[j]; d = e; if (e < 0.0) d = -d; if (d < EPS) { rc = LaRcSolutionProbNotUnique; } } } return rc;
}
© 2008 by Taylor & Francis Group, LLC
799
Chapter 22 Chebyshev Solution of Underdetermined Linear Equations
22.1
Introduction
In Chapter 20, an algorithm was presented for the solution of underdetermined systems of consistent linear equations, where the L1 norm of the solution vector is minimum. In Chapter 21, algorithms for the bounded and for the L1 bounded solution of underdetermined systems of consistent linear equations were presented as problems (A) and (B) respectively. In problem (A), each element of the solution vector is bounded between 1 and –1, and in problem (B), each element of the solution vector is bounded between 1 and –1 and the L1 norm of the solution vector is as small as possible. This chapter is the third chapter for the solution of underdetermined consistent systems where the constraint is to have the Chebyshev or the L∞ norm of the solution vector be as small as possible. Consider the underdetermined system of linear equations Ca = f C = (cij) is a given real n by m matrix of rank k, k ≤ n < m, and f = (fi) is a given real n-vector. We are seeking the m-solution vector a = (ai) whose L∞ or Chebyshev norm (22.1.1)
||a||∞ = max(|a1|, |a2|, …, |am|)
is minimized. Based on the steepest descent method, Kolev [11] presented an iterative algorithm for solving both the L1 (Chapter 20) and L∞ (this chapter) problems for consistent underdetermined linear equations.
© 2008 by Taylor & Francis Group, LLC
800
Numerical Linear Approximation in C
His algorithm results in a procedure of theoretically infinite number of iterations. Kolev [12] then developed his algorithm such that it requires a finite number of iterations. In both of Kolev’s methods, the minimum energy (Chapter 23) solution is used to start the iterations. Based on the duality principle from functional analysis, Cadzow [3, 4] developed a number of important properties related to the solution of the consistent underdetermined systems Ca = f. As noted in Chapter 20, algorithmic procedures for both the minimum L1 solution and the minimum L∞ solution were then developed. Cadzow [5] further refined his algorithm using a column exchange method. This necessitates that matrix C satisfies the Haar condition. Next, Cadzow [6] described another algorithm that handles the non-Haar cases but for the full rank case. He also outlined a linear programming scheme for the problem of this chapter. See also Cadzow [7]. As noted in Chapter 20, Dax [8] presented a method for calculating the Lp norm of the solution vector of the consistent system Ca = f, where 1 < p < ∞. He presented a primal Newton method for p > 2 and a dual Newton method for 1 < p <2. Lucchetti and Mignanego [13] studied the variational perturbations of the minimum effort problem in control theory. They gave a sufficiency characterization for the convergence of the sequence of solutions of the perturbed problem to the original problem. Gravagne and Walker [9] explored the details of a related subject to the minimum effort problem. In their exploration, they noted that they introduced for the first time a closed-form expression for the minimum effort solution of underdetermined consistent linear systems. They also gave an extended discussion of the minimum effort solutions from a geometric point of view. In our work, this is reduced to a linear programming problem. No conditions are imposed on matrix C, such as the Haar condition or the full rank condition. Our algorithm [2] has two parts. In part 1, an initial basic feasible solution for the linear programming problem is obtained. The objective function z is calculated next. If z < 0, it is easily made positive. The marginal costs are then calculated. Part 2 consists of a slightly modified simplex method The algorithm needs minimum computer storage. The elements of the solution vector a are calculated
© 2008 by Taylor & Francis Group, LLC
Chapter 22: Chebyshev Solution of Underdetermined Linear Equations 801
from the objective function and the marginal costs in the final tableau of the programming problem. In Section 22.2, the linear programming formulation of the problem is presented. In Section 22.3, the algorithm is described, and in Section 22.4, numerical results and comments are given. 22.1.1
Applications of the algorithm
In control theory, this problem is known as the minimum effort or minimum amplitude problem for discrete linear control systems. See the references in [5, 6, 9]. 22.2
The linear programming problem
Let in (22.1.1), ||a||∞ = h > 0. Then this problem may be reduced to a linear programming problem as follows minimize h subject to –h ≤ ai ≤ h, i = 1, 2, …, m and Ca = f The last set of constraints may be replaced by f ≤ Ca ≤ f After rearranging the constraints, this problem is conventionally formulated as follows (22.2.1a)
T minimize Z = e m + 1 a h
subject to (22.2.1b)
© 2008 by Taylor & Francis Group, LLC
Ca a + he –Ca –a + he
≥ ≥ ≥ ≥
f 0 –f 0
802
Numerical Linear Approximation in C
and (22.2.1c)
h ≥ 0
In (22.2.1a), em+1 is an (m + 1)-vector each element of which is 0 except the (m + 1)th element, which is 1. Again, em+1 is the last column in an (m + 1)-unit matrix. In (22.2.1b), e is an m-vector each element of which is 1. It is more efficient to use the dual of problem (22.2.1). For a related case see ([14], p. 174). The dual formulation of (22.2.1) is (22.2.2a)
maximize z = [fT 0T –fT 0T]b = gTb
subject to C 0
T
T
T
I –C –I b = e m+1 T T T e 0 e
For convenience, we write the above equation in the form (22.2.2b)
Db = em+1
where D is the coefficient matrix on the l.h.s. of the previous equation. A simplex tableau for (m + 1) constraints in 2(n + m) variables is to be constructed for this problem. However, we show later that we need to store only n columns of the constraint matrix D. Let the basis matrix be denoted by the (m + 1) square matrix B. Then the vectors yj in the simplex tableau are given by (22.2.3)
yj = B–1Dj, j = 1, 2, …, 2(n + m)
Dj is the jth column of matrix D of (22.2.2b). Let the elements of vector g of (22.2.2a), associated with the basic variables be the (m + 1) vector gB. Then the marginal costs are (22.2.4)
zj – gj = gBTyj – gj, j = 1, 2, …, 2(n + m)
The basic solution bB and the objective function z are given by (22.2.5)
bB = B–1em+1
(22.2.6)
z = gBTbB
© 2008 by Taylor & Francis Group, LLC
Chapter 22: Chebyshev Solution of Underdetermined Linear Equations 803
22.2.1
Properties of the matrix of constraints
Again, as in the previous two chapters, as well as in other chapters in this book, the analysis in this problem is initiated by the kind of asymmetry matrix D has. The first n columns of the left half of matrix D, are the negative of the first n columns of the right half. Again the unit matrix I exists in the last m columns of the left half of D, and –I exists in the second n columns of the right half. That kind of asymmetry in D and the existence of unit matrices in matrix D will enable us to use a simplex tableau for this problem of only (m + 1) constraints in only n variables, instead of (m + 1) constraints in 2(n + m) variables. We call this the reduced tableau. This is explained in Section 22.3. Consider first the following analysis. Definition Because of the kind of asymmetry of matrix D described above, we define any column i, 1 ≤ i ≤ (n + m), and the column j = (i + (n + m)) in this matrix as two corresponding columns. We can show that any two corresponding columns should not appear together in any basis. Let z and B be respectively the optimum objective function and the basis matrix associated with the optimum solution. Then the optimum Chebyshev norm ||a||∞ = z = Z, where Z is the optimum solution of problem (22.2.1). Again, if in the dual problem (22.2.2), a column is in the basis for the optimal solution, its corresponding inequality in the primal (22.2.1b) is an equality ([10], p. 239). As a result, (aT z) is the solution of the system (22.2.7)
T B a = gB z
where gB is associated with the basic variables for the optimum solution of (22.2.2). See also Lemma 10.2. For further use, we write (22.2.7) in the form (22.2.7a)
(aT z) = gBT(B)–1
Consider an example of obtaining the minimum L∞ solution of the © 2008 by Taylor & Francis Group, LLC
804
Numerical Linear Approximation in C
underdetermined system Ca = f, for 2 equations of rank 2 in 4 unknowns. In this case (22.2.7) consists of 5 equations in 5 unknowns, the 4 elements of a and z. The equations would be the first 2 and a suitable 3 of the remaining 4 of the following system, where ρi = +1 or –1. If column i, i = 1, 2, …, n+m of the matrix of constraints D in (22.2.2b), is in the basis, ρi = +1, and ρi = –1 if instead its corresponding column is in the basis ρ 1 c 11 ρ 1 c 12 ρ 1 c 13 ρ 1 c 14 0 ρ 2 c 21 ρ 2 c 22 ρ 2 c 23 ρ 2 c 24 0 (22.2.7b)
ρ3
0
0
0
1
0
ρ4
0
0
1
0
0
ρ5
0
1
0
0
0
ρ6
1
a1 a2
a3 = a4 z
ρ1 f1 ρ2 f2 0 0 0 0
In this example, the first 2 equations are in system (22.2.7a) and thus system Ca = f is satisfied. Also, since the 3 last equations but one in (22.2.7b) are in (22.2.7a), 3 out of the 4 elements of a, each equals +z or –z. It is clear from this example that in general, (m + 1 – n) elements of a, each equals +z or –z and therefore the remaining (n – 1) elements of a in absolute value, each ≤ z. See Lemmas 22.5 and 22.6 below. Lemma 22.1 In any stage of the computation, we have the following relations between the corresponding columns i and j, 1 ≤ i ≤ (n+m) yi + yj = 0, i = 1, 2, …, n
(22.2.8a) and (22.2.8b)
yi + yj = 2bB, i = n+1, n+2, …, n+m
Also, for the marginal costs (22.2.9a)
(zi – fi) + (zj – fj) = 0, i = 1, 2, …, n
and (22.2.9b)
(zi – fi) + (zj – fj) = 2z, i = n+1, n+2, …, n+m
© 2008 by Taylor & Francis Group, LLC
Chapter 22: Chebyshev Solution of Underdetermined Linear Equations 805
Lemma 22.2 (a)
(b)
For each basic solution, feasible or not, there correspond two bases B(1) and B(2), each of which determines the same basic solution. Every column in one of the bases has its corresponding column in the other basis, arranged in the same order. The two values of z are equal in magnitude but opposite in sign. This lemma corresponds to Lemma 10.7.
Lemma 22.3 Consider the two bases B(1) and B(2), defined in the previous Lemma, and let us use (22.2.3-5) to construct two simplex tableaux T(1) and T(2) that correspond respectively to B(1) and B(2). Let i be the corresponding column to column j, where 1 ≤ j ≤ 2(n + m). Then we have yi in T(1) = yj in T(2). This lemma corresponds to Lemma 10.8. Lemma 22.4 At any stage of the computation, the (m + 1)th column of matrix B equals the basic solution bB. This is obvious from (22.2.5) since em+1 is the (m + 1)th column in an (m + 1)-unit matrix. Assume that we have obtained an optimum basic feasible solution to the linear programming problem (22.2.2). Then it is concluded earlier that (m + 1 – n) elements of a, each equals +z or –z and the remaining (n – 1) elements of a in absolute value, each ≤ z. –1
Note 1 If rank(C|f) = rank(C) = k < n, then (m + 1 – k) elements of a, each equals +z or –z and the remaining (k – 1) elements of a in absolute value, each ≤ z. Lemma 22.5 If column j, (n + 1) ≤ j ≤ (n + m), of the matrix of constraints of (22.2.2b) is in the final basis, then aj–n = –z. But if instead, the corresponding column of j is in the final basis, aj–n = z. Proof: The proof of the lemma is established from the structure of equation (22.2.7b). © 2008 by Taylor & Francis Group, LLC
806
Numerical Linear Approximation in C
Lemma 22.6 The (n – 1) elements of a whose absolute value ≤ z, each is calculated from the marginal cost of a non-basic column in the final (condensed) simplex tableau. These non-basic columns are not corresponding columns of any column in the basis. Let column j, (n + 1) ≤ j ≤ (n + m) of the matrix of constraints be such non-basic column. Then (22.2.10a)
aj–n
= (zj – gj) – z,
(22.2.10b)
aj–2n–m = z – (zj – gj),
(n + 1) ≤ j ≤ (n + m) (2n + m + 1) ≤ j ≤ 2(n + m)
Proof: Consider the case (n + 1) ≤ j ≤ (n + m). From (22.2.2b) and (22.2.3-4) T
( zj – gj ) = zj = gB B
–1
uj – n 1
where uj is the jth column of an m-unit matrix. Hence (zj – gj) = gBT[(B)j–n–1 + (B)m+1–1] where (B)j–n–1 and (B)m+1–1are respectively the (j – n)th and the (m + 1)th column of (B)–1. Finally, from (22.2.7) and (22.2.6) and Lemma 22.5, (22.2.10a) is proved. In the same way, (22.2.10b) is proved. Lemma 22.7 If the set of equations Ca = f is inconsistent, i.e., rank(C|f) > rank(C), then the solution of the linear programming problem (22.2.2) would be unbounded. 22.3
Description of the algorithm
Lemma 22.1 relates the columns yi, 1 ≤ i ≤ (n + m) in the simplex tableau and their corresponding column yj, j = i + (n + m). The same is true for their marginal costs. If one column and its marginal cost is known, the corresponding column and its marginal cost is easily derived. We start by constructing a simplex tableau for problem
© 2008 by Taylor & Francis Group, LLC
Chapter 22: Chebyshev Solution of Underdetermined Linear Equations 807
(22.2.2), for (m + 1) constraints in only (n + m) variables. Obviously, let these be the first (n + m) elements of the 2(n + m)-vector b. This tableau is the condensed tableaux. In part 1 of the algorithm, an initial basic feasible solution to the programming problem (22.2.2) is obtained. We also calculate the initial objective function z (≥ 0) and the initial marginal costs (zi – gi). We take advantage of the existence of the m-unit sub-matrices I in matrix D in (22.2.2b) in obtaining an initial basic feasible solution, and we need no artificial variables. The m columns (n + 1), (n + 2), …, (n + m), in matrix D, each is a column in an m-unit matrix augmented by a 1 as the (m + 1)th element. We chose these m columns, or their corresponding columns, to form the first m columns in the initial basis matrix B. This is simply done by performing m Gauss-Jordan eliminations for each of these columns, which consists of one step only. This is the step needed to eliminate the 1 in the (m + 1)th position of each column. The choice between column (n + i), i = 1, 2 …, m, or its corresponding column to form the first m columns in the initial basis matrix B is given next. Consider any one of the first n columns in matrix D. Denote this column by X. Consider element i, i = 1, 2, …, m, in succession of column X. If in X, element i is ≤ 0, we chose column (n + i) in matrix D to form the ith column of B. If element i in X is > 0, we chose instead the corresponding column to column (n + i) to form the ith column of B. When all these m columns enter the basis, the first m elements of X, each would keep its value, with a negative sign and the (m + 1)th element of X would be > 0. In fact, this (m + 1)th element would equal the sum of the absolute values of the first m elements in X = 1. Column X will now be chosen to be the (m + 1)th column of B. The process described so far, guarantees that when column X enters the basis as the (m + 1)th column of B, the initial basic solution would be feasible. That is, with all the element of bB ≥ 0. The objective function z is then calculated from (22.2.6). If z < 0, we use Lemmas 22.2 and 22.3 and replace the basis matrix by its corresponding one. We also replace the columns of the condensed simplex tableau by their corresponding columns. In effect, we keep the simplex tableau unchanged, except for the f values and z. Such parameters have their signs reversed. See [1] and also Chapter 10. The marginal costs (zj – gj) are then calculated from (22.2.4). We now
© 2008 by Taylor & Francis Group, LLC
808
Numerical Linear Approximation in C
have an initial basic feasible solution and z ≥ 0.This ends part 1 of the algorithm. Part 2 of the algorithm is the ordinary simplex algorithm. However, the column to enter the basis is that which has the most negative marginal cost among the non-basic columns in the current tableau and their corresponding columns. Lemma 22.1 is used for calculating the marginal costs of the corresponding columns. Finally, the elements of the solution vector a to the given problem Ca = f are calculated from Lemmas 22.5 and 22.6. The steps described here are explained by the following detailed numerical example. Example 22.1 Obtain the minimum L∞ solution of the underdetermined system of linear equations. 7a1 – 4a2 + 5a3 + 3a4 = –30 –2a1 + a2 + 5a3 + 4a4 = 15 Here, matrix C is a 2 by 4 matrix of rank 2 and the system is consistent. This example is a hybrid of example 1 in ([2], p. 65). The Initial Data and the condensed simplex tableaux are shown. Again Dj, 1 ≤ j ≤ 2(n + m) is the jth column of the matrix of constraints D in (22.2.2b). The pivot elements are shown between brackets, as first seen in Tableau 22.3.1**. Initial Data g –30 15 0 0 0 0 D1 D2 D3 D4 D5 D6 bB ––––––––––––––––––––––––––––––––––––––––– 0 7 –2 1 0 0 0 0 –4 1 0 1 0 0 0 5 5 0 0 1 0 0 3 4 0 0 0 1 1 0 0 1 1 1 1 ––––––––––––––––––––––––––––––––––––––––– Let column X be the first column in the initial data, column D1. We see that elements i = 1, 3, and 4 of D1 are > 0. Hence, we chose columns [(i + n) + (n + m)], i.e, columns 9, 11, and 12 to be columns © 2008 by Taylor & Francis Group, LLC
Chapter 22: Chebyshev Solution of Underdetermined Linear Equations 809
1, 3 and 4 of the basis matrix B. Again since the second element of D1 is < 0, we chose column (i + n), i.e., column 4 to form column 2 of the basis matrix B. The basis (m + 1)-matrix B is now formed from columns 9, 4, 11, 12 and 1 of the matrix of constraints D. As indicated above, we perform 4 Gauss-Jordan steps to eliminate the 1 in the 5th position of each of D3, D4, D5 and D6. We then perform another Gauss-Jordan step to make column D1 the fifth column in the basis matrix B. This gives Tableau 22.3.1, where we also calculate the objective function z from (22.2.5-6). Tableau 22.3.1 (part 1) g –30 15 0 0 0 0 bB D1 D2 D9 D4 D11 D12 ––––––––––––––––––––––––––––––––––––––––– 0.368 0 4.21 1 0 0 0 0.211 0 2.26 0 1 0 0 0.263 0 –3.42 0 0 1 0 0.158 0 –3.05 0 0 0 1 0.053 1 0.316 0 0 0 0 ––––––––––––––––––––––––––––––––––––––––– z = –1.59 Tableau 22.3.1* g 30 –15 0 0 0 0 bB D7 D8 D3 D10 D5 D6 ––––––––––––––––––––––––––––––––––––––––– 0.368 0 4.21 1 0 0 0 0.211 0 2.26 0 1 0 0 0.263 0 –3.42 0 0 1 0 0.158 0 –3.05 0 0 0 1 0.053 1 0.316 0 0 0 0 ––––––––––––––––––––––––––––––––––––––––– z = 1.59 0 24.47 0 0 0 0 In Tableau 22.3.1, however, the initial basic solution is feasible but z < 0. Hence, we make use of Lemmas 22.2 and 22.3 and replace Tableau 22.3.1 by Tableau 22.3.1*. This tableau is the same as
© 2008 by Taylor & Francis Group, LLC
810
Numerical Linear Approximation in C
Tableau 22.3.1, except for the f values and z; their signs are reversed. Also, the columns in Tableau 22.3.1* are the corresponding columns of Tableau 22.3.1. This ends part 1 of the algorithm. In Tableau 22.3.1*, D2 (the corresponding column to D8) has the most negative marginal cost. Hence, we replace y8 by y2. From (22.2.8a) y2 = –y8, and from (22.2.9a), (z2 – g2) = –(z8 – g8). This gives Tableau 22.3.1**. In Tableau 22.3.1**, y2 replaces y6 in the basis. This gives Tableau 22.3.2. Tableau 22.3.1** (part 2) g 30 15 0 0 0 0 bB D7 D2 D3 D10 D5 D6 ––––––––––––––––––––––––––––––––––––––––– 0.368 0 –4.21 1 0 0 0 0.211 0 –2.26 0 1 0 0 0.263 0 3.42 0 0 1 0 0.158 0 (3.05) 0 0 0 1 0.053 1 –0.316 0 0 0 0 ––––––––––––––––––––––––––––––––––––––––– z = 1.59 0 –24.47 0 0 0 0 Tableau 22.3.2 g 30 15 0 0 0 0 bB D7 D2 D3 D10 D5 D6 ––––––––––––––––––––––––––––––––––––––––– 0.586 0 0 1 0 0 1.379 0.328 0 0 0 1 0 0.741 0.086 0 0 0 0 1 –1.121 0.052 0 1 0 0 0 0.328 0.069 1 0 0 0 0 0.103 ––––––––––––––––––––––––––––––––––––––––– z = 2.845 0 0 0 0 0 8.017 In Tableau 22.3.2, y12 has the most negative marginal cost and it replaces its corresponding column y6. This gives Tableau 22.3.2*. In Tableau 22.3.2*, y12 has the most negative marginal cost. It replaces y5 in the basis. This gives Tableau 22.3.3, which gives the optimal objective function z = 3. From Lemmas 22.5 and 22.6, the optimum
© 2008 by Taylor & Francis Group, LLC
Chapter 22: Chebyshev Solution of Underdetermined Linear Equations 811
solution z = 3 and a = (–3, 3, –1.2, 3)T. Tableau 22.3.2* g 30 15 0 0 0 0 bB D1 D2 D3 D10 D5 D12 ––––––––––––––––––––––––––––––––––––––––– 0.586 0 0 1 0 0 –0.207 0.328 0 0 0 1 0 –0.086 0.086 0 0 0 0 1 (1.293) 0.052 0 1 0 0 0 –0.224 0.069 1 0 0 0 0 0.035 ––––––––––––––––––––––––––––––––––––––––– z = 2.845 0 0 0 0 0 –2.328 Tableau 22.3.3 g 30 15 0 0 0 0 bB D7 D2 D3 D10 D5 D12 ––––––––––––––––––––––––––––––––––––––––– 0.6 0 0 1 0 0.16 0 0.333 0 0 0 1 0.067 0 0.067 0 0 0 0 0.773 1 0.067 0 1 0 0 0.173 0 0.067 1 0 0 0 –0.027 0 ––––––––––––––––––––––––––––––––––––––––– z=3 0 0 0 0 1.8 0 22.3.1
The reduced tableaux
A careful look to the simplex Tableaux in this solved example, we see that 5 out of 6, i.e., (m + 1) out of (n + m) columns in the condensed tableaux are actually 5 columns in a 6-unit matrix. Such columns need not be accounted for nor they need to be stored. The condensed simplex tableaux may be condensed more. We denote such tableaux as the reduced tableaux. In the reduced tableaux we calculate only bB and (n – 1) columns and their marginal costs. These (n – 1) columns are the non-basic columns in the condensed tableaux. An index indicator is used for the columns that we actually use in the tableaux. © 2008 by Taylor & Francis Group, LLC
812
22.4
Numerical Linear Approximation in C
Numerical results and comments
LA_Effort() implements this Minimum Effort algorithm, and DR_Effort() demonstrates 7 test cases. Table 22.4.1 shows the results calculated in single-precision. These are the same 7 examples that were solved in Chapters 20 and 21. Table 22.4.1 –––––––––––––––––––––––––––––––––––––––––––––––––––––––– Example C(n×m) Iterations z –––––––––––––––––––––––––––––––––––––––––––––––––––––––– 1 4×5 3 0.906 2 4×8 8 0.537 3 3×5 4 3.0 4 4×6 7 0.372 5 4×100 49 0.244 6 5×100 51 0.303 7 7×100 60 0.508 For each example, the size of matrix C, the number of iterations and the optimum L∞ norm z are given. This method can be characterized by the following features. All the calculations are done in the reduced (not even in the condensed) simplex tableaux. An initial basic feasible solution for the linear programming problem as well as the initial objective function z (≥ 0) are obtained with minimum effort. We do not need to calculate the marginal costs until the end of part 1 of the algorithm. The inverse of the basis matrix, B–1, is never calculated. The elements of the solution vector a of the given system Ca = f are calculated from z and the marginal costs of the final reduced tableau. Rank(C) is known at the end of the solution. See example 1 in ([2], p. 65). The closest technique to our algorithm is that of Cadzow [6]. Using our notation, we compared the number of arithmetic operations of Cadzow’s method with that of ours. The number of multiplications/divisions (m/d) per iteration in [6] is > (3nm + m). In our method, the number of (m/d) required per iteration, i.e., to change a simplex tableau, is n(m + 3). This is about 1/3 of the iterations in [6]. Again, the numerical example solved by Cadzow ([5], p. 616) was © 2008 by Taylor & Francis Group, LLC
Chapter 22: Chebyshev Solution of Underdetermined Linear Equations 813
solved by our program and the execution time was half the time given by Cadzow, who used a faster computer.
References 1. 2. 3. 4. 5. 6.
7. 8. 9.
10. 11.
Abdelmalek, N.N., Chebyshev solution of overdetermined systems of linear equations, BIT, 15(1975)117-129. Abdelmalek, N.N., Minimum L∞ solution of underdetermined systems of linear equations, Journal of Approximation Theory, 20(1977)57-69. Cadzow, J.A., Algorithm for the minimum-effort problem, IEEE Transactions on Automatic Control, 16(1971)60-63. Cadzow, J.A., Functional analysis and the optimal control of linear discrete systems, International Journal of Control, 17(1973)481-495. Cadzow, J.A., A finite algorithm for the minimum l∞ solution to a system of consistent linear equations, SIAM Journal on Numerical Analysis, 10(1973)607-617. Cadzow, J.A., An efficient algorithmic procedure for obtaining a minimum l∞-norm solution to a system of consistent linear equations, SIAM Journal on Numerical Analysis, 11(1974)1151-1165. Cadzow, J.A., Minimum-amplitude control of linear discrete systems, International Journal of Control, 19(1974)765-780. Dax, A., Methods for calculating lp-minimum norm solutions of consistent linear systems, Journal of Optimization Theory and Applications, 83(1994)333-354. Gravagne, I.A. and Walker, I.D., On the structure of minimum effort solutions with application to kinematic redundancy resolution, IEEE Transactions on Robotics and Automation, 16(2000)855-863. Hadley, G., Linear Programming, Addison-Wesley, Reading, MA, 1962. Kolev, L.V., Iterative algorithm for the minimum fuel and minimum amplitude problems, International Journal of Control, 21(1975)779-784.
© 2008 by Taylor & Francis Group, LLC
814
Numerical Linear Approximation in C
12.
Kolev, L.V., Algorithm of finite number of iterations for the minimum fuel and minimum amplitude control problems, International Journal of Control, 22(1975)97-102. Lucchetti, R. and Mignanego, F., Variational perturbation of the minimum effort problem, Journal of Optimization Theory and Applications, 30(1980)485-499. Osborne, M.R. and Watson, G.A., On the best linear Chebyshev approximation. Computer Journal, 10(1967)172177.
13. 14.
© 2008 by Taylor & Francis Group, LLC
Chapter 22: DR_Effort
22.5
815
DR_Effort
/*------------------------------------------------------------------DR_Effort --------------------------------------------------------------------This program is a driver for the function LA_Effort(), which calculates the Chebyshev solution of an underdetermined system of consistent linear equations. Given is the underdetermined consistent system c*a = f "c" is a given real n by m matrix of rank k, k <= n <= m. "f" is a given real n vector. It is required to calculate the m vector "a" for this system, such that the Chebyshev norm z of "a" z = max|a[j]|,
j = 1, 2, ..., m
is as small as possible. In control theory, this problem is known as the "Minimum Effort" or "Minimum Amplitude" problem for discrete linear control systems. This program contains 7 examples whose results appear in the text. -------------------------------------------------------------------*/ #include "DR_Defs.h" #include "LA_Prototypes.h" #define #define #define #define #define #define #define #define #define #define #define #define
Nf_ROWS Mf_COLS N1f M1f N2f M2f N3f M3f N4f M4f N5f M5f
25 151 4 5 4 8 3 5 4 6 4 100
© 2008 by Taylor & Francis Group, LLC
816
Numerical Linear Approximation in C
#define #define #define #define
N6f M6f N7f M7f
5 100 7 100
void DR_Effort (void) { /*---------------------------------------Constant matrices/vectors ----------------------------------------*/ static tNumber_R b1init[N1f][M1f] = { { 2.0, 1.0, -3.0, 5.0, 3.0 }, { 2.0, 1.0, -3.0, 5.0, 3.0 }, { 2.0, 1.0, -3.0, 5.0, 3.0 }, {-8.0, -4.0, 2.0, 5.0, 6.0 } }; static tNumber_R f1[N1f+1] = { NIL, 10.0, 10.0, 10.0, 8.0 }; static tNumber_R { {-6.0, 4.0, { 7.0, -3.0, { 9.0, 7.0, { 9.0, -3.0, };
b2init[N2f][M2f] = 2.0, 9.0, -5.0, 2.0,
-8.0, 5.0, 2.0, 4.0,
5.0, -9.0, 7.0, 0.0,
-1.0, 8.0, 0.0, 3.0,
static tNumber_R f2[N2f+1] = { NIL, 5.0, 7.0, -9.0, 1.0 }; static tNumber_R { { 7.0, -4.0, {-2.0, 1.0, { 5.0, -3.0, };
b3init[N3f][M3f] = 5.0, 3.0, 1.0}, 5.0, 4.0, 1.0}, 10.0, 7.0, 2.0}
static tNumber_R f3[N3f+1] = { NIL, © 2008 by Taylor & Francis Group, LLC
6.0, 7.0, 1.0, 0.0,
3.0 -4.0 8.0 1.0
}, }, }, }
Chapter 22: DR_Effort
817
-30.0, 15.0, -15 }; static tNumber_R b4init[N4f][M4f] { {2.0, -1.0, -4.0, 0.0, -3.0, {2.0, -1.0, -4.0, 0.0, -3.0, {5.0, 1.0, 3.0, 1.0, -2.0, {1.0, -2.0, -1.0, -5.0, 1.0, };
= 1.0 1.0 0.0 4.0
}, }, }, }
static tNumber_R f4[N4f+1] = { NIL, 2.0, 2.0, 1.0, -4.0 }; static tNumber_R b5init[N5f][M5f] { {2.0, -1.0, -4.0, 0.0, -3.0, {2.0, -1.0, -4.0, 0.0, -3.0, {5.0, 1.0, 3.0, 1.0, -2.0, {1.0, -2.0, -1.0, -5.0, 1.0, };
= 1.0 1.0 0.0 4.0
}, }, }, }
static tNumber_R f5[N5f+1] = { NIL, 2.0, 2.0, 1.0, -4.0 }; /*---------------------------------------Variable matrices/vectors ----------------------------------------*/ tMatrix_R ct = alloc_Matrix_R (Mf_COLS, Nf_ROWS); tVector_R f = alloc_Vector_R (Nf_ROWS); tVector_R a = alloc_Vector_R (Mf_COLS); tMatrix_R fay = alloc_Matrix_R (N7f, M7f); tVector_R fy = alloc_Vector_R (N7f + 1); tMatrix_R tMatrix_R tMatrix_R tMatrix_R
b1 b2 b3 b4
= = = =
tNumber_R int
pi, dx, x1, x2, x3, z; irank, iter;
© 2008 by Taylor & Francis Group, LLC
init_Matrix_R init_Matrix_R init_Matrix_R init_Matrix_R
(&(b1init[0][0]), (&(b2init[0][0]), (&(b3init[0][0]), (&(b4init[0][0]),
N1f, N2f, N3f, N4f,
M1f); M2f); M3f); M4f);
818
Numerical Linear Approximation in C int
i, j, m, n, Iexmpl;
eLaRc
rc = LaRcOk;
pi = 4.0*atan (1.0); dx = 2.*pi/99; for (j = 1; j { x1 = (j x2 = x1 + x3 = x2 + fay[1][j] fay[2][j] fay[3][j] fay[4][j] fay[5][j] fay[6][j] fay[7][j] }
<= 100; j++) 1)* dx; x1; x1; = 1.0; = sin (x1); = cos (x1); = sin (x2); = cos (x2); = sin (x3); = cos (x3);
for (i = 1; i <= 7; i++) { fy[i] = (10.0+15.0*fay[2][i]-7.0*fay[3][i]+ 9.0*fay[4][i]) * (1.0 + 0.01*fay[4][i]); } prn_dr_bnr ("DR_Effort, " "Chebyshev Solution of an Underdetermined System"); for (Iexmpl = 1; Iexmpl <= 7; Iexmpl++) { switch (Iexmpl) { case 1: n = N1f; m = M1f; for (i = 1; i <= n; i++) { f[i] = f1[i]; for (j = 1; j <= m; j++) ct[j][i] = b1[i][j]; } break;
© 2008 by Taylor & Francis Group, LLC
Chapter 22: DR_Effort case 2: n = N2f; m = M2f; for (i = 1; i <= n; i++) { f[i] = f2[i]; for (j = 1; j <= m; j++) ct[j][i] = b2[i][j]; } break; case 3: n = N3f; m = M3f; for (i = 1; i <= n; i++) { f[i] = f3[i]; for (j = 1; j <= m; j++) ct[j][i] = b3[i][j]; } break; case 4: n = N4f; m = M4f; for (i = 1; i <= n; i++) { f[i] = f4[i]; for (j = 1; j <= m; j++) ct[j][i] = b4[i][j]; } break; case 5: n = N5f; m = M5f; for (i = 1; i <= n; i++) { f[i] = fy[i]; for (j = 1; j <= m; j++) ct[j][i] = fay[i][j]; } break; case 6: n = N6f; m = M6f; for (i = 1; i <= n; i++) © 2008 by Taylor & Francis Group, LLC
819
820
Numerical Linear Approximation in C { f[i] = fy[i]; for (j = 1; j <= m; j++) ct[j][i] = fay[i][j]; } break; case 7: n = N7f; m = M7f; for (i = 1; i <= n; i++) { f[i] = fy[i]; for (j = 1; j <= m; j++) ct[j][i] = fay[i][j]; } break; default: break; } prn_algo_bnr ("Effort"); prn_example_delim(); PRN ("Example #%d: Size of matrix \"c\" %d by %d\n", Iexmpl, n, m); prn_example_delim(); PRN ("Chebyshev Solution of an Underdetermined System\n"); prn_example_delim(); PRN ("r.h.s. Vector \"f\"\n"); prn_Vector_R (f, n); PRN ("Transpose of Coefficient Matrix, \"ct\"\n"); prn_Matrix_R (ct, m, n); rc = LA_Effort (m, n, ct, f, &irank, &iter, a, &z); if (rc >= LaRcOk) { PRN ("\n"); PRN ("Results of the PRN ("Minimum Effort prn_Vector_R (a, m); PRN ("Chebyshev norm "= %8.4f\n", z); PRN ("Rank of matrix "= %d\n", irank,
© 2008 by Taylor & Francis Group, LLC
Minimum Effort Solution\n"); solution vector \"a\"\n"); of vector \"a\", ||a|| " \"c\" = %d, No. of Iterations " iter);
Chapter 22: DR_Effort } LA_check_rank_def (n, irank); prn_la_rc (rc); } free_Matrix_R free_Vector_R free_Vector_R free_Matrix_R free_Vector_R
(ct, Mf_COLS); (f); (a); (fay, N7f); (fy);
uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R uninit_Matrix_R
(b1); (b2); (b3); (b4);
}
© 2008 by Taylor & Francis Group, LLC
821
822
22.6
Numerical Linear Approximation in C
LA_Effort
/*------------------------------------------------------------------LA_Effort --------------------------------------------------------------------This program solves an underdetermined system of consistent linear equations whose solution vector has a minimum Chebyshev norm. The underdetermined system is of the form c*a = f "c" is a given real n by m matrix of rank k, k <= n <= m. "f" is a given real n vector. It is required to calculate the m vector "a" for this system such that the Chebyshev norm "z" of vector "a" z = max|a[j]|,
j = 1, 2, ..., m
is as small as possible. In control theory, this problem is known as the "Minimum Effort" or "Minimum Amplitude" problem for discrete linear control systems. Inputs m Number of columns of matrix "c" in the system c*a = f. n Number of rows of matrix "c" in the system c*a = f. ct A real (m + 1) by n matrix, Its first m row and its n columns contain the transpose of matrix "c" of the system c*a = f. f A real n vector containing the r.h.s. of the system c*a = f. Outputs irank The calculated rank of matrix "c". iter The number of iterations or the number of times the simplex tableau is changed by a Gauss-Jordan step. a A real m vector whose elements are the solution of system c*a = f. z The minimum Chebyshev norm of the solution vector "a". Returns one of LaRcSolutionUnique LaRcSolutionProbNotUnique LaRcNoFeasibleSolution LaRcErrBounds
© 2008 by Taylor & Francis Group, LLC
Chapter 22: LA_Effort
823
LaRcErrNullPtr LaRcErrAlloc -------------------------------------------------------------------*/ #include "LA_Prototypes.h" eLaRc LA_Effort (int m, int n, tMatrix_R ct, tVector_R f, int *pIrank, int *pIter, tVector_R a, tNumber_R *pZ) { tVector_I ic = alloc_Vector_I (m + 1); tVector_I ip = alloc_Vector_I (n); tVector_I ib = alloc_Vector_I (m + n); tVector_I nb = alloc_Vector_I (n); tVector_R zc = alloc_Vector_R (n); int int tNumber_R
i = 0, ii = 0, ijk = 0, iout = 0, itest = 0, ivo = 0; j = 0, l, jc = 0, jin = 0, m1 = 0, nm = 0; bignum = 0.0, d = 0.0, pivot = 0.0;
/* Validation of the data before executing the algorithm */ eLaRc rc = LaRcSolutionUnique; VALIDATE_BOUNDS ((0 < n) && (n < m)); VALIDATE_PTRS (ct && f && pIrank && pIter && a && pZ); VALIDATE_ALLOC (ic && ip && ib && nb && zc); /* Part 1 of the algorithm, */ bignum = 1.0 / (EPS*EPS); *pIter = 0; *pIrank = n; m1 = m + 1; nm = n + m; *pZ = 0.0; /* Program initialization */ LA_effort_init (m, n, ct, ib, ic, ip, nb); iout = m1; jin = 1; ic[iout] = 0; jc = nb[jin]; ii = 1; /* A Gauss-Jordan elimination step */ pivot = ct[iout][jin]; LA_effort_gauss_jordn (iout, jin, m, n, pivot, ct, ic, nb); *pIter = *pIter + 1; © 2008 by Taylor & Francis Group, LLC
824
Numerical Linear Approximation in C
/* Part 2 of the algorithm, */ LA_effort_marg_costs (m, n, ct, f, ib, zc); /* Calculate the results */ if (n == 1) { rc = LA_effort_res (m, n, ct, ib, ic, ip, nb, zc, pIrank, a, pZ); GOTO_CLEANUP_RC (rc); } /* Part 3 of the algorithm */ for (ijk = 1; ijk <= m*m; ijk++) { /* Determine the vector that enters the basis */ ivo = 0; LA_effort_vent (&ivo, &jin, n, nb, zc, pZ); /* Calculate the results */ if (ivo == 0) { rc = LA_effort_res (m, n, ct, ib, ic, ip, nb, zc, pIrank, a, pZ); GOTO_CLEANUP_RC (rc); } jc = nb[jin]; if (ivo != 1) { ib[jc] = -ib[jc]; if (jc > n) { for (i = 1; i <= m1; i++) { ct[i][jin] = ct[i][1] + ct[i][1] - ct[i][jin]; } zc[jin] = zc[1] + zc[1] - zc[jin]; } else { for (i = 1; i <= m1; i++) { ct[i][jin] = -ct[i][jin]; } zc[jin] = -zc[jin]; © 2008 by Taylor & Francis Group, LLC
Chapter 22: LA_Effort
825
f[jc] = -f[jc]; } } itest = 0; /* Determine the vector that leaves the basis */ LA_effort_vleav (&itest, jin, &iout, m, ct); /* No feasible solution */ if (itest != 1) { GOTO_CLEANUP_RC (LaRcNoFeasibleSolution); } /* A Gauss-Jordan elimination step */ pivot = ct[iout][jin]; LA_effort_gauss_jordn (iout, jin, m, n, pivot, ct, ic, nb); *pIter = *pIter + 1; d = zc[jin]; for (j = 1; j <= n; j++) { if (j != jin) { zc[j] = zc[j] - d * (ct[iout][j]); } } zc[jin] = -zc[jin]/pivot; l = ic[m1]; if (l != ii) { swap_elems_Vector_I (ip, l, ii); ii = l; } } CLEANUP: free_Vector_I free_Vector_I free_Vector_I free_Vector_I free_Vector_R
(ic); (ip); (ib); (nb); (zc);
return rc; © 2008 by Taylor & Francis Group, LLC
826
Numerical Linear Approximation in C
} /*------------------------------------------------------------------A Gauss-Jordan elimination step in LA_Effort() -------------------------------------------------------------------*/ void LA_effort_gauss_jordn (int iout, int jin, int m, int n, tNumber_R pivot, tMatrix_R ct, tVector_I ic, tVector_I nb) { int i, j, k, m1; m1 = m + 1; k = nb[jin]; nb[jin] = ic[iout]; ic[iout] = k; for (j = 1; j <= n; j++) ct[iout][j] = ct[iout][j]/pivot; for (i = 1; i <= m1; i++) { if (i != iout) { for (j = 1; j <= n; j++) { if (j != jin) { ct[i][j] = ct[i][j] - ct[i][jin] * (ct[iout][j]); } } ct[i][jin] = -ct[i][jin]/pivot; } ct[iout][jin] = 1.0/pivot; } } /*------------------------------------------------------------------Initialization of LA_Effort() -------------------------------------------------------------------*/ void LA_effort_init (int m, int n, tMatrix_R ct, tVector_I ib, tVector_I ic, tVector_I ip, tVector_I nb) { int i, j, k, m1, nm; m1 = m + 1; nm = n + m; for (j = 1; j <= n; j++) { © 2008 by Taylor & Francis Group, LLC
Chapter 22: LA_Effort
827
nb[j] = j; ip[j] = j; ct[m1][j] = 0.0; } for (j = 1; j <= nm; j++) { ib[j] = 1; } for (i = 1; i <= m; i++) { k = i + n; ic[i] = k; if (ct[i][1] >= EPS) { for (j = 1; j <= n; j++) { ct[i][j] = -ct[i][j]; ib[k] = -1; } } for (j = 1; j <= n; j++) { ct[m1][j] = ct[m1][j] - ct[i][j]; } } } /*------------------------------------------------------------------Calculating the initial marginal costs in LA_Effort() -------------------------------------------------------------------*/ void LA_effort_marg_costs (int m, int n, tMatrix_R ct, tVector_R f, tVector_I ib, tVector_R zc) { int i, m1, nm; tNumber_R d; m1 = m + 1; nm = n + m; d = f[1]; zc[1] = d * (ct[m1][1]); if (zc[1] <= -EPS) { for (i = 1; i <= n; i++) { f[i] = -f[i]; } © 2008 by Taylor & Francis Group, LLC
828
Numerical Linear Approximation in C
for (i = 1; i <= nm; i++) { ib[i] = -ib[i]; } zc[1] = -zc[1]; } if (n > 1) { for (i = 2; i <= n; i++) { zc[i] = -f[i] + f[1] * (ct[m1][i]); } } } /*------------------------------------------------------------------Determine the vector that enters the basis in LA_Effort() -------------------------------------------------------------------*/ void LA_effort_vent (int *pIvo, int *pJin, int n, tVector_I nb, tVector_R zc, tNumber_R *pZ) { int j, jc; tNumber_R bignum, d, e, g, gg, tz, tze; bignum = 1.0/ (EPS*EPS); *pIvo = 0; g = bignum; *pZ = zc[1]; tz = *pZ + *pZ; tze = tz + EPS; for (j = 2; j <= n; j++) { d = zc[j]; jc = nb[j]; if (jc <= n) { gg = d; if (gg < 0.0) gg = -gg; if (gg > EPS) { if (d >= 0.0) { e = -d; if (e < g) © 2008 by Taylor & Francis Group, LLC
Chapter 22: LA_Effort
829
{ *pIvo = -1; g = e; *pJin = j; } } else if (d < 0.0) { e = d; if (e < g) { *pIvo = 1; g = e; *pJin = j; } } } } else if (jc > n) { if (d < -EPS) { e = d; if (e < g) { *pIvo = 1; g = e; *pJin = j; } } else if (d >= tze) { e = tz - d; if (e < g) { *pIvo = -1; g = e; *pJin = j; } } } } } /*------------------------------------------------------------------© 2008 by Taylor & Francis Group, LLC
830
Numerical Linear Approximation in C
Determine the vector that leaves the basis in LA_Effort() -------------------------------------------------------------------*/ void LA_effort_vleav (int *pItest, int jin, int *pIout, int m, tMatrix_R ct) { int i, m1; tNumber_R d, g, thmax; m1 = m + 1; thmax = 1.0/ (EPS*EPS); for (i = 1; i <= m1; i++) { d = ct[i][jin]; if (d > EPS) { g = ct[i][1]/d; if (g <= thmax) { thmax = g; *pIout = i; *pItest = 1; } } } } /*------------------------------------------------------------------Calculate the results of LA_Effort() -------------------------------------------------------------------*/ eLaRc LA_effort_res (int m, int n, tMatrix_R ct, tVector_I ib, tVector_I ic, tVector_I ip, tVector_I nb, tVector_R zc, int *pIrank, tVector_R a, tNumber_R *pZ) { int i, k, kj, l, m1; tNumber_R gg; m1 = m + 1; for (i = 1; i <= m; i++) { k = ic[i]; if (k <= n) { l = ip[k]; k = nb[l]; kj = k - n; © 2008 by Taylor & Francis Group, LLC
Chapter 22: LA_Effort a[kj] = zc[l] - *pZ; if (ib[k] == -1) a[kj] = -a[kj]; } else if (k > n) { kj = k - n; a[kj] = -*pZ; if (ib[k] == -1) a[kj] = *pZ; } } for (i = 1; i <= n; i++) { gg = fabs (zc[i]); if (gg <= EPS) { *pIrank = *pIrank - 1; } } for (i = 1; i <= m1; i++) { if (ct[i][1] < EPS) return LaRcSolutionProbNotUnique; } return LaRcSolutionUnique; }
© 2008 by Taylor & Francis Group, LLC
831
833
Chapter 23 Bounded Least Squares Solution of Underdetermined Linear Equations
23.1
Introduction
This is the last of four chapters on the solution of underdetermined systems of consistent linear equations subject to constraints on the solution vectors. This chapter differs from the previous 3 chapters, where we used linear programming techniques, in that we use quadratic programming techniques. Here, it is required to calculate the least squares solution of an underdetermined consistent linear system subject to the constraints that each element of the solution vector be bounded between –1 and 1. Consider the underdetermined system of linear equations (23.1.1)
Ca = f
C = (cij) is a given n by m real matrix of rank k, k ≤ n < m, f = (fi) is a given real n-vector and a = (aj) is the solution m-vector. Assume throughout that rank(C|f) = rank(C) meaning that system Ca = f in (23.1.1) is consistent. Since n < m, system Ca = f by itself has an infinite number of solutions. In Chapter 17, among these infinite number of solutions, we obtained a solution vector a that minimizes the least square or the L2 norm ||a||2 of vector a. We call this the minimal length least squares solution. In this chapter, the problem is stated as follows. It is required to minimize half of the square of the L2 norm of vector a (23.1.2)
(1/2)||a||2 = (1/2)(aTa)
subject to the constraints (23.1.3) © 2008 by Taylor & Francis Group, LLC
–1 ≤ aj ≤ 1, j = 1, 2, …, m
834
Numerical Linear Approximation in C
The problem is dealt with in two steps. The first step is to calculate the minimal length least squares solution of the undetermined system Ca = f, without applying the constraints (23.1.3). Let this be called problem (E0). We then examine the elements of the solution vector. If they satisfy the given constraints (23.1.3), then the solution of problem (E0) is itself the solution of the given problem. If not, we proceed to the second step, where the elements of vector a are to be bounded between –1 and 1. We call this problem (E). Problem (E0): A solution for problem (E0) always exists and is given by a = C+f
(23.1.4)
where C+ is the pseudo-inverse of matrix C. If rank(C) = n < m, the solution is unique [9] and C+ = CT(CCT)–1
(23.1.5)
Problem (E) is to find a solution m-vector a that minimizes (23.1.2), or in effect, minimizes the L2 norm ||a|| and whose elements are bounded between 1 and –1. As indicated in Chapter 21, if instead of the constraints (23.1.3), we require vector a to satisfy the constraints bi ≤ ai ≤ ci, i = 1, 2, …, m where vectors b = (bi) and c = (ci) are given m-vectors, by substituting variables, the above constraints reduce to the constraints (23.1.3) in the new variables. See Section 7.1. 23.1.1
Applications of the algorithm
This algorithm has been used in digital image restoration [2, 3]. However, this problem is known in control theory as the minimum energy problem for discrete linear admissible control systems [4, 5]. A dynamical control system may be described by the vector difference equation (23.1.6)
x[i + 1] = Ax[i] + Ba[i], i = 0, 1, 2, …
A and B are real constant matrices of dimensions n by n and n by r © 2008 by Taylor & Francis Group, LLC
Chapter 23: Bounded Least Squares Solution of Underdetermined Linear Equations
835
respectively, x[i] is the state n-vector and a[i] is the control vector at time interval i. The minimum energy problem for this control system is to find a[0], a[1], …, a[N – 1], N ≥ n, that brings the system from an initial state x[1] = b to a desired state x[N] = c such that N–1
E = (1 ⁄ 2)
∑ a[i]
T
a[i]
i=0
is as small as possible. The recurrence solution of equation (23.1.6) with the conditions x[1] = b and x[N] = c, gives N
c = A b+
N–1
∑A
N–1–i
Ba [ i ]
i=0
This equation could finally be written in the form Ca = f, which is equation (23.1.1). For admissible control systems, the elements of vector a satisfy the inequalities (23.1.3), which yields problem (E). A solution of system Ca = f that satisfies the above constraints may or may not exist. Canon and Eaton [4] introduced canonical representation of the control system and solved problem (E) as a quadratic programming problem. Again Polak and Deparis [10] used optimal control and convex programming ideas for solving this problem. Stark and Parker [11] introduced a FORTRAN subroutine named BVLS (bounded-variable least-squares) that is modeled on the non-negative least squares (NNLS) algorithm of Lawson and Hanson [8]. BVLS solves the least squares problem of the system Ca = f subject to the constraints l ≤ a ≤ u, where l and u are given vectors. Using our notation, the relative size of the n by m matrix C of (23.1.1), is typically n << m. In this chapter, problem (E) is solved as a quadratic programming problem for bounded variables. It is based on the simplex method for quadratic programming introduced by Dantzig [6] and implemented by van de Panne and Whinston [12, 13]. We take advantage of the special structure of the simplex tableaux for this problem. Hence, minimum computer storage is needed and the computation time is considerably reduced. As a result, this method can deal with fairly large size problems without exhausting the © 2008 by Taylor & Francis Group, LLC
836
Numerical Linear Approximation in C
capacity of the computer. Problem (E0) is solved first as a quadratic programming problem. Its solution requires k simplex iterations, where k = rank(C). It needs a small number of arithmetic operations. If the obtained solution satisfies the inequality constraints (23.1.3), then this solution is also a solution of problem (E). If not, the simplex tableau is enlarged and the problem at hand is solved as a quadratic programming problem for bounded variables. If Ca = f is consistent but rank deficient; that is, one or more equations in Ca = f are redundant, the redundant equation(s) are deleted. The algorithm also detects the case when system Ca = f is inconsistent, i.e., rank(C|f) > rank(C) and a solution for problems (E0) and (E) is not feasible. In this case, the calculation terminates. In section 23.2, the quadratic programming formulation of the problem is given. In section 23.3, the solution of problem (E0) as a quadratic programming problem is obtained. Also in section 23.3, the case of rank deficient consistent system Ca = f and the case of inconsistent system Ca = f are detected. In section 23.4, the solution of problem (E) as a quadratic programming problem with bounded variables is obtained. In section 23.5, numerical results and comments are given. 23.2
Quadratic programming formulation of the problems
Problem (E0) is formulated as a quadratic programming problem maximize z = –(1/2)aTa
(23.2.1a) subject to (23.2.1b)
Ca = f
(23.2.1c)
aj, j = 1, 2, …, m, unrestricted in sign
The problem is posed as a maximization problem by introducing a –ve sign in (23.2.1a). Problem (E) is formulated by (23.2a, b) and (23.2.1d)
© 2008 by Taylor & Francis Group, LLC
–1 ≤ aj ≤ 1, j = 1, 2, …, m
Chapter 23: Bounded Least Squares Solution of Underdetermined Linear Equations
23.3
837
Solution of problem (E0)
To solve problem (E0) as a quadratic programming problem, we first write down the Lagrange function L = –(1/2)aTa – vT(Ca – f) v is an n-vector of Lagrange multipliers for the constraints (23.2.1b), namely Ca = f. For optimum L, we differentiate L w.r.t. ai, i = 1, 2, …, m and also w.r.t. vj, j = 1, 2, …, n, and equate each of the (m + n) equations to 0. However, we shall not equate the (m + n) equations to 0’s right away. We do that in two steps, as follows: (1) The differentiation produces [12, 13] (23.3.1a)
Imu0 – CTv – Ima = 0
(23.3.1b)
Ca + Iny = f
(23.3.1c) aj, j = 1, 2, …, m, and vi, i = 1, 2, …, n, unrestricted in sign
(2)
In (23.3.1a), u0 is the m-vector whose elements are the negative of the partial derivatives of the Lagrange function with respect to the a-variables. In (23.3.1b) we added the n-vector y, which is assumed to be an n-vector of artificial variables used to form the initial basic solution of the problem. Also, Im and In are unit matrices of order m and n respectively. Solve for a and v by making both u and y as non-basic. This is done via putting equations (23.3.1a, b) in a simplex tableau format (Tableau 23.3.1), which is a setup tableau for problem (E0). Tableau 23.3.1 (Setup tableau for problem (E0))
B bB –––––––––––––– u0 0 y f ––––––––––––––
u0T vT aT yT –––––––––––––––––––––––––––––––– Im –CT –Im 0 0 0 C In ––––––––––––––––––––––––––––––––
In this tableau, B denotes the (m + n) basis matrix and bB denotes the basic solution. The elements of u0 and of y form the (initial) basic variables. To solve problem (E0), we change this tableau, by applying © 2008 by Taylor & Francis Group, LLC
838
Numerical Linear Approximation in C
Gauss-Jordan elimination steps so that the elements of vector a replace the corresponding elements of vector u0 as basic variables. In effect, Tableau 23.3.2 is obtained by pre-multiplying the main body of Tableau 23.3.1 by the inverse of matrix B1–1, which is its own inverse –Im 0
–1
–Im 0
=
C In
C In
Then the elements of vector v replace the corresponding elements of vector y as basic variables making y a non-basic, again by applying Gauss-Jordan elimination steps. This gives Tableau 23.3.3. Again, in effect, Tableau 23.3.3 is obtained by pre-multiplying the main body of Tableau 23.3.2 by matrix B2–1, where B2
–1
=
Im
C
–1
T
0 – CC
= T
Im
C
+ T –1
0 – ( CC )
where C+ is given by (23.1.5) for rank(C) = n < m. Tableau 23.3.2 (For problem (E0)) B bB –––––––––––––– a 0 y f ––––––––––––––
u0T vT aT yT –––––––––––––––––––––––––––––––– –Im CT Im 0 T C –CC 0 In ––––––––––––––––––––––––––––––––
Tableau 23.3.3 (Optimal solution for problem (E0)) B bB –––––––––––––– a C+f v –(CCT)–1f ––––––––––––––
u0T vT aT yT –––––––––––––––––––––––––––––––– (–Im+ C+C) 0 Im C+ –C+T In 0 –(CCT)–1 ––––––––––––––––––––––––––––––––
In (23.3.1c), the elements of vectors a and v are unrestricted in sign. Therefore, Tableau 23.3.3 would be the tableau for the optimal solution. In Tableau 23.3.3, C+ is the pseudo-inverse of matrix C as given by (23.1.5). Hence, by comparing with (23.1.4), the basic © 2008 by Taylor & Francis Group, LLC
Chapter 23: Bounded Least Squares Solution of Underdetermined Linear Equations
839
solution in Tableau 23.3.3, a = C+f, is the optimal solution for problem (E0). To ensure the numerical stability of the solution, before changing the simplex tableau, pivoting is performed as follows. To obtain Tableau 23.3.2 from Tableau 23.3.1, Gauss-Jordan elimination steps are used to reduce matrix C in the column of aT to 0. Again, to obtain Tableau 23.3.3 from Tableau 23.3.2, GaussJordan elimination steps with pivoting are used to reduce the columns under vT to a matrix 0 augmented vertically by matrix In. Since matrix CCT is positive semi-definite, pivoting is done along the diagonal elements of –CCT. Because of using partial pivoting in obtaining Tableau 23.3.3, the calculated C+ in Tableau 23.3.3 would be the pseudo-inverse of matrix C (not of matrix C), where C is matrix C whose rows are permuted. The permutation of the rows of C are recorded in an n-vector of indices. If rank(C) = k < n, this is determined in the process of applying the Gauss-Jordan elimination with pivoting. In this case one or more of the intermediate tableaux between Tableaux 23.3.2 and 23.3.3 has a row of 0’s. If the corresponding element of the basic solution bBi to this row is also 0, then system (23.1.2) is consistent but rank deficient. The row and its zero bBi element are deleted and the rows of the current tableau are then rearranged. Again, a row of 0’s indicates that a corresponding column of CT is linearly dependent on one or more of the previous columns. The calculated C+ would be of dimensions m by k and it is the pseudo-inverse of matrix C of dimension k by m, where C contains the linearly independent rows of matrix C, properly permuted. If the corresponding element of bB to a row of 0’s is nonzero, the system is inconsistent and the calculation is terminated, as system Ca = f would have no feasible solution. Again, if the elements of the obtained a-solution of problem (E0) are bounded between –1 and +1, this solution would also be the solution of problem (E). If the solution vector a of problem (E0) does not satisfy (23.1.3), we proceed to solve problem (E), as a quadratic programming problem with bounded variables.
© 2008 by Taylor & Francis Group, LLC
840
Numerical Linear Approximation in C
23.4
Solution of problem (E)
Slack variables xj1 ≥ 0 and the surplus variables xj2 ≥ 0, j = 1, 2, …, m, are needed to convert respectively the right and left inequalities in (23.2.1d) into equalities. (23.4.1a)
aj + xj1 = 1,
j = 1, 2, …, m
(23.4.1b)
xj2
j = 1, 2, …, m
–aj +
= 1,
The Lagrange function for problem (E) is then given by L = –(1/2)aTa – vT(Ca – f) – u1T(a + x1 – e) – u2T(a + x2 – e) The vectors u0, v and y are defined as in (23.3.1a, b), e is an m-vector each element of which is 1, and u1 and u2 are m-vectors of Lagrange multipliers for (23.4.1a, b) respectively. The Kuhn-Tucker conditions for the solution of problem (E) are the following [14]. (a) Imu0 – Imu1 + Imu2 – CTv – Ima =0 1 =e (b) Ima + Imx (c) – Ima + + Imx2 =e (d) Ca + + Iny = f =0 (e) u1Tx1 + u2Tx2 (f) xj1, xj2, uj1, uj2 ≥ 0, j = 1, 2, …, m (g) aj, j = 1, 2, …, m and vi, i = 1, 2, …, n, unrestricted in sign From these conditions, the setup tableau for problem (E) is constructed (Tableau 23.4.1), where the elements of vectors u0, x1, x2 and y form the initial basic variables. Tableau 23.4.1 (Setup Tableau for problem (E)) u0T u1T u2T vT aT x1T x2T yT B bB –––––––––– ––––––––––––––––––––––––––––––––––––––––––– u0 0 Im –Im Im –CT –Im 0 0 0 e 0 0 0 0 Im Im 0 0 x1 2 x e 0 0 0 0 –Im 0 Im 0 y f 0 0 0 0 C 0 0 In –––––––––– ––––––––––––––––––––––––––––––––––––––––––– The setup tableau is now changed as in problem (E0), where the elements of vector a replace the corresponding elements of vector u0 as basic variables. The elements of vector v then replace the © 2008 by Taylor & Francis Group, LLC
Chapter 23: Bounded Least Squares Solution of Underdetermined Linear Equations
841
corresponding elements of vector y as basic variables. That gives Tableau 23.4.2 (not shown), then Tableau 23.4.3. Tableau 23.4.3 (For problem (E)) B bB u0T u1T u2T vT aT x1T x2T yT –––––––––– ––––––––––––––––––––––––––––––––––––––––––– a C+f G –G G 0 Im 0 0 C+ x1 e–C+f –G G –G 0 0 Im 0 –C+ 2 + x e+C f G –G G 0 0 0 Im C+ v –(CCT)–1f –C+T C+T –C+T In 0 0 0 –(CCT)–1 –––––––––– ––––––––––––––––––––––––––––––––––––––––––– In Tableau 23.4.3, G = (–Im + C+C). This tableau is then changed in such a way as to satisfy the Kuhn-Tucker conditions (e) and (f). 23.4.1
Asymmetries in the simplex tableau
By examining Tableau 23.4.3, the following asymmetries exist. (i) The columns under u0T and under u1T are the –ve of one another. (ii) The columns under u1T and under u2T are the –ve of one another. Lemma 23.1 Let wj1 and wj2 be respectively the columns under uj1and uj2 in any tableau after Tableau 23.4.2. Then wj1 + wj2 = 0, j = 1, 2, …, m and as a result, we have the following lemma. Lemma 23.2 In any tableau after Tableau 23.4.3, for any j, 1 ≤ j ≤ m, uj1 and uj2, cannot together be basic variables. Proof: If uj1 and uj2 appear together in any basis, the basis matrix would be singular.
© 2008 by Taylor & Francis Group, LLC
842
Numerical Linear Approximation in C
Lemma 23.3 Assume in a tableau after Tableau 23.4.3, xj1 and xj2, 1 ≤ j ≤ m, are both basic variables and (gij) and (hij) are the elements of the tableau opposite xj1 and xj2 respectively in the non-basic columns. Then gij = –hij For the proof of a similar lemma, see ([14], pp. 176, 177) Standard and non-standard tableau If in any of the tableaux following Tableau 23.4.3, condition (e) of Kuhn-Tucker is satisfied, the tableau is defined as a standard tableau. For condition (e) to be satisfied, if uj1, j = 1, 2, …, m, is a basic variable, xj1 is non-basic, and vice versa. The same is true for uj2 and xj2, j = 1, 2, …, m. If condition (e) is not satisfied, the tableau is nonstandard. For any standard tableau and from Lemma 23.2, one of the following pairs of variables is basic, (xj1 and uj2), (uj1 and xj2) or (xj1 and xj2), j = 1, 2, …, m. Lemma 23.4 In any standard tableau the following are satisfied: (a) If xj1 and uj2 are basic variables, xj2 = 0, xj1 = 2 and aj = –1. (b) If uj1 and xj2 are basic variables, xj1 = 0, xj2 = 2 and aj = 1. (c) If xj1 and xj2 are both basic variables, xj1 + xj2 = 2. Lemma 23.5 In any tableau, standard or not, where only xj1 or xj2 is a basic variable, 1 ≤ j ≤ m, we have the following. If xj1 say is non-basic, the row opposite xj2 has zero elements in the non-basic columns, except under xj1, where the element is 1 and xj2 = 2. The same is true when the roles of xj1 and xj2 are exchanged. 23.4.2
The condensed tableau for problem (E)
Because of the Kuhn-Tucker condition (g), the elements of vectors a and v are unrestricted in sign, the elements of their respective corresponding vectors u0 and y should be 0’s. They are so in Tableau
© 2008 by Taylor & Francis Group, LLC
Chapter 23: Bounded Least Squares Solution of Underdetermined Linear Equations
843
23.4.3, and they should stay non-basic in all the following tableaux. In other words, the elements of vectors a and v should stay basic variables in all the following tableaux. Tableau 23.4.4 (Condensed tableau for problem (E)) B bB –––––––––––––– x1 e – C+f 2 x e + C+f ––––––––––––––
u0T vT aT yT –––––––––––––––––––––––––––––––– G –G Im 0 –G G 0 Im ––––––––––––––––––––––––––––––––
where G was defined earlier and is G = (–Im+ C+C). In effect, we may disregard in Tableau 23.4.3 and the following tableaux, the m and n rows that correspond to the elements of vectors a and v respectively. We also disregard the columns that correspond to the elements of vectors a, u0T, vT and y. The calculation in the following simplex steps is confined to the remaining part of Tableau 23.4.3, which we denote by Tableau 23.4.4. We call this a condensed tableau. Finally, for optimal solution, the elements xj1, xj2, uj1, uj2, j = 1, 2, …, m, are to be non-negative in a standard condensed tableau. 23.4.3
The dual method of solution
Let from now on, u denote either of the vectors u1 or u2 and similarly let x denote either of the vectors x1 or x2. The x-variables are known as the primal variables and the corresponding u-variables are the dual variables. A new obtained tableau may or may not be a standard one. In a non-standard tableau, if an x-variable and its corresponding u-variable are both basic, they are known as a basic pair. If they are both non-basic, they are known as a non-basic pair. It has been shown [12, 13], that the rules of changing the simplex tableau may be such that there is only one basic pair and one non-basic pair in a nonstandard tableau. Tableau 23.4.4 is a standard tableau, and where u ≥ 0, the dual variables have feasible solution. Thus Tableau 23.4.4 is a suitable initial tableau for the solution of problem (E) using the dual method for quadratic programming. The rules of changing the simplex tableau © 2008 by Taylor & Francis Group, LLC
844
Numerical Linear Approximation in C
are found in ([12], p. 296). 23.4.4
The reduced tableau for problem (E)
We show here that it is sufficient to store 1/8 of the condensed tableau. That is, 1/2 the non-basic columns and 1/2 the rows opposite the basic variables. The columns of the basic variables are also not stored, as each is a column in a 2m-unit matrix. The remaining part of the tableau can easily be obtained from the stored part. We call this m by m tableau, the reduced tableau. Let xj1 and uj2, 1 ≤ j ≤ m, both be basic variables. In this case the column under xj2 and the row opposite uj2 are stored. From Lemma 23.1, in the condensed tableau, the column under uj1 is ej, where ej is the jth column in an 2m-unit matrix, and from Lemma 23.5, the row opposite xj1 consists of zero elements, except under xj2, where the element is 1. Let xj1 and xj2 both be basic variables, 1 ≤ j ≤ m. If 0 ≤ xj1 ≤ 2, then from the Kuhn Tucker conditions (b) and (c), 0 ≤ xj2 ≤2. In this case the row opposite either xj1 or xj2 and the column under the corresponding non-basic u-variable are stored. From Lemmas 23.3 and 23.1, the un-stored row and column respectively are known. If, on the other hand, we let xj1 < 0, then xj2 > 2. In this case the row opposite xj1 and the column under its corresponding non-basic u-variable are stored. In a nonstandard tableau, in which ui1 say, did replace uj1 in the basis, we store the column under xj1 and the row opposite uj1. Again from Lemma 23.1, the column under ui2 is –ei, and from Lemma 23.5, the row opposite xj2 has zero elements, except under xj1, where it is 1. Hence, in all cases, 1/8 of the condensed tableau is stored. This includes the data needed for the following iterations. 23.4.5
The method of solution of problem (E)
All calculation is done in the reduced tableaux [1], using the dual method for quadratic programming, in a manner very similar to that used in [13]. Index indicator vector accounts for the corresponding part of the tableau. If a solution for problem (E) does not exist, in the simplex step, no
© 2008 by Taylor & Francis Group, LLC
Chapter 23: Bounded Least Squares Solution of Underdetermined Linear Equations
845
non-basic variable at a positive level is found to replace a basic variable at a negative level. However, if a solution exists, the convergence of the method is guaranteed [12]. 23.5
Numerical results and comments
LA_Energy() implements this algorithm, and DR_Energy() tests 3 examples. The examples were solved in both single- and double-precision, the results of which are given here. Also, the 7 test cases of Chapter 20 were demonstrated for this algorithm. The third example, whose matrix C is a 2 by 3 matrix, has no feasible solution. The one whose matrix C is a 4 by 5 matrix is given here as Example 23.1. For the other 5 test cases of Chapter 20, the elements of the solution vector a satisfy the boundedness constraints (23.1.3); that is, the solution of problem (E0) is itself the solution of the given problem (E). Example 23.1 Matrix C is a well conditioned 4 by 5 matrix of rank 2. The solution is obtained after 3 (= rank + 1) iterations; 2 for problem (E0) and 1 for problem (E). The result is the same, regardless of single- or double-precision computation. 2a1 2a1 2a1 –8a1
+ a2 + a2 + a2 – 4a2
– – – +
3a3 3a3 3a3 2a3
+ + + +
5a4 5a4 5a4 5a4
+ + + +
3a5 3a5 3a5 6a5
= 10 = 10 = 10 = 8
a = (0.139, 0.070, –0.614, 1.0, 0.937)T z = 0.5||a||2 = 1.139 Example 23.2 Matrix C is a 4 by 19 badly conditioned matrix of rank 4. Using single-precision computation, the problem has no feasible solution. The following solution, computed in double-precision, is obtained after 20 iterations. a = (–1, –1, –1, –1, –1, –0.534, 1, 1, 1, 1, 1, 1, 0.393, –1, –1, –1, –1, 0.139, 1)T
© 2008 by Taylor & Francis Group, LLC
846
Numerical Linear Approximation in C
z = 0.5||a||2 = 8.230 Example 23.3 Matrix C is a 4 by 20 matrix of rank 4, the extension of matrix C of the previous example. It is not badly conditioned. The obtained results are the same, regardless of single- or double-precision computation. It converges in 8 iterations. The results are a = (–1, –1, –1, –0.799, –0.410, –0.058, 0.247, 0.491, 0.661, 0.744, 0.732, 0.618, 0.403, 0.102, –0.251, 0.596, –0.832, –0.798, –0.254, 1)T z = 0.5||a||2 = 4.502 We note here that the calculation for problem (E0) is numerically stable. This is because of using pivoting in the simplex steps. As for problem (E), the numerical results indicate the following. If the problem has a solution, the parameter θ θ = min(bBi/gi ≥ 0), i ∈ I where I is the set of indices associated with the basic u-variables and xl, are less than or of the order of 1, the calculation is also numerically stable. If the problem does not have a solution, the parameters θ becomes larger and larger in the successive iterations, and as a result, the build up of the round-off error increases. The algorithm terminates when the pivot in the simplex step is of wrong sign or is a very small number of the order of the round-off error of the computer.
References 1. 2. 3.
Abdelmalek, N.N, Minimum energy problem for discrete linear admissible control systems, International Journal of Systems Science, 10(1979)77-88. Abdelmalek, N.N., Restoration of images with missing high-frequency components using quadratic programming, Applied Optics, 22(1983)2182-2188. Abdelmalek, N.N. and Kasvand, T., Digital image restoration using quadratic programming, Applied Optics, 19(1980)34073415.
© 2008 by Taylor & Francis Group, LLC
Chapter 23: Bounded Least Squares Solution of Underdetermined Linear Equations
4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14.
847
Canon, M.D. and Eaton, J.H., A new algorithm for a class of quadratic programming problems with application to control, SIAM Journal on Control, 4(1966)34-45. Canon, M.D., Cullum Jr., C.D. and Polak, E., Theory of Optimal Control and Mathematical Programming, McGrawHill, New York, 1970. Dantzig, G.B., Linear Programming and Extensions, Princeton University Press, Princeton, NJ, 1963. Hadley, G., Linear Programming, Addison-Wesley, Reading, MA, 1962. Lawson, C.W. and Hanson, R.J., Solving Least Squares Problems, Prentice Hall, Englewood Cliffs, NJ, 1974. Peters, G. and Wilkinson, J.H., The least squares problem and pseudo-inverses, Computer Journal, 13(1970)309-316. Polak, E. and Deparis, M., An algorithm for minimum energy control, IEEE Transactions on Automatic Control, 14(1969)367-377. Stark, P.B. and Parker, R.L., Bounded-variable least-squares: An algorithm and applications, Computational Statistics, 10(1995)129-141. van de Panne, C. and Whinston, A., Simplicial methods for quadratic programming, Naval Research Logistic Quarterly, 11(1964)273-302. van de Panne, C. and Whinston, A., Simplex and the dual method for quadratic programming, Operations Research Quarterly, 15(1964)355-388. Whinston, A., The bounded variable problem – An application of the dual method for quadratic programming, Naval Research Logistics Quarterly, 12(1965)173-179.
© 2008 by Taylor & Francis Group, LLC
848
23.6
Numerical Linear Approximation in C
DR_Energy
/*------------------------------------------------------------------DR_Energy --------------------------------------------------------------------This program is a driver for the function LA_Energy(), which calculates the minimum bounded least squares solution of an underdetermined consistent system of linear equations. Given is the underdetermined consistent system c*a = f "c" is a given real n by m matrix of rank k, k <= n <= m. "f" is a given real n vector. It is required to calculate the m vector "a" for this system such that the elements of "a" satisfy -1 <= a[j] <= 1,
j = 1, 2, ..., m
and that half of the square of the L2 (L-Two) norm of vector "a" z = 0.5*[sum[a[j]*a[j]], sum over j from 1 to m is as small as possible. In control theory, this problem is known as the "Minimum Energy" problem for discrete linear admissible control systems. This program carries 3 examples.whose results appear in the text. -------------------------------------------------------------------*/ #include "DR_Defs.h" #include "LA_Prototypes.h" #define #define #define #define #define #define #define
MNe_ROWS MNe_COLS N1e M1e N2e M2e M3e
(Ne_ROWS + Me_COLS) /* From DR_Defs.h */ (Ne_ROWS + Me_COLS) 4 5 4 19 20
© 2008 by Taylor & Francis Group, LLC
Chapter 23: DR_Energy
849
void DR_Energy (void) { /*---------------------------------------Constant matrices/vectors ----------------------------------------*/ static tNumber_R b1init[N1e][M1e] = { { 2.0, 1.0, -3.0, 5.0, 3.0 }, { 2.0, 1.0, -3.0, 5.0, 3.0 }, { 2.0, 1.0, -3.0, 5.0, 3.0 }, {-8.0, -4.0, 2.0, 5.0, 6.0 } }; static tNumber_R f1[N1e+1] = { NIL, 10.0, 10.0, 10.0, 8.0 }; /*---------------------------------------Variable matrices/vectors ----------------------------------------*/ tMatrix_R ct = alloc_Matrix_R (MNe_ROWS, MNe_COLS); tVector_R f = alloc_Vector_R (Ne_ROWS); tVector_R a = alloc_Vector_R (Me_COLS); tMatrix_R fay = alloc_Matrix_R (N2e, M3e); tVector_R fy = alloc_Vector_R (N2e + 1); tMatrix_R
b1
= init_Matrix_R (&(b1init[0][0]), N1e, M1e);
int int tNumber_R
irank, iter; i, j, m, n, i2, Iexmpl; z;
eLaRc
rc = LaRcOk;
for (i = 1; i <= 2; i++) { i2 = i + 2; fy[i] = -2.0; fy[i2] = -4.0; } for (j = 1; j <= 20; j++) { i = j; fay[1][j] = 1.0; © 2008 by Taylor & Francis Group, LLC
850
Numerical Linear Approximation in C fay[2][j] = pow (1.1, i); fay[3][j] = pow (1.2, i); fay[4][j] = pow (1.3, i); } prn_dr_bnr ("DR_Energy, " "Bounded L2 Solution of an Underdetermined System"); for (Iexmpl = 1; Iexmpl <= 3; Iexmpl++) { switch (Iexmpl) { case 1: n = N1e; m = M1e; for (i = 1; i <= n; i++) { f[i] = f1[i]; for (j = 1; j <= m; j++) ct[j][i] = b1[i][j]; } break; case 2: n = N2e; m = M2e; for (i = 1; i <= n; i++) { f[i] = fy[i]; for (j = 1; j <= m; j++) ct[j][i] = fay[i][j]; } break; case 3: n = N2e; m = M3e; for (i = 1; i <= n; i++) { f[i] = fy[i]; for (j = 1; j <= m; j++) ct[j][i] = fay[i][j]; } break; default:
© 2008 by Taylor & Francis Group, LLC
Chapter 23: DR_Energy
851
break; } prn_algo_bnr ("Energy"); prn_example_delim(); PRN ("Example #%d: Size of matrix \"c\" %d by %d\n", Iexmpl, n, m); prn_example_delim(); PRN ("Bounded Least Squares Solution of an Underdetermined " "System\n"); prn_example_delim(); PRN ("r.h.s. Vector \"f\"\n"); prn_Vector_R (f, n); PRN ("Transpose of Coefficient Matrix, \"ct\"\n"); prn_Matrix_R (ct, m, n); rc = LA_Energy (m, n, ct, f, &irank, &iter, a, &z); if (rc >= LaRcOk) { PRN ("\n"); PRN ("Results of Minimum Energy Solution\n"); PRN ("Minimum Energy solution vector \"a\"\n"); prn_Vector_R (a, m); PRN ("Norm z = 0.5||a||*||a||= %8.4f\n", z); PRN ("Rank of matrix \"c\" = %d, No. of Iterations " "= %d\n", irank, iter); } LA_check_rank_def (n, irank); prn_la_rc (rc); } free_Matrix_R free_Vector_R free_Vector_R free_Matrix_R free_Vector_R
(ct, MNe_ROWS); (f); (a); (fay, N2e); (fy);
uninit_Matrix_R (b1); }
© 2008 by Taylor & Francis Group, LLC
852
23.7
Numerical Linear Approximation in C
LA_Energy
/*------------------------------------------------------------------LA_Energy --------------------------------------------------------------------Given is the consistent underdetermined system of linear equations c*a = f "c" is a given real n by m matrix of rank k, k <= n <= m. "f" is a given real n vector. It is required to calculate the m vector "a" for this system that satisfies the conditions -1 <= a[j] <= 1,
j = 1, 2, ..., m
and that half of square of the L2 norm of vector "a" z = 0.5*sum([a[j]*a[j]), sum over j from 1 to m is as small as possible. In control theory, this is known as the "Minimum Energy" problem for discrete linear admissible control systems. Inputs m Number of columns of matrix "c" in the system c*a = f. n Number of rows of matrix "c" in the system c*a = f. ct An (m + n) by (m + n) matrix whose first m rows and first n columns contain the transpose of matrix "c" of the system c*a = f. f An n vector that contains the r.h.s. of the system c*a = f. Outputs irank The calculated rank of matrix "c". iter The number of iterations or the number of times the simplex tableau is changed by a Gauss-Jordan step. a A real m vector that is the minimum energy solution of the system c*a = f. z Half the square of the minimum L2 norm of the solution vector "a". Returns one of
© 2008 by Taylor & Francis Group, LLC
Chapter 23: LA_Energy
853
LaRcSolutionFound LaRcNoFeasibleSolution LaRcInconsistentSystem LaRcErrBounds LaRcErrNullPtr LaRcErrAlloc -------------------------------------------------------------------*/ #include "LA_Prototypes.h" eLaRc LA_Energy (int m, int n, tMatrix_R ct, tVector_R f, int *pIrank, int *pIter, tVector_R a, tNumber_R *pZ) { tVector_I ic = alloc_Vector_I (n); tVector_I ir = alloc_Vector_I (m); tVector_I ik = alloc_Vector_I (m); int int int int tNumber_R eLaRc
i = 0, jin = 0, ibc = 0, iijj = 0, ijk = 0, iout = 0, irjin = 0; ikj0 = 0, ikj1 = 0, istd = 0, ivo = 0; itest = 0, ikj2 = 0, ikk = 0, iriout = 0, j = 0, k = 0, j0 = 0, j1 = 0, j2 = 0; mn = 0, np1 = 0, nj0 = 0; e = 0.0, opeps = 0.0, pivot = 0.0; tempRc;
/* Validation of the data before executing algorithm */ eLaRc rc = LaRcSolutionFound; VALIDATE_BOUNDS ((0 < n) && (n <= m) && !((n == 1) && (m == 1))); VALIDATE_PTRS (ct && f && pIrank && pIter && a && pZ); VALIDATE_ALLOC (ic && ir && ik); /* Initialization */ *pIrank = n; *pIter = 0; *pZ = 0.0; np1 = n + 1; mn = m + n; opeps = 1.0 + EPS; /* Initializing the data */ LA_energy_init (m, n, ct, ic, ir, ik, a); /* Phase 1 of the algorithm */ tempRc = LA_energy_phase_1 (m, n, ct, f, ic, a, pIrank, pIter); © 2008 by Taylor & Francis Group, LLC
854
Numerical Linear Approximation in C
if (tempRc < LaRcOk) { GOTO_CLEANUP_RC (tempRc); } ibc = 0; for (i = 1; i <= m; i++) { if (fabs (a[i]) > opeps) ibc = 1; } if (ibc == 0) { /* The norm z */ LA_energy_norm (m, a, pZ); GOTO_CLEANUP_RC (LaRcSolutionFound); } /* Phase 2 of the program */ LA_energy_phase_2 (m, n, ct, ir, ik, pIrank, a); istd = 1; j1 = 0; for (ijk = 1; ijk <= m*m; ijk++) { if (istd != 1) { j0 = j1; nj0 = n + j1; iijj = 1; } if (istd == 1) { /* Determine the vector that enters the basis */ ivo = 0; LA_energy_vent (&ivo, &jin, m, n, ir, a); /* Calculate the results */ if (ivo == 0) { LA_energy_res (m, n, ct, ir, a); /* The specified norm */ LA_energy_norm (m, a, pZ); © 2008 by Taylor & Francis Group, LLC
Chapter 23: LA_Energy GOTO_CLEANUP_RC (LaRcSolutionFound); } if (ivo == 1) { for (j = np1; j <= mn; j++) { ct[jin][j] = -ct[jin][j]; } a[jin] = 2.0 - a[jin]; ir[jin] = -ir[jin]; } irjin = ir[jin]; ikj1 = irjin + mn; if (irjin < 0) ikj1 = irjin - mn; iijj = 0; for (j = 1; j <= m; j++) { j0 = j; nj0 = n + j0; ikj0 = ik[j0]; if (ikj0 == ikj1) { iijj = 1; break; } if (ikj0 == -ikj1) { for (i = 1; i <= m; i++) { ct[i][nj0] = -ct[i][nj0]; } ik[j0] = -ik[j0]; ikj0 = -ikj0; iijj = 1; break; } } if (iijj == 0) { GOTO_CLEANUP_RC (LaRcNoFeasibleSolution); © 2008 by Taylor & Francis Group, LLC
855
856
Numerical Linear Approximation in C } } itest = 0; iout = 0; /* Determine the vector that leaves the basis */ LA_energy_vleav (&itest, jin, m, n, &iout, j0, ct, ir, a); /* The problem has no feasible solution */ if (itest == 0) { GOTO_CLEANUP_RC (LaRcNoFeasibleSolution); } if ((istd != 1) || (iout != jin)) { istd = 0; iriout = ir[iout]; ikj1 = iriout - mn; if (iriout < 0) ikj1 = iriout + mn; iijj = 0; for (j = 1; j <= m; j++) { j2 = j; ikj2 = ik[j2]; if (ikj2 == ikj1) { for (k = 1; k <= m; k++) { ikk = ik[k]; if (ikk == ikj2) { j1 = k; iijj = 1; break; } } if (iijj == 1) break; } } if (iijj == 0) { /* The problem has no feasible solution */ GOTO_CLEANUP_RC (LaRcNoFeasibleSolution); } }
© 2008 by Taylor & Francis Group, LLC
Chapter 23: LA_Energy
857
istd = 1; pivot = ct[iout][nj0]; e = fabs (pivot); LA_energy_gauss_jordn_e (m, n, iout, j0, ct, ir, ik, a); *pIter = *pIter + 1; } CLEANUP: free_Vector_I (ic); free_Vector_I (ir); free_Vector_I (ik); return rc; } /*------------------------------------------------------------------LA_Energy() initialization -------------------------------------------------------------------*/ void LA_energy_init (int m, int n, tMatrix_R ct, tVector_I ic, tVector_I ir, tVector_I ik, tVector_R a) { int i, j, k, np1, mi, mj, nj, mn; tNumber_R s; np1 = n + 1; mn = m + n; for (j = np1; j <= mn; j++) { for (i = 1; i <= m; i++) { ct[i][j] = 0.0; } } for (j = 1; j <= m; j++) { ir[j] = 0; ik[j] = 0; a[j] = 0.0; nj = j + n; for (i = 1; i <= n; i++) { mi = m + i; © 2008 by Taylor & Francis Group, LLC
            ct[mi][nj] = ct[j][i];
        }
    }

    /* Calculating matrix [-(c*c(transpose))] */
    for (j = 1; j <= n; j++)
    {
        ic[j] = j;
        mj = m + j;
        for (i = j; i <= n; i++)
        {
            mi = m + i;
            s = 0.0;
            for (k = 1; k <= m; k++)
            {
                s = s + ct[k][i] * (ct[k][j]);
            }
            ct[mj][i] = -s;
            ct[mi][j] = -s;
        }
    }
}
/*-------------------------------------------------------------------
 Phase 1 of LA_Energy()
-------------------------------------------------------------------*/
eLaRc LA_energy_phase_1 (int m, int n, tMatrix_R ct, tVector_R f,
    tVector_I ic, tVector_R a, int *pIrank, int *pIter)
{
    int i, j, ij, iout, kd = 0, kdp = 0, mj, mn, mout;
    tNumber_R d, piv;

    mn = m + n;
    iout = 0;
    for (ij = 1; ij <= n; ij++)
    {
        iout = iout + 1;
        if (iout <= *pIrank)
        {
            /* Pivoting along the diagonal elements of
               ct*ct(transpose) */
            mout = m + iout;
            piv = 0.0;
            for (j = iout; j <= n; j++)
            {
                mj = m + j;
                d = fabs (ct[mj][j]);
                if (d > piv)
                {
                    kd = j;
                    kdp = mj;
                    piv = d;
                }
            }
            if (piv > EPS)
            {
                if (kdp != mout)
                {
                    swap_rows_Matrix_R (ct, kdp, mout);
                }
                if (kd != iout)
                {
                    swap_elems_Vector_R (f, kd, iout);
                    swap_elems_Vector_I (ic, kd, iout);

                    /* Swap two columns of matrix "ct" */
                    for (i = 1; i <= mn; i++)
                    {
                        d = ct[i][iout];
                        ct[i][iout] = ct[i][kd];
                        ct[i][kd] = d;
                    }
                }
                /* A Gauss-Jordan elimination step */
                LA_energy_gauss_jordn_e0 (iout, m, n, ct, f, a);
                *pIter = *pIter + 1;
            }
            else if (piv < EPS)
            {
                for (i = iout; i <= n; i++)
                {
                    if (fabs (f[i]) > EPS)
                        return LaRcInconsistentSystem;
                }
                *pIrank = iout - 1;
            }
        }
    }
    return LaRcOk;
}
/*-------------------------------------------------------------------
 Phase 2 of LA_Energy()
-------------------------------------------------------------------*/
void LA_energy_phase_2 (int m, int n, tMatrix_R ct, tVector_I ir,
    tVector_I ik, int *pIrank, tVector_R a)
{
    int i, j, k, mk, mn, ni, nj;
    tNumber_R s;

    /* Calculating (c * pseudo-inverse * c) */
    mn = m + n;
    for (j = 1; j <= m; j++)
    {
        nj = n + j;
        for (i = j; i <= m; i++)
        {
            ni = n + i;
            s = 0.0;
            for (k = 1; k <= *pIrank; k++)
            {
                mk = m + k;
                s = s + ct[i][k] * (ct[mk][nj]);
            }
            ct[i][nj] = s;
            ct[j][ni] = s;
        }
    }

    /* Calculating (c * pseudo-inverse * c - m-Unit matrix) */
    for (i = 1; i <= m; i++)
    {
        ir[i] = i;
        ik[i] = mn + i;
        a[i] = 1.0 - a[i];
        ni = n + i;
        ct[i][ni] = ct[i][ni] - 1.0;
    }
}
/*-------------------------------------------------------------------
 Norm z in LA_Energy()
-------------------------------------------------------------------*/
void LA_energy_norm (int m, tVector_R a, tNumber_R *pZ)
{
    int i;
    tNumber_R s;

    s = 0.0;
    for (i = 1; i <= m; i++)
    {
        s = s + a[i] * (a[i]);
    }
    *pZ = 0.5 * s;
}
/*-------------------------------------------------------------------
 A Gauss-Jordan elimination step for problem E0 in LA_Energy()
-------------------------------------------------------------------*/
void LA_energy_gauss_jordn_e0 (int iout, int m, int n, tMatrix_R ct,
    tVector_R f, tVector_R a)
{
    int i, j, jin, mn, mout, k;
    tNumber_R d, pivot;

    mn = m + n;
    jin = iout;
    mout = m + iout;
    pivot = ct[mout][jin];
    for (j = 1; j <= n; j++)
    {
        ct[mout][j] = ct[mout][j]/pivot;
    }
    f[iout] = f[iout]/pivot;

    for (i = 1; i <= mn; i++)
    {
        if (i != mout)
        {
            d = ct[i][jin];
            for (j = 1; j <= n; j++)
            {
                if (j != jin)
                {
                    ct[i][j] = ct[i][j] - d * (ct[mout][j]);
                }
            }
            ct[i][jin] = -ct[i][jin]/pivot;
            if (i > m)
            {
                k = i - m;
                f[k] = f[k] - d * (f[iout]);
            }
            else if (i <= m)
            {
                a[i] = a[i] - d * (f[iout]);
            }
        }
    }
    ct[mout][jin] = 1.0/pivot;
}
/*-------------------------------------------------------------------
 A Gauss-Jordan elimination step for problem E in LA_Energy()
-------------------------------------------------------------------*/
void LA_energy_gauss_jordn_e (int m, int n, int iout, int j0,
    tMatrix_R ct, tVector_I ir, tVector_I ik, tVector_R a)
{
    int i, j, k, mn, np1, nj0;
    tNumber_R d, pivot;

    mn = m + n;
    np1 = n + 1;
    nj0 = n + j0;
    pivot = ct[iout][nj0];
    for (j = np1; j <= mn; j++)
    {
        ct[iout][j] = ct[iout][j]/pivot;
    }
    a[iout] = a[iout]/pivot;

    for (i = 1; i <= m; i++)
    {
        if (i != iout)
        {
            d = ct[i][nj0];
            ct[i][nj0] = -ct[i][nj0]/pivot;
            a[i] = a[i] - d * (a[iout]);
            for (j = np1; j <= mn; j++)
            {
                if (j != nj0)
                {
                    ct[i][j] = ct[i][j] - d * (ct[iout][j]);
                }
            }
        }
    }
    ct[iout][nj0] = 1.0/pivot;

    k = ik[j0];
    ik[j0] = ir[iout];
    ir[iout] = k;
}
/*-------------------------------------------------------------------
 Determine the vector that enters the basis in LA_Energy()
-------------------------------------------------------------------*/
void LA_energy_vent (int *pIvo, int *pJin, int m, int n,
    tVector_I ir, tVector_R a)
{
    int i, k, mn;
    tNumber_R d, e, g, tpeps;

    mn = m + n;
    tpeps = 2.0 + EPS;
    g = 1.0;
    for (i = 1; i <= m; i++)
    {
        k = ir[i];
        if (k < 0) k = -k;
        if (k <= mn)
        {
            e = a[i];
            if (e < -EPS)
            {
                d = e;
                if (d < g)
                {
                    *pIvo = -1;
                    g = d;
                    *pJin = i;
                }
            }
            if (e >= tpeps)
            {
                d = 2.0 - e;
                if (d < g)
                {
                    *pIvo = 1;
                    g = d;
                    *pJin = i;
                }
            }
        }
    }
}
/*-------------------------------------------------------------------
 Determine the vector that leaves the basis in LA_Energy()
-------------------------------------------------------------------*/
void LA_energy_vleav (int *pItest, int jin, int m, int n,
    int *pIout, int j0, tMatrix_R ct, tVector_I ir, tVector_R a)
{
    int i, iri, mn, nj0;
    tNumber_R d, e, g, thmax;

    mn = m + n;
    nj0 = n + j0;
    thmax = 1.0/(EPS*EPS);
    for (i = 1; i <= m; i++)
    {
        if (i != jin)
        {
            iri = ir[i];
            if (iri < 0) iri = -iri;
            if (iri < mn) continue;
        }
        d = ct[i][nj0];
        e = d;
        if (e < 0.0)
        {
            e = -e;
        }
        if (e >= EPS)
        {
            g = a[i]/d;
            if (g >= 0.0 && g < thmax)
            {
                thmax = g;
                *pItest = 1;
                *pIout = i;
            }
        }
    }
}
/*-------------------------------------------------------------------
 Calculate the results of LA_Energy()
-------------------------------------------------------------------*/
void LA_energy_res (int m, int n, tMatrix_R ct, tVector_I ir,
    tVector_R a)
{
    int i, iri, mn;
    tNumber_R g;

    mn = m + n;
    for (i = 1; i <= m; i++)
    {
        ct[mn][i] = a[i];
    }
    for (i = 1; i <= m; i++)
    {
        g = ct[mn][i];
        iri = ir[i];
        if (iri > mn)
        {
            g = 1.0;
            iri = iri - mn;
            a[iri] = g;
            continue;
        }
        if (iri < -mn)
        {
            g = -1.0;
            iri = -mn - iri;
            a[iri] = g;
            continue;
        }
        if (iri > 0)
        {
            g = 1.0 - g;
            a[iri] = g;
            continue;
        }
        if (iri <= 0)
        {
            g = g - 1.0;
            iri = -iri;
            a[iri] = g;
            continue;
        }
    }
}
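
On return, pZ holds the minimum-energy objective z = 0.5*(a[1]^2 + ... + a[m]^2) computed by LA_energy_norm() above. The fragment below is only a minimal calling sketch, not one of the book's listings: the LA_energy() prototype shown is inferred from the parameter usage in the body above (the authoritative prototype accompanies the function's definition and Appendix C), and the small 2 by 4 test system is invented for illustration.

/* Hypothetical calling sketch for LA_energy(). The prototype is
   inferred from the body above; the 1-based, zero-filled storage
   mirrors the NIL (index 0 unused) convention of LA_Defs.h. */
#include <stdlib.h>
#include "LA_Defs.h"

eLaRc LA_energy (int m, int n, tMatrix_R ct, tVector_R f,
    int *pIrank, int *pIter, tVector_R a, tNumber_R *pZ);

void energy_sketch (void)
{
    int i, m = 4, n = 2, mn = m + n, irank = 0, iter = 0;
    tNumber_R z = 0.0;
    tVector_R f = (tVector_R) calloc (n + 1, sizeof (tNumber_R));
    tVector_R a = (tVector_R) calloc (m + 1, sizeof (tNumber_R));
    tMatrix_R ct = (tMatrix_R) calloc (mn + 1, sizeof (tVector_R));

    for (i = 0; i <= mn; i++)
    {
        ct[i] = (tVector_R) calloc (mn + 1, sizeof (tNumber_R));
    }

    /* c(transpose) occupies ct[1..m][1..n]; LA_energy_init() builds
       the rest of the tableau from it. Analytically, the
       minimum-norm solution of this system is
       a = (0.5, 0.5, 0.5, 0.5), giving z = 0.5. */
    ct[1][1] = 1.0;  ct[1][2] = 1.0;
    ct[2][1] = 1.0;  ct[2][2] = 2.0;
    ct[3][1] = 1.0;  ct[3][2] = 3.0;
    ct[4][1] = 1.0;  ct[4][2] = 4.0;
    f[1] = 2.0;
    f[2] = 5.0;

    if (LA_energy (m, n, ct, f, &irank, &iter, a, &z)
        >= LaRcSolutionFound)
    {
        PRN ("z = %g, rank = %d, iterations = %d\n", z, irank, iter);
    }

    for (i = 0; i <= mn; i++) free (ct[i]);
    free (ct);
    free (f);
    free (a);
}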
Appendices
Appendix A References
Appendix B Main Program
Appendix C Constants, Types and Function Prototypes
Appendix D Utilities and Common Functions
Appendix B
Main Program

#include <Windows.h>
#include <stdio.h>
#include <string.h>

#include "LA_Defs.h"

/*-------------------------------------------------------------------
 Driver prototypes
-------------------------------------------------------------------*/
void DR_Lone (void);
void DR_L1 (void);
void DR_Loneside(void);
void DR_Lonebv (void);
void DR_L1ineq (void);
void DR_L1pol (void);
void DR_L1pw1 (void);
void DR_L1pw2 (void);
void DR_Linf (void);
void DR_Linfside(void);
void DR_Linfbv (void);
void DR_Chineq (void);
void DR_Restch (void);
void DR_Strict (void);
void DR_Linfpw1 (void);
void DR_Linfpw2 (void);
void DR_Eluls (void);
void DR_Hhls (void);
void DR_Mls (void);
void DR_L2pw1 (void);
void DR_L2pw2 (void);
void DR_Fuel (void);
void DR_Tmfuel (void);
void DR_Effort (void);
void DR_Energy (void);

/*-------------------------------------------------------------------
 Local function prototypes
-------------------------------------------------------------------*/
void Run_all (void);
void prn_unrecognized (char *pszInput);
void prn_options (void);

/*-------------------------------------------------------------------
 Main program
-------------------------------------------------------------------*/
int main (int argc, char* argv[])
{
    DWORD deltaTick;
    DWORD startTick = GetTickCount ();

    if (argc < 2)
    {
        PRN ("Missing option\n");
        prn_options ();
    }
    else
    {
        strlwr (argv[1]);
        if      (strncmp (argv[1], "all"     , 3) == 0) Run_all ();
        else if (strncmp (argv[1], "lone"    , 5) == 0) DR_Lone ();
        else if (strncmp (argv[1], "l1"      , 5) == 0) DR_L1 ();
        else if (strncmp (argv[1], "l1ineq"  , 5) == 0) DR_L1ineq ();
        else if (strncmp (argv[1], "lonebv"  , 5) == 0) DR_Lonebv ();
        else if (strncmp (argv[1], "l1pol"   , 4) == 0) DR_L1pol ();
        else if (strncmp (argv[1], "l1pw1"   , 5) == 0) DR_L1pw1 ();
        else if (strncmp (argv[1], "l1pw2"   , 5) == 0) DR_L1pw2 ();
        else if (strncmp (argv[1], "linf"    , 5) == 0) DR_Linf ();
        else if (strncmp (argv[1], "loneside", 5) == 0) DR_Loneside();
        else if (strncmp (argv[1], "linfside", 5) == 0) DR_Linfside();
        else if (strncmp (argv[1], "linfbv"  , 5) == 0) DR_Linfbv ();
        else if (strncmp (argv[1], "chineq"  , 1) == 0) DR_Chineq ();
        else if (strncmp (argv[1], "restch"  , 1) == 0) DR_Restch ();
        else if (strncmp (argv[1], "strict"  , 1) == 0) DR_Strict ();
        else if (strncmp (argv[1], "linfpw1" , 7) == 0) DR_Linfpw1 ();
        else if (strncmp (argv[1], "linfpw2" , 7) == 0) DR_Linfpw2 ();
        else if (strncmp (argv[1], "eluls"   , 2) == 0) DR_Eluls ();
        else if (strncmp (argv[1], "hhls"    , 1) == 0) DR_Hhls ();
        else if (strncmp (argv[1], "l2pw1"   , 5) == 0) DR_L2pw1 ();
        else if (strncmp (argv[1], "l2pw2"   , 5) == 0) DR_L2pw2 ();
        else if (strncmp (argv[1], "mls"     , 1) == 0) DR_Mls ();
        else if (strncmp (argv[1], "fuel"    , 1) == 0) DR_Fuel ();
        else if (strncmp (argv[1], "tmfuel"  , 1) == 0) DR_Tmfuel ();
        else if (strncmp (argv[1], "effort"  , 2) == 0) DR_Effort ();
        else if (strncmp (argv[1], "energy"  , 2) == 0) DR_Energy ();
        else prn_unrecognized (argv[1]);
    }

    deltaTick = GetTickCount () - startTick;
    PRN ("Time to execute: %d msec", deltaTick);

    return 0;
}

/*-------------------------------------------------------------------
 Local functions
-------------------------------------------------------------------*/
void Run_all (void)
{
    DR_Lone ();
    DR_L1 ();
    DR_L1ineq ();
    DR_Lonebv ();
    DR_L1pol ();
    DR_L1pw1 ();
    DR_L1pw2 ();
    DR_Linf ();
    DR_Loneside();
    DR_Linfside();
    DR_Linfbv ();
    DR_Chineq ();
    DR_Restch ();
    DR_Strict ();
    DR_Linfpw1 ();
    DR_Linfpw2 ();
    DR_Eluls ();
    DR_Hhls ();
    DR_L2pw1 ();
    DR_L2pw2 ();
    DR_Mls ();
    DR_Fuel ();
    DR_Tmfuel ();
    DR_Effort ();
    DR_Energy ();
}

void prn_unrecognized (char *pszInput)
{
    PRN ("Unrecognized option \"%s\"\n", pszInput);
    prn_options ();
}

void prn_options (void)
{
    PRN ("Type \"LA option\": options are (case-insensitive)\n");
    PRN ("   All     : Run all algorithms\n");
    PRN ("   Lone    : L-ONE Approximation\n");
    PRN ("   L1      : L1 Approximation\n");
    PRN ("   L1ineq  : L-One Solution of Linear Inequalities\n");
    PRN ("   Lonebv  : Bounded Variables L-One Approximation\n");
    PRN ("   L1pol   : L1 Polygonal Approximation\n");
    PRN ("   L1pw1   : Linear L-One Piecewise Approximation (1)\n");
    PRN ("   L1pw2   : Linear L-One Piecewise Approximation (2)\n");
    PRN ("   Linf    : Chebyshev Approximation\n");
    PRN ("   Loneside: One-Sided L-One Approximation\n");
    PRN ("   Linfside: One-Sided Chebyshev Approximation\n");
    PRN ("   Linfbv  : Bounded Variables Chebyshev Approximation\n");
    PRN ("   Chineq  : Chebyshev Solution of Linear Inequalities\n");
    PRN ("   Restch  : Restricted Chebyshev Approximation\n");
    PRN ("   Strict  : Strict Chebyshev Approximation\n");
    PRN ("   Linfpw1 : Linear Chebyshev Piecewise Approximation (1)\n");
    PRN ("   Linfpw2 : Linear Chebyshev Piecewise Approximation (2)\n");
    PRN ("   Eluls   : Least Squares Approximation by "
         "Gauss LU Decomposition\n");
    PRN ("   Hhls    : Least Squares Approximation by "
         "Householder's Transformation\n");
    PRN ("   Mls     : Solution of Fredholm Integral Equation of the "
         "First Kind\n");
    PRN ("   L2pw1   : Least Squares Piecewise Approximation (1)\n");
    PRN ("   L2pw2   : Least Squares Piecewise Approximation (2)\n");
    PRN ("   Fuel    : L-ONE Solution of Underdetermined "
         "Linear Equations\n");
    PRN ("   Tmfuel  : Bounded/L-One-Bounded Solution of "
         "Underdetermined Linear Equations\n");
    PRN ("   Effort  : Chebyshev Solution of Underdetermined "
         "Linear Equations\n");
    PRN ("   Energy  : Bounded Least Squares Solution of "
         "Underdetermined Linear Equations\n");
    PRN ("NOTE: You can type the first N unique characters in the "
         "option\n");
    PRN ("      - e.g. \"LA t\" instead of \"LA Tmfuel\"\n");
    PRN ("            \"LA Loneb\" instead of \"LA Lonebv\"\n");
}
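
Because main() matches each option with strncmp() against only the prefix length listed in its dispatch table, a user needs to type only enough leading characters to make the option unambiguous. For example, assuming the executable is named LA as the help text implies:

    LA energy       runs DR_Energy ()
    LA en           also runs DR_Energy (); two characters separate
                    it from Effort ("ef") and Eluls ("el")
    LA t            runs DR_Tmfuel (); a single character suffices
    LA xyz          prints "Unrecognized option" and the list above

Note that a few comparison lengths deliberately cover the terminating NUL, e.g. strncmp (argv[1], "lone", 5), so the shorter of similarly named options ("lone" versus "lonebv" and "loneside") must be typed in full.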
Appendix C
Constants, Types and Function Prototypes

/*-------------------------------------------------------------------
DR_Defs.h
Constants and Type Definitions used by Drivers
-------------------------------------------------------------------*/
#ifndef _DR_DEFS_H_
#define _DR_DEFS_H_

/* Constants used only by Driver programs.
   Not visible to LA_ algorithm programs */

/* General Constants */
#define N_ROWS        25
#define M_COLS        250
#define KK_PIECES     50
#define MM_COLS       25
#define MMc_COLS      26
#define NN_ROWS       250

/* Energy + Fuel */
#define Ne_ROWS       25
#define Me_COLS       150

/* Eluls + Hhls */
#define NN2_ROWS      50
#define MM2_COLS      30
#define KK2_COLS      12
#endif
/*-------------------------------------------------------------------
LA_Defs.h
Constants and Type Definitions used by Linear Approximation
Library Functions
-------------------------------------------------------------------*/
#ifndef _LA_DEFS_H_
#define _LA_DEFS_H_

#define PRN printf

/* Used to identify unused (index 0) vector/matrix elements */
#define NIL 0.0

typedef int      *tVector_I;
typedef int      **tMatrix_I;

typedef double   tNumber_R;
typedef double   *tVector_R;
typedef double   **tMatrix_R;
/* Comment-out the following line for single precision */
#define DOUBLE_PRECISION

#ifdef DOUBLE_PRECISION
#define PREC 1.0E-16
#define EPS  1.0E-11
#else /* SINGLE PRECISION */
#define PREC 1.0E-06
#define EPS  1.0E-04
#endif

/*------------------------------------------------------------
Linear Approximation Return Code
Provides the result of execution of any algorithm
------------------------------------------------------------*/
typedef enum
{
    /*----------------------------------------
      Intermediate calculation did not fail
      - never returned externally
    ----------------------------------------*/
    LaRcOk = 0,
    /*----------------------------------------
      Algorithm has calculated results
    ----------------------------------------*/
    /* Solution vector was found */
    LaRcSolutionFound = 1,

    /* Solution vector is unique */
    LaRcSolutionUnique = 2,

    /* Solution vector is most-probably not unique */
    LaRcSolutionProbNotUnique = 3,

    /* Solution vector is definitely not unique due to rank
       deficiency */
    LaRcSolutionDefNotUniqueRD = 4,

    /*----------------------------------------
      Algorithm could not calculate results
    ----------------------------------------*/
    /* No feasible solution found, due to one of:
       (1) The problem itself has no solution, or
       (2) Matrix "c" of the system c*a = f is ill-conditioned */
    LaRcNoFeasibleSolution = -1,

    /* There is no solution because c*a = f is not consistent */
    LaRcInconsistentSystem = -2,

    /* The calculation terminated because the conditions
       el(i) <= f(i) <= ue(i), i = 1, ..., n were not satisfied */
    LaRcInconsistentConstraints = -3,

    /*----------------------------------------
      Errors
    ----------------------------------------*/
    /* Argument(s) out of legal bounds */
    LaRcErrBounds = -4,

    /* Null pointer argument */
    LaRcErrNullPtr = -5,

    /* Memory allocation error */
    LaRcErrAlloc = -6
} eLaRc;
/* Macros to simplify/abstract run-time error/return-code handling */
#define GOTO_CLEANUP_RC(r)     { rc = r; goto CLEANUP; }
#define VALIDATE(cond,r)       if (!(cond)) {GOTO_CLEANUP_RC (r);}
#define VALIDATE_BOUNDS(cond)  VALIDATE(cond,LaRcErrBounds)
#define VALIDATE_PTRS(cond)    VALIDATE(cond,LaRcErrNullPtr)
#define VALIDATE_ALLOC(cond)   VALIDATE(cond,LaRcErrAlloc)

#define IN_BOUNDS(v,lo,hi)     (((v) >= (lo)) && ((v) <= (hi)))

#endif
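These macros assume that the enclosing function declares a local rc variable and ends in a single CLEANUP label where all work space is released; LA_pw1_prn_rp1() in Appendix D follows exactly this shape. A minimal sketch of the idiom, with a hypothetical function name and the Appendix D allocators assumed:

/* Hypothetical sketch of the error-handling idiom: validate the
   arguments, allocate work space, compute, then free everything
   at the single CLEANUP label. */
eLaRc LA_example (int n, tVector_R f)
{
    tVector_R w = NULL;
    eLaRc rc = LaRcOk;

    VALIDATE_PTRS (f != NULL);   /* jumps to CLEANUP, LaRcErrNullPtr */
    VALIDATE_BOUNDS (n >= 1);    /* jumps to CLEANUP, LaRcErrBounds  */

    w = alloc_Vector_R (n);
    VALIDATE_ALLOC (w);          /* jumps to CLEANUP, LaRcErrAlloc   */

    /* ... computation; on success report a solution ... */
    rc = LaRcSolutionFound;

CLEANUP:
    free_Vector_R (w);
    return rc;
}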
/*-------------------------------------------------------------------
LA_Prototypes.h
Linear Approximation Library Function Prototypes
-------------------------------------------------------------------*/
#ifndef _LA_PROTOTYPES_H
#define _LA_PROTOTYPES_H

#include "LA_Utils.h"

eLaRc LA_L1 (int m, int n, tMatrix_R ct, tVector_R f, int *pIrank,
    int *pIter, tVector_R r, tVector_R a, tNumber_R *pZ);
void LA_l1_part_1 (int m, int n, tMatrix_R ct, tVector_I ic,
    tVector_I ir, tMatrix_R ginv, int *pIrank, int *pIter);
void LA_l1_part_2 (int n, tMatrix_R ct, tVector_R f, tVector_I ic,
    tVector_I ib, tVector_R uf, tVector_R vb, int *pIrank,
    tVector_R r);
void LA_l1_triang_matrix (int m, tVector_R p, int *pIrank);
void LA_l1_vleav (int *pIvo, int *pIout, tNumber_R *pXb,
    tVector_R xp, int *pIrank);
void LA_l1_alfa (int iout, int n, tMatrix_R ct, tVector_I ic,
    tVector_I ib, tVector_R xp, tVector_R t, tVector_R alfa,
    int *pIrank, tVector_R r);
void LA_l1_vent (int ivo, int *pJin, int *pItest, int n,
    tVector_I ic, tVector_R alfa, int *pIrank);
void LA_l1_skip_iters (int iciout, int icjin, tNumber_R xb,
    tNumber_R *pBxb, tNumber_R pivotn, tMatrix_R ct, tVector_I ib,
    tVector_R t, tVector_R vb, int *pIrank);
void LA_l1_update_p (int iout, int jin, tMatrix_R ct, tVector_I ic,
    tVector_R p, int *pIrank);
void LA_l1_update_ginv (int i, int n, tMatrix_R ct, tVector_I ir,
    tVector_R p, tMatrix_R ginv, tVector_R vb, int *pIrank);
void LA_l1_calcul_r (tNumber_R alpha, int n, tMatrix_R ct,
    tVector_R f, tVector_I ic, tVector_I ib, tVector_R uf,
    tVector_R xp, tVector_R t, tVector_R p, int *pIrank, tVector_R r);
void LA_l1_pslv (int id, int *pIrank, tVector_R p, tVector_R vb,
    tVector_R xp);
void LA_l1_gauss_jordn (int iout, int jin, int lj, int *pIrank,
    int n, tMatrix_R ct, tMatrix_R ginv, tVector_I ic);
eLaRc LA_l1_res (int *pIrank, int m, int n, tVector_R r,
    tMatrix_R ginv, tVector_R vb, tVector_I ir, tVector_R xp,
    tVector_R a, tNumber_R *pZ);
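The top-level entry points share this calling pattern: the caller fills ct (the transpose of the coefficient matrix of the system c*a = f) and the right-hand side f, and receives the rank, the iteration count, the residual vector r, the solution vector a and the norm z. A minimal hedged sketch of a caller for LA_L1(), using the Appendix D allocators; the dimensions, the data and the reading of ct as m rows by n columns are illustrative assumptions, not taken from the text:

/* Hypothetical driver sketch: solve a small L1 problem c*a = f.
   "ct" holds c transposed; the data values are made up. */
void example_l1 (void)
{
    int        m = 2, n = 4;       /* 2 unknowns, 4 equations */
    int        irank, iter, j;
    tNumber_R  z;
    tMatrix_R  ct = alloc_Matrix_R (m, n);
    tVector_R  f  = alloc_Vector_R (n);
    tVector_R  r  = alloc_Vector_R (n);
    tVector_R  a  = alloc_Vector_R (m);

    for (j = 1; j <= n; j++)       /* fit f ~ a1 + a2*j */
    {
        ct[1][j] = 1.0;
        ct[2][j] = (tNumber_R) j;
        f[j]     = 2.0 * j + 1.0;
    }

    prn_la_rc (LA_L1 (m, n, ct, f, &irank, &iter, r, a, &z));
    prn_Vector_R (a, m);           /* best L1 coefficients */

    free_Matrix_R (ct, m);
    free_Vector_R (f);
    free_Vector_R (r);
    free_Vector_R (a);
}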
eLaRc LA_Lone (int mCols, int nRows, tMatrix_R ct, tVector_R f,
    int *pIrank, int *pIter, tVector_R r, tVector_R a, tNumber_R *pZ);
void LA_lone_gauss_jordn (int ipart, int iout, int jin, int nRows,
    tMatrix_R ct, tVector_I icbas, tMatrix_R binv, tVector_R bv,
    tVector_I ibound, int *pIrank, tVector_R r);
void LA_lone_part_1 (int ipart, int nRows, tMatrix_R ct,
    tVector_I icbas, tVector_I irbas, tMatrix_R binv, tVector_R bv,
    tVector_I ibound, int *pIrank, int *pIter, tVector_R r);
void LA_lone_part_2 (int nRows, tMatrix_R ct, tVector_R f,
    tVector_I icbas, tVector_R bv, tVector_I ibound, int *pIrank,
    tVector_R r);
void LA_lone_th (int iout, int nRows, tMatrix_R ct, tVector_I icbas,
    tVector_I ibound, tVector_R th, int *pIrank, tVector_R r);
void LA_lone_vent (int ivo, int *pItest, int *pJin, int nRows,
    tVector_R th);
eLaRc LA_lone_res (int mCols, int nRows, tVector_R f,
    tVector_I icbas, tVector_I irbas, tMatrix_R binv, tVector_R bv,
    int *pIrank, tVector_R r, tVector_R a, tNumber_R *pZ);
void LA_lone_vleav (int *pIvo, int *pIout, int *pIrank,
    tNumber_R *pXb, tVector_R bv);
eLaRc LA_Loneside (int iside, int m, int n, tMatrix_R ct,
    tVector_R f, int *pIrank, int *pIter, tVector_R r, tVector_R a,
    tNumber_R *pZ);
void LA_loneside_basic_sol (int m, int n, tMatrix_R ct,
    tVector_I irbas, tVector_R bv);
void LA_loneside_part_1 (int m, int n, tMatrix_R ct,
    tVector_I icbas, tVector_I irbas, tMatrix_R binv, tVector_R bv,
    int *pIrank, int *pIter, tVector_R r);
void LA_loneside_gauss_jordn (int iout, int jin, int m, int n,
    tMatrix_R ct, tVector_I icbas, tMatrix_R binv, tVector_R bv,
    tVector_R r);
eLaRc LA_loneside_res (int iside, int m, int n, tVector_R f,
    tVector_I icbas, tVector_I irbas, tMatrix_R binv, tVector_R bv,
    int *pIrank, tVector_R r, tVector_R a, tNumber_R *pZ);
void LA_loneside_marg_costs (int m, int n, tMatrix_R ct,
    tVector_R f, tVector_I icbas, tVector_R r);
void LA_loneside_vent (int *pIvo, int *pJin, int m, int n,
    tVector_I icbas, tVector_R r);
void LA_loneside_vleav (int jin, int *pIout, int *pItest, int m,
    tMatrix_R ct, tVector_R bv);
eLaRc LA_Lonebv (int m, int n, tMatrix_R ct, tVector_R f,
    tVector_I icbas, tMatrix_R binv, tVector_R bv, tVector_I ibbv,
    tVector_R thbv, int *pIter, tVector_R rbv, tVector_R a,
    tNumber_R *pZ);
void LA_lonebv_part_1 (int m, int n, tMatrix_R ct, tVector_R f,
    tVector_I icbas, tMatrix_R binv, tVector_R bv, tVector_I ibbv,
    tVector_R rbv);
void LA_lonebv_vleav (int *pIvo, int *pIout, tNumber_R *pXb, int m,
    int n, tVector_I icbas, tVector_R bv);
void LA_lonebv_thbv (int ivo, int iout, int m, int n, tMatrix_R ct,
    tVector_I icbas, tMatrix_R binv, tVector_I ibbv, tVector_R thbv,
    tVector_R rbv);
void LA_lonebv_vent (int ivo, int *pJin, int *pItest, int m, int n,
    tVector_R thbv);
void LA_lonebv_update_bv (int iout, int jout, int jin,
    tNumber_R *pPivot, tNumber_R pivoto, tNumber_R *pXb, int m,
    int n, tMatrix_R ct, tMatrix_R binv, tVector_R bv,
    tVector_I ibbv);
void LA_lonebv_gauss_jordn (int jin, int iout, int m, int n,
    tMatrix_R ct, tVector_I icbas, tMatrix_R binv, tVector_R bv,
    tVector_I ibbv, tVector_R rbv);
void LA_lonebv_res (int m, int n, tVector_R f, tVector_I icbas,
    tMatrix_R binv, tVector_R rbv, tVector_R a, tNumber_R *pZ);
eLaRc LA_L1pol (int m, int n, tNumber_R enorm, tMatrix_R ct,
    tVector_R f, tMatrix_R ctn, tMatrix_R binv, tVector_I icbas,
    tVector_R r, tVector_R a, tVector_I ixl, tVector_R v,
    tVector_R rp, tMatrix_R ap, tVector_R zp, int *pNpiece);
void LA_l1pol_vertic_line (int je, int m, tMatrix_R ct,
    tMatrix_R ctn, tVector_R f, tVector_R r, tMatrix_R binv,
    tVector_R a, tVector_R v, tNumber_R *pPiv);
void LA_l1pol_residuals (int m, int n1, int n2, tMatrix_R ct,
    tVector_R f, tVector_I icbas, tMatrix_R binv, tVector_R r,
    tVector_R a, tNumber_R *pZ, int kase);
void LA_l1pol_gauss_jordn (int n1, int n2, int m, int iout, int jin,
    tMatrix_R ct, tMatrix_R binv, tVector_I icbas);
void LA_l1pol_res (int n1, int n2, int m, tVector_R f, tVector_R r,
    tVector_R a, tMatrix_R binv, tVector_I icbas, tNumber_R *pZ);
eLaRc LA_L1pw1 (int m, int n, tNumber_R enorm, int konect, tMatrix_R ct, tVector_R f, tVector_I ixl, tVector_I irankp, tMatrix_R rp1, tMatrix_R ap, tVector_R zp, int *pNpiece);
eLaRc LA_L1pw2 (int m, int n, int npiece, tMatrix_R ct, tVector_R f, tMatrix_R ap, tVector_R rp2, tVector_R zp, tVector_I ixl);
eLaRc LA_Linf (int m, int n, tMatrix_R ct, tVector_R f, int *pIrank,
    int *pIter, tVector_R r, tVector_R a, tNumber_R *pZ);
void LA_linf_init (int m, int n, tMatrix_R ct, tVector_I icbas,
    tVector_I irbas, tMatrix_R binv, tVector_I ibound, tVector_R r,
    tVector_R a);
void LA_linf_gauss_jordn (int iout, int jin, int kl, int m, int n,
    tMatrix_R ct, tVector_I icbas, tMatrix_R binv, tVector_R bv,
    int *pIter);
void LA_linf_detect_rank (int *pKl, int iout, int jin, int m, int n,
    tNumber_R piv, tMatrix_R ct, tVector_R f, tVector_I icbas,
    tVector_I irbas, tMatrix_R binv, tVector_R bv, tVector_I ibound,
    int *pIrank, int *pIter);
void LA_linf_part_1 (int *pKl, int m, int n, tMatrix_R ct,
    tVector_R f, tVector_I icbas, tVector_I irbas, tMatrix_R binv,
    tVector_R bv, tVector_I ibound, int *pIrank, int *pIter);
void LA_linf_part_2 (int kl, int m, int n, tMatrix_R ct,
    tVector_R f, tVector_I icbas, tMatrix_R binv, tVector_R bv,
    tVector_I ibound, int *pIter);
eLaRc LA_linf_res (int kl, int m, int n, tVector_R f,
    tVector_I icbas, tVector_I irbas, tMatrix_R binv, tVector_R bv,
    tVector_I ibound, tVector_R r, tVector_R a, int irank,
    tNumber_R *pZ);
void LA_linf_part_3 (int kl, int m, int n, tMatrix_R ct,
    tVector_R f, tVector_I icbas, tMatrix_R binv, tVector_R bv,
    tVector_I ibound, tVector_R r, tNumber_R *pZ);
void LA_linf_vent (int *pIvo, int *pJin, int kl, int m, int n,
    tVector_I icbas, tVector_R r, tNumber_R *pZ);
void LA_linf_vleav (int *pItest, int *pIout, int jin, int kl, int m,
    tMatrix_R ct, tVector_R bv);
eLaRc LA_Linfside (int iside, int m, int n, tMatrix_R ct,
    tVector_R f, int *pIrank, int *pIter, tVector_R r, tVector_R a,
    tNumber_R *pZ);
void LA_linfside_vleav (int *pItest, int *pIout, int jin, int kl,
    int m, tMatrix_R ct, tVector_R bv);
void LA_linfside_init (int iside, int m, int n, tMatrix_R ct,
    tVector_I irbas, tMatrix_R binv, tVector_I ibound, tVector_R a);
void LA_linfside_gauss_jordn (int iout, int jin, int kl, int m,
    int n, tMatrix_R ct, tVector_I icbas, tMatrix_R binv,
    tVector_R bv);
void LA_linfside_detect_rank (tNumber_R piv, int iout, int jin,
    int *pKl, int m, int n, tMatrix_R ct, tVector_R f,
    tVector_I icbas, tVector_I irbas, tMatrix_R binv, tVector_R bv,
    tVector_I ibound, int *pIrank, int *pIter);
void LA_linfside_part_2 (int kl, int m, int n, tMatrix_R ct,
    tVector_R f, tVector_I icbas, tMatrix_R binv, tVector_R bv,
    tVector_I ibound, int *pIter);
void LA_linfside_resid_norm (int kl, int m, int n, tMatrix_R ct,
    tVector_R f, tVector_I icbas, tMatrix_R binv, tVector_R bv,
    tVector_I ibound, tVector_R r, tNumber_R *pZ);
eLaRc LA_linfside_res (int iside, int m, int n, int kl, int iout,
    tVector_R f, tVector_I icbas, tVector_I irbas, tMatrix_R binv,
    tVector_R bv, tVector_I ibound, tVector_R r, tVector_R a,
    int irank, tNumber_R *pZ);
void LA_linfside_vent (int *pIvo, int *pJin, int kl, int m, int n,
    tVector_I icbas, tVector_R r, tNumber_R *pZ);
eLaRc LA_Linfbv (int m, int n, tMatrix_R ct, tVector_R f,
    tVector_I icbas, tVector_I irbas, tMatrix_R binv, tVector_R bv,
    tVector_I ibound, int *pIter, tVector_R r, tVector_R a,
    tNumber_R *pZ);
void LA_linfbv_init (int m, int n, tMatrix_R ct, tVector_I icbas,
    tVector_I irbas, tMatrix_R binv, tVector_I ibound, tVector_R r,
    tVector_R a);
void LA_linfbv_part_2 (int m, int n, tMatrix_R ct, tMatrix_R binv,
    tVector_R bv, tVector_I ibound);
void LA_linfbv_resid_norm (int m, int n, tMatrix_R ct, tVector_R f,
    tVector_I icbas, tVector_R bv, tVector_R r, tNumber_R *pZ);
void LA_linfbv_vent (int *pIvo, int *pJin, int m, int n,
    tVector_I icbas, tVector_R r, tNumber_R *pZ);
void LA_linfbv_vleav (int jin, int *pIout, int *pItest, int m,
    int n, tMatrix_R ct, tMatrix_R binv, tVector_R bv,
    tVector_I ibound);
void LA_linfbv_gauss_jordn (int m, int n, int iout, int jin,
    tMatrix_R ct, tVector_I icbas, tMatrix_R binv, tVector_R bv,
    tVector_I ibound);
void LA_linfbv_res (int m, int n, tVector_R f, tVector_I icbas,
    tVector_I irbas, tMatrix_R binv, tVector_I ibound, tVector_R r,
    tVector_R a, tNumber_R *pZ);
eLaRc LA_Mls (int m, int n, tMatrix_R a, tVector_R b, tNumber_R toler, tVector_R x, int *pIrank);
eLaRc LA_Restch (int m, int n, tMatrix_R ct, tVector_R f,
    tVector_R el, tVector_R ue, int *pIrank, int *pIter,
    tVector_R r, tVector_R a, tNumber_R *pZ);
eLaRc LA_restch_init (int m, int n, tMatrix_R ct, tVector_R f,
    tVector_R el, tVector_R ue, tVector_I ir, tVector_I ib,
    tVector_I ic, tVector_R g, tMatrix_R ginv, tVector_R a);
void LA_restch_part_1 (tNumber_R *pPiv, int iout, int *pJin, int n,
    tMatrix_R ct, tVector_I ic, tMatrix_R ginv);
void LA_restch_detect_rank (tNumber_R *pPiv, int *pIout, int *pJin,
    int n, tMatrix_R ct, tVector_I ic, tVector_I ir, tVector_I ib,
    tVector_R g, tVector_R v, tMatrix_R ginv, int *pIrank,
    int *pIter);
void LA_restch_part_2 (int n, tMatrix_R ct, tVector_I ic,
    tVector_I ib, tVector_R g, tVector_R v, tMatrix_R ginv,
    tVector_R vb, int *pIrank);
void LA_restch_init_p (int m, tVector_R p, int *pIrank);
void LA_restch_update_ginv (int iout, tMatrix_R ct, tVector_I ir,
    tVector_R p, tMatrix_R ginv, tVector_R vb, int *pIrank);
void LA_restch_vent (int *pIvo, int *pJin, tNumber_R *pGgg, int n,
    tVector_R f, tVector_R el, tVector_R ue, tVector_I ic,
    tVector_I ib, tVector_R zc, int *pIrank, tNumber_R *pZ);
void LA_restch_vleav (int jin, int *pIout, int *pItest,
    tMatrix_R ct, tVector_R u, tVector_R v, tVector_R w,
    tVector_R p, tMatrix_R ginv, int *pIrank);
void LA_restch_update_p (int iout, tVector_I ic, tVector_R p,
    int *pIrank);
eLaRc LA_restch_res (int m, tVector_R f, tVector_R el, tVector_R ue,
    tVector_I ic, tVector_I ib, tVector_R v, tVector_R p,
    tVector_R vb, int *pIrank, tVector_R r, tNumber_R *pZ);
void LA_pslvc (int id, int k, tVector_R p, tVector_R b, tVector_R x);
void LA_restch_marg_costs (int m, int n, tMatrix_R ct, tVector_R f,
    tVector_R el, tVector_R ue, tVector_I ic, tVector_I ir,
    tVector_I ib, tVector_R g, tVector_R u, tVector_R v,
    tVector_R w, tVector_R zc, tVector_R p, tMatrix_R ginv,
    int *pIrank, tVector_R r, tVector_R a, tNumber_R *pZ);
eLaRc LA_Strict (int m, int n, tMatrix_R c, tVector_R f, int *pKsys,
    int *pIrank, int *pIter, tVector_R r, tVector_R a, tVector_R z);
void LA_strict_init_1 (int m, int n, tMatrix_R binv, tVector_I ib,
    tVector_I iv, tVector_I irbas, tVector_R v, tVector_R fs,
    tVector_R a, tVector_R r, tVector_R z);
void LA_strict_init_2 (int kl, int m, int n, tMatrix_R c,
    tMatrix_R ct, tVector_I ib, tVector_I ibound, tVector_I irbas);
void LA_strict_part_1 (int *pKl, int kn, int *pMa, int m, int n,
    tMatrix_R c, tMatrix_R ct, tVector_R f, tMatrix_R binv,
    tVector_R bv, tVector_R v, tVector_R fs, tVector_I ib,
    tVector_I ih, tVector_I iv, tVector_I ibound, tVector_I icbas,
    tVector_I irbas, int *pIrank, int *pKsys, int *pIter,
    tVector_R r, tVector_R a, tVector_R z, eLaRc rc);
void LA_strict_map (int kl, int m, tVector_R f, tVector_R fs,
    tVector_I ih, tVector_I icbas, tVector_R r, tNumber_R zz);
void LA_strict_piv (int iout, int *pJin, int n, tNumber_R *pPiv,
    tMatrix_R ct, tVector_I ib);
void LA_strict_swapping (int kl, int iout, int m, int n,
    tMatrix_R ct, tMatrix_R binv, tVector_I ib, tVector_I icbas,
    tVector_I irbas);
void LA_strict_detect_rank (int kl, int *pJin, int m, int n,
    tMatrix_R ct, tVector_R f, tVector_I ib, tVector_I ibound,
    tVector_I icbas);
void LA_strict_part_2 (int kl, int m, int n, tMatrix_R ct,
    tVector_R f, tMatrix_R binv, tVector_R bv, tVector_I ib,
    tVector_I ibound, tVector_I icbas, int *pIter);
void LA_strict_part_3 (int kl, int m, int n, tMatrix_R c,
    tMatrix_R ct, tVector_R f, tMatrix_R binv, tVector_R bv,
    tVector_I ib, tVector_I ibound, tVector_I icbas, tNumber_R *pZz);
void LA_strict_vent (int *pIvo, int *pJin, int kl, int m, int n,
    tMatrix_R c, tVector_I ib, tVector_I icbas, tNumber_R zz);
void LA_strict_vleav (int kl, int *pIout, int jin, int *pItest,
    int m, tMatrix_R ct, tVector_R bv);
void LA_strict_gauss_jordn (int iout, int jin, int kl, int m, int n,
    tMatrix_R ct, tMatrix_R binv, tVector_R bv, tVector_I ib,
    tVector_I icbas);
void LA_strict_uniquens (int *pIvo, int *pIout, int kl, int *pKn,
    int *pLd, int m, int n, tMatrix_R c, tMatrix_R ct, tVector_R f,
    tVector_R bv, tVector_R v, tVector_I ib, tVector_I ih,
    tVector_I ibound, tVector_I icbas, tVector_R r, tNumber_R zz);
void LA_strict_eliminat_zz (int *pIout, int kl, int m, tMatrix_R c,
    tMatrix_R binv, tVector_R bv, tVector_I ib, tVector_I ih,
    tVector_I ibound, tVector_I icbas, int *pKsys, tVector_R r,
    tVector_R z, tNumber_R zz);
void LA_strict_gauss_jordn_binv (int iout, int kl, int m,
    tMatrix_R binv, tVector_I iv);
void LA_strict_permute_binv (int kj, int kl, int *pKm, int m,
    tMatrix_R binv, tVector_R bv, tVector_I ih, tVector_I icbas);
void LA_strict_calcul_a (int m, tVector_R fs, tMatrix_R binv,
    tVector_I iv, tVector_I icbas, tVector_I irbas, tVector_R a);
void LA_strict_reduce_sys (int kl, int kln, int *pMa, int m, int n,
    tMatrix_R c, tVector_R f, tMatrix_R binv, tVector_I ib,
    tVector_I iv, tVector_I irbas, tVector_R a);
void LA_strict_modify_binv (int kl, int kj, int m, tMatrix_R binv,
    tVector_R v, tVector_I iv);
void LA_strict_eliminate_ll (int kl, int kln, int *pMa, int ld,
    int m, int n, tMatrix_R c, tVector_R f, tVector_I ib,
    tVector_I iv, tVector_I ibound, tVector_I icbas,
    tVector_I irbas, tVector_R r, tNumber_R zz);
void LA_strict_calcul_r_3 (int kl, int kln, int m, tMatrix_R c,
    tVector_I ib, tVector_I ibound, tVector_I icbas, tVector_R r,
    tNumber_R zz);
void LA_strict_calcul_r_2 (int *pKn, int kln, int m, int n,
    tMatrix_R c, tVector_I ib, tVector_I ibound, tVector_I irbas,
    tVector_R r, tNumber_R zz);
void LA_strict_calcul_r_1 (int m, int n, tMatrix_R c, tVector_I ib,
    tVector_I ibound, tVector_R r, tNumber_R zz);
eLaRc LA_Linfpw1 (int m, int n, tNumber_R enorm, int konect, tMatrix_R ct, tVector_R f, tVector_I ixl, tVector_I irankp, tMatrix_R rp1, tMatrix_R ap, tVector_R zp, int *pNpiece);
eLaRc LA_Linfpw2 (int m, int n, int npiece, tMatrix_R ct, tVector_R f, tMatrix_R ap, tVector_R rp2, tVector_R zp, tVector_I ixl);
eLaRc LA_Eluls (int ientry, int n, int m, int irhs, tMatrix_R aa,
    tMatrix_R bs, tMatrix_R l, tMatrix_R u, tVector_R diag,
    tVector_R b, int *pIrank, tMatrix_R apsudo, tMatrix_R xs,
    tMatrix_R rres);
void LA_eluls_lu_decomp (int *pIout, int n, int m, tMatrix_R aa,
    tMatrix_R l, tMatrix_R u, tVector_R diag, tVector_I ir,
    tVector_I ic);
void LA_eluls_l_decomp (int n, int m, tMatrix_R tempp, tMatrix_R l,
    tMatrix_R u, int *pIrank);
void LA_eluls_u_decomp (int m, tMatrix_R tempp, tMatrix_R u,
    tVector_R diag, int *pIrank);
void LA_calcul_y (int *pIrank, int n, tMatrix_R t, tVector_R y,
    tVector_R w, tVector_R temp, tMatrix_R l, tVector_R b);
void LA_calcul_x (int *pIrank, int m, tMatrix_R v, tVector_R y,
    tVector_R w, tVector_R temp, tMatrix_R u, tVector_R diag,
    tVector_R x);
void LA_permute_x (int iwish, int ifirst, int isecnd, int n, int m,
    tMatrix_R aa, tMatrix_R bs, tVector_R w, tVector_R x,
    tVector_I ir, tVector_I ic, tMatrix_R apsudo, tMatrix_R xs,
    tMatrix_R rres);
void LA_chols (int irank, tMatrix_R tempp, tMatrix_R el);
eLaRc LA_Hhls (int ientry, int n, int m, int irhs, tMatrix_R aa,
    tMatrix_R bs, tMatrix_R q, tMatrix_R r, tMatrix_R p,
    int *pIrank, tMatrix_R apsudo, tMatrix_R xs, tMatrix_R res);
void LA_hhls_init (int n, int m, tMatrix_R aa, tMatrix_R q,
    tMatrix_R t, tVector_I ic);
void LA_hhls_calcul_q (int *pIout, int n, int m, tMatrix_R q,
    tMatrix_R t, tVector_I ic, tVector_R w, tVector_R dm);
void LA_hhls_init_p (int n, int m, int *pIrank, tMatrix_R r,
    tMatrix_R t, tMatrix_R p);
void LA_hhls_calcul_r_p (int *pIrank, int m, tMatrix_R r,
    tMatrix_R p, tVector_R w, tVector_R dm);
void LA_hhls_calcul_res (int iwish, int ifirst, int isecnd,
    int *pIrank, int n, int m, tMatrix_R aa, tMatrix_R bs,
    tMatrix_R xs, tMatrix_R apsudo, tMatrix_R q, tMatrix_R r,
    tMatrix_R p, tVector_I ic, tVector_R b, tVector_R x,
    tVector_R w, tVector_R dm, tMatrix_R res);
eLaRc LA_Hhlsro (int n, int m, tMatrix_R c, tVector_R f,
    tVector_R a, tMatrix_R r, tVector_R res, tMatrix_R t,
    tVector_R dm, tVector_R w, tNumber_R *pZ);
eLaRc LA_hhlsro_hh_t (int iend, int n, int m, tMatrix_R t,
    tVector_R dm, tVector_R w);
void LA_hhlsro_x_res (int n, int m, tMatrix_R c, tVector_R f,
    tVector_R a, tMatrix_R r, tVector_R res, tNumber_R *pRho);
eLaRc LA_L2pw1 (int n, int m, tNumber_R enorm, int konect, tMatrix_R c, tVector_R f, tVector_I ixl, tMatrix_R rp1, tMatrix_R ap, tVector_R zp, int *pNpiece);
eLaRc LA_L2pw2 (int m, int n, int npiece, tMatrix_R c, tVector_R f,
    tMatrix_R ap, tVector_R rp2, tVector_R zp, tVector_I ixl);
void LA_l2pw2_init (int k, int npiece, int n, int m, int *pIs,
    int *pIe, tMatrix_R c, tVector_R f, tMatrix_R cp, tVector_R fp,
    tVector_I ixl);
void LA_pw1_init (int *pNpiece, int is, int ie, int m, tMatrix_R rp1,
    tMatrix_R ap, tVector_R zp);
void LA_pw1_map (int m, int nu, tVector_R r, tVector_R a,
    tNumber_R z, tMatrix_R rp1, tMatrix_R ap, tVector_R zp,
    int *pNpiece);
eLaRc LA_pw1_prn_rp1 (int konect, int npiece, int n, tVector_I ixl,
    tMatrix_R rp1);
void LA_pw2_init (int k, int npiece, int m, int n, int *pIs,
    int *pIe, tMatrix_R ct, tVector_R f, tMatrix_R ctp,
    tVector_R fp, tVector_I ixl);
eLaRc LA_pw2_prn_rp2 (int npiece, int n, tVector_I ixl,
    tVector_R rp2);
eLaRc LA_Fuel (int m, int n, tMatrix_R c, tVector_R f, int *pIrank,
    int *pIter, tVector_R a, tNumber_R *pZ);
void LA_fuel_init (int m, int n, tVector_I icbas, tVector_I irbas,
    tVector_I ibound, tVector_R a);
eLaRc LA_fuel_part_1 (int iout, int *pJin, int *pKl, int m, int n,
    tMatrix_R c, tVector_R f, tVector_I icbas, tVector_I irbas,
    int *pIrank, int *pIter);
void LA_fuel_part_2 (int kl, int m, int n, tMatrix_R c,
    tVector_I icbas, tVector_R zc);
void LA_fuel_vent (int *pIvo, int *pJin, int kl, int m, int n,
    tVector_I icbas, tVector_R zc);
void LA_fuel_leav (int *pItest, int jin, int *pIout, int kl, int n,
    tMatrix_R c, tVector_R f);
void LA_fuel_gauss_jordn (int iout, int jin, int kl, int m, int n,
    tMatrix_R c, tVector_R f, tVector_I icbas);
eLaRc LA_fuel_res (int kl, int n, tVector_R f, tVector_I icbas,
    tVector_I ibound, tVector_R a, tNumber_R *pZ);
eLaRc LA_Tmfuel (int itf, int m, int n, tMatrix_R c, tVector_R f,
    int *pIrank, int *pIter, tVector_R a, tNumber_R *pZnorm);
void LA_tmfuel_init (int m, int n, tVector_I icbs, tVector_I irbs,
    tVector_I ibnd, tVector_I kbnd, tVector_I ib, tVector_R zc,
    tVector_R a);
eLaRc LA_tmfuel_part_1 (int iout, int *pJin, int m, tMatrix_R c,
    tVector_R f, tVector_I icbs, tVector_I irbs, int *pIrank,
    int *pIter);
void LA_tmfuel_marg_costs (int m, tMatrix_R c, tVector_R f,
    tVector_I icbs, tVector_I ibnd, tVector_I kbnd, tVector_R zc,
    int *pIrank);
void LA_tmfuel_vleav (int *pIrank, int *pIout, int *pIvo,
    tVector_R f, tNumber_R *pXb);
void LA_tmfuel_th_tu (int itf, int m, int ivo, int iout, int *pJout,
    tMatrix_R c, tVector_I icbs, tVector_I ibnd, tVector_I kbnd,
    tVector_R th, tVector_R tu, tVector_R zc);
void LA_tmfuel_vent (int m, int *pJin, int ivo, int *pItest,
    tVector_R th, tVector_R tu);
void LA_tmfuel_swap (int itf, int *pIrank, int *pJin, tMatrix_R c,
    tVector_I ibnd, tVector_I kbnd, tVector_I ib, tVector_R th,
    tVector_R tu, tVector_R zc);
void LA_tmfuel_cascade (int iout, int jin, int *pJout, tNumber_R xb,
    tNumber_R *pPivot, tNumber_R pivoto, tMatrix_R c, tVector_R f,
    tVector_I ibnd, int *pIrank);
void LA_tmfuel_test (int iout, int jin, int *pItest, tNumber_R *pXb,
    tNumber_R pivot, tVector_R f, tVector_I icbs, tVector_R th);
void LA_tmfuel_gauss_jordn (int iout, int jin, int *pIrank, int m,
    tMatrix_R c, tVector_R f);
eLaRc LA_tmfuel_res (int itf, int m, tVector_R f, tVector_I icbs,
    tVector_I ibnd, tVector_I kbnd, tVector_I ib, tVector_R zc,
    int *pIrank, tVector_R a, tNumber_R *pZnorm);
eLaRc LA_Effort (int m, int n, tMatrix_R ct, tVector_R f,
    int *pIrank, int *pIter, tVector_R a, tNumber_R *pZ);
void LA_effort_gauss_jordn (int iout, int jin, int m, int n,
    tNumber_R pivot, tMatrix_R ct, tVector_I ic, tVector_I nb);
eLaRc LA_effort_res (int m, int n, tMatrix_R ct, tVector_I ib,
    tVector_I ic, tVector_I ip, tVector_I nb, tVector_R zc,
    int *pIrank, tVector_R a, tNumber_R *pZ);
void LA_effort_init (int m, int n, tMatrix_R ct, tVector_I ib,
    tVector_I ic, tVector_I ip, tVector_I nb);
void LA_effort_marg_costs (int m, int n, tMatrix_R ct, tVector_R f,
    tVector_I ib, tVector_R zc);
void LA_effort_vent (int *pIvo, int *pJin, int n, tVector_I nb,
    tVector_R zc, tNumber_R *pZ);
void LA_effort_vleav (int *pItest, int jin, int *pIout, int m,
    tMatrix_R ct);
eLaRc LA_Energy (int m, int n, tMatrix_R ct, tVector_R f,
    int *pIrank, int *pIter, tVector_R a, tNumber_R *pZ);
void LA_energy_init (int m, int n, tMatrix_R ct, tVector_I ic,
    tVector_I ir, tVector_I ik, tVector_R a);
eLaRc LA_energy_phase_1 (int m, int n, tMatrix_R ct, tVector_R f,
    tVector_I ic, tVector_R a, int *pIrank, int *pIter);
void LA_energy_phase_2 (int m, int n, tMatrix_R ct, tVector_I ir,
    tVector_I ik, int *pIrank, tVector_R a);
void LA_energy_norm (int m, tVector_R a, tNumber_R *pZ);
void LA_energy_gauss_jordn_e0 (int iout, int m, int n, tMatrix_R ct,
    tVector_R f, tVector_R a);
void LA_energy_gauss_jordn_e (int m, int n, int iout, int nj0,
    tMatrix_R ct, tVector_I ir, tVector_I ik, tVector_R a);
void LA_energy_vent (int *pIvo, int *pJin, int m, int n,
    tVector_I ir, tVector_R a);
void LA_energy_vleav (int *pItest, int jin, int m, int n,
    int *pIout, int j0, tMatrix_R ct, tVector_I ir, tVector_R a);
void LA_energy_res (int m, int n, tMatrix_R ct, tVector_I ir,
    tVector_R a);

#endif
Appendix D
Utilities and Common Functions

/*-------------------------------------------------------------------
LA_Utils.h
Linear Approximation Utility Prototypes
-------------------------------------------------------------------*/
#ifndef _LA_UTILS_H_
#define _LA_UTILS_H_

#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "LA_Defs.h"

/* Function prototypes */
void        LA_check_rank_def    (int n, int irank);

void        prn_la_rc            (eLaRc rc);
void        prn_dr_bnr           (char *pszMsg);
void        prn_algo_bnr         (char *pszMsg);
void        prn_example_delim    (void);

tVector_I   alloc_Vector_I       (int nElems);
void        swap_elems_Vector_I  (tVector_I v, int elemA, int elemB);
void        prn_Vector_I         (int *v, int nElems);
void        free_Vector_I        (int *v);

tVector_R   alloc_Vector_R       (int nElems);
void        swap_elems_Vector_R  (tVector_R v, int elemA, int elemB);
void        prn_Vector_R         (tNumber_R *v, int nElems);
void        prn_Vector_R_nDec    (tNumber_R *v, int nElems, int nDec);
void        prn_Vector_R_exp     (tNumber_R *v, int nElems);
void        free_Vector_R        (tNumber_R *v);

tMatrix_R   alloc_Matrix_R       (int nRows, int mCols);
tMatrix_R   init_Matrix_R        (tNumber_R *c, int nRows, int mCols);
void        swap_rows_Matrix_R   (tMatrix_R m, int elemA, int elemB);
void        prn_Matrix_R         (tMatrix_R, int, int);
void        free_Matrix_R        (tMatrix_R, int nRows);
void        uninit_Matrix_R      (tNumber_R **m);
#endif

/*-------------------------------------------------------------------
LA_Utils.c
Linear Approximation Utility Functions
-------------------------------------------------------------------*/
#include <malloc.h>     /* malloc () */
#include <memory.h>     /* memcpy () */
#include <string.h>
#include "LA_Utils.h"

/*-------------------------------------------------------------------
Interpretation of results functions
-------------------------------------------------------------------*/
void LA_check_rank_def (int n, int irank)
{
    if (irank < n)
    {
        PRN ("System is rank deficient\n");
    }
}

/*-------------------------------------------------------------------
Print functions
-------------------------------------------------------------------*/
void prn_la_rc (eLaRc rc)
{
    PRN ("---------- RC = ");
    switch (rc)
    {
        case LaRcSolutionFound:
            PRN ("Solution Found");                     break;
        case LaRcSolutionUnique:
            PRN ("Solution is Unique");                 break;
        case LaRcSolutionProbNotUnique:
            PRN ("Solution is probably not Unique");    break;
        case LaRcSolutionDefNotUniqueRD:
            PRN ("Solution is definitely not Unique"
                 " due to Rank deficiency");            break;
        case LaRcNoFeasibleSolution:
            PRN ("No Feasible Solution");               break;
        case LaRcInconsistentSystem:
            PRN ("Inconsistent System");                break;
        case LaRcInconsistentConstraints:
            PRN ("Inconsistent Constraints");           break;
        case LaRcErrBounds:
            PRN ("ERROR: Argument out of bounds");      break;
        case LaRcErrNullPtr:
            PRN ("ERROR: NULL pointer argument");       break;
        case LaRcErrAlloc:
            PRN ("ERROR: Memory allocation failure");   break;
        case LaRcOk: /* Should never return this */
        default:
            PRN ("ERROR: Unknown");                     break;
    }
    PRN (" ----------\n");
}

void prn_dr_bnr (char *pszMsg)
{
    PRN ("\n=========================================================\n");
    PRN ("===== Executing Driver: %s", pszMsg);
    PRN ("\n=========================================================\n");
}

void prn_algo_bnr (char *pszMsg)
{
    PRN ("\n---------------------------------------------------------\n");
    PRN ("----- Executing Algorithm: %s", pszMsg);
    PRN ("\n---------------------------------------------------------\n");
}

void prn_example_delim (void)
{
    PRN ("----------\n");
}
/*-------------------------------------------------------------------
Vector Functions
-------------------------------------------------------------------*/
/* Allocate an integer vector */
tVector_I alloc_Vector_I (int nElem)
{
    int *v;

    if (nElem <= 0) return NULL;
    v = (int *)malloc (nElem * sizeof (int));
    if (!v) return NULL;
    memset (v, 0, nElem * sizeof (int));
    return v - 1;
}

void swap_elems_Vector_I (tVector_I v, int idxA, int idxB)
{
    int temp = v[idxA];
    v[idxA] = v[idxB];
    v[idxB] = temp;
}

void prn_Vector_I (int *v, int nElems)
{
    int i;
    for (i = 1; i <= (nElems - 1); i++) PRN ("%d\t", v[i]);
    PRN ("%d\n", v[nElems]);
}

void free_Vector_I (int *v)
{
    if (v == NULL) return;
    free ((char *) (v+1));
}

/* Allocate a real vector */
tNumber_R *alloc_Vector_R (int nElem)
{
    tNumber_R *v;

    if (nElem <= 0) return NULL;
    v = (tNumber_R *)malloc (nElem * sizeof (tNumber_R));
    if (!v) return NULL;
    memset (v, 0, nElem * sizeof (tNumber_R));
    return v - 1;
}

void swap_elems_Vector_R (tVector_R v, int idxA, int idxB)
{
    tNumber_R temp = v[idxA];
    v[idxA] = v[idxB];
    v[idxB] = temp;
}

void prn_Vector_R (tNumber_R *v, int nElems)
{
    int i;
    for (i = 1; i <= nElems; i++) PRN ("% 7.3f ", v[i]);
    PRN ("\n");
}

/* Print to nDec decimal places */
void prn_Vector_R_nDec (tNumber_R *v, int nElems, int nDec)
{
    char szFormat[] = "% 5.pf ";
    int i;

    szFormat[4] = (char)((nDec % 10) + '0');
    for (i = 1; i <= (nElems - 1); i++) PRN (szFormat, v[i]);
    PRN (szFormat, v[nElems]);
    PRN ("\n");
}

/* Print vector in exponent format */
void prn_Vector_R_exp (tNumber_R *v, int nElems)
{
    int i;
    for (i = 1; i <= (nElems - 1); i++) PRN ("% 1.3E\t", v[i]);
    PRN ("% 1.3E\n", v[nElems]);
}

void free_Vector_R (tNumber_R *v)
{
    if (v == NULL) return;
    free ((char *) (v+1));
}
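Note that the allocators return v - 1, so callers index elements 1 through nElem, Fortran-style, which matches the i = 1 loops used throughout the library (index 0 is reserved; see NIL in LA_Defs.h), and the free functions undo the offset before calling free(). A small sketch of this convention; the function name and data are illustrative only:

/* Sketch of the 1-based convention: valid indices run 1..n. */
void example_vectors (void)
{
    int        n = 3;
    tVector_R  v = alloc_Vector_R (n);   /* v[1] .. v[n] usable */

    v[1] = 1.5;  v[2] = -2.0;  v[3] = 0.25;
    swap_elems_Vector_R (v, 1, 3);
    prn_Vector_R (v, n);     /* 0.250 -2.000 1.500 in %7.3f form */
    free_Vector_R (v);       /* frees v+1, undoing the v - 1     */
}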
/*-------------------------------------------------------------------
Matrix Functions
-------------------------------------------------------------------*/
/* Allocate a real matrix */
tNumber_R **alloc_Matrix_R (int nRows, int mCols)
{
    int i;
    tNumber_R **m;

    if (nRows <= 0 || mCols <= 0) return NULL;
    m = (tNumber_R **)malloc (nRows * sizeof (tNumber_R *));
    if (!m) return NULL;
    m -= 1;
    for (i = 1; i <= nRows; i++)
    {
        m[i] = (tNumber_R *) malloc (mCols * sizeof (tNumber_R));
        if (!m[i]) return NULL;
        memset (m[i], 0, mCols * sizeof (tNumber_R));
        m[i] -= 1;
    }
    return m;
}

/* Allocate matrix and initialize it with contents of matrix "c" */
tNumber_R **init_Matrix_R (tNumber_R *c, int nRows, int mCols)
{
    int i, j;
    tNumber_R **m = (tNumber_R **) malloc (nRows * sizeof (tNumber_R *));

    if (!m) return NULL;
    m -= 1;
    for (i = 0, j = 1; i <= nRows - 1; i++, j++)
        m[j] = c + mCols * i - 1;
    return m;
}

void swap_rows_Matrix_R (tMatrix_R m, int idxA, int idxB)
{
    tVector_R temp = m[idxA];
    m[idxA] = m[idxB];
    m[idxB] = temp;
}

void prn_Matrix_R (tNumber_R **m, int nRows, int mCols)
{
    int i;
    for (i = 1; i <= nRows; i++) prn_Vector_R (m[i], mCols);
}

void free_Matrix_R (tNumber_R **m, int nRows)
{
    int i;

    if (m == NULL) return;
    for (i = nRows; i > 0; i--)
        if (m[i] != NULL) free ((char *)(m[i] + 1));
    free ((char *)(m + 1));
}

void uninit_Matrix_R (tNumber_R **m)
{
    free ((char *)(m + 1));
}
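init_Matrix_R() wraps an existing row-major array in 1-based row pointers without copying any data, which is why uninit_Matrix_R() releases only the row-pointer block. A hedged sketch of the intended pairing; the static data and function name are illustrative:

/* Sketch: wrap a static 2x3 array so it can be indexed as
   m[1..2][1..3]; only the row-pointer block is heap-allocated. */
void example_matrix_wrap (void)
{
    static tNumber_R data[2*3] =
    {
        1.0, 2.0, 3.0,
        4.0, 5.0, 6.0
    };
    tMatrix_R m = init_Matrix_R (data, 2, 3);

    PRN ("%5.1f\n", m[2][3]);   /* prints 6.0, the last element  */
    uninit_Matrix_R (m);        /* rows belong to "data": do not
                                   call free_Matrix_R here       */
}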
/*-------------------------------------------------------------------
LA_PwCmn.c
Commonly-Used Piecewise Linear Approximation Functions
-------------------------------------------------------------------*/
#include "LA_Prototypes.h"

/*-------------------------------------------------------------------
Initializing the data for "npiece" for piecewise approximation
programs
-------------------------------------------------------------------*/
void LA_pw1_init (int *pNpiece, int is, int ie, int m, tMatrix_R rp1,
    tMatrix_R ap, tVector_R zp)
{
    int i, j, ji;

    /* Initializing the data for "npiece" */
    zp[*pNpiece] = 0.0;
    for (j = is; j <= ie; j++)
    {
        ji = j - is + 1;
        rp1[*pNpiece][ji] = 0.0;
    }
    for (i = 1; i <= m; i++)
    {
        ap[*pNpiece][i] = 0.0;
    }
}

/*-------------------------------------------------------------------
Mapping the data for piecewise approximation programs LA_L1pw1()
and LA_Linfpw1()
-------------------------------------------------------------------*/
void LA_pw1_map (int m, int nu, tVector_R r, tVector_R a,
    tNumber_R z, tMatrix_R rp1, tMatrix_R ap, tVector_R zp,
    int *pNpiece)
{
    int j;

    zp[*pNpiece] = z;
    for (j = 1; j <= m; j++)
    {
        ap[*pNpiece][j] = a[j];
    }
    for (j = 1; j <= nu; j++)
    {
        rp1[*pNpiece][j] = r[j];
    }
}

/*-------------------------------------------------------------------
Initializing L1pw2 or LA_Linfpw2
-------------------------------------------------------------------*/
void LA_pw2_init (int k, int npiece, int m, int n, int *pIs,
    int *pIe, tMatrix_R ct, tVector_R f, tMatrix_R ctp,
    tVector_R fp, tVector_I ixl)
{
    int i, j, ji, jj, kp1;

    jj = n/npiece;
    ixl[k] = (k-1) * jj + 1;
    *pIs = ixl[k];
    if (k == npiece) *pIe = n;
    if (k != npiece)
    {
        kp1 = k + 1;
        ixl[kp1] = k * jj + 1;
        *pIe = ixl[kp1] - 1;
    }
    for (j = *pIs; j <= *pIe; j++)
    {
        ji = j - *pIs + 1;
        fp[ji] = f[j];
        for (i = 1; i <= m; i++)
        {
            ctp[i][ji] = ct[i][j];
        }
    }
}

/*-------------------------------------------------------------------
Printing the residual vectors for the "npiece" segments for
LA_L1pw1(), LA_Linfpw1() and LA_L2pw1()
-------------------------------------------------------------------*/
eLaRc LA_pw1_prn_rp1 (int konect, int npiece, int n, tVector_I ixl,
    tMatrix_R rp1)
{
    tVector_R   w = alloc_Vector_R (n);
    int         i, j, j1, j2, j3, jj = 0;
    eLaRc       rc = LaRcOk;

    VALIDATE_ALLOC (w);

    /* For connected piecewise approximation */
    if (konect == 1)
    {
        for (i = 1; i <= npiece-1; i++)
        {
            j1 = ixl[i];
            j2 = ixl[i + 1];
            for (j = j1; j <= j2; j++)
            {
                jj = j2 - j1 + 1;
                j3 = j - j1 + 1;
                w[j3] = rp1[i][j3];
            }
            prn_Vector_R (w, jj);
        }
        j1 = ixl[npiece];
        j2 = n;
        for (j = j1; j <= j2; j++)
        {
            jj = j2 - j1 + 1;
            j3 = j - j1 + 1;
            w[j3] = rp1[npiece][j3];
        }
        prn_Vector_R (w, jj);
    }

    /* For disconnected piecewise approximation */
    if (konect != 1)
    {
        for (i = 1; i <= npiece-1; i++)
        {
            j1 = ixl[i];
            j2 = ixl[i + 1] - 1;
            for (j = j1; j <= j2; j++)
            {
                jj = j2 - j1 + 1;
                j3 = j - j1 + 1;
                w[j3] = rp1[i][j3];
            }
            prn_Vector_R (w, jj);
        }
        j1 = ixl[npiece];
        j2 = n;
        for (j = j1; j <= j2; j++)
        {
            jj = j2 - j1 + 1;
            j3 = j - j1 + 1;
            w[j3] = rp1[npiece][j3];
        }
        prn_Vector_R (w, jj);
    }

CLEANUP:
    free_Vector_R (w);
    return rc;
}

/*-------------------------------------------------------------------
Printing the residual vectors for the "npiece" segments for
LA_L1pw2(), LA_Linfpw2() and LA_L2pw2()
-------------------------------------------------------------------*/
eLaRc LA_pw2_prn_rp2 (int npiece, int n, tVector_I ixl, tVector_R rp2)
{
    tVector_R   w = alloc_Vector_R (n);
    int         i, j, j1, j2, j3, jj = 0;
    eLaRc       rc = LaRcOk;

    VALIDATE_ALLOC (w);

    for (i = 1; i <= npiece-1; i++)
    {
        j1 = ixl[i];
        j2 = ixl[i + 1] - 1;
        for (j = j1; j <= j2; j++)
        {
            jj = j2 - j1 + 1;
            j3 = j - j1 + 1;
            w[j3] = rp2[j];
        }
        prn_Vector_R (w, jj);
    }
    j1 = ixl[npiece];
    j2 = n;
    for (j = j1; j <= j2; j++)
    {
        jj = j2 - j1 + 1;
        j3 = j - j1 + 1;
        w[j3] = rp2[j];
    }
    prn_Vector_R (w, jj);

CLEANUP:
    free_Vector_R (w);
    return rc;
}
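LA_pw2_init() divides the n points into npiece nearly equal segments by integer division, with the last piece absorbing the remainder. A small stand-alone sketch of that segmentation arithmetic, not part of the library itself:

/* Worked example: n = 10 points, npiece = 3 gives jj = 10/3 = 3,
   piece starts ixl = {1, 4, 7}, so the pieces are [1..3], [4..6]
   and [7..10] - the last piece takes the remainder. */
#include <stdio.h>

int main (void)
{
    int n = 10, npiece = 3, k;
    int jj = n / npiece;

    for (k = 1; k <= npiece; k++)
    {
        int is = (k - 1) * jj + 1;
        int ie = (k == npiece) ? n : k * jj;
        printf ("piece %d: [%d..%d]\n", k, is, ie);
    }
    return 0;
}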