Consulting Editor George A. Anastassiou Department of Mathematical Sciences University of Memphis
Sorin G. Gal
Global Smoothness and Shape Preserving Interpolation by Classical Operators
Birkhäuser Boston • Basel • Berlin
Sorin G. Gal Department of Mathematics University of Oradea Oradea, 410087 Romania
[email protected]
Cover design by Alex Gerasev.
AMS Subject Classifications: Primary: 26A15, 26A48, 26A51, 26B25, 41A05, 41A63, 42A15; Secondary: 41A17, 41A29, 41A35
Library of Congress Cataloging-in-Publication Data
Gal, Sorin G., 1953-
Global smoothness and shape preserving interpolation by classical operators / Sorin G. Gal.
p. cm.
Includes bibliographic references and index.
ISBN 0-8176-4387-7 (alk. paper)
1. Interpolation. 2. Functions. 3. Global analysis (Mathematics) I. Title.
QA281.G27 2005
511'.42-dc22
2005043604
ISBN-10 0-8176-4387-7 ISBN-13 978-0-8176-4387-4
Printed on acid-free paper.
© 2005 Birkhäuser Boston, Inc. All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Birkhäuser Boston, c/o Springer Science+Business Media Inc., New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.
Printed in the United States of America.
www.birkhauser.com
A tribute to Tiberiu Popoviciu
Contents
Preface ........................................................... ix

Introduction ...................................................... xi

1  Global Smoothness Preservation, Univariate Case ................. 1
   1.1  Negative Results .......................................... 1
   1.2  Global Smoothness Preservation by Some Lagrange,
        Hermite–Fejér, and Shepard-Type Operators ................. 4
   1.3  Algebraic Projection Operators and the Global Smoothness
        Preservation Property .................................... 26
   1.4  Global Smoothness Preservation by Jackson Trigonometric
        Interpolation Polynomials ................................ 30
   1.5  Trigonometric Projection Operators and the Global
        Smoothness Preservation Property ......................... 34
   1.6  Bibliographical Remarks and Open Problems ................. 37

2  Partial Shape Preservation, Univariate Case .................... 43
   2.1  Introduction ............................................. 43
   2.2  Hermite–Fejér and Grünwald-Type Polynomials ............... 45
   2.3  Shepard Operators ........................................ 59
   2.4  Bibliographical Remarks and Open Problems ................. 69

3  Global Smoothness Preservation, Bivariate Case ................. 71
   3.1  Introduction ............................................. 71
   3.2  Bivariate Hermite–Fejér Polynomials ....................... 72
   3.3  Bivariate Shepard Operators ............................... 75
   3.4  Bivariate Lagrange Polynomials ............................ 82
   3.5  Bibliographical Remarks and Open Problems ................. 87

4  Partial Shape Preservation, Bivariate Case ..................... 91
   4.1  Introduction ............................................. 91
   4.2  Bivariate Hermite–Fejér Polynomials ....................... 92
   4.3  Bivariate Shepard Operators ............................... 98
   4.4  Bibliographical Remarks and Open Problems ................ 128

A  Appendix: Graphs of Shepard Surfaces .......................... 129

References ....................................................... 139

Index ............................................................ 145
Preface
This monograph is devoted to two interesting properties in the interpolation of functions: the first deals with the Global Smoothness Preservation Property (GSPP) for the well-known classical interpolation operators of Lagrange, Grünwald, Hermite–Fejér and Shepard type, while the second treats the Shape Preservation Property (SPP) for the same interpolation operators. The examination of the two properties involves both the univariate and bivariate cases. An exception is the GSPP of complex Hermite–Fejér interpolation polynomials based on the roots of unity. An introduction to the GSPP of some classical interpolation operators in the univariate case was given in a short chapter (Chapter 6) of a monograph by Anastassiou–Gal [6]. In this work the univariate case is completed with many new results, and an entirely new chapter is devoted to the bivariate case. The study of the SPP of classical interpolation operators, in both the univariate and bivariate cases, appears here for the first time in book form. This monograph consists of the author's research over the past five years, with new concepts and results that have not been previously published. Many open problems suggested throughout may be of interest to different researchers. In general, the book may be used in various areas of mathematics (interpolation of functions, numerical analysis, approximation theory), as well as in computer-aided geometric design, data fitting, fluid mechanics and engineering. Additionally, the work may be used as supplementary material in graduate courses.
Acknowledgments. The basic methods of this book rely mainly on the joint papers I wrote with Professor József Szabados of the Alfréd Rényi Institute of Mathematics of the Hungarian Academy of Sciences in Budapest, whom I thank very much for his great contribution.
Also, I would like to thank Professor George Anastassiou of the Department of Mathematical Sciences, University of Memphis, TN, U.S.A., for our collaboration on the topic and for his support, and Professor Radu Trimbitas of "Babeş-Bolyai" University, Faculty of Mathematics and Informatics, Cluj-Napoca, Romania, whose very generous help made possible the graphs of the surfaces in the Appendix.
Finally, I would like to thank Ann Kostant, Executive Editor at Birkhäuser, for her support, and my typist, Georgeta Bonda of "Babeş-Bolyai" University, Cluj-Napoca, Romania, for doing a great job so punctually.

Oradea, September 2004
Sorin G. Gal
Introduction
The classical interpolation operators of Lagrange, Grünwald, Hermite–Fejér and Shepard, on various systems of nodes, were introduced in mathematics mainly for the purpose of approximation of functions. As a consequence, most papers on these operators deal with their convergence properties. However, in recent years it has been pointed out that many of these classical interpolation operators have other interesting properties too, like the (partial) preservation of the global smoothness and shape of the interpolated functions. Systematic results in these directions have been obtained by the author of this monograph in a series of papers written jointly with other researchers.

The partial global smoothness preservation property can be described as follows. We say that the sequence of interpolation operators Ln : C[a,b] → C[a,b], n ∈ N (in the sense that each Ln(f) coincides with f on a system of given nodes), partially preserves the global smoothness of f if, for f ∈ LipM α, 0 < α ≤ 1, there exists 0 < β < α, independent of f and n, such that

ω1(Ln(f); h) ≤ Ch^β,  for all h ∈ [0,1), n ∈ N

(i.e., Ln(f) ∈ LipC β for all n ∈ N), where C > 0 is independent of n but may depend on f. Here

ω1(f; δ) = sup{|f(x+h) − f(x)|; 0 ≤ h ≤ δ, x, x+h ∈ [a,b]}

is the uniform modulus of continuity, and of course it can be replaced by other kinds of moduli of continuity too. It seems that in general (excepting, for example, some particular Shepard operators), the interpolation conditions do not permit a complete global smoothness preservation property, i.e., α = β, as is the case, for example, for Bernstein-type approximation operators.

An introduction to the global smoothness preservation property (GSPP) in the univariate case was made in the recent book by Anastassiou–Gal [6]. In the present monograph, many new results are presented for the univariate case. For example, if τ(f; h) ≤ Mh^α for all h ∈ [0,1), then there exists 0 < β ≤ α such that τ(Ln(f); h)_{L1} ≤ Ch^β for all n ∈ N, h ∈ [0,1), where τ(f; h)_{L1} is the averaged
L1-modulus of continuity and Ln(f) is a certain interpolation operator. Also, new results on univariate Shepard operators on infinite intervals and on univariate Shepard–Lagrange operators on [0,1] are included. In addition, bivariate cases of these classical operators are examined with respect to the following three kinds of moduli of continuity:

• the bivariate uniform modulus of continuity, given by

  ω1(f; δ, η) = sup{|f(x+h, y+k) − f(x,y)|; 0 ≤ h ≤ δ, 0 ≤ k ≤ η, (x,y), (x+h, y+k) ∈ I};

• the Bögel modulus of continuity, given by

  ωB(f; δ, η) = sup{|Δ_{h,k} f(x,y)|; 0 ≤ h ≤ δ, 0 ≤ k ≤ η, (x,y), (x+h, y+k) ∈ I},

  where Δ_{h,k} f(x,y) = f(x+h, y+k) − f(x+h, y) − f(x, y+k) + f(x,y);

• the bivariate Euclidean modulus of continuity, given by

  ωE(f; ρ) = sup{|f(x+h, y+k) − f(x,y)|; 0 ≤ h, 0 ≤ k, (h² + k²)^{1/2} ≤ ρ, (x,y), (x+h, y+k) ∈ I};

where f : I → R, with I a bidimensional interval.
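In computations these moduli are often estimated over a grid. The following Python sketch (our illustration, not part of the text; the grid supremum is only a lower approximation of the true supremum) evaluates ω1 and ωB for a function on I = [0,1]². Note that the Bögel modulus vanishes identically for any function of the form f(x,y) = g(x) + h(y), since the mixed difference Δ_{h,k} cancels such terms.

```python
import numpy as np

def bivariate_moduli(f, delta, eta, m=25, steps=6):
    """Grid approximations of the bivariate uniform modulus omega1 and the
    Bogel modulus omegaB of f on I = [0, 1]^2."""
    xs = np.linspace(0.0, 1.0, m)
    w1 = wB = 0.0
    for x in xs:
        for y in xs:
            for h in np.linspace(0.0, delta, steps):
                for k in np.linspace(0.0, eta, steps):
                    if x + h > 1.0 or y + k > 1.0:
                        continue  # (x+h, y+k) must stay inside I
                    w1 = max(w1, abs(f(x + h, y + k) - f(x, y)))
                    # mixed difference defining the Bogel modulus
                    mixed = f(x + h, y + k) - f(x + h, y) - f(x, y + k) + f(x, y)
                    wB = max(wB, abs(mixed))
    return w1, wB

f = lambda x, y: abs(x - 0.5) + abs(y - 0.5)   # a sum of univariate functions
w1, wB = bivariate_moduli(f, 0.1, 0.1)
```

For this separable f the grid supremum of the uniform modulus is 0.2 (an increment of 0.1 in each variable), while the Bögel modulus is numerically zero.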
The case of complex Hermite–Fejér interpolation polynomials on the roots of unity is studied as well. On the other hand, the partial shape preservation property can be described as follows. Let L : C[a,b] → C[a,b] be an interpolation operator. We say that L partially preserves the shape of f if, for any monotone (convex) function f(x) on [a,b], there exist some points ξi ∈ [a,b] (which may be independent of f) such that L(f)(x) is of the same monotonicity (convexity, respectively) in some neighborhoods V(ξi) of the ξi. The first results in this direction were obtained in 1960–1962 by T. Popoviciu, for the monotonicity case and Hermite–Fejér polynomials on some special Jacobi nodes. His results were of the qualitative type, i.e., only the existence of these neighborhoods V(ξi) was proved, without any estimate of their lengths. In this monograph, quantitative versions of T. Popoviciu's results, as well as new qualitative and quantitative results for other classical interpolation operators, are presented. Bivariate variants of these operators, with respect to various natural concepts of bivariate monotonicity and convexity, are also studied. The results in this monograph, especially those in the bivariate case, have potential applications to data fitting, fluid dynamics, computer-aided geometric design, curves,
and surfaces. At the end of the book we present an Appendix with 20 graphs of various bivariate Shepard-type operators attached to some particular functions.

Sorin G. Gal
Department of Mathematics
University of Oradea
Romania
September 2004
1 Global Smoothness Preservation, Univariate Case
In this chapter we present results concerning the global smoothness preservation by some classical interpolation operators.
1.1 Negative Results

Kratz–Stadtmüller [66] prove the property of global smoothness preservation for discrete sequences {Ln(f)}n of the form

  Ln(f)(x) = Σ_{k=1}^n f(x_{k,n}) p_{k,n}(x),  x ∈ [−1,1], f ∈ C[−1,1],   (1.1)

satisfying the conditions

  Σ_{k=1}^n p_{k,n}(x) ≡ s_n, which is independent of x ∈ [−1,1],   (1.2)

  p_{k,n} ∈ C¹[−1,1] and Σ_{k=1}^n |p_{k,n}(x)| ≤ C1,  x ∈ [−1,1],   (1.3)

  Σ_{k=1}^n |(x − x_{k,n}) p′_{k,n}(x)| ≤ C2,  x ∈ [−1,1],   (1.4)
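As a concrete illustration (ours, not from the text), the piecewise-linear interpolant on equidistant nodes is of the form (1.1) with hat functions as the p_{k,n}; it satisfies (1.2) with s_n = 1, (1.3) with C1 = 1, and (1.4) with C2 = 1, apart from the C¹-smoothness requirement in (1.3), which hat functions violate at the nodes. The Python sketch below checks the three bounds numerically at points between the nodes.

```python
import numpy as np

def hat_basis(nodes, x):
    """Values and derivatives of the piecewise-linear hat functions at x
    (nodes in increasing order; x strictly between the endpoints)."""
    n = len(nodes)
    vals = np.zeros(n)
    ders = np.zeros(n)
    # locate the subinterval [nodes[i], nodes[i+1]] containing x
    i = np.searchsorted(nodes, x) - 1
    i = min(max(i, 0), n - 2)
    hstep = nodes[i + 1] - nodes[i]
    vals[i] = (nodes[i + 1] - x) / hstep
    vals[i + 1] = (x - nodes[i]) / hstep
    ders[i] = -1.0 / hstep
    ders[i + 1] = 1.0 / hstep
    return vals, ders

nodes = np.linspace(-1.0, 1.0, 11)
for x in np.linspace(-0.999, 0.999, 200):
    vals, ders = hat_basis(nodes, x)
    s = vals.sum()                          # (1.2): identically 1
    c1 = np.abs(vals).sum()                 # (1.3): bounded by C1 = 1
    c2 = np.abs((x - nodes) * ders).sum()   # (1.4): bounded by C2 = 1
    assert abs(s - 1.0) < 1e-12 and c1 <= 1.0 + 1e-12 and c2 <= 1.0 + 1e-12
```

On each subinterval only two hat functions are active, and the two terms of the sum in (1.4) add up to exactly 1, which is why the bound C2 = 1 is attained.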
for some constants C1, C2. Since the Hermite–Fejér and the Lagrange polynomials are of type (1.1), it is natural to ask whether (1.2)–(1.4) hold for these polynomials. Unfortunately, this is not so for Lagrange interpolation, because of the following:

Theorem 1.1.1. Let M = {Mn}_{n∈N} (Mn = {x_{k,n}}_{k=1}^n) be an arbitrary triangular matrix of interpolation nodes in [−1,1] (i.e., −1 ≤ x_{n,n} < x_{n−1,n} < ... < x_{1,n} ≤ 1, n ∈ N) and let l_{k,n}(x) denote the fundamental polynomials of Lagrange interpolation on Mn. Then we have

  1 ≤ liminf_{n→∞} (1/n) inf_{Mn} max_{|x|≤1} Σ_{k=1}^n |x − x_{k,n}|·|l′_{k,n}(x)|
    ≤ limsup_{n→∞} (1/n) inf_{Mn} max_{|x|≤1} Σ_{k=1}^n |x − x_{k,n}|·|l′_{k,n}(x)| ≤ 2.
Proof. Let us denote x_{k,n} = x_k, k = 1,...,n, and

  A_n(x; Mn) = Σ_{k=1}^n |x − x_k|·|l′_{k,n}(x)|,

where

  l_{k,n}(x) = ω_n(x) / (ω′_n(x_k)(x − x_k)),  k = 1,...,n,  ω_n(x) = Π_{k=1}^n (x − x_k).

Consider an index j ∈ {1,2,...,n} such that

  |ω′_n(x_j)| = max_{1≤k≤n} |ω′_n(x_k)|.

Then we get

  A_n(x_j; Mn) = |ω′_n(x_j)| Σ_{k≠j} 1/|ω′_n(x_k)| ≥ n − 1,

which proves the first inequality in the statement of the theorem.

To prove the second inequality, let us choose

  ω_n(x) = (1/2)[T_{n−2}(x) − T_n(x)] = sin t sin((n−1)t),  x = cos t,

where T_n(x) = cos(n arccos x) is the Chebyshev polynomial of degree n. Then x_k = cos t_k, t_k = (k−1)π/(n−1), k = 1,...,n, and an easy calculation yields

  |ω′_n(x_k)| = n − 1 for 2 ≤ k ≤ n − 1,  |ω′_n(x_k)| = 2n − 2 for k = 1 or n,

and

  max_{|x|≤1} |ω_n(x)| ≤ 1,  max_{|x|≤1} |ω′_n(x)| ≤ 2n − 2.

Thus, denoting by j an index for which |x − x_j| = min_{1≤k≤n} |x − x_k|, we obtain

  A_n(x; Mn) ≤ |ω′_n(x)| Σ_{k=1}^n 1/|ω′_n(x_k)| + Σ_{k=1}^n |ω_n(x)| / (|ω′_n(x_k)|·|x − x_k|)
    ≤ (2n − 2)[(n − 2)/(n − 1) + 2/(2n − 2)] + 2 + Σ_{k≠j} sin t / ((n − 1)|cos t − cos t_k|)
    ≤ 2n + (1/(n − 1)) Σ_{k≠j} 1 / (|sin((t − t_k)/2)|·|sin((t + t_k)/2)|)
    ≤ 2n + O( Σ_{k≠j} 1/|j − k| ) = 2n + O(log n),

which completes the proof.
Remarks. (1) If {Ln(f)}n are the classical Lagrange polynomials on any system of nodes, then it is known that (1.3) does not hold. As a conclusion, it seems that for classical interpolation polynomials the three conditions (1.2)–(1.4) cannot all be satisfied.

(2) The shortcoming in Remark (1) above can be removed as follows. Let Ln(f)(x) be of the form (1.1), with the p_{k,n}(x) satisfying (1.2) and (1.4). Then f ∈ LipM(1;[−1,1]) implies Ln(f) ∈ Lip_{C2 M}(1;[−1,1]) for all n ∈ N. Indeed, by the mean value theorem (and since Σ_{k=1}^n p′_{k,n} ≡ 0, which follows from (1.2)) we have

  |Ln(f)(x) − Ln(f)(y)| = |x − y|·|Σ_{k=1}^n f(x_{k,n}) p′_{k,n}(ξ_{x,y,n})|
    = |x − y|·|Σ_{k=1}^n [f(x_{k,n}) − f(ξ_{x,y,n})] p′_{k,n}(ξ_{x,y,n})|
    ≤ |x − y|·M Σ_{k=1}^n |x_{k,n} − ξ_{x,y,n}|·|p′_{k,n}(ξ_{x,y,n})| ≤ C2 M|x − y|.

(3) If {Ln(f)}n are the Hermite–Fejér polynomials based on the Chebyshev nodes of the first kind, then obviously (1.2) and (1.3) hold, with s_n ≡ 1 and C1 = 1. But (1.4) cannot hold: otherwise, by Kratz–Stadtmüller [66], {Ln(f)}n would have the property of global smoothness preservation, which by Anastassiou–Cottin–Gonska [5] is not the case.

(4) Let x_{k,n} = −1 + 2(k−1)/(n−1), k = 1,...,n, be equidistant nodes in [−1,1] and let H_{2n−1}(f)(x) be the Hermite–Fejér interpolation polynomials on these nodes. Berman [20] proved that for f(x) = x, x ∈ [−1,1], the sequence {H_{2n−1}(f)(x)}n is unboundedly divergent for any 0 < |x| < 1. Hence this sequence does not have the property of partial preservation of global smoothness. Indeed, if f ∈ Lip1(1;[−1,1]) and if we suppose that there exist 0 < α < 1 and M > 0 such that H_{2n−1}(f) ∈ LipM(α;[−1,1]) for all n ∈ N, then

  |H_{2n−1}(f)(x) − H_{2n−1}(f)(y)| ≤ M|x − y|^α,  for all x, y ∈ [−1,1], n ∈ N.

Taking x = −1 and 0 < |y| < 1 and letting n → ∞ in the above inequality, we get a contradiction.
1.2 Global Smoothness Preservation by Some Lagrange, Hermite–Fejér, and Shepard-Type Operators

Let x_k = cos((2k−1)π/(2n)), k = 1,...,n, be the roots of the Chebyshev polynomial T_n(x), and let

  H_n(f)(x) = Σ_{k=1}^n f(x_k)(1 − x x_k) T_n²(x) / (n²(x − x_k)²)

be the Hermite–Fejér polynomial of an f ∈ C[−1,1] based on these roots. If we consider the Jackson interpolation trigonometric polynomials J_n(f)(x) (see Section 1.4), then for f ∈ C[−1,1], denoting F(t) = f(cos t), it is known that (see, e.g., Szabados [95], p. 406)

  H_n(f)(x) = J_{2n−1}(F)(t),  t = arccos x.

Now, if 0 < α < 1 and f ∈ LipM(α;[−1,1]), then J_{2n−1}(F) ∈ LipC α (see Theorem 1.4.2), which can be written as (x = cos u, y = cos v)

  |H_n(f)(x) − H_n(f)(y)| = |J_{2n−1}(F)(u) − J_{2n−1}(F)(v)| ≤ C|u − v|^α
    = C|arccos x − arccos y|^α ≤ C(π/√2)^α |x − y|^{α/2},  for all x, y ∈ [−1,1],

since arccos x ∈ Lip_{π/√2}(1/2;[−1,1]) (see, e.g., Cheney [22], Problem 5, p. 88), which means that H_n(f) ∈ Lip_{C'}(α/2;[−1,1]) for all n ∈ N, with C' = C(π/√2)^α.

If α = 1, in the same way we get

  ω1(H_n(f); h) ≤ C(h log(1/h))^{1/2},  n ∈ N, h ∈ (0,1).

As a conclusion, we can say that {H_n(f)}n has the property of partial preservation of the global smoothness of f. However, by a direct method we will improve the above considerations about {H_n(f)}n.
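The operator H_n is easy to evaluate directly from its defining formula. The following Python sketch (illustrative only, not part of the text) computes H_n(f) pointwise and checks two basic facts: the interpolation property H_n(f)(x_k) = f(x_k), and the reproduction of constants (equivalently, that the fundamental functions A_k(x) = (1 − x x_k)T_n²(x)/(n²(x − x_k)²) sum to 1).

```python
import numpy as np

def hermite_fejer(f, n):
    """Hermite-Fejer interpolant H_n(f) on the Chebyshev roots
    x_k = cos((2k-1)pi/(2n)), k = 1..n; returns a callable."""
    k = np.arange(1, n + 1)
    xk = np.cos((2 * k - 1) * np.pi / (2 * n))
    fk = np.array([f(x) for x in xk])

    def Hn(x):
        d = x - xk
        j = np.argmin(np.abs(d))
        if abs(d[j]) < 1e-13:      # the formula is 0/0 at a node; H_n interpolates there
            return float(fk[j])
        Tn = np.cos(n * np.arccos(np.clip(x, -1.0, 1.0)))
        A = (1.0 - x * xk) * Tn**2 / (n**2 * d**2)   # fundamental functions A_k(x)
        return float(A @ fk)

    return Hn

H = hermite_fejer(abs, 16)
node = np.cos(np.pi / 32)                  # x_1 for n = 16
interp_err = abs(H(node) - abs(node))      # interpolation at the nodes
Hc = hermite_fejer(lambda x: 1.0, 16)
const_err = abs(Hc(0.37) - 1.0)            # H_n reproduces constants
```

Both checks come out at machine-precision level, consistent with the classical facts that H_n(f)(x_k) = f(x_k) and Σ_k A_k(x) ≡ 1.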
Theorem 1.2.1. For any f ∈ C[−1,1], h > 0 and n ∈ N we have

  ω1(H_n(f); h) = min{ O( hn Σ_{k=1}^n ω1(f; 1/k²) ), O( (1/n) Σ_{k=1}^n ω1(f; 1/k) ) + ω1(f; h) },
where the constants in "O" are independent of f, n and h.

Proof. First we obtain an upper estimate for |H′_n(f)(x)|. Let x ∈ [−1,1] be fixed, let the index j be defined by |x − x_j| = min{|x − x_k|; 1 ≤ k ≤ n}, and denote

  A_k(x) = (1 − x x_k) T_n²(x) / (n²(x − x_k)²).

We have

  A′_k(x) = −x_k T_n²(x)/(n²(x − x_k)²) + (1 − x x_k)·2T_n(x)T′_n(x)/(n²(x − x_k)²)
            − (1 − x x_k)·2T_n²(x)/(n²(x − x_k)³),  k = 1,...,n,

which implies (by 1 − x x_k = 1 − x² + x(x − x_k), |T_n(x)| ≤ 1, |x_k| ≤ 1)

  |A′_k(x)| ≤ (1/n²)[ 3/(x − x_k)² + 2(1 − x²)|T′_n(x)|/(x − x_k)²
              + 2(1 − x²)/|x − x_k|³ + 2|T′_n(x)|/|x − x_k| ],  k = 1,...,n.

For simplicity, all the constants (independent of f, n, and h) which appear will be denoted by C. Since

  H′_n(f)(x) = Σ_{k=1}^n f(x_k)A′_k(x) = Σ_{k=1}^n [f(x_k) − f(x)]A′_k(x),

we obtain

  |H′_n(f)(x)| ≤ (C/n²) Σ_{k≠j} ω1(f; |x − x_k|)[ 3/(x − x_k)² + 2(1 − x²)|T′_n(x)|/(x − x_k)²
                 + 2(1 − x²)/|x − x_k|³ + 2|T′_n(x)|/|x − x_k| ] + C ω1(f; |x − x_j|)|A′_j(x)|.   (1.5)

The following known relations (see, e.g., Szabados–Vértesi [100], p. 282) will be frequently used:

  |x − x_j| ≤ Cj/n²,  n√(1 − x²) ∼ j,  |x − x_k| ∼ |j² − k²|/n²,  k ≠ j.   (1.6)

Now by (1.6) and by the combined Bernstein–Markov inequality, we get

  |A′_j(x)| ≤ n²‖A_j‖ / (n√(1 − x²) + 1) ≤ Cn²/j,

and

  ω1(f; |x − x_j|)|A′_j(x)| ≤ C ω1(f; j/n²)·(n²/j) ≤ C ω1(f; 1/n²)·n².

Also, by (1.6) we obtain (using also the inequality ω1(f; T)/T ≤ 2ω1(f; t)/t, for t ≤ T)

  Σ_{k≠j} ω1(f; |x − x_k|)/(x − x_k)² ≤ (Cn²/j) Σ_{k≠j} ω1(f; |x − x_k|)/|x − x_k|
    ≤ (Cn⁴/j) Σ_{k≠j} ω1(f; |j² − k²|/n²)/|j² − k²| ≤ (Cn⁴/j) Σ_{k=1}^n (1/k²) ω1(f; k²/n²),

  Σ_{k≠j} ω1(f; |x − x_k|)(1 − x²)|T′_n(x)|/(x − x_k)² ≤ Cj Σ_{k≠j} ω1(f; |x − x_k|)/(x − x_k)²
    ≤ Cn⁴ Σ_{k=1}^n (1/k²) ω1(f; k²/n²),

  Σ_{k≠j} ω1(f; |x − x_k|)(1 − x²)/|x − x_k|³ ≤ C Σ_{k≠j} (j²/n²) ω1(f; |j² − k²|/n²)·n⁶/|j² − k²|³
    ≤ Cn⁶ ω1(f; 1/n²)·(j²/n²) Σ_{k≠j} 1/|j² − k²|² ≤ Cn⁴ ω1(f; 1/n²),

  Σ_{k≠j} ω1(f; |x − x_k|)|T′_n(x)|/|x − x_k| ≤ Cn² Σ_{k≠j} ω1(f; |x − x_k|)/|x − x_k|
    ≤ Cn⁴ Σ_{k=1}^n (1/k²) ω1(f; k²/n²).

Collecting now all these estimates, by (1.5) we get

  |H′_n(f)(x)| ≤ Cn² Σ_{k=1}^n (1/k²) ω1(f; k²/n²).

Since

  n² Σ_{k=1}^n (1/k²) ω1(f; k²/n²) ∼ n ∫_{1/n}^1 [ω1(f; t²)/t²] dt = n ∫_1^n ω1(f; 1/u²) du ∼ n Σ_{k=1}^n ω1(f; 1/k²)

(we have used the equivalence between Riemann sums and the integral itself, and made the substitution t = 1/u), the above estimate becomes

  |H′_n(f)(x)| ≤ Cn Σ_{k=1}^n ω1(f; 1/k²).   (1.7)

On the other hand, for |x − y| ≤ h, by, e.g., Szabados–Vértesi [100], Theorem 5.1, p. 168, we get

  |H_n(f)(x) − H_n(f)(y)| ≤ |H_n(f)(x) − f(x)| + |f(x) − f(y)| + |f(y) − H_n(f)(y)|
    ≤ 2‖H_n(f) − f‖ + ω1(f; h) ≤ C Σ_{k=1}^n (1/k²) ω1(f; k/n) + ω1(f; h).

But, similarly to the above considerations,

  Σ_{k=1}^n (1/k²) ω1(f; k/n) ∼ (1/n) ∫_{1/n}^1 [ω1(f; t)/t²] dt = (1/n) ∫_1^n ω1(f; 1/u) du ∼ (1/n) Σ_{k=1}^n ω1(f; 1/k),

and therefore we get

  ω1(H_n(f); h) = O( (1/n) Σ_{k=1}^n ω1(f; 1/k) ) + ω1(f; h).   (1.8)

Now, on the other hand, using (1.7), we obtain for |x − y| ≤ h

  |H_n(f)(x) − H_n(f)(y)| ≤ |H′_n(f)(ξ)|·h ≤ Cnh Σ_{k=1}^n ω1(f; 1/k²),

i.e.,

  ω1(H_n(f); h) = O( hn Σ_{k=1}^n ω1(f; 1/k²) ),

which together with (1.8) proves the theorem.
Corollary 1.2.1. If f ∈ LipM(α;[−1,1]), 0 < α ≤ 1, then for all n ∈ N and 0 < h < 1 we have

  ω1(H_n(f); h) = O( h^{α/max(2−α, 1+α)} ) if 0 < α < 1/2 or 1/2 < α < 1,
  ω1(H_n(f); h) = O( (h log(1/h))^{(2α+1)/6} ) if α = 1/2 or α = 1.

Proof. The optimal choice in Theorem 1.2.1 is h = v_n, where

  v_n = [ Σ_{k=1}^n ω1(f; 1/k) ] / [ n² Σ_{k=1}^n ω1(f; 1/k²) ].

When h < v_n, the minimum in Theorem 1.2.1 is attained by the first term, and when h > v_n, by the second term. By simple calculations we have

  v_n = O(n^{α−2}) if 0 < α < 1/2,
  v_n = O(1/(n^{3/2} log n)) if α = 1/2,
  v_n = O(n^{−1−α}) if 1/2 < α < 1,
  v_n = O(log n / n²) if α = 1.

Hence, by using Theorem 1.2.1 we arrive at the statement of the corollary.
Remarks. (1) Let 0 < α < 1. The obvious inequalities

  α/2 < α/max(2 − α, 1 + α),  (h log(1/h))^{1/3} < h^{1/4} (for small h),

mean that the preservation property given by Corollary 1.2.1 is better than the one obtained for H_n(f)(x) at the beginning of this section.

(2) It is an open question whether the estimates of ω1(H_n(f); h) in Theorem 1.2.1 and Corollary 1.2.1 are best possible. However, if we choose, for example, f0(x) = x ∈ Lip1(1;[−1,1]), then we can prove that ω1(H_n(f0); h) ∼ √h. Indeed, by

  H_n(f0)(x) = x − T_n(x)T_{n−1}(x)/n = x − (T_{2n−1}(x) + T_1(x))/(2n)

(see, e.g., Anastassiou–Cottin–Gonska [5]) we get

  |H′_n(f0)(x)| = |1 − (T′_{2n−1}(x) + 1)/(2n)| ≥ C/√(1 − x²) ≥ Cn,

for all x ∈ [(1 + x1)/2, 1], and

  ω1( H_n(f0); (1 − x1)/2 ) ≥ |H_n(f0)(1) − H_n(f0)((1 + x1)/2)|
    = |H′_n(f0)(ξ)|·(1 − x1)/2 ≥ Cn(1 − x1) ≥ C √((1 − x1)/2),

as claimed. Now we will prove that in fact H_n(f0) ∈ LipM(1/2;[−1,1]) for all n ∈ N. Evidently, it suffices to prove that T_{2n−1}(x)/(2n) ∈ LipM(1/2;[−1,1]) for all n ∈ N. But by Cheney [22], Problem 5, p. 88,

  |T_{2n−1}(x) − T_{2n−1}(y)|/(2n) = |cos((2n−1)arccos x) − cos((2n−1)arccos y)|/(2n)
    ≤ ((2n−1)/(2n))·|sin ξ|·|arccos x − arccos y| ≤ |arccos x − arccos y| ≤ (π/√2)|x − y|^{1/2},

which was to be proved.

(3) It is well known (see, e.g., Szabados–Vértesi [101], relation (3.2)) that the mean convergence of H_n(f)(x) to f(x) is better than the uniform convergence; that is, we have

  ‖f − H_n(f)‖_{p,w} ≤ C_p ω1(f; 1/n),  for all n ∈ N, p > 0,

where ω1(f; 1/n) is the uniform modulus of continuity, w(x) = 1/√(1 − x²) and ‖f‖_{p,w} = ( ∫_{−1}^1 w(x)|f(x)|^p dx )^{1/p}. This estimate suggests that, by using a suitably chosen weighted L¹_w-modulus of continuity, H_n(f)(x) may have a better global smoothness preservation property. In this sense, let us define

  ω1(f; h)_{1,w} = sup{ ∫_{−1+t}^{1−t} w(x)|f(x + t) − f(x − t)| dx; 0 ≤ t ≤ h },
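The closed form H_n(f0)(x) = x − T_n(x)T_{n−1}(x)/n used for f0(x) = x can be confirmed numerically. The following Python sketch (an illustrative verification, not part of the text) compares the defining sum for H_n(f0) with the closed form at many random points.

```python
import numpy as np

n = 10
k = np.arange(1, n + 1)
xk = np.cos((2 * k - 1) * np.pi / (2 * n))   # roots of T_n

def Hn_f0(x):
    """H_n(f0)(x) for f0(t) = t, directly from the defining formula."""
    Tn = np.cos(n * np.arccos(x))
    d = x - xk
    A = (1.0 - x * xk) * Tn**2 / (n**2 * d**2)   # fundamental functions
    return float(A @ xk)

rng = np.random.default_rng(0)
pts = rng.uniform(-0.99, 0.99, 200)
closed = pts - np.cos(n * np.arccos(pts)) * np.cos((n - 1) * np.arccos(pts)) / n
max_err = max(abs(Hn_f0(x) - c) for x, c in zip(pts, closed))
```

The two expressions agree to high accuracy at all sampled points, consistent with the identity T_n T_{n−1} = (T_{2n−1} + T_1)/2.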
0 < h < 1. We present:

Theorem 1.2.2. Let H_n(f) be the Hermite–Fejér polynomial in Corollary 1.2.1. If f ∈ LipM(α;[−1,1]), 0 < α ≤ 1, then for all n ∈ N and 0 < h < 1 it follows that

  ω1(H_n(f); h)_{1,w} = O( h^{α/(2−α)} ) if 0 < α < 1/2,
  ω1(H_n(f); h)_{1,w} = O( (h log(1/h))^{1/3} ) if α = 1/2,
  ω1(H_n(f); h)_{1,w} = O( h^{α/(1+α)} ) if 1/2 < α ≤ 1.

Proof. We have

  w(x)|H_n(f)(x+t) − H_n(f)(x−t)|
    ≤ [w(x)/w(x+t)]·w(x+t)|H_n(f)(x+t) − f(x+t)| + w(x)|f(x+t) − f(x−t)|
      + [w(x)/w(x−t)]·w(x−t)|f(x−t) − H_n(f)(x−t)|.

Integrating this inequality from −1+t to 1−t and applying Hölder's inequality to the first and third terms, we get

  ∫_{−1+t}^{1−t} w(x)|H_n(f)(x+t) − H_n(f)(x−t)| dx
    ≤ ( ∫_{−1+t}^{1−t} [w²(x)/w(x+t)] dx )^{1/2} ( ∫_{−1+t}^{1−t} w(x+t)|H_n(f)(x+t) − f(x+t)|² dx )^{1/2}
      + ∫_{−1+t}^{1−t} w(x)|f(x+t) − f(x−t)| dx
      + ( ∫_{−1+t}^{1−t} [w²(x)/w(x−t)] dx )^{1/2} ( ∫_{−1+t}^{1−t} w(x−t)|H_n(f)(x−t) − f(x−t)|² dx )^{1/2}
    ≤ C I1(t) ω1(f; 1/n) + ∫_{−1+t}^{1−t} w(x)|f(x+t) − f(x−t)| dx + C I2(t) ω1(f; 1/n),

where

  I1(t) = ( ∫_{−1+t}^{1−t} [w²(x)/w(x+t)] dx )^{1/2},  I2(t) = ( ∫_{−1+t}^{1−t} [w²(x)/w(x−t)] dx )^{1/2}.

Passing to the supremum over t ∈ [0,h], we get

  ω1(H_n(f); h)_{1,w} ≤ C ω1(f; 1/n)[ sup{I1(t); t ∈ [0,h]} + sup{I2(t); t ∈ [0,h]} ] + ω1(f; h)_{1,w}.

In what follows we show that sup{I_k(t); t ∈ [0,h]} ≤ C for all 0 < h < 1, k = 1, 2, where C > 0 is independent of h. We have

  ∫_{−1+t}^{1−t} [w²(x)/w(x+t)] dx = ∫_{−1+t}^{1−t} [√(1 − (x+t)²)/(1 − x²)] dx
    ≤ ∫_0^{1−t} dx/√(1 − x²) + ∫_{−1+t}^0 [√(1 − x²) + √(−2tx)]/(1 − x²) dx
    ≤ C1 + ∫_{−1+t}^0 [√(−2tx)/(1 − x²)] dx + C2.

Also,

  ∫_{−1+t}^0 [√(−2tx)/(1 − x²)] dx = √(2t) ∫_{−1+t}^0 [√(−x)/(1 − x²)] dx.

For x ≤ 0, by some calculation we get

  ∫ [√(−x)/(1 − x²)] dx = arctan(√(−x)) + (1/2) log[(1 − √(−x))/(1 + √(−x))] + const.

Therefore we easily obtain

  √(2t) ∫_{−1+t}^0 [√(−x)/(1 − x²)] dx ≤ C √t [1 + log((1 + √(1−t))/(1 − √(1−t)))],

and since by l'Hôpital's rule

  lim_{t↓0} √t log[(1 + √(1−t))/(1 − √(1−t))] = 0,

as a consequence we immediately obtain

  √(2t) ∫_{−1+t}^0 [√(−x)/(1 − x²)] dx ≤ C1,  for all t ∈ [0,1],

and I1(t) ≤ C for all t ∈ [0,1]. Similarly, for t ∈ [0,1] we have

  ∫_{−1+t}^{1−t} [w²(x)/w(x−t)] dx = ∫_{−1+t}^{1−t} [√(1 − (x−t)²)/(1 − x²)] dx
    ≤ ∫_{−1+t}^0 dx/√(1 − x²) + ∫_0^{1−t} [√(1 − x²) + √(2tx)]/(1 − x²) dx
    ≤ C + √(2t) ∫_0^{1−t} [√x/(1 − x²)] dx.

Reasoning as in the case of I1(t), we easily obtain I2(t) ≤ C for all t ∈ [0,1]. As a first conclusion, we get the estimate

  ω1(H_n(f); h)_{1,w} ≤ C ω1(f; 1/n) + ω1(f; h)_{1,w}.

On the other hand, by the integral mean value theorem, for each fixed t there exists ξ ∈ (−1+t, 1−t) such that

  ∫_{−1+t}^{1−t} w(x)|H_n(f)(x+t) − H_n(f)(x−t)| dx
    = |H_n(f)(ξ+t) − H_n(f)(ξ−t)| ∫_{−1+t}^{1−t} dx/√(1 − x²)
    ≤ Ct|H′_n(f)(η)| ≤ Ct‖H′_n(f)‖ ≤ Cnt Σ_{k=1}^n ω1(f; 1/k²),

by the proof of Theorem 1.2.1. Passing to the supremum over t ∈ [0,h], it follows that

  ω1(H_n(f); h)_{1,w} ≤ Cnh Σ_{k=1}^n ω1(f; 1/k²).

Let f ∈ LipM(α;[−1,1]). The last relation becomes

  ω1(H_n(f); h)_{1,w} ≤ C h n^{2−2α} if 0 < α < 1/2,
  ω1(H_n(f); h)_{1,w} ≤ C h n log n if α = 1/2,
  ω1(H_n(f); h)_{1,w} ≤ C h n if 1/2 < α ≤ 1.

Since f ∈ LipM(α;[−1,1]) implies ω1(f; 1/n) ≤ M n^{−α} and ω1(f; h)_{1,w} ≤ C h^α, the first estimate becomes

  ω1(H_n(f); h)_{1,w} ≤ C n^{−α} + C h^α.

For the three cases 0 < α < 1/2, α = 1/2, 1/2 < α ≤ 1, by the standard technique, the optimal choices for n are obtained from the equations n^{2−2α}h = n^{−α}, hn log n = n^{−1/2} and nh = n^{−α}, respectively. Substituting into the last estimates, we get the theorem.
Since f ∈ LipM (α; [−1, 1]) implies ω1 (f ; 1/n) ≤ Mn−α and ω1 (f ; h)1,w ≤ hα , the first estimate becomes ω1 (Hn (f ); 1/n)1,w ≤ Cn−α + Chα . For the three cases 0 < α < 1/2, α = 1/2, 1/2 < α < 1, by the standard technique, the optimal choices for n are from the equations n2−2α h = n−α , hn log(n) = n−1/2 and nh = n−α , respectively. Replacing in the last estimates we get the theorem. Remark. Comparing Theorem 1.2.2 with Corollary 1.2.1, we see that for all α ∈ (0, 1) we have the same global smoothness preservation property for both moduli ω1 (Hn (f ); h)1,w and ω1 (Hn (f ); h), while for α = 1, Theorem 1.2.2 gives a much better result than Corollary 1.2.1, since ω1 (Hn (f ); h)1,w ≤ Ch1/2 , while ω1 (Hn (f ); h) ≤ C(h log( h1 ))1/3 . Now, let us consider the Lagrange interpolation algebraic polynomial Ln (f ) based on the Chebyshev nodes of second kind plus the endpoints ±1. It is known (Mastroianni–Szabados [72]) that Ln (f )(x) = where xk = cos tk , tk = lk (x) =
k−1 n−1 π ,
n
f (xk )lk (x),
k=1
k = 1, n, and
(−1)k−1 ωn (x) , (1 + δk1 + δkn )(n − 1)(x − xk )
k = 1, n,
with ωn (x) = sin t sin(n − 1)t, x = cos t and δkj is the Kronecker’s symbol.
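For numerical experiments with this operator, the standard barycentric form at these nodes is convenient: at x_k = cos((k−1)π/(n−1)) the barycentric weights are (−1)^{k−1}, halved at the two endpoints, which matches the factors (1 + δ_{k1} + δ_{kn}) in the l_k above. The Python sketch below (illustrative; the barycentric formula is a standard equivalent of the fundamental-polynomial form) evaluates L_n(f).

```python
import numpy as np

def lagrange_cheb2(f, n):
    """L_n(f) at x_k = cos((k-1)pi/(n-1)), k = 1..n (Chebyshev points of the
    second kind including the endpoints +-1), via the barycentric formula."""
    k = np.arange(n)
    xk = np.cos(k * np.pi / (n - 1))
    fk = np.array([f(x) for x in xk])
    w = (-1.0) ** k                       # barycentric weights
    w[0] *= 0.5                           # halved at the endpoints
    w[-1] *= 0.5

    def Ln(x):
        d = x - xk
        j = np.argmin(np.abs(d))
        if abs(d[j]) < 1e-15:             # x is (numerically) a node
            return float(fk[j])
        c = w / d
        return float(c @ fk / c.sum())

    return Ln

L = lagrange_cheb2(np.exp, 20)
err = abs(L(0.3) - np.exp(0.3))           # tiny, since exp is entire
node_err = abs(L(1.0) - np.exp(1.0))      # exact at the nodes
```

For a smooth function such as exp, twenty Chebyshev points already give errors near machine precision, while interpolation at the nodes is exact by construction.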
Theorem 1.2.3. For any f ∈ C[−1,1], h > 0 and n ∈ N we have

  ω1(L_n(f); h) ≤ C min{ hn Σ_{k=1}^n ω1(f; 1/k²), ω1(f; 1/n) log n + ω1(f; h) },

where C > 0 is independent of f, n and h.

Proof. The method will follow the ideas in the proof of Theorem 1.2.1, taking also into account that the relations (1.6) hold in this case too. Therefore, let x ∈ [0,1] be fixed (the proof in the case x ∈ [−1,0] being similar), let the index j be defined by |x − x_j| = min{|x − x_k|; 1 ≤ k ≤ n}, and let us write ω_n(x) = U_n(x)(1 − x²), where U_n(x) = sin((n−1)t)/sin t, x = cos t. By simple calculations we get

  l′_k(x) = [(−1)^{k−1} / ((1 + δ_{k1} + δ_{kn})(n−1))]
            × [ U′_n(x)(1 − x²)/(x − x_k) − 2xU_n(x)/(x − x_k) − U_n(x)(1 − x²)/(x − x_k)² ],

which, as in the proof of Theorem 1.2.1, implies

  |L′_n(f)(x)| ≤ (1/(n−1)) Σ_{k≠j} ω1(f; |x − x_k|)[ |U′_n(x)|(1 − x²)/|x − x_k|
                 + 2|U_n(x)|/|x − x_k| + |U_n(x)|(1 − x²)/(x − x_k)² ] + ω1(f; |x − x_j|)|l′_j(x)|.

Now, the Bernstein–Markov inequality yields

  |l′_j(x)| ≤ n²‖l_j‖ / (n√(1 − x²) + 1) ≤ (Cn²/j)‖l_j‖ ≤ Cn²/j.

Therefore

  ω1(f; |x − x_j|)|l′_j(x)| ≤ C ω1(f; j/n²)·(n²/j) ≤ C n² ω1(f; 1/n²).

Now, we will use the obvious estimates

  |U_n(x)|(1 − x²) ≤ √(1 − x²) ∼ j/n,  |U′_n(x)|(1 − x²) ≤ 2(n − 1).

Thus, we obtain

  (1/(n−1)) Σ_{k≠j} ω1(f; |x − x_k|)|U′_n(x)|(1 − x²)/|x − x_k|
    ≤ 2 Σ_{k≠j} ω1(f; |x − x_k|)/|x − x_k| ≤ Cn² Σ_{k=1}^n (1/k²) ω1(f; k²/n²) ≤ Cn Σ_{k=1}^n ω1(f; 1/k²),

  (2/(n−1)) Σ_{k≠j} ω1(f; |x − x_k|)|U_n(x)|/|x − x_k|
    ≤ C Σ_{k≠j} ω1(f; |x − x_k|)/|x − x_k| ≤ Cn Σ_{k=1}^n ω1(f; 1/k²),

  (1/(n−1)) Σ_{k≠j} ω1(f; |x − x_k|)|U_n(x)|(1 − x²)/(x − x_k)²
    ≤ (Cj/n²) Σ_{k≠j} ω1(f; |x − x_k|)/(x − x_k)² ≤ C Σ_{k≠j} ω1(f; |x − x_k|)/|x − x_k|
    ≤ Cn Σ_{k=1}^n ω1(f; 1/k²).

Collecting all these estimates, we obtain

  |L′_n(f)(x)| ≤ Cn Σ_{k=1}^n ω1(f; 1/k²),

which, by the same reasoning as in the proof of Theorem 1.2.1, yields

  ω1(L_n(f); h) ≤ Cnh Σ_{k=1}^n ω1(f; 1/k²).   (1.9)

On the other hand, for |x − y| ≤ h we get

  |L_n(f)(x) − L_n(f)(y)| ≤ 2‖L_n(f) − f‖ + ω1(f; h),

which implies

  ω1(L_n(f); h) ≤ 2‖L_n(f) − f‖ + ω1(f; h).

Standard technique in interpolation theory (see Szabados–Vértesi [100]) gives

  ‖L_n(f) − f‖ ≤ C ω1(f; 1/n) ‖λ_n‖,

where λ_n(x) = Σ_{k=1}^n |l_k(x)|, x ∈ [−1,1], is the Lebesgue function of interpolation. Here by (1.6),

  λ_n(x) ≤ Σ_{k≠j} |U_n(x)|(1 − x²)/((n−1)|x − x_k|) + |l_j(x)|
    ≤ C Σ_{k≠j} (j/n)/((n−1)·|j² − k²|/n²) + |l_j(x)|
    ≤ C Σ_{k≠j} k/|k² − j²| + |l_j(x)| ≤ C log n + |l_j(x)|.

Now,

  |l_j(x)| = |U_n(x)|(1 − x²)/((n−1)|x − x_j|) = |U′_n(ξ)|(1 − x²)/n.

Here, if j = 1 or n, then 1 − x² ∼ n^{−2}, and by Markov's inequality |U′_n(ξ)| ≤ n²‖U_n‖ ≤ n³, i.e., by (1.6),

  |U′_n(ξ)|(1 − x²)/n ≤ n³·(C/n²)·(1/n) = C.

If 2 ≤ j ≤ n − 1, then evidently 1 − x² ∼ 1 − ξ² and

  |l_j(x)| = |U′_n(ξ)|(1 − x²)/n ≤ C(1 − x²)/(1 − ξ²) ≤ C,

whence in both cases |l_j(x)| = O(1), and ‖λ_n‖ = O(log n). Thus

  ω1(L_n(f); h) ≤ C ω1(f; 1/n) log n + ω1(f; h),   (1.10)

which together with (1.9) proves the theorem.
Corollary 1.2.2. (i) If f ∈ LipM(α;[−1,1]), 0 < α ≤ 1, then for all n ∈ N and h ∈ (0,1) we have

  ω1(L_n(f); h) = O( h^{α/(2−α)} (log(1/h))^{(2−2α)/(2−α)} ) if 0 < α < 1/2,
  ω1(L_n(f); h) = O( (h log(1/h))^{1/3} ) if α = 1/2,
  ω1(L_n(f); h) = O( h^{α/(1+α)} (log(1/h))^{1/(1+α)} ) if 1/2 < α ≤ 1.

(ii) If ω1(f; h) = O( 1/log^β(1/h) ), β > 1, then

  ω1(L_n(f); h) = O( 1/log^{β−1}(1/h) ).

(All the constants in O are independent of n and h.)

Proof. (i) Let f ∈ LipM(α;[−1,1]), 0 < α ≤ 1. Then (1.9) and (1.10) yield

  ω1(L_n(f); h) = O(n^{2−2α}h) if 0 < α < 1/2,  O(nh log n) if α = 1/2,  O(nh) if 1/2 < α ≤ 1,   (1.11)

and

  ω1(L_n(f); h) = O( log n/n^α + h^α ),   (1.12)

respectively. Now, if n is smaller than

  ((1/h) log(1/h))^{1/(2−α)},  h^{−2/3},  ((1/h) log(1/h))^{1/(1+α)}

in the cases 0 < α < 1/2, α = 1/2, 1/2 < α ≤ 1, respectively, then we use the corresponding estimates in (1.11); otherwise we use (1.12).

(ii) In this case we get from (1.9) and (1.10)

  ω1(L_n(f); h) = O( n²h/log^β n )  and  ω1(L_n(f); h) = O( 1/log^{β−1} n + 1/log^β(1/h) ),

respectively.
As before, we use these estimates according as n is smaller or bigger than $ 1 −β # log h1 2 . Now, let us consider the case of the Shepard interpolation operator n k −λ f (k/n) x − n k=0 Sn,λ (f )(x) = , λ ≥ 1, n ∈ N, −λ n x − k n k=0
defined for an arbitrary $f\in C[0,1]$. Since by Szabados [98], Theorem 1 and Lemma 2, we have estimates for $\|S_{n,\lambda}-f\|$ and $\|S_{n,\lambda}'\|$, by applying the above method we immediately get
$$\omega_1(S_{n,\lambda}(f);h)\le C\min\left\{hn^{2-\lambda}\int_{1/n}^{1}\frac{\omega_1(f;t)}{t^{\lambda}}\,dt,\ n^{1-\lambda}\int_{1/n}^{1}\frac{\omega_1(f;t)}{t^{\lambda}}\,dt+\omega_1(f;h)\right\}$$
$$\le Ch^{\lambda-1}\int_{h}^{1}\frac{\omega_1(f;t)}{t^{\lambda}}\,dt,\qquad 0<h<1<\lambda,\ n\in\mathbf N,$$
where the constant $C$ is independent of $n$. As an immediate consequence we obtain the following.
Corollary 1.2.3. (i) For $1<\lambda\le 2$, $f\in\mathrm{Lip}_M(\alpha;[0,1])$, $0<\alpha\le 1$, we get
$$\omega_1(S_{n,\lambda}(f);h)=\begin{cases} O(h^{\alpha}), & \text{if } \alpha<\lambda-1,\\ O\!\left(h^{\alpha}\log\frac 1h\right), & \text{if } \alpha=\lambda-1,\\ O(h^{\lambda-1}), & \text{if } \lambda-1<\alpha.\end{cases}$$
(ii) If $\lambda>2$ then by
$$\int_{h}^{1}\frac{\omega_1(f;t)}{t^{\lambda}}\,dt\le \frac{2\omega_1(f;h)}{h}\int_{h}^{1}t^{1-\lambda}\,dt\le Ch^{1-\lambda}\omega_1(f;h),$$
we get the simpler relation
$$\omega_1(S_{n,\lambda}(f);h)\le C\omega_1(f;h),\qquad 0<h<1,\ \lambda>2,\ n\in\mathbf N.$$
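The operator $S_{n,\lambda}$ is straightforward to experiment with. The sketch below is ours (not from the original text); it implements the classical Shepard operator on $[0,1]$ with NumPy, handling evaluation at the nodes separately, since there $S_{n,\lambda}$ interpolates.

```python
import numpy as np

def shepard(f, n, lam, x):
    """Classical Shepard operator S_{n,lam}(f)(x) on the equidistant
    nodes k/n, k = 0..n; interpolates f at the nodes."""
    nodes = np.arange(n + 1) / n
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    for idx, xv in np.ndenumerate(x):
        d = np.abs(xv - nodes)
        hit = d < 1e-14
        if hit.any():
            out[idx] = f(nodes[hit][0])   # S interpolates at the nodes
        else:
            w = d ** (-lam)               # weights |x - x_k|^(-lam)
            out[idx] = np.dot(w, f(nodes)) / w.sum()
    return out
```

For smooth $f$ and moderate $\lambda$ the computed values track $f$ closely away from the nodes, while at each node the value of $f$ is reproduced exactly.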
Remark. The above results show that if $\lambda>2$, or $1<\lambda\le 2$ and $\alpha<\lambda-1$, then the Shepard operators completely preserve the global smoothness of $f$. Also, the case $\lambda=1$ remains unsolved, since in this case Lemma 2 in Szabados [98] does not give an estimate for $\|S_{n,\lambda}'\|$.

In what follows we consider the so-called Shepard–Lagrange operator studied in Coman–Trimbitas [26], defined here on the equidistant nodes $x_i=\frac in\in[0,1]$, $i=0,1,\ldots,n$, $n\in\mathbf N$, $m\in\mathbf N\cup\{0\}$, $m<n$, $\lambda>0$, as follows:
$$S_{n,\lambda,L_m}(f)(x)=\sum_{i=0}^{n}A_{i,\lambda}(x)L_{m,i}(f)(x),$$
where
$$A_{i,\lambda}(x)=\frac{|x-x_i|^{-\lambda}}{\sum_{k=0}^{n}|x-x_k|^{-\lambda}}$$
and
$$L_{m,i}(f)(x)=\sum_{j=0}^{m}\frac{u_i(x)}{(x-x_{i+j})u_i'(x_{i+j})}f(x_{i+j}),$$
where $u_i(x)=(x-x_i)\cdots(x-x_{i+m})$ and $x_{n+\nu}=x_{\nu}$, $\nu=1,\ldots,m$. Obviously the Shepard–Lagrange operator generalizes the classical Shepard operator, which is obtained for $m=0$. Compared with the classical Shepard operator in Corollary 1.2.3, which has degree of exactness $0$, the Shepard–Lagrange operator has the advantage that its degree of exactness is $m$ (recall that an operator $O$ is said to have degree of exactness $m$ if $O(P)=P$ for any polynomial $P$ of degree $\le m$ and there exists a polynomial $Q$ of degree $m+1$ such that $O(Q)\ne Q$).

Theorem 1.2.4. Let us suppose that $f:[0,1]\to\mathbf R$, $f\in\mathrm{Lip}_L\alpha$, where $0<\alpha\le 1$. Then for all $x\in[0,1]$, $n\in\mathbf N$, $m\in\mathbf N\cup\{0\}$ we have the estimates:
(i)
$$|S_{n,\lambda,L_m}(f)(x)-f(x)|\le C_mLE_{\lambda m}(n),$$
where
$$E_{\lambda m}(n)=\begin{cases} n^{2m+1-(m+1)\alpha}/\log(n), & \text{if } \lambda=1,\\ n^{2m+2-(m+1)\alpha-\lambda}, & \text{if } 1<\lambda<m+2,\\ n^{m-(m+1)\alpha}\log(n), & \text{if } \lambda=m+2,\\ n^{m-(m+1)\alpha}, & \text{if } \lambda>m+2.\end{cases}$$
(ii)
$$|S_{n,\lambda,L_m}'(f)(x)|\le C\max\{nE_{\lambda m}(n),\,n^{m}\},$$
for $\lambda\ge 2$ an even number.

Proof. Everywhere in the proof we can consider $x\ne x_k$, $k=0,\ldots,n$, since at the nodes $S_{n,\lambda,L_m}(f)(x_k)=f(x_k)$, $k=0,\ldots,n$, and the estimates are trivial.
(i) We have
$$|S_{n,\lambda,L_m}(f)(x)-f(x)|\le\sum_{i=0}^{n}A_i(x)|L_{m,i}(f)(x)-f(x)|\le L\sum_{i=0}^{n}A_i(x)\sum_{\nu=0}^{m}\frac{|x-x_i|\cdots|x-x_{i+\nu}|^{\alpha}\cdots|x-x_{i+m}|}{|u_i'(x_{i+\nu})|}.$$
It is easy to show that, because of the equidistant nodes, we have $|u_i'(x_{i+\nu})|\ge C_m n^{-m}$, $\forall i=0,1,\ldots,n$, $\nu=0,1,\ldots,m$, and from $|x-x_k|\le 1$, $\forall k$, it follows that
$$|x-x_i|\cdots|x-x_{i+\nu}|^{\alpha}\cdots|x-x_{i+m}|\le |x-x_i|^{\alpha}\cdots|x-x_{i+\nu}|^{\alpha}\cdots|x-x_{i+m}|^{\alpha}.$$
Therefore,
$$|S_{n,\lambda,L_m}(f)(x)-f(x)|\le C_mLn^{m}\sum_{i=0}^{n}A_i(x)\,\frac{|x-x_i|\cdots|x-x_{i+m}|}{|x-x_i|^{1-\alpha}\cdots|x-x_{i+m}|^{1-\alpha}}.$$
Taking into account the relations (11) and (12) in Coman–Trimbitas [26], pp. 476–477, and denoting $\gamma(x,x_i)=|x-x_i|\cdots|x-x_{i+m}|$, it follows that $\frac{1}{|x-x_{i+l}|}\le\frac{1}{|2(j-l)-1|\,r}$ and
$$[1/\gamma(x,x_i)]^{1-\alpha}\le [1/r^{m+1}]^{1-\alpha}\left[1\Big/\prod_{l=0}^{m}|2(j-l)-1|\right]^{1-\alpha}\le [1/r^{m+1}]^{1-\alpha},$$
taking into account that $\prod_{l=0}^{m}|2(j-l)-1|\ge 1$. From Theorem 4 in Coman–Trimbitas [26], for the equidistant nodes we have there $M\le 3$, $r\sim\frac 1n$, and reasoning as in the proof of the above-mentioned Theorem 4, we get
$$|S_{n,\lambda,L_m}(f)(x)-f(x)|\le C_mL[1/r]^{m+(m+1)(1-\alpha)}\epsilon_{\lambda m}(r),$$
where $\epsilon_{\lambda m}(r)$ is given by the formula
$$\epsilon_{\lambda m}(r)=\begin{cases} |\log(r)|^{-1}, & \text{if } \lambda=1,\\ r^{\lambda-1}, & \text{if } 1<\lambda<m+2,\\ r^{\lambda-1}|\log(r)|, & \text{if } \lambda=m+2,\\ r^{m+1}, & \text{if } \lambda>m+2.\end{cases}$$
All these prove (i).
(ii) We have
$$|S_{n,\lambda,L_m}'(f)(x)|\le\left|\sum_{i=0}^{n}A_i'(x)L_{m,i}(f)(x)\right|+\left|\sum_{i=0}^{n}A_i(x)[L_{m,i}(f)(x)]'\right|:=|E_1(x)|+|E_2(x)|,$$
where $\lambda$ is considered even, i.e., $\lambda=2p$, $p\in\mathbf N$. Since
$$A_i'(x)=\frac{-2p(x-x_i)^{-2p-1}}{S}+\frac{2p(x-x_i)^{-2p}}{S}\cdot\frac{\sum_{k=0}^{n}(x-x_k)^{-2p-1}}{S},$$
where $S:=\sum_{k=0}^{n}(x-x_k)^{-2p}$, we get
$$|A_i'(x)|\le 2p\,\frac{|x-x_i|^{-1}|x-x_i|^{-2p}}{S}+2p\,\frac{|x-x_i|^{-2p}}{S}\cdot\frac{\sum_{k=0}^{n}|x-x_k|^{-2p-1}}{S}.$$
For fixed $x\in[0,1]$, let $x_d$ be the nearest node to $x$, i.e., $|x-x_d|:=\min\{|x-x_i|;\ i=0,1,\ldots,n\}\le\frac 1n$. It follows, for all $i\ne d$,
$$|A_i'(x)|\le 2pn\,\frac{|x-x_i|^{-2p}}{S},$$
and
$$|A_d'(x)|\le 2pn\,\frac{|x-x_d|^{-2p}}{S}.$$
Since $\sum_{k=0}^{n}A_k'(x)=0$, we get
$$\sum_{i=0}^{n}A_i'(x)L_{m,i}(f)(x)=\sum_{i=0}^{n}A_i'(x)[L_{m,i}(f)(x)-f(x)],$$
which implies
$$|E_1(x)|\le\sum_{i=0}^{n}|A_i'(x)|\,|L_{m,i}(f)(x)-f(x)|$$
$$\le Cn\sum_{i=0}^{n}A_i(x)|L_{m,i}(f)(x)-f(x)|\le CnE_{\lambda m}(n)$$
(for the last inequality see (i)). On the other hand, $|[L_{m,i}(f)(x)]'|\le C_mn^{m}$, which implies $|E_2(x)|\le C_mn^{m}$. As a conclusion,
$$|S_{n,\lambda,L_m}'(f)(x)|\le C\max\{nE_{\lambda m}(n),\,n^{m}\},$$
where $C$ depends on $f$ but is independent of $x$ and $n$.
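A short numerical sketch of the Shepard–Lagrange operator may clarify the construction. The code below is ours (not from Coman–Trimbitas [26]); it uses the equidistant nodes and the wrap-around convention $x_{n+\nu}=x_{\nu}$ from the definition, and it lets one observe the degree of exactness $m$ directly.

```python
import numpy as np

def shepard_lagrange(f, n, m, lam, x):
    """Shepard-Lagrange operator S_{n,lam,Lm} on nodes x_i = i/n,
    with wrap-around x_{n+nu} = x_nu (nu = 1..m)."""
    nodes = np.arange(n + 1) / n
    fx = f(nodes)

    def local_lagrange(i, xv):
        # Lagrange interpolant L_{m,i}(f) at nodes x_i, ..., x_{i+m}
        idxs = [(i + j) if i + j <= n else (i + j - n) for j in range(m + 1)]
        pts, vals = nodes[idxs], fx[idxs]
        s = 0.0
        for a in range(m + 1):
            w = 1.0
            for b in range(m + 1):
                if b != a:
                    w *= (xv - pts[b]) / (pts[a] - pts[b])
            s += w * vals[a]
        return s

    out = []
    for xv in np.atleast_1d(np.asarray(x, dtype=float)):
        d = np.abs(xv - nodes)
        j = int(d.argmin())
        if d[j] < 1e-14:
            out.append(fx[j])             # interpolation at the nodes
        else:
            w = d ** (-float(lam))
            A = w / w.sum()               # the weights A_{i,lam}(x)
            out.append(sum(A[i] * local_lagrange(i, xv) for i in range(n + 1)))
    return np.array(out)
```

Since each $L_{m,i}$ reproduces polynomials of degree $\le m$ and $\sum_i A_{i,\lambda}(x)=1$, the operator reproduces such polynomials exactly (up to rounding), while for $m=0$ it reduces to the classical Shepard operator.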
Remark. It is easy to see that the estimates in the expression of $E_{\lambda m}(n)$ are not "good" for all the values of $n$, $\alpha$ and $\lambda$. For example, the worst situation seems to be the case $\lambda=1$, when the estimate actually does not prove the convergence of $S_{n,\lambda,L_m}(f)(x)$ to $f(x)$.

Corollary 1.2.4. Let us suppose that $f:[0,1]\to\mathbf R$, $f\in\mathrm{Lip}_L\alpha$, where $0<\alpha\le 1$, and let us denote by $F_1(n,h)$ and $F_2(n,h)$ the estimates of $\|S_{n,\lambda,L_m}'(f)\|$ and $\|S_{n,\lambda,L_m}(f)-f\|$, respectively. We have
$$\omega_1(S_{n,\lambda,L_m}(f);h)\le C_{m,f}\max\{hF_1(n,h),\,F_2(n,h)+\omega_1(f;h)\},$$
where $n$ is the unique solution of the equation $hF_1(n,h)=F_2(n,h)$.
Proof. By the standard technique we obtain
$$\omega_1(S_{n,\lambda,L_m}(f);h)\le C_{m,f}\max\{h\|S_{n,\lambda,L_m}'(f)\|,\ \|S_{n,\lambda,L_m}(f)-f\|+\omega_1(f;h)\},$$
for all $n\in\mathbf N$, $h\in(0,1)$. Combined with the estimates in Theorem 1.2.4, the standard method gives the optimal choice for $n$, as the solution of the equation $hF_1(n,h)=F_2(n,h)$.

Remark. Corollary 1.2.4 contains many global smoothness preservation properties, depending on the relations satisfied by $\lambda$, $m$ and $\alpha$. In some cases they are "good," in other cases "bad." For example, let us suppose that $\lambda>m+2$, $\lambda$ is even and $m>\frac{1-\alpha}{\alpha}$. We get $nE_{\lambda m}(n)=n^{(m+1)(1-\alpha)}<n^{m}$ and therefore we get the equation $hn^{m}=n^{m-(m+1)\alpha}$, which gives $n=h^{\frac{-1}{(m+1)\alpha}}$ and
$$\omega_1(S_{n,\lambda,L_m}(f);h)\le Ch\cdot\left(h^{\frac{-1}{(m+1)\alpha}}\right)^{m}=Ch^{[(m+1)\alpha-m]/[(m+1)\alpha]},\qquad \forall n\in\mathbf N,\ 0<h<1.$$
For a "good" property in this case, it is necessary to have $(m+1)\alpha-m>0$, i.e., $m<\frac{\alpha}{1-\alpha}$, a condition which together with $m>\frac{1-\alpha}{\alpha}$ necessarily implies $\frac{\alpha}{1-\alpha}>\frac{1-\alpha}{\alpha}$, i.e., $\frac 12<\alpha<1$. As a conclusion, if $\alpha<\frac 12$ then we do not have a "good" global smoothness preservation property.
As a concrete example, take $\alpha=\frac 45$. We obtain
$$\omega_1(S_{n,\lambda,L_m}(f);h)\le Ch^{\frac{4-m}{4m+4}},$$
which for $m<4$ and $\lambda>m+2$, $\lambda$ even, gives a "good" global smoothness preservation property.

In what follows we consider a sort of Shepard operator on the semi-axis $[0,+\infty)$, introduced and studied in the papers Della Vecchia–Mastroianni–Szabados [36]–[38]. Let us denote $C([0,+\infty])=\{f:[0,+\infty)\to\mathbf R;\ \text{there exists}\ \lim_{x\to+\infty}f(x)=f(+\infty)\in\mathbf R\}$. If $f\in C([0,+\infty])$ then obviously the usual modulus of continuity $\omega(f;h)=\sup\{|f(x)-f(y)|;\ x,y\ge 0,\ |x-y|\le h\}$ is finite, and for the quantity $\epsilon_f(x)=\sup_{y\ge x}|f(+\infty)-f(y)|$ it follows that $\lim_{x\to+\infty}\epsilon_f(x)=0$. Putting $\|f\|=\sup\{|f(x)|;\ x\ge 0\}$, let $\omega_{\Delta}(f;t)=\sup_{0<h\le t}\|f(x+h\Delta(x))-f(x)\|$ be the modulus of smoothness of $f$ with step weight function $\Delta(x)=x^{1-1/\gamma}$, $\gamma\ge 1$. Also, for the knots $x_k=\frac{k^{\gamma}}{n^{\gamma/2}}$, $k=0,\ldots,n$, $\gamma\ge 1$, $s\ge 2$ and $f\in C([0,+\infty])$, let us consider the so-called Balázs–Shepard operator defined by
$$S_{n,s}(f)(x)=\frac{\sum_{k=0}^{n}|x-x_k|^{-s}f(x_k)}{\sum_{k=0}^{n}|x-x_k|^{-s}},\qquad x\ge 0.$$
The following estimates are known.

Theorem 1.2.5. (See Della Vecchia–Mastroianni–Szabados [37]) Let us suppose that
$$\limsup_{T>0}\left|\int_{0}^{T}[f(t)-f(+\infty)]/\Delta(t)\,dt\right|<+\infty$$
and
$$\epsilon_f(x)\le\frac{C}{(1+x)^{1/\gamma}},\qquad \forall x\ge 0.$$
(i) (See [37], Theorem 2.2) If $s>2$ and $0<\alpha<1$, then $\omega_{\Delta}(f;h)\le Ch^{\alpha}$ implies $\|S_{n,s}(f)-f\|\le Cn^{-\alpha/2}$;
(ii) (See [37], Theorem 2.4) Let $s=2$. If $0<\alpha<1$, then $\omega_{\Delta}(f;h)\le Ch^{\alpha}$ implies $\|S_{n,s}(f)-f\|\le Cn^{-\alpha/2}$, and if $\alpha=1$ then $\omega_{\Delta}(f;h)\le Ch$ implies $\|S_{n,s}(f)-f\|\le C\frac{\log(n)}{\sqrt n}$.

Theorem 1.2.6. (See Della Vecchia–Mastroianni–Szabados [37], Lemma 3.1, p. 446) If $s>1$ then $\|S_{n,s}'(f)\|\le C\sqrt n\,\|f\|$.

Corollary 1.2.5. Let us suppose that
$$\limsup_{T>0}\left|\int_{0}^{T}[f(t)-f(+\infty)]/\Delta(t)\,dt\right|<+\infty$$
and
$$\epsilon_f(x)\le\frac{C}{(1+x)^{1/\gamma}},\qquad \forall x\ge 0.$$
(i) If $s>2$ and $0<\alpha<1$, then $\omega_{\Delta}(f;h)\le Ch^{\alpha}$ implies $\omega_{\Delta}(S_{n,s}(f);h)\le Ch^{\alpha/(\alpha+1)}$, for all $n\in\mathbf N$, $0<h<1$, where $C>0$ is a constant independent of $n$ and $h$.
(ii) If $s=2$ and $\omega_{\Delta}(f;h)\le Ch^{\alpha}$ then we have: $\omega_{\Delta}(S_{n,s}(f);h)\le Ch^{\alpha/(\alpha+1)}$ if $0<\alpha<1$, and
$$\omega_{\Delta}(S_{n,s}(f);h)\le C[h\log(1/h)]^{1/2}\quad\text{if } \alpha=1.$$
Both estimates hold for all $n\in\mathbf N$, $0<h<1$.
Proof. (i) From the general inequality $\omega_{\Delta}(f;h)\le Ch\|f'\|$, combined with Theorem 1.2.6, it follows that $\omega_{\Delta}(S_{n,s}(f);h)\le Ch\sqrt n\,\|f\|$. Combined now with Theorem 1.2.5, (i), and by using the standard method, it immediately follows that
$$\omega_{\Delta}(S_{n,s}(f);h)\le C\max\{hn^{1/2},\ n^{-\alpha/2}+h^{\alpha}\}.$$
From the equation $hn^{1/2}=n^{-\alpha/2}$ we get $n=h^{-2/(\alpha+1)}$, which replaced above gives the required estimate.
(ii) Let $s=2$. If $0<\alpha<1$ then the proof follows similarly from Theorem 1.2.5, (ii). If $\alpha=1$ then we get
$$\omega_{\Delta}(S_{n,s}(f);h)\le C\max\left\{hn^{1/2},\ \frac{\log(n)}{n^{1/2}}+h\right\},$$
which leads to the equation $hn=\log(n)$. This implies $n=\frac 1h\log\frac 1h$ and, replacing it above, we get the desired estimate. The proof is complete.
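The Balázs–Shepard operator just analyzed can also be sketched numerically. The code below is ours (not from [37]); it places the knots $x_k=k^{\gamma}/n^{\gamma/2}$ as in the text and evaluates the operator for a function with a finite limit at $+\infty$.

```python
import numpy as np

def balazs_shepard(f, n, s, gamma, x):
    """Balazs-Shepard operator on the knots x_k = k**gamma / n**(gamma/2)."""
    k = np.arange(n + 1, dtype=float)
    knots = k ** gamma / n ** (gamma / 2.0)
    out = []
    for xv in np.atleast_1d(np.asarray(x, dtype=float)):
        d = np.abs(xv - knots)
        j = int(d.argmin())
        if d[j] < 1e-14:
            out.append(f(knots[j]))       # interpolation at the knots
        else:
            w = d ** (-float(s))
            out.append(np.dot(w, f(knots)) / w.sum())
    return np.array(out)
```

Note that the knot density decreases along the semi-axis, which is why the step weight function $\Delta$ enters the error and smoothness estimates above.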
Unlike the results of this chapter, which refer to real functions of one real variable, let us consider now the case of complex Hermite–Fejér interpolation polynomials on the roots of unity, attached to a complex function defined on the closed unit disk $D=\{z\in\mathbf C;\ |z|\le 1\}$. Let us denote by $AC=\{f:D\to\mathbf C;\ f \text{ is analytic on } \{|z|<1\} \text{ and continuous on } D\}$ and let $z_k=e^{2\pi ki/n}$, $k=1,\ldots,n$, be the $n$th roots of unity. Then, according to a result of Cavaretta–Sharma–Varga [21], there exists a uniquely determined complex polynomial $L_n(f)(z)$ of degree at most $2n-1$ such that $L_n(f)(z_k)=f(z_k)$, $k=1,\ldots,n$, and $L_n'(f)(z_k)=0$, $k=1,\ldots,n$. Denoting $w(f;h)=\sup\{|f(e^{ix})-f(e^{i(x+t)})|;\ x\in[0,2\pi],\ 0\le t<h\}$, by Theorem 1 in Sharma–Szabados [89] the estimate
$$\|L_n(f)-f\|\le C\left[w\!\left(f;\frac 1n\right)+E_{[cn]}(f)\right]\log(n)$$
holds, where $\|\cdot\|$ represents the uniform norm on $D$, $C$ and $c$ are independent of $f$ and $n$, and $E_n(f)$ denotes the error of best polynomial approximation. Also, recall (see Rubel–Shields–Taylor [84]) that $w(f;h)$ is equivalent to the usual modulus of continuity on $D$ defined by $\omega(f;h)=\sup\{|f(z_1)-f(z_2)|;\ z_1,z_2\in D,\ |z_1-z_2|\le h\}$. Therefore, we have the estimate
$$\|L_n(f)-f\|\le C\left[\omega\!\left(f;\frac 1n\right)+E_{[cn]}(f)\right]\log(n).$$
We present

Theorem 1.2.7. If $f\in AC$ and there exists $0<\alpha\le 1$ such that $\omega(f;h)\le Ch^{\alpha}$, $\forall\,0\le h<1$, then
$$\omega(L_n(f);h)\le C_{f,\alpha}h^{\alpha/(1+\alpha)}\log\frac 1h,\qquad \forall n\in\mathbf N,\ 0<h<1,$$
where the constant $C_{f,\alpha}>0$ is independent of $n$ and $h$.

Proof. By the above error estimate we immediately obtain
$$\|L_n(f)-f\|\le C\left[\frac{1}{n^{\alpha}}+\frac{\log(n)}{n^{\alpha}}\right].$$
Now, let us prove that the standard method in the real case works in the complex case too. Indeed, for $z_1,z_2\in D$, we obtain
$$|L_n(f)(z_1)-L_n(f)(z_2)|\le|L_n(f)(z_1)-f(z_1)|+|f(z_1)-f(z_2)|+|f(z_2)-L_n(f)(z_2)|,$$
which immediately implies
$$\omega(L_n(f);h)\le 2\|L_n(f)-f\|+\omega(f;h)\le C_{\alpha}\left[\frac{\log(n)}{n^{\alpha}}+h^{\alpha}\right].$$
On the other hand, by the mean value theorem in the complex case, there exist $\xi$ on the segment $[z_1,z_2]$ and $\lambda\in\mathbf C$ with $|\lambda|\le 1$, such that $L_n(f)(z_1)-L_n(f)(z_2)=\lambda\cdot L_n'(f)(\xi)(z_1-z_2)$, which implies $|L_n(f)(z_1)-L_n(f)(z_2)|\le|L_n'(f)(\xi)|\,|z_1-z_2|$, and therefore
$$\omega(L_n(f);h)\le\|L_n'(f)\|\cdot h.$$
But by, e.g., Corollary 1.3 in DeVore–Lorentz [41], p. 98, we have $\|L_n'(f)\|\le Cn\|L_n(f)\|$. Combining it with relation (3.7), page 46 in Sharma–Szabados [89], we obtain $\|L_n'(f)\|\le Cn\log(n)\|f\|$. As a conclusion, it follows that
$$\omega(L_n(f);h)\le C_f\,hn\log(n).$$
The best choice for $n$ follows from the equation $hn\log(n)=\frac{\log(n)}{n^{\alpha}}$, which implies $n=h^{-1/(1+\alpha)}$ and finally the estimate $\omega(L_n(f);h)\le C_{f,\alpha}h^{\alpha/(1+\alpha)}\log\left(\frac 1h\right)$, $\forall n\in\mathbf N$, $0<h<1$. The theorem is proved.

At the end of this section, let us make some considerations on the interpolation of normed-space-valued mappings of a real variable. More exactly, let $(X,\|\cdot\|)$ be a normed space over $K$, where $K=\mathbf R$ or $K=\mathbf C$, and $f:[a,b]\to X$, $[a,b]\subset\mathbf R$. First let us recall that in the recent paper [54], for $f:[0,1]\to X$ continuous on $[0,1]$ the following estimates hold:
(i)
$$C_1\omega_2^{\varphi}\!\left(f;\frac{1}{\sqrt n}\right)\le\|B_n(f)-f\|_u\le C_2\omega_2^{\varphi}\!\left(f;\frac{1}{\sqrt n}\right),$$
where $\|f\|_u=\sup\{\|f(x)\|;\ x\in[0,1]\}$ and $C_1,C_2>0$ are absolute constants.
(ii)
$$C_1\left[\omega_2^{\varphi}\!\left(f;\frac{1}{\sqrt n}\right)+\omega_1\!\left(f;\frac 1n\right)\right]\le\|K_n(f)-f\|_u\le C_2\left[\omega_2^{\varphi}\!\left(f;\frac{1}{\sqrt n}\right)+\omega_1\!\left(f;\frac 1n\right)\right];$$
(iii)
$$\|B_n(f)(x)-f(x)\|\le M\left(\frac{x(1-x)}{n}\right)^{\alpha/2},\qquad \forall x\in[0,1],$$
if and only if $\omega_2(f;\delta)=O(\delta^{\alpha})$, where $\alpha\le 2$.
(iv)
$$\omega_1(B_n(f);\delta)\le 2\omega_1(f;\delta),$$
for all $n\in\mathbf N$ and all $\delta\le 1$.
Here the usual first-order modulus of continuity $\omega_1(f;\delta)$, the usual second-order modulus of smoothness and the second Ditzian–Totik modulus of smoothness are defined by
$$\omega_1(f;\delta)=\sup\{\|f(v)-f(w)\|;\ v,w\in[0,1],\ |v-w|\le\delta\},$$
$$\omega_2(f;\delta)=\sup\{\sup\{\|f(x+h)-2f(x)+f(x-h)\|;\ x-h,x,x+h\in[0,1]\};\ h\in[0,\delta]\},$$
where $\delta\le\frac 12$, and
$$\omega_2^{\varphi}(f;\delta)=\sup\{\sup\{\|f(x+h\varphi(x))-2f(x)+f(x-h\varphi(x))\|;\ x\in I_{2,h}\};\ h\in[0,\delta]\},$$
respectively, with $I_{2,h}=\left[1-\frac{1}{1+h^2},\,\frac{1}{1+h^2}\right]$, $\varphi(x)=\sqrt{x(1-x)}$, $\delta\le 1$. The Bernstein and Bernstein–Kantorovich type operators attached to $f$ are defined by
$$B_n(f)(x)=\sum_{k=0}^{n}p_{n,k}(x)f\!\left(\frac kn\right)$$
and
$$K_n(f)(x)=(n+1)\sum_{k=0}^{n}p_{n,k}(x)\int_{k/(n+1)}^{(k+1)/(n+1)}f(t)\,dt,$$
respectively, with $p_{n,k}(x)=\binom nk x^k(1-x)^{n-k}$, and the integral $\int_a^b f(t)\,dt$ is defined as the limit for $m\to\infty$, in the norm $\|\cdot\|$, of the (classically defined) Riemann sums $\sum_{i=0}^{m}(x_{i+1}-x_i)f(\xi_i)$.

We want to show that these ideas can be used for interpolation too. Thus, for $f:[-1,1]\to X$ we can attach to $f$ the Lagrange polynomials $L_n(f)$ based on the Chebyshev knots of second kind plus the endpoints $-1,+1$, given by $L_n(f):[-1,1]\to X$,
$$L_n(f)(x)=\sum_{k=1}^{n}f(x_k)l_k(x),$$
where $x_k=\cos t_k$, $t_k=\frac{(k-1)\pi}{n-1}$, $k=1,\ldots,n$, and
$$l_k(x)=\frac{(-1)^{k-1}\omega_n(x)}{(1+\delta_{k1}+\delta_{kn})(n-1)(x-x_k)},\qquad k=1,\ldots,n,$$
with $\omega_n(x)=\sin t\,\sin(n-1)t$, $x=\cos t$, and $\delta_{kj}$ the Kronecker symbol. We can prove the following.

Theorem 1.2.8. Let $f:[-1,1]\to X$ be continuous on $[-1,1]$. We have:
(i)
$$\|L_n(f)(x)-f(x)\|\le C\omega_1\!\left(f;\frac 1n\right)\log n,\qquad \forall n\in\mathbf N,\ x\in[-1,1];$$
(ii) if $\|f(v)-f(w)\|\le M|v-w|^{\alpha}$, $\forall v,w\in[-1,1]$, where $0<\alpha<1$, then $\omega_1(L_n(f);h)$ satisfies the same estimates as in Corollary 1.2.2, (i).

Proof. For $x^*\in X^*=\{x^*:X\to K;\ x^*\text{ is linear and continuous}\}$ with $|||x^*|||\le 1$, define $g:[-1,1]\to\mathbf R$ by $g(x)=x^*[f(x)]$, $x\in[-1,1]$.
(i) By the relation (1.10) we have
$$\|g-L_n(g)\|\le C\omega_1\!\left(g;\frac 1n\right)\log n,\qquad \forall n\in\mathbf N,$$
where $\|g\|=\sup\{|g(x)|;\ x\in[-1,1]\}$.
By the linearity of $x^*$ we get $g(x)-L_n(g)(x)=x^*[f(x)-L_n(f)(x)]$ and therefore
$$|x^*[f(x)-L_n(f)(x)]|\le C\omega_1\!\left(g;\frac 1n\right)\log n,\qquad \forall x\in[-1,1].$$
On the other hand, by
$$|g(v)-g(w)|=|x^*[f(v)-f(w)]|\le|||x^*|||\cdot\|f(v)-f(w)\|\le\|f(v)-f(w)\|,$$
we easily get that $\omega_1(g;\frac 1n)\le\omega_1(f;\frac 1n)$. Combined with the above inequality, we get
$$|x^*[f(x)-L_n(f)(x)]|\le C\omega_1\!\left(f;\frac 1n\right)\log n,\qquad \forall x\in[-1,1].$$
For fixed $x\in[-1,1]$, passing to supremum with $|||x^*|||\le 1$ and taking into account the well-known classical equality
$$\|x\|=\sup\{|x^*(x)|;\ x^*\in X^*,\ |||x^*|||\le 1\},\qquad \forall x\in X,$$
it follows that
$$\|L_n(f)(x)-f(x)\|\le C\omega_1\!\left(f;\frac 1n\right)\log n,\qquad \forall n\in\mathbf N,\ x\in[-1,1].$$
(ii) We have $|g(v)-g(w)|=|x^*[f(v)-f(w)]|\le|||x^*|||\cdot\|f(v)-f(w)\|\le M|v-w|^{\alpha}$, for all $v,w\in[-1,1]$. Then by Corollary 1.2.2, (i), it follows that $\omega_1(L_n(g);h)\le CE_n(h,\alpha)$, where $C$ and $E_n(h,\alpha)$ obviously do not depend on $x^*$. Let $h>0$ and $v,w\in[-1,1]$ with $|v-w|\le h$ be fixed. We have
$$|x^*[L_n(f)(v)-L_n(f)(w)]|=|L_n(g)(v)-L_n(g)(w)|\le\omega_1(L_n(g);h)\le CE_n(h,\alpha).$$
Passing to supremum with $x^*\in X^*$, $|||x^*|||\le 1$, we get as at the above point (i), $\|L_n(f)(v)-L_n(f)(w)\|\le CE_n(h,\alpha)$. Passing now to supremum with $v,w\in[-1,1]$, $|v-w|\le h$, we obtain $\omega_1(L_n(f);h)\le CE_n(h,\alpha)$, which proves the theorem.
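Estimate (iv) quoted above from [54], $\omega_1(B_n(f);\delta)\le 2\omega_1(f;\delta)$, is easy to illustrate numerically in the scalar case $X=\mathbf R$. The sketch below is ours (function names `bernstein` and `modulus`, grid sizes and the test function are all our choices); it computes a discrete first modulus of continuity on a grid.

```python
import numpy as np
from math import comb

def bernstein(f, n, x):
    """Bernstein polynomial B_n(f)(x) on [0,1]."""
    x = np.asarray(x, dtype=float)
    return sum(comb(n, k) * x**k * (1 - x)**(n - k) * f(k / n)
               for k in range(n + 1))

def modulus(g, delta, grid=400):
    """Discrete first modulus of continuity of g on [0,1]."""
    xs = np.linspace(0.0, 1.0, grid)
    vals = g(xs)
    best = 0.0
    for i, xi in enumerate(xs):
        mask = np.abs(xs - xi) <= delta
        best = max(best, np.max(np.abs(vals[mask] - vals[i])))
    return best
```

For a Lipschitz test function such as $f(t)=|t-\tfrac 12|$ one observes that the modulus of $B_n(f)$ does not exceed twice that of $f$, in agreement with (iv) (in fact Bernstein operators preserve the Lipschitz constant itself).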
1.3 Algebraic Projection Operators and the Global Smoothness Preservation Property

Let $\Pi_n$ be the set of algebraic polynomials of degree at most $n$. It is well known that the algebraic projection operators $L_n:C[-1,1]\to\Pi_n$ are bounded linear operators having the properties: (i) $f\in C[-1,1]$ implies $L_n(f)\in\Pi_n$; (ii) $f\in\Pi_n$ implies $L_n(f)\equiv f$. The following approximation result for $L_n$ is known.
Theorem 1.3.1. (See Szabados [97], Theorem 2.) If $f^{(s)}\in C[-1,1]$, $s\in\mathbf N\cup\{0\}$, then
$$\|f^{(s)}-L_n^{(s)}(f)\|\le C_s n^{-s}\omega_1(f^{(s)};1/n)\cdot|||L_n^{(s)}|||^*,\qquad n\in\mathbf N,$$
where $\|\cdot\|$ represents the uniform norm on $[-1,1]$ and
$$|||L_n^{(s)}|||^*=\sup\{\|L_n^{(s)}(f)\|;\ f(x)(1-x^2)^{-s/2}\in C[-1,1],\ \|f(x)(1-x^2)^{-s/2}\|=1\}.$$

Remark. For $s=0$ it is easy to see that
$$|||L_n^{(s)}|||^*=|||L_n|||=\sup\{\|L_n(f)\|;\ f\in C[-1,1],\ \|f\|=1\}$$
and the above relation becomes
$$\|f-L_n(f)\|\le C\omega_1(f;1/n)\cdot|||L_n|||,\qquad n\in\mathbf N,$$
where $|||L_n|||\ge C\log n$. (See, e.g., Szabados–Vértesi [100], pp. 266 and 268.)

First we need the following simple

Lemma 1.3.1. If $f\in L^p[a,b]$, $1\le p\le\infty$, and $\{L_n(f)\}_n$ is a sequence of approximation operators such that $L_n(f)\in L^p[a,b]$, $n\in\mathbf N$, then for all $n,r\in\mathbf N$, $h\in\left(0,\frac{b-a}{r}\right)$, we have
$$\omega_r(L_n(f);h)_p\le 2^r\|f-L_n(f)\|_p+\omega_r(f;h)_p,$$
where $\omega_r(f;h)_p$ represents the usual modulus of smoothness, $\|\cdot\|_p$ is the classical $L^p$-norm, $L^{\infty}[a,b]\equiv C[a,b]$, $\omega_r(f;\cdot)_{\infty}\equiv\omega_r(f;\cdot)$.

Proof. Let first $1\le p<+\infty$. For $x\in[a,b-rt]$ we have
$$\Delta_t^rL_n(f)(x)=\Delta_t^r[L_n(f)-f](x)+\Delta_t^rf(x)=\sum_{k=0}^{r}(-1)^{r-k}\binom rk[L_n(f)(x+kt)-f(x+kt)]+\Delta_t^rf(x).$$
This implies
$$\left[\int_a^{b-rt}|\Delta_t^rL_n(f)(x)|^p\,dx\right]^{1/p}\le\left[\int_a^{b-rt}\left(\sum_{k=0}^{r}\binom rk|L_n(f)(x+kt)-f(x+kt)|+|\Delta_t^rf(x)|\right)^p dx\right]^{1/p}$$
$$\le\ \text{(by Minkowski's inequality)}\ \ 2^r\|L_n(f)-f\|_p+\left[\int_a^{b-rt}|\Delta_t^rf(x)|^p\,dx\right]^{1/p}.$$
Passing now to supremum with $0\le t\le h$ we get the lemma. For $p=+\infty$ the proof is obvious.
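The inequality of Lemma 1.3.1 can be verified numerically on a grid. The sketch below is ours (names and tolerances hypothetical); it takes $p=\infty$, $r=2$ and, as a concrete approximation operator, Lagrange interpolation of $f(x)=|x|$ on the Chebyshev knots of the second kind.

```python
import numpy as np

def cheb2_lagrange(f, n, xs):
    """Lagrange interpolant at the n knots cos((k-1)*pi/(n-1)), k = 1..n."""
    nodes = np.cos(np.arange(n) * np.pi / (n - 1))
    out = np.zeros_like(xs)
    for i in range(n):
        li = np.ones_like(xs)
        for j in range(n):
            if j != i:
                li *= (xs - nodes[j]) / (nodes[i] - nodes[j])
        out += li * f(nodes[i])
    return out

def omega2_sup(vals, xs, h):
    """Discrete second modulus of smoothness on an equispaced grid."""
    step = xs[1] - xs[0]
    kmax = int(h / step)
    best = 0.0
    for k in range(1, kmax + 1):
        d2 = np.abs(vals[2 * k:] - 2 * vals[k:-k] + vals[:-2 * k])
        best = max(best, d2.max())
    return best
```

On the grid, $\omega_2(L_n(f);h)\le 2^2\|L_n(f)-f\|+\omega_2(f;h)$ holds exactly, since it only uses the pointwise splitting $\Delta_t^2L_n(f)=\Delta_t^2[L_n(f)-f]+\Delta_t^2f$ from the proof above.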
The first main result of this section is the following:

Theorem 1.3.2. If $f^{(s)}\in C[-1,1]$, $s\in\mathbf N\cup\{0\}$, then for all $n\in\mathbf N$ and $h\in(0,1)$ we have
$$\omega_1(L_n^{(s)}(f);h)\le C_s\min\{h\|f\|\cdot|||L_n^{(s+1)}|||,\ n^{-s}\omega_1(f^{(s)};1/n)\cdot|||L_n^{(s)}|||^*+\omega_1(f^{(s)};h)\},$$
where $|||L_n|||=\sup\left\{\frac{\|L_n(f)\|}{\|f\|};\ 0\not\equiv f\in C[-1,1]\right\}$.

Proof. By Lemma 1.3.1, written for $p=+\infty$, $r=1$, $f\equiv f^{(s)}$ and $L_n^{(s)}(f)$, and by Theorem 1.3.1 we get
$$\omega_1(L_n^{(s)}(f);h)\le 2\|L_n^{(s)}(f)-f^{(s)}\|+\omega_1(f^{(s)};h)\le 2C_sn^{-s}\omega_1(f^{(s)};1/n)\cdot|||L_n^{(s)}|||^*+\omega_1(f^{(s)};h).$$
On the other hand, we obtain
$$\omega_1(L_n^{(s)}(f);h)\le h\|f\|\cdot|||L_n^{(s+1)}|||,$$
which completes the proof.
Corollary 1.3.1. Let us assume that $f^{(s)}\in\mathrm{Lip}_M(\alpha;[-1,1])$, $s\in\mathbf N\cup\{0\}$. Then the best possible result concerning the partial preservation of global smoothness of $f$ by $L_n^{(s)}(f)$ which can be derived from Theorem 1.3.2 is
$$\omega_1(L_n^{(s)}(f);h)=O\!\left(h^{\frac{\alpha}{2s+2+\alpha}}\left(\log\frac 1h\right)^{\frac{2s+2}{2s+2+\alpha}}\right),\qquad n\in\mathbf N,\ h\in(0,1),$$
attained if simultaneously we have
$$|||L_n^{(s)}|||^*=O(n^s\log n)\quad\text{and}\quad |||L_n^{(s+1)}|||=O(n^{2(s+1)}).$$
(All the constants in "O" are independent of $n$ and $h$.)

Proof. By Berman [18], the estimate
$$|||L_n^{(s)}|||=O(n^{2s}),\qquad s\in\mathbf N,$$
is the best possible, and by Szabados–Vértesi [100], Theorem 8.1, p. 266, the estimate
$$|||L_n^{(s)}|||^*=O(n^s\log n),\qquad s\in\mathbf N\cup\{0\},$$
is the best possible. Replacing in Theorem 1.3.2, we get
$$\omega_1(L_n^{(s)}(f);h)\le C_{s,\alpha,f}\min\{hn^{2(s+1)},\ n^{-\alpha}\log n+h^{\alpha}\},\qquad s\in\mathbf N\cup\{0\}.$$
By $hn^{2(s+1)}=n^{-\alpha}\log n$, reasoning exactly as in the proof of Corollary 1.2.2, (i), we choose $n=\left(\frac 1h\log\frac 1h\right)^{\frac{1}{2s+2+\alpha}}$ and we obtain
$$\omega_1(L_n^{(s)}(f);h)=O\!\left(h^{1-\frac{2s+2}{2s+2+\alpha}}\left(\log\frac 1h\right)^{\frac{2s+2}{2s+2+\alpha}}\right)=O\!\left(h^{\frac{\alpha}{2s+2+\alpha}}\left(\log\frac 1h\right)^{\frac{2s+2}{2s+2+\alpha}}\right),$$
which proves the corollary.
Remark. In Szabados [97], Lagrange interpolation operators $\{L_n(f)\}_n$ are constructed such that $|||L_n^{(s)}|||^*=O(n^s\log n)$, $s\in\mathbf N$. But we do not know if simultaneously we have $|||L_n^{(s+1)}|||=O(n^{2(s+1)})$.

Corollary 1.3.2. Let us denote $G_s=\{f\in C[-1,1];\ f(x)(1-x^2)^{-s/2}\in C[-1,1],\ \|f(x)(1-x^2)^{-s/2}\|=1\}$, $s\in\mathbf N$. If $L_n(f)(x)$ represents the Lagrange interpolating polynomial based on the roots of the polynomial introduced in Szabados–Vértesi [99], then for all $f\in G_{s+1}$ with $f^{(s)}\in\mathrm{Lip}_M(\alpha;[-1,1])$, $s\in\mathbf N\cup\{0\}$, we have
$$\omega_1(L_n^{(s)}(f);h)=O\!\left(h^{\frac{\alpha}{s+1+\alpha}}\log\frac 1h\right),\qquad 0<h<1,\ n\in\mathbf N,$$
where the constant which appears in "O" is independent of $n$ and $h$.

Proof. We have
$$|L_n^{(s)}(f)(x)-L_n^{(s)}(f)(y)|=|L_n^{(s+1)}(f)(\xi)|\cdot|x-y|\le|||L_n^{(s+1)}|||^*\,h,$$
for all $|x-y|\le h$ and $f\in G_{s+1}$, which immediately implies
$$\omega_1(L_n^{(s)}(f);h)\le h\,|||L_n^{(s+1)}|||^*,\qquad 0<h<1,\ f\in G_{s+1}.$$
Then, reasoning as in the proof of Theorem 1.3.2, we get
$$\omega_1(L_n^{(s)}(f);h)\le C\min\{h|||L_n^{(s+1)}|||^*,\ n^{-s}\omega_1(f^{(s)};1/n)\cdot|||L_n^{(s)}|||^*+\omega_1(f^{(s)};h)\}.$$
By Theorem 1 in Szabados–Vértesi [99], we have
$$|||L_n^{(s)}|||^*=O(n^s\log n),\qquad |||L_n^{(s+1)}|||^*=O(n^{s+1}\log n),$$
which replaced in the above inequality gives
$$\omega_1(L_n^{(s)}(f);h)\le C\min\{hn^{s+1}\log n,\ n^{-\alpha}\log n+h^{\alpha}\}.$$
The equation $hn^{s+1}\log n=n^{-\alpha}\log n$ gives the best choice, $h=n^{-s-1-\alpha}$, and the standard technique proves the corollary.
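The "standard technique" used repeatedly in these proofs — one bound increasing in $n$, one decreasing, with the crossing point giving the $n$-uniform estimate — can be illustrated numerically. The sketch below is ours; $B_1$ and $B_2$ are the two bounds from the proof of Corollary 1.3.2 (constants set to 1 for illustration), and `uniform_bound` is a hypothetical helper name.

```python
import numpy as np

def uniform_bound(h, s, alpha, n_max=2000):
    """For each n both B1 and B2 bound omega_1(L_n^(s)(f); h), so the
    n-uniform bound is sup over n of min(B1, B2); the sup is attained
    near the crossing n* = h**(-1/(s+1+alpha))."""
    ns = np.arange(2, n_max)
    b1 = h * ns ** (s + 1) * np.log(ns)          # increasing in n
    b2 = ns ** (-alpha) * np.log(ns) + h ** alpha  # decreasing part + h^alpha
    m = np.minimum(b1, b2)
    i = int(m.argmax())
    return m[i], int(ns[i])
```

For $s=0$, $\alpha=1/2$ and $h=10^{-3}$ the crossing indeed occurs near $n^*=h^{-1/(s+1+\alpha)}=100$, matching the balancing choice in the proof.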
1.4 Global Smoothness Preservation by Jackson Trigonometric Interpolation Polynomials

The goal of this section is to show that the sequence of Jackson interpolation trigonometric polynomials given by
$$J_n(f)(x)=\frac{2}{n+1}\sum_{k=0}^{n}f(t_k)\Phi_n(x-t_k),\qquad t_k=\frac{2\pi k}{n+1},$$
where $f\in C_{2\pi}$ and $\Phi_n(x)=\frac{1}{n+1}\cdot\frac{\sin^2((n+1)x/2)}{2\sin^2(x/2)}$, have the property of (partial) global smoothness preservation with respect to the uniform and average modulus of continuity. First we need:
Definition 1.4.1. (Stechkin [94], pp. 219–220.) Let $k\in\mathbf N$. One says that the function $\varphi$ is of $N^k$-class if it satisfies the following conditions: (i) $\varphi$ is defined on $[0,\pi]$; (ii) $\varphi$ is nondecreasing on $[0,\pi]$; (iii) $\varphi(0)=0$ and $\lim_{t\to 0}\varphi(t)=0$; (iv) $\varphi(t)\ge C_1t^k$, for all $t>0$; (v) there exists a constant $C_2>0$ such that $0<t<s\le\pi$ implies $s^{-k}\varphi(s)\le C_2t^{-k}\varphi(t)$.
For $f\in C_{2\pi}$ and $\varphi\in N^k$ one says that $f$ belongs to the $H_k(\varphi)$ class if $\omega_k(f;t)\le C_3\varphi(t)$, for all $t\in[0,\pi]$, where $\omega_k(f;t)$ represents the usual uniform modulus of smoothness of order $k$ of $f$.

Remark. An obvious example of $\varphi\in N^k$ is $\varphi(t)=\omega_k(f;t)$, with fixed $f\in C_{2\pi}$.
Theorem 1.4.1. (Stechkin [94], Theorem 6, p. 230.) Let $k\in\mathbf N$, $\varphi\in N^k$ and $f\in H_k(\varphi)$. If $\{T_n\}_n$ is a sequence of trigonometric polynomials with degree $T_n\le n$ which satisfy
$$\|f-T_n\|\le C_4\varphi(1/n),\qquad \text{for all } n\in\mathbf N,$$
then
$$\omega_k(T_n;h)\le C_5\varphi(h),\qquad \text{for all } h>0,\ n\in\mathbf N,$$
where $\|\cdot\|$ represents the uniform norm and $C_4,C_5$ are absolute positive constants (independent of $f$ and $n$).

With respect to the uniform modulus of continuity, we present:

Theorem 1.4.2. Let $f\in C_{2\pi}$. If $f\in\mathrm{Lip}_M\alpha$, $0<\alpha<1$ (i.e., $\omega_1(f;h)\le Ch^{\alpha}$), then
$$\omega_1(J_n(f);h)\le Ch^{\alpha},\qquad h>0,\ n\in\mathbf N,$$
and if $f\in\mathrm{Lip}_M1$, then
$$\omega_1(J_n(f);h)\le Ch\log\frac 1h,\qquad h\in(0,1),\ n\in\mathbf N.$$

Proof. By Szabados [95] the estimate
$$\|f-J_n(f)\|\le C[\omega_1(f;1/n)+\omega_1(\tilde f;1/n)],\qquad n\in\mathbf N,$$
holds, where $\tilde f$ represents the trigonometric conjugate of $f$. Let $f\in\mathrm{Lip}_M\alpha$, $0<\alpha\le 1$. If $\alpha<1$, then it is known (see, e.g., Bari–Stechkin [9], p. 485) that this is equivalent to $\tilde f\in\mathrm{Lip}_M\alpha$, which, by the above estimate and by Theorem 1.4.1 (applied to $\varphi(h)=h^{\alpha}$), gives $\omega_1(J_n(f);h)\le Ch^{\alpha}$, $h>0$, $n\in\mathbf N$. Also, if $\alpha=1$, then by, e.g., Zygmund [110], p. 157, it follows that $\omega_1(\tilde f;h)\le Mh\log\frac 1h$, which again with the above estimate and with Theorem 1.4.1 (applied to $\varphi(h)=h\log\frac 1h$) yields
$$\omega_1(J_n(f);h)\le Ch\log\frac 1h,\qquad h\in(0,1),\ n\in\mathbf N.$$
Now let us introduce the so-called average modulus of continuity by the following.

Definition 1.4.2. (Sendov [87]) The local modulus of continuity of a bounded $2\pi$-periodic Riemann integrable function $f(x)$ is given by
$$\omega_1(f,x,h)=\sup\{|f(x')-f(x'')|;\ x',x''\in[x-h/2,x+h/2]\},$$
while the average modulus of continuity in $L^p_{2\pi}$ is given by $\tau(f;h)_{L^p}=\|\omega_1(f,\cdot,h)\|_{L^p}$, $1\le p<\infty$.

Remark. These concepts are very useful in some approximation-theoretic problems where the ordinary $L^p$-modulus of continuity $\omega_1(f,h)_{L^p}=\sup\{\|f(x+t)-f(x)\|_{L^p};\ |t|\le h\}$ is not applicable.

First we prove the following estimates.

Theorem 1.4.3. For the Jackson trigonometric interpolation polynomials $J_n(f)(x)$ defined above, it follows:
(i)
$$\|J_n'(f)\|_{L^1}\le C\sum_{k=1}^{n}\tau\!\left(f;\frac 1k\right)_{L^1};$$
(ii)
$$\tau(J_n(f);h)_{L^1}\le\frac Cn\sum_{k=1}^{n}\tau\!\left(f;\frac 2k+h\right)_{L^1}+\tau(f;h)_{L^1}.$$
Proof. (i) Reasoning as in the proof of Theorem 1 in Popov–Szabados [78] we easily get
$$|J_n'(f)(x)|\le\frac Cn\sum_{k=0}^{n}|f(t_k)-f(x)|\,|\Phi_n'(x-t_k)|.$$
But
$$\Phi_n'(u)=\frac{1}{2(n+1)}\left[(n+1)\sin\frac{(n+1)u}{2}\cos\frac{(n+1)u}{2}\sin^{-2}\frac u2-\sin^2\frac{(n+1)u}{2}\sin^{-3}\frac u2\cos\frac u2\right].$$
Denoting by $x_j$ the nearest node to $x$ and taking into account that $|x-t_k|\sim\frac{|j-k|}{n}$, $\forall k\ne j$, similarly to the proof of Theorem 1 in Popov–Szabados [78] we obtain
$$|\Phi_n'(x-t_k)|\le C[n^2|j-k|^{-2}+n^2|j-k|^{-3}]\le Cn^2|j-k|^{-2},$$
and
$$|\Phi_n'(u)|=\left|\frac{1}{n+1}\sum_{k=1}^{n}(n+1-k)k\sin(ku)\right|\le\frac{1}{n+1}\sum_{k=1}^{n}(n+1-k)k\le Cn^2.$$
It follows that
$$|J_n'(f)(x)|\le\frac Cn\Big[cn^2\omega_1\!\left(f,x,\frac 1n\right)+n^2\sum_{k=0,\,k\ne j}^{n}|j-k|^{-2}\omega_1(f,x,|x-t_k|)\Big]\le Cn\sum_{k=1}^{n}k^{-2}\omega_1(f,x,k/n).$$
Integrating and reasoning exactly as in the proof of Theorem 1 in Popov–Szabados [78], we obtain
$$\|J_n'(f)\|_{L^1}\le Cn\sum_{k=1}^{n}k^{-2}\tau\!\left(f;\frac kn\right)_{L^1}\le C\sum_{k=1}^{n}\tau\!\left(f;\frac 1k\right)_{L^1}.$$
(ii) Let $x',x''\in\left[x-\frac h2,x+\frac h2\right]$. We have
$$|J_n(f)(x')-J_n(f)(x'')|\le|J_n(f)(x')-f(x')|+|J_n(f)(x'')-f(x'')|+|f(x')-f(x'')|$$
(see the proof of Theorem 1 in Popov–Szabados [78])
$$\le C\sum_{i=1}^{n}i^{-2}\omega_1(f,x',i/n)+C\sum_{i=1}^{n}i^{-2}\omega_1(f,x'',i/n)+|f(x')-f(x'')|.$$
Passing to supremum over $x',x''\in\left[x-\frac h2,x+\frac h2\right]$, it follows that
$$\omega_1(J_n(f),x,h)\le C\sum_{i=1}^{n}i^{-2}\sup\left\{\omega_1(f,x',i/n);\ x'\in\left[x-\frac h2,x+\frac h2\right]\right\}$$
$$+\,C\sum_{i=1}^{n}i^{-2}\sup\left\{\omega_1(f,x'',i/n);\ x''\in\left[x-\frac h2,x+\frac h2\right]\right\}+\omega_1(f,x,h).$$
But
$$\sup\left\{\omega_1(f,x',i/n);\ x'\in\left[x-\frac h2,x+\frac h2\right]\right\}=\sup\left\{\sup\left\{|f(u')-f(u'')|;\ u',u''\in\left[x'-\frac{i}{2n},x'+\frac{i}{2n}\right]\right\};\ x'\in\left[x-\frac h2,x+\frac h2\right]\right\}\le\omega_1\!\left(f,x,\frac{2i}{n}+h\right).$$
Integrating on $[0,2\pi]$, it follows that
$$\tau(J_n(f);h)_{L^1}\le C\sum_{i=1}^{n}i^{-2}\tau\!\left(f;\frac{2i}{n}+h\right)_{L^1}+\tau(f;h)_{L^1}\le$$
(reasoning as in the proof of Theorem 1 in Popov–Szabados [78])
$$\frac Cn\int_{2/n}^{2}u^{-2}\tau(f;u+h)_{L^1}\,du+\tau(f;h)_{L^1}\le\frac Cn\int_{1/2}^{n/2}\tau(f;1/v+h)_{L^1}\,dv+\tau(f;h)_{L^1}\le\frac Cn\sum_{i=1}^{n}\tau\!\left(f;\frac 2i+h\right)_{L^1}+\tau(f;h)_{L^1},$$
which proves the theorem.

Corollary 1.4.1. Let us suppose that $\tau(f;h)_{L^1}\le Mh^{\alpha}$, for all $0<h<1$, where $\alpha\in(0,1]$. We have:
$$\tau(J_n(f);h)_{L^1}\le Ch^{\alpha},\qquad \forall h\in(0,1),\ n\in\mathbf N,\ \text{if } 0<\alpha<1,$$
and
$$\tau(J_n(f);h)_{L^1}\le Ch\log\frac 1h,\qquad \forall h\in(0,1),\ n\in\mathbf N,\ \text{if } \alpha=1.$$

Proof. By Sendov–Popov [88], the property $\tau(f;h)_{L^p}\le Ch\|f'\|_{L^p}$, $\forall h\in(0,1)$, holds. This and Theorem 1.4.3, (i), imply
$$\tau(J_n(f);h)_{L^1}\le Ch\|J_n'(f)\|_{L^1}\le Ch\sum_{k=1}^{n}1/k^{\alpha}\le Chn^{1-\alpha},\qquad \forall h\in(0,1).$$
That is, $\tau(J_n(f);h)_{L^1}\le Chn^{1-\alpha}$ for $\alpha\in(0,1)$, and $\tau(J_n(f);h)_{L^1}\le Ch\log(n)$ for $\alpha=1$. Also, by Theorem 1.4.3, (ii), for $\alpha\in(0,1)$ we get
$$\tau(J_n(f);h)_{L^1}\le\frac Cn\left[\sum_{k=1}^{n}\frac{1}{k^{\alpha}}+nh^{\alpha}\right]+h^{\alpha}\le Cn^{-\alpha}+Ch^{\alpha},$$
while for $\alpha=1$ we have
$$\tau(J_n(f);h)_{L^1}\le C\,\frac{\log(n)}{n}+Ch.$$
The standard method gives the optimal choice $n=h^{-1}$ from the equations $n^{-\alpha}=hn^{1-\alpha}$ and $\frac{\log(n)}{n}=h\log(n)$, corresponding to the cases $0<\alpha<1$ and $\alpha=1$, respectively. Replacing in the above estimates, we get the statement of the corollary.
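The interpolatory character of $J_n$, used throughout this section, can be checked numerically: $\Phi_n(t_j-t_k)=0$ for $k\ne j$ and $\Phi_n(u)\to\frac{n+1}{2}$ as $u\to 0$, so $J_n(f)(t_j)=f(t_j)$. The sketch below is ours (the function name and the node-detection threshold are our choices).

```python
import numpy as np

def jackson_interp(f, n, x):
    """Jackson trigonometric interpolation polynomial J_n(f) on the
    equidistant nodes t_k = 2*pi*k/(n+1)."""
    tk = 2 * np.pi * np.arange(n + 1) / (n + 1)

    def phi(u):
        # kernel Phi_n(u) = sin^2((n+1)u/2) / (2(n+1) sin^2(u/2)),
        # with the limit value (n+1)/2 at u = 0 (mod 2*pi)
        u = np.mod(u, 2 * np.pi)
        out = np.full_like(u, (n + 1) / 2.0)
        nz = np.abs(np.sin(u / 2)) > 1e-12
        out[nz] = (np.sin((n + 1) * u[nz] / 2) ** 2
                   / ((n + 1) * 2 * np.sin(u[nz] / 2) ** 2))
        return out

    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.array([2.0 / (n + 1) * np.dot(f(tk), phi(xv - tk)) for xv in x])
```

Evaluating at any node $t_j$ reproduces $f(t_j)$ to machine precision, confirming that $J_n$ is indeed an interpolation operator.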
1.5 Trigonometric Projection Operators and the Global Smoothness Preservation Property

Let us denote by $T_n$ the set of trigonometric polynomials of degree at most $n$. It is well known that the trigonometric projection operators $P_n:C_{2\pi}\to T_n$ are bounded linear operators having the properties: (i) $f\in C_{2\pi}$ implies $P_n(f)\in T_n$; (ii) $f\in T_n$ implies $P_n(f)\equiv f$. The following approximation result for $P_n$ is known.

Theorem 1.5.1. (Runck–Szabados–Vértesi [85], relation (19).) Let $s\in\mathbf N\cup\{0\}$. If $f^{(s)}\in C_{2\pi}$ then
$$\|f^{(s)}-P_n^{(s)}(f)\|\le C_sn^{-s}\omega_1(f^{(s)};1/n)\cdot|||P_n^{(s)}|||,\qquad n\in\mathbf N,$$
where $\|\cdot\|$ represents the uniform norm on $\mathbf R$ and
$$|||P_n^{(s)}|||=\sup\left\{\frac{\|P_n^{(s)}(f)\|}{\|f\|};\ 0\not\equiv f\in C_{2\pi}\right\}.$$
Also, we need the following result.

Lemma 1.5.1. If $f\in L^p_{2\pi}$, $1\le p\le+\infty$, and $\{T_n(f)\}_n$ is a sequence of approximation operators such that $T_n(f)\in L^p_{2\pi}$, $n\in\mathbf N$, then for all $n,r\in\mathbf N$, $h>0$, we have
$$\omega_r(T_n(f);h)_p\le 2^r\|T_n(f)-f\|_p+\omega_r(f;h)_p,$$
where $\omega_r(f;h)_p$ represents the periodic $L^p$-modulus of smoothness of order $r$ and $\|\cdot\|_p$ is the classical $L^p_{2\pi}$-norm ($L^{\infty}_{2\pi}\equiv C_{2\pi}$, $\omega_r(f;\cdot)_{\infty}\equiv\omega_r(f;\cdot)$).

Proof. For $0\le t$ and $x\in\mathbf R$, we have
$$\Delta_t^r[T_n(f)](x)=\Delta_t^r[T_n(f)-f](x)+\Delta_t^rf(x)=\sum_{k=0}^{r}(-1)^{r-k}\binom rk[T_n(f)(x+kt)-f(x+kt)]+\Delta_t^rf(x).$$
If $1\le p<+\infty$, then we obtain
$$\left[\int_0^{2\pi}|\Delta_t^r[T_n(f)](x)|^p\,dx\right]^{1/p}\le\left[\int_0^{2\pi}\left(\sum_{k=0}^{r}\binom rk|T_n(f)(x+kt)-f(x+kt)|+|\Delta_t^rf(x)|\right)^p dx\right]^{1/p}$$
$$\le\ \text{(by Minkowski's inequality)}\ \ \sum_{k=0}^{r}\binom rk\left[\int_0^{2\pi}|T_n(f)(x+kt)-f(x+kt)|^p\,dx\right]^{1/p}+\left[\int_0^{2\pi}|\Delta_t^rf(x)|^p\,dx\right]^{1/p}$$
$$=\sum_{k=0}^{r}\binom rk\left[\int_{kt}^{2\pi+kt}|T_n(f)(u)-f(u)|^p\,du\right]^{1/p}+\left[\int_0^{2\pi}|\Delta_t^rf(x)|^p\,dx\right]^{1/p}$$
$$=\sum_{k=0}^{r}\binom rk\left[\int_0^{2\pi}|T_n(f)(u)-f(u)|^p\,du\right]^{1/p}+\left[\int_0^{2\pi}|\Delta_t^rf(x)|^p\,dx\right]^{1/p}=2^r\|T_n(f)-f\|_p+\left[\int_0^{2\pi}|\Delta_t^rf(x)|^p\,dx\right]^{1/p}.$$
Passing to supremum with $0\le t\le h$ we easily get the statement for $1\le p<+\infty$. For $p=+\infty$ the proof is obvious.

Now, concerning the (partial) global smoothness preservation by $P_n(f)$ we can prove the following results.

Theorem 1.5.2. If $f^{(s)}\in C_{2\pi}$, $s\in\mathbf N\cup\{0\}$, then for all $n\in\mathbf N$ and $h\in(0,1)$ we have
$$\omega_1(P_n^{(s)}(f);h)\le C_s\min\{h\|f\|\cdot|||P_n^{(s+1)}|||,\ n^{-s}\omega_1(f^{(s)};1/n)\cdot|||P_n^{(s)}|||+\omega_1(f^{(s)};h)\}.$$

Proof. By Lemma 1.5.1, written for $p=+\infty$, $r=1$, $f\equiv f^{(s)}$ and $T_n(f)\equiv P_n^{(s)}(f)$, and by Theorem 1.5.1, we get
$$\omega_1(P_n^{(s)}(f);h)\le 2\|P_n^{(s)}(f)-f^{(s)}\|+\omega_1(f^{(s)};h)$$
$$\le 2C_sn^{-s}\omega_1(f^{(s)};1/n)\cdot|||P_n^{(s)}|||+\omega_1(f^{(s)};h).$$
On the other hand, if $|x-y|\le h$, $x,y\in\mathbf R$, we have
$$|P_n^{(s)}(f)(x)-P_n^{(s)}(f)(y)|\le|x-y|\cdot\|P_n^{(s+1)}(f)\|\le h\|P_n^{(s+1)}(f)\|\le h\|f\|\cdot|||P_n^{(s+1)}|||.$$
Passing to supremum, we get $\omega_1(P_n^{(s)}(f);h)\le h\|f\|\cdot|||P_n^{(s+1)}|||$. Collecting the inequalities, we immediately obtain
$$\omega_1(P_n^{(s)}(f);h)\le C_s\min\{h\|f\|\cdot|||P_n^{(s+1)}|||,\ n^{-s}\omega_1(f^{(s)};1/n)\cdot|||P_n^{(s)}|||+\omega_1(f^{(s)};h)\},$$
which proves the theorem.
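A concrete trigonometric projection is the partial Fourier sum, and with it Lemma 1.5.1 ($r=1$, $p=\infty$) can be checked on a grid. The sketch below is ours (quadrature by periodic trapezoidal rule; names hypothetical), and the inequality holds exactly on the grid since it only uses the pointwise splitting from the lemma's proof.

```python
import numpy as np

def fourier_partial_sum(f, n, x, quad_pts=1024):
    """Partial Fourier sum S_n(f)(x), a trigonometric projection onto T_n
    (coefficients approximated by the periodic trapezoidal rule)."""
    t = 2 * np.pi * np.arange(quad_pts) / quad_pts
    ft = f(t)
    out = np.full_like(np.asarray(x, dtype=float), ft.mean())
    for k in range(1, n + 1):
        ak = 2 * np.mean(ft * np.cos(k * t))
        bk = 2 * np.mean(ft * np.sin(k * t))
        out = out + ak * np.cos(k * np.asarray(x)) + bk * np.sin(k * np.asarray(x))
    return out

def disc_omega1(vals, xs, h):
    """Discrete first modulus of continuity on an equispaced grid."""
    step = xs[1] - xs[0]
    kmax = max(1, int(h / step))
    return max(np.max(np.abs(vals[j:] - vals[:-j])) for j in range(1, kmax + 1))
```

With $f(t)=|\sin t|$ one verifies $\omega_1(S_n(f);h)\le 2\|S_n(f)-f\|+\omega_1(f;h)$ numerically, the scheme underlying Theorem 1.5.2.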
Corollary 1.5.1. Let us assume that $f^{(s)} \in \mathrm{Lip}_M\,\alpha$, $s \in \mathbb{N}\cup\{0\}$, $0 < \alpha \le 1$. Then the best possible result concerning the partial preservation of the global smoothness of $f$ by $P_n^{(s)}(f)$ which can be derived from Theorem 1.5.2 is
$$\omega_1(P_n^{(s)}(f);h) \le M\,h^{\frac{\alpha}{s+1+\alpha}}\log\frac{1}{h}, \qquad 0 < h < 1,\quad n \in \mathbb{N},$$
attained if simultaneously we have
$$|||P_n^{(s)}||| = O(n^s\log n) \quad\text{and}\quad |||P_n^{(s+1)}||| = O(n^{s+1}\log n).$$

Proof. By Berman [19], the estimates $|||P_n^{(s)}||| = O(n^s\log n)$ and $|||P_n^{(s+1)}||| = O(n^{s+1}\log n)$ are the best possible. Replacing in Theorem 1.5.2, we obtain
$$\omega_1(P_n^{(s)}(f);h) \le C_{s,\alpha,f}\,\min\{h\,n^{s+1}\log n,\; n^{-\alpha}\log n + h^{\alpha}\},$$
for all $n \in \mathbb{N}$, $h \in (0,1)$. From the equation $n^{-\alpha}\log n = h\,n^{s+1}\log n$ we get $h = n^{-s-1-\alpha}$. This is the best choice for $h$, because when $h < n^{-s-1-\alpha}$ the minimum in the above inequality is $h\,n^{s+1}\log n$, and when $h > n^{-s-1-\alpha}$ it is $n^{-\alpha}\log n + h^{\alpha}$. As a conclusion, replacing $n = h^{-\frac{1}{s+1+\alpha}}$, we immediately obtain the corollary.
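The balancing argument in the proof is easy to check numerically. A minimal sketch (the values of $s$, $\alpha$ and $n$ below are illustrative choices, not from the text; the constants in the estimates are dropped):

```python
import math

def branches(h, n, s, alpha):
    # The two branches of the min in Corollary 1.5.1, with constants dropped:
    # h * n^(s+1) * log n   versus   n^(-alpha) * log n + h^alpha.
    return h * n ** (s + 1) * math.log(n), n ** (-alpha) * math.log(n) + h ** alpha

s, alpha, n = 1, 0.5, 64              # illustrative parameters
h_star = n ** (-(s + 1 + alpha))      # the balancing choice h = n^(-s-1-alpha)

b1, b2 = branches(h_star, n, s, alpha)
# At h = h_star the first branch equals the n^(-alpha)*log n term exactly,
# so neither branch dominates; for smaller h the first branch is the minimum.
assert math.isclose(b1, n ** (-alpha) * math.log(n), rel_tol=1e-9)
assert branches(h_star / 2, n, s, alpha)[0] < branches(h_star / 2, n, s, alpha)[1]
```

The crossover point moves like $n^{-s-1-\alpha}$, which is exactly the substitution $n = h^{-1/(s+1+\alpha)}$ made at the end of the proof.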
1.6 Bibliographical Remarks and Open Problems

Theorems 1.1.1, 1.2.1, 1.2.3, Corollaries 1.2.1, 1.2.2, Theorem 1.4.2 and Corollary 1.2.3 are from Gal–Szabados [56]. Lemma 1.3.1, Theorem 1.3.2, Corollaries 1.3.1, 1.3.2, Lemma 1.5.1, Theorem 1.5.2 and Corollary 1.5.1 are from the book Anastassiou–Gal [6]. Completely new are the following results: Theorems 1.2.2, 1.2.4, 1.2.7, Corollaries 1.2.4, 1.2.5, Theorem 1.2.8, Theorem 1.4.3 and Corollary 1.4.1. Below we describe several open problems which might be of interest for future research.

Open Problem 1.6.1. Construct a sequence of algebraic projection operators for which in Corollary 1.3.1 we have $|||L_n^{(s)}|||_* = O(n^s\log n)$ and $|||L_n^{(s+1)}||| = O(n^{2(s+1)})$, $n \in \mathbb{N}$.

Open Problem 1.6.2. Prove global smoothness preservation properties for the Balázs–Shepard operator on an infinite interval (semi-axis) from Della Vecchia–Mastroianni–Szabados [38], defined on the knots $x_k = k^{\gamma}/n^{\gamma/2}$, $k = 0,1,\ldots,n$, $\gamma \ge 1$, with respect to the modulus of continuity $\omega(f;t)_w$. We recall that in [38] the convergence behavior is considered with respect to the weight $w(x) = (1+x)^{-\beta}$ and
$$\omega(f;t)_w = \sup_{0\le h\le t}\big\{\|w(x)[f(x + h\varphi(x)) - f(x)]\|\big\},$$
the modulus of continuity of $f$ with the step function $\varphi(x) = x^{1-1/\gamma}$.

Open Problem 1.6.3. Construct a sequence of trigonometric projection operators $\{P_n(f)\}_n$ (for example, a sequence of interpolating trigonometric polynomials) simultaneously satisfying
$$|||P_n^{(s)}||| = O(n^s\log n), \qquad |||P_n^{(s+1)}||| = O(n^{s+1}\log n), \qquad n \in \mathbb{N},$$
for a fixed $s \in \mathbb{N}\cup\{0\}$. The results in the algebraic case in Szabados–Vértesi [99] may be useful here.

Open Problem 1.6.4. Let us consider the so-called Grünwald interpolation polynomials given by $G_n(f)(x) = \sum_{i=1}^{n} l_i^2(x)f(x_i)$, where $l_i(x) = \frac{l(x)}{(x-x_i)l'(x_i)}$, $l(x) = \prod_{i=1}^{n}(x-x_i)$, $x_i = \cos\big((2i-1)\pi/(2n)\big)$. In Jiang Gongjian [64] the following estimate of convergence is proved: if $f \in \mathrm{Lip}\,\alpha$ $(0 < \alpha \le 1)$, then
$$|G_n(f)(x) - f(x)| \le \frac{4\sqrt{2}}{(1+x)n}(1 + \log n)\,|f(x)| + g_n(x), \qquad (-1 < x \le 1),$$
where
$$g_n(x) \le \frac{8}{1+x}\Big(2 + \frac{1}{1-\alpha}\Big)\Big(\frac{\pi}{n}\Big)^{\alpha}, \qquad (0 < \alpha < 1),$$
and
$$g_n(x) \le \frac{4\sqrt{2}}{(1+x)n}(1 + \log n), \qquad (\alpha = 1).$$
Then, proving an estimate for $\|G_n'(f)\|$ and using the standard method in this chapter, find the global smoothness preservation property, with respect to the usual uniform modulus of continuity, satisfied by the Grünwald interpolation polynomials.

More generally, let us consider the positive linear operators introduced by Criscuolo–Mastroianni [31] of the form
$$V_m(A;\ell;f;x) = \sum_{k=1}^{m}\frac{l_{m,k}^2(x)}{\ell_m^2(x_{m,k})}\,f(x_{m,k})\Bigg/\sum_{k=1}^{m}\frac{l_{m,k}^2(x)}{\ell_m^2(x_{m,k})},$$
where $A = (x_{m,k})_{k=1}^{m}$ is a triangular matrix of nodes, $l_{m,k}(x)$ are the corresponding fundamental polynomials of Lagrange interpolation, and $(\ell_m(x))_{m=1}^{\infty}$ is a sequence of functions such that $\ell_m(x_{m,i}) \ne 0$, $i = 1,\ldots,m$. Obviously the $V_m$ are combined Shepard–Grünwald operators. The question is to find global smoothness preservation properties of these operators with respect to the usual uniform modulus of continuity.

Open Problem 1.6.5. Let $X = (x_{k,n})_{k=1}^{n+1}$ be an interpolatory matrix on $[a,b]$. For any $f \in C[a,b]$ and $x \in [a,b]$, let us consider the Shepard operator
$$S_{n,s}(X;f;x) = \frac{\sum_{k=1}^{n+1} f(x_{k,n})\,|x - x_{k,n}|^{-s}}{\sum_{k=1}^{n+1} |x - x_{k,n}|^{-s}}, \qquad s > 1.$$
Let $\varphi(x)$ be a function defined on $[a,b]$ such that $\varphi \sim 1$ locally, $\varphi(x) \sim (x-a)^{\alpha}$ as $x \to a+$ and $\varphi(x) \sim (b-x)^{\beta}$ as $x \to b-$, where $\alpha$ and $\beta$ are non-negative numbers. The behavior of convergence in terms of the so-called Ditzian–Totik modulus of continuity $\omega_1^{\varphi}(f;h)$ is estimated by Della Vecchia–Mastroianni–Vértesi [40]. The question is to find global smoothness preservation properties of $S_{n,s}(X;f;x)$ with respect to the Ditzian–Totik modulus of continuity $\omega_1^{\varphi}(f;h)$.

Open Problem 1.6.6. Let $f \in C_{2\pi}$ and let $I_n(f)$ be the trigonometric Lagrange interpolation polynomial of degree $n$ on equidistant nodes in $[0,2\pi)$. A classical theorem of Marcinkiewicz and Zygmund (see, e.g., Zygmund [111]) shows that
$$\|I_n(f) - f\|_{L_p} \le C\,\omega\Big(f;\frac{1}{n}\Big)_{\infty}.$$
On the other hand, we can get
$$\omega(I_n(f);h)_{L_p} \le C\,\|I_n(f) - f\|_{L_p} + \omega(f;h)_{L_p} \quad\text{and}\quad \omega(I_n(f);h)_{L_p} \le C\,h\,\|I_n'(f)\|_{L_p}.$$
The problem is to find an estimate of $\|I_n'(f)\|_{L_p}$ in terms of the $\omega(f;\frac{1}{n})_{\infty}$ modulus, or more generally in terms of the $\omega(f;\frac{1}{n})_{L_p}$ modulus, which would
imply a global smoothness preservation property of $I_n(f)$ with respect to the usual $L_p$-modulus of continuity.

Open Problem 1.6.7. In Prestin–Xu [83] it is proved (see Corollary 3.2 on p. 118 there) that if $f \in W_1^p$ and $m_1 > 1$, then
$$\|F_n(f) - f\|_{L_p} \le C_p\,n^{-1}\,\omega_{m_1-1}\Big(f';\frac{1}{n}\Big)_{L_p}, \qquad 1 < p < \infty,$$
where $F_n(f)$ represents the $(0,m_1)$ trigonometric interpolation polynomials on the nodes $2k\pi/n$, $k = 0,\ldots,n-1$. An estimate of $\|F_n'(f)\|_{L_p}$ in terms of the $\omega(f;\frac{1}{n})_{L_p}$ modulus would imply, by the standard method of this chapter, a global smoothness preservation property of $F_n(f)$ with respect to the usual $L_p$-modulus of continuity.

Open Problem 1.6.8. According to a result of Prasad and Varma (see, e.g., the survey Szabados–Vértesi [101] and its references), for the Hermite–Fejér polynomial $H_n(f)$ based on the knots of Chebyshev of first kind, the estimate
$$\|H_n(f) - f\|_{L_p,w} \le C\,\omega\Big(f;\frac{1}{n}\Big)_{\infty}$$
holds, where $w(x) = (1-x^2)^{-1/2}$ and
$$\|f\|_{L_p,w} = \Big(\int_{-1}^{1} w(x)\,|f(x)|^p\,dx\Big)^{1/p}, \qquad 0 < p < \infty.$$
Defining the modulus
$$\omega(f;h)_{L_1,w} = \sup\Big\{\int_{-1+t}^{1-t} w(x)\,|f(x+t) - f(x-t)|\,dx;\ 0 \le t \le h\Big\}, \qquad 0 < h < 1,$$
and possibly taking into account the estimate for $\|H_n'(f)\|$ in the proof of Theorem 1.2.1 (see relation (1.7) there), find a global smoothness preservation result for $H_n(f)$ through the above weighted modulus of continuity. A similar problem arises for the mean convergence of Lagrange interpolation polynomials.

Open Problem 1.6.9. Let $J_n(f)(x)$ be the Jackson interpolation trigonometric polynomials considered in Section 1.4. Using the estimate of $\|f - J_n(f)\|_{L_p}$, $1 < p < +\infty$, in terms of the $L_p$-average modulus of continuity $\tau(f;h)_{L_p}$ in Theorem 2 in Popov–Szabados [78] and the standard method in this chapter, find the global smoothness preservation properties of $J_n(f)(x)$ in terms of $\tau(f;h)_{L_p}$, $p > 1$. (The case $p = 1$ was solved by Corollary 1.4.1.)
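Operators of the Shepard family recur in several of these problems; the basic operator of Open Problem 1.6.5 is easy to experiment with numerically. A minimal sketch (the equidistant nodes on $[-1,1]$, the exponent $s=2$ and the near-node guard are our illustrative choices, not part of the definition):

```python
import numpy as np

def shepard(nodes, fvals, x, s=2.0):
    """S_{n,s}(X; f; x) = sum_k f(x_k)|x - x_k|^{-s} / sum_k |x - x_k|^{-s}."""
    d = np.abs(x - nodes)
    k = int(np.argmin(d))
    if d[k] < 1e-14:
        return float(fvals[k])      # the operator interpolates at the nodes
    w = d ** (-s)
    return float(np.dot(w, fvals) / w.sum())

nodes = np.linspace(-1.0, 1.0, 11)
fvals = np.abs(nodes)               # f(x) = |x|, a simple test function

# interpolation at the nodes, and values stay between min f and max f
# (the weights form a convex combination)
assert all(abs(shepard(nodes, fvals, xn) - fn) < 1e-12
           for xn, fn in zip(nodes, fvals))
assert fvals.min() <= shepard(nodes, fvals, 0.37) <= fvals.max()
```

The convex-combination structure is exactly what makes $S_{n,s}$ a positive linear operator, the property exploited throughout these open problems.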
Open Problem 1.6.10. Let Sn,λ,Lm (f )(x) be the Shepard–Lagrange operator attached to f ∈ C[−1, 1] and to the roots of the sequence of orthogonal polynomials in Trimbitas [104]. Using the estimate of ||f −Sn,λ,Lm (f )|| in Theorem 2 in [104] and the standard method in this chapter, find the global smoothness preservation properties of Sn,λ,Lm (f ).
Open Problem 1.6.11. Let the Shepard–Taylor operator be given by
$$S_{n,\lambda,T_m}(f)(x) = \sum_{i=-n}^{n} A_{i,\lambda}(x)\,T_{m,i}(f)(x),$$
where
$$A_{i,\lambda}(x) = \frac{|x-x_i|^{-\lambda}}{\sum_{k=-n}^{n}|x-x_k|^{-\lambda}} \quad\text{and}\quad T_{m,i}(f)(x) = \sum_{j=0}^{m}\frac{f^{(j)}(x_i)\,(x-x_i)^j}{j!},$$
defined on the special matrix of nodes in Della Vecchia–Mastroianni [33]. Using the estimates of the approximation error in [33] and the standard method in this chapter, find the global smoothness preservation properties of $S_{n,\lambda,T_m}(f)(x)$.

Open Problem 1.6.12. Theorem 1.2.7 suggests the following problem: find convergence and global smoothness preservation properties on $D$ of the complex Shepard operators on the roots of unity, given by
$$S_{n,2p}(f)(z) = \sum_{k=1}^{n} A_{k,2p}(z)\,f(z_k), \qquad A_{k,2p}(z) = \frac{(z-z_k)^{-2p}}{\sum_{j=1}^{n}(z-z_j)^{-2p}},$$
where $p \in \mathbb{N}$, $z_k = e^{2\pi k i/n}$, $f \in AC$ and $i = \sqrt{-1}$.
Open Problem 1.6.13. In order to reduce the large amount of computation required by the classical Shepard operator, given by the formula
$$S_{n,p}(f)(x) = \sum_{k=1}^{n} s_{k,p}(x)f(x_k), \qquad s_{k,p}(x) = \frac{(x-x_k)^{-p}}{\sum_{j=1}^{n}(x-x_j)^{-p}},$$
$p \in \mathbb{N}$, in, e.g., [50], [53], [106] the so-called local variant of it is considered, given by the formula
n
wk,p (x) =
41
wk,p (x)f (xk ),
k=1
Bk,p (x) , n Bj,p (x) j =1
p (R−|x−xk |)+ R p |x−xk |p
, R > 0 constant, p ∈ N. with Bk,p (x) = Also, replacing in the expression of Wn,p (f )(x) the values f (xk ) by the values of some interpolation operators (like Lagrange or Taylor), in, e.g., [105] the local variants of Shepard–Lagrange and Shepard–Taylor operators are considered. Finally, an interesting problem would be to find global smoothness preservation properties for all these local variants of Shepard operators, too.
2 Partial Shape Preservation, Univariate Case
In this chapter we present results concerning the shape-preserving property of some classical interpolation operators.
2.1 Introduction

We begin with the following simple definition.

Definition 2.1.1. Let $C[a,b] = \{f : [a,b] \to \mathbb{R};\ f \text{ continuous on } [a,b]\}$ and let $a \le x_1 < x_2 < \cdots < x_n \le b$ be fixed knots. A linear operator $U : C[a,b] \to C[a,b]$ is called of interpolation type (on the knots $x_i$, $i = 1,\ldots,n$) if for any $f \in C[a,b]$ we have $U(f)(x_i) = f(x_i)$, $\forall i = 1,\ldots,n$.

Remark. Important particular cases of $U$ are of the form
$$U_n(f)(x) = \sum_{k=1}^{n} f(x_k)P_k(x), \qquad n \in \mathbb{N},$$
where $P_k \in C[a,b]$ satisfy $P_k(x_i) = 0$ if $k \ne i$ and $P_k(x_i) = 1$ if $k = i$; these include the classical Lagrange interpolation polynomials and the Hermite–Fejér interpolation polynomials.

Now, if $f \in C[a,b]$ is, for example, monotone (or convex) on $[a,b]$, it is easy to see that because of the interpolation conditions, in general $U(f)$ cannot be monotone (or convex) on all of $[a,b]$. However, there is a natural question whether $U(f)$ remains monotone (or convex) on neighborhoods of some points in $[a,b]$. In this sense, let us introduce the following definition.

Definition 2.1.2. Let $U : C[a,b] \to C[a,b]$ be a linear operator of interpolation type on the knots $a \le x_1 < \cdots < x_n \le b$, and let $y_0 \in (a,b)$. If for any $f \in C[a,b]$ nondecreasing on $[a,b]$ there exists a neighborhood of $y_0$, $V_f(y_0) = (y_0 - \varepsilon_f, y_0 + \varepsilon_f) \subset [a,b]$, $\varepsilon_f > 0$ (i.e., depending
on $f$) such that $U(f)$ is nondecreasing on $V_f(y_0)$, then $y_0$ is called a point of weak preservation of partial monotony and, correspondingly, $U$ is said to have the property of weak preservation of partial monotony (about $y_0$). If the above neighborhood $V(y_0)$ does not depend on $f$, then $y_0$ is called a point of strong preservation of partial monotony. Similar definitions hold if monotony is replaced by, e.g., convexity (of any order).

In connection with Definition 2.1.2, in a series of papers T. Popoviciu proved the following negative and positive results.

Theorem 2.1.1. (i) (Popoviciu [80], p. 328). Let $a \le x_0 < x_1 < \cdots < x_m \le b$ be fixed. If $m \ge n+3$ (where $n \in \{-1,0,1,2,3\}$), then there do not exist in $(a,b)$ points of strong preservation of partial convexity of order $n$ for the Lagrange interpolation polynomial (of degree $\le m$) $L_m(x_0,\ldots,x_m;f)(x)$. Here convexity of order $n$ means that (the divided difference) $[z_1,\ldots,z_{n+2};f] \ge 0$ for all distinct $z_i \in [a,b]$, $i = 1,\ldots,n+2$. Monotony corresponds to $n = 0$ and usual convexity to $n = 1$.
(ii) (Popoviciu [81], p. 81, Theorem VII). If we denote by $F_n(f)(x)$, $n \in \mathbb{N}$, the classical Hermite–Fejér polynomial based on the roots $x_{i,n} \in (-1,1)$, $i = 1,\ldots,n$, of the Jacobi polynomials $J_n^{(\alpha,\beta)}(x)$ of degree $n$ with $-1 \le \alpha \le 1$, $-1 \le \beta \le 1$, then each root $x'_{i,n}$, $i = 1,\ldots,n-1$, of the polynomial $l'(x)$, where $l(x) = \prod_{i=1}^{n}(x - x_{i,n})$, is a point of strong preservation of partial monotony for $F_n(f)$.
(iii) (Popoviciu [81], p. 82, Theorem VIII). There do not exist in $(-1,1)$ points of strong preservation of partial (usual) convexity for $F_n(f)$, $n \in \mathbb{N}$.
(iv) (Popoviciu [81], p. 82, Theorem IX). If, for example,
$$G_n(f)(x) = \sum_{i=1}^{n}\Big(\frac{l(x)}{(x-x_i)l'(x_i)}\Big)^2 f(x_i), \qquad l(x) = \prod_{i=1}^{n}(x-x_i), \qquad x_i = \cos\frac{(2i-1)\pi}{2n},\quad i = 1,\ldots,n,$$
are the Grünwald interpolation polynomials, then there do not exist in $(-1,1)$ points of strong preservation of partial monotony for $G_n(f)$, $n \ge 2$.

In Section 2.2 we first obtain quantitative estimates of the lengths of the neighborhoods $V(x'_{i,n})$, $i = 1,\ldots,n-1$, in Theorem 2.1.1 (ii). In contrast with Theorem 2.1.1 (iii), it is proved that in the case when $n \ge 3$ is odd, 0 is a point of weak preservation of partial strict-convexity for $F_n(f)$. The Kryloff–Stayermann polynomials are studied as well. A related result for Grünwald polynomials is obtained, and quantitative estimates of the neighborhoods of shape preservation are proved. Also, taking into account the relationship between the Hermite–Fejér polynomials $F_n(f)$ based on the Chebyshev nodes of first kind and the trigonometric Jackson interpolation polynomials $J_n(f)$, the shape-preserving properties of $J_n(f)$ are deduced. Finally, qualitative and quantitative results in partial monotony (or convexity) preserving approximation by several Shepard-type operators are obtained.
2.2 Hermite–Fejér and Grünwald-Type Polynomials

First we obtain a quantitative estimate for a particular case of Theorem 2.1.1 (ii) (the qualitative version of the result is proved in Popoviciu [79]).

Theorem 2.2.1. Let $n = 2m$ be even and let us denote by $F_n(f)(x)$ the classical Hermite–Fejér polynomial based on the roots $x_{i,n} \in (-1,1)$, $i = 1,\ldots,n$, of the $\lambda$-ultraspherical polynomials of degree $n$, with $\lambda > -1$ (i.e., Jacobi polynomials of degree $n$ with $\alpha = \beta$ and $\lambda = \alpha + \beta + 1$, $-1 < \alpha, \beta \le 1$),
$$-1 < x_{n,n} < \cdots < x_{n+1-m,n} < 0 < x_{m,n} < \cdots < x_{1,n} < 1.$$
There exists a constant $c > 0$ (independent of $f$ and $n$) such that if $f : [-1,1] \to \mathbb{R}$ is monotone on $[-1,1]$, then $F_n(f)(x)$ is monotone (of the same monotonicity) in $\big(-\frac{c}{n^4}, \frac{c}{n^4}\big) \subset (-1,1)$.

Proof. Let us denote $F_n(f)(x) = \sum_{i=1}^{n} h_{i,n}(x)f(x_{i,n})$, where
$$h_{i,n}(x) = l_i^2(x)\Big[1 - \frac{l''(x_{i,n})}{l'(x_{i,n})}(x - x_{i,n})\Big], \qquad l_i(x) = \frac{l(x)}{(x - x_{i,n})\,l'(x_{i,n})}, \qquad l(x) = \prod_{i=1}^{n}(x - x_{i,n}).$$
By, e.g., Popoviciu [79] we have
$$h'_{i,n}(0) = l^2(0)\,\big[2 - (1-\lambda)x_{i,n}^2\big]\Big/\Big[l'(x_{i,n})^2\,(1 - x_{i,n}^2)\,x_{i,n}^3\Big], \qquad (2.1)$$
for all $i = 1,\ldots,n$, and
$$F_n'(f)(x) = \sum_{i=1}^{n-1} Q_i(x)\,\big[f(x_{i,n}) - f(x_{i+1,n})\big],$$
where $Q_i(x) = \sum_{j=1}^{i} h'_{j,n}(x)$, $i = 1,\ldots,n-1$. First we will prove that
$$Q_i(0) \ge h'_{1,n}(0) > 0, \qquad \text{for all } i = 1,\ldots,n-1. \qquad (2.2)$$
Indeed, by (2.1) we get $\mathrm{sign}\{h'_{i,n}(0)\} = +1$ for all $i = 1,\ldots,m$, which immediately implies $Q_i(0) \ge h'_{1,n}(0)$ for all $i = 1,\ldots,m$.
Now let $m + 1 \le i < n$. Then, by $h'_{i,n}(0) = -h'_{n+1-i,n}(0)$, $i = 1,\ldots,n-1$ (see, e.g., Popoviciu [79], p. 245), we again get
$$Q_i(0) = \sum_{j=1}^{i} h'_{j,n}(0) > h'_{1,n}(0), \qquad i = m+1,\ldots,n-1,$$
which proves (2.2). On the other hand, simple calculations show
$$h'_{1,n}(0) \ge \frac{c_1}{n^2} \qquad (c_1 > 0 \text{ independent of } n). \qquad (2.3)$$
From Szegö [102], §14.6, we have $\max_{|x|\le 1/2}\sum_{j=1}^{n}|h_{j,n}(x)| \le c_2$. Applying Bernstein's inequality, we obtain
$$\max_{|x|\le 1/4}|Q_i(x)| = \max_{|x|\le 1/4}\Big|\sum_{j=1}^{i} h'_{j,n}(x)\Big| \le c_3\,n, \qquad i = 1,\ldots,n-1,$$
and
$$\max_{|x|\le 1/8}|Q_i'(x)| \le c_4\,n^2, \qquad i = 1,\ldots,n-1. \qquad (2.4)$$
Let $d_i$ be the nearest root of $Q_i(x)$ to zero. By (2.2), (2.3) and (2.4) it follows that
$$\frac{c_1}{n^2} \le |Q_i(0)| = |Q_i(d_i) - Q_i(0)| = |d_i|\cdot|Q_i'(z)| \le c_4|d_i|\,n^2,$$
i.e., $|d_i| \ge \frac{c}{n^4}$ for all $i = 1,\ldots,n-1$, which proves the theorem.
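The phenomenon in Theorem 2.2.1 is easy to observe numerically. The sketch below uses the Chebyshev case (a particular ultraspherical choice), the classical closed form $h_k(x) = (1 - x\,x_k)\big(T_n(x)/(n(x-x_k))\big)^2$ for those nodes, and the interval $[-0.1, 0.1]$ purely for illustration — the theorem itself only guarantees some interval $\big(-\frac{c}{n^4}, \frac{c}{n^4}\big)$:

```python
import numpy as np

def hermite_fejer(f, n):
    """Hermite-Fejer interpolant on Chebyshev nodes of the first kind:
    F_n(f)(x) = sum_k (1 - x*x_k) * (T_n(x)/(n(x - x_k)))^2 * f(x_k)."""
    k = np.arange(1, n + 1)
    xk = np.cos((2 * k - 1) * np.pi / (2 * n))
    fv = np.asarray(f(xk), dtype=float)

    def F(x):
        d = x - xk
        j = int(np.argmin(np.abs(d)))
        if abs(d[j]) < 1e-12:
            return float(fv[j])              # F_n interpolates at the nodes
        Tn = np.cos(n * np.arccos(x))
        h = (1 - x * xk) * (Tn / (n * d)) ** 2
        return float(np.dot(h, fv))
    return xk, F

# n even and f monotone: F_n(f) is increasing on a small interval around 0
xk, F = hermite_fejer(lambda t: t, 8)
vals = [F(x) for x in np.linspace(-0.1, 0.1, 21)]
assert all(b > a for a, b in zip(vals, vals[1:]))
```

Away from 0 (for instance near the midpoints between distant nodes) the monotonicity can fail, which is why the preservation here is only partial.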
Also, we can prove the following.

Theorem 2.2.2. Let us denote by $F_n(f)(x)$, $n \in \mathbb{N}$, the classical Hermite–Fejér polynomial based on the roots $-1 < x_{n,n} < x_{n-1,n} < \cdots < x_{1,n} < 1$ of the Jacobi polynomials $P_n^{(\alpha,\beta)}(x)$, with $\alpha, \beta \in (-1,0]$. If $\xi$ is any root of the polynomial $l'(x)$, then there exists a constant $c > 0$ (independent of $n$ and of $f$) such that if $f : [-1,1] \to \mathbb{R}$ is monotone on $[-1,1]$, then $F_n(f)(x)$ is monotone (of the same monotony) in
$$\Big(\xi - \frac{c_\xi}{n^{7+2\gamma}},\ \xi + \frac{c_\xi}{n^{7+2\gamma}}\Big) \subset (-1,1), \qquad \text{where } c_\xi = \frac{c}{(1-\xi^2)^{5/2+\delta}},\quad \gamma = \max\{\alpha,\beta\},$$
and
$$\delta = \begin{cases}\alpha, & \text{if } 0 \le \xi < 1,\\[2pt] \beta, & \text{if } -1 < \xi \le 0.\end{cases}$$

Proof. Keeping the notations of the proof of Theorem 2.2.1 and reasoning as in the proof of Lemma 3 in Popoviciu [81], we get
$$Q_i(\xi) > \min\{h'_{1,n}(\xi),\ -h'_{n,n}(\xi)\} > 0, \qquad \text{for all } i = 1,\ldots,n-1.$$
Let $a_n, b_n \in (0,1)$, $a_n, b_n \searrow 0$ (as $n \to +\infty$) be such that $|h'_{1,n}(\xi)| \ge c_1 a_n$, $|h'_{n,n}(\xi)| \ge c_2 b_n$, and let $s_n = \min\{a_n, b_n\}$. It easily follows that $Q_i(\xi) \ge c_3 s_n$, $i = 1,\ldots,n-1$. By Szegö [102], Theorem 14.5, we have
$$\sum_{j=1}^{n} h_{j,n}(x) = 1, \qquad \forall\, x \in [-1,1],$$
where $h_{j,n}(x) \ge 0$, $\forall x \in [-1,1]$, $j = 1,\ldots,n$. Applying Bernstein's inequality twice and reasoning exactly as in the proof of Theorem 2.2.1 (with $\xi$ instead of 0), we obtain
$$Q_i(\xi) \le c_1\,|d_i - \xi|\,n^2/(1-\xi^2), \qquad i = 1,\ldots,n-1,$$
where $d_i$ is the nearest root of $Q_i(x)$ to $\xi$, and therefore
$$\max_{|x-\xi| \le a_\xi s_n/n^2} Q_i(x) > 0, \qquad i = 1,\ldots,n-1,$$
with $a_\xi = c_2(1-\xi^2)$. It remains to find a lower estimate for $s_n$. First we have
$$|P_n^{(\alpha,\beta)}(\xi)| \ge \frac{c_3\,n^{-1/2}}{(1-\xi)^{\delta/2+1/4}}$$
(see Theorem 8.21.8 in Szegö [102]). By Popoviciu [81], p. 79, relation (27),
$$h'_{1,n}(\xi) = \frac{l^2(\xi)}{(x_{1,n}-\xi)^3\,[l'(x_{1,n})]^2}\Big[2 + (x_{1,n}-\xi)\frac{l''(x_{1,n})}{l'(x_{1,n})}\Big] > 0,$$
$$h'_{n,n}(\xi) = \frac{l^2(\xi)}{(x_{n,n}-\xi)^3\,[l'(x_{n,n})]^2}\Big[2 + (x_{n,n}-\xi)\frac{l''(x_{n,n})}{l'(x_{n,n})}\Big] < 0.$$
By Szegö [102], Theorem 14.5, $2 + (x_{i,n}-\xi)\frac{l''(x_{i,n})}{l'(x_{i,n})} \ge 1$, and by Szegö [102], (7.32.11),
$$h'_{1,n}(\xi) \ge \frac{l^2(\xi)}{(x_{1,n}-\xi)^3\,[l'(x_{1,n})]^2} = \frac{[P_n^{(\alpha,\beta)}(\xi)]^2}{(x_{1,n}-\xi)^3\,[P_n^{(\alpha,\beta)\,\prime}(x_{1,n})]^2} \ge \frac{c_4\,[P_n^{(\alpha,\beta)}(\xi)]^2}{n^{2q}\,(1-\xi)^3}$$
(where $q = \max\{2+\alpha,\ 2+\beta\}$). Also,
$$-h'_{n,n}(\xi) = |h'_{n,n}(\xi)| \ge \frac{c_5\,[P_n^{(\alpha,\beta)}(\xi)]^2}{n^{2q}\,(1+\xi)^3}.$$
Thus we obtain
$$Q_i(\xi) \ge \frac{c_8}{n^{5+2\gamma}\,(1-\xi^2)^{7/2+\delta}}, \qquad i = 1,\ldots,n-1.$$
Finally, taking $s_n = \frac{c_8}{n^{5+2\gamma}(1-\xi^2)^{7/2+\delta}}$, we easily obtain the theorem.
Remarks. (1) If $\xi$ is near the endpoints, i.e., $1-\xi^2 \sim \frac{1}{n^2}$, then the interval of preservation of the monotony is
$$\Big(\xi - \frac{c}{n^{2+2(\gamma-\delta)}},\ \xi + \frac{c}{n^{2+2(\gamma-\delta)}}\Big) \subset (-1,1).$$
For $\alpha = \beta \in (-1,0)$, i.e., in the ultraspherical case, we obtain the best possible interval
$$\Big(\xi - \frac{c}{n^2},\ \xi + \frac{c}{n^2}\Big) \subset (-1,1).$$
(2) Let us consider, for example, the case $\alpha = \beta = -\frac{1}{2}$, i.e., $x_{i,n} = \cos\frac{2i-1}{2n}\pi$, $i = 1,\ldots,n$, and $f(x) = e_1(x) = x$, $\forall x \in [-1,1]$. It is known that
$$F_n(f)(x) = x - \frac{T_{2n-1}(x) + x}{2n}, \qquad x \in [-1,1],$$
where $T_{2n-1}(x) = \cos[(2n-1)\arccos x]$. We have
$$F_n'(e_1)(x) = 1 - \frac{T'_{2n-1}(x)+1}{2n}, \qquad F_n'(e_1)(-x) = F_n'(e_1)(x), \qquad F_n''(e_1)(x) = -T''_{2n-1}(x)/(2n), \qquad F_n''(e_1)(0) = 0.$$
It follows that the roots of $F_n'(e_1)(x)$ are symmetric with respect to 0, and the equation $F_n'(e_1)(x) = 0$ is equivalent to $T'_{2n-1}(x) = 2n-1$. This last one reduces to $\sin[(2n-1)t] = \sin t$, $t \in (0,\pi)$, where $x = \cos t$. Because of symmetry we are interested only in the positive roots, i.e., in $t \in \big(0,\frac{\pi}{2}\big)$. For $n$ even, 0 is not a root of $F_n'(e_1)(x)$, while for $n$ odd, 0 is a double root. Simple calculations show that the positive roots of $F_n'(e_1)(x)$ are given by
$$x_k^{(1)} = \cos\frac{2k\pi}{2n-2}, \quad k = 1,\ldots,\Big[\frac{n-1}{2}\Big], \qquad x_k^{(2)} = \cos\frac{(2k+1)\pi}{2n}, \quad k = 0,1,\ldots,\Big[\frac{n-1}{2}\Big], \qquad n \ge 2.$$
It is easy to show that for all $k$,
$$x_k^{(1)} > x_k^{(2)} > x_{k+1}^{(1)} > \cdots.$$
Because $x_1^{(1)} - x_1^{(2)} \sim \frac{1}{n^2}$, near the endpoints we obtain the best possible estimates. Now if, for example, $n$ is even, then $\big[\frac{n-1}{2}\big] = \frac{n-2}{2}$, and for $k = \frac{n-2}{2}$ we easily get
$$x_k^{(1)} - x_k^{(2)} \sim \frac{1}{n},$$
which means that near 0 the estimate for the interval of preservation of monotonicity is much better than those given by Theorems 2.2.1 and 2.2.2.

For $n \ge 3$ odd, let us denote by $F_n(f)(x)$ the Hermite–Fejér interpolation polynomial based on the roots $x_{i,n} \in (-1,1)$, $i = 1,\ldots,n$, of the $\lambda$-ultraspherical polynomials of degree $n$, $\lambda > -1$, $\lambda \ne 0$. Also, let us consider the Côtes–Christoffel numbers of the Gauss–Jacobi quadrature
$$\lambda_{i,n} := 2^{2-\lambda}\,\pi\,\Gamma\Big(\frac{\lambda}{2}\Big)^{-2}\,\frac{\Gamma(n+\lambda)}{\Gamma(n+1)}\,(1-x_{i,n}^2)^{-1}\,\big[P_n^{(\lambda)\,\prime}(x_{i,n})\big]^{-2}, \qquad i = 1,\ldots,n,$$
and denote
$$\Delta_h^2 f(0) = f(h) - 2f(0) + f(-h).$$
i=1
> 0,
(2.5)
then Fn (f )(x) is strictly convex in [−|dn |, |dn |], with c(λ)
(n−1)/2
2 [λi,n 2xi,n f (0)]/xi,n
i=1 , 1 n2 ω 1 f ; + f − Fn (f ) n I # $ 1 1 ! is the where c(λ) > 0 is independent of f and n, I = − 2 , 2 , ω1 f ; n1 − 21 , 21 modulus of continuity on − 21 , 21 and · 1 1 ! is the uniform norm on − 21 , 21 . −2,2 & Proof. Denoting Fn (f )(x) = ni=1 hi,n (x)f (xi,n ), we have |dn | ≥
h′′i,n (x) = −4
l ′′ (xi,n ) li (x)li′ (x) + 2[(li′ (x))2 + li (x)li′′ (x)] l ′ (xi,n ) l ′′ (xi,n ) 1− ′ (x − xi,n ) . l (xi,n ) ′
But li (0) = 0 and li′ (0) = − xi,nll ′(0) (xi,n ) , for i = (n + 1)/2 and 1 + xi,n
2 1 + λxi,n l ′′ (xi,n ) , = 2 l ′ (xi,n ) 1 − xi,n
We obtain
i = 1, . . . , n (see, e.g., Popoviciu [79]).
50
2 Partial Shape Preservation, Univariate Case
h′′i,n (0)
2(l ′ (0))2 1 = ′ · 2 (l (xi,n ))2 xi,n
2 1 + λxi,n 2 1 − xi,n
> 0, ∀ i = (n + 1)/2.
(2.6)
Also, because xi,n = −xn+1−i,n , i = 1, . . . , n, l ′ (xi,n ) = l ′ (xn+1−i,n ) (since n is odd) we easily get h′′i,n (0) = h′′n+1−i,n (0). But
λi,n =
1 c1 λŴ(n + λ) 1 · ′ and (l ′ (0))2 ∼ nλ , · 2 Ŵ(n + 1) (l (xi,n ))2 1 − xi,n
which together with (2.6) implies 2 h′′i,n (0) ≥ c2 λnλi,n /xi,n , for all i = (n + 1)/2.
Therefore Fn′′ (f )(0)
=
(n−1)/2 i=1
h′′i,n (0)2xi,n f (0) ≥ c3 λn
n λ 2 f (0) i,n xi,n i=1
2 xi,n
> 0.
(2.7)
By (2.7) it follows that Fn (f ) is strictly convex in a neighborhood of 0. Let dn be the nearest root of Fn′′ (f ) to 0. We may assume that |dn | ≤ nc (since otherwise there is nothing to prove, the interval of convexity cannot be larger than − nc , nc ). Then by the mean value theorem, Bernstein’s inequality and Stechkin’s inequality (see, e.g., Szabados–Vértesi [100], p. 284) we get Fn′′ (f )(0) = |Fn′′ (f )(0) − Fn′′ (f )(dn )| = |dn | · |Fn′′′ (f )(y)| 1 2 ′ 3 ≤ |dn |c4 n Fn (f ) J ≤ c5 |dn |n ω1 Fn (f ); n I 1 1 3 + ω1 Fn (f ) − f ; ≤ c5 |dn |n ω1 f ; n n I 1 ≤ c5 |dn |n3 ω1 f ; + f − Fn (f ) , n I 1 1 1 1 where J = − 4 , 4 , I = − 2 , 2 . Combining the last inequality with (2.7), the proof of the theorem is immediate. Remarks. (1) It is obvious that if f is strictly convex on [−1, 1], then (2.5) is satisfied (from λi,n > 0, i = 1, . . . , n). Therefore, because the lower estimate for |dn | depends on f , we can say that 0 is a point of weak preservation of partial strict-convexity. Let us suppose that 0 would be point of strong preservation of partial strict-convexity. For any ε > 0, f1 (x) = εx 2 + x and f2 (x) = εx 2 − x are strictly convex on [−1, 1]. By hypothesis we obtain that there exists a neighborhood of 0,
2.2 Hermite–Fejér and Grünwald-Type Polynomials
51
Vn (0), independent of ε > 0 (here n ≥ 3 is odd fixed), such that Fn′′ (f1 )(x) > 0, Fn′′ (f2 )(x) > 0, ∀ x ∈ Vn (0). Passing with ε ց 0, we obtain Fn′′ (e1 )(x) ≥ 0 and Fn′′ (e1 )(x) ≤ 0, ∀ x ∈ Vn (0), i.e., Fn′′ (e1 ) = 0, ∀ x ∈ Vn (0), where e1 (x) = x, x ∈ [−1, 1]. But this means that degree [Fn (e1 )(x)] ≤ 1, x ∈ Vn (0), which is a contradiction, because Fn (e1 )(x) = x − T2n−12n(x)+x , Tm (x) = cos[m arccos(x)]. As a conclusion, 0 cannot be a point of strong type for strict-convexity and Fn (f ). Let us note that also the strict convexity of f on [−1, 1] can be replaced by the weaker condition 2h f (0) > 0, for all h ∈ (0, 1]. (2) More explicit estimate for |dn | in Theorem 2.2.3 can be obtained in the following particular case. Let us suppose that f is strongly convex on [−1, 1] (i.e., there exists γ > 0 such that ηf (x) + (1 − η)f (y) − f (η + (1 − η)y) ≥ η(1 − η)γ (x − y)2 , for all η ∈ [0, 1], x, y ∈ [−1, 1]) and that f ∈ Lip α, with 0 < α ≤ 1. 2 ≥ 2γ , for all i = 1, . . . , (n − 1)/2. Also, by It easily follows that 2xi,n f (0)/xi,n Szegö [102], relation (15.3.10), we have λi ∼ i λ /nλ+1 , for all i = 1, . . . , n,
(2.8)
which immediately implies (n−1)/2 i=1
λi,n ≥ c,
(2.9)
c > 0, constant independent of n. For f − Fn (f ) I we can use various estimates, (see, e.g., Szabados–Vértesi [100]) n k
f − Fn (f ) I ≤ c ω1 f ; k λ−2 , if λ > 0. n k=0
Taking into account all the above estimates, by the estimate in Theorem 2.2.3 we obtain n α i 1 2 λ−2 + i , if λ > 0, |dn | ≥ cλγ / n nα n i=0
where the constant c (with cλ > 0) is independent of n but depends on f . & If for example 0 < λ + α < 1, by ni=0 i α+λ−2 ≤ c, we obtain |dn | ≥
cλγ , n2−α
c, λ > 0.
(3) Two open questions appear in a natural way: (i) If n is odd then find other points of weak preservation of partial strict-convexity for Fn (f ); (ii) What happens if n is even? (4) The dependence of |dn | of f in Theorem 2.2.3 can be dropped in a simple way for some subclasses of functions. Thus, let us define (for a ∈ (0, 1]) B ∗ [−a, a] = {f : [−1, 1] → R; f is strictly convex on [−a, a],
52
2 Partial Shape Preservation, Univariate Case
f (x) ≥ 0, x ∈ [−a, a], f (0) = 0}.
Let n ≥ 3 be odd. Because hi,n (0) = h′i,n (0) = 0 and h′′i,n (0) > 0, for all / . i ∈ {1, . . . , n} \ n+1 2 , obviously there exists εn > 0 such that ' n+1 h′′i,n (x) > 0, for all x ∈ [−εn , εn ] and all i ∈ {1, . . . , n} \ . 2 For all x ∈ [−εn , εn ] and all f ∈ B ∗ [−1, 1] we get Fn′′ (f )(x)
≥
(n−1)/2 j =1
=
n
h′′i,n (x)f (xi,n )
i=1
=
n
h′′i,n (x)f (xi,n )
i=1i = n+1 2
h∗ (x)[f (xj,n ) + f (xn+1−j,n )] > h∗ (x)
n−1 2f (0) = 0, 2
* . /+ where h∗ (x) = min h′′i,n (x); i ∈ {1, . . . , n} \ n+1 > 0, ∀ x ∈ [−εn , εn ], εn 2 being independent of f ∈ B ∗ [−1, 1]. On the other & hand, it easily follows that hi,n (x) ≥ 0, for all x ∈ [−εn , εn ], i.e., Fn (f )(x) = ni=1 hi,n (x)f (xi,n ) ≥ 0, for all x ∈ [−εn , εn ], f ∈ B ∗ [−1, 1] and as a conclusion, Fn (f ) ∈ B ∗ [−εn , εn ]. Now, let us consider the Hermite–Fejér type interpolation with quadruples nodes introduced by&Kryloff–Stayermann [68] (see also, e.g., Gonska [59]), given by Kn (f )(x) = nk=1 hk (x)f (xk ), where xk = cos 2k−1 2n π , Tn (x) = cos[n arccos x] pk (x) 4 and hk (x) = (x−x )4 Tn (x), k
' 1 1 2 2 2 pk (x) = 4 (1 − xxk ) + (x − xk ) [(4n − 1)(1 − xxk ) − 3] . n 6 (i)
= f (xk ), Kn (f )(xk ) = 0, i = 1, 2, 3, k = 1, . . . , n, &n It is known that Kn (f )(xk ) & n ′ h (x) ≡ 1. This implies k=1 k k=1 hk (x) ≡ 0 and reasoning exactly as in Popoviciu [79], p. 242, by −1 < xn < xn−1 < · · · < x1 < 1 we obtain Kn′ (f )(x) =
n−1 [Qi (x)][f (xi ) − f (xi+1 )], i=1
&i
where Qi (x) = j =1 h′j (x). First we present two results of qualitative type. Theorem 2.2.4. If n is even number (i.e., n = 2m), then there exists a neighborhood of 0 where Kn (f )(x) preserves the monotonicity of f . Proof. We have Tn′ (0) = 0 and h′k (x) =
Tn4 (x) pk (x) [p ′ (x)(x − xk ) − 4pk (x)], · 4Tn3 (x)Tn′ (x) + 4 (x − xk ) (x − xk )5 k
2.2 Hermite–Fejér and Grünwald-Type Polynomials
pk′ (x) =
53
1 1 −2xk (1 − xxk ) + (x − xk )[(4n2 − 1)(1 − xxk ) − 3] n4 3 ' (x − xk )2 (−xk )(4n2 − 1) , + 6
1 [4pk (0) + xk pk′ (0)], k = 1, n, xk5 1 2 2 2 pk (0) = 4 1 + xk (n − 1) > 0, n 3 xk3 1 4 ′ 2 2 pk (0) = − 4 2xk + xk (n − 1) + (4n − 1) , k = 1, n. n 3 6 h′k (0) =
′ By xn+1−k = −xk , k = 1, m, we get pn+1−k (0) = pk (0), pn+1−k (0) = −pk′ (0) and consequently h′n+1−k (0) = −h′k (0), k = 1, m. By simple calculations
4pk (0) + xk pk′ (0) =
1 2 2 [n (8xk − 4xk4 ) + 24 − 20xk2 + xk4 ] > 0, 6n4
which implies h′k (0) < 0, if xk < 0 and h′k (0) > 0, if xk > 0, i.e., h′k (0) > 0 if k = 1, m and h′k (0) < 0, if k = m + 1, 2m. Reasoning now exactly as in Popoviciu [79], p. 245, we get Qi (0) > 0, for all 0, Vi (0), such that i = 1, 2m − 1. This implies that there exist neighborhoods of 02m−1 Vi (0), we get Qi (x) > 0, ∀ x ∈ Vi (0), i = 1, 2m − 1. Denoting V (0) = i=1 Qi (x) > 0, ∀ x ∈ V (0), i = 1, 2m − 1, which proves the theorem. Theorem 2.2.5. Let n ≥ 3 be odd. Then there exists a neighborhood of 0 where Kn (f )(x) preserves the strict-convexity of f . Proof. We can write h′k (x) = pk (x)
4Tn3 (x)Tn′ (x) + Tn4 (x)E1 (x), (x − xk )4
h′′k (x) = 12pk (x)
Tn2 (x)[Tn′ (x)]2 + Tn3 (x)E2 (x), (x − xk )4
h′′′ k (x) = 24pk (x) (4)
Tn (x)[Tn′ (x)]3 + Tn2 (x)E3 (x), (x − xk )4
hk (x) = 24pk (x) For n ≥ 3 odd and k =
n+1 2
[Tn′ (x)]4 + Tn (x)E4 (x). (x − xk )4
we obtain
(4)
hk (0) = h′k (0) = h′′k (0) = h′′′ k (0) = 0 and hk (0)
54
2 Partial Shape Preservation, Univariate Case
2 2 2 24 = 4 1 + xk (n − 1) > 0. 3 xk (4)
(4)
This implies hn+1−k (0) = hk (0) and therefore Kn(4) (f )(0) =
(4)
n i=1
+f (0)h n+1 (0) > f (0) 2
(4)
hk (0)f (xk ) = (n−1)/2 j =1
(n−1)/2 j =1
(4)
[f (xj ) + f (xn+1−j )]hj (0)
2h′′j (0) + f (0)h′′n+1 (0) = f (0) 2
n i=1
h′′i (0) = 0.
(i) Because Kn (f )(0) = 0, i =, 1, 2, 3, it follows that 0 is minimum point for ′′ Kn (f )(x) and Kn′′ (f ) is strictly convex in a neighborhood of 0, V (0). As a conclusion, Kn′′ (f )(x) > 0, ∀ x ∈ V (0)\{0}, which shows that Kn (f ) is strictly convex on V (0).
Obviously here V (0) depends on f (and of n of course). The theorem is proved.
Remark. Theorem 2.2.5 remains valid if the strict-convexity of f on [−1, 1] is replaced by the weaker condition 2h f (0) = f (h) + f (−h) − 2f (0) > 0, ∀ h ∈ (0, 1]. The quantitative version of Theorem 2.2.4 is the following. Theorem 2.2.6. Let n be even. There exists a constant c > 0 (independent of f and n) such that if f : [−1, 1] → R is monotone on [−1, 1], then Kn (f ) is of the same monotonicity in − nc4 , nc4 ⊂ (−1, 1). Proof. By the proof of Theorem 2.2.4 we easily get Qi (0) ≥ h′1 (0) > 0, ∀ i = 1, n − 1. 4n2 x 2 Tn4 (0) 2 [n (8x12 − 4x14 ) + 24 − 20x12 + x14 ] ≥ 4 15 ≥ 3n2 2 (because x15 6n4 6n x1 Tn4 (0) = 1, 8x12 − 4x14 > 4x12 and 24 − 20x12 + x14 > 0). & On the other hand, by 0 ≤ hk (x), k = 1, n, nk=1 hk (x) = 1, ∀ x ∈ [−1, 1] and
But h′1 (0) =
by Bernstein’s inequality we obtain (reasoning as in the proof of Theorem 2.2.1) 1 1 i 1 1 1 1 hk 1 n1 1 1 k=1 ≤ c1 n, i = 1, n − 1 max |Qi (x)| ≤ max √ 1 − x2 |x|≤ 41 |x|≤ 41
and
max |Q′i (x)| ≤ c2 n2 ,
|x|≤ 18
where c1 , c2 > 0 are independent of n.
i = 1, n − 1,
2.2 Hermite–Fejér and Grünwald-Type Polynomials
55
Now, if di is the nearest root of Qi (x) to zero, we get 2 ≤ Qi (0) = |Qi (0)| = |Qi (0) − Qi (di )| = |di | · |Q′i (z)| ≤ c2 |di |n2 , 3n2 c i.e., |di | ≥ 4 , for all i = 1, n − 1, which proves the theorem. n The quantitative version of Theorem 2.2.5 is the following. Theorem 2.2.7. Let n ≥ 3 be odd. If f : [−1, 1] → R satisfies
2h f (0) = f (h) + f (−h) − 2f (0) > 0, ∀ h ∈ (0, 1],
then Kn (f )(x) is strictly convex in [−|dn |, |dn |], with c |dn | ≥
(n−1)/2
2xi f (0)/xi2
i=1
$ , # n3 ω1 f ; n1 + f − Kn (f ) I
1 1 . I= − , 2 2
Proof. By the proof of Theorem 2.2.5 we have Kn(4) (f )(0) = (4)
where hi (0) =
24
xi4
(n−1)/2
hi (0)2xi f (0), (4)
i=1
1 + 23 xi2 (n2 − 1) ≥ Kn(4) (f )(0) ≥ n2
n2 , xi2
i.e.,
(n−1)/2 i=1
2xi f (0) xi2
> 0.
(4)
Let dn be the nearest root of Kn (f )(x) to 0. Reasoning as in the proof of Theorem 2.2.3, we have Kn(4) (f )(0) = |Kn(4) (f )(0) − Kn(4) (f )(dn )| = |dn | · |Kn(5) (f )(y)| 1 ≤ |dn |cn4 Fn′ (f ) J ≤ c1 |dn |n5 ω1 Kn (f ); n I 1 ≤ c2 |dn |n5 ω1 f ; + f − Kn (f ) , n I 1 1 1 1 where J = − 4 , 4 , I = − 2 , 2 . Combining with the previous inequality, we obtain c |dn | ≥
(n−1)/2 i=1
2xi f (0)/xi2
, 1 n3 ω1 f ; + f − Kn (f ) n I
1 1 , I= − , 2 2
where $c > 0$ is independent of $n$ and $f$, and $\omega_1(f;\cdot)_I$, $\|\cdot\|_I$ represent the corresponding concepts on $I$.
and
n 1 1 1 1 |lk (x)|p 1 ≤ Cp n1−p , 1 k=1
∀0 < p < 1,
where Cp is a positive constant depending only on p. Proof. Let x ∈ [−1, 1] be fixed and the index j defined by |x − xj | := min{|x − xk |; 1 ≤ k ≤ n}. By the relations in Szabados–Vértesi [100], p. 282 (see also the and relations (1.6) in the proof of Theorem 1.2.1), denoting x = cos(t), tk = (2k−1)π 2n Tn (x) = cos(n arccos(x)), for all k = j we get |lk (x)| =
k |Tn (x)|| sin(tk )| C n ≤ . ≤ |j 2 −k 2 | n| cos(t) − cos(tk )| |j − k| n n2
Also, lj (xj ) = 1 and |lj (x)| =
|Tn (x)|| sin(tj )| 1 ≤ n| cos(t) − cos(tj )| n
Cj n Cj n2
= C.
As a conclusion, n k=1
|lk (x)|p ≤ Cp
n
k=1,k =j
1 + Cp , |k − j |p
2.2 Hermite–Fejér and Grünwald-Type Polynomials
which immediately proves the lemma.
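Lemma 2.2.1 can also be probed numerically. The sketch below (all function names are ours, not from the text) evaluates $\sum_{k=1}^{n}|l_k(x)|^{p}$ on a grid over $[-1,1]$ for the Chebyshev nodes of the first kind; for $p>1$ the maximum stays bounded in $n$, while for $0<p<1$ it grows roughly like $n^{1-p}$.

```python
import numpy as np

def chebyshev1_nodes(n):
    # x_k = cos((2k - 1) pi / (2n)), k = 1, ..., n
    k = np.arange(1, n + 1)
    return np.cos((2 * k - 1) * np.pi / (2 * n))

def lagrange_basis(nodes, x):
    # fundamental Lagrange polynomials l_k(x) evaluated at a single point x
    vals = np.empty(len(nodes))
    for k in range(len(nodes)):
        others = np.delete(nodes, k)
        vals[k] = np.prod((x - others) / (nodes[k] - others))
    return vals

def max_p_sum(n, p, grid=201):
    # max over a grid of sum_k |l_k(x)|^p
    nodes = chebyshev1_nodes(n)
    xs = np.linspace(-1, 1, grid)
    return max(np.sum(np.abs(lagrange_basis(nodes, x)) ** p) for x in xs)

# p = 2: bounded uniformly in n; p = 1/2: grows roughly like n^{1/2}
for n in (5, 11, 21):
    print(n, round(max_p_sum(n, 2.0), 3), round(max_p_sum(n, 0.5) / n ** 0.5, 3))
```

The printed ratios in the $p=1/2$ column stabilizing, while the $p=2$ column stays bounded, is consistent with the two estimates of the lemma.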
Remark. In the case when p = 2 and the fundamental Lagrange interpolation polynomials are based on the roots of α-ultraspherical Jacobi polynomials of degree & n, α ∈ (−1,10), in Szegö [102], Problem 58, can be found the relation || nk=1 |lk (x)|2 || ≤ |α| . We present: Theorem 2.2.8. Let n ≥ 3 be odd. If 2h f (0) > 0, ∀ h ∈ (0, 1], then the Grünwald polynomials Gn (f )(x) on the Chebyshev nodes of first kind (given by Theorem 2.1.1 (iv)) are strictly convex on [−|dn |, |dn |], where for |dn | we can have either of the following two lower estimates: (i) (n−1)/2 2(1 − x 2 ) k 2xk f (0) 2 x k k=1 ; |dn | ≥ C n3 ||f || (ii)
|dn | ≥ C
(n−1)/2
2(1 − xk2 ) xk2
k=1
2xk f (0)
n3 [ω1 (f ; n1 ) + ||Gn (f ) − f ||]I
.
Proof. It is known that lk (x) = (−1)k
2 1 − xk2 x − xk
·
Tn (x) . n
Denoting hk (x) = lk2 (x), we get h′k (0) = 2lk (0)lk′ (0), h′′k (0) = 2(lk′ (0))2 + 2lk (0)lk′′ (0). Now, for k = n+1 2 we get xk = 0 and lk (0) = 0, lk′ (x) = [lk′ (0)]2 =
1 − xk2 xk2
,
2 (−1)k 1 − xk2 n
h′k (0) = 0,
·
Tn′ (x)(x − xk ) − Tn (x) , (x − xk )2
h′′k (0) = 2
because xj&= −xn+1−j . Since nk=1 h′′k (0) = 0, it follows h′′(n+1)/2 (0) = −4 and therefore
(n−1)/2 k=1
1 − xk2 xk2
1 − xk2 xk2
= h′′n+1−k (0) > 0,
58
2 Partial Shape Preservation, Univariate Case
G′′n (f )(0) =
n k=0
h′′k (0)f (xk ) =
′′
+f (0)h n+1 (0) = 2
(n−1)/2 k=1
(n−1)/2 k=1
2(1 − xk2 ) xk2
2(1 − xk2 ) xk2
[f (xk ) + f (xn+1−k )]
2xk f (0).
On the other hand, if we denote dn the nearest root to 0 of G′′n (f )(x), then by reasonings similar to those in the proof of Theorem 2.2.3 and by the above Lemma 2.2.1, we get G′′n (f )(0) = |G′′n (f )(0) − G′′n (f )(dn )| = |dn ||G′′′ n (η)| ≤ C|dn |n3 ||Gn || ≤ C|dn |n3 ||f ||. It immediately follows the estimate in (i). Also, by reasonings similar to those in the proof of Theorem 2.2.3 (but without to use the Lemma 2.2.1), we get G′′n (f )(0) = |G′′n (f )(0) − G′′n (f )(dn )| = |dn ||G′′′ n (η)| 1 ≤ C|dn |n2 ||G′n ||J ≤ C|dn |n3 ω1 (Gn (f ); )I n 1 1 ≤ C|dn |n3 [ω1 (f ; ) + ω1 (Gn (f ) − f ; )]I n n 1 3 ≤ C|dn |n ω1 f ; + ||Gn (f ) − f || , n I
where J = [−1/4, 1/4], I = [−1/2, 1/2]. This proves the estimate (ii), too.
Remarks. (1) Theorem 2.2.8 shows that 0 is a point of weak preservation of partial strict-convexity for the Grünwald polynomials. Because Gn (e1 )(x), n ≥ 3 odd, is a polynomial of degree > 1, (here e1 (x) = x, x ∈ [−1, 1]), reasoning exactly as in Remark 1 of Theorem 2.2.3, we obtain that 0 cannot be point of strong preservation of partial strict-convexity for Gn (f ). (2) For the estimate in Theorem 2.2.8 (ii), might be useful the degree of approximation in, e.g., Jiang Gongjian [64] and Sheng–Cheng [90]. We end the section with some remarks on the shape-preserving properties of trigonometric interpolation polynomials. They are based on the well-known relationship between the Jn (f )(x) in Theorem 1.4.2 and the algebraic Hermite–Fejér polynomials Fn (f )(x) on the Chebyshev knots of the first kind, given by (see, e.g., Szabados [95]) J2n−1 (f )(x) = Fn (g)(cos(x)), x ∈ R, where f (t) = g(cos(t)).
2.3 Shepard Operators
59
This relation allows us to extend the properties of Fn (f ) in Theorems 2.2.1, 2.2.2 and 2.2.3, to J2n−1 (f ). First we need the following concept. Definition 2.2.1. Let f : R → R be 2π -periodic. f is called periodically monotone if it is increasing in an interval (x1 , x2 ) and decreasing in (x2 , x1 + 2π ). A particular periodically monotone function is the so-called bell-shaped function, which is 2π -periodic, even and decreasing on [0, π] (i.e., x1 = π and x2 = 2π ). Remark. The most natural bell-shaped function f can be generated by an increasing function g : [−1, 1] → R, defining f (x) = g(cos(x)), x ∈ R. Actually it is easy to show that g is increasing on [−1, 1] if and only if f is bell-shaped. The shape-preserving properties of J2n−1 (f )(x) can be summarized by the following. Theorem 2.2.9. Let Fn (g)(x) be the Hermite–Fejér polynomials based on the Chebyshev nodes of the first kind and let Jn (f )(x) be the trigonometric Jackson interpolation polynomials, where f (x) = g(cos(x)), g : [−1, 1] → R. (i) If f is bell-shaped and n is even, then J2n−1 (f ) is increasing in a neighborhood of π2 , of the form ( π2 − nc2 , π2 + nc2 ). (ii) Let us suppose that f is bell-shaped. If ξ is any root of the polynomial l ′ (x) )n (where l(x) is given by l(x) = i=1 (x − xi,n ), xi,n = cos (2i − 1)π/[2n]), then J2n−1 (f ) is increasing in a neighborhood of η = arccos(ξ ) of the form (η− nc3 , η+ nc3 ). Proof. (i),(ii) It is immediate by taking α = β = − 21 (Chebyshev nodes of the first kind) in Theorems 2.2.1 and 2.2.2 and then taking into account that the function h(x) = arccos(x) ∈ Lip √π (1/2). 2
2.3 Shepard Operators First, we present some results of qualitative type for a Shepard interpolation operator of the form Sn,p (f )(x) =
n
sk,n (x)f (xk ),
k=−n
n ∈ N,
f : [−1, 1] → R,
(2.10)
−1 ≤ x−n < x−n+1 < · · · < x1 < · · · < xn ≤ 1, #&n $ −2p , p ∈ N, fixed. where sk,n (x) = (x − xk )−2p / i=−n (x − xi ) It is easy to see that Sn,p (f )(xi ) = f (xi ), i = −n, . . . , n, Sn,p (f ) is a pos(j ) itive &n linear operator, Sn,p (f )(xi ) = 0, i = −n, . . . , n, j = 1, . . . , 2p − 1 and k=−n sk,n (x) & ≡ 1. ′ (x) ≡ 0, reasoning exactly as in Popoviciu [81], p. 76, we Because nk=−n sk,n obtain ⎛ ⎞ i n−1 ′ ′ ⎝− sj,n (x)⎠ (f (xi+1 ) − f (xi )). (2.11) Sn,p (f )(x) = i=−n
j =−n
We also need the following basic result due to T. Popoviciu.
60
2 Partial Shape Preservation, Univariate Case
& Lemma 2.3.1. (Popoviciu [81]) Let Fm (f )(x) = m hk (x)f (xk ), x ∈ [a, b], k=1& a ≤ x1 < · · · < xm ≤ b, f : [a, b] → R, hk ∈ C 1 [a, b], m k=1 hk (x) ≡ 1, ∀ x ∈ [a, b] and x0 ∈ [a, b] be fixed. If h′1 (x0 ) < 0, h′m (x0 ) > 0 and the sequence h′1 (x0 ), h′2 (x0 ), . . . , h′m (x0 ) has exactly one variation of sign, then there exists a neighborhood of x0 , V (x0 ), independent of f such that if f is monotone on [a, b], then Fm (f ) is of the same monotonicity on V (x0 ). We have the following: &n −2p . Any solution ξ ∈ Theorem 2.3.1. Let us denote l(x) = i=−n (x − xi ) ′ (x−n , xn ) of the equation l (x) = 0 is a point of strong preservation of partial monotony for Sn,p (f )(x). Proof. Obviously l ′ (ξ ) = 0 means n
i=−n
1 = 0, (ξ − xi )2p+1
We have ′ sj,n (ξ ) =
where l(ξ ) > 0. Then, ′ s−n,n (ξ ) =
and ′ sn,n (ξ ) =
ξ = xi , i = −n, . . . , n.
(2.12)
−2p(ξ − xj )−2p−1 , l(ξ )
−2p(ξ − x−n )−2p−1 <0 l(ξ ) −2p(ξ − xn )−2p−1 > 0. l(ξ )
′ (ξ )] = sgn(x − ξ ), j = −n, . . . , n, which implies that the sequence Also, sgn[sj,n j ′ ′ ′ ′ s−n,n (ξ ), s−n+1,n (ξ ), . . . , s1,n (ξ ), . . . , sn,n (ξ )
has exactly one variation of sign. Applying now Lemma 2.3.1 and (2.11), we obtain the theorem.
Remarks. (1) It is easy to see that (2.12) is equivalent to the polynomial equation
Fn (ξ ) =
n
k=−n
⎡ ⎣
n
i=−ni =k
⎤2p+1
(ξ − xi )⎦
= 0.
Because simple calculations show that Fn (xj )Fn (xj +1 ) < 0, j = −n, −n + 1, . . . , n − 1, it follows that in each interval (xj , xj +1 ) there exists a point ξ with l ′ (ξ ) = 0.
2.3 Shepard Operators
(2) An open question is what happens in the case when n sj,n (x) = |x − xj |−(2p+1) / |x − xi |−(2p+1) , i=−n
61
p ∈ N.
Concerning the convexity, we can prove the following: Theorem 2.3.2. If Sn,p (f )(x) is of the form (2.10) with x−k = −xk , x0 = 0, and 2h f (0) > 0, ∀ h ∈ (0, 1], then Sn,p (f )(x) is strictly convex in a neighborhood of 0 (depending on f ). Proof. We can write sk,n (x) =
x 2p (x − xk )−2p , n 2p −2p 1+ x (x − xi )
k = −n, −n + 1, . . . , n,
i=−ni =k
and
sk,n (0) = [x 2p (x − xk )−2p ]x=0 , ∀ i = 0, . . . , 2p. (i)
(i)
But from simple calculations, for all k = 0 we get ⎧ ⎨ 0, i = 1, . . . , 2p − 1 2p −2p (i) [x (x − xk ) ]x=0 = (2p)! , i = 2p. ⎩ 2p xk
As a conclusion
(i) sk,n (0)
0, i = 1, . . . , 2p − 1 (2p)! = ⎩ 2p , i = 2p, xk ⎧ ⎨
k = 0,
(2p−1)
′ ′′ (f )(0) = Sn,p (f )(0) = · · · = Sn,p Sn,p
and (2p)
Sn,p (f )(0) =
n
(2p)
k=−n
sk,n (0)f (xk ) =
(2p)
+f (0)s0,n (0) > f (0)
n (2p)! 2p
xk
k=1
n
k=−n
(f )(0) = 0
[f (xk ) + f (−xk )]
(2p)
sk,n (0) = 0.
(2p−2)
′′ (f )(x), . . . , S It follows that for Sn,p (f )(x), Sn,p n,p
(f )(x), 0 is a minimum point.
(2p) On the other hand, by Sn,p (f )(0) > 0, there exists a neighborhood V (0) of (2p) 0 (depending on f ) such that Sn,p (f )(x) > 0, for all x ∈ V (0). It follows that (2p−2) (2p−2) Sn,p (f )(x) is strictly convex on V (0), with Sn,p (f )(0) = 0 and 0 minimum (2p−2) (2p−2) point for Sn,p (f )(x). This implies Sn,p (f )(x) > 0, ∀ x ∈ V (0) \ {0}. Rea′′ (f )(x) > 0, for all x ∈ V (0) \ {0}, soning by recurrence, at the end we obtain Sn,p i.e., Sn,p (f )(x) is strictly convex in V (0), which proves the theorem.
62
2 Partial Shape Preservation, Univariate Case
Remarks. (1) Because for f (x) ≡ x, Sn,p (f )(x) obviously is not a polynomial of degree ≤ 1 in any neighborhood, it follows that there do not exist points of strong preservation of partial convexity for Sn,p (f )(x). (2) Obviously Theorem 2.3.2 shows that 0 is a point of weak preservation of partial strict-convexity for Sn,p (f )(x). (3) Reasoning exactly as in Remark 1 of Theorem 2.2.3 and taking into account the above Remark 1, it follows that 0 cannot be a point of strong preservation of partial strict-convexity for Sn,p (f )(x). A quantitative version of Theorem 2.3.1 is the following (we use the notation of Theorem 2.3.1) Theorem 2.3.3. Let xk = nk , k = −n, . . . , n. If ξ ∈ (x−n , xn ) is any solution of the equation l ′ (x) = 0, then there exists a constant c > 0 (independent of f and n) such that if f : [−1, on [−1, 1] then Sn,p (f )(x) is of the same 1] → R is monotone
c c ⊂ (−1, 1). monotonicity in ξ − n2p+3 , ξ + n2p+3 Proof. Reasoning exactly as in the proof of Lemma 3 in Popoviciu [81] and taking into account the proof of Theorem 2.3.1, we get ′ ′ Qi (ξ ) < max{s−n,n (ξ ), −sn,n (ξ )} < 0, ∀ i = −n, . . . , n − 1,
where Qi (x) 2p(xn −ξ )−2p−1 , l(ξ )
=
&i
′ j =−n sj,n (x),
′ s−n,n (ξ )
=
−2p(ξ −x−n )−2p−1 , l(ξ )
′ (ξ ) sn,n
l(ξ ) > 0. We have two possibilities. xn +x−n 2
Case 1: x−n < ξ ≤
=
< xn . In this case,
′ ′ ′ max{s−n,n (ξ ), −sn,n (ξ )} = −sn,n (ξ ).
Let j0 be such that |xj0 − ξ | = min{|xi − ξ |; i = −n, n}. We obtain ′ (ξ ) = sn,n
2p · n (xn − ξ )2p+1
i=−n
1
1 |ξ − xi |2p
≥
c1 > 0. n2p
The last lower estimate is obtained by similar reasonings with those for |E1 (x)| ′′ (x)| below in the proof. in the estimate of |sj,n Case 2: x−n <
xn +x−n 2
< ξ < xn . In this case,
′ ′ ′ max{s−n,n (ξ ), −sn,n (ξ )} = s−n,n (ξ ) ′ (ξ ) ≥ and for xj0 as in Case 1, we again obtain |s−n,n
c2 n2p
> 0. As a conclusion,
c ≤ |Qi (ξ )|, ∀ i = −n, . . . , n − 1. n2p On the other hand, sj,n (x) =
1 + (x − xj
)2p
1 n
k=−n,k =j
1 (x − xk )2p
,
2.3 Shepard Operators n
2p(x − xj )2p−1 ′ sj,n (x) = ⎡
k=−n,k =j
⎣1 + (x − xj )2p
and
n
k=−n,k =j
⎡
⎣1 + (x − xj )2p
xk − x j (x − xk )2p+1 k=−n,k =j ⎤2 n 1 ⎦ (x − xk )2p
k=−n,k =j
⎡
⎣1 + (x − xj )2p
4p(x − xj )2p−1 +⎡
⎣1 + (x − xj )2p
2p(x − xj ·
)2p−1
1 + (x − xj
n
xk − xj (x − xk )2p+2 k=−n,k =j ⎤2 n 1 ⎦ (x − xk )2p
2p(2p + 1)(x − xj )2p−1 −
xk − x j (x − xk )2p+1 ⎤2 , 1 ⎦ (x − xk )2p
n
2p(2p − 1)(x − xj )2p−2 ′′ sj,n (x) =
63
k=−n,k =j
n
k=−n,k =j n
k=−n,k =j n
k=−n,k =j n )2p
k=−n,k =j
xk − xj (x − xk )2p+1 ⎤2 1 ⎦ (x − xk )2p
xk − x j (x − xk )2p+1 1 (x − xk )2p
= E1 (x) − E2 (x) + E3 (x) · E4 (x), where by Ei (x), i = 1, 4 we denote the expressions above in the order of their occurrence. Obviously ′′ |sj,n (x)| ≤ |E1 (x)| + |E2 (x)| + |E3 (x)| · |E4 (x)|.
Let us denote |x − xi | = min{|x − xk |; −n ≤ k ≤ n} ∼ n1 . We have two cases. Case 1: i = j . We obtain
64
2 Partial Shape Preservation, Univariate Case
|E1 (x)| ≤ cp
n
1 n2p−2
= c p n2
k=−n,k =j
n
k=−n,k =j
|k − i|/n |i − k| 2p+1 n
|k − i| = cp n2 |k − i|2p+1
n
k=−n,k =j
1 ≤ cp n 2 . |i − k|2p
It is easy to see that the same kinds of estimates applied to E2 (x), E3 (x) and E4 (x) give us the estimates |E2 (x)| ≤ cp
|E3 (x)| ≤ cp
1 n2p−1 1 n2p−1
n
k=−n,k =j n
k=−n,k =j
|E4 (x)| ≤ cp n.
|k − i|/n 2 2p+2 ≤ cp n , |i − k| n
|k − i|/n ≤ cp n, |i − k| 2p+1 n
′′ (x)| ≤ c n2 . As a conclusion, in this case |sj,n p Case 2: i = j . We obtain
|E1 (x)| ≤ c1,p
n &
|x − xj |2p−2
k=−n,k =j x−x 4p
|k−j |/n |x−xk |2p+1
j
x−xi
≤ c1,p ≤ c2,p
(x − xi )4p (x − xj )2p+2
n
k=−n,k =j
|2p−1 |x
|x − xi i − xj | 2p+2 |x − xj |
n−4p +c3,p 2p+2 |i−j | n
⎧ ⎪ ⎨ n1−2p (|i − j |/n) n2 ≤ c4,p + 2p+2 ⎪ |i − j |2p+1 |i−j | ⎩ n ⎧ ⎛ ⎨ 2 2 n n ⎝ = c4,p + ⎩ |i − j |2p+1 |i − j |2p+1 2
≤ cp n .
|k − j |/n |x − xk |2p+1
n
k=−n,k =i,k =j
n
k=−n,k =i,k =j n
k=−n,k =i,k =j
|k − j |/n 2p+1 |i−k| n
⎫ ⎪ |k − j | ⎬ |i − k|2p+1 ⎪ ⎭
1 |i − k|2p
⎞⎫ |i − j | ⎠⎬ + |i − k|2p+1 ⎭
2.3 Shepard Operators
65
Following exactly the same kinds of estimates, we easily get n
|x − xj |2p−1 |E2 (x)| ≤ c1,p ≤ c1,p
k=−n,k =j x−x 4p
(x − xi )4p (x − xj )2p+1 |x − xj |2p−1
|E3 (x)| ≤ c1,p
(x − xi )4p ≤ c1,p (x − xj )2p+1
|E4 (x)| ≤ c1,p
j
x−xi
n
k=−n,k =j n
k=−n,k =j x−x 4p
|k − j |/n |x − xk |2p+2
|k − j |/n ≤ cn2 , |x − xk |2p+2 |k − j |/n |x − xk |2p+1
j
x−xi
n
k=−n,k =j
|k − j |/n ≤ cn, |x − xk |2p+1
|x − xj |2p−1
n &
k=−n,k =j x−x 2p
|k−j |/n |x−xk |2p+1
j
x−xi
≤ c1,p
(x − xi )2p |x − xj |
n
k=−n,k =j
|k − j |/n |x − xk |2p+1
|xi − xj | ≤ c2,p |x − xj | · |x − xi | n n−2p +c3,p |i − j |/n
k=−n,k =j,k =i
≤ c4,p
⎧ ⎨ (|i − j |/n) ⎩
|i−j | n
· n−1
n + |i − j |
|k − j |/n 2p+1 |i−k| n
n
k=−n,k =j,k =i
⎫ |k − j | ⎬ ≤ cn. |i − k|2p+1 ⎭
′′ (x)| ≤ cn2 . Let d be the nearest root As a conclusion, in this case too we have |sj,n i of Qi (x) to ξ . It follows
c ≤ |Qi (ξ )| = |Qi (ξ ) − Qi (di )| = |di − ξ | · |Q′i (η)| n2p ≤ |di − ξ | · c4 n3 , ∀i = 1, . . . , n − 1, and therefore |di − ξ | ≥ which proves the theorem.
c n2p+3
,
∀i = 1, . . . , n − 1,
66
2 Partial Shape Preservation, Univariate Case
Remark. We would like to point out here an error appeared in the proof of Theorem 3.4 in the paper Gal–Szabados [57], where the estimate of the last line, page 241, and c ′ (ξ ) ≥ c , and |s ′ its analogue on page 242, line 3 from above (i.e., sn,n −n,n (ξ )| ≥ n ) n are wrong, having as a consequence a wrong estimate in Theorem 3.4. The correct proof and estimate are given by Theorem 2.3.3. A quantitative version of Theorem 2.3.2 is the following. Theorem 2.3.4. If f satisfies 2h f (0) > 0, for all h ∈ (0, 1], then Sn,p (f ; x) of the form (2.10) with xk = nk , k = −n, . . . , n is strictly convex in [−|dn |, |dn |], with cp |dn | ≥
n k=1
2p
[2xk f (0)]/xk
,
n2p+1 ||f ||
where cp is a constant depending only on p. Proof. By the proof of Theorem 2.3.2 we easily get (2p)
Sn,p (f ; 0) =
n k=1
2p
sk,n (0)[2xk f (0)] = (2p)!
n 2xk f (0) 2p
.
xk
k=1
2p
Let dn be the nearest root of Sn,p (f ; x) to 0. We have (2p)
(2p)
(2p)
(2p+1)
(x)||f (xk )| ≤ |dn |||f ||
(2p+1)
Sn,p (f ; 0) = |Sn,p (f ; 0) − Sn,p (f ; dn )| = |dn ||Sn,p ≤ |dn |
n
k=−n
|sk,n
n
k=−n
(f ; y)|
(2p+1)
|sk,n
(x)|.
Taking into account the proof of Theorem 2.1 in Della Vecchia–Mastroianni [33], p. 149, we get (2p+1)
|sk,n
(x)| ≤ cp sk,n (x)(1/n)−(2p+1) = cp sk,n (x)n2p+1 .
It follows n k=1
2p
[2xk f (0)]/xk = Sn,p (f ; 0) ≤ cp |dn |||f ||n2p+1 , (2p)
which immediately implies the inequality in the statement of theorem.
Remarks. (1) If f is strictly convex on [−1, 1], i.e., there exists γ > 0 such that ηf (x) + (1 − η)f (y) − f (ηx + (1 − η)y) ≥ η(1 − η)γ (x − y)2 , for all η ∈ [0, 1] and x, y ∈ [−1, 1], then we easily get 2xk f (0)/xk2 ≥ 2γ , which implies |dn | ≥ cp 2γ
n n 2p−2 [1/xk ]/[||f ||n2p+1 ] ≥ 2γ cp [1/k 2p−2 ]/[||f ||n3 ], k=1
k=1
2.3 Shepard Operators
67
which for p = 1 implies |dn | ≥ 2γ cp /[||f ||n2 ] and for p ≥ 2 implies |dn | ≥ 2γ cp /[||f ||n3 ]. 2p (2) Replacing xk = k/n, we get xk = k 2p /n2p and the estimate in Theorem 2.3.4 becomes n 2xk f (0)/k 2p cp k=1
|dn | ≥
n||f ||
.
(3) We don’t know if the estimate of dn in Theorem 2.3.4 is the best possible. Concerning the Balász–Shepard operator defined on the semi-axis and whose global smoothness preservation properties were proved by Corollary 1.5.2, we have: γ Theorem 2.3.5 For the knots xk = nkγ /2 , k = 0, . . . , n, n ∈ N, γ ≥ 1, s ≥ 2 and f ∈ C([0, +∞]), let us consider the so-called Balázs–Shepard operator defined by
Sn,2p (f )(x) =
n k=0
|x − xk |−2p f (xk )
n k=0
|x − xk |−2p
, x ≥ 0.
&n
−2p . Any solution ξ ∈ (x , x ) = (0, nγ /2 ) Let us denote l(x) = 0 n i=0 (x − xi ) ′ of the equation l (x) = 0 is a point of strong preservation of partial monotony for Sn,2p (f )(x). Proof. Similar to the proof of Theorem 2.3.1.
Remark. The Remarks 1 and 2 after the proof of Theorem 2.3.1 remain true in this case, too. At the end, we present some (negative) remarks on the shape preservation properties of the so-called Shepard–Lagrange operator, whose global smoothness preservation properties were studied by Corollary 1.2.4. These operators are defined on ( the equidistant nodes xi = ni ∈ [−1, 1], i = −n, . . . , 0, . . . , n, n ∈ N, m ∈ N {0}, m < n, λ > 0, as follows: Sn,λ,Lm (f )(x) = where Ai,λ (x) =
n
Ai,λ (x)Lm,i (f )(x),
i=−n
|x − xi |−λ , n −λ |x − xk |
k=−n
and Lm,i (f )(x) =
m j =0
ui (x) f (xi+j ), (x − xi+j )u′i (xi+j )
68
2 Partial Shape Preservation, Univariate Case
where ui (x) = (x − xi )...(x − xi+m ) and xn+ν = xν , ν = 1, . . . , m. Choose λ = 2p even. Reasoning exactly as for the relation (2.11) in Section 2.3, we obtain ′ Sn,λ,L (f )(x) = m n−1
i=−n
[−
i j =0
n
i=−n
A′i,λ (x)Lm,i (f )(x) +
n
i=−n
A′i,λ (x)][Lm,i+1 (f )(x) − Lm,i (f )(x)] +
Ai,λ (x)L′m,i (f )(x) = n
Ai,λ (x)L′m,i (f )(x).
i=−n
Suppose that f would be monotonically increasing on [−1, 1]. Unfortunately, by Popoviciu [80] (see Theorem 2.2.1 (i)) it follows that for m ≥ 3, Lm,i (f )(x) do not preserve the monotonicity (around certain points). Let us take the simplest case, i.e., m = 1. Then we get ⎡ ⎤ i n−1 ′ ⎣− A′i,2p (x)⎦ [Lm,i+1 (f )(x) − Lm,i (f )(x)] Sn,2p,L (f )(x) = m j =0
i=−n
+
n
Ai,2p (x)mi ,
i=−n
where all mi = L′1,i (f )(x) are ≥ 0 (since if f is increasing on [−1, 1] then it is easily seen that all the Lagrange polynomials of degree 1, L1,i (f )(x), are increasing). Now, while the second sum above is obviously positive, the differences Lm,i+1 (f )(x) − Lm,i (f )(x) cannot be positive for all i at some point ξ ∈ [−1, 1]. As a conclusion, it seems that even in the simplest case m = 1, Sn,2p,L1 (f )(x) cannot preserve the monotonicity around some points. For the study of convexity, from the proof of Theorem 2.3.2 we get (k) Sn,2p,Lm (f )(0) = 0, ∀k = 1, . . . , 2p − 1 and (2p)
Sn,2p,Lm (f )(0) =
=
n
(2p)!
2p i=−n,i =0 xi
n
(2p)
Ai,2p (0)Lm,i (f )(0)
i=−n (2p)
Lm,i (f )(0) + A0,2p (0)Lm,0 (f )(0).
Unfortunately, even in the simplest case m = 1, from this relation we cannot deduce any convexity-preserving property for Sn,2p,Lm (f )(x), as we did in the proof of Theorem 2.3.2 for the usual Shepard operator. Similar reasonings for the so-called Shepard–Taylor operator, given by Sn,λ,Tm (f )(x) =
n
i=−n
Ai,λ (x)Tm,i (f )(x),
2.4 Bibliographical Remarks and Open Problems
where Ai,λ (x) =
|x − xi |−λ
n
Tm,i (f )(x) =
, −λ
k=−n
and
69
|x − xk |
m f (j ) (xi )(x − xi )j
j!
j =0
,
even for the simplest case m = 1, show that it does not have the shape-preservation property (around certain points).
2.4 Bibliographical Remarks and Open Problems All the results in this chapter, except those where the authors are mentioned and Lemma 2.2.1, Theorems 2.2.4–2.2.9, 2.3.3–2.3.5, which are new, are from Gal– Szabados [57]. Open Problem 2.4.1. For the Kryloff–Stayermann polynomials, Kn (f )(x), in Theorems 2.2.4 and 2.2.6, find other points (different from 0) of preservation for the monotonicity of f . Open Problem 2.4.2. What happens if in the Theorems 2.2.1, 2.2.4, 2.2.6 n is odd and if in the Theorems 2.2.3, 2.2.5, 2.2.7 and 2.2.8 n is even? Open Problem 2.4.3. What happens if in the statements of Theorems 2.3.1 and 2.3.2, the Shepard operators are of the form n
j =−n
|x − xj |−(2p+1) f (xj )
n
k=−n
? −(2p+1)
|x − xk |
Open Problem 2.4.4. For the Balász–Shepard operator defined on the semi-axis, prove a quantitative version of Theorem 2.3.5. Also, another question for this operator is if there exist points such that in some neighborhoods of them, it preserves the strict-convexity of function. Open Problem 2.4.5. For the general Shepard–Grünwald operators introduced by Criscuolo–Mastroianni [31] (considered in Open Problem 1.6.4, too), prove shapepreserving properties. Open Problem 2.4.6. For the local variants of Shepard operators in Open Problem 1.6.13, prove shape-preserving properties.
3 Global Smoothness Preservation, Bivariate Case
Extending the results in the univariate case, we prove in this chapter that the bivariate interpolation polynomials of Hermite–Fejér based on the Chebyshev nodes of the first kind, those of Lagrange based on the Chebyshev nodes of second kind and ±1, and those of bivariate Shepard operators, have the property of partial preservation of global smoothness with respect to various bivariate moduli of continuity.
3.1 Introduction It is the aim of this chapter to extend the results of Chapter 1, with respect to various bivariate moduli of continuity. In this sense, we will use the following kinds of bivariate moduli of continuity. Let f : [−1, 1] × [−1, 1] → R. For δ, η > 0, we define ω(x) (f ; δ) =
ω
(y)
sup sup{|f (x + h, y) − f (x, y)|; x, x + h ∈ [−1, 1],
y∈[−1,1]
0 ≤ h ≤ δ},
(f ; η) =
sup sup{|f (x, y + k) − f (x, y)|; y, y + k ∈ [−1, 1],
x∈[−1,1]
0 ≤ k ≤ η},
(i.e., the partial bivariate moduli of continuity, see, e.g., Timan [103]) ω(f ; δ, η) = sup{|f (x + h, y + k) − f (x, y)|; 0 ≤ h ≤ δ, 0 ≤ k ≤ η, x, x + h ∈ [−1, 1], y, y + k ∈ [−1, 1]},
ω(B) (f ; δ, η) = sup{|h,k f (x, y)|; 0 ≤ h ≤ δ, 0 ≤ k ≤ η, x, x + h ∈ [−1, 1], y, y + k ∈ [−1, 1]},
(i.e., the Bögel modulus of continuity, see, e.g., Gonska–Jetter [61] or Nicolescu [76]) where
72
3 Global Smoothness Preservation, Bivariate Case
h,k f (x, y) = f (x + h, y + k) − f (x + h, y) − f (x, y + k) + f (x, y). The properties of these moduli of continuity useful in the next sections are given by the following Lemma 3.1.1. (see, e.g., Timan [103], p. 111–114). (i) ω1 (f ; 0, 0) = 0, ω(f ; δ, η) is nondecreasing with respect to δ and η, ω(f ; δ1 + δ2 , η1 + η2 ) ≤ ω(f ; δ1 , η1 ) + ω(f ; δ2 , η2 ), ω(f ; δ, η) ≤ ω(x) (f ; δ) + ω(y) (f ; η) ≤ 2ω(f ; δ, η).
(ii) (see, e.g., Anastassiou–Gal [6], p. 81)
ω(B) (f ; δ, η) ≤ ω(x) (f ; δ) + ω(y) (f ; η).
3.2 Bivariate Hermite–Fejér Polynomials Let us define the bivariate Hermite–Fejér polynomial on the Chebyshev nodes of the first kind by Hn1 ,n2 (f )(x, y) =
n2 n1
hi,n1 (x)hj,n2 (y)f (xi,n1 , xj,n2 ),
i=1 j =1
where f : [−1, 1] × [−1, 1] → R, xi,n = cos
2i − 1 π, 2n
hi,n (x) =
Tn2 (x)(1 − xxi,n ) , n2 (x − xi,n )2
i = 1, n.
It is well known that we have n i=1
hi,n (x) = 1,
hi,n (x) ≥ 0,
∀ x ∈ [−1, 1], ∀ i = 1, n.
Let us denote by Hn (f, x) the univariate Hermite–Fejér polynomials based on Chebyshev nodes of the first kind and ⎧ α 1 1 ⎪ O δ max{2−α,1+α} , if 0 < α < or < α < 1, ⎪ ⎪ ⎪ 2 2 ⎨ E(α, δ) = 2α+1 ⎪ ⎪ 1 6 1 ⎪ ⎪ O δ log , if α = or 1, ⎩ δ 2
Remark. In all subsequent results of this section, the constants involved in the signs O, will depend only on the functions considered.
3.2 Bivariate Hermite–Fejér Polynomials
73
The following result is known in the univariate case. Theorem 3.2.1. (see Corollary 1.2.1). If f ∈ LipM (α; [−1, 1]), 0 < α ≤ 1, then ω(Hn (f ); δ) = E(α, δ). We also need the following. Lemma 3.2.1. Let a, b ∈ (0, 1] be fixed. Then, ω(f ; δ, η) ≤ C(δ a + ηb ) for all δ, η ≥ 0 if and only if ω(x) (f ; δ) ≤ Cδ a , and ω(y) (f ; η) ≤ Cηb , for all δ, η ≥ 0. Here C > 0 denotes an absolute constant, but which can be different at each occurrence. Proof. Indeed, by hypothesis and Lemma 3.1.1 (i), we get ω(f ; δ, η) ≤ C(δ a + b η ). Conversely, let us suppose that ω(f ; δ, η) ≤ C(δ a + ηb ) for all δ, η ≥ 0. From Lemma 3.1.1,(i) we get ω(x) (f ; δ) + ω(y) (f ; η) ≤ 2Cδ a + 2Cηb
for all
δ, η ≥ 0.
Now with η = 0 this implies ω(x) (f ; δ) ≤ 2Cδ a
for all δ ≥ 0
ω(y) (f ; η) ≤ 2Cηb
for all
and η ≥ 0,
respectively, which proves the lemma.
The first main result is given by the following: Theorem 3.2.2. Let a, b ∈ (0, 1] be fixed. If ω(f ; δ, η) ≤ C(δ a + ηb ) for all δ, η ≥ 0, then ω(Hn1 ,n2 (f ); δ, η) = E(a, δ) + E(b, η) for all
δ, η ≥ 0.
Proof. For each fixed y ∈ [−1, 1], let us denote Fn2 ,y (x) =
n2
hj,n2 (y)f (x, xj,n2 ).
j =1
By hypothesis and by Lemma 3.2.1 we get that ω(x) (f ; δ) ≤ Cδ a . This implies, for all |x1 − x2 | ≤ δ and y, |Fn2 ,y (x1 ) − Fn2 ,y (x2 )| ≤ |
≤
n2 j =1
n2 j =1
hj,n2 (y)[f (x1 , xj,n2 ) − f (x2 , xj,n2 )]|
hj,n2 (y)|f (x1 , xj,n2 ) − f (x2 , xj,n2 )| ≤ a
≤ Cδ ,
n2 j =1
hj,n2 (y)ω(x) (f ; δ) = ω(x) (f ; δ)
74
3 Global Smoothness Preservation, Bivariate Case
where C is independent of n2 and y. As a conclusion, ω(x) (Fn2 ,y (x); δ) ≤ Cδ a
for all y ∈ [−1, 1].
Now, it is easily seen that we can write Hn1 ,n2 (f )(x, y) = Hn1 [Fn2 ,y (x)](x), where the univariate Hn1 is applied to Fn2 ,y (x) as function of x (y is fixed arbitrary). According to Theorem 3.2.1, we immediately get that for every fixed y we have ω(x) (Hn1 ,n2 (f ); δ) = ω(x) (Hn1 [Fn2 ,y (x)]; δ) = E(a, δ). Similarly we obtain ω(y) (Hn1 ,n2 (f ); η) = E(b, η). Adding the last to relations we get the theorem.
In what follows we deal with global smoothness preservation properties for the Hermite–Fejér polynomial through the Bögel modulus of continuity, but only for a special class of functions. In this sense, for a, b ∈ (0, 1], let us define Da,b = {G : [−1, 1] × [−1, 1] → R; F : [−1, 1] → R satisfies and
∞ i=1
G(x, y) = F [f (x)g(y)], where (i 2 / i!)|F (i) (0)| < +∞
f ∈ LipM1 (a; [−1, 1]), g ∈ LipM2 (b; [−1, 1]), ||f ||, ||g|| ≤ 1}.
Remark. A simple example is G(x, y) = sin(xy) ∈ D1,1 . For a, b ∈ (0, 1], let us denote LipB (a, b; [−1, 1]) = {G : [−1, 1] × [−1, 1] → R;
ω(B) (G; a, b) ≤ Cδ a ηb }.
Lemma 3.2.2. Da,b ⊂ LipB (a, b; [−1, 1]). Proof. Let G ∈ Da,b . Developing F in MacLaurin series, we get G(x, y) = F (0) +
∞
f i (x)g i (y)F (i) (0)/ i!.
i=1
This immediately implies h,k G(x, y) =
∞
h f i (x)k g i (y)F i (0)/ i!.
i=1
On the other hand, by f ∈ LipM1 (a, [−1, 1]) and f ≤ 1
3.3 Bivariate Shepard Operators
|f i (x + h) − f i (x)| ≤ iM1 ha ,
75
|g i (y + k) − g i (y)| ≤ iM2 k b , i = 1, 2, . . . ,
which implies ω(B) (G; δ, η) ≤
∞ i=1
(i 2 / i!)|F (i) (0)|δ a ηb ≤ Cδ a ηb .
These prove the lemma.
As a consequence we obtain Corollary 3.2.1. If G ∈ Da,b then ω(B) (Hn1 ,n2 (G); δ, η) = E(a, δ)E(b, η). Proof. We easily get Hn1 ,n2 (G)(x, y) = F (0) +
+∞
Hn1 (f i )(x)Hn2 (g i )(y)F (i) (0)/ i!.
i=1
Then, as in the proof of Lemma 3.2.2, we have ω
(B)
(Hn1 ,n2 (G); δ, η) ≤
+∞
ω1 (Hn1 (f i ); δ)ω(Hn2 (g i ); η)F (i) (0)/ i!.
i=1
But for f ∈ LipM (a; [−1, 1]), for all i, n ∈ N and all δ ≥ 0 we have ω1 (Hn (f i ); δ) = iE(a, δ). Indeed, from the univariate case (see Theorem 1.2.1) we get n n i 2a −1 a a 1/k , n 1/k + δ , ω(Hn (f ); δ) ≤ Ci min δn k=1
k=1
where C > 0 depends only on the Lipschitz constant M of f . Then reasoning exactly as in the proof of Corollary 1.2.1, we get the required formula. Taking now into account Theorem 3.2.1, we immediately obtain ω
(B)
(Hn1 ,n2 (G); δ, η) ≤ E(a, δ)E(b, η)
+∞ i=1
i 2 |F (i) (0)|/ i!,
which proves the corollary.
3.3 Bivariate Shepard Operators Let us first consider the bivariate Shepard operator as a tensor product by
76
3 Global Smoothness Preservation, Bivariate Case (λ,μ) Sn,m (f )(x, y) =
n m i=0 j =0
si,λ (x)sj,μ (y)f (i/n, j/m), if (x, y) =
i j , , n m
(λ,μ)
j j Sn,m (f )( ni , m ) = f ( ni , m ), where 1 < λ, μ and f : [0, 1] × [0, 1] → R,
si,λ (x) = |x − i/n|
−λ
sj,μ (y) = |y − j/m|
/
n k=0
−μ
/
|x − k/n|
m k=0
−λ
|y − k/m|
−μ
,
.
Let Sn,λ (f, x) denote the univariate Shepard operator (for univariate f ) and let us define ⎧ ⎨ O #(δ α ) , $ if 0 < α < λ − 1, 1 Eλ (α, δ) = O #δ α log $ δ , if α = λ − 1 ⎩ λ−1 O δ , if λ − 1 < α ≤ 1, for 1 < λ ≤ 2. The following result is known. Theorem 3.3.1 (see Corollary 1.2.3). (i)If f ∈ LipM (α; [0, 1]), 0 < α ≤ 1, 1 < λ ≤ 2, then for all δ ≥ 0 and n ∈ N we have ω(Sn,λ (f ); δ) = Eλ (α, δ). (ii) If λ > 2 then for all δ ≥ 0 and n ∈ N we have ω(Sn,λ (f ); δ) ≤ Cλ ω(f ; δ). We present: Theorem 3.3.2. Let f : [0, 1] × [0, 1] → R. (i) Suppose a, b ∈ (0, 1], 1 < λ, μ ≤ 2 are fixed and ω(f ; δ, η) ≤ C[δ a + ηb ], for all δ, η ≥ 0. Then for all δ, η ≥ 0 and n, m ∈ N we have (λ,μ) ω(Sn,m (f ); δ, η) = Eλ (a, δ) + Eμ (b, η).
(ii) If λ, μ > 2 then (λ,μ) ω(Sn,m (f ); δ, η) ≤ Cλ,μ ω(f ; δ, η),
for all δ, η ≥ 0 and n, m ∈ N. Proof. (i) The reasonings are similar to those in the proof of Theorem 3.2.2, taking into account Theorem 3.3.1 (i) too. (ii) By Theorem 3.3.1 (ii) and reasoning exactly as in the proof of Theorem 3.2.2, we get (λ,μ) ω(Sn,m (f ); δ, η) ≤ Cλ,μ [ω(x) (f ; δ) + ω(y) (f ; η)],
3.3 Bivariate Shepard Operators
which combined with Lemma 3.1.1 (i) proves the statement.
77
Remark. It is of interest to know the approximation order by the tensor product (λ,μ) Shepard operator Sn,m (f )(x, y). Since by, e.g., Szabados [98], Theorem 1, we easily obtain that for the univariate case we have ||Sn,λ (g)|| ≤ ||Sn,λ (g) − g|| + ||g|| ≤ C||g||, then for bivariate functions f , the case λ, μ > 2, by Haussmann–Pottinger [63], Theorem 5, it immediately follows that (λ,μ) ||Sn,m (f ) − f || ≤ Cλ,μ [ω(x) (f ; 1/n) + ω(y) (f ; 1/m)],
which combined with the second relation in Lemma 3.1.1,(i) implies (λ,μ) ||Sn,m (f ) − f || ≤ Cλ,μ ω(f ; 1/n, 1/m).
Here ||.|| denotes the uniform norm. For the case of Bögel modulus of continuity, firstly we need to define a new class of bivariate functions (which includes Da,b ) by D ∗ = {G : [0, 1] × [0, 1] → R; ∞ i=1
G(x, y) = F [f (x)g(y)], where
F : [0, 1] → R satisfies (i 2 / i!)|F (i) (0)| < +∞ and ||f ||, ||g|| ≤ 1}.
Reasoning exactly as in the previous section, we get the following. Corollary 3.3.1. (i) Suppose a, b ∈ (0, 1], 1 < λ, μ ≤ 2 are fixed. If G ∈ Da,b , G(x, y) = F (f (x)g(y)) then for all δ, η ≥ 0 and n, m ∈ N we have λ,μ ω(B) (Sn,m (G); δ, η) = Eλ (a, δ)Eμ (b, η).
(ii) If 2 < λ, μ and G ∈ D ∗ , G(x, y) = F (f (x)g(y)) then for all δ, η ≥ 0 and n, m ∈ N we have λ,μ ω(B) (Sn,m (G); δ, η) ≤ Cλ,μ ω(f ; δ)ω(g; η).
Proof. As in the proofs of Lemma 3.2.2 and Corollary 3.2.1 we obtain λ,μ ω(B) (Sn,m (G); δ, η) ≤
+∞
μ i ω(Snλ (f i ); δ)ω(Sm (g ); η)|F (i) (0)|/ i!.
i=1
(i) The proof in this case follows the ideas of proof in Corollary 3.2.1, based on the formulas in the univariate case from the end of Section 1.2.
78
3 Global Smoothness Preservation, Bivariate Case
(ii) By mathematical induction we easily get ω(f i ; δ) ≤ iω(f ; δ) and ω(g i ; η) ≤ iω(g; η). Now, following the ideas in the proof of Corollary 3.2.1 and taking into account the above Theorem 3.3.1 too, we easily get the desired conclusion. It is well known that the original bivariate operator introduced by Shepard in [91], actually is not tensor product of the univariate case. More exactly, in [91] Shepard defines the bivariate operators by Sn,μ (f )(x, y) =
n i=0
(μ)
sn,i (x, y)f (xi , yi ), if (x, y) = (xi , yi ),
Sn,μ (f )(xi , yi ) = f (xi , yi ), where μ > 0 is fixed, f : D → R, D ⊂ R2 , (xi , yi ) ∈ D, i = 0, . . . , n, x0 < x1 < · · · < xn , y0 < y1 < · · · < yn , sn,i (x, y) = [(x − xi )2 + (y − yi )2 ]−μ/2 / ln(μ) (x, y), (μ)
ln(μ) (x, y) =
n i=0
(x − xi )2 + (y − yi )2
!−μ/2
.
This kind of Shepard operator is useful in computer-aided geometric design (see, e.g., Barnhill–Dube–Little [13]). In what follows we consider another variant of the bivariate Shepard operator, which is not a tensor product of the univariate case, has good approximation properties and implicitly better global smoothness preservation properties than the operator introduced in Shepard [91]. Thus, let us introduce
$$S_{n_1,n_2,\mu}(f;x,y) = \frac{T_{n_1,n_2,\mu}(f;x,y)}{T_{n_1,n_2,\mu}(1;x,y)},\quad \text{if } (x,y)\ne(x_i,y_j),$$
$$S_{n_1,n_2,\mu}(f;x_i,y_j) = f(x_i,y_j),$$
where $\mu > 0$ is fixed, $f : D \to \mathbb{R}$, $D = [0,1]\times[0,1]$, $x_i = i/n_1$, $i = 0,\dots,n_1$, $y_j = j/n_2$, $j = 0,1,\dots,n_2$, and
$$T_{n_1,n_2,\mu}(f;x,y) = \sum_{i=0}^{n_1}\sum_{j=0}^{n_2}\frac{f(x_i,y_j)}{[(x-x_i)^2+(y-y_j)^2]^{\mu}}.$$
The following result will be essential in establishing the smoothness-preserving properties of this operator.

Theorem 3.3.3. For any $f \in C(D)$ and $\mu > 3/2$ we have
$$\|f - S_{n_1,n_2,\mu}(f)\| \le c\,\omega\Big(f;\frac{1}{n_1},\frac{1}{n_2}\Big)$$
and
$$\max\Big\{\Big\|\frac{\partial S_{n_1,n_2,\mu}(f;x,y)}{\partial x}\Big\|,\ \Big\|\frac{\partial S_{n_1,n_2,\mu}(f;x,y)}{\partial y}\Big\|\Big\} \le c\,(n_1+n_2)\,\omega\Big(f;\frac{1}{n_1},\frac{1}{n_2}\Big).$$
Proof. Let $x_u$ and $y_v$ denote the nodes closest to $x$ and $y$, respectively, and let $K(x,y) := \{(i,j)\,|\,0 \le i \le n_1,\ 0 \le j \le n_2,\ (i,j)\ne(u,v)\}$. We prove that
$$\frac{\displaystyle\sum_{(i,j)\in K(x,y)}\frac{|x-x_i|^{\lambda}}{[(x-x_i)^2+(y-y_j)^2]^{\nu}}}{T_{n_1,n_2,\mu}(1;x,y)} \le c\,n_1^{2\nu-2\mu-\lambda},\qquad \lambda < 2\nu-2, \tag{3.1}$$
and
$$\frac{\displaystyle\sum_{(i,j)\in K(x,y)}\frac{|y-y_j|^{\lambda}}{[(x-x_i)^2+(y-y_j)^2]^{\nu}}}{T_{n_1,n_2,\mu}(1;x,y)} \le c\,n_2^{2\nu-2\mu-\lambda},\qquad \lambda < 2\nu-2. \tag{3.2}$$
Indeed, since $|x-x_i|$ is of order $i/n_1$ and $(x-x_i)^2+(y-y_j)^2$ is of order $(n_2^2i^2+n_1^2j^2)/(n_1n_2)^2$,
$$\frac{\displaystyle\sum_{(i,j)\in K(x,y)}\frac{|x-x_i|^{\lambda}}{[(x-x_i)^2+(y-y_j)^2]^{\nu}}}{T_{n_1,n_2,\mu}(1;x,y)} \le c\,\frac{n_1^{2\nu-\lambda}n_2^{2\nu}}{(n_1n_2)^{2\mu}}\cdot\frac{\displaystyle\sum_{i=1}^{n_1}\sum_{j=1}^{n_2} i^{\lambda}(n_2^2i^2+n_1^2j^2)^{-\nu}}{\displaystyle\sum_{i=1}^{n_1}\sum_{j=1}^{n_2}(n_2^2i^2+n_1^2j^2)^{-\mu}}$$
$$\le c\,n_1^{2\nu-2\mu-\lambda}n_2^{2\nu-2\mu}\cdot\frac{\displaystyle\sum_{i=1}^{n_1}\Big([n_2i/n_1]\,n_2^{-2\nu}i^{\lambda-2\nu} + i^{\lambda}\sum_{j=[n_2i/n_1]+1}^{n_2} n_1^{-2\nu}j^{-2\nu}\Big)}{\displaystyle\sum_{i=1}^{n_1}[n_2i/n_1]\,(n_2 i)^{-2\mu}}$$
$$\le c\,n_1^{2\nu-2\mu-\lambda}n_2^{2\nu-2\mu}\cdot\frac{n_1^{-1}n_2^{1-2\nu}\displaystyle\sum_{i=1}^{n_1} i^{1+\lambda-2\nu}}{n_1^{-1}n_2^{1-2\mu}\displaystyle\sum_{i=1}^{n_1} i^{1-2\mu}} \le c\,n_1^{2\nu-2\mu-\lambda},$$
since for $\lambda < 2\nu-2$ and $\mu > 1$ both sums in the last fraction are bounded.
Equation (3.2) can be proved analogously. It is clear from the above argument that in the case $\lambda \ge 2\nu-2\mu$ the summations in (3.1)–(3.2) can be extended to $0 \le i \le n_1$, $0 \le j \le n_2$. We shall also use the inequalities
$$|f(x_i,y_j) - f(x,y)| \le \omega\Big(f;\frac{1}{n_1},\frac{1}{n_2}\Big)\big(2 + n_1|x-x_i| + n_2|y-y_j|\big), \tag{3.3}$$
$$T_{n_1,n_2,\mu}(1;x,y) \ge c\,n_1^{2\mu-1}n_2, \tag{3.4}$$
$$\sum_{(i,j)\in K(x,y)}\frac{|x-x_i|^{\lambda}}{[(x-x_i)^2+(y-y_j)^2]^{\nu}} \le c\,n_1^{2\nu-\lambda-1}n_2,\qquad 0 \le \lambda < 2\nu-2, \tag{3.5}$$
and
$$\max\Big\{\Big|\frac{\partial T_{n_1,n_2,\mu}(1;x,y)/\partial x}{T_{n_1,n_2,\mu}(1;x,y)}\Big|,\ \Big|\frac{\partial T_{n_1,n_2,\mu}(1;x,y)/\partial y}{T_{n_1,n_2,\mu}(1;x,y)}\Big|\Big\} \le c\,(n_1+n_2). \tag{3.6}$$
Equation (3.3) easily follows from the properties of the partial moduli of continuity. Equations (3.4)–(3.5) can be proved similarly to (3.1)–(3.2). Equation (3.6) follows from (3.1)–(3.2) with $\lambda = 1$, $\nu = \mu+1$. Next, using (3.1)–(3.2) with $\lambda = 0$ or $1$ and $\nu = \mu > 3/2$, as well as the second relation in Lemma 3.1.1,(i), we obtain
$$\|f - S_{n_1,n_2,\mu}(f)\| \le \omega\Big(f;\frac{1}{n_1},\frac{1}{n_2}\Big) + \frac{\displaystyle\sum_{(i,j)\in K(x,y)}\big[\omega^{(x)}(f;|x-x_i|)+\omega^{(y)}(f;|y-y_j|)\big]\,[(x-x_i)^2+(y-y_j)^2]^{-\mu}}{T_{n_1,n_2,\mu}(1;x,y)}$$
$$\le c\,\omega\Big(f;\frac{1}{n_1},\frac{1}{n_2}\Big)\Bigg[1 + \frac{\displaystyle\sum_{(i,j)\in K(x,y)}\frac{2+n_1|x-x_i|+n_2|y-y_j|}{[(x-x_i)^2+(y-y_j)^2]^{\mu}}}{T_{n_1,n_2,\mu}(1;x,y)}\Bigg] \le c\,\omega\Big(f;\frac{1}{n_1},\frac{1}{n_2}\Big).$$
In order to prove the second relation of the theorem, besides (3.1)–(3.5) we use the fact that for $f = \mathrm{const}$ we have $\partial S_{n_1,n_2,\mu}(f)/\partial x = \partial S_{n_1,n_2,\mu}(f)/\partial y \equiv 0$ for all $(x,y)\in D$, and get
$$\Big|\frac{\partial S_{n_1,n_2,\mu}(f;x,y)}{\partial x}\Big| \le c\,\omega\Big(f;\frac{1}{n_1},\frac{1}{n_2}\Big)\,\Big|\frac{\partial}{\partial x}\,\frac{[(x-x_u)^2+(y-y_v)^2]^{-\mu}}{T_{n_1,n_2,\mu}(1;x,y)}\Big|$$
$$+\;c\,\frac{\displaystyle\sum_{(i,j)\in K(x,y)}\frac{|x-x_i|\,|f(x_i,y_j)-f(x,y)|}{[(x-x_i)^2+(y-y_j)^2]^{\mu+1}}}{T_{n_1,n_2,\mu}(1;x,y)} \;+\; c\,\frac{\displaystyle\sum_{(i,j)\in K(x,y)}\frac{|f(x_i,y_j)-f(x,y)|}{[(x-x_i)^2+(y-y_j)^2]^{\mu}}\cdot\sum_{(i,j)\in K(x,y)}\frac{|x-x_i|}{[(x-x_i)^2+(y-y_j)^2]^{\mu+1}}}{T_{n_1,n_2,\mu}(1;x,y)^2}.$$
Here the first term accounts for the node $(x_u,y_v)$, separated from the sums, and the differences $|f(x_i,y_j)-f(x,y)|$ in the remaining terms are bounded by means of (3.3). Estimating each of these terms by (3.1)–(3.5) (with $\lambda = 0$ or $1$ and $\nu = \mu$ or $\mu+1$), we arrive at
$$\Big|\frac{\partial S_{n_1,n_2,\mu}(f;x,y)}{\partial x}\Big| \le c\,(n_1+n_2)\,\omega\Big(f;\frac{1}{n_1},\frac{1}{n_2}\Big),$$
and similarly for the partial derivative with respect to $y$, which proves the second relation of the theorem.
Now we are in the position to state our preservation results, unfortunately only in the special case $n_1 = n_2$; the case $n_1 \ne n_2$ remains open.

Theorem 3.3.4. For $\mu > 3/2$, $n \in \mathbb{N}$ and $h, k \ge 0$, we have
$$\omega(S_{n,n,\mu}(f);h,k) \le c\,\omega(f;h+k,h+k).$$
Proof. We apply the standard technique. By the first estimate in Theorem 3.3.3 we get
$$\omega(S_{n,n,\mu}(f);h,k) \le 2\,\|S_{n,n,\mu}(f)-f\| + \omega(f;h,k)$$
$$\le c\Big[\omega\Big(f;\frac{1}{n},\frac{1}{n}\Big) + \omega(f;h,k)\Big].$$
Then, by the bivariate mean value theorem and by the second estimate in Theorem 3.3.3 we get
$$\omega(S_{n,n,\mu}(f);h,k) \le h\,\Big\|\frac{\partial S_{n,n,\mu}(f;x,y)}{\partial x}\Big\| + k\,\Big\|\frac{\partial S_{n,n,\mu}(f;x,y)}{\partial y}\Big\| \le c\,n(h+k)\,\omega\Big(f;\frac{1}{n},\frac{1}{n}\Big).$$
Hence, by considering the cases $h+k \le 1/n$ and $h+k > 1/n$ separately, we obtain the theorem.
Taking into account the form of $S_{n,n,\mu}(f)(x,y)$, it is more natural to consider the so-called Euclidean bivariate modulus of continuity, defined by
$$\omega^{(E)}(f;\rho) = \sup\big\{|f(x+h,y+k)-f(x,y)|\,;\ 0\le h,\ 0\le k,\ (h^2+k^2)^{1/2}\le\rho,\ x,x+h\in[0,1],\ y,y+k\in[0,1]\big\}$$
(see, e.g., Anastassiou–Gal [6], p. 80, Definition 2.3.1). With respect to this modulus we obtain the following.

Corollary 3.3.2. For $\mu > 3/2$, $n \in \mathbb{N}$ and $\rho \ge 0$, we have $\omega^{(E)}(S_{n,n,\mu}(f);\rho) \le c\,\omega^{(E)}(f;\rho)$.

Proof. Taking $h = k$ in Theorem 3.3.4, it follows that $\omega(S_{n,n,\mu}(f);h,h) \le c\,\omega(f;h,h)$. But by, e.g., Anastassiou–Gal [6], p. 81, we have
$$\omega^{(E)}(f;\rho) \le \omega(f;\rho,\rho) \le \omega^{(E)}(f;\sqrt{2}\,\rho) \le 2\,\omega^{(E)}(f;\rho),$$
which combined with the above inequality immediately proves the corollary.
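The Euclidean modulus in the definition above can be estimated numerically by restricting the supremum to grid points. The following is a small sketch of our own (the function `euclidean_modulus` and the sample function are hypothetical, not from the book); it only illustrates the definition on $[0,1]^2$.

```python
import math

def euclidean_modulus(f, rho, steps=20):
    """Approximate omega^(E)(f; rho) on [0,1]^2: sup of |f(x+h, y+k) - f(x, y)|
    over grid points with h, k >= 0 and sqrt(h^2 + k^2) <= rho."""
    pts = [i / steps for i in range(steps + 1)]
    best = 0.0
    for x in pts:
        for y in pts:
            for xh in pts:
                for yk in pts:
                    h, k = xh - x, yk - y
                    if h >= 0 and k >= 0 and math.hypot(h, k) <= rho:
                        best = max(best, abs(f(xh, yk) - f(x, y)))
    return best

f = lambda x, y: x + y   # Lipschitz in each variable with constant 1
# the modulus is nondecreasing in rho, and for this f it is bounded by sqrt(2)*rho:
print(euclidean_modulus(f, 0.1) <= euclidean_modulus(f, 0.2))  # True
```

The brute-force search is $O(\mathrm{steps}^4)$, so it is only meant for small grids; it is enough to observe numerically the equivalence $\omega^{(E)}(f;\rho) \le \omega(f;\rho,\rho) \le 2\,\omega^{(E)}(f;\rho)$ quoted above.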
3.4 Bivariate Lagrange Polynomials

Let us define the bivariate Lagrange interpolation polynomials on the Chebyshev nodes of the second kind plus the endpoints $\pm 1$ by
$$L_{n_1,n_2}(f)(x,y) = \sum_{i=1}^{n_1}\sum_{j=1}^{n_2} h_{i,n_1}(x)\,h_{j,n_2}(y)\,f(x_{i,n_1},y_{j,n_2}),$$
where
$$x_{i,n} = \cos\frac{(i-1)\pi}{n-1},\qquad i = \overline{1,n},$$
$$h_{i,n}(x) = \frac{(-1)^{i-1}\,\omega_n(x)}{(1+\delta_{i1}+\delta_{in})\,(n-1)\,(x-x_{i,n})},\qquad \omega_n(x) = \sin t\,\sin(n-1)t,\quad x = \cos t.$$
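The tensor-product interpolant above can be sketched directly from the nodes $x_{i,n} = \cos((i-1)\pi/(n-1))$, using the standard product form of the fundamental Lagrange polynomials (which is equivalent to the closed formula for $h_{i,n}$). This is our own illustration, with hypothetical names and a hypothetical test function:

```python
import math

def cheb2_nodes(n):
    """Chebyshev nodes of the second kind plus the endpoints +-1."""
    return [math.cos((i - 1) * math.pi / (n - 1)) for i in range(1, n + 1)]

def lagrange_basis(nodes, i, t):
    """Fundamental Lagrange polynomial for node i, in product form."""
    p = 1.0
    for j, xj in enumerate(nodes):
        if j != i:
            p *= (t - xj) / (nodes[i] - xj)
    return p

def L(f, n1, n2, x, y):
    """Bivariate tensor-product Lagrange interpolant on cheb2 nodes."""
    xs, ys = cheb2_nodes(n1), cheb2_nodes(n2)
    return sum(lagrange_basis(xs, i, x) * lagrange_basis(ys, j, y) * f(xs[i], ys[j])
               for i in range(n1) for j in range(n2))

f = lambda x, y: x * x + y          # low-degree polynomial
xs = cheb2_nodes(5)
print(abs(L(f, 5, 5, xs[2], xs[3]) - f(xs[2], xs[3])) < 1e-9)  # True: interpolation
print(abs(L(f, 5, 5, 0.3, -0.2) - f(0.3, -0.2)) < 1e-9)        # True: exactness
```

Since the tensor-product operator reproduces bivariate polynomials of coordinate degree at most $n_1-1$ and $n_2-1$, the second check (exact reproduction of $x^2+y$ with five nodes per variable) holds up to rounding.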
Let us denote by $L_n(f,x)$ the univariate Lagrange polynomials based on the Chebyshev nodes of the second kind plus the endpoints $\pm 1$, and
$$E(\alpha,\delta) = \begin{cases} O\Big(\delta^{\frac{\alpha}{2-\alpha}}\big(\ln\frac{1}{\delta}\big)^{\frac{2-2\alpha}{2-\alpha}}\Big), & \text{if } 0 < \alpha < \frac12,\\[4pt] O\Big(\delta^{\frac13}\ln\frac{1}{\delta}\Big), & \text{if } \alpha = \frac12,\\[4pt] O\Big(\delta^{\frac{\alpha}{1+\alpha}}\big(\ln\frac{1}{\delta}\big)^{\frac{1}{1+\alpha}}\Big), & \text{if } \frac12 < \alpha \le 1.\end{cases}$$
We need the following result in the univariate case.

Theorem 3.4.1. (See Corollary 1.2.2.) If $f \in \mathrm{Lip}_M(\alpha;[-1,1])$, $0 < \alpha \le 1$, then $\omega(L_n(f);\delta) = E(\alpha,\delta)$.

First, we consider global smoothness preservation properties with respect to $\omega(f;\delta,\eta)$. The proofs of our main results require the following three lemmas.

Lemma 3.4.1. For all $(x,y) \in [-1,1]\times[-1,1]$ we have
$$\Big|\frac{\partial L_{n_1,n_2}(f)(x,y)}{\partial x}\Big| \le c\,n_1\sum_{i=1}^{n_1}\omega_1^{(x)}\Big(f;\frac{1}{i^2}\Big),\qquad \Big|\frac{\partial L_{n_1,n_2}(f)(x,y)}{\partial y}\Big| \le c\,n_2\sum_{j=1}^{n_2}\omega_1^{(y)}\Big(f;\frac{1}{j^2}\Big).$$
Here $c > 0$ is an absolute constant (independent of $n_1, n_2, x, y$ and $f$).

Proof. Because $\sum_{i=1}^{n_1} h'_{i,n_1}(x) = 0$, we get
$$\Big|\frac{\partial L_{n_1,n_2}(f)(x,y)}{\partial x}\Big| = \Big|\sum_{j=1}^{n_2} h_{j,n_2}(y)\sum_{i=1}^{n_1} h'_{i,n_1}(x)\,f(x^{(1)}_i,x^{(2)}_j)\Big| = \Big|\sum_{j=1}^{n_2} h_{j,n_2}(y)\sum_{i=1}^{n_1} h'_{i,n_1}(x)\big[f(x^{(1)}_i,x^{(2)}_j)-f(x,x^{(2)}_j)\big]\Big|$$
$$\le \sum_{j=1}^{n_2}\big|h_{j,n_2}(y)\big|\sum_{i=1}^{n_1}\big|h'_{i,n_1}(x)\big|\,\omega_1^{(x)}\big(f;|x-x^{(1)}_i|\big)$$
(reasoning as in the univariate case, in the proof of Theorem 1.2.2)
$$\le c\,n_1\sum_{i=1}^{n_1}\omega_1^{(x)}\Big(f;\frac{1}{i^2}\Big).$$
The estimate for $\partial L_{n_1,n_2}(f)(x,y)/\partial y$ is similar.
Lemma 3.4.2. We have the estimates
$$\omega^{(x)}(L_{n_1,n_2}(f);\delta) \le \Big\|\frac{\partial L_{n_1,n_2}(f)}{\partial x}\Big\|\cdot\delta,\qquad \forall\,\delta\ge 0,$$
and
$$\omega^{(y)}(L_{n_1,n_2}(f);\delta) \le \Big\|\frac{\partial L_{n_1,n_2}(f)}{\partial y}\Big\|\cdot\delta,\qquad \forall\,\delta\ge 0,$$
where $\|\cdot\|$ represents the uniform norm on $C([-1,1]\times[-1,1]) = \{f : [-1,1]^2\to\mathbb{R};\ f \text{ continuous on } [-1,1]^2\}$.

Proof. By the mean value theorem we get
$$|L_{n_1,n_2}(f)(x+h,y) - L_{n_1,n_2}(f)(x,y)| = |h|\,\Big|\frac{\partial L_{n_1,n_2}(f)(\xi,y)}{\partial x}\Big|,$$
which immediately implies
$$\omega^{(x)}(L_{n_1,n_2}(f);\delta) \le \delta\,\Big\|\frac{\partial L_{n_1,n_2}(f)}{\partial x}\Big\|.$$
The proof of the second estimate is similar.

Lemma 3.4.3. We have the estimates
$$\omega^{(x)}(L_{n_1,n_2}(f);\delta) \le 2\,\|L_{n_1,n_2}(f)-f\| + \omega^{(x)}(f;\delta)$$
and
$$\omega^{(y)}(L_{n_1,n_2}(f);\delta) \le 2\,\|L_{n_1,n_2}(f)-f\| + \omega^{(y)}(f;\delta),\qquad \forall\,\delta\ge 0.$$
Proof. The first estimate is immediate by
$$|L_{n_1,n_2}(f)(x+h,y)-L_{n_1,n_2}(f)(x,y)| \le |L_{n_1,n_2}(f)(x+h,y)-f(x+h,y)| + |f(x+h,y)-f(x,y)| + |f(x,y)-L_{n_1,n_2}(f)(x,y)|.$$
We have a similar proof for the second estimate.
We obtain the following consequences.

Corollary 3.4.1. For any $f \in C([-1,1]\times[-1,1])$, any $\delta > 0$ and any $n \in \mathbb{N}$, we have
$$\omega^{(x)}(L_{n,n}(f);\delta) \le \min\Big\{O\Big(\delta n\sum_{i=1}^{n}\omega^{(x)}\Big(f;\frac{1}{i^2}\Big)\Big),\ O\Big(\omega^{(x)}\Big(f;\frac{1}{n}\Big)\log n + \omega^{(y)}\Big(f;\frac{1}{n}\Big)\log n + \omega^{(x)}(f;\delta)\Big)\Big\}$$
and
$$\omega^{(y)}(L_{n,n}(f);\delta) \le \min\Big\{O\Big(\delta n\sum_{j=1}^{n}\omega^{(y)}\Big(f;\frac{1}{j^2}\Big)\Big),\ O\Big(\omega^{(x)}\Big(f;\frac{1}{n}\Big)\log n + \omega^{(y)}\Big(f;\frac{1}{n}\Big)\log n + \omega^{(y)}(f;\delta)\Big)\Big\}.$$
Proof. It is immediate by Lemmas 3.4.1, 3.4.2, 3.4.3 for $n_1 = n_2 = n$, and by the fact that, taking into account the technique in Shisha–Mond [92], pp. 1275–1276, and the estimate in the univariate case, we have
$$\|L_{n_1,n_2}(f)-f\| \le c\Big[\omega^{(x)}\Big(f;\frac{1}{n_1}\Big)\log n_1 + \omega^{(y)}\Big(f;\frac{1}{n_2}\Big)\log n_2\Big].$$

Corollary 3.4.2. Let $f : [-1,1]\times[-1,1]\to\mathbb{R}$ be Lipschitz of order $\alpha \in (0,1]$ with respect to $x$ and $y$, respectively, i.e.,
$$|f(x_1,y)-f(x_2,y)| \le L_1|x_1-x_2|^{\alpha},\quad \forall\,x_1,x_2,y\in[-1,1],$$
and
$$|f(x,y_1)-f(x,y_2)| \le L_2|y_1-y_2|^{\alpha},\quad \forall\,x,y_1,y_2\in[-1,1].$$
Then, for all $n \in \mathbb{N}$ and $\delta, \eta \in (0,1)$ we have:
$$\omega(L_{n,n}(f);\delta,\eta) \le C\big[\min\{\delta n,\ n^{-\alpha}\log n + \delta^{\alpha}\} + \min\{\eta n,\ n^{-\alpha}\log n + \eta^{\alpha}\}\big],\quad \text{if } \tfrac12 < \alpha \le 1;$$
$$\omega(L_{n,n}(f);\delta,\eta) \le C\Big[\min\Big\{\delta n\log n,\ \frac{\log n}{\sqrt n} + \sqrt{\delta}\Big\} + \min\Big\{\eta n\log n,\ \frac{\log n}{\sqrt n} + \sqrt{\eta}\Big\}\Big],\quad \text{if } \alpha = \tfrac12;$$
$$\omega(L_{n,n}(f);\delta,\eta) \le C\big[\min\{\delta n^{2-2\alpha},\ n^{-\alpha}\log n + \delta^{\alpha}\} + \min\{\eta n^{2-2\alpha},\ n^{-\alpha}\log n + \eta^{\alpha}\}\big],\quad \text{if } 0 < \alpha < \tfrac12.$$
Proof. By Lemma 3.1.1,(i) we have
$$\omega(L_{n_1,n_2}(f);\delta,\eta) \le \omega^{(x)}(L_{n_1,n_2}(f);\delta) + \omega^{(y)}(L_{n_1,n_2}(f);\eta)$$
(actually $\omega(L_{n_1,n_2}(f);\delta,\eta) \sim \omega^{(x)}(L_{n_1,n_2}(f);\delta) + \omega^{(y)}(L_{n_1,n_2}(f);\eta)$, which justifies the method). Because in general we have
$$\sum_{i=1}^{n}\frac{1}{i^{\alpha}} = \begin{cases} O(n^{1-\alpha}), & \text{if } 0 < \alpha < 1,\\ O(\log n), & \text{if } \alpha = 1,\\ O(1), & \text{if } \alpha > 1,\end{cases}$$
it follows that
$$n\sum_{i=1}^{n}\omega_1^{(z)}\Big(f;\frac{1}{i^2}\Big) \le C\begin{cases} n^{2-2\alpha}, & \text{if } 0 < \alpha < \frac12,\\ n\log n, & \text{if } \alpha = \frac12,\\ n, & \text{if } \frac12 < \alpha \le 1,\end{cases}$$
for $z := x$, $z := y$, and $f \in \mathrm{Lip}\,\alpha$ with respect to $x$ and $y$. Taking now $n = n_1 = n_2$ in Lemma 3.1.1, by Corollary 3.4.1 we get the statement of the corollary.

Now we are in position to prove the following global smoothness preservation property.

Corollary 3.4.3. Let $f : [-1,1]\times[-1,1]\to\mathbb{R}$ be Lipschitz of order $\alpha \in (0,1]$ with respect to $x$ and $y$, respectively, which obviously is equivalent to $\omega(f;\delta,\eta) \le C(\delta^{\alpha}+\eta^{\alpha})$, $\forall\,\delta,\eta > 0$. For all $n \in \mathbb{N}$, $\delta,\eta \in (0,1)$, we have
$$\omega(L_{n,n}(f);\delta,\eta) = E(\alpha,\delta) + E(\alpha,\eta).$$
Proof. By Corollary 3.4.2 and reasoning exactly as in the univariate case (for example, if $1/2 < \alpha \le 1$, then we consider separately the cases $\delta n \le n^{-\alpha}\log n$, $\delta n > n^{-\alpha}\log n$, $\eta n \le n^{-\alpha}\log n$, $\eta n > n^{-\alpha}\log n$), we immediately obtain $\omega(L_{n,n}(f);\delta,\eta) = E(\alpha,\delta) + E(\alpha,\eta)$.
Remark. The method applied to the Hermite–Fejér interpolation polynomials in Section 3.2 unfortunately does not work for Lagrange polynomials. As a consequence, the case $n_1 \ne n_2$ still remains open for Lagrange polynomials. For the case of the Bögel modulus of continuity, reasoning exactly as in the previous Section 3.2, we immediately get the following.
Corollary 3.4.4. If $G \in D_{a,b}$, $G(x,y) = F(f(x)g(y))$, then
$$\omega^{(B)}(L_{n_1,n_2}(G);\delta,\eta) = E(a,\delta)\,E(b,\eta).$$
Remark. All the above results can easily be extended to $m$ variables, $m > 2$.
3.5 Bibliographical Remarks and Open Problems

All the results in this chapter, except those where the authors are mentioned, belong to Gal–Szabados [58].

Open Problem 3.5.1. Prove global smoothness preservation properties for the Shepard operators $S_{n_1,n_2,\mu}(f)(x,y)$ in Theorem 3.3.3 and for the Lagrange polynomials $L_{n_1,n_2}(f)(x,y)$ in Lemmas 3.4.1–3.4.3, for the general case $n_1 \ne n_2$. (The particular case $n_1 = n_2$ is solved by Theorem 3.3.4, Corollary 3.3.2 and Corollary 3.4.3.)

Open Problem 3.5.2. For the bivariate tensor product Shepard operators on the semi-axis, generated by the univariate case in Della Vecchia–Mastroianni–Szabados [36], prove global smoothness preservation properties.

Open Problem 3.5.3. For the kinds of univariate Shepard operators considered in Chapters 1 and 2, various bivariate combinations, different from those considered in Chapter 3, can be constructed as follows. We suppose that all the kinds of Shepard operators are defined on equidistant nodes in $[-1,1]\times[-1,1]$, i.e., $x_k = k/n$, $k = -n,\dots,0,\dots,n$ and $y_j = j/m$, $j = -m,\dots,0,\dots,m$.

Type 1. "Original Shepard–Lagrange operator,"
$$S_{n,p,L_m}(f)(x,y) = \sum_{i=-n}^{n} s_{i,n,p}(x,y)\,L^{i}_{m}(f)(x,y),$$
where
$$s_{i,n,p}(x,y) = \frac{[(x-x_i)^2+(y-y_i)^2]^{-p}}{\sum_{k=-n}^{n}[(x-x_k)^2+(y-y_k)^2]^{-p}},\qquad p > 2,\ p\in\mathbb{N},$$
and $L^{i}_{m}(f)(x,y)$ is the polynomial defined as in, e.g., Coman–Trimbitas [27], p. 43.

Type 2. "Tensor product Shepard–Lagrange operator,"
$$S_{n,m,2p,2q,n_1,n_2}(f)(x,y) = \sum_{i=-n}^{n}\sum_{j=-m}^{m} s_{i,n,2p}(x)\,s_{j,m,2q}(y)\,L^{i,j}_{n_1,n_2}(f)(x,y),\qquad p,q\ge 2,$$
where
$$s_{i,n,2p}(x) = \frac{(x-x_i)^{-2p}}{\sum_{k=-n}^{n}(x-x_k)^{-2p}},\qquad s_{j,m,2q}(y) = \frac{(y-y_j)^{-2q}}{\sum_{k=-m}^{m}(y-y_k)^{-2q}},$$
and
$$L^{i,j}_{n_1,n_2}(f)(x,y) = \sum_{\nu=0}^{n_1}\sum_{\mu=0}^{n_2}\frac{u_i(x)}{(x-x_{i+\nu})\,u'_i(x_{i+\nu})}\cdot\frac{v_j(y)}{(y-y_{j+\mu})\,v'_j(y_{j+\mu})}\,f(x_{i+\nu},y_{j+\mu}),$$
with $u_i(x) = (x-x_i)\cdots(x-x_{i+n_1})$, $v_j(y) = (y-y_j)\cdots(y-y_{j+n_2})$ and $x_{n_1+\nu} = x_{\nu}$, $y_{n_2+\mu} = y_{\mu}$.

Type 3. "Tensor product Shepard–Lagrange–Taylor operator,"
$$S_{n,m,2p,2q,n_1,n_2}(f)(x,y) = \sum_{i=-n}^{n}\sum_{j=-m}^{m} s_{i,n,2p}(x)\,s_{j,m,2q}(y)\,LT^{i,j}_{n_1,n_2}(f)(x,y),\qquad p,q\ge 2,$$
where
$$s_{i,n,2p}(x) = \frac{(x-x_i)^{-2p}}{\sum_{k=-n}^{n}(x-x_k)^{-2p}},\qquad s_{j,m,2q}(y) = \frac{(y-y_j)^{-2q}}{\sum_{k=-m}^{m}(y-y_k)^{-2q}},$$
and
$$LT^{i,j}_{n_1,n_2}(f)(x,y) = \sum_{\nu=0}^{n_1}\sum_{\mu=0}^{n_2}\frac{u_i(x)}{(x-x_{i+\nu})\,u'_i(x_{i+\nu})}\cdot\frac{(y-y_j)^{\mu}}{\mu!}\,\frac{\partial^{\mu}f(x_{i+\nu},y_j)}{\partial y^{\mu}}.$$
Obviously, if we interchange the roles of $x$ and $y$, then we get the "tensor product Shepard–Taylor–Lagrange" type operator, which is similar.

Type 4. "Original Shepard–Taylor operator,"
$$S_{n,p,T_{m_{-n},\dots,m_n}}(f)(x,y) = \sum_{i=-n}^{n} s_{i,n,p}(x,y)\,T^{i}_{m_i}(f)(x,y),$$
where
$$s_{i,n,p}(x,y) = \frac{[(x-x_i)^2+(y-y_i)^2]^{-p}}{\sum_{k=-n}^{n}[(x-x_k)^2+(y-y_k)^2]^{-p}},\qquad p > 2,\ p\in\mathbb{N},$$
and
$$T^{i}_{m_i}(f)(x,y) = \sum_{r+s\le m_i}\frac{(x-x_i)^{r}}{r!}\,\frac{(y-y_i)^{s}}{s!}\,\frac{\partial^{r+s}f(x_i,y_i)}{\partial x^{r}\partial y^{s}}.$$
Type 5. "Tensor product Shepard–Taylor operator,"
$$S_{n,m,2p,2q,n_{-n},\dots,n_n,m_{-m},\dots,m_m}(f)(x,y) = \sum_{i=-n}^{n}\sum_{j=-m}^{m} s_{i,n,2p}(x)\,s_{j,m,2q}(y)\,TT^{i,j}_{n_i,m_j}(f)(x,y),\qquad p,q\ge 2,$$
where
$$s_{i,n,2p}(x) = \frac{(x-x_i)^{-2p}}{\sum_{k=-n}^{n}(x-x_k)^{-2p}},\qquad s_{j,m,2q}(y) = \frac{(y-y_j)^{-2q}}{\sum_{k=-m}^{m}(y-y_k)^{-2q}},$$
and
$$TT^{i,j}_{n_i,m_j}(f)(x,y) = \sum_{\nu=0}^{n_i}\sum_{\mu=0}^{m_j}\frac{(x-x_i)^{\nu}}{\nu!}\,\frac{(y-y_j)^{\mu}}{\mu!}\,\frac{\partial^{\nu+\mu}f(x_i,y_j)}{\partial x^{\nu}\partial y^{\mu}}.$$
Type 6. "Shepard–Gal–Szabados–Taylor operator,"
$$S_{n,m,p,n_{-n},\dots,n_n,m_{-m},\dots,m_m}(f)(x,y) = \frac{\displaystyle\sum_{i=-n}^{n}\sum_{j=-m}^{m}[(x-x_i)^2+(y-y_j)^2]^{-p}\,TT^{i,j}_{n_i,m_j}(f)(x,y)}{\displaystyle\sum_{i=-n}^{n}\sum_{j=-m}^{m}[(x-x_i)^2+(y-y_j)^2]^{-p}},$$
where
$$TT^{i,j}_{n_i,m_j}(f)(x,y) = \sum_{\nu=0}^{n_i}\sum_{\mu=0}^{m_j}\frac{(x-x_i)^{\nu}}{\nu!}\,\frac{(y-y_j)^{\mu}}{\mu!}\,\frac{\partial^{\nu+\mu}f(x_i,y_j)}{\partial x^{\nu}\partial y^{\mu}}.$$
It would be interesting to find global smoothness preservation properties (with respect to various bivariate moduli of continuity) of the above six types of bivariate Shepard operators.
Open Problem 3.5.4. Starting from the local variants of Shepard operators in the univariate case defined in Open Problem 1.6.13 (Chapter 1), local bivariate tensor products can be defined as in the above Open Problem 3.5.3. The problem is to find global smoothness preservation properties (with respect to various bivariate moduli of continuity) of these local bivariate Shepard operators.
4 Partial Shape Preservation, Bivariate Case
In this chapter we extend the results of Chapter 2 to the bivariate case.
4.1 Introduction

As was pointed out in Chapter 2, it is evident that, because of the interpolation conditions, the interpolating operators cannot completely preserve the shape of a univariate function $f$ on the whole interval containing the points of interpolation. A key result used in the univariate case for the proofs of qualitative-type results is the following simple one.

Lemma 4.1.1. (Popoviciu [81]). Let $f : [a,b]\to\mathbb{R}$, $a \le x_1 < x_2 < \dots < x_n \le b$ and $F_n(f)(x) = \sum_{i=1}^{n} h_i(x)\,f(x_i)$, where $h_i \in C^1[a,b]$ and $\sum_{i=1}^{n} h_i(x) = 1$ for all $x\in[a,b]$.
(i) We have
$$F'_n(f)(x) = \sum_{i=1}^{n-1}\Big(-\sum_{j=1}^{i} h'_j(x)\Big)\big[f(x_{i+1})-f(x_i)\big].$$
(ii) If there exists $x_0\in(a,b)$ such that $h'_1(x_0) < 0$, $h'_n(x_0) > 0$ and the sequence $h'_1(x_0), h'_2(x_0),\dots,h'_n(x_0)$ has a unique variation of sign, then
$$-\sum_{j=1}^{i} h'_j(x_0) > 0\qquad \text{for all } i = \overline{1,n-1},$$
and consequently, by (i), there exists a neighborhood $V(x_0)$ of $x_0$ where the monotonicity of $f$ assumed on the whole $[a,b]$ is preserved.

In this chapter qualitative and quantitative results for bivariate Hermite–Fejér polynomials and Shepard operators are obtained. New aspects appear because of the various possible natural concepts of bivariate monotonicity and convexity. Also, three different kinds of bivariate Shepard operators are studied.
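The identity in Lemma 4.1.1,(i) is an Abel-summation (summation by parts) identity, and it can be checked numerically. The following sketch is our own illustration: the derivatives $h_i'$ are chosen for four hypothetical basis functions with $\sum_i h_i \equiv 1$ (so $\sum_i h_i' \equiv 0$), and the direct form of $F_n'(x)$ is compared with the telescoped form.

```python
# Derivatives of h_1..h_4, chosen so that sum_i h_i(x) = 1 (hence sum_i h_i'(x) = 0):
# h_1 = x, h_2 = x^2, h_3 = x/2, h_4 = 1 - x - x^2 - x/2.
dhs = [lambda x: 1.0, lambda x: 2.0 * x, lambda x: 0.5, lambda x: -1.5 - 2.0 * x]
fvals = [2.0, -1.0, 4.0, 0.5]      # f(x_1), ..., f(x_4)
x = 0.3

# direct form: F_n'(x) = sum_i h_i'(x) f(x_i)
direct = sum(dh(x) * fv for dh, fv in zip(dhs, fvals))
# telescoped form of Lemma 4.1.1,(i): sum_i (-sum_{j<=i} h_j'(x)) [f(x_{i+1}) - f(x_i)]
telescoped = sum((-sum(dh(x) for dh in dhs[:i + 1])) * (fvals[i + 1] - fvals[i])
                 for i in range(len(fvals) - 1))
print(abs(direct - telescoped) < 1e-12)   # True: the two forms agree
```

The telescoped form makes part (ii) transparent: if all the partial sums $-\sum_{j\le i} h_j'(x_0)$ are positive, then $F_n'(f)(x_0)$ inherits the sign of the differences $f(x_{i+1})-f(x_i)$, i.e., local monotonicity preservation.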
4.2 Bivariate Hermite–Fejér Polynomials

If $g : [-1,1]\to\mathbb{R}$ and $-1 < x_{n,n} < x_{n-1,n} < \dots < x_{1,n} < 1$ are the roots of the Jacobi polynomial $J^{(\alpha,\beta)}_n(x)$, then it is well known (see, e.g., Fejér [43] or Popoviciu [81]) that the (univariate) Hermite–Fejér polynomials based on these roots are given by $F_n(g)(x) = \sum_{i=1}^{n} h_{i,n}(x)\,g(x_{i,n})$, where
$$h_{i,n}(x) = \ell^2_{i,n}(x)\Big[1 - \frac{\ell''_n(x_{i,n})}{\ell'_n(x_{i,n})}\,(x-x_{i,n})\Big],\qquad \ell_{i,n}(x) = \frac{\ell_n(x)}{(x-x_{i,n})\,\ell'_n(x_{i,n})},\qquad \ell_n(x) = \prod_{i=1}^{n}(x-x_{i,n}).$$
We have $\sum_{i=1}^{n} h_{i,n}(x) = 1$ for all $x\in[-1,1]$. Now, if $f : [-1,1]\times[-1,1]\to\mathbb{R}$, then according to, e.g., Shisha–Mond [92], the bivariate Hermite–Fejér polynomial is defined by
$$F_{n_1,n_2}(f)(x,y) = \sum_{i=1}^{n_1}\sum_{j=1}^{n_2} h^{(1)}_{i,n_1}(x)\,h^{(2)}_{j,n_2}(y)\,f\big(x^{(1)}_{i,n_1},x^{(2)}_{j,n_2}\big), \tag{4.1}$$
where $h^{(1)}_{i,n_1}(x)$, $x^{(1)}_{i,n_1}$, $i=\overline{1,n_1}$, and $h^{(2)}_{j,n_2}(y)$, $x^{(2)}_{j,n_2}$, $j=\overline{1,n_2}$, are defined as in the univariate case above, $n_1, n_2 \in \mathbb{N}$. We easily see that
$$F_{n_1,n_2}(f)\big(x^{(1)}_{i,n_1},x^{(2)}_{j,n_2}\big) = f\big(x^{(1)}_{i,n_1},x^{(2)}_{j,n_2}\big),\qquad \forall\, i=\overline{1,n_1},\ j=\overline{1,n_2}.$$
The key result of this section is

Theorem 4.2.1. With the notations above, we have
$$\frac{\partial^2 F_{n_1,n_2}(f)(x,y)}{\partial x\,\partial y} = \sum_{i=1}^{n_1-1}\sum_{j=1}^{n_2-1}\Big(-\sum_{p=1}^{i}h^{(1)'}_{p,n_1}(x)\Big)\Big(-\sum_{q=1}^{j}h^{(2)'}_{q,n_2}(y)\Big)\cdot$$
$$\cdot\Big(f\big(x^{(1)}_{i,n_1},x^{(2)}_{j,n_2}\big) - f\big(x^{(1)}_{i,n_1},x^{(2)}_{j+1,n_2}\big) - f\big(x^{(1)}_{i+1,n_1},x^{(2)}_{j,n_2}\big) + f\big(x^{(1)}_{i+1,n_1},x^{(2)}_{j+1,n_2}\big)\Big).$$
Proof. We observe
$$\frac{\partial F_{n_1,n_2}(f)(x,y)}{\partial x} = \sum_{i=1}^{n_1}\sum_{j=1}^{n_2} h^{(1)'}_{i,n_1}(x)\,h^{(2)}_{j,n_2}(y)\,f\big(x^{(1)}_{i,n_1},x^{(2)}_{j,n_2}\big) = \sum_{j=1}^{n_2}\Big[\sum_{i=1}^{n_1} h^{(1)'}_{i,n_1}(x)\,f\big(x^{(1)}_{i,n_1},x^{(2)}_{j,n_2}\big)\Big]\,h^{(2)}_{j,n_2}(y)$$
(by Lemma 4.1.1,(i))
$$= \sum_{j=1}^{n_2}\Big[\sum_{i=1}^{n_1-1}\Big(-\sum_{p=1}^{i}h^{(1)'}_{p,n_1}(x)\Big)\Big(f\big(x^{(1)}_{i+1,n_1},x^{(2)}_{j,n_2}\big)-f\big(x^{(1)}_{i,n_1},x^{(2)}_{j,n_2}\big)\Big)\Big]\,h^{(2)}_{j,n_2}(y)$$
$$= \sum_{i=1}^{n_1-1}\Big(-\sum_{p=1}^{i}h^{(1)'}_{p,n_1}(x)\Big)\sum_{j=1}^{n_2} h^{(2)}_{j,n_2}(y)\Big(f\big(x^{(1)}_{i+1,n_1},x^{(2)}_{j,n_2}\big)-f\big(x^{(1)}_{i,n_1},x^{(2)}_{j,n_2}\big)\Big).$$
It follows that
$$\frac{\partial^2 F_{n_1,n_2}(f)(x,y)}{\partial x\,\partial y} = \sum_{i=1}^{n_1-1}\Big(-\sum_{p=1}^{i}h^{(1)'}_{p,n_1}(x)\Big)\,\frac{d}{dy}\Big[\sum_{j=1}^{n_2} h^{(2)}_{j,n_2}(y)\Big(f\big(x^{(1)}_{i+1,n_1},x^{(2)}_{j,n_2}\big)-f\big(x^{(1)}_{i,n_1},x^{(2)}_{j,n_2}\big)\Big)\Big]$$
(by Lemma 4.1.1,(i), applied in the variable $y$)
$$= \sum_{i=1}^{n_1-1}\sum_{j=1}^{n_2-1}\Big(-\sum_{p=1}^{i}h^{(1)'}_{p,n_1}(x)\Big)\Big(-\sum_{q=1}^{j}h^{(2)'}_{q,n_2}(y)\Big)\Big(f\big(x^{(1)}_{i,n_1},x^{(2)}_{j,n_2}\big)-f\big(x^{(1)}_{i+1,n_1},x^{(2)}_{j,n_2}\big)-f\big(x^{(1)}_{i,n_1},x^{(2)}_{j+1,n_2}\big)+f\big(x^{(1)}_{i+1,n_1},x^{(2)}_{j+1,n_2}\big)\Big),$$
which proves the theorem.
Also, we need the following.

Definition 4.2.1. (see, e.g., Marcus [71], p. 33). We say that $f : [a,b]\times[c,d]\to\mathbb{R}$ is bidimensional (or hyperbolically) upper (lower) monotone on $[a,b]\times[c,d]$ if
$$\Delta^2(f)(x,y;\alpha,\beta) = f(x+\alpha,y+\beta) - f(x,y+\beta) - f(x+\alpha,y) + f(x,y) \ge 0\ (\le 0,\ \text{respectively}),$$
for all $\alpha,\beta\ge 0$ and $(x,y)\in[a,b]\times[c,d]$ such that $(x+\alpha,y+\beta)\in[a,b]\times[c,d]$.

Remark. If $f \in C^2([a,b]\times[c,d])$ and $\frac{\partial^2 f(x,y)}{\partial x\,\partial y} \ge 0$ for all $(x,y)\in[a,b]\times[c,d]$, then $f$ is bidimensional upper monotone on $[a,b]\times[c,d]$ (see, e.g., Marcus [71]).

Corollary 4.2.1. Let $n_1 = 2p_1$, $n_2 = 2p_2$ be even numbers and let us consider the bivariate Hermite–Fejér polynomials $F_{n_1,n_2}(f)(x,y)$ given by (4.1), based on the roots $x^{(1)}_{i,n_1}$, $i=\overline{1,n_1}$, of the $\lambda_1$-ultraspherical polynomials of degree $n_1$ with $\lambda_1 > -1$ (i.e., the Jacobi polynomials $J^{(\alpha_1,\beta_1)}_{n_1}$ with $\alpha_1 = \beta_1$, $\lambda_1 = \alpha_1+\beta_1+1$, $-1 < \alpha_1,\beta_1 \le 1$) and on the roots $x^{(2)}_{j,n_2}$, $j=\overline{1,n_2}$, of the $\lambda_2$-ultraspherical polynomials of degree $n_2$, $J^{(\alpha_2,\beta_2)}_{n_2}$, $\lambda_2 > -1$ (i.e., $\alpha_2 = \beta_2$, $\lambda_2 = \alpha_2+\beta_2+1$, $-1 < \alpha_2,\beta_2 \le 1$). There exists a constant $c > 0$ (independent of $f$ and $n_1, n_2$) such that if $f : [-1,1]\times[-1,1]\to\mathbb{R}$ is bidimensional monotone on $[-1,1]\times[-1,1]$, then $F_{n_1,n_2}(f)(x,y)$ is bidimensional monotone (of the same monotonicity) on $\big[-\frac{c}{n_1^4},\frac{c}{n_1^4}\big]\times\big[-\frac{c}{n_2^4},\frac{c}{n_2^4}\big]$.
Proof. By the proof of Theorem 2.2.1 (see relation (2.2) and the last relation there), we have
$$-\sum_{p=1}^{i} h^{(1)'}_{p,n_1}(x) > 0,\qquad \forall\,i=\overline{1,n_1-1},\ \forall\,x\in\Big[-\frac{c}{n_1^4},\frac{c}{n_1^4}\Big],$$
$$-\sum_{q=1}^{j} h^{(2)'}_{q,n_2}(y) > 0,\qquad \forall\,j=\overline{1,n_2-1},\ \forall\,y\in\Big[-\frac{c}{n_2^4},\frac{c}{n_2^4}\Big].$$
Taking into account Theorem 4.2.1, we obtain
$$\frac{\partial^2 F_{n_1,n_2}(f)(x,y)}{\partial x\,\partial y} \ge 0,\qquad \forall\,(x,y)\in\Big[-\frac{c}{n_1^4},\frac{c}{n_1^4}\Big]\times\Big[-\frac{c}{n_2^4},\frac{c}{n_2^4}\Big],$$
which by the Remark after Definition 4.2.1 proves the corollary.

Corollary 4.2.2. Let us consider $F_{n_1,n_2}(f)(x,y)$ given by (4.1), based on the roots of the Jacobi polynomials $J^{(\alpha_1,\beta_1)}_{n_1}$, $J^{(\alpha_2,\beta_2)}_{n_2}$ of degree $n_1$ and $n_2$, respectively, with $\alpha_i,\beta_i\in(-1,0]$, $i=1,2$. If $\xi$ is any root of the polynomial $\ell^{(1)'}_{n_1}(x)$ and $\eta$ is any root of the polynomial $\ell^{(2)'}_{n_2}(y)$ (here $\ell^{(1)}_{n_1}(x) = \prod_{i=1}^{n_1}(x-x^{(1)}_{i,n_1})$, $\ell^{(2)}_{n_2}(y) = \prod_{j=1}^{n_2}(y-x^{(2)}_{j,n_2})$), then there exists a constant $c > 0$ (independent of $n_1, n_2$ and $f$) such that if $f$ is bidimensional monotone on $[-1,1]\times[-1,1]$, then $F_{n_1,n_2}(f)(x,y)$ is bidimensional monotone (of the same monotonicity) on
$$\Big(\xi-\frac{c_\xi}{n_1^{7+2\gamma_1}},\ \xi+\frac{c_\xi}{n_1^{7+2\gamma_1}}\Big)\times\Big(\eta-\frac{c_\eta}{n_2^{7+2\gamma_2}},\ \eta+\frac{c_\eta}{n_2^{7+2\gamma_2}}\Big) \subset (-1,1)\times(-1,1),$$
where
$$c_\xi = \frac{c}{(1-\xi^2)^{5/2+\delta_1}},\qquad c_\eta = \frac{c}{(1-\eta^2)^{5/2+\delta_2}},\qquad \gamma_i = \max\{\alpha_i,\beta_i\},\ i=1,2,$$
and
$$\delta_1 = \begin{cases}\alpha_1, & \text{if } 0\le\xi<1,\\ \beta_1, & \text{if } -1<\xi\le 0,\end{cases}\qquad \delta_2 = \begin{cases}\alpha_2, & \text{if } 0\le\eta<1,\\ \beta_2, & \text{if } -1<\eta\le 0.\end{cases}$$
Proof. An immediate consequence of Theorem 4.2.1 above and of Theorem 2.2.2.

Remarks. (1) Because $\ell^{(1)'}_{n_1}$ and $\ell^{(2)'}_{n_2}$ have exactly $n_1-1$ and $n_2-1$ roots in $(-1,1)$, respectively, it follows that in $(-1,1)\times(-1,1)$ there exists a grid of $(n_1-1)(n_2-1)$ points $(\xi,\eta)$ as in Corollary 4.2.2. (2) From Remark 1 after Theorem 2.2.2 it follows that if $\xi$ and $\eta$ are near the endpoints, in the ultraspherical case for example (i.e., $\alpha_i = \beta_i \in (-1,0)$, $i=1,2$), then the best possible interval of preservation of bidimensional monotonicity is $\big[\xi-\frac{c}{n_1^2},\xi+\frac{c}{n_1^2}\big]\times\big[\eta-\frac{c}{n_2^2},\eta+\frac{c}{n_2^2}\big]$.
In what follows we will extend the convexity problem from the univariate case. In this sense, we need the following.

Definition 4.2.2. (see, e.g., Nicolescu [76]). We say that $f : [-1,1]\times[-1,1]\to\mathbb{R}$ is strictly double convex on $[-1,1]\times[-1,1]$ if $\Delta^{2,y}_{h_1}\big[\Delta^{2,x}_{h_2} f(a,b)\big] > 0$ for all $h_1, h_2 > 0$, $(a,b)\in[-1,1]\times[-1,1]$, with $a\pm h_2, b\pm h_1\in[-1,1]$, where
$$\Delta^{2,x}_{h_2} f(\alpha,\beta) = f(\alpha+h_2,\beta) - 2f(\alpha,\beta) + f(\alpha-h_2,\beta)$$
and
$$\Delta^{2,y}_{h_1} f(\alpha,\beta) = f(\alpha,\beta+h_1) - 2f(\alpha,\beta) + f(\alpha,\beta-h_1).$$

Remark. By the mean value theorem it is easy to see that if $\frac{\partial^4 f(x,y)}{\partial x^2\partial y^2} > 0$ for all $(x,y)\in[-1,1]^2$, then $f$ is strictly double convex on $[-1,1]^2$.

Now, let $n_1, n_2 \ge 3$ be odd and let us consider as $F_{n_1,n_2}(f)(x,y)$ the Hermite–Fejér polynomial given by (4.1), based on the roots $x^{(1)}_{i,n_1}$, $i=\overline{1,n_1}$, and $x^{(2)}_{j,n_2}$, $j=\overline{1,n_2}$, of the $\lambda_1$-ultraspherical polynomials $P^{(\lambda_1)}_{n_1}$ of degree $n_1$ and the $\lambda_2$-ultraspherical polynomials $P^{(\lambda_2)}_{n_2}$ of degree $n_2$, respectively, $\lambda_1,\lambda_2\in[0,1]$, and the Côtes–Christoffel numbers of the Gauss–Jacobi quadrature
$$\lambda^{(1)}_{i,n_1} := 2^{2-\lambda_1}\,\pi\,\Gamma\Big(\frac{\lambda_1}{2}\Big)^{-2}\,\frac{\Gamma(n_1+\lambda_1)}{\Gamma(n_1+1)}\,\big[1-(x^{(1)}_{i,n_1})^2\big]^{-1}\,\big[P^{(\lambda_1)'}_{n_1}(x^{(1)}_{i,n_1})\big]^{-2},\qquad i=\overline{1,n_1},$$
$$\lambda^{(2)}_{j,n_2} := 2^{2-\lambda_2}\,\pi\,\Gamma\Big(\frac{\lambda_2}{2}\Big)^{-2}\,\frac{\Gamma(n_2+\lambda_2)}{\Gamma(n_2+1)}\,\big[1-(x^{(2)}_{j,n_2})^2\big]^{-1}\,\big[P^{(\lambda_2)'}_{n_2}(x^{(2)}_{j,n_2})\big]^{-2},
j = 1, n2 .
Theorem 4.2.2. If $f \in C([-1,1]\times[-1,1])$ satisfies
$$\sum_{i=1}^{n_1}\sum_{j=1}^{n_2}\lambda^{(1)}_{i,n_1}\lambda^{(2)}_{j,n_2}\,\frac{\Delta^{2,y}_{x^{(2)}_{j,n_2}}\big[\Delta^{2,x}_{x^{(1)}_{i,n_1}} f(0,0)\big]}{\big(x^{(1)}_{i,n_1}x^{(2)}_{j,n_2}\big)^2} > 0, \tag{4.2}$$
then $F_{n_1,n_2}(f)(x,y)$ is strictly double convex in $V(0,0) = \{(x,y);\ x^2+y^2 < d^2_{n_1,n_2}\}$, with
$$|d_{n_1,n_2}| \ge c_{f,\lambda_1,\lambda_2}\,\frac{n_1 n_2\displaystyle\sum_{i=1}^{\frac{n_1-1}{2}}\sum_{j=1}^{\frac{n_2-1}{2}}\lambda^{(1)}_{i,n_1}\lambda^{(2)}_{j,n_2}\,\Delta^{2,y}_{x^{(2)}_{j,n_2}}\big[\Delta^{2,x}_{x^{(1)}_{i,n_1}} f(0,0)\big]\big/\big(x^{(1)}_{i,n_1}x^{(2)}_{j,n_2}\big)^2}{(n_1+n_2)^5},$$
where $c_{f,\lambda_1,\lambda_2} > 0$ is independent of $n_1$ and $n_2$.

Proof. We observe
$$\frac{\partial^4 F_{n_1,n_2}(f)(x,y)}{\partial x^2\partial y^2} = \sum_{j=1}^{n_2} h^{(2)''}_{j,n_2}(y)\Big[\sum_{i=1}^{n_1} h^{(1)''}_{i,n_1}(x)\,f\big(x^{(1)}_{i,n_1},x^{(2)}_{j,n_2}\big)\Big],$$
and reasoning as in the proof of Theorem 2.2.3 (see relation (2.7) there) we obtain
$$\frac{\partial^4 F_{n_1,n_2}(f)(0,0)}{\partial x^2\partial y^2} = \sum_{j=1}^{n_2} h^{(2)''}_{j,n_2}(0)\Big[\sum_{i=1}^{\frac{n_1-1}{2}} h^{(1)''}_{i,n_1}(0)\,\Delta^{2,x}_{x^{(1)}_{i,n_1}} f\big(0,x^{(2)}_{j,n_2}\big)\Big].$$
Denoting $G(y) = \sum_{i=1}^{\frac{n_1-1}{2}} h^{(1)''}_{i,n_1}(0)\,\Delta^{2,x}_{x^{(1)}_{i,n_1}} f(0,y)$, we get
$$\frac{\partial^4 F_{n_1,n_2}(f)(0,0)}{\partial x^2\partial y^2} = \sum_{j=1}^{n_2} h^{(2)''}_{j,n_2}(0)\,G\big(x^{(2)}_{j,n_2}\big) = \sum_{j=1}^{\frac{n_2-1}{2}} h^{(2)''}_{j,n_2}(0)\,\Delta^{2,y}_{x^{(2)}_{j,n_2}} G(0) = \sum_{j=1}^{\frac{n_2-1}{2}}\sum_{i=1}^{\frac{n_1-1}{2}} h^{(2)''}_{j,n_2}(0)\,h^{(1)''}_{i,n_1}(0)\,\Delta^{2,y}_{x^{(2)}_{j,n_2}}\big[\Delta^{2,x}_{x^{(1)}_{i,n_1}} f(0,0)\big].$$
Therefore, again by relation (2.7) and by hypothesis we obtain
$$\frac{\partial^4 F_{n_1,n_2}(f)(0,0)}{\partial x^2\partial y^2} \ge c_{\lambda_1,\lambda_2}\,n_1^3 n_2^3\sum_{i=1}^{n_1}\sum_{j=1}^{n_2}\lambda^{(1)}_{i,n_1}\lambda^{(2)}_{j,n_2}\,\frac{\Delta^{2,y}_{x^{(2)}_{j,n_2}}\big[\Delta^{2,x}_{x^{(1)}_{i,n_1}} f(0,0)\big]}{\big(x^{(1)}_{i,n_1}x^{(2)}_{j,n_2}\big)^2} > 0. \tag{4.3}$$
So it follows that $F_{n_1,n_2}(f)(x,y)$ is strictly double convex in a neighborhood of $(0,0)$.
Let $(\alpha_{n_1,n_2},\beta_{n_1,n_2})$ be the nearest root of $\frac{\partial^4 F_{n_1,n_2}(f)}{\partial x^2\partial y^2}$ to $(0,0)$, in the sense that the distance $d_{n_1,n_2} = \big(\alpha^2_{n_1,n_2}+\beta^2_{n_1,n_2}\big)^{1/2}$ is minimal over all the roots of $\frac{\partial^4 F_{n_1,n_2}(f)}{\partial x^2\partial y^2}$. Then for all $(x,y)\in V(0,0) = \{(x,y)\in\mathbb{R}^2;\ x^2+y^2 < d^2_{n_1,n_2}\}$ we necessarily have $\frac{\partial^4 F_{n_1,n_2}(f)(x,y)}{\partial x^2\partial y^2} > 0$. By the mean value theorem for bivariate functions we get
$$\frac{\partial^4 F_{n_1,n_2}(f)(0,0)}{\partial x^2\partial y^2} = \Big|\frac{\partial^4 F_{n_1,n_2}(f)(0,0)}{\partial x^2\partial y^2} - \frac{\partial^4 F_{n_1,n_2}(f)(\alpha_{n_1,n_2},\beta_{n_1,n_2})}{\partial x^2\partial y^2}\Big|$$
$$\le |\alpha_{n_1,n_2}|\,\Big|\frac{\partial^5 F_{n_1,n_2}(f)(\xi,\eta)}{\partial x^3\partial y^2}\Big| + |\beta_{n_1,n_2}|\,\Big|\frac{\partial^5 F_{n_1,n_2}(f)(\xi,\eta)}{\partial x^2\partial y^3}\Big| \le |d_{n_1,n_2}|\Big(\Big|\frac{\partial^5 F_{n_1,n_2}(f)(\xi,\eta)}{\partial x^3\partial y^2}\Big| + \Big|\frac{\partial^5 F_{n_1,n_2}(f)(\xi,\eta)}{\partial x^2\partial y^3}\Big|\Big).$$
Because $\deg(F_{n_1,n_2}(f)) \le 2n_1-1+2n_2-1 = 2n_1+2n_2-2$, we have $\deg\big(\frac{\partial^5 F_{n_1,n_2}(f)}{\partial x^3\partial y^2}\big) \le 2n_1+2n_2-7$ and $\deg\big(\frac{\partial^5 F_{n_1,n_2}(f)}{\partial x^2\partial y^3}\big) \le 2n_1+2n_2-7$. As in the
proof of Theorem 2.2.3, we can assume that the interval of convexity cannot be larger than $\big[-\frac{c_1}{n_1},\frac{c_1}{n_1}\big]\times\big[-\frac{c_2}{n_2},\frac{c_2}{n_2}\big]$. Consequently, we may assume that $|d_{n_1,n_2}| \le \frac{c}{\min\{n_1,n_2\}}$. Now, by the Bernstein theorem in Kroó–Révész [67], p. 136, relation (8), we obtain
$$\Big|\frac{\partial^5 F_{n_1,n_2}(f)(\xi,\eta)}{\partial x^3\partial y^2}\Big| \le c\,(2n_1+2n_2-7)(2n_1+2n_2-6)(2n_1+2n_2-5)(2n_1+2n_2-4)(2n_1+2n_2-3)\,\|F_{n_1,n_2}\|_{C([-1,1]\times[-1,1])} \le c\,(n_1+n_2)^5\,\|F_{n_1,n_2}\|_{C([-1,1]\times[-1,1])}.$$
But because, by Fejér [43], the fundamental interpolation polynomials $h^{(1)}_{i,n_1}(x)$ and $h^{(2)}_{j,n_2}(y)$ are $\ge 0$ for all $i=\overline{1,n_1}$, $j=\overline{1,n_2}$, $(x,y)\in[-1,1]\times[-1,1]$, denoting $M_f = \|f\|_{C([-1,1]\times[-1,1])}$, it follows that
$$\Big|\frac{\partial^5 F_{n_1,n_2}(f)(\xi,\eta)}{\partial x^3\partial y^2}\Big| \le c\,(n_1+n_2)^5 M_f,\qquad \Big|\frac{\partial^5 F_{n_1,n_2}(f)(\xi,\eta)}{\partial x^2\partial y^3}\Big| \le c\,(n_1+n_2)^5 M_f,$$
and consequently
$$\Big|\frac{\partial^4 F_{n_1,n_2}(f)(0,0)}{\partial x^2\partial y^2}\Big| \le c_f\,(n_1+n_2)^5\,|d_{n_1,n_2}|,$$
where $c_f > 0$ is independent of $n_1$ and $n_2$ (but dependent on $f$). Combining this estimate with (4.3), we easily get the lower estimate for $|d_{n_1,n_2}|$ in the statement of Theorem 4.2.2.

Remarks. (1) As in the univariate case, the neighborhood $V(0,0)$ of preservation of strict convexity depends on $f$ too.
(2) The estimate of $|d_{n_1,n_2}|$ in the bivariate case seems to be weaker, in a sense, than that of the univariate case, because a Stechkin-type inequality has not yet been proved for bivariate polynomials; such an inequality would be useful for a better estimate.
(3) If $f : [-1,1]\times[-1,1]\to\mathbb{R}$ is strictly double convex on $[-1,1]\times[-1,1]$, then condition (4.2) is obviously satisfied, and consequently $F_{n_1,n_2}(f)(x,y)$ preserves the strict double convexity in a disc centered at $(0,0)$ whose radius $|d_{n_1,n_2}|$ has the lower estimate of Theorem 4.2.2.
(4) Let $F_{n_1,n_2}(f)(x,y)$ be given by (4.1), based on the roots $x^{(1)}_{i,n_1}$, $i=\overline{1,n_1}$, and $x^{(2)}_{j,n_2}$, $j=\overline{1,n_2}$, of the $\lambda_1$-ultraspherical polynomials $P^{(\lambda_1)}_{n_1}$ and the $\lambda_2$-ultraspherical polynomials $P^{(\lambda_2)}_{n_2}$, respectively, where $\lambda_1,\lambda_2\in[0,1]$. Because, by Fejér [43], the polynomials $h^{(1)}_{i,n_1}(x), h^{(2)}_{j,n_2}(y) \ge 0$ for all $i=\overline{1,n_1}$, $j=\overline{1,n_2}$, $(x,y)\in[-1,1]\times[-1,1]$, by the formulas
$$\frac{\partial^p F_{n_1,n_2}(f)(x,y)}{\partial y^p} = \sum_{i=1}^{n_1} h^{(1)}_{i,n_1}(x)\Big[\sum_{j=1}^{n_2}\frac{\partial^p h^{(2)}_{j,n_2}(y)}{\partial y^p}\,f\big(x^{(1)}_{i,n_1},x^{(2)}_{j,n_2}\big)\Big],$$
$p = 1, 2$, from the univariate case in Section 2.2 the following results are immediate. If $f(x,y)$ is nondecreasing with respect to $y\in[-1,1]$ (for all fixed $x\in[-1,1]$), then for $n_1,n_2\in\mathbb{N}$ and $\eta$ a root of $P^{(\lambda_2)'}_{n_2}(y)$, $F_{n_1,n_2}(f)(x,y)$ is nondecreasing with respect to
$$y\in\Big(\eta-\frac{c_\eta}{n_2^{7+2\gamma_2}},\ \eta+\frac{c_\eta}{n_2^{7+2\gamma_2}}\Big),$$
for all fixed $x\in[-1,1]$ (here $c_\eta$ and $\gamma_2$ are given by Corollary 4.2.2). If $f(x,y)$ is strictly convex with respect to $y\in[-1,1]$ (for all fixed $x\in[-1,1]$), then for all $n_1\in\mathbb{N}$ arbitrary and $n_2\in\mathbb{N}$, $n_2\ge 3$, $n_2$ an odd number, there exists a neighborhood $V(0)$ of $0$ such that for all fixed $x\in[-1,1]$, $F_{n_1,n_2}(f)(x,y)$ is strictly convex with respect to $y\in V(0)$. Similar results hold if we consider $\frac{\partial^p F_{n_1,n_2}(f)(x,y)}{\partial x^p}$, $p = 1, 2$.
(5) All the results above can easily be extended to $n$ variables, $n > 2$.
4.3 Bivariate Shepard Operators

For the three types of bivariate Shepard operators considered in Section 3 of Chapter 3, we prove here that they preserve natural kinds of bivariate monotonicity and convexity in neighborhoods of some points, and quantitative estimates of the lengths of these neighborhoods are obtained. Let us first consider the bivariate Shepard operator defined as a tensor product by
$$S^{(\lambda,\mu)}_{n,m}(f;x,y) = \sum_{i=-n}^{n}\sum_{j=-m}^{m} s_{i,\lambda}(x)\,s_{j,\mu}(y)\,f(x_i,y_j),\quad \text{if } (x,y)\ne(x_i,y_j),$$
$$S^{(\lambda,\mu)}_{n,m}(f;x_i,y_j) = f(x_i,y_j),$$
where $1 < \lambda,\mu$, $-1 \le x_{-n} < \dots < x_n \le 1$, $-1 \le y_{-m} < \dots < y_m \le 1$, $f : [-1,1]\times[-1,1]\to\mathbb{R}$, and
$$s_{i,\lambda}(x) = |x-x_i|^{-\lambda}\Big/\sum_{k=-n}^{n}|x-x_k|^{-\lambda},\qquad s_{j,\mu}(y) = |y-y_j|^{-\mu}\Big/\sum_{k=-m}^{m}|y-y_k|^{-\mu}.$$
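For numerical experimentation with this tensor-product operator, the definition translates directly into code. The sketch below is our own (the names `s_weights` and `S`, the equidistant grid and the test function are hypothetical); it normalizes the univariate weights $|t-t_i|^{-\lambda}$ and handles the interpolation condition at the nodes.

```python
def s_weights(t, nodes, lam):
    """Univariate Shepard weights |t - t_i|^(-lam), normalized; Kronecker at nodes."""
    if t in nodes:
        return [1.0 if ti == t else 0.0 for ti in nodes]
    w = [abs(t - ti) ** (-lam) for ti in nodes]
    total = sum(w)
    return [wi / total for wi in w]

def S(f, xs, ys, x, y, lam=2.0, mu=2.0):
    """Tensor-product bivariate Shepard operator S_{n,m}^{(lam,mu)}(f; x, y)."""
    wx, wy = s_weights(x, xs, lam), s_weights(y, ys, mu)
    return sum(wx[i] * wy[j] * f(xs[i], ys[j])
               for i in range(len(xs)) for j in range(len(ys)))

n = m = 4
xs = [i / n for i in range(-n, n + 1)]
ys = [j / m for j in range(-m, m + 1)]
f = lambda x, y: x * y                  # a bidimensionally (upper) monotone function
print(abs(S(f, xs, ys, 0.25, -0.5) - f(0.25, -0.5)) < 1e-12)  # True: interpolation
print(-1.0 <= S(f, xs, ys, 0.1, 0.1) <= 1.0)                  # True: convex combination
```

Since the product weights $s_{i,\lambda}(x)s_{j,\mu}(y)$ are nonnegative and sum to 1, the operator's value is always a convex combination of the grid values, which is the starting point for the shape-preservation results that follow.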
The global smoothness preservation properties and convergence properties of these operators were studied in Section 3 of Chapter 3. In this section we consider their shape-preservation properties. In this sense, a key result is the following.

Theorem 4.3.1. With the notations above, we have
$$\frac{\partial^2 S^{(\lambda,\mu)}_{n,m}(f;x,y)}{\partial x\,\partial y} = \sum_{i=-n}^{n-1}\sum_{j=-m}^{m-1}\Big(-\sum_{p=-n}^{i}s'_{p,\lambda}(x)\Big)\Big(-\sum_{q=-m}^{j}s'_{q,\mu}(y)\Big)\big(f(x_i,y_j)-f(x_i,y_{j+1})-f(x_{i+1},y_j)+f(x_{i+1},y_{j+1})\big).$$
Proof. We observe
$$\frac{\partial S^{(\lambda,\mu)}_{n,m}(f;x,y)}{\partial x} = \sum_{i=-n}^{n}\sum_{j=-m}^{m} s'_{i,\lambda}(x)\,s_{j,\mu}(y)\,f(x_i,y_j)$$
$$= \sum_{j=-m}^{m}\Big[\sum_{i=-n}^{n} s'_{i,\lambda}(x)\,f(x_i,y_j)\Big]s_{j,\mu}(y)\quad\text{(by Lemma 4.1.1)}$$
$$= \sum_{j=-m}^{m}\Big[\sum_{i=-n}^{n-1}\Big(-\sum_{p=-n}^{i}s'_{p,\lambda}(x)\Big)\big(f(x_{i+1},y_j)-f(x_i,y_j)\big)\Big]s_{j,\mu}(y)$$
$$= \sum_{i=-n}^{n-1}\Big(-\sum_{p=-n}^{i}s'_{p,\lambda}(x)\Big)\sum_{j=-m}^{m} s_{j,\mu}(y)\big(f(x_{i+1},y_j)-f(x_i,y_j)\big).$$
It follows that
$$\frac{\partial^2 S^{(\lambda,\mu)}_{n,m}(f;x,y)}{\partial x\,\partial y} = \sum_{i=-n}^{n-1}\Big(-\sum_{p=-n}^{i}s'_{p,\lambda}(x)\Big)\sum_{j=-m}^{m} s'_{j,\mu}(y)\big(f(x_{i+1},y_j)-f(x_i,y_j)\big),$$
which, by the above Theorem 4.1.1, immediately implies the formula in the statement.

Corollary 4.3.1. Let $f:[-1,1]\times[-1,1]\to\mathbb{R}$, $\lambda=2p$, $\mu=2q$, $p,q\in\mathbb{N}$, $x_i=i/n$, $i\in\{-n,\dots,n\}$, $y_j=j/m$, $j\in\{-m,\dots,m\}$, $l_{n,p}(x)=\sum_{i=-n}^{n}(x-x_i)^{-2p}$, $l_{m,q}(y)=\sum_{j=-m}^{m}(y-y_j)^{-2q}$. If $\xi$ is any solution of the equation $l'_{n,p}(x)=0$ and $\eta$ is any solution of the equation $l'_{m,q}(y)=0$, then there exists a constant $c>0$ independent of $n$, $m$ and $f$, such that if $f(x,y)$ is bidimensional monotone on $[-1,1]\times[-1,1]$, then $S_{n,m}^{(\lambda,\mu)}(f;x,y)$ is bidimensional monotone (of the same monotonicity) on $(\xi-c/n^{2p+3},\xi+c/n^{2p+3})\times(\eta-c/m^{2q+3},\eta+c/m^{2q+3})\subset(-1,1)\times(-1,1)$.

Proof. It is immediate by Theorem 4.3.1 and Theorem 2.3.3.

Remark. Because, by Remark 1 after Theorem 2.3.1, the equations $l'_{n,p}(x)=0$ and $l'_{m,q}(y)=0$ have $2n$ and $2m$ solutions, respectively, it follows that in Corollary 4.3.1 there exists a grid of $4mn$ points $(\xi,\eta)$. In what follows, we will extend the convexity problem from the univariate case.
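Corollary 4.3.1 can be illustrated numerically. In the sketch below (an informal check with our own illustrative parameters $n=m=3$, $p=q=1$ and the monotone function $f(x,y)=x+y$, none of which come from the text), a stationary point $\xi$ of $l_{n,p}$ is located by bisection in one gap between consecutive nodes, and the operator is seen to increase through $(\xi,\xi)$ in each variable.

```python
# Locate a root of l'_{n,p} and check local monotonicity of the
# tensor-product Shepard operator there (illustrative parameters).

def l_prime(x, nodes, p):
    # derivative of l_{n,p}(x) = sum_i (x - x_i)^(-2p)
    return sum(-2 * p * (x - xi) ** (-2 * p - 1) for xi in nodes)

def bisect(g, a, b, iters=200):
    fa = g(a)
    for _ in range(iters):
        c = 0.5 * (a + b)
        fc = g(c)
        if fa * fc <= 0:
            b = c
        else:
            a, fa = c, fc
    return 0.5 * (a + b)

n, p = 3, 1
nodes = [i / n for i in range(-n, n + 1)]
# l'_{n,p} changes sign inside each gap between consecutive nodes
xi = bisect(lambda t: l_prime(t, nodes, p), nodes[3] + 1e-9, nodes[4] - 1e-9)

def S(x, y):
    # S_{n,m}^{(2p,2q)} applied to f(x, y) = x + y (a separable sum)
    wx = [abs(x - t) ** (-2 * p) for t in nodes]
    wy = [abs(y - t) ** (-2 * p) for t in nodes]
    return (sum(w * t for w, t in zip(wx, nodes)) / sum(wx)
            + sum(w * t for w, t in zip(wy, nodes)) / sum(wy))
```

The monotonicity window of the corollary is tiny ($c/n^{2p+3}$), so the check below probes $S$ only at distance $10^{-7}$ from $(\xi,\xi)$.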
Corollary 4.3.2. Let $f:[-1,1]\times[-1,1]\to\mathbb{R}$, $\lambda=2p$, $\mu=2q$, $p,q\in\mathbb{N}$, $x_i=i/n$, $i\in\{-n,\dots,n\}$, $y_j=j/m$, $j\in\{-m,\dots,m\}$. If $\Delta^{2,y}_{k}[\Delta^{2,x}_{h}f(0,0)]>0$ for all $h,k\in(0,1]$, then $S_{n,m}^{(\lambda,\mu)}(f;x,y)$ is strictly double convex in a bivariate neighborhood $V(0,0)$ of $(0,0)$.

Proof. We observe
$$\frac{\partial^4 S_{n,m}^{(\lambda,\mu)}(f;x,y)}{\partial x^2\,\partial y^2}=\sum_{j=-m}^{m}s''_{j,\mu}(y)\Bigg[\sum_{i=-n}^{n}s''_{i,\lambda}(x)f(x_i,y_j)\Bigg].$$
Reasoning as in the proof of the univariate case of Theorem 2.3.2, we obtain
$$\frac{\partial^{r+s}S_{n,m}^{(\lambda,\mu)}(f;0,y)}{\partial x^r\,\partial y^s}=\sum_{j=-m}^{m}s^{(s)}_{j,\mu}(y)\Bigg[\sum_{i=-n}^{n}s^{(r)}_{i,\lambda}(0)f(x_i,y_j)\Bigg]=0,$$
$$\frac{\partial^{r+s}S_{n,m}^{(\lambda,\mu)}(f;x,0)}{\partial x^r\,\partial y^s}=\sum_{j=-m}^{m}s^{(s)}_{j,\mu}(0)\Bigg[\sum_{i=-n}^{n}s^{(r)}_{i,\lambda}(x)f(x_i,y_j)\Bigg]=0,$$
for all $r\in\{1,\dots,2p-1\}$, $s\in\{1,\dots,2q-1\}$, $(x,y)\in[-1,1]\times[-1,1]$, and
$$\frac{\partial^{2p+2q}S_{n,m}^{(\lambda,\mu)}(f;0,0)}{\partial x^{2p}\,\partial y^{2q}}=\sum_{j=-m}^{m}s^{(2q)}_{j,\mu}(0)\Bigg[\sum_{i=-n}^{n}s^{(2p)}_{i,\lambda}(0)f(x_i,y_j)\Bigg]:=A.$$
Denoting
$$G(y)=\sum_{i=1}^{n}s^{(2p)}_{i,\lambda}(0)\,\Delta^{2,x}_{x_i}f(0,y),$$
we get
$$A=\sum_{j=-m}^{m}s^{(2q)}_{j,\mu}(0)G(y_j)=\sum_{j=1}^{m}s^{(2q)}_{j,\mu}(0)\,\Delta^{2,y}_{y_j}G(0)=\sum_{j=1}^{m}\sum_{i=1}^{n}s^{(2q)}_{j,\mu}(0)s^{(2p)}_{i,\lambda}(0)\,\Delta^{2,y}_{y_j}\big[\Delta^{2,x}_{x_i}f(0,0)\big]>0,$$
by hypothesis. As a conclusion,
$$\frac{\partial^{2p+2q}S_{n,m}^{(\lambda,\mu)}(f;0,0)}{\partial x^{2p}\,\partial y^{2q}}>0,$$
which implies that there exists a neighborhood $V(0,0)$ of $(0,0)$ such that
$$\frac{\partial^{2p+2q}S_{n,m}^{(\lambda,\mu)}(f;x,y)}{\partial x^{2p}\,\partial y^{2q}}>0,$$
for all $(x,y)\in V(0,0)$. Denote by $V_y$ the projection of $V(0,0)$ on the $OY$-axis and by $V_x$ the projection of $V(0,0)$ on the $OX$-axis. First let $y\in V_y$ be fixed. Reasoning as in the univariate case, from the above relations it follows that $(0,y)$ is a minimum point for
$$\frac{\partial^{2p+2q-2}S_{n,m}^{(\lambda,\mu)}(f;x,y)}{\partial x^{2p-2}\,\partial y^{2q}},$$
and since, by the last relation, $\frac{\partial^{2p+2q-2}S_{n,m}^{(\lambda,\mu)}(f;x,y)}{\partial x^{2p-2}\,\partial y^{2q}}$ is convex with respect to $x\in V_x$, it follows that
$$\frac{\partial^{2p+2q-2}S_{n,m}^{(\lambda,\mu)}(f;x,y)}{\partial x^{2p-2}\,\partial y^{2q}}>0,$$
for all $(x,y)\in V(0,0)$ with $x\ne 0$.

Now let $x\in V_x$, $x\ne 0$, be fixed. Reasoning similarly for $\frac{\partial^{2p+2q-2}S_{n,m}^{(\lambda,\mu)}(f;x,y)}{\partial x^{2p-2}\,\partial y^{2q}}$, we get
$$\frac{\partial^{2p-2+2q-2}S_{n,m}^{(\lambda,\mu)}(f;x,y)}{\partial x^{2p-2}\,\partial y^{2q-2}}>0,$$
for all $(x,y)\in V(0,0)$ with $x\ne 0$ and $y\ne 0$. Reasoning by induction, we finally arrive at
$$\frac{\partial^4 S_{n,m}^{(\lambda,\mu)}(f;x,y)}{\partial x^2\,\partial y^2}>0,$$
for all $(x,y)\in V(0,0)$ with $x\ne 0$ and $y\ne 0$, which proves the theorem.

As an immediate consequence of the univariate result, for the bivariate tensor product Shepard operator we obtain the following quantitative version of Corollary 4.3.2.

Corollary 4.3.3. Let $f:[-1,1]\times[-1,1]\to\mathbb{R}$, $\lambda=2p$, $\mu=2q$, $p,q\in\mathbb{N}$, $x_i=i/n$, $i\in\{-n,\dots,n\}$, $y_j=j/m$, $j\in\{-m,\dots,m\}$. If $\Delta^{2,y}_{k}[\Delta^{2,x}_{h}f(0,0)]>0$ for all $h,k\in(0,1]$, then $S_{n,m}^{(\lambda,\mu)}(f;x,y)$ is strictly double convex in the bivariate neighborhood of $(0,0)$, $V(0,0)=\{(x,y);\,x^2+y^2<d_{n,m}^2\}$, with
$$d_{n,m}\ge\frac{c_{p,q}\sum_{i=1}^{n}\sum_{j=1}^{m}\Delta^{2,x}_{x_i}\big[\Delta^{2,y}_{y_j}f(0,0)\big]/(x_i^{2p}y_j^{2q})}{[n^{2p+1}+m^{2q+1}]\,\|f\|},$$
where $c_{p,q}$ is a constant depending only on $p$, $q$.

Proof. From the proof of Corollary 4.3.2 and from the univariate case we get
$$\frac{\partial^{2p+2q}S_{n,m}^{(\lambda,\mu)}(f;0,0)}{\partial x^{2p}\,\partial y^{2q}}=\sum_{j=1}^{m}\sum_{i=1}^{n}s^{(2q)}_{j,\mu}(0)s^{(2p)}_{i,\lambda}(0)\,\Delta^{2,y}_{y_j}\big[\Delta^{2,x}_{x_i}f(0,0)\big]\ge c_{p,q}\sum_{i=1}^{n}\sum_{j=1}^{m}\Delta^{2,x}_{x_i}\big[\Delta^{2,y}_{y_j}f(0,0)\big]/(x_i^{2p}y_j^{2q})>0.$$
Let $(\alpha_{n,m},\beta_{n,m})$ be the nearest root to $(0,0)$ of $\frac{\partial^{2p+2q}S_{n,m}^{(\lambda,\mu)}(f;x,y)}{\partial x^{2p}\partial y^{2q}}$, in the sense that the distance $d_{n,m}=(\alpha_{n,m}^2+\beta_{n,m}^2)^{1/2}$ is minimal over all the roots of $\frac{\partial^{2p+2q}S_{n,m}^{(\lambda,\mu)}(f;x,y)}{\partial x^{2p}\partial y^{2q}}$. Then for all $(x,y)\in\{(x,y);\,(x^2+y^2)^{1/2}<d_{n,m}\}$ we necessarily have $\frac{\partial^{2p+2q}S_{n,m}^{(\lambda,\mu)}(f;x,y)}{\partial x^{2p}\partial y^{2q}}>0$. By the mean value theorem for bivariate functions, we get
$$\frac{\partial^{2p+2q}S_{n,m}^{(\lambda,\mu)}(f;0,0)}{\partial x^{2p}\partial y^{2q}}=\left|\frac{\partial^{2p+2q}S_{n,m}^{(\lambda,\mu)}(f;0,0)}{\partial x^{2p}\partial y^{2q}}-\frac{\partial^{2p+2q}S_{n,m}^{(\lambda,\mu)}(f;\alpha_{n,m},\beta_{n,m})}{\partial x^{2p}\partial y^{2q}}\right|$$
$$\le|\alpha_{n,m}|\left|\frac{\partial^{2p+2q+1}S_{n,m}^{(\lambda,\mu)}(f;\xi,\eta)}{\partial x^{2p+1}\partial y^{2q}}\right|+|\beta_{n,m}|\left|\frac{\partial^{2p+2q+1}S_{n,m}^{(\lambda,\mu)}(f;\xi,\eta)}{\partial x^{2p}\partial y^{2q+1}}\right|$$
$$\le|d_{n,m}|\left[\left|\frac{\partial^{2p+2q+1}S_{n,m}^{(\lambda,\mu)}(f;\xi,\eta)}{\partial x^{2p+1}\partial y^{2q}}\right|+\left|\frac{\partial^{2p+2q+1}S_{n,m}^{(\lambda,\mu)}(f;\xi,\eta)}{\partial x^{2p}\partial y^{2q+1}}\right|\right]\le|d_{n,m}|\,[n^{2p+1}+m^{2q+1}]\,\|f\|,$$
taking into account the proof of Theorem 2.1 in Della Vecchia–Mastroianni [33], p. 149, too (see also the proof of Theorem 2.3.4).

In what follows, let us consider the original bivariate Shepard operator introduced in Shepard [91] by
$$S_{n,2p}(f)(x,y)=\sum_{i=1}^{n}s^{(2p)}_{n,i}(x,y)f(x_i,y_i),\quad\text{if }(x,y)\ne(x_i,y_i),$$
$$S_{n,2p}(f)(x_i,y_i)=f(x_i,y_i),$$
where $p\in\mathbb{N}$ is fixed, $f:D\to\mathbb{R}$, $D=[a,b]\times[c,d]$, $(x_i,y_i)\in D$, $i=1,\dots,n$, $x_1<\cdots<x_n$, $y_1<\cdots<y_n$,
$$s^{(2p)}_{n,i}(x,y)=\big[(x-x_i)^2+(y-y_i)^2\big]^{-p}\big/\,l_n^{(2p)}(x,y),\qquad l_n^{(2p)}(x,y)=\sum_{i=1}^{n}\big[(x-x_i)^2+(y-y_i)^2\big]^{-p}.$$
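The original Shepard operator can be sketched in a few lines; the scattered data sites, the exponent $p$ and the test functions below are illustrative choices of ours, not the book's. As with the tensor-product variant, the inverse-distance weights are a positive partition of unity.

```python
# Sketch of the original Shepard operator on scattered data (x_i, y_i);
# the data sites, p and the test function are illustrative choices.

def shepard_original(f, x, y, pts, p):
    """S_{n,2p}(f)(x, y) with weights
    [(x - x_i)^2 + (y - y_i)^2]^(-p) / l_n^{(2p)}(x, y)."""
    if (x, y) in pts:
        return f(x, y)                     # interpolation at the nodes
    raw = [((x - a) ** 2 + (y - b) ** 2) ** (-p) for (a, b) in pts]
    ln = sum(raw)                          # l_n^{(2p)}(x, y)
    return sum(w * f(a, b) for w, (a, b) in zip(raw, pts)) / ln

pts = [(-0.8, -0.6), (-0.3, -0.1), (0.2, 0.4), (0.7, 0.9)]  # x_1<...<x_n, y_1<...<y_n
```

Note that, unlike the tensor-product operator, a single weight per data site is used, so the data need not lie on a grid.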
Convergence properties of this kind of operator can be found in, e.g., Gonska [60], Farwig [42], Allasia [3], [4], while global smoothness preservation properties have not yet been proved. On the other hand, concerning the partial shape-preserving property, we first can prove the following qualitative results.

Theorem 4.3.2. If $f:D\to\mathbb{R}$, $D=[a,b]\times[c,d]$, is such that $f(x,y)$ is nondecreasing as a function of $x$ (for each fixed $y$) and nondecreasing as a function of $y$ (for each fixed $x$), then for any point $(\xi,\eta)\in(a,b)\times(c,d)$ that is a solution of the system of equations
$$\frac{\partial l_n^{(2p)}(x,y)}{\partial x}=0,\qquad\frac{\partial l_n^{(2p)}(x,y)}{\partial y}=0,$$
there exists a neighborhood $V(\xi,\eta)$ of it (depending on $n$ and $p$ but independent of $f$) such that $S_{n,2p}(f)(x,y)$ is nondecreasing as a function of $x$ and as a function of $y$ on $V(\xi,\eta)$.

Proof. By using Lemma 4.1.1, (i), we get
$$\frac{\partial S_{n,2p}(f)(x,y)}{\partial x}=\sum_{i=1}^{n-1}\Bigg[-\sum_{j=1}^{i}\frac{\partial s^{(2p)}_{n,j}(x,y)}{\partial x}\Bigg]\cdot\big[f(x_{i+1},y_{i+1})-f(x_i,y_i)\big],$$
$$\frac{\partial S_{n,2p}(f)(x,y)}{\partial y}=\sum_{i=1}^{n-1}\Bigg[-\sum_{j=1}^{i}\frac{\partial s^{(2p)}_{n,j}(x,y)}{\partial y}\Bigg]\cdot\big[f(x_{i+1},y_{i+1})-f(x_i,y_i)\big].$$
By hypothesis,
$$f(x_{i+1},y_{i+1})-f(x_i,y_i)=f(x_{i+1},y_{i+1})-f(x_{i+1},y_i)+f(x_{i+1},y_i)-f(x_i,y_i)\ge 0,$$
for all $i=1,\dots,n-1$. Let $(\xi,\eta)\in(a,b)\times(c,d)$ be a solution of the system in the statement. Because
$$\frac{\partial s^{(2p)}_{n,j}(x,y)}{\partial x}=\frac{-2p(x-x_j)\big[(x-x_j)^2+(y-y_j)^2\big]^{-p-1}}{\ell_n^{(2p)}(x,y)}-\frac{\dfrac{\partial\ell_n^{(2p)}(x,y)}{\partial x}\big[(x-x_j)^2+(y-y_j)^2\big]^{-p}}{\big[\ell_n^{(2p)}(x,y)\big]^2}$$
and
$$\frac{\partial s^{(2p)}_{n,j}(x,y)}{\partial y}=\frac{-2p(y-y_j)\big[(x-x_j)^2+(y-y_j)^2\big]^{-p-1}}{\ell_n^{(2p)}(x,y)}-\frac{\dfrac{\partial\ell_n^{(2p)}(x,y)}{\partial y}\big[(x-x_j)^2+(y-y_j)^2\big]^{-p}}{\big[\ell_n^{(2p)}(x,y)\big]^2},$$
we immediately get
$$\frac{\partial s^{(2p)}_{n,1}(\xi,\eta)}{\partial x}<0,\qquad\frac{\partial s^{(2p)}_{n,n}(\xi,\eta)}{\partial x}>0,\qquad\operatorname{sgn}\Bigg[\frac{\partial s^{(2p)}_{n,j}(\xi,\eta)}{\partial x}\Bigg]=\operatorname{sgn}(x_j-\xi),$$
and
$$\frac{\partial s^{(2p)}_{n,1}(\xi,\eta)}{\partial y}<0,\qquad\frac{\partial s^{(2p)}_{n,n}(\xi,\eta)}{\partial y}>0,\qquad\operatorname{sgn}\Bigg[\frac{\partial s^{(2p)}_{n,j}(\xi,\eta)}{\partial y}\Bigg]=\operatorname{sgn}(y_j-\eta).$$
By Lemma 4.1.1, (ii), we easily obtain
$$-\sum_{j=1}^{i}\frac{\partial s^{(2p)}_{n,j}(\xi,\eta)}{\partial x}>0,\qquad-\sum_{j=1}^{i}\frac{\partial s^{(2p)}_{n,j}(\xi,\eta)}{\partial y}>0,\qquad\forall\,i=1,\dots,n-1,$$
and consequently
$$\frac{\partial S_{n,2p}(f)(x,y)}{\partial x}>0,\qquad\frac{\partial S_{n,2p}(f)(x,y)}{\partial y}>0,\qquad\forall\,(x,y)\in V(\xi,\eta),$$
which is a neighborhood of $(\xi,\eta)$. The theorem is proved.
Remark. A natural question is whether or not the system in the statement has solutions in $(a,b)\times(c,d)$. For some particular choices of the nodes $x_k$, $y_k$, $k=1,\dots,n$, a positive answer can easily be derived. Thus, let us first consider the case $[a,b]=[c,d]$ and $x_k=y_k$, $k=1,\dots,n$. An easy calculation shows that the system is equivalent to
$$\sum_{k=1}^{n}\frac{x-x_k}{\big[(x-x_k)^2+(y-y_k)^2\big]^{p+1}}=0,\qquad\sum_{k=1}^{n}\frac{y-y_k}{\big[(x-x_k)^2+(y-y_k)^2\big]^{p+1}}=0.$$
Taking now $x_k=y_k$, $k=1,\dots,n$, and subtracting the equations, it necessarily follows that $x=y$. Replacing in the first equation of the above system, we obtain $\sum_{k}\frac{1}{(x-x_k)^{2p+1}}=0$, which is exactly equation (2.12) in the proof of Theorem 2.3.1. But according to Remark 1 after the proof of Theorem 2.3.1, equation (2.12) has $2n$ solutions. So, in this case the above system has $2n$ solutions of the form $(\xi,\xi)$.

Another particular choice would be $[a,b]=[c,d]=[-1,1]$ and $x_k=-y_k$, $k=1,\dots,n$. In this second case, by adding both equations we necessarily obtain $x=-y$; that is, replacing in the first equation, we easily obtain the equation
$$\sum_{k=1}^{n}\frac{x-x_k}{(x^2+x_k^2)^{p+1}}=0.$$
Denoting $F(x)=\sum_{k=1}^{n}\frac{x-x_k}{(x^2+x_k^2)^{p+1}}$, we get $F(x_1)<0$ and $F(x_n)>0$, that is,
there exists $\xi\in(x_1,x_n)$ with $F(\xi)=0$, and as a conclusion, this $(\xi,\xi)$ will be a solution of the system in the statement. Finally, notice that if $n=2$ and $p\in\mathbb{N}$, $(x_k,y_k)\in[a,b]\times[c,d]$, $k=1,2$, then $(\xi,\eta)$ with $\xi=\frac{x_1+x_2}{2}$ and $\eta=\frac{y_1+y_2}{2}$ is a solution.

Now, let us discuss some properties of qualitative kind of $S_{n,2p}(f)(x,y)$ related to convexity. In this sense, we will consider $f:[-1,1]\times[-1,1]\to\mathbb{R}$, the equidistant interpolation knots $x_{-i}=-x_i$, $y_{-i}=-y_i$, $i=1,\dots,n$, $x_0=y_0=0$, and the Shepard operator given by
$$S_{n,2p}(f)(x,y)=\sum_{k=-n}^{n}s^{(2p)}_{n,k}(x,y)f(x_k,y_k),\tag{4.4}$$
where $s^{(2p)}_{n,k}(x,y)$ can be written in the form
$$s^{(2p)}_{n,k}(x,y)=\frac{(x^2+y^2)^p\big[(x-x_k)^2+(y-y_k)^2\big]^{-p}}{1+\sum_{j=-n,\,j\ne 0}^{n}(x^2+y^2)^p\big[(x-x_j)^2+(y-y_j)^2\big]^{-p}}.\tag{4.5}$$
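The sign-change argument in the remark above (for the function $F$) is easy to check numerically. The sketch below uses our own illustrative nodes and $p$; it verifies $F(x_1)<0<F(x_n)$ and locates a root by bisection.

```python
# F(x_1) < 0 < F(x_n), so F has a root in (x_1, x_n); find it by bisection.

def F(x, xs, p):
    return sum((x - xk) / (x * x + xk * xk) ** (p + 1) for xk in xs)

def bisect(g, a, b, iters=200):
    fa = g(a)
    for _ in range(iters):
        c = 0.5 * (a + b)
        fc = g(c)
        if fa * fc <= 0:
            b = c
        else:
            a, fa = c, fc
    return 0.5 * (a + b)

xs = [-0.9, -0.4, 0.1, 0.6, 0.8]    # x_1 < ... < x_n in [-1, 1]
p = 2
root = bisect(lambda t: F(t, xs, p), xs[0], xs[-1])
```

At $x=x_1$ every term of $F$ is $\le 0$ (and some are $<0$), and at $x=x_n$ every term is $\ge 0$, which is exactly the bracketing the bisection needs.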
We also need the following definition.

Definition 4.3.1. The function $f:[-1,1]\times[-1,1]\to\mathbb{R}$ is called strictly convex on $[-1,1]\times[-1,1]$ if for any $P_1=(x_1,y_1)$, $P_2=(x_2,y_2)\in[-1,1]\times[-1,1]$ and any $\lambda\in(0,1)$ we have $f(\lambda P_1+(1-\lambda)P_2)<\lambda f(P_1)+(1-\lambda)f(P_2)$, where $\lambda P_1+(1-\lambda)P_2=(\lambda x_1+(1-\lambda)x_2,\lambda y_1+(1-\lambda)y_2)\in(-1,1)\times(-1,1)$.

Remark. It is well known (see, e.g., Fleming [44], p. 114) that if $f\in C^2([-1,1]\times[-1,1])$ and
$$\frac{\partial^2 f(x,y)}{\partial x^2}>0,\qquad\frac{\partial^2 f(x,y)}{\partial y^2}>0,\qquad\frac{\partial^2 f(x,y)}{\partial x^2}\cdot\frac{\partial^2 f(x,y)}{\partial y^2}>\left(\frac{\partial^2 f(x,y)}{\partial x\,\partial y}\right)^2,$$
$\forall(x,y)\in[-1,1]\times[-1,1]$, then $f$ is strictly convex on $[-1,1]\times[-1,1]$. Moreover, if the strict inequalities above are valid for all $(x,y)\in[-1,1]\times[-1,1]\setminus\{(0,0)\}$ and $\frac{\partial f(0,0)}{\partial x}=\frac{\partial f(0,0)}{\partial y}=0$, then $f$ is strictly convex on $[-1,1]\times[-1,1]$ and $(0,0)$ is its global minimum point.

We present the following.

Theorem 4.3.3. Let $S_{n,2p}(f)(x,y)$ be given by (4.4), (4.5), with $p=1$. If $f:[-1,1]\times[-1,1]\to\mathbb{R}$ is strictly convex on $[-1,1]\times[-1,1]$, then there exists a neighborhood of $(0,0)$, denoted by $V(0,0)$ (depending on $f$ and $n$), such that $S_{n,2p}(f)(x,y)$ is strictly convex in $V(0,0)$.

Proof. By (4.5) and by simple calculations, for all $k\ne 0$ we get
$$\frac{\partial^i s^{(2p)}_{n,k}(0,0)}{\partial x^i}=\frac{\partial^i s^{(2p)}_{n,k}(0,0)}{\partial y^i}=\begin{cases}0, & \text{if } i=1,\dots,2p-1,\\[4pt] \dfrac{(2p)!}{(x_k^2+y_k^2)^p}, & \text{if } i=2p.\end{cases}\tag{4.6}$$
We have
$$\frac{\partial^2 S_{n,2}(f)(0,0)}{\partial x^2}=\sum_{k=-n}^{n}\frac{\partial^2 s^{(2)}_{n,k}(0,0)}{\partial x^2}f(x_k,y_k)=\sum_{k=-n,\,k\ne 0}^{n}\frac{2}{x_k^2+y_k^2}f(x_k,y_k)+f(0,0)\cdot\frac{\partial^2 s^{(2)}_{n,0}(0,0)}{\partial x^2}$$
$$=\sum_{k=1}^{n}\frac{2}{x_k^2+y_k^2}\big[f(x_k,y_k)+f(-x_k,-y_k)\big]+f(0,0)\cdot\frac{\partial^2 s^{(2)}_{n,0}(0,0)}{\partial x^2}>f(0,0)\cdot\sum_{k=-n}^{n}\frac{\partial^2 s^{(2)}_{n,k}(0,0)}{\partial x^2}=0,$$
taking into account that the strict convexity of $f$ implies $f(x_k,y_k)+f(-x_k,-y_k)>2f(0,0)$, and that the identity $\sum_{k=-n}^{n}s^{(2p)}_{n,k}(x,y)=1$ implies $\sum_{k=-n}^{n}\frac{\partial^2 s^{(2p)}_{n,k}(0,0)}{\partial x^2}=0$. Similarly, $\frac{\partial^2 S_{n,2}(f)(0,0)}{\partial y^2}>0$. On the other hand, by simple calculations we get $\frac{\partial^2 s^{(2)}_{n,k}(0,0)}{\partial x\,\partial y}=0$, $\forall k=-n,\dots,n$, which implies
$$\frac{\partial^2 S_{n,2}(f)(0,0)}{\partial x\,\partial y}=0.$$
So, it easily follows
$$\frac{\partial^2 S_{n,2}(f)(0,0)}{\partial x^2}\cdot\frac{\partial^2 S_{n,2}(f)(0,0)}{\partial y^2}>\left(\frac{\partial^2 S_{n,2}(f)(0,0)}{\partial x\,\partial y}\right)^2=0.$$
As a conclusion, there exists a neighborhood of $(0,0)$, denoted by $V(0,0)$ (obviously depending on $f$ and $n$), such that for all $(x,y)\in V(0,0)$ we have $\frac{\partial^2 S_{n,2}(f)(x,y)}{\partial x^2}>0$ and
$$\frac{\partial^2 S_{n,2}(f)(x,y)}{\partial x^2}\cdot\frac{\partial^2 S_{n,2}(f)(x,y)}{\partial y^2}>\left(\frac{\partial^2 S_{n,2}(f)(x,y)}{\partial x\,\partial y}\right)^2,$$
that is, $S_{n,2}(f)(x,y)$ is strictly convex in $V(0,0)$.
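Theorem 4.3.3 can be illustrated numerically. The weights written in the form (4.5) are directly evaluable at $(0,0)$ itself, so a finite-difference Hessian of $S_{n,2}(f)$ at the origin can be computed; for a strictly convex $f$ it should come out positive definite. The knots $x_k=y_k=k/n$, the function $f$ and the step $h$ below are our own illustrative choices.

```python
# Finite-difference Hessian of S_{n,2}(f) at (0, 0), weights as in (4.5).

def S(x, y, f, n, p):
    r = (x * x + y * y) ** p
    ks = [k for k in range(-n, n + 1) if k != 0]
    terms = [r * (((x - k / n) ** 2 + (y - k / n) ** 2) ** (-p)) for k in ks]
    denom = 1.0 + sum(terms)
    # the k = 0 knot carries weight 1/denom; knot k carries terms[k]/denom
    return (f(0.0, 0.0) + sum(t * f(k / n, k / n)
                              for t, k in zip(terms, ks))) / denom

f = lambda u, v: u * u + v * v + 0.5 * u * v     # strictly convex
n, p, h = 4, 1, 1e-3
E = lambda x, y: S(x, y, f, n, p)
Hxx = (E(h, 0) - 2 * E(0, 0) + E(-h, 0)) / h ** 2
Hyy = (E(0, h) - 2 * E(0, 0) + E(0, -h)) / h ** 2
Hxy = (E(h, h) - E(h, -h) - E(-h, h) + E(-h, -h)) / (4 * h ** 2)
```

With these diagonal knots the mixed derivative at the origin vanishes (as in the proof above), so the determinant condition reduces essentially to the positivity of the two pure second derivatives.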
Remark. Let $p\ge 2$. According to the Remark after Definition 4.3.1, it will be enough to prove that there exists a neighborhood $V(0,0)$ of $(0,0)$ such that
$$\frac{\partial^2 S_{n,2p}(f)(x,y)}{\partial x^2}>0,\qquad\frac{\partial^2 S_{n,2p}(f)(x,y)}{\partial y^2}>0,$$
$$\frac{\partial^2 S_{n,2p}(f)(x,y)}{\partial x^2}\cdot\frac{\partial^2 S_{n,2p}(f)(x,y)}{\partial y^2}>\left(\frac{\partial^2 S_{n,2p}(f)(x,y)}{\partial x\,\partial y}\right)^2,\tag{4.7}$$
for all $(x,y)\in V(0,0)\setminus\{(0,0)\}$, because by relations (4.6) we obviously have
$$\frac{\partial S_{n,2p}(f)(0,0)}{\partial x}=\frac{\partial S_{n,2p}(f)(0,0)}{\partial y}=0.$$
Because the same relations (4.6) give
$$\frac{\partial^2 S_{n,2p}(f)(0,0)}{\partial x^2}=\frac{\partial^2 S_{n,2p}(f)(0,0)}{\partial y^2}=0,$$
the idea is to prove that the functions
$$F(x,y)=\frac{\partial^2 S_{n,2p}(f)(x,y)}{\partial x^2},\qquad G(x,y)=\frac{\partial^2 S_{n,2p}(f)(x,y)}{\partial y^2},$$
$$H(x,y)=F(x,y)\cdot G(x,y)-\left(\frac{\partial^2 S_{n,2p}(f)(x,y)}{\partial x\,\partial y}\right)^2$$
are strictly convex on a neighborhood of $(0,0)$, having $(0,0)$ as global minimum point, which would imply the required relations (4.7).

In order to prove the "qualitative result," Theorem 4.3.3, for all $p\in\mathbb{N}$, $p\ge 2$, the following three lemmas are necessary.

Lemma 4.3.1. If $p\in\mathbb{N}$, $p\ge 2$, and $f:[-1,1]\times[-1,1]\to\mathbb{R}$ is strictly convex on $[-1,1]\times[-1,1]$, then there exists a neighborhood $V(0,0)$ of $(0,0)$ such that
$$\frac{\partial^2 S_{n,2p}(f)(x,y)}{\partial x^2}>0,\qquad\frac{\partial^2 S_{n,2p}(f)(x,y)}{\partial y^2}>0,\qquad\forall(x,y)\in V(0,0)\setminus\{(0,0)\},$$
where $S_{n,2p}(f)(x,y)$ is given by (4.4) and (4.5).

Proof. Denoting, for $k\ne 0$,
$$E(x,y)=\frac{\big[(x-x_k)^2+(y-y_k)^2\big]^{-p}}{1+(x^2+y^2)^p\sum_{j=-n,\,j\ne 0}^{n}\big[(x-x_j)^2+(y-y_j)^2\big]^{-p}},$$
we have $s^{(2p)}_{n,k}(x,y)=P(x,y)\cdot E(x,y)$, where
$$P(x,y)=(x^2+y^2)^p=\sum_{i=0}^{p}\binom{p}{i}x^{2i}y^{2p-2i}=y^{2p}+\binom{p}{1}x^2y^{2p-2}+\binom{p}{2}x^4y^{2p-4}+\binom{p}{3}x^6y^{2p-6}+\cdots+\binom{p}{p-2}x^{2p-4}y^4+\binom{p}{p-1}x^{2p-2}y^2+x^{2p},$$
and $E(x,y)$ has partial derivatives of any order at $(0,0)$. Denote
$$R_k(x,y)=\frac{\partial^{2p-2}s^{(2p)}_{n,k}(x,y)}{\partial x^{2p-2}},\qquad k\ne 0.$$
Firstly, by (4.6) we get
$$\frac{\partial^2 R_k(0,0)}{\partial x^2}=\frac{\partial^{2p}s^{(2p)}_{n,k}(0,0)}{\partial x^{2p}}=\frac{(2p)!}{(x_k^2+y_k^2)^p}>0.$$
Then
$$\frac{\partial^2 R_k(x,y)}{\partial y^2}=\frac{\partial^2}{\partial y^2}\Bigg[\sum_{i=0}^{2p-2}\binom{2p-2}{i}\frac{\partial^i P(x,y)}{\partial x^i}\cdot\frac{\partial^{2p-2-i}E(x,y)}{\partial x^{2p-2-i}}\Bigg]$$
$$=\frac{\partial}{\partial y}\Bigg[\sum_{i=0}^{2p-2}\binom{2p-2}{i}\frac{\partial^{i+1}P(x,y)}{\partial x^i\partial y}\cdot\frac{\partial^{2p-2-i}E(x,y)}{\partial x^{2p-2-i}}+\sum_{i=0}^{2p-2}\binom{2p-2}{i}\frac{\partial^i P(x,y)}{\partial x^i}\cdot\frac{\partial^{2p-1-i}E(x,y)}{\partial x^{2p-2-i}\partial y}\Bigg]$$
$$=\sum_{i=0}^{2p-2}\binom{2p-2}{i}\frac{\partial^2}{\partial y^2}\!\left(\frac{\partial^i P(x,y)}{\partial x^i}\right)\frac{\partial^{2p-2-i}E(x,y)}{\partial x^{2p-2-i}}+2\sum_{i=0}^{2p-2}\binom{2p-2}{i}\frac{\partial}{\partial y}\!\left(\frac{\partial^i P(x,y)}{\partial x^i}\right)\frac{\partial^{2p-1-i}E(x,y)}{\partial x^{2p-2-i}\partial y}+\sum_{i=0}^{2p-2}\binom{2p-2}{i}\frac{\partial^i P(x,y)}{\partial x^i}\cdot\frac{\partial^{2p-i}E(x,y)}{\partial x^{2p-2-i}\partial y^2}.$$
If we take $x=y=0$ in these sums, then all the terms containing $x$ or (and) $y$ become zero, so taking into account the form of the polynomial $P(x,y)$, we obtain
$$\frac{\partial^2 R_k(0,0)}{\partial y^2}=\binom{2p-2}{2p-2}(2p-2)!\,\frac{2p}{(x_k^2+y_k^2)^p}=\frac{2p\,(2p-2)!}{(x_k^2+y_k^2)^p}>0.$$
Reasoning for $S_{n,2p}(f)(x,y)$ exactly as in the case $p=1$ (see the proof of Theorem 4.3.3), we easily obtain
$$\frac{\partial^2}{\partial x^2}\frac{\partial^{2p-2}S_{n,2p}(f)(0,0)}{\partial x^{2p-2}}=\frac{\partial^{2p}S_{n,2p}(f)(0,0)}{\partial x^{2p}}>0,\qquad\frac{\partial^2}{\partial y^2}\frac{\partial^{2p-2}S_{n,2p}(f)(0,0)}{\partial x^{2p-2}}>0.$$
So, there exists a neighborhood $V_1(0,0)$ of $(0,0)$ such that for all $(x,y)\in V_1(0,0)$ we have
$$\frac{\partial^2}{\partial x^2}\frac{\partial^{2p-2}S_{n,2p}(f)(x,y)}{\partial x^{2p-2}}>0,\qquad\frac{\partial^2}{\partial y^2}\frac{\partial^{2p-2}S_{n,2p}(f)(x,y)}{\partial x^{2p-2}}>0.$$
On the other hand, reasoning as above, we immediately get
$$\frac{\partial^2}{\partial y\,\partial x}\frac{\partial^{2p-2}s^{(2p)}_{n,k}(0,0)}{\partial x^{2p-2}}=\frac{\partial^{2p}s^{(2p)}_{n,k}(0,0)}{\partial x^{2p-1}\partial y}=0.$$
As a first conclusion, $\frac{\partial^{2p-2}S_{n,2p}(f)(x,y)}{\partial x^{2p-2}}$ is strictly convex in a neighborhood $V_1(0,0)$ of $(0,0)$, and because
$$\frac{\partial^{2p-2}S_{n,2p}(f)(0,0)}{\partial x^{2p-2}}=\frac{\partial^{2p-1}S_{n,2p}(f)(0,0)}{\partial x^{2p-1}}=\frac{\partial^{2p-1}S_{n,2p}(f)(0,0)}{\partial x^{2p-2}\partial y}=0,$$
it follows that $\frac{\partial^{2p-2}S_{n,2p}(f)(x,y)}{\partial x^{2p-2}}>0$, $\forall(x,y)\in V_1(0,0)\setminus\{(0,0)\}$. By symmetry, we get
$$\frac{\partial^{2p-2}S_{n,2p}(f)(x,y)}{\partial y^{2p-2}}>0,\qquad\forall(x,y)\in V_2(0,0)\setminus\{(0,0)\}.$$
Now, if $p=2$, then we exactly obtain the statement of the lemma. If $p>2$, then by similar reasonings as above we obtain that $\frac{\partial^{2p-4}S_{n,2p}(f)(x,y)}{\partial x^{2p-4}}$ and $\frac{\partial^{2p-4}S_{n,2p}(f)(x,y)}{\partial y^{2p-4}}$ are strictly convex on a neighborhood $U(0,0)$ of $(0,0)$, etc., and as a conclusion
$$\frac{\partial^{2p-4}S_{n,2p}(f)(x,y)}{\partial x^{2p-4}}>0,\qquad\frac{\partial^{2p-4}S_{n,2p}(f)(x,y)}{\partial y^{2p-4}}>0,\qquad\forall(x,y)\in U(0,0)\setminus\{(0,0)\}.$$
We can continue in this way until we arrive at
$$\frac{\partial^2 S_{n,2p}(f)(x,y)}{\partial x^2}>0,\qquad\frac{\partial^2 S_{n,2p}(f)(x,y)}{\partial y^2}>0,\qquad\forall(x,y)\in V(0,0)\setminus\{(0,0)\}.$$
Lemma 4.3.2. Let $p\in\mathbb{N}$, $p\ge 2$. Then we have
$$\frac{\partial^r S_{n,2p}(f)(0,0)}{\partial x^i\partial y^j}\;\begin{cases}=0, & \text{if } r<2p \text{ or } r>2p,\\ =0, & \text{if } r=2p \text{ and both } i,j \text{ are odd},\\ >0, & \text{if } r=2p \text{ and both } i,j \text{ are even}.\end{cases}$$

Proof. Let $P(x,y)=(x^2+y^2)^p$. It is easy to check that
$$\frac{\partial^r P(0,0)}{\partial x^i\partial y^j}\;\begin{cases}=0, & \text{if } r<2p \text{ or } r>2p,\\ =0, & \text{if } r=2p \text{ and both } i,j \text{ are odd},\\ >0, & \text{if } r=2p \text{ and both } i,j \text{ are even}.\end{cases}$$
We have $s^{(2p)}_{n,k}(x,y)=P(x,y)\cdot E(x,y)$, $\forall k\ne 0$, where $E(x,y)$ is given in the proof of Lemma 4.3.1, and
$$\frac{\partial^r s^{(2p)}_{n,k}(x,y)}{\partial x^i\partial y^j}=\frac{\partial^i}{\partial x^i}\Bigg[\sum_{q=0}^{j}\binom{j}{q}\frac{\partial^q P(x,y)}{\partial y^q}\cdot\frac{\partial^{j-q}E(x,y)}{\partial y^{j-q}}\Bigg],$$
which, combined with the above properties of $\frac{\partial^r P(0,0)}{\partial x^i\partial y^j}$ and with the method of proof in Lemma 4.3.1 (because
$$\frac{\partial^r S_{n,2p}(f)(0,0)}{\partial x^i\partial y^j}=\sum_{k=-n}^{n}\frac{\partial^r s^{(2p)}_{n,k}(0,0)}{\partial x^i\partial y^j}\cdot f(x_k,y_k)\,),$$
proves the lemma.

Lemma 4.3.3. Let $p\in\mathbb{N}$, $p\ge 2$. We have
$$\frac{\partial^2 S_{n,2p}(f)(x,y)}{\partial x^2}\cdot\frac{\partial^2 S_{n,2p}(f)(x,y)}{\partial y^2}-\left(\frac{\partial^2 S_{n,2p}(f)(x,y)}{\partial x\,\partial y}\right)^2>0,$$
for all $(x,y)\in V(0,0)\setminus\{(0,0)\}$, where $V(0,0)$ is a neighborhood of $(0,0)$ (depending on $f$, $n$ and $p$).

Proof. Denote
$$H(x,y)=\frac{\partial^2 S_{n,2p}(f)(x,y)}{\partial x^2}\cdot\frac{\partial^2 S_{n,2p}(f)(x,y)}{\partial y^2}-\left(\frac{\partial^2 S_{n,2p}(f)(x,y)}{\partial x\,\partial y}\right)^2.$$
According to the remark after the proof of Theorem 4.3.3, we have to prove that
$$\frac{\partial^2 H(x,y)}{\partial x^2}>0,\qquad\frac{\partial^2 H(x,y)}{\partial y^2}>0,\qquad\frac{\partial^2 H(x,y)}{\partial x^2}\cdot\frac{\partial^2 H(x,y)}{\partial y^2}-\left(\frac{\partial^2 H(x,y)}{\partial x\,\partial y}\right)^2>0,$$
for all $(x,y)\in V(0,0)\setminus\{(0,0)\}$, and that $\frac{\partial H(0,0)}{\partial x}=\frac{\partial H(0,0)}{\partial y}=0$ (because by Lemma 4.3.2 we have $H(0,0)=0$).

Now, by Lemma 4.3.2 we easily get that $\frac{\partial^r H(0,0)}{\partial x^i\partial y^j}>0$ only if $r=2p$ and both $i,j$ are even, all the other partial derivatives of $H$ at $(0,0)$ being $0$, so we obtain
$$\frac{\partial^{2p}H(0,0)}{\partial x^{2p}}>0,\qquad\frac{\partial^{2p}H(0,0)}{\partial x^{2p-2}\partial y^2}>0,\qquad\frac{\partial^{2p}H(0,0)}{\partial x^{2p-1}\partial y}=0,$$
that is, $\frac{\partial^{2p-2}H(x,y)}{\partial x^{2p-2}}$ is strictly convex in a neighborhood of zero. Because $\frac{\partial^{2p-2}H(0,0)}{\partial x^{2p-2}}=\frac{\partial^{2p-1}H(0,0)}{\partial x^{2p-1}}=\frac{\partial^{2p-1}H(0,0)}{\partial x^{2p-2}\partial y}=0$, it follows that $(0,0)$ is a global minimum point, and so
$$\frac{\partial^{2p-2}H(x,y)}{\partial x^{2p-2}}>0,\qquad\forall(x,y)\in V_1(0,0)\setminus\{(0,0)\}.$$
Reasoning by symmetry, we get
$$\frac{\partial^{2p-2}H(x,y)}{\partial y^{2p-2}}>0,\qquad\forall(x,y)\in V_2(0,0)\setminus\{(0,0)\}.$$
Similarly, by Lemma 4.3.2 we obtain
$$\frac{\partial^{2p-2}H(x,y)}{\partial x^{2p-4}\partial y^2}>0,\qquad\forall(x,y)\in V_3(0,0)\setminus\{(0,0)\},$$
and consequently
$$\frac{\partial^2}{\partial x^2}\frac{\partial^{2p-4}H(x,y)}{\partial x^{2p-4}}>0,\qquad\frac{\partial^2}{\partial y^2}\frac{\partial^{2p-4}H(x,y)}{\partial x^{2p-4}}>0,\qquad\forall(x,y)\in V_3(0,0)\setminus\{(0,0)\}.$$
Let us denote
$$H_1(x,y)=\frac{\partial^{2p-2}H(x,y)}{\partial x^{2p-2}}\cdot\frac{\partial^{2p-2}H(x,y)}{\partial x^{2p-4}\partial y^2}-\left(\frac{\partial^{2p-2}H(x,y)}{\partial x^{2p-3}\partial y}\right)^2.$$
By the same Lemma 4.3.2, we obtain
$$\frac{\partial^2 H_1(0,0)}{\partial x^2}>0,\qquad\frac{\partial^2 H_1(0,0)}{\partial y^2}>0,\qquad\frac{\partial^2 H_1(0,0)}{\partial x\,\partial y}=0,$$
that is, $H_1(x,y)$ is strictly convex in a neighborhood of $(0,0)$. But $H_1(0,0)=0$ and $\frac{\partial H_1(0,0)}{\partial x}=\frac{\partial H_1(0,0)}{\partial y}=0$, so it follows that $H_1(x,y)>0$, $\forall(x,y)\in V_4(0,0)\setminus\{(0,0)\}$.

As a conclusion, it follows that $\frac{\partial^{2p-4}H(x,y)}{\partial x^{2p-4}}$ is strictly convex in a neighborhood of zero, and, reasoning as above, we get
$$\frac{\partial^{2p-4}H(x,y)}{\partial x^{2p-4}}>0,\qquad\forall(x,y)\in V_5(0,0)\setminus\{(0,0)\}.$$
Continuing this process by recurrence, we finally arrive at
$$\frac{\partial^2 H(x,y)}{\partial x^2}>0,\qquad\forall(x,y)\in V_7(0,0)\setminus\{(0,0)\},$$
and, by reason of symmetry, at
$$\frac{\partial^2 H(x,y)}{\partial y^2}>0,\qquad\forall(x,y)\in V_8(0,0)\setminus\{(0,0)\},$$
and then get that
$$\frac{\partial^2 H(x,y)}{\partial x^2}\cdot\frac{\partial^2 H(x,y)}{\partial y^2}-\left(\frac{\partial^2 H(x,y)}{\partial x\,\partial y}\right)^2>0,\qquad\forall(x,y)\in V_9(0,0)\setminus\{(0,0)\},$$
which proves the lemma.
Corollary 4.3.4. Let $S_{n,2p}(f)(x,y)$ be given by (4.4), (4.5), with $p\in\mathbb{N}$, $p\ge 2$. If $f:[-1,1]\times[-1,1]\to\mathbb{R}$ is strictly convex on $[-1,1]\times[-1,1]$, then there exists a neighborhood $V(0,0)$ of $(0,0)$ (depending on $f$, $n$ and $p$) such that $S_{n,2p}(f)(x,y)$ is strictly convex in $V(0,0)$.

Proof. Immediate by Lemmas 4.3.1, 4.3.2 and 4.3.3.

Remarks. (1) The idea of the proof of Corollary 4.3.4 is in fact that of the univariate case. (2) The results can be extended to $n$ variables, $n>2$.

In what follows we present some quantitative versions of the above results, on the bidimensional interval $[-1,1]\times[-1,1]$ and for $x_i=y_i=i/n$, $i=-n,\dots,0,\dots,n$, i.e.,
$$S_{n,2p}(f)(x,y)=\sum_{i=-n}^{n}s^{(2p)}_{n,i}(x,y)f(i/n,i/n),$$
where
$$s^{(2p)}_{n,i}(x,y)=\big[(x-i/n)^2+(y-i/n)^2\big]^{-p}\big/\,l_n^{(2p)}(x,y),\qquad l_n^{(2p)}(x,y)=\sum_{i=-n}^{n}\big[(x-i/n)^2+(y-i/n)^2\big]^{-p}.$$

Theorem 4.3.4. If $f:[-1,1]\times[-1,1]\to\mathbb{R}$ is such that $f(x,y)$ is nondecreasing as a function of $x$ (for each fixed $y$) and nondecreasing as a function of $y$ (for each fixed $x$), then for any point $(\xi,\xi)\in(-1,1)\times(-1,1)$ which is a solution of the system of equations
$$\frac{\partial l_n^{(2p)}(x,y)}{\partial x}=0,\qquad\frac{\partial l_n^{(2p)}(x,y)}{\partial y}=0,$$
there exists a constant $c>0$ (independent of $f$ and $n$) such that $S_{n,2p}(f)(x,y)$ is nondecreasing as a function of $x$ and as a function of $y$ in $(\xi-c/n^{2p+3},\xi+c/n^{2p+3})\times(\xi-c/n^{2p+3},\xi+c/n^{2p+3})$.

Proof. By the proof of Theorem 4.3.2 and by the hypothesis on the points $(\xi,\xi)$, we immediately get
$$\frac{\partial s^{(2p)}_{n,i}(\xi,\xi)}{\partial x}=\frac{\partial s^{(2p)}_{n,i}(\xi,\xi)}{\partial y}=\frac{-p\,(\xi-i/n)^{-2p-1}}{\sum_{k=-n}^{n}(\xi-k/n)^{-2p}},$$
which for $i=-n$ and $i=n$ gives the same expressions (except for the constant $1/2$) as those in the univariate case in the proof of Theorem 2.3.3. Then, combining the ideas in the bivariate case in the proof of Theorem 4.3.2 with those in the univariate case in the proof of Theorem 2.3.3, we immediately obtain
$$|Q_j(\xi,\xi)|\ge c/n^{2p},\qquad |P_j(\xi,\xi)|\ge c/n^{2p},\qquad j=-n,\dots,0,\dots,n-1,$$
where
$$Q_j(x,x)=\sum_{i=-n}^{j}\frac{\partial s^{(2p)}_{n,i}(x,x)}{\partial x},\qquad P_j(y,y)=\sum_{i=-n}^{j}\frac{\partial s^{(2p)}_{n,i}(y,y)}{\partial y}.$$
By taking $x=y$ in the proof of Theorem 4.3.2, we immediately get that $\frac{\partial s^{(2p)}_{n,i}(x,x)}{\partial x}$ actually is $1/2$ times the first derivative of the fundamental univariate rational functions $s'_{j,n}(x)$ in the proof of Theorem 2.3.3. It follows that our $Q_j(x,x)$ is exactly $1/2$ times the function $Q_j(x)$ which appears in the univariate case, in the proof of Theorem 2.3.3. Let $\alpha_j$ be the nearest root of $Q_j(x,x)$ to $\xi$ and $\beta_j$ the nearest root of $P_j(y,y)$ to $\xi$. Reasoning exactly as in the proof of Theorem 2.3.3, we get
$$|\alpha_j-\xi|\ge\frac{c}{n^{2p+3}},\qquad |\beta_j-\xi|\ge\frac{c}{n^{2p+3}},$$
which proves the theorem.
For the convexity result, we first need the following bivariate analogue of the estimate in the univariate case in Della Vecchia–Mastroianni [33].

Lemma 4.3.4. For all $l,k\ge 0$ with $l+k=q$ and for
$$s^{(2p)}_{n,i}(x,y)=\frac{\big[(x-i/n)^2+(y-i/n)^2\big]^{-p}}{l_n^{(2p)}(x,y)},\qquad l_n^{(2p)}(x,y)=\sum_{i=-n}^{n}\big[(x-i/n)^2+(y-i/n)^2\big]^{-p},$$
we have
$$\left|\frac{\partial^q s^{(2p)}_{n,i}(x,y)}{\partial x^l\partial y^k}\right|\le Cn^q\,s^{(2p)}_{n,i}(x,y)$$
and
$$\left|\frac{\partial^q S_{n,2p}(f)(x,y)}{\partial x^l\partial y^k}\right|\le Cn^q\,\|f\|,$$
where $\|f\|$ is the uniform norm.

Proof. First let us denote
$$F_{k,p}(x,y)=\frac{\sum_{i=-n}^{n}(x-x_i)^k\big[(x-x_i)^2+(y-y_i)^2\big]^{-p-k}}{\sum_{i=-n}^{n}\big[(x-x_i)^2+(y-y_i)^2\big]^{-p}},$$
and let $x_u$ (respectively, $y_v$) be the closest point $x_i$ (respectively, $y_j$) to $x$ (respectively, $y$). We have
$$|F_{k,p}(x,y)|\le|x-x_u|^{-k}.$$
Indeed, by $|x-x_u|\le|x-x_i|$ for all $i$, we get
$$\frac{|x-x_i|^k}{\big[(x-x_i)^2+(y-y_i)^2\big]^k}\le\frac{|x-x_i|^k}{(x-x_i)^{2k}}=1/|x-x_i|^k\le|x-x_u|^{-k}.$$
Similarly, if we denote
$$G_{k,p}(x,y)=\frac{\sum_{i=-n}^{n}(y-y_i)^k\big[(x-x_i)^2+(y-y_i)^2\big]^{-p-k}}{\sum_{i=-n}^{n}\big[(x-x_i)^2+(y-y_i)^2\big]^{-p}},$$
we get $|G_{k,p}(x,y)|\le|y-y_v|^{-k}$. Now, if we denote
$$H_{k,l,p}(x,y)=\frac{\sum_{i=-n}^{n}(x-x_i)^k(y-y_i)^l\big[(x-x_i)^2+(y-y_i)^2\big]^{-p-k-l}}{\sum_{i=-n}^{n}\big[(x-x_i)^2+(y-y_i)^2\big]^{-p}},$$
we have $|H_{k,l,p}(x,y)|\le|x-x_u|^{-k}|y-y_v|^{-l}$, $k,l\ge 0$. Then
$$\frac{\partial s^{(2p)}_{n,i}(x,y)}{\partial x}=\frac{-2p(x-x_i)}{(x-x_i)^2+(y-y_i)^2}\,s^{(2p)}_{n,i}(x,y)+2p\,s^{(2p)}_{n,i}(x,y)F_{1,p}(x,y),$$
which immediately implies
$$\left|\frac{\partial s^{(2p)}_{n,i}(x,y)}{\partial x}\right|\le C|x-x_u|^{-1}s^{(2p)}_{n,i}(x,y)\le Cn\,s^{(2p)}_{n,i}(x,y).$$
Similarly,
$$\left|\frac{\partial s^{(2p)}_{n,i}(x,y)}{\partial y}\right|\le Cn\,s^{(2p)}_{n,i}(x,y).$$
Then
$$\frac{\partial^2 s^{(2p)}_{n,i}(x,y)}{\partial x^2}=-2p\,s^{(2p)}_{n,i}(x,y)\left[\frac{1}{(x-x_i)^2+(y-y_i)^2}-\frac{2(x-x_i)^2}{\big[(x-x_i)^2+(y-y_i)^2\big]^2}\right]$$
$$-\frac{2p(x-x_i)}{(x-x_i)^2+(y-y_i)^2}\,\frac{\partial s^{(2p)}_{n,i}(x,y)}{\partial x}+2p\,\frac{\partial s^{(2p)}_{n,i}(x,y)}{\partial x}F_{1,p}(x,y)+2p\,s^{(2p)}_{n,i}(x,y)\frac{\partial F_{1,p}(x,y)}{\partial x},$$
where
$$\frac{\partial F_{1,p}(x,y)}{\partial x}=\frac{\sum_{i=-n}^{n}\big[(x-x_i)^2+(y-y_i)^2\big]^{-p-1}}{\sum_{i=-n}^{n}\big[(x-x_i)^2+(y-y_i)^2\big]^{-p}}-2(p+1)F_{2,p}(x,y)+2p\big[F_{1,p}(x,y)\big]^2.$$
We immediately get
$$\left|\frac{\partial^2 s^{(2p)}_{n,i}(x,y)}{\partial x^2}\right|\le C|x-x_u|^{-2}s^{(2p)}_{n,i}(x,y)\le Cn^2\,s^{(2p)}_{n,i}(x,y).$$
Also,
$$\frac{\partial^2 s^{(2p)}_{n,i}(x,y)}{\partial x\,\partial y}=\frac{4p(x-x_i)(y-y_i)}{\big[(x-x_i)^2+(y-y_i)^2\big]^2}\,s^{(2p)}_{n,i}(x,y)-\frac{2p(x-x_i)}{(x-x_i)^2+(y-y_i)^2}\,\frac{\partial s^{(2p)}_{n,i}(x,y)}{\partial y}+2p\,\frac{\partial s^{(2p)}_{n,i}(x,y)}{\partial y}F_{1,p}(x,y)+2p\,s^{(2p)}_{n,i}(x,y)\frac{\partial F_{1,p}(x,y)}{\partial y},$$
where
$$\frac{\partial F_{1,p}(x,y)}{\partial y}=-2(p+1)H_{1,1,p}(x,y)+2p\,F_{1,p}(x,y)G_{1,p}(x,y).$$
It immediately follows that
$$\left|\frac{\partial^2 s^{(2p)}_{n,i}(x,y)}{\partial x\,\partial y}\right|\le Cn^2\,s^{(2p)}_{n,i}(x,y).$$
Reasoning in this way, we obtain
$$\left|\frac{\partial^3 s^{(2p)}_{n,i}(x,y)}{\partial x^3}\right|\le Cn^3\,s^{(2p)}_{n,i}(x,y),\qquad\left|\frac{\partial^3 s^{(2p)}_{n,i}(x,y)}{\partial x^2\partial y}\right|\le Cn^3\,s^{(2p)}_{n,i}(x,y),$$
and so on,
$$\left|\frac{\partial^q s^{(2p)}_{n,i}(x,y)}{\partial x^l\partial y^k}\right|\le Cn^q\,s^{(2p)}_{n,i}(x,y),$$
which proves the first estimate in the lemma. The second one is immediate by
$$S_{n,2p}(f)(x,y)=\sum_{i=-n}^{n}s^{(2p)}_{n,i}(x,y)f(i/n,i/n).$$
Theorem 4.3.5. Let $p=1$ and $f:[-1,1]\times[-1,1]\to\mathbb{R}$ be strictly convex on $[-1,1]\times[-1,1]$. Then $S_{n,2p}(f)(x,y)$ is strictly convex in $V(0,0)=\{(x,y);\,x^2+y^2<d_n^2\}$, where
$$|d_n|\ge c\sum_{k=1}^{n}x_k^{-4}\big[\Delta^2_{x_k}F(0)\big]^2/n^5,$$
with $F(x)=f(x,x)$ for all $x\in[-1,1]$.

Proof. Let us denote
$$H_n(x,y)=\frac{\partial^2 S_{n,2p}(f)(x,y)}{\partial x^2}\cdot\frac{\partial^2 S_{n,2p}(f)(x,y)}{\partial y^2}-\left(\frac{\partial^2 S_{n,2p}(f)(x,y)}{\partial x\,\partial y}\right)^2.$$
By the proof of Theorem 4.3.3 we have
$$H_n(0,0)=\Bigg[\sum_{k=1}^{n}x_k^{-2}\,\Delta^2_{x_k}F(0)\Bigg]^2\ge\sum_{k=1}^{n}x_k^{-4}\big[\Delta^2_{x_k}F(0)\big]^2>0,$$
since the strict convexity of $F$ makes every term $x_k^{-2}\Delta^2_{x_k}F(0)$ positive.

Let $(\alpha_n,\beta_n)$ be the nearest root to $(0,0)$ (in the sense of the Euclidean distance in $\mathbb{R}^2$) of $H_n(x,y)$. Denoting $d_n=[\alpha_n^2+\beta_n^2]^{1/2}$, by the mean value theorem we get
$$0<H_n(0,0)=|H_n(0,0)-H_n(\alpha_n,\beta_n)|\le|\alpha_n|\left|\frac{\partial H_n(\xi,\eta)}{\partial x}\right|+|\beta_n|\left|\frac{\partial H_n(\xi,\eta)}{\partial y}\right|\le|d_n|\left[\left|\frac{\partial H_n(\xi,\eta)}{\partial x}\right|+\left|\frac{\partial H_n(\xi,\eta)}{\partial y}\right|\right].$$
By simple calculation and by the above Lemma 4.3.4 we immediately get
$$\left|\frac{\partial H_n(\xi,\eta)}{\partial x}\right|\le cn^5,\qquad\left|\frac{\partial H_n(\xi,\eta)}{\partial y}\right|\le cn^5,$$
for all $(\xi,\eta)$, which immediately implies
$$|d_n|\ge cH_n(0,0)/n^5,$$
and the proof is complete.
Remark. Because of the complicated proof of the qualitative result (see the proofs of Lemmas 4.3.1, 4.3.2 and 4.3.3), a variant of the above Theorem 4.3.5 for $p\ge 2$ still remains an open question.

Now, let us consider the third kind of Shepard operator (which is not a tensor product), defined by
$$S_{n_1,n_2,\mu}(f;x,y)=\frac{T_{n_1,n_2,\mu}(f;x,y)}{T_{n_1,n_2,\mu}(1;x,y)},\quad\text{if }(x,y)\ne(x_i,y_j),\qquad S_{n_1,n_2,\mu}(f;x_i,y_j)=f(x_i,y_j),$$
where $\mu>0$ is fixed, $f:D\to\mathbb{R}$, $D\subset\mathbb{R}^2$, $D=[-1,1]\times[-1,1]$, $x_i=i/n_1$, $i=-n_1,\dots,n_1$, $y_j=j/n_2$, $j=-n_2,\dots,n_2$, and
$$T_{n_1,n_2,\mu}(f;x,y)=\sum_{i=-n_1}^{n_1}\sum_{j=-n_2}^{n_2}\frac{f(x_i,y_j)}{\big[(x-x_i)^2+(y-y_j)^2\big]^{\mu}}.$$
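This third operator is also easy to sketch numerically; the grid sizes, the exponent $\mu$ and the test functions below are illustrative choices of ours. Note that $\mu$ need not be an integer.

```python
# Sketch of the third (non-tensor-product) Shepard operator
# S_{n1,n2,mu} = T_{n1,n2,mu}(f; x, y) / T_{n1,n2,mu}(1; x, y);
# parameters are illustrative.

def T(f, x, y, n1, n2, mu):
    return sum(f(i / n1, j / n2)
               / (((x - i / n1) ** 2 + (y - j / n2) ** 2) ** mu)
               for i in range(-n1, n1 + 1)
               for j in range(-n2, n2 + 1))

def S3(f, x, y, n1, n2, mu):
    knots = {(i / n1, j / n2) for i in range(-n1, n1 + 1)
                              for j in range(-n2, n2 + 1)}
    if (x, y) in knots:
        return f(x, y)                 # interpolation at the knots
    return T(f, x, y, n1, n2, mu) / T(lambda u, v: 1.0, x, y, n1, n2, mu)
```

As with the other two variants, the normalized inverse-distance weights are a positive partition of unity, so constants are reproduced and values stay between the extreme knot values.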
The global smoothness preservation and convergence properties of these operators were studied in Section 3.3. In what follows we consider their shape-preservation properties.

Remark. Let us note that, with respect to preserving monotonicity, the method that worked for the original Shepard operator (see the proof of Theorem 4.3.2) and for the tensor product Shepard operator unfortunately does not seem to be useful for this kind of Shepard operator. This seems to happen because there is no way to put the knots $(x_i,y_j)$, $i\in\{-n_1,\dots,n_1\}$, $j\in\{-n_2,\dots,n_2\}$, into a sort of "increasing" sequence. That is why we consider here only some properties related to the usual bivariate convexity. For simplicity, we first consider this Shepard operator for $\mu=1$.

Theorem 4.3.6. If $f:[-1,1]\times[-1,1]\to\mathbb{R}$ is strictly convex on $[-1,1]\times[-1,1]$, then there exists a neighborhood $V(0,0)$ of $(0,0)$ (depending on $f$ and $n_1$, $n_2$) such that $S_{n_1,n_2,1}(f;x,y)$ is strictly convex in $V(0,0)$.

Proof. We observe that we can write
$$S_{n_1,n_2,\mu}(f;x,y)=\sum_{i=-n_1}^{n_1}\sum_{j=-n_2}^{n_2}f(x_i,y_j)h_{i,j,\mu}(x,y),$$
with
$$h_{i,j,\mu}(x,y)=\frac{(x^2+y^2)^{\mu}\big[(x-x_i)^2+(y-y_j)^2\big]^{-\mu}}{1+{\sum}^{*}_{i=-n_1}{}^{n_1}\,{\sum}^{*}_{j=-n_2}{}^{n_2}\,(x^2+y^2)^{\mu}\big[(x-x_i)^2+(y-y_j)^2\big]^{-\mu}},$$
where ${\sum}^{*}{\sum}^{*}$ means that the index $(i,j)$ of the double sum is different from $(0,0)$. By simple calculation, for $\mu\in\mathbb{N}$, $i\ne 0$ and $j\ne 0$ we get
$$\frac{\partial^r h_{i,j,\mu}(0,0)}{\partial x^r}=\frac{\partial^s h_{i,j,\mu}(0,0)}{\partial y^s}=0,\tag{4.8}$$
for $r,s=1,\dots,2\mu-1$, and
$$\frac{\partial^{2\mu}h_{i,j,\mu}(0,0)}{\partial x^{2\mu}}=\frac{\partial^{2\mu}h_{i,j,\mu}(0,0)}{\partial y^{2\mu}}=\frac{(2\mu)!}{(x_i^2+y_j^2)^{\mu}}.\tag{4.9}$$
By (4.9) and by the obvious relation $\frac{\partial^{2\mu}h_{0,0,\mu}(0,0)}{\partial x^{2\mu}}=\frac{\partial^{2\mu}h_{0,0,\mu}(0,0)}{\partial y^{2\mu}}=0$, we get
$$\frac{\partial^2 S_{n_1,n_2,1}(f;0,0)}{\partial x^2}=\sum_{i=-n_1}^{n_1}\sum_{j=-n_2}^{n_2}f(x_i,y_j)\frac{\partial^2 h_{i,j,1}(0,0)}{\partial x^2}$$
$$={\sum}^{*}_{i=-n_1}{}^{n_1}\,{\sum}^{*}_{j=-n_2}{}^{n_2}\,\frac{2!\,f(x_i,y_j)}{x_i^2+y_j^2}+f(0,0)\,\frac{\partial^2 h_{0,0,1}(0,0)}{\partial x^2}$$
$$={\sum}^{*}_{i=0}{}^{n_1}\,{\sum}^{*}_{j=0}{}^{n_2}\,\frac{2}{x_i^2+y_j^2}\big[f(x_i,y_j)+f(-x_i,-y_j)+f(-x_i,y_j)+f(x_i,-y_j)\big]+f(0,0)\,\frac{\partial^2 h_{0,0,1}(0,0)}{\partial x^2}$$
$$>f(0,0)\sum_{i=-n_1}^{n_1}\sum_{j=-n_2}^{n_2}\frac{\partial^2 h_{i,j,1}(0,0)}{\partial x^2}=0,$$
taking into account that the strict convexity of $f$ implies $f(x_i,y_j)+f(-x_i,-y_j)>2f(0,0)$ and $f(-x_i,y_j)+f(x_i,-y_j)>2f(0,0)$, and that we have the identity $\sum_{i=-n_1}^{n_1}\sum_{j=-n_2}^{n_2}h_{i,j,1}(x,y)=1$. Similarly, $\frac{\partial^2 S_{n_1,n_2,1}(f;0,0)}{\partial y^2}>0$. On the other hand, by simple calculation we get $\frac{\partial^2 h_{i,j,1}(0,0)}{\partial x\,\partial y}=0$ for all $i=-n_1,\dots,n_1$, $j=-n_2,\dots,n_2$, which implies
$$\frac{\partial^2 S_{n_1,n_2,1}(f;0,0)}{\partial x\,\partial y}=0.$$
So it is immediate that
$$\frac{\partial^2 S_{n_1,n_2,1}(f;0,0)}{\partial x^2}\cdot\frac{\partial^2 S_{n_1,n_2,1}(f;0,0)}{\partial y^2}>\left(\frac{\partial^2 S_{n_1,n_2,1}(f;0,0)}{\partial x\,\partial y}\right)^2.$$
As a conclusion, there exists a neighborhood $V(0,0)$ of $(0,0)$ (depending obviously on $f$ and $n_1$, $n_2$) such that
$$\frac{\partial^2 S_{n_1,n_2,1}(f;x,y)}{\partial x^2}\cdot\frac{\partial^2 S_{n_1,n_2,1}(f;x,y)}{\partial y^2}>\left(\frac{\partial^2 S_{n_1,n_2,1}(f;x,y)}{\partial x\,\partial y}\right)^2,$$
for all $(x,y)\in V(0,0)$, which proves the theorem.
Remark. For the cases $\mu\in\mathbb{N}$, $\mu\ge 2$, it is enough to prove that there exists a neighborhood $V(0,0)$ of $(0,0)$ such that
$$\frac{\partial^2 S_{n_1,n_2,\mu}(f;x,y)}{\partial x^2}>0,\qquad\frac{\partial^2 S_{n_1,n_2,\mu}(f;x,y)}{\partial y^2}>0,$$
$$\frac{\partial^2 S_{n_1,n_2,\mu}(f;x,y)}{\partial x^2}\cdot\frac{\partial^2 S_{n_1,n_2,\mu}(f;x,y)}{\partial y^2}>\left(\frac{\partial^2 S_{n_1,n_2,\mu}(f;x,y)}{\partial x\,\partial y}\right)^2,\tag{4.10}$$
for all $(x,y)\in V(0,0)\setminus\{(0,0)\}$, because by the relations (4.8)–(4.9) we obviously have
$$\frac{\partial S_{n_1,n_2,\mu}(f;0,0)}{\partial x}=\frac{\partial S_{n_1,n_2,\mu}(f;0,0)}{\partial y}=0.$$
Taking into account that for $\mu\ge 2$ relations (4.8)–(4.9) imply
$$\frac{\partial^2 S_{n_1,n_2,\mu}(f;0,0)}{\partial x^2}=\frac{\partial^2 S_{n_1,n_2,\mu}(f;0,0)}{\partial y^2}=0,$$
the idea of the proof (for the case $\mu\ge 2$) will be to show that the functions
$$F(x,y)=\frac{\partial^2 S_{n_1,n_2,\mu}(f;x,y)}{\partial x^2},\qquad G(x,y)=\frac{\partial^2 S_{n_1,n_2,\mu}(f;x,y)}{\partial y^2},$$
$$H(x,y)=F(x,y)G(x,y)-\left(\frac{\partial^2 S_{n_1,n_2,\mu}(f;x,y)}{\partial x\,\partial y}\right)^2$$
are strictly convex on a neighborhood of $(0,0)$, having $(0,0)$ as global minimum point, which would imply the required relations (4.10). But the cases $\mu\ge 2$ also require the following three lemmas.

Lemma 4.3.5. Let $\mu\in\mathbb{N}$, $\mu\ge 2$. If $f:[-1,1]\times[-1,1]\to\mathbb{R}$ is strictly convex on $[-1,1]\times[-1,1]$, then there exists a neighborhood $V(0,0)$ of $(0,0)$ such that
$$\frac{\partial^2 S_{n_1,n_2,\mu}(f;x,y)}{\partial x^2}>0,\qquad\frac{\partial^2 S_{n_1,n_2,\mu}(f;x,y)}{\partial y^2}>0,$$
for all $(x,y)\in V(0,0)\setminus\{(0,0)\}$, where $S_{n_1,n_2,\mu}(f;x,y)$ is given as above.

Proof. Denoting, for $(i,j)\ne(0,0)$,
$$E(x,y)=\frac{\big[(x-x_i)^2+(y-y_j)^2\big]^{-\mu}}{1+{\sum}^{*}_{i=-n_1}{}^{n_1}\,{\sum}^{*}_{j=-n_2}{}^{n_2}\,(x^2+y^2)^{\mu}\big[(x-x_i)^2+(y-y_j)^2\big]^{-\mu}},$$
we have $h_{i,j,\mu}(x,y)=P(x,y)E(x,y)$, where
$$P(x,y)=(x^2+y^2)^{\mu}=\sum_{k=0}^{\mu}\binom{\mu}{k}x^{2k}y^{2\mu-2k}=y^{2\mu}+\binom{\mu}{1}x^2y^{2\mu-2}+\binom{\mu}{2}x^4y^{2\mu-4}+\cdots+\binom{\mu}{\mu-1}x^{2\mu-2}y^2+x^{2\mu}.$$
Let us denote
$$R_{i,j}(x,y)=\frac{\partial^{2\mu-2}h_{i,j,\mu}(x,y)}{\partial x^{2\mu-2}}.$$
Firstly, by (4.8)–(4.9) we get
120
4 Partial Shape Preservation, Bivariate Case
∂ 2 Ri,j (x, y) (2μ)! = > 0. 2 2 ∂x [xi + yj 2 ]μ Then 2μ−2 2μ − 2 ∂ 2 ∂ k P (x, y) ∂ 2μ−2−k E(x, y) ∂ 2 Ri,j (x, y) = ∂y 2 ∂y 2 ∂x k ∂x 2μ−2−k k k=0
+2
2μ−2 k=0
+
2μ − 2 ∂ ∂ k P (x, y) ∂ 2μ−1−k E(x, y) ∂y ∂x k ∂x 2μ−2−k ∂y k
2μ−2 k=0
2μ − 2 ∂ k P (x, y) ∂ 2μ−k E(x, y) . k ∂x k ∂x 2μ−2−k ∂y 2
If we take x = y = 0 in these sums, then all the terms that contain x or (and) y will become zero, so taking into account the form of the polynomial P (x, y), we obtain ∂ 2 Ri,j (0, 0) 2μ − 2 (2μ − 2)! = 2μ > 0. ∂y 2 2μ − 2 [xi 2 + yj 2 ]μ
Reasoning for S_{n₁,n₂,μ}(f; x, y) exactly as in the case μ = 1 (see the proof of Theorem 4.3.6), we easily obtain
$$\frac{\partial^2}{\partial x^2}\left(\frac{\partial^{2\mu-2}S_{n_1,n_2,\mu}(f;0,0)}{\partial x^{2\mu-2}}\right)=\frac{\partial^{2\mu}S_{n_1,n_2,\mu}(f;0,0)}{\partial x^{2\mu}}>0,$$
$$\frac{\partial^2}{\partial y^2}\left(\frac{\partial^{2\mu-2}S_{n_1,n_2,\mu}(f;0,0)}{\partial x^{2\mu-2}}\right)=\frac{\partial^{2\mu}S_{n_1,n_2,\mu}(f;0,0)}{\partial x^{2\mu-2}\,\partial y^2}>0.$$
Therefore, there exists a neighborhood V₁(0, 0) of (0, 0) such that for all (x, y) ∈ V₁(0, 0) we have
$$\frac{\partial^2}{\partial x^2}\left(\frac{\partial^{2\mu-2}S_{n_1,n_2,\mu}(f;x,y)}{\partial x^{2\mu-2}}\right)>0,\qquad \frac{\partial^2}{\partial y^2}\left(\frac{\partial^{2\mu-2}S_{n_1,n_2,\mu}(f;x,y)}{\partial x^{2\mu-2}}\right)>0.$$
On the other hand, reasoning as above we immediately get
$$\frac{\partial^2}{\partial x\,\partial y}\left(\frac{\partial^{2\mu-2}h_{i,j,\mu}(0,0)}{\partial x^{2\mu-2}}\right)=\frac{\partial^{2\mu}h_{i,j,\mu}(0,0)}{\partial x^{2\mu-1}\,\partial y}=0.$$
As a first conclusion, $\frac{\partial^{2\mu-2}S_{n_1,n_2,\mu}(f;x,y)}{\partial x^{2\mu-2}}$ is strictly convex on the neighborhood V₁(0, 0), and because
$$\frac{\partial^{2\mu-2}S_{n_1,n_2,\mu}(f;0,0)}{\partial x^{2\mu-2}}=\frac{\partial^{2\mu-1}S_{n_1,n_2,\mu}(f;0,0)}{\partial x^{2\mu-1}}=\frac{\partial^{2\mu-1}S_{n_1,n_2,\mu}(f;0,0)}{\partial x^{2\mu-2}\,\partial y}=0,$$
it follows that $\frac{\partial^{2\mu-2}S_{n_1,n_2,\mu}(f;x,y)}{\partial x^{2\mu-2}}>0$ for all (x, y) ∈ V₁(0, 0) \ {(0, 0)}. By symmetry, we get $\frac{\partial^{2\mu-2}S_{n_1,n_2,\mu}(f;x,y)}{\partial y^{2\mu-2}}>0$ for all (x, y) ∈ V₂(0, 0) \ {(0, 0)}.

Now, if μ = 2 then we obtain exactly the statement of the lemma. If μ > 2, then by similar reasoning as above we obtain that $\frac{\partial^{2\mu-4}S_{n_1,n_2,\mu}(f;x,y)}{\partial x^{2\mu-4}}$ and $\frac{\partial^{2\mu-4}S_{n_1,n_2,\mu}(f;x,y)}{\partial y^{2\mu-4}}$ are strictly convex in a neighborhood U(0, 0) of (0, 0), and as a conclusion
$$\frac{\partial^{2\mu-4}S_{n_1,n_2,\mu}(f;x,y)}{\partial x^{2\mu-4}}>0,\qquad \frac{\partial^{2\mu-4}S_{n_1,n_2,\mu}(f;x,y)}{\partial y^{2\mu-4}}>0,$$
for all (x, y) ∈ U(0, 0) \ {(0, 0)}. We can continue in this way until we arrive at
$$\frac{\partial^2 S_{n_1,n_2,\mu}(f;x,y)}{\partial x^2}>0,\qquad \frac{\partial^2 S_{n_1,n_2,\mu}(f;x,y)}{\partial y^2}>0,$$
for all (x, y) ∈ V(0, 0) \ {(0, 0)}, which proves the lemma.
Lemma 4.3.6. Let μ ∈ N, μ ≥ 2, and let us denote
$$A:=\frac{\partial^{r}S_{n_1,n_2,\mu}(f;0,0)}{\partial x^{i}\,\partial y^{j}},\qquad r=i+j.$$
We have: A = 0 if r < 2μ or r > 2μ; A = 0 if r = 2μ and both i, j are odd; and A > 0 if r = 2μ and both i, j are even.

Proof. Let P(x, y) = (x² + y²)^μ. Denoting $B:=\frac{\partial^{r}P(0,0)}{\partial x^{i}\,\partial y^{j}}$, it is easy to check that B = 0 if r < 2μ or r > 2μ, B = 0 if r = 2μ and both i, j are odd, and B > 0 if r = 2μ and both i, j are even. But h_{i,j,μ}(x, y) = P(x, y)E(x, y), where E(x, y) is given in the proof of Lemma 4.3.5, and
$$\frac{\partial^{r}h_{i,j,\mu}(x,y)}{\partial x^{i}\,\partial y^{j}}=\sum_{q=0}^{j}\binom{j}{q}\frac{\partial^{q}}{\partial y^{q}}\left(\frac{\partial^{i}P(x,y)}{\partial x^{i}}\right)\frac{\partial^{j-q}E(x,y)}{\partial y^{j-q}},$$
which combined with the above properties of $\frac{\partial^{r}P(0,0)}{\partial x^{i}\,\partial y^{j}}$ and with the method of proof in Lemma 4.3.5, taking into account that
$$\frac{\partial^{r}S_{n_1,n_2,\mu}(f;0,0)}{\partial x^{i}\,\partial y^{j}}=\sum_{i=-n_1}^{n_1}\sum_{j=-n_2}^{n_2}\frac{\partial^{r}h_{i,j,\mu}(0,0)}{\partial x^{i}\,\partial y^{j}}\,f(x_i,y_j),$$
proves the lemma.
Lemma 4.3.7. For μ ∈ N, μ ≥ 2, we have
$$\frac{\partial^2 S_{n_1,n_2,\mu}(f;x,y)}{\partial x^2}\,\frac{\partial^2 S_{n_1,n_2,\mu}(f;x,y)}{\partial y^2}-\left(\frac{\partial^2 S_{n_1,n_2,\mu}(f;x,y)}{\partial x\,\partial y}\right)^2>0,$$
for all (x, y) ∈ V(0, 0) \ {(0, 0)}, where V(0, 0) is a neighborhood of (0, 0) (depending on f, n₁, n₂ and μ).

Proof. Denote
$$H(x,y)=\frac{\partial^2 S_{n_1,n_2,\mu}(f;x,y)}{\partial x^2}\,\frac{\partial^2 S_{n_1,n_2,\mu}(f;x,y)}{\partial y^2}-\left(\frac{\partial^2 S_{n_1,n_2,\mu}(f;x,y)}{\partial x\,\partial y}\right)^2.$$
According to the Remark after the proof of Theorem 4.3.6, we have to prove that
$$\frac{\partial^2 H(x,y)}{\partial x^2}>0,\qquad \frac{\partial^2 H(x,y)}{\partial y^2}>0,\qquad \frac{\partial^2 H(x,y)}{\partial x^2}\,\frac{\partial^2 H(x,y)}{\partial y^2}-\left(\frac{\partial^2 H(x,y)}{\partial x\,\partial y}\right)^2>0,$$
for all (x, y) ∈ V(0, 0) \ {(0, 0)}, and that ∂H(0, 0)/∂x = ∂H(0, 0)/∂y = 0 (because by Lemma 4.3.6 we have H(0, 0) = 0).

Now, by Lemma 4.3.6 we easily get that $\frac{\partial^{r}H(0,0)}{\partial x^{i}\,\partial y^{j}}>0$ only if r = 2μ and both i, j are even, all the other partial derivatives of H at (0, 0) being 0, so we get
$$\frac{\partial^{2\mu}H(0,0)}{\partial x^{2\mu}}>0,\qquad \frac{\partial^{2\mu}H(0,0)}{\partial x^{2\mu-2}\,\partial y^2}>0,\qquad \frac{\partial^{2\mu}H(0,0)}{\partial x^{2\mu-1}\,\partial y}=0,$$
that is, $\frac{\partial^{2\mu-2}H(0,0)}{\partial x^{2\mu-2}}=0$ and $\frac{\partial^{2\mu-2}H(x,y)}{\partial x^{2\mu-2}}$ is strictly convex in a neighborhood of zero. Because
$$\frac{\partial^{2\mu-1}H(0,0)}{\partial x^{2\mu-1}}=\frac{\partial^{2\mu-1}H(0,0)}{\partial x^{2\mu-2}\,\partial y}=0,$$
it follows that (0, 0) is a global minimum point, so
$$\frac{\partial^{2\mu-2}H(x,y)}{\partial x^{2\mu-2}}>0,$$
for all (x, y) ∈ V₁(0, 0) \ {(0, 0)}. Reasoning by symmetry, we get
$$\frac{\partial^{2\mu-2}H(x,y)}{\partial y^{2\mu-2}}>0,$$
for all (x, y) ∈ V₂(0, 0) \ {(0, 0)}. Similarly, by Lemma 4.3.6, we obtain
$$\frac{\partial^{2\mu-2}H(x,y)}{\partial x^{2\mu-4}\,\partial y^{2}}>0,$$
for all (x, y) ∈ V₃(0, 0) \ {(0, 0)}, and consequently
$$\frac{\partial^2}{\partial x^2}\left(\frac{\partial^{2\mu-4}H(x,y)}{\partial x^{2\mu-4}}\right)>0,\qquad \frac{\partial^2}{\partial y^2}\left(\frac{\partial^{2\mu-4}H(x,y)}{\partial x^{2\mu-4}}\right)>0,$$
for all (x, y) ∈ V₃(0, 0) \ {(0, 0)}. Let us denote
$$H_1(x,y)=\frac{\partial^{2\mu-2}H(x,y)}{\partial x^{2\mu-2}}\,\frac{\partial^{2\mu-2}H(x,y)}{\partial x^{2\mu-4}\,\partial y^{2}}-\left(\frac{\partial^{2\mu-2}H(x,y)}{\partial x^{2\mu-3}\,\partial y}\right)^2.$$
By Lemma 4.3.6 we obtain
$$\frac{\partial^2 H_1(0,0)}{\partial x^2}>0,\qquad \frac{\partial^2 H_1(0,0)}{\partial y^2}>0,\qquad \frac{\partial^2 H_1(0,0)}{\partial x\,\partial y}=0,$$
that is, H₁(x, y) is strictly convex in a neighborhood of (0, 0). But H₁(0, 0) = 0 and ∂H₁(0, 0)/∂x = ∂H₁(0, 0)/∂y = 0, so it follows that H₁(x, y) > 0, for all (x, y) ∈ V₄(0, 0) \ {(0, 0)}. As a conclusion, it follows that $\frac{\partial^{2\mu-4}H(x,y)}{\partial x^{2\mu-4}}$ is strictly convex in a neighborhood of zero, and reasoning as above we get
$$\frac{\partial^{2\mu-4}H(x,y)}{\partial x^{2\mu-4}}>0,$$
for all (x, y) ∈ V₅(0, 0) \ {(0, 0)}. Continuing this process by recurrence, we finally arrive at $\frac{\partial^{2}H(x,y)}{\partial x^{2}}>0$ for all (x, y) ∈ V₆(0, 0) \ {(0, 0)}, by reason of symmetry at $\frac{\partial^{2}H(x,y)}{\partial y^{2}}>0$ for all (x, y) ∈ V₇(0, 0) \ {(0, 0)}, and then
$$\frac{\partial^2 H(x,y)}{\partial x^2}\,\frac{\partial^2 H(x,y)}{\partial y^2}-\left(\frac{\partial^2 H(x,y)}{\partial x\,\partial y}\right)^2>0,$$
for all (x, y) ∈ V₈(0, 0) \ {(0, 0)}, which proves the lemma.
Corollary 4.3.5. Let μ ∈ N, μ ≥ 2. If f : [−1, 1] × [−1, 1] → R is strictly convex on [−1, 1] × [−1, 1], then there exists a neighborhood V(0, 0) of (0, 0) (depending on f, μ and n₁, n₂) such that S_{n₁,n₂,μ}(f; x, y) is strictly convex in V(0, 0).

Proof. Immediate by Lemmas 4.3.5–4.3.7.

Remark. For a quantitative estimate of the size of the convexity neighborhood V(0, 0) in Corollary 4.3.5, we need the following.

Lemma 4.3.8. Denoting n = max{n₁, n₂} and
$$A_{i,j}(x,y)=\frac{[(x-x_i)^2+(y-y_j)^2]^{-p}}{T_{n_1,n_2,p}(1;x,y)},$$
we have
$$\left|\frac{\partial^{q}A_{i,j}(x,y)}{\partial x^{l}\,\partial y^{k}}\right|\le C\,n^{q}A_{i,j}(x,y),\qquad q=l+k.$$
Proof. Firstly, let us denote
$$F_{k,p}(x,y)=\frac{\sum_{i=-n_1}^{n_1}\sum_{j=-n_2}^{n_2}(x-x_i)^{k}[(x-x_i)^2+(y-y_j)^2]^{-p-k}}{\sum_{i=-n_1}^{n_1}\sum_{j=-n_2}^{n_2}[(x-x_i)^2+(y-y_j)^2]^{-p}},$$
and let x_u (respectively, y_v) be the closest point x_i (respectively, y_j) to x (respectively, y). We have |F_{k,p}(x, y)| ≤ |x − x_u|^{−k}. Indeed, by |x − x_u| ≤ |x − x_i|, for all i, we get
$$\frac{|x-x_i|^{k}}{[(x-x_i)^2+(y-y_j)^2]^{k}}\le \frac{|x-x_i|^{k}}{(x-x_i)^{2k}}=\frac{1}{|x-x_i|^{k}}\le |x-x_u|^{-k}.$$
Similarly, if we denote
$$G_{k,p}(x,y)=\frac{\sum_{i=-n_1}^{n_1}\sum_{j=-n_2}^{n_2}(y-y_j)^{k}[(x-x_i)^2+(y-y_j)^2]^{-p-k}}{\sum_{i=-n_1}^{n_1}\sum_{j=-n_2}^{n_2}[(x-x_i)^2+(y-y_j)^2]^{-p}},$$
we get |G_{k,p}(x, y)| ≤ |y − y_v|^{−k}. Now, if we denote
$$H_{k,l,p}(x,y)=\frac{\sum_{i=-n_1}^{n_1}\sum_{j=-n_2}^{n_2}(x-x_i)^{k}(y-y_j)^{l}[(x-x_i)^2+(y-y_j)^2]^{-p-k-l}}{\sum_{i=-n_1}^{n_1}\sum_{j=-n_2}^{n_2}[(x-x_i)^2+(y-y_j)^2]^{-p}},$$
then we have
$$|H_{k,l,p}(x,y)|\le |x-x_u|^{-k}\,|y-y_v|^{-l},\qquad k,l\ge 0.$$
Then
$$\frac{\partial A_{i,j}(x,y)}{\partial x}=\frac{-2p(x-x_i)}{(x-x_i)^2+(y-y_j)^2}\,A_{i,j}(x,y)+2p\,A_{i,j}(x,y)\,F_{1,p}(x,y),$$
which immediately implies
$$\left|\frac{\partial A_{i,j}(x,y)}{\partial x}\right|\le C\,|x-x_u|^{-1}A_{i,j}(x,y)\le C\,n\,A_{i,j}(x,y).$$
Similarly,
$$\left|\frac{\partial A_{i,j}(x,y)}{\partial y}\right|\le C\,n\,A_{i,j}(x,y).$$
Then
$$\frac{\partial^2 A_{i,j}(x,y)}{\partial x^2}=-2p\,A_{i,j}(x,y)\left[\frac{1}{(x-x_i)^2+(y-y_j)^2}-\frac{2(x-x_i)^2}{[(x-x_i)^2+(y-y_j)^2]^2}\right]$$
$$-\frac{2p(x-x_i)}{(x-x_i)^2+(y-y_j)^2}\,\frac{\partial A_{i,j}(x,y)}{\partial x}+2p\,F_{1,p}(x,y)\,\frac{\partial A_{i,j}(x,y)}{\partial x}+2p\,A_{i,j}(x,y)\,\frac{\partial F_{1,p}(x,y)}{\partial x},$$
where
$$\frac{\partial F_{1,p}(x,y)}{\partial x}=\frac{\sum_{i=-n_1}^{n_1}\sum_{j=-n_2}^{n_2}[(x-x_i)^2+(y-y_j)^2]^{-p-1}}{\sum_{i=-n_1}^{n_1}\sum_{j=-n_2}^{n_2}[(x-x_i)^2+(y-y_j)^2]^{-p}}-2(p+1)F_{2,p}(x,y)+2p\,[F_{1,p}(x,y)]^2.$$
We immediately get
$$\left|\frac{\partial^2 A_{i,j}(x,y)}{\partial x^2}\right|\le C\,|x-x_u|^{-2}A_{i,j}(x,y)\le C\,n^{2}A_{i,j}(x,y).$$
Also
$$\frac{\partial^2 A_{i,j}(x,y)}{\partial x\,\partial y}=\frac{4p(x-x_i)(y-y_j)}{[(x-x_i)^2+(y-y_j)^2]^2}\,A_{i,j}(x,y)$$
$$-\frac{2p(x-x_i)}{(x-x_i)^2+(y-y_j)^2}\,\frac{\partial A_{i,j}(x,y)}{\partial y}+2p\,F_{1,p}(x,y)\,\frac{\partial A_{i,j}(x,y)}{\partial y}+2p\,A_{i,j}(x,y)\,\frac{\partial F_{1,p}(x,y)}{\partial y},$$
where
$$\frac{\partial F_{1,p}(x,y)}{\partial y}=-2(p+1)H_{1,1,p}(x,y)+2p\,F_{1,p}(x,y)\,G_{1,p}(x,y).$$
It immediately follows
$$\left|\frac{\partial^2 A_{i,j}(x,y)}{\partial x\,\partial y}\right|\le C\,n^{2}A_{i,j}(x,y).$$
Reasoning in this way, we obtain
$$\left|\frac{\partial^3 A_{i,j}(x,y)}{\partial x^3}\right|\le C\,n^{3}A_{i,j}(x,y),\qquad \left|\frac{\partial^3 A_{i,j}(x,y)}{\partial x^2\,\partial y}\right|\le C\,n^{3}A_{i,j}(x,y),$$
and so on,
$$\left|\frac{\partial^{q}A_{i,j}(x,y)}{\partial x^{l}\,\partial y^{k}}\right|\le C\,n^{q}A_{i,j}(x,y),$$
which proves the lemma.
As an immediate consequence, we get

Corollary 4.3.6. For n = max{n₁, n₂} and
$$S_{n_1,n_2,p}(x,y)=\sum_{i=-n_1}^{n_1}\sum_{j=-n_2}^{n_2}A_{i,j}(x,y)\,f(x_i,y_j),$$
we have
$$\left|\frac{\partial^{q}S_{n_1,n_2,p}(x,y)}{\partial x^{l}\,\partial y^{k}}\right|\le C\,n^{q}\,\|f\|,$$
where ‖f‖ is the uniform norm.

The quantitative estimate of convexity in Theorem 4.3.6 is given by the following.

Theorem 4.3.7. Let f : [−1, 1] × [−1, 1] → R be strictly convex on [−1, 1] × [−1, 1]. If n = max{n₁, n₂}, then S_{n₁,n₂,1}(f)(x, y) is strictly convex in V(0, 0) = {x² + y² < d²_{n₁,n₂}}, where
$$|d_{n_1,n_2}|\ge C\left[\sum_{i=0}^{n_1}{}^{*}\sum_{j=0}^{n_2}\frac{2}{x_i^2+y_j^2}\,E(x_i,y_j)\right]^{2}\Big/\,n^{5},$$
with E(x_i, y_j) = [(f(x_i, y_j) + f(−x_i, −y_j) − 2f(0, 0)) + (f(−x_i, y_j) + f(x_i, −y_j) − 2f(0, 0))], and where the sum $\sum^{*}$ means that the index (i, j) of the double sum is different from (0, 0).

Proof. From the proof of Theorem 4.3.6, we immediately obtain
$$\frac{\partial^2 S_{n_1,n_2,1}(f;0,0)}{\partial x^2}=\frac{\partial^2 S_{n_1,n_2,1}(f;0,0)}{\partial y^2}$$
$$=\sum_{i=0}^{n_1}{}^{*}\sum_{j=0}^{n_2}\frac{2}{x_i^2+y_j^2}\,[f(x_i,y_j)+f(-x_i,-y_j)+f(-x_i,y_j)+f(x_i,-y_j)-4f(0,0)]=\sum_{i=0}^{n_1}{}^{*}\sum_{j=0}^{n_2}\frac{2}{x_i^2+y_j^2}\,E(x_i,y_j)>0,$$
where E(x_i, y_j) = (f(x_i, y_j) + f(−x_i, −y_j) − 2f(0, 0)) + (f(−x_i, y_j) + f(x_i, −y_j) − 2f(0, 0)) > 0, from the strict convexity of f. Also, we have $\frac{\partial^2 S_{n_1,n_2,1}(f;0,0)}{\partial x\,\partial y}=0$. Let us denote
$$H(f)(x,y)=\frac{\partial^2 S_{n_1,n_2,1}(f;x,y)}{\partial x^2}\,\frac{\partial^2 S_{n_1,n_2,1}(f;x,y)}{\partial y^2}-\left(\frac{\partial^2 S_{n_1,n_2,1}(f;x,y)}{\partial x\,\partial y}\right)^2.$$
We have H(f)(0, 0) > 0. Let (α_{n₁,n₂}, β_{n₁,n₂}) be the root of H(f)(x, y) nearest to (0, 0), in the sense that the distance d_{n₁,n₂} = ((α_{n₁,n₂})² + (β_{n₁,n₂})²)^{1/2} is minimal over all the roots of H(f)(x, y). Then, for all (x, y) ∈ {(x, y); (x² + y²)^{1/2} < d_{n₁,n₂}}, we necessarily have H(f)(x, y) > 0. By the mean value theorem for bivariate functions, we get
$$H(f)(0,0)=|H(f)(0,0)|=|H(f)(0,0)-H(f)(\alpha_{n_1,n_2},\beta_{n_1,n_2})|\le |d_{n_1,n_2}|\left(\left|\frac{\partial H(f)(\xi,\eta)}{\partial x}\right|+\left|\frac{\partial H(f)(\xi,\eta)}{\partial y}\right|\right).$$
But
$$\frac{\partial H(f)(x,y)}{\partial x}=\frac{\partial^3 S_{n_1,n_2,1}(f;x,y)}{\partial x^3}\,\frac{\partial^2 S_{n_1,n_2,1}(f;x,y)}{\partial y^2}+\frac{\partial^3 S_{n_1,n_2,1}(f;x,y)}{\partial x\,\partial y^2}\,\frac{\partial^2 S_{n_1,n_2,1}(f;x,y)}{\partial x^2}-2\,\frac{\partial^3 S_{n_1,n_2,1}(f;x,y)}{\partial x^2\,\partial y}\,\frac{\partial^2 S_{n_1,n_2,1}(f;x,y)}{\partial x\,\partial y},$$
which by Corollary 4.3.6 implies
$$\left|\frac{\partial H(f)(x,y)}{\partial x}\right|\le \left|\frac{\partial^3 S_{n_1,n_2,1}(f;x,y)}{\partial x^3}\right|\left|\frac{\partial^2 S_{n_1,n_2,1}(f;x,y)}{\partial y^2}\right|+\left|\frac{\partial^3 S_{n_1,n_2,1}(f;x,y)}{\partial x\,\partial y^2}\right|\left|\frac{\partial^2 S_{n_1,n_2,1}(f;x,y)}{\partial x^2}\right|+2\left|\frac{\partial^3 S_{n_1,n_2,1}(f;x,y)}{\partial x^2\,\partial y}\right|\left|\frac{\partial^2 S_{n_1,n_2,1}(f;x,y)}{\partial x\,\partial y}\right|\le C\,n^{5},$$
with C depending on f, but independent of n. Analogously, $\left|\frac{\partial H(f)(x,y)}{\partial y}\right|\le C\,n^{5}$. We get 0 < H(f)(0, 0) ≤ C|d_{n₁,n₂}|n⁵, i.e.,
$$|d_{n_1,n_2}|\ge C\left[\sum_{i=0}^{n_1}{}^{*}\sum_{j=0}^{n_2}\frac{2}{x_i^2+y_j^2}\,E(x_i,y_j)\right]^{2}\Big/\,n^{5},$$
which proves the theorem.

Remark. A version of Theorem 4.3.7 for arbitrary p ≥ 2, i.e., for S_{n₁,n₂,p}(f)(x, y), still remains an open question, because of the complicated reasoning in the proofs of Lemmas 4.3.5, 4.3.6 and 4.3.7.
4.4 Bibliographical Remarks and Open Problems

Theorem 4.2.1, Corollaries 4.2.1, 4.2.2 and Theorem 4.2.2 are in Anastassiou–Gal [7]; Theorems 4.3.2, 4.3.3, Lemmas 4.3.1–4.3.3 and Corollary 4.3.4 are in Anastassiou–Gal [8]. All the other results of this chapter (except those where the authors are mentioned) are from Gal–Gal [55].

Open Problem 4.4.1. Prove Theorems 4.3.5 and 4.3.7 for all p ≥ 2.

Open Problem 4.4.2. For the bivariate tensor product operator generated by the univariate Balász–Shepard operator defined on the semi-axis, considered by Theorem 2.3.5, prove qualitative and quantitative shape-preserving properties.

Open Problem 4.4.3. For the bivariate tensor product operator generated by the univariate Shepard–Grünwald operator introduced by Criscuolo–Mastroianni [31] (see Open Problem 1.6.4, too), prove qualitative and quantitative shape-preserving properties.

Open Problem 4.4.4. For the bivariate Shepard-kind operator defined by
$$S_{n_1,n_2,\mu}(f;x,y)=\frac{T_{n_1,n_2,\mu}(f;x,y)}{T_{n_1,n_2,\mu}(1;x,y)}\quad\text{if }(x,y)\ne(x_i,y_j),\qquad S_{n_1,n_2,\mu}(f;x_i,y_j)=f(x_i,y_j),$$
where μ > 0 is fixed, f : D → R, D ⊂ R², D = [−1, 1] × [−1, 1], x_i = i/n₁, i = −n₁, . . . , n₁; y_j = j/n₂, j = −n₂, . . . , n₂, and
$$T_{n_1,n_2,\mu}(f;x,y)=\sum_{i=-n_1}^{n_1}\sum_{j=-n_2}^{n_2}\frac{f(x_i,y_j)}{[(x-x_i)^2+(y-y_j)^2]^{\mu}},$$
prove properties concerning the preservation of the monotonicity of f in neighborhoods of some points.

Open Problem 4.4.5. For the local bivariate tensor product Shepard operators mentioned in Open Problem 3.5.4 (previous chapter), prove the shape-preserving properties.
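The operator in Open Problem 4.4.4 is easy to experiment with numerically. The following short Python sketch (entirely illustrative — the function names and the test function are our own choices, not code from the book) evaluates $S_{n_1,n_2,\mu}(f;x,y)=T_{n_1,n_2,\mu}(f;x,y)/T_{n_1,n_2,\mu}(1;x,y)$ on the knots x_i = i/n₁, y_j = j/n₂ and checks two immediate consequences of the definition: the operator interpolates f at the knots, and, the weights being positive and summing to 1, its values stay between the minimum and maximum of f over the knots.

```python
def shepard(f, x, y, n1=4, n2=4, mu=2.0):
    """Evaluate S_{n1,n2,mu}(f; x, y) = T(f; x, y) / T(1; x, y)."""
    num = den = 0.0
    for i in range(-n1, n1 + 1):
        for j in range(-n2, n2 + 1):
            xi, yj = i / n1, j / n2
            d2 = (x - xi) ** 2 + (y - yj) ** 2
            if d2 == 0.0:
                return f(xi, yj)          # at a knot: S(f; x_i, y_j) = f(x_i, y_j)
            w = d2 ** (-mu)               # weight [(x - x_i)^2 + (y - y_j)^2]^(-mu)
            num += w * f(xi, yj)
            den += w
    return num / den

f = lambda x, y: x * x + y * y + x * x * y * y   # a strictly convex test function

# near a knot, S approaches the interpolated value f(x_i, y_j)
assert abs(shepard(f, 0.25 + 1e-7, -0.5) - f(0.25, -0.5)) < 1e-6
# S is a convex combination of the data values, here contained in [0, 3]
assert 0.0 <= shepard(f, 0.1, 0.2) <= 3.0
```

Because the weights are positive and normalized, any monotonicity preservation of the kind asked for in the open problem can at least be explored on such examples before attempting a proof.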
A Appendix: Graphs of Shepard Surfaces
Due to the usefulness of their properties in approximation theory, data fitting, CAGD, fluid dynamics, and the theory of curves and surfaces, the Shepard operators (in univariate and bivariate variants) have been the object of much work. Let us mention here the following papers: [1]–[4], [7], [8], [10]–[17], [23]–[40], [42], [45]–[53], [55]–[58], [60], [62], [65], [69], [70]–[75], [77], [86], [91], [93], [96], [98], [104]–[109].

In this appendix we present pictures of various kinds of bivariate Shepard operators, which illustrate their shape-preserving properties. I thank Professor Radu Trimbitas from the "Babeş-Bolyai" University, Faculty of Mathematics and Informatics, Cluj-Napoca, Romania, who made all the graphs in this section.

Except for type 4, where the original Shepard–Lagrange operator is defined on 100 random knots uniformly distributed in [−1, 1] × [−1, 1] (which were then sorted in ascending order with respect to their relative distance), all the other types of bivariate Shepard operators in this section are defined on equidistant knots in the bidimensional interval [−1, 1] × [−1, 1], i.e., knots of the form x_k = k/n, k = −n, . . . , 0, . . . , n and y_j = j/m, j = −m, . . . , 0, . . . , m, with n = m = 5.

Recall that in the previous sections we have considered nine main types of Shepard operators, given by the following formulas.

Type 1. Original Shepard operator:
$$S_{n,p}(f)(x,y)=\sum_{i=-n}^{n}s_{i,n,p}(x,y)\,f(x_i,y_i),$$
where
$$s_{i,n,p}(x,y)=\frac{[(x-x_i)^2+(y-y_i)^2]^{-p}}{\sum_{k=-n}^{n}[(x-x_k)^2+(y-y_k)^2]^{-p}},\qquad p>2,\ p\in\mathbb{N}.$$
Type 2. Tensor product Shepard operator:
$$S_{n,m,2p,2q}(f)(x,y)=\sum_{i=-n}^{n}\sum_{j=-m}^{m}s_{i,n,2p}(x)\,s_{j,m,2q}(y)\,f(x_i,y_j),\qquad p,q\ge 2,$$
where
$$s_{i,n,2p}(x)=\frac{(x-x_i)^{-2p}}{\sum_{k=-n}^{n}(x-x_k)^{-2p}}\qquad\text{and}\qquad s_{j,m,2q}(y)=\frac{(y-y_j)^{-2q}}{\sum_{k=-m}^{m}(y-y_k)^{-2q}}.$$
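Each univariate factor s_{i,n,2p} is positive, and the family sums to 1 at every non-knot point, so the tensor product operator is a convex combination of the data values. A quick numerical sketch of this (illustrative code, assuming the equidistant knots x_k = k/n used in this appendix):

```python
# Univariate Shepard basis s_{i,n,2p}(x) = (x - x_i)^(-2p) / sum_k (x - x_k)^(-2p),
# on equidistant knots x_k = k/n (here n = 5, p = 2, as in this appendix).

def basis(i, x, n=5, p=2):
    weights = [(x - k / n) ** (-2 * p) for k in range(-n, n + 1)]
    return weights[i + n] / sum(weights)

x = 0.37                                   # any point that is not a knot
vals = [basis(i, x) for i in range(-5, 6)]

assert all(v > 0 for v in vals)            # positivity
assert abs(sum(vals) - 1.0) < 1e-12        # partition of unity
assert vals.index(max(vals)) == 2 + 5      # mass concentrates at the nearest knot x_2 = 0.4
```

The bivariate basis s_{i,n,2p}(x)s_{j,m,2q}(y) inherits both properties by taking products.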
Type 3. Shepard–Gal–Szabados operator:
$$S_{n,m,p}(f)(x,y)=\frac{\sum_{i=-n}^{n}\sum_{j=-m}^{m}[(x-x_i)^2+(y-y_j)^2]^{-p}\,f(x_i,y_j)}{\sum_{i=-n}^{n}\sum_{j=-m}^{m}[(x-x_i)^2+(y-y_j)^2]^{-p}},$$
where p > 2, p ∈ N.

Type 4. Original Shepard–Lagrange operator:
$$S_{n,p,L_m}(f)(x,y)=\sum_{i=-n}^{n}s_{i,n,p}(x,y)\,L^{i}_{m}(f)(x,y),$$
where
$$s_{i,n,p}(x,y)=\frac{[(x-x_i)^2+(y-y_i)^2]^{-p}}{\sum_{k=-n}^{n}[(x-x_k)^2+(y-y_k)^2]^{-p}},\qquad p>2,\ p\in\mathbb{N},$$
and $L^{i}_{m}(f)(x,y)$ is the bivariate Lagrange interpolation polynomial of degree m (i.e., of the form $\sum_{j+k\le m}a_{j,k}x^{j}y^{k}$, where $a_{j,k}\in\mathbb{R}$), uniquely defined by the conditions $L_m(f)(x_{i+\mu},y_{i+\mu})=f(x_{i+\mu},y_{i+\mu})$, μ = 1, . . . , m − 1, m = (n + 1)(n + 2)/2, m < n, $(x_{n+\mu},y_{n+\mu})=(x_{\mu},y_{\mu})$ (see, e.g., Coman–Trimbitas [27], page 9).

Type 5. Tensor product Shepard–Lagrange operator:
$$S_{n,m,2p,2q,n_1,n_2}(f)(x,y)=\sum_{i=-n}^{n}\sum_{j=-m}^{m}s_{i,n,2p}(x)\,s_{j,m,2q}(y)\,L^{i,j}_{n_1,n_2}(f)(x,y),\qquad p,q\ge 2,$$
where
$$s_{i,n,2p}(x)=\frac{(x-x_i)^{-2p}}{\sum_{k=-n}^{n}(x-x_k)^{-2p}},\qquad s_{j,m,2q}(y)=\frac{(y-y_j)^{-2q}}{\sum_{k=-m}^{m}(y-y_k)^{-2q}},$$
and
$$L^{i,j}_{n_1,n_2}(f)(x,y)=\sum_{\nu=0}^{n_1}\sum_{\mu=0}^{n_2}\frac{u_i(x)}{(x-x_{i+\nu})\,u_i'(x_{i+\nu})}\cdot\frac{v_j(y)}{(y-y_{j+\mu})\,v_j'(y_{j+\mu})}\,f(x_{i+\nu},y_{j+\mu}),$$
with $u_i(x)=(x-x_i)\cdots(x-x_{i+n_1})$, $v_j(y)=(y-y_j)\cdots(y-y_{j+n_2})$.

Type 6. Tensor product Shepard–Lagrange–Taylor operator:
$$S_{n,m,2p,2q,n_1,n_2}(f)(x,y)=\sum_{i=-n}^{n}\sum_{j=-m}^{m}s_{i,n,2p}(x)\,s_{j,m,2q}(y)\,LT^{i,j}_{n_1,n_2}(f)(x,y),\qquad p,q\ge 2,$$
where
$$s_{i,n,2p}(x)=\frac{(x-x_i)^{-2p}}{\sum_{k=-n}^{n}(x-x_k)^{-2p}},\qquad s_{j,m,2q}(y)=\frac{(y-y_j)^{-2q}}{\sum_{k=-m}^{m}(y-y_k)^{-2q}},$$
and
$$LT^{i,j}_{n_1,n_2}(f)(x,y)=\sum_{\nu=0}^{n_1}\sum_{\mu=0}^{n_2}\frac{u_i(x)}{(x-x_{i+\nu})\,u_i'(x_{i+\nu})}\cdot\frac{(y-y_j)^{\mu}}{\mu!}\,\frac{\partial^{\mu}f(x_{i+\nu},y_j)}{\partial y^{\mu}}.$$

Remark. Evidently, by changing the variable x with y we get the "tensor product Shepard–Taylor–Lagrange" kind of operator.

Type 7. Original Shepard–Taylor operator:
$$S_{n,p,T_{m_{-n}},\ldots,m_{n}}(f)(x,y)=\sum_{i=-n}^{n}s_{i,n,p}(x,y)\,T^{i}_{m_i}(f)(x,y),$$
where
$$s_{i,n,p}(x,y)=\frac{[(x-x_i)^2+(y-y_i)^2]^{-p}}{\sum_{k=-n}^{n}[(x-x_k)^2+(y-y_k)^2]^{-p}},\qquad p>2,\ p\in\mathbb{N},$$
and
$$T^{i}_{m_i}(f)(x,y)=\sum_{r+s\le m_i}\frac{(x-x_i)^{r}}{r!}\,\frac{(y-y_i)^{s}}{s!}\,\frac{\partial^{r+s}f(x_i,y_i)}{\partial x^{r}\,\partial y^{s}}.$$

Type 8. Tensor product Shepard–Taylor operator:
$$S_{n,m,2p,2q,n_{-n},\ldots,n_{n},m_{-m},\ldots,m_{m}}(f)(x,y)=\sum_{i=-n}^{n}\sum_{j=-m}^{m}s_{i,n,2p}(x)\,s_{j,m,2q}(y)\,TT^{i,j}_{n_i,m_j}(f)(x,y),\qquad p,q\ge 2,$$
where
$$s_{i,n,2p}(x)=\frac{(x-x_i)^{-2p}}{\sum_{k=-n}^{n}(x-x_k)^{-2p}},\qquad s_{j,m,2q}(y)=\frac{(y-y_j)^{-2q}}{\sum_{k=-m}^{m}(y-y_k)^{-2q}},$$
and
$$TT^{i,j}_{n_i,m_j}(f)(x,y)=\sum_{\nu=0}^{n_i}\sum_{\mu=0}^{m_j}\frac{(x-x_i)^{\nu}}{\nu!}\,\frac{(y-y_j)^{\mu}}{\mu!}\,\frac{\partial^{\nu+\mu}f(x_i,y_j)}{\partial x^{\nu}\,\partial y^{\mu}}.$$

Type 9. Shepard–Gal–Szabados–Taylor operator:
$$S_{n,m,p}(f)(x,y)=\frac{\sum_{i=-n}^{n}\sum_{j=-m}^{m}[(x-x_i)^2+(y-y_j)^2]^{-p}\,TT^{i,j}_{n_i,m_j}(f)(x,y)}{\sum_{i=-n}^{n}\sum_{j=-m}^{m}[(x-x_i)^2+(y-y_j)^2]^{-p}},$$
where
$$TT^{i,j}_{n_i,m_j}(f)(x,y)=\sum_{\nu=0}^{n_i}\sum_{\mu=0}^{m_j}\frac{(x-x_i)^{\nu}}{\nu!}\,\frac{(y-y_j)^{\mu}}{\mu!}\,\frac{\partial^{\nu+\mu}f(x_i,y_j)}{\partial x^{\nu}\,\partial y^{\mu}}.$$
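The local factor TT^{i,j}_{n_i,m_j} in Types 8 and 9 is simply a bivariate Taylor polynomial centered at the knot (x_i, y_j); in particular it reproduces exactly any polynomial whose degree in each variable does not exceed the chosen orders. A small sketch (illustrative; the partial derivatives of the chosen f(x, y) = x²y² are entered by hand rather than computed symbolically):

```python
from math import factorial

def d2poly(order, t):
    """order-th derivative of t^2 evaluated at t: t^2, 2t, 2, then 0."""
    return (t * t, 2.0 * t, 2.0)[order] if order <= 2 else 0.0

def taylor_tt(xi, yj, x, y, ni=2, mj=2):
    """TT^{i,j}_{ni,mj}(f)(x, y) for f(x, y) = x^2 * y^2, centered at (xi, yj)."""
    total = 0.0
    for nu in range(ni + 1):
        for mu in range(mj + 1):
            # for f = x^2 y^2 the mixed partial factorizes into 1-D derivatives
            partial = d2poly(nu, xi) * d2poly(mu, yj)
            total += ((x - xi) ** nu / factorial(nu)
                      * (y - yj) ** mu / factorial(mu) * partial)
    return total

# exactness: the (2, 2)-order expansion reproduces x^2 y^2 at any point
assert abs(taylor_tt(0.2, -0.4, 0.7, 0.9) - (0.7 * 0.9) ** 2) < 1e-12
```

The Shepard weights then blend these local polynomials exactly as the plain Shepard operator blends point values.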
The Shepard surfaces generated by the Shepard operators of types 2, 3, 4, 6, 7 and 8 will be illustrated for the following five examples of functions f(x, y), called in the sequel "test functions."

Example 1. Let f₁(x, y) = xy, x, y ∈ [−1, 1]. It is a bidimensional (upper) monotone function, i.e., it satisfies the condition
$$\frac{\partial^2 f(x,y)}{\partial x\,\partial y}\ge 0,\qquad \forall x,y\in[-1,1].$$
Another example of this kind is f₄(x, y) = x³y³.

Example 2. Let f₂(x, y) = x²y². It is a "strictly double convex" function, i.e., it satisfies the condition
$$\frac{\partial^4 f(x,y)}{\partial x^2\,\partial y^2}>0,\qquad \forall x,y\in[-1,1].$$

Example 3. Let f₃(x, y) = x²y² + x² + y². It is an ordinary strictly convex bivariate function, i.e., it satisfies the conditions
$$\frac{\partial^2 f(x,y)}{\partial x^2}>0,\qquad \frac{\partial^2 f(x,y)}{\partial y^2}>0,\qquad \frac{\partial^2 f(x,y)}{\partial x^2}\,\frac{\partial^2 f(x,y)}{\partial y^2}>\left(\frac{\partial^2 f(x,y)}{\partial x\,\partial y}\right)^2,$$
for all x, y ∈ (−1, 1).

Example 4. Let f₅(x, y) = 9 exp(x + y) − xy. Simple calculations show that f₅ is simultaneously bidimensional (upper) monotone, strictly double convex and ordinary strictly convex.

The graphs of the first four functions, f₁, f₂, f₃, f₄, are given in Figure A.1. Figures A.2, A.3 and A.4 contain the graphs of various Shepard operators corresponding to these four test functions: Figure A.2 shows the graphs for the Shepard–Gal–Szabados operators (type 3, p = 3); Figure A.3 gives the graphs for the tensor product Shepard–Lagrange–Taylor operators (type 6, p = 2, q = 2); and Figure A.4 shows the graphs for the original Shepard–Taylor operators (type 7, p = 4). Finally, in Figure A.5 we give the graphs for Shepard operators of types 2, 4 and 8, corresponding to the function f₅(x, y) = 9 exp(x + y) − xy; the graph of f₅ itself is given in Figure A.5(a).
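The shape conditions claimed for these test functions are easy to confirm numerically. The following sketch (an illustrative check using hand-computed derivatives, not code from the book) verifies them on a grid strictly inside (−1, 1) × (−1, 1):

```python
from math import exp

# grid points (-0.9, -0.6, ..., 0.9) in each coordinate, strictly inside (-1, 1)^2
pts = [(-0.9 + 0.3 * a, -0.9 + 0.3 * b) for a in range(7) for b in range(7)]

for x, y in pts:
    # f1 = x*y:      d^2 f / dx dy = 1 >= 0        (bidimensional upper monotone)
    # f2 = x^2*y^2:  d^4 f / dx^2 dy^2 = 4 > 0     (strictly double convex)
    # f3 = x^2*y^2 + x^2 + y^2: ordinary strict convexity
    fxx, fyy, fxy = 2 * y * y + 2, 2 * x * x + 2, 4 * x * y
    assert fxx > 0 and fyy > 0 and fxx * fyy > fxy ** 2
    # f5 = 9*exp(x + y) - x*y satisfies all three conditions simultaneously
    e = 9 * exp(x + y)
    assert e - 1 > 0                        # d^2 f5 / dx dy = 9 e^{x+y} - 1
    assert e > 0 and e * e > (e - 1) ** 2   # Hessian determinant of f5
```

For f₅ the Hessian determinant reduces to 81e^{2(x+y)} − (9e^{x+y} − 1)² = 18e^{x+y} − 1, which is positive on the whole square since e^{x+y} ≥ e^{−2}.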
Fig. A.1. Graphs of the test functions, f1 , f2 , f3 , f4
Fig. A.2. Graphs for Shepard–Gal–Szabados operators, Type 3, p = 3
Fig. A.3. Graphs for tensor product Shepard–Lagrange–Taylor operators, type 6, p = 2, q = 2
Fig. A.4. Graphs for original Shepard–Taylor operators, type 7, p = 4
Fig. A.5. Graphs for f5 and Shepard operators of types 2, 4 and 8
References
1. Akima H (1978) A method of bivariate interpolation and smooth surface fitting for irregularly distributed data points. ACM Transactions on Mathematical Software, 4, 146–159 2. Akima H (1984) On estimating partial derivatives for bivariate interpolation of scattered data. Rocky Mountain J. Math., 14, No. 1, 41–52 3. Allasia G (1995) A class of interpolating positive linear operators: theoretical and computational aspects. In: Approximation Theory: Wavelets and Applications (Singh SP ed), Kluwer, Dordrecht, 1–36 4. Allasia G (2003) Simultaneous interpolation and approximation by a class of multivariate positive operators. Numerical Algorithms (in print) 5. Anastassiou GA, Cottin C, Gonska HH (1991) Global smoothness of approximating functions. Analysis, 11, 43–57 6. Anastassiou GA, Gal SG (2000) Moduli of Continuity and Global Smoothness Preservation. Birkhäuser Boston 7. Anastassiou GA, Gal SG (2001) Partial shape-preserving approximation by bivariate Hermite–Fejér polynomials. Computers and Mathematics with Applic., 42, 57–64 8. Anastassiou GA, Gal SG (2001) Partial shape-preserving approximation by bivariate Shepard operators. Computers and Mathematics with Applic., 42, 47–56 9. Bari NK, Stechkin SB (1956) Best approximation and differential properties of two conjugate functions. (Russian) Trudy Moskov. Mat. Obshch., 5, 483–522 10. Barnhill RE (1977) Representation and approximation of surfaces. In: Mathematical Software, III, (Rice JR ed), Academic Press, New York, 69–120 11. Barnhill RE (1983) A survey of the representation and design of surfaces. IEEE Computer Graphics and Applications, 3, No. 7, 9–16 12. Barnhill RE (1985) Surfaces in Computer Aided Geometric Design: A survey with new results. Comput. Aided Geom. Design, 2, 1–17 13. Barnhill RE, Dube RP, Little FF (1983) Properties of Shepard’s surfaces. Rocky Mountain J. Math., 11, 365–381 14. Barnhill RE, Little FF (1984) Three- and four-dimensional surfaces. Rocky Mountain J. Math., 14, 77–102 15. 
Barnhill RE, Piper BR, Rescorla KL (1987) Interpolation to arbitrary data on a surface. In: Geometric Modelling: Algorithms and New Trends (Farin GE ed), SIAM, 281–290 16. Barnhill RE, Mansfield L (1972) Sard kernel theorems on triangular and rectangular domains with extensions and applications to finite-element error. Technical Report, Nr. 11, Department of Mathematics, Brunel University
17. Barnhill RE, Stean SE (1984) Multistage trivariate surfaces. Rocky Mountain J. Math., 14, No. 1, 103–118 18. Berman DL (1950) On some linear operators. (in Russian), Dokl. Akad. Nauk SSSR, 73, 249–252 19. Berman DL (1952) On a class of linear operators. (in Russian) Dokl. Akad. Nauk SSSR, 85, 13–16 20. Berman DL (1958) Divergence of the Hermite–Fejér interpolation processes. (in Russian) Uspekhi Mat. Nauk, 13(80), 143–148 21. Cavaretta AS, Sharma, JA, Varga RS (1980) Hermite–Birkhoff interpolation in the nth roots of unity. Trans. Amer. Math. Soc., 259, 621–628 22. Cheney EW (1966) Introduction to Approximation Theory. McGraw-Hill, New York 23. Coman Gh (1987) The remainder of certain Shepard-type interpolation formulas. Studia Univ. “Babe¸s-Bolyai," 32, No. 4, 24–32 24. Coman Gh (1988) Shepard–Taylor interpolation. In: Itinerant Seminar on Functional Equations, Approximation and Convexity, “Babe¸s–Bolyai" University, Cluj-Napoca, 5– 14 25. Coman Gh, Tambulea L (1988) A Shepard–Taylor approximation formula. Studia Univ. “Babe¸s-Bolyai," 33, No. 3, 65–73 26. Coman Gh, Trimbitas RT (2001) Combined Shepard univariate operators. East J. Approx., 7, No. 4, 471–483 27. Coman Gh, Trimbitas RT (1999) Bivariate Shepard interpolation. Research Seminar, Seminar of Numerical and Statistical Calculus, Babe¸s-Bolyai University, Faculty of Mathematics and Computer Sciences, 41–83 28. Coman Gh, Trimbitas RT (1997) Shepard operators of Lagrange-type. Studia Univ. “Babes–Bolyai," 42, No. 1, 75–83 29. Coman Gh, Trimbitas RT (2000) Bivariate Shepard interpolation in MATLAB. In: Proceeding of “Tiberiu Popoviciu" Itinerant Seminar, Cluj-Napoca, 41–56 30. Coman Gh, Trimbitas RT (2001) Univariate Shepard–Birkhoff interpolation. Revue d’Analyse Numer. Théor. Approx., Cluj-Napoca, 30, No. 1, 15–24 31. Criscuolo G, Mastroianni G (1993) Estimates of the Shepard interpolatory procedure. Acta Math. Hung., 61, No.1–2, 79–91 32. 
Della Vecchia B (1996) Direct and converse results by rational operators. Constr. Approx., 12, 271–285 33. Della Vecchia B, Mastroianni G (1991) Pointwise simultaneous approximation by rational operators. J. Approx. Theor., 65, 140–150 34. Della Vecchia B, Mastroianni G (1991) Pointwise estimates of rational operators based on general distribution of knots. Facta Universitatis (Nis), 6, 63–72 35. Della Vecchia B, Mastroianni G (1995) On function approximation by Shepard-type operators—a survey. In: Approximation Theory, Wavelets and Applications (Maratea 1994), vol. 454, Kluwer, Dordrecht, 335–346 36. Della Vecchia B, Mastroianni G, Szabados J (1996) Balász–Shepard operators on infinite intervals. Ann. Univ. Sci. Budapest Sect. Comp., 16, 93–102 37. Della Vecchia B, Mastroianni G, Szabados J (1998) Uniform approximation on the semiaxis by rational operators. East J. Approx., 4, 435–458 38. Della Vecchia B, Mastroianni G, Szabados J (1999) Weighted uniform approximation on the semiaxis by rational operators. J. of Inequal. and Appl., 4, 241–264 39. Della Vecchia B, Mastroianni G, Totik V (1990) Saturation of the Shepard operators. Approx. Theor. Appl., 6, 76–84
40. Della Vecchia B, Mastroianni G, Vértesi P (1996) Direct and converse theorems for Shepard rational approximation. Numer. Funct. Anal. Optimization, 17, No. 5-6, 537– 561 41. DeVore RA, Lorentz GG (1993) Constructive Approximation, Springer-Verlag, Berlin 42. Farwig R (1986) Rate of convergence of Shepard’s global interpolation formula. Math. Comput., 46, 577–590 43. Fejér L (1930) Über Weierstrassche Approximation besonders durch Hermitesche Interpolation. Mathemathische Annalen, 102, 707–725 44. Fleming W (1977) Functions of Several Variables. Second edition, Springer-Verlag, New York 45. Foley TA (1987) Interpolation and approximation of 3-D and 4-D scattered data. Comput. Math. Applic., 13, 711–740 46. Foley TA (1990) Interpolation of scattered data on a spherical domain. In: Algorithms for Approximation, II, (Mason JC, Cox MG eds), Chapman Hall, 303–310 47. Foley TA (1986) Scattered data interpolation and approximation with error bounds. Comput.-Aided Geom. Design, 3, 163–177 48. Foley TA (1984) Three-stage interpolation to scattered data. Rocky Mountain J. Math., 14, No. 1, 141–149 49. Foley TA, Nielson GM (1980) Multivariate interpolation to scattered data using delta iteration. In: Approximation Theory, III, (Cheney EW ed), Academic Press, New York, 419–424 50. Franke R (1982) Scattered data interpolation: tests of some methods. Math. Comput., 38, 181–200 51. Franke R (1977) Locally determined smooth interpolation at irregularly spaced points in several variables. JIMA, 19, 471–482 52. Franke R, Nielson GM (1983) Surface approximation with imposed conditions. In: Surfaces in CAGD, (Barnhill R, Boehm W, eds), North-Holland, Amsterdam, 135–146 53. Franke R, Nielson GM (1980) Smooth interpolation of large sets of scattered data. Int. J. Numer. Methods Eng., 15, 1691–1704 54. Gal SG (2004) Remarks on the approximation of normed spaces valued functions by some linear operators. In: Proceeding of the 6th Romanian–German Research Seminar, RoGer (to appear). 55. 
Gal SG, Gal CS, Partial shape preserving approximation by three kinds of bivariate Shepard operators, submitted. 56. Gal SG, Szabados J (1999) On the preservation of global smoothness by some interpolation operators. Studia Sci. Math. Hung., 35, 393–414 57. Gal SG, Szabados J (2002) Partial shape preserving approximation by interpolation operators. Functions, Series, Operators, Alexits Memorial Conference, 1999 (Leindler L, Schipp F, Szabados J eds), Budapest, 225–246 58. Gal SG, Szabados J (2003) Global smoothness preservation by bivariate interpolation operators. Analysis in Theory and Appl., 19, No. 3, 193–208 59. Gonska HH (1983) A note on pointwise approximation by Hermite–Fejér type interpolation polynomials. In: Functions, Series, Operators, vol. I, II (Proc. Int. Conf. Budapest 1980; Nagy BSz, Szabados J eds), Amsterdam, New York, 525–537 60. Gonska HH (1983) On approximation in spaces of continuous functions, Bull. Austral. Mat. Soc., 28, 411–432 61. Gonska HH, Jetter K (1986) Jackson–type theorems on approximation by trigonometric and algebraic pseudopolynomials. J. Approx. Theory, 48, 396–406 62. Gordon WJ, Wixom JA (1978) Shepard’s method of “metric interpolation" to bivariate and multivariate interpolation. Math. Comput., 32, 253–264
63. Haussmann W, Pottinger P (1977) On the construction and convergence of multivariate interpolation operators. J. Approx. Theory, 19, 205–221 64. Jiang G (1990) The degree of approximation of Grünwald interpolation polynomial operators. Pure Appl. Math., 6, No. 2, 82–84 65. Katkauskayte AY (1991) The rate of convergence of a Shepard estimate in the presence of random interference. Soviet J. Automat. and Inform. Sciences, 24, No. 5, 19–24 66. Kratz W, Stadtmüller U (1988) On the uniform modulus of continuity of certain discrete approximation operators. J. Approx. Theory, 54, 326–337 67. Kroó A, Révész S (1999) On Bernstein and Markov-type inequalities for multivariate polynomials on convex bodies. J. Approx. Theory, 99, 139–152 68. Kryloff N, Stayermann E (1922) Sur quelques formules d’interpolation convergentes pour toute fonction continue. Bull. Classe Sci. Phys. Math. Acad. Sci. Ukraine, 1, 13–16 69. Lawson CL (1984) C 1 surface interpolation for scattered data on a sphere. Rocky Mountain J. Math., 14, No. 1, 177–202 70. Little FF (1983) Convex combination surfaces. In: Surfaces and Computer-Aided Geometric Design, (Barnhill R, Boehm W eds), North-Holland, Amsterdam, 99–107 71. Marcus S (1956) Functions monotone de deux variables. Rev. Roum. Math. Pures Appl., 1(2), 17–36 72. Mastroianni G, Szabados J (1993) Jackson order of approximation by Lagrange interpolation. Suppl. Rend. Circ. Mat. Palermo, ser. II, No. 33, 375–386 73. Mastroianni G, Szabados J (1997) Balász–Shepard operators on inifnite intervals. II, J. Approx. Theory, 90, 1–8 74. McLain DH (1976) Two-dimensional interpolation from random data. Comput. J., 19, 178–181; Errata, Comput. J., 19, 384 75. Nguyen KA, Rossi I, Truhlar DG (1995) A dual-level Shepard interpolation method for generating potential energy surfaces for dynamics calculations. J. Chem. Phys., 103, 5522 76. Nicolescu M (1952) Contribution to a hyperbolic kind analysis of plane. (in Romanian) St. Cerc. Mat. 
(Bucharest), 3, No.1-2, 7–51 77. Nielson GM, Franke R (1984) A method for construction of surfaces under construction. Rocky Mountain J. Math., 14, No. 1, 203–221 78. Popov VA, Szabados J (1984) On the convergence and saturation of the Jackson polynomials in Lp spaces. Approx. Theory Appl., 1, No. 1, 1–10 79. Popoviciu T (1960-1961) Remarques sur la conservation du signe et de la monotonie par certains polynomes d’interpolation d’une fonctions d’une variable. Ann. Univ. Sci. Budapestinensis, III-IV, 241–246 80. Popoviciu T (1961) Sur la conservation de l’allure de convexité d’une fonction par ses polynomes d’interpolation. Mathematica (Cluj), vol. 3, No. 2, 311–329 81. Popoviciu T (1962) Sur la conservation par le polynome d’interpolation de L. Fejér, du signe ou de la monotonie de la fonction. An. Stiint. Univ. Jassy, VIII, 65–84 82. Popoviciu T (1934) Sur quelques propriétés des fonctions d’une de/deux variables reélles. Mathematica (Cluj), VIII, 1–85 83. Prestin J, Xu Y (1994) Convergence rate for trigonometric interpolation of nonsmooth functions. J. Approx. Theory, 77, 113–122 84. Rubel LA, Shields AI, Taylor BA (1975) Mergelyan sets and the modulus of continuity of analytic functions. J. Approx. Theory, 15, 23–40 85. Runck PO, Szabados J, Vértesi P (1989) On the convergence of the differentiated trigonometric projection operators. Acta Sci. Math. (Szeged), 53, 287–293 86. Schumaker LL (1976) Fitting surfaces to scattered data. In: Approximation Theory, II, (Lorentz GG, Chui CK, Schumaker LL eds), Academic Press, New York, 203–268
87. Sendov B (1968) Approximation with Respect to Hausdorff Metric. Thesis (in Russian), Moscow
88. Sendov B, Popov VA (1983) Averaged Moduli of Smoothness. Bulgarian Mathematical Monographs, Vol. 4, Publishing House of the Bulgarian Academy of Sciences, Sofia
89. Sharma A, Szabados J (1988) Convergence rates for some lacunary interpolators on the roots of unity. Approx. Theory and Its Applic., 4, No. 2, 41–48
90. Sheng B, Cheng G (1995) On the approximation by generalized Grünwald interpolation in Orlicz spaces. J. Baoji Coll. Arts Sci. Nat. Sci., 15, No. 4, 7–13
91. Shepard D (1968) A two-dimensional interpolation function for irregularly spaced points. Proceedings 1968 Assoc. Comput. Machinery National Conference, 517–524
92. Shisha O, Mond B (1965) The rapidity of convergence of the Hermite–Fejér approximation to functions of one or several variables. Proc. Amer. Math. Soc., 16, No. 6, 1269–1276
93. Somorjai G (1976) On a saturation problem. Acta Math. Acad. Sci. Hungar., 32, 377–381
94. Stechkin SB (1951) On the order of best approximation of periodic functions (in Russian). Izv. Akad. Nauk SSSR, 15, 219–242
95. Szabados J (1973) On the convergence and saturation problem of the Jackson polynomials. Acta Math. Acad. Sci. Hungar., 24, 399–406
96. Szabados J (1976) On a problem of R. DeVore. Acta Math. Acad. Sci. Hungar., 27, 219–223
97. Szabados J (1987) On the convergence of the derivatives of projection operators. Analysis, 7, 349–357
98. Szabados J (1991) Direct and converse approximation theorems for the Shepard operator. Approx. Theory Appl. (N.S.), 7, 63–76
99. Szabados J, Vértesi P (1989) On simultaneous optimization of norms of derivatives of Lagrange interpolation polynomials. Bull. London Math. Soc., 21, 475–481
100. Szabados J, Vértesi P (1990) Interpolation of Functions. World Scientific, Singapore
101. Szabados J, Vértesi P (1992) A survey on mean convergence of interpolatory processes. J. Comp. Appl. Math., 43, 3–18
102. Szegö G (1939) Orthogonal Polynomials. Amer. Math. Soc. Colloq. Publ., vol. XXIII, New York
103. Timan AF (1994) Theory of Approximation of Functions of a Real Variable. Dover, New York
104. Trimbitas RT (2002) Univariate Shepard–Lagrange interpolation. Kragujevac J. Math., 24, 85–94
105. Trimbitas RT (2002) Local bivariate Shepard interpolation. Rend. Circ. Mat. Palermo, Serie II, Suppl. 68, 865–878
106. Trimbitas MG, Trimbitas RT (2002) Range searching and local Shepard interpolation. In: Mathematical Analysis and Approximation Theory, The 5th Romanian–German Seminar on Approximation Theory and Its Applications, RoGer, Burg Verlag, Sibiu, 279–292
107. Vértesi P (1996) Saturation of the Shepard operator. Acta Math. Hungar., 72, No. 4, 307–317
108. Xie T, Zhang RJ, Zhou SP (1998) Three conjectures on Shepard interpolatory operators. J. Approx. Theory, 93, No. 3, 399–414
109. Zhou X (1998) The saturation class of Shepard operators. Acta Math. Hungar., 80, No. 4, 293–310
110. Zygmund A (1955) Trigonometric Series. Dover, New York
111. Zygmund A (1959) Trigonometric Series, 2nd edition, vol. II. Cambridge Univ. Press, London, New York, p. 30
Index
Lp-average modulus of continuity, 39
algebraic projection operators, 26, 37
average modulus of continuity, 31
Bögel modulus of continuity, xii, 71, 74, 77, 86
Balázs–Shepard operator, 37, 67, 69
bell-shaped, 59
bidimensional monotone, 93, 94, 99
bidimensional upper monotone, 93
bivariate Euclidean modulus of continuity, xii
bivariate Hermite–Fejér polynomial, 72, 91
bivariate Lagrange interpolation, 82, 87
bivariate monotonicity and convexity, 98
bivariate Shepard operator, 75
bivariate tensor product Shepard operator, 101
bivariate uniform modulus of continuity, xii
Côtes–Christoffel numbers, 49, 95
Chebyshev polynomial, 2
complex Hermite–Fejér interpolation polynomials on the roots of unity, 22
degree of exactness, 17
global smoothness preservation, xi, 74
Grünwald interpolation polynomials, 37, 44, 56
Hermite–Fejér polynomial, 4, 9, 39, 44–46, 74, 95
Jackson interpolation trigonometric polynomials, 4, 30, 39, 58
Kryloff–Stayermann, 52, 69
Lagrange interpolation, 12, 29, 44
local modulus of continuity, 31
mean convergence of Lagrange interpolation, 39
original Shepard operator, 87, 129
original Shepard–Lagrange operator, 130
original Shepard–Taylor operator, 88, 131
partial bivariate moduli of continuity, 71
periodically monotone, 59
point of strong preservation, 44, 67
point of weak preservation, 44
shape-preservation property, xii
Shepard interpolation operator, 16, 59
Shepard operator on the semi-axis, 21
Shepard–Gal–Szabados operator, 130
Shepard–Gal–Szabados–Taylor operator, 89, 132
Shepard–Grünwald operators, 38, 69
Shepard–Lagrange operator, 17, 39, 67
Shepard–Taylor operator, 40, 68
strictly convex, 105
strictly double convex, 95, 97, 100, 101
strongly convex, 51
tensor product Shepard operator, 77, 129
tensor product Shepard operator on the semi-axis, 87
tensor product Shepard–Lagrange operator, 87, 130
tensor product Shepard–Lagrange–Taylor operator, 88, 131
tensor product Shepard–Taylor operator, 89, 132
third kind of Shepard operator, 117
trigonometric projection operators, 34, 37
uniform modulus of continuity, 31
weighted L1w-modulus of continuity, 9
weighted modulus of continuity, 39