Lecture Notes in Economics and Mathematical Systems Managing Editors: M. Beckmann and W. Krelle
319
Dinh The Luc
Theory of Vector Optimization
Springer-Verlag
Editorial Board
H. Albach, M. Beckmann (Managing Editor), P. Dhrymes, G. Fandel, G. Feichtinger, J. Green, W. Hildenbrand, W. Krelle (Managing Editor), H. P. Künzi, K. Ritter, R. Sato, U. Schittko, P. Schönfeld, R. Selten
Managing Editors
Prof. Dr. M. Beckmann, Brown University, Providence, RI 02912, USA
Prof. Dr. W. Krelle, Institut für Gesellschafts- und Wirtschaftswissenschaften der Universität Bonn, Adenauerallee 24-42, D-5300 Bonn, FRG
Author
Dr. Dinh The Luc, Institute of Mathematics, P.O. Box 631, Bo Ho, 10000 Hanoi, Vietnam
ISBN 3-540-50541-5 Springer-Verlag Berlin Heidelberg New York
ISBN 0-387-50541-5 Springer-Verlag New York Berlin Heidelberg
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in other ways, and storage in data banks. Duplication of this publication or parts thereof is only permitted under the provisions of the German Copyright Law of September 9, 1965, in its version of June 24, 1985, and a copyright fee must always be paid. Violations fall under the prosecution act of the German Copyright Law.
© Springer-Verlag Berlin Heidelberg 1989
Printed in Germany
To My Mother and the Memory of My Father
Preface

These notes grew out of a series of lectures given by the author at the University of Budapest during 1985-1986. Additional results have been included which were obtained while the author was at the University of Erlangen-Nürnberg under a grant of the Alexander von Humboldt Foundation.

Vector optimization has two main sources: the economic equilibrium and welfare theories of Edgeworth (1881) and Pareto (1906), and the mathematical background of ordered spaces of Cantor (1897) and Hausdorff (1906). Later, the game theory of Borel (1921) and von Neumann (1926) and the production theory of Koopmans (1951) also contributed to this area. However, only in the fifties, after the publication of Kuhn and Tucker's paper (1951) on the necessary and sufficient conditions for efficiency, and of Debreu's paper (1954) on valuation equilibrium and Pareto optimum, was vector optimization recognized as a mathematical discipline. The stretching development of this field began later, in the seventies and eighties. Today there are a number of books on vector optimization. Most of them are concerned with methodology and applications; few of them offer a systematic study of the theoretical aspects. The aim of these notes is to provide a unified background of vector optimization, with the emphasis on nonconvex problems in infinite dimensional spaces ordered by convex cones.

The notes are arranged into six chapters. The first chapter presents preliminary material; it contains a study of nonconvex analysis with respect to convex cones. In Chapter 2 we introduce the main concepts of vector optimization, such as preference orders, efficiency, vector optimality etc., and study the existence of efficient points. Chapter 3 deals with vector optimization problems with set-valued objectives and constraints; necessary and sufficient conditions are established in terms of generalized derivatives. In Chapter 4 we present a scalarization method to convert a vector problem into a scalar problem, and stability properties of the solution sets of vector problems are addressed. Chapter 5 is devoted to duality. Together with the classical approaches to duality such as Lagrangean and conjugate duality, we also provide an axiomatic duality and an approach via theorems of the alternative; these approaches are especially appropriate for nonconvex problems. In the last chapter, the structure of the efficient point sets of linear, convex and quasiconvex problems is investigated. In the references we include only the papers which are directly
involved with the topics of our consideration and those of recent publications on the subject.

Acknowledgements. The author would like to express his deep thanks to Professor A. Prékopa, who was his PhD supervisor at the Hungarian Academy of Sciences and who suggested giving a course of lectures on multiobjective optimization at the University of Budapest. Further thanks go to Professor J. Jahn of the University of Erlangen-Nürnberg for reading the manuscript and for useful suggestions. Furthermore, the author is grateful to the Computer and Automation Institute of the Hungarian Academy of Sciences and the Institute for Applied Mathematics of the University of Erlangen-Nürnberg for the hospitality and the working facilities he received during his stay there. These notes would never have been completed without a grant of the Alexander von Humboldt Foundation, to whom the author is especially indebted. The last, but not least, thanks are addressed to the Institute of Mathematics, Hanoi, for the permission and support to carry out the research abroad.
Erlangen, West Germany, May 1988
Contents

Chapter 1: Analysis over Cones  1
1. Convex cones  1
2. Recession cones  8
3. Cone closed sets  13
4. Cone monotonic functions  18
5. Cone continuous functions  22
6. Cone convex functions  29
7. Set-valued maps  33

Chapter 2: Efficient Points and Vector Optimization Problems  37
1. Binary relations and partial orders  37
2. Efficient points  39
3. Existence of efficient points  46
4. Domination property  53
5. Vector optimization problems  57

Chapter 3: Nonsmooth Vector Optimization Problems  62
1. Contingent derivatives  63
2. Unconstrained problems  67
3. Constrained problems  70
4. Differentiable case  74
5. Convex case  77

Chapter 4: Scalarization and Stability  80
1. Separation by monotonic functions  81
2. Scalar representations  86
3. Completeness of scalarization  95
4. Stability  101

Chapter 5: Duality  109
1. Lagrangean duality  110
2. Conjugate duality  117
3. Axiomatic duality  120
4. Duality and alternative  129

Chapter 6: Structure of Optimal Solution Sets  135
1. General case  135
2. Linear case  137
3. Convex case  139
4. Quasiconvex case  148

Comments  155
References  161
Index  171
Chapter 1
Analysis over Cones
This chapter is of preliminary character. It contains a study of sets and functions with respect to cones in infinite dimensional spaces. First, we give definitions concerning convex cones and properties of cones with special structure, such as correct cones and cones with convex bounded base. In Section 2, we introduce the concept of recession cones of nonconvex sets, which plays an important role in nonconvex optimization. The next four sections deal with sets and functions in spaces equipped with convex cones. The last section provides some definitions and results about set-valued maps which will be needed in the study of optimization problems with set-valued data.
1. CONVEX CONES
Let E be a topological vector space over the field of real numbers. For a set C ⊆ E, the following notations will be used throughout: clC, intC, riC, Cᶜ and conv(C) stand for the closure, interior, relative interior, complement and convex hull of C in E, respectively. Besides, l(C) denotes the set C ∩ (−C).
Definition 1.1  A subset C of E is said to be a cone if tc ∈ C for every c ∈ C and every nonnegative number t. It is said to be a convex set if for any c, d ∈ C, the line segment [c, d] = {tc + (1 − t)d : 0 ≤ t ≤ 1} belongs to C. Further, suppose that C is a convex cone in E; then we say that it is
1) pointed if l(C) = {0},
2) acute if its closure is pointed,
3) strictly supported if C \ l(C) is contained in an open homogeneous half space,
4) correct if (clC) + C \ l(C) ⊆ C.
Example 1.2  Below we give some examples to clarify the definition.

1. Let Rⁿ be the n-dimensional Euclidean space; then the nonnegative orthant Rⁿ₊, consisting of all vectors of Rⁿ with nonnegative coordinates, is convex, closed, acute, strictly supported and correct as well. The set {0} is also such a cone, but it is a trivial cone. The set composed of zero and of the vectors with positive first coordinate is a pointed, strictly supported, correct cone, but it is not acute. Any closed homogeneous half space is a correct strictly supported cone, but it is not pointed.

2. Let
C = {(x, y, z) ∈ R³ : x > 0, y > 0, z > 0} ∪ {(x, y, z) : x ≥ 0, y ≥ 0, z = 0}.
Then C is convex, acute but not correct.

3. Let Ω be the vector space of all sequences x = {xₙ} of real numbers. Let
C = {x = {xₙ} ∈ Ω : xₙ ≥ 0 for all n}.
Then C is a convex pointed cone. We cannot say whether it is correct or acute because no topology has been given on the space.

4. Ubiquitous cone. Let Ω₁ be the subspace of Ω consisting of the sequences x = {xₙ} such that xₙ = 0 for all but a finite number of choices of n. It is a normed space if we provide it with the norm
‖x‖ = max{|xₙ| : n = 1, 2, ...}.
Let C be the cone composed of zero and of the sequences whose last nonzero term is positive. Then C is pointed. It is called a ubiquitous cone (Holmes 1975) because the linear space spanned by C is the whole Ω₁. This cone is neither strictly supported nor correct.

5. Lexicographic cone. Let the space be Ω₁ as in the previous example and let C be the set composed of zero and of sequences whose first nonzero term is positive. This is a convex cone, called lexicographic. It is pointed, but neither correct nor strictly supported.

6. Let Lₚ[0, 1], 0 < p < 1, be the space of functions x(·) on [0, 1] which are integrable with respect to Lebesgue measure μ and satisfy ∫₀¹ |x|ᵖ dμ < ∞. A metrizable topology on this space is determined by the basis of neighborhoods of zero
{x ∈ Lₚ[0, 1] : (∫₀¹ |x|ᵖ dμ)^(1/p) < 1/n}, n = 1, 2, ....
Let C be the set of functions which are nonnegative almost everywhere. Then the cone C is convex and closed, hence correct (Proposition 1.4). Later on we shall see that this cone is not strictly supported.
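For instance, one can verify directly that the cone C of Example 1.2 (2) is not correct: the vector a = (0, 0, 1) belongs to clC, being the limit of the points (1/n, 1/n, 1) ∈ C, and c = (1, 0, 0) ∈ C \ l(C), while

a + c = (1, 0, 1) ∉ C,

since this vector lies neither in the open octant nor in the plane z = 0. Hence (clC) + C \ l(C) is not contained in C.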
Correct cones will play an important role in the next chapter; therefore we provide here some criteria for a cone to be correct. In the sequel C is presumed to be convex.

Proposition 1.3  C is correct if and only if (clC) + C \ l(C) ⊆ C \ l(C).
Proof. Since C \ l(C) ⊆ C, if the relation stated in the proposition holds, then the cone is obviously correct. Now, suppose that C is a convex correct cone. Observe that since l(C) is a linear subspace and C is convex, for each a, b ∈ C the relation a + b ∈ l(C) implies that a, b ∈ l(C). By this, the following relations hold:

C \ l(C) + C \ l(C) = C \ l(C);   C + C \ l(C) ⊆ C \ l(C).

With these we can rewrite the inclusion in the definition of correct cones as

(clC) + C \ l(C) = (clC) + C \ l(C) + C \ l(C) ⊆ C + C \ l(C) ⊆ C \ l(C),

completing the proof. •
Proposition 1.4  C is correct if one of the following conditions holds:
i) C is closed;
ii) C \ l(C) is nonempty open;
iii) C is composed of zero and the intersection of certain open and closed homogeneous half spaces in E.

Proof. The first case is obvious. Now, if C \ l(C) is nonempty open, then the interior intC of C is nonempty and intC = C \ l(C). Hence,

(clC) + C \ l(C) = (clC) + (intC) ⊆ C.

Thus C is correct. Finally, assume that

C = {0} ∪ (∩{H_λ : λ ∈ Λ}),

where H_λ is either a closed or an open half space of E. If all of the H_λ are closed, then this is equivalent to the first case. Therefore we may assume that at least one of the half spaces is open. In that case, l(C) = {0} and a vector b ∈ E belongs to C \ l(C) if and only if it belongs to every H_λ, λ ∈ Λ. Further, it is clear that a ∈ clC if and only if a ∈ clH_λ for all λ ∈ Λ. Now, since

(clH_λ) + H_λ ⊆ H_λ,

whether H_λ be open or closed, we conclude that a + b ∈ C whenever a ∈ clC, b ∈ C \ l(C), completing the proof. •
Definition 1.5  Given a cone C in the space E, we say that a set B ⊆ E generates the cone C, and write C = cone(B), if

C = {tb : b ∈ B, t ≥ 0}.

If in addition B does not contain zero and for each c ∈ C, c ≠ 0, there are unique b ∈ B, t > 0 such that c = tb, then we say that B is a base of C. Whenever B is a finite set, cone(conv(B)) is called a polyhedral cone.

In the literature (Peressini 1967) sometimes a base is required to be convex closed. According to our definition, every nontrivial cone has a base. Later on we shall impose other requirements on the base if needed.
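For instance, in Rⁿ the standard simplex B = {x ∈ Rⁿ₊ : x₁ + ... + xₙ = 1} is a closed convex bounded base of the nonnegative orthant Rⁿ₊: every nonzero c ∈ Rⁿ₊ is written uniquely as c = tb with t = c₁ + ... + cₙ > 0 and b = c/t ∈ B. Since B = conv{e₁, ..., eₙ}, where e₁, ..., eₙ are the unit coordinate vectors, Rⁿ₊ = cone(conv{e₁, ..., eₙ}) is a polyhedral cone.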
Remark 1.6  It is clear that in finite dimensional spaces a cone has a closed convex bounded base if and only if it is pointed and closed. This fact, however, is not true in infinite dimensions, as will be demonstrated by the example of Remark 4.6.

Proposition 1.7  (Jameson 1970) If the space E is Hausdorff, then a cone with a closed convex bounded base is closed and pointed, hence correct.
Proof. We show first that C is closed. For, let {c_α} be a net from C converging to c. Since B is a base, there exist a net {b_α} from B and a net {t_α} of positive numbers such that c_α = t_α b_α. We claim that {t_α} is bounded. In fact, suppose this is not the case, that is, lim t_α = ∞. Then, since the space is Hausdorff, the net {b_α = c_α/t_α} converges to 0. Moreover, since B is closed, we arrive at the contradiction 0 = lim b_α ∈ B. In this way, we may assume that {t_α} converges to some t₀ ≥ 0. If t₀ = 0, then by the boundedness of B, lim t_α b_α = 0. Hence c = 0 and of course c ∈ C. If t₀ > 0, we may assume that t_α > ε for all α and some positive ε. Now, b_α = c_α/t_α converges to c/t₀ and again by the closedness of B, the vector c/t₀ ∈ B. Hence c ∈ C and C is closed. The pointedness of C is obvious. •

Below we furnish two other characterizations of cones with closed convex bounded base. The first one describes a local property: for two vectors x and y, the intersection (x + C) ∩ (y − C) must be small enough if they are sufficiently close to each other. The second one describes a global property: if two vectors of the cone are far from the origin, then so is their sum. The space E is presumed to be separated.
Proposition 1.8  Assume that C has a closed convex bounded base. Then for any neighborhood W of the origin in E, there exists another neighborhood, say V, such that

(V + C) ∩ (V − C) ⊆ W.   (1.1)

Proof. Let B be a base meeting the requirements of the proposition. First we prove that there is a balanced absorbing neighborhood U of zero in E such that

B ∩ (U − C) = ∅.   (1.2)
In fact, since B is closed and does not contain zero, there is a neighborhood U of zero, which may be assumed to be balanced, absorbing and symmetric, such that

B ∩ U = ∅.   (1.3)
We show that this neighborhood U yields the relation (1.2). Suppose to the contrary that the relation does not hold, i.e. there are some b ∈ B, u ∈ U and c ∈ C with b = u − c. Since B is a base, one can find a nonnegative number t and a vector b' ∈ B such that c = tb'. We have

u = b + c = b + tb'.

Consider the vector u/(1 + t). On one hand it belongs to U since the latter set is balanced. On the other hand it belongs to B because it is a convex combination of the vectors b and b' of the convex set B. In other words, u/(1 + t) ∈ U ∩ B, contradicting the relation (1.3). Thus, the relation (1.2) holds.

Now, let W be an arbitrary neighborhood of zero in E. We construct a set V with the property (1.1). Let t₀ be a positive number which is smaller than 1 and such that the following relation holds:

t₀B ⊆ W/2.   (1.4)

Such a number exists because B is bounded. Further, we may assume that
U ⊆ W/2.   (1.5)
We set

V = (t₀/2)U,   (1.6)

and verify that this neighborhood yields the relation (1.1). Indeed, let a = v + c, where v ∈ V, c ∈ C. Supposing that a ∉ W, we show that

a ∉ V − C,

and by this the lemma will be proven. In virtue of the relations (1.5) and (1.6),

c ∉ W/2.   (1.7)

Since the set B is a base of C, so is t₀B. Moreover, it follows from (1.4) and (1.7) that there is a number t₁ > 1 with c = t₁t₀b for some b ∈ B. This together with (1.2) implies that c ∉ t₀U − C. Consequently,

c + u ∉ (t₀/2)U − C, for every u ∈ V.

In particular,

a = c + v ∉ V − C.

The proof is complete. •
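To illustrate Proposition 1.8 in the simplest situation, take E = R² with the max-norm, C = R²₊ and W the open ball of radius ε. With V the ball of radius ε/2, any point z ∈ (V + C) ∩ (V − C) satisfies, coordinatewise, z_i ≥ (v₁)_i ≥ −ε/2 and z_i ≤ (v₂)_i ≤ ε/2 for suitable v₁, v₂ ∈ V, so |z_i| ≤ ε/2 and z ∈ W.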
Proposition 1.9  Suppose that C has a closed convex bounded base. Then for every bounded neighborhood V of zero, there is another one, say U, such that

x, y ∈ C ∩ Uᶜ implies x + y ∈ Vᶜ.   (1.8)

Proof. Let B be a closed convex bounded base of C and suppose to the contrary that (1.8) does not hold, i.e. there exists a bounded neighborhood V₀ of zero such that for every neighborhood U of zero one can find x, y ∈ C ∩ Uᶜ with x + y ∈ V₀. Since B is bounded, we fix a neighborhood U which contains B and consider the family of neighborhoods {nU : n = 1, 2, ...}. For every n > 0, there are some xₙ, yₙ ∈ C ∩ (nU)ᶜ with xₙ + yₙ ∈ V₀. As B is a base of C, there are some aₙ, bₙ ∈ B and positive numbers tₙ, sₙ such that xₙ = tₙaₙ, yₙ = sₙbₙ. It can be assumed that tₙ and sₙ are greater than n. Indeed, since xₙ/n ∈ C ∩ Uᶜ, if xₙ/n = ta for some a ∈ B, t ≥ 0, then t must be greater than 1 because B ⊆ U; hence xₙ = nta where the quantity nt is greater than n. Further, the points (tₙaₙ + sₙbₙ)/(tₙ + sₙ) belong to V₀/(tₙ + sₙ), n = 1, 2, .... On the other hand, they belong to B due to the convexity of B. It is clear that

lim (tₙaₙ + sₙbₙ)/(tₙ + sₙ) = 0.

This and the closedness of B imply 0 ∈ B, a contradiction. •
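In R²₊ with the max-norm, for example, the global property is immediate: if x, y ∈ R²₊, then ‖x + y‖ ≥ max{‖x‖, ‖y‖}, so whenever x and y lie outside the ball U of radius n, their sum lies outside U as well. Proposition 1.9 extends this behaviour to every cone with a closed convex bounded base.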
Next, let E* and E' denote the algebraic and topological dual spaces of E. The algebraic and topological polar cones C* and C' of C are:

C* = {ξ ∈ E* : ξ(x) ≥ 0, for all x ∈ C},
C' = {ξ ∈ E' : ξ(x) ≥ 0, for all x ∈ C}.

Denote also

C*⁺ = {ξ ∈ E* : ξ(x) > 0, for all x ∈ C \ l(C)},
C'⁺ = {ξ ∈ E' : ξ(x) > 0, for all x ∈ C \ l(C)}.

It should be noted that the first two cones are nonempty and convex, for instance they contain zero, but the last two cones are not necessarily nonempty.
In Example 1.2 (1), (Rⁿ₊)* = (Rⁿ₊)' = Rⁿ₊. In Example 1.2 (3), C* = C ∩ Ω₁. In Examples 1.2 (4) and (6), C* = {0}, which shows that the cone is not strictly supported. Below we provide a condition under which polar cones are nontrivial.
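For instance, in E = R² the cone C = {(x, y) : y ≥ |x|} satisfies C* = C' = {(a, b) : b ≥ |a|}: the inequality ax + by ≥ 0 at the extreme rays (1, 1) and (−1, 1) forces b ≥ |a|, and conversely b ≥ |a| gives ax + by ≥ (b − |a|)y ≥ 0 on C. Moreover, C'⁺ = {(a, b) : b > |a|}, which is nonempty, in accordance with Proposition 1.10 below (the segment joining (−1, 1) and (1, 1) is a convex compact base of C).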
Proposition 1.10  (Peressini 1967, Borwein 1980) In a vector space E, a convex set B is a base of C if and only if there exists a vector ξ ∈ C*⁺ such that

B = {c ∈ C : ξ(c) = 1}.

Furthermore, if E is locally convex separated and C has a convex weakly compact base, then C'⁺ is nonempty, and if in addition E' is metrizable or barreled, then intC' is also nonempty.

Proof. It is clear that the set B = {c ∈ C : ξ(c) = 1} with ξ ∈ C*⁺ is a convex base of the cone C. Conversely, for a given convex base B, consider the family of all linear manifolds in E containing B but not zero. By Zorn's lemma there exists a maximal one which is a hyperplane, i.e. it is the set ξ⁻¹(1) for some ξ ∈ E*. Since B ⊆ ξ⁻¹(1), it follows that ξ ∈ C*⁺ and indeed B = {c ∈ C : ξ(c) = 1}.

Further, if E is locally convex separated and B is a convex weakly compact base of C, then in view of a separation theorem there is a vector ξ₀ ∈ E' such that

ξ₀(b) > 0, for all b ∈ B.   (1.9)

The relation above implies that ξ₀ ∈ C'⁺. If in addition E' is metrizable or barreled, then by Proposition 36.3 of Treves (1967) the topology of E' is the same as the Mackey topology τ(E', E), and consequently (1.9) gives the relation ξ(b) > 0, for all b ∈ B and for all ξ belonging to some neighborhood of ξ₀ in E'. In other words, ξ₀ ∈ intC'. •
We recall that ξ ∈ C' is an extreme vector if there are no two linearly independent vectors ξ₁, ξ₂ ∈ C' such that ξ = ξ₁ + ξ₂.

Proposition 1.11  Assume that E is a separated locally convex space and C is a closed convex cone such that C' has a weakly compact convex base. Then x ∉ C if and only if there is an extreme vector ξ of C' such that ξ(x) < 0.

Proof. If ξ(x) < 0 for some ξ ∈ C', then obviously x cannot belong to C. Conversely, if x ∉ C, then one can separate {x} and C by a nonzero vector ξ ∈ C', i.e. ξ(x) < 0. Let B' be a weakly compact convex base of C'. The inequality obtained above shows that inf{ξ(x) : ξ ∈ B'} < 0. Since B' is weakly compact convex, the function f(ξ) = ξ(x) attains its infimum at an extreme point of B', which is also an extreme vector of C'. •
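In the simplest case E = Rⁿ and C = Rⁿ₊, the proposition reduces to a familiar fact: C' = Rⁿ₊, its convex compact base conv{e₁, ..., eₙ} has the coordinate functionals e₁, ..., eₙ as extreme points, and a vector x fails to lie in Rⁿ₊ exactly when eᵢ(x) = xᵢ < 0 for some i.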
2. RECESSION CONES
Recession cones, sometimes called asymptotic cones, were first introduced for convex sets (Steinitz 1913, 1914, 1916, Fenchel 1951, Dieudonné 1966 and Rockafellar 1970), and then they were extended to arbitrary sets in infinite dimensional spaces (Dedieu 1978). Here we give a new definition of recession cones and provide several properties of these cones. Let us add the point ∞ to the space E which is, as in the previous section, a separated topological vector space over the reals.
Definition 2.1  A nonempty set V in E is said to be an open (resp., closed) neighborhood of ∞ if it is open (resp., closed) and its complement Vᶜ in E is a bounded set.

In the sequel let B denote the filter of neighborhoods of ∞.

Definition 2.2  The recession cone of a nonempty set X ⊆ E is the cone

Rec(X) = ∩{clcone(X ∩ V) : V ∈ B}.

In this definition we set

clcone(X ∩ V) = {0}

if the intersection X ∩ V is empty. If the space is normable, then there exists a bounded balanced absorbing neighborhood W of zero, and in this case direct verification shows that

Rec(X) = ∩{clcone(X ∩ (nW)ᶜ) : n = 1, 2, ...}.

Relative to the set X, we also define two other cones:

X∞ = ∩{cl(0, α]X : α > 0}, where (0, α]X = {tx : x ∈ X, 0 < t ≤ α};

As(X) = {a ∈ E : there are a net {x_α} from X and a net {t_α} of positive numbers converging to 0 such that a = lim t_α x_α}.
X oo = As(X) In Dedieu (1978), the cone
X~
~
Rec(X).
is called the asymptotic cone of J\.
Lemma 2.4
A vector a E E belongs to Rec(X) and each neighborhood U of zero in B,
cone(a + U)
if
and only
if for
each V E 13
n X n V i- 0.
Proof. By definition, a E Rec(X) if and only if a E clcone(XnV) for each V E 13, which is equivalent to the relation: (a + U)
n cone(X n V) -# 0
or cqui va1ently~
cone(a + U)
n (X n V) =F 0,
for each neighborhood U of zero and each V E B.•
The following properties are true: 1) Rec(X) = {OJ if X is bounded; 2) Rec(X) ~ Rec(Y) if X ~ y ~ E; 3) Rec(tX) = sign(t)Rec(X), each scalar t, where sign(t) is 1,0 and-1 if t is positive, zero and negative, respectively; 4) Rec(X) = Rec(clX); 5) Rec(X) = clX if X is a cone; 6) Rec(X U Y) = Rec(X) U Rec(Y) for each X, Y ~ E; 7) Rec(X n Y) ~ Rec(X) n Rec(Y) for each X, Y ~ E}~ 8) conv(Rec(X)) ~ Rec(conv(X)).
Proposition 295
10
Proof. Invoke to Definition 2.2 and Lemma 2.4.•
Remark 2.6 In the case ,vhere X and Yare convex closed with X nonempty, the inclusion in 7) of Proposition 2.3 becomes the equality: Rec(X n Y) = Rec(X)
n Y being
n Rec(Y).
However, this is not the case when the sets are arbitrary. The opposite inclusion of 8), in general, docs not hold even in finite dimensional spaces. Furthermore, it is not difficult to prove that a subset of a finite dimensional space is unbounded if and only if its recession cone is nontrivial. In infinite dimensions, the recession cone of an unbounded set is not necessarily nontrivial unless the set happens to have special structure at the infinity or to be convex. We say that X satisfies the condition (C B) if there exists a neighborhood v;, of 00 such that the cone clcone(X n V o ) has a compact base; and it satisfies the condition (CD) if for each a E R~c(X), there is a bounded set A ~ E S'l.Lch that (ta + A) n X is nonempty for all t ~ o.
Definition 2.7
Remark 2.8 Direct verification shows that the condition (OB) holds for every set in finite dimensional spaces. In infinite dimensional spaces if a set is convex, locally compact, then both of the conditions (CB) and (CD) are satisfied. Lemma 2.9 Assume that X satisfies the condition (CB). Then X is bounded if and only if Rec(X) consists of the zero vector alone. Proof. The "only if' part is the property 1) of J:>roposition 2.5. As for the converse assertion suppose that X is unbounded. Then for each neighborhood V E B, there is a point Xv E X n V. Without loss of generality we may assume that V ~ Vo , where Va is as in Definition 2.7. Let B be a compact base of the cone clcone(X n Vo)~ Then there exists a positive number t and a point bv E B such t~at Xv = tb v • Since B is compact, the set {b v : V E B} h&.~ at least one point of accumulation, say a E B. 1 he vector a is nonzero since B does not contain zero and using Lemma 2.4 we can verify that a E Rec(X), completing the proof.1
Lemma 2.10
Assume that X is unbounded and it satisfies the condition (CB). Then for every filter U on X which is finer than the filter generated by the basis {XnV:VEB,V~Vo},
where Va is defined by the condition (CB), there exists a nonzero vector v Rec(X) such that v E c[cone(U), for every U E U.
E
11
Proof. Let B be a compact base of the cone clcone(X clement U of the :filter U, clcone(U) ~ clcone(X n V o ),
n Yo).
Since for each
the set B u = B n clcone(U) is a compact base of clcone(U). Indeed, it is compact as the intersection of a compact set and a closed set; further, it does not contain zero since neither docs B. Moreover, for each x, x E clcone(U) ~ clcone(X n V o ) there are some b E B and a nonnegative number t so that x = tb. The vector b obviously belongs to B u , Le~ cone(Bu ) = clcone(U). The family {B-u : U E U} forms a basis of filter on B. Since B is compact, this family has at least onc accumulation point, say v. It is clear that v E B and v # o. lvlorcover, v E clcone(U) for each U E U, completing the proof.•
In vie\v of Remark 2.8, a useful case of Lemma 2.10 occurs v;hcn the set is closed convex, locally compact. The rcsult of this case was established by Dieudonnc (1966).
Lemma 2.11
If a set A
~
E is bounded, then
Rec(X + A) = Rec(X). Proof. Invoke to Defmition 2.2 and Lemma 2.4.-
In the remainder of this section for the sake of simplicity we assume that E is a normable space. Theorem 2 . 12 FOT every nonempty subsets X and Y of the space E, we have the following re lations:
1) Rec(X),Rec(Y)
~
Rec(X + Y);
+-
2) Rec(X) + Rec(Y) ~ Rec(X Y) if the condition (C1) holds for at least one of the two sets; 3) Rec(X + Y) ~ Rec(X) + Rec(Y} if the condition (CB) holds for at least one of the two sets and if Rec(X) n -Rec(Y) = {O}. Proof. For the first assertion, let x be a fixed point of X. Then x+Y~X+Y.
In virtue of Proposition 2.5 and Lemma 2.11,
12
Rec(Y)
= Rec(x + Y) ~ Rec(X + Y).
Rec(X)
~ Rec(X
The inclusion: is established by the same
+ Y)
way~
For the second assertion~let a E Rec(X), b E Rec(Y). We have to prove the relation
+ Y)~
a + b E Rec(X
(2.1)
Suppose that X satisfies the condition (CD). By Remark 2~3, there are some Yec E Y, i Q > 0 ,vith lim t a = 0 and lim toyo = b. By the condition (C D), there exists a bounded net {a o } such that XQ
= (l/io.)a
+ aa EX.
I t is clear that and
(2~1)
a + b = limta(x a is proven.
+ Yet),
For the last assertion, suppose that X satisfies the condition (CB) and let Xo: E X, Ya. E Y and to > 0,
a E Rec(X + Y) ~ i.e. a = lim to:(xo: + Ycx), for some limt = o. If the net {x a } is bounded, then Q
a = limto.Y", E Rec(X) + Rec(Y), completing the proof. If not, since the filter generated by that net is finer tha:L the filter generated by the basis
{X n V : V E B, V ~ Vol, in virtue of Lemma 2.9, there exists a nonzero vector z E Rec(X) such that
z \vhere A{3
= lim A/3Xo:",
> O,limA,B = Pap
0 and {xO"IJ} is a subnet of {XCi}. Denoting
= to.~ /)../3
,
we may assume by taking a subnet if necessary, that limpCtp = Po , where Po may be infinite or a nonnegative number. If Po is finite, we have that a=
PaZ
+ lim Pap )..[jYo.~ ,
which shows that
a E Rec(X) + Rec(Y). If Po is infinite, then
o=
lima/pOop
= limA.e(xQI3 + YQfJ)·
Consequently, z = -limAfjYQP E -Rec()I'),
contradicting the assumption of 3)~ The theorem is proven.•
13
In l'heorem 2.12, the conditions (CD) and (CD) are infallible. Let us consider the follovling sets in the space R 2 : A = {(2 2n , 0) E R 2 : n = O~ 1, ...},
B = {(O~ 22n +1 ) E R2 : n = 0, 1~ IU}' It is clear that the vectors (1,0) -and (0,1) belong to Rec(A) and Rec{B), respectively, nevertheless their sum does not belong to Rec(A + B). l"'hese sets do not satisfy the condition (CD).
Further, let x(n) be a sequence· in 0 1 (Example 1.2{4)) \vhose terrllS arc all zero except for the first and the n-th ones being n, while y( n) is a sequence whose terms are all zero except for the first one being n and the n-th one being -n. Set
A = {x(n) : n = 2,3, .. ~},
= {y(n) : n = 2,3, ~ ..}. Rec(B) = {O} and meanwhile Rec(A + B) -I {O} B
Then Rec(A) = , for instance it contains the sequence with the unique nonzero term being the first one. The two sets above do not satisfy the condition (CB).
Corollary 2.13
Assume that X and Yare nonempty with
Rec(X) n -Rec(Y) = {OJ, and one of them is convex, locally compact. Then Rec(X + Y) = Rec(X)
+ Rec(Y).
Proof. If one of the two sets is convex~ locally compact, then it yields the conditions (CB) and (CD). The corollary is then deduced from Theorem 2.12.•
3loCONE CLOSED SETS
As in the previous section, let C be a nonempty convex cone in a separated topological space E. We shall examine the sets which are closed or compact not in the usual sense, but with respect to the cone C.
Definition 3.1 Let X be a subset of E. We say that it is 1) C·-bounded if for each neighborhood U of zero in E, there is some positive number t such that X ~ tU + C, 2) C -closed if X + clC is closed,
14
3) C-compact
if any
cover of X of the f01m
{Ua + C : Q E I, Uo arc open} admits a finite subcover, 4) C-semicompact if any cover of X of the form {(xo: - cIC)C : 0: E I, Xo. E X} admits a finite subcover.
The last definition was first given by Corley (1980)Jt is clear that whenever C
= {O}, every C-notion becomes the corresponding ordinary one.
Lemma 3.2 If X ~ A + C for some bounded set A ~ E, then X is C-bounded. Conversely, if the space is normable and X is C-bounded, then there exists a bounded set A such that
X
~
A+C.
Proof. The proof is straighfo!'\vard. We omit it.•
Proposition 3.3
Every C-compact set is C-semicompact, C-closed and G'-
bounded.
Proof. Let X ~ E be a C-compact set~ That it is C-scmicompact follows from the fact that (x - clC)C is an open set and it is the same as (x - clC)C + C. We prove now that X is C-closed. For, let x be a clustcr point of the set X +clC.. We have to show that x E X +cIC. Suppose to the contrary that x does not belong to that set. Let U be the filter of neighborhoods of zero in E. Consider the family G = {(cl(x - clC + U))C
+C
: U E U}.
It forms an open cover of X. Indeed, let y E X. Then y ¢ x - clC. lIenee, thcrc is a neighborhood V of zero such that Y E (cl(x - ciG + V))C ~ (cl(x - clC + V))C
+ c.
Since y is arbitrary, G is actually an open cover of JY. We show now that this cover has no finite subcovers. In fact, if that is not the case, Lc. thcre are some U1 , ... , Un from U such that the family
{(cl(x - cIG + Ui))C covers X, then taking
U= \ve have the incI usion
+C : i =
1, ... , n}
n{Ui : i = 1, ... , n}
X ~ (x - etC + U)C + C and arrive at the contradiction: x cannot be a cluster point of X
+ etC.
15
For the C-boundedness) let U E U. Then the family {x + U + C : x E X} forms an open cover of X. There are a finite number of points, say Xi, ••• , X n of X such that the family {Xi + U + C : i = 1, ... , n} still covers X. Since the set {Xl' .~., x n } is bounded, there is a positive t such that Xi E tU, for i = 1, ... ,11.. By this \ve get the relation X ~ U {Xi
+U +C :i =
1, .. ~, n} ~ (t
+ l)U + C,
completing the proof.•
We must confess here that any compact set is C-compact \vhatever the convex cone C be, ho\vcver, a closed set is not necessarily C-closed unless C happens to be {OJ. ::vI:oreover, in finite dimensional spaces, not every C-closed, C'-bounded set is compact~ The unboundedness of C destroys the nice property of the usual
c-
compactncss~
Proposition 3.4 Let L be a linear map from E into another separated topological vector space. Then we have 1) L(X) is L(C)-convex if X is C-convex (i.e. X 2) L(X) is L(C)-bounded if X is C-bounded.
+C
is convex)J
Proof. This is immediate from the definition.•
Proposition 3 . 5 Assume that !( is a convex cone contained in C. 'l'hen X is C-bounded if it is !( -·bounded. The conclusion remains valid if instead of bounded we write compact or semicompact. Proof. If X is K-bounded, then for any neighborhood U of zero in E, there is a positive number t such that
Since !(
~
X ~ tU + !(. C, we have X ~ tU +C
and by this, X is C-bounded. 1'ow,assume that X is K-compact, and let {Uce + C = a: E I} be a cover of X as in Definition 3.1, then {Ua + C + K : 0:' E I} is also a cover, here the sets Uo. + C are open. By the !(-compactness of X, there is a finite set [0 from I such that {Ua + C +!( : 0' E [o} still covers X. Observing that !( ~ C we see that the latter family is the same as {Ua
+C ;
Q'
E
[o}
and X is C-·compact.
ltbr the C-semicompactness the proof is similar.-
16
Proposition 3.6
If the space is normable and X is C -bounded, then Rec(X)
~
eiC..
Proof. By Lemma 3.2, there exists a bounded set A ~ E such that ...Y £; A Applying Proposition 2.5 we get the relation
Rec(X)
~
+ C.
Rec(A + C) = Rec(C) = elG,
completing the proof.Theorem 3.7 Assume that X and Yare two nonempty C-closed sets in a normable space E and the following conditions hold, i) X yields the condition (CD) and any bounded subset of X is relatively compact,
ii) Rec(X) n -Rec(Y + cIC) Then X + Y is C -closed.
= {O}.
Proof. I.lct p be a cluster point of X relation
+ Y + cIG.
'fhe aim is to establish the
P EX + Y+clC.
For this purpose, let U be a neighborhood of zero in E. Let us consider the set
XV = X
n (p -
y - U - cIC).
(3.1)
It is obvious that the family {Xu: U E U}, where U is the filter of neighborhoods of zero in E, forms a basis of filter on X. If one of Xu is bounded, then by condition i), that filter has an accumulation point, say X o • Since X is C:.... closed, X o E X + cIC. This and (3.1) sho\v that Xo
E P - Y - 2U - ciC.
Since Y is C-closed and U is arbitrary, X o must be in p - Y - cleo Consequently, p belongs to X + Y + ciC. Further, if none of Xu is bounded, then in virtue of Lemma 2~lO there exists a nonzero vector v E Rec(X), such that
v E clcone(Xu
n V),
(3.2)
for every U E U and V E B, where B is the filter of neighborhoods of 00. By (3.1), relation (3.2) gives
v E clcone((p - Y - U - cIG) n V), for each V E B.
This and Proposition 2.5 show that
v E Rec( -Y - U - ciC), for every U. In particular, when U is bounded, we get the relation
v E Rec(-Y - cIC), contradicting condition ii) of the theorem. The proof is complete.a
17
Remark 3.8 It is clear that the theorem above is not true if condition ii) docs not hold~ We furnish a simple example to show that the result of the theorem may fail if condition i) is violated. We are in a Hilbert space and let X
= {ei : i
= 1,2, ... }
be an orthonormal set in the space. Further, let C = {O} and let
Y = {(-l + 1/2i)ei: i = 1,2, ...}. The sets .)( and Y arc closed with
Rec(X) Ho,vevcr, X
+Y
Theorem 3.9
n -Rec(Y) = '{o}.
is not closed.
Assume that Rec(X) n -elC
= {O}
and either X or C satisfies condition i) of Theorem 3.7. Then X is C-closed if and only if X + ciA is closed for some subset A of C which contains the origin of the space. Proof. Suppose that X is set X +A.
C~-closed.
Take A to be C to get the closcdness of the
Assume now that X + A is closed for some A ~ C with 0 E A and p is a cluster point of X + cleo The aim is to establish that p EX +cIC.
Let W be a bounded neighborhood of zero in E. Supposing that X satisfies condition i) of the preceding theorem, we consider the set Xw
= {x
EX: (p+ W)
n (x +cIC) =#= 0}~
It is obvious that the family {Xw : W E U} forms a basis of filter on X. If one of the sets of the basis is bounded, then that filter has an accumulation point,say x. Since X + A is closed and X ~ X + A, 'vc have that x E X + A~ This also shows that p - x is a cluster point of C~ hence it belongs to cIC. In this way p E X + ciC. Further, if none of the sets of the basis is bounded. Applying the technique developed in the proof of the previous theorem one can find a nonzero vector v fromRec(X)n-Rec(cIC)~ Hence, Rec(X)n-clC is nODzcro,contradicting the assumption. For the case where C yields condition i), instead of X \ve have to consider the set Cw = {c E C : (p + W)
n (X + c) f:. 0}
and repeat the procedure described above for this set to ensure· the relation-
pEX+C. The proof is complete.•
18
Remark 3.10 We give here t\VO e"xaroplcs to show that the assumptions of the above theorem cannot be v/eakened~ Let the space be as in Remark 3.8. Consider the following sets and cone:
X
= {(l -
1/2n )el - ne n : n = 1,2, UI},
A = {OJ, and C = cone(conv{e n : n = 2,3, ~.~}). The cone Rec(X) = {O} , the set X + A is closed, although the set X
+C
is not
closed. Ko\v, if
x= A
{en: n = 1,2, UI},
= cone(conv(X)),
C = cone(conv{ (l/n)el then X
en
=
n = 1, 2, .. ~}),
+ A is closed while X + C is not it.
4. CONE MONOTONIC FUNCTIONS
Let E 1 and E 2 be two real topological vector spaces and let K and C be t\VO convex cones in E 1 and E 2 , respectively. Let further f be a function from X s: E 1 to E 2 • Denote the epigraph of f by epi f,i.e.
epi f = {(x~ Y) E E 1 X E 2 : Y E f(x) and the level set of f at a point y E E 2 by lev(y),Le.
+ C, x
EX},
lev(y) = {x E E 1 : f(x) E y - C,x EX}. Besides, we shall use also two other notations:
levo(Y) = {x E E 1 : f(x) E y -
c \ I(C)},
\vhcre l(C) = C n -~C, and
levl{Y) = {x EEl": f(x) E y - intC} \vhen intC is noncmpty.
Definition 4.1 FOT a given junction !, we say that 1) it is nondecreasing (or monotonic) at X o E LY with respect to (1<, C) if x E X n (xo - !() implies f(x) E f(x o ) - C; 2) it is increasing at x E X with respect to (K, C) if it is nondecreasing at that point and
19
x E X n (x o - I( \ l(I()) implies J(x) E f(x o ) - C \ l(C); Whenever intI( and intC are nonempty, we say that f is strictly increasing at x 0 E X with re spec t to (!(, C) if it is nondecreasing with respee t to (!(, C) and increasing with respect to ({O} U intI(, {O} U intC).
Further, if f is nondecreasing (resp.,
increasing~ .. )
at every point of X with
respec t to (]{~ C), we say that it is nondecreasing (re sp., increasing ~ ~.) on X with respect to (1(, C) or even say that it is nondecreasing if it is clear where and which cones it is with respect to. In a special case where the spaces coincide with the field of real numbers, R) and the cones are the set of nonnegativcs numbers, R+, 've have everything in the usual sense, for instance, I is nondecreasing if f(x) 2: fey) for every x, y with x ~ y and f is increasing if f(x) > f(y) for every x > y. In this case, incrcasingness and strict increasingness are the same. Proposition 4.2 We have the following 1) f is nondecreasing at x E X if and only if X n (x - I() ~ lev(f(x)); (4~1) 2) f is increasing at x E X if and only if in addition to (4.1), X n (x - !( \ I(I()) ~ levo(f(x)); (4~2) 3) f is strictly increasing at x E X if and only if in addition to (4.1)
X
n (x -
intI<)
~ levl (f(x))~
(4.3)
Proof. This is immediate from the definition.• ~ow,lct
E 3 be another real topological vector space and a convex cone D be
given in E 3 • Proposition 4.3 Suppose that f and 9 are functions from X to E2 and h is a function from f(X) to E 3 • Then 1) if is nondecreasing (resp~,increasing or strie:tly increasing) for each t > 0 2) 3)
if so is I; f + 9 is nondecreasing if so are J and g; f + 9 is increasing (resp.,strictly increasing)
if they are nondecreasing and at least one of them is increasing (resp.,strictly increasing); 4) h 0 f is nondecreasing (resp~,increasing or strictly increasing) if so are f and h. Proof. Assertions 1), 2) and 4) arc immediate from the defintion. For 3) it suffices to observe that by the convexity of C,
20
c + C \ l(C) ~ C1(C) and C
+ intC ~ intC~.
Proposition 4.4 Let T be a nonempty set and g(x, t) is a function from X x T to R, and let the cone C be R+~ Assume that the following conditions hold: i) g(~, t) is nondecreasing (resp~, increasing or strictly increasing) on X for every fixed t E T; ii) f(x) = max{g(x, t) : t E T} exists for every fixed x E- X. Then I(x) is a nondecreasing (resp.,increasing Qr strictly increasing) function on X"~
Proof. Let first x, y E X ,vith y E x t3;' t y E T so that f(x) = g(x, t x ) and
I(~
By condition ii), there are some
f(y) = g(y, t y ). We have then g(x) t x )
2: g(x, ty)~
In view of condition i) for the fixed t y
g(x, tv)
,
c g(y, t y).
Consequently, f(x) = g(x, (,;)
~
g(y, t y) = f(y).
Th"e other cases are proven similarly.• Below are some examples of monotonic functions 1. Positive linear operators:
Let L(E1 ) E2) be the space of linear operators from E 1 to E2 . An operator A E L(E1 , E 2 ) is said to be positive if A(I<) ~ C. It is clear that A is nondecreasing if and only if it is positive. If in addition A(K \ 1(1<)) ~ C \ I(C) or
A(intK) ~ intC, then it is increasing or strictly increasing and vice versa.
2 . Positive linear functionals: Assume that in the previous example E 2 = R, C = R+ and we write E instead of E 1 • Then L(E, R) is the space of linear functionals on E called the algebraic dual of E which we have denoted in Section 1 by E>Ic . We recall that K* is the algebraic polar cone of K.
21 FOT every junctional ~ E E* , we have that 1) ~ is nondecreasing if and only if ~ E J(*; 2) ~ is increasing if and only if ~ E [(*+; 3) ~ is stric tly in creasing if and only if ~ E K* \ {O} ~
Proposition 4.5
Proof. This is immediate from the definition. •
3. The smallest strictly monotonic funcqons:
Assume that intK is noncmpty. 1."'he spaces and cones are as in the previous Let e E intI( be a fixed vector and a E E, define a function he,a on E as follu\vs; cxamplc~
he,a(x) = min{t : x E a + te - !(, t E R}. It is obvious that this function is strictly increasing on E. It is the smallest in the sense that if f is a strictly increasing function at a, then the level set of f at f( a) must contain that of he,a at O. 4. Cherbysbev norm:
Let the spaces and cones be as before.Assume further that 1< has a convex bounded base and intI( is nonempty. Let e E intI( be fixed. The ::vIinkowski functional corresponding to e is defined by
f(x) = inf{t : t > 0 and (l/t)x E (e - !() n (-e for every x E E. One can verify direct that
J gives a
+ !()},
norm on E :
llxll = f(x) which is called a generalized Cherbyshev norm~ The word "generalized" falls down when E = Rn , !( = R+ and e is the vector with all components equal to 1. In this case where
Xi
IIxU = max{lxil : i = 1, ... , n} are the ith components of x .
For every e E intI(, the Minkowski functional corresponding to e is strictly increasing on K and -1(.
Proposition 4.6
Proof. Direct verification completes the proof.•
22
5. CONE CONTINUOUS FUNCTIONS
In this section we give a definition of cone continuity of vector valued functions and using the concept of equiscmicontinuity of scalar valued functions we establish SOlue criteria for a function to be conc-~·continuous. Let E 1 and E 2 be real normable spaces and a convex cone C be given in E 2 • l,ct f be a function from a nonempty set. X ~ E 1 to E2. For a given junction f from X to E 2 , we say that is C··continuous at X o E X if for any neighborhood V of f(x o ) in E 2 there is a ne ig h borhood U of x 0 in }J 1 such that
Definition 5.1 1)
J
f(x) E V + C, for all x E un X, (561) and f is C -continuous on X if it is C -continuous at any point of X. Further, assuming that C is closed we say that 2) f is epi-closed if epij is a closed set in the product space E 1 x E2; 3) f is level-closed if the level set of f at any point of E 2 is closed~
In the Iitcrature sometimes epi-closed functions are called closed and 1evclclosed ones are called C-·semicontinuous (Corlcy-1980). 'vVe recall that a scalar valued function h from X into R is lower semicontinuous at x E X if for each positive e , there is a neighborhood U of X o in E 1 such that
h(x)
~
h(x o )
-
c, for all x E Un x.
(5~2)
Whenever E 2 = R and C = R+ (the cone of nonnegative numbercs), C -continuity is the same as lovler scmicontinuity. In this case three concepts: C-continuity, cpiclosedness and level-closedncss coincide. In other cases they are different from each other as this will be seen later.
Definition 5.2 Let {h(x, t) : t E T} be a family of scalar-valued functions on X, where T is a nonempty parameter set. We say that this family is lower equisemicontinuous at in E 1 such that
Xo E
h(x,"t) Theorem 5.3
~
X
if for
every c
> 0,
there is a neighborhood U of X o
h(x o , t) - c, for all x E U n X and t E T.
(5.3)
Assume that C has a closed convex bounded base. Then in order that f be continuous it is necess ary and sufficient that it be C - and (- C)cantinuous simultaneously.
23
ProoE It is obvious that if f is continuous,Lc {O}-continuous, then it is D~ continuous for any cone D in E 2 • Suppose now that f is C- and (-C)-continuous at a point X o E X and let W be a neighborhood of f(x o ) in £2- \"Ve have to show that there is a neighborhood U of $0 in E 1 such that f(x) E W, for all x E X n U. (5.4) For the neighborhood W, due to Proposition 1.7, one can find a neighborhood V of zero in E 2 such that (1.1) holds. By the assumption of the theorem, for V, there arc two neighborhoods U1 and U2 of X o i'Q E 1 such that f(x) E V + C, for x E U1 n X and f(x) E V - C, for x E U2 n X. This and (1.1) imply (5.4) for U = U1 n U2 .• Remark 5.4 If the cone C is merely convex closed pointed, then the result above is not always true. To see this, let us consider the following sets and functions: the space is as in Remark 3.8,
C = cone(conv{ei, bi : i = 1,2, ... }), where bi = (1/2 i - 1)el - ei, X = [0,1] and the function f is given by the rule:
f(O) = 0, f(t) = 2(1 - 2i )bi+ 1 + (2 i+ 1t - l)b i ,
= 0,1, ... with bo = O. It is clear that f is C- and (-C)-continuous at 0, but not continuous cone C also serVes an example clarifying Remark 1.5. for t, 1/2 i +1 ~ t ~ 1/2 i , i
thcrc~
The
For the sake of simplicity we assume that it is given a norm 11.11 in E 2 and the norm in the topological dual space is denoted by the same~
Theorem 5.5 G = {~o f
:~
f
E C',
is C-continuoU8 at a point X o E X if and only if the family I} is lower equisemicontinuous at that point.
Ilell =
Assume first that f is C-continuous at there is a neighborhood U of :to in E 1 so that
Proof.
Xo
E X~ Then for every c
> 0)
f(x) E f(x o ) + B(O, e) + C, for every x E Un X, where B(O,e) is the ball of center 0 and radius c in E 2 ~ Let ~ be a unit normed vector from C'. We have that ~0
f(x)
~ ~0
f(x o )
+ inf{~(y) : y E B(O, c:)} == ~ 0
f(x o )
- £~
24
This relation sho'vs the lo\ver equisernicontinuity of the family G at X O • As for the converse assertion of the theorem, suppose that f is not C··continuous at X o E X, i.e. there exist () > 0 and a net {xo; = Q' E I} from X ,vith limx o := X o such that
f(x a ) ¢ f(x o ) + B(O, 28) + C, for all G E [. Since the set cl(f(xo)+B(O~8)+C) is convex closed, applying a separation theorem, one can find some ~Q from the topological dual of E 1 with unit norm such that ~a(f(x~) ::; ~a(Y), (5.5) for all y E f(x o ) + B(O, 8) + C. It follo"\vs from (5.5) that ~a E C' and ~Q(f(xQ) ~ ~Ct(f(xo)) + inf{~(y) : y E B(O, 8)} = ~~(f(xo)) - o. In this "\\o"ay G docs not yield (5.3) for e = 8/2, completing the proof. Corollary 5.6 S'llppose that C' is a polyhedral cone. Then f is C··continuous at E X if and only if every function of G is lower semicontinuous at that point.
Xo
Proof. Assume that
0'
= cone(conv{~i: i = 1, ... ,n}).
It can easily be proven that G is lower cquisemicontinuous if and only if the family {~i 0 f : i = 1, ... , n} is it. But the latter family is finite and it is lower equisemicontinuous if and only if every element of it is lower semicontinuous. II Remark 5.7 The fact that the result of the corollary above may fail when C is not polyhedral is shown by the following example in R 3 . Let ai
= (1,1 -
1/22i , 1/2 i ) E
R3)
i = 0,1) ... ,
C = cone(conv({ai : i = 0,1, ...} U (1,0,0) U (1,1,0»)). Denote Pi
= (1, -1/(1 + 1/22i+1 ), -3/(2 i +1 + 1/2i )).
Then it can be verified that the cone C consists of the vectors a E 1~3 \vhich solve the following system: (a~Pi)
;::: 0, i
= 0,1, u.
(a, (0,0, 1)) ~ 0, {a, (0, 1,0)) 2: 0, here (.) is the inner prodlie t. Further,denote bi = 2iai -.. Pi. We construct a function as follows: \v
1(0)
= 0,
f(t) = (2 i +1 t - l)b i for t, 1/2 i + 1 :5 t ~ 1/2 i , i = 0,1, ....
+ (2 - 2i +1 t)bi +1
f
from [0,1] into R3
25
The cone C' is obviously generated by the set
=
G {Pi: i = 0, 1, .. ~} U (0,0,1) U (0,1,0). It is clear that f is not C-continuous at O. Nevertheless we shall prove that any composition ~ 0 f, € E C' , is lower semicontinuous at o. Indeed, it suffices to verify that fact for the vectors from G. Direct computation shows that
{bi, (0,0, 1)) = 1 + 1/(2 i+1 + 1/2 i ); (bi, (0, 1,0))
= 2i (1 -
1/2 2i )
+ 1/(1 + 1/22i +1 ),
for i = 0, 1, .. ~ . Further,
(Pi,Pj) ~ 2i /(1 lIenee, when j is fixed and i is large enough, we have
(bi,pj) = 2i (ailPj) (bi,pj)
~
-
+ 22i + 1 ) -
14.
O.
(5.8)
Cornbining (5~6), (5.7) with (5.8) and taking (5.5) into account we obtain that for each ~ E G, if t is sufficiently small, then (,(f(t)) 2: ~(f(O)), i.e. ~ 0 f is lo,vcr scmicontinuous at O.
Theorem 5.8
Every epi-closed function is level-closed. Conversely,
if intC
is
nonemptYJ then every level-closed function is epi~~closed.
and
Proof. Suppose that epif is closed let x be a cluster point of L(y), for some y E E. We have to show that x E L(Y)6 If that is not the c&Clc,Lc. f(x)
(U, V)
n epif = 0.
In particular,
(U, y) n epif = 0, for every x E E. In other ,vords,
Un L(y) =
0,
that is, x cannot be a cluster point of L(y). Assuming that iniC is nonempty, we no\v demonstrate the converse assertion. For, suppose that L(y) is closed for each y E E 2 • vVe have to prove the closedness of epif. Let (x,y) E E 1 X E 2 and (x,y) ¢ epif. The latter relation means that y rt f (x) + C. Since C is closed, there is a neighborhoo d of zero , say W, such that
(y + W) n (f(x) + C) = 0. Taking a vector e E W n intC, we get y
t/. f(x) - e + C,
tt
which means that x L(y + e). The set L(y neighborhood U of x in E 1 such that
+ e)
is closed, hence there is a
26
Un L(y + e)
= 0.
This gives the relation y ¢ f(U) - e + C.
(5.9)
J:urthcr, as the vector e belongs to intC) there exists a neighborhood V of zero in E 2 such that (5.10) e- V ~ C.
\Ve are going to establish the relation (U, y + V) n epif = 0, and by this the theorem will be proven. If that relation is not true,Le. for some x' E U and v E V, (x', y + v) E epif, then
f(x') E y + v-C. In virtue of (5.10),
f(x') E Y + e - C, contradicting (5~g).1"he proof is complete.• The following simple example shows the need for the condition iniC Theorem 5.8.. Let f be a function from R to R 2 defined by the relations: f(x) = (1,0), for x $ 0, f(x) = (0,1), for x > 0; C = {(t, 0) E R 2 : t ~ OJ. Then f is level-closed, although epif is not closed.
t= 0 in
Theorem 5.9 Assume that ~ 0 f is lower semicontinuous for each ~ E C' . 7'hen f is epi-closed~ Proof. We suppose to the contrary that
point (xo,Yo) of epif for which Yo the property:
f is not epi-closcd, i.e. there is a cluster
rt f(x o) + C. Let t be a positive number with
(Yo -~ B(O, t)) n (f(x o) + C) = 0~ Separating these convex sets by a unit normed vector ~ E E 1 , we obtain (c;, f(x o ) + c) ~ (~, V), for all c E C and y E Yo + B(G, t)~ It is obvious that ~ E C and (~, f(x o )) ~ ({, y)
+ (~, V'),
for all y E Yo + B(O, t/2) and Y' E B(O, t/2). Consequently,
27
(f" f(x o )) ~ (~, y) + sup{ {c;, y'} : y' E B(O, l/2)} ~ (~, y) + t/2, for all y E Yo + B(O, t/2). Remembering that (x o' Yo) is a cluster point of epij \vhich means that there are points x as closed to X o as we want so that f(x) E yo+B(o, t/2), we conclude that the function
~0
Corollary 5.10
f
cannot be lower scrnicontinuous at
XO'
•
Every C-continuOU5 fun~tion is epi-closed, hence level-closed.
Proof. Invoke the corollary to 'I'heorems 5.8,5.9 and Theorem 5.12 below.•
We now study the compositions of C-continuous functions. Let 9 be a function from E 2 to a normed space E 3 and let D be a convex cone in E 3 I
Theorem 5.11 Assume that X is a subset of E 1 with at least one accumulation point, say XO ' The composition 9 0 f is D-continuous at Xo for eve1~ function f from X to E 2 , being C-continuous at X o if and only if g is D-contin1l0Us and nondecreasing on E 2 • Proof. Suppose that 9 is D-continuous and nondecreasing on E 2 and f is Ccontinuous at XOI Let W be a neighborhood of g
g(y) E W + D, for all y E V. (5.11) By the Ch·continuity of !, there is a neighborhood U of X o in E 1 such that j(x) E V + C, for all x E U n x~ (5.12) Since g is nondecrcasing, (5.11) and (5.12) imply the relation g(f(x)) E g(V + C) ~ W + D, for all x E Un x. Thus, go f is D-continuous at xo~ Conversely, suppose first that 9 is not nondecreasing on E 2 , i. eO. there are a point Yo E E 2 and a nonzero vector c E C such that g(y + c) 95 g(y) + D. Since the set g( y) + D is closed, there exists a neighborhood ltV of zero in E a such that
(g(y + c) + W) We construct a fvnction
f
n (g(y) + D) = 0. from X into E 2 as follows:
/(x o ) = y, j(x) = y + c, for all x E X, x # It is clear that f is C-continuous at that point as (5.13) sho\vs.
X o•
(5.13)
Xo'
However, 9
0
f
is not D-continuous at
28
Further, suppose that 9 is not D· 'continuous, say at Yo E E, , Le. there are a neighborhood W of g(yo) in E 3 and a sequence {Yn} from E 2 ,vith limYn = Yo such that
g(Yn) fJ. W
+ D, for all n = 1,2, ... .
(5.14)
Since X o is an accumulation point of X, there is a scquence {x n }, X n i: x o , from X \vhich converges to X o • The aim at the moment is to construct a continuous function f from X to E 2 for which go f is not D-continuous at X o • Without loss of generality, we may assume that {llYn - yoll} is decreasing ,vith IIYl - Yo!1 = 1. First we construct a function 11 from {x n } to the interval [0,1] by the formula:
fl(x n ) = nYn - Yolr· This function is continuous on the closed subset of the space E 1 • Apply Tietze extension theorem (Kurato,vski-1972) to get a continuous function /2 from E 1 to [0, I). Now we construct a continuous function is from [0, 1J to E 2 as follows. First we note that for every t from that interval, there exists exactly one n such that r1Yn-l - Yo II
> t 2::
llYn -
yoll·
In other words,
t = SUYn-l - YoH
+ (1 -
s)IIYn - YoU, for some $,0 < S
::;
1.
We set
f3( t) = SYn-l + (1 - S)Yn. It is clear that fa is continuous on [0) 1]. Hence, the composition f = continuous on E 1 • Moreover,
fa 0 12
is
f(x n ) = Yn, n = 1,2, .... The composition 9 0
f
is obviously not D-continuous at
Xo • •
Theorem 5.12 Assume that the cone D does not coinside with the whole space. The.n 9 0 f is D-continuous at a point X o E X for every function 9 being Dcontinuous at f (x 0) if and only if f is cantinuous at x 0. Proof. Supposc that f is continuous at xo~ Then it is {O}-continuous at that point. Since any function from E 2 . to E 3 is nonincreasing with respect to the cones {O} ~ E2 and D, applying Theorem 5.11 ,ve get the D-continuity of 9 0 f for each 9 which is D-continuous at f(x o ). Suppose now that f is not continuous at the point X O • Then there are a neighborhood V of f(x o ) and a sequence {x n } from X with limx n = X o such that f(x n ) E V, for each n. Define a function 9 from E 2 to E 3 as follows~ Let v be a nonzero vector of E 3 which docs not belong to D, and let t be a positive number with the property:
B(O, t)
+ f(x o )
Set
g(f(x o )) = 0
C V.
29
g(y) = (Jly - f(x o )lI/t)v, for every y E E 2
6
It is obvious that 9 is continuous, hence D-continuous. Despite of this, go! is not D-continuous at X O ' The theorem is proven.•
6.CONE CONVEX FUNCTIONS
In this section E 1 and E 2 are real topological vector spaces, X is a nonempty convex set in E 1 and in E 2 it is given a convex cone C.
Let f be a function from X to E 2 • We say that is C-·convex on X if for Xl,X2 EX, t E [0,1],
Definition 6.1 1)
f
2)
f(tXl + (1 - t)X2) E tf(Xl) + (1 - t)f(X2J - C; is strictly C-·convex on X, when intC is nonempty, if fOT Xl, X2 E X, Xl i=- X2, t E (0,1), i.e. 0 < t < 1,
(6.1)
f
f(txl + (1 - t)X2) E tf(Xl) + (1 - t)f(X2) - intC; is C-·quasiconvex on X if for y E E 2 ? Xl, X2 E J"l() t E [0, 1], f(Xl),!(X2) E y - C implies f(tXl + (1 - t)X2) E Y - C~ 4) f is strictly C-guasiconvex, when intC is nonempty, if for y E E 2 , Xl, X2 E E 3 , Xl ~ X2, t E (0,1), f(Xl), f(X2) E y - C implies !(tXt + (1- t)X2) E Y - intC.
3)
f
In a particular case \vhere E 2 = R, C = R+, we get the definition of convex and quasiconvex functions in the usual sense. Here are some simple properties of C-conve.."{ and C-quasiconvex functions.
Proposition 6.2 f is q-convex if and only if epif is a convea; set. Moreover, if E 2 is separated and C is closed, then f is C-convex if and only if ~ 0 f is convex for every € E C/.
PrOOL The first part of the proposition is immediate from the definition. For the second part, suppose that f is C-convcx. Then the relation 1) of Definition 6.1 holds. We already know that any functional ~ E C' is nondccrcasing and linear, therefore applying ~ to the relation of 1), \ve obtain ~f(txl + (1
- t)X2) $ t(f(Xl) + (1 - t)~f(X2)'
30
\vhich sho\vs that ~
f
is convex as a scalar valued function. Conversely, if relation (6.1) docs not hold for some Xl, X2 E X, t E [0,1]. By a separation theorem, there is a functional ~ E E' separating the point f(tXl + (1 - t)X2) and the convex closed set in the right hand side of (6.1). It is clear that {E C' and 0
€f(txl which sho'vs that ~ 0
f
+ (1 - t)X2) > t~f(Xl) + (1 -
t)~f(X2),
is not convex, completing the proof. •
Proposition 6.3 We have the following: 1) f is C-quasiconvex if and only if lev(y) is convex for each y E E 2 ; 2) f is C -quasiconvex if and only if he,a () f is quasiconvex for every a. E E 2 and a fixed e E intC, where he,a is the smallest strictly monotonic function at the point a (Sec.4), whene'uer iniC is nonempty. Proof. The first assertion is obvious. For the second OIle, suppose that f is not C-quan."iiconvcx,Le. the relation in 3) of Definition 6.1 docs not hold. Take a = y
and consider the function hc,y. By the definition of this function, we have
he,y(f(Xl)) h e ,y(f(X2))
~
0 and
~ 0,
while
he,y(f(tXl + (1 - t)X2)) > 0, which shows that hc~y 0 f is not quasiconvex. 1'hc converse assertion follows from Proposition 6.8 below.•
Proposition 6~4 Assume that E 2 is locally convex separated and C t has a weakly compact convex base. If ~ 0 f is quasiconvex"!oT every extreme vector ~ of C' , then
f is C-quasicon·vex. Proof. Suppose ~o the contrary that f is not C-quasiconvex which means that the relation in 3) of Definition 6.1 docs not hold. In virtue of Proposition 1.10, there is an extreme vector ~ E C' such that ~(f(txl
+ (1 -
t)X2) - y) > o.
This and the fact sho\v that ~ 0
f
c;(f(x2) - y) ~ 0, i = 1,2 is not quasiconvex, completing the proof.
II
Proposition 6.5 Assume that E 2 = RR and C is a polyhedral cone generated by n linearly independent 'Vectors. Then f is C-quasiconvex if and only if ~ 0 f is quasiconvex for every extreme vector ~ of
ct.
31
By Proposition 6.4, it suffices to prove that the C-quasiconvexity of f implies the quasiconvexity of ~ 0 f for every extreme vector ~ of C'. First 1,ve note that if aI, ... , an generate C, then C' is generated by the nonzero vectors b1 , ••• , bn \vhich are the only extreme vectors of C' and defined by (bi,aj) = 0, i =f:. j, (6.2)
Proof.
(bi, ai) = 1. Suppose that ~ 0 f is not quasiconvcx for say X2 E X, t E (0,1) such that
~
= b1 • Then there exist some
Xl,
+ (1 -
t)X2) > max{~J(xl); ~f(X2)}, \vIDch means that ~ strictly separates f(tXl + (1- t)X2) and f(Xl) U f(X2). Assume ~f(tXl
that ~f(Xl) ~ ~f(X2).
Consider the hyperplane H generated by ~ and passing through f(Xl)' By (6.2), \ve have that
H
=
leXt) + lin{a2' ... , an},
where lin denotes the linear subspace stretched on a2, ..., an' Consequently)
(/(x!)
+ C) n H
+ Zin{al}} n H +cone(conv{a2' .~., an})~
= {!(X2)
(6.3)
Let c be the point which yields the relation
(J(Xl) + C) n (f(X2) + C) = c + C (such a point exists because C is generated by n linearly independent vectors, see the Choquet-Kendall rfheorem in Pcrcssini-1967). We prove that c E H. Indeed, consider the (n - I)-space H - !(Xl) and the cone Co = cone(conv{a2, ... ,an }) in it. One can easily verify that there is a unique point
Co
such that
Co n {{f(X2) + lin(al)} n H + Co} ~ Co + Co. It follows from (6.3) and from the definition of Co that
+ f(xl)~
c = Co
i.e.
C
E H. We have then
j(Xl), /(:£2) E C- C, meanwhile
f(tXl In this 1,vay,
f
+ (1 -
t)X2) (j. H - C.
is not C-quasiconvex.•
Corollary 6.6 C == R+ ' then is quasiconvex.
f
Under the assumptions of the previous proposition, if in addition is C···quasiconvex if and only if every component junction of f .
32
Proof. This follows from Proposition 6.5 and the fact that the polar cone of C is itself...
Proposition 6.7 Let f and 9 be two functions from X to E 2 • 1 hen 1) if is C -~convex (resp.,strictly C··convex,~~.) for each t > 0 if so is ji 2) f + 9 is C-convex (resp.,C-quasiconvex) if 80 are f and g; 3) f + 9 is strictly C -convex (resp., strictly C -~quasiconvex) if they are Gconvex and at leas t one of them is strictly C -con1) ex (re sp., strictly Cqu asiconvex). 1
Proof. This is immediate from the definition.•
Proposition 6.8 Let f be a junction from X to E 2 and 9 be a, junction from E z to another space E:~ in which a convex cone D is given. Then 1) go f is D-convex if f is C-convex, g is D-convex and nondecreasing; 2) 9 0 f is strictly D-convcx if f is strictly C-convex, 9 is D-~convex and strictly increasing j 3) g 0 f is D-quasiconvex if f is C-convex and if g is D-~quasiconvex and nondecreasing,. 4) go f is strictly D-~quasiconvex if f is strictly C-convex) and if g is Dquasiconvex and strictly increasing. Proof. We prove l)w Let
Xl, X2
E X, and 0
1. Since
f
is C·-convex~
f(tXl + (1 - t)X2) E tf(Xl) + (1 :- t)J(X2) - c. rrhis combines with the nondecreasingncss of 9 to yield the relation:
go J(tXl + (1 - t)X2) E g(tf(Xl) + (1 - t)f(X2)) - Dw (6.4) By the D-convexity of 9 we have that g(tf(Xl) + (1 - t)f(X2)) E tg 0 J(Xl) + (1 - t)g 0 !(X2) - D. The latter relation and (6.4) sho\v that 9 0 f is D-convex. Other parts of the proposition are proven similarly.• Remark 6.9 In the proposition above, in every case f must be C-convex or strictly C-convex. If it is merely C·-quasiconvex or strictly C-quasiconvcx, then the last two assertions may fail. Below we give an example where" f is Cquasiconvex, 9 is D-convcx and increasing, but go f is not D-convex.
Let X = [-1, 1], E 2 = R2, C = R~ ~ E s Let f be defined by
f(x) = (-x,O), if x E 10, 1J f(x) = (O,x), if X E [-1,0],
= R, D = Rt;
33
and let 9
be defined by
g((x, y)) = x + y, for (x, y) E R2~ I t is easy to verify that f is C-quasiconvcx, 9 is linear increasing, although the composition go f(x) = -Ix] is not quasiconvex.
7.SET-VALUED MAPS
Suppose that E 1 and E 2 are t\VO real topological vector spaces and it is given a convex cone C in E 2 • Let }' be a set-valued map from E 1 to E 2 v{hich means that }?(x) is a set in E 2 for each x EEl. The following notations will be used for set~· valued maps: domF = {x E E 1 : F(x) f 0} grafF = {(x,y) EEl X E 2 : y E F(x),x E domF} epiF = {(x, y) E E l X E 2 : y E F(x) + C, ~ E domF}
Definition 7.1
Let X be a subset of domF. We say that 1) F is upper C-continuous at X o E X if for each neighborhood V of F(x o ) in E 2 ) there is a neighborhood U of X o in E 1 such that
F(x) ~ V + C, for all x E Un dam}'; 2) F is lower C-continuous at X o E X if for any y E F(x o ), any neighbor· hood V of y in E z , there is a neighborhood U of X o in E 1 such that F(x) n (V + C) -:f 0, for each x E Un domF; 3) F is C-continuous at X o if it is upper and lower C-continuous at that . point; and F is upper (resp.,lower, ...) C-continuOU8 on X if it is upper (resp.,lower, ...) C-continuous at every point of X; 4) F is C-closed if epiF is closed; 5) whenever "N" denotes some property of sets in E 2 , we say that F is 'tN'~ ··-valued on X if F(x) has the property "N", for every x E x.
=
In the above definition, setting C {O} we get the definitions in the usual sense which we meet in the literature with adding "semi" to "continuous'~. Sometimes we say simply upper continuous instead of upper {OJ-continuous. There are a lot of books dealing with set-valued maps ( see for instance Aubin and Ekeland-1984; Berge-1962). vVe develop here only what we need in the chapters to come.
34
in
Theorem 7.2 Assume that X is a compact set E 1 and F is an L'lVn-valued, upper C-coniinuous map from E: to E 2 with X ~ domF, where "lV)' may be Cclosed, G-bounded, C-compact or C-semicompact. Then F(X) has the property "lV" in E 2 . f)roof. First let ,roN" be C-closcd and let {aa = (} E I} be a net from F(X) 1,vith lima Q = Q. 'ATe have to prove that there is some x E X such that
a E F(x)
+ etC
+ clG.
Let Xo: E X, Yet E F(x a ) and Co: E clC be such that ao: = YQ + co. We may assume that limx a = x EX. ~br any neighborhood
V of }'(x) in E 2 , there is some f3 E I such that F(xaJ S;; V + cIC, for all a ~ {3.
In particular, YQ E V -:-cIC,
\vhich implies that aa E V + clC, for all Q ~ (3. Since V is arbitrary and F' is C-closcd-valued, we conclude that a E F(x) + cleo Now,lct "lV" be C~·boundcd and let V be an arbitrary neighborhood of zero in E2. "'rVe have to show that there is some i > 0 such that F(X) ~ tV +C. '1.'0 this end, for every x E X, consider the set U(x) = {y EX: F(y) ~ F(x) + V + C} which is open in X due to the upper C-continuity of F. By the compactness of X, there are a finite number of points from X, say Xl, ... , X n , such that {U(Xi) : i = 1, ... ,n} coversX. Thus,
F(X) ~ U{F(Xi) :i= 1, ... ~n}+nV:tC. Remember that F is C-bounded-valued, which means that there are some ii so that F(Xi) ~ ti V + C.
~
a
Take t = n + t 1 + -.. + "t n to get the inclusion F(X) ~ tV + G. furthcr,lct "N': be C-compact and suppose that {Va + C : a E I} where Vo; are open is a cover of F(X). We have to draw a finite subcover from that cover. For x E X, denote by I(x) a fInite index set from I which exists by the C···compactness of F(x) such that {Va + C : a E I(x)} covers F(x). Again, the set
U(x) = {y
E
X = F(x)
~
U{Vc:e : a
E
I(x)} + C}
is open and we can obtain a finite cover of X, say {U(Xj) : i = 1, ... , n}. Then the family {Va + C : a E I(Xl) U ~ .. U I(x n )} forms a finite subcover of F(X).
35
Finally,let "N" be G'-semicompact and let
{(a Q
-
clC)C : a E I, a~ E F(X)}
be a cover of F(X). For each x E X, since F(x) is C-scmicompact, it can be seen that there is a finite index set I( x) ~ I such that {(a cIC)C : 0' E I( x)} covers F(x), where a el may lie outside of F(x). Further, it is obvious that Q
-
(a - cIC)C + C = (a - clC)C, for each a E E 2 , therefore the set
U(x) = {y EX: F(y}S; U{(a
Q
-
cIC)C : a E lex)}}
is open in X. No,v the argument of the previous part can be applied without any change.•
Definition 7.3 (Penot-1984) F is said to be compact at x E damEl if any net {(xQ' Va)} from grafF possesses a convergent subnet with the limit belonging to grafF as soon as {xo:} converges to x. Whenever this is true for each x E X s;; damF., we say that F is compact on the set X. Definition 7.4 Let now E 1 and E 2 be metric spaces and let F be a compactvalued on E 1 • We say that F is Lipschitz at x E domF if there is a neighborhood U of x in E 1 and a positive number k, called a Lipschitz constant, such that ~
h(F(x), F(y)) where d(.,.) is the metric in E 1 compact sets in E 2 •
,
kd(x, y), for each y E U,
and h(.,.) is the Haus40rff distance between two
Proposition 7.5 If F is Lipschitz at x E damP, then it is compact and continuous at that point Proof. The continuity of the map at x is obvious. We prove that the map is compact.. For, let {(xo:' Yo.)} be a net from grafF \vith {xa:} converging to x E domP. Consider the net d(yo.' F(x)) of real numbers. Since F is Lipschitz at the point x,
limd(Ya, F(x)) = O. By the compactness of F(x), there is a net {za:} from F(x) such that
d(yo., F(x») = d(yco zoJ and this net may be assumed to converge to some z E F(x). VVc have then, d(Ya., z) ~ d(Ya' zce) + d(zce, z) and {Yell converges to z, completing the proof.•
36
No,,, suppose that E a is a real topological vector space, G is a set-valued map from E 2 to E 3 . The composition G 0 F is the map from E 1 to E 2 defined as follows:
(G 0 F) (x) = U {G(y) : y E F(x)}. Further, for t,vo set-valued maps F 1 , F2 from E 1 to E2, their sum and the multiplication with a scalar t are defined by the rules:
(FI
+ F2 )(x) = F1(x) + F2 (x); = tF1 (x), for each x E E 1 •
(iF} )(x)
Proposition 7~6 Suppose that the maps F, F I and F2 are compact at x E E 1 and G is compact on F(x). Then the map tF, PI + F2 and Go F are compact at the point x. Proof. This is immediate from the definitions ~ •
Chapter 2
Efficient Points and Vector Optimization Problems
rrhis chapter is devoted to the basic concepts of efficiency in topological vector spaces. We deal with partial orders generated by convex cones over all, after having introduced them in Section 1. l'he next three sections contain the definitions, properties and· existence conditions of efficient points. In Section 5 \ve define vector optimization problems, their solution concepts and investigate the existence of optimal solutions.
1.BINARY RELATIONS AND PARTIAL ORDERS
Given an arbitrary set E, a binary relation on E is, by definition, a subset B of the product set E X E. This means that an element x E E is in relation ,vith Y E E, if (x,.y) E B.
Definition 1.1 Let B be a binary relation on E. We say that it is 1) reflexive if (x, x) E B for every x E E ; otherwise it is irreftexive 2) symmetric if (x, y) E B implies (y, x) E B for each x, y E E, otherwise it is asymmetric; 3) transitive if (x, y) E B, (y, z) E B imply (x, z) E B for every x, y, z E B, oiherwise it is nontransitiv e;
38
if (x, y) E B or (y, x) E B for each x, y E E, x:f:. Yi 5) linear in the case where E is a real vector space, if (x, y) E B implies that (tx + z, ty + z) E B for every x, y, z E E, t > 0, 6) closed in the case where E is a topological vector space, if it is clo.ged as a subset of the product space E x E.
4) complete or connected
To clarify this definition, let us consider the following classical example: let E be a community of inhabitants of a city and we define binary relations as follo\vs.The inhabitants are named by x, y, z, .... 1) (x, y) E B 1 if x is older or as aged as y. 2) (x, y) E B 2 if x and yare of different sex. 3) (x,y) E B 3 if x and yare relatives (they come from one family tree). 4) (x, y) E B 4 jf x and y are relatives of somebody 5) (x, Y) E B s if x and y axe more weighted than any cocitizcns. It can be seen that B 1 is re.flexive, transitive, asymmetric, complete; B 2 is irsymmetric, nontransitivc, noncomplete ; B 3 is reflexive,symmetric, nontransitive, noncomplcte; B 4 is refLexive,symmetric,transitivc)complctc, while B s is irreflexive and noncomplete.Thc t\VO last relations are extreme cases:B4 = E X E and B s = 0. re.flexive~
Definition 1.2 transitiv e.
A binary relation is said to be a partial order if it is reflexive,
I t is known that if B is a par tial order which is linear in a vector space, then the set
c=
{x E E : (x) 0) E B}
is a convex cone. If in addition B is asymmetric, then C is pointed. Conversely, every convex cone C in E gives a binary relation
Be = {(x, y) E E x E = x - y E C} which is reflexive, transitive and linear. If in addition C is pointed, then Be is asymmetric. From now on we shall consider only orders generated by convex cones. We . wri te sometimes x?:.cY instead of x - y E C,
or simply x ~ y if it is clear that the binary relation is defined by C;
x
>c Y if x~cy·and not Y?cx,
in other words, x E y + C \ I(C);
39
When intC is nonempty, x>Cy means that x
> I( Y ,vith ]( = {O} U intC.
Here are some examples: 1.Let us be in Rn and let C = Ri.o Then Be is reflexive transitive linear closed asymmetric but not complete. For x = (Xl' ~.~, X n), Y = (Yl' ... , y,.J E Rn :
x?acY if and only if Xi.~ Yi for i = 1, ... , n; x >e Y if and only if Xi ~ Yi, for i = 1, ... , n and at least «;>uc of the i nequalities is strict;
x
>c Y if and only if Xi > Yi
for all i = 1, ... , n.
2
2. In R , if C = (Rl, 0) , then Be is reflexive transitive linear closed and syrnmetric. In this case, x~cY if and only if the second components of these vectors coincide. The order is not complete. 3. The ubiquitous cone (Example 1.2( 4))Chaptcr 1) gives a reflexive transitive linear, but not complete relation in 0 1 , 4. The lexicographic cone (Example 1.2(5),Chapter 1) provides a reflexive transi tive linear complete relation in l'P.
2.EFFICIEKT POINTS
Let E be a real topological vector space with partial order (~) generated by a convex cone C.
Definition 2.1 Let A be a nonempty subset of E. We say that 1) x E A is an ideal efficient (or ideal minimal) point of A with respect to C if Y ~ x for every yEA; The set of ideal minimal points of A is denoted by IMin(AfC); 2) x E A is an efficient ( OT Pareto-minimal, or nondominated) point of A with respect to C if x ~ y, for some yEA, then y ~ x; The set of efficient points of A is denoted by lVlin(AIC); 3) x E A is a (global) properly efficient point of A with respect to C if there exists a convex cone ]( which is not the whole space and contains C \ I(C) in its interior so that x'E Min(AII{); The set of proper efficient points of A is denoted by Pr.lvlin(AIC);
40
4) supposing that intC is nonempty) x E A is a weakly efficient point of A with respect to C if x E l\1in(A]{O} U inter); The set of weakly efficient points of A is denoted by Wlv[in(A~C). In the literature some authors exclude indifferent points from the set of efficient points (two points x, YEA, x ;f. Y arc indifferent \vith respect to C if they satisfy simultaneously the relations: x ~c Y and y ~c X4 In other "\vords, a point x E A is said to be efficient if there is no yEA, x =f:. y such that x ~ C Y~ 0 bviously~ this definition coincides with ours only in the case where C is a pointed cone (sec Proposition 2.3 below). In the sequel, sometimes, if no confusion occurs, we omit "with respect to C" and ,t Ie" in the definition above. The notions
[Max, lVfax, PrMax, l-VJvlax
E;
are defined dually. When \ve restrict ourselves in a neighborhood of x in we get the local ideal efficient, local efficient etc. points notions and denote them by the same with the lower index "1": [Mini, MinI etc.. When speaking of weakly efficient points we always nlean that C is assumed to have nonempty interior. IIere are some simple examples;1. "Vo are in the 2-space R2 ~ Let A = {(x, Y) E R2 : x 2 + y2 :5 1, Y ::; O} U {(x, y) : x ~ 0, 0 ~ Y B = AU {( -2, -2)}. For C = R~, ,vc have,
IMin(B) = PrlVlin(B) Il\1Iin(A) = 0,
= Min(B) =
WMin(B) = {(-2, -2)};
PrMin(A) = {(x,y) E R 2 : x 2 +y2 = 1,0 > x,D> y}, Min(A) = Pr Min(A) U {(a, -l)} U {( -1, O)}, W Min(A) = i\1"in(A) U {(x, y) : y = -1, x ~ O}~ Now, for C = (Rl,O) S; R 2 , we have, IMin(B) = 0, PrMin(B) = Min(D) = WMin(B) = B, IMin(A) = .0, Prl\1Iin(A) = Min(A) = Wl11in(A) = A. 2. Let.us be in 0 1 and denote
B(O, 1) = {x E rl 1 : Hxll ::; 1}. It is easy to see that for C being as in Example 1.2(3) of Chapter 1, Min(BIC) = {x E ~1
For C being the ubiquitous cone,
:
2
IIxH = l)x ~ O}.
-l};
41
Min(B I C) = 0. Proposition 2 . 2 We have the inclusions: PrMin(A) ~ Min(A) ~ WMin{A). Moreover,
if Il11in(A)
is nonempty, then
I1Vlin(A) = Min(A) and it is a point whenever C is pointed., Proof. We prove first the inclusion
PrMin(A)
~
1\!Iin(A).
Let x E PrMin(A). Posit to the contrary that it is not an efficient point of A, Le. there is some yEA such that
x E y+G\l(C). Hence x E y + intK, where K is the cone in the definition of the proper efficiency. Since !( is not the whole space, intK belongs to K \ l(I(). Consequcntly, x >K y~ contradicting the fact that x is efficient with respect to [(. Further, to prove the inclusion
Min(A) ~ WMin(A), let x E Min(A) and let [( be the cone composed of zero and of intC. Suppose that yEA and x 2:K y. We have to show that y ~K x which implies that x E WMin(AIC). Indeed, if x = y, nothing to prove. If x
=F y,
X ~K
Y means that
x - y E intC. .
(2.1)
Since x E Min(A) and [( ~ C, x ~K y implies that y ~c x. In othcr words, y - x E C. This and (2.1) show that 0 E intC, Le. C = E and hence y ~J( x as well. Finally, it is clear that
IMin(A)
~
Min(A).
If IMin(A) is nonempty, say x is one of its elements, then for each Y E Min(A), y ~ x implies x 2:: y. The transitivity of the order gives us the relation: z 2: Y for every z E A. This means that y E IMin(A) and hence IMin(A) and i\lIin(A) coincide. Whenever C is pointed, x ~ y and y ~ x axe possible only in the case x = y. Thus,IMin(A) is a point.•
Proposition 2.3 An equivalent definition of ejjiciency.~ 1) x E IMin(A) if and only if x E A and A ~ x + c;
42
j\1 in( A) if and only if An (x - C) S; x +I(C), or equiv~lently, there is no yEA such that x > y. In particular, when G is pointed, x E i\1in(A) if and only if A n (x - C) = {x}; 3) when C is not the whole space, x E Wlvlin(A) if and only if A n (x -- intC') = 0, or equivalently, there. is no yEA such that x > y.
2) x E
Proof. This is immediate from the definition.•
Proposition 2.4 Suppose that there exists a convex pointed cone ]( containing C. Then 1) IJ1fin(AII() = I1W"in(AIC) in case I1Vin(AIC) exists, 2) PrMin(ArK) ~ PrMin(ArC),
3) 1\din(AII()
~
1\din(AJC),
1) Wl¥in(AII() ~ Wl\lin(AIC). Proof. To prove the first assertion we observe that C is pointed~ Hence by Proposition 2~2, IlvIin(AiC) is a point,say x E A, if it exists~ By Proposition 2.3,
A
~
x+C
~
x+I(.
Consequently, x E Ii\lIin(ArK) and actually we have the equality since 1< is pointed~
The second assertion is triviaL . For the third onc,let x E Min(AIK), by Proposition 2.3, Since C
~
A
n (x -
K) = {x}.
A
n (x -
C)
A
n (x -
C) = {x} and
!<, ~
A
n (x -
J<).
Conscquently,
x E Min(AIC). To prove the last assertion it suffices to note that intI( is noncmpty whenever intC is noncmpty , and in this case intC k intK.• A counterexample for Proposition 2.4, in the case where the pointedness of
!< is violated, is obtained \vhen [( is the whole space. In that case every point of a set, in particular, the points which are not efficient with respect to C, is efficient \vith respect to ](. Howevcr,the following proposition provides a useful exception.
Proposition 2.5 Assume that there is a closed homogeneous half space H which contains C \ l(C) in its interior. Then
43
1) IMin(ArH) ~ IMin(AIC) in case the right hand side set is nonempty, 2) Min(AJH) ~ Min(AJC), 3) W ..i\1in(AIH) ~ W.l\IIin(AIC).
Proof. For the first assertion, supposing x E IMin(AIII), ,\ve prove that x E IMin(ArC). By Proposition 2.3, it suffices to show
th~t
A ~ x+C. Let y E IMin(AIC) which is nonempty by the assumption~ In vie\v of Proposition 2.3, we have two relations: x - y E II and Y -x E C. Hence, x - y E I(H). )J.1oreover, as C \ I(C) ~ H \ l(H) = intH, we conclude that y - x E l(C), which leads to the relation: A ~ y+C = x+y -x+C = x +l(C) + C = x+C. For the second asse~tion, let x E Nlin(AJH). Suppose that there is some yEA with x - y E C. If x - Y E I(C), nothing to prove. If not, then \\Fe have that x - y E C \ I(C) r;. intH, contradicting the fact that x E Min(AJH). To prove the last assertion, it suffices to apply Proposition 2.3 by noting that intC ~ intH.• Proposition 2.6 Let B and A be two sets in E with B C A. Then 1) IMin(A) n B ~ I1V!in(B),~ 2) PrMin(A) n B ~ PrMin(B); 3) Min(A) n B ~ Min(B); 4) WMin(A) n B ~ WMin(B).
Proof. Apply Proposition 2.3 to get the first assertion. For the third assertion let x E Min(A) n B. Then any y E B ~ A with x 2:: y implies Y 2: x. This means that x E Min(B). The other assertions follow from the third one by taking the cones [( and {O} U intC in the role of C.• Definition 2.7 Let x E E. The set A denoted by Ax.
n (x -
C) is called a section of A at x and
44
Proposition 2.8 For any x E E with Ax being nonempty, we have 1) IMin(A x ) ~ IMin(A) in case the right hand side set is nonempty;
2) l\1in(A~) ~ Min(A); 3) WMin(A x ) ~ WMin(A)~
Proof. For the first assertion, assuming y E IMin(A:I:)' 've have to prove that A ~ y+C, which implies y E IMin(A) by Proposition 2.3~ Let Z E IMin(A). Then
A
~
z+C,
in particular, y E z + c. (2.~) This shows that z E Ax , consequently, z E y + C. The latter relation and (2.2) give us the inclusion: z - y E l(C).
No\v,
A ~ z + C = y + z - y + C = y + l(C) + C = Y + C, as we wanted. For the inclusion of 2), let y E Min(Ax)~ If there is some z E A, y ~ Z, then z E A z . Hence z ~ y. In this way, y E Min(A)~ The last relation is derived from 2) by considering the cone {OJ UintC instead of C.• It should be noted that in the proposition above there is nothing stated about the propc;r efficiency. Ordinarily, the inclusion PrMin(A x ) ~ PrMin(A) is not true6 For instance, an efficient point of A which is not proper is a proper effici.cnt point of the section at itself. Kevertheless, if the point '\vhcre the section is taken is well chosen~ then a positive result can be expected.
Proposition 2.9
Assume that E is a finite dimensional Euclidean space and C is an acute convex cone with nonempty interior. If in addition
Rec(A) n -etC = {O}, then x E Pr M in(A) if and only if x E Pr lt1in(A e ), some e E E with e > x (that is e E x - intC). Proof. The "only if' part is derived from Proposition 2.6. For the "if'~ part,lct x E PrMin(Ae ) with e E E and e >- x. We state that for the acute cone C~ there
is a sequence of convex cones {Ci} in E such that
45
Ci+l ~ Ci, C \ {OJ ~ intCi and Ci = eiG. Indeed, let B = en B(O, 1), where B(O, 1) is a ball in E \vith the.center at 0 and radius 1. Then B is a base for C. Since C is acute, there is a hyperplane H passing through 0 such that clB does not meet H. :VI ore over, since clB is compact, H is closed, one can find £ > 0 such that the set
n
C(e) does not meet H.
= {(cIB) + B(O,c)}
'1~ake
Ci = cone(C(e/i)), i = 1,2, ... to get the required sequence. Now, if x is not a properly efficient point of A, then for each i, A n (x - C i ) # {x}~ (2.3) We state that there is an intergcr k such that the sets A n (x - Ci) is bounded whenever i ~ k. Indeed, if that is not the case, by Lemma 2.1 (Chapter 1), there exists a nonzero vector z E Rec(A n (x - Ci)) for all i. By Proposition 2.5 and Lemma 2.11 of Chapter 1, Z E Rec(A) n -ClCi, for all i, hence Z E Rec(A) n -clC, contradicting the assumption. Further, we state that there exists an intcrger m ~ k such that An (x - em) ~ e - C. (2.4) In fact~ if that is not true, one can find a sequence {Xi} with Xi E A n (x - C i ) \ (e - C). By the boundedness property we have just establishcd,the sequence may be assumed to converge to some y E E. Since x - C ~ int(e - C), the point y cannot belong to x - eIC, contradicting the fact that Ci = cleo Combine (2.3) with (2.4) to convince that x cannot be a properly efficient point of the set A c ••
n
'vVe finish this section by a remark on proper efficiency. The definition used here was first given by Henig (1982)~ There are some other definitions of Borwein (1980), Benson (1979), Geoffrion (1968), but they are all the same when the set is convex and the cone is closed pointed. The essential \vhich is common for these .definitions is that ,vhcnever a set A is convex, every point of Prl\din(AJC) can be obtained by solving the problem min~(x)
s.t. x E A,. where ~ is some vector from C·+ (see Theorem 2.12, Chapter 4). We refer the reader who is interested in these definitions, to Sawaragi et al. (1985) for more details.
46
3.EXISTENCE OF EFFICIENT POINTS
Let E be a real topological vector space, C a convex cone in E~ We recall that C is correct if
(cIC)
+ C \ l(C)
(clC)
+C \
~
C,
or equivalently, I(C) ~ C \ I(C)6
Definition 3.1 A net {x a : Q' E I} from E is said to be decreasing ( with respect to C) if X a >c XI3 for each a, {3 E I, f3 > 0'. Definition 3 .. 2 A set A ~ E is said to be C-complete (resp., strongly C·-complete) if it has no covers of the form {(xC( - cIC)C : a E I} (resp~, {(x a - C)C : Q' E I}) with {x O'} being a decreasing ne t in A6 It is obvious that whenever C is closed, C-completeness and strong Ccompleteness coincide. We shall return to conditions for a set to be C-complete later6 No\v we proceed to the main results on the existence of efficient points.
Theorem 3.3 Assume that C is a convex correct cone and A is a nonempty set: in E. 1.'hen i'din(ArC) is nonempty if and only if A has a nonempty C-complete section. ProoL If lvIin(A!C) is nonempty) then any point of this set will provide a Ccomplete section because no decreasing nets exist there. Conversely, let A~ be a nonempty C-completc section of A. Due to Proposition 2~8) to finish the proof, it suffices to show that Min(A~IC) is nonempty. First we consider the set, denoted by P, consisting of decreasing nets in A . Since A is nonvoid, so is P. Further, for two elements a, b E P, we write a >- b if b ~ a as two sets. It is clear that (>-) is a partial order in P. \rVe claim that any chain in ]:J has an upper bound. Indeed, let {a A = A E A} be a chain in }'J; let t3 denote the set of finite subsets B of A ordered by inclusion and let aB
= U{a A ; A E B}6
::'\ow) set
ao=U{aB:BEB}.
47
Then a o is an clement of P and a o >- a)., for each /\ E A, i.e~ a o is an upper bound of the chain. Applying Zorn's Lemma we get a maximal element, say a* = {xo: ; Q' E I} E PI Ko"\v, supposing to the contrary that 1\11in(A x rC) is empty, \ve prove that {x a - cIC)C : a E I} forms a cover of Ax . v'Vith this in hand, remembering that a* is a decreasing net in Ax , ViC arrive at a contradiction; Ax is not C-·complete and the theorem is proven. OUf last aim is to shovv that for each y Ax , there is some a E I such that (x a - cIC)C contains y. If that is not the case, then y E XQ - clC, for each a E II Since J\.fin(AxrC) is empty~ there is some Z E A:r "\vith y >c z. Due to the correctness of C) we conclude that X Q >c z, for every a E I. Adding z to the net a* ,ve see that this net cannot be maximal. The contradiction achieves the proof. •
E
Theorem 3.4 Assume that C is a convex co'ne and A is a nonempty set in E. Then Min(A[C) is nonempty if and only if A has a nonempty strongly C-complete sectionA
PrOOL It is clear that if J11in(AIC) is noncmpty, then any point of this set gives a required section. Now, suppose that Ax is a strongly C-complete section of A. If Min(AxfC) is empty, then by the argument of the proof of Theorem 3.3, we get a maximal net {x a : Q' E I} in P and we· prove that this net provides a cover of the second form in the definition. Indeed, if that is not true) then there is some y E Ax such that y E X Q - C, for all Q E I. Since j'vfin(A~'C) is empty, there is some z E A~ with y >c z. By Proposition 1.3 (Chapter 1), we conclude that Xa: > Z, for all Q' E I. By this the net a* cannot be maximal.. The contradiction
completes the proof. • Below we present some criteria for a set to be C-complete and by means of these criteria we obtain several results on the existence of efficient points which have been established in the literature up todays.
We recall that the cone C is Daniell if any decreasing net having a lower bound converges to its infimum and the space E with the given cone C is boundedly order complete if any bounded decreasing net has an infimum (Peressini-1967).
Lemm 3.5 A set A ~ E is C--complete in the following cases: 1) A is C-semicompact, in particular A is C-compact or compact; 2) A is weakly compact and E is a locally convex space; 3) A is closed bounded and C is Daniell, E is boundedly order complete; 4) A is closed minorized (i.e. there is x E E such that A ~ x + C) and C is Daniell. Proof. For the first case, suppose to the contrary that there is a cover as required
48
in ])efinition 3~2. By the C-semicompactness (Definition 3.1,Chaptcr 1) there are a finite number of indexes,say 1, u., n from I such that {(Xi - clC)C : i = 1, ... , n} covers A Vt-hcre Xl >C ... >c X n • This is a contradiction because Xn
E
Xi -
C
~ Xi --
elC, for all i = 1,
h"
n,
consequently, no clement of that cover contains X n E A. The second case is deduced from the first one by considering A and C in the weak topology and taking into account the fact that a closed convex set in a locally convex space is also weak closed~ For the t,vo last cases, it suffices to observe the following fact: if a net {x a } is a decreasing net in A , then it has an infimum to which it converges. Moreover, this infimum must be in A and therefore it belongs to X a - clC, for every 0'. Hence the net cannot provide a cover of the form in Definition 3.2... Corollary 3.6 Suppose that C is a cone with C \ l(C) lying in the interior of a closed homogeneous half space. Then Min(ArC) is nonempty for any compact set A in E. If, in addition E is locally convex, then the resuIt is true for any weak compact set. Proof. under the conditions of the corollary, by Lemma 3.5, A is H-complcte) where H is a closed homogeneous half space containing C \ 1(G) in i ts interior. In view of "fheorem 3.3, lVlin(AJH) is noncmpty, hence so is Min(AIC) by Proposition 2.5.•
Corollary 3.7 (Corley-1980,1987) Let C be an acute convex cone in E and A is C-semicompact. Then Nlin(AIC) is nonempty. Proof." By Lemma 3.5, A is (clC)-eomplete. In view of Theorem 3.3, the set lv1in(AlclC) is noncmpty. Xow apply Proposition 2.4 to the set A by setting the pointed cones clC and G in the role of ]( to complete the proof.•
Corollary 3.8 (Borwein-1983) Let"C be a closed convex cone in E. Suppose that one of the following conditions holds~~ i) A has a nonempty minorized closed section and C is Daniell; ii) A is closed and bounded} C is Daniell and E is boundedly order complete; iii) A has a nonempty compact section. Then lVIin(A'C) is nonempty. Proof. In virtue of Proposition 1.3 (Chapter 1), C is correct and in virtue of Lemma 3.5, the set A in the case ii), or its section in the cases i) and iii) is C-complete. The result is obtained from Theorem 3.3 and Proposition 2.8.•
49
Corollary 3.9 (Henig-1986) We are in the n-space Rn. Let C be a convex strictly supported cone in E. Suppose that there is a closed set B so that
A ~ B ~ A + clC and ReeCE) n -etC = {O}~ Then il1in (..4 f C) is nonempty. Proof We shall first show that A is cl.C-~completef For this, let {Xi} be an arbitrary sequence from A \vhich is decreasing with respect to the order generated byelC. This sequence is bounded, otherwise by Remark 2.6 (Chapter 1) there would exists a nonzero vector v, v E Rec(A) ~ Rec(B)
I
(l
-clG,
contradicting the assumption. In this way the sequence may be assumed to con... verge to some point X o E B since A ~ B and B is closed. Moreover, X o E Xi - c/C, for all i. Furthermore, since B ~ A + ciC, there is some a E A such that a E X o - elC. In this way, a E X o - clC ~ 3;i - eiC, i = 1, 2, .... Consequently,the family {(Xi - cIC)C} cannot cover A , establishing the cIC··· completeness of A. In vcw of Theorem 3.3, to finish the proof it is enough to verify the inclusion Min(ArcIC) ~ Min(AIC). Making use of the strict supportedness of C, select a vector ~(x)
~
E Rn such that
> 0, for all x E C \ {OJ.
if x tJ. Min(AJC), i.e. there is some yEA such that y - x E C \ {O}~ then applying the functional ~ to the vector y - x we can see that
!\O\V,
x - y ~ -(cIC) n ciC. In other words, x ¢ .l\;lin(AJcIC) and the verification of the required inclusion is done.•
Corollary 3.10 (Jahn-1986) Assume that one of the following conditions holds: i) E is normed space which is the topological dual of another normed space and A has a weak* -closed section, say A o ; ii) E is a reflexive Banach space and A has a weak closed section} say A o ~ If in addition A o has a lower bound and the norrn in E is increasing on C with respect to (C, R+), i.e. x, y E C, x - Y E C \ l(C) imply that the norm of x is strictly bigger than that of y, then M in(A~C) is nonempty. Proof. Observe that if one of the conditions above holds, then A o is weak or weak1'
compact, hence C---complete. Due to Propositions 2.4, 2.8 and Theorem 3.3, it suffices to prove that cIC is pointed. Indeed, note first that by the continuity, the
50
norm is nondecreasing on clC. If cl C is not p ointed ~ then for a point x E C \ I( C) , the set B = (x - cIC) n clC is unbounded, so is also the set B \ {x + l(clC)}. We arrive at the contradiction that the norm cannot be nondecrcasing on ciC. tfhc proof is complete.• Corollary 3.11 If E is of finite dimension, then Min(AIC) is nonempty whatever a compact nonempty set A and a convex cone C be~
ProoE In vic\v of 'l'heorem 3.4,it is enough to verify that A is strongly C- complete. We do this by induction on the dimension of C. The case where dime is zero is trivial. No,v, suppose that dime is n. If A is not strongly C-complcte, then there is a decreasing sequence {Xk} from A such that {(Xk - C)C : k =:: 1,2... } forms a cover of A. We may posit that limxk = x E A. Hence there is some m such that x E (x m - C)C . This implies also that x E (Xk - C)C for all k ~ m. Therefore without loss of generality we may assume that x ¢ Xk - C, for all k. Denote by L the minimal linear space generated by Xk - Xl, k = 2,3, .... Then x - Xl E L. Moreover, LnriC = 0. Indeed, any vector x of L can be expressed in the form:
x = I:~l ti(Xk(i) - Xl) , (3.1) where ti i=- 0, k(i) E {I, 2, ... } and k(i) < k(i + 1), i = 1, h9' m. We prove by induction on m that x cannot belong to riC. For m = 0, the assertion is trivial, because 0 ¢ riC. For m > 0, if t m < 0, we rewrite the sum (3.1) as: x - tm(Xk(m) - Xl) = E~~l ti(Xk(i) - Xl). (3.2) If x E riC, the vector in the left hand side of (3.2) belongs to riC, while the vector in the right hand side, by induction, does not belong to it. Therefore x tJ. riC. Now if t m > 0, by using the fact that Xk{m-l) >c Xk(m) we can express x in the form: x 1: ~12 ti(Xk(i) - Xl) + (tm-l + tm)(Xk(m-l) - Xl) - tmc, where c is some vector from C. Carrying the vector tmc into the left hand side and using the same argument we have just e>qJloitcd in the case t m < 0 to assure that x ~ riC. In this way we have established the relation
=
LnriC =
In particular dimL a new cone C 1 by
< n.
Ct =
0.
Now, separate L and riC by a hyperplane H and define
HnC~
Then dime! < dimG. 1foreover) {(Xk - C1)C ; k = 1,2, ...} still covers A, where {x n } is decreasing with respect to the order generated by C 1 • This contradicts the assertion of induction and the corollary is proven.•
51
R"emark 3.12 The result of Corollary 3.11 can fail if E is of infinite dimension. In Example 3.13 it will be constructed a nonempty compact set A and a convex cone C in infinite dimensional spaces such that Min(ArC) ;;:= 0. Recently SternaKarwat (1986a,1987) has proven the following interesting result: If C satisfies the ,the condition, denoted by (SK): for every linear subspace L of E, C n L is a linear subspace \vhenever so is cl(C n L), then Min(AIC) is nonempty whatever a noncmpty compact set A ~ E be. This fact also means that every compact set is strongly C- -complete if C satisfies the condition mentioned above.
Example 3.13 (Stcrna-Karwat, 1986a) Let the space E and the cone C be as in Example 1.2 (4) of Chapter 1. Let en stay for the vector with the unique nonzero component being 1 at the n-th place. Consider the set
A = {x o } u {U:=l L
f=l Xi
=
n = 1,2, Oh},
where Xo
= el
./2n - 1 -
X n --
'" n-l e~ L- i::::::l
en /2 n - 1 , n > _ 1·
Then A is cornpact because liml: ~=1
Xi
=
X o•
lturthcrmore, ~n > Li~l Xi> Min(AfC) = 0.
Xo
,vhich shows that
We recall that A
~
"" n+l LJ i=l Xi
E is a polyhedron if it is a convex hull of a finite set.
Corollary 3.14 let A be a polyhedron in E. whatever the cone C be.
Then lVin(AJC) is nonempty
Proof. Since A is a polyhedron, there is a finite dimensional subspace E 1 in E such that A ~ E 1 • By Corollary 3.11, the set lVlin(AICnE1 ) is noncmpty. Direct verification shows that 114in(AJC n E 1 ) ~ Min(ArC) .•
Corollary 3.15 Assume that there exists a con'Uex cone [( which is not the whole space and contains C \ {OJ in its inte.rior. Then for every compact set AcE, the set PrMin(AIC) is nonempty. Proof. Consider the cone D = {OJ U intK. It is correct and pointed. By Theorem 3.3, the set Min(AJD) is nonempty~ Hence so is PrMin(ArC) because it contains
Min(A!D) .•
52
Corollary 3.16 Suppose that E is a finite dimensional space, C is a closed pointed cone, A is C-convcx C-closed. If 1\!Iin(A1C) is nonempty, then so is Pr J11.l in(A 1C). Proof. Since A + C is convex closed, ,ve may assume that its interior is nonvoid. 1'ake x E int(A + C) and consider ·the section (A + C)x . ¥le state that it is compact, hence, in vie\v of Corollary 3.15, the result follows. Indeed, if not, since it is closed, there would be a nonzero recession vector v E -C n Rec( (A + C)x). It follows from the convexity of (A + C)x that for any a E (A + C)x ,
a + tv E (A + C)x , for all t ~ o. Hence lvfin(AlC) = 0. The contradiction complete the proof. • We finish this section by presenting a condition for the existence of efficiency in terms of recession concs. Theorem 3.17 nonempty, then
Assume that C is not the whole space.
If Wl\.1in(AIC) is
Rec(A) n -intC = 0, and if PrMin(AIC) is nonempty, then
Rec(A)
n -clC ~ l(clC).
Proof. Suppose to the contrary that WMin(AIC) is Donempty, but there is a
vector v,
v E Rec(A) n -intC. For each x E A, by Lemma 2.11 (Chapter 1), v E Rec(A - x). In view of Lemma 2.4 (Chapter 1) and since v E -intC, we may choose a neighborhood U of zero in E small enough, so that for each neighborhood V of 00 ,
cone(v + U) n (A - x) n V f 'lJ and cone(v + U) \ {O} ~ -intC. With V not containing zero, the above relations show that (A - x) n -intC #- 0. In other words, there is some yEA so that x - y E intC, i.e. x cannot be a \veakly efficient point of A. No\v, suppose that PrMin.(A1C) is nonempty, say x is a point of that set. There exists a convex cone I( such that x E Min(AII() and C \ I(C) S; intI(. By the first part of the theorem, Rec(A) n -intI( is empty. Consequently,
53
Rec(A) n -ciG completing the
proof~
~
l(clC),
•
It is '\vorthwile noticing that the condi tiODS stated in '"fheorem 3.1 7 are necessary, but not sufficient for the existence of efficient points. Ho\vever, the following result is sometimes helpful when the sets arc polyhedral. We recall that a set is polyhedral if it is the sum of a polyhedron and a polyhedral cone.
Theorem 3.18 Let A be a polyhedral set in E. Then Min(AICr ) is nonempty if and only if
Rec(A) n -c
~
l(C)
Proof. Let A = A o + Co, where A o is a polyhedron and Co is a polyhedral cone. It is cleat that Rec(A) = Coe If Co n -C docs not entirely lie in leG), say v E (Co n -C) \ l(G), then for every x E A, the point x + v E A and x - (x + v) = -v E C \ I(C), implying that Min(AIC) is empty.
-c
Conversely, suppose that Rec{A) n ~ l(C). Let E 1 be a finite dimensional subspace in E which contains A. Then by an argument similar to that we have used in the proof of Corollary 3.11 one can show that A is strongly CI-complete, where C l = en E l . Hence in virtue of Theorem 3.4, the set Min(A;C1 ) is nonempty. Direct verification shows that Min(AIC1 ) ~ Min(AlC), completing the proof.-
4.DOMINATION PROPERTY
In the previous section \Vc have studied the conditions under which the set of efficient points of a set is nonvoid. Another question \vhich is important in the theory of decision making is about the existence of an efficient alternative which is smaller (with respect to the ordering cone) than a given alternative. This is the domination property or sometimes (see Sawaragi ct al.-1985) called external stability,vhich first was introduced by Vogel (1977) and recently investigated in several works of Benson(1983), Luc(1984,19S8a), Henig (1986) and others. Except for Luc (1988a), the study has been performed in finite dimensional spaccs~Here 've \vo~k in infinite dimensions and by exploiting the existence results of the previous section, we provide rather general criteria for the domination property to hold. As before, E is a topological vector space over reals, C is a convex cone in E.
54
Definition 4.1 We say that the domination property, denoted by (DP) holds for a set A ~ E if for every point YEA) there is some x E lW"in(AIC) such that y ~c x.
Proposition 4.2 (DP) holds for a set A if and only A ~ Min(AIC) + C.
if
Proof. This is immediate from the definition.•
Theorem 4.3 CDP) holds for a set A ~ E if and only if for each YEA, there is some x E A y such that one of the following conditions is satisfied: i) Ax is C --complete and C is corTee t; ii) AX' is strongly C· ·complete and C is arbitrary convex. Proof. If (DP) holds for A, then obviously for each yEA, the point x required in the definition of (DP) yields condition ii). The converse assertion stems from Theorems 3.3 and 3.4. 11
Corollary 4.4 (Ifenig-1986) We are in Rn. Suppose that C is pointed closed and A .f- B is closed, where B is a set with 0 E B ~ C. Then (DP) holds if . PrlVlin(AIC) is nonempty. Proof. By Theorem 3.. 17,
Rec(A) n -C = {O}. Consequently, for x E A, the section (A+B)x of A+B is bounded closed, hence Ccomplete. Thus, condition i) of Theorem 4.2 is satisfied. By this, Min«(A+B):z:]C) ~s nonempty. Remembering that 0 E B ~ C, we conclude that l1Jin(A z IC) is nonemptyand (DP) follows .• It is worth noticing that the closedncss assumption in Corollary 4.4 is important. To see this, let C be the cone in Example 1.. 2(2) of Chapter 1 and let
x = y,O ~ x> -l,z = O}U{(-I,-2,O)}. Then Pr Min(ArC) is nonempty set, it consists of the single point (-1, -2,0) and despites of this, the point (0,0, 0) E A is not dominated by (less than or equal to) the efficient p oin t. A= {(x,y,z) E R 3
Corollary 4.5
:
(Benson-1983;Henig-1986) We are in Rn with C being pointed closed. If A is C-convex C-closed, then the following statements are equivalent:
i) (DP) holds for A
1-
55
ii) Min(A(C) is nonvoid; iii) Pr1\11in(A(C) is non'Void.
*
Proof. The implication i) => ii) is trivial. The implication ii) iii) is drawn from Corollary 3.16 and finally, Corollary 4.4 gives the irnplication iii) =} i).•
Corollary 4.6 Assume that C is closed and A i.s closed with Rec(A) n -C = {a} and it satisfies condition (CB) in Definition 2.7, Chapter 1. Then (DP) holds for A if one of the following requirements is fulfilled: i) C is Daniell and E is boundedly order complete; ii) any bounded part of A is relatively compact; iii) E is locally convex, A is C -convex. Proof. For every x E A, by Proposition 2~5 (Chapter 1)
Rec(Ax } ~ Rec(A)
n Rec(x -
\VC
have the inclusions:
C) = Rec(A)
n -C = {O}.
Hence in view of Lemma 2.9(Chapter 1), A is bounded. If i) or ii) holds, then Ax is C-complete. If iii) holds, then A~ is closed, convex, bounded, hence weakly compact. By Lenuna 3.5, it is C-complctc too. Now apply Theorem 4.2 to complete the proof. •
Corollary 4.7 Assume that E is of finite dimension) C is closed pointed and A is closed. If there is a set B ~ E such that the sum A + B has a nonempty proper efficie nt pain t set, then (DP) holds for A~ Proof. By Theorem 3.17,
Rec(X + Y) n -C = {O}, which in its turn, in view of Theorem 2.12(Chaptcr 1) gives
Rec(A)
n -c =
{O}~
The result is then obtained from the preceding corollary .• Corollary 4.8 Assume that C is closed, pointed and for two closed sets A and B in E, the following requirements are satisfied: i) they yield condition (CB)
n -Ree(B) = {O}; + Ree(E)} n -C = {O}. Then (DP). holds for the sum A + B if either any ii) Rec(A)
iii) {Rec(A)
closed bounded part of A
and B is compact, or at least one of them is locally compact and C is Daniell,
E is boundedly order complete" In particular,
if E is of finite dimension and
56
Pr 1\1in(A[C) is nonempty, then (DP) holds for every sum A compact set in E.
+B
with B being a
Proof. Using the argument of Theorem 3.7(Chaptcr 1), observe first that the set A + B is closed. Moreover, for any x E A + B, the section (A + B)x is boundcd~ Hence as in the proof of Gorollary 4.6, (A + B)z is C-complete. tThe result now is deduced from Theorem 4.3.
Finally, if the space is finite dimensional, then condition (C B) holds automatically. Besides, when B is compact, ii) is obvious. 'l'he last requirement is satisfied in view of Theorem 3.17.•
Definition 4.9 We say that the weak domination property holds for a set A ~ E if for every yEA, either y E WMin(AIG), or there is some x E WMin(AIG) such that y ~c x. Proposition 4.10 The weak domination property holds for every compact set. Proof. This is drawn from Theorem 4.3 by observing that the cone !( consisting of zero and intO is always correct and every compact set is !(-complcte~ •
It is interesting to observe the following relationship between (DP) and the classical version of the Hahn-Banach theorem. Recall that a function p from a vector space E to R is sublinear if .
p(x + y) ::; p(x)
+ p(y), p(AX) = Ap(X), for all x, y E E, A c O~
The classical version of the Hahn-Banach theorem says that for every sublinear function p, there is a linear function l from E to R such that
l(x) :s; p(x), for all x E E. It is easy to see that in the space F of all functions from E to R if we introduce the order
(~)
by:
.
I,g E :F,j
~ 9
if and only if f(x)
~
g(x) for all x E E,
in other ,vords, the order is generated by the cone
C = {j E :F = 1(x) 2: 0 for all x E E}, then the (DP) of the set of all sublinear functions is the same as the Hahn-I3anach theorem. This is because among sublinear functions, a flIDction is efficient if and only if it is linear.
57
5.VECTOR OPTIMIZATION PROBLEMS
Let X be a nonempty subset of a topological space and F be a set-valued map from X into E, where E is a real topological vector space which is ordered by a convex cone G. The general vector optimization problem corresponding to X and F is denoted by (VP) and written as follows: ·
minF(x) s.t. x E X. This amounts to finding a point x E X, called an optimal ( or minimal, or efficient) solution of (VP), such that
F(x)
n l\1Iin(F(X)IC) # 0,
where F(X) is the union of F(x) over X. The clcments of Mi~(F(X)IC) are also called optimal values of (V P). The set of optimal solutions of (VP) is denoted by S(X; F). By replacing IMin, PrMin, WMin instead of .l11in(F(X)IC) we get the notions of IS(Xi F), PrS(X; F) and WS(X; F). The set X is sometimes called the set of alternatives and F(X) is the set of outcomes. Problems with set-valued data arise originally inside of the theory of vector optimization. As we shall see later, in Chapter 5, for a vector problem, its dual constructed by several means, is a problem whose objective function is set~-valued) whatever the objective of the primal problem be. On the other hand several objects appeared in the theory of optimization) nonsmooth analysis ctc~ have the sct·-valuedness as an inherent property. For instance, the sets of subdifferentials, tangent cones in nonsmooth analysis, or the sets of feasible solutions, optimal solutions in parametric programming are all set-valued maps. Therefore optimization theory for set-valued maps are of increasing interest. As in mathematical progranuning, additional constraints arc often imposcd on the set X. ~amely, let E 1 be a topological vector space and ]( a convex cone in it. Let G and H be two set-valued maps from X to E 1 ~ Two kinds of constraints generalize the inequality constraint in scalar programming:
{x E X, G(x) n -I( 1= 0}, {x E X,H(x) ~ -I(}.
(5.1) (5.2)
These constraints may be obtained from each other by an appropriate redefinition of the maps G and H. To see this, suppose first that (5.1) is given. Define H by the rule: H(x) = G(x) n -K if G(x) n -1(:;6 0, H(x) = G(x) otherwise.
58
It can be seen that domH and domG coincide and the sets defined by (541) and (5.2) arc the same4 Conversely, suppose that (5.2) is given. 'Then G can be defined as foliov/s: .
G(x) = H(x) if Ii(x) ~ -1<, G(x) = H(x) \ -I( otherwise.
(5.4)
The same conclusion is true for the sets of (5.1) and (5.2) with G being defi~cd by (5~4). The only disadvantage of this construction is that neither convexity nor continuity properties are preserved, for instance if H(x) is convex, then it is not necessary for G(x) of (5.4) to be convex. In the chapters to come we deal with the constraint of (5.1). A helpful relationship between efficient, properly efficient and weakly efficient solutions of (VP) is given in the next proposition.
Proposition 5.1 For (VP), we have the following inclusions:
Pr8(X;F) ~ S(X;F) ~ WS(X;F). Furthermore, if IS(X; F) is nonempiy, then IS(X;F) = S(X;F). Proof. These inclusions. axe immediate from the definition of the sets above and from Proposition 2.2.• Below "\ve prove some existence results for optimal solutions of (VP). We need first a lemma concerning C·· complete sets which were introduced in Section 3.
Lemm 5.2 Assume that C is convex, X is nonempty compact and F is upper C-continuous onX withF(x)+C being closedC-completefor every x EX. Then F(X) is C-complete. Prool. Suppose to the contrary that F(X) is not C-complete. 'l'his means that there is a decreasing net {aa : a E I} from F(X) such that {( aa - clC)C : Q E I} is a cover of }'(X). Let Xo: E X with aO' E F(x a ). Without loss of generality "\vc may assume that limx a = x E X. Then, for every neighborhood V of F~(x) in E, there is an index j3 E I such that ao: E V
+ C for all a ~ f3.
(5.5)
Further, since {aa:} is decreasing, a Q E a6
+ C, for every 8 ~
This and (5.5) give the following relation: ao: E cl(F(x) + C) = F(x)
Q' •
+ C for all
Q.
59
V\'e arrive at the contradiction: F(x)
+ C cannot be C-complete.•
Theorem 5.3 Under the assumptions of Lemma 5.2, S(X; F) is nonempty.
if C
is correct, then the set
Proof. By Lemma 5.2, F(X) is a C-complete set. Apply Theorem 3.3 to the set F(X) to obtain the result.•
. Theorem 5.4 Assume that X is compact, C is correct and F is upper Ccontinuous, C-semicompact-valued on X. Then S(X; F) is nonempty. Proof. Invoke the theorem to Theorem 7.2 (Chapter 1) and 1'1heorem 3.3.•
Corollary 5 .5 (Corley-1987) If JY is compact, C is acute, F is C-semicompactvalued, upper continuous. Then S(X; F) is nonempty. °
ProoE Take the cone cIC instead of C in Theorem 5.4 to conclude that the set 1\1in(F(X)lcIC) is noncmpty. Due to Proposition 2~4, Min(F(X)IC) is nonempty and hence so is S(X; F) .• Corollary 5.6 Assume that X· is compact) C satisfies the condition (SK)(Remark 3.12) , F is compact-valued upper continuous on X. Then S(X; F) is nonempty. Proof. Specialize Theorem 7.2 of Chapter 1 for the case C = {O} to see that F(X) is a compact set. By Remark 3.12, Min(F(X)JC) is nonempty, hence so is S(X;F) .•
We return now to the case where the problem carries an explicit constraint of the form (5.1):

(GP)    min F(x)
        s.t. x ∈ X, G(x) ∩ -K ≠ ∅.

Definition 5.7 A point x_o ∈ X is said to be a feasible solution of (GP) if G(x_o) ∩ -K ≠ ∅. The set of feasible solutions of (GP) is denoted by X_o.
Proposition 5.8 Let x_o be a feasible solution of (GP). Then a_o ∈ F(x_o) is an optimal value of (GP) if and only if

(a_o - C \ l(C), -K) ∩ (F(x), G(x)) = ∅, for all x ∈ X.    (5.6)

Proof. Posit first that a_o ∈ Min(F(X_o)|C). Then by Proposition 2.3,

(a_o - C) ∩ F(X_o) ⊆ a_o + l(C),

or equivalently,

(a_o - C \ l(C)) ∩ F(X_o) = ∅.    (5.7)

Further, if x ∈ X \ X_o, then G(x) ∩ -K = ∅. This and (5.7) give (5.6). Conversely, if (5.6) holds, then whenever G(x) ∩ -K ≠ ∅, i.e. x ∈ X_o, one must have the relation

(a_o - C \ l(C)) ∩ F(x) = ∅.

In view of Proposition 2.3 this implies a_o ∈ Min(F(X_o)|C). ■
Corollary 5.9 Let N_o be the cone composed of (C \ l(C), K) and of zero in the product space E × E_1. Then a_o ∈ E is an optimal value of (GP) if and only if

(a_o, 0) ∈ Min(∪{(F(x), G(x) + K) : x ∈ X} | N_o).    (5.8)

Proof. Suppose first that a_o ∈ Min(F(X_o)|C). This means that a_o ∈ F(x_o) for some x_o ∈ X_o and, by Proposition 5.8, relation (5.6) holds. Denote by Q_o the set under Min in relation (5.8). Then (a_o, 0) ∈ Q_o, since x_o is a feasible solution. Moreover, relation (5.6) implies that

((a_o, 0) - N_o \ l(N_o)) ∩ Q_o = ∅.

Hence, in virtue of Proposition 2.3, (5.8) holds.
Conversely, if (5.8) is true, then there is some x_o ∈ X such that

(a_o, 0) ∈ (F(x_o), G(x_o) + K).

It follows in particular that x_o is a feasible solution and a_o ∈ F(x_o). Now, if a_o is not an optimal value of (GP), then there is some x ∈ X_o and a ∈ F(x) such that a_o - a ∈ C \ l(C). In other words, (a, 0) ∈ Q_o and (a_o, 0) - (a, 0) ∈ N_o \ l(N_o), contradicting (5.8). ■
In the rest of the section we take up the special case where F is a point-valued map. We use f instead of F to indicate this case.
Corollary 5.10 Suppose that X is compact, C is correct and f is upper C-continuous. Then S(X;f) is nonempty.
Proof. Since f is point-valued, it is C-semicompact-valued. The corollary is then immediate from Theorem 5.4. ■
Definition 5.11 We say that (VP) is
1) a linear problem if X is a polyhedral set, C is a polyhedral cone and f is a linear map;
2) a convex problem (resp., strictly convex, quasiconvex, strictly quasiconvex problem) if X is convex and f is C-convex (resp., strictly C-convex, C-quasiconvex, strictly C-quasiconvex).
Proposition 5.12 Let (VP) be a linear problem. Then S(X;f) is nonempty if and only if

Rec(f(X)) ∩ -C = {0}.

Proof. This follows from Theorem 3.18. ■
Proposition 5.13 For (VP) strictly convex or strictly quasiconvex, the sets WS(X;f) and S(X;f) coincide.
Proof. Let x ∈ WS(X;f). If x ∉ S(X;f), then there exists some y ∈ X such that

f(y) ∈ f(x) - C \ {0}.

Consider the point z = (x + y)/2. By the definition of strict quasiconvexity,

f(z) ∈ (f(x) + f(y))/2 - int C ⊆ (f(x) + f(x) - C)/2 - int C ⊆ f(x) - int C,

contradicting x ∈ WS(X;f). ■
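The strictness assumption cannot be dropped. For instance, with X = [0,1] ⊆ R, C = R²₊ and the C-convex (but not strictly C-convex) map f(x) = (x, 0), every point of X is weakly efficient, because the second component of f(y) - f(x) is never negative, while S(X;f) = {0}: for x > 0 one has f(x) - f(0) = (x, 0) ∈ C \ {0}.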
Chapter 3
Nonsmooth Vector Optimization Problems
This chapter deals with vector optimization problems having set-valued objectives. We develop the theory of contingent derivatives for set-valued maps and then study local optimality conditions for these problems via contingent derivatives. The concept of contingent cones was introduced by Bouligand as early as the thirties. However, it has found wide application in nonsmooth analysis only recently, due to the work of Aubin (1981) and to the development of other concepts of tangent cones in the study of nondifferentiable functions (Aubin and Ekeland 1984, Frankowska 1985, Penot 1984, Ward and Borwein 1987). Contingent cones suffer from the undesirable defect of being nonconvex, hence many powerful techniques of convex analysis cannot be exploited when these cones are used to define derivatives of functions. Nevertheless, since they contain almost every existing tangent cone, they carry rich information about the local behavior of sets, and they always exist regardless of the structure of the sets. Moreover, these cones enjoy enough properties to admit a decent calculus and they are well suited to define derivatives of set-valued functions. This is why we choose contingent derivatives in order to produce optimality conditions for vector problems with set-valued data.
In the first section we give the definitions of contingent derivatives, semidifferentiabilities and some calculus rules. In Sections 2 and 3 we derive necessary and sufficient optimality conditions for both unconstrained and constrained vector problems by means of contingent derivatives. Sections 4 and 5 are devoted to the classical cases where the data of the problems are differentiable and convex.
1. CONTINGENT DERIVATIVES
Let X and Y be two separated topological vector spaces over the reals and let F be a set-valued map from X into Y.
Definition 1.1 Let A be a nonempty subset of X and x ∈ cl A. The contingent cone to A at x is the cone

T(A, x) = ∩{cl cone((A - x) ∩ U) : U ∈ 𝒰},

where 𝒰 is the filter of neighborhoods of zero in X.
Definition 1.2 Let (x, y) belong to graf F. The contingent derivative of F at (x, y) is the set-valued map from X into Y, denoted by DF(x, y), defined by the rule: for any u ∈ X, v ∈ DF(x, y)(u) if and only if (u, v) ∈ T(graf F, (x, y)).
In terms of nets, it can be seen that v ∈ DF(x, y)(u) if and only if there exist a net {(x_α, y_α)} from graf F converging to (x, y) and a net of positive numbers {t_α} such that lim t_α(x_α - x, y_α - y) = (u, v). Below we use this expression of derivatives in terms of nets to define lower and upper semidifferentiabilities. If {a_α} is a net, then by writing {a_αβ} we shall henceforth mean a subnet of the given net.
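For instance, with X = Y = R and F(x) = {y ∈ R : y ≥ |x|}, the graph of F is the cone {(x, y) : y ≥ |x|}, which coincides with its own contingent cone at (0, 0); hence DF(0,0)(u) = {v : v ≥ |u|} for every u ∈ R.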
Definition 1.3 F is said to be lower semidifferentiable at (x, y) ∈ graf F if for any net {x_α} from dom F converging to x and any net {t_α} of positive numbers with lim t_α(x_α - x) = u for some u ∈ X, there exists a net {y_αβ}, y_αβ ∈ F(x_αβ), such that the net {t_αβ(y_αβ - y)} converges.
F is said to be upper semidifferentiable at (x, y) ∈ graf F if for any net {(x_α, y_α)} from graf F, not coinciding with (x, y) and converging to (x, y), there is a net {t_αβ} of positive numbers such that the net {t_αβ(x_αβ - x, y_αβ - y)} converges to a nonzero vector of the product space X × Y.
Here are some simple examples to illustrate the definition.
1. Let X = Y = R and let

F(x) = {y ∈ R : |x| + 1 ≥ |y| ≥ |x|^{1/2}}.

It is easy to see that F is lower semidifferentiable at (0, 1), but not lower semidifferentiable at (0, 0).
2. Let X = R and let Y be the space of Example 1.2(3), Chapter 1. For every t ∈ R, set F(t) = ∅ if either t > 1 or t ≤ 0, and F(t) = {x(t)} otherwise, where x(t) is the sequence whose terms are all zero except for the single term in the i-th place, which equals t, i being the unique positive integer satisfying 1/i ≥ t > 1/(i + 1). Then F is not upper semidifferentiable at (0, 0).
In the literature, there are some definitions similar to the one above. We recall here the notion of Dini upper and lower derivatives from Penot (1984). For u ∈ X, the Dini upper and lower derivatives of F at (x, y) in direction u are

D_upp F(x, y)(u) = limsup_{(t,v)→(0,u), t>0} (F(x + tv) - y)/t,
D_low F(x, y)(u) = liminf_{(t,v)→(0,u), t>0} (F(x + tv) - y)/t.

It is clear that the Dini upper derivative in direction u is the same as the contingent derivative DF(x, y)(u). Moreover, whenever the lower Dini derivative exists at (x, y) ∈ graf F in every direction u ∈ T(dom F, x), F is lower semidifferentiable at that point.
Proposition 1.4 Suppose that X and Y are normable spaces. Then
1) F is lower semidifferentiable at (x, y) ∈ graf F if it is Lipschitz at x and the space Y is of finite dimension;
2) F is upper semidifferentiable at (x, y) ∈ graf F if both X and Y are finite dimensional.
Proof. For the first statement, let (x, y) ∈ graf F, let {x_α} be a net from dom F converging to x and let lim t_α(x_α - x) = u for some u ∈ X, t_α > 0. It is clear from the Lipschitz condition that there are y_α ∈ F(x_α) with lim y_α = y and

d(y_α, y) ≤ k d(x_α, x),

where k is a Lipschitz constant and d(·,·) is the distance between two points. Since {t_α(x_α - x)} is a convergent net, the net {t_α d(x_α, x)} is bounded, hence so is the net {t_α d(y_α, y)}. Whenever Y is of finite dimension, one can choose from the latter net a convergent subnet, which proves that F is lower semidifferentiable.
Now, suppose that X and Y are finite dimensional spaces. Let {(x_α, y_α)} be a net from graf F converging to (x, y). Without loss of generality we may assume that d(x_α, x) ≠ 0 and the net {(x_α - x)/d(x_α, x)} converges to some u ∈ X. Set t_α = 1/d(x_α, x) and consider the net {t_α(y_α - y)}. If {t_α d(y_α, y)} has a bounded subnet, one can choose a convergent subnet from {t_α(y_α - y)} and the second statement is proven. If not, we may assume that {s_α(y_α - y)} converges to some v ∈ Y, where s_α = 1/d(y_α, y). Of course, v ≠ 0. Consider the net {s_α(x_α - x)}. This net converges to 0 because

s_α d(x_α, x) = d(x_α, x)/d(y_α, y) = 1/(t_α d(y_α, y)).

In any case, there is a net {t_αβ} such that {t_αβ(x_αβ - x, y_αβ - y)} converges to a nonzero vector. ■

Now suppose that Z is a separated topological vector space and G is a set-valued map from Y to Z. Besides, F_1 and F_2 are two set-valued maps from X to Y.
Proposition 1.5 For the maps F, F_1, F_2 and G above, we have the following:
1) for every nonzero number t, the map tF is upper (resp., lower) semidifferentiable at (x, y) ∈ graf tF if so is F at (x, y/t) (the case t = 0 is trivial);
2) if F and G are lower semidifferentiable at (x, y) ∈ graf F and (y, z) ∈ graf G, then G∘F is lower semidifferentiable at (x, z);
3) if F_1 and F_2 are lower semidifferentiable at (x, y_1) ∈ graf F_1 and (x, y_2) ∈ graf F_2, then F_1 + F_2 is lower semidifferentiable at (x, y_1 + y_2);
4) if F is compact at x, upper semidifferentiable at (x, y) with DF(x, y)(0) = {0}, each y ∈ F(x), while G is upper continuous on F(x), upper semidifferentiable at (y, z), each y ∈ F(x), z ∈ G(y), then G∘F is upper semidifferentiable at (x, z), each z ∈ G∘F(x);
5) if F_1 and F_2 are upper continuous at x ∈ X, upper semidifferentiable at (x, y_1), each y_1 ∈ F_1(x), and at (x, y_2), each y_2 ∈ F_2(x), with either DF_1(x, y_1)(0) = {0} or DF_2(x, y_2)(0) = {0}, and one of them is compact at x, then F_1 + F_2 is upper semidifferentiable at (x, y_1 + y_2).
Proof. The first three assertions of the proposition are immediate from the definitions, so we omit their proofs. We proceed now to prove statement 4). Let {(x_α, z_α)} be a net from graf G∘F converging to (x, z). There is a net {y_α}, y_α ∈ F(x_α), such that z_α ∈ G(y_α). Since F is compact, we may assume that

lim y_α = y ∈ F(x).

By the upper continuity of G, z ∈ G(y). Taking subnets if necessary and due to the upper semidifferentiability of F and G, we may also assume that there are positive numbers t_α, s_α such that the nets {t_α(x_α - x, y_α - y)} and {s_α(y_α - y, z_α - z)} converge to some nonzero vectors (u, v) ∈ X × Y and (q, w) ∈ Y × Z, respectively. Consider the nets {t_α/s_α} and {s_α/t_α}. It is clear that at least one of them possesses a convergent subnet, which we denote by the same index.
The first case: lim t_α/s_α = t. It is obvious that {t_α(x_α - x, z_α - z)} converges to (u, tw). This vector is nonzero because (u, v) is nonzero and DF(x, y)(0) = {0}.
The second case: lim s_α/t_α = t. The net {s_α(x_α - x, z_α - z)} converges to (tu, w). Again, the vector (tu, w) is nonzero. Indeed, if q = 0, then w ≠ 0. If q ≠ 0, so must v be. Consequently u ≠ 0 and the statement is established.
For the last statement, let {(x_α, y_1α + y_2α)} be a net from graf(F_1 + F_2), not coinciding with (x, y_1 + y_2) and converging to it, where y_1α ∈ F_1(x_α), y_2α ∈ F_2(x_α), y_1 ∈ F_1(x), y_2 ∈ F_2(x). Suppose that F_1 is compact at x. Without loss of generality we may assume that {(x_α, y_1α)} converges to some (x, y_1*) ∈ graf F_1. Hence {(x_α, y_2α)} converges to (x, y_1 + y_2 - y_1*) ∈ graf F_2. By the upper semidifferentiability of the maps F_1 and F_2, it can be assumed that

lim t_α(x_α - x, y_1α - y_1*) = (u, v_1) ∈ X × Y,
lim t_α(x_α - x, y_2α - y_1 - y_2 + y_1*) = (u, v_2) ∈ X × Y,

where (u, v_1) and (u, v_2) are nonzero. It follows from the assumption on the value of the derivatives at 0 that u must be nonzero. In that case

lim t_α(x_α - x, y_1α + y_2α - (y_1 + y_2)) = (u, v_1 + v_2),

which is nonzero. ■
Proposition 1.6 The following relations hold:
1) D(tF)(x, ty)(u) = tDF(x, y)(u), for each t ∈ R, (x, y) ∈ graf F;
2) D(G∘F)(x, z)(u) ⊆ ∪{DG(y, z)(v) : v ∈ DF(x, y)(u)}, if G is upper continuous on F(x), while F is compact at x and upper semidifferentiable at (x, y) with DF(x, y)(0) = {0}, where y is a point from F(x) such that z ∈ G(y);
3) D(F_1 + F_2)(x, y)(u) ⊆ ∪{DF_1(x, y_1)(u) + DF_2(x, y_2)(u) : y_1 ∈ F_1(x), y_2 ∈ F_2(x), y = y_1 + y_2}, if F_1 and F_2 are upper continuous at x, one of them is compact at x and one of them, say F_1, is upper semidifferentiable at (x, y_1) with DF_1(x, y_1)(0) = {0}, each y_1 ∈ F_1(x).
Proof. The first assertion is trivial.
For 2), let w ∈ D(G∘F)(x, z)(u). By definition, there is a net {(x_α, z_α)} from graf G∘F converging to (x, z) such that

lim t_α(x_α - x, z_α - z) = (u, w), where t_α > 0.

Let {y_α} be a net such that y_α ∈ F(x_α), z_α ∈ G(y_α). Since F is compact at x, we may assume that lim y_α = y ∈ F(x). Further, since G is upper continuous, z ∈ G(y). Now, it follows from the upper semidifferentiability of F that there is a net {s_α} of positive numbers such that some subnet of {s_α(x_α - x, y_α - y)} converges to a nonzero vector (q, v). We denote that subnet by the same index. Consider the nets {t_α/s_α} and {s_α/t_α}. By taking subnets if necessary, we may assume that at least one of these nets converges to a number t.
Let first lim t_α/s_α = t. Then

lim t_α(y_α - y, z_α - z) = (tv, w) and lim t_α(x_α - x, y_α - y) = (tq, tv) = (u, tv).

In other words, w ∈ DG(y, z)(tv), where tv ∈ DF(x, y)(u), proving the relation in 2).
Now, let lim s_α/t_α = t. Then

lim s_α(y_α - y, z_α - z) = (v, tw) and lim s_α(x_α - x, y_α - y) = (tu, v) = (q, v).

We show that t must be nonzero, and by this w ∈ DG(y, z)(v/t), where v/t ∈ DF(x, y)(u), proving 2). Indeed, if t = 0, then q = 0. Remembering that DF(x, y)(0) = {0}, q = 0 implies v = 0, a contradiction. In this way, 2) is established. For the last assertion the same argument goes through without change. ■
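It is worth noting, as a special case, that when the maps are point-valued and Fréchet differentiable, the contingent derivative coincides with the Fréchet differential (see Section 4 of this chapter), and the inclusions of Proposition 1.6 reduce to the familiar chain and sum rules D(G∘F)(x)(u) = DG(F(x))(DF(x)(u)) and D(F_1 + F_2)(x)(u) = DF_1(x)(u) + DF_2(x)(u).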
2. UNCONSTRAINED PROBLEMS
Suppose as before that X and Y are separated topological vector spaces over the reals and that a convex closed pointed cone C in Y is given. Let F be a set-valued map from X to Y. We consider the vector optimization problem denoted by (UP):

    min F(x)
    s.t. x ∈ X.

This is an unconstrained problem, since no explicit constraints are imposed on the domain of the map F.

Definition 2.1 A point (x, y) ∈ graf F is said to be a local (resp., a local weak) minimizer of (UP) if there exists a neighborhood U of x in X such that

y ∈ Min(F(U)|C) (resp., y ∈ WMin(F(U)|C)).

It should be observed that the connection between local minimizers and local efficient outcomes is very loose. In general, if (x, y) is a local minimizer, then y is not necessarily a local efficient point of F(X). In the case where F is point-valued, the fact that F(x) is a local efficient point of F(X) does not imply that (x, F(x)) is a local minimizer. However, in the latter case, if in addition F is continuous, then the assertion does hold, i.e. (x, F(x)) is a local minimizer whenever F(x) is a local efficient point of F(X). Below we give first-order necessary and sufficient conditions for a point from graf F to be a minimizer of (UP) in terms of contingent derivatives.

Theorem 2.2 If (x, y) is a local weak minimizer of (UP), then

DF(x, y)(u) ∩ -int C = ∅ for each u ∈ X.

Conversely, if for some x ∈ dom F, y ∈ Min(F(x)|C) and the following conditions hold:
i) DF(x, y)(u) ∩ -C = ∅ for each u ∈ dom DF(x, y), u ≠ 0,
ii) DF(x, y)(0) ∩ -C = {0},
iii) F is compact at x and upper semidifferentiable at (x, y),
then (x, y) is a local minimizer of (UP).
Proof. We prove first the necessary part. Suppose to the contrary that there is some vector v,

v ∈ DF(x, y)(u) ∩ -int C, for some u ∈ X.

By definition, there is a net {(x_α, y_α)} from graf F converging to (x, y) and a net {t_α}, t_α > 0, such that lim t_α(x_α - x, y_α - y) = (u, v). Since v ∈ -int C, there exists an index β such that t_α(y_α - y) ∈ -int C, i.e. y_α ∈ y - int C, whenever α > β. This is a contradiction because (x, y) is a local weak minimizer of (UP).
Now, for the sufficient part, suppose that (x, y) is not a local minimizer of the problem, i.e. there is a net {(x_α, y_α)} from graf F such that

lim x_α = x, y_α ∈ y - C \ {0}, for all α.    (2.1)

Since F is compact, we may assume that {y_α} converges to some y* ∈ F(x). Obviously, y* ∈ y - C, and actually y* coincides with y because y ∈ Min(F(x)|C). Further, by the upper semidifferentiability of F, one can find a net of positive numbers {t_β} such that the net {t_β(x_αβ - x, y_αβ - y)} converges to a nonzero vector (u, v). It is clear from (2.1) that v ∈ -C. Observe further that if u = 0, then v must be nonzero and condition ii) does not hold. If u is nonzero, then condition i) fails, since v is a vector of that intersection. This completes the proof. ■
Corollary 2.3 Suppose that X and Y are finite dimensional Euclidean spaces and F is a point-valued map from X to Y. If for every sequence {x_n} in X converging to x ∈ X, the limit of (F(x_n) - F(x))/||x_n - x|| does not belong to -C whenever it exists, then x is a local optimal solution of (UP).
Proof. Note that in view of Proposition 1.4, condition iii) of Theorem 2.2 is satisfied. The other conditions hold obviously. Now apply the theorem above. ■
The following special case is sometimes helpful. Let us consider the situation where X = Rⁿ, Y = R, C = R₊ and F is a point-valued map, which we denote by f, from X to Y. Recall that the first-order directional derivative of f at x_o ∈ X in direction u is defined as

f'(x_o; u) = lim_{t→0+} (f(x_o + tu) - f(x_o))/t.

Corollary 2.4 (Ben-Tal and Zowe, 1985) Assume that f is Lipschitz in a neighborhood of x_o ∈ X and f'(x_o; u) > 0 for all u ≠ 0. Then x_o is a local optimal solution of (UP).
Proof. It is clear that under the assumptions of the corollary, all the conditions of Theorem 2.2 are satisfied. Hence (x_o, f(x_o)) is a local minimizer of (UP). ■
It should be emphasized that the result of Theorem 2.2 is especially useful for nonsmooth problems where the directional derivatives of the functions to be optimized do not exist. The following simple example illustrates this point. Let X = Y = R and let

f(x) = |x|^{1/2}, for every x ∈ R.

At the point 0 the directional derivative of f does not exist, hence the result of Corollary 2.4 is not applicable. The reader who is familiar with the Clarke derivatives (Clarke, 1984) can also observe that the theory of those derivatives is not applicable to this example either. However, direct calculation shows that the contingent derivative of f at (0, f(0)) is quite simple:

Df(0,0)(u) = ∅ if u ≠ 0,  Df(0,0)(0) = R₊.

In virtue of Theorem 2.2 we can conclude that 0 is a local minimum of f.
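Indeed, if a net t_α(x_α, |x_α|^{1/2}) converges to (u, v) with u ≠ 0, then t_α|x_α| → |u| > 0 while |x_α|^{1/2} → 0, so t_α|x_α|^{1/2} = (t_α|x_α|)/|x_α|^{1/2} → +∞ and no finite v exists; this explains the first relation. For u = 0, taking x_α = 1/α² and t_α = vα with v ≥ 0 gives t_α x_α → 0 and t_α|x_α|^{1/2} = v, so that Df(0,0)(0) = R₊.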
3. CONSTRAINED PROBLEMS
Sometimes we need to solve vector optimization problems not over the whole space, but over a part of it. This part of the space is often determined explicitly by some functions, and in such situations one has the so-called constrained problems, as we have previously discussed in Section 5 of the preceding chapter. We consider the problem

(CP)    min F(x)
        s.t. x ∈ X, G(x) ∩ -K ≠ ∅,

where, as before, F is a set-valued map from X to Y, and G is a set-valued map from X to a separated topological vector space Z, which is ordered by a convex pointed closed cone K with nonempty interior. It should be observed that if we define a set-valued map F_o by the rule

F_o(x) = F(x) for x ∈ X satisfying G(x) ∩ -K ≠ ∅,
F_o(x) = ∅ otherwise,

then (CP) is equivalent to the following unconstrained problem:
Definition 3.1 A triple (x, y, z) E X x y x Z is said to be feasible if
71
x E (domF) n (domG), Y E F(x), z E G(x) n -I(~ A feasible triple (x, y, z) is said to be a local (resp., a local weak) minimizer of (CP) if there exists a neighborhood U of x in X such that y E IVlin(U{F(x') : x' E U, G(x') n -K "# 0}lC) (resp., y E WMin(U{F(x'): x' E U,G(x') n-I( # 011C)). As a special case, where F and G' are point-valued maps,for a given x E X, the points y and z in the definition above axe uniquely defined by y = F(x), z = G(x), therefore it is convenient to say that x E X is feasible instead of the triple (x, F(x), G(x)) whenever G(x) E -K~
Theorem 3.2 Assume that (x, y, z) is a local weak minimizer of (OP). We have the following necessary conditions: for each U E (domDF(x, y)) n(domDG(x, z)), 1) DF(x, y)(u) n -(intC)C = 0 implies (DG(x, z)(u) + z) n -intI( = 0, if F is lower semidifferentiable at (x, y);
2) DF(x,y)(u) n -intC 1:- 0 implies (DG(x, z)(u) + z) n (-intI()C "# 0, if G is lowers emidiffereniiable at (x, z). Proof. We shall first treat the case where F is lower semidifferentiable. Suppose to the contrary that there is some w E DG(x,z)(u) with w + z E -intK, while the intersection DF(x,y)(u) n (-intC)C is empty. By definition, there is a net f(xo-, za)} from grafG converging to (x, z) and {t a } , t a > 0 such that limtet(xa- - x, Za - z) = (u, w). Since w + Z E -intK, there is an index f3 so that iQ(za - z)
+ z E -intI(, whenever Q' > {3.
Let 'Y be an index which is greater than (such an index always exists because
DF(x, y)(u)
f3
and such that to
>
1, for all
0:'
>1
n (-intC)C f.= 0 implies u I- 0).
We have then Za E.(ta - l)z - intI( ~ -[(, for all a > 1.. (3.1) Further, since F is lo,vcr semidifferentiable at (x, y), by passing to subncts if necessary, we can find a net {Ya}, Ya E F(x a ) such that
lim ta(Ya - y) = v, some v E Y.. By the assumption, v must belong to -intC.. Hence, there is an index ')'1 > "I such that YQ E Y - intC, for all Q' > ,1~ (3.2) We arrive at a contradiction, because (3.1) and (3.2) show that (x, y, z) cannot be a local weak minimizer of (CP),
72
The case where G is lower semidifferentiable is proven by a similar way_ •
Corollary 3.3 Assume that (x, y, z) is a local weak minimizer of (CP) and one of the following conditions holds:
i) at least one of F and G is lower semidifferentiable at (x, y) (or at (x, z)) and its contingent derivative at that point is point-valued;
ii) both F and G are lower semidifferentiable at (x, y) and (x, z), and for every u from the set dom DF(x, y) ∩ dom DG(x, z) at least one of the sets DF(x, y)(u) and DG(x, z)(u) reduces to a point.
Then for every u ∈ dom DF(x, y) ∩ dom DG(x, z), the following relation holds:

sup{ξ(v) + ζ(w + z) : (ξ, ζ) ∈ Δ} ≥ 0,

for all v ∈ DF(x, y)(u), w ∈ DG(x, z)(u), where Δ is an arbitrary base of the cone (C, K)'.
Proof. It follows from the assumptions of the corollary and from Theorem 3.2 that, for each u ∈ (dom DF(x, y)) ∩ (dom DG(x, z)), the intersection of -int(C, K) and (DF(x, y)(u), DG(x, z)(u) + z) is empty. Since the convex cone (C, K) has a nonempty interior, for each v ∈ DF(x, y)(u) and w ∈ DG(x, z)(u) there is a nonzero vector (ξ, ζ) ∈ (C, K)' such that

ξ(v) + ζ(w + z) ≥ 0.

We may assume that (ξ, ζ) ∈ Δ, where Δ is a fixed base of the cone (C, K)'. The inequality in the corollary is then immediate. ■
Corollary 3.4 Under the assumptions of Corollary 3.3, if in addition DF(x, y) and DG(x, z) are C- and K-convex-valued, respectively, then for each u ∈ (dom DF(x, y)) ∩ (dom DG(x, z)) there is a nonzero vector (ξ, ζ) from (C, K)' such that

ξ(v) + ζ(w + z) ≥ 0, for every v ∈ DF(x, y)(u), w ∈ DG(x, z)(u).

Moreover, if DF(x, y)(X) and DG(x, z)(X) are C- and K-convex, respectively, then there is a nonzero vector (ξ, ζ) from (C, K)' such that

ξ(v) + ζ(w + z) ≥ 0, for every v ∈ DF(x, y)(X), w ∈ DG(x, z)(X).

Proof. This is immediate from the proof of Corollary 3.3 with the use of a separation theorem. ■
Theorem 3.5 Let y ∈ Min(F(x)|C) for some x ∈ dom F with G(x) ∩ -K ≠ ∅. Assume that the following conditions hold:
i) F and G are compact at x, upper semidifferentiable at (x, y) and at (x, z), for each z ∈ G(x) ∩ -K;
ii) (DF(x, y)(u), DG(x, z)(u)) ∩ -(C, T(K, -z)) = ∅, for each z ∈ G(x) ∩ -K and u ∈ (dom DF(x, y)) ∩ (dom DG(x, z)), u ≠ 0;
iii) (DF(x, y)(0), DG(x, z)(0)) ∩ -(C, T(K, -z)) = {0}, for each z ∈ G(x) ∩ -K.
Then (x, y, z) is a local minimizer of (CP) for every z ∈ G(x) ∩ -K.
Proof. Suppose to the contrary that (x, y, z) is not a local minimizer of (CP). This means that there is a net {(x_α, y_α, z_α)} of feasible triples so that

lim x_α = x, y_α ∈ y - C \ {0}, z_α ∈ G(x_α) ∩ -K.    (3.3)

By the compactness assumption, we may assume that lim y_α = y* ∈ F(x) and lim z_α = z* ∈ G(x). It is obvious that y* = y and z* ∈ -K. Moreover, by the upper semidifferentiability in i), there are two nets of positive numbers {t_α} and {s_α} such that

lim t_α(x_α - x, y_α - y) = (u, v) ≠ 0 and lim s_α(x_α - x, z_α - z*) = (u', w) ≠ 0.    (3.4)

By taking a subnet if necessary, we may assume further that either {t_α/s_α} or {s_α/t_α} converges to some number t. First, let lim t_α/s_α = t. Then lim t_α(x_α - x, z_α - z*) = (tu', tw), where tu' and u must be the same. If u ≠ 0, then due to (3.3) and (3.4), v ∈ DF(x, y)(u) ∩ -C, while

tw ∈ DG(x, z*)(u) ∩ -T(K, -z*),

contradicting ii). If u = 0, then v must be nonzero and the nonzero vector (v, tw) belongs to

(DF(x, y)(0), DG(x, z*)(0)) ∩ -(C, T(K, -z*)),

contradicting iii). Now, let lim s_α/t_α = t. It can be assumed that t ≠ 0, otherwise we return to the case lim t_α/s_α = 1/t. We see that lim s_α(x_α - x, y_α - y) = (0, 0), and

lim s_α(x_α - x, z_α - z*) = (0, w), where w must be nonzero.

We arrive again at a contradiction: (0, w) ∈ (DF(x, y)(0), DG(x, z*)(0)) ∩ -(C, T(K, -z*)), completing the proof. ■
Corollary 3.6 Assume that x ∈ dom F with G(x) ∩ -K ≠ ∅, y ∈ Min(F(x)|C) and the following conditions hold:
i) F and G are compact at x, upper semidifferentiable at (x, y) and (x, z) with DF(x, y)(0) = {0}, DG(x, z)(0) = {0}, for each z ∈ G(x) ∩ -K;
ii) sup{ξ(v) + ζ(w) : (ξ, ζ) ∈ Δ} > 0, for every z ∈ G(x) ∩ -K, u ∈ (dom DF(x, y)) ∩ (dom DG(x, z)), v ∈ DF(x, y)(u), w ∈ DG(x, z)(u), where Δ is some base of the cone (C, T(K, -z))'.
Then (x, y, z) is a local minimizer of (CP), for any z ∈ G(x) ∩ -K.
Proof. It is clear that under the assumptions of the corollary the conditions required in Theorem 3.5 are satisfied, and the result follows. ■
4. DIFFERENTIABLE CASE
In this section we apply the general results obtained in the previous sections to the special case where F and G are point-valued and Fréchet differentiable. In doing so we highlight the possible extension of classical optimality results of scalar programming to vector problems. Let us consider the problem denoted by (CP):

    min f(x)
    s.t. g(x) ∈ -K,

where f and g are point-valued functions from X to Y and Z, respectively. The spaces X, Y and Z are supposed to be reflexive Banach spaces, and the cones C ⊆ Y, K ⊆ Z are closed, convex, pointed, with nonempty interior. It can easily be seen that if f is Fréchet differentiable at x ∈ X, then the contingent derivative of f at the point (x, f(x)) coincides with the Fréchet differential and it is a continuous linear map from X to Y. We denote it by Df(x) instead of Df(x, f(x)). Furthermore, for every ξ ∈ Y' we can define ξDf(x) as a function from X to R, given by the formula

ξDf(x)(u) = ξ(Df(x)(u)), for every u ∈ X.
Theorem 4.1 Assume that x ∈ X is a local weak optimal solution of (CP), and the functions f and g are Fréchet differentiable at x. Then there is a nonzero vector (ξ, ζ) ∈ (C, K)' such that

ξDf(x) + ζDg(x) = 0;    (4.1)
ζg(x) = 0.    (4.2)

Proof. Under the assumptions of the theorem, all the conditions required in Corollary 3.4 hold. Therefore, there exists a nonzero vector (ξ, ζ) ∈ (C, K)' such that

ξDf(x)(u) + ζ(Dg(x)(u) + g(x)) ≥ 0, for all u ∈ X.    (4.3)

In particular, setting u = 0, we obtain ζ(g(x)) ≥ 0, which together with the fact that g(x) ∈ -K yields relation (4.2). Further, remember that Df(x) and Dg(x) are linear maps, hence so is the map ξDf(x) + ζDg(x). Therefore (4.3) can hold only if (4.1) holds. ■

It should be remarked here that in the theorem above there is no guarantee that ξ is nonzero. As is known from mathematical programming, this problem is connected with constraint qualifications. To see how one can ensure that the multiplier ξ is nonzero, let us return to the proof of Corollary 3.3 for our special case. The crucial point is that, for a local weak optimal solution x ∈ X, the set -int(C, K) cannot contain the vector (Df(x)(u), Dg(x)(u) + g(x)) of the product space Y × Z, for any u ∈ X. This may happen only if for every vector u ∈ X at least Df(x)(u) does not belong to -int C, or Dg(x)(u) + g(x) does not belong to -int K, or both. Further, the vector (ξ, ζ) in Theorem 4.1 separates the sets -int(C, K) and (Df(x)(X), Dg(x)(X) + g(x)); hence ξ = 0 means that ζ must separate the sets -int K and Dg(x)(X) + g(x). An immediate consequence: for ξ to be nonzero, it is sufficient that ζ cannot separate the two sets above. This can be achieved by imposing additional conditions on g at the point x, and this is why we call such conditions constraint qualifications. Another consequence of Theorem 4.1 is that if the sets -int C and Df(x)(X) are disjoint, then one can always find a nonzero vector ξ separating them and one may set ζ = 0. This case is of no interest because it reduces to the situation where f is not decreasing (with respect to int C) in any direction from x, whatever g may be. Below we give some conditions under which ξ ≠ 0.
Constraint Qualification 4.2 (generalized Slater condition) There is some u ∈ X such that

Dg(x)(u) + g(x) ∈ -int K.

In particular, this holds if g(x) ∈ -int K, or if Dg(x)(u) ∈ -int K for some u ∈ X.
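For instance, with point-valued data, Z = R and K = R₊: if g(x) = -1 at the point x under consideration, then g(x) ∈ -int K and Constraint Qualification 4.2 holds with u = 0; if instead g(x) = 0 but Dg(x)(u) = -1 for some direction u, the qualification holds as well, since Dg(x)(λu) + g(x) = -λ ∈ -int K for every λ > 0.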
For the second qualification we need some notation: let X_o denote the set of feasible points of (CP), i.e. X_o = {x ∈ X : g(x) ∈ -K}, and let

M(x) = {u ∈ X : Dg(x)(u) ∈ -T(K, -g(x))},
N(x) = {ζDg(x) : ζ ∈ (T(K, -g(x)))'}.

Constraint Qualification 4.3 The cone N(x) is closed in X' and Df(x)(M(x)) = Df(x)(T(X_o, x)).

Theorem 4.4 Under the assumptions of Theorem 4.1, if in addition at least one of the above constraint qualifications holds, then there exists a vector (ξ, ζ) from (C, K)' with ξ nonzero such that relations (4.1) and (4.2) are fulfilled.
Proof. By Theorem 4.1 there is a nonzero vector (ξ, ζ) ∈ (C, K)' such that the relations of that theorem are fulfilled. Suppose first that Constraint Qualification 4.2 holds; we prove that ξ must be nonzero. Indeed, if it were zero, then the vector ζ would be nonzero and

ζ(w) < 0 = ζ(Dg(x)(u)) + ζ(g(x)), for every u ∈ X and w ∈ -int K.

This is impossible because Dg(x)(u) + g(x) ∈ -int K for some vector u ∈ X.
Posit now Constraint Qualification 4.3. It follows from Theorem 3.2 that, for x ∈ X being local weak optimal, Df(x)(T(X_o, x)) ∩ -int C = ∅. Hence there exists a nonzero vector ξ ∈ C' such that ξ(w) ≥ 0 for each w ∈ Df(x)(T(X_o, x)). By Qualification 4.3, this inequality holds for all w ∈ Df(x)(M(x)); in other words,

(ξDf(x))(u) ≥ 0, for every u ∈ M(x).    (4.4)

We prove that ξDf(x) ∈ -N(x), which shows that there is some vector ζ from (T(K, -g(x)))' ⊆ K' such that ξDf(x) = -ζDg(x), completing the proof. Indeed, if that is not the case, one can separate the point ξDf(x) and the convex closed set -N(x) by a nonzero vector u ∈ (X')' = X (remember that X is a reflexive Banach space), i.e.

(ξDf(x))(u) > 0 ≥ (ζDg(x))(u),    (4.5)

for every ζ ∈ (-T(K, -g(x)))'. Since the cone T(K, -g(x)) is convex and closed, the second inequality in (4.5) implies that

Dg(x)(u) ∈ ((T(K, -g(x)))')' = T(K, -g(x)),

which in its turn implies that u ∈ -M(x). The latter fact and the first inequality in (4.5) give a contradiction to (4.4). The proof is complete. ■

Proposition 4.5 If x is a feasible point of (CP) and there is a vector (ξ, ζ) from (C, T(K, -g(x)))' such that

ξDf(x)(u) + ζDg(x)(u) > 0, for every nonzero vector u ∈ X,

then x is a local optimal solution of (CP).
Proof. If there exists a vector (ξ, ζ) with the property in the proposition, then the set -(C, T(K, -g(x))) does not contain (Df(x)(u), Dg(x)(u)) for any nonzero u ∈ X. Now the statement is derived from Theorem 3.5. ■
5. CONVEX CASE
We consider the problem (CP) as in the previous section:

    min f(x)
    s.t. g(x) ∈ -K.

Throughout this section we assume that f and g are C- and K-convex, respectively, and that they are Fréchet differentiable at the point of interest.

Proposition 5.1 If x ∈ X is a local optimal solution of (CP), then it is also globally optimal.
Proof. Let X_o, as before, denote the set of feasible points of (CP). Since g is K-convex, this set is convex. If a point x ∈ X is not globally optimal, then there is some other point, say y ∈ X_o, such that

f(x) - f(y) ∈ C \ l(C).    (5.1)

Consider the point x(t) = tx + (1 - t)y, for t ∈ [0,1]. As X_o is convex, x(t) is feasible. Moreover, since f is C-convex, we have that

f(x(t)) ∈ tf(x) + (1 - t)f(y) - C.

This combines with (5.1) to give the relation

f(x) - f(x(t)) = (1-t)(f(x) - f(y)) + tf(x) + (1-t)f(y) - f(x(t)) ∈ (1-t)(C \ l(C)) + C.

Whenever 0 ≤ t < 1,

f(x) - f(x(t)) ∈ C \ l(C),

which shows that x cannot be locally optimal if t is close to 1, and the proof is complete. ■

Lemma 5.2 Assume that h is a C-convex function from X to Y and it is Fréchet differentiable at x ∈ X. Then Dh(x)(y - x) ≤_C h(y) - h(x), for all y ∈ X.
Proof. By the definition of the Fréchet differential, we have that
Dh(x)(y - x) = lim_{t→0+} (h(x + t(y - x)) - h(x))/t.

By the C-convexity of the function,

(h(x + t(y - x)) - h(x))/t ∈ h(y) - h(x) - C.

Now, taking the limit and remembering that C is closed, we obtain at once the required inequality of the lemma. ■

Theorem 5.3 Suppose that there are a point x ∈ X and two vectors ξ ∈ C', ζ ∈ K' with the properties:
i) g(x) ∈ -K;
ii) ξDf(x) + ζDg(x) = 0;
iii) ζg(x) = 0;
iv) ξ(w) > 0 for every w ∈ C \ l(C).
Then x is an optimal solution of (CP).
Proof. Suppose to the contrary that x is not optimal, which means that there is a point y ∈ X_o such that

f(x) - f(y) ∈ C \ l(C).    (5.2)

Since the functions are convex (with respect to the cones C and K), in virtue of Lemma 5.2 we have the inclusions

Df(x)(y - x) ∈ f(y) - f(x) - C, Dg(x)(y - x) ∈ g(y) - g(x) - K.

Combining these two inclusions with (5.2) and taking into account the fact that g(y) ∈ -K, we obtain the inclusions

Df(x)(y - x) ∈ -C \ l(C) and Dg(x)(y - x) ∈ -(g(x) + K).

Applying the functionals ξ and ζ, with properties iii) and iv), to these vectors, we arrive at a contradiction with ii). The proof is complete. ■
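As a simple illustration of Theorem 5.3, take X = Y = Z = R, C = K = R₊, f(x) = x², g(x) = x - 1 and x = 0. Then g(0) = -1 ∈ -K, and choosing ξ = 1, ζ = 0 gives ξDf(0) + ζDg(0) = 0, ζg(0) = 0 and ξ(w) > 0 for every w ∈ C \ l(C); the theorem then confirms that 0, the unconstrained minimizer of x², is an optimal solution of (CP).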
Remark 5.4 If in the theorem above we merely require ξ ≠ 0 instead of ξ being strictly positive on C \ l(C), then the result is no longer true. To see this, let us consider the following example: X = R¹, Y = R², C = R²₊, Z = R¹, K = R¹, g is the identity map, and

f(x) = (x, 0), for every x ∈ X.

We calculate the differential of f at the point x = 1: Df(1) is the linear map u ↦ (u, 0). Now take ξ = (0, 1) and ζ = 0 to see that

ξDf(1) + ζDg(1) = 0.

However, the point x = 1 is not an optimal solution of the problem with the given f and g.
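Indeed, since -K = R, every point of X is feasible, and f(1) - f(0) = (1, 0) ∈ C \ l(C), so x = 1 is not optimal; the pair ξ = (0,1), ζ = 0 satisfies i)-iii) precisely because ξ vanishes on the direction (1, 0) ∈ C \ l(C), that is, condition iv) fails.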
Chapter 4
Scalarization and Stability
The main difference between scalar optimization and vector optimization lies in the underlying preference orders on the spaces concerned. In the scalar case, as the functions to be maximized or minimized take values in the one-dimensional space, where a complete order is given, it can be decided for each pair of alternatives which of them is preferred. However, this important feature is no longer available in the vector case, because the preference orders, as we have seen, are in general not complete. To overcome the difficulties caused by the incompleteness of the orders, techniques which convert vector problems into appropriate scalar problems are widely applied. In other words, given a vector optimization problem

    min F(x)
    s.t. x ∈ X,

where as before F is a set-valued map from a nonempty set X to a vector space E ordered by a convex cone C, one tries to find another problem, say

    min G(x)
    s.t. x ∈ X,

where G is a set-valued map from X to R, such that the latter problem is easier to deal with and provides optimal solutions of the former problem. This chapter is devoted to this method. In Section 1 we develop a separation technique for nonconvex sets by means of monotonic functions. Section 2 deals with scalar representations; several representations are provided which preserve the linearity, convexity and quasiconvexity properties of the original problems. In Section 3 we turn to the question of how many scalar problems are needed in order to obtain all the optimal solutions of a given vector problem. The final section is devoted to the stability of the solution sets of vector problems when the data and the ordering cones are perturbed.
1. SEPARATION BY MONOTONIC FUNCTIONS
Let E 1 and E 2 be two real topological vector spaces and it is given two convex cones ]( and C in E 1 and E 2 , respectively. We recall that a function f from E 1 to E z is nondecrcasing at x E E 1 ,vith respect to (1(, C) if x ~l( y, Y E E 1 impli~s f(x) 2:c f(y)~ We shall say that f is nondecreasing by understanding that it is nondecreasing at any point of the space or at a point of our interest. Further, f is increasing if it is nondccrcasing and x >!( y implies f(x) >c f(y); f is strictly increasing if it is nondecreasing and x "»l( y implies f(x) >c f(y)~
Definition 1.1 We say that f is properly. increasing at x E E 1 with respect to (1(, C) if there exists a convex cone D which is not the whole space and contains !( \ I([() in its inte rioT such that f is increasing with respec t to (D, C) . Proposition 1.2 Let X be a nonempty set in E 1 and let E 1 to E 2 ~ Then the following statements are true: 1) f(IMin(XlI()) ~ I1Win(f(X)IC)
if f
~ Min(XII<) if the left hand side reduces to a point;
2) f-l(Min(f(X)[C»)
f
be a junction from
is nondecreasing;
f is
nondecreasing and the set of
3) f-l(Min(f(X)[C)) ~ Min(XrI() if f is increasing; 4) j-l(Wj\.fin(!(X)rC)) ~ WMin(XII() if f is strictly increasing; 5) j-l(Min(j(X)IC) ~ PrMin(XII<) if f is properly increasing. Proof. For the first statement, let x E IMin(XI!(). Then by Proposition 2.3 (Chapter 2), X~x+K.
Since
f
is nondecrcasing,
f(X) ~ f(x)
+ C,
giving the inclusion of 1) in view of the same proposition. For the second statement) denote that point set by x. If x is not an efficient point of X, then there is some y E X such that
x E y+I<\l(I()a
82
Hence J(x) E fey) + C. Remembering that f(x) is an efficient point of f(X), 'V'e arrive at the contradiction that together with x, y also belongs to the set of the left hand side of 2). Now posit that f is increasing and let x E X ,vith
f(x) E 1\1in(f(X) Ie). If x is not an efficient point of X, then there is some y E X such that x > K y. Hence f(x) >c f(y), contradicting the fact that f(x) E Min(j(X)lC). The fourth statement is proven by a similar way. For the last statement, let f(x) E Min(!(X)l C ), for some x EX. Since f is properly increasing at x, there is a cone D as in Definition 1.1. vVe claim that x E Min(XlD) which shows that x E Prl1!in(XIK), Indeed, if that is not true, Le. there is some y E X with x > D y~ then we have a contradiction: f(x) >c f(y), completing the proo[ •
Remark 1.3 In the above proposition , as the proof shows, it is sufficient to require the nondecreasingness, increasingne~s etc. of the function at the points of our interest. For instance) if f is increasing at x E X and f(x) E Min(f(X)IC), then x E j1;1 in(~Y 11<) and so on. Furthermore) in assertion 2) of the above proposition the requirement that the set of x E X satisfying the relation
j(x) = l11in(f(X)IC) is a unique point is important. This allows us to relax the requirement on f. by contrast with assertion 3). From no\v on to avoid the triviality, we assume that the cones to consider are not the whole spaces.
Definition 1.4 Let A and B be two nanempty sets in E 1 and f a function from E 1 to E 2 • We say that 1) (f, C) separates A and B if there is some a E E 2 such that j(A) ~ a - C and feB) ~ (a - C \ l(C))C ; 2) (f, C) separates weakly A and B in the case intC # 0 if there is S01ne a E E 2 such that
f(A)
~
a - C and f(B)
~
(a - intC)C .
Sometimes we simply say that f separates A and B if it is clear ,vhich cone is under the consideration and we also say that C separates A and B if f is the
83
identity function. It follows from the definition above that f separates A and B, then it separates them weakly. Of course, the converse assertion is not always true. However, the special case where E 2 = Rl, C = R~, the t,vo separations coincide and they reduce to the following: j separates A and B if there is some number t such that f(a) ::; t ~ f(b) for every a E A, b E B~
Proposition 1",5 }'br any nonempty set JY
~
fiJI, we have the following:
1) x E Min(XII() if x E X ·and if there is a function f from E 1 to E 2 which is increasing at x and such that (f, G) separates (x - [() and X. In particular x E Min(XII() if !( separates the two sets above; 2) x E Wl\1in(XII() if x E X and if there is a function f from E 1 to E 2 which is strictly increasing at x and such that (f, C) separates weakly (x - K) and .LY. Proof. Suppose that (f, C) separates (x - !() and X where function. Due to Proposition 1.2, it suffices to show that
f
is an increasing
f(x) E Min(f(X)fC). Indeed, for any y E X, fey) E (a - C \ l(C))C , for some a E E 2 (1.1) If f(x) 2:c fey), then f(y) E a-C. 'l'his combines with (1.1) to yield the relation:
f(y) E a + 1(0), i.e. f(x) E Min(j(X)]C). The second part is proven similarly.•
Theorem 1.6 Let A and B be two nonempty sets in E 1 and assume that the following conditions hold: i) intK is nonempty; ii) (A - intI() n B = 0. Then there e.xists a continuous function f from E 1 to R which is strictly increasing with respect to (K, R+) and separates A and B. Proof. We fix a vector e E intK, and consider the function
f(x) = inf{t : x E te + A - intI(}, for each x E Ere The aim is to prove that it is a function meeting our requirements. First, we have to show that f is well defined. Indeed, denote by R(x) the set of numbers t for which x E te + A - intI(.
84
This set is nonempty. In fact, 0 E e --. intI(, hence for a fixed point js a positive number t such that
a E A,
there
(x - a)/t E e -- intI(, x E te + a - intI{ ~ te + A - intK, so that t E R(x). Moreover, R(x) is bounded from below. To see this, note that if a number t does not belong to R(x), then neither does any number which is smaller than t. Therefore, R(x) is bounded from belovv if there is some t not belonging to it. If that is not the case,Le. x E te + A - intI(, for every t E R, (1.2) then take some b E B and let s be a number which exists by the same argument as in proving R(x) i= 0, such that b - x E se - intK. Xow setting t = -8 in (1.2) we obtain the relation bE A - intI(, contradicting condition ii). We have ~stablished the fact that R(x) is nonempty bounded from below. This means that f(x) is well defined. i..e_
We show now that f is continuous. Observe first that for everye is a neighborhood U of zero in £1 such that
U
~
(-ee
+ intl() n (c:e -
> 0,
there (1~3)
intK).
We claim that
If(y) - j(x)1 < 3£, for each y E x + U~
(1.4)
In fact, for the given c,
Y E (f(y) + e)e + A - intl( , and Y ¢ (f(y) - e)e + A - intK. Taking the relation y = x + u, some have then
U
E U, into account and in virtue of (1.3) we 0"
x E
j(x) f(x)
~ ~
fey) - 26", and f(y) + 2e,
proving (1.4). In this way, E1 .
f
is continuous at x where x is an arbitrary point of
The next step is to show that
f
is strictly increasing. First, if x ~K y, then
R(y) ~ R(x), hence j(x) ~ f(y). Further, x >K y means that x - y E intK and in this case one can find c > 0 such that
x - Y E £€ + intI(. Consequently, f(x) - fey) ~ e as required. Our last task is to show that f(a) $ 0 ::; f(b), for every a E A, b E B. Indeed~
for every a E A, and for every c > a a E A -!( S; ee + A - intI(.
Consequently, f(a)
:5 O.
Furthermore, for b E B,
b ¢ A - intI(, hence 0 ¢. R(b) and f(b) ~ 0, completing the proof.• Corollary 1.7 Let X be a nonempty set in E 1 . A point x E X belongs to W j\tl in (X I!() if and only if there exists a continuous strictly increasing function from E 1 to R such that it separates (x - !() and X . Proof. According to Proposition 2.3 (Chapter 2), x E WMin(XII<) implies that
(x - intK) n X
:=:
0.
It remains only to apply Theorem 1.6 to get a function as required. Conversely, if such a function f exists, then f(x) ~ f(y), for all y E X and by Proposition 1.2, x E WMin(XII() .• Corollary 1.8 x E PrMin(XlK) if and only if x E X and if there exists a continuous properly increasing function from E 1 to R such that it separates (x-I() andX. Proof. Invoke the corollary to Theorem 1.6 and Proposition 1.1.
II
For x ∈ Min(X|K), in view of Proposition 2.2 (Chapter 2), x ∈ WMin(X|K); hence one can apply Corollary 1.7 to this point to get a continuous strictly increasing function separating (x - K) and X. However, it is not true that there always exists an increasing function separating (x - K) and X. To see this, let us consider the set

X = {(t, s) ∈ R² : t = -1/s, s < 0} ∪ {0} ⊆ R², K = R²₊.

We have {0} = Min(X|K), but no increasing function on R² can separate the sets -R²₊ and X. This is because if a function f is continuous and increasing, then

-R²₊ \ {0} ⊆ int(f⁻¹{t : t < f(0)}).

Hence X must meet the open set on the right hand side; in other words, f(x) < f(0) for some x ∈ X, and we have no separation. Nevertheless, the result can usually be expected under additional assumptions on the set and the cone.
Theorem 1 . 9 Suppose that E 1 is a normed space, K is a cone with a compact convex base and X is compact. Then x E Min(XIK) if and only if x E X and there exists a continuous increasing function from E 1 to R such that it separates (x - !() and X. Proof. The proof of this theorem is similar to that of Theorem 3.6 to come later. We omit it at the moment.•
2.SCALAR REPRESENTATIONS
Consider a vector optimization problem, denoted by (VP) ,
min"/(x) s.t. x EX, where f is a point-valued function from a ~onempty set X S; E 1 to E 2 , and a scalar optimization problem, denoted by (SP), mins(x)
s.t. x EX, where s is a scalar-valued function on X . The spaces E 1 and E 2 arc, as before, real topological vector spaces and it is given a convex cone C in E 2 • The cone is assumed to be not a linear subspace~ " Definition 2;11 We say that
1) (SP) is a scalar representation of (VP) with respect to C if for each x,yEX, f(x) ~o f(y) implies sex) ~ s(y), and J(x) >c fCy) implies s(x) > 8(Y); 2) (SP) is a scalar strict representation of (V P) with respect to C in the case intC #= 0 if f(x) ~c f(y) implies s(x) ~ s(y), and
f(x) ~c f(y) imp"lies s(x) > s(y); 3) (SP) is a scalar weak representation of (V P) with respect to C if f(x) >c f(y) implies sex) > s(y); 4) (SP) is a scalar proper representation of (VP) with respect to C if it is a scalar representation with respect to some cone D which is not the whole space and contains C \ l(C) in its interior. The following implications are immediate from the definition:
4) ⇒ 1) ⇒ 2) ⇒ 3).

The opposite implications are obviously not valid. For instance, let (VP) be given and set s(x) = ξ∘f(x). Then (SP) is a scalar proper representation if ξ ∈ C'⁺; it is a scalar strict representation, but not a scalar representation, when ξ ∈ C' \ C'⁺ and ξ ≠ 0. To see that not every weak representation is strict, let us consider the following function g from R² to R: for (x₁, x₂) ∈ R²,

g(x₁, x₂) = min{x₁, x₂}, if x₁ > 0, x₂ > 0;
g(x₁, x₂) = -max{x₁, x₂}/(max{x₁, x₂} + 1), if x₁ ≥ 0, x₂ ≥ 0, x₁x₂ = 0;
g(x₁, x₂) = min{x₁, x₂} - 1, otherwise.

One can verify that in the case where f is a function from X to R² and C is R²₊, problem (SP) with s(x) = g∘f(x) is a weak, but not strict, representation of (VP). Further, for (SP) the notions of efficient, properly efficient and weakly efficient solutions coincide, so we shall use S(X; s) for all three. In the theory of decision making, the definition of value functions is somewhat similar to that of scalar representations. Namely, given a set A ⊆ E₂, a scalar-valued function s on A is said to be a value function if for each a, b ∈ A, a >_C b if and only if s(a) > s(b). Setting A = X and f = id in (VP), we see that any value function provides a scalar representation of (VP), but of course not vice versa. Conditions for a value function to exist are very strict (see Yu, 1985).
Proposition 2.2 For the vector and scalar problems above we have the inclusions:
1) S(X; s) ⊆ PrS(X; f) if (SP) is a proper representation of (VP);
2) S(X; s) ⊆ S(X; f) if (SP) is a representation of (VP);
3) S(X; s) ⊆ WS(X; f) if (SP) is a weak representation of (VP).
Proof. These inclusions follow directly from Definition 2.1. ■
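For example, if E₂ = R², C = R²₊ and ξ = (1, 1), then s(x) = ξ∘f(x) = f₁(x) + f₂(x) is a proper representation of (VP), so every minimizer of s over X is a (properly) efficient solution of (VP); with ξ = (1, 0), which lies in C' \ C'⁺, one only obtains a strict (hence weak) representation, and minimizers of s need then only be weakly efficient.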
Proposition 2.3 In order that (SP) be a scalar representation (resp., strict representation, proper representation) of (VP) , it is necessary and sufficient that s be a composition of f and an increasing (resp., strictly increasing,properly increasing) scalar- valued function on f(X).
Proof. Let 9 be an increasing function from f(X) to R .
Then~
for every x and
yEX,
f(x)
~c
f(y) implies go f(x)
f(x) >c f(y) implies go f(x) In this way, s = 9
0
f
~
go f(y),
> go f(y).
provides a scalar representation of (V P) .
Conversely) let (SP) be a scalar representation of (VP) , we define an increasing function g on f(X) as follows: for a E f(X), i.e. a = f(x), some x EX, put
g(a) = s(x). This function is well defined. Indeed, if y is another point of X with f(y) = a, then by the relations in 1) of Definition 2.1, s(x) = s(y). In other words, g(a) does not depend on the chaise of the point x, for which f(x) = a. Further, 9 is increasing because if Q, b E f(X), i.e. a = f(x), b = f(y) for some x, y E X and a ~c b (resp., a >c b), then by Definition 2.1,
g(a)
= s(x)
~
s(y) = g(b) (resp., g(a)
> g(b)).
The other cases arc proven by a similar argument.• Now vIe are going to construct scalar representations for vector problems with additional properties such as problems with linear, convex and quasiconve.'C. data.
Linear Problems
Definition 2.4 Let (VP) be a linear problem, i.e. X and C are polyhedral set and cone, f is a linear function~ If (SP) is a linear scalar problem which is a representation of (VP) , then we say that (SP) is a linear representation of (VP). Theorem 2.5 Assume that E 2 = Rn. Then for any linear problem (V P) , x E X is an optimal (resp~, weakly optimal) solution of (V P) if and only if it is an optimal soly.tion of a linear representation (resp.,linear weak representation) (SP) of (V P)
89
with the objective function s being of the form s = { ~ E C*
0
f ,
some ~ E riC* (resp.,
\ {O}).
Prooi First, note that for aJlY ~ E C*, the problem (SP) with s = ~ 0 f is linear whenever so is (VP) . :v£oreover, if ~ E riC*, then it is increasing. Indeed, for a 1 bE R n , a - bEe implies ~(a - b) 2: 0, a - bEe \ ICC) implites ~(a - b) > 0, otherwise «(a - b) = a for all ~ E C*, \vhich shows in particular that both vectors (a - b) and (b - a) belong to (C*)* = C, i.e. a - b E I(C),a contradiction. In this way, s = ~ 0 f with ~ E riC· provides a linear representation of (V P) . Therefore, if x is an optimal solution of (SP) then according to Proposition 2.2, it is also an optimal solution of (VP) .
Conversely, suppose that x E SeX; f). Then the two polyhedral sets f(X) and f(x) - C \ I(C) have no points in common. By a lemma we prove later, there is a vector ~ E riC* separating these sets. In particular) ~(f(x))
::; €(f(y)) for
c~ch
y EX,
i.e. x E S(X; c; 0 f), completing the proof. • Here the lemma we
promised~
Lemma 2 . 6 Let A be a polyhedral set and C a polyhedral cone which is not a subspace in Rn with
An (C \ l(C»)
= 0~
Then there exists a vector c; E riC* such that
~(a)
S 0 for every a
E A.
Proof. There is no loss of generality if 've suppose that A is a cone~ Consider mst the case ,vhere C is pointed, i.e. it has a convex compact base. We denote this base by B. Since A n B is empty, there is a nonzero vector ~ E Rn such that ~(a) ~ 0
This relation sho,vs that ~ow,
< ~(b), for
each a E A, b E B.
€ E intC*.
let C be arbitrary. Let H denote the orthogonal complement of l(C) in
Rn, i.e H
= {€ E Rn : ~(x) =
0, for all x E l(C)}.
We have then
c=
Co + l(O), where Co = H
n C.
(2.1)
Since Co n l(C) = {O}, the cone Co is pointed. Consider two polyhedral cones: A + l(C) and Co . It caD. be seen that they have only one common point at zero.
90
By the fact we have just established in the case where the cone is pointed, there is some € E intC~ such that ~(y) :5 0, for all yEA + 1(0)6
In particular, ~(y) = 0, for all y E I(C),
€(y)
~
(2.2)
0, for all yEA. € E H. To finish the proof it remains actually to sho,v
The equality (2.2) says that
that riC* = H
n intC~ .
Indeed,
riC* = {~ E Rn : ~(c) > 0, for all c E C \ l(C)}. By (2.1), c; E riC* if and only if {(co) + ~(y) > 0, for every Co E Co \ {OJ, y E l(C). Since ~ E H the inequality above becomes: c;(c o ) > 0, for every Co E Co \ {O}, so that ~ E i n tC~ , completing the proof. • Corollary 2.7 If E 2 is a finite dimensional space, then any optimal solution of a linear vector problem is proper.
According to Theorem 2.5, x is an optimal solution of a linear problem (VP) if and only if x E S(X;~ 0 f), for some {E riC'IC. Consider the cone
Proof.
D == {O} U {a E E2 : ~(a) > O}~ It is pointed and contains C \ 1(C) in its interior. Indeed, since C ~ clD, if some point c belongs to C \ D, then ~ (c) = O. Remembering that ~ E riG*, we conclude ~(c) = 0 for all ~ E
C*.
In particular,
-c E (C*)* = C, i.e. c E leG). By this,
C \ l(C) ~ D \ {OJ ~ intD~ Further, ~ is increasing with respect to (D, R+), hence it is properly increasing with respect to (C, Rr)~ In view of Propositions 2 .3,~ 0 f provides a proper representation of (VP), so that according to Proposition 2.2, x is a properly optimal solution.•
91
Convex Problems: Definition 2.8 Let (V P) be a convex (reap., strictly convex) vector problem,i.e. X is a convex set, f is C···convex (resp~, strictly C-convex) function. We say that (SP) is a convex (resp., strictly convex) representation of (V P) if it is convex (resp., strictly con7Jex) scalar problem which is a representation of (VP) . Proposition 2.9 Suppose that (V P) is a convex (resp., strictly c.onvex) problem and 9 is a convex increasing (reap., convex strictly increasing) function from f(X) to R. Then the scalar problem (SP) with s = 9 0 f is a convex repre~'lentation (resp., strictly convex strict representation) of (VP) . Proof. Invoke this to Proposition 6.8 (Chapter 1) and Proposition 2.3.
at
Theorem 2.10 Assume that (VP) is a convex problem. Then x E X is a weakly optimal solution of (VP) if and only if it is an optimal solution of a convex strict representation(SP) of(VP) with s being olihe form s = ~of , some E C'\ {O}.
e
Proof. ~
It is clear that s =
e
0
f
is convex whenever
f
is convex. Moreover, if
E C' \ {O}, then by Proposition 4.5 (Chapter 1),< is strictly increasing. Hence, by
Proposition 2.3, ~ 0 f provides a convex strict representation of (VP) . To prove the theorem, it suffices only to sho\v that for x E W8(X; f), there exists some nonzero vector from C' such that x E.S(X; ~ 0 f). Indeed, in view of Proposition 2.3 (Chapter 2), the sets I(X) and f(x) -intC do not meet each other~ We prove that neither do the sets f(X) + C and f(x) - intC. In fact, if that is not the case,Le. f(y) + c E f(x) - intC, some y EX ,c E C, then f(y) E f(x) - c - intC ~ f(x) - intO, contradicting the fact that x E WS(X; f). Next, since X is convex and f is C-convex, the set j(X) + C is convex. 'Separate the sets f(x) - intC and f(X) + C by a nonzero vector ~ E E~ to get . the relation:
e
e(a) ~ ~(b), for all a E I(x) - intC, bE j(X) + C. This shows in particular that ~ E 0 ' and f(x) ~ f(y) for all y EX, so that x E S(X; 0 f) .•
e
Theorem 2.11 Assume that (VP) is a convex problem and ~ is reflexive. Then x E X is a proper optimal solution of (VP) if and only if it is an optimal solution
92
of a proper representation (SP) ~ E
0/ (VP)
with s being of the form s = ~ 0
f , some
C'+.
By an argument similar to that in the proof of Corollary 2.7) we can see that € E G'+ is a properly increasing function,hence by Proposition 2.3, (SP) with s = ~ 0 J is a proper representation of (VP) ~ To finish the proof it is enough to show that for x E PrS(X; f) there exists a vector € E C'+ as required in the theorem. Indeed, by definition of proper efficiency, there is a cone D not coinciding with E 2 and C \ l(C) ~ intD sush that
Proof.
f(x)
E
Min(f(X)ID).
By Theorem 2.10, there is a nonzero vector ~ E D I such that x E S(X; ~
0 f). It remains to verify that € E C'+ . If tllat is not true, then the function ~ takes the zero value in the interior of D, hence it is zero itself. The contradiction completes the proof. • .
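A standard illustration: let E_2 = R^2, C = R^2_+ and let f(X) be the closed unit disc. The point (0, -1) is an optimal (efficient) value, but for any ξ = (ξ_1, ξ_2) with ξ_1 > 0, ξ_2 > 0 the minimum of ξ over the disc is attained at the unique point -(ξ_1, ξ_2)/||(ξ_1, ξ_2)||, which differs from (0, -1). Thus a solution with value (0, -1) is not proper and cannot be produced by any representation with ξ ∈ C'^+.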
Quasiconvex Problems:
Definition 2.12 Let (VP) be a quasiconvex (resp., strictly quasiconvex) problem, i.e. X is convex and f is C-quasiconvex (resp., strictly C-quasiconvex). We say that (SP) is a quasiconvex (resp., strictly quasiconvex) representation of (VP) if it is a quasiconvex (resp., strictly quasiconvex) scalar problem which is a representation of (VP).
Proposition 2.13 Suppose that (VP) is a convex (resp., strictly convex) problem and g is a quasiconvex increasing (resp., quasiconvex strictly increasing) function from f(X) to R. Then (SP) with s = g∘f is a quasiconvex representation (resp., strictly quasiconvex strict representation) of (VP).
Proof. This follows from Proposition 6.8 of Chapter 1 and Proposition 2.3. •
Proposition 2.14 Suppose that E_2 is the space R^n, C is a polyhedral cone generated by n linearly independent vectors in R^n and (VP) is a quasiconvex problem. Then there exist at least n quasiconvex strict representations with objectives of the form ξ∘f, where ξ ∈ C* and ||ξ|| = 1.
Proof. Let ξ^1, ..., ξ^n be the n unit normed extreme vectors of C*. According to Proposition 6.5 of Chapter 1, ξ^i∘f, i = 1, ..., n, are quasiconvex. The result now follows from Proposition 2.3 of this chapter and from Proposition 4.5 of Chapter 1. •
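For instance, when C = R^n_+ is generated by the n coordinate unit vectors, the unit normed extreme vectors of C* are the coordinate functionals ξ^i(y) = y_i, and the n strict representations of Proposition 2.14 are simply the componentwise problems min{f_i(x) : x ∈ X}, i = 1, ..., n.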
Theorem 2.15 Let (VP) be a quasiconvex (resp., strictly quasiconvex) problem and let e be a fixed vector from intC. Then x ∈ X is a weakly optimal solution of (VP) if and only if there exists some vector a ∈ E_2 such that it is an optimal solution (resp., the unique optimal solution) of a quasiconvex strict representation (SP) with s = h_{e,a}∘f, where h_{e,a} is the smallest strictly increasing function at a.
Proof. Due to Proposition 6.3 (Chapter 1), it suffices to prove that x ∈ WS(X; f) implies the existence of a ∈ E_2 such that
x ∈ S(X; h_{e,a}∘f)
in the quasiconvex case and
{x} = S(X; h_{e,a}∘f)
in the strictly quasiconvex case. We already know that for a weakly optimal solution x ∈ X, the sets f(X) and f(x) - intC are disjoint. Take a = f(x) to see that
h_{e,a}∘f(x) = 0 ≤ h_{e,a}∘f(y), for all y ∈ X.
In the strictly quasiconvex case the problem min{h_{e,a}∘f(y) : y ∈ X} has a unique solution, therefore {x} = S(X; h_{e,a}∘f), completing the proof. •
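To make the scalarizing function concrete, suppose that, as in Section 4 of Chapter 1, h_{e,a} admits the representation h_{e,a}(y) = inf{t ∈ R : y ∈ a + te - C} (this form is recalled here only for illustration). For C = R^n_+ and e = (1, ..., 1) it gives h_{e,a}(y) = max{y_1 - a_1, ..., y_n - a_n}, so the representation of Theorem 2.15 amounts to minimizing the worst componentwise excess of f(y) over the reference point a = f(x).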
Theorem 2.16 Let (VP) be a quasiconvex (resp., strictly quasiconvex) problem and let a be a fixed vector of E_2. Then x ∈ X with the property that f(x) ∈ a - intC is a weakly optimal solution of (VP) if and only if there is some vector e ∈ intC such that it is an optimal solution (resp., the unique optimal solution) of a quasiconvex strict representation (SP) with s = h_{e,a}∘f.
Proof. The proof is similar to that of the preceding theorem. •
In general, for (VP) a quasiconvex problem and a given optimal solution x ∈ X, we cannot claim that there is an increasing function g such that s = g∘f provides a representation having x as an optimal solution. In other words, a quasiconvex vector problem may have no scalar representations with nonempty optimal solution sets (although it always has strict representations, by Theorem 2.15).
3. COMPLETENESS OF SCALARIZATION
Let us return to the vector problem introduced in the preceding section:
(VP)    min f(x)
        s.t. x ∈ X.
It is obvious that for every function g from E_2 to R one can define a scalar problem (SP) corresponding to g and (VP) as follows:
(SP)    min g∘f(x)
        s.t. x ∈ X.
Definition 3.1 Let G be a family of functions from E_2 to R. We say that this family is a complete scalarization for (VP) if for every optimal solution x of (VP) there exists g ∈ G such that x is an optimal solution of (SP) corresponding to g and (VP), and S(X; g∘f) ⊆ S(X; f). Complete weak and complete proper scalarizations for (VP) are defined in a similar way.
Trivially, such scalarizations always exist for any vector problem, without any conditions presumed: for instance, the family of functions which send the optimal values (resp., the weakly or properly optimal values) to zero and all other points to 1 provides a complete (resp., complete weak or complete proper) scalarization. However, from the theoretical as well as the computational point of view, it is desirable to have G with additional properties such as continuity, linearity or convexity whenever (VP) possesses these.
Proposition 3.2 The following statements are true:
1) for any (VP), there exists a family of continuous, strictly increasing (resp., properly increasing) functions from E_2 to R which is a complete (resp., complete proper) scalarization for (VP);
2) if (VP) is linear, the family G_1 = C' \ {0} (resp., G_2 = C'^+) is a complete weak (resp., complete) scalarization for (VP);
3) if (VP) is convex, the family G_1 above is a complete weak scalarization and G_2 is a complete proper scalarization;
4) if (VP) is quasiconvex, the family of the smallest monotonic functions h_{e,a} with a fixed e ∈ intC (Section 4, Chapter 1) is a complete weak scalarization; if in addition there is some a ∈ E_2 such that f(X) ⊆ a - intC, then the family of the functions h_{e,a}, depending on e ∈ intC with this fixed a, is a complete weak scalarization.
Proof. The first statement is deduced from Corollaries 1.4 and 1.5. The other statements are obtained from the results of the preceding section. •
Sometimes, when the vector problem possesses additional properties, one can expect to find a complete scalarization of minimal power. For instance, in order to obtain all the solutions of a linear vector problem it is sufficient to solve a finite number of linear representations (Theorem 3.3 below); even more, if certain compactness assumptions hold, one needs to solve exactly one representation to obtain all the solutions of a vector problem which is not necessarily linear or convex (Theorem 3.6 below). The following theorem (Theorem 3.3) is due to Arrow, Barankin and Blackwell (1953), but here we furnish a new proof.
Theorem 3.3 Assume that E_2 = R^n and (VP) is a linear problem. There are a finite number of vectors ξ^1, ..., ξ^k of C* (resp., riC*) such that the set WS(X; f) (resp., S(X; f)) is the union
∪{S(X; ξ^i∘f) : i = 1, ..., k}.
Proof. Since (VP) is linear, f(X) is a polyhedral set. Hence it has a partition consisting of a finite number of disjoint relatively open faces, say
f(X) = ∪{A_i : i = 1, ..., m}.
Since A_i is relatively open, if a linear function attains its minimum at some point of it, then any other point of this face is also a minimum point of the function. Let now A_1, ..., A_k be the faces of the partition with the property that their union contains WMin(f(X)|C) and none of them is superfluous. By Theorem 2.10, for a fixed x_i ∈ WS(X; f) with f(x_i) ∈ A_i there is some ξ^i ∈ C* such that
f(x_i) ∈ S(f(X); ξ^i).
In view of the observation made above,
A_i ⊆ S(f(X); ξ^i).
Consequently, WMin(f(X)|C) contains the union of the A_i, i = 1, ..., k, and therefore
WS(X; f) = ∪{S(X; ξ^i∘f) : i = 1, ..., k}.
For the set S(X; f) the proof is similar. •
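As an illustration, let C = R^2_+ and let f map X linearly onto the triangle f(X) = conv{(0,2), (1,1), (3,0)}. The efficient values form the two edges [(0,2),(1,1)] and [(1,1),(3,0)]; the first edge is exactly the set of minimizers of ξ^1 = (1,1) over f(X), and the second that of ξ^2 = (1,2), both vectors lying in riC*. Thus two linear representations already recover S(X; f).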
Theorem 3.4 Let (VP) be arbitrary. There exists a continuous strictly increasing function g from E_2 to R such that
WS(X; f) = S(X; g∘f).
Proof. If the set of weakly optimal solutions of (VP) is empty, any continuous strictly increasing function is suitable. We may therefore suppose that this set is nonempty. Let A = WMin(f(X)|C) - C and let e be a vector of intC. Construct a function g by the rule:
g(a) = inf{t : a ∈ te + A}, for every a ∈ E_2.
By an argument similar to that used in the proof of Theorem 1.6 one can verify that g meets all the requirements of the theorem. •
Lemma 3.5 Let s(t) be a function from R_+ to itself with the following properties:
i) lim_{t→0} s(t) = 0;
ii) inf{s(t) : t ≥ t_1} > 0, for every t_1 > 0;
iii) s(t) ≥ αt - β, for all t > t_o, some t_o > 0, β > 0, α > 0.
Then there is a continuous increasing function h(t) from R_+ to itself such that
1) h(0) = 0;
2) 0 < h(t) < s(t), for all t > 0;
3) h(t) = (αt - β)/2, t ≥ t_o.
Proof. Without loss of generality we may assume that t_o = 1 and s(1) = 1. For k = 0, 1, ... let h_k = (1/2^{k+1}) inf{s(t) : t ≥ 1/2^k}. The sequence {h_k} has the following properties:
h_k > h_{k+1} > 0;   (3.1)
lim h_k = 0;   (3.2)
s(t)/2 ≥ h_k, for every t ≥ 1/2^k, k = 0, 1, ....   (3.3)
We define a minorant function h as follows:
h(0) = 0,
h(t) = 2^{k+1}(h_k - h_{k+1})t + 2h_{k+1} - h_k, if t ∈ [1/2^{k+1}, 1/2^k], k = 0, 1, ...,
h(t) = (αt - β)/2, if t > 1.
It can be seen that h is continuous at every point t > 0. At t = 0 it is continuous in view of (3.2). Relation (3.3) shows that h(t) < s(t). The other properties of h can be verified without any difficulty. •
Theorem 3.6 Suppose that the following conditions hold:
i) E_2 is a normed space;
ii) there is some ξ ∈ C' such that the set {c ∈ C : ξ(c) = 1} is compact;
iii) f(X) and Min(f(X)|C) are compact.
Then there exists a continuous increasing function g from E_2 to R such that S(X; f) = S(X; g∘f).
Proof. Denote
A = Min(f(X)|C),   C(t) = {c ∈ C : ξ(c) = t}, for each t ≥ 0.
It follows from the last two conditions of the theorem that the set A - C(t) is compact and does not meet the set f(X) whenever t > 0. Let s(t) denote the distance between these two sets in E_2, i.e.
s(t) = d(A - C(t), f(X)).
We prove that s(.) possesses properties i), ii) and iii) of Lemma 3.5. Indeed, for i), observe that C(t) = tC(1). Take a fixed point c ∈ C(1) and calculate s(t) to get the relations:
s(t) ≤ d(A - tc, f(X)) ≤ d(A - tc, A).
Passing to the limit as t runs to 0, the above relations give i) of Lemma 3.5. Further, if property ii) of that lemma is not true, i.e. inf{s(t) : t ≥ t_1} = 0 for some t_1 > 0, then by the compactness assumption there is some t ≥ t_1 such that
(A - C(t)) ∩ f(X) ≠ ∅,
contradicting the fact that A = Min(f(X)|C). For property iii), let
β = max{||y|| : y ∈ f(X)} + max{||y|| : y ∈ A},
α = d(C(1), {0}).
We calculate s(t):
s(t) = inf{||(a - c) - y|| : a ∈ A, c ∈ C(t), y ∈ f(X)}
     ≥ inf{||c - z|| : c ∈ C(t), z ∈ E_2, ||z|| ≤ β} = αt - β, if t is large enough.
Now we are able to apply Lemma 3.5 to get the function h(t).
Set
K = ∪{A - C(t) - B(h(t)) : t ≥ 0},
where
B(h(t)) = {y ∈ E_2 : ||y|| ≤ h(t), ξ(y) = 0},
and define a function g by the rule:
g(y) = inf{t : y ∈ te + K}, for every y ∈ E_2,
where e is a fixed vector from C(1). The aim is to show that g is the function we wanted. By making a translation if necessary, one can assume that 0 ∈ A. We first have to verify that g is well defined. As in the proof of Theorem 1.4, denote R(y) = {t : y ∈ te + K}. It is clear that t ∈ R(y) implies t' ∈ R(y) for all t' ≥ t. Therefore g(y) is well defined if R(y) as well as (R(y))^c are nonempty. For the given y, it is known that y = ke + z for some k ∈ R, z ∈ E_2 with ξ(z) = 0. By relation 3) of Lemma 3.5, there is some t > 0 such that
h(t) ≥ ||z||.
Consequently
y = ke + z ∈ (k + t)e + (-te + B(h(t))) ⊆ (k + t)e + K,
i.e. k + t ∈ R(y) and R(y) is nonempty. If (R(y))^c is empty, then y ∈ te + K for every number t. In this case, for n = 1, 2, ... there exist positive numbers t_n such that
y = -ne + a_n - c_n - b_n,
for some a_n ∈ A, c_n ∈ C(t_n), b_n ∈ B(h(t_n)). Or, equivalently,
e = (a_n - y)/n - (c_n + b_n)/n.
In particular, the sequence {-(c_n + b_n)/n} converges to e. Apply the functional ξ to this sequence to obtain the relation
1 = ξ(e) = -lim ξ(c_n)/n, because ξ(b_n) = 0.
This is a contradiction since ξ(c_n) ≥ 0. In this way, (R(y))^c is nonempty and hence g(y) is well defined.
We establish now the monotonicity of g. To do this it suffices actually to show that
-c ∈ intK, for every c ∈ C \ {0}.   (3.4)
Indeed, let
H = {y ∈ E_2 : ξ(y) = 0}
and let L be the linear subspace generated by the vector c. Then E_2 is the direct sum of H and L. Further, let
U = {z ∈ L : z = λ(-c/2) + (1 - λ)(-3c/2), 0 < λ < 1},
V = {y ∈ H : ||y|| < h(ξ(c)/2)}.
These sets are open neighborhoods of -c and of zero in L and H, respectively. Hence U + V is a neighborhood of -c in E_2. We show that
U + V ⊆ K,
which implies (3.4). For every a = u + v ∈ U + V, on the one hand
ξ(u) = -ξ(c)(3/2 - λ), some λ, 0 < λ < 1.   (3.5)
On the other hand,
v ∈ B(h(ξ(c)/2)) ⊆ B(h(ξ(c)(3/2 - λ))).
This combines with (3.5) to yield
a ∈ -C(t) - B(h(t)) ⊆ A - (C(t) + B(h(t))) ⊆ K,
where t = ξ(c)(3/2 - λ), and the increasingness of g is established.
The next step is to prove the continuity of g. For this, let {y_n} be a sequence in E_2 converging to y_o. We want to verify the limit
lim g(y_n) = g(y_o).   (3.6)
By definition,
y_o ∈ (g(y_o) + δ)e + K and y_o ∉ (g(y_o) - δ)e + K, for all δ > 0.
By (3.4), -δe ∈ intK, therefore
y_o ∈ int((g(y_o) + δ)e + K),
and consequently
y_n ∈ int((g(y_o) + δ)e + K) ⊆ (g(y_o) + δ)e + K,   (3.7)
for n large enough. Similarly,
y_n ∉ (g(y_o) - δ)e + K, whenever n is large enough.
This relation and (3.7) show that
g(y_o) - δ ≤ g(y_n) ≤ g(y_o) + δ,
proving (3.6). Our last task is to verify the relation
S(X; f) = S(X; g∘f).
It is clear that
g(y) = 0, for all y ∈ A,
g(y) ≥ 0, for all y ∈ f(X).
If y ∈ f(X) \ A, there is some z ∈ f(X) with y - z ∈ C \ l(C). Since g is increasing, g(y) > g(z). In this way,
S(X; g∘f) = {x ∈ X : f(x) ∈ A} = S(X; f).
The proof is complete. •
Concerning the assumptions required in Theorem 3.6, the following remarks are in order. In locally convex separated spaces, condition ii) of Theorem 3.6 is satisfied if and only if C is a cone with a compact convex base. Indeed, let B be such a base of C. By Proposition 1.10 (Chapter 1), C'^+ is nonempty. Take a vector ξ from the latter set and set
C_o = {c ∈ C : ξ(c) = 1}.
It is clear that C_o is also a convex base of C. We claim that it is compact. To see this, it suffices actually to construct a continuous function, say h, from B onto C_o, i.e. with h(B) = C_o. Let h be defined by the rule h(b) = b/ξ(b) for each b ∈ B. Since B does not contain zero and ξ ∈ C'^+, ξ(b) ≠ 0 for all b ∈ B. Hence h is well defined. Moreover, it is continuous since so is ξ. Furthermore, for every c ∈ C_o there are some b ∈ B, t > 0 such that c = tb, because B is a base. Remembering that ξ(c) = 1, we have that t = 1/ξ(b); in other words, c = h(b), and h(B) = C_o. Finally, as to condition iii), observe that if a continuous increasing function g yields the relation S(X; f) = S(X; g∘f) and f(X) is compact, then the set Min(f(X)|C) is compact.
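For instance, in E_2 = R^n with C = R^n_+, the functional ξ(c) = c_1 + ... + c_n belongs to C' and the set {c ∈ C : ξ(c) = 1} is the standard simplex, which is compact; thus condition ii) of Theorem 3.6 holds automatically, and only the compactness of f(X) and of Min(f(X)|C) needs to be checked.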
4. STABILITY
Let T and E_1 be topological spaces and let E_2 be a separated topological vector space over the reals. Let us be given the following set-valued maps:
X(t) : T ⇒ E_1,   C(t) : T ⇒ E_2,   F(t, x) : T × E_1 ⇒ E_2,
where C(.) is convex cone-valued. These maps determine the following parametric vector optimization problem:
(P(t))    min F(t, x)
          s.t. x ∈ X(t),
with the ordering cone C(t) in E_2. We adopt the following notations: for every t ∈ T,
Q(t) = F(t, X(t)),
M(t) = Min(Q(t)|C(t)),
WM(t) = WMin(Q(t)|C(t)),
S(t) = {x ∈ X(t) : F(t, x) ∩ M(t) ≠ ∅},
WS(t) = {x ∈ X(t) : F(t, x) ∩ WM(t) ≠ ∅}.
With these notations we obtain in addition five set-valued maps: Q(.) from T to E_2, S(.) and WS(.) from T to E_1, and M(.) and WM(.) from T to E_2. In the terminology of scalar optimization the maps M(.) and WM(.) are called the marginal functions. The two most important questions arising in connection with parametric optimization concern the continuity properties (stability) and the differentiability properties (sensitivity) of the maps mentioned above. In this section we present some stability aspects of parametric vector problems. The reader is referred to Tanino (1988) for material concerning sensitivity investigations.
Proposition 4.1 The map Q(.) is
1) closed if
i) F(.,.) is closed,
ii) X(.) is compact closed;
2) upper continuous if
iii) F(.,.) is upper continuous,
iv) X(.) is upper continuous compact-valued;
3) lower continuous if
v) F(.,.) is lower continuous,
vi) X(.) is lower continuous;
4) compact-valued if
vii) F(t,.) is upper continuous compact-valued in the second variable for every fixed t ∈ T,
viii) X(.) is compact-valued.
Proof. For the first statement, let {(t_α, y_α)} be a net from the graph of Q(.) converging to (t_o, y_o), some t_o ∈ T. We have to show that y_o ∈ F(t_o, X(t_o)), i.e. y_o ∈ F(t_o, x_o) for some x_o ∈ X(t_o). Let x_α ∈ X(t_α) be such that y_α ∈ F(t_α, x_α). By condition ii), it can be assumed that {x_α} converges to some x_o ∈ X(t_o). Since F is closed, y_o ∈ F(t_o, x_o) as required.
For the second statement, let V be a neighborhood of F(t_o, X(t_o)) in E_2. We have to find a neighborhood U of t_o in T such that
F(t, X(t)) ⊆ V, for all t ∈ U.   (4.1)
In view of iii), for each x ∈ X(t_o) there are neighborhoods A(x) of t_o in T and B(x) of x in E_1 such that
F(A(x), B(x)) ⊆ V.   (4.2)
Since X(t_o) is compact, one can find a finite number of points, say x_1, ..., x_n in X(t_o), such that {B(x_1), ..., B(x_n)} is an open cover of it. Denote by B the union of B(x_1), ..., B(x_n). It is an open neighborhood of X(t_o) in E_1. By the upper continuity of X(.), there is a neighborhood A_o of t_o in T such that
X(A_o) ⊆ B.   (4.3)
Take now U = A_o ∩ A(x_1) ∩ ... ∩ A(x_n) and combine (4.3) with (4.2) to get (4.1). Further, for 3), let V be a neighborhood in E_2 with
V ∩ F(t_o, X(t_o)) ≠ ∅, i.e.
V ∩ F(t_o, x_o) ≠ ∅, for some x_o ∈ X(t_o).
By condition v), there are neighborhoods A_o of t_o in T and B_o of x_o in E_1 such that
V ∩ F(t, x) ≠ ∅, for all t ∈ A_o, x ∈ B_o.   (4.4)
Since X(.) is lower continuous, for the given B_o there is a neighborhood A of t_o in T such that
B_o ∩ X(t) ≠ ∅, for all t ∈ A.   (4.5)
Take U = A ∩ A_o and combine (4.5) with (4.4) to see that
V ∩ F(t, X(t)) ≠ ∅ for all t ∈ U,
i.e. Q(.) is lower continuous. The last statement is trivial. •
In Proposition 4.1 the requirement that X(.) is compact in the first statement and that X(.) is compact-valued in the second one is indispensable. To see this, let us consider the following examples. Let
T = [0,1], X(t) = R for each t ∈ T, F(t, x) = e^{-x²} for each t ∈ T, x ∈ R. Then Q(.) is not closed: for instance, the points (1/n, e^{-n²}), n = 1, 2, ..., belong to the graph of Q, although the limit of that sequence is (0, 0), which does not belong to the graph. Further, let T be, as before, the segment [0,1], let X(.) be the map from T to R with X(t) = [0, 1/t], and let F(.,.) be the map from T × R to R^2 defined by the rule
F(t, x) = (x, tx).
Then Q(.) is obviously not upper continuous at 0, although F(.,.) is continuous and X(.) is upper continuous.
Definition 4.2 A set-valued map G(t) from T to E_2 is said to be lower C(.)-continuous at t_o ∈ T if for each neighborhood V in E_2, V ∩ G(t_o) ≠ ∅ implies (V + C(t)) ∩ G(t) ≠ ∅ for all t in some neighborhood of t_o in T.
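A simple illustration of the difference between the two notions: let E_2 = R, C(t) = R_+ for all t, G(0) = [0,1] and G(t) = {1} for t > 0. Then G(.) is not lower continuous at 0, since points of G(0) near 0 are not approached by G(t); but it is lower C(.)-continuous there, because any open set V meeting [0,1] contains a point v ≤ 1, so that 1 ∈ v + R_+ ⊆ V + C(t) and hence (V + C(t)) ∩ G(t) ≠ ∅.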
It should be observed that if G(.) is lower continuous at t_o, then it is lower C(.)-continuous at that point, whatever the map C(.) may be. In the case where C(.) is a constant map, the concepts of lower C(.)-continuity and lower C-continuity coincide. Concerning the map C(.) we make the following hypotheses:
(H1) C(t) is closed pointed for every t ∈ T and C(.) is closed;
(H2) intC(t) is nonempty for every t ∈ T and C(.) is continuous in the sense that for each t ∈ T, if c ∈ intC(t), then there is a neighborhood V of c in E_2 and a neighborhood U of t in T such that V ⊆ intC(t') for all t' ∈ U.
The continuity in the sense above holds, for instance, when C(.) is a constant map, or when it is lower continuous in finite dimensional spaces.
Theorem 4.3 The map WM(.) is
1) closed if (H2) holds and
i) Q(.) is closed lower continuous;
2) upper continuous if (H2) holds and
ii) Q(.) is continuous, compact-valued;
3) lower (-C(.))-continuous if
iii) Q(.) is lower continuous compact-valued.
Proof. For the first statement, let {(t_α, y_α)} be a net from the graph of WM(.) converging to (t_o, y_o), t_o ∈ T. We have to prove y_o ∈ WM(t_o). Indeed, by the closedness of Q(.), y_o ∈ Q(t_o). If y_o ∉ WM(t_o), there is some z ∈ Q(t_o) such that y_o - z ∈ intC(t_o). Since Q(.) is lower continuous, for z ∈ Q(t_o) there is a net {z_α}, z_α ∈ Q(t_α), such that
lim z_α = z.
It follows from this limit and from hypothesis (H2) that for α large enough,
y_α - z_α ∈ intC(t_α),
contradicting the fact that y_α ∈ WM(t_α).
For the second statement, suppose to the contrary that there is a neighborhood V of WM(t_o) in E_2 and a net {(t_α, y_α)} from the graph of WM(.) such that
lim t_α = t_o ∈ T,   y_α ∉ V.
By condition ii) we may assume that {y_α} converges to some y_o. It is easy to verify that Q(.) is closed, hence so is WM(.) by the first statement. Consequently, we arrive at the contradiction
y_o ∈ WM(t_o) ⊆ V.
(V - C(t a )) n WM(t a ) = 0, Yo E V n WM(t o) ::f.
0, some Yo
E E2 •
(4.6)
Since Q(.) is lo,ver semicontinuous, there is a net {Ya}, Yo: E Q(t a ) such that Yo E V. According to the weak domination property, there is some ZOo E Wl\l(t a.) such that
intC(t a ). E V - C(t a ), contradicting
Ycx - Za E
In other words,
Zek
(4.6)~
•
Theorem 4.4 The map M(.) is 1) lower (- C (.)) -continuous
if
i) Q (.) is lower semicontinuous, ii) the domination property holds for every Q(t), t E T;
2) lower continuous if (HI) holds and iii)
Q(~) is continuous
compact·-'l)alued~
Proof. We shall first treat the case where i) and ii) hold. Let Yo E l\11(t o) and ·suppose to the contrary that there arc a net {ta.} converging to to, a neighborhood V of Yo in E 2 such that
n (V -
C(t Q ) ) = 0. By i), there is some YQ E Q(t Q ) and Yet E V for each some z~ E M(t a ), such that M(toJ
Ya - Zo:
Consequently,
E C(t o:}.
(4.7) 0'.
In view of ii), there is
106
,
ZQ
E V - C'(taJ,
contradicting (4.7). Further, if iii) holds, without 16ss of generality, it can be assumed that By (III), Yo -
Zo
limzo: = Zo E Q(t o ). E C(t o). Hence Yo = ZQ and M(.) indeed is lower continuous.
II
It should arise a question of what about the upper continuity of the map 1\1(.). In general, tWs map is not upper continuous even under strict conditions. For instance, in R 2 we take - tX 2,
a:5 X2
Q(t) = {(Xl,X2) E R 2 : either S I}, C(t) = R~, for every t E [0,1].
X2
= 0, 0 ~
Xl ::;
1 or xl =
1'hen M(.) is not upper continuous at 0, nevertheless Q(.) is continuous compact and C(.) is constant. Of course, M(.) is upper C-continuous, but this property gives no new information because under the domination property the t\VQ sets M(i) -~ C(t) and Q(t) + G(t) coincide. Theorem 4.5 The map WS(.) is 1) closed if (H2) holds and i) F (., .) is compact c los ed lower continuous, ii) X(.) is closed lower continuous; 2) upper continuous
if (H2)
holds and
iii) F(.,.) is continuous compact--'l)alued, iv) X(.) is continuous compact-valued; Proof. We prove first 1). Let {(to,xoJ} be a net from the graph of WS(.) converging to (to,x o). As X(.) is closed, X o E X(t o). Suppose to the contrary that X o fj WS(t o ), i.e.
F(t o, $0)
n W.i\!(F(t o, X(to))IC(t o)) = 0.
(4.8)
Let
Ya E F(t~,xQ) n WM(toJ. By i), ,ve may assume that {YQ} converges to some Yo E F(t o' x o). In view of (4.8) there is some Zo E F(t o ' a), some a E X(t o ) such that
Yo -
Zo
E intC(t o).
Since X(.) and F(.,.) are lower continuous, there arc some ao: E X(ta:), F(t a , aCe) ,vith limaec = a,
(4.9) Za:
E
107
limza =
Zoo
It follows now from (4.9) and (H2) that Yo - Za E intC(t a ), for a large enough, \vherc Za E F(t Ct , X(to:)). We arrive at the contradiction Ya ¢ Wi\l(t o ) and the first statement is proven. For the second statement, it suffices to note that under the conditions of this part, the map WS(.) is closed. We can then express it as:
WS(t) = WS(t) n.X(t). Direct verification (or see Theorem 7,Chaptcr VI of Berge (1963)) shows that the intersection of a closed map with a upper continuous compact·-valucd map is upper continuous.•
Corollary 4.6 Suppose that F(.,.) is a point-1Jalued continuous map and X(.) is continuous compact-valued. Then WS(.) is upper continuous if (H2) holds. Proof. Kate that point-valued maps arc compact-valued. l"'he result is no,\v immediate from Theorem 4.5.•
Theorem 4.7 The map S(.) is lower continuous if (HI) holds and if i) F(.,.) is continuous compact valued and for each t E T, x ¥= x' in E 1 , l\IJin(F(t, x)IC(t)) does not intersect Min(F(t, x')IC(t)), ii) X(.) is continuous compact-valued.
Proof. Let
Xo
E S(t o ), which means there is some Yo,
Yo E F(io'xo) n M(t o)' Suppose that the assertion of the theorem is not true,Le. there is a neighborhood V of X o in E 1 and a net {tee} converging to to in T with
S(t a ) n V = 0. By Proposition 4.1 and rI'heorem 4.4, we may assume that there are Yo:,
Yee E F(to:, x ee ) n M(i where
Xo:
Q ),
E X(taJ such that
limyo = Yo. In view of ii), we may also assume that limx Ct = x, some x E X(t o ). Since Yo E M(t o) and this set contains the set F(t o, x) and F(t o, xo) as \vcll, by Proposition 2.6 (Chapter 2), Yo is an efficient point for both of these sets, contradicting i) and the proof is complete.•
108
Corollary 4.8 Suppose that F(.,.) is a point-1Jalued continuous map and X(.) is continuous compact-valued. Then S(.) is lower continuous if (Ii!) holds and if F(., .) is injective in the second variable for every fixed t E T, i. e.
F(t, x)
~
F(t,$') whenel1erx
# X'.
ProoE It suffices to apply 'fhcorcm 4.7 and to observe that the set of efficient points of F(t, x) with respect to C(t) is the point F(t, x) itself.•
Chapter 5
Duality
In mathematical programming, duality means that to every optimization (say minimization) problem one relates a maximization problem in such a manner that by solving the latter problem it is possible to get the optimal value of the first one. To see the crucial ideas of this method, let us consider a linear programming problem, denoted by (LP):
min cx
s.t. x ∈ R^n, Ax ≥ b,
where c ∈ R^n, b ∈ R^m and A is an (m × n)-matrix.
It is known that the dual problem, denoted by (LD), is of the form:
max by
s.t. y ∈ R^m, A^T y = c, y ≥ 0,
where A^T is the transpose of A. These problems are linked by the duality relations described below:
1) cx ≥ by, for all feasible solutions x of (LP) and y of (LD);
2) (LP) has an optimal solution if and only if so does (LD), and their optimal values are equal;
3) if (LP) has no optimal solutions, then (LD) has no feasible solutions, and vice versa;
4) (LP) is the dual problem of (LD).
It follows from the above relations that (LP) is completely characterized by (LD), and any theoretical or computational aspect of (LD) reflects and is reflected by that of (LP). This is why duality is a powerful tool in the study of mathematical programming problems.
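For instance, with n = 1 and m = 2 these relations can be checked by hand: take c = 1, A = (1, -1)^T and b = (1, -3)^T, i.e. minimize x subject to x ≥ 1 and -x ≥ -3. The dual problem is to maximize y_1 - 3y_2 subject to y_1 - y_2 = 1, y ≥ 0; its optimal solution is y = (1, 0) with value 1, which equals the primal optimal value attained at x = 1, in accordance with relations 1) and 2).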
For vector optimization problems such a satisfactory duality cannot be expected, because of the incompleteness of preference orders. However, we can develop duality in such a way that it preserves some useful linkages between minimization and maximization problems, such as relation 1) above, in the vector case. This chapter is devoted to the duality theory of vector problems with set-valued objectives in a very general setting. In the first two sections, using classical approaches such as the Lagrangean and conjugate approaches, we establish duality results for problems satisfying certain constraint qualification and convexity assumptions. Section 3 deals with the axiomatic approach, which allows us to construct dual problems and to obtain duality results for nonconvex vector problems under rather weak conditions. In the final section we investigate the relation between duality and alternative in spaces without linear structure. For the sake of simple presentation, the ordering cones considered in this chapter are presumed to be pointed.
1.LAGRANGEAN DUALITY
The following notations arc adopted throughout this part: E 1 , E 2 , E a are separated topological vector spaces over reals, X is a nonempty subset of E 1 , C ~ E 2 and !( ~ E s are convex pointed cones with nonempty interior, F and G are set-valued maps from E 1 to E 2 and E a respectively, with X ~ domP n domG. Let us return to the vector optimization problem (VP) with set-valued data, introduced in Chapter 2:
min F(x)
s.t. x ∈ X, G(x) ∩ -K ≠ ∅.
Recall that the set
to (K,C). Yh = {y E C+ : y is C - convex, positively homogeneous}, Yl = {y E C : y is linear with y(K) ~ C},
111
Ye = {y E C: there is some ~ E !(' such that Y(A) = ~(.)e},
where e is a fixed vector from intO. An immediate consequence of all ,ve have introduced above is that the last three sets are convex cones in C and they yield the relation:
Ye ~ Yl ~ Yh ~ C+.
.
Further, let Q(.) denote the set-valued map from E 1 to E 2
Q(x) = (F(x)
X
E s defined by
+ C, G(x) + K), for every x EX.
Here are some hypotheses which will be needed when we say so:
(Hl): F is self-efficient in the sense that F(x) = lvlax(F(x)IC); (H2): G(x) ~ -K, whenever G(x) n -I( I- 0; (H3): Y is a ~ubcone of C+ with the property that for each b ¢ -]( there is some y E Y such that y(b) E intC; (H4): Q(.) is convex at (a, b) E E 2 X E 3 in the sense that cone(Q(X) - (a, b)) is convex in E 2 x E a; (H5): Slater Condition: G(X) n -intI(:f= 0. The following observations are sometimes helpfu1=(Hl) and (H2) are satisfied whenever F and G axe point-valued maps; (H3) holds for all the three cones Ye , Yi, Yh above; (H4) holds for instance when Q(X) is a convex set, \vhich is the case when F is C-convex, G is D-convex and X is convex. We recall that F is C-convex if its epigraph is convex, in other words for each Xl, X2 E X, 0 ::; A :5 1,
AF(Xl) + (1 - t\)F(X2) ~ F(AXI Now let Y be a
conv~x
+ (1 -
A)X2)
+ c.
cone in C . Corresponding to Y and (VP) we define:
1) the Lagrangcan map L(.,.) from X x Y to Ee;. by:
L(x, y) = F(x) + yG(x), for x E X, Y E Y; 2) the dual map D(.) from Y to E 2 by: D(y) = Min(L(X,y)[C), for y E Y; The dual problem of (VP) can be formulated as
(D)
maxD(y) s.t. y E Y. Sometimes we are also interested in the map P(.) :
P(x) = Max(L(x, Y»)C), for x E XA The primal problem associated with this map is
(P)
minP(x)
s.t. x EX. Proposition 1.. 1
Under the hypotheses (HI) , (H2) and (H3) ,
112
P(x) = F(x) for any feasible solution x E X and
P(x) = 0 otherwise. Proof. Let x be a feasible solution of (VP) , Then in view of (H2) ,
G(x)
~ -1(,
Consequently, y(G(x)) ~ -C, for all y E Y .
In particular for y being a zero function and taking (HI) into account we get the relation
P(x) = 1Vlax(F(x)lC) = F(x).
In case
=
G(x) n-K 0, since G(x) f::. 0 there is some b E G(x) \ -K. By (H3) one can find a function y E Y such that
y(b) E intO. The function ty with t running to
00
will make P(x) =
0, completing the proof. -
A useful conclusion is that under (Hl) ,(H2) and (H3) problem (P) coincides with (VP) in the sense that their feasible sets coincide as well as the values of the objective maps on the feasible set.This is always the case when the problem (VP) is scalar and Y Yl as expected in scalar mathematical programming. Before going further let us recall that a triple (x, a, b) E E 1 X E 2 X E 3 is feasible if
=
x EX, a E F(x) and b E G(x) n -K. For the dual problem a feasible couple (y, a') E C x E 2 means that y E Y and at E D (y). Moreover, a feasihIe triple (x, a, b) is called optimal if a E Min(F(Xo)IC) and it is properly optimal if
a E PrMin(F(Xo)rC). (Weak duality theorem)For any feasible triple (x, a, b) of (VP) and feasible couple (y, at) of (D) , it is not the case that a' >c a.
Theorem 1.2
Proof. Suppose to the contrary that
a' - a E C \ {OJ. Add -y(b) to this inclusion with the observation that b E -1(, Y E C+ to obtain the ineIusions:
113
a'-(a+y(b») eC\{O}-y(b) ~ C \ {O} + C ~ C \ {OJ. (1.1) Remembering that a E F(x), b E G(x) we have that a + y(b) E L(x, y) and (1.1) shows that a' cannot be a minimal point of L(X, y), contradicting the fact that at E D(y) .• If a feasible solution x'o of (V P) satisfies the relation F(x o ) n l\fax(D(Y)IC) f.:. 0, then it is an optimal solution of (VP) .
Corollary 1.3
Proof. Let ao E l'(x o } n Max(D(Y)]C). 'fhen there is
som~
Yo E Y such that
ao
E
D(yo)
n Max(D(Y)]C).
By Theorem "1.2, there is DO feasible triple (x, a) b) of (VP) such that a o This means that a o E Min(F(Xo)lC) and X o is really an optimal solution of (VP) .•
>c
a.
Definition 1.4 We say that (x, y) E X x Y is a dual pair of optimal solutions to (VP) and (D) if x solves (V P) , y solves (D) and
F(x) Lemma 1.5
n D(y) n Max(D(Y»)C)
~
0.
4 feasible triple (x o, ao, bo ) of (VP) is optimal if and only if Q(X) n (a o - C \ {O}, -!() = 0.
ProaL This is a reformulation of Proposition 5.8 (Chapter 2).•
Theorem 1.6 (Strong Duality) Assume that i) Y contains ~ for some e E intC; i i) Slate r Condition (H 5) hollls,iii) Q(X) is convex. Then for every properly optimal triple (x o , a o , bo ) of (VP) , there is some such that 1) (x o, Yo) is a dual pair of optimal solutions to (VP) and (D) ; 2) yo(b) = 0, for all bE G(x o ) n -I(.
Yo
E~
114
Proof. Let (x o ' aO ) b~) be a properly optimal triple of (VP) . Let Co be a convex cone in E 2 "\vhich contains G \ {O} in its interior such that ao E .l\lin(F(Xo )ICo ). In view of assumption iii) and Lemma 1.5, we can separate the two convex sets Q(X) and (aa - C \ {O}, -!() by a nonzero functional (~, () E (E2 , E 3 )' : ~(a) + ((b) ~ ~(ao) + ~(-c) + ((-k), (1.2) for all x EX, a E F(x), b E G(x), c E Co \ {O}, k E [(. This relation gives us in particular the following ones: ~ E C~, 'E K'; «b) ~ 0, for all b E G(x o ).
The Slater condition implies that Take
~
::I O. Indeed, if ~ =
(1.3) (1.4) 0, then ( must be nonzero.
b E G(X) n -intI(. With (1.2) in hand one can see that «(b) 2:: 0 which together with (1.3) becomes «(b) = O. 'l\'e arrive at the contradiction: ( = O~ FUrther, since
e E intC ~(e)
~(e)
~
iniGo,
# O. By deviding (1.2) by a positive number if necessary we may assume that
= 1. Define now Yo
by the rule:
yo(b) = ((b)e, for each bEEs. Our task at the moment is to verify that Yo yields the requirements of the theorem. The relation in 2) is immediate from (1.4) and the fact that ( E [('. To prove the first assertion, in view of Corollary 1.3 it suffices only to show that ao E D(yo) n Max(D(y)rC). (1.5) Note first that by 2), a o = aD + yo(b o ) E F(x o ) + yoG(x o ). If ao ¢ D(Vo), then there are some x E X ~ a E F(x o}, bE G(x o) such that ao - (a + yo(b) E C \ {O} ~ int(Co ). (1.6) Apply the functional ~ to the vector in the left hand side of (1.6) to get the inequality
c;(a o ) > ~(a) + ((b), contradicting (1.2). In this way, ao E D(yo).
Xow, if ao (j. l\1ax(D(Y)]C), then there axe some y E Y , a E D(y) such that a - a o E C \ {OJ. Adding -y(bo ) to the latter relation we have that
115
a - (a o
+ y(bo )
E C \ {O},
contradicting the fact that a E D(y) = lvlin(L(X, y)IC)
and (1.5) is proven. Definition 1.7
A pair (x o, Yo) E X x Y is said to be a saddle point of L(.)
L(x o, Yo)
if
n Max(L(x o, Y)IG) n Min(L(Yo,X)JC) i=- 0.
Theorem 1.8 (Saddle Point Theorem) Under (H3) , if (x o, Yo) is a saddle point of L, then
1) (x o , Yo) is a dual pair of optimal solutions to (V P) and (D) ; 2) G(x o) S; -K and Yo (b) = O"for all b E G(x o ). Conversely, under (HI) , point of L.
if 1)
and 2) above hold, then (x o' Yo) is a saddle
Proof.. Posit first that (x o , Yo) is a saddle point4 There are some ao E F(xoL bo E G(x o ) such that
ao + Yo(b o) E Max(L(x o, Y)rC), a o + yo(bo) E Min(L(X, Yo)JC). Argue first that G(x ~ -K. In fact, if that is not the case, say
(1.7) (1~8)
Q)
b E G(x o )
\ -[(,
then by (H3) , there is some y E Y with y(b) E intC. The function ty applying to. b will make the set in the right hand side of (1.7) empty when t runs to 00. Further,it follows from (1~7) that yo(bo ) = 0, and by (1.8), ao belongs to Min(L(X, Yo)fC)~ This shows that
yo(b) = 0, for all b E G(xo) S; -K. Thus the assertion of 2) is proven. For the a.Cjscrtion in 1), the first step is to establish that (x o , ao , bo ) is an optimal triple of (VP) . By the assertion in 2) we have just verified, ~t is a fcasibl~ triple. If it is not optimal, then there axe some x EX, a E F(x), b E G(x) n -I( such that
ao
>c a.
Since yo(b) E -C, the latter inequality may be expressed as a o >c a + yo(b),
where a + yo(b) E L(x, Yo), contradicting (1.8)~
116
The it.ext step is to establish the fact that (Yo, a o) is an optimal couple of (D). We already know by (1.8) and the assertion in 2) that it is feasible. If it is not optimal, then there are some y E Y ,a E D(y) such that
a >c ao. Since y(bo ) E -C~ the latter inequality can be expressed a..c;
a >c ao + y(b o), ""here a o + y(bo ) E L(x o , y), contradicting the fact that
a E D(y) = Min(L(X; y)IC). We have proven that (x o, Yo) is actually a dual pair of optimal solutions to (VP) and (D) ~ Conversely, if (x o, Yo) is a dual pair of optimal solutions then there is some
aa E F(x o) n D(yo)
n Max(D(Y)IC).A
We shovv that under the conditions stated in the theorem, a o belongs to every set which takes part in the intersection in Definition 1.7~ Indeed, by 2),
a o = a o + yo(b), for any b E G(x o ). This means that a o E L(x o, Yo). To complete the proof we only need to verify the inclusion a o E Max(L(x o , Y)IC). Suppose to the contrary that there are some y E Y , a E that
F(xo)~
b E G(x o ) such
a + y(b) >c a o • Since b E -I( and y is nondccreasing, y(b) E -C. Consequently,
a ~c a+y(b) >c a. But both a and a o belong to F(x o ). This is -impossible by (HI) . The proof is complete.•
Corollary 1.9 If in addition to the assumptions of 1 heorem 1~6, (HI) and (112) . Ito ld, then for every prope rly optimal solution x 0 of (VP) , there exists a functio.n Yo E Y e such that (x o, Yo) is a saddle point of L. 1
Prool Invoke this to Theorems 1.6 and 1.8.• Remark 1.10 By replacing the cone {OJ U intC instead of G everywhere in this section we obtain immediately the results for weak optimal points. In this case the word "properly" in Theorem 1.6 is superfluous. 1'heorem 1.6 for weak optimal solutions was proven in Corley(1987).
117
2.CONJUGATE DUALITY
Given a vector problem as in the previous section
minF(x) s~t. x E X, G(x) n -I( # 0. Let us define a perturbation for (VP) as a map from E 1 x E 3 to E 2 by the rule
.
It is clear that problem (VP) is the same as (Po). Further, let as before C be the space of continuous functions from E 3 to E 2 , Yo a linear subspace of C and let V be the space of continuous functions from E 1 to E2, Zo a linear subspace of'D . Corresponding to {f?, Yo and Zo one can define the conjugate map of q> as a map q,* from'D x C to E 2 by the rule:
+ y(b) -
tI>(x, b) : x EEl, b E E3 }rC)
if y E Yo , z E Zoi
F(AXI
+ (1 -
A)X2)
~
AF(Xl} + (1 - A)F(X2) -
c.
Proposition 2.1 Let <J? and CI?* be define4 as above. Then 1) ~(.,.) is C-convex in both variables on E 1 x E 3 if X is convex, F(.) is C-convex, G is K -convex on X; 2) ~*(.,.) is C-concave in both variables on V x C if Yo and Zo contain linear operators Proof. Let
only~
Xl' X2
E E1
,
b1 , ~ E E 3 , 0 :5 A ::; 1 be given. We have to show that
118
+ (1 - -X)
AtI>(Xl' b1)
A)b2 )
+ C.
(2.1)
If one of the sets iJ?(Xi, bi), i = 1,2, is empty, then everything is trivial. Thus, we may assume that both of them are nonempty which mean that Xi E X and bi E G(Xi) n -K, i = 1,2. Since G is [(-convex,
Ab 1 + (1 - A)b2 E AG(Xl) + (1 - .:\)G(X2) ~ G(AXl + (1 - .:\)X2) + K~ Hence,
+ (1 -
A)X2).
For every y E Yo n C+ we have that -iI>*{O, y) = D(y), where D(y) is defined by the Lagrangean map in the previous section.
Proposition 2.2
Proof. We calculate directly -~* (0, y) by taking into account the fact that C+ consists of nondecreasing functions with respect to (K, C). For y E Yo n c+ : -
Theorem 2.3 (Weak duality) For every feasible triple (x o , a o , bo ) of (VP) and feasible couple (Yo, a~) of (D*) with Yo(O) = 0, it is not the case that a~ >c a. Proof Suppose to the contraxy that a~ >c a. Since Yo(O) = 0 and bo E G(x o) n -K, we can get the following inclusions:
ao = ao + Yo(bo - bo ) E F(x o} + yo(G(x o ) ~ U{yo(b) -
+ K)
If (x 0, Yo) E (X, Yo) is a dual pair of optimal solutions to (VP)
119
o E
*(0, Yo). Conversely, if every function of Yo vanishes at 0, i. e. a"-nd if for some X o EEl, Yo E E 2 =
y(O)
= 0 for
all y E Yo
a E ~(xo, 0) + <1>*(0, Yo), then (x o, Yo) is a dual pair of optimal solutions to (VP) and (D*). Proof. The first part is trivial.
For the second part,let a o E ~(xo, 0) n -<}*(O, Yo). We prove that a o is a common optimal value for (VP) and its conjugate dual. To do this observe first that a o E ~(xo, 0) means that a o E F(x o) and there is some bo E G(x o ) n -1(. In other words, (x o , ao, b~) is a feasible triple of (VP) . If Go is not an optimal value of (VP) ,then there is a feasible triple (x,a,b) such that
a o >c a. But (Yo, ao) is a feasible couple of the conjugate dual, the latter inequality contradicts the result of Theorem 2.3. Further, if a o is not an optimal value of the conjugate dual, then there is a feasible couple (y, a) such that a >c a o , ""hich again contradicts the result of Theorem 2.3 because (x o , ao, bo ) is a feasible triple of (VP) .• We now consider the classical ca..c;e where Yo and Zo consist of linear functions only. For the sake of simplicity we assume that ep(E1 , b) satisfies the domination property for every b E E 3 . Denote
M(b) = Min(~(El' b)IC), for each b E E 3 • Then the map 1\!(.) is a set-valued map from E 3 to E 2 •
Definition 2e5
M(.) is said to be subdifferentiable at bo E E3 if for each value
ao E M(bo ), the. set
8M(bo ; ao ) = {y E Yo : ao
-
y(bo ) E Min(U{M(b) - y(b) : b E E 3 }JC}
is nonempty.
Definition 2.6
We say that (VP) is stable
Theorem 2.7 (VP) is stable that is
if M(.) is subdifferentiable at O.
if and only if for each minimizer (x o , ao ) of(VP),
120
there exists a solution Yo of (D*) such that a o E -<1>11(0) Yo).
Proof. Posit first that (VP) is stablc,i.c~ for each a E M(O), the set 8}J(O; a) is noncrnpty. Let (x o, ao) be a minimizer of (VP) .1"1hen there is some Yo E 8M(O; ao). We show that a o E --«p(O, Yo) and by Corollary 2.4, Yo is actually a solution of (D*). Indeed, Yo E 81\1(0; a o) means that ao E Min(U{M(b) - yo(b) : bE E 3 }IC). Using the domination property we have ao E -Max(U{Yo(b) - (x,b): x E E1,b E E 3 }IC) = -
3.AXIOMATIC DUALITY
Let us consider a general vector optimization problem
(P)
minP(x) s.t. x E X,
where X is a nonempty set and P is a set-valued map from X to a separated topological vector space E 2 in which a convex cone C specifies a partial order (~c). We already know that if X is given by constraints as in problem (VP) of the previous sections, then using Lagrangean maps or conjugate maps we can construct a dual problem for (P) and prove the main linkage between these problems expressed in the weak duality relation (Theorem 1.2). In this section we do the converse. Roughly speaking, given Problem (P) , starting from the weak duality relation we construct a dual problem for (P) wi thou t any knowledge of Lagrangean or conjugate maps.
Definition 3.1
A vector optimization problem is called a dual problem of (P) it is a maximization problem which is denoted by (D) and is of the form
if
121
maxD(y) s.t. YEY, : ::Whe're Y is a nonempty set, D(~) is a set-valued map from Y to E 2 , and the '":".foilowin9 relation called weak duality axiom holds . .. (W.D A ) D(y) n (p(a;) + C \ {On = 0 , for each a; EX, Y E Y
. It should be noted that the set of dual problems of (P) is infinitely many if "it is nonempty. Further, given a maJ4mi'zation problem (D) ,then a dual problem of (D) by definition is a minimization problem of the form of (P) which satisfies the relation .(WPA) P(x) n (D(y) - C\ {O}) = 0 ,for each x EX, Y E y~ It is clear that if (D) is a dual problem of (P) , then (P) is one of the dual "problems of (D) . This is the synunetry property of duality.
"Definition 3.2
We say that (D) is an exact dual problem of (P) D(Y)
if
n P(X) f 0.
According to this definition, if (D) is an exact dual of (P) then (P) is also an e.."{act dual of (D) . Morcover~ for any nonempty subset Yo S; Y, one can define a subproblem
(Do)
minD(y)
s.t. y E YoTrivially, if (D) is a dual of (P) , then so is (Do). Although, if (D) is an exact dual of (P) , it is not necessary for the subproblem to be exact.
Proposition 3.3
If (D) is an exact dual of (P) , then both problems possess optimal solutions and they have a common optimal value.
Proof. Let ao E D(Y) n P(X), i.e. there are some
ao
E
Xo
E X and Yo E Y such that
P(x o) n D(yo).
We show that a o is a common optimal value of (P) and (D) . Indeed, if ao ¢ Min(P(X)]C), then there is some a E P(X) such that a o E a + C \ {O}. This contradicts (WDA) because a o E D(yo) and a E P(X). Hence ao is an optimal value of (P) , so that X o is an optimal solution of it. The proof of the fact that a o is an optimal value of (D) is similar.•
122
We proceed now to show ho~: to construct (D) for a given (P) by using (WDA). To this end let the constraint set of (P) be given by {x E X, G(x) n -I( :f: 0} as in the first section,i.e. we consider problem (GP) . OUf scheme is as follows:
Step 1: Write formally a maximization problem (D) ,vith Y and D(.) being UnknO\V11
Step 2: Write down the weak duality axiom:
P(x) n (D(y) - C \ {O}) = 0, for all y E Y , x E X with G(x) n -]( :f: 0. This is equivalent to the following: (D(y) - C \ {a}, -!<) n Q(x) = 0,
(3.1) EX, Y E Y, ,vhcrc Q(x) = (P(x) + C, G(x) + !() ~ E 2 X E3 • Step 3: Choose a partially ordered ·space to separate the sets involved in the left hand side of (3.1). Let the space be Eo which is ordered by a convex cone lvI ~ Eo. Denote by S the set of point-valued maps from E 2 x E 3 to Eo which are increasing in the first variable with respect to (C, M) and nondecreasing in the second variable ,vith respect to (I(, M), i.e. for s E S , a, a' E E 2 ) b, b' E E 3 :
for all
X
s(a, b) > lvI s(a', b) if a >c a', s(a, b) ~Jt,f s(a, b') if b ~K b'. Step 4: Identify Y with S and define a set--valued map D(.) from S to E 2 by the rule: for s E S , D(s) = {a E E z : s(a,O) E Min(s(Q(X))11VI)}. Step 5: Write down the maximization problem corresponding to S and DC.) obtained in Steps 3 and 4:
(D)
max{a E E 2 : s(a, 0) E .l\lfin(s(Q(X))lM)} s.t. s E S. The follo\ving result assures that (D) is really a dual problem of (GP) .
Lemma 3.4 If there exists a/unction s E S separating the sets (D(y)-C\{O}, 0)
and Q(X), i.e. s(a, b) "2:M s(a', 0)) for each (a, b) E Q(X),a' E D(y) - C\ {O}, then (3.1) holds. Proof. Suppose to the contrary that t.here exist some a' E D(y), x EX, a E PCx),
cl E C \ {O}, c E C , b E G(x), k E !( such that
123
(at -c',O)
= (a+c,b+ k).
Applying function s to the point (a' - c'/2, 0) = (a + c + c' /2, b + k) ,ve get
s(a' - c'/2~ 0) = s(a + c+ c' /2, b+ k) >M s(a+ c+ c' /4, b+ k), where
(a+c+d/4,b+k) E Q(x),
Ca' - c' /2,0)
E
(D(y) - C \ {OJ, 0).
The above inequality contradicts the condition of the lemma.• It should be emphasized that the dual problem (D) obtained in Step 5 depends not only upon the initial problem (G P) , but also upon the space Eo in Step 3. By varying Eo one can obtain several dual problems for (GP) . Furthermore, it is clear that the set S is noncmpty whenever M is not a linear subspace of Eo , but it is not sure that DC.) has noncmpty domain on a nonempty subset So ~ S~ This means that not every subproblem (Do) has the objective being nonempty-valued on the feasible solution set~
Theorem 3.5 Let Eo = E 2 , M = C and assume that (GP) has an optimal solution. Then (D) is an exact dual 'of (G P) . In particular, domD is nonempty on S. Proof. Let a o be an optimal solution of (GP) and e E intC. Define a map
8 0 from E 3 to E 2 by the rule: so(a, b) = a, if b E -[(, and so(a, b) = a + e, otherwise. I t is evident that So is increasing in the first variable with respect to (C, C) and nondecreasing in the second variable with respect to (K, C). In other words) So E S. Moreover,
E2
X
a o E D(so) n P(Xo ). Hence,(D) is an exact dual of (GP) . It also follows that is complete. •
So
E
domD(.). The proof
From the point view of applications the result of Theorem 3~5 is of little interest because the set S is too large and the calculation of D(so) is practically the same as solving (GP) . The main task of duality theory is to find a subset So ~ S as small as possible so that the subproblem (D Q ) is still an exact dual of (GP) .
Again, let us consider the case where Eo = E 2 , M = C. We show how the Lagrangean duality can be drawn from our scheme. Remember that Yi denotes the set of continuous linear operators y from E3 to E 2
124
'\vith y(I() ~ C. Let Sl be the subset of S defined by the rule: s E 5 if there is some y E Yi such that s(a, b) = a + y(b), for each a E E 2 , b E E 3 · It is clear that s(.,.) is increasing in the first variable and nondccreasing in the second one. Hence, Sl is actually a subset of S and the following problem denoted by (D[) is indeed a dual problem of (P) : max Min(U{F(x) + yG(x) : x E X}lC) s.t. y E Yi. 1
[
Theorem 3.6 Assume that the Slater condition (H5) and convexity assumption (H4) hold. Then (D l ) is an exact dual of (GP) if (GP) has a properly optimal solution. Proof. Invoke this to theorem 1.6.•
NO\V \VC consider the case Eo = R, M = R+. Let us fix two vectors €2 and ea from intC and intI( respectively and as before, No denotes the cone composed of zero and of (C \ {O},I() in the product space E 2 x E 3 • Further, let N be a convex pointed cone in E 2 x E a Yvith the property: No \ {OJ ~ intlV. (3.2) Denote by S N the set of functions from E 2 x E 3 to R which are of the form: s E S N if there is some ao E E 2 such that for every (a, b) E E 2 X E a : s(a, b) = inf{t E R : (a, b) E (a o , 0) + t(e2' e3) - lV}. (3.3) It follo\vs from the proof of Theorem 1.6 (Chapter 4) that any function from SN is continuous. :Moreover, ,ve have the result:
Lemma 3.7
FOT
every cone lV satisfying (3.2), SN is a subset of S.
Proof. The proof of Theorem 1.6 (Chapter 4) shows also that any 8 E SN is increasing with respect to ({O} U intN, R+) . This means that for each (a, b) and (a', b') from E 2 x E 3 , (a, b) - (a', b') E intlV implies that s(a, b) > s(a', b'). In particular, ,vhen b = b' , a >c a', ,ve have (a, b) - (a', b) E No \ {O} ~ intN.
Hence
125
s(a, b) > s(a', b) is increasing in the first variable. Further, for every a E E 2 and b, b' E E 3 with b ~I( b', we take a nonzero .vector c E C and consider the sequence {(a + c/n, b)}. It is clear that and indeed
5
(a + c/n, b)
>N (a, b')
~
Hence
s(a + c/n, b) > s(a, b') . This and the continuity of s give us the relation s(a, b) ~ B(a, b') when n runs to 00. In this way, s(.) is nondecreasing in the second variable, establishing S N ~ S. •
.
In virtue of Lemma 3.6 for every cone lV satisfying the property (3.2) , the following problem denoted by (D N ) is a dual problem of (GP) : maxD(s) S.t. s E SN. Since s is defined by some ao E E 2 , we may identify SJ.v with E 2 and (D N ) becomes of the form: maxD(y) s.t. Y E E 2 , where
D(y) = {z E E 2 : inf{t : (z - y,O) E i(C2' C3) - IV} = miUCa,b)EQ(X) inf{t : (a - y, b) E t(e2' e3)'- N}}, here min and in! arc taken in the usual sense in R~ Moreover, instead of min in the expression of D(y) one can take inf and this docs not effect on the validity of the results obtained in this part~
Assume that PrMin(Q(X)INo ) n (E2 ,O) f:. 0. . " Then there exists a cone lV such that (DN) is an exact dual of (GP) Theorem 3.8
~
Proof. Let (a o , 0) E Prl\1in(Q(X)IC) . By definition of proper efficiency, there is a cone N satisfying (3.2) such that
(a o , 0) E Min(Q(X)JC) . With this lV, CD N) is a dual of "(G P) Furthermore, let s be the function defined by (3.3) . Then it is clear that 6
s(a, b)
~
s(a o , 0), for every (a, b) E Q(X) .
126
rrhis means that ao E "D(s) . :Moreover, by Corollary 5.8 (Chapter 2), a o is an optimal value of (GP) . In particular~ a o E P(X o ) ,where X o is the set of feasible solutions of (GP) . Consequently,
D(SN) n P(X o ) 1: 0 and (Dl\r) eventually is an exact dual of (GP) .• vVe do not go further on conditions under which (DN) is exact. Instead of this "\ve present an approximating result which should be helpful in solving nonlinear problems with no constraint qualifications. Let {N(a)} be a net of convex pointed cones which satisfy relation (3.2) and the two following ones: l\f(Q) \ {O} ~ inilV({3), if N(Ct) S; clNo .
n
Theorem 3.9
Q
> (3,
(3.4) (3.5)
Assume that
i) any unbounded part of Q(X) has nonzero recession vector and the inters ection of Q (X) with any bounded clos ed set is compact~· ii) for each index (1', (DN(Q)) has an optimal value aa with lima~ = a*; iii) ]( is closed. Then a* ¢ Min(P(~Yo)IC) - intC . Proof. Let us define a function
Soc
E Sl'v(~) by the formula:
sa(a, b) = inf{t : (a, b) E (aQ;' 0) + t(e2, e3) - N(a)}. We claim that ao: E P( so:) . Indced~ since a~ is an optimal value, there is some s E
SN(a.)
such that
s(a co 0) = inf{s(a) b) : (a, b) E Q(X)}_ It follows from the definition of s and from the equality above that ((ao,O) - intN(a)) nQ(X) = 0. Consequently
If t~ = inf{sa(a, b) : (a, b) E Q(X)} > 0, then by the technique ,ve have use9. in the proof of Theorem 1.6 (Chapter 4), one can verify that there exists a positive number e such that
BQ(aO'
In other ,vords,
+ se2, 0) =
to'.
127
contradicting the fact that a Q is an optimal value of (DN(Ot.)) • In this 'Yay, we have established ao: E D (s Cc) . ~ ex t, we prove that if Q' is not the starting index, i.e. 0:' > f3 for some index {3, then Sa(aa' 0) = sc:l(a(a), b(a)), (3.6) for some (a(a) , b(at)) E Q(X) ~ In fact, if that is not the case, then by i) , for every £ > 0, the set
((aa' 0) + (to:
+ e)(e2;e3) -
N(a))
n Q(X)
must be unbounded. Again, in view of i) ,this set has a nonzero recession vector and hence
Rec(Q(X))
n -N(o:) =f:. {O}.
By (3.4) and Lemma 3.10 to be proven later, we conclude that inf{sj3(a, b) : (a) b) E Q(X)} cannot be finite, contradicting the optimality of ape Posit now to the contrary that there is some a o E Min(P(Xo)IC) such that· a* E a o - intC. Then one can find an index 0'0 such that ao: E 'a o -
(3.7)
intC, for all a 2:
0:'0-
Since we have then
(a(a), b(a)) E (a o , 0) - N(o:) , for all a: ~ a o By the argument used in proving (3.6) it can be shown that the intersection ((a o ' 0) - N(a o ) n Q(X) is
bounded~
hence compact whenever
0'0
>
{3. Therefore, we may assume that ~ Moreover, by (3.5»)
{(a(a), b(Q))} converges to some (a, b) E Q(X) (a, b) E (aile, 0) - CllVo _ In particular, a - a* E -ciC, bE-elK.
(3.8)
Since [( is closed, b E -I( and hence a E P(Xo ) • (Ihis together with (3.7) and (3.8) shows that a o >c a, contradicting the fact that a o is an optimal value of (GP) _The proof is complete. _
Lemma 3.10 Suppose that N is a convex pointed cone satisfying relation (3.2) and Qo is a nonempty set in E 2 X E 3 with the property that
128
Rec(Qo)
n -intN of 0.
Then inf{ s(a, b) : (a~ b) E Q o} cannot be
finite~
Proof. I-lct a o E E 2 be a point determining s, i.e. s(a, b) = inf{t = (a, b) E (a o ' 0) + t(e2, C3) - N} and suppose to the contrary that that infimum is finite,say it equals t E R. Then Qo
n ((a o , 0) + t o (e2' ea) -
intN)
= 0.
This iID:plics that
Rec(Qo)
n -intN = 0,
contradicting the assumption of the lemma. • We turn now to the application of the above results to scalar nonconvex programming problems. Let us consider a scalar problem:
(SP)
minf(x) x E X, 91(X) ~ 0, ... ,gn(x) ~ 0, where !, 91, ... , g71 are point-valued functions from X to R. In this case: s~t.
E 2 = R, C = R+, E a = Rn, I( = R+.. " Let further {lV(k)} be a sequence of convex pointed cones in Rn+l satisfying relations (3.2) , (3.4) and (3.5) . Take
1, and n e3 = (1, ... ,1) E R and consider the sequence of dual problems (DN(k)) . Denote by tk "the optimal value of (D N(k) if it exists at alL e2 =
Corollary 3.11
Assume that Q(X) is closed and tk exists for some k
> o.
Then
t m exists JOT all m ~. k. Moreover, one of the two cases is possible: 1) limt m exists and it is the optimal value of (SP) ; 2) limt m = 00 and (SP) has no feasible solutions. Proof. It can be verified ,vithout any difficulty that for m = 1,2) ..., and for A, a E R, bERn ~
inf{t: (a, b) E (A, 0, ...,0) + t(l, ... ,1) - N(m)} ~ inf{t : (a, b) E (A, 0,' ...,0) + tel, ... ,1) - N(m + I)}. Hence, if Dm(A) is nonempty, then so must D m+1 (A) be. ~urther, let us fix some :&0 E X. Since the vector (1,0, ... ,0) E intN(m) , there is some '\(m) E R such that "
(!(x o ), gl(X o ), .•. , gn(x o )) E (A(k), 0, ...,0) - IV(m) ·
~29
In other ~ords, the set {Dm(A) : A E R} is bounded from above. l1oreover, by the argument as used in proving (3.6) one can sho,v that
sUp{Dm(A) : A E R} is attainable if the set under sup is noncmpty, ,vhich is the case whenever m
~
k.
:'\ow if t = lim t m exists, then in virtue of Theorem 3.9 and of (WDA),
t = inf{f(x) : x
E X,91(X) ~
0, h.,gn(X) :5 OJ.
As shown in the proof of Theorem 3.9, it· follows from the closedness of Q(X) that
lim(a(m), b(m)). = (a, b) E Q(Xo )·. This means that there is some X o E X which is feasible and such that t = f(x o ) In this way, t is the optimal value of (SP) . Finally, if limt m
~ 00,
•
then by (WDA), (SP) has no feasible solutions.•
We should remark that there always exist sequences of cones satisfying relations (3.2) ,(3.4) and (3.5) ~ A simple example of such sequences is the sequence of the cones consisting of zero and the interior of the convex hull of (0, R+.) and the vector (k, -1, .. ~, -1) in Rn+l with k = 1,2, ....
4.. DUALITY AND ALTERNATIVE
Let E be a nonempty set with a reflexive transitive antisymmetric binary relation (~) . Let A be a subset of E . We denote
A+ = {x E E : x
2
a, for some a E A}.
'The set A_ is defined similaxly. Min(A) and Max(A) are the sets of minimal and maximal efficient points of A with respect to the relation (2:).
Definition 4.1 We say that two nonempty subsets A and B of E satisfy the duality relation, denoted by (DR) , if .l\tJin(A) ~ Max(B) ,
and they satisfy the relation of the alternative, denoted by (AR) , x E A+ u B_ , exactly one of the following conditions holds: (l(x)) there is some a E A with x
> a,
(2(x)) there is some b E Ii with b ~ x.
i~e~ x ~ a, x
=I a;
if
for each
,30
'rVe shall establish in this section the equivalence between (DR) and (AR) and apply it to obtain some results from duality theory developed in the previous sections. vVe remember that the domination property holds for A if for any x E A, there is some a E lvlin(A) such that x ~ a. 7
Theorem 4.2 Suppose that the domination property holds for A . '1 hen A and B satisfy (DR) if and only if they satisfy (AR) .
Proof. To prove this theorem let us formulate some formal assertion~: (Dill) for every b E B, there is no a E A such that b > a; (DR2) for every a E Min(A) , there is b E B such that b ~ a; (ARl) for each x E A+ U B_ , (2(x)) implies the negation of (l(x)); (AR2) for each x E A+ U B_ , the negation of (l(x)) implies (2(x») . Now ,vc invoke the theorem to three lemmas below.•
Lemma 4.3
(AR)
¢:}
(AR1), (AR2) ~
Proof. The equivalence is obvious...
Lemma 4.4
If the domination property holds for A , then (DR) {:> (DRl), (DR2) ~
Proof. vVe shall first treat the implication (::::}) . Let b E B . If there \vcrc some a E A such that b > a, by the domination assumption we would have
b>a
~
a', for some a' E Min(A) . This \vould contradict a' E l\1Jax(B) , since by (DR) ,Min(A) tion (DR2) obviously holds~
~
Max(B) . Rela-
For the converse implication ~ let a E l\lin(A) . By (DR2) , there is some
b E B such that b ~ a. By (DRl) ,b = a, that is a E B. Further, if a E Max(B), there would exist some b' E B with b' complete.
IJemma 4.5
> a, contradicting (DR1). The proof is
CARl) {:} (DRl) ;
(AR2) {::} (DR2) . Proof. (DRI) => (ARl) : let x E A+ U B_ ) b E B with b ~ x~ If there were some a E A such that x > a, we would have b> a, contradicting (Dill) .
131
(DRl) -¢:: (ARt) : let b E B , then (2 (x)) holds for x = b. The negation of (1 (x)) means that no a E A satisfies the relation b > a. (DR2) :;:} (AR2) : let x E A+ U B_ , and suppose that there is no a E A \vith x > a~ By the definition of A+ and B_, we have that x E Min(A) U B_1 In both cases, x E Min(A) or x E B_, there is some bE B with b ~ x. (DR2) -<= (AR2) : assume that a E Min(A) . Then (l(x)) cannot be true for x = a. By virtue of (AR2), (2(x)) holds for x = a, i.e. there is some b E B with b ~ a, completing the proof.•
We return now to the vector problem (VP) studied in the first section: minF(x) SIt. x E X, G(x) n -I( # 0. The Lagrangean dual problem is then
maxD(y) s.t. y E Y. Denote
A = F(Xo ) , where X o = {x EX: G(x) n -I{ =f. 0}, and B = D(Y). A+ = F(Xo ) + C and B- = D(Y) - C. vVith these notations in hand we can reformulate Theorem 4.2 for this special ca.<;e as: the relation
lVI in(F(Xo ) JG) ~ M ax(D(Y) Ie) is true if and only if for each a o ,
ao E (F(Xo ) + C) U (D(Y) - C) exactly one on the following conditions holds: 1) there is some x E X o , a E F(x) ,vith a o >c a; 2) there is some y E Y ,b E D(y) with b 2:c ao' Corollary 4.6
Suppose that
i) the hypotheses (H 4) and (lI5) of the section 1 are satisfied,. ii) Y contains Ye ,~ iii) either C \ {OJ is op~n or all the solutions of (V P) is proper. Then Min(F(Xo)IC) ~ Max(D(Y)IC) .
(4.1)
(4.2) (4.3)
132
Proof. Let Go E A+ U B __ In vie\v of Theorem 4.2, it suffices to verify (AR). Consider the first case: a o E F(X o )
;-
C.
If this point is not a minimal point of A+ ~ then there is some a E F(Xo ) such that a o >c a and ('!.2) holds, meanwhile (4.3) is impossible for this point, due to Theorem 1.2. If a o is a minimal point of A , then (4.2) is impossible but in view of l'heorem 1.6, there is some Yo E Yc \vith a Q E D(yo) which means that (4.3) holds. For the case;
a o E D(Y) - C, by Theorem 1.2, (4.2) is impossible and (4.3) is al\vays true.• No,\v, consider the dual problem introduced in Section 3~
maxD(s) s.t. s E S. By setting
B_ = D(S) - C and A+ = P(Xo ) +C 1,ve have the following result:
Corollary 4.7 Assume that Eo = R, lv! = R+ and either C \ {O} is open, or every optimal solution of (P) is proper. Then
Min(P(Xo)IC) ~ Max(D(S)IC). Proof. Invoke to the \veak duality axiom and Theorems 3.6,4.2.•
The proofs of the two corollaries above show that strong duality results are tantamount to proving the relation of the alternative. The advantage of Theorem 4.2 is that the space 17 is required merely to be partially ordered, therefore the result is applicable to problems in the spaces of general structure \vithout any linearity or convexity. In the remainder of this section, ,ve take up the case where E is a complete lattice. Recall that E is said to be a lattice if it is equipped ,vith a reflexive transitive antisymmetric binary relation such that every pair of clements of E has a least upper bound and a greatest lo\ver bound. A lattice is said to be complete if every subset A ~ E bounded above has a least upper bound (which \Vc denote
133
by sup A), or equivalently, every subset bounded below has a greatest lower bound (\vhich we denote by inf A).
Let A, B be two nonempty subsets in E. Denote
A * = {x E E : a 2:: x
~ inf A, some
B* = {x E E : sup B
2:: x
~ b, some
a E A},
bEE}.
Let us consider the following duality relation: sup B = inf A, and sup B is attainable \vhich means that there is some b E B such that b = sup B (it can be seen that in this case sup B = I M ax(B) ).
(DR')
The relation of the alternative corresponding to (DR') can be stated as fol10\\'5:
for any x E (A *) + U (B.i<) _ 7 exactly one of the followings holds:
(AR')
(l(x») there is some a E A such that x
> a,
(2(x)) there is some b E B such that b ~ x. -Theorem 4.8
(DR') {::} (AR').
Proof. the proof is similar to that of Theorem 4.2, so we omit it.•
It tnrns out that several duality results which have been developed for vector problems with outcomes in complete lattices can be deduced from Theorem 4.8 in a rather simple way. We shall demonstrate this by proving an important result from Azimov (1982)~ 'fhe technique to obtain other results from that "\vork goes through \vithout change. In what follows E 1 , E a and E denote topological vector spaces over rcals~ In addition, E is supposed to be a complete lattice. Let us consider the problem
inf{j(x) : x EEl},
(P)
where
f
is
~
point-valued map from E 1 to E.
Further, let ¢(x, b) be a point-·valucd map from E 1 x E a to E such that
¢(x, 0) = f(x) for all x E X. 'This map is called a perturbation of f. As in Section 2, the conjugate map ¢* of
¢wc(z) y) = sup{z(x.) + y(b) - ¢J(x, b) : ; and the conjugate dual is of the form: (D*) sup -¢»; (0, y) S.t6 Y E Yl.
E E 17 b E
E3 }
134
where :Yl is the space of continuous linear maps from E a to E, that is according to the notations used in Section 1.
Yl
~
C
CQrollary 4.9 (Azimov, 1982) If X o EEl, Yo E Yl are solutions of (P) and (D*) and if the optimal values of these problems are equal, then X o and Yo are connected by the re lation
¢J(x o , 0) + >*(0, Yo) =
o.
Conversely, if X o E E 1 and Yo E Yl satisfy the above relation, then (P) and Yo solves (D*), and the op-timal values are equal.
(4.4) $0
solves
Proof. In order to apply Theorem 4.8, set A = {
(x, O) : x EEl}, B = {-4>*(0, y) : y E JIl}. Since 4>* is the conjugate map of 4>, relations (1 (x)) and (2( x)) cannot simultaneously be true. Therefore, if X o E X and Yo E Yi satisfy the relation (4~4), then (AR') holds. The result is now drawn from Theorem 4.8~ •
Chapter 6
Structure of Optimal Solution Sets
In mathematical programming if an optimization problern has an optimal solution, then the optimal value of the problem is a unique point in the field of real numbers. Due to this fact, the set of optimal solutions has rather simple structure whenever the data of the problem are convex or linear. For instance, the optimal solution set of a linear problem is a polyhedron, the optimal solution set of a convex problem is convex if of course it is noncmpty. Such a ~icc structure of solution sets is no longer true for vee.tor problems. However, some properties '\vhich are important in application such as closcdness, connectedness or contractibility of these sets are still in hand if the data satisfy certain assumptions. This chapter is devoted to the study of the properties mentioned above of efficient point sets for linear, convex and quasiconvex vector problems.
1.GENERAL CASE
Let E be a real topological vector space and C a convex cone in E. We consider the following vector problem: minf(x)
s.t.x EX,
136
where X is a nonempty set equiped with a topology and f is a point-valued map from X to E~ We recall that S(X; f) denotes the set of optimal solutions and W S(X; f) denotes the set of weakly optimal solutions of the problem above. Theorem 1.1 Assume that the interior of C is nonempty~ If X is closed, C-continuous on X J then W8(X; f) is closed.
f
is
Suppose to the contrary that there is a net of weakly optimal solutions X o which is not weakly optimal. Then, since X is closed, ~o E X, there is some x E X such that .
Proof.
{x a } converging to
f(x o ) - f(x) E intC. Let V be a neighborhood of f(x o ) in E such that V ~ f(x) + intC. By the C- continuity of !, there is an index 0'0 such that j(xaJ E V + C, for all 0:' > 0'0' Comhine this with (1.1) to get the C9n tradiction: X a ¢ WS(X;!), completing the proof. -
(1~1)
Corollary 1.2 If f(X) is closed, then WMin(f(X)fC) is closed~ In particular, if X is compact, f is continuous, then W1\din(j(X)IC) is closed~ Proof. If I(X) is closed, then according to Theorem 1.1, the set WS(!(JY); id) is closed, where id is the identity map on J(X) . Hence ltV1\.1in(f(X)JC) is closed because it coincides with the set WS(f(X); id).
Finally if X is compact,
f
is continuous, then f(X) is closed.•
It should be worthwhile to note here that the results above are not always true for the sets SeX; f) and 1.VIin(f(X)JC). lfthe space E has dimension greater than one, then there are compact sets with the efficient point sets being noncompact. Take for example in R2 the set consisting of the unit ball {(x,y) E R 2 : x 2 +y2 $1} and the point (-2,0). Then the efficient point set ,vith respect to the cone R~ is the union
{(-2,O)} U{(x,y) E R2: x 2 +y2 = l,x S O,y and it is not compact.
< O}
137
2.LINEAR CASE
Let A be a polyhedron in Rn, C a polyhedral cone in Rn. We simply usc Nlin(A) instead of Min(AIC) in thc sequel by keeping in rnind that the order is . generated by the cone C~ We also recall that a subset A o ~ A is a face of A (in a general sense) if either A o = A or there exists a hyperplane H in the space such that A is entirely contained in one of the half spaces gcnerated by II and such
that A o = AnH.
.
Definition 2.1 A set B ~ Rn is said to be pathwise connected if for every a, b E B, there are a finite number of points in B: bo = a, b1 , ••• ) bl + 1 = b such that the segments [b i ) bi+l], i = 0, ... , l belong to B.
Theorem 2.2 For a given polyhedron A in Rn, the sets l\1Iin(A) and W Min(A) if they are nonempty, consist of certain faces of A and they are pathwise connected~ Proof. It is known that for cvery linear function ~ on R n , the set of points minimizing ~ on A is a closed face of A. Hence the first assertion of the theorem is deduced from Theorem 3.3 (Chapter 4)~ The second assertion can be obtaind from the first one and from the connectedness results of the next section concerning convex sets. However we shall also furnish an alternative proof by using the argument of Poclinovski-:\'ogin(1983). Let A 1 , ••. , Ak be relatively opcn faces of A) not meeting cach other and their union is Min(A)~ Take ai E Ai and consider the closed convex cones C*(ai) = {~ E Rn : ai E S(A;~)}) i = 1) ~ .. , k.
If denote
co =
{~ E ri(C*) : min{~(x)
:x
E A} exists },
then
co ~ G*(al) U ... U C*(ak).
(2.1)
Indeed, for ~ E Co, the solution set S(A;~) is a closed face of A and it belongs to lWin(A). Hcnce,there is some i such that
Ai ~ S(A;E). This means that ~ E C:ic(ai)~ Xow let a) b E l\!Iin(A) with a E Ai, b E A j , some i, j E {I, ... ) k}. We have to show that there are some bo , ~ •. ) bl+ 1 E 1vJ in(A) such
that (2.2)
138
[b r , br +1] ~ 1\1in(A), r = 0, ... , L
In fact, by Proposition 3.2 (Chapter 4), there are some ~a E Co n C*(ai),~b E Co n C*(aj) such that a E S(A; €a), b E S(A; ~b).
It is clear that Co is a convex cone, hence
[€a, ~b] S;; Co. In view of (2.1), there arc some ~l = {a,€l
=
€1, ... ,~l E Co such that
~b,
[{a, ~bJ = [~l, ~2] U ... U [~l-l' ~d and [~r, ~r+l] ~ C* (ai(r)), where i(r) E {I, HO' k}, r = 1, ,.. )1- 1.
(2.3)
It foIlo,vs from (2.3) that ~T+l E G*(ai(r))
n C*(ai(r+l») nCo,
and therefore [ai(r), ai(r+l)] ~
SeA; ~r+l) ~ j\1in(A). (2.4) Since c;l = ~a, a E S(A; €1); and since a E Ai, Ai ~ S(A; c;l). In other words, ai{l) = ai and (2.5) [a, aiel)] ~ S(A; c;l) ~ Min(A). Similarly, [ai{l) ,
b] ~ S(A; C;l) ~ i\1in(A).
(2.6)
Now, setting bo = a, br = ai(r), bl+ 1 = b, r = 1, ... , I and using relations (2.4), (2.5) and (2.6) we obtain (2.2).
For WMin(A) the proof is analogue.• Let us now consider a linear problem
minf(x) s.t.x EX, where X is a polyhedron, f is a linear function from X to Rn. If S(X; f) (resp., WS(X; f) ) is nonempty, then it consists of certain closed faces of X and it is pathwise connected. .
Theorem 2.3
ProoE. This theorem is proven by a similar way as the preceding
on~.•.
139
3.CONVEX CASE
Let A be a convex set in a topological space with an ordering cone C. The aim of this section is to study the sets Min(AJC) and WMin(AIC). First we recall some defmitions from topology~
Definition 3~1 Let A ~ B ~ E. We say that A is 1) contrac ti bIe if there exist a continuous junction H (a, t) from the produc t A x [0, 1] to A and a point a o E A such that
H(a.,O)=a o , H(a,l)=a/orallaEA; 2) locally contractible provided any neighborhhod V of any point a E A contains. a neighborhood U of a such that UnA is contractible; 3) a retract of B if there is a continuous function h from B to A such that h( a) = a for each a E A ~ L~mma
cone ~ap
3.2 Suppose that E is a locally convex space, C is a closed convex pointed nonempty interior and A is a closed convex set. Then the set-valued G from A + C to E defined by the rule: w~th
G(a) = (a - C) n (A + C), fOT each a E A
is continuous at every point of int(A; + C) U Min(A1C).
+ C,
Proof. The upper continuity of G is obvious because it is the intersection of a continuous map a H a - C and a constant map a H- A + C. To prove the lower continuity, let first a o E Min(A1C), if the latter set is nonempty.Then
G{a o ) = {a o }. Hence, for any neighborhood V of a o , by taking U = V we see that
G(a) n V
f 0, for each a
E U,
whic.h means that G is lower continuous at that point. Now let ao E int{A+C) and suppose to the contrary that the map is not lower . continuous at that point, i.e there are a neighborhood V of some point bo E G(a o ), a .net {aa} from A + C converging to a o such that
G(a Q )
nV
=
0.
(3.1)
·Since ao E int(A + C), there is a convex neighborhood, say W of ao such that
W
~
int(A + C).
140
Let W o be the convex hull of bo and W4 'fhis set belongs to A + C since bo E A + C and the set A + C is convex. Y.[oreover,
bt = tb o + (1 - t)a o E (intWo ) n V,
(3.2)
\vhcncvcr t is smaller than and close to I. Let U be a neighborhood of bt in the right hand side set of (362). Then a o E U + C and there is some index a o such
that aa
Let
Ua:
E U,
Since U
~
Co:
Wo
E U + C) for all a 2;:
Qo·
(343)
+ C, for aU 0:' ~ 0:'0.
(3.4)
E C' be such that
~
aa: = U Q + Ca A + C, '\ve have that aa: E A
It follows from (3.3) that U ce
= aa -
CO'
E ao: - C,
which together with (344) gives the inclusion Uo:
E G( a cr ), for a C 0'0.
'I'his contradicts (3.1) and completes the proo[ •
Remark 3.3 It should be observed that under the conditions of Lemma 3.2, the + C , but it is not necessarily lower continuous there.. To see this, let us consider the following example in R3 . The cone C = -R~, the se t A is the con vex hull of the point (0, 1, 1) and the set {(x,y,z) E R3 : y = o,x 2: O,z;::: O,x2 +z2 :51}.
map G is upper continuous on A
The map G is not lower continuous at the point (0,0, 1). This example also shows that the efficient point set of a convex compact set in R3 is not necessarily compact_ We recall that a function 9 from E to R is strictly quasiconvex if it is strictly R+-quasiconvex (Definition 6.1, Chapter 1), Le. for every t E R, x" Y E E and x f y, g(x) ~ t, g(y) :5 t imply g((x + y)/2) < t4 In other words, 9 is strictly quasiconvex if its level set at any p oint of R is strictly convex when being nonempty. Lemma 3.4 Let E be a normed space with a strictly quasiconvex norm and C a convex cone with convex bounded base. 'l'hen one can construct a continuous function 9 from E to R which is strictly {j'uasiconvez and increasing with respect to (C, R+).
141
"Proof. Let B be a convex bounded base of C . By a separation theorem therc is vector ~ E intC' such that
", a
~(b)
> c, for all b E B,
some c
> O.
Denote
H{t) = {x E E : ~(x) = t}; B o = {b/~(b) : b E B}. "Then, B o S; If(l), and it can bc verified that B o is also a convex bounded base of C . Let k be a positive number such th~t Eo ~ intB(O; k), 'where B(O; k) is the ball in E with the center at 0 and radius k. Consider now the set
(3.5)
1) = {x E E : c;(x) = t, IIxli :5 k(t + t 1 / 2 ), t ;?: O}. "Ve claim that D possesses the following properties:
(x + y)/2 E intD, for each x, y E D, x # y; C \ {O} ~ intD.
(3:6)
(3.7) To establish the latter relation, let e E C,c f= o. Then ~(c) = t > 0 and by (3.5), C E intB(O; kt)~ One can then find a positive number () such that x E intB(O, kt), fo! all x E E, [Ixl1 < b. (3.8) Let 'Y = min{6; «(1
+ 2t)1/2 -
1)/2}
and set
M = {x E E : ~(x) > t -" IIx - ell < ,}. It is obvious that I is positive and M is an open set containing c. Moreover, M ~ D. Indeed, since t > 0, for any x E M,
{(x) > t -
r
~
t - ((1 + 2t)1/2 - 1) > 0,
and by (3.8),
tlxll
~ kt S k(t - 1) + (t -1)1/2)
< k(~(x) + (c;(X))1/2). These two relations say that xED and hence (3.7) is proven. Before proving (3.6), let us state a useful observation, which can be obtained by a way similar to that we have uscd in proving (3.7) :
x E intD if there is some t E R such that
(3.9)
€(x) = t > 0 and Ilxll < k(t + t 1 / 2 ), As for the relation (3.6), let x, y E D, x;f y and c;(x) = t, ~(y) = s. Since x -;;f. y, at lea..~t one of the nonnegative numbers t and 5 must be positive. If t = s, then
142
c;((x + y)/2) = t
>0
and by the assumption on the norm,
II(x + y)/211 < k(t + t 1/ 2 ). This and relation (3.9) show that
(x + y)/2 E intD. 1= 5, then e(x + y)/2) = (t+s)/2 and we can estimate the norm of (x + y)/2 as !I(x + y)/211 5 Irxll/2 + l[yll/2 :5 k(t + 5 + t 1 / 2 + 8 1 / 2 )/2 < k((t + 8}/2 + «(t + 5)/2)1/2). Again, in vie\v of (3.9), (x + y)/2 E intD, establishing (3~6). Further, if t
With D in hand we are able to proceed to construct g by the rule: g(x) = inf{t E R : x E te - D}, x E E, where e is a fixed vector from B o • We claim that 9 is well defined. Indeed, the set {t E R : x E te - D} is nonempty, for instance any positive number being greater
thaD. max{~(x);
€(x)
+ (~(x) + IIxU/k)2}-
belongs to it. Moreover, that set is bounded from below because any number being smaller than ~(x) does not belong to it. In this way, g(x) is correctly defined.
All that remains to be proved is that g is continuous, increasing and strictly . quasiconve:"(. We establish first the following fact: for every x E E, t E R,
x E int(te - D) if and only if 9(x) < t. Indeed, if x E int(te - D), then there is some K > 0 such that x + Ke E int(te - D).
(3~10)
Consequently,
x E (t - ~)e - D, implying g(x) ~ t - "'~ < t, then, by (3.7) we have that x E (t - (t - g(x))/2)e - D ~ te - ((t - g(x))/2)e - D ~ te - intD ~ int(te - D). Now, the increasingncss of 9 is derived from relations (3.7) and (3.10); its strict Conversely, if g(x)
quasiconvexity is derived from relations (3.6) and (3.10). As for the continuity, let
{x ce } be a net in E converging to X o E E. It is clear that {g(x a )} is bounded. Let t be a cluster point of this net. In virtue of (3.10), t 5 g(x o ). If t > g(x o ), then, again by (3.10), we arrive at the contradiction:
g(x o ) < t - (t - g(x o ))/2, for a large enough.
143
Thus, limg(x a )
= g(x o ), completing the proof. •
- Concerning the assumption on the norm in the lemma above, in the literature a strictly quasiconvex norm is called strict normJt is known that in Banach separable spaces there always exist strict norms. More generally, if in a locally convex space there is a strictly convex bounded absorbing neighborhood~thenthe Minkowski functional defined by that neighborhood provides a strict norm. Let A be a compact set in E with A + C being convex. Then under the assumptions of Lemma 9.4, the function 9 attains its minimum on A at a unique poin t.
Corollary 3.5
Proof. Since A is compact and 9 is continuous, min{g(x) : x E A} exists, say it is t. Let x,y E A with .
g(x) = g(y) = t. If x ;f y,. by the strict quasiconvexity, g(x
Further, since A
+C (x
+ y)/2) < t.
is convex,
+ y)/2 E A + C.
,lIenee there is some z E A, c E C such that (x + y)/2 = z + c. Remembering that .9 is increasing we arrive at a contradiction:
g(z)
~
g((x + y)/2) < t.
The proof is complete. •
Theorem 3.6 i) E is a ii) C is a iii) A is a
Suppose that the following conditions holds: space with a strict quasiconvex norm; convex cone with nonempty interior and convex bounded base; convex set with Ax = (x - C) n A being compact for each x E E~
Then lv[in (A IC) is contractibIe. Proof. First let us construct a function h from A
+C
to M in(AJC) as fo110\\'5:
h(x) = {y E G(x) : g(y) = min{g(z) : z E G(x)}}, for x E A
.
(3.11)
+ C. Observe that 9 is increasing, therefore we can have the equality: min{g(z) = z E G(x)} = min{g(z) : z E Ax}.
The value in the right hand side of the equality above is finite because 9 is continuous and A z is compact. By Corollary 3.6, the set in the right hand side of (3.11) is a single point which in view of Proposition 1.2 (Chapter 4) belongs to Min(AfC)~
144
Thus~ h is correctly defined. Further, by Lemma 3.2 and by the continuity of 9, the function h is continuous on int(A C) U Min(AIC). Moreover,
+
G(x) = {x} for each x E Min(AIC)' hence
hex) = x.
(3.12)
We are now in a position to construct the map H from l\1Iin(AIC) x [0,1] to Min(AIC). Let Us fix a point a E int(A+C) and for each x E Min(AIC), t E fO, 1] "\ve set
lI(x, t) = h(tx + (1 - t)a). It is clear that II is continuous in both variables~ Moreover, by (3.12), H(x, 1) = h(x) = x, and H(x,O) = h(a) E Min(ArC), for all x E l\1Iin(ArC). In this way, Min(AIC) is a contractible set and the proof is complete.• Theorem 3.7 Under the conditions of Theorem 3.6, lvlin(AJC) is closed if and only if it is a retract of A + C . Proof. Since A + C is closed, any retract of this set is closed. Kow, suppose that l\1in(AIC) is closed. Denote by d(x) the distance from x to the set l\1Iin(AIC) and let
t(x) = d(x)/(l Construct a retract~on
+ d(x).
f from A + C to i\lfin(AIC) by the rule
. f(x) = h(l - t(x))x + t(x)a), where h is the function defmed in the proof of 'I"\heorem 3~6, a is a fixed point from int(A + C). For any x E A + C \ve have (1 - t(x))x + t(x)a E int(A + C) U l\IIin(AIC). Therefore, f continuous on A + C . Moreover, if x E Min(ArC), then f(x) = hex) = x. Consequently f is actually a retraction and the theorem is proven.•
is
As a matter of fact, the result of Theorem 3~6 can be improved when the space is of dimension 2, as the following theorem shows. Theorem 3.8 Assume that A is a compact convex set in R2. Then Min(AJC) is homeomorphic to a simplex. Proof. To prove this, we argue first that for every nondegenerated affine transformation 1~ of R 2 ,
145
T(lv.fin(AJC)) = Min(TA[TC) .
. In fact, it is clear that T is increasing with respect to (C, TC) and the inverse transformation T-l is increasing with respect to (TC, C). The equality is now dra\vn from Proposition 1.2 (Chapter 4)~ Further, ,ve consider the first case where the aimension of C is equal to two and C is closed. By the observation just made above it can be assumed that C = R~ and A ~ intC. For the closedncss of Min(A]C), let {ak} be a sequence from that set converging to a EA. If this point were not efficient) then there would be some b E A with a >c b, say a l > b1 , a2 ~ b2 , where the upper indexes . indicate the components of the point~ Since ak arc efficient and lim ak = a, a% > b2 when k is large enough. For a fixed large number n,
[an,b] ~ A+C and a E [an, b] + intC. This would imply that ak cannot be efficient whenever k is large. The contradiction establishes the closcdness of Min(A1C). Next, we show that the cone generated by this set is convex. Indeed, suppose that a and b are two efficient points and c = ta + (1 - t)b, 0 ~ t ~ 1. Consider the optimization problem
mint s.t. t 2:: 0, tc E A
+ C. It is clear that this problem has an optimal solution, say to. We claim that toe is efficient. Indeed, if not, there is a point x E A such that toe> x. Observe that the triangle with vertices a, b and x is contained in A + C and it c~ntains toe as an interior point. This fact contradicts the choise of to. l~hus, the cone generated by Min(ArC) is convex closed. Moreover it is pointed since C is the nonnegative orthant. Consequently, it has a base homeomorphic to a simplex~ .Direct verification shows that the base is homeomorphic to Min(AIC) and therefore the latter set is homeomorphic to that simplex too. For the case \vhcre C is not closed one can consider C as the cone R~ \vithout one or both extremal rays. The proof described above goes through without change~
For the case where the dimension of C is one, there is no loss of generality if we suppose that C' is the ~lIst coordinate axis and as before,A ~ intR~. Let Xo =
min{x 2 : x =
(X l
,x2 ) E A+C, xl = O},
Yo = max{ x 2 : x E A + C, Xl = O}.
It is clear that these solutions exist and they are unique and one can verify without any difficulties that Min(AIC) is homeomorphic to the segment [x o, Yo]. The proof is complete. •
146
Remark 3.9 The result of Theorem 3.8 can fail when the dimension of A is higher than two ~ as is ShO\VD by the follo\ving example in R 3 . Let A be the polyhedron with the vertices: (3) 2, 0), (2,3,0), (4,0,0), (0,4) 0) and (2.6, 2~6, 3).
It i.s a convex compact set and its efficient point set with respect to the nonpositive orthant consists of t \Vo triangles, one of which is with vcrtices
(4,0,0),(3,2,0),(2.6,2.6,3) and another is with vertices (0,4,0), (2,3,0), (2.6,2.6,3).
'I1hese triangles have only the point (2.6, 2.6,3) in common and their union cannot be homeomorphic to a simplex. We return now to the set WMin(AIC). Recall that a set X is arcwisc connected if for any two points x, y E X, there is a continuous function t.p from [O,lJ to X such that ~(O)
= x,
cp(l) = y. Theorem 3.10 connected.
Under the conditions of Theorem 3.6, WMin(AJC) is arcwise
Proof. Let a b E WMin(ArC). It is clear that there are x,y E Min(A'C) such 1
that a ~c x and b 2.c y.
Moreover, [a,x] and [b,y] belong to WMin(ArC). According to Theorern 3~6, the set Min(AIC) is contractible, hence arcwisc connected. Let ep be an arc linking x and y in lVlin(ArC). Then the arc
[a, x] U
Corollary 3.11 Suppose that E = Rn, C is pointed closed with nonempty interior and A is closed convex. Then WJ\l.lin(AIC) is arcwise connected if Min(A'C) is nonempty.
Proof. If Min(ArC) is nonempty, then condition iii) of Theorem 3.6 holds. Other conditions are satisfied trivially. Kow invoke the corollary to Theorem 3.10.•
147
Remark 3.12 In the preceding corollary if lVIin(AIC) is empty, then the set WMin(AIC) may be disconnected. Indeed, let us consider the following example
rn R 3 • Let C
be the nonpositive orthant, A the convex hull of the points:
(O,O,O),(O,l,O),(l,O,O),(O,O,k),(O,l,k),(l,O,k) and (1-1/2 k ,1-1/2 k ,k), k=1,2, ... ~ Then
1\11 in(AiC) == 0,
WMin(A1C) = {(l,O:z): z ~ O} U {(O,l,z): z ~ O}.
Theorem 3a13 Under the conditions of Theorem 3.6, WMin(AIC) is locally contractib Ie if and only -if it is a re tract of A. Proof. Since A is a convex closed set, it is locally contractible, and so is any retract of A. Now·, suppose that WMin(A1G) is locally contractible. By Dugundji (1958, Theorem 15.1) it follows that WMin(AlC) is a retract of some neighborhood V of WMin(A1C) in A, Le. there exists a continuous map hi from V to WMin(AlC) as in Definition 3.1. Further, for every positive integer k there exists a number s(k) > 0 such that
h2 (x) = (x + sd(x)h(x))J(l + sd(x)) E V, for each x E A with lixll :5 k, s ;::: s(k), where h is the function constructed in Theorem 3.6, d(x) is the distance from x to W.l\.1in(A1C). Without loss of generality we may assume that
s(k + 1)
> s(k) for
every k.
Let
s(t) = (s(k + 2) - ·s(k + l))t + (k + l)s(k + 1) - ks(k + 2)~ for t, k :S t :5 k + 1, k = 1", 2, ... , and let h 3 (x) = (x + s(l1 x lDd(x)h(x))/(l + s(l1xIDd(x)). It is obvious that h 3 is a continuous function on A~ Moreover, h 3 (x) E V for all x E A and h3 (x) = x whenever x E WMin(AIC). Now the composition h 1 0 h3 will be the continuous function on A to W M in(AIC) as required in Definition 3.1. In this way WMin(A1C) is a retract of A .• We finish this section by a short comment on the conditions of Theorem 3.6. Conditions i) and iii) are of no specification. However, condition ii) is rather strict. For instance if E is a separated Ricsz space (i.e. for every two finite sets A, B ~ E with the property: a ~ b for all a E A, b E B, there exists some x E E such that a ~ x ~ b, for all a E A, b E B), then condition ii) implies that the space is finite dimensional (sec Jameson-1970).
148
4.QUASICONVEX CASE
In this section let us consider a quasiconvex problem:
min/ex) S.t.x EX, where X is a convex closed set, f is a C-quasiconvex function from X to E , and G is a convex pointed closed cone in E with nonernpty interior. Throughout this section ,ve shall make use of strictly increasing continuous functions from E to R which were introduced in Section 4, Chapter 1 for every a E E, e E intC = he,a(Y) = min{t : yEa + te - C}, y E E. V\'e shall sometimes write h instead of he,a if it is clear that a and e are given. Lemma 4.1 If in addition f is C-continuous, then for every fixed points a E E, e E intC~ the solution set B( X; he,a (.) 0 f) is closed convex" Proof. Note first that the scalar function ho f is quasiconvex in view of Proposition 6.3 (Chapter 1). Moreover, by Theorem 5.11 (Chapter 1), it is upper semicontinuous. The result now is immediate.•
Lemma 4.2 If in addition to the assumption of Lemma 4.1, f is strictly Cquasiconvex, then the set S(X; h 0 f) is a single point whenever it is nonempty4 Proof. In this case the function h immediate.•
0
f
is strictly quasiconvex and the lemma is
Lemma 4.3 Let a be a fixed point of E and let lev(a) denote the level set of f at a. If f is C-quasiconvex and the weak solution set WS(lev(a); f) is compact, then the set-valued map
e H S(X; he,a(.) 0 f), e E E is upper continuous on intC. Proof. Suppose to the contrary that the map above is not upper continuous at a point eo E intC. This means that there are: a net {eo:} from intC converging to eo, a net {xa}, XI): E S(X; heCHa.(.) 0 f) and a neighborhood U of SeX; heo,a(') 0 f) in X such that X Q ¢ U. It can be seen that
S(X; he~ta(.) 0 f) ~ WS(lev(a); f).
149
-Therefore, without loss of generality we may assume that {xQ-} converges to some E X~ Moreover, f is C-continuous, by Corollary 5.1 (Chapter 1),
~ ~o
Xo
. Since
XC(
E lev(a)~
¢ U, we conclud.c that X
o rt SeX; hco,a(~)
which means that there is some
It is obvious that
Xl
0
f)
E X such that
to = hco,a(!(X o)) > he~,a(f(Xl)) = t l . E lev(a). Now it follows from (4.1) that !(Xl) E toe - C,
(4.1)
Xl
. or equivalently,
f(x o ) ¢ tle o - C. Hence there are a neighborhood V1 of f(x o ) and V2 of tle o in E such that Vl
n (V2 -
C) = 0,
also
(Vi + C) n (V2 - C) = 0. Further, since f is C-continuous and limeo: ~ eo, f(x a ) E V + C and t1eo- E V, whenever By definition of h, heCt,a(f(x~))
for
Q
Q'
is large enough.
> heO;,a(f(Xl)),
being large enough. This contradicts the fact that
E S(X; hee-,a(') . completing the proo[ • XC(
0
f),
Before going further, recall that a set A are no open sets U and V in E such that
~
E is said to be connected if there
A ~ U u V, An U # 0, A n V =F 0 and A nun V = 0. Lemma 4.4 Suppose that Ak
~
Ak+l and Ak' k = 1,2, ... are connected sets in
E ;. Then the union U {Ak : k = 1, 2, .. ~ } is connected.
Proof. This is immediate from the definition of connectedness.•
Lemma 4.5 Let F is an upper continuous set-valued map from a connected set D to E with F(x) being connected for each xED. Then F(D) is connected.
150
Proof. Direct verification completes the proof, or see Warburton (1983), Hiri~t Urruty(19S5) .•
Theorem 4~6 Assume that f is C-continuous, C-quasicon1Jex on a convex closed set X and WS(lev(a); f) is nonempty compact for every a E E with lev(a) being nonempty. Then WS(X;f) is a nonempty closed connected set.
Proof. That ltV S(X; f) is nonempty is guaranteed by the fact that
WS(lev(a); f) ~ W8(X; f), which is a direct consequence of Proposition 2.8 (Chapter 2). rrhc closedness·of that set is obtained by Theorem 1.1. 1\O\V we have to prove the connectedness. Let us fi."'C a E intC and set ak = ka~ k = 1,2, and ~ct Xk = lev(ak). It is clear that
u.
U{Xk : k = 1, 2, ... }; T¥S(Xk; f) S; WS(Xk+l; f) and WS(X; f) = U{WS(Xk; f) = k = 1, 2, ... }~ In virtue of Lemma 4.4, it suffices to prove that WS(X k ; f) is connected. v~ic fix k and consider a. set-valued map F from intC to WS(X k ; f) as follows: X =
F(e) == S(Xk ; he,ak+l 0 f). Since f(Xk) ~ int(ak+l - C), by Theorem 2.16 (Chapter 4) and Proposition 3.2 (Chapter 4), F(intC) = WS(X k ; f). Now apply Lemmas 4.1, 4.3, 4.5 and use the connectedness of intC to derive the connectedness of WS(Xk; f) .•
vVc recall that for a E E, f(X)a is the section of f(X) at a, te. f(X)a = f(X) n (a + C)~ Lemma 4.7 IjWA1in(f(X)aJC) is compact, then the
set~Jt}alued map
e H S(j(X)a; he,a(.)), e E intC, is upper continuous on intC. Proof. Since he,a(Y) is strictly increasing in y, we have that
S(f(X)a; hc)a(.») ~ WMin(f(X)a'C) which implies
151
We
a1r~ady
know that h is a function being continuous in all variables, therefore
:by Theorem 4.5 (Chapter 4), S(f(X)a; h) is upper continuous in e on intC.• Theorem 4.8 Assume that f is continuous C-quasiconvex on a closed convex set X and WMin(j(X)aI C ) is nonempty compact for each a E E with f(X)a being nonempty. Then WMin(f(X)lC) is a nonempty closed connected set. Proof. 1.'\hat the set W Min(f(X)IC) is closed and nonempty is obvious. For the connec.tedness, the proof is similar to that of Theorem 4.6 by using Lemma 4.7 iristead of Lemma 4.2, a sct--valued map:
. G(e) = S(f(Xk); he ,ak+l (.)) instead of FCe) and by observing that G(e) is connected as the image f(F(e)) of the connected set F( e) under the continuous function f .• Remark 4.9 It should be emphasiied that the two conditions: .i) W M in(f(X)a Ie) is compact; . ii) W S(lev(a); f) is compact, not derived. from each other, neit"her arc Theorems 4.6 and 4.8. To sec this let us consider the following examples: 1) Let
are ..
and lc~
f :X
1--4
X = {(Xl)X2) E R2: 0 ~ Xl::; 1,0 ~ X2:5 I}, R 2 be defined by f((Xl' X2)) = (Xl, 0) if Xl =f 1, X2 =f:. 1; J((l, 1)) = (0,2) .
.Obviously, for C = -R~ rrheorem 4.6 holds meanwhile f(Wlltin(X; f)) = f(X) is disconnected.
2) Let
Let
f
X. bci the convex hull of the sets:
{(x)O)O) E R3 : x ~ O}, {(x,D, 1) E R 3 : x ~ O}, {(x, 1,0) E R 3 : x ~ O}, {(x, sin 2- k 1r, cos2-k1f) E R3 : x !:: k}) k = 1,2, .... be the projection 'from X to R2 as follo,vs:
f((x, y, z) = (y, z) and G = -R~. In this case, ~heorem 4.8 holds, ho'\vever, WS(X; f) = B U (U{Ak : k = 1,2, ... }),
where
~52
x ~ k, (y, z) E [sin 2- k 1f, cos 2- k Jr]}, B = {(x, y, z) E R3 : x ~ 0, y = 0, z = I},
Ak = {(x, y, z) E R3
:
i.e. ltVS(X; f) is disconnected. The first example also shows that without continuity of f the result of Theo·rem 4.8 may fail.
If Wlvlin(f(X)aJC) is nonempty compact for every a E E with j(X) being nonempty and f is C-continuous strictly C-quasiconve::c on X , then both of the maps defined in Lemmas 4.3, 4.1 are point·-vaiued,continuous on intC.
Lemma 4.10
Proof. Due to Lemma 4.2, the maps mentioned above arc point-valued; Hence they are continuous whenever being upper continuous as set-valued maps. The proof of their upper continuity is straightforward, so we omit it.• Assume that f is C-contin'uous strictly C~·quasicon'Vex on a convex closed set X and one of the following conditions holds: i) S(lev(a); f) is compact for each a E E; ii) l\1Iin(f(X)afC) is compact for each a E E.
Theorem 4.11
Then S(X; f) is nonempty closed connected. If in addition f is continuous then Min(f(X)JC) is nonempty closed connected. .
Proof. The proof is similar to that of Theorem 4.6 by using Lemma 4.10 instead of Lemmas 4.3 and 4.7.•
Theorem 4.12 Suppose that f is continuous strictly C-quasicon'Vex on a convex closed set X and M in(f(X)a Ie) is compact for each a E E. Then S(X; f) (resp" Min(f(X)IC)) is a retract of X (resp., f(X»). Proof. The aim is to construct a retraction 9 from X to S(X; f) . Let e be a fIXed vector from intC and let x E X. Consider the problem P(x) :
min he,!(:r:) (f(y») s.t. Y E X. In view of Lemma 4~2, this problem has a unique solution which is denoted by x•. Set g(x) = XofC. We prove that 9 is continuous on X . Indeed, let {xa:} be a net from X converging to X o E X~ Since f is continuous,
limf(x(:l) = f(x o ). :NIoreover,{ he ,f(:to-) (I (x Q))} converges to he ,/(:c 0) (I (x 0») sinee h is continuous in all the variables. If g(x o )
tJ. S(X; he,!(x c )(·) 0 f),
153
.~then
there is some x E X such that he,!(x o ) (f(x))
.By the continuity of h anc:l
> he,!(x o ) (f(x o)) .
!,
he,!(:Co)(f(x)) :> he~f(:r:Of)(f(xo.)), whenever a is large enough. This contradicts the fact that g(x a ) belongs to the . set S(X; he,!ex a ) ( . ) 0 f)· Thus, 9 is continuous on X . The relation g(x) = x for all x E S(X; f) is obvious and 9 is indeed a retraction. A retraction from f(X) to Min(j(X)IC) can similarly be constructed9 •
Theorem 4.13 Under the aS8umptions of Theorem 4.12, Min(j(X)IC) and S(X; f) are homeomorphic to each other~ Proof. We prove that the restriction of f on S(X; f) provides a homeomorphism. between two sets above. For, let y E l\-fin(f(X)IC), there is some x E S(X; f) so that f(x) = y. This x is unique because if there were some z E SeX; f), z ~ x .with fez) = y, then
+ z)/2)
E f(x) - intC, contradicting x E S(X; f). Set g(y) = x to get a function from Min(f(X)IC) to S (X; f) . It can be verified direct that 9 is continuous on Min (/ (X) IC) and g 0 f as well as fog axe identity functions on S(X; f) and Min(f(X)IC), respectively. The proof is complete. •
f«(x
. Corollary 4.14 Under the assumptions of Theorem 4.12, the sets Min(f(X)IC) and S(X; f) are contractible.
ProoE Invoke this corollary to Theorems 4.12, 4.13 and the fact that a set is alwaJo:s contracti hIe. •
convc~
Remark 4.15 If f is continuous C-quasiconvex on a compact convex set, then it is not n~cessary for SeX; f) and Min(j(X)IC) to be connected. For instance, let
x = [-1, 1}, let
f :X
H
R2 be defined by the rule J(t) = (0, -t) if t ~ 0; . f (t) = (t, 0) if t sO;
Then Min(f(X)IR~) = {( -1,0) U
S(X; f) = {-I} U{I}.
(0, -I)},
154
As the conclusion of this section the following remark on the efficient point sets of fractional vector problems should be usefuL Let us consider the following linear fr actional problem: min((alx
+ b1)/(CIX + d~), .... , (anx + bn)j(cnx + dn ))
Sit. X EX, ,vhere X is a polyhedron in Rm, ai, Ci E Rm, bi , d i E R and the ordering cone in Rn is the nonpositive orthant. Since the problem above is quasiconvex, all the results ,vc have established concerning solution sets of quasi convex problems are also vaJid for this problem. Chao and Atkins (1983) succeeded in proving a stronger result. Namely, if X ·is compact; then WS(X; f) is pathwise connected, while the set S(X; f) is connected in the case n S 3. For higher dimension the question is still open.
Comments Chapter 1:
The first six sections are based on Dedieu (1978), Luc (1987 d~e,f), Pomcrol (1985), Precupanu (1984), Yu (1974). Section 7. Most definitions concerning set-valued maps are taken from Aubin and Ekcland (1984), Berge (1963), Penot (1984). Chapter 2:
Section 1. An excelent book on binary relations and preference orders in multiobjcctivc decision making is Yu (1985).See also Percssini (1967). Section 2. The results of this section are classical except for Proposition 2.9 (Lac (1988.a)). For several concepts of proper efficiency see Sawaragi et aL (1985). Section 3~ The general theorems (Theorems 3.3, 3.4) on the existence of efficient points are new (Luc (1988c)). Other results have been obtained by Do!"\vein (1983), Corley (1980), Henig (1982b), Jahn (1986), Sterna- Karwat (1986a,1987c). Section 4. The first work dealing with the domination property is Vogel (1977). Further investigations have been achieved by Benson (1983), Henig (1986), .Luc (1984,1988a). Section 5. Vector problems with set-valued objectives and constraint have .been studied by several authors when developing duality theory. The form we present in this section was given in Corley (1987), Luc (1988c). Chapter 3:
Contingent cones have been introduced by Bouligand (1930). Contingent derivatives have. been developed in Aubin (1981), Aubin and Ekeland (1984), :·Frankowska (1985), Penot (1984)) Ward and Borwein (1987) and others. .. The concept of semidiffercntiabilities and the results of the first three sections .have been given in Luc (1988b)w Necessary and sufficient conditions of differentiable and ( or) convex vector problems have been obtained by many authors,see for example Aubin (1979),Kuhn and Tucker (1951), Da Cunha and Polak (1967), Luc (1985a), Smale (1974), Zalincscu (1987) and others~
156
Chapter 4: The first three sections contain the infinite dimensional version of the results from Luc (1986b,1987b), which include also results of Pascoletti and Serafini (1984), Warburton (1983), Wierzbicki (1977). Some results of Jahn (1984) are also added. For the separation of nonconvex sets see also Gertewitz and Ivanov (1985), Elster and Gorfcrt (1986). Stability has been investigated by Naccache (1979) ,Penot and Sterna- Karwat (1986), Stcrna···Karwat (1987a), Tanino and Sawaragi (1980b)~
Chapter 5: Duality of linear vector problems which is not included in these notes has been considered in Gale et al.(1951), Iserman (1977,1978), I
Chapter 6: The structure of optimal solution sets of linear problems has been obtained very early (Arrow ct al.-1953, Chernikov-1968). The structure of efficient point sets of convex sets has been studied by Bitran and :YJ:agnanti (1979), Luc (1985b), MOIOZOV (1977), Peleg (1972). In Section 3, we furnish the results from the works mentioned above in infinite dimensional spaces. For quasiconvex problems the results have been obtained by Choo and Atkins (1983), Luc (1987c), Warburton (1983). The differentiable structure of efficient point sets is not included in these notes. It can be found in Luc (1983), Schecter (1978), Smale (1976).
References A.rrow,K.J.,E.W.Barankin and D.Blackwcll: 1953. Admissible points of convex sets, in "Contributions to the Theory of Games", H.W.kuhn and A.W.Tuckcr (eds), VoL2, Princeton "Gniv.Press, Princeton, New Jersey. ~ubin,J .P.; 1979. "Mathematical Methods of Game and Economic Theory)') Korth-Holland Publ., Amsterdam. 1983. Contingent derivatives of set-valued maps and existence of solutions to nonlinear inclusions and differential inclusions, in "Advances in Mathematics Supplementary Study" 7a,L.Nachbin (cd.), Academic Press, Xew York, 159-~229. ~ubin,J.P. and I.Ekeland: 1984. "Applied Nonlinear Analysis", John Willey, Ne\V' York. !\zimov ,A.Ja.: 1982. Duality in vector optimization problems, Soviet Math. Doklady 26,170- -174. 3enson,H.P.: 1979. An improved definition of proper efficiency for vector minimization vvith respect to to cones, J.Math.Anal.Appl.79, 232-241. 1983. On a domination property for vector maximization with respect to cones, J.Optim~ Theory Appl.39, 125-132. Errata Corrige (1984),J. Optim. Theory Appl. 43,477-479. 1983. Efficiency and proper efficiency in vector maximization ,,,ith respect to cones, J. Math.A nal.AppI.93,273-289. - - - - and T.L.Morin: .1977. The vector maximization problem; proper efficiency and stability, SIAM J.Appl.Math.32,64-72. 3cn-Tal,A. and J.Zowc: . 1985. Directional derivatives in nonsmooth optimization, J. Optim. Theory Appl.47, 483-490. Bergc,C.: 1963. "Topological spaces", MacMillan, N"e'\v York. Bergstresscr,K., A.Charnes and P.L.Yu:
158
1976. "Generalization of domination: structures and nondorninatcd solutions in multicriteria decision making, J. Optim~ Theory Appl.18, 3-13. Bitran,G.R.: 1981. Duality for nonlinear multiple-criteria 0 p timization problems, J. Op tim. Theory Appl. 35,367-401. - - - - and T.L.Magnanti: 1981. The structure of admissible points with respect to cone dominance, J.Optim. Theo1VY Appl.29, 573-614. Borel,E.: 1921. The theory of play and of integral equations \vith skew syrmnetric kernels, G.R. Acad. Sci. Paris 173, 1304-- -1308.
Borsuk,K.: 1967. t'Theory of retracts", PWX,Warsaw. Borwcin,J.M.: 1977. Proper efficient points for maximizations with respect to cones, SIAM J.Control Optim~15,57- -63. 1980. The geometry of Pareto efficiency over concs~ Math. OperationsfoTsch. und Statist.Ser. Optim.l1 ~235-248. .1983. On the existence of Pareto efficient points, Math. Oper.Res.8) 64-73. _. - - - and J.W.Xicuwenhuis: 1984. Two kinds of normality in vector optimization,
Math.Programming 28,185-191. Bouligand~G.:
1930. Sur l'existence des demi-tangcntes a nne curbe de Jordan, Fun.Math.15,215 -218. Brosowski,B.: 1982. "Parametric semi--infinite optimization", Lang- Verlag, Frankfurt am Main. .~ - - - and A.Conci: 1983. On vector optimization and parametric programming, Seg'l£ndas J.Latino Amer.de Mai.Apl.2, 483-496. Brumelle,S.: 1981. Duality for multiple 0 bj ec tive convax programs ~ Math.Oper.Res.6, 159-172. Bu,Q.Y. and II.R.Shcn: 1985. Some properties of efficient solutions for vector optimiza.tion, J. Optim. Theory Appl.46,255-263. Cantor,G.: 1895) 1897 .Contributions to the foundation of transfmite set theory, Math.A nnalen 46,481---512; 49,207-246. Cesari,L. and M.B.Suryanarayana: 1976. Existence theorems for Pareto optimization in Banach spaces, Bull. Amer. Math. SOC~ 82, 306-308. 4
159
1978. An existence theorem for Pareto problems, Nonlinear A nal.2,225-233. .Chankong,V. and Y.Y.Haimes: 1983. "Multiobjcctivc Decision Making: 'l"lheory and Holland Pub!., Amsterdam.
)J.[cthodology'~,
North-
Chaney,R~W.:
·1982. On sufficient conditions in nonsrrlooth optimization, Math. Oper.Res~7, 463-475. . Charnes,A. and W.W.Cooper: 1977~ Goal programming and multiple objective optimization, J.Oper.Res.Soc.l,39-54. Chernikov,C.H.: 1968~ "Linear inequalities", ~auka, Moscow. Choo,E.U. and D.R.Atkins: 1983. Proper efficiency in nonconvex multicriteria programming, Math.Oper.Res.8,467-470. 1983. Connectedness in multiple linear fractional programming, Management Sci.29,250--255. Choquet,G.: 1962. Ensembles et cones convexes fBiblement complets, C.R.Acad.Sci.P,aris 254, 190~-1910. Clarke,F.H.: 1983. "Optimization and Konsmooth Analysis", Wiley, Kew York. Corley,H~W.:
1980. An cxistenc~ result for maximization with respect to cones, J. Optim. Theory Appl.31,277-281. 1981. Duality for maximizations with respect to cones, J.Math.A nal. A ppl.84,560-568~ 1985. On optimality conditions for maximizations with respect to cones, J.Optim.Theory Appl.46,67-78. 1987. Existence and Lagrangian duality for maximizations of set-valued maps, J.Optim~Theory Appl.54, 489-501 . .Craven,B.D.: 1980. Strong vector minimization and duality, Z.Angew.Math.Mech.60,1-5. Da Cunha~I\.O. and E Polak: , .1967. Constrained minimization under vector valued criteria in finite dimensional spaces, J.Math. Anal. Appl.19, 103-124. D.auer,J.P.: . 1987. Analysis of the 0 b j cctive space in finltipIe objective linear pro grarnming, . J.Math~Anal.Appl.126, 579-593.
- - - - and W.Stadler: 1986. A survey of vector optimization in infinite-dimensional spaces, Part 2, J. Optim. Theory Appl.51,205-241.
160
Debreu,G.: 1954. Valuation equilibrium and Pareto optimum, Proc.NatAcad.Sci. U.S.A.40,588--·592. Dcdieu,J.. P.: 1978. Criteres de fermeture pour l'imagc d'un ferme non convex par une multiapplication~ C.R.Acad.Sci.Paris 287,9/11-943. Dieudonne,J .: 1966. Sur la separation des ensembles convexes, Math~Annalen 163,1-3~
Dolecki,8.. and C.:vt:alivert: 1986. Polarities and stability in vector optimization, Lecture Notes in Economics and Mathematical Systems 294,96-113. Dugundij~J.:
1958. Absolute neighborhood retracts and local connected ness in arbitrary metric spaces, Composito Math.13,229·---246. Durier,R.. and C.Michelot: 1986. Sets of efficient points in a normcd 147 space, J. Math.AnaLAppLl 17,506-528. Edgeworth,F.Y.: 1881. "Mathematical Psychics~', C.Kcgan Paul & Co., London, England. Egudo,R.R. and M.A.Hanson: 1987. Multiobjective duality with invexity, J.Math.Anal.Appl.126, 469-477. Elster,K.-H. and A.Garfert: 1986. Recent results on duality in vector optimization, Lecture Notes in Economics and Mathematical Systems 294,129-136. Fandel,G. and T.Gal (eds): 1979. "Multiple Criteria Decision Making Theory and Applcations", Proceedings, Bonn, Springer--Verlag, Berlin·and New York~ Fenchel,W.: 1951. Convex cones, sets and functions, mimeographed lecture notes, Princeton
"Gniversity. Frank.owska,H~:
1985. Adjoint differentiable inclusions in necessary conditions for the minimal trajectories of di.fferential inclusions, Annales Insi.Henri Poincare, Analyse non lineaire 2,75-99. Gale,D.,H.W.Kuhn and A.. W . 1ucker: 1951. Linear programming and the theory of game. in "Activity Analysis of Production and Allocation", New York, Wiley, 317-329. .
Gearhart,vV.B.: 1983. Characterization of properly efficient solutions by generalized scalarization methods, J. Optim. Theory Appl.41,491-502. Geoffrion,A.M.. : 1968. Proper efficiency and the theory of vector maximization, J.Math.Anal.Appl.22, 618-630...
161
Gertewitz Chr. and E.lvanov: 1985. Dualitat fiir nichtkonvexe Vektoroptimierungsproblemc, Wiss.Zeitschr. d. TH Ilmenau 31~61-81. Giannessi,F. : 1984. Theorems of the alternative and optimality conditions, J.Optim. Theory Appl.42,331-~·365. 1986. Theorems of the alternative far multifunctions with applications to optimization. General results, lles.Rep.n.127, Dept.Math. of Pisa,1-40.
Gorfert, A: 1986. YIulticriteria duality, examples and advances, Lecture Notes in Economics and Mathematical Systems 273,52-58. Grauer,);!. and A.P.Wierzbicki (cds): 1984. ('Interactive Decision Analysis; Proceedings of an International Workshop on Interactive Decision Analysis and Interpretative Computer Intelligence, IIASA, Laxenburg, Austria 1983". Springer-Verlag, Berlin and l'ew York. Gros,C.: 1978~ Generalization of Fenchels duality theorem for convex vector optimization, European J.Oper.Res.2,368---376. di Guglielmo,F.: 1977. Nonconvex duality in multiobjective optimization, Math. Oper. Res.2,285-291. Hansen,P.: 1983. "Essays and Surveys on Multiple Criteria Decision Making", SpringcI'~ Verlag, Berlin and New York. . Hartley,R.: 1978. On cone-efficiency, cone-convexity, and cone-compactness, SIAM J.Appl.Math.34,211-222. ~.Hasen,G.B. and T.L.Morin: . 1983. Optimality conditions in nonconical multiple-objective.programming, J. Optim. Theo1'Y App1.40,25-60. Hausdorff,F.: 1906. I.nvestigations concerning order types, Math.-Physische Klasse 58,106---169. ~ Helbig,S.: . 1987~ "Parametrische semi-infinite Optimicrung in tatalgcordnctcn Gruppen" , R.G.Fischer Verlag, Frankfurt am. Main. .Hcnig,M.I.: 1982 a. Proper efficiency with respect to cones) J. Optim. Theory Appl~36,387
407. 1982 b. Existence and characterization of efficient decisions with respect to cones, Math.Programming 23,111-116. 1986. The domination property in multicriteria optimization,
162
Index

acute cone: 2
algebraic dual space: 7
algebraic polar cone: 7
alternative: 57
arcwise connected set: 146
asymmetric binary relation: 37
asymptotic cone: 9
axiomatic duality: 120
base: 4-7
binary relation: 37,38
boundedly order complete space: 47,48
C-bounded set: 13,14,15
C-closed map: 33,34
C-closed set: 13-17,34
C-compact set: 13,14,34
C-complete set: 46,47,58
C-continuous function: 22-24,27
C-continuous set-valued map: 33
C-convex function: 29,30,32
C-convex set: 15
C-quasiconvex function: 29-32
C-semicompact set: 14,34
C-semicontinuous function: 22
Chebyshev norm: 21
closed function: 22
closed convex bounded base: 4-6
compact map: 35,36
complete binary relation: 38
complete lattice: 133
complete scalarization: 95,96
composition of functions: 27,28
composition of set-valued maps: 36
condition (CB): 10-13,16
condition (CD): 10-13
cone: 1
cone continuous function: 22
cone convex function: 29
cone monotonic function: 18
conjugate dual: 117,133
conjugate map: 117,133
connected binary relation: 38
connected set: 149
constrained problem: 70
constraint: 57
constraint qualification: 75,76
contingent cone: 63
contingent derivative: 63
continuous map: 33-35
contractible set: 139,143,153
convex hull: 1
convex problem: 61,91
convex weakly compact base: 7
correct cone: 2-4
Daniell cone: 47
decreasing net: 46
Dini lower derivative: 64
Dini upper derivative: 64
directional derivative: 69
domination property: 53-55
dual map: 110
dual pair of solutions: 112
dual problem: 111,121
duality relation: 129,133
efficient point: 39-44,46
efficient solution: 57
epi-closed function: 22,25-27
epigraph: 18
exact dual: 121,123-125
existence of efficient points: 46-52
extreme vector: 8
feasible solution: 59
feasible triple: 70,112
feasible couple: 112
fractional problem: 154
generalized Slater condition: 75
graph: 33
greatest lower bound: 132
ideal efficient point: 39-43
increasing function: 18-21,81
Lagrangean duality: 110
Lagrangean map: 111
lattice: 132
least upper bound: 132
level-closed function: 22,25,27
level set: 18
lexicographic cone: 2,39
linear binary relation: 38
linear problem: 61,88,138
linear representation: 88,89
Lipschitz map: 35
local efficient point: 40
local minimizer: 68-73
local optimal solution: 69
local weak minimizer: 68,71
locally contractible set: 139,147
lower bound: 132
lower C-continuous map: 33
lower equisemicontinuous family: 22,23
lower semicontinuous function: 22,24-26
lower semidifferentiable map: 62-65
minimal solution: 57
Minkowski functional: 21,93
monotonic function: 18
necessary condition: 71
nondecreasing function: 18-21
nondominated point: 39
nonnegative orthant: 2
nontransitive binary relation: 37
"N"-valued map: 33
optimal solution: 57
optimal value: 57,60
Pareto minimal point: 39
partial order: 38
pathwise connected set: 137,153
perturbation: 117,133
pointed cone: 2,24
polyhedral cone: 4,24
positive linear functional: 20
positive linear operator: 20
primal problem: 111
properly efficient point: 39,44,90
properly increasing function: 81
properly optimal triple: 112,113
quasiconvex function: 29,30
quasiconvex problem: 61,92,93,148
quasiconvex representation: 92,93
recession cone: 8-13
reflexive binary relation: 37
relation of the alternative: 129
retract: 139,144,147
saddle point: 115,116
saddle point theorem: 115
scalar proper representation: 87
scalar strict representation: 86
scalar weak representation: 87
section: 43
self-efficient map: 111
separation: 81-85
set-valued map: 33
Slater condition: 111
smallest strictly monotonic function: 21,30
stable problem: 119
strictly C-convex function: 29,32
strictly C-quasiconvex function: 29
strictly convex problem: 61
strictly increasing function: 19-21
strictly quasiconvex function: 61
strictly supported cone: 2
strongly C-complete set: 46,47
strong duality: 113
subdifferentiable map: 119
sufficient condition: 67,71,77
symmetric binary relation: 37
topological dual space: 7
topological polar cone: 7
transitive binary relation: 37
ubiquitous cone: 2,39
unconstrained problem: 67
upper bound: 132
upper C-continuous map: 33
upper semidifferentiable map: 63-66
vector optimization problem: 57
weak domination property: 56
weak duality: 112,118
weak duality axiom: 121
weakly efficient point: 40-44
weak separation: 82