A new approach to BSDE
Lixing Jin Exeter College University of Oxford Supervisor: Dr. Zhongmin Qian
A thesis submitted in partial fulfillment of the MSc in Mathematical and Computational Finance 23 June, 2011
Acknowledgements

I would like to acknowledge the efforts of all the department members who have taught and advised me during the MSc course. In particular, I would like to express my gratitude to Dr Lajos Gergely Gyurko and my supervisor Dr Zhongmin Qian for their invaluable guidance and help throughout this dissertation. Last but not least, I would like to thank my parents for their endless encouragement, support and love.
Abstract
This thesis is a survey report based on the two papers [3] and [4]. In [3] a new approach was introduced which interprets backward stochastic differential equations (BSDEs) as ordinary functional differential equations on certain path spaces. By Liang, Lyons and Qian's approach, we can study BSDEs of the following type:
$$dY_t = -f\left(t, Y_t, L(M)_t\right)dt - \sum_{i=1}^{d} f_i(t, Y_t)\,dB_t^i + dM_t,$$
with terminal condition $Y_T = \xi$, on a general probability space $(\Omega, \mathcal{F}, \mathcal{F}_t, P)$, where $B = (B^1, \dots, B^d)$ is a $d$-dimensional Brownian motion, the coefficients $(f_i)_{1\le i\le d}$ are given, $L$ is a prescribed mapping which sends a square-integrable martingale $M$ to a predictable process $L(M)$, and $M$, a correction term, is a square-integrable martingale to be determined. The new approach first proves the existence of a solution on a small time interval (a local solution) by defining a contraction mapping and applying the fixed point theorem. We then study the generalized comparison theorem and propose a lemma, based on the so-called homogeneous property, which completes the original proof given by Shi, H.Z. and Qian, Z.M. in their paper [12]. In addition, a so-called K-Lipschitz condition yields a local version of the comparison theorem. Finally, we propose a condition, the K1-condition, which is equivalent to the K-Lipschitz condition.
Key words: Backward stochastic differential equation, semimartingale, comparison theorem, ordinary functional differential equation, stochastic differential equation, local condition, homogeneous property, K-Lipschitz condition
Contents

1 INTRODUCTION

2 BSDE and FUNCTIONAL DIFFERENTIAL EQUATIONS
2.1 Preliminaries
2.2 Functional differential equations
2.3 Existence and uniqueness of local solutions

3 COMPARISON THEOREM

4 CONCLUSIONS
Chapter 1

INTRODUCTION

We are concerned with backward stochastic differential equations (BSDEs) and their applications. Over the past twenty years the theory of BSDEs has undergone a dramatic development, and a large number of articles have been devoted to it. It was found that BSDEs have intimate connections with other research areas: partial differential equations (PDEs), mathematical finance, stochastic analysis, etc. Below is an interesting example showing that a partial differential equation can be solved by specifying a backward stochastic differential equation. Suppose the solution $u(t,x)$, a function on $[0,\infty)\times\mathbb{R}$, to the following partial differential equation exists and is smooth:
$$\left(\frac{1}{2}\frac{\partial^2}{\partial x^2} - \frac{\partial}{\partial t}\right)u = -f\left(u, \frac{\partial u}{\partial x}\right); \qquad u(0,x) = u_0(x), \tag{1.0.1}$$
where $u_0$ is a function on $\mathbb{R}$ and $f$ is a given function. Let $h(t,x) = u(T-t,x)$ for $t\in[0,T]$. Then $h$ solves the backward parabolic equation with terminal condition
$$\left(\frac{1}{2}\frac{\partial^2}{\partial x^2} + \frac{\partial}{\partial t}\right)h = -f\left(h, \frac{\partial h}{\partial x}\right), \quad t\in[0,T]; \qquad h(T,x) = u_0(x). \tag{1.0.2}$$
Let $Y_t = h(t, W_t)$, where $W$ is a Brownian motion in $\mathbb{R}$. Then $Y_T = h(T, W_T) = u_0(W_T)$ and, according to Itô's formula,
$$Y_t - Y_0 = \int_0^t \left(\frac{\partial h}{\partial s} + \frac{1}{2}\frac{\partial^2 h}{\partial x^2}\right)ds + \int_0^t \frac{\partial h}{\partial x}\,dW_s = -\int_0^t f\left(h, \frac{\partial h}{\partial x}\right)ds + \int_0^t \frac{\partial h}{\partial x}\,dW_s.$$
Clearly $(Y, Z)$, where $Z_t = \frac{\partial h}{\partial x}(t, W_t)$, solves the following BSDE:
$$dY_t = -f(Y_t, Z_t)\,dt + Z_t\,dW_t, \quad t\in[0,T]; \qquad Y_T = u_0(W_T). \tag{1.0.3}$$
Hence we see that the BSDE (1.0.3) is closely related to the non-linear PDE (1.0.1). In 1973 a linear backward stochastic differential equation was introduced by Bismut as the equation for the adjoint process in the stochastic version of the Pontryagin maximum principle. He also introduced a nonlinear backward stochastic differential equation in 1978, for which he proved the existence and uniqueness of the solution. Pardoux and Peng in 1990 investigated the general BSDE which we will discuss shortly. Later many papers extended their results and explored BSDEs further. So far, most papers study BSDEs on a probability space with a Brownian filtration; only a few deal with BSDEs with jumps or with reflecting boundary conditions. In 1994, Tang and Li [5] studied BSDEs with jumps in stochastic control problems. Meanwhile, Barles et al. [6] explored the connection between BSDEs with random jumps and a specific type of parabolic integro-differential equation. Qian and Ying [2] explained the continuity of the natural filtration of a continuous Hunt process, and further established a martingale representation theorem over the filtration generated by a Hunt process. Many articles demonstrate the strong connection of BSDEs with mathematical finance, stochastic control, partial differential equations (PDEs) and optimal stopping problems. El Karoui et al. [4] applied linear BSDE theory to European option pricing problems and broadened the application of BSDEs in mathematical finance. Our report is organized as follows. In Chapter 2, we introduce the ordinary
functional differential equation by specifying the classical BSDE first studied by Pardoux and Peng [1]. In Chapter 3, we state the corresponding comparison theorem and establish a local version of it.
Chapter 2

BSDE and FUNCTIONAL DIFFERENTIAL EQUATIONS

2.1 Preliminaries
We now fix mathematical notation for this section. Given a complete probability space $(\Omega, \mathcal{F}, P)$ and $0 \le t_0 \le T$:

- $\{\mathcal{F}_t : t\in[t_0,T]\}$ is a right-continuous filtration, and each $\mathcal{F}_t$ contains all $P$-null events.
- $L^2(\Omega, \mathcal{F}_T, P)$ denotes the space of all $\mathcal{F}_T$-measurable random variables $X$ satisfying $\|X\|^2 = E^P(|X|^2) < +\infty$.
- $\mathcal{C}([t_0,T];\mathbb{R}^n)$ denotes the space of all continuous adapted $\mathbb{R}^n$-valued processes $(V_t)_{t_0\le t\le T}$ satisfying $\max_{1\le j\le n}\sup_{t_0\le t\le T}|V_t^j| \in L^2(\Omega,\mathcal{F}_T,P)$, equipped with the norm
$$\|V\|_{\mathcal{C}[t_0,T]} = \sqrt{\sum_{j=1}^{n} E\Big[\sup_{t_0\le t\le T}|V_t^j|^2\Big]}.$$
- $\mathcal{C}_0([t_0,T];\mathbb{R}^n)$ denotes the space of all processes $(V_t)_{t_0\le t\le T}$ in $\mathcal{C}([t_0,T];\mathbb{R}^n)$ with initial value $V_{t_0}=0$, equipped with the norm $\|\cdot\|_{\mathcal{C}[t_0,T]}$.
- $\mathcal{M}^2([t_0,T];\mathbb{R}^n)$ denotes the space of $\mathbb{R}^n$-valued square-integrable martingales on $(\Omega,\mathcal{F},\mathcal{F}_t,P)$, equipped with the norm $\|\cdot\|_{\mathcal{C}[t_0,T]}$.
- $\mathcal{H}^2([t_0,T];\mathbb{R}^n)$ denotes the space of all predictable $\mathbb{R}^n$-valued processes $(Z_t)_{t_0\le t\le T}$ equipped with the norm
$$\|Z\|_{\mathcal{H}^2([t_0,T])} = \sqrt{\sum_{j=1}^{n} E\Big[\int_{t_0}^{T}|Z_t^j|^2\,dt\Big]}.$$
Definition 2.1.1. A Banach space $(V, \|\cdot\|)$ is a normed vector space $V$ such that every Cauchy sequence (with respect to the metric $d(x,y) = \|x-y\|$) has a limit in $V$ (with respect to the topology generated by the metric $d$).

Now let us introduce the concept of a contraction in a metric space.

Definition 2.1.2. Let $(M,d)$ be a metric space. A mapping $f: M \to M$, $x \mapsto f(x)$, is called a contraction if there is a constant $r\in[0,1)$ such that for any $x,y\in M$,
$$d(f(x), f(y)) \le r\,d(x,y).$$

Theorem 2.1.3 (The Banach Fixed Point Theorem). Let $(X,\|\cdot\|)$ be a Banach space and let $f$ be a contraction mapping on $X$. Then $f$ admits a unique fixed point $\tilde{x}$ in $X$, i.e. $f(\tilde{x}) = \tilde{x}$.
Proof. Let $x_0$ be an initial point and $x_n = f(x_{n-1})$. Clearly $f$ is continuous, and
$$\|x_n - x_{n-1}\| \le r^{n-1}\|x_1 - x_0\|.$$
Hence $\{x_n\}$ is a Cauchy sequence, which implies that $\lim_{n\to\infty} x_n = \tilde{x}$ exists, and $\tilde{x} = f(\tilde{x})$ by the continuity of $f$; $\tilde{x}$ is called the fixed point. Next we show the uniqueness of $\tilde{x}$. Suppose there is another point $\tilde{y}$ such that $\tilde{y} = f(\tilde{y})$. Then
$$\|\tilde{x}-\tilde{y}\| = \|f(\tilde{x}) - f(\tilde{y})\| \le r\|\tilde{x}-\tilde{y}\|,$$
which implies $\tilde{x} = \tilde{y}$. $\square$

Theorem 2.1.4 (Doob-Meyer Decomposition Theorem). Let $Y$ be a right-continuous supermartingale of class (D) with left limits and $Y_0 = 0$. Then there exist a unique increasing predictable process $V$ with $V_0 = 0$ and a uniformly integrable martingale $M$ such that $Y_t = M_t - V_t$.
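The contraction argument in the proof of Theorem 2.1.3 can be illustrated numerically. The sketch below (in Python; the particular map $f(x) = \cos(x)/2$ is chosen purely for illustration, since $|f'|\le 1/2$ makes it a contraction on $\mathbb{R}$ with $r = 1/2$) iterates $x_n = f(x_{n-1})$ to the fixed point and checks that different starting points reach the same limit, reflecting uniqueness.

```python
import math

def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Iterate x_n = f(x_{n-1}) until successive iterates agree (Banach fixed point)."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence")

# f(x) = cos(x)/2 is a contraction on R with constant r = 1/2 (|f'(x)| <= 1/2),
# so the iteration converges regardless of the starting point.
f = lambda x: 0.5 * math.cos(x)
x_star = fixed_point(f, x0=0.0)
assert abs(f(x_star) - x_star) < 1e-10          # x_star is indeed a fixed point
assert abs(fixed_point(f, x0=10.0) - x_star) < 1e-10  # uniqueness in practice
```

The geometric bound $\|x_n - x_{n-1}\| \le r^{n-1}\|x_1 - x_0\|$ from the proof is what guarantees the loop terminates in finitely many steps.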
2.2 Functional differential equations

We begin with the BSDE introduced by Pardoux and Peng [1] and try to convey the main idea of how BSDEs may be reformulated as ordinary functional differential equations. Consider the following backward stochastic differential equation:
$$dY_t = -f(t, Y_t, Z_t)\,dt + Z_t\,dW_t; \qquad Y_T = \xi, \tag{2.2.1}$$
on the filtered probability space $(\Omega,\mathcal{F},\mathcal{F}_t,P)$, where $W=(W_t)_{t\ge0}$ is a $d$-dimensional Brownian motion, $\xi\in L^2(\Omega,\mathcal{F}_T,P)$ and $(\mathcal{F}_t)_{t\ge0}$ is the Brownian filtration. In this BSDE, $(W_t)_{t\ge0}$, $f$ and $\xi$ are given. A solution of equation (2.2.1) backward to $t_0$ is a pair of adapted processes $(Y_t,Z_t)_{t\in[t_0,T]}$ satisfying (2.2.1). Equation (2.2.1) is equivalent to
$$\xi - Y_t = -\int_t^T f(s,Y_s,Z_s)\,ds + \int_t^T Z_s\,dW_s. \tag{2.2.2}$$
Our next step is based on the observation that the solution $Y$ is a semimartingale, and therefore has a semimartingale decomposition. Clearly $Y$ is a continuous semimartingale, $Y_t = M_t - V_t$, where $M_t$ is a continuous martingale and $V_t$ is a continuous process with bounded variation, and the decomposition is unique. Substituting $Y_t = M_t - V_t$ into the integral form of the equation over $[t_0,t]$ gives
$$Y_t - Y_{t_0} = -\int_{t_0}^{t} f(s,Y_s,Z_s)\,ds + \int_{t_0}^{t} Z_s\,dW_s, \quad t_0 \le t.$$
Since the stochastic integral is the martingale part and the decomposition is unique, it follows that
$$V_t = V_{t_0} + \int_{t_0}^{t} f(s,Y_s,Z_s)\,ds, \tag{2.2.3}$$
and, taking the conditional expectation $E(\cdot\mid\mathcal{F}_t)$ in (2.2.2),
$$M_t = Y_t + V_t = E\Big(\xi + \int_t^T f(s,Y_s,Z_s)\,ds \,\Big|\, \mathcal{F}_t\Big) + V_t. \tag{2.2.4}$$
From the fact that, by (2.2.3),
$$\int_t^T f(s,Y_s,Z_s)\,ds + V_t = V_T,$$
$M_t$ can be expressed as
$$M_t = E(\xi + V_T \mid \mathcal{F}_t), \tag{2.2.5}$$
denoted by $\mathcal{M}(V)_t$. Also, according to the martingale representation theorem, there is a unique $Z$ such that
$$M_t = M_{t_0} + \int_{t_0}^{t} Z_s\,dW_s.$$
Formally, $Z_t = \frac{dM_t}{dW_t}$. So far equation (2.2.1) is a differential equation in terms of $V$, $Y$ and $Z$. It turns out that $Y$ and $Z$ can be rewritten in terms of $V$. The following part is very important, as it uses the language of functional analysis to describe the SDE. Consider the operators $\mathcal{M}$ and $\mathcal{Z}$:
$$\mathcal{M}: \mathcal{C}([t_0,T];\mathbb{R}^n) \to \mathcal{C}([t_0,T];\mathbb{R}^n), \qquad V_t \mapsto \mathcal{M}(V)_t = E(\xi + V_T \mid \mathcal{F}_t),$$
and
$$\mathcal{Z}: \mathcal{M}^2([t_0,T];\mathbb{R}^n) \to \mathcal{H}^2([t_0,T];\mathbb{R}^{n\times d}), \qquad M_t = M_{t_0} + \int_{t_0}^{t} Z_s\,dW_s \mapsto \mathcal{Z}(M)_t = Z_t.$$
This gives us the idea that $M$ is a functional of $V$, i.e. $M_t = \mathcal{M}(V)_t$, and $Z$ is a functional of $M$ as well, i.e. $Z_t = \mathcal{Z}(\mathcal{M}(V))_t = \mathcal{Z}(V)_t$. Eventually, $Y$ is a functional of $V$; precisely,
$$Y_t = E(\xi + V_T \mid \mathcal{F}_t) - V_t.$$
Therefore equation (2.2.1) is equivalent to the following equation involving only $V$, which is an ordinary functional differential equation:
$$V_T - V_t = \int_t^T f\left(s, \mathcal{Y}(V)_s, L[\mathcal{M}(V)]_s\right)ds, \tag{2.2.6}$$
which can be solved by Picard iteration applied to $V$ alone. We will call this equation, stated in terms of $V$, an ordinary functional differential equation.
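To see why Picard iteration on $V$ alone is plausible, the sketch below solves a deterministic toy analogue of (2.2.6): with no noise, the conditional expectation is trivial, $Y_t = \xi + V_T - V_t$ and $V_t = \int_{t_0}^t f(s, Y_s)\,ds$, which together are equivalent to the backward ODE $Y' = -f(t,Y)$, $Y(T)=\xi$. This is an illustration only, not the stochastic construction; the function names and the example driver $f(t,y)=y$ are made up for the demonstration.

```python
import math

def picard_solve(f, xi, t0, T, n=2000, sweeps=60):
    """Picard iteration on V for the deterministic analogue of (2.2.6)."""
    dt = (T - t0) / n
    ts = [t0 + i * dt for i in range(n + 1)]
    V = [0.0] * (n + 1)                           # initial guess V = 0
    for _ in range(sweeps):
        Y = [xi + V[-1] - v for v in V]           # Y(V)_t = xi + V_T - V_t
        new_V, acc = [0.0], 0.0
        for i in range(n):                        # V_t = int_{t0}^t f(s, Y_s) ds
            acc += f(ts[i], Y[i]) * dt
            new_V.append(acc)
        V = new_V
    return ts, [xi + V[-1] - v for v in V]

# Example driver f(t, y) = y with xi = 1: the exact solution of
# Y' = -Y, Y(T) = 1 on [0, 1] is Y_t = e^{T - t}.
ts, Y = picard_solve(lambda t, y: y, xi=1.0, t0=0.0, T=1.0)
assert abs(Y[0] - math.e) < 1e-2   # Y_0 close to e
assert abs(Y[-1] - 1.0) < 1e-9     # terminal condition Y_T = xi holds exactly
```

Each sweep maps the current guess for $V$ through the composed functionals, exactly in the spirit of iterating the operator $\mathcal{L}$ defined later in Section 2.3.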
2.3 Existence and uniqueness of local solutions

In this section our aim is to show that the BSDE
$$dY_t = -f(t, Y_t, L(M)_t)\,dt + dM_t; \qquad Y_T = \xi, \tag{2.3.1}$$
admits a unique pair of solutions $(Y, M)$ over $[t_0, T]$ when $T - t_0 < \delta$, for some $\delta$ to be determined. Here the function
$$f: [0,\infty)\times\mathbb{R}^n\times\mathbb{R}^{n\times d} \to \mathbb{R}^n$$
is Lipschitz continuous: there exist positive constants $C$ and $C^*$ such that
$$|f(t,y,z) - f(t,\tilde{y},\tilde{z})| \le C\left(|y-\tilde{y}| + |z-\tilde{z}|\right);$$
$$|f(t,y,z)| \le C^*\left(1 + t + |y| + |z|\right),$$
for all $t\in[0,\infty)$, $y,\tilde{y}\in\mathbb{R}^n$ and $z,\tilde{z}\in\mathbb{R}^{n\times d}$. The operator $L$ satisfies the Lipschitz condition with constant $K$; this concept is explained in the definition below.
Definition 2.3.1. An operator
$$L: \mathcal{M}^2([t_0,T];\mathbb{R}^n) \to \mathcal{H}^2([t_0,T];\mathbb{R}^n)$$
satisfies the Lipschitz condition with constant $K$ if for any $M^1, M^2 \in \mathcal{M}^2([t_0,T];\mathbb{R}^n)$,
$$\left\|L(M^1) - L(M^2)\right\|_{\mathcal{H}^2[t_0,T]} \le K\left\|M^1 - M^2\right\|_{\mathcal{C}[t_0,T]}.$$
Below is an example of the Lipschitz condition.

Example 2.3.2. For a given Brownian motion $W=(W_t)$ on $\mathbb{R}^d$ with its filtration $(\mathcal{F}_t)$, any square-integrable $\mathcal{F}_t$-martingale $M=(M_t)\in\mathcal{M}^2([t_0,T];\mathbb{R}^n)$ has a representation
$$M_t = M_0 + \int_0^t Z_s\,dW_s$$
with a unique $Z\in\mathcal{H}^2([t_0,T];\mathbb{R}^{n\times d})$. Define the operator $\mathcal{Z}$ as follows:
$$\mathcal{Z}: \mathcal{M}^2([t_0,T];\mathbb{R}^n) \to \mathcal{H}^2([t_0,T];\mathbb{R}^{n\times d}), \qquad M_t \mapsto \mathcal{Z}(M)_t = Z_t$$
for any $t\in[t_0,T]$. First of all, we have to verify that $\mathcal{Z}$ is well defined. It is not hard to see that
$$\|\mathcal{Z}(M)\|_{\mathcal{H}^2[t_0,T]} = \sqrt{E\Big[\int_{t_0}^T |Z_t|^2\,dt\Big]} = \sqrt{E\Big[\Big(\int_{t_0}^T Z_t\,dW_t\Big)^2\Big]} = \sqrt{E\big[(M_T - M_{t_0})^2\big]} \le 2\|M\|_{\mathcal{M}^2[t_0,T]}.$$
This shows that $\mathcal{Z}(M)\in\mathcal{H}^2([t_0,T];\mathbb{R}^{n\times d})$. Next we show that the operator $\mathcal{Z}$ satisfies the Lipschitz condition with constant 2, i.e. for any $M, \widetilde{M}\in\mathcal{M}^2([t_0,T];\mathbb{R}^n)$,
$$\left\|\mathcal{Z}(M) - \mathcal{Z}(\widetilde{M})\right\|_{\mathcal{H}^2[t_0,T]} \le 2\left\|M - \widetilde{M}\right\|_{\mathcal{C}[t_0,T]}.$$
This property can be proved by the following calculation:
$$\left\|\mathcal{Z}(M) - \mathcal{Z}(\widetilde{M})\right\|_{\mathcal{H}^2[t_0,T]} = \sqrt{E\Big(\int_{t_0}^T (Z_s - \widetilde{Z}_s)^2\,ds\Big)} = \sqrt{E\Big[\Big(\int_{t_0}^T (Z_s - \widetilde{Z}_s)\,dW_s\Big)^2\Big]} = \sqrt{E\Big[\big((M_T - \widetilde{M}_T) - (M_{t_0} - \widetilde{M}_{t_0})\big)^2\Big]} \le 2\left\|M - \widetilde{M}\right\|_{\mathcal{C}[t_0,T]}.$$
Hence, in this case the Lipschitz constant is 2. In the previous section we introduced the operator
$$\mathcal{M}: \mathcal{C}([t_0,T];\mathbb{R}^n) \to \mathcal{C}([t_0,T];\mathbb{R}^n), \qquad V_t \mapsto \mathcal{M}(V)_t = E(\xi + V_T \mid \mathcal{F}_t).$$
However, some details still need to be discussed. First of all, it is straightforward to see that
$$\|\mathcal{M}(V)\|_{\mathcal{C}[t_0,T]} = \sqrt{E\Big[\sup_{t_0\le t\le T} |E(\xi + V_T \mid \mathcal{F}_t)|^2\Big]} \le \sqrt{4E\big[|\xi + V_T|^2\big]} = 2\sqrt{E\big[|\xi + V_T|^2\big]} \le 2\sqrt{E(|\xi|^2)} + 2\sqrt{E(|V_T|^2)} \le 2\sqrt{E(|\xi|^2)} + 2\|V\|_{\mathcal{C}[t_0,T]}.$$
The first inequality is obtained by Doob's inequality, and the second by Minkowski's inequality. Therefore $\mathcal{M}(V)$ takes values in $\mathcal{C}([t_0,T];\mathbb{R}^n)$, which also implies that $\mathcal{M}$ is well defined.

Lemma 2.3.3. For a given $\xi$ and a continuous adapted process $(V_t)_{t\in[t_0,T]}$ with bounded variation, the operator $\mathcal{M}$ satisfies
$$\left\|\mathcal{M}(V) - \mathcal{M}(\widetilde{V})\right\|_{\mathcal{C}[t_0,T]} \le 2\left\|V - \widetilde{V}\right\|_{\mathcal{C}[t_0,T]}.$$
Proof. If $V, \widetilde{V} \in \mathcal{C}([t_0,T];\mathbb{R}^n)$, then
$$\left\|\mathcal{M}(V) - \mathcal{M}(\widetilde{V})\right\|_{\mathcal{C}[t_0,T]} = \sqrt{E\Big[\sup_{t_0\le t\le T}|E(V_T - \widetilde{V}_T \mid \mathcal{F}_t)|^2\Big]} \le \sqrt{4E\big(|V_T - \widetilde{V}_T|^2\big)} \le 2\left\|V - \widetilde{V}\right\|_{\mathcal{C}[t_0,T]}.$$
This completes the proof. $\square$
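The two estimates used above, the Itô isometry behind Example 2.3.2 and Doob's $L^2$ maximal inequality behind Lemma 2.3.3, can be checked numerically. The sketch below is a Monte Carlo illustration on a discretized Brownian motion (the step count, sample size and the choice $Z_t = W_t$ are arbitrary illustrative inputs): it compares $E[(\int Z\,dW)^2]$ with $E[\int Z^2\,dt]$ and verifies $E[\sup_t|W_t|^2] \le 4\,E[|W_T|^2]$ on the simulated paths.

```python
import random

random.seed(0)

def simulate(n_paths=10000, n_steps=200, T=1.0):
    """Monte Carlo estimates for the Ito isometry and Doob's L^2 inequality."""
    dt = T / n_steps
    iso_lhs = iso_rhs = sup_sq = end_sq = 0.0
    for _ in range(n_paths):
        w, integ, quad, m = 0.0, 0.0, 0.0, 0.0
        for _ in range(n_steps):
            dw = random.gauss(0.0, dt ** 0.5)
            integ += w * dw        # Ito integral of Z_t = W_t (left endpoint)
            quad += w * w * dt     # int_0^T Z_t^2 dt
            w += dw
            m = max(m, abs(w))
        iso_lhs += integ ** 2
        iso_rhs += quad
        sup_sq += m ** 2
        end_sq += w ** 2
    n = n_paths
    return iso_lhs / n, iso_rhs / n, sup_sq / n, end_sq / n

lhs, rhs, sup_sq, end_sq = simulate()
# Ito isometry: E[(int Z dW)^2] = E[int Z^2 dt]  (both equal T^2/2 for Z = W)
assert abs(lhs - rhs) < 0.08
# Doob's L^2 maximal inequality: E[sup_t |W_t|^2] <= 4 E[|W_T|^2]
assert sup_sq <= 4 * end_sq
```

The left-endpoint (non-anticipating) evaluation of the integrand is essential: evaluating at the right endpoint would break the isometry, mirroring why $Z$ must be predictable in $\mathcal{H}^2$.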
Corollary 2.3.4. It follows immediately that the operator
$$\mathcal{Y}: \mathcal{C}([t_0,T];\mathbb{R}^n) \to \mathcal{C}([t_0,T];\mathbb{R}^n), \qquad V_t \mapsto \mathcal{Y}(V)_t = E(\xi + V_T \mid \mathcal{F}_t) - V_t,$$
is well defined, and
$$\|\mathcal{Y}(V)\|_{\mathcal{C}[t_0,T]} \le 2\sqrt{E(|\xi|^2)} + 3\|V\|_{\mathcal{C}[t_0,T]};$$
$$\left\|\mathcal{Y}(V) - \mathcal{Y}(\widetilde{V})\right\|_{\mathcal{C}[t_0,T]} \le 3\left\|V - \widetilde{V}\right\|_{\mathcal{C}[t_0,T]}.$$
Now let us define the operator $\mathcal{L}$:
$$\mathcal{L}: \mathcal{C}([t_0,T];\mathbb{R}^n) \to \mathcal{C}([t_0,T];\mathbb{R}^n), \qquad V_t \mapsto \mathcal{L}(V)_t = \int_{t_0}^{t} f\left(s, \mathcal{Y}(V)_s, L(\mathcal{M}(V))_s\right)ds + V_{t_0},$$
where $V_{t_0} = V_0$. First of all, we have to check that $\mathcal{L}$ is well defined. Because
$$\begin{aligned}
\|\mathcal{L}(V)\|_{\mathcal{C}[t_0,T]} &\le \sqrt{E\Big(\int_{t_0}^T \big|f(s,\mathcal{Y}(V)_s,L(\mathcal{M}(V))_s)\big|\,ds\Big)^2} \\
&\le \sqrt{T-t_0}\,\sqrt{E\int_{t_0}^T \big|f(s,\mathcal{Y}(V)_s,L(\mathcal{M}(V))_s)\big|^2\,ds} \\
&\le \sqrt{T-t_0}\,\sqrt{E\int_{t_0}^T (C^*)^2\big(1 + s + |\mathcal{Y}(V)_s| + |L(\mathcal{M}(V))_s|\big)^2\,ds} \\
&\le C^*\sqrt{T-t_0}\Bigg[\sqrt{\int_{t_0}^T (1+s)^2\,ds} + \sqrt{E\int_{t_0}^T |\mathcal{Y}(V)_s|^2\,ds} + \sqrt{E\int_{t_0}^T |L(\mathcal{M}(V))_s|^2\,ds}\Bigg] \\
&\le K_1 + K_2\|\mathcal{Y}(V)\|_{\mathcal{C}[t_0,T]} + K_3\|L(\mathcal{M}(V))\|_{\mathcal{H}^2[t_0,T]} \\
&\le \widetilde{K}_1 + \widetilde{K}_2\sqrt{E(|\xi|^2)} + \widetilde{K}_3\|V\|_{\mathcal{C}[t_0,T]},
\end{aligned}$$
where the constants $K_1, K_2, K_3, \widetilde{K}_1, \widetilde{K}_2, \widetilde{K}_3$ are independent of $V$, the operator $\mathcal{L}$ is well defined. Now let us look at the properties of the operator $\mathcal{L}$. If the interval $[t_0,T]$ is sufficiently small, then under the assumptions that $f$ is Lipschitz continuous and $L$ satisfies the Lipschitz condition, $\mathcal{L}$ becomes a contraction mapping. Let $X = \{V \mid V_{t_0} = V_0,\ V\in\mathcal{C}([t_0,T];\mathbb{R}^n)\}$; the operator $\mathcal{L}$ is defined on $X$ and takes values in $X$.

Lemma 2.3.5. There exists a constant $\delta$ such that if $|T - t_0| < \delta$, then the operator
$$\mathcal{L}: (X, \|\cdot\|_{\mathcal{C}[t_0,T]}) \to (X, \|\cdot\|_{\mathcal{C}[t_0,T]}), \qquad V_t \mapsto \mathcal{L}(V)_t,$$
admits a unique fixed point $V\in X$, i.e. $\mathcal{L}(V) = V$.
Proof. Notice that
$$\left\|\mathcal{L}(V) - \mathcal{L}(\widetilde{V})\right\|_{\mathcal{C}} = \sqrt{E\sup_{t_0\le t\le T}\Big|\int_{t_0}^t f(s,\mathcal{Y}(V)_s,L(\mathcal{M}(V))_s)\,ds - \int_{t_0}^t f\big(s,\mathcal{Y}(\widetilde{V})_s,L(\mathcal{M}(\widetilde{V}))_s\big)\,ds\Big|^2},$$
and the term inside the square root satisfies
$$\begin{aligned}
&\sup_{t_0\le t\le T}\Big|\int_{t_0}^t f(s,\mathcal{Y}(V)_s,L(\mathcal{M}(V))_s)\,ds - \int_{t_0}^t f\big(s,\mathcal{Y}(\widetilde{V})_s,L(\mathcal{M}(\widetilde{V}))_s\big)\,ds\Big|^2 \\
&\quad\le \sup_{t_0\le t\le T}\Big[\int_{t_0}^t \big|f(s,\mathcal{Y}(V)_s,L(\mathcal{M}(V))_s) - f\big(s,\mathcal{Y}(\widetilde{V})_s,L(\mathcal{M}(\widetilde{V}))_s\big)\big|\,ds\Big]^2 \\
&\quad\le C^2 \sup_{t_0\le t\le T}\Big[\int_{t_0}^t \big(|\mathcal{Y}(V)_s - \mathcal{Y}(\widetilde{V})_s| + |L(\mathcal{M}(V))_s - L(\mathcal{M}(\widetilde{V}))_s|\big)\,ds\Big]^2 \\
&\quad\le C^2 (T-t_0) \sup_{t_0\le t\le T}\int_{t_0}^t \big(|\mathcal{Y}(V)_s - \mathcal{Y}(\widetilde{V})_s| + |L(\mathcal{M}(V))_s - L(\mathcal{M}(\widetilde{V}))_s|\big)^2\,ds \\
&\quad\le 2C^2 (T-t_0)\Big[\int_{t_0}^T |\mathcal{Y}(V)_s - \mathcal{Y}(\widetilde{V})_s|^2\,ds + \int_{t_0}^T |L(\mathcal{M}(V))_s - L(\mathcal{M}(\widetilde{V}))_s|^2\,ds\Big].
\end{aligned}$$
Therefore
$$\begin{aligned}
\left\|\mathcal{L}(V) - \mathcal{L}(\widetilde{V})\right\|_{\mathcal{C}} &\le \sqrt{2C^2(T-t_0)(T-t_0)}\,\left\|\mathcal{Y}(V)-\mathcal{Y}(\widetilde{V})\right\|_{\mathcal{C}} + \sqrt{2C^2(T-t_0)}\,\left\|L(\mathcal{M}(V)) - L(\mathcal{M}(\widetilde{V}))\right\|_{\mathcal{H}^2} \\
&\le \sqrt{2C^2(T-t_0)}\,\big(3(T-t_0)\|V-\widetilde{V}\|_{\mathcal{C}} + K\|V-\widetilde{V}\|_{\mathcal{C}}\big) \\
&= \big(3(T-t_0)+K\big)\sqrt{2C^2(T-t_0)}\,\|V-\widetilde{V}\|_{\mathcal{C}},
\end{aligned}$$
where $K$ is the Lipschitz constant of $L$. Choose $T - t_0 < \delta$ such that
$$\big(3(T-t_0)+K\big)\sqrt{2C^2(T-t_0)} < 1;$$
then $\mathcal{L}$ is a contraction. The existence of a unique fixed point of the operator $\mathcal{L}$ is then ensured by the Banach fixed point theorem. $\square$
Rearranging the BSDE (2.3.1) into its integral form, we have
$$Y_t - \xi = \int_t^T f(s, Y_s, L(M)_s)\,ds + M_t - M_T.$$
Letting $M_t = E(\xi + V_T \mid \mathcal{F}_t)$ and $V_t = M_t - Y_t$, one obtains
$$V_t - V_{t_0} = \int_{t_0}^{t} f\left(s, \mathcal{Y}(V)_s, L(\mathcal{M}(V))_s\right)ds.$$
Thus a pair of solutions $(Y, M)$ of the BSDE (2.3.1) is equivalent to a solution of the functional integral equation
$$V_T - V_t = \int_t^T f\left(s, \mathcal{Y}(V)_s, L[\mathcal{M}(V)]_s\right)ds,$$
for $t\in[t_0,T]$. Next, we prove the existence and uniqueness of solutions of the BSDE (2.3.1) via its corresponding functional integral equation.

Theorem 2.3.6. Let $f$ be Lipschitz continuous with constant $C$ and let $L$ satisfy the Lipschitz condition with constant $K$. Choose $\delta > 0$ such that $(3\delta + K)\sqrt{2C^2\delta} < 1$. Then the BSDE (2.3.1) admits a unique pair of solutions $(Y, M)$ on $[t_0,T]$ with $0 < T - t_0 < \delta$.

Proof. For existence, by Lemma (2.3.5) there is a unique $V\in\mathcal{C}([t_0,T])$ such that
$$V_t = \int_{t_0}^{t} f\left(s, \mathcal{Y}(V)_s, L(\mathcal{M}(V))_s\right)ds + V_{t_0},$$
where $M_t = E(\xi + V_T \mid \mathcal{F}_t)$ and $Y_t = M_t - V_t$. It is clear that $Y_T = \xi$ and
$$Y_t - \xi = \int_t^T f(s, Y_s, L(M)_s)\,ds + M_t - M_T,$$
for all $t\in[t_0,T]$. Therefore $(Y, M)$ solves the backward stochastic differential equation (2.3.1). For uniqueness, suppose that $(Y^1, M^1)$ and $(Y^2, M^2)$ are two pairs of solutions of equation (2.3.1). From the discussion above, we see that $Y_t^i = E(\xi + V_T^i \mid \mathcal{F}_t) - V_t^i$ for $i = 1, 2$. On the other hand, we have shown that
$$V_t^i = \int_{t_0}^{t} f\left(s, \mathcal{Y}(V^i)_s, L(\mathcal{M}(V^i))_s\right)ds + V_{t_0}.$$
By Lemma (2.3.5), $V_t^1 = V_t^2$ for $t\in[t_0,T]$, and hence $Y^1 = Y^2$ over $[t_0,T]$. That completes the proof. $\square$

For a sufficiently small time interval, the existence and uniqueness of the solution to the BSDE has been shown here. The global solution is obtained by subdividing the interval $[0,T]$ into small sub-intervals under certain regularity conditions. The results on the global solution are available in Liang's paper [3], pages 18-24.
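The smallness condition $(3\delta + K)\sqrt{2C^2\delta} < 1$ is explicit, so for given constants one can compute a usable $\delta$ and the sub-intervals needed for the global construction. The sketch below is illustrative only (the constants $C = 1$ and $K = 2$ are made-up inputs): it finds the largest admissible $\delta$ by bisection, exploiting that the left-hand side is increasing in $\delta$, and then partitions $[0, T]$ into pieces shorter than $\delta$.

```python
def max_delta(C, K, tol=1e-10):
    """Largest delta with (3*delta + K) * sqrt(2*C^2*delta) < 1, by bisection."""
    g = lambda d: (3 * d + K) * (2 * C * C * d) ** 0.5  # increasing in d
    lo, hi = 0.0, 1.0
    while g(hi) < 1.0:              # bracket the threshold
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 1.0 else (lo, mid)
    return lo                       # g(lo) < 1 is maintained throughout

def partition(T, delta, margin=0.5):
    """Split [0, T] into equal sub-intervals of length at most margin*delta."""
    n = int(T / (margin * delta)) + 1
    return [min(i * T / n, T) for i in range(n + 1)]

d = max_delta(C=1.0, K=2.0)
assert (3 * d + 2.0) * (2 * d) ** 0.5 < 1.0                      # admissible
assert (3 * (d + 1e-6) + 2.0) * (2 * (d + 1e-6)) ** 0.5 > 1.0    # nearly maximal
pts = partition(T=1.0, delta=d)
assert pts[0] == 0.0 and pts[-1] == 1.0
assert all(b - a < d for a, b in zip(pts, pts[1:]))  # each piece shorter than delta
```

Solving the BSDE backwards on each sub-interval in turn, with the terminal value of one piece serving as the data for the previous one, is the pattern of the global construction referred to above.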
Chapter 3

COMPARISON THEOREM

We first set out the definitions related to this section. In Shi's paper [11] and Liang's paper [3], regularity conditions such as the local-in-time property, the differential property and Lipschitz continuity are assumed in order to establish the global solution and the comparison theorem. However, most of them are not illustrated with examples and hence may seem somewhat unnatural. In our report we clearly re-define all of these concepts, and some corresponding examples are provided to motivate them.
Definition 3.0.7 (local condition). Let
$$L: \mathcal{M}^2([t_0,T];\mathbb{R}^n) \to \mathcal{H}^2([t_0,T];\mathbb{R}^{n\times d})$$
be an operator. If for any $M^1, M^2 \in \mathcal{M}^2([t_0,T];\mathbb{R}^n)$ such that $M_t^1 = M_t^2$ over $[t_2,t_1]$, where $t_0\le t_2\le t_1\le T$, we have $L(M^1)_t = L(M^2)_t$ for $t\in[t_2,t_1]$, then $L$ is said to satisfy the local condition.
Once we have the definition of the local condition, the following can be introduced naturally. It allows us to define the "restriction" of the operator $L$ to a small interval, so that local properties of $L$ may be discussed further.
Definition 3.0.8. Let
$$L: \mathcal{M}^2([t_0,T];\mathbb{R}^n) \to \mathcal{H}^2([t_0,T];\mathbb{R}^{n\times d})$$
be an operator satisfying the local condition. For any $[t_2,t_1]\subset[t_0,T]$, define
$$L_{[t_2,t_1]}: \mathcal{M}^2([t_2,t_1];\mathbb{R}^n) \to \mathcal{H}^2([t_2,t_1];\mathbb{R}^{n\times d}), \qquad M_t \mapsto L_{[t_2,t_1]}(M)_t = L(\widetilde{M})_t,$$
where
$$\widetilde{M}_t = \begin{cases} E(M_{t_1} \mid \mathcal{F}_t), & t \le t_1; \\ M_{t_1}, & t \ge t_1. \end{cases} \tag{3.0.1}$$
In the above definition, $\widetilde{M}_t$ defined by (3.0.1) acts as a representative martingale of the class of all martingales in $\mathcal{M}^2([t_0,T];\mathbb{R}^n)$ that agree with $M$ on $[t_2,t_1]$. Now another important definition to be discussed here is the homogeneous property.
Definition 3.0.9 (homogeneous property). Let
$$L: \mathcal{M}^2([t_0,T];\mathbb{R}^n) \to \mathcal{H}^2([t_0,T];\mathbb{R}^{n\times d})$$
satisfy the local condition. If for any $[t_2,t_1]\subset[t_0,T]$, any bounded $\mathcal{F}_{t_2}$-measurable random variable $A$, and any $M\in\mathcal{M}^2([t_2,t_1];\mathbb{R}^n)$, we have
$$L_{[t_2,t_1]}(AM)_t = A\,L_{[t_2,t_1]}(M)_t, \quad t\in[t_2,t_1],$$
then $L$ is said to have the homogeneous property.
The above definitions are not artificial; indeed, they are motivated by the example below. It provides the most interesting examples of $L$ in applications, which are variations of the classical example considered in the literature.
Example 3.0.10. Suppose $(\Omega,\mathcal{F},P)$ is a complete probability space and $(\mathcal{F}_t)_{t\ge0}$ is the Brownian filtration generated by a $d$-dimensional Brownian motion $W=(W^1,\dots,W^d)$. Any $M=(M^1,\dots,M^n)\in\mathcal{M}^2([t_0,T];\mathbb{R}^n)$ has a representation
$$M_t = M_{t_0} + \int_{t_0}^{t} Z_s\,dW_s,$$
i.e. componentwise
$$M_t^i = M_{t_0}^i + \sum_{j=1}^{d}\int_{t_0}^{t} Z_s^{i,j}\,dW_s^j, \quad i=1,\dots,n,$$
with a unique $Z=(Z^1,\dots,Z^n)^{T}$, where $Z^i = (Z^{i1},\dots,Z^{id})$ for $i=1,\dots,n$, and
$$d\langle M\rangle_t = \sum_{i=1}^{n} d\langle M^i\rangle_t.$$
Define the operator $L$ by
$$L: \mathcal{M}^2([t_0,T];\mathbb{R}^n) \to \mathcal{H}^2([t_0,T];\mathbb{R}^{n\times d}), \qquad M_t \mapsto L(M)_t = A_t Z_t,$$
where $A_t = (a_t^{ij})_{t\in[t_0,T]}$ is an $n\times n$ matrix process for which there is a constant $K>0$ such that $\|A_t\|_{\mathrm{norm}} \le K$ for any $t\in[t_0,T]$. Here
$$\|A_t\|_{\mathrm{norm}} = \sup_{\|x\|\le 1}\frac{\|A_t x\|}{\|x\|}, \qquad \text{and for } x\in\mathbb{R}^{d\times n},\ \|x\|^2 = \sum_{i,j} x_{ij}^2.$$
Then we claim that $L$ satisfies the local condition and has the homogeneous property. First we show the local condition of $L$. For any $M^1, M^2\in\mathcal{M}^2([t_0,T];\mathbb{R}^n)$ with $M_t^1 = M_t^2$ for $t\in[t_2,t_1]$, where $t_0\le t_2\le t_1\le T$, according to the martingale representation theorem there exist $Z^1, Z^2\in\mathcal{H}^2([t_2,T];\mathbb{R}^{n\times d})$ such that
$$M_t^i = M_{t_0}^i + \int_{t_0}^{t} Z_s^i\,dW_s, \quad i = 1,2,\ t_2\le t\le T.$$
Because $M_t^1 = M_t^2$ for $t\in[t_2,t_1]$, by the uniqueness part of the
martingale representation theorem, one obtains $Z_t^1 = Z_t^2$ for $t\in[t_2,t_1]$. Hence
$$L(M^1)_t = A_t Z_t^1 = A_t Z_t^2 = L(M^2)_t,$$
which indicates that $L$ satisfies the local condition. For the homogeneous property, note that for any bounded $A\in\mathcal{F}_{t_2}$ and $M\in\mathcal{M}^2([t_2,t_1];\mathbb{R}^n)$, we have $E(AM_s\mid\mathcal{F}_t) = A\,E(M_s\mid\mathcal{F}_t)$ for $t_2\le t\le s\le t_1$; therefore $AM\in\mathcal{M}^2([t_2,t_1];\mathbb{R}^n)$. Furthermore,
$$AM_t = AM_{t_2} + \int_{t_2}^{t} AZ_s\,dW_s,$$
which implies that
$$L_{[t_2,t_1]}(AM)_t = AZ_t = A\,L_{[t_2,t_1]}(M)_t.$$
Definition 3.0.11. Suppose $L$ satisfies the local condition,
$$L: \mathcal{M}^2([t_0,T];\mathbb{R}^n) \to \mathcal{H}^2([t_0,T];\mathbb{R}^{n\times d}).$$
Then $L$ satisfies the K-Lipschitz condition if there is a positive constant $K$ such that for any $t_0\le t_2\le t_1\le T$ and any $M^1, M^2\in\mathcal{M}^2([t_2,t_1];\mathbb{R}^n)$,
$$K\,E\big[\langle M^1 - M^2\rangle_{t_1} - \langle M^1 - M^2\rangle_{t_2}\big] \ge E\int_{t_2}^{t_1} \big|L_{[t_2,t_1]}(M^1)_s - L_{[t_2,t_1]}(M^2)_s\big|^2\,ds.$$
Example 3.0.12. Suppose we have the same setting as in Example (3.0.10):
$$L: \mathcal{M}^2([t_0,T];\mathbb{R}^n) \to \mathcal{H}^2([t_0,T];\mathbb{R}^{n\times d}), \qquad M_t \mapsto L(M)_t = A_t Z_t.$$
Then $L$ satisfies the K-Lipschitz condition. From the fact that $L(M) - L(N) = L(M-N)$, we only need to show that for some $\widetilde{K}$,
$$E\int_{t_2}^{t_1} \big|L_{[t_2,t_1]}(M)_t\big|^2\,dt \le \widetilde{K}\,E\int_{t_2}^{t_1} d\langle M\rangle_t.$$
Notice that
$$\begin{aligned}
E\int_{t_2}^{t_1} \big|L_{[t_2,t_1]}(M)_t\big|^2\,dt &= E\int_{t_2}^{t_1} |A_t Z_t|^2\,dt \\
&\le K^2\,E\int_{t_2}^{t_1} \sum_{i,j} \big(Z_t^{ij}\big)^2\,dt = K^2\,E\int_{t_2}^{t_1} \sum_{i=1}^{n} \big|Z_t^i\big|^2\,dt \\
&= K^2 \sum_{i=1}^{n} E\int_{t_2}^{t_1} d\langle M^i\rangle_t = K^2 \sum_{i=1}^{n} E\big[\langle M^i\rangle_{t_1} - \langle M^i\rangle_{t_2}\big] \\
&= K^2\,E\big[\langle M\rangle_{t_1} - \langle M\rangle_{t_2}\big],
\end{aligned}$$
so in this case $L$ satisfies the K-Lipschitz condition, with constant $K^2$, the squared bound on the operator norm of $A_t$. It follows that
$$L: \mathcal{M}^2([0,T];\mathbb{R}^n) \to \mathcal{H}^2([0,T];\mathbb{R}^{n\times d}), \qquad M \mapsto L(M) = Z,$$
satisfies the K-Lipschitz condition as well, since it is the special case of Example (3.0.12) in which $A_t$ is the identity matrix.
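The key pointwise step in the calculation above is the linear-algebra bound $|A_t Z_t|^2 \le \|A_t\|_{\mathrm{norm}}^2\,|Z_t|^2$, where $|\cdot|$ is the Frobenius norm of the matrix $Z_t$. The sketch below checks this bound numerically; the random matrices are illustrative stand-ins for a single time-slice of the processes $A_t$ and $Z_t$, and the operator norm is estimated by power iteration on $A^{T}A$.

```python
import random

random.seed(1)

def mat_vec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def op_norm(A, iters=200):
    """Spectral norm of a square matrix A via power iteration on A^T A."""
    n = len(A[0])
    At = [list(col) for col in zip(*A)]
    x = [random.random() + 0.1 for _ in range(n)]
    for _ in range(iters):
        y = mat_vec(At, mat_vec(A, x))
        s = sum(v * v for v in y) ** 0.5
        x = [v / s for v in y]
    return sum(v * v for v in mat_vec(A, x)) ** 0.5  # ||A x|| with ||x|| = 1

def frob_sq(Z):
    return sum(v * v for row in Z for v in row)

# Random stand-ins: A is n x n (like A_t), Z is n x d (like Z_t).
n, d = 3, 4
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
Z = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
K = op_norm(A)
AZ = [mat_vec(A, [Z[i][j] for i in range(n)]) for j in range(d)]  # columns of A Z
# Pointwise bound used in Example 3.0.12: |A Z|_F^2 <= K^2 |Z|_F^2.
assert sum(v * v for col in AZ for v in col) <= K * K * frob_sq(Z) + 1e-9
```

Integrating this pointwise inequality in $t$ and taking expectations is exactly what turns the matrix bound into the K-Lipschitz estimate of the example.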
Lemma 3.0.13. If $L$ satisfies the local condition, then the following two statements are equivalent:

1. There is a positive constant $K_1$ such that for any $t\in[t_0,T]$ and any $M^1, M^2\in\mathcal{M}^2([t_0,T];\mathbb{R}^d)$,
$$E\int_t^T \big|L(M^1)_s - L(M^2)_s\big|^2\,ds \le K_1\,E\big[\langle M^1 - M^2\rangle_T - \langle M^1 - M^2\rangle_t\big].$$
2. $L$ satisfies the K-Lipschitz condition.
Proof. Let us call the first condition the $K_1$-condition for simplicity. We only need to prove that the $K_1$-condition implies the K-Lipschitz condition, since the $K_1$-condition is easily deduced from the K-Lipschitz condition. Suppose that the K-Lipschitz condition fails. Then for every $k = 1, 2, \dots$ there exist an associated interval $[t_2^k, t_1^k]$ and martingales $M_1^k, M_2^k$ such that
$$E\int_{t_2^k}^{t_1^k} \big|L_{[t_2^k,t_1^k]}(M_1^k)_s - L_{[t_2^k,t_1^k]}(M_2^k)_s\big|^2\,ds > k\,E\big[\langle M_1^k - M_2^k\rangle_{t_1^k} - \langle M_1^k - M_2^k\rangle_{t_2^k}\big].$$
Let
$$\widetilde{M}_1^k = \begin{cases} M_{1,t_1^k}^k, & t\ge t_1^k; \\ M_{1,t}^k, & t < t_1^k, \end{cases} \qquad \widetilde{M}_2^k = \begin{cases} M_{2,t_1^k}^k, & t\ge t_1^k; \\ M_{2,t}^k, & t < t_1^k. \end{cases}$$
Then
$$\begin{aligned}
E\int_{t_2^k}^{T} \big|L(\widetilde{M}_1^k)_s - L(\widetilde{M}_2^k)_s\big|^2\,ds &\ge E\int_{t_2^k}^{t_1^k} \big|L(\widetilde{M}_1^k)_s - L(\widetilde{M}_2^k)_s\big|^2\,ds \\
&= E\int_{t_2^k}^{t_1^k} \big|L_{[t_2^k,t_1^k]}(M_1^k)_s - L_{[t_2^k,t_1^k]}(M_2^k)_s\big|^2\,ds \\
&> k\,E\big[\langle M_1^k - M_2^k\rangle_{t_1^k} - \langle M_1^k - M_2^k\rangle_{t_2^k}\big] \\
&= k\,E\big[\langle \widetilde{M}_1^k - \widetilde{M}_2^k\rangle_{T} - \langle \widetilde{M}_1^k - \widetilde{M}_2^k\rangle_{t_2^k}\big].
\end{aligned}$$
This contradicts the $K_1$-condition. That completes the proof. $\square$
Our main contribution to the generalized comparison theorem proposed by Shi in his paper [11] is the following lemma, which fills a gap in his proof. Indeed, on page 25, line 4 of that paper, the inequality
$$E\Big[\int_t^T 1_{(\widehat{Y}_s>0)}\,d\langle \widehat{M}\rangle_s\Big] \ge \alpha\,E\Big[\int_t^T 1_{(\widehat{Y}_s>0)}\,\big|L(M)_s - L(\overline{M})_s\big|^2\,ds\Big],$$
where $\widehat{M} = M - \overline{M}$, cannot be deduced simply from the assumption
$$E\Big[\int_t^T d\langle \widehat{M}\rangle_s\Big] \ge \alpha\,E\Big[\int_t^T \big|L(M)_s - L(\overline{M})_s\big|^2\,ds\Big].$$
Lemma 3.0.14. If $L$ satisfies the local condition, the homogeneous property and the K-Lipschitz condition, then for any nonnegative predictable process $Y$ taking values in $[0,1]$,
$$K\,E\Big[\int_{t_2}^{t_1} Y_s\,d\langle M^1 - M^2\rangle_s\Big] \ge E\int_{t_2}^{t_1} Y_s^2\,\big|L_{[t_2,t_1]}(M^1)_s - L_{[t_2,t_1]}(M^2)_s\big|^2\,ds.$$

Proof. First let $Y_s = A\,1_{[t_2,t_1]}(s)$, $t_2\le s\le t_1$, with $A$ an $\mathcal{F}_{t_2}$-measurable random variable, $0\le A\le 1$. Then $YM^i\in\mathcal{M}^2([t_2,t_1];\mathbb{R}^d)$, and
$$d\langle Y(M^1 - M^2)\rangle_s = Y_s^2\,d\langle M^1 - M^2\rangle_s \le Y_s\,d\langle M^1 - M^2\rangle_s, \qquad L_{[t_2,t_1]}(YM^i)_s = Y_s\,L_{[t_2,t_1]}(M^i)_s.$$
Therefore,
$$E\int_{t_2}^{t_1} Y_s\,d\langle M^1 - M^2\rangle_s \ge E\int_{t_2}^{t_1} d\langle YM^1 - YM^2\rangle_s \ge \frac{1}{K}\,E\int_{t_2}^{t_1} \big|L(YM^1)_s - L(YM^2)_s\big|^2\,ds = \frac{1}{K}\,E\int_{t_2}^{t_1} Y_s^2\,\big|L(M^1)_s - L(M^2)_s\big|^2\,ds.$$
Now let $Y$ be a predictable process with values in $[0,1]$, and let $\{\varphi^n\}$ be a sequence of simple processes converging uniformly to $Y$, $\varphi^n_s = \sum_i Y_{s_{i-1}^n}\,1_{[s_{i-1}^n, s_i^n)}(s)$, where $t_2 < s_0^n < s_1^n < \cdots < s_k^n = t_1$. Then
$$E\int_{t_2}^{t_1} \varphi_s^n\,d\langle M^1 - M^2\rangle_s = \sum_i E\int_{s_{i-1}^n}^{s_i^n} Y_{s_{i-1}^n}\,d\langle M^1 - M^2\rangle_s \ge \frac{1}{K}\sum_i E\int_{s_{i-1}^n}^{s_i^n} Y_{s_{i-1}^n}^2\,\big|L(M^1)_s - L(M^2)_s\big|^2\,ds = \frac{1}{K}\,E\int_{t_2}^{t_1} (\varphi_s^n)^2\,\big|L(M^1)_s - L(M^2)_s\big|^2\,ds.$$
Letting $n\to\infty$,
$$E\int_{t_2}^{t_1} Y_s\,d\langle M^1 - M^2\rangle_s \ge \frac{1}{K}\,E\int_{t_2}^{t_1} Y_s^2\,\big|L(M^1)_s - L(M^2)_s\big|^2\,ds. \qquad \square$$
Using Lemma (3.0.14), we are able to prove the local comparison theorem. The proof is analogous to Shi's proof [11].
Theorem 3.0.15 (Comparison Theorem). Let $(\Omega,\mathcal{F},P)$ be a complete probability space such that $(\mathcal{F}_t)_{t\ge0}$ is continuous and all martingales on $(\Omega,\mathcal{F},\mathcal{F}_t,P)$ are continuous. Consider the following BSDEs:
$$dY_t^1 = -f^1(t, Y_t^1, L(M^1)_t)\,dt + dM_t^1; \qquad Y_T^1 = \xi^1, \tag{3.0.2}$$
and
$$dY_t^2 = -f^2(t, Y_t^2, L(M^2)_t)\,dt + dM_t^2; \qquad Y_T^2 = \xi^2, \tag{3.0.3}$$
where $f^1$ and $f^2$ are Lipschitz continuous and
$$L: \mathcal{M}^2([t_0,T];\mathbb{R}^d) \to \mathcal{H}^2([t_0,T];\mathbb{R}^{n\times d})$$
satisfies the local condition, the homogeneous property and the K-Lipschitz condition. Let $(Y^1, M^1)$ and $(Y^2, M^2)\in\mathcal{C}^2([t_0,T];\mathbb{R}^d)\times\mathcal{M}^2([t_0,T];\mathbb{R}^d)$ be the unique adapted solutions of (3.0.2) and (3.0.3) respectively, and let $T - t_0 < \delta$, where $\delta$ is the constant determined in Lemma (2.3.5) that ensures the existence and uniqueness of local solutions. If $\xi^1 \le \xi^2$ and $f^1(t, Y_t^2, L(M^2)_t) \le f^2(t, Y_t^2, L(M^2)_t)$ a.s., then $Y_t^1 \le Y_t^2$ a.s. for all $t_0\le t\le T$.

Proof (analogous to Shi's proof, pages 23 to 25 in [11]). Denote $\widehat{Y} = Y^1 - Y^2$, $\widehat{\xi} = \xi^1 - \xi^2$ and $\widehat{M} = M^1 - M^2$. From (3.0.2) and (3.0.3), one obtains
$$\widehat{Y}_t = \widehat{\xi} + \int_t^T \big[f^1(s,Y_s^1,L(M^1)_s) - f^2(s,Y_s^2,L(M^2)_s)\big]\,ds - \int_t^T d\widehat{M}_s,$$
and
$$\widehat{Y}_t = \widehat{Y}_{t_0} - \int_{t_0}^{t} \big[f^1(s,Y_s^1,L(M^1)_s) - f^2(s,Y_s^2,L(M^2)_s)\big]\,ds + \int_{t_0}^{t} d\widehat{M}_s.$$
Therefore $\widehat{Y}_t$ is a continuous semimartingale. For $t\in[t_0,T]$,
$$(\widehat{Y}_t^+)^2 = (\widehat{\xi}^+)^2 + 2\int_t^T \widehat{Y}_s^+\big[f^1(s,Y_s^1,L(M^1)_s) - f^2(s,Y_s^2,L(M^2)_s)\big]\,ds - 2\int_t^T \widehat{Y}_s^+\,d\widehat{M}_s - \int_t^T 1_{(\widehat{Y}_s>0)}\,d\langle\widehat{M}\rangle_s.$$
Rearranging the equation above, we have
$$(\widehat{Y}_t^+)^2 + \int_t^T 1_{(\widehat{Y}_s>0)}\,d\langle\widehat{M}\rangle_s = 2\int_t^T \widehat{Y}_s^+\big[f^1(s,Y_s^1,L(M^1)_s) - f^2(s,Y_s^2,L(M^2)_s)\big]\,ds + (\widehat{\xi}^+)^2 - 2\int_t^T \widehat{Y}_s^+\,d\widehat{M}_s. \tag{3.0.4}$$
Next we show that $\int_t^T \widehat{Y}_s^+\,d\widehat{M}_s$ is a martingale. By the Burkholder-Davis-Gundy inequality, we have
$$\begin{aligned}
E\Big[\sup_{t_0\le t\le T}\Big|\int_{t_0}^{t} \widehat{Y}_s^+\,d\widehat{M}_s\Big|\Big] &\le C\,E\Big[\Big(\int_{t_0}^{T} |\widehat{Y}_s^+|^2\,d\langle\widehat{M}\rangle_s\Big)^{1/2}\Big] \\
&\le C\,E\Big[\sup_{t_0\le s\le T}\widehat{Y}_s^+\Big(\int_{t_0}^{T} d\langle\widehat{M}\rangle_s\Big)^{1/2}\Big] \\
&\le C\,E\Big[\sup_{t_0\le s\le T}\widehat{Y}_s^+\,\langle\widehat{M}\rangle_T^{1/2}\Big] \\
&\le \frac{C}{2}\Big\{E\Big[\sup_{t_0\le s\le T}|\widehat{Y}_s^+|^2\Big] + E\big[\langle\widehat{M}\rangle_T\big]\Big\} \\
&\le \frac{C}{2}\Big\{2E\Big[\sup_{t_0\le s\le T}|Y_s^1|^2\Big] + 2E\Big[\sup_{t_0\le s\le T}|Y_s^2|^2\Big] + E\big[\langle\widehat{M}\rangle_T\big]\Big\} < \infty,
\end{aligned}$$
where $C$ is a positive constant. Hence $\int_t^T \widehat{Y}_s^+\,d\widehat{M}_s$ is a martingale. Taking expectations
on both sides of (3.0.4), then $E\int_t^T \widehat{Y}_s^+\,d\widehat{M}_s = 0$; since $\xi^1\le\xi^2$, we also have $(\widehat{\xi}^+)^2 = 0$, and (3.0.4) becomes
$$\begin{aligned}
E\big[(\widehat{Y}_t^+)^2\big] + E\int_t^T 1_{(\widehat{Y}_s>0)}\,d\langle\widehat{M}\rangle_s &= 2E\int_t^T \widehat{Y}_s^+\big[f^1(s,Y_s^1,L(M^1)_s) - f^2(s,Y_s^2,L(M^2)_s)\big]\,ds \\
&\le E\int_t^T 2KC_2^2\,(\widehat{Y}_s^+)^2\,ds + \frac{1}{2KC_2^2}\,E\int_t^T 1_{(\widehat{Y}_s>0)}\,\big|f^1(s,Y_s^1,L(M^1)_s) - f^2(s,Y_s^2,L(M^2)_s)\big|^2\,ds \\
&\le E\int_t^T 2KC_2^2\,(\widehat{Y}_s^+)^2\,ds + \frac{1}{2KC_2^2}\,E\int_t^T 1_{(\widehat{Y}_s>0)}\,\big[C_2\big(|\widehat{Y}_s| + |L(M^1)_s - L(M^2)_s|\big)\big]^2\,ds \\
&\le E\int_t^T 2KC_2^2\,(\widehat{Y}_s^+)^2\,ds + \frac{1}{K}\,E\int_t^T 1_{(\widehat{Y}_s>0)}\,\big[|\widehat{Y}_s|^2 + |L(M^1)_s - L(M^2)_s|^2\big]\,ds,
\end{aligned} \tag{3.0.5}$$
where $C_2$ is the Lipschitz constant of the functions $f^1, f^2$. Since $\widehat{Y}$ is continuous and adapted, it is predictable, and hence $1_{\{\widehat{Y}_s>0\}}$ is a predictable process.
Writing $g_s(\omega) = 1_{\{\widehat{Y}_s>0\}}$ and applying Lemma (3.0.14) in the form
$$E\int_t^T g_s^2\,\big|L(M^1)_s - L(M^2)_s\big|^2\,ds \le K\,E\Big[\int_t^T g_s\,d\langle M^1 - M^2\rangle_s\Big],$$
one obtains
$$E\int_t^T 1_{(\widehat{Y}_s>0)}\,d\langle\widehat{M}\rangle_s \ge \frac{1}{K}\,E\int_t^T 1_{(\widehat{Y}_s>0)}\,\big|L(M^1)_s - L(M^2)_s\big|^2\,ds. \tag{3.0.6}$$
In Shi's paper, inequality (3.0.6) cannot be obtained simply from the assumption
$$E\int_t^T d\langle\widehat{M}\rangle_s \ge \frac{1}{K}\,E\int_t^T \big|L(M^1)_s - L(M^2)_s\big|^2\,ds.$$
Substituting (3.0.6) into (3.0.5) for the term $\frac{1}{K}E\int_t^T 1_{(\widehat{Y}_s>0)}\,|L(M^1)_s - L(M^2)_s|^2\,ds$, we
obtain
$$E\big[(\widehat{Y}_t^+)^2\big] \le E\int_t^T \Big[2KC_2^2\,(\widehat{Y}_s^+)^2 + \frac{1}{K}\,1_{(\widehat{Y}_s>0)}|\widehat{Y}_s|^2\Big]\,ds = E\int_t^T \Big(2KC_2^2 + \frac{1}{K}\Big)(\widehat{Y}_s^+)^2\,ds \le \Big(2KC_2^2 + \frac{1}{K}\Big)\int_t^T E\big[(\widehat{Y}_s^+)^2\big]\,ds.$$
Let $g(u) = E\big[(\widehat{Y}_{T-u}^+)^2\big]$ for $0\le u\le T-t_0$. Then the above inequality reads
$$g(u) \le \Big(2KC_2^2 + \frac{1}{K}\Big)\int_0^u g(r)\,dr,$$
and Gronwall's inequality ensures that $g \equiv 0$. Hence $\widehat{Y}_t^+ = 0$ for $t_0\le t\le T$, which is equivalent to saying that $Y_t^1 \le Y_t^2$. That completes the proof. $\square$
Chapter 4

CONCLUSIONS

We are interested in solving the following BSDE:
$$dY_t = -f(t, Y_t, L(M)_t)\,dt + dM_t,$$
with terminal condition $Y_T = \xi$ on a filtered probability space $(\Omega,\mathcal{F},\mathcal{F}_t,P)$, where $L$ is a deterministic mapping which sends $M$ to an adapted process $L(M)$, and $M$ is a martingale to be determined. In Liang, Lyons and Qian's paper [3], a new approach was introduced to reformulate this differential equation as a functional differential equation. In our report, we illustrate this approach by considering the BSDE $dY_t = -f(t,Y_t,Z_t)\,dt + dM_t$ with terminal condition $Y_T = \xi$, where $Z$ satisfies $dM_t = Z_t\,dW_t$. We also investigate the existence and uniqueness of local solutions of BSDEs of the form $dY_t = -f(t,Y_t,L(M)_t)\,dt + dM_t$, where $L$ is a prescribed mapping satisfying certain regularity conditions. Furthermore, we revisited the corresponding generalized comparison theorem due to Shi, H.Z. and Qian, Z.M. [11]. Our contribution in this respect is the following lemma, which fills a gap in their proof. If $L$ satisfies the local, homogeneous and K-Lipschitz conditions, then for any
nonnegative predictable process Y ˆ
t1
KE[
⟨
Ys d M − M 1
t2
2
ˆ
⟩ s
]≥E
t1
Ys2 | L[t2 ,t1 ] (M 1 )s − L[t2 ,t1 ] (M 2 )s |2 ds,
t2
where the K-Lipschitz condition means that there is a positive constant K such that for any 0 ≤ t_2 ≤ t_1 ≤ T and any M^1, M^2 ∈ M^2([t_2, t_1], R^n),
\[
K\,E\bigl[\langle M^1 - M^2\rangle_{t_1} - \langle M^1 - M^2\rangle_{t_2}\bigr]
\ge E\int_{t_2}^{t_1} \bigl|L(M^1)_s - L(M^2)_s\bigr|^2\,ds.
\]
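As a simple sanity check (our own illustration, for the Brownian setting mentioned above in which dM_t = Z_t dW_t and L(M) = Z), the quadratic variation of a Brownian stochastic integral gives
```latex
K\,E\bigl[\langle M^1 - M^2\rangle_{t_1} - \langle M^1 - M^2\rangle_{t_2}\bigr]
= K\,E\int_{t_2}^{t_1} |Z^1_s - Z^2_s|^2\,ds
= K\,E\int_{t_2}^{t_1} \bigl|L(M^1)_s - L(M^2)_s\bigr|^2\,ds,
```
so in this case the K-Lipschitz condition holds with equality for K = 1.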
In addition, we established a local comparison theorem. Let (Ω, F, P) be a complete probability space, suppose the filtration (F_t)_{t≥0} is continuous, and suppose all martingales on (Ω, F, F_t, P) are continuous. Consider the following BSDEs:
\[
dY_t^1 = -f^1(t, Y_t^1, L(M^1)_t)\,dt + dM_t^1, \qquad Y_T^1 = \xi^1, \tag{4.0.1}
\]
\[
dY_t^2 = -f^2(t, Y_t^2, L(M^2)_t)\,dt + dM_t^2, \qquad Y_T^2 = \xi^2, \tag{4.0.2}
\]
where f^1 and f^2 are Lipschitz continuous, and
\[
L : \mathcal M([t_0, T]; R^d) \to H^2([t_0, T]; R^{n\times d})
\]
satisfies the local, homogeneous and K-Lipschitz conditions. Let (Y^1, M^1) and (Y^2, M^2) ∈ C^2([t_0, T]; R^d) × M^2([t_0, T]; R^d) be the unique adapted solutions of (4.0.1) and (4.0.2) respectively, and suppose T − t_0 < δ, where δ is the constant determined by Lemma (2.9) ensuring the existence and uniqueness of local solutions. If ξ^1 ≤ ξ^2 and f^1(t, Y_t^2, L(M^2)_t) ≤ f^2(t, Y_t^2, L(M^2)_t) a.s., then Y_t^1 ≤ Y_t^2 a.s. for all t_0 ≤ t ≤ T.

The functional differential equation approach has a wide range of applications. It can be applied to the theory of forward-backward stochastic differential equations as well as to backward stochastic differential equations themselves. In finance, it has intimate connections with European option pricing and stochastic control problems. We believe that further study of this approach will reveal more properties.
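As a minimal numerical illustration of the functional-differential-equation viewpoint, consider the degenerate case in which ξ is deterministic and f = f(t, y): the martingale part vanishes and the BSDE reduces to the ordinary integral equation Y_t = ξ + ∫_t^T f(s, Y_s) ds, which can be solved by Picard iteration exactly as in the contraction-mapping argument for local solutions. The sketch below (the function name, grid and discretization are our own, not from [3]) iterates this map for f(t, y) = r y with ξ = 1, whose exact solution is Y_t = e^{r(T−t)}.

```python
import numpy as np

def picard_bsde(f, xi, T, n_steps=2000, n_iter=50):
    """Solve the deterministic equation Y_t = xi + int_t^T f(s, Y_s) ds
    by Picard iteration on a uniform grid, using the trapezoidal rule."""
    t = np.linspace(0.0, T, n_steps + 1)
    dt = T / n_steps
    Y = np.full_like(t, xi)               # initial guess Y^(0) = xi
    for _ in range(n_iter):
        g = f(t, Y)                        # integrand along current iterate
        # head[i] = int_0^{t_i} g(s) ds, so int_{t_i}^T g = head[-1] - head[i]
        head = np.concatenate(([0.0], np.cumsum((g[:-1] + g[1:]) * dt / 2)))
        Y = xi + (head[-1] - head)         # apply the fixed-point map
    return t, Y

if __name__ == "__main__":
    r, T = 0.5, 1.0
    t, Y = picard_bsde(lambda s, y: r * y, xi=1.0, T=T)
    print(abs(Y[0] - np.exp(r * T)))       # error vs exact Y_0 = e^{rT}
```

Since rT < 1 here, the map is a contraction in the supremum norm and the iterates converge geometrically, mirroring the small-interval condition T − t_0 < δ in the local existence result.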
Bibliography

[1] Pardoux, E. and Peng, S.G. (1990), Adapted solution of a backward stochastic differential equation, Systems and Control Letters, 14(1), 55-61.
[2] Qian, Z.M. and Ying, J.G., Martingale representations for diffusion processes and backward stochastic differential equations.
[3] Liang, G.C., Lyons, T. and Qian, Z.M. (2009), Backward stochastic dynamics on a filtered probability space.
[4] El Karoui, N., Peng, S.G. and Quenez, M.C. (1997), Backward stochastic differential equations in finance.
[5] Tang, S. and Li, X., Maximum principle for optimal control of distributed parameter stochastic systems with random jumps, Differential equations, dynamical systems, and control science, 152, 1994, 867-890.
[6] Barles, G., Buckdahn, R. and Pardoux, E., Backward stochastic differential equations and integral-partial differential equations, Stochastics and Stochastics Reports, 60(1-2), 1997, 57-83.
[7] Oksendal, B., Stochastic Differential Equations: An Introduction with Applications, Springer.
[8] Quenez, M.C., Stochastic control and BSDEs, Backward stochastic differential equations (Paris, 1995-1996), Pitman Res. Notes Math. Ser., 364, 1997, 83-100.
[9] Briand, P., Delyon, B., Hu, Y., Pardoux, E. and Stoica, L., Lp solutions of backward stochastic differential equations, Stochastic Processes and Their Applications, 108, 2003, 109-129.
[10] Chung, K.L., From Markov Processes to Brownian Motion, Springer.
[11] Shi, H.Z. and Qian, Z.M., Backward Stochastic Differential Equations in Finance, 2010.
[12] Hunt, G.A., Markov processes and potentials I, Illinois J. Math., 1, 1957, 44-93.
[13] Fukushima, M., Dirichlet Forms and Markov Processes, North-Holland Publishing Company, Amsterdam, Oxford, New York, 1980.
[14] Peng, S.G., A general stochastic maximum principle for optimal control problems, SIAM Journal on Control and Optimization, 28(4), 1990, 966-979.
[15] Ma, J. and Yong, J., Forward-backward stochastic differential equations and their applications, Lecture Notes in Mathematics, Springer-Verlag, Berlin, 1999.
[16] Yong, J. and Zhou, X.Y., Stochastic controls: Hamiltonian systems and HJB equations, Springer-Verlag, New York, 1999.
[17] Hu, Y. and Peng, S.G., Solution of forward-backward stochastic differential equations, Probability Theory and Related Fields, 103(2), 1995, 273-283.
[18] Kohlmann, M. and Zhou, X.Y., Relationship between backward stochastic differential equations and stochastic controls: a linear-quadratic approach, SIAM Journal on Control and Optimization, 38(5), 2000, 1392-1407.
[19] Pham, H., Continuous-time Stochastic Control and Optimization with Financial Applications, Springer.
[20] Lepeltier, J.P. and San Martin, J., Backward stochastic differential equations with continuous coefficient, Statistics and Probability Letters, 32(4), 1997, 425-430.
[21] Kobylanski, M., Backward stochastic differential equations and partial differential equations with quadratic growth, The Annals of Probability, 28(2), 2000, 558-602.
[22] Duffie, D. and Epstein, L., Stochastic differential utility, Econometrica, 60, 1992, 353-394.
[23] El Karoui, N., Hamadene, S. and Matoussi, A., BSDEs and applications, Indifference pricing: theory and applications, Princeton University Press, 2009, 267-320.
[24] El Karoui, N. and Mazliak, L. (editors), Backward stochastic differential equations (Paris, 1995-1996), Pitman Research Notes in Mathematics Series, 364, 1997.
[25] Briand, P. and Hu, Y., BSDE with quadratic growth and unbounded terminal value, Probability Theory and Related Fields, 136(4), 2006, 604-618.
[26] Borkar, V.S., Probability Theory: An Advanced Course, Springer-Verlag, New York, 1995.
[27] Cox, D.R. and Miller, H.D., The Theory of Stochastic Processes, Chapman & Hall, London, 1980.