Série Mathématiques

HAYRI KOREZLIOGLU
Two-parameter Gaussian Markov processes and their recursive linear filtering

Annales scientifiques de l'Université de Clermont-Ferrand 2, tome 67, série Mathématiques, no 17 (1979), p. 69-93.
<http://www.numdam.org/item?id=ASCFM_1979__67_17_69_0>

© Université de Clermont-Ferrand 2, 1979, all rights reserved.
Article digitized as part of the program « Numérisation de documents anciens mathématiques »
http://www.numdam.org/
TWO-PARAMETER GAUSSIAN MARKOV PROCESSES AND THEIR RECURSIVE LINEAR FILTERING
Hayri KOREZLIOGLU(*)
SUMMARY : A class of two-parameter Gaussian Markov processes is characterized and conditions are given for a process of this class to have a representation in terms of Wiener processes. When the signal is a Gaussian Markov process having a representation in terms of a two-parameter Wiener process and the noise is another Wiener process independent of the signal, and when the observation is defined as in the one-parameter Kalman filtering model, recursive linear filtering equations satisfied by the estimation of the signal are obtained by using the linear filtering method of Hilbert space-valued Gaussian processes.

INTRODUCTION
In this work, we develop and improve the short exposé [1] on the linear filtering of two-parameter Gaussian Markov processes.

In §1, we give the definition of a Markov property for two-parameter Gaussian processes and express conditions on the covariance functions of such processes having the defined Markov property in order that they may possess a representation in terms of Wiener processes. Shortly after the Saint-Flour seminars in 1978, Dr. D. NUALART kindly communicated to us his work in collaboration with M. SANZ on the characterization of Gaussian Markov fields [2]. We show, in §1, that our characterization is equivalent to theirs.

In §2, we consider the following "signal and observation" model. The signal X is a Gaussian Markov process, defined by

[formula not reproduced in the digitized text]

where B is a Wiener process and φ and G are non-random functions. The observation Y is given by

[formula not reproduced in the digitized text]

where W is a Wiener process independent of B and H is a non-random function. We regard the various processes appearing in this model as taking their values in an L²-space when indexed by their first parameter. Then, by using a result due to OUVRARD [3] on the linear filtering of Hilbert space-valued Gaussian processes, we obtain the horizontal filtering equation (in the first parameter) satisfied by the estimation X̂^h_{s,t}(t₀) of X_{s,t} in terms of the observations {Y_{u,v} : u ≤ s, v ≤ t₀}, with t ≤ t₀. By commuting the roles of the two parameters, we obtain the vertical filtering equation (in the second parameter) satisfied by the estimation X̂^v_{s,t}(s₀) of X_{s,t} in terms of the observations {Y_{u,v} : u ≤ s₀, v ≤ t}, with s ≤ s₀. Finally, for the case s₀ = s and t₀ = t, we give the expression of the second differential, in s and t, of X̂_{s,t} = X̂^h_{s,t}(t) = X̂^v_{s,t}(s) in terms of the innovation processes appearing in the horizontal and vertical filtering equations.

(*) École Nationale Supérieure des Télécommunications, 46 rue Barrault, 75634 PARIS CEDEX 13.
§0. NOTATIONS

The random processes considered here will be indexed by D = [0,S] × [0,T] ⊂ ℝ₊², where S and T are finite positive numbers (*).

For a second order centered process X = {X_{s,t} : (s,t) ∈ [0,S] × [0,T]}, defined on a probability space (Ω,A,P), H^X_{s,t} will denote the smallest Hilbert subspace of L²(Ω,A,P) generated by {X_{u,v} : u ≤ s, v ≤ t}, and F^X_{s,t} the smallest σ-subalgebra generated by {X_{u,v} : u ≤ s, v ≤ t} and all P-negligible sets of A.

(*) Throughout this work, the index set [0,S] × [0,T] can be replaced by ℝ₊ × ℝ₊ with a little precaution. In particular, this substitution would not bring any modification to the text of §1.
If Z ∈ L²(Ω,A,P) and if H is a Hilbert subspace of L²(Ω,A,P), we shall denote by (Z/H) the projection of Z onto H. If Y ∈ L²(Ω,A,P), we shall denote by (Z/Y) the projection of Z onto the subspace (of dimension 1) generated by Y.

For a random process X = {X_{s,t} : (s,t) ∈ [0,S] × [0,T]}, we extend the domain of the parameters to negative numbers by putting X_{s,t} = 0 when at least one of the parameters is negative. We denote by H^X_{S,t} ∨ H^X_{s,T} the smallest Hilbert space generated by H^X_{S,t} and H^X_{s,T}.

For two Hilbert spaces H₁ and H₂ such that H₂ ⊃ H₁, H₂ ⊖ H₁ will denote the orthogonal complement of H₁ in H₂.
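For the reader's convenience, recall the explicit form of the one-dimensional projection (Z/Y) for square-integrable variables with E(Y²) > 0:

```latex
% Orthogonal projection of Z onto the line spanned by Y in L^2(\Omega,A,P),
% assuming E(Y^2) > 0:
(Z/Y) \;=\; \frac{E(ZY)}{E(Y^2)}\, Y .
```

For jointly Gaussian centered variables this projection coincides with the conditional expectation of Z given Y, a standard property of Gaussian spaces (cf. [4]) that is used repeatedly in §1.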
§1. GAUSSIAN SIMPLE MARKOV PROCESSES

In this paragraph, X = {X_{s,t} : (s,t) ∈ [0,S] × [0,T]} will be a centered Gaussian process, defined on a given probability space (Ω,A,P).
DEFINITION 1 : X will be called a horizontal Markov process if

(1) for all (s,t) and u ≤ s, (X_{s,t}/H^X_{u,T}) = (X_{s,t}/X_{u,t}),

and a vertical Markov process if

(2) for all (s,t) and v ≤ t, (X_{s,t}/H^X_{S,v}) = (X_{s,t}/X_{s,v}).

X will be called a simple Markov process if it is both a horizontal and a vertical Markov process. We identify, here, random variables with their P-equivalence classes.

Conditions (1) and (2) are respectively equivalent to the following conditions (1)' and (2)':

[formulas not reproduced in the digitized text]
PROPOSITION 1 : If X is a simple Markov process, then for all (u,v) and (s,t) such that (u,v) ≤ (s,t),

(3) [formula not reproduced in the digitized text]

Proof : For (u,v) ≤ (s,t) we have, according to (1)' and (2)',

[formulas not reproduced in the digitized text]

where a and b are adequate constants. Therefore, condition (3) is satisfied.
The following corollary is an obvious consequence of the above proposition.

COROLLARY : If X is a simple Markov process, then for any increasing path C in [0,S] × [0,T], {X_{s,t} : (s,t) ∈ C} is a one-parameter Markov process with respect to the filtration {H^X_{s,T} ∨ H^X_{S,t}}, hence with respect to its natural filtration (**).
(**) This implication is proved by Pascal LEFORT in his current research work on the Markov property of two-parameter processes. In [1], we had defined a simple Markov process as a process satisfying conditions (1), (2) and (3).

Let K be the covariance function of X and let Φ be defined by

(4) [formula not reproduced in the digitized text]
Notice that we have

(5) [formula not reproduced in the digitized text]

Using this equation, one can show that conditions (1), (2) and (3) are equivalent to the following conditions (6), (7) and (8), respectively:

[formulas not reproduced in the digitized text]

Condition (8) implies that [formula not reproduced in the digitized text] and, according to the above corollary, if X is a simple Markov process, then for any increasing path C the restriction of Φ to C² is the Markov transition function of the process {X_{s,t} : (s,t) ∈ C}.

PROPOSITION 2 : For a simple Markov process X, the following orthogonality relation holds:

(9) [formula not reproduced in the digitized text]

This is equivalent to saying that the σ-algebras F^X_{S,t} and F^X_{s,T} are conditionally independent with respect to F^X_{s,t}; from this, the relation (9) follows. The equivalence of this relation with the conditional independence of F^X_{S,t} and F^X_{s,T} with respect to F^X_{s,t} is a known property of Gaussian spaces (cf. [4]).
PROPOSITION 3 : If X is a simple Markov process then, for (u,v) ≤ (s,t), the random variable

(11) [formula not reproduced in the digitized text]

is orthogonal to H^X_{u,T} ∨ H^X_{S,v}.

Proof : We only have to prove the orthogonality relation [formula not reproduced in the digitized text]. Notice that this last space has the following orthogonal decomposition:

[formula not reproduced in the digitized text]

Then [formula not reproduced in the digitized text], and it can easily be verified that each of the projections on the right-hand side is zero.
In [2], the Markov property of X was defined as follows: for all (u,v) and (s,t) such that (u,v) ≤ (s,t),

(12) [formula not reproduced in the digitized text]

where a, b and c are adequate constants.

With our convention of §0, extending the domain of the parameters to negative numbers, one can deduce that, if X satisfies condition (12), then it satisfies conditions (1) and (2); thus X is a simple Markov process. Conversely, if X is a simple Markov process then, according to Proposition 3, it satisfies condition (12). Therefore, conditions (1) and (2) together are equivalent to condition (12).
From now on, we suppose that X is a centered Gaussian simple Markov process. We make the following hypothesis on the covariance function K of X.

HYPOTHESIS H₁ : K is continuous on ([0,S] × [0,T])², K((s,t),(s,t)) is strictly positive for st > 0, K((s,0),(s,0)) is either identically null or strictly positive on ]0,S], and K((0,t),(0,t)) is either identically null or strictly positive on ]0,T].
Under this hypothesis, K((s,t),(u,v)) is strictly positive for all s, t, u, v such that K((s,t),(s,t)) K((u,v),(u,v)) > 0. In fact, any pair of different points (s,t) and (u,v) can be joined by a path consisting of at most one horizontal and one vertical line segment, on each of which K is a one-parameter covariance function. Therefore, the proof of the mentioned property reduces to that of the one-parameter case (cf. [4], p. 55).
Suppose now that the function Φ, defined by (4), verifies the following hypothesis.

HYPOTHESIS H₂ : There exists a strictly positive continuous function φ_{s,t}, defined on [0,S] × [0,T], such that

(13) [formula not reproduced in the digitized text]

for all (u,v) for which K((u,v),(u,v)) > 0. (We can always suppose φ_{0,0} = 1 by dividing φ by a constant, without changing (13); that is what we shall do.)

Notice that, if K is strictly positive on ([0,S] × [0,T])², the function φ defined by [formula not reproduced in the digitized text] verifies H₂. More generally, if, for (p,q) ≤ (s,t) in D, Φ((s,t),(p,q)) has a strictly positive continuous limit as (p,q) → (0,0), and if φ has a continuous and strictly positive extension to the entire domain [0,S] × [0,T], then this extension, denoted again by φ, verifies the hypothesis H₂. In fact, for (p,q) ≤ (u,v) ≤ (s,t) in D, we have

[formula not reproduced in the digitized text]

Therefore, for (u,v) ≤ (s,t),

[formula not reproduced in the digitized text]

where the limit is to be taken for (p,q) → (0,0) in D.
PROPOSITION 4 : Let M be defined by

(14) M_{s,t} = K((s,t),(s,t)) / φ²_{s,t} .

Then, for all ((s,t),(u,v)) ∈ ([0,S] × [0,T])²,

K((s,t),(u,v)) = φ_{s,t} φ_{u,v} M_{s∧u, t∧v} ,

where ∧ stands for the infimum.

Proof : The proof can be obtained by a direct verification of this equality for all possible configurations of the line segment joining (u,v) to (s,t).
Now, we can characterize the process X in terms of a Gaussian strong martingale.

PROPOSITION 5 : The process Z defined by Z = φ⁻¹ X is a (centered) Gaussian strong martingale with respect to the filtration {H^X_{s,T} ∨ H^X_{S,t} : (s,t) ∈ [0,S] × [0,T]}. Hence, Z has independent increments and E(Z_{s,t} Z_{u,v}) = M_{s∧u, t∧v}, where M is defined by (14). In particular,

U_{s,t} = Z_{s,t} - Z_{0,t} - Z_{s,0} + Z_{0,0}

defines a Gaussian strong martingale U with respect to the same filtration. Similarly, U'_s = Z_{s,0} - Z_{0,0} and U"_t = Z_{0,t} - Z_{0,0} define two Gaussian martingales U' and U", respectively, with respect to the filtrations {H^X_{s,0}} and {H^X_{0,t}}. The random variable X_{0,0} and the processes U, U' and U" are mutually independent.
Proof : We refer to [5] for the definition of a strong martingale. Let Z(](u,v),(s,t)]) be the increment of Z between (u,v) and (s,t), with (u,v) ≤ (s,t). In order to prove that Z is a strong martingale, we only have to prove that the projection of this increment onto H^X_{S,v} ∨ H^X_{u,T} is null. But φ_{s,t} Z(](u,v),(s,t)]) is the random variable given by (11), and Proposition 3 says that the projection of this random variable, and hence of Z(](u,v),(s,t)]), onto H^X_{S,v} ∨ H^X_{u,T} is zero.

The fact that U is a strong martingale is due to the equality of the increments of U to those of Z. As a consequence of Proposition 3, [formula not reproduced in the digitized text], and hence U' is a martingale with respect to {H^X_{s,0}}; a similar argument holds for U". The independence of X_{0,0}, U', U" and U is due to the fact that these quantities are increments of Z and that Z has independent increments.
We can say more about the representation of X in terms of martingales if the function M, defined by (14), verifies the following hypothesis.

HYPOTHESIS H₃ : There exist square-integrable functions G, G' and G" such that

(15) [formula not reproduced in the digitized text]

THEOREM 1 : Under hypotheses H₂ and H₃, there exist one-parameter Wiener processes B' = {B'_s : s ∈ [0,S]} and B" = {B"_t : t ∈ [0,T]} and a two-parameter Wiener process B = {B_{s,t} : (s,t) ∈ [0,S] × [0,T]} such that X_{0,0}, B', B" and B are mutually independent and

(16) [formula not reproduced in the digitized text]

Conversely, if G', G", G, B', B" and B are as above, then the process X defined by (16) is a centered Gaussian simple Markov process verifying hypotheses H₁, H₂ and H₃. In both cases, X has a modification with continuous trajectories.
Proof : Notice first that the right-hand side of (16), divided by φ_{s,t}, should represent Z_{s,t} defined in Proposition 5. Consider the martingale U of the same proposition. U is null on the coordinate axes and has independent increments. If G_{s,t} > 0 for almost all (s,t), then a Wiener measure B can be defined by dB_{s,t} = G⁻¹_{s,t} dU_{s,t} and we have

[formula not reproduced in the digitized text]

Otherwise, let D denote the set on which G is strictly positive, define a Wiener measure B̃ on D by dB̃_{s,t} = G⁻¹_{s,t} dU_{s,t}, and take any other Wiener measure B̂ on [0,S] × [0,T] \ D, independent of X; then put B = B̃ + B̂. U has again the same representation in terms of B. The Wiener process B considered in the theorem is generated by the Wiener measure B of the proof. The process B and the corresponding stochastic integral ∫₀ˢ∫₀ᵗ G_{u,v} dB_{u,v} have continuous modifications (cf. [6]).

The construction of B' and B" can be made in the same way, by starting, respectively, from the martingales U' and U" of Proposition 5. The independence of X_{0,0}, B, B' and B" is a consequence of this proposition. The proof of the converse part of the theorem is a matter of direct verification.
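Formula (16) itself is not reproduced in the digitized text, but the theorem describes its ingredients: a deterministic factor φ multiplying the sum of X_{0,0}, two boundary integrals driven by the one-parameter Wiener processes B' and B", and a double integral of G against the two-parameter Wiener process B. The following Python fragment is only an illustrative grid simulation of a field of this general type; the particular choices of φ, G, G' and G" are hypothetical and are not taken from the paper.

```python
import numpy as np

# Illustrative discretization of a representation of the type described in Theorem 1:
#   X_{s,t} = phi_{s,t} * ( X_{0,0} + int_0^s G'_u dB'_u
#                            + int_0^t G''_v dB''_v + int_0^s int_0^t G_{u,v} dB_{u,v} )
# The exact formula (16) is not reproduced in the digitized text; phi, G, Gp, Gpp
# below are hypothetical choices used only to produce a simulation.

rng = np.random.default_rng(0)
S, T, n, m = 1.0, 1.0, 200, 200
ds, dt = S / n, T / m
s = np.arange(1, n + 1) * ds                      # grid in the first parameter
t = np.arange(1, m + 1) * dt                      # grid in the second parameter

phi = np.exp(-0.5 * (s[:, None] + t[None, :]))    # hypothetical phi_{s,t} > 0
G   = np.ones((n, m))                             # hypothetical G_{u,v}
Gp  = np.ones(n)                                  # hypothetical G'_u
Gpp = np.ones(m)                                  # hypothetical G''_v

X00  = rng.normal()                               # initial random variable X_{0,0}
dB   = rng.normal(0.0, np.sqrt(ds * dt), (n, m))  # increments of the two-parameter Wiener process B
dBp  = rng.normal(0.0, np.sqrt(ds), n)            # increments of B'
dBpp = rng.normal(0.0, np.sqrt(dt), m)            # increments of B''

# Cumulative sums approximate the stochastic integrals on the grid.
I2  = np.cumsum(np.cumsum(G * dB, axis=0), axis=1)    # double integral of G dB
I1  = np.cumsum(Gp * dBp)[:, None]                    # integral of G' dB'
I1p = np.cumsum(Gpp * dBpp)[None, :]                  # integral of G'' dB''

X = phi * (X00 + I1 + I1p + I2)                   # simulated field on the grid
print(X.shape)
```

If the form above indeed matches (16), the converse part of Theorem 1 says that such a field is a centered Gaussian simple Markov process, so the simulation can serve as a test signal for the filtering equations of §2.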
§2. LINEAR FILTERING

We consider the following "state and observation" model for the filtering problem.
The state process or the signal X is a continuous Gaussian simple Markov process defined by

[formula not reproduced in the digitized text]

where φ is a continuous, strictly positive non-random function having continuous first partial derivatives ∂φ_{s,t}/∂s and ∂φ_{s,t}/∂t, B is a continuous Wiener process and G is a square-integrable non-random function. The observation process Y is defined by

(2) [formula not reproduced in the digitized text]

where H is a continuous non-random function and W is a continuous Wiener process, independent of B.
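The displayed signal and observation equations are not reproduced in the digitized text; the summary only states that the observation is defined "as in the one-parameter Kalman filtering model". Under that reading, a natural discretized analogue of the observation is Y_{s,t} ≈ ∫₀ˢ∫₀ᵗ H_{u,v} X_{u,v} du dv + W_{s,t}; the sketch below generates such observations on a grid. The form of the drift term and the choice H ≡ 1 are assumptions made only for illustration.

```python
import numpy as np

def observe(X, ds, dt, H=None, rng=None):
    """Grid analogue of a Kalman-type observation
       Y_{s,t} = int_0^s int_0^t H_{u,v} X_{u,v} du dv + W_{s,t}
    (assumed form; the paper's displayed equation is not reproduced).
    X : (n, m) array with the signal values on the (s, t) grid."""
    rng = rng if rng is not None else np.random.default_rng(1)
    n, m = X.shape
    H = np.ones((n, m)) if H is None else H            # hypothetical H_{u,v}
    drift = np.cumsum(np.cumsum(H * X * ds * dt, axis=0), axis=1)
    dW = rng.normal(0.0, np.sqrt(ds * dt), (n, m))     # increments of W, independent of B
    W = np.cumsum(np.cumsum(dW, axis=0), axis=1)
    return drift + W
```

Together with the simulation sketch of §1, this gives a pair (X, Y) on which a discretized filter can be exercised.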
regularity properties
of theprobability
space(Q,A,P)
on which the processes considered in the above model are defi-ned,
we shallidentify
it with the canonical space of(B,W).
We shall
put
=S U>sF y =U,T and denote by
G the filtration
and
by L
the space dP adsadt),
where B is the Borel
a-algebra
of[o,S]x{o,TL
Letthe Borel
a-algebra
ofto,S] (resp. to,T]).
If P’ denotes thea-algebra
of
G-predictable
x sets =91
o.Sl’ then we shall denoteby P
- theproduct a-algebra
andby L2(P)
the Hilbertsubspace
ofL 2 generated
2 2
by
allMeasurable
elements ofL .
The space will be denotedby L2(dt).
DEFINITION 2 : For a process Z in L², the G-predictable projection Z^p of Z will be defined by the conditional expectation of Z with respect to the σ-algebra P and the measure dP ⊗ ds ⊗ dt.

The predictable projection in the sense of the above definition can be constructed as follows, by using the notion of predictable projection in the sense of [7].
Let Z be a process defined by

(3) [formula not reproduced in the digitized text]

where U is a bounded random variable, and let [formula not reproduced] be the right-continuous version of the martingale [formula not reproduced]. Let us put

(4) [formula not reproduced in the digitized text]

where the first factor coincides with the G-predictable projection of U·1_{[u,S]}(s) in the sense of [7]. Notice that, for almost all s, Z^p_{s,t} is a version of E(Z_{s,t}/G_{s-}). On the other hand, the space L²(P) is generated by processes of the type

[formula not reproduced in the digitized text]

where V is a bounded G_u-measurable random variable. (There is no loss of generality in choosing V > 0.) By using the fact that the measure dP ⊗ ds commutes with predictable projections (cf. [7], T 30, p. 107), we find

[formula not reproduced in the digitized text]

Therefore, the process Z^p is the predictable projection of Z in the sense of the above definition.
Now, let Z be an arbitrary element of L² and let {Z_n : n ∈ ℕ} be a sequence consisting of processes that are linear combinations of processes of type (3) and converging to Z in L² (the space L² is generated by processes of type (3)). To each Z_n there corresponds a process Z_n^p defined as a linear combination of processes of type (4), and the sequence {Z_n^p} converges to the predictable projection Z^p of Z as defined in Definition 2. Therefore, there exists a subsequence such that, for almost all (s,t), Z^p_{n,s,t} converges to Z^p_{s,t} in L²(Ω,A,P). From this, we deduce that, for each Z ∈ L² and for almost all (s,t), Z^p_{s,t} is a version of the conditional expectation E(Z_{s,t}/G_{s-}).
Now, let us consider the following horizontal evolution equation for X:

(5) [formula not reproduced in the digitized text]

X, Y, W and the system noise, represented here by the last term in (5), have continuous trajectories. Therefore, for fixed s, X, Y, W and the system noise can be considered as taking their values in L²(dt). If U = {U_{s,t} : (s,t) ∈ [0,S] × [0,T]} is a process such that, for all s and almost all t, [condition not reproduced in the digitized text], we shall denote by Ū = {Ū_s : s ∈ [0,S]} the corresponding L²(dt)-valued process. We shall denote by ‖·‖ and ⟨·,·⟩, respectively, the norm and the scalar product on L²(dt). The covariance operator of W̄_s is sW, where W is the nuclear operator on L²(dt) the kernel of which is t ∧ τ, i.e.

[formula not reproduced in the digitized text]

The operator W is called the covariance operator of W. We define the bounded linear operator H_s on L²(dt) by

[formula not reproduced in the digitized text]

Consequently, equation (2) can be written as

[formula not reproduced in the digitized text]
can be written asDEFINITION
3 :
LetXP
be theG-predictable projection
of the state pro-cess X. Then the process v defined
by
will be called the horizontal innovation process of Y Notice
that,
as anL2(dt)-valued
process, v can be definedby
PROPOSITION 6 : The L²(dt)-valued process ν̄ = {ν̄_s : s ∈ [0,S]} is a G-Brownian motion with covariance operator W. In particular, ν is a two-parameter Wiener process such that, for all s, {ν_{u,v} - ν_{s,v} : u ≥ s, v ∈ [0,T]} is independent of G_s.
of G , . .Proof : We refer to
( 8]
for the definition andproperties
of a Hilbertspace-valued
Brownian motion. Notice first that v can be written aswhere
$i
=X - xp
that vwhere
u,v
=Xu.v - Xu , v .
I t i s a matter of easy verification that v issquare-integrable.
i.e.Ellv sll2
oo , a n d it isst r ongly
cont in uo u sas an
L2(dt)-valued
process. To prove that v is a G-Brownianmotion, according
to[8],
it isenough
to showthat,
for allIv~,f> : s G[ o ~ S 1 }
is aG-martingale,
and for al l and al ls‘S
These two
properties
can beproved
inexactly
the same way as in theproof
of the wel l known innovation theorem of theone-parameter
fil te-ring probl em ([9],
Lemma2.2 ) .
The secondpart
of theproposition
isan immadiate consequence of the first. ’
Let us consider the process {E(X_{s,t}/G_s) : t ∈ [0,T]}, for fixed s. As {X_{s,t} : t ∈ [0,T]} is continuous in the quadratic mean, the same goes for this process. It has, therefore, a measurable version (in t), which is what we shall consider below.
PROPOSITION 7 : Let M be defined by

(11) [formula not reproduced in the digitized text]

Then M̄ = {M̄_s : s ∈ [0,S]} is a square-integrable L²(dt)-valued Gaussian G-martingale. Moreover, for almost all t, {M_{s,t} : s ∈ [0,S]} also is a Gaussian G-martingale.

Proof : The square-integrability of M̄_s is easy to show by direct computation. For s' ≤ s and f ∈ L²(dt), we have

[formula not reproduced in the digitized text]

This shows that M̄ is an L²(dt)-valued G-martingale. The proof of the last part of the proposition is similar.
In the sequel, we shall only consider the right-continuous modification of the process {E(X̄_s/G_s) : s ∈ [0,S]}, taken as an L²(dt)-valued process, which we shall denote by X̄⁰ = {X̄⁰_s : s ∈ [0,S]}. We shall rather write equation (10) as follows:

[formula not reproduced in the digitized text]

keeping in mind that X⁰_{s,t} is a version of E(X_{s,t}/G_s) and that, as an L²(dt)-valued process, X̄⁰ is right-continuous. Notice that X⁰_{s,t} = X^p_{s,t} a.s. for almost all (s,t).

Let P_s be the covariance operator of X̃_s = X̄_s - X̄⁰_s, defined by

[formula not reproduced in the digitized text]

Then P_s is a symmetric nuclear operator the kernel of which is defined by

[formula not reproduced in the digitized text]

for almost all s, t, τ.
PROPOSITION 8 : Let K be defined by

[formula not reproduced in the digitized text]

for almost all s, t, τ. Then the martingale M defined by (11) has the following representation:

(13) [formula not reproduced in the digitized text]

for almost all t.

Proof : For the proof of the representation (13), we refer to the representation theorem 2.6 in [3]. Let M̄ be an L²(dt)-valued square-integrable G-martingale. Then there exists a G-predictable process Ψ, with values in the Hilbert space of (not necessarily bounded) linear operators in L²(dt), such that ∫₀ˢ E[tr(Ψ_u W Ψ*_u)] du < ∞, where Ψ*_u is the adjoint of Ψ_u, and such that

(14) [formula not reproduced in the digitized text]

(Cf. [10] for the definition of this kind of stochastic integral.)

Let the representation of the martingale M̄ of Proposition 7 be given by (14). Then, for almost all t,

(15) [formula not reproduced in the digitized text]

is a G-martingale in terms of s. To see this, it is enough to apply the Itô differentiation rule to the product X_{s,t} ν_{s,t} and to take the conditional expectation with respect to G_{s'}, for s' ≤ s; this part of the proof is similar to that of Proposition 2.11 in [3]. Considered as an operator, the integral term of (15) can be written as ∫₀ˢ P_u H_u du. This, with the above mentioned representation theorem, implies that M̄ has the following representation, as given in Theorem 2.12 in [3]:

[formula not reproduced in the digitized text]

where W⁺ is the pseudo-inverse of W, defined by

[formula not reproduced in the digitized text]

with the limit taken in L²(dt). Notice that W⁺ is a non-random operator. Then, for almost all t, M_{s,t} is an element of H^ν_{s,T}. Therefore, by taking the mathematical expectation of (15), which equals 0, we obtain

[formula not reproduced in the digitized text]

From this, representation (13) follows.
Let us consider now the equation

(16) [formula not reproduced in the digitized text]

and let Z be a measurable version of

(17) [formula not reproduced in the digitized text]

The existence of a measurable version of Z is guaranteed by the measurability of its covariance function. By substituting Z for U in (16), it can be verified that Z satisfies equation (16).

We want to show that, if equation (16) is satisfied a.e. dP ⊗ ds ⊗ dt by two processes U and U' in L², then U = U' a.e. dP ⊗ ds ⊗ dt. In this case, we can also replace K_u(t,v) in (16) and (17) by any other measurable function equal to K for almost all u, v, t.
Suppose that we have two processes U and U' in L² satisfying equation (16) a.e. dP ⊗ ds ⊗ dt. Then Y = U - U' satisfies the equation

[formula not reproduced in the digitized text]

a.e. dP ⊗ ds ⊗ dt. Then we have

[formula not reproduced in the digitized text]

where F_{u,t} = [formula not reproduced in the digitized text] and A is an upper bound, independent of (s,t), of the integral in which F appears. Let us put

[formula not reproduced in the digitized text]

Then the above inequality becomes

[formula not reproduced in the digitized text]

which gives, after multiplication by e^{-As},

[formula not reproduced in the digitized text]

Therefore, [formula not reproduced in the digitized text]. This is possible only if, for almost all t, F_{s,t} = 0. Consequently, we have ∫₀^S ∫₀^T E(Y²_{s,t}) dt ds = 0.
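The displayed inequalities of this uniqueness argument are not reproduced, but the step "multiplication by e^{-As}" is the classical Gronwall device. A minimal sketch of that step, assuming only that a non-negative integrable function f (for instance f(s) = ∫₀^T F_{s,t} dt) satisfies the integral inequality on the left, is:

```latex
% Gronwall-type step, assuming f >= 0 and f(s) <= A \int_0^s f(u) du:
f(s) \le A \int_0^s f(u)\,du
\quad\Longrightarrow\quad
\frac{d}{ds}\Big(e^{-As}\int_0^s f(u)\,du\Big)
   = e^{-As}\Big(f(s) - A\int_0^s f(u)\,du\Big) \le 0 .
% Since e^{-As}\int_0^s f(u)\,du vanishes at s = 0 and is non-negative,
% it is identically zero, hence f = 0 almost everywhere.
```

This is the mechanism by which the difference of two solutions of (16) is forced to vanish a.e. dP ⊗ ds ⊗ dt.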
Equation (11) and representation (13) show that X^p and X⁰ satisfy equation (16) a.e. dP ⊗ ds ⊗ dt. Therefore, Z = X^p = X⁰ a.e. dP ⊗ ds ⊗ dt. Then, according to (7), we have

[formula not reproduced in the digitized text]

This implies that ν_{s,t} is an element of H^Y_{s,T}; hence H^ν_{s,T} = H^Y_{s,T}. Consequently, the filtration G coincides with {F^Y_{s,T} : s ∈ [0,S]} and is, therefore, continuous. All these imply that G_s is generated by H^ν_{s,T}. Moreover, by using the continuity of G, one can show that, as a function of (s,t), E(X_{s,t}/G_s) is continuous in the quadratic mean. Hence, P_s(t,τ) can be chosen as a continuous function of (s,t,τ); that is what we shall do in the sequel. In this case, the process Z defined by (17) is continuous in the quadratic mean. Therefore, Z_{s,t} is a version of E(X_{s,t}/G_s).
Notice that, for any fixed t, {Z_{s,t} : s ∈ [0,S]} has a continuous version satisfying (16) and, being continuous in the quadratic mean, for any fixed s, {Z_{s,t} : t ∈ [0,T]} has a measurable version. Therefore, there exists a measurable process such that, for any fixed t, its trajectories in s are continuous and, for all (s,t), it is equal to Z_{s,t} a.e., hence a version of E(X_{s,t}/G_s) (cf. [11]).

Noting that nothing would change in the above conclusions if T were replaced by some t₀ ∈ [0,T], we shall summarize the main results by the following theorem.
THEOREM 2 : The process {E(X_{s,t}/F^Y_{s,t₀}) : (s,t) ∈ [0,S] × [0,t₀]} is continuous in the quadratic mean and has a measurable modification X̂^h(t₀) = {X̂^h_{s,t}(t₀) : (s,t) ∈ [0,S] × [0,t₀]} such that, for any fixed t, {X̂^h_{s,t}(t₀) : s ∈ [0,S]} is continuous. Let the corresponding kernels be defined respectively by

[formulas not reproduced in the digitized text]

and let the horizontal innovation process of height t₀ be defined by

(20) [formula not reproduced in the digitized text]

Then X̂^h(t₀) satisfies the following horizontal filtering equation

(21) [formula not reproduced in the digitized text]

and has the following explicit expression

[formula not reproduced in the digitized text]

Moreover, the kernel satisfies the following Riccati equation:

(23) [formula not reproduced in the digitized text]
Proof : We only have to establish the Riccati equation. By putting X̃ = X - X̂^h(t₀) and considering equations (5), (21), (2) and (20), we get

[formula not reproduced in the digitized text]

and we obtain equation (23) by applying the Itô differentiation formula to X̃_{s,t} X̃_{s,τ} as a function of s and by taking the mathematical expectation.
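The displayed equations (20)-(23) are not reproduced in the digitized text. As a purely numerical analogue of the horizontal filtering structure that the theorem describes (an estimate updated through a gain built from the covariance kernel P and driven by an innovation term, together with a Riccati-type recursion for P), one may run a standard discrete Kalman recursion in the first parameter, the state being the vector of values of X_{s,·} on a t-grid, in the spirit of the L²(dt)-valued point of view of this section. Everything below (the matrices A, Q, H, R, the grid sizes and the synthetic observations) is hypothetical and is meant only as an illustration; it is not the author's equation (21).

```python
import numpy as np

def horizontal_kalman(y, A, Q, H, R, x0, P0):
    """Generic discrete Kalman recursion in the first parameter.
    y   : (n, p) observation increments, one row per s-step
    A,Q : state transition and state-noise covariance (m x m)
    H,R : observation matrix (p x m) and observation-noise covariance (p x p)
    x0,P0 : initial estimate (m,) and covariance (m x m)."""
    x, P = x0.copy(), P0.copy()
    out = np.zeros((y.shape[0], x0.size))
    I = np.eye(P0.shape[0])
    for k, yk in enumerate(y):
        # prediction step (horizontal evolution of the estimate)
        x = A @ x
        P = A @ P @ A.T + Q
        # innovation and gain (discrete analogue of the innovation process of height t0)
        innov = yk - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        # update step; the covariance recursion is the discrete Riccati equation
        x = x + K @ innov
        P = (I - K @ H) @ P
        out[k] = x
    return out, P

# Hypothetical illustration: m points on the t-grid, n steps in s.
m, n = 5, 50
rng = np.random.default_rng(2)
A, Q = 0.98 * np.eye(m), 0.01 * np.eye(m)
H, R = np.eye(m), 0.1 * np.eye(m)
y = rng.normal(size=(n, m))          # placeholder observation increments
estimates, P_final = horizontal_kalman(y, A, Q, H, R, np.zeros(m), np.eye(m))
print(estimates.shape)
```

A continuous-parameter version would replace the prediction/update alternation by the differential filtering equation (21) and the Riccati equation (23); the discrete recursion above is only the usual finite-dimensional caricature of that structure.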
By interchanging the roles of s and t, we obtain similar results for the vertical filtering problem. The process {E(X_{s,t}/F^Y_{s₀,t}) : (s,t) ∈ [0,s₀] × [0,T]} is continuous in the quadratic mean and has a measurable modification X̂^v(s₀) = {X̂^v_{s,t}(s₀) : (s,t) ∈ [0,s₀] × [0,T]} such that, for any fixed s, {X̂^v_{s,t}(s₀) : t ∈ [0,T]} is continuous. Let the corresponding kernels be defined respectively by

[formulas not reproduced in the digitized text]

and let the vertical innovation process of width s₀ be defined by

[formula not reproduced in the digitized text]

Then X̂^v(s₀) satisfies the following vertical filtering equation

(21') [formula not reproduced in the digitized text]

and has the following explicit expression

[formula not reproduced in the digitized text]

The kernel satisfies the Riccati equation

[formula not reproduced in the digitized text]

with the kernel equal to 0 for t ∧ s = 0.
We may call any version of E(X_{s,t}/F^Y_{s,t}) a causal estimation of X_{s,t} in terms of Y. It is clear that X̂^h_{s,t}(t) and X̂^v_{s,t}(s) are causal estimations of X_{s,t}. Let us put X̂_{s,t} = X̂^h_{s,t}(t) = X̂^v_{s,t}(s) a.s. For any fixed t, the process {X̂_{s,t} : s ∈ [0,S]} has a continuous version defined by

[formula not reproduced in the digitized text]

and satisfying the equation

[formula not reproduced in the digitized text]

Similarly, for any fixed s, the process {X̂_{s,t} : t ∈ [0,T]} has a continuous version defined by

[formula not reproduced in the digitized text]

and satisfying the equation

[formula not reproduced in the digitized text]
We do not know yet whether or not X̂ has a continuous version as a two-parameter process.

For numerical applications, it would be interesting to express the second differential of X̂_{s,t} in terms of X̂_{s,t}, that is, to have a two-parameter recursive filtering equation. We could find this equation in [1] only by extending, in a rather formal way, the results obtained in [12] for the case of discrete parameters. This equation is the following:

(26) [formula not reproduced in the digitized text]

where d denotes the second differential in s and t, and d_s and d_t denote respectively the first differentials in s and in t. The first term of the right-hand side can be replaced by

[formula not reproduced in the digitized text]
In view of equation (26), it seems difficult to represent X̂_{s,t} as the sum of stochastic integrals in terms of the various innovations. For the non-linear filtering of two-parameter semi-martingales such a representation was obtained in [13]. We hope to be able to extend the method of [13] to the Gaussian case in a forthcoming publication.
We would like to mention that linear filtering equations of type (21), (21') and (26) were given by E. WONG in [14], where the martingale representation (13) was introduced without proof. Apart from providing all the tools for establishing the linear filtering equations, the extension of the method used in [13] to the Gaussian case will, we hope, also contain the proof of such martingale representation theorems.

REFERENCES
1. H. KOREZLIOGLU, "Recursive Linear Filtering of Two-Parameter Gaussian Markov Processes", Proceedings of the Eighth Prague Conference on Information Theory, Statistical Decision Functions and Random Processes, Prague, August 28 - September 1, 1978.

2. D. NUALART & M. SANZ, "Random Gaussian Markov Fields", Technical Report, Departamento de Matematicas, Escuela Técnica Superior de Arquitectura, Barcelona.

3. J.-Y. OUVRARD, "Martingale Projection and Linear Filtering in Hilbert Spaces. I: The Theory", SIAM J. Control and Optimization, Vol. 16, No. 6, pp. 912-937, 1978.

4. J. NEVEU, Processus aléatoires gaussiens, Presses de l'Université de Montréal, 1968.

5. R. CAIROLI & J. B. WALSH, "Stochastic Integrals in the Plane", Acta Math. 134, pp. 111-183, 1975.

6. E. WONG & M. ZAKAI, "Martingales and Stochastic Integrals for Processes with a Multidimensional Parameter", Z. Wahrschein. Verw. Gebiete 29, pp. 109-122, 1974.

7. C. DELLACHERIE, Capacités et processus stochastiques, Springer-Verlag, 1972.

8. M. YOR, "Existence et unicité de diffusions à valeurs dans un espace de Hilbert", Ann. Inst. Henri Poincaré, Vol. X, No. 1, pp. 55-88, 1974.

9. M. FUJISAKI, G. KALLIANPUR & H. KUNITA, "Stochastic Differential Equations for the Non-Linear Filtering Problem", Osaka J. Math. 9, pp. 19-40, 1972.

10. M. METIVIER & G. PISTONE, "Une formule d'isométrie pour l'intégrale stochastique hilbertienne et équations d'évolution linéaires stochastiques", Z. Wahrschein. Verw. Gebiete 33, pp. 1-18, 1975.

11. C. DOLEANS-DADE, "Intégrales stochastiques dépendant d'un paramètre", Publ. Inst. Stat. Univ. Paris 16, pp. 23-34, 1967.

12. H. KOREZLIOGLU, "Filtrage linéaire récursif des processus à deux indices discrets, markoviens au sens large", ENST-D-78004, 1978.

13. H. KOREZLIOGLU, G. MAZZIOTTO & J. SZPIRGLAS, "Equations du filtrage non linéaire pour des processus à deux indices", C.R. Acad. Sc. Paris, t. 287, Série A, pp. 891-893, 1978.

14. E. WONG, "Recursive Causal Linear Filtering for Two-Dimensional Random Fields", IEEE Trans. Inf. Th. IT-24, No. 1, pp. 50-59, 1978.