Semiblind Channel Estimation for MIMO Spatial Multiplexing Systems
Abdelkader Medles, Dirk T.M. Slock
Institut Eurécom, 2229 route des Crêtes, B.P. 193,
06904 Sophia Antipolis Cedex, FRANCE
Tel: +33 4 9300 2910/2606; Fax: +33 4 9300 2627
e-mail: {medles,slock}@eurecom.fr
Abstract
In this paper, we analyze the semiblind mutual information (MI) between the input and output of a MIMO channel.
To that end we consider the popular block fading model.
We assume that some training/pilot symbols are inserted at the beginning of each burst. We show that the average MI over a transmission burst can be decomposed into symbol position dependent contributions. The MI component at a certain symbol position optimally combines semiblind information up to that symbol position (with perfect input recovery up to that position) with blind information from the rest of the burst. We also analyze the asymptotic regime, for which we can formulate optimal channel estimates and evaluate the capacity loss with respect to the known channel case. Asymptotically, the decrease in MI involves Fisher Information matrices for certain channel estimation problems. We also suggest exploiting correlations in the channel model to improve estimation performance and minimize capacity loss.
1: Introduction
We consider single user spatial multiplexing systems over flat fading MIMO channels. We furthermore consider the usual block fading model, in which data gets transmitted over a number of bursts such that the channel is constant over a burst but fades independently between bursts. The transmitter is assumed to have no channel knowledge. The formidable capacity increase realizable with such dual antenna array systems has been shown [5],[2] to be proportional to the minimum of the antenna array dimensions for channels with i.i.d. fading entries. At least, this is the case when the receiver has perfect channel knowledge. This condition is fairly straightforward to approach in SISO systems by inserting pilot/training data in the transmission, with acceptable capacity decrease [1]. For MIMO systems of large dimensions though, the training overhead for good channel
Eurécom's research is partially supported by its industrial partners: Ascom, Swisscom, Thales Communications, ST Microelectronics, CEGETEL, Motorola, France Télécom, Bouygues Telecom, Hitachi Europe Ltd. and Texas Instruments. The work reported herein was also partially supported by the French RNRT project ERMITAGES.
estimation quality becomes far from negligible, especially for higher Doppler speeds such as in mobile communications. The effect of channel estimation errors on the MI has been analyzed in [4], whereas the optimal design of training based channel estimation has been addressed in [6]. The true capacity of the system is however higher than the MI obtained with training based channel estimates [2],[3].
In this paper, we attempt to approach the true capacity by optimal semiblind channel estimation that is suggested by a decomposition of the MI. Related background material appeared in [7], where a variety of semiblind MIMO channel estimation techniques were introduced.
2: Capacity Decomposition
We shall consider here the usual block fading model, except that we shall refer to a block as a burst. Consider then transmission over a MIMO flat fading channel $y = Hx + v$ for a particular burst of $T$ symbol periods. The accumulated received signal over the burst is then
$$Y = (I_T \otimes H)\, X + V \qquad (1)$$
where $Y$ and $V$ are $N_r T \times 1$ and $X$ is $N_t T \times 1$; $H$ is a $N_r \times N_t$ channel matrix, where $N_t$ (resp. $N_r$) is the number of transmit (resp. receive) antennas. We assume the use of a pilot training sequence of length $N_t T_p$; the length of the transmitted data is then $N_t T_d$, so that $T_p + T_d = T$. We can decompose the burst signal into training and data parts: $X = (X_p^T\; X_d^T)^T$, $Y = (Y_p^T\; Y_d^T)^T$ and $V = (V_p^T\; V_d^T)^T$. As stated in [6], the mutual information between the input and the output of this system verifies:
$$I(Y_p, Y_d; X_d \,|\, X_p) = I(Y_d; X_d \,|\, X_p, Y_p) + I(Y_p; X_d \,|\, X_p) = I(Y_d; X_d \,|\, X_p, Y_p) \qquad (2)$$
since $I(Y_p; X_d | X_p) = 0$ due to the independence between $X_d$ and $(Y_p, X_p)$. Consider now a partition of $X_d$ in $Q$ blocks $X_i$, $i = 1 \ldots Q$, $X_d = (X_1^T \ldots X_Q^T)^T$, of different lengths $T_i$, $i = 1 \ldots Q$, $\sum_i T_i = T_d$. $X_i$ and $V_i$ are assumed independent from block to block (block-wise coding across bursts). Let us define, for $j \geq i$, $X_i^j = (X_i^T\; X_{i+1}^T \ldots X_j^T)^T$ and $Y_i^j = (Y_i^T\; Y_{i+1}^T \ldots Y_j^T)^T$. Then,
$$\begin{aligned}
I(Y_d; X_d|X_p,Y_p) &= I(Y_1^Q; X_1, X_2^Q \,|\, X_p, Y_p)\\
&= I(Y_1^Q; X_1 \,|\, X_p, Y_p) + I(Y_1^Q; X_2^Q \,|\, X_p, Y_p, X_1)\\
&= I(Y_1^Q; X_1 \,|\, X_p, Y_p) + I(Y_2^Q; X_2^Q \,|\, X_p, Y_p, X_1, Y_1) + \underbrace{I(Y_1; X_2^Q \,|\, X_p, Y_p, X_1)}_{=0}
\end{aligned} \qquad (3)$$
where $I(Y_1; X_2^Q|X_p,Y_p,X_1) = h(Y_1|X_p,Y_p,X_1) - h(Y_1|X_p,Y_p,X_1,X_2^Q)$; considering the fact that $X_2^Q$ is independent of $Y_1$ conditioned on $(X_p,Y_p,X_1)$, this leads to $h(Y_1|X_p,Y_p,X_1,X_2^Q) = h(Y_1|X_p,Y_p,X_1)$ and finally $I(Y_1; X_2^Q|X_p,Y_p,X_1) = 0$. Iterating the equation for $i = 2 \ldots Q$, this leads to the following:
$$\begin{aligned}
I(Y_d; X_d|X_p,Y_p) &= \sum_{i=1}^{Q} I(Y_i^Q; X_i \,|\, X_p, Y_p, X_1^{i-1}, Y_1^{i-1})\\
&= \underbrace{\sum_{i=1}^{Q} I(Y_i; X_i \,|\, X_p, Y_p, X_1^{i-1}, Y_1^{i-1})}_{I_1} \;+\; \underbrace{\sum_{i=1}^{Q-1} I(Y_{i+1}^{Q}; X_i \,|\, X_p, Y_p, X_1^{i-1}, Y_1^{i})}_{I_2}
\end{aligned}$$
This clearly shows the processing needed to achieve capacity: for every block we use the already detected blocks as a (data-aided (DA)) training sequence (in addition to the actual training) and use the not yet detected blocks as blind information. Also, the mutual information can be seen as the sum of two parts: $I(Y_d; X_d|X_p,Y_p) = I_1 + I_2$, where $I_1$ is the rate that we can achieve by using only the already treated blocks as side information (DA training), and $I_2$ is the additional amount of rate that can be achieved by exploiting the blind information contained in the not yet detected blocks. The average mutual information is defined as $I_{avg} = \frac{1}{T}\, I(Y_d; X_d|X_p,Y_p)$. We want to show below that this quantity goes to the coherent MI $I(y; x|H)$ as $T$ and $Q$ grow for finite $T_p$, where $I(y; x|H) = \lim_{T\to\infty} \frac{1}{T}\, I(Y; X|H) = \lim_{i\to\infty} \frac{1}{T_i}\, I(Y_i; X_i|H)$.
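As a sanity check on the chain-rule algebra behind the decompositions (2)-(3), the identity can be verified numerically for a jointly Gaussian toy model with a known scalar channel ($Q = 2$ scalar blocks; the model and all names here are illustrative assumptions, not the paper's semiblind setup):

```python
import numpy as np

# Toy model: Q = 2 scalar blocks, y_i = a*x_i + v_i with a KNOWN channel a.
# This only checks the generic chain-rule algebra, not the semiblind setting.
a, sv2 = 1.3, 0.5
# Joint covariance of z = (x1, x2, y1, y2), x_i ~ N(0,1) i.i.d., v_i ~ N(0, sv2).
C = np.array([
    [1.0, 0.0, a,         0.0      ],
    [0.0, 1.0, 0.0,       a        ],
    [a,   0.0, a*a + sv2, 0.0      ],
    [0.0, a,   0.0,       a*a + sv2],
])

def h(idx):
    """Differential entropy of the Gaussian subvector z[idx] (nats)."""
    if not idx:
        return 0.0
    S = C[np.ix_(idx, idx)]
    return 0.5 * np.log((2 * np.pi * np.e) ** len(idx) * np.linalg.det(S))

def mi(A, B, cond=()):
    """I(z[A]; z[B] | z[cond]) for the jointly Gaussian z."""
    c = list(cond)
    return h(list(A) + c) + h(list(B) + c) - h(list(A) + list(B) + c) - h(c)

X1, X2, Y1, Y2 = [0], [1], [2], [3]
lhs = mi(Y1 + Y2, X1 + X2)             # I(Y_1^2; X_1, X_2)
rhs = (mi(Y1 + Y2, X1)                 # I(Y_1^2; X_1)
       + mi(Y2, X2, cond=X1 + Y1)      # I(Y_2^2; X_2 | X_1, Y_1)
       + mi(Y1, X2, cond=X1))          # term that vanishes by independence
print(lhs - rhs, mi(Y1, X2, cond=X1))  # both ~ 0
```

The chain rule holds for any joint distribution; the Gaussian case is used here only because all entropies are available in closed log-det form.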
An upper bound is given by:
$$\begin{aligned}
I_{avg} &= \tfrac{1}{T}\big(h(X_d) - h(X_d|X_p,Y_p,Y_d)\big)\\
&\leq \tfrac{1}{T}\big(h(X_d) - h(X_d|X_p,Y_p,Y_d,H)\big)\\
&= \tfrac{1}{T}\big(h(X_d) - h(X_d|Y_d,H)\big)\\
&= \tfrac{1}{T}\big(I(Y; X|H) - I(Y_p; X_p|H)\big)\\
&\Rightarrow\; \lim_{T\to\infty} I_{avg} \leq I(y; x|H)\,.
\end{aligned} \qquad (4)$$
Also
$$\begin{aligned}
I(Y_i^Q; X_i|X_p,Y_p,X_1^{i-1},Y_1^{i-1}) &= h(X_i) - h(X_i|X_p,Y_p,X_1^{i-1},Y_1^{i-1},Y_{i+1}^Q,Y_i)\\
&\geq h(X_i) - h\big(X_i \,\big|\, Y_i,\, \widehat{H}(X_p,Y_p,X_1^{i-1},Y_1^{i-1},Y_{i+1}^Q,Y_i)\big)\\
&= h(X_i) - h\big(X_i \,\big|\, Y_i,\, \widehat{H}(X_p,\, X_1^{i-1},\, \underbrace{Y_p, Y_1^{i-1}, Y_i, Y_{i+1}^Q}_{=\,Y})\big)\\
&= I(Y_i; X_i|\widehat{H}^{(i)})
\end{aligned} \qquad (5)$$
where $\widehat{H}^{(i)} = \widehat{H}(X_p, X_1^{i-1}, Y)$ is the optimal estimate of the channel (a statistic of reduced dimension), which verifies $\lim_{i\to\infty} \widehat{H}^{(i)} = H$ a.s. Given that, $\lim_{i\to\infty} \frac{1}{T_i}\, I(Y_i^Q; X_i|X_p,Y_p,X_1^{i-1},Y_1^{i-1}) \geq I(y;x|H)$. This allows us to conclude that $\lim_{T\to\infty} I_{avg} \geq I(y;x|H)$. Combining this result with (4) we conclude that:
$$\lim_{T\to\infty} \tfrac{1}{T}\, I(Y_d; X_d|X_p,Y_p) = I(y; x|H)\,.$$
This means that when $T$ grows, adopting the per-block detection method and the associated channel estimation allows the capacity of the system assuming perfect channel knowledge to be reached (see [3] for related results at high SNR).
Remark 1: When $T$ grows, using only the detected blocks to estimate the channel allows the MI $I_{avg}$ of the system to be achieved asymptotically. But for finite $T$, it is necessary to also use the blind information to reach it.
Remark 2: For a fixed $T$, and when all the entries of $X$ (training and data) are i.i.d., it is easy to see that the average MI $I_{avg}$ of the system is maximized when the number of training symbols $T_p$ is as small as possible (i.e., just large enough to allow semiblind identifiability of the channel).
Remark 3: In the case where $H$ is no longer constant and varies from block to block, $H = H^{(i)}$, the (coherent) capacity of the system assuming perfect channel knowledge is no longer achievable for large $T$, and the average MI is bounded by:
$$\frac{1}{T}\sum_{i=1}^{Q} I(Y_i; X_i|\widehat{H}^{(i)}) \;\leq\; I_{avg} \;\leq\; \frac{1}{T}\sum_{i=1}^{Q} I(Y_i; X_i|H^{(i)}) \qquad (6)$$
for any channel estimate $\widehat{H}^{(i)} = \widehat{H}^{(i)}(X_p, X_1^{i-1}, Y)$.
3: Channel estimation

3.1 Bayesian case (random channel with prior)

The capacity $C$ of the system is the maximum MI $I_{avg}$ over all input distributions, under a given power constraint. We have
$$\frac{1}{T}\sum_{i=1}^{Q} I(Y_i; X_i|\widehat{H}^{(i)}) \;\leq\; C = \max_{p(X_d):\, \mathrm{tr}(R_{X_d}) \leq T_d N_t \sigma_x^2} I_{avg} \;\leq\; \frac{1}{T}\sum_{i=1}^{Q} I(Y_i; X_i|H)\,. \qquad (7)$$
This is valid for all choices of the partitioning of $X_d$ into $X_i$, $i = 1\ldots Q$, and in particular for $Q = T_d$ and $T_i \equiv 1$. For an AWGN $V$ with power $\sigma_v^2$ and in the absence of side information on the channel at the transmitter (see [5]), the max in the upper bound of the capacity (coherent capacity) is attained for a centered white Gaussian input with covariance $R_{X_d} = \sigma_x^2 I_{N_t T_d}$:
$$\frac{1}{T}\sum_{i=1}^{T_d} I(Y_i; X_i|\widehat{H}^{(i)}) \;\leq\; C \;\leq\; \frac{T - T_p}{T}\; \mathrm{E} \ln\det\Big(I + \frac{\sigma_x^2}{\sigma_v^2}\, H H^H\Big) \qquad (8)$$
where $\widehat{H}^{(i)} = \widehat{H}(X_p, X_1^{i-1}, Y)$.
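The coherent-capacity term on the right-hand side of (8) is easy to estimate by Monte Carlo for i.i.d. Rayleigh fading. The following sketch (the function name and parameters are ours, not the paper's) also illustrates the growth with $\min(N_t, N_r)$ mentioned in the introduction:

```python
import numpy as np

def coherent_capacity(Nt, Nr, snr, trials=2000, seed=0):
    """Monte-Carlo estimate of E ln det(I + snr * H H^H) in nats/symbol,
    for H with i.i.d. CN(0,1) entries (snr = sigma_x^2 / sigma_v^2)."""
    rng = np.random.default_rng(seed)
    acc = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((Nr, Nt))
             + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
        M = np.eye(Nr) + snr * H @ H.conj().T
        acc += np.log(np.abs(np.linalg.det(M)))
    return acc / trials

# Capacity grows roughly linearly in min(Nt, Nr) at fixed SNR.
print(coherent_capacity(1, 1, 10.0), coherent_capacity(4, 4, 10.0))
```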
The received signal is $Y_i = H X_i + V_i = \widehat{H}^{(i)} X_i + \widetilde{H}^{(i)} X_i + V_i = \widehat{H}^{(i)} X_i + V_i + Z_i$, where we assume $\widehat{H}^{(i)}$ to satisfy the Pythagorean Theorem (PT) (i.e., $\widehat{H}$ is decorrelated with $\widetilde{H}$). Now $I(Y_i; X_i|\widehat{H}^{(i)}) = h(Y_i|\widehat{H}^{(i)}) - h(Y_i|X_i,\widehat{H}^{(i)}) = h(Y_i|\widehat{H}^{(i)}) - h(V_i + Z_i|X_i,\widehat{H}^{(i)})$. Under the above conditions and for uncorrelated and centered Gaussian $V_i$ and $X_i = X_i^G$ (a variable with the same 1st and 2nd order moments, but Gaussian), it was shown in [6] that a lower bound for $I(Y_i; X_i|\widehat{H}^{(i)})$ is obtained by considering $Z_i$ as an independent and white Gaussian noise with covariance $\sigma_z^2 I$, where
$$\sigma_z^2 = \frac{1}{N_r}\,\mathrm{tr}\,\mathrm{E}(Z_i Z_i^H) = \sigma_x^2\,\frac{\mathrm{tr}\,\mathrm{E}(\widetilde{H}^{(i)} \widetilde{H}^{(i)H})}{N_r} = N_t\, \sigma_x^2\, \sigma^2_{\widetilde{H}^{(i)}}\,, \qquad \sigma^2_{\widetilde{H}^{(i)}} = \frac{\mathrm{tr}\,\mathrm{E}(\widetilde{H}^{(i)} \widetilde{H}^{(i)H})}{N_r N_t}\,.$$
The new lower bound is now:
$$C \;\geq\; \frac{1}{T}\sum_{i=1}^{T_d} I(Y_i; X_i|\widehat{H}^{(i)}) \;\geq\; \frac{1}{T}\sum_{i=1}^{T_d} \mathrm{E} \ln\det\Big(I + \frac{\sigma_x^2}{\sigma_v^2 + N_t \sigma_x^2\, \sigma^2_{\widetilde{H}^{(i)}}}\; \widehat{H}^{(i)} \widehat{H}^{(i)H}\Big)\,. \qquad (9)$$
Let $\sigma^2_{\widehat{H}^{(i)}} = \frac{\mathrm{tr}\,\mathrm{E}(\widehat{H}^{(i)} \widehat{H}^{(i)H})}{N_r N_t}$. Due to the fact that the channel estimator satisfies the Pythagorean Theorem, we have $\sigma^2_{\widehat{H}^{(i)}} + \sigma^2_{\widetilde{H}^{(i)}} = \sigma^2_H$. Now:
$$C \;\geq\; \frac{1}{T}\sum_{i=1}^{T_d} \mathrm{E} \ln\det\Big(I + \frac{\sigma_x^2}{\sigma_v^2 + N_t \sigma_x^2\,\big(\sigma^2_H - \sigma^2_{\widehat{H}^{(i)}}\big)}\; \widehat{H}^{(i)} \widehat{H}^{(i)H}\Big) = C_{LB}\,. \qquad (10)$$
The expectation is over the distribution of $\widehat{H}^{(i)}$, which remains close to that of $H^{(i)}$. The capacity lower bound $C_{LB}$ thus depends primarily on the Mean Square Error (MSE) of the channel estimator $\widehat{H}^{(i)}$. Since $C_{LB}$ is a decreasing function of the MSE, the optimum estimator is the Minimum Mean Square Error (MMSE) estimator:
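To see how the bound (10) degrades with estimation quality, one can evaluate a per-symbol version of $C_{LB}$ while sweeping the estimation MSE. In this sketch the estimate is simply modeled as the true channel minus an independent i.i.d. error (an illustrative assumption of ours, not the optimal estimator of (11)):

```python
import numpy as np

def clb_per_symbol(Nt, Nr, sx2, sv2, sH2_err, trials=2000, seed=1):
    """Per-symbol version of the lower bound in (10):
    E ln det(I + sx2 / (sv2 + Nt*sx2*sH2_err) * Hb Hb^H),
    where the estimate Hb is modeled as H minus an i.i.d. error with
    per-entry variance sH2_err (an illustrative modeling assumption)."""
    rng = np.random.default_rng(seed)
    acc = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((Nr, Nt))
             + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
        E = np.sqrt(sH2_err) * (rng.standard_normal((Nr, Nt))
             + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
        Hb = H - E
        eff_snr = sx2 / (sv2 + Nt * sx2 * sH2_err)
        acc += np.log(np.abs(np.linalg.det(np.eye(Nr) + eff_snr * Hb @ Hb.conj().T)))
    return acc / trials

# The bound decreases monotonically with the estimation MSE.
rates = [clb_per_symbol(2, 2, 1.0, 0.1, e) for e in (0.0, 0.05, 0.2)]
print(rates)
```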
$$\widehat{H}^{(i)}_{MMSE} = \widehat{H}^{(i)}_{MMSE}(X_p, X_1^{i-1}, Y) = \mathrm{E}\big(H \,\big|\, X_p, X_1^{i-1}, Y\big) \qquad (11)$$
which is an unbiased estimator of $H$. The performance of any unbiased estimator is bounded by the Cramér-Rao lower bound:
$$R_{\widetilde{h}^{(i)}\widetilde{h}^{(i)}} = \mathrm{E}\,\widetilde{h}^{(i)} \widetilde{h}^{(i)T} \;\geq\; J^{(i)-1} \qquad (12)$$
where $h = [\mathrm{Re}(\mathrm{vec}(H))^T\ \mathrm{Im}(\mathrm{vec}(H))^T]^T$ and $J^{(i)}$ is the Bayesian Fisher Information Matrix (FIM) for the a posteriori distribution of $H$, which is in this case:
$$\begin{aligned}
J^{(i)} &= -\,\mathrm{E}\,\frac{\partial}{\partial h}\,\frac{\partial \ln p(H|X_p, X_1^{i-1}, Y)}{\partial h^T}\\
&= \underbrace{-\,\mathrm{E}\,\frac{\partial}{\partial h}\,\frac{\partial \ln p(X_p, Y_p, X_1^{i-1}, Y_1^{i-1}|H)}{\partial h^T}}_{J^{(i)}_{DA\ training}}
\;\underbrace{-\;\mathrm{E}\,\frac{\partial}{\partial h}\,\frac{\partial \ln p(Y_{i+1}^{T_d}|H)}{\partial h^T}}_{J^{(i)}_{blind}}
\;\underbrace{-\;\mathrm{E}\,\frac{\partial}{\partial h}\,\frac{\partial \ln p(H)}{\partial h^T}}_{J^{(i)}_{prior}}
\end{aligned}$$
We have $\sigma^2_{\widetilde{H}^{(i)}} \geq \mathrm{tr}\big(J^{(i)-1}\big)/(N_t N_r)$. This is an absolute lower bound on the channel estimation MSE. The MMSE estimator achieves this bound asymptotically ($T_d \to \infty$). The above result is also valid for the case when the channel varies from block to block, since channel re-estimation for every block is assumed.
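For a purely training-plus-prior toy model (real-valued linear Gaussian, our construction), the FIM decomposition above reduces to $J = J_{training} + J_{prior}$, and the MMSE estimator attains $\mathrm{tr}(J^{-1})$ exactly, which can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy real-valued linear model: y = A h + v, prior h ~ N(0, sh2 I).
n, m, sh2, sv2 = 3, 8, 1.0, 0.2
A = rng.standard_normal((m, n))

J_training = A.T @ A / sv2          # FIM of the Gaussian likelihood p(y|h)
J_prior = np.eye(n) / sh2           # FIM of the prior p(h)
J = J_training + J_prior            # Bayesian FIM (no blind term here)
P = np.linalg.inv(J)                # posterior / MMSE error covariance

# Monte-Carlo MSE of the MMSE estimate  E{h|y} = P A^T y / sv2.
trials, err = 20000, 0.0
for _ in range(trials):
    h = np.sqrt(sh2) * rng.standard_normal(n)
    y = A @ h + np.sqrt(sv2) * rng.standard_normal(m)
    h_hat = P @ A.T @ y / sv2
    err += np.sum((h_hat - h) ** 2)
print(err / trials, np.trace(P))    # the two agree: MMSE attains tr(J^{-1})
```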
3.2 Asymptotic Behavior
We focus here on the asymptotic behavior of the capacity loss for small channel estimation error. We suppose that the channel is constant over every block; the mutual information for a particular block $(i)$ at the receiver, assuming channel estimation, is then $I(Y_i; X_i|\widehat{H}^{(i)}) = \mathrm{E}\big(I_{H^{(i)}}(Y_i; X_i|\widehat{H}^{(i)})\big)$, where $I_{H^{(i)}}(Y_i; X_i|\widehat{H}^{(i)})$ is the MI for a particular realization of the channel. It is assumed here that the channel $H^{(i)}$ may vary from block to block (while allowing an asymptotic regime). The MI assuming perfect channel knowledge and for a particular realization is $I_{H^{(i)}}(Y_i; X_i|H^{(i)})$. Let us temporarily drop the block index $(i)$.
In the following, we derive a weighting matrix that appears in the asymptotic MI decrease and in the optimal semiblind channel estimate. The first order derivative of $I_H(Y; X|\widehat{H})$ with respect to $\widetilde{H} = H - \widehat{H}$, evaluated at $\widetilde{H} = 0$, is zero:
$$\begin{aligned}
\frac{\partial}{\partial \widetilde{h}}\, I_H(Y;X|\widehat{H})\Big|_{\widetilde{H}=0} &= \frac{\partial}{\partial \widetilde{h}}\big[h(X) - h_H(X|Y,\widehat{H})\big]\Big|_{\widetilde{H}=0}\\
&= \int\!\!\int \frac{p_H(Y,X|H)}{p_H(X|Y,H)}\; \frac{\partial}{\partial \widetilde{h}}\, p_H(X|Y,\widehat{H})\Big|_{\widetilde{H}=0}\, dX\, dY\\
&= \int\!\!\int p_H(Y|H)\; \frac{\partial}{\partial \widetilde{h}}\, p_H(X|Y,\widehat{H})\Big|_{\widetilde{H}=0}\, dX\, dY\\
&= \int p_H(Y|H)\; \frac{\partial}{\partial \widetilde{h}}\Big(\underbrace{\int p_H(X|Y,\widehat{H})\, dX}_{=1}\Big)\Big|_{\widetilde{H}=0}\, dY \;=\; 0\,. 
\end{aligned} \qquad (13)$$
The second order derivative w.r.t.
$\widetilde{H}$, evaluated at $0$, is:
$$\frac{\partial}{\partial \widetilde{h}}\,\frac{\partial I_H(Y;X|\widehat{H})}{\partial \widetilde{h}^T}\Big|_{\widetilde{H}=0}
= -\,\mathrm{E}\,\frac{\partial}{\partial \widetilde{h}}\,\frac{\partial \ln p_H(Y|\widehat{H})}{\partial \widetilde{h}^T}\Big|_{\widetilde{H}=0}
+ \mathrm{E}\,\frac{\partial}{\partial \widetilde{h}}\,\frac{\partial \ln p_H(Y|X,\widehat{H})}{\partial \widetilde{h}^T}\Big|_{\widetilde{H}=0}
= J_Y(H) - J_{Y|X}(H) \qquad (14)$$
$J_Y$ and $J_{Y|X}$ are FIMs describing the MI decrease due to the channel estimation error, evaluated at $\widetilde{H} = 0$.
To compute $J_Y$ and $J_{Y|X}$, we consider a receiver point of view in which, given a certain realization $H$ and a certain $\widehat{H}$, $\widetilde{H}$ is an unknown constant. Then $p_H(Y|X,\widehat{H}) = p\big(V + (\widehat{H} + \widetilde{H})X \,\big|\, X, \widehat{H}\big) = p\big(V + \widetilde{H} X\,\big|\,X\big) = p_{\widetilde{H}}(\widetilde{V}|X)$, where $\widetilde{V} = V + \widetilde{H} X$ and the variable $\widetilde{V}|X$ follows the same distribution as $V$ but with an offset (mean) $\widetilde{H} X$ (the notation assumes $N_i = 1$). Now:
$$J_{Y|X}(H) = -\,\mathrm{E}\,\frac{\partial}{\partial \widetilde{h}}\,\frac{\partial \ln p_{\widetilde{H}}(\widetilde{V}|X)}{\partial \widetilde{h}^T}\Big|_{\widetilde{H}=0}\,, \qquad J_Y(H) = \mathrm{E}\big(B\, B^T\big)\,, \quad B = \frac{\partial \ln p_H(Y|\widehat{H})}{\partial \widetilde{h}}\Big|_{\widetilde{H}=0}$$
$$B = \frac{1}{p_H(Y|H)}\,\frac{\partial \int p_H(Y,X|\widehat{H})\, dX}{\partial \widetilde{h}}\Big|_{\widetilde{H}=0}
= \frac{1}{p_H(Y|H)}\,\frac{\partial \int p_H(Y|X,\widehat{H})\, p(X)\, dX}{\partial \widetilde{h}}\Big|_{\widetilde{H}=0}
= \frac{1}{p_H(Y|H)}\int \frac{\partial p_{\widetilde{H}}(\widetilde{V}|X)}{\partial \widetilde{h}}\Big|_{\widetilde{H}=0}\, p(X)\, dX\,.$$
For $\widehat{H}$ in a small neighborhood of $H$, the MI decrease is approximated by a quadratic function:
$$I_H(Y;X|\widehat{H}) - I_H(Y;X|H) = -\,\widetilde{h}^T W(H)\, \widetilde{h} = -\,(\widehat{h} - h)^T \big(J_{Y|X}(H) - J_Y(H)\big)\,(\widehat{h} - h)\,. \qquad (15)$$
The weighting matrix $W(H) = J_{Y|X}(H) - J_Y(H)$ is non-negative definite since $I_H(Y;X|\widehat{H}) \leq I_H(Y;X|H)$.
Example: For white Gaussian and decorrelated $V_i$ and $X_i$, $\widetilde{V}_i|X_i$ is Gaussian with mean $\widetilde{H} X_i$ and covariance $\sigma_v^2 I$:
$$p_{\widetilde{H}}(\widetilde{V}_i|X_i) = (2\pi\sigma_v^2)^{-N_i N_r} \exp\Big(-\frac{\big\|\widetilde{V}_i - (I_{N_i} \otimes \widetilde{H})\, X_i\big\|^2}{2\sigma_v^2}\Big)\,.$$
Then $J_{Y_i}(H) = 0$ and $J_{Y_i|X_i}(H) = N_i \frac{\sigma_x^2}{\sigma_v^2}\, I$. As a result, $W_i(H) = W_i = N_i \frac{\sigma_x^2}{\sigma_v^2}\, I$ is constant. More generally, for any Gaussian noise $V_i$ and zero mean input $X_i$: $J_{Y_i}(H) = 0$, $J_{Y_i|X_i}(H) = J_{Y_i|X_i}$ is constant and $W_i(H) = W_i = J_{Y_i|X_i}$.
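The scalar real-valued instance of this example ($N_i = N_r = N_t = 1$, a simplification of ours) can be checked by Monte Carlo: the score of $p_{\widetilde H}(\widetilde V|X)$ at $\widetilde H = 0$ is $XV/\sigma_v^2$, so $J_{Y|X} = \sigma_x^2/\sigma_v^2$:

```python
import numpy as np

rng = np.random.default_rng(3)

# Scalar real case: ln p(V~|X) = -(V~ - h~ X)^2 / (2 sv2) + const, so the
# score at h~ = 0 is X*V/sv2 and the FIM is E[(X V)^2]/sv2^2 = sx2/sv2.
sx2, sv2, n = 2.0, 0.5, 400_000
X = np.sqrt(sx2) * rng.standard_normal(n)
V = np.sqrt(sv2) * rng.standard_normal(n)
J_YX = np.mean((X * V / sv2) ** 2)   # Monte-Carlo estimate of J_{Y|X}
print(J_YX, sx2 / sv2)               # close to sigma_x^2 / sigma_v^2
```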
We now want to find the best channel estimator, i.e., the one that minimizes the MI decrease. We then need to minimize the cost function $I(Y;X|H) - I(Y;X|\widehat{H}(S)) = \mathrm{E}\big\{I_H(Y;X|H) - I_H(Y;X|\widehat{H}(S))\big\}$, where for every block $i$, $\widehat{H}$ is based on $S^{(i)} = (X_p, X_1^{i-1}, Y)$. Asymptotically,
$$\min_{\widehat{h}(S)} \big(I(Y;X|H) - I(Y;X|\widehat{H}(S))\big) \;\Rightarrow\; \mathrm{E}\Big[\min_{\widehat{h}(S)} \mathrm{E}\big\{(\widehat{h}(S) - h)^T\, W(H)\,(\widehat{h}(S) - h)\,\big|\,S\big\}\Big]$$
$$\Rightarrow\; \widehat{h}_{opt}(S) = \big(\mathrm{E}\{W(H)|S\}\big)^{-1}\,\big(\mathrm{E}\{W(H)\, h\,|S\}\big)\,. \qquad (16)$$
So $\widehat{h}_{opt}$ is a weighted MMSE estimate. For a centered input $X$ and Gaussian noise $V$, $W(H) = W$ and the optimal channel estimator is the MMSE estimator $\widehat{h}_{opt}(S) = W^{-1}\,\mathrm{E}\{W h|S\} = \mathrm{E}\{h|S\}$. The minimum mean MI decrease w.r.t. the perfectly known channel case is:
$$I(Y;X|H) - I(Y;X|\widehat{H}_{opt}(S)) = \mathrm{E}\big[h^T W(H)\, h - \big(\mathrm{E}\{W(H)h|S\}\big)^T \big(\mathrm{E}\{W(H)|S\}\big)^{-1} \big(\mathrm{E}\{W(H)h|S\}\big)\big],$$
and all this so far for block $(i)$. The minimum mean MI decrease for the complete burst becomes, using the recursive MI decomposition of Section 2:
$$\sum_{i=1}^{Q}\big[I(Y_i;X_i|H^{(i)}) - I(Y_i;X_i|\widehat{H}^{(i)}_{opt}(S^{(i)}))\big] = \sum_{i=1}^{Q} \mathrm{E}\big[h^T W_i(H^{(i)})\, h - \big(\mathrm{E}\{W_i h|S^{(i)}\}\big)^T \big(\mathrm{E}\{W_i|S^{(i)}\}\big)^{-1} \big(\mathrm{E}\{W_i h|S^{(i)}\}\big)\big]$$
where $W_i = W_i(H^{(i)}) = J_{Y_i|X_i}(H^{(i)}) - J_{Y_i}(H^{(i)})$. In the case of a channel that is constant over the different blocks (i.e., $H^{(i)} = H$, $i = 1\ldots Q$), $\sum_{i=1}^{Q} I(Y_i;X_i|H^{(i)})$ gets modified to $I(Y_d;X_d|H)$ and $W_i(H^{(i)})$ to $W_i(H)$.
3.3 Deterministic channel
In this case we have no prior information on the channel. The channel is constant during the burst with an unknown value $H$. The coherent capacity is then $I(Y_d; X_d|Y_p, X_p, H)$. For a given realization $Y^o$ of $Y$ and a (variable) channel $\widehat{H}$, let us define
$$G(Y^o, X_p, \widehat{H}) = h(X) - \mathrm{E}\big({-\ln q(X|Y^o, \widehat{H})}\big) = h(X_d) - \mathrm{E}\big({-\ln q(X|Y_p^o, Y_d^o, \widehat{H})}\big)$$
where the expectation here is with respect to $X_d$ ($X_p$ is known), and $q(X|Y_p^o, Y_d^o, \widehat{H}) = p(X|Y_p^o, Y_d^o, H)\big|_{H = \widehat{H}}$. Let us introduce a partition of $Y_d$ in which the different blocks have the same lengths $N_i = N_1$, $i = 1\ldots Q$, and are i.i.d. Then, in
$$\frac{1}{T}\, G(Y^o, X_p, \widehat{H}) = \frac{1}{T}\Big[\ln q(X_p|Y_p^o, \widehat{H}) + \sum_{i=1}^{Q}\big(h(X_i) + \mathrm{E} \ln q(X_i|Y_i^o, \widehat{H})\big)\Big],$$
the averaging over the blocks tends asymptotically to an expectation w.r.t. $Y_d^o$. We conclude that:
$$\lim_{T\to\infty} \frac{1}{T}\, G(Y^o, X_p, H) = \lim_{T\to\infty} \frac{1}{T}\Big\{\ln p(X_p|Y_p^o, H) + \sum_{i=1}^{Q}\big(h(X_i) + \mathrm{E} \ln p(X_i|Y_i^o, H)\big)\Big\} = I(y; x|H)\,. \qquad (17)$$
So
asymptotically, $G(Y^o, X_p, \widehat{H})$ approaches the mutual information. Let us define $G_\infty(\widehat{H}) = \lim_{T\to\infty} \frac{1}{T}\, G(Y^o, X_p, \widehat{H})$. Then
$$\begin{aligned}
G_\infty(H) - G_\infty(\widehat{H}) &= \frac{1}{N_1}\, \mathrm{E}_{X_1 Y_1|H}\big[\ln p(X_1|Y_1, H) - \ln q(X_1|Y_1, \widehat{H})\big]\\
&= \frac{1}{N_1}\int p(Y_1|H)\Big(\int p(X_1|Y_1, H)\, \ln \frac{p(X_1|Y_1, H)}{q(X_1|Y_1, \widehat{H})}\, dX_1\Big)\, dY_1\\
&= \frac{1}{N_1}\, \mathrm{E}_{Y_1|H}\big[D\big(p(X_1|Y_1, H)\,\big\|\,q(X_1|Y_1, \widehat{H})\big)\big] \;\geq\; 0
\end{aligned}$$
(the Kullback-Leibler distance). This means that $G_\infty(\widehat{H})$ is maximized when $q(X_1|Y_1, \widehat{H}) = p(X_1|Y_1, H)$. This fact, combined with the hypothesis that the training part is sufficiently informative to allow, together with the blind information, complete channel identifiability, ensures that asymptotically (for $T_d$ and $T_p$ big enough) $G(Y^o, X_p, \widehat{H})$ is maximized for $\widehat{H} = H$. We can hence use the maximization of $G(Y^o, X_p, \widehat{H})$ as an optimization approach to find a consistent channel estimator.
This method is related to the first iteration of an iterative MAP/ML estimation approach for the input signal/channel, with EM applied to the ML estimation part for the channel. The maximum a posteriori (MAP) estimate of $X_d$ in the MAP/ML approach is:
$$X_d^{MAP} = \arg\max_{X_d}\, \max_{H'}\, \ln q(X|Y^o, H')\,. \qquad (18)$$
Various solution techniques exist for this type of problem.
Consider an alternating maximization (between $X_d$ and $H$) approach in which an expectation over $X_d$ is introduced when maximizing over $H$ (an EM-like approach). The iterations comprise two steps. The first step of the first iteration gives:
$$\widehat{H} = \arg\max_{H'}\, \mathrm{E}\big(\ln q(X|Y^o, H')\big) = \arg\max_{H'}\, G(Y^o, X_p, H')\,. \qquad (19)$$
This is a semiblind cost function for the channel estimation. The second step consists of the MAP estimation of the input assuming the channel $H' = \widehat{H}$.
Remark 1: The recursive decomposition of Section 2 remains valid in the deterministic channel case, and we can proceed by successive detection of the symbols (blocks). To keep the algorithm complexity acceptable, one can choose from a variety of channel updating techniques. Along the lines of Section 2, this approach allows the MI to be maximized for large $T$.
Remark 2: Similarly to the Bayesian case, we can evaluate the asymptotic MI decrease as $\widetilde{h}^T W(H)\, \widetilde{h}$ with $W(H) = J_{Y|X}(H) - J_Y(H)$. In the deterministic case, however, the direct minimization of the MI decrease does not lead to a meaningful channel estimator.

4: Correlated MIMO channel model
In order to improve channel estimation and reduce capacity loss, it is advantageous to exploit correlations in the channel, if present. So consider the frequency-flat MIMO channel $H$ ($N_r \times N_t$), $h = \mathrm{vec}(H)$. The correlated channel model we suggest is:
$$h = S\, g \quad \big({+}\; g_0\, \overline{h}\ \text{for a direct path},\ |g_0| = 1\big)$$
where the elements of $g$ are taken to be i.i.d. Gaussian for a stochastic model. The correlations are captured by $S$.
Special case 1: Bell-Labs/Saturn model:
$$H = R_r^{1/2}\, G\, R_t^{H/2} \;\Rightarrow\; S = R_t^{H/2} \otimes R_r^{1/2}\,, \quad g = \mathrm{vec}(G)\,.$$
Special case 2: Cioffi-Raleigh model (multipath model):
$$H = \sum_i g_i\, a_i b_i^H \;\Rightarrow\; S = \big[\, b_1^* \otimes a_1 \;\; b_2^* \otimes a_2 \;\cdots \big]\,.$$
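Special case 1 can be checked numerically in a real-valued sketch (symmetric square roots, column-major vec; the helper names are ours): $\mathrm{vec}(R_r^{1/2} G R_t^{1/2}) = (R_t^{1/2} \otimes R_r^{1/2})\,\mathrm{vec}(G)$, i.e. $h = Sg$, and consequently $\mathrm{E}[h h^T] = R_t \otimes R_r$:

```python
import numpy as np

rng = np.random.default_rng(5)

def sqrtm_sym(R):
    """Symmetric PSD matrix square root via eigendecomposition."""
    w, U = np.linalg.eigh(R)
    return U @ np.diag(np.sqrt(np.clip(w, 0, None))) @ U.T

def rand_cov(n):
    A = rng.standard_normal((n, n))
    return A @ A.T / n + np.eye(n)      # well-conditioned covariance

# Real-valued sketch of special case 1: H = Rr^{1/2} G Rt^{1/2}
# => vec(H) = (Rt^{1/2} kron Rr^{1/2}) vec(G), i.e. h = S g.
Nr, Nt = 3, 2
Rr, Rt = rand_cov(Nr), rand_cov(Nt)
Rr12, Rt12 = sqrtm_sym(Rr), sqrtm_sym(Rt)

G = rng.standard_normal((Nr, Nt))
h = (Rr12 @ G @ Rt12).flatten(order="F")       # vec(H), column-major
S = np.kron(Rt12, Rr12)
g = G.flatten(order="F")                       # vec(G)
print(np.allclose(h, S @ g))                   # True: h = S g
```

The second assertion below, $S S^T = R_t \otimes R_r$, is the covariance structure that makes the (effective) number of channel degrees of freedom smaller when $R_t$ or $R_r$ is ill-conditioned.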
The model $h = S\, g$ is straightforwardly extendible to the non-zero delay-spread case.

5: Observations
Capacity approaching channel estimation should exploit prior info + data/decision aided info + (Gaussian) blind info.
The symbol-wise decomposition of the block fading channel capacity involves, for each symbol position in a burst, a channel estimate that is based on: prior channel distribution info, training and detected inputs up to that symbol position, and (Gaussian) blind info in the remaining channel outputs. Hence, symbol-wise Gaussian semiblind (Bayesian) channel estimation is required (blind parts: detected data + Gaussian undetected data). To have asymptotically (in SNR and burst length) negligible capacity loss, enough training symbols are required to ensure (deterministic) identifiability of the parameters that cannot be identified blindly (from the Gaussian undetected symbols); hence blind (Gaussian) info reduces training data requirements. By exploiting channel correlations, the (effective) number of degrees of freedom in the channel, and hence the training requirements, gets reduced. The channel with i.i.d. entries, while optimal from a capacity point of view, is the worst case from the channel estimation point of view.
The recursive mutual info decomposition may suggest a practical approach for channel estimation. However, simpler practical approaches would pass through the bursts iteratively, with semiblind (blind info = Gaussian undetected symbols) channel estimation in the first pass, and semiblind (blind info = detected data) channel estimation in the next iterations. Prior channel info (and $S$ in the channel model) gets estimated (sufficiently well) by considering the data in multiple bursts jointly (assuming these parameters are invariant across a (large) set of bursts). Whereas we have considered block fading so far in this paper, we conjecture that these results extend to the continuous transmission (CT) case: in steady state, channel estimation should be based on the semi-infinite detected past symbols and on the semi-infinite future blind channel information. A Gauss-Markov model for the channel variations with a certain Doppler bandwidth will prevent perfect channel extraction from this infinite data, though. Finally, the proposed channel model is useful for the introduction of partial channel knowledge at the transmitter. Indeed, if the transmitter knows the channel correlations summarized in $S$ in $h = S\, g$ and only lacks knowledge of the fast fading parameters $g$, the channel capacity may be close to that of the known channel case.

References
[1] E. Biglieri, J. Proakis and S. Shamai, "Fading channels: information-theoretic and communications aspects". IEEE Trans. Info. Theory, vol. 44, no. 6, pp. 2619-2692, Oct 1998.
[2] T.L. Marzetta and B.M. Hochwald, "Capacity of a mobile multiple-antenna communication link in Rayleigh flat fading". IEEE Trans. Info. Theory, vol. 45, no. 1, pp. 139-157, Jan 1999.
[3] L. Zheng and D. Tse, "Communication on the Grassmann Manifold: A Geometric Approach to the Noncoherent Multiple Antenna Channel". To appear in IEEE Trans. Info. Theory.
[4] M. Medard, "The effect upon channel capacity in wireless communications of perfect and imperfect knowledge of the channel". IEEE Trans. Info. Theory, vol. 46, no. 3, pp. 933-946, May 2000.
[5] I.E. Telatar, "Capacity of Multi-antenna Gaussian Channels". Eur. Trans. Telecom., vol. 10, no. 6, pp. 585-595, Nov/Dec 1999.
[6] B. Hassibi and B.M. Hochwald, "How Much Training is Needed in Multiple-Antenna Wireless Links?". Submitted to IEEE Trans. Info. Theory.
[7] A. Medles, D.T.M. Slock and E. De Carvalho, "Linear prediction based semi-blind estimation of MIMO FIR channels". In Proc. Third Workshop on Signal Processing Advances in Wireless Communications (SPAWC'01), pp. 58-61, Taipei, Taiwan, March 2001.