
Empirical Coordination with Two-Sided State Information and Correlated Source and State

Maël Le Treust

ETIS, UMR 8051 / ENSEA, Université Cergy-Pontoise, CNRS, 6, avenue du Ponceau,

95014 CERGY-PONTOISE CEDEX, FRANCE

Email: mael.le-treust@ensea.fr

Abstract—The coordination of autonomous agents is a critical issue for decentralized communication networks. Instead of transmitting information, the agents interact in a coordinated manner in order to optimize a general objective function. A target joint probability distribution is achievable if there exists a code such that the sequences of symbols are jointly typical. Empirical coordination is strongly related to joint source-channel coding with two-sided state information and correlated source and state. This problem is also connected to state communication and is open for non-causal encoding and decoding. We characterize the optimal solutions for perfect channel, for lossless decoding, for independent source and channel, for causal encoding and for causal decoding.

Index Terms—Shannon Theory, State-Dependent Channel, Joint Source-Channel Coding, Empirical Coordination, Empirical Distribution of Symbols, Non-Causal Encoding and Decoding, Causal Encoding, Causal Decoding.

I. INTRODUCTION

The problem of the coordination of autonomous agents is the cornerstone of decentralized communication networks. Communication devices are considered as agents that interact in a coordinated manner in order to achieve a common objective, for example, the transmission of information. This analysis is based on a two-step approach [1], [2]. The first step is the characterization of the set of achievable joint probability distributions over the symbols of source and channel. The second step is the maximization or the minimization of an objective function (source distortion or channel cost) over the set of achievable joint probability distributions.

Empirical coordination has been investigated in [3], [4] with a rate-distortion perspective. In the literature of game theory, the agents coordinate their actions by implementing a coding scheme that satisfies an information constraint [5]. Empirical coordination was investigated in [6] for perfect channel, in [7] for causal encoding, in [8] with channel feedback, in [9] for a multi-user network with an eavesdropper, in [10] for polar codes, in [2] for causal decoding, in [11], [12] with feedback from the source, and in [13] for lossless decoding and correlated source and state.

The characterization of the set of achievable joint probability distributions is equivalent to the joint source-channel coding with two-sided state information [14], [15] and correlated source and state (Fig. 1). It is also related to the problem of state communication [16]. In this paper, we investigate empirical coordination with non-causal encoding and decoding and we characterize the optimal solutions for three particular cases: perfect channel, lossless decoding, and independent source and channel. We also characterize the optimal solutions for causal encoding and causal decoding with two-sided state information.

Fig. 1. Non-causal encoding $f : \mathcal{U}^n \times \mathcal{S}^n \to \mathcal{X}^n$ and decoding $g : \mathcal{Y}^n \times \mathcal{Z}^n \to \mathcal{V}^n$ functions, for the i.i.d. source $P_{usz}$ and the channel $T_{y|xs}$.

Channel model and definitions are presented in Sec. II. In Sec. III, we provide achievability and converse results for non-causal encoding and decoding. In Sec. IV, we characterize optimal solutions for perfect channel, for lossless decoding and for independent source and channel. Causal encoding and decoding are presented in Sec. V. The conclusion is in Sec. VI and sketches of proof are provided in App. A, B, C, D, E.

II. SYSTEM MODEL

The problem under investigation is depicted in Fig. 1. Capital letter $U$ denotes the random variable, lowercase letter $u \in \mathcal{U}$ denotes the realization and $\mathcal{U}^n$ denotes the $n$-time Cartesian product. $U^n$, $S^n$, $Z^n$, $X^n$, $Y^n$, $V^n$ denote the sequences of random variables of source symbols $u^n = (u_1, \ldots, u_n) \in \mathcal{U}^n$, of channel states $s^n \in \mathcal{S}^n$, of state information at the decoder $z^n \in \mathcal{Z}^n$, of channel inputs $x^n \in \mathcal{X}^n$, of channel outputs $y^n \in \mathcal{Y}^n$ and of outputs of the decoder $v^n \in \mathcal{V}^n$. The sets $\mathcal{U}$, $\mathcal{S}$, $\mathcal{Z}$, $\mathcal{X}$, $\mathcal{Y}$, $\mathcal{V}$ are discrete. $\Delta(\mathcal{X})$ stands for the set of probability distributions $\mathcal{P}(X)$ over $\mathcal{X}$. The total variation distance between the probability distributions $\mathcal{Q}$ and $\mathcal{P}$ is denoted by $\|\mathcal{Q} - \mathcal{P}\|_{tv} = \frac{1}{2}\sum_{x \in \mathcal{X}} |\mathcal{Q}(x) - \mathcal{P}(x)|$. The notation $\mathbf{1}(y|x)$ denotes the indicator function, equal to 1 if $y = x$ and 0 otherwise. The Markov chain property is denoted by $Y - X - U$ and holds if $\mathcal{P}(y|x,u) = \mathcal{P}(y|x)$ for all $(u, x, y)$. The information source and the channel states are correlated and i.i.d. distributed with $P_{usz}$. The channel is memoryless with transition probability $T_{y|xs}$. The statistics of $P_{usz}$ and $T_{y|xs}$ are known by both the encoder $C$ and the decoder $D$.

Definition II.1 A non-causal code $c = (f, g) \in \mathcal{C}(n)$ is defined by:

$f : \mathcal{U}^n \times \mathcal{S}^n \longrightarrow \mathcal{X}^n$, (1)
$g : \mathcal{Y}^n \times \mathcal{Z}^n \longrightarrow \mathcal{V}^n$. (2)

$N(u|u^n)$ denotes the number of occurrences of the symbol $u \in \mathcal{U}$ in the sequence $u^n$. The empirical distribution $Q^n \in \Delta(\mathcal{U} \times \mathcal{S} \times \mathcal{Z} \times \mathcal{X} \times \mathcal{Y} \times \mathcal{V})$ of the sequences $(u^n, s^n, z^n, x^n, y^n, v^n)$ is defined by:

$Q^n(u, s, z, x, y, v) = \dfrac{N(u, s, z, x, y, v \,|\, u^n, s^n, z^n, x^n, y^n, v^n)}{n}$, for all $(u, s, z, x, y, v) \in \mathcal{U} \times \mathcal{S} \times \mathcal{Z} \times \mathcal{X} \times \mathcal{Y} \times \mathcal{V}$. (3)

Fix a target probability distribution $\mathcal{Q} \in \Delta(\mathcal{U} \times \mathcal{S} \times \mathcal{Z} \times \mathcal{X} \times \mathcal{Y} \times \mathcal{V})$; the error probability of the code $c \in \mathcal{C}(n)$ is defined by:

$P_e(c) = \mathcal{P}_c\big( \|Q^n - \mathcal{Q}\|_{tv} \geq \varepsilon \big)$, (4)

where $Q^n \in \Delta(\mathcal{U} \times \mathcal{S} \times \mathcal{Z} \times \mathcal{X} \times \mathcal{Y} \times \mathcal{V})$ is the random variable of the empirical distribution induced by the code $c \in \mathcal{C}(n)$ and the probability distributions $P_{usz}$, $T_{y|xs}$.

Definition II.2 A probability distribution $\mathcal{Q} \in \Delta(\mathcal{U} \times \mathcal{S} \times \mathcal{Z} \times \mathcal{X} \times \mathcal{Y} \times \mathcal{V})$ is achievable if for all $\varepsilon > 0$, there exists an $\bar{n} \in \mathbb{N}$ such that for all $n \geq \bar{n}$, there exists a code $c \in \mathcal{C}(n)$ that satisfies:

$P_e(c) = \mathcal{P}_c\big( \|Q^n - \mathcal{Q}\|_{tv} \geq \varepsilon \big) \leq \varepsilon$. (5)

If the error probability $P_e(c)$ is small, the empirical frequency of the symbols $(u, s, z, x, y, v) \in \mathcal{U} \times \mathcal{S} \times \mathcal{Z} \times \mathcal{X} \times \mathcal{Y} \times \mathcal{V}$ is close to the target probability distribution $\mathcal{Q}(u, s, z, x, y, v)$, i.e. the sequences $(U^n, S^n, Z^n, X^n, Y^n, V^n) \in A^{\star n}_{\varepsilon}(\mathcal{Q})$ are jointly typical with large probability. In that case, the sequences of symbols are coordinated empirically.

The performance of the coordination is evaluated by an objective function $\Phi : \mathcal{U} \times \mathcal{S} \times \mathcal{Z} \times \mathcal{X} \times \mathcal{Y} \times \mathcal{V} \mapsto \mathbb{R}$, as stated in [2] and [13]. This approach is powerful since minimizing the expectation $\mathbb{E}[d(U, V)]$ over the set of achievable distributions directly provides the minimal distortion level $D$. This is valid for any such function, e.g. the channel cost $c(x)$ or the utility $\Phi(u, s, z, x, y, v)$ of a decentralized network [5].
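To make Definitions II.1 and II.2 concrete, the following minimal Python sketch (ours, not part of the paper; the alphabets, sequences and function names are illustrative assumptions) computes the empirical distribution (3), the total variation test of (4)-(5), and the expectation of an objective function under a distribution:

```python
from collections import Counter

def empirical_distribution(*sequences):
    """Empirical distribution Q^n of (3): Q^n(b) = N(b | b^n) / n."""
    n = len(sequences[0])
    return {b: c / n for b, c in Counter(zip(*sequences)).items()}

def tv_distance(q, p):
    """Total variation ||Q - P||_tv = 1/2 * sum_b |Q(b) - P(b)|."""
    support = set(q) | set(p)
    return 0.5 * sum(abs(q.get(b, 0.0) - p.get(b, 0.0)) for b in support)

def expected_objective(q, phi):
    """E_Q[Phi], e.g. a distortion d(u, v) or a channel cost c(x)."""
    return sum(prob * phi(*b) for b, prob in q.items())

# Toy example over pairs (u, v) only, instead of the full tuple
# (u, s, z, x, y, v) of the paper.
u = [0, 1, 1, 0, 1, 0]
v = [0, 1, 0, 0, 1, 0]
qn = empirical_distribution(u, v)
target = {(0, 0): 0.5, (1, 1): 0.3, (1, 0): 0.1, (0, 1): 0.1}
print(tv_distance(qn, target))                      # coordination error
print(expected_objective(qn, lambda u, v: u != v))  # Hamming distortion
```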

III. ACHIEVABILITY AND CONVERSE RESULTS

We provide necessary conditions and sufficient conditions for non-causal encoding and decoding. This problem remains open.

Theorem III.1 (Non-Causal Encoding and Decoding)
1) If the joint probability distribution $\mathcal{Q}(u, s, z, x, y, v)$ is achievable, then it satisfies the marginal conditions and Markov chains:

$\mathcal{Q}(u, s, z) = P_{usz}(u, s, z)$, $\mathcal{Q}(y|x, s) = T(y|x, s)$, $Y - (X, S) - (U, Z)$, $Z - (U, S) - (X, Y)$. (6)

2) The joint probability distribution $P_{usz}(u, s, z) \otimes \mathcal{Q}(x|u, s) \otimes T(y|x, s) \otimes \mathcal{Q}(v|u, s, z, x, y)$ is achievable if:

$\max_{\mathcal{Q} \in \mathbb{Q}} \big[ I(W_1, W_2; Y, Z) - I(W_1, W_2; U, S) \big] > 0$. (7)

3) The joint probability distribution $P_{usz}(u, s, z) \otimes \mathcal{Q}(x|u, s) \otimes T(y|x, s) \otimes \mathcal{Q}(v|u, s, z, x, y)$ is not achievable if:

$\max_{\mathcal{Q} \in \mathbb{Q}} \big[ I(W_1; Y, Z | W_2) - I(W_2; U, S | W_1) \big] < 0$. (8)

$\mathbb{Q}$ is the set of distributions $\mathcal{Q} \in \Delta(\mathcal{U} \times \mathcal{S} \times \mathcal{Z} \times \mathcal{W}_1 \times \mathcal{W}_2 \times \mathcal{X} \times \mathcal{Y} \times \mathcal{V})$ with auxiliary random variables $(W_1, W_2)$ such that:

$\sum_{(w_1, w_2) \in \mathcal{W}_1 \times \mathcal{W}_2} \mathcal{Q}(u, s, z, w_1, w_2, x, y, v) = P_{usz}(u, s, z) \otimes \mathcal{Q}(x|u, s) \otimes T(y|x, s) \otimes \mathcal{Q}(v|u, s, z, x, y)$,
$Y - (X, S) - (U, Z, W_1, W_2)$,
$Z - (U, S) - (X, Y, W_1, W_2)$,
$V - (Y, Z, W_1, W_2) - (U, S, X)$.

The probability distribution $\mathcal{Q} \in \mathbb{Q}$ decomposes as follows:

$P_{usz}(u, s, z) \otimes \mathcal{Q}(x, w_1, w_2|u, s) \otimes T(y|x, s) \otimes \mathcal{Q}(v|y, z, w_1, w_2)$.

The supports of the auxiliary random variables $(W_1, W_2)$ are bounded by $\max(|\mathcal{W}_1|, |\mathcal{W}_2|) \leq (|\mathcal{B}| + 1) \cdot (|\mathcal{B}| + 2)$ with $\mathcal{B} = \mathcal{U} \times \mathcal{S} \times \mathcal{Z} \times \mathcal{X} \times \mathcal{Y} \times \mathcal{V}$.

Remark III.2 It is possible to send an additional message $m \in \mathcal{M}$ with rate $R$ corresponding to the left-hand side of equation (7), while still satisfying the coordination requirement. This remark also extends to the other results of this article.

Sketches of proof are available in App. A and B. The achievability result of Theorem III.1 is based on hybrid coding [17], [18] and is stated in [7], with a unique auxiliary random variable $W = (W_1, W_2)$ and without the state information $S$ and $Z$. Empirical coordination is obtained by considering $(U, S)$ as state information at the encoder and $(Y, Z)$ as state information at the decoder. This open problem is also related to "state communication" [16], which is solved for Gaussian channels [19] but remains open for non-causal encoding and decoding. In the following, we connect the duality result of [15] and the separation result of [14] to empirical coordination.
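As an illustration of how the information constraint (7) can be evaluated for a given candidate distribution, here is a small Python sketch (an illustration of ours, not the paper's method); the joint distribution is represented as an 8-dimensional array over $(U, S, Z, W_1, W_2, X, Y, V)$, and the random array below only exercises the computation — a valid member of $\mathbb{Q}$ must additionally satisfy the marginal and Markov conditions above:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a pmf given as an array."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def marginal(joint, axes):
    """Marginal pmf of the listed axes (sums out all the others)."""
    drop = tuple(a for a in range(joint.ndim) if a not in axes)
    return joint.sum(axis=drop)

def mutual_information(joint, axes_a, axes_b):
    """I(A;B) = H(A) + H(B) - H(A,B) for disjoint groups of axes."""
    return (entropy(marginal(joint, axes_a))
            + entropy(marginal(joint, axes_b))
            - entropy(marginal(joint, tuple(axes_a) + tuple(axes_b))))

# Axis order (U, S, Z, W1, W2, X, Y, V) = (0, 1, 2, 3, 4, 5, 6, 7),
# all alphabets binary here for the sake of the example.
rng = np.random.default_rng(0)
joint = rng.random((2,) * 8)
joint /= joint.sum()

# Constraint (7): I(W1, W2; Y, Z) - I(W1, W2; U, S) > 0.
gap = (mutual_information(joint, (3, 4), (6, 2))
       - mutual_information(joint, (3, 4), (0, 1)))
print(gap)
```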

IV. OPTIMAL SOLUTIONS FOR PARTICULAR CASES

In this section, we characterize the joint probability distributions that are achievable for three particular cases.

A. Perfect Channel, defined by $T_{y|xs} = \mathbf{1}(y|x)$ as in Fig. 2

Fig. 2. The perfect channel is defined by $T_{y|xs} = \mathbf{1}(y|x)$.

Theorem IV.1 (Perfect Channel)
1) If the joint probability distribution $\mathcal{Q}(u, s, z, x, v)$ is achievable, then it satisfies:

$\mathcal{Q}(u, s, z) = P_{usz}(u, s, z)$, $Z - (U, S) - X$. (9)

2) The probability distribution $P_{usz}(u, s, z) \otimes \mathcal{Q}(x|u, s) \otimes \mathcal{Q}(v|u, s, z, x)$ is achievable if (10) holds. The converse holds.

$\max_{\mathcal{Q} \in \mathbb{Q}_p} \big[ I(W_2; Z | X) + H(X) - I(X, W_2; U, S) \big] > 0$, (10)

$\mathbb{Q}_p$ is the set of distributions $\mathcal{Q}_{uszw_2xv}$ with auxiliary random variable $W_2$ satisfying the marginal conditions and $Z - (U, S) - (X, W_2)$ and $V - (X, Z, W_2) - (U, S)$. The support satisfies $|\mathcal{W}_2| \leq |\mathcal{B}| + 1$ and $\mathcal{Q} \in \mathbb{Q}_p$ decomposes as:

$P_{usz}(u, s, z) \otimes \mathcal{Q}(x|u, s) \otimes \mathcal{Q}(w_2|u, s, x) \otimes \mathcal{Q}(v|x, z, w_2)$.

Achievability comes from replacing $Y$ and $W_1$ by $X$ in Theorem III.1 and the converse is in App. C. This result extends the coding theorem of Wyner-Ziv [20], with distortion $d(u, v)$, to the framework of coordination. The message of rate $R$ corresponds to the channel inputs $X^n$ of rate $\log_2 |\mathcal{X}|$ and the optimal distortion level $D$ is obtained by taking the expectation $\mathbb{E}[d(U, V)]$, as in Corollary V.2. The main difference is that the symbols $(U, S, V)$ are coordinated with $X$.

B. Lossless Decoding, defined by $\mathcal{Q}(v|u, s, z, x, y) = \mathbf{1}(\hat{u}|u)$

The characterization for lossless decoding with correlated source and state is in [13]. Achievability is also a particular case of Theorem III.1, obtained by replacing the random variables $V$ and $W_2$ by $U$. The joint distribution $P_{usz}(u, s, z) \otimes \mathcal{Q}(x|u, s) \otimes T(y|x, s) \otimes \mathbf{1}(\hat{u}|u)$ is achievable if (11) holds. The converse holds.

$\max_{\mathcal{Q} \in \mathbb{Q}_l} \big[ I(U, W_1; Y, Z) - I(W_1; S | U) - H(U) \big] > 0$, (11)

$\mathbb{Q}_l$ is the set of distributions $\mathcal{Q}_{uszw_1xy\hat{u}}$ satisfying the marginal conditions and the Markov chains $Y - (X, S) - (U, Z, W_1)$ and $Z - (U, S) - (X, Y, W_1)$. This result extends the coding theorem of Gel'fand-Pinsker [21] to the framework of coordination. The message is a sequence of source symbols $U$ correlated with the states $(S, Z)$ and the channel inputs $X$. The duality between equations (10) and (11) recalls the duality between channel capacity and rate-distortion, as mentioned in [15].
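For the reader's convenience, the chain-rule step behind the substitution for the perfect channel (with $Y = X$ and $W_1 = X$) can be written out; this derivation is our gloss, using only standard identities:

```latex
\begin{align*}
I(W_1, W_2; Y, Z) - I(W_1, W_2; U, S)
  &= I(X, W_2; X, Z) - I(X, W_2; U, S) \\
  &= I(X, W_2; X) + I(X, W_2; Z \mid X) - I(X, W_2; U, S) \\
  &= H(X) + I(W_2; Z \mid X) - I(X, W_2; U, S),
\end{align*}
```

which is exactly the expression maximized in (10).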

C. Independent Source $(U, Z, V)$ and Channel $(S, X, Y)$, Fig. 3

Fig. 3. The random variables of the source $(U, Z, V)$ are independent of the random variables of the channel $(S, X, Y)$.

Theorem IV.2 (Independent Source and Channel)
1) If the product of probability distributions $\mathcal{Q}(u, z, v) \otimes \mathcal{Q}(s, x, y)$ is achievable, then it satisfies:

$\mathcal{Q}(u, z) = P_{uz}(u, z)$, $\mathcal{Q}(s) = P_s(s)$, $\mathcal{Q}(y|x, s) = T(y|x, s)$. (12)

2) The probability distribution $P_{uz}(u, z) \otimes \mathcal{Q}(v|u, z) \otimes P_s(s) \otimes \mathcal{Q}(x|s) \otimes T(y|x, s)$ is achievable if (13) holds. The converse holds.

$\max_{\mathcal{Q} \in \mathbb{Q}_s} \big[ I(W_1; Y) + I(W_2; Z) - I(W_1; S) - I(W_2; U) \big] > 0$, (13)

$\mathbb{Q}_s$ is the set of product distributions $\mathcal{Q}_{uzw_2v} \otimes \mathcal{Q}_{sxw_1y}$ with auxiliary random variables $(W_1, W_2)$ satisfying the conditions:

$Z - U - W_2$, $V - (Z, W_2) - U$, $Y - (X, S) - W_1$.

The supports of $(W_1, W_2)$ are bounded by $(|\mathcal{B}| + 1) \cdot (|\mathcal{B}| + 2)$. The product distribution $\mathcal{Q} \in \mathbb{Q}_s$ decomposes as follows:

$P_{uz}(u, z) \otimes \mathcal{Q}(w_2|u) \otimes \mathcal{Q}(v|z, w_2) \otimes P_s(s) \otimes \mathcal{Q}(x, w_1|s) \otimes T(y|x, s)$.

As mentioned in [2], the independence between the probability distributions of the source and of the channel induces the separation of the source coding and the channel coding. This result is related to Theorem 1 in [14], with expected distortion $d(u, v)$ and channel cost $c(x)$ functions. By considering $(U, Z, W_2, V)$ independent of $(S, X, W_1, Y)$ and introducing three indices $(m, l, j)$ with rates $(R_m, R_l, R_j)$ satisfying

$R_m + R_l \geq I(W_2; U)$, $R_l \leq I(W_2; Z)$, $R_j \geq I(W_1; S)$, $R_m + R_j \leq I(W_1; Y)$,

the achievability scheme of Theorem III.1 becomes a separated coding scheme. The author would like to thank Pablo Piantanida, Matthieu Bloch and Claudio Weidmann for useful discussions about the independence of the auxiliary random variables $(W_1, W_2)$ in the converse proof of Theorem IV.2.
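One way to read condition (13), consistent with the separation result of [14] (this rewriting is our gloss, with the optimizations implicitly restricted to distributions in $\mathbb{Q}_s$ that induce the target): since the maximization is over product distributions, the channel term and the source term decouple,

```latex
\max_{\mathcal{Q}(x, w_1 | s)} \big[ I(W_1; Y) - I(W_1; S) \big]
\;>\;
\min_{\mathcal{Q}(w_2 | u)} \big[ I(W_2; U) - I(W_2; Z) \big],
```

i.e. a Gel'fand-Pinsker-type capacity exceeds a Wyner-Ziv-type rate.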

V. CAUSAL ENCODING AND DECODING

The encoding (resp. decoding) is causal if for all $i \in \{1, \ldots, n\}$, we have $X_i = f_i(U^i, S^i)$ (resp. $V_i = g_i(Y^i, Z^i)$). We characterize the set of achievable joint probability distributions for causal encoding and for causal decoding.

Fig. 4. Causal encoding function $f_i : \mathcal{U}^i \times \mathcal{S}^i \to \mathcal{X}$, for all $i \in \{1, \ldots, n\}$.

Theorem V.1 (Causal Encoding and Non-Causal Decoding)
1) If $\mathcal{Q}(u, s, z, x, y, v)$ is achievable, then it satisfies:

$\mathcal{Q}(u, s, z) = P_{usz}(u, s, z)$, $\mathcal{Q}(y|x, s) = T(y|x, s)$, $Y - (X, S) - (U, Z)$, $Z - (U, S) - (X, Y)$. (14)

2) The distribution $P_{usz}(u, s, z) \otimes \mathcal{Q}(x|u, s) \otimes T(y|x, s) \otimes \mathcal{Q}(v|u, s, z, x, y)$ is achievable if (15) holds. The converse holds.

$\max_{\mathcal{Q} \in \mathbb{Q}_e} \big[ I(W_1, W_2; Y, Z) - I(W_2; U, S | W_1) \big] > 0$, (15)

$\mathbb{Q}_e$ is the set of joint distributions $\mathcal{Q}_{uszw_1w_2xyv}$ with auxiliary random variables $(W_1, W_2)$ satisfying the marginal conditions and:

$(U, S)$ independent of $W_1$,
$X - (U, S, W_1) - W_2$,
$Y - (X, S) - (U, Z, W_1, W_2)$,
$Z - (U, S) - (X, Y, W_1, W_2)$,
$V - (Y, Z, W_1, W_2) - (U, S, X)$.

The supports of $(W_1, W_2)$ are bounded by $(|\mathcal{B}| + 1) \cdot (|\mathcal{B}| + 2)$. The probability distribution $\mathcal{Q} \in \mathbb{Q}_e$ decomposes as follows:

$P_{usz} \otimes \mathcal{Q}_{w_1} \otimes \mathcal{Q}_{w_2|usw_1} \otimes \mathcal{Q}_{x|usw_1} \otimes T_{y|xs} \otimes \mathcal{Q}_{v|yzw_1w_2}$.

Proofs are available in App. D and E. This result is a particular case of Theorem 2 in [7], obtained by considering $(U, S)$ as the information source and $(Y, Z)$ as the channel output. Empirical coordination is equivalent to joint source-channel coding with two-sided state information and correlated source and state.

Theorem V.1 also reduces to Theorem 3 in [16], by considering $(U, S)$ as the channel state and $(Y, Z)$ as the channel output.


When considering strictly causal encoding instead of causal encoding, the optimal solution is obtained by replacing $W_1$ by $X$ in equation (15).

Corollary V.2 (Causal Encoding without Coordination)
Consider a distortion function $d : \mathcal{U} \times \mathcal{V} \mapsto \mathbb{R}$. The distortion level $D \geq 0$ is achievable if and only if:

$\max_{\mathcal{Q}_{w_1},\, \mathcal{Q}_{w_2|usw_1},\, \mathcal{Q}_{x|usw_1},\, \mathcal{Q}_{v|yzw_1w_2} \,:\, \mathbb{E}[d(U,V)] \leq D} \big[ I(W_1, W_2; Y, Z) - I(W_2; U, S | W_1) \big] \geq 0$.

Corollary V.2 is a direct consequence of Theorem V.1: the distortion level $D \geq 0$ is achievable if and only if there exists a joint distribution $P_{usz} \otimes \mathcal{Q}_{w_1} \otimes \mathcal{Q}_{w_2|usw_1} \otimes \mathcal{Q}_{x|usw_1} \otimes T_{y|xs} \otimes \mathcal{Q}_{v|yzw_1w_2}$ that satisfies the above information constraint and $\mathbb{E}[d(U, V)] \leq D$. A similar analysis is mentioned in [2] for the proof of Theorem V.3.
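A numerical check of Corollary V.2 for one candidate distribution could look as follows, reusing entropy, marginal and mutual_information from the Python sketch in Sec. III (again an illustration of ours: the corollary requires maximizing over all admissible distributions, which this snippet does not do):

```python
def conditional_mi(joint, axes_a, axes_b, axes_c):
    """I(A;B|C) = H(A,C) + H(B,C) - H(A,B,C) - H(C)."""
    h = lambda axes: entropy(marginal(joint, tuple(axes)))
    a, b, c = tuple(axes_a), tuple(axes_b), tuple(axes_c)
    return h(a + c) + h(b + c) - h(a + b + c) - h(c)

def corollary_v2_check(joint, d, D):
    """For ONE candidate pmf with axes (U, S, Z, W1, W2, X, Y, V):
    test E[d(U,V)] <= D together with the information constraint
    I(W1, W2; Y, Z) - I(W2; U, S | W1) >= 0 of Corollary V.2."""
    p_uv = marginal(joint, (0, 7))
    distortion = float(sum(p_uv[u, v] * d(u, v)
                           for u in range(p_uv.shape[0])
                           for v in range(p_uv.shape[1])))
    gap = (mutual_information(joint, (3, 4), (6, 2))
           - conditional_mi(joint, (4,), (0, 1), (3,)))
    return distortion <= D and gap >= 0
```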

Causal decoding with non-causal encoding is considered in [2], without $S$ and $Z$. The joint probability distribution $P_{usz} \otimes \mathcal{Q}_{x|us} \otimes T_{y|xs} \otimes \mathcal{Q}_{v|uszxy}$ is achievable if (16) holds. The converse holds.

$\max_{\mathcal{Q}_{xw_1w_2|us},\, \mathcal{Q}_{v|yzw_2}} \big[ I(W_1; Y, Z | W_2) - I(W_1, W_2; U, S) \big] > 0$, (16)

Remark that $V$ depends on $(Y, Z, W_2)$ but not on $W_1$. When considering strictly causal decoding instead of causal decoding, the optimal solution is obtained by replacing $W_2$ by $V$ in equation (16). More details are provided in [2].

VI. CONCLUSION

The problem of empirical coordination is closely related to the joint source-channel coding with two-sided state information and correlation between source and state. These two problems are also related to state communication. We provide achievability and converse results for the non-causal case, which remains open. We characterize the optimal solutions for perfect channel, for lossless decoding, for independent source and channel, for causal encoding and for causal decoding.

APPENDIX

The full versions of the proofs are stated in [22].

A. Sketch of Achievability of Theorem III.1

Consider a distribution $\mathcal{Q}(u, s, z, w_1, w_2, x, y, v) \in \mathbb{Q}$ that achieves the maximum in (7). There exist $\delta > 0$ and a rate $R \geq 0$ such that:

$R \geq I(W_1, W_2; U, S) + \delta$, (17)
$R \leq I(W_1, W_2; Y, Z) - \delta$. (18)

Random codebook. We generate $|\mathcal{M}| = 2^{nR}$ pairs of sequences $(W_1^n(m), W_2^n(m))$ with index $m \in \mathcal{M}$, drawn from the i.i.d. marginal probability distribution $\mathcal{Q}^{\otimes n}_{w_1 w_2}$.

Encoding function. The encoder finds an index $m \in \mathcal{M}$ such that the sequences $(U^n, S^n, W_1^n(m), W_2^n(m)) \in A^{\star n}_{\varepsilon}(\mathcal{Q})$ are jointly typical. It sends $X^n$ drawn from $\mathcal{Q}^{\otimes n}_{x|usw_1w_2}$ depending on $(U^n, S^n, W_1^n(m), W_2^n(m))$.

Decoding function. The decoder finds the index $m \in \mathcal{M}$ such that $(Y^n, Z^n, W_1^n(m), W_2^n(m)) \in A^{\star n}_{\varepsilon}(\mathcal{Q})$ are jointly typical. The decoder returns $V^n$ drawn from $\mathcal{Q}^{\otimes n}_{v|yzw_1w_2}$ depending on $(Y^n, Z^n, W_1^n(m), W_2^n(m))$.

The pair $(W_1, W_2)$ can be replaced by a single $W$. From the properties of typical sequences and the packing and covering lemmas stated in [23, pp. 27, 46 and 208], equations (17) and (18) imply that the expected probability of error is bounded for all $n \geq \bar{n}$:

$\mathbb{E}_c\Big[ \mathcal{P}\big( (U^n, S^n) \notin A^{\star n}_{\varepsilon}(\mathcal{Q}) \big) \Big] \leq \varepsilon$, (19)

$\mathbb{E}_c\Big[ \mathcal{P}\big( \forall m \in \mathcal{M},\ (U^n, S^n, W_1^n(m), W_2^n(m)) \notin A^{\star n}_{\varepsilon}(\mathcal{Q}) \big) \Big] \leq \varepsilon$, (20)

$\mathbb{E}_c\Big[ \mathcal{P}\big( \exists m' \neq m \text{ s.t. } (Y^n, Z^n, W_1^n(m'), W_2^n(m')) \in A^{\star n}_{\varepsilon}(\mathcal{Q}) \big) \Big] \leq \varepsilon$. (21)

There exists a code $c \in \mathcal{C}(n)$ such that the sequences are jointly typical for the distribution $P_{usz} \otimes \mathcal{Q}_{xw_1w_2|us} \otimes T_{y|xs} \otimes \mathcal{Q}_{v|yzw_1w_2}$ with probability more than $1 - 3\varepsilon$.
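Observe that (17) and (18) admit a common rate $R$ exactly when $I(W_1, W_2; Y, Z) - I(W_1, W_2; U, S) \geq 2\delta$, i.e. when the information constraint (7) holds. The joint typicality test used above can be phrased directly through the empirical distribution of Sec. II; a one-function sketch reusing empirical_distribution and tv_distance from the earlier snippet, under the (assumed) total-variation flavor of typicality:

```python
def jointly_typical(sequences, target, eps):
    """(b_1^n, ..., b_k^n) in A*^n_eps(Q) iff the empirical distribution
    of the tuple sequence is eps-close to Q in total variation."""
    return tv_distance(empirical_distribution(*sequences), target) < eps
```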

B. Sketch of Converse of Theorem III.1

Consider a code $c \in \mathcal{C}(n)$ with small error probability $P_e(c)$.

$0 \leq \sum_{i=1}^{n} I(U^n_{i+1}, S^n_{i+1}, Y^n_{i+1}, Z^n_{i+1}; Y_i, Z_i | Y^{i-1}, Z^{i-1}) - \sum_{i=1}^{n} I(Y^{i-1}, Z^{i-1}; U_i, S_i | U^n_{i+1}, S^n_{i+1}, Y^n_{i+1}, Z^n_{i+1})$ (22)

$= \sum_{i=1}^{n} I(W_{1,i}; Y_i, Z_i | W_{2,i}) - \sum_{i=1}^{n} I(W_{2,i}; U_i, S_i | W_{1,i})$ (23)

$\leq n \cdot \max_{\mathcal{Q} \in \mathbb{Q}} \big[ I(W_1; Y, Z | W_2) - I(W_2; U, S | W_1) \big]$. (24)

Eq. (22) is due to the Csiszár sum identity and the properties of mutual information.

Eq. (23) introduces the auxiliary random variables $W_{1,i} = (U^n_{i+1}, S^n_{i+1}, Y^n_{i+1}, Z^n_{i+1})$ and $W_{2,i} = (Y^{i-1}, Z^{i-1})$, satisfying:

$Y_i - (X_i, S_i) - (U_i, Z_i, W_{1,i}, W_{2,i})$, (25)
$Z_i - (U_i, S_i) - (X_i, Y_i, W_{1,i}, W_{2,i})$, (26)
$V_i - (Y_i, Z_i, W_{1,i}, W_{2,i}) - (U_i, S_i, X_i)$. (27)

This is due to the memoryless channel, the i.i.d. source and the non-causal decoding, since $(Y^n, Z^n)$ is included in $(Y_i, Z_i, W_{1,i}, W_{2,i})$.

Eq. (24) comes from taking the maximum over the set $\mathbb{Q}$.
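For completeness, the Csiszár sum identity invoked in (22) (and again in (36)) is the standard one; for arbitrary random vectors $A^n$ and $B^n$, with the variables grouped as in the application above:

```latex
\sum_{i=1}^{n} I(A_{i+1}^{n}; B_i \mid B^{i-1})
  \;=\; \sum_{i=1}^{n} I(B^{i-1}; A_i \mid A_{i+1}^{n}).
```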

C. Sketch of Converse of Theorem IV.1

Consider a code $c \in \mathcal{C}(n)$ with small error probability $P_e(c)$.

$0 = H(X^n, Z^n) - I(X^n, Z^n; U^n, S^n) - H(X^n, Z^n | U^n, S^n)$ (28)

$\leq \sum_{i=1}^{n} H(X_i, Z_i) - \sum_{i=1}^{n} I(X^n, Z^n; U_i, S_i | U^n_{i+1}, S^n_{i+1}) - H(X^n, Z^n | U^n, S^n)$ (29)

$= \sum_{i=1}^{n} I(X^n, Z^n, U^n_{i+1}, S^n_{i+1}; X_i, Z_i) - \sum_{i=1}^{n} I(X^n, Z^n, U^n_{i+1}, S^n_{i+1}; U_i, S_i) - H(X^n, Z^n | U^n, S^n)$ (30)

$\leq \sum_{i=1}^{n} I(X^n, Z_{-i}, U^n_{i+1}, S^n_{i+1}; X_i, Z_i) - \sum_{i=1}^{n} I(X^n, Z_{-i}, U^n_{i+1}, S^n_{i+1}; U_i, S_i)$ (31)

$= \sum_{i=1}^{n} \big[ I(X_i, W_{2,i}; X_i, Z_i) - I(X_i, W_{2,i}; U_i, S_i) \big]$ (32)

$\leq n \cdot \max_{\mathcal{Q} \in \mathbb{Q}_p} \big[ I(X, W_2; X, Z) - I(X, W_2; U, S) \big]$. (33)

Eq. (28) and (29) are due to the properties of mutual information.

Eq. (30) is due to the i.i.d. source $(U, S)$.

Eq. (31) is due to the i.i.d. source $(U, S, Z)$: $H(Z^n | U^n, S^n) = \sum_i H(Z_i | U_i, S_i) = \sum_i H(Z_i | X^n, Z_{-i}, U^n_{i+1}, S^n_{i+1}, U_i, S_i)$.

Eq. (32) introduces the auxiliary random variable $W_{2,i} = (X_{-i}, Z_{-i}, U^n_{i+1}, S^n_{i+1})$ that satisfies the Markov chains $Z_i - (U_i, S_i) - (X_i, W_{2,i})$ and $V_i - (X_i, Z_i, W_{2,i}) - (U_i, S_i)$. This is due to the i.i.d. source $(U, S, Z)$ and the non-causal decoding.

Eq. (33) comes from taking the maximum over the set $\mathbb{Q}_p$.

D. Sketch of Achievability of Theorem V.1

Consider $\mathcal{Q} \in \mathbb{Q}_e$ that achieves the maximum in (15) and a block-Markov code $c \in \mathcal{C}(n)$ over $B \in \mathbb{N}$ blocks of length $n \in \mathbb{N}$, with:

$R \geq I(W_2; U, S | W_1) + \delta$, (34)
$R \leq I(W_1; Y, Z) + I(W_2; Y, Z | W_1) - \delta$. (35)

Random codebook. We generate $|\mathcal{M}| = 2^{nR}$ sequences $W_1^n(m)$ drawn from $\mathcal{Q}^{\otimes n}_{w_1}$ with index $m \in \mathcal{M}$. For each $m \in \mathcal{M}$, we generate the same number $|\mathcal{M}| = 2^{nR}$ of sequences $W_2^n(m, \hat{m})$ with index $\hat{m} \in \mathcal{M}$, drawn from $\mathcal{Q}^{\otimes n}_{w_2|w_1}$ depending on $W_1^n(m)$.

Encoding function. The encoder recalls $m_{b-1}$ and finds $m_b \in \mathcal{M}$ such that the sequences $(U^n_{b-1}, S^n_{b-1}, W_1^n(m_{b-1}), W_2^n(m_{b-1}, m_b)) \in A^{\star n}_{\varepsilon}(\mathcal{Q})$ are jointly typical in block $b-1$. It deduces $W_1^n(m_b)$ for block $b$ and sends $X^n_b$ drawn from $\mathcal{Q}^{\otimes n}_{x|usw_1}$ depending on $(U^n_b, S^n_b, W_1^n(m_b))$.

Decoding function. The decoder recalls $m_{b-1}$ and finds $m_b \in \mathcal{M}$ such that $(Y^n_{b-1}, Z^n_{b-1}, W_1^n(m_{b-1}), W_2^n(m_{b-1}, m_b)) \in A^{\star n}_{\varepsilon}(\mathcal{Q})$ and $(Y^n_b, Z^n_b, W_1^n(m_b)) \in A^{\star n}_{\varepsilon}(\mathcal{Q})$ are jointly typical. It returns $V^n_{b-1}$ drawn from $\mathcal{Q}^{\otimes n}_{v|yzw_1w_2}$ depending on $(Y^n_{b-1}, Z^n_{b-1}, W_1^n(m_{b-1}), W_2^n(m_{b-1}, m_b))$.

[Block-Markov timing diagram over blocks $b-2$, $b-1$, $b$, $b+1$ for $(U^n, S^n)$, $W_1^n$, $W_2^n$, $X^n$, $(Y^n, Z^n)$, $(\hat{W}_1^n, \hat{W}_2^n)$, $V^n$ omitted.]

Equations (34) and (35) imply, for all $n \geq \bar{n}$ and for a large number of blocks $B \in \mathbb{N}$, that the sequences are jointly typical with large probability:

$\mathbb{E}_c\Big[ \mathcal{P}\big( (U^n, S^n) \notin A^{\star n}_{\varepsilon}(\mathcal{Q}) \big) \Big] \leq \varepsilon$,

$\mathbb{E}_c\Big[ \mathcal{P}\big( \forall m \in \mathcal{M},\ (U^n_{b-1}, S^n_{b-1}, W_1^n(m_{b-1}), W_2^n(m_{b-1}, m)) \notin A^{\star n}_{\varepsilon}(\mathcal{Q}) \big) \Big] \leq \varepsilon$,

$\mathbb{E}_c\Big[ \mathcal{P}\big( \exists m \neq m_b \text{ s.t. } \{ (Y^n_b, Z^n_b, W_1^n(m)) \in A^{\star n}_{\varepsilon}(\mathcal{Q}) \} \cap \{ (Y^n_{b-1}, Z^n_{b-1}, W_1^n(m_{b-1}), W_2^n(m_{b-1}, m)) \in A^{\star n}_{\varepsilon}(\mathcal{Q}) \} \big) \Big] \leq \varepsilon$.
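A rate $R$ satisfying both (34) and (35) exists precisely when the two bounds are compatible; spelling out this step, using the chain rule $I(W_1; Y, Z) + I(W_2; Y, Z | W_1) = I(W_1, W_2; Y, Z)$:

```latex
I(W_2; U, S \mid W_1) + \delta \;\leq\; I(W_1, W_2; Y, Z) - \delta
\quad\Longleftrightarrow\quad
I(W_1, W_2; Y, Z) - I(W_2; U, S \mid W_1) \;\geq\; 2\delta,
```

which is guaranteed, for $\delta$ small enough, by the information constraint (15).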

E. Sketch of Converse of Theorem V.1

Consider a code $c \in \mathcal{C}(n)$ with small error probability $P_e(c)$.

$0 \leq \sum_{i=1}^{n} I(U^{i-1}, S^{i-1}, Y^n_{i+1}, Z^n_{i+1}; Y_i, Z_i) - \sum_{i=1}^{n} I(Y^n_{i+1}, Z^n_{i+1}; U_i, S_i | U^{i-1}, S^{i-1})$ (36)

$= \sum_{i=1}^{n} I(W_{1,i}, W_{2,i}; Y_i, Z_i) - \sum_{i=1}^{n} I(W_{2,i}; U_i, S_i | W_{1,i})$ (37)

$\leq n \cdot \max_{\mathcal{Q} \in \mathbb{Q}_e} \big[ I(W_1, W_2; Y, Z) - I(W_2; U, S | W_1) \big]$. (38)

Eq. (36) is due to the Csiszár sum identity and the properties of mutual information.

Eq. (37) introduces the auxiliary random variables $W_{1,i} = (U^{i-1}, S^{i-1})$ and $W_{2,i} = (Y^n_{i+1}, Z^n_{i+1})$, satisfying:

$(U_i, S_i)$ independent of $W_{1,i}$, (39)
$X_i - (U_i, S_i, W_{1,i}) - W_{2,i}$, (40)
$Y_i - (X_i, S_i) - (U_i, Z_i, W_{1,i}, W_{2,i})$, (41)
$Z_i - (U_i, S_i) - (X_i, Y_i, W_{1,i}, W_{2,i})$, (42)
$V_i - (Y_i, Z_i, W_{1,i}, W_{2,i}) - (U_i, S_i, X_i)$. (43)

Eq. (38) comes from taking the maximum over the set $\mathbb{Q}_e$.

REFERENCES

[1] V. Anantharam and V. Borkar, "Common randomness and distributed control: A counterexample," Systems & Control Letters, vol. 56, no. 7-8, pp. 568–572, 2007.
[2] M. Le Treust, "Empirical coordination for the joint source-channel coding problem," submitted to IEEE Transactions on Information Theory, http://arxiv.org/abs/1406.4077, 2014.
[3] G. Kramer and S. Savari, "Communicating probability distributions," IEEE Transactions on Information Theory, vol. 53, no. 2, pp. 518–525, 2007.
[4] P. Cuff, H. Permuter, and T. Cover, "Coordination capacity," IEEE Transactions on Information Theory, vol. 56, no. 9, pp. 4181–4206, 2010.
[5] O. Gossner, P. Hernandez, and A. Neyman, "Optimal use of communication resources," Econometrica, vol. 74, no. 6, pp. 1603–1636, 2006.
[6] P. Cuff and L. Zhao, "Coordination using implicit communication," IEEE Information Theory Workshop (ITW), pp. 467–471, 2011.
[7] P. Cuff and C. Schieler, "Hybrid codes needed for coordination over the point-to-point channel," 49th Annual Allerton Conference on Communication, Control, and Computing, pp. 235–239, 2011.
[8] M. Le Treust, "Empirical coordination with feedback," IEEE International Symposium on Information Theory (ISIT), 2015.
[9] M. Le Treust, A. Zaidi, and S. Lasaulce, "An achievable rate region for the broadcast wiretap channel with asymmetric side information," 49th Annual Allerton Conference on Communication, Control, and Computing, pp. 68–75, 2011.
[10] M. Bloch, L. Luzzi, and J. Kliewer, "Strong coordination with polar codes," 50th Annual Allerton Conference on Communication, Control, and Computing, pp. 565–571, 2012.
[11] B. Larrousse and S. Lasaulce, "Coded power control: Performance analysis," IEEE International Symposium on Information Theory (ISIT), pp. 3040–3044, 2013.
[12] B. Larrousse, S. Lasaulce, and M. Bloch, "Coordination in distributed networks via coded actions with application to power control," submitted to IEEE Transactions on Information Theory, http://arxiv.org/abs/1501.03685, 2014.
[13] M. Le Treust, "Correlation between channel state and information source with empirical coordination constraint," IEEE Information Theory Workshop (ITW), pp. 272–276, 2014.
[14] N. Merhav and S. Shamai, "On joint source-channel coding for the Wyner-Ziv source and the Gel'fand-Pinsker channel," IEEE Transactions on Information Theory, vol. 49, no. 11, pp. 2844–2855, 2003.
[15] T. Cover and M. Chiang, "Duality between channel capacity and rate distortion with two-sided state information," IEEE Transactions on Information Theory, vol. 48, no. 6, pp. 1629–1638, June 2002.
[16] C. Choudhuri, Y.-H. Kim, and U. Mitra, "Causal state communication," IEEE Transactions on Information Theory, vol. 59, no. 6, pp. 3709–3719, June 2013.
[17] S. Lim, P. Minero, and Y.-H. Kim, "Lossy communication of correlated sources over multiple access channels," 48th Annual Allerton Conference on Communication, Control, and Computing, pp. 851–858, 2010.
[18] P. Minero, S. Lim, and Y.-H. Kim, "Joint source-channel coding via hybrid coding," IEEE International Symposium on Information Theory (ISIT), pp. 781–785, 2011.
[19] A. Sutivong, M. Chiang, T. M. Cover, and Y.-H. Kim, "Channel capacity and state estimation for state-dependent Gaussian channels," IEEE Transactions on Information Theory, vol. 51, no. 4, pp. 1486–1495, 2005.
[20] A. D. Wyner and J. Ziv, "The rate-distortion function for source coding with side information at the decoder," IEEE Transactions on Information Theory, vol. 22, no. 1, pp. 1–11, 1976.
[21] S. I. Gel'fand and M. S. Pinsker, "Coding for channel with random parameters," Problems of Control and Information Theory, vol. 9, no. 1, pp. 19–31, 1980.
[22] M. Le Treust, "Coding theorems for empirical coordination," internal technical report, to be submitted, 2015.
[23] A. El Gamal and Y.-H. Kim, Network Information Theory. Cambridge University Press, Dec. 2011.
