BETWEEN SYSTEM SIGNATURES AND RELIABILITY FUNCTIONS

JEAN-LUC MARICHAL

Abstract. The concept of signature is a useful tool in the analysis of semicoherent systems with continuous and i.i.d. component lifetimes, especially for the comparison of different system designs and the computation of the system reliability. For such systems, we provide conversion formulas between the signature and the reliability function through the corresponding vector of dominations and we derive efficient algorithms for the computation of any of these concepts from the other. We also show how the signature can be easily computed from the reliability function via basic manipulations such as differentiation, coefficient extraction, and integration.

1. Introduction

Consider an $n$-component system $(C,\phi)$, where $C$ is the set $[n]=\{1,\dots,n\}$ of its components and $\phi\colon\{0,1\}^n\to\{0,1\}$ is its structure function, which expresses the state of the system in terms of the states of its components. We assume that the system is semicoherent, which means that the structure function $\phi$ is nondecreasing in each variable and satisfies the conditions $\phi(0,\dots,0)=0$ and $\phi(1,\dots,1)=1$. We also assume that the components have continuous and i.i.d. lifetimes $T_1,\dots,T_n$.

Samaniego [10] introduced the signature of such a system as the $n$-vector $s=(s_1,\dots,s_n)$ whose $k$-th coordinate $s_k$ is the probability that the $k$-th component failure causes the system to fail. That is,

$s_k \,=\, \Pr(T_S = T_{k:n}), \qquad k=1,\dots,n,$

where $T_S$ denotes the system lifetime and $T_{k:n}$ denotes the $k$-th smallest lifetime.

From this definition one can immediately derive the identity $\sum_{k=1}^{n} s_k = 1$.

It is very often convenient to express the signature vector $s$ in terms of the tail signature of the system, a concept introduced by Boland [3] and named so by Gertsbakh et al. [5]. The tail signature of the system is the $(n+1)$-vector $S=(S_0,\dots,S_n)$ defined from $s$ by

(1)   $S_k \,=\, \sum_{i=k+1}^{n} s_i, \qquad k=0,\dots,n.$

In particular, we have $S_0=1$ and $S_n=0$. Moreover, it is clear that the signature $s$ can be retrieved from the tail signature $S$ through the formula

(2)   $s_k \,=\, S_{k-1}-S_k, \qquad k=1,\dots,n.$

Date: April 30, 2014.

2010 Mathematics Subject Classification. 62N05, 90B25, 94C10.

Key words and phrases. Semicoherent system, system signature, reliability function, domination vector.


Recall also that the reliability function associated with the structure function $\phi$ is the unique multilinear polynomial function $h\colon[0,1]^n\to\mathbb{R}$ whose restriction to $\{0,1\}^n$ is precisely the structure function $\phi$. Since the component lifetimes are independent, this function expresses the reliability of the system in terms of the component reliabilities (for general background see [2, Chap. 2] and for a more recent reference see [9, Section 3.2]).

By identifying the variables of the reliability function, we obtain a real polynomial function $h(x)$ of degree at most $n$. The $n$-vector $d=(d_1,\dots,d_n)$ whose $k$-th coordinate $d_k$ is the coefficient of $x^k$ in $h(x)$ is called the vector of dominations of the system (see, e.g., [11, Sect. 6.2]).

The computation of the signature of a large system by means of the usual methods may be cumbersome and tedious since it requires the evaluation of the structure function $\phi$ at every element of $\{0,1\}^n$. However, Boland et al. [4] observed that the $n$-vectors $s$ and $d$ can always be computed from each other through simple bijective linear transformations (see also [11, Sect. 6.3]). Although these linear transformations were not given explicitly, they show that the signature vector $s$ can be efficiently computed from the domination vector $d$, or equivalently, from the polynomial function $h(x)$. Since Eqs. (1) and (2) provide linear conversion formulas between the vectors $s$ and $S$, we observe that any of the vectors $s$, $S$, and $d$ can be computed from any other by means of a bijective linear transformation (see Figure 1).

Figure 1. Bijective linear transformations between the vectors $s$, $S$, and $d$ (or $h(x)$).

After recalling some basic formulas in Section 2 of this paper, in Section 3 we give these linear transformations explicitly and present them as linear conversion formulas. From these conversion formulas we derive algorithms for the computation of any of these vectors from any other. These algorithms prove to be very efficient since they require at most $\frac{1}{2}n(n+1)$ additions and multiplications.

We also show how the computation of the vectors $s$ and $S$ can be easily performed from basic manipulations of the function $h(x)$ such as differentiation, reflection, coefficient extraction, and integration. For instance, we establish the polynomial identity (see Eq. (26))

(3)   $\sum_{k=1}^{n}\binom{n}{k}\,s_k\,x^k \,=\, \int_0^x (R_{n-1}h')(t+1)\,dt\,,$

where $h'(x)$ is the derivative of $h(x)$ and $(R_{n-1}h')(x)$ is the polynomial function obtained from $h'(x)$ by switching the coefficients of $x^k$ and $x^{n-1-k}$ for $k=0,\dots,n-1$.

Applying this result to the classical 5-component bridge system (see Example 1 below), we can easily see that Eq. (3) reduces to

$5s_1x + 10s_2x^2 + 10s_3x^3 + 5s_4x^4 + s_5x^5 \,=\, 2x^2 + 6x^3 + x^4.$


By equating the corresponding coefficients we immediately obtain the signature vector $s=(0,\tfrac15,\tfrac35,\tfrac15,0)$.

In Section 4 we examine the general non-i.i.d. setting where the component lifetimes $T_1,\dots,T_n$ may be dependent. We show how a certain modification of the structure function enables us to formally extend almost all the conversion formulas and algorithms obtained in Sections 2 and 3 to the general dependent setting.

Finally, we end our paper in Section 5 with some concluding remarks.

2. Preliminaries

Boland [3] showed that every coordinate $s_k$ of the signature vector can be explicitly written in the form

(4)   $s_k \,=\, \sum_{\substack{A\subseteq C\\ |A|=n-k+1}} \frac{1}{\binom{n}{|A|}}\,\phi(A) \;-\; \sum_{\substack{A\subseteq C\\ |A|=n-k}} \frac{1}{\binom{n}{|A|}}\,\phi(A).$

Here and throughout we identify Boolean $n$-vectors $\mathbf{x}\in\{0,1\}^n$ and subsets $A\subseteq[n]$ in the usual way, that is, by setting $x_i=1$ if and only if $i\in A$. Thus we use the same symbol to denote both a function $f\colon\{0,1\}^n\to\mathbb{R}$ and the corresponding set function $f\colon 2^{[n]}\to\mathbb{R}$ interchangeably. For instance, we write $\phi(0,\dots,0)=\phi(\varnothing)$ and $\phi(1,\dots,1)=\phi(C)$.

As mentioned in the introduction, the reliability function associated with the structure function $\phi$ is the multilinear function $h\colon[0,1]^n\to\mathbb{R}$ defined by

(5)   $h(\mathbf{x}) \,=\, h(x_1,\dots,x_n) \,=\, \sum_{A\subseteq C}\phi(A)\prod_{i\in A}x_i\prod_{i\in C\setminus A}(1-x_i).$

It is easy to see that this function can always be put in the unique standard multilinear form

(6)   $h(\mathbf{x}) \,=\, \sum_{A\subseteq C} d(A)\prod_{i\in A}x_i,$

where, for every $A\subseteq C$, the coefficient $d(A)$ is an integer.

By identifying the variables $x_1,\dots,x_n$ in the function $h(\mathbf{x})$, we define its diagonal section $h(x,\dots,x)$, which we have simply denoted by $h(x)$. From Eqs. (5) and (6) we immediately obtain

$h(x) \,=\, \sum_{A\subseteq C}\phi(A)\,x^{|A|}(1-x)^{n-|A|} \,=\, \sum_{A\subseteq C} d(A)\,x^{|A|},$

or equivalently,

(7)   $h(x) \,=\, \sum_{k=0}^{n}\phi_k\,x^k(1-x)^{n-k} \,=\, \sum_{k=0}^{n} d_k\,x^k,$

where

(8)   $\phi_k \,=\, \sum_{\substack{A\subseteq C\\ |A|=k}}\phi(A) \quad\text{and}\quad d_k \,=\, \sum_{\substack{A\subseteq C\\ |A|=k}} d(A), \qquad k=0,\dots,n.$

Clearly, we have $\phi_0=\phi(\varnothing)=0$ and $d_0=d(\varnothing)=h(0)=0$. As already mentioned, the $n$-vector $d=(d_1,\dots,d_n)$ is called the vector of dominations of the system.

Example 1. Consider the bridge structure as indicated in Figure 2. The corresponding structure function is given by

$\phi(x_1,\dots,x_5) \,=\, x_1x_4 \amalg x_2x_5 \amalg x_1x_3x_5 \amalg x_2x_3x_4,$


where $\amalg$ is the (associative) coproduct operation defined by $x\amalg y = 1-(1-x)(1-y)$. The corresponding reliability function, given in Eq. (5), can be computed by expanding the coproducts in $\phi$ and then simplifying the resulting algebraic expression using $x_i^2=x_i$. We have

$h(x_1,\dots,x_5) \,=\, x_1x_4 + x_2x_5 + x_1x_3x_5 + x_2x_3x_4$
$\qquad\qquad -\, x_1x_2x_3x_4 - x_1x_2x_3x_5 - x_1x_2x_4x_5 - x_1x_3x_4x_5 - x_2x_3x_4x_5$
$\qquad\qquad +\, 2\,x_1x_2x_3x_4x_5.$

We then obtain its diagonal section $h(x)=2x^2+2x^3-5x^4+2x^5$ and finally the domination vector $d=(0,2,2,-5,2)$.

Figure 2. Bridge structure.

Example 1 illustrates the important fact that the reliability function $h(\mathbf{x})$ of any system can be easily obtained from the minimal path sets simply by first expressing the structure function as a coproduct over the minimal path sets and then expanding the coproduct and simplifying the resulting algebraic expression (using $x_i^2=x_i$) until it becomes multilinear. The diagonal section $h(x)$ of the reliability function is then obtained by identifying all the variables.

This observation is crucial since, when combined with an efficient algorithm for converting the polynomial function $h(x)$ into the signature vector, it provides an efficient way to compute the signature of any system from its minimal path sets.
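As an illustration, here is a small Python sketch (not part of the paper; the names MIN_PATH_SETS and phi are ours) that takes the minimal path sets of the bridge system of Example 1, counts the path sets of each size, and expands Eq. (7) to recover the domination vector $d$. It uses a brute-force enumeration of all $2^n$ states rather than the symbolic coproduct expansion described above, which is adequate for small $n$ and convenient for checking the example.

```python
from itertools import combinations
from math import comb

# Minimal path sets of the 5-component bridge system of Example 1 (labels as in Figure 2)
MIN_PATH_SETS = [{1, 4}, {2, 5}, {1, 3, 5}, {2, 3, 4}]
N = 5

def phi(subset):
    """Structure function: 1 iff the subset contains a minimal path set."""
    return int(any(p <= subset for p in MIN_PATH_SETS))

# phi_k = number of path sets of size k (Eq. (8))
phi_counts = [
    sum(phi(set(A)) for A in combinations(range(1, N + 1), k))
    for k in range(N + 1)
]

# Expand h(x) = sum_k phi_k x^k (1-x)^(n-k)  (Eq. (7)) into monomial coefficients d_m
d = [0] * (N + 1)
for k, pk in enumerate(phi_counts):
    for m in range(k, N + 1):
        d[m] += pk * (-1) ** (m - k) * comb(N - k, m - k)

print(phi_counts)  # [0, 0, 2, 8, 5, 1]
print(d)           # [0, 0, 2, 2, -5, 2]  ->  h(x) = 2x^2 + 2x^3 - 5x^4 + 2x^5
```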

3. Conversion formulas

Recall that Eq. (6) gives the standard multilinear form of the reliability function $h(\mathbf{x})$. As mentioned for instance in [9, p. 31], the link between the coefficients $d(A)$ and the values $\phi(A)$ is given through the following linear conversion formulas (obtained from the Möbius inversion theorem)

(9)   $\phi(A) \,=\, \sum_{B\subseteq A} d(B) \quad\text{and}\quad d(A) \,=\, \sum_{B\subseteq A}(-1)^{|A|-|B|}\,\phi(B).$

The following proposition yields the linear conversion formulas between the $n$-vectors $d=(d_1,\dots,d_n)$ and $(\phi_1,\dots,\phi_n)$. Note that an alternative form of Eq. (11) was previously found by Samaniego [11, Sect. 6.3].

Proposition 1. We have

(10)   $\phi_k \,=\, \sum_{j=0}^{k}\binom{n-j}{k-j}\,d_j, \qquad k=1,\dots,n,$

and

(11)   $d_k \,=\, \sum_{j=0}^{k}(-1)^{k-j}\binom{n-j}{k-j}\,\phi_j, \qquad k=1,\dots,n.$


Proof. By Eqs. (8) and (9) we have

$\phi_k \,=\, \sum_{\substack{A\subseteq C\\ |A|=k}}\phi(A) \,=\, \sum_{\substack{A\subseteq C\\ |A|=k}}\ \sum_{B\subseteq A} d(B).$

Permuting the sums and then setting $j=|B|$, we obtain

$\phi_k \,=\, \sum_{\substack{B\subseteq C\\ |B|\le k}} d(B)\sum_{\substack{A\supseteq B\\ |A|=k}} 1 \,=\, \sum_{\substack{B\subseteq C\\ |B|\le k}}\binom{n-|B|}{k-|B|}\,d(B) \,=\, \sum_{j=0}^{k}\binom{n-j}{k-j}\sum_{\substack{B\subseteq C\\ |B|=j}} d(B),$

which proves Eq. (10). Formula (11) can be established similarly.

We are now ready to establish conversion formulas and algorithms as announced in the introduction.

3.1. Conversions between s and S. We already know that the linear conversion formulas between the vectors $s$ and $S$ are given by Eqs. (1) and (2). This conversion can also be explicitly expressed by means of a polynomial identity. Let $\sum_{k=1}^{n} s_k x^k$ and $\sum_{k=0}^{n} S_k x^k$ be the generating functions of the vectors $s$ and $S$, respectively. Then we have the polynomial identity

(12)   $\sum_{k=1}^{n} s_k x^k \,=\, 1 + (x-1)\sum_{k=0}^{n} S_k x^k.$

Indeed, using Eq. (2) and summation by parts, we obtain

$\sum_{k=1}^{n} s_k x^k \,=\, \sum_{k=1}^{n}(S_{k-1}-S_k)\,x^k \,=\, x + \sum_{k=1}^{n} S_k\,(x^{k+1}-x^k),$

which proves Eq. (12).

For instance, for the bridge system described in Example 1, the generating functions of the vectors $s$ and $S$ are given by $\tfrac15 x^2+\tfrac35 x^3+\tfrac15 x^4$ and $1+x+\tfrac45 x^2+\tfrac15 x^3$, respectively. We can easily verify that Eq. (12) holds for these functions.
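For completeness, here is a minimal Python sketch of the conversions (1) and (2) (the function names are ours, not from the paper), checked on the bridge signature:

```python
from fractions import Fraction as F

def s_to_S(s):
    """Tail signature S = (S_0, ..., S_n) from s = (s_1, ..., s_n), Eq. (1)."""
    n = len(s)
    return [sum(s[k:], F(0)) for k in range(n + 1)]    # S_k = s_{k+1} + ... + s_n

def S_to_s(S):
    """Signature s = (s_1, ..., s_n) from S = (S_0, ..., S_n), Eq. (2)."""
    return [S[k - 1] - S[k] for k in range(1, len(S))]

s_bridge = [F(0), F(1, 5), F(3, 5), F(1, 5), F(0)]
S_bridge = s_to_S(s_bridge)
print(S_bridge)                         # Fractions equal to (1, 1, 4/5, 1/5, 0, 0)
print(S_to_s(S_bridge) == s_bridge)     # True
```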

3.2. Conversions between S and d. Combining Eq. (1) with Eqs. (4) and (8), we observe that

(13)   $S_k \,=\, \frac{1}{\binom{n}{k}}\sum_{\substack{A\subseteq C\\ |A|=n-k}}\phi(A) \,=\, \frac{1}{\binom{n}{k}}\,\phi_{n-k}, \qquad k=0,\dots,n.$

Recall that a path set of the system is a component subset $A$ such that $\phi(A)=1$.

It follows from Eq. (13) that $\phi_k$ is precisely the number of path sets of size $k$ and that $S_{n-k}$ is the proportion of component subsets of size $k$ which are path sets.

We also observe that the leading coefficient $d_n$ of $h(x)$, also known as the signed domination [1] of $h(x)$, is zero if and only if there are as many path sets of odd sizes as path sets of even sizes. This observation immediately follows from the identity $d_n=\sum_{j=0}^{n}(-1)^{n-j}\phi_j$, obtained by setting $k=n$ in Eq. (11).

Combining Eqs. (10) and (11) with Eq. (13), we immediately obtain the following conversion formulas between the vectors $S$ and $d$.


Proposition 2. We have

(14)   $S_k \,=\, \sum_{j=0}^{n-k}\frac{\binom{n-j}{k}}{\binom{n}{k}}\,d_j \,=\, \sum_{j=0}^{n-k}\frac{\binom{n-k}{j}}{\binom{n}{j}}\,d_j, \qquad k=0,\dots,n,$

(15)   $d_k \,=\, \binom{n}{k}\sum_{j=0}^{k}(-1)^{k-j}\binom{k}{j}\,S_{n-j}, \qquad k=0,\dots,n.$

Equation (15) can be rewritten in a simpler form by using the classical difference operator $\Delta_i$, which maps a sequence $z_i$ to the sequence $\Delta_i z_i = z_{i+1}-z_i$. Defining the $k$-th difference $\Delta^k_i z_i$ of a sequence $z_i$ recursively as $\Delta^0_i z_i = z_i$ and $\Delta^k_i z_i = \Delta_i\,\Delta^{k-1}_i z_i$, we can show by induction on $k$ that

(16)   $\Delta^k_i z_i \,=\, \sum_{j=0}^{k}(-1)^{k-j}\binom{k}{j}\,z_{i+j}.$

Comparing Eq. (15) with Eq. (16) immediately shows that Eq. (15) can be rewritten as

(17)   $d_k \,=\, \binom{n}{k}\,\big(\Delta^k_i S_{n-i}\big)\big|_{i=0}, \qquad k=1,\dots,n,$

and the vector $d$ can then be computed efficiently from a classical difference table (see Table 1).

Table 1. Computation of $d$ from $S$: a difference table whose first column lists $S_n, S_{n-1}, S_{n-2}, S_{n-3},\dots$ and whose $k$-th column lists the scaled differences $\binom{n}{k}\big(\Delta^k_i S_{n-i}\big)\big|_{i=j}$ for $j=0,1,2,\dots$; the entries of the first row are $d_0, d_1,\dots,d_n$.

Setting $D_{j,k}=\binom{n}{k}\big(\Delta^k_i S_{n-i}\big)\big|_{i=j}$, from Eq. (17) we can easily derive the following algorithm for the computation of $d$. This algorithm requires only $\frac{1}{2}n(n+1)$ additions and multiplications.

Algorithm 1. The following algorithm inputs the vector $S$ and outputs the vector $d$. It uses the variables $D_{j,k}$ for $k=0,\dots,n$ and $j=0,\dots,n-k$.

Step 1. For $j=0,\dots,n$, set $D_{j,0}:=S_{n-j}$.
Step 2. For $k=1,\dots,n$
        For $j=0,\dots,n-k$
            $D_{j,k} := \frac{n-k+1}{k}\,\big(D_{j+1,k-1}-D_{j,k-1}\big)$
Step 3. For $k=0,\dots,n$, set $d_k := D_{0,k}$.
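A direct Python transcription of Algorithm 1 might look as follows (an illustrative sketch, not the authors' code; the function name is ours). Exact rational arithmetic is used so that the difference table is computed without rounding:

```python
from fractions import Fraction as F

def algorithm_1(S):
    """Algorithm 1: domination vector d = (d_0, ..., d_n) from the tail
    signature S = (S_0, ..., S_n), via the scaled difference table."""
    n = len(S) - 1
    D = [F(S[n - j]) for j in range(n + 1)]          # Step 1: D_{j,0} = S_{n-j}
    d = [D[0]]                                       # d_0 = D_{0,0}
    for k in range(1, n + 1):                        # Step 2
        D = [F(n - k + 1, k) * (D[j + 1] - D[j]) for j in range(n - k + 1)]
        d.append(D[0])                               # Step 3: d_k = D_{0,k}
    return d

S_bridge = [F(1), F(1), F(4, 5), F(1, 5), F(0), F(0)]
print([int(v) for v in algorithm_1(S_bridge)])   # [0, 0, 2, 2, -5, 2], as in Example 2
```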

Example 2. Consider the bridge system described in Example 1. The corresponding tail signature vector is given by $S=(1,1,\tfrac45,\tfrac15,0,0)$. Forming the difference table (see Table 2) and reading its first row, we obtain the vector $d=(0,2,2,-5,2)$ and therefore the function $h(x)=2x^2+2x^3-5x^4+2x^5$.


Table 2. Computation of $d$ from $S$ (Example 2). Row $j$ lists the entries $D_{j,k}$ for $k=0,\dots,5$; the first row gives $(d_0,d_1,\dots,d_5)=(0,0,2,2,-5,2)$.

0     0     2     2    -5     2
0     1     4    -8     5
1/5   3    -4     2
4/5   1    -2
1     0
1

The converse transformation (14) can then be computed efficiently by the following algorithm, in which we compute the quantities

$S_{j,k} \,=\, \sum_{i=0}^{k}\binom{k}{i}\,\frac{\binom{i+j}{i}}{\binom{n-j}{i}}\,d_{i+j}.$

Algorithm 2. The following algorithm inputs the vector $d$ and outputs the vector $S$. It uses the variables $S_{j,k}$ for $k=0,\dots,n$ and $j=0,\dots,n-k$.

Step 1. For $j=0,\dots,n$, set $S_{j,0}:=d_j$.
Step 2. For $k=1,\dots,n$
        For $j=0,\dots,n-k$
            $S_{j,k} := \frac{j+1}{n-j}\,S_{j+1,k-1} + S_{j,k-1}$
Step 3. For $k=0,\dots,n$, set $S_{n-k} := S_{0,k}$.
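A corresponding sketch of Algorithm 2 (again illustrative, with our own naming), which reverses the previous computation:

```python
from fractions import Fraction as F

def algorithm_2(d):
    """Algorithm 2: tail signature S = (S_0, ..., S_n) from the domination
    vector d = (d_0, ..., d_n)."""
    n = len(d) - 1
    row = [F(dj) for dj in d]                        # Step 1: S_{j,0} = d_j
    S_rev = [row[0]]                                 # S_n = S_{0,0}
    for k in range(1, n + 1):                        # Step 2
        row = [F(j + 1, n - j) * row[j + 1] + row[j] for j in range(n - k + 1)]
        S_rev.append(row[0])                         # Step 3: S_{n-k} = S_{0,k}
    return S_rev[::-1]                               # reorder to (S_0, ..., S_n)

d_bridge = [0, 0, 2, 2, -5, 2]
print(algorithm_2(d_bridge))   # (1, 1, 4/5, 1/5, 0, 0) as Fractions
```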

3.3. Conversions between s and d. The following proposition yields the conversion formulas between the vectors $s$ and $d$. Note that a non-explicit version of Eq. (18) was previously found in Boland et al. [4] (see also Theorem 6.1 in [11]).

Proposition 3. We have

(18)   $s_k \,=\, \sum_{j=0}^{n-k}\frac{\binom{n-k}{j}}{\binom{n}{j}}\,\frac{j+1}{n-j}\,d_{j+1} \,=\, \sum_{j=1}^{n-k+1}\frac{\binom{n-k}{j-1}}{\binom{n}{j}}\,d_j, \qquad k=1,\dots,n,$

(19)   $d_k \,=\, \binom{n}{k}\sum_{j=0}^{k-1}(-1)^{k-1-j}\binom{k-1}{j}\,s_{n-j}, \qquad k=1,\dots,n,$

(20)   $d_k \,=\, \binom{n}{k}\,\big(\Delta^{k-1}_i s_{n-i}\big)\big|_{i=0}, \qquad k=1,\dots,n.$

Proof. Combining Eq. (14) with Eq. (2), we obtain

$s_k \,=\, S_{k-1}-S_k \,=\, \sum_{j=1}^{n-k+1}\frac{\binom{n-k+1}{j}}{\binom{n}{j}}\,d_j - \sum_{j=1}^{n-k}\frac{\binom{n-k}{j}}{\binom{n}{j}}\,d_j \,=\, \sum_{j=1}^{n-k}\frac{\binom{n-k}{j-1}}{\binom{n}{j}}\,d_j + \frac{1}{\binom{n}{n-k+1}}\,d_{n-k+1},$

which proves Eq. (18). By Eq. (2) we have $\Delta_i S_{n-i} = s_{n-i}$ for $i=0,\dots,n-1$. Equation (20) then follows from Eq. (17). Equation (19) then follows immediately from Eq. (20).

Equation (20) shows that $d$ can be efficiently computed directly from $s$ by means of a difference table (see Table 3).

Table 3. Computation of $d$ from $s$: a difference table whose first column lists $\binom{n}{1}s_n, \binom{n}{1}s_{n-1}, \binom{n}{1}s_{n-2},\dots$ and whose $k$-th column lists the entries $\binom{n}{k}\big(\Delta^{k-1}_i s_{n-i}\big)\big|_{i=j-1}$ for $j=1,2,\dots$; the entries of the first row are $d_1,\dots,d_n$.

Setting $d_{j,k}=\binom{n}{k}\big(\Delta^{k-1}_i s_{n-i}\big)\big|_{i=j-1}$, we can also derive the following algorithm for the computation of the vector $d$. This algorithm requires only $\frac{1}{2}n(n-1)$ additions and multiplications.

Algorithm 3. The following algorithm inputs the vector $s$ and outputs the vector $d$. It uses the variables $d_{j,k}$ for $k=1,\dots,n$ and $j=1,\dots,n-k+1$.

Step 1. For $j=1,\dots,n$, set $d_{j,1}:=n\,s_{n-j+1}$.
Step 2. For $k=2,\dots,n$
        For $j=1,\dots,n-k+1$
            $d_{j,k} := \frac{n-k+1}{k}\,\big(d_{j+1,k-1}-d_{j,k-1}\big)$
Step 3. For $k=1,\dots,n$, set $d_k := d_{1,k}$.
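An illustrative Python transcription of Algorithm 3 (the function name and the use of Fractions are our choices):

```python
from fractions import Fraction as F

def algorithm_3(s):
    """Algorithm 3: d = (d_1, ..., d_n) from the signature s = (s_1, ..., s_n)."""
    n = len(s)
    row = [n * F(s[n - j]) for j in range(1, n + 1)]   # Step 1: d_{j,1} = n * s_{n-j+1}
    d = [row[0]]                                       # d_1 = d_{1,1}
    for k in range(2, n + 1):                          # Step 2
        row = [F(n - k + 1, k) * (row[p + 1] - row[p]) for p in range(n - k + 1)]
        d.append(row[0])                               # Step 3: d_k = d_{1,k}
    return d

s_bridge = [F(0), F(1, 5), F(3, 5), F(1, 5), F(0)]
print([int(v) for v in algorithm_3(s_bridge)])   # [0, 2, 2, -5, 2], as in Example 3
```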

Example 3. Consider again the bridge system described in Example 1. The corresponding signature vector is given by $s=(0,\tfrac15,\tfrac35,\tfrac15,0)$. Forming the difference table (see Table 4) and reading its first row, we obtain the vector $d=(0,2,2,-5,2)$ and hence the function $h(x)=2x^2+2x^3-5x^4+2x^5$.

Table 4. Computation of $d$ from $s$ (Example 3). Row $j$ lists the entries $d_{j,k}$ for $k=1,\dots,5$; the first row gives $(d_1,\dots,d_5)=(0,2,2,-5,2)$.

0     2     2    -5     2
1     4    -8     5
3    -4     2
1    -2
0


The converse transformation (18) can then be computed efficiently by the following algorithm, in which we compute the quantities

$s_{j,k} \,=\, \frac{1}{n}\sum_{i=1}^{k}\binom{k-1}{i-1}\,\frac{\binom{i+j-1}{i-1}}{\binom{n-j}{i-1}}\,d_{i+j-1}.$

Algorithm 4. The following algorithm inputs the vector $d$ and outputs the vector $s$. It uses the variables $s_{j,k}$ for $k=1,\dots,n$ and $j=1,\dots,n-k+1$.

Step 1. For $j=1,\dots,n$, set $s_{j,1}:=\frac{1}{n}\,d_j$.
Step 2. For $k=2,\dots,n$
        For $j=1,\dots,n-k+1$
            $s_{j,k} := \frac{j+1}{n-j}\,s_{j+1,k-1} + s_{j,k-1}$
Step 3. For $k=1,\dots,n$, set $s_{n-k+1} := s_{1,k}$.
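And a sketch of the converse Algorithm 4 (illustrative, our own naming; the list index p corresponds to the paper's index j-1, which explains the shifted coefficient):

```python
from fractions import Fraction as F

def algorithm_4(d):
    """Algorithm 4: signature s = (s_1, ..., s_n) from d = (d_1, ..., d_n)."""
    n = len(d)
    row = [F(dj, n) for dj in d]                      # Step 1: s_{j,1} = d_j / n
    s_rev = [row[0]]                                  # s_n = s_{1,1}
    for k in range(2, n + 1):                         # Step 2, with p = j - 1
        row = [F(p + 2, n - p - 1) * row[p + 1] + row[p] for p in range(n - k + 1)]
        s_rev.append(row[0])                          # Step 3: s_{n-k+1} = s_{1,k}
    return s_rev[::-1]

d_bridge = [0, 2, 2, -5, 2]    # (d_1, ..., d_5) for the bridge system
print(algorithm_4(d_bridge))   # (0, 1/5, 3/5, 1/5, 0) as Fractions
```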

3.4. Conversions between S or s and h(x). The conversion formulas between the vectors $s$ and $d$ show that the diagonal section $h(x)$ of the reliability function encodes exactly the signature (or equivalently, the tail signature), no more, no less.

Even though the latter can be computed from the vector $d$ using Eqs. (14) and (18), we will now see how we can compute it by direct and simple algebraic manipulations of the function $h(x)$.

Let $f$ be a univariate polynomial of degree $\le n$,

$f(x) \,=\, a_n x^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0.$

The $n$-reflected of $f$ is the polynomial $R_n f$ obtained from $f$ by switching the coefficients of $x^k$ and $x^{n-k}$ for $k=0,\dots,n$; that is,

$(R_n f)(x) \,=\, a_0 x^n + a_1 x^{n-1} + \cdots + a_{n-1}x + a_n,$

or equivalently, $(R_n f)(x) = x^n f(1/x)$.

Combining Eq. (7) with Eq. (13), we obtain (see also [4])

(21)   $h(x) \,=\, \sum_{k=0}^{n} S_{n-k}\binom{n}{k}\,x^k(1-x)^{n-k}.$

From this equation it follows, as was already observed in [8], that

(22)   $(R_n h)(x+1) \,=\, \sum_{k=0}^{n}\binom{n}{k}S_k\,x^k.$

Thus, $\binom{n}{k}S_k$ can be obtained simply by reading the coefficient of $x^k$ in the polynomial function $(R_n h)(x+1)$. Denoting by $[x^k]\,f(x)$ the coefficient of $x^k$ in a polynomial function $f(x)$, Eq. (22) can be rewritten as

(23)   $\binom{n}{k}S_k \,=\, [x^k]\,(R_n h)(x+1), \qquad k=0,\dots,n.$

From Eq. (23) we immediately derive the following algorithm (see also [8]).

Algorithm 5. The following algorithm inputs $n$ and $h(x)$ and outputs $S$.

Step 1. For $k=0,\dots,n$, let $a_k$ be the coefficient of $x^k$ in the $n$-degree polynomial $(R_n h)(x+1)=(x+1)^n\,h\big(\frac{1}{x+1}\big)$.
Step 2. We have $S_k=a_k/\binom{n}{k}$ for $k=0,\dots,n$.
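Algorithm 5 only needs coefficient manipulations (reflection and the shift $x\mapsto x+1$). A possible Python sketch, with the polynomial represented by its coefficient list in increasing degree (our own convention and naming), is:

```python
from fractions import Fraction as F
from math import comb

def poly_shift(coeffs):
    """Coefficients of g(x+1) given the coefficients (low degree first) of g(x)."""
    n = len(coeffs) - 1
    return [sum(coeffs[m] * comb(m, k) for m in range(k, n + 1)) for k in range(n + 1)]

def algorithm_5(h_coeffs):
    """Algorithm 5: tail signature S from the coefficients (d_0, ..., d_n) of h(x)."""
    n = len(h_coeffs) - 1
    a = poly_shift(list(reversed(h_coeffs)))              # Step 1: (R_n h)(x + 1)
    return [F(a[k], comb(n, k)) for k in range(n + 1)]    # Step 2: S_k = a_k / C(n, k)

h_bridge = [0, 0, 2, 2, -5, 2]    # h(x) = 2x^2 + 2x^3 - 5x^4 + 2x^5
print(algorithm_5(h_bridge))      # (1, 1, 4/5, 1/5, 0, 0) as Fractions
```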


The following proposition yields the analog of Eqs. (22) and (23) for the signature. Here and throughout we denote by $h'(x)$ the derivative of $h(x)$.

Proposition 4. We have

(24)   $k\binom{n}{k}\,s_k \,=\, [x^{k-1}]\,(R_{n-1}h')(x+1), \qquad k=1,\dots,n,$

(25)   $\sum_{k=1}^{n}\binom{n}{k}\,k\,s_k\,x^{k-1} \,=\, (R_{n-1}h')(x+1),$

(26)   $\sum_{k=1}^{n}\binom{n}{k}\,s_k\,x^k \,=\, \int_0^x (R_{n-1}h')(t+1)\,dt\,.$

Proof. By Eq. (7) we have $h'(x)=\sum_{j=0}^{n-1}(j+1)\,d_{j+1}\,x^j$ and therefore

$(R_{n-1}h')(x+1) \,=\, \sum_{j=0}^{n-1}(j+1)\,d_{j+1}\,(x+1)^{n-1-j} \,=\, \sum_{j=0}^{n-1}(j+1)\,d_{j+1}\sum_{k=1}^{n-j}\binom{n-1-j}{k-1}\,x^{k-1} \,=\, \sum_{k=1}^{n} x^{k-1}\sum_{j=0}^{n-k}\binom{n-1-j}{k-1}\,(j+1)\,d_{j+1}.$

Thus, the inner sum in the latter expression is the coefficient of $x^{k-1}$ in the polynomial function $(R_{n-1}h')(x+1)$. Dividing this sum by $k\binom{n}{k}$ and then using Eq. (18), we obtain $s_k$. This proves Eqs. (24) and (25). Equation (26) is then obtained by integrating both sides of Eq. (25) on the interval $[0,x]$.

From Eq. (24) we immediately derive the following algorithm.

Algorithm 6. The following algorithm inputs $n$ and $h(x)$ and outputs $s$.

Step 1. For $k=1,\dots,n$, let $a_{k-1}$ be the coefficient of $x^{k-1}$ in the $(n-1)$-degree polynomial $(R_{n-1}h')(x+1)=(x+1)^{n-1}\,h'\big(\frac{1}{x+1}\big)$.
Step 2. We have $s_k=a_{k-1}/\big(k\binom{n}{k}\big)$ for $k=1,\dots,n$.

Even though such an algorithm can be easily executed by hand for small $n$, a computer algebra system can be of great assistance for large $n$.
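For small instances the following Python sketch of Algorithm 6 (illustrative, not from the paper; coefficient lists are in increasing degree) is already sufficient:

```python
from fractions import Fraction as F
from math import comb

def algorithm_6(h_coeffs):
    """Algorithm 6: signature s = (s_1, ..., s_n) from the coefficients
    (d_0, ..., d_n) of h(x), listed in increasing degree."""
    n = len(h_coeffs) - 1
    dh = [m * h_coeffs[m] for m in range(1, n + 1)]   # coefficients of h'(x)
    b = list(reversed(dh))                            # (R_{n-1} h')(x)
    # Step 1: coefficients a_k of (R_{n-1} h')(x + 1)
    a = [sum(b[m] * comb(m, k) for m in range(k, n)) for k in range(n)]
    # Step 2: s_k = a_{k-1} / (k * C(n, k))
    return [F(a[k - 1], k * comb(n, k)) for k in range(1, n + 1)]

h_bridge = [0, 0, 2, 2, -5, 2]    # h(x) = 2x^2 + 2x^3 - 5x^4 + 2x^5 (Example 1)
print(algorithm_6(h_bridge))      # (0, 1/5, 3/5, 1/5, 0) as Fractions, matching Example 4
```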

Example 4. Consider again the bridge system described in Example 1. We have $h'(x)=4x+6x^2-20x^3+10x^4$ and $(R_4h')(x)=10-20x+6x^2+4x^3$. It follows that $(R_4h')(x+1)=4x+18x^2+4x^3$ and hence $s=(0,\tfrac15,\tfrac35,\tfrac15,0)$ by Algorithm 6. Indeed, we have for instance $s_3=a_2/\big(3\binom{5}{3}\big)=\tfrac35$.

The following proposition, established in [8], provides a necessary and sufficient condition on the system signature for the reliability function to be of full degree (i.e., the corresponding signed domination $d_n$ is nonzero). Here we provide a shorter proof based on Eq. (25).

Proposition 5 ([8]). Let $(C,\phi)$ be an $n$-component semicoherent system with continuous and i.i.d. component lifetimes. Then the reliability function $h(\mathbf{x})$ (or equivalently, its diagonal section $h(x)$) is a polynomial of degree $n$ if and only if

$\sum_{k\ \mathrm{odd}}\binom{n-1}{k-1}\,s_k \;\neq\; \sum_{k\ \mathrm{even}}\binom{n-1}{k-1}\,s_k.$


Proof. The function $h(x)$ is of degree $n$ if and only if $h'(x)$ is of degree $n-1$, and this condition holds if and only if $d_n=\frac{1}{n}(R_{n-1}h')(0)\neq 0$. By Eq. (25) this means that

$\sum_{k=1}^{n}\binom{n}{k}\,k\,s_k\,(-1)^{k-1} \,=\, n\sum_{k=1}^{n}\binom{n-1}{k-1}\,s_k\,(-1)^{k-1}$

is not zero.

The vectors $s$ and $S$ can also be computed via their generating functions. The following proposition yields integral formulas for these functions.

Proposition 6. We have

(27)   $\sum_{k=0}^{n} S_k x^k \,=\, \int_0^1 (n+1)\,R^t_n\big((R_n h)((t-1)x+1)\big)\,dt\,,$

(28)   $\sum_{k=1}^{n} s_k x^k \,=\, \int_0^1 x\,R^t_{n-1}\big((R_{n-1}h')((t-1)x+1)\big)\,dt\,,$

where $R^t_n$ is the $n$-reflection with respect to the variable $t$.

Proof. By Eq. (22), we have

$(R_n h)((t-1)x+1) \,=\, \sum_{k=0}^{n}\binom{n}{k}S_k\,(t-1)^k x^k$

and hence

$R^t_n\big((R_n h)((t-1)x+1)\big) \,=\, \sum_{k=0}^{n}\binom{n}{k}S_k\,t^{n-k}(1-t)^k x^k.$

Integrating this expression from $t=0$ to $t=1$ and using the well-known identity

(29)   $\int_0^1 t^{n-k}(1-t)^k\,dt \,=\, \frac{1}{(n+1)\binom{n}{k}}\,,$

we finally obtain Eq. (27). Formula (28) can be proved similarly by using Eq. (25).

From Eq. (28) we immediately derive the following algorithm for the computation of the generating function of the signature. The algorithm corresponding to Eq. (27) can be derived similarly.

Algorithm 7. The following algorithm inputs $n$ and $h(x)$ and outputs the generating function of the vector $s$.

Step 1. Let $f(t,x)=x\,(R_{n-1}h')((t-1)x+1)$.
Step 2. We have $\sum_{k=1}^{n} s_k x^k = \int_0^1 (R^1_{n-1}f)(t,x)\,dt$, where $R^1_{n-1}$ is the $(n-1)$-reflection with respect to the first argument.
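Since Algorithm 7 manipulates a bivariate polynomial symbolically, a computer algebra system is the natural tool. The sketch below is our own illustration (not the paper's code), using SymPy and computing each reflection as $u^m f(1/u)$; it runs the algorithm on the bridge polynomial of Example 1:

```python
import sympy as sp

x, t, u = sp.symbols('x t u')
n = 5
h = 2*u**2 + 2*u**3 - 5*u**4 + 2*u**5            # bridge reliability polynomial h(u)
dh = sp.diff(h, u)                               # h'(u)
R_dh = sp.expand(u**(n - 1) * dh.subs(u, 1/u))   # (R_{n-1} h')(u) = u^{n-1} h'(1/u)

f = sp.expand(x * R_dh.subs(u, (t - 1)*x + 1))   # Step 1: f(t, x)
Rf = sp.expand(t**(n - 1) * f.subs(t, 1/t))      # (n-1)-reflection with respect to t
gen = sp.integrate(Rf, (t, 0, 1))                # Step 2: generating function of s
print(sp.expand(gen))   # x**2/5 + 3*x**3/5 + x**4/5 (up to term order)
```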

The computation of $h(x)$ from $s$ or $S$ can be useful if we want to compute the system reliability $h(p)$ directly from the signature and the component reliability $p$.

We already know that Eq. (21) gives the polynomial $h(x)$ in terms of the vector $S$. The following proposition yields simple expressions of $h'(x)$ and $h(x)$ in terms of the vector $s$. This result was already presented in [6, Sect. 4] and [8, Rem. 2] in alternative forms.


Proposition 7. We have

(30)   $h'(x) \,=\, \sum_{k=1}^{n} s_k\,k\binom{n}{k}\,x^{n-k}(1-x)^{k-1},$

(31)   $h(x) \,=\, \sum_{k=1}^{n} s_k\,I_x(n-k+1,k) \,=\, \sum_{k=1}^{n} s_k\sum_{i=n-k+1}^{n}\binom{n}{i}\,x^i(1-x)^{n-i},$

where $I_x(a,b)$ is the regularized beta function defined, for any $a,b,x>0$, by

$I_x(a,b) \,=\, \frac{\int_0^x t^{a-1}(1-t)^{b-1}\,dt}{\int_0^1 t^{a-1}(1-t)^{b-1}\,dt}\,.$

Proof. Formula (30) immediately follows from Eq. (25). Then, from Eqs. (29) and (30) we immediately derive the first equality in Eq. (31) since $h(x)=\int_0^x h'(t)\,dt$. The second equality follows from Eqs. (1) and (21).
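Equation (31) is convenient for computing the system reliability $h(p)$ directly from the signature. Below is a small sketch using the second expression in Eq. (31) (pure Python with exact rationals; SciPy's regularized incomplete beta function scipy.special.betainc could equally be used for the first expression). The function name is ours:

```python
from fractions import Fraction as F
from math import comb

def reliability_from_signature(s, p):
    """System reliability h(p) from the signature s = (s_1, ..., s_n) and a common
    component reliability p, via the second expression in Eq. (31)."""
    n = len(s)
    return sum(
        sk * sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n - k + 1, n + 1))
        for k, sk in enumerate(s, start=1)
    )

s_bridge = [F(0), F(1, 5), F(3, 5), F(1, 5), F(0)]
print(reliability_from_signature(s_bridge, F(1, 2)))   # 1/2, since h(1/2) = 1/2 for the bridge
```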

The following proposition provides alternative expressions of $h(x)$ and $h'(x)$ in terms of $S$ and $s$, respectively.

Proposition 8. We have

(32)   $h(x) \,=\, \big((x\Delta_i+I)^n\,S_{n-i}\big)\big|_{i=0},$

(33)   $h'(x) \,=\, n\,\big((x\Delta_i+I)^{n-1}\,s_{n-i}\big)\big|_{i=0},$

where $I$ denotes the identity operator.

Proof. By Eq. (17) we have

$h(x) \,=\, \sum_{k=0}^{n} d_k\,x^k \,=\, \sum_{k=0}^{n}\binom{n}{k}\,x^k\,\big(\Delta^k_i S_{n-i}\big)\big|_{i=0},$

which proves Eq. (32), as we can immediately see by formally expanding the binomial operator expression $(x\Delta_i+I)^n$. Equation (33) then immediately follows from Eq. (32).

Proposition 8 shows that the functions $h(x)$ and $h'(x)$ can be computed from difference tables. Setting

$D_{j,k}(x)=\big((x\Delta_i+I)^k\,S_{n-i}\big)\big|_{i=j} \quad\text{and}\quad d_{j,k}(x)=n\,\big((x\Delta_i+I)^{k-1}\,s_{n-i}\big)\big|_{i=j-1},$

we can derive the following algorithms for the computation of $h(x)$ and $h'(x)$.

Algorithm 8. The following algorithm inputs the vector $S$ and outputs the function $h(x)$. It uses the functions $D_{j,k}(x)$ for $k=0,\dots,n$ and $j=0,\dots,n-k$.

Step 1. For $j=0,\dots,n$, set $D_{j,0}(x):=S_{n-j}$.
Step 2. For $k=1,\dots,n$
        For $j=0,\dots,n-k$
            $D_{j,k}(x) := x\,D_{j+1,k-1}(x) + (1-x)\,D_{j,k-1}(x)$
Step 3. $h(x):=D_{0,n}(x)$.

Algorithm 9. The following algorithm inputs the vector $s$ and outputs the function $h'(x)$. It uses the functions $d_{j,k}(x)$ for $k=1,\dots,n$ and $j=1,\dots,n-k+1$.

Step 1. For $j=1,\dots,n$, set $d_{j,1}(x):=n\,s_{n-j+1}$.
Step 2. For $k=2,\dots,n$
        For $j=1,\dots,n-k+1$
            $d_{j,k}(x) := x\,d_{j+1,k-1}(x) + (1-x)\,d_{j,k-1}(x)$
Step 3. $h'(x):=d_{1,n}(x)$.
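Algorithms 8 and 9 operate on polynomials in $x$; representing each $D_{j,k}(x)$ or $d_{j,k}(x)$ by its coefficient list gives the following Python sketch (illustrative, our own naming), checked on the bridge data:

```python
from fractions import Fraction as F

def poly_add(p, q):
    """Add two polynomials given as coefficient lists (low degree first)."""
    m = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0) for i in range(m)]

def x_times(p):
    """Multiply a coefficient list by x."""
    return [0] + p

def mix(left, right):
    """Return x*left(x) + (1 - x)*right(x) as a coefficient list."""
    return poly_add(x_times(left), poly_add(right, [-c for c in x_times(right)]))

def algorithm_8(S):
    """Algorithm 8: coefficients of h(x) from the tail signature S = (S_0, ..., S_n)."""
    n = len(S) - 1
    D = [[F(S[n - j])] for j in range(n + 1)]          # Step 1: D_{j,0}(x) = S_{n-j}
    for k in range(1, n + 1):                          # Step 2
        D = [mix(D[j + 1], D[j]) for j in range(n - k + 1)]
    return D[0]                                        # Step 3: h(x) = D_{0,n}(x)

def algorithm_9(s):
    """Algorithm 9: coefficients of h'(x) from the signature s = (s_1, ..., s_n)."""
    n = len(s)
    d = [[n * F(s[n - j])] for j in range(1, n + 1)]   # Step 1: d_{j,1}(x) = n*s_{n-j+1}
    for k in range(2, n + 1):                          # Step 2
        d = [mix(d[j + 1], d[j]) for j in range(n - k + 1)]
    return d[0]                                        # Step 3: h'(x) = d_{1,n}(x)

S_bridge = [F(1), F(1), F(4, 5), F(1, 5), F(0), F(0)]
s_bridge = [F(0), F(1, 5), F(3, 5), F(1, 5), F(0)]
print(algorithm_8(S_bridge))   # (0, 0, 2, 2, -5, 2): h(x) = 2x^2 + 2x^3 - 5x^4 + 2x^5
print(algorithm_9(s_bridge))   # (0, 4, 6, -20, 10): h'(x) = 4x + 6x^2 - 20x^3 + 10x^4
```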

Table 5 summarizes the main conversion formulas obtained thus far. They are given by the corresponding equation numbers. For instance, formulas to compute $s$ from $d$ or $h(x)$ are given in Eqs. (18), (24), (26), and (28).

Table 5. Conversion formulas (row: vector to be computed; column: available data).

               from d or h(x)         from s            from S
d or h(x)      --                     (19) (20) (31)    (15) (17) (21) (32)
s              (18) (24) (26) (28)    --                (2) (12)
S              (14) (23) (27)         (1) (12)          --

3.5. Conversions based on the dual structure. We end this section by giving conversion formulas involving the dual structure of the system. Let $\phi^D\colon\{0,1\}^n\to\{0,1\}$ be the dual structure function defined by $\phi^D(\mathbf{x})=1-\phi(\mathbf{1}-\mathbf{x})$, where $\mathbf{1}-\mathbf{x}=(1-x_1,\dots,1-x_n)$, and let $h^D\colon[0,1]^n\to\mathbb{R}$ be its corresponding reliability function, that is, $h^D(\mathbf{x})=1-h(\mathbf{1}-\mathbf{x})$.

Straightforward computations yield the following conversion formulas, where the upper index $D$ always refers to the dual structure and $\delta$ stands for the Kronecker delta:

(34)   $d^D_k \,=\, \delta_{k,0}-(-1)^k\sum_{j=k}^{n}\binom{j}{k}\,d_j, \qquad k=0,\dots,n,$

(35)   $d_k \,=\, \delta_{k,0}-(-1)^k\sum_{j=k}^{n}\binom{j}{k}\,d^D_j, \qquad k=0,\dots,n,$

(36)   $S_k \,=\, 1-S^D_{n-k} \,=\, 1-\sum_{j=0}^{k}\frac{\binom{k}{j}}{\binom{n}{j}}\,d^D_j, \qquad k=0,\dots,n,$

(37)   $s_k \,=\, s^D_{n-k+1} \,=\, \sum_{j=1}^{k}\frac{\binom{k-1}{j-1}}{\binom{n}{j}}\,d^D_j, \qquad k=1,\dots,n,$

(38)   $d^D_k \,=\, \delta_{k,0}-\binom{n}{k}\,\big(\Delta^k_i S_i\big)\big|_{i=0}, \qquad k=0,\dots,n,$

(39)   $d^D_k \,=\, \binom{n}{k}\,\big(\Delta^{k-1}_i s_i\big)\big|_{i=1}, \qquad k=1,\dots,n.$

Recall that $\phi_k$ gives the number of path sets of size $k$. Combining (13) with (22), we obtain the identity $\sum_{k=0}^{n}\phi_{n-k}\,x^k=(R_n h)(x+1)$, from which we immediately derive the following generating function:

$\sum_{k=0}^{n}\phi_k\,x^k \,=\, R_n\big((R_n h)(x+1)\big) \,=\, (x+1)^n\,h\Big(\frac{x}{x+1}\Big).$
