
We now have to consider the mean dispersion during the time (0, T), defined by

$$I^2 = \frac{1}{T}\int_0^T D^2 y(t)\, dt,$$

but since the calculations are trivial and contain nothing of interest, I exclude them here. I only give some notes concerning the limit case L = ∞.

The only critical points of I² are p = q. In this case we obtain for L = ∞

$$D^2 y(t) = \frac{1}{p^4}\left[1 + (pt-1)e^{pt}\right]^2 \sum_{\nu=1}^{\infty}\left(a_\nu^2 + m_\nu^2\right)$$

and

$$I^2 = \frac{1}{2p^4}\left[1 + 2e^{pT} + \frac{4}{pT}\left(1 - e^{pT}\right) + \frac{1}{2}e^{2pT} - \frac{1}{4p}\left(1 - e^{2pT}\right)\right]\sum_{\nu=1}^{\infty}\left(a_\nu^2 + m_\nu^2\right).$$

This expression obviously has no minimum, for I² → 0 for increasing |p|. It is easily seen that this statement is true also for p ≠ q. The more remote the roots are situated from the origin and from each other, the less D²y(t) and, since D²y(t) > 0 for all t, also the rms error I becomes.
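The qualitative statement above can be checked numerically. The sketch below is not from the paper; the white-noise normalization and the chosen values of p are assumptions. It uses only the elementary fact that for a double characteristic root p < 0 the impulse response of the system is k(t) = t e^{pt}, so for white-noise input the mean-square output is proportional to ∫₀^∞ k(τ)² dτ = 1/(4|p|³), which shrinks as the root recedes from the origin.

```python
import math

def ms_gain(p, T=60.0, n=200000):
    """Trapezoidal approximation of the integral of k(t)^2 = (t*e^{p*t})^2
    over (0, T); for p < 0 and T large this is close to 1/(4|p|^3)."""
    h = T / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        v = (t * math.exp(p * t)) ** 2
        total += v / 2 if i in (0, n) else v
    return total * h

for p in (-1.0, -2.0, -4.0):
    print(p, ms_gain(p), 1.0 / (4.0 * abs(p) ** 3))
```

The printed pairs agree, and the gain decreases monotonically with |p|, in line with the statement about the rms error I.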

Example 2. In many noise problems the derivatives of a random function appear at the same time as the random function itself. Let us for instance take the equation

$$\ddot{y} + a\dot{y} + by = k_1\dot{z}(t) + k_2 z(t) \tag{132}$$

where the left member is the same as in example 1 but where the right member is a linear function of the disturbing function and its first derivative instead of the disturbing function alone.

We are going to prove that the probability distribution of

$$x(t) = k_1\dot{z}(t) + k_2 z(t)$$

is of a form very similar to that of z(t), when z(t) is distributed as x(t) in example 1. Assuming that

$$z(t) = \tfrac{1}{2}B_0 + \sum_{\nu=1}^{\infty} B_\nu \cos\left(\frac{2\pi\nu t}{L} + \psi_\nu\right), \tag{133}$$

where the amplitudes B_ν are finite and approximately normally distributed and the phase angles ψ_ν uniformly distributed, and that

$$x(t) = \tfrac{1}{2}A_0 + \sum_{\nu=1}^{\infty} A_\nu \cos\left(\frac{2\pi\nu t}{L} + \Phi_\nu\right), \tag{134}$$

its distribution being unknown, we must have the identity


ARKIV FÖR MATEMATIK. Bd 2 nr 8

$$\tfrac{1}{2}A_0 + \sum_{\nu=1}^{\infty} A_\nu \cos\left(\frac{2\pi\nu t}{L} + \Phi_\nu\right) = \tfrac{1}{2}k_2 B_0 + \sum_{\nu=1}^{\infty} k_2 B_\nu \cos\left(\frac{2\pi\nu t}{L} + \psi_\nu\right) - \sum_{\nu=1}^{\infty} \frac{2\pi\nu k_1}{L} B_\nu \sin\left(\frac{2\pi\nu t}{L} + \psi_\nu\right).$$

This identity can also be written

$$\tfrac{1}{2}A_0 + \sum_{\nu=1}^{\infty} A_\nu \cos\Phi_\nu \cos\frac{2\pi\nu t}{L} - \sum_{\nu=1}^{\infty} A_\nu \sin\Phi_\nu \sin\frac{2\pi\nu t}{L} = \tfrac{1}{2}k_2 B_0 + \sum_{\nu=1}^{\infty} B_\nu\left(k_2\cos\psi_\nu - \frac{2\pi\nu k_1}{L}\sin\psi_\nu\right)\cos\frac{2\pi\nu t}{L} - \sum_{\nu=1}^{\infty} B_\nu\left(k_2\sin\psi_\nu + \frac{2\pi\nu k_1}{L}\cos\psi_\nu\right)\sin\frac{2\pi\nu t}{L}$$

and can be satisfied only if

$$A_0 = k_2 B_0, \qquad
A_\nu \cos\Phi_\nu = \left(k_2\cos\psi_\nu - \frac{2\pi\nu k_1}{L}\sin\psi_\nu\right) B_\nu, \qquad
A_\nu \sin\Phi_\nu = \left(k_2\sin\psi_\nu + \frac{2\pi\nu k_1}{L}\cos\psi_\nu\right) B_\nu \qquad (\nu \geq 1).$$

From this follows

$$A_\nu^2 = B_\nu^2\left[k_2^2 + \left(\frac{2\pi\nu k_1}{L}\right)^2\right], \tag{135}$$

which proves that A_ν and B_ν obey the same distribution law, i.e. they are both approximately normal. If the dispersion of B_ν is denoted by s_ν, we have

$$\sigma_\nu^2 = s_\nu^2\left[k_2^2 + \left(\frac{2\pi\nu k_1}{L}\right)^2\right]. \tag{136}$$

If we introduce the auxiliary quantities α_ν and Q_ν, defined by

$$k_2 = Q_\nu\cos\alpha_\nu, \qquad \frac{2\pi\nu k_1}{L} = Q_\nu\sin\alpha_\nu,$$

we obtain


M. SUNDSTRÖM, Some statistical problems in the theory of servomechanisms

$$Q_\nu^2 = k_2^2 + \left(\frac{2\pi\nu k_1}{L}\right)^2, \qquad \operatorname{tg}\alpha_\nu = \frac{2\pi\nu k_1}{L k_2},$$

$$\cos\Phi_\nu = \cos(\psi_\nu + \alpha_\nu), \qquad \sin\Phi_\nu = \sin(\psi_\nu + \alpha_\nu).$$

It follows that if ψ_ν is uniformly distributed, so is Φ_ν, and vice versa.

For the rest the reasoning is the same as in example 1.
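The relations A_ν = Q_ν B_ν and Φ_ν = ψ_ν + α_ν amount to the elementary identity k₂ cos x − w sin x = Q cos(x + α) with Q cos α = k₂ and Q sin α = w, where w stands for 2πνk₁/L. A small numeric check (all values below are arbitrary illustrative assumptions):

```python
import math

# Arbitrary illustrative values.
k1, k2, L, nu = 0.7, 1.3, 10.0, 3
B, psi = 2.0, 0.9

w = 2 * math.pi * nu * k1 / L      # the factor 2*pi*nu*k1/L from the text
Q = math.hypot(k2, w)              # Q_nu:  Q_nu^2 = k2^2 + (2*pi*nu*k1/L)^2
alpha = math.atan2(w, k2)          # alpha_nu:  tg(alpha_nu) = 2*pi*nu*k1/(L*k2)

for theta in (0.0, 0.5, 1.7, 3.1):  # theta plays the role of 2*pi*nu*t/L
    lhs = k2 * B * math.cos(theta + psi) - w * B * math.sin(theta + psi)
    rhs = Q * B * math.cos(theta + psi + alpha)
    assert abs(lhs - rhs) < 1e-12
print("amplitude Q_nu * B_nu and phase psi_nu + alpha_nu confirmed")
```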

Example 3. As a third example we take the differential equation

$$\ddot{y} + a\dot{y} + by = x(t) + v(t) \tag{137}$$

where x(t) means the time function of a "message" and v(t) the time function of the noise; a and b are two constants which are to be determined in such a way that the effect of the noise on the message will be as small as possible.

We put x(0) = 0, v(0) = 0. Both the message and the noise may be given in the form of uniformly convergent Fourier series

$$x(t) = \tfrac{1}{2}A_0 + \sum_{\nu=1}^{\infty} A_\nu \cos\left(\frac{2\pi\nu t}{L} + \Phi_\nu\right), \tag{138:a}$$

$$v(t) = \tfrac{1}{2}a_0 + \sum_{\nu=1}^{\infty} a_\nu \cos\left(\frac{2\pi\nu t}{L} + \varphi_\nu\right), \tag{138:b}$$

where L can be chosen arbitrarily (> 0). We assume that the amplitudes are approximately normally distributed, while the distributions of the phases may be uniform. Further we presume that there is no correlation between message and noise, between amplitude and phase for each one of the two signals, between different amplitudes, or between different phases — in other words, that there is no correlation at all. Finally we put

$$MA_0 = Ma_0 = 0, \qquad MA_0^2 = Ma_0^2 = 0.$$

From the above assumptions follows

$$Mx(t) = Mv(t) = 0.$$

As in the first example we do not use the derivatives of the signals. In the formulas of VI: c (A = 1) we therefore replace $\frac{dx(t-\sigma)}{d\sigma}$ by $x(t-\sigma)$ and $\frac{dv(t-\tau)}{d\tau}$ by $v(t-\tau)$. Instead of that we have to take the derivative of k(σ). Apart from these modifications we use the same notations as in VI: c. Thus


$$m_{00}(\sigma, \tau, t) = M\left[x(t-\sigma)\,x(t-\tau)\right] = \tfrac{1}{2}\sum_{\nu=1}^{\infty} MA_\nu^2 \cos\frac{2\pi\nu(\tau-\sigma)}{L},$$

$$m_{01}(\sigma, \tau, t) = m_{10}(\sigma, \tau, t) = 0,$$

$$m_{11}(\sigma, \tau, t) = M\left[v(t-\sigma)\,v(t-\tau)\right] = \tfrac{1}{2}\sum_{\nu=1}^{\infty} Ma_\nu^2 \cos\frac{2\pi\nu(\tau-\sigma)}{L}.$$

From this we see that m₀₀(σ, τ, t) and m₁₁(σ, τ, t) only depend on τ − σ. The series are uniformly convergent.

Since the correlation functions m(σ, τ, t) are independent of t, it is easy to determine the functions $\mathcal{M}(\sigma, \tau)$. We find

$$\mathcal{M}_{00}(\sigma, \tau) = \tfrac{1}{2}\left(T - [\sigma, \tau]\right) \sum_{\nu=1}^{\infty} MA_\nu^2 \cos\frac{2\pi\nu(\tau-\sigma)}{L},$$

$$\mathcal{M}_{01}(\sigma, \tau) = \mathcal{M}_{10}(\sigma, \tau) = 0,$$

$$\mathcal{M}_{11}(\sigma, \tau) = \tfrac{1}{2}\left(T - [\sigma, \tau]\right) \sum_{\nu=1}^{\infty} Ma_\nu^2 \cos\frac{2\pi\nu(\tau-\sigma)}{L}.$$
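The underlying identity M[x(t−σ)x(t−τ)] = ½ Σ MA_ν² cos(2πν(τ−σ)/L) can be verified by simulation. The sketch below is illustrative only: the period L, the dispersions s_ν, the truncation to four terms, and the sample points are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
L, nreal = 8.0, 200000                    # assumed period and sample size
s = np.array([1.0, 0.6, 0.3, 0.1])        # assumed dispersions of A_1..A_4
nu = np.arange(1, len(s) + 1)

A = rng.normal(0.0, s, size=(nreal, len(s)))             # normal amplitudes
phi = rng.uniform(0.0, 2 * np.pi, size=(nreal, len(s)))  # uniform phases

def x(t):
    """One value of the truncated Fourier series for every realization."""
    return (A * np.cos(2 * np.pi * nu * t / L + phi)).sum(axis=1)

t, sigma, tau = 5.0, 1.2, 0.4
emp = np.mean(x(t - sigma) * x(t - tau))                 # empirical m_00
theo = 0.5 * np.sum(s**2 * np.cos(2 * np.pi * nu * (tau - sigma) / L))
print(emp, theo)
```

The two printed values agree to Monte-Carlo accuracy, and the result depends on σ and τ only through τ − σ, as the text states.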

In this case (A = 1) the system of integral equations (114) is reduced to the one single equation

$$\int_0^T K(\sigma, \tau)\, k'(\tau)\, d\tau = f(\sigma)$$

of the type studied in section e), with

$$K(\sigma, \tau) = \tfrac{1}{2}\left(T - [\sigma, \tau]\right) \sum_{\nu=1}^{\infty} \left(MA_\nu^2 + Ma_\nu^2\right) \cos\frac{2\pi\nu(\tau-\sigma)}{L}$$

and

$$f(\sigma) = \tfrac{1}{2}(T - \sigma) \sum_{\nu=1}^{\infty} MA_\nu^2 \cos\frac{2\pi\nu\sigma}{L}.$$

The kernel K(σ, τ) is apparently symmetric and has all its singularities on the line σ = τ. It is convenient to write it in the form

$$K(\sigma, \tau) = \tfrac{1}{2}\sum_{\nu=1}^{\infty} \varkappa_\nu \left[\alpha_{\nu 1}(\sigma)\,\beta_{\nu 1}(\tau) + \alpha_{\nu 2}(\sigma)\,\beta_{\nu 2}(\tau)\right],$$

where

$$\varkappa_\nu = MA_\nu^2 + Ma_\nu^2$$

and

$$\left.\begin{aligned} \alpha_{\nu 1}(\sigma) &= (T - \sigma)\cos\frac{2\pi\nu\sigma}{L}, & \alpha_{\nu 2}(\sigma) &= (T - \sigma)\sin\frac{2\pi\nu\sigma}{L}, \\ \beta_{\nu 1}(\tau) &= \cos\frac{2\pi\nu\tau}{L}, & \beta_{\nu 2}(\tau) &= \sin\frac{2\pi\nu\tau}{L} \end{aligned}\right\}\quad \text{for } \sigma \geq \tau,$$

$$\left.\begin{aligned} \alpha_{\nu 1}(\sigma) &= \cos\frac{2\pi\nu\sigma}{L}, & \alpha_{\nu 2}(\sigma) &= \sin\frac{2\pi\nu\sigma}{L}, \\ \beta_{\nu 1}(\tau) &= (T - \tau)\cos\frac{2\pi\nu\tau}{L}, & \beta_{\nu 2}(\tau) &= (T - \tau)\sin\frac{2\pi\nu\tau}{L} \end{aligned}\right\}\quad \text{for } \tau \geq \sigma.$$

If we further put

$$\gamma_\nu(\sigma) = (T - \sigma)\, MA_\nu^2 \cos\frac{2\pi\nu\sigma}{L}$$

and remember that the necessary conditions for integration term by term are fulfilled, the integral equation becomes

$$\int_0^T \sum_{\nu=1}^{\infty} \varkappa_\nu \left[\alpha_{\nu 1}(\sigma)\,\beta_{\nu 1}(\tau) + \alpha_{\nu 2}(\sigma)\,\beta_{\nu 2}(\tau)\right] k'(\tau)\, d\tau = \sum_{\nu=1}^{\infty} \gamma_\nu(\sigma). \tag{139}$$

This equation can be solved by the method of e). Thus we put

$$c_{\mu\nu i} = \int_{\tau_\mu}^{\tau_{\mu+1}} \frac{\tau_{\mu+1} - \tau}{\tau_{\mu+1} - \tau_\mu}\, \beta_{\nu i}(\tau)\, d\tau, \qquad d_{\mu\nu i} = \int_{\tau_\mu}^{\tau_{\mu+1}} \frac{\tau - \tau_\mu}{\tau_{\mu+1} - \tau_\mu}\, \beta_{\nu i}(\tau)\, d\tau \qquad (i = 1, 2).$$

For facilitating the solution we further introduce the auxiliary quantities

$$C_{\mu\nu}(\sigma) = \varkappa_\nu \left[c_{\mu\nu 1}\,\alpha_{\nu 1}(\sigma) + c_{\mu\nu 2}\,\alpha_{\nu 2}(\sigma)\right], \qquad D_{\mu\nu}(\sigma) = \varkappa_\nu \left[d_{\mu\nu 1}\,\alpha_{\nu 1}(\sigma) + d_{\mu\nu 2}\,\alpha_{\nu 2}(\sigma)\right].$$

The equation (139) becomes

$$\sum_{\mu=0}^{N-1}\left[k'_\mu \sum_{\nu=1}^{\infty} C_{\mu\nu}(\sigma) + k'_{\mu+1} \sum_{\nu=1}^{\infty} D_{\mu\nu}(\sigma)\right] = \sum_{\nu=1}^{\infty}\gamma_\nu(\sigma). \tag{140}$$

If in this equation we put σ = τ₀, τ₁, …, τ_N, we have a linear system with N + 1 equations and N + 1 unknown quantities k'₀, k'₁, …, k'_N.
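The reduction of a first-kind integral equation to a linear system by collocation can be illustrated in miniature. The sketch below is not the paper's scheme: it uses plain trapezoidal weights instead of the coefficients c_{μνi}, d_{μνi}, an assumed kernel e^{−|σ−τ|}, and a manufactured right-hand side, so that the recovered nodal values can be compared with a known k′.

```python
import numpy as np

T, N = 1.0, 40
tau = np.linspace(0.0, T, N + 1)                  # nodes tau_0, ..., tau_N
w = np.full(N + 1, T / N)                         # trapezoidal weights
w[0] = w[-1] = T / (2 * N)

K = np.exp(-np.abs(tau[:, None] - tau[None, :]))  # assumed kernel K(sigma, tau)
kprime_true = np.cos(3.0 * tau)                   # chosen "unknown" k'
A = K * w                                         # A[i, j] = K(tau_i, tau_j) * w_j
f = A @ kprime_true                               # manufactured right-hand side

kprime = np.linalg.solve(A, f)                    # collocation at sigma = tau_i
print(np.max(np.abs(kprime - kprime_true)))       # small residual error
```

In practice first-kind equations of this type are ill-conditioned, so N cannot be made large without some regularization; the sketch only demonstrates the structure of the linear system (140).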


VII. Some problems in the theory of autocorrelation functions and spectral densities

a) Random errors in autocorrelation functions and spectral densities calculated from an empirical material

In the foregoing sections we have been concerned only incidentally with autocorrelation functions (this function is defined in a remark in section V: b), whereas spectral densities have not yet been considered in this treatise. The theory of these concepts is treated very carefully in the servotechnic literature¹ and will not be dealt with here. However, the errors committed by using empirical material in the computation of autocorrelation functions and spectral densities seem not to have been studied so much.

Autocorrelation functions

Suppose that the time function y(t) has been observed for 0 ≤ t ≤ T and that the result is given in the form of an oscillogram. Further we presume that the process can be considered as stationary. Then a great deal of information can be obtained from the autocorrelation function

$$R(\tau) = \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} y(t)\, y(t+\tau)\, dt. \tag{141}$$

If T is not too small and τ not too large, we can use the approximation formula

$$R(\tau) \approx R_T(\tau) = \frac{1}{T-\tau} \int_0^{T-\tau} y(t)\, y(t+\tau)\, dt \tag{142}$$

or

$$R(\tau) \approx R_N(m) = \frac{1}{N-m+1} \sum_{n=0}^{N-m} y_n\, y_{n+m}, \tag{143}$$

where y_n = y(nΔt) and τ = mΔt. We are going to estimate the error committed by the use of these approximation formulas. Thereby it is always assumed that the mean value of y(t) is zero. Of course, this does not mean any loss of generality.
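Formula (143) is straightforward to implement. In the sketch below the test signal is an assumption: an AR(1)-type sequence y_n = r y_{n−1} + e_n with unit-variance normal increments, for which the true autocorrelation R(m) = r^m/(1 − r²) is known, so the estimate can be compared with an exact answer.

```python
import numpy as np

def autocorr_est(y, m):
    """R_N(m) = (1/(N-m+1)) * sum_{n=0}^{N-m} y_n * y_{n+m}, with y_0, ..., y_N."""
    N = len(y) - 1
    return float(np.dot(y[:N - m + 1], y[m:])) / (N - m + 1)

# Assumed test signal with known autocorrelation.
rng = np.random.default_rng(1)
r, n = 0.8, 200000
e = rng.normal(size=n)
y = np.empty(n)
y[0] = e[0]
for i in range(1, n):
    y[i] = r * y[i - 1] + e[i]

for m in (0, 1, 5):
    print(m, autocorr_est(y, m), r**m / (1 - r * r))   # estimate vs. true R(m)
```

The scatter of the estimate around the true value is exactly the random error whose standard deviation is studied below.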

By the calculation of R_N(m) instead of R_T(τ) we commit a computational error ER_N(m), depending on the fact that the interval Δt has a finite length. This error can easily be estimated and will not be considered here. It is more difficult to master the random error represented by the standard deviation DR_N(m):

$$D^2 R_N(m) = \frac{1}{(N-m+1)^2}\, M\left[\left(\sum_{n=0}^{N-m} y_n\, y_{n+m}\right)^2\right] - [R(m)]^2.$$

¹ See for instance the treatise by R. S. Phillips in "Radiation Laboratory Series 25". Many of Phillips' notations are used in this chapter.


Since the process is assumed to be stationary, M(y_n y_{n+m}) is independent of n; thus MR_N(m) = R(m).

When computing $M\left[\left(\sum y_n y_{n+m}\right)^2\right]$ we have to consider the correlation not only between two quantities y_n and y_{n+m} but also between the products y_n y_{n+m} and y_{n+p} y_{n+p+m}. Let the last correlation function be denoted by R(m, p), i.e.

$$R(m, p) = M\left(y_n\, y_{n+m} \cdot y_{n+p}\, y_{n+p+m}\right) = R(p, m).$$

Then

$$D^2 R_N(m) = \frac{1}{N-m+1}\left\{R(m, 0) + 2\sum_{p=1}^{N-m}\left(1 - \frac{p}{N-m+1}\right) R(m, p) - (N-m+1)[R(m)]^2\right\}. \tag{144}$$

As a limit for Δt = 0 we obtain

$$D^2 R_T(\tau) = \frac{2}{T-\tau} \int_0^{T-\tau} \left(1 - \frac{t}{T-\tau}\right)\left[R(\tau, t) - R(\tau)^2\right] dt. \tag{145}$$

The formulas (144) and (145) will now be applied to a couple of hypothetical distributions.

y(t) normal (0, σ). In this case R(m, p) can be expressed as a function of σ, R(m), R(p), R(m−p) and R(m+p). This follows from the form of the frequency function of the combined variable {y_n, y_{n+m}, y_{n+p}, y_{n+p+m}}:

$$\varphi(\xi_1, \xi_2, \xi_3, \xi_4) = \frac{1}{(2\pi)^2 \sqrt{D}}\; e^{-\frac{1}{2D}\left[D_{11}\xi_1^2 + D_{22}\xi_2^2 + \cdots + 2D_{12}\xi_1\xi_2 + \cdots\right]}$$

where

$$D = \begin{vmatrix} \sigma^2 & R(m) & R(p) & R(m+p) \\ R(m) & \sigma^2 & R(m-p) & R(p) \\ R(p) & R(m-p) & \sigma^2 & R(m) \\ R(m+p) & R(p) & R(m) & \sigma^2 \end{vmatrix}$$

and D_{μν} means the minor of the μ:th row and the ν:th column of D.

In order to determine R(m, p) we introduce the characteristic function

$$g(u_1, u_2, u_3, u_4) = M\left(e^{i(u_1\xi_1 + \cdots + u_4\xi_4)}\right) = e^{-\frac{1}{2}\left[\sigma^2(u_1^2 + \cdots + u_4^2) + 2R(m)u_1u_2 + \cdots\right]}.$$

One finds


$$R(m, p) = \left(\frac{\partial^4 g}{\partial u_1\,\partial u_2\,\partial u_3\,\partial u_4}\right)_{u_\nu = 0,\ \nu = 1, \ldots, 4} = R(m)^2 + R(m+p)\,R(m-p) + R(p)^2,$$

and specially

$$R(m, 0) = 2R(m)^2 + \sigma^4.$$

With these expressions we have

$$D^2 R_N(m) = \frac{1}{N-m+1}\left\{R(m)^2 + \sigma^4 + 2\sum_{p=1}^{N-m}\left(1 - \frac{p}{N-m+1}\right)\left[R(m+p)\,R(m-p) + R(p)^2\right]\right\} \tag{146}$$

and

$$D^2 R_T(\tau) = \frac{2}{T-\tau} \int_0^{T-\tau} \left(1 - \frac{t}{T-\tau}\right)\left[R(\tau+t)\,R(\tau-t) + R(t)^2\right] dt. \tag{147}$$
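The expression for R(m, p) in the normal case is an instance of the product-moment rule for four jointly normal variables, M(ξ₁ξ₂ξ₃ξ₄) = C₁₂C₃₄ + C₁₃C₂₄ + C₁₄C₂₃. A quick Monte-Carlo check, with an assumed autocorrelation R(k) = 0.7^{|k|} (σ = 1) and assumed lags m = 1, p = 2:

```python
import numpy as np

m, p = 1, 2
t = np.array([0, m, p, p + m])           # times n, n+m, n+p, n+p+m
R = lambda k: 0.7 ** np.abs(k)           # assumed autocorrelation, sigma^2 = 1
C = R(t[:, None] - t[None, :])           # covariance of {y_n, y_{n+m}, y_{n+p}, y_{n+p+m}}

rng = np.random.default_rng(2)
xi = rng.multivariate_normal(np.zeros(4), C, size=400000)
emp = np.mean(xi.prod(axis=1))                    # Monte-Carlo R(m, p)
theo = R(m)**2 + R(m + p) * R(m - p) + R(p)**2    # the text's expression
print(emp, float(theo))
```

The covariance matrix is exactly the determinant array D above, here filled with the values of a stationary process so that it is automatically positive definite.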

Modified normal distribution. Many times the distribution of y(t) is not exactly normal but can be represented by a frequency function of the form

$$f(\xi) = a\, e^{P(\xi)}$$

where P(ξ) is a polynomial in ξ. D²R_N(m) can also then be given in terms of the autocorrelation function. Firstly, the frequency function of the combined variable can be written

$$f(\xi_1, \xi_2, \xi_3, \xi_4) = \varphi(\xi_1, \xi_2, \xi_3, \xi_4)\, Q(\xi_1, \xi_2, \xi_3, \xi_4)$$

where Q(ξ₁, ξ₂, ξ₃, ξ₄) means a polynomial in ξ₁, ξ₂, ξ₃, ξ₄. The characteristic function of {ξ₁, ξ₂, ξ₃, ξ₄} is derived from g(u₁, u₂, u₃, u₄) by operations of differentiation, multiplication by constants and addition.

Making further the operation $\frac{\partial^4}{\partial u_1\,\partial u_2\,\partial u_3\,\partial u_4}$ we obtain R(m, p). If for instance

$$Q = a_1\xi_1^3 + a_2\xi_2^3 + a_3\xi_3^3 + a_4\xi_4^3 + b\,\xi_1^3\xi_2^3\xi_3^3\xi_4^3,$$

we have

$$R(m, p) = \frac{\partial^4}{\partial u_1\,\partial u_2\,\partial u_3\,\partial u_4}\left(\sum_{\nu=1}^{4} \frac{a_\nu}{i^3}\,\frac{\partial^3}{\partial u_\nu^3} + \frac{b}{i^{12}}\,\frac{\partial^{12}}{\partial u_1^3\,\partial u_2^3\,\partial u_3^3\,\partial u_4^3}\right) g(u_1, u_2, u_3, u_4) = b\,\frac{\partial^{16} g}{\partial u_1^4\,\partial u_2^4\,\partial u_3^4\,\partial u_4^4} \qquad (\text{for } u_1 = u_2 = u_3 = u_4 = 0).$$

There is no point, in this case, in giving a compact expression for D²R_N(m).
