
Chapter 1 - Introduction

4 Brain Connectivity

4.2 Nonlinear measures of functional connectivity

There are several nonlinear measures to investigate functional connectivity. Here, we introduce some phase-synchronisation and information-based measures as well as the non-linear correlation coefficient. For a comprehensive review on nonlinear estimates of functional connectivity, please see [130].

4.2.1 Phase-synchronisation measures: Phase-locking value and Phase-lag Index

Nonlinear measures can also be used to investigate the phase synchronisation between signals. Instead of examining the relationship between the amplitudes of the signals, these so-called phase-synchronisation measures quantify how the phases of the signals are coupled. They rest on the assumption that two dynamical systems may have their phases synchronised even if their amplitudes are not correlated. Phase synchronisation is defined as the locking of the phases associated with each signal, such that for two signals x(t) and y(t) of equal time length and with instantaneous phases $\phi_x(t)$ and $\phi_y(t)$, the following condition holds for any time t:

$$|\phi_x(t) - \phi_y(t)| \leq \text{const}$$

In order to compute the phase synchronisation, it is necessary to know the instantaneous phase of the two signals. This can be done using the analytical signal computed based on the Hilbert transform [131].
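
As an illustration, a minimal NumPy/SciPy sketch of this step, assuming the signal has already been band-pass filtered to the frequency band of interest (the function name is ours):

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_phase(signal):
    """Instantaneous phase of a (band-limited) real signal.

    The analytic signal z(t) = x(t) + i*H[x(t)] is obtained from the Hilbert
    transform; its angle is the instantaneous phase phi_x(t).
    """
    analytic = hilbert(signal)    # complex analytic signal
    return np.angle(analytic)     # phase in radians, wrapped to (-pi, pi]
```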

Two commonly used measures of phase synchronisation are the phase locking value (PLV) (also known as mean phase coherence) [131, 132] and the phase-lag index (PLI) [133].

The PLV is defined as:

$$\mathrm{PLV} = \left| \frac{1}{K} \sum_{k=1}^{K} e^{\,i\,\Delta\phi(k\Delta t)} \right|$$

where $\Delta\phi(k\Delta t) = \phi_x(k\Delta t) - \phi_y(k\Delta t)$ is the instantaneous phase difference, $\Delta t$ is the sampling period and K is the number of time points. PLV ranges between 0 and 1, where 1 indicates perfect phase synchronisation and 0 indicates a lack of synchronisation.

Since PLV reflects both zero and nonzero phase-lag coupling between two signals, it is sensitive to volume conduction. The PLI was developed to account for this problem [133]: nonzero phase lags are determined from the asymmetry of the distribution of the instantaneous phase differences between the two signals. It is defined as:

$$\mathrm{PLI} = \left| \frac{1}{K} \sum_{k=1}^{K} \mathrm{sign}\big(\sin(\Delta\phi(k\Delta t))\big) \right|$$

which also ranges between 0 and 1. For more details on PLI see [133].
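
A minimal sketch of both measures computed from the instantaneous phase difference, assuming NumPy/SciPy, two equally long signals already filtered to a common narrow band, and an illustrative function name:

```python
import numpy as np
from scipy.signal import hilbert

def plv_pli(x, y):
    """Phase-locking value and phase-lag index for two signals of equal length K."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))  # instantaneous phase difference
    plv = np.abs(np.mean(np.exp(1j * dphi)))            # | <exp(i * dphi)> |
    pli = np.abs(np.mean(np.sign(np.sin(dphi))))        # | <sign(sin(dphi))> |
    return plv, pli

# Example: two 10 Hz sinusoids with a constant non-zero phase lag should give
# PLV and PLI values close to 1.
```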

Comprehensive reviews on other phase synchronisation measures can be found in [130, 134].

4.2.2 Information-based Measures

Information-based techniques are sensitive to both linear and nonlinear statistical dependencies between two signals. The most commonly used information-based measures are mutual information and transfer entropy. Both are based on the concept of entropy, which measures the uncertainty of a random variable. Mutual information (MI) assesses undirected dependencies between two signals, while transfer entropy assesses directed dependencies.

The Shannon entropy of a set of probabilities $p_i$, with $i = 1 \ldots K$, of a signal x is defined as:

$$H(x) = -\sum_{i=1}^{K} p_i \log_2 p_i$$

where $p_i$ is the probability that signal x takes the value $x_i$. This entropy is positive and is measured in bits if the base of the logarithm is 2. To obtain the probabilities $p_i$, one computes the histogram of x, with K being the number of histogram bins; $p_i$ is then the ratio between the number of samples in the i-th bin of the histogram of x and the total number of samples [130].
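
A minimal sketch of this histogram-based estimate, assuming NumPy; the number of bins K is a free parameter chosen here for illustration:

```python
import numpy as np

def shannon_entropy(x, bins=32):
    """Shannon entropy (in bits) of signal x estimated from a K-bin histogram."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()     # p_i: fraction of samples falling in bin i
    p = p[p > 0]                  # drop empty bins (0*log 0 is taken as 0)
    return -np.sum(p * np.log2(p))
```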

The concept of Shannon entropy is used to compute MI, which investigates the mutual statistical dependencies between two signals x and y by quantifying the amount of information gained about one signal by knowing the other. It is defined as:

$$\mathrm{MI}_{xy} = \sum_{i=1}^{K}\sum_{j=1}^{K} p(x_i, y_j) \log_2 \frac{p(x_i, y_j)}{p(x_i)\,p(y_j)}$$

where $p(x_i, y_j)$ is the joint probability that x falls in bin i and y falls in bin j. Since MI is a symmetric measure ($\mathrm{MI}_{xy} = \mathrm{MI}_{yx}$), it does not give any information about the directionality of the interaction between the two signals.
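
A corresponding sketch of a histogram-based MI estimate, assuming NumPy (the bin count is again an illustrative choice):

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Mutual information (in bits) between x and y from a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                    # joint probabilities p(x_i, y_j)
    px = pxy.sum(axis=1, keepdims=True)      # marginal p(x_i)
    py = pxy.sum(axis=0, keepdims=True)      # marginal p(y_j)
    nz = pxy > 0                             # avoid log(0) terms
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))
```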

Transfer entropy extends the concept of Shannon entropy by taking into account the transition probability, i.e. the probability $p(x_{n+1} \mid x_n, \ldots, x_{n-k+1})$ of obtaining a given value of x at instant n+1 given that the previous k values of x are known [135]. The entropy rate of a signal is defined as the difference between the Shannon entropies computed from delay vectors of length k+1 and k [136]:

$$h_x = H\big(x_{n+1}, x_n^{(k)}\big) - H\big(x_n^{(k)}\big)$$

where $x_n^{(k)} = (x_n, x_{n-1}, \ldots, x_{n-k+1})$ is the delay vector of length k. The transfer entropy from x to y is found from the entropy rate in a similar way as mutual information is found from the Shannon entropy [135]. It is a measure of how the transition probabilities of x influence those of y [135, 137]. Because transfer entropy incorporates dynamical information, the measure is asymmetric (the degree of dependence of y on x differs from that of x on y) and is thus a directed measure of functional connectivity.
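
For concreteness, a minimal histogram-based sketch of transfer entropy with embedding dimension k = 1, assuming NumPy; it follows the definition TE(x->y) = sum p(y_{n+1}, y_n, x_n) log2[ p(y_{n+1}|y_n, x_n) / p(y_{n+1}|y_n) ] and is meant only to illustrate the construction, not to be an efficient or bias-corrected estimator:

```python
import numpy as np

def transfer_entropy(x, y, bins=8):
    """Transfer entropy TE(x -> y) in bits, with embedding k = 1 and
    probabilities estimated from histograms of the discretised signals."""
    # discretise both signals into `bins` amplitude levels (0 .. bins-1)
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])

    # joint counts over the triples (y_{n+1}, y_n, x_n)
    joint = np.zeros((bins, bins, bins))
    for a, b, c in zip(yd[1:], yd[:-1], xd[:-1]):
        joint[a, b, c] += 1
    p_yn1_y_x = joint / joint.sum()          # p(y_{n+1}, y_n, x_n)

    p_y_x = p_yn1_y_x.sum(axis=0)            # p(y_n, x_n)
    p_yn1_y = p_yn1_y_x.sum(axis=2)          # p(y_{n+1}, y_n)
    p_y = p_yn1_y_x.sum(axis=(0, 2))         # p(y_n)

    te = 0.0
    for a in range(bins):
        for b in range(bins):
            for c in range(bins):
                pabc = p_yn1_y_x[a, b, c]
                if pabc > 0 and p_y_x[b, c] > 0 and p_yn1_y[a, b] > 0 and p_y[b] > 0:
                    te += pabc * np.log2((pabc / p_y_x[b, c]) /
                                         (p_yn1_y[a, b] / p_y[b]))
    return te
```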

However, for both mutual information and transfer entropy, large datasets are needed to estimate the probabilities of the signal, which restricts their application [130].

4.2.3 Nonlinear correlation coefficient

The nonlinear correlation coefficient, $h^2$ [138], is a nonparametric nonlinear regression coefficient that describes the dependency of a signal y on a signal x in a general way, without any assumption about the type of relationship between the two signals (i.e. linear or nonlinear). It is based on the idea that, if the amplitude of signal y is considered as a function of the amplitude of signal x, the value of y for a given value of x can be predicted according to a nonlinear regression curve. The coefficient $h^2_{yx}$ is the estimator of the correlation ratio $\eta^2$, which describes the reduction of the variance of y obtained by predicting the values of y from those of x according to the regression curve:

$$\eta^2 = \frac{\text{total variance} - \text{unexplained variance}}{\text{total variance}}$$

where the explained variance is the variance predicted from the knowledge of x, and the unexplained variance is obtained by subtracting the explained variance from the total variance. In practice, an approximation of the nonlinear regression curve is obtained by 1) computing a scatterplot of y versus x, 2) subdividing x into bins, 3) calculating, for each bin, the x value of the midpoint ($p_i$) and the average value of y ($q_i$), and 4) connecting the resulting points ($p_i$, $q_i$) by segments of straight lines (hence, a piecewise linear regression curve between the samples of the two signals). The nonlinear correlation coefficient $h^2_{yx}$ is then computed as the fraction of the total variance of y that can be explained by the segments of linear regression lines:

$$h^2_{yx} = \frac{\sum_{i=1}^{N} (y_i - \bar{y})^2 - \sum_{i=1}^{N} \big(y_i - f(x_i)\big)^2}{\sum_{i=1}^{N} (y_i - \bar{y})^2} \qquad (48)$$

where $f(x_i)$ is the value of the piecewise linear regression curve estimated between the samples of the two time series. Thus, $h^2$ is a measure of the strength of association between the two signals. It ranges between 0 (y is independent of x) and 1 (y is completely determined by x). In contrast with the correlation function, which is always symmetric, the nonlinear correlation coefficient may be asymmetric, i.e. $h^2_{yx} \neq h^2_{xy}$. If the relationship between x and y is linear, $h^2$ is equivalent to the regression coefficient $r^2$. If there is a large asymmetry in the values of $h^2$, the relationship between the two signals is nonlinear.
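
A minimal sketch of the binning-and-piecewise-regression procedure described above, assuming NumPy; the number of bins is a free parameter and the function name is ours:

```python
import numpy as np

def h2(x, y, bins=10):
    """Nonlinear correlation coefficient h2_{yx}: fraction of the variance of y
    explained by a piecewise-linear regression curve of y on x."""
    edges = np.histogram_bin_edges(x, bins)
    midpoints = 0.5 * (edges[:-1] + edges[1:])             # p_i (bin midpoints)
    which = np.digitize(x, edges[1:-1])                    # bin index of each sample
    means = np.array([y[which == i].mean() if np.any(which == i) else np.nan
                      for i in range(bins)])               # q_i (mean of y per bin)
    valid = ~np.isnan(means)                               # ignore empty bins
    y_pred = np.interp(x, midpoints[valid], means[valid])  # piecewise-linear f(x)
    total = np.sum((y - y.mean()) ** 2)
    unexplained = np.sum((y - y_pred) ** 2)
    return (total - unexplained) / total
```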

Similarly to the cross-correlation, one can compute the nonlinear correlation coefficient as a function of different time shifts $\tau$ between signals x and y (or vice versa), i.e. $h^2_{yx}(\tau)$ or $h^2_{xy}(\tau)$. In this case, in equation (48), $y_i$ becomes $y_{i+\tau}$. The time shift corresponding to the maximum value of $h^2_{yx}$ or $h^2_{xy}$, respectively $\tau_{yx}$ and $\tau_{xy}$, can be used as an estimate of the time lag between the two signals and thus gives an indication of causality, as the delayed activity is likely caused by the preceding activity. If y is delayed with respect to x, $\tau_{yx}$ is positive and $\tau_{xy}$ is negative (x precedes y). Consequently, the difference $\Delta\tau = \tau_{yx} - \tau_{xy}$ is positive.

The asymmetry of the nonlinear correlation coefficients $h^2_{yx}$ and $h^2_{xy}$, together with the time delays $\tau_{yx}$ and $\tau_{xy}$, can be used to obtain the direction index D [139], which provides the degree and direction of the functional coupling between the two signals. It makes combined use of the sign of $\Delta h^2 = h^2_{yx} - h^2_{xy}$ and of $\Delta\tau = \tau_{yx} - \tau_{xy}$ to provide probabilistic information about the direction of coupling:

$$D = \frac{1}{2}\big(\mathrm{sgn}(\Delta h^2) + \mathrm{sgn}(\Delta\tau)\big) \qquad (49)$$

If D = 1, y is completely dependent on and delayed with respect to x, so the coupling is $x \to y$. If D = -1, the opposite holds ($y \to x$). If D = 0, there is a bidirectional coupling between x and y ($x \leftrightarrow y$).
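
A sketch of the lagged h2 scan and of equation (49), reusing the h2() function from the sketch above; the maximum lag and the sign convention of the shift are illustrative assumptions:

```python
import numpy as np

def direction_index(x, y, max_lag=20, bins=10):
    """Direction index D = 0.5 * (sgn(h2_yx - h2_xy) + sgn(tau_yx - tau_xy)).

    Assumes the h2() function defined in the previous sketch is in scope and
    that max_lag is much smaller than the signal length."""
    lags = np.arange(-max_lag, max_lag + 1)

    def lagged_h2(a, b):
        # dependency of b on a, evaluated for every shift tau of b relative to a
        vals = []
        for tau in lags:
            if tau >= 0:
                aa, bb = a[:len(a) - tau], b[tau:]
            else:
                aa, bb = a[-tau:], b[:len(b) + tau]
            vals.append(h2(aa, bb, bins))
        vals = np.array(vals)
        best = int(np.argmax(vals))
        return vals[best], lags[best]        # maximum h2 and the lag reaching it

    h2_yx, tau_yx = lagged_h2(x, y)          # dependency of y on x
    h2_xy, tau_xy = lagged_h2(y, x)          # dependency of x on y
    return 0.5 * (np.sign(h2_yx - h2_xy) + np.sign(tau_yx - tau_xy))
```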

For further details about this nonlinear estimate, we refer the reader to [138-140].