Article Reference

Forecasting seizure risk in adults with focal epilepsy: a development and validation study

PROIX, Timothée, et al. Forecasting seizure risk in adults with focal epilepsy: a development and validation study. The Lancet Neurology, 2020. PMID: 33341149.

Available at: http://archive-ouverte.unige.ch/unige:147703


Appendix for

Forecasting seizure risk over days in adults with focal epilepsy: a development and validation study

Timothée Proix, PhD, Wilson Truccolo, PhD, Marc G. Leguia, PhD, Thomas K. Tcheng, PhD, David King-Stephens, MD, Vikram R. Rao, MD & Maxime O. Baud, MD.

This appendix details the methodology employed in this study, complementing the more concise exposition presented in the main text. First, we introduce our methodological strategy based on the (re)discovery of cycles in epilepsy. Specifically, we explain our rationale for borrowing probabilistic methods from different fields, namely neuroscience and meteorology. We also review the differences between deterministic and probabilistic forecasting through mathematical definitions. Second, we report all technical details of the study to support the replication of our results. Code and data from the development cohort are openly accessible at #DOI (to be made available upon acceptance). Third, we provide supplementary analyses to expand and support the main figures. In essence, this chronic EEG study is, to our knowledge, the first to fully leverage the newly discovered but pervasive phenomenon of multi-scale cycles of epileptic brain activity to issue calibrated probabilistic seizure forecasts.

Methodology

Here, we first present the key differences between deterministic and probabilistic forecasting in general terms, and then explain our rationale for opting for a probabilistic approach to seizure forecasting.

The performance of a forecast must be assessed thoroughly in order to understand its true potential for real-life applications. A good forecast must be “consistent,” of high “quality,” and add “value.”

Value is found when issuing the forecast enables informed decision-making. A high-quality forecast that is released too late to take any useful action has little value. In other words, the forecast horizon is crucial, and longer (or nested longer and shorter) horizons typically enable better decision-making.

A “consistent” forecast gives true information about different outcome probabilities and cannot be rewarded for hedging. This can happen, for example, when a forecast always issues the average probability, and remains correct by avoiding commitment to different outcomes. Finally, the quality


of a forecast cannot be ascertained with a single value, as illustrated by the many existing metrics defined below.

Probabilistic versus deterministic forecasting

A deterministic forecast seeks to provide a categorical answer to the question of whether an event will occur or not (a categorical yes or no answer). In contrast, probabilistic forecasting strives to reproduce the probability of events. When accurate, a deterministic forecasting approach is excellent for forecasting problems, because it will provide spot forecasts to best inform decision-making with potentially perfect accuracy. While this is most often conceivable at very short horizons, perfectly accurate deterministic forecasts do not exist for seizures, nor for weather. Deterministic forecasts are indeed only as good as the combination of the accuracy of the model generating them, the accuracy of the collected data, the interpretability of the output, and the horizon to take action before the realization of the forecast. In the 1960s, the acknowledgement that the accuracy of deterministic forecasts heavily depended on minute parameter changes or measurement inaccuracies (initial conditions) led the field of meteorology to adopt a probabilistic approach(1). Many other forecasting problems greatly benefit from a probabilistic approach, because they are too complex to be modelled accurately, or because the very nature of the events’ timing is stochastic. Probabilistic forecasting is now a core concept in many fields, including probability and control theory, econometrics, meteorology, time series analysis, and machine learning.

The choice between a deterministic and a probabilistic approach has repercussions on (I) the goal to attain, (II) the choice of forecasting algorithm, (III) the information provided to users, (IV) the methods to evaluate performance, and (V) the amount of data needed to do so:

(I) Two different goals may be sought: (a) always forecast a category (event or no event), but accept that this may sometimes be wrong, or (b) forecast a probability of belonging to a category (continuum between 0% and 100%), striving to produce a reliable quantification of uncertainty.

(II) The types of algorithms that best match these distinct goals are different. While deterministic algorithms learn how to assign a label to each data sample, probabilistic algorithms seek to optimize the conditional probability of observing a label given a data sample. Although there are methods to convert outputs of deterministic algorithms into probabilistic ones, opting for probabilistic algorithms allows for using methods adapted to the probabilistic nature of the problem (e.g. likelihood optimization).

(III) Deterministic outputs to users require threshold optimization to achieve a given goal (e.g. high specificity, low sensitivity), which is not required if users directly access forecasted probabilities to make informed decisions based on a certain degree of uncertainty.


(IV) “How often are the forecasts correct?” Correctness is appealing to characterize deterministic forecasts, but is generally considered inappropriate to evaluate probabilistic forecasts.(2) Probabilistic scoring metrics reward a forecaster for reporting risk when an event could have occurred, even in the absence of an observed event (absence of realization of the risk). A deterministic forecast, on the other hand, strives for discrimination between event and no-event datapoints; deterministic scoring metrics punish reporting risk when no event took place. Indeed, higher deterministic scores reflect that lower and higher forecasted probabilities are associated with non-events and events, respectively, but not how well the forecast captures the underlying event probabilities. Importantly, they are blind to the calibration of a forecast, as they only rely on relative, rather than absolute, probabilities.

For example, an algorithm forecasting 0% when no event occurs and 0.1% each time an event occurs would lead to an area under the sensitivity vs. 1–specificity (or time in warning) curve (AUC) of 1, even if the event occurred every other day on average (i.e. an expected probability of 50%). Probabilistic metrics, on the other hand, can quantify the calibration of the forecast(2), and are further discussed in the next section (a numerical illustration of this example is sketched after this list).

(V) While benefiting from more observations, deterministic scores can already be evaluated on small sample sizes (e.g. 20)(2). In contrast, evaluating a probabilistic forecast requires sample sizes that are orders of magnitude larger, because it is based on comparing probability distributions, including values that the algorithm outputs only rarely for low-probability events.

The critical lack of longitudinal data undermined burgeoning interest in this alternative approach in epilepsy(3).
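The following short Python sketch (our own illustration, not part of the original study; it uses scikit-learn's `roc_auc_score` for convenience and a climatological reference for the skill score) reproduces the example from point (IV) numerically: a forecaster that issues 0% on non-event days and 0.1% on event days attains a perfect AUC, while the Brier skill score exposes its gross miscalibration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Events occur every other day on average (expected probability 0.5).
outcomes = rng.integers(0, 2, size=10_000)

# Pathological forecaster: 0% on non-event days, 0.1% on event days.
forecasts = np.where(outcomes == 1, 0.001, 0.0)

auc = roc_auc_score(outcomes, forecasts)                 # ranking only -> 1.0
brier = np.mean((forecasts - outcomes) ** 2)             # punishes miscalibration
brier_ref = np.mean((outcomes.mean() - outcomes) ** 2)   # climatological reference
bss = 1 - brier / brier_ref                              # strongly negative here

print(f"AUC = {auc:.2f}, Brier score = {brier:.3f}, BSS = {bss:.2f}")
```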


Fig. S1: Probabilistic performance metrics. Arbitrary reliability curve illustrating several key aspects of the evaluation of probabilistic forecasts. The forecast output was binned into 10 bins with equal numbers of observations, two of which are under the expected probability (‘E’), and eight of which are above E. The grey curve connecting the bin centers is for visualization purposes only. The perfect calibration and no-resolution lines are the dotted diagonal and horizontal lines, respectively.

Resolution (blue dotted line) is the vertical distance to the horizontal no-resolution line. Calibration loss is the vertical distance to the diagonal. All forecast values up to 25% are perfectly calibrated and align with the diagonal. Forecasted values between 25 and 50% are under-confident (above the diagonal), resulting in calibration loss. Forecasted values between 50 and 100% are over-confident (below the diagonal) resulting in loss of calibration and resolution. Uncertainty can be understood geometrically as the grey area.


General definitions

Forecast horizon: The future period of time for which a forecast is generated.

Uninformative forecasts: Forecasts that do not help decision-making. Trivial solutions, such as perpetually issuing 0% probability for rare events, have good performance but are uninformative (unskilled) and can be used as a reference.

Discrimination: Discrimination measures whether forecasts differ when their corresponding observations differ; for example, if forecasts for days that are wet indicate more rain than for days that are dry, the forecasts can discriminate wetter from drier days.

Deterministic metrics

Accuracy: Measure of discrimination, or how well a forecast correctly identifies or excludes a certain outcome. Formula: $(TP + TN)/\mathit{All}$.

Sensitivity (Se): How often the forecast correctly identifies an event. Formula: $TP/(TP + FN)$.

Specificity (Sp): How often the forecast avoids misidentification. Formula: $TN/(TN + FP)$.

Time in warning (TiW): Duration of time a forecast indicates an event is likely. Formula: $(TP + FP)/\mathit{All}$.

Area under the curve (AUC): Typically assessed as the tradeoff between sensitivity and specificity (or time in warning) by systematically thresholding the algorithm output at all forecasted values. Formula: Se vs. $1 - \mathrm{Sp}$, or Se vs. TiW.

Observed probability: Frequency of events per unit of time observed in the data, i.e. their empirical probability. Formula: $\frac{1}{n}\sum_{i=1}^{n} o_i$.

Relative risk: The ratio between the probability of an event in a category or state and the probability of this event in another category. Formula: $\dfrac{TP/(TP + FP)}{FN/(FN + TN)}$.

Probabilistic metrics

Expected probability: Based on all previous observations, the frequency (probability) of events expected over a long duration in the future. Formula: $\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n} o_i$.

Forecasted probability: Probability of an event forecasted for one time interval in the future. Formula: $f_i$.

Calibration (or reliability): Agreement between forecasted probability and observed probability. Typically calculated by averaging n forecast datapoints in m ranked bins ($\bar{f}_k$, e.g. average forecast between 0 and 10%) and calculating the corresponding observed event probability, $\bar{o}_k$. For a calibrated forecast, the binned forecasted probability and observed probability match and therefore align on a diagonal in a reliability diagram. Graphically, distance to the diagonal (Fig. S1). Formula: $\frac{1}{n}\sum_{k=1}^{m} n_k (\bar{f}_k - \bar{o}_k)^2$.

Resolution: Ability of the forecast to separate observed probabilities from the average observed probability. Resolution is zero for a flat line intersecting the y-axis at the expected probability; this corresponds to alignment of the ROC curve with the diagonal. Graphically, separation of the reliability curve from the horizontal line of no resolution (Fig. S1). Formula: $\frac{1}{n}\sum_{k=1}^{m} n_k (\bar{o}_k - \bar{o})^2$.

Sharpness: Tendency to forecast probabilities, $f_i$, near 0 or 1, as opposed to uniformly distributed forecasts. Sharpness is an attribute belonging only to the forecast and is not influenced by the observations. Graphically, variance of the distribution of the forecasts. Formula: $\frac{1}{n}\sum_{i=1}^{n} (f_i - \bar{f})^2$.

Uncertainty: Uncertainty depends only on the frequency of events $\bar{o}$ and is not influenced by the forecast. Uncertainty tends to 0 with very rare (or very frequent) observations (i.e. with increased imbalance) and is greatest (= 0.25) when an event is observed 50% of the time, making forecasts more difficult. Formula: $\bar{o}(1 - \bar{o})$.

Skill: Accuracy of a forecast relative to some reference forecast. The reference forecast is generally an unskilled forecast such as random chance, shuffled forecasts, or uninformative forecasts. A forecast may be better simply because it is easier to make, which is taken into account when calculating skill. Formula: $1 - \mathit{Score}/\mathit{Score}_{ref}$.

Bias: Mismatch between the mean forecast value, $\bar{f}$, and the mean observed probability, $\bar{o}$. Formula: $\bar{f} - \bar{o}$.

Brier score (BS): Mean squared distance between the forecasted value, $f_i$, and the observation, $o_i$ (set to 1 or 0), calculated at each i-th timepoint for n forecasts. Better Brier scores are lower (i.e. tend to zero). Formula: $\frac{1}{n}\sum_{i=1}^{n} (f_i - o_i)^2$.

Brier skill score (BSS): Improvement of the Brier score over a reference forecast. Brier skill scores tend to 1 when better, 0 when no improvement over reference, and $-\infty$ when worse than reference. Formula: $1 - BS/BS_{ref}$.


Table S1: Metrics for forecast performance. TP: true positive, TN: true negative, FP: false positive, FN: false negative; All = TP + TN + FP + FN. $m$, number of bins in the reliability diagram; $n$, number of data points (observed or forecasted); $f_i$, forecast probability for the $i$-th forecast; $o_i$, the $i$-th observed probability; and $\bar{o}$, the average observed probability.
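To make these definitions concrete, the sketch below (illustrative only; the function name `brier_decomposition` and the synthetic data are our own, not from the study) computes the Brier score together with its calibration, resolution, and uncertainty terms from paired forecasts $f_i$ and binary outcomes $o_i$, using bins of approximately equal population as in the reliability diagram described above.

```python
import numpy as np

def brier_decomposition(f, o, m=10):
    """Brier score and its calibration / resolution / uncertainty terms.

    f: forecast probabilities (0-1); o: binary outcomes; m: number of bins,
    chosen to contain approximately equal numbers of forecasts.
    """
    f, o = np.asarray(f, float), np.asarray(o, float)
    n = len(f)
    bs = np.mean((f - o) ** 2)

    # Equally populated bins: split the rank-ordered forecasts into m groups.
    bins = np.array_split(np.argsort(f), m)

    o_bar = o.mean()                       # average observed probability
    calibration = resolution = 0.0
    for idx in bins:
        n_k = len(idx)
        if n_k == 0:
            continue
        f_k, o_k = f[idx].mean(), o[idx].mean()   # mean forecast / observed frequency in bin k
        calibration += n_k * (f_k - o_k) ** 2
        resolution += n_k * (o_k - o_bar) ** 2
    calibration /= n
    resolution /= n
    uncertainty = o_bar * (1 - o_bar)
    return bs, calibration, resolution, uncertainty

# Toy example: a roughly calibrated forecaster of events with variable risk.
rng = np.random.default_rng(1)
true_p = rng.uniform(0.05, 0.4, size=5000)
outcomes = rng.binomial(1, true_p)
forecasts = np.clip(true_p + rng.normal(0, 0.03, size=true_p.size), 0, 1)

bs, cal, res, unc = brier_decomposition(forecasts, outcomes)
# Murphy decomposition: BS ~= uncertainty - resolution + calibration loss.
print(bs, unc - res + cal)
```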

Methodology for probabilistic forecasting

Refining the measurement of probabilistic forecast value has a decades-long history in meteorology(4).

More recently, the field of machine-learning has also adopted similar definitions and concepts.

Nomenclature differs somewhat from one community to the other. Our own definitions, inspired by these fields, are given in Table S1, and their geometric meaning can be visualized in an illustrative reliability diagram (Fig. S1), which evaluates how well the forecasted probabilities of an event correspond to their observed probabilities. Excellent online sources can be found at:

- https://www.cawcr.gov.au/projects/verification

- http://checkmyai.com/index.php?get=methods

Metrics to evaluate probabilistic forecast performance do not rely on classical definitions of false/true positives/negatives and the related deterministic scores (e.g. Sensitivity, Specificity), as probabilities are not thresholded. Rather, probabilistic scores determine how well a group of probabilistic predictions correspond to reality. By comparing forecasted probabilities to observed probabilities, these assessments require a large number of trials, as some observations will be very rare (by design).

For example, consider a given forecasted value of 5% probability of an event with a 24-hour horizon.

If we assume that such a forecast is issued one day out of 10, verifying, with some degree of confidence, that observations match this forecast would require observing at least 2 events out of 40 forecasts with 5% probability, i.e. it would necessitate ~400 days of observation. The calculation must be repeated for higher and lower forecasted values associated with more and less frequent events, respectively. This leads to a rapid expansion of the number of observations required to correctly assess the performance of a model at different forecasted values.
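The arithmetic of that example can be written out explicitly; the snippet below is our own back-of-the-envelope illustration using the hypothetical numbers from the text.

```python
# Hypothetical numbers matching the example in the text.
forecast_value = 0.05      # forecasted event probability for this bin
issuance_rate = 0.10       # fraction of days on which this value is issued
min_expected_events = 2    # events needed for even a crude verification

forecasts_needed = min_expected_events / forecast_value   # 40 forecasts at 5%
days_needed = forecasts_needed / issuance_rate            # ~400 days of observation
print(f"{forecasts_needed:.0f} forecasts, i.e. ~{days_needed:.0f} days")
```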

The Brier score measures the magnitude of the probabilistic forecast errors and can be partitioned into three attributes for better interpretability(5). First, a forecast is evaluated accounting for the uncertainty intrinsic to the problem, as not all forecasting problems are equally difficult (i.e. some are imbalanced). In our case, expected individual seizure rates ranged from 1% to 48% (Fig. S5), corresponding to less and more difficult forecasting problems, respectively. Second, a good probabilistic forecast has resolution, i.e. it is able to predict different outcomes when it takes on different values(2). If the outcome is independent of the forecast, the forecast has no resolution and is useless. When resolution is absent, discrimination is also absent for each chosen threshold (i.e. AUC = 0.5). Resolution is conditioned on the forecasts: are different outcomes obtained given different forecasted values? Conversely, discrimination is conditioned on the observations: are different forecasted values obtained given different outcome categories? Resolution cannot be adjusted after algorithm training. Third, a good probabilistic forecast is calibrated (or reliable). A loss of calibration is


defined as datapoints deviating from the diagonal and can result in over- or under-forecasting bias.

Calibration can be adjusted after algorithm training, in a subsequent step of “re-calibration,” which we did not use here. Additionally, a sharp forecast tends to issue mostly higher and/or lower probabilities, i.e. it offers greater confidence in the outcome, provided it is calibrated.

The Brier skill score incorporates all of these attributes and assesses the improvement in performance of the output forecast relative to a random reference (in our case, the random shuffling of the original forecast). Finally, the AUC represents a complementary metric to assess probabilistic forecasts, influenced by the problem’s uncertainty as well as by forecast resolution and sharpness, but not calibration. In seizure forecasting, the imbalanced nature of the problem leads most researchers to use the time in warning as opposed to specificity for the calculation of the AUC, so as to avoid an over-weighting of true negatives(6,7). While the Brier score and the AUC are influenced by the degree of uncertainty (imbalance), the BSS reports results in relation to a common reference, which facilitates comparisons between forecasts.
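A minimal sketch of the shuffled-reference Brier skill score follows (our own Python illustration; the study performed the analysis in R and used 1000 shuffles).

```python
import numpy as np

def brier_skill_score(forecasts, outcomes, n_shuffles=1000, seed=0):
    """BSS relative to the mean Brier score of randomly shuffled forecasts."""
    rng = np.random.default_rng(seed)
    f = np.asarray(forecasts, float)
    o = np.asarray(outcomes, float)
    bs = np.mean((f - o) ** 2)
    bs_ref = np.mean([np.mean((rng.permutation(f) - o) ** 2)
                      for _ in range(n_shuffles)])
    return 1 - bs / bs_ref
```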

Empirical versus forecasted seizure probability

For clarity, when referring to “probability” in the Methods and Results sections, we here define four distinct types of seizure probabilities (range 0–1, see also Table S1):

(1) on the training dataset, we calculated the proportion of days (or hours) with seizures, which represents the empirical long-term average expected seizure probability that can be known a priori, before issuing any forecast (labeled “E” in Figures 1 and 3). This probability (frequency) corresponds to the “seizure rate” typically used in a clinical setting when evaluating efficacy of an intervention.

(2) on the testing dataset, we calculated the proportion of days (or hours) with seizures, which represents the empirical observed seizure probability obtained a posteriori (after realization of the risk).

(3) in the testing dataset, we forecasted seizure probability for each time interval (labeled 𝑓𝑖 in equations above), representing the estimated a priori probability that at least one seizure will occur on that day (hour), based on the models trained on the training dataset.

(4) Because an absolute seizure probability of 20% may represent a high or a low risk depending on the individual expected seizure probability (e.g., 10% vs. 30% for two subjects), we also calculated individual probability differences by simply subtracting the expected seizure probability from the forecasted seizure probability (−10% and +10% in the example above, i.e. centered around the expected probability, Figure 3). Similarly, the calculation of relative risk is based on changes in individual seizure rates in different states and accounts for differences in absolute seizure rates across subjects.
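These quantities can be read directly off a subject's binary seizure series; the following sketch (with hypothetical arrays and names of our own) makes the bookkeeping explicit.

```python
import numpy as np

# Hypothetical binary seizure-day series for one subject.
train_days = np.array([0, 1, 0, 0, 1, 0, 0, 0, 1, 0])
test_days  = np.array([0, 0, 1, 0, 0, 1, 0, 0])
forecasts  = np.array([0.1, 0.05, 0.4, 0.2, 0.1, 0.35, 0.15, 0.1])  # f_i from the model

expected_p = train_days.mean()        # (1) a priori expected probability, "E"
observed_p = test_days.mean()         # (2) a posteriori observed probability
#                                       (3) forecasted probabilities: the f_i themselves
prob_diff = forecasts - expected_p    # (4) individual probability differences
```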


Rationale for probabilistic seizure forecasting

Our choice of a probabilistic approach is motivated by four main factors: (1) the lack of generalizable seizure precursors in the pre-ictal period (minutes preceding seizures); (2) our recent discovery of cycles of epileptic brain activity that are markers for seizure risk; (3) our hypothesis of the existence of pro-ictal states, which constrain the timing of seizures in a probabilistic manner over different durations (Fig. S2); and (4) the potential for probabilistic forecasts to be interpreted by people with epilepsy in terms of quantified uncertainty about upcoming seizures (Fig. S3) as opposed to categorical prediction. Our probabilistic approach was enabled by the recent availability of great amounts of chronic EEG data.

Fig. S2. Metric of model performance depends on pro-ictal state duration. (a) In these ideal simulated examples, the gradient-colored line depicts forecasted seizure probabilities over 100 days with 9 recurrent pro-ictal states (high risk for seizures) at about 10-day intervals. The true duration of each individual pro-ictal state is 3 (top) and 7 days (bottom), which is captured by forecasts F1 and F2, respectively. Red dots show seizures, which occur as a stochastic realization of the underlying risk, when greater than zero. The offset on the y-axis for F1 and F2 is only for visual purposes, and probabilities are given by the gradient curves. (b) In both cases, all seizures occur during pro-ictal states (sensitivity should be 100%) but their different durations result in different AUCs, as longer pro-ictal states increase the proportion of time spent at probabilities greater than zero (i.e. increased time in warning). The color gradient corresponds to the scale in (a) and helps the visual identification of sensitivity (proportion of seizures) and specificity (proportion of time) at a given forecasted value. (c) Despite very different AUCs, the two forecasts are perfectly calibrated (i.e. they align on the diagonal). Their resolution differs, as evidenced by the fact that the F2 line extends over only half of the diagonal. F2 gives overall lower probabilities that are accurate, reflecting the true underlying process. The dashed blue line represents the expected seizure probability (E), which is an intuitive threshold to separate periods of relatively higher versus lower risk of seizures.


Fig. S3. Seizure Gauge. In this schematic of what a seizure gauge may look like, forecasted probabilities are rendered as a continuum of increasing risk as the gauge moves from left to right. Dotted blue line represents the expected seizure probability (E). Similar visual scales in other applications enable direct interpretation of probabilistic forecasts by the user.

One key advantage of a probabilistic framework is that it includes the possibility to issue fully committed predictions (i.e. 0% and 100%) at both ends of a continuum of intermediate degrees of confidence. Conversely and by design, a deterministic approach severs the connection to model outputs by irreversibly thresholding values into two mutually exclusive categories. The price to pay for a deterministic approach is to be wrong on some (or many) occasions, whereas for a probabilistic approach it is to never (or rarely) be certain.

Point-process Generalized Linear Models (PP-GLMs) for probabilistic seizure forecasting

We opted for point-process generalized linear models (PP-GLMs), an established probabilistic framework to evaluate the association between a sequence of event (seizure) times, represented as a binary (or count) time-series, and temporal features upon which event probability may depend.

Specifically, we considered the sequence of seizure occurrence times as a realization of a stochastic discrete-time point process, $S_t$, with the time-bin length set to $\Delta t = 1$ hour or $\Delta t = 1$ day, based on the desired long and short forecast horizons and the sampling resolution in our dataset. Because more than one seizure can occur in a given hour or day (true for 0.2% of the hours on average across subjects), multiple seizure events in a given time-bin were considered as a single seizure event, such that $S_t \in \{0, 1\}$. We used PP-GLMs with a log-link function and a (conditionally) Poisson distribution to forecast the probability of a seizure as a function of features extracted from the most recent seizure history, the most recent history of the IEA, $I_t$, and other covariates $\{X_t^1, X_t^2, \dots\}$(8). This probability is related to the ’instantaneous’ rate or conditional intensity function $\lambda(t \mid \cdot)$ of the point process(8,9), here modeled as:

$$\log \lambda\big(t \mid S_{\{t-1,\dots,t-p\}},\, I_{\{t-1,\dots,t-q\}},\, X_{\{t-1\}}\big) = \mu + \sum_{i=1}^{p} a_i S_{t-i} + \sum_{j=1}^{q} b_j I_{t-j} + \sum_{k=1}^{n} c_k X_{t-1}^{k},$$

where $p$ and $q$ correspond to the number of time points for the seizure and IEA histories, respectively; $n$ is the number of additional covariates; $\mu$ relates to a background rate; and $a_i$, $b_j$, and $c_k$ are the model parameters to be estimated. Parameters $p$ and $q$ were optimized for each patient (see below). When


modeling the conditional intensity as a function of the instantaneous phase $\theta_t$ of a specific multidien cycle, we used:

$$\log \lambda(t \mid \theta_{t-1}) = \mu + b \cos(\theta_{t-1} - \theta_0) = \mu + b_1 \cos(\theta_{t-1}) + b_2 \sin(\theta_{t-1}),$$

where $\theta_0$ is the preferred phase of the seizure process with respect to the multidien cycle, and $\theta_t$ is the corresponding instantaneous phase of the cycle. Except for models that include the recent seizure history, all other cases result in an inhomogeneous Poisson process with a time-varying instantaneous intensity $\lambda(t)$. (An alternative formulation would be a doubly-stochastic Cox process where extrinsic covariates are also explicitly treated as stochastic.)

The forecast probability, $f_i$, used in the Brier score is then obtained by multiplying the conditional intensity, $\lambda$, by the time-bin length $\Delta t$, which is a valid approximation for small enough $\Delta t$(8), i.e. $f_i(t) \approx \lambda(t) \cdot \Delta t$. The rare but possible forecasted values above 1 were clipped to 1.
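The study fitted these models in R with the tscount package (see below); as a simplified, illustrative stand-in, the following Python sketch fits a Poisson GLM with a log link (statsmodels) to a simulated binary seizure series, using the lagged cosine and sine of a multidien phase plus one day of seizure history as covariates, and converts the conditional intensity into a forecast probability $f_i \approx \lambda \Delta t$ clipped at 1. All variable names and the simulated cycle are our own assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_days, dt = 600, 1.0                                   # daily time bins

# Simulated multidien phase (~20-day cycle) and seizures drawn from a phase-locked rate.
theta = 2 * np.pi * np.arange(n_days) / 20.0
seizures = (rng.poisson(np.exp(-3.0 + 1.2 * np.cos(theta - 0.5))) > 0).astype(float)

# Causal design matrix: intercept, cos/sin of the lagged phase, one day of seizure history.
theta_lag = np.r_[theta[0], theta[:-1]]
X = sm.add_constant(np.column_stack([np.cos(theta_lag),
                                     np.sin(theta_lag),
                                     np.r_[0.0, seizures[:-1]]]))

train, test = slice(0, 480), slice(480, n_days)
model = sm.GLM(seizures[train], X[train], family=sm.families.Poisson()).fit()

lam = model.predict(X[test])              # conditional intensity on held-out data
f = np.clip(lam * dt, 0.0, 1.0)           # forecast probability f_i ~ lambda * dt, clipped at 1
```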

Supplementary Methods

Three distinct types of raw data were used in this study: (1) Interictal activity (Fig. S4a), (2) Electrographic seizures (Fig. S4b) and (3) self-reported seizures (Fig. S4c). The RNS System recorded interictal activity from all subjects and electrographic seizures only in the development cohort. Only subjects in the validation cohort self-reported their seizures.

Fig. S4. Raw data types. Three types of raw data were included in the study. (a) interictal epileptiform activity, measured as counts of detections made by the RNS System. (b) electrographic seizures from the development cohort, measured as detections made by the RNS System and lasting longer than a patient-specific duration. (c) Self-reported seizures from the validation cohort in which participants reported seizures during the nine-year long trials of the device as the primary clinical outcome.

Self-reported seizure data

Participants in the long-term trials of the RNS System were instructed to log daily seizure counts and types, and were implanted with the device and followed at 34 centers across the USA (Institution, City): University of Alabama, Birmingham; Mayo Clinic, Phoenix; University of Southern California, Los Angeles; California Pacific Medical Center, San Francisco; Yale University, New Haven; George Washington University, Washington; University of Florida, Gainesville; Mayo Clinic, Jacksonville; Miami Children's Hospital, Miami; Emory University, Atlanta; Medical College of Georgia, Augusta; Rush University Medical Center, Chicago; Indiana University, Indianapolis; Via Christi Comprehensive Epilepsy Center, Wichita; Louisiana State University Epilepsy Center of Excellence, New Orleans; Johns Hopkins University School of Medicine, Baltimore; Massachusetts General Hospital, Boston; Henry Ford Hospital, Detroit; Mayo Clinic, Rochester; Dartmouth-Hitchcock Medical Center, Lebanon; Saint Barnabas Medical Center, Livingston; Weill Medical College of Cornell University, New York; Columbia University, New York; University of Rochester, Rochester; Wake Forest University Health Sciences, Winston-Salem; Cleveland Clinic Foundation, Cleveland; Oregon Health & Science University, Portland; Thomas Jefferson University, Philadelphia; Medical University of South Carolina, Charleston; University of Texas Southwestern Medical Center, Dallas; Baylor College of Medicine, Houston; University of Virginia, Charlottesville; Swedish Medical Center, Seattle; University of Wisconsin Hospital and Clinics, Madison.

Data were collected over 14 years in total. Gaps in seizure diaries were treated as follows: (i) If the gap did not exceed the expected subject-specific seizure interval, they were converted into days of no seizures (transformed into zeros). The expected subject-specific interval was taken as the 95th percentile of all observed intervals between seizures of each subject; (ii) If the gap was longer, both seizure diary data and IEA data were discarded, and extracted features (see below) before and after the gap were concatenated. As a small number of subjects (n=7) included in the validation cohort became seizure-free during their involvement in the RNS System clinical trial, periods ≥6 months without disabling seizures were discarded (Fig. S8). Subjects from the clinical trial logged the date but not the time of seizures. These data could therefore only be used for daily and not hourly forecasts. We defined ‘seizure-days’ or ’seizure-hours’ as binary events regardless of the seizure count. We report in Fig. S5 the percentage of days with seizures for different subjects, highlighting the variable degree of imbalance between categories.

Fig. S5. Distribution of seizure rates in the test dataset. Number of subjects as a function of the percentage of days with seizures (binned). Additional analyses for 14 patients with more than 50% seizure-days are provided in Fig. S8.


Chronic EEG data

For each subject, IEA time-series from two RNS System detectors were selected for periods of continuous data with stable detection settings lasting longer than six months. Four detectors can be independently programmed on the neurostimulator, each with a unique parametrization of the embedded algorithm, and Boolean operators (‘AND’, ’OR’) are used to combine detectors(10). Thirty-four subjects had only one active detector that could be used to obtain input temporal features. Four subjects had a changing number of active detectors over time (one or two). For those cases, consecutive periods with the same number of active detectors were processed and used for forecasting independently, and the forecasting performance was then averaged across periods. For all other subjects, input temporal features including past IEA and underlying cycles were extracted from two detectors. Short blocks of data (<9 months) in-between gaps that could not be interpolated (see below) were discarded. Additional blocks of data were discarded if IEA counts were saturated (mostly capped at 256 per hour) or too sparse (mostly zeros). The resolution for electrographic seizure timing was one hour (LE counts are stored in hourly bins), and these data were therefore included in hourly forecasts.

Data pre-processing

Data were pre-processed as described previously(11). Briefly, changes in RNS System detection settings affect detection sensitivity and, therefore, absolute IEA counts. Hourly IEA counts were z-scored by block, where a block corresponds to an epoch with stable detection settings. Gaps in IEA data owing to subjects’ non-compliance with data transmission were treated as follows: (i) Gaps that did not exceed thirteen days were interpolated (415 gaps in 131 subjects, median width 41 h [IQR 11–109 h]) as follows: (a) For each gap, we selected flanking data on each side with the same length as the gap. We linearly interpolated the mean value of these windowed data and added Gaussian random noise with standard deviation (SD) given by the SD of the concatenated IEA data in the two windows; (b) We used this time-series to compute the different multidien rhythms $x_{\mathrm{multidien},t}$ using a Morlet wavelet transform; (c) We computed the circadian distribution of the mean and SD of the IEA time-series and created two corresponding periodic time-series, $z_{\mathrm{meanCircadian},t}$ and $z_{\mathrm{sdCircadian},t}$; (d) Finally, we filled in the gaps by summing the multidien and circadian time-series according to the following equation: $z_{\mathrm{IEAFilled},t} = z_{\mathrm{multidien},t} + z_{\mathrm{meanCircadian},t} + z_{\mathrm{sdCircadian},t} \cdot \varepsilon_t$, with $\varepsilon_t \sim \mathcal{N}(0, 1)$ Gaussian noise. We filled corresponding gaps in the seizure time-series by replacing missing data with zeros. (ii) If the gaps were too long to be interpolated, features were extracted for each block independently, and then concatenated in a single time series (152 gaps in 85 subjects, median width 942 h [IQR 479–3938 h]).
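Step (d) of the gap-filling rule reduces to a one-line sum; the helper below is our own simplified sketch, assuming the three component series over the gap have already been estimated as described in steps (a) to (c).

```python
import numpy as np

def fill_gap(z_multidien, z_mean_circ, z_sd_circ, seed=0):
    """Reconstruct missing hourly IEA inside a gap (step (d), simplified).

    The three inputs cover the gap hours: the multidien rhythm estimated from the
    linearly interpolated series (steps a-b), and the circadian profiles of the
    mean and SD of the z-scored IEA, repeated periodically over the gap (step c).
    """
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(len(z_multidien))          # eps_t ~ N(0, 1)
    return z_multidien + z_mean_circ + z_sd_circ * eps   # z_IEAFilled,t
```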


Temporal feature extraction

Temporal features (also called covariates in the statistics literature, listed in Table 2 of the main article) were extracted from the interpolated IEA and seizure time-series. The distribution of seizures over 24-h clock time and the calendar week was computed on the training dataset only, and cyclical temporal features were constructed by repeating these probabilities over time. We estimated the circadian and multidien IEA cycles using a centered bandpass finite impulse response forward-backward filter (MATLAB functions fir1 and filtfilt) and a Hamming window. The order of the filter was different for each rhythm and was chosen to be twice the period of each rhythm. The bandwidth of the filter was set as $b = [1/m - 1/(3m),\ 1/m + 1/(3m)]$, where $m$ is the peak periodicity previously derived by wavelet transform. The cosine and the sine of the instantaneous phase, as well as the instantaneous amplitude of each circadian and multidien rhythm, were then extracted from the analytic signal obtained via a Hilbert transform (MATLAB function hilbert) of the filtered signal, after removal of the mean to center the analytic signal.
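The original extraction was implemented in MATLAB (fir1, filtfilt, hilbert); an approximate Python equivalent using SciPy is sketched below for one multidien rhythm. The function name and the hourly sampling rate are our own choices, and the sketch assumes a z-scored IEA series several months long so that the long forward-backward FIR filter is applicable.

```python
import numpy as np
from scipy.signal import firwin, filtfilt, hilbert

def multidien_phase_amplitude(iea_hourly, period_days):
    """Instantaneous phase and amplitude of one multidien IEA rhythm.

    iea_hourly: z-scored hourly IEA counts (several months long);
    period_days: peak periodicity m (days) previously identified by wavelet transform.
    """
    fs = 24.0                                    # samples per day (hourly data)
    m = float(period_days)
    # Passband [1/m - 1/(3m), 1/m + 1/(3m)] in cycles/day, Hamming-window FIR,
    # with an order of roughly twice the period of the rhythm.
    band = [1 / m - 1 / (3 * m), 1 / m + 1 / (3 * m)]
    numtaps = int(2 * m * fs) + 1
    taps = firwin(numtaps, band, pass_zero=False, window="hamming", fs=fs)
    filtered = filtfilt(taps, [1.0], iea_hourly)         # zero-phase (forward-backward)
    analytic = hilbert(filtered - filtered.mean())       # center, then analytic signal
    return np.angle(analytic), np.abs(analytic)          # phase, amplitude
```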

Causality

Regarding the training and testing of the statistical model itself, temporal causality was strictly enforced to ensure that only past data were used for forecasting. Further, test datasets always followed the training datasets chronologically.

To obtain the most accurate estimation of the instantaneous phases of selected circadian and multidien rhythms, we used a non-causal filter in combination with the (non-causal) Hilbert transform in the training and testing dataset, as described above. For application in a prospective trial, the phase will need to be estimated in a time-causal manner. This is an optimization problem for which many signal processing methods are available, and that we will tackle in the near future to enable prospective trials.

Training, validation and testing datasets

Data were divided into chronological training and held-out testing sets comprising, respectively, the shorter of 60% of the data or 480 days, and all of the remaining data. To find the optimal lengths of the seizure and IEA histories (parameters p and q, respectively) for the model, we used five-fold cross-validation: the validation set was chosen sequentially without replacement as 20% blocks of the training data, and the training set consisted of the remaining 80% of data.
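A compact sketch of this splitting scheme (our own illustrative Python; function names are assumptions):

```python
import numpy as np

def chronological_split(n_days, train_frac=0.6, max_train_days=480):
    """Training set = shorter of 60% of the data or 480 days; the rest is held out."""
    n_train = min(int(train_frac * n_days), max_train_days)
    return np.arange(n_train), np.arange(n_train, n_days)

def five_fold_blocks(train_idx, n_folds=5):
    """Sequential 20% blocks of the training data used as validation folds."""
    blocks = np.array_split(train_idx, n_folds)
    for k, val in enumerate(blocks):
        fit = np.concatenate([b for j, b in enumerate(blocks) if j != k])
        yield fit, val
```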

Model training and optimization

Fitting of the model and forecasting were computed with R, using the library tscount(12) (https://cran.r-project.org/package=tscount). Model parameters were estimated via maximum likelihood on the training dataset. For models using recent IEA and recent seizure timings, parameter space exploration (for parameters p and q in the PP-GLM model equation above) was run by systematically varying the number of days (0 to 5) and hours (0 to 10) in the history. Optimal parameters were obtained using the AUC (see below), were patient-dependent, and were used to train PP-GLMs on the whole training dataset. Performance of forecasts obtained using the amplitude of the circadian or multidien rhythms was low and not considered further.

Final performance reported in the main figures was assessed on the held-out test dataset without re-training, unless otherwise specified. Given our criteria to select training data (at most 480 days), the resulting very large amount of previously unseen testing data (211,005 days, 73% of total data) ensured that the models were not over-fitted to a small number of seizures. Slightly over-confident forecasts were observed for probabilities above 25%, possibly related to the gradual decrease in seizure frequency over time with neurostimulation across subjects(13,14) (Fig. S16). To test the sensitivity of our models to such non-stationarities, retraining was performed at different intervals. To that aim, the training dataset was incrementally increased at the beginning of each new period (new seizure, or fixed period of time).

Forecast performance metrics

Reliability diagrams were obtained by plotting the observed seizure probability stratified by bins of forecasted seizure probabilities, where the bins were chosen to be equally populated(15). Further, the averages of binned forecasted values were used on the x-axis instead of the arithmetic center of the bin(15). Consistency bars (shaded area in Fig. 2b in main article) were obtained by computing the variations of the observed seizure probabilities over a set of calibrated forecasts generated by bootstrapping.
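A sketch of the equally populated binning and of bootstrap consistency bars obtained from synthetic, perfectly calibrated outcomes follows (our own Python illustration; the bin count, bootstrap size, and percentiles are assumptions, not the study's exact settings).

```python
import numpy as np

def reliability_curve(f, o, m=10):
    """Equally populated bins; x = mean forecast per bin, y = observed frequency."""
    f, o = np.asarray(f, float), np.asarray(o, float)
    bins = np.array_split(np.argsort(f), m)
    x = np.array([f[idx].mean() for idx in bins])
    y = np.array([o[idx].mean() for idx in bins])
    return x, y

def consistency_bars(f, m=10, n_boot=1000, q=(2.5, 97.5), seed=0):
    """Spread of bin-wise observed frequencies under perfectly calibrated outcomes."""
    rng = np.random.default_rng(seed)
    f = np.asarray(f, float)
    ys = []
    for _ in range(n_boot):
        o_cal = rng.binomial(1, f)               # synthetic outcomes drawn from the forecasts
        ys.append(reliability_curve(f, o_cal, m)[1])
    return np.percentile(ys, q, axis=0)          # per-bin lower / upper consistency bounds
```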

To compute the Brier skill score (BSS), we used a random reference strategy where forecast probabilities are randomly shuffled 1000 times, similar to a previous study(16). We used the area under the curve (AUC) of the sensitivity (number of time points with seizures correctly predicted divided by total number of time points with seizures) versus proportion of corrected time in warning curve. We corrected the proportion of time in warning because the minimum time in warning corresponds to the number of correctly predicted time points with seizures (true positives), as our sampling period (one hour or one day) is equal to our warning duration. As a result, the same model performance would yield lower AUC in subjects with more predicted seizures. We corrected for this bias by using the following definition for the proportion of corrected time in warning:

$$\text{corrected proportion of time in warning} = \frac{\text{time in warning} - \text{time in true positive}}{\text{total time} - \text{time in true positive}},$$

with the total time being the length of the time series used for the prediction, and the time in true positive being the number of hours or days for which there is a true positive. We also report (Figs. S6, S12) the sensitivity for 25% proportion of corrected time in warning, the proportion of corrected time in warning for 75% sensitivity, and the sensitivity and proportion of corrected time in warning that minimize the Euclidean distance between the curve and the perfect forecasting point (sensitivity = 100% for time in warning = number of seizures).
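The sensitivity versus corrected-time-in-warning curve and its AUC can be computed by thresholding the forecast at every issued value; a sketch (our own Python illustration, with hypothetical function and variable names) follows.

```python
import numpy as np

def sensitivity_vs_corrected_tiw(forecasts, outcomes):
    """Sensitivity vs. corrected time in warning, thresholding at every forecast value."""
    f = np.asarray(forecasts, float)
    o = np.asarray(outcomes, bool)
    total_time, n_sz = len(o), o.sum()
    sens, ctiw = [], []
    for thr in np.unique(f):
        warn = f >= thr
        tp = np.sum(warn & o)                            # seizure time points in warning
        tiw = warn.sum()                                 # total time in warning
        sens.append(tp / n_sz)
        ctiw.append((tiw - tp) / (total_time - tp))      # corrected proportion of time in warning
    ctiw, sens = np.asarray(ctiw), np.asarray(sens)
    order = np.argsort(ctiw)
    ctiw, sens = ctiw[order], sens[order]
    auc = np.sum(np.diff(ctiw) * (sens[1:] + sens[:-1]) / 2)   # trapezoidal integration
    return ctiw, sens, auc
```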

Surrogate data and statistical analysis

Surrogates were obtained for the recent seizure, circadian and weekly distribution models under the null hypothesis that the seizure process is memoryless. We randomly shuffled the seizure time series, thereby destroying any memory that could be in the process. To determine chance-level forecasts based on IEA, we used a phase randomization approach to destroy the potential statistical dependence of the seizure point process on this covariate. We used the iterated amplitude-adjusted Fourier transform algorithm(17) to randomize the phases of the IEA time-series in Fourier space while conserving the amplitude distribution and the auto-correlation function (power spectrum) of the IEA. For all time-series, M=200 chance-level surrogate datasets were constructed. Covariates based on each surrogate IEA time-series were then constructed for univariate and multivariate forecasting models. The p-values for the surrogate analysis were computed according to:

$$p = \frac{1 + \#\{\mathrm{AUC}_{\text{surr}} > \mathrm{AUC}\}}{1 + M},$$

where AUC was computed on the true test dataset and $\mathrm{AUC}_{\text{surr}}$ was computed on the chance-level surrogate datasets. We used the false discovery rate (FDR) to correct for multiple testing(18) with a target α = 0.05 across all patients.
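A simplified sketch of the surrogate test (our own Python illustration): plain Fourier phase randomization is shown as a stand-in for the iterated amplitude-adjusted Fourier transform used in the study, and the surrogate p-value follows the formula above.

```python
import numpy as np

def phase_randomize(x, rng):
    """Surrogate series with randomized Fourier phases and preserved power spectrum.

    (Simplified stand-in: the study used the iterated amplitude-adjusted Fourier
    transform, which additionally preserves the amplitude distribution.)
    """
    spectrum = np.fft.rfft(x - np.mean(x))
    phases = rng.uniform(0, 2 * np.pi, len(spectrum))
    phases[0] = 0.0                                  # keep the DC component real
    surrogate = np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=len(x))
    return surrogate + np.mean(x)

def surrogate_p_value(auc_true, auc_surrogates):
    """p = (1 + #{AUC_surr > AUC}) / (1 + M)."""
    auc_surrogates = np.asarray(auc_surrogates)
    return (1 + np.sum(auc_surrogates > auc_true)) / (1 + len(auc_surrogates))
```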

To compare the AUC of different subjects (Figs. S5, S11), we computed the z-scored effect size $z_i$ for each subject $i$ as

$$z_i = \frac{\mathrm{AUC}_i - \mu_{\mathrm{AUC}_{i,\text{surr}}}}{\sigma_{\mathrm{AUC}_{i,\text{surr}}}},$$

with $\mu_{\mathrm{AUC}_{i,\text{surr}}}$ and $\sigma_{\mathrm{AUC}_{i,\text{surr}}}$ the mean and standard deviation of the distribution of the AUCs of the surrogate models.

We also assessed the significance of our forecasts by computing p-values based on a binomial test for predicting n out of N seizures in the test dataset for each patient, given a chance probability equal to the fraction of time (i.e., days or hours) under warning(6) (Table S2). In short, the p-value is given by(19):

$$p = \begin{cases} 1 - F_B(n-1;\, N, S_{nc}) + F_B(k_f - 1;\, N, S_{nc}) & \text{for } \dfrac{n}{N} \ge S_{nc} \\[2ex] 1 - F_B(k_c - 1;\, N, S_{nc}) + F_B(n;\, N, S_{nc}) & \text{for } \dfrac{n}{N} < S_{nc} \end{cases}$$

where

$$F_B(k; n, p) \equiv \sum_{j=0}^{k} f_B(j; n, p), \qquad f_B(k; n, p) \equiv \binom{n}{k} p^k (1-p)^{n-k},$$

and

$$k_f = \mathrm{floor}(2N \cdot S_{nc} - n), \qquad k_c = \mathrm{ceiling}(2N \cdot S_{nc} - n),$$

with $n$ the number of seizures correctly predicted by the forecasting algorithm, $N$ the true total number of seizures, and $S_{nc}$ the chance level, here chosen to be equal to the time in warning found for each patient when choosing an optimal threshold (Fig. S8).
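A direct transcription of this binomial test into Python (our own sketch, using scipy.stats.binom; variable names follow the formula above):

```python
from math import ceil, floor
from scipy.stats import binom

def binomial_forecast_p(n, N, snc):
    """Two-sided binomial p-value for predicting n of N seizures at chance level snc."""
    F = lambda k: binom.cdf(k, N, snc)               # cumulative binomial F_B(k; N, snc)
    kf, kc = floor(2 * N * snc - n), ceil(2 * N * snc - n)
    if n / N >= snc:
        return 1 - F(n - 1) + F(kf - 1)
    return 1 - F(kc - 1) + F(n)
```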


Supplementary Results

In addition to evaluating the individual significance of results, we used estimation statistics(20) to characterize the effect of each input variable on forecasts at the population level (Fig. S6).

Fig. S6. Comparison of different temporal features for daily forecasts. Top: Bootstrap distribution of the AUC mean difference between temporal features and recent seizure timings, along with the mean (black dot) and the 95% confidence interval. Mean of the recent seizure timing distribution is represented by the horizontal dashed line. Bottom: distribution of z-score effect sizes (see Methods) between each subject and their 200 surrogates. Zero dashed line corresponds to no difference between a subject and its 200 surrogates. Dotted lines in the distribution represent the median, first and third quartiles.


As a complementary analysis, we also thresholded our probabilistic output to calculate sensitivities and corrected times in warning (Fig. S7). It should be noted, however, that our probabilistic models were not optimized for these quantities, which therefore understate the true value of the models.

Fig. S7. Sensitivity and corrected time in warning for daily forecasts. Sensitivity and time in warning for specific choices of threshold probability. The optimal threshold is the one that minimizes the squared distance between the upper left corner of the sensitivity vs. corrected proportion of time in warning plot (sensitivity = 1 and corrected proportion of time in warning = 0) and the curve.


To assess how our results depend on data inclusion criteria, we performed three sensitivity analyses.

First, we included all self-reported seizure types, including non-disabling sensory auras (i.e. ’simple other’) and motor seizures without loss of awareness (i.e. ’simple motor’), in addition to disabling seizures with loss of awareness (i.e. ’complex partial’ and ’generalized tonic-clonic’). This revised inclusion criterion resulted in a cohort of 158 subjects that partially overlapped with the original cohort of 157 subjects. Indeed, some subjects were excluded on the criterion of >50% seizure-days, while others who did not have disabling seizures were included. The median AUC for daily forecasts was 0·63 [IQR 0·59–0·73] (Fig. S8a), suggesting that our models demonstrated a similar ability to discriminate days with any type of seizure as they did for days with disabling seizures.

Second, we included seizure-free periods in seven subjects of the original cohort who stopped reporting seizures for periods longer than six months. Here, we acknowledge the fact that seizure- freedom cannot be determined a priori, but only retrospectively. It is also reasonable to think that a subject would stop using a forecast after six months if they were seizure-free for that length of time.

We therefore considered six months a reasonable trade-off between the prospective uncertainty about seizure freedom and a realistic use of the system. In those seven subjects, the median AUC was 0·72 [IQR 0·67–0·77] when seizure-free periods were included (Fig. S8b), comparable to a median AUC of 0·74 [IQR 0·71–0·88] when seizure-free periods were not included.

Third, we included a few subjects who had more than 50% days with disabling seizures. Based on these revised criteria, a cohort of 14 subjects, without overlap with the original cohort, had a median percentage of days with seizures of 75% [IQR 60-88]. Daily forecasts in this cohort resulted in high AUCs (Fig. S8c), an inflation that is likely due to the high rate of positives.


Fig. S8. Sensitivity analysis on AUC performance for three exclusion criteria. (a) Forecast AUCs in a cohort (N=158) with all self-reported seizure types included (i.e. ’simple motor’, ’simple other’, ’complex partial’, and ’generalized tonic-clonic’). Exclusion of subjects with more than 50% of seizure days still applies. (b) Six-month periods of seizure freedom were included in seven subjects included in the main analysis. For two subjects, this resulted in a test set without any seizures, which are therefore not reported here. (c) Fourteen subjects self-reporting more than 50% of seizure days. Exclusion of non-disabling seizures (i.e. ’simple motor’, ’simple other’) still applies.


Retraining probabilistic models based on novel observations made over time is widely used to account for non-stationarity of empirical probabilities. We found that the discrimination ability of our models was very stable (AUC) but that model retraining ensured better forecast calibration (Fig. S9). Note that this is not equivalent to the previously described possibility of “re-calibration”, which is done after training, and which we did not use.

Also, we found that although hourly forecasts had better discrimination than daily forecasts, their resolution was lower (most forecasted probabilities below 10%) and therefore the BSS was lower. This highlights the need to use complementary metrics to evaluate the performance of different forecasts.


Fig. S9. Performance of daily forecasts with retraining. Top: AUC for different intervals of retraining. Dashed lines indicate median AUC without retraining for the development cohort (electrographic seizures) and the validation cohort (self-reported seizures), in green and orange respectively. Bottom: Reliability diagrams for different intervals of retraining. Retraining after every seizure is also shown in Fig. 2d in the main article.


We also performed sensitivity analyses on the amount of data required for training our models to their maximal capacity and found that about 6 months of data were sufficient across subjects at the daily (Fig. S10) and hourly (Fig. S15) timescales.

Fig. S10. Performance of daily forecasts depends on size of training dataset. Performance of repeated forecasts on held-out test data using training periods of increasing length for the electrographic seizure cohort. Shading: ±1 SD. Performance plateaus after ~200 days of training.


Finally, we also performed a pre-specified analysis of the variability in forecasting performance as a function of the strength of multidien cycles in epileptic brain activity, assessed by the phase-locking value (PLV) between seizures and IEA. We found that PLVs obtained on retrospective data were somewhat predictive of forecasting performance (Fig. S11).

Fig. S11. Forecast performance relates to strength of seizure cycles. Correlation between AUCs for daily forecasting and phase-locking values (PLV) between seizures and multidien IEA rhythms. N=175.


Fig. S12. Comparison of different temporal features for hourly forecasts. Top: Bootstrap distribution of AUC mean difference between temporal features and recent seizure timings, along with the mean (black dot) and the 95% confidence interval. Mean of the recent seizure timing distribution is represented by the dashed horizontal line. Bottom: distribution of z-score effect sizes (see Methods) between each subject and their 200 surrogates. Zero dashed line corresponds to no difference between a subject and its 200 surrogates.

Fig. S13. Performance of hourly forecasts with retraining. AUC for different intervals of retraining.


Fig. S14. Sensitivity and corrected time in warning for hourly forecasts. Sensitivity and time in warning for specific choice of threshold probability. Optimal threshold is the minimal squared distance between the upper left corner of the sensitivity vs. corrected proportion of time in warning plot and the curve (sensitivity=1 and corrected proportion of time in warning=0).

Fig. S15. Performance of hourly forecasts depends on size of training dataset. Performance of repeated forecasts on held-out test data using training periods of increasing length. Shading: ±1 SD. Performance plateaus after ~200 days of training.


Fig. S16. Expected, forecasted, and observed average seizure probability. (A) Observed average seizure probability from the testing dataset versus expected average seizure probability from the training dataset shows an increase (above diagonal) and a decrease (below diagonal) in seizure frequency in few and many subjects, respectively, reflecting the fact that most subjects improve with time in the clinical trial. (B) Expected average seizure probability from the training dataset versus average forecasted probability shows good correspondence, suggesting that the central tendency of the distribution is well fitted by the model. (C) Forecast bias, measured as average forecasted probability versus observed average seizure frequency, shows that, in most cases, the model was able to forecast the average observation frequency in the testing dataset. Datapoints above and below the diagonal represent under- and over-forecasting, respectively.


Horizon: Daily | Daily | Hourly
Cohort: Development (N=18) | Validation (N=157) | Development (N=18)
Seizure data: Electrographic seizures | Self-reported disabling seizures | Electrographic seizures
Metric: IoC | IoC | IoC

Temporal features
Recent seizures: 2/18 (11%) | 76/157 (48%) | 6/18 (33%)
Recent IEA: 0/18 (0%) | 52/157 (33%) | 12/18 (67%)
Circadian IEA phases: NA | NA | 15/18 (83%)
Circadian seizure distribution: NA | NA | 10/18 (56%)
Weekly seizure distribution: 0/18 (0%) | 31/157 (20%) | NA
Multidien phases: 15/18 (83%) | 111/157 (71%) | 17/18 (94%)
Multivariate: NA | NA | 18/18 (100%)

Table S2. Improvement over chance (IoC) with an alternative measure of significance of the forecasting model performance. Significance level was assessed by comparing the number of seizures correctly identified by the model and by chance for a given fraction of time under warning (see Supplementary Methods).


Supplementary References

1. Lorenz E. Deterministic Nonperiodic Flow. J Atmos Sci. 1963;20(2):130–41.

2. Mason S. Guidance on Verification of Operational Seasonal Climate Forecasts. Seevccc.Rs. 2013.

3. Jachan M, Feldwisch Genannt Drentrup H, Posdziech F, Brandt A, Altenmüller DM, Schulze- Bonhage A, et al. Probabilistic forecasts of epileptic seizures and evaluation by the brier score. IFMBE Proc. 2008;22(1):1701–5.

4. Winkler RL, Murphy AH. “Good” probability assessors. J Appl Meteorol. 1968;7:751–8.

5. Mason SJ. On using “climatology” as a reference strategy in the Brier and the ranked probability skill scores. Mon Weather Rev. 2004;132(7):1891–5.

6. Snyder DE, Echauz J, Grimes DB, Litt B. The statistics of a practical seizure warning system. J Neural Eng. 2008;5(4):392–401.

7. Kuhlmann L, Lehnertz K, Richardson MP, Schelter B, Zaveri HP. Seizure prediction — ready for a new era. Nat Rev Neurol. 2018;14(10):618–30.

8. Truccolo W, Eden UT, Fellows MR, Donoghue JP, Brown EN. A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects. J Neurophysiol. 2005;93(2):1074–89.

9. Daley DJ, Vere-Jones D. An Introduction to the Theory of Point Processes, Vol. 1: Elementary Theory and Methods. New York: Springer; 2003.

10. Sisterson ND, Wozny TA, Kokkinos V, Constantino A, Richardson RM. Closed-Loop Brain Stimulation for Drug-Resistant Epilepsy: Towards an Evidence-Based Approach to Personalized Medicine. Neurotherapeutics. 2019;16(1):119–27.

11. Baud MO, Kleen JK, Mirro EA, Andrechak JC, King-Stephens D, Chang EF, et al. Multi-day rhythms modulate seizure risk in epilepsy. Nat Commun. 2018;9(1).

12. Liboschik T, Fokianos K, Fried R. Tscount: An R package for analysis of count time series following generalized linear models. J Stat Softw. 2017;82.

13. Bergey GK, Morrell MJ, Mizrahi EM, Goldman A, King-Stephens D, Nair D, et al. Long-term treatment with responsive brain stimulation in adults with refractory partial seizures. Neurology. 2015;84(8):810–7.

14. Skarpaas TL, Jarosiewicz B, Morrell MJ. Brain-responsive neurostimulation for epilepsy (RNS ® System). Epilepsy Res. 2019;153(January):68–70.

15. Bröcker J, Smith LA. Increasing the reliability of reliability diagrams. Weather Forecast. 2007;22(3):651–61.

16. Karoly PJ, Ung H, Grayden DB, Kuhlmann L, Leyde K, Cook MJ, et al. The circadian profile of epilepsy improves seizure forecasting. Brain. 2017;140(8):2169–82.


17. Schreiber T, Schmitz A. Surrogate time series. Phys D Nonlinear Phenom. 2000;142(3–4):346–82.

18. Benjamini Y, Hochberg Y. Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. J R Stat Soc Ser B. 1995;57(1):289–300.

19. Mormann F. Seizure prediction. Scholarpedia. 2008;3(10):5770.

20. Ho J, Tumkaya T, Aryal S, Choi H, Claridge-Chang A. Moving beyond P values: data analysis with estimation graphics. Nat Methods. 2019;16(7):565–6.
