* Correspondence should be addressed to: Olivier David <email@example.com>
Dynamic Causal Modelling (DCM) and the theory of autopoietic systems are two important conceptual frameworks. In this review, we suggest that they can be combined to answer important questions about self-organising systems like the brain. DCM has been developed recently by the neuroimaging community to explain, using biophysical models, how non-invasive brain imaging data are caused by neural processes. It allows one to ask mechanistic questions about the implementation of cerebral processes. In DCM, the parameters of biophysical models are estimated from measured data and the evidence for each model is evaluated. This enables one to test different functional hypotheses (i.e., models) for a given data set. Autopoiesis and related formal theories of biological systems as autonomous machines represent a body of concepts with many successful applications. However, autopoiesis has remained largely theoretical and has not penetrated the empiricism of cognitive neuroscience. In this review, we try to show the connections that exist between DCM and autopoiesis. In particular, we propose a simple modification to standard formulations of DCM that includes autonomous processes. The idea is to exploit the system-identification machinery of DCM in neuroimaging to test the face validity of autopoietic theory applied to neural subsystems. We illustrate the theoretical concepts and their implications for interpreting electroencephalographic signals acquired during amygdala stimulation in an epileptic patient. The results suggest that DCM represents a relevant biophysical approach to brain functional organisation, with a potential that is yet to be fully evaluated.
Functional vs. effective connectivity
The aim of dynamic causal modeling (Friston et al., 2003) is to make inferences about the coupling among brain regions or sources and how that coupling is influenced by experimental factors. DCM uses the notion of effective connectivity, defined as the influence one neuronal system exerts over another. DCM represents a fundamental departure from existing approaches to connectivity because it employs an explicit generative model of measured brain responses that embraces their nonlinear causal architecture. The alternative to causal modeling is to simply establish statistical dependencies between activity in one brain region and another. This is referred to as functional connectivity. Functional connectivity is useful because it rests on an operational definition and eschews any arguments about how dependencies are caused. Most approaches in the EEG and MEG literature address functional connectivity, with a focus on dependencies that are expressed at a particular frequency of oscillations (i.e. coherence). See Schnitzler and Gross (2005) for a nice review. Recent advances have looked at nonlinear or generalized synchronization in the context of chaotic oscillators (e.g. Rosenblum et al 2002) and stimulus-locked responses of coupled oscillators (see Tass 2004). These characterizations often refer to phase-synchronization as a useful measure of nonlinear dependency. Another exciting development is the reformulation of coherence in terms of autoregressive models. A compelling example is reported in Brovelli et al (2004), who were able to show that "synchronized beta oscillations bind multiple sensorimotor areas into a large-scale network during motor maintenance behavior and carry Granger causal influences from primary somatosensory and inferior posterior parietal cortices to motor cortex." Similar developments have been seen in functional neuroimaging with fMRI (e.g. Harrison et al., 2003; Roebroeck et al., 2005).
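As a minimal illustration of the autoregressive approach to directed dependencies mentioned above, the following toy sketch (our own example, not the analysis of Brovelli et al.) estimates Granger causality between two simulated time series by comparing the residual variance of a restricted AR model (past of the target only) with that of a full model (past of target and source):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate two coupled AR(1) processes: x drives y, but not vice versa.
n = 2000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

def granger(target, source, p=1):
    """Log-ratio of residual sums of squares: restricted vs full AR(p) model."""
    T = len(target)
    # Lagged regressors for the restricted (target-only) and full models
    Xr = np.column_stack([target[p - k - 1:T - k - 1] for k in range(p)])
    Xf = np.column_stack([Xr] + [source[p - k - 1:T - k - 1] for k in range(p)])
    yv = target[p:]
    def rss(X):
        D = np.column_stack([np.ones(len(X)), X])
        beta, *_ = np.linalg.lstsq(D, yv, rcond=None)
        resid = yv - D @ beta
        return resid @ resid
    return np.log(rss(Xr) / rss(Xf))

print(granger(y, x))  # clearly positive: x Granger-causes y
print(granger(x, y))  # near zero: y does not Granger-cause x
```

A frequency-resolved version of this idea (spectral Granger causality) is what underlies the autoregressive reformulation of coherence discussed above.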
Complex processes resulting from the interaction of multiple elements can rarely be understood by analytical procedures; instead, mathematical models of system dynamics are required. This insight, which disciplines like physics embraced long ago, is gradually gaining importance in the study of cognitive processes by functional neuroimaging. In this field, causal mechanisms in neural systems are described in terms of effective connectivity. Recently, Dynamic Causal Modelling (DCM) was introduced as a generic method to estimate effective connectivity from neuroimaging data in a Bayesian fashion. One of the key advantages of DCM over previous methods is that it distinguishes between neural state equations and modality-specific forward models that translate neural activity into a measured signal. Another strength is its natural relation to Bayesian Model Selection (BMS) procedures. In this article, we review the conceptual and mathematical basis of DCM and its implementation for functional magnetic
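The neural state equation at the heart of DCM for fMRI (Friston et al., 2003) is standard and worth restating here. In its bilinear form:

```latex
\dot{x} = \Big(A + \sum_{j=1}^{m} u_j\, B^{(j)}\Big)\, x + C u
```

where $x$ is the vector of neural states, $u$ the experimental inputs, $A$ the fixed (endogenous) coupling among regions, $B^{(j)}$ the modulation of that coupling by input $u_j$, and $C$ the direct influence of inputs on regional activity. A modality-specific forward model (e.g. the hemodynamic model for fMRI) then maps $x$ to the measured signal, which is the separation of neural and observation levels highlighted above.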
Please insert Figures (3) and (4) here
The network representations corresponding to the heatmaps in Figure (4) are illustrated by Figures (5), (6) and (7). Corresponding network representations are provided for the six periods covering the data sample (see Table 3). Such figures emphasize the evolution of the dynamic causal network of Euro zone stock exchanges over time. Specifically, the network's density increases up to the crisis period, diminishes during the post-crisis period, starts increasing during the sovereign debt crisis period, and then strongly drops over the post-sovereign debt crisis period. Over the last period of the sample, there are very few causal relationships between Euro zone stock exchanges. Moreover, the previous figures display incoming and outgoing connections between the network's nodes (i.e. between stock exchanges). Thus, we clearly observe the directional propagation of shocks across stock market places over time.
Engineering, Southeast University, Nanjing, 210096, China
d Centre de Recherche en Information Biomédicale sino-français (CRIBs), 35000, France
This paper addresses the question of effective connectivity in the human cerebral cortex in the context of epilepsy. Among model-based approaches to inferring brain connectivity, spectral Dynamic Causal Modelling is a conventional technique, for which we propose an alternative way to estimate the cross-spectral density. The proposed strategy tackles the underestimation of the free energy by the well-known variational Expectation-Maximization algorithm, which is highly sensitive to the initialization of the parameter vector, through a permanent local adjustment of the initialization process. The performance of the proposed strategy in terms of effective connectivity identification is assessed using simulated data generated by a neuronal mass model (simulating unidirectional and bidirectional flows) and real epileptic intracerebral electroencephalographic signals. Results show the efficiency of the proposed approach compared to conventional Dynamic Causal Modelling and to a variant in which a deterministic annealing scheme is employed. Keywords: Effective connectivity; Dynamic causal modelling; Physiology-based model; Epilepsy; Intracerebral EEG
ABSTRACT
Dynamic causal modeling (DCM) is a methodological approach to study effective connectivity among brain regions. Based on a set of observations and a biophysical model of brain interactions, DCM uses a Bayesian framework to estimate the posterior distribution of the free parameters of the model (e.g. modulation of connectivity) and infer architectural properties of the most plausible model (i.e. model selection). When modeling electrophysiological event-related responses, the estimation of the model relies on the integration of the system of delay differential equations (DDEs) that describe the dynamics of the system. In this technical note, we compared two numerical schemes for the integration of DDEs. The first, and standard, scheme approximates the DDEs (more precisely, the state of the system, with respect to conduction delays among brain regions) using ordinary differential equations (ODEs) and solves them with a fixed step size. The second scheme uses a dedicated DDE solver with adaptive step sizes to control the error, making it theoretically more accurate. To highlight the effects of the approximation used by the first integration scheme on parameter estimation and Bayesian model selection, we performed simulations of local field potentials using, first, a simple model comprising 2 regions and, second, a more complex model comprising 6 regions. In these simulations, the second integration scheme served as the standard to which the first one was compared. Then, the performances of the two integration schemes were directly compared by fitting a public mismatch negativity EEG dataset with different models. The simulations revealed that the use of the standard DCM integration scheme was acceptable for Bayesian model selection but underestimated the connectivity parameters and did not allow an accurate estimation of conduction delays.
Fitting to empirical data showed that the models systematically achieved a higher accuracy when using the second integration scheme. We conclude that inference on connectivity strength and delay based on DCM for EEG/MEG requires an accurate integration scheme.
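The qualitative difference between a delay-aware integration and an ODE approximation can be illustrated on a toy linear DDE (this is our own minimal sketch, not the DCM integration scheme itself): a fixed-step Euler scheme with a history buffer versus the same scheme with the delay simply ignored.

```python
import numpy as np

def integrate_dde(a=0.9, tau=0.5, dt=0.01, T=10.0, use_delay=True):
    """Euler integration of dx/dt = -x(t) + a * x(t - tau).

    With use_delay=False the delayed state is replaced by the current one,
    x(t - tau) ~ x(t), mimicking an ODE approximation of the DDE.
    """
    n = int(T / dt)
    lag = int(tau / dt)
    x = np.empty(n)
    x[0] = 1.0  # constant history assumed equal to the initial condition
    for t in range(n - 1):
        delayed = x[max(t - lag, 0)] if use_delay else x[t]
        x[t + 1] = x[t] + dt * (-x[t] + a * delayed)
    return x

x_dde = integrate_dde(use_delay=True)
x_ode = integrate_dde(use_delay=False)
# The two trajectories diverge as soon as the delayed state matters.
print(np.max(np.abs(x_dde - x_ode)))
```

In DCM the approximation is more sophisticated than simply dropping the delay, but the sketch shows why parameters tied to the delayed dynamics (such as conduction delays) are the ones most affected by the choice of scheme.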
Table 3: Second simulation: deviations of dipole orientations of contralateral SI/SII and ipsilateral SII from true orientations (angles around axes in degrees).
IV DISCUSSION
In this paper, we have presented dynamic causal modeling (DCM) for event-related potentials and fields using equivalent current dipole models. We have shown that this Bayesian approach can be used to estimate parameters for a generative ERP model. Importantly, one can estimate, simultaneously, source activity, extrinsic connectivity, its modulation by context, and spatial lead-field parameters from the data. An alternative view of DCM for ERP/ERF is to consider it a source reconstruction algorithm with biologically grounded temporal constraints. We have used simulated and real ERP data to show the usefulness and validity of the approach. Although we have not applied DCM to ERF data in the present paper, we note that the model can be adapted to ERFs by adjusting the electromagnetic component of the forward model.
integration, which is mediated by context-dependent interactions among spatially segregated areas (see, e.g., McIntosh 2000). In this view, the functional role played by any brain component (e.g. cortical area, sub-area, neuronal population or neuron) is defined largely by its connections (cf. Passingham et al. 2002). In other words, function emerges from the flow of information among brain areas. This has led many researchers to develop models and statistical methods for understanding brain connectivity. Three qualitatively different types of connectivity have been the focus of attention (see e.g., Sporns 2007): (i) structural connectivity, (ii) functional connectivity and (iii) effective connectivity. Structural connectivity, i.e. the anatomical layout of axons and synaptic connections, determines which neural units interact directly with each other (Zeki and Shipp, 1988). Functional connectivity subsumes non-mechanistic (usually whole-brain) descriptions of statistical dependencies between measured time series (e.g., Greicus 2002). Lastly, effective connectivity refers to causal effects, i.e. the directed influence that system elements exert on each other (see Friston et al. 2007a for a comprehensive discussion).
5.4.1 Detecting delivery errors
One of the challenges in evaluating the proposed approach is to measure the error rate. When a message is said to be "causally ready" by our mechanism, we need to verify that it really is causally ready; therefore, in our simulator we also need to implement a perfect causal broadcast. This additional mechanism should have the lowest possible cost, as it limits the simulator's scalability. To detect an error, we must know all the messages whose sending happened-before a given message. A simple solution would be to attach a set of messages to each sent message. Obviously, this would drastically limit the scalability of the simulator. Therefore, we use a mechanism based on vector clocks. Unfortunately, a vector clock cannot capture wrongly delivered messages.
2 Related Work
The first causal broadcast mechanism was introduced in the ISIS system. The simplest way to implement causal communication consists in piggybacking, on each message a process wants to send, the whole set of messages it has delivered prior to this sending. Of course, this is very costly, and some kind of garbage collection is needed. Otherwise, prior work mainly uses either a logical structure (central node, tree, ring, etc.) or timestamps. A timestamp is an integer value that counts events (possibly not all events). A vector clock is a vector of such counters. The first solution based on vector clocks for a broadcast primitive was proposed in  (a solution based on a matrix of counters was proposed in  for point-to-point communication). Vector clocks, introduced simultaneously by [6, 9], have been proved to be the smallest data structure that can capture causality exactly. Moreover, vector clocks require knowing the exact number of sites involved in the application. As an example, churn (the untimely joining and leaving of processes) and the high (and unknown) number of processes make the use of vector clocks unrealistic.
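The vector-clock delivery condition discussed above can be sketched as follows (hypothetical class and method names, for a fixed set of n processes): a message is deliverable when it is the next one expected from its sender and its timestamp shows no causal predecessor we have not yet delivered.

```python
class Process:
    """Causal broadcast with vector clocks (sketch). n processes, ids 0..n-1."""

    def __init__(self, pid, n):
        self.pid = pid
        self.vc = [0] * n        # vc[k] = number of messages delivered from k
        self.pending = []        # received but not yet deliverable

    def broadcast(self, payload):
        self.vc[self.pid] += 1
        # The vector timestamp is piggybacked on the message.
        return (self.pid, list(self.vc), payload)

    def deliverable(self, msg):
        sender, ts, _ = msg
        # Next message from the sender, and no missing causal predecessor.
        return (ts[sender] == self.vc[sender] + 1 and
                all(ts[k] <= self.vc[k] for k in range(len(ts)) if k != sender))

    def receive(self, msg):
        """Buffer msg; deliver it and anything it unblocks, in causal order."""
        self.pending.append(msg)
        delivered, progress = [], True
        while progress:
            progress = False
            for m in list(self.pending):
                if self.deliverable(m):
                    self.pending.remove(m)
                    self.vc[m[0]] = m[1][m[0]]
                    delivered.append(m[2])
                    progress = True
        return delivered
```

For instance, if p1 replies to a message of p0 and the reply reaches p2 first, p2 buffers the reply until p0's message arrives, then delivers both in causal order. The sketch also shows the limitation noted above: the structure has a fixed size n, so it cannot accommodate churn.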
A mechanism is reasons-responsive if and only if there exists an alternative causal sequence in which the same mechanism produces a different action because of different reasons. In the "assassin" case, the deliberative mechanism through which Sam came to the conclusion that he had to kill the mayor is reasons-responsive because that mechanism could lead to a different action (for example, if the mayor were accompanied by his son and Sam refused to kill a man in front of his children). Thus, in the main sequence, Sam is responsible for his action because he is the source of his deliberative mechanism (which is responsive to his reasons) and because that mechanism is the source of the action (shooting the mayor). By contrast, Sam is no longer the source of his action in the alternative sequences (those in which Jack activated the computer chip), because the type of mechanism at work (the chip) is not responsive to Sam's reasons. He no longer holds guidance control, and is therefore no longer responsible for his action. In summary, an individual is responsible for his actions when they are the product of a mechanism responsive to his reasons. This responsibility is compatible with causal determinism; the possibility that a reasons-responsive mechanism may produce different actions depending on the reasons allows us to conclude that the agent of the action (via the mechanism responsive to his reasons) constitutes a necessary condition for that action to occur. If the person had had different reasons, they would have acted otherwise. Since the person is responsible as a necessary cause, free will turns out to be superfluous.
Thus we say that "α explains β" when, by adding α to our knowledge and using a "suitable chain" of causal and taxonomical information, β is obtained. Which chains are "suitable" is one of the subjects addressed in this text. We define a dedicated inference system to capture explanations based on causal statements and stress that the role of ontology-based information is essential. Our causal information is restricted to cases where the causation never fails, rejecting e.g. "smoking causes cancer". We also leave temporal aspects for future work. Moreover, we consider that the causal information is provided by the user; we are not concerned with the extraction of causal information as in scientific research. We provide a way to extract what we call explanations from causal (and "ontological") information given by the user: we aim at providing all the (possibly tentative) explanations that can be obtained. Then, some choice between these explanations should be made by the user, depending on their needs, but this aspect is not considered here.
Empirical Studies: Toward Personalized, Data-Efficient Treatments
Given that SI can estimate potential outcomes under treatment (as well as control) across all units, SI can effectively simulate treatment groups. As a result, we apply SI to several case studies to highlight its ability to enhance what-if analysis and improve RCTs, the gold-standard mechanism for drawing causal conclusions. Most notably, we use real-world observational data to quantify, via SI, the trade-offs between different policies to combat COVID-19. While standard OS methods (à la SC variants) can only infer the counterfactual death trajectories if countries did nothing to combat COVID-19, SI can additionally, and arguably more importantly, infer counterfactual trajectories if countries had implemented different policies than what was actually enacted. Indeed, understanding the impact of various policies before having to actually enact them may provide guidance to policy makers in making statistically informed decisions as they weigh the difficult choices ahead of them. Furthermore, we use real-world experimental data from a large development economics study and an e-commerce website to perform data-efficient, personalized RCTs and A/B tests. Finally, we finish our whirlwind tour of case studies with an in-vitro cell-therapy study (with experimental data) that has implications for data-efficient drug discovery, thereby establishing SI's widespread applicability.
Here, new-york is free of concurrent writes w.r.t. paris and berlin, making causal consistency equivalent to sequential consistency for new-york. paris and berlin, however, write to X concurrently, in which case causal consistency is not enough to ensure strongly consistent results.
If we assume paris and berlin execute in the same data center, while new-york is located on a distant site, this example illustrates a more general case in which, because of a program's logic or activity patterns, no operations at one site ever conflict with those at another. In such a situation, rather than enforcing strong (and costly) consistency in the whole system, we propose a form of consistency that is strong for processes within the same site (here paris and berlin) but weak between sites (here between paris and berlin on one hand and new-york on the other).
for compliers is identical to the average effect for always-takers and never-takers. Bertanha & Imbens (2019) suggest testing the combination of two equalities, namely the equality of the average outcomes of untreated compliers and never-takers, and the equality of the average outcomes of treated compliers and always-takers. In the case of regression discontinuity, the lack of external validity is mainly due to the fact that this method produces local estimators, which are only valid around the considered eligibility threshold. If, for example, that threshold is an age condition, regression discontinuity does not make it possible to infer what the average effect of the intervention would be for people whose age differs significantly from the age defining the eligibility threshold. Under what conditions can the estimated effects obtained through regression discontinuity be generalized? Dong & Lewbel (2015) note that in many cases, the variable that defines the eligibility threshold (called the "forcing variable") is a continuous variable such as age or income level. These authors point out that in this case, beyond the extent of the discontinuity of the outcome variable in the vicinity of the threshold, it is also possible to estimate the variation of the first derivative of the regression function, and even of higher-order derivatives. This makes it possible to extrapolate the causal effects of the treatment to values of the forcing variable further away from the eligibility threshold. Angrist & Rokkanen (2015) propose to test whether, conditional on additional exogenous variables, the correlation between the forcing variable and the outcome variable disappears. Such a result would mean that the allocation to treatment could be considered independent of the potential outcomes (this is called the unconfoundedness property) 7 conditional on those additional exogenous
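The idea behind the derivative-based extrapolation of Dong & Lewbel (2015) can be sketched on simulated data (the data-generating process and all variable names below are our own illustrative assumptions, not their estimator): fit a local polynomial on each side of the cutoff, read off the jump in both the level and the slope, and use the slope jump to project the effect away from the threshold.

```python
import numpy as np

rng = np.random.default_rng(1)

c = 0.0                                # eligibility cutoff of the forcing variable
x = rng.uniform(-1, 1, 5000)           # forcing variable (e.g. age, centered)
treated = x >= c
# Hypothetical DGP: treatment shifts the level (by 2.0) and the slope (by 1.5)
# of the outcome at the cutoff.
y = (1.0 + 0.8 * x
     + treated * (2.0 + 1.5 * (x - c))
     + rng.normal(0, 0.3, x.size))

def side_fit(mask, deg=1):
    """Polynomial fit (coefficients low to high) on one side of the cutoff."""
    return np.polynomial.polynomial.polyfit(x[mask] - c, y[mask], deg)

b_left = side_fit(~treated)
b_right = side_fit(treated)

jump_level = b_right[0] - b_left[0]    # standard RD estimate at the cutoff
jump_slope = b_right[1] - b_left[1]    # discontinuity in the first derivative

# Linear extrapolation of the treatment effect away from the cutoff:
effect_at = lambda x0: jump_level + jump_slope * (x0 - c)
print(jump_level, jump_slope, effect_at(0.5))
```

The extrapolation is only as credible as the assumption that the effect varies smoothly in the forcing variable, which is exactly the point debated in the surrounding text.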
I have suggested that natural kinds are posited in science as what underlies a set of systematically associated dispositions. Is it not precisely the microstructural composition of a molecule that determines the set of its dispositions and functions? There is no doubt that the causal profile is determined by the microstructure. This is a consequence of physicalism and of the supervenience of the systemic properties of complex objects on the lower-level properties of their components. In the case of proteins, this means that the tertiary (and quaternary) structure of a protein (and hence its causal profile, that is, the set of its dispositions) is a consequence of its primary structure. However, (IDENT) has the form of a biconditional, and only one of the two conditionals is a consequence of physicalism, namely: if samples A and B of chemical substances have the same microstructural composition, then A belongs to the same (kind of) chemical substance as B. By contrast, what makes (IDENT) controversial is the second conditional, according to which: if two samples A and B belong to the same chemical substance, then A and B have the same microstructural composition.
Indeed, as the number of off-target effects increases, we see that in both interventional settings and both structure-learning metrics, UT-IGSP outperforms all other methods. However, JCI-GSP and UT-IGSP are roughly equal in performance on intervention target recovery, which might be attributed to the fact that their only difference is their differing notion of covered edges, which directly affects the causal structure and only indirectly affects the intervention targets.
init (A.a, X); init (C.c, Y);
B.b := A.a || B.b := A.a || B.b := C.c;
Our algorithm design hinges on two principles that can be implemented assuming only causal consistency: (1) before an outref is assigned to a source object (in initialisation or assignment), we ensure that the corresponding inref has been added to the target object; importantly, causal consistency is enough to enforce this ordering of updates. (2) To delete a target, we require that no inref exists, nor will later be added, for this target. This property can be checked by well-known mechanisms which rely only on causal consistency and progress guarantees. The combination of these properties is sufficient to ensure RI as defined in the introduction.
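The two principles can be sketched as follows (a hypothetical Python model of one replica's state, with invented names; in the actual system these checks hold because updates are delivered in causal order across replicas, which this single-process sketch does not model):

```python
class Store:
    """Sketch of referential-integrity rules over inref/outref updates."""

    def __init__(self):
        self.inrefs = {}          # target id -> set of source ids
        self.deleted = set()

    def add_inref(self, target, source):
        # Rule (1): the inref is recorded at the target *before* the source's
        # outref is assigned; causal delivery preserves this order everywhere.
        if target in self.deleted:
            raise RuntimeError("dangling reference to deleted target")
        self.inrefs.setdefault(target, set()).add(source)

    def assign_outref(self, source, target):
        # Safe: the matching inref is causally before this update.
        assert source in self.inrefs.get(target, set())
        return (source, target)

    def remove_inref(self, target, source):
        self.inrefs.get(target, set()).discard(source)

    def delete(self, target):
        # Rule (2): only delete when no inref exists. Guaranteeing that none
        # can be added *later* needs the progress mechanisms cited above,
        # which are omitted from this sketch.
        if self.inrefs.get(target):
            raise RuntimeError("target still referenced")
        self.deleted.add(target)
```

The point of the sketch is that both checks are purely local once causal ordering is guaranteed; no cross-replica synchronisation is needed to preserve RI.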