
(1)

Introduction to MaBoSS

https://maboss.curie.fr

Gautier Stoll & Laurence Calzone

February 3, 2015

(2)

MaBoSS software and materials

MaBoSS is a command-line tool, available in a Unix environment (on Windows, use Cygwin).

• Installing MaBoSS: go to https://maboss.curie.fr and click on "installation". Note that some libraries may be required; read the README in the tgz archive.

• MaBoSS refcard: at https://maboss.curie.fr/pub/MaBoSS-RefCard.pdf

• Perl script for easier use: download the file PlMaBoSS_1.1.pl from https://maboss.curie.fr/PerlScript/

• Example 1 (Toy model on the web page):

– bnd file: https://maboss.curie.fr/files/ToyModel/Four_cycle.bnd

– cfg file: https://maboss.curie.fr/files/ToyModel/Four_cycle_FEscape.cfg

• Example 2 (p53 / Mdm2 on the web page):

– bnd file: https://maboss.curie.fr/files/p53Dam/p53_Mdm2.bnd

– cfg file: https://maboss.curie.fr/files/p53Dam/p53_Mdm2_runcfg.cfg

• Example 3, Demo 1 (Cell cycle on the web page):

– bnd file: https://maboss.curie.fr/files/CellCycle/cellcycle.bnd

– cfg file: https://maboss.curie.fr/files/CellCycle/cellcycle_runcfg_randinit.cfg

• Example 3, Demo 2:

– bnd file: https://maboss.curie.fr/Cours2015/restriction.bnd

– cfg file: https://maboss.curie.fr/Cours2015/restriction.cfg

(3)

Motivation

Synchronous/asynchronous algorithms are defined by discrete time steps.

⇒ Problems:

• Comparison between a model and experimental results is done mainly on final states (how should a final state be defined in biology?)

• Modeling transient phenomena is difficult (e.g. in the cell cycle)

• Interpreting non-deterministic trajectories is difficult

• Implementing processes with different time scales is difficult (e.g. a phosphorylation happens faster than a transcription)

⇒ General idea: between discrete-time Boolean modeling and a system of ordinary differential equations, build a continuous-time Boolean modeling approach.

(4)

Outline

1. The MaBoSS language

2. Different views of a model

3. Simulation parameters in the cfg file, illustrated with Example 1

4. Mathematical foundations: continuous-time Markov processes

5. The Gillespie algorithm for simulating a continuous-time Markov process

6. Asymptotic behavior: indecomposable stationary processes

7. Example 2: a biological application, the p53-Mdm2 model

8. Global characterizations: entropies

9. Example 3: the mammalian cell cycle model

(5)

1. The MaBoSS language

Problem: how can a notion of activation/inhibition speed be introduced into a Boolean formulation?

Influence diagram: A and B both act on C, with the logic C = A OR B.

Truth table (C = A OR B):

A  B  C
0  0  0
0  1  1
1  0  1
1  1  1

Transition rates:

A  B  Activation of C  Inhibition of C
0  0  0.1              100
0  1  40               0.2
1  0  40               0.2
1  1  50               0

(6)

MaBoSS grammar

Logical operators: AND, OR, NOT, XOR
Arithmetic operators: +, -, *, /
Ternary "if" (?:), i.e. (condition ? value_if_true : value_if_false)
External variables: names starting with $, defined in a configuration file.
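To make the grammar concrete, here is a minimal node definition combining these elements. The node and variable names (X, $fast, $slow) are ours for illustration, not from the course material, and the // comments are annotations:

Node X {
  // ternary "if": activation is fast when A is active, slow otherwise
  rate_up = A ? $fast : $slow;
  // logical operators combine the Boolean states of upstream nodes
  rate_down = ((NOT A) AND B) ? 1.0 : 0.0;
}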

 

(7)

Exercise

Same setting as before: A and B act on C, with C = A OR B.

Truth table (C = A OR B):

A  B  C
0  0  0
0  1  1
1  0  1
1  1  1

Transition rates:

A  B  Activation of C  Inhibition of C
0  0  0.1              100
0  1  40               0.2
1  0  40               0.2
1  1  50               0

Node C {
  rate_up = (A OR B) ? (40 + (A AND B)*10) : 0.1;
  rate_down = (A XOR B) ? 0.2 : ((NOT A AND NOT B) ? 100 : 0);
}

Exercise: use the MaBoSS language to reproduce the rate table (one possible solution is the Node C definition above).

(8)

2. Different views of a model

• Regulatory graph: represents the agents (genes, proteins, etc.) and the logical causality links (activation/inhibition)

• Transition graph: represents the Boolean states and the possible transitions between them

• Model definition in MaBoSS: activation/inhibition rates for each node (agent), written in the dedicated grammar

(9)

Example 1: recover a regulatory network and a transition graph from a model written in the MaBoSS grammar

Node C {
  rate_up = 0.0;
  rate_down = ((NOT A) AND (NOT B)) ? $escape : 0.0;
}

Node A {
  rate_up = (T AND (NOT B)) ? $Au : 0.0;
  rate_down = B ? $Ad : 0.0;
}

Node B {
  rate_up = A ? $Au : 0.0;
  rate_down = A ? 0.0 : $Ad;
}
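As a hint for the exercise, the regulatory graph can be read directly off the rules. Here is Node B again, with // comments added by us (not part of the original file):

Node B {
  rate_up = A ? $Au : 0.0;    // B can only switch on while A = 1
  rate_down = A ? 0.0 : $Ad;  // B can only switch off while A = 0
}

A being active both enables the activation of B and blocks its inactivation, so the regulatory graph contains an activating arrow from A to B.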

(10)

3. Simulation parameters in the cfg file

• External variables: $var

• Initial conditions: node.istate

• "Hidden" nodes: node.is_internal

• Reference states: node.refstate

• Simulation parameters:

– number of trajectories (traj_count)
– maximum time (max_time)
– estimation time window (time_tick)

A minimal cfg sketch is given after the figure captions below.

[Figures reproduced from Stoll et al., BMC Systems Biology 2012, 6:116 (p. 9), http://www.biomedcentral.com/1752-0509/6/116. Figure 1: Comparison of tools for discrete modeling, biological implication; comparison table of MaBoSS, GINsim, CellNetAnalyzer, BoolNet, GNA and SQUAD, giving technical aspects, the input/output relations between a model and data, and the typical outputs of each tool. Figure 2: Toy model of a single cycle; (a) influence network, (b) logical rules and transition rates of the model, (c) simulation parameters.]

Illustrated with Example 1.
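As announced above, here is a minimal cfg sketch for Example 1. The parameter names follow this slide and the bnd file of slide 9; the values are ours for illustration, and the exact syntax may differ between MaBoSS versions (check the refcard and the shipped Four_cycle_FEscape.cfg):

// external variables used in the bnd file
$escape = 0.1;
$Au = 2.0;
$Ad = 1.0;

// initial conditions and node visibility
A.istate = 0;
B.istate = 0;
C.istate = 1;
B.is_internal = TRUE;   // hide B in the output

// simulation parameters
traj_count = 5000;      // number of trajectories
max_time = 10;          // maximum simulated time
time_tick = 0.1;        // estimation time window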

 

(11)

Exercises:

Put MaBoSS and PlMaBoSS_1.1.pl in the working directory and make them executable (chmod a+x).

Command line:

./PlMaBoSS_1.1.pl Four_cycle.bnd Four_cycle_FEscape.cfg 0.001

1. Run MaBoSS and produce the probability curves with a spreadsheet
2. Vary the simulation parameters
3. Change the initial condition
4. Change the hidden nodes

(12)

4. Mathematical foundations: continuous-time Markov processes

Consider a signaling network of n nodes. A Boolean state is defined as a vector S of dimension n with Boolean entries (Σ is the state space).

By definition, a stochastic process is a map from "time" to a random variable on Σ, with

$$P[s(t) = S] \in [0, 1] \text{ for any state } S \in \Sigma, \qquad \sum_{S \in \Sigma} P[s(t) = S] = 1.$$

Stoll et al. BMC Systems Biology 2012, 6:116 Page 3 of 18

http://www.biomedcentral.com/1752-0509/6/116

In this article, we will first review some of these works and present BKMC algorithm. We will then describe the C++ software, MaBoSS, developed to implement BKMC algorithm and finally illustrate its use with three examples, a toy model, a published model of p53-MDM2 interaction and a published model of the mammalian cell cycle.

All abbreviations, definitions, algorithms and estimates used in this article can be found in Additional file 1.

Throughout the article, all terms that are italicized are defined in the Additional file 1, “Definitions”.

Results and discussion

BKMC for continuous time Boolean model

Continuous time in Boolean modeling: past and present

In Boolean approaches for modeling networks, the state of each node of the network is defined by a Boolean value (node state) and the network state by the set of node states. Any dynamics in the transition graph is represented by sequences of network states. A node state is based on the sign of the input arrows and the logic that links them. The dynamics can be deterministic in the case of synchronized update [1], or non-deterministic in the case of asynchronized update [2] or probabilistic Boolean networks [7].

The difficulty to interpret the dynamics in terms of biological time has led to several works that have generalized Boolean approaches. These approaches can be divided in two classes that we call explicit and implicit time for discrete steps.

The explicit time for discrete steps consists of adding a real parameter to each node state. These parameters correspond to the time associated to each node state before it flips to another one ([12,13]). Because data about these time lengths are difficult to extract from experimental studies, some works have included noise in the definition of these parameters [18]. The drawback of this method is that the computation of the Boolean model becomes sensitive to both the type of noise and the initial conditions. As a result, these time parameters become new parameters that need to be tuned carefully and thus add complexity to the modeling.

The implicit time for discrete steps consists of adding a probability to each transition of the transition graph in the case of non-deterministic transitions (asynchronous case). It is argued that these probabilities could be interpreted as specifying the duration of a biological process.

As an illustration, let us assume a small network of two nodes, A and B. At time t, A and B are inactive: [AB] = [00].

In the transition graph, there exist two possible transitions at t+1: [00] → [01] and [00] → [10]. If the first transition has a significantly higher probability than the second one, then we can conclude that B will have a higher tendency to activate before A. Therefore, it is equivalent to say that the activation of B is faster than the activation of A. Thus, in this case, the notion of time is implicitly modeled by setting probability transitions. In particular, priority rules, in the asynchronous strategy, consist of putting some of these probabilities to zero [6]. In our example, if B is faster than A then the probability of the transition [00] → [10] is zero. As a result, the prioritized nodes always activate before the others. From a different perspective but keeping the same idea, Vahedi and colleagues [14] have set up a method to deduce explicitly these probabilities from the duration of each discrete step. With the implementation of implicit time in a Boolean model, the dynamics remains difficult to interpret in terms of biological time.

As an alternative to these approaches, we propose BKMC algorithm.

Properties of BKMC algorithm

BKMC algorithm was built such as to meet the following principles:

• The state of each node is given by a Boolean number (0 or 1), referred to as node state;

• The state of the network is given by the set of node states, referred to as network state;

• The update of a node state is based on the signs linking the incoming arrows of this node and the logic;

• Time is represented by a real number;

• Evolution is stochastic.

We choose to describe the time evolution of network states by a Markov process with continuous time, applied to the asynchronous transition graph. Therefore, the dynamics is defined by transition rates inserted in a master equation (see Additional file 1, "Basic information on Markov process", section 1.1).

Markov process for Boolean model

Consider a network of n nodes (or agents, that can represent any species, i.e. mRNA, proteins, complexes, etc.). In a Boolean framework, the network state of the system is described by a vector S of Boolean values, i.e. $S_i \in \{0, 1\}$, $i = 1, \dots, n$, where $S_i$ is the state of the node i. The set of all possible network states, also referred to as the network state space, will be called $\Sigma$.

A stochastic description of the state evolution is represented by a stochastic process $s : t \mapsto s(t)$ defined on $t \in I \subset \mathbb{R}$ applied on the network state space, where I is an interval: for each time $t \in I$, $s(t)$ represents a random variable applied on the network state space. Thus, the probability of these random variables is written as:

$$P[s(t) = S] \in [0, 1] \text{ for any state } S \in \Sigma, \quad \text{with} \quad \sum_{S \in \Sigma} P[s(t) = S] = 1 \qquad (1)$$
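For instance (our illustration, not from the paper): with n = 2 nodes the network state space is $\Sigma = \{[00], [01], [10], [11]\}$, and a valid instantaneous distribution at some time $t$ is

$$P[s(t) = [00]] = 0.5, \quad P[s(t) = [01]] = 0.2, \quad P[s(t) = [10]] = 0.2, \quad P[s(t) = [11]] = 0.1,$$

which sums to 1 as required by equation (1).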


(13)

Note that the random variables are not independent:

$$P[s(t) = S,\, s(t') = S'] \neq P[s(t) = S]\; P[s(t') = S']$$

Instantaneous probability: $P[s(t) = S]$

Markov process:

"the conditional probabilities with respect to the present and the past depend only on the present"

Stoll et al. BMC Systems Biology 2012, 6:116 Page 4 of 18

http://www.biomedcentral.com/1752-0509/6/116

Notice that for all t, s(t) are not independent, therefore $P[s(t) = S,\, s(t') = S'] \neq P[s(t) = S]\; P[s(t') = S']$. From now on, we define $P[s(t) = S]$ as instantaneous probabilities. Since the instantaneous probabilities do not define the full stochastic process, all possible joint probabilities should also be defined.

In order to simplify the stochastic process, Markov property is imposed. It can be expressed in the following way: “the conditional probabilities in the future, related to the present and the past, depend only on the present” (see Additional file 1, “Basic information on Markov process”, section 1.1 for the mathematical definition). The formal definition of a Markov process is a stochastic process with the Markov property.

Any Markov process can be defined by (see Van Kampen [19], chapter IV):

1. An initial condition:

$$P[s(0) = S]; \quad \forall S \in \Sigma \qquad (2)$$

2. Conditional probabilities (of a single condition):

$$P[s(t) = S \mid s(t') = S']; \quad \forall S, S' \in \Sigma; \; \forall t, t' \in I; \; t' < t \qquad (3)$$

Concerning time, two cases can be considered:

• If time is discrete: t ∈ I = {t₀, t₁, ...}, it can be shown that all possible conditional probabilities are functions of transition probabilities [20]: $P[s(t_i) = S \mid s(t_{i-1}) = S']$. In that case, a Markov process is often named a Markov chain.

• If time is continuous: t ∈ I = [a, b], it can be shown that all possible conditional probabilities are functions of transition rates [19]: $\rho_{(S' \to S)}(t) \in [0, \infty]$.

Notice that a discrete time Markov process can be derived from a continuous time Markov process, and is called a Jump Process, with the following transition probabilities:

$$P_{S \to S'} \equiv \frac{\rho_{S \to S'}}{\sum_{S'' \in \Sigma} \rho_{S \to S''}}$$
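As a quick numerical illustration (ours, using rate values in the spirit of the table on slide 5): if a state S has two outgoing transitions with rates 40 and 0.2, the jump process chooses between them with probabilities

$$P_{S \to S'} = \frac{40}{40 + 0.2} \approx 0.995, \qquad P_{S \to S''} = \frac{0.2}{40.2} \approx 0.005,$$

so the fast transition is taken almost always, while the continuous-time process additionally records how long the state lasted.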

If the transition probabilities or transition rates are time independent, the Markov process is called a time independent Markov process. In BKMC, only this case will be considered. For a time independent Markov process, the transition graph can be defined as follows: a transition graph is a graph in $\Sigma$, with an edge between S and S' if and only if $\rho_{S \to S'} > 0$ (or $P[s(t_i) = S' \mid s(t_{i-1}) = S] > 0$ if time is discrete).

Asynchronous Boolean dynamics as a discrete time Markov process

Asynchronous Boolean dynamics [2] is widely used in Boolean modeling. It can be easily interpreted as a discrete time Markov process [21,22] as shown below.

In the case of asynchronous Boolean dynamics, the system is given by n nodes (or agents), with a set of directed arrows linking these nodes and defining a network. For each node i, a Boolean logic $B_i(S)$ is specified and depends only on the nodes j for which there exists an arrow from node j to i (e.g. $B_1 = S_3$ AND NOT $S_4$, where $S_3$ and $S_4$ are the Boolean values of nodes 3 and 4 respectively, and $B_1$ is the Boolean logic of node 1). The notion of asynchronous transition (AT) can be defined as a pair of network states $(S, S') \in \Sigma^2$, written $(S \to S')$, such that

$$S'_j = B_j(S) \text{ for a given } j, \qquad S'_i = S_i \text{ for } i \neq j \qquad (4)$$

To define a Markov process, the transition probabilities $P[s(t_i) = S' \mid s(t_{i-1}) = S]$ can be defined: given two network states S and S', let $\gamma(S)$ be the number of asynchronous transitions from S to all possible states S'. Then

$$P[s(t_i) = S' \mid s(t_{i-1}) = S] = \begin{cases} 1/\gamma(S) & \text{if } (S \to S') \text{ is an AT} \\ 0 & \text{if } (S \to S') \text{ is not an AT} \end{cases} \qquad (5)$$

In this formalism, the asynchronous Boolean dynamics completely defines a discrete time Markov process when the initial condition is specified. Notice that here the transition probabilities are time independent, i.e. $P[s(t_i) = S \mid s(t_{i-1}) = S'] = P[s(t_{i+1}) = S \mid s(t_i) = S']$. Therefore, the approaches, mentioned in section "Continuous time in Boolean modeling: past and present", that introduce time implicitly by adding probabilities to each transition of the transition graph, can be seen as a generalization of the definition of $\gamma(S)$.

Continuous time Markov process as a generalization of asynchronous Boolean dynamics

To transform the discrete time Markov process described above into a continuous time Markov process, transition probabilities should be replaced by transition rates $\rho_{(S \to S')}$. In that case, conditional probabilities are computed by solving a master equation (equation 2 in Additional file 1, "Basic information on Markov process", section 1.1). We present below the corresponding numerical algorithm, the Kinetic Monte-Carlo algorithm [23].

Because we want a generalization of the asynchronous Boolean dynamics, transition rates $\rho_{(S \to S')}$ are non-zero only if S and S' differ by only one node. In that case, each Boolean logic $B_i(S)$ is replaced by two functions, one for the activation and one for the inactivation of node i (the rate_up and rate_down of the MaBoSS language).
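The Kinetic Monte-Carlo (Gillespie) step mentioned above can be summarized by two standard sampling formulas, stated here as a reminder (our summary; $u$ and $u'$ are independent uniform random numbers in $(0, 1]$):

$$\rho_{\mathrm{tot}}(S) = \sum_{S'} \rho_{S \to S'}, \qquad \tau = -\frac{\ln u}{\rho_{\mathrm{tot}}(S)}, \qquad P[\text{next state} = S'] = \frac{\rho_{S \to S'}}{\rho_{\mathrm{tot}}(S)},$$

where $u$ determines the waiting time $\tau$ and $u'$ picks the next state from the discrete distribution above. Time is advanced by $\tau$, the state is set to the sampled $S'$, and the loop repeats until max_time; averaging over traj_count such trajectories estimates the instantaneous probabilities.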


From Additional file 1 of Stoll et al. ("Basic information on Markov process"):

Because such a mathematical model can be very complicated, stochastic processes are often restricted to Markov processes: a Markov process is a stochastic process that has the Markov property, expressed in the following way: "conditional probabilities in the future, related to the present and the past, depend only on the present". This property can be translated as follows:

$$P\left[s(t_i) = S^{(i)} \mid s(t_1) = S^{(1)}, s(t_2) = S^{(2)}, \dots, s(t_{i-1}) = S^{(i-1)}\right] = P\left[s(t_i) = S^{(i)} \mid s(t_{i-1}) = S^{(i-1)}\right] \qquad (1)$$

For discrete time, i.e. I = {t₁, t₂, ...}, it can be shown that a Markov process is completely defined by its transition probabilities ($P[s(t_i) = S \mid s(t_{i-1}) = S']$) and its initial condition ($P[s(t_1) = S]$).

For continuous time, this can be generalized. If I is an interval (I = [t_m, t_M]), it can be shown (see Shiryaev) that a Markov process is completely defined by the set of transition rates $\rho(S \to S')$ and its initial condition $P[s(t_m) = S]$. In that case, instantaneous probabilities $P[s(t) = S]$ are solutions of a master equation:

$$\frac{d}{dt} P[s(t) = S] = \sum_{S'} \Big( \rho(S' \to S)\, P[s(t) = S'] - \rho(S \to S')\, P[s(t) = S] \Big) \qquad (2)$$

Formally, the transition rates (and the transition probabilities) can depend explicitly on time. For now, we will consider time independent transition rates. It can be shown that, according to this equation, the sum of probabilities over the network state space is constant. Obviously, the master equation represents a set of linear equations. Because the network state space is finite, $P[s(t) = S]$ can be seen as a vector of real numbers, indexed by the network state space: $\Sigma = \{S^{(\mu)},\ \mu = 1, \dots, 2^n\}$, $\vec{P}(t)_\mu \equiv P[s(t) = S^{(\mu)}]$. With this notation, the master equation becomes

$$\frac{d}{dt} \vec{P}(t) = M\, \vec{P}(t) \qquad (3)$$

with

$$M_{\mu\nu} \equiv \rho\left(S^{(\nu)} \to S^{(\mu)}\right) - \delta_{\mu\nu} \sum_{\kappa} \rho\left(S^{(\nu)} \to S^{(\kappa)}\right) \qquad (4)$$

called the transition matrix. The solution of the master equation can be written formally:

$$\vec{P}(t) = \exp(M t)\, \vec{P}(0) \qquad (5)$$

Solutions of the master equation provide not only the instantaneous probabilities, but also conditional probabilities:

$$P\left[s(t) = S^{(\mu)} \mid s(t') = S^{(\nu)}\right] = \left[\exp\big(M (t - t')\big)\right]_{\mu\nu} \qquad (6)$$
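As a minimal worked instance of equations (2)–(6) (our illustration, not from the supplement): for a single node with activation rate $a$ and inactivation rate $b$, ordering the states as $(0, 1)$,

$$M = \begin{pmatrix} -a & b \\ a & -b \end{pmatrix}, \qquad \vec{P}(t) = e^{Mt}\, \vec{P}(0) \;\longrightarrow\; \frac{1}{a+b} \begin{pmatrix} b \\ a \end{pmatrix} \text{ as } t \to \infty,$$

so the stationary probability of the active state is $a/(a+b)$, independently of the initial condition.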

(14)

• If t is discrete: the Markov process is defined by the initial condition and the transition probabilities

• If t is continuous: the Markov process is defined by the initial condition and the transition rates

Time-independent process: transition probabilities/rates do not depend on time

