Unsupervised gene network inference with decision trees and Random forests

Academic year: 2021

Figures

Fig. 1 Overfitting and underfitting. The blue (resp. orange) curve plots, for varying levels of complexity of the predictive model, the average value of the loss function over the instances of the learning sample (resp. the test sample).
Fig. 3 GENIE3 procedure. For each gene g_j, j = 1, ..., G, a learning sample LS_j is generated with the expression levels of g_j as output values and the expression levels of all the other genes as input values.
Table 1 Running times of the different GENIE3 implementations
Fig. 4 AUPRs (blue circles) and running times (orange triangles) of GENIE3, when varying the values of the parameters K (number of randomly chosen candidate regulators at each split node of a tree), n_min (minimum number of samples at a leaf) and T (number of trees).
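The per-gene regression setup summarized in Fig. 3, together with the parameters K, n_min and T from Fig. 4, can be sketched as follows. This is a minimal illustration built on scikit-learn's `RandomForestRegressor`, not the reference GENIE3 implementation; the function name `genie3_scores` and the default parameter values are assumptions for the example.

```python
# Sketch of the GENIE3 idea (Fig. 3): for each gene g_j, fit a random
# forest predicting its expression from all other genes, and use the
# forest's feature importances as putative regulatory link scores.
# Parameter names follow Fig. 4: K, n_min, T. Assumed defaults only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def genie3_scores(expr, K="sqrt", n_min=1, T=100, seed=0):
    """expr: (n_samples, n_genes) expression matrix.
    Returns a (n_genes, n_genes) matrix w where w[i, j] scores the
    putative regulatory link from gene i to gene j."""
    n_samples, G = expr.shape
    w = np.zeros((G, G))
    for j in range(G):
        # Learning sample LS_j: all other genes as inputs, g_j as output.
        inputs = np.delete(expr, j, axis=1)
        rf = RandomForestRegressor(
            n_estimators=T,          # T: number of trees
            max_features=K,          # K: candidate regulators tried per split
            min_samples_leaf=n_min,  # n_min: minimum samples at a leaf
            random_state=seed,
        )
        rf.fit(inputs, expr[:, j])
        others = [i for i in range(G) if i != j]
        w[others, j] = rf.feature_importances_
    return w

# Toy usage on random data (50 samples, 10 genes).
scores = genie3_scores(np.random.default_rng(0).normal(size=(50, 10)))
```

Ranking all off-diagonal entries of the resulting score matrix yields the directed edge list from which metrics such as the AUPRs in Fig. 4 are computed.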