
Aris Dokoumetzidis An algorithm for proper lumping of systems of ODEs

Aris Dokoumetzidis

University of Manchester

Objectives: We aim to develop an algorithm for automatic order reduction of large mathematical models, intended for simplifying systems biology models of interest in drug development, with potential use in pharmacodynamics.

Methods: We develop an optimisation-based algorithm for lumping of responses. More specifically, given a non-linear model described by a set of ordinary differential equations, a set of prior distributions for the model parameters and a set of constraints, the algorithm first determines all the candidate lumped models based on permutations. It then searches for the one that has the best average agreement with the original model according to a Bayesian objective function that averages over the parameter prior distributions. The algorithm is applied to an example model in order to demonstrate its performance.
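As a rough illustration of the approach described above, the sketch below enumerates proper lumping schemes (set partitions of the state vector), simulates each lumped model, and scores it by the average discrepancy from the full model over draws from the parameter priors. The toy linear-chain model, the log-normal priors, the pseudo-inverse reconstruction and the squared-error discrepancy are assumptions for illustration, not the authors' actual implementation or objective function.

```python
import numpy as np
from scipy.integrate import solve_ivp

def set_partitions(items):
    """Yield all partitions of a list; each partition is one proper lumping scheme."""
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for smaller in set_partitions(rest):
        for i, block in enumerate(smaller):
            yield smaller[:i] + [[first] + block] + smaller[i + 1:]
        yield [[first]] + smaller

def lumping_matrix(partition, n):
    """Proper lumping: 0/1 matrix with one row per lumped state, one 1 per column."""
    L = np.zeros((len(partition), n))
    for i, block in enumerate(partition):
        L[i, block] = 1.0
    return L

def full_rhs(t, x, k):
    """Assumed toy full model: a linear chain of first-order transfers with rates k."""
    A = -np.diag(k) + np.diag(k[:-1], -1)
    return A @ x

def lumped_rhs(t, y, k, L, Lpinv):
    """Lumped system dy/dt = L f(L+ y), with L+ a pseudo-inverse of the lumping matrix."""
    return L @ full_rhs(t, Lpinv @ y, k)

def score(partition, n, x0, t_eval, prior_draws):
    """Average squared discrepancy between full and lumped outputs over prior draws."""
    L = lumping_matrix(partition, n)
    Lpinv = np.linalg.pinv(L)
    total = 0.0
    for k in prior_draws:
        full = solve_ivp(full_rhs, (0, t_eval[-1]), x0, t_eval=t_eval, args=(k,)).y
        lump = solve_ivp(lumped_rhs, (0, t_eval[-1]), L @ x0, t_eval=t_eval,
                         args=(k, L, Lpinv)).y
        total += np.mean((L @ full - lump) ** 2)  # compare on the lumped observables
    return total / len(prior_draws)

if __name__ == "__main__":
    n = 4                                          # number of original states
    x0 = np.array([1.0, 0.0, 0.0, 0.0])
    t_eval = np.linspace(0, 10, 50)
    rng = np.random.default_rng(0)
    # Assumed log-normal priors on the rate constants.
    prior_draws = np.exp(rng.normal(np.log(0.5), 0.2, size=(20, n)))
    best = min(((score(p, n, x0, t_eval, prior_draws), p)
                for p in set_partitions(list(range(n))) if len(p) < n),
               key=lambda sp: sp[0])
    print("best proper lumping scheme:", best[1], "mean discrepancy:", best[0])
```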

Results: When applied to the example model, the algorithm produces a lumped model which is much smaller than the original. Furthermore, the variables and parameters of the reduced model retain a specific physiological meaning, since the algorithm only considers proper lumping schemes. The solution of the lumped model is found to be close to that of the original full model. Advantages of the algorithm are that it is completely automatic, can be applied to non-linear models, and can handle parameter uncertainty and constraints.

Conclusions: In the future, more sophisticated, mechanistic models will be needed to meet the challenges of quantitative pharmacology. Systems biology is a growing trend across the biological sciences, and PK/PD modelling needs to move in that direction as well. The inherent problem of overparametrisation of models derived by a systems approach, given the quality of the available data, will have to be addressed with mathematical tools such as lumping techniques in order for these models to be useful.

Poster: Methodology- Algorithms

Jeroen Elassaiss-Schaap Automation of Structural Pharmacokinetic Model Search in NONMEM: Evaluation with Preclinical Datasets

Jeroen Schaap (1), Stefan Verhoeven (2), Gerard Vogel (3), Martijn Rooseboom (3,4) and Rene van Schaik (2)

(1) PK-PD/M&S, Clinical Pharmacology and Kinetics, Organon N.V., The Netherlands; (2) Molecular Design and Informatics, Pharmacology Oss, Organon N.V., The Netherlands; (3) DMPK & Safety, Dept. Pharmacology Oss, Organon N.V., The Netherlands; (4) Dept. Toxicology and Drug Disposition Oss, Organon N.V., The Netherlands.

Objective: One of the factors limiting further implementation of PK-PD modeling in the pharmaceutical industry is the limited availability of modelers. Automation of model development is therefore an attractive proposition. An alternative to the published unsupervised global model-space search by means of a hybrid genetic algorithm [1] is a staged and supervised approach. The latter approach more closely reflects manual model development, with its mixed stages of model selection. We set out to implement the first building block of this approach, structural pharmacokinetic model search, and to evaluate its performance in the context of small, pharmacology-type preclinical PK datasets.

Methods: Nonlinear models were fitted with NONMEM V. Model selection was performed on the basis of a hierarchy of criteria: successful covariance step, AIC (with a tolerance of 2) and number of parameters. Selected models were manually checked on the basis of goodness-of-fit and observed/predicted versus time plots. Fifty-six datasets originating from preclinical research, manually selected to cover the range from easy to impossible to fit, were used as test input. The experiments typically included 1-3 routes of administration. Each non-iv route was combined with iv data; this procedure resulted in 73 datasets. The model search on this database was executed twice, on an Itanium2 machine with the Intel Fortran compiler 9.1 and on an Opteron machine with g77.
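A minimal sketch of the hierarchical selection rule described in the Methods, assuming a simple container for per-model fit results: fits with a successful covariance step are preferred, the lowest AIC wins, AIC differences within the tolerance of 2 are treated as ties, and ties are broken by the smaller number of parameters. The FitResult fields, the fallback when no covariance step succeeds, and the example values are illustrative assumptions, not the authors' NONMEM scripts.

```python
from dataclasses import dataclass

AIC_TOLERANCE = 2.0

@dataclass
class FitResult:
    name: str
    cov_step_ok: bool     # NONMEM covariance step completed successfully
    ofv: float            # minimum objective function value
    n_params: int         # number of estimated parameters

    @property
    def aic(self) -> float:
        return self.ofv + 2 * self.n_params

def select_model(fits: list[FitResult]) -> FitResult:
    """Apply the hierarchy: covariance step, then AIC (tolerance 2), then parsimony."""
    # Assumed fallback: if no fit has a successful covariance step, rank them all.
    candidates = [f for f in fits if f.cov_step_ok] or fits
    best_aic = min(f.aic for f in candidates)
    # Models whose AIC is within the tolerance of the best are considered tied.
    tied = [f for f in candidates if f.aic <= best_aic + AIC_TOLERANCE]
    return min(tied, key=lambda f: (f.n_params, f.aic))

if __name__ == "__main__":
    fits = [
        FitResult("1-cmt, 1st-order abs", True, 410.2, 4),
        FitResult("2-cmt, 1st-order abs", True, 407.1, 6),
        FitResult("2-cmt, zero-order abs", False, 401.5, 6),
    ]
    print("selected:", select_model(fits).name)
```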

Results: Screening of initial parameter estimates appeared to be the most time-consuming part of our automated model search, and a staged algorithm was developed as an efficient screen. Manual inspection of automatically selected models revealed that they were mostly acceptable, with the exception of models built on datasets of poor quality. Comparing the outcomes obtained on the two hardware setups revealed that (i) 8 datasets did not run due to technicalities, and that the selected models had (ii) 29 exactly identical objective function values (OFVs), (iii) 23 slightly different OFVs (delta < 3.7), (iv) 6 different OFVs and (v) 7 unambiguously different OFVs (delta >= 10). In groups iv and

Poster: Methodology- Algorithms

Serge Guzy Comparison between NONMEM and the Monte-Carlo Expectation Maximization (MC-PEM) Method Using a Physiologically-Based Glucose-Insulin Model

Robert Bauer(1), Serge Guzy(1), Hanna E Silber(2), Petra M Jauslin(2,3), Nicolas Frey(3), Mats O Karlsson(2)

(1) Pop-Pharm Inc., Berkeley, CA; (2) Uppsala University, Uppsala, Sweden; (3) Hoffmann-La Roche Inc., Basel, Switzerland

Objectives: The purpose of this work is to compare the methodologies of NONMEM and MC-PEM (as applied in the software packages PDx-MC-PEM and S-ADAPT) using an advanced model for regulation of glucose and insulin kinetics.

Methods: In NONMEM a linearized form of the likelihood function is maximized. The MC-PEM method, by using Monte-Carlo simulations during the expectation step, makes it possible to maximize the exact likelihood while avoiding complicated integration algorithms. The MC-PEM algorithm consists of two main steps. The first is the expectation step (E-step), in which Monte-Carlo-sampled model parameters are used to assess the conditional means and variances for each subject at the current values of the population parameters and inter-subject variances. The E-step is followed by the maximization step (M-step), which updates the population parameter characteristics (means and variances).
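The sketch below illustrates one MC-PEM iteration for a deliberately simple one-parameter model: an importance-sampling E-step computes each subject's conditional mean and variance at the current population estimates, and the M-step updates the population mean and inter-subject variance from those conditional moments. The mono-exponential structural model, data layout and sampler settings are assumptions for illustration; this is not the PDx-MC-PEM or S-ADAPT code.

```python
import numpy as np

def individual_loglik(theta_i, times, y_i, sigma):
    """Log-likelihood of one subject's data under an assumed mono-exponential model."""
    pred = 100.0 * np.exp(-theta_i * times)
    return -0.5 * np.sum(((y_i - pred) / sigma) ** 2)

def mcpem_iteration(data, mu, omega, sigma, n_samples=2000, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    cond_means, cond_vars = [], []
    for times, y_i in data:
        # E-step: Monte-Carlo sample individual parameters from the current population
        # distribution and weight by the data likelihood to get conditional moments.
        draws = rng.normal(mu, np.sqrt(omega), size=n_samples)
        logw = np.array([individual_loglik(th, times, y_i, sigma) for th in draws])
        w = np.exp(logw - logw.max())
        w /= w.sum()
        m_i = np.sum(w * draws)               # conditional mean for this subject
        v_i = np.sum(w * (draws - m_i) ** 2)  # conditional variance for this subject
        cond_means.append(m_i)
        cond_vars.append(v_i)
    # M-step: update the population mean and inter-subject variance.
    cond_means, cond_vars = np.array(cond_means), np.array(cond_vars)
    mu_new = cond_means.mean()
    omega_new = np.mean(cond_vars + (cond_means - mu_new) ** 2)
    return mu_new, omega_new

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    times = np.linspace(0.5, 8, 6)
    true_k = rng.normal(0.3, 0.05, size=20)    # simulated individual elimination rates
    data = [(times, 100.0 * np.exp(-k * times) + rng.normal(0, 2, times.size))
            for k in true_k]
    mu, omega = 0.5, 0.04                      # initial population estimates
    for _ in range(30):
        mu, omega = mcpem_iteration(data, mu, omega, sigma=2.0)
    print(f"estimated mu={mu:.3f}, omega={omega:.4f}")
```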

As the MC-PEM algorithm is particularly suited for complex models with high-dimensional inter-subject variance structures, the comparison between the MC-PEM and NONMEM algorithms has been conducted using the physiologically based glucose-insulin model previously developed by HE Silber and PM Jauslin and presented at the PAGE meetings in 2004 and 2005 [1-3].

Results: In a first analysis, the estimation procedure was performed with some predefined constraints, as present in the NONMEM version of the glucose-insulin model (disposition parameters following the OGTT fixed to the values obtained by analyzing the IVGTT). This analysis resulted in similar final estimates across the three programs for the population means and variances that were allowed to vary, with intra-subject variances over-estimated in PDx-MC-PEM. In a second analysis, the fit was performed without any constraints on the population means and variances; a full S-ADAPT analysis was performed successfully, with a statistically significant improvement in the objective function. The same analysis is being conducted using NONMEM.

Conclusions: S-ADAPT's ability to combine Monte-Carlo stochastic algorithms with deterministic optimization algorithms allowed both precise objective function assessment and a full analysis without any parameter constraints.

References:

[1] PAGE 13 (2004) Abstr 541 [www.page-meeting.org/?abstract=541]

[2] PAGE 14 (2005) Abstr 799 [www.page-meeting.org/?abstract=799]

[3] PAGE 14 (2005) Abstr 826 [www.page-meeting.org/?abstract=826]

Poster: Methodology- Algorithms

Robert Leary An evolutionary nonparametric NLME algorithm

Robert H. Leary

Pharsight Corporation

Nonparametric NLME algorithms [1,2,3] relax the usual multivariate normality assumption for the random-effects distribution to allow arbitrary nonparametric distributions. The nonparametric maximum likelihood distribution can be shown to be a discrete distribution defined by probabilities P_j on support points S_j, j = 1, 2, ..., M, where M ≤ N, with N the number of subjects. Support points have dimensionality D equal to the number of random effects. While finding the optimal probabilities for a given set of support points is a convex optimization problem with known fast and reliable algorithms, optimizing the support point positions is a much more difficult global optimization problem with many local optima, particularly in higher dimensions.
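As an illustration of the convex sub-problem mentioned above (optimizing the probabilities for a fixed set of support points), the sketch below maximizes the discrete mixture log-likelihood over the probability simplex using a classical EM-type multiplicative update rather than the primal-dual algorithm of [3]. The scalar random-effect setting, the normal likelihood and the grid of candidate support points are assumptions for illustration.

```python
import numpy as np

def optimize_probabilities(psi, n_iter=500, tol=1e-10):
    """Maximize sum_i log(sum_j p_j * psi[i, j]) over the probability simplex.

    psi[i, j] is the likelihood of subject i's data at candidate support point j.
    """
    n_sub, n_pts = psi.shape
    p = np.full(n_pts, 1.0 / n_pts)
    loglik = -np.inf
    for _ in range(n_iter):
        mix = psi @ p                              # marginal likelihood per subject
        new_loglik = np.sum(np.log(mix))
        if new_loglik - loglik < tol:
            loglik = new_loglik
            break
        loglik = new_loglik
        p *= (psi / mix[:, None]).mean(axis=0)     # EM-type multiplicative update
    return p, loglik

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Assumed toy setting: scalar random effect, one observation per subject.
    true_eta = rng.choice([-1.0, 1.5], size=40)    # bimodal "true" distribution
    obs = true_eta + rng.normal(0, 0.3, size=40)
    grid = np.linspace(-3, 3, 61)                  # candidate support points (D = 1)
    psi = np.exp(-0.5 * ((obs[:, None] - grid[None, :]) / 0.3) ** 2)
    p, ll = optimize_probabilities(psi)
    print("support points with non-negligible mass:", np.round(grid[p > 1e-3], 2))
```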

In the low-dimensional case (e.g., D = 1 or 2), a computationally feasible and effective strategy involves gridding the random-effects space (as in the NPEM algorithm in [2]) at reasonable resolution over a box whose dimensions are derived from the observed post-hoc and/or Omega matrix estimates from an initial parametric analysis. This allows a (near) global optimum to be found in a single step using only a very fast and effective convex primal-dual probability optimization algorithm [3] that selects an optimal subset from the candidate pool defined by the grid. However, in higher dimensions reliance on gridding alone breaks down due to Bellman's 'curse of dimensionality': the number of grid points grows exponentially with D and quickly overwhelms computational feasibility if reasonable resolution is maintained. The alternative strategy of direct optimization of support point coordinates with derivative-based methods often leads to trapping in one of a large number of local optima.

Here we consider an evolutionary-type algorithm that leverages the power and speed of the convex probability optimization algorithm to find good approximations to the solution of the more general nonconvex global optimization problem. Using only the probability optimization algorithm, it is possible to select the M optimal support points (and assign maximum likelihood probabilities) from an arbitrarily large collection of candidates. The algorithm evolves a population of candidate support points in a manner that guarantees that the likelihood of the optimal subset increases monotonically with each generation. A novel feature of the algorithm is the use of the dual solution for the current generation to prescreen large numbers of candidates for new support points, limiting population growth to only the most promising new points. The candidate proposal function
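A sketch of the generational loop described above, reusing optimize_probabilities() from the previous block: each generation proposes new candidate support points, re-optimizes the probabilities over the union of old and new candidates, and prunes points with negligible mass. Random perturbation of the current support stands in here for the dual-based prescreening of candidates; that substitution, and all numerical settings, are assumptions rather than the algorithm's actual proposal mechanism.

```python
import numpy as np

def evolve_support(obs, sigma, n_generations=20, n_new=30, rng=None):
    """Evolve a 1-D support set; reuses optimize_probabilities() from the sketch above."""
    if rng is None:
        rng = np.random.default_rng(3)
    support = np.linspace(obs.min(), obs.max(), 5)   # coarse initial support
    loglik = -np.inf
    for _ in range(n_generations):
        # Propose new candidates near the current support (assumed stand-in for the
        # dual-based prescreening described in the abstract).
        proposals = rng.choice(support, n_new) + rng.normal(0, 0.5, n_new)
        candidates = np.concatenate([support, proposals])
        psi = np.exp(-0.5 * ((obs[:, None] - candidates[None, :]) / sigma) ** 2)
        p, loglik = optimize_probabilities(psi)
        # Keeping the old support in the candidate pool means the optimal log-likelihood
        # should not decrease across generations (up to the pruning threshold below).
        support = candidates[p > 1e-6]
    return support, loglik

# Example usage (with obs and sigma as in the previous sketch):
#   support, ll = evolve_support(obs, sigma=0.3)
```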

References:

[1] Mallet, A., A maximum likelihood estimation method for random coefficient regression models, Biometrika 73: 645-656, 1986.

[2] Schumitzky, A., Nonparametric EM algorithms for estimating prior distributions, Appl. Math. Comput. 45: 143-157, 1991.

[3] Leary, R. et al., An adaptive grid non-parametric approach to pharmacokinetic and dynamic (PK/PD) population models, 14th IEEE Symposium on Computer-Based Medical Systems, 389-394, 2001.

Poster: Methodology- Algorithms

Elodie Plan Investigation of the performance of FOCE and LAPLACE algorithms in NONMEM VI for population parameter estimation with PK and PD continuous data

E. Plan, M. C. Kjellsson, M. O. Karlsson

Division of Pharmacokinetics and Drug Therapy, Department of Pharmaceutical Biosciences, Uppsala University, Uppsala, Sweden

Background: At PAGE 2005, P. Girard and F. Mentré presented a study in which the performance of several estimation methods used in nonlinear mixed-effects modeling was compared. It was shown that FOCE using NONMEM VI beta ended in successful minimization in only 49% of the runs [1].

In NONMEM VI, S. Beal added a warning message that makes the covariance step abort when its estimate is equal to zero. It has been hypothesized that these warnings are the reason why FOCE in NONMEM VI beta showed such poor minimization properties. NONMEM VI also includes a new estimation method, LAPLACE INTER, clearly intended for continuous-type data. We therefore also wanted to explore differences between FOCE and LAPLACE for such data.

Objective: The aim of this study was to compare the estimation performances of different methods available in NONMEM VI, with focus on FOCE and LAPLACE methods.

Methods: The 100 datasets simulated by Girard et al. were re-examined using NONMEM VI, and a NONMEM VI version compiled without the warnings, with the methods FOCE, LAPLACE, SLOW, INTER, NUMERICAL, LIKE and -2LL. The model used to estimate these PK data was a one-compartment model with first-order absorption and first-order elimination. Exponential random effects on all fixed effects, with off-diagonal elements estimated, and an exponential residual error completed the model. Another 100 datasets were generated to further investigate the difference in performance between FOCE and LAPLACE. These PD data were estimated using a sigmoid Hill model with a baseline and a correlation between EMAX and ED50. The model also included random effects applied multiplicatively to 3 of the 4 parameters and an additive error. Results were compared by computing the bias and precision of the parameters estimated from the 100 datasets and by plotting relative estimation errors.
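For concreteness, the sketch below simulates data of the two kinds described in the Methods: a one-compartment PK model with first-order absorption and elimination, exponential random effects and an exponential residual error, and a sigmoid Hill PD model with a baseline and correlated EMAX/ED50 random effects plus an additive error. All parameter values, sampling times and subject numbers are assumptions; these are not the Girard et al. datasets.

```python
import numpy as np

rng = np.random.default_rng(4)
n_subjects, dose = 50, 100.0
t = np.array([0.25, 0.5, 1, 2, 4, 8, 12, 24])        # assumed PK sampling times

# PK: one-compartment model, first-order absorption and elimination.
theta_pk = {"ka": 1.0, "cl": 5.0, "v": 20.0}
omega_pk = np.diag([0.09, 0.09, 0.09])                # diagonal shown; the study also estimated off-diagonals
eta = rng.multivariate_normal(np.zeros(3), omega_pk, n_subjects)
ka = theta_pk["ka"] * np.exp(eta[:, 0])               # exponential random effects
cl = theta_pk["cl"] * np.exp(eta[:, 1])
v = theta_pk["v"] * np.exp(eta[:, 2])
ke = cl / v
conc = (dose * ka[:, None] / (v[:, None] * (ka - ke)[:, None])
        * (np.exp(-ke[:, None] * t) - np.exp(-ka[:, None] * t)))
pk_obs = conc * np.exp(rng.normal(0, 0.1, conc.shape))    # exponential residual error

# PD: sigmoid Hill model with baseline; correlated EMAX and ED50 random effects.
theta_pd = {"e0": 10.0, "emax": 100.0, "ed50": 5.0, "hill": 2.0}
omega_pd = np.array([[0.04, 0.03], [0.03, 0.09]])     # EMAX-ED50 covariance
eta_pd = rng.multivariate_normal(np.zeros(2), omega_pd, n_subjects)
e0 = theta_pd["e0"] * np.exp(rng.normal(0, 0.2, n_subjects))
emax = theta_pd["emax"] * np.exp(eta_pd[:, 0])
ed50 = theta_pd["ed50"] * np.exp(eta_pd[:, 1])
doses = np.linspace(0, 20, 8)                          # assumed PD design
effect = e0[:, None] + (emax[:, None] * doses ** theta_pd["hill"]
                        / (ed50[:, None] ** theta_pd["hill"] + doses ** theta_pd["hill"]))
pd_obs = effect + rng.normal(0, 2.0, effect.shape)     # additive residual error

print("simulated PK observations:", pk_obs.shape, "PD observations:", pd_obs.shape)
```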

Results and Discussion: 100% successful minimizations were obtained with NONMEM VI both with and without the warnings; otherwise, no evident difference was seen between the different algorithms for the PK data estimations. For the PD data estimations, a trend towards lower bias with LAPLACE than with FOCE was observed, whereas precision was quite similar between these two

Poster: Methodology- Algorithms

Benjamin Ribba Parameter estimation issues for complex population PK models: