Applying Model-Based Optimization to Hyperparameter Optimization in Machine Learning

Academic year: 2022


Bernd Bischl

Ludwig-Maximilians-Universität München, München, Germany, bernd.bischl@stat.uni-muenchen.de

Abstract. This talk will cover the main components of sequential model-based optimization algorithms. Algorithms of this kind represent the state of the art for expensive black-box optimization problems and are becoming increasingly popular for hyper-parameter optimization of machine learning algorithms, especially on larger data sets.

The first part presents the main components of sequential model-based optimization algorithms: surrogate regression models such as Gaussian processes or random forests, the initialization phase, and point acquisition.
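The loop these components form can be sketched as follows. Everything in this sketch is an illustrative assumption rather than material from the talk: a 1-D toy objective, a Gaussian-process surrogate with a hand-picked RBF length scale, a tiny space-filling initial design, and a lower-confidence-bound acquisition criterion.

```python
import numpy as np

# Toy stand-in for an expensive black-box objective (hypothetical choice).
def objective(x):
    return np.sin(3.0 * x) + 0.5 * x ** 2

# Gaussian-process surrogate with an RBF kernel and a fixed length scale.
def gp_posterior(X, y, Xs, length=0.3, noise=1e-6):
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y                          # posterior mean at candidates
    var = 1.0 - np.sum(Ks * (Kinv @ Ks), axis=0)  # posterior variance
    return mu, np.sqrt(np.maximum(var, 1e-12))

grid = np.linspace(-2.0, 2.0, 401)   # candidate points in the search space

# Initialization phase: a small space-filling design.
X = np.array([-1.5, 0.0, 1.5])
y = objective(X)

# Sequential point acquisition: evaluate where the lower confidence
# bound mu - 2*sd is smallest (trading off mean and uncertainty).
for _ in range(10):
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[np.argmin(mu - 2.0 * sd)]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

best_x, best_y = X[np.argmin(y)], y.min()
```

Swapping the acquisition criterion (e.g., expected improvement instead of the confidence bound used here) or the surrogate (a random forest for mixed or conditional hyper-parameter spaces) leaves this loop structurally unchanged.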

In a second part, I will cover some recent extensions regarding parallel point acquisition, multi-criteria optimization, and multi-fidelity approaches for subsampled data. Most of the covered applications will use support vector machines as examples for hyper-parameter optimization.
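One way to read the multi-fidelity idea is a successive-halving-style scheme on subsampled data: score many configurations on a small fraction of the data, keep the better half, and give the survivors a larger fraction. The loss function below is a hypothetical stand-in for, e.g., validating an SVM cost parameter C on a subsample; none of its specifics come from the talk.

```python
import random

random.seed(0)

# Hypothetical stand-in for "train an SVM with cost C on a data subsample
# and return its validation loss": smaller fractions give noisier estimates.
def subsampled_loss(c, fraction):
    noise = (1.0 - fraction) * random.uniform(-0.2, 0.2)
    return abs(c - 1.0) + noise   # pretend the best cost is C = 1.0

configs = [0.01, 0.1, 0.5, 1.0, 2.0, 5.0, 10.0, 50.0]  # candidate C values
fraction = 0.1                                          # initial data budget

while len(configs) > 1:
    scored = sorted(configs, key=lambda c: subsampled_loss(c, fraction))
    configs = scored[: len(configs) // 2]  # keep the better half
    fraction = min(1.0, 2.0 * fraction)    # survivors get more data

best_c = configs[0]
```

The point of the scheme is budget allocation: clearly bad configurations are discarded after cheap, noisy evaluations, and only promising ones are re-evaluated at higher fidelity.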

The talk will finish with a brief overview of open questions and challenges.
