Teaching Machines to Discover Particles


(1)

Teaching machines to discover particles

Gilles Louppe

@glouppe

(2)

A few words about myself

• Training in computer science and machine learning.

• Postdoc at CERN+NYU, working at the intersection of ML and physical sciences.

(3)
(4)

The Key to Science

(5)

The scientific method

Hypothesis → Experiment → Conclusion

Example: “The Higgs boson exists” → LHC + ATLAS + CMS → Discovery!

• The scientific method = iteration over the sequence “hypothesis, experiment, conclusion”.

• Conclusions are routinely automated through statistical inference, in which machine learning methods are embedded.

• Hypotheses and experiments are usually left to the scientists.

(6)

Testing for new physics


Hypothesis test based on the likelihood ratio

p(x|background) / p(x|background + signal)

(9)

The Standard Model

The uniqueness of particle physics lies in its highly precise and compact model. This model should be leveraged by ML!

(10)

Physics jargon vs. ML lingo

Physicists and machine learning researchers do not speak the same language.

(11)
(12)

The players

θ := (µ, ν): parameters

• µ: parameters of interest

• ν: nuisance parameters

p(x, z|θ): forward modeling (generation, simulation, prediction)

Inference: the inverse problem (unfolding, measurement, parameter search)

x ∼ p_r(x): observations drawn from Nature

x ∼ p(x|θ): simulated data (a lot!)

(13)

Likelihood-free assumptions

Operationally,

x ∼ p(x|θ) ≡ z ∼ p(z|θ), x = g(z; θ) where

• z provides a source of randomness;

• g is a non-differentiable deterministic function (e.g. a computer program).

Accordingly, the density p(x|θ) can be written as

p(x|θ) = ∫_{z : g(z;θ) = x} p(z|θ) µ(dz)
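To make the two-step sampling concrete, here is a minimal Python sketch of z ∼ p(z|θ), x = g(z; θ). The simulator is purely hypothetical; `sample_z` and `g` are illustrative names, not from the talk:

```python
import numpy as np

def sample_z(theta, rng):
    # Latent randomness z ~ p(z|theta): a Poisson number of scatterings
    # plus per-scattering variables (illustrative only).
    n = rng.poisson(lam=theta)
    return rng.standard_normal(n)

def g(z, theta):
    # Deterministic program mapping latents to an observable x.
    # Arbitrary control flow is allowed: g need not be differentiable,
    # and recovering {z : g(z; theta) = x} is intractable in general.
    x = 0.0
    for zi in z:
        x += np.tanh(zi) if zi > 0 else 0.5 * zi
    return x

rng = np.random.default_rng(0)
theta = 3.0
xs = [g(sample_z(theta, rng), theta) for _ in range(5)]  # x ~ p(x|theta)
print(xs)
```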

(14)

Determining and evaluating all possible execution paths and all z that lead to the observation x is not tractable.

(15)

Testing hypotheses (Inference)

Formally, physicists usually test a null θ = θ0 by constructing the likelihood ratio test statistic

Λ(D; θ0) = p(D|θ0) / sup_{θ∈Θ} p(D|θ) = ∏_{x∈D} p(x|θ0) / sup_{θ∈Θ} ∏_{x∈D} p(x|θ)

• Most measurements and searches for new particles are based on the

distribution of a single variable x ∈ R.

• The likelihood p(x|θ) is approximated using 1D histograms.

• Choosing a good variable x tailored for the goal of the experiment is the physicist’s job.
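As an illustration of this histogram-based workflow, here is a minimal sketch, with toy Gaussian samples standing in for simulated data and an arbitrary binning choice:

```python
import numpy as np

rng = np.random.default_rng(0)
bins = np.linspace(-5, 5, 41)

# Simulated samples under two hypotheses (toy stand-ins).
x0 = rng.normal(0.0, 1.0, 100_000)   # x ~ p(x|theta_0)
x1 = rng.normal(0.5, 1.2, 100_000)   # x ~ p(x|theta_1)

# 1D histogram density estimates of each likelihood.
p0, _ = np.histogram(x0, bins=bins, density=True)
p1, _ = np.histogram(x1, bins=bins, density=True)

def log_likelihood_ratio(data):
    # Look up each observation's bin and sum the per-event log ratios.
    idx = np.clip(np.digitize(data, bins) - 1, 0, len(p0) - 1)
    eps = 1e-12  # guard against empty bins
    return np.sum(np.log(p0[idx] + eps) - np.log(p1[idx] + eps))

data = rng.normal(0.45, 1.2, 500)    # observations
print(log_likelihood_ratio(data))    # negative value favors theta_1 here
```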

(16)

Likelihood-free inference (Inference)

Given observations x ∼ p_r(x), we seek:

θ∗ = arg max_θ p(x|θ)

• Histogramming p(x|θ) does not scale to high dimensions.

• Can we automate or bypass the physicist’s job of thinking about a good and compact representation for x, without losing information?

(17)

Approximating likelihood ratios with classifiers (CARL)

The likelihood ratio r(x) is invariant under the change of variable u = s(x), provided s(x) is monotonic with r(x):

r(x) = p(x|θ0) / p(x|θ1) = p(s(x)|θ0) / p(s(x)|θ1)

A classifier s trained to distinguish x ∼ p(x|θ0) from x ∼ p(x|θ1)

satisfies the condition above.

This gives an automatic procedure for learning a good and compact representation for x!
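A minimal sketch of the trick, using toy 1D Gaussians and a scikit-learn logistic regression; the simple odds formula below assumes balanced classes:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Samples from the two hypotheses (equal sizes => balanced classes).
x0 = rng.normal(0.0, 1.0, (50_000, 1))  # x ~ p(x|theta_0), label 0
x1 = rng.normal(1.0, 1.0, (50_000, 1))  # x ~ p(x|theta_1), label 1
X = np.vstack([x0, x1])
y = np.concatenate([np.zeros(len(x0)), np.ones(len(x1))])

# With balanced classes, the optimal classifier output is
# s(x) = p(x|theta_1) / (p(x|theta_0) + p(x|theta_1)),
# hence r(x) = p(x|theta_0) / p(x|theta_1) = (1 - s(x)) / s(x).
clf = LogisticRegression().fit(np.hstack([X, X**2]), y)  # quadratic features

def r_hat(x):
    feats = np.hstack([x, x**2])
    s = clf.predict_proba(feats)[:, 1]
    return (1.0 - s) / s

x = np.array([[0.0], [1.0]])
print(r_hat(x))  # compare to the exact ratio of the two Gaussian densities
```

In practice s(x) need not equal the exact class posterior; any classifier output monotonic with r(x) preserves the ratio after calibration, which is the basis of the CARL procedure.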

(18)

Therefore,

θ∗ = arg max_θ p(x|θ) = arg max_θ p(x|θ) / p(x|θ1) = arg max_θ p(s(x; θ, θ1)|θ) / p(s(x; θ, θ1)|θ1)

where θ1 is fixed and s(x; θ, θ1) is a family of classifiers trained to distinguish between θ and θ1.

(19)

Application to the Higgs

Preliminary work using fast detector simulation and CARL to approximate likelihoods using full kinematic information, parameterized by the coefficients of a quantum field theory.

(20)

Learning in implicit generative models

Likelihood-free inference can be cast into the framework of “implicit generative models”.

This framework ties together:

• Approximate Bayesian computation

• Density estimation by comparison (two-sample testing, density-ratio and density-difference estimation)

• Generative adversarial networks

• Variational inference

(21)

Likelihood-free inference has become

(22)
(23)

Fast simulation (Prediction)

• Half of the LHC computing power (300,000 cores) is dedicated to producing simulated data.

• Huge savings (in time and money) if simulations can be made faster.

• Hand-made fast simulators are being developed by physicists, trading off precision for speed.

Can we learn to generate data?

(24)

Generative adversarial networks

L_W = E_{x̃∼p(x|θ)}[d(x̃; φ)] − E_{x∼p_r(x)}[d(x; φ)]

L_d = L_W + λΩ(φ)
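For concreteness, a minimal PyTorch sketch of this critic objective on a 1D toy; the `simulate` and `real_data` stand-ins are hypothetical, and the gradient penalty is one common choice for the regularizer Ω(φ):

```python
import torch

critic = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def simulate(n):                     # stand-in for x ~ p(x|theta)
    return torch.randn(n, 1) * 1.5 + 0.5

def real_data(n):                    # stand-in for x ~ p_r(x)
    return torch.randn(n, 1)

lam = 10.0
for step in range(200):
    x_fake, x_real = simulate(256), real_data(256)
    # L_W = E_{x~p(x|theta)}[d(x; phi)] - E_{x~p_r(x)}[d(x; phi)]
    L_W = critic(x_fake).mean() - critic(x_real).mean()
    # Omega(phi): gradient penalty on interpolated points (WGAN-GP style).
    a = torch.rand(256, 1)
    x_mix = (a * x_real + (1 - a) * x_fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(x_mix).sum(), x_mix, create_graph=True)[0]
    Omega = ((grad.norm(dim=1) - 1) ** 2).mean()
    L_d = L_W + lam * Omega
    opt.zero_grad()
    L_d.backward()
    opt.step()
```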

(25)

Learning generative models (Prediction)

Challenges:

• How to ensure physical properties?

• Non-uniform geometry

• Mostly sparse

• GANs vs. VAEs vs. normalizing flows?

Credits: 1701.05927, 1705.02355

(26)

How to evaluate generative models?

Physics: evaluate well-known physical variates.

ML: look at generated images.

This is not satisfying.

Can’t we do better from a methodological standpoint?

(Some first steps at 1511.01844)

(27)

What if the generator g in GANs isn’t a neural net, but an actual physics simulator?


(29)

Variational Optimization

min_θ f(θ) ≤ E_{θ∼q(θ|ψ)}[f(θ)] = U(ψ)

∇_ψ U(ψ) = E_{θ∼q(θ|ψ)}[f(θ) ∇_ψ log q(θ|ψ)]
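A minimal numpy sketch of this score-function estimator, minimizing a toy black-box f with a Gaussian search distribution q(θ|ψ) = N(θ; µ, σ²), ψ = (µ, log σ); all names are illustrative:

```python
import numpy as np

def f(theta):                        # black-box, possibly non-differentiable
    return (theta - 3.0) ** 2 + np.sin(5 * theta)

rng = np.random.default_rng(0)
mu, log_sigma, lr = 0.0, 0.0, 0.05   # psi = (mu, log_sigma)

for step in range(500):
    sigma = np.exp(log_sigma)
    theta = rng.normal(mu, sigma, size=64)        # theta ~ q(theta|psi)
    fs = f(theta)
    fs = fs - fs.mean()                           # baseline: reduces variance
    # grad_psi log q(theta|psi) for a Gaussian:
    g_mu = (theta - mu) / sigma**2                # wrt mu
    g_ls = ((theta - mu) ** 2 / sigma**2) - 1.0   # wrt log sigma
    mu -= lr * np.mean(fs * g_mu)                 # descend U(psi)
    log_sigma -= lr * np.mean(fs * g_ls)

print(mu, np.exp(log_sigma))   # mu drifts toward a minimum of f, sigma shrinks
```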

(30)

Adversarial Variational Optimization

• Replace the generative network with a non-differentiable forward simulator g(z; θ).

• With VO, optimize upper bounds of the adversarial objectives:

U_d = E_{θ∼q(θ|ψ)}[L_d]   (1)

U_g = E_{θ∼q(θ|ψ)}[L_g]   (2)

respectively over φ and ψ.

(31)

Operationally, we get the marginal model q(x|ψ) = ∫ p(x|θ) q(θ|ψ) dθ.

(32)

Toy example: e⁺e⁻ → µ⁺µ⁻

• Simplified simulator for electron–positron collisions resulting in muon–antimuon pairs.

• Observations: x = cos(A) ∈ [−1, 1], where A is the polar angle of the outgoing muon with respect to the incoming electron.
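A stand-in for such a simulator, assuming a 1 + cos²A angular shape; this is an illustrative approximation only, as the real toy depends on the beam energy and couplings:

```python
import numpy as np

def simulate(n, rng):
    # Rejection-sample x = cos(A) from p(x) ∝ 1 + x**2 on [-1, 1]
    # (illustrative shape only; the real simulator depends on Ebeam, Gf).
    xs = []
    while len(xs) < n:
        x = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, 2.0) < 1.0 + x * x:   # max of 1 + x^2 is 2
            xs.append(x)
    return np.array(xs)

rng = np.random.default_rng(0)
x = simulate(10_000, rng)
print(x.min(), x.max(), x.mean())   # symmetric sample in [-1, 1]
```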

(33)

[Figure: the proposal q(θ|ψ) over the (Ebeam, Gf) plane converging toward θ∗ = (42, 0.9); the observed distribution p_r(x) compared with the learned marginal model p(x|ψ); the critic objective U_d over training steps.]

(34)

Powering the scientific method with AI

(35)

Automating the scientific process

Hypothesis → Experiment → Conclusion

Most efforts are focused on automating the analysis of experimental results to draw conclusions, assuming the hypothesis and experiment are fixed.

Can we also automate the choice of hypothesis and experiment?

(36)

Optimal experimental design

Hypothesis → Experiment(φ) → Conclusion

Parameters θ of the (standard) model are known with uncertainty H[θ]. How best to reduce the uncertainty H[θ]?

1. Assume an experiment with parameters φ can be simulated.

2. Simulate the expected improvement Δ(φ) = H[θ] − E_{data|φ}[H[θ|data]]. This embeds the full likelihood-free inference procedure (a sketch follows after the list below).

3. Find φ∗ = arg max_φ Δ(φ).

Computationally (super) heavy.

Connections to:

• Bayesian optimization

• Optimal experimental design
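A minimal nested Monte Carlo sketch of steps 1-3 on a toy Gaussian model. Here the likelihood is kept tractable for clarity; in the likelihood-free setting each inner term would itself have to be approximated, hence the heavy cost:

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 2.0                                  # prior std of theta

def sigma(phi):                            # experiment phi sets the noise level
    return 0.5 + (phi - 1.0) ** 2          # illustrative cost surface

def eig(phi, n=2000, m=200):
    # Nested Monte Carlo estimate of Delta(phi) = I(theta; y | phi)
    # = E_{theta, y}[log p(y|theta, phi) - log p(y|phi)].
    s = sigma(phi)
    theta = rng.normal(0, tau, n)                  # theta ~ prior
    y = rng.normal(theta, s)                       # simulated experiment
    log_lik = -0.5 * ((y - theta) / s) ** 2        # constants cancel below
    theta_m = rng.normal(0, tau, (m, 1))           # inner prior samples
    log_marg = np.log(
        np.mean(np.exp(-0.5 * ((y - theta_m) / s) ** 2), axis=0) + 1e-300)
    return np.mean(log_lik - log_marg)

phis = np.linspace(0.0, 2.0, 21)
gains = [eig(p) for p in phis]
print(phis[int(np.argmax(gains))])   # phi* = argmax Delta(phi); ~1.0 here
```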


(38)

Active sciencing


(40)

Exploring the theory space

Hypothesis(ψ) → Experiment → Conclusion

The Standard Model admits several extensions. Can we explore the space of theories and find the envelope that agrees with the data?

• Assume a generative model of theories, indexed by ψ.

• Assume the experiment design φ is fixed.

(41)

Finding exclusion contours

• Do not generate Monte Carlo a priori; generate it on demand, only where it is relevant, i.e., where it is uncertain whether the value of the test statistic (e.g., CLs) lies above or below the threshold.

• Embed and instrument the full experimental pipeline through RECAST.

• Drastically more efficient use of computing resources.
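A minimal sketch of the on-demand acquisition idea, using a Gaussian process surrogate over a one-dimensional theory parameter; `test_statistic` is a hypothetical stand-in for the expensive Monte Carlo + RECAST evaluation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
THRESHOLD = 0.05                            # e.g., a CLs exclusion threshold

def test_statistic(psi):
    # Stand-in for an expensive Monte Carlo + RECAST evaluation.
    return 1.0 / (1.0 + np.exp(4.0 * (psi - 2.0))) + 0.02 * rng.standard_normal()

X = list(np.linspace(0.0, 4.0, 4))          # a few seed evaluations
Y = [test_statistic(x) for x in X]

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=RBF(1.0), alpha=0.02**2).fit(
        np.array(X).reshape(-1, 1), Y)
    grid = np.linspace(0.0, 4.0, 401).reshape(-1, 1)
    mean, std = gp.predict(grid, return_std=True)
    # Acquire where above/below-threshold is most ambiguous.
    score = np.abs(mean - THRESHOLD) / (std + 1e-9)
    x_next = float(grid[np.argmin(score), 0])
    X.append(x_next)
    Y.append(test_statistic(x_next))

print(sorted(X)[-5:])   # queries cluster near the exclusion boundary
```

Queries concentrate where the surrogate is unsure on which side of the threshold the test statistic falls, which is exactly where new Monte Carlo is informative.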

(100)

AI recipe for understanding Nature

Hypothesis(ψ) → Experiment(φ) → Conclusion

(101)

Summary

• Likelihood-free inference algorithms are modern generalizations of histogram-based inference.

• Very active field of research, which connects many related problems and algorithms.

• Particle physics provides a unique testbed for the ambitious development of ML/AI methods, as enabled by the precise mechanistic understanding of physical processes.
