
(1)


Reinforcement learning for intensional data management

Pierre Senellart

École normale supérieure, Research University Paris

9 March 2018, SMiLe’18

(2)


Intensional data is everywhere

Lots of data sources can be seen as intensional: accessing all the data in the source (in extension) is impossible or very costly, but it is possible to access the data through views, with some access constraints, associated with some access cost.

Indexes over regular data sources

Deep Web sources: Web forms, Web services

The Web or social networks as partial graphs that can be expanded by crawling

Outcome of complex automated processes: information extraction, natural language analysis, machine learning, ontology matching

Crowd data: (very) partial views of the world

Logical consequences of facts, costly to compute

(3–10)

[Figure, built up incrementally over slides 3–10: a cycle diagram of intensional data management. Nodes: knowledge need, knowledge acquisition plan, uncertain access result, current knowledge of the world. Edges: formulation, optimization, intensional access, knowledge update, answer & explanation. Side inputs: modeling priors, structured source profiles.]


What this talk is about

A personal perspective on how to approach intensional data management

Various applications of intensional data management and how we solved them (my own research and my students’)

Main tool used: reinforcement learning (bandits, Markov decision processes)

(12)


Outline

Intensional data management

Reinforcement learning

Applications

Conclusion

(13)


Reinforcement learning

Deals with agents learning to interact with a partially unknown environment

Agents can perform actions, resulting in rewards (or penalties) and in a possible state change

Agents learn about the world by interacting with it

Goal: find the right sequence of actions that maximizes overall reward, or minimizes overall penalty

Classic tradeoff: exploration vs exploitation

(14)


Multi-armed bandits

Stateless model

k > 1 different actions, with unknown rewards

It is often assumed that the rewards are drawn from a parametrized probability distribution (Bernoulli, Gaussian, exponential, etc.)
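As a concrete illustration (mine, not from the talk), a minimal ε-greedy strategy for a Bernoulli bandit; the arm probabilities and ε below are made-up values:

import random

def epsilon_greedy_bandit(arm_probs, n_rounds=1000, epsilon=0.1):
    k = len(arm_probs)
    counts = [0] * k        # number of pulls per arm
    means = [0.0] * k       # empirical mean reward per arm
    total = 0
    for _ in range(n_rounds):
        if random.random() < epsilon:
            arm = random.randrange(k)                    # explore
        else:
            arm = max(range(k), key=lambda a: means[a])  # exploit
        reward = 1 if random.random() < arm_probs[arm] else 0  # Bernoulli reward
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]       # incremental mean
        total += reward
    return means, total

# three hypothetical arms, success probabilities unknown to the agent
print(epsilon_greedy_bandit([0.2, 0.5, 0.7]))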

(15)


Markov decision process (MDP)

Finite set of states

In each state, actions lead:

to a state change

to a reward

Depending on the setting:

state changes may be deterministic or probabilistic

state changes may be known or unknown (to be learned)

rewards may be known or unknown (to be learned)

current state may even be unknown! (POMDP)
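When transitions and rewards are known, optimal state values can be computed directly; a toy value-iteration sketch (illustrative only, with an invented two-state deterministic MDP):

def value_iteration(states, actions, transition, reward, gamma=0.9, eps=1e-6):
    # transition[s][a]: next state (deterministic here, for simplicity)
    # reward[s][a]: immediate reward
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(reward[s][a] + gamma * V[transition[s][a]] for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

# invented example: two states, two actions
states, actions = ["idle", "busy"], ["wait", "act"]
transition = {"idle": {"wait": "idle", "act": "busy"},
              "busy": {"wait": "idle", "act": "busy"}}
reward = {"idle": {"wait": 0, "act": 1},
          "busy": {"wait": 0, "act": 2}}
print(value_iteration(states, actions, transition, reward))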

(16)


Outline

Intensional data management

Reinforcement learning

Applications

Conclusion

(17–19)

Adaptive focused crawling (stateless)

[Gouriten et al., 2014]

[Figure, built up incrementally over slides 17–19: an example graph being crawled; each node carries a score (values 0–5 in the picture), revealed only once the node is crawled.]

Problem: Efficiently crawl nodes in a graph such that total score is high

Challenge: The score of a node is unknown till it is crawled

Methodology: Use various predictors of node scores, and adaptively select the best one so far with multi-armed bandits
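A rough sketch of the bandit-over-predictors idea (my illustration, not the authors' code; the crawl callback and predictor functions are hypothetical, and bookkeeping of already-crawled nodes is omitted):

import random

def bandit_crawl(frontier, predictors, crawl, n_steps=100, epsilon=0.1):
    # one bandit arm per score predictor; the reward of an arm is the
    # true score of the node its predictor chose to crawl
    k = len(predictors)
    counts, means = [0] * k, [0.0] * k
    total = 0
    for _ in range(n_steps):
        if not frontier:
            break
        if random.random() < epsilon:
            arm = random.randrange(k)                    # explore a predictor
        else:
            arm = max(range(k), key=lambda a: means[a])  # exploit the best so far
        node = max(frontier, key=predictors[arm])        # predictor ranks the frontier
        frontier.discard(node)
        score, neighbors = crawl(node)  # hypothetical: true score + new nodes (a set)
        frontier |= neighbors - {node}
        counts[arm] += 1
        means[arm] += (score - means[arm]) / counts[arm]
        total += score
    return total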

(20)


Adaptive focused crawling (stateful)

[Han et al., 2018]


Problem: Efficiently crawl nodes in a graph such that total score is high, taking into account the currently crawled graph

Challenge: Huge state space

Methodology: MDP, clustering of the state space, together with a linear approximation of value functions
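For intuition, a minimal semi-gradient TD(0) update with a linear value function (illustrative; the feature map and step size are invented, and the paper's state clustering is not shown):

def td0_update(weights, features, reward, next_features, alpha=0.01, gamma=0.9):
    # V(s) is approximated as the dot product of a weight vector
    # with a feature vector describing the (clustered) state
    v = sum(w * f for w, f in zip(weights, features))
    v_next = sum(w * f for w, f in zip(weights, next_features))
    td_error = reward + gamma * v_next - v
    # semi-gradient step: move weights along the features of the current state
    return [w + alpha * td_error * f for w, f in zip(weights, features)]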

(21)


Optimizing crowd queries for data mining

[Amsterdamer et al., 2013a,b]


Problem: To find patterns in the crowd, what is the best question to ask the crowd next?

Challenge: No a priori information on crowd data

Methodology: Model all possible questions as actions in a multi-armed bandit setting, and find a trade-off between exploration and exploitation

(22)


Online influence maximization

[Lei et al., 2015]


[Figure: the OIM solution framework. From a social graph and an action log (user, action, date), a weighted uncertain graph is derived; each trial then proceeds in three steps: (1) sampling, (2) activation sequences observed in the real world or a simulator, (3) update.]

Problem: Run influence campaigns in social networks, maximizing the number of influenced nodes

Challenge: Influence probabilities are unknown

Methodology: Build a model of influence probabilities and focus on influential nodes, with an exploration/exploitation trade-off
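One way to make the loop concrete is Thompson sampling over edge probabilities (my own simplified variant; the paper uses confidence-bound-based strategies, and select_seeds and run_campaign are hypothetical helpers):

import random

def oim_round(posteriors, select_seeds, run_campaign, k=5):
    # posteriors[e] = (alpha, beta): Beta parameters for edge e's
    # unknown influence probability
    sampled = {e: random.betavariate(a, b) for e, (a, b) in posteriors.items()}
    seeds = select_seeds(sampled, k)   # hypothetical: greedy influence maximization
                                       # on the sampled deterministic weights
    feedback = run_campaign(seeds)     # hypothetical: {edge: fired?} for edges tried
    for e, fired in feedback.items():  # Bayesian update of each tried edge
        a, b = posteriors[e]
        posteriors[e] = (a + 1, b) if fired else (a, b + 1)
    return seeds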

(23)


Routing of Autonomous Taxis

[Han et al., 2016]


Problem: Route a taxi to maximize its profit

Challenge: Real-world data, no a priori model of the world

Methodology: MDP, with standard Q-learning and a customized exploration/exploitation strategy
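For reference, standard tabular Q-learning in minimal form (illustrative; the env interface, hyperparameters, and state/action encoding are placeholders, not the paper's setup):

import random
from collections import defaultdict

def q_learning(env, n_episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    Q = defaultdict(float)   # Q[(state, action)], implicitly initialized to 0
    for _ in range(n_episodes):
        state, done = env.reset(), False
        while not done:
            actions = env.actions(state)
            if random.random() < epsilon:
                action = random.choice(actions)                     # explore
            else:
                action = max(actions, key=lambda a: Q[(state, a)])  # exploit
            next_state, reward, done = env.step(action)
            best_next = max((Q[(next_state, a)] for a in env.actions(next_state)),
                            default=0.0)
            # the Q-learning update rule
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])
            state = next_state
    return Q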

(24)


Cost-Model–Free Database Tuning

[Basu et al., 2015, 2016]


Problem: Automatically find which indexes to create in a database for optimal performance

Challenge: The workload and cost model are unknown

Methodology: Model database tuning as a Markov decision process and use reinforcement learning techniques to iteratively learn a cost model and workload characteristics
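A rough sketch of how such a tuning MDP could be set up (my illustration; the papers' actual state and reward definitions differ in detail):

def tuning_actions(current_indexes, candidate_indexes):
    # actions: create a missing candidate index, or drop an existing one;
    # the state is simply the current set of indexes (both arguments are sets)
    creates = [("create", i) for i in candidate_indexes - current_indexes]
    drops = [("drop", i) for i in current_indexes]
    return creates + drops

def tuning_reward(workload_cost, change_cost):
    # penalty = cost of running the observed workload under the current
    # indexes, plus the cost of the index change just applied
    return -(workload_cost + change_cost)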

(25)


Outline

Intensional data management

Reinforcement learning

Applications

Conclusion

(26)


In brief

Intensional data management arises in a large variety of settings, whenever there is a cost to accessing data

Reinforcement learning (bandits, MDPs) is a key tool in dealing with such data

Various complications in data management settings: huge state space, no a priori model for rewards/penalties, delayed rewards...

Rich field of applications for RL research!

(27)


Thank you.

(28)

Bibliography I

Yael Amsterdamer, Yael Grossman, Tova Milo, and Pierre Senellart. Crowd mining. In Proc. SIGMOD, pages 241–252, New York, USA, June 2013a.

Yael Amsterdamer, Yael Grossman, Tova Milo, and Pierre Senellart. CrowdMiner: Mining association rules from the crowd. In Proc. VLDB, Riva del Garda, Italy, August 2013b. Demonstration.

Debabrota Basu, Qian Lin, Weidong Chen, Hoang Tam Vo, Zihong Yuan, Pierre Senellart, and Stéphane Bressan. Cost-model oblivious database tuning with reinforcement learning. In Proc. DEXA, pages 253–268, Valencia, Spain, September 2015.

(29)

Bibliography II

Debabrota Basu, Qian Lin, Weidong Chen, Hoang Tam Vo, Zihong Yuan, Pierre Senellart, and Stéphane Bressan. Regularized cost-model oblivious database tuning with reinforcement learning. Transactions on Large-Scale Data and Knowledge-Centered Systems, 28:96–132, 2016.

Georges Gouriten, Silviu Maniu, and Pierre Senellart. Scalable, generic, and adaptive systems for focused crawling. In Proc. Hypertext, pages 35–45, Santiago, Chile, September 2014. Douglas Engelbart Best Paper Award.

Miyoung Han, Pierre Senellart, Stéphane Bressan, and Huayu Wu. Routing an autonomous taxi with reinforcement learning. In Proc. CIKM, Indianapolis, USA, October 2016. Industry track, short paper.

Miyoung Han, Pierre Senellart, and Pierre-Henri Wuillemin. Focused crawling through reinforcement learning. In Proc. ICWE, June 2018.

(30)

Bibliography III

Siyu Lei, Silviu Maniu, Luyi Mo, Reynold Cheng, and Pierre Senellart. Online influence maximization. In Proc. KDD, pages 645–654, Sydney, Australia, August 2015.
