
Antoine Amarilli, Yael Amsterdamer, Tova Milo: Taxonomy-Based Crowd Mining


(1) Taxonomy-Based Crowd Mining
Antoine Amarilli (1,2), Yael Amsterdamer (1), Tova Milo (1)
1: Tel Aviv University, Tel Aviv, Israel. 2: École normale supérieure, Paris, France.
Outline: Background, Preliminaries, Crowd complexity, Computational complexity, Conclusion, Bonus.

(2) Data mining
Data mining: discovering interesting patterns in large databases.
Database: a (multi)set of transactions. Transaction: a set of items (an itemset).
A simple kind of pattern to identify is the frequent itemset.
D = { {beer, diapers}, {beer, bread, butter}, {beer, bread, diapers}, {salad, tomato} }
An itemset is frequent if it occurs in at least Θ = 50% of the transactions.
{salad} is not frequent; {beer, diapers} is frequent, and thus {beer} is also frequent.
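These definitions can be sketched with a few lines of code; the database and threshold come from the slide, while the helper names are illustrative:

```python
from itertools import combinations

# Toy transaction database from the slide.
D = [
    {"beer", "diapers"},
    {"beer", "bread", "butter"},
    {"beer", "bread", "diapers"},
    {"salad", "tomato"},
]
THETA = 0.5  # frequency threshold: 50% of the transactions

def support(itemset, db):
    """Fraction of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in db) / len(db)

def is_frequent(itemset, db, theta=THETA):
    return support(itemset, db) >= theta

def frequent_itemsets(db, theta=THETA):
    """Naively enumerate all frequent itemsets over the items of db."""
    items = sorted(set().union(*db))
    result = []
    for k in range(1, len(items) + 1):
        for combo in combinations(items, k):
            if is_frequent(set(combo), db, theta):
                result.append(set(combo))
    return result

print(is_frequent({"beer", "diapers"}, D))  # True: 2 of 4 transactions
print(is_frequent({"salad"}, D))            # False: 1 of 4 transactions
```

Note how monotonicity shows up already here: any superset of a transaction hit is a hit for the subset, so {beer} is at least as frequent as {beer, diapers}.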

(3) Human knowledge mining
The standard data mining assumption is that the data is materialized in a database. Sometimes, no such database exists!
Leisure activities: D = { {chess, saturday, garden}, {cinema, friday, evening}, ... }
Traditional medicine: D = { {hangover, coffee}, {cough, honey}, ... }
This data only exists in the minds of people!

(4) Harvesting this data
We cannot collect such data in a centralized database and apply classical data mining, because:
1. It is impractical to ask all users to surrender their data. ("Let's ask everyone to give the details of all their activities in the last three months.")
2. People do not remember the information. ("What were you doing on July 16th, 2013?")
However, people do remember summaries that we can access: "Do you often play tennis on weekends?"
To find out whether an itemset is frequent, we can simply ask people directly.

(5) Crowdsourcing
Crowdsourcing: solving hard problems through elementary queries to a crowd of users.
To find out whether an itemset is frequent with the crowd:
1. Draw a sample of users from the crowd (black box).
2. Ask each user: is this itemset frequent? ("Do you often play tennis on weekends?")
3. Corroborate the answers to eliminate bad ones (black box; see existing research).
4. Reward the users (usually a monetary incentive, depending on the platform).
⇒ An oracle that takes an itemset and decides whether it is frequent by asking crowd queries.
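The four steps above can be sketched as follows; the `make_user` model, the majority-vote corroboration, and the sample size are simplifying assumptions standing in for the black boxes mentioned on this slide:

```python
import random

def freq_oracle(itemset, crowd, sample_size=5):
    """Hypothetical crowd oracle: sample users, ask each whether the
    itemset is frequent for them, and corroborate by majority vote.
    The crowd is modeled as a list of answer functions (black box)."""
    sample = random.sample(crowd, min(sample_size, len(crowd)))
    votes = [user(itemset) for user in sample]   # elementary queries
    return sum(votes) > len(votes) / 2           # corroboration step

# Toy crowd: each user answers according to a private set of habits.
def make_user(habits):
    return lambda itemset: itemset <= habits

crowd = [make_user({"tennis", "weekend"}) for _ in range(4)] + \
        [make_user({"chess"})]

print(freq_oracle({"tennis", "weekend"}, crowd))  # True (4 of 5 votes)
```

The reward step has no computational counterpart here; on a real platform it is handled by the crowdsourcing service.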

(6) Taxonomies
Having a taxonomy over the items can save us work!
(Figure: item is the root, with children sickness (cough, fever, back pain) and sport (tennis, running, biking).)
If {sickness, sport} is infrequent, then all itemsets such as {cough, biking} are infrequent too. Without the taxonomy, we would need to test all combinations!
The taxonomy also lets us avoid redundant itemsets like {sport, tennis}.
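The pruning argument can be made concrete with a small sketch; the child-to-parent encoding and the helper names are illustrative, not from the paper:

```python
# Toy taxonomy from the slide, encoded child -> parent ("is-A").
PARENT = {
    "sickness": "item", "sport": "item",
    "cough": "sickness", "fever": "sickness", "back pain": "sickness",
    "tennis": "sport", "running": "sport", "biking": "sport",
}

def ancestors(item):
    """All items more general than `item`, including itself."""
    result = {item}
    while item in PARENT:
        item = PARENT[item]
        result.add(item)
    return result

def more_general_item(i, j):
    """True if item i is more general than (or equal to) item j."""
    return i in ancestors(j)

# Monotonicity-based pruning: {sickness, sport} generalizes
# {cough, biking}, so if the former is infrequent, so is the latter.
print(more_general_item("sickness", "cough"))   # True
print(more_general_item("tennis", "running"))   # False (siblings)
```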

(7) Cost
How do we evaluate the performance of a strategy that identifies the frequent itemsets?
Crowd complexity: the number of itemsets we ask about (monetary cost, latency, ...).
Computational complexity: the cost of computing the next question to ask.
There is a tradeoff between the two: asking random questions is computationally inexpensive but has bad crowd complexity, while asking clever questions to achieve optimal crowd complexity is computationally expensive.

(8) The problem
We can now describe the problem. We have:
- A known item domain I (a set of items).
- A known taxonomy Ψ on I (an is-A relation; a partial order).
- A crowd oracle freq deciding whether an itemset is frequent.
We want to determine, for every itemset, whether it is frequent or infrequent, i.e., to learn freq exactly, while achieving a good balance between crowd complexity and computational complexity.
What is a good interactive algorithm for this problem?

(9) Table of contents
1. Background
2. Preliminaries
3. Crowd complexity
4. Computational complexity
5. Conclusion

(10) Itemset taxonomy
Itemsets I(Ψ): the sets of pairwise incomparable items (e.g. {coffee, tennis}, but not {coffee, drink}).
If an itemset is frequent, then its subsets are also frequent, and so are the itemsets obtained by replacing its items with more general ones.
We define an order relation ≤ on itemsets: A ≤ B means "A is more general than B". Formally, ∀i ∈ A, ∃j ∈ B such that i is more general than j.
freq is monotone: if A ≤ B and B is frequent, then A is frequent too.
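A minimal implementation of the order ≤, over the chess/drink taxonomy of the example slides and assuming a child-to-parent encoding (the helper names are illustrative):

```python
# Toy taxonomy: chess and drink under item; coffee and tea under drink.
PARENT = {"chess": "item", "drink": "item",
          "coffee": "drink", "tea": "drink"}

def more_general_item(i, j):
    """True if item i is an ancestor of (or equal to) item j."""
    while True:
        if i == j:
            return True
        if j not in PARENT:
            return False
        j = PARENT[j]

def more_general(A, B):
    """A ≤ B: every item of A generalizes some item of B."""
    return all(any(more_general_item(i, j) for j in B) for i in A)

# Monotonicity of freq: if more_general(A, B) holds and B is frequent,
# then A must be frequent as well.
print(more_general({"chess", "drink"}, {"chess", "coffee"}))  # True
print(more_general({"coffee"}, {"tea"}))                      # False
```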

(11) Itemset taxonomy example
(Figure: the taxonomy Ψ over nil, item, chess, drink, coffee, tea; the itemset taxonomy I(Ψ) with nodes such as {chess}, {drink}, {chess, drink}, {chess, coffee}, {coffee, tea}, {chess, coffee, tea}; and the solution taxonomy S(Ψ), whose nodes are the antichains of I(Ψ).)

(12) Maximal frequent itemsets
Maximal frequent itemset (MFI): a frequent itemset with no frequent descendants. Dually, a minimal infrequent itemset (MII) is an infrequent itemset with no infrequent ancestors.
The MFIs (or the MIIs) concisely represent freq.
⇒ We can study complexity as a function of the size of the output.
(Figure: the itemset taxonomy over chess, drink, coffee, tea.)

(13) Solution taxonomy
Conversely, we can show that any set of pairwise incomparable itemsets is a possible MFI representation. Hence, the set of all possible solutions has the same structure as the "itemsets" of the itemset taxonomy I(Ψ).
⇒ We call this the solution taxonomy S(Ψ) = I(I(Ψ)).
Identifying the freq predicate amounts to finding the correct node in S(Ψ) through itemset frequency queries.

(14) Solution taxonomy example
(Figure: the same taxonomy Ψ, itemset taxonomy I(Ψ), and solution taxonomy S(Ψ) as on slide 11.)


(16) Lower bound
Each query yields one bit of information, so we get an information-theoretic lower bound: we need at least Ω(log |S(Ψ)|) queries.
This is bad in general, because |S(Ψ)| can be doubly exponential in Ψ. As a function of the original taxonomy Ψ, the bound can be written Ω(2^width[Ψ] / √width[Ψ]).
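The bound can be checked by brute force on a toy poset standing in for I(Ψ); the poset and the helper names below are illustrative, not from the paper:

```python
from itertools import combinations
from math import ceil, log2

# Toy poset: elements 0 < 1 < 2 form a chain, and 3 is incomparable.
elements = [0, 1, 2, 3]
def leq(a, b):
    return a == b or (a < b and a != 3 and b != 3)

def count_antichains(elems, leq):
    """Brute-force count of antichains (sets of pairwise incomparable
    elements). Each antichain is one possible set of MFIs, so this
    computes |S(Ψ)| for this poset."""
    count = 0
    for k in range(len(elems) + 1):
        for subset in combinations(elems, k):
            if all(not leq(a, b) and not leq(b, a)
                   for a, b in combinations(subset, 2)):
                count += 1
    return count

n = count_antichains(elements, leq)
# Any strategy needs at least ceil(log2 |S(Ψ)|) yes/no crowd queries.
print(n, ceil(log2(n)))  # 8 3
```

Here the chain contributes 4 antichains (the empty set and each singleton), and the isolated element doubles that, giving 8 possible solutions, hence at least 3 queries.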

(17) Upper bound
We can achieve the information-theoretic bound if there is always an unknown itemset that is frequent in about half of the possible solutions.
A result from order theory shows that there is a constant δ0 ≈ 1/5 such that some element always achieves a split of at least δ0.
Hence, the previous bound is tight: we need Θ(log |S(Ψ)|) queries.
(Figure: a chain nil, a1, ..., a5, whose elements are frequent in 6/7, 5/7, ..., 1/7 of the possible solutions.)

(18) Lower bound, MFI/MII
To describe the solution, we need the MFIs or the MIIs. However, we need to query both the MFIs and the MIIs to identify the result uniquely: Ω(|MFI| + |MII|) queries. We can have |MFI| = Ω(2^|MII|) and vice-versa.
This bound is not tight (e.g., for a chain).
(Figure: a chain nil, a1, ..., a5.)

(19) Upper bound, MFI/MII
There is an explicit algorithm to find a new MFI or MII in at most |I| queries. Intuition: starting with any frequent itemset, add items until you cannot add any more without becoming infrequent.
The total number of queries is thus O(|I| · (|MFI| + |MII|)).
(Figure: the itemset taxonomy over chess, drink, coffee, tea.)
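The greedy intuition above can be sketched as follows; `children`, the toy oracle `freq`, and the flat item domain are simplifying assumptions, not the paper's construction:

```python
def find_mfi(start, children, freq):
    """From a known frequent itemset, specialize greedily until no
    specialization is frequent: the result is a maximal frequent
    itemset. Each call to freq is one crowd query."""
    current = start
    assert freq(current)
    while True:
        for child in children(current):
            if freq(child):          # one crowd query
                current = child
                break
        else:
            return current           # no frequent specialization: an MFI

# Toy setting: flat taxonomy over three items, so specializing an
# itemset just means adding one missing item.
ITEMS = ["a", "b", "c"]

def children(itemset):
    return [itemset | {i} for i in ITEMS if i not in itemset]

def freq(itemset):
    """Toy oracle: exactly the subsets of {a, b} are frequent."""
    return itemset <= {"a", "b"}

print(find_mfi(set(), children, freq))  # {'a', 'b'} (set order may vary)
```

A symmetric descent from an infrequent itemset toward more general ones yields a new MII instead.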


(21) Hardness for standard (input) complexity
We want an unknown itemset of I(Ψ) that is frequent for about half of the possible solutions of S(Ψ). This is related to counting the antichains of I(Ψ), which is FP^#P-complete. Hence, we argue that finding the best-split element in I(Ψ) is FP^#P-hard (as a function of I(Ψ), which can be exponential in Ψ; of course, it is easy if S(Ψ) is materialized).
Intuition: determine the number of antichains of a poset by comparing it with a known poset, using an oracle for the best split to decide the comparison.
Our proof works for restricted itemsets (see later); the obstacle in the general case is that I(Ψ) has a constrained structure (a distributive lattice).

(22) Hardness for output complexity
When running the incremental algorithm, we can materialize I(Ψ), but this may be exponential in Ψ. Do we need to?
Problem EQ from Boolean function learning: decide whether our current MFIs and MIIs cover all possible itemsets. Reduction: a polynomial algorithm to learn freq entails a polynomial algorithm for EQ, which is not known to be in PTIME. (The exact complexity is open.)
(Figure: the itemset taxonomy over chess, drink, coffee, tea.)


(24) Summary and further work
We have studied the crowd and computational complexity of crowd mining under a taxonomy.
Further work: improve the bounds and close the gaps. More specifically:
- A tractable way to find reasonably good split elements in arbitrary posets (or distributive lattices)?
- An experimental comparison of various heuristics to choose a question (chain partitioning, random, best split, etc.). Unformalized intuition: most itemsets are infrequent.
- Integrating uncertainty (a black box for now).

(25) Thanks for your attention!

(26) Greedy algorithms
Querying an element of the chain may remove fewer than 1/2 of the possible solutions, whereas querying the isolated element b removes exactly 1/2 of the solutions. However, querying b classifies far fewer itemsets.
⇒ Classifying many itemsets is not the same as eliminating many solutions. Finding the greedy-best-split item is FP^#P-hard.
(Figure: a chain nil, a1, ..., a5, plus an isolated element b.)

(27) Restricted itemsets
Asking about large itemsets is irrelevant: "Do you often go cycling and running while drinking coffee and having lunch with orange juice on alternate Wednesdays?"
If the itemset size is bounded by a constant, I(Ψ) has tractable size.
⇒ The crowd complexity Θ(log |S(Ψ)|) becomes tractable too.

(28) Chain partitioning
The optimal strategy for chain taxonomies is binary search. We can determine a chain decomposition of the itemset taxonomy and perform binary searches on the chains.
This has optimal crowd complexity for a chain; the performance in general is unclear. The computational complexity is polynomial in the size of I(Ψ) (which is still exponential in Ψ).
(Figure: a chain nil, a1, ..., a5.)
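The binary search on a single chain can be sketched as follows, assuming the oracle `freq` is monotone along the chain (most general element first); the names are illustrative:

```python
def frontier_on_chain(chain, freq):
    """Binary search along a chain of itemsets, ordered most general
    first, for the last frequent element. Since freq is monotone, the
    frequent elements form a prefix of the chain, and O(log |chain|)
    crowd queries locate its boundary."""
    lo, hi = -1, len(chain)          # invariant: chain[lo] frequent, chain[hi] not
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if freq(chain[mid]):         # one crowd query
            lo = mid
        else:
            hi = mid
    return lo                        # index of last frequent element, -1 if none

# Toy chain {a} ≤ {a, b} ≤ {a, b, c}; only the first two are frequent.
chain = [{"a"}, {"a", "b"}, {"a", "b", "c"}]
print(frontier_on_chain(chain, lambda s: s <= {"a", "b"}))  # 1
```

Running this search on every chain of a chain decomposition classifies all itemsets, which is the strategy sketched on this slide.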

