Knowledge-based Question Answering for DIYers

Doo Soon Kim*, Zhe Feng, and Lin Zhao
Bosch Research, Sunnyvale, CA 94085, USA

Abstract. DIY (Do-It-Yourself) requires extensive knowledge, such as the usage of tools, the properties of materials, and the procedures of activities. Most DIYers use online search to find information, but it is usually time-consuming and challenging for novice DIYers to understand the retrieved results and later apply them to their individual DIY tasks. In this work, we present a Question Answering (QA) system that addresses DIYers' specific needs. Its core component is a knowledge base (KB) which contains a vast amount of domain knowledge encoded in a knowledge graph. The system is able to explain how its answers are derived through its reasoning process. Our user study shows that the QA system addresses DIYers' needs more effectively than web search.

1 Problem

Our goal is to build a QA system for the home improvement DIY projects available at Bosch-Do-It1. Table 1 shows common DIY question types, which were collected through a user study. Given a question, the QA system should provide not only an answer but also an explanation of how the answer is derived. This explanation capability is particularly important for DIY questions, which are generally non-factoid.

2 The DIY QA System

Fig. 1 shows the overall architecture of our system. The key component is the KB, which is represented as a graph using RDF and stored in Stardog2, a semantic web platform.

The ontology defines a taxonomy of about 300 entity concepts and 150 action concepts. The entity concepts include common DIY objects such as tools (e.g., JIGSAW), accessories (e.g., DRILL-BIT), and materials (e.g., WOOD-SCREW). The action concepts include DIY actions such as SAWING and GLUING, as well as tool-related actions such as REPLACING-BLADE. Each concept is associated with domain knowledge such as its definition and other attributes. Each DIY project is then represented using these concepts. Specifically, a project representation consists of the required entities and the action structure: the required entities indicate the entities needed in the project along with their specification information, while the action structure describes a hierarchical structure of sub-actions representing the decomposition of the project steps.

Our KB also contains product knowledge, which is used to recommend tools or accessories suitable for a project. The different types of knowledge are inter-connected and can therefore be combined to answer more complex questions.
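
To make this project representation concrete, the sketch below encodes a toy fragment of such a KB with rdflib. It is only an illustration of the description above: the namespace, the concept names, and the predicates (diy:requiresEntity, diy:hasSubAction, diy:hasLength) are our own assumptions, not the actual Bosch ontology.

```python
# A minimal sketch of the KB's project representation, using rdflib.
# All names and predicates here are hypothetical illustrations of the
# paper's description, not the system's actual schema.
from rdflib import Graph, Literal, Namespace, RDF

DIY = Namespace("http://example.org/diy#")
g = Graph()
g.bind("diy", DIY)

# Taxonomy of concepts: entity concepts (tools, accessories, materials).
g.add((DIY.Jigsaw, RDF.type, DIY.Tool))
g.add((DIY.DrillBit, RDF.type, DIY.Accessory))
g.add((DIY.WoodScrew, RDF.type, DIY.Material))

# A DIY project with its required entities and their specifications.
g.add((DIY.BirdhouseProject, RDF.type, DIY.Project))
g.add((DIY.BirdhouseProject, DIY.requiresEntity, DIY.Jigsaw))
g.add((DIY.BirdhouseProject, DIY.requiresEntity, DIY.DrillBit))
g.add((DIY.DrillBit, DIY.hasLength, Literal("3 mm")))

# Hierarchical action structure: the project decomposes into sub-actions.
g.add((DIY.BirdhouseProject, DIY.hasSubAction, DIY.SawingStep))
g.add((DIY.SawingStep, RDF.type, DIY.Sawing))
g.add((DIY.BirdhouseProject, DIY.hasSubAction, DIY.GluingStep))
g.add((DIY.GluingStep, RDF.type, DIY.Gluing))
```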

* The first author is now affiliated with Adobe Research (contact: dkim@adobe.com).

1. http://www.bosch-do-it.com/

2. http://stardog.com


Question Type    | Sub-question type                         | Example
-----------------|-------------------------------------------|----------------------------------------------------------
Project Question | required entities / properties            | What power tools do I need in the project? /
                 |                                           | What is the length of the drill bit needed in the project?
                 | alternative entities                      | Can I use a circular saw instead of a jigsaw in step 2?
                 | time / difficulty / cost                  | How long does the project take?
                 | explanation of actions                    | Can you explain the sawing step in more detail?
                 | specific location of actions              | Where should it be screwed?
                 | alternative actions                       | Are there other options instead of pre-drilling?
Domain Question  | definition of tool / accessory / material | What is a jigsaw?
                 | related action                            | What can I do with a jigsaw?
                 | tips                                      | Is there any safety tip for using a jigsaw?
                 | structural info                           | What does a jigsaw look like?
                 | comparison                                | How does a jigsaw differ from a circular saw?

Table 1: Selected types of DIY questions and their examples

[Fig. 1: Architecture of Our QA System. A pipeline of Question Understanding, Grounding, Reasoning, and Answer Generation operates over the KB (Taxonomy of Concepts, Project KR, Domain KR, Product KR, User/Context KR). A natural language question is converted into a structured question, then a grounded question, then a structured answer, and finally a multi-modal natural answer.]
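
The end-to-end flow of Fig. 1 can be read as a simple function pipeline. The runnable toy below is a schematic illustration only; every function name and stub body is invented for exposition and stands in for the system's actual components.

```python
# A schematic, runnable toy of the Fig. 1 pipeline. Every name here is
# a hypothetical placeholder for the system's actual components.
def understand(nl_question: str) -> dict:
    # Question Understanding: parse NL into a structured question.
    return {"type": "required_entities", "text": nl_question}

def ground(structured_q: dict, kb: dict) -> dict:
    # Grounding: link question mentions to KB concepts.
    structured_q["project"] = kb.get("current_project")
    return structured_q

def reason(grounded_q: dict, kb: dict) -> list:
    # Reasoning: a trivial lookup here; the real system runs SPARQL
    # queries or advanced AI reasoning over the knowledge graph.
    return kb["required_entities"].get(grounded_q["project"], [])

def generate(structured_a: list) -> str:
    # Answer Generation: render the structured answer in natural language.
    return "You need: " + ", ".join(structured_a)

kb = {"current_project": "birdhouse",
      "required_entities": {"birdhouse": ["jigsaw", "drill bit", "wood screws"]}}
print(generate(reason(ground(understand("What power tools do I need?"), kb), kb)))
```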

The KB is then used by the reasoner to derive an answer. For most question types, our system converts the question into a SPARQL query using in-house NLP solutions and executes it against the knowledge graph. For some complex question types (e.g., 'alternative entities' in Table 1), we use advanced AI reasoning techniques3.
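
As an illustration of this conversion step, the sketch below shows how a question such as "What tools do I need in the project?" might be rendered as SPARQL and executed with rdflib over a small graph like the one sketched earlier. The schema is our own assumption; the system's in-house NLP components and Stardog deployment are not shown.

```python
# A minimal sketch of the SPARQL-based answering step, using rdflib.
# The schema (diy:requiresEntity, diy:Tool) is the same hypothetical
# one sketched above, not the system's actual ontology.
from rdflib import Graph, Namespace, RDF

DIY = Namespace("http://example.org/diy#")
g = Graph()
g.add((DIY.Jigsaw, RDF.type, DIY.Tool))
g.add((DIY.DrillBit, RDF.type, DIY.Accessory))
g.add((DIY.BirdhouseProject, DIY.requiresEntity, DIY.Jigsaw))
g.add((DIY.BirdhouseProject, DIY.requiresEntity, DIY.DrillBit))

# "What tools do I need in the project?" rendered as a SPARQL query.
query = """
PREFIX diy: <http://example.org/diy#>
SELECT ?tool WHERE {
    diy:BirdhouseProject diy:requiresEntity ?tool .
    ?tool a diy:Tool .
}
"""
for row in g.query(query):
    print(row.tool)  # -> http://example.org/diy#Jigsaw
```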

3 Evaluation and Discussion

In a pilot study, we compared our system against online search, a common method of information seeking. Specifically, we conducted a user study in which 20 users were given a sample DIY project along with 5 questions and were asked to find the answers using online search and our QA system in separate sessions. With online search, the average time to find an answer was 3.8 minutes, whereas our QA system provided an answer instantly. The users' satisfaction rate with our system was also significantly higher than with online search. In the talk, we also want to share the lessons we learned from this project: pipelined vs. end-to-end architecture, the importance of explainability, and the knowledge acquisition bottleneck.

3. See Wang, Y., Lee, J., Kim, D.S.: A logic-based approach to answering questions about alternatives in DIY domains. Innovative Applications of Artificial Intelligence (IAAI), 2017.
