An Ontology-based Approach to Adaptive Data Processing

Haokun Chen1,3, Xiaowang Zhang1,3,∗, and Zhiyong Feng2,3

1 School of Computer Science and Technology, Tianjin University, Tianjin, China

2 School of Computer Software, Tianjin University, Tianjin, China

3 Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin, China

∗ Corresponding author.

Abstract. In this paper, we present an ontology-based approach to generating workflows, which are the core of adaptive data processing, by taking advantage of ontologies to characterize implicit relations among data-processing tasks. Moreover, compared with the manual configurations of current approaches, ontology reasoning can automatically infer preference orders of tasks and thus overcome the limitations of manual configuration.

Experimental results show that our proposal is effective and efficient.

1 Introduction

Adaptive data processing, as an advanced form of automatic data processing, lets a system select models and parameters by itself when processing variable data from applications [1]. The core problem of adaptive data processing is to design a "smart" mechanism that dynamically generates optimal workflows for variable requirements. There are some existing approaches to generating workflows, mostly based on manual configurations, such as Apache Oozie [5]. They are often taxing and poor in reusability because they depend on a single developer. It is therefore interesting to generate workflows that support adaptive data processing.

In this paper, we present an ontology-based approach to generating workflows for adaptive data processing that overcomes the limitations of manual configuration.

In our proposal, ontologies integrated with SWRL rules [2] are applied to characterize implicit relations among data-processing tasks, and ontology reasoning can automatically infer preference orders of tasks. Experimental results show that our proposal is effective and efficient. For instance, for New York Citi-bike data, our approach can generate an optimal workflow (shown in Fig. 2) in which optimal models and parameters are selected.

2 Ontology Construction for Generating Workflows

The first step of our proposal is constructing ontologies in Protégé [4]. In the following, we introduce the four steps of ontology construction.


Class Definition There are three classes, namely, ML Process, ML Process Service, and Parameters. Each of them has some subclasses.

– The ML Process class represents the whole procedure of data processing and contains four main subclasses, namely, Data Preprocess, Feature Preprocess, Modeling, and Evaluation.

– The ML Process Service class represents the set of all services and contains four subclasses, namely, Data Preprocess Service, Feature Preprocess Service, Modeling Service, and Evaluation Service.

– The Parameters class represents the set of all parameters and contains many subclasses of parameters w.r.t. the various models.

Property Definition There are two kinds of properties, namely, object properties (relations among classes) and data properties (relations between entities and datatypes). There are 14 object properties as well as 7 data properties.

Note that getModelRequirement and getParameters contain many subproperties which are used to select models and parameters. Besides, three subproperties of getParameters, namely hasC, hasGamma, and hasClass Weight, are used to select parameters which are important to SVM.
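The class and property definitions above can be sketched as a minimal in-memory schema. This is an illustrative sketch in plain Python, not the actual OWL ontology; the class and property names follow the paper, but the data-structure layout and the helper function are our own assumptions.

```python
# Illustrative sketch of the ontology schema (not the actual OWL file).
# Class taxonomy: each key is a class, the value is its list of subclasses.
CLASS_TREE = {
    "Thing": ["ML_Process", "ML_Process_Service", "Parameters"],
    "ML_Process": ["Data_Preprocess", "Feature_Preprocess",
                   "Modeling", "Evaluation"],
    "ML_Process_Service": ["Data_Preprocess_Service",
                           "Feature_Preprocess_Service",
                           "Modeling_Service", "Evaluation_Service"],
    "Parameters": ["SVM_Parameters", "KMeans_Parameters", "GBRT_Parameters"],
}

# Object properties relate individuals of two classes; data properties relate
# an individual to a literal. Only a few of the 14 + 7 properties are shown.
OBJECT_PROPERTIES = {
    "getParameters": ("ML_Process_Service", "Parameters"),
    "hasNextStep": ("ML_Process", "ML_Process"),
}
DATA_PROPERTIES = {
    "hasC": ("SVM_Parameters", "Literal"),
    "hasGamma": ("SVM_Parameters", "Literal"),
    "formatOfDataFile": ("ML_Process", "Literal"),
}

def subclasses_of(cls, tree=CLASS_TREE):
    """Return all (transitive) subclasses of cls, in declaration order."""
    direct = tree.get(cls, [])
    result = list(direct)
    for sub in direct:
        result.extend(subclasses_of(sub, tree))
    return result

print(subclasses_of("Parameters"))
```

In the real system this hierarchy lives in the Protégé ontology; the dictionary form merely shows how the three top-level classes and their subclasses relate.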

Entity Definition According to a specific application scenario, we create entities as usual [6]; these entities can then be combined with rules to infer further facts about the scenario. The usage of entities is shown in Section 3.


Fig. 1. An example of ontology for Citi-bike prediction.

Rule Definition We adopt SWRL [2], which is intended to be the rule language of the Semantic Web, to express statements that cannot be achieved with OWL [2].

Some SWRL rules generate the workflow according to the user requirements and the properties of the data set. Part of the rules are shown in Section 3.


Ontology Construction Finally, we apply Protégé [4] to construct the ontology and employ VOWL [3] to visualize the ontology schema, shown in Fig. 1.

3 Experiments and Evaluations

New York Citi-bike parking quantity prediction (www.citibikenyc.com/system-data): As the rents/returns of bikes at different stations in different periods are unbalanced, it is interesting to predict the number of bike parking spots to be rented from/returned to each station cluster.

Table 1. An example of SWRL rules for Citi-bike parking quantity prediction

Rule 1: ML_Process(?a) -> doDataPrepare(?a, datapreparation), doFeaturePrepare(?a, featurepreparation), doModel(?a, model), doEvaluation(?a, evaluation), hasNextStep(datapreparation, featurepreparation), hasNextStep(featurepreparation, model), hasNextStep(model, evaluation).

Rule 2: accuracy_in_general(regression_service1, ?a), equal(?a, true) -> get(regression_service1, svm_service), getParameters(svm_service, svm_parameters).
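The effect of rules like those in Table 1 can be approximated with a toy forward-chaining step over fact triples. This is a sketch only: Protégé delegates to a real SWRL reasoner, whereas here the two rules are hard-coded; the predicate and individual names follow Table 1.

```python
# Toy forward chaining over (subject, predicate, object) facts, mimicking the
# shape of the SWRL rules in Table 1 (illustrative; not the actual reasoner).
facts = {("citibike", "type", "ML_Process"),
         ("regression_service1", "accuracy_in_general", "true")}

def apply_rules(facts):
    new = set(facts)
    for s, p, o in facts:
        # Rule 1: every ML_Process instance gets the four ordered steps.
        if p == "type" and o == "ML_Process":
            new |= {(s, "doDataPrepare", "datapreparation"),
                    (s, "doFeaturePrepare", "featurepreparation"),
                    (s, "doModel", "model"),
                    (s, "doEvaluation", "evaluation"),
                    ("datapreparation", "hasNextStep", "featurepreparation"),
                    ("featurepreparation", "hasNextStep", "model"),
                    ("model", "hasNextStep", "evaluation")}
        # Rule 2: a generally-accurate service selects SVM and its parameters.
        if p == "accuracy_in_general" and o == "true":
            new |= {(s, "get", "svm_service"),
                    ("svm_service", "getParameters", "svm_parameters")}
    return new

inferred = apply_rules(facts)
print(("model", "hasNextStep", "evaluation") in inferred)  # True
```

A real SWRL engine matches rule bodies against the ontology's entities generically rather than hard-coding each rule, but the inferred facts have the same triple shape.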


Fig. 2. An example of workflow for Citi-bike parking quantity prediction

There are 3 steps to generate an optimal workflow as follows:

Step 1 Instantiating a Citi-bike parking quantity prediction case, named citibike, which has three properties, namely, isCase (the string "citibike"), formatOfData (the string "SQL"), and accuracy_in_general (the boolean value "true"), shown in Fig. 2, together with SWRL rules (e.g., the rules in Table 1).

Step 2 Generating a workflow by ontology reasoning via Protégé as follows: data_preprocess → feature_preprocess → modeling → evaluation, shown in Fig. 2.

Step 3 Executing the workflow through the properties shown in red, from data_preprocess to feature_preprocess, modeling, and evaluation.

As we can see, the results of Citi-bike parking quantity prediction via the generated workflow are the same as the results via optimal manual configurations, shown in Fig. 3.


Fig. 3. Results of Citi-bike parking quantity prediction

4 Conclusions

In this paper, we presented an ontology-based approach to generating workflows so that optimal models and parameters can be automatically selected in data processing. Our proposal provides a novel way for adaptive data processing via ontologies and helps apply ontology techniques to data processing.

Acknowledgments

This work is supported by the Key Technology Research and Development Pro- gram of Tianjin (16YFZCGX00210), the National Natural Science Foundation of China (61502336), the National Key R&D Program of China (2016YFB1000603), and the Seed Foundation of Tianjin University (2018XZC-0016).

References

1. Fernández-Delgado, M., Cernadas, E., Barro, S., and Gomes Amorim, D. Do we need hundreds of classifiers to solve real world classification problems? J. Mach. Learn. Res., 15(1): 3133–3181, 2014.

2. Horrocks, I., Patel-Schneider, P.F., Boley, H., Tabet, S., Grosof, B., and Dean, M. SWRL: A semantic web rule language combining OWL and RuleML. W3C Member Submission, 21 May 2004.

3. Lohmann, S., Negru, S., Haag, F., Ertl, T. Visualizing ontologies with VOWL. Semantic Web, 7(4): 399–419, 2016.

4. Musen, M.A. The Protégé project: A look back and a look forward. AI Matters, 1(4), June 2015. DOI: 10.1145/2557001.25757003.

5. Oozie: Apache workflow scheduler for Hadoop. The Apache Software Foundation, September 2010. http://oozie.apache.org/

6. Pan, J.Z., Vetere, G., Gomez-Perez, J.M., Wu, H. (Eds.) Exploiting Linked Data and Knowledge Graphs in Large Organisations. Springer, 2017.
