
HAL Id: inria-00431409

https://hal.inria.fr/inria-00431409

Submitted on 12 Nov 2009


Microbenchmark

Denilson Barbosa, Ioana Manolescu, Jeffrey Xu Yu

To cite this version:
Denilson Barbosa, Ioana Manolescu, Jeffrey Xu Yu. Microbenchmark. In: Ling Liu, M. Tamer Özsu (eds.), Encyclopedia of Database Systems, Springer, p. 1737, 2009. ISBN 978-0-387-35544-3. inria-00431409.


Microbenchmark

Denilson Barbosa 1, Ioana Manolescu 2, Jeffrey Xu Yu 3

1 Department of Computer Science, University of Calgary, Calgary, AB, Canada
2 INRIA Futurs, Le Chesnay, France
3 Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Hong Kong, China

Definition

A micro-benchmark is an experimental tool that studies a given aspect (e.g., performance, resource consumption) of an XML processing tool. The studied aspect is called the target of the micro-benchmark. A micro-benchmark includes a parametric measure and guidelines explaining which data and/or operation parameters may impact the target, and suggesting value ranges for these parameters.

Main Text

Micro-benchmarks help capture the behavior of an XML processing system on a given operation as a single parameter is varied. In other words, the goal of a micro-benchmark is to study the precise effect of a given system feature or aspect in isolation.
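To make this concrete, the following is a minimal sketch (not part of the original entry) of such a micro-benchmark in Python: the target is parsing time, the single varied parameter is document nesting depth with a suggested value range, and everything else about the input is held fixed. The choice of parser and all names are illustrative.

```python
import timeit
import xml.etree.ElementTree as ET

def make_document(depth: int) -> str:
    # Build a synthetic document whose only varying characteristic is
    # its nesting depth; tag names and text content are held fixed.
    return "<a>" * depth + "x" + "</a>" * depth

def parse_time(depth: int, repeat: int = 5, number: int = 100) -> float:
    # The parametric measure: best-of-`repeat` wall-clock time (in seconds)
    # to parse the document `number` times.
    doc = make_document(depth)
    return min(timeit.repeat(lambda: ET.fromstring(doc), repeat=repeat, number=number))

# Guideline value range for the single parameter under study.
for depth in (10, 100, 1000):
    print(f"depth={depth:>4}: {parse_time(depth):.4f}s")
```

Reading the measure across the value range shows how the target responds to that one parameter, with no interference from other workload characteristics.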

Micro-benchmarks were first introduced for object-oriented databases [2]. An XML benchmark sharing some micro-benchmark features is the Michigan benchmark [3] ([CITATION TO EDS ENTRY ON XML BENCHMARKING]). The MemBeR project [1], developed jointly by researchers at INRIA Futurs, the University of Amsterdam, and the University of Antwerpen, provides a comprehensive repository of micro-benchmarks for XML.

Unlike application benchmarks ([CITATION TO ENTRY ON APPLICATION BENCHMARKS]), micro-benchmarks do not directly help determine which XML processing system is most appropriate for a given task. Rather, they are helpful in assessing particular modules, algorithms, and techniques present inside an XML processing tool. Micro-benchmarks are therefore typically very useful to system developers.

Cross-references

▶ Application Benchmarks
▶ XML Application

Recommended Reading

1. Afanasiev L., Manolescu I., and Michiels P. MemBeR: a micro-benchmark repository for XQuery. In Proceedings of the Third International XML Database Symposium (XSym), Trondheim, Norway, August 2005. Available at: http://ilps.science.uva.nl/Resources/MemBeR/index.html

2. Carey M.J., DeWitt D.J., and Naughton J.F. The OO7 Benchmark. In SIGMOD Record (ACM Special Interest Group on Management of Data), ACM, New York, NY, USA, 1993.

3. Runapongsa K., Patel J.M., Jagadish H.V., Chen Y., and Al-Khalifa S. The Michigan benchmark: towards XML query performance diagnostics. Inf. Syst., 31(2):73–97, 2006.
