
HAL Id: inria-00431405

https://hal.inria.fr/inria-00431405

Submitted on 12 Nov 2009


Application Benchmark

Denilson Barbosa, Ioana Manolescu, Jeffrey Xu Yu

To cite this version:

Denilson Barbosa, Ioana Manolescu, Jeffrey Xu Yu. Application Benchmark. In Ling Liu and M. Tamer Özsu (eds.), Encyclopedia of Database Systems, Springer, pp. 99-100, 2009, ISBN 978-0-387-35544-3. ⟨inria-00431405⟩


Application Benchmark

Denilson Barbosa¹, Ioana Manolescu², Jeffrey Xu Yu³

¹ Department of Computer Science, University of Calgary, Calgary, AB, Canada

² INRIA Futurs, Le Chesnay, France

³ Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Hong Kong, China

Synonyms

Benchmark; Performance benchmark

Definition

An application benchmark is a suite of tasks that are representative of typical workloads in an application domain.

Main Text

Unlike a microbenchmark, an application benchmark specifies broader tasks that are aimed at exercising most components of a system or tool. Each individual task in the benchmark is assigned a relative weight, usually reflecting its frequency or importance in the application being modeled. A meaningful interpretation of the benchmark results has to take these weights into account.
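To make the weighting concrete, the following is a minimal sketch of how per-task results could be folded into a single weighted figure. The task names, the workload mix, and the choice of a weighted geometric mean are illustrative assumptions, not part of any particular benchmark specification.

    from math import prod

    def weighted_score(results: dict[str, float], weights: dict[str, float]) -> float:
        """Combine per-task results (e.g. throughput) into one score using a
        weighted geometric mean; other benchmarks use weighted sums or simply
        report the per-task numbers alongside their weights."""
        assert results.keys() == weights.keys(), "every task needs exactly one weight"
        total = sum(weights.values())
        # Each task contributes in proportion to its relative weight.
        return prod(r ** (weights[t] / total) for t, r in results.items())

    # Hypothetical workload mix: weights reflect how often each task
    # occurs in the application being modeled.
    weights = {"new_order": 0.45, "payment": 0.43, "stock_level": 0.12}
    results = {"new_order": 1200.0, "payment": 1500.0, "stock_level": 300.0}  # tasks/sec

    print(f"weighted score: {weighted_score(results, weights):.1f}")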

The Transaction Processing Performance Council (TPC) is a body with a long history of defining and publishing benchmarks for database systems. For instance, it has defined benchmarks for Online Transaction Processing applications (TPC-C and TPC-E), Decision Support applications (TPC-H), and for an Application Server setting (TPC-App).

Other examples of application benchmarks are the OO1 and OO7 benchmarks, developed for object-oriented databases, and XMark, XBench, and TPoX, developed for XML applications.

Cross-references

▶ Micro-Benchmarks

▶ XML Application

Recommended Reading

1. Carey M.J., DeWitt D.J., and Naughton J.F. The OO7 benchmark. In Proceedings of the ACM SIGMOD International Conference on Management of Data, WA, USA. ACM, New York, NY, USA, pp. 12–21, June 1993.

2. Gray J. (ed.). The Benchmark Handbook for Database and Transaction Systems, 2nd edn. Morgan Kaufmann, San Francisco, CA, USA, 1993, ISBN 1-55860-292-5.

3. Nicola M., Kogan I., and Schiefer B. An XML transaction processing benchmark. In Proceedings of the ACM SIGMOD International Conference on Management of Data. ACM, New York, NY, USA, 2007, pp. 937–948.

4. Schmidt A., Waas F., Kersten M.L., Carey M.J., Manolescu I., and Busse R. XMark: a benchmark for XML data management. In Proceedings of the Very Large Database Conference, Hong Kong, China, 2002, pp. 974–985.

5. Transaction Processing Performance Council. Available at: http://www.tpc.org/default.asp

6. Yao B.B., Özsu M.T., and Khandelwal N. XBench benchmark and performance testing of XML DBMSs. In Proceedings of the International Conference on Data Engineering (ICDE), Boston, MA, USA, 2004, pp. 621–633.

