
Optimizing Precision for High-Performance, Robust, and Energy-Efficient Computations



HAL Id: hal-02401813

https://hal.archives-ouvertes.fr/hal-02401813

Submitted on 15 May 2020

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Optimizing Precision for High-Performance, Robust, and Energy-Efficient Computations

Roman Iakymchuk, Stef Graillat, Fabienne Jézéquel, Daichi Mukunoki, Toshiyuki Imamura, Yiyu Tan, Atsushi Koshiba, Jens Huthmann, Kentaro Sano, Norihisa Fujita, et al.

To cite this version:

Roman Iakymchuk, Stef Graillat, Fabienne Jézéquel, Daichi Mukunoki, Toshiyuki Imamura, et al. Optimizing Precision for High-Performance, Robust, and Energy-Efficient Computations. International Conference on High Performance Computing in Asia-Pacific Region, Jan 2020, Fukuoka, Japan.

⟨hal-02401813⟩


Optimizing Precision for High-Performance, Robust, and Energy-Efficient Computations

A joint France-Japan research project

RIKEN Center for Computational Science (R-CCS), Japan:
Daichi Mukunoki, Toshiyuki Imamura, Yiyu Tan, Atsushi Koshiba, Jens Huthmann, Kentaro Sano

Sorbonne University, CNRS, LIP6, France:
Roman Iakymchuk, Stef Graillat, Fabienne Jézéquel

Center for Computational Sciences, University of Tsukuba, Japan:
Norihisa Fujita, Taisuke Boku

Conclusion & Future Work

We proposed a new systematic approach for minimal-precision computations. This approach is robust, general, comprehensive, high-performance, and realistic. Although the proposed system is still in development, it can be constructed by combining already available (developed) in-house technologies and extending them. Our ongoing step is to demonstrate the system on a proxy application.

* All authors contributed equally to this research

Introduction

In numerical computations, the precision of floating-point operations is a key factor that determines both performance (speed and energy efficiency) and reliability (accuracy and reproducibility). However, precision generally affects the two in opposite ways. Therefore, the ultimate concept for maximizing both at the same time is optimized/reduced-precision computation through precision-tuning, which adjusts the minimal precision for each operation and each datum. Several studies have already been conducted on this, but their scope is limited to precision-tuning alone. Instead, we propose a broader concept of optimized/minimal-precision and robust (numerically reliable) computing with precision-tuning, involving both the hardware and the software stack.

Minimal-Precision Computing

Our Contributions

Robust (Numerically Reliable)

To ensure the requested accuracy, precision-tuning is driven by numerical validation, which also guarantees reproducibility

General

Our scheme is applicable to any floating-point computation. It contributes to low development cost and sustainability (easy maintenance and system portability)

Comprehensive

We propose a total system spanning precision-tuning to the execution of the tuned code, combining heterogeneous hardware and a hierarchical software stack

High-performance

Performance can be improved through minimal precision as well as fast numerical libraries and accelerators

Realistic

Our system can be realized by combining available in-house technologies

System Stack & System Workflow

System workflow:

1. Input: C code with MPFR (and MPLAPACK)
2. Precision-Optimizer (with PROMISE and CADNA/SAM) produces optimized C code with MPFR
3. Performance Optimization produces C code with MPFR plus other fast accurate methods
4. FPGA? If yes: the part of the C code with MPFR that is executed on FPGA goes through Code Translation for FPGA (SPGen, Nymble, FloPoCo) into low-level code for FPGA (VHDL etc.), followed by compilation and execution on FPGA
5. If no: compilation and execution on CPU/GPU

Precision-Optimizer

• The Precision-Optimizer determines the minimal floating-point precisions needed to achieve the desired accuracy (a toy sketch of this idea follows below)
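For intuition only, here is a toy C sketch of that idea: it searches for the smallest MPFR precision whose result matches a high-precision reference within a requested tolerance. The example computation (a partial sum of 1/i^2), the tolerance, and the function name sum_inv_sq are our own illustrative choices, not part of the actual Precision-Optimizer.

#include <stdio.h>
#include <mpfr.h>

/* Compute sum_{i=1..n} 1/i^2 into s at the given MPFR precision. */
static void sum_inv_sq(mpfr_t s, mpfr_prec_t prec, int n) {
    mpfr_t t;
    mpfr_init2(t, prec);
    mpfr_set_prec(s, prec);
    mpfr_set_ui(s, 0, MPFR_RNDN);
    for (int i = 1; i <= n; i++) {
        mpfr_set_ui(t, i, MPFR_RNDN);
        mpfr_sqr(t, t, MPFR_RNDN);
        mpfr_ui_div(t, 1, t, MPFR_RNDN);
        mpfr_add(s, s, t, MPFR_RNDN);
    }
    mpfr_clear(t);
}

int main(void) {
    mpfr_t ref, s, err;
    mpfr_inits2(256, ref, s, err, (mpfr_ptr) NULL);
    sum_inv_sq(ref, 256, 100000);               /* high-precision reference run */
    for (mpfr_prec_t p = 8; p <= 256; p += 8) { /* search upward in precision */
        sum_inv_sq(s, p, 100000);
        mpfr_sub(err, s, ref, MPFR_RNDN);
        mpfr_abs(err, err, MPFR_RNDN);
        if (mpfr_cmp_d(err, 1e-10) < 0) {       /* requested accuracy reached */
            printf("minimal precision: %ld bits\n", (long) p);
            break;
        }
    }
    mpfr_clears(ref, s, err, (mpfr_ptr) NULL);
    return 0;
}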

Performance Optimization

• At this stage, parts of the code that can be sped up with accurate computation methods other than MPFR are replaced with them

• The required accuracy must be taken into account

• If possible, it considers utilizing the FPGA (as heterogeneous computing)

System stack (components marked available or in development):

• Precision Tuning: PROMISE (available); PROMISE with SAM (in development)
• Numerical Validation, stochastic arithmetic: CADNA, SAM
• Arithmetic: FP32, FP64 (FP16, FP128); arbitrary precision with MPFR; DD/QD; fast accurate methods/libraries; others
• Numerical Libraries: BLAS, LAPACK, MPLAPACK (MPBLAS), OzBLAS, ExBLAS, QPEigen, others
• Compiler & Tools for FPGA: SPGen (extension for arbitrary precision in development), Nymble, FloPoCo
• Hardware: heterogeneous system with CPU, GPU, and FPGA acceleration

MPLAPACK [7]: an open-source multi-precision BLAS and LAPACK based on several high-precision tools such as MPFR, QD, and FP128

MPFR [2]: a C library for multiple-precision (arbitrary-precision) floating-point computations on CPUs

FloPoCo [16]: an open-source floating-point core generator for FPGAs supporting arbitrary precision


Energy-Efficient

Through minimal precision as well as energy-efficient hardware acceleration with FPGA and GPU

FPGA as an Arbitrary-Precision Computing Platform

Fast and Accurate Numerical Libraries

[Figure: error-free transformation of the dot product x^T y, illustrated with FP64 operands (11-bit exponent, 52-bit significand) split into narrower parts]

OzBLAS (TWCU, RIKEN)

• OzBLAS [13] is an accurate & reproducible BLAS using the Ozaki scheme [6], an accurate matrix multiplication method based on the error-free transformation of the dot product

• The accuracy is tunable and depends on the range of the inputs and the vector length

• CPU and GPU (CUDA) versions

ExBLAS (Sorbonne University)

• ExBLAS [12] is an accurate & reproducible BLAS based on floating-point expansions with error-free transformations (EFTs: twosum and twoprod) and superaccumulators

• It assures reproducibility through correct rounding: every bit of information is preserved until the final rounding to the desired format (a minimal sketch of the underlying EFTs follows below)

• CPU (Intel TBB) and GPU (OpenCL) versions
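As a minimal sketch (ours, assuming FP64 and a correctly rounded fma), the two EFTs look as follows in C; ExBLAS composes them with superaccumulation, which is not shown here.

#include <math.h>

/* TwoSum (Knuth): returns s = fl(a + b) and e such that a + b = s + e exactly */
static void twosum(double a, double b, double *s, double *e) {
    *s = a + b;
    double z = *s - a;
    *e = (a - (*s - z)) + (b - z);
}

/* TwoProd via FMA: returns p = fl(a * b) and e such that a * b = p + e exactly */
static void twoprod(double a, double b, double *p, double *e) {
    *p = a * b;
    *e = fma(a, b, -*p);
}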

(x^(1), x^(2)) = split(x) {
    ρ = ⌈(log2(u^(-1)) + log2(n + 1)) / 2⌉
    τ = ⌈log2(max_{1≤i≤n} |x_i|)⌉
    σ = 2^(ρ+τ)
    x^(1) = fl((x + σ) - σ)
    x^(2) = fl(x - x^(1))
}

(1) x and y are split by split():
    (x^(1), x^(2)) = split(x), (y^(1), y^(2)) = split(y)
split() is applied recursively until x^(p+1) = y^(q+1) = 0, so that
    x = x^(1) + x^(2) + … + x^(p),  y = y^(1) + y^(2) + … + y^(q)

(2) Then, x^T y is computed as
    x^T y = (x^(1))^T y^(1) + (x^(1))^T y^(2) + … + (x^(1))^T y^(q)
          + (x^(2))^T y^(1) + (x^(2))^T y^(2) + …
          + …
          + (x^(p))^T y^(1) + (x^(p))^T y^(2) + … + (x^(p))^T y^(q)
where each partial product (x^(i))^T y^(j) is error-free: (x^(i))^T y^(j) = fl((x^(i))^T y^(j))
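To make the splitting concrete, here is an illustrative C sketch of one split() step for an FP64 vector (u = 2^-53); the function name ozaki_split and the volatile guard against compiler reassociation are our additions.

#include <math.h>

/* One splitting step of the Ozaki scheme: x = x1 + x2 exactly, with x1
   narrow enough that dot products of x1 parts incur no rounding error. */
static void ozaki_split(const double *x, double *x1, double *x2, int n) {
    double amax = 0.0;
    for (int i = 0; i < n; i++)
        if (fabs(x[i]) > amax) amax = fabs(x[i]);
    int rho = (int) ceil((53.0 + log2(n + 1.0)) / 2.0); /* log2(u^-1) = 53 */
    int tau = (int) ceil(log2(amax));
    double sigma = ldexp(1.0, rho + tau);               /* sigma = 2^(rho+tau) */
    for (int i = 0; i < n; i++) {
        volatile double t = x[i] + sigma;  /* volatile: force fl((x + sigma) - sigma) */
        x1[i] = t - sigma;                 /* high part */
        x2[i] = x[i] - x1[i];              /* remainder, exact in FP64 */
    }
}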

[Figure: Cygnus compute node: two Intel Xeon Gold CPUs connected by UPI; four NVIDIA V100 GPUs and two Intel Stratix 10 FPGAs attached via PCIe x16 through PLX switches; InfiniBand HDR100 links]

Cygnus (University of Tsukuba)

• Cygnus is the world's first supercomputer system equipped with both GPUs (4x Tesla V100) and FPGAs (2x Stratix 10), installed at the Center for Computational Sciences (CCS), University of Tsukuba

[Figure: double-double format; ExBLAS scheme]

SPGen (RIKEN)

• SPGen (Stream Processor Generator) [14] is a compiler that generates hardware module codes in Verilog HDL for FPGAs from input codes in the Stream Processing Description (SPD) format. The SPD format uses a data-flow graph representation, which is suitable for FPGAs.

• It currently supports FP32 only, but we are going to extend SPGen to support arbitrary-precision floating-point. Currently, there is no FPGA compiler supporting arbitrary precision.

(A) Stochastic Arithmetic Tools

CADNA & SAM (Sorbonne University)

• CADNA (Control of Accuracy and Debugging for Numerical Applications) [18] is a DSA library for FP16/32/64/128

• CADNA can be used on CPUs in Fortran/C/C++ codes with OpenMP & MPI, and on GPUs with CUDA

• SAM (Stochastic Arithmetic in Multiprecision) [23] is a DSA library for arbitrary precision based on MPFR

PROMISE (Sorbonne University)

[Figure: precision tuning based on Delta-Debugging]

Discrete Stochastic Arithmetic (DSA) [21] enables us to estimate rounding errors (i.e., the number of correct digits in the result) with 95% confidence by executing the code 3 times with random rounding. DSA is a general scheme applicable to any floating-point operation: no special algorithms and no code modification are needed. It is a lightweight approach in terms of performance, usability, and development cost compared to other numerical verification/validation methods. A coarse illustration follows below.
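The following self-contained C sketch (ours) illustrates the random-rounding idea coarsely; CADNA/SAM switch the rounding mode per operation inside dedicated types, and production use would also need #pragma STDC FENV_ACCESS ON.

#include <fenv.h>
#include <stdio.h>
#include <stdlib.h>

/* Add with the rounding mode drawn at random (up or down with equal probability). */
static double radd(double a, double b) {
    fesetround(rand() & 1 ? FE_UPWARD : FE_DOWNWARD);
    double r = a + b;
    fesetround(FE_TONEAREST);
    return r;
}

int main(void) {
    for (int run = 0; run < 3; run++) {       /* DSA uses 3 executions */
        double s = 0.0;
        for (int i = 1; i <= 1000000; i++)
            s = radd(s, 1.0 / i);             /* a sum sensitive to rounding */
        printf("run %d: %.17g\n", run, s);    /* common leading digits are reliable */
    }
    return 0;
}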

QPEigen & QPBLAS (JAEA, RIKEN)

• Quadruple-precision Eigen solvers (QPEigen) [8, 25] are based on double-double (DD) arithmetic (sketched below). They are built on a quadruple-precision BLAS (QPBLAS) [9]. Both support distributed environments with MPI and are equivalent to ScaLAPACK's eigensolvers and PBLAS
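For reference, a minimal C sketch (ours) of the double-double addition that DD-based libraries build on; this is the fast "sloppy" variant, and real libraries use more careful renormalization.

/* A double-double value represents hi + lo, with |lo| no larger than half an ulp of hi. */
typedef struct { double hi, lo; } dd_t;

/* Double-double addition: TwoSum on the high parts, then fold in
   the low parts and renormalize. */
static dd_t dd_add(dd_t a, dd_t b) {
    double s = a.hi + b.hi;
    double z = s - a.hi;
    double e = (a.hi - (s - z)) + (b.hi - z);  /* TwoSum error term */
    e += a.lo + b.lo;
    dd_t r;
    r.hi = s + e;
    r.lo = e - (r.hi - s);                     /* quick renormalization */
    return r;
}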

Arbitrary-precision arithmetic is performed using MPFR on CPUs, but its performance is very low. To accelerate it, we are developing several numerical libraries supporting accurate computation based on high-precision arithmetic or algorithmic approaches. Some software also supports GPU acceleration.

FPGAs enable us to implement arbitrary precision in hardware. High-Level Synthesis (HLS) enables us to program them in OpenCL. However, compiling arbitrary-precision code and obtaining high performance are still challenging. Heterogeneous computing with FPGA & CPU/GPU is also a challenge.

Name Core; ### Define IP core "Core"
Main_In {in:: x0_0, x0_1, y0_0, y0_1};
Main_Out {out::x2_0, x2_1, y2_0, y2_1};
### Description of parallel pipelines for t=0
HDL pe10, 123, (x1_0, y1_0) = PE(x0_0, y0_0);
HDL pe11, 123, (x1_1, y1_1) = PE(x0_1, y0_1);
### Description of parallel pipelines for t=1
HDL pe20, 123, (x2_0, y2_0) = PE(x1_0, y1_0);
HDL pe21, 123, (x2_1, y2_1) = PE(x1_1, y1_1);

Name PE; ### Define pipeline "PE"
Main_In {in:: x_in, y_in};
Main_Out {out::x_out, y_out};
EQU eq1, t1 = x_in * y_in;
EQU eq2, t2 = x_in / y_in;
EQU eq3, x_out = t1 + t2;
EQU eq4, y_out = t1 - t2;

[Figure: Core connects pipelines pe10/pe11 (t=0) to pe20/pe21 (t=1); each PE maps x_in, y_in through *, /, +, - to x_out, y_out. Left: module definition with a data-flow graph by describing formulae of computation. Right: module definition with hardware structure by describing connections of modules]

SPGen tool flow (START to FINISH):

• C/C++ codes → C/C++ frontend (LLVM-based, polyhedral transformations; in development) → SPD codes
• SPD codes → optimization tool (clustering the DFG, mapping clusters to hardware; in development) → optimized SPD codes
• Optimized SPD codes → SPGen (wraps nodes with data-flow control logic, connects nodes with wires, equalizes path lengths; available now) → hardware modules in HDL → FPGA bitstreams for the FPGA
• Host codes run on the CPU

Nymble (TU Darmstadt, RIKEN)

• Nymble [15] is another compiler project for FPGAs. It directly accepts C code and has already started to support arbitrary precision.

• It is better suited to non-linear memory access patterns, such as those of graph-based data structures.

(1) The same code is run several times with the random rounding mode (results are rounded up/down with the same probability)

(2) Different results are obtained

(3) The common part of the different results is assumed to be a reliable result

[Figure: several executions of a floating-point code with random rounding, e.g., 3.14160…, 3.14161…, 3.14159…; the common digits form the reliable result]

• PROMISE (PRecision OptiMISE) [17] is a tool based on Delta-Debugging [24] to automatically tune the precision of floating-point variables in C/C++ codes (a toy sketch of the search follows below)

• The validity of the results is checked with CADNA

• We are going to extend PROMISE to arbitrary precision with MPFR
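To illustrate the Delta-Debugging-style search, here is our toy C sketch, not PROMISE itself: the predicate passes() is a hypothetical stand-in for a CADNA-validated run of the instrumented program, and the greedy chunked sweep only approximates the real ddmin algorithm.

#include <stdio.h>

#define NVARS 8

/* Hypothetical validity check: "fails" when too many of the first
   variables are lowered to float. In PROMISE this is a validated run. */
static int passes(const int lowered[NVARS]) {
    int k = 0;
    for (int i = 0; i < 4; i++) k += lowered[i];
    return k <= 2;  /* stand-in accuracy constraint */
}

/* Greedy ddmin-style sweep: try lowering each chunk of variables
   from double to float; keep a lowering only if still valid. */
static void tune(int lowered[NVARS]) {
    for (int chunk = NVARS; chunk >= 1; chunk /= 2) {
        for (int s = 0; s < NVARS; s += chunk) {
            int saved[NVARS];
            for (int i = 0; i < NVARS; i++) saved[i] = lowered[i];
            for (int i = s; i < s + chunk && i < NVARS; i++) lowered[i] = 1;
            if (!passes(lowered))  /* revert the failed lowering */
                for (int i = 0; i < NVARS; i++) lowered[i] = saved[i];
        }
    }
}

int main(void) {
    int lowered[NVARS] = {0};
    tune(lowered);
    printf("variables lowered to float:");
    for (int i = 0; i < NVARS; i++) if (lowered[i]) printf(" v%d", i);
    printf("\n");
    return 0;
}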

• Each Stratix 10 FPGA has four external links at 100 Gbps. 64 FPGAs form an 8x8 2D-torus network for communication

• This project targets such a heterogeneous system with FPGA.

Notation:
• fl(…): computation with floating-point arithmetic
• u: the unit round-off

Acknowledgement:

This research was partially supported by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 842528 (Robust project), the Japan Society for the Promotion of Science (JSPS) KAKENHI Grant No. 19K20286, and the Multidisciplinary Cooperative Research Program at CCS, University of Tsukuba.

References

[1] D. H. Bailey, "QD (C++/Fortran-90 double-double and quad-double package)," http://crd.lbl.gov/~dhbailey/mpdist
[2] G. Hanrot et al., "MPFR: GNU MPFR Library," http://www.mpfr.org
[3] D. H. Bailey et al., "ARPREC: An Arbitrary Precision Computation Package," Lawrence Berkeley National Laboratory Technical Report, No. LBNL-53651, 2002.
[4] CAMPARY, http://homepages.laas.fr/mmjoldes/campary
[5] T. Ogita et al., "Accurate Sum and Dot Product," SIAM J. Sci. Comput., Vol. 26, pp. 1955-1988, 2005.
[6] K. Ozaki et al., "Error-free transformations of matrix multiplication by using fast routines of matrix multiplication and its applications," Numer. Algorithms, Vol. 59, No. 1, pp. 95-118, 2012.
[7] M. Nakata, "The MPACK (MBLAS/MLAPACK); a multiple precision arithmetic version of BLAS and LAPACK," Ver. 0.6.7, http://mplapack.sourceforge.net, 2010.
[8] Japan Atomic Energy Agency, "Quadruple Precision Eigenvalue Calculation Library QPEigen Ver.1.0 User's Manual," https://ccse.jaea.go.jp/ja/download/qpeigen_k/qpeigen_manual_en-1.0.pdf, 2015.
[9] Japan Atomic Energy Agency, "Quadruple Precision BLAS Routines QPBLAS Ver.1.0 User's Manual," https://ccse.jaea.go.jp/ja/download/qpblas/1.0/qpblas_manual_en-1.0.pdf, 2013.
[10] X. Li et al., "XBLAS: Extra Precise Basic Linear Algebra Subroutines," http://www.netlib.org/xblas
[11] P. Ahrens et al., "ReproBLAS: Reproducible Basic Linear Algebra Sub-programs," https://bebop.cs.berkeley.edu/reproblas
[12] R. Iakymchuk et al., "ExBLAS: Reproducible and Accurate BLAS Library," Proc. Numerical Reproducibility at Exascale (NRE2015) workshop at SC15, HAL ID: hal-01202396, 2015.


[13] D. Mukunoki et al., “Accurate and Reproducible BLAS Routines with Ozaki Scheme for Many-core Architectures,” Proc. PPAM2019, 2019 (accepted).

[14] K. Sano et al., “Stream Processor Generator for HPC to Embedded Applications on FPGA-based System Platform,” Proc. First International Workshop on FPGAs for Software Programmers (FSP 2014), pp. 43-48, 2014.

[15] J. Huthmann et al., “Hardware/software co-compilation with the Nymble system,” 2013 8th International Workshop on Reconfigurable and Communication-Centric Systems-on-Chip (ReCoSoC), pp. 1-8, 2013.

[16] F. de Dinechin and B. Pasca, "Designing custom arithmetic data paths with FloPoCo," IEEE Design & Test of Computers, Vol. 28, No. 4, pp. 18-27, 2011.

[17] S. Graillat et al., “Auto-tuning for floating-point precision with Discrete Stochastic Arithmetic,” Journal of Computational Science, Vol.36, 101017, 2019.

[18] F. Jézéquel and J.-M. Chesneaux, “CADNA: a library for estimating round-off error propagation,” Computer Physics Communications, Vol. 178, No. 12, pp. 933-955, 2008.

[19] F. Févotte and B. Lathuilière, “Debugging and optimization of HPC programs in mixed precision with the Verrou tool,” hal-02044101, 2019.

[20] C. Rubio-González et al., “Precimonious: Tuning Assistant for Floating-Point Precision,” Proc. SC’13, 2013.

[21] J. Vignes, “Discrete Stochastic Arithmetic for Validating Results of Numerical Software,” Numer. Algorithms, Vol. 37, No. 1-4, pp. 377-390, 2004.

[22] I. Laguna, P. C. Wood, R. Singh, S. Bagchi, "GPUMixer: Performance-Driven Floating-Point Tuning for GPU Scientific Applications," Proc. ISC2019, pp. 227-246, 2019.

[23] S. Graillat et al., "Stochastic Arithmetic in Multiprecision," Mathematics in Computer Science, Vol. 5, No. 4, pp. 359-375, 2011.

[24] A. Zeller and R. Hildebrandt, “Simplifying and isolating failure-inducing input,” IEEE Trans. Softw. Eng., Vol. 28, No. 2, pp. 183-200, 2002.

[25] Y. Hirota et al., “Performance of quadruple precision eigenvalue solver libraries QPEigenK and QPEigenG on the K computer”, HPC in Asia Poster, International Supercomputing Conference (ISC’16), 2016.

Minimal-precision computing is both reliable (aka robust) and sustainable, as it ensures the requested accuracy of the result while being energy-efficient

Our Proposal

Available Components (red: components developed by us)

(1) Precision-tuning with numerical validation based on stochastic arithmetic

• Rounding errors can be estimated stochastically at a reasonable cost (for details, see "(A) Stochastic Arithmetic Tools" at the bottom left)

• A general scheme applicable to any floating-point computation

Tools: PROMISE [17] (based on the stochastic arithmetic library CADNA [18]), Verrou [19], etc.

Related works (not validation-based): Precimonious [20], GPUMixer [22], etc.

(2) Arbitrary-precision libraries and fast accurate numerical libraries

• Reduced-/mixed-precision with FP16/FP32/FP64 enables us to improve performance & energy efficiency

• High-precision libraries and fast accurate computation methods have been developed for reliable & reproducible computation

Tools:

• High-precision arithmetic: binary128 (Intel, GCC), QD [1], MPFR [2], ARPREC [3], CAMPARY [4], etc.

• Accurate sum/dot: AccSum/Dot [5], Ozaki scheme [6], etc.

• Numerical libraries: MPLAPACK [7], QPEigen [8], QPBLAS [9], XBLAS [10], ReproBLAS [11], ExBLAS [12], OzBLAS [13], etc.

(3) Field-Programmable Gate Array (FPGA) with High-Level Synthesis (HLS)

• FPGAs enable us to implement any operation in hardware, including arbitrary-precision operations

• HLS enables us to use FPGAs through existing programming languages such as C/C++ and OpenCL

• FPGAs can thus perform arbitrary-precision computations in hardware efficiently (high performance and energy-efficient)

Tools:

• Compilers: SPGen [14], Nymble [15], etc.

• Custom floating-point operation generators: FloPoCo [16], etc.

HPC Asia 2020: International Conference on High Performance Computing in Asia-Pacific Region, Fukuoka, Japan, January 15-17, 2020

