HAL Id: hal-00080044

https://hal.archives-ouvertes.fr/hal-00080044

Submitted on 14 Jun 2006


Pipelined memory controllers for DSP applications handling unpredictable data accesses

Bertrand Le Gal, Emmanuel Casseau, Sylvain Huet, Eric Martin

To cite this version:

Bertrand Le Gal, Emmanuel Casseau, Sylvain Huet, Eric Martin. Pipelined memory controllers for DSP applications handling unpredictable data accesses. 2005, pp. 268-269. ⟨hal-00080044⟩


Pipelined Memory Controllers for DSP Applications Handling Unpredictable Data Accesses

Bertrand Le Gal, Emmanuel Casseau, Sylvain Huet, Eric Martin

LESTER Laboratory - University of South Brittany

Lorient - FRANCE

name.surname@univ-ubs.fr

Abstract

Multimedia applications are often characterized by a large number of data accesses with regular and periodic access patterns. In such cases, optimized pipelined memory access controllers can be generated to improve pipelined access to RAM. We focus on the design and implementation of memory sequencers that can be automatically generated by a behavioral synthesis tool and that efficiently handle predictable address patterns as well as unpredictable ones (dynamic address computations) in a pipelined way.

1. Introduction

Current research on multimedia applications tries to reduce the computational complexity of algorithms using ad hoc solutions composed of conditional computations, which leads to execution hazards appearing with conditional statements and unbounded loops. On the other hand, other architectural implementations (computation and memory architectures) are obtained for regular algorithms without execution hazards. For most multimedia applications, the entire memory access sequence is not known a priori. This prevents the designer and high-level synthesis tools from efficiently exploiting the application's repetitive sequences to design for area and power consumption. In many digital signal processing applications, however, the array access patterns are predictable, regular and periodic. In these cases, the necessary address patterns can be generated directly by a memory address sequencer. Sequencer generation allows the designer to decouple the concerns of memory interfacing and static scheduling of possible memory accesses [2]. This technique is used to improve pipelined access to RAM. Previous work [1] has demonstrated the interest of using an address sequencer for memory accesses to generate power-efficient circuits.

In this paper, we first present a design flow for our sequencer, which handles hazardous memory accesses by providing the ability to access dynamically calculated addresses in a data-flow fashion. We then present our sequencer architectures, which can perform pipelined memory accesses for both static and dynamic access sequences. Experimental results are discussed in the conclusion.

2. The Design Flow

The starting point of our design flow (figure 1) is a behavioral description of the application specifying the circuit functionality, together with a memory mapping specifying which data are placed in memory. In order to map an application onto the memory sequencer architecture, we define a graph model named Extended Data-Flow Graph, which takes into account the timing constraints required by dynamic address accesses.

Figure 1. Design Flow


The first step of the sequencer generation is the Extended Data-Flow Graph annotation: this annotation step aims to handle the timing requirements for data and address transfers from one unit to the other (Communication, Memory and Datapath units).
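As an illustration of what such an annotation carries, the sketch below shows one possible in-memory form of an annotated Extended Data-Flow Graph node; the C++ types, field names, cycle values and the AccessKind enumeration are assumptions made for this example, not the actual data model of the tool.

```cpp
#include <string>
#include <vector>

// Illustrative sketch only: a possible representation of an annotated
// Extended Data-Flow Graph node (assumed names, not the tool's model).
enum class AccessKind { StaticAddress, DynamicAddress, Computation };

struct EdfgNode {
    std::string name;             // operation or array reference
    AccessKind kind;              // static access, dynamic access or computation
    unsigned addrTransferCycles;  // cycles to move the address between units
    unsigned dataTransferCycles;  // cycles to move the data between units
    std::vector<int> successors;  // indices of data-dependent nodes
};

int main() {
    // A dynamic read whose address must first be sent to the sequencer.
    EdfgNode dynRead{"x[idx]", AccessKind::DynamicAddress, 1, 1, {2}};
    (void)dynRead;
    return 0;
}
```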

In a second step, a dynamic address computation balancing algorithm is applied to the previously annotated graph in order to move some dynamic address computations from the datapath to the memory sequencer unit, reducing transfer count, power consumption and delays. The decision metric used to select the address computations to be balanced takes into account different criteria: the number of data transfers, the increase or decrease of the critical path length, the bit width of the operators, and the potential parallelism that can be exploited for computations and data transfers. This dynamic address computation balancing is applied statically to the annotated graph. Depending on its outcome, the sequencer architecture may need an internal datapath for address computations.
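As a rough illustration of such a decision metric, the sketch below scores each candidate dynamic address computation with a weighted sum over the criteria listed above; the weights, field names and acceptance threshold are assumptions made for this example, not the metric actually implemented by the flow.

```cpp
#include <cstddef>
#include <vector>

// Illustrative sketch only: assumed per-candidate criteria for balancing.
struct BalancingCandidate {
    int transfersSaved;     // data transfers removed if the computation is moved
    int criticalPathDelta;  // cycles added (+) or removed (-) on the critical path
    int operatorBitwidth;   // bit width of the address operators required
    int parallelismLoss;    // overlapped transfers/computations lost by moving it
};

// Positive score: move the dynamic address computation into the sequencer.
static int balancingScore(const BalancingCandidate& c) {
    return 4 * c.transfersSaved
         - 3 * c.criticalPathDelta
         - (c.operatorBitwidth / 8)
         - 2 * c.parallelismLoss;
}

// Static selection pass applied once to the annotated Extended Data-Flow Graph.
static std::vector<std::size_t>
selectForBalancing(const std::vector<BalancingCandidate>& cs) {
    std::vector<std::size_t> chosen;
    for (std::size_t i = 0; i < cs.size(); ++i) {
        if (balancingScore(cs[i]) > 0) {
            chosen.push_back(i);
        }
    }
    return chosen;
}

int main() {
    std::vector<BalancingCandidate> cands = {
        {3, -1, 16, 0},  // saves transfers, shortens the critical path: moved
        {0,  2, 32, 1},  // no gain, lengthens the critical path: kept in datapath
    };
    return static_cast<int>(selectForBalancing(cands).size());
}
```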

Then the designer can implement the datapath by hand or by using a high-level synthesis tool. During this process, the HLS tool takes into account the timing constraints for dynamic data addressing and data transfers using the Extended Data-Flow Graph annotations.

Finally, the entire sequencer is generated using the previously gathered information on the access patterns.

3. Dynamic Address Sequencer Architecture

This sequencer supports dynamic addressing capabilities within a mostly deterministic data transfer sequence. In this architecture (figure 2, without the dynamic address datapath and its controller), we suppose that all dynamic address computations are performed inside the datapath unit and then transferred over data buses to the memory sequencer. The datapath access buses are connected to the memory through a multiplexed crossbar, which is controlled by the memory access scheduler. This scheduler drives the progress of the address generator synchronously from the datapath's point of view. A dynamic address access to memory goes through the address translation table, which translates the index of the data into a (memory bank, address) pair. This translation allows the designer to bind non-contiguous pieces of a vector to different memories to better exploit access parallelism. The dynamic access controller routes the appropriate commands (read/write) and physical address to the right memory bank according to the dynamic access required.
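A minimal behavioural sketch of the translation step is given below; it models the table as a simple indexed lookup, and the class and field names, as well as the two-bank interleaving shown in the usage, are assumptions made for this example rather than the actual hardware implementation.

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Illustrative sketch only: the translation table turns a dynamically
// computed data index into a (memory bank, physical address) pair.
struct PhysicalLocation {
    unsigned bank;       // memory bank that holds the element
    std::uint32_t addr;  // physical address inside that bank
};

class TranslationTable {
public:
    explicit TranslationTable(std::vector<PhysicalLocation> entries)
        : table_(std::move(entries)) {}

    // In hardware this is a synchronous table read driven by the dynamic
    // access controller; here it is modelled as a bounds-checked lookup.
    PhysicalLocation translate(std::uint32_t dataIndex) const {
        return table_.at(dataIndex);
    }

private:
    std::vector<PhysicalLocation> table_;
};

int main() {
    // Interleaving one vector over two banks to exploit access parallelism.
    TranslationTable table({{0, 0x00}, {1, 0x00}, {0, 0x01}, {1, 0x01}});
    PhysicalLocation loc = table.translate(2);  // bank 0, address 0x01
    return static_cast<int>(loc.bank);
}
```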

Sequencer Architecture Improvement: We extend this architecture to massively reduce transfers by inserting dedicated computation units in the memory sequencer unit. These computation units perform internal address computations that satisfy the datapath unit's constraints for eligible address computations.

Figure 2. Dynamic Address Sequencer

This approach provides interesting gains for pipelined architecture design: the data transfers between the datapath and the sequencer can take more than one clock cycle. In such applications, localizing computations in the memory sequencer reduces latency and avoids unnecessary address transfers. An internal datapath specialized in dynamic address computations is inserted before the translation table (figure 2). It is composed of operators and registers, like a datapath. This dedicated hardware is shared between address computations during application execution.
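To make the gain concrete, the sketch below models one eligible address computation of affine form resolved inside the sequencer's internal datapath instead of being transferred from the datapath unit; the affine base + stride * index form and all names and values are assumptions made for this example.

```cpp
#include <cstdint>

// Illustrative sketch only: an eligible dynamic address computation moved
// into the sequencer's internal datapath. The datapath unit now transfers
// only the loop index; the full address is produced locally, just before
// the translation table lookup, saving one address transfer per access.
struct AffineAccess {
    std::uint32_t base;    // logical offset of the array
    std::uint32_t stride;  // element stride in the logical index space
};

inline std::uint32_t internalAddress(const AffineAccess& a, std::uint32_t loopIndex) {
    return a.base + a.stride * loopIndex;
}

int main() {
    AffineAccess vec{1024, 4};  // hypothetical mapping of a vector of 32-bit words
    return internalAddress(vec, 3) == 1036 ? 0 : 1;
}
```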

4. Conclusion

Several experiments we have performed exhibit the interest of this approach. This technique reduces the data transfer requirements of low-conditional, high-computation applications that exploit data-intensive techniques. It also allows performance increases by using an internal sequencer datapath for dynamic address computations, reducing delay in pipelined architectural implementations.

References

[1] Taewhan Kim, Chun-Gi Lyuh, and Ki-Wook Kim. Coupling-aware high-level interconnect synthesis. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 23(1):157-164, January 2004.

[2] Joonseok Park and Pedro C. Diniz. Synthesis of pipelined memory access controllers for streamed data applications on FPGA-based computing engines. In Proc.
