

Figure 1: Generic Network Adaptation Framework (labels: Internet, Video Stream, Network Device (Home Gateway), Configuration Protocol, Adapted Video Stream, End User Terminal)

A GENERIC VIDEO ADAPTATION FPGA IMPLEMENTATION TOWARDS CONTENT- AND CONTEXT-AWARENESS IN FUTURE NETWORKS

W. AUBRY123, B. LE GAL1, D. NEGRU2, S. DESFARGES2 and D. DALLET1 waubry@viotech.net, negru@labri.fr, legal@ims-bordeaux.fr, desfarge@labri.fr, dallet@ims-bordeaux.fr

1University of Bordeaux, IMS Laboratory, CNRS UMR 5218, IPB,

Talence, France

2University of Bordeaux, LaBRI Laboratory, CNRS UMR 5800, IPB

Talence, France

3Viotech Communications, Montigny-le-Bretonneux, France

ABSTRACT

Intensive research in the networking domain addresses content- and context-aware features for the Future Internet. Being able to manipulate data flows and adapt them to given constraints with minimal resource involvement is therefore a hot topic. Within this topic, video manipulation is the challenge to undertake in order to optimize resource-consuming processes. However, developing heterogeneous video transcoders takes tremendous time. Many algorithmic solutions have been proposed to reduce transcoding runtime, but they involve re-developing ad-hoc IPs for every situation, rapidly multiplying development costs. In this paper, we propose a generic system for video transcoding and its FPGA implementation. This system enables the reuse of already developed IPs, reducing time to market for next-generation network devices.

Index Terms— Video adaptation, heterogeneous transcoder, FPGA

1. INTRODUCTION

In today's world, video streaming is one of the most consumed data flows over the Internet and the most bandwidth-demanding. Hence, video streams have the largest impact on the global network, and adapting the transported video stream to the network characteristics has been deeply researched [9]-[11]. Nowadays, network-oriented research is directed toward the end user's quality of experience: the network state is no longer the only parameter considered for video adaptation. The user context, including terminal characteristics such as supported codecs and screen resolution, is now the main constraint that has to be taken into account.

This problem is one of the main focuses of media-centric networking and a key to content- and context-awareness for next-generation networks. We propose to address this issue by using a network device, as shown in Figure 1. The main objective is to embed the video adaptation task in this external device, which has network-monitoring capabilities. The device can then detect and adapt the video contents depending on the user's context (network load, terminal characteristics, etc.), turning it into a content/context-aware network device.

This approach (providing adaptation capabilities near the end user) offers the advantage of a video distribution that is seamless for both the consumer and the video-stream provider. On the one hand, the consumer can select a video stream based only on its content, without worrying about the terminal's capability to decode it. On the other hand, the content provider does not have to take context parameters into account when asked for a video stream. This is achieved in our system by embedding the video adaptation task in the network devices that transfer video streams.

For scalability purposes, the system must be implemented in last-hop devices, which possess better and quicker knowledge of the end-user context. However, those devices are mainly network gateways with network-switching responsibilities (such as a 3G antenna or a home gateway). These devices are characterized by low computation performance, insufficient to execute computation-intensive video adaptation tasks. This explains why the real-time video adaptation system1 proposed in this paper was developed under low-computation-complexity and low-cost constraints.

Developing an adaptation system is complex due to a large set of constraints:

• many adaptation techniques must be implemented, requiring different video processing steps (video compression, video decompression and video adaptation);

1 Work supported by the French project ARDMAHN within the French National ARPEGE ANR Program (http://www.ardmahn.org) and by the European project ALICANTE within EU FP7 ICT, under grant agreement n° 248652 (http://www.ict-alicante.eu).


Figure 2: System Overview

• adaptation processing must be performed under real-time constraints;

• the system hardware complexity must be quite low to fulfill the low-cost constraint.

These reasons motivated us to propose a generic system for video adaptation. To the authors' knowledge, no such architecture exists in the literature. The proposed solution is based on an FPGA device and on low-complexity adaptation algorithms ([3], [4] and [9]) that have a low impact on video quality.

This article is organized as follows. In Section 2, the system design is detailed and a state of the art of algorithmic adaptation solutions is presented; the lack of a generic hardware solution, as required by our system, is pointed out. In Section 3, we propose our system and present the hardware implementation that overcomes this issue. Hardware design characteristics are shown in Section 4. Future works and the conclusion are drawn in Section 5.

2. VIDEO ADAPTATION SYSTEMS AND RELATED WORKS

2.1. System design

In the proposed system approach presented in Figure 2, the home gateway, located between the video source coming from the Internet and the embedded device that displays the video content, performs the video stream transcoding. Transcoding is activated and configured according to the embedded device's decoding capabilities. In order to enable and configure the transcoding process, a communication protocol is required. This protocol is used by the embedded device to provide the list of video standards and characteristics that it supports.

This system requires two major evolutions of current systems:

1. To adapt the video characteristics, a modified home gateway device is required. This home gateway, which links the embedded device to the Internet (or another video provider), needs real-time video adaptation capabilities. In the proposed approach, this task is implemented using a dedicated hardware architecture in an FPGA circuit;

2. To enable and control the video transcoding process, the embedded device must be able to inform the modified home gateway of the supported standards and characteristics (e.g. screen size). This feature is implemented using a negotiation protocol (a sketch of such a capability message follows this list).
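As an illustration of such a negotiation message, the embedded device could advertise its capabilities in a fixed-size record like the C sketch below. The wire format is not specified in the paper, so every field and flag name here is an assumption.

```c
#include <stdint.h>

/* Hypothetical capability message sent by the embedded device to the
 * home gateway during negotiation. The paper does not specify the wire
 * format; all fields and flag names here are illustrative only. */
enum codec_flags {
    CODEC_MPEG2 = 1 << 0,
    CODEC_H263  = 1 << 1,
    CODEC_H264  = 1 << 2,
};

struct capability_msg {
    uint32_t supported_codecs;  /* bitmask of codec_flags            */
    uint16_t max_width;         /* screen width in pixels, e.g. 1280 */
    uint16_t max_height;        /* screen height in pixels, e.g. 720 */
    uint32_t max_bitrate_kbps;  /* highest sustainable bitrate       */
    uint8_t  max_fps;           /* highest supported frame rate      */
};
```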

2.2. Introduction to video adaptation use cases

The video adaptation system must be able to perform a large set of video adaptations in order to meet context-awareness requirements. Indeed, there are many reasons to adapt a video stream, such as:

• reducing its bitrate in order to decrease network bandwidth usage [9];

• reducing the video resolution according to the device characteristics to minimize its power consumption [15];

• transforming the video stream format (i.e. codec) into one supported by the embedded device [13].

Commonly used atomic adaptations are:

• Video Transrating: transrating consists in reducing the video bitrate, i.e. the amount of information to transmit per second. The adaptation system has to efficiently reduce the amount of data in the compressed video stream without damaging the video quality too much (a minimal sketch of the underlying requantization idea follows this list);

• Video Temporal Downscaling: temporal downscaling aims at reducing the frame rate of the video. This process has to find the best frames to remove from the video stream and has to update the frames that use the removed ones as references (mainly their motion vectors);

• Video Spatial Downscaling: spatial downscaling aims at reducing the spatial resolution of a video. This process is among the most difficult tasks to achieve because of the numerous parameters it manipulates. Indeed, modifying the video dimensions changes its bitrate, modifies the way the codec stores the video, etc.;

• Video Codec Transcoding: codec transcoding changes the codec (H.264, MPEG-2, etc.) of the input video stream into another one. In some cases, input and output video codecs are the same and only codec parameters (e.g. the level) differ. However, input and output standards are often different. For example, in order to allow a device that can only decode MPEG-2 streams to access H.264 content, video codec transcoding is required.
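The requantization principle behind transrating (see the first item above) can be sketched in a few lines of C. This is a minimal illustration under simplifying assumptions (flat quantizer, no rate-control loop, no per-coefficient weighting matrix), not the engine implemented in this paper.

```c
#include <stdint.h>

/* Requantize one 8x8 block of DCT coefficients with a coarser quantizer
 * scale. Small (mostly high-frequency) coefficients collapse to zero,
 * shrinking the entropy-coded output. Simplified model: real MPEG-2 and
 * H.264 quantization uses per-coefficient weighting matrices. */
void transrate_block(int16_t coeff[64], int q_in, int q_out)
{
    for (int i = 0; i < 64; i++) {
        int value = coeff[i] * q_in;          /* back to (approx.) DCT domain */
        coeff[i] = (int16_t)(value / q_out);  /* requantize with q_out > q_in */
    }
}
```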

2.3. Real-time video heterogeneous transcoding

The video adaptation process is complex:

• Many adaptation use cases exist, as presented above. Moreover, adaptations can be combined to compound their effects on the video stream; e.g. frame-resizing adaptation can be combined with frame-skipping adaptation to reduce both the frame rate and the frame resolution of the video stream;

• Adaptation tasks are computation-intensive applications that require high-performance resources.

Many approaches have been proposed in the literature at the algorithmic level to reduce the computational complexity of video adaptation while keeping high video quality ([1], [2]). These techniques are mainly based on reusing input video information (such as motion vectors and macroblock types). Data reuse helps simplify the re-encoding of the adapted video. Low-complexity ad-hoc techniques using this approach have been proposed in the literature to perform:

1. Video Spatial Downscaling according to embedded device characteristics (display size [3]-[5]). However, modifying the video dimensions while reusing input video information is not trivial. For example, when spatially downsizing by half, four macroblocks of the input video become one. To avoid a full search for the best metadata by the encoder, which implies tremendous computation, the metadata extracted from the incoming stream need to be merged (a minimal sketch is given after this list). Metadata merging has led to many proposals in the video adaptation research field ([1], [2]);

2. Video Transrating: to reduce video bitrate, several algorithmic approaches have been proposed ([9]-[11]). Most of them are based on removing high-order coefficients (in the frequency domain). This coefficient reduction is generally, but not only, achieved by raising the quantizer scale factor. The main issue in the transrating process is to find the quantizer scale factor that best matches the bitrate constraint without reducing the video quality too much;

3. Video Codec Transcoding: several algorithmic approaches have been proposed to allow a video codec change ([6]-[8]). These techniques are not generic and only allow specific-to-specific standard conversion. To address this issue, ad-hoc heterogeneous transcoders have been proposed: an H.263-to-H.264 pixel-domain frame transcoder [13], which can also be used to address the MPEG-2-to-H.264 case [8]; the MPEG-2-to-H.263 case has been addressed as well [6]. Since all video codecs (standards) represent pixels in the spatial domain, the adaptation process only has to translate the metadata parameters of the incoming codec into the parameters of the required codec.
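The metadata merging of item 1 can be made concrete with a small sketch. The code below derives the motion vector of a 2:1 downscaled macroblock from the four co-located input vectors by averaging and halving them; the plain average is a simplifying assumption, whereas [4] weights the vectors by block activity.

```c
/* Merge the motion vectors of four co-located input macroblocks into
 * one vector for the 2:1 spatially downscaled output macroblock.
 * Averaging then halving keeps the vector consistent with the new
 * picture dimensions; [4] refines this with activity-based weighting. */
typedef struct { int x, y; } mv_t;

mv_t merge_mv_2to1(const mv_t in[4])
{
    mv_t out;
    out.x = (in[0].x + in[1].x + in[2].x + in[3].x) / (4 * 2);
    out.y = (in[0].y + in[1].y + in[2].y + in[3].y) / (4 * 2);
    return out;
}
```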

Generally, codec transcoding is needed along with another adaptation; the combination is then called a heterogeneous adaptation. A downscaling process for H.26x (x = 1, 2 or 3) [4] and a rate-control system for MPEG-2-to-MPEG-4 transcoding [14] have been proposed. But with which parameters should the adaptation compute: those of the incoming codec or those of the required codec? Because a generic adaptation shall handle any codec, the parameter representation must not depend on the decoder or the encoder. We therefore propose a parameter representation defined specifically for adaptation: along with the pixel representation, an intermediate adaptation format is needed that is neither the decoder data format nor the encoder data format.

Adaptation has so far been addressed either as a homogeneous transcoding process (same codec) or as a specific heterogeneous transcoding process exploiting features of the addressed codecs. However, to the authors' knowledge, only very specific adaptation systems have been addressed; no generic adaptation system has been proposed that could be used for any kind of transcoding through a "configurable" architecture. In real-life systems, the input video stream and the embedded video device can require complex adaptations, i.e. video resizing, codec modification and bitrate reduction at the same time. The design process shall accommodate such considerations. For this reason, we propose a generic system and its implementation architecture supporting such real-life adaptation use cases.


Figure 3: Overall Adaptation Architecture

3. A GENERIC HARDWARE VIDEO ADAPTATION SYSTEM

3.1. Proposed video adaptation system

Our work is motivated by the absence of a generic low-computation approach to transcode a video. The video adaptation process has a fixed coarse architecture composed of a decoding path, an adaptation path and an encoding path. Because every path deeply depends on the others, supporting multiple codecs and adaptations is costly. Consider ni input video codecs, no output video codecs and na video adaptation techniques: up to ni x no x na ad-hoc transcoding systems exist (e.g. ni = no = 4 and na = 3 already yields up to 48 ad-hoc systems, versus 4 + 4 + 3 = 11 reusable modules with the approach proposed below). Implementing such a number of video transcoding architectures is not feasible.

Our proposed system (Figure 3) is implemented using a codesign-based architecture composed of a general-purpose processor and a set of hardware accelerators named engines. The first and final parts of the video processing task (respectively video stream pre-processing and post-processing), required for network data streaming, are implemented on the general-purpose processor, which can be located inside or outside the FPGA device. This choice was made because such network tasks are control-intensive. Most of the adaptation processing tasks, which are mainly computation-intensive, are implemented as dedicated hardware components in the FPGA device.

Three different hardware accelerators are required to implement a generic system for video adaptation:

1. the decoding engine - this hardware accelerator transforms the compressed video stream (codec x) into an intermediate video format;

2. the video adaptation engine - this hardware accelerator transforms the input intermediate video format according to new bitrate, frame-rate and picture-dimension constraints (if needed);

3. the encoding engine - this accelerator module generates the output video stream (codec y) from the intermediate video format data.

In order to reuse already developed encoder and decoder IPs, we propose to use format converters that translate the decoder output format into the common intermediate format, and the intermediate format into the encoder input format. Together with their respective converters, the decoder and encoder become the aforementioned decoding and encoding engines.
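Conceptually, the three engines compose into a pipeline around the shared intermediate format. The following C sketch is only our software illustration of that composition; the type and function names are assumptions, not the RTL interfaces.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative software model of the three engines. In hardware these
 * are streaming IP blocks; here each stage is a callback operating on
 * the shared intermediate format (struct ivf_stream, assumed name). */
struct ivf_stream;  /* intermediate video format, see Section 3.2 */

typedef void (*decode_engine_fn)(const uint8_t *in, size_t len,
                                 struct ivf_stream *out);   /* codec x -> IVF */
typedef void (*adapt_engine_fn)(struct ivf_stream *s);      /* resize/rate ops */
typedef void (*encode_engine_fn)(const struct ivf_stream *in,
                                 uint8_t *out, size_t *len); /* IVF -> codec y */
```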

Using these three modules, the most commonly needed adaptation use cases can be implemented. Indeed, the proposed architecture enables:

• Codec adaptation - implemented through the decoding and encoding engines. These codec-dedicated engines convert the video stream to the intermediate video format and vice versa. The video adaptation engine does not operate any change in this use case; the codec adaptation is done in the format converters;

• Video downsizing, bitrate and frame-rate reductions - implemented using the video adaptation engine. Here, the decoder and encoder engines are configured with the same codec, and the stream information (pixels and video metadata) is transformed by the video adaptation engine.

This approach reduces implementation complexity through the usage of an intermediate video format. Indeed, implementing video adaptations from codec i1 to codec o1 using video parameter adaptations a1 and a2 only requires implementing the hardware modules i1, o1, a1 and a2. This is less complex than the dedicated developments found in the literature, where both (i1+a1+o1) and (i1+a2+o1) must be implemented.

Once a codec has been developed, it can be added to the pool of available codecs supported by the adaptation platform (Figure 3).


Figure 4: FPGA Internal Architecture (labels: FPGA device, Static Zone, Reconfigurable Zone, Reconfiguration Manager (MicroBlaze), Decoding Engine with Format Converter, Encoding Engine with Format Converter, Adaptation Engine, Memory Controller, Video Stream Input, Video Stream Output)

Table 1: Standard Feature Summary [12]

Table 2: Xilinx IP Costs

The adaptation need not be re-developed. The interconnection with already developed codecs is seamless, saving a lot of development time and adding a lot of flexibility thanks to the intermediate video format.

3.2. Intermediate video format

To provide a generic system for video adaptation, a shared intermediate video format is required. This intermediate format was specified according to the requirements of commonly used video standards: it should support the maximum set of features of every codec so that the adaptation remains optimal.

As shown in Table 1, standards do not always use the same algorithms and data formats, so a common domain is needed to process data. The pixel domain is the obvious choice. Since H.264 has quarter-pixel motion precision, the adaptation shall possess quarter-pixel precision. The smallest motion-vector block size is 4x4 for H.264, so it is the granularity of the generic adaptation process. Information such as the motion vector, quantizer scale, etc. is attached to each block.

These video characteristics define the intermediate format and allow H.264-to-H.264 video adaptation with no information loss. Moreover, other video codecs such as H.262 or H.263, which have lower requirements, use only a subset of the intermediate format functionalities.
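Under these constraints (4x4 granularity, quarter-pixel motion precision, per-block metadata), one block record of the intermediate format could look like the sketch below. Field names and widths are our assumptions; the paper does not publish the exact layout.

```c
#include <stdint.h>

/* One 4x4 block of the intermediate video format. Granularity and
 * quarter-pel precision follow the H.264 worst case (Section 3.2);
 * the exact field layout is illustrative, not the implemented format. */
struct ivf_block {
    uint8_t pixels[4][4];   /* luma samples of the 4x4 block           */
    int16_t mv_x_qpel;      /* motion vector, units of 1/4 pixel       */
    int16_t mv_y_qpel;
    uint8_t quant_scale;    /* quantizer scale inherited from input    */
    uint8_t block_type;     /* intra / inter / skipped, codec-agnostic */
};
```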

3.3. Hardware implementation of the proposed system

The previous sections presented the structure of the system architecture and selected an intermediate video format. This architecture is generic enough to allow the execution of all common adaptation use cases. However, the system requires configuration capabilities, i.e. the ability to change the decoding/encoding engines according to the video stream format. This section discusses the implementation choices made to allow system configuration depending on the adaptation use case. These choices were made considering the area cost of hardware modules. Indeed, there are several ways to develop a multimode system:

1. Implementing all the modules in the device and configuring the datapath according to the needed processing;

2. Implementing multi-functionality IPs that share hardware resources between mutually exclusive processings [17];

3. Implementing a generic architecture with partial reconfiguration, in order to load/unload IPs according to the executed use case [18].

The first solution is the most efficient in terms of time to market, but it leads to huge area costs. The second one is efficient when mutually exclusive processings share a large set of identical hardware resources; however, area saving is not always possible and development times are very long. The last solution is an interesting tradeoff providing low implementation cost. It requires an FPGA device with partial reconfiguration functionality, and partial reconfiguration has other drawbacks: it needs bitstream storage memory and incurs reconfiguration runtimes.

According to the characteristics of the processing engines in our system, a mixed solution has been developed; the engine binding is shown in Figure 4. This architecture has been deployed on a Virtex-6, which allows dynamic partial reconfiguration. Next-generation Xilinx FPGAs will provide dynamic partial reconfiguration features on low-cost devices.

3.3.1. Dynamically reconfigurable modules

In order to develop a flexible adaptation system, most video codecs must be supported. The encoding and decoding engines are the most area-expensive resources in our system; area cost figures for the H.264 and MPEG-2 codecs are provided in Table 2.

Figure 5: Generic Adaptation Engine

Figure 6: Format Converter Architecture

Integrating every encoding and decoding engine in a low-cost FPGA device is impossible due to the required amount of resources. Indeed, consider a system providing N-to-N codec adaptation possibilities: it requires N encoding engines and N decoding engines.

Moreover, these engines are poor candidates for multimode IP design: their internal processings are too different to efficiently share hardware resources.

A solution comes from current Xilinx FPGA devices, which provide partial hardware reconfiguration at runtime [16]. Partial dynamic reconfiguration is an efficient way to reduce the area complexity of a system in which most hardware modules never execute at the same time: in the proposed system, adapting one video stream requires only one decoding engine. Moreover, this approach allows later deployment of new video encoding or decoding engines to increase system capabilities.

We decided to implement the encoding and decoding engines in reconfigurable zones. Bitstream files are stored in an external memory and loaded by the reconfiguration manager through a reconfiguration bus; engine loading is performed after the negotiation between the home gateway and the embedded device (a hypothetical loading routine is sketched below). With this approach, the area cost is limited to the most expensive IP of the set, since each reconfigurable zone has to be dimensioned according to the most expensive IP it hosts.
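The loading sequence can be summarized as: negotiate the codec, fetch the matching partial bitstream from external memory, and stream it to the reconfiguration port. The routine below is a hypothetical sketch; icap_write() and bitstream_table are assumed stand-ins, since the paper does not detail the driver interface.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical loader run by the reconfiguration manager (MicroBlaze).
 * bitstream_table maps each negotiated codec to a partial bitstream
 * stored in external memory; icap_write() stands in for the actual
 * Xilinx ICAP driver call, which is not detailed in the paper. */
extern int icap_write(const uint8_t *bitstream, size_t len);  /* assumed */

struct partial_bitstream { const uint8_t *data; size_t len; };
extern const struct partial_bitstream bitstream_table[];      /* per codec */

int load_decoding_engine(unsigned negotiated_codec)
{
    const struct partial_bitstream *bs = &bitstream_table[negotiated_codec];
    return icap_write(bs->data, bs->len);  /* reconfigure the zone */
}
```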

3.3.2. Configurable static hardware modules

The video adaptation engine (Figure 5) is the module that performs bitrate, frame-rate and picture-dimension adaptation. In the worst case, the three adaptations must be performed at the same time. Because these processings work on the intermediate video format, they are independent of the video codecs and hence do not require reconfiguration at runtime. Instead, a configuration signal travels along with the data to enable or disable each processing; this configuration signal allows the static adaptation engine to perform any kind of adaptation on the intermediate video stream (a sketch of such a configuration word is given below).
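A plausible shape for this configuration word is a set of enable bits plus one target parameter per atomic adaptation, as sketched below. The actual signal map is not given in the paper, so every field here is an assumption.

```c
#include <stdint.h>

/* Illustrative configuration word for the static adaptation engine.
 * Each atomic adaptation is enabled independently and carries its
 * target parameter; the real signal map is not given in the paper. */
struct adapt_config {
    unsigned transrate_en : 1;   /* bitrate reduction on/off       */
    unsigned frameskip_en : 1;   /* temporal downscaling on/off    */
    unsigned resize_en    : 1;   /* spatial downscaling on/off     */
    uint16_t target_kbps;        /* used when transrate_en is set  */
    uint8_t  target_fps;         /* used when frameskip_en is set  */
    uint8_t  resize_ratio_log2;  /* e.g. 1 => halve each dimension */
};
```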

Finally, the format converter engines (from and to the intermediate video format) are the modules that convert the encoder/decoder video formats to the internal one. These modules mainly perform data reordering; an RTL-like description of such a module is presented in Figure 6. The main behavioral change between use cases comes from the data reordering. At this stage, we bind each format converter to its encoder/decoder to form the encoding/decoding engines, and thus locate it in a partial reconfiguration zone. For development considerations, we believe that the most efficient way to implement such modules is as a multi-functionality module, where a configuration signal selects the right virtual addressing memory and enables/disables metadata processing. Developing a generic format converter will be studied in future works; the format converter could then move from a dynamically reconfigurable zone to a static zone.

4. IMPLEMENTATION RESULTS

The proposed architecture has been implemented on an XpressV6 board from PLDA with an LX240T Virtex-6 chip from Xilinx. For evaluation purposes, we integrated the whole chain (encoding, decoding and adaptation engines) in the system prototype. Table 3 gives the area complexity of the different engines.

For architecture validation purposes, we tested an MPEG-2 encoder/decoder based adaptation chain. In the near future, a low-complexity H.264 encoder/decoder will be added to the codec adaptation pool. The evaluated adaptations practically confirm that our design is generic and upgradable. The area complexity difference between the Xilinx IPs (Table 2) and the integrated ones comes from different design goals.

Table 3: Implementation Results

Resources    | Adaptation | MPEG-2 (Enc) | MPEG-2 (Dec) | Total
LUT          | 1.6k       | 8k           | 7k           | 17.6k
REG          | 1.8k       | 6k           | 6k           | 13.8k
BRAM (16k)   | 30         | 15           | 15           | 60
Freq (MHz)   | 200        | 150          | 130          | 130


Whereas Xilinx implemented stand-alone encoders/decoders, we designed decoders/encoders dedicated to transcoding: our designs reuse information contained in the incoming video stream (for example, we do not have to re-estimate motion vectors). This architecture specialization efficiently reduces the area complexity, as shown in Table 3 (the engines handle video up to 1080p).

The implemented system processes a video macroblock in less than 400 cycles. Thus, operating at 100 MHz, it adapts an HD video stream (1080p) in real time (0.96 s is needed to compute 1 s of video). The minimal cost of the system (without the partial reconfiguration manager, implemented using an ICAP component and a MicroBlaze IP) is about 18k LUTs, 14k registers and 60 BRAMs. Adding H.264 capabilities will only increase the area cost of the reconfigurable zones; the adaptation engine and format converter costs will stay the same.
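As a back-of-the-envelope check of this figure (assuming 16x16 macroblocks and 30 frames per second, neither of which the paper states explicitly):

```c
#include <stdio.h>

/* Sanity check of the 1080p real-time figure: 1920x1080 gives
 * 120 x 68 = 8160 macroblocks per frame (16x16 blocks, height rounded
 * up). At <400 cycles per macroblock and 100 MHz, one second of 30 fps
 * video needs under ~0.98 s of processing, an upper bound consistent
 * with the reported 0.96 s. The frame rate is our assumption. */
int main(void)
{
    const double mb_per_frame = 120.0 * 68.0;   /* 1920/16 x ceil(1080/16) */
    const double fps = 30.0, cycles_per_mb = 400.0, clock_hz = 100e6;
    double seconds = mb_per_frame * fps * cycles_per_mb / clock_hz;
    printf("worst-case processing per second of video: %.2f s\n", seconds);
    return 0;  /* prints ~0.98 */
}
```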

5. CONCLUSION AND FUTURE WORKS

In this paper, we addressed the processing-chain development issues in video adaptation, mainly the time to market when addressing multiple codecs and multiple adaptations. We proposed a multi-level generic architecture: (1) the use of a network device that allows seamless adaptation; (2) the use of an intermediate format that allows adapting from any codec to any codec while performing any adaptation, with only a few restrictions on which algorithms can be implemented inside the adaptation engine.

The proposed architecture for video adaptation drastically reduces the time to market of IP development and enables dynamic reconfiguration features that lower final production costs. The proposed system has been implemented and achieves real time for 1080p video streams when processing at 100 MHz.

In future works, we aim at integrating multiple hardware video codecs, such as H.264, to validate the low time-to-market characteristics. This step will allow us to dimension the reconfigurable zone characteristics precisely.

6. REFERENCES

[1] I. Ahmad, X. Wei, Y. Sun and Y.-Q. Zhang, "Video Transcoding: An Overview of Various Techniques and Research Issues," IEEE Transactions on Multimedia, October 2005.

[2] J. Xin, C.-W. Lin and M.-T. Sun, "Digital Video Transcoding," Proceedings of the IEEE, vol. 93, no. 1, January 2005.

[3] A. Vetro et al., "Complexity-Quality Analysis of Transcoding Architectures for Reduced Spatial Resolution," IEEE Transactions on Consumer Electronics, vol. 48, no. 3, pp. 515-521, August 2002.

[4] B. Shen, I. K. Sethi and B. Vasudev, "Adaptive Motion-Vector Resampling for Compressed Video Downscaling," IEEE Transactions on Circuits and Systems for Video Technology, September 1999.

[5] P. Yin et al., "Drift Compensation for Reduced Spatial Resolution Transcoding," IEEE Transactions on Circuits and Systems for Video Technology, vol. 12, no. 11, pp. 1009-1020, November 2002.

[6] N. Feamster and S. Wee, "An MPEG-2 to H.263 Transcoder," in SPIE Voice, Video and Data Communications Conference, September 1999.

[7] J. Xin, A. Vetro and H. Sun, "Converting DCT Coefficients to H.264/AVC," IEEE Pacific-Rim Conference on Multimedia (PCM), Lecture Notes in Computer Science, vol. 3332, p. 939, 2004.

[8] H. Kalva, B. Petljanski and B. Furht, "Complexity Reduction Tools for MPEG-2 to H.264 Video Transcoding," WSEAS Transactions on Information Science & Applications, vol. 2, pp. 295-300, March 2005.

[9] Z. Lei and N. D. Georganas, "A Rate Adaptation Transcoding Scheme for Real-Time Video Transmission over Wireless Channels," Signal Processing: Image Communication, vol. 18, pp. 641-658, 2003.

[10] A. Eleftheriadis and D. Anastassiou, "Meeting Arbitrary QoS Constraints Using Dynamic Rate Shaping of Coded Digital Video," in 5th International Workshop on Network and Operating System Support for Digital Audio and Video, April 1995.

[11] M. Lavrentiev and D. Malah, "Transrating of MPEG-2 Coded Video via Requantization with Optimal Trellis-Based DCT Coefficients Modification," EUSIPCO 2004, Vienna, Austria, September 6-10, 2004.

[12] J. Golston and A. Rao, "Video Compression: System Trade-Offs with H.264, VC-1 and Other Advanced CODECs," Texas Instruments white paper, August 2006.

[13] J. Bialkowski, M. Barkowsky and A. Kaup, "Overview of Low-Complexity Video Transcoding from H.263 to H.264," IEEE International Conference on Multimedia and Expo, July 9-12, 2006.

[14] Y. Sun, X. Wei and I. Ahmad, "Low-Delay Rate-Control in Video Transcoding," ISCAS 2003, May 25-28, 2003.

[15] W. Aubry, B. Le Gal, D. Dallet, S. Desfarges and D. Negru, "A System Approach for Reducing Power Consumption of Multimedia Devices with a Low QoE Impact," in IEEE International Conference on Electronics, Circuits and Systems (ICECS), December 2011, pp. 5-8.

[16] K. Paulsson, M. Hubner, S. Bayar and J. Becker, "Exploitation of Run-Time Partial Reconfiguration for Dynamic Power Management in Xilinx Spartan III-based Systems," in Field Programmable Logic and Applications (FPL 2008), September 2008, pp. 699-700.

[17] E. Casseau and B. Le Gal, "Design of Multi-Mode Application-Specific Cores Based on High-Level Synthesis," Integration, vol. 45, no. 1, pp. 9-21, 2012.

[18] R. Tessier and W. Burleson, "Reconfigurable Computing for Digital Signal Processing: A Survey," Journal of VLSI Signal Processing Systems, vol. 28, pp. 7-27, May 2001.
