
Resource Aware Media Framework for Mobile Ad hoc Networks

Adeel Akram, Shahbaz Pervez, Shoab A. Khan University of Engineering and Technology, Taxila, Pakistan.

Email: {adeel, shahbaz, shoab}@uettaxila.edu.pk

Abstract—In this paper we present a framework that acts as a distributed media encoder/decoder for real-time multimedia streams. The framework partitions the stages of a multimedia encoder/decoder and distributes each stage to a node that has at least the minimum capabilities required for that task; the combined work of these nodes produces the final encoded or decoded multimedia stream. Since encoding is a resource-hungry process, we divide it into separable stages and execute them on multiple nodes, while decoding is performed on the single intended target device when it is capable of doing so. If the target device is less capable, the middleware can convert the encoded video into a format suitable for the client node.

Keywords: Computation Offloading, Task Partitioning, Time-constrained Task Scheduling, Multimedia over Ad hoc Networks, OMAP Architecture

I. INTRODUCTION:

With the phenomenal improvement in the capabilities of devices that can become part of ad hoc networks, the demand for higher-level, time-constrained services such as multimedia and voice communication over ad hoc networks is increasing.

Multimedia transmission over an ad hoc network is an application that requires computational resources as well as high-throughput network links to deliver information-rich content to the receiving nodes in real time.

Digital multimedia transmission over an ad hoc network requires encoding the source media in a format that is resilient to the errors and delays caused by intermittent jitter from route changes or link failures.

Moreover, as the intermediate nodes in an ad hoc network act as repeaters that forward multimedia packets towards the destination, the probability of failure increases with the number of intermediate nodes.

II. PROBLEM DEFINITION:

As multimedia scheduling is a multi-objective, constrained problem with all its known difficulties, our objective is to minimize the complexity of the scenario while ensuring delivery of content to the desired target node within the bounded time frame imposed by multimedia traffic constraints.

Understanding the actual scenario is the first step towards solving this complex real-world problem.

A. System Scenario

Consider a wireless ad hoc network composed of mobile nodes that utilize the OMAP (Open Multimedia Applications Platform) architecture.

For the sake of simplicity we assume that all mobile nodes have the same capabilities and characteristics. Each mobile node is equipped with a camera, a low-power microprocessor, and an 802.11b WiFi network interface card that allows it to communicate over the wireless channel.

OMAP is a combined software and hardware architecture that enables multimedia applications in third-generation (3G) wireless appliances; it is targeted at superior performance in video and speech processing applications.

In our experiments, we have used iPAQ6365 PDAs that are equipped with the TI OMAP 1510 Rev 2. It uses a dual-core processor architecture optimized for efficient operating system and multimedia code execution.

The TMS320C55x DSP core performs the multimedia and other signal processing related tasks while utilizing lowest system-level power consumption.

The TI-enhanced ARM™ 925 core with an added LCD frame buffer runs command and control functions and user interface applications.

Performance of multimedia algorithms is usually measured in Mcycles/s, defined as the frequency at which the core must run to sustain real-time speech coding and decoding. The DSP core of the OMAP 1510 can achieve up to 200 Mcycles/s.

T. Sobh et al. (eds.), Innovative Algorithms and Techniques in Automation, Industrial Electronics and Telecommunications, 27–30.

© 2007 Springer.

Task Type (units: Mcycles/s)                 ARM 9E   StrongARM 1100   TMS320C5510
MPEG4/H.263 Decoding (QCIF @ 15 fps)            33          34              17
MPEG4/H.263 Encoding (QCIF @ 15 fps)           179         153              41
JPEG Decoding (QCIF)                             2.1         2.06            1.2
MP3 Decoding                                    19          20              17
Echo Cancellation, 16-bit (32 ms, 8 kHz)        24          39               4
Echo Cancellation, 32-bit (32 ms, 8 kHz)        37          41              15
Avg. cycle ratio vs. TMS320C5510                 3.09        3.04            1

Table 1: Performance comparison of the OMAP architecture's TMS320C5510 DSP core with RISC processors currently available for PDAs.
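The cycle costs in Table 1 can be combined into a simple feasibility check: a set of concurrent media tasks fits on the DSP core only if their summed cycle demand stays within its 200 Mcycles/s budget. The sketch below is illustrative, not part of the paper; the task names are ad hoc labels for the Table 1 rows.

```python
# Feasibility check for concurrent media tasks on the OMAP 1510's
# TMS320C5510 DSP core. Cycle costs come from the C5510 column of Table 1.

DSP_BUDGET_MCYCLES = 200  # OMAP 1510 DSP core capability

C5510_COST = {            # Mcycles/s per task (Table 1, C5510 column)
    "mpeg4_decode_qcif15": 17,
    "mpeg4_encode_qcif15": 41,
    "jpeg_decode_qcif": 1.2,
    "mp3_decode": 17,
    "echo_cancel_16bit": 4,
    "echo_cancel_32bit": 15,
}

def fits_on_dsp(tasks, budget=DSP_BUDGET_MCYCLES):
    """Return (fits, total_load) for a list of concurrent task names."""
    total = sum(C5510_COST[t] for t in tasks)
    return total <= budget, total

# A two-way video call decodes and encodes video, decodes audio,
# and runs 32-bit echo cancellation.
ok, load = fits_on_dsp(["mpeg4_decode_qcif15", "mpeg4_encode_qcif15",
                        "mp3_decode", "echo_cancel_32bit"])
print(ok, load)  # True 90 -- well within the 200 Mcycles/s budget
```

The same check applied to the ARM columns of Table 1 shows why the paper offloads encoding: a single MPEG4/H.263 encode alone costs 179 Mcycles/s on the ARM 9E.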

Various video encoding algorithms have been devised to match different hardware resources. For example, H.261 is an audio/video codec for low-quality online video conferencing and chat with voice and/or video, while H.263/i263 is an audio/video codec for medium-quality online video conferencing and chat.

H.264 is the MPEG-4 Advanced Video codec, also known as MPEG-4 Part 10, H.26L, or AVC. This codec offers excellent compression with excellent picture quality and is intended to be a universal video codec. H.323 is an ITU-T standard for transferring multimedia videoconferencing data over packet-switched networks such as IP networks.

The complexity and hardware resource requirements of these codecs increase with the enhancement in video/audio quality.

Figure 1: The Resource Aware Media Framework dedicates various ad hoc nodes to specific tasks. Node 1 is the video source node, nodes 1 to 4 act as computation-sharing nodes, node 5 acts as the consolidator node, and nodes 6 and 7 act as relay nodes.

B. Communication Procedure

• When node 1 wants to initiate a multimedia transfer, it sends a RREQ packet to all neighboring nodes with node 9 as the destination.

• Each neighboring node provides its relative distance (hops) from node 1 and node 9 in their RREP packets.

• The source node (1) sends a special broadcast packet, AROL, to all nodes. The AROL packet contains the list of all nodes that will participate in the communication along with their Assigned ROles during this process, i.e. 1=Compute, 2=Consolidator, 3=MDRelay, 4=Source, 5=Destination.

• In case of failure or removal of a node from the network at any time, the Source node (1) sends an AROL broadcast packet to all the nodes to inform them about the Change of ROLe of node(s).

• In case of low battery or overload, any node can send a RROL packet to the source node to Request a Role change.

• The assignment of “AROL 1” depends on the availability of computational resources at the nodes closest to the source node.

• If any high-performance computers are present in the ad hoc network, the Assign Role “AROL 2” packet is preferably sent to such a node. Moreover, the source node can assign the “Consolidation” role to more than one node if no single node is capable of performing that task individually.



• “AROL 3” is preferably assigned to nodes that are closer to the consolidator(s) and to the destination node.

• Each node, on receiving an AROL packet containing its address, sends a Role ACKnowledgement packet RACK to the source node to announce that it has assumed its role.

• The source node (1) sends a JDES packet, which provides the Description of the Job to be handled by all participating nodes.

• The JDES packet provides parameters specific to that transmission, such as video codec type, frame format, and bit rate.

• The source node sends RAW frames to the “Compute” nodes (1 to 4 in the example scenario). These nodes compress/encode the source frames in the format described in the JDES packet and send them to the “Consolidator” node(s).

• The “Consolidator” node (5 in the example) assembles the encoded frames according to the video format and forwards them to the “MDRelay” nodes. MDRelay nodes can also share their loads in case of network congestion or overload.

• The destination node provides feedback on the quality of the stream being received at its end through the reverse path to the source node. This Feedback packet, FBCK, provides essential information used by the framework to improve the quality of the ongoing stream in real time; it also tells the source how much information the destination node has received.

• When the source receives acknowledgements of all intended information from the destination, it sends a Transmission End TEND broadcast packet to the participating nodes.

• The participating nodes clear their roles and go into idle mode until the next transmission.
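The role-assignment handshake above can be sketched as in-memory message passing. This is a hedged illustration, not the paper's implementation: the packet field names and the Python structures are our assumptions; the paper specifies only the packet types (AROL, RACK, JDES, TEND) and the role codes.

```python
# Sketch of the AROL/RACK/JDES handshake described in the bullets above.
# Field names and data structures are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

ROLES = {1: "Compute", 2: "Consolidator", 3: "MDRelay",
         4: "Source", 5: "Destination"}

@dataclass
class Node:
    addr: int
    role: Optional[int] = None

    def on_arol(self, assignments):
        """Assume the role listed for this node and reply with a RACK."""
        if self.addr in assignments:
            self.role = assignments[self.addr]
            return ("RACK", self.addr)
        return None  # not a participant in this transmission

def source_assign_roles(nodes, assignments):
    """Source broadcasts AROL; sends JDES once every RACK has arrived."""
    racks = [r for r in (n.on_arol(assignments) for n in nodes) if r]
    if len(racks) == len(assignments):
        # Job description: codec type, frame format, bit rate, etc.
        return ("JDES", {"codec": "MPEG4", "format": "QCIF", "fps": 15})
    return None  # re-broadcast AROL to reach missing nodes

nodes = [Node(addr=a) for a in range(1, 10)]
# Example scenario of Figure 1: nodes 1-4 compute, 5 consolidates,
# 6-7 relay, 9 is the destination.
assignments = {1: 1, 2: 1, 3: 1, 4: 1, 5: 2, 6: 3, 7: 3, 9: 5}
jdes = source_assign_roles(nodes, assignments)
print(jdes)  # ('JDES', {'codec': 'MPEG4', 'format': 'QCIF', 'fps': 15})
```

An RROL request or a node failure would simply trigger another `source_assign_roles` call with an updated assignment map, mirroring the Change-of-ROLe broadcast in the procedure.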

C. Media Framework

Figure 2: General architecture of the Media Framework for complete end-to-end video transmission and reception over an ad hoc network.

The framework is divided into three distinct blocks:

• Media Source Components

• Video Middleware (Transcoder)

• Media Destination Components

The Media Source Components may be a PDA transmitting RAW video frames from its camera, a video streaming source with a high bit rate, or a video source whose format is not decodable by the receiver node or requires too much computation for an ordinary ad hoc receiver node. In Figure 1, node 1 is the media source.

The Video Middleware is a modular transcoder capable of converting video formats in real time. A key aspect of its design is that it can work in a distributed fashion over different groups of ad hoc nodes to maximize performance. The Middleware transcoder selects an appropriate video profile to suit the resource constraints of the target node.

All nodes have the Middleware and Client Components installed, but the selection of a node to act as a middleware node depends on its Device and Network Profiles. If a device has sufficient resources and network bandwidth, it is considered capable of becoming a middleware node. In Figure 1, nodes 1 to 4 share the Video Middleware load.

The Media Destination Components are the clients in the ad hoc network that can communicate with the Media Framework through its User Client component. The Client component creates the device's Resource Profile and Network Profile, which also help in selecting a device as a middleware node. Node 9 in Figure 1 is the destination node running the multimedia client software.

The framework identifies all the nodes that are part of the ad hoc network and tries to map different stages of the framework onto different sets of nodes, called groups. The number of nodes in a group depends on the abilities (available resources) of the nodes. Each group performs a specific task collaboratively.
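One way to realize this group mapping is a greedy allocation: sort nodes by spare capacity and keep adding the most capable remaining node to a stage's group until the group covers that stage's demand. The paper does not specify the allocation rule; the scoring and numbers below are our illustrative assumptions.

```python
# Greedy sketch (assumed policy) of mapping framework stages onto groups
# of nodes, sized by the nodes' available resources.

def form_groups(node_capacity, stage_demand):
    """node_capacity: {addr: Mcycles/s free}; stage_demand: {stage: Mcycles/s}.
    Returns {stage: [addrs]} so each group's combined capacity meets demand."""
    free = sorted(node_capacity.items(), key=lambda kv: -kv[1])
    groups, i = {}, 0
    for stage, demand in stage_demand.items():
        group, covered = [], 0
        while covered < demand and i < len(free):
            addr, cap = free[i]        # most capable unassigned node
            group.append(addr)
            covered += cap
            i += 1
        groups[stage] = group
    return groups

caps = {1: 50, 2: 40, 3: 40, 4: 30, 5: 80}        # assumed spare Mcycles/s
demands = {"encode": 150, "consolidate": 60}       # assumed stage demands
print(form_groups(caps, demands))
```

Each node's capacity figure would come from the Resource Profile that the Client component maintains, so re-running the allocation after an RROL request naturally rebalances the groups.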

With reactive ad hoc routing protocols, whenever a multimedia transaction is about to start, the communication procedure is run to assign their respective roles to all devices that are part of the Multimedia Framework.

In ad hoc networks based on proactive routing protocols, the communication procedure described above is executed from time to time, during the transmission of routing table update packets; the Media Framework is therefore always ready for media transmission.

III. CONCLUSION:

The Media Framework allows less capable mobile devices to perform computation-intensive tasks by following the novel task partitioning algorithm proposed in this paper.

The Algorithm assigns different roles to all nodes participating in the communication.

As a result of implementing the Media Framework on ad hoc nodes, resource-constrained nodes are able to perform complex tasks such as real-time video encoding by distributing different stages of the process across different nodes. The Media Middleware acts as a distributed, collaborative video transcoder that assigns tasks to different nodes.

