auto packet = HerdDataPacket::create();
packet->addField(static_cast<unsigned short>(MemProtocol::MODIFY_OBJECT));
packet->addField(_object->id());
packet->addField(_object->getData(), _object->getSize()); // the data
packet->encodeHeader();
m_node->handlePacket(packet);
}

Code 4.9: Informing the publisher to update an object.

So far we can build a tree of data, subscribe to specific parts of the tree and get informed about any action performed on what we have subscribed to. For each action we can implement a custom routine that handles it and invokes new operations, and it is possible to modify the data and have the update propagated back to the publisher. One obstacle remains: the initial synchronization of a data model with dependencies. In the tree of data each object stands on its own, its only relation being its parent cache. On the application level, however, the data model may have complex dependencies between data elements, where for example data element A can only exist if data element B exists or has a certain value. This is no problem if the application already has a valid data model that has to be published, since each data element is simply converted to a data object and the relation can be stored in the data by using the identifier of the object. However, if the application is linking to a publisher which only pushes a set of data elements, then the application must first wait until the subscription has finished synchronizing. After this it can invoke the buildFromCache function, which traverses from a given cache through all child caches and objects; for each of these an invokeUpdate call is made with add_cache or add_object as the action. This in turn calls the listener's onAddCache/onAddObject functions, which are then used for building the appropriate data model. A sketch of this traversal is given below.
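The traversal can be pictured as a depth-first replay of add actions. The names buildFromCache, invokeUpdate, add_cache and add_object come from the text above; the declarations and accessor names (objects(), children()) are illustrative assumptions, not the framework's actual interface.

#include <vector>

// Illustrative declarations (assumed for this sketch).
enum class Action { add_cache, add_object };
class HerdObject;
class HerdCache {
public:
    const std::vector<HerdObject*>& objects()  const;
    const std::vector<HerdCache*>&  children() const;
};
void invokeUpdate(Action action, void* element);   // notifies the listeners

// Depth-first walk: replay every object and cache as an add action so the
// listener's onAddObject/onAddCache callbacks can rebuild the data model.
void buildFromCache(HerdCache* cache)
{
    for (HerdObject* object : cache->objects())
        invokeUpdate(Action::add_object, object);
    for (HerdCache* child : cache->children())
    {
        invokeUpdate(Action::add_cache, child);
        buildFromCache(child);                     // recurse into the subtree
    }
}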

4.2.4 Media Plugin Library

We can define two types of data: generic data and image data. Generic data is basically any kind of data that we want to transmit; this data should arrive exactly the same as it was sent.

The image data, on the other hand, can be compressed with a degree of loss in image quality and can furthermore be treated as a single image or as a stream of images (video). Therefore three different types of compression are provided: lossless, data-type-independent compression is done by zip compression (for development zlib was used [6]); single-image lossy compression uses JPEG (libjpeg [7]); and for continuously streaming rendered images the video codec VP8 (WebM [8]) is used. A minimal sketch of the lossless path is given after the footnotes.

[6] "zlib was written by Jean-loup Gailly (compression) and Mark Adler (decompression)" http://www.zlib.net/

[7] "Developed by Tom Lane and the Independent JPEG Group (IJG) during the 1990's and it is now maintained by several developers" http://libjpeg.sourceforge.net/

[8] Developed by On2 Technologies http://www.webmproject.org/code/
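For the lossless case the zip compression amounts to a one-shot call into zlib. The following is a minimal sketch of what HerdZipNode's encoding step could look like; the wrapper function zipCompress is hypothetical, while compressBound and compress are zlib's actual API.

#include <vector>
#include <zlib.h>

// Compress an arbitrary byte buffer losslessly; returns an empty vector on failure.
std::vector<unsigned char> zipCompress(const unsigned char* data, uLong size)
{
    uLongf destLen = compressBound(size);        // worst-case compressed size
    std::vector<unsigned char> out(destLen);
    if (compress(out.data(), &destLen, data, size) != Z_OK)
        return {};
    out.resize(destLen);                         // shrink to the actual size
    return out;
}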

[Class diagram: the media nodes HerdVpxEncoderNode, HerdVpxDecoderNode, HerdVpxViewerNode, HerdZipNode and HerdJpgNode build on HerdNode and wrap the external libraries LibVpx, zLib and LibJpeg (via Vp8Encoder, Vp8Decoder and HerdVpxDecoder). The viewer combines DisplayYuv, InputHandler, RawInputProtocol, DeviceProfile, ResetFrameCommand/HerdCommand, SDL and OpenGL, and connects through a HerdTcpNode.]

Figure 4.21: Vpx (encoding, decoding), Jpeg and Zip media nodes diagram.

Figure 4.21 shows how these compression libraries are encapsulated, together with a viewer for the VP8 codec. The Vp8Encoder and Vp8Decoder encapsulate the external library LibVpx and provide a specific interface for using it (encoding or decoding).

4.2.4.1 Encoding and decoding

For encoding, the nodes HerdVpxEncoderNode, HerdZipNode and HerdJpgNode work in a similar manner. A HerdDataPacket is provided to the handlePacket function of the encapsulating node, where it is buffered into a thread-safe array. A separate thread checks the array and compresses any new data. After compression the data is placed into a new HerdDataPacket object and forwarded to the node's children (e.g. a HerdTcpNode). The HerdZipNode and HerdJpgNode are set to encoding through the use of node attributes, whereas VP8 is provided in separate encoding and decoding nodes. Both the Jpg and Vp8 encoders assume that the data provided is in RGB format. In general the procedure is as follows: the Vp8Encoder initializes itself for encoding with pre-set variables such as the target bit-rate, the key frame interval, width and height. Then, within the running thread, the encode function is called upon receiving data and the dimensions of the given frame are checked against the encoder's initialized dimensions. The frame data is then extracted from the HerdDataPacket, converted into YUV format [9] and copied into the buffer for encoding. The output for a given frame is provided in two parts: first the header, which is 12 bytes, and then the body. The data is copied into a HerdDataPacket, together with the base protocol identifier frame (code 4.7). A sketch of the threaded buffering scheme is given below.

[9] YUV is a colour space with reduced bandwidth for the chrominance components http://en.wikipedia.org/wiki/YUV
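The buffering scheme just described can be summarized in a short sketch. Only HerdDataPacket and the node names come from the text; the member names (m_mutex, m_queue, m_running) and the encode/forward helpers are assumptions for illustration, not the framework's real interface.

#include <deque>
#include <memory>
#include <mutex>

struct HerdDataPacket;                                // from the framework
using HerdDataPacketPtr = std::shared_ptr<HerdDataPacket>;

class HerdVpxEncoderNode {
public:
    // Called by the parent node: buffer the packet in a thread-safe array.
    void handlePacket(HerdDataPacketPtr packet) {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_queue.push_back(std::move(packet));
    }
private:
    // Runs in a separate thread: drain the buffer, compress, forward.
    void encodeLoop() {
        while (m_running) {
            HerdDataPacketPtr packet;
            {
                std::lock_guard<std::mutex> lock(m_mutex);
                if (!m_queue.empty()) {
                    packet = m_queue.front();
                    m_queue.pop_front();
                }
            }
            if (packet)
                forward(encode(packet));              // e.g. to a HerdTcpNode
        }
    }
    HerdDataPacketPtr encode(HerdDataPacketPtr raw);  // RGB -> YUV, VP8 encode
    void forward(HerdDataPacketPtr encoded);          // pass to child nodes
    std::mutex m_mutex;
    std::deque<HerdDataPacketPtr> m_queue;
    bool m_running = true;
};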

The decoding works similarly to the encoding, with the difference that the decoded data is handled again by the node and then either placed into a buffer or sent to a given callback function.

The decoding output of VP8 frames, however, is handled in a special way. The data from the HerdDataPacket object is decoded into a YUV frame but is not directly converted into RGB; instead, separate buffers for each channel (Y, U, V) are directly accessible from the outside. The HerdVpxViewer takes advantage of this by converting the data itself according to the capabilities of the hardware and platform it is running on. Three different conversions are offered: the straightforward single-threaded CPU conversion, which is the most intensive but can run on any system; a more optimized threaded version, which gives more benefit with larger image dimensions; and a GPU version, which uses an OpenGL fragment shader for the conversion. The CPU variant is sketched below.
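As an illustration of the CPU path, here is a minimal sketch using the common BT.601 integer approximation. It assumes the decoder's three channel buffers with U and V at half the Y resolution; the function name and the exact coefficients are not taken from the viewer's code.

#include <algorithm>
#include <cstdint>

// Convert planar YUV (U/V at half resolution) to packed 24-bit RGB.
void yuvToRgb(const std::uint8_t* y, const std::uint8_t* u, const std::uint8_t* v,
              int width, int height, std::uint8_t* rgb)
{
    for (int j = 0; j < height; ++j) {
        for (int i = 0; i < width; ++i) {
            int Y = y[j * width + i] - 16;
            int U = u[(j / 2) * (width / 2) + i / 2] - 128;
            int V = v[(j / 2) * (width / 2) + i / 2] - 128;
            int r = (298 * Y + 409 * V + 128) >> 8;             // BT.601
            int g = (298 * Y - 100 * U - 208 * V + 128) >> 8;
            int b = (298 * Y + 516 * U + 128) >> 8;
            *rgb++ = static_cast<std::uint8_t>(std::min(std::max(r, 0), 255));
            *rgb++ = static_cast<std::uint8_t>(std::min(std::max(g, 0), 255));
            *rgb++ = static_cast<std::uint8_t>(std::min(std::max(b, 0), 255));
        }
    }
}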

4.2.4.2 Remote rendering client viewer

The HerdVpxViewer is used as a general client to any VP8 video streaming service and supports streaming over TCP and UDP. Upon execution the service creates a single window in which the incoming frames are displayed. The rendering uses the Simple DirectMedia Layer [10] and can use either the software renderer or the accelerated renderer using OpenGL (SDL also supports Direct3D, however no rendering path for this has been implemented). Several checks are used to determine the appropriate renderer. If acceleration is supported (in this case we only use OpenGL), a second test determines whether shaders and non-square textures [11] are supported. If the test fails, the CPU method for YUV to RGB conversion is used and the result is rendered using OpenGL's glDrawPixels command; otherwise the shader is used and the YUV to RGB conversion is transferred to the GPU. The shader is given in code 4.10; a sketch of the capability test follows the footnotes.

[10] Simple DirectMedia Layer (SDL) www.libsdl.org

[11] GL_ARB_texture_rectangle enables texture targets with non-power-of-two dimensions http://www.opengl.org/registry/specs/ARB/texture_rectangle.txt
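The two-stage test can be expressed with plain OpenGL extension queries. A minimal sketch follows, assuming a GL context has already been created through SDL; the helper names are illustrative, while glGetString is the library's real API.

#include <cstring>
#include <GL/gl.h>

// True if the running OpenGL implementation advertises the extension.
static bool hasExtension(const char* name)
{
    const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return ext != nullptr && std::strstr(ext, name) != nullptr;
}

// The shader path needs fragment shaders and non-square (rectangle) textures;
// otherwise fall back to CPU conversion rendered with glDrawPixels.
bool useShaderPath()
{
    return hasExtension("GL_ARB_fragment_shader")
        && hasExtension("GL_ARB_texture_rectangle");
}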

#extension GL_ARB_texture_rectangle : enable
uniform sampler2DRect Ytex, Utex, Vtex;

void main(void) {
    float nx = gl_TexCoord[0].x, ny = gl_TexCoord[0].y;
    // BT.601 conversion; U and V are sampled at half the Y resolution.
    float y = 1.1643 * (texture2DRect(Ytex, vec2(nx, ny)).r - 0.0625);
    float u = texture2DRect(Utex, vec2(nx / 2.0, ny / 2.0)).r - 0.5;
    float v = texture2DRect(Vtex, vec2(nx / 2.0, ny / 2.0)).r - 0.5;
    gl_FragColor = vec4(y + 1.5958 * v, y - 0.39173 * u - 0.8129 * v, y + 2.017 * u, 1.0);
}

Code 4.10: The GLSL fragment program for converting a YUV image to RGB.

The code shows that a separate texture is used for each channel (uniform sampler2DRect), corresponding to the channels provided by the decoder. Code 4.11 shows the procedure for rendering and updating the textures. For each channel the corresponding texture unit is activated and updated using the glTexSubImage2D command, which streams the buffer from main memory to GPU memory. After updating each channel, a full-screen quad polygon is rendered, which in combination with the active fragment shader visualizes the final image.

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_RECTANGLE_NV, 1);
glTexSubImage2D(GL_TEXTURE_RECTANGLE_NV, 0, 0, 0, m_width >> 1, m_height >> 1,
                GL_LUMINANCE, GL_UNSIGNED_BYTE, m_uBuffer);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_RECTANGLE_NV, 2);
glTexSubImage2D(GL_TEXTURE_RECTANGLE_NV, 0, 0, 0, m_width >> 1, m_height >> 1,
                GL_LUMINANCE, GL_UNSIGNED_BYTE, m_vBuffer);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_RECTANGLE_NV, 3);
glTexSubImage2D(GL_TEXTURE_RECTANGLE_NV, 0, 0, 0, m_width, m_height,
                GL_LUMINANCE, GL_UNSIGNED_BYTE, m_yBuffer);
drawQuad();
SDL_GL_SwapBuffers();

Code 4.11: Updating the Y, U and V textures.
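The drawQuad() call is not shown in the listing. A plausible minimal version is sketched here, assuming the fixed-function pipeline with a single texture-coordinate set spanning the unnormalized pixel range of the rectangle textures (the shader in code 4.10 halves these coordinates for U and V); the vertex layout is an assumption.

// Hypothetical sketch of drawQuad(): a full-screen quad whose texture
// coordinates cover the Y texture's full pixel dimensions.
void drawQuad()
{
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f);                           glVertex2f(-1.0f,  1.0f);
    glTexCoord2f((float)m_width, 0.0f);                 glVertex2f( 1.0f,  1.0f);
    glTexCoord2f((float)m_width, (float)m_height);      glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(0.0f, (float)m_height);                glVertex2f(-1.0f, -1.0f);
    glEnd();
}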