The WSPeer framework, interfaced with Triana, aims at easing the deployment of Web Services by exposing many of them at a single endpoint. It differs from a container approach by giving the application control over service invocation. The Soaplab system is especially dedicated to wrapping command-line tools into Web Services. It has been widely used to integrate bioinformatics executables into workflows with Taverna. It is able to deploy a Web Service in a container, starting from the description of a command-line tool. This command-line description, referred to as the metadata of the analysis, is written for each application using the ACD text format and then converted into a corresponding XML format. Among domain-specific descriptions, the authors underline that such a command-line description format must include (i) the description of the executable, (ii) the names and types of the input data and parameters, and (iii) the names and types of the resulting output data. As described later, the format we used includes those features and adds new ones to cope with the requirements of executing legacy code on grids.
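As an illustration of requirements (i)-(iii), a command-line tool description can be sketched as a minimal Python data structure; the field names and the blastall example below are ours, for illustration only, and do not reproduce the actual ACD or XML formats:

```python
from dataclasses import dataclass, field

@dataclass
class ToolDescription:
    """Hypothetical, minimal analogue of an ACD-style description:
    (i) the executable, (ii) typed inputs/parameters, (iii) typed outputs."""
    executable: str
    inputs: dict = field(default_factory=dict)    # name -> type
    outputs: dict = field(default_factory=dict)   # name -> type

# Illustrative instance for a bioinformatics executable
blast = ToolDescription(
    executable="blastall",
    inputs={"sequence": "fasta", "evalue": "float"},
    outputs={"report": "text"},
)
```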
In contrast with the abundance of studies on the Manufacturing Supply Chain (Colin, 2005), we observe a shortage of research devoted to the Service Supply Chain, a dual concept combining traditional procurement operations with the coordination of diverse resources, with a concern for customer satisfaction, while integrating constraints of time, capacity, shared means, and co-production. This field of research is increasingly coveted by many authors, as both an object of study and a lever of performance. Performance is no longer measured simply in terms of quality, time, and cost, but also in terms of responsiveness, agility, efficiency, and positive externalities on territories. Meeting the challenge of reducing costs while simultaneously increasing customer value on a global scale therefore requires an approach radically different from one that consists of responding exclusively to the market. The current context, both competitive and uncertain, puts great pressure on the price-quality ratio of the service firm. For this reason, SCM and Big Data appear to be performance parameters of the service firm.
modelling of data-awareness.
Furthermore, with explicit quantification, the translation of constraints into temporal logic becomes tightly coupled with the actual script on which it has to be checked. This is because the translation of the quantifiers shown depends on the values occurring in the script. It is, however, unrealistic that a service provider advertises its constraints in such a manner: one would have to know in advance all possible values occurring in scripts prepared by third parties to include them in the large disjunction. Finally, we suspect that standard model checkers such as NuSMV easily handle systems with very large state spaces and reasonably short temporal formulæ, but are far less efficient for checking exponentially long formulæ. The experimental results in Section VII will confirm this intuition.
D. Generalization to Nested Message Elements
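A minimal sketch of the expansion alluded to above, assuming \(\mathit{Vals}(s)\) denotes the set of values occurring in a script \(s\) and \(\varphi\) a constraint with one free variable (both names are ours, not the paper's):

```latex
% Quantifier elimination by grounding over the script's values:
\forall x \in \mathit{Vals}(s).\,\varphi(x)
  \;\leadsto\; \bigwedge_{v \in \mathit{Vals}(s)} \varphi(v)
\qquad
\exists x \in \mathit{Vals}(s).\,\varphi(x)
  \;\leadsto\; \bigvee_{v \in \mathit{Vals}(s)} \varphi(v)
```

The grounded formula grows linearly in \(|\mathit{Vals}(s)|\) per quantifier, and hence exponentially under quantifier nesting, which is why the resulting formulæ can become exponentially long.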
In Figure 11 we depict the corresponding costs of these transfers. The costs can be divided into two components: the compute cost, paid for leasing a certain number of VMs for the transfer period, and the outbound cost, which is charged based on the amount of data exiting the datacenter. Despite taking a longer time for the transfer, the compute cost of the user-based endpoint-to-endpoint transfer is the smallest, as it only uses 2 VMs (i.e., sender and destination). On the other hand, user-based multi-route transfers are faster but at higher costs resulting from the extra VMs, as explained in Section II-B and detailed in . The outbound cost only depends on the data volume and the cost plan. As the inter-site infrastructure is not the property of the cloud provider, part of these costs represents the ISP fees, while the difference is retained by the cloud provider. The real cost (i.e., the one charged by the ISP) is not publicly known and depends on business agreements between the companies. However, we can assume that it is lower than the price charged to the cloud customers, giving a range in which the price can potentially be adjusted. Combining these observations about the current pricing margins for transferring data with the performance of the cloud transfer service, we argue that cloud providers should propose TaaS as an efficient transfer mechanism with flexible prices. Cloud vendors can use this approach to regulate the outbound traffic of datacenters, reduce their operating costs, and minimise idle bandwidth.
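The two cost components can be sketched as follows; the prices, durations, and VM counts below are illustrative placeholders, not the values measured in the paper, and VMs are assumed to be billed per started hour:

```python
import math

def transfer_cost(n_vms, hours, vm_price_per_hour, gb_out, outbound_price_per_gb):
    """Total transfer cost = compute cost (VM leasing, billed per
    started hour) + outbound cost (per GB exiting the datacenter)."""
    compute = n_vms * math.ceil(hours) * vm_price_per_hour
    outbound = gb_out * outbound_price_per_gb
    return compute + outbound

# Endpoint-to-endpoint: 2 VMs, slower; multi-route: more VMs, faster.
e2e = transfer_cost(n_vms=2, hours=3, vm_price_per_hour=0.1,
                    gb_out=100, outbound_price_per_gb=0.09)
multi = transfer_cost(n_vms=8, hours=1, vm_price_per_hour=0.1,
                      gb_out=100, outbound_price_per_gb=0.09)
```

With these placeholder prices the endpoint-to-endpoint transfer remains the cheaper option: the outbound component is identical (it depends only on the data volume), so the extra VMs dominate the difference.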
number of road segments involved in the multicast tree and (ii) the second approach considers the number of relaying intersections involved in the multicast tree. A heuristic is proposed for each approach. In this work, we propose a QoS-enabled multicasting scheme in Heterogeneous Vehicular Networks (HetVNets) with minimal V2V bandwidth usage. To ensure the QoS of the multicasting service, efficient procedures are proposed for tracking clients and monitoring the QoS of road segments. The QoS parameters involve two WAVE metrics: network connectivity and packet transmission delay in road segments. Moreover, a formulation of the multicast optimization problem in HetVNets is proposed. To solve the optimization problem, two near-optimal heuristics are proposed, which are based on minimal Steiner trees. (3) We study the problem of network congestion in routing for vehicular networks (see Chapter 5). We propose (1) a Cloud-based routing approach that takes into account other existing routing paths which are already relaying data in the VANET. New routing requests are addressed such that no road segment gets overloaded by multiple crossing routing paths. This approach incorporates load balancing and congestion prevention in the routing mechanism. Instead of routing over a limited set of road segments, our approach balances the load of communication paths over the whole set of urban road segments, thus helping to prevent potential congestion in the VANET; and (2) a Software Defined Networking model and mechanism for VANET congestion control and for monitoring real-time WAVE connectivity and transmission delays on road segments. Our proposed SDN controller provides the requester with an optimal routing path. It is then the job of the requester to embed the routing information in the packet to be sent.
To deal with the changes in the connectivity and delays of WAVE transmissions in road segments, we devise a cooperative road segment monitoring technique in which vehicles cooperatively notify the SDN controller about the changes in each road segment. Upon notification, the SDN controller computes a new optimal routing path and updates the requester with an alternative routing path to use for subsequent packets. The SDN controller computes the routing path such that more road segments are utilized in VANET communications (thus balancing the load) while the delay constraint for packet delivery of the request is satisfied.
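The minimal-Steiner-tree heuristics mentioned above are not specified here; as background, the classic metric-closure 2-approximation on which such heuristics are commonly built can be sketched as follows (the graph encoding and function names are ours):

```python
import heapq
from itertools import combinations

def dijkstra(graph, src):
    """Shortest-path distances from src; graph: {node: {neighbor: weight}}."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def steiner_cost_upper_bound(graph, terminals):
    """Cost of the MST of the metric closure over `terminals`:
    at most twice the optimal Steiner tree cost."""
    closure = {t: dijkstra(graph, t) for t in terminals}
    # Kruskal's MST on the complete graph over the terminals
    edges = sorted((closure[a][b], a, b) for a, b in combinations(terminals, 2))
    parent = {t: t for t in terminals}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    cost = 0
    for w, a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            cost += w
    return cost
```

In the multicast setting, the terminals would be the relaying intersections (or road segments) that must be connected, and edge weights would reflect the WAVE QoS metrics.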
Paris, France Raleigh, NC, USA Research Triangle Park, NC, USA
Abstract—In Enterprise Data Centers (EDC), service providers are usually governed by Client Service Contracts (CSC) that specify, among other requirements, the rate at which a service should be accessed. The contract limits the rate to no more than a given number of service requests during a given observation period. In two-tier setups, a cluster of Service-Oriented Networking (SON) Appliances forms a pre-processing tier that accesses services in the service tier. SON Appliances locally shape the flow of requests to enforce the global rate defined in the CSC. Off-the-shelf SON Appliances present architectural limitations that prevent them from being used to efficiently perform traffic shaping in the presence of multiple service hosts. In this paper, besides identifying these limitations, we provide two contributions in this field. First, we introduce a SON Appliance architecture fit for multi-service traffic shaping. Second, we propose and validate an algorithm for multipoint-to-multipoint service traffic shaping in two-tier EDCs. We show via simulation that our approach solves the multipoint-to-multipoint service traffic shaping problem while pushing the system to its maximum capacity.
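Local shaping of the request flow is typically built on a token bucket; below is a minimal sketch, assuming the global CSC rate R is statically split as R/N across the N appliances (a simplification, not the algorithm proposed in the paper):

```python
import time

class TokenBucket:
    """Minimal token-bucket shaper: a request is admitted only when a
    token is available; tokens accrue at `rate` per second up to `burst`."""
    def __init__(self, rate, burst, now=None):
        self.rate, self.burst = rate, burst
        self.tokens = float(burst)
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Static partition: each of N appliances enforces rate R/N locally
def local_shaper(global_rate, n_appliances, burst):
    return TokenBucket(global_rate / n_appliances, burst, now=0.0)
```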
3) Service ontology: The main purpose of the service ontology is to enable the semantic representation of the knowledge inherent to services and their relationships. The proposed ontology is managed through a dedicated framework that features different modules and interfaces, among which we cite the reasoner module. The latter relies on semantic relationships among services to perform inferences. These inferences are driven by rules, generating service composition schemas and retrieving newly inferred knowledge about services. For instance, consider two RESTful services, S1 for querying temperature and S2 for querying precipitation. If the temperature value is around 15° and the precipitation value is higher than 70 mm, then these two services should be complemented by the RESTful service S3 for querying the wind speed. The service ontology is subject to further extensions taking into account service quality metrics to enhance service composition. Besides, it should be improved to address the interoperability issue between data access services: most of the time, services are not compatible with each other, which makes interoperability a major issue for successful data access service composition. The performance of executing service accesses to extract data is handled by the data processing layer, discussed in the next section.
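The temperature/precipitation rule above can be sketched as a trigger-action pair; the encoding below is an illustrative simplification of the reasoner, not its actual rule language:

```python
def infer_extra_services(readings, rules):
    """Apply composition rules: return the services whose trigger holds
    for the current readings."""
    return [svc for trigger, svc in rules if trigger(readings)]

# Hypothetical encoding of the example rule from the text:
# temperature around 15 degrees AND precipitation above 70 mm -> add S3
rules = [
    (lambda r: abs(r["temperature"] - 15) <= 1 and r["precipitation"] > 70, "S3"),
]
```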
Conservative models and code practices are usually employed for fatigue-damage predictions of existing structures. Direct in-service behavior measurements are able to provide more accurate estimations of remaining fatigue life. However, these estimations are often accurate only for measured locations and measured load conditions. Behavior models are necessary for exploiting the information given by measurements and predicting the fatigue damage at all critical locations and for other load cases. Model-prediction accuracy can be improved using system identification techniques, where the properties of structures are inferred from behavior measurements. Building upon recent developments in system identification where both model and measurement uncertainties are considered, this paper presents a new data-interpretation framework for reducing uncertainties related to the prediction of fatigue life. An initial experimental investigation confirms that, compared to traditional engineering approaches, the methodology provides a safe and more realistic estimation of the fatigue reserve capacity. A second application on a full-scale bridge also confirms that using load-test data reduces the uncertainty related to remaining-fatigue-life predictions.
3. Overview of the Data Distribution Service
3.1. Core Features
Data Distribution Service (DDS) is a network middleware for distributed real-time applications which simplifies application development, deployment, and maintenance, and provides fast, predictable distribution of real-time critical data over heterogeneous networks. The DDS specification offers two levels of interface. The lower layer, Data-Centric Publish-Subscribe (DCPS), is highly configurable, closely related to data, and rich in QoS policies for determining the required application behavior. The Data Local Reconstruction Layer (DLRL) is the higher layer of the specification, conceived to provide easy-to-use access to DCPS elements for developers. It abstracts the way in which an application connects to DCPS through its own classes using object-oriented programming.
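A toy illustration of the data-centric publish-subscribe idea, including retention of recent samples for late-joining readers (a drastic simplification of the DCPS HISTORY QoS policy); this is not the real DDS API:

```python
from collections import deque

class Topic:
    """Toy data-centric pub-sub: the topic retains the last `history`
    samples and pushes new samples to all subscribed readers."""
    def __init__(self, name, history=1):
        self.name = name
        self.samples = deque(maxlen=history)  # retained history
        self.readers = []

    def write(self, sample):
        self.samples.append(sample)
        for callback in self.readers:
            callback(sample)

    def subscribe(self, callback):
        self.readers.append(callback)
        # Late joiners receive the retained history first
        for sample in self.samples:
            callback(sample)
```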
Chains (SFCs).
The problem. Conceptually, a “cloud” provides one general-purpose infrastructure to support multiple independent services in an elastic way. To that aim, cloud operators deploy large-scale data centers built with commercial off-the-shelf (COTS) hardware and make them accessible to their customers. Compared to dedicated infrastructures, this approach significantly reduces costs for the operators and the customers. However, COTS hardware is less reliable than specific hardware and its integration with software cannot be extensively tested, resulting in more reliability issues than in well-designed dedicated infrastructures. This concern is accentuated in public clouds where resources are shared between independent tenants, imposing the use of complex isolation mechanisms. As a result, moving Service Function Chains to data centers calls for a rethinking of the deployment model to guarantee high robustness levels.
While a priori interesting for end users, the principle of sponsoring data has been questioned by user associations and some regulators, and is under investigation as a possible infringement of network neutrality rules. Indeed, neutrality rules imposed in many countries state that all “traffic should be treated equally, without discrimination, restriction or interference, independent of the sender, receiver, type, content, device, service or application” (definition from the European Parliament on April 3rd, 2014). Offering a differentiated economic treatment to some CPs with respect to others can be considered a violation of this rule and may prevent newcomers unable to afford similar offers from entering the market. Laws banning it have been passed in countries such as Canada, Sweden, Hungary, India, and Brazil, among others. Europe places it in a “grey zone” and leaves the decision to national regulatory bodies. A weaker version is when CPs do not differentiate consumers depending on their origin, here their ISP, hence applying the same sponsoring at all ISPs.
Index Terms—Delay, energy, network calculus, quality of service (QoS), rate control, wireless.
I. INTRODUCTION
SERVICES envisioned in modern communication systems extend beyond traditional voice communication to enhanced data applications such as video and real-time multimedia streaming, high-throughput data access, and voice-over-IP. Invariably, meeting the quality-of-service (QoS) requirements of these applications translates into stricter packet-delay and throughput constraints. Wireless systems also generally have strict limitations on energy consumption, thereby necessitating efficient utilization of this resource. For example, minimizing energy consumption leads to improved battery utilization for mobile devices, increased lifetime for sensor nodes and ad hoc networks, and better utilization of limited energy sources in satellites. Since, in many scenarios, transmission energy constitutes a significant portion of the total energy expenditure of wireless nodes, it is imperative to minimize this cost to achieve significant energy savings; henceforth, in this paper, we will focus solely on transmission energy expenditure.
But sponsored data bring concerns from user associations and small content providers. It is claimed to give an unfair advantage to some content/service providers and to eventually prevent some actors from entering the market, due to lower visibility and the high entrance costs of the sponsored data system if they want to get the same service as the incumbent and big providers already present. The culmination of the sponsored data principle is so-called zero-rating, where ISPs (freely) remove some content providers from their data caps, hoping more customers will subscribe due to potentially unlimited usage. Those providers could then hardly be challenged. The French ISP SFR did this with YouTube in its offer RED a few years ago. Similarly, Facebook, Google and Wikipedia have built special programs in developing countries, with the claimed goal of increasing connectivity and Internet access. For those reasons, sponsored data and zero-rating are currently under investigation by regulators to determine if rules should be imposed to prevent or limit their use. In Chile for example, the national telecom regulator has stated that it violates net neutrality laws and should be forbidden. Net neutrality means that all packets/flows are treated the same, independently of their origin, destination, and type of service [3, 4].
We proposed a secure, privacy-preserving execution model for data services allowing service providers to enforce their privacy and security policies without changing the implementation of their data services (i.e., services are seen as black boxes). Our model is inspired by the database approach to “declaratively” handle security and privacy concerns. It involves the following steps (refer to Figure 3): Step 1: View rewriting to integrate security and privacy constraints. When a data service is invoked, our model rewrites its corresponding RDF view to take into account applicable security and privacy rules from the service’s associated policies, which are expressed using the OrBAC and PrivOrBAC models over domain ontologies and take into account the data recipient (i.e., the service consumer), his purpose for requesting the data, and the consents of data subjects. The soundness and correctness of our algorithm are demonstrated in [16, 8]. Step 2: Rewriting the extended view in terms of data services. The extended RDF view v_extended
of Bergen, Postboks 7800, NO-5020 Bergen, Norway. Published: 19 October 2010
Cite this article as: Deybach et al.: European Porphyria Network (EPNET) for information, epidemiological data, quality and equity of service. Orphanet Journal of Rare Diseases 2010 5(Suppl 1):P16.
DBpedia Navigator. Another kind of navigator is the domain-specific Linked Data Navigator. We developed such a navigator — a server with its STTL service and the st:navlab RDF-to-HTML transformation — to browse the DBpedia dataset, specifically on persons and places. Figure 2 is a screenshot of an HTML page produced by this navigator. We wrote the st:navlab transformation as a set of 24 STTL templates which are available online. Here is a template in st:navlab to construct the table of resource descriptions; it recursively calls the st:title named template to output the title in HTML and st:descresource to build the description of each resource selected in DBpedia.
We offer custom levels of consistency guarantees to applications, geared to data semantics and driven towards bandwidth resource optimisation. Therefore, the behaviour of the system can be tuned and the data semantics become the key decision-maker of a more efficient consistency paradigm for geo-located data stores. This is very relevant for Big Data stores such as HBase, where some eventually replicated updates might be needed earlier than others. Unlike a uniform processing of updates during replication, with a QoD in place one can aim at satisfying a more fine-grained data consistency and delivery model. Interestingly, service-level objective (SLO) management has also been proposed for HBase, but in order to handle application multi-tenancy performance (Wang et al., 2012). On the other hand, we use QoD so we can make more functional and reliable decisions from the data storage layer upwards in order to fulfil the needs of consolidated application workloads. This is a step forward from the strictly eventual or strong consistency models in most cross-site replicated storage deployments, and should consequently evolve into more flexible consistency models for Big Data management. Our implementation provides several levels of data consistency, namely QoD-consistency fulfillment. This is used to ensure consistency among a group of updates as they become available to applications. For that, the value of one, several, or a combination of the three dimensions of the vector-field consistency model in (Veiga et al., 2010) can be used.
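Under a vector-field consistency model, a batch of pending updates must be propagated once any dimension of the divergence exceeds its bound; a minimal sketch, with the dimension names (time, sequence, value) following (Veiga et al., 2010) but the function itself being our illustration:

```python
def must_propagate(divergence, qod):
    """QoD check in the style of vector-field consistency: propagate the
    pending updates once ANY of the three divergence dimensions reaches
    its bound. Both arguments are dicts keyed by 'time' (staleness),
    'sequence' (number of pending updates) and 'value' (magnitude drift)."""
    return any(divergence[k] >= qod[k] for k in ("time", "sequence", "value"))
```

A tighter QoD vector yields behaviour closer to strong consistency; a looser one saves bandwidth at the cost of staleness, which is the tuning knob described above.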
• We propose a new Key-based Timestamping Service (KTS) which generates monotonically increasing timestamps in a distributed fashion using local counters. KTS performs distributed timestamp generation in a way that is similar to data storage in the DHT, i.e. using peers dynamically chosen by hash functions. To maintain timestamp monotonicity, we propose algorithms which take into account the cases where peers leave the system either normally or not (e.g. because they fail). To the best of our knowledge, this is the first paper that introduces the concept of key-based timestamping and proposes efficient techniques for realizing this concept in DHTs. Furthermore, KTS is useful for solving other DHT problems which need a total order on the operations performed on each data item, e.g. read and write operations performed by concurrent transactions.
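The hash-based placement of timestamping can be sketched as follows; the class and function names are ours, and the handling of peers leaving or failing (central to the paper's algorithms) is deliberately omitted:

```python
import hashlib

class KTSPeer:
    """One DHT peer holding local, per-key counters."""
    def __init__(self):
        self.counters = {}

    def gen_ts(self, key):
        # Monotonically increasing per key, using only local state
        self.counters[key] = self.counters.get(key, 0) + 1
        return self.counters[key]

def responsible_peer(peers, key):
    """Choose the timestamping peer by hashing the key, mirroring how
    the DHT places data (a simplification of dynamic peer selection)."""
    index = int(hashlib.sha1(key.encode()).hexdigest(), 16) % len(peers)
    return peers[index]
```

Because all timestamp requests for a given key reach the same peer, the per-key counter yields a total order on the operations performed on that key.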
3.2 Service-based Query Resolution
In PAIRSE, users’ queries are resolved by composing relevant data services on the fly. Each virtual organization (VO) in PAIRSE’s hybrid P2P architecture has a DHT (Distributed Hash Table) to index its published services. Services are indexed according to the ontological concepts used in their RPVs. When a query is issued at a given peer, relevant services are first sought in the same VO where the query is posed; then the service discovery request is propagated to connected VOs. The descriptions of discovered services are then sent back to the initial peer, where the relevant services are selected and composed. Furthermore, for each discovered service we return the mapping path between the ontologies associated with the expertise domains (i.e., VOs) of the discovered service and the initial peer. This mapping path allows the translation of RPV views. We proposed a query-rewriting-based service composition algorithm to select and compose data services on the fly [7, 9]. Given a SPARQL query and a set of data services represented by their RPVs, the algorithm rewrites the query in terms of calls to relevant services. Our algorithm extends earlier works on query rewriting and data integration in the following aspects:
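As a drastically simplified illustration of selecting services by the ontological concepts of their RPVs (the actual algorithm performs SPARQL query rewriting over the views, not this greedy concept cover; all names below are illustrative):

```python
def select_services(query_concepts, services):
    """Greedy cover: pick services whose RPV concepts cover the query's
    concepts. services: {name: [concepts]}. Returns the chosen services
    and any concepts left uncovered."""
    uncovered, chosen = set(query_concepts), []
    for name, concepts in services.items():
        hit = uncovered & set(concepts)
        if hit:
            chosen.append(name)
            uncovered -= hit
        if not uncovered:
            break
    return chosen, uncovered
```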
1.2 Motivations and problem statement
1.2.3 A data-centric approach for component-based systems
Component-based development [Szyperski, 1998] has become increasingly important in software engineering. This is essentially due to the need to use the concepts of this approach to implement services and to raise the level of abstraction by facilitating the reuse, extension, customization, and composition of services [Yang and Papazoglou, 2004]. Services are thus encapsulated in components with well-defined interfaces so that they can be reused in several new applications. However, the data flow that allows services to carry out processing activities and that guides the interactions between components is often not taken into account, or even totally neglected, whereas in several research domains such as Grid Computing, Business Intelligence, and P2P, data are incorporated as an important part of system development. Recently, in the emerging field of Cloud Computing, where everything is a service, data management has received remarkable attention and great interest [Abadi, 2009], and this can only grow.