to find that a change in network architecture from host-oriented to CCN can open new possibilities for energy-efficient content dissemination. CCN concentrates on networking devices, i.e., core/edge routers and optical multiplexers, to find opportunities for energy reduction, instead of content servers (in data centers, i.e., CDN, or on user premises, i.e., P2P). In this paper, the authors present an energy-efficient content router architecture ranging from core routers to home gateways. However, they did not consider additional energy-efficiency techniques that could yield further savings; for example, routers are not enabled to dynamically turn on/off, whereas putting routers to sleep, or switching off some of their ports, could provide additional gains. Although the CCN technology provides a novel architecture with attractive advantages such as energy conservation, network load reduction, and low latency, it also raises doubts when considered against current technology. To analyze this complication of CCN, Perino et al. [Perino 2011] investigated CCN with respect to today's technology. They provide an abstract model of a generic content router component. They concluded that current hardware and software do not support implementing CCN at Internet scale; however, it is possible to implement it at smaller scales, e.g., at the ISP (Internet Service Provider) level or at smaller CDN scales.
To give more insight, Fig. 1(a) reports the best level h_k for each class k for the two scenarios; k ranges between 1 and VC. The levels on the left are the most popular ones
and hence, to minimize the cost of moving information frequently from the cache to users, it is better to store these classes in the level closest to users, i.e., the access part of the network. Moving from left to right, popularity decreases, and therefore the classes are stored in the inner levels of the topology (metro and core). Finally, very unpopular classes are assigned to level 0, i.e., they are not cached at all. Interestingly, the percentage of the total number of stored classes is around 1.7% and 0.5% for the Moroccan and the FT networks, respectively. Thus, we can conclude that, with the considered power and popularity models, the ISP needs to store only a small amount of content to achieve energy and bandwidth savings. This is an encouraging result showing that caching not only benefits QoS and customer experience, but can also lead to better management of the ISP's power consumption.
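The popularity-to-level mapping described above can be sketched as a small routine. This is only an illustrative sketch, not the paper's optimization: the function name, the three-level topology, and the popularity threshold are assumptions; classes below the threshold go to level 0 (not cached), and the remaining classes are spread from the access level (closest to users) toward the core.

```python
# Sketch: assign content classes (sorted by decreasing popularity) to cache
# levels, placing the most popular classes closest to users and leaving the
# least popular ones uncached (level 0). The level names and the popularity
# threshold are illustrative assumptions, not values from the paper.

def assign_levels(popularities, levels=("access", "metro", "core"), min_pop=0.001):
    """popularities: per-class request probabilities, sorted descending.
    Returns {class_index: level}, where level 0 means 'not cached' and
    level 1 is the access level closest to users."""
    assignment = {}
    n = len(levels)
    cached = sum(1 for q in popularities if q >= min_pop)
    for k, p in enumerate(popularities):
        if p < min_pop:
            assignment[k] = 0                      # level 0: not cached at all
        else:
            # spread cached classes over the levels, most popular first
            assignment[k] = 1 + min(n - 1, (k * n) // max(cached, 1))
    return assignment

# Example: 6 classes with Zipf-like popularity
print(assign_levels([0.4, 0.2, 0.1, 0.05, 0.002, 0.0005]))
```

The most popular classes land in the access level, moderately popular ones in metro/core, and the tail is not cached, mirroring the left-to-right assignment in Fig. 1(a).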
In order to evaluate a CDN, it is important to know how its resources are utilized. It has been observed that, under normal conditions, the average utilization of surrogate servers remains low compared to their capacity. In this paper, we evaluate the average utilization of surrogate servers under different scenarios, i.e., by changing the number of requests, varying the frequency of client requests, and using different numbers of surrogate servers. Understanding energy consumption behavior is also essential before moving toward energy-saving mechanisms. In our work, we evaluate energy consumption trends in surrogate servers under the scenarios discussed previously, and we also consider the energy cost of an individual client request. Note that the energy consumed in transporting content is outside the scope of this study. The basic purpose is to enhance user experience: when a client requests particular content, they expect low delay in content serving, and higher delays degrade the quality of experience of end users. The mean response time of client requests is used as a metric of the global user experience of the service, so it is important to evaluate it. In this paper, we evaluate the mean response time of client requests under all proposed scenarios. The hit ratio is also computed to give a view of performance.
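The metrics listed above (average utilization, mean response time, hit ratio, and a per-request energy figure) can be computed from a request log as in the following sketch. The log field names and the linear power model are assumptions for illustration; the paper's actual measurement setup may differ.

```python
# Sketch: summarize surrogate-server metrics from a hypothetical request log.
# Each log entry records the response time (s), whether it was a cache hit,
# and the server's busy fraction sampled at request time. The linear power
# model (idle power plus a load-proportional part) is an assumed model.

def summarize(requests, idle_power=100.0, peak_power=250.0):
    n = len(requests)
    utilization = sum(r["busy"] for r in requests) / n
    mean_response = sum(r["response_time"] for r in requests) / n
    hit_ratio = sum(1 for r in requests if r["hit"]) / n
    # linear power model: P = P_idle + (P_peak - P_idle) * utilization
    power = idle_power + (peak_power - idle_power) * utilization
    return {"utilization": utilization,
            "mean_response_time": mean_response,
            "hit_ratio": hit_ratio,
            "power_watts": power}

log = [{"response_time": 0.12, "hit": True,  "busy": 0.30},
       {"response_time": 0.45, "hit": False, "busy": 0.50},
       {"response_time": 0.10, "hit": True,  "busy": 0.40}]
print(summarize(log))
```

Dividing `power_watts` by the request rate would give the per-request energy cost mentioned in the text.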
In Europe, the 20/20/20 objectives of the European Commission and the consequent financial incentives established by local governments are currently driving the growth of electricity generation from renewable energy sources [1]. A substantial part of the investments is made at the distribution network (DN) level and consists of the installation of wind turbines or photovoltaic panels. The significant increase in the number of these distributed generators (DGs) undermines the fit-and-forget doctrine, which has dominated the planning and operation of DNs until now. This doctrine was developed when energy was transmitted from the transmission network (TN) to consumers, through the distribution network (DN). With this approach, adequate investments in network components (i.e., lines, cables, transformers, etc.) are made to avoid congestion and voltage issues, without requiring continuous monitoring and control of the power flows or voltages. To that end, network planning is done with respect to a set of critical scenarios in which information is gathered about production and demand levels, in order to always ensure sufficient operational margins. Nevertheless, with the rapid growth of DGs, the preservation of such conservative margins implies significant network reinforcement costs, because
required transformations of the distribution grid are today delaying, and in some cases blocking, the possibility to leverage key opportunities. In this paper we propose a new paradigm, which we name Virtualized Distribution Grids, that facilitates implementing the required solutions for major distribution grid challenges without requiring important infrastructure investments. The proposal includes a new approach for designing DSOs' Energy Management Systems. The hierarchical architecture we present enables the coordinated participation of any type of player, including DSOs, aggregators, and end users (who become prosumers). In addition, within this general framework, we propose specific market-based solutions that enable the deployment of advanced technologies (local production, storage, BEMs, IoT-based demand response systems, and other on-premises technologies) by different players, and coordinate those players so as to optimize the overall value while keeping the distribution network stable and providing the expected quality of supply. We also present a distributed architecture, based on blockchain principles, that supports the implementation of the proposed markets. Finally, we extend the architecture to solve key challenges raised in smart homes, beyond energy management, including policy-based coordination of controllers from independent service providers acting on the same connected devices.
The API links contents and the network, but content distribution relies on HTTP, while OpenFlow matching is limited to the lower layers of the network stack (i.e., up to the transport layer). Thus, implementing the API requires the use of HTTP proxies. The interaction between OpenFlow, the HTTP proxies, and the centralized controller is depicted in Fig. 1, where all components interact by means of the API. We leverage the logically centralized approach of OpenFlow to simplify the management of the infrastructure. However, networks are inherently distributed systems, and concentrating all information and decisions would impair performance by causing a high signaling load. Therefore, as illustrated by Fig. 1, decisions are made centrally by the controller but pushed into the network components, which hold a control-plane cache, thus avoiding loading the controller with data-plane events.
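The control-plane cache idea described above can be illustrated with a minimal sketch: the controller computes each decision once, and the switch caches it locally so that repeated data-plane events generate no further signaling. The class and method names are illustrative only; this is not the OpenFlow API or the paper's implementation.

```python
# Sketch: a switch-side control-plane cache. Decisions are computed centrally
# by the controller, but cached at the switch so repeated flows do not load
# the controller with data-plane events. Names are illustrative assumptions.

class Controller:
    def __init__(self):
        self.queries = 0              # signaling load we want to bound

    def decide(self, flow):
        self.queries += 1
        return f"forward({flow})-via-proxy"   # e.g. redirect HTTP to a proxy

class Switch:
    def __init__(self, controller):
        self.controller = controller
        self.cache = {}               # local control-plane cache

    def handle(self, flow):
        if flow not in self.cache:    # cache miss: ask the controller once
            self.cache[flow] = self.controller.decide(flow)
        return self.cache[flow]       # cache hit: no signaling at all

ctrl = Controller()
sw = Switch(ctrl)
for _ in range(1000):
    sw.handle("10.0.0.1->http://cdn.example/video")
print(ctrl.queries)   # a single controller query despite 1000 packets
```

The design choice mirrors the text: centralized decisions for global consistency, local caching to keep the signaling load independent of the data-plane event rate.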
Index terms— Active network management, electric distribution network, flexibility services, renewable energy, optimal sequential decision-making under uncertainty, large system
In Europe, the 20/20/20 objectives of the European Commission and the consequent financial incentives established by local governments are currently driving the growth of electricity generation from renewable energy sources. A substantial part of the investments lies in the distribution networks (DNs) and consists of the installation of units that depend on wind or sun as a primary energy source. The significant increase in the number of these distributed generators (DGs) undermines the fit-and-forget doctrine, which has dominated the planning and the operation of DNs up to this point. This doctrine was developed when DNs had the sole mission of delivering the energy coming from the transmission network (TN) to the consumers. With this approach, adequate investments in network components (i.e., lines, cables, transformers, etc.) must constantly be made to avoid congestion and voltage problems, without requiring continuous monitoring and control of the power flows or voltages. To that end, network planning is done with respect to a set of critical scenarios consisting of production and demand levels, in
c INRIA & LINA, Nantes
d McGill University, Montreal, Canada
Content Distribution Networks (CDNs) are fundamental, yet expensive, technologies for distributing the content of web servers to large audiences. The P2P model is a perfect match for building a low-cost and scalable CDN infrastructure for popular websites by exploiting the underutilized resources of their user communities. However, building a P2P-based CDN is not a straightforward endeavor. In contrast to traditional CDNs, peers are autonomous and volunteer participants with their own heterogeneous interests, which should be taken into account in the design of the P2P system. Moreover, the churn rate is much higher than in dedicated CDN infrastructures, which can easily destabilize the system and severely degrade performance. Finally and foremost, while many P2P systems abstract away any topological information about the underlying network, a top priority of a CDN is to incorporate locality awareness in query routing in order to locate close-by content. This paper aims at building a P2P CDN with high performance, scalability, and robustness. Our proposed protocols combine DHT efficiency with gossip robustness and take into account the interests and localities of peers. In short, Flower-CDN provides a hybrid and locality-aware routing infrastructure for user queries. PetalUp-CDN is a highly scalable version of Flower-CDN that dynamically adapts to variable rates of participation and prevents overload situations. In addition, we ensure the robustness of our P2P CDN via low-cost maintenance protocols that can detect and recover from churn and dynamicity. Our extensive performance evaluation shows that our protocols yield high performance gains in both static and highly dynamic environments. Furthermore, they incur acceptable and tunable overhead. Finally, we provide the main guidelines to deploy Flower-CDN for public use.
C(n) Set of loads connected to node n.
F (n) Set of flexible loads connected to node n.
In Europe, the 20/20/20 objectives of the European Commission and the consequent financial incentives established by local governments are currently driving the growth of electricity generation from renewable energy sources. A substantial part of the investments is made at the distribution network (DN) level and consists of the installation of wind turbines or photovoltaic panels. The significant increase in the number of these distributed generators (DGs) undermines the fit-and-forget doctrine, which has dominated the planning and the operation of DNs up to now. This doctrine was developed when the energy was coming from the transmission network (TN) to the consumers, through the distribution network (DN). With this approach, adequate investments in network components (i.e., lines, cables, transformers, etc.) are made to avoid congestion and voltage issues, without requiring continuous monitoring and control of the power flows or voltages. To that end, network planning is done with respect to a set of critical scenarios gathering information about production and demand levels, in order to always ensure sufficient operational margins. Nevertheless, with the rapid growth of DGs, the preservation of such conservative margins implies significant network reinforcement costs, because the net energy flow may be reversed, from the distribution network to the transmission network, and flows within the distribution network may be very different from the flows historically observed.
Second, to lower convergence time, centralized control can be implemented with a technology like Software Defined Networking (SDN). This technology is very promising for putting energy-aware solutions into practice. Indeed, it makes it possible to carry out traffic measurements, perform route calculations, and then trigger the installation of new routing rules in the SDN-enabled routers and switch off equipment. The centralized controller is able to turn on/off, or switch the rate of, a network interface and storage caches via SDN control messages. Note that these messages will be very small in comparison to the global traffic, and infrequent (only a few changes are sufficient to obtain most of the energy gain, e.g., every 4 hours; see Figure 12). The increase in power consumption will thus be negligible. In summary, the centralized SDN controller can collect the traffic matrix and then compute a routing solution satisfying QoS while being minimal in energy consumption. Then, the controller updates the forwarding tables of the nodes of the considered network and turns off some network interfaces and storage caches if needed in order to save energy. [46, 47] study such a solution in the context of energy-aware routing. In , the authors explain how to wake up line cards of routers in almost zero time.
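The control loop summarized above can be sketched as follows: from the demands in the traffic matrix, compute routes, mark the links the routes actually use, and power off every other link. Plain BFS shortest-path routing here is a stand-in for the real QoS- and energy-aware optimization, and the toy topology is an assumption.

```python
# Sketch of the SDN energy-aware control loop: route the demands, keep the
# links the routes use, and switch off the rest. Shortest-path routing is an
# illustrative stand-in for the actual energy-minimal route computation.
from collections import deque

def shortest_path(adj, src, dst):
    """Breadth-first shortest path over an adjacency dict."""
    prev, seen, q = {}, {src}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            break
        for v in adj[u]:
            if v not in seen:
                seen.add(v); prev[v] = u; q.append(v)
    path, node = [dst], dst
    while node != src:
        node = prev[node]; path.append(node)
    return path[::-1]

def plan(adj, demands):
    """Return (links to keep on, links that can be switched off)."""
    used = set()
    for src, dst in demands:
        p = shortest_path(adj, src, dst)
        used |= {frozenset(e) for e in zip(p, p[1:])}
    all_links = {frozenset((u, v)) for u in adj for v in adj[u]}
    return used, all_links - used

# Toy 4-node ring with one demand from a to d
adj = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"], "d": ["b", "c"]}
on, off = plan(adj, [("a", "d")])
print(len(on), len(off))   # 2 links carry traffic, 2 can sleep
```

In a real deployment, the controller would then push the corresponding forwarding rules and interface on/off commands, as described in the text.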
Here, we go further with this idea by also considering the use of caches on each of the backbone routers, while still taking into account the choice of CDN servers. It is important to mention that there have been several proposals for developing global caching systems, in particular, recently, using in-network storage and content-oriented routing to improve the efficiency of content distribution in future Internet architectures. Unlike these studies, in this paper we do not assume any specific technology for future Internet architectures, nor anything else that would require a major overhaul of how the Internet works. Thus, there is no content routing among our caches. We assume that a cache serves a single city, taking all of its contents from the original provider. We consider that caches use energy and can be turned on or off. Thus, there is a trade-off between the energy savings they allow, by reducing network load, and their own consumption.
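The on/off trade-off stated above can be made concrete with a back-of-the-envelope test: a city cache is worth powering on only when the transport energy saved by its hits exceeds its own consumption over the same period. All constants below are illustrative assumptions, not values from the paper.

```python
# Sketch: decide whether a city cache should be on. The cache saves
# demand * hit_ratio * (energy per bit of backbone transport), and costs
# its own power draw over the period. All numbers are assumed for illustration.

def cache_worth_on(demand_bits, hit_ratio, joules_per_bit_transport,
                   cache_power_watts, period_s=3600.0):
    saved = demand_bits * hit_ratio * joules_per_bit_transport  # joules saved
    cost = cache_power_watts * period_s                         # joules spent
    return saved > cost

# 1 Tbit of hourly demand, 30% hit ratio, 50 nJ/bit of backbone transport,
# a 3 W cache: 15000 J saved vs 10800 J spent, so the cache should be on.
print(cache_worth_on(1e12, 0.3, 50e-9, cache_power_watts=3.0))
```

With a lower hit ratio (e.g., 10%), the same cache saves less than it consumes and should be switched off, which is exactly the trade-off the optimization in the text resolves.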
Sanghamitra BANDYOPADHYAY ¶
August 22, 2017
In the field of building energy efficiency, researchers generally focus on building performance and how to enhance it. The objective of this work is to empower building occupants by putting them in the loop of efficient energy use, supporting them in achieving their objectives by pointing out how far their actions are from an optimal set of actions. Different levels of explanation are investigated. Indicators measuring the distance to optimality are first proposed. An algorithm that generates deeper explanations is then presented, to determine how changing some actions impacts comfort. The paper emphasizes the importance of explanations with a real case study. It identifies the type and level of explanations needed for different occupants. The concept of replay is presented: an occupant can replay his past actions and learn from them.
Number of traced routers. If the Contrace user specifies this option, only the specified number of hops from the Contrace user trace the Request; each router inserts its own Report block and forwards the Request message to the upstream router(s), and the last router stops the trace and sends the Reply message back to the Contrace user. This value is set in the "HopLimit" field located in the fixed header of the Request. For example, when the Contrace user invokes the Contrace command with this option, such as "-r 3", only three routers along the path examine their path and cache information. If there is a caching router within the hop count along the path, the caching router sends back the Reply message and terminates the trace request. If the last router does not have the corresponding cache, it replies with a Reply message carrying the NO_INFO return code (described in Section 3.1) and no Reply block TLV inserted. The Request messages are terminated at
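The "-r" hop-limit behaviour described above can be simulated over a chain of routers: each router appends its report and forwards the Request upstream until either the hop limit is exhausted or a caching router is reached. Router names and return values are illustrative; this is a behavioural sketch, not the Contrace wire format.

```python
# Sketch: simulate Contrace's HopLimit behaviour along a router chain.
# Each traversed router inserts its Report block; a caching router replies
# and terminates the trace; if the hop limit runs out without finding a
# cache, the last router replies with NO_INFO. Names are illustrative.

def trace(routers, hop_limit):
    """routers: list of (name, has_cache), ordered from the user toward
    the content source. Returns (report blocks, return code)."""
    reports = []
    for hops_used, (name, has_cache) in enumerate(routers):
        if hops_used == hop_limit:
            break                      # HopLimit exhausted: stop the trace
        reports.append(name)           # router inserts its own Report block
        if has_cache:
            return reports, "OK"       # caching router sends back the Reply
    return reports, "NO_INFO"          # no cache within the hop count

path = [("r1", False), ("r2", False), ("r3", False), ("r4", True)]
print(trace(path, hop_limit=3))   # three routers examined, no cache found
print(trace(path, hop_limit=8))   # r4 has the cache and terminates the trace
```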
IoT has gained much interest thanks to its various advantages. It is considered an extension of machine-to-machine (M2M) communication intended to maintain communicative interaction between everyone and everything. All kinds of "things" are enabled to connect with each other. These things can perform many tasks, such as sensing data wirelessly, which reflects their intelligence. These smart communicating things have the capacity to evaluate the data collected from their sensing materials in order to take the most appropriate decisions. This allows all objects to communicate with each other and adapt to their environment by sensing physical parameters. The IoT is implemented in different kinds of applications: smart home [2], transportation [3], agriculture [4], health care [5],
8 Role of the diurnal cycle
As discussed earlier, demand perturbations can have a substantial impact on power generation capacity. In this section, the impact of the diurnal demand cycle is explored. Figure 7 illustrates three diurnal patterns, each producing an average demand of 0.1 m^3/s for the day. The patterns loosely represent service areas of different populations, the premise being that smaller customer bases exhibit wider demand fluctuations about the average daily value. Thus, a diurnal pattern with a peak-to-average ratio of 1.5 might typify a larger community (e.g., an entire small town), while a ratio of 2.5 could represent a single subdivision or cluster of subdivisions. Daily revenues for these patterns were computed and compared with the daily revenue assuming
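A diurnal pattern with a chosen peak-to-average ratio, preserving the 0.1 m^3/s daily average mentioned above, can be generated as in the following sketch. The single sinusoid and the evening peak hour are illustrative assumptions; the real service-area patterns in Figure 7 are more irregular.

```python
# Sketch: a sinusoidal diurnal demand pattern whose peak-to-average ratio
# is a parameter and whose daily mean stays at the given average. The
# single-harmonic shape and the 18:00 peak are assumed for illustration.
import math

def diurnal(hour, avg=0.1, peak_to_avg=1.5, peak_hour=18):
    """Demand (m^3/s) at the given hour of day (0-23)."""
    amplitude = avg * (peak_to_avg - 1.0)
    return avg + amplitude * math.cos(2 * math.pi * (hour - peak_hour) / 24.0)

day = [diurnal(h) for h in range(24)]
print(round(sum(day) / 24, 6))               # daily mean stays at 0.1
print(round(max(day) / (sum(day) / 24), 3))  # peak-to-average ratio of 1.5
```

Raising `peak_to_avg` to 2.5 widens the swings about the same daily mean, matching the premise that smaller customer bases fluctuate more.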
Figure 1: Representation of segmented and unsegmented caches with many content providers (CPs).
With the advent of broadband and social networks, the Internet has become a worldwide content delivery platform ([1, 2]), with high bandwidth and low latency requirements. To meet the ever-increasing demand, contents are pushed as close as possible to their consumers, and Content Providers (CPs) install dedicated storage servers directly in the core of Internet Service Provider (ISP) networks. However, the TCP/IP protocol suite uses a conversational mode of communication between hosts that can be considered inappropriate for content delivery. Therefore, complex machinery has been developed (around the Domain Name System, DNS, protocol and the HyperText Transfer Protocol, HTTP) to compensate for the limitations of the TCP/IP protocol suite. Conscious of the mismatch between the network's usage and its conception, the research community recently proposed the concept of in-network caching (e.g., Information Centric Networking (ICN) [2, 4]). For instance, in ICN, content objects can be accessed and delivered natively by the network according to their name rather than relying on IP addresses. Hence, this technology removes the concept of location or topology from communication primitives and uses the notion of contents and their names instead. These contents can therefore be found potentially anywhere in the network, and moved or replicated at different locations [5, 4, 6].
be enhanced, they represent a main asset for promoting their own or 3rd-party services.
2. Ownership of users' information: Telcos maintain in their Information Systems and network equipment (e.g., HSS/HLR) valuable user information, including authentication keys, user identities, and user service profiles. This information enables them to perform a number of control functions, including user authentication, authorization of user access to services, and billing on behalf of CPs and CDN providers.
3. Knowledge of users' contexts: Telcos monitor, in real time, users' contexts in terms of geographic location, access type, and device type, among others. Providing this information to CPs or CDN providers enables the adaptation of both content portals and content resolution to users' current contexts. Portals, for instance, can be adapted to users' locations and device capabilities. The format (codec) and the resolution (encoding bit rate) of a selected content can also be adapted to users' devices and access constraints (in terms of bandwidth). Some may argue that the Telco role is not primordial at this level. In fact, Telco-independent approaches like HTTP adaptive streaming (MPEG-DASH) already allow this kind of adaptation to take place. Furthermore, Geolocation and other APIs supported by many terminals are likely to provide CPs/CDN providers with enough information concerning users' locations and devices. Thus, there is a moving equilibrium between relying on the Telco to provide context-related data to 3rd parties and counting on the terminal to do so. Adopting the first option has the advantages of alleviating the terminal complexity and optimizing the adaptation process (context-related data is only sent upon context change, and not periodically as is the case in MPEG-DASH). Moreover, adopting the second approach requires the support of some APIs and of HTTP adaptive streaming solutions (Adobe-based, Microsoft-based, etc.) by all manufacturers for all terminals, which is far from being the case for the time being.
On the other hand, Telcos can play a particularly important role in handling "vertical mobility". In fact, as a last-mile ISP, the Telco immediately tracks a change of the service IP address. In reaction, it can inform the concerned CP/CDN provider of this change so that the latter performs adequate functions if needed (e.g., content re-adaptation to the new user context). The Telco itself can perform functions like seamless redirection of flows to the new service IP address.
Index Terms— Energy, IEEE 802.15.4, WSN, IoT,
I. INTRODUCTION
INTERNET of Things (IoT) has emerged in the last decades. In this respect, many definitions have been advanced in order to provide a clear definition of the IoT; the most famous one is provided by the International Telecommunication Union (ITU). The IoT consists in connecting a wide range of objects, devices, buildings, and vehicles sited within the reach of the same network, with the aim of making human life easier. The IoT enables things to see, hear, think, and make decisions, which leads to establishing smart cities. It is estimated that the number of connected things will reach 50 billion by the year 2020. The different kinds of connected objects contribute to many challenges, such as maintaining the robustness of data-source security. In effect, data security turns out to be a critically serious problem in IoT systems, and confidence in data provenance constitutes a crucial matter. In addition, the immense number of connected objects in an IoT system might well culminate in the reception of a massive amount of data
We end this section by discussing works that do not exactly meet our fact-checking definition yet come very close to it:
Les Décodeurs, the fact-checking team of Le Monde, have developed (and share as open data) a database of manual fact checks comprising, for each claim, a set of Web and social media sources having propagated it, the fact-checking analysis, and the final level-of-truth classification. Next, they have developed and share in open source Décodex, a plug-in for browsers and social media like Facebook, which signals to users visiting an information source (a Web page or Facebook account) that has published a checked claim a trust score resulting from the aggregated outputs of previous fact checks over that source. Unlike the journalists who devised it, Décodex does not check fact accuracy, strictly speaking. However, its ability to rate trustworthiness makes it quite relevant to this task. Last but not least, a closely related (and well-established) field of natural language processing is textual entailment, which considers the task of comparing two portions of text and deciding whether the information contained in the first one can be implied from the second. Textual entailment has never been applied explicitly to fact-checking problems, but they obviously meet at some points. Many evaluation campaigns and benchmarks are related to textual entailment, as well as to paraphrase detection in general, among them the PASCAL challenge, the Answer Validation Exercise, the MSRP paraphrase corpus, and the SNLI corpus. Most of these tasks and data represent similarity between pairs of text as a binary yes/no classification decision.