Control and Data

CONTROL OF AN ARTIFICIAL MOUTH PLAYING A TROMBONE AND ANALYSIS OF SOUND DESCRIPTORS ON EXPERIMENTAL DATA

4.1 Protocol
An experiment is performed by choosing a mode of control and an exploratory subspace, and by following a precise protocol. The subspace is explored with quasi-static commands; to ensure quasi-static states, waiting times are added between measurements. For each measured point, the data from all sensors (temperatures, pressures, positions, etc.) are recorded and saved. Moreover, the acoustic signals are automatically analyzed using tools provided by the MIR toolbox [17, 18]. For every measured point, sound descriptors such as the fundamental frequency (if any), sound energy and roughness are estimated and saved. The protocol consists of the following steps:
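The MIRtoolbox runs in MATLAB; as a rough illustration of the kind of per-point analysis described above, the following Python sketch estimates two of the named descriptors, sound energy (RMS) and fundamental frequency (via autocorrelation). The estimators are simplified stand-ins, not the toolbox's actual algorithms.

```python
import numpy as np

def descriptors(signal, sr):
    """Estimate two simple sound descriptors: RMS energy and an
    autocorrelation-based fundamental frequency (a rough stand-in
    for the MIRtoolbox estimators used in the paper)."""
    energy = float(np.sqrt(np.mean(signal ** 2)))
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    # Skip past the zero-lag peak: find the first negative lag, then
    # the next maximum, which corresponds to one fundamental period.
    negative = np.nonzero(ac[1:] < 0)[0]
    if len(negative) == 0:
        return None, energy          # no periodicity detected
    trough = int(negative[0]) + 1
    lag = int(np.argmax(ac[trough:])) + trough
    f0 = sr / lag
    return f0, energy

# A 440 Hz sine: the estimate should land close to 440 Hz.
sr = 22050
t = np.arange(int(0.25 * sr)) / sr
f0, energy = descriptors(np.sin(2 * np.pi * 440 * t), sr)
```

The lag resolution of plain autocorrelation limits accuracy at high pitches; the real toolbox uses more refined estimators.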

Access control policies and companies data transmission management

Table 2.6: Coverage of the main hybrid solutions regarding our challenges.
2.6 Conclusion
In this chapter, we have first presented Access Control (AC). This mechanism aims at defining "who can access what" and has been developed within computer systems over the last 50 years. However, current AC models are not efficient against data leakage, a specific type of security issue that can disclose sensitive information and cause major problems for a company. To prevent data leakage, a company can use Data Leak Prevention (DLP) or Information Rights Management (IRM). These solutions offer valuable features for providing Transmission Control (TC) and Usage Control (UC): while TC defines "who can send what to whom", UC defines "what can be done with the data once it is accessed". However, DLP and IRM lack abstraction and suffer from various issues, including poor interoperability. Indeed, by combining a traditional AC model with a DLP or an IRM, a security expert or administrator has to define different policies in different paradigms and languages, which can be tiresome and hard to manage, especially in companies with many employees and resources. To overcome these issues, hybrid solutions can be used. These solutions merge AC and TC/UC policies into a unified formalism. The proposed solutions are interesting, but to use them a company must redefine its existing AC policies in a rather complex formalism. Moreover, errors can creep in during this redefinition, inducing potential data leaks. Finally, the TC policies still have to be implemented, which is time-consuming, tedious, and a potential source of coherence problems.

Capacity: an Abstract Model of Control over Personal Data

Because the case study used here is very simple, none of these remarks is really surprising. Nevertheless, it confirms that the intuitive notion of control is well captured by Capacity. The added value of the approach is that these results have been obtained formally, through a systematic study of the different implementations. The same approach can be applied to the analysis of more complex and realistic systems. For example, the CNIL recently stated that biometric access control on smartphones is acceptable because the biometric data processing is performed under the control of the user. It is not clear, however, in what sense users really control their biometric template, what actions they can perform, enable or observe, and which actors they have to trust (in addition to the smartphone provider). The same questions hold for many devices in the Internet of Things.

More Data Locality for Static Control Programs on NUMA Architectures

The polyhedral model is powerful for analyzing and transforming static control programs, hence its intensive use for the optimization of data locality and automatic parallelization. Affine transformations excel at modeling control flow, promoting data reuse and exposing parallelism. The approach has also been applied successfully to the optimization of memory accesses (array expansion and contraction), although the available tools in this area are not as mature. Yet data locality also depends on other parameters, such as data layout and data placement relative to the memory hierarchy; these include spatial locality in cache lines and scalability on NUMA systems. This paper presents Ivie, a parallel intermediate language which complements the affine transformations implemented in state-of-the-art polyhedral compilers and supports spatial and NUMA-aware data locality optimizations. We validate the design of the intermediate language on representative benchmarks.
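As an illustration of the kind of locality transformation at stake (a toy example of ours, not taken from the paper), loop tiling reorders a 2-D traversal so that each tile of the data stays cache-resident while visiting exactly the same elements:

```python
# Loop tiling: one of the locality transformations a polyhedral
# compiler can apply. Both functions below visit the same elements;
# tiling only changes the traversal order so each TILE x TILE block
# is processed before moving on, improving cache reuse on real data.
N, TILE = 8, 4
A = [[i * N + j for j in range(N)] for i in range(N)]

def plain_sum():
    # Original traversal: row by row over the whole array.
    return sum(A[i][j] for i in range(N) for j in range(N))

def tiled_sum():
    # Tiled traversal: iterate over tiles, then within each tile.
    total = 0
    for ii in range(0, N, TILE):
        for jj in range(0, N, TILE):
            for i in range(ii, min(ii + TILE, N)):
                for j in range(jj, min(jj + TILE, N)):
                    total += A[i][j]
    return total
```

The two traversals are semantically equivalent; only the memory access order, and hence the spatial locality, differs.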

Control chart and data fusion for varietal origin discrimination: Application to olive oil

Abstract: Combining data from different analytical sources can improve the performance of chemometric models by extracting relevant and complementary information for food authentication. In this study, several data fusion strategies, including concatenation (low-level), multiblock and hierarchical models (mid-level), and majority vote (high-level), are applied to near- and mid-infrared (NIR and MIR) spectral data for the varietal discrimination of olive oils from six French cultivars by partial least squares discriminant analysis (PLS1-DA). The performance of the data fusion models is compared to each other and to the results obtained with NIR or MIR data alone, with a choice of chemometric pre-treatments and either an arbitrarily fixed limit or a control chart decision rule. Concatenation and hierarchical PLS1-DA fail to improve the prediction results compared to individual models, whereas weighted multiblock PLS1-DA models with the control chart approach provide a more efficient differentiation for most, but not all, of the cultivars. The high-level models, using a majority vote with the control chart decision rule, benefit from the complementary results of the individual NIR and MIR models, leading to more consistently improved results for all cultivars.
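As a small illustration of the high-level strategy (a sketch of ours: the class labels and the third model are invented, and the paper's classifiers are PLS1-DA models, not reproduced here), a majority vote fuses the per-model predictions sample by sample:

```python
from collections import Counter

# Hypothetical per-model predictions for 5 samples; the paper's
# high-level fusion takes a majority vote between the NIR and MIR
# classifiers (a third invented model is added here to break ties).
preds = {
    "NIR":    ["A", "B", "B", "C", "A"],
    "MIR":    ["A", "B", "C", "C", "B"],
    "model3": ["A", "A", "B", "C", "A"],
}

def majority_vote(per_model_preds):
    """High-level fusion: for each sample, the class predicted by
    the most models wins."""
    n = len(next(iter(per_model_preds.values())))
    fused = []
    for i in range(n):
        votes = Counter(m[i] for m in per_model_preds.values())
        fused.append(votes.most_common(1)[0][0])
    return fused

fused = majority_vote(preds)
```

With an even number of voters, a tie-breaking rule (such as the control chart decision statistic used in the paper) is needed; the third model above plays that role in this sketch.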

Tuple-Based Access Control: a Provenance-Based Information Flow Control for Relational Data

Enforcement is a key issue for access control models. The use of cryptographic systems can ensure the security of data while it is transferred between PDSs. Whereas PDSs are assumed to be trusted by all users in the system, this is not the case for communication services. A solution is to encrypt tuples according to their s-tags, building on related work on Ciphertext-Policy Attribute-Based Encryption (CP-ABE) [6]. In CP-ABE, attributes are used to describe users' credentials, and a party encrypting data determines a policy for who can decrypt. Applied to TBAC, the idea amounts to letting PDSs encrypt the tuples they emit according to their s-tags. CP-ABE deals with ciphertext policies that can represent positive boolean formulae. As AttributeSet policies are isomorphic to positive boolean formulae, CP-ABE can be used to enforce them. Extending CP-ABE to deal with richer semirings, and ultimately N[X], is left open.
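CP-ABE itself requires pairing-based cryptography, but the access-structure side mentioned above, ciphertext policies as positive boolean formulae over attributes, can be sketched directly (the policy encoding and attribute names below are our own illustration):

```python
# A ciphertext policy is a positive boolean formula over attributes:
# either an attribute name (string), or a tuple ('and'|'or', sub...).
# A user whose attribute set satisfies the formula could decrypt.

def evaluate(policy, attributes):
    """Return True if the attribute set satisfies the policy."""
    if isinstance(policy, str):
        return policy in attributes
    op, *subs = policy
    results = (evaluate(s, attributes) for s in subs)
    return all(results) if op == "and" else any(results)

# Hypothetical policy: must be a doctor, in cardiology or emergency.
policy = ("and", "doctor", ("or", "cardiology", "emergency"))
```

Because the formula contains no negation, granting a user extra attributes can never revoke access, which is exactly the "positive" restriction the excerpt refers to.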

Usage control for data handling in smart cities

Keywords: Defeasible Logic, Usage Control, Data Handling, Smart Cities, Information Accountability.
I. INTRODUCTION
We are witnessing a new communication paradigm which goes beyond traditional interactions between people: one between devices, under the umbrella of the Internet of Things (IoT) and its underlying technologies. In the very near future, most of the plans and ideas defined for smart cities will become a reality. In this era, the deployment of shared platforms for the IoT enables citizens and groups of users to participate in both data collection and the emergence of new smart city services. Data can be collected from billions of interactions across a huge number of devices, forever altering the socioeconomic landscape [3]. In effect, this is the emergence of a marketplace for smart cities.

Access Control for HTTP Operations on Linked Data

5 Conclusions
We described an authorization framework for HTTP operations on Linked Data. The framework comes in three distinct configurations: Shi3ld-GSP (for the SPARQL 1.1 Graph Store Protocol) and Shi3ld for the Linked Data Platform (with and without the internal SPARQL endpoint). Our solutions feature attribute-based access control policies expressed with Web languages only. Evaluation confirms that Shi3ld-GSP is slower than its Shi3ld-LDP counterparts, due to the HTTP communication with the protected RDF store. Shi3ld-LDP with the internal SPARQL endpoint introduces a 3x delay in response time (when resources are protected by 5 access conditions). Nevertheless, under the same conditions, the SPARQL-less solution exhibits 25% faster response times. We show that response time grows linearly with the number of access conditions, and that the complexity of each access condition does not significantly impact the delay.

Mobile Field Data Entry for Concrete Quality Control Information

4 MULTIMODAL FIELD DATA COLLECTION
To facilitate speedy field data collection and timely decision making, especially in the case of field quality control inspection, it would be highly beneficial to use multimodal wireless handheld devices capable of delivering voice, text, graphics and even video. For example, "hands free" voice input can be used by a concrete technician in the field to enter inspection information using a hybrid phone-enabled PDA and a wireless, Bluetooth-enabled headset. This information could be entered directly into the inspection forms on the handheld device and stored locally in the embedded database, or transmitted wirelessly to the backend database server. Thus, field inspection information could be communicated in real time to facilitate timely decision making on the construction site and at the ready-mix plant. This information will be stored in the project database, where it can be retrieved easily if needed, for example in case of litigation. By combining a multimodal mobile handheld device with a GPS receiver and a Pocket GIS system, the gathered inspection information could be automatically linked to its exact geographical location. In addition, other environmental sensors, such as temperature and moisture sensors, could also be connected to the handheld device if needed (Giroux et al., 2002).

A Data-Driven Approach to Prediction and Optimal Bucket-Filling Control for Autonomous Excavators

of complex feedback control with sophisticated sensors and instrumentation technology is not practical, considering the harsh environments where excavators have to work. In exploring an alternative approach and a new methodology, we exploit the data. Excavation consists mostly of repetitive operations. Although the operations are performed under diverse conditions, we can obtain a large amount of data from both laboratory tests and field operations covering these conditions. This allows us to use the data for intelligent control of excavators. In a statistical modeling framework, we can deal with the highly nonlinear, distributed behaviors of soil without going through terramechanics-based parametric representations. We obtain a non-parametric, nonlinear model directly from the data, from which novel control methods can be derived.

Access Control Mechanisms in Named Data Networks: A Comprehensive Survey

4.4 Re-encryption-based Access Control
A re-encryption scheme [133] may use two or more encryption operations to provide authentication or to update the access rules [132]. Mangili et al. [79] design an encryption-based extension for ICN. The proposed scheme aims to: (a) enforce confidential data dissemination: the producer encrypts the content, while the intermediate nodes cache encrypted rather than plaintext content; (b) track content access: consumers are authenticated by the original producer and fetch the required decryption keys; and (c) support policy evolution: producers may update the access policy through key derivation and re-encryption. Although the proposed scheme is able to update access policies after the content has been published, there is no guarantee that all cached content instances are updated. Also, the system depends on the availability of the original producer for retrieving the content decryption keys. Zheng et al. [166] introduce a dual-phase encryption mechanism that combines a one-time decryption key, proxy re-encryption, and an all-or-nothing transformation. The original producer encrypts the original content with a key derived from its private key. When a consumer requests specific content, the associated edge router re-encrypts the content (which is already encrypted by the producer) with a random key (each content item has a different random key). The consumer needs both keys, from the original producer and from the edge router, to decrypt the content. Hence, the producer can control who accesses the content. This scheme requires two keys for encryption/decryption, so its scalability is an open question. Moreover, the system depends on the availability of the original producer and the trustworthiness of the edge routers.
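As a toy model of the dual-phase idea summarized above (XOR keystreams are not a secure cipher; this only illustrates why the consumer needs both the producer's and the edge router's keys, not how the actual scheme works):

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher' used purely for illustration."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

content = b"sensor reading: 42"
producer_key = os.urandom(16)   # derived from producer's private key
edge_key = os.urandom(16)       # fresh random key per content item

stored = xor(content, producer_key)   # encrypted form cached in the network
delivered = xor(stored, edge_key)     # edge router's re-encryption layer

# The consumer must remove both layers (XOR layers commute) to read it;
# knowing only one of the two keys leaves the content encrypted.
recovered = xor(xor(delivered, edge_key), producer_key)
```

The two-layer structure is what lets the producer keep control: the edge router alone cannot reveal the plaintext, and a consumer who never contacts the producer never obtains the first-layer key.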

Cryptographically enforced access control for user data in untrusted clouds

Chapter 1 Introduction
In today's web, a single person often uses multiple web services. Conceptually, the user has a single logical data set, and she selectively exposes a portion of that data to each web service. In practice, the services control her data: each service keeps its portion of the user's objects in a walled garden which neither the user nor external services can directly access. Although the user can currently use access control lists or adjust her privacy settings to restrict data sharing for different web services, several problems remain. First, the user has to trust that the cloud provider will honor these settings. Moreover, different providers might have different ways to set privacy policies, and these policies might not be sufficiently expressive. With these service-controlled data silos, users cede the ultimate power to set access controls and limit how their data is shared. Furthermore, simple operations like enumerating all of a user's cloud data become difficult, since a user's state is scattered across a variety of services which hide raw storage behind high-level, curated APIs.

Self-Triggered Control for Sampled-data Systems using Reachability Analysis

Keywords: Reachability; Invariance; Impulsive systems; Self-triggered control; Sampled-data systems.
1. INTRODUCTION
Technologies based on integrating digital controllers within physical systems are becoming more pervasive (intelligent buildings and cars, advanced manufacturing plants, smart medical devices, etc.). The interaction between the corresponding cyber and physical worlds defines the scope of this work. More precisely, we analyze and design the behavior of a sampler in such cyber-physical systems, where the instants at which sampling occurs strongly influence the stability and performance of the overall system. Given the dynamics of the system and the control law, the simplest strategy for a sampler is to sample periodically with a fixed sampling period (time-triggered sampling). Alternatively, this period can vary so that sampling occurs only when needed. In fact, implementing sampled-data systems with variable sampling periods has proved to be more efficient in terms of performance and resource utilization [Tabuada (2007); Donkers and Heemels (2012); Fiter et al. (2012)]. In the literature, two frameworks define the latter strategy: event-triggered [Tabuada (2007)] and self-triggered [Mazo et al. (2009); Fiter et al. (2012)] control. The first strategy requires dedicated hardware to continuously monitor the state of the plant and calls for sampling whenever necessary. The second strategy emulates the first but only requires knowing the state at the sampling instants, and thus results in less intensive on-line computations. This work proposes a self-triggered control strategy, obtained using reachability analysis, in order to define the sampling period as a function of the state. In other words, we define, using off-line computations, a fixed set of sampling periods as well as their associated regions of the state space. Then, in real time and at each sampling instant,
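The offline/online split described above can be sketched as a lookup table from state-space regions to precomputed sampling periods (the region bounds and periods below are invented for illustration; the paper derives them via reachability analysis):

```python
# Offline result (hypothetical): regions of the state space, here
# characterized by the state norm, mapped to sampling periods.
# Larger state norm -> faster sampling.
REGIONS = [
    (1.0, 0.50),           # |x| <= 1.0          -> period 0.50 s
    (5.0, 0.20),           # 1.0 < |x| <= 5.0    -> period 0.20 s
    (float("inf"), 0.05),  # |x| > 5.0           -> period 0.05 s
]

def next_period(x_norm: float) -> float:
    """Online step: at a sampling instant, look up the region
    containing the current state and return its sampling period."""
    for bound, period in REGIONS:
        if x_norm <= bound:
            return period
    raise ValueError("unreachable: last bound is infinite")
```

The online cost is a constant-time table lookup, which is exactly why self-triggered schemes avoid the continuous monitoring that event-triggered control requires.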

Connecting Distributed Version Control Systems Communities to Linked Open Data

LINA, Nantes University, France. Email: pascal.molli@univ-nantes.fr
Abstract: Distributed Version Control Systems (DVCS) such as Git or Mercurial allow communities of developers to coordinate and maintain well-known software such as the Linux operating system or the Firefox web browser. The Push-Pull-Clone (PPC) collaboration model used in DVCS generates a PPC social network in which DVCS repositories are linked by push/pull relations. Unfortunately, DVCS tools interoperate poorly and are not navigable. The first issue prevents the development of generic tools, and the second prevents network analysis. In this paper, we propose to reuse semantic web technologies to transform any DVCS into a social semantic web system. To achieve this objective, we propose SCHO+, a lightweight ontology for representing causal history sharing. This ontology allows each node of the PPC social network to publish semantic datasets. These semantic datasets can then be queried with link-traversal-based query execution for metrics computation and PPC social network discovery. We experimented with PPC network discovery and divergence metrics on real data from representative projects managed by different DVCS tools.

A Control Approach for Performance of Big Data Systems

for instance the case of tuning MapReduce's configuration, as underlined in (White, 2012; Wang et al., 2012; Herodotou and Babu, 2011), or of assuring performance objectives, as noted in (Xie et al., 2012; Vernica et al., 2012). By performance objective we usually mean the service time, that is, the time needed for the program running on the cloud to serve a client request. For a user to run a MapReduce job, at least three things need to be supplied to the framework: the input data to be treated, a Map function, and a Reduce function. From the control theory point of view, the Map and Reduce functions can only be treated as black-box models, since they are entirely application-specific and we assume no a priori knowledge of their behavior. Without some profiling, no assumptions can be made regarding their runtime, their resource usage, or the amount of output data they produce. On top of this, many factors (independent of the input data and of the Map and Reduce functions) influence the performance of MapReduce jobs: CPU, input/output and network skews (Tian et al., 2009); hardware and software failures (Sangroya et al., 2012); the node homogeneity assumption of Hadoop (the most widely used open source implementation of MapReduce) not holding up (Zaharia et al., 2008; Ren et al., 2012); and bursty workloads (Chen et al., 2012). All these factors affect MapReduce systems as perturbations. Concerning the performance modelling of MapReduce jobs, state-of-the-art methods mostly use job-level profiling. Some authors use statistical models made of several performance invariants, such as the average, maximum and minimum runtimes of the different MapReduce cycles (Verma et al., 2011; Xu, 2012). Others employ a static linear model that captures the relationship between job runtime, input data size, and the number of map and reduce slots allocated to the job (Tian and Chen, 2011). In both cases, the model parameters are found by running the job on a smaller subset of the input data and using linear regression to determine the scaling factors for different configurations. A detailed analytical performance model has also been developed for off-line resource optimization; see Lin et al. (2012). Principal Component Analysis has also been employed to find the MapReduce/Hadoop components that most influence the performance of MapReduce jobs (Yang et al., 2012). It is important to note that all the models presented predict the steady-state response of MapReduce jobs and do not capture system dynamics. They also assume that a single job runs at a time in a cluster, which is far from realistic. The performance model that we propose addresses both of these issues: it deals with a concurrent workload of multiple jobs and captures the system's dynamic behaviour.
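As a sketch of the static linear performance model cited above (the data points and the exact feature choice, input size plus the reciprocal of the slot count, are our own illustration, not taken from the cited work), the scaling factors can be recovered by least-squares regression:

```python
import numpy as np

# Hypothetical profiling runs: runtime modeled as
#   runtime ~= a * input_gb + b / slots + c
samples = np.array([
    # input_gb, slots, runtime_s
    [10,  4, 120.0],
    [20,  4, 220.0],
    [10,  8,  70.0],
    [40,  8, 370.0],
    [20, 16, 145.0],
])
X = np.column_stack([samples[:, 0],          # input size feature
                     1.0 / samples[:, 1],    # slot-count feature
                     np.ones(len(samples))]) # constant offset
coef, *_ = np.linalg.lstsq(X, samples[:, 2], rcond=None)

def predict(input_gb, slots):
    """Predict job runtime from input size and allocated slots."""
    return float(coef @ np.array([input_gb, 1.0 / slots, 1.0]))
```

The data above happen to fit the model exactly (a=10, b=400, c=-80), so `predict(30, 8)` should return 270 s; real profiling data would leave a residual.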

Generating data sets as inputs of reference for cyber security issues and industrial control systems

controls, to execute code, and to raise alarms if needed. In this paper, we present in the next section one example of such a control, which can ensure consistency between both subsystems and a global property for our ship demonstrator. a) A ship operates in a complex environment composed of numerous sensors and effectors: a sensor can measure the temperature of the engine parts, which is then fed into a controller to ensure the safe operation of the engine; a GPS chip can acquire the ship's position and display it on electronic maps; and so on. All of these sensors and effectors are complex in themselves, and we chose to exclude them from our platform, as their integration would go beyond the scope of our project and its timeline. Nevertheless, to achieve a higher similarity with a real ship, we decided to simulate them. For this purpose, we configured and programmed Arduinos, which can provide (at low cost) input/output data to our platform components. They can also be programmed easily and support various schemes to manage and contribute to various scenarios.

A case study of optimal input-output system with sampled-data control: Ding et al. force and fatigue muscular control model

The article is organized as follows. In Section 2, we give a brief presentation of the Ding et al. force-fatigue model based on [20]. In Section 3, the dynamics of the force model is briefly investigated to describe its input-output properties. In Section 4, the force-fatigue model is analyzed in the framework of geometric optimal sampled-data control systems, and preliminary results are presented for a simplified model obtained through model reduction and an input transformation. Section 5 is devoted to the observer description. In Section 6, the MPC method is presented, using a further discretization of the dynamics, concluding with the algorithm used to compute the optimized pulse trains in practice. Numerical results are presented in the final Section 7.

Impulse and sampled-data optimal control of heat equations, and error estimates

In many cases, impulse control is an interesting alternative, not only to usual discretization schemes, but also as a way to deal with systems that cannot be acted on by means of continuous control inputs, as often occurs in applications. For example, relevant controls for acting on a population of bacteria should be impulsive, so that the density of the bactericide may change instantaneously; indeed, continuous control would enhance the drug resistance of the bacteria. For more discussion and examples of impulse control, and of impulse control problems in infinite dimension, we refer the reader to [3, 36, 35] and the references therein. It is also interesting to note that impulse control is likewise an alternative to the well-known concept of digital control, or sampled-data control, which is much used in the engineering community.

Composing data and control functions to ease virtual networks programmability

118 Route de Narbonne, F-31062 Toulouse, France. Email: {FirstName.LastName}@irit.fr
Abstract: This paper presents a new domain-specific language, called AirNet, to design and control virtual networks. The central feature of this language is that it relies on network abstractions in order to spare operators the trouble of dealing with the complex and dynamic nature of the physical infrastructure. One novelty of this language is to integrate a network abstraction model that offers a clear separation between simple transport functions and advanced network services. These services are classified into three main categories: static control functions, dynamic control functions, and data functions. In addition, we provide an easy and elegant way to program these functions using the decorator design pattern. The AirNet language is supported by a runtime system handling, in particular, the virtual-to-physical mapping. Although it is still at the prototype stage, this runtime has been successfully tested on several use cases.
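As an illustration of programming a data function with the decorator design pattern (the decorator name and packet representation below are invented for this sketch, not AirNet's actual API):

```python
import functools

def data_function(fn):
    """Mark a function as a data function: it receives a packet and
    returns a (possibly modified) packet to re-inject into the
    network. Hypothetical stand-in for an AirNet-style decorator."""
    @functools.wraps(fn)
    def wrapper(packet):
        out = fn(dict(packet))            # work on a copy of the packet
        out["processed_by"] = fn.__name__ # tag which function handled it
        return out
    wrapper.kind = "data"                 # lets a runtime classify it
    return wrapper

@data_function
def rewrite_dst(packet):
    # Example data function: rewrite the destination address,
    # e.g. for a load-balancing service.
    packet["dst"] = "10.0.0.42"
    return packet

pkt = rewrite_dst({"src": "10.0.0.1", "dst": "10.0.0.2"})
```

The decorator cleanly separates the service logic (the function body) from the plumbing a runtime needs (classification and bookkeeping), which is the separation of concerns the abstract emphasizes.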

Composing data and control functions to ease virtual networks programmability

2) Second phase: incoming packets
After the initialization phase, each time a packet arrives at a switch and matches a controller rule, it is forwarded directly to that controller. The AirNet runtime system then executes the general algorithm shown in Fig. 4. It first looks up the appropriate bucket that is going to handle the packet (i.e., the bucket whose match covers the packet header) and uses it to call the network function included in the policy. Applying a network function processes the packet and returns the result. In the case of a data function, the result is a modified packet that is re-injected into the network and transported to its final destination following the existing policies. In the case of a dynamic control function, the result consists of, first, a new policy that is compiled and enforced on the physical infrastructure and, second, the re-injection of the incoming packet into the network, so that it is transported according to the newly generated policies as well as the existing ones.
