Abstract— Virgo is an experiment aiming at the detection of gravitational waves emitted by astrophysical sources. Its detector, based on an interferometer with 3 km arms, is a complex setup which requires several digital control loops running at up to 10 kHz, an accurate and reliable central timing system, and an efficient data acquisition system, all of them distributed over 3 km. We first give an overview of the main hardware and software components developed for the data acquisition system (DAQ) and of its current architecture. Then, we briefly discuss its connections with the interferometer's controls, especially through the automation of the interferometer's startup procedure. Next, we describe the tools used to monitor the DAQ and the performances we measured with them. Finally, we describe the tools developed for online detector monitoring, a mandatory complement to the DAQ for the commissioning of the Virgo detector.
CONSIDERING local neutron flux measurement techniques in nuclear research reactor environments, as well as reactor command and control, online gaseous detectors suffer from the lack of up-to-date, integrated signal acquisition systems on the nuclear instrumentation market. CEA's needs for research reactor neutron field characterization led to the development of MONACO, standing for 'Multichannel Online Neutron Acquisition in Campbell mOde', a multipurpose, integrated data acquisition system for online neutron and gamma measurements. After two years of development, this paper presents the features of the recent second version and the validation of two MONACO v2 prototypes in the Slovenian TRIGA Mark II reactor.
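To make the "Campbell mode" mentioned in the name concrete: by Campbell's theorem, the variance of the fluctuating detector current is proportional to the neutron interaction rate, which makes a variance-based estimate robust at high fluxes. The following is only an illustrative sketch of that principle, not the MONACO signal chain; the function name and the toy signal are invented for the example.

```python
import random

def campbell_variance(samples, window):
    """Estimate the Campbell (variance-mode) signal over consecutive windows.

    In Campbell mode the variance of the digitized detector current,
    computed over short windows, tracks the neutron interaction rate.
    """
    out = []
    for i in range(0, len(samples) - window + 1, window):
        chunk = samples[i:i + window]
        mean = sum(chunk) / window
        out.append(sum((s - mean) ** 2 for s in chunk) / window)
    return out

# Toy signal: a DC baseline current plus random pulse-driven fluctuations.
random.seed(0)
signal = [10.0 + random.gauss(0.0, 1.0) for _ in range(1000)]
estimates = campbell_variance(signal, window=100)
```

Each entry of `estimates` tracks the fluctuation variance (about 1.0 here), independent of the DC baseline, which is the point of Campbelling.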
RECEIVED: 20 November 2018 / RECEIVED IN FINAL FORM: 15 February 2019 / ACCEPTED: 16 February 2019 Abstract: This paper is devoted to the design and simulation of an automatic data acquisition system for solar module characteristics. The system is composed of three parts. The first part is devoted to the acquisition of the electrical parameters of the solar module (current, voltage, and power) by automatic means, using an automatic variable load. The second part focuses on the devices which measure the ambient temperature and the solar irradiation. In the last part, we study the design of an acquisition interface and a database for saving the data. The simulation results validate the correct operation of the measuring bench.
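The first part of such a bench, sweeping a variable load to record current, voltage, and power, can be simulated with a simple single-diode photovoltaic model. This is a hedged sketch under assumed parameters (the model constants and function name are not from the paper, and series/shunt resistances are neglected):

```python
import math

def iv_sweep(i_ph=5.0, i_0=1e-9, n_vt=0.7, points=200):
    """Simulate a load sweep of a PV module from short circuit to open circuit.

    Simplified single-diode model: I = I_ph - I_0 * (exp(V / (n*Vt)) - 1).
    Returns (V, I, P) triples, as an automatic variable load would record.
    """
    v_oc = n_vt * math.log(i_ph / i_0 + 1.0)  # open-circuit voltage
    data = []
    for k in range(points + 1):
        v = v_oc * k / points
        i = i_ph - i_0 * math.expm1(v / n_vt)
        data.append((v, i, v * i))
    return data

curve = iv_sweep()
# Maximum power point, as the acquisition software would extract it.
v_mpp, i_mpp, p_mpp = max(curve, key=lambda t: t[2])
```

At the first sample the current equals the short-circuit current `i_ph`, at the last sample it drops to zero, and the maximum of the third column gives the maximum power point.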
Fig. 2. The Timing Architecture.
The bc635/637 VME Time and Frequency Processor provides a GPS clock (5 MHz) to the timing board in the master timing crate, which generates the four timing signals. These four signals are translated from TTL to optical and sent over optical fibers to each building: the central building, the mode cleaner building, and the north and west end arm buildings. In each building, a distributor crate translates the optical signals back to TTL and fans them out. For each crate involved in control or readout, a timing board receives these four TTL signals as input. The programmable VME timing board allows each data provider to build its own signals by dividing the input fast clock, retriggered by the input frame signal. These signals can be used to drive dedicated boards or to enable VME interrupts.
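The divide-and-retrigger scheme of the programmable timing board can be sketched in a few lines: a counter divides the fast clock, and the frame signal resets the counter so every derived signal stays phase-locked to the distributed frame. This is a behavioral illustration only; tick counts and the function name are invented, not the board's firmware.

```python
def derived_clock(fast_ticks, divisor, frame_period):
    """Derive a local timing signal by dividing the fast clock.

    The divider counter is re-armed at every frame boundary, so each
    derived signal keeps a fixed phase with respect to the frame,
    as the programmable timing board does for its data providers.
    """
    edges = []
    counter = 0
    for t in range(fast_ticks):
        if t % frame_period == 0:   # frame signal resets the divider
            counter = 0
        if counter % divisor == 0:  # emit an edge every `divisor` fast ticks
            edges.append(t)
        counter += 1
    return edges

# A slower signal derived from a fast clock, with a frame every 1000 ticks.
edges = derived_clock(fast_ticks=3000, divisor=100, frame_period=1000)
```

Even if the divisor did not divide the frame period evenly, the reset at each frame boundary would realign the derived edges, which is what keeps all data providers synchronous.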
Query decomposition and execution. This phase is similar to that in data integration systems, and APPA reuses well-known, yet sophisticated, techniques. Since some nodes in P″ may hold only subsets of Q's relations, query decomposition produces a number of subqueries (not necessarily distinct), one for each node, together with a composition query that integrates the intermediate results, e.g., through join and union operations. The subqueries are then sent to the nodes in P″, which reformulate them on their local schemas (using the node mappings), execute them, and send the results back to the sending node, which integrates them. Result composition can also exploit parallelism using intermediate nodes. For instance, consider relations r1 and r2 defined over CSD r and relations s1 and s2 defined over CSD s, each stored at a different node, and the query select * from r, s where r.a = s.a
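The decomposition step for this example can be sketched as follows: each node receives a subquery over the fragments it stores, and the composition plan unions the fragments of each logical relation before joining. This is a schematic illustration of the idea, not APPA's actual decomposition code; node names and the fragment-naming convention (`r1`, `r2` belonging to relation `r`) are taken from the example above.

```python
def decompose(query_relations, node_catalogs):
    """Assign each node a subquery over the fragments it stores, and build
    a composition plan: union the fragments of each logical relation,
    then join the unions.

    `node_catalogs` maps node id -> set of stored fragments, where a
    fragment name like 'r1' belongs to the logical relation 'r'.
    """
    subqueries = {}
    for node, fragments in node_catalogs.items():
        relevant = {f for f in fragments
                    if f.rstrip("0123456789") in query_relations}
        if relevant:
            subqueries[node] = sorted(relevant)
    plan = {rel: sorted(f for frags in subqueries.values() for f in frags
                        if f.rstrip("0123456789") == rel)
            for rel in query_relations}
    return subqueries, plan

# Four nodes, each storing one fragment, as in the example query over r and s.
subs, plan = decompose(
    {"r", "s"},
    {"n1": {"r1"}, "n2": {"r2"}, "n3": {"s1"}, "n4": {"s2"}},
)
```

Here `plan` records that r is the union of r1 and r2 and s the union of s1 and s2; intermediate nodes can compute these unions in parallel before the final join on `a`.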
becomes an independent variable, uncorrelated with respect to x, y, and z.
This paper has presented a recursive optimisation method that, when combined with sparse range data, can produce high-quality, high-resolution range images. The algorithm first creates a very rough and distorted model of an object and then recursively optimises it using new range information and a standard ICP algorithm. The method was tested and integrated with a single-spot laser scanner. Real-time tracking of freely moving objects while creating high-resolution images was demonstrated, providing a truly high-accuracy hand-held 3D laser scanning system.
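The core ICP step used for this kind of alignment can be illustrated in a simplified 2D, pure-Python form: match each scan point to its nearest model point, then compute the closed-form rigid transform that best aligns the matches. This sketch is for illustration only; it uses brute-force nearest-neighbour search and is nothing like the paper's real-time 3D implementation.

```python
import math

def icp_step(model, scan):
    """One ICP iteration in 2D: nearest-neighbour matching followed by the
    closed-form optimal rotation + translation for the matched pairs."""
    # Brute-force nearest-neighbour correspondences.
    pairs = [(p, min(model, key=lambda m: (m[0]-p[0])**2 + (m[1]-p[1])**2))
             for p in scan]
    n = len(pairs)
    # Centroids of the matched scan and model points.
    cx = sum(p[0] for p, _ in pairs) / n
    cy = sum(p[1] for p, _ in pairs) / n
    mx = sum(m[0] for _, m in pairs) / n
    my = sum(m[1] for _, m in pairs) / n
    # Optimal rotation angle from cross/dot sums of centred coordinates.
    s_cross = sum((p[0]-cx)*(m[1]-my) - (p[1]-cy)*(m[0]-mx) for p, m in pairs)
    s_dot = sum((p[0]-cx)*(m[0]-mx) + (p[1]-cy)*(m[1]-my) for p, m in pairs)
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    tx, ty = mx - (c*cx - s*cy), my - (s*cx + c*cy)
    return [(c*x - s*y + tx, s*x + c*y + ty) for x, y in scan]

# A scan that is the model shifted by (0.2, 0.1) snaps back in one step,
# because every nearest-neighbour match is already correct.
model = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]
scan = [(x + 0.2, y + 0.1) for x, y in model]
aligned = icp_step(model, scan)
```

In the full method this step is iterated: each new sparse range measurement refines the rough initial model, and the transform re-registers the scanner pose against it.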
things we can wear. Do you think that in this store, it will be able to buy…’), and the child was
then presented with one of three response types: grammatical, ungrammatical determiner (e.g., *une nouveau balai), and ungrammatical adjective (e.g., un *nouvelle balai). The child was asked to respond YES or NO by pushing a button. Reaction times (RTs) and error rates were recorded. Only the older group was able to perform semantic categorization. They showed an agreement effect, producing faster RTs on concordant stimuli than on discordant ones. However, this effect was found only for the discordant determiner condition and did not reach significance for the adjective condition, although a trend was found. Error rates in the 6-year-olds were similar to those found in adults (5.8% vs. 4.1%). These data show that mastery of DP agreement emerges sometime between 4 and 6 years of age in French-speaking children, and that definite determiners are reliably produced with appropriate gender at 4 years in French. However, the data on adjectives do not show such robust effects.
3. SCHEMA MAPPING MAINTENANCE In dynamic networks such as peer-to-peer architectures, nodes may change not only their data but also their schemas and their query domain. In this situation, schema mappings can become obsolete, so a maintenance phase is needed to keep them consistent. Several solutions have been proposed to automate the adaptation of mappings when schemas evolve. These solutions can be classified into two categories: (1) the incremental method, which implements changes separately for each type of change occurring in the source and target schemas; and (2) the composition approach, which relies on a mapping-based representation of schema evolution and is more flexible and expressive than the change-based representation.
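The incremental method can be pictured as a dispatcher with one handler per change type, each patching only the mapping entries it touches. The following is a deliberately minimal sketch under invented names (the mapping shape, change encoding, and function name are all hypothetical):

```python
def adapt_mapping(mapping, change):
    """Incremental mapping maintenance: apply one schema change at a time,
    with a dedicated handler per change type.

    `mapping` maps source attributes to target attributes; `change` is a
    (kind, *args) tuple describing one evolution of a schema.
    """
    kind = change[0]
    if kind == "rename_source":            # a source attribute was renamed
        _, old, new = change
        return {(new if k == old else k): v for k, v in mapping.items()}
    if kind == "rename_target":            # a target attribute was renamed
        _, old, new = change
        return {k: (new if v == old else v) for k, v in mapping.items()}
    if kind == "drop_source":              # a source attribute was removed
        _, gone = change
        return {k: v for k, v in mapping.items() if k != gone}
    raise ValueError("unsupported change: " + kind)

m = {"name": "full_name", "addr": "address"}
m = adapt_mapping(m, ("rename_source", "addr", "street_addr"))
m = adapt_mapping(m, ("drop_source", "name"))
```

The composition approach would instead represent each evolution as a mapping in its own right and compose it with the existing mapping, which avoids enumerating change types at the cost of a more expressive formalism.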
In this paper, we describe our attempt to build a general system framework for supporting collaborative publishing and searching services. The system accepts data objects described with user-created metadata, stores and classifies them, and provides various querying and searching interfaces such as browsing, keyword search, and structured query. The metadata created by users serves both their own needs (labeling the information they publish to the system) and the system's need to organize all the published information. The data unit, which users use to describe published information of any kind, consists of a title, a number of fields, and a set of tags. Basically, a field is an attribute/value pair characterizing a certain property of an object, e.g., color:yellow for a puppy. A tag is a word or a phrase the user uses as a "keyword" to characterize the published object. For example, the tags of a puppy could be animal, dog, etc. One study discusses 7 types of tags a user uses to label URL bookmarks on the Delicious website. Figure 1 shows two example data units that describe different types of information. To illustrate, the left data item describes the blog of uzzer. It uses four fields to show the location, author, type, and language of the blog. In addition, it has 9 tags that serve as "keywords" for the blog.
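The title/fields/tags data unit described above maps naturally onto a small record type. The sketch below is an illustration of that structure, with a toy keyword-matching method standing in for the system's search interface; the class and method names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class DataUnit:
    """A published data object: a title, attribute/value fields, and tags."""
    title: str
    fields: dict              # attribute -> value, e.g. {"color": "yellow"}
    tags: set = field(default_factory=set)

    def matches_keyword(self, word):
        """Naive keyword search over the tags and the title."""
        return word in self.tags or word in self.title.lower()

# The puppy example from the text: one field and two tags.
puppy = DataUnit("my puppy", {"color": "yellow"}, {"animal", "dog"})
```

Fields support structured queries (exact attribute/value matching), while tags feed keyword search; having both in one unit is what lets the system offer browsing, keyword, and structured interfaces over the same objects.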
constructs or crystallization conditions, for example, than was possible in the past.
In common with the genome sequencing projects, the rapid accumulation of large amounts of data by an SG project renders the conventional method of archiving and tracking experimental data via laboratory notebooks highly inefficient. The problem is particularly acute in structural biology because each step of the experimental pipeline involves different techniques and results. There is therefore a need for computerized experimental data information management systems in structural biology, and for structural genomics projects in particular. Such systems, often called Laboratory Information Management Systems (LIMS), have been developed in the past decade for genomics laboratories involved in sequencing and microarray analysis [5,6]. Structural genomics presents unique requirements for data tracking systems, and these are outlined in detail in the following section. We have developed an experimental data management system for structural genomics, the SPEX Db (Structural Proteomics EXperimental Database). The system serves both as a LIMS and as a tool for SG target selection and management. It follows a standard three-tier client/server architecture, using Oracle 8i for the database tier and Netscape iPlanet Web Server (Enterprise Edition 4.1) for the server tier. The client tier is expected to be a standard web browser such as Microsoft Internet Explorer or Netscape Navigator. The SPEX Db can accommodate any type of protein target and is currently used by over 10 structural biology projects throughout Canada. The primary user of the system is the Montreal-Kingston Bacterial Structural Genomics Initiative (M-KBSGI; http://sgen.bri.nrc.ca/brimsg/bsgi.html), which selects as targets the ORFs of bacterial genomes such as E. coli K12 and the pathogenic E. coli strains O157 and CFT073. In this paper we describe the SPEX Db in detail, with reference to its actual use by the M-KBSGI. Central to the system is the Oracle-based relational database. We illustrate the database schema and describe the user interface to the data. In addition, we describe the interactions of the system with external sources of data. These facets of the system aid SG target selection and enable monitoring of the status of an SG project, both at the level of individual targets and as a whole. The target selection and external interaction facets of the SPEX Db differentiate it from other data management systems recently developed for structural genomics, such as SPINE, SESAME, or HalX, which are primarily devoted to experimental data tracking, as for a traditional LIMS system.
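The core of any such target-tracking database reduces to a target table joined to a per-step experiment log, so that status can be queried per target or project-wide. The SQLite sketch below is purely illustrative of that relational pattern; the table and column names are invented and are not the actual SPEX Db (Oracle) schema.

```python
import sqlite3

# Hypothetical two-table core of a target-tracking database: one row per
# protein target, one row per experimental step.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE target (
        id INTEGER PRIMARY KEY,
        orf_name TEXT NOT NULL,
        organism TEXT NOT NULL
    );
    CREATE TABLE step (
        target_id INTEGER REFERENCES target(id),
        stage TEXT NOT NULL,     -- e.g. cloning, expression, crystallization
        outcome TEXT NOT NULL    -- e.g. success, failure, in_progress
    );
""")
db.execute("INSERT INTO target VALUES (1, 'yjgF', 'E. coli K12')")
db.executemany("INSERT INTO step VALUES (?, ?, ?)",
               [(1, "cloning", "success"), (1, "expression", "in_progress")])
# Project status: every recorded stage and outcome, per target.
rows = db.execute("""
    SELECT t.orf_name, s.stage, s.outcome
    FROM target t JOIN step s ON s.target_id = t.id
""").fetchall()
```

Aggregating the same join per stage (e.g., counting outcomes) gives the whole-project view; filtering on `target_id` gives the per-target view.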
In Caniou et al. [2014], the authors present a basic implementation of the OGF standard GridRPC Data Management API (Caniou et al. [2012]) and its integration into two different middlewares. The API takes into account both synchronous and asynchronous calls, while the Data Management standard provides a modular architecture that ensures immediate portability and interoperability between the API and the middlewares. No theoretical study of the model is provided. Our proposal is complementary to this approach: it provides transparent data access and makes our API completely independent of the chosen middleware. Moreover, both theoretical and experimental studies of scalability are given.