The Ledger Manager is a two-stage pipeline that runs asynchronously with respect to the Block Structure Manager. Its first stage, the transaction sequencer, runs in a loop, continuously polling the Block Structure Database and trying to confirm new transactions. It starts by updating the list of votes cast on each proposer block. To avoid wasteful work, it caches the vote counts and the tips of the voter chains, and on each invocation it only scans through the new voter blocks. It then tries to confirm a leader for each level in the proposer block tree as new votes are cast, according to the rules specified in §5.3. When a leader is selected, it queries the Block Database to retrieve the transaction blocks confirmed by the new leader and assembles a list of confirmed transactions. The list is passed on to the second stage of the pipeline, the ledger sanitizer. This stage maintains a pool of worker threads that execute the confirmed transactions in parallel. Specifically, a worker thread queries the UTXO Database to confirm that all inputs of the transaction are present, that their owners match the signatures of the transaction, and that the total value of the inputs is no less than that of the outputs. If execution succeeds, the outputs of the transaction are inserted into the UTXO Database and the inputs are removed.
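As a concrete illustration, the following is a minimal sketch of the per-transaction check a sanitizer worker thread might perform, assuming a simple dictionary-backed UTXO store; the names (Outpoint, Utxo, Tx, verify_sig, execute_transaction) are illustrative assumptions, not the actual implementation's API.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

Outpoint = Tuple[bytes, int]   # (transaction hash, output index)

@dataclass
class Utxo:
    owner: bytes               # public key controlling this output
    value: int

@dataclass
class Tx:
    hash: bytes
    inputs: List[Outpoint]
    signatures: List[bytes]    # one signature per input
    outputs: List[Utxo]

def verify_sig(owner: bytes, sig: bytes, msg: bytes) -> bool:
    # Placeholder: a real node verifies sig over msg against the owner's key.
    return True

def execute_transaction(utxo_db: Dict[Outpoint, Utxo], tx: Tx) -> bool:
    """Validate a confirmed transaction against the UTXO set and apply it."""
    # 1. Every input must reference a currently unspent output.
    spent = [utxo_db.get(op) for op in tx.inputs]
    if any(u is None for u in spent):
        return False
    # 2. Each input's owner must match the corresponding signature.
    if not all(verify_sig(u.owner, s, tx.hash)
               for u, s in zip(spent, tx.signatures)):
        return False
    # 3. Total input value must be no less than total output value.
    if sum(u.value for u in spent) < sum(o.value for o in tx.outputs):
        return False
    # Execution succeeded: remove the spent inputs, insert the new outputs.
    for op in tx.inputs:
        del utxo_db[op]
    for i, out in enumerate(tx.outputs):
        utxo_db[(tx.hash, i)] = out
    return True
```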
A number of different ABCM implementations now exist across different manufacturing processes. The ABCM implementation team at BCAG developed a framework that will serve as a baseline for all future implementations. The team uses ABCM as a support tool for BCAG's lean initiatives, while at the same time linking typical manufacturing floor performance metrics to financial performance metrics such as cost and return on assets. ABCM enables BCAG to address different cost-related issues such as cost of quality, make-versus-buy decisions, and optimal asset management. It enhances the lean manufacturing effort by accurately identifying the true ownership cost of a product or process. Although the ABCM effort is only 2 years old, BCAG is striving to make ABCM and activity analysis the focal point of all cost management. In turn, the BCAG ABCM implementation team hopes to export ABCM techniques to all other operations in the facility, and to migrate the facility's cost accounting system from its present status to a so-called Stage IV, where all standard financial reporting originates from activity analysis data collected from the facility's operations. To date, the implementation process has been slow, mostly due to lack of upper management support. To counter this problem, the ABCM implementation team has found that continuously communicating the potential benefits of implementing ABCM is critical to building support from upper management, while avoiding the perception from below that ABCM is "just another lean initiative of the week."
being either soft- or non-real-time code. For instance, the US Navy's DD-1000 Zumwalt class destroyer is rumored to have millions of lines of code in its shipboard computing system, of which only small parts have real-time constraints. Typical programming practices for sharing data would involve synchronizing access to the data. In a real-time system, this might lead to unbounded blocking of the real-time thread, so-called priority inversion, causing serious deadline infringements. One of the key design decisions of the Real-time Specification for Java (RTSJ) was to address these problems with a programming model that restricts expressiveness to avoid unwanted interactions with the virtual machine, and with the garbage collector in particular. The RTSJ introduced the NoHeapRealtimeThread for this purpose, and also proposed solutions to cope with priority inversion. As we discuss in the related work, however, experience implementing [5,13,21,2] and using [8,20,7,22,24] the RTSJ revealed a number of serious deficiencies. More recently, alternatives to NoHeapRealtimeThread have been proposed, such as Eventrons and Exotasks from IBM Research, as well as Reflexes and StreamFlex.
CLUPI, the high-performance colour close-up imager, plays an important role in attaining the mission objectives: it is the equivalent of the hand lens that no geologist is without when undertaking field work. In a typical field scenario, the geologist will use his/her eyes to make an overview of an area and the outcrops within it to determine sites of particular interest for more detailed study. In the ExoMars scenario, the PanCam wide angle cameras (WACs) will be used for this task. After having made a preliminary general evaluation, the geologist will approach a particular outcrop for closer observation of structures at the decimetre to subdecimetre scale (ExoMars' High Resolution Camera) before finally getting very close up to the surface with a hand lens (ExoMars' CLUPI), and/or taking a hand specimen, for detailed observation of textures and minerals. Using structural, textural and preliminary compositional analysis, the geologist identifies the materials and decides whether they are of sufficient interest to be subsampled for laboratory analysis (using the ExoMars drill and laboratory instruments). Given the time and energy expense of drilling and analysing samples in the ExoMars laboratory, preliminary screening of the materials to choose those most likely to be of interest is essential. ExoMars, not having external analytical instruments, will be choosing the samples exactly as a field geologist does: by observation (backed up by years and years of experience in rock interpretation in the field). Because the main science objective of ExoMars concerns the search for life, whose traces on Mars are likely to be cryptic, close-up observation of the rocks and granular regolith will be critical to the decision as to whether to drill and sample the nearby underlying materials. Thus, CLUPI is the essential final step in the choice of drill site. CLUPI's observations are not limited to rock outcrops, however; they also serve other purposes. CLUPI, placed on the drill box, can observe the placement of the drill head. It will also be able to observe the fines that come out of the drill hole, including any colour stratification linked to lithological changes with depth. Finally, CLUPI will provide detailed observation of the surface of the drilled core materials when they are in the sample drawer, at a spatial resolution of 15 micrometres/pixel, in colour.
3.2. Performance and decision-making processes

The basic purpose of any measurement system is to provide feedback, relative to predefined goals, that increases the chances of achieving these goals efficiently and effectively. Measurement gains true value when used as the basis for timely decisions. Consequently, many PMS methods link performance to decisions. The strategic PMS (Vitale et al. 1994), the performance-measurement questionnaire (Bourne et al. 2003, Chan et al. 2006), the strategic measurement analysis and reporting technique system (Cross and Lynch 1989) and Cambridge University's PMS method (Bourne et al. 2003) can be quoted as examples. They insist on the need to split decisions into several levels depending on their weight on the organisation and their time effect. Thus, decisions do not have the same impact on the system, depending on whether the level is strategic, tactical or operational. These methods also look for the sensitivity between KPI variations and alternative decisions by direct investigation. But if the information on performance is condensed in KPIs, it is also possible to synthesise it on decisions using the well-known concept of performance determinants, first introduced by the balanced scorecard method (Kaplan and Norton 1996). Performance determinants were originally defined as control variables, because one of the main criteria for selecting them is a sensitivity evaluation of their influence on the system.
1 Department of Mechanical Construction and Production, Laboratory Soete, Ghent University; 2 Laboratory of Mechanics, Surfaces and Materials Processing, Université Lille
The scratching process is a well-known concept, usually defined as a kind of surface abrasion in which plastic deformation is promoted by relative friction between a soft phase and a hard indenter. For materials to function efficiently and effectively, material loss must be reduced to a minimum, or even to zero. Polymers are highly sensitive to wear and scratch damage, and their various modes of deformation, such as tearing, cracking, delamination, and abrasive and adhesive wear, vary within a narrow range of contact variables such as applied normal load, sliding velocity, interfacial lubrication and testing temperature. This is particularly important when the tribological performance of these materials is improved by adding various types of fillers, such as carbon fibers, graphite, PTFE, TiO2 and ZnS. Although polymer nanocomposites have advantages over microcomposites from the viewpoint of wear and scratch damage, the underlying mechanism of damage in the single-asperity mode is still unclear. The goal of this study is to experimentally evaluate the deformation modes and the friction processes involved during the scratching of polymer nanocomposites. The scratches were produced on a semi-crystalline polyetheretherketone (PEEK) surface using a Rockwell C diamond indenter, which was pressed onto the flat surface of each sample until a complete load-indentation depth curve was obtained. The scratched surfaces were then assessed with an optical microscope and a scanning electron microscope (SEM) to determine the prevailing deformation mechanism and the geometry of the damage.
4. Representations of competence and achievement goals: According to Nicholls, there are two different ways in which individuals assess their skills. The first, self-referenced, involves comparison against internal or objective standards; the subject's attention then turns to his or her personal progress or to success on a particular task. The second, normatively referenced, calls on a process of social comparison, results being motivating only insofar as they allow the demonstration of a skill superior to that of others. Depending on the way in which their competence is represented, people will tend to set two types of goals. In the case of a self-referenced representation, they will preferentially orient themselves towards goals focused on mastering tasks (Nicholls, 1984a, 1984b), also called learning goals (Dweck, 1986) or goals of control (Ames, 1984). A normatively referenced representation will encourage the adoption of ego-involvement goals (Nicholls, 1984a, 1984b), also referred to as performance goals (Dweck, 1986) or goals centred on ability (Ames, 1984).
Student contracts are mainly used during holiday periods. Temporary agency work is used intensively in most of the firms throughout the year. In Hambac, for instance (which produces mainly sliced ham and bacon), temps make up between 15% and 20% of all workers, owing to high absenteeism (up to 15% among operators) and the large number of days off due to the 35-hour week. In the other firms,
5.4 Sensitivity to humidity variation
The sensitivity of the HV system to environmental humidity has been studied at CERN. At the test bench, all the elements of a full super-drawer, apart from the HV and LV power supplies, were placed inside a climate chamber. For low humidity levels (below 60%), no HV trip occurred on the 48 channels over a period of 8 weeks. Several trips occurred after 7 hours for humidity values above 70%, and after 24 hours for values between 60% and 70%. On the basis of these results, it was decided to inject dry air into the girder during ATLAS operation to maintain the humidity below 60% (see Section 3).
Keywords: safety, adjustment, performance, work, resilience
PRODUCTIVITY AND SAFETY: ADJUSTMENTS AT WORK IN SOCIO-TECHNICAL SYSTEMS
Abstract: The thesis presents the findings from a study of the performance adjustments made by human operators in the course of routine work. The findings take the form of a comprehensive theory and a method. The adjustments are changes to the natural flow of work, made to avoid a situation considered undesirable, to compensate for a temporary lack of resources, equipment, or time, or to maintain or restore control over the operation of a socio-technical system. The thesis describes a number of events in which such adjustments occurred, and identifies the reasons behind the adjustments and their consequences for both safety and productivity. The identification of these two elements leads the research toward the development of a classification of adjustments in terms of their work conditions, their underlying motivations, and their observable effects. This classification may be used by anyone concerned with maintaining a proper balance between safety and productivity, by indicating which practices should be facilitated and improved upon, and which should be reduced or avoided altogether. The thesis uses data obtained from observation of various activities carried out aboard natural gas production platforms in the North Sea. The use of the classification is described as a method for gauging performance adjustments. Future research based on this study should focus on refining the classification proposed here, and on developing methods to support the management of performance adjustments.
On small datasets such as the Kasumi Cell Line, the genetic algorithm performs very well, and even a single run of the local implementation with a population size equal to the number of conditions (here, m = 10) finds the best solution. However, as soon as the number of conditions in the dataset increases (e.g. Yeast Cell Cycle, with m = 17), the same local implementation is more limited, and some of the solutions it provides are quite far from the optimum. The parallel implementation (with local parameters unchanged and sixteen islands) still provides optimal solutions. To further demonstrate the interest of this parallel implementation, several runs of each implementation are performed. Results are shown in Table 1. The local implementation finds each optimal solution in at least 10% of all runs, but just under half of them are identified every time. The parallel implementation, with a similar computation time, finds the best solutions every time. To improve the overall performance of the algorithm, a fifth evolution operator is added: an existing solution is randomly selected and a local search is performed, adding a condition and removing an active one (to maintain the bicluster size) for as long as the solution can be improved. For small microarray datasets, this operator does not improve the overall performance, as the parallel algorithm was already identifying the optimal solution for each bicluster size.
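The following is a minimal sketch of this fifth operator, under the assumption that a solution is encoded as a set of active condition indices and that a `score` function evaluates a candidate set; neither name is taken from the paper.

```python
# Hill-climbing local search: swap one inactive condition in and one
# active condition out (keeping the bicluster size constant) for as
# long as the swap improves the solution's score.
import random

def local_search(active: set, n_conditions: int, score) -> set:
    """Improve a bicluster's condition set via size-preserving swaps."""
    improved = True
    while improved:
        improved = False
        best = score(active)
        inactive = [c for c in range(n_conditions) if c not in active]
        random.shuffle(inactive)  # randomize the exploration order
        for added in inactive:
            for removed in active:
                candidate = (active - {removed}) | {added}
                if score(candidate) > best:
                    active = candidate  # keep the first improving swap
                    improved = True
                    break
            if improved:
                break
    return active
```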
Core daylighting systems employ various technologies to collect, concentrate, transport, and distribute daylight (sky and sunbeam light) from the building's exterior envelope to target areas in interior spaces of buildings. They may be categorized into two types: vertical and horizontal systems. Vertical systems collect daylight at the roof level using static or tracking optical components (such as parabolic mirrors or heliostats) and transport the daylight in vertical hollow light guides passing through building floors. Several installations have been constructed around the world (Aizenberg, 2010; Kim and Kim, 2010; Rosemann and Kaase, 2005), but the high costs of these systems have limited their market penetration. Horizontal systems, on the other hand, are mounted on building façades and employ horizontal hollow light guides. Several studies and demonstrations of this type of system have been carried out and have shown that sufficient illuminance may be achieved indoors (Callow and Shao, 2003; Greenup and Edmonds, 2004; Schlegel et al., 2004; Tsangrassoulis et al., 2005; Kwok and Chung, 2008; Hien and Chirarattananon, 2009). Capital costs of horizontal systems have also been relatively high compared to their energy savings (Rosemann et al., 2007). One of the most promising
Wideband carrier operation
The DVB-TM-S2 group has recently been working on wideband satellite transponders, aiming to increase baud rates up to 200/500 Mbaud. This proposal aims at more efficient payload operation, working in single-carrier mode or reducing OBO requirements on wideband HPAs. Indeed, HPAs are typically optimized to work at saturation, where better efficiencies can be obtained. In legacy systems, carriers of typically 36 MHz or 72 MHz have usually been considered, mostly due to technological limitations in the processing capabilities of user-terminal chipsets. Recently, wideband HPAs in Ka-band have been developed, achieving reasonable efficiencies with amplifiable bands from 1 GHz up to 3 GHz. However, operating these wideband HPAs with such "narrow" carriers requires significant OBO, which degrades the total power budget. Thus, even though a certain OBO will still be required when working with wideband carriers on a wideband HPA, an improvement in HPA efficiency can be expected. Working at high baud rates (200/500 Mbaud) implies that user terminals must be capable of demodulating and decoding large carriers. In fact, one of the main drawbacks identified is FEC decoding. In the current DVB-S2, each user terminal must decode all physical layer frames in order to reach either the generic stream or the MPEG stream carrying its own specific packets. As no state-of-the-art (nor mid- to long-term) chipset is capable of decoding the proposed wide carriers in real time, a "time-slicing" approach is being considered to tackle the problem. This means user terminals only have to entirely decode certain PL frames (PL
The format of current standards is straightforward. The building is broken down into several components: the building envelope, lighting equipment, HVAC equipment, service hot water equipment, and electrical equipment. For each of the major components a prescriptive path is provided. A trade-off path is also provided for each component, where performance can be traded off against similar components as long as the overall performance is not less than that of the prescriptive approach. Finally, a performance path is provided, where a proposed design is compared to a prescriptive "reference building." The designer is permitted to change most aspects of the "proposed building" as long as the energy used by the proposed building is equal to or less than that of the reference prescriptive building. Compliance along the performance path is demonstrated using computer simulation tools.
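As a rough illustration, the performance-path check reduces to comparing two simulation results; `simulate_annual_energy_kwh` below stands in for an external whole-building simulation tool and is a hypothetical name, not part of any standard.

```python
# Sketch of the performance-path compliance test: the proposed design
# complies if its simulated annual energy use does not exceed that of
# the prescriptive reference building.
from typing import Callable

def performance_path_complies(
    proposed_model: dict,
    reference_model: dict,
    simulate_annual_energy_kwh: Callable[[dict], float],
) -> bool:
    """True if the proposed design uses no more energy than the reference."""
    proposed = simulate_annual_energy_kwh(proposed_model)
    reference = simulate_annual_energy_kwh(reference_model)
    return proposed <= reference
```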
requirements but look more broadly at sustainability issues such as environmental impact, energy efficiency and cradle-to-grave profile [3, 14].
GRS and Sustainability
Buildings account for 30% of energy use and 27% of greenhouse gas (GHG) emissions in Canada. Innovation in energy-efficient building technologies can significantly reduce the overall energy intensity and GHG emissions nationwide, thus contributing positively to Canada's efforts on climate change mitigation/adaptation. With the rapid growth of many major cities in North America, natural landscape is quickly being displaced by development. The City of Toronto, Ontario is the largest municipality in Canada, with a population of about 2.5 million. It is estimated that tree canopy and natural coverage account for approximately 20% of the land surface, while rooftops cover as much as 30-35%, including existing and proposed development. Similar trends are observed in other major cities in North America. It is clear that the roofing industry can make a significant contribution to sustainability in an urban context. With the rapid adoption of the LEED (Leadership in Energy and Environmental Design) Green Building Rating System in North America, the construction industry is becoming more aware of sustainable building design and construction. As outlined in the Brundtland report Our Common Future, sustainability must be achieved through a balance between three domains: environmental, economic and social. Therefore, being "sustainable" is more than being "green", as the three areas are intertwined. This paper will provide highlights of current GRS research in North America. Based on the scientific findings, we will discuss how GRS addresses sustainability in the three domains.
Here we present an unprecedented cementitious composite incorporating few-layer-thick graphene nanosheets. We have devised a technologically simple yet efficient method for manufacturing a cement-graphene composite based on the use of graphene obtained via electrochemical exfoliation (EE) of graphite foil. Prior to being mixed with cement, the graphene aqueous suspension is dried and passed through a set of sieves. Contrary to previous reports describing the generation of cementitious materials incorporating graphene derivatives, the preparation of our composite does not require special treatments or the use of a surfactant to obtain a uniform dispersion of graphene within the cement matrix. We first evaluated the dispersion of graphene in the alkaline environment of cement and its effect on the consistency of cement mortar. Then we investigated the mechanical properties of the produced composites to select the most advantageous graphene content. The 0.05 wt% addition of graphene results in the highest increase in mechanical properties, namely 79%, 8%, and 9% for tensile strength, compressive strength, and Young's modulus, respectively. Finally, we characterized the microstructure and composition of our mortars. As revealed by detailed characterization, the hydration rate of calcium silicates is remarkably increased in cement-graphene composites. The addition of electrochemically exfoliated graphene (EEG) accelerates the nucleation and growth of the C-S-H phase, thus resulting in a significant improvement in the composites' strength. Moreover, the fluidity of the processed composites remains unaltered compared to reference mortars, indicating that the alkaline environment of the concrete does not jeopardize the uniform dispersion of graphene within the cement matrix. These results open new possibilities for the practical application of graphene in high-performance cementitious composites.
examined. As indicated in Figure 1, in the absence of CaO, E3015 provides greater strength but lower modulus. This can be explained by the fact that E3015 has a higher molecular weight, which allows better compatibility with the PP matrix. In the presence of CaO, E43 gives better performance than E3015. E43, which has a high degree of grafting, should allow better interaction with the wood particles and the CaO. The negative effect of the low molecular weight of E43 can be significantly reduced by the reaction between CaO and E43; together, these effects lead to a better interface and better dispersion. Figure 2 shows that E3015 provides better ductility and toughness as a result of its higher molecular weight compared to E43. However, with the incorporation of CaO, the ductility and toughness of samples formulated with E3015 were reduced, while those of samples formulated with E43 increased (due to a better chance of chemical reaction between E43 and CaO). At 10 wt% CaO, there is no difference in ductility and toughness between the two formulations.