The "Extreme Universe Space Observatory" (EUSO) was proposed as a free-flier satellite in winter 1999. EUSO was accepted for an Accommodation Study on the ISS (end of 2000) and then approved for Phase A (study report and conceptual design), successfully completed in summer 2004.
ESAF, the EUSO Simulation and Analysis Framework, has been developed during the EUSO Phase A study as the full end-to-end simulation and analysis chain, from the simulation of the primary particle interaction in the atmosphere, to the transport of light to the EUSO optical pupil, to the detector response simulation and finally to the reconstruction and the physical analysis. We designed ESAF so that each of the above steps can be run individually and independently of the others. With this approach it is possible to run the same reconstruction and analysis code on real data and on simulated data. Moreover, it is also possible to run single parts of the chain and check quantitatively the differences between different configurations of the detector or different approximations for the physical processes involved. ESAF is therefore easily adaptable to any space-borne detector design.
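The decoupled chain described above can be sketched schematically. The function names, data shapes and numeric factors below are illustrative assumptions, not the actual ESAF API; the point is only that each stage consumes the previous stage's output, so any stage can be run alone, on saved simulation output or on real data of the same shape.

```python
# Hypothetical sketch of a decoupled end-to-end chain (illustrative names and
# factors, not ESAF's real interfaces): each stage is an independent function.

def simulate_shower(energy_eV):
    """Primary-particle interaction: returns a crude fluorescence photon count."""
    return {"photons": int(energy_eV / 1e14)}

def transport_to_pupil(shower, atmospheric_transmission=0.5):
    """Light transport through the atmosphere to the optical pupil."""
    return {"photons_at_pupil": int(shower["photons"] * atmospheric_transmission)}

def detector_response(pupil, efficiency=0.1):
    """Detector simulation: photons at the pupil -> photoelectrons."""
    return {"photoelectrons": int(pupil["photons_at_pupil"] * efficiency)}

def reconstruct(signal):
    """Reconstruction runs on simulated *or* real data of the same shape."""
    return {"reconstructed_energy_eV": signal["photoelectrons"] / 0.1 / 0.5 * 1e14}

def run_chain(energy_eV):
    """Full end-to-end run; any intermediate stage can also be called alone."""
    return reconstruct(detector_response(transport_to_pupil(simulate_shower(energy_eV))))
```

Because each stage only depends on the dictionary produced by the previous one, swapping a detector configuration or an atmospheric approximation amounts to replacing one function and re-running from that point.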
2686-953 Sacavem, Portugal. July 15, 2013
This paper presents a free and open-source numerical framework for the simulation and analysis of sound production in reed and brass instruments. The tool is developed using the freely distributed Python language and libraries, making it accessible to acoustics students, engineers and researchers involved in musical acoustics. It relies on the modal expansion of the acoustic resonator (the bore of the instrument) and on the dynamics of the valve (the cane reed or the lips) and of the jet to provide a compact continuous-time formulation of the sound production mechanism, modelling the bore as a series association of Helmholtz resonators. The computation of the self-sustained oscillations is controlled by time-varying parameters, including the mouth pressure and the player's embouchure, but the reed and acoustic resonator are also able to evolve during the simulation in order to allow the investigation of transient or non-stationary phenomena. Some examples are given (code is provided within the framework) to show the main features of this tool, such as the ability to handle bifurcations, like oscillation onset or change of regime, and to simulate musical effects.
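The modal expansion mentioned above can be illustrated with a toy integrator, in Python since that is the framework's own language. The equations, names and parameter values below are a simplified assumption, not the framework's actual code: each Helmholtz-like mode n obeys a damped second-order equation p_n'' + (w_n/Q_n) p_n' + w_n^2 p_n = A_n u'(t), and the input pressure is the sum of the modal pressures.

```python
import numpy as np

# Toy sketch of a modal resonator bank (illustrative, not the framework's code):
# each mode is a damped oscillator driven by the derivative of the flow u(t),
# integrated with semi-implicit (symplectic) Euler for stability.

def simulate_modes(omegas, qs, amps, u, dt):
    """omegas, qs, amps: per-mode angular frequency, quality factor, amplitude.
    u: sampled driving flow. Returns the summed modal pressures over time."""
    p = np.zeros(len(omegas))   # modal pressures
    v = np.zeros(len(omegas))   # modal pressure derivatives
    out = np.zeros(len(u))
    du = np.gradient(u, dt)     # derivative of the driving flow
    for k in range(len(u)):
        acc = amps * du[k] - (omegas / qs) * v - omegas**2 * p
        v = v + dt * acc        # update velocity first (semi-implicit Euler)
        p = p + dt * v
        out[k] = p.sum()
    return out
```

Driving a single 440 Hz mode with a flow step excites a decaying oscillation at that frequency; a full instrument model would close the loop through a nonlinear reed/jet characteristic.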
As indicated before, the maturity, in terms of fabrication and irradiation experience, gained through the robust design has to be validated, either by post-irradiation examinations (PIE) on irradiated pins or structures, or by tests in representative conditions. In the outline of the Qualification Plan, several experimental irradiation tests are being designed, firstly within the JAEA Implementing Arrangement on the ASTRID Program and SFR (JOYO reactor) and also in Russian fast reactors (BN600 for fuel tests); the MACARON irradiation test dedicated to absorber pin design studies is currently under study at RIAR. PIE on irradiated pins or structures is planned to contribute to the qualification. In this framework, we can mention the PIE programs on MATINA 2/3 pins dedicated to the reflector development with magnesium oxide pins, or the PIE on standard fuel dedicated to the CEA fuel simulation tool. We can also mention the PIE program on ZEBRE pins, whose main purpose is the study of fuel behavior. ZEBRE and PAVIX 8 fuel pins were indeed constituted, like ASTRID pins, of fertile and fissile sections. The burn-up reached at the end of the PAVIX 8 irradiation was about 12 to 13 at%, close to the ASTRID value. The PIE program on ZEBRE fuel pins was already performed, and the PAVIX 8 PIE program is underway.
2.2 Flow-based Simulation
To increase the speed of network simulation, one approach is to use theoretical models to compute the throughput of each flow in a network topology at a given time. Models have been proposed [19, 16, 18] that express the throughput of a TCP flow as a function of packet loss and round-trip delay, as well as some parameters of the network and of the TCP protocol. Unfortunately, some of these parameters are difficult to measure and/or instantiate for the purpose of grid simulations. Furthermore, it is not clear how such a model can be applied to arbitrary network topologies with many simulated flows competing for network resources. Instead, one desires reasonable models that capture the bandwidth-sharing behavior induced by TCP among flows on arbitrary topologies and that are defined by a few simple parameters, namely link physical bandwidths and the TCP congestion window size. Defining such macroscopic models of bandwidth sharing is challenging. These models generally fit in the following framework. Every link L_k has a maximum
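One common instantiation of such a macroscopic model is max-min fair sharing computed by progressive filling; this is an assumption for illustration (the models cited above differ in detail), and it uses only link capacities and the set of links each flow crosses:

```python
# Max-min fair bandwidth sharing by progressive filling (illustrative sketch):
# repeatedly find the most constrained (bottleneck) link, give every flow
# crossing it that link's equal share, and remove those flows from the system.

def max_min_share(link_capacity, flow_links):
    """link_capacity: {link: capacity}; flow_links: {flow: [links it crosses]}.
    Returns {flow: allocated bandwidth} under max-min fairness."""
    remaining = dict(link_capacity)
    active = {f: list(ls) for f, ls in flow_links.items()}
    alloc = {f: 0.0 for f in flow_links}
    while active:
        # fair share on each link = remaining capacity / number of active flows on it
        share = {L: remaining[L] / sum(L in ls for ls in active.values())
                 for L in remaining if any(L in ls for ls in active.values())}
        bottleneck = min(share, key=share.get)   # smallest fair share
        s = share[bottleneck]
        for f in [f for f, ls in active.items() if bottleneck in ls]:
            alloc[f] = s
            for L in active[f]:
                remaining[L] -= s                # consume capacity on every link crossed
            del active[f]
    return alloc
```

For example, two flows sharing a 10-unit link each receive 5 units, regardless of how much spare capacity their other links have, which is exactly the bottleneck behavior a packet-level TCP simulation would approximate.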
Virtual assembly working processes have been analyzed by combining DMU and MTM methods (Bullinger, Richter, & Seidel, 2000). In their study, a data glove was used as the user interface to manipulate virtual objects in virtual assembly work, and the MTM standard was employed to evaluate assembly time and cost. The gestures for assembly operations could be automatically recognized and mapped to MTM motions. The limitation of this study was that the method was only applicable to seated operations. In a more recent study, onsite video capture was used for work measurement practice to evaluate the assembly process from a long distance (Elnekave & Gilad, 2005). In this case, computer-aided analysis based on MOST enhanced the efficiency of motion time evaluation. However, the operation video transferred from the distant factory has to be observed and segmented by analysts with computers. Motion time analysis was also part of the output of the framework of Michigan's HUMOSIM (Chaffin, 2002), and it was also integrated into Jack (Badler, Phillips, & Webber, 1993) in Badler, Erignac, & Liu (2002) to validate maintenance work.
In this paper, we propose a stochastic extension of our scheduling framework that allows us to capture tasks whose real-time attributes, such as deadline, execution time or period, are characterized by probability distributions. This is particularly useful to describe mixed-criticality systems and make assumptions on the hardware domain. These systems combine hard real-time periodic tasks with soft real-time sporadic tasks. Classical scheduling techniques can only reason about the worst-case analysis of these systems, and therefore always return pessimistic results. Using tasks with stochastic periods we can better quantify the occurrence of these tasks. Similarly, using stochastic deadlines we can relax timing requirements. Finally, stochastic execution times model the variation of the computation time needed by the tasks. These distributions can be sampled from executions or simulations of the system, or set as requirements from the specifications. For instance, in avionics, display components will have lower criticality. They can include sporadic tasks generated by user requests. Average user demand can be efficiently modelled with a probability distribution. Execution times may vary with the content being displayed and can be measured from the system. This formal verification framework is embedded in a graphical high-level modeling tool developed with the Cinco meta tooling suite. It is available at http://cinco.scce.info/applications/.
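The contrast with worst-case reasoning can be made concrete with a small Monte Carlo sketch; the task, its distribution and the numbers are illustrative assumptions, not taken from the framework above:

```python
import random

# Illustrative sketch: a task whose execution time follows a probability
# distribution rather than a single worst-case value. We estimate the
# deadline-miss probability by sampling, which a pure worst-case analysis
# (which would only see the maximum) cannot quantify.

def miss_probability(exec_time_sampler, deadline, n_samples=10_000, seed=0):
    """Fraction of sampled execution times exceeding the deadline."""
    rng = random.Random(seed)
    misses = sum(exec_time_sampler(rng) > deadline for _ in range(n_samples))
    return misses / n_samples

# Hypothetical display task: execution time ~ Normal(8 ms, 1 ms), deadline 10 ms.
# A worst-case bound of, say, 12 ms would declare the task unschedulable,
# yet the actual miss probability is only a few percent.
p = miss_probability(lambda rng: rng.gauss(8.0, 1.0), deadline=10.0)
```

This is the kind of quantitative statement the stochastic attributes enable: instead of a binary schedulable/unschedulable verdict, one obtains a miss probability that can be checked against a criticality-dependent requirement.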
Analysis of the RM results shows that several realistic features of the large-scale and mesoscale circulation are evident in this region. The mean cyclonic circulation is in good agreement with observations. Mesoscale variability is intense along the coasts of Sardinia and Corsica, in the Gulf of Lions and in the Catalan Sea. The length scales of the Northern Current meanders along the Provence coast and in the Gulf of Lions' shelf are in good agreement with observations. Winter Intermediate Water is formed along most of the north-coast shelves, between the Gulf of Genoa and Cape Creus. Advection of this water by the mean cyclonic circulation generates a complex eddy field in the Catalan Sea. Intense anticyclonic eddies are generated northeast of the Balearic Islands. These results are in good agreement with mesoscale activity inferred from satellite altimetric data. This work demonstrates the feasibility of a downscaling system composed of a general-circulation, a regional and a coastal model, which is one of the goals of the Mediterranean Forecasting System Pilot Project.
HAL Id: hal-02789121
Submitted on 5 Jun 2020
1.3.3 Towards the fluid-structure interaction in the heart (Part III)
The fluid-structure interaction in the heart is a very fascinating problem which contains in itself all the difficulties related to the interaction of the blood with the wall and with the cardiac valves. From the modeling point of view, an accurate description of the heart and valve mechanics is required. Advances in this direction are for example given in [HPS03, CFG+09] for the heart and in [WKM05, PSH07] for valves. From the numerical point of view, techniques such as the ones introduced in Chapters 4 and 5 have to be included in a single framework in order to consider all the possible interactions: blood - heart wall and blood - heart valves. This is feasible (see for example [dS07, Chapter 6] for preliminary results in two dimensions) but in three dimensions it could become so computationally intensive that it may not be the best option to address some clinical problems for which a precise mechanical description of all the elements is not required. If the mechanics of the heart itself is the principal point of interest, one could be motivated to replace the complex three-dimensional simulations of the valves with reduced valve models, which take into account the opening and closing behavior of the heart valves. Nonetheless, the use of standard lumped parameter models has inherent limitations due to the introduction of artificial boundaries in regions where high variability in the fluid dynamics quantities is experienced. In the last part of the thesis, we propose a new reduced model for cardiac valves, which improves the accuracy of standard lumped models and the robustness and efficiency of 3D FSI models.
When designing systems, it is often desirable that the behavior of a module in isolation is not altered when it is connected to other modules. In order to reach this objective, we have proposed to interconnect modules through insulation devices, which buffer modules from retroactivity effects (Jayanthi and Del Vecchio). By merging disturbance rejection and singular perturbation techniques, we provide an approach that exploits the distinctive structure of biomolecular networks to design biomolecular insulation devices. We illustrate the application of this approach through an implementation based on protein covalent modification cycles (Jiang et al.). Specifically, we illustrate that covalent modification cycles, ubiquitous in natural signal transduction, can be re-engineered to function as insulation devices for synthetic biology applications.
also allows new capacity values to be determined, hence setting clear objectives for future research on ATM network management.
We have shown that significant savings could be achieved by developing a regulation algorithm that is better adapted to dense traffic. Our simulations have highlighted the sensitivity of the delay costs to the sector capacities: small capacity variations yield substantial savings. This observation indicates that a new regulation procedure should be robust to capacity changes. Such a procedure would certainly benefit from the introduction of automated tools for monitoring, communication, and conflict detection and resolution. Such tools would allow for the successful management of much more complex situations without increasing the workload. Efficient tools could generate massive indirect savings if an adapted regulation procedure were simultaneously implemented. Finally, we have developed a complete framework for evaluating automated tools for air conflict resolution, which will enable us to test models developed in the future. We can study their performance independently or jointly under various scenarios.
1.2 Cardiac Anatomy and Function
1.2.1 Macroscopic description
The heart is a hollow muscle whose role is to pump blood to the body's organs through blood vessels. It is situated near the centre of the chest cavity between the right and left lungs, and is supported inside a membranous structure, the pericardial sac. It is divided into two halves (left and right) by the interventricular septal wall. It consists of four major chambers (two in each half), which are the left and right ventricles and the left and right atria (see Figure 1 and see [Kat10] for more information). Mechanical contraction of the heart is caused by the electrical activation of myocardial cells. The beats are initiated by the heart itself on a regular basis. In other words, the heart is self-contained and can continue to beat even after being removed from the body, for instance for a transplantation. The initiation of electrical activity is accomplished by pacemaker cells, which exist in various locations throughout the heart. The sinoatrial (SA) node contains the pacemaker cells with the fastest rate of electrical activity. Hence
they are used, and by the types of applications for which they are well suited. Another limiting factor of routing in AMIs is that smart meters provide only basic communication features, with a limited throughput; therefore, routing protocols have to be simple and not computationally hungry: for instance, smart meters have limited storage capacity and cannot host very large routing tables nor implement complex routing algorithms. The most widespread routing mechanisms adopted in AMIs are Routing Protocol for Low-power and lossy networks (RPL), Ad-hoc On-demand Distance Vector (AODV), geographic-based, and layer-based, as discussed in Ramirez et al. (2015), Elyengui et al. (2015), and Hu et al. (2015). RPL, thoroughly described by Wang et al. (2010) and Tripathi et al. (2010), is a routing mechanism compatible with the IPv6 standard. It is based on the use of Directed Acyclic Graphs (DAGs) to create a topology in which each node has a rank representing its position with respect to the others. The DAG formation process is started by the collector, and the metric is based on the expected transmission count, computed using the acknowledgements of the Media Access Control (MAC) layer. Opportunistic RPL (ORPL) is an interesting variant of the RPL protocol: it aims at increasing AMI reliability by exploiting the wide variety of paths (i.e., DAGs) which characterizes AMI topologies, as discussed in Gormus et al. (2011). AODV, a reactive protocol proposed in Perkins and Royer (1999), was primarily conceived for mobile ad-hoc networks, but is also used in networks whose topology frequently changes: routes are established and maintained only when necessary. Route discovery is performed with simple control messages, such as route requests, route replies, and route errors. Some variants of AODV use Hello packets to improve local connectivity management. The performance of the AODV protocol in the Smart Grid context is investigated in Cheng et al.
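The rank construction described above can be sketched as a shortest-ETX-path computation rooted at the collector. This is a deliberate simplification of RPL's objective-function machinery (an assumption for illustration, not the protocol's exact rank arithmetic):

```python
import heapq

# Illustrative sketch of DAG rank assignment with an ETX metric: each node's
# rank is its minimum cumulative expected transmission count to the collector,
# computed here with Dijkstra's algorithm over bidirectional links.

def build_dag_ranks(collector, links):
    """links: {(a, b): etx} for bidirectional links. Returns {node: rank}."""
    adj = {}
    for (a, b), etx in links.items():
        adj.setdefault(a, []).append((b, etx))
        adj.setdefault(b, []).append((a, etx))
    rank = {collector: 0.0}
    heap = [(0.0, collector)]
    while heap:
        r, node = heapq.heappop(heap)
        if r > rank.get(node, float("inf")):
            continue                      # stale heap entry
        for nbr, etx in adj.get(node, []):
            if r + etx < rank.get(nbr, float("inf")):
                rank[nbr] = r + etx       # better parent found
                heapq.heappush(heap, (r + etx, nbr))
    return rank
```

A meter preferring the parent that minimizes rank plus link ETX naturally routes around lossy links: in the test topology below, m2 reaches the root through m1 (total ETX 3.0) rather than over its direct but lossier link (ETX 4.0).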
(2013), Farooq and Jung (2013a), Pozveh et al. (2016), Farooq and Jung (2013b), and Kathuria et al. (2013).
references for details] whose starting point is the observational approach, folded to the actual Instrument design and parameters.
A first important point is the EUSO trigger efficiency; it is determined by the signal attenuation due to light transmission in the atmosphere, the smearing effect of noise and the detection efficiency of the Instrument. The main ingredients for the trigger logic are: the Gate Time Unit (GTU) for counting the number of photoelectrons N_pe in a pixel of the detector at the focal surface; the minimum number N_thresh of photoelectrons piling up in a GTU in a pixel, necessary to define it as a hit-pixel; the persistency level N_pers, i.e. the minimum number of consecutive GTUs, N_cons, in which the ORed pixels of a given portion (macrocell) of the focal surface are hit. The first-level trigger occurs when the hit-pixel condition N_pe ≥ N_thresh is detected with persistence N_cons ≥ N_pers in a given macrocell. This trigger scheme takes advantage of the very special space-time correlation that qualifies an EAS. The random photon noise background (from natural or man-made sources) does not exhibit any space-time correlation and is eliminated with an efficiency depending on the shower energy. A second trigger level, consisting of a rough track-finding algorithm on the hits in consecutive GTUs, based on the requirement that the different fired hits be space-time adjacent within a macrocell, was implemented during Phase A to further reject the fake triggers due to random noise. The setting of the trigger parameters is determined by a trade-off between the desired extension of the energy range downward, toward faint showers, and both the need for a tolerable rejection power against fake-trigger contamination and the detection of tracks bright and long enough to be reconstructed with good resolution.
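The first-level persistence condition can be sketched directly; parameter names follow the text, while the per-macrocell bookkeeping is simplified to a single macrocell (an illustrative reduction, not the flight trigger implementation):

```python
# Sketch of the first-level trigger: a macrocell triggers when some pixel's
# photoelectron count reaches N_thresh in at least N_pers consecutive GTUs.

def first_level_trigger(counts_per_gtu, n_thresh, n_pers):
    """counts_per_gtu: list of {pixel_id: N_pe} dicts, one per GTU, for the
    pixels of one macrocell (ORed together). Returns True on trigger."""
    consecutive = 0
    for gtu in counts_per_gtu:
        hit = any(n_pe >= n_thresh for n_pe in gtu.values())
        consecutive = consecutive + 1 if hit else 0   # persistence counter
        if consecutive >= n_pers:
            return True
    return False
```

Uncorrelated noise rarely produces N_pers hit-GTUs in a row within one macrocell, while an EAS track, being space-time correlated, does; this is the rejection mechanism the text describes.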
Particular attention was given to the energy region below 10^20 eV, where cross-calibration with AUGER data will be required; in this energy region an overall trigger efficiency of ~80% is required for E > 5×10^19 eV (averaged over the zenith angle and shower position in the EUSO FoV). The actual design of the detector, the technical performance of its components, and the preliminary reconstruction algorithm allow us to meet this figure only for showers inclined by more than 65°, which are longer and better reconstructed. The outcome of the Phase A study shows that, to reach the necessary requirements, some technical improvements are needed; in particular, the final detector design should include a refurbishment of the overall throughput of the detection chain, mainly of the optical system, as well as a more detailed track-finding algorithm that should be included as a high-level trigger in the event selection phase.
This is an author-deposited version published in: http://oatao.univ-toulouse.fr/
Eprints ID: 12979
To cite this version: Sibertin-Blanc, Christophe and Roggero, Pascal and Adreit, Françoise and Baldet, Bertrand and Chapron, Paul and El-Gemayel, Joseph and Mailliard, Matthias and Sandri, Sandra. SocLab: a framework
imprecise flow facts: in the absence of full information on possible execution paths, the WCET analysis might consider infeasible paths, and those might happen to exhibit longer execution times than valid paths. The goal of flow analysis is to eliminate such infeasible paths, but it might fail to identify all of them for several reasons: (a) restrictions on input data values are under-specified by the user; (b) some information given by the user cannot be translated into flow facts (this depends on the annotation format considered by the WCET analysis tool; complex scenarios might be difficult to describe); (c) some flow facts derived from user annotations or automatically extracted from the source or binary code cannot be exploited during the analysis (e.g. they cannot be turned into linear constraints when the WCET is computed using the IPET method). In this paper, we assume that the applications under analysis do not suffer from such issues, and we consider that all relevant information on possible execution flows is known and taken into account when computing WCETs. We leave the analysis of imprecision due to incomplete information on input data for future work.
As mentioned previously, DS can be an answer to interoperability barriers that can occur when running a system involving several subsystems [25]. Several standards have been proposed; for instance, HLA and FMI can be used as reference standards to support the development of DMSF. In order to implement DS, the defense community has initiated simulation standards such as the high-level architecture (HLA) protocol, which is the most advanced standard for integrating heterogeneous simulation models and helps in the development of distributed simulations. When HLA was first developed, the standard HLA US DoD 1.3 was created. In the year 2000, it was adopted by IEEE and named HLA IEEE 1516. It was then modified and updated in 2010 to encompass improvements; this last version is known as HLA Evolved. FMI is a standard designed and developed for industrial applications, more precisely for cyber-physical systems, with the aim of facilitating exchanges and ease of implementation. The main objective of this standard is to simplify collaboration between industrial partners by giving them a way to exchange models whilst guaranteeing the protection of industrial secrets. Imported components, namely functional mock-up units (FMUs), are seen as black boxes containing compiled code (and libraries). These FMUs can, therefore, be given to a collaborator who will be able to use them within a cosimulation by using the interaction interface specified by the standard. Coupling components by using FMUs hides the details of implementation and can, therefore, protect intellectual property.
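The black-box coupling idea can be sketched with a schematic master algorithm. To be clear, this is not the FMI API: the class, method names and the coupling below are invented for illustration, mirroring only the principle that each unit exposes set/get/step while hiding its internal state.

```python
# Schematic black-box cosimulation (illustrative, NOT the FMI standard's API):
# each unit hides its state behind set_input/get_output/do_step, and a Jacobi
# master exchanges outputs at fixed communication points before stepping both.

class BlackBoxUnit:
    """A stand-in for an FMU: a first-order lag x' = gain * (u - x)."""
    def __init__(self, gain):
        self._state = 0.0
        self._input = 0.0
        self._gain = gain
    def set_input(self, u): self._input = u
    def get_output(self): return self._state
    def do_step(self, dt):
        self._state += dt * self._gain * (self._input - self._state)

def cosimulate(unit_a, unit_b, dt, n_steps):
    """Jacobi-style master: read both outputs, set both inputs, step both."""
    for _ in range(n_steps):
        ua, ub = unit_a.get_output(), unit_b.get_output()
        unit_a.set_input(ub + 1.0)   # illustrative coupling with an offset
        unit_b.set_input(ua)
        unit_a.do_step(dt)
        unit_b.do_step(dt)
    return unit_a.get_output(), unit_b.get_output()
```

The master never inspects a unit's internals, only the declared input/output ports, which is precisely how FMU-based coupling protects intellectual property while still allowing tight numerical interaction.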
20000 (6) 409 (3) 15 (4) 0,05 (3) 88 (3) ND
Sources: 1 SETP quarry site; 2 R. Bost (2008); 3 A. Saad (2011); 4 M. Bost (2008); 5 J. D'Amato (2012); 6 E. Nauleau (2010); 7 Bachaud (2009); 8 Géolithe report (2006). UCS*: uniaxial compressive strength. ND: not determined.
Two experiments are performed. The first experiment is based on quasi-static water, meaning that there is no flow of the solution over the sample; this simulates water that enters the discontinuity and remains in it at rest. In the second experiment, the water is flowing, in order to model runoff inside the discontinuity during rainfall; the flux is one liter of fluid per day. The different types of limestone are exposed to acidic water (water with 0.035 µl of sulfuric acid (35%)). Acidic water is chosen to accelerate the dissolution phenomenon: the acid enhances the degradation kinetics of the rock, and also reproduces the percolation of rainwater through soil, where it loads with humic acid and becomes more acidic than rainwater. This kind of water damages the rocks through discontinuities. The water used for the two experiments has the same pH.
4.2.1. Single Tree Structure
In this structure, executing a model with a single tree structure means that the entire model tree is simulated with a central scheduler called the Root Coordinator. Single tree structures are mostly implemented using CDEVS and PDEVS algorithms. In CDEVS, events are processed in sequence. This approach is the simplest form of simulation, but it does not properly reflect the simultaneous occurrence of events in the system being modeled. Indeed, serialization prevents the exploitation of parallelism when events occur simultaneously. Chow and Zeigler introduced PDEVS as a solution to the problem of serialization. According to Chow, one desirable property provided by PDEVS is the degree of parallelism that can be exploited in parallel and distributed simulation. It overcomes the limitations of CDEVS in both execution time and memory usage.
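The Root Coordinator's role can be sketched minimally. This is an illustrative reduction, not a full DEVS implementation: component behavior is trivial, and the PDEVS-style point is only that all events sharing a timestamp are handed to the handler together instead of being serialized.

```python
import heapq

# Minimal sketch of a Root Coordinator over a single model tree: pop the next
# event time and, in the PDEVS spirit, deliver the whole bag of simultaneously
# imminent components in one step (CDEVS would process them one at a time).

def run_root_coordinator(events, handler, t_end):
    """events: list of (time, component) pairs. handler(t, components) is
    called once per distinct time with ALL components imminent at t.
    Returns the number of simulation steps taken."""
    heap = list(events)
    heapq.heapify(heap)
    steps = 0
    while heap and heap[0][0] <= t_end:
        t = heap[0][0]
        imminent = []
        while heap and heap[0][0] == t:      # collect every event at time t
            imminent.append(heapq.heappop(heap)[1])
        handler(t, imminent)                 # one parallel transition per time point
        steps += 1
    return steps
```

With three events at times 1, 1 and 2, the coordinator performs two steps, delivering the two time-1 components as a single bag; a serializing scheduler would have needed three steps and imposed an arbitrary order on the simultaneous pair.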