Residential Fire Scenario Analysis in Ontario, 1995-2003
This work was undertaken to develop a comprehensive fatal fire scenario analysis for residential houses in Canada over the period 1986 to 2003. However, the information required to carry out such an analysis at the national level is not available. For the period 1986-2003, an average of 33.8% of fires in Canada occurred in Ontario. This report therefore focuses on Ontario, for which data suitable for fire scenario analysis has been provided for the period 1995-2003 [2 to 13]. It details, among other things, the areas of origin of most fatal residential fires, the sources of ignition, the materials first ignited, the building level of origin, and the physical condition of the fire fatalities at the time of the fire.
6. Worst-case scenario analysis
From the sensitivity analysis, the effect of most parameters on the downward migration of contaminants is known. A worst-case scenario is built by giving every input parameter the value - from a range of possible or realistic values - that results in the largest and fastest downward migration of contaminants. Table 4 shows the worst-case scenario parameter values. The prescribed piezometric heads at the boundaries of layer 1 are increased by 2 m. Increasing these heads further would be unrealistic, since the water table would then lie above the topography. The boundary conditions of layers 5 and 6 are lowered by 2 m. This is a significant lowering, since the total range of measured hydraulic heads in layers 5 and 6 is only 2 m. The hydraulic conductivities of the clay layers are multiplied by 10, so that the horizontal and vertical hydraulic conductivities of these layers are now approximately 1×10⁻⁶ m/s and 1×10⁻⁷ m/s, respectively. These values are high for sediments consisting mainly of clay and silt (Fetter, 2001) and are therefore appropriate worst-case values. The overall longitudinal dispersivity is multiplied by 5, so that its value is now 25. All other input parameters and variables keep their initial values, since they have a clearly less significant effect on the downward migration of contaminants.
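The construction of the worst-case parameter set can be sketched as a set of one-directional perturbations of the baseline values. This is an illustrative sketch only, not the actual model input: the baseline heads are hypothetical, while the conductivity and dispersivity baselines are inferred from the worst-case values stated above.

```python
# Illustrative sketch: build the worst-case parameter set by perturbing
# each baseline value in the direction that maximizes downward migration.

base = {
    "head_layer1_m": 10.0,       # hypothetical baseline piezometric head, layer 1
    "head_layers56_m": 8.0,      # hypothetical baseline head, layers 5-6
    "K_h_clay_m_per_s": 1e-7,    # horizontal hydraulic conductivity of the clay
    "K_v_clay_m_per_s": 1e-8,    # vertical hydraulic conductivity of the clay
    "alpha_L_m": 5.0,            # overall longitudinal dispersivity
}

worst = dict(base)
worst["head_layer1_m"] += 2.0        # raise top boundary heads by 2 m
worst["head_layers56_m"] -= 2.0      # lower bottom boundary heads by 2 m
worst["K_h_clay_m_per_s"] *= 10.0    # clay conductivities multiplied by 10
worst["K_v_clay_m_per_s"] *= 10.0
worst["alpha_L_m"] *= 5.0            # dispersivity multiplied by 5 -> 25

print(worst)
```

Each perturbation is pushed only as far as remains physically defensible, mirroring the reasoning in the text (e.g. heads bounded by topography).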
5th International Symposium for Farming Systems Design, 7-10 September 2015, Montpellier, France
A MODELING FRAMEWORK FOR DESIGNING AND ASSESSING MULTI-FUNCTIONAL AGRICULTURAL LANDSCAPES WITH SCENARIO ANALYSIS
In this paper, we aim to address this gap through service network design modelling and scenario-based analysis, in order to gain insights into the influence of the costs, other relevant operational parameters, and political levers on the repartition of the flows and the modal split over a freight transport network. Formally, the problem belongs to the tactical decision horizon, tackling medium-term planning issues from the economic perspective of a typical transport operator. The market is assumed to be composed of shippers with demands to be delivered over the network. The decisions are two-fold: the operating frequencies of the services during the planning period - typically, a week - and the optimal routing of the demands over service-based itineraries. The objective is to deliver the demands in a cost-minimizing manner, where the costs are divided into a fixed and a variable component, to run the services and to transport the goods over them, respectively. The model is designed to suit a general consolidation-based multimodal framework; a service is defined by a transport mode, in addition to its origin-destination node pair, and thus corresponds to a physical arc in the network. Mathematically, the proposed mixed-integer program extends the classical static path-based multicommodity formulation, originally introduced in Crainic (2000) in the general freight transport context, and later reconsidered in Crainic and Kim (2007) for intermodal transport. A static case is assumed throughout the decision process, in terms of the shipping demands as well as the underlying physical network, including the terminals' locations. The time factor is considered in terms of scheduled services for the modelling approach addressing long-corridor aspects.
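The structure described above (fixed costs on service frequencies, variable costs on path flows) can be sketched as a generic path-based service network design program. The notation below is ours, not necessarily the paper's: $y_s$ is the operating frequency of service $s$, $x_p^k$ the flow of demand $k$ on service-based itinerary $p$, $f_s$ and $c_p$ the fixed and variable costs, and $u_s$ the capacity offered per unit of frequency.

```latex
\min \; \sum_{s \in S} f_s\, y_s \;+\; \sum_{k \in K} \sum_{p \in P^k} c_p\, x_p^k
\quad \text{s.t.} \quad
\sum_{p \in P^k} x_p^k = d^k \;\; \forall k \in K, \qquad
\sum_{k \in K} \sum_{p \in P^k : \, s \in p} x_p^k \le u_s\, y_s \;\; \forall s \in S,
\qquad y_s \in \mathbb{Z}_{\ge 0}, \;\; x_p^k \ge 0.
```

This is a hedged reading of the classical static path-based multicommodity design formulation the paragraph refers to; the actual model in the paper may carry additional mode- or terminal-specific constraints.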
However, a decision is taken not to consider a time-expanded formulation, in the sense of avoiding the replication of the physical nodes of the network for each time period, and thus not representing holding service arcs that link consecutive time realizations of the same physical node. Similarly, a simplification is assumed with respect to the design variables, where a cycle-based formulation is not considered, thus restricting the representation of some asset-related requirements, such as the length of the asset routes. The reasons behind these decisions are to respect the medium-term horizon and to avoid modelling complications at the later stage, when pricing decisions will be integrated. Finally, the developed mathematical frameworks are utilized within a scenario-analysis methodology, where previously identified and validated parameters and policy levers are put to the test against three possible outlooks on the future. Relevant cost correlations are identified and related recommendations are proposed with respect to stimulating sustainable transport in the European market.
The guidance mechanism for goal analysis is based on a linguistic analysis of goal statements. It helps in reformulating a narrative goal statement as a goal template, as introduced in the previous section. The mechanism for scenario authoring combines style/content guidelines and linguistic devices. The former advise authors on how to write scenarios, whereas the latter provide semi-automatic help to check, correct, conceptualise, and complete a scenario. Finally, for goal elicitation through scenario analysis, we defined enactable rules offering three different goal-discovery strategies, namely a refinement strategy, a composition strategy, and an alternative strategy. The first of these discovers goals at a lower level of abstraction than a given goal; the second discovers goals ANDed to the original one; the last discovers goals ORed to the original goal.
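The three discovery strategies can be sketched as operations on a goal graph with AND/OR links. The class and operation names below are ours, purely illustrative, and not the CREWS-L'Ecritoire implementation; the example goals are invented.

```python
# Hypothetical sketch of the three goal-discovery strategies on a goal graph.
from dataclasses import dataclass, field

@dataclass
class Goal:
    statement: str
    refinements: list = field(default_factory=list)  # lower-abstraction goals
    and_links: list = field(default_factory=list)    # goals ANDed to this one
    or_links: list = field(default_factory=list)     # alternative (ORed) goals

def refine(goal, statement):
    """Refinement strategy: discover a goal at a lower abstraction level."""
    child = Goal(statement)
    goal.refinements.append(child)
    return child

def compose(goal, statement):
    """Composition strategy: discover a goal ANDed to the original one."""
    sibling = Goal(statement)
    goal.and_links.append(sibling)
    return sibling

def alternate(goal, statement):
    """Alternative strategy: discover a goal ORed to the original one."""
    option = Goal(statement)
    goal.or_links.append(option)
    return option

root = Goal("Withdraw cash from the ATM")
refine(root, "Authenticate the card holder")
compose(root, "Print a receipt")
alternate(root, "Withdraw cash at the counter")
```

Each strategy returns the newly discovered goal, so discovery can be chained to grow the graph incrementally from a single narrative goal.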
PRISM, Université de Versailles Saint-Quentin, 45 av. des Etats-Unis, 78035 Versailles. Veronique.Plihon@prism.uvsq.fr
Abstract. Scenarios have proven useful to elicit, validate and document
requirements, but cannot be used in isolation. Our concern in this paper is to integrate scenario-based techniques into existing methods. We propose a set of operators to support such an integration. This set is classified into two sub-sets: one dealing with the integration of the product models of the two initial methods, and one concerned with the integration of their process models. The operators are used to integrate the CREWS-L'Ecritoire approach with the OOSE method. This enhances the use case model construction of the OOSE method with, on the one hand, the linguistic techniques for scenario authoring and formalisation and, on the other hand, the discovery strategies of the CREWS-L'Ecritoire approach for eliciting requirements by scenario analysis.
5. Turbulent behaviour

5.1. Wave Turbulence
In this section, we analyze the regime occurring after the second bifurcation, characterized by a broadband Fourier spectrum. Recently, theoretical and experimental studies have revealed that the correct framework for analysis is that of weakly turbulent behaviour, corroborating preliminary experimental studies that revealed the divergence of dimension calculations when using classical indicators of low-dimensional chaos [23, 13]. Düring et al. apply the Wave Turbulence Theory (WTT) to the von Kármán equations of motion governing the nonlinear dynamics of thin plates, showing the existence of a direct cascade of energy through lengthscales and deriving its statistical properties in terms of energy repartition. In particular, they show that the power spectrum P_w(k) of the displacement w, for a perfect plate, must verify the
I. Ensuring that scenario developers embrace an increased range of uncertainties
Large numbers of climate and energy scenarios already exist. The IPCC Fifth Assessment Report (IPCC 2014) grew into a mega-report with more than 1200 scenarios. Multi-model, multi-scenario comparisons, pioneered by the Energy Modeling Forum (EMF 2014), are increasingly adopted beyond EMF. If open-source modeling and crowdsourcing trends prevail (Bazilian et al. 2012), the number of climate and energy scenarios will increase even further. All these scenarios provide separate pieces of the multidimensional space of future developments. Analysis of such comprehensive scenario ensembles offers a means to embrace uncertainty. The on-going efforts towards creating scenario databases, depicting many scenarios, and applying descriptive-statistics methods represent a good start. But new techniques could generate an even richer understanding.
The first scenario is dedicated to freedom of action: any actor with the correct assignment combination is able to execute an action. The second scenario, using the activity-continuum feature, aims at being closer to the most efficient way to execute the procedure: when an actor executes the first action of the sequence, he or she becomes the only actor able to trigger the downstream transitions, up to the last action of the sequence. These two scenarios are opposed in their objectives; using our model, there are only small changes between the two. In Figure 5, the activity continuum is modelled by the grey area that links all of the transitions. A video showing an execution of this case can be found at https://vimeo.com/109446234
The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier-Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on the variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set that are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary layers subject to various pressure gradients. For all considered prediction scenarios, the standard deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions. Keywords: Bayesian Model Averaging; Bayesian Model-Scenario Averaging; RANS models; model inadequacy; uncertainty quantification; calibration; boundary layers; error estimation
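The collation step in BMSA amounts to forming a weighted mixture of the per-model, per-scenario posteriors, with the mixture mean and variance following from the laws of total expectation and total variance. The sketch below illustrates only this mixture arithmetic; all means, variances, and weights are invented for illustration, and the actual sensor-based scenario weighting is not implemented.

```python
import numpy as np

# Minimal sketch of the BMSA collation step: the posterior for a QoI in the
# prediction scenario is a weighted mixture of per-(model, scenario)
# posteriors, summarized here by their means and variances (invented values).

mu  = np.array([[1.00, 1.10],      # rows: closure models
                [0.90, 1.05]])     # cols: calibration scenarios
var = np.array([[0.02, 0.03],
                [0.04, 0.02]])

w_model    = np.array([0.5, 0.5])  # model probabilities P(M_i)
w_scenario = np.array([0.7, 0.3])  # scenario probabilities P(S_j) (from the sensor)
w = np.outer(w_model, w_scenario)  # joint mixture weights, summing to 1

# Law of total expectation: mixture mean of the QoI
mean_bmsa = np.sum(w * mu)
# Law of total variance: within-component variance + between-component spread
var_bmsa = np.sum(w * var) + np.sum(w * (mu - mean_bmsa) ** 2)
```

The between-component term is what lets the BMSA standard deviation reflect disagreement among closure models and calibration scenarios, not just each posterior's own width.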
We explore the consequences of an earlier phase-out of Ozone-Depleting Substances (ODS), starting ten years before the Montreal Protocol. Atmospheric chemistry simulations verify the effectiveness of such an early-action scenario: stratospheric chlorine abundance remains below the level at which the ozone hole was discovered, even though countries are permanently allowed to continue using ODS to a non-negligible extent. A sectorally detailed techno-economic analysis finds that the additional cost of the earlier-action scenario would have been moderate. We conclude that the Montreal Protocol was only partially successful at precaution: global atmospheric environmental problems could be regulated before surprising non-linearities occur.
The October data set is the only one for which the median scenario produces more energy for all numbers of stages. The interest of a stochastic method is to account for uncertainty in the future. As we compare our method with the median scenario, if the actual realization of the inflows is close to the median scenario, the stochastic solution will not produce more energy, as the median scenario depicts the future correctly. In practice, this may happen during the fall period, for example when there is low variability in the weather and storms are less likely to develop. This can be seen in Figure 10. Each subfigure corresponds to a reservoir. The minimum and maximum scenarios are illustrated with the dashed lines, the median scenario is the full line, and the actual realization of the inflows is the plus-sign line. Figure 10a is Chute-du-Diable; the top figure is the day 1 October forecast and the bottom figure is the day 1 September forecast. For the first 15 days, the October forecast median scenario is very close to the inflow realization and therefore, as we keep the day 1 decision only, the median scenario produces more energy. The other subfigures are presented in the same fashion. Again, Figures 10b and 10c show that for Chute-Savane and Lac-St-Jean the actual inflows in October are very close to the median scenario, so there is no gain in using a stochastic optimization model, as the deterministic median scenario already yields a good solution. For this unusual October case, solving the short-term unit commitment and loading problem with a median scenario is acceptable. This statement is to be used with caution, as situations like these have a low probability of occurring. Overall, these results show that there is certainly a gain in using a stochastic model for short-term hydropower optimization, as relying on the median scenario offers a less robust solution than using multiple scenarios.
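The robustness gap between a median-scenario plan and a stochastic plan can be illustrated with a deliberately tiny toy problem, unrelated to the actual reservoir model: a commitment x yields min(x, inflow) units of energy, and any shortfall against the commitment is penalized. All numbers below are invented.

```python
# Toy illustration (invented numbers, not the paper's model) of why a
# deterministic plan based on the median inflow scenario can be less
# robust than a plan that optimizes over all scenarios.
import statistics

scenarios = [2.0, 5.0, 9.0]   # equiprobable inflow scenarios
c = 3.0                        # penalty per unit of unmet commitment

def expected_value(x):
    """Expected energy minus shortfall penalty for commitment x."""
    return statistics.mean(min(x, s) - c * max(x - s, 0.0) for s in scenarios)

x_median = statistics.median(scenarios)
# For this piecewise-linear objective an optimum lies at a scenario value,
# so searching the scenario values themselves is enough here.
x_stoch = max(scenarios, key=expected_value)

print(expected_value(x_median))   # hurt badly when the low scenario realizes
print(expected_value(x_stoch))    # hedges against the low-inflow scenario
```

With a high shortfall penalty the stochastic plan commits conservatively, while the median plan loses heavily whenever the low-inflow scenario materializes, mirroring the robustness argument above.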
5 DISCUSSION AND RELATED WORKS
The presented work is pertinent to two distinct but related domains of research: computational ethics and the logical analysis of causality. Pertaining to the first, there exist a number of engaging attempts to model ethical reasoning (e.g. ). However, they all have in common that they do not represent causality explicitly, so that an action and its consequences are not dynamically linked; causal relations are stated rather than inferred. This eclipses the underlying dynamics that make up causal reasoning, cutting short the possibility of providing a justifiable account of ethical responsibility on its basis. The absence of expressively powerful causal rules also limits the applicability and scope of these models, meaning that they typically require an entirely new program to model each new scenario, even when there are common features. To our knowledge, this work is also the first in the domain to assimilate actions, omissions, and automatic events within plans of actions for modelling responsibility. This greatly increases the capacity to model true-to-life scenarios, but also permits the analysis of ethically critical distinctions that have often been overlooked.
motion affected a wide area in the surroundings of the Kashiwazaki-Kariwa Nuclear Power Plant (KKNPP). Due to the relatively small source-to-site distance and the shallow hypocenter depth, this seismic scenario proved very interesting and well documented (a consistent database of seismic recordings is available). In this context, the test case represents a suitable benchmark for the work packages of the SINAPS@ project. SINAPS@ is the first French research project with the objective of quantifying the uncertainty of the procedures used to estimate seismic risk. To this end, a comprehensive approach is followed, modelling the wave propagation from the fault to the structural components. This study describes the steps taken to build up and calibrate a reliable seismic scenario, capable of providing a synthetic wave-field at the regional scale. This objective is pursued by (1) assessing a stratified geological model for the Niigata region, (2) testing the effect of the source parameters in a kinematic approach (e.g. the rise time and the shape of the Source Time Function), and (3) checking the topography effect.
derived forcing cannot be directly compared to a proxy network in a transient way. Individual climate variations, occurring over a few decades or less, are not going to be directly reproduced by such a simulation. At this point, one also needs to keep in mind (1) that only a few of the known climate events are directly tied to a large-scale climate forcing, and most often occur only as regional anomalies, and (2) that for many non-major events a clean annual chronology is often missing anyway. If, however, there should be a desire to retain the particular historical timing of one or more selected events, there are justifiable options, post hoc, to modify the randomly generated volcanic scenario series. For example, if there are good physical reasons (ash deposits, sulfate spikes co-located with other ice core proxies that indicate a climatic change), one could shift the timing of a particularly large event (or of a sequence of events) to agree with a specific date without violating the statistical nature of the series. A good example might be a study trying to identify the climatic effects of the roughly 16th-century BC eruption of Santorini (Greece).
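The post-hoc adjustment described above can be sketched as a simple swap: exchanging the year of the largest randomly generated eruption with a chosen target year relocates the event in time while leaving the amplitude distribution of the series unchanged. The years, forcing values, and distribution below are invented for illustration.

```python
import numpy as np

# Sketch of the post-hoc timing adjustment: move the largest randomly
# generated eruption to a chosen historical year by swapping two entries,
# which preserves the set of amplitudes (and hence their statistics).
# All numbers are illustrative, not an actual volcanic forcing series.

rng = np.random.default_rng(0)
years = np.arange(-2000, -1000)                        # hypothetical BC years
forcing = rng.exponential(scale=0.5, size=years.size)  # random eruption sizes

target_year = -1600                       # roughly 16th century BC (Santorini)
i_big = int(np.argmax(forcing))           # largest generated event
i_target = int(np.where(years == target_year)[0][0])

# Swap the two amplitudes so the largest event now falls on the target year.
forcing[[i_big, i_target]] = forcing[[i_target, i_big]]
```

Because a swap is a permutation, every statistic of the amplitude series (mean, variance, quantiles) is exactly preserved; only the timing changes, which is the sense in which the adjustment does not violate the statistical nature of the series.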