Until recently, neutron noise equations were solved only by analytical methods (see [2] or [3]) or by diffusion theory (see [4], [5] or [6]). As with all deterministic methods, it is important to validate them against Monte Carlo simulations. In 2013, an original stochastic method was proposed by Yamamoto in [7] to solve the transport equation of neutron noise theory with a Monte Carlo algorithm. This algorithm is similar to the power iteration method and uses a weight cancellation technique developed by the same author for neutron leakage-corrected calculations and higher-order mode eigenvalue calculations (see [8], [9] and [10]). This method gives good results but has some disadvantages, especially the "binning procedure" used for the weight cancellation: each fissile region must be divided into a large number of small regions (called bins) in which positive and negative weights are cancelled.
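The binning procedure can be illustrated with a minimal sketch. The bin layout, the redistribution rule and all names below are our own illustrative choices, not the algorithm of [7]:

```python
import numpy as np

def cancel_weights(positions, weights, n_bins, x_min, x_max):
    """Bin-based weight cancellation (illustrative sketch).

    Particles carrying positive and negative weights are grouped into
    spatial bins; within each bin only the net weight survives, and it is
    redistributed uniformly over the particles whose sign matches the net.
    """
    edges = np.floor((positions - x_min) / (x_max - x_min) * n_bins)
    bins = np.clip(edges.astype(int), 0, n_bins - 1)
    kept_pos, kept_w = [], []
    for b in range(n_bins):
        in_bin = bins == b
        net = weights[in_bin].sum()
        if net == 0.0:
            continue  # full cancellation: the bin contributes nothing
        survivors = in_bin & (np.sign(weights) == np.sign(net))
        kept_pos.append(positions[survivors])
        kept_w.append(np.full(survivors.sum(), net / survivors.sum()))
    return np.concatenate(kept_pos), np.concatenate(kept_w)

# two bins on [0, 1]: the negative weight in the first bin is absorbed
pos, w = cancel_weights(np.array([0.1, 0.15, 0.6]),
                        np.array([1.0, -0.4, 0.5]), 2, 0.0, 1.0)
```

The net weight of each bin (and hence the total weight) is preserved; only the sign structure inside a bin is lost, which is exactly the resolution cost paid by the binning procedure.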

5. CONCLUSIONS
In this paper we have presented a new Monte Carlo method that solves the neutron noise equations in the frequency domain. Contrary to the method developed in [8], our method does not need any weight cancellation technique (instead, we remove implicit capture at low and high frequencies), and it is based on a real total cross-section and a modified collision kernel. We compared the two Monte Carlo methods against deterministic methods in a heterogeneous one-dimensional rod geometry for several frequencies. The comparisons showed that, except at very high frequencies, our Monte Carlo method is faster than the method developed in [8]. Our method is also easier to implement because no weight cancellation technique is used, and all complex operators and modifications with respect to standard Monte Carlo codes concern only the production term in the collision operator.

The relationship of the neutron migration area to the mean squared displacement (MSD) is established to derive an analytical expression for computing multi-group migration areas in infinite homogeneous media with energy-dependent cross sections. This expression is used to demonstrate that the mean squared displacement and migration area for anisotropic neutron scattering with hydrogen are nearly three times larger than those of isotropic scattering. For the case of multi-group energy-dependent cross sections, analytical equations are derived using neutron up- and down-scattering to define removal cross sections so that energy-dependent migration areas can be extracted from deterministic transport calculations. The methods for extracting migration areas from deterministic transport calculations can also be applied to heterogeneous finite geometries that have neutron leakage. This thesis proves for the first time that the energy condensation of the multi-group transport cross section must be inverse flux weighted (as opposed to the often-used direct flux weighting) in order to preserve neutron migration areas.
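The relation M² = ⟨r²⟩/6 between migration area and MSD is easy to probe with a one-speed Monte Carlo toy model (infinite homogeneous medium, isotropic scattering; the cross-section values and function names below are illustrative, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(42)

def msd_to_absorption(sigma_s, sigma_a, n_hist=10_000):
    """Mean squared displacement from birth to absorption for one-speed
    neutrons in an infinite homogeneous medium with isotropic scattering."""
    sigma_t = sigma_s + sigma_a
    p_abs = sigma_a / sigma_t
    r2 = np.empty(n_hist)
    for i in range(n_hist):
        pos = np.zeros(3)
        while True:
            mu = 2.0 * rng.random() - 1.0                      # isotropic direction
            phi = 2.0 * np.pi * rng.random()
            s = np.sqrt(1.0 - mu * mu)
            direction = np.array([s * np.cos(phi), s * np.sin(phi), mu])
            pos += direction * rng.exponential(1.0 / sigma_t)  # free flight
            if rng.random() < p_abs:                           # collision: absorbed?
                r2[i] = pos @ pos
                break
    return r2.mean()

# Sigma_s = 0.9, Sigma_a = 0.1 (Sigma_t = 1): diffusion theory predicts
# <r^2> = 2/(Sigma_a * Sigma_t) = 20, i.e. migration area M^2 = <r^2>/6 = 10/3
msd = msd_to_absorption(0.9, 0.1)
```

For isotropic scattering the toy model reproduces M² = 1/(3 Σ_a Σ_t) within statistics, since E[r²] = E[N]·E[ℓ²] = (Σ_t/Σ_a)·(2/Σ_t²).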


k and the neutron importance I_k. In this work, we extend the IFP method to the α-eigenvalue equation, enabling the calculation of the adjoint fundamental eigenmode φ†_α and the associated adjoint-weighted scores, including kinetics parameters. This generalized IFP method is first verified on a simple two-group infinite-medium transport problem, which admits analytical solutions. Then, α-adjoint-weighted kinetics parameters are computed for a few reactor configurations with the Monte Carlo code Tripoli-4®, and compared to the k-adjoint-weighted kinetics parameters obtained by the standard IFP. The algorithms

Yi-Kang Lee
Commissariat à l’Energie Atomique et aux Energies Alternatives, CEA-Saclay, DEN/DANS/DM2S/SERMA, 91191 Gif-sur-Yvette, France
With the growing interest in using the continuous-energy TRIPOLI-4® Monte Carlo radiation transport code for ITER applications, a key issue that arises is whether or not the released TRIPOLI-4 code and its associated nuclear data libraries are verified and validated for D-T fusion neutronics calculations. Previously published benchmark results of the TRIPOLI-4 code on ITER-related activities have concentrated on the first wall loading, reactor dosimetry, nuclear heating, and the tritium breeding ratio. To enhance the TRIPOLI-4 verification and validation on neutron-gamma coupled calculations for fusion device applications, the computational ITER shielding benchmark of M. E. Sawan was performed in this work using the 2013 release of the TRIPOLI-4.9S code and the associated CEA-V5.1.1 data library. The first wall, blanket, vacuum vessel and toroidal field magnet of the inboard and outboard components were fully modelled in this 1-D toroidal cylindrical benchmark. The 14.1 MeV source neutrons were sampled from a uniform isotropic distribution in the plasma zone. Nuclear responses, including neutron and gamma fluxes, nuclear heating, and a material damage indicator, were benchmarked against previously published results. The capabilities of the TRIPOLI-4 code for the evaluation of the above physics parameters are presented. The nuclear data library from the new FENDL-3.0 evaluation was also benchmarked against the CEA-V5.1.1 results for the neutron transport calculations. In general, satisfactory benchmark results were obtained. Both data libraries can thus be used with TRIPOLI-4 for fusion neutronics studies. This work also demonstrates that the "safety factors" concept is necessary in the nuclear analyses of ITER.

The performance of the AMS method and the options used in this study is presented in Table 2. To evaluate the performance of VR techniques, the FOM (Figure of Merit) index is used. The FOM is defined as 1/(σ² · t), where σ is the standard deviation of the calculated result and t is the calculation time. The basic AMS spatial importance function is clearly already powerful in the present neutron streaming case. Other AMS options were also investigated and are presented in Table 2; they can slightly improve or degrade the FOM. The "Monitoring 0" option of TRIPOLI-4 helped improve the fast neutron FOM, as already shown in a previous study [9]. The collision resampling option was activated, which improved the FOM values for both the A7 and M6 positions (see Figs. 1 and 2); with this option, particle splitting is performed both before and after collisions (otherwise only after collisions).
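The FOM comparison amounts to one line of arithmetic; a minimal sketch (the σ and t values below are invented placeholders, not the Table 2 numbers):

```python
def figure_of_merit(sigma, time_s):
    """FOM = 1 / (sigma^2 * t): the larger, the better; ideally constant
    for a given method, since sigma^2 decreases like 1/t."""
    return 1.0 / (sigma ** 2 * time_s)

# hypothetical comparison of two variance-reduction settings
fom_base = figure_of_merit(sigma=0.05, time_s=600.0)   # reference run
fom_ams = figure_of_merit(sigma=0.02, time_s=900.0)    # AMS run
speedup = fom_ams / fom_base                           # effective gain
```

Because σ² scales like 1/t, the ratio of two FOMs measures the real speedup of one variance-reduction setting over another, independently of how long each run lasted.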

In this paper, we propose a new Asymptotic Preserving Monte Carlo method which solves the Boltzmann equation of gas dynamics. It is specifically designed to address the complexity of the underlying kinetic equation, to reduce the numerical noise of classical MC methods and to overcome the stiffness of the equation close to the fluid limit. The scheme proposed here is inspired by some recent papers on the same subject [24, 25, 21, 17, 60] while improving on their results. In detail, we focus on the space-homogeneous problem and we design the Monte Carlo method by rewriting the equation in terms of the time evolution of the perturbation from equilibrium. Then, we use exponential Runge-Kutta methods to discretize the resulting equation. Particles are then used to describe only the perturbation from the equilibrium, and a Monte Carlo interpretation of the resulting equation is furnished. One of the major problems when this kind of MC approach is used is that the total number of particles increases with time due to collisions with particles sampled from the equilibrium state [59, 60, 42]. Here, we solve this problem by using a subset of samples to estimate the shape of the distribution function through kernel density reconstruction techniques [6], and we then use this estimate as a probability for discarding or keeping particles through an acceptance-rejection algorithm [53]. This approach makes it possible to eliminate samples which give redundant information, at a cost proportional to the number of samples present in the domain at a fixed time of the simulation. In this way, the method enjoys both unconditional stability and reduced complexity as the solution approaches thermodynamic equilibrium. In fact, the particles are used to describe only the perturbation, which goes to zero exponentially fast, and thus they disappear exponentially fast.
Thus, the statistical error due to the MC method decreases as the number of interactions increases, realizing a variance reduction method whose effectiveness depends on the regime studied. Far from equilibrium, the same variance as classical MC methods is obtained, while close to equilibrium the variance is lower than that of a classical MC method. The approach presented here can then be incorporated in a solver for the spatially inhomogeneous case by coupling it with deterministic methods for the equilibrium part of the solution. We do not discuss this issue here and defer the extension of the present method to the inhomogeneous case to future work.
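The particle-discarding idea can be caricatured in a few lines: fit a cheap kernel density estimate to a subset of the population and draw a smaller, equally informative population from it. This is a loose sketch of the general idea only, not the acceptance-rejection scheme of [53] as implemented in the paper; the bandwidth rule and sizes are our own choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def reduce_population(samples, target_n, subset_n=500):
    """Shrink a 1-D particle population while preserving its shape.

    A Gaussian kernel density estimate is built from a random subset
    (Silverman's rule-of-thumb bandwidth); target_n new particles are
    then drawn from that estimate, discarding the redundant information
    carried by the original, larger population.
    """
    subset = rng.choice(samples, size=min(subset_n, samples.size), replace=False)
    h = 1.06 * subset.std() * subset.size ** (-1 / 5)   # bandwidth
    centers = rng.choice(subset, size=target_n, replace=True)
    return centers + h * rng.normal(size=target_n)      # sample from the KDE

big = rng.normal(0.0, 1.0, 50_000)        # oversampled population
small = reduce_population(big, 2_000)     # 25x fewer particles
```

Sampling from a Gaussian KDE is just "pick a kernel center, add bandwidth-scaled noise", so the reduction costs O(target_n) once the subset is drawn; the price is a slight variance inflation of order h².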

particle method is required in the parameter space. A classical approach consists in exploring the parameter space at the initialization step by setting up a prior distribution for the unknown parameters. However, this scheme is known to be inefficient, since only one value of the parameter will survive after several resampling steps. To address this problem, kernel smoothing techniques [31], [32], artificial evolution of the parameters [32]–[34] and Markov chain Monte Carlo (MCMC) steps [35], [36] have been proposed. However, such solutions do not solve the fixed-parameter estimation problem. In [37], Papavasiliou proposes an adaptive particle filter which is a combination of the interacting particle filter and the Monte Carlo filter, used respectively for the dynamic states and the static parameters. It consists in running one particle filter for each Monte Carlo sample of the static parameter. Uniform convergence of this algorithm has been demonstrated; its only major disadvantage is its high complexity. In this paper, we propose a new strategy for parameter estimation using filtering methods. In this filtering context, the particle approximation of the posterior distribution is given by:
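The "artificial evolution" fix mentioned above can be made concrete with a minimal bootstrap particle filter for a linear-Gaussian toy model, in which the static parameter a is appended to the state and jittered after each resampling step. The model, noise levels and jitter size are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

def pf_static_param(y, n_p=5000, jitter=0.02, sig_w=0.5, sig_v=0.5):
    """Bootstrap particle filter for x_t = a*x_{t-1} + w_t, y_t = x_t + v_t.

    The unknown static parameter a is carried by each particle; a small
    Gaussian jitter after resampling (artificial evolution) prevents the
    population from collapsing onto a single surviving value of a.
    """
    x = rng.normal(0.0, 1.0, n_p)
    a = rng.uniform(-1.0, 1.0, n_p)                   # diffuse prior on a
    for obs in y:
        x = a * x + sig_w * rng.normal(size=n_p)      # propagate
        w = np.exp(-0.5 * ((obs - x) / sig_v) ** 2)   # likelihood weights
        w /= w.sum()
        idx = rng.choice(n_p, size=n_p, p=w)          # resample
        x = x[idx]
        a = a[idx] + jitter * rng.normal(size=n_p)    # artificial evolution
    return a.mean()

# synthetic data generated with a_true = 0.8
a_true, xt, ys = 0.8, 0.0, []
for _ in range(200):
    xt = a_true * xt + 0.5 * rng.normal()
    ys.append(xt + 0.5 * rng.normal())
a_hat = pf_static_param(np.array(ys))
```

Without the jitter line, every particle would eventually carry the same resampled value of a, which is precisely the degeneracy the text describes; the price of the jitter is a small artificial diffusion of the parameter posterior.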

the transport equation in neutron noise theory [6,7]. Such an algorithm is a cross-over between fixed-source and power iteration methods and adopts a weight cancellation technique. This method yields satisfactory results but has some shortcomings, such as the need to introduce a "binning procedure" for the weight cancellation: each fissile region must be divided into a large number of small regions where positive and negative weights are summed up and cancelled. In 2016, a second Monte Carlo algorithm was proposed [8]: contrary to [6], this method uses the conventional algorithm for fixed-source problems at all frequencies, does not need any weight cancellation technique, and is based on a modified collision kernel with a real total cross-section.

consider the following matrix polynomial p_k(A) = ∑_{k=0}^{∞} q^k C^k_{m+k−1} A^k, where C^k_{m+k−1} are binomial coefficients, and the characteristic parameter q is used as an acceleration parameter of the algorithm [8,12,13]. This approach is a discrete analogue of the resolvent analytical continuation method used in functional analysis [24]. There are cases when the polynomial becomes the resolvent matrix [11,12,7]. It should be mentioned that the use of an acceleration parameter based on the resolvent presentation is one way to decrease the computational complexity. Another way is to apply a variance reduction technique [4] in order to obtain the required approximation of the solution with a smaller number of operations. The variance reduction technique for particle transport eigenvalue calculations proposed in [4] uses Monte Carlo estimates of the forward and adjoint fluxes. In [27] an unbiased estimator of the solution of a system of linear algebraic equations is presented. The proposed estimator can be used to find one component of the solution. Some results concerning the quality and the properties of this estimator are presented; using this estimator, the author gives error bounds and constructs confidence intervals for the components of the solution. In [16] a Monte Carlo algorithm for matrix inversion is proposed and studied. The algorithm is based on the solution of simultaneous linear equations. In our further considerations we will use some results from [16] and [7] to show how the proposed algorithm can be used to approximate the inverse of a matrix.
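Single-component estimators of this kind belong to the von Neumann–Ulam family; a self-contained sketch for x = Hx + f with uniform transition probabilities (the transition density, termination probability and test matrix are illustrative choices, not the estimator of [27]):

```python
import numpy as np

rng = np.random.default_rng(7)

def mc_component(H, f, i, n_walks=20_000, p_stop=0.3):
    """Von Neumann-Ulam estimate of the i-th component of the solution of
    x = H x + f (convergent Neumann series assumed).  Each random walk
    scores w * f at every visited index; transitions are uniform, and the
    true kernel H is recovered through the weight correction."""
    n = H.shape[0]
    total = 0.0
    for _ in range(n_walks):
        k, w = i, 1.0
        score = w * f[k]
        while rng.random() > p_stop:              # continue the walk
            j = int(rng.integers(n))              # uniform next index
            w *= H[k, j] * n / (1.0 - p_stop)     # importance correction
            k = j
            score += w * f[k]
        total += score
    return total / n_walks

H = np.array([[0.1, 0.2], [0.3, 0.1]])
f = np.array([1.0, 2.0])
x0_mc = mc_component(H, f, i=0)                   # Monte Carlo estimate
x0_ref = np.linalg.solve(np.eye(2) - H, f)[0]     # direct reference
```

Each walk is an unbiased sample of the Neumann series ∑_m (H^m f)_i, so one component of the solution (or, column by column, of an inverse) is estimated without ever factorizing the matrix.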

Another goal of this work is to develop and test a new algorithm for the treatment of resonance scattering in the presence of a moving target, compatible with a new cross section representation in Monte Carlo simulations. The most commonly used cross section representation for Monte Carlo simulations is the ACE format, which relies on storing pointwise cross section data that can be linearly interpolated in energy and temperature; this proves costly when modelling a system with a detailed temperature profile, as such calculations require an enormous amount of nuclear data, often exceeding the node memory of modern computing platforms. To this end, OpenMC has recently adopted the multipole representation of nuclear data [3], a physical model that can be evaluated directly at the desired energy and temperature, and developed an efficient adaptation known as the Windowed Multipole (WMP) Method [4]. However, this new format is incompatible with current resonance correction methods, which rely on the pointwise nature of the data. In this thesis, a new algorithm for treating resonance scattering using the multipole representation is developed and tested.

Independently of scheduling decisions, the accurate prediction of complex workload execution is hampered by the inherent variability of clouds, explained by multiple factors. First, IaaS operates in an opaque fashion: the exact nature of the underlying platforms is unknown, and their hardware is subject to evolution. Second, cloud systems are multi-tenant by nature, which adds uncertainty due to contention on network and memory accesses. This variability, reported by a number of practitioners who evaluate parallel application performance on clouds (e.g. [1], who report an average 5%-6% variability on AWS cluster compute instances), has also been measured by one of the most comprehensive and recent surveys, by Leitner et al. [2]. We will see in this paper that our observations fit the figures presented in this survey. This variability increases the difficulty of modeling task execution times. In this regard, the prediction is highly dependent on the underlying simulator of the system and on the phenomena it can capture. In our work, we rely on the SimGrid [3] simulation toolkit, enabling us to build discrete-event simulators of distributed systems such as Grids, Clouds, or HPC systems. SimGrid has been chosen for its well-studied accuracy against reality (e.g. [4, 5]). In particular, given a precise description of the hardware platform, its network model takes network contention into account in the presence of multiple communication flows.

The nuclear data memory requirements of large reactor physics simulations - mainly in the form of neutron cross sections and secondary angular and energy distributions -


3.1 The Q5Cost common data format
Due to the inherent heterogeneity of grid architectures, and to the necessity of using different codes, a common format for data interchange and interoperability is mandatory in the context of distributed computation. For this reason we have previously developed a specific data format and library for quantum chemistry [10], and its use for single-processor and distributed calculations has already been reported [11]. Q5Cost is based on the HDF5 format, a characteristic that makes the binary files portable across multiple platforms. Moreover, the compression features of the HDF5 format are exploited to significantly reduce the file size while keeping all the relevant information and metadata. Q5Cost contains data related to chemical objects, organized in a hierarchical structure within a logical containment relationship. Moreover, a library to write and access Q5Cost files has been released [10]. The library, built on top of the HDF5 API, uses chemical concepts to access the different file objects. This feature makes its inclusion in quantum chemistry codes rather simple and straightforward, leaving the HDF5 low-level technical details absolutely transparent to the chemical software developer. Q5Cost has emerged as an efficient tool to facilitate communication and interoperability, and seems to be particularly useful in the case of distributed environments, and therefore well adapted to the grid.
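The kind of hierarchical, compressed layout described above can be sketched directly with the HDF5 Python bindings. The group, dataset and attribute names below are illustrative placeholders, not the actual Q5Cost schema:

```python
import h5py
import numpy as np

def write_hierarchical(path):
    """Write a small compressed, hierarchical HDF5 file in the spirit of
    Q5Cost: chemical objects as groups, arrays as gzip-compressed
    datasets, metadata as attributes."""
    with h5py.File(path, "w") as f:
        mol = f.create_group("molecule")
        mol.attrs["title"] = "water"
        geom = mol.create_dataset("geometry",
                                  data=np.zeros((3, 3)),
                                  compression="gzip", compression_opts=9)
        geom.attrs["units"] = "bohr"
        # nested path creates the intermediate group automatically
        mol.create_dataset("basis_set/exponents",
                           data=np.logspace(-1.0, 3.0, 20),
                           compression="gzip")

write_hierarchical("q5cost_like.h5")
with h5py.File("q5cost_like.h5", "r") as f:
    shape = f["molecule/geometry"].shape
    title = f["molecule"].attrs["title"]
```

A wrapper library such as the one described in the text would hide these h5py calls behind chemistry-level accessors, which is what keeps the low-level HDF5 details transparent to the code developer.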


SD = 1 + (−ln γ)^σ    (1)
where γ is a uniform random number between 0 and 1, and σ is a parameter that controls the magnitude of the SD. When σ is set to 0, the model behaves deterministically; in contrast, when σ is set to high positive values, the model follows a random process. Introducing an SD term in the transition probabilities may bias the model outcomes because cells with very low transition probabilities would be able to change their state (García et al., 2011; Wu, 2002). Wu (2002) proposed an alternative method that employs an MC procedure for modeling spatial allocation uncertainty. In this approach, after computing the transition probabilities, a cell in the landscape is randomly selected, its probability is compared with a random number uniformly distributed between 0 and 1, and the state of the cell changes if its probability score is greater than the generated random number. One of the shortcomings of this approach is that it does not allow control of the degree of randomness. Therefore, Wu (2002) transformed the transition probability of each cell by comparing it with the largest available probability at each time-step, as follows:
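Equation (1) above is straightforward to exercise numerically; a short sketch (the array sizes and example probabilities are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)

def stochastic_disturbance(sigma, size):
    """SD = 1 + (-ln(gamma))**sigma, with gamma ~ U(0, 1).

    sigma = 0 yields the constant factor 2 for every cell, so the
    ranking of transition probabilities (and hence the allocation) is
    unchanged; large sigma makes the perturbation strongly random.
    """
    gamma = rng.random(size)
    return 1.0 + (-np.log(gamma)) ** sigma

# perturbing three cell transition probabilities (illustrative values)
p = np.array([0.10, 0.40, 0.80])
p_disturbed = p * stochastic_disturbance(sigma=2.0, size=p.size)
```

Since −ln γ is exponentially distributed with mean 1, σ = 1 gives a disturbance with mean 2 but a heavy enough tail that low-probability cells occasionally overtake high-probability ones, which is exactly the biasing effect discussed above.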

In the HTR experiments, however, only the decay constant α₀ = β_eff/Λ_eff was measured. In Table 3 we used the delayed neutron fraction calculated with MCNP5 & JEFF-3.1 to scale the experimental results and be coherent with the calculations. To compare values for different libraries, we can compare the decay constants directly. The measured value for the HTR-5 experiment is 3.597±0.026 s⁻¹, and the predictions with JEFF-3.1 and ENDF/B-VII.0 differ by 4.7±1.2% and 3.8±1.2%, respectively. For this experiment, using ENDF/B-VII.0 instead of JEFF-3.1 slightly improves the results, but the disagreement with the measurement remains larger than 3σ. For the HTR-10 experiments, the measured decay constant is 4.132±0.051 s⁻¹, and the predictions with JEFF-3.1 and ENDF/B-VII.0 differ from the experimental value by 0.2±1.5% and -3.6±1.4%, respectively. Both library predictions agree with the measured value within 3σ, but this time the predictions with JEFF-3.1 are in far better agreement. These contradictory results do not allow us to favor one of the two libraries for HTR-type configurations.
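The calculation-to-experiment comparison used here is a small, reusable piece of arithmetic. In the sketch below, the JEFF-3.1 prediction (4.140 s⁻¹) is back-computed from the quoted 0.2 % deviation for illustration only; it is not a value stated in the text:

```python
def c_over_e_deviation(calc, meas, meas_unc, calc_unc=0.0):
    """Percent deviation (C - E)/E and its 1-sigma uncertainty, with the
    experimental and calculation uncertainties combined in quadrature."""
    dev = 100.0 * (calc - meas) / meas
    unc = 100.0 * ((calc_unc / meas) ** 2
                   + (calc * meas_unc / meas ** 2) ** 2) ** 0.5
    return dev, unc

# HTR-10 decay constant: measured 4.132 +/- 0.051 s^-1
dev, unc = c_over_e_deviation(calc=4.140, meas=4.132, meas_unc=0.051)
agrees_within_3_sigma = abs(dev) < 3.0 * unc
```

The 3σ criterion applied in the text is then a single comparison of the deviation against three times its propagated uncertainty.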

In Table III the fixed-node DMC energies obtained using several basis sets and various sizes of the reference wave function are presented. The correlation energy recovered in the FN-DMC calculations varies from 90% to nearly 100% depending on the nodal structure of the reference wave function. As was the case at the FCI level, using a basis set adapted to the core region (CVQZ) is quantitatively important when highly accurate total FN-DMC energies are sought. This result illustrates the fact that the nodes in the nucleus region play a significant role. Our best total energy is obtained with the CVQZ basis set and 200 000 determinants. The value obtained is -75.0658 ± 0.0001, recovering 99.4 ± 0.1% of the correlation energy. To the best of our knowledge it is the best FN-DMC value reported so far for the oxygen atom. Note that it is slightly lower than the value of -75.0654 ± 0.0001 obtained very recently by Seth et al. [20] with a fully optimized multideterminant-Jastrow-backflow trial wave function. For comparison, Table III also reports some of the most accurate energies obtained for this atom by different methods. At the FCI level, the best result we know of is that of Booth and Alavi [44]. At the FN-DMC level, it is that of Seth et al. just mentioned. Finally, to the best of our knowledge the best energy reported up to now is the value obtained by Gdanitz using the r12-MR-ACPF method [45].

• Challenge: given the demands of MC transport and the reality of hardware evolution, how to carry out MC calculations with modern or future HPC facilities is not evident. As stated before, the principles of MC calculations have been established since the 1940s. Though several algorithms have performed well for a long time and will likely remain the most effective for the foreseeable future, they have been found to use less and less of the available computing resources compared to the past. In other words, improvements in hardware performance bring smaller improvements in simulation performance. To some extent, this situation is caused by the top500 list, where all supercomputers are evaluated and ranked by solving the LINPACK benchmark [23], one specific linear algebra problem that scales easily and is therefore applicable to any machine regardless of its size or structure. As a result, manufacturers offer massive regular vectorization support at the hardware level to achieve a higher ranking, ignoring the fact that this may be of no benefit to many real-world problems in science and engineering. Common MC transport codes show little SIMD (Single Instruction on Multiple Data) opportunity and are thus a typical example of this issue. On the other hand, more detailed simulations will require more complicated numerical and programming models. Though the MC process is intrinsically parallel and requires little communication, it will still be a challenge when the degree of parallelism increases to the level of billions. Another major issue is memory space: as can be imagined, memory requirements will increase in order to perform more physically accurate calculations, and today's voluminous nuclear data will not comply with this future trend.
