
Estimation of the caesium-137 source term from the Fukushima Daiichi nuclear power plant using a consistent joint assimilation of air concentration and deposition observations


HAL Id: hal-00907484 (version 2), https://hal.inria.fr/hal-00907484v2, submitted on 26 Nov 2013.

To cite this version:

Victor Winiarek, Marc Bocquet, Nora Duhanyan, Yelva Roustan, Olivier Saunier, Anne Mathieu. Estimation of the caesium-137 source term from the Fukushima Daiichi nuclear power plant using a consistent joint assimilation of air concentration and deposition observations. Atmospheric Environment, Elsevier, 2014, 82, pp. 268-279. doi:10.1016/j.atmosenv.2013.10.017. hal-00907484v2.

Estimation of the caesium-137 source term from the Fukushima Daiichi nuclear power plant using a consistent joint assimilation of air concentration and deposition observations

Victor Winiarek (a,b), Marc Bocquet (a,b), Nora Duhanyan (a), Yelva Roustan (a), Olivier Saunier (c), Anne Mathieu (c)

(a) Université Paris-Est, CEREA, joint laboratory École des Ponts ParisTech and EDF R&D, Champs-sur-Marne, France
(b) INRIA, Paris Rocquencourt research centre, France
(c) Institut de Radioprotection et de Sûreté Nucléaire (IRSN), PRP-CRI, SESUC, BMTA, Fontenay-aux-Roses, 92262, France

Email address: bocquet@cerea.enpc.fr (Marc Bocquet)

Abstract

Inverse modelling techniques can be used to estimate the amount of radionuclides and the temporal profile of the source term released into the atmosphere during the accident of the Fukushima Daiichi nuclear power plant in March 2011. In Winiarek et al. (2012b), the lower bounds of the caesium-137 and iodine-131 source terms were estimated with such techniques, using activity concentration measurements. The importance of an objective assessment of prior errors (the observation errors and the background errors) was emphasised for a reliable inversion. In such a critical context, where the meteorological conditions can make the source term partly unobservable and where only a few observations are available, such prior estimation techniques are mandatory, as the retrieved source term is very sensitive to this estimation.

We propose to extend the use of these techniques to the estimation of prior errors when assimilating observations from several data sets. The aim is to compute an estimate of the caesium-137 source term jointly using all available data about this radionuclide: activity concentrations in the air, but also daily fallout measurements and total cumulated fallout measurements. It is crucial to properly and simultaneously estimate the background errors and the prior errors relative to each data set. A proper estimation of prior errors is also a necessary condition to reliably estimate the a posteriori uncertainty of the estimated source term. Using such techniques, we retrieve a total released quantity of caesium-137 in the interval 11.6−19.3 PBq, with an estimated standard deviation range of 15−20% depending on the method and the data sets. The “blind” time intervals of the source term have also been strongly mitigated compared to the first estimations, which used only activity concentration data.

This article has been published in Atmospheric Environment with the reference:

Winiarek, V., Bocquet, M., Duhanyan, N., Roustan, Y., Saunier, O., Mathieu, A., 2014. Estimation of the caesium-137 source term from the Fukushima Daiichi nuclear power plant using a consistent joint assimilation of air concentration and deposition observations. Atmos. Env. 82, 268-279.

Keywords: Data assimilation, Atmospheric dispersion, Fukushima accident, Source estimation

1. Introduction

1.1. The Fukushima Daiichi accident

On March 11, 2011, 05:46 UTC, a magnitude 9.0 (Mw) undersea megathrust earthquake occurred in the Pacific Ocean, and an extremely destructive tsunami hit the Pacific coast of Japan approximately one hour later. These events caused the automatic shut-down of four power plants in Japan. Diesel backup power systems should have sustained the reactors' cooling processes. At Fukushima Daiichi, these backup devices were unfortunately inoperative, mainly because of the damage caused by the tsunami.

The Fukushima Daiichi NPP has six nuclear reactors. At the time of the earthquake, reactor 4 had been defuelled and reactors 5 and 6 were in cold shutdown for planned maintenance.

In the hours that followed, the situation quickly became critical. Reactors 1, 2 and 3 experienced at least partial meltdown and hydrogen explosions. Additionally, fuel rods stored in pools in each reactor building began to overheat as water levels in the pools dropped.

All these events caused a massive discharge of radioactive materials into the atmosphere. The total quantities of released radionuclides, as well as the time evolution of these releases, have to be estimated in order to assess the health and environmental impact of the accident.

Mathieu et al. (2012) used core inventories and in-situ data (pressure and temperature measurements in the reactors and γ-dose rates in the NPP), as well as the first available observations over Japan, to build a source term, which was used in Korsakissok et al. (2013) to assess the high γ-dose rate zones at a local scale. Chino et al. (2011) used a few activity concentration measurements in the air to calibrate the magnitude of some identified releases of 131I and 137Cs. Katata et al. (2012) and Terada et al. (2012) updated these estimations as new data became available. Stohl et al. (2012) performed an inverse modelling estimation using activity concentrations in the air, as well as deposition data at a global scale, to estimate the releases of 133Xe and 137Cs. However, their estimation was shown to be quite sensitive to the prior information on the source used in the inversion. At the same time, Winiarek et al. (2012b) proposed a method to properly estimate the prior errors when performing inverse modelling using activity concentrations in the air. They applied it to estimate the source terms of 131I and 137Cs.

1.2. Objectives and outline

In Winiarek et al. (2012b) we emphasised the importance of a proper estimation of prior errors in the inverse modelling algorithm. We proposed new methods to estimate two hyper-parameters: the variance of observation errors and the variance of background errors, assuming that all observation errors have the same variance. This assumption is acceptable when using only one type of data in the algorithm. On the other hand, the general lack of data in accidental situations stresses the need for algorithms that would use all available data in the same inversion. The first objective of this paper is to extend the methods proposed in Winiarek et al. (2012b) to the simultaneous estimation of prior errors when using different data sets in the inversion. The second objective is to apply these methods to the challenging reconstruction of the caesium-137 source term of the Fukushima Daiichi accident, as well as to obtain an objective uncertainty on this estimation.

In Section 2 we briefly recall the methodology for the inverse modelling of accidental releases of pollutants.

The algorithm is sensitive to the statistics of the errors, so we propose methods based on the maximum likelihood principle to estimate these errors. This general maximum likelihood scheme takes into account the positivity of the source and the presence of several types of data.

In Section 3 we apply this new method to the reconstruction of the atmospheric release of 137Cs from the Fukushima Daiichi power plant. The inversions are computed using three data sets: activity concentrations in the air, daily measurements of deposited material and total cumulated deposition. The new source terms are discussed and compared to earlier estimations. The posterior uncertainties of the retrieved sources are also computed.

Results are summarised and conclusions are given in Section 4.

2. Methodology

2.1. Inverse modelling of accidental releases and estimation of errors

To reconstruct the source term of an accidental release of a pollutant into the atmosphere, inverse modelling techniques are a powerful alternative to a trial and error approach with direct numerical models, which is still widely used in such situations. Inverse modelling techniques can objectively estimate the source term using the information content of an observation set and a numerical model that simulates the dispersion event.

The atmospheric transport model (ATM), which is linear in our case for gaseous and particulate matter, provides the relationship between the source term and the observation set through the source-receptor equation

µ = Hσ + ε,    (1)

where µ ∈ R^d is the measurement vector, σ ∈ R^N is the source vector, and H is the Jacobian matrix of the transport model, which incorporates the observation operator as well. The vector ε ∈ R^d, called the observation error in this article, represents the instrumental errors, the representativeness errors and a fraction of model error altogether.

In an accidental context the number of observations is very often limited. Specific meteorological conditions, like during the Fukushima accident, can also lead to a weak observability of a part of the source term. In these cases, the source-receptor relationship defined by Eq. (1) constitutes an ill-posed inverse problem (Winiarek et al., 2011).


One solution to deal with the lack of constraints is to implement parametric methods, where the source to retrieve is reduced to a very limited number of parameters. The inverse problem is de facto regularised and can be solved using different techniques, such as stochastic sampling techniques which give access to the posterior distributions of the parameters (Delle Monache et al., 2008; Yee et al., 2008). Nevertheless, if the true source does not match the parametric model, the inversion may fail or yield a meaningless solution.

Another option is the use of non-parametric methods to retrieve a general source field. This field can be discretised, for example to match the model discretisation, but the number of variables remains large and may still be larger than the number of observations in the data set. The non-parametric approach is robust and flexible since no strong a priori assumptions are made on the source, but it has its own constraints. If Gaussian statistics are assumed for observation errors, the inversion relies on the minimisation of the cost function

L(σ) = (1/2) (µ − Hσ)^T R^{-1} (µ − Hσ),    (2)

where R = E[εε^T] is the observation error covariance matrix, ε having been defined by Eq. (1). One simple choice is to neglect correlations between observation errors and to take R diagonal. If all the observations are of the same type, one can even assume that R = r²I_d, r² being the variance of the observation errors (I_d is the identity matrix in R^{d×d}). With a low number of observations and/or a poor observability of several source term parts, the minimisation of Eq. (2) admits infinitely many solutions. One needs to regularise the inverse problem, usually by adding a Tikhonov term to the cost function:

L(σ) = (1/2) (µ − Hσ)^T R^{-1} (µ − Hσ) + (1/2) (σ − σ_b)^T B^{-1} (σ − σ_b).    (3)

The solution of the inverse problem is now unique, but two additional (vector and matrix) parameters have been introduced: σ_b is the first guess for the source (or background term) and B = E[(σ − σ_b)(σ − σ_b)^T] is the background error covariance matrix.

In an accidental situation, and particularly in the Fukushima accident context, the choice σ_b = 0 is relevant because (i) many of the parameters are likely to be zero, (ii) it guarantees an independent estimate, and (iii) it avoids the risk of an inversion crime, since most of the first guesses built by physics models are eventually calibrated using early observations. One can refer to Bocquet (2005); Davoine and Bocquet (2007); Winiarek et al. (2012b) for extended discussions on this choice.

Still in an accidental context, off-diagonal terms in B are often negligible. For example, the γ-dose measurements made in the Fukushima nuclear power plant (NPP) indicate that the source term is probably composed of uncorrelated events [1]. This is the reason why we take B = m²I_N in this article.

As was shown in the Chernobyl case in Davoine and Bocquet (2007), or in the Fukushima case in Winiarek et al. (2012b) and Stohl et al. (2012), the retrieved source is very sensitive to the matrices R and B, hence to the two hyper-parameters r and m. This is the reason why hyper-parameter estimation techniques are required. In Davoine and Bocquet (2007); Krysta et al. (2008); Saide et al. (2011), the L-curve technique of Hansen (1992) was successfully used to estimate the ratio r/m or both parameters. In recent years, several methodological developments in data assimilation for weather forecasting have focused on the estimation of the hyper-parameters of the prior errors. They are mostly based either on cross-validation techniques or on the maximum likelihood principle (e.g. Mitchell and Houtekamer, 1999; Chapnik et al., 2004, 2006; Anderson, 2007; Li et al., 2009). These techniques have also been implemented in the context of atmospheric chemistry inverse modelling, using for example the χ² criterion (Ménard et al., 2000; Elbern et al., 2007; Davoine and Bocquet, 2007), the maximum likelihood principle (Michalak et al., 2004), or statistical diagnostics (Schwinger and Elbern, 2010).

In Winiarek et al. (2012b) we implemented several methods for the estimation of r and m. One of these methods relies on the L-curve coupled with a χ² criterion. Another one is based on the maximum likelihood principle and takes into account, without approximation, the positivity of the source. This study only exploited activity concentrations in the air. In the case where several types of data are used in the inversion algorithm, the number of hyper-parameters to estimate increases. One can for instance try to estimate one value of the variance of observation errors for each data set, that is, an r² attached to each data set. This simultaneous estimation of prior errors is the methodological objective of this article.

[1] http://www.tepco.co.jp/en/nu/monitoring/index-e.html


2.2. Prior errors statistics and cost function minimisation

The observation errors defined by Eq. (1) are assumed normally distributed, with a probability density function (pdf):

p_ε(ε) = exp(−(1/2) ε^T R^{-1} ε) / sqrt((2π)^d |R|),    (4)

where |R| is the determinant of the observation error covariance matrix R. R is assumed diagonal (an approximation, as model errors may introduce some correlations). Besides, it is assumed that the errors made on observations of the same type have the same variance, so that if R_i represents the sub-block of R relative to data set i, one has R_i = r_i² I_{d_i}, where d_i is the number of observations in data set i (with Σ_{i=1}^{N_d} d_i = d, N_d being the number of different data sets) and r_i² is the corresponding error variance. Note that, due to the limited number of observations and in order to avoid a too severely underconstrained problem, it is essential to keep the number of hyper-parameters that define R as small as possible (N_d hyper-parameters in our case: (r_i)_{1≤i≤N_d}). For instance, an approach such as the one put forward by Desroziers et al. (2005) or Schwinger and Elbern (2010) is likely to be unaffordable in this accidental context.

As far as background errors are concerned, we could also use Gaussian statistics. The advantages of this choice would be analytical solutions for both the source term estimation and the related uncertainty, based on the Best Linear Unbiased Estimator (BLUE) theory. However, such a Gaussian assumption could lead to negative values in the retrieved source term, because of the lack of sufficient observations to constrain it. To avoid non-physical results, truncated normal statistics for background errors are considered, enforcing the positivity of the retrieved source term. This is known to provide valuable information to the data assimilation system (Bocquet et al., 2010) and was successfully used in the context of the Fukushima Daiichi accident (Winiarek et al., 2012b). The pdf of this normalised truncated normal distribution reads, in the general case,

p(σ) = [∫_{s≥0} exp(−(1/2) (s − σ_b)^T B^{-1} (s − σ_b)) ds]^{-1} exp(−(1/2) (σ − σ_b)^T B^{-1} (σ − σ_b))  if σ ≥ 0,
p(σ) = 0  otherwise.    (5)

The normalisation factor can be simplified in our case, where B is diagonal and σ_b = 0, to yield the following semi-Gaussian pdf:

p(σ) = exp(−(1/2) σ^T B^{-1} σ) / sqrt((π/2)^N |B|)  if σ ≥ 0,
p(σ) = 0  otherwise,    (6)

where |B| is the determinant of the background error covariance matrix B.

Bayes' rule helps to formulate the inference, after the acquisition of the measurement vector µ:

p(σ|µ) = p(µ|σ) p(σ) / p(µ) = p_ε(µ − Hσ) p(σ) / p(µ)
       ∝ exp{ −(1/2) (µ − Hσ)^T R^{-1} (µ − Hσ) − (1/2) σ^T B^{-1} σ } 1_{σ≥0},    (7)

where 1_{σ≥0} is equal to 1 when σ_i ≥ 0 for every i = 1, ..., N, and to 0 otherwise.

From this inference, the source term is estimated using the maximum a posteriori estimator (MAP), denoted σ^a:

σ^a = argmax_σ p(σ|µ).    (8)

Maximising p(σ|µ) is equivalent to maximising ln p(σ|µ), which is equivalent to maximising the term in the exponential under the constraint of positivity, which is ultimately equivalent to minimising the cost function of Eq. (3) under the constraint of positivity. Since there is no analytical solution to this problem, the positivity of σ is enforced during the numerical minimisation, which is performed with a bounded quasi-Newton algorithm (Byrd et al., 1995).
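For concreteness, here is a minimal sketch of this constrained minimisation in Python, with σ_b = 0, R = r²I_d and B = m²I_N as in this article. The Jacobian, dimensions and hyper-parameter values are synthetic placeholders rather than the paper's actual setup, and scipy's "L-BFGS-B" method is used as one available implementation of the bounded quasi-Newton algorithm of Byrd et al. (1995):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d, N = 50, 200                       # toy numbers of observations and source time-steps
H = rng.random((d, N))               # placeholder Jacobian (filled by ATM runs in the paper)
r2, m2 = 0.1**2, 1.0**2              # assumed observation / background error variances
sigma_true = np.maximum(rng.normal(0.0, 1.0, N), 0.0)
mu = H @ sigma_true + rng.normal(0.0, np.sqrt(r2), d)

def cost_and_grad(sigma):
    """Cost function of Eq. (3) with sigma_b = 0, and its gradient."""
    misfit = mu - H @ sigma
    J = 0.5 * (misfit @ misfit) / r2 + 0.5 * (sigma @ sigma) / m2
    grad = -H.T @ misfit / r2 + sigma / m2
    return J, grad

res = minimize(cost_and_grad, np.zeros(N), jac=True, method="L-BFGS-B",
               bounds=[(0.0, None)] * N)   # bounds enforce sigma >= 0
sigma_a = res.x                            # MAP estimate of the source term
```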

In practice, because of the low number of observations, the estimated source term is very sensitive to the matrices R and B, i.e. to the hyper-parameters (r_i)_{1≤i≤N_d} and m. This is the reason why we have to estimate these parameters rigorously. We propose to extend the methods developed in Winiarek et al. (2012b) to the use of several different data sets by simultaneously estimating the respective hyper-parameters.

2.3. Estimation of hyper-parameters

2.3.1. Unapproximated maximum likelihood values screening

The estimation of the prior errors' magnitude proposed in this section relies on the maximum likelihood paradigm (Dee, 1995). As the prior probabilities depend on the hyper-parameters, the likelihood of the observation set,

p(µ|θ) = ∫ p(µ|σ; θ) p(σ|θ) dσ,    (9)

is a function of the hyper-parameter vector θ = (r_1, ..., r_{N_d}, m)^T. The pdfs p(µ|σ; θ) and p(σ|θ) are the prior pdfs defined by Eq. (4) and Eq. (6). If it exists, the vector θ that maximises p(µ|θ) is the most likely vector of hyper-parameters consistent with the observation set µ.

The most direct way to estimate these optimal hyper-parameters is to screen the likelihood function for a range of values of θ. With our hypotheses on the error statistics, the covariance matrices and the first guess, the likelihood can be written:

p(µ|θ) = [exp(−(1/2) µ^T (R + HBH^T)^{-1} µ) / sqrt((2π)^d |HBH^T + R|)] × ∫_{σ≥0} [exp(−(1/2) (σ − σ*)^T P^{-1} (σ − σ*)) / sqrt((π/2)^N |P|)] dσ,    (10)

where σ* is the BLUE estimator, which in our case reads

σ* = BH^T (R + HBH^T)^{-1} µ,    (11)

and P is the corresponding analysis error covariance matrix:

P = B − BH^T (R + HBH^T)^{-1} HB.    (12)

The integral term in Eq. (10) has no analytical solution, but it can be numerically computed using a stochastic method, such as the GHK simulator of Hajivassiliou et al. (1996). Nevertheless, if the dimension of θ is high, the size of the space to screen can lead to a costly computation.

For details about the calculation of the likelihood expression, in particular its general-case expression, and about the use of the GHK simulator to estimate the integral of a truncated normal distribution, one can refer to Winiarek et al. (2012b) and Winiarek et al. (2012a).

As a statistically consistent method, we consider it as our reference method, denoted ML in this study. Faster but approximate alternatives can nonetheless be considered and tested against this statistically consistent method.
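As an illustration of the screening, the sketch below evaluates log p(µ|θ) for a single data set (θ = (r, m)) following Eq. (10). It is a deliberately simplified stand-in: the truncated-normal integral is estimated by naive Monte Carlo, which is only viable in low dimension, whereas the paper relies on the GHK simulator for the actual N = 504 problem; all names and sizes are illustrative:

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_likelihood(r, m, mu, H, n_draws=20000, seed=1):
    """Crude evaluation of log p(mu | theta) following Eq. (10)."""
    d, N = H.shape
    B = m**2 * np.eye(N)
    S = r**2 * np.eye(d) + H @ B @ H.T               # innovation covariance R + HBH^T
    gauss = multivariate_normal.logpdf(mu, mean=np.zeros(d), cov=S)
    sigma_star = B @ H.T @ np.linalg.solve(S, mu)    # BLUE estimator, Eq. (11)
    P = B - B @ H.T @ np.linalg.solve(S, H @ B)      # analysis error covariance, Eq. (12)
    draws = np.random.default_rng(seed).multivariate_normal(
        sigma_star, P, n_draws, check_valid="ignore")
    frac = np.mean(np.all(draws >= 0.0, axis=1))     # Pr(sigma >= 0) under N(sigma*, P)
    # the semi-Gaussian prior normalisation contributes a factor 2^N
    return gauss + N * np.log(2.0) + np.log(max(frac, 1e-300))

# screening: evaluate log_likelihood over a grid of (r, m) and keep the maximum
```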

2.3.2. Iterative scheme à la Desroziers

As an alternative to the costly computation of the likelihood, the use of an iterative scheme which would quickly converge to the maximum likelihood is relevant.

In Winiarek et al. (2012b) we used such an iterative algorithm for the estimation of two hyper-parameters.

This iterative algorithm was shown to converge to a fixed-point that corresponds, in the context of Gaussian statistics, to the pair of hyper-parameters of maximum likelihood. This algorithm is only an approximation in the context of semi-Gaussian statistics, but it yielded acceptable, though slightly different, results. Building on Desroziers and Ivanov (2001), we propose an extension of this online tuning scheme to the simultaneous estimation of several prior error variances. The formulae read:

m² = 2 J_b(σ^a) / (N − tr(P B^{-1})),    (13)

r_i² = 2 J_{o_i}(σ^a) / (d_i − tr(H_i P H_i^T R_i^{-1})),    (14)

where H_i and R_i are the sub-blocks of, respectively, the Jacobian matrix H and the observation error covariance matrix R related to data set i, whose observation vector is noted µ_i. P is defined by Eq. (12). J_b and J_{o_i} are defined by:

J_b(σ) = (1/2) σ^T σ,    (15)

J_{o_i}(σ) = (1/2) (µ_i − H_i σ)^T (µ_i − H_i σ).    (16)

The source vector σ^a is obtained from the minimisation of the cost function

L(σ) = J_b(σ)/m² + Σ_{i=1}^{N_d} J_{o_i}(σ)/r_i²    (17)

under the constraint σ ≥ 0. These equations can be used in an iterative scheme, which we shall call Desroziers' scheme later on. This algorithm quickly converges (3-4 iterations here) to a fixed-point giving the estimated hyper-parameters and the related source term.
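A compact sketch of this fixed-point iteration, under the same assumptions as before (σ_b = 0, B = m²I_N, R_i = r_i²I_{d_i}); the dense linear algebra is illustrative only and would need care at the actual problem sizes of this study:

```python
import numpy as np
from scipy.optimize import minimize

def desroziers_scheme(mu_sets, H_sets, N, n_iter=4):
    """Fixed-point iteration of Eqs. (13)-(17) for Nd data sets."""
    Nd = len(mu_sets)
    m2, r2 = 1.0, np.ones(Nd)                 # initial hyper-parameter guesses
    for _ in range(n_iter):
        def cost(sigma):                      # Eq. (17) and its gradient
            J, g = 0.5 * (sigma @ sigma) / m2, sigma / m2
            for i in range(Nd):
                mis = mu_sets[i] - H_sets[i] @ sigma
                J += 0.5 * (mis @ mis) / r2[i]
                g -= H_sets[i].T @ mis / r2[i]
            return J, g
        sigma_a = minimize(cost, np.zeros(N), jac=True, method="L-BFGS-B",
                           bounds=[(0.0, None)] * N).x
        H = np.vstack(H_sets)                 # analysis error covariance P, Eq. (12)
        R = np.diag(np.concatenate([np.full(len(mu_sets[i]), r2[i]) for i in range(Nd)]))
        B = m2 * np.eye(N)
        P = B - B @ H.T @ np.linalg.solve(R + H @ B @ H.T, H @ B)
        m2 = (sigma_a @ sigma_a) / (N - np.trace(P) / m2)            # Eq. (13)
        for i in range(Nd):                                          # Eq. (14)
            mis = mu_sets[i] - H_sets[i] @ sigma_a
            r2[i] = (mis @ mis) / (len(mis) - np.trace(H_sets[i] @ P @ H_sets[i].T) / r2[i])
    return m2, r2, sigma_a
```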

3. Applications to the Fukushima accident

3.1. Observations

In order to illustrate the proposed methods, observations from three different data sets will be considered:

• The activity concentrations in the air over Japan, as described and referred to in Winiarek et al. (2012b). This data set contains 104 observations.

• Starting from 18 March 2011, daily measurements of deposited 137Cs in 22 prefectures, which represents a total of 198 observations [2].

• Measurements of total deposited 137Cs in an area near the NPP (approximately 100 km around). Among the 2180 deposition measurements provided by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) during May and June 2011 [3], 16 were filtered out because they are impacted by near-field effects that are not represented by larger scale ATMs, so that 2164 observations, distant enough from the NPP, were considered. As shown in Fig. 5(a), they are densely distributed in space, but on the downside these observations carry no time information.

[2] http://www.mext.go.jp/english/incident/1305529.htm
[3] http://www.mext.go.jp/b_menu/shingi/chousa/gijyutu/017/shiryo/__icsFiles/afieldfile/2011/09/02/1310688_1.pdf

The distribution of the observation sites led us to use a mesoscale domain approximately covering Japan. Because of the spatial extension and density of the sites, and because of the frequency of the observations, the ATM requires a rather limited simulation domain with high resolution in space and time.

3.2. Modelling the atmospheric dispersion

3.2.1. Meteorological fields

ECMWF meteorological fields, with a spatial resolution of 0.25° × 0.25° and a temporal resolution of 3 hours, are too coarse for our needs. That is the reason why we computed mesoscale meteorological fields with the Weather Research and Forecasting (WRF) numerical model (Skamarock et al., 2008). The main objective is to obtain meteorological fields with a spatial resolution of approximately 5 km and a temporal resolution of 1 h. The computed meteorological fields are inputs for the atmospheric transport model. The physical parametrisations as well as the design of the simulation domains are summarised in Tab. 1, and the simulation domains are displayed in Fig. 1. One of the key features is the use of several thousand meteorological observations to constrain the meteorological fields through nudging techniques (Stauffer and Seaman, 1994).

As shown in Fig. 2, the WRF simulations show a good ability to model the occurrence of rain episodes, but have a general tendency to overestimate precipitation rates (Katata et al. (2012) observed the same behaviour using MM5). This bias is a severe drawback when looking at deposition processes, as this study aims to do. To avoid overestimating wet deposition, we used ground observations of rain rates in the Fukushima and Ibaraki prefectures to compute a global debiasing coefficient, which we found equal to 2.4, and by which all precipitation rates computed by the WRF model were divided. We also tested local corrections of the precipitation fields using data assimilation techniques, but we did not find them to be as robust and reliable as the simpler global debiasing correction.
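One plausible reading of this global debiasing, assuming paired arrays of observed and modelled rain rates at the ground stations (the file names and layout are hypothetical; the paper reports only the resulting coefficient of 2.4):

```python
import numpy as np

# hypothetical paired rain rates (mm/h) at ground stations of the Fukushima
# and Ibaraki prefectures: rows = stations, columns = hourly time-steps
rain_obs = np.loadtxt("rain_observed.txt")
rain_wrf = np.loadtxt("rain_wrf.txt")

debias = rain_wrf.sum() / rain_obs.sum()   # ratio of cumulated rain; ~2.4 in the paper
rain_wrf_corrected = rain_wrf / debias     # all WRF precipitation rates are divided by it
```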

3.2.2. Atmospheric transport model

The simulations of the dispersion of radionuclides from the Fukushima Daiichi nuclear power plant have been performed with the chemistry-transport model Polair3D, the Eulerian model of the Polyphemus platform. It has been validated for the simulation of radionuclide transport on the European Tracer Experiment, on the Algeciras incident and on the Chernobyl accident (Quélo et al., 2007).

The model integrates the concentration field c of 137Cs, following the transport equation

∂c/∂t + div(uc) = div(ρ K ∇(c/ρ)) − Λ_s c − Λ_d c + σ,    (18)

where ρ is the air density, Λ_s is the scavenging rate, Λ_d represents the radioactive decay and σ is the point-wise source. K is the matrix of turbulent diffusion, diagonal in practice. Its vertical component K_z is computed with the Louis parametrisation (Louis, 1979). The horizontal component K_H is taken constant. The boundary condition on the ground is

K_z ∇c · n = −v_dep c,    (19)

where n is the upward-oriented unitary vector and v_dep is the dry deposition velocity of 137Cs.

Two simulation domains are considered. The finest is a mesoscale domain covering Japan, from 131.03°E to 144.53°E and from 30.72°N to 43.72°N, with a spatial resolution of 0.05° × 0.05°; the number of grid points in this domain is 270×260. Because of the small size of the domain, there is a risk of re-circulation of the plume outside the domain, so that the model could fail to account for radionuclide re-entries. To avoid such a situation, and in order to compute the boundary conditions of this domain by a nesting technique, another more extended simulation domain with a coarser resolution is used. It covers a region from 115.03°E to 165.03°E and from 25.02°N to 60.02°N, with a resolution of 0.25° × 0.25°. This configuration is displayed in Fig. 1. For both domains the Polair3D model is configured with 15 vertical levels ranging from 0 to 8000 m.

Caesium-137 is modelled as monodispersed passive particulate matter with a radioactive decay of 11,000 days.


Table 1: Configuration and physical parametrisations of the WRF model. A two-way nesting technique is used between domain 1 and domain 2.

Domain                      1                   2
Spatial resolution          18 km               6 km
Number of grid points       340×250             241×241
Number of vertical levels   27                  27
Numerical time-step         60 s                20 s
Output time-step            3600 s              3600 s
Planetary boundary layer    Yonsei University   Yonsei University
Micro-physics               Kessler             WRF Single Moment 3
Cumulus physics             Grell-Devenyi       Grell-Devenyi
Longwave radiation          RRTM                RRTM
Shortwave radiation         Dudhia              Dudhia
Surface layer               MM5 similarity      MM5 similarity
Land surface                Noah LSM            Noah LSM
Nudging                     Grid nudging        Grid nudging

Figure 1: Map of the simulation domains used in WRF (dashed-line domains) and in Polair3D (full-line domains). Two-way nesting is used in the WRF simulations and one-way nesting in the Polair3D simulation. A triangle marks the location of the Fukushima Daiichi nuclear power plant.

Figure 2: Comparison between WRF calculations (dashed lines) and observations (full lines) of cumulated precipitation (in mm) since 2011-03-11 at surface weather stations in the Fukushima prefecture (Iitate, Hirono, Nihommatsu).

Dry deposition is modelled using a simple scheme with a constant deposition velocity: v_dep = 0.15 cm s⁻¹ over land (Bocquet, 2012) and v_dep = 0.01 cm s⁻¹ over the ocean (see Estournel et al. (2012) and references within). As far as wet deposition is concerned, the parametrisation used in this study is a below-cloud scavenging scheme of the form

Λ_s = a (p/p₀)^b,    (20)

where p stands for the precipitation rate and p₀ = 1 mm h⁻¹, a and b being two constants respectively equal to 8.4×10⁻⁵ s⁻¹ and 0.79, following Maryon et al. (1991).
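As a worked example of Eq. (20) and of the dry deposition constants above (a direct transcription, with the unit conversions and the one-hour scavenged fraction as the only additions):

```python
import numpy as np

A, B_EXP, P0 = 8.4e-5, 0.79, 1.0            # s^-1, dimensionless, mm/h (Maryon et al., 1991)
V_DEP_LAND, V_DEP_OCEAN = 0.15e-2, 0.01e-2  # dry deposition velocities in m/s

def scavenging_rate(p_mm_per_h):
    """Below-cloud scavenging coefficient Lambda_s of Eq. (20), in s^-1."""
    return A * (np.asarray(p_mm_per_h) / P0) ** B_EXP

lam = scavenging_rate(2.0)                  # ~1.45e-4 s^-1 for a 2 mm/h rain
print(1.0 - np.exp(-lam * 3600.0))          # ~0.41: fraction scavenged in one hour
```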

The advection is implemented with a third-order direct space-time scheme, with a Koren-Sweby flux limiter function. Because of the plume's sharp gradients, it is important that such a limiter be used. The diffusion scheme is integrated through an implicit second-order Rosenbrock scheme, with a three-point spatial scheme and directional splitting.

This study attempts to reconstruct the source term from March 11 to April 1 (which represents N = 504 one-hour time-steps). However, the simulations run over a longer period (from March 11 to April 5) in order to exploit the information content of later observations. A total of 504 direct simulations are thus performed to fill the Jacobian matrix H column by column; following Abida and Bocquet (2009) and Winiarek et al. (2011), no adjoint model is needed in this process.
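Since the ATM is linear, each column of H is the model response, at all observation points and times, to a unit release during one time-step. A sketch of this column-by-column construction follows, with run_atm a hypothetical wrapper around the transport model (Polair3D in this study):

```python
import numpy as np

N = 504                      # one-hour source time-steps (11 March to 1 April)
d = 104 + 198 + 519          # total number of assimilated observations

def run_atm(timestep):
    """Hypothetical wrapper: simulate a unit (1 Bq/s) release of 137Cs during
    the given one-hour time-step and sample the d observation points/times."""
    raise NotImplementedError

H = np.empty((d, N))
for j in range(N):
    H[:, j] = run_atm(j)     # one direct simulation per column; no adjoint needed
```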

3.3. Inverse modelling results only using activity concentrations in the air

The proposed methods can of course be applied to inverse modelling using only one data set, such as the activity concentrations in the air (the first data set described in Section 3.1, which contains 104 observations). In this case, the formulae are rigorously equivalent to the ones presented in Winiarek et al. (2012b). It is nonetheless interesting to perform such an inversion: it allows us to check and confirm the consistency of the model, even though the meteorological fields and the removal process parameterisations are different.

3.3.1. Estimation of parameters and total released activity

The estimated hyper-parameters and the estimated total released activities are reported in Tab. 2, together with the results found in Winiarek et al. (2012b) using the same data set. The estimated hyper-parameters, as well as the total released activities in the case of the ML method, are close and confirm the consistency of this approach.

As discussed in Winiarek et al. (2012b), Desroziers' scheme is based on Gaussian assumptions and thus has to be considered an approximation in the case of semi-Gaussian assumptions for the background error statistics. In a situation where only few data are available, it could yield results different from the maximum likelihood estimation. Nevertheless, the gap here is smaller than it was in Winiarek et al. (2012b) with the same data. This may be due to the use of a mesoscale model with a better resolution in space and time, which increases the general consistency of the system. The clear decrease of r, which represents the magnitude of the observational error, including the representativeness error and part of the model error, was to be expected and may be due to a better resolved model, hence reducing the impact of model error in the inversion.

Using the maximum likelihood estimation, the total released activity of 137Cs is estimated to be 1.1×10¹⁶ Bq. This is consistent with other estimations: 1.2×10¹⁶ Bq for Winiarek et al. (2012b) and Chino et al. (2011), 1.3×10¹⁶ Bq for Terada et al. (2012), 1.6×10¹⁶ Bq for Saunier et al. (2013), 2.1×10¹⁶ Bq for Mathieu et al. (2012) or 3.6×10¹⁶ Bq for Stohl et al. (2012).

3.3.2. Temporal profile and uncertainty reduction

Due to the meteorological conditions, which have often transported the radionuclides towards the Pacific Ocean, and to the low number of activity concentration data, the observability of the plume is reduced. That is why inverse modelling methods are only able to reconstruct the source term in some specific time intervals. Consequently, the estimated total released activities have to be considered as lower-bound estimates of the actual releases.

The reconstructed source term and its uncertainty are displayed in Fig. 3(a). The posterior uncertainty has been computed from a Monte Carlo analysis, where the observations and the background term are perturbed using their prior error definitions and the hyper-parameter estimates (2×10⁴ draws and inversions are performed). The standard deviation of the ensemble of estimators is then used to estimate the posterior uncertainty of the reconstructed source term. The uncertainty on the total released activity is around 65%.
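A sketch of this Monte Carlo procedure, under the prior error models of Section 2.2: observation noise is drawn as Gaussian with standard deviation r, and the background perturbation as semi-Gaussian with scale m (a labelled assumption, as the paper does not detail the perturbation recipe); the inner solver mirrors the earlier minimisation sketch:

```python
import numpy as np
from scipy.optimize import minimize

def invert(mu, sigma_b, H, r2, m2):
    """Positivity-constrained minimisation of Eq. (3) for a given first guess."""
    d, N = H.shape
    def cost(s):
        mis, db = mu - H @ s, s - sigma_b
        return (0.5 * (mis @ mis) / r2 + 0.5 * (db @ db) / m2,
                -H.T @ mis / r2 + db / m2)
    return minimize(cost, np.zeros(N), jac=True, method="L-BFGS-B",
                    bounds=[(0.0, None)] * N).x

def posterior_spread(mu, H, r, m, n_draws=1000, seed=0):
    rng = np.random.default_rng(seed)
    d, N = H.shape
    draws = np.array([invert(mu + rng.normal(0.0, r, d),       # perturbed observations
                             np.abs(rng.normal(0.0, m, N)),    # semi-Gaussian background draw
                             H, r**2, m**2)
                      for _ in range(n_draws)])
    # per-time-step spread, and spread of the summed rates (times the time-step
    # length, this gives the uncertainty on the total released activity)
    return draws.std(axis=0), draws.sum(axis=1).std()
```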

The time intervals of observability are clearly visible in the shape of the uncertainty. Three periods are particularly well observed: the first one lies approximately from 14 March to 15 March, the second one from 19 March to 22 March and the last one from 24 March to 26 March. Once again, the temporal profiles of the source term and its uncertainty are very consistent with the inversion made in Winiarek et al. (2012b).

Compared to source term estimates constructed from in-situ event monitoring and core inventories (Mathieu et al., 2012), a few events are not retrieved: the first hydrogen explosion in Unit 1 on 12 March, the ventings of Unit 3 on 13 March and the events concerning Units 2 and 3 on 16 March and 18 March. On the other hand, the multiple events of 14 March and 15 March are present in the reconstructed source term, even though in an incomplete way, since the last release, probably between 7:00 UTC and 12:00 UTC on 15 March, is missing. This release partly accounts for the north-west pattern on the deposition map, but no activity concentration observation is available to help reconstruct this event. The releases of 20 March and the ones from 21 March to 23 March are also retrieved. As far as the release around 25 March is concerned, there seems to be a slight offset of 12 hours in the reconstruction. The releases reconstructed on 19 March are not mentioned by these inventories, but seem compatible with the in-situ measurements of γ-dose rates from the operator TEPCO [4], for example on the north side of the main office or near the west gate of the NPP. They are also retrieved by Stohl et al. (2012). They precede the attempts of emergency cooling with Tokyo Fire Department means. Finally, the releases around 30 March, not mentioned by the former inventories, are also retrieved by Terada et al. (2012).

The scatter plot of all observations is displayed in Fig. 4(a). The simulation using the source term reconstructed with activity concentration data only does not show any systematic bias when estimating the deposited activities. Again, this shows the consistency of the model and in particular of the removal processes (wet and dry deposition).

3.4. Results of the inverse modelling using the three raw data sets

The total deposited 137Cs measurements offer no information about the time of deposition, as they were performed a posteriori, in May and June 2011. When only these measurements are used to attempt reconstructing the source term, the estimated total released activity seems consistent (between 1.1×10¹⁶ Bq and 1.2×10¹⁶ Bq). However, the temporal profile, displayed in Fig. 3(b), is highly doubtful. Only the releases on 23 and 25 March are clearly visible. Indeed, the system has too much freedom to fit the data: the only constraint is the background term in the cost function, which, in the case where the first guess is taken null (as we do), only defines a scale of amplitude. Therefore, even if these observations are abundant and have a good spatial distribution, we propose in the next sections to jointly use them with other measurements with a good temporal resolution, such as activity concentrations in the air and daily measurements of deposited material. Consequently, we propose to use the three data sets described in Section 3.1 in the same inversion. To this end, the prior errors have to be estimated simultaneously for the background and the three data sets (N_d = 3).

[4] http://www.tepco.co.jp/en/nu/monitoring/index-e.html

Table 2: Estimation of parameters and corresponding reconstructed released activity for the caesium-137 source reconstruction using only activity concentration observations in the air.

parameter               method                Regional scale model       Mesoscale model
                                              Winiarek et al. (2012b)    This study
r (Bq m⁻³)              Desroziers' scheme    5.4                        2.1
                        Maximum likelihood    3.3                        1.9
m (Bq s⁻¹)              Desroziers' scheme    5.3×10¹⁰                   8.9×10¹⁰
                        Maximum likelihood    2.0×10¹¹                   1.6×10¹¹
Released activity (Bq)  Desroziers' scheme    3.3×10¹⁵                   7.2×10¹⁵
                        Maximum likelihood    1.2×10¹⁶                   1.1×10¹⁶

Figure 3: Full line: temporal profile of the reconstructed caesium-137 source term using (a) activity concentration measurements of 137Cs in the air only; (b) total cumulated deposited 137Cs measurements only; (c) the activity concentrations in the air, the daily fallout measurements and the total cumulated deposition raw measurements; (d) the activity concentrations in the air, the daily fallout measurements and the super-observations computed from the total cumulated deposited 137Cs. Dashed lines: posterior uncertainty of the source term computed using a Monte Carlo simulation. (Each panel shows the source rate in Bq/s from 11 March to 31 March 2011, UTC.)

Figure 4: Scatter plots of simulations versus observations: 137Cs activity concentrations in the air (green crosses, in Bq m⁻³), daily measurements of deposited 137Cs (red crosses, in Bq m⁻²) and total cumulated deposited 137Cs (blue crosses, in Bq m⁻²). (a) Inversion using only activity concentrations in the air. (b) Inversion using the three raw data sets. (c) Inversion using the three data sets including super-observations. The dashed lines represent a misfit of a factor 10 between observations and modelled values.

3.4.1. Estimation of parameters and total released activity

As expected, when the number of available observations increases, the results yielded by Desroziers' scheme tend to get closer to the maximum likelihood estimation.

The estimated hyper-parameters as well as the estimated total released activities are reported in Tab. 3. Except for the daily measurements hyper-parameter, the values of the estimated hyper-parameters do not differ much between Desroziers' scheme and the maximum likelihood method.

Using these data sets, the total released activity of 137Cs is estimated to be between 1.2×10¹⁶ Bq and 1.3×10¹⁶ Bq.

3.4.2. Temporal profile and uncertainty reduction

The reconstructed source term and its uncertainty are displayed in Fig. 3(c).

The observability of the accident is improved compared to the inversion using only activity concentrations in the air. The time windows where the uncertainty is reduced are much larger, especially before 14 March and between 19 March and 26 March. The uncertainty on the total released activity has been reduced from around 65%, when using only activity concentrations, to around 25% in this case.

The events that were retrieved in the first inversion are still reconstructed with this data set. The time offset on the release of 25 March has disappeared. The system is now able to reconstruct the late release of 15 March, from 7:00 UTC to 9:00 UTC, and hence to model the north-west pattern of deposited 137Cs. The maps of observed and simulated total deposition are displayed in Fig. 5(a,b,c,d).

3.5. Results of the inverse modelling using super-observations computed from the three raw data sets

As mentioned above, the measurements of the third data set are very densely distributed. This proximity certainly induces correlations in the observation errors (mainly through model errors) that are not taken into account in our system. One way to take these correlations into account would be to use a non-diagonal R matrix. This method would increase the computational cost, as the inverse of R is needed for the minimisation of the cost function Eq. (3). Besides, it would introduce additional hyper-parameters in the correlation model, which would have to be properly estimated as well. Such an estimation could be a difficult task in a situation with a low number of observations. Another route to deal with densely distributed data is the thinning of observations, which consists in an optimal selection of observations. These techniques are already used in the assimilation of satellite data in the weather forecast community (Liu and Rabier, 2002). Yet another method consists in computing super-observations by averaging all the observations contained in a model grid cell (the model resolution is 0.05° × 0.05°), in order to mitigate the spatial correlation of the errors. This is the method that we chose. From the 2180 measurements initially in the third data set, we computed 523 super-observations.

Figure 5: Maps of observed and simulated deposited 137Cs. Upper left (a): measurements from MEXT. Upper right (b): simulation with the source reconstructed using only 137Cs activity concentrations in the air. Lower left (c): simulation with the source reconstructed using 137Cs activity concentrations in the air, daily measurements of deposited 137Cs and measurements of total cumulated deposited 137Cs. Lower right (d): simulation with the source reconstructed using 137Cs activity concentrations in the air, daily measurements of deposited 137Cs and super-observations computed from the measurements of total cumulated deposited 137Cs. The displayed values are in kBq m⁻².

Table 3: Estimation of parameters and corresponding reconstructed released activity for the caesium-137 source using activity concentrations, daily fallout measurements and total deposition data. r1, r2 and r3 represent respectively the observation error magnitudes of these three data sets.

parameter               method                with deposition   with deposition
                                              raw data          super-observations
r1 (Bq m⁻³)             Desroziers' scheme    3.8               2.9
                        Maximum likelihood    3.3               2.3
r2 (Bq m⁻²)             Desroziers' scheme    580               540
                        Maximum likelihood    210               240
r3 (Bq m⁻²)             Desroziers' scheme    325000            240000
                        Maximum likelihood    320000            230000
m (Bq s⁻¹)              Desroziers' scheme    1.2×10¹¹          1.3×10¹¹
                        Maximum likelihood    1.0×10¹¹          1.0×10¹¹
Released activity (Bq)  Desroziers' scheme    1.3×10¹⁶          1.9×10¹⁶
                        Maximum likelihood    1.2×10¹⁶          1.8×10¹⁶

From these, 4 super-observations located too close to the NPP were eliminated, leaving 519 super-observations to be used as the new third data set. Note that it is not necessary to estimate the reduced variance of a super-observation, since this is implicitly accounted for in the related hyper-parameter estimation.
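A sketch of the grid-cell averaging used to build such super-observations, assuming arrays of measurement coordinates and values; the corresponding rows of the Jacobian H would have to be averaged with the same weights so that the linear source-receptor relation is preserved:

```python
import numpy as np

def super_observations(lon, lat, values, res=0.05):
    """Average all measurements falling in the same res x res (degrees) model
    grid cell, returning one super-observation per non-empty cell."""
    cells = np.stack([np.floor(lon / res), np.floor(lat / res)], axis=1)
    uniq, inverse = np.unique(cells, axis=0, return_inverse=True)
    counts = np.bincount(inverse)
    means = np.bincount(inverse, weights=values) / counts
    centres = (uniq + 0.5) * res               # cell-centre coordinates
    return centres[:, 0], centres[:, 1], means   # lon, lat, averaged deposition
```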

3.5.1. Estimation of parameters and total released activity

The estimated hyper-parameters as well as the estimated total released activities are reported in Tab. 3. As expected, the estimated standard deviation of the error in the cumulated fallout (r3) is significantly decreased, by about 28%.

Using these data sets, the total released activity of 137Cs is estimated to be between 1.8×10¹⁶ Bq and 1.9×10¹⁶ Bq.

3.5.2. Temporal profile and uncertainty reduction

The reconstructed source term and its uncertainty are displayed in Fig. 3(d). The uncertainty on the total released activity is estimated to be around 15%, from a Monte Carlo study of 2×10⁴ draws. However, this is the same absolute standard deviation, of about 3 PBq, as in the raw data sets inversion.

All the previously mentioned events are now retrieved with these data sets:

• Around 12 March: identified as the hydrogen explosion in Unit 1.

• On 13 March: identified as the ventings of Unit 3.

• On 14 and 15 March: multiple ventings and hydrogen explosions, mainly concerning Unit 2 and Unit 3. The late release on 15 March is now retrieved (from 7:00 to 9:00 UTC), even if its magnitude might appear weak (see Section 3.6.1 for a discussion on the magnitude of the retrieved peaks).

• On 16 March: unidentified events, which correspond to pressure drops in Unit 2 and Unit 3.

• On 18 March: unidentified events, probably related to Unit 3.

• On 19 March: unidentified events, which correspond to an increase in several in-situ γ-dose rate measurements. The attempts of emergency cooling with Tokyo Fire Department means began just after these events.

• On 20 March: unidentified events concerning at least Unit 2 and Unit 3.

• From 21 March to 23 March: events corresponding to smoke emitted from Unit 2 and Unit 3.

• On 25 March: unidentified event possibly concerning Unit 2. The magnitude of this peak might appear over-estimated (see Section 3.6.1 for a discussion on the magnitude of the retrieved peaks).

• On 30 March: unidentified event.

From the scatter plots of Fig. 4(b,c), it is clear that the simulated cumulated deposition data are comparable when using the source term built from the raw data or from the super-observations. This is also visible on the deposition maps of Fig. 5(c,d). On the other hand, the system's ability to simulate the two other data sets is slightly improved when using the source term retrieved from the super-observations. The assumption that the observation errors are uncorrelated led the system to give too much weight to the third raw data set in the inverse modelling algorithm. This over-confidence may be corrected when using super-observations, so that the information content in the system is better balanced.

3.6. Discussion about the retrieved sources

These inversions offer an objective estimation, using data assimilation techniques, of what can be extracted from the data sets and the numerical model. The assimilated observations may or may not suffice to constrain the source term parameters. We showed that the cumulated deposition data alone are not sufficient to offer a satisfying chronology of the source term, and that the joint assimilation of the three data sets helped to better constrain the chronology. Yet, the magnitude of the identified peaks remains questionable. The data may or may not be able to constrain them well enough, although the estimate of the total released activity seems robust.

3.6.1. On the magnitude of the peaks

It is generally thought that the main releases occurred around March 15. These releases probably contributed the most to the north-west pattern of the deposition map. Our system, where the cumulated deposition data do not contain any information in time, may not be constrained enough by the air concentration observations or the daily measurements of fallout (especially before March 18) to precisely balance the contributions of the releases of March 14-15 (around 15% of the total retrieved activity), March 20 (around 13%) and March 25 (around 15%).

The peaks of March 20 and March 25 are not only reconstructed with the help of deposition measurements. They are both additionally explained by measurements of activity in the air from a particular monitoring station, located in Fukushima city, which measured an activity concentration of about 32 Bq m⁻³ around March 20 and about 14 Bq m⁻³ around March 25. It is possible that these measurements are caused by a re-suspension of previously deposited caesium-137. But resuspension is not implemented in our model, so that resuspension events are likely to be accounted for by fictitious releases in the source term. We performed a new inversion using the same data as in Section 3.5, but without the measurements of this station. The results from the super-observations showed an increase of the released activity on March 14 and 15, from 2.6 to 3.0 PBq (from 13% to 16% of an unchanged total emitted activity of 19 PBq), and a decrease on March 20, from 2.6 to 2.0 PBq (from 13% to 11% of the total emitted activity). Yet, no consequence was observed on the magnitude of the peak of March 25.

We also performed inversions using a non-null first guess inferred from independent γ-dose measurements (Saunier et al., 2013), which indicates to the system that most of the releases occurred before March 18. As a result, the total estimated released activity increased by about 15%, but the peak on March 25 still remained almost at the same level.

As a more drastic test, and to force the system to reconstruct higher releases before March 18, we reduced the inversion window to this period. As a consequence, the releases of March 15 increased (from 0.6 PBq to 2.6 PBq using the three raw data sets, and from 0.9 PBq to 3.9 PBq using the three data sets with super-observations as the third data set), but the total estimated releases decreased (respectively from 12 PBq to 5 PBq, and from 18 PBq to 11 PBq). This is consistent with the inversion using only activity concentrations in the air, where approximately 6 PBq were estimated to be released after this date. At the same time, the reanalysis of the deposition map was degraded, especially in the central area of the north-west pattern, which is the most contaminated. This shows that: (i) considering only this shorter time window, the system cannot properly reconstruct the deposition pattern shown in Fig. 5(a); (ii) releases probably occurred after March 19 and may have contributed to the north-west pattern of the deposition map, but their magnitude is still difficult to estimate because of the lack of observations with temporal information (such as activity concentrations in the air) in this area; (iii) this may highlight the difficulty of modelling the removal processes, especially wet deposition. It is also possible that the cumulated deposition map, whose measurements were made several months after the accident, is no longer faithful to the deposition events of the accident.

We also tested a different reconstruction resolution. Instead of reconstructing a source term with a one-hour time-step (N = 504), an inversion was carried out on a source term with a three-hour time-step (N = 168). It is generally admitted that the inversion is sensitive to the resolution of the control space (Bocquet et al., 2011) and that the system cannot generally provide reliable information at a too precise resolution (for instance the model resolution). Nevertheless, estimating the prior errors allows the impact of the control space resolution to be compensated by tuning the regularisation in the inversion, so that the dependence on the resolution should be mitigated. The estimated source term, using the same data sets as in Section 3.5, as well as its uncertainty, are displayed in Fig. 6. The total released activity is now estimated to be 1.6×10¹⁶ Bq, with an uncertainty of 18%.

Figure 6: Full black line: temporal profile of the reconstructed caesium-137 source term with a reconstruction resolution of 3 h, using the activity concentrations in the air, the daily fallout measurements and the super-observations computed from the total cumulated deposited 137Cs. Dashed line: posterior uncertainty of the source term computed using a Monte Carlo simulation. Thin blue line: temporal profile of the reconstructed caesium-137 source term with a reconstruction resolution of 1 h using the same data. (The source rate is displayed in Bq/s, from 11 March to 31 March 2011, UTC.)

The estimated released quantities were mainly reduced on March 30 (from 2.1 PBq to 1.3 PBq), on March 20 (from 2.6 PBq to 2.1 PBq) and on March 25 (from 2.9 PBq to 2.8 PBq), but they remain at a high level. At the same time, the estimated released quantities on March 14-15 increased from 2.6 PBq to 4.1 PBq. Many of the low-magnitude peaks with a high uncertainty have been reduced or have disappeared.

3.6.2. On the difference between the sources reconstructed with raw data and super-observations

Comparing the source profile obtained from the super-observations (Fig. 3(d)) with the source profile obtained from the inversion of the raw data sets (Fig. 3(c)), 2 PBq of caesium-137 are found to increase already existing peaks.

Moreover, when using super-observations instead of raw data, new release episodes appear in the reconstructed source term. Compared to the raw data sets inversion, 4 PBq of caesium-137 are found in new time slots. Yet the related uncertainty is still high, and even above the retrieved peaks. Nonetheless, the new release episodes that appear in the inversion do not seem to be artefacts of the inversion algorithm. They do correspond to observed events in the NPP and to events reconstructed by other studies (Stohl et al., 2012; Saunier et al., 2013).

Judging from the r_i, the errors of the super-observations, as well as their use by the model, are diagnosed as more reliable than those of the raw data sets. By contrast, the Tikhonov regularising term is less constraining, and new peaks can more easily form in the reconstructed source term.

On a physical level, the appearance of the new peaks results from the smoothing of the deposition observations, which in turn leads to a smoothing of the retrieved source. This is probably why the only peak which is reduced when using super-observations is the highest one, on March 25. This peak is mainly induced by the 137Cs deposit observations of the very thin north-west pattern. Hence, a smoothing of this pattern can lead to a new balance of the peaks (March 15, March 20 or March 25).

4. Conclusion

In order to reconstruct the source terms of accidental pollutant releases into the atmosphere, we have proposed an inverse modelling algorithm able to use all available data in the same inversion (concentrations in the air, measurements of fallout, integrated measurements, etc.). The algorithm relies on the relationship provided by the atmospheric transport model between the source vector and the observation set, and on the prior errors introduced in the system: the background errors and the observation errors. To properly balance the information content in the system, a proper estimation of these prior errors is crucial.

To this end, we proposed two methods relying on the maximum likelihood principle, which we applied to the challenging reconstruction of the 137Cs source term released during the Fukushima Daiichi nuclear power plant accident in March 2011. Three data sets were used in the inversion process: activity concentrations in the air, daily measurements of fallout and total cumulated deposition data. Consequently, the prior estimation concerned 4 hyper-parameters: the variance of the background errors and the variance of each observation set's errors. Averaged observations (called super-observations in this article) have also been considered for the third data set, in order to reduce the correlations in the errors that had been neglected in the algorithm. The posterior uncertainty related to the estimated releases has also been estimated through a Monte Carlo analysis. Such an estimation is only possible with properly estimated prior errors.

With these methods and without super-observations, the total released activity is estimated to be between 1.2×10¹⁶ and 1.3×10¹⁶ Bq, with a related uncertainty around 25%. When using super-observations instead of raw fallout data, the total released activity is estimated to be between 1.8×10¹⁶ and 1.9×10¹⁶ Bq, with a related uncertainty around 15%.
