
“d) Sampling process — Preparation of the sorted sample

“The preparation of sorted samples is carried out by reduction of single or composite samples. A sorted sample should be representative of the average value of one or more given soil characteristics.”

2.4. OPTIMIZATION OF THE SAMPLING PROGRAMME

2.4.1. General framework²

The problems of optimizing a sampling programme for environmental monitoring, and of ensuring the representativeness of samples and their measurement, became apparent following the accident at the Chernobyl nuclear power plant, when large territories across the European continent were contaminated. Environmental monitoring had traditionally relied on soil sampling followed by measurement in the laboratory. However, aerial surveys, mobile monitoring and in situ gamma spectrometry (see Ref. [2.16]) proved valuable in detecting and delineating changes in soil concentrations (see Ref. [2.30]).

The sampling plan and sample preparation have a fundamental influence on the quality of the results. An increase in the number of samples collected, or in the area sampled, reduces the error in the estimated contamination level and distribution. However, this comes with increased costs of labour, sampling, sample transport, preparation and analysis. The objective of optimization is therefore to obtain an estimate of the distribution of environmental contamination, within a given error, at minimum cost and time. The optimization of the sampling plan thus takes into consideration the personnel resources for sample collection, the time and cost of measurement, the quantity and mass of samples, the size of the study area, the depth of sampling, and the vertical and spatial resolution required to fulfil the monitoring objectives.
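As a minimal illustration of this trade-off (the coefficient of variation and the cost figures below are invented, not taken from the reference), the following sketch estimates the smallest number of samples that meets a target relative error of the mean, and the cost that this implies:

```python
import math

# Hypothetical inputs: an assumed coefficient of variation of the
# contamination density (e.g. from a pilot survey) and a target relative
# error of the mean at roughly the 95% confidence level (z ~ 1.96).
cv = 0.60                 # coefficient of variation (sigma / mean), assumed
target_rel_error = 0.15   # acceptable relative error of the mean
z = 1.96

# Classical sample-size estimate: n >= (z * cv / delta)^2
n_required = math.ceil((z * cv / target_rel_error) ** 2)

# Entirely hypothetical per-sample costs: collection, transport, analysis.
cost_per_sample = 25.0 + 5.0 + 70.0
total_cost = n_required * cost_per_sample

print(f"samples required: {n_required}")
print(f"estimated cost:   {total_cost:.0f} (arbitrary currency units)")
```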

Analytical results are obtained for each sample. Extrapolation of these results to the area or volume from which the sample was collected can only be achieved with some uncertainty. The largest errors during determination of the areal distribution of contamination occur at the stage of sampling planning and the execution of the sampling programme, and not during the measurement of the sample activity. In practice, the inhomogeneous distribution of contaminants is often the largest contributor to data uncertainty, and it is usually not quantified.

² This section is based on Ref. [2.29].

Accuracy, precision and other data quality indicators that characterize the robustness of the analytical data are affected by the sample preservation, transport and laboratory analytical procedures, but do not account for spatial variability of the contaminant at the site. It is therefore important that samples are collected in a manner which delivers the confidence level required for effective environmental management [2.15]. For example, after the accident at the Chernobyl nuclear power plant a variety of methods for soil sampling were used. However, the activity estimates from two soil samples collected from points located only several metres apart could differ by one order of magnitude.

This led to significant uncertainty in the determination of the areal distribution of contamination, so a governmental commission in Ukraine decided to establish a unified protocol for sampling radioactively contaminated soils.

The soil samples collected and analysed are assumed to be representative of the site. Most of the important decisions about a site are based on these data, so it is essential that they accurately characterize site conditions at the time of sampling. This requires that a sample or group of samples collected from the site accurately reflect the concentration of contaminants at the site. Such samples are called “representative samples” [2.5]. This is particularly important in environmental monitoring, upon which dose calculations are based and policy decisions are made.

Extreme spatial heterogeneity, such as the presence of ‘hot’ particles (particles of anomalously high activity) in samples, can cause large errors in extrapolating the data [2.31, 2.32]. The subsequent dissolution of hot particles at different rates makes the soil contamination extremely inhomogeneous, even on small sites [2.33]. Non-uniformity in microrelief and the redistribution of radionuclides by biogenic factors further increase the non-uniformity of the soil. Radionuclides from fallout can migrate deep into the ground, at a rate determined by the chemical properties of the element, the physical and chemical properties of the fallout, the landscape, and soil and climate characteristics [2.34, 2.35]. Radionuclides are uniformly mixed in the arable (ploughed or tilled) stratum of the soil, and with time they can migrate into the subsoil horizon. Neglecting this vertical migration could lead to significant errors when evaluating the activities and areal distributions of radionuclides [2.36]. To obtain a representative sample from a field site, it is therefore necessary to know: (i) the source of the radioactive contamination; (ii) the physical and chemical characteristics of the radioactive material; and (iii) the depth to which it has migrated into the soil.

Plants are primarily contaminated during routine and emergency releases, by direct deposition of aerosol-bound and gaseous radionuclides or by direct contamination (by wind or rain splash) with resuspended radionuclides. Root uptake can also be a significant route, especially for medium to long lived radionuclides.

The heterogeneity of radioactive contamination can be lower in plant samples than in soil samples, because plant material is collected from a greater area and plant root systems, which extend across a larger soil volume, effectively average out the heterogeneity of contamination in the soil.

The two types of representativeness are physical and statistical. The physical representativeness of a sample is determined by collecting a single sample within a specified time and space distribution (e.g. by accounting for the vertical migration of radionuclides). The statistical representativeness of a sample is based on the number of samples and the statistical variance of radionuclide contamination in the samples [2.35]. In practice, however, the variance is very seldom known a priori. Therefore, the average value (mean or median) is usually determined from the data, along with an appropriate error and prescribed confidence limit. A very large error may require additional measurements to meet predefined data quality guidelines. To describe the quality of the data, investigators frequently report only the measurement error of a single subsample and extend it to the whole data set, which leads to underestimating the true error and also to incorrect conclusions concerning the contamination characteristics and implications [2.35].
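To make this distinction concrete, the short sketch below (with invented activity values) contrasts the confidence interval computed from the between-sample variance with the counting error of a single subsample, which is typically much smaller:

```python
import math
import statistics

# Invented activities (Bq/kg) of n = 10 soil samples from one site.
activities = [820, 450, 1300, 610, 980, 720, 1550, 540, 890, 1100]

n = len(activities)
mean = statistics.mean(activities)
s = statistics.stdev(activities)   # between-sample standard deviation

# Approximate 95% confidence interval of the mean (t ~ 2.26 for 9 d.f.).
t = 2.26
half_width = t * s / math.sqrt(n)
print(f"mean = {mean:.0f} Bq/kg, 95% CI = +/- {half_width:.0f} Bq/kg "
      f"({100 * half_width / mean:.0f}% of the mean)")

# A typical counting error of a single subsample (say 5%) is far smaller,
# so reporting it for the whole data set would understate the true error.
print(f"single-subsample counting error: +/- {0.05 * mean:.0f} Bq/kg (5%)")
```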

The presence of discrete fuel particles in a soil sample may cause large errors when measuring the activity. For example, the gamma spectrometric measurement of sample activity can vary by up to an order of magnitude, depending on the position of a fuel particle in the measuring container (pot or Marinelli beaker) and the container geometry. In addition, the probability of including isolated fuel particles within the subsample depends on the size of the subsample: the smaller the subsample, the lower the probability of including a fuel particle in the analysis. In this case, the measured activity of a subsample might not correspond to the activity of the whole sample. Fuel fragments should be isolated and dealt with separately.
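Assuming, for illustration, that fuel particles are dispersed randomly and independently in a well mixed sample, the probability of catching at least one particle in a subsample follows a simple binomial argument; the sketch below (all quantities hypothetical) shows how this probability falls with subsample mass:

```python
sample_mass_g = 1000.0   # mass of the whole sample (hypothetical)
n_particles = 3          # hot particles assumed present in the sample

for subsample_mass_g in (10.0, 50.0, 100.0, 500.0):
    frac = subsample_mass_g / sample_mass_g
    # P(at least one particle ends up in the subsample), assuming each
    # particle lands in the subsample independently with probability frac.
    p_include = 1.0 - (1.0 - frac) ** n_particles
    print(f"{subsample_mass_g:6.0f} g subsample: "
          f"P(>=1 hot particle) = {p_include:.2f}")
```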

The aim of optimization is to minimize the costs of sampling and analysis by defining the minimum number of samples necessary to evaluate the controlled parameters within a specified error, thus ensuring the quality of the monitoring.

Sampling sites with no underlying gradient of contamination are of vital importance when statistically characterizing the contamination of soil and vegetation. These are sites within the limits of which any trend of contamination is absent and all local deviations of the contamination density have a random character (see Fig. 2.2, Section 2.2).

Khomutinin et al. [2.29] describe the distribution of contamination as a continuous function of the locality coordinates f(x,y). Generally, this function has three components:

(a) Trend of contamination: The monotonic component of the density of radioactive fallout, determined by the global (with respect to the controlled territory) gradient of fallout.

(b) Spot of contamination: Localities with increased or reduced contamination density against the background of the trend.

(c) Random component: The microheterogeneity of the radioactive fallout at a point, together with the variability introduced by the techniques and processes of soil sampling, preparation for measurement, and measurement itself.

Each component can be represented by its own function of the locality coordinates. Combining them yields the contamination density f(x,y) at a specific point. It is possible to present f(x,y) as the sum of the functions describing these components (additive model) or as their product (multiplicative model). As f(x,y) is a strictly positive random variable whose values at a specific point follow a log-normal probability distribution, the multiplicative model can be written as:

f(x,y) = ftr(x,y) × fst(x,y) × fac (2.1)

where

ftr(x,y) describes the monotonic trend of the contamination density;

fst(x,y) describes spots of the contamination density against the trend;

and fac is the random component, independent of the site coordinates.

Taking the logarithm converts the multiplicative model for f(x,y) into an additive model for z(x,y):

z(x,y) = ln f(x,y) (2.2)

z(x,y) = ztr(x,y) + zst(x,y) + zac (2.3)
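A minimal numerical sketch of Eqs (2.1)–(2.3) follows; it is illustrative only, and the linear trend, the single Gaussian ‘spot’ and the log-normal spread are invented parameters, not values taken from Ref. [2.29]:

```python
import math
import random

random.seed(1)

def f_tr(x, y):
    # Monotonic regional trend (hypothetical linear gradient).
    return 1.0 + 0.02 * x + 0.01 * y

def f_st(x, y):
    # One contamination 'spot' centred at (30, 40) (hypothetical).
    r2 = (x - 30.0) ** 2 + (y - 40.0) ** 2
    return 1.0 + 2.0 * math.exp(-r2 / 200.0)

def f_ac():
    # Log-normal random component, independent of the coordinates.
    return math.exp(random.gauss(0.0, 0.3))

# Multiplicative model, Eq. (2.1), and its additive log form, Eq. (2.3).
x, y = 25.0, 35.0
f = f_tr(x, y) * f_st(x, y) * f_ac()
z = math.log(f)  # Eq. (2.2): z(x, y) = ln f(x, y)
print(f"f({x}, {y}) = {f:.3f}, z = {z:.3f}")
```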

The representation of the contamination density in Eqs (2.1) and (2.3) is sufficiently general to describe most complex systems. This approach has been used successfully to map the geology and radioactive contamination within the 30 km Exclusion Zone of the Chernobyl nuclear power plant (see Ref. [2.31]).

A key challenge in optimizing the sampling programme is that a regional trend with several anomalies superimposed on its background is difficult to resolve because of the site specific characteristics that determine the distribution of the radioactive contamination, such as [2.34]:

— Contaminant distribution;

— Presence of localized gradients and hot spots;

— Underlying variability within the landscape;

— Processes leading to the redistribution of radionuclides.

However, an approximate solution can be found when the problem is divided into two consecutive steps:

(i) To define the minimum number of samples required to characterize spatially any trend in the contamination density, within predefined levels of uncertainty, assuming that no anomalies influence the observed spatial trend;

(ii) To define the minimum number of samples required to characterize spatially any anomalies against the trend in the background contamination density, within predefined levels of uncertainty, should this be reasonable.

The appropriateness of the second step is assessed after a statistical treatment of the results from the first step and the demonstration of any anomalies in the contamination density, should they occur.

A site is considered to have no gradient within its borders if the density variations due to the radioactive fallout do not exceed the variability caused by random depositional and sampling factors. The identification and separation of sites for sampling can also present a challenge. In the case of gamma emitting radionuclides, mobile or airborne gamma spectrometry systems [2.16] can be used to partition the area under investigation into quasi-non-gradient sites. This method was used widely following the accidents at both the Chernobyl nuclear power plant and the Fukushima nuclear power plant.

Khomutinin et al. [2.29] find that normalizing the contamination density of an arbitrary site on the trend of general form z′(x,y) = z(x,y) − ztr(x,y) + zst(x,y) results in a non-gradient contaminated site (f′(x,y) = 1) with respect to the normalized density within the limits of which all divergences of contamination density have a random character.

Thus, Khomutinin et al. [2.29] find that it is fundamentally important to evaluate the statistical performance of the contamination density on uniformly contaminated sites (f(x,y) = const.); that is, sites without a systematic underlying gradient of contamination, or non-gradient sites. The statistical conclusions obtained for these sites are the basis of similar conclusions for sites that exhibit an arbitrary trend of radioactive contamination of the form ftr(x,y)·fst(x,y). The fact that any non-uniformly contaminated region can in practice be separated into quasi-non-gradient contamination sites underlines the importance of applying a statistical analysis of the contamination density on uniformly contaminated sites. The underlying assumption in this section is that contaminated sites do not have any underlying gradient of contamination; such sites are referred to as uniformly contaminated sites.