Fontainebleau, France – 3 UMR 6143 M2C, Université de Rouen, Bât. Irese A, Place Emile Blondel, 76821 Mont-Saint-Aignan Cedex
Source tracing of fluvial suspended sediments by magnetic, geochemical and spectrocolorimetric particle characterization: example of the Canche Watershed (Nord-Pas-de-Calais, France)
3. Laboratory analyses
Fig. 11 Evolution of the coating microhardness (Vickers) with operational conditions of the cold spray process for both powders A and B.
5. SUMMARY AND CONCLUSIONS
Individual particle characterization is a necessary step towards a better understanding of the quality of cold spray coatings. Image analysis brings new insights into particle properties by allowing fast, fully automated quantitative microscopy of particle geometry (size and shape). Replacing traditional bulk analysis (e.g. sieving) with individual sensing of particles allows precise morphostatistical data analysis that can reveal subtle differences in product properties. In this work, particle size appears to be the most critical parameter with regard to process performance and coating quality. The use of the size distribution by number instead of the size distribution by weight appears to be the key to developing powders with improved properties.
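The number-versus-weight distinction can be made concrete with a short sketch (the diameters below are illustrative, not data from this study): weighting each particle by d³, i.e. by volume and hence by mass at constant density, shifts the apparent distribution toward coarser sizes, so fine particles that dominate by number can be nearly invisible by weight.

```python
import numpy as np

# Hypothetical per-particle diameters (µm), as automated image analysis would produce.
rng = np.random.default_rng(0)
diameters = rng.lognormal(mean=np.log(25.0), sigma=0.35, size=10_000)

bins = np.linspace(5, 100, 40)

# Number-weighted distribution: each particle counts once.
hist_number, _ = np.histogram(diameters, bins=bins)
frac_number = hist_number / hist_number.sum()

# Volume (≈ weight at constant density) weighting: each particle counts as d^3.
hist_volume, _ = np.histogram(diameters, bins=bins, weights=diameters**3)
frac_volume = hist_volume / hist_volume.sum()

# The volume-weighted mode sits at a larger diameter than the number-weighted
# one, which is why sieve (mass) data under-represent the fine fraction.
mode_number = 0.5 * (bins[np.argmax(frac_number)] + bins[np.argmax(frac_number) + 1])
mode_volume = 0.5 * (bins[np.argmax(frac_volume)] + bins[np.argmax(frac_volume) + 1])
print(mode_number, mode_volume)
```

For a lognormal population, the volume-weighted mode is larger than the number-weighted one by roughly a factor exp(2σ²), so even a modest spread produces clearly different "typical" sizes under the two conventions.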
on the refractive index of liquid-liquid particles with radii in the range [30-1250] µm. This PJM method opens up broad perspectives in domains requiring characterization of the composition of objects (particles, cells…) in addition to their morphological (size and shape) and dynamical (velocity, acceleration) properties.
Within the framework of scalar diffraction theory, and more particularly the Fresnel approximation, we have recently demonstrated that the PJ reconstructed by DIH is formally similar to the Airy pattern of a thin lens [4, 6], and that the reconstructed near field exhibits real (m > 1) or virtual (m < 1) features. To further extend the capability and accuracy of the DIH-PJM, it is a priori necessary to account not only for Geometrical and Physical Optics (GPOA) effects but also for semi-classical effects (e.g. [7-11]). In this communication we investigate the interest of the Debye series for this purpose and as an alternative [12, 13].
stream similarly to the abundance of pure iron oxide particles (class 1; compare Figure 7a with Figure 8a). One can therefore deduce that the dominant control on total magnetic concentration is not the detrital fraction of the suspended particle load. Figure 8c also depicts this change in magnetomineralogy for all investigated profiles, starting from rather hard coercivities upstream to a much softer magnetomineralogy downstream. This agrees with the observed decreasing downstream abundance of Ti-rich iron oxides, given that the coercivities of titanomagnetite and ilmenohematite are in general higher than that of pure magnetite [e.g., Dillon and Franke, 2009, and references therein]. Moreover, low-temperature magnetic measurements also showed an increasing dominance of magnetite-like behavior and the disappearance of goethite- and hematite-like behaviors downstream through the watershed.
Figure 4. Left, synthetic overlapping particles; Right, contour of each particle detected by shadow image processing
The next test was carried out with a shadow image of a transparent target with a dot pattern (Thorlabs R3L3S4P2), shown in Figure 5. The dots on this target come in five different size groups, but only three of them appear in the field of view: 62.5 μm, 125 μm and 250 μm. The back illumination is slightly non-uniform to increase the challenge for the shadow processing. The scale in the image is 7.2 μm per pixel.
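A minimal sketch of this kind of shadow processing, using a synthetic image rather than the actual recordings (the threshold value, dot placement, and uniform background are assumptions for illustration), thresholds the image, labels connected components, and converts pixel areas to equivalent diameters using the 7.2 μm/pixel scale:

```python
import numpy as np
from scipy import ndimage

SCALE = 7.2  # µm per pixel, as in the experiment

# Synthetic stand-in for the shadow image: dark dots on a bright background,
# with the three dot sizes visible in the field of view.
img = np.full((300, 300), 200.0)
yy, xx = np.mgrid[0:300, 0:300]
for cy, cx, d_um in [(60, 60, 62.5), (150, 150, 125.0), (240, 100, 250.0)]:
    r_px = d_um / SCALE / 2.0
    img[(yy - cy) ** 2 + (xx - cx) ** 2 <= r_px ** 2] = 30.0

# Global threshold, then connected-component labelling.
mask = img < 120
labels, n = ndimage.label(mask)
areas_px = ndimage.sum(mask, labels, index=range(1, n + 1))

# Equivalent circular diameter of each detected dot, converted to µm.
diameters_um = 2.0 * np.sqrt(areas_px / np.pi) * SCALE
print(n, np.sort(diameters_um))
```

With a non-uniform background, as in the real test, the global threshold would be replaced by a local or adaptive one; the labelling and sizing steps are unchanged.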
5.2 Grit particle characterization: influence of sample pretreatment, sieving method and weather conditions
Abstract
Grit causes problems in water resource recovery facilities (WRRFs): clogging pipes, damaging pumps, and reducing the active volume of aeration tanks and anaerobic digesters by grit accumulation. Grit chambers are built to remove these particles. However, no standardized methodology exists to characterize grit particles for grit chamber design and operation, despite the large observed variability in grit composition. Therefore, this paper proposes a combination and adaptation of existing methods to sample and characterize grit particles in view of proper grit chamber design and its modelling, to ultimately optimize the efficiency of this important WRRF unit process. Characteristics evaluated included particle size distribution from sieving after different sample pretreatments, organic/inorganic fractions, and density.
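A particle size distribution from sieving is conventionally reported as a cumulative percent-passing curve, which can be computed as below (the sieve openings and retained masses are invented for illustration; they are not measurements from this study):

```python
# Hypothetical sieve stack (mesh opening in µm, coarsest first) and the dry
# mass retained on each sieve (g); the pan collects what passed the finest one.
sieves_um = [2000, 1000, 500, 250, 106, 75]
retained_g = [2.1, 8.4, 21.3, 34.0, 18.7, 9.5]
pan_g = 6.0

total = sum(retained_g) + pan_g
passing = []
cumulative_above = 0.0
for mesh, mass in zip(sieves_um, retained_g):
    cumulative_above += mass
    # Percent of total mass finer than this mesh opening.
    passing.append((mesh, 100.0 * (total - cumulative_above) / total))

for mesh, pct in passing:
    print(f"{mesh:>5} µm: {pct:5.1f} % passing")
```

Characteristic diameters such as d50 are then read off (or interpolated from) this monotonically decreasing curve.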
Mucus is a polymer-based hydrogel that covers the inner linings of the body, from the nose to the female genital tract. Mucus fulfills many different functions: it protects underlying cells from contact with noxious agents, assists in the adsorption of food particles, and serves as a lubricant (1–3). One critical aspect of mucus function is to permit selective passage of molecules that are beneficial for the body while rejecting others that are potentially harmful, such as viruses or bacterial pathogens. The filtering properties of mucus are critical for health; changes in mucus properties can cause numerous medical conditions, from bacterial infections to some forms of infertility. Despite the importance of mucus for many body functions, the biophysical principles that govern selectivity in mucus barriers are not well understood. In a polymer-based hydrogel, the concentration of polymers defines the mesh size of the network. This mesh size sets the threshold beyond which particle diffusion is suppressed: particles with dimensions smaller than this mesh size should be able to pass through the hydrogel, whereas larger particles should not (Fig. 1). Thus, hydrogels with different mesh sizes would show distinct selectivity properties toward particles of different sizes, a concept we will refer to as size filtering. The concentration of mucin polymers found in the human body can vary between 1 and 5% (w/v) (4–6), suggesting that the mucin density might be a key parameter for the regulation of mucus function. Experiments on sputum mucus obtained from cystic fibrosis patients report a decrease in particle mobility with increasing particle size (7), consistent with a size-filtering mechanism. Yet, the
Abstract In this article, we present a novel Lagrangian particle tracking method derived from the perspective of the tracking-by-detection paradigm that has been adopted by many vision tracking tasks. Under this paradigm, the particle tracking problem consists of first learning a function (the tracker) that maps the target particle's image projection backward to its possible position inferred from its preceding tracking information. The target particle's actual position is then detected by simply applying the learned function to particle images captured by cameras. We also propose to solve the function learning problem using kernel methods. The proposed method is therefore named Kernelized Lagrangian Particle Tracking (KLPT). The current state-of-the-art LPT approach, Shake-The-Box (STB), despite employing a highly efficient image matching and shaking-based optimization procedure, tends to be trapped in local minima when dealing with datasets featuring complex flows or larger time separations. KLPT can overcome these optimization difficulties associated with significant prediction errors since it features a highly robust function learning procedure combined with an efficient linear optimization technique. We assessed our proposed KLPT against various STB implementations on both synthetic and real experimental datasets. For the synthetic dataset depicting a turbulent cylinder wake flow at Re = 3900, we focused on studying the effects of particle density, time separation, and image noise. KLPT outperformed STB in all cases by tracking more particles and producing more accurate particle fields. This performance gain, compared to STB, is more prominent for the dataset with larger seeding density, time separa-
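As a toy illustration of the tracking-by-detection idea, learning a map from image-space observations back to particle positions with kernel methods, the following kernel ridge regression sketch should not be read as the KLPT algorithm itself; the trajectory, kernel, and parameters are all assumptions:

```python
import numpy as np

def rbf_kernel(A, B, gamma=10.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)

# Toy training set: true particle positions y along a smooth trajectory, and
# noisy image-space observations x of those positions.
t = np.linspace(0, 1, 80)[:, None]
y = np.hstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])  # positions
x = y + 0.02 * rng.standard_normal(y.shape)                    # noisy "images"

# Kernel ridge regression: alpha = (K + lam*I)^{-1} Y.
lam = 1e-3
K = rbf_kernel(x, x)
alpha = np.linalg.solve(K + lam * np.eye(len(x)), y)

# "Detection": map a new noisy observation back to a position estimate.
x_new = y[40] + 0.02 * rng.standard_normal(2)
y_hat = rbf_kernel(x_new[None, :], x) @ alpha
print(y_hat, y[40])
```

The appeal of this formulation is that the learning step reduces to a linear solve in the kernel matrix, which is what makes a robust learned tracker compatible with an efficient optimization procedure.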
CHAPTER 1. INTRODUCTION
Over the last 20 years, the development of computational power has made it possible to use sophisticated Monte Carlo methods in signal processing, rare-event estimation, computational biology, queuing theory, computational statistics, and other fields. In finance we have to deal with high-dimensional problems, where constraints on other techniques, such as the curse of dimensionality and computational burden, force us to look for alternative numerical techniques. Particle methods are a broad class of interacting Monte Carlo algorithms for simulating from a sequence of probability distributions satisfying a nonlinear evolution equation. These flows of probability measures can always be interpreted as the distributions of the random states of a Markov process whose transition probabilities depend on the distributions of the current random states.
An advantage of the APF not possessed by standard SMC methods is the possibility of, firstly, choosing the first-stage weights τ_{N,i}^k arbitrarily and, secondly, letting N and M_N be different (TSSPF only). Appealing to common sense, SMC methods work efficiently when the particle weights are well balanced, and Pitt and Shephard (1999a) propose several strategies for achieving this by adapting the first-stage weights. In some cases it is possible to fully adapt the filter to the model (see Section 5), providing exactly equal importance
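The role of the first-stage weights can be sketched on a toy linear-Gaussian state-space model (this is a generic two-stage illustration in the spirit of the APF, not the exact algorithm discussed above; all parameter values are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: x_k = 0.9 x_{k-1} + w_k,  y_k = x_k + v_k.
PHI, SW, SV, N, T = 0.9, 0.5, 0.3, 500, 50
xs = np.zeros(T); ys = np.zeros(T)
for k in range(1, T):
    xs[k] = PHI * xs[k - 1] + SW * rng.standard_normal()
    ys[k] = xs[k] + SV * rng.standard_normal()

def loglik(y, x):  # log of N(y; x, SV^2), up to an additive constant
    return -0.5 * ((y - x) / SV) ** 2

p = rng.standard_normal(N)  # initial particles
est = np.zeros(T)
for k in range(1, T):
    mu = PHI * p  # predicted particle locations (auxiliary variables)
    # First-stage weights: look ahead to y_k through the predicted means.
    tau = np.exp(loglik(ys[k], mu)); tau /= tau.sum()
    idx = rng.choice(N, size=N, p=tau)
    # Propagate the selected particles.
    p = PHI * p[idx] + SW * rng.standard_normal(N)
    # Second-stage weights correct for the look-ahead approximation.
    w = np.exp(loglik(ys[k], p) - loglik(ys[k], mu[idx])); w /= w.sum()
    est[k] = np.sum(w * p)

print(np.sqrt(np.mean((est - xs) ** 2)))
```

Because the first-stage weights pre-select particles compatible with the next observation, the second-stage weights stay well balanced, which is precisely the behaviour the adaptation strategies aim for.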
3 Generalized Bouncy Particle Sampler
In the BPS, the velocity changes deterministically at event times. However, we find that the velocity can instead be changed to other directions at event times, according to some distribution, which incorporates the randomness of the reference Poisson process in the BPS to overcome reducibility. In this section, we generalize the BPS. Specifically, at each event time, we decompose the velocity according to the gradient of log π(x), flip the parallel subvector, and resample the orthogonal subvector with respect to some distribution. The details are as follows:
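The event-time update just described can be sketched as follows, with a Gaussian resampling distribution for the orthogonal subvector assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def gbps_velocity_update(v, grad):
    """Event-time velocity update: decompose v along the gradient of log pi(x),
    flip the parallel component, and resample the orthogonal component
    (here from a Gaussian projected off the gradient direction)."""
    g = grad / np.linalg.norm(grad)
    v_par = (v @ g) * g          # component parallel to the gradient
    # Resample the orthogonal part: fresh Gaussian, projected off g.
    z = rng.standard_normal(v.shape)
    z_orth = z - (z @ g) * g
    return -v_par + z_orth

v = np.array([1.0, 0.5, -0.2])
grad = np.array([0.3, -1.0, 0.8])  # gradient of log pi at the event point
v_new = gbps_velocity_update(v, grad)
g = grad / np.linalg.norm(grad)
print(v_new @ g, v @ g)  # the parallel component has flipped sign
```

The flip preserves the BPS bounce along the gradient, while the resampled orthogonal component injects the randomness that removes reducibility.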
4. Materials and methods
For this study, samples from a large batch of spent PCBs (485 kg) were ground to 30 mm, then to 10 mm, using an industrial cutting mill. The sample was then quartered with a rotary divider and further divided with a riffle splitter to obtain 4 kg sub-samples with a fraction < 10 mm. These fractions were used to produce three samples of 4 kg. These were shredded with a lab knife mill (Retsch SM2000), the first to obtain a sample < 2 mm, the second to obtain a sample < 750 µm, and the third also to 750 µm before being milled with a universal grinder (FL1 Poittemille) to obtain a sample < 200 µm. The samples were characterized by aqua regia digestion (solid/liquid ratio 1:10, contact time 2 h under reflux; leaching fraction filtered at 0.45 µm) and chemical analysis using flame atomic absorption spectrometry (Varian SpectrAA-300). Three samples with different particle sizes (< 2 mm, < 750 µm, < 200 µm) and three subsamples with different masses for the digestion procedure (0.5 g, 2 g and 5 g) were studied. Combining the two parameters produced 9 different conditions. Each condition was repeated at least 16 times.
The HIV-1 auxiliary protein Vpr and Vpr-fusion proteins can be copackaged with the Gag precursor (Pr55Gag) into virions or membrane-enveloped virus-like particles (VLP). Taking advantage of this property, we developed a simple and sensitive method to evaluate potential inhibitors of HIV-1 assembly in a living cell system. Two proteins were coexpressed in recombinant baculovirus-infected Sf9 cells: Pr55Gag, which formed the VLP backbone, and luciferase fused to the N-terminus of Vpr (LucVpr). VLP-encapsidated LucVpr retained the enzymatic activity of free luciferase. The levels of luciferase activity present in the pelletable fraction recovered from the culture medium correlated with the amounts of extracellular VLP released by Sf9 cells assayed by conventional immunological methods. Our luciferase-based assay was then applied to the characterization of betulinic acid (BA) derivatives that differed from the lead compound PA-457 (or DSB) by their substituent on carbon-28. The beta-alanine-conjugated and lysine-conjugated DSB could not be evaluated for their antiviral potential due to their high cytotoxicity, whereas two other compounds with lesser cytotoxicity, glycine-conjugated and e-NH-Boc-lysine-conjugated DSB, exerted a dose-dependent negative effect on VLP assembly and budding. A fifth compound with low cytotoxicity, EP-39 (ethylene diamine-conjugated DSB), showed a novel type of antiviral effect. EP-39 provoked an aberrant assembly of VLP, resulting in nonenveloped, morula-like particles of 100 nm in diameter. Each morula was composed of nanoparticle subunits of 20 nm in diameter, which possibly mimicked transient intermediates of the HIV-1 Gag assembly process. Chemical cross-linking in situ suggested that EP-39 favored the formation and/or persistence of Pr55Gag trimers over other oligomeric species. EP-39 thus showed a novel type of negative effect on HIV-1 assembly, targeting Pr55Gag oligomerisation.
The biological effect of EP-39 underlined the critical role of the nature of the side chain at position 28 of BA derivatives in their anti-HIV-1 activity.
Université Paris-Est Marne-La-Vallée (UPEM), Laboratoire MSME, Equipe TCM, Marne-La-Vallée, France
Dense fluid-particle flow occurs in many industrial applications, such as fluidized bed technology. To model these flows, statistical approaches are being developed and, since quite recently, particle-resolved simulations may be used to support the validation and development of models. The viscous penalty method is used here to track moving solid particles, coupled with Direct Numerical Simulation (DNS) of the interstitial fluid flow. Particle-particle collisions are taken into account by the Discrete Element Method (DEM), as are lubrication forces. 3D direct numerical simulations of a periodic Couette flow were performed for finite Stokes numbers and moderate Reynolds numbers, with solid volume fractions ranging from 5 to 30%. The results show particle accumulation, at the centre or at the wall, according to the Stokes number and particle volume fraction. The production, diffusive, collisional and fluid interaction terms are analyzed for the momentum equation and the particle kinetic stress equation as well.
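The DEM collision treatment mentioned above is often built on a spring-dashpot normal contact law; a minimal sketch of such a closure for two spheres is given below (the stiffness and damping values are assumptions for illustration, not parameters from this study):

```python
import numpy as np

def dem_normal_force(x1, x2, v1, v2, radius, kn=1.0e4, eta=5.0):
    """Linear spring-dashpot normal contact force on particle 1
    (kn: spring stiffness, eta: damping coefficient; both illustrative)."""
    d = x1 - x2
    dist = np.linalg.norm(d)
    overlap = 2.0 * radius - dist
    if overlap <= 0.0:
        return np.zeros_like(d)           # no contact
    n = d / dist                          # unit normal, from particle 2 to 1
    vn = np.dot(v1 - v2, n)               # normal relative velocity
    return (kn * overlap - eta * vn) * n  # repulsive spring + viscous damping

# Two equal spheres (radius 1 cm) slightly overlapping and approaching.
f = dem_normal_force(np.array([0.0, 0.0, 0.0]),
                     np.array([0.018, 0.0, 0.0]),
                     np.array([-0.1, 0.0, 0.0]),
                     np.array([0.1, 0.0, 0.0]),
                     radius=0.01)
print(f)
```

In a full simulation this pairwise force enters the particle momentum balance alongside the hydrodynamic and lubrication contributions resolved by the DNS.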
cake from one or more TEM images is ongoing. The principle consists of using a moving reconstruction volume whose thickness equals that of the TEM image, centred on the vertical position of the particle that is added during the particle-by-particle addition reconstruction scheme. Hence, a volume larger than the simulated slice is built in the end, in such a way that every slice centred on every particle inside the system has a projected image whose 2-point covariance function matches the measured values. Using an elevation-invariant 2-point covariance function, or an elevation-variant function, the technique is able to reconstruct not just one single cake slice, but a whole ultrafiltration cake. Moreover, the technique that is developed here with spheres can readily be applied using other particle shapes (e.g., polyhedra) and distributions of particle size. Hence, with such a technique, which is quite simple to implement, one should be able to simulate a wide variety of multiscale two-phase materials. It is worth noting that the 3D simulation technique presented here could also be applied to multi-phase systems, using inter-phase 2-point covariance functions.
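The 2-point covariance function that drives such a reconstruction can be estimated efficiently from a binary (two-phase) image by FFT autocorrelation; the sketch below uses a random synthetic image and assumes periodic boundaries:

```python
import numpy as np

rng = np.random.default_rng(4)

# Binary two-phase image (1 = particle phase), standing in for a projected slice.
img = (rng.random((128, 128)) < 0.3).astype(float)

# 2-point covariance S2(r): probability that two points separated by r both
# fall in the particle phase, via the Wiener-Khinchin autocorrelation.
F = np.fft.fft2(img)
S2 = np.fft.ifft2(F * np.conj(F)).real / img.size

phi = img.mean()  # particle-phase volume fraction
print(S2[0, 0], phi, phi ** 2)  # S2(0) = phi; S2(r) -> phi^2 at large r
```

Matching this measured S2 during particle-by-particle addition is what constrains the reconstructed microstructure; the two limits S2(0) = φ and S2(∞) = φ² provide a quick sanity check on the estimator.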
3.1. Experimental Device and Procedure
The experiments were performed at room temperature in a cylindrical pipe 85.2 cm in height and 9.2 cm in inner diameter (Figure 1). The pipe was connected to an air supply system at the base, which delivered mean air flow velocities of up to 7.5 m/s through a basal porous plate. It contained three sensors at different heights to measure the air pressure, as well as a system to inject particles incrementally at the top (Figure 1). The upper part of the pipe was closed with a cap consisting of a grid, with different mesh sizes according to the different particle grain sizes used in the experiments. A propeller prevented possible accumulation of particles below the grid, and tests showed that it had a negligible influence on the results, including the pressure measurements. The experiments were observed with a Photron Fastcam SA3 high-speed video camera at 500–2,000 frames/s and with a maximum resolution of 1,024 × 1,024 pixels. The recording time of single videos was between 10 and 30 s. The particles were nearly spherical glass beads of density ~2,500 kg/m³ and of different grain size ranges according to the procedure we used (Table 1). Ideally, only one grain size for a given mixture should have been used, which was practically impossible. In consequence, we sieved commercial batches of glass beads to narrow size ranges, which are given in
The present work improves the understanding of the hydrodynamics in dense upward-flowing gas-particle suspensions by quantitative characterisation of single-particle motion in the solids flow. We describe the particle motion in a vertical tube of the same diameter as those used in a solar receiver, investigated through the Positron Emission Particle Tracking (PEPT) measurement technique, providing a comprehensive description of how the particles behave inside the uplift transport tubes. Since the movement of the particles controls the effective thermal conductivity of the suspension, the particle velocities were determined under a range of conditions in order to quantify the exchange rate between the wall and the bulk of the suspension.
2. Two-point particle correlation functions
In order to characterize the particle temperature distribution, particle-particle (also referred to as two-point particle) correlation functions are investigated. Two-point correlation functions are computed using three different methods. They are briefly explained and formalized here in the frame of homogeneous, isotropic, and statistically stationary conditions, where particle properties (velocity or temperature) reduce to fluctuations around a zero mean. The first method accounts for the product of the temperatures of any pair of particles m and n (with m ≠ n) separated by a distance r. The spatial correlation function may then be written as:
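In standard notation, this first method corresponds to estimating R(r) = ⟨θ'_m θ'_n⟩ over all pairs m ≠ n whose separation |x_m − x_n| falls in a bin around r (the exact symbols in the original equation may differ). A minimal pair-binning estimator, applied to a synthetic particle cloud with an assumed correlated temperature fluctuation, reads:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic particle cloud with a spatially correlated, zero-mean temperature
# fluctuation theta', so that nearby particles carry similar temperatures.
N = 800
pos = rng.random((N, 3))                 # positions in a unit box
theta = np.sin(2 * np.pi * pos[:, 0])    # correlated fluctuation field
theta -= theta.mean()

# Pairwise separations and temperature products for all m != n.
dx = pos[:, None, :] - pos[None, :, :]
r_mn = np.sqrt((dx ** 2).sum(-1))
prod = theta[:, None] * theta[None, :]
offdiag = ~np.eye(N, dtype=bool)

# Average theta'_m * theta'_n over pairs whose separation falls in each bin.
bins = np.linspace(0.0, 0.5, 11)
R = np.empty(len(bins) - 1)
for i in range(len(bins) - 1):
    sel = offdiag & (r_mn >= bins[i]) & (r_mn < bins[i + 1])
    R[i] = prod[sel].mean()

print(R / theta.var())  # ~1 at small r, decaying with increasing r
```

The O(N²) pair loop is the direct transcription of the definition; the paper's other two methods correspond to cheaper estimators of the same quantity.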