Pre-processing

Fibre extraction from oleaginous flax for technical textile applications: influence of pre-processing parameters on fibre extraction yield, size distribution and mechanical properties

The harvesting of oleaginous flax does not allow the straw to be processed with the same techniques as those used for textile flax. The straws are mown and directly taken up by the combine harvester, which separates the seeds from the straws with its integrated threshers. The straws are therefore subjected to mechanical loading during the beating phase. At the end of the threshing phase, the straws fall steadily from the combine harvester and form a windrow of randomly oriented stems. The straws can then be left in the field so that dew-retting by soil microorganisms can take place. This pre-processing stage, which is well known and documented for textile flax, cannot be performed with the same protocol for linseed flax, as the straws are not aligned and well distributed on the ground. During dew-retting, the contact with the soil, and therefore with the microorganisms, is not the same for all the pieces of straw. Even if the windrow could be turned over time, the evenness of the dew-retting would remain questionable. Moreover, as the fibres are not aligned within the windrow, the linseed flax stems are packed with random orientations in large bales of about 200 kg each, and the stems cannot be aligned as they are in the traditional scutching and hackling route usually employed to separate the different vegetal fractions of the plant for textile flax. As a consequence, an "all fibre" device has to be used. Different devices, inspired by the paper industry, are generally used. However, these devices are often very aggressive towards the fibres, and they may lead to the appearance of defects such as dislocations within the fibres, as is the case during the extraction of hemp fibres [10].

Pre-processing of area and mobile sources of the Pacific 93 emissions inventory

2.1 Emissions Processing and Creation of AMS and AFS Database Files. Figure 1 describes some of the files required by EPS2.0, as well as the steps carried out in the pre-processing of the Pacific 93 inventory. The EIS2EPS program creates four sets of database-type files. The first and second types are the AMS and AFS database files, which contain the emission amounts for the domain. The AMS files hold all the area and mobile sources, while the AFS files contain the emissions for the point sources. This work was previously done in a set of queries designed in Paradox by Dr. Robert McLaren [3]. For details on the structure of the AMS and AFS files, please refer to the EPS2.0 manual.

Tree leaves extraction in natural images: Comparative study of pre-processing tools and segmentation methods

Fig. 8: Example of the over-segmentation problem for three segmentation methods: (left to right) ground truth, Snakes, MeanShift and GAC. These metrics study, respectively, the shape of the segmentation based on the analysis of contour points, and the quantity of information extracted relative to the ground truth. The observation remains the same: the performance offered by the GAC is on average higher by almost 61.9% for MAD and 12.2% for SSIM compared to the other methods. The Guided Active Contour approach therefore considerably improves the extraction of tree leaves. Nevertheless, despite its improved performance, this segmentation method has some limitations, as shown in Figures 7 and 8, with particular problems of under-segmentation. In order to overcome these defects, we propose to study the impact of a pre-processing step that defines a colour distance map.

Robust balancing of production lines : MILP models and pre-processing rules

…distribution of heads to machines satisfying all given constraints and maximising the value of the stability radius. We derive new formulas for measuring line robustness in the Manhattan and Chebyshev metrics and prove them through a series of lemmas and theorems. Two Mixed Integer Linear Programming (MILP) models are developed according to all the problem properties. To handle their complexity, we propose some enhancements based on well-known practices for line balancing problems. First, we introduce assignment intervals of tasks, together with five rules to calculate them under both of the considered metrics. Together with a reworked heuristic algorithm, they compose a global pre-processing approach which not only generates an initial solution and a lower bound, but also creates two groups of constraints for the MILP formulations. The chapter ends with computational experiments.

Application of hybrid uncertainty-clustering approach in pre-processing well-logs

In order to handle structural uncertainty in petroleum reserves, Thore et al. (2002) considered the aggregation of the side-effects of all processing and interpreting stages on the final results. The preparation of a structural model from seismic studies generally consists of six stages, each of which is a source of uncertainty in constructing the structural model: acquisition, pre-processing, stacking, migration, time-to-depth conversion and interpretation. In this paper, migration, picking and time-to-depth conversion are introduced as the dominant sources of uncertainty, and the amplitude, direction and correlation length of each are incorporated into the calculations. The article also specifies that the computation of structural uncertainties has several benefits: (i) providing a distribution of gross rock volume; (ii) defining optimal well trajectories; and (iii) reservoir history matching.

Nonparametric Pre-Processing Methods and Inference Tools for Analyzing Time-of-Flight Mass Spectrometry Data

The objective of this paper is to contribute to the methodology available for extracting and analyzing signal content from protein mass spectrometry data. Data from MALDI-TOF or SELDI-TOF spectra require considerable signal pre-processing, such as noise removal and baseline error correction. After removing the noise with an invariant wavelet transform, we develop a background correction method based on penalized spline quantile regression and apply it to MALDI-TOF (matrix-assisted laser desorption/ionization time-of-flight) spectra obtained from serum samples. The results show that the wavelet transform technique combined with nonparametric quantile regression can handle all kinds of background and low signal-to-background-ratio spectra; it requires no prior knowledge of the spectral composition, no selection of suitable background correction points, and no mathematical assumption about the background distribution. We further present a novel multi-scale spectral alignment methodology, useful in a functional analysis-of-variance method for identifying proteins that are differentially expressed between different tissue types. Our approaches are compared with several existing approaches from the recent literature and are tested on simulated and real data. The results indicate that the proposed schemes enable accurate diagnosis based on the over-expression of a small number of identified proteins with high sensitivity.
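The baseline-correction idea described above — fitting a background that tracks a low quantile of the signal, so that peaks are largely ignored — can be illustrated with a much cruder stand-in: a rolling-quantile baseline. The sketch below is numpy-only; the window size, quantile level and synthetic spectrum are illustrative assumptions, not the authors' penalized-spline quantile-regression method.

```python
import numpy as np

def rolling_quantile_baseline(y, window=101, q=0.10):
    """Estimate a slowly varying baseline as a running low quantile.

    A crude stand-in for penalized-spline quantile regression: at each
    point the baseline is the q-th quantile of the signal in a centered
    window, so narrow peaks (which sit above the baseline) barely move it.
    """
    half = window // 2
    padded = np.pad(y, half, mode="edge")
    return np.array([
        np.quantile(padded[i:i + window], q) for i in range(len(y))
    ])

# Synthetic spectrum: drifting background + two narrow peaks + noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 1000)
background = 5.0 + 0.5 * x
peaks = 8.0 * np.exp(-((x - 3) ** 2) / 0.01) + 6.0 * np.exp(-((x - 7) ** 2) / 0.01)
y = background + peaks + rng.normal(0, 0.2, x.size)

corrected = y - rolling_quantile_baseline(y)
```

After subtraction, the off-peak signal sits near zero while the peak heights are essentially preserved, which is the behaviour the paper requires of a background model that makes no distributional assumption.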

A Pre-processing Algorithm Utilizing a Paired CRLB for TDoA Based IoT Positioning

As shown in Figure 3a, the simulation result shows a noticeable reduction in the localization error medians when using the paired CRLB pre-processing algorithm. Accordingly, as shown in Figure 3b, the CDF curves obtained for all the noise values preserve the same performance ranking throughout the simulations, with 50% of the error values less than 200 m and 230 m with the proposed algorithm turned ON and OFF, respectively. This indicates that the proposed method is robust to high noise variances.

Fibre extraction from oleaginous flax for technical textile applications: influence of pre-processing parameters on fibre extraction yield, size distribution and mechanical properties

(Table 3 excerpt) RS: 37.8, 52.4, 9.8 — NRS: 39.2, 55.3, 5.5. Table 3 indicates, for the four considered batches, the respective amounts of the constituents extracted from the oleaginous flax stems. More than half of the stem mass consists of shives; the values lie in a 52-58% range, depending on the initial pre-processing treatment imposed on the stems. One can assume that the dispersion of the results may be relatively large, because they may depend on the way the stems are selected from the bales and on the exact degree of shive extraction from the lap. As the criterion is visual, some variation may occur, and some small pieces of shives or dust particles may still be part of the laps, even if only in a very small proportion.

The Role of Data Pre-Processing in Intelligent Data Analysis

Key Words: Data Analysis, Data Pre-processing, Induction, Machine Learning Applications
Abstract: This paper first provides a brief overview of some frequently encountered real-world problems in data analysis. These are problems that have to be solved through data pre-processing so that the nature of the data is better understood and the data analysis is performed more accurately and efficiently. The architecture of a data analysis tool for which a data pre-processing mechanism has been developed and tested is also explained. An example is then given of the use of this data pre-processing mechanism for two purposes: (i) to filter out a set of semiconductor data, and (ii) to find out more about the nature of these data and make the induction process more efficient.

HVS based perceptual pre-processing for video coding

The coding loop is the most critical part of today's real-time encoders, due to the amount of computation involved in coding decisions and reconstruction together. It is therefore highly desirable to implement processing outside this loop, using the source video, in order to ease encoding. Pipelining the various modules finally allows the full encoder to be designed. Pre-processing is one relevant way to achieve perceptual optimization outside the video coding loop. It aims to remove visually redundant noise and high frequencies from the source video to ease compression and improve coding efficiency at a given video quality level.
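As a minimal illustration of such out-of-loop pre-processing, the sketch below applies a separable Gaussian low-pass filter to a frame before it would reach the encoder. The filter strength is an assumed parameter, and a real perceptual pre-filter adapts locally to HVS sensitivity rather than blurring uniformly; this only shows the principle of attenuating high frequencies outside the coding loop.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    # Discrete 1-D Gaussian, normalized to sum to 1.
    if radius is None:
        radius = max(1, int(3 * sigma))
    t = np.arange(-radius, radius + 1)
    k = np.exp(-(t ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def prefilter(frame, sigma=1.0):
    """Separable Gaussian low-pass applied outside the coding loop.

    Attenuates the high spatial frequencies the encoder would otherwise
    spend bits on, leaving the coding loop itself untouched.
    (sigma is an illustrative choice, not a value from the paper.)
    """
    k = gaussian_kernel1d(sigma)
    # Filter rows, then columns (separability of the Gaussian).
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, frame)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out
```

On a noisy frame the filtered output has markedly lower variance, i.e. less high-frequency energy for the encoder to code, which is exactly the effect the pre-processing stage targets.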

A Pre-processing Composition for Secret Key Recovery on Android Smartphone

4 Conclusion and Perspectives. In this paper, we have proposed an efficient pre-processing composition to mount a powerful SSCA on RSA and ECC. We applied the proposed attack to recover the secret keys on an Android smartphone clocked at a high frequency (832 MHz). We remind the reader that, in practice, the higher the frequency, the harder the attack, as more noise is generated and more sophisticated equipment is needed. Our scheme succeeded in recovering the secret keys from a single waveform. We therefore conclude that our technique is particularly efficient at performing pattern discrimination, as it deals with both types of noise (measurement and algorithmic) and both domain representations (time and frequency). The proposed attack is applied directly to baseband traces. Hence, we expect to further enhance our analysis with a Software-Defined Radio (SDR) demodulator. Future work will apply our attack to devices with higher-frequency, multi-core CPUs.

Wavelet Decomposition Pre-processing for Spatial Scalability Video Compression Scheme

As part of the Joint Video Exploration Team (JVET) effort, E. Thomas et al. proposed a new scalable scheme [5], based on polyphase sub-sampling performed prior to encoding, achieving ×2 spatial scalability with a single HEVC encoder instance and thus greatly reducing the coding complexity compared to SHVC. In this paper, we propose several improvements to the polyphase sub-sampling pre-processing technique. The first consists of correcting the phase difference between the chroma planes of the resulting sub-resolution images by introducing a simple chroma filtering process. In order to improve the visual quality of the output video layers and avoid the potential aliasing introduced by polyphase sub-sampling, we also propose a different decomposition step based on well-known wavelet kernels modified to fit into the scalable coding chain. The proposed solution achieves rate-distortion performance similar to SHVC with a 50% reduction in coding complexity.
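The basic polyphase sub-sampling that the scheme builds on can be sketched as follows. This shows only the lossless split of a frame into four half-resolution components and its exact inverse (function names are illustrative), not the chroma-phase correction or wavelet decomposition proposed in the paper.

```python
import numpy as np

def polyphase_split(frame):
    """Split a frame into four half-resolution polyphase components.

    Each component keeps every other pixel, offset by (0,0), (0,1),
    (1,0) or (1,1); together they contain every pixel of the source,
    so the full-resolution frame is exactly recoverable.
    """
    return [frame[i::2, j::2] for i in (0, 1) for j in (0, 1)]

def polyphase_merge(components):
    # Inverse of polyphase_split: interleave the four components.
    c00, c01, c10, c11 = components
    h, w = c00.shape
    out = np.empty((2 * h, 2 * w), dtype=c00.dtype)
    out[0::2, 0::2] = c00
    out[0::2, 1::2] = c01
    out[1::2, 0::2] = c10
    out[1::2, 1::2] = c11
    return out
```

Because the split is a pure rearrangement of pixels, any one component can serve as a half-resolution base layer while the others act as enhancement data, which is what enables ×2 spatial scalability with a single encoder instance.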

The Alpage Architecture at the SANCL 2012 Shared Task: Robust Pre-Processing and Lexical Bridging for User-Generated Content Parsing

1. An OntoNotes/PTB token normalization stage is applied. Neutral quotes are disambiguated, following (Wagner et al., 2007).
2. We then apply several regular-expression-based grammars taken from the SxPipe pre-processing chain (Sagot and Boullier, 2008) to detect smileys, URLs, e-mail addresses and similar entities, in order to treat each of them as a single token even if it contains whitespace. SxPipe is able to keep track of the original tokenization, which is required for restoring it at the end of the process.
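The entity-protection step above can be sketched as follows. The patterns and placeholder scheme below are illustrative assumptions in the spirit of SxPipe, not its actual grammars, but they show how matched entities survive whitespace tokenization as single tokens and how the original strings are restored afterwards.

```python
import re

# Illustrative patterns; real SxPipe grammars are far richer.
# URLs are matched first so an e-mail-like substring inside a URL
# is not matched separately.
PATTERNS = [
    ("URL",    re.compile(r"https?://\S+")),
    ("EMAIL",  re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")),
    ("SMILEY", re.compile(r"[:;=]-?[)(DPp]")),
]

def protect_entities(text):
    """Replace each matched entity with a placeholder token and remember
    the original string, so tokenization can be undone at the end."""
    saved = []
    for name, pat in PATTERNS:
        def repl(m, name=name):
            saved.append(m.group(0))
            return f"__{name}_{len(saved) - 1}__"
        text = pat.sub(repl, text)
    return text, saved

def restore_entities(tokens, saved):
    # Map placeholder tokens back to their original surface forms.
    out = []
    for tok in tokens:
        m = re.fullmatch(r"__\w+_(\d+)__", tok)
        out.append(saved[int(m.group(1))] if m else tok)
    return out
```

Running `protect_entities` before a naive whitespace split keeps each URL, e-mail address and smiley as one token, and `restore_entities` recovers the original strings, mirroring SxPipe's ability to restore the original tokenization at the end of the process.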

Data Pre-Processing and Intelligent Data Analysis

Semiconductor wafer manufacture consists of four main operations performed several times over: growth or deposition, patterning or photolithography, etching, and diffusion or implantation. Each operation consists of multiple steps during which the wafer is subjected to specific physical and chemical conditions according to a recipe. Testing the unfinished product between manufacturing steps is expensive and difficult, and reworking a bad product is almost impossible. This leads to two problems. First, when a problem occurs at a particular step, it may go undetected until the final test is performed, thereby tying up downstream processing on a product that has already been doomed to be scrapped. Second, when the final test indicates that a product is of bad quality, it is usually difficult to determine which single step in the manufacturing process is the source of the problem.

Pre-Processing by a Cost-Sensitive Literal Reduction Algorithm

((short(C), closed(C)); (len1(T, 4), u_shaped(C), has_load1(T, circle))). The above Prolog program was the entry for the first competition. This program has a complexity of 19 units, which shows that the cost of the decision tree (18 units) is only an approximation of the cost of the corresponding Prolog program, since some Prolog code needs to be added to assemble the Prolog fragments into a working whole. This extra code means that the sum of the sizes of the fragments is less than the size of the whole program. It is also sometimes possible to subtract some code from the whole, because there may be some overlap between the code in the fragments. The ideal solution to this problem would be to add a post-processing module to RL-ICET that automatically converts the decision trees into Prolog programs. The complexity could then be calculated directly from the output Prolog program, instead of from the decision tree. Although post-processing with RL-ICET was done manually, it could be automated, as demonstrated by LINUS, which has a general-purpose post-processor.

On Processing Extreme Data

The ADMIRE [4] platform is a software stack that implements an architecture in which a wide range of data analytics tools are connected together through specific interaction points known as gateways. Although this approach has delivered some success, three years after project completion it has not gained significant traction in the wider area of data analysis. One of the reasons might be that ADMIRE uses technologies such as REST and WSDL, which makes the architecture seem better suited to cloud/cluster analytics than to HPC data analytics at exascale. Weka [99] is a collection of machine learning algorithms for data mining tasks. It provides a variety of tools, such as data pre-processing, classification and visualization, which can be used directly or called from Java code. The fact that it is a Java library limits its suitability for HPC. Distributed versions of the Weka framework have been developed (e.g., Weka4WS [94]), but they are suitable for small- or medium-size Grids or Cloud platforms, not for extreme computing systems where a massive degree of parallelism must be exploited.


Implementing a digital infrastructure with ERP: a case study

This case study demonstrates how a wholesale conversion to digital technology, through a world-class ERP system, is essential for the company to meet its challenges in productivity, production, shipping, customer service, order entry, accounting and operations. Now that the infrastructure is in place, HERD is working to customize it with advanced technologies that take the customer experience to new heights, by bringing online a store where buyers can configure, track and modify their orders. This store is linked to a customized digital scheduling system.

Certifiable Pre-Play Communication: Full Disclosure

Abstract. This article asks when communication with certifiable information leads to complete information sharing. We consider Bayesian games augmented by a pre-play communication phase in which announcements are made publicly. We characterize the augmented games in which there exists a full-disclosure sequential equilibrium with extremal beliefs (i.e., any deviation is attributed to a single type of the deviator). This characterization enables us to provide different sets of sufficient conditions for full information disclosure that encompass and extend all known results in the literature, and that are easily applicable. We use these conditions to obtain new insights into sender-receiver games, games with strategic complementarities, and voting with deliberation.

Buckling patterns on pre-stretched bilayer shells

We focus on the wrinkle-to-ridge transition, where we find that pre-stretch plays an essential role in ridge formation, and that the ridge geometry (width, height)…
