Non regression testing for the JOREK code

Set of equations. Some of the variables modelled in the JOREK code are: the poloidal flux Ψ, the electric potential u, the toroidal current density j, the toroidal vorticity ω, the density ρ, the temperature T, and the velocity v parallel to the magnetic field lines. Depending on the model chosen (hereafter denoted model, a simulation parameter), the number of variables and the number of equations governing them are set. At every time step, this set of reduced MHD equations is solved in weak form as a large sparse implicit system. The fully implicit method leads to very large sparse matrices. This approach has several benefits: there is no a priori limit on the time step, and the numerical core adapts easily to the physics modelled (whereas semi-implicit methods rely on additional hypotheses). It also has drawbacks: high computational cost and high memory consumption for the parallel direct sparse solver (PaStiX or others).
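
To illustrate why a fully implicit scheme trades an unrestricted time step for a sparse solve at every step, here is a minimal sketch (not JOREK's actual assembly; the toy operator, backward-Euler scheme and SciPy's SuperLU solver stand in for the real reduced-MHD system and a parallel solver such as PaStiX):

```python
# Minimal sketch: one fully implicit time step reduces to solving a large sparse
# linear system A x = b with a direct solver.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def implicit_step(u_old, A_op, dt):
    """Solve (I - dt*A) u_new = u_old for one backward-Euler step."""
    n = u_old.size
    system = sp.identity(n, format="csr") - dt * A_op
    return spla.spsolve(system.tocsc(), u_old)

# toy 1D Laplacian standing in for the (linearised) spatial operator
n = 1000
lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr")
u0 = np.random.rand(n)
u1 = implicit_step(u0, lap, dt=0.1)  # no a priori limit on dt, but one sparse solve per step
```

The sparse solve is exactly where the high computational and memory cost mentioned above comes from.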

Empirical Investigation of the Web Browser Attack Surface under Cross-Site Scripting: an Urgent Need for Systematic Security Regression Testing

the PHP code of five web applications. XSS attacks were used to kill the mutants. In their study, they do not consider the impact of the browser on the efficiency of an XSS vector, which introduces a bias in their experiments. They also used similar sources for the XSS vectors and applied them without adapting them to the specific injection point, which biases the measured efficiency of the attacks. Attacks should be tailored to the injection point to be effective, as in the approach of Duchene et al. [22]; otherwise, depending on the injection point, an XSS attack can be rendered useless (while with the same vector, an attacker could succeed). Most XSS research works focus either on the detection of XSS attacks [1],[3] or on finding XSS vulnerabilities [23],[24]. Other related papers study XSS vulnerabilities or XSS worms [25],[26]. A state of the art on XSS issues and countermeasures is available in [25]. Neglecting the influence of charset, doctype and browser behavior in an XSS attack can lead to false positives in web application vulnerability scanners. Some testing strategies rely on a single instrumented web browser [22],[27] to assess XSS vulnerabilities, thus ignoring vulnerabilities related to XSS vectors bound to a specific web browser. The only exception on this topic is the Xenotix XSS testing tool [28], which embeds three different browser engines (Trident from IE, WebKit from Chrome/Safari and Gecko from Firefox) to deal with browser-specific XSS vectors.
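
To make the "tailor the vector to the injection point" argument concrete, here is a small illustrative sketch (the payloads are the classic alert(1) probe and the vulnerable template is hypothetical, none of this is taken from the cited papers):

```python
# Illustration: the same vector must be adapted to its injection context, otherwise
# it is neutralised by the surrounding HTML.
context_tailored_vectors = {
    # reflected directly inside the page body
    "html_body": "<script>alert(1)</script>",
    # reflected inside a quoted attribute value: must first break out of the attribute
    "attribute_value": '" onmouseover="alert(1)',
    # reflected inside an existing <script> string literal: must escape the literal
    "script_string": "';alert(1);//",
}

def render_search_page(query: str) -> str:
    """Toy vulnerable template: the query is reflected inside an attribute value."""
    return f'<input type="text" value="{query}">'

# The body-context payload stays inert inside the attribute (no quote to break out),
# while the tailored payload escapes the attribute and injects an event handler.
print(render_search_page(context_tailored_vectors["html_body"]))
print(render_search_page(context_tailored_vectors["attribute_value"]))
```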

The Seemingly Unrelated Dynamic Cointegration Regression Model and Testing for Purchasing Power Parity

Keywords: stacked regressions, efficient estimation, purchasing power parity, cointegration. ABSTRACT: This paper studies seemingly unrelated linear models with integrated regressors and stationary errors. By adding leads and lags of the first differences of the regressors and estimating this augmented dynamic regression model by feasible generalized least squares using the long-run covariance matrix, we obtain an efficient estimator of the cointegrating vector that has a limiting mixed normal distribution. Simulation results suggest that this new estimator compares favorably with others already proposed in the literature. We apply these new estimators to the testing of purchasing power parity (PPP) among the G-7 countries. The test based on the efficient estimates rejects the PPP hypothesis for most countries.
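
To make the "leads and lags" construction concrete, a generic DOLS-type augmentation of equation i of the system can be written as follows (the notation and the number of leads/lags p are illustrative, not taken from the paper):

\[
y_{it} \;=\; \alpha_i + \beta_i' x_{it} + \sum_{j=-p}^{p} \gamma_{ij}'\,\Delta x_{i,t+j} + u_{it},
\qquad i = 1,\dots,N,
\]

where the N equations are then estimated jointly by feasible GLS using an estimate of the long-run covariance matrix of the errors \((u_{1t},\dots,u_{Nt})\).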

Minimax rate of testing in sparse linear regression

R_{s,Aλ} ≤ ε, (3) and (ii) (lower bound) for any ε ∈ (0, 1) there exists a_ε > 0 such that, for all 0 < A < a_ε, R_{s,Aλ} ≥ 1 − ε. (4) Note that the rate λ defined in this way is a non-asymptotic minimax rate of testing, as opposed to the classical asymptotic definition that can be found, for example, in Ingster and Suslina (2003). It is shown in Collier, Comminges and Tsybakov (2017) that when X is the identity matrix and p = n (which corresponds to the Gaussian sequence model), the non-asymptotic minimax rate of testing on the class B_0(s) with respect to the ℓ_2-distance has the following form:
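
For context, conditions (3)-(4) are stated in terms of the minimax testing risk. A standard formulation consistent with them (the notation is reconstructed here and may differ slightly from the paper's exact definition) is

\[
R_{s,r} \;=\; \inf_{\Delta}\Big\{\, \mathbf{P}_0(\Delta = 1) \;+\; \sup_{\theta \in B_0(s)\,:\, \|\theta\|_2 \ge r} \mathbf{P}_\theta(\Delta = 0) \Big\},
\]

where the infimum is over all tests Δ, B_0(s) is the set of s-sparse vectors, and r = Aλ is the separation radius: λ is a non-asymptotic minimax rate of testing if the risk can be made arbitrarily small by taking A large (upper bound (3)) and stays close to 1 for A small (lower bound (4)).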

Source-Code Level Regression Test Selection: the Model-Driven Way

a. ICAM, Nantes, France b. Naomod Team, Université de Nantes, LS2N (UMR CNRS 6004) c. Naomod Team, IMT Atlantique, LS2N (UMR CNRS 6004) Abstract: In order to ensure that existing functionalities have not been impacted by recent program changes, test cases are regularly executed during regression testing (RT) phases. The RT time becomes problematic as the number of test cases grows. Regression test selection (RTS) aims at running only the test cases that have been impacted by recent changes. RTS reduces the duration of regression testing and hence its cost. In this paper, we present a model-driven approach for RTS. Execution traces are gathered at runtime and injected into a static source-code model. We use the resulting model to identify and select all the test cases that have been impacted by changes between two revisions of the program. Our MDE approach allows modularity in the granularity of the changes considered. In addition, it offers better reusability than existing RTS techniques: the trace model is persistent and standardised. Furthermore, it enables more interoperability with other model-driven tools, enabling further analysis at different levels of abstraction (e.g. energy consumption).
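
The selection principle behind trace-based RTS is simple; the sketch below illustrates it (hypothetical test names and program elements, and a plain dictionary standing in for the paper's persistent trace model):

```python
# A test is re-run only if its recorded execution trace touches at least one
# program element that changed between the two revisions.
from typing import Dict, Set

def select_impacted_tests(coverage: Dict[str, Set[str]],
                          changed_elements: Set[str]) -> Set[str]:
    """Return the tests whose trace intersects the set of changed elements."""
    return {test for test, covered in coverage.items()
            if covered & changed_elements}

# hypothetical traces gathered at runtime for revision N
coverage = {
    "test_login":    {"Auth.check", "Session.open"},
    "test_checkout": {"Cart.total", "Payment.charge"},
    "test_search":   {"Catalog.query"},
}
# elements modified between revisions N and N+1 (e.g. from a diff of the source model)
changed = {"Payment.charge"}

print(select_impacted_tests(coverage, changed))  # {'test_checkout'}
```

The model-driven contribution of the paper is in how the traces and the source code are represented (as standardised, persistent models), not in this selection rule itself.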

Challenges & Opportunities in Low-Code Testing

Challenges & Opportunities in Low-Code Testing

5.2.1 Previous Attempts. MBT is a growing research field and many papers in this domain are published each year. The latest mapping study on MBT, performed by Bernardino et al., shows that from 2006 to 2016 approximately 70 MBT supporting tools were proposed by industry and academia, some of which are open source [1]. This significant number of tools creates the opportunity to build a repository of existing MBT tools that could be analyzed for different purposes, but no such repository exists so far. 5.2.2 Opportunities. Model-based testing is addressed in many papers, but it is not specialized for the low-code context. As we mentioned earlier, low-code development platforms are based on particular DSLs, and system modeling is inherent in these platforms. Therefore, for the application of MBT in LCDPs, the first step (i.e., selection of a modeling language) is strictly imposed by the platform. Accordingly, for using MBT in the testing component of LCDPs, two modes exist:

NOTICE: A Framework for Non-functional Testing of Compilers

In some cases, these optimizations may degrade the quality of the software and deteriorate application performance over time [6]. As a consequence, compiler creators usually define fixed and program-independent optimization sequences, which are based on their experience and heuristics. For example, in GCC, we can distinguish optimization levels from O1 to O3. Each optimization level involves a fixed list of compiler optimization options and provides different trade-offs in terms of non-functional properties. Nevertheless, there is no guarantee that these optimization levels will perform well on untested architectures or for unseen applications. Thus, it is necessary to detect possible issues caused by source code changes, such as performance regressions, and help users to validate optimizations that induce performance improvements. We also note that when trying to optimize software performance, many non-functional properties and design constraints must be considered and satisfied simultaneously to better optimize the code. Several research efforts try to optimize a single criterion (usually the execution time) [7]–[9] and ignore other important non-functional properties, more precisely resource consumption properties (e.g., memory or CPU usage), that must be taken into consideration and can be equally important with respect to performance. Sometimes, improving program execution time can result in high resource usage, which may decrease system performance. For example, embedded systems for which code is generated often have limited resources. Thus, optimization techniques must be applied whenever possible to generate efficient code and improve performance (in terms of execution time) with respect to the available resources (CPU or memory usage) [10]. Therefore, it is important to construct optimization levels that represent multiple trade-offs between non-functional properties, enabling the software designer to choose among different optimal solutions which best suit the system specifications.
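
A minimal way to expose such trade-offs is to compile the same program at each level and record more than one non-functional property. The sketch below assumes a local gcc and a hypothetical benchmark file bench.c, and measures only wall-clock time and binary size (not NOTICE's actual measurement infrastructure):

```python
# Compile the same program at each optimization level and compare two
# non-functional properties to expose the trade-offs between levels.
import subprocess, time, os

LEVELS = ["-O1", "-O2", "-O3"]
results = {}

for level in LEVELS:
    binary = f"bench{level.replace('-', '_')}"
    subprocess.run(["gcc", level, "bench.c", "-o", binary], check=True)
    start = time.perf_counter()
    subprocess.run([f"./{binary}"], check=True)
    results[level] = {
        "time_s": time.perf_counter() - start,
        "size_bytes": os.path.getsize(binary),
    }

for level, metrics in results.items():
    print(level, metrics)
# A level is Pareto-optimal if no other level is better on both properties at once.
```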

Interactive Ultrasonic Field Simulation For Non-Destructive Testing

vectorized. Moreover, a single-step algorithm avoiding the unnecessary storage of temporary data and providing more work for each thread has been devised. The resulting optimized code scales well on a 2x12-core CPU. Its overall performance is around 3.5x faster than the previous reference implementation on a set of representative configurations. For the simplest ones, it reaches 20 fps.


Sparse grids for eddy-current non-destructive testing

A rich family of surrogate models consists in data-fitting: an interpolation and/or regression is established based on a pre-calculated set of simulation results, i.e., samples. Once the sample set is obtained, the subsequent data-fitting is far less expensive than the true electromagnetic simulation. Among the contributions of the last years, let us cite [1], where the authors combine a radial basis function (RBF) interpolation on optimally scattered samples and particle swarm optimisation (PSO) to efficiently solve EC-NdT inverse problems.
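
The data-fitting idea can be sketched in a few lines (the forward model below is a toy function, not an actual eddy-current simulation, and the kernel choice is arbitrary):

```python
# Fit an RBF surrogate on pre-computed samples, then evaluate it cheaply
# in place of the expensive simulation.
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_simulation(params):
    """Stand-in for the true electromagnetic solver (two input parameters)."""
    x, y = params[:, 0], params[:, 1]
    return np.sin(3 * x) * np.exp(-y**2)

rng = np.random.default_rng(0)
samples = rng.uniform(-1, 1, size=(200, 2))   # pre-calculated design of experiments
values = expensive_simulation(samples)        # costly offline evaluations

surrogate = RBFInterpolator(samples, values, kernel="thin_plate_spline")

query = rng.uniform(-1, 1, size=(5, 2))
print(surrogate(query))                       # cheap online predictions
print(expensive_simulation(query))            # reference values
```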

Provably Convergent Working Set Algorithm for Non-Convex Regularized Regression

[Fragment of a results table for the Criteo dataset (two regularization settings); the column headers are not recoverable from this excerpt.] 4 Numerical Experiments. Set-up: We now present some numerical studies showing the computational gain achieved by our approach. As inner solver and baseline algorithms, we have considered a proximal algorithm [16] and a block-coordinate descent approach [3]; they are respectively denoted GIST and BCD. They have been implemented in Python/Numpy and the code will be shared online upon publication. We have integrated those solvers into the maximum-violating constraint (MaxVC) working set approach (algorithm in the appendix) and into our approach, denoted FireWorks (for FeasIble REsidual WORKing Set). Note that for MaxVC, we add the same number of constraints to the working set as in our algorithm; this is already a better baseline than the genuine one proposed in [1]. As another baseline, we have considered a solver based on a majorization-minimization (MM) approach, which consists in iteratively minimizing a majorization of the non-convex objective function as in [17, 13, 25]. Each iteration results in a weighted Lasso problem that we solve with a Blitz-based convex proximal Lasso or BCD Lasso (up to a precision of 10^-5 for its optimality conditions). For these
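
For readers unfamiliar with working-set strategies, here is a simplified convex illustration of the maximum-violating-constraint idea (the paper targets non-convex regularizers and its own inner solvers; this sketch uses a plain Lasso subproblem solved with scikit-learn, purely to show the mechanics of growing the working set):

```python
# Working-set loop: solve the problem restricted to a small set of features,
# then add the features whose KKT conditions are most violated, and repeat.
import numpy as np
from sklearn.linear_model import Lasso

def working_set_lasso(X, y, alpha, batch=5, max_outer=20, tol=1e-6):
    n, p = X.shape
    work = []                                  # indices currently in the working set
    beta = np.zeros(p)
    for _ in range(max_outer):
        resid = y - X @ beta
        # KKT violation of excluded features: |X_j^T r| / n must not exceed alpha
        scores = np.abs(X.T @ resid) / n
        scores[work] = 0.0
        violating = np.argsort(scores)[::-1][:batch]
        violating = [j for j in violating if scores[j] > alpha + tol]
        if not violating:
            break                              # all constraints satisfied: optimal
        work = sorted(set(work) | set(violating))
        sub = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
        sub.fit(X[:, work], y)                 # subproblem restricted to the working set
        beta[:] = 0.0
        beta[work] = sub.coef_
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 500))
true = np.zeros(500); true[:5] = 2.0
y = X @ true + 0.1 * rng.standard_normal(200)
print(np.flatnonzero(working_set_lasso(X, y, alpha=0.1)))
```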

Smooth-transition regression models for non-stationary extremes

6 Conclusion. We introduce a smooth-transition generalized Pareto regression model, useful for handling the time-varying effect of risk factors on the severity distribution of financial losses. This model has the advantages of accounting explicitly for the high probability of extreme events and for a change in the effects of the explanatory variables over time. In a simulation study, we highlight the good properties of the proposed estimation and testing procedures. Then, we use our model to conduct an empirical study of the dynamics driving operational loss severity at UniCredit. We focus on connecting the severity distribution of operational losses and the past number of losses (i.e. the frequency process), with the idea that past operational events proxy the quality of internal controls. As transition variable, we use the VIX, assuming that uncertainty on the financial markets influences the link between the severity and frequency processes. We find that two different limiting mechanisms drive the severity distribution: in high-uncertainty periods, we observe that a high number of operational events is followed by less extreme losses. This result suggests a self-inhibition effect, i.e. that the monitoring and supervision processes following operational events at UniCredit mitigate the likelihood of extremes in subsequent periods. In addition, during such periods of high uncertainty, increases in the financial stability index (FSI), the Italian yield spread, and the industrial production growth rate are linked with a decrease in the likelihood of extreme losses. In light of these effects, we conjecture that these variables are proxies for an increase in risk aversion, a tight monetary policy, and the counter-cyclical nature of fraud losses, respectively. On the contrary, in periods of low uncertainty, only a positive economic growth rate and the FSI are significantly associated with more severe losses. Potential explanations are related to the effect of economic growth on transaction sizes and to the impact of low liquidity on timing issues. Several periods in our sample are driven by mixtures of the two limiting regimes, suggesting that a continuous transition function is necessary to model the data correctly. Finally, we demonstrate that the smooth-transition components improve the goodness-of-fit with respect to simpler alternatives.
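
As a rough sketch of what such a specification can look like (the functional form and notation are assumed here and may differ from the paper's exact model), the scale of the generalized Pareto distribution of threshold exceedances can be driven by covariates through two regimes linked by a logistic transition in the transition variable v_t (the VIX):

\[
\sigma_t = \exp\!\big( x_t'\beta_1\,[1 - G(v_t;\gamma,c)] + x_t'\beta_2\,G(v_t;\gamma,c) \big),
\qquad
G(v;\gamma,c) = \frac{1}{1 + \exp\{-\gamma (v - c)\}},
\]

so the effect of each risk factor moves smoothly between β_1 (low-uncertainty regime) and β_2 (high-uncertainty regime) as v_t crosses the location c, with γ controlling how abrupt the transition is.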

Testing for one-sided alternatives in nonparametric censored regression

Remark 3. In some situations, as explained in Section 1, it may happen that the conditional location function (1.2) for a given function J(·) cannot be consistently estimated due to the presence of censoring. It is typically the case if two conditional means have to be compared, with J(s) = I(0 ≤ s ≤ 1). However, this problem can be avoided in many situations if the models (1.1) satisfy stronger assumptions. For example, in the classical homoscedastic case, where the error distribution is the same in both models and independent of the covariate, the null hypothesis is equivalent to the equality of two regression curves with a function J̃(·) (having the same properties as J(·)) chosen in an appropriate way. Indeed, for j = 1, 2, consider the models Y_j = m_j(X_j) + ε_j, where

Adaptive mesh refinement for the numerical simulation of MHD instabilities in tokamaks: the JOREK code

static (at equilibrium) refinement strategy for the 2D version of the JOREK code. Our dynamic refinement process is intended to increase the accuracy of the spatial discretization in regions where the spatial scales are insufficiently resolved, and to decrease computing time compared with an unrefined simulation at the same resolution, in particular on the surfaces which present deformations due to the appearance of instabilities. This technique is developed and implemented in the 3D JOREK code to improve the simulation of MHD instabilities, which is needed to evaluate mechanisms to control the energy losses observed in the standard tokamak operating scenario (ITER); numerical simulation of these phenomena is crucial for better understanding them. As a consequence, it is important to refine the grid as much as possible where instabilities are formed. This should be done adaptively, because the location of the instabilities can change, as in the case of pellet injection.
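
The adaptive flag-and-refine cycle this describes can be sketched as follows (the error indicator, the flat 1D element list and the splitting rule are made up for illustration; JOREK's actual Bézier finite elements and refinement rules are more involved):

```python
# Minimal sketch of one adaptive refinement cycle: flag the elements carrying the
# largest error indicator, then split only those elements.
import numpy as np

def refinement_flags(indicator, frac=0.2):
    """Flag the fraction of elements carrying the largest error indicator."""
    threshold = np.quantile(indicator, 1.0 - frac)
    return indicator >= threshold

def adapt(elements, indicator, frac=0.2):
    """Split each flagged element in two (1D stand-in for h-refinement)."""
    new_elements = []
    for elem, flag in zip(elements, refinement_flags(indicator, frac)):
        if flag:
            left, right = elem
            mid = 0.5 * (left + right)
            new_elements += [(left, mid), (mid, right)]
        else:
            new_elements.append(elem)
    return new_elements

elements = [(i / 10, (i + 1) / 10) for i in range(10)]
indicator = np.array([0.01] * 8 + [0.5, 0.8])   # instability localised near the edge
print(len(adapt(elements, indicator)))           # 12: the two flagged cells were split
```

Repeating the cycle as the instability moves (e.g. after pellet injection) is what makes the refinement dynamic rather than static.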

Automatic Non-functional Testing of Code Generators Families

In the first step, software developers have to define, at design time, the software's behavior using a high-level abstract language (DSLs, models, programs, etc.). Afterwards, developers can use platform-specific code generators to ease the software development and automatically generate code for different languages and platforms. We depict, as an example in Figure 1, three code generators from the same family capable of generating code for three programming languages (JAVA, C# and C++). The first step is to generate code from the previously designed model. Afterwards, the generated software artifacts (e.g., JAVA, C#, C++, etc.) are compiled, deployed and executed across different target platforms (e.g., Android, ARM/Linux, JVM, x86/Linux, etc.). Finally, to perform the non-functional testing of the generated code, developers have to collect, visualize and compare information about the performance and efficiency of the running code across the different platforms. Therefore, they generally use several platform-specific profilers, trackers, instrumenting and monitoring tools in order to find inconsistencies or bugs during code execution [3, 7]. Finding inconsistencies within code generators involves analyzing and inspecting the code, and that, for each execution platform. For example, one way to handle this is to analyze the memory footprint of the software execution and find memory leaks [16]. Developers can then inspect the generated code and find the fragments of the code base that have triggered this issue. Therefore, software testers generally report statistics about the performance of the generated code in order to fix, refactor, and optimize the code generation process. Compared to this classical testing approach, our proposed work seeks to automate the last three steps: generate code, execute it on top of different platforms, and find code generator issues.

Testing for one-sided alternatives in nonparametric censored regression

Abstract: Assume we have two populations satisfying the general model Y_j = m_j(X_j) + ε_j, j = 1, 2, where m_j(·) is a smooth function, ε_j has zero location and Y_j is possibly right-censored. In this paper, we propose to test the null hypothesis H_0: m_1 = m_2 versus


Non-destructive testing of concrete

Rebound Tests The rebound hammer is a surface hardness tester for which an empirical correlation has been established between strength and rebound number. The only known instrument to make use of the rebound principle for concrete testing is the Schmidt hammer, which weighs about 4 lb (1.8 kg) and is suitable for both laboratory and field work. It consists of a spring-controlled hammer mass that slides on a plunger within a tubular housing. The hammer is forced against the surface of the concrete by the spring and the distance of rebound is measured on a scale. The test surface can be horizontal, vertical or at any angle but the instrument must be calibrated in this position.

A regression based non-intrusive method using separated representation for uncertainty quantification

product spaces [3]. In the context of uncertainty quantification, for problems involving very high stochastic dimension, instead of evaluating the coefficients of an expansion in a given approximation basis (e.g. polynomial chaos), the function u is approximated in suitable low-dimensional tensor subsets (e.g. rank-m tensors) which are low-dimensional manifolds of the underlying tensor space. The dimensionality of these manifolds typically grows linearly with the dimension d and therefore, it addresses the curse of dimensionality. Note that a regression-based method has already been proposed in [4] for the construction of tensor approximations of multivariate functionals. Here, we propose an alternative construction of tensor approximations using greedy algorithms and sparse regularization techniques.
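
A rough sketch of what a greedy, regression-based separated representation looks like is given below (rank-one corrections fitted by alternating least squares on a monomial basis; the basis, rank, stopping rule and plain least squares are arbitrary simplifications, in particular the paper's sparse regularization is omitted):

```python
# Greedy separated representation: repeatedly fit one rank-one term
# prod_k w_k(x_k) to the current residual by alternating least squares.
import numpy as np

def poly_basis(x, degree):
    """Univariate monomial basis evaluated at sample points x: shape (N, degree+1)."""
    return np.vander(x, degree + 1, increasing=True)

def fit_rank_one(X, r, degree=3, sweeps=10):
    """Fit one separated term prod_k w_k(x_k) to the residual r by ALS."""
    N, d = X.shape
    bases = [poly_basis(X[:, k], degree) for k in range(d)]
    coefs = [np.ones(degree + 1) for _ in range(d)]
    for _ in range(sweeps):
        for k in range(d):
            other = np.ones(N)                 # product of the other factors
            for j in range(d):
                if j != k:
                    other *= bases[j] @ coefs[j]
            A = bases[k] * other[:, None]      # the model is linear in coefs[k]
            coefs[k], *_ = np.linalg.lstsq(A, r, rcond=None)
    term = np.ones(N)
    for k in range(d):
        term *= bases[k] @ coefs[k]
    return term

def greedy_separated_regression(X, y, rank=4, degree=3):
    prediction = np.zeros_like(y)
    for _ in range(rank):                      # greedy rank-one corrections
        prediction += fit_rank_one(X, y - prediction, degree)
    return prediction

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 6))          # moderate stochastic dimension d = 6
y = np.sin(X[:, 0]) * X[:, 1] + 0.5 * X[:, 2] ** 2
print(np.linalg.norm(y - greedy_separated_regression(X, y)) / np.linalg.norm(y))
```

The key point matching the text above: the number of unknowns per rank-one term grows linearly with the dimension d, instead of exponentially as in a full tensor-product basis.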

Rare variant association testing in the non-coding genome

now well established that gene expression is controlled by a balance between the joint action of enhancers and promoters increasing transcriptional activity, and silencers having an opposite effect (Kolovos et al. 2012), along with the action of many proteins that bind to these DNA regions. A number of studies have been conducted to describe enhancers and link them to their target genes, as enhancers do not necessarily control the nearest gene (Yao et al. 2015). Gasperini et al. (2020) recently reviewed biological techniques and recent developments enabling the discovery and characterisation of such enhancers. Several huge projects like FANTOM5 (Forrest et al. 2014) or ENCODE (Dunham et al. 2012) have described and annotated regulatory elements of the genome and contributed to the construction of public databases to share this knowledge. Thanks to these projects, we now have access to a huge amount of information about gene regulation which can be used to identify variants within key regulatory elements that could potentially be linked to diseases (Ma et al. 2015). Other projects such as the Roadmap Epigenomics Project (Bernstein et al. 2010) were developed to study epigenomic marks of the genome. These marks are very useful to define regulatory elements with, for example, the mono-methylation of the 4th lysine residue of the H3 histone (H3K4m1) being indicative of enhancers or its tri-methylation (H3K4m3) being indicative of promoters. Projects were also conducted to study gene expression in different tissues. The GTEx project (GTEx Consortium 2013) for example provides information on gene expression in different cell lines. It has enabled the identification of expression Quantitative Trait Loci (eQTL) that could be involved in human diseases (Albert and Kruglyak 2015). At a larger scale, the characterisation of the genome organisation or “3D genome” has also been

Ridge regression for the functional concurrent model

Functional ridge regression estimator (FRRE). The definition of the estimator of β in the centered model (2.2) is inspired by the estimator introduced by Hoerl [8] for ridge regularization.
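
For orientation, in the functional concurrent model Y_i(t) = X_i(t)'β(t) + ε_i(t), a ridge-type estimator in the spirit of Hoerl can be computed pointwise in t from the n observed curves (this generic form and its notation are assumed here and may differ from the paper's exact FRRE definition):

\[
\hat\beta_\lambda(t) \;=\; \Big( \sum_{i=1}^{n} X_i(t)\, X_i(t)^{\top} + \lambda I \Big)^{-1} \sum_{i=1}^{n} X_i(t)\, Y_i(t),
\qquad \lambda > 0,
\]

where the ridge penalty λ stabilises the pointwise inversion when the covariate curves are nearly collinear at some t.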

