According to simulation results, the PD and VBR models have similar precision for a given simulation time step. In some cases, they allow using a time step twice as large as that of the classical dq0 model while maintaining simulation precision.
2. Discrete-Time dq0 Model with Internal Intermediate Time Step Usage (dq0-IITS): The solution algorithm of the classical dq0 model is modified to allow the use of internal intermediate (fractional) time steps between two consecutive main network solution time points. This approach improves precision at the expense of reduced simulation speed, since the machine equations are solved more than once per network step. However, internal intermediate time step usage is restricted to the transient intervals in which the precision of the dq0 formulation degrades. As demonstrated in this thesis, when the simulation time step is increased, the classical dq0 model introduces significant errors, especially in the DC component of the armature currents following a fault condition. The restriction of internal intermediate time step usage is achieved by implementing a network switching detection and machine terminal voltage monitoring algorithm to start the transient (perturbation) interval, and a field current monitoring algorithm to decide when to return to the normal time step after the perturbation interval. For a typical transient stability case, internal intermediate time step usage is active only for a small portion of the complete simulation interval. In addition, electromagnetic transients are local by nature, which limits the number of machines requiring intermediate time point solutions when simulating large-scale systems. Therefore, the increase in simulation time is not significant. This solution approach provides accuracy similar to that of the PD and VBR models.
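The sub-stepping logic can be sketched in a few lines. The following is an illustrative Python sketch only, not the thesis implementation: the machine dynamics, constants, and perturbation-detection rule are all placeholders (a first-order model and a hard-coded perturbation window standing in for the switching-detection and field-current-monitoring algorithms).

```python
# Illustrative sketch of internal intermediate (fractional) time step usage.
# All dynamics and constants below are hypothetical placeholders.

def step_machine(x, v_term, dt):
    """Toy machine state update (placeholder dynamics): dx/dt = -a*x + b*v."""
    a, b = 5.0, 1.0                      # hypothetical machine constants
    return x + dt * (-a * x + b * v_term)

def simulate(v_of_t, t_end, dt, n_sub, perturbed):
    """Advance the machine; use n_sub intermediate steps while perturbed(t)."""
    n_steps = round(t_end / dt)
    x, hist = 0.0, []
    for i in range(n_steps):
        t = i * dt
        if perturbed(t):                 # stands in for switching detection
            h = dt / n_sub               # fractional internal time step
            for k in range(n_sub):
                x = step_machine(x, v_of_t(t + k * h), h)
        else:                            # normal time step outside transients
            x = step_machine(x, v_of_t(t), dt)
        hist.append((t + dt, x))
    return hist

# Usage: a terminal-voltage dip during [0.1, 0.2) s triggers sub-stepping
# there only; the rest of the run uses the plain network time step.
hist = simulate(lambda t: 0.0 if 0.1 <= t < 0.2 else 1.0,
                t_end=0.5, dt=0.01, n_sub=4,
                perturbed=lambda t: 0.1 <= t < 0.2)
```

Because the fractional steps are taken only inside the perturbation window, the extra cost is proportional to the fraction of the run spent in transients, which mirrors the speed argument made above.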
6.1 Overview of Real-Time Platform
eMEGAsim is a fully digital power system real-time simulator capable of simulating electromagnetic transients with sub-microsecond time steps. It has been developed and is commercialized by OPAL-RT Technologies. It takes advantage of open, high-performance distributed and parallel computing technologies. Its hardware system is based on new-generation Intel or AMD multi-core processors, which makes it one of the least expensive alternatives for real-time simulation of EMTs. In addition, eMEGAsim can combine FPGA-based models to obtain simulation time steps on the order of a hundred nanoseconds. eMEGAsim's I/O system includes several independent, fast 16-bit converters (2.5 µs analog inputs and 1 µs outputs) directly controlled by an FPGA processor to achieve sample times as low as 10 µs. Users can program the FPGA processors using a SIMULINK-to-HDL code generator to implement fast signal processing or special interface functions.
The most accurate transformer models for low-frequency electromagnetic transients (below the first winding resonance frequency, typically a few kHz (Martinez-Velasco and Mork, 2005, Sec. 1)) have a physical basis. In these models the magnetic flux is confined to predefined paths called flux tubes, as seen in Chapter 1. Such models are termed topological, since each model element represents a part of the reluctance along the physical path of the magnetic field. These models are used in EMT-type programs instead of vectorial field models because the computational cost of FEM simulations was prohibitive, for three reasons: the transient nature of the phenomenon, which would require computing a field solution at each time step; the nonlinearities of transformer cores; and the need to model not just one transformer but several of them (depending on the system configuration being studied). It will in fact be demonstrated later, in Chapter 5, that the generalization of magnetic circuit theory leads to discrete electromagnetism and more sophisticated 3-D models.
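The flux-tube idea underlying topological models can be illustrated with a minimal magnetic circuit. The sketch below is not a transformer model from this chapter; it solves a single series flux tube (core leg plus air gap) by standard magnetic circuit theory, with assumed dimensions and winding data chosen purely for illustration.

```python
# Minimal flux-tube (magnetic circuit) illustration with assumed parameters.
MU0 = 4e-7 * 3.141592653589793      # vacuum permeability [H/m]

def reluctance(length, area, mu_r=1.0):
    """Reluctance of a flux tube: R = l / (mu0 * mu_r * A)  [A-turns/Wb]."""
    return length / (MU0 * mu_r * area)

# Hypothetical geometry: 0.5 m core path (mu_r = 2000), 1 mm air gap,
# 25 cm^2 cross-section, 200-turn winding carrying 2 A.
A = 25e-4
R_core = reluctance(0.5, A, mu_r=2000.0)
R_gap = reluctance(1e-3, A)

N, I = 200, 2.0
flux = N * I / (R_core + R_gap)     # MMF over total series reluctance [Wb]
B = flux / A                        # flux density in the tube [T]
```

Even a 1 mm gap dominates the series reluctance here, which is why topological models pay close attention to how the reluctance path is partitioned into elements.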
F. Plumier, Student Member, IEEE, P. Aristidou, Student Member, IEEE, C. Geuzaine, Member, IEEE, and T. Van Cutsem, Fellow, IEEE
Abstract—Co-simulation opens new opportunities to combine mature Electromagnetic Transients (EMT) and Phasor-Mode (PM) solvers, and take advantage of their respective high accuracy and execution speed. In this paper, a relaxation approach is presented, iterating between an EMT and a PM solver. This entails interpolating over time the phasors of the PM simulation, extracting phasors from the time evolutions of the EMT simulation, and representing each sub-system by a proper multi-port equivalent when simulating the other sub-system. Various equivalents are reviewed and compared in terms of convergence of the PM-EMT iterations. The paper also considers the update with frequency of the Thévenin impedances involved in the EMT simulation, the possibility of computing the EMT solution only once per time step, and the acceleration of convergence through a prediction over time of the boundary variables. Results are presented on a 74-bus, 23-machine test system, split into one EMT and one PM sub-system with several interface buses.
COUPLED simulations of power systems, combining Phasor-Mode (PM) and Electromagnetic Transients (EMT) models, aim at taking advantage of the high speed of PM simulations and the high accuracy of EMT simulations. To this purpose, the EMT model is simulated with a "small" time step size h and the PM model with a "large" time step size H. This makes the combined PM-EMT simulation a particular case of multirate methods. A typical application example is the detailed simulation of an unbalanced fault, using the EMT model in a sub-system surrounding the fault location and the PM model for the rest of the power system. The first related work can be traced back to 1981. Since then, a significant number of advances have taken place, as testified by state-of-the-art reports. However, there is still room for improvement in coupled PM-EMT simulations to reach the targeted speed-accuracy compromise. Some authors even challenge the theoretical basis supporting this type of hybrid simulation.
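The multirate interface between the two solvers can be sketched for one boundary quantity. The following Python sketch is an assumed, simplified scheme, not the method of any particular paper: the PM side delivers one boundary phasor every H seconds, and the EMT side, stepping at h, obtains instantaneous boundary values by linearly interpolating the phasor in time and converting it back to a waveform.

```python
# Illustrative multirate boundary interface: phasors at step H, waveform at
# step h. The linear interpolation and the 50 Hz carrier are assumptions.
import cmath
import math

F0 = 50.0                                  # nominal system frequency [Hz]

def interp_phasor(V0, V1, alpha):
    """Linear interpolation between two consecutive PM phasors, 0<=alpha<=1."""
    return V0 + alpha * (V1 - V0)

def emt_boundary_wave(V0, V1, t0, H, h):
    """Instantaneous boundary values at the EMT sub-steps inside [t0, t0+H)."""
    n = round(H / h)                       # number of EMT steps per PM step
    out = []
    for k in range(n):
        t = t0 + k * h
        V = interp_phasor(V0, V1, k / n)   # phasor at the EMT sub-step
        out.append(abs(V) * math.cos(2 * math.pi * F0 * t + cmath.phase(V)))
    return out

# Usage: boundary phasor magnitude ramps from 1.0 to 0.8 p.u. over one PM
# step of H = 50 ms, sampled at an EMT step of h = 1 ms.
wave = emt_boundary_wave(1.0 + 0j, 0.8 + 0j, t0=0.0, H=0.05, h=0.001)
```

The reverse direction (extracting a phasor from the EMT waveform for the PM solver) would typically use a sliding-window Fourier computation over one fundamental period; it is omitted here for brevity.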
Figure 2.22: Integration of PM currents with H = 50 ms.
such a system. While a maximum time step size of h = 1 ms had to be used for solving the EMT model, a time step size of H = 50 ms is small enough to accurately solve the PM equations with the trapezoidal rule. This is confirmed by Figure 2.22, where the crosses indicate the time steps. A comparison of the solutions obtained with the EMT and PM models is shown in Figure 2.23. In fact, it is very uncommon in PM simulation to rebuild the "full-wave" evolution of currents, voltages, etc.; it is done in Figure 2.23b for comparison purposes only. As can be seen, the EMT transients are not reproduced by the PM model. In particular, the current i(t) obtained from the PM model experiences a non-physical discontinuity at t = 0, due to the jump in the amplitude of the source voltage e(t). After a very short time, however, the EMT and PM approximations are so close to each other that it is impossible to distinguish the two curves.
Numerical solution methods are crucial to the evolution and technological advancement of modern power systems. Among them, circuit-based numerical methods, such as those used in high-accuracy time-domain electromagnetic transients (EMT) simulations, are of particular interest to researchers and utility engineers. The EMT approach targets operational problems in power system analysis. It helps utility engineers perform highly accurate studies, including fault analysis, power flow analysis, and stability analysis, as well as EMT analysis, demonstrating great advantages over phasor-domain approaches. It is of wideband nature and applicable both to slower electromechanical transients at lower frequencies and to high-frequency electromagnetic transients, such as switching and lightning. Typical numerical integration time steps are in the range of µs, which creates major obstacles in terms of computing time for the accurate solution of the large sets of differential and algebraic equations (DAEs) of large-scale systems. Therefore, reducing the computing time for solving complex, practical, large-scale power system networks has become an active research topic.
1.3 Literature review
Reducing the computing time of electromagnetic transient (EMT) simulations is a crucial research topic. EMT-type simulation methods are circuit-based and can use very accurate models over an extended frequency range of power system phenomena, which qualifies them as being of wideband type. In fact, the EMT approach is applicable to both slower electromechanical transients and much faster electromagnetic transients. The computation of electromechanical transients can be achieved with EMT-type solvers for very large networks and requires significant computing time compared to phasor-domain approaches; but even for smaller networks, the computing time can become a key factor due to numerical integration time-step constraints or model complexity. Increasingly challenging simulation cases are being created for studying modern power systems; these include, for example, HVDC systems and wind generation.
To improve the accuracy of wind generator grid impact studies, faster and more sophisticated models must be developed using various simulation tools. The simulations are usually carried out independently for fast and slow transients. Traditional slow-transient analysis methods are based on simplified solution methods with various approximations; these fall into the category of electromechanical transients. More sophisticated models are based on the detailed simulation of all wind generator components; such models fall into the category of electromagnetic transients (EMT). It is, however, complicated to run detailed simulations over long simulation periods due to computing time restrictions. This is especially true in large grid integration studies. The objective and innovation of this thesis is the simulation of wind generators in EMTP-type (Electromagnetic Transients Program) programs using faster modeling techniques with small integration time steps and the capability to be combined with detailed models. This way, fast and slow transients are solved in the same environment and with acceptable computational speed. Another objective of this thesis is the contribution of wind generator models for wind farm integration studies.
In this study we describe the design of radiofrequency applicators in which a submillimetric electrode circuit allows the exposure of minimal volumes of GUVs or living cell suspensions to radiofrequency wave pulses to be studied. In the present work we sought to examine the non-thermal effect of two waveforms: a damped sinusoid centered at 200 MHz, the wideband (WB) signal, and a radar-like ultra-narrowband (UNB) signal at 1.5 GHz. The applicators, coupled to a light microscope, enabled us to study the morphological effects of electromagnetic waves on GUVs and cells (size and shape), which, if altered, could attest to some harmful effects of the tested waveforms. Under our experimental conditions, the applied WB and UNB pulses did not induce any observable changes in the macromolecular or cellular samples tested. In contrast, when the parameters of the electromagnetic pulses were set to values known to cause membrane damage, the lipid bilayers visibly changed their structure, and the cell membranes became permeabilized. The applicators are designed in a way that
2. MATERIAL AND METHODS

2.1. The SPARTE code
SPARTE is a multi-physics code developed to improve the prediction of the CABRI power transients. The power transients are calculated by a point kinetics algorithm, the same as in DULCINEE. Point kinetics is well suited to our problem: CABRI is a small reactor, the reactivity is injected in a homogeneous way since the transient rods are placed at the four corners of the core, and the RIA phenomenon is very fast. The injected reactivity is defined at the beginning of the simulation, whereas feedback reactivities are computed at each time step. Heat transfers are computed by Fourier equations, and thermal hydraulics by continuum equations. A dataset describes the geometry, kinetic parameters, and meshing of the CABRI core. This dataset was updated in the SPARTE code in order to take into account the latest studies (neutronics, core material balance). Moreover, models were added in order to take into account a variable Doppler coefficient during tran-
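The point kinetics equations themselves are standard. The sketch below is a minimal one-delayed-group Python version with textbook-style assumed parameters, not the SPARTE implementation or CABRI data; it only illustrates the structure of re-evaluating the reactivity at each time step, as the feedback computation described above would do.

```python
# Minimal one-delayed-group point kinetics (assumed parameters, explicit
# Euler). Power P and precursor concentration C evolve as
#   dP/dt = (rho - beta)/Lambda * P + lam * C
#   dC/dt = beta/Lambda * P - lam * C
# with rho re-evaluated at every step, as feedback reactivity would be.

def point_kinetics(rho_of_t, t_end, dt, beta=0.007, Lambda=2.5e-5, lam=0.08):
    P = 1.0
    C = P * beta / (Lambda * lam)          # precursor equilibrium for P = 1
    hist = []
    n = round(t_end / dt)
    for i in range(n):
        t = i * dt
        rho = rho_of_t(t)                  # injected + feedback reactivity
        dP = (rho - beta) / Lambda * P + lam * C
        dC = beta / Lambda * P - lam * C
        P, C = P + dt * dP, C + dt * dC    # small dt needed for stability
        hist.append((t + dt, P))
    return hist

# Usage: zero reactivity keeps the power at its equilibrium value; a
# +300 pcm step (below beta, so delayed-critical) makes the power rise.
flat = point_kinetics(lambda t: 0.0, t_end=0.01, dt=1e-5)
step = point_kinetics(lambda t: 0.003, t_end=0.01, dt=1e-5)
```

A production code would of course use a stiff or semi-analytical integrator and several precursor groups; explicit Euler is used here only to keep the structure visible.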
Lynch et al. 2013; Stovall et al. 2014).
Low-frequency surveys can also achieve large instantaneous sky coverage and sensitivity. That is important, as over the last decade it has become increasingly apparent that the various sub-types of radio-emitting neutron stars show a wide range of activity – from the classical, steady pulsars to the sporadic pulses of the rotating radio transients (RRATs; McLaughlin et al. 2006), and the off-on intermittent pulsars (Kramer et al. 2006). Other cases of transient millisecond radio pulsars (Archibald et al. 2009; Papitto et al. 2013; Bassa et al. 2014; Stappers et al. 2014) and radio magnetars (Camilo et al. 2006; Eatough et al. 2013) also give strong motivation for pulsar surveys that permit large on-sky time and repeated observations of the same survey area. Furthermore, the recent discovery of the fast radio bursts (FRBs, also known as "Lorimer bursts"; Lorimer et al. 2007; Keane et al. 2012; Thornton et al. 2013; Spitler et al. 2014; Petroff et al. 2014) provides even more impetus for wide-field radio surveys with sub-millisecond time resolution.
What are the effects of the parameters of interest (subsaturation, flow, pressure, exponential period of the transients)?
How can the transient CHF (for reasonable reactivity insertions) be enhanced in experimental reactors (surface state / operating conditions, …)?
The new generation of powerful instruments is reaching sensitivities and temporal resolutions that will allow multi-messenger astronomy of explosive transient phenomena, with high-energy neutrinos as a central figure. We derive general criteria for the detectability of neutrinos from powerful transient sources for given instrument sensitivities. In practice, we provide the minimum photon flux necessary for neutrino detection based on two main observables: the bolometric luminosity and the time variability of the emission. This limit can be compared to observations at specified wavelengths in order to target the most promising sources for follow-ups. Our criteria can also help distinguish false associations of neutrino events with a flaring source. We find that relativistic transient sources such as high- and low-luminosity gamma-ray bursts (GRBs), blazar flares, tidal disruption events, and magnetar flares could be observed with IceCube, as they have a good chance of occurring within a detectable distance. Of the non-relativistic transient sources, only luminous supernovae appear as promising candidates. We caution that our criterion should not be directly applied to low-luminosity GRBs and type Ibc supernovae, as these objects could have hosted a choked GRB, leading to neutrino emission without a relevant counterpart radiation. We treat a set of concrete examples and show that several transients, some of which are being monitored by IceCube, are far from meeting the criterion for detectability (e.g., Crab flares or Swift J1644+57).
c Dept. of Nuclear Physics & Engineering, Soreq Nuclear Research Center, Yavne, Israel
The main purpose of this benchmark paper is to study and compare the point and spatial neutronic approaches used to calculate ULOF and UTOP transients in sodium-cooled fast reactors. A second objective is to compare deterministic and Monte Carlo calculations with two different calculation codes. The first one is based on a deterministic (discrete ordinate
of materials.
While up to that point most artificial materials were made to control the permittivity and permeability, extreme material parameters, such as a refractive index less than unity, were not shown before the work of Brown in the 1950s and then, in the 1960s, that of Rotman, who associated the electromagnetic behavior of a wire medium with that of an equivalent plasma. Continuing on that path, Veselago theoretically predicted, in 1968, a completely new class of materials in which both the permittivity and the permeability take negative values, leading to a negative index of refraction. The development of artificial materials continued through the 1980s and 1990s, notably with the realization of microwave absorbers and bianisotropic media. But it was not before 2001 that the concept of metamaterial really started to attract major attention from the scientific community. Indeed, this rise in interest followed the groundbreaking work of Smith, who, for the first time, experimentally achieved a negative index of refraction, and that of Pendry, who revived the concept of negative refraction (proposed by Veselago) to suggest the realization of a perfect lens. The interest in metamaterials grew even more when, in 2006, Pendry and Smith proposed and realized the concept of electromagnetic cloaking based on transformation optics [20,21]. Although a general description of what a metamaterial is was already provided above, we will now rigorously define the meaning of this term. A metamaterial is an artificial structure made of an arrangement of engineered particles, sometimes referred to as "meta-atoms" or scattering particles. The distance between these meta-atoms, as well as their overall dimension, must be much smaller than the wavelength, such that a wave propagating through the medium does not experience diffraction or Bragg scattering due to the granularity of the structure. In that case, the metamaterial structure can