a near-to-far-field transformation based on Huygens' equivalence principle is used. In accordance with the theory, it is shown that the modes are radiated into a relatively narrow range of angles around the disk plane and exhibit nearly Gaussian far-field intensity profiles. The etching processes required for the fabrication of microdisk resonators often lead to significant deviations from the ideal geometry, for example to stepped or wedge-shaped edges. In a second step, we study the effect of such deviations from cylindrical symmetry on the wavelengths, quality factors, mode patterns, and far-field profiles of whispering-gallery modes. It is, for instance, shown for the first time that a microdisk with a single step along its circumference supports modes that periodically change their radial and vertical position inside the cavity, provided the step size is large enough. In the picture of geometrical optics, this oscillating behavior corresponds to a light ray that is "unable to decide" along which of the two lateral disk boundaries it should travel and therefore constantly switches back and forth between them. In contrast, wedge- and step-shaped edges turn out to have only a minor influence on the far-field emission characteristics of whispering-gallery modes, even though the latter propagate close to the disk boundary.
This paper presents a global finite-difference time-domain (FDTD) analysis of a silicon nonlinear transmission line (NLTL) using optimized varactors. The simulation is based on the FDTD method, including transmission-line losses and the drift-diffusion model for the semiconductor devices, which is solved by means of finite differences (FD). The diodes are included in the FDTD scheme as lumped elements. The fall time of 74 ps of a 4 GHz sine wave was compressed to approximately 15 ps at the output of a 20 mm long NLTL with 40 diodes. The measured output signal agrees well with the simulation.
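The lumped-element treatment of nonlinear devices inside a transmission-line FDTD solver can be sketched as follows. This is a minimal toy model, not the paper's solver: a 1D leapfrog update of the lossy telegrapher equations in which the shunt capacitance follows an abrupt-junction varactor law C(v) = C0/sqrt(1 + v/Vj). All numerical values (grid, line parameters, varactor constants) are assumptions chosen only to give a stable illustrative run.

```python
import numpy as np

# 1D FDTD for a lossy transmission line with a voltage-dependent (varactor)
# shunt capacitance -- an illustrative sketch of the NLTL simulation idea.
nx, nt = 200, 1200
dx, dt = 1e-4, 1e-13            # spatial step [m], time step [s] (CFL-safe)
L, R = 2.5e-7, 1.0              # series inductance [H/m] and resistance [Ohm/m]
C0, Vj = 1e-10, 0.7             # zero-bias capacitance [F/m], junction potential [V]

v = np.zeros(nx)                # node voltages
i = np.zeros(nx - 1)            # branch currents (staggered half-grid)

def cap(volt):
    """Abrupt-junction varactor model C(v) = C0 / sqrt(1 + v/Vj), clamped."""
    return C0 / np.sqrt(np.maximum(1.0 + volt / Vj, 0.1))

for n in range(nt):
    # current update: di/dt = -(1/L) dv/dx - (R/L) i
    i += -dt / (L * dx) * (v[1:] - v[:-1]) - dt * R / L * i
    # voltage update: dv/dt = -(1/C(v)) di/dx
    v[1:-1] += -dt / (cap(v[1:-1]) * dx) * (i[1:] - i[:-1])
    v[0] = 0.5 * np.sin(2 * np.pi * 4e9 * n * dt)   # sinusoidal input drive
    v[-1] = v[-2]                                   # crude absorbing termination
```

Because C(v) decreases with increasing voltage, the local propagation speed rises on the wave crest, which steepens the falling edge as it travels down the line: the compression mechanism described in the abstract.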
UWB devices are usually characterized in the frequency domain. However, this analysis does not directly assess the distortion and dispersion that the device imposes on the transmitted signal. Hence, in this thesis, a complete analysis of UWB components (such as filters, antennas, and integrated filter-antennas) has been performed, which takes into account not only the frequency-domain behavior but also includes a time-domain investigation, giving a clearer insight into the distortion and dispersion caused by the non-ideal behavior of the device itself. Together with the time-domain analysis of the different components, a correlation analysis has been introduced in order to quantify the amount of distortion that the device introduces on the signal. These two analyses complement each other and describe the time-domain behavior of the component and its influence on the signal through a few parameters. Unlike the frequency-domain characterization, where the commonly used parameters are frequency dependent, these time-domain parameters are single numbers that relate directly to the pulse-preserving capability and dispersiveness of the device. The mathematical background of these two analysis methods has been given in chapter 2.
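A common single-number correlation metric of the kind described above is the fidelity factor: the peak of the normalized cross-correlation between the transmitted and received pulses. The sketch below uses this standard definition; the thesis' exact parameter set may differ.

```python
import numpy as np

def fidelity_factor(tx, rx):
    """Peak of the normalized cross-correlation between two pulses.

    A value of 1.0 means the pulse shape is perfectly preserved (a pure
    delay and scaling); distortion and dispersion lower the value.
    """
    tx = tx / np.linalg.norm(tx)
    rx = rx / np.linalg.norm(rx)
    return np.max(np.correlate(rx, tx, mode="full"))   # search over all delays

t = np.linspace(-1e-9, 1e-9, 1001)
pulse = np.exp(-(t / 1e-10) ** 2)        # Gaussian UWB test pulse
delayed = np.roll(pulse, 40)             # pure delay: no shape distortion
print(fidelity_factor(pulse, delayed))   # close to 1.0
```

Because the metric maximizes over the relative delay, a distortion-free device scores 1.0 regardless of its group delay, so the number isolates shape degradation from mere propagation delay.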
spectrum ranges from 300 nm to 1500 nm. After the excitation, we record the fields at the interface between the total-field and the scattered-field region. Due to the discontinuous nature of our method, we can record both the fields inside and outside of the TF/SF contour. From the field data, after a Fourier transform, we obtain the Poynting vectors in the frequency domain. Integration over the entire surface then yields the total flux. Normalization with respect to the incoming spectrum results in the absorption cross section and the scattering cross section for the fields inside and outside of the contour, respectively. The sum of the two cross sections finally yields the extinction cross section C_ext, which we use for the characterization of our nanoparticles. In a first set of calculations, we fix the height and width of the rod at h = w = 20 nm and vary the length l between 100 nm and 200 nm. The chamfering of the corners was achieved by replacing the end faces of the bars with half-cylinders. From the results depicted in Fig. 8.14, we find that the resonance shifts to shorter wavelengths as we decrease the length of the rod. This was to be expected: in a very crude approximation, we can interpret the nanorod as a small antenna. In contrast to classical antennas, however, the fundamental resonance wavelength is found to be much longer than twice the rod length. In order to extract the properties of the individual resonances, we fit a Lorentzian curve to each peak (cf. Sec. 7.1). The extracted resonance wavelengths and Q factors are plotted in Fig. 8.15. The quality factors are found to vary only weakly with the length, and a saturation can be observed for very small lengths. This is in good agreement with theoretical expectations for the quasi-static limit.
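The extraction of resonance wavelength and Q factor from a spectral peak can be sketched as follows. The text fits a Lorentzian; the sketch instead estimates the full width at half maximum directly, which for a clean Lorentzian line shape yields the same Q = λ0/FWHM. The test spectrum and grid are assumptions.

```python
import numpy as np

def resonance_q(lam, spectrum):
    """Peak position and Q = lambda_0 / FWHM of a single spectral peak."""
    k0 = np.argmax(spectrum)
    half = spectrum[k0] / 2.0
    above = np.where(spectrum >= half)[0]     # indices above half maximum
    fwhm = lam[above[-1]] - lam[above[0]]
    return lam[k0], lam[k0] / fwhm

lam = np.linspace(900.0, 1100.0, 4001)        # wavelength grid [nm]
lam0, gamma = 1000.0, 25.0                    # test peak: center and FWHM [nm]
lorentz = 1.0 / (1.0 + ((lam - lam0) / (gamma / 2)) ** 2)
center, q = resonance_q(lam, lorentz)
print(center, q)                              # about 1000 nm and Q of about 40
```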
Note that a polarimetric MUSIC algorithm searching jointly in the spatial and polarimetric domains can be applied to estimate the signal polarization, as described in  and ; however, we are not merely interested in knowing the polarization of an incident signal, but rather in understanding the behaviour of the system in the polarimetric space. For this purpose we define the MUSIC spectrum as
employs a fast scratchpad memory instead of caches, and it accesses the DRAM via a specifically designed controller that allows interference-free access for the four hardware threads. As noted in [Liu et al., 2012], it is challenging to efficiently program the PTARM system with its specific memory hierarchy. In contrast, our strictly in-order processor is equipped with a standard hierarchy with separate instruction and data caches and a background memory that serves one access at a time. In order to circumvent the complexity of comparing the different memory hierarchies, our PTARM-like design, which focuses on the single-thread perspective, uses the same memory hierarchy as our strictly in-order design. As a consequence, the single thread can fully use the instruction and data caches, which serve a similar purpose as the scratchpad in the original design. In a system with four threads, however, the threads would need to share the space of a local, fast memory. To account for this spatial sharing, we evaluate the relative performance of the single PTARM thread when reducing the cache size from 4 KiB to 1 KiB. The impact of the spatial sharing on the execution time heavily depends on the actual benchmarks. For a memory word latency of 10 processor cycles, the number of needed cycles increased by 16% on average over our benchmarks. Taking this increase into account, the ratio of execution times becomes 2.11.
The objective of this paper was to demonstrate the proposed multipoint loads modeling approach for the case of the lift coefficients at seven monitored load stations. The postulated model structure provides the ability to perform multipoint loads modeling satisfactorily. Moreover, the good agreement between the local loads measured by the strain gauges and the loads predicted by the identified rigid-body model proved the efficacy of these additional observations. The contribution of this work was to develop a multipoint aerodynamic loads model to support future real-time SHM applications. As a result, the approach presented in this paper has the novelty of extending the conventional global system-identification approach in order to enable the simulation of local loads in real time. Future work will address the modeling of the effects of the aircraft structural dynamics (i.e., structural flexibility) on this multipoint loads model.
The TDIE methods, conventionally solved by marching-on-in-time (MOT) schemes, are receiving increasing attention, especially in the electromagnetics community, for complex broadband surface scattering phenomena and transient radiation problems. The MOT schemes, a computationally efficient time-domain formulation of the well-known boundary element method (BEM) for solving the TDIEs, have been shown to be prone to instabilities that appear in the form of exponentially growing oscillations in the late-time response and alternate in sign at each time step. The instabilities originate at the system discretization stage, in the conversion of the integral equation to a discrete time-space model. Historically, many authors have postponed MOT instabilities through temporal filtering [1, 7, 13, 14, 16, 18, 19, 23]. The time averaging used in filtering, however, may adversely affect the accuracy of the final solution. The MOT instability arises when the poles that characterize the integral-equation system being solved drift into the right half-plane due to the approximations in the numerical scheme [13, 14]. Poles describing interior resonances permitted by the time-domain electric field integral equation (EFIE) and magnetic field integral equation (MFIE) are prime candidates for such undesirable shifts, as they reside on the imaginary axis. A linear combination of the EFIE and MFIE, the so-called combined field integral equation (CFIE), has been exploited to eliminate the interior cavity modes that can corrupt the solutions of the EFIE and MFIE [3, 4, 6]. Walker demonstrated that judiciously constructed MOT schemes designed to solve the MFIE, relying on accurate spatial integration rules and implicit time-stepping schemes, are stable for practical purposes. Nonetheless, precautions have to be taken in the spatial and temporal discretizations to avoid any pole displacement into the right half-plane.
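For reference, the CFIE mentioned above is conventionally formed as a weighted combination of the two equations (the weighting convention varies between authors; here η denotes the wave impedance of the surrounding medium):

```latex
\mathrm{CFIE} \;=\; \alpha\,\mathrm{EFIE} \;+\; (1-\alpha)\,\eta\,\mathrm{MFIE},
\qquad 0 \le \alpha \le 1 .
```

For any 0 < α < 1 the combined operator has no real interior resonance frequencies, which is why the combination suppresses the spurious cavity modes that afflict the EFIE and MFIE individually.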
With respect to the spatial discretization, most researchers have used triangular patch modeling, in which all interior inner products of the BEM are evaluated using a fixed Gaussian quadrature rule over triangular subdomains, e.g., a 7-point Gaussian quadrature rule in . Applying a predefined number of quadrature points, however, not only leads to a computationally inefficient algorithm for computing the interactions between basis functions located far from each other, but also prohibits a sufficiently precise calculation of the mutual coupling of neighboring cells. In other words, fixed-point quadrature schemes prevent control over the precision of numerical integrations on a previously meshed structure.
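The accuracy problem described above can be demonstrated on a simple 1D analogue (not the cited BEM code): a fixed-order Gauss rule that is very accurate for well-separated interactions loses accuracy as the 1/R-type kernel becomes nearly singular, i.e., as the "cells" approach each other.

```python
import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(7)   # 7-point rule on [-1, 1]

def gauss7(d):
    """Fixed 7-point Gauss integral of 1/sqrt(x^2 + d^2) over [-1, 1]."""
    return np.sum(weights / np.sqrt(nodes**2 + d**2))

def exact(d):
    # antiderivative of 1/sqrt(x^2 + d^2) is arcsinh(x/d)
    return 2.0 * np.arcsinh(1.0 / d)

errs = {}
for d in (1.0, 0.1, 0.01):        # d plays the role of the cell separation
    errs[d] = abs(gauss7(d) - exact(d)) / exact(d)
    print(f"d = {d:5.2f}  relative error = {errs[d]:.2e}")
```

For d = 1 the fixed rule is accurate to several digits, but for d = 0.01 the near-singular peak falls between the quadrature nodes and the relative error exceeds 100%, mirroring the imprecise near-coupling calculation criticized in the text.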
which will improve the modeling and forecasting of foreign exchange volatility. Similarly to Lanne (2007), Andersen et al. (2011), and Sévi (2014), we use the decomposition of the quadratic variation with the intention of building a more accurate forecasting model. Our approach is very different, though, as we use wavelets to decompose the integrated volatility into several investment horizons and jumps. Moreover, we employ the recently proposed Realized GARCH framework of Hansen et al. (2012). In contrast to the popular HAR framework of Corsi (2009), Realized GARCH allows joint modeling of returns and realized measures of volatility, its key feature being a measurement equation that relates the realized measure to the conditional variance of returns. In addition, we benchmark our approach against several measures of realized volatility and jumps, namely the realized volatility estimator proposed by Andersen et al. (2003), the bipower variation estimator of Barndorff-Nielsen and Shephard (2004), the median realized volatility of Andersen et al. (2012), and finally the jump wavelet two-scale realized variance (JWTSRV) estimator of Barunik and Vacha (2014), all within the Realized GARCH framework; we find significant differences in volatility forecasts, with our JWTSRV estimator bringing the largest improvement. We use the Realized GARCH models of Hansen et al. (2012) as well as the realized GAS of Huang et al. (2014), based on the observation-driven estimation framework of generalized autoregressive score models, to build a realized Jump-GARCH modeling strategy. In addition, we also utilize Realized GARCH with multiple realized measures (Hansen and Huang, 2012) to build a time-frequency model for forecasting volatility.
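The realized measures mentioned above can be sketched from intraday returns as follows: realized variance (RV), the jump-robust bipower variation (BV) of Barndorff-Nielsen and Shephard (2004), and the implied jump component max(RV − BV, 0). The simulated return series is an assumption for illustration only.

```python
import numpy as np

def realized_measures(returns):
    """Realized variance, bipower variation, and the implied jump component."""
    rv = np.sum(returns**2)
    # bipower variation is robust to jumps; mu1 = E|Z| = sqrt(2/pi), Z ~ N(0,1)
    mu1 = np.sqrt(2.0 / np.pi)
    bv = mu1**-2 * np.sum(np.abs(returns[1:]) * np.abs(returns[:-1]))
    jump = max(rv - bv, 0.0)
    return rv, bv, jump

rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.001, 390)    # one trading day of 1-minute returns
r[200] += 0.02                     # inject a single price jump
rv, bv, jump = realized_measures(r)
print(rv, bv, jump)                # RV exceeds BV when a jump is present
```

Because each product |r_i||r_{i-1}| in BV contains at most one jump return, BV stays close to the continuous variation while RV picks up the squared jump, so their difference isolates the jump contribution that the decomposition exploits.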
Figure 1. Schematic diagram of social shrinkage
3.2.3 Social Shrinkage
In the time-frequency representation of instrumental sounds, energy is considered to be concentrated in grouped regions of the spectrogram because the window function spreads spectral peaks. Hence, we propose to incorporate social sparsity [4, 27, 28]. This is a sparse-modeling approach in which the information of the surrounding time-frequency bins is added as a weight when deciding whether to keep or discard a bin. As a result of this modeling, the time-frequency bins of the spectrogram are expected to become grouped, and noise that is not related to the signal is reduced. A schematic diagram of social sparsity is shown in Fig. 1. Introducing the social sparsity, P_social is given by
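The definition of P_social is elided from the excerpt above. Purely as an illustration of the general idea of social shrinkage, and not the authors' exact operator, the following sketch keeps a time-frequency bin only when the energy of its neighborhood, rather than the bin alone, exceeds a threshold; all parameter values are assumptions.

```python
import numpy as np

def social_shrinkage(spec, radius=1, threshold=0.2):
    """Keep a bin only if the mean energy of its (2r+1)x(2r+1) time-frequency
    neighborhood exceeds the threshold; isolated noisy bins are discarded."""
    mag2 = np.abs(spec) ** 2
    padded = np.pad(mag2, radius, mode="constant")
    # neighborhood energy as a sum of shifted copies (a small box filter)
    neigh = np.zeros_like(mag2)
    for di in range(2 * radius + 1):
        for dj in range(2 * radius + 1):
            neigh += padded[di:di + mag2.shape[0], dj:dj + mag2.shape[1]]
    neigh /= (2 * radius + 1) ** 2
    return np.where(neigh > threshold, spec, 0.0)

spec = np.zeros((16, 16))
spec[4:7, 4:7] = 1.0          # a grouped component (kept)
spec[12, 12] = 1.0            # an isolated bin (discarded as noise)
out = social_shrinkage(spec)
```

The neighborhood weighting is what produces the grouping effect described in the text: clustered bins support each other and survive, while isolated bins are suppressed.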
In the scope of this thesis, I have studied the transformation of reflection seismic data into a structural image of the subsurface. This central step in the seismic imaging workflow is called migration. The extension to true-amplitude migration additionally compensates for the geometrical spreading effect during the transformation and thus allows the recovery of reflection amplitudes. Seismic time migration, which is considered here, has been introduced as an approximate transformation process based on the assumption of a laterally homogeneous velocity distribution. With this prerequisite, integral velocities are assumed to be sufficient to characterise the overburden, and the velocity model building is considerably simplified. In practice, the assumption of a 1D medium is never strictly met. Therefore, the application of time migration is usually extended to media showing mild to moderate lateral velocity variations. The main advantage of time migration lies in its reduced sensitivity to velocity model errors compared to the depth migration process. This not only influences the quality of the migrated image but also has a strong impact on the amplitudes. Thus, seismic amplitude analysis is usually carried out on time-migrated results because they provide more reliable and less distorted amplitude information.
fore having the processor for the next s time units and then repeatedly again after p time units. In the worst case, the event occurs exactly when the slot has finished, and its processing is therefore delayed by p − s time units. Round-robin scheduling checks the tasks in a fixed order to determine whether they have jobs available for execution. If so, these jobs are executed for at most the slot length of the task. As with TDMA, a job can be distributed over several slots, and several jobs can be executed within one slot. Unlike TDMA, a task having nothing to execute does not lead to idle time for the processor; instead, the slots of the following tasks are brought forward. Therefore, the available capacity for one task depends on the incoming event spectra of the other tasks in the RR cycle. Each time unit not used by one of the tasks reduces by one time unit all response times of all other tasks in which that time unit occurs. The worst-case response-time analysis for RR is similar to that for static priorities; only the calculation of the higher-priority processing time differs. This can be calculated by "simulating" the worst-case RR behavior on the interval-based event spectrum model.
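The TDMA worst-case delay stated above can be checked with a small sketch: an event that arrives just as its slot of length s ends must wait p − s time units until the slot recurs. The slot geometry and parameter values are toy assumptions.

```python
# Toy check of the TDMA worst-case waiting time p - s.
def tdma_wait(arrival, slot_start, s, p):
    """Time from arrival until the start of the task's next slot.

    The task's slot occupies [slot_start + k*p, slot_start + k*p + s).
    """
    k = 0
    while slot_start + k * p + s <= arrival:   # skip slot instances already over
        k += 1
    begin = slot_start + k * p
    return max(begin - arrival, 0)

s, p = 2, 10                 # slot length and TDMA period (assumed values)
# best case: the event arrives right at the start of its slot
assert tdma_wait(arrival=0, slot_start=0, s=s, p=p) == 0
# worst case: the event arrives just as the slot finishes and waits p - s
assert tdma_wait(arrival=s, slot_start=0, s=s, p=p) == p - s
```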
P = ṗ_n + 0.5 Δt p̈_n ,  Q = p_n + Δt ṗ_n + Δt² (0.5 − β) p̈_n .  (6) Here, Δt represents the time interval. The linear system of Eq. (3) can be solved efficiently using a preconditioned iterative solver, the Conjugate Gradient (CG) method with diagonal scaling. The present paper uses the highly accurate Fox–Goodwin (FG) method, which is the Newmark method with β = 1/12. Regarding the data structure for storing the matrix components of Eq. (3), sparse storage formats such as the compressed row storage format, which do not store zero elements, reduce or limit the memory requirements, because the matrices of Eq. (3) are sparse and contain many zero elements.
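The solver combination described above, CG with diagonal (Jacobi) scaling as the preconditioner, can be sketched as follows. This is an illustrative implementation applied to an assumed small symmetric positive-definite test matrix, not the paper's code.

```python
import numpy as np

def cg_jacobi(A, b, tol=1e-10, max_iter=1000):
    """Conjugate Gradient with diagonal (Jacobi) preconditioning."""
    d_inv = 1.0 / np.diag(A)          # diagonal preconditioner M^{-1}
    x = np.zeros_like(b)
    r = b - A @ x                     # initial residual
    z = d_inv * r                     # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = d_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# small SPD test system (a 1D Laplacian-like stiffness matrix)
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = cg_jacobi(A, b)
print(np.linalg.norm(A @ x - b))      # residual near machine precision
```

In a production code the dense matrix-vector products above would of course be replaced by sparse products on a compressed-row-stored matrix, matching the storage discussion in the text.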
The cross-correlation function (CCF) is computed to visualize the spatial sound information before and after dereverberation. The CCF is computed using a short-time average every 100 ms without any overlap, as in [17], and its maximum indicates the current source direction. Figure 1 plots the contours of the computed CCF between the two reverberant inputs as well as that between the two dereverberated outputs in the two studied reverberation conditions. Comparing Fig. 1 (a) and (c), one can clearly see that reverberation deteriorates the source spatial information. But the spatial information is well recovered by the developed method, as seen in Fig. 1 (b) and (d).
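The short-time CCF computation described above can be sketched as follows: the cross-correlation is evaluated over non-overlapping frames, and the lag of its maximum in each frame indicates the current source direction. The frame length, signals, and delay are assumptions for illustration.

```python
import numpy as np

def short_time_ccf(x, y, frame_len, max_lag):
    """CCF over non-overlapping frames, restricted to lags [-max_lag, max_lag]."""
    lags = np.arange(-max_lag, max_lag + 1)
    out = []
    for start in range(0, len(x) - frame_len + 1, frame_len):
        xf = x[start:start + frame_len]
        yf = y[start:start + frame_len]
        full = np.correlate(xf, yf, mode="full")   # lags -(N-1) .. (N-1)
        mid = frame_len - 1                        # index of zero lag
        out.append(full[mid - max_lag:mid + max_lag + 1])
    return lags, np.array(out)

fs = 16000
sig = np.random.default_rng(0).standard_normal(fs)   # 1 s noise "source"
delay = 5                                            # inter-channel delay [samples]
left, right = sig, np.roll(sig, delay)               # right channel lags the left
lags, ccf = short_time_ccf(right, left, frame_len=1600, max_lag=20)
print(lags[np.argmax(ccf[0])])                       # recovers the delay of 5
```

Tracking the argmax lag frame by frame is what yields the direction trajectory visualized in the CCF contour plots.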
Several SAR processing algorithms have been proposed in the literature, mainly divided into two broad classes: FFT-based and time-domain processors, each having its benefits and disadvantages. FFT methods are known for their efficiency but have limitations, mainly due to their specific assumptions. Range-Doppler and Chirp-Scaling rely on approximations that break down for large apertures and Doppler centroids. The ω-k algorithm is geometrically exact, but it assumes a perfectly straight trajectory. Deviations from a linear uniform trajectory are a bottleneck for these algorithms in an airborne scenario. To compute along-track FFTs, a full aperture of pulses must be acquired, so the processing is performed in blocks. Topography- and Aperture-Dependent (TAD) motion compensation algorithms based on block processing have been developed to overcome this limitation.
5.2 Comparison against measurements
In order to assess the accuracy of the ER model, a comparison between simulations and measurements is presented. The scenario is a single reflection of a spherical wave from a 5 cm porous absorber backed with a 15 cm air cavity. This material configuration is known to exhibit ER behavior. The measurements were undertaken in an anechoic chamber; for further details on the measurement, see Ref. [12]. The simulation is carried out in a large 3D domain, and the resulting response is windowed in time such that parasitic reflections are removed. A basis order of P = 4 is used, and a high spatial resolution is employed in the simulation, roughly 14 PPW at 1 kHz. The initial condition is a Gaussian pulse with spatial variance σ = 0.2 m². The admittance functions are mapped to rational functions that all share the same set of 14 poles but have varying numerical coefficients. The resulting transfer functions can be seen in Fig. 5. For the small incidence angle case, there is very little difference between the LR and the ER model, as expected. However, as the angle increases, the difference between the two models increases as well. Clearly, the ER model matches the measured transfer functions better than the LR model.
II. TIME-DOMAIN PASSIVITY APPROACH
The Time-Domain Passivity Approach is a widely used method to ensure the stability of bilateral teleoperation systems. It consists of measuring the energy flow in the system and adaptively dissipating energy in order to enforce passivity. A useful tool that facilitates the application of this approach is to describe the entire system in network representation and the communication channel as a Time Delay Power Network (TDPN). The TDPN is a two-port network through which the flow and effort variables are exchanged between master and slave. A TDPN can add time delays, jitter, and packet losses to the transmitted data, which makes it a suitable representation for the communication channel.
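The observer/controller mechanism described above can be sketched for a single port as follows. This is an illustrative one-port sketch, not a full TDPN implementation: the passivity observer integrates the power flow f·e, and whenever the observed energy would become negative (active behavior), the passivity controller injects just enough variable damping to dissipate the excess. The signals and sample time are toy assumptions.

```python
import numpy as np

def po_pc(flow, effort, dt):
    """One-port passivity observer (PO) and passivity controller (PC)."""
    energy = 0.0
    effort_out = np.empty_like(effort)
    for n in range(len(flow)):
        e_obs = energy + flow[n] * effort[n] * dt     # passivity observer
        if e_obs < 0.0 and flow[n] != 0.0:
            # passivity controller: variable damper dissipating the excess
            alpha = -e_obs / (flow[n] ** 2 * dt)
            effort_out[n] = effort[n] + alpha * flow[n]
            e_obs = 0.0
        else:
            effort_out[n] = effort[n]
        energy = e_obs
    return effort_out, energy

t = np.arange(0, 1.0, 0.001)
flow = np.sin(2 * np.pi * t)
effort = -0.5 * flow                 # an active (energy-generating) port
corrected, final_energy = po_pc(flow, effort, dt=0.001)
print(final_energy >= 0.0)           # the corrected port is passive
```

Because the damping is computed from the measured energy deficit at each step, the scheme needs no model of master, slave, or channel, which is the model-independence highlighted in the text.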
responding with reference to sound onsets may be the neural source not only for setting the shortest boundary for voice-onset-time-based phoneme discrimination in speech but also for the shortest general perceptual boundary separating acoustically meaningful patterns by their onset times. This general hypothesis of an ICC-based window of perceptual constancy of sounds in the time domain relies on how first-spike latencies to a sound are generated. Such latencies, as shown for auditory nerve fibers and neurons in the primary auditory cortex, do not depend on the type of sound but only on the initial acceleration of the peak pressure [65, 66]. We assume that this also holds for neurons in the inferior colliculus. Then the core of the FRA of most neurons, as defined by the variation of first-spike latencies in our present study, will not depend on the type of sound (clicks, tones, noise, animal calls, speech, etc.) as such but on the acceleration of the sound amplitude at the beginning of the sound. This acceleration may naturally differ between sounds, so that sounds with fast amplitude rise times, such as clicks, will lead to shorter latencies than, for example, the pronunciation of a vowel.
Despite being able to autonomously fulfill a significant range of objectives, state-of-the-art robots still need human assistance to perform more complex or unforeseen tasks. The level of human participation in robotic tasks can range from supervised autonomy to direct teleoperation. In the latter, an important characteristic of the telemanipulation setup is the ability to interact passively with the environment and the human operator. Among the passivity-based telemanipulation approaches developed to solve that issue, the Time-Domain Passivity Approach (TDPA) presents the advantage of adapting the energy dissipation necessary to passivate the teleoperation channel based on measurements of the flow and effort variables acting on the system. This characteristic allows the implementation of a model-independent passivity observer and passivity controller (PO-PC) pair, which is robust to varying time delays and packet loss in the communication channel. The adaptive characteristic of TDPA results in better performance compared to other passivity-enforcing controllers for teleoperation, e.g., wave-variable methods.
In cliometrics we constantly observe that regular shocks are overlaid by irregular shocks which appear only rarely (infrequent large shocks). This raises the question of whether long-term economic development is caused (or not) by such extraordinary shocks as wars, political measures, and institutional changes. If this were the case, economic growth and development could probably not be explained as a systematic endogenous process but would have to be traced back to specific historical events. In view of that, this article summarises a new econometric technique for shock analysis in historical economics: the outlier methodology: 1