Computational electromagnetics gained momentum once the Finite Element Method (FEM) started being applied to solving Maxwell's equations. Yee's work changed this scenario. The method proposed by Yee, later named the Finite Difference Time Domain (FDTD) method, gained a huge following mainly due to its simplicity. It is formulated for linear non-dispersive media on a Cartesian grid. A few years later, Weiland proposed another volume discretization method, termed the Finite Integration Technique (FIT). FIT presents a closed theory that can be applied to the full spectrum of electromagnetics. FIT on a Cartesian grid, with an appropriate choice of sampling points in the Time Domain (TD), is algebraically equivalent to FDTD; in other words, FIT is a more generalized form of FDTD. All the above-mentioned methods have been refined and applied to various classes of problems [4–16] over the years, and each enjoys its own set of advantages and disadvantages. FEM, more specifically continuous FEM, offers high flexibility in modeling curved surfaces when employed on unstructured grids. However, the TD formulation of FEM results in a globally implicit time-marching scheme that requires a global matrix inversion at each time step. This practically restricts FEM to Frequency Domain (FD) simulations. FD methods must solve an algebraic system of equations individually at every significant frequency in the desired range, making them more costly with increasing bandwidth. Fast frequency sweep algorithms can be employed to avoid computing the solution at each and every frequency sample; however, these algorithms are memory intensive. When parallel computing is opted for, FD methods require significant global operations.
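The simplicity that made FDTD popular is visible already in one dimension: the electric and magnetic fields live on a staggered Yee grid and are updated in a leapfrog fashion. The following is a minimal sketch in normalized units (c = dx = 1, Courant number 0.5, PEC boundaries); the grid size and the soft Gaussian source are illustrative choices, not taken from the text.

```python
import numpy as np

# 1D Yee scheme in normalized units: c = 1, dx = 1, Courant number dt = 0.5
nx, nt, dt = 200, 400, 0.5
Ez = np.zeros(nx)        # E at integer grid points (PEC ends: Ez[0] = Ez[-1] = 0)
Hy = np.zeros(nx - 1)    # H staggered at half points
for n in range(nt):
    Hy += dt * (Ez[1:] - Ez[:-1])                   # leapfrog H update (curl of E)
    Ez[1:-1] += dt * (Hy[1:] - Hy[:-1])             # leapfrog E update (curl of H)
    Ez[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source
```

Because the Courant number is below one, the scheme stays stable and the injected pulse simply bounces between the PEC walls.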
Many dynamical processes can be described by partial differential equations, and convection-dominated phenomena as occurring in fluid dynamics are often modeled by hyperbolic conservation laws. As examples, the Euler equations of gas dynamics, the shallow water equations or the Savage-Hutter equations are used to characterize different forms of inviscid fluid flow. Since the structure of hyperbolic conservation laws differs from that of elliptic or parabolic equations, there is a need for specially designed numerical methods that take into account characteristics of the solutions, such as discontinuities or the transport direction. Classically, partial differential equations are handled using Finite Element Methods (FEM), Finite Difference Methods (FDM) or Finite Volume Methods (FVM) on structured or unstructured meshes [BS02], [CL91], [GR96], [EGH00], [Krö97], [LeV92], [LeV02]. In these methods, a mesh is a covering of the computational domain consisting of pairwise disjoint polygons – the cells – which have to satisfy several geometrical and conformity constraints in order to ensure a good approximation of the solution. A main drawback of these methods when applied to problems with complicated or even time-dependent computational domains is the time-consuming construction of such a mesh.
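A method that respects the transport direction can be sketched for the simplest hyperbolic conservation law, the linear advection equation u_t + a u_x = 0: a first-order upwind finite volume scheme takes the numerical flux from the side the information comes from. The grid size, CFL number and initial pulse below are illustrative choices.

```python
import numpy as np

# first-order upwind finite volume scheme for u_t + a u_x = 0, a > 0, periodic domain
a, nx, cfl = 1.0, 100, 0.8
dx = 1.0 / nx
dt = cfl * dx / a
x = (np.arange(nx) + 0.5) * dx                    # cell centers
u = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)    # square pulse (discontinuous data)
for _ in range(int(1.0 / dt)):                    # advect for one full period
    flux = a * u                                  # upwind flux: information moves right
    u -= dt / dx * (flux - np.roll(flux, 1))      # conservative update
```

The scheme is conservative by construction (total mass is preserved exactly) and monotone, so the discontinuous pulse is smeared but no spurious oscillations appear.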
Two-phase flows are commonplace in numerous production engineering applications, such as the filling stage of injection molding and die casting, droplet formation and detachment during GMA welding, and die swell. Two types of approaches are used for the description of an evolving front, namely the interface-tracking method and the interface-capturing method, as already briefly introduced in Chapter 1. Each approach and its evolution is presented by Elgeti and Sauerland, combined with advanced methods such as extended finite elements (XFEM) and Isogeometric Analysis (IGA), a NURBS-based method. The interface-tracking method describes the front in an explicit manner, based on a deforming grid that conforms to the interface. Examples are the arbitrary Eulerian-Lagrangian technique [76, 77] and the deforming-spatial-domain/stabilized space-time method (DSD/SST). The interface-capturing method describes the interface in an implicit manner, through an auxiliary function defined on a fixed grid. Some representatives of the interface-capturing method are the volume-of-fluid (VOF) method, the level-set method, and the marker-and-cell (MAC) method. The level-set method is chosen and used in the upcoming sections for describing two-phase flow problems dealing with severe front movements. Such flows are governed by the transient equations of conservation of mass, momentum, and energy.
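As a minimal illustration of interface capturing (not the specific formulation used later in the text), the level-set function can be initialized as the signed distance to a circular interface on a fixed grid and then advected with the flow velocity; the zero contour implicitly tracks the moving front. Grid resolution, velocity and time step below are arbitrary.

```python
import numpy as np

# level set: the interface is the zero contour of phi on a fixed grid
n = 64
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing='ij')
phi = np.sqrt((X - 0.5)**2 + (Y - 0.5)**2) - 0.2   # signed distance to a circle

# advect phi with a uniform velocity u > 0 in x, first-order upwind in space
u, dt, dx = 0.5, 0.005, x[1] - x[0]
for _ in range(40):                                 # final time t = 0.2
    phi[1:, :] -= dt * u * (phi[1:, :] - phi[:-1, :]) / dx
# the captured interface (the phi < 0 region) has drifted by u * t = 0.1 in x
```

No explicit front representation is ever touched: the moving interface is recovered, whenever needed, as the zero contour of the fixed-grid field phi.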
The adaptive concept calls for a discretization scheme that is able to cope with fairly general grid partitions and in particular with hanging nodes. Due to the very local nature of the adaptation and the heterogeneous tessellation of the domain, it appears most suitable for the discretization to consider the grid as a fully unstructured mesh, composed of simply connected elements with otherwise arbitrary topology. The spatial discretization is based on a finite volume scheme for two- and three-dimensional flow problems. It is of second-order accuracy in space and time. In order to account for the directed transport of information within the solution domain, the convective fluxes are discretized with upwind methods. Two different approaches for the discretization of diffusive fluxes on unstructured meshes are investigated. Turbulence is accounted for by the Spalart-Allmaras one-equation model. In order to simulate stationary, inviscid low-speed flows, a local preconditioning technique is employed in conjunction with the AUSMDV(P) upwind method to operate effectively within the quasi-incompressible low Mach number regime. The methodology serves to bridge the gap between density-based schemes for compressible flows and pressure-based approaches for incompressible flows. To accelerate convergence, a fully implicit Newton-Krylov type approach has been developed, which is suitable for stationary and non-stationary flows.
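The Newton-Krylov idea (an inexact Newton method whose linear systems are solved by a Krylov subspace method, so the Jacobian never needs to be formed) can be sketched on a small model problem. The example below uses SciPy's newton_krylov on a discretized nonlinear boundary value problem u'' = u³; the problem, grid size and tolerance are illustrative and unrelated to the flow solver described in the text.

```python
import numpy as np
from scipy.optimize import newton_krylov

n = 50
h = 1.0 / (n + 1)

def residual(u):
    """Discrete residual of u'' = u**3 on (0, 1) with u(0) = 0, u(1) = 1."""
    upad = np.concatenate(([0.0], u, [1.0]))
    return (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2 - u**3

u0 = np.linspace(0.0, 1.0, n + 2)[1:-1]      # linear initial guess
sol = newton_krylov(residual, u0, f_tol=1e-9)
```

Each Newton step only requires residual evaluations (the Jacobian-vector products are approximated by finite differences inside the Krylov solver), which is precisely what makes the approach attractive for large implicit flow computations.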
and the nature of the chemicals. They investigated the synthesis of surface-stabilized TiO2 nanoparticles with different surfactants. The steric stabilization of the polymer and various functional groups of dispersants were also considered. The influence of various precursor concentrations and different surfactants on the PSDs was observed. Narrowly distributed spherical titania particles in the size range 10–100 nm were produced in a sol-gel synthesis from titanium tetra-isopropoxide. Many efforts have been made to develop appropriate processes to prepare titania nanoparticles experimentally. The most common procedures have been based on the hydrolysis of acidic solutions of titanium salts, gas-phase oxidation reactions of TiCl4 and hydrolysis reactions of titanium alkoxides [74, 4]. Another approach for preparing micron-sized particles with a very narrow size distribution was dispersion polymerization. Recently, chemical vapour deposition was also used to make such nanoparticles by thermal decomposition. The sol-gel process for the preparation of nanoparticles was preferred among the other methods mentioned above because it provided the possibility of deriving special shapes depending on the gel state. This process was rapid, had a low cost and gave better stability at the end of the reaction.
As a lot of expertise and development time was put into currently used codes, the desire to expand an existing code as opposed to writing a completely new one is very natural. Consequently, there are two main approaches to the design of numerical methods for the above-mentioned flows: use either the compressible or the incompressible Euler or Navier-Stokes equations as the basic model and improve upon the existing methods. Both approaches are pursued and widely used. One important idea in this context was the artificial compressibility method by Chorin, which inspired the preconditioner of Turkel for the compressible equations. These methods incorporate a preconditioning of the time derivative of the PDE, thus allowing faster convergence to steady state but sacrificing time accuracy. Along these lines, other preconditioners were proposed [23, 1]. The crucial idea is that as the Mach number tends to zero, the original system develops a large disparity in wave speeds, as some of the eigenvalues grow to infinity while others stay O(1). The preconditioner changes all the wave speeds to O(1), thus greatly improving the condition number of the system. For incompressible flows, there are two main techniques that are used to expand the validity of the scheme into the compressible regime. One class of schemes is based on the marker and cell method, MAC for short, by Harlow and Welch, which is a finite difference method on a staggered grid. The method is quite fast, but it is very difficult to use the staggered location of the variables in the context of unstructured grids. Recently, Wenneker, Segal and Wesseling proposed a method that faces this difficulty. On the other hand, Patankar and Spalding published their SIMPLE scheme in 1972. Based on an approximation of the pressure, a velocity field is computed using the momentum equations. Then, an elliptic pressure correction equation is solved to improve the approximation of the pressure.
These steps are then iterated until convergence is achieved. The approach is not limited to incompressible flows; see . Both the SIMPLE and the MAC scheme, as well as their improved descendants, have in common that they work on the velocity field and the pressure distribution. By contrast, codes for compressible flow are usually based on the conserved variables density, momentum and energy. Thus, in the context of methods for all Mach numbers it is useful to speak not of incompressible and compressible solvers, but of pressure-based and density-based schemes.
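The wave-speed disparity that motivates low Mach number preconditioning is easy to quantify: in one dimension the Euler flux Jacobian has eigenvalues u, u + c and u − c, so for M < 1 the ratio of the fastest to the slowest wave is (1 + M)/M and blows up as M → 0. A small numerical illustration (the value of c is an arbitrary choice):

```python
import numpy as np

c = 340.0                                          # assumed speed of sound, m/s
ratios = {}
for M in (0.5, 0.05, 0.005):
    u = M * c                                      # flow speed at Mach number M
    speeds = np.abs(np.array([u, u + c, u - c]))   # 1D Euler flux Jacobian eigenvalues
    ratios[M] = speeds.max() / speeds.min()        # = (1 + M) / M for M < 1
```

At M = 0.005 the ratio is already 201; a preconditioner of the Chorin/Turkel type rescales the acoustic eigenvalues so that this ratio stays O(1) for all Mach numbers.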
Gobet and Labart (2007); Gobet and Labart (2010); Gobet et al. (2016), among others.
A third branch of methods comprises the calibration methods. To improve the efficiency of given methods, some researchers developed parallel and distributed methods as well as variance reduction methods. For example, Peng et al. (2010) developed a parallel method for reflected BSDEs applied to option pricing. It is a method with block allocation. Tran (2011) reconstructed the four-step method with some new conditions in FBSDEs, associated with the Schwarz waveform relaxation method, to parallelize the related equations. Bender and Moseler (2010) introduced importance sampling to MC methods for pricing problems represented by BSDEs.
Anisotropic textile composites show complex deformation and failure behaviour. In particular, three-dimensionally reinforced textile composites are characterised by an orthotropic material behaviour. To achieve the full potential of textile composites, the material and especially the 3D failure behaviour has to be analysed. In particular, regions dominated by three-dimensional stress states, e.g. load introduction areas, have to be explored in more detail. In such areas or in thick structures, the proper computation of three-dimensional stress distributions requires the use of volume elements, because shell theories reach their limits of validity. Especially for the purpose of analysing thick composite structures, a volume element based on hierarchical shape functions is being developed. Additionally, in our work the element is used within the p-version of the finite element method to achieve superior convergence properties during the computation process. Thereby, different a posteriori error estimators are applied to control the spatial adaptivity of the polynomial order of the shape functions. In the case of the selected anisotropic ansatz space for the displacements, even a simple error indicator is able to distinguish between the error contributions of in-plane and out-of-plane stresses.
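In one dimension, a hierarchical (p-version) basis consists of the two linear nodal modes plus internal modes built from integrated Legendre polynomials, which vanish at the element ends; raising p only adds functions instead of replacing the basis. The following is a sketch of this standard construction, not of the specific 3D element developed in the text.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def hierarchic_modes(p):
    """1D hierarchical basis on [-1, 1]: two linear nodal modes plus
    integrated-Legendre internal modes phi_j = (P_j - P_{j-2}) / sqrt(4j - 2)."""
    modes = [lambda xi: 0.5 * (1 - xi), lambda xi: 0.5 * (1 + xi)]
    for j in range(2, p + 1):
        Pj, Pjm2 = Legendre.basis(j), Legendre.basis(j - 2)
        modes.append(lambda xi, Pj=Pj, Pjm2=Pjm2, j=j:
                     (Pj(xi) - Pjm2(xi)) / np.sqrt(4 * j - 2))
    return modes

basis = hierarchic_modes(5)   # 2 nodal + 4 internal modes
```

Because the internal modes vanish at ξ = ±1, the bases for increasing p are nested, which is exactly what makes p-adaptivity driven by an error indicator cheap: refining means appending modes, not rebuilding the element.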
Telepresence combines different sensory modalities, including vision and touch, to produce a feeling of being present in a remote location. The key element to successfully implement a telepresence system, and thus to allow telemanipulation of a remote environment, is force feedback. In a telemanipulation, mechanical energy is conveyed from the human operator to the manipulated object found in the remote environment. In general, energy is a property of all physical objects, fundamental to their mutual interactions, in which energy can be transferred among the objects and can change form but cannot be created or destroyed. In this thesis, we exploit this fundamental principle to derive a novel bilateral control mechanism that allows the design of stable teleoperation systems with any conceivable communication architecture. The rationale starts from the fact that the mechanical energy injected by a human operator into the system must be conveyed to the remote environment and vice versa. As will be seen, setting energy as a control variable allows a more general treatment of the system than the more conventional setting of specific system variables, such as position, velocity or force. Through the Time Delay Power Network (TDPN) concept, the issue of defining the energy flows involved in a teleoperation system is solved independently of the communication architecture. In particular, communication time delays are found to be a source of virtual energy. This effect is observed with delays starting from 1 millisecond. Since this energy is intrinsically added, the resulting teleoperation system can be non-passive and thus become unstable. The Time Delay Power Networks are found to be carriers of the desired exchanged energy, but also generators of virtual energy due to the time delay.
Once these networks are identified, the Time Domain Passivity Control approach for TDPNs is proposed as a control mechanism to ensure system passivity and therefore system stability. The proposed method is based on the simple fact that this energy intrinsically added by the communication must be transformed into dissipation. The system then becomes closer to the desired one, in which only the energy injected on one side of the system is conveyed to the other. The resulting system presents two qualities: on the one hand, system stability is guaranteed through passivity, independently of the chosen control architecture and communication channel; on the other, performance is maximized in terms of energy transfer fidelity. The proposed methods are supported by a set of experimental implementations using different control architectures and communication delays ranging from 2 to 900 milliseconds. An experiment that includes a space communication link based on the geostationary satellite ASTRA concludes this thesis.
Inserting new knots into the grid is not a good adaptive strategy, because of the global effect of knot insertion. Instead, hierarchical refinement is very effective for tensor-product splines. It allows a local change of control points with subsequent modification of small details in some regions, without affecting other regions. For the implementation, a data structure is required that both describes the refinement and accounts for the discretization of the domain. Furthermore, algorithms for assembling and solving the finite element system are needed. In this dissertation
Figure 3: Cut of the solution, SUPG method at t = 2, left Q1, right P1. The solutions obtained with the FEM–FCT methods are almost free of spurious oscillations. These schemes gave the best results in the numerical studies. Computing times for the methods are given in Table 1. For solving the equations in the nonlinear schemes, the same fixed point iteration as described in [4, 6] was used. The iterations were stopped when the Euclidean norm of the residual fell below 10⁻⁸. It can be observed that the nonlinear schemes are considerably more expensive than the linear methods. In KLR02, the computing times increase with increasing size of the user-chosen parameter. All observations correspond to the results obtained in  for 2D problems.
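A generic version of such a fixed point iteration with a Euclidean-norm stopping criterion can be sketched as follows; the contraction map in the usage line is purely illustrative and has nothing to do with the FCT systems themselves.

```python
import numpy as np

def fixed_point(G, u0, tol=1e-8, maxit=1000):
    """Iterate u <- G(u) until the Euclidean norm of the update G(u) - u
    drops below tol, mirroring the 10^-8 stopping criterion above."""
    u = np.asarray(u0, dtype=float)
    for k in range(1, maxit + 1):
        unew = G(u)
        if np.linalg.norm(unew - u) < tol:
            return unew, k
        u = unew
    raise RuntimeError("fixed point iteration did not converge")

# usage on an illustrative contraction: u = cos(u) has fixed point ~0.739085
sol, nit = fixed_point(np.cos, np.zeros(1))
```

The iteration count scales with the contraction rate of G, which is why the nonlinear schemes, solving such an iteration in every time step, are markedly more expensive than the linear ones.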
The PO introduced in  is used, where the constant force or velocity assumption is removed. One of the main problems in using the TDP approach for time-delayed systems is that, due to the nature of such systems, some energy is accumulated in the channel. This can make the system vulnerable to instability, because the time the observed energy needs to fall from the accumulated value to a negative value is often too long. That makes the PC too slow to react against active behaviour. Fig. 9 shows a plot of the observed energy flow at one of the ports of a time-delayed system. It should be noted that at one point the upward tendency changes, indicating that the flow inside the network is changing. However, the PC does not react until t = 3.5 s, the moment at which the energy drops below zero. An energy-resetting element driven by the following two conditions is proposed:
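The delayed reaction can be reproduced with a toy passivity observer: the port energy is the running integral of power, and the controller only triggers once this integral becomes negative, long after the power itself has turned negative. All signals and numbers below are synthetic illustrations, not the data of Fig. 9.

```python
import numpy as np

def observed_energy(force, velocity, dt):
    """Passivity observer: cumulative port energy E(t) = sum f * v * dt.
    E(t) < 0 indicates active (non-passive) behaviour at the port."""
    return np.cumsum(force * velocity) * dt

# synthetic example: net port power turns negative at t = 2 s, but the
# accumulated energy keeps the observer positive until much later
dt = 1e-3
t = np.arange(0.0, 4.0, dt)
power = np.where(t < 2.0, 1.0, -1.5)          # force * velocity, made up
E = observed_energy(np.ones_like(t), power, dt)
t_trigger = t[np.argmax(E < 0)]               # first time the PC would act (~3.33 s)
```

Here the activity starts at t = 2 s, yet the observer only crosses zero around t = 3.33 s; the energy-resetting element is meant to remove exactly this accumulated headroom.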
The TDIE methods, conventionally solved by marching-on-in-time (MOT) schemes, are increasingly receiving attention, especially in the electromagnetics community, for complex broadband surface scattering phenomena and transient radiation problems. The MOT schemes, a computationally efficient time-domain formulation of the well-known boundary element method (BEM) for solving the TDIEs, have been shown to be prone to instabilities that appear in the form of exponentially growing oscillations in the late-time response and alternate in sign at each time step. The instabilities originate at the system discretization stage, in the conversion of the integral equation to a discrete time-space model. Historically, many authors have postponed MOT instabilities through temporal filtering [1, 7, 13, 14, 16, 18, 19, 23]. The time averaging used in filtering, however, may adversely affect the accuracy of the final solution. The MOT instability arises when the poles that characterize the integral-equation system being solved drift into the right half-plane due to the approximations in the numerical scheme [13, 14]. Poles describing interior resonances permitted by the time-domain electric field integral equation (EFIE) and magnetic field integral equation (MFIE) are prime candidates for such undesirable shifts, as they reside on the imaginary axis. A linear combination of the EFIE and MFIE, the so-called combined field integral equation (CFIE), has been exploited to eliminate the interior cavity modes that can corrupt the solution of the EFIE and MFIE [3, 4, 6]. Walker demonstrated that judiciously constructed MOT schemes designed to solve the MFIE, relying on accurate spatial integration rules and implicit time-stepping schemes, are stable for practical purposes. Nonetheless, precautions have to be taken in the spatial and temporal discretizations to avoid any pole displacement into the right half-plane.
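In the discretized setting, the continuous-time right half-plane corresponds to the exterior of the unit circle: a linear time-stepping recursion is late-time stable only if all poles of its companion matrix lie strictly inside the unit disk. A toy check for a scalar recursion is sketched below; the coefficients are invented for illustration, and a real MOT system is a large matrix recursion rather than a scalar one.

```python
import numpy as np

def mot_stable(coeffs):
    """Stability of the scalar recursion x_n = c1*x_{n-1} + ... + ck*x_{n-k}:
    all companion-matrix eigenvalues (discrete-time poles) must satisfy |z| < 1."""
    k = len(coeffs)
    C = np.zeros((k, k))
    C[0, :] = coeffs
    if k > 1:
        C[1:, :-1] = np.eye(k - 1)
    return bool(np.max(np.abs(np.linalg.eigvals(C))) < 1.0)

stable_ok = mot_stable([0.5, 0.3])     # both poles inside the unit circle
stable_bad = mot_stable([1.2, -0.1])   # one pole outside: exponential late-time growth
```

A pole pushed just outside the unit circle by discretization error is precisely what produces the exponentially growing, sign-alternating late-time oscillations described above.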
With respect to the spatial discretization, most researchers have used triangular patch modeling, in which all interior inner products of the BEM are evaluated using a fixed Gaussian quadrature rule over triangular subdomains, e.g. the 7-point Gaussian quadrature rule in . Applying a predefined number of quadrature points, however, not only leads to a computationally inefficient algorithm for computing the interactions between basis functions located far from each other, but also prohibits a sufficiently precise calculation of the mutual coupling of neighboring cells . In other words, fixed-point quadrature schemes prevent control over the precision of the numerical integrations on a previously meshed structure.
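The 7-point rule mentioned above is the standard degree-5 symmetric Gauss rule on the reference triangle; a sketch with the commonly tabulated barycentric points and weights (normalized to sum to one, then scaled by the triangle area) is given below.

```python
# 7-point, degree-5 symmetric Gauss rule on the reference triangle
a1, w1 = 0.470142064105115, 0.132394152788506
a2, w2 = 0.101286507323456, 0.125939180544827
bary = [(1/3, 1/3, 1/3)]          # centroid
weights = [0.225]
for a, w in ((a1, w1), (a2, w2)):
    b = 1.0 - 2.0 * a
    bary += [(a, a, b), (a, b, a), (b, a, a)]   # three symmetric points per orbit
    weights += [w, w, w]

def integrate_ref_triangle(f):
    """Integrate f(x, y) over the unit triangle {x, y >= 0, x + y <= 1}."""
    return 0.5 * sum(w * f(l2, l3) for (l1, l2, l3), w in zip(bary, weights))

val = integrate_ref_triangle(lambda x, y: x**2 * y)   # degree 3 <= 5: exact, = 1/60
```

Because the rule is exact only up to degree 5 and its point count is fixed, the integration error of a nearly singular neighboring-cell interaction cannot be reduced without changing the rule, which is the lack of precision control criticized above.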
Tomography is actually a 3D problem that should be faced in all three dimensions at the same time, but if the aperture dimension is small compared to the slant-range distance, then it is possible to separate the focusing into three steps (range, azimuth, tomographic direction). In fact, in  it is shown that an extension of SAR focusing can successfully face tomographic problems and provide reliable results. The so-called SpecAn algorithm, which exploits some analogy between the TomSAR and ScanSAR signals, was used for the first demonstration on real airborne data.
The current status of this ongoing work is shown in figure 1. So far, 33 deltas/alluvial fans, 213 proposed paleolakes, 25 valley networks and 13 outflow channels have been entered into the catalogue. The acquisition of the parameters mentioned above and the derivation of discharges and water volumes are still ongoing.
Coupled meshes are easily handled if the meshes do not change during a computation. In the setting of adaptive methods, the problem of coupling grids becomes inherently more complex. If an adaptive method requires a change of any of the involved meshes, we might lose the useful property that the surface grids were originally defined as the collection of bulk faces. Without this property, the transfer of data between bulk and surface meshes becomes much more difficult. In this scenario, after each mesh change one would have to somehow reconstruct the connection of bulk elements with surface elements, a cumbersome and possibly costly process. The aforementioned mapping of DOFs would no longer exist and would have to be established from scratch.
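The mapping in question can be pictured as a small lookup table from bulk faces to surface elements: as long as the surface grid is the collection of bulk faces, data transfer is a plain lookup, and it is exactly this table that would have to be rebuilt after every mesh change. The element and face identifiers below are invented for illustration only.

```python
# surface elements defined as faces of bulk elements: an explicit lookup table
bulk_faces = {0: [10, 11], 1: [12]}          # bulk element id -> its boundary face ids
surface_of_face = {10: 0, 11: 1, 12: 2}      # face id -> surface element id

def bulk_to_surface(bulk_id):
    """Transfer target: the surface elements coupled to one bulk element."""
    return [surface_of_face[f] for f in bulk_faces[bulk_id]]
```

If refinement invalidates the face ids, both dictionaries become stale at once, which is the reconstruction cost the text warns about.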
A different topic, albeit also relating to an echo's time domain, was addressed in chapter 3. Many studies have pertained to the question of how and to what degree a bat extracts information about the ensonified objects from the echo, for example Beedholm (2006); Firzlaff et al. (2007); Grunwald et al. (2004); Habersetzer & Vogler (1983); Hübner & Wiegrebe (2003); Schmidt (1988, 1992); Simmons (1971, 1973); Simmons et al. (1975). Some reach the conclusion that the highly precise performance exhibited by bats in echolocation tasks can only be explained by a detailed comparison of emitted call and echo, accomplished by the neural equivalent of a cross-correlation of call and received echo (Simmons, 1979; Simmons et al., 1990a). The spectrogram correlation and transformation model, for example, is based on this assumption (Saillant et al., 1993). The authors suggest for their model a reconstruction of the IR's time domain based on a recoding of the echo's spectral information into temporal information. A later study by Peremans & Hallam (1998), however, demonstrated the limits of this model: the echo of the ensonified object may not possess more than two reflections, with at least 20 µs delay between them. In addition, the authors always used a copy of a bat's echolocation call to test the model, thus assuming that the ensonified object consists of only one plane reflecting surface. This is not the case for most natural objects, which usually comprise several different reflecting surfaces.
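The cross-correlation idea invoked by these studies is straightforward to sketch: the lag at which the correlation of emitted call and received echo peaks estimates the echo delay. The sampling rate, call shape and delay below are synthetic choices for illustration only, not parameters from the cited experiments.

```python
import numpy as np

# estimate echo delay via cross-correlation of emitted call and received echo
fs = 250_000                                   # assumed sampling rate in Hz
t = np.arange(0, 0.002, 1.0 / fs)              # 2 ms synthetic call
call = np.sin(2 * np.pi * 40_000 * t) * np.hanning(t.size)  # windowed 40 kHz tone
true_delay = 175                               # samples, i.e. 0.7 ms
echo = np.concatenate([np.zeros(true_delay), 0.3 * call])   # attenuated, delayed copy
xcorr = np.correlate(echo, call, mode='full')
lag = int(np.argmax(xcorr)) - (call.size - 1)  # recovers the 175-sample delay
```

With a single plane reflector the peak is unambiguous; the limitation stressed by Peremans & Hallam arises precisely when several closely spaced reflections overlap and the correlation peaks merge.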
fixed domain (rectangle). We use standard techniques in the proof: Rothe's method for the time discretisation, proving the existence of the stationary solution and then deriving a priori estimates, which result in weak convergence of the approximating sequences in the corresponding spaces. The ε-regularisation of the continuity equation is very useful for obtaining the compactness of these sequences in the space L¹ and consequently the strong convergence. Then, by passing to the limit in the weak formulation, we prove the existence of the weak solution. We then show the uniqueness of this solution and also the continuous dependence on the data. In the last step we prove the convergence of the solution for ε → 0, i.e. we prove the existence of the solution to the problem introduced at the beginning of this thesis (in Chapter 1). The numerical part of this thesis deals with a simulation of pulsating flow in a time-deforming 2D domain, where we use the UG software toolbox with already implemented support for 2D moving domains. We first present experiments with the problem defined by equations (2.9)–(2.18), which was proposed and studied by Quarteroni et al. Then we deal with a numerical solution of our approximation of the original problem (whose existence and uniqueness is proved in the theoretical part) for physically based data. We experimentally test the global iterative method for decoupling the unknown domain deformation and the fluid flow. The experiments with Quarteroni's problem as well as with our approximated problem indicate the convergence of this method, although we do not prove the convergence theoretically. The domain geometry stabilises after approximately 5 iterations of this global method.
from ecological time series to identify processes, wavelet analysis can also be used very efficiently to extract short-term dynamics for the identification of processes that occur during very short time windows, as shown in chapter 4 of this thesis. The time window of analysis is in this respect also important, as was shown in chapters 2 and 4. The wavelet coherence analysis applied in chapter 4 of this thesis offers a powerful tool that can help to derive processes from time series and to identify the time scales they operate on and the time windows of their occurrence. An advantage of wavelet analysis techniques is that prior assumptions about the dominant processes governing the data are not required, a benefit stressed by Lischeid (2009). Instead, processes can be identified that dominate oscillations in the data during particular time windows and at specific time scales. The visualization of the wavelet coherence as in Figs. 4.2 – 4.10 can help the human brain to grasp structures in the data at a glance. Chapter 4 showed that the variability, and especially the frequency-resolved covariation, of multivariate high-frequency data can serve as a diagnostic tool to identify processes dominating a system and to detect changes in the driving or limiting forces. The wavelet coherence is superior to more commonly used methods that address temporal coherence by applying simple correlation analysis (Magnuson et al. 1990; Wynne et al. 1996; Kratz et al. 1998; Pace and Cole 2002; Salmaso et al. 2014) or infer causality from linear regressions (Gaiser et al. 2009; Eleveld 2012). These lack the ability to detect transient behaviors and frequency-specific relationships. Thus, potentially frequency-dependent correlations or processes limited to specific time windows can go undetected. The potential of wavelet coherence is obviously not limited to lake ecosystems.
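Wavelet coherence is built from two continuous wavelet transforms (via smoothed cross- and auto-spectra); the transform itself, which resolves a signal simultaneously in time and scale, can be sketched with a Morlet mother wavelet as follows. The signal, scale range and sampling step are illustrative; for w0 = 6 the Fourier period is roughly 1.03 times the scale.

```python
import numpy as np

def morlet_cwt(x, scales, dt=1.0, w0=6.0):
    """Continuous wavelet transform with a Morlet mother wavelet; wavelet
    coherence is built from two such transforms via smoothed cross-spectra."""
    n = x.size
    tau = (np.arange(n) - n // 2) * dt
    W = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        psi = np.pi**-0.25 * np.exp(1j * w0 * tau / s - 0.5 * (tau / s)**2) / np.sqrt(s)
        W[i] = np.convolve(x, psi, mode='same')   # wavelet response at scale s
    return W

# a pure oscillation of period 16 concentrates power near the matching scale
t = np.arange(512)
x = np.sin(2 * np.pi * t / 16.0)
scales = np.arange(4, 40)
power = np.abs(morlet_cwt(x, scales))**2
peak_scale = scales[power[:, 100:-100].mean(axis=1).argmax()]  # ~15-16
```

Because the power is localized in both time and scale, transient, frequency-specific covariation between two series becomes visible, which is exactly what simple correlation analysis misses.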
Another successful application is the detection of flood-triggering situations and the underlying hydrometeorological constraints (Schaefli et al. 2007).