soils or to test new material laws such as the barodesy model. The results of these tests provide the theoretical basis for subsequent simulations and analysis in geotechnical engineering (e.g., cuts, embankments, foundations). Simulation tools which are both reliable and economical in terms of computing time are indispensable for such applications. In this contribution we introduce two novel meshfree generalized finite difference methods – the Finite Pointset Method (FPM) and the Soft PARticle Code (SPARC) – to simulate the standard benchmark problems “oedometric test” and “triaxial test”. One of the most important ingredients of both meshfree approaches is the weighted moving least squares method, which is used to approximate the required spatial partial derivatives of arbitrary order on a finite pointset.
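The weighted moving least squares step can be sketched in a minimal 1D form. This is an illustrative NumPy toy with a hypothetical Gaussian weight and a quadratic basis, not the FPM/SPARC implementation, which applies the same idea to 3D point clouds with their own weight functions:

```python
import numpy as np

def wmls_derivative(x_c, x_pts, f_pts, h):
    """Approximate f'(x_c) from scattered 1D points via weighted moving
    least squares: fit a local quadratic a0 + a1*(x-x_c) + a2*(x-x_c)**2
    with Gaussian weights and read the derivative off a1."""
    dx = x_pts - x_c
    # Vandermonde matrix of the local quadratic basis
    M = np.column_stack([np.ones_like(dx), dx, dx**2])
    # Gaussian weight, decaying with distance to the evaluation point
    w = np.exp(-(dx / h) ** 2)
    W = np.diag(w)
    # Weighted normal equations  (M^T W M) a = M^T W f
    a = np.linalg.solve(M.T @ W @ M, M.T @ W @ f_pts)
    return a[1]  # coefficient of (x - x_c) is the first derivative

# Scattered (non-uniform) point cloud around x = 0.5
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.3, 0.7, 15))
d = wmls_derivative(0.5, x, np.sin(x), h=0.1)
print(abs(d - np.cos(0.5)))  # small approximation error
```

Because the basis contains all quadratics, the approximation is exact for polynomials up to degree two; for general smooth functions the error is controlled by the weight radius h.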
Abstract: This paper reviews numerical methods for solving BSDEs, in particular finite difference methods, which we divide into three branches. Distributed (or parallel) methods have become a topic of growing interest, and this is a key reason for presenting the review. We give a brief survey of the related financial problems, including solution and simulation methods for BSDEs. We first describe BSDEs, and then outline the main techniques and main results. In addition, we compare the errors of these methods with those of the Euler method for BSDEs.
Most problems from mathematics, nature, and engineering can be classified as part of continuum mechanics and can be modeled as boundary value problems using partial differential equations. These problems are usually formulated on a subset of the three-dimensional continuous space and can often not be solved analytically, so that we need to find a solution with numerical approximations. Numerical methods require the discretization of the continuous space, which is divided into smaller entities that couple with neighboring ones. A large variety of these methods exist, of which we briefly describe the most commonly used ones. With finite difference methods (FDM), differential operators are evaluated as difference quotients on a finite number of grid points. The idea of finite volume methods (FVM) is the preservation of conserved quantities on small volumes by applying the Gauss-Ostrogradski theorem, which results in balancing volumetric averages with fluxes on interfaces of neighboring volumes. In finite element methods (FEM), we specify a function space of piecewise polynomials in which we find the function that satisfies the partial differential equation of the investigated problem as a best approximation. The residual is projected orthogonally to the space of piecewise polynomials, which is equivalent to minimizing the energy for elliptic equations (Brenner and Scott 2008).
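The FDM idea can be illustrated in a few lines: on a uniform grid, the second derivative is replaced by a central difference quotient. This NumPy sketch is a generic example, not tied to any particular solver:

```python
import numpy as np

# Central difference quotient on a uniform grid: the second derivative
# u''(x) is replaced by (u[i-1] - 2*u[i] + u[i+1]) / h**2 at interior points.
n = 101
x = np.linspace(0.0, np.pi, n)
h = x[1] - x[0]
u = np.sin(x)

d2u = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2  # approximates u'' = -sin(x)
err = np.max(np.abs(d2u + np.sin(x[1:-1])))
print(err)  # truncation error of order h**2
```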
The presented finite element method with local Trefftz trial functions is a new strategy based on the idea in a recent publication. We have given several novel developments that belong to the first extensions of the primal strategy. The introduced lower as well as higher order trial functions admit optimal rates of convergence on polygonal meshes. This approach, also called BEM-based FEM, yields conforming approximations on these arbitrary meshes. Thus, the developments fit into current research topics in several areas. We mention only the discontinuous Petrov-Galerkin methods, the mimetic finite difference methods, multiscale finite element methods, and the topic of generalized barycentric coordinates in computer graphics. In all these methods, the use of polygonal meshes and the generalization to higher order approximation is discussed in the latest literature, for example in [11, 23, 30, 35, 61].
Keywords: Wave-based room acoustic simulations, sound absorption, extended-reaction, high-order numerical methods
Methods for simulating the acoustics of rooms are generally divided into two categories: the geometrical acoustics methods (e.g. ray tracing and the image source method) and the wave-based methods (e.g. finite element methods, boundary element methods and finite difference methods) [1]. In geometrical acoustics, several simplifying assumptions regarding sound propagation and sound reflection are made, which reduces the computational complexity, but at the cost of limited accuracy, particularly in rooms where wave phenomena such as diffraction and interference are prominent [2, 3, 4]. In the wave-based methods, the governing physical equations that describe wave motion in an enclosure are solved numerically. These methods are therefore, from a physical point of view, more accurate, because they inherently account for all wave phenomena. The drawback is that wave-based methods are computationally much more demanding than the geometrical methods.
Figure 2.2 – Comparison of analytic solutions obtained by schemes with different orders of accuracy.
limiter. These functions need three input values; however, the resulting schemes are only second-order accurate. Nonetheless, with three cell mean values as input, it is possible to construct a quadratic polynomial, leading to a third-order accurate scheme. This yields two advantages: without enlarging the stencil of the numerical scheme, we get one order of accuracy “for free”. Furthermore, by remaining in the traditional setup, the newly developed limiter functions can easily be implemented in existing (commercial) codes by changing only a few lines. There is yet another reason why third-order methods are advantageous. Most physically relevant problems include discontinuities. The model problem of a discontinuous function is the step function. Examining analytic solutions of numerical methods with different orders of accuracy with the help of the Fourier transform leads to a surprising observation: the amplitude of the over- or undershoot caused by a third-order method is smaller than the amplitude of a second-, fourth- or fifth-order method. This observation is summarized in Fig. 2.2.
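The quadratic reconstruction from three cell means can be sketched as a NumPy toy. It uses the standard third-order interface formula on a uniform grid (the unique quadratic whose cell averages match the three given means, evaluated at the right cell face); the grid and test function are illustrative only:

```python
import numpy as np

def interface_value(ubar_m, ubar_0, ubar_p):
    """Third-order value at the right face x_{i+1/2} from the three
    uniform-grid cell means (ubar_{i-1}, ubar_i, ubar_{i+1})."""
    return (-ubar_m + 5.0 * ubar_0 + 2.0 * ubar_p) / 6.0

def cell_mean(a, b):
    # Exact cell mean of f(x) = sin(x) over [a, b]
    return (np.cos(a) - np.cos(b)) / (b - a)

# Measure the reconstruction error for two mesh widths
errs = []
for h in (0.1, 0.05):
    edges = 1.0 + h * np.arange(4)          # three cells; middle face at edges[2]
    m = [cell_mean(edges[k], edges[k + 1]) for k in range(3)]
    errs.append(abs(interface_value(*m) - np.sin(edges[2])))

print(errs[0] / errs[1])  # tends to 8 under mesh halving, i.e. third order
```

Halving h reduces the error by roughly a factor of eight, confirming third-order accuracy without enlarging the three-value stencil.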
The use of wave-based methods for the simulation of the acoustic field inside rooms is still very far from the widespread use of ray-tracing/image source algorithms. The difficulties in their implementation range from computational cost to missing data on the acoustic characteristics of materials, among many other factors. However, it is undeniable that current research in this field has allowed the methodology to become more accessible, not only to researchers but also to practitioners such as acoustical consultants.
methods are used to investigate the flow-induced noise, including the rotating domain of the fan. Special focus is placed on velocity terms and their influence on aeroacoustic source and wave propagation effects. For this purpose, different hydrodynamic/acoustic splitting methods are analyzed and reformulated. The resulting sound fields are computed and interpreted based on the theoretical assumptions. Finally, the computational effort in the finite volume framework is discussed and the results are compared to experimental data.
‘difference’; the constitutive exclusion leaves a trace through which the difference returns and unsettles the apparent closure of the established model. Postcolonial and transnational feminist theories have shown the ways in which the figure of third world women has been constituted as the ‘difference’ both marking and enacting the limits of a range of modern universalisms (Spivak, 1987; Mohanty and Alexander, 1991).
The least-squares finite element method (LSFEM) has attracted interest in recent years as an alternative method for solving partial differential equations (PDEs) arising in mechanical problems. The method offers several advantages compared to the well-known Galerkin variational principles: a unified mathematical procedure for constructing first-order systems for all types of PDEs, positive definite and symmetric system matrices, and an a posteriori error estimator without additional cost. These advantages also hold for differential equations with non-self-adjoint operators such as the incompressible Navier-Stokes equations. In this work, the LSFEM is applied to the steady incompressible Stokes and Navier-Stokes equations as well as to the equations of small strain elastodynamics. The aim is to investigate the accuracy and performance of different formulations which were proposed for the fluid and solid problems. Several numerical tests are investigated, and implementation aspects are discussed. The simulation of fluid flow through 3D porous structures is one field of application considered in this thesis. Furthermore, adaptive meshing strategies are examined. A special focus is on formulations in terms of stresses and velocities as primary fields, which can be used in a least-squares FEM based fluid-structure interaction (FSI) approach. The proposed LSFEM-FSI approach relies on the idea of solving the interaction in a monolithic manner, with an inherent fulfillment of the interaction conditions due to the conforming discretization of the stresses and velocities in suitable Sobolev spaces. Numerical tests of the LSFEM-FSI in the small strain regime are provided.
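The least-squares idea can be illustrated on the Poisson model problem (simpler than the Stokes and elastodynamics systems treated in the thesis, but showing the same construction): the second-order equation is rewritten as a first-order system, whose residuals are then minimized in L²:

```latex
% Model illustration: first-order reformulation of -\Delta u = f
% and the associated least-squares functional.
\begin{align}
  \boldsymbol{\sigma} - \nabla u &= 0,
  \qquad
  -\operatorname{div}\boldsymbol{\sigma} = f
  \quad \text{in } \Omega,\\
  \mathcal{J}(u,\boldsymbol{\sigma})
  &= \tfrac12\,\bigl\|\boldsymbol{\sigma} - \nabla u\bigr\|_{L^2(\Omega)}^2
   + \tfrac12\,\bigl\|\operatorname{div}\boldsymbol{\sigma} + f\bigr\|_{L^2(\Omega)}^2
  \;\longrightarrow\; \min.
\end{align}
```

Minimizing this convex functional over a conforming discrete space yields a symmetric positive definite system regardless of whether the original operator is self-adjoint, and the value of the functional itself serves as the built-in a posteriori error estimator.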
The idea of calculations from first principles occurred at a very early point. In 1929, Dirac already stated that the equations which describe all of the interactions of electrons and nuclei are far too complicated to be solved and that a practical numerical method is required. Since then, a variety of approximations have been introduced. The motion of the nuclei is usually separated from the electronic degrees of freedom and often treated semi-classically. However, the electrons also suffer from immense complexity. The electronic many-body wave function possesses far too many degrees of freedom to be treated numerically in full detail. The number of storage elements required to allow N_e electrons access to d configurations (lattice sites or predefined orbitals) is d^{N_e}. This number easily grows to unachievable values, since both d and N_e are linear in the system size. Approximations with products of single-particle wave functions lead to the class of quantum chemistry methods, including the Hartree method, Hartree-Fock, coupled clusters and configuration interaction. From these methods, we can learn that the electron-electron interaction introduces a correlation effect which is very difficult to treat in an exact manner.
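The d^{N_e} scaling can be made concrete with a few lines of Python (the sizes are illustrative only), contrasting the full many-body amplitude count with the d·N_e coefficients of a single-particle product ansatz:

```python
# Full many-body wave function: one amplitude per configuration of all
# N_e electrons, i.e. d**N_e entries.  A product of single-particle
# orbitals needs only d*N_e coefficients.
def full_storage(d, n_e):
    return d ** n_e

def product_storage(d, n_e):
    return d * n_e

for size in (4, 8, 16):
    d = n_e = size   # both grow linearly with the system size
    print(size, full_storage(d, n_e), product_storage(d, n_e))
```

Already for d = N_e = 16 the full representation needs 16^16 ≈ 1.8·10^19 amplitudes, while the product ansatz needs only 256 coefficients, which is why mean-field-like approximations are unavoidable.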
Several dedicated preconditioners have been developed for immersed finite element methods. It is demonstrated in [47] that a diagonal preconditioner in combination with the constraining of very small basis functions results in an effective treatment for systems with linear bases. With certain restrictions on the cut-element geometry, [48] derives that a scalable preconditioner for linear bases is obtained by combining diagonal scaling of basis functions on cut elements with standard multigrid techniques for the remaining basis functions. In [49] the scaling of a Balancing Domain Decomposition by Constraints (BDDC) method is tailored to cut elements, and this is demonstrated to be effective with linear basis functions. An algebraic preconditioning technique is presented in [35], which results in an effective treatment for smooth function spaces. References [50] and [51] establish that additive Schwarz preconditioners can effectively resolve the conditioning problems of immersed finite element methods with higher-order discretizations, for both isogeometric and hp-finite element function spaces and for both symmetric positive definite (SPD) and non-SPD problems. Furthermore, the numerical investigation in [50] conveys that the conditioning of immersed systems treated by an additive Schwarz preconditioner is very similar to that of mesh-fitting systems. In particular, the condition number of an additive Schwarz preconditioned immersed system exhibits the same mesh-size dependence as mesh-fitting approaches [52, 53], which opens the door to the application of established concepts of multigrid preconditioning. It should be mentioned that similar conditioning problems as in immersed methods occur in XFEM and GFEM. Dedicated preconditioners have been developed for these problems as well, a survey of which can be found in [50].
On-the-fly composition of speech recognition transducers, which is used for dynamic search networks, is not straightforward, as we describe later in this thesis. Several methods for efficient on-the-fly composition have been developed. One of them is to build the search network with a lower order LM, a unigram in most cases. The composition with a modified higher order LM is then computed on demand during recognition [Dolfing & Hetherington 01, Willett & Katagiri 02]. A similar approach uses so-called hypothesis rescoring, which also applies the probabilities of a higher order LM during recognition, while the search network is constructed using a small LM [Hori & Hori + 04, Hori & Hori + 07]. A method for dynamic transducer composition with on-the-fly determinization and minimization is described in [Caseiro & Trancoso 01b, Caseiro & Trancoso 01a, Caseiro & Trancoso 02, Caseiro & Trancoso 03, Caseiro 03, Caseiro & Trancoso 06]. This method has been extended and generalized in [Cheng & Dines + 07]. An alternative special-purpose algorithm for on-the-fly composition of speech recognition transducers is described in [McDonough & Stoimenov + 07]. The concept of composition filters allows for efficient on-the-fly composition, label pushing, and weight pushing within the general transducer composition algorithm [Allauzen & Riley + 09, Allauzen & Riley + 11]. An experimental comparison between the label-reachability composition filter method and the on-the-fly rescoring approach can be found in [Dixon & Hori + 12]; a general comparison of on-the-fly composition algorithms was given earlier in [Oonishi & Dixon + 08]. In [Oonishi & Dixon + 09], a filter transducer is used for efficient on-the-fly composition with weight pushing.
plane curves become convex and then shrink to a ‘round’ point in finite time. This does not hold for surfaces, where non-convexity can lead to the formation of singularities before the extinction. Indeed, developing a singularity in finite time is rather typical behaviour. It occurs when the curvature at a point becomes unbounded for some reason. Grayson was the first to rigorously prove the existence of a surface with such an evolution, the dumbbell shape: two spheres, connected by a thin tube that shrinks much faster than the spheres do. The tube, also called a neck, then pinches off, and the two spheres each continue to flow by their mean curvature until they vanish. A collection of results on the study of singularities can be found in the literature.
In Section 2, we introduce classes of distributed and boundary control problems for a two-dimensional, second order elliptic equation with a quadratic objective functional and bilateral constraints on the control variable. The optimality conditions are given in terms of the state, the co-state, the control, and the Lagrange multiplier for the control, which will be referred to as the co-control. The control problems are discretized with respect to a family of shape regular, simplicial triangulations of the computational domain using continuous, piecewise linear finite elements for the state and the co-state and elementwise constant approximations of the control and the co-control. Section 3 is devoted to the description of an efficient solver for the underlying constrained minimization problems. By means of a nonlinear complementarity problem function, the complementarity system involved in the first order optimality conditions for inequality constrained problems is reformulated as a single, not necessarily Fréchet-differentiable equality. Employing a generalized Newton methodology, the solution procedure is shown to be equivalent to a primal-dual active set strategy. The method converges locally at a superlinear rate in the original function space setting as well as after appropriate discretization in finite dimensional space. In Section 4, we are concerned with an a posteriori error analysis of residual-type error estimators for the global discretization errors in the state, the co-state, the control, and the co-control. We address the important issue of data oscillations and the so-called bulk criteria for an appropriate selection of elements and edges for refinement. We provide the reliability and the discrete local efficiency of the error estimators, which are the basic tools to establish convergence results and to derive error reduction properties.
Finally, Section 5 contains numerical results for selected test examples illustrating the performance of the adaptive approach.
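The primal-dual active set idea can be sketched on a bound-constrained quadratic model problem. This NumPy toy (a 1D obstacle-type discretization, far simpler than the control problems above) writes the complementarity system as the single equation lam = max(0, lam + c*(x - psi)) and updates the active set from it:

```python
import numpy as np

def pdas(A, b, psi, c=1.0, max_iter=50):
    """Primal-dual active set method for  min 1/2 x^T A x - b^T x
    subject to x <= psi, with A symmetric positive definite.
    KKT system: A x - b + lam = 0, lam >= 0, x <= psi, lam*(x - psi) = 0."""
    n = len(b)
    x = np.zeros(n)
    lam = np.zeros(n)
    for _ in range(max_iter):
        active = lam + c * (x - psi) > 0          # predicted contact set
        inact = ~active
        x_new = np.empty(n)
        lam_new = np.zeros(n)
        x_new[active] = psi[active]               # enforce the bound
        # solve the reduced equation A x = b on the inactive set
        rhs = b[inact] - A[np.ix_(inact, active)] @ psi[active]
        x_new[inact] = np.linalg.solve(A[np.ix_(inact, inact)], rhs)
        lam_new[active] = (b - A @ x_new)[active]  # multiplier on active set
        if np.array_equal(active, lam_new + c * (x_new - psi) > 0):
            return x_new, lam_new                  # active set has settled
        x, lam = x_new, lam_new
    return x, lam

# 1D model: discretized -u'' = 1 with zero boundary values and u <= 0.01
n = 50
h = 1.0 / (n + 1)
A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
b = np.ones(n)
psi = np.full(n, 0.01)
x, lam = pdas(A, b, psi)
print(np.all(x <= psi + 1e-12), np.all(lam >= 0.0))
```

The termination test certifies the solution: on the final active set x equals the bound with a strictly positive multiplier, and on the inactive set the multiplier vanishes and the bound holds, so feasibility and complementarity are satisfied exactly.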
We have presented a space-time Galerkin method which combines a discontinuous Galerkin approach in the spatial directions with a continuous Galerkin approach in the temporal direction. The resulting method naturally allows for local hp-refinement in space and in time. Furthermore, it can be demonstrated that the method is non-dissipative, provided a centered flux is chosen and the spatial discretization is not changed during the simulation. We have described how the arising linear systems of equations can be solved iteratively, without the need to assemble and store global or local element matrices. Since an efficient residual evaluation is particularly important in order to cut down computation times in practice, we have demonstrated that the residual can be evaluated directly with complexities of O(p^4) operations for affine elements and O(p^5) operations for non-affine elements. This was achieved by employing fast-summation techniques, which are an established tool for the implementation of high-order finite element methods [9, 13, 35, 42].
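The principle behind such fast-summation (sum-factorization) techniques can be sketched in 2D; the setting above is higher-dimensional, but the mechanism is the same. The matrices here are random stand-ins for a 1D basis evaluated at quadrature points:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 8                                   # polynomial order
B = rng.random((p + 1, p + 1))          # 1D basis values at quadrature points
u = rng.random((p + 1, p + 1))          # tensor-product coefficients u[i, j]

# Naive evaluation: v[a, b] = sum_{i, j} B[a, i] * B[b, j] * u[i, j]
# costs O(p^4) operations in 2D.
v_naive = np.einsum('ai,bj,ij->ab', B, B, u)

# Sum factorization: apply the 1D operator one direction at a time,
#   w[a, j] = sum_i B[a, i] u[i, j],   v[a, b] = sum_j B[b, j] w[a, j],
# reducing the cost to O(p^3) in 2D.
w = B @ u          # contract the first index
v_fast = w @ B.T   # contract the second index

print(np.allclose(v_naive, v_fast))
```

Per spatial dimension, one full contraction is traded for a sequence of small matrix products, which is what lowers the exponent of p in the operation counts quoted above.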
There has been tremendous progress on object detection and feature learning within the last decade [FGMR10, RHGS15, SRASC14]. In vision, this progress has led to a shift away from recursive filtering towards global optimization strategies. Notable advances have been made under the tracking-by-detection paradigm, which typically refers to a tighter coupling of detection and tracking. Trajectories are often recovered directly in image space and often without estimation of a hidden kinematic state. Interestingly, the related field of simultaneous localization and mapping has undergone a similar shift away from the Bayesian paradigm towards efficient global optimization strategies [GKSB10] and is now, apart from conventional methods that still dominate the field, a prime research area for methods derived from finite set statistics [MVAV11]. A particular formulation that has found wide application in tracking is based on a min-cost flow transportation problem [ZLN08]. The popularity is due to the availability of optimal, yet very efficient inference algorithms that exploit the specific structure of the tracking problem [BFTF11, LGU15]. However, a key limitation of the min-cost flow formulation is its restriction to only pairwise cost terms. This makes the integration of motion models challenging, because multiple observations are required to compute object motion. Consideration of higher-order potentials, however, makes the problem NP-hard, and approximate inference must be applied [Col12, BC13]. Then, efficiency and optimality are lost.
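A toy instance of the min-cost flow formulation, reduced to two frames with two detections each, can be solved as a linear program with SciPy. The arc costs are hypothetical transition distances (dedicated solvers as in [BFTF11, LGU15] would be used in practice); unit capacities keep trajectories disjoint, and all cost terms are pairwise, illustrating the limitation noted above:

```python
import numpy as np
from scipy.optimize import linprog

# Node order: s, a1, a2 (frame 1), b1, b2 (frame 2), t.
# Arc order: s-a1, s-a2, a1-b1, a1-b2, a2-b1, a2-b2, b1-t, b2-t.
edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 5), (4, 5)]
cost = np.array([0, 0, 0.1, 1.1, 0.9, 0.1, 0, 0])  # pairwise transition costs

# Node-arc incidence matrix for flow conservation: supply of 2 units
# (two targets) at the source, demand of 2 at the sink.
A_eq = np.zeros((6, len(edges)))
for k, (u, v) in enumerate(edges):
    A_eq[u, k] = 1.0    # flow leaving u
    A_eq[v, k] = -1.0   # flow entering v
b_eq = np.array([2.0, 0.0, 0.0, 0.0, 0.0, -2.0])

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
flow = np.round(res.x)  # incidence matrices are totally unimodular,
                        # so the LP optimum is integral
print(res.fun, flow)    # links a1->b1 and a2->b2 carry the flow
```

The optimal flow selects the matching a1-b1, a2-b2 with total cost 0.2; encoding a motion model would require costs over triples of detections, which no longer fits this pairwise arc structure.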
I, Tamil Iniyan Manokaran, born on 27.05.1994, hereby declare that the project work entitled Comparison of several solution methods for the Euler equations in a finite volume code, submitted to the Deutsches Zentrum für Luft- und Raumfahrt (DLR), Braunschweig, is a record of an original work done by me under the guidance of Dr. habil. Stefan Langer, Deutsches Zentrum für Luft- und Raumfahrt (DLR), Braunschweig, and this project work is submitted in partial fulfillment of the requirements for the award of the degree of Master of Computational Sciences in Engineering. The results embodied in this thesis have not been submitted to any other University or Institute for the award of any other degree.