The continuous problem and some a priori estimates on the exact solution and its derivatives are discussed in the next section. In Section 3, we formulate the finite volume method as a Petrov-Galerkin finite element method and transform the Petrov-Galerkin method into a Bubnov-Galerkin one. We show in Section 4 that the method is numerically stable by demonstrating that the bilinear form is coercive with respect to a discrete energy norm. We also present an error analysis for the finite element solution and show that the global error of the approximation in the discrete energy norm is bounded above by O(h^{1/2}) almost uniformly in ε. In Section 5 we present some numerical results to verify the theoretical rates of convergence. The numerical results also demonstrate the superconvergence phenomenon of the method when a piecewise uniform mesh is used, though this is not proved theoretically in this paper.
The finite volume method is well adapted to the discretization of various partial differential equations in bounded domains. In particular, it is well established in the engineering community (fluid mechanics) because of the conservation properties of its numerical fluxes and the natural formulation of an upwind scheme, which ensures stability for the convection part. In addition, it is stable with respect to reaction-dominated problems and is applicable to problems with inhomogeneous material properties. The boundary element method, however, can be applied to the most important linear partial differential equations with constant coefficients in bounded as well as unbounded domains, and in a sense it features local conservation as well. Coupling the finite volume method with the boundary element method combines the advantages of both: while a diffusion-convection-reaction process is modeled by the finite volume method, the purely diffusive transport (in a possibly unbounded domain) is solved by the boundary element method. We stress that, for example, the finite element method does not in general provide local conservation of numerical fluxes.
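The local conservation property mentioned above can be seen in a minimal sketch (an assumed setup: 1D linear advection with a first-order upwind flux, not taken from any of the works cited): each interior flux enters the updates of its two neighboring cells with opposite sign, so the total mass changes only through the boundary fluxes.

```python
import numpy as np

def fv_step(u, a, dx, dt):
    """One explicit upwind step for u_t + (a*u)_x = 0, a > 0.

    The flux at interface i+1/2 is a*u[i]; cell 0 is the (fixed) inflow cell.
    Each interior flux appears once with '+' and once with '-', so summing the
    update telescopes to the two boundary fluxes only.
    """
    flux = a * u                                   # upwind interface fluxes
    unew = u.copy()
    unew[1:] -= dt / dx * (flux[1:] - flux[:-1])   # conservative update
    return unew

u = np.exp(-np.linspace(-3.0, 3.0, 200) ** 2)      # smooth initial data
u1 = fv_step(u, a=1.0, dx=0.03, dt=0.01)
# Total mass changes only by the net boundary flux:
# sum(u1) - sum(u) == -(dt/dx) * (a*u[-1] - a*u[0])  up to roundoff
```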
ABSTRACT: Wave-current-mud interaction is a complicated mechanism in coastal and estuarine turbid waters. A laterally averaged finite volume numerical model has been used to study the interaction of waves, currents and mud. The fully non-linear Navier-Stokes equations, with a complete set of kinematic and dynamic boundary conditions at the free surface and the interface, and the Bingham constitutive equation for modelling the behavior of the mud layer, are solved. A finite volume method based on an ALE description has been utilized for the simulation of wave motion in a combined system of water and mud layers. The model is applied to three numerical tests: velocity profiles of steady flow along a trench, and the variation of wave characteristics in interaction with opposing and following currents on a fixed bed and a mud bed. Comparison of the model predictions against analytical and experimental results confirms the ability of the model to predict wave-current-mud interaction.
This appears to be a new result in this field.
In section 4, we present a model which computes the evolution of the transient secondary electron emission yield. It is based upon a set of conservation laws which express the trapping of electrons and holes coupled with the electric field. From a numerical point of view, we apply a fully implicit scheme, use a simple fixed-point technique to solve the coupled set of discrete equations, and use a refined grid near the interface where the electron beam penetrates the sample. This enhances the quality of the numerical simulation and significantly reduces the elapsed computational time compared to Fitting's works (Glavatskikh & Fitting, 2001; Fitting, 1974; Fitting & Wild, 1977), which are constrained by the fixed mesh spacing used in his modelling. Moreover, our numerical scheme uses the conservative finite volume method, and we have proved formally that a discrete maximum principle holds, which provides confidence in our numerical work. Some comparisons between numerical computations and experimental work by G. Moya (IM2MP, Marseille, France) and K. Zarbout (IM2MP, Marseille, France, and LamaCop, Sfax, Tunisia) are presented.
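As a hypothetical illustration of the fixed-point strategy described above (the right-hand side F below is a simple stand-in, not the paper's trapping model), a fully implicit step u^{n+1} = u^n + Δt F(u^{n+1}) can be solved by Picard iteration:

```python
def implicit_step(u_old, dt, F, tol=1e-12, max_iter=200):
    """Solve u = u_old + dt*F(u) by simple fixed-point (Picard) iteration."""
    u = u_old
    for _ in range(max_iter):
        u_next = u_old + dt * F(u)      # fixed-point update
        if abs(u_next - u) < tol:       # stop when successive iterates agree
            return u_next
        u = u_next
    raise RuntimeError("fixed-point iteration did not converge")

# Example: u' = -u; the implicit Euler step has the exact solution u0/(1+dt).
u1 = implicit_step(1.0, 0.1, lambda u: -u)
```

The iteration converges here because dt times the Lipschitz constant of F is below one; a refined grid or stiffer coupling would call for a smaller step or a Newton-type solver.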
optimized architecture. Several authors share the view of OpenMP's poor scalability [46, 50], based on the work of Hoeflinger et al., who claimed that through this method good performance could be achieved for a small number of processors (<16). They explored the causes of the poor scalability and pointed out the importance of optimizing cache and memory utilization in numerical applications, as well as the implementation of a set of language issues. To deal with the problem of the poor scalability of OpenMP, many investigations into dual-level parallelism have been published [47, 50, 52, 53]. Despite this widespread conception, a substantial reduction in computational time for 3D unsteady flow simulations can be achieved by the implementation of OpenMP directives.
The structure of the main solver in Cronos is not suited for an efficient implementation of the MUSCL scheme. Originally developed for semi-discrete finite-volume schemes, the code treats each direction individually, i.e. it computes the changes of the dynamic variables in the different directions consecutively. Fig. 7.1 schematically shows the updating procedure of this implementation of the second-order semi-discrete scheme used in Cronos. In contrast, the evolution of the cell-averaged value in the MUSCL predictor step requires a collective reconstruction process for all space dimensions, as the spatial derivatives appearing in the Taylor expansion are approximated using the same slopes as in the reconstruction (5.51). Hence, for multidimensional problems, the reconstruction process of the individual space dimensions, more specifically the computation of the slopes, cannot be treated in different stages of the code. Therefore, a MUSCL-based scheme cannot be implemented effectively in the current code structure of Cronos. In recent years, attempts have been made to develop an alternative implementation for the solver in Cronos that is compatible with the Adaptive Mesh Refinement (AMR) algorithm. The fundamental concept behind AMR is to compute the numerical solution in different rectangular patches of the computational domain with different spatial resolutions [37, 38]. It turns out that a local block-structured code, where the changes within a cell are computed simultaneously in all directions, is particularly efficient for both AMR and MUSCL-type schemes. In contrast to the approach illustrated in Fig. 7.1, where the intercell fluxes in a given direction are computed for the entire grid before repeating the process for the next direction, the block-structured version of the code uses a local, cell-focused structure. This means that all sub-routines of the updating process are performed simultaneously for all directions (see Fig. 7.2).
In particular, this version of the code allows updating only local patches of the grid, as it is also done in the block-structured AMR.
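The two update structures contrasted above can be sketched schematically (this is illustrative Python, not Cronos code): a direction-by-direction sweep over the whole grid versus a cell-local evaluation that gathers the differences of all directions at once. For a simple linear operator both orderings produce the same interior right-hand side, which is why the restructuring affects efficiency and AMR compatibility rather than the result.

```python
import numpy as np

def rhs_directional(u, dx, dy):
    """Directional sweeps: accumulate all x-differences, then all y-differences."""
    rhs = np.zeros_like(u)
    rhs[1:-1, :] -= (u[2:, :] - u[:-2, :]) / (2 * dx)   # full x sweep
    rhs[:, 1:-1] -= (u[:, 2:] - u[:, :-2]) / (2 * dy)   # full y sweep
    return rhs

def rhs_blockwise(u, dx, dy):
    """Cell-focused: combine both directional differences per cell at once."""
    rhs = np.zeros_like(u)
    rhs[1:-1, 1:-1] = (-(u[2:, 1:-1] - u[:-2, 1:-1]) / (2 * dx)
                       - (u[1:-1, 2:] - u[1:-1, :-2]) / (2 * dy))
    return rhs

u = (np.arange(25.0).reshape(5, 5) * 0.1) ** 2
# Both structures yield identical interior right-hand sides.
```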
Several boundary conditions are explained, followed by examples of constructing the system matrix, the iteration matrix and the curl-curl matrix for assorted finite volume methods. The eigenvalue distribution of the iteration matrix of a given finite volume time domain method predicts the stability of the method for a given time step and time update scheme. Various time marching schemes, in addition to Runge–Kutta 1 (first order) and Runge–Kutta 2 (second order), are examined for temporal approximation. It is found in practice that first order methods have at least one stable time marching scheme for both types of flux approximation, central flux and upwind flux, whereas this is not the case for second order finite volume methods. The second order central flux finite volume method is not stable for any of the time marching schemes under consideration (Runge–Kutta and Leap–Frog), whereas there are multiple time marching schemes that can be employed for the second order upwind flux finite volume method. As higher order methods induce more computational cost, only first and second order approximations in time are considered for investigation.
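The eigenvalue criterion described above can be illustrated with a minimal sketch (1D upwind advection as a stand-in for the actual time domain discretization): form the update matrix of a time marching scheme applied to the spatial operator and check that its spectral radius does not exceed one.

```python
import numpy as np

def upwind_matrix(n, dx):
    """Periodic first-order upwind operator A for u_t + u_x = 0 (u_t = A u)."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = -1.0 / dx            # outgoing flux of cell i
        A[i, (i - 1) % n] = 1.0 / dx   # incoming flux from the left neighbor
    return A

n, dx = 32, 1.0 / 32
dt = 0.5 * dx                          # CFL number 0.5
A = upwind_matrix(n, dx)
G = np.eye(n) + dt * A                 # Runge-Kutta 1 (forward Euler) update matrix
rho = np.max(np.abs(np.linalg.eigvals(G)))
# rho <= 1 (up to roundoff) indicates a stable combination of spatial scheme,
# time marching scheme, and time step.
```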
Though it does not fit with the flow of the paper, one comment is required before the numerical experiments. There is no single method that is best for all applications. Even within the finite volume method, there is a tradeoff between flexibility and speed. Adaptive grids give flexibility, but at the cost of speed (compared to the regular grid). In practice, a regular grid should be used unless a gain from adaptive refinements is expected. For example, in the experiments conducted in section 3, exploiting this flexibility leads to a speed loss without any gain for the Ornstein-Uhlenbeck process, but leads to compression and accuracy gains for the economic problems. The Aiyagari-Bewley-Huggett model requires only 58% of the grid points to achieve the same accuracy as a regular grid, and the lifecycle model only 14%. However, this comes at the cost of some additional computation. This cost would be regained if one uses the perturbation method introduced in Ahn et al. (2018), since the solution method (without model reduction) scales as O(N^3). Hence, the 14% compression
Section 4.2 is concerned with a (semi-)kinetic representation of the SH equations and a resulting finite volume method. This so-called kinetic scheme is able to describe the whole dynamics of a granular mass from starting to stopping. Moreover, it can be shown that it preserves the steady states of granular masses at rest. For the construction of the kinetic formulation, we use an ansatz of Perthame and Simeoni presented in [PS01]. The contents and results of Sections 4.1 and 4.2 can be found in [KS08] and are given here only for the sake of completeness. In Section 4.3, we apply the FVPM to the SH equations by adopting the kinetic fluxes. This provides an example where the FVPM can be used successfully to solve hyperbolic conservation laws with a source term.
To increase the order of accuracy of a scheme, it is usually necessary to construct a high order reconstruction of the solution at each time step. This is true for the finite volume method, for which many results can be found in the literature, as well as for the finite volume particle method. In this chapter we treat the problem of reconstructing a function from given data; in particular, we look for an approximation of a function from given weighted integral means of this function (FVPM), which generalizes the classical integral means appearing in the FVM. The obtained knowledge will be utilized in chapter 4 to acquire initial data for local generalized Riemann problems in order to solve a one-dimensional hyperbolic conservation law using the FVPM. To this end, we introduce the scattered data interpolation problem, which is solved with polyharmonic spline interpolation and the WENO procedure. The combination of these two methods gives rise to a powerful method to reconstruct a function from given data, proposed by Aboiyar, Georgoulis and Iske. WENO reconstruction by polyharmonic splines is numerically stable if carefully implemented, and, in comparison with polynomial reconstruction, more flexible. Moreover, it yields the optimal reconstruction with respect to the seminorm in the Beppo-Levi space, and one therefore acquires a natural choice for an oscillation indicator.
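A minimal illustration of reconstruction from integral means, in the classical FVM setting that the FVPM generalizes (the polyharmonic-spline/WENO machinery above is far more general; this only shows the basic problem): recover the slope of a linear function from its cell averages.

```python
import numpy as np

def reconstruct_linear(avgs, dx):
    """Slopes of a piecewise-linear reconstruction u_i(x) = avg_i + s_i (x - x_i),
    computed from cell averages by central differences."""
    slopes = np.zeros_like(avgs)
    slopes[1:-1] = (avgs[2:] - avgs[:-2]) / (2 * dx)   # interior central slopes
    return slopes

x = np.linspace(0.5, 9.5, 10)   # cell centers, dx = 1
avgs = 2 * x + 1                # for a linear function, cell averages equal
                                # the point values at cell centers
slopes = reconstruct_linear(avgs, 1.0)
# The interior slopes reproduce the exact derivative 2 of u(x) = 2x + 1,
# so the reconstruction is exact for linear data (first-order exactness).
```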
The second aim is to study the convergence of a finite volume method for the aggregation and multiple breakage equations on five different types of uniform and non-uniform meshes. We observe that the scheme is second order convergent independently of the grid for the pure breakage problem. Moreover, for pure aggregation as well as for the combined equations, the technique shows second order convergence only on uniform, non-uniform smooth and locally uniform meshes. In addition, we find only first order convergence on oscillatory and random grids. A numerical scheme is said to be moment preserving if it correctly reproduces the time behaviour of a given moment. Some authors have proposed numerical methods which show moment preservation numerically with respect to the total number or total mass for an individual process of aggregation, breakage, growth or source terms. However, coupling all the processes leaves no moment preserved. Up to now, there was no mathematical proof of the conditions under which a numerical scheme is moment preserving. The third aim of this work is to study the criteria for the preservation of different moments. Based on these criteria, we determine zeroth and first moment preserving conditions for each process separately. Further, we propose one moment and two moment preserving finite volume schemes for all the coupled processes. We verify the moment preserving results analytically and numerically. The numerical verifications are made for several coupled processes for which analytical solutions are available for the moments.
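The moment check involved above can be sketched generically (this is a toy verification, not the paper's scheme): the j-th moment of a discretized number density is μ_j = Σ_i x_i^j N_i Δx, and any flux-form update preserves the zeroth moment exactly whenever the boundary fluxes vanish, because the interior fluxes cancel pairwise.

```python
import numpy as np

def moment(x, N, dx, j):
    """j-th moment of the discretized number density N on pivots x."""
    return np.sum(x ** j * N) * dx

dx, dt = 0.1, 0.01
x = np.linspace(0.05, 4.95, 50)     # cell pivots
N = np.exp(-x)                      # number density
F = np.sin(x[:-1])                  # arbitrary interior interface fluxes
                                    # (boundary fluxes are zero)
dN = np.zeros_like(N)
dN[:-1] -= F / dx                   # flux F[i] leaves cell i ...
dN[1:] += F / dx                    # ... and enters cell i+1
N1 = N + dt * dN                    # one explicit flux-form step
# moment(x, N1, dx, 0) equals moment(x, N, dx, 0) up to roundoff
```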
Part II – Riemann Solvers
In finite volume methods, integrating conservation laws over a control volume leads to a formulation which requires the evaluation of local Riemann problems at each cell interface; see Chapter 2 for more details. The initial states for these problems are typically given by the left and right adjacent cell values. Since these local Riemann problems have to be solved many times in order to find the numerical solution, the Riemann solver is a building block of the finite volume method. Over the last decades, many different Riemann solvers have been developed; see the literature for a broad overview. The main challenges are the need for computational efficiency and easy implementation, while at the same time, accurate results without artificial oscillations need to be obtained.
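As a concrete instance of the tradeoff between efficiency and easy implementation, one of the simplest approximate Riemann solvers is the Rusanov (local Lax-Friedrichs) flux, sketched here for Burgers' equation (an illustrative choice, not a solver singled out by the text):

```python
def rusanov_flux(uL, uR):
    """Rusanov flux for Burgers' equation f(u) = u^2/2, given the left and
    right cell values at an interface.

    Central average of the physical fluxes plus a dissipation term scaled by
    a local bound on the wave speed |f'(u)| = |u|.
    """
    f = lambda u: 0.5 * u * u
    smax = max(abs(uL), abs(uR))                     # local wave-speed bound
    return 0.5 * (f(uL) + f(uR)) - 0.5 * smax * (uR - uL)

# Consistency check: for equal states the numerical flux is the exact flux.
print(rusanov_flux(2.0, 2.0))    # 2.0 == f(2.0)
```

Its dissipation suppresses artificial oscillations at the price of smearing contact waves, which is exactly the accuracy/robustness tension described above.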
Abstract. Guillard and Viozat propose a finite volume method for the simulation of inviscid steady as well as unsteady flows at low Mach numbers, based on a preconditioning technique. The scheme satisfies the results of a single-scale asymptotic analysis in a discrete sense and has the advantage that it can be derived by a slight modification of the dissipation term within the numerical flux function. Unfortunately, numerical experiments show that the preconditioned approach combined with an explicit time integration scheme turns out to be unstable if the time step ∆t does not satisfy the requirement to be O(M^2) as the Mach number M tends to zero.
Received: 28 July 2017; Accepted: 25 August 2017; Published: 30 August 2017
Abstract: Analyses of dynamic responses are significantly important for the design, maintenance and rehabilitation of asphalt pavement. In order to evaluate the dynamic responses of asphalt pavement under moving loads, a specific computational program, SAFEM, was developed based on a semi-analytical finite element method. This method is three-dimensional and requires only a two-dimensional FE discretization by incorporating a Fourier series in the third dimension. In this paper, the algorithm to apply dynamic analysis in SAFEM is introduced in detail. Asphalt pavement models under moving loads were built in SAFEM and in the commercial finite element software ABAQUS to verify the accuracy and efficiency of SAFEM. The verification shows that the computational accuracy of SAFEM is sufficiently high and its computational time is much shorter than that of ABAQUS. Moreover, experimental verification was carried out, and the prediction derived from SAFEM is consistent with the measurements. Therefore, SAFEM can reliably predict the dynamic response of asphalt pavement under moving loads, thus proving beneficial to road administrations in assessing the pavement's state.
We present a spatially third-order accurate unstructured finite volume scheme, which is based on the multiple-correction hybrid k-exact scheme. A recursive correction of Green-Gauss derivatives is used to reconstruct a k-exact polynomial within each cell, while only involving communication between direct cell neighbors. The scheme is extended to a k-exact reconstruction on vertex-centered median dual grids and utilized for the discretization of the incompressible Euler equations, showing its applicability to the solution of Poisson's equation. The spatial accuracy is demonstrated on various, highly deformed unstructured grids and for various benchmark tests. It is shown that the scheme can clearly enhance the accuracy of time-dependent incompressible flow solutions.
In this paper, we present some concepts and applications of the finite cell method and discuss a posteriori error control for this method. The focus is on the application of the dual weighted residual (DWR) approach, which enables the control of the error with respect to a user-defined quantity of interest. The aim is to estimate both the discretization error and the quadrature error. The application of the DWR approach provides an adaptive strategy in which the error contributions resulting from discretization and quadrature are balanced. The strategy consists in refining either the finite cell mesh or its associated quadrature mesh. In several numerical experiments, the performance of the error control and the adaptive scheme is demonstrated for a non-linear problem in 2D.
choice of the reference configuration. Those formulations are characterized by high accuracy, but also by significant numerical effort and convergence issues. Székely et al. [54] applied a total Lagrangian formulation in combination with material nonlinearities in modeling soft tissue deformations for laparoscopic surgery simulation. They addressed the high computational demand of the approach by using a "brute force" method (i.e., by parallelizing the computation on a large, three-dimensional network of high-performance processor units). Even so, the simulation of uterus tissue deformation they report is not highly realistic. On the other hand, various ideas were implemented to keep the numerical effort within acceptable limits despite the use of rigorous FE formulations. For instance, Heng et al. [55] proposed an approach denoted as a hybrid condensed FE model. The idea is to partition the model into two regions: the part of the model that is being interacted with and the rest of the model. Hence, a complex nonlinear FE model that can also deal with topological change was used for a small-scale FE model representing the operational region, while a linear and topology-fixed FE model was used for the large-scale nonoperational region. The development was done for a virtual training system for knee arthroscopic surgery. In another field of application, Dulong et al. [56] focused their work on the development of real-time interaction between a designer and a virtual prototype as a promising way to perform optimization of part design. In the first step, which they refer to as the "training phase," they performed a set of nonlinear FE computations. Over the course of a simulation, the deformation for the current load case is determined by interpolating between the results obtained in the "training phase." One may actually notice here a certain parallel with the approach based on neural networks.
The inviscid, two-dimensional hypersonic flow over a double-ellipse at M∞ = 8.15, α = 30° is considered. It represents a standard test case for flow simulations of reentry vehicles. For the presented computations, the Green-Gauss reconstruction and the Hänel/Schwane flux vector splitting are employed. The solution is advanced in time by the implicit Euler scheme, with a variation of the CFL number in the range 5 ≤ CFL ≤ 300. During the transient startup sequence, the use of the second order scheme may lead to negative pressures on the body. This is caused by the strong shock that detaches from the surface of the vehicle. Because boundary faces are not considered within the limiting procedure of the reconstruction technique, the very steep gradient across the shock may generate new solution extrema at boundaries and thus lead to a non-permissible state, e.g. negative pressure. The problem is remedied by locally reducing the method to first order in space if the reconstructed pressure or density at a boundary face is negative. This procedure can be regarded as an additional limiting process, based on physical reasons. After the shock is fully detached from the surface, this mechanism is no longer required.
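The fallback described above can be sketched schematically (illustrative names, not the actual solver code): if a reconstructed face value of pressure or density is non-positive, the slope is dropped and the first-order cell-average value is used instead.

```python
def face_value(q_cell, slope, dist):
    """Reconstructed face value of a positive quantity (pressure or density)
    with a positivity fallback to first order.

    q_cell : cell-average value (assumed positive)
    slope  : limited reconstruction slope
    dist   : distance from the cell center to the face
    """
    q_face = q_cell + slope * dist   # second-order reconstruction
    if q_face <= 0.0:                # non-permissible state at the face
        return q_cell                # locally revert to first order
    return q_face

print(face_value(1.0, -0.5, 1.0))   # 0.5  -> permissible, reconstruction kept
print(face_value(1.0, -3.0, 1.0))   # 1.0  -> fallback to the cell average
```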
Abstract. We analyze the simplest and most standard adaptive finite element method (AFEM), with any polynomial degree, for general second order linear, symmetric elliptic operators. As is customary in practice, AFEM marks exclusively according to the error estimator and performs a minimal element refinement without the interior node property. We prove that AFEM is a contraction for the sum of the energy error and the scaled error estimator between two consecutive adaptive loops. This geometric decay is instrumental in deriving the optimal cardinality of AFEM. We show that AFEM yields a decay rate of energy error plus oscillation, in terms of the number of degrees of freedom, as dictated by the best approximation for this combined nonlinear quantity.
Tikhonov-type (rather than the non-differentiable TV-regularization) is applied. Some adaptive finite element approaches have been used to solve a modification of the Perona-Malik model for image processing. Image segmentation based on adaptive finite elements has also been studied, and efficient solvers relying on OcTree-based adaptive mesh refinement for parametric as well as non-parametric approaches in image registration can be found in [28–30].