Newton’s method converges globally to a solution of φ(µ) − ε² = 0. However, for large dimensions the evaluation of φ(µ) and φ′(µ) for fixed µ is much too expensive (or even impossible due to the high condition number of A). Calvetti and Reichel (2003) took advantage of partial bidiagonalization of A to construct bounds for φ(µ) and thus determine a µ̂ satisfying (61).
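For illustration, the Newton iteration on the scalar secular equation φ(µ) − ε² = 0 can be sketched as follows. The particular φ below is a hypothetical monotone stand-in: the true φ involves A and is exactly what is too expensive to evaluate, which is why Calvetti and Reichel work with bidiagonalization-based bounds instead.

```python
def newton_scalar(phi, dphi, eps, mu0, tol=1e-12, max_iter=50):
    """Newton iteration for the scalar secular equation phi(mu) - eps**2 = 0."""
    mu = mu0
    for _ in range(max_iter):
        f = phi(mu) - eps**2
        if abs(f) < tol:
            break
        mu -= f / dphi(mu)
    return mu

# Hypothetical stand-in for phi: smooth and monotone, as the true phi is.
phi = lambda mu: 1.0 / (1.0 + mu)
dphi = lambda mu: -1.0 / (1.0 + mu) ** 2

mu_hat = newton_scalar(phi, dphi, eps=0.5, mu0=1.0)
# phi(mu_hat) = eps**2 = 0.25, hence mu_hat = 3
```

In the bound-based variant, each exact evaluation of φ and φ′ would be replaced by the cheaper estimates obtained from partial bidiagonalization.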

for some t_n, and for the residual norm we have the identities (3.5)–(3.6). At first sight the second approach seems more attractive because it gives a cleaner relation between the residual norm (which is minimized at every step) and the conditioning of the computed basis. Its implementation is also simpler. On the other hand, the fact that the approximate solution is in this approach determined via the basis vectors v_1, w_1, …, w_{n−1}, which are not mutually orthogonal, raises some suspicions about potential numerical problems. In this section we recall implementations of both approaches resulting in different variants of the GMRES algorithm. In section 5 we will discuss their numerical properties.

The method we introduce is based on computing the nullspace of a large sparse matrix, and computing the zeros of a scalar, univariate polynomial. We introduce a novel nullspace algorithm to accomplish this goal, although any nullspace method (QR, SVD, etc.) could easily be substituted in the main algorithm. In our view, the main contribution of this paper is the formulation of these multivariable optimization problems in a way for which standard tools such as nullspace computation methods and eigenvalue solvers can be directly applied. In this way, advances in sparse nullspace techniques can be easily and directly applied to this large class of optimization problems. The method we present does not rely on an initial guess, and will yield the set of local and global minimizers to equality-constrained multivariable polynomial optimization problems when there exist a finite number of local and
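As the excerpt notes, any nullspace method could be substituted for the paper's novel algorithm. A minimal SVD-based nullspace routine (a generic dense sketch, not the paper's sparse method; the tolerance choice is an illustrative assumption) looks like this:

```python
import numpy as np

def nullspace(A, rtol=1e-10):
    """Orthonormal basis for the nullspace of A, computed via the SVD."""
    U, s, Vt = np.linalg.svd(A)
    tol = rtol * (s[0] if s.size else 1.0)   # threshold relative to largest singular value
    rank = int(np.sum(s > tol))
    return Vt[rank:].T                        # columns span the nullspace

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])               # rank 1, so the nullspace is 2-dimensional
N = nullspace(A)
# A @ N is numerically zero and the columns of N are orthonormal
```

For large sparse matrices one would swap in a sparse-aware factorization, which is precisely the kind of advance the paragraph says can be plugged in directly.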

The theory of hedonic models states that it is possible to precisely describe the price of a heterogeneous commodity by a set of its characteristics. The assumption that consumers derive utility from attributes of goods, rather than the good itself, led to the conception in which the price of a commodity is determined as an aggregate of values estimated for each significant characteristic of this commodity (so-called hedonic prices). However, in the case of some goods the number of characteristics which significantly influence the price can be quite large. For smaller data sets this might cause a problem with the degrees of freedom if standard methods of model estimation (such as OLS) are applied. Moreover, the larger the number of predictor variables, the more likely they are to be correlated with one another, which might cause problems induced by multicollinearity, especially when a large number of explanatory variables are binary, as is sometimes the case in hedonic models, where variables often represent the existence of specific features in the given variant of a good, as well as its brand name or model.

only the temporal discretisation with the Crank-Nicolson method and adaptive time step size provided reasonable results. But it is worth noting that we were able to solve the Turing mechanism for all investigated large reaction coefficients using our method. Regarding the examined refinement criteria, we have shown that the criteria based on the spectral method are well-suited for reaction-diffusion equations. The results from chapter 7 show that we obtain very good results with the refined grids from these criteria. Comparing the two spectral criteria, we have seen that the output of both methods is nearly the same. We therefore recommend using the refinement criterion based on the coefficients along the spectrum’s tail, as it requires much less computational effort. The gradient-based criterion caused some problems when applied to the Turing mechanisms. In section 7.4 we developed some ideas to explain these issues. We came to the conclusion that the gradient-based criterion is simply unsuitable for Turing mechanisms, as they represent a kind of smooth problem.

We have first investigated sparsity regularization for two parameter identification problems. For these problems, sparsity regularization incorporated with the energy functional approach was analyzed. For the diffusion coefficient identification problem, the regularized problem was proven to be well-posed and convergence rates of the method were obtained under the simple source condition. An advantage of the new approach is that it works with a convex and weakly lower semicontinuous functional. Therefore, the problem can be solved numerically by fast algorithms, and the proof of well-posedness is obtained without further conditions. Another advantage is that the source condition for obtaining convergence rates is very simple. We want to emphasize that our source condition is the simplest when compared with the others in the least squares approach. We do not need the requirement of smallness (or generalizations of it) in the source condition.

The framework for TGV-regularization with arbitrary order established in this paper may be applied to recover functions or symmetric tensor fields which can be modeled as piecewise polynomials of a prescribed order. For instance, for the regularization of height maps, it might be favorable to utilize order 3 or higher, since then solutions tend to be piecewise polynomials of at least order 2, which appear smooth in three-dimensional visualizations. In contrast to that, TGV² favors piecewise linear solutions, which typically exhibit “polygonalization artifacts” in 3D plots (or “second-order staircasing”, see [10] for a discussion in dimension one). The results in this paper may moreover be useful for the development of numerical solution algorithms: a discrete version of (1.2) can directly be plugged into L¹-type minimization methods. For instance, based on ideas in [5], one can easily compute, given a Tikhonov functional with TGV_α^{k,l} regularization, an equivalent convex-concave saddle-point formulation and derive a convergent primal-dual algorithm from general saddle-point solvers such as, e.g., the one introduced in [12].

Abstract
More and more data are observed in the form of curves. Numerous applications in finance, neuroeconomics, demographics, and also weather and climate analysis make it necessary to extract common patterns and prompt joint modelling of individual curve variation. The focus of such joint variation analysis has been on fluctuations around a mean curve, a statistical task that can be solved via functional PCA. In a variety of questions concerning the above applications, one is more interested in the tail, asking therefore for tail event curve (TEC) studies. With increasing dimension of the curves and complexity of the covariates, though, one faces numerical problems and has to look into sparsity-related issues.

Approaches proposed in the literature essentially differ in the way in which such a change of drift is found, and can be roughly divided into two families depending on the strategy adopted. The first strategy, common to the so-called adaptive Monte Carlo methods [5, 8–10], aims to determine the optimal drift through stochastic optimization techniques that typically involve an iterative algorithm. On the other hand, the second strategy, proposed in a remarkable paper by Glasserman, Heidelberger, and Shahabuddin (GHS) [6], relies on a deterministic optimization procedure that can be applied for a specific class of payouts. This approach, although approximate, turns out to be very effective for several pricing problems, including the simulation of a single-factor Heath-Jarrow-Morton model [7], and portfolio credit scenarios [12].

Chapter 6
Discussions and conclusions
By virtue of being a new approach, model-check methods based on the residual partial sums process applied to spatial data analysis present open problems, both from a pure mathematical viewpoint and from the perspective of applications. Throughout this work, even under the simplest model, several difficulties and challenging mathematical problems are encountered. Since only a few results concerning this subject are available, we present these problems as challenging open problems and directions for future research.

[2003b] can be related to the thesis of Codd [2002]. Moreover, Lee et al. [2000] show an approach based on a least-squares formulation for the fluid regime and a Galerkin-FEM formulation for the solid part. Here, the coupling is solved based on a staggered scheme. The publications Heys et al. [2004a], Heys et al. [2004b], and Heys et al. [2006a] arrive at an LSFEM-FSI approach from a mathematical perspective, as they consider the influence of different coupling strategies on solver performance and efficiency. Heys et al. [2004a] provide a method for solving the fluid and solid problems in a single stage, and additionally consider mesh motion for the fluid phase resulting from the movement of the solid domain. The movement of the mesh is solved in an intermediate stage. The authors demonstrate their methodology for three-dimensional problems of blood flow through elastic blood vessels. Likewise, the further publication from 2004 discusses the coupling of fluid-dynamical and elastic problems in one method and investigates the difference between a fully coupled, partly coupled (mapping decoupled) and fully decoupled algorithm, see Heys et al. [2004b]. Here, an elliptic grid generation (EGG) is used to map the deformed fluid region back to the original computational region, which is determined by the “undeformed” fluid domain. The advantages of the EGG method can be found in Codd et al. [2003b] together with Codd et al. [2003a]. Heys et al. [2004b] deal with the incompressible Navier-Stokes equations and formulate a first-order system in terms of velocity, velocity gradient and pressure to describe the fluid phase. The solid phase is considered to be a linear elastic material, but enhancements for a nonlinear material behavior of Neo-Hookean type are announced. The solid problem is rewritten in terms of displacement and displacement gradient in order to obtain a first-order system.
The numerical tests are two-dimensional problems. Moreover, in Heys et al. [2006a] the deliberations of the previously mentioned articles are expanded to three-dimensional investigations with applications to blood flow through vessels. The methods under investigation are tested in conjunction with an algebraic multigrid/conjugate gradient solver; compare Heys et al. [2006a].

Summary The method of Least Squares is due to Carl Friedrich Gauss. The Gram-Schmidt orthogonalization method is of much more recent date. A method for solving Least Squares Problems is developed which automatically results in the appearance of the Gram-Schmidt orthogonalizers. Given these orthogonalizers, an induction proof is available for solving Least Squares Problems.
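A minimal sketch of this connection (modified Gram-Schmidt is shown for numerical robustness; the summary does not specify which variant the paper develops): the orthogonalizers Q and the triangular factor R reduce the least squares problem to a triangular solve.

```python
import numpy as np

def mgs_qr(A):
    """Modified Gram-Schmidt: factor A = Q @ R with orthonormal columns in Q."""
    V = A.astype(float).copy()
    m, n = V.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(V[:, k])
        Q[:, k] = V[:, k] / R[k, k]
        for j in range(k + 1, n):            # orthogonalize remaining columns
            R[k, j] = Q[:, k] @ V[:, j]
            V[:, j] -= R[k, j] * Q[:, k]
    return Q, R

def least_squares_gs(A, b):
    """Solve min ||A x - b|| via the Gram-Schmidt orthogonalizers."""
    Q, R = mgs_qr(A)
    return np.linalg.solve(R, Q.T @ b)       # triangular system R x = Q^T b

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])
x = least_squares_gs(A, b)
# x matches the normal-equations solution [2/3, 1/2]
```

Because the residual b − Ax is orthogonal to the span of Q, the triangular solve yields the least squares minimizer without forming the normal equations.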

One branch was engaged with the relaxation of the Lipschitz condition on the driver. For instance, see Lepeltier and San Martín (1997), who examined BSDEs with a continuous driver of linear growth, or Kobylanski (2000) on BSDEs with drivers of quadratic growth. An overview is given in El Karoui and Mazliak (1997). Another important aspect was the analysis of the connection between solutions of BSDEs and viscosity solutions of quasilinear parabolic partial differential equations by Pardoux and Peng (1992). Based on this, the notion of forward backward stochastic differential equations (FBSDEs) was developed and a generalization of the Feynman-Kac formula was obtained. A detailed introduction to this topic is also available in Ma and Yong (1999). In particular, FBSDEs became a useful tool in the field of financial mathematics. Among their applications are pricing and hedging of European options in cases with constraints, and utility optimization problems. An extension to American options via BSDEs with reflection was shown in El Karoui et al. (1997a). A comprehensive survey on the application of BSDEs in finance is given by El Karoui et al. (1997).

Specifically, we show that many of the theoretical and computational advantages of the WALS approach to Gaussian linear models continue to hold in the wider class of GLMs by a simple linearization of the constrained maximum likelihood (ML) estimators. To establish the asymptotic theory for WALS, some improvements had to be made to the semiorthogonal transformation procedure. These improvements address potential discontinuity problems in the eigenvalue decomposition used in earlier papers on WALS. In addition to developing the asymptotic theory for the WALS estimator of GLMs, we also investigate the finite-sample properties of our model averaging estimator by a Monte Carlo experiment whose design is based on a real empirical example, namely the analysis of attrition in the first two waves of the Survey of Health, Ageing and Retirement in Europe (SHARE). Here, we compare the performance of WALS with that of other popular estimation methods such as standard ML, strict BMA with conjugate priors for GLM (Chen and Ibrahim 2003; Chen et al. 2008), and strict FMA with weighting systems based on smooth information criteria (Buckland et al. 1997; Hjort and Claeskens 2003a).

Imaging techniques are an attractive option for studying a variety of fluid flow problems, as they are inherently non-intrusive. The use of optical imaging to track the boundaries of non-rigid bodies such as bubbles and droplets is well established (Thoroddsen et al, 2008); imaging techniques have also been employed for the tracking of rigid bodies moving within a fluid, in applications as diverse as calculating the velocities of particles colliding with one another or with walls (Yang and Hunt, 2006; Marston et al, 2010), and measuring the forces on free-flying models in a shock tunnel (Warren et al, 1961; Bernstein, 1975). In such problems, the requirements are often somewhat different from those of typical computer-vision or medical-imaging applications for which many of the relevant image-processing techniques have been developed: the body geometries are usually well-defined, if not known a priori, and the emphasis is on precise displacement measurements so that velocities or accelerations can be derived with minimal uncertainty. While earlier image-based measurements using photographic film were limited in accuracy (Canning et al, 1970), recent work has shown that, with appropriate image-processing techniques applied to digital images, the motion of wind-tunnel-scale bodies can be tracked to the micron level (Laurence and Hornung, 2009; Laurence and Karl, 2010). Such accuracy allows precise time-resolved velocity and acceleration measurements despite the requisite differentiation of the signals.

The suggested eigensolver in line 7 of the algorithm for finding the smallest eigenpair of B(θ_{k+1}) is the Rayleigh quotient iteration. Due to the required LU factorizations in each step, this method is very costly. An approach like this does not take into account the fact that the matrices B(θ_k) converge as θ_k approaches the root θ̂ of g. We suggest a method which takes advantage of information acquired in previous iteration steps by thick starts.
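For reference, a bare-bones Rayleigh quotient iteration is sketched below; each step requires a solve with the shifted matrix, which is the costly factorization the text refers to. The test matrix and starting vector are illustrative, and no reuse of information between outer iterations ("thick starts") is attempted here.

```python
import numpy as np

def rayleigh_quotient_iteration(B, x0, tol=1e-10, max_iter=50):
    """RQI: converges (cubically) to the eigenpair nearest the starting vector.

    Each step needs a solve with the shifted matrix -- the expensive part."""
    x = x0 / np.linalg.norm(x0)
    theta = x @ B @ x
    for _ in range(max_iter):
        try:
            y = np.linalg.solve(B - theta * np.eye(len(x)), x)
        except np.linalg.LinAlgError:
            break  # shift hit an eigenvalue exactly
        x = y / np.linalg.norm(y)
        theta_new = x @ B @ x
        converged = abs(theta_new - theta) < tol
        theta = theta_new
        if converged:
            break
    return theta, x

B = np.diag([1.0, 3.0, 7.0])   # illustrative symmetric matrix
theta, x = rayleigh_quotient_iteration(B, np.array([1.0, 0.2, 0.1]))
# starting near the first eigenvector, RQI finds the smallest eigenpair (1.0)
```

Since B(θ_k) changes little near θ̂, restarting each inner solve from scratch wastes the factorizations and subspace information of earlier steps, which is the inefficiency the proposed thick-start method addresses.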

In practice, one special item of concern is the handling of data gaps, which is problematic because the filtering relies on an uninterrupted observation time series. There are several strategies to solve this problem. One strategy is to simply stop the filtering at each data gap and to restart it after the gap. However, due to an effect called the filter warm-up (cf. Schuh 2003), the first observations at the beginning of the filtering cannot be used within the least squares adjustment. If there are many data gaps and the filter warm-up is not short, one loses many observations in this way. Another strategy is therefore not to stop the filtering at each data gap, which then requires filling in the data gaps. However, computing suitable fill-in values for the data gaps is problematic, as shown by Klees and Ditmar (2004). Therefore, Klees and Ditmar (2004) favor another approach. Here, the problem of integrating the covariance matrix into the least squares adjustment is reduced to the solution of a linear equation system with the covariance matrix as system matrix. The linear equation system is solved using the preconditioned conjugate gradient algorithm, where the digital filters are used for preconditioning. However, this strategy introduces much higher computational costs, as confirmed by Klees and Ditmar (2004).
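The last strategy can be sketched generically: a preconditioned conjugate gradient solve in which the preconditioner application M_inv plays the role of the digital filters. The tiny SPD system and the Jacobi preconditioner below are placeholders; in the setting above the system matrix would be the covariance matrix.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for the SPD system A @ x = b.

    M_inv(r) applies the preconditioner -- here the digital filters would act."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Tiny SPD stand-in system with a Jacobi (diagonal) preconditioner
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b, M_inv=lambda r: r / np.diag(A))
```

The appeal of this formulation is that the covariance matrix is never inverted or filtered through the gaps explicitly; the extra cost comes from the repeated matrix-vector products and preconditioner applications per iteration.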

This paper has two goals. The first one is to develop a conditional instrumental variable estimator based on multiple instruments in the general framework introduced by Athey et al. (2019), and to work out the expressions for estimation, sample splitting and variance estimation needed for implementation in software. This contributes to completing the toolbox of machine learning techniques for classical econometric problems and to verifying the validity of Athey et al. (2019)’s general framework. We also address the problem of tuning an instrumental variables forest which, to the best of our knowledge, has not been considered in the literature before. Finally, we provide an implementation in R and C++, extending previous code contributed by Athey et al. (2019). Our second goal is to use this estimator to revisit a classic application of instrumental variables by Angrist and Evans (1998), who used sibling-sex composition instruments in order to investigate the effect of family size on parental labor supply. Including coarse group categories in their 2SLS regressions, they also provided a basic analysis of the heterogeneity of these effects across characteristics such as mother’s education or husband’s earnings. We revisit this question using instrumental variables random forests, which allow one to plot detailed maps of heterogeneous effects across multiple dimensions, which is not possible using standard regression techniques. This yields deeper insights into the nature of heterogeneity in these effects, going beyond the analysis in Angrist and Evans (1998).

Investigation into the convergence behavior of reconstructions using multi-bang regularization (1.4) faces considerable challenges. Although convergence rates in Bregman distances are at hand with the help of so-called source conditions or variational inequalities, rates in error norms are still attractive due to the explicit form of the multi-bang functional. This motivates studying a particular condition and adapting it to the multi-bang penalty, in combination with a source condition, to derive error norms for reconstructions, at least in the L² sense. In [59, 57], a regularity assumption on the active sets is introduced and used as a substitute for the source condition to obtain error estimates for the solutions of optimal control problems. This condition is also applied in [26, 30]. Following these works, the active set is the subset of the domain where the inequality constraints for the true parameter are active. In the spirit of this approach, [18] investigates error estimates for solutions of multi-bang control problems using the regularity assumption on the active sets. This motivates an idea of how to treat convergence rates for reconstructions given by multi-bang regularization in this thesis, which is shown thereafter.

Consequently, under the assumption of rational expectations, only α is identified, not, however, δ and β separately.
More recently, however, economic agents are frequently assumed to be boundedly rational and to form their expectations via adaptive learning. The basic idea underlying all adaptive learning procedures is that agents employ an auxiliary model, or so-called perceived law of motion (PLM), to form their expectations y^e_{t|t−1}. One way to specify the PLM is to assume that its functional form corresponds to that of the REE in (5.3). Generally, the agents will not know the parameter α and therefore replace it by some estimate a_{t−1}, based on the information F_{t−1}. Typically, the parameter α will be esti-
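A common specification at this point, stated here as an assumption since the excerpt breaks off, is a decreasing-gain recursive update of the estimate (the recursive sample mean, the simplest adaptive learning rule). The self-referential feedback of beliefs into outcomes is deliberately omitted in this sketch: y_t is treated as exogenous noise around the true α.

```python
import numpy as np

def update_estimate(a_prev, y_t, t):
    """Decreasing-gain learning step: a_t = a_{t-1} + (1/t) * (y_t - a_{t-1})."""
    return a_prev + (y_t - a_prev) / t

rng = np.random.default_rng(0)
alpha_true = 2.0   # hypothetical REE parameter
a = 0.0            # initial belief
for t in range(1, 5001):
    y_t = alpha_true + rng.normal(scale=0.1)   # observed outcome around alpha
    a = update_estimate(a, y_t, t)
# the estimate a converges toward alpha_true as t grows
```

With the gain 1/t, a_t equals the sample mean of the observations up to t, which is the standard benchmark against which other gain sequences in the learning literature are compared.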
