The mathematical and technical foundations of optimization are well developed. In the design of buildings, however, optimization is rarely applied because the method has not been sufficiently adapted to the needs of building design: structural optimization, for example, normally uses the amount of material and the stiffness of a structure as optimization objectives. In building design, by contrast, aspects from several other disciplines are relevant, such as economics, structural, lighting, and thermal engineering, fire protection, and acoustics, as well as architecture, with its concern for aesthetic and spatial appearance. Some aspects of these disciplines are non-numerical in nature and therefore require an interactive approach.
When focusing on the second specific objective (second part of the thesis), the core aspect was to integrate image data to improve the planimetric and topological accuracy of the reconstructed models. This objective was also achieved, while contributing several innovative aspects to the scientific community. It has already been shown that object-space line segments can be derived by matching image-based line segments in projective geometry through the intersection of viewing-ray planes. In this study, scene constraints were incorporated into the matching process to minimize matching ambiguities. Three well-defined evidence measures were determined with respect to the scene, i.e., the roof models reconstructed from point clouds: the gradient of a roof outline, the distance of a point to the plane, and the symmetry between two gutters belonging to opposite roof pairs. False correspondences representing edges on the footprint, beneath the roof outline, or even somewhere on a wall were avoided using the first two constraints. Having identified the rough symmetries, ambiguities especially relevant to oblique roofs were further avoided. As in many other studies, this experiment also had to deal with incompleteness issues such as gaps. In addition, some erroneous derivations, such as deviated edges for eave lines, were found. The effects of these defects were mitigated by predicting the most probable boundary edges to represent such cases; for this, known structural arrangements of roof models and specially defined convergence priors were applied. Although some false positive and false negative narrow regions remained in some roof outlines, the process produced complete, refined roof boundaries for each roof model. The evaluation results showed an increased planimetric accuracy (0.55 m) for the refined building models.
This is the highest planimetric accuracy among the results of the methods submitted to the ISPRS project, which is a considerable achievement. In addition, the per-object correctness and the topological accuracy of the refined models reached the maximum of 100%. These statistics show that the refinement strategies increased both the planimetric and the topological accuracy considerably.
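The intersection of viewing-ray planes and the point-to-plane distance constraint mentioned above can be sketched in plain Python. This is a minimal illustration only, assuming each image line segment together with its projection centre defines a plane n·x = d in object space; the plane values below are made up for the example.

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def intersect_planes(n1, d1, n2, d2):
    """Intersect the planes n1.x = d1 and n2.x = d2.
    Returns (point, direction) of the 3D line, or None if parallel."""
    u = cross(n1, n2)                 # direction of the object-space line
    if dot(u, u) < 1e-12:
        return None                   # viewing-ray planes are parallel
    # seek a point p = a*n1 + b*n2 lying on both planes (2x2 system)
    g11, g12, g22 = dot(n1, n1), dot(n1, n2), dot(n2, n2)
    det = g11 * g22 - g12 * g12
    a = (d1 * g22 - d2 * g12) / det
    b = (d2 * g11 - d1 * g12) / det
    p = tuple(a * x + b * y for x, y in zip(n1, n2))
    return p, u

def point_plane_distance(p, n, d):
    """Unsigned distance of point p to plane n.x = d; a check of this
    kind can reject correspondences lying off the roof plane."""
    return abs(dot(n, p) - d) / dot(n, n) ** 0.5

# illustrative planes x = 1 and y = 2 meet in the vertical line through (1, 2, 0)
p, u = intersect_planes((1, 0, 0), 1.0, (0, 1, 0), 2.0)
```

The same distance test, applied with the roof plane fitted from the point cloud, corresponds to the second of the scene constraints described in the text.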
In this paper, we present a method for regularizing noisy 3D reconstructions, which is especially well suited for scenes containing planar structures like buildings. At horizontal structures, the input model is divided into slices, and for each slice an inside/outside labeling is computed. With the outlines of each slice labeling, we create an irregularly shaped volumetric cell decomposition of the whole scene. Then, an optimized inside/outside labeling of these cells is computed by solving an energy minimization problem. For the cell labeling optimization we introduce a novel smoothness term, where lines in the images are used to improve the regularization result. We show that our approach can take arbitrary dense meshed point clouds as input and delivers well-regularized building models, which can be textured afterwards.
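The inside/outside labeling as energy minimization can be illustrated with a deliberately tiny sketch: a data term per cell plus a pairwise smoothness penalty, minimized by brute force over binary labelings. The cell costs, the chain of neighbors, and the exhaustive solver are all illustrative; the actual method operates on an irregular volumetric cell decomposition with an image-line-based smoothness term.

```python
from itertools import product

def optimal_labeling(data_cost, neighbors, smoothness):
    """Minimize E(l) = sum_c data_cost[c][l_c]
                     + smoothness * #{(i, j) in neighbors : l_i != l_j}
    over binary labels (0 = outside, 1 = inside) by exhaustive search."""
    n = len(data_cost)
    best, best_energy = None, float("inf")
    for labels in product((0, 1), repeat=n):
        e = sum(data_cost[c][labels[c]] for c in range(n))
        e += smoothness * sum(labels[i] != labels[j] for i, j in neighbors)
        if e < best_energy:
            best, best_energy = labels, e
    return best, best_energy

# four cells in a row; the first two look "outside", the last two "inside"
cost = [(0, 2), (0, 2), (2, 0), (2, 0)]
edges = [(0, 1), (1, 2), (2, 3)]
labels, energy = optimal_labeling(cost, edges, smoothness=0.5)
```

With these costs the optimum keeps the data-preferred labels and pays the smoothness penalty for the single inside/outside transition; in practice such energies are minimized with graph cuts rather than enumeration.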
condition of road networks, and structural and architectural aspects of cultural landmarks and historic buildings (Randall, 2013). All of these can serve as physical (digital) records. Scanning buildings has to date mostly used static terrestrial scanners based on a tripod system. One important misconception to address: because scanners are optical systems, only what the scanner can “see” is captured, so scanners cannot go through walls or other obstructions (Randall, 2013). Although static scanning technology delivers good results when scanning the outside of a building, there are a number of limitations when scanning indoor locations due to the need to use “tie points” with physical targets to create a reference frame. The manual placement of the laser scanner on multiple stations interrupts the scanning and thus reduces the effective scanning rate (points per second), and the placement of tie points requires additional manual effort. New technologies, Indoor Mobile Mapping Systems (IMMS) and Simultaneous Localization and Mapping (SLAM), are emerging as the most prominent systems for indoor mapping (Thomson, Apostolopoulos, Backes, & Boehm, 2013). 3D scanning creates a foundation for a BIM approach by capturing existing conditions in a highly accurate, three-dimensional format that can be used as a basis for developing project designs (Randall, 2013). PAS 1192-2 (BSi, 2013) also notes that a point cloud survey shall be provided to verify the completeness of the as-constructed model. For FM, a key application of scanning is as-built recording, or assessment of project performance to support project guarantees during the “as constructed” phase of a project (Randall, 2013).
where x denotes the real vector of unknown parameters (the point movements) and v denotes the real vector of unknown residuals, that is, the degree of constraint satisfaction. Both A (referred to as the design matrix) and l (the vector of observations) need to be specified in advance to define the constraints. The constraints are perfectly satisfied if v = 0. As this is generally not possible for all constraints, the function vᵀ·P·v is minimized, where P defines the weights between different constraints. If there are non-linear constraints, these are usually replaced by their linear approximations. Sarjakoski & Kilpeläinen (1999) and Harrie & Sarjakoski (2002) show how to solve the problem for large datasets, also considering other generalization operators. Applying the same adjustment technique, Koch & Heipke (2005) and Koch (2007) additionally show how to cope with hard inequality constraints that are needed to ensure consistency between DLMs and digital terrain models. Related problems are discussed in the generalization domain, for example, a river must not run uphill (Gaffuri, 2007). Least squares adjustment allows different generalization operators to be handled, yet the existing generalization methods that are based on this technique do not take the discrete nature of map generalization into account. Usually, continuous variables are used to model a problem. These are not suited, for example, to represent whether a vertex of an original line is selected for its simplification. In their system, Sarjakoski & Kilpeläinen (1999) define a constraint that attempts to pull an unwanted vertex onto the line connecting its predecessor and successor. This is a smart workaround to also allow for line simplification, but of course it is not a solution to the discrete problem of vertex selection, which only allows two states and none in between.
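The adjustment described above can be made concrete with a deliberately tiny example in plain Python, using the convention v = A·x − l and solving the normal equations (AᵀPA)·x = AᵀPl for two unknowns. The three constraints and their weights below are made up for illustration.

```python
def weighted_ls_2(A, l, w):
    """Solve min_x (A x - l)^T P (A x - l) for two unknown parameters,
    with P = diag(w), via the normal equations (A^T P A) x = A^T P l."""
    # accumulate the 2x2 normal matrix N and right-hand side r
    N = [[0.0, 0.0], [0.0, 0.0]]
    r = [0.0, 0.0]
    for (a0, a1), li, wi in zip(A, l, w):
        N[0][0] += wi * a0 * a0
        N[0][1] += wi * a0 * a1
        N[1][1] += wi * a1 * a1
        r[0] += wi * a0 * li
        r[1] += wi * a1 * li
    N[1][0] = N[0][1]
    det = N[0][0] * N[1][1] - N[0][1] * N[1][0]
    x0 = (r[0] * N[1][1] - r[1] * N[0][1]) / det
    x1 = (r[1] * N[0][0] - r[0] * N[1][0]) / det
    return x0, x1

# three slightly conflicting constraints on two point movements:
# x0 = 1, x1 = 2, and x0 + x1 = 3.3 (cannot all hold, so v != 0)
A = [(1, 0), (0, 1), (1, 1)]
l = [1.0, 2.0, 3.3]
x = weighted_ls_2(A, l, w=[1.0, 1.0, 1.0])   # approximately (1.1, 2.1)
```

The residual 0.3 in the third constraint is distributed over all three according to the weights in P; raising one weight pulls the solution towards satisfying that constraint more exactly.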
Sester (2005) applies adjustment calculus to satisfy constraints in building simplification, but also points out that it does not solve the whole problem: the elimination of details is done in a first step, which is not based on optimization. The handling of hard constraints in optimization approaches is seldom addressed in the map generalization literature. Often constraints are relaxed, as they are conflicting (Harrie & Weibel, 2007). A few exceptions exist in the context of discrete optimization, which is addressed in the next section.
This report presents a practical approach to stacked generalization in surrogate model based optimization. It exemplifies the integration of stacking methods into the surrogate model building process. First, a brief overview of the current state of surrogate model based optimization is presented. Stacked generalization is introduced as a promising ensemble surrogate modeling approach. Then two examples (the first based on a real-world application and the second on a set of artificial test functions) are presented. These examples clearly illustrate two properties of stacked generalization: (i) combining information from two poorly performing models can result in a well performing model, and (ii) even if the ensemble contains a well performing model, combining its information with information from poorly performing models results in only a relatively small performance decrease.
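Property (i) can be illustrated with a deliberately tiny sketch: two biased base surrogates are combined by a linear meta-model whose weights are fitted by least squares on their predictions. The objective, the base models, and the sample points are all made up for this illustration and are not taken from the report's examples.

```python
def fit_stack_weights(preds1, preds2, targets):
    """Least-squares weights (w1, w2) of the linear meta-model
    w1*base1 + w2*base2, fitted on base-model predictions."""
    g11 = sum(p * p for p in preds1)
    g12 = sum(p * q for p, q in zip(preds1, preds2))
    g22 = sum(q * q for q in preds2)
    r1 = sum(p * y for p, y in zip(preds1, targets))
    r2 = sum(q * y for q, y in zip(preds2, targets))
    det = g11 * g22 - g12 * g12
    return (r1 * g22 - r2 * g12) / det, (r2 * g11 - r1 * g12) / det

truth = lambda x: x * x          # "expensive" objective (illustrative)
base1 = lambda x: x * x - 1.0    # poor surrogate: constant under-estimate
base2 = lambda x: x * x + 1.0    # poor surrogate: constant over-estimate

xs = [0.0, 1.0, 2.0]
w1, w2 = fit_stack_weights([base1(x) for x in xs],
                           [base2(x) for x in xs],
                           [truth(x) for x in xs])
stacked = lambda x: w1 * base1(x) + w2 * base2(x)
```

Here the meta-model learns w1 = w2 = 0.5, so the stacked surrogate is exact even though both base models are off by one everywhere, which is exactly the flavor of property (i).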
The anti-diversification of point-folded structures presented in Part II is restricted to triangle-based elements and thus only presents a first step towards enabling a complete and efficient building system based on point-folded structures. We believe two important parts of future work to be (a) generalizing the technique to elements based on other types of polygons and (b) introducing more control over the dual, second layer. Already mentioned in Chapter 7 is that higher dimensional search spaces would rapidly reduce the possible accuracy due to increased memory consumption. One idea to alleviate this problem could be to trade space complexity for time by using higher order elements, i.e., instead of piecewise linear prisms using, e.g., piecewise quadratic ones. Initial experiments have shown that this efficiently increases accuracy of the representation, necessitating much fewer refinement levels, while at the same time increasing the complexity of the intersection tests. Additionally, problems relating to the representation of higher-dimensional parachutes and the mapping from angle space need to be solved. The anti-diversification method presented was entirely discrete. Further integrating continuous optimization goals in the process, such as planarity of the faces of the second layer, calls for a simultaneous optimization for continuous and discrete goals, possibly necessitating completely new approaches. In this regard, also allowing modification of not only the geometry, but also the connectivity of both layers might be necessary to enable enough degrees of freedom for successful optimization.
Explicit expressions for the generating functions of one-sided, two-sided and three-sided PWs have been found so far. The first class consists of partially directed walks and has a rational generating function. The second class was shown to have an algebraic generating function by Duchi [Duc05], and recently [BM08] the third class was solved and its generating function was found not to be D-finite. A (possibly multivariate) function f(z) is D-finite if the vector space over C(z) spanned by its derivatives is finite dimensional. In the univariate case this means that f is a solution of a homogeneous linear ordinary differential equation with polynomial coefficients. A univariate D-finite function can have at most finitely many singularities, namely the zeroes of the coefficient of the highest order derivative. Guttmann [DGJ07, GGJD09, Gut06] proposed to study the polygon version of the problem, meaning walks whose last vertex is adjacent to the starting vertex. As above, the property of being prudent demands a starting vertex and a terminal vertex. So prudent polygons are rooted polygons with a directed root edge. Note further that a prudent polygon (PP) which ends, say, to the right of the origin (i.e. in the vertex (1, 0)) may never step right of the line x = 1, and furthermore if the walk hits that line it has to head directly to the vertex (1, 0). So prudent polygons are directed in the sense that they contain a corner of their box. Moreover, a k-sided PP can be interpreted as a (k − 1)-sided PW confined to a half-plane. In this chapter we deal with the polygon versions of the two-sided and three-sided walks, referred to as two-sided and three-sided PPs. Enumeration of one-sided PPs is trivial, since these are simply rows of unit cells. We give explicit expressions for the half-perimeter generating functions of two-sided and three-sided PPs and show that the latter is not D-finite, which was also expected on numerical grounds.
To our knowledge, three-sided PPs are the first exactly solved polygon model with a non-D-finite half-perimeter generating function. Concerning the enumeration of the full class of PPs, we are able to give a system of functional equations satisfied by the generating function; however, we have not been able to solve it so far. The situation is similar for the walk case.
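The D-finiteness criterion used here can be made concrete with a standard textbook example, not specific to prudent polygons:

```latex
% A D-finite example: f(z) = 1/(1-z) satisfies a homogeneous linear
% ODE with polynomial coefficients,
\[
  (1-z)\,f'(z) - f(z)
  \;=\; (1-z)\,\frac{1}{(1-z)^{2}} - \frac{1}{1-z}
  \;=\; 0,
\]
% so its only possible singularity is the zero z = 1 of the leading
% coefficient (1-z).  By contrast, a function such as tan(z), with
% poles at every z = \pi/2 + k\pi, has infinitely many singularities
% and therefore cannot be D-finite.
```

Singularity counting of this kind is the typical route to ruling out D-finiteness: exhibiting infinitely many singularities of an exact expression contradicts the finitely-many-singularities property stated above.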
One particular interest in remote sensing is the 3D reconstruction of urban areas for diverse applications such as 3D city modelling and urban and crisis management. A typical method for reconstructing urban areas on a large scale is to employ stereo optical imagery provided by high-resolution space-borne sensors in an ideal acquisition situation. However, because of limitations in acquiring such ideal images, such as cloud effects as well as limited absolute localization accuracy, optical stereo might not always be the optimal choice. In contrast, the amplitude images provided by SAR sensors do not suffer from the aforementioned issues and can thus provide input to time-critical 3D reconstruction tasks. Given the growing archive of very high-resolution SAR and optical imagery, developing a framework that takes advantage of both SAR and optical imagery provides a great opportunity to produce 3D spatial information over urban areas as an application of data fusion in remote sensing.
a young dog a mailman bitten has here already often ‘It has happened often here already that a young dog has bitten a mailman’
Wurmbrand (2004) notes that these examples present a potential problem for the SSG. We believe, however, that the problem is only apparent. There are two potential explanations for why German permits vP-internal subjects and objects, both of which are compatible with A&A’s (2001) analysis: (a) One possibility is that German permits feature-chains between null clitics and in situ DP arguments, essentially qualifying as a clitic doubling language (following Haider 1985; Fanselow 2001). Hence, there is no violation of the SSG understood as in (1’). (b) Alternatively, German, being a head-final language, lacks head-movement (Haider 1993, to appear). Hence, German lacks the formation of complex heads like (15) that would lead to a violation of (16).
In general, a non-equivalent variant of POD, known as factor analysis, is well established and has been used for various applications [1, 2, 3, 18]. Unlike POD, factor analysis assumes that the data have a strict factor structure and looks for the factors that account for the common variance in the data. By contrast, PCA, the finite-dimensional counterpart of POD, accounts for the maximal amount of variance of the observed variables. PCA consists of identifying a set of variables, known as principal components, that retain as much of the variation in the original set of variables as possible. Similarly, Principal Expectile Analysis (PEC), which generalizes PCA to expectiles, was recently developed as a dimension reduction tool for extreme value theory. These POD-equivalent tools have also been adopted in several analyses, for example [1, 9, 17, 24]. Yet most of the literature exploits only real-life data for dimension reduction. Although some analyses rely heavily on real-life data, there is a clear need for tools that utilize simulated data generated from non-standard models with nonlinear differential equations, which are steadily increasing in number and hold potential for enriching such analyses.
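The PCA step described above can be sketched minimally for 2D observations: center the data, form the sample covariance matrix, and extract the leading principal component by power iteration. The toy data and the use of power iteration (rather than a full eigendecomposition) are illustrative assumptions, not taken from the cited works.

```python
def principal_component(points, iters=50):
    """Leading principal component of 2D points: center the data,
    build the 2x2 sample covariance matrix, run power iteration."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    xs = [p[0] - mx for p in points]
    ys = [p[1] - my for p in points]
    # sample covariance entries (divide by n - 1)
    cxx = sum(x * x for x in xs) / (n - 1)
    cxy = sum(x * y for x, y in zip(xs, ys)) / (n - 1)
    cyy = sum(y * y for y in ys) / (n - 1)
    v = (1.0, 0.0)
    for _ in range(iters):
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v

# toy points that vary almost entirely along the direction (1, 2)
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.1), (4.0, 7.9)]
direction = principal_component(data)
```

The recovered unit vector is close to (1, 2)/√5, i.e., the single direction along which these observations vary; projecting onto it retains nearly all of the variance, which is the dimension-reduction idea the paragraph describes.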
A simplified structure of the general design of the digital beam-forming SAR system based on the reflector antenna is depicted in Fig. 1. It consists of a parabolic dish, an array of primary antennas located in the focal plane, a feed system circuitry, and a digital control system. Each feed element is connected to a Transmit/Receive (TR) module. The receive part consists of a switch, a low noise amplifier, a band-pass filter, and an analog-to-digital converter. In the transmit part, a conventional analog configuration based on phase shifters is used.
Furthermore, I would like to thank all my colleagues at the mathematics department of Trier University for a pleasant time not only during working hours but also in many free-time activities. I also want to thank Dr. Christina Schenk, Olli Hauke, Gennadij Heidel and Daniel Hoffmann for proofreading this thesis and for many joyful “Spieleabende” (game nights). Special thanks go to Dr. Christina Schenk, who had to endure my presence for countless hours in her office. Finally, I would like to express my deepest thanks to my family, my parents Nguyen Van Thoai and Nguyen Hong Phan, as well as my sister Hanh Quyen Nguyen-Dudek, for supporting me in so many ways and encouraging me again and again in all situations of life. Moreover, I thank my brother-in-law Sebastian Dudek, my two nieces Emma and Alina, and my closest friends Bernhard Schmitt and Achim Schmillen for their moral support.
During the last decades, several approaches for the reconstruction of 3D building models have been developed. Starting in the 1980s with manual and semi-automatic reconstruction of 3D building models from aerial images, the degree of automation has increased in recent years, so that these methods have become applicable to various areas. Some typical applications and examples are shown in section 1.1. Especially since the 1990s, when airborne light detection and ranging (LiDAR) technology became widely available, approaches for (semi-)automatic building reconstruction of large urban areas turned out to be of particular interest. Only in recent years have some large cities built detailed 3D city models. Although much effort has been put into the development of a fully automatic reconstruction strategy in order to overcome the high costs of semi-automatic reconstruction, no solution proposed so far meets all requirements (e.g., in terms of completeness, correctness, and accuracy). The reasons for this are manifold, as discussed in section 1.2. Some of them are manageable, for example, either by using modern sensors which provide denser and more accurate point clouds than before or by incorporating additional data sources such as high-resolution images. However, there is considerable demand for 3D building models in areas where such modern sensors or additional data sources are not available. Therefore, in this thesis a new fully automatic reconstruction approach for semantic 3D building models from low- and high-density airborne laser scanning (ALS) data of large urban areas is presented and discussed. Additionally, it is shown how automatically derived building knowledge can be used to enhance existing building reconstruction approaches. The specific research objectives are outlined in section 1.3, which also includes an overview of the proposed reconstruction workflows and the contribution of this thesis.
In order to have lean workflows with good performance, some general assumptions on the buildings to be reconstructed are imposed and explained in section 1.4. The introduction ends with an outline of this thesis in section 1.5.
Section 7.2 described the first demonstration example of the optimization of some lighting quality aspects of the hybrid LED-lamp adapted to color objects. There, the chosen lighting setting was a specific part of a museum with oil color paintings, the adapted color objects were the oil color objects, and the optimization target was the average new specific oil color rendering index of seventy-nine oil color objects in the museum lighting application, for the cases of 3000 K, 4000 K, 5000 K and 6500 K at the hot binning (80 °C). In fact, the ambient temperature can normally be kept constant only in a museum, because of the strict temperature and humidity requirements for paintings. Unfortunately, the operating temperature of the hybrid LED-lamp in other applications is not always constant at 80 °C, but changes depending on factors such as the weather, season, daytime or nighttime, or other operating conditions. When the operating temperature changes, the previous color mixing rate is no longer correct, which in turn changes all lighting quality parameters of the hybrid LED-lamp, such as chromaticity, the correlated color temperature, whiteness, color rendering indexes, luminous flux and luminous efficacy. Therefore, in this section the control system structure, which is similar to that described in Figure 7.7, will be used to optimize the new specific color rendering indexes of the warm white hybrid LED-lamp (3000 K) adapted to all red objects in general shop lighting applications. The hybrid LED-lamp will be assembled from the varied semiconductor LEDs and the different warm white PC-LEDs of three manufacturers (A, B and C) as well as their combinations at the hot binning (80 °C). Then, the optimized spectra will be investigated at the different operating
So under conditions of rapid change, the model shows that total return is increased by allowing an additional decision point, that is, by rapid adaptation of strategy.
The introduction of formal models into management theory has several advantages. First, it forces precision in the use of management concepts. Verbal concepts are often shrouded in ambiguity; in order to be incorporated in a formal model, they must be stated in such a way that their relations to other elements of the model are clear. Second, the web of interconnected assumptions which typically comprise
diabetes. In diabetic foot ulcers, increased Langerhans cells (Stojadinovic et al., 2013; Strom et al., 2014), dermal macrophages (Loots et al., 1998), and neutrophils (Vatankhah et al., 2017) positively correlate with disease severity in humans. Moreover, the importance of adipose tissue for promoting closure of non-diabetic skin wounds was recently demonstrated in Drosophila (Franz et al., 2018). Yet, currently the investigation of wound healing in 3D skin models is limited to wound closure by fibroblasts and keratinocytes. Thus, the incorporation of resident and circulating immune cell types, as well as physiologically relevant delivery of nutrients, is necessary to recapitulate diseases associated with chronic skin wounds. Furthermore, the growth of the resident dermal bacterium Staphylococcus aureus (S. aureus) (Popov et al., 2014), as well as the pathogenic bacterium Acinetobacter baumannii (de Breij et al., 2012), has been established on uninjured 3D skin equivalents, and drug-resistant strains of S. aureus have also been used in a wound infection model for therapeutic development (Ventress et al., 2016). The physiological relevance of 3D skin wound models (diabetic ulcers or chronic wounds) can progress with the addition of today’s bioengineering techniques. Although skin bioprinting is currently used to produce skin equivalents for in vivo wound treatment (summarized in a recent review; He et al., 2018), the technology has thus far not been applied to generate in vitro skin wound models. Still, it is clear that bioprinting technology will have a substantial impact on the development of in vitro skin wound models.
which is significantly more difficult to estimate than the previous models. Again, it is assumed that the errors are temporally independent, and that x_{i,0} and y_{i,0} are observable. OLS is
ineffective due to simultaneity bias. Maximum likelihood is unattainable because the covariance matrix of the error term depends on the expected values of pre-sample observations, about which nothing is known. In fact, no satisfactory estimator has been found (Beck et al., 2006). To address this problem, two approximations have been proposed (Elhorst, 2003): the first based on the Bhargava and Sargan method (BS) and the second on the Nerlove and Balestra method (NB). The brief derivation that follows [see (Elhorst, 2003) for additional details] is intended to clarify the cost functions that are optimized in this paper.
The present project aims to evaluate the compatibility of a gelatin-methacryloyl-based bioink with hepatocytes and to characterize it. This study is the basis for generating bioprinted liver tissue-like models with co-cultures of human hepatocytes, stellate cells and endothelial cells.