Automated generation of 3D building models from dense point clouds & aerial photos

images, is recommended. Moreover, this compensates for the prevailing shortcomings of each data set, such as data gaps, resolution issues, and occlusion/shadow effects. Point clouds and images have complementary properties: in point clouds, the vertical accuracy is far better than in aerial images, whereas the planimetric accuracy of images is better (Lee et al., 2008). The recent ISPRS benchmark project on urban object classification and 3D building reconstruction found that the vertical accuracy of building models reconstructed from point clouds is twice that of the same buildings reconstructed from aerial images (Rottensteiner et al., 2012). Thus, algorithms in the third group fuse both point clouds and multiple aerial images, especially to achieve a high geometric accuracy for the 3D building models. The question of how to optimally use image data together with ALS data to increase the accuracy has still not been fully solved. Few approaches have been published (e.g. Ma, 2004; Kim et al., 2006; Ok et al., 2011) and further experiments are needed. Among the existing approaches, two main processing strategies are found: (a) model-driven and (b) data-driven (Vosselman and Maas, 2010). Model-driven approaches opt for pre-defined model libraries (e.g. Maas and Vosselman, 1999; Haala et al., 1998) and perform well in the presence of data gaps. However, in some instances the results can be of inferior quality, especially for complicated architectural designs. Data-driven approaches perform well for complex roof shapes (e.g. Brenner, 2000; Vosselman and Dijkman, 2001; Sohn et al., 2008) by recognizing adjacent planar faces and their relations (e.g. ridges and step-edges) to achieve topologically and geometrically correct 3D building models. Data fusion can easily be realized with data-driven methods, which reduce the model

CHALLENGES IN FUSION OF HETEROGENEOUS POINT CLOUDS

Urban environments can efficiently be reconstructed in 3d from images by photogrammetric computer vision technologies such as research prototypes (Furukawa and Ponce, 2010, Irschara et al., 2012, Kuhn et al., 2017) or commercial products, e.g., Pix4D, nFrames, Agisoft or Acute3D. Depending on the image source, the reconstructed point clouds show different building parts and have different point densities and accuracies. For instance, (Rupnik et al., 2018) derive digital surface models from very high resolution satellite images with a ground sampling distance of 0.3 m, i.e., their landscape model may have a point density of only 9 points per m². In contrast, we derive point clouds from terrestrial images or low-flying UAVs with several thousand 3d points per m². As an alternative to reconstruction from images, point clouds can also be acquired by (mobile) laser scanning systems as used in (Munoz et al., 2009), (Lauterbach et al., 2015) or (Vosselman et al., 2017). Fusing such heterogeneous point clouds is a very active and very challenging research topic. The motivation can be to improve 3d models by combining different sources (ground and aerial perspective) or to insert a dense building model as point of interest into a sparser landscape model. This problem can be solved in three different ways. First, if images are available, the joint orientation of images from different sensors can be employed. (Koch et al., 2016) and (Roth et al., 2017) propose strategies to combine images with large scale differences and large perspective changes, respectively. Second, 2d features in the images and 3d features in the point clouds can be analyzed to find correspondences for fusing different models, cf. (Schenk and Csathó, 2002) or (Mishra and Zhang, 2012). Third, the point clouds are coregistered based on 3d information of the points themselves. A review of recent approaches in this field is given in (Pomerleau et al., 2015).
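The third strategy, coregistration on the 3d point information alone, typically boils down to repeatedly estimating a rigid transform between matched points, as in the ICP-style methods surveyed by (Pomerleau et al., 2015). A minimal sketch of that core least-squares step (the Kabsch/Procrustes solution), assuming point correspondences are already known, could look as follows:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares R, t with R @ src_i + t ≈ dst_i (Kabsch/Procrustes)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection sneaking into the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, c_dst - R @ c_src

# Toy example: recover a known rotation and translation.
rng = np.random.default_rng(0)
src = rng.random((100, 3))
a = np.deg2rad(30.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([1.0, -2.0, 0.5])
R_est, t_est = rigid_align(src, src @ R_true.T + t_true)
```

In a full ICP loop, the correspondences would be re-estimated by nearest-neighbor search after every alignment step; the sketch shows only the closed-form inner step.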

Regularized 3d modeling from noisy building reconstructions

In this paper, we present a method for regularizing noisy 3D reconstructions, which is especially well suited for scenes containing planar structures like buildings. At horizontal structures, the input model is divided into slices and for each slice, an inside/outside labeling is computed. With the outlines of each slice labeling, we create an irregularly shaped volumetric cell decomposition of the whole scene. Then, an optimized inside/outside labeling of these cells is computed by solving an energy minimization problem. For the cell labeling optimization we introduce a novel smoothness term, where lines in the images are used to improve the regularization result. We show that our approach can take arbitrary dense meshed point clouds as input and delivers well regularized building models, which can be textured afterwards.
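The cell labeling described above can be read as an energy minimization: a data term per cell plus a smoothness penalty wherever neighboring cells take different labels. The sketch below is a drastic simplification, a 1-D chain of cells solved by exhaustive search, standing in for the paper's volumetric cell decomposition and its line-based smoothness term; all costs are illustrative.

```python
from itertools import product

def label_cells(data_cost, smoothness, lam=1.0):
    """Exhaustive inside(1)/outside(0) labeling of a 1-D chain of cells.

    data_cost[i]:  (cost of label 0, cost of label 1) for cell i.
    smoothness[i]: penalty if cells i and i+1 take different labels
                   (stand-in for the image-line-based term in the paper).
    """
    n = len(data_cost)
    best, best_e = None, float("inf")
    for labels in product((0, 1), repeat=n):
        e = sum(data_cost[i][labels[i]] for i in range(n))
        e += lam * sum(smoothness[i]
                       for i in range(n - 1) if labels[i] != labels[i + 1])
        if e < best_e:
            best, best_e = labels, e
    return best, best_e

# Three cells: strong outside evidence, weak evidence, strong inside evidence.
data = [(0.1, 0.9), (0.4, 0.6), (0.9, 0.1)]
labels, energy = label_cells(data, smoothness=[0.2, 0.2])
```

The ambiguous middle cell is pulled to the cheaper side by its own data term, and the smoothness weight controls how many label changes the boundary may have; real systems solve this with graph cuts rather than enumeration.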

Extracting Semantically Annotated 3D Building Models with Textures from Oblique Aerial Imagery

This property makes oblique cameras well suited for creating visualizations of virtual city models if their interior and exterior orientation is known, as the recorded bitmaps can be accurately mapped onto existing geo-referenced 3D data sets. For example, (Frueh et al., 2004) project oblique aerial imagery onto triangular meshes of city buildings obtained from a terrestrial laser scanner. A 2D-to-3D registration process based on line segments is used to refine the set of initial camera poses before the image patches are chosen for each face by evaluating the viewing angle, occlusion, neighborhood relationship and other criteria. For visualization purposes the resulting collection of colored triangles is combined into a single texture atlas by a greedy placement algorithm. Similarly, (Wang et al., 2008) color LiDAR data of urban structures by projecting the virtual wall surfaces back into a set of tilted views and comparing the results against the edges of the façades inside the source bitmaps. In (Stilla et al., 2009) oblique thermal infrared imagery is used to texture a 3D model that has been previously constructed from nadir-looking aerial photographs covering the visible range of the spectrum. Their back-and-forth projection approach involves occlusion detection using a depth buffer and allows estimating the actual metric resolution per pixel of the output bitmaps, aiming at GIS applications. Aside from texture mapping, true 3D surface information of urban objects can be regained from oblique imagery itself using photogrammetric techniques. The general suitability of tilted aerial photographs for terrain reconstruction was outlined as early as (Schultz, 1994), where projectively distorted stereo scenes are processed by a dense matcher computing a weighted cross-correlation score. More recently, (Rupnik et al., 2014) generate point clouds from state-of-the-art oblique aerial cameras that have been oriented with an adapted bundle

Grammar-guided reconstruction of semantic 3D building models from airborne LiDAR data using half-space modeling

During the last decades, several approaches for the reconstruction of 3D building models have been developed. Starting in the 1980s with manual and semi-automatic reconstruction of 3D building models from aerial images, the degree of automation has increased in recent years so that these methods became applicable to various areas. Some typical applications and examples are shown in section 1.1. Especially since the 1990s, when airborne light detection and ranging (LiDAR) technology became widely available, approaches for (semi-)automatic building reconstruction of large urban areas have been of particular interest. Only in recent years have some large cities built detailed 3D city models. Although much effort has been put into the development of a fully automatic reconstruction strategy in order to overcome the high costs of semi-automatic reconstruction, no solution proposed so far meets all requirements (e.g., in terms of completeness, correctness, and accuracy). The reasons for this are manifold, as discussed in section 1.2. Some of them are manageable, for example, either by using modern sensors which provide denser and more accurate point clouds than before or by incorporating additional data sources such as high-resolution images. However, there is considerable demand for 3D building models in areas where such modern sensors or additional data sources are not available. Therefore, in this thesis a new fully automatic reconstruction approach of semantic 3D building models for low- and high-density airborne laser scanning (ALS) data of large urban areas is presented and discussed. Additionally, it is shown how automatically derived building knowledge can be used to enhance existing building reconstruction approaches. The specific research objectives are outlined in section 1.3, which includes an overview of the proposed reconstruction workflows and the contribution of this thesis. In order to obtain lean workflows with good performance, some general assumptions on the buildings to be reconstructed are imposed and explained in section 1.4. The introduction ends with an outline of this thesis in section 1.5.

Automatic detection and reconstruction of 2D/3D building shapes from spaceborne TomoSAR point clouds

Abstract—Modern spaceborne synthetic aperture radar (SAR) sensors, such as TerraSAR-X/TanDEM-X and COSMO-SkyMed, can deliver very high resolution (VHR) data beyond the inherent spatial scales of buildings. Processing these VHR data with advanced interferometric techniques, such as SAR tomography (TomoSAR), allows for the generation of four-dimensional point clouds, containing not only the 3-D positions of the scatterer location but also estimates of seasonal/temporal deformation on the scale of centimeters or even millimeters, making them very attractive for generating dynamic city models from space. Motivated by these opportunities, the authors have earlier proposed approaches that demonstrated first attempts toward the reconstruction of building facades from this class of data. The approaches work well when a high density of facade points exists, and the full shape of the building can be reconstructed if data are available from multiple views, e.g., from both ascending and descending orbits. However, there are cases when no or only few facade points are available. This usually happens for lower buildings and renders the detection of facade points/regions very challenging. Moreover, problems related to the visibility of facades mainly facing toward the azimuth direction (i.e., facades orthogonally oriented to the flight direction) can also cause difficulties in deriving the complete structure of individual buildings. These problems motivated us to reconstruct full 2-D/3-D shapes of buildings via exploitation of roof points. In this paper, we present a novel and complete data-driven framework for the automatic (parametric) reconstruction of 2-D/3-D building shapes (or footprints) using unstructured TomoSAR point clouds, particularly those generated from one viewing angle only. The proposed approach is illustrated and validated by examples using TomoSAR point clouds generated from TerraSAR-X high-resolution spotlight data stacks acquired from ascending orbit covering two different test areas, one containing simple moderate-sized buildings in Las Vegas, USA, and the other containing relatively complex building structures in Berlin, Germany.

3D building reconstruction from spaceborne TomoSAR point cloud

Modern synthetic aperture radar satellites (e.g., TerraSAR-X/TanDEM-X and COSMO-SkyMed) provide meter-resolution data which, when processed using advanced interferometric techniques such as SAR tomography (or TomoSAR), enable the generation of 3-D (or even 4-D) point clouds with a point density of around 1 million points/km². Taking into consideration the special characteristics of these point clouds, e.g., low positioning accuracy (on the order of 1 m), a high number of outliers, gaps in the data, and rich facade information (due to the side-looking geometry), the thesis aims to explore for the first time the potential of explicitly modelling the individual roof surfaces to reconstruct 3-D prismatic building models from space. The developed approach is completely data-driven and, except for the vertical-facades assumption, it does not impose any constraint on the shape of the building footprint (or its constituent roof segments), i.e., any arbitrarily shaped building can be reconstructed in 3-D with several roof layers. The workflow is modular and consists of the following main modules:

Automatic Generation of Building Models with Levels of Detail 1-3

In this paper, we present a combination of our previously published methods and the workflow for automatic data analysis, consisting of the orientation of images, the computation of depth maps, the generation of highly detailed 3D point clouds, and finally the interpretation of the data and the construction of building models. Our workflow is almost fully automatic; only very little manual interaction is needed for inspecting the intermediate results, for scaling the dense point cloud, and for rotating the scene into a selected coordinate system. The last two interactions could be skipped if the GPS information of the acquired images is used. Our software returns the recognized building parts, i.e., walls, roof planes, and windows, and we export the model in CityGML 2.0 format (Gröger et al., 2012).

Geometric Refinement of ALS-Data Derived Building Models Using Monoscopic Aerial Images

An inherent problem for reconstruction methods based on multiple data sources is the matching ambiguity of the extracted modeling cues. For the time being, the question of how to optimally use ALS data together with aerial imagery has still not been fully solved. In this paper, we present a novel approach for sequential refinement of 3D building models using a single aerial image. The core idea of the modeling improvement is to obtain refined model edges by intersecting roof planes accurately extracted from 3D point clouds with viewing planes assigned to the building edges detected in a high-resolution aerial photograph (cf. Figure 1). In order to minimize ambiguities that may arise during the integration of modeling cues, the ALS data is used as the master, providing initial information about the buildings being processed. While roof plane outlines derived from ALS data may suffer from ALS point spacing effects in planimetry, they are still a rather good approximation of reality. Therefore, 3D buildings derived from ALS data serve as input information on the shape and topology of building roofs. The aim of our research is to increase the geometric accuracy of those models and thus improve the overall quality of the reconstruction. In order to evaluate the performance of our refinement algorithm, we compare the results of 3D reconstruction executed using only laser scanning data with a reconstruction enhanced by image information. Furthermore, quality assessment of both reconstruction outputs is performed based on a comparison to reference data, according to the validation methods standardized by the International Society for Photogrammetry and Remote Sensing [1].
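The core refinement step, intersecting a roof plane with a viewing plane to obtain a refined model edge, is ordinary plane-plane intersection. A small numpy sketch with hypothetical plane parameters (not taken from the paper's data):

```python
import numpy as np

def plane_intersection(n1, d1, n2, d2):
    """Intersection line of two planes given as n · x = d.

    Returns a point on the line and the unit line direction.
    """
    direction = np.cross(n1, n2)
    if np.linalg.norm(direction) < 1e-12:
        raise ValueError("planes are parallel, no unique intersection line")
    # A point on both planes: add the constraint direction · x = 0
    # to pin down one specific point on the line.
    A = np.vstack([n1, n2, direction])
    p = np.linalg.solve(A, np.array([d1, d2, 0.0]))
    return p, direction / np.linalg.norm(direction)

# Hypothetical tilted roof plane and vertical viewing plane through an image edge.
roof_n, roof_d = np.array([0.0, 0.5, 1.0]), 10.0   # 0.5*y + z = 10
view_n, view_d = np.array([1.0, 0.0, 0.0]), 4.0    # x = 4
p, v = plane_intersection(roof_n, roof_d, view_n, view_d)
```

The returned line is the refined roof edge; in the paper's setting the first plane would come from the ALS point cloud and the second from the back-projected image edge.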

From LIDAR Point Clouds to 3D Building Models

Airborne laser scanning (ALS), also referred to as airborne LIDAR (Light Detection And Ranging), is a very convenient source of information for extracting Digital Surface Models (DSM). ALS is an efficient system which can deliver very dense and accurate point clouds of the ground surface and the objects located on it. Providing high-quality height information of the landscape by means of LIDAR systems opens up an extensive range of applications in different subjects in photogrammetry and remote sensing. Moreover, laser scanning data is useful for an increasing number of mapping and GIS data acquisition purposes, including the detection and modeling of 3D objects. There are different types of returns from the target which provide valuable information about the object and the structures around it. Laser pulses have one important advantage: they partially penetrate vegetation through gaps between leaves and thus yield data reflected from points underneath the vegetation. This property of the laser ray is at the heart of the difference between first- and last-pulse data: while in first-pulse data the vegetation's surface is represented well, this is not the case in last-pulse data.

Mitigation of Positioning Bias in PSI Point Clouds

Persistent Scatterer Interferometry (PSI) [1] is a single-master multi-temporal Interferometric Synthetic Aperture Radar (InSAR) technique. It mitigates the effect of decorrelation and atmospheric disturbances by retrieving the height and deformation parameters of only phase-stable targets, known as Persistent Scatterers (PS). In order to present the results of PSI in a common geodetic reference system, a coordinate transformation between the radar datum and an earth-fixed Cartesian reference frame is required, which is called geocoding. Precise knowledge of the master orbit during acquisition, the PS azimuth and range timings, and the height of the PS are the necessary inputs for geocoding. These variables are used to solve the Range-Doppler-Ellipsoid equations [2] to retrieve the three-dimensional (3D) Cartesian coordinates of the PS.
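As a hedged sketch of the geocoding step, the Range-Doppler-Ellipsoid system can be handed to a generic root finder. The zero-Doppler formulation, the synthetic satellite state, and the initial guess below are assumptions for illustration, not the operational PSI processing chain.

```python
import numpy as np
from scipy.optimize import fsolve

A_E, B_E = 6378137.0, 6356752.3142  # WGS84 semi-major / semi-minor axes (m)

def geocode(sat_pos, sat_vel, slant_range, height):
    """Solve the Range-Doppler-Ellipsoid system for a PS position p.

    Range:     |p - sat_pos| = slant_range
    Doppler:   sat_vel · (p - sat_pos) = 0   (zero-Doppler geometry assumed)
    Ellipsoid: (x² + y²)/(A+h)² + z²/(B+h)² = 1
    """
    def equations(p):
        dx = p - sat_pos
        return [np.linalg.norm(dx) - slant_range,
                sat_vel @ dx,
                (p[0]**2 + p[1]**2) / (A_E + height)**2
                + p[2]**2 / (B_E + height)**2 - 1.0]
    # Start near the sub-satellite point; the left/right ambiguity of this
    # system is resolved in practice by the known look direction.
    p0 = sat_pos * A_E / np.linalg.norm(sat_pos)
    return fsolve(equations, p0)

# Synthetic check: satellite over the equator, target at the sub-satellite point.
sat = np.array([7000e3, 0.0, 0.0])
vel = np.array([0.0, 7500.0, 0.0])
p = geocode(sat, vel, slant_range=7000e3 - A_E, height=0.0)
```

The sketch is geometry only; the paper's topic, the *bias* in these coordinates, enters through errors in the timings, height, and orbit that feed this solver.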

Optimization of unmanned aerial vehicle augmented ultra-dense networks

We consider the existing ground network to be a C-RAN, where a central unit (CU) supports multiple access points such as remote radio heads (RRHs) and UAV small cells (UAV-SCs), as shown in Fig. 1. A set of ground co-channel downlink users, defined as K = {1, ..., K}, populates the network. The average long-term QoS requirement of the kth user is known via the average signal-to-interference-plus-noise ratio (SINR), denoted by γ_k, and the equivalent rate requirement (for fixed bandwidth), denoted by R_k. The set of operational RRHs is defined as J = {1, ..., J}, each with a maximum transmit power of P_j^max, where j denotes the index of the jth RRH. We assume that there exists a set of UAVs that can potentially be used for network augmentation, given as I = {1, ..., I}. Each UAV-SC has a maximum transmit power of P_i^max, where i denotes the index of the ith UAV-SC. The spatial position of the ith UAV is described by (x_i, y_i, z_i), which describes the longitude, latitude, and altitude, respectively. The association between the ith UAV-SC and the kth user is defined by α_ik ∈ {0, 1}, while the association between the jth RRH and the kth user is described by β_jk ∈ {0, 1}. Note that a 1 represents an active connection for both variables. The power allocated to the downlink channel between the ith UAV-SC and the kth UE is denoted by p_ik, while p_jk refers to the allocated power of the corresponding RRH and UE. Note that the relationship between the association and the transmitted power is evident, as a user only receives a transmission from its associated node.
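For a concrete reading of the model, the sketch below evaluates the average SINR γ_k and the equivalent rate for a toy network with one RRH and one UAV-SC. The channel gains and noise power are illustrative assumptions, since the excerpt does not specify a propagation model.

```python
import numpy as np

def sinr_per_user(power, gain, assoc, noise):
    """Average downlink SINR γ_k of each user.

    power[n, k]: power allocated by node n (RRH or UAV-SC) to user k.
    gain[n, k]:  average channel gain from node n to user k (assumed model).
    assoc[n, k]: 1 if node n serves user k, else 0 (the α/β variables).
    noise:       receiver noise power.
    """
    K = power.shape[1]
    sinr = np.empty(K)
    for k in range(K):
        signal = (assoc[:, k] * power[:, k] * gain[:, k]).sum()
        # Interference: power every node spends on *other* users, as heard by user k.
        tx_other = power.sum(axis=1) - power[:, k]
        sinr[k] = signal / ((tx_other * gain[:, k]).sum() + noise)
    return sinr

# One RRH (node 0) and one UAV-SC (node 1), each serving one of two users.
power = np.array([[1.0, 0.0], [0.0, 1.0]])
gain = np.array([[1.0, 0.1], [0.1, 1.0]])
assoc = np.array([[1, 0], [0, 1]])
gamma = sinr_per_user(power, gain, assoc, noise=0.1)
rate = np.log2(1.0 + gamma)   # equivalent rate R_k for fixed (unit) bandwidth
```

An optimizer over the association matrices and power allocations would call exactly this kind of evaluation inside its QoS constraints γ_k ≥ γ_k^min.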

First Prismatic Building Model Reconstruction from TomoSAR Points Clouds

Typical data sources for reconstructing 3-D building models include optical images (airborne or spaceborne), airborne LiDAR point clouds, terrestrial LiDAR point clouds and close range images. In addition to them, recent advances in very high resolution synthetic aperture radar imaging, together with its key attributes such as self-illumination and all-weather capability, have also attracted the attention of many remote sensing analysts in characterizing urban objects such as buildings. However, SAR projects a 3-D scene onto two native coordinates, i.e., "range" and "azimuth". In order to fully localize a point in 3-D, advanced interferometric SAR (InSAR) techniques are required that process stack(s) of complex-valued SAR images to retrieve the lost third dimension (i.e., the "elevation" coordinate). Among other InSAR methods, SAR tomography (TomoSAR) is the ultimate way of 3-D SAR imaging. By exploiting stack(s) of SAR images taken from slightly different positions, it builds up a synthetic aperture in elevation that enables the retrieval of the precise 3-D position of dominant scatterers within one azimuth-range SAR image pixel. TomoSAR processing of very high resolution data of urban areas provided by modern satellites (e.g., TerraSAR-X, TanDEM-X

Automated Generation of a Faceted Navigation Interface Using Semantic Models

A relatively novel approach to navigating structured data is called faceted browsing [2, 6]. The basic idea is to filter items by their attributes (e.g. shoes by size and color). If the items to be explored can be classified by certain characteristic features, faceted browsing is a suitable way of narrowing down alternatives. This is especially useful if the user is not looking for a particular item, but for alternatives meeting certain requirements.
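The filtering idea is simple enough to state in a few lines; a minimal sketch, using the abstract's own shoes-by-size-and-color example (the item attributes are made up):

```python
def facet_filter(items, **facets):
    """Keep items whose attributes match every selected facet value."""
    return [it for it in items
            if all(it.get(attr) == val for attr, val in facets.items())]

shoes = [
    {"name": "runner",  "size": 42, "color": "red"},
    {"name": "sneaker", "size": 42, "color": "blue"},
    {"name": "boot",    "size": 44, "color": "red"},
]
hits = facet_filter(shoes, size=42, color="red")
```

A faceted interface additionally shows, next to each remaining facet value, how many items would survive selecting it, which is what guides the user while narrowing down.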


Automated generation of physical surrogate vehicle models for crash optimization

In module 2, the cumulative center of gravity (CoG) of the replaceable structure and the center of the connecting points (rear axle to car body) are determined. In the performed simulation, the influence of joints, rubber bearings, and interior parts, which also contribute slightly to the global stiffness, is taken into account. Figure 5 shows the connecting points of the rear axle and the rigid-body element at the interface used for load introduction. At the interface, a force of 1 000 N in the longitudinal direction (Fig. 5, red arrow) is applied with a rigid-body element and an explicit structural simulation is performed. The force was selected to produce an elastic deformation of the car; at the same time, it is large enough to reduce the influence of nonlinearities caused by local joint deformations. Since the stiffness is obtained by a quasi-static explicit simulation, no additional implicit vehicle model is required for the automated model
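The cumulative CoG determined in module 2 is, in essence, a mass-weighted average of the part positions. A minimal sketch with made-up part masses and positions (the actual parts and mass data of the structure are not given here):

```python
import numpy as np

def cumulative_cog(masses, positions):
    """Mass-weighted center of gravity of a set of parts.

    masses:    (N,) part masses in kg.
    positions: (N, 3) coordinates of each part's own CoG in m.
    """
    masses = np.asarray(masses, dtype=float)
    positions = np.asarray(positions, dtype=float)
    return masses @ positions / masses.sum()

# Two hypothetical parts of the replaceable rear structure.
cog = cumulative_cog([10.0, 30.0], [[0.0, 0.0, 0.0], [1.0, 0.0, 0.5]])
```

The heavier part dominates, so the cumulative CoG lies three quarters of the way toward it along each axis.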

Model-free Dense Stereo Reconstruction Creating Realistic 3D City Models

Digital Surface Models (DSM) are the basic input for a wide range of applications like flood simulation, 3D change detection and radio beam propagation. Additionally, for the normal end-consumer, 3D city visualizations are becoming more important for navigation every day. While all of these applications require high accuracy, 3D visualization systems additionally require the data to be of modest size, since they often operate with limited resources (e.g. web-based applications or navigation devices). In this paper we describe a framework which achieves a good trade-off between high accuracy and small data size.

Fast Probabilistic Fusion of 3D Point Clouds via Occupancy Grids for Scene Classification

Many approaches have been reported over the last decades. Sophisticated classification algorithms, e.g., support vector machines (SVM) and random forests (RF), data modeling methods, e.g., hierarchical models, and graphical models such as conditional random fields (CRF), are well studied. Overviews are given in (Schindler, 2012) and (Vosselman, 2013). (Guo et al., 2011) present an urban scene classification on airborne LiDAR and multispectral imagery, studying the relevance of different features of multi-source data; an RF classifier is employed for feature evaluation. (Niemeyer et al., 2013) propose a contextual classification of airborne LiDAR point clouds, in which an RF classifier is integrated into a CRF model and multi-scale features are employed. Recent work includes (Schmidt et al., 2014), in which full-waveform LiDAR is used to classify a mixed area of land and water bodies; again, a framework combining RF and CRF is employed for classification and feature analysis. (Hoberg et al., 2015) present a multi-scale classification of satellite imagery also based on a CRF model and extend the latter to multi-temporal classification. Concerning the use of more detailed 3D geometry, (Zhang et al., 2014) present roof type classification based on aerial LiDAR point clouds.

An automated method for building cognitive models for turn-based games from a strategy logic

Keywords: logic; computational cognitive modeling; turn-based games

1. Introduction

Many events that happen in our daily life can be thought of as turn-based games. In fact, besides "games" in the literal sense, our day-to-day dialogues, interactions, legal procedures, social and political actions, and biological phenomena can all be viewed as games together with their goals and strategies. Thus, effective and efficient strategies are needed everywhere, not just in games such as chess and bridge, but also in many social interactions in daily life, as well as in other global phenomena and scientific procedures. For example, consider negotiations among rival parties to attain goals satisfactory to all, the stabilization of entering firms in existing markets, and decisions regarding an ideal situation in which to approach somebody for business. On an even larger scale, strategies are also of vital importance in international crises such as the Cuban missile crisis, the North Korean crisis, and the Copenhagen climate change negotiations. In biology, studying strategies is an important part of investigating the evolution and stabilization of populations.

Distributed streaming and compression architecture for point clouds from mobile devices

Using movie encoders can be advantageous when depth and color information are encoded at the same time, because using the same encoder for both types of information makes encoding and transmission less complex. The aforementioned image and movie encoders use quantization and downsampling to achieve high compression levels. Such methods strongly affect sharp corners and high-frequency changes in the data, so the pre-processing and post-processing should not add continuity gaps or carryover jumps into the source image. The work suggests that one robust color channel contains the most significant 8 bits of the depth image, while the other two color channels encode the least significant bits. To ensure continuity, the actual depth values are transformed using two linear triangle wave functions (see Fig. 3.6), one for each remaining color channel, differing in their frequency.
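The described encoding can be sketched as follows: one channel carries the 8 most significant bits, and the full depth value is additionally passed through two triangle waves of differing frequency, which remain continuous and therefore survive block-based quantization better than raw bit slices. The periods below are illustrative, not the constants of the referenced scheme.

```python
import numpy as np

def triangle(x, period):
    """Triangle wave in [0, 1]: continuous, so codecs blur it less than bit slices."""
    phase = (x / period) % 1.0
    return np.where(phase < 0.5, 2.0 * phase, 2.0 - 2.0 * phase)

def encode_depth(d16, period=512.0):
    """Pack 16-bit depth values into three 8-bit 'color' channels.

    Channel 0 carries the 8 most significant bits; channels 1 and 2 are
    triangle waves of the full depth value at two different frequencies.
    """
    d16 = np.asarray(d16, dtype=np.float64)
    c0 = np.floor(d16 / 256.0)                        # 8 MSBs
    c1 = np.round(255.0 * triangle(d16, period))
    c2 = np.round(255.0 * triangle(d16, period / 2.0))
    return np.stack([c0, c1, c2]).astype(np.uint8)

channels = encode_depth([0, 1000, 40000, 65535])
```

The decoder inverts the two triangle waves within the coarse range fixed by channel 0, which is what recovers the least significant bits without discontinuities in the encoded image.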

Globally Consistent Dense Real-Time 3D Reconstruction from RGBD Data

To demonstrate our capabilities, we test our system on several real-world image sequences from the TUM RGBD dataset [13] and on the synthetic ICL-NUIM dataset [6]. We evaluate standard InfiniTAM (ITM) [7], ICPCUDA [14], DVO SLAM [8], RGBD SLAM [4] and our method based on ORB SLAM2 [9]. ICPCUDA is a very fast implementation of ICP with code available online [1]. We run all systems in their standard settings using the code available online at the maximum resolution of 640×480. For RGBD SLAM, we set the feature detector and descriptor type to ORB and extract a maximum of 600 keypoints per frame. In ORB SLAM2, we extract 1000 features per frame with a minimum of 7 per cell and 8 scale pyramid levels. Finally, we run DVO SLAM with its standard 3 scale pyramid levels. We test all systems on an Intel Core 2 Quad CPU Q9550 desktop computer with 8 GB RAM and an NVIDIA GeForce GTX 480. For all models we chose a voxel size of 2 cm and a truncation band µ of 8 cm, and limited the depth measurements to the range 0.2 m to 5.0 m. We empirically found the parameters Θ_τ = 0.005 and
