Object-based change detection for individual buildings in SAR images captured with different incidence angles

High-resolution synthetic aperture radar (SAR) images have been exploited for various change detection applications, such as damage assessment [1] and surveillance [2], and have shown great potential. These applications are based on the comparison of pre- and post-event spaceborne SAR images captured with the same signal incidence angle. However, because of the satellite orbit trajectory - e.g., for TerraSAR-X the maximum site access time is 2.5 days (adjacent orbit) and the revisit time is 11 days (same orbit) - the first available post-event SAR image may be captured with a different incidence angle. In urgent situations such as earthquakes, this data has to be analyzed for changes in order to support local decision makers as quickly as possible. As presented in Fig. 3, the same building appears differently in SAR images captured with different signal incidence angles: i) wall layover areas are scaled in range direction, ii) object occlusions differ, affecting object visibility, shadow size, etc., and iii) multiple reflections related to building structures may differ. Accordingly, a pixel-based comparison is not suitable, as it may lead to a large number of false alarms.
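To make the geometric effect concrete, the sketch below uses a simple flat-terrain model (an assumption, not taken from the paper) in which a vertical wall of height h produces a ground-range layover extent of roughly h/tan(θ) and a shadow of roughly h·tan(θ) at incidence angle θ; the building height and angles are hypothetical.

```python
import numpy as np

def layover_and_shadow(height_m, incidence_deg):
    """Approximate ground-range extent of wall layover and radar shadow
    for a vertical building wall on flat terrain (simple geometric model)."""
    theta = np.radians(incidence_deg)
    layover = height_m / np.tan(theta)   # wall return compressed towards the sensor
    shadow = height_m * np.tan(theta)    # occluded area behind the building
    return layover, shadow

# Hypothetical 20 m building imaged at two different incidence angles
for theta in (30.0, 45.0):
    lo, sh = layover_and_shadow(20.0, theta)
    print(f"theta={theta:4.1f} deg: layover ~{lo:5.1f} m, shadow ~{sh:5.1f} m")
```

For a 20 m building, the layover extent shrinks from roughly 35 m at 30° to 20 m at 45°, which is why the same wall occupies a different number of range pixels in the two acquisitions.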

Simulation-based Building Change Detection from Multi-Angle SAR Images and Digital Surface Models

Considering the results of the case study, the following strategy for change detection is recommended. The algorithm based on the Building Fill Ratio (BFR), which analyzes building block changes, is suggested for detecting prominent building changes based on a DSM and two SAR images (acquired with equal or different signal incidence angles). It relies on a single segmentation step, in which the nDSM is decomposed into building models, and is much faster than the strategy based on individual wall models (which requires a larger number of input models). Moreover, a geometric projection sensitive to modeling errors is not required. Extended building changes are assigned high change ratios. However, as shown by the histogram in Fig. 14, buildings with mid-level change ratios are likely to remain. For buildings with sufficient spatial extent in the SAR image, the algorithm based on the wall fill position (WFP) offers a complementary strategy to uncover partial changes. To this end, a further segmentation step is necessary to decompose building blocks into wall models. The focus on distinct building blocks is motivated by the increasing number of input models for the WFP method (see the example in Section IV-D: 16 wall models instead of 1 building model).
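As an illustration only (the exact BFR and WFP definitions are given in the paper), the following sketch computes a simple fill-ratio-style statistic: the fraction of pixels inside a simulated building signature mask whose backscatter exceeds a threshold, and a change ratio comparing two acquisitions. The mask, threshold, and dB scaling are assumptions.

```python
import numpy as np

def fill_ratio(sar_db, mask, threshold_db=-5.0):
    """Fraction of pixels inside a building signature mask whose backscatter
    exceeds a threshold (hypothetical stand-in for the paper's fill ratio)."""
    pixels = sar_db[mask]
    return float(np.mean(pixels > threshold_db)) if pixels.size else 0.0

def change_ratio(sar_pre_db, sar_post_db, mask):
    """Relative drop of the fill ratio between two acquisitions; values near 1
    indicate that the expected building signature has largely disappeared."""
    pre = fill_ratio(sar_pre_db, mask)
    post = fill_ratio(sar_post_db, mask)
    return 0.0 if pre == 0.0 else max(0.0, (pre - post) / pre)
```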

Curvelet Approach for SAR Image Denoising, Structure Enhancement, and Change Detection

Change detection in SAR images, being a very difficult task, has often been discussed in the literature. An overview of principal SAR change detection methods, their advantages as well as their disadvantages, can be found in (Polidori et al., 1995). Some more specialized methods are touched upon in the following. The approach of (Balz, 2004) uses a high resolution elevation model (e.g. acquired by airborne laserscanning) to simulate a SAR image which is subsequently compared to the real SAR data. The quality of the results is naturally highly dependent on the resolution of the digital elevation model and its co-registration to the SAR image. This nontrivial co-registration constrains this approach to small scale exemplary applications. Another idea, starting with the fusion of several SAR images of different incidence angles into a "superresolution" image, is presented by (Marcos et al., 2006) and (Romero et al., 2006). Man-made objects, i.e. geometrical particularities that are not captured by the digital terrain model used for the orthorectification of the SAR image, are classified by their diverse appearance in the single orthorectified images due to the different acquisition geometries. Thus, seasonal changes in natural surroundings can easily be distinguished from changes in built-up areas. One disadvantage is the large number of different SAR images of the same area needed to generate the "superresolution" image. (Wright et al., 2005) exploit the coherence (phase information) of two SAR images, which implies a relatively short repeat-pass time to avoid additional incoherence caused by natural surfaces. (Derrode et al., 2003) and (Bouyahia et al., 2008) adopt a hidden and a sliding hidden Markov chain model, respectively, to select areas with changes in reflectivity even from images with different incidence angles. Although this method allows processing of very large images and does not need additional parameter tuning except for the window size, according to the authors a lot of research work still has to be done to improve the preliminary results.

Detection and height estimation of buildings from SAR and optical images using conditional random fields

The first objective of this thesis is to develop an innovative solution for the detection of buildings in urban areas by merging information derived from one high-resolution SAR acquisition and one optical image. One SAR acquisition can either be one single SAR image or an interferometric SAR image pair acquired in single-pass mode with a certain baseline. At this point it should be noted that we do not want to perform change detection. The aim is to investigate the joint use of complementary data of those two different sensor types for building detection. In case local evidence about a certain building is sparse, knowledge about the typical structure of the scene can support object detection. This contextual information reduces the number of possible locations and features to be considered. The majority of object detection approaches incorporating context information relies on model knowledge translated into a set of rules. A model of an object that is to be detected can be formulated either implicitly or explicitly. Implicit model representation often interweaves model knowledge with the design and work-flow of data processing, which can become inflexible when dealing with a new object category. Approaches using explicit object models are called knowledge-based approaches (e.g., production nets or semantic nets). Sets of rules explicitly formulate the precise model of an object (and its context) independent of data processing (e.g., [Stilla, 1995; Koch et al., 1997; Kunz et al., 1997; Soergel et al., 2003b]). Advantages are that prior expert knowledge can directly be modelled and graphical representations of object relations can be intuitively understood. Furthermore, knowledge-based systems provide more flexibility compared to systems modelling objects implicitly, because only the explicit object model has to be adapted for a new object category without changing the entire processing chain. Additional possibilities for object detection besides production nets and semantic nets are fuzzy logic [Zadeh, 1965] (remote sensing applications, e.g., [Benz et al., 2004; Tóvári & Vögtle, 2004]) and Dempster-Shafer evidential theory [Shafer, 1976] (remote sensing applications, e.g., [Quint & Sties, 1996; Hégarat-Mascle et al., 1997; Rottensteiner et al., 2007; Poulain et al., 2011]). They also formulate object model knowledge rather intuitively, and results can be well understood by human interpreters. In addition, Dempster-Shafer approaches provide the possibility of modelling uncertainty explicitly.
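As a toy illustration of the explicit uncertainty modelling mentioned above (not code from the thesis), the sketch below combines two hypothetical mass functions about a "building" hypothesis with Dempster's rule of combination; the frame of discernment and the mass values are invented for the example.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions over the same frame of discernment with
    Dempster's rule; focal elements are frozensets, masses sum to 1."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict, evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Toy example: SAR and optical cues about a 'building' hypothesis, with
# explicit uncertainty assigned to the whole frame {B, notB}
frame = frozenset({"B", "notB"})
m_sar = {frozenset({"B"}): 0.6, frame: 0.4}
m_opt = {frozenset({"B"}): 0.5, frozenset({"notB"}): 0.2, frame: 0.3}
print(dempster_combine(m_sar, m_opt))
```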

Automated Detection of Precipitation induced Artefacts in X-band SAR Images

The detection of precipitation cell signatures in SAR images is based on a priori knowledge about the occurrence of such effects. The signal intensities obtained from attenuated regions are similar to those of a water surface or of smooth areas with little signal energy scattered towards the receiver. The enhanced backscattering expected from heavy clouds can be similar to that from urban areas.


Wetland Monitoring Using the Curvelet-Based Change Detection Method on Polarimetric SAR Imagery

The Gagetown site is located in north central New Brunswick, with a climate marked by warm summers and mild, snowy winters. The mean annual temperature is approximately 5 °C. The mean summer temperature is 15.5 °C and the mean winter temperature is −5.5 °C. The mean annual precipitation ranges from 900 to 1300 mm. The lowlands are underlain by flat to gently dipping sandstones, shales, and conglomerates and rise inland from sea level to 200 m. The region is blanketed with moraine. The dominant soils are Humo-Ferric Podzols and Gray Luvisols, with significant areas of Gleysols, Fibrisols and Mesisols on wetter sites. The closed mixed wood forest is mainly composed of red spruce, balsam fir, red maple, hemlock, and eastern white pine. Sugar maple and yellow birch are found on the larger hills. Wetlands are extensive and support dwarf black spruce and eastern larch at their perimeters. Low density river and stream networks prevail, flowing into the Northumberland and St. Lawrence Gulf areas. Figure 1 shows the location of the Gagetown site as well as an optical image.

An Integral Detection Scheme for Moving Object Indication in Dual-Channel High Resolution Spaceborne SAR Data

or RADARSAT-2 will deliver high resolution dual-channel SAR data with large coverage. These new missions - together with rising interest in area-wide traffic monitoring - motivate spaceborne GMTI as an attractive alternative to conventional traffic data acquisition. However, a moving object appears distorted in the SAR image, since the well-known stationary-world assumption in the SAR focusing process is violated. In this paper, a detection approach is presented which simultaneously considers the effects of azimuthal and radial motion of an object. The mathematical framework of this detector combines information of the measured signal, the expected signal, and their variances. Furthermore, the performance of the proposed algorithm is analysed using experimental airborne SAR data.

Aghababaee, H., Amini, J., Tzeng, Y.-C. & Sri Sumantyo, J.T.: Unsupervised Change Detection on SAR images using a New Fractal-Based Measure

Detection of the changes occurring on the Earth's surface by means of multi-temporal remote sensing images is one of the most important applications of remote sensing technology. It depends on the fact that, for many public and private institutions, the knowledge of the dynamics of either natural resources or …

Summary: Change detection for land use/cover is very important in the application of remote sensing. This paper proposes a new fractal measure for automatic change detection in synthetic aperture radar (SAR) images. The proposed measure is computed based on the fractal dimension and intensity information. The fractal dimension is calculated using wavelet multi-resolution analysis based on the concept of fractional Brownian motion. In the next stage, a binary decision is made at each pixel location to determine whether it has changed or not, by applying a threshold to the image derived from the proposed measure. The threshold is computed from the distribution of the proposed fractal measure using the well-known Otsu method. The proposed change indicator is compared to the classical log-ratio detector as well as two other statistical similarity measures, namely the Gaussian Kullback-Leibler and cumulant-based Kullback-Leibler detectors. Experiments on simulated and real data show that the proposed approach achieves better results than the other detectors.
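A rough sketch of the thresholding stage is given below: a per-pixel change measure (here a plain log-ratio stands in for the fractal-based measure, which is not reproduced) is binarized with an Otsu threshold computed from its distribution; the simulated images and parameters are hypothetical.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: pick the threshold that maximises between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)
    w1 = 1.0 - w0
    cum_mean = np.cumsum(p * centers)
    mu_total = np.sum(p * centers)
    mu0 = cum_mean / np.clip(w0, 1e-12, None)
    mu1 = (mu_total - cum_mean) / np.clip(w1, 1e-12, None)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

# Toy change-measure image: log-ratio of two simulated intensity images
rng = np.random.default_rng(0)
pre, post = rng.gamma(4, 1, (128, 128)), rng.gamma(4, 1, (128, 128))
post[32:64, 32:64] *= 4.0                     # simulated change
measure = np.abs(np.log(post) - np.log(pre))  # stand-in for the fractal measure
change_map = measure > otsu_threshold(measure.ravel())
```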

Flood Detection in Time Series of Optical and SAR Images

Each image has a binary label specifying whether a flood event is visible or not in the observed area. The labels were provided by the original MediaEval 2019 dataset and were obtained from the Copernicus Emergency Management Service. The Sentinel-1 images were downloaded from the ESA Scientific Hub website. The data were acquired in Interferometric Wide Swath (IW) mode at VV and VH polarization. The SAR images are delivered as Ground Range Detected High Resolution (GRDH) products with a resolution of 10 × 10 m. Pre-processing, including radiometric calibration (Miranda, 2015) as well as Range Doppler Terrain Correction using the Shuttle Radar Topography Mission digital elevation model, was applied to the SAR images using the ESA SNAP software (Brockmann Consult, C-S, 2019). The dataset is composed of 412 time series with 4 to 20 optical images and 10 to 58 SAR images in each sequence. On average, there are 9 optical and 14 SAR images per sequence. The period of acquisition goes from December 2018 to May 2019. A flood event is occurring in 40% of the optical Sentinel-2 images and in 47% of the SAR Sentinel-1 images. As in the MediaEval dataset, once a flood occurred in a sequence, all subsequent images are labeled as flooded, which corresponds to the hypothesis that the surface still presents characteristic modifications after the event.
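The labelling rule described in the last sentence can be written in one line; the sketch below (an illustration, not part of the dataset tooling) propagates a flood label forward through a sequence once it first appears.

```python
from itertools import accumulate

def propagate_flood_labels(labels):
    """Once a flood is observed in a sequence, keep labelling all later
    acquisitions as flooded (the dataset's labelling assumption)."""
    return list(accumulate(labels, lambda seen, now: seen or now))

print(propagate_flood_labels([0, 0, 1, 0, 0, 1]))  # -> [0, 0, 1, 1, 1, 1]
```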

Model based object classification and localisation in multiocular images

These methods were used for pose adjustment according to Algorithm 9.1. Ground truth was obtained by illuminating a number of points with a laser pointer and computing the sub-pixel accurate position of the laser spot as the weighted mean of the pixel values. The spatial coordinate of each illuminated point was computed by non-linear optimisation of Eq. 3.2, i.e. bundle adjustment. Since this method yields points on the surface of the tube or cable, whereas our methods reconstruct the central curve, we evaluate the methods by computing the distance between the ground truth points and the central curve, reduced by the radius. The most complicated scenes are Exp. 8 and Exp. 5. In Exp. 8 the tube is nearly invisible; a careful contrast enhancement can make it visible to a human observer, so sufficient information exists to distinguish between tube and table cloth. In Exp. 5 the cable shows, at many places, no contrast to the background on one side, while the other side is visible due to shading and shadow. This shifts the apparent position of the cable and increases its apparent depth in the monocular evaluation.
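A minimal sketch of this evaluation measure, under the assumption that the reconstructed central curve is available as a densely sampled list of 3D points: the error of each laser-measured surface point is its distance to the nearest curve point minus the tube/cable radius.

```python
import numpy as np

def surface_to_axis_error(surface_pts, curve_pts, radius):
    """Distance from laser-measured surface points to the reconstructed central
    curve, reduced by the tube/cable radius (0 means a perfect reconstruction).
    surface_pts: (N, 3) array, curve_pts: (M, 3) array of dense curve samples."""
    d = np.linalg.norm(surface_pts[:, None, :] - curve_pts[None, :, :], axis=2)
    return d.min(axis=1) - radius
```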

SAR-based change detection using hypothesis testing and Markov random field modelling

The objective of this study is to automatically detect areas changed by natural disasters from bi-temporal co-registered and calibrated TerraSAR-X data. The technique in this paper consists of two steps: Firstly, an automatic coarse detection step based on a statistical hypothesis test is applied to initialize the classification. The original analytical formula as proposed in the constant false alarm rate (CFAR) edge detector is reviewed and rewritten in a compact form using the incomplete beta function, which is a built-in routine in commercial scientific software such as MATLAB and IDL. Secondly, a post-classification step is introduced to optimize the noisy classification result of the previous step. Generally, an optimization problem can be formulated as a Markov random field (MRF), on which the quality of a classification is measured by an energy function. The optimal classification based on the MRF corresponds to the lowest energy value. Previous studies provide methods for the optimization problem using MRFs, such as the iterated conditional modes (ICM) algorithm. Recently, a novel algorithm based on graph-cut theory was presented. This method transforms an MRF into an equivalent graph and solves the optimization problem by a max-flow/min-cut algorithm on the graph. In this study the graph-cut algorithm is applied iteratively to improve the coarse classification. At each iteration the parameters of the energy function for the current classification are set by the logarithmic probability density function (PDF); the relevant parameters are estimated by the method of logarithmic cumulants (MoLC). Experiments are performed on two flood events in Germany and Australia in 2011 and a forest fire on La Palma in 2009, using pre- and post-event TerraSAR-X data. The results show convincing coarse classifications and considerable improvement by the graph-cut post-classification step.
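For the first step, a hedged sketch of how such a hypothesis test can be evaluated with the incomplete beta function: if the pre- and post-event pixels are L-look averaged intensities with the same underlying reflectivity, the ratio r = I1/(I1+I2) follows a Beta(L, L) distribution, so its CDF is the regularized incomplete beta function. The exact CFAR formulation in the paper may differ; the look number and significance level below are assumptions.

```python
import numpy as np
from scipy.special import betainc

def change_pvalue(i1, i2, looks):
    """Two-sided test on the ratio of two L-look intensity averages.
    Under no change, r = i1 / (i1 + i2) ~ Beta(L, L), so its CDF is the
    regularized incomplete beta function betainc(L, L, r)."""
    r = i1 / (i1 + i2)
    cdf = betainc(looks, looks, r)
    return 2.0 * np.minimum(cdf, 1.0 - cdf)   # small p-value -> likely change

# Hypothetical pre/post multilooked intensities (L = 9 looks)
p = change_pvalue(np.array([1.0, 0.9, 4.0]), np.array([1.1, 1.0, 0.8]), looks=9)
initial_change_mask = p < 0.01
```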

The theoretical performance of different algorithms for shift estimation between SAR images

This paper compares several algorithms for shift estimation between SAR images, presenting their performance along with their respective merits and demerits. It shows that there is a parallelism between correlation-based estimators (coherent and incoherent) and two different implementations of split-spectrum methods (Delta-k), both in terms of robustness to interferometric phase variations within the estimation window and in terms of performance. The character of Delta-k estimators is regulated by the amount of multi-looking at interferogram level and is related to the statistical efficiency of the standard estimator for the interferometric phase in case of Gaussian speckle.
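As a point of reference for the correlation-based estimators discussed here (not the paper's implementation), the sketch below estimates the integer shift between two SAR patches from the peak of their incoherent cross-correlation computed via FFT; the sign convention of the returned shift depends on which image is taken as the reference.

```python
import numpy as np

def incoherent_shift(master, slave):
    """Estimate the integer pixel shift between two SAR patches by locating
    the peak of the circular cross-correlation of their amplitudes."""
    a = np.abs(master) - np.mean(np.abs(master))
    b = np.abs(slave) - np.mean(np.abs(slave))
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap indices to signed shifts
    return tuple(i - n if i > n // 2 else i for i, n in zip(idx, corr.shape))
```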

Detecting moving targets in dual-channel high resolution spaceborne SAR images with a compound detection scheme

case of STAP. In the ATI technique an interferogram is formed from the two SAR images by complex conjugate multiplication, whereas in DPCA processing the two calibrated SAR images are subtracted from each other. The interferometric phase in ATI and the magnitude of the result in DPCA are evaluated for detection [4]. These detections are performed based on a constant false alarm rate scheme. In [5] these approaches have been extended by integrating a priori information, namely GIS data of road networks. However, ATI, for example, can only be applied if the motion of the vehicle affects the interferometric phase, which is not the case if vehicles are moving in along-track direction. To estimate ground moving-target parameters for vehicles travelling in along-track direction, one method is to apply filterbanks with differently designed matched filters [6], [7].
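A minimal sketch of the two classical test quantities mentioned above, assuming two co-registered, calibrated complex channels are available as NumPy arrays (the subsequent CFAR thresholding is omitted):

```python
import numpy as np

def ati_dpca(ch1, ch2):
    """Along-track interferometry (ATI) phase and displaced phase centre
    antenna (DPCA) magnitude from two co-registered, calibrated complex
    SAR channels; both quantities are later thresholded for detection."""
    ati_phase = np.angle(ch1 * np.conj(ch2))  # interferogram phase
    dpca_mag = np.abs(ch1 - ch2)              # clutter-cancelled magnitude
    return ati_phase, dpca_mag
```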

Landslide Mapping in Vegetated Areas Using Change Detection Based on Optical and Polarimetric SAR Data

Abstract: Mapping of landslides, quickly providing information about the extent of the affected area and type and grade of damage, is crucial to enable fast crisis response, i.e., to support rescue and humanitarian operations. Most synthetic aperture radar (SAR) data-based landslide detection approaches reported in the literature use change detection techniques, requiring very high resolution (VHR) SAR imagery acquired shortly before the landslide event, which is commonly not available. Modern VHR SAR missions, e.g., Radarsat-2, TerraSAR-X, or COSMO-SkyMed, do not systematically cover the entire world, due to limitations in onboard disk space and downlink transmission rates. Here, we present a fast and transferable procedure for mapping of landslides, based on change detection between pre-event optical imagery and the polarimetric entropy derived from post-event VHR polarimetric SAR data. Pre-event information is derived from high resolution optical imagery of Landsat-8 or Sentinel-2, which are freely available and systematically acquired over the entire Earth’s landmass. The landslide mapping is refined by slope information from a digital elevation model generated from bi-static TanDEM-X imagery. The methodology was successfully applied to two landslide events of different characteristics: A rotational slide near Charleston, West Virginia, USA and a mining waste earthflow near Bolshaya Talda, Russia.
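The post-event feature used here is the polarimetric entropy; a minimal sketch of the standard Cloude-Pottier entropy computed from the eigenvalues of a 3x3 coherency matrix is given below (the paper's full processing chain, including the optical pre-event features and slope masking, is not reproduced).

```python
import numpy as np

def polarimetric_entropy(t3):
    """Cloude-Pottier entropy H of a 3x3 Hermitian coherency matrix T3:
    H = -sum(p_i * log3(p_i)) with p_i the normalised eigenvalues."""
    eigvals = np.maximum(np.linalg.eigvalsh(t3), 0.0)  # real, non-negative
    p = eigvals / eigvals.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p) / np.log(3)))
```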

Synthetic tracking for orbital object detection in LEO

This is only an approximation according to the Gauss circle problem. The exact number can be determined by checking, for each point on the grid, whether its distance to the centre point of the circle is less than or equal to the radius. For the values above it is 178272. Compared to the 316 shifts in the benchmarking case, this means 564.15 times the number of shifts need to be calculated. Not taking into account that the graphics card of the development system could not store the input frame stack in global memory, this increase in calculations by a factor of roughly 15,600 would result in a computation time of 114 min for one second of input data. This means that a system aiming to calculate this in one second would need 6832 times the computing power of the graphics card present in the development system. The system with the single P4000 graphics card could do the same in just under 18 minutes. Assuming the same speed-up of 6.3 times with a state-of-the-art graphics card, the computation time can be estimated to be less than 3 minutes; 170 of these graphics cards clustered together could calculate all shifts in the sensible range. Extrapolating the capabilities of graphics cards into the future, the cards introduced in 2021 should be able to do this with a cluster of 27 cards.
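A small sketch of the exact grid-point count described above (the Gauss circle problem); the radii in the example are hypothetical and only meant to show how the exact count compares with the πr² approximation.

```python
import numpy as np

def points_in_circle(radius):
    """Exact number of integer grid points whose distance to the circle centre
    is less than or equal to the radius (the Gauss circle problem)."""
    r = int(np.floor(radius))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    return int(np.count_nonzero(x * x + y * y <= radius * radius))

# pi*r^2 approximation vs. the exact count for some hypothetical radii
for r in (10, 100, 238):
    print(r, round(np.pi * r * r), points_in_circle(r))
```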

The Radiometric Measurement Quantity for SAR Images

Introducing this new terminology does not require a change of the radiometric measurement process as such. For instance, no new correction factors need to be introduced, because the frequency-dependent reflectivity of radar targets in particular is a feature that a SAR system actually intends to, and already does, detect. Multispectral SAR systems, as a case in point, focus on taking advantage of the frequency-dependent reflectivity. However, the new terminology changes the way in which radiometric SAR products should be annotated and calibrated. The distinction between RCS and equivalent RCS has special practical importance for wideband, high-resolution, and radiometrically accurate SAR systems, for which the new terminology can resolve an inaccuracy in description.

Capsule Networks for Object Detection in UAV Imagery

Deep learning has lately demonstrated cutting-edge performance in general computer vision. For instance, it has been tailored to scene classification [21, 22], scene/object segmentation [23, 24], object detection [25, 26], and image retrieval [27, 28], and recently to remote sensing [29–32]. In particular, Convolutional Neural Networks (CNNs) are the most widely used models among deep architectures. Typical CNNs operate over three steps. The first two consist of convolving the input image with an ensemble of kernels (filters) of a predefined size to produce a set of feature maps. The convolution filters can vary in size and number; these two parameters define the number of the obtained feature maps. The feature maps are then down-sampled by way of pooling in order to reduce the processing load but also to expand the field of view of the subsequent layers. These two steps (convolution and pooling) are consecutively replicated several times up to a certain predefined depth (layer), where the size and number of kernels may vary among the layers. Finally, the obtained feature maps are aggregated in a fully connected layer, which projects the learned features onto a likely class to which they belong (when a classification problem is considered). As shown in the literature, deeper architectures convey more information through richer representations of the input images. Nonetheless, this is achieved at the cost of large processing overheads and large labeled training datasets needed for a deep CNN architecture to converge. The basic technique implemented to address this issue is data augmentation, which consists in introducing changes such as rotation and mirroring to the original images in order to enrich the training dataset with more samples [33, 34]. Another efficient way is to exploit models pre-trained on large-scale computer vision archives, such as ResNet [35], GoogLeNet [36], VGGNet [37], and AlexNet [33]. Another challenging aspect of object detection in remote sensing images is the fact that objects exhibit various rotations, depending on the orientation of the acquisition platform itself (e.g., a sensor mounted on a UAV may capture images, and thus objects, in various orientations owing to the way in which the UAV is manoeuvred). On this point, several works attempt to solve this problem by incorporating a rotation-invariant property into deep models [38–40].
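The convolution-pooling-fully-connected pipeline described above can be summarized in a few lines; the following PyTorch sketch is purely illustrative (layer sizes, depth, and input resolution are assumptions, not the architecture used in the paper).

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Minimal convolution -> pooling -> fully connected pipeline, mirroring
    the three steps described above (illustrative, not the paper's network)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        f = self.features(x)
        return self.classifier(f.flatten(1))

logits = TinyCNN()(torch.randn(1, 3, 64, 64))   # one random 64x64 RGB patch
```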

Object Detection Based on Deep Learning and Context Information

Abstract. In order to avoid collision with other traffic participants, automated vehicles need to understand the traffic scene. Object detection, as part of scene understanding, remains a challenging task, mostly due to the highly variable object appearance. In this work, we propose a combination of convolutional neural networks and context information to improve object detection. To accomplish that, context information and deep learning architectures which are relevant for object detection are chosen. Different approaches for integrating context information and convolutional neural networks are discussed. An ensemble system is proposed, trained, and evaluated on real traffic data.

Shadow Detection and Restoration for Hyperspectral Images Based on Nonlinear Spectral Unmixing

Correspondingly, shadow detection and removal methods have been proposed specifically or generally for one of these three categories of data [2–4]. Shadow detection is frequently used as a preliminary step before shadow removal. Many works have investigated shadow detection methods, and detailed reviews can be found in [1,7]. One category of simple but popular shadow detection methods sets threshold values in a given data space to detect shadow regions [3,8]. In addition to RGB bands, near-infrared (NIR) bands are often used, because they are more sensitive to shadows [7,9]. One drawback of these methods is the selection of suitable thresholds [1]. In addition, sunlit dark pixels and shadowed bright pixels can be wrongly detected [7]. Authors in [10] applied water masks in order to alleviate the impact of water regions. A second category of methods maps RGB images to color spaces insensitive to lighting conditions, such as Hue-Saturation-Value (HSV) or Hue-Chroma-Value (HCV), deriving RGB combinations back after local brightness alterations [11–13]. A third category of methods studies the geometry and light sources of the scene (ray tracing) [14,15]. These algorithms depend on the availability and accuracy of geometrical data [16]. Other solutions consider physical information. Authors in [17,18] assume that shadow is a zero-reflectance endmember and detect shadows through a matched filter. These methods can confuse shadows with materials characterized by low albedo. Authors in [19] compute the proportion of a pixel relative to skylight by considering the illumination conditions in shadowed areas. Furthermore, some works study shadow detection based on unsupervised or supervised machine learning methods. Authors in [4] apply K-means clustering, considering shadow as one output class. In supervised methods, training samples of sunlit and shadowed pixels are selected, and classification methods are then applied to separate shadowed from sunlit pixels [20]. The performance of machine-learning-based methods may depend on differences between ground objects and the selection of training samples. Recently, shadow detection based on deep learning has been proposed [21,22]. These methods usually require training data containing input RGB images and their corresponding ground-truth binary shadow masks. In addition, some methods solve shadow detection and restoration within the same framework [23,24]; we will come back to them in the discussion of shadow restoration algorithms.
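As a tiny example of the first (threshold-based) category, the sketch below flags pixels that are dark both in an NIR band and in overall brightness; the band names and threshold values are hypothetical and scene dependent.

```python
import numpy as np

def threshold_shadow_mask(nir, brightness, nir_t=0.12, bright_t=0.15):
    """Threshold-based shadow detection: pixels that are dark in the NIR band
    and in overall brightness are flagged as shadow (thresholds are
    hypothetical and scene dependent)."""
    return (nir < nir_t) & (brightness < bright_t)
```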

Detection and Mitigation of Strong Azimuth Ambiguities in High Resolution SAR Images

Azimuth ambiguities of bright point-like targets become more pronounced in the case of high resolution SAR imaging. When improving the spatial resolution, the compression gain in SAR image formation becomes larger, which increases the dynamic range of SAR images. The increased point target intensity also implies a correspondingly higher azimuth ambiguity power, which often exceeds the reflectivity of the surrounding homogeneous areas. This paper proposes a new approach to identifying and mitigating azimuth ambiguities of bright targets within areas of distributed homogeneous backscattering. The method makes use of two independent range looks. While the main signal in the two looks is considered nearly identical, the ambiguous signal becomes mis-registered, i.e. the azimuth position shifts as a function of the range look center frequency. This property is used to derive a suitable mitigation approach. It is tested with X-band data acquired by DLR's F-SAR sensor in step-frequency mode.
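A rough sketch of the detection idea (not the paper's algorithm): estimate the azimuth peak position of a bright target independently in the two range looks and flag it as an ambiguity when the positions disagree, since the true signal should stay registered while the ambiguity shifts with the range-look centre frequency.

```python
import numpy as np

def is_ambiguity(look1_patch, look2_patch, max_true_shift=0.5):
    """Flag a bright point-like target as an azimuth ambiguity if its peak
    position differs between two independent range looks; patches are
    2D (azimuth x range) amplitude arrays around the candidate target."""
    az1 = np.argmax(np.abs(look1_patch).sum(axis=1))   # azimuth row of peak, look 1
    az2 = np.argmax(np.abs(look2_patch).sum(axis=1))   # azimuth row of peak, look 2
    return abs(az1 - az2) > max_true_shift
```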
