The colour composition in Fig. 3(a) shows three images over the Chuquicamata copper mine in Chile. The regions of high reflectivity caused by an extreme foreshortening effect are slightly smeared during the geocoding step. One can note that something has changed around the deposit in the middle left of the image. The other images show the results of a curvelet-based change detection with weighted coefficients. In Fig. 3(b) a series of red and green stripes along the mountain, especially in the upper part, is visible. In Fig. 3(c) these structures can be found in the lower part. Fig. 3(d), which depicts the changes between the first and the third image, can be seen as the sum of the two preceding images. The sequence of red (darkened) and green (brightened) linear features is most remarkable. These changes indicate a systematic displacement of linear high backscatterers. An explanation can be found in the geometrical form of the mine: the deposit is built in terraces. As mining goes on, the edges of these terraces move and thereby also displace the bright lines in the SAR image, because of the reflector-like diplane backscattering at each stage. In contrast to the colour composition (Fig. 3(a)), the results are very smooth and show no single-pixel changes in the surroundings (Fig. 3(b)-3(d)). Hence, this approach is capable of surveying open-cast mining activities, even though it is not yet possible to determine the amount of soil moved.
Abstract—SAR image change detection plays an important role in various Earth Observation (EO) applications. A large number of different methods have been proposed to address this issue. However, since several kinds of changes with diverse characteristics can arise in SAR images, there is no consensus on their performance: most methods have been evaluated on different datasets, probably facing several kinds of changes, but without an in-depth analysis of the characteristics of SAR image changes. Two important problems therefore arise. The first is what kind of change each approach can detect. The second is how well it can detect that kind of change. Although the importance of modeling all kinds of changes has been recognized, there is no principled methodology to carry out the analysis, owing to the difficulty of modeling the various kinds of changes. In this paper, we propose a benchmark methodology to reach this goal by simulating selected kinds of changes in addition to using real data with changes. Six kinds of SAR changes for eight typical image categories are simulated, i.e., reflectivity changes; first-order, second-order, and higher-order statistical changes; and linear and nonlinear changes. Based on this methodology for change simulation, a comprehensive evaluation of information similarity measures is carried out. An explicit conclusion drawn from the evaluation is that the various methods behave very differently across the kinds of changes. We hope that this study will promote the advancement of this topic.
visual perception of layover varies significantly while the intensity distributions tend to follow log-normal distributions (even for local maxima, as a rough approximation). The polarization mode HH dominates VV in intensity (mean, median) but also tends to show stronger intensity variation (standard deviation). In contrast, no tendency with respect to the proportion between HH and VV is observed in the number of prominent point signatures. As a conclusion, distribution-based methods for layover analysis seem to be favored, which may be optionally supported by feature-based concepts, e.g., focused on lines or point-like signatures. The prominent appearance of facade layover motivates the identification of the related SAR image parts in order to extract building-related information. In this context, a concept has been proposed for identifying building layover based on simulation methods including CityGML data. To this end, a simulation processing chain has been extended to fuse TerraSAR-X images and prior information provided by CityGML data sets. The data transformation from CityGML to the POV-Ray data structure used by the simulator is fully automated using the spatial ETL (extract, transform, and load) software FME. As a first example, a case study of the Munich city center has been shown where the extent of simulated building layover is directly superposed on a geocoded TerraSAR-X image. The simulation concept based on CityGML data indicates that the representation of real-world entities by semantic objects has a number of advantages over the purely geometric representation in a DSM. In particular, change detection applications may benefit from CityGML data sets if they are kept up to date and assigned with meta information.
In this context, several limitations are of relevance. First, the method is based on point features in wall layover areas, which might be influenced by signatures coming from roofs, the ground, or other adjacent objects. Moreover, the difference in incidence angles may cause points to disappear due to loss of visibility, possibly resulting in false alarms in our results. Finally, any changes between the acquisition times of the LiDAR and the first SAR image are not considered in our approach.
from both images, in order to identify both spectral and texture transformations. Figure 1 depicts this process, which comprises several phases. The first phase splits both images into patches, where i and j in the pattern Pi_j denote the index of the image and the index of the patch within the image, respectively. The second phase generates the NCD change map, where a matrix of similarities is obtained based on (1). The third phase normalizes the matrix resulting from the previous phase, which yields a gray-level image in which the intensity of each pixel defines the degree of change. Finally, in the last phase a threshold is applied, which generates a binary change map. The threshold value is found at the inflection point of the histogram of the gray-level image obtained in the third phase. Both 2D images, Image 1 and Image 2, have the same size, and the parameters k and t represent the number of image columns divided by the patch dimension and the number of image rows divided by the patch dimension, respectively, given that patches are always quadratic. The result of the described process is a matrix with t rows and k columns.
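The first three phases above can be sketched directly; the snippet below is a minimal illustration in which zlib is used as the compressor and min-max scaling as the normalization (both are assumptions not fixed by the text, and the final histogram-inflection thresholding phase is omitted):

```python
import zlib

import numpy as np

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte strings."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def change_map(img1, img2, patch=8):
    """Phases 1-3: split both images into quadratic patches, compute the
    patch-wise NCD matrix, and normalize it to a gray-level change map.
    Returns a (t, k) matrix with t = rows // patch, k = cols // patch."""
    rows, cols = img1.shape
    t, k = rows // patch, cols // patch
    m = np.zeros((t, k))
    for r in range(t):
        for c in range(k):
            p1 = img1[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch]
            p2 = img2[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch]
            m[r, c] = ncd(p1.tobytes(), p2.tobytes())
    # normalize to [0, 1]; pixel intensity encodes the degree of change
    return (m - m.min()) / (m.max() - m.min() + 1e-12)
```

Patches that changed between the acquisitions compress poorly when concatenated and therefore receive high NCD values.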
Only very little research has dealt with combining SAR data with an optical image in order to determine building heights; only two groups of scientists have focused on this topic so far. Tupin  determines the heights of flat-roofed industrial buildings by analysing the layover area in a single SAR intensity image. First, a building map is generated manually from an optical aerial image, which defines expectation areas for line detection at particular buildings. Bright lines are extracted in the SAR intensity image and heights are computed with a set of rules exploiting the three-dimensional information contained in layover (cf. section 3.2.1). The method relies on a very simple building model and geometrical considerations of the radar viewing geometry. Tupin & Roux  regularize a height model derived by means of radargrammetry within regions of an aerial photo. First, the optical image is segmented into homogeneous regions. Second, they generate a region adjacency graph, and a Markov Random Field is set up based on it. A specially designed potential function replaces the commonly used Potts model in the prior term. The assumption is made that heights within a homogeneous region of the optical image tend to be similar. Additionally, heights of different image regions should also be similar if no strong gradient in the optical image separates the two regions. Conversely, a height jump should occur between two regions if they are separated by a high gradient. Considering that they use radargrammetric heights, the authors actually combine two SAR acquisitions with one optical image. A similar approach, dealing with the same configuration as this thesis, is developed by Denis et al. . They extend the method of Tupin & Roux  to three-dimensionally reconstruct an urban area from high-resolution InSAR data and an optical image. In addition, they propose a graph-cuts-based inference method for energy minimization of the MRF.
They perform tests of separate and joint likelihood functions of amplitude and phase data. Optical data is introduced via the prior term of the MRF, where the gradient magnitude serves as an indicator for height discontinuities (similar to [Tupin & Roux, 2005]).
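The gradient-dependent prior described above can be illustrated with a toy pairwise potential. The functional form below is a hypothetical sketch (the cited papers' actual potentials are not reproduced here): height differences between neighbouring regions are penalized, but the penalty is attenuated where the optical gradient magnitude is large, so a height jump across a strong image edge becomes cheap.

```python
def height_prior_cost(h1: float, h2: float, grad: float, beta: float = 1.0) -> float:
    """Illustrative pairwise MRF prior between two neighbouring regions.
    h1, h2: region heights; grad: optical gradient magnitude on the
    shared boundary; beta: smoothness weight. Strong gradients reduce
    the cost of a height discontinuity (form is ours, not the original)."""
    return beta * abs(h1 - h2) / (1.0 + grad)
```

Equal heights cost nothing; a given height jump costs less the stronger the separating optical edge, which encodes exactly the assumption stated in the text.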
road segments using perceptual organization rules. Dell’Acqua and Gamba  detected road networks and built areas in SAR images. They extracted straight edges and detected built areas with a clustering-based approach. They assumed longitudinal gaps to be road networks. Rianto et al.  proposed an approach to detect main roads in SPOT satellite images. For this purpose, they detected Canny edges and classified straight line segments using the Hough transform. They assumed straight and parallel line segments to be roads. Unfortunately, this approach alone is not sufficient to detect curvilinear and complex roads in urban scenes. Some researchers developed algorithms to detect both buildings and roads. Ünsalan and Boyer  detected separate buildings and street networks from multispectral satellite images. Their method depends on using vegetation indices, clustering, decomposing binary images, and graph theory. Akçay and Aksoy  proposed a novel system for detecting built areas and the road network using unsupervised segmentation in high-resolution satellite images. Montesinos and Alquier  developed a novel approach to detect thin objects in noisy images. They performed perceptual grouping on edge segments using active contours. They tested the validity of their approach by detecting roads in aerial images and blood vessels in medical images. In a previous study, we proposed an edge detection and spatial voting based system to detect road networks from panchromatic Ikonos satellite images . Mayer et al.  and Ünsalan and Boyer  provide excellent surveys on road detection in aerial and satellite images.
After the geocoding there can still be small shifts between the two images. To solve this problem, a matching algorithm is used. The simulator RaySAR uses the geometry of the SAR image, so a feature-based matching algorithm is suitable in our case. We use a function of the Halcon software  to extract lines from the real SAR image and match the long lines to the simulated image. A detailed description of the line extraction and matching is given in . As output we obtain three parameters: the shift in row and column direction, and the rotation angle. In our case, the rotation angle should be zero.
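As a lightweight stand-in for the line-based matching (which additionally estimates the rotation angle), the residual row/column shift between two co-registered images can also be estimated by phase correlation. The sketch below assumes, as stated in the text, that the rotation is zero; it is not the HALCON procedure itself:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (row, column) shift of image b relative to
    image a via phase correlation. Returns (dr, dc) such that b is
    (approximately) np.roll(a, (dr, dc), axis=(0, 1))."""
    cross = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    cross /= np.abs(cross) + 1e-12  # keep only the phase of the cross spectrum
    corr = np.fft.ifft2(cross).real
    r, c = np.unravel_index(np.argmax(corr), corr.shape)
    rows, cols = a.shape
    # wrap shifts larger than half the image size to negative values
    if r > rows // 2:
        r -= rows
    if c > cols // 2:
        c -= cols
    return int(r), int(c)
```

The peak of the inverse-transformed normalized cross spectrum directly gives the two shift parameters mentioned in the text.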
SAR polarimetry has been a research topic for many years. Many scientists have designed new decompositions for special applications. An overview of current standard decompositions is given in some reviews ,. The problem with all of the decompositions is the availability of adequate data. Nowadays, most SAR satellites are able to acquire data in more than one polarization direction, but not all satellites deliver fully polarimetric data in all acquisition modes. Hence, the user has to weigh the benefits of a polarimetric evaluation on the one hand against a much better resolution on the other. In this article we chose to use TerraSAR-X High Resolution Spotlight data with only two polarizations (HH & VV). These images share a pixel spacing on ground of 1 m in the spatially enhanced Enhanced Ellipsoid Corrected product type. The Single Look Complex data has been combined according to the Huynen decomposition . This decomposition has been chosen because it provides three independent polarimetric layers that can be decomposed from HH and VV polarization (Eq. 1) without the cross-polarized layer.
measures have been applied to change detection as well and have shown promising performance. A prominent work by Inglada and Mercier (2007) proposed a method for multi-temporal SAR change detection based on the evolution of local statistics computed from the pre-event and post-event images. The local statistics are estimated by a one-dimensional Edgeworth series expansion, which approximates the probability density functions of the pixels in the neighbourhood. The degree of evolution of the local statistics is measured using the Kullback–Leibler divergence. Bovolo and Bruzzone (2008) extended this method to object-based change detection by computing the Kullback–Leibler divergence of two corresponding regions obtained by image segmentation. An unsupervised change detection method in the wavelet domain based on statistical wavelet coefficient modelling was proposed by Cui and Datcu (2012). Several information similarity measures, namely distance to independence, mutual information, cluster reward, the Woods criterion and the correlation ratio, were compared by Alberga (2009) for change detection, among which mutual information has been demonstrated to be rather efficient. Taking advantage of mutual information, a pixel-based approach comparing localized mutual information was proposed by Winter et al. (1997). Intuitively, if two pixels share a lot of information, it is reasonable to assume no change at their location. Based on this idea, another information measure for change detection derived from mutual information was introduced by Gueguen and Datcu (2009), namely mixed information, which unifies mutual information and variational information by a parameter. Furthermore, stochastic kernels including both the Kullback–Leibler divergence and mutual information were used by Mercier et al. (2006) as features in a support vector machine for SAR change detection.
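A simple numerical analogue of the local-statistics comparison can be sketched with histogram-based density estimates in place of the Edgeworth expansion; the window size, bin count, and symmetrized form below are illustrative choices, not those of the cited papers:

```python
import numpy as np

def kl_div(p, q, eps=1e-12):
    """Kullback-Leibler divergence between two histograms."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def local_kl_map(img1, img2, win=5, bins=8):
    """Symmetrized KL divergence between the local gray-level histograms
    of two co-registered images (image borders are left at zero)."""
    h = win // 2
    rows, cols = img1.shape
    lo = min(img1.min(), img2.min())
    hi = max(img1.max(), img2.max())
    out = np.zeros((rows, cols))
    for r in range(h, rows - h):
        for c in range(h, cols - h):
            w1 = img1[r - h:r + h + 1, c - h:c + h + 1]
            w2 = img2[r - h:r + h + 1, c - h:c + h + 1]
            p, _ = np.histogram(w1, bins=bins, range=(lo, hi))
            q, _ = np.histogram(w2, bins=bins, range=(lo, hi))
            out[r, c] = kl_div(p, q) + kl_div(q, p)
    return out
```

High values of the resulting map indicate that the local pixel statistics evolved strongly between the acquisitions, i.e. candidate change locations.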
Based on the estimation of a bivariate Gamma distribution, mutual information was applied to SAR change detection by Chatelain et al. (2007). Through a two-scale implementation, mutual information can be split into two terms linked to a change detection part and a registration part (Mercier and Inglada 2008). The method by Bovolo and Bruzzone (2005) exploits a wavelet-based multiscale decomposition of the log-ratio image, aiming at a representation of the change signal at different scales. k-means clustering was applied by Celik (2009) to classify the undecimated wavelet transform of the difference image into two classes corresponding to change and no-change. A benchmark evaluation of similarity measures, namely mutual information, variational information, mixed information and the Kullback–Leibler divergence, was carried out by Cui, Schwarz, and Datcu (2016) for multi-temporal SAR image change detection.
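The mutual-information measure recurring in the methods above can be estimated directly from the joint gray-level histogram of two co-registered images or windows; a minimal sketch (the bin count is an arbitrary choice):

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Mutual information (in nats) between two equally sized arrays,
    estimated from their joint gray-level histogram."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)  # marginal of x
    py = pxy.sum(axis=0)  # marginal of y
    nz = pxy > 0  # cells with zero mass contribute nothing
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))
```

Following the intuition in the text, a high value suggests the two signals share much information (no change), while a low value flags a candidate change.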
Change detection techniques play a very significant role in monitoring modern infrastructure, especially in urban areas. Urban areas are often reconstructed to provide greater space and resources. In urban areas the most common and significant objects of interest are buildings. With advancements in remote sensing technologies, change detection based on high-resolution satellite images provides an efficient tool for monitoring infrastructural development within vast premises. Since visual analysis is costly and impractical, several automatic and semi-automatic methods for change detection have been researched. Although there are a number of such methods, their applicability is restrained by the limitations of the data they are evaluated upon. For example, in the case of buildings, real changes are often mixed with other land cover changes, especially man-made structures such as roads, bridges, etc., due to their resemblance in the captured images. Although it is possible to eliminate these errors with height information from DSMs, the accuracy is directly proportional to the quality of the DSMs. Hence it also becomes crucial to develop highly sophisticated change detection methods based on the images alone.
Fractal geometry is able to describe complex forms and uncover their underlying order. The concept of fractals was introduced by Mandelbrot (1982). A fractal can be defined as an entity for which the Hausdorff-Besicovitch dimension exceeds the topological dimension. Simply speaking, the fractal dimension of a phenomenon is a measure of its randomness or variability. This dimension is different from the traditional dimensions of Euclidean geometry. Another characteristic of fractals is self-similarity, which means that the fractal dimension will be the same regardless of the measurement scale. Since 1989, fractals have been extensively adopted in satellite image processing (Cola 1989, Ramstien & Raffy 1989), and fractal models have been used in a variety of image processing and pattern recognition applications. For example, several researchers have applied fractal techniques to describe image textures, data fusion, and classification (De Jong & Burrough 1995, Myint 2003, Sun
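In practice, the Hausdorff-Besicovitch dimension of image structures is commonly approximated by box counting: count the boxes of side s that intersect the structure and take the slope of log(count) versus log(1/s). A minimal sketch (the box sizes are an arbitrary choice):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Box-counting estimate of the fractal dimension of a binary mask:
    the slope of log(box count) versus log(1 / box size)."""
    counts = []
    rows, cols = mask.shape
    for s in sizes:
        c = 0
        for r in range(0, rows, s):
            for q in range(0, cols, s):
                # count boxes containing at least one foreground pixel
                if mask[r:r + s, q:q + s].any():
                    c += 1
        counts.append(c)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)
```

A filled region yields a dimension near 2 and a line near 1, while rough, self-similar structures fall in between; this is the "measure of randomness or variability" referred to above.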
Schmitt et al. (2009a) recently developed a method for radar image enhancement and change detection using a Curvelet-based approach, which has proven promising for man-made objects, i.e. mainly urban applications (Schmitt et al., 2009b). The original application was on magnitude-only data, but it was believed to be appropriate for polarimetric data as well. This paper briefly presents the Curvelet-based change detection approach and then demonstrates its utility for change detection in wetlands, especially changes in flooded vegetation, using polarimetric Radarsat-2 data. For the polarimetric decomposition the Freeman-Durden and the Cloude-Pottier algorithms are utilized. The image enhancement is applied to the individual decomposition channels, both on single images and on image differences.
As can be imagined, the precondition for the application of CNNs or any other supervised learning framework is the availability of annotated datasets. They are necessary not only to analyze and validate the performance of classification algorithms but are also required in the training phase, where parts of the annotated data are utilized to optimize prediction models. The lack of such annotated datasets is one of the major issues in the application of CNNs to SAR images. Manual (or somewhat interactive) annotation, as is done in the aforementioned approaches, is one potential solution. However, due to complex multiple scattering and the different microwave scattering properties of the objects appearing in the scene, possessing different geometrical and material features, manual annotation often requires expert knowledge (see Figure 1) and easily becomes impractical when large scenes need to be processed. Apart from this, another possibility for generating such a reference SAR dataset is to exploit simulation-based methods as proposed e.g., in  . However, such methods have their own limitations in the sense that they are either only capable of simulating simpler building shapes (e.g., ) or typically require accurate models (3-D building models and/or accurate digital surface models) to precisely generate such ground truth data, which, in most cases, are not available. Thus, in view of the above, automatic annotation of SAR images, if possible, is essential.
Hyperspectral images in general consist of hundreds of narrow and contiguous spectral channels, from the visible to the shortwave infrared region of the electromagnetic spectrum. Such data have great potential to detect and identify earth surface objects and phenomena in a remotely sensed scene. When the signature of the target of interest is known, the target detection (TD) approach can be used. However, TD algorithms are dependent on the degree of signal mismatch between the spectral libraries and the observed spectra.

Summary: Anomaly detection (AD) is an important and challenging area in hyperspectral image analysis. Based on different approaches, numerous AD algorithms have been presented and developed throughout the literature. This paper aims to compare the detection performance of contemporary AD algorithms for detecting man-made objects in hyperspectral imagery. The algorithms used in this study include the segmented Reed-Xiaoli (RX) algorithm, the principal component analysis based RX (PCA-RX), the orthogonal subspace projection based anomaly detector (OSP-AD), the kernel PCA-RX, and kernel-based one-class support vector machines. To evaluate the performance of the algorithms, three real hyperspectral datasets are employed. The performance comparison is then carried out on the basis of the receiver operating characteristic (ROC) curve and the average false alarm rate (AFAR). Experimental results suggest that among the AD algorithms the OSP-AD is the most promising detector for detecting man-made targets.
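The RX detector underlying several of the compared variants scores each pixel spectrum by its Mahalanobis distance to the background statistics. The sketch below is a minimal global (non-segmented, non-kernel) version for illustration:

```python
import numpy as np

def rx_anomaly_scores(cube):
    """Global RX anomaly detector. cube: (rows, cols, bands) array.
    Returns per-pixel Mahalanobis distances to the scene-wide
    background mean and covariance."""
    r, c, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    inv = np.linalg.pinv(cov)  # pseudo-inverse guards against singular covariance
    d = X - mu
    scores = np.einsum('ij,jk,ik->i', d, inv, d)
    return scores.reshape(r, c)
```

Pixels whose spectra deviate strongly from the background model receive high scores; thresholding the score map yields the anomaly mask evaluated via ROC and AFAR in the text.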
Change detection in SAR images, being a very difficult task, has often been discussed in the literature. An overview of principal SAR change detection methods, their advantages as well as their disadvantages, can be found in (Polidori et al., 1995). Some more specialized methods are touched upon in the following. The approach of (Balz, 2004) uses a high-resolution elevation model (e.g. acquired by airborne laser scanning) to simulate a SAR image which is subsequently compared to the real SAR data. The quality of the results is naturally highly dependent on the resolution of the digital elevation model and its co-registration to the SAR image. This nontrivial co-registration constrains the approach to small-scale exemplary applications. Another idea, starting with the fusion of several SAR images of different incidence angles into a ”superresolution” image, is presented by (Marcos et al., 2006) and (Romero et al., 2006). Man-made objects, i.e. geometrical particularities that are not captured by the digital terrain model used for the orthorectification of the SAR image, are classified by their diverse appearance in the single orthorectified images due to the different acquisition geometries. Thus, seasonal changes in natural surroundings can easily be distinguished from changes in built-up areas. One disadvantage is the large number of different SAR images of the same area needed to generate the ”superresolution” image. (Wright et al., 2005) exploit the coherence (phase information) of two SAR images, which implies a relatively short repeat-pass time to avoid additional incoherence caused by natural surfaces. (Derrode et al., 2003) and (Bouyahia et al., 2008) adopt a hidden and a sliding hidden Markov chain model, respectively, to select areas with changes in reflectivity even from images with different incidence angles.
Although this method makes it possible to process very large images and needs no additional parameter tuning except for the window size, according to the authors a lot of research work still has to be done to improve the preliminary results.
In order to enable the operator to attach single small-sized changes to objects, the result is overlaid with the building mask that has been extracted from IKONOS optical satellite imagery by (Taubenböck et al. 2009). Many changes – visible near the river – presumably refer to boats or pontoons. Presuming that all other mapped changes belong to buildings, we perceive striking deviations that result from the different image acquisition geometries of radar and optical data. The images should have been coregistered using the same high-resolution surface model to avoid these effects. Although this inconsistency is negligible for our coarse validation, it has to be taken into account for real disaster applications. As reference information we use the damage assessment produced by (ZKI 2009b), which is described as follows: “Destroyed and damaged structures were derived by analyzing information provided by the Indonesian Disaster Management Authority (BNPB), from post-disaster Quickbird imagery acquired on October 3, 2009 (ground resolution 0.6 m) and from field data. The damage assessment is incomplete and in some cases inaccurate due to cloud cover and the difficulty to identify partly damaged buildings from a bird’s perspective”
Finally, the changes detected by the Curvelet-based approach are compared to the manually derived changes (see the bottom of Table 2). Apparently, there is no confusion between positive and negative changes, i.e., the direction of the changes is always correctly determined, in contrast to the standard techniques above. The total accuracy now almost reaches 97%. The correctness once again rises, ranging around 72% for negative changes and even 86% for positive changes; i.e., the Curvelet-based technique is robust and the automatically detected changes are quite reliable. The completeness again is lower, ranging around 54%, i.e., the reference indicates more changed pixels than the automated approach produces. It was stated before that some of the interpreters used quite coarse tools to mark the changes. Therefore, the changes in the reference might be wider than the changes measured by the automated approach. Due to the pixels at the edges captured by the reference but not by the automated approach, the completeness values drop. The confused pixels (see Figure 20), mainly the missed hits, are all restricted to the edges of detected objects. Apart from that, some false alarms are visible in the harbor area. Those are small objects that have not been identified by the human interpreters. However, it is reasonable to assume that these are real changes; e.g., the neighboring blue and red points in the middle of Figure 20 certainly refer to vehicles that have been moved from the blue positions to the red positions between the two image acquisitions. Finally, it has to be mentioned that the very high accuracy stated in Table 2 has to be put into perspective, keeping in mind the quite low concordance of the visual interpreters who produced the reference data set. Apart from that, visual interpretation still provides the most reliable, even though most expensive, image interpretation technique.
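The quality figures quoted above follow the usual definitions over the change/no-change confusion counts; as a reminder (variable names are ours, and the counts in the usage example are invented to mimic the quoted percentages, not taken from Table 2):

```python
def detection_metrics(tp: int, fp: int, fn: int, tn: int):
    """Correctness, completeness and total accuracy from the confusion
    counts of a binary change map against a reference."""
    correctness = tp / (tp + fp)    # share of detected changes that are real
    completeness = tp / (tp + fn)   # share of real changes that were detected
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return correctness, completeness, accuracy
```

With wide reference strokes, edge pixels become false negatives: fn grows while tp stays fixed, which lowers completeness without touching correctness, exactly the pattern reported in the text.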
Synthetic Aperture Radar (SAR) remote sensing has shown a high potential in surveying flood events [1,2]. Smooth water surfaces can easily be detected by their very low backscattering values: the transmitted radiation is almost completely scattered away from the sensor. For rougher water surfaces—induced by wind or currents—the backscattering may increase significantly, so that more sophisticated methods have to be applied . Another challenge in the context of wetland monitoring is the detection of water beneath vegetation canopies. Optical sensors have no chance to penetrate leaves and branches; thus, densely forested areas will always appear green independent of the flood situation on the ground. With SAR sensors—operating at much longer wavelengths—the radiation penetrates the canopy and images the underlying objects as well . As simple SAR intensity images cannot distinguish where the received signal comes from—canopy or ground—multi-polarized SAR modes are required. This type of SAR image allows the discrimination of different backscattering mechanisms. In general, the following three scattering types are addressed:
In this paper, we proposed an automatic algorithm for iceberg detection from high-resolution X-band SAR data. It is based on the iterative censoring CFAR detector, which has already proven its usefulness for target detection in dense traffic situations. However, we are the first to apply it to the iceberg detection problem. In order to better discriminate icebergs from false alarms that frequently arise from rough seas and strong winds, we included a novel filter that takes into account the outer shape of detected objects. Detections that are part of a recurring pattern (such as waves) are filtered out. This “wave filter” reduces the false alarm rate significantly.
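The iterative censoring and wave-filtering stages are not reproduced here, but the CFAR principle the detector builds on can be sketched as a plain cell-averaging CFAR with guard cells; the window sizes and threshold factor k below are illustrative choices:

```python
import numpy as np

def ca_cfar(img, guard=2, train=8, k=3.0):
    """Cell-averaging CFAR: a pixel is flagged if it exceeds the local
    background mean + k * std, where the background statistics are
    estimated from a training ring outside a guard window."""
    rows, cols = img.shape
    r_out = guard + train
    det = np.zeros(img.shape, dtype=bool)
    for r in range(r_out, rows - r_out):
        for c in range(r_out, cols - r_out):
            win = img[r - r_out:r + r_out + 1, c - r_out:c + r_out + 1].astype(float)
            inner = win[train:-train, train:-train]  # guard region incl. cell under test
            n = win.size - inner.size
            mu = (win.sum() - inner.sum()) / n
            var = (np.square(win).sum() - np.square(inner).sum()) / n - mu ** 2
            det[r, c] = img[r, c] > mu + k * np.sqrt(max(var, 0.0))
    return det
```

The guard cells keep an extended target from contaminating its own background estimate; the iterative censoring variant used in the paper additionally removes already-detected targets from the training ring before re-estimating the statistics.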