Normalized Compression Distance for SAR Image Change Detection

Both sets of data were also used as input for the methodology described in [6]. Because we used a relatively large patch of 64 × 64 pixels, some steps of the original algorithm (e.g., difference image fusion or NSCT-based denoising) contributed only marginally to the final result. Fig. 4 shows two change maps generated with the two methodologies: the first (state-of-the-art) relies on spatial neighborhood information, while the second detects only transformations of the land cover texture. Even if there are isolated unchanged areas within some changed areas, the state-of-the-art method considers these isolated unchanged areas irrelevant. Our method, on the other hand, takes the relevance of each feature content into account, as highlighted in the zoomed area inside the third image in Fig. 2: flooded areas separated by railways or roads also appear separated in the change map. Conversely, some flooded areas are not detected as changed by our method because of its properties. The similarity of any pair of patches is computed independently of all others, since our method generates the change map solely by thresholding the similarity values. In the third …
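For concreteness, a minimal sketch of such threshold-based change-map generation follows, using a zlib-backed normalized compression distance as the patch similarity. The 64 × 64 patch size matches the text; the threshold value and the choice of compressor are illustrative assumptions.

```python
import zlib
import numpy as np

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance between two byte strings."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def change_map(img_a: np.ndarray, img_b: np.ndarray,
               patch: int = 64, threshold: float = 0.8) -> np.ndarray:
    """Patch-wise change map: a patch pair is marked 'changed' if its NCD
    exceeds the threshold. Each pair is evaluated independently."""
    rows, cols = img_a.shape[0] // patch, img_a.shape[1] // patch
    cmap = np.zeros((rows, cols), dtype=bool)
    for i in range(rows):
        for j in range(cols):
            pa = img_a[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            pb = img_b[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            cmap[i, j] = ncd(pa.tobytes(), pb.tobytes()) > threshold
    return cmap
```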

An Innovative Curvelet-only-Based Approach for Automated Change Detection in Multi-Temporal SAR Imagery

Obviously, the real and imaginary parts of the Curvelet coefficients follow, more or less, a normal distribution with zero mean and similar standard deviations. This was expected for two reasons. First, each Curvelet coefficient aggregates a large number of real random values, and according to the central limit theorem [22] the sum of sufficiently many independent random contributions tends toward a normal distribution. Second, in change images from remote sensing applications, where (without exception) only local changes are present, most of the image is more or less stochastic, i.e., there are only a few distinct structures that could disturb the normal distribution of the coefficients. Moreover, since the single sub-band coefficients are normalized during the Curvelet transform [23], no special treatment of the individual sub-bands is necessary. Thus, if the image content were completely stochastic, the random combination of real and imaginary parts would result in amplitudes that follow a Rayleigh distribution, similar to SAR amplitudes of distributed targets as reported in [24], uniformly over all sub-bands. The sole parameter of the Rayleigh distribution function then equals the standard deviation of the real and imaginary parts. Figure 8 illustrates the analytical probability density function (PDF), the cumulative distribution function (CDF), and the empirical histogram of the Curvelet coefficient amplitudes. Indeed, the histogram of the coefficient amplitudes does not deviate much from the Rayleigh PDF, except at very low and very high values: there are more low values in the empirical histogram than expected, and likewise more high amplitudes than the Rayleigh PDF would predict. It follows that the combination of real and imaginary parts is not completely stochastic but partly deterministic, which is caused by the small number of distinct structures in the change images.
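The claimed relationship is easy to verify numerically: amplitudes of complex values with independent zero-mean normal real and imaginary parts follow a Rayleigh distribution whose parameter equals that standard deviation. A minimal sketch (sample size and sigma are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0

# Zero-mean normal real and imaginary parts, as observed for the
# Curvelet coefficients of a (mostly stochastic) change image.
re = rng.normal(0.0, sigma, 100_000)
im = rng.normal(0.0, sigma, 100_000)
amp = np.hypot(re, im)

# Empirical histogram vs. analytical Rayleigh PDF with the same sigma.
hist, edges = np.histogram(amp, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
rayleigh_pdf = (centers / sigma**2) * np.exp(-centers**2 / (2 * sigma**2))

# For purely stochastic inputs the two agree closely; deviations at the
# tails would indicate deterministic structure.
print(np.max(np.abs(hist - rayleigh_pdf)))
```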

Feature based non parametric estimation of Kullback Leibler divergence for SAR image change detection

In this article, a method based on a non-parametric estimation of the Kullback–Leibler divergence using a local feature space is proposed for synthetic aperture radar (SAR) image change detection. First, local features based on a set of Gabor filters are extracted from both pre- and post-event images. The distribution of these local features within a local neighbourhood is taken as a statistical representation of the local image information, and the Kullback–Leibler divergence, as a probabilistic distance, is used to measure the similarity of the two distributions. However, estimating the distribution of a high-dimensional random vector is not trivial, let alone comparing two such distributions. Thus, a non-parametric method based on k-nearest-neighbour search is proposed to compute the Kullback–Leibler divergence between the two distributions. Through experiments, this method is compared with other state-of-the-art methods, and the effectiveness of the proposed method for SAR image change detection is demonstrated.
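The exact estimator used in the article is not reproduced here, but a widely used k-nearest-neighbour estimator of the Kullback–Leibler divergence between two sample sets has roughly the following shape; k = 5, the epsilon guard, and the scikit-learn backend are assumptions for illustration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def kl_knn(x: np.ndarray, y: np.ndarray, k: int = 5) -> float:
    """k-NN estimate of D(P || Q) from samples x ~ P, y ~ Q:
    (d/n) * sum(log(nu_k / rho_k)) + log(m / (n - 1))."""
    n, d = x.shape
    m = y.shape[0]
    # rho_k: distance to the k-th nearest neighbour of x_i within x itself
    # (k+1 neighbours requested because the closest one is x_i).
    rho = NearestNeighbors(n_neighbors=k + 1).fit(x)
    rho_k = rho.kneighbors(x)[0][:, -1]
    # nu_k: distance to the k-th nearest neighbour of x_i within y.
    nu = NearestNeighbors(n_neighbors=k).fit(y)
    nu_k = nu.kneighbors(x)[0][:, -1]
    # Epsilon guard against zero distances from duplicate samples.
    return (d / n * np.sum(np.log((nu_k + 1e-12) / (rho_k + 1e-12)))
            + np.log(m / (n - 1)))
```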

A Validation of ICA Decomposition for PolSAR Images by Using Measures of Normalized Compression Distance

Simple color and intensity representations of polarimetric synthetic aperture radar (PolSAR) images fail to show the physical characteristics of the recorded ground objects, so several coherent and incoherent target decomposition theorems have been proposed in the state-of-the-art literature. All these decompositions assume that any scattering mechanism can be represented as the sum of simpler, "canonical" scattering mechanisms. Following the same assumption, in this paper we employ independent component analysis (ICA) for PolSAR image representation. Since ICA is a method for blind source separation, we expect the derived ICA channels to represent, as well as possible, certain types of scattering mechanisms present in the image. The ICA decomposition is validated against the coherent Pauli and the incoherent H/A/α decompositions. The normalized compression distance (NCD) is used as a measure of decomposition quality. Experiments are performed on an SLC L-band F-SAR image over Kaufbeuren airfield, Germany.
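As a rough illustration of the idea (not the authors' pipeline), the following sketch applies FastICA to the per-pixel scattering vector of a three-channel PolSAR scene; operating on channel amplitudes rather than on the complex SLC data is a simplifying assumption.

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_channels(hh: np.ndarray, hv: np.ndarray, vv: np.ndarray,
                 n_components: int = 3) -> np.ndarray:
    """Decompose the polarimetric channels of a PolSAR scene into
    statistically independent image channels."""
    h, w = hh.shape
    # Each pixel is one observation of the 3-channel scattering vector.
    x = np.stack([np.abs(hh), np.abs(hv), np.abs(vv)], axis=-1).reshape(-1, 3)
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(x)          # shape (h * w, n_components)
    return sources.reshape(h, w, n_components)
```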

Combination of LiDAR and SAR data with simulation techniques for image interpretation and change detection in complex urban scenarios

provided in literature. This dissertation contributes a pixel-based algorithm to detect increased backscattering in SAR images by analyzing the SAR pixel values according to simulated layers. To detect demolished buildings, simulated images are generated using LiDAR data. Two comparison operators (normalized mutual information and joint histogram slope) are used to compare image patches related to the same buildings. An experiment using Munich data has shown that both of them provide an overall accuracy of more than 90%. A combination of these two comparison operators using decision trees improves the result further. The fourth objective is to detect changes between SAR images acquired with different incidence angles. For this purpose, three algorithms are presented in this dissertation. The first is a building-level algorithm based on layer fill: image patches related to the same buildings in the two SAR images are extracted using simulation methods, and for each extracted patch pair the change ratio based on the fill ratio of the building layers is estimated. The change ratio values of all buildings are then classified into two classes using the EM algorithm (see the sketch below). This algorithm works well for buildings of different sizes and shapes in complex urban scenarios; however, since the whole building is analyzed as one object, buildings with partly demolished walls may not be detected. Following the same idea, a wall-level change detection algorithm was developed: image patches related to the same walls in the two SAR images are extracted and converted to a common geometry, and the converted patch pairs are then compared using change ratios based on fill ratio or fill position. Finally, the wall change results are fused to provide a building-level change result. Compared to the building-level algorithm, this method is more time-consuming but yields better results for partly demolished buildings. A combination of the two algorithms is therefore suggested, whereby the building-level method is used for all buildings and the wall-level method additionally for selected large buildings. The third algorithm is a wall-level change detection algorithm based on point-feature location: local maximum points in the two SAR images corresponding to the same building façade are compared. This method provides promising results for the present data and may work even better for future data with increased resolution, detecting changes of detailed façade structures.
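As a minimal sketch of the two-class step, a two-component Gaussian mixture fitted with the EM algorithm can split the per-building change ratios into changed and unchanged classes; the Gaussian component model and the scikit-learn backend are assumptions, as the dissertation specifies only the EM algorithm.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def classify_change_ratios(ratios: np.ndarray) -> np.ndarray:
    """Split per-building change ratios into two classes with a
    two-component Gaussian mixture fitted via the EM algorithm."""
    r = ratios.reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(r)
    labels = gmm.predict(r)
    # Ensure label 1 denotes the component with the higher mean ratio,
    # i.e. the 'changed' class.
    if gmm.means_[0, 0] > gmm.means_[1, 0]:
        labels = 1 - labels
    return labels
```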

A Benchmark Evaluation of Similarity Measures for Multi-temporal SAR Image Change Detection

We selected five datasets of TerraSAR-X images corresponding to reflectivity changes and statistical changes. The first four datasets, shown in the first two rows of Fig. 23, are taken from two radiometrically enhanced TerraSAR-X images acquired in Stripmap mode prior to (on Oct. 20, 2010) and after (on May 6, 2011) the Sendai earthquake in Japan. Their pixel spacing is about 2.5 m. The sizes of these four image pairs are 549 × 560, 613 × 641, 590 × 687, and 689 × 734 pixels, respectively. As the two TerraSAR-X images acquired before and after the disaster have nearly the same imaging parameters, we can reasonably assume that there is only a linear geometrical translation between them. To achieve precise registration, ten strong point scatterers were manually selected from each image to determine the translation along both the azimuth and range directions. The translation was verified by computing the normalized cross-correlation; the residual pixel shift is less than one pixel. The reference data shown in the third row were produced through careful manual interpretation of optical images, in which these changes are more visible; referring to optical images therefore makes the reference data reliable. The earthquake triggered a tsunami, which led to devastating flooding, as can be seen from …
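A minimal sketch of such a correlation-based translation check follows; the exhaustive integer search window is an illustrative assumption, sufficient to confirm that the residual shift stays below one pixel.

```python
import numpy as np

def normalized_cross_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two co-registered patches;
    values near 1 indicate good translational alignment."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def best_shift(ref: np.ndarray, mov: np.ndarray, search: int = 3):
    """Test all integer shifts within +/- search pixels and return the one
    maximizing the NCC; a (0, 0) result confirms sub-pixel registration."""
    h, w = ref.shape
    scores = {}
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            scores[(dy, dx)] = normalized_cross_correlation(
                ref[search:h - search, search:w - search],
                shifted[search:h - search, search:w - search])
    return max(scores, key=scores.get)
```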

A Comparison of Bag-of-Words method and Normalized Compression Distance for Satellite Image Retrieval

Recently, two improved methods have shown their advantages in browsing Earth Observation (EO) datasets: the Bag-of-Words (BoW) feature extraction method and the Normalized Compression Distance (NCD) for assessing image similarity. However, they have not been compared so far for satellite image retrieval, which motivates this paper. Two retrieval experiments were performed on a freely available optical image dataset and a SAR image dataset. From these experiments, we conclude that the BoW method generally performs better than NCD. Although NCD is a parameter-free solution for data mining, it only performs well for images with repetitive patterns, such as some homogeneous classes; the BoW method performs far better. In addition, NCD is computationally very expensive, which makes it infeasible for real applications. In contrast, the BoW method is more practical in terms of both accuracy and computation.
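For reference, NCD-based retrieval reduces to the following sketch (zlib as the compressor is an assumption); the per-query recompression of every concatenated pair is exactly what makes the approach computationally expensive.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    with C(.) the compressed length (here: zlib/DEFLATE)."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    return (len(zlib.compress(x + y)) - min(cx, cy)) / max(cx, cy)

def retrieve(query: bytes, database: list[bytes]) -> list[int]:
    """Rank database images by increasing NCD to the query. Every query
    triggers a fresh compression of each concatenated pair."""
    dists = [ncd(query, img) for img in database]
    return sorted(range(len(database)), key=dists.__getitem__)
```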

SAR Change Detection in a General Case Using Normalized Compression Distance

This approach was used to detect changes in different regions of interest (e.g., the Danube Delta in Romania or Belgica Bank in Greenland) independently of a special scenario or a spec…


On the Use of Normalized Compression Distances for Image Similarity Detection

Abstract: This paper investigates the usefulness of the normalized compression distance (NCD) for image similarity detection. Instead of the direct NCD between images, the paper considers the correlation between NCD-based feature vectors extracted for each image. The vectors are derived by computing the NCD between the original image and sequences of translated (rotated) versions of it. Feature vectors for simple transforms (circular translations in the horizontal, vertical, and diagonal directions, and rotations around the image center) and several standard compressors are generated and tested in a very simple experiment: similarity detection between the original image and two filtered versions (median and moving average). The promising vector configurations (geometric transform, lossless compressor) are further tested for similarity detection on the 24 images of the Kodak set subjected to some common image processing operations. While the direct computation of NCD fails to detect image similarity even in the case of simple median and moving-average filtering in 3 × 3 windows, for certain transforms and compressors the proposed approach provides robustness of similarity detection against smoothing, lossy compression, contrast enhancement, and noise addition, and some robustness against geometrical transforms (scaling, cropping, and rotation).
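A sketch of the feature-vector variant, restricted to horizontal circular translations and zlib; the number of shifts and the compressor are illustrative assumptions.

```python
import zlib
import numpy as np

def ncd(x: bytes, y: bytes) -> float:
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    return (len(zlib.compress(x + y)) - min(cx, cy)) / max(cx, cy)

def ncd_feature_vector(img: np.ndarray, shifts=range(1, 17)) -> np.ndarray:
    """NCD between an image and its horizontal circular translations."""
    base = img.tobytes()
    return np.array([ncd(base, np.roll(img, s, axis=1).tobytes())
                     for s in shifts])

def similarity(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Correlation between the two NCD feature vectors; high correlation
    signals similar images even when the direct NCD is uninformative."""
    va, vb = ncd_feature_vector(img_a), ncd_feature_vector(img_b)
    return float(np.corrcoef(va, vb)[0, 1])
```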

Interactive clustering for SAR image understanding

Immersive visual data mining: the SAR data from the DLR EO Digital Library will be processed for descriptor extraction; the descriptor space will be analyzed, projected adaptively into 3D space, and visualized in the CAVE, jointly with multi-modal rendering of the images and their content. The analyst, immersed in the CAVE, will be able to interact with the data content using learning algorithms and to navigate, explore, and analyze the information in the archive.


Image retrieval using compression-based techniques

Two simple classification experiments were performed, in which each image q was used as a query against all the others. In the first experiment, q was assigned to the class minimizing the average distance; in the second, to the class of the top-ranked retrieved object, i.e., the one most similar to q. The results, reported in Tables I and II, show an accuracy of 71.3% for the former method and 76.3% for the latter. It has to be remarked that the intra-class variability in the COREL dataset is sometimes very high: for example, most of the 10 images not recognized for the African class reported in Table I may in fact be considered outliers, since they contain only landscapes with no human presence (see Fig. 7). This shows the limits imposed by the subjective choice of the training datasets.
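Both decision rules reduce to a few lines given a precomputed pairwise distance matrix; a minimal sketch, with the matrix layout assumed:

```python
import numpy as np

def classify(dist: np.ndarray, labels: np.ndarray, q: int):
    """Classify query image q from a pairwise distance matrix. Returns
    (class by minimum average distance, class of the top-ranked image)."""
    others = np.arange(len(labels)) != q
    d, y = dist[q, others], labels[others]
    # Rule 1: class whose members have the smallest mean distance to q.
    classes = np.unique(y)
    avg_rule = classes[np.argmin([d[y == c].mean() for c in classes])]
    # Rule 2: class of the single most similar retrieved image.
    top_rule = y[np.argmin(d)]
    return avg_rule, top_rule
```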

Polarimetric Change Detection for Wetlands

Synthetic aperture radar (SAR) satellite data, being independent of cloud and smoke cover, able to operate day and night, and not subject to sun-glint, offer a reliable data stream for monitoring water bodies in North America (Brisco et al., 2009). Radar in general is very good at detecting open surface water and has been used operationally for flood monitoring in many countries (Kasischke and Bourgeau-Chavez, 1997). The longer wavelengths employed by SAR also allow for canopy penetration, and the subsequent interaction of the radar with the underlying flooded vegetation causes a double-bounce scattering mechanism that allows flooded vegetation to be identified (Pope et al., 1997). These characteristics make SAR an attractive tool for wetland monitoring applications (Brisco et al., 2008).

Object-based change detection for individual buildings in SAR images captured with different incidence angles

High-resolution synthetic aperture radar (SAR) images have been exploited for different change detection applications, such as damage assessment [1] and surveillance [2], and have shown great potential. These applications are based on the comparison of pre- and post-event spaceborne SAR images captured with the same signal incidence angle. However, because of the satellite orbit trajectory (for TerraSAR-X, e.g., the maximum site access time is 2.5 days on an adjacent orbit and the revisit time is 11 days on the same orbit), the first available post-event SAR image may be captured with a different incidence angle. In urgent situations such as earthquakes, these data have to be analyzed for changes in order to support local decision makers as quickly as possible. As shown in Fig. 3, the same building appears differently in SAR images captured with different signal incidence angles: i) wall layover areas are scaled in the range direction, ii) object occlusions differ, affecting object visibility, shadow size, etc., and iii) multiple reflections of signals related to building structures may differ. Accordingly, a pixel-based comparison is not suitable, as it may lead to a large number of false alarms.

A multiprocessing Framework for SAR Image Processing

The German Aerospace Center's new airborne SAR sensor (F-SAR) is becoming increasingly operational, and the first data sets have already been acquired. This brings new requirements for the processing software in terms of data rates and operational modes. Due to the massive amount of data, the processing time required for one data acquisition (without interferometry) has been estimated at 8 to 28 hours on a single AMD Athlon64 CPU at 2.2 GHz, so new strategies for data handling and processing are required. A new processor implementation has therefore been developed. It is realized in an object-oriented manner using the C++ programming language, which was identified as providing maximum speed and flexibility. The processor will support different kinds of hardware, including symmetrical and asymmetrical multiprocessing architectures. Since algorithms are developed by many different scientists, a mechanism that handles the multiprocessing transparently is required to reduce the development time of the individual algorithms; this is encapsulated within a numerical framework.
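The framework itself is written in C++; purely to illustrate what "transparent multiprocessing" means for an algorithm developer, here is a Python analogue in which the framework owns the data splitting, worker dispatch, and reassembly:

```python
from multiprocessing import Pool
import numpy as np

def process_blocks(func, data: np.ndarray, n_blocks: int = 8,
                   workers: int = 4) -> np.ndarray:
    """Minimal sketch of a framework that hides multiprocessing from the
    algorithm author: 'func' is written for a single block of samples."""
    blocks = np.array_split(data, n_blocks)
    with Pool(workers) as pool:
        return np.concatenate(pool.map(func, blocks))

# An algorithm developer only supplies the per-block routine
# (must be a module-level function so it can be pickled to workers):
def magnitude(block: np.ndarray) -> np.ndarray:
    return np.abs(block)
```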

The Schmittlets for automated SAR image enhancement

The definition of the Schmittlets is based on the hyperbolic tangent function family employed for the normalization of intensity images and image combinations [7], for the description of random distributions in the context of change detection [8], and, last but not least, for the derivation of filter masks. In contrast to common alternative image representations using gradient operators, Schmittlets are composed exclusively of squared hyperbolic secant functions, which increases the stability of the analysis. Two different shapes of Schmittlets are predefined: round Schmittlets (equal look number for each direction) and elongated ones (different look numbers per direction). Round Schmittlets can appear at different scales, i.e., different sizes; elongated Schmittlets can additionally adopt different orientations. In accordance with image processing theory, the number of distinguishable directions increases from two in the second-finest scale to sixteen in the fourth scale [9]. In this manner, 35 Schmittlets are defined from scale zero (original resolution) to scale four (sixteen looks per direction), as depicted in Fig. 1.
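The exact parameterization of the published Schmittlets is not reproduced here; the following hypothetical sketch merely illustrates a round, radially symmetric filter mask built from a squared hyperbolic secant profile.

```python
import numpy as np

def round_schmittlet(size: int = 15, scale: float = 2.0) -> np.ndarray:
    """Hypothetical round filter mask with a sech^2 radial profile;
    size and scale are illustrative, not the published parameters."""
    c = size // 2
    yy, xx = np.mgrid[-c:c + 1, -c:c + 1]
    r = np.hypot(xx, yy) / scale
    mask = 1.0 / np.cosh(r) ** 2          # squared hyperbolic secant
    return mask / mask.sum()              # normalize to unit gain
```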

Wavelet based image compression using FPGAs

As a consequence, such a bitplane coding scheme encodes the information that is most important in terms of compression efficiency first, provided an energy-compacting transform was applied beforehand (see Equation (2.2)). On the other hand, the ordering information has to be stored or transmitted, which can diminish the compression gain. Shapiro does not reorder the wavelet transform coefficients; he proposes to scan the samples from left to right and from top to bottom within each subband, starting in the upper left corner, with the subbands LH, HL, and HH at each scale scanned in that order. Furthermore, in contrast to traditional bitplane coders, he introduced data-dependent examination of the coefficients. The idea behind this is that large areas of samples that are unimportant in terms of compression should be excluded from exploration. The addressed self-similarities are the key to performing such exclusions of large areas.
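The fixed scan order can be made concrete with a short sketch; the Mallat subband layout assumed below (LH bottom-left, HL top-right, HH bottom-right) is a common convention, not taken from the text.

```python
def ezw_scan_order(width: int, height: int, levels: int):
    """Sketch of the scan: coarsest approximation subband first, then
    LH, HL, HH per scale (coarse to fine), raster order within each
    subband (left to right, top to bottom)."""
    w, h = width >> levels, height >> levels
    order = [(y, x) for y in range(h) for x in range(w)]  # LL band
    for lev in range(levels, 0, -1):
        w, h = width >> lev, height >> lev
        for x0, y0 in ((0, h), (w, 0), (w, h)):           # LH, HL, HH
            order += [(y0 + y, x0 + x) for y in range(h) for x in range(w)]
    return order
```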

Understanding and advancing PDE-based image compression

For instance, Acar and Gökmen [4] use the same variational membrane model to detect edges and reconstruct the image from a sparse edge set. Similarly, a more sophisticated centipede model is the core ingredient for the compression technique by Kurt et al. [163]. It performs edge detection and segments the image. These segments are then ranked according to a confidence measure that incorporates boundary lengths and statistical moments. The codec only stores the most significant segments and reconstructs them with a variational method. Many codecs follow a standard pipeline: pre-processing to remove noise, followed by edge detection and some sort of entropy coding for compression. For decompression, linear homogeneous diffusion is employed by all of the following methods. Carlsson [47] proposed an early sketch-based approach with linear homogeneous diffusion which was later modified and extended by Desai et al. [73]. The codec for general images by Wu et al. [310] performs edge detection and edge extension. It then stores the grey values at thickened edges with JPEG2000 and reconstructs with homogeneous diffusion. In contrast, Bastani et al. [20] store edges, but restrict grey values to so-called source points. Before edge detection, their algorithm uses Perona-Malik filtering for denoising. Similarly, Zhao and Du [322] employ a modified Perona-Malik filter for presmoothing and extract edges as zero-crossings of the Laplacian afterwards. Edges are stored with a so-called chain code that interprets an edge as a path that is traversed from beginning to end. Thereby, the edge is uniquely determined by the sequence of directions taken from one edge pixel to the next on the path.
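A chain code of this kind takes only a few lines; the 8-direction alphabet below is the usual convention and serves as an illustrative assumption.

```python
# 8-connected chain code: each step from one edge pixel to the next is
# encoded as a direction index 0..7 (E, NE, N, NW, W, SW, S, SE).
DIRECTIONS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
              (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(path):
    """Encode an ordered list of (row, col) edge pixels as a chain code;
    the edge is uniquely determined by its start point and this sequence."""
    return [DIRECTIONS[(r2 - r1, c2 - c1)]
            for (r1, c1), (r2, c2) in zip(path, path[1:])]

# Example: a short edge running east, then north-east.
print(chain_code([(5, 2), (5, 3), (4, 4)]))   # -> [0, 1]
```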

Efficient SAR Raw Data Compression in Frequency Domain

BAVQ leads to slightly improved image quality compared to BAQ, but this does not justify the additional computations. The best results are achieved if the BAQ is applied in the frequency domain. The WHT, DCT, and FFT combined with a BAQ (WHT-BAQ, DCT-BAQ, FFT-BAQ) were applied to SAR raw data by Benz, Strodl, and Moreira [3]. The WHT and DCT were applied separately to the in-phase and quadrature channels; because of the small amount of lossy quantization, the algorithm can be assumed to be linear, so this separation is valid. All algorithms performed well, but the best results were obtained using the FFT. The compression was applied to airborne data of DLR's experimental SAR (E-SAR) and to data of the ERS-1 satellite. The methods were compared with BAVQ, BAQ, and a fuzzy BAQ with respect to signal-to-distortion-noise ratio, geometric resolution, peak sidelobe ratio (PSLR), integrated sidelobe ratio (ISLR), radiometric linearity, and phase errors.
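A strongly simplified sketch of the idea: BAQ adapts the quantizer to the statistics of each block, and FFT-BAQ applies the same quantizer to the spectrum instead of the raw samples. The uniform quantizer and the clipping range used here are simplifications; real BAQ uses Lloyd-Max levels matched to Gaussian raw-data statistics.

```python
import numpy as np

def baq(block: np.ndarray, bits: int = 3) -> np.ndarray:
    """Simplified block adaptive quantizer: normalize a block by its own
    magnitude statistic, then quantize uniformly to 2**bits levels."""
    scale = np.mean(np.abs(block)) + 1e-12     # per-block adaptive scale
    levels = 2 ** (bits - 1)
    # Assume +/- 3 mean magnitudes cover the block's dynamic range.
    q = np.clip(np.round(block / scale * levels / 3), -levels, levels - 1)
    return q * scale * 3 / levels              # dequantized reconstruction

def fft_baq(block: np.ndarray, bits: int = 3) -> np.ndarray:
    """FFT-BAQ sketch: quantize in the frequency domain instead of the
    raw domain; I and Q parts are quantized separately."""
    spec = np.fft.fft(block)
    return np.fft.ifft(baq(spec.real, bits) + 1j * baq(spec.imag, bits))
```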

Low Complexity Text and Image Compression for Wireless Devices and Sensors

The PPM technique generally yields better compression than dictionary coders and seems conceptually more suitable for the compression of short messages. It was thus selected as the starting point for the first part of this work, in Chapter 2; a detailed overview of that chapter is given in Section 2.1. The chapter describes the steps taken in the development of a low-complexity compression scheme. It starts with an own implementation of the PPM method, which is then extended into a tool that allows a detailed analysis of the statistical data. The findings lead to the design of a novel statistical context model, which is preloaded with data and evaluated for the compression of very short messages. A model that requires only 32 kByte cuts the size of short messages in half, and a scalability feature allows better compression efficiency at the cost of higher memory requirements. The novel method is finally applied in a cell phone application that lets the user save costs when writing messages exceeding the 160-character limit. Figure 1.1 a) depicts a screenshot of this software, which was made freely available; a commercial tool now uses the method, as illustrated in Figure 1.1 b).
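The thesis model itself is not reproduced here; the following sketch only illustrates the principle of a preloaded context model, with an order-2 character context and add-one smoothing standing in for PPM's escape mechanism.

```python
import math
from collections import defaultdict

def train_context_model(corpus: str, order: int = 2):
    """Preload a character context model from training text; the order
    and the corpus are illustrative assumptions, not the thesis model."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(order, len(corpus)):
        counts[corpus[i - order:i]][corpus[i]] += 1
    return counts

def estimated_bits(message: str, counts, order: int = 2) -> float:
    """Ideal code length of a message under the preloaded model."""
    bits = 0.0
    for i in range(order, len(message)):
        ctx, sym = message[i - order:i], message[i]
        total = sum(counts[ctx].values()) + 256  # add-one, byte alphabet
        bits += -math.log2((counts[ctx][sym] + 1) / total)
    return bits
```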
