OpenSARUrban: A Sentinel-1 SAR Image Dataset for Urban Interpretation

ing Professor appointments with the University of Oviedo, Spain; the University Louis Pasteur, Strasbourg, France; the International Space University, Strasbourg, France; University of Siegen, Germany; University of Innsbruck, Austria; University of Alcala, Spain; University Tor Vergata, Rome, Italy; Universidad Pontificia de Salamanca, campus de Madrid, Spain; University of Camerino, Italy; and the Swiss Center for Scientific Computing, Manno, Switzerland. Since 1981, he has been a Professor with the Department of Applied Electronics and Information Engineering, Faculty of Electronics, Telecommunications and Information Technology, UPB, working on image processing and Electronic Speckle Interferometry. Since 1993, he has been a Scientist with the German Aerospace Center (DLR), Oberpfaffenhofen, Germany. He is developing algorithms for model-based information retrieval from high-complexity signals and methods for scene understanding from very-high-resolution synthetic aperture radar (SAR) and Interferometric SAR data. He is currently a Senior Scientist and Image Analysis Research Group Leader with the Remote Sensing Technology Institute, DLR, Wessling, Germany. Since 2011, he has been leading the Immersive Visual Information Mining Research Lab, Munich Aerospace Faculty, and has been the Director of the Research Center for Spatial Information, UPB. Since 2001, he has been initiating and leading the Competence Centre on Information Extraction and Image Understanding for Earth Observation, ParisTech, Paris Institute of Technology, Telecom Paris, Paris, France, a collaboration of DLR with the French Space Agency (CNES). He has been a Professor of the DLR-CNES Chair, ParisTech, Paris Institute of Technology, Telecom Paris.
He initiated the European frame of projects for Image Information Mining (IIM) and is involved in research programs for information extraction, data mining and knowledge discovery, and data understanding with the European Space Agency (ESA), NASA, and in a variety of national and European projects. He and his team have developed and are currently developing the operational IIM processor in the Payload Ground Segment systems for the German missions TerraSAR-X, TanDEM-X, and the ESA Sentinel-1 and -2. He is the author of more than 450 scientific publications; among them 80 are journal papers, and a book on number theory. He has served as a Co-organizer of international conferences and workshops, and as a Guest Editor of a special issue on IIM of the IEEE and other journals. He is involved in research related to information theoretical aspects and semantic representations in advanced communication systems. His research interests include Bayesian inference, information and complexity theory, stochastic processes, model-based scene understanding, image information mining, for applications in information retrieval, and understanding of high-resolution SAR and optical observations.

SEN12MS--A Curated Dataset of Georeferenced Multi-Spectral Sentinel-1/2 Imagery for Deep Learning and Data Fusion

The availability of curated large-scale training data is a crucial factor for the development of well-generalizing deep learning methods for the extraction of geoinformation from multi-sensor remote sensing imagery. While quite a few datasets have already been published by the community, most of them suffer from rather strong limitations, e.g. regarding spatial coverage, diversity, or simply the number of available samples. Exploiting the freely available data acquired by the Sentinel satellites of the Copernicus program implemented by the European Space Agency, as well as the cloud computing facilities of Google Earth Engine, we provide a dataset consisting of 180,662 triplets of dual-pol synthetic aperture radar (SAR) image patches, multi-spectral Sentinel-2 image patches, and MODIS land cover maps. With all patches being fully georeferenced at a 10 m ground sampling distance and covering all inhabited continents during all meteorological seasons, we expect the dataset to support the community in developing sophisticated deep learning-based approaches for common tasks such as scene classification or semantic segmentation for land cover mapping.

Combination of LiDAR and SAR data with simulation techniques for image interpretation and change detection in complex urban scenarios

First, SAR images are often difficult to visually interpret, especially in dense urban areas. As illustrated in Fig. 1.1, it is hard to determine the location of streets or the boundaries of buildings, or to identify individual buildings (e.g. to find the two towers of the Frauenkirche in the SAR image). This is related to the distortion effects inherent in the SAR imaging concept. The layover effects lead to a mixture of backscatter from different objects at the same position in the SAR images; the shadow effects make many objects invisible; multiple scattering leads to bright lines, point signatures or even ghost scatterers (Auer et al. 2011) and causes high local contrasts in intensity. Man-made objects with different heights, shapes, materials or surface roughness appear very differently in SAR images, which also leads to unclear object boundaries. Nevertheless, exploiting these effects can provide information that may not be contained in other kinds of data (e.g. optical images or LiDAR data). For example, point signatures are strong hints of buildings (Soergel et al. 2006) and provide information about façade details such as windows or balconies (Auer et al. 2010a). Bright lines caused by double reflection signals indicate the boundaries of buildings (Wegner et al. 2010; Auer and Gernhardt 2014).
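The layover and shadow effects mentioned above follow directly from the side-looking imaging geometry. As a hedged summary (textbook SAR geometry, not stated in this abstract), with local incidence angle \(\theta\), sensor-facing terrain slope \(\alpha\), and back slope \(\beta\):

```latex
% Standard SAR geometric distortion conditions (textbook relations):
\text{foreshortening: } 0 < \alpha < \theta, \qquad
\text{layover: } \alpha > \theta, \qquad
\text{shadow: } \beta > 90^\circ - \theta
```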

Colorizing Sentinel-1 SAR Images using a variational Autoencoder conditioned on Sentinel-2 imagery

Synthetic aperture radar (SAR) images are completely different from optical images in terms of both geometric and radiometric appearance: While SAR is a range-based imaging modality and measures physical properties of the observed scene, optical imagery basically represents an angular measurement system and collects information about the chemical characteristics of the environment. Thus, the interpretation of SAR imagery is still a challenging task for remote sensing scientists. However, SAR image interpretation can be alleviated when optical colors are used to support the interpretation process. For decades, this has been a special case of remote sensing image fusion (Pohl and van Genderen, 1998; Schmitt and Zhu, 2016). Still, SAR-optical image fusion by definition needs both a SAR and an optical image acquired at approximately the same time, which means standard image fusion techniques do not particularly help the interpretability of SAR images as an independent data source. To overcome the need for accompanying optical imagery, this paper proposes to learn feasible colorizations of Sentinel-1 SAR images from co-registered Sentinel-2 training examples using deep learning techniques. This is meant to provide a significant step in SAR-optical data fusion (Schmitt et al., 2017) with application to improved SAR image understanding, and will enable SAR data providers to attach colorized versions of their imagery to their products. In order to achieve this goal, we find inspiration in recent computer vision approaches dealing with gray-scale image colorization. In the frame of this paper, we use the network architecture proposed by Deshpande et al. (2017), which utilizes both a variational autoencoder (VAE) as well as a mixture density network (MDN) (Bishop, 1994) to create multi-modal colorization hypotheses.
Since our problem is different from the computer vision task in that there are no color SAR images that can be used as target samples during training, we first create artificial color SAR images by SAR-optical image fusion. In the remainder of this paper, we will first describe how these artificial color SAR images can be created (Section 2). Then, we introduce our dataset based on Sentinel-1 and Sentinel-2 imagery in Section 3, before we describe the employed deep generative model in Section 4. Finally, exemplary results are shown in Section 5.

Assessment of Coastal Aquaculture for India from Sentinel-1 SAR Time Series

Received: 28 December 2018; Accepted: 6 February 2019; Published: 11 February 2019. Abstract: Aquaculture is one of the fastest growing primary food production sectors in India, which ranks second behind China. Due to its growing economic value and global demand, India's aquaculture industry has experienced exponential growth for more than a decade. In this study, we extract land-based aquaculture at the pond level for the entire coastal zone of India using large-volume time series Sentinel-1 synthetic aperture radar (SAR) data at 10-m spatial resolution. Elevation and slope from Shuttle Radar Topography Mission digital elevation model (SRTM DEM) data were used for masking inappropriate areas, whereas a coastline dataset was used to create a land/ocean mask. The pixel-wise temporal median was calculated from all available Sentinel-1 data to significantly reduce the amount of noise in the SAR data and to reduce confusion with temporarily inundated rice fields. More than 3000 aquaculture pond vector samples were collected from high-resolution Google Earth imagery and used in an object-based image classification approach to exploit the characteristic shape information of aquaculture ponds. An open-source connected component segmentation algorithm was used for the extraction of the ponds, based on the difference in backscatter intensity of inundated surfaces and on shape metrics calculated from the aquaculture samples as input parameters. This study, for the first time, provides spatially explicit information on aquaculture distribution at the pond level for the entire coastal zone of India. Quantitative spatial analyses were performed to identify provincial dominance in aquaculture production, as revealed for the Andhra Pradesh and Gujarat provinces. For accuracy assessment, 2000 random samples were generated based on a stratified random sampling method.
The study demonstrates, with an overall accuracy of 0.89, the spatio-temporal transferability of the methodological framework and the high potential for a global-scale application.
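The accuracy assessment described above (2000 stratified random samples, overall accuracy 0.89) can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' code, and the function names are hypothetical:

```python
import numpy as np

def stratified_samples(label_img, n_per_class, rng):
    """Draw up to n_per_class random pixel locations from every class (stratum)."""
    samples = {}
    for c in np.unique(label_img):
        rows, cols = np.nonzero(label_img == c)
        pick = rng.choice(rows.size, size=min(n_per_class, rows.size), replace=False)
        samples[int(c)] = list(zip(rows[pick].tolist(), cols[pick].tolist()))
    return samples

def overall_accuracy(reference, predicted):
    """Fraction of validation samples where the map label equals the reference label."""
    reference, predicted = np.asarray(reference), np.asarray(predicted)
    return float((reference == predicted).mean())
```

Per-stratum sampling guarantees that rare classes (here: aquaculture ponds) receive enough validation points even when they cover only a small fraction of the map.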

A Novel Method for Automated Supraglacial Lake Mapping in Antarctica Using Sentinel-1 SAR Imagery and Deep Learning

To enable more detailed analyses of the effects of supraglacial meltwater accumulation on Antarctic ice dynamics, mass balance, or ice shelf stability, a thorough mapping of the Antarctic surface hydrological network is required. For this purpose, spaceborne remote sensing is an ideal tool, enabling both unprecedented spatial coverage and high temporal resolution compared to ground-based mapping efforts. In addition to several local- and regional-scale investigations using manual to semi-automated mapping techniques (e.g., [18-24]), only a few studies have implemented algorithms for either automated or continent-wide mapping of Antarctic supraglacial lakes in optical satellite imagery. Of these, Moussavi et al. [25] used optical Landsat 8 and Sentinel-2 data in combination with fixed band and index thresholding to automatically extract supraglacial lake extents and volumes over four East Antarctic ice shelves. Next, Halberstadt et al. [26] employed unsupervised k-means clustering on Landsat 8 scenes over two East Antarctic ice shelves to train a suite of supervised classifiers, and Dell et al. [24] advanced the "FAST" (Fully Automated Supraglacial lake area and volume Tracking) algorithm [27] to automatically track supraglacial lakes in Landsat 8 and Sentinel-2 imagery over Nivlisen Ice Shelf, East Antarctica. Finally, our companion paper [28] presented an automated Sentinel-2 supraglacial lake classification algorithm developed on the basis of a random forest (RF) classifier and tested on spatially and temporally independent acquisitions distributed across the Antarctic continent. While these studies rely on the use of optical multi-spectral data, synthetic aperture radar (SAR) data are currently used mainly for visual interpretation and evaluation of Antarctic supraglacial lakes (e.g., [22, 29]). In fact, an automated or spatially transferable supraglacial lake detection algorithm using SAR data is entirely missing.
Given the advantage of SAR data, e.g., for detection of subsurface lakes or considering its year-round as well as all-weather imaging capabilities, the development of an automated SAR-based mapping method is overdue. In this context, year-round image acquisitions are particularly suitable for analyses on intra-annual lake dynamics revealing whether lakes refreeze or drain at the onset of Antarctic winter as well as for delivery of complementary mapping products to optical lake extent classifications.

Time Series Sentinel-1 SAR Data for the Mapping of Aquaculture Ponds in Coastal Asia

The pixel-wise median was calculated for the pre-processed time series data cube to reduce speckle noise in the SAR intensity imagery and to identify permanent and stable low scatterers (see figure 4). By averaging over time, the median of the temporal data cube effectively improved the recognition and detection of small and narrow surface structures such as dams and levees surrounding aquaculture ponds. An automatic clustering-based image thresholding was used to mask land and water surfaces prior to the segmentation. We chose the Otsu method [7] to separate pond water from the surrounding land area (dikes, levees, and dams). A connected component segmentation algorithm [8] was applied to the temporally filtered SAR time series data to extract pond objects based on shape and size features, with a mean overall accuracy of 0.83. Appropriate parameters for object area, elongation, and region ratio were derived from an aquaculture pond sample dataset and applied for the object-based image filtering.
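The pipeline above (temporal median, Otsu thresholding, connected-component extraction) can be sketched in a few lines. This is a minimal numpy-only illustration under our own assumptions (water dark in SAR intensity, 4-connectivity), not the authors' implementation:

```python
import numpy as np

def temporal_median(stack):
    """Pixel-wise median over a (time, rows, cols) intensity stack; suppresses speckle."""
    return np.median(stack, axis=0)

def otsu_threshold(img, bins=256):
    """Otsu's clustering-based threshold: maximize between-class variance."""
    hist, edges = np.histogram(img, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(hist).astype(float)
    mu0 = np.cumsum(hist * centers)
    w1, total = w0[-1] - w0, mu0[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = w0 * w1 * (mu0 / w0 - (total - mu0) / w1) ** 2
    return centers[np.nanargmax(between)]

def label_components(mask):
    """4-connected component labelling of a boolean mask via iterative flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    rows, cols = mask.shape
    for r0, c0 in zip(*np.nonzero(mask)):
        if labels[r0, c0]:
            continue
        current += 1
        labels[r0, c0] = current
        stack = [(r0, c0)]
        while stack:
            r, c = stack.pop()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and mask[rr, cc] and not labels[rr, cc]:
                    labels[rr, cc] = current
                    stack.append((rr, cc))
    return labels, current

def extract_ponds(median_img, min_area=4):
    """Open water is dark in SAR intensity: threshold, then keep large components."""
    water = median_img < otsu_threshold(median_img)
    labels, n = label_components(water)
    for lab in range(1, n + 1):
        if (labels == lab).sum() < min_area:
            labels[labels == lab] = 0
    return labels
```

In practice, shape features (area, elongation, region ratio) derived from pond samples would further filter the labelled components, as the abstract describes.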

Interpretation of SAR images in urban areas using simulated optical and radar images

All geometrical parameters needed for POV-Ray have been calculated in consideration of the TSX data with the method explained in Section II. In order to obtain a clear shape of the objects, all the model surfaces are characterized by low diffuse reflection for the optical simulation configuration 1 and by high diffuse scattering for configuration 2. The geocoded simulated optical images of configurations 1 and 2 are presented in Figure 4. Compared to the TSX image, shown in Figure 5, several similarities are clearly visible in the simulated optical images. Structures of high buildings (e.g. feature 1, the top of the Frauenkirche) and low buildings (e.g. feature 2) are found at the same geometrical position in the three images. Moreover, the shape of the shadow areas is similar (e.g. feature 3). Vegetation (feature 5) is not represented in the DEM; hence, it cannot be seen in the simulated images. The simulated optical image of configuration 3 is not geocoded, because its geometry is not directly comparable with the TSX image.

The SARptical dataset for joint analysis of SAR and optical image in dense urban area

The joint interpretation of very high resolution SAR and optical images in dense urban areas is not trivial due to the distinct imaging geometries of the two types of images. In particular, the inevitable layover caused by the side-looking SAR imaging geometry renders this task even more challenging. Only recently, the "SARptical" framework [1], [2] proposed a promising solution to tackle this. SARptical can trace individual SAR scatterers in corresponding high-resolution optical images via rigorous 3-D reconstruction and matching. This paper introduces the SARptical dataset, a dataset of over 10,000 pairs

Joint Sparsity in SAR Tomography for Urban Mapping

In particular, due to the significantly improved estimation accuracy, M-SL1MMER reconstructs some interesting details which were not accessible so far. For a practical demonstration, we calculated the elevation distance between the first and second layer for the double-scatterer case, which is shown in Figure 15. The red parallelogram marks the area where facade and roof are overlaid, cf. Figure 16. At the far-range side of this area, the elevation distance amounts to approximately 22.60 m (cyan). Accordingly, the width of the roof can be calculated to be 18.27 m, which agrees, up to the decimeter level, with what we estimated from the 3-D building model of Google Earth. Besides, the yellow parallelogram encloses the area where two neighboring windows in the diagonal direction, exemplified as S1 and S2 in Figure 16, are superimposed. Thus

Automatic calving front delineation on TerraSAR-X and Sentinel-1 SAR imagery

Especially for tidewater glaciers, the CFL is subject to strong fluctuations, both seasonal and long-term. Depending on the bed topography, the front can migrate on and off stable frontal positions, where the glacier is grounded below sea level [2]. A retreat from such a stable position into an over-deepening of the fjord can lead to a detachment of the glacier from the bed, followed by rapid retreat. Therefore, changes in the CFL are seen as early indicators of a glacier's dynamic behavior [3].


SAR image dataset of military ground targets with multiple poses for ATR

training. [4] The independence between the training and testing sets in MSTAR has also been questioned [2], but such independence is essential to achieve reliable results in any recognition problem. As a result of these limitations, ATR algorithms may have shown artificially inflated results, and their performance may not have been accurately reported. This paper presents a new dataset dedicated to SAR ATR. The experimental setup and all the environmental factors are presented in Section 3.1. The algorithm used to create the ISAR images is discussed in Section 3.3. Guidelines are also given in Section 4.2 to correctly select the sequences to form independent and varied training and testing sets. In Section 4.3, the best variant to choose as a confuser is discussed. The confuser is a target that is not present during training but only during testing, to confuse the algorithm. Following this discussion, an evaluation method is suggested, so that all algorithms are evaluated in the same way.
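The guideline of selecting whole sequences so that training and testing sets stay independent can be sketched as a group-level split. A minimal illustration under our own naming (the function and field names are hypothetical, not the paper's):

```python
import random

def split_by_sequence(samples, test_fraction=0.3, seed=0):
    """samples: list of (sequence_id, image) pairs.

    Assign entire sequences to either split, so that no acquisition sequence
    contributes images to both training and testing (no information leakage)."""
    seqs = sorted({seq for seq, _ in samples})
    rng = random.Random(seed)
    rng.shuffle(seqs)
    n_test = max(1, int(len(seqs) * test_fraction))
    test_ids = set(seqs[:n_test])
    train = [s for s in samples if s[0] not in test_ids]
    test = [s for s in samples if s[0] in test_ids]
    return train, test
```

Splitting at the sequence level, rather than at the image level, is what prevents near-duplicate views of the same target pass from appearing on both sides of the split.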

SAR Image Feature Analysis for Slum Detection in Megacities

The most significant changes to urban areas in recent decades took place in the developing countries around the world, where millions of people moved to the cities in the hope of finding labor. This leads to an increasing need for housing at affordable cost, and inevitably to sub-standard living conditions and even illegal dwellings when capacities are exhausted (United Nations 2014). If accompanied by unbalanced economic growth, arising informal settlements, also called slums, will be a result of the concentration of poor urban dwellers, who subsequently are about to experience economic, political, cultural and social exclusion (United Nations 2010). On today's planet of slums (cf. Davis 2006), more than one billion people are facing these problems, and this number is expected to increase to one and a half billion by 2020 (Arimah 2010). In consequence of Millennium Development Goal 7D, set by the United Nations to "achieve, by 2020, a significant improvement in the lives of at least 100 million slum dwellers", several countries, most prominently China, India and Indonesia, have tried to face these problems of modern urbanization. Even though many slum dwellers globally experienced improvements in their living environment, for example by means of water sources, living space or sanitation facilities, further effort is needed to improve the living conditions of yet more people around the globe (United Nations 2013).

Normalized Compression Distance for SAR Image Change Detection

Both sets of data were also used as input for the methodology described in [6]. Because we used a relatively large patch of 64x64 pixels, some steps of the original algorithm (e.g., difference image fusion or NSCT-based denoising) had a minor contribution to the final result. Fig. 4 includes two different change maps generated with the two different methodologies, where the first one (state-of-the-art) is focused on spatial neighborhood information and the second one simply detects transformations of the land cover texture. Even if there are singular unchanged areas within some changed areas, the state-of-the-art method will consider the isolated unchanged areas as irrelevant. On the other hand, our method will take into account the relevance of each feature content, as is highlighted in the zoomed area inside the third image in Fig. 2. There are flooded areas separated by railways or roads that also appear separated in the change map. Indeed, there are flooded areas that are not detected as changed by our method because of its properties. The similarity of a random pair of patches is independent of any other pair, since the generation of the change map by our method is linked only with the value of the threshold. In the third
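The normalized compression distance underlying this approach has a compact standard form, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the compressed length. A minimal sketch using zlib as the compressor (our illustration; the paper's choice of compressor and patch handling may differ):

```python
import zlib

def ncd(x: bytes, y: bytes, level: int = 9) -> float:
    """Normalized Compression Distance between two byte strings.

    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    where C(.) is the length of the zlib-compressed input. Values near 0 mean
    the patches share most of their information; values near 1 mean little is shared."""
    cx = len(zlib.compress(x, level))
    cy = len(zlib.compress(y, level))
    cxy = len(zlib.compress(x + y, level))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

For change detection, corresponding patches from the two acquisition dates would be serialized to bytes, and the change map follows by thresholding the per-patch NCD values.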

Towards Sentinel-1 SAR Analysis-Ready Data: A Best Practices Assessment on Preparing Backscatter Data for the Cube

In order to compare different processing workflows for their ability to correct for backscatter differences originating from the orientation of the terrain towards the sensor, the RTC backscatter products were compared with the local incident angle (INC). An image which has not been corrected for terrain effects will show a negative correlation with the INC product, such that areas tilted towards the sensor are brighter than shadowed areas tilted away from the sensor [16]. In order to compare all processed images to a common INC product, an image was created according to the descriptions by Small 2011 and Meier et al. 1993 [46] at 30 m resolution and in UTM projection for both study sites. In the following sections, this product is referred to as the UZH (University of Zurich) incident angle product. During processing, the products created by SNAP were found to be aligned to a different pixel grid than that defined by the input DEM, while in GAMMA, this exact grid was preserved. This is explained by the above-described additional resampling, which is always applied in SNAP. For this reason, two different INC products were resampled from the original UZH product to match the respective grids and the spatial resolution of 90 m used throughout this study for comparison with single image results. By up-sampling the product from 30 m to 90 m, nearly identical products were used for the SNAP and GAMMA grids. Otherwise, an additional resampling step would have had to be applied directly to the backscatter results of either software to match the grid of the other, potentially introducing additional errors and impairing the comparability of the results.
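The described check, looking for a residual negative correlation between backscatter and the local incident angle, can be sketched as follows. A minimal numpy illustration, not the workflow used in the paper:

```python
import numpy as np

def terrain_correlation(backscatter_db, inc_deg, mask=None):
    """Pearson correlation between backscatter and local incident angle.

    A strongly negative value indicates remaining terrain-induced brightness
    (fore-slopes brighter than back-slopes); a value near zero suggests a
    successful radiometric terrain correction."""
    b, i = np.ravel(backscatter_db), np.ravel(inc_deg)
    if mask is not None:
        m = np.ravel(mask)
        b, i = b[m], i[m]
    return float(np.corrcoef(b, i)[0, 1])
```

For a fair comparison, both rasters must of course be aligned to the same pixel grid first, which is exactly the resampling issue the paragraph above discusses.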

The Schmittlets for automated SAR image enhancement

The potential of the Schmittlets is demonstrated visually in Fig. 2. The original intensity image is geocoded using a pixel spacing of 1 m on the ground, see Fig. 2a. This corresponds approximately to the spacing of the single-look image. After applying the Schmittlet image enhancement (see Fig. 2b), strong structures, i.e. deterministic targets, are still present at the best available resolution, while distributed targets are perfectly smoothed. The Schmittlet index layer containing the best-fitting Schmittlets can be found in Fig. 2c. Schmittlets are colored according to Fig. 1: dark for fine structures, bright for coarse structures, and colors according to direction. Fig. 2d shows an optical image of the site, acquired at another time, to give an impression of the surroundings.

A framework for SAR-optical stereogrammetry over urban areas

To verify the success of SAR-optical block adjustment, we require highly accurate GCPs, which are not available for the study areas. However, we evaluated the accuracy of the block adjustment in sub-scenes of the two study areas with the assistance of available LiDAR point clouds. First, we manually found some matched points that had been measured in the SAR and optical sub-scenes, similar to the tie point selection step. Next, the measured points located in the optical imagery were projected to the terrain using the corresponding reverse rational polynomial functions $f_o(c, r, h)$ and $g_o(c, r, h)$. To ensure exact back-projection, the height $h$ of each point was extracted from the available high-resolution LiDAR point clouds of the target sub-scenes. To overcome the noise in the LiDAR data, we considered neighboring points around the selected measured point, and the final height of the target point was selected based on the mode of the heights in the considered neighborhood. The resulting ground points $(f_o(c, r, h), g_o(c, r, h), h)$ were then back-projected to the SAR scene using the forward RPCs fitted to the SAR imagery. Finally, comparing the image coordinates of the measured points on the SAR imagery (from manual matching) with their coordinates derived by projection from the optical to the SAR imagery using RPCs and LiDAR data provides an evaluation of the SAR-optical block adjustment performance. The residuals can be calculated as:
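The mode-of-neighborhood height selection described above can be sketched as follows. This is a hedged reconstruction: binning the continuous LiDAR heights into fixed-width intervals to obtain a mode is our assumption, not a detail taken from the text:

```python
import numpy as np

def mode_height(lidar_xyz, x, y, radius=2.0, bin_width=0.5):
    """Return the modal height of LiDAR points within `radius` of (x, y).

    Heights are continuous, so the mode is taken as the centre of the most
    populated histogram bin, which makes the estimate robust to noisy points."""
    d2 = (lidar_xyz[:, 0] - x) ** 2 + (lidar_xyz[:, 1] - y) ** 2
    z = lidar_xyz[d2 <= radius ** 2, 2]
    if z.size == 0:
        raise ValueError("no LiDAR points in the neighbourhood")
    edges = np.arange(z.min(), z.max() + bin_width, bin_width)
    if edges.size < 2:
        return float(np.median(z))
    counts, edges = np.histogram(z, bins=edges)
    k = int(np.argmax(counts))
    return float((edges[k] + edges[k + 1]) / 2.0)
```

Unlike a mean, the histogram mode is unaffected by a handful of returns from vegetation or roof edges far above the dominant ground surface.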

A New Approach for Optical and SAR Satellite Image Registration

Over the last years, research studies like Ager and Bresnahan (2009) have shown the high geometric accuracy of high-resolution radar satellites like TerraSAR-X. In the papers of Reinartz et al. (2011) and Perko et al. (2011), methods are investigated which use high-resolution TerraSAR-X images to improve the geometric accuracy of optical images. The improvement of the geometric accuracy of the optical images is achieved by using ground control points (GCPs) selected from the SAR reference image. The methods used in Reinartz et al. (2011) and Perko et al. (2011) are based on multi-sensor image-to-image registration and use TerraSAR-X images as references. Image registration is required in different remote sensing applications, like change detection or image fusion, and is an ongoing research topic. The registration of images is a challenging task, especially if the images are acquired by different sensor types, like optical and SAR. Commonly, image registration methods consist of four steps: (1) feature detection and extraction, (2) feature matching, (3) transformation model estimation, and (4) image resampling and transformation (Zitová and Flusser, 2003). The method proposed in this paper focuses only on the first two steps of image registration. An overview of image registration techniques can be found in the papers of Brown (1992), Zitová and Flusser (2003) and Xiong and Zhang (2010).
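Step (2), feature matching, is commonly made robust with a mutual nearest-neighbour (cross-check) test before the transformation model is estimated. A minimal numpy sketch of that generic test (our illustration, not the method proposed in the paper):

```python
import numpy as np

def mutual_nn_matches(desc_a, desc_b):
    """Brute-force descriptor matching with a mutual nearest-neighbour check.

    Returns an (n, 2) array of index pairs (i, j) such that desc_b[j] is the
    nearest neighbour of desc_a[i] and vice versa -- a common symmetry filter
    applied before estimating the transformation model."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    nn_ab = np.argmin(d, axis=1)          # best B match for each A descriptor
    nn_ba = np.argmin(d, axis=0)          # best A match for each B descriptor
    idx_a = np.arange(desc_a.shape[0])
    mutual = nn_ba[nn_ab] == idx_a
    return np.stack([idx_a[mutual], nn_ab[mutual]], axis=1)
```

The symmetry requirement discards one-sided matches, which is particularly valuable for optical-SAR pairs where descriptor distances are far less discriminative than in same-sensor matching.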

Dialectical GAN for SAR Image Translation: From Sentinel-1 to TerraSAR-X

Deep learning has been widely used in recent years in computer vision, biology, medical imaging, and remote sensing. Although the theory of deep learning is not yet mature, its capabilities, shown in numerous applications, have attracted the attention of many researchers. Let us briefly review the development of image translation with deep learning. In 2016, Gatys et al. demonstrated the power of Convolutional Neural Networks (CNNs) in creating fantastic artistic imagery. With a good understanding of pretrained VGG networks, they achieved style transfer and demonstrated that semantic exchange could be performed using neural networks. Since then, Neural Style Transfer has become a trending topic both in academic literature and in industrial applications [26]. To accelerate Neural Style Transfer, many follow-up studies were conducted; a typical one is Texture Networks. With the appearance of GANs, several researchers turned to GANs to find more general methods that do not require defining the texture. In this paper, we examine three typical methods: the method of Gatys et al. [9], Texture Networks [10], and Conditional GANs [13]. By analyzing their advantages and disadvantages for SAR image translation, we propose a new GAN-based framework which combines a definition of SAR image content with the GAN method.
