AUTOMATIC TUMULI DETECTION IN LIDAR BASED DIGITAL ELEVATION MAPS

Zs. Németh 1,*, Cs. Benedek 1,2

1 3in Research Group, Pázmány Péter Catholic University, Faculty of Information Technology and Bionics, 2500 Esztergom, Hungary – nemeth.zsofia.2@hallgato.ppke.hu

2 Institute for Computer Science and Control (SZTAKI), Budapest, Hungary – benedek.csaba@sztaki.hu

KEY WORDS: Tumuli detection, Marked Point Process, Multiple Birth and Death Dynamics

ABSTRACT:

In this study, we aim to identify Iron Age tumuli with a probabilistic method, in digital terrain models extracted from airborne LiDAR measurements. The task is particularly challenging due to the strong and uneven elevation of the ground where the tumuli are located. In addition, the area is densely covered by vegetation, mainly forests, which makes it difficult to create a surface model. The difficulty is exacerbated by the fact that the height and width of the tumuli are very diverse yet incomparably smaller than the extent of the area. In contrast to the various visualization techniques (openness, local relief model) that have been used for decades to identify tumuli, our method recognises these features automatically. Our approach is based on a Marked Point Process (MPP) model which, due to its inverse methodology, allows us to detect tumuli with high precision, especially in comparison with bottom-up methods. We quantitatively evaluated the method on six test data sets and demonstrated its advantages over a traditional Hough transform-based method.

1. INTRODUCTION

Archaeologists have been exploiting computer technology for decades to find and uncover tumuli. Tumuli are pile-like protrusions of varying sizes. In the middle of these formations, the ashes of the dead were placed under a stone pile or in a cavity (Gáti, 2017). Tumuli deserve to be preserved and examined further, as they contain valuable findings, and they are by themselves mementos of human activity in ancient times. The tumuli in the investigated area originate from the 9th to 7th century BC, but there are also Bronze Age finds (1200 - 800 BC) in the soil of the piles.

In archaeological research, valuable information about ancient formations in the relief can be obtained by applying different airborne laser scanning (ALS) techniques. From the obtained ALS data, high quality digital terrain models (DTM) are derived and interpreted. There are several published interpretation techniques, including sky-view factor (Zakšek et al., 2011), local relief model (Hesse, 2010), dense contour mapping (Bewley et al., 2005), geostatistical filtering (Humme et al., 2006) and openness (Doneus, 2013). What is common in these techniques is that they all contribute to the visualization of the relief, but the recognition of the archaeological sites remains the responsibility of the experts. This can be a very time consuming task, and its success depends largely on how well the chosen method visualizes the often fairly small structures. In contrast, the method we propose detects tumuli automatically from raw ALS data, without having to apply the above mentioned interpretation techniques.

Terrain segmentation algorithms usually aim to recognise larger formations, e.g. landslides, as smaller landmarks are very difficult to distinguish due to the uneven nature of the terrain. This study attempts to overcome the latter problem by detecting tumuli in a vast area on the Jakab Mountain, near Pécs in Hungary. The mountain is around 600 meters high, and the area is home to one of the largest prehistoric ramparts in the Carpathian Basin (see Figure 1), beside which dozens of tumuli are located.

* Corresponding author

Figure 1. Prehistoric rampart on the Jakab Mountain


During the research, we had to face some serious difficulties. As the tumuli are scattered over a mountain, it is hard to separate the barely visible small protrusions belonging to the tumuli from the steeply rising ground surface. Under these circumstances, the trivial solution would be to develop a locally adaptive thresholding based method. However, we do not know exactly where on the mountain the tumuli are located, so applying inappropriate thresholds would lead to losing some of the minor terrain features. Since we have no prior knowledge of the location of the tumuli, we cannot use other traditional methods such as the Watershed transform either, as this would require seed points to start the segmentation from; selecting seed points manually would compromise automation.

This is exactly why terrain segmentation problems are often solved with neural networks (Sung et al., 2010), as no prior information is required there. However, deep learning techniques require a large amount of training data to succeed, which is not available in this case. This work presents an algorithm which can localize tumuli containing important historical artefacts, despite the difficult conditions presented by the diverse terrain.


With manual identification, one has to face several challenges. The height of a tumulus can range from 20 centimeters to as much as 4 meters, but the latter case is quite rare, while the smaller piles are not easy to locate, especially because the tombs are numerous and scattered over an area of 50 hectares. Moreover, the ground is covered by dense undergrowth, which hinders the exploration of the area and hides the smaller piles. Instead of manual exploration, we applied a laser measuring technology called LiDAR, which provided an effective solution for capturing the area. A detailed description of the method is given in the next section.

2. INPUT DATA DESCRIPTION

As mentioned before, the input data is acquired using airborne LiDAR (Light Detection and Ranging) technology, which allows us to create high-resolution topography images over a large area rapidly. To create the recording, the LiDAR device emits laser light and measures how quickly the light is reflected back from objects. From these measurements, a 3D point cloud is created from the horizontal coordinates and elevation of landmarks. The data was recorded with an ALS (Aerial Laser Scanner) device that recorded four reflections. The point density of the 3D cloud was 8 points/m². The raw point cloud data was filtered with the Multiscale Curvature Classification technique (Evans, Hudak, 2007), which removed trees from the recording. Then, with a 3D mapping software called Surfer, a grid representation was created with a 0.4 m × 0.4 m cell size, which we used as the input for our algorithm.
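As a minimal illustration of this gridding step, a sketch in Python follows. Note that the paper used the Surfer software for interpolation; the simple binning-by-mean scheme and the function name grid_point_cloud below are our simplifications, not the authors' pipeline.

```python
import numpy as np

def grid_point_cloud(points, cell=0.4):
    """Rasterize a ground-classified point cloud (N x 3 array of x, y, z
    in meters) into a DTM grid by averaging the elevations that fall into
    each cell -- a simplified stand-in for Surfer's interpolation."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    cols = ((x - x.min()) / cell).astype(int)   # cell index along x
    rows = ((y - y.min()) / cell).astype(int)   # cell index along y
    dtm_sum = np.zeros((rows.max() + 1, cols.max() + 1))
    dtm_cnt = np.zeros_like(dtm_sum)
    np.add.at(dtm_sum, (rows, cols), z)         # accumulate elevations
    np.add.at(dtm_cnt, (rows, cols), 1)         # count points per cell
    with np.errstate(invalid="ignore"):
        return dtm_sum / dtm_cnt                # NaN where a cell is empty
```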

3. MARKED POINT PROCESS MODEL

As the examined area shows steep elevation of the relief, we had to find a method that locally examines the points in the data set, trying to isolate only those protrusions that belong to tumuli. As a tumulus is an ellipse-shaped structure (so we will often refer to it as an ellipse later on), the Hough transform (Mukhopadhyay, Chaudhuri, 2015) comes to mind for solving the detection problem. However, the elevating ground also suggests another classical method, the Watershed transform (Huang, Chen, 2004), as it is often used for the segmentation of surface structures (Stepinski et al., 2006). Both Hough transform and Watershed transform based techniques can be fast, yet they may fail if the primitives (blobs, edge parts or corners in images) cannot be reliably detected. We can classify them as straightforward bottom-up methods, because they construct the objects from the aforementioned primitives. Although they are frequently used segmentation techniques, these approaches show limitations in cases of dense populations with several adjacent objects.

Our approach belongs to the other category of methods, referred to as top-down or inverse techniques, which is capable of overcoming the weaknesses of the above methods. We present a modified version of the Marked Point Process based model introduced in (Descamps et al., 2008), which has previously been used for various population counting problems dealing with a large number of objects of low shape variety. Our probabilistic approach aims to find a configuration of ellipse objects that corresponds to the real spatial locations of the tumuli. We use a spatial stochastic process for choosing the number and placement of the objects, and a connected marking method for defining the size, shape and orientation of the objects. The method handles the ground elevation on the hillside in a fast and reliable way. Although inverse techniques might be slower than direct methods, by adopting efficient optimization algorithms, such as multiple birth-death dynamics (MBD), the computational time can be kept tractable for several applications. In addition, the MBD algorithm can extract a good suboptimal object population in the high dimensional configuration space.

The tumuli population in the data set corresponds to a realization of the Marked Point Process. We assign a so-called Gibbs energy to the point process, which characterizes each object configuration. The configuration that fits best to the real-life situation has the lowest energy. Therefore, the algorithm follows an energy minimization approach.

The configuration energy consists of two parts: the prior energy, which examines overlaps between neighbouring objects, and the data energy, which represents the "correctness" of the objects as a function of the locally observed image data. The prior energy measures overlapping between neighbouring ellipses by considering the maximum overlap a particular ellipse exhibits with any other object of the population, normalized to [0,1]. The overlapping coefficient is multiplied by a weight which controls how much intersection is allowed in the configuration. Since tumuli are clearly non-overlapping objects, a configuration with overlapping objects does not match the actual locations of the tumuli. Therefore, in our application, the overlap weight is set high (see Section 4.1). By increasing the prior energy, we penalize the overlapping of objects, since we prefer minimum energy.
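A minimal sketch of this prior term follows, under our reading of the description above (axis-aligned ellipses and rasterized masks for brevity; prior_energy and w_overlap are our names, while the weight value 3 comes from Section 4.1):

```python
import numpy as np

def ellipse_mask(e, xs, ys):
    """Boolean mask of the grid cells inside ellipse e = (cx, cy, a, b);
    axis-aligned here for brevity, although the model also marks
    orientation."""
    cx, cy, a, b = e
    return ((xs - cx) / a) ** 2 + ((ys - cy) / b) ** 2 <= 1.0

def prior_energy(ellipses, shape, w_overlap=3.0):
    """Weighted sum of each object's maximum normalized overlap with
    any other object of the configuration (our reading of the text)."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    masks = [ellipse_mask(e, xs, ys) for e in ellipses]
    energy = 0.0
    for i, m_i in enumerate(masks):
        area_i = max(m_i.sum(), 1)               # avoid division by zero
        overlaps = [(m_i & m_j).sum() / area_i   # normalized to [0, 1]
                    for j, m_j in enumerate(masks) if j != i]
        energy += w_overlap * max(overlaps, default=0.0)
    return energy
```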

The data energy measures the contrast between an object and its boundary. To determine the difference, the article underlying the current research (Descamps et al., 2008) used the Bhattacharyya distance between each ellipse and its boundary. The resulting value is squeezed into [-1,1] with a conditional function, which gives a value close to 1 when the Bhattacharyya distance (and so the contrast) is small, and a negative value close to -1 when the distance is large. This way, as we are looking for the minimum energy, we keep the object candidates that give the best approximation of the real objects in the data set. For the current problem, we had to deviate from the Bhattacharyya distance measure, as it did not adequately describe tumuli in the test data. In our case, it is not enough for an object candidate to show high contrast with its environment, since that type of measure would also include structures other than tumuli. It is very important to apply a metric which is suitable only for a tumulus. Therefore, we introduced a new data term, which allows us to identify tumuli with high precision. A detailed description of the new term is presented in the next section.

As mentioned before, the MPP is combined with an optimization process that attempts to find the configuration with the highest confidence. We applied a stochastic process called multiple birth-death dynamics (Descombes et al., 2009), which has mainly been applied in object extraction tasks (Kulikova et al., 2010), (Benedek et al., 2011). The iterative process consists of three parts: creating new ellipses (birth), sorting objects by their data terms, and deleting presumably incorrect objects (death). Each step depends on the energy of the objects. In the birth step, we use values from a pre-calculated "birth" map to generate probability values for object creation. The elements of the map are normalized and weighted data energy values calculated for fixed-size ellipse objects at each grid position. During the sorting, objects are put in descending order based on their data energy. In the death step, objects with unacceptably high (prior and data) energy are likely to be eliminated. Thus, if an ellipse overlaps significantly with another and does not show elevation relative to its surroundings, it is likely to be removed from the configuration. The steps iterate until convergence is reached, i.e. until the only objects removed are those created in the current iteration.
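The sketch below summarizes this birth-sort-death loop. It is a simplified skeleton, not the authors' implementation: propose, energy and death_prob are placeholder callables standing in for the birth kernel, the energy terms of Sections 3-4, and the acceptance rule of (Descombes et al., 2009).

```python
import random

def multiple_birth_death(birth_map, propose, energy, death_prob,
                         max_iter=1000):
    """Skeleton of the multiple birth-death iteration. `birth_map` maps
    grid positions to birth probabilities, `propose(pos)` draws a random
    ellipse there, `energy(obj, config)` returns the object's data + prior
    energy, and `death_prob(e, it)` is the acceptance rule."""
    config = []
    for it in range(max_iter):
        # Birth: create new ellipses, more often where the map is high.
        born = [propose(pos) for pos, p in birth_map.items()
                if random.random() < p]
        config += born
        # Sorting: worst candidates (highest data energy) are visited first.
        config.sort(key=lambda o: energy(o, config), reverse=True)
        # Death: remove objects with unacceptably high total energy.
        survivors = [o for o in config
                     if random.random() >= death_prob(energy(o, config), it)]
        removed = [o for o in config if o not in survivors]
        config = survivors
        # Convergence: only this iteration's newborns were removed
        # (simplified check; skipped on the first iteration).
        if it > 0 and all(o in born for o in removed):
            break
    return config
```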

Figure 2. From left to right: (a) original location of the tumulus used for demonstration (the middle one); (b) tumulus with the approximated ideal ellipsoid surface; (c) cross-section of the tumulus with differences from the ellipsoid surface (yellow – positive, green – negative)

4. ADAPTED DATA TERM CONSTRUCTION

As explained in the previous section, we had to adapt the data term calculation to the current research task. With the data energy, we look for features of the proposed object candidates that prove the presence of a tumulus. The two most characteristic indicators of a tumulus are the hill-like elevation and the round shape. Therefore, we quantify these two indicators to define the data energy. As a combination of the two, we compare the object candidates to a half-ellipsoid surface, since the shape of a tumulus can be approximated by one – even after centuries of erosion. For the shape estimation, we use the elevation value at the centre and the axes of a previously created object candidate. The candidates are defined by an ellipse because the tumuli are not exactly circular.

For the data energy calculation, we first calculate the absolute distances (d_A) of the elevation values from the ideal ellipsoid shape. We defined this distance as the difference between the DEM value at the center of the ellipsoid base and the surface point of the approximated ellipsoid shape at exactly the same horizontal position. The most important parameters for the ideal surface approximation are the center coordinates and the axes of the ellipsoid. The horizontal coordinates and axes are trivially given by the created object candidate. The vertical center coordinate is estimated as the mean value of the boundary area around the ellipse.

Careful selection of the boundary size is very important, because if the area is overly extended, it may cover parts of neighbouring tumuli, thus influencing the estimation. On the other hand, a small boundary area would also have a bad effect on the approximation: for a small object candidate, the boundary could easily fall inside the area of the examined tumulus, in which case the estimated base value would be higher than the actual base of the tumulus. In practice, a boundary size set to the difference between the radius of the largest possible tumulus and that of the smallest one works well. After the vertical center coordinate is calculated, the vertical axis (c) can be derived from it by subtracting it from the DEM value at the center. The explained calculation is summarized in Equation 1.

$$ d_A = \sum_{t_{xy} \in T,\ (e_x, e_y) \in E} \left| t_{xy} - \left( \sqrt{\left( 1 - \frac{(e_x - e_u)^2}{a^2} - \frac{(e_y - e_v)^2}{b^2} \right) c^2} + \mu(B(x)) \right) \right| \qquad (1) $$

where

t_xy = tumulus candidate elevation value
e_x, e_y = ellipsoid horizontal coordinates
e_u, e_v = ellipsoid horizontal centre coordinates
a, b, c = ellipsoid axes
μ(B(x)) = mean value of boundary points
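Assuming the reconstruction of Equation 1 above, the distance can be computed over the DTM grid as follows (a sketch; absolute_distance is a hypothetical name, and the boundary mean μ(B(x)) is supplied precomputed):

```python
import numpy as np

def absolute_distance(dtm, candidate, boundary_mean):
    """Equation (1): summed absolute difference between the DTM values
    inside an ellipse candidate and the ideal half-ellipsoid surface.
    `candidate` = (e_u, e_v, a, b, c) in grid units; `boundary_mean`
    is mu(B(x)), the mean elevation of the ring around the candidate."""
    e_u, e_v, a, b, c = candidate
    ys, xs = np.mgrid[0:dtm.shape[0], 0:dtm.shape[1]]
    r2 = ((xs - e_u) / a) ** 2 + ((ys - e_v) / b) ** 2
    inside = r2 <= 1.0                      # cells under the ellipse base
    # Ideal surface: half ellipsoid with vertical axis c, lifted onto
    # the estimated base level mu(B(x)).
    ideal = np.sqrt((1.0 - r2[inside]) * c ** 2) + boundary_mean
    return np.abs(dtm[inside] - ideal).sum()
```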

The calculated distance value is then squeezed between -1 and 1 with the non-linear function shown in Equation 2, which is derived from the similar function in the (Descamps et al., 2008) paper, and with that the final energy value Q_d(d_A) is computed. The main modification is the switch of the conditions. This change was essential because, in contrast to the Bhattacharyya distance used in the original function, the absolute distance is directly proportional to the data energy. The other modification, the use of a minimum function, is required in order to keep the resulting value within the allowed range. The function uses a preset d_0 parameter, which determines the boundary that separates the small distances belonging to good candidates from the larger ones.

$$ Q_d(d_A) = \begin{cases} \min\left(1,\ \left(1 - \dfrac{d_0}{d_A}\right) c\right) & \text{if } d_A > d_0 \\[4pt] e^{-(d_0 - d_A)\,\omega\,c} - 1 & \text{if } d_A \le d_0 \end{cases} \qquad (2) $$

where

d_0 = separation parameter
d_A = absolute distance, defined by Eq. (1)
c = height of ellipsoid
ω = weight parameter
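A direct transcription of Equation 2 under our reconstruction of the formula, with the parameter values of Section 4.1 as defaults:

```python
import math

def data_term(d_a, c, d0=0.25, w=10.0):
    """Equation (2): squeeze the absolute distance d_a into [-1, 1].
    Defaults follow Section 4.1 (d0 = 0.25, omega = 10); c is the
    candidate's vertical axis."""
    if d_a > d0:
        # Large deviation from the ideal ellipsoid: bad candidate, the
        # energy is pushed towards +1 and capped there by the minimum.
        return min(1.0, (1.0 - d0 / d_a) * c)
    # Small deviation: good candidate, the energy approaches -1.
    return math.exp(-(d0 - d_a) * w * c) - 1.0
```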

Figure 2 gives a visual demonstration of the data energy calculation. Sub-figure (a) shows how strongly the mountainous ground elevates, and also that the tumuli are quite densely located in certain regions. The main idea behind the data term calculation is modelled in sub-figure (b) with the middle tumulus of the previous sub-figure. The figure also justifies the validity of the calculation, as the estimated surface is extremely close to the real surface of the tumulus. The precision of the estimation is even better illustrated by sub-figure (c), where the distance between the surfaces is colored. The difference is largest at the side of the structure; this is essentially the case with all the tumuli in the region.


          Tumuli   Area (ha)   Precision   Recall   F-rate   Precision (px)   Recall (px)   F-rate (px)
Test1       24       1.3          79          92       85          55              69            61
Test2       24       1.8          92         100       96          59              71            65
Test3       15       1.7         100          87       93          60              72            65
Test4       17       1.7         100          94       97          66              70            68
Test5       16       1.5          94         100       97          70              80            75
Test6       35       2.0          83          97       89          63              57            60
Test6 HT    35       2.0          74          49       59          64              32            43
All        131      10.0         91.3         95       92.8        62.1            69.8          65.7

Table 1. Performance of the proposed Marked Point Process model on the six test data sets, given as precision, recall and F-rate (in %) at object and pixel (px) level. On the sixth test set, the performance of the Hough transform (HT) was also measured. Areas of the test sets are given in hectares.

Figure 3. Hough transform, Marked Point Process Model and reference data (from left to right, respectively)

4.1 Parameter settings

In setting the MBD optimization parameters, such as the default birth rate, inverse temperature and discretization step, we followed the guidelines provided in (Descamps et al., 2008). The rest of the model parameters were used for fine-tuning the data and prior energy terms. The non-linear function that generates the data energy values used a weight set to ω = 10 and a separating boundary value d_0 = 0.25. In addition, we created a lower bound for the height of the ellipsoid object candidates, which was set to 0.25 based on the minimum height of a tumulus. As mentioned in Section 3, overlapping of objects was penalized by a weight on the prior energy term, which was set to 3.

5. RESULTS AND DISCUSSION

In the following, we discuss the performance and precision of the modified model, in comparison with a similar automated method and with manual visualization technique-based approaches. As mentioned before, the tumuli are scattered in the examined mountain region. Therefore, we took samples from different areas and measured the model performance on each data set. The number of tumuli in the test sets ranges from 15 to 35, and the size of the selected regions ranges from 1.3 to 2.0 hectares. These measurements provide a comprehensive evaluation of the applicability of the method in areas with diverse relief.

We performed quantitative evaluation on both the object and the pixel level using the F-rate measure, which gives the harmonic mean of precision and recall. To calculate the latter two on the object level, we classified objects as True Positive (TP), False Positive (FP) and False Negative (FN) by comparing the results to the reference data. On the pixel level, we calculated the F-rate by comparing the masks of the detected objects with those of the reference objects. To enable fully automated evaluation, we developed a program that allows us to manually classify objects on the visualized image of the data set; after the results of the MPP are given as input, it calculates the above mentioned performance indicators. The computed precision, recall and F-rates are shown in Table 1 in percentages for all six test sets. The table also shows the number of tumuli in each set.
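The F-rate computation itself is straightforward. The example below reproduces the Test2 row of Table 1 from plausible counts; fp = 2 is our inference from the rounded precision, not a reported number.

```python
def f_rate(tp, fp, fn):
    """Object-level precision, recall and their harmonic mean (F-rate);
    the pixel-level variant uses mask-overlap pixel counts instead of
    object counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Counts consistent with the rounded Test2 row of Table 1: 24 true
# positives, about 2 false positives, no missed tumulus.
print(round(100 * f_rate(tp=24, fp=2, fn=0)))   # -> 96
```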

The efficiency of our approach is shown by the fact that the object-level F-rate measures of the results obtained for the examined areas exceeded 84% in all cases. In fact, there was only one case where the F-rate value was not around 90% or higher. The values alone prove satisfactory, as the rocky outcrops of the mountainous ground could easily mislead the detection.

Compared to other automated techniques, however, the results seem even more outstanding. Figure 3 shows a side-by-side comparison of our method with a Hough transform based reference technique on the largest tested area, where the highest number of tumuli were located. The Hough transform (Mukhopadhyay, Chaudhuri, 2015) is a well-known and frequently used technique for circle detection, and since the shape of a tumulus is close to circular, it is a good reference method for comparing performance and accuracy. Both methods were evaluated against the in-situ observations also indicated in the figure. Even without the quantitative measurements, we can observe that the tumuli segmented by the MPP are much closer to the human classification than the results of the Hough transform.

As expected, the Hough transform yielded detection results that could be assessed with the same evaluation measures described before. The produced F-rates are included in the table of performance values of our own method, so the significant difference in the precision, recall and F-rate values can be clearly seen. A side-by-side comparison with the average performance values of the Marked Point Process model shows that in some cases the value produced by our method is almost double that of the reference method. Although the Hough transform managed to identify some of the larger tumuli, it often did so by detecting


only a small portion of the tumulus, with the circle objects used for identification shifted relative to the actual location. Nevertheless, the method works well in many other cases; e.g. it was successfully applied to detect the optic nerve head (ONH), which is a key step in retinal structure extraction (Zhu et al., 2010). Tumuli, however, which are often irregular in shape or difficult to recognize due to their small size, reduce its effectiveness in our case.

In contrast, the ellipses created by our approach are precisely located, usually at the very center of the tumulus objects. Due to the probabilistic nature of the model, an object candidate is often not created at the position where the prediction map has a local maximum. This is expected, since the energy based values in the map were calculated with an average ellipse radius; with the proper size, the energy value may have a local minimum at a different position. However, in order to make sure that all objects are in the right place, we correct their positions along the gradient after each death step. With a recursive search, the current neighbourhood is examined, and the object is shifted to a neighbouring position if that results in a lower configuration energy. Therefore, by the end of each iteration, the created objects are located closest to the actual locations of the tumuli, thus minimizing the configuration energy.
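A minimal sketch of this correction step, as a recursive neighbourhood descent; both callables are placeholders for the model components, and correct_position is a hypothetical name.

```python
def correct_position(obj, config_energy, neighbours):
    """Recursive descent on the configuration energy: while shifting
    the object by one grid cell lowers the energy, move it there.
    `config_energy(obj)` evaluates the configuration with the object
    at its current position; `neighbours(obj)` yields one-cell-shifted
    copies of the object."""
    best = min(neighbours(obj), key=config_energy)
    if config_energy(best) < config_energy(obj):
        return correct_position(best, config_energy, neighbours)
    return obj
```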

Figure 4. Sharp protrusions caused by measurement noise in the LiDAR data

Proper localization is only partly responsible for the good results. The high accuracy of the model is mainly achieved by the correct data energy calculation. We experimented with different methods for the calculation. First, we applied the Bhattacharyya distance based computation, but it was not suitable for tumuli detection. Then we used a simple method, where we looked for a density of higher points in an approximately flat region. It was partly successful, as the technique managed to identify many tumuli. However, it also produced numerous false positive results, as noise in the LiDAR data misled the simple calculation. That was the reason we introduced the technique based on the deviation from the ellipsoid surface, to filter noisy regions while maintaining the high tumulus detection rate. Figure 4 shows the second test set from a side view, where it is visible that there are unnaturally sharp protrusions on the surface of the ground in both the positive and negative directions. Of course, the larger protrusions could be filtered out with a noise filter, but the smaller ones would still remain in the data set, which could easily cause false detection results with the simpler energy calculation. Moreover, noise filtering would be an additional step leading to a decrease in speed.

Computational time is usually a key component in research, because the higher the speed of data processing, the more data can be analysed, leading to better conclusions.


Figure 5. Visibility of 3 tumuli in the shaded relief model and in the original DTM

Therefore, speed is an important argument in favor of an automated detection procedure. Nevertheless, it is common practice for a DTM created from LiDAR measurements to be subjected to a visualization procedure that makes certain relief elements more recognizable. Archaeologists then examine the visualized regions and manually identify objects of interest. This technique is rather slow and requires the presence of an expert. Moreover, it is not always accurate, because visualization techniques may not be able to show smaller structures like a tumulus. This case is demonstrated in Figure 5, where three tumuli are barely visible on the shaded relief model, but they are identifiable in the original DTM.

6. CONCLUSIONS

This study presented a method which is able to distinguish small features in a diverse area, without being affected by the measurement noise. The Marked Point Process model managed to identify an exceptionally high percentage of the tumuli in each tested region. We evaluated the approach on data collected from the Jakab Mountain to demonstrate its efficiency. At most two tumuli per region were not recognized. The performance of our approach stands out against the other examined methods, which could detect only half of the tumuli or none at all. There were some false positive results, but for archaeological research it is more important to find most of the tumuli than to avoid examining a few additional areas where there is potentially no tumulus. The high hit rate is crucial for this study, since each newly identified tumulus can provide valuable insights into the past.

During the testing of the algorithm in the different mountain regions, one limitation was revealed which has to be taken into consideration. Due to the probabilistic way of creating the object candidates, there can be differences between separate runs of the method. This is both an advantage and a disadvantage: the method can localize new objects that were not found before, but it can also miss an object that was detected in a previous run. Since running the process takes only a few seconds for each test area, it is possible to run it multiple times. Therefore, in the future, the mentioned disadvantage can be avoided by statistically selecting the objects closest to reality from the objects created during several runs.

7. ACKNOWLEDGEMENTS

This work was funded by the European Union and the Hungarian Government from the project 'Thematic Fundamental Research Collaborations Grounding Innovation in Informatics and Infocommunications' under grant number EFOP-3.6.2-16-2017-00013, by the National Research, Development and Innovation Fund (grant NKFIA K-120233), and by the Michelberger Master Award of the Hungarian Academy of Engineering.

The authors thank Csilla Gáti and Gábor Bertók for the provision of the test data.

REFERENCES

Benedek, C., Descombes, X., Zerubia, J., 2011. Building development monitoring in multitemporal remotely sensed image pairs with stochastic birth-death dynamics. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(1), 33–50.

Bewley, R. H., Crutchley, S. P., Shell, C. A., 2005. New light on an ancient landscape: lidar survey in the Stonehenge World Heritage Site. Antiquity, 79(305), 636–647.

Descamps, S., Descombes, X., Béchet, A., Zerubia, J., 2008. Automatic flamingo detection using a multiple birth and death process. 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, 1113–1116.

Descombes, X., Minlos, R., Zhizhina, E., 2009. Object extraction using a stochastic birth-and-death dynamics in continuum. Journal of Mathematical Imaging and Vision, 33(3), 347–359.

Doneus, M., 2013. Openness as visualization technique for interpretative mapping of airborne lidar derived digital terrain models. Remote Sensing, 5(12), 6427–6442.

Evans, J. S., Hudak, A. T., 2007. A multiscale curvature algorithm for classifying discrete return LiDAR in forested environments. IEEE Transactions on Geoscience and Remote Sensing, 45(4), 1029–1038.

Gáti, C., 2017. Légi lézerszkennelés eredményei a Pécs melletti Jakab-hegyen (Results of aerial laser scanning on the Jakab Mountain, near Pécs). Magyar Régészet online magazine, autumn 2017. In Hungarian.

Hesse, R., 2010. LiDAR-derived local relief models – a new tool for archaeological prospection. Archaeological Prospection, 17(2), 67–72.

Huang, Y.-L., Chen, D.-R., 2004. Watershed segmentation for breast tumor in 2-D sonography. Ultrasound in Medicine & Biology, 30(5), 625–632.

Humme, A., Lindenbergh, R., Sueur, C., 2006. Revealing celtic fields from lidar data using kriging based filtering. Proceedings of the ISPRS Commission V Symposium.

Kulikova, M. S., Jermyn, I. H., Descombes, X., Zhizhina, E., Zerubia, J., 2010. Extraction of arbitrarily-shaped objects using stochastic multiple birth-and-death dynamics and active contours. Computational Imaging VIII, 7533, International Society for Optics and Photonics, 753306.

Mukhopadhyay, P., Chaudhuri, B. B., 2015. A survey of Hough Transform. Pattern Recognition, 48(3), 993–1010.

Stepinski, T. F., Ghosh, S., Vilalta, R., 2006. Automatic recognition of landforms on Mars using terrain segmentation and classification. International Conference on Discovery Science, Springer, 255–266.

Sung, G.-Y., Kwak, D.-M., Lyou, J., 2010. Neural network based terrain classification using wavelet features. Journal of Intelligent & Robotic Systems, 59(3-4), 269–281.

Zakšek, K., Oštir, K., Kokalj, Ž., 2011. Sky-view factor as a relief visualization technique. Remote Sensing, 3(2), 398–415.

Zhu, X., Rangayyan, R. M., Ells, A. L., 2010. Detection of the optic nerve head in fundus images of the retina using the Hough transform for circles. Journal of Digital Imaging, 23(3), 332–341.
