
The application of VTSs during construction processes is relatively new; however, elements of these systems have long been present at construction sites. RTK GPS is applied in civil engineering as a high-precision 3D measurement system, offering a good alternative to laser level systems.

Another early example of GPS use in construction automation is the piling rig positioning in [184].

The recently applied Real Time Locating System (RTLS) technologies at construction sites are the following:

• GPS

• RFID: this technology is also used to locate tools at the construction job site in order to get information about their availability [72] and to audit the wearing of personal protective equipment (PPE) [101]

• Ultra Wideband technology (UWB): this technology enables the transmission of large amounts of data over a wide frequency spectrum; in resource location tracking at construction sites it has already been replacing the earlier used angle of arrival (AoA), time of arrival (ToA), traditional time-difference-of-arrival (TDoA) or non-sensor networked technologies like Robotic Total Station (RTS) [36].

• Bluetooth: this license-free, short-range wireless beacon technology is integrated in the VTS of [131]. This system combines GPS, DR and fixed-position Bluetooth beacon positioning in order to track mixer trucks in urban environments and on construction sites.

• Laser and vision based detection: laser-based technologies have long been inevitable elements of construction processes, for example in elevation control, but cameras and image processing algorithms are also present in recent research on equipment detection. In [129] an algorithm was developed for recognizing machinery and workers.

A prerequisite of construction monitoring is precise process description and modelling; this is supported by different methods (simple flow charts, Gantt diagrams) and process description languages (e.g. IDEF0 - Integrated Computer Aided Manufacturing Definition for Function Modeling, BPMN - Business Process Model and Notation, EPC - Event Driven Process Chain) [98].

Researchers aim to find the optimal solutions for all the different-purpose applications of vehicle tracking at construction sites.

Simulation: The goal of simulation is always to support decision making, and vehicle tracking has great importance in that: if we feed the simulation with real or near real-time data, we get much more accurate results. That is why Dynamic Data-Driven Application Simulation (DDDAS), in which measurement, simulation and application are present at the same time, is one of the most rapidly developing fields nowadays [191]. In [4] this paradigm is supplemented by visualization as well. [210] proposes a framework for near real-time simulation (NRTS) in which the parts of the machinery are equipped with UWB RTLS, e.g. for positioning the bucket of an excavator in a local coordinate system; after averaging its movement and using the GPS location data of the machinery, an algorithm determines the activity of the machine. Thus the cycle time of the action can be recorded, and with this data the simulation can be tuned in near real time.
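The cycle-time tuning step can be illustrated with a short sketch. The activity labels and the smoothing update below are my own illustrative assumptions, not the algorithm of the cited framework:

```python
# Illustrative sketch of near real-time cycle-time tuning: activity events
# recognized from the RTLS stream are turned into cycle times that update
# a simulation input parameter (hypothetical labels and smoothing factor).

from dataclasses import dataclass

@dataclass
class ActivityEvent:
    t: float        # timestamp in seconds
    activity: str   # e.g. "dig", "dump" (assumed labels)

def update_cycle_time(events, marker="dig", estimate=None, alpha=0.3):
    """Measure time between successive `marker` activities and blend each
    new cycle into the running estimate (exponential smoothing)."""
    starts = [e.t for e in events if e.activity == marker]
    for prev, cur in zip(starts, starts[1:]):
        cycle = cur - prev
        estimate = cycle if estimate is None else alpha * cycle + (1 - alpha) * estimate
    return estimate

log = [ActivityEvent(0, "dig"), ActivityEvent(18, "dump"),
       ActivityEvent(25, "dig"), ActivityEvent(44, "dump"),
       ActivityEvent(52, "dig")]
print(round(update_cycle_time(log), 1))  # 25.6
```

The smoothed estimate (here a 25 s cycle blended with a 27 s one) is the kind of value that can be fed back into the simulation in near real time.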

Control: In [157], instrumentation, software (for the calculation of the excavated volume and productivity) and a control approach based on an RTK GPS receiver are introduced for excavation.

FIGURE 3.2: Installed GPS-antennas (GPS A1/2) and encoders (G 1/2) [157]

Safety considerations: In [143] one can find a detailed comparison of earlier collision detection systems. The publication states the following: ultrasonic technologies have fast response times and low cost, but their working radius of a few meters restricts their use. Vision technologies are still in the development phase and are very expensive. Infrared technology has the advantages of small size and low cost, but because of its high response time it is not applied nowadays. Radar technologies would prove to be the most effective, but they also have high cost. Therefore, the study proposes a collision detection system based on GPS technology and wireless communication. The main goal of the application presented in [149] is the analysis of GPS-gathered data (mainly of cyclic activities) for improving productivity, but it also demonstrates the possibility of automatic (hauling, loading, unloading) zone detection through an algorithm based on criteria like vehicle speed; this way the safety of machines working close to each other can be improved.
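A speed-criterion zone labeling of this kind can be sketched as follows; the thresholds and labels are illustrative assumptions, not the published parameters of [149]:

```python
# Sketch of speed-based activity/zone labeling for a hauler GPS track.
# The thresholds are assumed for illustration only.

def label_track(samples, stop_speed=0.5, haul_speed=3.0):
    """samples: list of (timestamp_s, speed_m_per_s) pairs.
    Slow points are candidates for loading/unloading zones,
    fast points for hauling roads."""
    labels = []
    for _, v in samples:
        if v < stop_speed:
            labels.append("load/unload candidate")
        elif v > haul_speed:
            labels.append("hauling")
        else:
            labels.append("maneuvering")
    return labels

track = [(0, 0.2), (10, 1.5), (20, 6.0), (30, 5.5), (40, 0.1)]
print(label_track(track))
# ['load/unload candidate', 'maneuvering', 'hauling', 'hauling',
#  'load/unload candidate']
```

Clusters of "load/unload candidate" points then delimit the zones where machines work close to each other.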

Surveillance: Inter alia, registration of duty time or measuring the loaded weight are available tracking services, but the most commonly used one is fuel management. Fuel level detection methods:

• Float level meter: these devices are built-in gauges. Their drawback is that the upper 10 % of the tank cannot be measured because the float encounters the tank wall. Their signal can be read on the CAN bus.

• Flow meter: this is the most precise and most expensive solution; the in- and outflow can be measured, but the time of a fuel loss cannot be determined, and it also requires maintenance because of its filter.
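Unlike a flow meter, a float-level signal sampled from the CAN bus can timestamp a suspicious fuel loss. A minimal sketch (the drop threshold is an assumption for illustration):

```python
# Sketch: flag abrupt fuel-level drops in a sampled float-gauge signal.
# A drop larger than `max_drop` between consecutive samples is reported
# together with its timestamp (threshold assumed for illustration).

def suspicious_drops(readings, max_drop=5.0):
    """readings: list of (timestamp, level_percent).
    Returns the timestamps at which the level fell by more than max_drop."""
    alerts = []
    for (t0, l0), (t1, l1) in zip(readings, readings[1:]):
        if l0 - l1 > max_drop:
            alerts.append(t1)
    return alerts

log = [(0, 80.0), (60, 79.5), (120, 62.0), (180, 61.8)]
print(suspicious_drops(log))  # [120]
```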

TABLE 3.2: Base table for urban transport

               Accuracy   Installing   Coordinates number
  Weight       T1 = 1     T2 = 3       T3 = 5
  GPS          G 30       G 30         VG 40
  Sign-post    VG 40      S 10         S 10
  GBR          US 0       G 30         VG 40
  DR           S 10       M 20         VG 40

One verbal qualification and a quantitative metric belong to every alternative-factor pair. The meanings of the abbreviations and values:

• US: unsatisfactory (0),

• S: satisfactory (10),

• M: medium (20),

• G: good (30),

• VG: very good (40).

These numeric values (n_{0.1} ∈ {0, 10, 20, 30, 40}) correspond to a 10 % weight.

The true quantitative metric can be calculated by:

n_i = n_{0.1} \cdot \frac{T_i}{\sum_{j=1}^{3} T_j} .  (3.1)
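For example, with the urban weights T_1 = 1, T_2 = 3, T_3 = 5 (so the weight sum is 9) and the GPS row of Table 3.2:

```latex
n_1 = 30 \cdot \tfrac{1}{9} \approx 3.3, \qquad
n_2 = 30 \cdot \tfrac{3}{9} = 10, \qquad
n_3 = 40 \cdot \tfrac{5}{9} \approx 22.2 .
```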

Based on Table 3.2 one can construct the KIPA-matrix (Table 3.3), which is a pairwise comparison of the alternatives in terms of preference (c_ij) and disqualification (d_ij) indices [103].

The conditions of the evaluation:

• c_ij ≥ 50 % (one method is preferred to another if it is better in terms of at least 50 % – considering their weights – of the evaluation criteria),

• d_ij < 100 % (the maximum disadvantage is not allowed).
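The indices can be reconstructed from the base table; the following sketch reproduces the urban values of Table 3.3, though the exact definitions are those of [103]:

```python
# Plausible reconstruction of the KIPA preference (c_ij) and
# disqualification (d_ij) indices, consistent with Tables 3.2-3.3;
# the authoritative definitions are given in [103].

weights = {"accuracy": 1, "installing": 3, "coordinates number": 5}
grades = {  # n_0.1 values of Table 3.2 (urban transport)
    "GPS":       {"accuracy": 30, "installing": 30, "coordinates number": 40},
    "Sign-post": {"accuracy": 40, "installing": 10, "coordinates number": 10},
    "GBR":       {"accuracy": 0,  "installing": 30, "coordinates number": 40},
    "DR":        {"accuracy": 10, "installing": 20, "coordinates number": 40},
}

W = sum(weights.values())                                 # = 9
n = {a: {f: g * weights[f] / W for f, g in fs.items()}    # Eq. (3.1)
     for a, fs in grades.items()}
n_max = 40 * max(weights.values()) / W                    # largest weighted score

def c(i, j):
    """Share of criterion weight on which alternative i is at least as good as j."""
    return 100 * sum(w for f, w in weights.items() if grades[i][f] >= grades[j][f]) / W

def d(i, j):
    """Largest weighted disadvantage of i against j, relative to n_max."""
    return 100 * max(max(n[j][f] - n[i][f], 0.0) for f in weights) / n_max

print(round(c("GPS", "Sign-post")), round(d("GPS", "Sign-post")))  # 89 5
print(round(c("DR", "GPS")), round(d("DR", "GPS")))                # 56 15
```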

3.4 Recommendations and Conclusion

TABLE 3.3: KIPA-matrix for urban transport

                          GPS   Sign-post   GBR   DR
  GPS        c_ij [%]      -       89       100   100
             d_ij [%]      -        5         0     0
  Sign-post  c_ij [%]     11        -        11    11
             d_ij [%]     75        -        75    75
  GBR        c_ij [%]     89       89         -    89
             d_ij [%]     15       20         -     0
  DR         c_ij [%]     56       89        67     -
             d_ij [%]     15       15        15     -

Reading the results from the matrix, I obtained the following preference order for urban transport:

1. GPS
2. GBR
3. DR
4. Sign-post

There are two reasons why it is worth investigating vehicle tracking separately in urban transport, long-distance transport and the construction industry. The first is the previously mentioned different weighting of the evaluation factors, and the second is the dynamic behavior of these factors. Their evaluations strongly depend on the application. Some examples are given below for illustrative purposes:

• GPS is less accurate in urban environments.

• A relatively small number of "sign-posts" is enough to cover a construction site.

• If a construction site is completely covered, continuous data registration is ensured.

The remaining two base tables can be seen in Table 3.4 and Table 3.5.

TABLE 3.4: Base table for long-distance transport

               Accuracy   Installing   Coordinates number
  Weight       T1 = 1     T2 = 3       T3 = 5
  GPS          VG 40      G 30         VG 40
  Sign-post    VG 40      US 0         S 10
  GBR          US 0       G 30         VG 40
  DR           US 0       M 20         VG 40

TABLE 3.5: Base table for construction industry

               Accuracy   Installing   Coordinates number
  Weight       T1 = 5     T2 = 3       T3 = 1
  GPS          G 30       G 30         VG 40
  Sign-post    VG 40      M 20         M 20
  GBR          US 0       G 30         VG 40
  DR           M 20       M 20         VG 40

My analysis found that GPS is the most appropriate locating hardware for long-distance transportation, while sign-post technologies are the most appropriate for the construction industry (Table 3.6).

TABLE 3.6: Preference order of locating hardware technologies

       Urban            Long-distance    Construction
       transportation   transportation
  1.   GPS              GPS              Sign-post
  2.   GBR              GBR              GPS
  3.   DR               DR               DR
  4.   Sign-post        Sign-post        GBR

Besides, the reader should note that I have only considered basic types in the selection process, in a rather generalized manner.

Data transfer type:

For transportation in urban environments, quasi-online would obviously be the optimal choice because it combines the advantages of the online and off-line types, but in the other two sectors it is hardly feasible. In most applications, long-distance transport requires online communication because of the capability of reacting to relatively sudden changes (e.g. freight exchange), while in the construction industry off-line data gathering is traditionally satisfactory (for instance, considering the monitoring of low-speed vehicles). However, many of the previously presented research works require real-time data for simulation, control, etc. The right choice of data transfer type requires further evaluation.

This chapter provided a comprehensive insight into the field of vehicle tracking in general, with details on the transportation and construction industries. It is also a proposal for technology selection for different tasks. Its aim is to support the first stage of decision making and to provide further literature for the selection of the right apparatus. We can conclude that the methods, instruments and algorithms show a wide variety; therefore, to find the proper choice we need to treat every case as unique, although general conclusions can be drawn. Besides the above, a criterion system is proposed for the decision-making problem of technology selection.


Chapter 4

Partial Shape Recognition

The perception of the environment is essential for safety and automation reasons in the case of a moving vehicle or machine. Object candidates can be different, but shape recognition can form the basis of higher-order semantic interpretation, whether we talk about autonomous driving, construction sites, industrial warehouses or any other automation task in different environments. The author realized that no real 3D information is available in practice, and even 2.5D information is hard to acquire because of occlusions. We have to deal with partial information and shapes, which are rather ignored in the 3D shape recognition literature. This chapter introduces a partial shape recognition method, making it applicable to harsh environments. The method was first introduced in [194], then new tests were published in [164]; meanwhile, it was presented in Hungarian forums [166], [168] and [167]. Finally, the earlier articles were extended into a journal article including even more tests and some modification of the method in order to decrease its running time [165]. Here, my own contributions are the method development, including the local scale and pattern definitions.

Intelligent vehicles are usually provided with data from OSH devices or narrow Field of View (FoV) LIDAR detectors (e.g., Velodyne VLP-16¹). 2D and 3D LIDARs with narrow FoV acquire little vertical information about the near environment in a single frame. Incremental registration offers a chance for the exploitation of this data type.

Autonomous vehicles and mobile machines are under intensive development, including both sensors and algorithms. In industrial transportation systems, so-called AGVs need to differentiate obstacle categories so that the gained information can be utilized for many aims: recognized objects can be used as landmarks for navigation purposes or for a better response to a certain safety situation.

1http://velodynelidar.com/vlp-16.html

FIGURE 4.1: Point clouds of objects with keypoints clustered on local descriptors

4.1 Introduction

Autonomous vehicles must be equipped with protective devices because of OSH aspects, implementing different sensor modalities and partly independent, or on the contrary, fused evaluation of sensor information to achieve higher reliability. There are various collision avoidance systems; see the survey in [137]. In the case of AGVs, the protective devices are usually safety laser scanners, and one or more regular 2D laser scanners are installed. Conventional use of these sensors can result in go-forward, stop or avoid commands for the autonomous vehicle when an obstacle is detected. These decisions are based on the distance and on the static or dynamic nature of the obstacle (differentiation of static and dynamic obstacles is not an easy task in itself [228]).

A system capable of obstacle recognition could suggest the avoidance direction (using the yet invisible but known extension of the obstacle) and achieve more precise static/dynamic differentiation; e.g. a standing human will not be categorized as static. Calculating given parameters of the partially visible objects (size, maximum acceleration, maximum velocity, etc.) can be realized as well. Even prediction of the behavior of the obstacles (vehicles, humans or animals will react differently to the approach of a mobile machine) is a possibility.

The vehicle needs to be informed about the objects in its surroundings.


FIGURE 4.2: 3D Shape Categorization Benchmark [123]

The visible surfaces of these objects are most often represented in a 3D coordinate system as a set of measured points, which is referred to as a 3D point cloud.

However, object recognition from 3D point clouds is a competitive research area still without applicable results for partial views (the present problem).


3D pattern recognition is a challenging issue both in full 3D and in 2.5D cases [148], [147]:

• State-of-the-art 3D shape recognition methods are capable of, e.g., retrieving 3D models from 2D sketch queries by applying multi-view convolutional neural networks (CNN) on rendered models [193].

• The best recognition result on the TOSCA+Sumner dataset of the 3D Shape Categorization Benchmark is about 96 %, achieved by 3D Spatial Pyramids (Fig. 4.2) [123].

• In the Non-rigid 3D Shape Retrieval track of SHREC'15 (Shape Retrieval Contest), NN (nearest neighbor) values equal or close to 100 % are obtained [116].

These works allow researchers working with full 3D to aim at large-scale object retrieval [182]. Since obtaining a full 3D model is not possible in real scenarios, recognition from 2.5D can be even more difficult.

Hereinafter I investigate the recognition problem from the sequentially acquired information of a tilted LIDAR. This development is devoted to exploiting the on-board sensor of a sensor-equipped AGV with improved computing capacity. In the case of AGVs, sensors tilted upwards can detect, for example, hanging crane hooks. Sensors installed tilted downwards can warn about objects jutting out of shelving (Fig. 4.4). In urban environments, a typical reason for tilted scanner installation is Mobile Laser Scanning (MLS); an illustration of

FIGURE 4.3: Examples of 3D (left) and 2D (right) LIDARs and the point clouds acquired by them: (a) frame acquired by a 3D LIDAR; (b) frame acquired by a 2D LIDAR; (c) Velodyne HDL-64E; (d) SICK LMS500.

my proposed method on MLS data can be seen in Fig. 4.1. We explore an object in a bottom-up (or top-down) sequence, collecting the rare views in front of the vehicle. We would only see the full-height 2.5D view of this object if the vehicle were (dangerously) close to the obstacle, or we would not see it at all. It is desirable not to let our machine approach the object too closely; the decision must be made at a much earlier stage, from partial information.

The data of on-board 2D LIDARs have been exploited for better awareness. Using additional sensor information (e.g. camera) is recommended, but each modality should have its own reliability for a superior fusion of the different on-board solutions.

My proposed method can also solve the recognition problem for wide-FoV 3D LIDARs, where the layers are separated from each other and the same object seen in the disjoint layers cannot be connected. However, sequentially registering the information of each beam results in a problem similar to the one I deal with. The line segments of the scrappy object can be categorized in each layer as a separate curve for each scan.

This chapter addresses the problem where sparse 3D clouds can be built from sequentially scanned data, without having a full 3D cloud. I will show here that this data gathering through motion may contain enough information for a semantic-level analysis of the neighborhood of the vehicle. My method


FIGURE 4.4: Tilted sensor installation for overhang detection. Photo source: SICK - Efficient solutions for material transport vehicles in factory and logistics automation².

is capable of recognizing 3D shapes without having to look at the whole shape or having dense-resolution point clouds for sufficiently modeling the shape or its parts. In this chapter I extend our earlier work [194] with some modifications of my method, aiming at an increase in its speed (real-time run is achieved) and precision, and I also present results on a real-life urban database.

Approaching a street object, my method accumulates the information to increase the probability of detecting a possible object. My method improves the system by using low-level scattered data sources for semantic-level interpretation during the motion.

A tilted LIDAR-based decision support system for autonomous vehicles has several elements:

• The data of the LIDAR sensor have to be registered in a global coordinate system by fusing them with IMU, GPS or other synchronized localization data streams [150].

• The resulting enriched cloud should be preprocessed for later work; noise [176] and ghost removal [139] have to be done.

• Object candidates have to be segmented in order to classify the environment. This is usually done by connected component analysis or specific object detection [15].

• Shape classification is the next stage, which is investigated in detail in this chapter. At this stage I assume that I have some preliminary prediction about the surrounding objects, and I can also predict the environmental scenery type (e.g. urban, warehouse, etc.). Using this assumption, I can assign prior probabilities to the different classes and make a Bayes decision (e.g. animals are unlikely in an urban environment). In my tests on the urban data of the proof-of-concept evaluation, I collected frequent urban objects and assigned the same prior probability to all of them.

• In the final stage, using the classification result, some decision rule has to be applied to control the autonomous vehicle [45].
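The scene-prior Bayes decision of the classification stage can be sketched as follows; the class names, priors and likelihoods are illustrative placeholders, not values from my experiments:

```python
# Sketch of a scene-prior Bayes decision: posterior ∝ likelihood × prior.
# An urban scenery prior suppresses the 'animal' class even when the
# shape likelihood slightly favors it (all numbers are illustrative).

def bayes_decide(likelihoods, priors):
    """likelihoods, priors: dicts class -> probability.
    Returns the class with maximal (unnormalized) posterior."""
    return max(likelihoods, key=lambda c: likelihoods[c] * priors[c])

likelihoods = {"pedestrian": 0.40, "animal": 0.45, "pole": 0.15}
urban_prior = {"pedestrian": 0.55, "animal": 0.05, "pole": 0.40}

print(bayes_decide(likelihoods, urban_prior))  # pedestrian
```

With equal priors, as in the proof-of-concept tests, the decision reduces to picking the class with the highest likelihood.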

2https://www.sick.com


4.1.1 The Contribution of the Chapter

State-of-the-art methods cannot be applied for recognition from small parts without a shape model or incremental recognition and without exact scale information. Looking at the palette of best-practice solutions: applying conventional 3D recognition methods like [148] is not favorable, because the mesh generation or voxelization steps can be expensive for incomplete data and can also cause information loss. Efforts to decrease the computation time of meshes can result in points being left out. Even if all the points are used, mesh generation from partial data is ambiguous. Characteristic features of the point cloud, like noise and local density, disappear in these steps.

2.5D methods [115] are not applicable because they cannot deal with the hidden points produced by the viewpoint change during motion. Model-based approaches [26] also have to be excluded, because we only know the partial object.

I propose a new solution addressing the following issues:

• Processing steps work directly on point clouds, avoiding mesh generation and possible information loss;

• Keypoint search based on local radius in 3D, making it independent of the full size;

• Local feature based object description, which is not model based;

• New local graph based descriptors;

• Object description with bag of features.

The solution of the above issues offers the potential of recognition from partial clouds and solves the problem of the sequential data gathering of tilted single-layer/multi-layer LIDAR sensors. Some aspects of these issues were addressed in [194], where preliminary results were introduced at the validation level. In the following I will show how discriminative local patterns can be used for the classification of partially visible objects.
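The bag-of-features object description can be sketched in a few lines; the toy 2-D descriptors and the codebook below are placeholders, not the graph-based local descriptors of my method:

```python
# Sketch of a bag-of-features signature: each keypoint's local descriptor
# is assigned to its nearest codebook word, and the object is represented
# by the normalized word histogram (toy 2-D descriptors for illustration).

def nearest_word(desc, codebook):
    """Index of the codebook word closest to `desc` (squared Euclidean)."""
    return min(range(len(codebook)),
               key=lambda k: sum((a - b) ** 2 for a, b in zip(desc, codebook[k])))

def bag_of_features(descriptors, codebook):
    """Normalized histogram of codebook-word occurrences."""
    hist = [0.0] * len(codebook)
    for d in descriptors:
        hist[nearest_word(d, codebook)] += 1.0
    total = sum(hist)
    return [h / total for h in hist]

codebook = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]   # learned cluster centers
descs = [(0.1, 0.0), (0.9, 1.1), (0.2, 0.1), (0.1, 0.9)]
print(bag_of_features(descs, codebook))  # [0.5, 0.25, 0.25]
```

Because the histogram is built from whatever keypoints are visible, such a signature degrades gracefully for partial views, which is the property exploited here.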

4.1.2 The Outline of the Chapter

Section 4.2 introduces related works. In Section 4.3, I describe my proposed solution in detail. To validate the proposed method, in Sections 4.4 and 4.5 I will show results on known 3D data and on real-life MLS point clouds.

Finally, in Section 4.6 I will draw some conclusions.