Multipath remains one of the most crucial problems in GNSS, as the error arises locally and cannot be corrected using correction data provided by reference receiver stations or networks. Advances in signal processing techniques for multipath mitigation have led to a continual improvement of performance, whereby basically two major approaches can be distinguished: the class of techniques that mitigate the effect of multipath by modifying the antenna pattern (either by means of hardware design or with signal processing techniques) or by adapting the more or less traditional receiver components (e.g. the early/late correlator), and the class of multipath estimation techniques, which treat multipath (in particular the delays of the paths) as something to be estimated from the received signal, so that its effects can be trivially removed at a later processing stage. Most of the conventional mitigation techniques in some way adapt the discriminator of the delay lock loop (DLL) to the signal received in the multipath environment. Well-known examples of this category are, amongst others, the Narrow Correlator and the Strobe Correlator. For the estimation techniques, static and dynamic approaches can be distinguished according to the underlying assumption about the channel dynamics. Examples of static multipath estimation are those belonging to the family of maximum likelihood (ML) estimators, of which the probably best-known technique is the multipath estimating delay lock loop (MEDLL). For static channels without availability of prior information, the ML approach is optimal and performs significantly better than other techniques, especially if the echoes have short delays.
Finally, sequential estimators that compute the posterior probability density function (PDF) of the signal parameters, conditioned on the received channel output sequence at the receiver, have been considered for time-variant channel scenarios.
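The sequential idea can be illustrated with a minimal grid-based Bayes filter that recursively computes the posterior PDF of a single path delay. This is a sketch only: the random-walk channel dynamics, the Gaussian noise levels, and the function name are illustrative assumptions, not the cited estimators.

```python
import numpy as np

def sequential_delay_posterior(observations, delay_grid, trans_std, meas_std):
    """Grid-based Bayesian filter for the posterior PDF of one path delay.

    Assumed (hypothetical) model: the delay performs a Gaussian random walk
    with std `trans_std`, and each observation is the true delay plus
    Gaussian noise with std `meas_std`.
    """
    n = len(delay_grid)
    posterior = np.full(n, 1.0 / n)                  # uniform prior
    # transition kernel encoding the random-walk channel dynamics
    diff = delay_grid[:, None] - delay_grid[None, :]
    trans = np.exp(-0.5 * (diff / trans_std) ** 2)
    trans /= trans.sum(axis=0, keepdims=True)        # columns sum to 1
    history = []
    for z in observations:
        predicted = trans @ posterior                # prediction step
        likelihood = np.exp(-0.5 * ((z - delay_grid) / meas_std) ** 2)
        posterior = predicted * likelihood           # measurement update
        posterior /= posterior.sum()                 # renormalise
        history.append(posterior.copy())
    return np.array(history)
```

The returned array holds the full posterior at each step, so point estimates (e.g. the MAP delay) or uncertainty measures can be read off afterwards.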
Abstract— The rapid growth of available services depending on location awareness has led to a more and more increasing demand for positioning in challenging environments. Global navigation satellite system (GNSS) based positioning methods may fail or show weak performance in indoor and urban scenarios due to blocking of the signals and multipath propagation. In contrast, cellular radio signals provide better reception in these scenarios due to a much higher transmit power. Also, they offer high coverage in most urban areas. However, they also undergo multipath propagation, which deteriorates the positioning performance. In addition, there are often only one or two base stations within communication range of the user. Both of these problems can be solved by means of a multipath-assisted positioning approach. The idea is to exploit multipath components (MPCs) arriving at the receiver via multiple paths due to scattering or reflections. Such approaches highly depend on the ability to resolve the MPCs at the receiver. This is why multipath-assisted positioning schemes typically assume ultra-wideband systems. Today’s cellular radio systems work with much smaller bandwidths, though. The 3rd Generation Partnership Project (3GPP) Long Term Evolution (LTE) standard uses bandwidths up to 20 MHz. The aim of this paper is to show by means of measurements that multipath-assisted positioning is possible using 3GPP-LTE signals with only two base stations. We apply an advanced signal processing algorithm to track MPCs arriving at the mobile terminal, and to estimate the position of the mobile terminal. Since each of the MPCs can be regarded as being sent from some physical or virtual transmitter, we estimate the positions of transmitters in addition. Assuming only the starting position and direction of the mobile terminal to be known, the results show that the root mean square positioning error of the mobile terminal is always below 1.8 meters.
In 90% of the cases, it is below 1.25 meters.
Compared to the various effects that degrade GNSS performance in general, multipath propagation nowadays accounts for the most dominant error in satellite navigation. Other error sources, such as satellite clock deviation and atmospheric effects, can be compensated to a certain degree by the use of high-stability timing equipment (such as the hydrogen maser carried by GIOVE-B), SBAS corrections, multi-frequency Galileo ranging and pilot signals, and the future availability of civil signals with a higher bandwidth than the currently available C/A code signal. Especially in high-multipath environments like urban and suburban areas, the performance of GNSS receivers is severely affected by multipath propagation. Extensive measurement campaigns were undertaken in recent years by the German Aerospace Center (DLR) for different scenarios, such as aeronautical, vehicular/pedestrian urban, suburban, and rural environments, to record and model effects caused by multipath signal reception. Based on these measurement campaigns, sophisticated and highly realistic channel models have been developed, which allow for the investigation of multipath effects on GNSS receiver performance and which are also foreseen to support the design and development of future navigation signals and high-performance multipath mitigation algorithms.
The new signals and services provided by future GNSSs like Galileo and modernized GPS will foster the use of satellite navigation for safety-of-life applications, e.g. for precision landing approaches of higher categories in aviation. These new signals and services require the development of advanced receiver technologies which make full use of the performance provided by the new signal characteristics. In aviation environments, various potential interference sources exist that can degrade receiver performance. In particular, DME/TACAN is one of the main interference sources in the Galileo E5 band in aviation environments. Therefore, besides functional receiver validation under nominal conditions, the behaviour of the receiver under strong interference conditions must also be tested. Software and hardware simulations have already shown that DME interference can reduce the C/N0 of a receiver
performance of various signals had been made [1, 2]. The lack of accurate models for signal propagation in urban multipath environments resulted in different hypothetical channel models. Their use in simulations showed an extreme influence of signal propagation on navigation accuracy. This exposed the necessity of measurements for the satellite-to-land-mobile channel. These measurements have been carried out and have brought an in-depth understanding of signal propagation in urban, suburban, and rural environments. They have been analyzed and have resulted in a multipath channel model which has been released into the public domain. The International Telecommunication Union (ITU) has standardized this model [7, 8]. The development of GALILEO and the modernization of GPS have been accompanied by a long discussion about new wideband signal options. These discussions resulted in the definition of the GALILEO signal format, the invention of the GPS M-code for military applications, and new civil signals (L2C and L5).
a key player in next-generation intelligent transportation systems and several safety-critical applications. However, even though the navigation research community has been developing positioning methodologies for decades, several limitations still restrict the use of GNSS in the most stringent applications, e.g., lane-level precision for autonomous driving in highly populated cities with harsh propagation conditions. One of the key open problems is how to achieve precise PNT solutions under harsh environments, i.e., affected by multipath, deep fading, signal blockage, or non-line-of-sight (NLOS) conditions. Using standard GNSS signals, it is known that code-based techniques (i.e., relying only on the time-delay estimation between the receiver and a set of visible satellites) do not provide precise PNT information. The standard way to provide such precise navigation capabilities is by exploiting carrier phase information. Indeed, this measurement is linked to the wavelength, which is much smaller than the baseband signal resolution (i.e., for a legacy Global Positioning System (GPS) L1 C/A signal, the wavelength is about 19 cm, while the baseband signal resolution is around 300 m). The two main solutions are precise point positioning (PPP) [1] and real-time kinematic (RTK) positioning [2]. However, the main problem of these techniques is that they are very sensitive to the quality of the phase observables, i.e., it is unlikely that they provide a robust solution under harsh propagation conditions, at least when exploiting standard GNSS signals. Therefore, in order to provide robust and precise solutions, new alternatives must be considered. A possible alternative is to robustify the signal processing, for instance by resorting to outlier mitigation techniques [3, 4].
Another option is to increase the receiver complexity and exploit large-bandwidth signals, which allow a better baseband resolution (i.e., with respect to standard signals) and therefore more precise code-based observables. The latter can be achieved by using (i) high-order binary offset carrier (HO-BOC) modulations or (ii) GNSS meta-signals, i.e., the combination of two GNSS signals at different frequency bands processed as a single signal.
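The resolution figures above follow from simple arithmetic: the carrier wavelength is c/f, and the length of one spreading-code chip (the scale of code-based observables) is c divided by the chipping rate. A quick sketch for the GPS L1 C/A case:

```python
C = 299_792_458.0          # speed of light in m/s

def wavelength_m(freq_hz):
    """Carrier wavelength: the scale of carrier-phase observables."""
    return C / freq_hz

def chip_length_m(chip_rate_hz):
    """Length of one spreading-code chip: the scale of code observables."""
    return C / chip_rate_hz

l1_wavelength = wavelength_m(1575.42e6)   # GPS L1 carrier -> ~0.19 m
ca_chip = chip_length_m(1.023e6)          # C/A chipping rate -> ~293 m
```

The C/A chip length of roughly 293 m is the "around 300 m" baseband resolution mentioned above; widening the bandwidth (higher effective chipping rate) shrinks this scale proportionally, which is the motivation for HO-BOC and meta-signals.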
This section presents the tracking performance as a function of azimuth γ and elevation θ (see Figure 6), using the ASCM. Since the critical azimuth and elevation values are not expected to depend on the navigation signal type, the simulations were carried out for the GPS L1 C/A code only. The two aircraft are compared because the type of plane affects the shape of the Doppler power spectrum through diffraction effects and fuselage reflections in the ASCM. The results are plotted in polar azimuth-elevation plots, interpreted as in Figure 6. Due to the limitations of the ASCM, the results cover an elevation range from 10° to 70° and azimuth ranges from 10° to 170° and from 190° to 350°. These restrictions on the angular values arise from the physical optics model that underlies the ASCM, in which the fuselage is modelled as a cylinder. For this reason, physically reasonable ASCM outputs are obtainable only for restricted angular ranges.
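The valid angular region stated above can be captured in a small helper; the function name is illustrative, but the numeric bounds are exactly those quoted in the text:

```python
def ascm_angles_valid(azimuth_deg, elevation_deg):
    """Check whether (azimuth, elevation) lies inside the region for which
    the ASCM yields physically reasonable outputs, given its cylindrical
    fuselage model: elevation 10°-70°, azimuth 10°-170° or 190°-350°."""
    if not 10.0 <= elevation_deg <= 70.0:
        return False
    return 10.0 <= azimuth_deg <= 170.0 or 190.0 <= azimuth_deg <= 350.0
```

Such a guard is useful when sweeping the azimuth-elevation grid for the polar plots, so that grid points outside the model's validity range are skipped rather than evaluated.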
The snapshot model, the oldest model, belongs to the matching methods with feature preselection and was published by Cartwright and Collett (1983, 1987). It is based on the theory that a biological agent tries to minimize the difference between the landmark bearings and sizes in the current view and in the snapshot. In empirical studies, bees were trained to locate a food location in-between different cylindrical landmarks. If the arrangement of cylinders was changed, the bees searched at the wrong position, which indicates that the bees had learned the alignment of the cylinders. The horizontal input images are separated into dark and light sectors. The images are compass-aligned, and every sector from the snapshot is associated with the closest sector from the current view. Each correspondence pair is represented by two unit vectors: one describes the angular difference, the other the difference in apparent size. Accumulating all unit vectors yields the home vector. Establishing correspondences between snapshot and current view is widely used for visual homing. Some approaches try to extract distinctive features (Hong et al., 1991; Se et al., 2001, 2002; Brown and Lowe, 2002; Fiala and Basu, 2004; Hayet et al., 2007; Pons et al., 2007), others use less unique features (Cartwright and Collett, 1983, 1987; Weber et al., 1999; Lambrinos et al., 2000). Vardy and Oppacher (2003, 2004) concentrate on features extracted by the Harris corner detector. Furthermore, colored regions have also been employed for navigation by Gourichon et al. (2003) and Goedemé et al. (2004).
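The correspondence-and-sum step of the snapshot model can be sketched as follows. This is a simplified illustration, not Cartwright and Collett's original formulation: sectors are given directly as compass-aligned (bearing, apparent size) pairs, and the sign convention of the tangential correction is an assumption.

```python
import math

def ang_diff(a, b):
    """Signed smallest angular difference a - b, in (-pi, pi]."""
    return math.atan2(math.sin(a - b), math.cos(a - b))

def home_vector(snapshot, current):
    """Snapshot-model sketch: each sector is a (bearing_rad, apparent_size)
    pair. Every current-view sector is matched to the closest snapshot
    sector; each pair contributes a tangential unit vector (bearing error)
    and a radial unit vector (size error). Their sum is the home vector."""
    hx, hy = 0.0, 0.0
    for cb, cs in current:
        sb, ss = min(snapshot, key=lambda s: abs(ang_diff(s[0], cb)))
        d = ang_diff(sb, cb)
        if d != 0.0:
            # tangential unit vector, perpendicular to the sector bearing
            t = cb + math.copysign(math.pi / 2.0, d)
            hx += math.cos(t); hy += math.sin(t)
        if cs != ss:
            # radial unit vector: approach the landmark if its sector
            # appears too small, retreat if it appears too large
            r = cb if cs < ss else cb + math.pi
            hx += math.cos(r); hy += math.sin(r)
    return hx, hy
```

For example, a single landmark that appears smaller than in the snapshot produces a home vector pointing toward it, while a perfect match yields a zero vector (the agent is home).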
As discussed in detail in chapter 4.3, an increase in reaction times and error rates depending on the amount of reverberant energy is expected.
Twenty-four paid student participants aged between 19 and 34 years (mean age: 23.9 ± 3.4 years) take part in the experiment, equally divided into female and male participants. Listeners are screened to ensure that they have normal hearing (within 20 dB) for frequencies between 250 Hz and 10 kHz. All listeners can be considered non-expert listeners, since they have never participated in a listening test on auditory selective attention. One participant had to be excluded from the analysis due to missing reaction-time data. The extended binaural-listening paradigm described in chapter 3.1.3 is used for this experiment, as a reissue of Experiment III. In contrast to the first experiment on auditory selective attention in reverberant room conditions, the stimuli are extended by a direction word and are therefore longer and polysyllabic (3.4.2). Headphones as described in chapter 3.7.3 are used for all participants. Binaural synthesis is based on HRTFs of the dummy head, but the headphones are equalized by individually measured HpTFs (compare chapter 3.6.3). The experiment takes place in the darkened hearing booth (compare chapter 3.5.2). Using RAVEN, the stimuli are adjusted to three different reverberation levels (compare chapter 3.8).
The NLOS problem is considered a killer issue in UWB localization. In this article, a comprehensive overview of the existing methods for localization in distributed UWB sensor networks under NLOS conditions is given, and a new method is proposed. The proposed method handles the NLOS problem through an NLOS node identification and mitigation approach based on a hypothesis test. It identifies NLOS nodes by comparing the mean square error of the range estimates with the variance of the range estimates in the LOS situation and, moreover, uses statistics of the arrangement of circular traces to further improve performance in situations where fewer than three LOS nodes are available. Because the number of iterations is equal to or less than the number of NLOS nodes, the method is not overly computationally intensive.
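The identification idea can be sketched as a simple variance test: if the spread of a node's repeated range estimates clearly exceeds the variance expected under LOS conditions, the node is flagged as NLOS. The decision factor and the function name below are hypothetical illustrations of the hypothesis test described above, not the paper's exact statistic.

```python
import statistics

def is_nlos(range_samples, los_variance, factor=2.0):
    """Flag a node as NLOS when the sample variance of its repeated range
    estimates exceeds `factor` times the variance expected under LOS
    conditions (`factor` is an assumed decision threshold)."""
    sample_var = statistics.pvariance(range_samples)
    return sample_var > factor * los_variance
```

In a distributed network, such a test would be run per anchor node, and only nodes passing as LOS would feed the position solver directly; flagged nodes enter the mitigation stage.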
(red) or even higher (infrared) can be perceived (Briscoe and Chittka, 2001, Chittka, 1996, Menzel and Backhaus, 1991). Figure 2.1 shows irradiance spectra of direct sunlight and blue sky as well as reflectance spectra of limestone and dry grass. For objects on the ground, the reflectivity in the UV channel is commonly small (especially compared to visual light), while the UV portion of direct sunlight and blue sky is high. Therefore, there is a high contrast between sunlight reflected from ground objects and the sky. As proposed by Wehner (1982), this allows a simple classification between ground objects and the sky (resulting in a binary skyline) in the UV channel by determining an appropriate threshold. This idea is supported by behavioral experiments with the ant Melophorus bagoti, where it could be shown that their navigational abilities are significantly decreased as soon as UV light is blocked (Schultheiss et al., 2016). The lower reflectivity of ground objects in the UV channel compared to the green channel motivates an alternative approach: instead of only using the UV channel for skyline extraction, a contrast of the UV and green channels can be used (Möller, 2002). This separation is, on the one hand, based on the reflectivity spectra of vegetation (Chittka et al., 1992, 1994, Gumbert et al., 1999), soil, or rocks (Sgavetti et al., 2006): a smaller amount of light with a short wavelength (e.g. UV) is reflected by terrestrial objects compared to light with a longer wavelength (e.g. green), such that reflected light has a small ratio of UV to green light intensity. On the other hand, light scattered by the sky (diffuse skylight) has a high ratio of UV light compared to green light due to Rayleigh scattering. Therefore, a color-opponent coding between two different colors, in this case some kind of ‘UV-green ratio’, may provide an illumination-invariant skyline separation. Kollmeier et al.
(2007) explored different combinations of wavelengths for a skyline separation and found that the quality of separation increases with the distance between the two wavelengths. With respect to most insects, this would be the combination of the L and S channels. We chose UV/green (in the following UV/G) since it is the only combination available to the desert ant Cataglyphis bicolor (Mote and Wehner, 1980).
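The UV/G separation described above reduces to a per-pixel ratio test. The following is a minimal sketch under assumed inputs (flat lists of UV and green intensities; the threshold value is illustrative):

```python
def classify_skyline(uv, green, ratio_threshold=1.0):
    """Classify each pixel as sky (True) or ground (False) from its UV and
    green intensities. Diffuse skylight is UV-rich (Rayleigh scattering),
    while terrestrial objects reflect comparatively little UV, so a UV/G
    ratio threshold separates the skyline largely independent of the
    overall illumination level."""
    eps = 1e-12  # avoid division by zero on completely dark pixels
    return [u / (g + eps) > ratio_threshold for u, g in zip(uv, green)]
```

Because both channels scale together when the illumination changes, the ratio (unlike a raw UV threshold) stays close to invariant, which is the point of the color-opponent coding.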
Leading companies are also present in all of them, except for BIUFF. In ICOPPE/UFRJ, there are anchor companies such as Petrobras on the campus of UFRJ. PqTec-SJC, IG/PUC-Rio, and CDT/UnB are also supported by companies. IG/PUC-Rio and CDT/UnB are involved with local development, incubating companies that can later act in the region. Since Brasilia is not an industrial city, the organization interacts with a limited number of companies; however, collaboration with the local government is good and has formed a solid network of relationships. Some graduated companies are also leading companies in their sectors, like PipeWay (IG/PUC-Rio), specialized in pipeline inspection and management, and Pam-Membranas (ICOPPE/UFRJ), a maker of filtration membranes. These companies depend on the maintenance of the linkages via projects and the installation of laboratories, besides coaching. In the case of PqTec-SJC, there is an emphasis on attracting regional companies, such as the aircraft manufacturer Embraer, in order to promote regional development.
The data model presented by Gröger & Plümer (2011a) is generally feasible for representing both 2-dimensional and 3-dimensional space cells of the MLSEM in both primal geometry and topology space. Although the model is intended to describe entire city models, the modelling of the building interior along disjoint solid primitives is not excluded (cf. Gröger & Plümer 2012b). The aggregation of non-overlapping solids conforms to the notion of a space cell complex (cf. definition 3.7) and facilitates its consistent representation, which is not supported by the models presented above. Moreover, the full tessellation of space agrees with the equivalent condition expressed for the primal space representation of space layers, and the notion of the air and earth solid corresponds to the concept of the outer space cell, which has already been introduced in early publications on the MLSEM (cf. Becker et al. 2009a). However, the data model of Gröger & Plümer (2011a) also lacks expressivity compared to the mathematical model underlying the MLSEM. First, interior voids are allowed for space cells but are supported neither by face nor by solid primitives. Second, due to the restriction to proper CW decompositions, the number of cells required to represent a topological space is higher than for general CW decompositions. Thus, the dual graph representation will consequently contain a higher number of nodes and edges. Third, both bounded and partially bounded solids are restricted to manifold spaces, which excludes non-manifold configurations of space cell complexes as discussed in an earlier chapter. Fourth, the notion of outer space in the MLSEM is more general and can actually be viewed as the space occupied by both the air and the earth solid, and thus makes the explicit modelling of unbounded 2.8-dimensional maps obsolete. And finally, the geometry is strongly coupled with the topological primitives and disallows curved or
we implemented a Kalman filter for tracking individual MPCs over time. This Kalman Enhanced Super Resolution Tracking (KEST) algorithm is the core of our signal processing algorithm and allows for a dynamic description of the evolution of the MPCs in terms of delay and amplitude at the receiver. In particular, the overall lifetime of each MPC, i.e., the distance of receiver movement over which the path is observable, is monitored. This is important, since MPCs with a long lifetime can contribute much better to the tracking of the receiver than MPCs with a short lifetime. The KEST algorithm basically works in two stages: an outer stage keeps track of the number L of MPCs and their corresponding parameters, while the inner stage estimates those parameters of the MPCs. For this estimation in the inner stage, we use the space-alternating generalized expectation-maximization (SAGE) algorithm, an extension of the expectation-maximization (EM) algorithm that jointly estimates the parameters of impinging waves in mobile radio environments.
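The Kalman idea underlying such tracking can be illustrated in one dimension: a filter following the delay of a single MPC under an assumed random-walk model. This is a minimal sketch, not KEST or SAGE themselves; the noise variances `q` and `r` are hypothetical tuning parameters.

```python
def kalman_track_delay(measurements, q, r, tau0=0.0, p0=1.0):
    """1-D Kalman filter over noisy delay measurements of one MPC.

    Assumed model: random-walk state tau_k = tau_{k-1} + w, w ~ N(0, q);
    measurement z_k = tau_k + v, v ~ N(0, r).
    """
    tau, p = tau0, p0
    track = []
    for z in measurements:
        p = p + q                     # predict: variance grows by q
        k = p / (p + r)               # Kalman gain
        tau = tau + k * (z - tau)     # update with the innovation
        p = (1.0 - k) * p             # posterior variance
        track.append(tau)
    return track
```

In a KEST-like setting one such filter (extended to delay and complex amplitude) would run per detected MPC, and a path whose filter keeps locking onto measurements over a long receiver trajectory is exactly a long-lifetime MPC.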
Due to errors inherent in inertial sensors, the pure integration of inertial data leads to unbounded error growth, resulting in an erroneous navigation solution. Reasonable measurements from an additional sensor are needed to restrain these errors. Some proposed solutions require active measurements, e.g. radar or laser range finders, or local infrastructure which has to be established first (Zeimpekis et al., 2003). Vision, on the other hand, can provide enough information from a passive measurement to serve as a reference. As no local infrastructure or external references are used, it is suitable for non-cooperative indoor and outdoor environments. A stereo-based approach was preferred to obtain 3D information about the environment, which is used for ego-motion determination. Both inertial measurements and optical data are fused within a Kalman filter to provide an accurate navigation solution. Additional sensors can be included to achieve higher precision, reliability, and integrity.
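The unbounded error growth from pure inertial integration can be made concrete: a constant accelerometer bias b, double-integrated over time t, produces a position error of roughly 0.5·b·t², i.e. quadratic in time. The function below is an illustrative simulation of this effect, not the paper's navigation filter.

```python
def dead_reckoning_error(bias, dt, steps):
    """Position error accumulated by double-integrating a constant
    accelerometer bias (m/s^2) with time step dt over `steps` steps.
    Grows roughly as 0.5 * bias * t^2, i.e. without bound."""
    v = x = 0.0
    for _ in range(steps):
        v += bias * dt   # bias integrates into a growing velocity error
        x += v * dt      # which integrates into a position error
    return x
```

Doubling the integration time roughly quadruples the position error, which is why an aiding sensor such as stereo vision must periodically bound the drift.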
neglecting different claim levels in different channels. The lead ratio, the appointment ratio, and the penetration of MM were significantly higher than in other channels, leading to the conclusion that the effectiveness of MM in the initiation stage is higher than that of other channels. It also turned out that the price for a lead as billed to independent agents did not cover the internal costs of generating the lead. Currently, SwissHealth views this low price as a differentiator against competitors, but the gap between typical market prices and SwissHealth's price still leaves room for an increase. The non-monetary effect of the low price on the perception of SwissHealth by the independent agents remains to be researched and then taken into account as well. As a consequence, SwissHealth's management decided to extend the MM campaign across all brands and to a larger geographic region. Performance is expected to increase, since the share of fixed costs will be lowered. Also, MM is now considered a first-choice channel for the initiation stage of transactions.
concrete structures, we had to keep in mind the existing uncertainty about the structural type (the database gives no indication of whether the structures are made of concrete, masonry units, or reinforced concrete frames). Therefore, for concrete structures we initially assigned class C, with the less probable range of A-E. Secondly, we took into account weaknesses mentioned in the inventory database. If necessary, the corresponding modifiers were applied to the initial vulnerability class. The assessment of weaknesses should be building-type-specific, examining whether the principal rules of earthquake-resistant design (including quality, regularity, homogeneity, ductility, overall integrity, and stability of the structure) are observed. The following essential weakness characteristics were considered: wide openings in the ground floor (which may cause soft-storey effects), very thin bearing elements (columns and beams) or their absence, lack of reinforcement (or a low reinforcement ratio) in the main structural elements, and poor results from the hammer tests. If some of the listed weaknesses could be identified, the vulnerability class of such buildings was downgraded. Besides weaknesses there may be strengths (e.g. a relatively high reinforcement ratio or enlarged dimensions of the bearing structures); however, we do not take those into account due to the reported poor quality of materials and workmanship in the area in general. Another illustrative indicator of the seismic performance of structures is the damage observed from previous earthquakes. Based on this, we downgraded the vulnerability class in the case of moderate to severe damage to class B or even to the range A-B, depending on the structural type. Tab. III-1 shows the EMS-98 classes assigned and the affiliated number of buildings. For some classes there were only a sparse number of instances, so some classes were aggregated. The bold letter indicates the most likely vulnerability class.
This is done to have a sufficient number of samples per class for applying supervised learning approaches.
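The rule-based assignment described above (initial class, downgrade per identified weakness and for observed damage) can be sketched as follows. This is a hypothetical simplification: the paper applies building-type-specific modifiers, whereas here one downgrade step per weakness is assumed purely for illustration.

```python
CLASSES = ["A", "B", "C", "D", "E", "F"]  # EMS-98: A = most vulnerable

def assign_class(initial="C", n_weaknesses=0, damage_observed=False):
    """Illustrative sketch: start from the most likely class for the
    structural type and downgrade (toward A) by one step per identified
    weakness, plus one step for observed moderate-to-severe damage."""
    idx = CLASSES.index(initial)
    idx -= n_weaknesses
    if damage_observed:
        idx -= 1
    return CLASSES[max(idx, 0)]  # cannot go below the most vulnerable class
```

For a concrete building starting at class C, one identified weakness would yield class B, and an additional record of earthquake damage would push it to A.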
Keywords: audiovisual virtual reality, head movement, hearing aid evaluation
In everyday life, people naturally move their head when listening and talking to other people. There are indications that head movement can affect the performance of hearing aid (HA) algorithms. To provide a realistic estimate of the everyday benefit of HA algorithms, lab experiments should therefore be carried out under natural head movement, which is not the case for conventional lab experiments. Based on examples found in the literature, two possible consequences of head movement are described that might result in reduced performance. First, Ricketts showed that head orientation affects the speech-intelligibility benefit of directional HA algorithms. This could happen because of misalignment between the beam of the directional algorithms and the direction of optimal benefit. Secondly, Abdipour et al. and Boyd et al. provide examples of reduced performance due to head movement for algorithms that use temporal integration to estimate some property of the acoustic scene (sound source direction). When temporal integration takes place during head movement, it might lead to smeared and therefore inaccurate estimates, potentially resulting in reduced performance of the algorithm. Furthermore, head movement causes dynamic changes to the acoustic scene to which adaptive algorithms are adapting. All adaptive algorithms have a certain update rate at which they adapt their settings. Instantaneous adaptation is not possible without artefacts, so there might be some time during adaptation in which the performance is not optimal. Smearing of the estimated acoustic-scene properties and the time taken to adapt could be problematic if the algorithm is not fast enough or if the acoustic scene is rapidly changing because of head movement, leading to maladaptation.