A dedicated section summarizes the outcome of an extensive literature survey on integrity algorithms in general. Integrity algorithms that set themselves apart, such as carrier-phase-based RAIM and RAIM hybridized with INS, are also depicted. From this point on, a selection of RAIM algorithms is chosen for further analysis within this thesis. The Least-Squares Residual (LSR) approach exploits measurement redundancy and is closely connected to the navigation solution in the receiver. Against the background of a multi-frequency and multi-constellation perspective, and given that GNSS performance continues to improve, the use of LSR RAIM becomes highly attractive. In the past, RAIM was generally used only for operations with less stringent requirements. This thesis accounts for current and future GNSS developments, and the LSR RAIM has therefore been chosen in order to assess its possibilities and limitations for maritime users. In addition to the latter RAIM approach, a novel RAIM has been developed in this thesis: it exploits the fact that maritime users move exclusively along the sea surface, which is approximated by the geoid model, and thus offers the opportunity of using additional height information. The idea is to use this additional height information to perform a cross-check with the GNSS-derived height. The possibility of performing fault detection based on a test statistic, expressed as the difference between the height derived from the geoid and the one based on GNSS, is assessed, and the finding is that fault detection can be performed to a certain extent. Furthermore, a scheme is proposed to derive a horizontal protection level based on this test statistic. The MHSS RAIM is also introduced in this thesis: it is deemed to be the algorithm used in the frame of a potential future Advanced RAIM system.
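The geoid-based cross-check described above can be illustrated as a two-sided Gaussian test on the height difference. The sketch below is not the thesis's actual algorithm: the function name, the assumed Gaussian error model for the fault-free difference, and the example values of `sigma_d` are illustrative assumptions.

```python
from statistics import NormalDist

def height_cross_check(h_gnss, h_geoid, sigma_d, p_fa=1e-5):
    """Fault-detection test on the GNSS-vs-geoid height difference.

    h_gnss  : GNSS-derived height, reduced to the sea surface (m)
              -- illustrative assumption
    h_geoid : expected sea-surface height from the geoid model (m)
    sigma_d : assumed fault-free standard deviation of the difference (m)
    p_fa    : allowed false-alarm probability
    Returns (fault_detected, test_statistic, threshold).
    """
    d = h_gnss - h_geoid                          # test statistic
    k = NormalDist().inv_cdf(1.0 - p_fa / 2.0)    # two-sided quantile
    threshold = k * sigma_d
    return abs(d) > threshold, d, threshold
```

A small fault-free difference stays below the threshold, while a biased GNSS height raises an alarm.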
A possible implementation of this system makes use of an independent reference network that is in charge of characterizing fault-free errors as well as satellite and constellation fault probabilities in near real time, and of providing these parameters to the user via an Integrity Support Message.
While RAIM algorithms that are currently in use for horizontal navigation (RNAV) cannot be employed for applications requiring vertical guidance, Advanced RAIM algorithms developed from the Multi-Hypothesis Solution Separation (MHSS) algorithm can provide the necessary robustness to support LPV-200 approaches. Advanced RAIM (ARAIM) is a new category of satellite navigation integrity algorithms that has been emerging over the past five years, along with the prospect of having multiple simultaneously operational GNSS constellations, each with enhanced multi-frequency navigation signals. The novelty that these algorithms bring over the classical RAIM algorithms, developed over the previous two decades, is that they are designed to handle any number of simultaneous satellite faults, which are inherent in a combined constellation scenario, as well as entire constellation faults. Additionally, ARAIM algorithms are expected to take advantage of the anticipated multi-frequency signals in order to eliminate the unpredictable ionospheric delay, one of the most significant error sources for the earlier systems based on single-frequency GPS. With the possibility of using dual-frequency measurements on L1 and L5, thus allowing users themselves to correct for the ionospheric delay, the residual nominal measurement errors will be low enough to allow for guidance of an approaching airplane down to a decision height of 200 ft above the runway threshold.
To solve this new linear equation a “Total Least Squares” (TLS) method is used, based on a matrix manipulation technique known as Singular Value Decomposition (SVD). Although the positioning solution produced by the TLS method will not in general be more accurate than that found using standard least squares techniques, the TLS method is of interest in the context of RAIM because the singular value decomposition yields quantitative measures of the mismatches in both the error vector and the observation matrix. Thus, with two independent observable metrics for measurement consistency, it is possible to construct a RAIM algorithm that, for given false alarm and missed detection probabilities, has far higher availability than the LSR algorithm previously discussed. However, as will be shown, in order to achieve this improved performance in receivers using the EIV RAIM algorithm, the underlying position algorithm requires a degree of management of the user clock offset. The detailed definition of the EIV technique is discussed in the following sections.
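The TLS construction via SVD can be sketched as follows. This is the generic textbook formulation (SVD of the augmented matrix [A | b]), not the specific EIV RAIM algorithm discussed here, and the function name is an assumption; the smallest singular value of [A | b] is the kind of quantitative mismatch measure referred to above.

```python
import numpy as np

def tls_solve(A, b):
    """Total Least Squares solution of A x ~ b via SVD of [A | b].

    Returns the TLS estimate x and the smallest singular value of the
    augmented matrix, which jointly quantifies the perturbations in
    both the observation matrix A and the measurement vector b.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    n = A.shape[1]
    C = np.hstack([A, b])                 # augmented matrix [A | b]
    _, s, Vt = np.linalg.svd(C)
    V = Vt.T
    # the TLS solution lies in the right singular vector belonging
    # to the smallest singular value of [A | b]
    x = -V[:n, n] / V[n, n]
    return x, s[-1]
```

For exactly consistent data the smallest singular value is zero and the TLS estimate coincides with the ordinary least-squares solution.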
The algorithm adopted to guarantee navigation integrity is based on the multiple hypothesis technique, extended during the past years to MHSS-based RAIM, with the target of fulfilling the integrity requirements for precision approach. ARAIM employs a multiple hypothesis approach to incorporate the potential effects of single or multiple satellite faults into its prediction of the worst-case error, the Protection Level (PL). Any combination of faults is first evaluated with respect to its prior probability of occurrence. If a fault mode (i.e., one unique combination of faulted and healthy measurements) is likely to occur, a subset of measurements excluding the potential fault candidates is established and a subset-based position is estimated. With all hypothetical position solutions merged into a union of possible positions, the resulting interval is likely to contain a position solution which is based entirely on fault-free measurements. The remaining fault hypotheses which were not considered in the interval constitute the set of unmonitored hypotheses, whose summed probability is required to be a fraction of the allowable integrity budget applicable to a pre-defined operational mode. For LPV-200 approaches, this integrity risk is defined as P_sat = 1 × 10⁻⁷.

A. Fault model
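A highly simplified, single-axis illustration of the solution-separation idea is sketched below. The even split of the integrity budget, the Gaussian overbounds and all numeric values are assumptions for illustration only; the actual ARAIM PL equations are considerably more elaborate, with optimized per-hypothesis allocations.

```python
from statistics import NormalDist

def mhss_protection_level(x0, sigma0, subsets, p_hmi=1e-7):
    """Simplified single-axis MHSS protection level (sketch).

    x0, sigma0 : all-in-view position estimate and its standard deviation
    subsets    : list of (x_k, sigma_k, p_fault_k) per fault hypothesis
    p_hmi      : integrity budget allocated to this axis
    """
    nd = NormalDist()
    alloc = p_hmi / (len(subsets) + 1)     # naive even budget split
    # fault-free hypothesis: bound the all-in-view error directly
    pl = nd.inv_cdf(1.0 - alloc / 2.0) * sigma0
    # faulted hypotheses: solution separation plus a missed-detection
    # sigma multiplier for the subset solution
    for x_k, sigma_k, p_fault in subsets:
        p_md = min(1.0, alloc / p_fault)   # allowed missed-detection prob.
        k_md = nd.inv_cdf(1.0 - p_md) if p_md < 0.5 else 0.0
        pl = max(pl, abs(x_k - x0) + k_md * sigma_k)
    return pl
```

As expected for a union bound over hypotheses, a larger solution separation for a monitored fault mode directly enlarges the protection level.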
The snapshot LSR Receiver Autonomous Integrity Monitoring (RAIM) developed by the civil aviation community (Parkinson et al. 1988) and the statistical reliability testing developed by the geodetic community (Teunissen 1998) are the classic references for non-augmented GPS-based LSR algorithms. All approaches make use of measurement redundancy to check, on a measurement-by-measurement basis, the relative consistency among estimated residuals in order to detect the most likely measurement fault. Most of the previously mentioned approaches are based on the comparison between a test statistic depending on the estimated least-squares (LS) residuals and a given threshold. The decision threshold is set considering a priori knowledge of the statistical distribution of the test statistic in the fault-free case and a given false detection rate. Although the classical methods mainly use snapshot techniques, some works have been reported on introducing FDE algorithms for RBE techniques (Petovello 2002), usually formulated in the well-known form of the Kalman filter (KF), where it has been proven that the KF innovations follow the same statistical distribution as the LS residuals (Wang 2008).
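The snapshot LSR test can be sketched as a weighted sum-of-squared-residuals statistic compared against a chi-square threshold. In this minimal illustration the threshold is passed in precomputed (for n − 4 degrees of freedom and the chosen false-alarm rate) rather than derived inside the function; the interface is an assumption for clarity.

```python
import numpy as np

def lsr_raim_test(G, y, W, threshold):
    """Snapshot Least-Squares Residual RAIM test (sketch).

    G : n x 4 linearized geometry matrix (line-of-sight + clock column)
    y : n-vector of pseudorange residuals at the linearization point
    W : n x n weight matrix (inverse measurement covariance)
    threshold : chi-square detection threshold for n - 4 degrees of
                freedom and the chosen false-alarm rate (precomputed)
    Returns (fault_detected, test_statistic, x_hat).
    """
    x_hat = np.linalg.solve(G.T @ W @ G, G.T @ W @ y)   # WLS estimate
    r = y - G @ x_hat                                   # LS residual vector
    t = float(r.T @ W @ r)                              # weighted SSE
    return t > threshold, t, x_hat
```

With consistent measurements the residuals vanish; a large bias on a single pseudorange leaves a residual that redundancy exposes.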
In addition, two ISM ground monitoring algorithms are proposed. The first is a long-term monitoring covering the fault-free case, using a quantile method based on a hypothesis testing approach; this method improves continuity and availability through reduced ISM inflation factors compared with the usual sample standard deviation method. The second is a short-term monitoring, where the maximum Signal-in-Space errors are estimated rather than conservative mean values; this additional monitoring makes it possible to relax the receiver design requirements and to keep the user algorithms simple. Finally, an ISM content different from the present definition is proposed. The combination of these proposals improves the continuity and availability performance, providing reduced protection levels.
Figure 6 (upper graphics) shows the calculated PRR for satellite GPS 09 at the IMS for the studied days. The overall behaviour of the PKI lies within the tolerable limits on both days, without sudden changes or large blunders, exemplifying how the PKI should behave for a satellite using healthy corrections. The obtained residuals for the satellite are continuously bounded by the reported UDRE, indicating not only the high quality of the corrections but also that their errors are accounted for by this PKI. On the other hand, Figure 6 (lower graphics) shows the calculated PRR for satellite GPS 21 on the studied days. Peaks with large amplitude of the indicator are noticed at approximately the same time on doy 75 and 76, with the largest amplitudes found on doy 76, exceeding the tolerance limit at several epochs. More noticeable events are detected around midday on both days, where the magnitude of the PRR exceeds the tolerable limits, an indication that either the satellite or the applied corrections are unsuitable for the positioning process. However, the behaviour of the UDRE for satellite GPS 21 shows nearly the same variability as the PRR along the day, reinforcing the possibility of bounding the errors during augmented positioning through the usage of the UDRE. As expected, the correlation of the two PKIs is evident during these time slots and during the drops in the quality of the UDRE, an evidence of the potential use of this indicator for the monitoring of the integrity of the used corrections.

3.5 DGNSS Position Domain
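The epoch-wise bounding relation between PRR and UDRE described above can be expressed as a simple check. The sigma multiplier `k` and the function interface are illustrative assumptions, not the monitoring scheme of the referenced system.

```python
def udre_bounding_check(prr, udre, k=3.0):
    """Flag epochs where the pseudorange residual (PRR) of a satellite
    is not bounded by k times the broadcast UDRE.

    prr  : sequence of pseudorange residuals per epoch (m)
    udre : sequence of UDRE values per epoch (m)
    k    : assumed sigma multiplier for the bounding test
    Returns (violating_epoch_indices, bounded_fraction).
    """
    violations = [i for i, (r, u) in enumerate(zip(prr, udre))
                  if abs(r) > k * u]
    return violations, 1.0 - len(violations) / len(prr)
```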
The scope of this paper is the development of a preliminary integrity concept for a high-precision maritime phase-based GBAS and its validation in the test area. In the frame of the nationally funded project ALEGRO ([ALEGRO], [ALEGRO-FR]), hardware and software for a phase-based GBAS experimental system have been developed and deployed in the Research Port Rostock. The experimental system was realized on the basis of EVnet technology, a universal platform for data acquisition, processing, and data product distribution [EVnet]. A key element in the GBAS processing system is the "GNSS Performance Assessment Facility", which derives signal-specific quality parameters from the incoming data streams of a receiver to provide a first real-time performance monitoring of the used GNSS. Provision of augmentation data from the reference station to the users is decided on the basis of the usability of satellite-specific measurements for DGNSS, which is inferred from the relationship between quality parameter values measured in real time, station-specific PKIs, and positioning with the DIA (Detection, Identification and Adaptation) process at the established location. The GBAS will be completed in the project ASMS by an integrity monitoring station (IMS) validating the provided augmentation in real time [ASMS]. For this purpose, the integrity monitoring station operates as an artificial user: the IMS determines its position using the data provided by the reference station and applying differential positioning techniques. This widely follows the integrity concept applied in IALA DGNSS beacon systems ([IALA-R-121], [Hoppe-2006]). However, due to the open maritime standardization process of phase-based GBAS, the selection of suitable performance key identifiers for the reference station and the integrity monitoring station is still an open research topic. It will be discussed and investigated in this paper.
performance requirements of the actual operational region. The core of the PNT Module is a PNT Unit. This PNT Unit is a processing system which combines all available PNT sensors by means of sensor and data fusion methods. The PNT Module is on the one hand part of the integrated PNT System and on the other hand part of the on-board INS. After a short discussion of the sensors of a PNT Module, we have introduced a preliminary integrity monitoring concept for a PNT Module. As a first step towards the development of a PNT Module demonstrator system, we have performed first measurement campaigns and derived PNT output data from different sensors. The analysis of these data shows that, for their usage within compatibility tests, the different locations of the sensors on board the vessel need to be considered.
extraction process becomes an intrinsic task in the inner layers of the neural network. The goal of the research was to develop an efficient tool for automated, real-time coastline monitoring, specifically tuned for shark identification. To carry out this task, a database of almost four thousand video frames was used for the training, validation and testing of a Faster Region-based Convolutional Neural Network (R-CNN), achieving an average precision of 0.904 while distinguishing among four classes. The inner network architecture that produced these remarkable results was the Visual Geometry Group (VGG)-16, introduced by . This network is made of 13 convolutional layers followed by 3 fully connected layers. However, the project bottleneck was the processing time: about 7 FPS on high-performance hardware, making it suitable for execution on the ground segment only. Therefore, the UAVs were only responsible for acquiring and transmitting the video to the ground station. The same research led to a second publication, aimed at applying the same architecture to estimate mammal populations in images acquired from higher altitudes . Although the configuration was exactly the same as in , the achieved results were far below, with an average precision of 0.28. The authors claimed that the application of a non-maxima suppression algorithm might have improved the whole pipeline performance by neglecting the objects
parameters such as accuracy, integrity, continuity, and availability are used to quantify the performance of a navigation system. The AIS, however, was originally designed as a radio broadcast channel for the exchange of static and navigation data only. Thus, it does not provide any integrity information, nor does it ensure continuity of the service. For that reason, AIS can currently be understood as a supportive but not reliable system for maritime traffic situation assessment. In fact, the sole use of AIS for collision avoidance is illegal with respect to . To compensate for the addressed drawbacks of the AIS, it is proposed to perform trajectory tracking and integrity monitoring of AIS data. In this context, integrity is understood as a plain measure of whether the current AIS data can be trusted or not. While existing methods for AIS integrity monitoring in  work fairly well for vessels proceeding along a straight path, they suffer from false alarms in non-linear situations (e.g. a turn manoeuvre). For that reason, this work presents a novel approach for AIS integrity monitoring and trajectory tracking based on the design of an Extended Kalman Filter (EKF). Two motion models will be introduced that are expected to represent the vessel's dynamics in nearly straight-line motion and in turn manoeuvres, respectively. To merge the benefits of both models in one filter, the Interacting Multiple Model (IMM) framework of  was adopted, which showed good performance for vessel tracking in  and . By monitoring the innovation of the filter, hypothesis tests can be applied to detect abnormal system behaviour. Common approaches involving the chi-squared or Generalized Likelihood Ratio (GLR) test can be found in many failure detection schemes for dynamic systems, e.g. for GNSS-based applications (see ,  and ).
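The innovation-monitoring step mentioned above can be sketched as a normalized innovation squared (NIS) chi-square test; this is the generic formulation used in many failure detection schemes, not the exact detector of this work, and the threshold is assumed to be precomputed.

```python
import numpy as np

def innovation_test(nu, S, threshold):
    """Chi-square test on a Kalman filter innovation.

    nu : innovation vector z - H @ x_pred
    S  : innovation covariance H @ P @ H.T + R
    threshold : chi-square threshold for dim(nu) degrees of freedom
                and the chosen false-alarm rate (precomputed)
    Returns (fault_detected, normalized_innovation_squared).
    """
    nu = np.asarray(nu, dtype=float)
    nis = float(nu @ np.linalg.solve(S, nu))   # nu^T S^-1 nu
    return nis > threshold, nis
```

A small innovation relative to its covariance passes the test; an innovation far outside the predicted spread flags abnormal behaviour such as a faulty AIS report.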
With the advent of Global Navigation Satellite Systems (GNSS), position fixing by electronic means has become a key component for a variety of applications and can be considered a main source for maritime Positioning, Navigation and Timing (PNT) data provision. Navigation information systems compliant with IMO (International Maritime Organization) regulations, like the Automatic Identification System (AIS), integrate the GNSS-based positioning information . The complexity of the navigation process and the associated modern technologies involved have led the IMO to start the so-called e-Navigation Initiative, which aims at the harmonization of systems and standards . In order to support the strategic goal of resilient PNT data provision, the German Aerospace Center has developed a PNT Unit concept, in which all navigational sensors are used in a combined way. To confirm the performance under real operational conditions, an operational prototype of such a PNT Unit was realized. Here the integrity monitoring (IM) algorithms are responsible for the evaluation of the events or conditions that have the potential to cause or contribute to hazardously misleading information (HMI).
A fully self-optimizing reactor system using FTIR analysis was presented by Jensen and co-workers.8 For this purpose, a Paal–Knorr reaction was chosen for which the flow rates of two reaction components, as well as the reaction temperature, were controlled by the automated system (Scheme 6). Calibration of substrate and product allowed the accurate determination of reaction yields via online FTIR. The corresponding IR signals for DMSO as a solvent were subtracted, and equilibration of the reaction parameters prior to yield determination was guaranteed.
As the results presented in Sections 4.3 and 4.4 suggest, the AT service is able to operate much more efficiently within the city center than in the outskirts of the city. This raises the question of whether a smaller service area may not be a better choice for both the operator and overall city traffic. Hence, the service area was reduced to the city center only and now consists roughly of the S-Bahn (urban rail) circle, an area that is often used for different kinds of transport-related policy cases in Berlin. This area is designated to be used by ATs only, meaning that passengers travelling into or out of the zone by car would be required to change from car to AT (or vice versa) at one of four mobility hubs near the border of the new service area. These hubs are located at trunk roads going into and out of the city center. Fig. 9 provides an overview of the setup. Overall, some 174,000 trips are dispatched in this scenario, of which 129,000 either begin or end at one of the hub locations. Car trips, which now take place only outside the AT service area, were not simulated.
1.5 Outline of the Contents of the Thesis

Performance monitoring can be effective and return value only if the performance problems are fixed. Part II of the thesis is concerned with how to detect, diagnose and isolate loop faults with further tests and analysis. After giving the main sources of poor control performance, methods and procedures for automatic and non-invasive detection of unwanted oscillations in control loops are treated in Chapter 8. Techniques based on the integrated absolute control error and zero crossings, and those based on the auto-covariance function, are discussed in detail. Features and practical issues of the methods are given and illustrated with industrial data. Chapter 9 presents techniques for detecting the presence of process non-linearities that may lead to limit cycles in the loop. This includes non-invasive data-based methods based on the analysis of bicoherence and surrogates of time series. The bicoherence and surrogate methods are demonstrated and compared with two industrial case studies, in which the main task is to find the source of oscillations propagating through whole plants. Also discussed in this chapter is how to detect saturating controller outputs that may likewise lead to limit cycles. In Chapter 10, it is shown how actuator problems in control loops can be automatically detected and diagnosed. The main focus is on analysing and detecting static friction (stiction) in control valves, the most common cause of oscillations found in the process industry. Non-invasive stiction detection methods based on cross-correlation, curve fitting and elliptic patterns of PV–OP plots are presented and discussed in detail. Tests for confirming the presence of stiction are then described. An oscillation diagnosis procedure that combines different techniques is proposed.
In Chapter 11, we develop a novel technique for the detection and quantification of valve stiction in control loops from normal closed-loop operating data, based on two-stage identification of a Hammerstein model. This method results in a diagnosis algorithm that helps discriminate between loop problems induced by valve stiction, external disturbances, or an aggressive controller.
The flight data evaluation presented here consists of two steps: first, the a priori data analysis and segmentation, and second, the prediction of the aircraft flight performance variation. As illustrated in Fig. 10, the data was first checked and validated to remove erroneous data sets containing unreliable sensor measurements, unexplainable data jumps or missing time segments from the subsequent analysis. The data was automatically segmented into relatively short time slices during which the aircraft was flying in a quasi-steady state (see section 2): stabilized flight path (cruise, but also climb or descent) and possibly steady turns. One key requirement for each segment was a well-comparable engine state, which is required by the evaluation method applied afterwards. Data segments with very dynamic maneuvers (e.g. high roll rate or rapid variation of the load factor) were ignored in this first analysis but could be considered in future evaluations as well. Afterwards, the segments were categorized according to their average speed, altitude, fan speed, gross weight and outside air temperature. Each category describes an engine operating point, allowing the estimation of the linear model describing the engine influence on the flight performance, similar to the validation example in section 3. Calculation of the necessary values for each segment completed the first evaluation step. In a second step, the data segments of each evaluated category were analyzed and the flight performance variation, represented by the equivalent drag coefficient ΔC_{D,e}, was predicted. This procedure is very similar to the methodology validation presented in section 3.
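The quasi-steady segmentation step can be sketched as a simple threshold-based splitter over synchronized data channels. The channel names, thresholds and minimum segment length below are illustrative assumptions, not the criteria used in the study.

```python
def quasi_steady_segments(roll_rate, load_factor,
                          max_roll_rate=2.0, lf_band=0.1, min_len=30):
    """Split synchronized flight-data channels into quasi-steady
    segments: epochs with low roll rate and near-1g load factor.

    Returns a list of (start, end) index ranges (end exclusive) of
    segments at least min_len samples long.
    """
    segments, start = [], None
    n = len(roll_rate)
    for i in range(n):
        steady = (abs(roll_rate[i]) <= max_roll_rate
                  and abs(load_factor[i] - 1.0) <= lf_band)
        if steady and start is None:
            start = i                      # segment opens
        elif not steady and start is not None:
            if i - start >= min_len:       # keep only long-enough segments
                segments.append((start, i))
            start = None
    if start is not None and n - start >= min_len:
        segments.append((start, n))
    return segments
```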
The Ecologger is a microcontroller-based device which runs a repetitive software routine to measure environmental variables and poll data from remote sensors at a configurable frequency. Additionally, the Ecologger contains a precise time reference, which enables tracking measurements at the time (and therefore, the events) when they occurred. It also offers an external storage system using an SD card that acts as backup data storage, used to retrieve information if communication with the satellite is lost and to verify the correct transmission from the experimental station. Additionally, it has a backup communication interface using the USB protocol, which can be used to connect to a different transmitter or to ground-based communication technologies if required. On the communication interface, the Ecologger acts as a data aggregator, featuring an XBee master node which can collect information from all the remote dendrometers (Fig. 3).
so as to focus their attention and effort on undertaking higher-value activities (Emmanouilidis et al., 2019). Identifying the characteristics of specific situations enables a better understanding of the internal logistics system and thus leads to more effective acquisition of knowledge and insights. This is indeed among the key expectations regarding the impact of Industry 4.0 technologies on job profiles, i.e. a switch from low-skilled and routine activities to ones that require higher skills or cognitive involvement from human operators in such environments (Zolotova et al., 2018). Although the presented demonstrator offers limited scope for context analysis, future work that targets systems of higher complexity would be required to implement effective context modelling and reasoning, which can also be exploited to drive context-adaptive views of performance information, raising attention to emerging context-relevant and higher-priority issues. This would be a natural next step and aim for further research, and would be necessary in order to produce industrially relevant solutions. Such solutions could aim at optimizing internal logistics systems in terms of operational performance, uptime and sustainability through IoT-driven analytics.
The examples presented show that in recent years researchers have developed fully integrated optimization systems for flow chemistry comprising components that were previously thought to be stand-alone tools. Due to this interconnectivity, human interaction becomes less important, leading to time and cost savings. Although the construction of a self-optimizing reactor system that integrates a flow reactor, monitoring tools and software is not an easy task and requires skill and preparation time, the advantages and benefits that can be achieved clearly outweigh these obstacles. Since the age of fully integrated reactor systems seems to have just begun, more examples and sophisticated setups will be developed in the future.
Ability tests are used to measure a broad range of knowledge a person brings to the job situation (Rathje, 2002). The pre-selection of the DLR's Department of Aviation and Space Psychology consists of a battery of tests covering all important abilities such as memory capacity, spatial orientation, concentration, attention, numerical abilities, and personality, as well as some knowledge-based aspects like English language competency and mechanical comprehension (Eißfeldt & Deuchert, 2002). Current ability tests for airline pilots and air traffic controllers measure the ability requirements needed to be successful under current training and job situations (Lorenz, Pecena & Eißfeldt, 1995). This raises the question of whether one's monitoring performance could be predicted by these ability tests as well. We assume that the DLR's tests concerning attention and concentration are related to monitoring behaviour.