Ensemble time in GNSS - Performance requirements and algorithm tests

Any Global Navigation Satellite System (GNSS) relies on a highly stable and reliable System Time which has to meet high performance requirements to enable GNSS services suitable for the navigation and timing communities. The challenge is to guarantee this high performance continuously. The Kalman filter algorithm implemented in GPS, called the GPS Composite Clock, is a mature method to generate such a highly robust System Time. The algorithm estimates the time offset of every individual clock to the so-called implicit mean, a common component in all clock estimates. This common component offers the functionality of System Time and can be understood as a weighted average of all ensemble clock readings. GPS Composite Clock performance is analyzed by simulations of a “light” GNSS configuration with 10 satellite Rubidium clocks (including deterministic drift), 6 ground Cesium clocks and 2 ground Active Hydrogen Masers. Besides evaluating the stability of an error-free clock constellation to define the nominal performance, the behavior of the algorithm is investigated for different operational scenarios: exclusion of clocks from the GNSS ensemble and the occurrence of clock feared events (frequency steps in satellite Rubidium clocks and ground H-masers).
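A minimal sketch of the implicit-mean idea (clock readings, variances and weights below are hypothetical; the actual Composite Clock estimates these quantities with a full Kalman filter rather than a direct average):

```python
import numpy as np

# Minimal sketch of an "implicit mean" ensemble time (hypothetical data).
# Real composite-clock implementations estimate these quantities with a
# Kalman filter; here the weighted average is formed directly.

# Clock readings at one epoch, expressed as offsets to an external reference [s]
readings = np.array([12e-9, -3e-9, 5e-9, 0.5e-9])

# Weights, e.g. inversely proportional to each clock's noise variance
variances = np.array([1e-18, 4e-18, 2e-18, 0.5e-18])
weights = (1.0 / variances) / np.sum(1.0 / variances)

# Ensemble ("paper") time: weighted average of all clock readings
ensemble_time = np.dot(weights, readings)

# Offset of every clock with respect to the implicit mean
offsets = readings - ensemble_time
print(ensemble_time, offsets)
```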

Ensemble time in GNSS - Performance requirements and algorithm tests

The second approach uses an Ensemble Time algorithm to establish System Time. Such algorithms estimate the time offset of every GNSS clock with respect to a “paper” System Time produced by the algorithm. In this approach there is no physical representation, and System Time is equal to a weighted average of all GNSS clocks. Since 1991, GPS has made use of a Kalman filter called the GPS Composite Clock (CC) [1]. Although the GPS Composite Clock does not depend on a single clock, every GNSS clock contributes with a different weight to System Time. Hence, any clock error still affects System Time and thus disturbs all other time offsets. These effects will be investigated further in this paper.

Benefits of Multi-Constellation/Multi-Frequency GNSS in a Tightly Coupled GNSS/IMU/Odometry Integration Algorithm

Abstract: Localization algorithms based on global navigation satellite systems (GNSS) play an important role in automotive positioning. Due to the advent of autonomously driving cars, their importance is expected to grow even further in the coming years. Simultaneously, the performance requirements for these localization algorithms will increase because they are no longer used exclusively for navigation, but also for control of the vehicle’s movement. These requirements cannot be met with GNSS alone. Instead, algorithms for sensor data fusion are needed. While the combination of GNSS receivers with inertial measurement units (IMUs) is a common approach, it is traditionally executed in a single-frequency/single-constellation architecture, usually with the Global Positioning System’s (GPS) L1 C/A signal. With the advent of new GNSS constellations and civil signals on multiple frequencies, GNSS/IMU integration algorithm performance can be improved by utilizing these new data sources. To achieve this, we upgraded a tightly coupled GNSS/IMU integration algorithm to process measurements from GPS (L1 C/A, L2C, L5) and Galileo (E1, E5a, E5b). After investigating various combination strategies, we chose to preferably work with ionosphere-free combinations of L5-L1 C/A and E5a-E1 pseudo-ranges. L2C-L1 C/A and E5b-E1 combinations as well as single-frequency pseudo-ranges on L1 and E1 serve as backup when no L5/E5a measurements are available. To be able to process these six types of pseudo-range observations simultaneously, the differential code biases (DCBs) of the employed receiver need to be calibrated. Time-differenced carrier-phase measurements on L1 and E1 provide the algorithm with pseudo-range-rate observations. To provide additional aiding, information about the vehicle’s velocity obtained by an odometry model fed with angular velocities from all four wheels as well as the steering wheel angle is incorporated into the algorithm. To evaluate the performance improvement provided by these new data sources, two sets of measurement data are collected and the resulting navigation solutions are compared to a higher-grade reference system, consisting of a geodetic GNSS receiver for real-time kinematic positioning (RTK) and a navigation grade IMU. The multi-frequency/multi-constellation algorithm with odometry aiding achieves a 3-D root mean square (RMS) position error of 3.6 m/2.1 m in these data sets, compared to 5.2 m/2.9 m for the single-frequency GPS algorithm without odometry aiding. Odometry is most beneficial to positioning accuracy when GNSS measurement quality is poor. This is demonstrated in data set 1, resulting in a reduction of the horizontal position error’s 95% quantile from 6.2 m without odometry aiding to 4.2 m with odometry aiding.
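The ionosphere-free combination preferred above can be formed directly from dual-frequency pseudo-ranges; a minimal sketch using the public GPS L1/L5 carrier frequencies (function and variable names are illustrative, not taken from the paper's implementation):

```python
# Ionosphere-free combination of two pseudo-ranges (sketch).
# The first-order ionospheric delay scales with 1/f^2, so a linear
# combination weighted by the squared carrier frequencies removes it.

F_L1 = 1575.42e6  # GPS L1 carrier frequency [Hz]
F_L5 = 1176.45e6  # GPS L5 carrier frequency [Hz]

def iono_free(pr_f1: float, pr_f2: float, f1: float = F_L1, f2: float = F_L5) -> float:
    """Return the ionosphere-free pseudo-range [m] from measurements on f1 and f2."""
    g1 = f1**2 / (f1**2 - f2**2)
    g2 = -f2**2 / (f1**2 - f2**2)
    return g1 * pr_f1 + g2 * pr_f2

# Example: hypothetical L1 C/A and L5 pseudo-ranges in metres
print(iono_free(22_000_103.4, 22_000_107.9))
```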

UAV path planning optimization based on GNSS quality and mission requirements

Availability: the percentage of time that the services of the system are usable by the navigator. In the research area of UAV path planning considering GNSS performance, the work in [8] developed a pre-flight planning method for an urban environment in which GNSS availability is assessed in order to prove the necessity of using alternative navigation sensors. Their approach was to create a tool that generates heatmaps of visible satellites and Dilution of Precision (DOP) values, which are then used by an optimization algorithm to produce the optimal path. Their model included only information on the loss of direct line of sight, and they suggested enhancing it in future work with better models of multipath, scattering and reflections. Our intention in this paper is to fill this gap by including a realistic multipath model that simulates diffractions and reflections, and thus to effectively assess their impact on GNSS signal quality, path planning, as well as GNSS availability and integrity metrics.
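DOP values of the kind used for such heatmaps follow purely from the receiver-satellite geometry; a minimal sketch (the line-of-sight vectors are hypothetical):

```python
import numpy as np

# Sketch: dilution of precision (DOP) from receiver-satellite geometry only.
# Each row of G is [unit line-of-sight vector to a satellite, 1 (receiver clock term)].

los = np.array([              # hypothetical line-of-sight vectors (east, north, up)
    [ 0.3,  0.5, 0.8],
    [-0.6,  0.2, 0.8],
    [ 0.1, -0.7, 0.7],
    [ 0.7, -0.1, 0.7],
    [-0.2, -0.4, 0.9],
])
los = los / np.linalg.norm(los, axis=1, keepdims=True)   # normalize to unit vectors
G = np.hstack([los, np.ones((los.shape[0], 1))])

Q = np.linalg.inv(G.T @ G)            # cofactor matrix of the position/clock solution
gdop = np.sqrt(np.trace(Q))           # geometric DOP
pdop = np.sqrt(np.trace(Q[:3, :3]))   # position DOP
hdop = np.sqrt(Q[0, 0] + Q[1, 1])     # horizontal DOP
vdop = np.sqrt(Q[2, 2])               # vertical DOP
print(gdop, pdop, hdop, vdop)
```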

Module-level power electronics under indoor performance tests

With the centralised MPPT of a conventional system with PV modules connected in series and the lack of control capabilities of voltage and current at the module level, the string voltage is decreased as the by-pass diodes of the shaded modules turn on. In certain cases, the shaded modules will then be operated at the lower-power maximum. In a system with power optimizers, the current and voltage of the shaded modules can be adjusted and set to the values of the module's global MPP instead of a local one, without influencing the values of the other modules. Consequently, the MLPE system will operate with a higher performance than the string inverter system, as long as the difference in performance is greater than the losses resulting from the additional conversion step (see Eq. 1).
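The stated condition, that the module-level gain must exceed the extra conversion losses, can be checked with a simple numerical comparison; a minimal sketch with assumed, illustrative module powers and optimizer efficiency (not measured values from the indoor tests):

```python
# Sketch: when does module-level MPPT beat a single string-level MPPT? (illustrative numbers)

module_global_mpp = [300.0, 300.0, 120.0, 300.0]  # per-module global MPP [W], one module shaded
string_mpp = 920.0           # assumed power at the best string-level operating point [W]
converter_efficiency = 0.98  # assumed per-module DC/DC optimizer efficiency

p_mlpe = converter_efficiency * sum(module_global_mpp)  # every module at its own MPP
p_string = string_mpp                                    # all modules forced to one string current

print(f"MLPE:   {p_mlpe:.0f} W")
print(f"String: {p_string:.0f} W")
print("MLPE beneficial" if p_mlpe > p_string else "String inverter beneficial")
```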

Bayesian Time Delay Estimation of GNSS Signals in Dynamic Multipath Environments

We have demonstrated how sequential Bayesian estimation techniques can be applied to the multipath mitigation problem in a navigation receiver. The proposed approach is characterized by code-matched, correlator-based signal compression together with interpolation techniques for efficient likelihood computation, in combination with a particle filter realization of the prediction and update recursion. The considered movement model has been adapted to dynamic multipath scenarios and incorporates the number of echoes as a time-variant hidden channel state variable that is tracked together with the other parameters in a probabilistic fashion. A further advantage compared to ML estimation is that the posterior PDF at the output of the estimator represents reliability information about the desired parameters and preserves the ambiguities and multiple modes that may occur within the likelihood function. Simulation results for BPSK- and BOC(1,1)-modulated signals show that in both cases significant improvements can be achieved compared to a DLL with narrow correlator. In this work, we have employed two methods to reduce complexity: the signal compression to facilitate the computation of the likelihood function as well as a simple form of Rao-Blackwellization to eliminate the complex amplitudes from the state space. Further work will concentrate on additional complexity reduction techniques such as more suitable proposal functions or particle filtering algorithms such as the auxiliary particle filter, which are possibly more efficient with respect to the number of particles when applied to our problem domain.
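As a rough illustration of the prediction and update recursion, the following is a generic bootstrap particle filter skeleton for a scalar delay; the paper's estimator additionally uses code-matched signal compression, Rao-Blackwellized amplitudes and a channel-state model, and the likelihood and noise levels below are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generic bootstrap particle filter sketch for a scalar time delay tau.
# 'measurement_likelihood' stands in for the compressed-correlator likelihood
# used in the paper; here it is a placeholder Gaussian around the measured value.

N = 1000
particles = rng.uniform(-0.5, 0.5, N)   # initial delay hypotheses [chips]
weights = np.full(N, 1.0 / N)

def measurement_likelihood(tau, z, sigma=0.05):
    return np.exp(-0.5 * ((z - tau) / sigma) ** 2)

for z in [0.10, 0.12, 0.11, 0.15]:      # placeholder delay "measurements"
    # Prediction: random-walk movement model for the delay
    particles = particles + rng.normal(0.0, 0.01, N)
    # Update: weight each particle by the likelihood of the new measurement
    weights = weights * measurement_likelihood(particles, z)
    weights = weights / weights.sum()
    # Resample (systematic) when the effective sample size degenerates
    if 1.0 / np.sum(weights ** 2) < N / 2:
        positions = (rng.random() + np.arange(N)) / N
        idx = np.clip(np.searchsorted(np.cumsum(weights), positions), 0, N - 1)
        particles = particles[idx]
        weights = np.full(N, 1.0 / N)

print("posterior mean delay:", np.dot(weights, particles))
```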

Distribution-free tests for time series models specification

refers to Class B, which imposes some further mild restrictions on the class of functions J in order to avoid some pathological behaviour of d, while allowing fairly flexible specifications, including those exhibiting long memory such as fractionally integrated ARMA and exponential models. Similar assumptions were also used by Delgado, Hidalgo and Velasco (2005). Henceforth, it is assumed that the parameter estimator is √n-consistent under the sequence of local alternatives H_1n.


GNSS Receiver Performance Assessment with a Realistic Aeronautical Channel Model

Figure 4 shows the general setup for the measurements during the flight trials: the received signal is first fed into a low-noise antenna preamplifier (LNA) and then enters the RF frontend (FE), where filtering and additional amplification take place. The total RF amplification is approximately 45 dB. A vector power spectrum analyzer (Agilent E4443A, PSA) is used as the main measurement equipment. The PSA performs the down-conversion as well as the digitization with a bandwidth of up to 80 MHz. The PSA is controlled via a PC (SA-PC), on which the data files containing the interference data are also stored. For calibration purposes, a signal generator (PSG) is connected to the LNA instead of the passive antenna prior to every flight to obtain the characteristics of the measurement setup. These data are also collected by the PSA.

Time-varying capital requirements and disclosure rules: Effects on capitalization and lending decisions

We find that the buffer between actual and regulatory capital decreases (increases) when the regulatory capital requirement increases (decreases). Accordingly, the capital buffer provides banks with some flexibility when they face changes in capital requirements. Furthermore, the weakly significant negative effect on equity we observe when banks face decreases in regulatory capital requirements is the result of a reduction in Tier 1 capital. This latter finding is confirmed by an increase in bank leverage. It might be driven, for example, by an increase in dividend payments. We do not observe any change in liquid assets when the capital ratio requirement changes. In sum, an increase in the regulatory capital requirement results in a higher capital ratio due to less asset risk and a lower capital buffer, while a decrease leads to a higher buffer as well as a lower capital ratio due to less Tier 1 capital and an increase in lending. To better understand the latter effect, we also split loans into retail loans, firm loans, and loans to public institutions using annual data. We find that a decrease in the required capital ratio is accompanied by the issuance of more loans to firms. Overall, this reflects a tradeoff for the regulator and policymakers between a more resilient banking system and a fostering of the economy through more bank lending using the capital ratio as a policy instrument. It also implies that raising capital requirements does not restrain banks from issuing new loans. Reducing requirements, on the other hand, enables banks to extend new loans. Even though our setting differs from the countercyclical capital buffer of Basel III, an implication of our results is that increasing requirements does not reduce loan growth, but reducing requirements facilitates loan growth.

Testing financial time series for autocorrelation: Robust Tests

For example, Lin & McLeod (2006) and Peña & Rodríguez (2002, 2006) develop tests based on a general measure of multivariate dependence. Their main idea is that the estimated residuals from an ARMA fit can be viewed as a sample from a multivariate distribution, so that testing for zero autocorrelation amounts to testing for proportionality of their correlation matrix to the identity, in other words, testing whether or not the correlation matrix is diagonal. In a similar vein, Fisher & Gallagher (2012) draw on high-dimensional data analysis to derive new weighting schemes for the Portmanteau statistic. All the resulting tests are weighted sums of the empirical autocorrelation and partial autocorrelation functions, as explained in Gallagher & Fisher (2015). The asymptotic null distribution under H_0^(k) is, in all cases, a linear combination of k independent χ²(1) variates. This follows from the fact that under the assumption of independence, the
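A minimal sketch of a weighted portmanteau statistic of the kind discussed here (the weights and the example residual series are illustrative; the cited papers derive specific weighting schemes and the exact chi-square-mixture null distribution):

```python
import numpy as np

def sample_acf(x: np.ndarray, max_lag: int) -> np.ndarray:
    """Empirical autocorrelations rho_hat(1..max_lag) of a series."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[k:], x[:-k]) / denom for k in range(1, max_lag + 1)])

def weighted_portmanteau(residuals: np.ndarray, k: int) -> float:
    """Weighted Ljung-Box-type statistic: n(n+2) * sum_i w_i * rho_i^2 / (n - i).
    The linearly decreasing weights are used here purely for illustration."""
    n = len(residuals)
    rho = sample_acf(residuals, k)
    lags = np.arange(1, k + 1)
    w = (k - lags + 1) / k
    return n * (n + 2) * np.sum(w * rho**2 / (n - lags))

rng = np.random.default_rng(1)
resid = rng.standard_normal(500)    # residuals from a hypothetical ARMA fit
print(weighted_portmanteau(resid, k=20))
```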

Time-related alterations and other confounding factors in direct sediment contact tests

Direct sediment contact tests (SCTs), also referred to as whole-sediment toxicity tests, directly integrate all factors that are relevant for the impact of a given pollutant, such as dependence on test species, compound characteristics, sediment properties and environmental factors, and translate them into biological effects (e.g. Blaha et al. 2010, Conder et al. 2004, Dillon et al. 1994, Duft et al. 2003, Feiler et al. 2009, Höss et al. 2010a, Ingersoll et al. 1995, Jones et al. 2008, Lee et al. 2004, Re et al. 2009, Ryder et al. 2004, Sae-Ma et al. 1998, Seiler et al. 2010, Turesson et al. 2007). Thus, SCTs directly represent the bioavailability of a contaminant to the test species on the level of effect. In addition, as reviewed by Seiler et al. (2008), these test systems reduce alterations of the sample compared to most other current strategies for sediment assessment, in particular extraction with organic solvents. The lab-based evaluation of chemicals and field monitoring can therefore greatly benefit from the application of SCTs. In the context of REACh (EC 2006) and the amended water framework directive (EC 2008), effect-focused assessment tools can provide data on potential biological impact if first-line screening indicates a need for further, more detailed assessment measures. In terms of field monitoring, SCTs are invaluable markers of biological impact (e.g. Eklund et al. 2010, Hilscherova et al. 2010, Krcmova et al. 2009, Re et al. 2009, Schmitt et al. 2010). However, in order to fully benefit from these advantages and properly interpret the resulting biological data, potential influences on the effects recorded using SCTs need to be investigated. An important step towards this end has been made by the German SEKT project framework, which defined reference conditions, control sediments and toxicity thresholds for a battery of six test systems (Feiler et al. 2009, Feiler et al. 2005, Höss et al. 2010b). The present comment aims to show that the time between an initial contamination event (or spiking, in the case of lab-based assessment of chemicals) and the actual exposure in an SCT is a factor with potentially great impact on SCT results.

Challenging incompleteness of performance requirements by sentence patterns

Withall presents a comprehensive pattern catalogue for natural language requirements, including patterns for performance requirements, in his book [23]. The pattern catalogue contains a large number of patterns for different types of requirements. In contrast to their work, our framework focuses on performance requirements and is derived step by step from the literature. Moreover, we provide a notion of completeness for performance requirements and explicitly include the context (by means of cross-cutting aspects) of performance requirements. Filipovikj et al. conduct a case study on the applicability of requirement patterns in the automotive domain [24]. They conclude that the concept of patterns is likely to be generally applicable to the automotive domain. In contrast to our framework, they use patterns that are intended for the real-time domain. They use the Real Time Specification Pattern System as defined by Konrad and Cheng [25] (based on the work of Dwyer et al. [26]). These patterns use a structured English grammar and support the specification of real-time properties. Stalhane and Wien [27] report on a case study in which requirement analysts use requirement patterns to describe requirements in a structured way. Their results show that the resulting requirements are readable for humans and analyzable by their tool. Moreover, their tool improved the quality of requirements by reducing ambiguities and inconsistent use of terminology, removing redundant requirements, and improving partial and unclear requirements. In contrast to their work, we specifically focus on performance requirements, provide a notion of completeness, and provide more detailed (and also literature-based) sentence patterns.

Das rhetorische Ensemble

The differentiation of the senses and media may have a date, but it is one that could only be fixed in the late eighteenth century (if it had to be fixed at all). The historical date of this differentiation is a reassurance; it responds to uncertainties. That is to say, it is less a real date than a holiday for the production of historical evidence. Reading Alberti's instructional texts through Mühlmann's eyes gives an account of a quite different and unassured time. It was a time in which media and senses were interpreted within a rhetorical ensemble and as possibilities of rhetorical figuration. To that extent, a demarcation of rhetoric from jurisprudence before the late eighteenth century is anything but self-evident. It can hardly be pinned to a recourse to Rome, nor to the question of the extent to which aequitas was recognized in the early modern period only in the form of aequitas scripta, as a source of law, and how differentiated the catalogues of topoi were. For that would presuppose that the boundaries of the systems were identical with the boundaries of the media and the boundaries of semantic rubrics, or were reflected as identical. In the rhetorical phase of European societies and in the age of the paragone, however, the guiding idea was not that media develop exclusive properties. Rather, the prevailing notion was that media stand in competition and are related to one another through that rivalry. To that extent, the sentence from Horace's poetics – ut pictura poesis – could also, in its inversion,

A stable, polynomial-time algorithm for the eigenpair problem

Algorithms which output approximate eigenvalues without accompanying approximate eigenvectors might be easier to analyze. The experimental evidence of [41] for symmetric matrices suggests that many of the algorithms in use are of average finite cost and even that there is some universality. An informal explanation of this fact is that the eigenvalues of symmetric matrices are very well conditioned: see for example [56, eq. (1.5)]. But eigenvectors are another matter. When a matrix is close to having multiple eigenvalues, the condition of the eigenvector tends to infinity. For example, even for 2 × 2 symmetric matrices, any pair of orthogonal vectors (a, b) and (−b, a) are the eigenvectors of a matrix
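This ill-conditioning is easy to reproduce numerically; a minimal sketch (the perturbation size and matrix entries are arbitrary illustrations, not taken from the paper):

```python
import numpy as np

# Sketch: eigenvalues of symmetric matrices are well conditioned, eigenvectors are not.
# Two matrices that differ by a tiny symmetric perturbation have nearly identical
# eigenvalues but, near a double eigenvalue, very different eigenvectors.

eps = 1e-12
A = np.array([[1.0, 0.0],
              [0.0, 1.0 + eps]])          # eigenvectors: the coordinate axes
B = np.array([[1.0 + eps / 2, eps / 2],
              [eps / 2, 1.0 + eps / 2]])  # eigenvectors: rotated by 45 degrees

wA, VA = np.linalg.eigh(A)
wB, VB = np.linalg.eigh(B)

print("eigenvalue difference:", np.max(np.abs(wA - wB)))   # tiny (~1e-12 or below)
print("eigenvector of A:", VA[:, 0])                        # ~[1, 0]
print("eigenvector of B:", VB[:, 0])                        # ~[0.707, -0.707]
```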

Validation and Assessment of Multi-GNSS Real-Time Precise Point Positioning in Simulated Kinematic Mode Using IGS Real-Time Service

Following this, in order to analyze the convergence performance of real-time PPP in simulated kinematic mode, a real-time PPP processing experiment was carried out over about two weeks for three IGS/MGEX stations. Finally, to assess and validate the positioning accuracy of real-time multi-GNSS PPP in simulated kinematic mode, another real-time PPP processing experiment was carried out continuously over 31 days in October 2017, using the real-time data streams from 12 globally distributed stations in the IGS/MGEX network. From the experiment results given in Figures 4–6, we can see that there are large differences between the minimum and maximum convergence times for the 20-cm level. This may be caused by the poor quality of real-time products or by the poor stability of real-time data streams during some periods. On the one hand, the quality (accuracy and availability) of the real-time products may differ between periods, especially for GLONASS, GALILEO, and BDS satellites. On the other hand, there are some uncertain factors which cannot be avoided in real-time processing, such as the potential instability or unavailability of real-time products broadcast by the NTRIP Caster, the loss of the network connection with the Caster when receiving data streams on the user's side, etc. As a result, the stability of the real-time PPP solutions will be influenced.
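Convergence time at a given accuracy level can be extracted from an epoch-wise 3-D error series; a minimal sketch, assuming an illustrative error series and defining convergence as the error staying below the threshold for the rest of the session:

```python
import numpy as np

def convergence_time(errors_3d: np.ndarray, epoch_s: float = 30.0,
                     threshold_m: float = 0.20) -> float:
    """First time [min] after which the 3-D error stays below the threshold
    for the rest of the session; NaN if it never converges."""
    below = errors_3d < threshold_m
    above_idx = np.where(~below)[0]        # epochs still above the threshold
    if len(above_idx) == 0:
        return 0.0
    first_converged = above_idx[-1] + 1
    if first_converged >= len(errors_3d):
        return float("nan")
    return first_converged * epoch_s / 60.0

# Hypothetical kinematic PPP 3-D error series [m], sampled every 30 s
errors = np.concatenate([np.linspace(2.0, 0.15, 60), np.full(180, 0.12)])
print(convergence_time(errors), "min")
```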

Ensemble and constrained clustering with applications

This chapter gathers fundamental concepts, algorithms and information about the databases that are used throughout the thesis. Section 1 provides a comprehensive disambiguation of the general terms used. Some of the terms belong to relatively new areas and, therefore, little consensus among different authors exists. Choices are made to maintain the text's cohesion, and the variants of such terms are named in order to ease the reading. Section 2 gathers the mathematical notation used, in order to serve as a reference point that can be revisited in case of doubt while inspecting specific sections of the text. However, any additional nomenclature needed is properly defined in place. Section 3 describes the ensemble generation schemes as well as the algorithms used to create the partitions. This topic is located here since it plays a central role for the ensemble clustering methods reviewed or proposed in this thesis; additionally, the ensemble generation schemes require a number of details to be specified. Section 4 gives a brief review of the existing evaluation measures commonly used to assess the quality of clustering algorithms. Some of those measures are used during the evaluation step by different algorithms throughout the thesis. Section 5 reviews the concept of computing the median of objects. The median concept plays an important role in consensus clustering methods. It is also used by the constrained ensemble clustering methods introduced in this thesis, and additionally it serves as a basis for the sum-of-pairwise-distances method proposed in Chapter 7. Finally, Section 6 presents a detailed description of the four databases used in the various experiments in this thesis.

Real-time sensing of atmospheric water vapor from multi-GNSS constellations

The WVR measures the thermal sky emission caused by water vapor, liquid water, and oxygen in the atmosphere at the two frequencies 21.0 and 31.4 GHz. The WVR is operated continuously in a “sky-mapping” mode, which corresponds to a repeated cycle of 60 observations spread over the sky with elevation angles of no less than 20°; typically, this results in 6000-9000 measurements per day. The wet delays from the WVR are inferred from the sky brightness temperatures (Elgered and Jarlemark, 1998). The formal uncertainty of the wet delay estimates varies with the elevation angle and weather conditions, usually ranging from 0.5 mm to 3.0 mm. However, the absolute uncertainty (magnitude of one standard deviation) is of the order of about 7 mm when an uncertainty of 1 K is assumed for the observed sky brightness temperature. The gradient estimates are not directly provided by the WVR; rather, they are estimated along with the zenith delays by feeding all the acquired line-of-sight observations into an in-house software package, applying the model presented in Equation (3.5). The estimation process is similar to that implemented in the GNSS processing, and the gradients are solved by a least-squares estimator for different time resolutions, e.g. 15 minutes, one hour or two hours (here a one-hour resolution is used). The gradients retrieved from the WVR provide a direct assessment of the performance of the GNSS-based estimates. However, it is noteworthy that the WVR data only provide wet gradients, while GNSS data produce total gradients which include both wet and dry components. For further comparison and validation with the GNSS-based estimates, the WVR wet gradients are corrected with the ECMWF hydrostatic gradients to derive total gradients (Li et al., 2015d). The six-hour hydrostatic gradients of ECMWF are linearly interpolated to match the one-hour interval of the WVR wet gradients. The corrected gradient values, which contain not only the wet component but also the hydrostatic component, are referred to as the total WVR gradient and are used in this study.
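The correction step amounts to interpolating the six-hour ECMWF hydrostatic gradients to the one-hour WVR epochs and adding them to the wet gradients; a minimal sketch with hypothetical gradient values:

```python
import numpy as np

# Sketch: combine WVR wet gradients with ECMWF hydrostatic gradients (illustrative values).

# ECMWF hydrostatic north gradient, 6-hour sampling [mm]
ecmwf_epochs_h = np.array([0.0, 6.0, 12.0, 18.0, 24.0])
ecmwf_grad_n = np.array([-0.35, -0.30, -0.28, -0.33, -0.36])

# WVR wet north gradient, 1-hour sampling [mm]
wvr_epochs_h = np.arange(0.0, 24.0, 1.0)
wvr_wet_grad_n = 0.2 * np.sin(2 * np.pi * wvr_epochs_h / 24.0)  # hypothetical diurnal signal

# Linearly interpolate the hydrostatic gradients to the WVR epochs
hydro_interp = np.interp(wvr_epochs_h, ecmwf_epochs_h, ecmwf_grad_n)

# "Total WVR gradient": wet + hydrostatic, comparable with GNSS total gradients
total_wvr_grad_n = wvr_wet_grad_n + hydro_interp
print(total_wvr_grad_n[:6])
```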

Precise real‐time navigation of LEO satellites using GNSS broadcast ephemerides

The magnitude of the combined noise and multipath errors of the simulated pseudorange observations over elevation angle is illustrated in Figure 4. The largest errors are obtained for the legacy GPS L1 C/A-code as well as the L2C-code, which use a traditional binary phase shift keying (BPSK) modulation and a roughly 300 m chip length. The GPS L1/L2 P(Y) pseudorange observations benefit from a ten-times smaller chip length, but exhibit increased noise at low elevation angles due to the inherent semi-codeless tracking losses. Compared to the legacy GPS signals, a clearly improved performance is obtained for the E1/B1 signals of the new constellations. Among other factors, tracking of these signals benefits from the binary-offset-carrier (BOC) modulation, which allows for a three-times lower noise variance than traditional binary phase shift keying (BPSK) signals at the same loop bandwidth. With values of less than 20 cm even down to the horizon, the lowest pseudorange noise is obtained for the GPS L5, Galileo E5a and BeiDou-3 B2a signals, which benefit from both a high signal power and a ten-times shorter chip length than other open-service signals. In addition to the low SISRE of Galileo and BeiDou-3, these constellations also provide a favorable measurement quality, which is again considered beneficial for positioning performance. The small increase of the noise at very high elevation angles is related to antenna group delay distortions near the bore-sight direction.
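Elevation-dependent code noise of this kind is often captured with a simple exponential model when weighting observations; a minimal sketch (the coefficients are generic placeholders, not the values behind Figure 4):

```python
import numpy as np

def code_noise_sigma(elev_deg: np.ndarray, a: float = 0.10, b: float = 0.40,
                     e0_deg: float = 15.0) -> np.ndarray:
    """Generic elevation-dependent pseudorange noise model [m]:
    sigma(E) = a + b * exp(-E / e0). Coefficients are illustrative only."""
    return a + b * np.exp(-elev_deg / e0_deg)

elev = np.array([5.0, 15.0, 30.0, 60.0, 90.0])
sigma = code_noise_sigma(elev)
weights = 1.0 / sigma**2      # observation weights for a least-squares solution
print(np.round(sigma, 3), np.round(weights, 2))
```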

Airborne navigation and gravimetry ensemble & laboratory

The Meinberg clock that was used in AGFA is of the type Meinberg Funkuhr GPS166. It is normally used for spatially static high-precision timing. When the instrument is started, a minimum of four GPS satellites is searched for to determine a precise GPS time signal. When more than four satellites are available, the GPS time errors can be minimized by comparing the different GPS clocks and re-adjusting the clock errors. If the location of the Meinberg clock was changed considerably or the instrument was not used for some months, this procedure may take up to 20 minutes, because the clock must then determine its new position and download the new set of ephemerides from the satellites. Once this is successfully accomplished, the procedure only takes a few minutes when the clock is newly started. The initialization procedure can only take place with a static GPS antenna. Before the aircraft starts moving, the initialization must be completed and the antenna must be switched off. Once a good static GPS timing is achieved, this signal is used to initialize and control an internal high-quality rubidium quartz clock. The highest internal clock rate is 10 MHz. From that signal, a pulse-per-minute and a pulse-per-second signal are derived. Furthermore, 10 MHz, 1 MHz and 100 kHz signals are generated for output. Moreover, two serial data strings provide ASCII data if requested.

Genetic algorithm for project time-cost optimization in fuzzy environment

This research work customized a GA-based project time-cost optimization algorithm for a fuzzy environment. It provides an efficient computational technique for the time-cost optimization project scheduling problem, incorporating uncertainty in the network analysis. Because of the dynamic situation of the environment, the proposed algorithms are efficient owing to the fuzziness of their variables. Using the alpha-cut method of fuzzy theory, fuzzy input variables were transformed into crisp values. Due to the NP-hard nature of the problem, a GA-based solver was used to find the optimum solution within the project completion time constraint at different values of the alpha-cut level. Decision makers (mainly project managers) can use this model to choose the desired (optimum) solution for the time-cost trade-off within a time limit under different risk levels that vary with the value of α (α = 0 means the lowest level of risk and
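The alpha-cut transformation can be illustrated for a triangular fuzzy activity duration; a minimal sketch (the triangular membership parameters are hypothetical):

```python
# Sketch: alpha-cut of a triangular fuzzy number (l, m, u), e.g. a fuzzy activity duration.
# For a given alpha in [0, 1], the alpha-cut is the crisp interval
# [l + alpha * (m - l), u - alpha * (u - m)]; alpha = 1 collapses it to the modal value m.

def alpha_cut(l: float, m: float, u: float, alpha: float) -> tuple[float, float]:
    lower = l + alpha * (m - l)
    upper = u - alpha * (u - m)
    return lower, upper

# Hypothetical fuzzy duration: optimistic 8, most likely 10, pessimistic 15 days
for a in (0.0, 0.5, 1.0):
    print(a, alpha_cut(8.0, 10.0, 15.0, a))
```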
