Combination techniques and decision problems for disunification

Our first main result says that solvability of disunification problems in the combination of disjoint equational theories can be reduced to solvability of dis[…]

An integrated approach for solving a MCDM problem, Combination of Entropy Fuzzy and F-PROMETHEE techniques

Keywords: entropy fuzzy, F-PROMETHEE, multi criteria decision making, fuzzy set, decision maker, fuzzy environment

1. Introduction
Multiple criteria decision making (MCDM) is a powerful tool used widely for appraisal and ranking problems involving multiple, usually conflicting, criteria (Bilsel, Buyukozkan, & Ruan, 2006). Selecting a proper method requires an in-depth analysis of the available MCDM techniques. MCDM methods have been applied frequently to solve different problems in both certain and uncertain environments. Additionally, MCDM is increasingly used in environmental policy evaluation because (a) it offers the possibility to deal with intricate issues, (b) it incorporates criteria that are difficult to monetize, and (c) it represents a holistic view incorporating tangible as well as intangible (or 'fuzzier') aspects, often neglected by other evaluation techniques such as AHP (Munda, 2004).
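The entropy-weighting step that this paper combines with F-PROMETHEE can be illustrated concisely. The following is a minimal sketch under the assumption of a crisp, strictly positive decision matrix (alternatives in rows, criteria in columns); the function name and normalization details are illustrative, not the authors' implementation.

```python
import numpy as np

def entropy_weights(X):
    """Shannon-entropy criterion weights for a decision matrix X
    (rows = alternatives, columns = criteria, all entries > 0)."""
    m, n = X.shape
    P = X / X.sum(axis=0)                 # column-wise normalization
    k = 1.0 / np.log(m)                   # scales entropy into [0, 1]
    E = -k * (P * np.log(P)).sum(axis=0)  # entropy per criterion
    d = 1.0 - E                           # degree of diversification
    return d / d.sum()                    # normalized weights

# Example: 4 alternatives scored on 3 criteria
X = np.array([[7.0, 3.0, 9.0],
              [6.0, 5.0, 8.0],
              [8.0, 4.0, 7.0],
              [5.0, 6.0, 6.0]])
print(entropy_weights(X))  # higher weight -> criterion discriminates more
```

A criterion on which the alternatives barely differ receives low weight, which is exactly why entropy weighting pairs naturally with an outranking method such as PROMETHEE.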

Decision making in structural engineering problems under polymorphic uncertainty

The considered engineering system is a portal frame under vertical and horizontal point loads, see Section 2. It is a simple mechanical system that helps avoid misinterpretation of results with respect to system behavior and failure states. Two competing failure modes, material failure and stability failure, make the limit state function strongly nonlinear and sensitive to uncertainties. Such a combination of failure modes is typical for many structural systems. Data for the structural and loading parameters are available, more or less sparsely, in Section 3. Using this information, three challenges formulated in Section 4 have to be solved. One of the crucial issues is an objective comparison of results produced by different underlying models for the uncertain input parameters; obviously, this could be done on the basis of the decisions made. The second challenge of the benchmark deals with the question of how data assimilation can help in the decision making. Finally, a design problem under polymorphic uncertainty is posed in which more demanding operating requirements have to be fulfilled. Reference solutions, determined using the "true" structural and loading parameters known only to the authors, are given for comparison in Section 5. Final remarks can be found in Section 6.

Model reduction applied to finite-element techniques for the solution of porous-media problems

This chapter discusses the application of the model-reduction methods presented in the previous chapter to the specified TPM models derived in Chapter 2, as well as their numerical implementation. It is therefore necessary to provide further information on the corresponding problem settings, such as the particular initial-boundary-value problem chosen, the respective reduced system, or the underlying processes used to sample the snapshots. Starting with the relatively simple porous-soil model with linear-elastic material behaviour, which results in a system of equations with (approximately) time-invariant system matrices (hereinafter referred to as linear equation systems), the suitability of a system reduction using the (modified) POD method is demonstrated for different problem settings. Afterwards, the transition to time-efficient simulations of a nonlinear porous material undergoing large deformations, modelled with a Neo-Hookean approach, is discussed. In this context, the DEIM is used as an additional method in combination with the POD method to deal with the occurring nonlinearities. Next, the reduction of different biomechanical problems is examined. To this end, reduced simulations of drug-infusion processes in the human brain are performed first. In particular, the POD method is first applied to the simplified brain-tissue model with a linear equation system, implying isotropic permeability conditions and the further assumptions of the simplified model derived in Subsection 3.3.2. Then, the POD-DEIM approach is used to reduce the system of equations of the general (nonlinear) brain-tissue model under anisotropic permeability conditions, investigating a variation of the material parameters. Subsequently, the POD-DEIM approach is applied to the intervertebral-disc model specified in Subsection 3.3.3, studying the simulation of different deformation states. In this regard, the selection of specific snapshots is investigated extensively, since an appropriate choice strongly affects the quality of the reduced simulations. The main focus in all the numerical examples is on the accuracy of the reduced models, as well as on the resulting time savings due to the system reduction. Finally, a generalised approach for the reduction of a coupled system of equations using the evolved modifications is presented in order to enable a systematic adaptation to other models. In all cases, the numerical implementation is realised with the coupled FE solver PANDAS. All computations were performed on a single core of an Intel i5-4590 with 32 GB of memory running at a clock speed of 3.30 GHz. While the FE meshes for simple geometrical problems are generated directly in the FE solver, the program package CUBIT is used to define FE meshes for more complex geometries. Moreover, the deter[…]
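The snapshot-POD step at the core of this chapter can be sketched in a few lines. This is a generic sketch, not the PANDAS implementation: snapshots are assumed to be stored column-wise, and the relative-energy truncation criterion is illustrative.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Proper orthogonal decomposition of a snapshot matrix.

    snapshots : (n_dof, n_snapshots) array, one solution state per column
    energy    : fraction of snapshot 'energy' the reduced basis must capture
    returns   : (n_dof, r) orthonormal basis of the reduced subspace
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cumulative, energy)) + 1
    return U[:, :r]

# Reduced linear system: project K u = f onto the POD subspace
rng = np.random.default_rng(0)
S = rng.standard_normal((1000, 40))          # stand-in snapshot matrix
V = pod_basis(S)
K = np.eye(1000); f = rng.standard_normal(1000)
u_r = np.linalg.solve(V.T @ K @ V, V.T @ f)  # small r x r solve
u_approx = V @ u_r                           # lift back to full space
```

For the nonlinear models discussed above, this projection alone does not pay off, because the nonlinear terms still have to be evaluated in full dimension; that is the gap the DEIM closes by interpolating the nonlinearity at a few selected degrees of freedom.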

Local Evaluation of Policies for Discounted Markov Decision Problems

The outline of the chapter is as follows. The construction of lower and upper bounds on one component of the optimal value vector is described in Section 3.1. In Section 3.2 we present our approximation theorem, showing that an ε-approximation for the component can be obtained by taking into account only a local part of the state space whose size is independent of the total number of states in the MDP. Section 3.3 introduces the foundation of our approximation algorithm based on column generation, which is the main contribution of this thesis. In Section 3.4 we describe how the algorithm can also be utilized to obtain approximations for a concrete policy or a single action. Finally, Section 3.5 is devoted to several theoretical and practical issues related to the column generation algorithm. In particular, we investigate the structure of the encountered dual linear programs, derive a combinatorial formula for the reduced profits, and propose various pricing strategies and approximation heuristics. Moreover, we develop an equivalent variant of the approximation algorithm that refrains from using linear programming techniques and employs the policy iteration method instead.
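For orientation, the object being approximated locally is the value vector of a discounted MDP. The sketch below shows plain global policy evaluation by fixed-point iteration, the textbook baseline the chapter's local, column-generation-based approach improves on; it is not the thesis's algorithm, and all names are illustrative.

```python
import numpy as np

def evaluate_policy(P, r, gamma=0.95, tol=1e-8):
    """Value vector v of a fixed policy: the fixed point of v = r + gamma * P v.

    P : (n, n) row-stochastic transition matrix under the policy
    r : (n,) expected one-step rewards under the policy
    The contraction bound ||v_k - v*|| <= gamma/(1-gamma) * ||v_k - v_{k-1}||
    justifies the stopping test below.
    """
    v = np.zeros(len(r))
    while True:
        v_new = r + gamma * P @ v
        if np.max(np.abs(v_new - v)) < tol * (1 - gamma) / gamma:
            return v_new
        v = v_new

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
r = np.array([1.0, 0.0])
print(evaluate_policy(P, r))
```

Note that this sweep touches every state; the point of the local evaluation developed in this chapter is to bound a single component v_i while visiting only a neighborhood of state i.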

Multiple Criteria Decision Analysis Techniques in Aircraft Design and Evaluation Processes

The main characteristics of all versions of the ELECTRE methods were summarized by Roy (68), as shown in Table 2.4. Considering different problem statements, some guidelines on how to choose among the ELECTRE methods were also suggested. For instance, if it is truly essential to work with a very simple method and it is realistic to have no information on the indifference and preference thresholds, ELECTRE I should be selected in order to identify the set of non-dominated alternatives, while ELECTRE II should be used in order to build a partial pre-order of the alternatives. ELECTRE IV would be convenient only if there is a good reason for refusing the introduction of importance coefficients. In general, ELECTRE IS, II, III, IV, and TRI provide powerful support for the classification of alternatives. However, they require too many threshold definitions from DMs, and it is therefore rather complex to implement these methods in real-world problems (60).

Decision making in structural engineering problems under polymorphic uncertainty

The considered engineering system is a portal frame under vertical and horizontal point loads. It is a simple mechanical system that helps avoid misinterpretation of results with respect to system behavior and failure states. On the other hand, it possesses two competing failure modes, material failure and stability failure, which make the limit state function strongly nonlinear and sensitive to uncertainties. Such a combination of failure modes is typical for many structural systems. The benchmark should demonstrate how real decision making can work in the case of polymorphic uncertainties. One of the crucial issues is an objective comparison of results of different origin, namely probabilistic and non-probabilistic ones. Perhaps this could be done on the basis of the decisions made. The second challenge of the benchmark deals with the question of how data assimilation can help in the decision making. Finally, a design problem under polymorphic uncertainty is given in which more demanding operating requirements have to be fulfilled.

A comparative analysis of machine learning methods for classification type decision problems in healthcare

Keywords: Classification; Data mining; Machine learning; Decision making; Asthma; Pulmonary sound signals; Discrete wavelet transformation

Background
As decision situations become increasingly complex, advanced analytical techniques are gaining popularity in addressing a wide variety of problem types (descriptive, predictive and prescriptive) in many fields, including healthcare and medicine (Delen et al. 2009). Because of the rapid increase in the collection and storage of large quantities of data (facilitated by improving software and hardware capabilities coupled with the decreasing cost of acquiring and using them), data- and model-driven decision making (a.k.a. analytics) is becoming a mainstream practice in every field imaginable (from art to business, medicine to science). One area where faster and better decisions could make a significant difference is healthcare/medicine. This data-rich field can undoubtedly use what modern-day decision analytics has to offer (Oztekin et al. 2009). In this study, we used analytics to address a classification type decision problem, namely the prediction of asthma using only chest sound signals obtained from actual patients using ordinary microphones.
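To make the wavelet feature-extraction step concrete, here is a minimal numpy-only sketch of a multi-level Haar discrete wavelet transform with per-band energy features, in the spirit of (but not identical to) the features the study derives from the sound signals; the function names and the relative-energy feature choice are illustrative.

```python
import numpy as np

def haar_dwt_levels(x, levels=4):
    """Multi-level Haar DWT: returns the detail coefficients per level
    plus the final approximation. The signal is truncated to even
    length at each level for simplicity."""
    details = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a = a[: len(a) - len(a) % 2]
        pairs = a.reshape(-1, 2)
        approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
        detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
        details.append(detail)
        a = approx
    return details, a

def band_energy_features(x, levels=4):
    """Relative energy per wavelet band -- a compact feature vector
    that any standard classifier can consume."""
    details, approx = haar_dwt_levels(x, levels)
    energies = np.array([np.sum(d**2) for d in details] + [np.sum(approx**2)])
    return energies / energies.sum()

signal = np.sin(np.linspace(0, 40 * np.pi, 1024)) + 0.1 * np.random.randn(1024)
print(band_energy_features(signal))
```

Each signal is thus compressed into a handful of band energies, so the downstream classification problem stays low-dimensional regardless of the recording length.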

Combination of LiDAR and SAR data with simulation techniques for image interpretation and change detection in complex urban scenarios

[…] provided in the literature. This dissertation contributes a pixel-based algorithm to detect increased backscattering in SAR images by analyzing the SAR pixel values according to simulated layers. To detect demolished buildings, simulated images are generated using LiDAR data. Two comparison operators (normalized mutual information and joint histogram slope) are used to compare image patches related to the same buildings. An experiment using Munich data has shown that both of them provide an overall accuracy of more than 90%. A combination of these two comparison operators using decision trees improves the result. The fourth objective is to detect changes between SAR images acquired with different incidence angles. For this purpose, three algorithms are presented in this dissertation. The first algorithm is a building-level algorithm based on layer fill. Image patches related to the same buildings in the two SAR images are extracted using simulation methods. For each extracted image patch pair, the change ratio based on the fill ratio of the building layers is estimated. The change ratio values of all buildings are then classified into two classes using the EM algorithm. This algorithm works well for buildings of different sizes and shapes in complex urban scenarios. Since the whole building is analyzed as one object, buildings with partly demolished walls may not be detected. Following the same idea, a wall-level change detection algorithm was developed. Image patches related to the same walls in the two SAR images are extracted and converted to have the same geometry. These converted patch pairs are then compared using change ratios based on fill ratio or fill position. Lastly, the wall change results are fused to provide a building-level change result. Compared to the building-level change detection algorithm, this method is more time consuming, but yields better results for partly demolished buildings. A combination of these two algorithms is therefore suggested, whereby the building-level method is used for all buildings and the wall-level method additionally for selected large buildings. The third developed algorithm is a wall-level change detection algorithm based on point-feature location. To this end, local maximum points in the two SAR images corresponding to the same building façade are compared. This method provides promising results for the present data. It may work even better for future data with increased resolution, detecting changes of detailed façade structures.
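One of the two comparison operators, normalized mutual information between co-registered image patches, can be computed directly from a joint intensity histogram. The sketch below is a generic NMI implementation under the assumption of equal-width intensity bins, not the dissertation's exact operator.

```python
import numpy as np

def normalized_mutual_information(patch_a, patch_b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B) from the joint intensity histogram.
    Values near 2 indicate strongly dependent (similar) patches,
    values near 1 indicate independence (e.g., a demolished building)."""
    hist, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1)            # marginal of patch_a
    py = pxy.sum(axis=0)            # marginal of patch_b
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))
    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

a = np.random.rand(64, 64)
print(normalized_mutual_information(a, a))                        # ~2.0
print(normalized_mutual_information(a, np.random.rand(64, 64)))   # ~1.0
```

Because NMI only measures statistical dependence, it tolerates the radiometric differences between a simulated patch and a real SAR patch better than a direct intensity difference would.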

Accelerated decomposition techniques for large discounted Markov decision processes

Many hierarchical techniques to solve large Markov decision processes (MDPs) are based on the partition of the state space into strongly connected components (SCCs) that can be classified into levels. In each level, smaller problems named restricted MDPs are solved, and these partial solutions are then combined to obtain the global solution. In this paper, we first propose a novel algorithm, a variant of Tarjan's algorithm, that simultaneously finds the SCCs and the levels they belong to. Second, a new definition of the restricted MDPs is presented to improve some hierarchical solutions for discounted MDPs using the value iteration (VI) algorithm based on a list of state-action successors. Finally, a robotic motion-planning example and the experimental results are presented to illustrate the benefit of the proposed decomposition algorithms.
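The decomposition rests on Tarjan's algorithm. Below is a compact sketch of the classical single-purpose version, which only returns the SCCs; the paper's variant additionally computes each component's level in the same pass, which is not reproduced here.

```python
def tarjan_scc(graph):
    """Strongly connected components of a directed graph
    ({node: [successors]}), emitted in reverse topological order --
    exactly the order in which restricted MDPs can be solved bottom-up."""
    index, low, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def visit(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop(); on_stack.discard(w); comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in graph:
        if v not in index:
            visit(v)
    return sccs

g = {0: [1], 1: [2], 2: [0, 3], 3: [4], 4: [3]}
print(tarjan_scc(g))   # [[4, 3], [2, 1, 0]] -> solve the sink SCC {3, 4} first
```

Solving the sink components first and propagating their values upward is what turns one large value-iteration run into a sequence of much smaller ones.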

High-order methods for convection dominated nonlinear problems using multilevel techniques

[…] the systems of ODEs arising in the high-order context is that constraints, such as memory requirements and stability limits (for explicit relaxation), become stricter. Furthermore, experience shows that more effort is needed in order to achieve both algorithmic and storage efficiency [122]. Some advances have already been made: as just one example, one of the five cores of the European ADIGMA research project [121, 122] was dedicated to solution strategies. Nonetheless, many questions still have to be considered in order to make high-order methods competitive. The role played by relaxation is essential to efficiency, with respect to both computational cost and storage requirements. In general, the larger the number of 'difficulties' in the problem to be solved (stability issues, stiffness of the matrix, memory limitations, ill-conditioning, slow convergence), the larger the number of choices (which combination of methods leads to the solution with minimum effort) and the number of parameters to be tuned. Sections 3.2 and 3.3 are devoted to explicit and implicit techniques, respectively. There we also turn our attention to stability issues, multigrid methods, storage issues and preconditioning. In particular, we review those existing time-relaxation techniques which play a role in our best-practice relaxation strategy (presented in Chapter 4), focusing on the aspects relevant to our work. In this chapter, efficiency is addressed from both a computational and an implementation point of view, in the latter case involving the support of libraries and automatic differentiation (see Section 3.4).

Hybrid Solving Techniques for Project Scheduling Problems

The search process in SAT works as follows. Initially all variables are unassigned. In each node of the search tree, unit domain propagation is performed, i.e., constraint propagation over each boolean formula is applied. Thereby, the variables of each constraint (each formula) are checked as to whether they can be fixed to 1 (true) or 0 (false). For example, if all but one literal of a clause are false, the remaining variable can be set such that the clause is satisfied. Local search heuristics are used in order to find a feasible assignment. As soon as the first feasible assignment has been found, the algorithm stops. When constraint propagation cannot detect any further fixings, some variable is fixed to true or false (the branching step). The branching variable is chosen according to its importance in former propagations and its involvement in conflicts. If all literals of at least one clause are fixed to zero, the subproblem is infeasible. This is called a conflict. Conflicts are analyzed in order to produce conflict clauses [180]. During this analysis, a conflict graph is created. Such a graph is a logical expression tree whose vertices correspond to variable fixings (zero or one). There is a directed edge between two vertices u and v if the fixing of the variable belonging to u deduced the fixing of the variable belonging to v. Cuts in that graph that separate the branching decisions from the infeasibility yield new clauses. Furthermore, the analysis may show that some branching decision (variable fixing) in the last subtree at depth level ℓ yielded the infeasibility together with other fixings at some depth level ℓ′ < ℓ. Then, the negation of the branching decision can already be applied at depth level ℓ′. This is called non-chronological backtracking. The concept of conflict analysis in our framework with integer variables is explained in more detail in Section 2.2.1.
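The unit-propagation rule described above is easy to state in code. The sketch below uses a naive clause scan rather than the watched-literal scheme real SAT solvers employ; the DIMACS-style clause representation (nonzero integers, negative = negated) and the function name are illustrative.

```python
def unit_propagate(clauses, assignment):
    """Repeatedly fix variables forced by unit clauses.

    clauses    : list of lists of nonzero ints (DIMACS-style literals)
    assignment : dict var -> bool, extended in place
    returns    : "conflict" if some clause has all literals false,
                 otherwise "ok" once no unit clause remains
    """
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                var, want = abs(lit), lit > 0
                if var not in assignment:
                    unassigned.append(lit)
                elif assignment[var] == want:
                    satisfied = True
                    break
            if satisfied:
                continue
            if not unassigned:
                return "conflict"           # all literals fixed to false
            if len(unassigned) == 1:        # unit clause: forced fixing
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return "ok"

# (x1 or x2) and (not x1 or x3) and (not x3 or not x2), after branching x1 = true
clauses = [[1, 2], [-1, 3], [-3, -2]]
assignment = {1: True}
print(unit_propagate(clauses, assignment), assignment)
# -> ok {1: True, 3: True, 2: False}
```

In the example, the single branching decision x1 = true deduces both remaining variables, which is the propagation chain a conflict graph would record edge by edge.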

Are multi-criteria decision making techniques useful for solving corporate finance problems? A bibliometric analysis

2.4 Search and classification procedure
The selection of materials (the documents) was performed in two stages. First, we carried out a search in the Scopus database, including a comprehensive set of keywords related to both the field of corporate finance (capital budgeting, working capital, financial planning, financial performance evaluation, etc.) and the field of MCDM (multi-attribute utility theory, multi-objective programming, goal programming, preference disaggregation, etc.). The keywords were combined using the logical operators "OR", indicating that at least one word from each field had to appear in the search output, and "AND", in order to obtain the intersection of the keywords of the two knowledge fields. In this first stage 1,417 papers were obtained. In the second stage, we read the abstracts and eliminated those papers not related to the field of corporate finance, as well as those that did not really use MCDM techniques. Thus, the sample was reduced to 339 papers and 8 books.
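The two-field keyword combination can be written down as a query template. The keyword lists below are abbreviated stand-ins for the authors' comprehensive sets, and the Scopus-style field syntax is shown only for illustration, not taken from the paper.

```python
finance_terms = ["capital budgeting", "working capital",
                 "financial planning", "financial performance evaluation"]
mcdm_terms = ["multi-attribute utility theory", "multi-objective programming",
              "goal programming", "preference disaggregation"]

def or_block(terms):
    # OR within a field: at least one of its keywords must appear
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# AND between the two fields: the intersection of both keyword sets
query = f"TITLE-ABS-KEY({or_block(finance_terms)} AND {or_block(mcdm_terms)})"
print(query)
```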

Algebraic Techniques for Satisfiability Problems

[…] cases which have the same algebraic behavior, but lead to different degrees of complexity. Hence we need to go beyond the classification provided by the algebraic properties and perform a finer analysis of the cases. It turns out that the problem is still dichotomic in nature, revealing that each of these problems is equivalent to the standard "complete" problems of standard complexity classes inside P. Finally, in Chapter 5, we consider quantified constraint formulas. These are generalizations of the usual constraint formulas, in which the quantifiers ∃ and ∀ are additionally allowed to occur. As hinted above, such formulas can be used to describe settings where two opponents are working against each other. It is well known that adding these quantifiers to the formulas raises the complexity of the involved decision problems significantly: the problems we consider in this chapter are prototypical for the classes of the polynomial hierarchy and for the class PSPACE, containing all computational problems which can be solved in polynomial space. We study various problems for these formulas: first, we consider the formula evaluation problem in this context, and the closely related model checking problem. Another very interesting decision problem is the equivalence problem, where we ask whether two formulas have the same set of satisfying assignments. This question is very important in practice, since it can be used to decide whether two given database queries are equivalent, whether a program behaves as its specification demands, or whether two games have the same winning strategies.

Machine learning based decision support for a class of many-objective optimisation problems

First and foremost, my utmost gratitude goes to Dr. Dhish Kumar Saxena, not only as a supervisor but also as a friend. As his student, I have been extremely lucky to benefit from his very rigorous expertise, his constant encouragement, and his vision. He is a truly remarkable person, and this thesis would not have been possible without his constant support. I am also truly indebted and thankful for all the time he spent in innumerable technical discussions, and for acting as a father figure in teaching me how to become a better human being through sincerity, hard work, honesty, ambition, and, most importantly, attitude. I would also like to thank my two co-supervisors, Professor Ashutosh Tiwari and Dr. Evan Hughes, for their constant support, input and encouragement during my PhD. I am also grateful to Professor Qingfu Zhang for his expertise, and to Professor Rajkumar Roy for his support. Moreover, I want to thank Dr. Keshav Dahal and Professor Mark Savill for having read and marked this thesis, and for their useful detailed comments, positive feedback, and challenging discussion.

Novel modeling approaches for combinatorial optimization problems with binary decision variables / Christian Truden

Our experiments show that the ANS heuristic is capable of finding a larger number of feasible delivery slots than the Simple Insertion heuristic, requiring run times that are suited for A[…]

Combination of planar laser optical measurement techniques for the investigation of pre-mixed lean combustion

The lean combustion concept is considered the most promising approach to achieving an engine which is both more fuel efficient and at the same time low in emissions. Here the air-fuel mixture injected by the burners is already lean, and in conjunction with partial pre-mixing, the existence of near-stoichiometric regions can be reduced significantly. By this means the peak temperature, and thus the NOx production rate, is substantially reduced, allowing a further increase of the overall temperature level to improve fuel efficiency. In comparison to conventional combustors, a complete redistribution of the air flow into the combustor is required, which also affects the amount of air available for cooling of the combustor walls. For a combustor with a lean-burning primary zone, the cooling air consumption has to be reduced by roughly 50%, leading to a sparse wall cooling film. Combustor cooling becomes even more demanding as the overall temperature (including the cooling air temperature) and pressure level will increase. This requires, among other things, the development of cooling concepts with increased efficiency (see e.g. [3]). The key to understanding lean combustion in aero engines is the characterization and understanding of the interaction between the highly turbulent, swirling and reacting burner flow field and the wall cooling film. To the knowledge of the authors, aside from experiments performed at laboratory scale [18][7][13], there is very little experimental data obtained at realistic operating conditions using non-intrusive laser-optical diagnostic methods.

Multigrid methods and sparse-grid collocation techniques for parabolic optimal control problems with random coefficients

In the regime of small σ (or γ), however, the TS-CGS iteration cannot provide robust smoothing, because the coupling in the space direction becomes weak and pointwise relaxation in space is therefore not effective in reducing the high-frequency components of the error. To overcome this problem, block relaxation of the variables that are strongly connected must be performed. In our case, this means solving for the pairs of state and adjoint variables along the time direction for each space coordinate. To describe this block Gauss–Seidel procedure, consider the discrete optimality system (11) at any i, j and for all time steps. For simplicity, we use the optimality condition to eliminate the control variable. Thus, for each spatial grid point i, j, a block-tridiagonal system is obtained, where each block is a 2 × 2 matrix corresponding to the pair (y, p). This block-tridiagonal system has the following form: […]
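The displayed system itself is not reproduced in this excerpt, but solving any such block-tridiagonal system along the time line amounts to a block Thomas algorithm with 2 × 2 blocks. The following numpy sketch (illustrative names, not the paper's implementation) shows that linear-algebra kernel, assuming the sub-, main- and super-diagonal blocks are supplied as arrays.

```python
import numpy as np

def block_thomas(A, B, C, d):
    """Solve a block-tridiagonal system with 2x2 blocks.

    B : (N, 2, 2)   main-diagonal blocks
    A : (N-1, 2, 2) sub-diagonal blocks   (row i couples to unknown i-1)
    C : (N-1, 2, 2) super-diagonal blocks (row i couples to unknown i+1)
    d : (N, 2)      right-hand side; returns x with shape (N, 2)
    """
    N = len(B)
    Bw, dw = B.copy(), d.copy()
    for i in range(1, N):                       # forward elimination
        m = A[i - 1] @ np.linalg.inv(Bw[i - 1])
        Bw[i] = Bw[i] - m @ C[i - 1]
        dw[i] = dw[i] - m @ dw[i - 1]
    x = np.empty_like(dw)
    x[N - 1] = np.linalg.solve(Bw[N - 1], dw[N - 1])
    for i in range(N - 2, -1, -1):              # back substitution
        x[i] = np.linalg.solve(Bw[i], dw[i] - C[i] @ x[i + 1])
    return x

# Smoke test against a dense solve
rng = np.random.default_rng(1)
N = 5
B = rng.standard_normal((N, 2, 2)) + 4 * np.eye(2)   # diagonally dominant
A = 0.1 * rng.standard_normal((N - 1, 2, 2))
C = 0.1 * rng.standard_normal((N - 1, 2, 2))
d = rng.standard_normal((N, 2))
x = block_thomas(A, B, C, d)

dense = np.zeros((2 * N, 2 * N))
for i in range(N):
    dense[2*i:2*i+2, 2*i:2*i+2] = B[i]
    if i > 0:     dense[2*i:2*i+2, 2*i-2:2*i]   = A[i - 1]
    if i < N - 1: dense[2*i:2*i+2, 2*i+2:2*i+4] = C[i]
assert np.allclose(dense @ x.ravel(), d.ravel())
```

One such small solve per spatial grid point is exactly the t-line (time-direction) relaxation step performed inside the block Gauss–Seidel smoother.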
