Probabilistic Analysis of Discrete Optimization Problems

Equipped with a very robust probabilistic analysis for the Nemhauser/Ullmann algorithm, we aim in this section at analyzing more advanced algorithmic techniques for the knapsack problem. Our focus lies on the analysis of core algorithms, the predominant algorithmic concept used in practice. Despite the well-known hardness of the knapsack problem on worst-case instances, practical studies show that knapsack core algorithms can solve large-scale instances very efficiently [Pis95, KPP04, MPT99]. For example, there are algorithms that exhibit almost linear running time on purely random inputs. For comparison, the running time of the Nemhauser/Ullmann algorithm for this class of instances is about cubic in the number of items. Core algorithms make use of the fact that the optimal integral solution is usually very similar to the optimal fractional solution in the sense that only a few items need to be exchanged in order to transform one into the other. Obtaining an optimal fractional solution is computationally very inexpensive. The idea of the core concept is to fix most variables to the values prescribed by the optimal fractional solution and to work with only a small number of free variables, called the core (items). The core problem itself is again a knapsack problem with a different capacity bound and a subset of the original items. Intuitively, the core contains those items for which it is hard to decide whether or not they are part of an optimal knapsack filling. We will apply the Nemhauser/Ullmann algorithm to the core problem and exploit the remaining randomness of the core items to upper bound the expected number of enumerated Pareto points. This leads to the first theoretical result on the expected running time of a core algorithm that comes close to the results observed in experiments. In particular, we will prove an upper bound of O(n polylog(n)) on the expected running time on instances with n items whose profits and weights are drawn independently, uniformly at random. In addition, we investigate harder instances in which profits and weights are pairwise correlated. For this class of instances, we can prove a tradeoff describing how the degree of correlation influences the running time.
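To make the enumeration step concrete, the following is a minimal sketch of Pareto-point enumeration in the spirit of the Nemhauser/Ullmann algorithm on a uniformly random instance; it is an illustration only, with invented function and variable names, and it omits the core-selection step discussed above.

```python
import random

def nemhauser_ullmann(items):
    """Enumerate Pareto-optimal (weight, profit) pairs over all subsets of `items`.

    items: list of (profit, weight) pairs.
    Returns the Pareto set sorted by weight; the optimum for capacity c is the
    most profitable Pareto point of weight at most c.
    """
    pareto = [(0.0, 0.0)]  # (weight, profit) of the empty solution
    for profit, weight in items:
        # Shift the current Pareto set by the new item and merge the two lists.
        shifted = [(w + weight, p + profit) for (w, p) in pareto]
        merged = sorted(pareto + shifted, key=lambda t: (t[0], -t[1]))
        new_pareto, best_profit = [], float("-inf")
        for w, p in merged:
            if p > best_profit:          # keep only non-dominated points
                new_pareto.append((w, p))
                best_profit = p
        pareto = new_pareto
    return pareto

# Profits and weights drawn independently, uniformly at random, as in the input model above.
n = 200
items = [(random.random(), random.random()) for _ in range(n)]
pareto = nemhauser_ullmann(items)
capacity = n / 4
best = max((p for w, p in pareto if w <= capacity), default=0.0)
print(len(pareto), best)
```

On such uniformly random instances the Pareto set is expected to stay polynomially small, which is what the probabilistic analysis sketched above exploits.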

Analysis of the Discrete Theory of Radiative Transfer in the Coupled "Ocean-Atmosphere" System: Current Status, Problems and Development Prospects

As will be shown in this paper, all numerical methods are based on one or another way of replacing the scattering integral with a finite sum, which replaces the desired continuous brightness distribution with discrete values or with a set of coefficients for the expansion of this distribution over a system of functions. Thus, the transfer equation, its solutions, and all the corollaries from it acquire a discrete matrix form. In this case, the only approximation is the replacement of the integral by the sum, and all other conclusions can be drawn strictly analytically. In fact, we can speak of a discrete transfer theory. A. N. Kolmogorov [3] pointed out that, with the development of modern computer technology, it is in many cases reasonable to study real phenomena by moving directly to discrete models, avoiding the intermediate stage of their stylization in the spirit of the mathematics of the infinite and continuous.
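As a toy illustration of this discretization idea (not the paper's actual scheme), the scattering integral ∫ K(μ, μ′) I(μ′) dμ′ can be replaced by a Gauss–Legendre quadrature sum, so that the integral operator becomes a matrix acting on the nodal values of the brightness distribution; the kernel below is an invented smooth stand-in for a real phase function.

```python
import numpy as np

# Hypothetical smooth scattering kernel, a stand-in for a real phase function.
def kernel(mu, mu_prime):
    return 0.5 * (1.0 + 0.5 * mu * mu_prime)

# Gauss-Legendre nodes and weights on [-1, 1]: the finite sum replacing the integral.
N = 16
nodes, weights = np.polynomial.legendre.leggauss(N)

# The integral operator (K I)(mu_i) = ∫ K(mu_i, mu') I(mu') dmu' becomes a matrix:
# (K_disc @ I_disc)_i = sum_j w_j K(mu_i, mu_j) I(mu_j).
K_disc = kernel(nodes[:, None], nodes[None, :]) * weights[None, :]

# Apply the discrete operator to a sample brightness distribution I(mu) = 1 + mu^2.
I_disc = 1.0 + nodes**2
print(K_disc @ I_disc)
```

Once the operator is in this matrix form, the transfer equation and its corollaries can indeed be manipulated exactly, as the text argues.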

Incremental and Compositional Probabilistic Analysis of Programs

We have developed an incremental and compositional approach for the approximation of the solution space of complex nonlinear constraints. We also presented an approach for counting the solution space of constraints over data structures. This allows us to extend symbolic execution to perform a probabilistic analysis, i.e., the computation of path condition probabilities. We also allowed adding uncertainty about the input values of the analyzed program and taking such uncertainty into account when computing the probability of a path condition. Our initial experiments are promising; however, our approach has some scalability issues, especially when counting solutions for constraints over data structures. We plan to explore optimization schemes to improve its performance. We also plan to open-source our tool.
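As a toy sketch of the basic quantity being computed (not the authors' tool or constraint solver), the probability of a path condition over a bounded integer input domain is the number of satisfying assignments divided by the size of the domain; the condition and domain below are invented for illustration.

```python
from itertools import product

# Toy path condition as it might be collected by symbolic execution.
def path_condition(x, y):
    return x > y and x + y < 10

# Bounded integer input domain with a uniform usage profile assumed.
DOMAIN = range(0, 16)

satisfying = sum(1 for x, y in product(DOMAIN, DOMAIN) if path_condition(x, y))
total = len(DOMAIN) ** 2
print("path probability:", satisfying / total)
```

Non-uniform input distributions, as mentioned above, amount to replacing the plain count by a sum of input weights over the satisfying assignments.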

Shadow price of information in discrete time stochastic optimization

Abstract The shadow price of information has played a central role in stochastic optimization ever since its introduction by Rockafellar and Wets in the mid-seventies. This article studies the concept in an extended formulation of the problem and gives relaxed sufficient conditions for its existence. We allow for general adapted decision strategies, which enables one to establish the existence of solutions and the absence of a duality gap e.g. in various problems of financial mathematics where the usual boundedness assumptions fail. As applications, we calculate conjugates and subdifferentials of integral functionals and conditional expectations of normal integrands. We also give a dual form of the general dynamic programming recursion that characterizes shadow prices of information.

Multilevel optimization problems with linear differential-algebraic equations

In order to compute solutions of multilevel optimal control problems, we are interested in computing the change of solutions with respect to higher level variables. Sensitivity analysis for ordinary differential equations (ODEs) and also for differential-algebraic equations (DAEs) has been addressed by many authors. In [SP02], sensitivity analysis is done for implicit ODEs with boundary conditions. In [CLPS03], the case of general index-1 DAEs and of DAEs of index 2 in Hessenberg form with given initial values has been treated. Adjoint equations for the tractability index have been analyzed in [BL05; BM00]. For a comparison of the different index concepts we refer to, e.g., [Meh15].
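For orientation, the following is a minimal sketch of forward sensitivity analysis on a scalar ODE (an explicit stand-in for the DAE setting treated in the cited works): the sensitivity s = ∂y/∂p satisfies s′ = f_y s + f_p and is integrated together with the state; the example problem and parameter value are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy ODE y' = -p * y with parameter p; the sensitivity s = dy/dp satisfies
# s' = (df/dy) * s + df/dp = -p * s - y, with s(0) = 0 since y(0) does not depend on p.
def augmented_rhs(t, z, p):
    y, s = z
    return [-p * y, -p * s - y]

p = 0.7
sol = solve_ivp(augmented_rhs, (0.0, 5.0), [1.0, 0.0], args=(p,), dense_output=True)

# Compare with the analytic solution y = exp(-p t), dy/dp = -t exp(-p t).
t = 3.0
y_num, s_num = sol.sol(t)
print(s_num, -t * np.exp(-p * t))
```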

Convex Optimization for Inequality Constrained Adjustment Problems

$\Sigma\{\tilde X\} = E\{(\tilde X - E\{\tilde X\})(\tilde X - E\{\tilde X\})^{T}\}$. (4.11) However, as mentioned before, this second central moment would not contain the full stochastic information in the inequality constrained case because we have to deal with truncated PDFs. Therefore, it is more conducive to compute an m-dimensional histogram of the parameters. This histogram can be seen as a discrete approximation of the joint PDF of the parameters. Approximations of the marginal densities can be computed the same way, adding up the particular rows of the hyper matrix of the histogram. The quality of approximation of the continuous PDF depends directly on M (cf. Alkhatib and Schuh, 2007), which therefore has to be chosen in a way that allows a satisfactory approximation while keeping the computation time at an acceptable level. In each Monte Carlo iteration a new optimization problem has to be solved. However, as the solution of the original ICLS problem can be used as a feasible initial value for the parameters, convergence of the active-set method is usually quite fast.
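A small sketch of the histogram step described here, assuming the M Monte Carlo solutions are already collected row-wise in an array (the synthetic samples and the crude truncation below are placeholders for the actual ICLS output): numpy's histogramdd yields the m-dimensional histogram, and marginals follow by summing out the remaining axes.

```python
import numpy as np

# Placeholder for the M Monte Carlo solutions of the ICLS problem (m = 2 parameters).
rng = np.random.default_rng(0)
M = 50_000
samples = rng.multivariate_normal([1.0, 2.0], [[0.04, 0.01], [0.01, 0.09]], size=M)
samples = samples[samples[:, 0] >= 1.0]      # crude stand-in for an inequality constraint

# m-dimensional histogram as a discrete approximation of the joint (truncated) PDF.
hist, edges = np.histogramdd(samples, bins=40, density=True)

# Marginal density of the first parameter: sum out axis 1, weighted by that bin width.
bin_width_1 = np.diff(edges[1])[0]
marginal_0 = hist.sum(axis=1) * bin_width_1
print(marginal_0.shape)
```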

State of the art at DLR in solving aerodynamic shape optimization problems using the discrete viscous adjoint method

[Slide excerpt, FlowHead Conference, 28 March 2012: optimization chain of Starting Geometry → Parameterisation → Mesh procedure → Flow simulation → Objective.]

Estimation and optimization problems in power markets

The liberalization of energy markets has induced generators, suppliers and large-scale end users to trade actively on the market. Actors like energy utilities maintain a variety of trading relations for the purchase and sale of electricity and are about to abandon the (still) widespread long-term full supply and purchase contracts. An energy utility has (or is modeled by) a portfolio of purchase and supply contracts for electricity. Any market movement leads to a change of purchase and sales possibilities and thus to a change of the portfolio value. In that sense, portfolio analysis is understood as the process of measuring and controlling the ratio of risk and return of the portfolio. An energy utility with different fuel supply sources, in contrast to one with a long-term full supply contract, faces different types of risks in the liberalized energy market. The sources of risk are wide-ranging, to name just the market price risk, the fuel price risk, the risk of investing in production capacities, and the volume risk. Thus portfolio management is closely related to risk management, and plant managers need a tool to quantify these risks. Therefore it is necessary to employ techniques that accurately incorporate the uncertain environment in the portfolio and risk management process. Uncertainty in the electricity market is additionally evoked by a number of factors such as political changes, weather changes or plant outages. Looking at historical electricity spot price series clearly reflects that uncertain environment and sets them apart from stock prices or equity index values. The series show sudden increases in value (known as electricity spikes) and high levels of volatility. Besides that, they show a tendency to revert to a long-term mean level. Such behavior is often referred to as the mean-reverting property of electricity prices. Moreover, one detects a seasonal pattern. The spot market is a market where
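To illustrate the mean-reverting behaviour and the spikes mentioned here, the following toy simulation uses a discretized mean-reverting process on log prices with rare upward jumps; all parameter values are invented and not calibrated to any market.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters: mean-reversion speed, long-term log level, volatility, spike model.
kappa, mu, sigma = 5.0, np.log(40.0), 0.8
jump_prob, jump_size = 0.01, 1.5
dt, n_steps = 1.0 / 365.0, 2 * 365

log_p = np.empty(n_steps)
log_p[0] = mu
for t in range(1, n_steps):
    drift = kappa * (mu - log_p[t - 1]) * dt                  # pull back to the long-term level
    diffusion = sigma * np.sqrt(dt) * rng.standard_normal()
    jump = jump_size * rng.random() if rng.random() < jump_prob else 0.0
    log_p[t] = log_p[t - 1] + drift + diffusion + jump

prices = np.exp(log_p)
print(prices.mean(), prices.max())   # spikes show up as a heavy right tail
```

A seasonal component, also mentioned above, could be added as a deterministic function of the day of the year on top of the long-term level.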

A note on the existence of nonsmooth nonconvex optimization problems

where G is an extended, real-valued functional on a Banach space X. The mapping G is not necessarily convex or smooth. There is no explicit assumption on G to be bounded from below. We provide sufficient conditions for (P) to have a solution and give some inherently infinite-dimensional examples to demonstrate their applicability. Problem (P) has been addressed in several contributions before. In [2] the authors also consider the infinite-dimensional case and provide examples from linear and nonlinear elasticity theory. Our condition is somewhat weaker than the condition in [2], and we provide different examples. In [1] the finite-dimensional cases are studied in much detail. In [3] necessary and sufficient conditions for existence of solutions to (P) are obtained in terms of the asymptotic behavior of G along sequences which are candidates for being minimizing sequences. While this is an elegant asymptotic analysis, for verifying existence in concrete applications the conditions given below are more direct and remain of independent importance.
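For orientation only, the classical direct-method existence result, which the weaker condition of this note refines, reads as follows (background material, not the note's own condition):

```latex
% Background: the classical direct-method existence result (the note's condition is weaker).
\[
  (P)\qquad \inf_{x \in X} G(x), \qquad
  G \colon X \to \mathbb{R} \cup \{+\infty\} \ \text{proper}.
\]
If $X$ is a reflexive Banach space and $G$ is sequentially weakly lower
semicontinuous and coercive, i.e.\ $G(x) \to +\infty$ as $\|x\|_X \to \infty$,
then $(P)$ admits a global minimizer.
```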

Polyhedral aspects of cardinality constrained combinatorial optimization problems

CHAPTER 2: This chapter is dedicated to cardinality constrained matroids and polymatroids. It serves, among other things, as an example for the polyhedral analysis of the cardinality constrained version of a polynomial time solvable combinatorial optimization problem. Maurras [61] has given a complete linear description of the cardinality constrained matroid polytope. We give an elementary proof of this result. Moreover, we characterize the facets of this polytope and state a polynomial time separation procedure. Based on the results for the cardinality constrained matroid, we give a complete linear description of the cardinality constrained polymatroid and present a polynomial time algorithm that solves the associated separation problem.

CHAPTER 3: As an example of NP-hard cardinality constrained combinatorial optimization problems, we extensively study polyhedra associated with cardinality constrained versions of path and cycle problems defined on directed and undirected graphs. We show that a modification of forbidden cardinality inequalities leads to strong inequalities related to cardinality constraints. Moreover, as one would expect, inequalities that define facets of the polytope associated with the ordinary problem usually define facets of the polytope associated with the cardinality constrained version.

Geometrical and combinatorial optimization problems

In general there is a certain degree of freedom to distribute the vertex delay to different branches of the tree. In this delay model we consider only binary Steiner trees where Steiner points can have the same position. By inserting a gate at a vertex of the tree it is possible to reduce the delay of one of the incident branches while increasing the delay on the other branch by about the same amount. As there is only a discrete number of gates with different sizes available, this effect can be modeled by so-called L0(k)-trees for some appropriate k ∈ N, where an L0(k)-tree is a binary tree in which all edges have positive integral lengths and the sum of the lengths of the two edges leading from every non-leaf to its two children is k. Then the required arrival times at each sink correspond to a depth restriction for a leaf of the
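The L0(k) condition stated here is easy to check programmatically; the following is a minimal sketch on a hypothetical binary-tree node class (the class and function names are invented for illustration).

```python
class Node:
    def __init__(self, children=None):
        # children: list of (child_node, edge_length) pairs; empty for a leaf (sink).
        self.children = children or []

def is_l0k_tree(root, k):
    """Check the L0(k) property: every non-leaf has exactly two child edges with
    positive integral lengths that sum to k."""
    if not root.children:                       # a leaf imposes no condition
        return True
    if len(root.children) != 2:                 # only binary trees are considered
        return False
    lengths = [length for _, length in root.children]
    if any(not isinstance(l, int) or l <= 0 for l in lengths):
        return False                            # edge lengths must be positive integers
    if sum(lengths) != k:
        return False                            # the two child edges must sum to k
    return all(is_l0k_tree(child, k) for child, _ in root.children)

# Example: a root whose two children are leaves, reached by edges of lengths 1 and 3 (k = 4).
root = Node([(Node(), 1), (Node(), 3)])
print(is_l0k_tree(root, 4), is_l0k_tree(root, 5))   # True False
```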

Semiclassical spectral analysis of discrete Witten Laplacians

saddle point. This is a particularly difficult situation since WKB expansions starting from a minimum break down at the saddle point, and it is therefore hard to get overlapping quasimodes. Moreover, as we know from the probabilistic model (see in particular (0.12)), the tunneling between two minima which is responsible for the appearance of a given non-zero small eigenvalue may also occur through a well associated to a third minimum, which is weakly resonant in the terminology of [46], [47] and further complicates the situation. Apart from this, one has to face another complication in Step 2, related to the fact that the small eigenvalues have distinct exponential decay. Indeed, when diagonalizing the matrix of the operator, error terms propagate additively (see [48]) and therefore quantities of order of the larger exponentially small eigenvalues destroy the possibility to accurately estimate the smaller ones.

Model-based Methods for Continuous and Discrete Global Optimization

The use of surrogate models is a standard method to deal with complex, real-world optimization problems. The first surrogate models were applied to continuous optimization problems. In recent years, surrogate models have gained importance for discrete optimization problems. This article, which consists of three parts, takes account of this development. The first part presents a survey of model-based methods, focusing on continuous optimization. It introduces a taxonomy which is useful as a guideline for selecting adequate model-based optimization tools. The second part provides details for the case of discrete optimization problems. Here, six strategies for dealing with discrete data structures are introduced. A new approach for combining surrogate information via stacking is proposed in the third part. The implementation of this approach will be available in the open source R package SPOT2. The article concludes with a discussion of recent developments and challenges in both application domains. Keywords: Surrogate, Discrete Optimization, Combinatorial Optimization, Metamodels, Machine learning, Expensive optimization problems, Model management, Evolutionary computation
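As a generic illustration of the model-based loop surveyed in the first part (a sketch only; the article's own implementation targets the R package SPOT2, whereas this uses Python with scikit-learn, and the objective is an invented placeholder for an expensive problem):

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

# Invented stand-in for an expensive black-box objective.
def objective(x):
    return np.sin(3 * x) + 0.5 * x**2

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(5, 1))            # small initial design
y = objective(X).ravel()
candidates = np.linspace(-2, 2, 400).reshape(-1, 1)

for _ in range(15):                            # sequential model-based optimization loop
    model = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mu, sigma = model.predict(candidates, return_std=True)
    best = y.min()
    # Expected improvement (minimization form) as the infill criterion.
    z = (best - mu) / np.maximum(sigma, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = candidates[np.argmax(ei)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("best point:", X[np.argmin(y)].item(), "best value:", y.min())
```

For discrete data structures, as discussed in the second part, the candidate set and the surrogate (e.g. a random forest or a kernel over discrete structures) would be swapped accordingly.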

Competitive Analysis of Scheduling Problems and List Accessing Problems

Chapter 4: Closer Randomized Analysis of BIT. This chapter focuses on the stochastic analysis of BIT. Request sequences considered in this chapter consist of requests which are i.i.d. over the set of list elements. In Section 4.1, the uniform distribution is used to generate requests. It turns out that the cost of BIT can be simulated by throwing a fair die several times and counting the sum of the resulting points, independent of the initial bit setting. A further analysis of the distribution of bit values is performed in this section, resulting in an improved version of Conjecture 3.56. In Section 4.2, a formula for the expected cost of BIT is developed for request sequences generated by a discrete distribution. The last section of this chapter provides a brief view of a more general case, which occurs naturally in the context of data compression.
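For reference, a minimal simulation of BIT under i.i.d. uniform requests; this is a sketch under the usual description of BIT (a random bit per item, flipped on each access, with a move-to-front when the bit takes a fixed value, the polarity convention being immaterial thanks to the uniformly random initialization), and the cost model is the full cost, i.e. the position of the requested item.

```python
import random

def simulate_bit(num_items, num_requests, rng):
    """Simulate the BIT list-update algorithm on i.i.d. uniform requests.

    Returns the total access cost in the full cost model.
    """
    lst = list(range(num_items))
    bit = {x: rng.randint(0, 1) for x in lst}   # random initial bit setting
    total_cost = 0
    for _ in range(num_requests):
        x = rng.randrange(num_items)            # uniform i.i.d. request
        pos = lst.index(x)                      # 0-based position in the list
        total_cost += pos + 1                   # access cost = 1-based position
        bit[x] ^= 1                             # flip the item's bit on access
        if bit[x] == 1:                         # move to front when the bit is set
            lst.pop(pos)
            lst.insert(0, x)
    return total_cost

rng = random.Random(7)
print(simulate_bit(num_items=20, num_requests=100_000, rng=rng) / 100_000)
```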

Evaluation of Bayesian Optimization applied to Discrete-Event Simulation

In the described experiments, we applied Bayesian Optimization with a standard configuration as shipped by GPyOpt, without any optimizations. However, there are well-known levers for optimizing the BO algorithm: (1) choose a more appropriate acquisition function, which in our case was expected improvement (EI), (2) optimize the acquisition function’s parameters, such as the balance between exploration and exploitation, and (3) analyze the fit of the surrogate model, in our case Gaussian processes, for the specific problem domain. For example, current research indicates that random forests might be a more suitable surrogate model for problems with discrete parameters [17].
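For concreteness, the described setup looks roughly as follows with GPyOpt; the argument names follow GPyOpt's documented interface as we recall it (verify against the library version used), and the simulation objective and parameter ranges are invented placeholders for a real discrete-event model.

```python
import numpy as np
import GPyOpt

# Placeholder for the discrete-event simulation under study; GPyOpt passes a
# 2-D array of candidate parameter settings, one row per evaluation.
def simulation_cost(x):
    buffer_size, num_servers = x[0]
    return np.atleast_2d((buffer_size - 12) ** 2 + 3 * abs(num_servers - 4))

# Discrete search space, as is typical for DES parameters.
domain = [
    {"name": "buffer_size", "type": "discrete", "domain": tuple(range(1, 51))},
    {"name": "num_servers", "type": "discrete", "domain": tuple(range(1, 11))},
]

optimizer = GPyOpt.methods.BayesianOptimization(
    f=simulation_cost,
    domain=domain,
    acquisition_type="EI",        # lever (1): the acquisition function
    exact_feval=False,            # simulation output is treated as noisy
)
optimizer.run_optimization(max_iter=30)
print(optimizer.x_opt, optimizer.fx_opt)
```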

Probabilistic seismic safety analysis of multi-component systems

log-log space of the seismic intensity versus response, a linear relation is established using regression. The median capacity and log standard deviation can be computed from the least-squares linear regression, whereby the sum of the squared errors between the predicted values and the structural responses is minimized. The method can be seen in Ellingwood and Kinali [43], Richard and Chaudat [111], Choi et al. [22], etc. A linear relationship is assumed between the logarithm of the intensity and the response of the structure [139]. Maximum likelihood estimation (MLE), a method used by Shinozuka et al. [121], is yet another approach commonly used for the computation of the lognormal parameters. This is a discrete approach as opposed to the linear regression approach, and the intensity does not directly influence the lognormal parameters. The likelihood function at each PGA level is calculated using the binomial probabilities, and the fragility parameters are obtained by optimization, either by maximizing this likelihood as shown in [7] or by minimizing the error [141], [139]. Other methods in practice are the truncated IDA method, which also uses a likelihood approach, and the multiple strip analysis (MSA) method. The post-processing and development of fragility using IDA is further studied by Baker [7]. The statistical methods and concepts for fitting fragility functions, and for optimizing the number of structural analyses for the fragility function fitting, are explained in detail for the IDA and multiple strip analysis methods in Baker [7].
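As a toy sketch of the regression-based approach described above (synthetic data and an invented capacity threshold; not the cited authors' implementations): a least-squares fit in log-log space yields the lognormal fragility parameters.

```python
import numpy as np

# Synthetic cloud data: seismic intensity measure (IM, e.g. PGA) vs. peak structural response.
rng = np.random.default_rng(3)
im = rng.uniform(0.05, 1.5, size=200)
response = 0.8 * im**1.1 * np.exp(rng.normal(0.0, 0.35, size=200))

# Least-squares fit of the assumed linear relation in log-log space:
#   ln(response) = b0 + b1 * ln(IM) + eps
b1, b0 = np.polyfit(np.log(im), np.log(response), deg=1)
residuals = np.log(response) - (b0 + b1 * np.log(im))
beta = residuals.std(ddof=2)                    # log standard deviation

# Median IM capacity: the intensity at which the median demand reaches a capacity limit.
capacity_limit = 1.0                            # illustrative response threshold
theta = np.exp((np.log(capacity_limit) - b0) / b1)

print("median IM capacity:", theta, "log standard deviation:", beta)
```

The MLE alternative mentioned above would instead maximize a binomial likelihood of observed failure counts at each intensity level over the two lognormal parameters.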

Online Optimization: Probabilistic Analysis and Algorithm Engineering

Elevator control algorithms for elevator groups were first studied back in the 1950s, when the first automatic elevator controls were installed [Bar02]. These first algorithms were rather simple, since they had to be implemented in hardware using relays. Since then, the performance achieved by elevator control algorithms has become more important as buildings become higher and higher. In addition to algorithmic improvements, one possible way to enhance this performance is to use destination call systems (sometimes also called destination hall call systems). In such a system, a passenger registers his or her destination floor right at the start floor instead of only the travel direction. This way, the elevator control has more information as a basis for its decisions, which hopefully leads to better performance. Apart from new high-rise buildings, there is another important application for destination call systems in existing buildings. It may happen that, due to changing usage of a building, the installed elevator system is not capable of coping with increased passenger demand. In this situation, changing over to a destination call system may be a relatively cheap alternative to upgrading the elevator system itself or installing additional elevators.

Solving trajectory optimization problems in the presence of probabilistic constraints

Alternatively, chance-constrained optimal path design relies on chance-constrained optimization (CCO) algorithms. This type of algorithm allows the probability of constraint violation to be no larger than a user-specified risk parameter. A detailed review of different CCO algorithms can be found in [30] and the references therein. In [31], the authors proposed a CCO-based model predictive control scheme to optimize the movement of the ego vehicle. Considering the uncertainty in the system state as well as in the constraint, a hybrid CCO method was designed in [32] and applied to solve an autonomous vehicle motion planning problem. Compared with RO methods, CCO methods tend to be less conservative [30]. However, one challenge in the use of CCO methods is that the probabilistic functions and their derivatives cannot be calculated directly. An effective strategy to handle this issue is to replace or approximate these constraints by deterministic functions or samples [33]–[35]. The motivation for the use of approximation-based strategies relies on their ability to deal with general probability distributions for the uncertainty as well as to preserve feasibility of the approximate solutions. Until now, several approximation techniques have been proposed based on the Bernstein method [24], [33], the constraint tightening approach [36], scenario approximation [37], etc. Although these strategies are feasible for replacing the probabilistic constraints, there are still some open problems. For example, an important issue is that the conservatism is usually high and difficult to control. Furthermore, the smoothness, differentiability and convergence properties of the approximation strategy can hardly be preserved.
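For reference, the generic form of the chance constraint discussed here and the scenario approximation that replaces it by deterministic constraints on drawn samples (a sketch in generic notation, not the notation of the cited works):

```latex
\begin{align}
  & \Pr\bigl[\, g(x,\xi) \le 0 \,\bigr] \;\ge\; 1 - \varepsilon
  && \text{(chance constraint, risk level } \varepsilon \text{)} \\
  & g\bigl(x,\xi^{(i)}\bigr) \;\le\; 0, \qquad i = 1,\dots,N
  && \text{(scenario approximation on samples } \xi^{(1)},\dots,\xi^{(N)}\text{)}
\end{align}
```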

Robust optimization for survey statistical problems

Chapter 5: Analysis with Simulated Data. In this chapter we generate some large-scale simulated data for survey statistical problems. Simulation enables us to work with diversely distributed variables of a population. Sometimes it is difficult to gather exact information about the population, such as the distribution of variables in subgroups of the population. In this simulation study we generate such data using the R software. We generate a population of fixed size with variables having different distributions within the total population and also within the subgroups of the population. We use this simulated data to calculate robust allocations from our robust formulations. As already discussed in Chapter 4, Bertsimas and Sim’s approach is less conservative than Soyster’s approach. We formulate three different robust formulations of the sampling allocation problem (SAP) using Bertsimas and Sim’s approach and compare the results. We perform various experiments on the robust allocations obtained for the simulated data. These experiments are helpful in explaining the benefits of using robust formulations. In these experiments we check whether the uncertain parameters of the optimization problems can make the robust solutions infeasible.
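For reference, a sketch of the Bertsimas and Sim budget-of-uncertainty counterpart of a single linear constraint (generic notation, not the thesis's SAP formulation): with coefficients a_j ranging in [ā_j − â_j, ā_j + â_j] for j in J and at most Γ of them deviating simultaneously, the robust constraint can be linearized as

```latex
\begin{align}
  & \sum_{j} \bar a_j x_j \;+\; z\,\Gamma \;+\; \sum_{j \in J} p_j \;\le\; b, \\
  & z + p_j \;\ge\; \hat a_j\, y_j, \qquad -y_j \;\le\; x_j \;\le\; y_j \quad \forall j \in J, \\
  & p_j,\; y_j,\; z \;\ge\; 0 .
\end{align}
```

Smaller budgets Γ interpolate between the nominal problem (Γ = 0) and the more conservative Soyster-type protection, which matches the comparison made above.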

Towards average-case complexity analysis of NP optimization problems

Recently, "average-case complexity" has received considerable attention from researchers in several fields of computer science. Even if a problem is not (or may not be) solvable efficiently in the worst case, it may be solvable efficiently on average. Indeed, several results have been obtained that show that even simple algorithms work well on average (see, e.g., [Joh84]). On the other hand, most of those results are about concrete problems, and not so much has been done towards a more general study of average-case complexity, though there are many interesting open questions in this area. In this paper, we consider one of these open questions and improve our knowledge towards this question.
