A well-known result of Grötschel, Lovász and Schrijver [39] states that, in order to derive a polynomial-time algorithm for solving (1.1), the systems Ax ≤ b actually do not need to have small size but are only required to yield a polynomial-time algorithm for a certain separation problem. However, using such a description amounts to implementing separation routines rather than using linear programming solvers in a black-box way. In some sense, this is incompatible with an important aspect of the success of many mathematical programming frameworks, namely the ability to easily formulate problems and use existing algorithms and software – even as a non-mathematician. Therefore, in this work we restrict our attention to formulations that have small size and hence can potentially be written down explicitly. Here, “small” is always understood relative to the cardinality of E and usually means “polynomial”.
conditions: z(S) ≤ σ(S) for all S ⊆ N.
Let us say a few words about the computational aspects of this problem. A computational problem is defined by its “inputs” and the solution to be computed. The input for a cooperative game includes the set N and the function ν. As ν is defined on 2^N, it requires in general 2^|N| values, one for each subset S ⊆ N. The input size is thus already exponential in the number of players. Under such circumstances, any solution concept (except the most trivial ones) requires time exponential in the number of players. This is true for the core: to verify whether an allocation satisfies the sub-group rationality constraints of the core requires checking an exponential number of linear inequalities. In many practical settings, however, such as in the models we are going to introduce, such a high number is not necessary (at least for the input).
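To make the exponential check concrete, here is a brute-force core-membership test (a sketch with names of our own choosing; the cost-game conditions z(S) ≤ σ(S) above are the mirror image of the value-game conditions x(S) ≥ ν(S) used here). It enumerates all 2^|N| − 1 non-empty coalitions, which is exactly the blow-up discussed above:

```python
# Hypothetical sketch (names are ours, not from the text): brute-force
# check of the core conditions. With |N| players this enumerates all
# 2^|N| - 1 non-empty coalitions, matching the exponential input size
# discussed in the text.
from itertools import chain, combinations

def coalitions(players):
    """All non-empty subsets of the player set."""
    return chain.from_iterable(combinations(players, r)
                               for r in range(1, len(players) + 1))

def in_core(allocation, nu, players):
    """allocation: dict player -> payoff; nu: characteristic function
    mapping a frozenset of players to its value."""
    grand = frozenset(players)
    # Efficiency: the allocation distributes exactly nu(N).
    if abs(sum(allocation[p] for p in players) - nu(grand)) > 1e-9:
        return False
    # Sub-group rationality: every coalition S receives at least nu(S).
    return all(sum(allocation[p] for p in S) >= nu(frozenset(S)) - 1e-9
               for S in coalitions(players))

players = [1, 2, 3]
# Additive game nu(S) = |S|: the equal split of 1 per player is in the core.
print(in_core({1: 1.0, 2: 1.0, 3: 1.0}, lambda S: float(len(S)), players))  # True
# Majority game nu(S) = 1 iff |S| >= 2: the equal split fails, since any
# two players together are guaranteed 1 but only receive 2/3.
print(in_core({1: 1/3, 2: 1/3, 3: 1/3},
              lambda S: 1.0 if len(S) >= 2 else 0.0, players))  # False
```

The majority game illustrates why the check cannot in general be shortcut: whether an allocation is blocked may depend on any one of the exponentially many coalitions.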
Although tolerance and bounded tolerance graphs have been studied extensively, the recognition problems for both these classes have been the most fundamental open problems since their introduction in 1982 [27, 57, 62]. Therefore, all existing algorithms assume that, along with the input tolerance graph, a tolerance representation of it is given. The only result about the complexity of recognizing tolerance and bounded tolerance graphs is that they have a (non-trivial) polynomial-size tolerance representation, hence the problems of recognizing tolerance and bounded tolerance graphs are in the class NP [66]. Recently, a linear-time recognition algorithm for the subclass of bipartite tolerance graphs has been presented in [27]. Furthermore, the class of trapezoid graphs (which strictly contains parallelogram, i.e. bounded tolerance, graphs [103]) can also be recognized in polynomial time [90, 107]. On the other hand, the recognition of max-tolerance graphs is known to be NP-hard [75]. Unfortunately, the structure of max-tolerance graphs differs significantly from that of tolerance graphs (max-tolerance graphs are not even perfect, as they can contain induced C_5's [75]), so the technique
f without losing non-negativity or violating any capacities. More on flows can be found in . The reason why we are particularly interested in network flows is that there are efficient combinatorial algorithms for linear optimization problems over flow polyhedra. This means that there are algorithms that solve such problems in polynomial time. We assume that the reader has basic knowledge of complexity theory and is familiar with the terms NP-complete and polynomial-time solvable. If this is not the case, we refer the interested reader to  and give the following rough distinction for the practitioner. If a problem is NP-complete, then it is unlikely that we will find an efficient algorithm that solves the problem exactly, i.e. we can solve only small instances in practice. If it is polynomial-time solvable, then there is an algorithm that solves the problem within a running time polynomially bounded in the size of the input, and we may hope to come up with an algorithm that, in practice, exactly solves much larger instances in a timely manner. We will describe the bounds for the running times and also the space consumption with the so-called O-notation. Let f, g : N → R be functions from the natural numbers to the reals. The
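The last sentence is cut off before stating the definition; the standard formulation of the O-notation it presumably introduces is the following (supplied by us for the reader's convenience):

```latex
% Standard definition of the O-notation (the source sentence breaks off
% before stating it; this is the usual textbook version).
\[
  f \in O(g) \iff \exists\, c > 0,\; n_0 \in \mathbb{N} :\quad
  f(n) \le c \cdot g(n) \quad \text{for all } n \ge n_0 .
\]
```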
This paper presents a branch-and-price-and-cut algorithm for the exact solution of the active-passive vehicle-routing problem (APVRP). The APVRP covers a range of logistics applications where pickup-and-delivery requests necessitate a joint operation of active vehicles (e.g., trucks) and passive vehicles (e.g., loading devices such as containers or swap bodies). The objective is to minimize a weighted sum of the total distance traveled, the total completion time of the routes, and the number of unserved requests. To this end, the problem supports a flexible coupling and decoupling of active and passive vehicles at customer locations. Accordingly, the operations of the vehicles have to be synchronized carefully in the planning. The contribution of the paper is twofold: Firstly, we present an exact branch-and-price-and-cut algorithm for this class of routing problems with synchronization constraints. To our knowledge, this algorithm is the first such approach that considers explicitly the temporal interdependencies between active and passive vehicles. The algorithm is based on a non-trivial network representation that models the logical relationships between the different transport tasks necessary to fulfill a request as well as the synchronization of the movements of active and passive vehicles. Secondly, we contribute to the development of branch-and-price methods in general, in that we solve, for the first time, an ng-path relaxation of a pricing problem with linear vertex costs by means of a bidirectional labeling algorithm. Computational experiments show that the proposed algorithm delivers improved bounds and solutions for a number of APVRP benchmark instances. It is able to solve instances with up to 76 tasks, 4 active, and 8 passive vehicles to optimality within two hours of CPU time.
The degree-two case is of special interest due to its relation to the QMST problem. Combining the descriptions of higher-order spanning tree polytopes with one degree-two monomial, over all possible degree-two monomials, we obtain a relaxation of the quadratic spanning tree polytope. Doing this with our extended formulations for one degree-two monomial, we implicitly model a further relation between the monomials and improve the relaxation compared to the one obtained using the descriptions in the original space. As a side effect, we find new facets of the adjacent quadratic forest polytope and the adjacent quadratic spanning tree polytope. Via computational experiments, we visualize the amount of improvement of the relaxations.
The topic of deriving linear characterizations for combinatorial optimization problems via dynamic programs has received some attention, mainly in the 1980s and 1990s. Prodon, Liebling, and Groflin  give a dynamic programming based polyhedral characterization for Steiner trees on directed series-parallel graphs (see also Goemans ). Barany, Van Roy, and Wolsey , Eppen and Martin , and Martin, Rardin, and Campbell  provide such formulations for various kinds of lot-sizing problems. Further examples are Martin et al.  for k-terminal graphs, Liu  for 2-terminal Steiner trees, and Raffensperger  for the cutting stock, the tank scheduling, and the traveling salesman problem. Recently, Kaibel and Loos (personal communication) provided a dynamic programming based extended formulation for full orbitopes.
where x denotes the real vector of unknown parameters (the point movements) and v denotes the real vector of unknown residuals, that is, the degree of constraint satisfaction. Both A (referred to as the design matrix) and l (the vector of observations) need to be specified in advance to define the constraints. The constraints are perfectly satisfied if v = 0. As this is generally not possible for all constraints, the function v^T · P · v is minimized, where P defines the weights between different constraints. If there are non-linear constraints, these are usually replaced by their linear approximations. Sarjakoski & Kilpeläinen (1999) and Harrie & Sarjakoski (2002) show how to solve the problem for large datasets, also considering other generalization operators. Applying the same adjustment technique, Koch & Heipke (2005) and Koch (2007) additionally show how to cope with hard inequality constraints that are needed to ensure consistency between DLMs and digital terrain models. Related problems are discussed in the generalization domain, for example, a river must not run uphill (Gaffuri, 2007). Least squares adjustment allows different generalization operators to be handled, yet the existing generalization methods that are based on this technique do not take the discrete nature of map generalization into account. Usually, continuous variables are used to model a problem. These are not suited, for example, to represent whether a vertex of an original line is selected for its simplification. In their system, Sarjakoski & Kilpeläinen (1999) define a constraint that attempts to pull an unwanted vertex onto the line connecting its predecessor and successor. This is a smart workaround to also allow for line simplification, but of course it is not a solution to the discrete problem of vertex selection, which allows only two states and none in between.
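A minimal sketch of this adjustment in code (the toy data and helper names are ours; the text's symbols A, l, P, x, v are kept): minimizing v^T P v for v = Ax − l leads to the normal equations (A^T P A) x = A^T P l, solved here by Gaussian elimination:

```python
# Minimal weighted least-squares adjustment (a sketch, not the authors'
# implementation). Symbols follow the text: design matrix A, observations l,
# weights P, unknowns x, residuals v = A x - l. We solve the normal
# equations (A^T P A) x = A^T P l.

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(M, N):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*N)]
            for row in M]

def solve(M, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(M)
    aug = [row[:] + [b[i]] for i, row in enumerate(M)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(aug[r][i]))
        aug[i], aug[p] = aug[p], aug[i]
        for r in range(i + 1, n):
            f = aug[r][i] / aug[i][i]
            for c in range(i, n + 1):
                aug[r][c] -= f * aug[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (aug[i][n] - sum(aug[i][c] * x[c]
                                for c in range(i + 1, n))) / aug[i][i]
    return x

def adjust(A, l, weights):
    """Return x minimizing (A x - l)^T P (A x - l), P = diag(weights)."""
    P = [[w if i == j else 0.0 for j in range(len(weights))]
         for i, w in enumerate(weights)]
    AtP = matmul(transpose(A), P)
    N = matmul(AtP, A)                              # A^T P A
    rhs = [row[0] for row in matmul(AtP, [[v] for v in l])]  # A^T P l
    return solve(N, rhs)

# Toy example: one unknown observed three times with weights 1, 1, 2;
# the adjusted value is the weighted mean (1 + 2 + 2*4) / 4 = 2.75.
print(adjust([[1.0], [1.0], [1.0]], [1.0, 2.0, 4.0], [1.0, 1.0, 2.0]))
```

For a single unknown observed repeatedly, the adjustment reduces to the weighted mean of the observations, which gives a quick sanity check of the implementation.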
Sester (2005) applies adjustment calculus to satisfy constraints in building simplification, but also points out that it does not solve the whole problem: the elimination of details is done in a first step, which is not based on optimization. The handling of hard constraints in optimization approaches is seldom addressed in the map generalization literature. Often constraints are relaxed, as they are conflicting (Harrie & Weibel, 2007). A few exceptions exist in the context of discrete optimization, which is addressed in the next section.
In general there is a certain degree of freedom to distribute the vertex delay to different branches of the tree. In this delay model we consider only binary Steiner trees where Steiner points can have the same position. By inserting a gate at a vertex of the tree it is possible to reduce the delay of one of the incident branches while increasing the delay on the other branch by about the same amount. As only a discrete set of gates with different sizes is available, this effect can be modeled by so-called L_0(k)-trees for some appropriate k ∈ N, where an L_0(k)-tree is a binary tree in which all edges have positive integral lengths and the sum of the lengths of the two edges leading from every non-leaf to its two children is k. Then the required arrival times at each sink correspond to a depth restriction for a leaf of the
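The L_0(k)-tree condition can be made precise with a small sketch (the tree encoding and function names are ours, not from the text): every internal node's two child-edge lengths are positive integers summing to k, and leaf depths are what the required arrival times restrict:

```python
# Hypothetical sketch of the L_0(k)-tree condition from the text: a binary
# tree with positive integral edge lengths in which, at every non-leaf, the
# lengths of the two edges to its children sum to exactly k.
# Encoding (ours): a node is either a leaf label, or a tuple
# (left_subtree, left_len, right_subtree, right_len).

def is_L0k(tree, k):
    if not isinstance(tree, tuple):          # a leaf imposes no condition
        return True
    left, ll, right, rl = tree
    if not (isinstance(ll, int) and isinstance(rl, int) and ll > 0 and rl > 0):
        return False                         # edge lengths must be positive integers
    return ll + rl == k and is_L0k(left, k) and is_L0k(right, k)

def depth_range(tree):
    """Min and max leaf depth (sum of edge lengths on the root-leaf path);
    the sink arrival times in the text translate into such depth bounds."""
    if not isinstance(tree, tuple):
        return (0, 0)
    left, ll, right, rl = tree
    lmin, lmax = depth_range(left)
    rmin, rmax = depth_range(right)
    return (min(ll + lmin, rl + rmin), max(ll + lmax, rl + rmax))

# A valid L_0(3)-tree: each internal node splits its child lengths as 1 + 2.
t = (("a", 1, "b", 2), 1, "c", 2)
print(is_L0k(t, 3))      # True
print(depth_range(t))    # (2, 3): leaf "a" at depth 2, "b" at 3, "c" at 2
```

Shifting length from one child edge to the other (keeping the sum k) models the gate insertion described above: it trades delay between the two incident branches.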
a tour visiting each vertex exactly once (or Hamiltonian cycle) with minimum total cost of its edges. The approach of Dantzig, Fulkerson and Johnson was iterative. They first decided on a Linear Programming formulation whose optimal solution would provide a lower bound to the length of the optimal tour. Due to the exponential size of the formulation, its solution would not be computationally feasible. Hence, only a considerably smaller Linear Program, containing a subset of the constraints, would actually be solved by the Simplex method. If the solution to the LP were found to violate some of the constraints which had been omitted, those constraints would be added to the Linear Program, and thus, an iterative procedure would generate successively better lower bounds on the length of the optimal tour.
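The iterative procedure can be illustrated on a toy problem of our own (a 2-variable LP, not the TSP formulation), with a naive vertex-enumeration routine standing in for the Simplex method: solve the LP over a subset of the rows, add a violated omitted constraint, and repeat until the solution satisfies all of them:

```python
# Toy sketch of the row-generation scheme described in the text (our own
# 2-variable example, not the TSP formulation): solve an LP over a subset
# of the constraints, add any violated omitted constraint, and repeat.
from itertools import combinations

# Full constraint list a*x + b*y <= c; in the TSP this set is exponential.
ALL = [(1, 0, 4), (0, 1, 4), (1, 1, 5), (2, 1, 8), (1, 2, 8)]

def solve_lp(cons):
    """Maximize x + y over {x, y >= 0} and the given inequalities by
    enumerating intersection vertices (a stand-in for the Simplex method;
    fine for two variables, hopeless in general)."""
    lines = cons + [(1, 0, 0), (0, 1, 0)]            # boundary lines x = 0, y = 0
    best, best_val = None, float("-inf")
    for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
        det = a1 * b2 - a2 * b1
        if det == 0:
            continue                                 # parallel lines, no vertex
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        feasible = (x >= -1e-9 and y >= -1e-9 and
                    all(a * x + b * y <= c + 1e-9 for a, b, c in cons))
        if feasible and x + y > best_val:
            best, best_val = (x, y), x + y
    return best

active = [(1, 0, 4), (0, 1, 4)]                      # start with a small subset
while True:
    x, y = solve_lp(active)
    violated = [(a, b, c) for a, b, c in ALL
                if a * x + b * y > c + 1e-9 and (a, b, c) not in active]
    if not violated:
        break                                        # feasible for ALL: optimal
    active.append(violated[0])                       # add one violated row

print(x + y)  # 5.0: the optimum of the full LP, reached without ever
              # loading all rows into the working Linear Program
```

Since the working LP is a relaxation, each iterate gives an upper bound here (a lower bound in the minimization setting of the text), and the loop stops exactly when the relaxed optimum is feasible for the full system.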
This last chapter is devoted to the numerical treatment of some shape optimization problems employing the material from Chapter 6. We provide a numerical validation that the volume expression allows very accurate approximations and can even reconstruct domains with corners. We begin with numerical results for simple unconstrained domain integrals. Subsequently, we present numerics for the transmission problems from Section 5.2, Subsection 6.5.1 and the EIT problem of Section 5.3. Except for the example from Section 5.2, where we use the boundary expression and basis splines, all computations use the domain expression. We discretise the arising partial differential equations by means of the finite element method. All implementations in this chapter have been done either using the WIAS toolbox PDELib or the FENICS finite element toolbox. The author acknowledges here the implementation of the example from Section 7.2 by Martin Eigel and the level set method of Section 7.3 by Antoine Laurain.
(“9”) of phenylene units of the CPP ring. Ellipsoidal-shaped CPPs were observed for CPP  and CHPB  where no or significantly less steric hindrance of phenyl substituents is present. In addition, even-numbered CPPs display circular structures.  The ellipsoidal shape can be explained by a repulsion-induced alternating twisting of the phenylene units. Generally, neighboring phenylenes point in an alternating pattern to the outside and inside of the ring to minimize the repulsion of hydrogen atoms. For an even-numbered ring, this positioning is feasible without inducing additional ring strain [87, 127, 146]. In the case of odd-numbered rings, however, one phenylene moiety experiences significantly higher steric repulsion, as the in- and outside positions (ortho-positions) are blocked by the neighboring phenylenes (see Figure 18). As a consequence, the entire macrocycle is distorted to minimize this repulsion. Therefore, not a circular but rather an ellipsoidal structure is obtained. The tetraphenylbenzene and biphenyl moieties attached to the CPP ring exert additional strain, since they cannot rotate freely and experience mutual repulsion. Thus, the odd number of phenylene rings, in conjunction with steric crowding, induces a strong twist of the phenylene units, which in turn results in reduced flexibility.
On an abstract level, arithmetic matroids offer an abstract theory supporting some notable properties of the arithmetic Tutte polynomial, while matroids over rings are a very general and strongly algebraic theory with different applications for suitable choices of the “base ring” (e.g., to tropical geometry for matroids over discrete valuation rings). However, outside the case of lists of integer vectors in abelian groups, the arithmetic Tutte polynomial and arithmetic matroids have few combinatorial interpretations. For instance, the poset of connected components of intersections of a toric arrangement – which provides combinatorial interpretations for many evaluations of arithmetic Tutte polynomials – has no counterpart in the case of non-realizable arithmetic matroids. Moreover, from a structural point of view it is striking (and unusual for matroidal objects) that there is no known cryptomorphism for arithmetic matroids, while for matroids over a ring a single one was recently presented . In addition, some conceptual relationships between arithmetic matroids (which come in different variants, see [20, 28]) and matroids over rings have not yet been clarified.
Abstract In this paper, we analyze the power consumption of different GPU-accelerated iterative solver implementations enhanced with energy-saving techniques. Specifically, while conducting kernel calls on the graphics accelerator, we manually set the host system to a power-efficient idle-wait status so as to leverage dynamic voltage and frequency control. While the usage of iterative refinement combined with mixed precision arithmetic often improves the execution time of an iterative solver on a graphics processor, this may not necessarily be true for the power consumption as well. To analyze the trade-off between computation time and power consumption, we compare a plain GMRES solver and its preconditioned variant to the mixed-precision iterative refinement implementations based on the respective solvers. Benchmark experiments conclusively reveal how the usage of idle-wait during GPU-kernel calls effectively leverages the power tools provided by hardware, and improves the energy performance of the algorithm.
The first of the two main results in this paper is that there is an algorithm that produces a canonical basis for the spaces of D-type that is dual to the canonical basis for the spaces of P-type. Here, canonical means that the basis we obtain only depends on the order of the elements in the list X and not on any further choices. The two previously known algorithms that construct a basis for spaces of D-type depend on additional choices [25, 32]. Our second main result is that far more general pairs of zonotopal spaces with nice properties can be constructed than the ones that were previously known. We will define a new combinatorial structure called forward exchange matroid. A forward exchange matroid is an ordered matroid together with a subset of its set of bases that satisfies a weak version of the basis exchange axiom. This is the underlying structure of the generalised zonotopal D-spaces and P-spaces that we introduce.
group acting on it. We use the same notion of an equivariant acyclic matching as in Equivariant Discrete Morse Theory. The relation between acyclic matchings and poset maps with small fibers can also be adapted to the equivariant case, which is proven in this thesis and can also be found in . For the proof of the adaptation to the equivariant case, it turns out that poset maps with small fibers are a nice replacement for linear extensions of partial orders.
packages to accept bids. We investigate whether this decision is influenced by the presence of a competing auctioneer vying for the same bidders.
Our main insight, derived in a game-theoretic model, is that a revenue-maximizing auctioneer who faces a competitor may find it optimal to restrict the packages on which he will accept bids, in contrast to a monopolistic seller who, in a comparable setting, would not. The intuition is that, by disallowing bids on some packages, a competing auctioneer can differentiate himself from his competitor and attract a more homogeneous group of bidders. If auctioneers benefit from stronger competition between similar bidders (i.e., bidders with similar preferences) as compared to competition between dissimilar bidders, then such differentiation is profitable and can occur in equilibrium. Indeed, we find that such segmentation of bidders into homogeneous groups occurs in all our equilibria.