Figure 2.1: Structure of an amino acid, consisting of a central carbon atom Cα together with an amino group, a carboxyl group, and a side chain R.
determining the specific chemical properties of the particular amino acid (see Figure 2.1). These amino acids are linked to each other by means of so-called peptide bonds (see Figure 2.2). In this way, they form a backbone of amino group — central carbon atom — carboxyl group — amino group, and so forth, with the side chains attached to it. Due to this chain structure it is easy to describe proteins as strings over an alphabet whose characters encode specific amino acids. In Table 2.1 we list the names of the 20 standard amino acids together with their character encodings. Moreover, the table states the polarity of the amino acids, i.e. their affinity to water. The reading direction of this string is fixed by chemical properties of the molecule: it runs from the end with the free amino group to the end with the free carboxyl group.

Though we often refer to proteins in terms of strings, we should bear in mind that a protein is in fact not a string, but also has a spatial structure not covered by this string representation. In particular, we distinguish four levels of structure: The above string representation is also denoted as the primary structure of the protein. Foldings implied by chemical interactions between backbone regions, yielding structures such as α-helices and β-sheets (see Figure 2.3), are referred to as the secondary structure, while essentially the three-dimensional shape of a single protein is denoted as its tertiary structure. Finally, a protein complex may consist of more than one chain and may even include auxiliary molecules that are not proteins; the quaternary structure describes the relationship between these subunits.
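The string view of the primary structure can be made concrete in a few lines. The following sketch (our own illustration; the sequence shown is arbitrary, not from Table 2.1) treats a protein as a word over the 20-letter alphabet of standard one-letter codes, read from the N-terminus (free amino group) to the C-terminus (free carboxyl group):

```python
# The 20 standard amino acids by their one-letter codes, as strings over
# which primary structures are written.
AMINO_ACIDS = set("ACDEFGHIKLMNPQRSTVWY")

def is_valid_primary_structure(seq: str) -> bool:
    """Check that seq is a nonempty word over the amino-acid alphabet.

    By convention the string is read from the end with the free amino
    group to the end with the free carboxyl group.
    """
    return len(seq) > 0 and all(c in AMINO_ACIDS for c in seq)

print(is_valid_primary_structure("MKTAYIAKQR"))  # True
print(is_valid_primary_structure("MKXA"))        # False: 'X' is not a standard code
```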
The topic of deriving linear characterizations for combinatorial optimization problems via dynamic programs has received some attention, mainly in the 1980s and 1990s. Prodon, Liebling, and Groflin give a dynamic-programming-based polyhedral characterization for Steiner trees on directed series-parallel graphs (see also Goemans). Barany, Van Roy, and Wolsey, Eppen and Martin, and Martin, Rardin, and Campbell provide such formulations for various kinds of lot-sizing problems. Further examples are Martin et al. for k-terminal graphs, Liu for 2-terminal Steiner trees, and Raffensperger for the cutting stock, the tank scheduling, and the traveling salesman problem. Recently, Kaibel and Loos (personal communication) provided a dynamic-programming-based extended formulation for full orbitopes.
This natural process of modifying the candidate set, however, creates a natural opportunity for manipulating the result. A particularly crafty agent may remove those candidates that prevent his or her favorite candidate from winning. Similarly, after the initial process of thinning down the candidate set, a voter may request that some candidates be added back into consideration, possibly to help her favorite candidate. More importantly, it is quite realistic to assume that the voters in a small group know each other well enough to reliably predict each other's votes (this is particularly applicable to the example of the hiring committee). Thus, it is natural and relevant to study the computational complexity of candidate control parameterized by the number of voters. While control problems do not model the full game-theoretic process of adding or deleting candidates, they allow agents to compute what effects they might be able to achieve and, if the corresponding computational problem is tractable, also how to achieve these effects.
Dynamic Programming. When considering branching algorithms in general, one often observes that many recursive calls near the leaves of the search tree are repeated. For example, if a computation path in the search tree first tries to delete some vertex v and in the next step tries deleting another vertex u, the following recursion produces the same result as if u were deleted first and then v, since (G−v)−u = (G−u)−v. It is reasonable to try to save time by storing the results of certain recursive calls, essentially "pruning the search tree" [91]. Dynamic programming often comes in the form of computing the entries of a table with the help of other table entries; finally, the solution to the input instance can be read from the final table entry. The presentation of a dynamic programming algorithm usually consists of stating the semantics of a table entry and then stating the formula for computing an entry from previously computed entries. This way, proving that the formula matches the stated semantics is usually sufficient to prove the correctness of the algorithm. To emphasize the versatility of dynamic programming, we give two contrasting examples.
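The "store results of repeated recursive calls" pattern can be sketched in a few lines; as a stand-in example (ours, not one of the two examples announced above) we take the binomial coefficient, whose recursion has exactly the kind of overlapping subproblems described:

```python
from functools import lru_cache

# Table semantics: binom(n, k) is the number of k-element subsets of an
# n-element set. The recursion binom(n, k) = binom(n-1, k-1) + binom(n-1, k)
# revisits the same subproblems exponentially often; caching the results
# ("pruning the search tree") makes the computation run in O(n*k) steps.

@lru_cache(maxsize=None)
def binom(n: int, k: int) -> int:
    if k < 0 or k > n:
        return 0
    if k == 0 or k == n:
        return 1
    return binom(n - 1, k - 1) + binom(n - 1, k)

print(binom(5, 2))    # 10
print(binom(30, 15))  # 155117520 -- infeasible without memoization
```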
Combinatorial optimization, as a scientific paradigm, has a significant influence on the effectiveness of logistic decisions, with particular emphasis on vehicle routing decisions. Although these decisions have been widely studied, research on vehicle routing optimization has mostly focused on the empirical application of solution methods (exact or approximate) already established in the scientific literature, or on providing new methods for such purposes. However, in both cases, the main contributions address the design, improvement, and application of optimization methods without knowing a priori how effective these methods can be, given the complexity of the problem in a multivariate context. Therefore, the general methodologies for carrying out the optimization process for such decisions lack an integrative approach that allows one to check the relevance of the proposed methods. To address these inadequacies, a conceptual model and a procedure are proposed in this thesis. Both proposals support decision-making in vehicle routing optimization. In this sense, the major research contributions are summarized in the conception of the optimization process in three stages, the design and modification of meta-heuristic algorithms based on Ant Colony Optimization (ACO), and the application of some robust statistical techniques in decision-making.
Graph modification problems are a core topic of algorithmic research [Cai96, LY80, Yan81]. Given a graph G, the aim in these problems is to transform G by a minimum number of modifications (like vertex deletions, edge deletions, or edge insertions) into another graph G′ fulfilling certain properties. Particularly well studied are hereditary graph properties, which are closed under vertex deletions and which are characterized by minimal forbidden induced subgraphs: a graph fulfills a hereditary property Π if and only if it does not contain a graph F from a property-specific family F of graphs as an induced subgraph. All nontrivial vertex deletion problems and many edge modification and deletion problems for establishing hereditary graph properties are NP-complete [Alo06, ASS16, KM86, LY80, SST04, Yan81]. If the desired graph property has a finite forbidden induced subgraph characterization, then the corresponding vertex deletion, edge deletion, and edge modification problems are known to be fixed-parameter tractable with respect to the number k of modifications [Cai96]. All vertex deletion problems for establishing graph properties characterized by a finite number of forbidden induced subgraphs have a problem kernel of size polynomial in the number k of allowed vertex deletions [Kra12]. In contrast, many variants of F-free Editing do not admit a problem kernel whose size is polynomial in k [CC15, Gui+13, KW13].
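The fixed-parameter tractability for finite forbidden subgraph characterizations rests on a simple branching idea: find a forbidden induced subgraph and branch on deleting each of its vertices. The following sketch (function names and the concrete instantiation are ours) illustrates this for a single forbidden induced subgraph, the path P3 on three vertices, i.e. the vertex deletion problem towards cluster graphs:

```python
from itertools import combinations

# Graphs are dicts mapping each vertex to the set of its neighbours.

def find_induced_p3(adj):
    """Return (u, v, w) with v adjacent to u and w, but u, w non-adjacent."""
    for v in adj:
        for u, w in combinations(sorted(adj[v]), 2):
            if w not in adj[u]:
                return (u, v, w)
    return None

def delete_vertex(adj, x):
    return {v: nbrs - {x} for v, nbrs in adj.items() if v != x}

def cluster_vertex_deletion(adj, k):
    """Decide whether <= k vertex deletions destroy all induced P3s."""
    p3 = find_induced_p3(adj)
    if p3 is None:
        return True            # already a disjoint union of cliques
    if k == 0:
        return False           # a forbidden subgraph remains, budget spent
    # Some vertex of the found P3 must be deleted; branch on all three,
    # giving a search tree of size at most 3^k.
    return any(cluster_vertex_deletion(delete_vertex(adj, x), k - 1)
               for x in p3)

# A path on four vertices contains induced P3s; one deletion suffices.
path4 = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
print(cluster_vertex_deletion(path4, 1))  # True
print(cluster_vertex_deletion(path4, 0))  # False
```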
Multilevel refinement. An interesting and versatile heuristic approach to improve solution techniques for problems in combinatorial optimization is the multilevel refinement paradigm. Given a combinatorial optimization problem and an exact, approximate, or heuristic solution procedure, the method can be used to reduce the runtime or improve the solution quality, or both. The paradigm relies on the conservation of crucial properties from the original instance to the coarser versions. There exist multiple variants that have been applied to a wide range of optimization problems. In the following we mention some results and refer to the excellent survey of Walshaw [Wal08] for further literature. The research area that has arguably gained the most attention as an application of multilevel refinement is the graph partitioning problem and its variants, e.g. [HL95, KK98, WC00, CS09, OS10, SS11, SSS15, HOU+19]. We also refer to the survey of Buluç et al. [BMS+16]. Further heuristics were developed for the travelling salesman problem [Saa97, Wal02, Bou04], graph drawing [HH01, HK01, Wal03, HJ05, BAM07], clustering [KHK99, RN11, CGR12, BVL16, Bou16], and the capacitated multicommodity network design problem [CLT06].
After having established identifiable bipartite graphs as the basic object of our study, we focus on two optimization problems that arise in the context of source-sensor networks. For these problems we coin the names MINSENSOR and MINSOURCE; we define and study them in Chapters 4 and 5, respectively. Roughly speaking, both problems deal with the selection of good subgraphs: Given a bipartite graph G, the goal is to find a subgraph of G that is identifiable and also satisfies some additional restrictions. Both problems turn out to be NP-hard, as we show with reductions from SET COVER. This is a prototypical NP-hard problem with many generalizations and variants, for many of which the approximation (and inapproximability) properties have been well studied. One powerful generalization is SUBMODULAR SET COVER, for which a greedy approach achieves a logarithmic approximation guarantee. We derive an approximation algorithm for MINSENSOR rather painlessly, by showing that it is, in fact, a special case of SUBMODULAR SET COVER.
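For orientation, here is the classical greedy algorithm for plain SET COVER, which achieves the logarithmic approximation guarantee mentioned above (the submodular generalization follows the same greedy pattern; the sketch and its names are ours):

```python
# Greedy SET COVER: repeatedly pick the set covering the most elements
# that are still uncovered. This yields an H(n)-approximation, i.e. a
# logarithmic guarantee, where n is the universe size.

def greedy_set_cover(universe, sets):
    """Return indices of chosen sets; raise if the universe is not coverable."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("universe not coverable by the given sets")
        cover.append(best)
        uncovered -= sets[best]
    return cover

U = {1, 2, 3, 4, 5}
S = [{1, 2, 3}, {2, 4}, {4, 5}, {5}]
print(greedy_set_cover(U, S))  # [0, 2]
```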
This thesis consists of two parts. In the first part, we focus on algebraic and geometric aspects of birational maps of degree two of the complex projective plane. We discuss the relation between the singularity structure of a generic quadratic Cremona transformation and the sequence of degrees of its iterates. In particular, based on general results by Bedford & Kim, we identify the singularity structures that result in polynomial growth of degrees. Thereafter, we discuss the singularity structure of Kahan discretizations of a class of quadratic vector fields and of the Lotka-Volterra system, and provide a classification of the parameter values for which the corresponding Kahan map is integrable. Further, we elaborate on the geometric construction of birational involutions on elliptic pencils of degree four and six that are a generalization of the so-called Manin involutions on cubic pencils. For this, we present a geometric (completely algorithmic) approach to reduce such higher-degree pencils to cubic ones by (a composition of) quadratic birational changes of coordinates of the complex projective plane. Finally, we discuss special cubic, quartic, and sextic pencils that feature quadratic Manin maps. Lastly, we demonstrate how one can repair non-integrable Kahan discretizations in some cases by adjusting coefficients of the Kahan scheme.

In the second part, we consider modified invariants, that is, formal integrals of motion that are a perturbation of an integral of motion of the continuous system, for Kahan discretizations. In this context, we present a combinatorial proof of the Celledoni-McLachlan-Owren-Quispel formula for an integral of motion of Kahan discretizations of canonical Hamiltonian systems with a cubic Hamiltonian. Further, we exemplify that one can recover an integral of motion of a Kahan discretization from a divergent modified invariant using Padé approximation.
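Since the Kahan discretization recurs throughout both parts, we recall its standard definition (our notation; the thesis fixes the precise conventions). For a quadratic vector field with quadratic part Q, linear part B, and constant part c, the scheme reads:

```latex
% Kahan discretization of \dot{x} = Q(x) + Bx + c with step size h:
\[
  \frac{\tilde{x} - x}{h}
  \;=\;
  \bar{Q}(x, \tilde{x}) + \tfrac{1}{2}\, B\,(x + \tilde{x}) + c,
\]
% where \bar{Q} is the symmetric bilinear form (polarization) with
% \bar{Q}(x, x) = Q(x).
```

Because the scheme is linear in the updated point, the resulting map is birational, which is what ties the Kahan discretizations to the birational maps studied in the first part.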
In the previous chapters we considered many problems that are closely related to the vertex enumeration problem but that turn out to be NP-hard. Vertex Enumeration has a very interesting property: if one tries to solve it by modifying the problem "a little bit", one runs into two kinds of problems. One kind are those that are polynomially equivalent to the original problem, like polytope verification [ABS97]. Such problems do not really open up possibilities for any method that was not applicable to the original problem. The other kind of problems are those for which a polynomial algorithm would yield a polynomial algorithm for VE, but whose hardness has no consequence for VE. Examples include polytope containment [FO85], projection (Chapter 6), covering R^d with polyhedral cones (Chapter
ciency. In this case, the more parallel processors are used, the lower the parallel efficiency we obtain. Another example: when an instance of a CSP is large and hard, the backtrack search tree must be deep, and a high-level (near the root) mistake cannot be recovered from for hours. When the solution happens to be in the rightmost part of the search tree, stealing left and low is unlikely to solve the problem. Chu et al. in 2009 developed confidence-based work stealing, an adaptive work-stealing strategy, to address this issue by estimating the ratio of solution densities between the subtrees at each node, where this ratio is formulated as a confidence. By dynamically adjusting the confidence of each node during the resolution process, the algorithm can decide how to allocate processing power to the two branches according to their confidences, thereby achieving a "near-optimal" stealing pattern. The mathematical model for updating confidences reflects the exploration of a barren subtree in an intuitive way: if one part of a search tree has been thoroughly searched without producing a solution, the updated confidence guides more processing power to steal from an unexplored subtree that is as different from the previously explored subtrees as possible. The experimental results demonstrate the effectiveness of this approach, especially for satisfaction problems. The instances solved by the parallel algorithm vastly outnumbered the instances solved by the sequential version: 21 to 7 on the Knights problems, 82 to 15 on the Perfect-Square problems, and 100 to 4 on the N-Queens problems. Nevertheless, the experimental results also indicate that the performance of the parallel algorithm (i.e., runtime, solved instances) is determined by the initial setting of the confidence, which requires the user's domain knowledge. For example, with a poor confidence value, the parallel runs took longer than the sequential runs to solve the N-Queens problems.
Although the authors proposed a method to estimate the confidence, this method needs to be further evaluated.
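The routing-and-collapse behaviour described above can be caricatured as follows. This is a toy illustration of the general idea only, not the mathematical model of Chu et al.; all names and the update rule are our simplifications:

```python
import random

# Toy model: each internal search-tree node carries a confidence p_left,
# the estimated probability that a solution lies in its left subtree.
# Workers are routed left with probability p_left; when a subtree is
# exhausted without a solution, its confidence collapses so that all
# remaining processing power flows to the unexplored side.

class Node:
    def __init__(self, p_left=0.5):
        self.p_left = p_left
        self.left_exhausted = False
        self.right_exhausted = False

    def route_worker(self, rng):
        """Decide which branch an idle worker should steal from next."""
        if self.left_exhausted:
            return "right"
        if self.right_exhausted:
            return "left"
        return "left" if rng.random() < self.p_left else "right"

    def report_exhausted(self, side):
        """A subtree was fully searched without producing a solution."""
        if side == "left":
            self.left_exhausted = True
            self.p_left = 0.0
        else:
            self.right_exhausted = True
            self.p_left = 1.0

node = Node(p_left=0.9)          # initial confidence strongly favours left
node.report_exhausted("left")    # ... but the left subtree turns out barren
print(node.route_worker(random.Random(0)))  # "right"
```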
divided into two sections: industrial development, and economic and social services. The former is more or less entirely the responsibility of the Centre, and the role of the States in the process of industrialisation has hitherto been subsidiary. The economic and social services are mostly those which fall within the State sphere, like education, health, agriculture, co-operation, social welfare, housing, etc. In this segment there are the State programmes and the Central programmes. Both are administered through the States, but the Centre participates to a much larger extent financially in the Cen-
For example, consider a graph G on n vertices. We can choose an edge e and ask whether e is contained in G, repeating this until the answers imply whether the graph has the property or not. What is the number c(n) of questions needed in the worst case? Is there a good strategy for selecting the next edge e? Do we have to test all possible edges to decide? If for every strategy there exists a G such that all edges have to be tested, then the property is called evasive. A quite famous problem is the evasiveness conjecture on page 7, Conjecture 2.6. It has been open for more than 30 years and has kept many researchers in combinatorics busy, giving rise to related problems and theorems. Unfortunately, the problem itself seems to be far from being solved. We give an example of a theorem related to the asymptotic complexity of c(n) with growing n.
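A tiny adversary argument makes the notion concrete. For the property "G has at least one edge", an adversary who answers "absent" to every probe keeps both outcomes possible until all n(n−1)/2 pairs are probed, so the property is evasive. The following sketch (ours, for illustration) counts the probes a querying strategy makes against this adversary:

```python
from itertools import combinations

def adversary_probe_count(n, strategy):
    """Run a strategy against the all-'absent' adversary; count its probes.

    A correct strategy cannot stop early: after k < n(n-1)/2 'absent'
    answers, the unprobed pairs could still be present or absent, so
    both 'has an edge' and 'edgeless' remain consistent completions.
    """
    pairs = set(map(frozenset, combinations(range(n), 2)))
    answered = {}
    while len(answered) < len(pairs):
        e = strategy(answered, n)
        if e is None:                   # strategy claims to be done early
            break
        answered[frozenset(e)] = False  # adversary: edge absent
    return len(answered)

def naive_strategy(answered, n):
    """Probe the pairs in lexicographic order."""
    for e in combinations(range(n), 2):
        if frozenset(e) not in answered:
            return e
    return None

print(adversary_probe_count(5, naive_strategy))  # 10, i.e. all C(5,2) pairs
```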
Some aspects of merger control in the EC
Suggested Citation: Hölzler, Heinrich (1976): Some aspects of merger control in the EC,
Intereconomics, ISSN 0020-5346, Verlag Weltarchiv, Hamburg, Vol. 11, Iss. 4, pp. 111-114, http://dx.doi.org/10.1007/BF02928669