In both approaches, we need to make crucial design decisions. Which penalty should we charge for an infeasible candidate? How should we repair an infeasible candidate in order to obtain a good feasible solution? When should we invoke the repair procedure, for example, each time an infeasible candidate is selected, or only if our final solution is infeasible? Abramson et al. (1996) discuss these design decisions for a simulated annealing approach to the general problem of solving binary programs. However, though the authors emphasize the generality of their approach, simulated annealing is usually more efficient when tailored to a specific problem. Let us discuss these design issues in the context of generalization. In this application, local search methods are usually applied to iteratively improve the input map (Ware & Jones, 1998). However, as database specifications often define hard size constraints for the target scale, the input map is not a feasible solution. Hence, we can only start our local search method with the input map if we relax these hard constraints. Additional costs or penalties need to be charged instead, for example, if a region is smaller than its required size. We can put high weights on such soft constraints, but this approach does not guarantee compliance with database specifications. Alternatively, we could apply a repair procedure to the input map that yields a feasible initial solution. For example, an empty map, or a map that contains only one big area of a single class, will usually comply with database specifications. The disadvantage is that such a solution will be poor in terms of quality, that is, the cost of the solution would be extremely high. Consequently, our search will require many iterations to find a good result. Furthermore, even if we find a feasible start, it can be impossible to reach good feasible solutions: again, if we reject infeasible candidates during the search, there may be no feasible path to the optimal solution.
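To make the penalty option concrete, the following sketch shows a generic simulated annealing loop that admits infeasible candidates but charges them a surcharge. All names and parameter values are illustrative assumptions, not the formulation of Abramson et al. (1996) or a tested generalization setup:

```python
import math
import random

def simulated_annealing(initial, neighbor, cost, is_feasible,
                        penalty=1000.0, t0=10.0, cooling=0.995, steps=5000):
    """Generic SA loop that relaxes hard constraints: infeasible
    candidates are admitted but charged a fixed penalty."""
    def total_cost(x):
        # soft-constraint relaxation: surcharge for infeasibility
        return cost(x) + (0.0 if is_feasible(x) else penalty)

    current, best = initial, initial
    t = t0
    for _ in range(steps):
        cand = neighbor(current)
        delta = total_cost(cand) - total_cost(current)
        # accept improvements always, deteriorations with Boltzmann probability
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = cand
            if total_cost(current) < total_cost(best):
                best = current
        t *= cooling  # geometric cooling schedule
    return best  # may still be infeasible; a final repair step can then be applied
```

As a toy instance, minimizing (x − 3)² under the hard constraint x ≥ 0 starts feasibly from x = 10 and converges near x = 3; the penalty keeps the search from settling on infeasible candidates.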
These considerations become apparent if we take a look at the label placement problem in Fig. 3.12. In the input map (Fig. 3.12(a)), each label clearly corresponds to a location in the road network: a user will certainly understand that all facilities are on the left road, close to the road junction. If we generalize this map to a smaller scale, we aim to preserve clear correspondences between labels and the actual facility locations. Furthermore, for the sake of clarity, we do not allow intersections of labels and roads. In terms of constraints, we could define a soft constraint (“correspondences between labels and locations need to be clear”) and a hard constraint (“a label and a road must not intersect”).
Results carriers. Previously, lipo-oligomers containing fatty acids have been introduced as potential delivery vehicles for pDNA and siRNA. Here, we focused on the cationic counterpart of the lipo-oligomers and investigated the influence of the different cationic branches on nucleic acid compaction and gene transfer activity. For this purpose, we designed and synthesized a small library of lipo-oligomers with two oleic acids as the hydrophobic domain, while varying the number of cationic arms through the introduction of lysine as the branching point. Oligomers with one, two, four, and even eight cationic arms were assembled. To further identify the effects of the protonatable amines on each arm, oligomers consisting of different numbers of Stp units (1 to 4) per arm were additionally included for comparison (Scheme 3.2). Oligomers were synthesized via SPS starting from Fmoc-Lys(ivDde)-OH-loaded 2-chlorotrityl chloride resin, where lysine provides an asymmetrical branching point for the final attachment of oleic acid. For oligomers with one or two cationic arms (905–908), a resin with a loading of 0.1598 mmol/g was used, whereas for highly branched oligomers with four or eight cationic arms (909–911), a resin with a rather low loading (0.095 mmol/g) was used to avoid aggregation during synthesis. Terminal cysteines were integrated for polyplex stabilization via disulfide formation. The assembly of the oligomers was completed after introducing the hydrophobic moiety by coupling oleic acid to an additional lysine, which was coupled to the ε-amine of the preloaded lysine. The sequences of all oligomers used in this study are displayed in
The PC problem is known to be NP-complete even on the classes of planar graphs [54], bipartite graphs, chordal graphs [57], chordal bipartite graphs, strongly chordal graphs [94], as well as on several classes of intersection graphs [16]. On the other hand, it is solvable in linear O(n + m) time on interval graphs with n vertices and m edges [3]. For the larger class of circular-arc graphs there is an optimal O(n)-time approximation algorithm, given a set of n arcs with sorted endpoints [70]. The cardinality of the path cover found by this approximation algorithm is at most one more than that of the optimal one. Several variants of the Hamiltonian path (HP) and the PC problems are of great interest. The simplest of them are the 1HP and 2HP problems, where the goal is to decide whether G has a Hamiltonian path with one or two fixed endpoints, respectively. Both problems are NP-hard for general graphs, as generalizations of the HP problem, while 1HP can be solved in polynomial time on interval graphs [7].
The extensive computational tests on many known and some additional harder benchmark problems allow some clear statements: For all problems analyzed (CS, BP, VC, and BPC), the newly identified and potential DDOIs together reduce the number of CG iterations when added before proving optimality of the RMP, sometimes significantly. Since the LP-reoptimization of the RMP becomes slightly more time consuming with additional DI columns, the overall reduction in computation time is generally smaller than the reduction in CG iterations. However, for all studied problems the average computation times are reduced compared to state-of-the-art CG algorithms. For the BP example, we compare with an implementation already using constraint aggregation as suggested by Ben Amor et al. (2006). Here we are able to confirm their substantial and impressive speedups of factors between two and three for many BP instances. With a mix of statically and dynamically added DIs together with over-stabilization we gain another factor of approximately two for the extensive benchmark set of Sim and Hart (2013) and a factor of 2.5 for the widely used benchmark by Scholl et al. (1997). Notably, while for this strategy (stat+dyn) the maximum slowdown was by a factor of 1.3, the maximum speedup was more than a factor of ten on some instances. Thus, the use of DIs seldom and only gently hampers the CG process, but very often accelerates it. For future applications we expect the dynamic addition of DIs to work best for those problems in which the solution of the pricing problem almost completely dominates the overall computation time: any reduction in the number of CG iterations should contribute linearly to a reduction in the solution time.
Our interest in this particular version of signed SAT arises from applications in computational systems biology, where iSAT yields a generalization of modeling with Boolean networks [Kau69], in which biological systems are represented by logical formulas with variables corresponding to biological components such as proteins. Reactions are modeled as logical conditions which have to hold simultaneously, and are then translated into CNF. The model is widely used by practitioners (see e.g. [Dow01, KSRL+06, HNTW09] and the references therein). Often, though, this binary approach is not sufficient to model real-life behavior or even to accommodate all known data. Due to new measurement techniques, a typical situation is that an experiment yields several “activation levels” of a component. Thus, one wants to make statements of the form: If the quantity of component A reaches a certain threshold but does not exceed another, and component B
Counter-productive suggested actions could be very disappointing. For example, the logistics manager could receive a report telling him to lower the stock of a specific item, because this would improve his "Stock Productivity". He performs the suggested action and thereupon receives another report from another KPIMS, telling him to raise the stock, because this would probably improve his "Deliveries On Time In Full". As a consequence, the manager would probably reject the suggested actions and just act according to his or her personal opinion, or not react at all. Therefore, the company is looking for a solution to optimize its logistics network in a continuous, multi-objective manner, i.e. to find those actions that optimize the network considering all monitored KPIs. Essentially, the goal is an interdependence analysis, which can be used to improve the understanding of the interconnectedness of the different KPIs and, most importantly, to generate good integrated action suggestions that improve the logistics network. Unfortunately, the complexity of the network prevents reducing the problem to one of the traditional combinatorial problems with stochastic parameters, such as, for example, the Inventory Routing Problem with Stochastic
possibly indicating a decreased crystal quality or competitive phase formation, impedes a meaningful discussion of the XRD data at higher alloying concentrations.
By employing Bragg's law, the lattice parameters a and c can be calculated from the XRD data. The results are depicted in Figure 4b. The lattice parameters of binary Mg2Ca synthesized at 200 °C are also included. With increasing Al concentration dissolved in hexagonal C14 Mg2Ca, a continuous decrease in the lattice parameters a and c is obtained, which can be attributed to the smaller atomic radius of Al (1.25 Å) compared to Mg (1.4 Å). Calculation of the hypothetical C14 Al2Ca phase yields lattice parameters a and c of 5.67 and 9.23 Å, respectively, which are consistent with the composition-induced change in lattice parameters measured here. The obtained trend is also consistent with bulk studies, showing a good quantitative agreement with Kevorkov et al., while the decrease in lattice parameters reported by Amerioun et al. appears to be overestimated. It should be noted that compositional gaps in the composition–structure relationship known from the bulk samples are now filled employing the combinatorial thin film methodology. Considering the high reactivity of Ca, which hampers experimental investigations, the pioneering use of less-reactive intermetallic Mg2Ca as a source for Ca, presented within this work, serves as a sophisticated approach to systematically study the Mg–Ca–Al system.
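As an illustration of the computation behind such values, the following sketch applies Bragg's law together with the hexagonal plane-spacing relation 1/d² = (4/3)(h² + hk + k²)/a² + l²/c². The Cu Kα wavelength default and the example reflections are assumptions for illustration, not values taken from the measurements reported here:

```python
import math

def d_spacing(two_theta_deg, wavelength=1.5406, order=1):
    """Bragg's law n*lambda = 2 d sin(theta), with 2theta given in degrees.
    The default wavelength is Cu K-alpha (an assumption, not from the text)."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * wavelength / (2.0 * math.sin(theta))

def hexagonal_a_c(d_hk0, hk, d_00l, l):
    """Invert 1/d^2 = 4/3 (h^2+hk+k^2)/a^2 + l^2/c^2 using two special
    reflections: an (h k 0) line fixes a, a (0 0 l) line fixes c."""
    h, k = hk
    a = d_hk0 * math.sqrt(4.0 / 3.0 * (h * h + h * k + k * k))
    c = d_00l * l
    return a, c
```

For instance, for a hexagonal cell the (110) spacing equals a/2 and the (002) spacing equals c/2, so feeding those two measured d values back through `hexagonal_a_c` recovers a and c directly.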
The 30 muscles of each hemisegment (A2–A7) are innervated specifically by 35 motorneurons (Nicholson and Keshishian, 2006). While motorneurons can initially develop on their own, they require the presence of myotubes to find their correct final positioning. At this step, both the growing motorneurons and the myotubes extend filopodia probing for correct contact. The IgSF member protein Sidestep, present on the membrane of all myotubes, is generally required for the guidance of motorneurons towards myotubes, but other factors are present in specific muscles [Fasciclin III (Fas3), Connectin (Con), Capricious (Caps), Netrin-A (Net-A) or Netrin-B (Net-B)], allowing the identification of specific targets by motorneurons (Nicholson and Keshishian, 2006). Toll and Robo have also been implicated in mediating repulsion of motorneurons. Correct contacts lead to the formation of the neuromuscular junction (NMJ), with the assembly of post- and presynaptic complexes and the localization of glutamate receptors (GluR) to the synapses. This is in sharp contrast to vertebrates, where acetylcholine is the neurotransmitter of choice for neuromuscular junctions, with glutamate used as the major excitatory neurotransmitter of the central nervous system.
Our generalization opens the possibility to study theoretical properties of the PIK problem. For example, one can find an answer to the question “Can we always find a smooth PIK solution given smooth Jacobians and references?” from . Also, our generalization provides an intuitive and systematic way to find new PIK solutions; it would be easier to design proper objective functions than to find PIK solutions directly.
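For illustration, one common way to compute PIK solutions at the velocity level is the recursive nullspace-projection scheme sketched below. The function name and damping constant are our own assumptions, and this is a generic textbook construction rather than the specific generalization discussed above:

```python
import numpy as np

def prioritized_ik_step(jacobians, task_rates, n_joints, damping=1e-6):
    """One velocity-level PIK step: each task is solved in the nullspace
    of all higher-priority tasks (recursive nullspace projection)."""
    dq = np.zeros(n_joints)
    N = np.eye(n_joints)  # projector onto the nullspace of tasks handled so far
    for J, xdot in zip(jacobians, task_rates):
        JN = J @ N
        # damped pseudoinverse for robustness near singular configurations
        JN_pinv = JN.T @ np.linalg.inv(JN @ JN.T + damping * np.eye(J.shape[0]))
        dq = dq + JN_pinv @ (xdot - J @ dq)
        N = N - JN_pinv @ JN
    return dq
```

With two compatible tasks (e.g. two orthogonal single-row Jacobians), the step satisfies both; when tasks conflict, the higher-priority one wins and the lower-priority one is realized only as far as the remaining nullspace allows.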
Small molecules are considered to be promising modulators of the amyloid aggregation that leads to neurodegenerative disease. Currently, a large number of small molecules are widely used as therapeutic drugs for the treatment of mild to moderate cognitive symptoms of Alzheimer's disease. 113 Molecules such as donepezil (trade name Aricept®), rivastigmine (trade name Exelon®), galantamine (trade name Razadyne®, formerly Reminyl®) or tacrine (trade name Cognex®) positively affect the disease-related cognitive deficiency in vivo. 114 The desire to achieve a persistent clinical effect in the treatment of Alzheimer's disease prompted scientists to test various types of small molecules also as modulators of Aβ1-40/42 aggregation in vitro. For instance, resveratrol (RES) and curcumin are able to bind to the N-terminus (between amino acids R5–F20) of Aβ1-42, thereby inhibiting the formation of high-molecular-weight aggregates, as determined by solution NMR spectroscopy and atomic force microscopy (AFM). 115 Moreover, RES can bind both to monomeric and fibrillar Aβ1-40/42; however, the binding response of RES to fibrillar Aβ1-40 was lower than to monomeric Aβ1-40, whereas the binding response of RES to fibrillar Aβ1-42 was stronger than to monomeric Aβ1-42, as determined by NMR spectroscopy in solution. 116 RES is also able to remodel Aβ1-42 fibrils into unstructured aggregates, as observed by AFM. 96 The flavanol epigallocatechin gallate (EGCG) is one of the components found in green tea, showing a promising inhibitory effect towards Aβ1-40/42 aggregation. The molecule can directly bind to both preformed oligomeric structures and mature fibrils through hydrophobic interactions and modify their morphology, as discussed in a number of works. 117-120 Porat et al. 121 reviewed a wide range of polyphenols and noticed some similarities between the molecules able to interfere either with the aggregation process or with the morphology of the aggregates.
Thus, all these polyphenols have “at least two phenolic rings with two to six atom linkers, and a minimum number of three OH groups on the aromatic rings”. Additional examples of small molecules capable of inhibiting Aβ1-40/42 aggregation are shown in Table 3.
Second, we draw on auction models with bidders facing an exposure problem (see Bikhchandani 1999, Goeree and Lien 2014, Krishna and Rosenthal 1996, Rosenthal and Wang 1996, Szentes 2007, Szentes and Rosenthal 2003), as this issue is prevalent in some continuation games and subgames that arise in our framework. In our framework, bidders respond to the exposure problem by submitting either very low or very high bids and may react to increased bidder competition by reducing their bids (for similar results, see Krishna and Rosenthal 1996). Third, a monopolistic auctioneer's decision regarding the packages on which to allow bids has been investigated for symmetric bidders with additive valuations by Armstrong (2000), Chakraborty (1999), Jehiel et al. (2007), Palfrey (1983), for symmetric bidders who consider two items complements or substitutes by Subramaniam and Venkatesh (2009), and for asymmetric bidders with single-item or additive-value multi-item demand for two items by Avery and Hendershott (2000). Allowing bids on all packages can be optimal for a monopolistic seller facing symmetric bidders with either additive, substitutes, or strong-complements valuations (Jehiel et al. 2007, Subramaniam and Venkatesh 2009). Our monopolistic benchmark scenario has not been addressed in these studies. Furthermore, there exist studies that acknowledge the benefit of reducing complexity (for both bidders and the auctioneer) by disallowing bids on some packages (see Lehmann et al. 2006, Nisan 2006, Rothkopf et al. 1998). Our results identify an additional motive for the designer to restrict package bidding or even abolish a combinatorial design: the presence of competing auctioneers.
Historically, these companies were both gas traders and gas network operators. They purchased gas from other suppliers and operated the necessary infrastructure to transport the gas from those suppliers to their own customers. During the liberalization of the German gas market, these business functions were separated by regulatory authorities (GasNZV 2005). Nowadays, there are companies that trade gas and others whose sole task is the operation of gas networks for the transportation of gas. One of the requirements set by the regulatory authorities is that every trader can use the network infrastructure to transport gas. Open access to these gas networks has to be granted to all trading companies free of any discrimination. This means that the gas supplies and demands cannot be fully controlled by the network operator. Therefore, the network operator is required to have a high degree of operational flexibility. The majority of network management is carried out manually with the aid of simulation software. There is a need to develop a more automated process in order to cope with the anticipated challenges associated with more traders accessing the network. Here we have developed mathematical optimization methods to improve network operation and to enhance the cost effectiveness of investments in the infrastructure.
since energy metabolism and fat distribution are known to be highly influenced by sex hormones 354–356 . Surprisingly, the Gadd45 KO females gained weight in the same range as their male counterparts (Fig. 5.34B), which was also reflected by a comparably increased gonadal fat pad (Fig. 5.42). In line with the increased weight gain, high-fat diet-fed female Gadd45 KO mice additionally displayed slightly higher resting glucose and HDL cholesterol levels (Fig. 5.35 and Fig. 5.37) in addition to elevated cholesterol levels (Fig. 5.38). Interestingly, the Gadd45 KO animals overall displayed moderately increased triglyceride concentrations in response to the high-fat diet compared to the respective WT mice (Fig. 5.36). Due to its role as a central metabolic organ, the liver is severely affected by a high-fat diet, which was shown by the elevated histological scores in male high-fat diet animals. The frequently observed inclusion of large fat droplets in the liver lobes resembles the morphological changes of a developing non-alcoholic fatty liver disease (NAFLD) 357 . In line with the other data obtained, the livers of high-fat diet-fed Gadd45 KO females displayed a pathology comparable to that of the males (Fig. 5.40). In all high-fat diet groups except the WT females, the previously clearly morphologically separated brown adipose tissue (BAT) progressively shifted towards a white adipose tissue (WAT) phenotype within the scapular fat pad. In general, the individual cell sizes were remarkably elevated in these high-fat diet groups (Fig. 5.41). As shown in a previous study, this change is associated with an increased amount of stored triglycerides and is suggested to come along with an enhanced level of mitophagy in the BAT 358 . Overall, the histological results correspond well to the measured differences between the groups in the evaluated metabolic parameters.
Taken together, the herein observed higher susceptibility of Gadd45 KO mice, in particular the females, to a high-fat diet suggests an at least partly sex-dependent regulatory link between Gadd45
– Non-stationarity: Our GPR models assume stationarity of the covariance structure in the data. Yet, some functions are obviously non-stationary. For example, the BBOB variant of the Schwefel function (BBOB function 20) behaves entirely differently close to the search boundaries compared to the optimal region (due to a penalty term). In another way, the two Gallagher's Gaussian functions (BBOB functions 21, 22) show a more localized type of non-stationarity: there, the activity of the function will change direction depending on the closest Gaussian component. Such functions are particularly difficult to model with common GPR models. Non-stationary variants of GPR exist, and might be better suited. A good choice might be an approach based on clustering . Adapting the spectral method to that case is straightforward: the simulations from individual models (for each cluster) can be combined (locally) by a weighted sum.
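One possible realization of such a clustering-based, locally weighted combination is sketched below. The softmax weighting, the kernel, and all hyperparameters are illustrative assumptions of ours, not the cited approach:

```python
import numpy as np

def rbf(X1, X2, ls=1.0):
    """Stationary squared-exponential kernel matrix between two point sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

class LocalGP:
    """GP posterior mean fitted to one cluster of the data."""
    def __init__(self, X, y, ls=1.0, noise=1e-6):
        self.X, self.ls = X, ls
        K = rbf(X, X, ls) + noise * np.eye(len(X))
        self.alpha = np.linalg.solve(K, y)   # K^{-1} y
        self.center = X.mean(axis=0)         # used for local weighting

    def predict(self, Xs):
        return rbf(Xs, self.X, self.ls) @ self.alpha

def predict_mixture(models, Xs, tau=1.0):
    """Combine cluster models by a softmax over squared distances to the
    cluster centers: nearby models dominate, yielding a non-stationary fit."""
    d = np.stack([((Xs - m.center) ** 2).sum(-1) for m in models])  # (M, n)
    w = np.exp(-d / tau)
    w /= w.sum(axis=0, keepdims=True)
    preds = np.stack([m.predict(Xs) for m in models])
    return (w * preds).sum(axis=0)
```

Each local model stays cheap (its kernel matrix covers only one cluster), and the weighted sum lets the effective covariance structure change between regions of the search space.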
Furthermore, these techniques usually have no representation of the strength of evidence supporting a certain conclusion. In early learning it can be better to first remember the evidence in detail rather than use the evidence to slightly modify model parameters. Furthermore, early in the learning process the learner has no experience of the importance and information content (uniqueness) of the individual samples unless it has good prior knowledge of the domain. Without prior knowledge, early generalization can only be driven by the similarity of the samples. Shepard (1987) shows that this kind of similarity-based generalization decays exponentially in animals as they accumulate experience. When the animal has enough data to validate models from more complex classes, it does not need to remember individual experiences explicitly but can instead discover general rules as regularities in the data. This mode of learning allows a learner to develop more complex models while maintaining a measure of confidence in its predictions. In technical systems, similar behavior is often implemented in terms of a bias that is either encoded in the fundamental assumptions of the model or given explicitly as a prior favoring certain model configurations over others. An example of the first can be seen in Markov random fields and Bayesian networks, which encode sparse interaction patterns by exploiting conditional independence relations (a certain state is influenced only by a limited set of predecessors). For example, convolutional neural networks (CNNs), which are mainly used in image recognition, assume that interactions between pixels are limited to nearby pixels, because pixel values are affected by only a small number of localized objects. More examples can be found in physics, where many theories assume that objects are only affected by other objects that are close by.
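Shepard's exponential law, and the similarity-driven generalization it describes, can be sketched as a tiny exemplar-based predictor. The function names and the one-dimensional distance measure are illustrative assumptions:

```python
import math

def shepard_generalization(distance, scale=1.0):
    """Shepard's universal law: the probability that a response learned for
    one stimulus generalizes to another decays exponentially with their
    distance in psychological space."""
    return math.exp(-distance / scale)

def exemplar_predict(exemplars, x, scale=1.0):
    """Memory-based prediction: stored (stimulus, label) pairs are weighted
    by exponentially decaying similarity, so generalization is driven purely
    by similarity to remembered samples, as in early learning."""
    weights = {}
    for stimulus, label in exemplars:
        w = shepard_generalization(abs(x - stimulus), scale)
        weights[label] = weights.get(label, 0.0) + w
    return max(weights, key=weights.get)
```

The point of the sketch is that no model parameters are fitted at all: every sample is remembered verbatim, and only the similarity kernel carries the inductive bias.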
This version: 11-02-2013
Abstract. In recent years, Combinatorial Clock Auctions (CCAs) have been used around the
world to allocate frequency spectrum for mobile telecom licenses. CCAs are claimed to significantly reduce the scope for gaming or strategic bidding. In this paper, we show, however, that CCAs significantly enhance the possibilities for strategic bidding. Real bidders in telecom markets are not only interested in the spectrum they win themselves and the price they pay for that, but also in the price competitors pay for that spectrum. Moreover, budget constraints play an important role. When these considerations are taken into account, CCAs provide bidders with significant gaming possibilities, resulting in high auction prices and problems associated with multiple equilibria and bankruptcy (given optimal bidding strategies).
3.3. Transfer to Other Production-Relevant Factors
The VAHM methodology can be applied to factors other than just space usage. Asset utilization, material stock, or the exchange of information are some examples of factors that can be assessed and then visualized. On a business process map, inefficiencies in the flow of information and unnecessary or costly processes could also be visualized. Especially with regard to the digital transformation, this method could be of great use in identifying the aspects of networking which hold the greatest potential for improvement.
To investigate the potential role of um11825 in pathogenicity, mating, and pheromone response, the gene was deleted in the solopathogenic strain SG200 and in the compatible haploid strains FB1 and FB2 following the method of Kämper (2004). Haploid strains were tested for mating by co-spotting on PD-charcoal plates. After 24 hours, all compatible deletion strain combinations displayed a reduction in dikaryon formation when co-spotted (Figure 11). This shows that um11825 is involved in mating and cell fusion. In the solopathogenic background, only one deletion mutant was generated; when this single strain was spotted on a charcoal plate, it showed a strong reduction in filamentation compared to SG200, indicating that um11825 plays a role in post-fusion development. In a plant infection assay using compatible haploid strains, um11825 deletion strains showed a significant reduction in tumor formation in maize plants in comparison to the cross FB1 × FB2. To exclude that the reduction in tumor formation is caused by the cell fusion defect of compatible um11825 deletion strains, plant infections were performed with the solopathogenic strain SG200 and its derivative SG200∆um11825. Upon infection with SG200∆um11825, no tumors were observed in infected plants, in contrast to SG200. Complementation of SG200∆um11825 by introduction of a single copy of the um11825 ORF into the ip locus using one kb of promoter region was not successful (Figure 11). As only one SG200∆um11825 strain was tested in the virulence assay, it is crucial to complement the virulence phenotype using a promoter region of appropriate size.
phosphorothioate forward and reverse oligonucleotides (PTOs) (Additional file 1: Table S3), 20 ng template DNA, and 1 U KOD Hot Start DNA polymerase. PCR cycling started with an initial denaturation step at 96°C for 2 min. Subsequently, three cycles (96°C, 20 s; 50°C, 30 s; and 72°C, 120 s), followed by 28 cycles (96°C, 20 s; 55°C, 30 s; and 72°C, 120 s) and one fill-up cycle (72°C, 5 min) were performed. All PCR reactions were subjected to DpnI digestion (10 U, 16 h, 37°C) prior to purification of the PCR products using a PCR purification kit (Macherey–Nagel, Düren, Germany) according to the manufacturer's instructions. Subsequently, PCR products were quantified using a NanoDrop ND-1000 spectrophotometer (NanoDrop Technologies, Wilmington, DE, USA). For increased Operon-PLICing efficiency, a second PCR reaction with the same composition was performed using the amplified DNA fragments from the first PCR reaction as template. For this purpose, 30 PCR cycles (96°C, 20 s; 55°C, 20 s; and 72°C, 60 s) and one fill-up cycle (72°C, 5 min) after the initial denaturation step (96°C, 2 min) were performed. Again, the PCR products were purified using a PCR purification kit (Macherey–Nagel, Düren, Germany). Agarose–TAE gel electrophoresis was performed according to standard protocols to confirm the presence and correct size of the amplified genes and the pALXtreme-1a vector backbone [28].
We provide an application of our results to the model of public goods in networks introduced in Bramoullé and Kranton (2007). The key question addressed in Bramoullé and Kranton (2007) is how the network architecture of spillovers influences public goods provision, in the absence of coordination. Our aggregation approach complements the analysis of Bramoullé and Kranton (2007), as it provides a necessary condition in order to have a Nash equilibrium with strictly positive contributions—that is, with no free-riders. Despite the attractive normative feature of sharing the burden of public goods among all players, such an equilibrium is not always guaranteed to exist. The necessary condition rules out the simultaneous presence of a single player and a non-single player in fully connected strong modules, since this presence brings about a mismatch between what these players contribute and consume of the public goods, leading one of them to become a free-rider. This necessary condition for the existence of an equilibrium without free-riders, which also becomes sufficient for a special class of networks, illustrates the role played by the strong modules of the network in determining public goods provision.
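The underlying game can be sketched with a few lines of best-reply dynamics: a fixed point is a Nash equilibrium, and zero-effort players are the free-riders discussed above. The function and its parameters are our own illustrative assumptions, and the sketch does not implement the necessary condition itself:

```python
import numpy as np

def best_response_dynamics(adj, e_star=1.0, e0=None, sweeps=200, tol=1e-9):
    """Sequential best-reply dynamics for the public goods game of
    Bramoulle and Kranton (2007): player i's best reply is
    max(0, e* - total effort of i's neighbors).  A fixed point is a Nash
    equilibrium; players exerting zero effort are free-riders.
    Illustrative sketch only; convergence is not guaranteed in general."""
    adj = np.asarray(adj, dtype=float)
    n = len(adj)
    e = np.zeros(n) if e0 is None else np.array(e0, dtype=float)
    for _ in range(sweeps):
        old = e.copy()
        for i in range(n):
            neighbor_effort = adj[i] @ e  # diagonal of adj assumed zero
            e[i] = max(0.0, e_star - neighbor_effort)
        if np.abs(e - old).max() < tol:
            break
    return e
```

On a 3-node star, for instance, the dynamics reach the specialized equilibrium in which the center contributes e* and both leaves free-ride, illustrating how the network architecture alone can force free-riding.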