Combinatorial optimization and recognition of graph classes with applications to related models

In Chapters 2 and 3, we investigate two different path problems on interval and proper interval graphs, and we introduce two matrix representations of them. First, in Chapter 2 we investigate the complexity status of the longest path problem on the class of interval graphs. Even if a graph is not Hamiltonian, it makes sense in several applications to search for a longest path, or equivalently, to find a maximum induced subgraph of the graph that is Hamiltonian. However, computing a longest path seems to be more difficult than deciding whether or not a graph admits a Hamiltonian path. Indeed, it has been proved that even for a Hamiltonian graph, computing a path of length n − n^ε for any ε < 1 is NP-hard, where n is the number of vertices of the input graph [74]. Moreover, there is no polynomial-time constant-factor approximation algorithm for the longest path problem unless P = NP [74]. In contrast to the Hamiltonian path problem, only a few polynomial algorithms are known for the longest path problem, and these are restricted to trees and some other small graph classes. In particular, the complexity status of the longest path problem on interval graphs was an open question [113, 114], although the Hamiltonian path problem on an interval graph G = (V, E) is well known to be solvable by a greedy approach in linear time O(|V| + |E|) [3]. We resolve this problem by presenting in Chapter 2 the first polynomial algorithm for the longest path problem on interval graphs, with running time O(n^4), which is based on a dynamic programming approach.
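
To make the greedy idea mentioned for the Hamiltonian path problem concrete, the sketch below is illustrative only: the function and variable names are ours, and the actual linear-time algorithm cited as [3] involves additional bookkeeping. The sketch repeatedly moves to the unvisited neighbouring interval whose right endpoint is smallest.

```python
def greedy_hamiltonian_path(intervals):
    """intervals: list of (left, right) pairs; returns a vertex order or None.
    Illustrative greedy sketch: always continue with the unvisited neighbour
    whose interval ends first."""
    def adjacent(i, j):
        (l1, r1), (l2, r2) = intervals[i], intervals[j]
        return l1 <= r2 and l2 <= r1            # the two intervals overlap
    unvisited = set(range(len(intervals)))
    current = min(unvisited, key=lambda i: intervals[i][1])  # smallest right endpoint
    path = [current]
    unvisited.remove(current)
    while unvisited:
        candidates = [j for j in unvisited if adjacent(current, j)]
        if not candidates:
            return None                          # greedy pass gets stuck
        current = min(candidates, key=lambda j: intervals[j][1])
        path.append(current)
        unvisited.remove(current)
    return path
```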

Polyhedral aspects of cardinality constrained combinatorial optimization problems

The focus of this thesis is the investigation of the facial structure of the cardinality constrained matroid, path and cycle polytopes. As might be expected, and as is shown by way of example for path, cycle, and matroid polytopes, an inequality that induces a facet of the polytope associated with the ordinary problem usually also induces a facet of the polytope associated with the cardinality restricted version. However, we are particularly interested in inequalities that cut off the incidence vectors of solutions that are feasible for the ordinary problem but infeasible for its cardinality restricted version. In this context, the most important class of inequalities for this thesis are the so-called forbidden cardinality inequalities. These inequalities are valid for a polytope associated with a cardinality constrained combinatorial optimization problem independently of its specific combinatorial structure. Using these inequalities as a prototype for inequalities incorporating combinatorial structures of a problem, we derive facet-defining inequalities for polytopes associated with several cardinality constrained combinatorial optimization problems, in particular for the above-mentioned polytopes. Moreover, for cardinality constrained path and cycle polytopes we derive further classes of facet-defining inequalities related to cardinality restrictions, including inequalities specific to odd/even paths/cycles and hop-constrained paths.
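
For illustration, one standard way to write a forbidden cardinality inequality is sketched below; the notation is our own, and the thesis should be consulted for the exact families and their facet proofs. For a cardinality sequence c_1 < ... < c_m over a ground set E and a set S with c_p < |S| < c_{p+1}, every incidence vector x of a solution with an allowed cardinality satisfies

```latex
(c_{p+1}-|S|)\, x(S) \;-\; (|S|-c_p)\, x(E \setminus S) \;\le\; c_p\,(c_{p+1}-|S|),
\qquad \text{where } x(F) := \sum_{e \in F} x_e .
```

A solution whose support lies inside S and has a forbidden cardinality strictly between c_p and c_{p+1} violates this inequality, while all allowed cardinalities satisfy it.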

Combinatorial and probabilistic aspects of lattice path models

We have seen in Chapter 2 how a random tiling of a hexagon maps to a collection of directed non-intersecting lattice paths on a hexagonal lattice. Since these paths only use steps in two lattice directions, we can map such a family to a family of paths on the square lattice taking only up-steps ((i, j) → (i, j + 1)) and right-steps ((i, j) → (i + 1, j)). In the case of an (r, s, t)-hexagon we have r paths, the ith path running from (−i + 1, i − 1) to (t − i + 1, s + i − 1). Each path can be viewed as a Ferrers diagram (cf. Section 3.4) fitting inside a t × s rectangle, and the volume of a plane partition is the sum of their areas. Thus the volume can be viewed as an area functional on the set of families of paths. Other, more obvious area functionals are the area between two paths, or the area a path encloses with a given line if the family is conditioned to stay on one side of that line (such as the families associated with symmetric tilings, cf. Section 2.1). In this chapter we investigate this problem in the special case of only two paths on the square lattice. We compute the area laws for all symmetry subclasses of such configurations in the limit of large path lengths. This model has been studied in combinatorics [BM96, Ric09b] as well as in statistical physics [PB95, Ric06] under the name of staircase polygons, parallelogram polygons or polyominoes, where it serves as a simplified model of self-avoiding polygons, see also Chapter 5. Explicit expressions for the half-perimeter and area generating functions for all symmetry classes of staircase polygons are given in [LR01]; however, these are not amenable to a first-principles approach as applied to formula 3.1 in Chapter 3. Instead we analyse functional equations satisfied by the half-perimeter and area generating functions of the respective classes and apply the moment method [Bil95, Section 30]. Some of the limit laws can also be obtained via bijections to related combinatorial objects (cf. Sections 4.2.3, 4.2.4 and 4.2.8), but we give a unified approach applicable to all symmetry subclasses.
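
As a small illustration of the path-to-Ferrers-diagram correspondence described above, the following helper (our own sketch, not taken from the thesis) computes the area of the Ferrers diagram below an up/right lattice path: every right-step taken at height y contributes y unit cells.

```python
def ferrers_area(path):
    """Area of the Ferrers diagram below an up/right lattice path.
    `path` is a string over 'U' (up-step) and 'R' (right-step), read from
    the path's starting point."""
    y, area = 0, 0
    for step in path:
        if step == 'U':
            y += 1          # move up one lattice row
        elif step == 'R':
            area += y       # a right-step at height y adds y unit cells
    return area

# Example: ferrers_area("URUR") == 3, the Ferrers diagram of the partition (2, 1).
```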

Universally balanced combinatorial optimization games

Second, with the emergence of social network systems over the Internet, simple game models are also beginning to have potential real applications. Consider a graph representing friendship relations, and an advertiser who wants to place advertisements on the social network so that each individual has an ad on their own web page or (not exclusively) on one of their friends' pages. The constraints can be written in terms of the incidence matrix of the graph (not the edge–vertex incidence matrix). The optimal solution to the resulting covering problem minimizes the advertiser's cost. How to share the revenue generated among the nodes is a very interesting problem.
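
The covering problem described is essentially a (weighted) dominating set problem; a minimal formulation consistent with the description, in notation of our own choosing, is

```latex
\min \sum_{v \in V} c_v\, x_v
\quad \text{s.t.} \quad
x_v + \sum_{u \in N(v)} x_u \;\ge\; 1 \quad \forall v \in V,
\qquad x_v \in \{0,1\} \quad \forall v \in V ,
```

where x_v = 1 means an ad is placed on the page of node v and N(v) is the set of friends of v.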

Variants of the Graph Laplacian with Applications in Machine Learning

However, as soon as one allows for zero entries in W, the problem becomes much harder. For the special case of a doubly stochastic limit (r = c = 1), Sinkhorn and Knopp (1967) show that convergence only holds if the set of non-zeros E(W) satisfies specific combinatorial properties. Even in case of convergence, the limit W^(∞) can show zeros where W has non-zeros, which turns the relative entropy objective into a non-smooth optimization problem. As a consequence, the Lagrangian approach can no longer be applied without further considerations. Some publications overlook this pitfall (see Section 3.4.3): they "simply" apply the Lagrangian approach to the non-negative setting and end up with invalid conclusions. Hence, assuming positivity is not just about "simplifying some arguments", as sloppily stated by Ireland and Kullback (1968), but a fundamentally different scenario. The proof by Bregman (1967b) also claims to cover the non-negative case, but it deals only superficially with the extension to zero entries. The proof of the RE-optimality of IPF was rigorously carried over from the positive to the non-negative case by Csiszár (1975), who avoids Lagrangian-type arguments entirely by means of a measure-theoretic approach. Zeros in W^(∞) that are not present in W are handled in a technically sound way by considering Radon-Nikodym derivatives, which can deal with absolutely continuous measures, that is, with E(W^(∞)) ⊊ E(W). The only drawback of this approach is that it does not provide an algorithmic intuition for why optimality holds. What is so special about the IPF sequence that guarantees that the limit is not just some element of Ω(r, c, W), but precisely the unique element of Ω(r, c, W) that is closest to W with respect to the relative-entropy error?
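
For concreteness, a minimal NumPy sketch of the IPF/Sinkhorn-Knopp iteration discussed here is given below (our own illustrative code, not the thesis's implementation); with zero entries in W the iteration may fail to reach the target marginals, which is exactly the subtlety at issue.

```python
import numpy as np

def ipf_scale(W, r, c, iters=1000, tol=1e-10):
    """Alternately rescale rows and columns of a nonnegative matrix W so that
    row sums approach r and column sums approach c (IPF / Sinkhorn-Knopp)."""
    W = np.asarray(W, dtype=float).copy()
    for _ in range(iters):
        rs = W.sum(axis=1)
        W *= (r / np.where(rs > 0, rs, 1.0))[:, None]   # row scaling
        cs = W.sum(axis=0)
        W *= (c / np.where(cs > 0, cs, 1.0))[None, :]   # column scaling
        if (np.abs(W.sum(axis=1) - r).max() < tol and
                np.abs(W.sum(axis=0) - c).max() < tol):
            break
    return W
```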

Combinatorial optimization problems with disjunctive constraints: algorithms and complexity / Joachim Schauer

of their results were further improved by Epstein and Levin [14], who give a 2.5-approximation ratio for perfect graphs and also consider the on-line version of the problem. In [27], Jansen gives an asymptotic fully polynomial approximation scheme for BPCG if the underlying conflict graph is a d-inductive graph. Gendreau et al. [19] introduce six heuristics for BPCG and perform extensive computational experiments; for comparison purposes, two lower bounds are also developed. An extension of BPCG to the two-dimensional case of packing squares was recently treated by Epstein et al. [15]; again, approximation algorithms for special graph classes are stated and analyzed. To the best of our knowledge, the only exact algorithm for BPCG is given in the recent paper by Muritiba et al. [33]. Their paper introduces non-trivial lower bounds based on the clique structure, on a matching, and on a surrogate relaxation over all constraints (1.1). Upper bounds are computed by a population-based heuristic framework with a tabu list. With these ingredients, a branch-and-price algorithm is developed based on a set covering model for bin packing. The associated pricing problem is an instance of KCG, which is solved by a parametric greedy-type heuristic in [33]. Detailed computational experiments illustrate the effectiveness of this approach.
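
To give a flavour of the kind of heuristics discussed (not the specific algorithms from the cited papers), the sketch below is a first-fit-decreasing pass for bin packing with conflicts, where every bin must stay within capacity and must not contain two conflicting items; all names are our own.

```python
def first_fit_with_conflicts(weights, conflicts, capacity):
    """weights: dict item -> weight; conflicts: dict item -> set of conflicting items.
    Places each item into the first open bin where it fits and has no conflict."""
    bins = []  # each bin is [remaining_capacity, set_of_items]
    for item in sorted(weights, key=weights.get, reverse=True):  # first-fit decreasing
        for b in bins:
            if weights[item] <= b[0] and not (conflicts.get(item, set()) & b[1]):
                b[0] -= weights[item]
                b[1].add(item)
                break
        else:
            bins.append([capacity - weights[item], {item}])      # open a new bin
    return [b[1] for b in bins]
```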

Contributions to column-generation approaches in combinatorial optimization

This paper presents a branch-and-price-and-cut algorithm for the exact solution of the active-passive vehicle-routing problem (APVRP). The APVRP covers a range of logistics applications where pickup-and-delivery requests necessitate a joint operation of active vehicles (e.g., trucks) and passive vehicles (e.g., loading devices such as containers or swap bodies). The objective is to minimize a weighted sum of the total distance traveled, the total completion time of the routes, and the number of unserved requests. To this end, the problem supports a flexible coupling and decoupling of active and passive vehicles at customer locations. Accordingly, the operations of the vehicles have to be synchronized carefully in the planning. The contribution of the paper is twofold: Firstly, we present an exact branch-and-price-and-cut algorithm for this class of routing problems with synchronization constraints. To our knowledge, this algorithm is the first such approach that explicitly considers the temporal interdependencies between active and passive vehicles. The algorithm is based on a non-trivial network representation that models the logical relationships between the different transport tasks necessary to fulfill a request as well as the synchronization of the movements of active and passive vehicles. Secondly, we contribute to the development of branch-and-price methods in general, in that we solve, for the first time, an ng-path relaxation of a pricing problem with linear vertex costs by means of a bidirectional labeling algorithm. Computational experiments show that the proposed algorithm delivers improved bounds and solutions for a number of APVRP benchmark instances. It is able to solve instances with up to 76 tasks, 4 active, and 8 passive vehicles to optimality within two hours of CPU time.
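
As a minimal illustration of the ng-path mechanics used in such pricing problems (generic sketch only; the names are ours, and the paper's labeling algorithm carries more resources and runs bidirectionally), a forward label can be extended as follows:

```python
from dataclasses import dataclass

@dataclass
class Label:
    node: int
    cost: float
    time: float
    memory: frozenset   # ng-memory: nodes this label still "remembers" having visited

def extend(label, j, arc_cost, travel_time, ng_neighborhood):
    """Extend a label along the arc (label.node -> j); return None if infeasible.
    ng_neighborhood[j] is the ng-neighbourhood N_ng(j) of node j."""
    if j in label.memory:       # j is remembered, so revisiting it is forbidden
        return None
    # keep only the remembered nodes that lie in N_ng(j), then add j itself
    new_memory = (label.memory & ng_neighborhood[j]) | {j}
    return Label(node=j,
                 cost=label.cost + arc_cost,
                 time=label.time + travel_time,
                 memory=frozenset(new_memory))
```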

Aggregation in Map Generalization by Combinatorial Optimization

where x denotes the real vector of unknown parameters (the point movements) and v denotes the real vector of unknown residuals, that is, the degree of constraint satisfaction. Both A (referred to as the design matrix) and l (the vector of observations) need to be specified in advance to define the constraints. The constraints are perfectly satisfied if v = 0. As this is generally not possible for all constraints, the function v^T · P · v is minimized, where P defines the weights between different constraints. If there are non-linear constraints, these are usually replaced by their linear approximations. Sarjakoski & Kilpeläinen (1999) and Harrie & Sarjakoski (2002) show how to solve the problem for large datasets, also considering other generalization operators. Applying the same adjustment technique, Koch & Heipke (2005) and Koch (2007) additionally show how to cope with hard inequality constraints that are needed to ensure consistency between DLMs and digital terrain models. Related problems are discussed in the generalization domain, for example, that a river must not run uphill (Gaffuri, 2007). Least squares adjustment allows different generalization operators to be handled, yet the existing generalization methods that are based on this technique do not take the discrete nature of map generalization into account. Usually, continuous variables are used to model a problem. These are not suited, for example, to represent whether a vertex of an original line is selected for its simplification. In their system, Sarjakoski & Kilpeläinen (1999) define a constraint that attempts to pull an unwanted vertex onto the line connecting its predecessor and successor. This is a smart workaround that also allows for line simplification, but of course it is not a solution to the discrete problem of vertex selection, which allows only two states and none in between. Sester (2005) applies adjustment calculus to satisfy constraints in building simplification, but also points out that it does not solve the whole problem: the elimination of details is done in a first step, which is not based on optimization. The handling of hard constraints in optimization approaches is seldom addressed in the map generalization literature. Often constraints are relaxed, as they are conflicting (Harrie & Weibel, 2007). A few exceptions exist in the context of discrete optimization, which is addressed in the next section.
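
The excerpt starts after the underlying observation equation; a standard least-squares adjustment formulation consistent with the description (notation assumed, not quoted from the thesis) is

```latex
v = A\,x - l, \qquad \min_{x}\; v^{\mathsf T} P\, v,
\qquad \hat{x} = \bigl(A^{\mathsf T} P A\bigr)^{-1} A^{\mathsf T} P\, l ,
```

so that the weighted residuals are minimized and the estimate follows from the weighted normal equations.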

A review of simheuristics: Extending metaheuristics to deal with stochastic combinatorial optimization problems

7. Conclusions and future perspectives

Most of the combinatorial optimization problems that are found in real-world applications have a stochastic nature. Since the vast majority of the articles in the combinatorial optimization literature deal with deterministic scenarios, there is a need for simulation-optimization approaches that allow researchers and practitioners to solve realistic models including uncertainty. Simheuristics help to fill this gap by extending metaheuristic algorithms in a natural way, so that they can also be applied to combinatorial optimization problems with stochastic components either in the objective function or in the set of constraints. However, the concept of simheuristics described in this paper differs from the metaheuristics reported in the SO community [9, 57]. Instead of using a pure black-box approach, where evaluations are performed only by simulation, simheuristics closely integrate optimization and simulation by incorporating problem-specific information. Thus, analytical expressions complement the optimization process and may be used to screen poor or infeasible solutions. Since these analytical expressions are problem-specific, they exist prior to any simulation run. Therefore, they are not as dependent on simulation as the metamodels used in the SO community; still, they can be enhanced with the simulation feedback. Finally, by design simheuristics are able to provide different alternative solutions of similar quality and promote the introduction of risk or reliability analysis criteria when comparing these solutions, so the decision-maker can choose the solution that best fits his/her utility function according to these criteria. In order to control the computational time invested in performing simulations, there are some critical issues in the design of an efficient simheuristic algorithm. One issue is the selection policy for promising solutions, i.e., the ones that will be sent to the simulation component. Another issue is the number of replications that must be run for each of these promising solutions. During the stochastic search process, simulations with a relatively small number of replications should be sufficient to obtain rough estimates of the solution value, so that a list of elite solutions can be constructed. Once the stochastic search process is finished, simulations with more replications can be run in order to obtain more accurate estimates for each of the elite solutions. Alternatively, statistical selection methods can be incorporated to adjust the simulation length according to the difference between solutions. Also, variance reduction techniques can be employed here.
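
A generic skeleton matching this description could look as follows; this is our own sketch under the assumption of a minimisation problem, with `generate_neighbour` and `simulate` as placeholders for the metaheuristic move and the stochastic evaluation.

```python
import statistics

def simheuristic(generate_neighbour, simulate, x0, iters=500,
                 short_reps=10, long_reps=200, elite_size=5):
    """Illustrative simheuristic skeleton: promising solutions get a cheap
    stochastic evaluation during the search, and the elite set is re-evaluated
    with many more replications at the end."""
    best = x0
    best_est = statistics.mean(simulate(best) for _ in range(short_reps))
    elite = [(best_est, best)]
    for _ in range(iters):
        cand = generate_neighbour(best)
        est = statistics.mean(simulate(cand) for _ in range(short_reps))  # rough estimate
        if est < best_est:                                # minimisation assumed
            best, best_est = cand, est
            elite = sorted(elite + [(est, cand)], key=lambda t: t[0])[:elite_size]
    # refine the elite solutions with longer simulation runs
    refined = [(statistics.mean(simulate(s) for _ in range(long_reps)), s)
               for _, s in elite]
    return min(refined, key=lambda t: t[0])
```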

Locally optimal designs for generalized linear models with applications to gamma models

In the present chapter the gamma model with continuous (quantitative) factors is considered. There is a wide range of applications where the gamma model with its canonical link can be fitted. Nevertheless, there is always some doubt about the suitable link function for the outcomes. Common alternative links may come from the power link family, which includes the canonical link; it is therefore a favorite choice for employment in this thesis. In Section 4.1, we introduce the gamma model, highlighting the related assumptions. Additionally, the notions of locally complete classes and locally essentially complete classes are presented. In Section 4.2, locally complete classes and locally essentially complete classes of designs are found, leading to a considerable reduction of the problems of locally optimal designs for gamma models. From those classes, locally D- and A-optimal designs are derived. Besides, as the gamma model is a particular generalized linear model, the results obtained in Chapter 3 for a general setup of the generalized linear model are applied in the relevant cases here. The optimality conditions are characterized intuitively by the model parameters and hence cover relevant subregions of the parameter space. Thus, our results on local D- or A-optimality are applicable for the majority of possible parameter points. In Section 4.3, we consider a model with a single continuous factor.

Aspects of Preprocessing Applied to Combinatorial Graph Problems

The thesis is structured into six chapters, representing parts of my work in the field of parameterized complexity theory. Chapter 1 contains an introduction to and motivation of preprocessing, as well as preliminary explanations of the graph-theoretic and (parameterized) complexity-theoretic notation used throughout the thesis. Chapter 2 contains work I did in collaboration with my coauthor Johannes Uhlmann on the Two-Layer Planarization problem [191], which aims at making a given graph drawable in two layers without edge crossings by deleting edges. As we shared a room in our offices at the Friedrich-Schiller-Universität Jena, he approached me with the suggestion to look at this problem. Based on earlier work he did with Nadja Betzler and Jiong Guo, there was a stub manuscript containing some data reduction rules. In joint work, Johannes and I developed the missing data reduction rules, and I came up with a uniform way of presenting our preprocessing using "tokens". We proved correctness of the whole procedure and developed a branching strategy to solve Two-Layer Planarization. The paper was then accepted for publication at the 7th Annual Conference on Theory and Applications of Models of Computation (TAMC 2010), yielding an invitation to a special issue of the journal Theoretical Computer Science. Recently, I picked the paper up again and worked on the question of whether the branching algorithm presented by Suderman [182] could be adapted to our parameterization. I succeeded to some extent, providing a branching algorithm whose asymptotic running time almost matches that of Suderman's algorithm. I implemented the algorithm and tested it against published results by Mutzel [156] and by Suderman and Whitesides [183]. This work, however, has not been published so far.

Additive models: Extensions and related models

Clearly, an alternative approach would be to calculate estimators f̂_1 and f̂_2 in the model Z_i = f_1(X_{i+1}) + f_2(X_i) + η_i and to use f̂_1(x) − f̂_2(x) as an estimator of f. We will come back to related models below. The additive model is important for two reasons: (i) It is the simplest nonparametric regression model with several nonparametric components. The theoretical analysis is quite simple because the nonparametric components enter linearly into the model. Furthermore, the mathematical analysis can build on localization arguments from classical smoothing theory. The simple structure allows for a complete understanding of how the presence of additional terms influences the estimation of each one of the nonparametric curves. This question is related to semiparametric efficiency in models with a parametric component and nonparametric nuisance components. We will come back to a short discussion of nonparametric efficiency below. (ii) The additive model is also important for practical reasons. It efficiently avoids the curse of dimensionality of a full-dimensional nonparametric estimator. Nevertheless, it is a powerful and flexible model for high-dimensional data. Higher-dimensional structures can be well approximated by additive functions. As lower-dimensional curves, the components are also easier to visualize and hence to interpret than a higher-dimensional function.
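
For reference, the additive model discussed here can be written in its standard form (our notation) as

```latex
Y_i \;=\; m_0 \;+\; \sum_{j=1}^{d} m_j\bigl(X_i^{(j)}\bigr) \;+\; \varepsilon_i ,
\qquad \operatorname{E}\bigl[m_j\bigl(X^{(j)}\bigr)\bigr] = 0 \quad (j = 1, \dots, d),
```

where the centering conditions make the individual component functions identifiable.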

Structure Exploitation in Mixed-Integer Optimization with Applications to Energy Systems

The inherent modularity of distributed and decentralized frameworks can also be beneficial in a multi-agent setting when the number of agents in the system is dynamic. For example, when a faulty component needs to be isolated and shut down for maintenance, the temporarily reduced system can be readily re-optimized within a distributed optimization framework. It is worth noting that agents can also be unresponsive or act atypically, which is known as a Byzantine fault. Detection and resolution of Byzantine faults are not considered within this thesis, but we refer the interested reader to an early paper by Lamport et al. [LSP82] and the overview by Driscoll [DHSZ03]. For distributed continuous convex optimization, there are already a number of available algorithms, such as dual decomposition [Eve63, NS08], the Alternating Direction Method of Multipliers (ADMM) [BPC+11, EB92, GM76], or Augmented Lagrangian based Alternating Direction Inexact Newton (ALADIN) methods [HFD16], all of which can be used to solve large-scale strictly convex programs to global optimality by alternating between solving small-scale convex optimization problems and sparse linear algebra operations.
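
As a concrete (and deliberately generic) illustration of the ADMM pattern mentioned above, the sketch below solves a separable least-squares consensus problem; it is our own minimal example, not an algorithm from the thesis.

```python
import numpy as np

def consensus_admm(A_list, b_list, rho=1.0, iters=200):
    """Consensus ADMM for min_x sum_i 0.5*||A_i x - b_i||^2: each agent solves a
    small local subproblem, the coordinator averages, and dual variables enforce
    agreement among the local copies."""
    n, N = A_list[0].shape[1], len(A_list)
    x = [np.zeros(n) for _ in range(N)]   # local copies
    u = [np.zeros(n) for _ in range(N)]   # scaled dual variables
    z = np.zeros(n)                       # consensus variable
    for _ in range(iters):
        for i in range(N):
            # local subproblem with proximal term (rho/2)*||x_i - z + u_i||^2
            H = A_list[i].T @ A_list[i] + rho * np.eye(n)
            g = A_list[i].T @ b_list[i] + rho * (z - u[i])
            x[i] = np.linalg.solve(H, g)
        z = np.mean([x[i] + u[i] for i in range(N)], axis=0)   # consensus step
        for i in range(N):
            u[i] = u[i] + x[i] - z                             # dual update
    return z
```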

Random Graph Models and their Application to Biological Networks

The small-world problem dates back to Stanley Milgram, who was the first to study this phenomenon in social networks [16]. He discovered by an experiment that two randomly chosen people are closely related to each other, despite the fact that they may be very different. In his experiment, Milgram asked an arbitrarily chosen person living in Nebraska to send a letter to a stockbroker in Massachusetts by passing the letter from person to person. Furthermore, Milgram imposed the restriction that the letter may only be sent to a person known on a first-name basis. To his surprise, he found that on average the letter was passed to only six other people before it reached the stockbroker. The conclusion is that in the world, considered as a social network of people connected through friendship or acquaintanceship, the average path length between any two people is rather short [17]. In addition to these short connections between people, most people have a high number of friends or acquaintances, which makes the network highly clustered.

Marginalized predictive likelihood comparisons of linear Gaussian state-space models with applications to DSGE, DSGEVAR, and VAR models

Turning to the quadratic standardized forecast error term in Figure 5, it may be deduced that the time variation of the log predictive likelihood is primarily due to the forecast errors. This is not surprising, since the covariance matrix of the predictive distribution changes slowly and smoothly over time while the forecast errors are more volatile. Moreover, the ranking of the models is to some extent reversed, particularly with the BVAR having much larger standardized forecast errors than the other models over the first half of the forecast sample. With the exception of the random walk model, this is broadly consistent with the findings for the point forecasts; see Warne et al. (2013). The reversal in rankings for the forecast error term can also be understood from the behavior of the second moments: a given squared forecast error yields a larger value for this term the smaller the uncertainty linked to the forecast is. Nevertheless, when compared with the forecast uncertainty term in Figure 4, the differences between the models are generally smaller for the forecast error term. This suggests that the model ranking based on the log predictive score is primarily determined by the second moments of the predictive distribution in this application.
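
For orientation, under a Gaussian predictive density the log predictive likelihood separates into the two terms discussed here, a forecast-uncertainty (log-determinant) term and a quadratic standardized forecast-error term (generic form, our notation):

```latex
\log p\bigl(y_{t+h} \mid \mathcal{I}_t\bigr)
  \;=\; -\tfrac{n}{2}\log 2\pi
        \;-\; \tfrac{1}{2}\log\bigl|\Sigma_{t+h\mid t}\bigr|
        \;-\; \tfrac{1}{2}\, e_{t+h\mid t}'\,\Sigma_{t+h\mid t}^{-1}\, e_{t+h\mid t},
\qquad e_{t+h\mid t} \;=\; y_{t+h} - \hat{y}_{t+h\mid t} .
```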

Optimization of Realistic Loudspeaker Models With Respect to Basic Response Characteristics

upper bound. Young's modulus and density are chosen as the parameters to be changed during the optimization because they are strongly linked to the vibration pattern of the mechanical structure. The parameters need to change with some co-dependency (here a linear dependency) so that the achieved material configuration is kept realistic and trivial solutions are avoided.


Analysis of asymmetric GARCH volatility models with applications to margin measurement

Based on the BIC information criterion, with its heavier penalty for extra parameters, the GTARCH model without spline is preferred, while using the AIC criterion, the Spline-Macro-GTARCH is the superior model. This result holds for both SPX and TSX. Note that both selected models include the most general GTARCH specification, with asymmetry present in both the ARCH and the GARCH terms. Moreover, the asymmetric term δ goes up to 0.24 in the SPX Spline GTARCH model, making the response to negative news even more asymmetric compared to the GTARCH without spline with δ = 0.16. The optimal number of knots in the SPX Spline model is 17, while the number of knots goes down to 8 when we add macroeconomic variables. Macroeconomic variables are useful in modeling the low-frequency component, as their presence reduces the number of knots for cycles and they have a statistically significant effect on long-run volatility dynamics. Engle and Rangel (2008) use macroeconomic variables for panel regressions of 48 countries using annual volatility data. In our paper, we model macroeconomic variables for daily volatility forecasting in the slow-moving component. Thus, statistically significant macroeconomic variables in the low-frequency component could be used for stress testing of VaRs, which is a typical regulatory requirement. The following variables are statistically significant at 10% for predicting the low-frequency volatility component for SPX:
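
One common way to write a GARCH specification with asymmetry in both the ARCH and the GARCH terms is sketched below; the exact parameterization and the role of δ in the paper may differ, so treat this as an assumed illustration.

```latex
\sigma_t^2 \;=\; \omega
  \;+\; \bigl(\alpha + \delta_1\, \mathbf{1}\{\varepsilon_{t-1}<0\}\bigr)\,\varepsilon_{t-1}^2
  \;+\; \bigl(\beta  + \delta_2\, \mathbf{1}\{\varepsilon_{t-1}<0\}\bigr)\,\sigma_{t-1}^2 ,
```

where the indicator terms raise the impact of negative shocks on conditional variance.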

Engineering combinatorial optimization algorithms to improve the lifetime of OLED displays

Considering videos, it becomes clear why we are particularly interested in the average reduction ratio. The diodes are more or less uniformly exposed to the stress, and the degradation is a long, drawn-out process. Moreover, the worst-case reduction ratio for any algorithm is 1, since an image that is represented by a diagonal matrix cannot be decomposed into multilines. Note that this also holds for non-consecutive multiline addressing. On the other hand, we advise designers of user interfaces for CMLA-driven OLED devices to bear in mind that they may delay the so-called burn-in effect of frequently displayed steady images by smoothing sharp diagonal edges, e.g. by antialiasing. As mentioned before, we claim that an economical hardware implementation becomes possible. The hardware complexity of the logic needed to implement mlacompact is only a few thousand gates for QQVGA displays. This corresponds to a sufficiently small area on a silicon wafer, which is necessary to be competitive in the highly contested display market. However, if we use the same amount of RAM as we did on the PC, that is, enough to store the complete output, the area will increase significantly. We will address this issue in the following.

Bayesian Recognition of Motion Related Activities with Inertial Sensors

Based on the 3D turn rates and accelerations provided by the IMU, we analysed characteristic features for each target activity with respect to their physical or bio-mechanical explanation, their discriminative power between activities, and their computational complexity. The features span different window lengths from 32 to 512 samples (at 100 Hz), which capture activities of different natures, from instantaneous activities (like "jumping") to longer-term, repetitive activities (like "running"). All features are calculated in real time at a frequency of 4 Hz and discretised into states that are meaningful for distinguishing between activities. These states have been defined manually in our setup, but this could easily be automated with data clustering algorithms. In our implementation, the set of features is easily extendable and would also seamlessly cover the integration of more sensors into the system. For the classification, we decided to apply Bayesian techniques. With the discretised value ranges of all features, we applied a modified learning algorithm for discrete Bayesian Networks (BNs), the Greedy Hill Climber with Random Restarts based on the Cooper and Herskovits log score (see [8]) and Dirichlet distributions for the conditional probability tables, to our 270-minute activity data set. We limited structure learning to a fixed number of parents per node and imposed causal direction on the learnt arcs. The learnt structure is shown in Figure 1.
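
To illustrate the windowing and discretisation step described above (our own sketch, not the paper's code; window size, hop, and bin edges are assumed values), a single feature can be computed and discretised as follows:

```python
import numpy as np

def windowed_feature(signal, window=128, hop=25, bins=(0.1, 0.5, 2.0)):
    """Compute the variance of an inertial signal over a sliding window
    (100 Hz input with a hop of 25 samples gives a 4 Hz feature rate) and
    discretise it into a small number of states."""
    states = []
    for start in range(0, len(signal) - window + 1, hop):
        v = np.var(signal[start:start + window])
        states.append(int(np.digitize(v, bins)))   # e.g. 0 = still, ..., 3 = vigorous
    return states
```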

Choosing Prior Hyperparameters: With Applications To Time-Varying Parameter Models

prior beliefs about the hyperparameter κ are encoded in a prior distribution p(κ). From a conceptual point of view, a researcher could introduce another level of hierarchy and make the prior for κ depend on further hyperparameters as well. Since we are concerned with applications where the dimensionality of κ is already small (such as the time-varying parameter models we describe later), we will not pursue this question further in this paper; our approach could be extended in a straightforward manner if a researcher were interested in introducing additional levels of hierarchy. We focus here on drawing one vector of hyperparameters, but other vectors of hyperparameters could be included in θ (which could be high-dimensional, as in our time-varying parameter VAR later). Draws of those other vectors of hyperparameters would then be generated using additional Metropolis steps that have the same structure. If J vectors of hyperparameters are present, we denote vector j by κ_j (j = 1, ..., J) and the vector of all hyperparameters by κ̃ = [κ′_1 κ′_2 ... κ′_J]′.
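
A single Metropolis step of the kind alluded to could look like the sketch below; this is our own generic random-walk illustration, where `log_post(kappa)` is assumed to return the log of p(y | κ) p(κ), e.g. via the model's marginal likelihood.

```python
import numpy as np

def metropolis_step_kappa(kappa, log_post, proposal_sd, rng=np.random.default_rng()):
    """One random-walk Metropolis step for a low-dimensional hyperparameter
    vector kappa (illustrative sketch, not the paper's sampler)."""
    proposal = kappa + proposal_sd * rng.standard_normal(kappa.shape)
    log_alpha = log_post(proposal) - log_post(kappa)   # log acceptance ratio
    if np.log(rng.uniform()) < log_alpha:
        return proposal, True    # accept the proposed hyperparameters
    return kappa, False          # reject and keep the current draw
```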
