Whenever a certain function is to be minimized (e.g., a sum of squared residuals) or maximized (e.g., profit), optimization methods are applied. If, in addition, prior knowledge about some of the parameters can be expressed as bounds (e.g., a non-negativity bound for a density), we are dealing with an optimization problem with inequality constraints. Although common in many economic and engineering disciplines, inequality-constrained adjustment methods are rarely used in geodesy. Within this thesis, methodological aspects of convex optimization methods are covered and analogies to adjustment theory are provided. Furthermore, three shortcomings are identified which are, in the opinion of the author, the main obstacles that prevent a broader use of inequality-constrained adjustment theory in geodesy. First, most optimization algorithms do not provide quality information for the estimate. Second, most of the existing algorithms for the adjustment of rank-deficient systems either provide only one arbitrary particular solution or compute only an approximate solution. Third, the Gauss-Helmert model with inequality constraints has hardly been treated in the literature so far. We propose solutions for all three obstacles and provide simulation studies to illustrate our approach and to show its potential for the geodetic community.
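A minimal sketch of such an inequality-constrained adjustment (a toy example, not taken from the thesis): a linear least-squares fit where prior knowledge imposes non-negativity on the parameters, solved with SciPy's bounded least-squares routine. The design matrix and observations below are invented so that the unconstrained estimate violates the bound.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Hypothetical design matrix and observations, chosen so that the
# unconstrained estimate of the first parameter becomes negative.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([0.0, 0.4, 1.4])

# Ordinary (unconstrained) least-squares adjustment.
x_free = np.linalg.lstsq(A, y, rcond=None)[0]

# Inequality-constrained adjustment: enforce non-negativity of both parameters.
x_con = lsq_linear(A, y, bounds=(0.0, np.inf)).x
```

The constrained residual norm is necessarily at least as large as the unconstrained one; the bound is active exactly when the prior knowledge conflicts with the data.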
• In Chapter 5, we studied unconstrained planar point-objective location problems in which the distances between points are defined by means of the Manhattan norm. By characterizing the nonessential objectives of such location problems and eliminating them, we developed an effective algorithm (the Rectangular Decomposition Algorithm) for generating the whole set of Pareto efficient solutions as the union of a special family of rectangles and line segments.
• We analyzed point-objective location problems in finite-dimensional Hilbert spaces involving multiple forbidden regions (see Chapter 6). For the choice of the new location point, we take into consideration a finite number of forbidden regions that are given by open balls (defined with respect to the underlying norm). For such a nonconvex multi-objective location problem, under the assumption that the forbidden regions are pairwise disjoint, we succeeded in giving complete geometrical descriptions of the sets of (strictly, weakly) Pareto efficient solutions.
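To make the notion of Pareto efficiency in a point-objective location problem concrete, the brute-force sketch below (illustrative only, unrelated to the Rectangular Decomposition Algorithm) filters the non-dominated candidates on a coarse grid for a few made-up demand points under the Manhattan norm.

```python
import itertools
import numpy as np

# Hypothetical demand points of a planar point-objective location problem.
demand = np.array([[0.0, 0.0], [2.0, 1.0], [1.0, 3.0]])

def objectives(x):
    # Vector of Manhattan distances from candidate x to all demand points.
    return np.abs(demand - x).sum(axis=1)

def dominates(fx, fy):
    # x dominates y if it is no worse in every objective and better in one.
    return np.all(fx <= fy) and np.any(fx < fy)

# Brute-force the Pareto-efficient candidates on a coarse grid.
grid = [np.array(p) for p in itertools.product(np.linspace(-1.0, 4.0, 11), repeat=2)]
vals = [objectives(p) for p in grid]
efficient = [p for p, fp in zip(grid, vals)
             if not any(dominates(fq, fp) for fq in vals)]
```

Every demand point is itself efficient (it uniquely minimizes its own objective), and all efficient points lie in the bounding box of the demand points; real algorithms exploit exactly such structure instead of enumerating.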
We consider constrained optimization problems that are distributed over multi-agent networks. In such scenarios, each agent has a local cost function known only to the respective agent. The overall goal of the network is to minimize the sum of all local functions, while the exact form of each should remain private. This type of problem is known as social welfare optimization. Objective variables are often subject to a variety of constraints, depending on the application, that need to be considered in the optimization process. For many such constrained problems, one can distinguish between global constraints that affect all agents in the system and local constraints that are relevant only to a single agent. An example application is the distributed economic dispatch problem (DEDP), where each agent represents a generator with a distinct cost function. The goal of DEDPs is to minimize the overall cost of producing power while matching the demand and keeping the production inside the generators' limits. In such problems, the balancing constraint is globally defined, as it constrains the power production of all generators, but the limits of each generator should remain private and therefore local. Depending on the choice of cost function, the resulting problem is either convex or non-convex.
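As a toy, centralized sketch of the convex (quadratic-cost) economic dispatch problem described above: the coefficients, demand, and limits below are invented for illustration, and no distribution or privacy mechanism is shown; it only exhibits the global balancing constraint next to the local generator limits.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical quadratic generator costs: cost_i(p) = a_i * p**2 + b_i * p.
a = np.array([0.10, 0.05, 0.20])
b = np.array([2.0, 3.0, 1.0])
demand = 30.0                     # global balancing constraint: sum(p) == demand
p_min, p_max = 0.0, 20.0          # local generator limits (private in a DEDP)

total_cost = lambda p: np.sum(a * p**2 + b * p)
res = minimize(
    total_cost,
    x0=np.full(3, demand / 3),
    constraints={"type": "eq", "fun": lambda p: np.sum(p) - demand},
    bounds=[(p_min, p_max)] * 3,
)
p_opt = res.x
```

A distributed solver would reach the same allocation by exchanging, e.g., estimates of the balancing multiplier rather than the private cost functions.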
Also in so-called quasi-infinite horizon NMPC schemes, a terminal region or terminal inequality constraint is used to enforce stability [Chen and Allgöwer 1998]. As depicted in Figure 2.2, the terminal region is a neighborhood of the desired set point. Moreover, a terminal feedback law guarantees constraint satisfaction and convergence inside E. In contrast to the dual-mode strategy, the terminal feedback is never applied to the real system. Its value is merely conceptual in the sense that it allows the construction of a local Lyapunov function which is valid on E. Using this local Lyapunov function, one can guarantee the decrease of the optimal value function of the OCP (2.3) and show stability of the closed NMPC loop. A seminal overview of these approaches, covering discrete and continuous time formulations, is provided in [Mayne et al. 2000]. More general formulations allowing the consideration of non-autonomous systems and relaxed controls have been proposed, for example, in [Findeisen 2006; Fontes 2001]. There, generalized terminal penalties, not necessarily local Lyapunov functions, are employed. It should be noted that a terminal region implies a rather strong requirement on the controllability properties in the presence of constraints: in a finite time span T_p the
As far as generic methods are concerned, precisely because these algorithms are generic, their performance cannot be fully satisfactory in every case. The special methods, on the other hand, are applicable only to optimization problems having a convex search region, or to optimization problems whose objective and constraint functions are differentiable. In fact, among the generic methods, the most popular approach in practical optimization for handling the constraints of an optimization problem is the penalty function method. It involves a number of penalty parameters that must be set correctly in any algorithm in order to obtain the optimal solution, and this sensitivity to the penalty parameters has led many researchers to devise sophisticated penalty function methods. These methods can mainly be divided into three categories: a) multi-level penalty functions ; b) dynamic penalty functions based on adaptive
The focus of this thesis is the investigation of the facial structure of the cardinality constrained matroid, path and cycle polytopes. As might be expected, and as is exemplarily shown for path, cycle, and matroid polytopes, an inequality that induces a facet of the polytope associated with the ordinary problem usually induces a facet of the polytope associated with the cardinality restricted version. However, we are in particular interested in inequalities that cut off the incidence vectors of solutions that are feasible for the ordinary problem but infeasible for its cardinality restricted version. In this context, the most important class of inequalities for this thesis are the so-called forbidden cardinality inequalities. These inequalities are valid for a polytope associated with a cardinality constrained combinatorial optimization problem independent of its specific combinatorial structure. Using these inequalities as prototypes for inequalities incorporating combinatorial structures of a problem, we derive facet defining inequalities for polytopes associated with several cardinality constrained combinatorial optimization problems, in particular for the above-mentioned polytopes. Moreover, for cardinality constrained path and cycle polytopes we derive further classes of facet defining inequalities related to cardinality restrictions, including inequalities specific to odd/even paths/cycles and hop-constrained paths.
To solve the optimization problems (4) and (5), an active-set SQP method is considered using a BFGS update (see, e.g., ). In each iteration step a quadratic subproblem is constructed, and its solution yields a new iterate. The SQP approach can be interpreted as Newton's method applied to the KKT optimality conditions, with each quadratic subproblem corresponding to one Newton step. The inequality constraints are treated using an active-set strategy. In that framework, an inequality constraint appears in the Lagrange formulation as an equality constraint if the bound is reached at the current design point, and is neglected otherwise. Hence, in each iteration step only subproblems with equality constraints are solved. Here we sketch the algorithm for the example of an active stability constraint. We define the Lagrange function for (4) as follows
An earlier reformulation of cardinality constraints was introduced in , using binary auxiliary variables for the case of polyhedral constraints and a quadratic objective function. This formulation led to the application of methods from discrete optimization: in  a branch & bound method for a quadratic objective function was proposed. In  a concave reformulation is considered and applied to portfolio selection problems. Approximation techniques such as simulated annealing are studied in [17, 78]. For a convex objective function, an approximation of the cardinality constraint with the ℓ1-norm is studied in , see also the survey article . Linear programs with (multiple) overlapping cardinality constraints are considered in . Cardinality constraints were first motivated by an application to portfolio optimization in  and have since been further studied in this context. In  a local relaxation method was proposed. The application of a convex penalty function was considered in . Compared to the literature covering mixed-integer or approximation methods for cardinality constrained optimization problems, there are few publications covering approaches from nonlinear optimization. In  a special case is considered in which no constraints other than the cardinality constraint are present. For this case, optimality conditions from nonlinear optimization and algorithms are investigated. In  first and second order optimality conditions are given. These are formulated using the original cardinality constraint and suitable normal cones of the corresponding feasible set. After being introduced in [14, 26], the complementarity formulation (1.2) was further studied in [15, 88, 11, 13].
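As a concrete instance of the ℓ1 surrogate mentioned above, here is a minimal ISTA (proximal gradient) sketch for min ½‖Ax − y‖² + λ‖x‖₁ on synthetic data; the data, λ, and iteration count are all invented for illustration and are not drawn from any of the cited works.

```python
import numpy as np

# Synthetic sparse recovery instance: a 2-sparse ground truth, noiseless data.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [3.0, -2.0]
y = A @ x_true

def ista(A, y, lam=1.0, steps=500):
    # Proximal gradient for (1/2)||Ax - y||^2 + lam * ||x||_1.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - y)              # gradient of the smooth part
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)
```

The convex surrogate recovers the support of the cardinality-constrained solution here, at the price of a small shrinkage bias on the nonzero entries.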
relative eccentricity and inclination vectors inside convex control windows, minimum separation distances between satellites can be guaranteed. Defining constraint windows on relative orbital elements does not inhibit natural orbital motion and removes the need for a convexification of the minimum separation distance constraint. In  it was demonstrated that differential perturbations between satellites are small within a geostationary slot, suggesting that, in the absence of maneuvers, variations in the osculating relative states are small as well. Thus, if the guidance and control problem is formulated directly in terms of osculating relative states, and these relative states are maintained within small tolerance windows, the number of satellites that can be collocated within a single slot increases. To achieve this increase, the method for station-keeping of geostationary satellites introduced in  is reformulated in terms of relative states. A leader-follower control hierarchy is used to control the fleet. The leader satellite can be controlled using any desired method (we use the method from ), and it is assumed that the predicted leader state trajectory is available for determining the followers' station-keeping maneuvers. Since the follower trajectory depends only on the leader state trajectory, the method can be implemented in a decentralized manner and is scalable to an arbitrary number of follower satellites (n.b., the number of follower satellites is limited by the size of the geostationary slot and the desired minimum separation distance). The maneuvers are determined by formulating and solving a convex optimization problem. The method from  is further improved by explicitly accounting for the thruster configuration and using the thrusts of each individual thruster directly as independent variables in the optimization problem. The optimization problem is scaled to improve the numerical solution.
Simulations were carried out using Matlab. We used a validated propagator including Earth gravity up to 8th order and degree, Moon gravity, Sun gravity and solar radiation pressure (SRP). The RK4 integration method was used, with a timestep of 100 seconds. The optimization problem was formulated using CVX  and solved using MOSEK . The timestep in the optimization problem was 1000 s. Maneuver plans were implemented based on a simple on/off thruster model with a single operational point (i.e., T = 0 or T = Tmax) as in . Several forms of uncertainty were included: Gaussian orbit determination errors were implemented based on the covariance matrix in Table 1, and actuator uncertainty was included by implementing a 5% thrust force error (Gaussian, 3σ) and a 1.5° attitude error (Gaussian, 3σ). In addition, SRP uncertainty was included as a 15% uniform random error on the acceleration due to SRP.
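For reference, a classical RK4 step of the kind used by the propagator looks like the sketch below; the dynamics f here is a stand-in harmonic oscillator, not the actual force model (Earth gravity to 8th order and degree, Moon, Sun, SRP), and the step size is chosen for the toy problem rather than the 100 s of the simulation.

```python
import numpy as np

def rk4_step(f, t, x, h):
    # One classical fourth-order Runge-Kutta step for x' = f(t, x).
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Stand-in dynamics: harmonic oscillator x'' = -x, state (position, velocity).
f = lambda t, x: np.array([x[1], -x[0]])
x = np.array([1.0, 0.0])
h, t = 0.01, 0.0
for _ in range(628):                    # integrate to t ≈ 2*pi
    x = rk4_step(f, t, x, h)
    t += h
```

With the exact solution (cos t, −sin t) available, the fourth-order accuracy of the step is easy to verify numerically.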
more recent years the space for seminal ideas has drastically decreased and the interest has shifted towards the organisation, refinement and selection of existing methods. As scientists in this field, our interest is in understanding, articulating, and extending the paradigm underlying the algorithms that already exist, and in deriving a theory of practice for their implementation. Despite some attempts through landscape analyses and theoretical insights on the convergence of the algorithms, many issues concerning these algorithms remain open. While answering these issues through theoretical proofs remains very hard, a new awareness has arisen that the experimental method can provide satisfactory answers. Our interest is in determining whether these methods, as they have been applied so far, already yield the best possible performance or whether margins for improvement still exist. Most importantly, we would like to clearly map SLS methods to optimisation problems; that is, given a problem and specifications on its instances, we would like to find in the literature (or, more likely, on the Internet) a clear answer about which algorithm is worth re-implementing because it has been shown to perform best among many others. A similar map would also be valuable for the components underlying SLS methods, such as construction heuristics, neighbourhood structures, local search procedures, the number of solutions maintained, the interaction between these solutions, etc. Whenever a practitioner faces the same problem, or a similar one, he could consult these results and adopt the correct method.
treated as a novel one fusing the advantages of the three aforementioned approaches. First, it searches for an initial matching in a similar way to the alignment method, to guarantee global optimization in polynomial time. Second, similar to the ICP algorithm, the matching and the transformation are updated alternately to achieve higher precision with less computation. The difference is that the CAO framework explores all the initial solutions and uses different update methods. Third, the uncertainty of transformed features is taken into account, as in geometric constraint analysis, to achieve high robustness, but as a distribution function instead of a geometric region.
Previous experience shows that multigrid algorithms based on the present smoothing strategy can solve optimal control problems with optimal computational complexity and appear to be robust with respect to the choice of values of the optimization parameters; see -. Among others, representative examples demonstrating these facts can be found in  for the case of singular optimal control problems and in  for the case of linear control-constrained optimal control problems. In the latter case, the resulting multigrid algorithm made it possible to investigate bang-bang control and, for the first time, to show the occurrence of chattering of the control in an elliptic problem. For more details see .
In order to avoid getting stuck in local minima, global search methods such as simulated annealing  and genetic algorithms  have been proposed. However, they have the disadvantage of being painstakingly slow and are not guaranteed to converge to a good solution . The most effective general optimization procedures (i.e., those which can be used without a starting design) are probably the ones based on a special method called the needle optimization technique . The needle optimization technique is the basis of the commercial software OptiLayer , which has yielded some impressive results [48, 62], but it also becomes slow when the number of layers is large.
This paper described a decomposition approach for constrained optimal control problems and a suitable algorithm based on a transformation technique with saturation functions and a hierarchical optimization approach. The transformation method extends the originally formulated OCP and replaces the considered state and input constraints by new subdynamics and algebraic equations. In this regard, the reformulation of the OCP represents an analytical step that can be performed with computer algebra software such as Mathematica. The structure of the new OCP formulation is exploited to derive a decomposition approach resulting in a max-min problem. The corresponding minimization step can be split into the minimization of independent subproblems, which can also be performed in parallel. Moreover, an algorithm is presented to solve the decoupled OCP, consisting of two layers in a hierarchical optimization structure.
Although the main focus of this work lies on the mathematical aspects and generic implementation of an on-board feasible trajectory optimization algorithm, one of the future goals of this work is to implement and test the created algorithm on board EAGLE. EAGLE is a vertical take-off and landing vehicle designed as a platform for testing and demonstrating new GNC algorithms related to soft landing, smooth ascent and hovering. It provides dynamics, sensors and actuators similar to those of a typical landing vehicle. Hence it can be used to test new GNC solutions, e.g. for space exploration missions, in a realistic environment. The EAGLE system mainly consists of the main structure housing all relevant sub-systems, such as sensors, the on-board computer and the power supply. Three legs are attached to this housing. The actuation mainly consists of a jet engine generating the main thrust. Its magnitude is controlled by the fuel flow into the engine's burning chamber, while the thrust direction is controlled via two vanes aligned perpendicularly below the engine. These vanes deflect the thrust for maneuvers in the pitch and yaw directions. For roll control, a cold-gas reaction control system is applied, with actuators placed on the two lever arms of EAGLE. These actuators are electro-magnetic on/off valves which control the ejection of the pressurized gas, and thus they control the torque acting along the roll direction .
Our goal in the first example is to reduce the impact of wildfires, and we consider the following situation that fits the framework of MIPDECO. In a forest close to a populated area with a firefighting department, a wildfire in its early stages is reported. Leaving it burning uncontrolled might not only endanger the local population and their properties but also lead to a major hazard for the whole region. It is therefore of utmost importance that the firefighters plan their response in an optimal way. However, the safety of the units taking part in the operation must be ensured. While most of the forest cannot be crossed with heavy equipment, there is a road network inside the forest that can be used for firefighting operations. All movements are restricted to this road network. In addition, no movement should take place on roads leading through or too close to burning territory, since this might endanger the firefighters themselves. Also, the available resources for controlling the fire (water, equipment, and manpower) are limited; therefore an optimal resource allocation and proper scheduling might make the difference between getting the fire under control and a major disaster.
Newton’s ‘Principia’ (1687) not only initiated a scientific revolution in physics, but also had a substantial impact on other fields of study. While Newton is (disputably) quoted as saying that he “could calculate the motions of erratic bodies, but not the madness of a multitude” (Francis, 1850, p. 142), modern economics endeavored to imitate the methodology of the natural sciences. “Newton’s success in discovering the natural laws of motion” (summarized in section 3) inspired the search for “general laws of economics” (Hetherington, 1983, p. 497). The “Newtonian method” of deducing several phenomena from certain primary principles (Redman, 1993, pp. 211–5) was applied by classical economist Adam Smith (1759; 1776) “first to ethics and then to economics” (Blaug, 1992, p. 57).