After solving the nonlinear programming model with the described approach, an a posteriori verification of the numerical results is now carried out. In this step we check the necessary optimality conditions of optimal control in order to verify the candidate optimality of the continuous problem via the approximate discrete optimal solution. Optimality is checked by means of Pontryagin's minimum principle, so the numerical results are compared to an analytical optimal control law. Therefore we first have to define the Hamiltonian function: Definition 1 (The Hamiltonian). Let x be the state variable and u the control variable of an optimal control problem. The integrand of the corresponding cost functional
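The definition above breaks off; as a hedged sketch in generic notation (which need not match the paper's own symbols), for dynamics x' = f(x, u, t) and cost integrand f_0, the Hamiltonian and the minimum principle used in the verification step read:

```latex
H(x, u, \lambda, t) = f_0(x, u, t) + \lambda^{\top} f(x, u, t),
\qquad
u^*(t) \in \operatorname*{arg\,min}_{u \in U} H\bigl(x^*(t), u, \lambda(t), t\bigr),
```

with the adjoint \(\lambda\) satisfying \(\dot{\lambda} = -\partial H / \partial x\). The a posteriori check then compares the computed discrete control against the minimizer of \(H\) along the computed trajectory.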
Gugat has presented analytical solutions for various optimal boundary control problems with the wave equation as one of the constraints. In contrast, we are going to solve these problems numerically in order to show the order of approximation of their optimal solutions. In doing so, we want to advocate a numerical approach for the solution of PDE-constrained optimal control problems when hyperbolic equations are involved. The method of choice proposed here is either a full discretization method, in the case of small-size problems, or the vertical method of lines, in the case of medium-size problems. For large-size problems, only model reduction methods such as proper orthogonal decomposition may today offer a chance of solution; see, e.g., Hinze, Volkwein. Concerning small and medium-size problems, appropriate difference methods for the spatial discretization, resp. semidiscretization, must be applied in order to approximate the hyperbolic equation correctly. Clearly, both approaches belong to the class first discretize, then optimize. They possess the advantage that the full power of methods for NLPs and ODE-constrained optimal control problems can be used, including their possibilities for a numerical sensitivity analysis and therefore for real-time control purposes; see, e.g., Büskens. The latter is a must if optimal solutions are to be applied in practice. No doubt the approach first optimize, then discretize may generally give more theoretical insight and safety, but as in ODE-constrained optimization, its advantage does not pay for its drawbacks when complicated real-life applications are to be treated.

2. Optimal Control Problems
Model Predictive Control and compute a sub-optimal control consisting of optimal controls for sub-problems over much smaller time horizons of only 4 time steps, cf. Chapter A.4. Moreover, since the chosen discretization of 101×101 grid points in space still leads to fairly high computation times, only a semi-implicit Euler scheme for solving the discrete systems is applied. However, the elapsed time of about 13.42 hours for computing an optimal control ū is still comparatively large. Figure 1.12 illustrates the computed (sub-)optimal control ū with the associated activator state ȳ at various times t. As shown, the control task is satisfied and the objective functional reads f(ū, ȳ) = 9.899 · 10⁻³. The reason for the high amplitudes of the control in the outer circle-shaped region of the support function c_Q is that the profile of the given desired trajectory is the one of
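The receding-horizon idea behind the computation above can be sketched in a few lines: at each step a small optimal control sub-problem over a short horizon (here N = 4 steps, as in the text) is solved, only the first control is applied, and the process repeats. The scalar linear system and the weights below are illustrative assumptions, not the reaction–diffusion model of the text.

```python
import numpy as np

a, b = 1.1, 0.5   # assumed scalar linear dynamics x+ = a*x + b*u
N = 4             # sub-horizon of 4 time steps, as in the text
lam = 0.1         # control penalty weight (assumed)

def solve_subproblem(x0):
    """Minimize sum_k x_k^2 + lam*u_k^2 over N steps via the normal equations."""
    # Stacked dynamics: x = f + G u with x_k = a^k x0 + sum_j a^(k-1-j) b u_j
    f = np.array([a ** k * x0 for k in range(1, N + 1)])
    G = np.zeros((N, N))
    for k in range(N):
        for j in range(k + 1):
            G[k, j] = a ** (k - j) * b
    return np.linalg.solve(G.T @ G + lam * np.eye(N), -G.T @ f)

x, traj = 1.0, [1.0]
for _ in range(20):
    u = solve_subproblem(x)
    x = a * x + b * u[0]   # apply only the first control, then re-solve
    traj.append(x)

print(f"state after 20 MPC steps: {traj[-1]:.2e}")
```

Even though each sub-problem looks only 4 steps ahead, the closed loop stabilizes the (unstable) system, which is the essential mechanism that makes the short-horizon decomposition viable.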
Chapter 1 tries to give an overview of the governing equations. It provides some functional analytic background material such as function spaces and weak solution concepts. Each control action requires a reaction of the flow. Though in real life this reaction should be unique, the mathematical theory does not yet provide such a result for three-dimensional problems with data of low regularity. In the two-dimensional case this reaction is unique, and the mapping control ↦ velocity field can be studied. It turns out that this mapping is even twice Fréchet differentiable, which enables us to use well-known Banach space programming techniques later on. The contribution of the first chapter is the study of the linearized and adjoint state equations in L^p-spaces, a topic that was not recognized in optimal control of the
Motivated by this observation, we will study the effect of perturbations and discretization errors near the end of the optimization horizon on the initial part of the control. It will turn out that, for linear quadratic optimal control problems, their influence decays exponentially in time. Thus, they are indeed negligible if the horizon is long enough.
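The decay claim can be illustrated numerically for a scalar linear quadratic problem: perturb the terminal cost of a finite-horizon LQ problem and observe that the effect on the initial feedback gain shrinks rapidly as the horizon grows. The system and weight values below are assumptions for illustration only.

```python
import numpy as np

def initial_gain(N, p_terminal, a=1.2, b=1.0, q=1.0, r=1.0):
    """Backward Riccati recursion for a scalar LQ problem;
    returns the feedback gain applied at the initial time."""
    p = p_terminal
    for _ in range(N):
        k = a * b * p / (r + b * b * p)   # stage gain
        p = q + a * a * p - a * b * p * k # Riccati update
    return k

# Perturb the terminal weight (0 vs. 10) and compare the initial gains
# for growing horizon lengths N.
diffs = {N: abs(initial_gain(N, 0.0) - initial_gain(N, 10.0))
         for N in (5, 10, 20)}
print(diffs)
```

The differences shrink geometrically with N, mirroring the statement that end-of-horizon perturbations have exponentially small influence on the initial part of the control.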
In Figure 3, we present the optimal voltages and the associated optimal electrical currents for the tube with coil. The optimal control voltage u*_POD starts at the maximal possible voltage of 10 V and after some time reaches the value of 6 V that holds the current at 0.6 A. In the figure, the computed optimal voltage u*_POD is compared with ℙ_[α,β](−q*_POD/λ_u). For the ratio λ_u/λ_Q = 10⁻⁵, both functions graphically coincide; hence u*_POD satisfies the optimality conditions very well ("optimality test"). For 10⁻⁶ and smaller ratios, the optimality test is less satisfactory. Here, λ_u/λ_Q seems to have reached the precision of solving the POD-reduced differential equation.
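The "optimality test" rests on the standard projection formula for box-constrained problems with a Tikhonov term (λ_u/2)·u²: at the optimum, u* = ℙ_[α,β](−q*/λ_u) pointwise. The sketch below, with synthetic data in place of the heating model's adjoint, solves such a pointwise problem by projected gradient and then runs the test a posteriori; bounds, weight, and the adjoint stand-in are assumptions.

```python
import numpy as np

alpha, beta, lam = 0.0, 10.0, 1e-3   # assumed box bounds and Tikhonov weight
t = np.linspace(0.0, 1.0, 200)
q = -5e-3 * np.sin(2 * np.pi * t)    # synthetic stand-in for the adjoint q*

def project(v):
    """Pointwise projection P_[alpha, beta]."""
    return np.clip(v, alpha, beta)

# Solve min_u (lam/2)*u^2 + q*u over [alpha, beta] by projected gradient...
u = np.zeros_like(t)
step = 0.5 / lam
for _ in range(60):
    u = project(u - step * (lam * u + q))

# ...then run the optimality test: u should coincide with P(-q/lam).
mismatch = np.max(np.abs(u - project(-q / lam)))
print(f"optimality test: max |u - P(-q/lam)| = {mismatch:.1e}")
```

A graphical coincidence of the two curves, as reported in the excerpt, corresponds to this mismatch being at the level of the solver's precision.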
The present work deals with the application of Optimal Control Theory (OCT) to open quantum systems with a particular focus on solid-state quantum information processing devices. The latter are typically nanoscale structures that have to be manufactured, prepared, controlled and measured with an extraordinary degree of precision so that their quantum properties can be harnessed. Furthermore, they usually interact with some sort of solid-state environment that may adversely affect the performance of these devices. Therefore, isolation of the quantum information processor from its environment poses an important problem. This is a somewhat contradictory requirement, since unwanted interactions can affect the quantum system through the same channels that are used to control the qubit. Closing these channels would reduce the sensitivity to environmental interactions but would also result in a loss of controllability of the processor.
To show how such a model would work, we give two examples of its functioning. First we construct the optimal control problem by minimizing a loss function. The problem formulated by this approach is a non-linear one. As such, it can be solved using Pontryagin's principle, whose theoretical background will be briefly discussed later. Since solving a non-linear control problem with Pontryagin's principle can be very difficult, sometimes even intractable, we suggest the use of fuzzy control for this purpose. Fuzzy control is a newer control approach which may succeed where traditional control methods fail. An overview of possible applications in technical and other areas can be found in Driankov et al. (1996) and Novak (2000). The application of fuzzy control in economics can be found in the work of Kukal and Tran Van Quang (2013).
Is the Fed capable of fulfilling these requirements? Shojai and Feiger (2010), in their article Economists' Hubris – The Case for Risk Management, write that the tools currently at the disposal of the world's major global financial institutions are not adequate to help them prevent such crises in the future, and that the current structure of these institutions makes it literally impossible to avoid the kind of failures that we have witnessed. I evaluate what Greenspan has learned and develop the Stochastic Optimal Control approach that should be used to implement the D-F bill.
Abstract: Different strains of influenza viruses spread in human populations during every epidemic season. As the size of an infected population increases, the virus can mutate and grow in strength. The traditional epidemic SIR model does not capture virus mutations and, hence, is not sufficient to study epidemics where the virus mutates at the same time as it spreads. In this work, we establish a novel framework to study the epidemic process with mutations of influenza viruses, which couples the SIR model with the replicator dynamics used for describing virus mutations. We formulate an optimal control problem to study optimal strategies for medical treatment and quarantine decisions. We obtain structural results for the optimal strategies and use numerical examples to corroborate our results.
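The coupling described in the abstract can be sketched as follows: an SIR epidemic whose effective infection rate depends on the current mix of two virus strains, while the strain mix itself evolves by replicator dynamics with susceptible-dependent fitness. This is an illustrative stand-in, not the paper's exact model; all parameter values are assumptions.

```python
import numpy as np

def simulate(T=200.0, dt=0.1):
    S, I, R = 0.99, 0.01, 0.0      # population fractions
    x = np.array([0.5, 0.5])       # strain frequencies
    beta = np.array([0.3, 0.5])    # assumed per-strain transmission rates
    gamma = 0.1                    # assumed recovery rate
    for _ in range(int(T / dt)):
        b = float(beta @ x)        # effective transmission rate of the mix
        dS = -b * S * I
        dI = b * S * I - gamma * I
        dR = gamma * I
        # replicator dynamics: fitness of strain i taken as beta_i * S
        fitness = beta * S
        dx = x * (fitness - float(fitness @ x))
        S += dt * dS; I += dt * dI; R += dt * dR
        x = np.clip(x + dt * dx, 0.0, 1.0)
        x /= x.sum()               # keep frequencies on the simplex
    return S, I, R, x

S, I, R, x = simulate()
print(f"final susceptible fraction: {S:.3f}, "
      f"dominant strain share: {x.max():.3f}")
```

With the more transmissible strain assigned the higher fitness, the replicator dynamics drive its frequency toward one over the course of the epidemic, which is exactly the mutation-while-spreading effect the plain SIR model cannot represent.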
Thermal ablation treatments in cancer therapy heat a target volume enough to cause it to burn, while leaving healthy tissue and neighboring sensitive structures undamaged. Performing such treatments involves the placement of one or multiple heat sources within the target volume and the power control of each heat source. Since the final placement of the heat sources and tissue parameter estimates are only available during treatment, real-time updates of the heat source powers are very relevant. Optimizing the power of each heat source involves the solution of optimal control problems governed by parametrized elliptic partial differential equations with non-affine source terms. We employ the reduced basis method to derive a reliable and real-time-efficient surrogate model. We work on extending  and  in order to derive error bounds for the ensuing problem.
In many computations involving the discretization of (partial) differential equations or variational inequalities, one is interested in the accurate evaluation of some target quantity. This might be the value of the solution of a partial differential equation (PDE) at some reference point in the domain of interest, a physically relevant quantity such as the drag in airfoil design, or, in optimal control, the value of the objective function at the solution of the underlying minimization problem. Highly accurate numerical evaluations of these targets can be guaranteed by using uniform meshes with a small mesh size h. This, however, usually represents a significant computational challenge due to the resulting large scale of the discrete problems. Therefore, one seeks to adaptively refine the meshes with the goal of achieving a desired accuracy in the evaluation of the output quantity of interest while keeping the computational cost as small as possible.
by an electromagnetic coil to which currents at various frequencies and time-varying amplitudes are applied. The amplitudes are considered as the controls, and the objective is to heat the workpiece up to a desired temperature profile at the final time of the heating process. The workpiece is then quenched, which, due to a phase transition in the crystallographic structure of the steel, leads to a hardening of its surface. For the inductive heating process, the state equations represent a coupled system of nonlinear partial differential equations consisting of the eddy currents equations in the coil, the workpiece, and the surrounding air, and a heat equation in the workpiece. The nonlinearity stems from the temperature-dependent nonlinear material laws for steel with regard to both its electromagnetic and thermal behavior. Following the principle 'Discretize first, then optimize', we consider a semi-discretization in time by the implicit Euler scheme, which leads to a discrete-time optimal control problem. We prove the existence of a minimizer for the discrete-time optimal control problem and derive the first-order necessary optimality conditions.
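The implicit Euler semi-discretization that produces the discrete-time problem can be sketched on a simplified linear stand-in: a 1D heat equation θ_t = ν θ_xx + χ(x) u(t), where each time step solves (I − Δt ν A) θⁿ⁺¹ = θⁿ + Δt χ uⁿ. The diffusivity ν, the source profile χ, and the grid are assumptions; the text's coupled nonlinear eddy-current/heat system is far richer.

```python
import numpy as np

n, nu, dt = 50, 1e-2, 0.1
h = 1.0 / (n + 1)
# Finite-difference Dirichlet Laplacian on an interior grid of n points
A = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
# Assumed control shape function: heating localized in the middle
chi = np.where(np.abs(np.linspace(h, 1 - h, n) - 0.5) < 0.2, 1.0, 0.0)
M = np.eye(n) - dt * nu * A      # implicit Euler system matrix

def step(theta, u):
    """One implicit Euler step: solve (I - dt*nu*A) theta_new = theta + dt*chi*u."""
    return np.linalg.solve(M, theta + dt * chi * u)

theta = np.zeros(n)
for _ in range(100):
    theta = step(theta, u=1.0)   # constant heating control for illustration

print(f"max temperature after heating: {theta.max():.3f}")
```

Because the scheme is implicit, the large time step poses no stability issue, and stacking the controls uⁿ over all time steps is what turns the PDE problem into the finite-dimensional discrete-time optimal control problem of the text.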
stiff problems and semi-discretizations of partial differential equations. Due to their high stage order, they do not suffer from order reduction and work especially well for high accuracy requirements. Our main motivation is to clarify the potential of implicit peer methods to overcome the deficiencies of linear multistep methods when applied to optimal control problems. We construct peer methods with high adjoint consistency in the interior of the integration interval and show that the well-known backward differentiation formulas are optimal with respect to the achievable order. We clearly identify that inappropriate adjoint initialization still remains a crucial issue for implicit peer methods, which restricts the overall consistency order to one; this also holds for linear multistep methods. Given the low consistency order of the discrete adjoints and therefore of the whole discretization, we have to conclude that implicit peer methods are not suitable for a first-discretize-then-optimize approach. The content of Chapter 4 has been published in .
In this work, we extend the results from [Lub15] to optimal control of static contact problems in non-linear hyperelasticity. Our aim is to establish basic results concerning the existence of optimal solutions and to analyze a regularization scheme for their numerical computation.
Our paper is structured as follows: In the following section, we introduce the setting for a hyperelastic contact problem, which is already a challenging problem by itself. The existence of solutions to hyperelastic problems was shown by John Ball in the context of polyconvexity [Bal77] and extended to the case of contact in [CN85]. Since the techniques developed there are crucial for the analysis of optimal control problems, we give a short recapitulation of this topic. Additionally, we introduce the so-called normal compliance approach [OM85, MO87] as a regularization method for the contact constraints. From this, we obtain a regularized problem with relaxed constraints. Existence results for this problem are discussed as well. We also examine the convergence of solutions of the regularized problem to solutions of the original problem.
Keywords: decoupling points, production-inventory management, optimal control theory, deteriorating items, time-varying demand, supply chain management
With the emergence of a business era that embraces "change" as one of its major characteristics (Agarwal, Shankar & Tiwari, 2006), success is becoming increasingly difficult for many manufacturers. Since customer needs and preferences deeply influence the supply chain's inner workings, such as product functionality, quality, speed of production, timeliness of deliveries and so on, manufacturers are no longer the sole drivers of the supply chain. A shift from a "push" to a "pull" environment is well on its way. Hence, a very practical problem arises: how to take customer demand into consideration when designing an enterprise's inner workings, such as production planning and inventory policy, and how to schedule these operational decisions to adjust to demand changes.
Recently, there has been increasing interest in more advanced robotic assignments that involve solving multiple tasks or visiting several locations in a complex environment. This problem is addressed by assuming an a priori given decomposition into high-level planning and low-level control, thus focusing mainly on the purely discrete path planning task. High-level specifications can often be captured effectively by temporal logic formulas, which allow for employing model checking and automata-based game techniques to obtain a solution [122–124]. Finding the optimal path for a discrete transition system with a temporal logic specification has been studied, e.g., in . The closely related Vehicle Routing Problem (VRP) , where the goal is to plan optimal routes for vehicles that have to service customer requests located at different spatial sites with temporal constraints, has also received extensive attention from several research communities. The VRP represents, in general, an instance of the well-known NP-hard Traveling Salesperson Problem (TSP), for which many effective heuristics providing satisfactory solutions to moderately sized problems have been proposed . VRPs combined with temporal logic specifications have been studied in , where the solution is acquired by solving a Mixed-Integer Linear Program (MILP) that incorporates the specification as additional optimization constraints, and in  by an automata-based approach. The hybrid Optimal Control Problem (OCP) of motion planning with continuous dynamics and a discrete specification has been addressed in, e.g., [116, 130]. An LTL specification can also be encoded directly into a single mixed-integer problem to obtain the optimal control for mixed logical dynamical  or differentially flat systems . A multi-objective optimal planning problem was solved for a team of robots with a finite LTL mission specification and support resource constraints .
Hybrid optimal exploration and control problems with discrete high-level specifications [33, 34, 134] have also been investigated.
Note that the question of optimal sampling of measurements is to a large extent independent of most of the aforementioned approaches to NMPC. Our approach is complementary in the sense that, e.g., the numerical structure exploitation, the treatment of systematic disturbances, the treatment of robustness, the formulation of dual control objective functions, and the use of scenario trees or set-based approaches can all be combined with an adaptive measurement grid. This also applies to the nature of the underlying control task, where many extensions are possible, e.g., multi-stage processes, mixed path and control constraints, complicated boundary conditions and so on. In the interest of a clear focus on our main results, we choose a simple setting that allows us to highlight the additional value of measurement grid adaptivity, and only show, by way of example, the impact of one possible extension of our algorithm. We take the uncertainty with respect to model parameters into account on the level of optimal control and of experimental design by calculating robust optimal controls and samplings, compare [42, 43], and compare our new algorithm based on nominal solutions with the new algorithm with robust solutions.
Abstract We analyze the maximal output power that can be obtained from a vibration energy harvester. While recent work focused on the use of mechanical nonlinearities and on determining the optimal resistive load at steady-state operation of the transducers to increase the extractable power, we propose an optimal control approach. We consider the open-circuit stiffness and the electrical time constant as control functions of linear two-port harvesters. We provide an analysis of optimal controls by means of Pontryagin's maximum principle. By making use of geometric methods from optimal control theory, we are able to prove the bang–bang property of optimal controls. Numerical results illustrate our theoretical analysis and show potential for more than 200% improvement of harvested power compared to that of fixed controls.
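The bang–bang property typically follows from a control-affine structure of the Hamiltonian. In generic notation (a sketch, not the harvester model's exact Hamiltonian), for dynamics x' = f(x) + g(x) u with bounded controls u ∈ [u_min, u_max]:

```latex
H(x, p, u) = p^{\top}\bigl(f(x) + g(x)\,u\bigr), \qquad
\varphi(t) = p(t)^{\top} g\bigl(x(t)\bigr), \qquad
u^*(t) =
\begin{cases}
u_{\min}, & \varphi(t) > 0,\\
u_{\max}, & \varphi(t) < 0,
\end{cases}
```

so that, whenever the switching function \(\varphi\) has only isolated zeros, the optimal control jumps between its bounds, which is the bang–bang behavior established in the paper.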
We present a general framework for the design of smoothers for the multigrid solution of elliptic optimal control problems. These are constrained minimization problems governed by elliptic partial differential equations [2, 14, 19]. Specifically, we address the solution of optimal control problems by means of the corresponding optimality systems representing the first-order necessary conditions for a minimum, and assume that these critical points also satisfy the second-order necessary conditions for (local) minima; for details regarding these conditions in a multigrid context see . For a general and detailed description of multigrid methods see, e.g., .