Abstract:
The goal of this thesis is to compare three state-of-the-art optimization methods for nonlinear model predictive control. The analyzed methods consist of two recently developed constrained differential dynamic programming algorithms, based on the alternating direction method of multipliers and on interior point methods, respectively. The third algorithm is the well-studied direct single shooting approach. The thesis includes mathematical background on unconstrained optimization, constrained optimization, and optimal control. The unconstrained optimization part of the background focuses on trust region methods and the Levenberg--Marquardt method. In the constrained optimization part we derive the Karush--Kuhn--Tucker optimality conditions and present penalty and barrier methods. The background ends with the formulation and solution of an optimal control problem through various approaches, as well as an overview of receding horizon control. The analyzed methods are then re-derived from scratch following the concepts defined in the background. To test the feasibility of the algorithms, three control tasks are defined. The first task is the swing-up control of an inverted pendulum from the initial state, where it points downwards, to the desired state, where it points upwards. In the second scenario, complexity is added to the inverted pendulum by simulating the inherently unstable cart-pole system, where both the pendulum and the position of the cart must be controlled. The third task consists of the velocity control of a vehicle with obstacle avoidance. For the given control tasks, both trajectory optimization and model predictive control simulation scenarios are considered. The algorithms are compared based on the feasibility of the solution, the steady-state error, and the optimization of a minimum-energy criterion. A discussion of each simulation result is then provided.