MA3L2 Content
Content: The solution of optimisation problems is at the core of many applications, from decision-making to engineering design. In these contexts, one often wants to optimise an outcome of a process (e.g. minimise the cost of production) subject to constraints on the process variables. In today’s data-driven world, one also often wants to find the best fit between existing models and available data. Both costs and constraints are frequently nonlinear functions of the underlying variables. This module will introduce key concepts in the mathematical analysis of (continuous) optimisation problems, starting with classical methods and their convergence properties, both for unconstrained and constrained problems, including derivative-free approaches when gradients are not available, and finishing with more modern methods for the solution of these nonlinear problems, especially when they have a large number of variables.
Outline Syllabus:
This is an indicative module outline only to give an indication of the sort of topics that may be covered. Actual sessions held may differ.
- Motivation. Why we need optimisation, how to recognise solutions (basic notions of critical points and their classification) and quantify convergence, what can go wrong, why we need sophisticated algorithms (large data, non-smooth functions, local vs global minima).
- Introduction to unconstrained optimisation. Optimality conditions. Choose from: line search methods, steepest descent methods, Newton and quasi-Newton methods, trust region methods, conjugate gradients. Convergence rates. Global vs local optimisation; convex vs non-convex problems. (A minimal line-search sketch follows this list.)
- Derivative-free optimisation. Examples of algorithms, e.g. Nelder-Mead and other recent developments (an illustrative call follows this list).
- Constrained optimisation. Basic notions of convex sets and functions, tangent cones, cone of linearised feasible directions; types and qualification of constraints; necessary and sufficient optimality conditions (first and second order, KKT conditions); Farkas' lemma; duality theory; linear programming; quadratic programming; sequential quadratic programming (SQP).
- Numerical methods for constrained optimisation. Quadratic penalisation, augmented Lagrangian, and interior point/barrier methods (the quadratic penalty approach is sketched after this list).
- Extra topics. Choose from: Particle Swarm Optimisation, Simulated Annealing, Consensus-Based Optimisation, Stochastic Gradient Descent.
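As a flavour of the line-search methods named in the unconstrained optimisation item, here is a minimal Python sketch of steepest descent with Armijo backtracking, tried on the Rosenbrock function (a standard non-convex benchmark). The function names, parameter values, and stopping tolerance are illustrative choices, not the module's prescribed implementation.

```python
import numpy as np

def steepest_descent(f, grad, x0, alpha0=1.0, c=1e-4, rho=0.5,
                     tol=1e-6, max_iter=10_000):
    """Steepest descent with Armijo backtracking line search.

    Stops when the gradient norm drops below `tol`, a first-order
    optimality test: at a local minimiser the gradient vanishes.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = -g                                   # steepest-descent direction
        alpha = alpha0
        # Backtrack until the Armijo sufficient-decrease condition holds:
        #   f(x + alpha*p) <= f(x) + c * alpha * grad(x)^T p
        while f(x + alpha * p) > f(x) + c * alpha * g @ p:
            alpha *= rho
        x = x + alpha * p
    return x

# Rosenbrock function: a classic non-convex test problem, minimiser at (1, 1).
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([
    -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
    200 * (x[1] - x[0]**2),
])
print(steepest_descent(f, grad, x0=[-1.2, 1.0]))  # slowly approaches [1., 1.]
```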
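For the derivative-free item, an off-the-shelf Nelder-Mead implementation is available in SciPy; the call below is purely illustrative of how such a method is used when gradients are unavailable.

```python
from scipy.optimize import minimize

# Derivative-free minimisation of the same Rosenbrock function: Nelder-Mead
# uses only function values, no gradient information.
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
res = minimize(rosen, x0=[-1.2, 1.0], method="Nelder-Mead")
print(res.x)  # close to [1., 1.]
```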
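Finally, a minimal sketch of the quadratic penalty approach from the numerical-methods item: an equality constraint c(x) = 0 is folded into the objective as f(x) + (mu/2)||c(x)||^2, and the penalty parameter mu is increased across outer iterations. Reusing SciPy's BFGS as the inner unconstrained solver is an assumption made for brevity, not the module's prescription.

```python
import numpy as np
from scipy.optimize import minimize

def quadratic_penalty(f, c, x0, mu0=1.0, growth=10.0, outer_iters=6):
    """Quadratic penalty method for: minimise f(x) subject to c(x) = 0.

    Each outer iteration minimises the unconstrained penalised objective
        Q(x; mu) = f(x) + (mu / 2) * ||c(x)||^2
    and then increases mu, pushing the iterates towards feasibility.
    """
    x, mu = np.asarray(x0, dtype=float), mu0
    for _ in range(outer_iters):
        Q = lambda x, mu=mu: f(x) + 0.5 * mu * np.sum(np.atleast_1d(c(x)) ** 2)
        x = minimize(Q, x, method="BFGS").x   # inner unconstrained solve
        mu *= growth                          # tighten the penalty
    return x

# Example: minimise x + y subject to x^2 + y^2 = 2; the minimiser is (-1, -1).
f = lambda x: x[0] + x[1]
c = lambda x: x[0]**2 + x[1]**2 - 2.0
print(quadratic_penalty(f, c, x0=[2.0, 0.0]))  # approaches [-1., -1.]
```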
Aims: This module aims to introduce mathematics students to continuous optimisation problems and to their solution when one exists, or, when it does not, to common bottlenecks and ways of tackling them. It will focus on several classes of optimisation methods, studying them with both analytical (e.g. convergence properties) and numerical approaches, and will finish with an overview of modern research problems in the field.
Learning Outcomes:
By the end of the module, students should be able to:
- Understand critical points of multivariable functions.
- Understand convex and non-convex problems and the associated difficulties in the context of optimisation.
- Apply several techniques to solve nonlinear/non-convex optimisation problems.
- Derive convergence rates or guarantees for a number of optimisation algorithms.
- Compare different optimisation techniques and choose the appropriate technique for example problems.
- Apply appropriate numerical methods to solve optimisation problems.
Indicative reading list:
Nocedal, J. and Wright, S.J., 1999. Numerical Optimization. New York: Springer. (Main reference.)
Boyd, S.P. and Vandenberghe, L., 2004. Convex Optimization. Cambridge: Cambridge University Press.
Conn, A.R., Scheinberg, K. and Vicente, L.N., 2009. Introduction to Derivative-Free Optimization. Philadelphia: Society for Industrial and Applied Mathematics.