
Day 1 - 15th of May

10:30 Registration and welcome

11:00 – Dante Kalise: Optimal stabilization for global optimization: controlling interacting particle systems

This talk explores the control of interacting particle systems to achieve desired stationary configurations, bridging the gap between microscopic particle dynamics and macroscopic mean-field descriptions. We focus on consensus dynamics in Consensus-Based Optimization (CBO) and introduce a controlled CBO framework that incorporates a feedback control term derived from the numerical solution of a Hamilton-Jacobi-Bellman equation. This control guides particles towards the global minimizer of the objective function, enhancing the performance of standard CBO methods.

We establish the well-posedness of the controlled CBO system and demonstrate its improved performance through numerical simulations. The controlled CBO framework offers a powerful tool for solving challenging optimization problems across various domains, such as machine learning, engineering design, and scientific computing.
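As a rough illustration of the type of dynamics being controlled, the sketch below implements a plain CBO iteration in Python. The Rastrigin objective, the parameter values, and the optional feedback term (left as a zero placeholder) are illustrative assumptions only; in the controlled CBO framework of the talk, the feedback comes from a numerical Hamilton-Jacobi-Bellman solve.

```python
# Minimal sketch of a plain CBO iteration (NumPy); objective, parameters and
# the zero-placeholder control are illustrative assumptions, not the
# speaker's implementation.
import numpy as np

def rastrigin(x):
    # x has shape (N, d); returns one objective value per particle
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

def cbo_step(X, f, lam=1.0, sigma=0.7, alpha=30.0, dt=0.01, u=None):
    """One CBO step for the particle ensemble X of shape (N, d)."""
    w = np.exp(-alpha * (f(X) - f(X).min()))      # Gibbs weights (shifted for stability)
    v = (w[:, None] * X).sum(axis=0) / w.sum()    # weighted consensus point
    drift = -lam * (X - v) * dt                   # relaxation towards the consensus
    noise = sigma * (X - v) * np.sqrt(dt) * np.random.randn(*X.shape)
    ctrl = 0.0 if u is None else dt * u(X)        # placeholder for an HJB-based feedback
    return X + drift + noise + ctrl

np.random.seed(0)
X = np.random.uniform(-3, 3, size=(200, 2))
for _ in range(500):
    X = cbo_step(X, rastrigin)
print("best objective value in the ensemble:", rastrigin(X).min())
```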

 

11:30 – Chiara Segala: Kernel methods for interacting particle systems: mean-field limit and control surrogate modelling

Interacting particle systems (IPS) are an important class of dynamical systems arising in domains such as biology, physics, sociology, and engineering. In many applications these systems can be very large, which makes their simulation and control, as well as related numerical tasks, challenging. Mean-field methods are an established approach to tackling large-scale IPS by considering only the distribution or density of all particles over the state space.

Very recently, the mean-field limit has been rigorously investigated in the context of kernel methods and their application to statistical machine learning, leading to the notion of the mean-field limit of a kernel and of its reproducing kernel Hilbert space (RKHS). These developments open the path to kernel-based methods for large-scale IPS, including learning and control of such systems. In this talk, numerical experiments on kernel methods for IPS will be shown, with a particular focus on control and learning problems from a large-scale perspective.
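As a toy illustration of kernel-based surrogate modelling for particle dynamics (not the mean-field RKHS construction of the talk), the following sketch fits a Gaussian-kernel ridge regressor to per-particle drifts of a simple attraction-to-the-mean dynamics; the toy dynamics, kernel width and regularization are assumptions made for the example.

```python
# Kernel ridge regression surrogate for a toy particle drift; the toy
# dynamics (attraction to the empirical mean), kernel width and
# regularization are illustrative assumptions.
import numpy as np

def gauss_kernel(A, B, ell=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell**2))

np.random.seed(1)
ensemble = np.random.randn(300, 2)      # reference particle ensemble in R^2
m = ensemble.mean(axis=0)               # empirical mean (mean-field proxy)

X_train = ensemble
Y_train = m - X_train                   # per-particle drift towards the mean

K = gauss_kernel(X_train, X_train)
coeff = np.linalg.solve(K + 1e-6 * np.eye(len(K)), Y_train)   # ridge solve

def surrogate_drift(X_new):
    return gauss_kernel(X_new, X_train) @ coeff

X_test = np.random.randn(5, 2)
print("max surrogate error:", np.abs(surrogate_drift(X_test) - (m - X_test)).max())
```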

 

12:00 – Sara Bicego: Finding steady states via deflation and controlled stabilization for the Fokker-Planck equation

In the context of interacting particle systems, collective behavior is often described at the mean-field level by the time evolution of a probability density governed by Fokker-Planck-type equations. The system's emergent patterns, arising from the underlying microscopic interactions, are represented as stationary states of the evolutionary partial differential equation. A common feature of these systems is the coexistence of several steady configurations and non-trivial phase transitions. The nature and number of steady states are linked to two key parameters, the noise amplitude and the interaction strength, which measure the interplay between diffusivity and drift, as well as to the modeling of the forces acting on the particle ensemble.

To capture the different steady states of the Fokker-Planck equation, a spectral Galerkin approximation is combined with a deflated Newton's method, which factors out roots as they are identified. Comparison with existing asymptotic analysis results allows the solutions to be verified and classified as stable or unstable configurations. Once the steady states are found, an optimal control problem is designed to stabilize the system at a desired unstable steady state. The control action is computed via iterated open-loop solves in a receding-horizon fashion.
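The deflation mechanism can be illustrated on a scalar toy problem: once a root has been found, the residual is multiplied by a factor that blows up at that root, so Newton's method is pushed towards a different solution. In the sketch below the test function, deflation exponent and shift are illustrative choices; the talk applies the same idea to the spectral Galerkin discretization of the Fokker-Planck equation, not to a scalar equation.

```python
# Deflated Newton on a scalar toy problem; the test function, deflation
# exponent p and shift are illustrative choices only.
import numpy as np

def deflation_factor(x, roots, p=2.0, shift=1.0):
    """Shifted deflation factor prod_i (1/|x - r_i|^p + shift)."""
    return np.prod([1.0 / abs(x - r) ** p + shift for r in roots]) if roots else 1.0

def deflated_newton(f, x0, roots, tol=1e-8, maxit=100, h=1e-7):
    """Newton's method on the deflated residual g(x) = eta(x) * f(x),
    using a forward-difference approximation of g'."""
    x = float(x0)
    for _ in range(maxit):
        g = deflation_factor(x, roots) * f(x)
        dg = (deflation_factor(x + h, roots) * f(x + h) - g) / h
        step = g / dg
        x -= step
        if abs(step) < tol:
            break
    return x

f = lambda x: x**3 - x            # three roots: -1, 0, 1

found = []
for _ in range(3):                # same initial guess every time
    found.append(deflated_newton(f, 0.4, found))
print(sorted(round(r, 6) for r in found))   # typically recovers all three roots
```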

 

12:30 – Lunch break

 

14:30 – Zhengang Zhong: Multi-level optimal control with neural surrogate models

Optimal actuator and control design is studied as a multi-level optimization problem, where an actuator design is evaluated based on the performance of the associated optimal closed loop. Evaluating the optimal closed loop for a given actuator realisation is computationally demanding, so a neural network surrogate is proposed for this task. Replacing the lower level of the optimization hierarchy with neural network surrogates enables fast gradient-based and gradient-free consensus-based optimization methods to determine the optimal actuator design. The effectiveness of the proposed surrogate models and optimization methods is assessed in a test case on optimal actuator location for heat control.
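The two-level structure can be sketched as follows. For brevity, the neural network surrogate is replaced here by a radial-basis-function fit and the expensive lower level by a synthetic closed-loop cost over a one-dimensional actuator location, so only the shape of the bilevel loop is illustrated, not the method of the talk.

```python
# Structural sketch of the bilevel loop: sample an expensive lower level,
# fit a cheap surrogate, optimize the surrogate. The synthetic cost, the
# RBF surrogate (standing in for a neural network) and the 1-D actuator
# location are illustrative assumptions.
import numpy as np

def closed_loop_cost(a):
    """Stand-in for the expensive lower level: the optimal closed-loop cost
    obtained with actuator location a."""
    return (a - 0.3) ** 2 + 0.05 * np.sin(12 * a)

# 1) Evaluate the expensive lower level at a few candidate designs.
a_samples = np.linspace(0.0, 1.0, 15)
j_samples = np.array([closed_loop_cost(a) for a in a_samples])

# 2) Fit a cheap surrogate to these evaluations (RBF interpolation here).
def rbf(x, centers, ell=0.1):
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * ell**2))

weights = np.linalg.solve(rbf(a_samples, a_samples) + 1e-8 * np.eye(len(a_samples)), j_samples)
surrogate = lambda a: rbf(np.atleast_1d(a), a_samples) @ weights

# 3) Run a cheap derivative-free search on the surrogate only.
candidates = np.linspace(0.0, 1.0, 2001)
a_star = candidates[np.argmin(surrogate(candidates))]
print("surrogate optimum:", a_star, "| true cost there:", closed_loop_cost(a_star))
```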

 

15:00 – Alessandro Scagliotti: Adversarial training as minimax optimal control problems

In this talk, we address the adversarial training of neural ODEs from a robust control perspective. Adversarial training is an alternative to classical training via empirical risk minimization and is widely used to enforce reliable outcomes under input perturbations. Neural ODEs allow deep neural networks to be interpreted as discretizations of control systems, unlocking powerful tools from control theory for the development and understanding of machine learning. In this specific case, we formulate adversarial training with perturbed data as a minimax optimal control problem, for which we derive first-order optimality conditions in the form of Pontryagin's Maximum Principle. We provide a novel interpretation of robust training that leads to an alternative weighted technique, which we test on a low-dimensional classification task.
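One way to write such a problem, with illustrative notation (loss function, labels and perturbation budget are assumptions of this sketch, not necessarily the speaker's formulation), is the minimax formulation below, together with the adjoint equation that enters the corresponding Pontryagin-type first-order conditions.

```latex
% Illustrative minimax formulation for a neural ODE \dot x = f(x, \theta);
% the loss \ell, norm and perturbation budget \varepsilon are assumptions.
\min_{\theta}\; \max_{\|\delta_i\|\le\varepsilon}\;
  \frac{1}{N}\sum_{i=1}^{N} \ell\bigl(x_i(T), y_i\bigr)
\quad\text{s.t.}\quad
  \dot x_i(t) = f\bigl(x_i(t), \theta(t)\bigr),\qquad
  x_i(0) = x_i^0 + \delta_i .

% Adjoint (costate) equation entering the first-order optimality conditions:
\dot p_i(t) = -\nabla_x f\bigl(x_i(t), \theta(t)\bigr)^{\!\top} p_i(t),
\qquad p_i(T) = \nabla_x \ell\bigl(x_i(T), y_i\bigr).
```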

 

15:30 – Jan Heiland: Polytopic autoencoders for higher-order series expansions of nonlinear feedback laws

On the way to a computational and general-purpose approach to nonlinear controller design, the approximate embedding of nonlinear models in the class of quasi linear parameter-varying (LPV) systems seems a promising path. In this talk, we illustrate how the embedding and approximation work in general and highlight two recent research efforts on efficient embedding and controller design.

Firstly, we discuss autoencoders that provide low-dimensional parametrizations of states in a polytope (as opposed to a linear space). For nonlinear PDEs, this idea is readily used for low-dimensional linear parameter-varying approximations.

Secondly, we recall how LPV approximations can serve as the basis for efficient nonlinear controller design via series expansions of the solution to the state-dependent Riccati equation.

Then, we adapt a general polytopic autoencoder for control applications and show how it outperforms standard linear approaches for LPV approximations of nonlinear systems, and how the particular architecture enables higher-order series expansions at little extra computational effort.

In a numerical study, we illustrate the procedure and how this combined approach can reliably outperform the standard linear-quadratic design.
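In symbols, the quasi-LPV embedding and the state-dependent Riccati (SDRE) feedback it enables can be sketched as follows; the matrices and the parametrization are generic placeholders rather than the specific systems treated in the talk.

```latex
% Quasi-LPV embedding with a low-dimensional parametrization \rho (e.g. from
% an autoencoder) and the associated state-dependent Riccati feedback;
% A, B, Q, R are generic placeholders.
\dot x = A\bigl(\rho(x)\bigr)\,x + B\,u,
\qquad \rho(x)\in\mathbb{R}^{r},\quad r \ll \dim x,

u(x) = -R^{-1}B^{\top} P\bigl(\rho(x)\bigr)\,x,
\qquad
A(\rho)^{\top}P + P\,A(\rho) - P\,B\,R^{-1}B^{\top}P + Q = 0 .
```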

16:00 – Discussion session (Topic TBC)

 

18:00 onwards – Reception