
Bayesian Optimisation with Multiple Objectives: Agenda

Event Schedule

Monday 27 February, 10am-6pm, Scarman Conference Centre, University of Warwick

09:00: Registration opens; please confirm your arrival for the day at reception

09:30: Arrival and coffee

10:00: Welcome and introductions in Space 43 (first floor)

10:30: Keynote: Sam Daulton: Practical Multi-Objective Bayesian Optimization

In many real-world scenarios, decision makers seek to optimize multiple competing objectives in a sample-efficient fashion. Multi-objective Bayesian optimization is a common approach, but existing methods have often provided limited utility in practice. In this talk, we will examine practical methods for performing multi-objective Bayesian optimization in a variety of common scenarios, including where (i) multiple designs can be evaluated simultaneously in parallel (potentially in large batches), (ii) the selected design is subject to input noise at implementation time (due, for example, to manufacturing tolerances), and (iii) the search space is high-dimensional and high-throughput optimization is desired (such as in optimizing designs for optical displays for augmented reality).

11:15: Discussion

11:30: Tinkle Chugh: Mono-surrogate vs Multi-surrogate in Multi-objective Bayesian Optimisation

Many real-world optimisation problems involve multiple conflicting objectives. In some cases, e.g. engineering applications, the objective functions rely on computationally expensive evaluations. Such problems are usually black-box optimisation problems without any closed form for the objective functions. Bayesian optimisation (BO) can be used to alleviate the computational cost and to find an approximated set of Pareto optimal solutions in as few function evaluations as possible. These methods rely on a Bayesian model as the surrogate (or metamodel) of the objective functions and find promising decision vectors by optimising an acquisition function. In multi-objective optimisation problems, BO aims to find a set of approximated Pareto optimal solutions. This talk will provide an overview of two commonly used approaches in multi-objective BO: mono-surrogate (e.g. using a scalarising function) and multi-surrogate (a surrogate for each objective function). Specifically, the talk will focus on the weighted Tchebycheff scalarising function to compare both approaches and show that the distribution of the function in the multi-surrogate approach is not Gaussian and can be approximated with the generalised extreme value distribution. The results and comparisons with existing approaches on standard benchmark and real-world optimisation problems show the potential of the multi-surrogate approach. The talk will also cover recent work in multi-objective BO using classification instead of regression.
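For readers unfamiliar with the weighted Tchebycheff scalarisation discussed in this abstract, a minimal sketch follows. It computes g(x) = max_i w_i |f_i(x) - z_i|, where z is the ideal (reference) point; the function and variable names are ours, not the speaker's.

```python
import numpy as np

def tchebycheff(objectives, weights, ideal):
    """Weighted Tchebycheff scalarisation: max_i w_i * |f_i(x) - z_i|.

    In the mono-surrogate approach a single surrogate is fitted to these
    scalarised values; in the multi-surrogate approach one surrogate per
    objective is fitted and the scalarisation is applied to their outputs.
    """
    objectives = np.asarray(objectives, dtype=float)
    return np.max(weights * np.abs(objectives - ideal), axis=-1)

# Two candidate objective vectors, equal weights, ideal point at the origin.
vals = tchebycheff(np.array([[1.0, 3.0], [2.0, 2.0]]),
                   weights=np.array([0.5, 0.5]),
                   ideal=np.array([0.0, 0.0]))
# vals[0] = max(0.5*1, 0.5*3) = 1.5; vals[1] = max(0.5*2, 0.5*2) = 1.0
```

The max over weighted deviations is what makes the multi-surrogate distribution of this quantity non-Gaussian, motivating the generalised extreme value approximation mentioned in the abstract.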

11:45: Discussion

12:00: Lunch

13:00: Keynote: Ruth Misener: Autonomous research machines: Self-optimizing new chemistry

Our research seeks to boost R&D efficiency in the chemicals industry. As an example, consider "micro reactor flow systems", which are transforming chemical manufacturing by enabling flexible prototyping. Because these high-throughput microfluidic devices can control reaction conditions online, they are ideal for quantitatively characterizing diverse chemical synthesis techniques along new reaction pathways. The challenge is: how do we automate the design of experiments to "self-optimise" new chemistry? Together with the BASF Data Science for Materials & Chemistry teams, we're interested in solving Bayesian optimization challenges which may simultaneously exhibit: multiple objectives, mixed-feature spaces, asynchronous decisions, large batch sizes, input constraints, multi-fidelity observations, hierarchical choices, and costs associated with switching between experimental points. We review the machine learning contributions that we've found useful towards achieving these goals and discuss our own methodological and software contributions.

This work is a collaboration between Imperial (Jose Pablo Folch, Alexander Thebelt, Shiqiang Zhang, Jan Kronqvist, Calvin Tsay, Ruth Misener) and BASF (Robert Lee, Behrang Shafei, Nathan Sudermann-Merx, David Walz).

13:45: Discussion

14:00: Yaochu Jin: Evolutionary multi-objective Bayesian optimization with sparse Gaussian processes and random grouping

Bayesian optimization is a powerful tool for solving low-dimensional single-objective black-box optimization problems; however, how to scale this approach up to higher dimensions and multiple objectives remains a challenging research topic. This talk presents a recently developed algorithm that aims to scale Bayesian optimization to high-dimensional multi-objective optimization problems. The algorithm decomposes the high-dimensional search space into low-dimensional subspaces by random grouping, in which sparse Gaussian processes are built to further reduce the computational complexity. A multi-objective evolutionary algorithm is employed as the baseline search method for optimising the multi-objective acquisition functions.
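The random-grouping decomposition in this abstract can be sketched in a few lines: the decision-variable indices are randomly partitioned into disjoint low-dimensional groups, each of which defines a subspace where a (sparse) GP surrogate would be built. This is an illustrative sketch only; the actual algorithm's details (group sizes, re-grouping schedule) are not specified here.

```python
import numpy as np

def random_grouping(n_dims, n_groups, seed=None):
    """Randomly partition decision-variable indices 0..n_dims-1 into
    n_groups disjoint groups. Each group defines a low-dimensional
    subspace in which a surrogate model is fitted and optimised."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_dims)          # shuffle all variable indices
    return np.array_split(perm, n_groups)   # split into near-equal groups

# Decompose a 100-dimensional search space into 5 subspaces of 20 variables.
groups = random_grouping(n_dims=100, n_groups=5, seed=0)
```

Re-drawing the grouping periodically (a common tactic in cooperative-coevolution-style decomposition) reduces the chance that strongly interacting variables stay permanently separated.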

14:15: Discussion

14:30: Coffee Break

15:00: Hrvoje Stojic: Scaling multi-objective Bayesian optimization to very large batches

15:15: Sebastian Rojas-Gonzalez: Data-efficient interactive multiobjective optimization

The solution to a multi-objective optimization problem usually consists of a set of non-dominated solutions that reveal the essential trade-offs of the conflicting objectives. Finding a set of non-dominated solutions that covers the entire Pareto front can be computationally expensive, especially as the number of objectives increases. In practice, however, the decision-maker is not always interested in the entire Pareto front, but in the solution exhibiting the best trade-off based on the expert's knowledge. In this talk, we propose two novel Bayesian optimization algorithms to efficiently locate the most preferred region of the Pareto front in expensive-to-evaluate problems, by interacting with the expert during the optimization process in a simple manner. We rely on Gaussian processes to build cheap approximations of the objectives and propose to use advanced discretization methods to explore the interesting regions of the search space. The experimental results show our proposed algorithms to be competitive in finding non-dominated solutions that are congruent with the preferences of the decision-maker.
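The notion of non-dominance used throughout this abstract (and several others on the programme) can be made concrete with a short Pareto filter, assuming minimisation in every objective; this is a generic textbook construction, not the speakers' algorithm.

```python
import numpy as np

def non_dominated(points):
    """Boolean mask of Pareto non-dominated points (minimisation).

    A point is dominated if some other point is no worse in every
    objective and strictly better in at least one.
    """
    pts = np.asarray(points, dtype=float)
    mask = np.ones(len(pts), dtype=bool)
    for i in range(len(pts)):
        dominators = np.all(pts <= pts[i], axis=1) & np.any(pts < pts[i], axis=1)
        if dominators.any():
            mask[i] = False
    return mask

mask = non_dominated([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
# [3, 3] is dominated by [2, 2]; the other three points are non-dominated
```

Interactive methods like those in this talk steer the search toward the decision-maker's preferred region of this non-dominated set rather than approximating the whole front.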

15:30: Discussion

16:00: Coffee break

16:30: Joao Duro: Handling uncertainties in Bayesian multi-objective optimization

Bayesian optimizers are arguably well suited to optimization problems characterised by expensive-to-evaluate candidate solutions. Further important considerations are accounting for uncertainties arising from multiple sources and satisfying multiple conflicting objectives. In this scenario we wish to identify alternative robust trade-offs between objectives, where robustness comprises some preferred metric that quantifies the ability of a solution to retain high performance in the presence of uncertainties. Estimating the robustness of a solution to uncertainties can itself be a computationally demanding task, where the statistical properties of the solution's expected performance must be quantified. This talk will discuss sParEGO [1], a multi-objective Bayesian optimization algorithm that employs a novel uncertainty quantification approach to assess the robustness of a candidate solution without having to rely on expensive sampling techniques (e.g. Monte Carlo methods). The performance of sParEGO is demonstrated on benchmark problems that use a toolkit from [2] to simulate the effect of stochasticity. For a given robustness indicator, sParEGO is shown to converge towards the robust trade-off surface. One potential pitfall of this algorithm is that modality could be incorrectly interpreted as stochasticity.

16:45: Jeremie Houssineau: Bayesian optimisation via a dedicated representation of epistemic uncertainty

Although it is widely accepted that there are two main sources of uncertainty, randomness (aleatoric) and lack of knowledge (epistemic), the usual approach to statistical inference conflates these two types of uncertainty and uses the language and tools of probability theory to describe both. However, one can leverage possibility theory as a dedicated representation of epistemic uncertainty within Bayesian inference. This is natural in the context of Bayesian optimisation, since the uncertainty about the function to be optimised is often purely epistemic. Possibilistic Gaussian processes therefore represent a promising alternative to the standard approach.

17:00: Discussion

17:30: Closing (community discussion)

18:00 to 18:30: Drinks reception in the lounge, then convene for dinner.