
Juan Ungredda

About me

Juan graduated in 2023 with a PhD in Mathematics of Systems. After graduating, Juan joined ESTECO, a software provider highly specialised in numerical optimisation for engineering design problems. ESTECO were the external partner on Juan's PhD project.

Juan was supervised by Prof. Jürgen Branke. He studied Industrial Engineering at Universidad Católica Andrés Bello and obtained an MSc degree in Mathematics of Systems (with Distinction) from the University of Warwick.

Juan's research involved real-world optimisation problems that are often very expensive (e.g. in money or time) to evaluate. It is therefore important to collect data efficiently in order to find the best design from a small number of simulation runs. This is achieved by using Gaussian Processes as surrogate models for model-based optimisation. In academia, this method has been applied in a wide range of areas, including interactive user interfaces, robotics, environmental monitoring and sensor networks.
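The surrogate-based loop described above can be sketched in a few lines. This is a minimal illustration, not the code used in the research: a zero-mean Gaussian Process with an RBF kernel is fitted to the evaluated points, and the next point to simulate is chosen by maximising expected improvement over a grid. The toy objective and all parameter values are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import norm

def rbf(a, b, ls=0.2):
    # Squared-exponential (RBF) kernel on 1-D inputs.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(X_tr, y_tr, X_te, noise=1e-6):
    # GP posterior mean/variance at test points (zero prior mean, unit prior variance).
    K = rbf(X_tr, X_tr) + noise * np.eye(len(X_tr))
    Ks = rbf(X_tr, X_te)
    mu = Ks.T @ np.linalg.solve(K, y_tr)
    var = 1.0 - np.einsum("ij,ij->j", Ks, np.linalg.solve(K, Ks))
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, best):
    # Expected improvement for minimisation.
    s = np.sqrt(var)
    z = (best - mu) / s
    return s * (z * norm.cdf(z) + norm.pdf(z))

def bayes_opt(f, n_init=3, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 1.0, 201)
    X = rng.uniform(0.0, 1.0, n_init)   # initial design
    y = f(X)
    for _ in range(n_iter):
        mu, var = gp_posterior(X, y, grid)
        x_next = grid[np.argmax(expected_improvement(mu, var, y.min()))]
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))     # one "expensive" simulation per iteration
    return X[np.argmin(y)], y.min()

f = lambda x: (x - 0.6) ** 2            # toy stand-in for an expensive simulator
x_best, y_best = bayes_opt(f)
```

With only a handful of evaluations the loop concentrates samples near the minimiser, which is the point of using a surrogate when each run is costly.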

Email: J.Ungredda@warwick.ac.uk

Github: https://github.com/JuanUngredda

Publications

1. Bayesian Optimisation vs. Input Uncertainty Reduction, Juan Ungredda, Michael Pearce, Juergen Branke. To appear in ACM TOMACS.

2. One step preference elicitation in multi-objective Bayesian optimization, Juan Ungredda, Juergen Branke, Mariapia Marchi, Teresa Montrone. GECCO '21: Proceedings of the Genetic and Evolutionary Computation Conference Companion

3. Bayesian Optimisation for Constrained Problems, Juan Ungredda, Juergen Branke. Currently only on arXiv.

Conferences, Workshops and Internships

1. IMA and OR Society Conference on Mathematics of Operational Research. Speaker in "Bayesian Optimisation with Input Uncertainty Reduction" (slides).

2. Data Study Group, Alan Turing Institute September 2019, Telenor, Green Radio: Dynamic power saving configuration for mobile networks

3. Three-month internship (April–June 2019) at ESTECO in Trieste, Italy.

4. The Genetic and Evolutionary Computation Conference (GECCO) 2021. Poster presenter in "One Step Preference Elicitation in Multi-Objective Bayesian Optimization" (YouTube link).

5. GECCO 2021 Industrial Challenge Winner (2nd place) in collaboration with the Bayesian Optimisation reading group (War-BORG). Optimization of a simulation model for a capacity and resource planning task for hospitals under special consideration of the COVID-19 pandemic.

6. Data Science for Social Good (DSSG) two-month internship, June–July 2021.

7. Currently an active member of the Warwick Bayesian Optimisation reading group (link).

    My PhD Work

    Bayesian Optimisation vs Input Uncertainty

    Simulation optimisation, i.e., the search for a design or solution that optimises some output value of a simulation
    model, makes it possible to automate the design of complex systems and has many real-world applications. Yet several
    difficulties arise when dealing with real systems, especially long simulation running times and stochastic outputs.
    Stochastic simulations combine probabilistic input assumptions with system logic to produce random outputs whose
    expected values must be estimated. When constructing the simulation model, the decision maker therefore faces the
    challenge of defining input distributions (e.g. the mean of an arrival time distribution). In particular, if multiple
    candidate distributions fit the input data reasonably well, performance analyses are subject to input error,
    variability or uncertainty, captured by \mathbb{P}[A|Data].

    Moreover, if both running additional simulations to learn about the output landscape \mu(X,A) and collecting more data to reduce the input uncertainty \mathbb{P}[A|Data] are expensive, then it is important to evaluate the trade-off between them. Devoting too much effort to data collection (left image) may not leave sufficient time for optimisation, while devoting too little forces us to search for a robust solution that performs well across the possible input distributions but may not be best for the true input parameters (right image).
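    How \mathbb{P}[A|Data] shrinks with more input data can be illustrated with a small conjugate example. This assumes exponential interarrival times with a Gamma prior on the unknown rate A, an assumption chosen purely for illustration and not the specific model studied in the thesis.

```python
import numpy as np

# Illustrative: the decision maker estimates an unknown arrival rate A
# from observed interarrival times; P[A|Data] narrows as data grows.
true_rate = 2.0
rng = np.random.default_rng(1)

def posterior_std(n_obs, a0=1.0, b0=1.0):
    """Std of the Gamma(a, b) posterior over an exponential rate A
    after observing n_obs interarrival times (conjugate update)."""
    data = rng.exponential(1.0 / true_rate, n_obs)
    a, b = a0 + n_obs, b0 + data.sum()
    return np.sqrt(a) / b

few, many = posterior_std(5), posterior_std(500)
# 'many' is far smaller than 'few': more input data leaves
# less input uncertainty for the optimiser to be robust against.
```

    Since each observation is costly, the practical question is when this narrowing of \mathbb{P}[A|Data] is worth more than another simulation run.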

    [Figures: left, reducing input uncertainty by collecting more source data; right, collecting more simulation data to update the surrogate model]

    Constrained Bayesian Optimisation

    Expensive black-box constrained optimisation problems appear in many fields where the possible number of evaluations is limited. Examples include hyperparameter tuning, where the objective is to minimise the validation error of a machine learning algorithm; the optimisation of the control policy of a robot under performance and safety constraints; and engineering design optimisation. However, some of these problems have constraints that are also expensive to evaluate. In the machine learning example, we may want to tune the hyperparameters of a fully connected neural network subject to a limit on prediction time for real-time prediction.

    Most BO approaches assume unconstrained or box-constrained problems. We therefore aim for a criterion that quantifies the value of both the objective-function and the constraint information we would gain from a given sampling decision. Note that obtaining feasibility information does not immediately translate into better expected objective performance; rather, more accurate feasibility information may change our current beliefs about where the optimal solution is located.
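    A common baseline for such a criterion (expected improvement weighted by the probability of feasibility, due to the constrained-EI literature and not necessarily the criterion developed in the paper above) can be sketched as follows; all inputs are assumed to come from independent GP posteriors over the objective f and the constraint c.

```python
import numpy as np
from scipy.stats import norm

def constrained_ei(mu_f, var_f, mu_c, var_c, best_feasible):
    """Expected improvement (minimisation) times the probability
    that the constraint c(x) <= 0 holds, given independent Gaussian
    posteriors for objective f and constraint c at a candidate x."""
    s = np.sqrt(var_f)
    z = (best_feasible - mu_f) / s
    ei = s * (z * norm.cdf(z) + norm.pdf(z))
    p_feasible = norm.cdf(-mu_c / np.sqrt(var_c))
    return ei * p_feasible

# A point predicted feasible scores higher than one predicted infeasible,
# even when the objective posterior is identical at both points.
likely_feasible = constrained_ei(0.0, 1.0, -2.0, 1.0, 1.0)
likely_infeasible = constrained_ei(0.0, 1.0, 2.0, 1.0, 1.0)
```

    This baseline only discounts by feasibility; it does not directly value how a sample might sharpen feasibility beliefs, which is exactly the gap the sentence above points at.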

    MSc Group Project

    This project was in collaboration with ESTECO to extend their software to also work well with stochastic simulators (such as discrete event simulations used in social science or epidemiology). The main challenges in this domain are:

    1. There are often multiple optimisation criteria. In such cases, it is usually not possible to find a single solution that is best according to all criteria, but the algorithm is supposed to search for a set of so-called “Pareto-optimal” solutions, with different trade-offs. The goal is to generate a representative subset of such solutions for a decision maker to choose from. Evolutionary algorithms are particularly suitable for this task, as they work with a population of solutions and can generate a set of Pareto-optimal solutions in one run.

    2. Simulations are computationally expensive, so only a few solutions can be evaluated. To reduce the number of simulations, surrogate models can be constructed and used to approximate the solution quality.

    This poses a challenge to optimisation, as a solution’s objective function values are now stochastic, making it difficult to decide whether an observed quality difference is genuine or simply due to random noise. The goal is thus to develop novel strategies to handle noisy multi-objective simulation optimisation problems.
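    “Pareto-optimal” above means that no other solution is at least as good in every objective and strictly better in at least one. A small illustrative sketch of filtering evaluated points to their non-dominated subset (not ESTECO's implementation; the example data are made up):

```python
import numpy as np

def pareto_mask(F):
    """Boolean mask of non-dominated rows of F (all objectives minimised).
    Row j dominates row i if F[j] <= F[i] everywhere and < somewhere."""
    mask = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dominated_by = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated_by.any():
            mask[i] = False
    return mask

# Two trade-off points survive; [2,2] and [3,3] are dominated.
F = np.array([[1.0, 2.0], [2.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
front = pareto_mask(F)
```

    With noisy outputs the comparisons `F[j] <= F[i]` are no longer clear-cut, which is precisely why novel strategies are needed for the noisy multi-objective case.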

    Application: The problem has applications in many areas, including traffic light control, epidemiology, scheduling or call centres, to name just a few. In this study group, we suggested working on a Supply Chain Management problem in the Chemical Process Industry.

    Report