Machine Learning Club
We meet informally every week to discuss Machine Learning topics over biscuits and tea. We alternate between Speaker Series and CoreML sessions. Participants come from a variety of disciplines, so please feel free to come along!
The format of the Speaker Series is one presenter per meeting who chooses the topic. The talk could be about anything Machine Learning related - your own research, an interesting ML paper, or some new exciting method.
In the weeks when the Speaker Series is not running, you can get your Machine Learning fix at the CoreML group where we discuss the latest ideas within the Machine Learning community.
Upcoming Meetings:
- 03/12/2018 - Speaker Series
Tim Pearce - University of Cambridge
Bayesian Neural Network Ensembles (and Application to Exploration in Reinforcement Learning)
Understanding the uncertainty of a neural network’s (NN) predictions is useful for many applications. The Bayesian framework provides a principled approach to this; however, applying it to NNs is challenging due to the large number of parameters and the volume of data. Our recent work developed an easily implementable ensembling technique that gives surprisingly good approximations to the Bayesian posterior. We demonstrate its practicality on supervised regression tasks as well as a small-scale RL problem.
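For a feel of the method ahead of the talk, here is a rough toy sketch of the kind of randomised-anchor ensembling the abstract describes, where each network is regularised towards its own draw from the prior and the spread across members stands in for posterior uncertainty. The network size, data, and prior/noise variances are our own illustrative choices, not the speaker's exact setup.

```python
import torch
import torch.nn as nn

def make_net():
    return nn.Sequential(nn.Linear(1, 50), nn.ReLU(), nn.Linear(50, 1))

# Toy 1-D regression data.
x = torch.linspace(-3, 3, 40).unsqueeze(1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)

prior_var, noise_var, n_members = 1.0, 0.01, 5
ensemble = []
for _ in range(n_members):
    net = make_net()
    # Each member is pulled towards its own draw from the prior (its "anchor").
    anchors = [prior_var ** 0.5 * torch.randn_like(p) for p in net.parameters()]
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        data_fit = ((net(x) - y) ** 2).sum() / noise_var
        anchor_reg = sum(((p - a) ** 2).sum()
                         for p, a in zip(net.parameters(), anchors)) / prior_var
        (data_fit + anchor_reg).backward()
        opt.step()
    ensemble.append(net)

# Predictive mean and uncertainty come from disagreement between members.
with torch.no_grad():
    preds = torch.stack([net(x) for net in ensemble])
mean, std = preds.mean(0), preds.std(0)
```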
Past Meetings:
- 26/11/2018 - Speaker Series
Michael Pearce - Warwick/DeepMind
Applications of VAEs in State Space Models
- 19/11/2018 - Cancelled
- 12/11/2018 - CoreML
Ayman Boustati
Natural Gradients
Gradient descent is ubiquitous in machine learning. It is the learning algorithm most commonly used to optimise a model's parameters for generalisation; however, many machine learning models have an underlying probabilistic interpretation under which changing their parameters in Euclidean space is sub-optimal. In this session, we will take a look at natural gradients and how they can help make gradient descent more efficient in probabilistic models.
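To give a flavour of the idea before the session, here is a tiny toy example (our own illustration, not the session's material): for a Bernoulli model the Fisher information is known in closed form, and preconditioning the gradient with its inverse turns a slow Euclidean update into one that converges to the maximum-likelihood estimate almost immediately.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.binomial(1, 0.8, size=1000)   # true p = 0.8
x_bar = data.mean()

def grad_loglik(p):
    # d/dp of the average Bernoulli log-likelihood.
    return (x_bar - p) / (p * (1 - p))

p_plain, p_natural = 0.1, 0.1
for _ in range(20):
    p_plain += 0.01 * grad_loglik(p_plain)                  # Euclidean step
    fisher = 1.0 / (p_natural * (1 - p_natural))            # Fisher information
    p_natural += 0.5 * grad_loglik(p_natural) / fisher      # natural-gradient step
print(f"plain: {p_plain:.3f}, natural: {p_natural:.3f}, MLE: {x_bar:.3f}")
```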
- 05/11/2018 - CoreML
Ayman Boustati
Normalising Flows
Variational inference (VI) has become one of the most effective methods for approximate Bayesian inference; however, most VI techniques employ simple approximations to the posterior that can sometimes be inappropriate. One way to mitigate this is through the use of Normalising Flows, i.e. applying deterministic transformations to simple distributions to yield more expressive posterior distributions. In this session, we will cover the basics of Normalising Flows.
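As a warm-up, here is a minimal sketch of the change-of-variables mechanics that flows rely on (our own toy example, using a single fixed affine transformation rather than a learned flow):

```python
import numpy as np

rng = np.random.default_rng(0)

# Base distribution: standard normal samples and their log-density.
z = rng.standard_normal(10_000)
log_q0 = -0.5 * z**2 - 0.5 * np.log(2 * np.pi)

# Invertible transformation x = f(z) = scale * z + shift.
log_scale, shift = np.log(2.0), 1.0
x = np.exp(log_scale) * z + shift

# Change of variables: log q(x) = log q0(z) - log |df/dz|.
log_q = log_q0 - log_scale

print(x.mean(), x.std())   # roughly mean 1, std 2
```

A real normalising flow stacks many such invertible layers (e.g. planar or autoregressive transformations), each contributing its own log-det-Jacobian term to the density.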
- 29/10/2018 - Speaker Series
Virginia Aglietti
Efficient Inference in Multi-task Cox Process Models
In this work, I will present a framework that generalizes the log Gaussian Cox process (LGCP) to model multiple correlated point datasets jointly. The observations are treated as realizations of multiple LGCPs, whose log intensities are given by linear combinations of latent functions drawn from Gaussian process priors. The combination coefficients are also drawn from Gaussian processes and can incorporate additional dependencies. We derive closed-form expressions for the moments of the intensity functions and develop an efficient variational inference algorithm that is orders of magnitude faster than competing deterministic and stochastic approximations of multivariate LGCP, coregionalization models, and multi-task permanental processes. Our approach outperforms these benchmarks in multiple problems, offering the current state of the art in modeling multivariate point processes.
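For readers unfamiliar with the model, here is a rough sketch of the generative structure described above (the kernel, grid, and fixed mixing weights are our simplifying assumptions; in the work presented the combination coefficients are themselves GP-distributed):

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 100)                                     # evaluation grid
K = np.exp(-0.5 * (grid[:, None] - grid[None, :])**2 / 0.1**2)    # RBF kernel
K += 1e-6 * np.eye(len(grid))                                     # jitter

n_latent, n_tasks = 2, 3
# Latent functions g_q ~ GP(0, K).
G = rng.multivariate_normal(np.zeros(len(grid)), K, size=n_latent)
# Mixing weights (fixed here for brevity).
W = rng.standard_normal((n_tasks, n_latent))

log_intensity = W @ G              # log intensities are linear combinations
intensity = np.exp(log_intensity)  # (n_tasks, n_grid)

# Crude simulation of each task's point process: Poisson counts per grid cell.
counts = rng.poisson(intensity * (grid[1] - grid[0]))
print(intensity.shape, counts.shape)
```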
- 22/10/2018 - Speaker Series
Michael Pearce - DeepMind
Life at DeepMind
Michael will be sharing his experience as an intern at DeepMind. The session will be Q&A style.
- 15/10/2018 - CoreML
Jev Gamper
Stochastic (mini-batch) MCMC techniques
Standard MCMC algorithms require evaluating the model over the whole dataset, which, in an era of abundant data, is no longer convenient. In this talk we will go through two related papers: Seita et al., “An Efficient Minibatch Acceptance Test for Metropolis-Hastings”, and Welling and Teh, “Bayesian Learning via Stochastic Gradient Langevin Dynamics”. The former proposes an efficient mini-batch acceptance test for Metropolis-Hastings, while the latter makes use of stochastic gradients but nevertheless samples from the posterior.
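As a taster, here is a minimal sketch of the SGLD update from Welling and Teh on a toy Gaussian-mean model; the model, step size, and iteration counts are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, batch = 10_000, 100
data = rng.normal(2.0, 1.0, size=N)      # x_i ~ N(theta_true, 1)

theta, eps = 0.0, 1e-4                   # initial value and (fixed) step size
samples = []
for t in range(5_000):
    xb = rng.choice(data, size=batch, replace=False)
    grad_log_prior = -theta / 10.0                    # prior theta ~ N(0, 10)
    grad_log_lik = (N / batch) * np.sum(xb - theta)   # rescaled minibatch gradient
    noise = rng.normal(0.0, np.sqrt(eps))             # injected Langevin noise
    theta += 0.5 * eps * (grad_log_prior + grad_log_lik) + noise
    samples.append(theta)

# The chain should hover around the posterior mean (close to the data mean).
print(np.mean(samples[1000:]), data.mean())
```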
- 08/10/2018 - Speaker Series
Jim Skinner
Artificial olfaction in medicine & a model for dealing with the data
Olfaction (smell) has a long history in medicine, and automating smell-based diagnoses may be useful. There are a number of technologies for producing a machine that can smell, and they produce highly structured, high-dimensional data. I will talk through a few studies where we use artificial olfaction to diagnose disease, then talk about a model I have developed for feature learning from artificial olfaction data. The model is PPCA with a prior on the loadings, and an approximate marginal likelihood enabling model selection & uncertainty quantification.
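For context, here is a short sketch of the generative model the abstract refers to (the dimensions, prior scale, and noise level are illustrative assumptions on our part, not the speaker's model settings):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, q = 200, 50, 3                     # samples, observed dim, latent dim
prior_scale, noise_std = 1.0, 0.1

W = prior_scale * rng.standard_normal((d, q))   # loadings drawn from their prior
z = rng.standard_normal((n, q))                 # latent coordinates
mu = rng.standard_normal(d)
X = z @ W.T + mu + noise_std * rng.standard_normal((n, d))

# Under PPCA the marginal of x is Gaussian with covariance W W^T + sigma^2 I;
# comparing (approximate) marginal likelihoods across q gives model selection.
cov = W @ W.T + noise_std**2 * np.eye(d)
print(X.shape, cov.shape)
```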
- 01/10/2018 - CoreML
Jev Gamper
Meta Learning in neural networks
An exposition of the latest methods for meta-learning in deep neural networks.