
Core Machine Learning

Welcome to the Core Machine Learning page! We are a subgroup of the Machine Learning Research Group. CoreML is a group of students that meets informally every Monday to discuss and learn more about the latest ideas within the Machine Learning community. COME AND UPDATE YOUR POSTERIOR!


What to expect from a meeting:

  • Free energy variational biscuits
  • Awesome atmosphere full of bright Bayesian rainbows
  • Discussions full of interesting latent structures

What is expected from a participant:

  • Read the material prior to the meeting, that's easy!

What is expected from a discussion leader:

  • Nothing formal, just make sure to read the material
  • At the beginning, quickly summarise the topic to make sure everyone is on the same page. There is no need for a presentation, unless you want to give one! If you would like to demonstrate an example or anything else, you can simply show it on your laptop screen; it's really up to you.
  • Think of three questions for the discussion! These are very open-ended and informal!
  • See first discussion below for an example!

See below for the list of topics and the associated reading suggestions.

Introduction to Approximate Inference

To be discussed on 05/03/18. Discussion leader: Peter Byfield. Although extremely useful and conceptually simple, a fully probabilistic approach to machine learning poses a number of computational challenges. One of them is marginalisation over all the variables in the model except the variables of interest. In this session Peter will discuss Variational Inference methods (and perhaps touch on asymptotically exact Monte Carlo inference methods) that help us overcome these challenges.
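To make the idea concrete before the session, here is a minimal sketch of variational inference using the reparameterisation trick on a made-up toy model (data x_i ~ N(theta, 1), prior theta ~ N(0, 1)). The variational family, learning rate, and all names below are illustrative assumptions, not taken from the reading material.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, size=20)            # hypothetical observed data

def dlogjoint_dtheta(theta):
    # d/dtheta [ log N(theta; 0, 1) + sum_i log N(x_i; theta, 1) ]
    return -theta + (x.sum() - len(x) * theta)

# Variational family q(theta) = N(mu, sigma^2); maximise the ELBO over (mu, log sigma).
mu, log_sigma = 0.0, 0.0
lr, n_steps, n_samples = 0.01, 2000, 10

for _ in range(n_steps):
    eps = rng.normal(size=n_samples)
    sigma = np.exp(log_sigma)
    theta = mu + sigma * eps                 # reparameterised samples theta ~ q
    g = dlogjoint_dtheta(theta)
    mu += lr * g.mean()                      # Monte Carlo ELBO gradient w.r.t. mu
    log_sigma += lr * ((g * sigma * eps).mean() + 1.0)   # + d(entropy)/d(log sigma) = 1

# This toy model is conjugate, so the exact posterior is available for comparison.
post_var = 1.0 / (1.0 + len(x))
post_mean = x.sum() * post_var
print(f"VI:    mean={mu:.3f}, sd={np.exp(log_sigma):.3f}")
print(f"Exact: mean={post_mean:.3f}, sd={post_var ** 0.5:.3f}")
```

The closed-form posterior at the end is only there as a sanity check on the variational answer; in the problems discussed in the session such a check is generally unavailable, which is why approximate inference is needed in the first place.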

Classifier Technology and the Illusion of Progress

To be discussed on 19/03/18. Discussion leader: Jevgenij Gamper. In this discussion, I will lead the review of a paper by David J. Hand (Imperial College). Abstract: A great many tools have been developed for supervised classification, ranging from early methods such as linear discriminant analysis through to modern developments such as neural networks and support vector machines. A large number of comparative studies have been conducted in attempts to establish the relative superiority of these methods. This paper argues that these comparisons often fail to take into account important aspects of real problems, so that the apparent superiority of more sophisticated methods may be something of an illusion. In particular, simple methods typically yield performance almost as good as more sophisticated methods, to the extent that the difference in performance may be swamped by other sources of uncertainty that generally are not considered in the classical supervised classification paradigm.

Dropout as Bayesian Approximation

To be discussed on 19/02/18. Discussion leader: Jev Gamper. In this discussion we will discover how Neural Networks relate to Gaussian Processes. In particular, by drawing the connection between Gaussian Processes and stochastic regularisation methods for Neural Networks such as dropout, we find that we can use the latter to obtain uncertainty estimates for the network's predictions. Using dropout for uncertainty estimation is extremely useful, as we do not have to resort to expensive probabilistic inference methods such as Bayesian Neural Networks, and can instead use existing network architectures and traditional optimisation methods.
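As a rough illustration of the idea (not the paper's exact recipe), here is a minimal sketch of Monte Carlo dropout for predictive uncertainty, assuming PyTorch. The network architecture, dropout rate, test inputs, and number of stochastic passes are all made-up placeholders.

```python
import torch
import torch.nn as nn

# A small regression network with dropout after each hidden layer (hypothetical).
model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 1),
)
# ... train as usual with a standard optimiser, dropout switched on ...

x_test = torch.linspace(-3.0, 3.0, steps=100).unsqueeze(1)

model.train()                        # keep dropout stochastic at prediction time
with torch.no_grad():
    # T stochastic forward passes through the same trained network
    samples = torch.stack([model(x_test) for _ in range(100)])

pred_mean = samples.mean(dim=0)      # predictive mean
pred_std = samples.std(dim=0)        # spread across passes, used as an uncertainty estimate
```

The only change from ordinary prediction is leaving the dropout layers active at test time and averaging over several stochastic forward passes; the spread of those passes is what serves as the uncertainty estimate.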


Time and Location
Monday, 4pm - 5pm

Room D1.07

Centre for Complexity Science

Zeeman Building

Contact
Jevgenij Gamper (organiser)

email image