Core ML discussion topics
All about Gaussian Processes II (non-Gaussian likelihoods)
To be discussed on November 27th. Discussion leader: Ayman Boustati. After reviewing GP models and their approximations in the regression case, we will cover the literature on GPs whose likelihood is not necessarily tractable, i.e. non-Gaussian. This is the case in classification, for example, as well as in many other interesting applications.
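As a warm-up for the discussion, here is a minimal sketch of why the non-Gaussian case needs an approximation, using the classic Laplace approximation for GP binary classification with a logistic (Bernoulli) likelihood. The kernel settings and toy data are purely illustrative, not from any specific paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(A, B, ls=1.0):
    # squared-exponential kernel on 1-D inputs
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ls**2)

X = rng.uniform(-3, 3, size=60)
y = np.where(X > 0, 1.0, -1.0)           # labels in {-1, +1}
K = rbf(X, X) + 1e-6 * np.eye(60)        # prior covariance (jittered)

sigmoid = lambda z: 1 / (1 + np.exp(-z))

# With a logistic likelihood the posterior p(f | y) has no closed form,
# so Laplace approximates it by a Gaussian centred at the posterior mode.
# Newton iterations for the mode f_hat:
f = np.zeros(60)
for _ in range(20):
    p = sigmoid(f)
    grad = (y + 1) / 2 - p               # d log p(y|f) / df
    W = p * (1 - p)                      # negative Hessian (diagonal)
    # f_new = (K^-1 + W)^-1 (W f + grad) = K (I + W K)^-1 (W f + grad)
    f = K @ np.linalg.solve(np.eye(60) + W[:, None] * K, W * f + grad)

# at the mode, the latent function should separate the two classes
print(np.mean(np.sign(f) == y))
```

The Gaussian centred at `f` (with covariance `(K^-1 + W)^-1`) then stands in for the true posterior in the predictive equations; expectation propagation and variational methods, which the session will likely touch on, replace this step with different Gaussian fits.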
All about Gaussian Processes I (Regression)
To be discussed on 13th Nov. Discussion leader: Kieran Kalair. Gaussian processes are awesomely flexible and powerful "nonparametric" models! However, they do come with quite a few caveats. In this discussion we will cover some of the most important literature on GPs, particularly the advances in GP scalability using inducing points.
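To make the scalability issue concrete, here is a sketch comparing the exact GP regression posterior mean (O(n³) in the number of data points) with a simple inducing-point approximation in the subset-of-regressors family (O(nm²) for m inducing inputs). The kernel hyperparameters, noise level, and data are illustrative choices, not taken from any particular paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(A, B, lengthscale=0.5):
    # squared-exponential kernel on 1-D inputs
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / lengthscale**2)

# toy 1-D regression data
n = 200
X = rng.uniform(0, 5, size=n)
y = np.sin(X) + 0.1 * rng.normal(size=n)
Xs = np.linspace(0, 5, 50)               # test inputs
noise = 0.1**2

# exact GP posterior mean: K_*n (K_nn + s^2 I)^-1 y  -- needs an n x n solve
Knn = rbf(X, X) + noise * np.eye(n)
mu_exact = rbf(Xs, X) @ np.linalg.solve(Knn, y)

# inducing-point (subset-of-regressors) mean with m << n inducing inputs Z:
#   mu ~= K_*z (K_zx K_xz + s^2 K_zz)^-1 K_zx y  -- only m x m solves
Z = np.linspace(0, 5, 15)
Kzx = rbf(Z, X)
Kzz = rbf(Z, Z)
A = Kzx @ Kzx.T + noise * Kzz
mu_sor = rbf(Xs, Z) @ np.linalg.solve(A, Kzx @ y)

# on this smooth toy problem the two means are close
print(np.max(np.abs(mu_exact - mu_sor)))
```

Variational inducing-point schemes (e.g. Titsias-style formulations, likely among the papers discussed) use the same m × m structure but choose the inducing distribution by optimising a lower bound rather than plugging in this approximation directly.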
Explaining and Harnessing Adversarial Examples
To be discussed on Oct 30th, 2017. Discussion leader: Jev. It is possible to craft data instances that trick virtually any kind of machine learning classifier. Early attempts at explaining this phenomenon in neural networks focused on nonlinearity and overfitting. In the subject paper, the authors argue instead that the primary cause of neural networks' vulnerability to adversarial perturbations is their linear nature, particularly the piecewise linearity at the last layers. This naturally sparks a discussion of our limited theoretical understanding of how deep learning works.
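The paper's fast gradient sign method (FGSM) follows directly from the linearity argument: for a (locally) linear model, the loss-maximising bounded perturbation is epsilon times the sign of the input gradient. A minimal sketch on a toy logistic regression model, with all weights and inputs being illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=20)          # fixed "trained" weights
b = 0.1
x = rng.normal(size=20)          # a clean input
y = 1.0                          # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x):
    # binary cross-entropy for the true label y
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# gradient of the loss w.r.t. the INPUT (not the weights):
# d/dx BCE(sigmoid(w.x + b), y) = (p - y) * w
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM: take one step in the sign of the gradient, bounded by epsilon
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

# the perturbation raises the loss by eps * sum(|w|) in the logit,
# even though each coordinate moved by at most eps
print(loss(x), loss(x_adv))
```

The point the paper makes is visible here: the per-coordinate change is tiny, but because the model is linear in its input, the effect accumulates across dimensions, and in high-dimensional input spaces this is enough to flip predictions.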