MA3K0 Content

Content:

- Preliminaries on Random Variables (limit theorems, classical inequalities, Gaussian models, Monte Carlo; a short Monte Carlo sketch follows this list)

- Basic Information Theory (entropy; Kullback–Leibler divergence)

- Concentration of Sums of Independent Random Variables

- Random Vectors in High Dimensions

- Random Matrices

- Concentration with Dependency Structures

- Deviations of Random Matrices and Geometric Consequences

- Graphical Models and Deep Learning
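
As a minimal sketch of the Monte Carlo item above (the target function and sample sizes are illustrative choices, not taken from the module), the following shows how an empirical mean concentrates around the expectation it estimates:

```python
import numpy as np

# Illustrative sketch (function and sample sizes are my choice, not from the
# module): Monte Carlo estimation of E[X^2] = 1 for X ~ N(0, 1). The sample
# mean concentrates around the true expectation as the sample size grows.
rng = np.random.default_rng(0)

for n in (10**2, 10**4, 10**6):
    samples = rng.standard_normal(n)
    estimate = np.mean(samples**2)  # Monte Carlo estimate of E[X^2]
    print(f"n = {n:>7}: estimate = {estimate:.4f}, |error| = {abs(estimate - 1.0):.4f}")
```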

Aims:

- Concentration of measure problem in high dimensions

- Three basic concentration inequalities (a representative example is stated after this list)

- Applications of basic variational principles

- Concentration of the norm

- Dependency structures

- Introduction to random matrices
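
For orientation, one representative example of such an inequality (in [1] the basic bounds are of Hoeffding, Chernoff and Bernstein type; the formulation below is Hoeffding's inequality, stated here as an illustration rather than as the module's exact version) is:

```latex
% Hoeffding's inequality: X_1, ..., X_n independent with X_i \in [a_i, b_i]
% almost surely; the sum deviates from its mean by t with probability at most
\[
  \mathbb{P}\left( \left| \sum_{i=1}^{n} \left( X_i - \mathbb{E} X_i \right) \right| \ge t \right)
  \le 2 \exp\left( - \frac{2 t^2}{\sum_{i=1}^{n} (b_i - a_i)^2} \right),
  \qquad t \ge 0.
\]
```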

Objectives:

By the end of the module the student should be able to:

- Understand the concentration of measure problem in high dimensions

- Distinguish the three basic concentration inequalities

- Distinguish between concentration for independent families and for various dependency structures

- Understand the basic concentration of the norm

- Be familiar with the main properties of random matrices (see the numerical sketch after this list)

- Understand basic variational problems

- Be familiar with some applications of graphical models
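
A minimal numerical sketch of one such main property (the Gaussian model and the spectral norm are illustrative choices on my part; for an n × n matrix with i.i.d. standard normal entries the largest singular value is of size roughly 2√n):

```python
import numpy as np

# Minimal sketch (Gaussian entries and the spectral norm are illustrative
# choices): the largest singular value of an n x n matrix with i.i.d. N(0, 1)
# entries is of size roughly 2*sqrt(n) for large n.
rng = np.random.default_rng(0)

for n in (100, 400, 1600):
    A = rng.standard_normal((n, n))
    s_max = np.linalg.norm(A, ord=2)  # spectral norm = largest singular value
    print(f"n = {n:>5}: s_max = {s_max:7.1f}, 2*sqrt(n) = {2 * np.sqrt(n):7.1f}")
```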

Books:

We will not follow a particular book; lecture notes will be provided. The course is based on the following references, with the majority of the material taken from [1]:

[1] Roman Vershynin, High-Dimensional Probability: An Introduction with Applications in Data Science, Cambridge Series in Statistical and Probabilistic Mathematics, Cambridge University Press (2018).

[2] Kevin P. Murphy, Machine Learning: A Probabilistic Perspective, MIT Press (2012).

[3] Simon Rogers and Mark Girolami, A First Course in Machine Learning, CRC Press (2017).

[4] Alex Kulesza and Ben Taskar, Determinantal Point Processes for Machine Learning, Lecture Notes (2013).