- Preliminaries on Random Variables (limit theorems, classical inequalities, Gaussian models, Monte Carlo; a Monte Carlo sketch follows this list)
- Basic Information Theory (entropy; Kullback-Leibler divergence; a worked sketch follows this list)
- Concentration of Sums of Independent Random Variables
- Random Vectors in High Dimensions
- Random Matrices
- Concentration with Dependency Structures
- Deviations of Random Matrices and Geometric Consequences
- Graphical models and deep learning
- Concentration of measure problem in high dimensions
- Three basic concentration inequalities (an empirical check of one standard example, Hoeffding's inequality, follows this list)
- Applications of basic variational principles
- Concentration of the norm (a numerical illustration follows this list)
- Dependency structures
- Introduction to random matrices
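As a quick preview of the Monte Carlo topic above, here is a minimal sketch, not taken from the lecture notes (the integrand, sample sizes, and seed are illustrative choices), of how a sample mean concentrates around the expectation it estimates:

```python
import numpy as np

# Monte Carlo estimate of E[X^2] for X ~ N(0, 1); the true value is 1.
# The sample mean concentrates around the expectation as n grows (law of
# large numbers), with typical error of order 1/sqrt(n) (central limit theorem).
rng = np.random.default_rng(0)
for n in (10**2, 10**4, 10**6):
    estimate = np.mean(rng.standard_normal(n) ** 2)
    print(f"n={n:8d}  estimate={estimate:.4f}  |error|={abs(estimate - 1.0):.4f}")
```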
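For the information theory item, a self-contained sketch of entropy and the Kullback-Leibler divergence for discrete distributions (the distributions p and q below are arbitrary examples, not course data):

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) = -sum_i p_i log p_i (in nats), with 0 log 0 := 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) = sum_i p_i log(p_i / q_i).

    Finite only when q puts mass wherever p does (absolute continuity).
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

p = [0.5, 0.5]
q = [0.9, 0.1]
print(entropy(p))           # log 2 ~ 0.693, the maximum for two outcomes
print(kl_divergence(p, q))  # strictly positive since p != q
print(kl_divergence(p, p))  # 0: the divergence vanishes iff p == q
```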
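The outline does not name the three inequalities; Hoeffding's inequality is one standard example, and the following sketch (sample size, threshold, and number of trials chosen arbitrarily) checks its bound empirically:

```python
import numpy as np

# Hoeffding's inequality for X_1, ..., X_n iid Uniform[0, 1]:
#   P(|sample mean - 1/2| >= t) <= 2 * exp(-2 * n * t**2).
rng = np.random.default_rng(1)
n, t, trials = 100, 0.1, 50_000
means = rng.random((trials, n)).mean(axis=1)
empirical_tail = np.mean(np.abs(means - 0.5) >= t)
hoeffding_bound = 2 * np.exp(-2 * n * t**2)
print(f"empirical tail {empirical_tail:.5f} <= bound {hoeffding_bound:.5f}")
```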
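Finally, for concentration of the norm, a quick simulation (dimensions, sample counts, and seed are arbitrary) of the fact that for X ~ N(0, I_n) the Euclidean norm stays within O(1) of sqrt(n):

```python
import numpy as np

# For X ~ N(0, I_n), ||X||_2 concentrates around sqrt(n): the mean grows
# like sqrt(n) while the standard deviation stays O(1) (about 1/sqrt(2)
# for the Gaussian), a hallmark high-dimensional phenomenon.
rng = np.random.default_rng(2)
for n in (10, 1_000, 10_000):
    norms = np.linalg.norm(rng.standard_normal((1_000, n)), axis=1)
    print(f"n={n:6d}  sqrt(n)={np.sqrt(n):7.1f}  "
          f"mean={norms.mean():7.1f}  std={norms.std():.3f}")
```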
By the end of the module the student should be able to:
- Understand the concentration of measure problem in high dimensions
- Distinguish three basic concentration inequalities
- Distinguish between concentration for independent families and for various dependency structures
- Understand the basic concentration of the norm
- Be familiar with random matrices (main properties)
- Understand basic variational problems
- Be familiar with some applications of graphical models
We won't follow a particular book; lecture notes will be provided. The course draws most of its material from the following sources:
- Roman Vershynin, High-Dimensional Probability: An Introduction with Applications in Data Science, Cambridge Series in Statistical and Probabilistic Mathematics, Cambridge University Press (2018).
- Kevin P. Murphy, Machine Learning: A Probabilistic Perspective, MIT Press (2012).
- Simon Rogers and Mark Girolami, A First Course in Machine Learning, CRC Press (2017).
- Alex Kulesza and Ben Taskar, Determinantal Point Processes for Machine Learning, Lecture Notes (2013).