# MA3K0 High Dimensional Probability

**Lecturer:** Stefan Adams

**Term(s):** Not running 2020/21

**Status for Mathematics students:**

**Commitment:** 10 x 3-hour lectures + 9 x 1-hour support classes

**Assessment:** Assessed homework sheets (15%) and summer exam (85%)

**Prerequisites:** ST111 Probability A & B; (MA259 Multivariate Calculus and MA244 Analysis III) or (MA258 Mathematical Analysis III and ST208 Mathematical Methods); MA359 Measure Theory or ST342 Mathematics of Random Events.

Earlier probability modules will be of some use. Only mild probability theory is assumed; for example, ST202 Stochastic Processes and MA3H2 Markov Processes and Percolation Theory provide useful background.

**Leads To:** There are strong links, and thus suitable combinations, with the following modules: MA4K4 Topics in Interacting Particle Systems, MA4F7 Brownian Motion, MA427 Ergodic Theory, MA424 Dynamical Systems, MA4L2 Statistical Mechanics, and MA4L2 Large Deviation Theory.

**Content:**

- Preliminaries on Random Variables (limit theorems, classical inequalities, Gaussian models, Monte Carlo)

- Basic Information Theory (entropy; Kullback-Leibler divergence)

- Concentration of Sums of Independent Random Variables

- Random Vectors in High Dimensions

- Random Matrices

- Concentration with Dependency Structures

- Deviations of Random Matrices and Geometric Consequences

- Graphical models and deep learning
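As a small illustration of the "Concentration of Sums of Independent Random Variables" topic (this sketch is not taken from the module's lecture notes), the following Python snippet estimates by Monte Carlo how often the sample mean of bounded i.i.d. variables deviates from its expectation, and compares this with Hoeffding's bound 2·exp(-2nt²):

```python
import math
import random

# Illustrative sketch (not from the module notes): Hoeffding's inequality
# states that for i.i.d. X_i taking values in [0, 1] with mean mu, the
# sample mean deviates from mu by more than t with probability at most
# 2 * exp(-2 * n * t**2).
random.seed(0)

n = 1000        # number of summands
trials = 2000   # Monte Carlo repetitions
t = 0.05        # deviation threshold

def sample_mean(n):
    # Sample mean of n fair coin flips (so mu = 0.5).
    return sum(random.random() < 0.5 for _ in range(n)) / n

deviations = sum(abs(sample_mean(n) - 0.5) > t for _ in range(trials))
empirical = deviations / trials
hoeffding = 2 * math.exp(-2 * n * t ** 2)

print(f"empirical tail probability: {empirical:.4f}")
print(f"Hoeffding bound:            {hoeffding:.4f}")
```

The empirical tail probability typically falls well below the Hoeffding bound, since the bound holds uniformly over all [0, 1]-valued distributions.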

**Aims:**

- Concentration of measure problem in high dimensions

- Three basic concentration inequalities

- Application of basic variational principles

- Concentration of the norm

- Dependency structures

- Introduction to random matrices
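The "concentration of the norm" aim can be previewed with a short simulation (an illustrative sketch, not part of the module materials): for a standard Gaussian vector X in Rⁿ, the Euclidean norm ‖X‖ concentrates around √n, with fluctuations of constant order regardless of the dimension.

```python
import math
import random

# Illustrative sketch (not from the module notes): the norm of an
# n-dimensional standard Gaussian vector concentrates around sqrt(n).
random.seed(1)

def gaussian_norm(n):
    # Euclidean norm of an n-dimensional standard Gaussian vector.
    return math.sqrt(sum(random.gauss(0.0, 1.0) ** 2 for _ in range(n)))

for n in (100, 1000, 10000):
    samples = [gaussian_norm(n) for _ in range(100)]
    mean = sum(samples) / len(samples)
    spread = math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))
    print(f"n={n:6d}  mean norm = {mean:8.2f}  sqrt(n) = {math.sqrt(n):8.2f}  "
          f"std = {spread:.2f}")
```

The printed standard deviation stays of order one as n grows, while the mean tracks √n; this is the high-dimensional concentration phenomenon the module studies.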

**Objectives:**

**By the end of the module the student should be able to:**

- Understand the concentration of measure problem in high dimensions

- Distinguish three basic concentration inequalities

- Distinguish between concentration for independent families and for various dependency structures

- Understand basic concentration of the norm

- Be familiar with the main properties of random matrices

- Understand basic variational problems

- Be familiar with some applications of graphical models

**Books:**

We won't follow a particular book and will provide lecture notes. The course is based on the following references, with the majority of the material taken from [1]:

[1] Roman Vershynin, *High-Dimensional Probability: An Introduction with Applications in Data Science*, Cambridge Series in Statistical and Probabilistic Mathematics, (2018).

[2] Kevin P. Murphy, *Machine Learning: A Probabilistic Perspective*, MIT Press (2012).

[3] Simon Rogers and Mark Girolami, *A First Course in Machine Learning*, CRC Press (2017).

[4] Alex Kulesza and Ben Taskar, *Determinantal point processes for machine learning*, Lecture Notes (2013).