
ST343 Additional Information

Module Leaders: Professor Chenlei Leng (Statistics), Professor Shahar Mendelson (Statistics),
Dr Arpan Mukhopadhyay (Computer Science)

Most information about this module can be found on the module catalogue page (see left).

(The course regulations listed in the course handbook take precedence over the availability of modules listed in the module catalogue).

The module structure and commitment for this module in 21/22 is expected to involve:

Lectures: 30h (asynchronous*)

Introductory sessions: 3h (week 1, in person)

Assessment method: 100% exam

Office hours: 2h/week

*Asynchronous video recordings (watch any time) covering content equivalent to the lectures they replace

===================================================

Lecturer 1: Prof. Shahar Mendelson, Statistics
 
Title: Complexity measures of subsets of R^n
 
Aims: When is a subset of R^n large? As it happens, there are many notions of size that are encountered in different areas of mathematics. Our focus will be on notions of size that appear naturally in Statistical Learning Theory: the metric entropy, the combinatorial dimension and various mean widths.
I shall explore basic features of those notions. I will also explain how they capture the difficulty of simple recovery problems, such as noiseless approximate recovery. If time permits, I will discuss their connections with the so-called Rademacher averages, which are complexity parameters used to characterize the optimal sample complexity in classical learning scenarios.
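One of the notions mentioned above, metric entropy, is the logarithm of the covering number: the smallest number of balls of radius ε needed to cover a set. The following is a minimal illustrative sketch (not course material; the function name and example are our own) that greedily builds an ε-net of a finite point set, whose size upper-bounds the ε-covering number:

```python
import numpy as np

def greedy_epsilon_net(points, eps):
    """Greedily build an eps-net of a finite point set in R^n.

    Every input point ends up within distance eps of some net point,
    so len(net) upper-bounds the eps-covering number of the set; its
    logarithm is the metric entropy at scale eps.
    (Illustrative sketch only, not from the course notes.)
    """
    net = []
    for x in points:
        # add x to the net only if it is not already eps-covered
        if all(np.linalg.norm(x - c) > eps for c in net):
            net.append(x)
    return net

# Example: points sampled on the unit circle in R^2
rng = np.random.default_rng(0)
angles = rng.uniform(0.0, 2.0 * np.pi, 500)
points = np.column_stack([np.cos(angles), np.sin(angles)])
net = greedy_epsilon_net(points, eps=0.5)
```

By construction the net points are pairwise more than ε apart, so the same set is also an ε-packing, illustrating the duality between packing and covering numbers.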
 
Objectives: By the end of the course the students should have a basic understanding of the complexity measures we shall explore.
 
References: Relevant reading material (course notes) will be posted later.
 
===================================================
Lecturer 2: Dr Arpan Mukhopadhyay, Computer Science
 
Topic: Multi-Armed Bandits
 
Aims: The multi-armed bandit framework covers a large class of sequential
decision-making problems in which, at each step, an agent must choose
an action based on the outcomes (rewards or costs) of its past actions.
The goal of the agent is to learn the "best course of actions" over time.
Such problems have a variety of applications, ranging from medical trials to ad placement on a webpage. The aim of this part of the module is to introduce the basic frameworks
for stochastic and adversarial bandits, devise simple algorithms for both settings,
analyse regret upper bounds for these algorithms, and
(time permitting) prove regret lower bounds for some simple bandit problems.
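A classical algorithm for the stochastic setting described above is UCB1, which balances exploration and exploitation by adding a confidence bonus to each arm's empirical mean. The sketch below (our own minimal illustration, not the lecturer's code; the arm means are made-up) runs UCB1 on a Bernoulli bandit:

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Run UCB1 on a stochastic Bernoulli bandit.

    arm_means: true success probabilities (unknown to the learner).
    Returns total reward collected and the pull count of each arm.
    (Minimal illustrative sketch, not course material.)
    """
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k       # times each arm has been pulled
    sums = [0.0] * k       # total reward observed from each arm
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1    # pull each arm once to initialise
        else:
            # choose the arm maximising empirical mean + confidence bonus
            arm = max(range(k),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2.0 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total, counts

total, counts = ucb1([0.2, 0.5, 0.8], horizon=5000)
```

Over a long horizon the confidence bonus shrinks for well-sampled arms, so suboptimal arms are pulled only O(log T) times each, which is the source of the logarithmic regret upper bounds analysed in the module.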
 
Learning outcomes: By the end of this part of the module students should
be able to (i) understand the difference between stochastic and adversarial bandits;
(ii) distinguish between full-feedback and bandit-feedback settings;
(iii) devise efficient algorithms for multi-armed bandit problems;
(iv) obtain regret upper bounds for simple bandit algorithms; and
(v) obtain regret lower bounds for some simple settings.
 
References:
 
1. Introduction to Multi-Armed Bandits, A. Slivkins, Preprint: https://arxiv.org/pdf/1904.07272.pdf
 
2. Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems, S. Bubeck and N. Cesa-Bianchi, Foundations and Trends in Machine Learning, Vol 5: No 1, 1-122, 2012
 
==========================================
Lecturer 3: Prof. Chenlei Leng, Statistics
 
Topic: High-dimensional Statistics
 
Aims: High-dimensional data are datasets with more variables than observations, and they have become a defining feature of the big data era. In the last few decades, many methods, most notably those based on various notions of sparsity, have undergone rapid development. This part will survey the main developments in this sub-area of statistics, focusing on the main methodologies, theory and algorithms. We will discuss the methodologies developed for fitting linear models, generalized linear models, graphical models, and other related models. A particular emphasis is to discuss the theory for the lasso estimator and debiased estimation for statistical inference.
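The lasso estimator mentioned above solves a least-squares problem penalised by the L1 norm, and a standard way to compute it is coordinate descent with soft-thresholding. The following is a hedged illustrative sketch (our own simplified implementation and synthetic data, not the course's reference code):

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the building block of lasso algorithms."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for the lasso:
        minimise (1/(2n)) ||y - X b||^2 + lam * ||b||_1.
    (Simplified illustrative sketch; no convergence checks.)
    """
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with coordinate j removed
            r = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
    return b

# p > n example: only the first two coefficients are truly nonzero
rng = np.random.default_rng(0)
n, p = 50, 100
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[0], beta_true[1] = 3.0, -2.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)
b_hat = lasso_cd(X, y, lam=0.2)
```

Even though p exceeds n, the L1 penalty sets most estimated coefficients exactly to zero, recovering the sparse support; this variable-selection behaviour, and its accompanying theory, is a central theme of this part of the module.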
 
Objectives: By the end of this part, students should (i) have a good understanding of the theory and various methods for modelling high-dimensional data; (ii) have ideas about how to formulate methods for a new problem and establish the theory under simple settings; (iii) be able to develop algorithms for a particular problem at the conceptual level.
 
Reference:
 
Wainwright, M. (2019). High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge University Press.
=========================================================================================