
ST419: Advanced Topics in Data Science

Lecturer(s): Dr Paul Jenkins, Dr Theo Damoulas and Dr Adam Johansen
Please note that the topics covered in this module may change from year to year.

Prerequisite(s): One of ST219 Mathematical Statistics B, ST220 Introduction to Mathematical Statistics, or CS260 Algorithms.

Commitment: 3 lectures per week for 10 weeks. This module runs in Term 2.

Content: Three self-contained sets of ten lectures in Term 2.

Title: Artificial Neural Networks

Lecturer: Dr Paul Jenkins

Aims: Artificial neural networks (NNs) are a class of learning algorithms for regression, classification, and unsupervised learning that mimic biological neural networks. They are very flexible and have become hugely popular in recent years. This topic will provide an introduction to the theory and practice of artificial NNs for supervised learning, building up from simple single-layer feed-forward networks to complex multi-layer 'deep' architectures. We will cover some theory, such as universal approximation theorems, as well as practicalities like training and regularization. We will also cover extensions including recurrent NNs, convolutional NNs, and (time permitting) unsupervised NNs.
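As a concrete illustration of layers, weights, and activation functions, here is a minimal forward pass through a toy 2-3-1 feed-forward network in plain Python. The network shape, weights, and biases below are arbitrary choices for illustration, not course material:

```python
import math

def relu(z):
    return max(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases, activation):
    """One fully connected layer: each output unit applies an
    activation to a weighted sum of the inputs plus a bias."""
    return [activation(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Illustrative weights for a tiny 2-input, 3-hidden-unit, 1-output network.
W1 = [[0.5, -0.2], [0.8, 0.1], [-0.4, 0.9]]   # hidden layer: 3 units, 2 inputs each
b1 = [0.0, 0.1, -0.1]
W2 = [[1.0, -1.0, 0.5]]                        # output layer: 1 unit, 3 inputs
b2 = [0.2]

x = [1.0, 2.0]
hidden = layer(x, W1, b1, relu)          # hidden activations
output = layer(hidden, W2, b2, sigmoid)  # output squashed into (0, 1)
```

Stacking more `layer` calls gives the multi-layer 'deep' architectures mentioned above.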

Objectives: By the end of the course students should be able to (1) explain the key concepts of artificial NNs, such as activation functions, layers, weights; (2) describe and implement a gradient descent algorithm for fitting a NN; (3) discuss the issues arising in training NNs, such as overfitting, instability, and computational cost.
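Objective (2) can be sketched in a few lines: stochastic gradient descent fitting a single sigmoid unit to the AND function by minimising squared error. The data, learning rate, and epoch count are illustrative assumptions, not the course's examples:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training data for the AND function: inputs and 0/1 targets.
data = [((0.0, 0.0), 0.0), ((0.0, 1.0), 0.0),
        ((1.0, 0.0), 0.0), ((1.0, 1.0), 1.0)]

random.seed(0)
w = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]
b = 0.0
lr = 0.5

def predict(x):
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

def mse():
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

initial_loss = mse()
for epoch in range(5000):
    for x, y in data:
        p = predict(x)
        # chain rule: d/dw_i of (p - y)^2 is 2 (p - y) * p (1 - p) * x_i
        g = 2.0 * (p - y) * p * (1.0 - p)
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g
final_loss = mse()
```

Training a multi-layer network follows the same pattern, with backpropagation supplying the gradients for every layer's weights.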

References: Deep Learning, I. Goodfellow, Y. Bengio and A. Courville, MIT Press (2016).

Title: Introduction to Reinforcement Learning

Lecturer: Dr Theo Damoulas

Aims: Reinforcement Learning (RL) is one of the main subfields of machine learning, alongside supervised and unsupervised learning, focusing on decision making under implicit feedback. As such, it is heavily employed and developed in areas such as robotics and in AI engines for games like Go and Chess.
This topic will introduce the field of RL and the standard agent-environment framework, covering Bellman's equations, dynamic programming, Monte Carlo methods, and Temporal-Difference learning.
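To give a flavour of dynamic programming with Bellman's optimality equation, here is a value-iteration sketch on a toy deterministic chain. The environment (four states, reward 1 for entering the terminal state, discount 0.9) is invented purely for illustration:

```python
# Toy chain: states 0..3; actions move left (-1) or right (+1), clamped
# at the ends. State 3 is terminal; the transition into it yields reward 1.
N, gamma = 4, 0.9

def step(s, a):
    s2 = min(max(s + a, 0), N - 1)
    return s2, (1.0 if s2 == N - 1 else 0.0)

V = [0.0] * N
for sweep in range(100):
    # Bellman optimality backup: V(s) <- max_a [ r(s,a) + gamma * V(s') ]
    V = [0.0 if s == N - 1 else
         max(r + gamma * V[s2] for s2, r in (step(s, a) for a in (-1, +1)))
         for s in range(N)]
```

Each sweep applies the Bellman backup synchronously to every state; here the values converge to 0.9 raised to the distance from the goal.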

Objectives: By the end of the topic students should be able to (1) explain key concepts in RL such as the exploration-exploitation trade-off, discounting, MDPs, policy iteration, Bellman's equations, TD learning and Q-learning; (2) implement basic RL algorithms; (3) understand basic issues such as the curse of dimensionality and the differences between on- and off-policy control.
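A minimal tabular Q-learning sketch shows the exploration-exploitation trade-off (epsilon-greedy action selection) and the off-policy TD update. The chain environment, hyperparameters, and episode count are all illustrative assumptions:

```python
import random

# Toy chain: states 0..4, actions move left/right (clamped at 0);
# reaching state 4 ends the episode with reward 1, other steps give 0.
N = 5
ACTIONS = (-1, +1)
gamma, alpha, eps = 0.9, 0.5, 0.1

random.seed(1)
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != N - 1:
        # epsilon-greedy: explore with probability eps (and on exact ties)
        if random.random() < eps or Q[(s, -1)] == Q[(s, 1)]:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0
        # off-policy TD update: the target maximises over next-state actions
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# greedy policy read off the learned action values
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)}
```

Because the update target maximises over the next state's actions regardless of the action the agent actually takes, this is off-policy control; replacing the max with the action chosen by the behaviour policy would give the on-policy variant (SARSA).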

References: Reinforcement Learning: An Introduction, R. S. Sutton and A. G. Barto, MIT Press (2002).


Title: Modelling the Written Word: Compression and Human-Computer Interfaces

Lecturer: Dr Adam Johansen

Aims: Modelling of written words, viewed as streams of symbols from a finite alphabet, is a rich field with an extensive literature. This topic will provide an introduction to some probabilistic approaches to this problem and will show how these models can be used to efficiently store written text and also to provide efficient mechanisms for entry of text into computer systems which can be used without mastering the keyboard.
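One of the simplest probabilistic models of a symbol stream is a character bigram model, estimated by relative frequency. The sketch below is an illustrative assumption about the simplest case, not the course's own construction; note that the negative log-probability it computes is the ideal code length for the string, which is the link to compression:

```python
import math
from collections import Counter, defaultdict

def train_bigram(text):
    """Estimate P(next char | current char) by relative frequency."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(text, text[1:]):
        counts[cur][nxt] += 1
    return {cur: {nxt: n / sum(c.values()) for nxt, n in c.items()}
            for cur, c in counts.items()}

def log2_prob(s, model):
    """Log2-probability of a string under the model (first char given);
    its negation is the ideal code length for s, in bits."""
    return sum(math.log2(model[cur][nxt]) for cur, nxt in zip(s, s[1:]))
```

Real systems use longer contexts and smoothing so that unseen character pairs do not receive probability zero, but the training and scoring pattern is the same.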

Objectives: By the end of the course students should be able to

(1) train and use simple probabilistic models for strings of characters to describe written languages;
(2) describe and implement simple symbol- and stream-coding and decoding algorithms (such as Huffman coding and arithmetic coding);
(3) explain the connection between data compression and data entry and hence how to use simple probabilistic models of written language to facilitate efficient entry of text into a computer system.
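Objective (2) can be sketched with a small Huffman coder: a greedy bottom-up merge of the two least frequent subtrees, repeated until one tree remains. The helper names below are illustrative choices, not a prescribed interface:

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Build a prefix code from symbol frequencies (greedy bottom-up merge)."""
    freq = Counter(text)
    # heap entries: (weight, tiebreak, tree); a tree is either a symbol
    # or a (left, right) pair of subtrees
    heap = [(w, i, sym) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"
    walk(heap[0][2], "")
    return codes

def encode(text, codes):
    return "".join(codes[c] for c in text)

def decode(bits, codes):
    # prefix-freeness guarantees each codeword is recognised unambiguously
    inv = {v: k for k, v in codes.items()}
    out, cur = [], ""
    for bit in bits:
        cur += bit
        if cur in inv:
            out.append(inv[cur])
            cur = ""
    return "".join(out)
```

Frequent symbols end up near the root and so get short codewords; arithmetic coding, covered in the lectures, removes Huffman's whole-bit-per-symbol restriction.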


References:

  • Information Theory, Inference, and Learning Algorithms, D. J. C. MacKay, Cambridge University Press (2003).
  • Non-uniform Random Variate Generation, L. Devroye, Springer-Verlag (1986).
  • Fast Hands-free Writing by Gaze Direction, D. J. Ward and D. J. C. MacKay, Nature 418(6900):838 (2002).


Students will be given selected advanced research material for independent study and examination.

Assessment: 100% by 2-hour examination.