
APTS module: Statistical Inference

Module leader: Michael Goldstein

Please see the full Module Specifications for background information relating to all of the APTS modules, including how to interpret the information below.

Aims: To explore a number of statistical principles, such as the likelihood principle and sufficiency principle, and their logical implications for statistical inference. To consider the nature of statistical parameters, the different viewpoints of Bayesian and Frequentist approaches and their relationship with the given statistical principles. To introduce the idea of inference as a statistical decision problem. To understand the meaning and value of ubiquitous constructs such as p-values, confidence sets, and hypothesis tests.

Learning outcomes: An appreciation for the complexity of statistical inference, recognition of its inherent subjectivity and the role of expert judgement, the ability to critique familiar inference methods, knowledge of the key choices that must be made, and scepticism about apparently simple answers to difficult questions.

Preliminaries: Students should have taken at least one course in probability and one in statistics. Students should be familiar with: the idea of a statistical model, statistical parameters, the likelihood function, estimators, the maximum likelihood estimator, confidence intervals and hypothesis tests, p-values, Bayesian inference, and prior and posterior distributions.
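As a quick refresher on these preliminaries, the sketch below works through a single Bernoulli example in Python. It is illustrative only (not part of the module materials); the sample counts, the Beta(1, 1) prior, and the use of the large-sample Wald interval are all assumptions made here for the example.

```python
import math

# Bernoulli model: k successes in n trials; unknown parameter p.
n, k = 50, 31

# Maximum likelihood estimator: p_hat = k / n.
p_hat = k / n

# Approximate 95% (Wald) confidence interval, from the large-sample
# normal approximation to the sampling distribution of the MLE.
se = math.sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Two-sided p-value for the hypothesis test of H0: p = 0.5,
# again via the normal approximation (Phi computed with math.erf).
z = (p_hat - 0.5) / math.sqrt(0.25 / n)
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Bayesian inference with a conjugate Beta(1, 1) (uniform) prior:
# the posterior distribution is Beta(1 + k, 1 + n - k).
post_mean = (1 + k) / (2 + n)

print(f"MLE: {p_hat:.3f}, 95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"p-value vs p = 0.5: {p_value:.3f}, posterior mean: {post_mean:.3f}")
```

Note that the MLE and the posterior mean are close but not equal; the module examines when and why Bayesian and frequentist answers agree or diverge.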

Further information on all of these topics can be found in standard undergraduate statistics textbooks, for example

  • J. A. Rice, 1999, Mathematical Statistics and Data Analysis, 2nd edn, Duxbury Press (a more recent edition is available); or
  • Morris H. DeGroot and Mark J. Schervish, 2002, Probability & Statistics, 3rd edn, Addison Wesley. Prof. Schervish maintains a list of errata for the book.

More advanced treatments can be found in

  • G. A. Young and R. L. Smith, 2005, Essentials of Statistical Inference, Cambridge University Press.
  • A. C. Davison, 2003, Statistical Models, Cambridge University Press. This book also contains a wealth of applications. Prof. Davison maintains a list of errata for the book.


Topics:
  1. What is statistics? Statistical models, prediction and inference, Frequentist and Bayesian approaches.
  2. Principles of inference: the Likelihood Principle, Birnbaum's Theorem, the Stopping Rule Principle, implications for different approaches.
  3. Decision theory: Bayes Rules, admissibility, and the Complete Class Theorems. Implications for point and set estimation, and for hypothesis testing.
  4. Likelihood-based estimators and their large-sample properties. Confidence sets, hypothesis testing, and p-values. Relationships between Bayesian and frequentist intervals.
  5. Limitations of models of statistical inference. Exchangeability representations. Lessons from Uncertainty Quantification.
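Topic 2's tension between the Likelihood Principle and the Stopping Rule Principle can be seen in a classic numerical example, sketched below in Python. The data (9 heads and 3 tails) and the two sampling designs are the standard illustrative choices, not taken from the module materials.

```python
from math import comb

# Data: 9 heads and 3 tails; test H0: P(head) = 0.5 against p > 0.5.
# Under BOTH designs below the likelihood is proportional to
# p^9 (1 - p)^3, so by the Likelihood Principle the evidence about p
# is the same -- yet the frequentist p-value depends on the stopping rule.

# Design A: toss exactly n = 12 times (binomial sampling).
# p-value = P(at least 9 heads in 12 tosses | p = 0.5).
p_binom = sum(comb(12, k) for k in range(9, 13)) / 2**12

# Design B: toss until the 3rd tail appears (negative binomial sampling).
# Observing 12 tosses is extreme if 12 or more tosses were needed, so
# p-value = 1 - P(3rd tail arrives within the first 11 tosses),
# where P(3rd tail on toss n) = comb(n - 1, 2) * (1/2)^n under H0.
p_nbinom = 1 - sum(comb(n - 1, 2) / 2**n for n in range(3, 12))

print(f"binomial p-value:          {p_binom:.4f}")   # ~0.073
print(f"negative binomial p-value: {p_nbinom:.4f}")  # ~0.033
```

At the conventional 5% level the two designs lead to opposite conclusions from identical data, which is exactly the kind of implication the module's principles of inference are designed to expose.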

Assessment: General questions on the implementation of different approaches in particular types of
inference, possibly involving additional reading.