
Events


How do I use this calendar?

You can click on an event to display further information about it.

The toolbar above the calendar has buttons for navigating between events. Use the left and right arrow icons to view events in the past and future. The button in between returns you to today's view. The button to the right of this shows a mini-calendar to let you quickly jump to any date.

The dropdown box on the right allows you to see a different view of the calendar, such as an agenda or a termly view.

If this calendar has tags, you can use the labelled checkboxes at the top of the page to select just the tags you wish to view, and then click "Show selected". The calendar will be redisplayed with just the events related to these tags, making it easier to find what you're looking for.

 
Tue 19 May, '26
-
Research Committee
MB0.08
Tue 19 May, '26
-
Statistical Learning & Inference Seminars
(please see webpage for location details)
Tue 19 May, '26
-
Management Group
MB1.05
Wed 20 May, '26
-
SF@W Seminars
B3.03 (Zeeman)
Thu 21 May, '26
-
Research SSLC
MB0.08
Thu 21 May, '26
-
YRM
Stats Common Room
Fri 22 May, '26
-
Algorithms & Computationally Intensive Inference Seminars
MB0.08
Mon 25 May, '26
-
Staff Forum
Stats Common Room
Tue 26 May, '26
-
Statistical Learning & Inference Seminars
(please see webpage for location details)
Thu 28 May, '26
-
CRiSM colloquium - Nicholas Polson
MB0.07

Chess has long been a proving ground for AI and statistical reasoning. This talk takes a Bayesian look at two recent flashpoints in elite play. The centerpiece is joint work with Shiva Maharaj (Chess Ed) and Vadim Sokolov (George Mason) on the 2023 Kramnik–Nakamura controversy, in which former world champion Vladimir Kramnik publicly questioned Hikaru Nakamura’s 45.5 out of 46 streak in 3+0 online blitz on chess.com. Combining Anand’s prior on the prevalence of online cheating with the streak evidence, we compute a posterior of roughly 99.6% that Nakamura did not cheat. The case study illustrates two classic fallacies — the Prosecutor’s Fallacy on Kramnik’s side, and a misuse of cherry-picking that violates the likelihood principle on Nakamura’s side — and connects to the broader literature on fraud detection and streaks in sports. I will then survey related work with the same group: a Brownian-motion model for the probability that Magnus Carlsen reaches an Elo of 2900 and its implications for the K-factor; a neural-network valuation of (piece, square) combinations; and a comparison of Stockfish and Leela Chess Zero as competing paradigms — handcrafted search versus deep reinforcement learning — through Plaskett’s endgame study.
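The Bayes step described in the abstract can be sketched as a two-hypothesis posterior update. The prior and likelihood numbers below are purely illustrative assumptions, not the values used in the talk (which combines Anand's prior on the prevalence of online cheating with the actual streak likelihoods):

```python
# Two-hypothesis Bayes update: how strongly does the streak evidence
# favour fair play? All numbers here are illustrative assumptions.
prior_cheat = 0.01           # assumed prior probability of cheating
p_streak_given_cheat = 0.5   # assumed P(45.5/46 streak | cheating)
p_streak_given_fair = 0.2    # assumed P(streak | fair play by a top blitz player)

# Posterior P(fair | streak) via Bayes' rule over the two hypotheses.
posterior_fair = (p_streak_given_fair * (1 - prior_cheat)) / (
    p_streak_given_fair * (1 - prior_cheat)
    + p_streak_given_cheat * prior_cheat
)
print(f"P(fair | streak) = {posterior_fair:.4f}")
```

With the talk's actual inputs, the same calculation yields the roughly 99.6% posterior quoted above.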

Thu 28 May, '26
-
YRM
Stats Common Room
Fri 29 May, '26
-
Algorithms & Computationally Intensive Inference Seminars
MB0.08
Tue 2 Jun, '26
-
Statistical Learning & Inference Seminars
(please see webpage for location details)
Wed 3 Jun, '26
-
SF@W Seminars
B3.03 (Zeeman)
Wed 3 Jun, '26
-
WEDIC
MB5.19
Thu 4 Jun, '26
-
YRM
Stats Common Room
Fri 5 Jun, '26
-
Algorithms & Computationally Intensive Inference Seminars
MB0.08
Mon 8 Jun, '26
-
Staff Forum
Stats Common Room
Tue 9 Jun, '26
-
Statistical Learning & Inference Seminars
(please see webpage for location details)
Tue 9 Jun, '26
-
Management Group
MB1.05
Wed 10 Jun, '26
-
CRiSM colloquium - Bin Yu
B3.03

tbc

Thu 11 Jun, '26
-
YRM
Stats Common Room
Fri 12 Jun, '26
-
Algorithms & Computationally Intensive Inference Seminars
MB0.08
Tue 16 Jun, '26
-
Statistical Learning & Inference Seminars
(please see webpage for location details)
Wed 17 Jun, '26
-
IT Committee
Teams
Thu 18 Jun, '26
-
YRM
Stats Common Room
Fri 19 Jun, '26
-
Algorithms & Computationally Intensive Inference Seminars
MB0.08
Mon 22 Jun, '26
-
Staff Forum
Stats Common Room
Mon 22 Jun, '26 - Wed 24 Jun, '26
13:00 - 14:00
ProbAI Theory of Scaling Laws Workshop 2026
University of Warwick, Zeeman Building, MS.01

Runs from Monday, June 22 to Wednesday, June 24.

Overview

Modern neural networks operate at unprecedented scales across model size, data and compute. A central research problem is to understand how their performance scales with these factors, which guides how networks can be trained optimally at scale. In recent years, empirical heuristics for scaling have arguably driven much of the success of Large Language Models (LLMs). Theoretical work on scaling laws has also seen much fruitful progress, shedding light on empirical phenomena such as model collapse, emergence and training stability, while providing concrete practical insights on techniques such as hyperparameter tuning.

This three-day workshop will bring together researchers working at the frontiers of theoretical scaling laws to share their insights about the field. The workshop will be the first of its kind in the UK, inspired by the success of similar workshops in the US and Europe.

  • The first half of the workshop will consist of introductory tutorials, aimed at equipping attendees with the basic tools for framing and understanding problems in this field;
  • The second half will feature talks on the latest research advances.

The aim is for researchers across academia and industry to learn about and participate in this active field of research, which has already produced many practical empirical advances.
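Empirical scaling laws of the kind the workshop studies are often summarised as power laws in model size, data, or compute. As a minimal sketch, the snippet below fits the simplest such form, L(N) ≈ a · N^(-b), to synthetic (parameter count, loss) pairs via linear regression in log-log space; the data points are invented for illustration, not measurements from any real model:

```python
import math

# Synthetic (parameter count N, loss L) pairs lying close to a power law.
data = [(1e6, 4.0), (1e7, 2.52), (1e8, 1.59), (1e9, 1.0)]

# In log-log space the power law is linear: log L = log a - b * log N,
# so an ordinary least-squares line fit recovers the exponent b.
xs = [math.log(n) for n, _ in data]
ys = [math.log(loss) for _, loss in data]
k = len(data)
mx, my = sum(xs) / k, sum(ys) / k
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
    (x - mx) ** 2 for x in xs
)
b = -slope                       # scaling exponent
a = math.exp(my - slope * mx)    # prefactor
print(f"L(N) = {a:.2f} * N^(-{b:.3f})")
```

The fitted exponent b is the quantity of interest in practice: it predicts how much extra loss reduction each order of magnitude of scale buys, which is what compute-optimal training recipes are built on.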

Tue 23 Jun, '26
-
Statistical Learning & Inference Seminars
(please see webpage for location details)

Placeholder
