Events
Tue 9 Jan, '18
Simulation Reading Group - C1.06

Tue 9 Jan, '18
YRM (3pm) - Common Room (C0.06)

Wed 10 Jan, '18
Dept Council Meeting - Radcliffe House

Wed 10 Jan, '18
SSLC - C1.06

Wed 10 Jan, '18
Probability Seminars - B3.02

Thu 11 Jan, '18
Machine Learning Reading Group - C1.06

Fri 12 Jan, '18
Algorithms and Computationally Intensive Inference seminars - C1.06

Fri 12 Jan, '18
APTS Executive Committee - C1.06

Tue 16 Jan, '18
Simulation Reading Group - C1.06

Wed 17 Jan, '18
Teaching Committee - C1.06

Wed 17 Jan, '18
Probability Seminars - B3.02

Thu 18 Jan, '18
Machine Learning Reading Group - C1.06

Fri 19 Jan, '18
Algorithms and Computationally Intensive Inference seminars - C1.06
Fri 19 Jan, '18
CRiSM Seminar - MA_B1.01

Jonas Peters (Department of Mathematical Sciences, University of Copenhagen)
Invariant Causal Prediction
Abstract: Why are we interested in the causal structure of a process? In classical prediction tasks such as regression, it seems that no causal knowledge is required. In many situations, however, we want to understand how a system reacts under interventions, e.g. in gene knock-out experiments. Here, causal models become important because they are usually considered invariant under such changes. A causal prediction uses only direct causes of the target variable as predictors; it remains valid even if we intervene on predictor variables or change the whole experimental setting. In this talk, we show how we can exploit this invariance principle to estimate causal structure from data. We apply the methodology to data sets from biology, epidemiology, and finance. The talk does not require any prior knowledge of causal concepts.
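A minimal illustration of the invariance principle described above, not the speaker's implementation (a reference implementation is the R package InvariantCausalPrediction): for every candidate predictor set, check with standard tests whether pooled linear-regression residuals look identically distributed across environments, then intersect the accepted sets. The function name, data layout, and the choice of ANOVA plus Levene tests are illustrative assumptions.

```python
# Toy sketch of Invariant Causal Prediction under a linear model.
from itertools import chain, combinations

import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression


def invariant_sets(X, y, env, alpha=0.05):
    """Intersect all predictor subsets whose pooled regression
    residuals pass invariance tests across environments."""
    d = X.shape[1]
    accepted = []
    for S in chain.from_iterable(combinations(range(d), k) for k in range(d + 1)):
        idx = list(S)
        if idx:
            model = LinearRegression().fit(X[:, idx], y)
            resid = y - model.predict(X[:, idx])
        else:
            resid = y - y.mean()  # empty set: intercept-only model
        groups = [resid[env == e] for e in np.unique(env)]
        # Invariance check: equal residual means (one-way ANOVA) and
        # equal residual variances (Levene test) across environments.
        if (stats.f_oneway(*groups).pvalue > alpha
                and stats.levene(*groups).pvalue > alpha):
            accepted.append(set(S))
    # Variables appearing in every accepted set form the estimate of
    # (a subset of) the direct causes of y.
    return set.intersection(*accepted) if accepted else set()
```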
David Ginsbourger (Idiap Research Institute and University of Bern, http://www.ginsbourger.ch)
Abstract: Gaussian Process models have been used in a number of problems where an objective function f needs to be studied based on a drastically limited number of evaluations. Global optimization algorithms based on Gaussian Process models have been investigated for several decades and have become quite popular, notably in the design of computer experiments. Further classes of problems involving the estimation of sets implicitly defined by f, e.g. sets of excursion above a given threshold, have also inspired multiple research developments. In this talk, we will give an overview of recent results and challenges pertaining to the estimation of sets under Gaussian Process priors, with a particular interest in the quantification and sequential reduction of the associated uncertainties. Based on a series of joint works primarily with Dario Azzimonti, François Bachoc, Julien Bect, Mickaël Binois, Clément Chevalier, Ilya Molchanov, Victor Picheny, Yann Richet and Emmanuel Vazquez.
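As a pointer to the kind of computation the abstract describes, here is a minimal sketch (not the speaker's code) of plug-in excursion-set estimation under a Gaussian Process prior, using scikit-learn's GaussianProcessRegressor; the 1-d test function, threshold, and uncertainty summary (expected volume of misclassified points) are illustrative assumptions.

```python
# Estimate the excursion set {x : f(x) > t} from a few evaluations of f.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF


def f(x):
    return np.sin(3 * x) + 0.5 * x  # stands in for an expensive black box


t = 0.8                                                  # excursion threshold
X_train = np.array([[0.1], [0.6], [1.3], [2.0], [2.7]])  # few evaluations

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-8)
gp.fit(X_train, f(X_train).ravel())

grid = np.linspace(0.0, 3.0, 300)[:, None]
mu, sd = gp.predict(grid, return_std=True)

# Posterior probability that f exceeds t at each grid point.
p_exc = norm.sf((t - mu) / (sd + 1e-12))

# Plug-in excursion-set estimate, and a simple uncertainty summary:
# the expected volume of misclassified points over the domain [0, 3].
excursion_est = grid[p_exc > 0.5]
expected_misclassified = 3.0 * float(np.mean(p_exc * (1.0 - p_exc)))
```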

Fri 19 Jan, '18
CRiSM Seminar - A1.01

Tue 23 Jan, '18
Simulation Reading Group - C1.06

Tue 23 Jan, '18
YRM (3pm) - Common Room (C0.06)

Wed 24 Jan, '18
Probability Seminars - B3.02

Thu 25 Jan, '18
Machine Learning Reading Group - C1.06

Fri 26 Jan, '18
OxWaSP - C0.08
Module 5 (26 January), organised by Jim Smith (Warwick) and François Caron (Oxford)

14:00-15:00 Mihaela van der Schaar (Oxford Man)
AutoPrognosis
Mihaela's work uses data science and machine learning to create models that assist diagnosis and prognosis. Existing models suffer from two kinds of problems. Statistical models that are driven by theory/hypotheses are easy to apply and interpret, but they make many assumptions and often have inferior predictive accuracy. Machine learning models can be crafted to the data and often have superior predictive accuracy, but they are often hard to interpret and must be crafted for each disease … and there are a lot of diseases. In this talk I present a method (AutoPrognosis) that makes machine learning itself do both the crafting and the interpreting. For medicine, this is a complicated problem because missing data must be imputed, relevant features/covariates must be selected, and the most appropriate classifier(s) must be chosen. Moreover, there is no one "best" imputation algorithm, feature-processing algorithm, or classification algorithm; some imputation algorithms will work better with a particular feature-processing algorithm and a particular classifier in a particular setting. To deal with these complications, we need an entire pipeline. Because there are many pipelines, we need a machine learning method for this purpose, and this is exactly what AutoPrognosis is: an automated process for creating a particular pipeline for each particular setting. Using a variety of medical datasets, we show that AutoPrognosis achieves performance that is significantly superior to existing clinical approaches and statistical and machine learning methods.

15:30-16:30 Jim Griffin (Kent)
Bayesian nonparametric vector autoregressive models
Vector autoregressive (VAR) models are the main work-horse model for macroeconomic forecasting, and provide a framework for the analysis of the complex dynamics present between macroeconomic variables. Whether a classical or a Bayesian approach is adopted, most VAR models are linear with Gaussian innovations. This can limit the model's ability to explain the relationships in macroeconomic series. We propose a nonparametric VAR model that allows for nonlinearity in the conditional mean, heteroscedasticity in the conditional variance, and non-Gaussian innovations. Our approach differs from that of previous studies by modelling the stationary and transition densities using Bayesian nonparametric methods. Our Bayesian nonparametric VAR (BayesNP-VAR) model is applied to US and UK macroeconomic time series, and compared to other Bayesian VAR models. We show that BayesNP-VAR is a flexible model that is able to account for nonlinear relationships as well as heteroscedasticity in the data. In terms of short-run out-of-sample forecasts, we show that BayesNP-VAR predictively outperforms competing models.
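AutoPrognosis itself uses Bayesian optimisation over complete pipelines; as a rough, hedged stand-in for the idea in the first abstract above, the sketch below exhaustively scores combinations from small illustrative lists of imputers and classifiers by cross-validation and keeps the best scikit-learn pipeline. The dataset and component lists are assumptions, not those used in the talk.

```python
# Naive pipeline search: a stand-in for AutoPrognosis's Bayesian
# optimisation over (imputation, feature processing, classifier) choices.
from itertools import product

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.impute import KNNImputer, SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # stand-in medical dataset

imputers = [SimpleImputer(strategy="mean"), KNNImputer(n_neighbors=5)]
classifiers = [
    LogisticRegression(max_iter=2000),
    RandomForestClassifier(n_estimators=200),
    GradientBoostingClassifier(),
]

# Score every imputer/classifier combination and keep the best pipeline.
best = max(
    (make_pipeline(imp, StandardScaler(), clf)
     for imp, clf in product(imputers, classifiers)),
    key=lambda pipe: cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean(),
)
print(best)
```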
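For the second abstract: BayesNP-VAR relaxes exactly the linearity and Gaussianity assumed by the classical VAR below. This sketch simulates and fits a plain Gaussian VAR(1) by least squares, purely to fix the baseline being generalised; all numbers are invented.

```python
# Classical Gaussian VAR(1) baseline: Y_t = A Y_{t-1} + eps_t.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.5, 0.1],
              [0.0, 0.4]])          # true coefficient matrix (invented)
T, k = 500, 2
Y = np.zeros((T, k))
for t in range(1, T):
    Y[t] = A @ Y[t - 1] + rng.normal(scale=0.1, size=k)

# Least-squares fit: regress Y_t on Y_{t-1} (this linear-Gaussian step is
# what the talk's Bayesian nonparametric model replaces).
B, *_ = np.linalg.lstsq(Y[:-1], Y[1:], rcond=None)
A_hat = B.T
print(np.round(A_hat, 2))
```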

Fri 26 Jan, '18
Algorithms and Computationally Intensive Inference seminars - C1.06

Fri 26 Jan, '18
OxWaSP Mini-Symposia - MS_B3.03

Mon 29 Jan, '18
Assistant or Associate Professor Presentations - D1.07

Tue 30 Jan, '18
Simulation Reading Group - C1.06

Tue 30 Jan, '18
YRM - Common Room (C0.06)

Wed 31 Jan, '18
PhD Open Day - C0.06, Common Room

Wed 31 Jan, '18
Management Group - C0.08

Wed 31 Jan, '18
Probability Seminars - B3.02

Thu 1 Feb, '18
Machine Learning Reading Group - C1.06

Fri 2 Feb, '18
Algorithms and Computationally Intensive Inference seminars - C1.06