Events

Thu 28 Jan, '10
-
CRiSM Seminar - Jan Palczewski
A1.01
Dr Jan Palczewski (University of Leeds)
Why are Markowitz portfolio weights so volatile?
Markowitz's theory of asset allocation is one of very few research ideas that have made it into practical finance. Yet its investment recommendations exhibit incredible sensitivity to even the smallest variations in the estimation horizon or estimation technique. Scientists as well as practitioners have put enormous effort into stabilizing the estimators of portfolios (with moderate success, according to some). However, there seems to be no simple quantitative method for measuring portfolio stability. In this talk, I will derive analytical formulas that relate the mean and the covariance matrix of asset returns to the stability of the portfolio composition. These formulas allow for the identification of the main culprits behind the worse-than-expected performance of the Markowitz framework. In particular, I will question the common wisdom that puts the main responsibility on estimation errors in the mean.

This research is a spin-off of a consultancy project at the University of Warsaw regarding the allocation of the foreign reserves of the Polish Central Bank.
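
A minimal numerical sketch of the sensitivity described above (a toy three-asset universe with invented figures, not material from the talk): small perturbations of the estimated mean vector can move the mean-variance weights substantially.

import numpy as np

rng = np.random.default_rng(0)

# Toy inputs: estimated mean and covariance for three assets (purely illustrative).
mu = np.array([0.05, 0.06, 0.07])
Sigma = np.array([[0.040, 0.018, 0.012],
                  [0.018, 0.050, 0.020],
                  [0.012, 0.020, 0.060]])

def markowitz_weights(mu_hat, Sigma):
    """Unconstrained mean-variance weights, rescaled to sum to one."""
    w = np.linalg.solve(Sigma, mu_hat)
    return w / w.sum()

print("weights at the estimated mean:", markowitz_weights(mu, Sigma))

# Perturb the mean estimate by roughly one percentage point and watch the weights move.
for _ in range(3):
    noise = rng.normal(scale=0.01, size=3)
    print("perturbed weights:          ", markowitz_weights(mu + noise, Sigma))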
Thu 11 Feb, '10
-
CRiSM Seminar - Alexander Schied (Mannheim)
A1.01
Alexander Schied (Mannheim)
Mathematical aspects of market impact modeling
Abstract: In this talk, we discuss the problem of executing large orders in illiquid markets so as to optimize the resulting liquidity costs. There are several reasons why this problem is relevant. On the mathematical side, it leads to interesting nonlinearity effects that arise from the price feedback of strategies. On the economic side, it helps in understanding which market impact models are viable, because the analysis of order execution provides a test for the existence of undesirable properties of a model. In the first part of the talk, we present market impact models with transient price impact, modeling the resilience of electronic limit order books. In the second part of the talk, we consider the Almgren-Chriss market impact model and analyze the effects of risk aversion on optimal strategies using stochastic control methods. In the final part, we discuss effects that occur in a multi-player equilibrium.
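
As a rough illustration of the kind of optimal-execution computation the Almgren-Chriss framework involves (a sketch only, with made-up parameter values; the closed-form trajectory below is the standard textbook solution for a risk-averse trader with linear temporary impact, not a result specific to this talk):

import numpy as np

# Almgren-Chriss style inputs (illustrative values).
X0 = 1_000_000        # shares to liquidate
T, N = 1.0, 50        # horizon (days) and number of trading intervals
sigma = 0.95          # daily volatility of the asset price
eta = 2.5e-6          # temporary impact coefficient
lam = 2.0e-6          # risk-aversion parameter

kappa = np.sqrt(lam * sigma**2 / eta)   # urgency parameter
t = np.linspace(0.0, T, N + 1)

# Optimal remaining inventory: selling is front-loaded when risk aversion is high.
x = X0 * np.sinh(kappa * (T - t)) / np.sinh(kappa * T)

print("inventory at t=0, T/2, T:", x[0], x[N // 2], x[-1])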
Thu 18 Feb, '10
-
CRiSM Seminar - Theo Kypraios (Nottingham)
A1.01
Theo Kypraios (Nottingham)
A novel class of semi-parametric time series models: Construction and Bayesian Inference
Abstract
--------
In this talk a novel class of semi-parametric time series models will be presented, for which we can specify in advance the marginal distribution of the observations and then build the dependence structure of the observations around it by introducing an underlying stochastic process termed a 'latent branching tree'. It will be demonstrated how we can draw Bayesian inference for the model parameters using Markov chain Monte Carlo methods as well as Approximate Bayesian Computation methodology. Finally, these models will be fitted to a real dataset on genome scheme data, and we will also discuss how this kind of model can be used in modelling Internet traffic.
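
The abstract mentions Approximate Bayesian Computation alongside MCMC. A minimal ABC rejection sampler (generic, not the latent-branching-tree model itself; the simulator and summary statistic below are stand-ins) looks like this:

import numpy as np

rng = np.random.default_rng(1)

# Stand-in observed data; the lag-1 autocorrelation plays the role of the summary statistic.
y_obs = rng.normal(size=200)
def summary(y):
    return np.corrcoef(y[:-1], y[1:])[0, 1]
s_obs = summary(y_obs)

def simulate(theta, n=200):
    """Stand-in simulator: AR(1) series with coefficient theta."""
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = theta * y[t - 1] + rng.normal()
    return y

# ABC rejection: keep prior draws whose simulated summary is close to the observed one.
accepted = []
for _ in range(5000):
    theta = rng.uniform(-0.9, 0.9)          # prior draw
    if abs(summary(simulate(theta)) - s_obs) < 0.05:
        accepted.append(theta)

print(len(accepted), "accepted; approximate posterior mean:", np.mean(accepted))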
Thu 25 Feb, '10
-
CRiSM Seminar - Vincent Macaulay (Glasgow)
A1.01
Vincent Macaulay, Dept of Statistics, University of Glasgow
Inference of migration episodes from modern DNA sequence variation
One view of human prehistory is of a set of punctuated migration events across space and time, associated with settlement, resettlement and discrete phases of immigration. It is pertinent to ask whether the variability that exists in the DNA sequences of samples of people living now, something which can be relatively easily measured, can be used to fit and test such models. Population genetics theory already makes predictions of patterns of genetic variation under certain very simple models of prehistoric demography. In this presentation I will describe an alternative, but still quite simple, model designed to capture more aspects of human prehistory of interest to the archaeologist, show how it can be rephrased as a mixture model, and illustrate the kinds of inferences that can be made on a real data set, taking a Bayesian approach.
Thu 4 Mar, '10
-
CRiSM Seminar - Jeremy Taylor (Michigan)
A1.01
Jeremy Taylor, University of Michigan
Individualized predictions of prostate cancer recurrence following radiation therapy
In this talk I will present a joint longitudinal-survival statistical model for the pattern of PSA (prostate-specific antigen) values and clinical recurrence, for data from patients following radiation therapy for prostate cancer. A random effects model is used for the longitudinal PSA data and a time-dependent proportional hazards model is used for clinical recurrence of prostate cancer. The model is implemented on a website, psacalc.sph.umich.edu, where patients or doctors can enter a series of PSA values and obtain a prediction of future disease progression. Details of the model estimation and validation will be described and the website calculator demonstrated.
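
A toy sketch of how the two model components described above fit together (all coefficients are invented for illustration; the real calculator at psacalc.sph.umich.edu uses fitted values): a subject-specific longitudinal PSA trajectory feeds a time-dependent proportional-hazards model, and the recurrence-free probability follows by integrating the hazard.

import numpy as np

# Subject-specific longitudinal model (random intercept and slope, made-up values):
# log PSA(t) = b0 + b1 * t
b0, b1 = 0.2, 0.35
def log_psa(t):
    return b0 + b1 * t

# Time-dependent proportional hazards model (made-up baseline and coefficient):
# hazard(t) = lambda0 * exp(beta * log PSA(t))
lambda0, beta = 0.02, 0.8
def hazard(t):
    return lambda0 * np.exp(beta * log_psa(t))

# Recurrence-free probability S(t) = exp(-cumulative hazard), via the trapezoid rule.
grid = np.linspace(0.0, 5.0, 501)          # years after therapy
cum_hazard = np.concatenate(([0.0], np.cumsum(
    0.5 * (hazard(grid[1:]) + hazard(grid[:-1])) * np.diff(grid))))
surv = np.exp(-cum_hazard)

print("predicted 3-year recurrence-free probability:", surv[np.searchsorted(grid, 3.0)])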
Thu 18 Mar, '10
-
CRiSM Seminar - Prakash Patil (Birmingham)
A1.01

Prakash Patil (University of Birmingham)
Smoothing based Lack-of-Fit (or Goodness-of-Fit) Tests

To construct a nonparametric (smoothing-based) test of lack-of-fit, one measures, in one way or another, the discrepancy between a smooth estimator of the unknown curve and the hypothesised curve. Although there are many possible choices for measuring this discrepancy, lack-of-fit tests based on the integrated squared error (ISE), being technically the easiest to deal with, seem to have received the most attention. But since a test based on the ISE requires estimation of the unknown curve, its ability to distinguish between the null model and departures from the null model is linked to the smoothing parameter one chooses to estimate the curve. If, instead, one takes a local view and then constructs a test, one can show that the test has better power properties. And although the performance of the test is still linked to the smoothing parameter, the choice of smoothing parameter is now dictated by the 'testing' aspect of the problem rather than driven by the estimation of the unknown curve. In this talk we will mainly use regression quantile curves to illustrate the above points, but will show that this procedure can also be used for density and hazard rate curves.
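
A bare-bones version of the ISE-type discrepancy referred to above (a sketch under obvious simplifications: Nadaraya-Watson estimator, Gaussian kernel, a linear null curve, and arbitrary bandwidths; nothing here is specific to the quantile-curve procedure of the talk). The bandwidth dependence of the statistic is exactly the issue the abstract raises.

import numpy as np

rng = np.random.default_rng(2)

# Simulated regression data with a mild departure from the linear null model.
n = 200
x = np.sort(rng.uniform(0, 1, n))
y = 1.0 + 2.0 * x + 0.3 * np.sin(4 * np.pi * x) + rng.normal(scale=0.3, size=n)

def nw_estimate(x0, x, y, h):
    """Nadaraya-Watson kernel regression estimate at the points x0."""
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def ise_statistic(x, y, h, grid):
    """Integrated squared discrepancy between the smooth fit and the fitted null line."""
    m_hat = nw_estimate(grid, x, y, h)
    beta = np.polyfit(x, y, 1)               # least-squares fit of the null (linear) model
    m_null = np.polyval(beta, grid)
    return np.trapz((m_hat - m_null) ** 2, grid)

grid = np.linspace(0.05, 0.95, 200)
for h in (0.02, 0.05, 0.15):                  # the statistic changes with the bandwidth
    print(f"h = {h:.2f}  ISE statistic = {ise_statistic(x, y, h, grid):.4f}")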
Fri 26 Mar, '10
-
CRiSM Seminar - David Findley (US Census Bureau)
A1.01

David Findley (US Census Bureau)

Two improved Diebold-Mariano test statistics for comparing the forecasting ability of incorrect time series models

We present and show applications of two new test statistics for deciding if one ARIMA model provides significantly better h-step-ahead forecasts than another, as measured by the difference of approximations to their asymptotic mean square forecast errors. The two statistics differ in the variance estimate whose square root is the statistic's denominator. Both variance estimates are consistent even when the ARMA components of the models considered are incorrect. Our principal statistic's variance estimate accounts for parameter estimation. Our simpler statistic's variance estimate treats parameters as fixed. The broad consistency properties of these estimates yield improvements to what are known as tests of Diebold and Mariano (1995) type. These are tests whose variance estimates treat parameters as fixed and are generally not consistent in our context.

We describe how the new test statistics can be calculated algebraically for any pair of ARIMA models with the same differencing operator. Our size and power studies demonstrate their superiority over the Diebold-Mariano statistic. The power study and the empirical study also reveal that, in comparison to treating estimated parameters as fixed, accounting for parameter estimation can increase power and can yield more plausible model selections for some time series in standard textbooks.

(Joint work with Tucker McElroy)
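
For orientation, a Diebold-Mariano-type statistic of the kind the new tests improve upon can be computed as follows (a sketch only: squared-error loss, a simple Bartlett-weighted long-run variance, and synthetic forecast errors; the corrections for parameter estimation that the talk is about are not included).

import numpy as np

rng = np.random.default_rng(3)

# Synthetic h-step-ahead forecast errors from two competing models.
n = 300
e1 = rng.normal(scale=1.0, size=n)
e2 = rng.normal(scale=1.1, size=n)      # model 2 forecasts slightly worse

d = e1**2 - e2**2                        # loss differential under squared-error loss
d_bar = d.mean()

def long_run_variance(d, max_lag=10):
    """Bartlett-weighted sum of sample autocovariances of the loss differential."""
    d = d - d.mean()
    n = len(d)
    v = d @ d / n
    for k in range(1, max_lag + 1):
        gamma_k = d[k:] @ d[:-k] / n
        v += 2 * (1 - k / (max_lag + 1)) * gamma_k
    return v

dm = d_bar / np.sqrt(long_run_variance(d) / n)
print("DM-type statistic:", dm)          # compare with standard normal quantiles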

Thu 29 Apr, '10
-
CRiSM Seminar - Ann Nicholson (Monash)
A1.01

Ann Nicholson (Monash)

Incorporating expert knowledge when learning Bayesian network structure: Heart failure as a case study

Bayesian networks (BNs) are rapidly becoming a leading technology in applied Artificial Intelligence (AI), with medicine one of their most popular application areas. Both automated learning of BNs and expert elicitation have been used to build these networks, but the potentially more useful combination of these two methods remains underexplored. In this seminar, I will present a case study of this combination using public-domain data for heart failure. We run an automated causal discovery system (CaMML), which allows the incorporation of multiple kinds of prior expert knowledge into its search, to test and compare unbiased discovery with discovery biased by different kinds of expert opinion. We use adjacency matrices enhanced with numerical and colour labels to assist with the interpretation of the results. These techniques are presented within the wider context of knowledge engineering with Bayesian networks (KEBN).
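
One concrete ingredient mentioned above is the adjacency-matrix view of learned structures and of expert constraints. A toy encoding (the variable names and constraints are invented; CaMML's actual prior format is richer than this) might look like:

import numpy as np

# Hypothetical variables in a small heart-failure network.
variables = ["Age", "Hypertension", "LVDysfunction", "HeartFailure"]
k = len(variables)

# Expert prior as an edge-constraint matrix:
#  1 = edge required, -1 = edge forbidden, 0 = left to the structure learner.
prior = np.zeros((k, k), dtype=int)
prior[variables.index("Age"), variables.index("Hypertension")] = 1        # required edge
prior[variables.index("HeartFailure"), variables.index("Age")] = -1       # forbidden direction

# A learned structure is just another 0/1 adjacency matrix; comparing the two
# shows where the data-driven search agrees or conflicts with expert opinion.
learned = np.zeros((k, k), dtype=int)
learned[variables.index("Age"), variables.index("Hypertension")] = 1
learned[variables.index("LVDysfunction"), variables.index("HeartFailure")] = 1

conflicts = np.argwhere((prior == -1) & (learned == 1))
print("edges the expert forbids but the learner added:", conflicts)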
Fri 30 Apr, '10
-
Applied Maths & Stats Seminar
B3.02 (Maths)
David White (Warwick)

Fri 7 May, '10
-
Applied Maths & Stats Seminar
B3.02 (Maths)
Informal Group Meeting
Thu 13 May, '10
-
Ann Nicholson - Workshop 2
C1.06
Applications of Bayesian Networks
Thu 13 May, '10
-
CRiSM Seminar - Federico Turkheimer (Imperial)
A1.01
Federico Turkheimer (Imperial)

Title: Higher Mental Ability: A Matter of Persistence?
Abstract: Executive function is thought to originate in the dynamics of frontal cortical networks of the human brain. We examined the dynamic properties of the blood oxygen level-dependent (BOLD) time-series measured with fMRI within the prefrontal cortex to test the hypothesis that temporally persistent neural activity underlies executive performance in normal controls performing executive tasks. A numerical estimate of signal persistence, derived from wavelet scalograms of the BOLD time-series and postulated to represent the coherent firing of cortical networks, was determined and correlated with task performance. We further tested our hypothesis on traumatic brain injury subjects, who present with mild diffuse heterogeneous injury but common executive dysfunction, this time using a resting-state experimental condition.
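
As a loose illustration of what a wavelet-based persistence estimate can look like (a sketch assuming the PyWavelets package and a synthetic series; the exact scalogram-derived measure used in the study will differ): detail-coefficient variance is computed per scale, and the slope of its log relationship with scale acts as a persistence exponent.

import numpy as np
import pywt

rng = np.random.default_rng(4)

# Synthetic "BOLD-like" series: a cumulated noise signal, which is strongly persistent.
signal = np.cumsum(rng.normal(size=1024))

# Discrete wavelet decomposition; coeffs = [cA_J, cD_J, ..., cD_1].
level = 6
coeffs = pywt.wavedec(signal, "db4", level=level)
detail_vars = [np.var(c) for c in coeffs[1:]]          # variances of cD_J ... cD_1
scales = np.arange(level, 0, -1)                        # matching scale indices j

# Slope of log2(variance) against scale: a larger slope indicates more persistence.
slope = np.polyfit(scales, np.log2(detail_vars), 1)[0]
print("wavelet variance slope (persistence estimate):", slope)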
Fri 14 May, '10
-
Applied Maths & Stats Seminar
B3.02 (Maths)
Informal Group Meeting
Wed 19 May, '10
-
CRiSM Seminar - Petros Dellaportas (Athens University)
A1.01
Petros Dellaportas (Athens University of Economics and Business)
Control variates for reversible MCMC samplers
A general methodology is presented for the construction and effective use of control variates for reversible MCMC samplers. The values of the coefficients of the optimal linear combination of the control variates are computed, and adaptive, consistent MCMC estimators are derived for these optimal coefficients. All methodological and asymptotic arguments are rigorously justified. Numerous MCMC simulation examples from Bayesian inference applications demonstrate that the resulting variance reduction can be quite dramatic.
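
A stripped-down illustration of the control-variate idea in an MCMC setting (generic: a random-walk Metropolis chain targeting a standard normal, with g(x) = x as a control variate of known mean zero; the optimal coefficient is estimated from the same chain, which is common practice but not the full adaptive construction of the talk).

import numpy as np

rng = np.random.default_rng(5)

def rwm_chain(n, step=1.0):
    """Random-walk Metropolis chain targeting a standard normal density."""
    x = np.empty(n)
    x[0] = 0.0
    for t in range(1, n):
        prop = x[t - 1] + rng.normal(scale=step)
        if np.log(rng.uniform()) < 0.5 * (x[t - 1] ** 2 - prop ** 2):
            x[t] = prop
        else:
            x[t] = x[t - 1]
    return x

x = rwm_chain(50_000)

f = np.exp(x)              # quantity of interest: E[exp(X)] = exp(1/2) ~ 1.6487
g = x                      # control variate with known mean E[X] = 0

# Estimated optimal linear coefficient and the variance-reduced estimator.
vc = np.cov(f, g)
theta = vc[0, 1] / vc[1, 1]
print("plain estimate:       ", f.mean())
print("with control variate: ", (f - theta * g).mean())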
Thu 20 May, '10
-
CRiSM Seminar - Claudia Kirch (Karlsruhe)
A1.01
Claudia Kirch (Karlsruhe)
Resampling Methods in Change-Point Analysis
Real-life data series are frequently not stable but exhibit changes in parameters at unknown time points. We encounter changes (or the possibility thereof) every day in such diverse fields as economics, finance, medicine, geology, physics and so on. Therefore the detection, location and investigation of changes is of special interest. Change-point analysis provides the statistical tools (tests, estimators, confidence intervals). Most of the procedures are based on distributional asymptotics; however, convergence is often slow, or the asymptotics do not sufficiently reflect the dependency in the data. Using resampling procedures we obtain better approximations for small samples which take possible dependency structures more efficiently into account.
In this talk we give a short introduction to change-point analysis. Then we investigate more closely how resampling procedures can be applied in this context. We have a closer look at a classical location model with dependent data as well as a sequential location test, which has become of special interest in recent years.
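
A minimal version of the kind of resampled change-point test discussed above (a sketch: a CUSUM-type statistic for a mean change in an independent-data location model, with a permutation approximation to its null distribution; handling dependence, which is the main point of the talk, requires block or other dependent-data resampling instead).

import numpy as np

rng = np.random.default_rng(6)

# Data with a mean change after observation 120.
n = 200
x = rng.normal(size=n)
x[120:] += 0.8

def cusum_stat(x):
    """Maximum absolute standardised CUSUM of deviations from the overall mean."""
    n = len(x)
    s = np.cumsum(x - x.mean())
    return np.max(np.abs(s)) / (x.std(ddof=1) * np.sqrt(n))

obs = cusum_stat(x)

# Permutation approximation of the null distribution (valid for independent data).
perm = np.array([cusum_stat(rng.permutation(x)) for _ in range(999)])
p_value = (1 + np.sum(perm >= obs)) / (1 + len(perm))
print("CUSUM statistic:", obs, " permutation p-value:", p_value)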
Fri 21 May, '10
-
Applied Maths & Stats Seminar
B3.02 (Maths)
Omiros Papaspiliopoulos (Universitat Pompeu Fabra)
Wed 26 May, '10
-
Seminar from Warwick back to Melbourne
Digital Laboratory Auditorium

Ann Nicholson (Monash University)
Bayesian Networks for Biosecurity Risk Assessment

Bayesian networks (BNs) are rapidly becoming a tool of choice for ecological and environmental modelling and decision making. By combining a graphical representation of the dependencies between variables with probability theory and efficient inference algorithms, BNs provide a powerful and flexible tool for reasoning under uncertainty.  The popularity of BNs is based on their ability to reason both diagnostically and predictively, and to explicitly model causal interventions and cost-benefit trade-offs.

In Australia, current biosecurity risk assessments are typically qualitative, combining assessment across the stages of importation, distribution, entry, establishment and spread. In this seminar, I will show a BN that directly models the current qualitative method. I'll then adapt the BN to explicitly model the volume of infestation, adding clarity and precision. I will demonstrate how more complex BNs can explicitly model the factors influencing each stage, and can be extended with management decisions and cost functions for decision making.

Thu 27 May, '10
-
CRiSM Seminar - William Astle (Imperial)
A1.01
William Astle (Imperial)

A Bayesian model of NMR spectra for the deconvolution and quantification of metabolites in complex biological mixtures

Fri 28 May, '10
-
Applied Maths & Stats Seminar
B3.02 (Maths)
Mahadevan Ganesh (Edinburgh)
Thu 3 Jun, '10
-
CRiSM Seminar - Idris Eckley (Lancaster)
A1.01
Idris Eckley (Lancaster)
Wavelets - the secret to great looking hair?

Texture is the visual character of an image region whose structure is, in some sense, regular, for example the appearance of a woven material. The perceived texture of an image depends on the scale at which it is observed. In this talk we show how wavelet processes can be used to model and analyse texture structure. Our wavelet texture models permit the classification of images based on texture and reveal important information on differences between subtly different texture types. We provide examples, taken from industry, where wavelet methods have enhanced the classification of images of hair and fabrics.
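
A rough sketch of wavelet-based texture features of the kind alluded to above (assuming the PyWavelets package and a synthetic image; real texture classification, and the wavelet process models used in this line of work, are considerably more involved): subband energies from a 2-D wavelet decomposition serve as a simple texture descriptor, and directional textures show up as imbalanced energies across orientations.

import numpy as np
import pywt

rng = np.random.default_rng(7)

# Synthetic "texture": a random field plus a horizontal stripe pattern.
base = rng.normal(size=(128, 128))
rows = np.arange(128)
image = base + 2.0 * np.sin(2 * np.pi * rows / 8)[:, None]

# Two-level 2-D wavelet decomposition: [cA2, (cH2, cV2, cD2), (cH1, cV1, cD1)].
coeffs = pywt.wavedec2(image, "haar", level=2)

features = []
for level_coeffs in coeffs[1:]:
    for band, name in zip(level_coeffs, ("horizontal", "vertical", "diagonal")):
        features.append((name, float(np.mean(band ** 2))))   # subband energy

for name, energy in features:
    print(f"{name:10s} energy {energy:.2f}")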
Fri 4 Jun, '10
-
Applied Maths & Stats Seminar
B3.02 (Maths)
Informal Group Meeting
Fri 11 Jun, '10
-
Applied Maths & Stats Seminar
B3.02 (Maths)
John Aston (Warwick)
Thu 17 Jun, '10
-
CRiSM Seminar - Adrian Bowman (Glasgow)
A1.01

Prof Adrian Bowman, University of Glasgow

 Surfaces, shapes and anatomy
 
Three-dimensional surface imaging, through laser-scanning or stereo-photogrammetry, provides high resolution data defining the shape of objects. In an anatomical setting this can provide invaluable quantitative information, for example on the success of surgery. Two particular applications are in the success of breast reconstruction and in facial surgery following conditions such as cleft lip and palate. An initial challenge is to extract suitable information from these images, to characterise the surface shape in an informative manner. Landmarks are traditionally used to good effect, but these clearly do not adequately represent the very much richer information present in each digitised image. Curves with clear anatomical meaning provide a good compromise between informative representations of shape and simplicity of structure. Some of the issues involved in analysing data of this type will be discussed and illustrated. Modelling issues include the measurement of asymmetry and longitudinal patterns of growth.
 
A second form of surface data arises in the analysis of MEG data which is collected from the head surface of patients and gives information on underlying brain activity.  In this case, spatiotemporal smoothing offers a route to a flexible model for the spatial and temporal locations of stimulated brain activity.

  

Fri 18 Jun, '10
-
Applied Maths & Stats Seminar
B3.02 (Maths)
Informal Group Meeting
Thu 24 Jun, '10
-
CRiSM Seminar - Sujit Sahu (Southampton)
A1.01

Sujit Sahu (Southampton)

High Resolution Bayesian Space-Time Modelling for Ozone Concentration Levels

Ground-level ozone is a pollutant that poses a significant health risk, especially for children with asthma. It also damages crops, trees and other vegetation, and is a main ingredient of urban smog. To evaluate exposure to ozone levels, the United States Environmental Protection Agency (USEPA) has developed a primary and a secondary air quality standard. To assess compliance with these standards, the USEPA collects ozone concentration data continuously from several networks of sparsely and irregularly spaced monitoring sites throughout the US. Data obtained from these sparse networks must be processed using spatial and spatio-temporal methods to check compliance with the ozone standards at an unmonitored site in the vast continental land mass of the US.

This talk will first discuss the two air quality standards for ozone levels and then will develop high resolution Bayesian space-time models which can be used to assess compliance. Predictive inference properties of several rival modelling strategies for both spatial interpolation and temporal forecasting will be compared and illustrated with simulation and real data examples. A number of large real life ozone concentration data sets observed over the eastern United States will also be used to illustrate the Bayesian space-time models. Several prediction maps from these models for the eastern US, published and used by the USEPA, will be discussed. 
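
To give a flavour of the interpolation problem described above (a deliberately simplified sketch: a plain Gaussian-process prediction, simple kriging with an exponential spatial covariance and invented monitoring locations; the space-time hierarchical Bayesian models of the talk are far richer):

import numpy as np

# Invented monitoring sites (x, y in hundreds of km) and daily ozone values (ppb).
sites = np.array([[0.0, 0.0], [1.0, 0.2], [0.4, 1.1], [1.5, 1.4]])
obs = np.array([48.0, 55.0, 60.0, 52.0])
target = np.array([0.8, 0.7])            # unmonitored location of interest

def exp_cov(d, sill=40.0, rang=1.0):
    """Exponential spatial covariance function."""
    return sill * np.exp(-d / rang)

# Covariance among sites and between the sites and the target location.
d_sites = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=2)
d_target = np.linalg.norm(sites - target, axis=1)
K = exp_cov(d_sites) + 4.0 * np.eye(len(sites))     # nugget for measurement error
k = exp_cov(d_target)

# Simple kriging prediction around the network mean.
mu = obs.mean()
pred = mu + k @ np.linalg.solve(K, obs - mu)
print("predicted ozone at the unmonitored site:", pred)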

Fri 25 Jun, '10
-
Applied Maths & Stats Seminar
B3.02 (Maths)
Jim Nolen (Duke)
Tue 13 Jul, '10
-
CRiSM Seminar - Freedom Gumedze (University of Cape Town)
A1.01
Freedom Gumedze (University of Cape Town)
 
An alternative approach to outliers in meta-analysis
 
Meta-analysis involves combining estimates from independent studies of some treatment in order to obtain an overall estimate across studies. However, outliers often occur even under the random effects model. The presence of such outliers can alter the conclusions of a meta-analysis. This paper proposes a methodology that detects and accommodates outliers in a meta-analysis rather than removing them to achieve homogeneity. An outlier is taken as an observation (study result) with inflated random effect variance, with the status of the ith observation as an outlier indicated by the size of the associated shift in the variance. We use the likelihood ratio test statistic as an objective measure for determining whether the ith observation has inflated variance and is therefore an outlier. A parametric bootstrap procedure is proposed to obtain the sampling distribution of the likelihood ratio test statistic and to account for multiple testing. We illustrate the methodology and its usefulness using three meta-analysis data sets from the Cochrane Collaboration.
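
A toy sketch of the variance-shift idea with a parametric bootstrap (invented study data, crude grid-based maximisation and a rough plug-in null fit; this is an illustration of the general mechanism, not the paper's estimation procedure):

import numpy as np

rng = np.random.default_rng(9)

# Invented study estimates y_i with known within-study variances v_i;
# study 4 (index 3) is a deliberately planted outlier.
y = np.array([0.10, 0.18, 0.05, 0.95, 0.12, 0.08])
v = np.array([0.02, 0.03, 0.02, 0.03, 0.04, 0.02])

def profile_loglik(y, v, extra):
    """Random-effects log-likelihood maximised over mu and tau^2 (crude grid search),
    with an optional variance shift 'extra' added per study."""
    best = -np.inf
    for tau2 in np.linspace(0.0, 1.0, 101):
        w = 1.0 / (v + tau2 + extra)
        mu = np.sum(w * y) / np.sum(w)
        best = max(best, 0.5 * np.sum(np.log(w) - w * (y - mu) ** 2))
    return best

def lrt_for_study(y, v, i):
    """Likelihood ratio statistic for an inflated variance in study i."""
    k = len(y)
    ll0 = profile_loglik(y, v, np.zeros(k))
    ll1 = max(profile_loglik(y, v, np.where(np.arange(k) == i, om, 0.0))
              for om in np.linspace(0.0, 2.0, 41))
    return 2.0 * (ll1 - ll0)

t_obs = lrt_for_study(y, v, 3)

# Parametric bootstrap of the statistic's null distribution (rough plug-in null fit).
w0 = 1.0 / (v + 0.05)
mu0 = np.sum(w0 * y) / np.sum(w0)
boot = [lrt_for_study(rng.normal(mu0, np.sqrt(v + 0.05)), v, 3) for _ in range(100)]
p_value = np.mean(np.array(boot) >= t_obs)
print("LRT for study 4:", round(t_obs, 2), " bootstrap p-value:", p_value)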
Mon 26 Jul, '10
-
CRiSM Seminar - Andrew Gelman
A1.01

Andrew Gelman (Columbia University)

Nothing is Linear, Nothing is Additive: Bayesian Models for Interactions in Social Science

Mon 20 Sep, '10
-
CRiSM Lectures
A1.01

Tim Johnson

Lecture 1: Introduction to Spatial Point Processes
Monday 20 Sept, 11-noon, A1.01

1. Introduction

2. Spatial Poisson Process

3. Spatial Cox Processes
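
By way of a warm-up for the first topic above, a homogeneous spatial Poisson process on the unit square can be simulated in a few lines (a generic sketch, not part of the lecture materials): draw a Poisson number of points, then scatter them uniformly.

import numpy as np

rng = np.random.default_rng(10)

# Homogeneous spatial Poisson process with intensity lam on the unit square:
# the number of points is Poisson(lam * area), and locations are uniform.
lam = 100.0
n_points = rng.poisson(lam * 1.0)
points = rng.uniform(0.0, 1.0, size=(n_points, 2))

print(n_points, "points; first few:\n", points[:3])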
Tue 21 Sep, '10
-
CRiSM Lectures
A1.01

Tim Johnson

Lecture 2: Aggregative, Repulsive and Marked Point Processes
Tuesday 21 Sept, 11-noon, A1.01

1. Cluster Point Processes
   (a) Independent Cluster Process
   (b) Log-Gaussian Cox Process

2. Markov Point Processes
   (a) Hard-Core Process
   (b) Strauss Process

3. Marked Point Processes
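
For the cluster-process topic above, an independent cluster (Neyman-Scott type) process can be sketched as follows (again a generic illustration with arbitrary parameters, not taken from the lectures): simulate Poisson parents, then scatter a Poisson number of Gaussian-displaced offspring around each parent.

import numpy as np

rng = np.random.default_rng(11)

# Parents form a homogeneous Poisson process on the unit square.
kappa = 10.0                 # parent intensity
parents = rng.uniform(0.0, 1.0, size=(rng.poisson(kappa), 2))

# Each parent produces a Poisson number of offspring, displaced by Gaussian noise.
mu_offspring, sigma = 8.0, 0.03
offspring = []
for p in parents:
    m = rng.poisson(mu_offspring)
    offspring.append(p + rng.normal(scale=sigma, size=(m, 2)))
offspring = np.vstack(offspring) if offspring else np.empty((0, 2))

print(len(parents), "parents,", len(offspring), "offspring points")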
