Events
Thu 28 Jan, '10
CRiSM Seminar, A1.01: Dr Jan Palczewski (University of Leeds)
Why are Markowitz portfolio weights so volatile?
The Markowitz theory of asset allocation is one of the very few research ideas that have made it into practical finance. Yet its investment recommendations exhibit incredible sensitivity to even the smallest variations in the estimation horizon or estimation techniques. Scientists as well as practitioners have put enormous effort into stabilizing the estimators of portfolios (with moderate success, according to some). However, there seems to be no simple quantitative method to measure portfolio stability. In this talk, I will derive analytical formulas that relate the mean and the covariance matrix of asset returns to the stability of the portfolio composition. These formulas allow for the identification of the main culprits of the worse-than-expected performance of the Markowitz framework. In particular, I will question the common wisdom that puts the main responsibility on estimation errors of the mean. This research is a spin-off of a consultancy project at the University of Warsaw regarding the allocation of the foreign reserves of the Polish Central Bank.
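The sensitivity described in the abstract is easy to reproduce numerically. The sketch below is illustrative only (made-up inputs, not the speaker's analytical formulas): it computes unconstrained mean-variance weights proportional to Sigma^{-1} mu for three highly correlated assets, then shows how a 20-basis-point change in one estimated mean reshuffles the portfolio.

```python
import numpy as np

# Three hypothetical assets: 20% volatility, pairwise correlation 0.9
Sigma = 0.04 * (0.9 * np.ones((3, 3)) + 0.1 * np.eye(3))
mu = np.array([0.050, 0.052, 0.048])      # estimated mean returns

def mv_weights(mu, Sigma):
    """Unconstrained mean-variance weights, normalised to sum to one."""
    w = np.linalg.solve(Sigma, mu)
    return w / w.sum()

w0 = mv_weights(mu, Sigma)
# Perturb a single mean estimate by 20 basis points
w1 = mv_weights(mu + np.array([0.0, -0.002, 0.0]), Sigma)

print(np.round(w0, 2))   # heavily tilted towards asset 2
print(np.round(w1, 2))   # the tilt largely disappears
print("max weight change:", round(np.abs(w1 - w0).max(), 2))
```

With high correlations, Sigma is nearly singular along the direction that the mean perturbation excites, which is exactly why tiny estimation errors translate into large weight swings.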
Thu 11 Feb, '10
CRiSM Seminar, A1.01: Alexander Schied (Mannheim)
Mathematical aspects of market impact modeling
In this talk, we discuss the problem of executing large orders in illiquid markets so as to optimize the resulting liquidity costs. There are several reasons why this problem is relevant. On the mathematical side, it leads to interesting nonlinearity effects that arise from the price feedback of strategies. On the economic side, it helps in understanding which market impact models are viable, because the analysis of order execution provides a test for the existence of undesirable properties of a model. In the first part of the talk, we present market impact models with transient price impact, modeling the resilience of electronic limit order books. In the second part, we consider the Almgren-Chriss market impact model and analyze the effects of risk aversion on optimal strategies using stochastic control methods. In the final part, we discuss effects that occur in a multi-player equilibrium.
Thu 18 Feb, '10
CRiSM Seminar, A1.01: Theo Kypraios (Nottingham)
A novel class of semi-parametric time series models: Construction and Bayesian Inference
In this talk a novel class of semi-parametric time series models will be presented, for which we can specify in advance the marginal distribution of the observations and then build the dependence structure of the observations around it by introducing an underlying stochastic process termed a 'latent branching tree'. It will be demonstrated how we can draw Bayesian inference for the model parameters using Markov chain Monte Carlo methods as well as Approximate Bayesian Computation methodology. Finally, these models will be fitted to a real dataset on genome scheme data, and we will also discuss how this kind of model can be used in modelling Internet traffic.
Thu 25 Feb, '10
CRiSM Seminar, A1.01: Vincent Macaulay (Dept of Statistics, University of Glasgow)
Inference of migration episodes from modern DNA sequence variation
One view of human prehistory is of a set of punctuated migration events across space and time, associated with settlement, resettlement and discrete phases of immigration. It is pertinent to ask whether the variability that exists in the DNA sequences of samples of people living now, something which can be relatively easily measured, can be used to fit and test such models. Population genetics theory already makes predictions of patterns of genetic variation under certain very simple models of prehistoric demography. In this presentation I will describe an alternative, but still quite simple, model designed to capture more aspects of human prehistory of interest to the archaeologist, show how it can be rephrased as a mixture model, and illustrate the kinds of inferences that can be made on a real data set, taking a Bayesian approach.
Thu 4 Mar, '10
CRiSM Seminar, A1.01: Jeremy Taylor (University of Michigan)
Individualized predictions of prostate cancer recurrence following radiation therapy
In this talk I will present a joint longitudinal-survival statistical model for the pattern of PSA values and clinical recurrence, using data from patients following radiation therapy for prostate cancer. A random effects model is used for the longitudinal PSA data and a time-dependent proportional hazards model is used for clinical recurrence of prostate cancer. The model is implemented on a website, psacalc.sph.umich.edu
Thu 18 Mar, '10
CRiSM Seminar, A1.01: Prakash Patil (University of Birmingham)
Fri 26 Mar, '10
CRiSM Seminar, A1.01: David Findley (US Census Bureau)
Two improved Diebold-Mariano test statistics for comparing the forecasting ability of incorrect time series models
We present and show applications of two new test statistics for deciding if one ARIMA model provides significantly better h-step-ahead forecasts than another, as measured by the difference of approximations to their asymptotic mean square forecast errors. The two statistics differ in the variance estimate whose square root is the statistic's denominator. Both variance estimates are consistent even when the ARMA components of the models considered are incorrect. Our principal statistic's variance estimate accounts for parameter estimation. Our simpler statistic's variance estimate treats parameters as fixed. The broad consistency properties of these estimates yield improvements to what are known as tests of Diebold and Mariano (1995) type. These are tests whose variance estimates treat parameters as fixed and are generally not consistent in our context. We describe how the new test statistics can be calculated algebraically for any pair of ARIMA models with the same differencing operator. Our size and power studies demonstrate their superiority over the Diebold-Mariano statistic. The power study and the empirical study also reveal that, in comparison to treating estimated parameters as fixed, accounting for parameter estimation can increase power and can yield more plausible model selections for some time series in standard textbooks. (Joint work with Tucker McElroy)
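The talk refines the classic Diebold-Mariano (1995) test. For orientation only, a minimal version of the classic statistic (squared-error loss, rectangular long-run variance with h-1 lags, synthetic forecast errors) might look like this; it is not the speakers' improved statistic:

```python
import numpy as np

def dm_stat(e1, e2, h=1):
    """Classic Diebold-Mariano statistic on squared-error loss.
    Positive values indicate model 2 forecasts better than model 1."""
    d = e1 ** 2 - e2 ** 2                  # loss differential
    n = len(d)
    dbar = d.mean()
    # Long-run variance estimate with h-1 autocovariance lags
    lrv = np.mean((d - dbar) ** 2)
    for k in range(1, h):
        lrv += 2 * np.mean((d[k:] - dbar) * (d[:-k] - dbar))
    return dbar / np.sqrt(lrv / n)

rng = np.random.default_rng(1)
e1 = rng.normal(0, 1.2, 500)   # h-step forecast errors of model 1 (noisier)
e2 = rng.normal(0, 1.0, 500)   # h-step forecast errors of model 2
print(round(dm_stat(e1, e2), 2))   # compare against N(0, 1) critical values
```

The point of the seminar is precisely that this variance estimate treats estimated parameters as fixed, which the new statistics correct.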
Thu 29 Apr, '10
CRiSM Seminar, A1.01: Ann Nicholson (Monash)
Incorporating expert knowledge when learning Bayesian network structure: Heart failure as a case study
Bayesian networks (BNs) are rapidly becoming a leading technology in applied Artificial Intelligence (AI), with medicine one of their most popular application areas. Both automated learning of BNs and expert elicitation have been used to build these networks, but the potentially more useful combination of these two methods remains underexplored. In this seminar, I will present a case study of this combination using public-domain data for heart failure. We run an automated causal discovery system (CaMML), which allows the incorporation of multiple kinds of prior expert knowledge into its search, to test and compare unbiased discovery with discovery biased with different kinds of expert opinion. We use adjacency matrices enhanced with numerical and colour labels to assist with the interpretation of the results. These techniques are presented within the wider context of knowledge engineering with Bayesian networks (KEBN).
Fri 30 Apr, '10
Applied Maths & Stats Seminar, B3.02 (Maths): David White (Warwick)
Fri 7 May, '10
Applied Maths & Stats Seminar, B3.02 (Maths): Informal Group Meeting
Thu 13 May, '10
Ann Nicholson - Workshop 2, C1.06: Applications of Bayesian Networks
Thu 13 May, '10
CRiSM Seminar, A1.01: Federico Turkheimer (Imperial)
Higher Mental Ability: A Matter of Persistence?
Executive function is thought to originate in the dynamics of frontal cortical networks of the human brain. We examined the dynamic properties of the blood oxygen level-dependent (BOLD) time series measured with fMRI within the prefrontal cortex to test the hypothesis that temporally persistent neural activity underlies executive performance in normal controls doing executive tasks. A numerical estimate of signal persistence, derived from wavelet scalograms of the BOLD time series and postulated to represent the coherent firing of cortical networks, was determined and correlated with task performance. We further tested our hypothesis on traumatic brain injury subjects, who present with mild diffuse heterogeneous injury but common executive dysfunction, this time using a resting-state experimental condition.
Fri 14 May, '10
Applied Maths & Stats Seminar, B3.02 (Maths): Informal Group Meeting
Wed 19 May, '10
CRiSM Seminar, A1.01: Petros Dellaportas (Athens University of Economics and Business)
Control variates for reversible MCMC samplers
A general methodology is presented for the construction and effective use of control variates for reversible MCMC samplers. The values of the coefficients of the optimal linear combination of the control variates are computed, and adaptive, consistent MCMC estimators are derived for these optimal coefficients. All methodological and asymptotic arguments are rigorously justified. Numerous MCMC simulation examples from Bayesian inference applications demonstrate that the resulting variance reduction can be quite dramatic.
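The construction in the talk is specific to reversible chains; as a generic illustration of the underlying idea only (a toy target, not the speaker's methodology), one can reduce the variance of an MCMC average of f(X) by subtracting theta*g(X) for a control variate g with known mean zero, where theta = Cov(f, g)/Var(g) is estimated from the chain:

```python
import numpy as np

rng = np.random.default_rng(2)

def rwm_standard_normal(n, step=1.0):
    """Random-walk Metropolis chain targeting N(0, 1)."""
    x, out = 0.0, np.empty(n)
    for i in range(n):
        prop = x + step * rng.normal()
        # log acceptance ratio for the standard normal target
        if np.log(rng.random()) < 0.5 * (x * x - prop * prop):
            x = prop
        out[i] = x
    return out

xs = rwm_standard_normal(50_000)
f = np.exp(xs)    # want E[exp(X)] = exp(1/2) ~ 1.6487
g = xs            # control variate with known mean E[X] = 0

theta = np.cov(f, g)[0, 1] / np.var(g)   # estimated optimal coefficient
print(f.mean())                  # plain MCMC estimate
print((f - theta * g).mean())    # control-variate estimate, lower variance
```

Subtracting theta*g leaves the target expectation unchanged (g has mean zero) while removing the component of f correlated with g, which is where the variance reduction comes from.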
Thu 20 May, '10
CRiSM Seminar, A1.01: Claudia Kirch (Karlsruhe)
Resampling Methods in Change-Point Analysis
Real-life data series are frequently not stable but exhibit changes in parameters at unknown time points. We encounter changes (or the possibility thereof) every day in such diverse fields as economics, finance, medicine, geology and physics. Therefore the detection, location and investigation of changes is of special interest. Change-point analysis provides the statistical tools (tests, estimators, confidence intervals). Most of the procedures are based on distributional asymptotics; however, convergence is often slow, or the asymptotics do not sufficiently reflect dependency. Using resampling procedures we obtain better approximations for small samples which take possible dependency structures more efficiently into account.
In this talk we give a short introduction into change-point analysis. Then we investigate more closely how resampling procedures can be applied in this context. We have a closer look at a classical location model with dependent data as well as a sequential location test, which has become of special interest in recent years.
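As a minimal illustration of the resampling idea for independent data (a toy permutation scheme; the dependent-data settings of the talk require block-type resampling instead):

```python
import numpy as np

rng = np.random.default_rng(3)

def cusum_stat(x):
    """Maximum absolute (normalised) CUSUM statistic for a change in mean."""
    s = np.cumsum(x - x.mean())
    return np.abs(s[:-1]).max() / (x.std(ddof=1) * np.sqrt(len(x)))

# A series with a mean shift of one standard deviation halfway through
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(1, 1, 100)])

obs = cusum_stat(x)
# Resample under the null of no change by permuting the observations
perm = np.array([cusum_stat(rng.permutation(x)) for _ in range(999)])
p_value = (1 + np.sum(perm >= obs)) / 1000
print(round(obs, 2), p_value)   # a small p-value flags the change point
```

The permutation distribution replaces the slowly converging limiting distribution of the CUSUM maximum, which is the small-sample advantage the abstract refers to.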
Fri 21 May, '10
Applied Maths & Stats Seminar, B3.02 (Maths): Omiros Papaspiliopoulos (Universitat Pompeu Fabra)
Wed 26 May, '10
Seminar from Warwick back to Melbourne, Digital Laboratory Auditorium: Ann Nicholson (Monash University)
Bayesian networks (BNs) are rapidly becoming a tool of choice for ecological and environmental modelling and decision making. By combining a graphical representation of the dependencies between variables with probability theory and efficient inference algorithms, BNs provide a powerful and flexible tool for reasoning under uncertainty. The popularity of BNs is based on their ability to reason both diagnostically and predictively, and to explicitly model causal interventions and cost-benefit trade-offs.
Thu 27 May, '10
CRiSM Seminar, A1.01: William Astle (Imperial)
A Bayesian model of NMR spectra for the deconvolution and quantification of metabolites in complex biological mixtures
Fri 28 May, '10
Applied Maths & Stats Seminar, B3.02 (Maths): Mahadevan Ganesh (Edinburgh)
Thu 3 Jun, '10
CRiSM Seminar, A1.01: Idris Eckley (Lancaster)
Wavelets - the secret to great looking hair?
Texture is the visual character of an image region whose structure is, in some sense, regular, for example the appearance of a woven material. The perceived texture of an image depends on the scale at which it is observed. In this talk we show how wavelet processes can be used to model and analyse texture structure. Our wavelet texture models permit the classification of images based on texture and reveal important information on differences between subtly different texture types. We provide examples, taken from industry, where wavelet methods have enhanced the classification of images of hair and fabrics.
Fri 4 Jun, '10
Applied Maths & Stats Seminar, B3.02 (Maths): Informal Group Meeting
Fri 11 Jun, '10
Applied Maths & Stats Seminar, B3.02 (Maths): John Aston (Warwick)
Thu 17 Jun, '10
CRiSM Seminar, A1.01: Prof Adrian Bowman (University of Glasgow)
Surfaces, shapes and anatomy
Three-dimensional surface imaging, through laser scanning or stereo-photogrammetry, provides high-resolution data defining the shape of objects. In an anatomical setting this can provide invaluable quantitative information, for example on the success of surgery. Two particular applications are in the success of breast reconstruction and in facial surgery following conditions such as cleft lip and palate. An initial challenge is to extract suitable information from these images, to characterise the surface shape in an informative manner. Landmarks are traditionally used to good effect, but these clearly do not adequately represent the much richer information present in each digitised image. Curves with clear anatomical meaning provide a good compromise between informative representations of shape and simplicity of structure. Some of the issues involved in analysing data of this type will be discussed and illustrated. Modelling issues include the measurement of asymmetry and longitudinal patterns of growth.
A second form of surface data arises in the analysis of MEG data which is collected from the head surface of patients and gives information on underlying brain activity. In this case, spatiotemporal smoothing offers a route to a flexible model for the spatial and temporal locations of stimulated brain activity.
Fri 18 Jun, '10
Applied Maths & Stats Seminar, B3.02 (Maths): Informal Group Meeting
Thu 24 Jun, '10
CRiSM Seminar, A1.01: Sujit Sahu (Southampton)
High Resolution Bayesian Space-Time Modelling for Ozone Concentration Levels
Ground-level ozone is a pollutant that poses a significant health risk, especially for children with asthma. It also damages crops, trees and other vegetation, and it is a main ingredient of urban smog. To evaluate exposure to ozone levels, the United States Environmental Protection Agency (USEPA) has developed a primary and a secondary air quality standard. To assess compliance with these standards, the USEPA collects ozone concentration data continuously from several networks of sparsely and irregularly spaced monitoring sites throughout the US. Data obtained from these sparse networks must be processed using spatial and spatio-temporal methods to check compliance with the ozone standards at unmonitored sites in the vast continental land mass of the US.
This talk will first discuss the two air quality standards for ozone levels and then will develop high resolution Bayesian space-time models which can be used to assess compliance. Predictive inference properties of several rival modelling strategies for both spatial interpolation and temporal forecasting will be compared and illustrated with simulation and real data examples. A number of large real life ozone concentration data sets observed over the eastern United States will also be used to illustrate the Bayesian space-time models. Several prediction maps from these models for the eastern US, published and used by the USEPA, will be discussed.
Fri 25 Jun, '10
Applied Maths & Stats Seminar, B3.02 (Maths): Jim Nolen (Duke)
Tue 13 Jul, '10
CRiSM Seminar, A1.01: Freedom Gumedze (University of Cape Town)
An alternative approach to outliers in meta-analysis
Meta-analysis involves combining estimates from independent studies of some treatment in order to obtain an overall estimate across studies. However, outliers often occur even under the random effects model. The presence of such outliers can alter the conclusions of a meta-analysis. This paper proposes a methodology that detects and accommodates outliers in a meta-analysis rather than removing them to achieve homogeneity. An outlier is taken as an observation (study result) with an inflated random effect variance, with the status of the ith observation as an outlier indicated by the size of the associated shift in the variance. We use the likelihood ratio test statistic as an objective measure for determining whether the ith observation has inflated variance and is therefore an outlier. A parametric bootstrap procedure is proposed to obtain the sampling distribution of the likelihood ratio test statistic and to account for multiple testing. We illustrate the methodology and its usefulness using three meta-analysis data sets from the Cochrane Collaboration.
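A stripped-down version of the variance-inflation idea can be sketched as follows (toy data, maximum likelihood via scipy, no parametric bootstrap, so only a loose rendering of the paper's procedure); the fifth study is constructed to be aberrant:

```python
import numpy as np
from scipy.optimize import minimize

def nll(theta, y, v, i_out=None):
    """Negative log-likelihood for a random-effects meta-analysis,
    y_i ~ N(mu, v_i + tau2), optionally with an inflated variance
    v_i + tau2 + delta for the candidate outlier study i_out."""
    mu, tau2 = theta[0], np.exp(theta[1])
    w = v + tau2
    if i_out is not None:
        w = w.copy()
        w[i_out] += np.exp(theta[2])   # variance shift for study i_out
    return 0.5 * np.sum(np.log(w) + (y - mu) ** 2 / w)

# Toy effect estimates and within-study variances; study 5 looks aberrant
y = np.array([0.10, 0.12, 0.08, 0.11, 0.90])
v = np.array([0.010, 0.020, 0.015, 0.010, 0.020])

fit0 = minimize(nll, x0=[0.1, -3.0], args=(y, v), method="Nelder-Mead")
fit1 = minimize(nll, x0=[0.1, -3.0, -1.0], args=(y, v, 4), method="Nelder-Mead")
lrt = 2 * (fit0.fun - fit1.fun)    # large values flag study 5 as an outlier
print(round(lrt, 2))
```

In the paper's methodology the null distribution of this statistic is obtained by parametric bootstrap, with an adjustment for testing every study in turn; here we only compute the statistic itself.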
Mon 26 Jul, '10
CRiSM Seminar, A1.01: Andrew Gelman (Columbia University)
Nothing is Linear, Nothing is Additive: Bayesian Models for Interactions in Social Science
Mon 20 Sep, '10
CRiSM Lectures, A1.01: Tim Johnson
Lecture 1: Introduction to Spatial Point Processes
Monday 20 Sept, 11-noon, A1.01
1. Introduction
2. Spatial Poisson Process
3. Spatial Cox Processes
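For reference ahead of the lecture, the homogeneous spatial Poisson process has a standard two-step simulation (this is the textbook construction, not material from the lecture notes): draw the total count from a Poisson distribution, then scatter that many points uniformly.

```python
import numpy as np

rng = np.random.default_rng(4)

def homogeneous_poisson(intensity, width=1.0, height=1.0):
    """Homogeneous spatial Poisson process on [0, width] x [0, height]:
    N ~ Poisson(intensity * area), then N points placed uniformly."""
    n = rng.poisson(intensity * width * height)
    return np.column_stack([rng.uniform(0, width, n),
                            rng.uniform(0, height, n)])

pts = homogeneous_poisson(100.0)
print(pts.shape)   # roughly (100, 2) on the unit square
```

A Cox process then arises by first drawing a random intensity surface and conditioning on it, with the Poisson simulation above as the inner step.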
Tue 21 Sep, '10
CRiSM Lectures, A1.01: Tim Johnson
Lecture 2: Aggregative, Repulsive and Marked Point Processes
Tuesday 21 Sept, 11-noon, A1.01
1. Cluster Point Processes
(a) Independent Cluster Process
(b) Log-Gaussian Cox Process
2. Markov Point Processes
(a) Hard-Core Process
(b) Strauss Process
3. Marked Point Processes
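As a preview of the aggregative case, a standard Neyman-Scott construction (a modified Thomas process; again a textbook sketch, not the lecture notes) draws Poisson-distributed parents, each generating a Poisson number of Gaussian-displaced offspring. Repulsive models such as the Strauss process have no such direct construction and are usually simulated by birth-death MCMC instead.

```python
import numpy as np

rng = np.random.default_rng(5)

def thomas_process(kappa, mu, sigma):
    """Modified Thomas cluster process on the unit square (edge effects
    ignored): parent count ~ Poisson(kappa), offspring counts ~ Poisson(mu),
    offspring displaced from their parent by isotropic N(0, sigma^2) noise."""
    parents = rng.uniform(0, 1, (rng.poisson(kappa), 2))
    clusters = [p + sigma * rng.normal(size=(rng.poisson(mu), 2))
                for p in parents]
    return np.vstack(clusters) if clusters else np.empty((0, 2))

pts = thomas_process(kappa=10, mu=8, sigma=0.02)
print(len(pts))   # kappa * mu = 80 points on average
```

The resulting pattern is visibly clumped around the unobserved parent locations, which is the aggregation the lecture title refers to.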