
Events

Thu 22 Jan, '09
CRiSM Seminar
Room A1.01, Zeeman Building

Dr Laura Sangalli, Politecnico di Milano

Title: Efficient estimation of curves in more than one dimension by free-knot regression splines, with applications to the analysis of 3D cerebral vascular geometries.

Abstract: We deal with the problem of efficiently estimating a 3D curve and its derivatives, starting from a discrete and noisy observation of the curve. We develop a regression technique based on free-knot splines, i.e. regression splines where the number and position of knots are not fixed in advance but are chosen to minimize a penalized sum-of-squared-errors criterion. We thoroughly compare this technique to a classical regression method, local polynomial smoothing, via simulation studies and an application to the analysis of inner carotid artery centerlines (AneuRisk Project dataset). We show that 3D free-knot regression splines yield more accurate and efficient estimates.

 Joint work with Piercesare Secchi and Simone Vantini.
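
A minimal one-dimensional sketch of the free-knot criterion may help fix ideas (the talk's method handles 3D curves and optimizes knot positions as well; here, as a simplifying assumption, knots sit at quantiles and only their number is chosen, with an illustrative complexity penalty lam):

import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
y = np.sin(8 * t) + 0.1 * rng.standard_normal(t.size)   # noisy observation of a curve

def penalized_sse(n_knots, lam=5.0):
    # quantile-placed interior knots; lam penalizes the number of knots
    knots = np.quantile(t[1:-1], np.linspace(0, 1, n_knots + 2)[1:-1])
    spline = LSQUnivariateSpline(t, y, knots, k=3)
    resid = y - spline(t)
    return resid @ resid + lam * n_knots

best = min(range(1, 15), key=penalized_sse)
print("chosen number of interior knots:", best)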

Thu 29 Jan, '09
CRiSM Seminar - Dr Antonio Lijoi
A1.01 - Zeeman Building

De Finetti Afternoon, two seminars on Non-Parametric Bayesian Analysis, 2-5pm

Dr Antonio Lijoi (Pavia, Italy)

Priors for vectors of probability distributions

In this talk we describe the construction of a nonparametric prior for vectors of probability distributions obtained by a suitable transformation of completely random measures. The dependence between the random probability distributions is induced by a Lévy copula. A first example that will be presented concerns a prior for a pair of survival functions, which will be used to model two-sample data: a posterior characterization, conditional on possibly right-censored data, will be provided. Then, a vector of random probabilities with two-parameter Poisson-Dirichlet marginals is introduced.

Thu 29 Jan, '09
CRiSM Seminar - Dr Igor Pruenster
A1.01 - Zeeman Building

De Finetti Afternoon, two seminars on Non-Parametric Bayesian Analysis, 2-5pm

Dr Igor Pruenster (Torino, Italy)

Asymptotics for posterior hazards

A popular Bayesian nonparametric approach to survival analysis consists in modelling hazard rates as kernel mixtures driven by a completely random measure. A comprehensive analysis of the asymptotic behaviour of such models is provided. Consistency of the posterior distribution is investigated and central limit theorems for both linear and quadratic functionals of the posterior hazard rate are derived. The general results are then specialized to various specific kernels and mixing measures, yielding consistency under minimal conditions and neat central limit theorems for the distribution of functionals.

Joint work with P. De Blasi and G. Peccati

Thu 19 Feb, '09
CRiSM Seminar - Dr Jian Qing Shi
A1.01

Dr Jian Qing Shi, University of Newcastle

Curve Prediction and Clustering using Mixtures of Gaussian Process Functional Regression Models

The problem of large data is one of the major statistical challenges: for example, in one of our biomechanical projects, as in many other settings, more and more data are generated from subjects with different backgrounds at an incredible rate. For such functional/longitudinal data, it is more flexible and efficient to treat them as curves, and flexible mixture models are capable of capturing variation and biases in data generated from different sources. Mixture models are also applicable to classification and clustering of the data.

In this talk, I will first introduce a nonparametric Gaussian process functional regression (GPFR) model, and then discuss how to extend it to a mixture model to address the problem of heterogeneity with multiple data types. A new method will be presented for modelling functional data with 'spatially' indexed data, i.e., where the heterogeneity depends on factors such as region and individual patient information. Nonparametric and functional mixture models have also been developed for curve clustering in very complex systems in which the response curves may depend on a number of functional and non-functional covariates. Some numerical results from simulation studies and real applications will also be presented.
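
The GP building block of the GPFR model can be sketched with scikit-learn (a single curve only; the functional mean, mixture components and covariate structure of the talk are omitted):

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)[:, None]               # observation times
y = np.sin(2 * np.pi * t[:, 0]) + 0.1 * rng.standard_normal(50)

kernel = 1.0 * RBF(length_scale=0.1) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel).fit(t, y)
mean, sd = gp.predict(np.linspace(0, 1, 200)[:, None], return_std=True)
# mean/sd give the smoothed curve and pointwise uncertainty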

Mon 23 Feb, '09
CRiSM Seminar - Dr Yvonne Ho
A1.01

Dr Yvonne Ho, Imperial College London

Conditional accuracy and robustness

We shall review the classical conditionality principle in statistics and study conditional inference procedures under nonparametric regression models, conditional on the observed residuals. As the title suggests, we shall consider two issues: inference accuracy and robustness. An innovative procedure using a smoothing technique in conjunction with configural polysampling will be presented.

Thu 26 Feb, '09
CRiSM Seminar - Dr Abhir Bhalerao
A1.01

Dr Abhir Bhalerao, University of Warwick

Automatic Screening for Microaneurysms in Digital Images

Diabetic retinopathy is one of the major causes of blindness. However, diabetic retinopathy does not usually cause a loss of sight until it has reached an advanced stage. The earliest signs of the disease are microaneurysms (MAs), which appear as small red dots on retinal fundus images. Various screening programmes have been established in the UK and other countries to collect and assess images on a regular basis, especially in the diabetic population. A considerable amount of time and money is spent manually grading these images, a large percentage of which are normal. By automatically identifying the normal images, the manual workload and costs could be reduced greatly while increasing the effectiveness of the screening programmes. A novel method of microaneurysm detection from digital retinal screening images is presented. It is based on image filtering using complex-valued circular-symmetric filters and an eigen-image, followed by morphological analysis of the candidate regions to reduce the false-positive rate. The image processing algorithms will be presented with an evaluation on a typical set of 89 images from a published database. The resulting method is shown to have a best operating sensitivity of 82.6% at a specificity of 80.2%, which makes it useful for screening. The results are discussed in the context of a model of visual search and the ROC curves that it can predict.
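
The flavour of the detection step can be sketched with a generic blob filter (the complex-valued circular-symmetric filters and eigen-image analysis are specific to the talk's method; a Laplacian-of-Gaussian stands in here purely for illustration):

import numpy as np
from scipy.ndimage import gaussian_laplace

rng = np.random.default_rng(2)
img = rng.normal(0.5, 0.05, (128, 128))      # synthetic stand-in for a fundus image
img[60:63, 60:63] -= 0.3                     # a small dark dot, like an MA

response = gaussian_laplace(img, sigma=1.5)  # dark blobs give a positive peak
candidates = np.argwhere(response > response.mean() + 4 * response.std())
print(candidates)                            # candidate locations, to be pruned morphologically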

Fri 27 Feb, '09
CRiSM Seminar - Victor M Panaretos
A1.01

Victor M Panaretos, Institute of Mathematics, Ecole Polytechnique Federale de Lausanne

Modular Statistical Inference in Single Particle Tomography

What can be said about an unknown density function on $\mathbb{R}^n$ given a finite collection of projections onto random and unknown hyperplanes? This question arises in single particle electron microscopy, a powerful method that biophysicists employ to learn about the structure of biological macromolecules. The method images unconstrained particles, as opposed to particles fixed on a lattice (crystallography), and poses a variety of problems. We formulate and study statistically the problem of determining a structural model for a biological particle given random projections of its Coulomb potential density, observed through the electron microscope. Although unidentifiable (ill-posed), this problem can be seen to be amenable to a consistent modular statistical solution once it is viewed through the prism of shape theory.

Thu 5 Mar, '09
CRiSM Seminar - Dr Catherine Hurley
A1.01

Dr Catherine Hurley, National University of Ireland, Maynooth

Composition of statistical graphics: a graph-theoretic perspective

Visualization methods are crucial in data exploration, presentation and modelling. A carefully chosen graphic reveals information about the data, assisting the viewer in comparing and relating variables, cases, groups, clusters or model fits. A statistical graphic is composed of display components whose arrangement (in space or time) facilitates these comparisons. In this presentation, we take a graph-theoretic perspective on the graphical layout problem. The basic idea is that graphical layouts are essentially traversals of appropriately constructed mathematical graphs. We explore the construction of familiar scatterplot matrices and parallel coordinate displays from this perspective. We present graph traversal algorithms tailored to the graphical layout problem. Novel applications range from a new display for pairwise comparison of treatment groups, to a guided parallel coordinate display, and on to a road map for dynamic exploration of high-dimensional data.
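
One concrete instance of the idea: a layout in which every pair of treatment groups appears side by side at least once is an Eulerian trail of the complete graph, which Hierholzer's algorithm finds (a sketch for five groups; the talk's tailored algorithms go further):

from collections import defaultdict

def eulerian_trail(edges):
    # Hierholzer's algorithm on an undirected graph given as an edge list
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    stack, trail = [edges[0][0]], []
    while stack:
        v = stack[-1]
        if adj[v]:
            u = adj[v].pop()
            adj[u].remove(v)
            stack.append(u)
        else:
            trail.append(stack.pop())
    return trail[::-1]

k5 = [(i, j) for i in range(5) for j in range(5) if i < j]
print(eulerian_trail(k5))   # 11 positions; every pair of groups is adjacent somewhere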

Thu 12 Mar, '09
CRiSM Seminar - Dr Peter Craig
A1.01

Dr Peter Craig, University of Durham

Multivariate normal orthant probabilities - geometry, computation and application to statistics

The multivariate normal distribution is the basic model for multivariate continuous variability and uncertainty, and its properties are intrinsically interesting. The orthant probability (OP) is the probability that each component is positive; it is of practical importance both as the generalisation of a tail probability and as the likelihood function for multivariate probit models. Efficient quasi-Monte Carlo methods are available for approximating OPs but are unsuitable for high-precision calculations. However, accurate calculations are relatively straightforward for some covariance structures other than independence. I shall present the geometry of two ways to express general OPs in terms of these simpler OPs, discuss the computational consequences, and briefly illustrate the application of these methods to a classic application of multivariate probit modelling.
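
For a zero-mean normal vector, the OP equals the CDF of N(0, Sigma) at the origin, so the quasi-Monte Carlo approach mentioned above is one line in SciPy; the exchangeable trivariate case has the classical closed form 1/8 + (3/(4*pi))*arcsin(rho), giving a check:

import numpy as np
from scipy.stats import multivariate_normal

rho = 0.5
Sigma = np.full((3, 3), rho) + (1 - rho) * np.eye(3)   # equicorrelated covariance
op = multivariate_normal(mean=np.zeros(3), cov=Sigma).cdf(np.zeros(3))
print(op, 1/8 + 3 / (4 * np.pi) * np.arcsin(rho))      # both approximately 0.25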

Tue 17 Mar, '09
CRiSM Seminar - Dr Cristiano Varin
A1.01

Dr Cristiano Varin, Ca' Foscari University, Venice

Marginal Regression Models with Stationary Time Series Errors

This talk is concerned with regression analysis of serially dependent non-normal observations. Stemming from traditional linear regression models with stationary time series errors, a class of marginal models that can accommodate responses of any type is presented. Model fitting is performed by either exact or simulated maximum likelihood, depending on whether the response is continuous or not. Computational aspects are described in detail. Real data applications to time series of counts are considered for illustration. The talk is based on collaborative work with Guido Masarotto, University of Padova.

Thu 23 Apr, '09
CRiSM Seminar - Prof Hans Kuensch
A1.01

Prof Hans Kuensch, ETH Zurich

Ensemble Kalman filter versus particle filters

In high-dimensional state spaces, particle filters usually behave badly. In atmospheric science, the ensemble Kalman filter is used instead, which assumes a linear Gaussian observation model and a Gaussian prediction distribution. I will discuss some open problems from a statistical perspective.
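
A minimal stochastic EnKF analysis step, the update contrasted here with particle filters (a sketch assuming a linear Gaussian observation y = Hx + noise, with toy dimensions):

import numpy as np

def enkf_update(ens, y, H, R, rng):
    # ens: (N, d) forecast ensemble; returns the analysis ensemble
    N = ens.shape[0]
    X = ens - ens.mean(axis=0)
    P = X.T @ X / (N - 1)                          # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    perturbed = y + rng.multivariate_normal(np.zeros(len(y)), R, N)
    return ens + (perturbed - ens @ H.T) @ K.T

rng = np.random.default_rng(3)
ens = rng.normal(0.0, 1.0, (100, 2))               # 100 members, 2-dimensional state
post = enkf_update(ens, np.array([0.8]), np.array([[1.0, 0.0]]),
                   np.array([[0.1]]), rng)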

Thu 30 Apr, '09
CRiSM Seminar - Dr David MacKay
MS.03

David MacKay, University of Cambridge

Hands-free Writing

Keyboards are inefficient for two reasons: they do not exploit the predictability of normal language; and they waste the fine analogue capabilities of the user's muscles.  I describe Dasher, a communications system designed using information theory.  A person's gestures are a source of bits, and the sentences they wish to communicate are the sink.  We aim to maximise the number of bits per second conveyed from user into text.

Users can achieve single-finger writing speeds of 35 words per minute and hands-free writing speeds of 25 words per minute.

Dasher is free software, and it works in over one hundred languages.
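
A rough information-rate translation of those speeds, assuming Shannon's classic estimate of about one bit per character for English (and roughly five letters plus a space per word; neither figure is from the talk):

for wpm in (35, 25):
    chars_per_sec = wpm * 6 / 60          # characters conveyed per second
    print(wpm, "wpm is about", round(chars_per_sec, 1), "bits/s at 1 bit/char")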

Thu 7 May, '09
CRiSM Seminar - Prof T Bandyopadhyay
A1.01

Professor Tathagata Bandyopadhyay, Indian Institute of Management, Ahmedabad, India

Testing Equality of Means from Paired Data When The Labels Are Missing

Suppose (X_i, Y_i), i = 1, ..., n, represent a random sample of size n from a bivariate normal population. Suppose that for some reason the labels in each pair are missing. We consider the problem of testing the null hypothesis of equality of means based on such a messy data set. We will cite a few practical instances where such a situation may arise. Naturally, the standard t-test cannot be applied here, since one cannot label the components of each pair as 'X' or 'Y'. Instead, the observable pairs are (M_i, m_i), i = 1, ..., n, where M_i = max(X_i, Y_i) and m_i = min(X_i, Y_i). We will talk about a number of large sample tests based on (M_i, m_i) for testing the above hypothesis.
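
A small simulation showing what the observable data look like (the construction only; the talk's test statistics are its subject):

import numpy as np

rng = np.random.default_rng(4)
n = 500
X = rng.normal(1.0, 1.0, n)          # unlabelled components with means 1.0 and 1.3
Y = rng.normal(1.3, 1.0, n)
M, m = np.maximum(X, Y), np.minimum(X, Y)
# M + m = X + Y is still observed, and M - m = |X - Y| carries
# information about the mean difference even though the labels are lost
print((M + m).mean() / 2, (M - m).mean())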


Thu 14 May, '09
CRiSM Seminar - Prof Howell Tong
A1.01

Prof Howell Tong, LSE & Hong Kong University

Time Reversibility of Multivariate Linear Time Series

In the time series literature, time reversibility is often assumed either explicitly (if honest) or implicitly (if less so).  In reality, time reversibility is the exception rather than the rule.  The situation with multivariate time series is much more exotic as this seminar will explore.

Thu 4 Jun, '09
CRiSM Seminar - Dr Chendi Zhang
A1.01

Dr Chendi Zhang, Warwick Business School

Information Salience, Investor Sentiment and Stock Returns: The Case of British Soccer Betting

Soccer clubs listed on the London Stock Exchange provide a unique way of testing stock price reactions to different types of news. For each firm, two pieces of information are released on a weekly basis: experts' expectations about game outcomes through the betting odds, and the game outcomes themselves. The stock market reacts strongly to news about game results, generating significant abnormal returns and trading volumes. We find evidence that the abnormal returns for the winning teams do not reflect rational expectations but are high due to overreactions induced by investor sentiment. This is not the case for losing teams. There is no market reaction to the release of new betting information, although these betting odds are excellent predictors of the game outcomes. This discrepancy between the strong market reaction to game results and the lack of reaction to betting odds may result not only from overreaction to game results but also from the lack of informational content or salience of the betting information. Therefore, we also examine whether betting information can be used to predict short-run stock returns subsequent to the games. We reach mixed results: we conclude that investors ignore some non-salient public information such as betting odds, and that betting information predicts a stock price overreaction to game results which is influenced by investors' mood (especially when the teams are strongly expected to win).

Tue 9 Jun, '09
CRiSM Seminar - Dr Pulak Ghosh
A1.01

Dr Pulak Ghosh, Georgia State University, USA

Joint Modelling of Multivariate Longitudinal Data for Mixed Responses with Application to Multiple Sclerosis Data

Multiple sclerosis (MS) is one of the most common chronic neurological diseases in young adults, with around 2.5 million affected individuals worldwide (Compston 2006). The most common presenting symptoms are inflammation of the optic nerve, weakness, sensory disturbances, gait disturbances and bladder dysfunction. So far, only standard analysis methodology has been used to estimate risks for relapse occurrence. This mostly comprises single-endpoint survival analysis in which MRI information is shrunk to baseline values or aggregated measures such as means. In the present analysis we aim to establish a model that allows the description and prediction of the occurrence of relapses by considering processes in the brain (visualized on T1- and T2-weighted MRI) simultaneously. These complex processes, together with clinical baseline information, have never before been considered in one model. We will use our model to evaluate the strength of dependencies of multivariate longitudinal MRI measures with the occurrence of MS relapses.

Thu 11 Jun, '09
CRiSM Seminar - Dr Daniel Jackson
A1.01

Dr Daniel Jackson, MRC Biostatistics Unit

How much can we learn about missing data?  An exploration of a clinical trial in psychiatry

by Dan Jackson, Ian R White and Morven Leese

When a randomised controlled trial has missing outcome data, any analysis is based on untestable assumptions, for example that the data are missing at random, or less commonly on other assumptions about the missing data mechanism. Given such assumptions, there is an extensive literature on suitable analysis methods. However, little is known about what assumptions are appropriate. We use two sources of ancillary data to explore the missing data mechanism in a trial of adherence therapy in patients with schizophrenia: carer-reported (proxy) outcomes and the number of contact attempts. This requires making additional assumptions whose plausibility we discuss. We also perform sensitivity analyses to departures from missing at random. Wider use of techniques such as these will help to inform the choice of suitable assumptions for the analysis of randomised controlled trials.

Thu 25 Jun, '09
CRiSM Seminar - Dr Frederic Ferraty
A1.01

Dr Frederic Ferraty, University of Toulouse, France

Most-predictive design points for functional data predictors

(In coll. with Peter Hall and Philippe Vieu)

Functional data analysis (FDA) has found application in a great many fields, including biology, chemometrics, econometrics, geophysics, medical sciences, pattern recognition, ... For instance, a sample of curves or a sample of surfaces is a special case of functional data. In the example of near infrared (NIR) spectroscopy, X(t) denotes the absorbance of the NIR spectrum at wavelength t. The observation of X(t) for a discrete but large set of values (or design points) t produces what is called a spectrometric curve. A standard chemometrical dataset is one where X(t) corresponds to the NIR spectrum of a piece of meat and where a scalar response Y denotes a constituent of the piece of meat (e.g., fat or moisture).

Here, we are interested in regressing a scalar response Y on a functional predictor X(t), where t belongs to a discrete but large set I of "design points" (hundreds or thousands). From now on, one sets X := {X(t); t in I}. It is of practical interest to know which design points t have the greatest influence on the response Y. In this situation, we propose a method for choosing a small subset of design points to optimize prediction of a response variable Y. The selected design points are referred to as the most predictive design points, or covariates, and are computed using information contained in a set of independent observations (X_i, Y_i) of (X, Y). The algorithm is based on local linear regression, and calculations can be accelerated by using linear regression to preselect design points. Boosting can be employed to further improve predictive performance. We illustrate the usefulness of our ideas through examples drawn from chemometrics, and we develop theoretical arguments showing that the methodology can be applied successfully in a range of settings.
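
The linear-regression preselection step mentioned above can be sketched as a marginal screening of design points (synthetic data; the full method refines the chosen points with local linear regression and boosting):

import numpy as np

rng = np.random.default_rng(9)
n, p = 100, 300                                  # n curves, p design points
X = np.cumsum(rng.normal(size=(n, p)), axis=1)   # rough random "spectra"
beta = np.zeros(p); beta[[40, 150]] = 1.0        # two truly informative points
Y = X @ beta + rng.normal(0, 0.5, n)

Xc, Yc = X - X.mean(0), Y - Y.mean()
score = (Xc.T @ Yc) ** 2 / (Xc ** 2).sum(0)      # squared marginal association
print("preselected design points:", np.sort(np.argsort(score)[-5:]))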

Thu 8 Oct, '09
CRiSM Seminar - Peter Diggle
A1.01

Peter Diggle (Lancaster University and Johns Hopkins University)

Statistical Modelling for Real-time Epidemiology

Fri 16 Oct, '09
CRiSM Seminar - Arnaud Doucet
A1.01

Arnaud Doucet (ISM)

Forward Smoothing using Sequential Monte Carlo with Application to Recursive Parameter Estimation

Sequential Monte Carlo (SMC) methods are a widely used set of computational tools for inference in non-linear non-Gaussian state-space models. We propose new SMC algorithms to compute the expectation of additive functionals recursively. Compared to the standard path space SMC estimator, whose asymptotic variance increases quadratically with time even under favourable mixing assumptions, the asymptotic variance of the proposed SMC estimates increases only linearly with time. We show how this allows us to perform recursive parameter estimation using SMC algorithms which do not suffer from the particle path degeneracy problem.


Joint work with P. Del Moral (INRIA Bordeaux) & S.S. Singh (Cambridge University).
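
The baseline the abstract refers to, a path-space estimate of an additive functional carried along resampled particle paths, looks like this in a linear Gaussian toy model (the quadratic variance growth comes from the path degeneracy visible in the last line of the loop):

import numpy as np

rng = np.random.default_rng(5)
T, N = 50, 1000
x, ys = 0.0, []
for _ in range(T):                                 # simulate the state-space model
    x = 0.9 * x + rng.normal()
    ys.append(x + rng.normal())

particles = rng.normal(0, 1, N)
additive = np.zeros(N)                             # running sum_t x_t per path
for y in ys:
    particles = 0.9 * particles + rng.normal(0, 1, N)
    additive += particles
    logw = -0.5 * (y - particles) ** 2             # Gaussian observation density
    w = np.exp(logw - logw.max()); w /= w.sum()
    idx = rng.choice(N, N, p=w)                    # multinomial resampling
    particles, additive = particles[idx], additive[idx]
print("smoothed additive functional:", additive.mean())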

Thu 22 Oct, '09
CRiSM Seminar - Roman Belavkin
A1.01

Roman Belavkin (Middlesex University)

The effect of information constraints on decision-making and economic behaviour

Economic theory is based on the idea of rational agents acting according to their preferences. Mathematically, this is represented by maximisation of some utility or, if choices are made under uncertainty, of the expected utility function. Although this formalism has become dominant in optimisation, game theory and even AI, there is a degree of scepticism about the expected utility representation, especially among behavioural economists who often use paradoxical counter-examples dating back as far as Aristotle. I will try to convince you that many of these paradoxes can be avoided if the problem is treated from a learning theory point of view, where information constraints are explicitly taken into account. I will use methods of functional and convex analysis to demonstrate a geometric interpretation of the solution of an abstract optimal learning problem, and to show how this solution explains the mismatch between the normative and behavioural theories of decision-making.

Thu 5 Nov, '09
Warwick Oxford Joint Seminar (at Warwick)
PS1.28

Oxford-Warwick Joint Seminar (2 talks)

Speaker 1:  Andrew Stuart (University of Warwick)

Title: MCMC in High Dimensions

Abstract: Metropolis based MCMC methods are a flexible tool for sampling a wide variety of complex probability distributions. Nonetheless, their effective use depends very much on careful tuning of parameters, choice of proposal distribution and so forth. A thorough understanding of these issues in high dimensional problems is particularly desirable as they can be critical to the construction of a practical sampler.
 
In this talk we study MCMC methods based on random walk, Langevin and Hybrid Monte Carlo proposals, all of which are based on the discretization of a (sometimes stochastic) differential equation. We describe how to scale the time-step in this discretization to achieve optimal efficiency, and compare the resulting computational cost of the different methods. We initially confine our study to target distributions with a product structure but then show how the ideas may be extended to a wide class of non-product measures arising in applications; these arise from measures on a Hilbert space which are absolutely continuous with respect to a product measure.  We illustrate the ideas through application to a range of problems arising in molecular dynamics and in data assimilation in the ocean-atmosphere sciences.
 
The talk will touch on various collaborations with Alex Beskos (UCL), Jonathan Mattingly (Duke), Gareth Roberts (Warwick), Natesh Pillai (Warwick) and Chus Sanz-Serna (Valladolid).
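
The flavour of the product-structure results can be reproduced in a few lines: random-walk Metropolis on a d-dimensional standard normal with step size 2.38/sqrt(d), the classical optimal scaling, yields an acceptance rate near 0.234:

import numpy as np

rng = np.random.default_rng(6)
d, n_iter = 50, 20000
x = np.zeros(d)
logp = lambda z: -0.5 * z @ z                 # log-density up to a constant
acc = 0
for _ in range(n_iter):
    prop = x + (2.38 / np.sqrt(d)) * rng.standard_normal(d)
    if np.log(rng.random()) < logp(prop) - logp(x):
        x, acc = prop, acc + 1
print("acceptance rate:", acc / n_iter)       # close to the theoretical 0.234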


Speaker 2: Tom Nichols (University of Warwick and University of Oxford)


A Hierarchical Spatial Bayesian Model for Multisubject Functional MRI Data

Standard practice in Functional Magnetic Resonance Imaging (fMRI) is to use a 'mass-univariate' model, where linear models are fit independently at each spatial location. A fundamental assumption of this approach is that the image data have been spatially warped so that the anatomy of each subject's brain aligns. In practice, even after the best anatomical warping, practitioners find that individual subjects have activations in different locations (though still in the same general anatomic area). Within the mass-univariate framework the only recourse is to spatially smooth the data, causing the effects to be blurred out and allowing areas of common activation to be detected. Our approach is to fit a Bayesian hierarchical spatial model to the unsmoothed data. We model each subject's data with individual activation centres which are assumed to cluster about population centres. Our model thus allows for and explicitly estimates inter-subject heterogeneity in location, yet also makes precise inferences on the activation location in the population. We demonstrate the method on simulated and real data from a visual working memory experiment.
[joint with Lei Xu, Department of Biostatistics, Vanderbilt University, and Timothy Johnson, Department of Biostatistics, University of Michigan.]

Wed 11 Nov, '09
CRiSM Seminar - Tomasz Schreiber
A1.01

Professor Tomasz Schreiber (Nicolaus Copernicus University)
Polygonal Markov fields in the plane
Abstract: Polygonal Markov fields (PMFs), originally introduced by Arak, Clifford and Surgailis, can be regarded as random ensembles of non-intersecting planar polygonal contours with interaction determined by a rather flexible class of potentials. Not unexpectedly, such models share a number of important properties with the two-dimensional Ising model, including Ising-like phase transitions, spontaneous magnetisation and low temperature phase separation (Wulff construction). On the other hand, the polygonal fields exhibit remarkable features of their own, such as the consistency property and explicit expressions for many crucial numeric and functional characteristics (free energy, correlation functions, integral geometric characteristics). Arguably the most important property of polygonal fields is that they admit a rich class of graphical constructions, all yielding the same field and often used as a crucial tool in theoretical developments on PMFs. In this talk we take the algorithmic graphical constructions as the starting point for defining the polygonal Markov fields, rather than the usual Gibbsian formalism. This point of view is compatible with applications of the PMFs to Bayesian image segmentation, which we shall present (joint work with M.N.M. van Lieshout, R. Kluszczynski and M. Matuszak). Further, we shall also discuss our latest theoretical developments made possible by this approach, examples including the evaluation of higher order correlation functions, factorisation theorems and duality theory, where the dual object - the polygonal web - arises as the union of interacting critical branching polygonal walks in the plane. We shall conclude the talk by indicating existing open problems and conjectures on the PMFs.

Tue 17 Nov, '09
CRiSM Seminar - James Curran
D1.07 Complexity Seminar Rm

James Curran (Auckland)

Some issues in modern forensic DNA evidence interpretation

The forensic biology community has adopted new DNA typing technology relatively quickly as it has evolved over the last twenty years. However, the adoption of the statistical methodology used for the interpretation of this evidence has not been as fast. In this talk I will discuss classical forensic DNA interpretation and how changes in technology and thinking have led to challenges to the way we interpret evidence. I will present some relatively new models for evidence interpretation, and discuss future directions. This talk is aimed at a general audience.

Thu 19 Nov, '09
CRiSM Seminar - Anna Gottard
A1.01

Anna Gottard (Florence)
Graphical models for homologous factors
Abstract: Homologous factors are factors measured with the same categorical scale. They are commonly encountered in matched pairs studies, in attitudinal research where subjects’ opinions about some issue are observed under several conditions, or in longitudinal studies that repeatedly observe whether a condition is present at various times. In these cases, it can happen that the contingency table cross classifying the homologous factors shows a special structure, typically captured by symmetry and quasi-symmetry models. In this talk, I will present a class of graphical log linear models having symmetry and quasi-symmetry models as particular cases, and allowing situations of conditional independence and the presence of non-homologous factors. This class of models can be associated with a graph with coloured edges and nodes. (This is partially joint work with G.M. Marchetti and A. Agresti).

Fri 27 Nov, '09
CRiSM Seminar - Peter Muller
A1.01
Peter Müller
A Dependent Polya Tree Model
with Lorenzo Trippa and Wes Johnson

We propose a probability model for a family of unknown distributions indexed with covariates. The marginal model for each distribution is a Polya tree prior. The proposed model introduces the desired dependence across the marginal Polya tree models by defining dependent random branching probabilities of the unknown distributions.

An important feature of the proposed model is the easy centering of the nonparametric model around any parametric regression model. This is important for the motivating application to the proportional hazards (PH) model. We use the proposed model to implement nonparametric inference for survival regression. The proposed model allows us to center the nonparametric prior around parametric PH structures. In contrast to many available models that restrict the non-parametric extension of the PH model to the baseline hazard, the proposed model defines a family of random probability measures that are a priori centered around the PH model but allow any other structure. This includes, for example, crossing hazards, additive hazards, or any other structure supported by the data.
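
For readers unfamiliar with the marginal model, here is a draw from a finite-depth Polya tree prior centred on Uniform(0, 1), with the standard Beta(c j^2, c j^2) branching probabilities at level j (the talk's contribution, making these branching probabilities dependent, is omitted here):

import numpy as np

rng = np.random.default_rng(8)
depth, c = 8, 1.0
probs = np.ones(1)                     # mass of the current dyadic intervals
for j in range(1, depth + 1):
    left = rng.beta(c * j**2, c * j**2, size=probs.size)
    probs = np.column_stack([probs * left, probs * (1 - left)]).ravel()
# probs: random probabilities of the 2**depth dyadic subintervals of (0, 1)
print(probs.sum(), probs.max())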

Thu 3 Dec, '09
CRiSM Seminar - Serge Guillas (UCL)
A1.01

Serge Guillas (UCL)
Bivariate Splines for Spatial Functional Regression Models
We consider the functional linear regression model where the explanatory variable is a random surface and the response is a real random variable, in various situations where both the explanatory variable and the noise can be unbounded and dependent. Bivariate splines over triangulations represent the random surfaces. We use this representation to construct least squares estimators of the regression function with a penalization term. Under the assumption that the regressors in the sample span a large enough space of functions, bivariate spline approximation properties yield the consistency of the estimators. Simulations demonstrate the quality of the asymptotic properties on a realistic domain. We also carry out an application to ozone concentration forecasting over the US that illustrates the predictive skills of the method.
Finally, we present recent results of long-term seabed forecasting using this technique.

Thu 10 Dec, '09
CRiSM/Stats Seminar - Siem Jan Koopman
A1.01
Siem Jan Koopman
Free University Amsterdam
 
Title: Dynamic factor analysis and the dynamic modelling of the yield curve of interest rates.
 
Abstract:
A new approach to dynamic factor analysis by imposing smoothness restrictions on the factor loadings is proposed. A statistical procedure based on Wald tests that can be used to find a suitable set of such restrictions is presented. These developments are presented in the context of maximum likelihood estimation. The empirical illustration concerns term structure models but the methodology is also applicable in other settings. An empirical study using a data set of unsmoothed Fama-Bliss zero yields for US treasuries of different maturities is performed. The general dynamic factor model with and without smooth loadings is considered in this study together with models that are associated with Nelson-Siegel and arbitrage-free frameworks. These existing models can be regarded as special cases of the dynamic factor model with restrictions on the model parameters. Statistical hypothesis tests are performed in order to verify whether the restrictions imposed by the models are supported by the data. The main conclusion is that smoothness restrictions can be imposed on the loadings of dynamic factor models for the term structure of US interest rates.
[Joint work with Borus Jungbacker and Michel van der Wel]
Thu 28 Jan, '10
CRiSM Seminar - Jan Palczewski
A1.01
Dr Jan Palczewski (University of Leeds)
Why are Markowitz portfolio weights so volatile?
Markowitz theory of asset allocation is one of the very few research ideas that have made it into practical finance. Yet its investment recommendations exhibit incredible sensitivity to even the smallest variations in the estimation horizon or estimation techniques. Scientists as well as practitioners have put enormous effort into stabilizing the estimators of portfolios (with moderate success, according to some). However, there seems to be no simple quantitative method to measure portfolio stability. In this talk, I will derive analytical formulas that relate the mean and the covariance matrix of asset returns to the stability of the portfolio composition. These formulas allow for the identification of the main culprits behind the worse-than-expected performance of the Markowitz framework. In particular, I will question the common wisdom that puts the main responsibility on estimation errors of the mean.

This research is a spin-off of a consultancy project at the University of Warsaw regarding the allocation of the foreign reserves of the Polish Central Bank.
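
The sensitivity in question is easy to exhibit numerically: unconstrained mean-variance weights are proportional to Sigma^{-1} mu, and a tiny perturbation of the estimated mean moves them substantially (all numbers below are illustrative):

import numpy as np

rng = np.random.default_rng(7)
d = 10
A = rng.normal(size=(d, d))
Sigma = A @ A.T / d + 0.1 * np.eye(d)       # a made-up covariance of returns
mu = rng.normal(0.05, 0.02, d)              # estimated mean returns

w0 = np.linalg.solve(Sigma, mu)             # optimal weights, up to scale
w1 = np.linalg.solve(Sigma, mu + rng.normal(0, 0.005, d))   # re-estimated mean
print("relative change:", np.linalg.norm(w1 - w0) / np.linalg.norm(w0))
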
Thu 11 Feb, '10
CRiSM Seminar - Alexander Schied (Mannheim)
A1.01
Alexander Schied (Mannheim)
Mathematical aspects of market impact modeling
Abstract: In this talk, we discuss the problem of executing large orders in illiquid markets so as to optimize the resulting liquidity costs. There are several reasons why this problem is relevant. On the mathematical side, it leads to interesting nonlinearity effects that arise from the price feedback of strategies. On the economic side, it helps in understanding which market impact models are viable, because the analysis of order execution provides a test for the existence of undesirable properties of a model. In the first part of the talk, we present market impact models with transient price impact, modelling the resilience of electronic limit order books. In the second part of the talk, we consider the Almgren-Chriss market impact model and analyze the effects of risk aversion on optimal strategies using stochastic control methods. In the final part, we discuss effects that occur in a multi-player equilibrium.
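
For the second part, the Almgren-Chriss model with linear impact and risk aversion has the well-known closed-form liquidation schedule x(t) = X0 sinh(kappa (T - t)) / sinh(kappa T), with kappa = sqrt(lambda sigma^2 / eta); a quick sketch (parameter values are illustrative only):

import numpy as np

X0, T = 1e6, 1.0                    # shares to liquidate, trading horizon
sigma, eta, lam = 0.3, 1e-6, 2e-6   # volatility, temporary impact, risk aversion
kappa = np.sqrt(lam * sigma**2 / eta)
t = np.linspace(0, T, 11)
holdings = X0 * np.sinh(kappa * (T - t)) / np.sinh(kappa * T)
print(np.round(holdings))           # risk aversion front-loads the selling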
