Events

Thu 13 Jan, '11
-
CRiSM Seminar - Tilman Davies
A1.01

Tilman Davies (Massey University, NZ)

Refining Current Approaches to Spatial and Spatio-Temporal Modelling in Epidemiology

It is reasonable to expect both space and time to be important factors when investigating disease in human, animal and even plant populations. A common goal in many studies in geographical epidemiology, for example, is the identification of disease risk 'hotspots', in which spatial sub-regions corresponding to a statistically significant increase in the risk of infection are highlighted. More advanced problems involving not just space but space-time data, such as real-time disease surveillance, can be difficult to model due to complex correlation structures and computationally demanding operations. Decisions based on these kinds of analyses can range from the local to the national and even global levels. It is therefore important that we continue to improve statistical methodology in this relatively young field, and ensure that any theoretical benefits can flow through to practice.

This talk aims to give an overview of the PhD research currently underway in an effort to develop and implement refinements to spatial and spatio-temporal modelling. Highlights include the use of a spatially adaptive smoothing parameter for estimation of the kernel-smoothed relative-risk function, the development of a novel, computationally inexpensive method for calculating the associated spatial tolerance contours, the release of an R package implementing these capabilities, and the scope for improvement to current marginal minimum-contrast methods for parameter estimation in relevant stochastic models.
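
As a rough sketch of the underlying kernel relative-risk idea (fixed bandwidth only; the talk's refinement is precisely to make the bandwidth spatially adaptive), with hypothetical case and control locations:

import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical coordinates of case and control locations (n x 2 arrays).
rng = np.random.default_rng(0)
cases = rng.normal(0.0, 1.0, size=(200, 2))
controls = rng.normal(0.0, 1.5, size=(500, 2))

# Fixed-bandwidth kernel density estimates for each group.
f_cases = gaussian_kde(cases.T)
f_controls = gaussian_kde(controls.T)

# Log relative-risk surface rho(s) = log(f_cases(s) / f_controls(s)),
# evaluated on a regular grid; positive values suggest elevated risk.
xs, ys = np.meshgrid(np.linspace(-4, 4, 100), np.linspace(-4, 4, 100))
grid = np.vstack([xs.ravel(), ys.ravel()])
log_rr = (np.log(f_cases(grid)) - np.log(f_controls(grid))).reshape(xs.shape)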

Thu 20 Jan, '11
-
CRiSM Seminar - Jouni Kuha

Jouni Kuha (London School of Economics)

Sample group means in multilevel models: Sampling error as measurement error

Research questions for models for clustered data often concern the effects of cluster-level averages of individual-level variables. For example, data from a social survey might characterise neighbourhoods in terms of the average income, ethnic composition, etc. of the people within each neighbourhood. Unless the true values of such averages are known from some other source, they are typically estimated by within-cluster sample estimates, using data on the subjects in the observed data. This incurs a measurement error bias when these estimates are used as explanatory variables in subsequent modelling, even if the individual observations are measured without error. The measurement error variance can, however, be estimated from within-cluster variation, using knowledge of the sampling design within each cluster, and we can then apply relatively standard measurement error methods to adjust for the error. This talk considers such estimation for generalised linear mixed models, comparing common measurement error adjustments to naive analysis with no adjustment.
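
A minimal sketch of the general idea, using simulated clusters and a regression-calibration-style attenuation correction (one standard adjustment of the kind the talk compares; the data and model below are hypothetical):

import numpy as np

# Hypothetical clustered data: x[j] holds the individual-level covariate
# values sampled within cluster j; y[j] is a cluster-level outcome.
rng = np.random.default_rng(1)
true_means = rng.normal(0, 1, size=50)
x = [rng.normal(m, 2.0, size=20) for m in true_means]
y = 0.5 * true_means + rng.normal(0, 0.2, size=50)

xbar = np.array([xi.mean() for xi in x])                     # error-prone cluster means
err_var = np.array([xi.var(ddof=1) / len(xi) for xi in x])   # their sampling variances

# Naive slope of y on the estimated means, then a regression-calibration
# style correction: divide by the estimated reliability ratio.
beta_naive = np.cov(xbar, y)[0, 1] / np.var(xbar, ddof=1)
reliability = 1.0 - err_var.mean() / np.var(xbar, ddof=1)
beta_corrected = beta_naive / reliability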

Thu 27 Jan, '11
-
CRiSM Seminar - Alberto Sorrentino

Alberto Sorrentino (Warwick)

Bayesian filtering for estimation of brain activity in magnetoencephalography

Magnetoencephalography (MEG) is a sophisticated technique for measuring the tiny magnetic fields produced by brain activity. Relative to other functional neuroimaging techniques, MEG recordings feature an outstanding temporal sampling resolution, in principle allowing the study of neural dynamics on a millisecond-by-millisecond time scale. However, the spatial localization of neural currents from MEG data is an ill-posed inverse problem, i.e. a problem with infinitely many solutions. To mitigate the ill-posedness, a variety of parametric models of the neural currents have been proposed in the burgeoning neuroimaging literature. In particular, under suitable approximations, the problem of estimating brain activity from MEG data can be re-phrased as a Bayesian filtering problem with an unknown and time-varying number of sources.

In this talk I will first illustrate a statistical model of source localisation for MEG data which builds directly on the well-established Physics of the electro-magnetic brain field. The focus of the talk will then be to describe the application of a recently developed class of sequential Monte Carlo methods (particle filters) for estimation of the model parameters using empirical MEG data.
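
For readers unfamiliar with particle filters, here is a generic bootstrap particle filter on a toy scalar state-space model; the MEG setting is far richer, with an unknown, time-varying number of sources:

import numpy as np

# Bootstrap particle filter for the toy model
#   x_t = 0.9 * x_{t-1} + process noise,  y_t = x_t + observation noise.
# (Illustrative only; not the speaker's MEG source model.)
rng = np.random.default_rng(2)
T, N = 100, 1000
x_true, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal(0, 1.0)
    y[t] = x_true[t] + rng.normal(0, 0.5)

particles = rng.normal(0, 1, N)
estimates = np.zeros(T)
for t in range(1, T):
    particles = 0.9 * particles + rng.normal(0, 1.0, N)   # propagate
    logw = -0.5 * ((y[t] - particles) / 0.5) ** 2         # weight by likelihood
    w = np.exp(logw - logw.max()); w /= w.sum()
    particles = particles[rng.choice(N, N, p=w)]          # resample
    estimates[t] = particles.mean()                       # filtering mean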

Thu 3 Feb, '11
-
CRiSM Seminar - Simon Spencer
A1.01

Simon Spencer (Warwick)

Outbreak detection for campylobacteriosis in New Zealand

Identifying potential outbreaks of campylobacteriosis from a background of sporadic cases is made more difficult by the large spatial and temporal variation in incidence. One possible approach involves using Bayesian hierarchical models to simultaneously estimate spatial, temporal and spatio-temporal components of the risk of infection. By assuming that outbreaks are characterized by spatially localised periods of increased incidence, it becomes possible to calculate an outbreak probability for each potential disease cluster. The model correctly identifies known outbreaks in data from New Zealand for the period 2001 to 2007. Studies using simulated data have shown that by including epidemiological information in the model construction, this approach can outperform an established method.
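
A minimal sketch of the exceedance-probability idea, assuming posterior samples of a spatio-temporal risk component are available from some hierarchical-model sampler (the array shapes and threshold below are hypothetical, not the authors' specification):

import numpy as np

# "Outbreak probability" for each region-week as the posterior probability
# that the spatio-temporal log-risk component exceeds a threshold.
def outbreak_probability(u_samples, threshold=np.log(2.0)):
    """u_samples: (num_mcmc_draws, n_regions, n_weeks) array of posterior draws."""
    return (u_samples > threshold).mean(axis=0)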

Thu 17 Feb, '11
-
CRiSM Seminar - Wicher Bergsma
A1.01

Wicher Bergsma (LSE), with Marcel Croon and Jacques Hagenaars

Marginal Models for Dependent, Clustered, and Longitudinal Categorical Data

In the social, behavioural, educational, economic, and biomedical sciences, data are often collected in ways that introduce dependencies in the observations to be compared. For example, the same respondents are interviewed at several occasions, several members of networks or groups are interviewed within the same survey, or, within families, both children and parents are investigated. Statistical methods that take the dependencies in the data into account must then be used, e.g., when observations at time one and time two are compared in longitudinal studies. At present, researchers almost automatically turn to multi-level models or to GEE estimation to deal with these dependencies.

Despite the enormous potential and applicability of these recent developments, they require restrictive assumptions on the nature of the dependencies in the data. The marginal models of this talk provide another way of dealing with these dependencies, without the need for such assumptions, and can be used to answer research questions directly at the intended marginal level. The maximum likelihood method, with its attractive statistical properties, is used for fitting the models.

This talk is based on a recent book by the authors in the Springer series Statistics for the Social Sciences, see www.cmm.st.

Thu 24 Feb, '11
-
CRiSM Seminar - Iain Murray
A1.01

Iain Murray (University of Edinburgh)

Sampling latent Gaussian models and hierarchical modelling

The hyperparameters of hierarchical probabilistic models are sometimes not specified well enough to be optimized; in some scientific applications, inferring their posterior distribution is itself the objective of learning.

Using a simple example, I explain why Markov chain Monte Carlo (MCMC) simulation can be difficult, and offer a solution for latent Gaussian models.
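
One widely used MCMC transition for latent Gaussian models is elliptical slice sampling (Murray, Adams & MacKay, 2010); a compact sketch follows, not necessarily the solution presented in the talk. The current latent vector f, the log-likelihood function, and the prior Cholesky factor are assumed inputs.

import numpy as np

def elliptical_slice(f, log_lik, chol_prior, rng):
    """One elliptical slice sampling update for f ~ N(0, Sigma),
    with Sigma = chol_prior @ chol_prior.T."""
    nu = chol_prior @ rng.normal(size=f.shape)     # prior draw defining the ellipse
    log_y = log_lik(f) + np.log(rng.uniform())     # slice level
    theta = rng.uniform(0, 2 * np.pi)              # initial proposal angle
    lo, hi = theta - 2 * np.pi, theta
    while True:
        f_prop = f * np.cos(theta) + nu * np.sin(theta)
        if log_lik(f_prop) > log_y:
            return f_prop
        # Shrink the angle bracket towards zero and retry.
        if theta < 0:
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)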

Thu 24 Mar, '11
-
CRiSM Seminar - Carlos Navarette
A1.01

Carlos Navarette (Universidad de La Serena)

Similarity analysis in Bayesian random partition models

This work proposes a method, called Similarity Analysis, to assess the influence of individual observations on the clustering generated by any process that involves random partitions. It consists of decomposing the estimated similarity matrix into an intrinsic and an extrinsic part, coupled with a new approach for representing and interpreting partitions. Individual influence is associated with the particular ordering induced by individual covariates, which in turn provides an interpretation of the underlying clustering mechanism. Some applications in the context of Species Sampling Mixture Models will be presented, including Bayesian density estimation, dependent linear regression models and logistic regression for bivariate responses. Additionally, an application to time series modelling based on time-dependent Dirichlet processes will be outlined.
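
As background, the estimated similarity matrix in such analyses is typically the posterior co-clustering frequency computed from MCMC partition samples; a minimal sketch is below (the decomposition into intrinsic and extrinsic parts is the talk's contribution and is not shown):

import numpy as np

# S[i, j] = posterior probability that observations i and j share a cluster.
# 'labels' is a (num_samples, n) array of cluster labels, e.g. from a
# species-sampling mixture model sampler (hypothetical input).
def similarity_matrix(labels):
    labels = np.asarray(labels)
    return (labels[:, :, None] == labels[:, None, :]).mean(axis=0)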

Thu 28 Apr, '11
-
CRiSM Seminar - Sofia Massa
A1.01

Dr Sofia Massa (Oxford)

Combining information from graphical Gaussian models

In some recent applications, the interest is in combining information about relationships between variables from independent studies performed under partially comparable circumstances. One possible way of formalising this problem is to consider combinations of families of distributions respecting conditional independence constraints with respect to a graph G, i.e., graphical models. In this talk I will introduce some motivating examples of the research question and present some relevant types of combinations and their associated properties, in particular the relation between the properties of the combination and the structure of the graphs. Finally, I will discuss some issues related to the estimation of the parameters of the combination.
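
As background to the objects being combined: in a graphical Gaussian model, conditional independence between two variables given the rest corresponds to a zero in the precision (inverse covariance) matrix. A minimal sketch recovering the graph from a precision matrix:

import numpy as np

def graph_from_precision(K, tol=1e-8):
    """Edges of the graphical Gaussian model implied by precision matrix K:
    (i, j) is an edge iff K[i, j] != 0."""
    p = K.shape[0]
    return [(i, j) for i in range(p) for j in range(i + 1, p)
            if abs(K[i, j]) > tol]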

Thu 12 May, '11
-
CRiSM Seminar - Alexander Gorban
A1.01

Alexander Gorban (Leicester)

Geometry of Data Sets

Plan
1. The problem
2. Approximation of multidimensional data by low-dimensional objects
2.1. Principal manifolds and elastic maps
2.2. Principal graphs and topological grammars
2.3. Three types of complexity: geometrical, topological and algorithmic
3. Self-simplification of essentially high-dimensional sets
4. Terra Incognita between low-dimensional sets and self-simplified high-dimensional ones.
5. Conclusion and open problems

Thu 19 May, '11
-
CRiSM Seminar - Sumeetpal Singh
A1.01

Sumeetpal Singh (Cambridge)

Computing the filter derivative using Sequential Monte Carlo

Sequential Monte Carlo (SMC) methods are a widely used set of computational tools for inference in non-linear non-Gaussian state-space models. We propose an SMC algorithm to compute the derivative of the optimal filter in a Hidden Markov Model (HMM) and study its stability both theoretically and with numerical examples. Applications include calibrating the HMM from observed data in an online manner.

(Joint work with P. Del Moral and A. Doucet)

Mon 23 May, '11
-
CRiSM PhD Talks
MS.03

Chris Nam (Warwick)
Quantifying the Uncertainty in Change Points in Time Series

Bryony Hill (Warwick)
A Gradient Field Approach to Modelling Fibre-Generated Spatial Point Processes

Ashley Ford (Warwick)
Indian Buffet Epidemics - A non-parametric Bayesian Approach to Modelling Heterogeneity

Thu 26 May, '11
-
CRiSM Seminar - Postponed due to illness

Thu 2 Jun, '11
-
CRiSM Seminar - Evsey Morozov
A1.01

Evsey Morozov (Karelian Research Centre, Russia)

Regenerative queues: stability analysis and simulation

We present a general approach to the stability of regenerative queueing systems, based on the properties of the embedded renewal process of regenerations. Such a process obeys a useful characterization of the limiting remaining renewal time, which in many cases allows minimal stability conditions to be established by a two-step procedure. At the first step, a negative drift condition is used to prove that the basic process does not go to infinity (in probability), and at the second step, the finiteness of the mean regeneration period is proved. This approach has led to the effective stability analysis of models describing, in particular, such modern telecommunication systems as retrial queues and queues with optical buffers.

Moreover, we discuss the regenerative simulation method, covering both classical and non-classical (extended) regeneration, the latter allowing dependence between regeneration cycles.
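
A toy illustration of classical regenerative simulation for the M/M/1 queue, where regenerations occur when an arrival finds the system empty (parameters hypothetical; the talk treats far more general systems):

import numpy as np

# Regenerative simulation of an M/M/1 queue via the Lindley recursion
# W_{n+1} = max(W_n + S_n - A_n, 0). A regeneration occurs when a customer
# arrives to find the system empty (W = 0); stability requires rho < 1.
rng = np.random.default_rng(3)
lam, mu, n = 0.8, 1.0, 200_000
A = rng.exponential(1 / lam, n)   # interarrival times
S = rng.exponential(1 / mu, n)    # service times

W, acc, length = 0.0, 0.0, 0
cycle_sums, cycle_lens = [], []
for i in range(n):
    W = max(W + S[i] - A[i], 0.0)
    acc += W; length += 1
    if W == 0.0:                  # regeneration epoch: cycle ends
        cycle_sums.append(acc); cycle_lens.append(length)
        acc, length = 0.0, 0

# Classical regenerative (ratio) estimator of the mean waiting time.
mean_wait = np.sum(cycle_sums) / np.sum(cycle_lens)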

Wed 6 Jul, '11
-
Prof. Hernando Ombao - CRiSM Seminar
A1.01

Hernando Ombao

Intro to spectral analysis and coherence

Thu 7 Jul, '11
-
Prof. Hernando Ombao - CRiSM Seminar
A1.01

Hernando Ombao

Special topics on spectral analysis: principal components analysis, clustering and discrimination

Fri 8 Jul, '11
-
Prof. Hernando Ombao - CRiSM Seminar
A1.01

Hernando Ombao

Analysis of non-stationary time series

Thu 6 Oct, '11
-
CRiSM Seminar - Marek Kimmel (Rice University, Houston)
A1.01

Marek Kimmel (Rice University, Houston)

Modeling the mortality reduction due to computed tomography screening for lung cancer

The efficacy of computed tomography (CT) screening for lung cancer remains controversial despite the fact that encouraging results from the National Lung Screening Trial are now available. In this study, the authors used data from a single-arm CT screening trial to estimate the mortality reduction using a modeling-based approach to construct a control comparison arm.

Mon 17 Oct, '11
-
CRiSM Seminar - Atanu Biswas (Indian Statistical Institute)
B1.01

Atanu Biswas (Indian Statistical Institute)

Comparison of treatments and data-dependent allocation for circular data from a cataract surgery

Circular data is a natural outcome in many biomedical studies, e.g. some measurements in ophthalmologic studies, degrees of rotation of the hand or waist, etc. With reference to a real data set on astigmatism induced by two types of cataract surgery, we carry out some two-sample testing problems, including a Behrens-Fisher-type test in the circular setup. Response-adaptive designs are used in phase III clinical trials to allocate a larger proportion of patients to the better treatment. There is no available work on response-adaptive designs for circular data. Here we provide some response-adaptive designs where the responses are of a circular nature: first an ad hoc allocation design, and then an optimal design. A detailed simulation study and an analysis of the data set, including redesigning the cataract surgery data, are carried out.
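
Not the Behrens-Fisher-type test of the talk, but a generic illustration of comparing two circular samples: a permutation test on the difference of circular means, for hypothetical astigmatism angles:

import numpy as np
from scipy.stats import circmean

# Hypothetical induced-astigmatism axes (radians) for two surgery types.
rng = np.random.default_rng(4)
grp1 = rng.vonmises(mu=0.3, kappa=2.0, size=40)
grp2 = rng.vonmises(mu=0.8, kappa=2.0, size=40)

def angdiff(a, b):
    """Smallest angular difference between two directions."""
    return np.angle(np.exp(1j * (a - b)))

obs = abs(angdiff(circmean(grp1), circmean(grp2)))

# Permutation test: reshuffle group labels and recompute the statistic.
pooled, n1 = np.concatenate([grp1, grp2]), len(grp1)
stats = []
for _ in range(5000):
    rng.shuffle(pooled)
    stats.append(abs(angdiff(circmean(pooled[:n1]), circmean(pooled[n1:]))))
p_value = np.mean(np.array(stats) >= obs)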

Joint work with Somak Dutta (University of Chicago), Arnab Kumar Laha (Indian Institute of Management, Ahmedabad), Partho Bakshi (Disha Eye Hospitals, Barrackpore, India).

Thu 20 Oct, '11
-
Joint CRiSM-Systems Biology Seminar
MOAC Seminar Room, Coventry House

Chris Brien (University of South Australia)

Robust Microarray Experiments by Design: A Multiphase Framework

This seminar will outline a statistical approach to the design of microarray experiments, taking account of all the experimental phases involved from initial sample collection to assessment of gene expression. The approach being developed is also highly relevant for other high-throughput technologies. This seminar should be of interest to all those working with experiments using microarray and other high-throughput technologies, as well as to statisticians.

Thu 3 Nov, '11
-
CRiSM Seminar - Scott Schmidler (Duke University)
MS.01

Scott Schmidler (Duke University)

Bayesian Shape Matching for Protein Structure Alignment and Phylogeny

Understanding protein structure and function remains one of the great post-genome challenges of biology and molecular medicine. The 3D structure of a protein provides fundamental insights into its biological function, mechanism, and interactions, and plays a key role in drug design. We have developed a Bayesian approach to modeling protein structure families, using methods adapted from the statistical theory of shape. Our approach provides natural solutions to a variety of problems in the field, including pairwise and multiple alignment for the study of conservation and variability, algorithms for flexible matching, and the impact of alignment uncertainty on phylogenetic tree reconstruction. Our recent efforts focus on extension to full evolutionary stochastic process models, which significantly improve upon sequence-based phylogeny when divergence times are large.
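
As background, a classical (non-Bayesian) building block: ordinary least-squares Procrustes superposition of two matched landmark configurations. The talk's Bayesian approach instead treats the alignment itself as uncertain; this sketch only gives the point estimate.

import numpy as np

def procrustes_align(X, Y):
    """Rigid-body (Kabsch) alignment of moving configuration Y onto target X,
    both n x 3 arrays of matched landmarks; returns aligned Y and RMSD."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    H = Yc.T @ Xc
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    Y_aligned = Yc @ R.T + X.mean(0)
    rmsd = np.sqrt(((Xc - Yc @ R.T) ** 2).sum() / len(X))
    return Y_aligned, rmsd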

Thu 3 Nov, '11
-
CRiSM Seminar - Dave Woods (Southampton)
A1.01

Dave Woods (University of Southampton)

Design of experiments for Generalised Linear (Mixed) Models

Generalised Linear and Generalised Linear Mixed Models (GLMs and GLMMs) may be used to describe data from a range of experiments in science, technology and industry. Example experiments with binary data come from areas such as crystallography, food science and aeronautical engineering. If the experiment is performed in blocks, e.g. subjects in a clinical trial or batches in manufacturing, a mixed model with random block effects allows the estimation of either subject-specific or population-averaged treatment effects and induces an intra-block correlation structure for the response.

Finding optimal or efficient designs for GLMs and variants is complicated by the dependence of design performance on the values of the unknown model parameters. We describe methods for finding (pseudo) Bayesian designs that average a function of the information matrix across a prior distribution, and assess the resulting designs using simulation. The methods can also be extended to account for uncertainty in the linear predictor and link function, or to find designs for models with nonlinear predictors.
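A minimal sketch of a pseudo-Bayesian D-criterion for a logistic regression design: average the log-determinant of the Fisher information over prior draws of the parameters (a generic illustration of the idea, not the authors' algorithm):

import numpy as np

def bayesian_d_criterion(X, beta_draws):
    """Pseudo-Bayesian D-criterion for a candidate design.
    X: n x p model matrix; beta_draws: m x p draws from the prior on beta."""
    total = 0.0
    for beta in beta_draws:
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)                   # GLM weights for the logit link
        info = X.T @ (X * W[:, None])       # Fisher information X' W X
        total += np.linalg.slogdet(info)[1]
    return total / len(beta_draws)

# Candidate designs are then compared by this criterion and the best kept.
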

For GLMMs, the search for designs is complicated by the fact that the information matrix is not available in closed form. We make use of analytic and computational approximations, and also an alternative marginal model and generalised estimation equations.

Thu 17 Nov, '11
-
CRiSM Seminar - Nick Chater (Warwick Business School)
A1.01
Nick Chater (Warwick Business School)
Is the brain a Bayesian?

Almost all interesting problems that the brain solves involve probabilistic inference, and the brain is clearly astonishingly effective at solving such problems. A substantial movement in cognitive science, neuroscience and artificial intelligence has suggested that the brain may, to some approximation, be a Bayesian. This talk considers in what sense, if any, this might be true, and asks how it might be that a Bayesian brain is, nonetheless, so poor at explicit probabilistic reasoning.

Thu 1 Dec, '11
-
CRiSM Seminar - Mark Strong
A1.01

Mark Strong (University of Sheffield)

Managing Structural Uncertainty in Health Economic Decision Models

It was George Box who famously wrote ‘Essentially, all models are wrong’. Given our limited understanding of the highly complex world in which we live this statement seems entirely reasonable. Why then, in the context of health economic decision modelling, do we often act as if our models are right even if we know that they are wrong?

Imagine we have built a deterministic mathematical model to predict the costs and health effects of a new treatment, in comparison with an existing treatment. The model will be used by NICE to inform the decision as to whether to recommend the new treatment for use in the NHS.

The inputs to the model are uncertain, and we quantify the effect of this input uncertainty on the model output using Monte Carlo methods. We may even quantify the value of obtaining more information. We present our results to NICE as a fait accompli.
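
A toy version of such an analysis for a two-treatment decision, including the expected value of perfect information (all numbers hypothetical; note that none of this addresses the structural error discussed below):

import numpy as np

# Probabilistic sensitivity analysis: draw uncertain inputs, push them
# through a (hypothetical) net-benefit model, and compute EVPI.
rng = np.random.default_rng(5)
m = 100_000
effect = rng.normal(0.4, 0.15, m)      # uncertain incremental effect (QALYs)
cost = rng.normal(3000, 800, m)        # uncertain incremental cost
wtp = 20_000                           # willingness to pay per QALY

# Net benefit of each option per draw: 0 for the existing treatment,
# wtp * effect - cost (incremental) for the new one.
nb = np.column_stack([np.zeros(m), wtp * effect - cost])
best_on_average = nb.mean(axis=0).max()   # value of deciding now
average_of_best = nb.max(axis=1).mean()   # value under perfect information
evpi = average_of_best - best_on_average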

But, if we believe George Box then surely we should consider that our model output, our uncertainty analysis, and our estimates of the value of information are all ‘wrong’ because they are generated by a model that is ‘wrong’! The challenge is to quantify how wrong, and then determine the value of improving the model.

This seminar will explore the problem of structural uncertainty in health economic decision models, along with some suggested approaches to managing this uncertainty.

Mon 16 Jan, '12
-
CRiSM Seminar - Shinto Eguchi (Institute of Statistical Mathematics, Japan)
C1.06

Shinto Eguchi (Institute of Statistical Mathematics, Japan)

Maximization of a generalized t-statistic for linear discrimination in the two group classification problem

We discuss a statistical method for the classification problem with two groups, labelled 0 and 1. We envisage a situation in which the conditional distribution given label 0 is well specified by a normal distribution, but the conditional distribution given label 1 is not well modelled by any specific distribution. Typically, in a case-control study the distribution in the control group can be assumed to be normal, while the distribution in the case group may depart from normality. In this situation the maximum t-statistic for linear discrimination, or equivalently Fisher's linear discriminant function, may not be optimal. We propose a class of generalized t-statistics and study their asymptotic consistency and normality. The optimal generalized t-statistic in the sense of asymptotic variance is derived in a semi-parametric manner, and its statistical performance is confirmed in several numerical experiments.
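
For reference, the linear combination w maximizing the ordinary two-sample t-statistic is Fisher's discriminant direction, w proportional to S^{-1}(xbar1 - xbar0) with S the pooled covariance; a minimal sketch follows (the talk's generalized t-statistics replace this criterion):

import numpy as np

def fisher_direction(X0, X1):
    """Direction maximizing the two-sample t-statistic for groups X0, X1
    (n0 x p and n1 x p arrays), i.e. Fisher's linear discriminant."""
    m0, m1 = X0.mean(0), X1.mean(0)
    n0, n1 = len(X0), len(X1)
    S = ((n0 - 1) * np.cov(X0, rowvar=False)
         + (n1 - 1) * np.cov(X1, rowvar=False)) / (n0 + n1 - 2)
    w = np.linalg.solve(S, m1 - m0)
    return w / np.linalg.norm(w)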

Thu 2 Feb, '12
-
CRiSM Seminar - Theodor Stewart
A1.01

Theodor Stewart (University of Cape Town)

Principles and Practice of Multicriteria Decision Analysis

The role of multicriteria decision analysis (MCDA) in the broader context of decision science will be discussed. We will review the problem structuring needs of MCDA, and caution against over-simplistic approaches. Different schools of thinking in MCDA, primarily for deterministic problems, will be introduced, to demonstrate that even such problems include many complexities and pitfalls. The practicalities will be illustrated by means of value function methods (and perhaps goal programming if time permits). We will conclude with consideration of the impact of uncertainty on MCDA and the role of scenario planning in this regard.

Thu 16 Feb, '12
-
CRiSM Seminar - Yee Whye Teh
A1.01

Yee Whye Teh (Gatsby Computational Neuroscience Unit, UCL)

A Bayesian nonparametric model for genetic variations based on fragmentation-coagulation processes

Hudson's coalescent with recombination (also known as the ancestral recombination graph, or ARG) is a well-accepted model of genetic variation in populations. With growing amounts of population genetics data, demand for probabilistic models to analyse such data is strong, and the ARG is a very natural candidate. Unfortunately, posterior inference in the ARG is intractable, and a number of approximations and alternatives have been proposed. A popular class of alternatives is based on hidden Markov models (HMMs), which can be understood as approximating the tree-structured genealogies at each point of the chromosome with a partition of the observed haplotypes. However, due to the way HMMs parametrize partitions using latent states, they suffer from significant label-switching issues affecting the quality of posterior inferences.

We propose a novel Bayesian nonparametric model for genetic variations based on Markov processes over partitions called fragmentation-coagulation processes. In addition to some interesting properties, our model does not suffer from the label-switching issues of HMMs. We derive an efficient Gibbs sampler for the model and report results on genotype imputation.
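
For orientation, the state space here consists of partitions of the haplotypes, and fragmentation-coagulation processes are constructed so that the partition at each chromosome location is marginally a Chinese restaurant process (CRP) partition. Purely as an illustration of that state space, a minimal CRP sampler:

import numpy as np

def sample_crp(n, alpha, rng):
    """Draw a random partition of n items from a Chinese restaurant
    process with concentration parameter alpha; returns cluster labels."""
    labels = np.zeros(n, dtype=int)
    counts = []
    for i in range(n):
        # Join an existing cluster with prob. proportional to its size,
        # or start a new one with prob. proportional to alpha.
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)
        else:
            counts[k] += 1
        labels[i] = k
    return labels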

Joint work with Charles Blundell and Lloyd Elliott

Thu 1 Mar, '12
-
CRiSM Seminar - Stephen Connor
C1.06

Stephen Connor (University of York)

State-dependent Foster-Lyapunov criteria

Foster-Lyapunov drift criteria are a useful way of proving results about the speed of convergence of Markov chains. Most of these criteria involve examining the change in expected value of some function of the chain over one time step. However, in some situations it is more convenient to look at the change in expectation over a longer time period, which perhaps varies with the current state of the chain.
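
A numerical illustration with a reflected random walk with negative mean increments, using V(x) = x: the k-step drift at a large state is roughly -0.2k, so a longer, state-dependent horizon yields a stronger negative drift (a toy check, not the talk's theory):

import numpy as np

rng = np.random.default_rng(6)

# Reflected random walk X_{n+1} = max(X_n + Z, 0), Z ~ N(-0.2, 1).
def drift(x, steps, reps=20_000):
    """Monte Carlo estimate of E[V(X_steps) | X_0 = x] - V(x) with V(x) = x."""
    X = np.full(reps, float(x))
    for _ in range(steps):
        X = np.maximum(X + rng.normal(-0.2, 1.0, reps), 0.0)
    return X.mean() - x

for k in (1, 5, 25):
    print(k, drift(x=30.0, steps=k))   # roughly -0.2, -1.0, -5.0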

This talk will review some joint work with Gersende Fort (CNRS-TELECOM ParisTech), looking at when such state-dependent drift conditions hold and (perhaps more interestingly) what can be inferred from them. (Sadly I can't promise many pictures on this occasion, but I do promise not to show any proofs!)

Thu 1 Mar, '12
-
CRiSM Seminar - Graham Wood
A1.01

Graham Wood (Macquarie University and Warwick Systems Biology)

Normalization of ratio data

Quantitative mass spectrometry techniques are commonly used for comparative proteomic analysis in order to provide relative quantitation between samples. For example, in attempting to find the proteins expressed in ovarian cancer, the quantities of a given protein are assessed by mass spectrometry in separate samples of both cancerous and healthy cells. To account for the variable “loading” (the total volumes of samples) from one sample to the other, a normalization procedure is required. A common approach to normalization is to use internal standards, proteins that are assumed to display only minimal changes in abundance between the samples under comparison. A normalization procedure then allows adjustment of the data, so enabling true relative quantities to be reported.

Normalization is determined by centring the symmetrized ratios (say, cancerous over healthy) of the internal-standards data. This presentation makes two contributions to an understanding of ratio normalization. First, the customary centring of logarithmically transformed ratios (frequently used, for example, in microarray analyses) is shown to attend not only to centring but also to minimisation of the spread of the symmetrized data. Second, the normalization problem is set in a larger context, allowing normalization to be achieved based on a symmetrization which carries the ratios to approximate normality, so increasing the power with which under- or over-expressed proteins can be detected. Both simulated and real data will be used to illustrate the new method.
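
A minimal sketch of the customary log-ratio centring with internal standards (the talk's proposal replaces the log by a symmetrization chosen to achieve approximate normality):

import numpy as np

def normalize_ratios(ratios, is_standard):
    """Centre log-ratios on the internal standards, then apply the same
    loading correction to every protein.
    ratios: array of (say) cancerous/healthy ratios; is_standard: bool mask."""
    log_r = np.log2(ratios)
    offset = np.median(log_r[is_standard])   # loading correction
    return 2.0 ** (log_r - offset)           # normalized ratios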

Wed 14 Mar, '12
-
CRiSM Seminar - Heather Battey
A1.01

Heather Battey (University of Bristol)

Further details to follow