Events
Thu 13 Jan, '11- |
CRiSM Seminar - Tilman DaviesA1.01Tilman Davies (Massey University, NZ) Refining Current Approaches to Spatial and Spatio-Temporal Modelling in Epidemiology It is reasonable to expect both space and time to be important factors when investigating disease in human, animal and even plant populations. A common goal in many studies in geographical epidemiology, for example, is the identification of disease risk 'hotspots': spatial sub-regions corresponding to a statistically significant increase in the risk of infection. More advanced problems involving not just spatial but spatio-temporal data, such as real-time disease surveillance, can be difficult to model due to complex correlation structures and computationally demanding operations. Decisions based on these kinds of analyses can range from the local to the national and even global levels. It is therefore important that we continue to improve statistical methodology in this relatively young field, and ensure any theoretical benefits can flow through in practice. This talk gives an overview of the PhD research currently underway in an effort to develop and implement refinements to spatial and spatio-temporal modelling. Notable contributions include the use of a spatially adaptive smoothing parameter for estimation of the kernel-smoothed relative-risk function, the development of a novel, computationally inexpensive method for calculating the associated spatial tolerance contours, the release of an R package implementing these capabilities, and the scope for improving the current marginal minimum-contrast methods for parameter estimation in relevant stochastic models. |
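The spatially adaptive kernel estimator mentioned in the abstract can be illustrated in outline. The sketch below is not the speaker's implementation; the function names and the Abramson-style square-root bandwidth rule are my assumptions. It shows the basic idea of a case/control density ratio in which bandwidths shrink where a pilot density is high:

```python
import numpy as np

def gauss_kde(points, grid, bw):
    """Gaussian KDE evaluated at grid points; bw may be a scalar
    or a per-point array (the adaptive case)."""
    bw = np.broadcast_to(np.asarray(bw, float), (len(points),))
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2 * bw**2)) / (2 * np.pi * bw**2)
    return k.mean(axis=1)

def adaptive_log_relative_risk(cases, controls, grid, pilot_bw=0.5):
    """Log relative-risk surface with Abramson-style adaptive bandwidths:
    the bandwidth at each data point scales as pilot_density**-0.5,
    normalised by the geometric mean of the pilot density."""
    densities = []
    for pts in (cases, controls):
        pilot = gauss_kde(pts, pts, pilot_bw)  # pilot density at the data points
        h = pilot_bw * np.sqrt(np.exp(np.mean(np.log(pilot))) / pilot)
        densities.append(gauss_kde(pts, grid, h))
    f, g = densities
    return np.log(f / g)
```

With identical case and control point patterns the estimated log relative risk is identically zero, which makes a convenient sanity check.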
|
Thu 20 Jan, '11- |
CRiSM Seminar - Jouni KuhaJouni Kuha (London School of Economics) Sample group means in multilevel models: Sampling error as measurement error Research questions for models for clustered data often concern the effects of cluster-level averages of individual-level variables. For example, data from a social survey might characterise neighbourhoods in |
|
Thu 27 Jan, '11- |
CRiSM Seminar - Alberto SorrentinoAlberto Sorrentino (Warwick) Bayesian filtering for estimation of brain activity in magnetoencephalography Magnetoencephalography (MEG) is a sophisticated technique for measuring the tiny magnetic fields produced by brain activity. Relative to other functional neuroimaging techniques, MEG recordings feature outstanding temporal resolution, in principle allowing the study of neural dynamics on a millisecond-by-millisecond time scale. However, the spatial localisation of neural currents from MEG data is an ill-posed inverse problem, i.e. a problem with infinitely many solutions. To mitigate the ill-posedness, a variety of parametric models of the neural currents have been proposed in the burgeoning neuroimaging literature. In particular, under suitable approximations the problem of estimating brain activity from MEG data can be re-phrased as a Bayesian filtering problem with an unknown and time-varying number of sources. In this talk I will first illustrate a statistical model of source localisation for MEG data which builds directly on the well-established physics of the electromagnetic brain field. The focus of the talk will then be the application of a recently developed class of sequential Monte Carlo methods (particle filters) to the estimation of the model parameters from empirical MEG data. |
|
Thu 3 Feb, '11- |
CRiSM Seminar - Simon SpencerA1.01Simon Spencer (Warwick) Outbreak detection for campylobacteriosis in New Zealand Identifying potential outbreaks of campylobacteriosis from a background of sporadic cases is made more difficult by the large spatial and temporal variation in incidence. One possible approach involves using Bayesian hierarchical models to simultaneously estimate spatial, temporal and spatio-temporal components of the risk of infection. By assuming that outbreaks are characterized by spatially localised periods of increased incidence, it becomes possible to calculate an outbreak probability for each potential disease cluster. The model correctly identifies known outbreaks in data from New Zealand for the period 2001 to 2007. Studies using simulated data have shown that by including epidemiological information in the model construction, this approach can outperform an established method. |
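Given posterior samples of the spatio-temporal risk component, the per-cluster outbreak probabilities described above reduce to a simple exceedance calculation. A sketch follows; the array layout and the doubling-of-risk threshold are illustrative assumptions, not the speaker's choices:

```python
import numpy as np

def outbreak_probability(u_samples, threshold=np.log(2.0)):
    """Posterior probability that the spatio-temporal log-risk component
    exceeds a threshold (here: a doubling of risk, log 2), per region-week.
    u_samples: array of shape (n_mcmc_draws, n_regions, n_weeks)."""
    return (u_samples > threshold).mean(axis=0)
```

Each entry of the result is the Monte Carlo estimate of an exceedance probability, so a cell flagged in every posterior draw gets probability 1.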
|
Thu 17 Feb, '11- |
CRiSM Seminar - Wicher BergsmaA1.01Wicher Bergsma (LSE), with Marcel Croon and Jacques Hagenaars: Marginal Models for Dependent, Clustered, and Longitudinal Categorical Data
In the social, behavioural, educational, economic, and biomedical sciences, data are often collected in ways that introduce dependencies in the observations to be compared. For example, the same respondents are interviewed at several occasions, several members of networks or groups are interviewed within the same survey, or, within families, both children and parents are investigated. Statistical methods that take the dependencies in the data into account must then be used, e.g., when observations at time one and time two are compared in longitudinal studies. At present, researchers almost automatically turn to multi-level models or to GEE estimation to deal with these dependencies. Despite the enormous potential and applicability of these recent developments, they require restrictive assumptions on the nature of the dependencies in the data. The marginal models of this talk provide another way of dealing with these dependencies, without the need for such assumptions, and can be used to answer research questions directly at the intended marginal level. The maximum likelihood method, with its attractive statistical properties, is used for fitting the models. This talk is based on a recent book by the authors in the Springer series Statistics for the Social Sciences, see www.cmm.st.
|
|
Thu 24 Feb, '11- |
CRiSM Seminar - Iain MurrayA1.01Iain Murray (University of Edinburgh) Sampling latent Gaussian models and hierarchical modelling Sometimes hyperparameters of hierarchical probabilistic models are not well-specified enough to be optimized. In some scientific applications inferring their posterior distribution is the objective of learning. Using a simple example, I explain why Markov chain Monte Carlo (MCMC) simulation can be difficult, and offer a solution for latent Gaussian models. |
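The solution Murray has proposed for latent Gaussian models (with Adams and MacKay) is elliptical slice sampling. A minimal sketch of one update is below, assuming a zero-mean Gaussian prior supplied via its Cholesky factor; the interface is my own, not code from the talk:

```python
import numpy as np

def elliptical_slice(f, chol_sigma, log_lik, rng):
    """One elliptical slice sampling update for a target proportional to
    N(f; 0, Sigma) * exp(log_lik(f)), with chol_sigma the Cholesky factor
    of Sigma. Always accepts; the bracket shrinks toward the current state."""
    nu = chol_sigma @ rng.standard_normal(len(f))   # prior draw defining the ellipse
    log_y = log_lik(f) + np.log(rng.uniform())      # slice level
    theta = rng.uniform(0.0, 2.0 * np.pi)
    theta_min, theta_max = theta - 2.0 * np.pi, theta
    while True:
        f_new = f * np.cos(theta) + nu * np.sin(theta)
        if log_lik(f_new) > log_y:
            return f_new
        if theta < 0.0:                              # shrink the bracket toward 0
            theta_min = theta
        else:
            theta_max = theta
        theta = rng.uniform(theta_min, theta_max)
```

The update has no tunable step size, which is precisely its appeal for hyperparameter-free sampling of latent Gaussian fields.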
|
Thu 24 Mar, '11- |
CRiSM Seminar - Carlos NavaretteA1.01Carlos Navarette (Universidad de La Serena) Similarity analysis in Bayesian random partition models This work proposes a method to assess the influence of individual observations in the clustering generated by any process that involves random partitions. It is called Similarity Analysis. It basically consists of decomposing the estimated similarity matrix into an intrinsic and an extrinsic part, coupled with a new approach for representing and interpreting partitions. Individual influence is associated with the particular ordering induced by individual covariates, which in turn provides an interpretation of the underlying clustering mechanism. Some applications in the context of Species Sampling Mixture Models will be presented, including Bayesian density estimation, dependent linear regression models and logistic regression for bivariate response. Additionally, an application to time series modelling based on time-dependent Dirichlet processes will be outlined. |
|
Thu 28 Apr, '11- |
CRiSM Seminar - Sofia MassaA1.01Dr Sofia Massa (Oxford) Combining information from graphical Gaussian models In some recent applications, the interest is in combining information about relationships between variables from independent studies performed under partially comparable circumstances. One possible way of formalising this problem is to consider the combination of families of distributions respecting conditional independence constraints with respect to a graph G, i.e., graphical models. In this talk I will introduce some motivating examples of the research question and present some relevant types of combinations and their associated properties, in particular the relation between the properties of the combination and the structure of the graphs. Finally, I will discuss some issues related to the estimation of the parameters of the combination. |
|
Thu 12 May, '11- |
CRiSM Seminar - Alexander GorbanA1.01Alexander Gorban (Leicester) Geometry of Data Sets |
|
Thu 19 May, '11- |
CRiSM Seminar - Sumeetpal SinghA1.01Sumeetpal Singh (Cambridge) Computing the filter derivative using Sequential Monte Carlo Sequential Monte Carlo (SMC) methods are a widely used set of computational tools for inference in non-linear non-Gaussian state-space models. We propose an SMC algorithm to compute the derivative of the optimal filter in a Hidden Markov Model (HMM) and study its stability both theoretically and with numerical examples. Applications include calibrating the HMM from observed data in an online manner. (Joint work with P. Del Moral and A. Doucet) |
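For background, the filter-derivative estimator builds on the standard SMC approximation of the filter itself. Below is a minimal bootstrap particle filter; the model interface is an illustrative assumption, and this is the plain filter, not the authors' derivative algorithm:

```python
import numpy as np

def bootstrap_filter(y, n_particles, transition, log_obs, init, rng):
    """Minimal bootstrap particle filter returning the filtering means
    E[x_t | y_1:t]. transition(x, rng) propagates particles, log_obs(y_t, x)
    is the observation log-density, init(n, rng) draws initial particles."""
    x = init(n_particles, rng)
    means = []
    for yt in y:
        x = transition(x, rng)                          # propagate through the dynamics
        logw = log_obs(yt, x)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))                     # weighted filtering mean
        idx = rng.choice(n_particles, n_particles, p=w) # multinomial resampling
        x = x[idx]
    return np.array(means)
```

For a nearly static state observed repeatedly with noise, the filtering mean should settle close to the true value, which gives a cheap correctness check.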
|
Mon 23 May, '11- |
CRiSM PhD TalksMS.03Chris Nam (Warwick), Bryony Hill (Warwick), Ashley Ford (Warwick) |
|
Thu 26 May, '11- |
CRiSM Seminar - Postponed due to illness
|
|
Thu 2 Jun, '11- |
CRiSM Seminar - Evsey MorozovA1.01Evsey Morozov (Karelian Research Centre, Russia) Regenerative queues: stability analysis and simulation We present a general approach to the stability of regenerative queueing systems, based on the properties of the embedded renewal process of regenerations. Such a process obeys a useful characterization of the limiting remaining renewal time, which in many cases allows minimal stability conditions to be established by a two-step procedure. In the first step, a negative drift condition is used to prove that the basic process does not go to infinity (in probability); in the second step, the finiteness of the mean regeneration period is proved. This approach has led to the effective stability analysis of models describing, in particular, such modern telecommunication systems as retrial queues and queues with optical buffers. Moreover, we discuss the regenerative simulation method, including both classical and non-classical (extended) regeneration, which allows dependence between regeneration cycles. |
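The regenerative simulation method discussed at the end of the talk can be illustrated on the M/M/1 queue, where a regeneration cycle runs from one arrival-to-an-empty-system to the next. The sketch below is an illustrative example, not the speaker's code; the point estimate is the classical ratio of accumulated area under the queue-length path to total cycle length:

```python
import numpy as np

def mm1_regenerative_mean_queue(lam, mu, n_cycles, rng):
    """Regenerative estimate of the stationary mean number in an M/M/1 queue
    (true value rho/(1-rho) with rho = lam/mu). Each cycle starts with the
    arrival that ends an idle period; the estimator is
    sum(area under Q)/sum(cycle lengths) over i.i.d. cycles."""
    total_area, total_length = 0.0, 0.0
    for _ in range(n_cycles):
        q, length, area = 1, 0.0, 0.0     # cycle opens with one customer present
        while q > 0:                       # busy period: competing exponential clocks
            rate = lam + mu
            dt = rng.exponential(1.0 / rate)
            area += q * dt
            length += dt
            q += 1 if rng.uniform() < lam / rate else -1
        # idle period until the next arrival; queue length 0 adds no area
        length += rng.exponential(1.0 / lam)
        total_area += area
        total_length += length
    return total_area / total_length
```

For lam = 0.5 and mu = 1.0 the stationary mean number in system is rho/(1-rho) = 1, so the estimate should be close to 1 with enough cycles.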
|
Wed 6 Jul, '11- |
Prof. Hernando Ombao - CRiSM SeminarA1.01Hernando Ombao Intro to spectral analysis and coherence |
|
Thu 7 Jul, '11- |
Prof. Hernando Ombao - CRiSM SeminarA1.01Hernando Ombao Special topics on spectral analysis: principal components analysis, clustering and discrimination |
|
Fri 8 Jul, '11- |
Prof. Hernando Ombao - CRiSM SeminarA1.01Hernando Ombao Analysis of non-stationary time series |
|
Thu 6 Oct, '11- |
CRiSM Seminar - Marek Kimmel (Rice University, Houston)A1.01Marek Kimmel, Rice University, Houston Modeling the mortality reduction due to computed tomography screening for lung cancer The efficacy of computed tomography (CT) screening for lung cancer remains controversial despite the fact that encouraging results from the National Lung Screening Trial are now available. In this study, the authors used data from a single-arm CT screening trial to estimate the mortality reduction using a modeling-based approach to construct a control comparison arm. |
|
Mon 17 Oct, '11- |
CRiSM Seminar - Atanu Biswas (Indian Statistical Institute)B1.01Atanu Biswas (Indian Statistical Institute) Comparison of treatments and data-dependent allocation for circular data from a cataract surgery |
|
Thu 20 Oct, '11- |
Joint CRiSM-Systems Biology SeminarMOAC Seminar Room, Coventry HouseChris Brien (University of South Australia)
Robust Microarray Experiments by Design: A Multiphase Framework
This seminar will outline a statistical approach to the design of microarray experiments, taking account of all the experimental phases involved from initial sample collection to assessment of gene expression. The approach being developed is also highly relevant for other high-throughput technologies. This seminar should be of interest to all those working with experiments using microarray and other high-throughput technologies, as well as to statisticians.
|
|
Thu 3 Nov, '11- |
CRiSM Seminar - Scott Schmidler (Duke University)MS.01Scott Schmidler (Duke University) Bayesian Shape Matching for Protein Structure Alignment and Phylogeny |
|
Thu 3 Nov, '11- |
CRiSM Seminar - Dave Woods (Southampton)A1.01Dave Woods (University of Southampton) Design of experiments for Generalised Linear (Mixed) Models |
|
Thu 17 Nov, '11- |
CRiSM Seminar - Nick Chater (Warwick Business School)A1.01Nick Chater (Warwick Business School)
Is the brain a Bayesian?
Almost all interesting problems that the brain solves involve probabilistic inference, and the brain is clearly astonishingly effective at solving such problems. A substantial movement in cognitive science, neuroscience and artificial intelligence has suggested that the brain may, to some approximation, be a Bayesian. This talk considers in what sense, if any, this might be true, and asks how it might be that a Bayesian brain is, nonetheless, so poor at explicit probabilistic reasoning. |
|
Thu 1 Dec, '11- |
CRiSM Seminar - Mark StrongA1.01Mark Strong (University of Sheffield) Managing Structural Uncertainty in Health Economic Decision Models |
|
Mon 16 Jan, '12- |
CRiSM Seminar - Shinto Eguchi (Institute of Statistical Mathematics, Japan)C1.06Shinto Eguchi (Institute of Statistical Mathematics, Japan) Maximization of a generalized t-statistic for linear discrimination in the two group classification problem We discuss a statistical method for the classification problem with two groups labelled 0 and 1. We envisage a situation in which the conditional distribution given label 0 is well specified by a normal distribution, but the conditional distribution given label 1 is not well modelled by any specific distribution. Typically in a case-control study the distribution in the control group can be assumed to be normal; however, the distribution in the case group may depart from normality. In this situation the maximum t-statistic for linear discrimination, or equivalently Fisher's linear discriminant function, may not be optimal. We propose a class of generalized t-statistics and study asymptotic consistency and normality. The optimal generalized t-statistic in the sense of asymptotic variance is derived in a semi-parametric manner, and its statistical performance is confirmed in several numerical experiments. |
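The classical baseline that the generalized t-statistics extend is Fisher's linear discriminant, the direction maximising the two-sample t-statistic of the projected data. A sketch of that baseline follows; the function name and interface are illustrative, and this is not the speaker's generalized estimator:

```python
import numpy as np

def fisher_direction(x0, x1):
    """Fisher's linear discriminant direction for two groups of samples
    x0, x1 with shapes (n_i, p): the pooled-covariance-whitened mean
    difference, returned as a unit vector."""
    m0, m1 = x0.mean(axis=0), x1.mean(axis=0)
    n0, n1 = len(x0), len(x1)
    pooled = ((n0 - 1) * np.cov(x0.T) + (n1 - 1) * np.cov(x1.T)) / (n0 + n1 - 2)
    w = np.linalg.solve(np.atleast_2d(pooled), m1 - m0)
    return w / np.linalg.norm(w)
```

When the two groups share an identity covariance and differ only along one axis, the recovered direction should align with that axis.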
|
Thu 2 Feb, '12- |
CRiSM Seminar - Theodor StewartA1.01Theodor Stewart (University of Cape Town) Principles and Practice of Multicriteria Decision Analysis The role of multicriteria decision analysis (MCDA) in the broader context of decision science will be discussed. We will review the problem structuring needs of MCDA, and caution against over-simplistic approaches. Different schools of thinking in MCDA, primarily for deterministic problems, will be introduced, to demonstrate that even such problems include many complexities and pitfalls. The practicalities will be illustrated by means of value function methods (and perhaps goal programming if time permits). We will conclude with consideration of the impact of uncertainty on MCDA and the role of scenario planning in this regard. |
|
Thu 16 Feb, '12- |
CRiSM Seminar - Yee Whye TehA1.01Yee Whye Teh (Gatsby Computational Neuroscience Unit, UCL) A Bayesian nonparametric model for genetic variations based on fragmentation-coagulation processes Hudson's coalescent with recombination (aka ancestral recombination |
|
Thu 1 Mar, '12- |
CRiSM Seminar - Stephen ConnorC1.06Stephen Connor (University of York) State-dependent Foster-Lyapunov criteria |
|
Thu 1 Mar, '12- |
CRiSM Seminar - Graham WoodA1.01Graham Wood (Macquarie University and Warwick Systems Biology) Normalization of ratio data Quantitative mass spectrometry techniques are commonly used for comparative proteomic analysis in order to provide relative quantitation between samples. For example, in attempting to find the proteins expressed in ovarian cancer, the quantities of a given protein are assessed by mass spectrometry in separate samples of both cancerous and healthy cells. To account for the variable “loading” (the total volumes of samples) from one sample to the other, a normalization procedure is required. A common approach to normalization is to use internal standards: proteins that are assumed to display only minimal changes in abundance between the samples under comparison. A normalization procedure then allows adjustment of the data, so enabling true relative quantities to be reported. Normalization is determined by centring the symmetrized ratio (say, cancerous over healthy) data for the internal standards. This presentation makes two contributions to an understanding of ratio normalization. First, the customary centring of logarithmically transformed ratios (frequently used, for example, in microarray analyses) is shown to attend not only to centring but also to minimisation of the spread of the symmetrized data. Second, the normalization problem is set in a larger context, allowing normalization to be achieved based on a symmetrization which carries the ratios to approximate normality, so increasing the power with which under- or over-expressed proteins can be detected. Both simulated and real data will be used to illustrate the new method. |
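The customary centring of logarithmically transformed ratios referred to above can be sketched as follows. The names and the choice of median centring on the internal standards are illustrative assumptions, not details from the talk:

```python
import numpy as np

def normalize_log_ratios(case, control, is_standard):
    """Centre log2 ratios using internal-standard proteins: the loading
    offset is estimated as the median log2 ratio over the standards and
    subtracted from every protein's log2 ratio."""
    logr = np.log2(case / control)
    offset = np.median(logr[is_standard])  # loading difference from standards
    return logr - offset
```

If every standard shows the same fold change (pure loading difference), the normalized log ratios of the standards are exactly zero.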
|
Wed 14 Mar, '12- |
CRiSM Seminar - Heather BatteyA1.01Heather Battey (University of Bristol) |
|