
Titles and abstracts

Speaker: Heather Battey (Imperial and Princeton)

Title: Large covariance and precision matrices: robust estimation and new structured classes

Abstract: Estimation of covariance and inverse covariance matrices is an essential ingredient of virtually every modern statistical procedure. When data are high dimensional and drawn from distributions with heavy tails, the performance of popular matrix estimators is not guaranteed. I will first discuss robust counterparts to these procedures, and how their properties relate to the tail behaviour of the underlying distribution. I will then present new structured model classes for covariance and inverse covariance matrices, discussing the spectral properties of random matrices drawn from these classes.
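
As a concrete illustration of the robustness theme (not the speaker's estimator), here is a minimal numpy sketch of an element-wise Huber-type covariance estimate, in which a truncation level tied to the sample size and dimension tames heavy tails; the function names and the particular truncation rule are illustrative assumptions.

```python
import numpy as np

def huber_mean(x, tau):
    """Huber M-estimate of the mean of a 1-d sample via a few
    reweighting iterations; tau is the truncation level."""
    mu = np.median(x)
    for _ in range(50):
        r = x - mu
        w = np.minimum(1.0, tau / np.maximum(np.abs(r), 1e-12))
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < 1e-10:
            break
        mu = mu_new
    return mu

def robust_covariance(X, tau=None):
    """Element-wise robust covariance: each entry is a Huber estimate
    of E[(X_j - mu_j)(X_k - mu_k)], so a single heavy-tailed
    observation cannot corrupt the whole matrix."""
    n, p = X.shape
    if tau is None:
        tau = np.sqrt(n / np.log(max(p, 2)))  # illustrative rate-driven level
    mu = np.array([huber_mean(X[:, j], tau) for j in range(p)])
    Xc = X - mu
    S = np.empty((p, p))
    for j in range(p):
        for k in range(j, p):
            S[j, k] = S[k, j] = huber_mean(Xc[:, j] * Xc[:, k], tau)
    return S

# Heavy-tailed data: the sample covariance is unstable here, while the
# element-wise robust estimate is much better behaved.
rng = np.random.default_rng(0)
X = rng.standard_t(df=3, size=(200, 20))
S_robust = robust_covariance(X)
```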

Speaker: Binyan Jiang (Hong Kong Polytechnic University)

Title: A direct approach for sparse quadratic discriminant analysis

Abstract: Quadratic discriminant analysis (QDA) is a standard tool for classification due to its simplicity and flexibility. However, because the number of its parameters scales quadratically with the number of variables, QDA is not practical when the dimensionality is relatively large. To address this, we propose a novel procedure named QUDA for performing QDA on high-dimensional data. Formulated in a simple and coherent framework, QUDA directly estimates the key quantities in the Bayes discriminant function, including the quadratic interactions and a linear index of the variables for classification. Under appropriate sparsity assumptions, we establish consistency results for estimating the interactions and the linear index, and further demonstrate that the misclassification rate of our procedure converges to the optimal Bayes risk, even when the dimensionality is exponentially high with respect to the sample size. An efficient algorithm based on the alternating direction method of multipliers (ADMM) is developed for finding the interactions, and is much faster than its competitor in the literature. The promising performance of QUDA is illustrated via extensive simulation studies and the analysis of two datasets.
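
For orientation, the Bayes discriminant quantities that QUDA estimates directly can be written down with dense plug-in estimates; the sketch below assumes two Gaussian classes with equal priors and only works when the sample size exceeds the dimension, which is precisely the regime QUDA escapes.

```python
import numpy as np

def qda_discriminant(X1, X2):
    """Plug-in Bayes discriminant for two Gaussian classes with equal
    priors.  QUDA estimates the quadratic interaction matrix Omega and
    the linear index delta directly under sparsity; the dense plug-in
    below is shown purely for orientation."""
    mu1, mu2 = X1.mean(0), X2.mean(0)
    O1 = np.linalg.inv(np.cov(X1, rowvar=False))
    O2 = np.linalg.inv(np.cov(X2, rowvar=False))
    Omega = O2 - O1                       # quadratic interactions
    delta = O1 @ mu1 - O2 @ mu2           # linear index
    const = 0.5 * (mu2 @ O2 @ mu2 - mu1 @ O1 @ mu1
                   + np.linalg.slogdet(O1)[1] - np.linalg.slogdet(O2)[1])
    # D(x) = log f1(x) - log f2(x); classify to class 1 when D(x) > 0.
    return lambda x: 0.5 * x @ Omega @ x + delta @ x + const

rng = np.random.default_rng(0)
X1 = rng.standard_normal((200, 5))
X2 = 0.5 + 2.0 * rng.standard_normal((200, 5))
D = qda_discriminant(X1, X2)
print(D(np.zeros(5)) > 0)   # True: the origin looks like class 1
```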

Speaker: Chenlei Leng (Warwick)

Title: DECOrrelated feature space partitioning for distributed sparse regression

Abstract: Fitting statistical models is computationally challenging when the sample size or the dimension of the dataset is huge. An attractive approach for down-scaling the problem size is to first partition the dataset into subsets and then fit the model using distributed algorithms. The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space). While the majority of the literature focuses on sample space partitioning, feature space partitioning is more effective when p >> n. Existing methods for partitioning features, however, are either vulnerable to high correlations or inefficient in reducing the model dimension. In this paper, we solve these problems through a new embarrassingly parallel framework named DECO for distributed variable selection and parameter estimation. In DECO, variables are first partitioned and allocated to m distributed workers. A decorrelation step is then carried out within each worker, and the decorrelated data are fitted via any algorithm designed for high-dimensional problems. We show that by incorporating the decorrelation step, DECO can achieve consistent variable selection and parameter estimation on each subset with (almost) no assumptions. In addition, the convergence rate is nearly minimax optimal for both sparse and weakly sparse models and does NOT depend on the partition number m. Extensive numerical experiments are provided to illustrate the performance of the new framework. This is joint work with Xiangyu Wang (Google) and David Dunson (Duke).
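
A minimal single-machine sketch of the pipeline described above (decorrelation followed by per-block lasso fits); the ridge term, the scaling and the use of LassoCV are our simplifications, and in practice each block would live on a separate worker.

```python
import numpy as np
from scipy.linalg import sqrtm
from sklearn.linear_model import LassoCV

def deco_fit(X, y, m, ridge=1.0):
    """Sketch of DECO: decorrelate once through the n x n Gram matrix,
    partition the columns across m (simulated) workers, and run a lasso
    on each decorrelated block independently."""
    n, p = X.shape
    # After this step the columns are nearly orthogonal, so feature
    # blocks can be fitted separately without losing joint information.
    F = np.real(sqrtm(np.linalg.inv(X @ X.T / p + ridge * np.eye(n))))
    Xd, yd = F @ X, F @ y
    beta = np.zeros(p)
    for block in np.array_split(np.arange(p), m):   # one block per worker
        beta[block] = LassoCV(cv=5).fit(Xd[:, block], yd).coef_
    return beta
```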

Speaker: Degui Li (York)

Title: Semiparametric ultra-high dimensional model averaging of nonlinear dynamic time series

Abstract: We propose two semiparametric model averaging schemes for nonlinear dynamic time series regression models with a very large number of covariates, including exogenous regressors and auto-regressive lags, aiming to obtain accurate forecasts of a response variable by making use of a large number of conditioning variables in a nonparametric way. In the first scheme, we introduce a Kernel Sure Independence Screening (KSIS) technique to screen out the regressors whose marginal regression (or auto-regression) functions do not make a significant contribution to estimating the joint multivariate regression function; we then propose a semiparametric penalised method of Model Averaging MArginal Regression (MAMAR) for the regressors and auto-regressors that survive the screening procedure, to further select the regressors that have significant effects on estimating the multivariate regression function and predicting the future values of the response variable. In the second scheme, we impose an approximate factor modelling structure on the ultra-high dimensional exogenous regressors and use principal component analysis to estimate the latent common factors; we then apply the penalised MAMAR method to select the estimated common factors and the lags of the response variable that are significant. In each of the two semiparametric schemes, we construct the optimal combination of the significant marginal regression and auto-regression functions. Under some regularity conditions, we derive asymptotic properties for these two semiparametric schemes. Numerical studies, including both simulation and an empirical application, are provided to illustrate the proposed methodology.
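
The screening step can be sketched as follows: rank each candidate regressor by how much variation its marginal kernel regression explains and keep the top d. The Nadaraya-Watson smoother, Gaussian kernel and variance-based score below are illustrative stand-ins for the paper's KSIS statistic.

```python
import numpy as np

def nw_fit(x, y, h):
    """Nadaraya-Watson estimate of E[y | x] at the sample points,
    with a Gaussian kernel and bandwidth h."""
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return (K @ y) / K.sum(axis=1)

def ksis(X, y, d, h=None):
    """Kernel screening sketch: retain the d regressors (or lags) whose
    marginal kernel regressions explain the most variation in y; the
    survivors would then enter the penalised MAMAR step."""
    n, p = X.shape
    if h is None:
        h = n ** (-1 / 5)   # standard univariate rate, columns standardised
    score = np.array([np.var(nw_fit(X[:, j], y, h)) for j in range(p)])
    return np.argsort(score)[::-1][:d]   # indices of surviving regressors
```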

Speaker: Guangming Pan (Nanyang Technological University)

Title: Recent development of the largest eigenvalues of large random matrices

Abstract: This talk is about the asymptotic distribution of the largest eigenvalues of large random matrices, including sample covariance matrices, sample correlation matrices, F matrices and canonical correlation analysis (CCA) matrices, when the sample size and the dimension are comparable. We will discuss two types of asymptotic distributions: the Tracy-Widom distribution for the non-spiked eigenvalues and the Gaussian distribution for the spiked eigenvalues. If time permits, some applications will be discussed as well.
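
The Tracy-Widom limit in the non-spiked case is easy to see numerically; the sketch below uses Johnstone's (2001) centring and scaling for the largest eigenvalue of a white Wishart matrix.

```python
import numpy as np

def scaled_largest_eig(n, p, reps, rng):
    """Largest eigenvalue of a white Wishart matrix X'X with the
    Johnstone (2001) centring and scaling, under which it converges to
    the Tracy-Widom law of order 1 as n and p grow proportionally."""
    mu = (np.sqrt(n - 1) + np.sqrt(p)) ** 2
    sigma = (np.sqrt(n - 1) + np.sqrt(p)) * (
        1 / np.sqrt(n - 1) + 1 / np.sqrt(p)) ** (1 / 3)
    out = np.empty(reps)
    for r in range(reps):
        X = rng.standard_normal((n, p))
        lam1 = np.linalg.eigvalsh(X.T @ X)[-1]   # ascending order
        out[r] = (lam1 - mu) / sigma
    return out

rng = np.random.default_rng(1)
z = scaled_largest_eig(n=200, p=100, reps=200, rng=rng)
# z should resemble TW1 draws: mean about -1.21, sd about 1.27.
print(z.mean(), z.std())
```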

Speaker: Xinghao Qiao (LSE)

Title: Regularized estimation in high-dimensional functional time series models

Abstract: Modelling multiple functions arises in a broad spectrum of real applications. However, many studies in the functional data literature rely primarily on the critical assumption of independent and identically distributed (i.i.d.) samples. In this talk, we focus on two statistical problems in the context of high-dimensional functional time series: (a) functional stochastic regression and (b) vector functional autoregressive models. We develop regularization approaches via the group lasso to estimate coefficient functions in (a) and autoregressive coefficient functions in (b). We also introduce a functional stability measure for stationary functional processes that provides insight into the effect of dependence on the accuracy of regularized estimates, and derive non-asymptotic bounds for the estimation errors. Finally, we show the proposed methods significantly outperform their competitors through a series of simulations and some real-world datasets.
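
The group-lasso idea can be sketched generically: after expanding each functional covariate on K basis functions, one group collects the K coefficients of one coefficient function, so whole functions are selected or dropped together. The proximal-gradient solver below is a plain sketch, not the paper's estimator.

```python
import numpy as np

def group_lasso(X, y, groups, lam, iters=500):
    """Proximal-gradient group lasso on an expanded design matrix.
    'groups' is a list of index arrays; e.g. for K basis functions per
    covariate, groups = np.array_split(np.arange(p), p // K)."""
    n, p = X.shape
    beta = np.zeros(p)
    step = n / np.linalg.norm(X, 2) ** 2       # 1 / Lipschitz constant
    for _ in range(iters):
        b = beta - step * (X.T @ (X @ beta - y) / n)   # gradient step
        for g in groups:                                # block soft-threshold
            nrm = np.linalg.norm(b[g])
            if nrm > 0:
                b[g] *= max(0.0, 1.0 - step * lam / nrm)
        beta = b
    return beta
```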

Speaker: Richard Samworth (Cambridge)

Title: High-dimensional changepoint estimation via sparse projection

Abstract: Changepoints are a very common feature of Big Data that arrive in the form of a data stream. In this talk, we study high-dimensional time series in which, at certain time points, the mean structure changes in a sparse subset of the coordinates. The challenge is to borrow strength across the coordinates in order to detect smaller changes than could be observed in any individual component series. We propose a two-stage procedure called 'inspect' for estimation of the changepoints: first, we argue that a good projection direction can be obtained as the leading left singular vector of the matrix that solves a convex optimisation problem derived from the CUSUM transformation of the time series. We then apply an existing univariate changepoint detection algorithm to the projected series. Our theory provides strong guarantees on both the number of estimated changepoints and the rates of convergence of their locations, and our numerical studies validate its highly competitive empirical performance for a wide range of data generating mechanisms.

This is joint work with Tengyao Wang.
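
A compact sketch of the two stages, with soft thresholding of the CUSUM matrix standing in for the convex optimisation problem and a simple argmax scan standing in for the univariate changepoint algorithm:

```python
import numpy as np

def cusum_transform(X):
    """CUSUM transformation of a p x n matrix: entry (j, t) is the
    standardised difference between the mean of series j before and
    after time t."""
    p, n = X.shape
    t = np.arange(1, n)
    S = np.cumsum(X, axis=1)
    left = S[:, :-1] / t
    right = (S[:, [-1]] - S[:, :-1]) / (n - t)
    return np.sqrt(t * (n - t) / n) * (left - right)

def inspect_sketch(X, lam):
    """Single-changepoint sketch of 'inspect': soft-threshold the CUSUM
    matrix, take its leading left singular vector as a sparse projection
    direction, then scan the projected series for the change."""
    T = cusum_transform(X)
    M = np.sign(T) * np.maximum(np.abs(T) - lam, 0.0)   # soft threshold
    u = np.linalg.svd(M, full_matrices=False)[0][:, 0]  # projection direction
    return int(np.argmax(np.abs(u @ T))) + 1            # estimated location

# Sparse mean shift in 10 of 500 coordinates at time 200.
rng = np.random.default_rng(3)
X = rng.standard_normal((500, 400))
X[:10, 200:] += 0.8
print(inspect_sketch(X, lam=3.0))
```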

Speaker: Rajen Shah (Cambridge)

Title: Goodness of fit tests for high-dimensional linear models

Abstract: In this talk I will introduce a framework for constructing goodness of fit tests in both low- and high-dimensional linear models. The idea involves applying regression methods to the scaled residuals following either an ordinary least squares or Lasso fit to the data, and using some proxy for prediction error as the final test statistic. We call this family Residual Prediction (RP) tests. We show that simulation can be used to obtain the critical values for such tests in the low-dimensional setting, and demonstrate that a form of the parametric bootstrap can do the same when the high-dimensional linear model is under consideration. We show that RP tests can be used to test for significance of groups or individual variables as special cases, where they compare favourably with state-of-the-art methods, but we also argue that they can be designed to test for model misspecifications as diverse as heteroscedasticity and various types of nonlinearity. This is joint work with Peter Bühlmann.
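
A sketch of the low-dimensional case with a random forest as one possible RP method: because the scaled OLS residuals are distribution-free under the null, the critical value can be obtained by simulation, as described in the abstract. The choice of learner and of out-of-bag R^2 as the prediction-error proxy are ours.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rp_test(X, y, n_sim=100, seed=0):
    """Residual Prediction test sketch: if the linear model fits, the
    scaled residuals carry no signal, so a flexible learner should
    predict them no better than it predicts pure-noise residuals."""
    rng = np.random.default_rng(seed)
    H = X @ np.linalg.solve(X.T @ X, X.T)          # OLS hat matrix

    def stat(resp):
        r = resp - H @ resp                         # OLS residuals
        r = r / np.linalg.norm(r)                   # scale out sigma
        rf = RandomForestRegressor(n_estimators=100, oob_score=True,
                                   random_state=0).fit(X, r)
        return rf.oob_score_    # out-of-bag R^2 as prediction-error proxy

    T_obs = stat(y)
    # Under the null the scaled residuals' law is free of beta and
    # sigma, so pure-noise responses reproduce the null distribution.
    T_null = np.array([stat(rng.standard_normal(len(y)))
                       for _ in range(n_sim)])
    return np.mean(T_null >= T_obs)                 # Monte Carlo p-value
```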

Speaker: Cheng Yong Tang (Temple)

Title: Precision matrix estimation by inverse principal orthogonal decomposition

Abstract: We consider a parsimonious approach for modeling a large precision matrix in a factor model setting. The approach is developed by inverting a principal orthogonal decomposition (IPOD), which disentangles the systematic component from the idiosyncratic component in the target dynamic system of interest. In the IPOD approach, the impact of the systematic component is captured by a low-dimensional factor model. Motivated by practical considerations of parsimony and interpretability, we propose to use a sparse precision matrix to capture the contribution of the idiosyncratic component to the variation in the target dynamic system. Conditional on the factors, the IPOD approach has an appealing practical interpretation in terms of conventional graphical models for informatively investigating the associations between the idiosyncratic components. We discover that the large precision matrix depends on the idiosyncratic component only through its sparse precision matrix, and show that IPOD offers a convenient and feasible estimator of the large precision matrix in which only a low-dimensional matrix needs to be inverted. We formally establish estimation error bounds for the IPOD approach under various losses, and show that the impact of the common factors vanishes as the dimensionality of the precision matrix diverges. Extensive numerical examples, including real data examples from practical problems, demonstrate the merits of the IPOD approach in its performance and interpretability. This is joint work with Yingying Fan.
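
The "only a low-dimensional inversion" point is the Woodbury identity: with Sigma = B Cov(f) B' + Sigma_u, the p x p precision matrix requires only a K x K inverse, K being the number of factors. A self-contained check with simulated inputs (all names are illustrative):

```python
import numpy as np

def factor_precision(B, cov_f, omega_u):
    """Precision matrix of y = B f + u under a factor structure,
    Sigma = B Cov(f) B' + Sigma_u.  By the Woodbury identity only the
    K x K 'core' matrix is inverted; omega_u is the (sparse) precision
    of the idiosyncratic component."""
    core = np.linalg.inv(np.linalg.inv(cov_f) + B.T @ omega_u @ B)  # K x K
    return omega_u - omega_u @ B @ core @ B.T @ omega_u

# Verify against a brute-force p x p inverse.
rng = np.random.default_rng(2)
p, K = 50, 3
B = rng.standard_normal((p, K))
cov_f = np.eye(K)
omega_u = np.diag(1.0 / rng.uniform(0.5, 2.0, p))   # diagonal for simplicity
Sigma = B @ cov_f @ B.T + np.linalg.inv(omega_u)
assert np.allclose(factor_precision(B, cov_f, omega_u), np.linalg.inv(Sigma))
```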

Speaker: Qiwei Yao (LSE)

Title: Kriging over space and time based on a latent reduced rank structure

Abstract: We propose a new approach to extracting nonparametrically the covariance structure of a spatio-temporal process in terms of latent common factors. Though formally similar to existing reduced rank approximation methods (Section 7.1.3 of Cressie and Wikle, 2011), the fundamental difference is that the low-dimensional structure is completely unknown in our setting and is learned from data collected irregularly over space but regularly in time. Nor do we impose any stationarity conditions over space, as the learning is facilitated by the stationarity in time. Kriging over space and time is carried out based on the learned low-dimensional structure, and its performance is further improved by a newly proposed aggregation method that randomly partitions the observations according to their locations. The low-dimensional correlation structure also makes the kriging methods scalable to cases where the data are taken over a large number of locations and/or over a long time period. Asymptotic properties of the proposed methods are established. Illustrations with both simulated and real data sets are also reported.
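
A toy illustration of kriging through a learned low-rank structure; the paper's estimation and aggregation steps are replaced here by plain PCA over time and Gaussian-kernel interpolation, both our simplifications.

```python
import numpy as np

def latent_factor_krige(Y, sites, new_site, K, h=1.0):
    """Toy reduced-rank kriging: learn K spatial 'eigen-fields' from
    the temporal covariance of the data (stationarity in time does the
    work, none is assumed in space), interpolate each field to an
    unobserved site, and recombine into a predicted series there."""
    T, n = Y.shape                         # T time points, n irregular sites
    Yc = Y - Y.mean(0)
    # Loadings at the observed sites: top-K eigenvectors of the n x n
    # sample covariance over time.
    _, evecs = np.linalg.eigh(Yc.T @ Yc / T)
    A = evecs[:, ::-1][:, :K]              # n x K loading matrix
    F = Yc @ A                             # T x K factor series
    # Gaussian-kernel interpolation of each loading column to new_site.
    w = np.exp(-0.5 * np.sum((sites - new_site) ** 2, axis=1) / h ** 2)
    w = w / w.sum()
    a_new = A.T @ w                        # K loadings at the new site
    return Y.mean(0) @ w + F @ a_new       # predicted series at new_site
```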

Speaker: Yi Yu (Bristol and Cambridge)

Title: Estimating whole brain dynamics using spectral clustering

Abstract: The estimation of time-varying networks for functional Magnetic Resonance Imaging (fMRI) data sets is of increasing importance and interest. In this work, we formulate the problem in a high-dimensional time series framework and introduce a data-driven method, namely Network Change Points Detection (NCPD), which detects change points in the network structure of a multivariate time series, with each component of the time series represented by a node in the network. NCPD is applied to various simulated data and a resting-state fMRI data set. This new methodology also allows us to identify common functional states within and across subjects. Finally, NCPD promises to offer deep insight into the large-scale characterisation and dynamics of the brain. This is joint work with Ivor Cribben (Alberta School of Business).
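
A loose sketch of the underlying idea: estimate an absolute-correlation network on sliding windows, summarise each network by a spectral (Laplacian eigenvector) embedding, and flag times where the embedding subspace turns over. Every concrete choice below is illustrative, not the NCPD algorithm itself.

```python
import numpy as np

def network_change_scores(X, win, K):
    """For a T x p multivariate series, compute a change score between
    successive sliding-window networks; peaks in the returned scores
    suggest candidate network change points."""
    T, p = X.shape
    prev, scores = None, []
    for start in range(T - win + 1):
        A = np.abs(np.corrcoef(X[start:start + win].T))   # p x p network
        np.fill_diagonal(A, 0.0)
        d = A.sum(1)
        L = np.eye(p) - A / np.sqrt(np.outer(d, d))       # normalised Laplacian
        U = np.linalg.eigh(L)[1][:, :K]                   # spectral embedding
        if prev is not None:
            # Squared projection distance between successive K-dim
            # eigen-subspaces (0 if identical, up to K if orthogonal).
            scores.append(K - np.linalg.norm(prev.T @ U) ** 2)
        prev = U
    return np.array(scores)
```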