# Statistical challenges in Neuroscience: Abstracts

Abstracts are sorted by author.

### Statistics on neuro-anatomical configurations: models and estimations

**Stephanie Allassonniere (Polytechnique), Stanley Durrleman**

Structural neuroimaging enables the investigation of the anatomical basis of neurologic diseases. Morphological alterations of the cortical or sub-cortical structures occur years before the onset of neurodegenerative diseases. Alterations of structural connectivity during brain development may lead to psychiatric diseases, such as autism. However, such effects can only be found by the automatic processing of large data sets, and therefore by means of statistical methods, due to the huge variability of brain structure among individuals. In contrast to usual methods that analyze the differences in image intensity at homologous positions, we investigate differences in brain structures through deformations. Deformations are used to map anatomical configurations, which can be made of the images themselves or any geometric objects segmented from them. The parameters of such deformations give the relative position of a given anatomical configuration on a Riemannian manifold with respect to a reference anatomy called the template. We propose to estimate jointly one or several template(s) and the variance of the deformation parameters modeling the geometric distribution of anatomies within the group under study. These estimations are performed in the framework of a Bayesian Mixed Effects (BME) model. Due to the inherent complexity of the deformation model, approximations need to be made. We propose two algorithms to obtain the Maximum A Posteriori estimator. First, we use a deterministic approximation of the Expectation-Maximization (EM) algorithm. We also introduce a stochastic version of this EM in which the simulation step is optimized using the Anisotropic Metropolis Adjusted Langevin Algorithm (AMALA), which benefits from better theoretical properties.
We will illustrate our approach in real case studies, and show how it not only achieves high classification accuracy between diseased and non-diseased states, but also displays the most discriminative features in an interpretable way.

### Independent Component Analysis: the basics and some fresh insights

**Jean Francois Cardoso (ENST)**

Independent component analysis (ICA) is a framework for processing multi-sensor data based on a simple but powerful idea: if several different linear mixtures of independent components can be measured at the output of several sensors, it is possible to recover those components without external knowledge of the mixture coefficients, by resorting only to the property of statistical independence of the underlying components. Various ICA algorithms have been applied with success in many fields, including neuroscience. The success of ICA depends on our ability to implement statistical models for the components which are rich enough to capture the salient features of the component distribution yet simple enough to yield robust and fast algorithms. The talk will discuss the most commonly used models, which rely on non-Gaussianity, sparsity, non-stationarity or spectral diversity. I will show how those models correspond to various versions of mutual information and how this is unified in an information-geometric view. I will also discuss how those models can be enhanced to deal with specific situations: noisy models, partial mixture information, correlated or multi-dimensional components.
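The core idea can be sketched numerically. Below is a minimal FastICA-style illustration on made-up data; the mixing matrix, source distributions, and all parameter values are assumptions for the example, not anything from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# two independent non-Gaussian sources (one sub-, one super-Gaussian)
s = np.vstack([rng.uniform(-1, 1, n), rng.laplace(0.0, 1.0, n)])
A = np.array([[1.0, 0.5], [0.5, 1.0]])   # "unknown" mixing matrix
x = A @ s                                # sensor observations

# whiten the observations
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(x @ x.T / n)
z = (E / np.sqrt(d)) @ E.T @ x

# FastICA-style fixed-point iterations with a tanh contrast, using deflation
W = np.zeros((2, 2))
for i in range(2):
    w = rng.normal(size=2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        wx = w @ z
        g, g_prime = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
        w_new = (z * g).mean(axis=1) - g_prime.mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)   # decorrelate from earlier rows
        w_new /= np.linalg.norm(w_new)
        converged = abs(abs(w_new @ w) - 1.0) < 1e-9
        w = w_new
        if converged:
            break
    W[i] = w

y = W @ z   # estimated sources, up to permutation, sign and scale
```

Up to permutation, sign and scale, the rows of `y` should match the original sources, even though the algorithm never sees `A`.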

### Searching Multiregression Dynamic Models of fMRI Networks using Integer Programming

**Lilia Carolina Carneiro da Costa**

Download slides: Searching Multiregression Dynamic Models of fMRI Networks Using Integer Programming

### Multi-subject Bayesian Joint Detection and Estimation in functional MRI

**Philippe Ciuciu (CEA)**

Modern cognitive experiments in functional Magnetic Resonance Imaging (fMRI) rely on a cohort of subjects sampled from a population of interest to study characteristics of the healthy brain or to identify biomarkers of a specific pathology (e.g., Alzheimer's disease) or condition (e.g., aging). Group-level studies usually proceed in two steps by making random-effect analyses on top of intra-subject analyses, to localize activated regions in response to stimulation or to estimate brain dynamics. Here, we focus on improving the accuracy of group-level inference of the hemodynamic response function (HRF). We build on the Joint Detection-Estimation (JDE) framework we formerly developed (Makni et al, 2005, 2008; Vincent et al, 2010), which aims at jointly detecting evoked activity and estimating HRF shapes. So far, region-specific group-level HRFs have been captured by averaging intra-subject HRF profiles. Here, our approach extends the JDE formalism to the multi-subject context by proposing a hierarchical Bayesian model that includes an additional layer describing the link between subject-specific and group-level HRFs. This extension outperforms the original approach on both artificial and real multi-subject datasets. It allows us to probe the effect of aging in different cognitive circuits by comparing HRF profiles of young and elderly participants on the same localizer paradigm.

Download slides: Multi-subject Bayesian joint detection & estimation in fMRI

### Dynamic Causal Modelling of brain-behaviour relationships

**Jean Daunizeau (ICM Paris)**

Dynamic Causal Modelling (DCM) of neuroimaging data has become a standard tool for identifying the structure and flexibility of brain networks that respond to experimental manipulation (e.g., sensory stimuli or task demands). DCM, however, does not explain how distributed brain responses are causally involved in the production of behaviour (e.g. choices, reaction times). In this work, we propose to merge DCM with neuroimaging decoding approaches, with the aim of identifying a neural transfer function that maps experimental inputs to their behavioural response through the underlying large-scale brain dynamics. In brief, our approach provides a neuro-computational decomposition of behavioural responses, in terms of the contribution of brain regions and their functional connections to the input-output transform. In turn, it provides a direct quantification of the behavioural relevance of effective connectivity. In this view, neuroimaging data serve to identify key parameters (e.g. synaptic weights and their modulation) that control the "transfer function" from experimental inputs to behavioural outputs. This can serve to predict behavioural deficits induced by specific anatomical lesions, as well as behavioural recovery potentials that derive from brain plasticity. We will first recall the basics of the DCM framework and expose its behavioural extension. We will then evaluate the capabilities and limits of the approach using both Monte-Carlo simulations and empirical data.

Download slides: Dynamic Causal Modelling of brain-behaviour relationships

### Sparse Paradigm Free Mapping

**Ian Dryden (University of Nottingham)**

Paradigm Free Mapping (PFM) is a method for detecting brain activations in functional Magnetic Resonance Imaging (fMRI) without specifying prior information on the timing of the events. The PFM method involves a ridge regression estimator for signal deconvolution and a baseline signal period for statistical inference. A sparse version of PFM uses the Dantzig Selector and a new approach called Penalized Euclidean Distance regression. These methods obtain high detection rates of activation, comparable to a model-based analysis, but require no information on the timing of the events or a baseline period. The practical operation of sparse PFM was assessed with single-trial high-field 7T fMRI data, where all task-related events as well as several resting-state networks were detected. This work is joint with Cesar Caballero Gaudes, Natalia Petridou, Susan Francis and Penny Gowland.

Download slides: Sparse Paradigm Free Mapping (powerpoint presentation)
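The ridge-regression deconvolution at the heart of PFM can be sketched in a few lines. The following is a generic toy illustration; the HRF shape, event times, noise level, and regularization weight are all made up for the example and are not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200                                   # number of scans (toy, TR = 1 s)
t = np.arange(25, dtype=float)

# toy gamma-shaped HRF peaking around 5 s (illustrative, not the canonical HRF)
h = t ** 5 * np.exp(-t)
h /= h.max()

# convolution (design) matrix: column j holds the HRF shifted to onset j
X = np.zeros((T, T))
for j in range(T):
    k = min(len(h), T - j)
    X[j : j + k, j] = h[:k]

# simulate neural events whose timing is unknown to the analysis
s_true = np.zeros(T)
s_true[[30, 80, 140]] = 1.0
y = X @ s_true + 0.1 * rng.normal(size=T)

# ridge-regularised deconvolution: (X'X + lam * I)^{-1} X'y
lam = 1.0
s_hat = np.linalg.solve(X.T @ X + lam * np.eye(T), X.T @ y)
```

Peaks of `s_hat` should appear near the true onsets 30, 80 and 140, even though no event timing was supplied; the sparse variants replace the ridge penalty with sparsity-inducing ones.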

### Spatial statistics and attentional dynamics in scene viewing

**Ralf Engbert (University of Potsdam)**

In humans and in foveated animals, visual acuity is highly concentrated at the center of gaze, so that choosing where to look next is an important example of online, rapid decision making. Computational neuroscientists have developed biologically-inspired models of visual attention, termed saliency maps, which successfully predict where people fixate on average. Using point process theory for spatial statistics, we show that scanpaths nevertheless contain important statistical structure, such as spatial clustering on top of the distribution of gaze positions. Here we develop a dynamical model of saccadic selection that accurately predicts the distribution of gaze positions as well as spatial clustering along individual scanpaths. Our model relies on, first, activation dynamics via spatially-limited (foveated) access to saliency information and, second, a leaky memory process controlling the re-inspection of target regions. This theoretical framework models a form of context-dependent decision-making, linking neural dynamics of attention to behavioral gaze data.

Download slides: Spatial statistics and attentional dynamics in scene viewing

### Physiologically informed Bayesian analysis of ASL functional MRI data using MCMC

**Florence Forbes (INRIA Grenoble)**

ASL fMRI data provide a quantitative measurement of blood perfusion. In contrast to the Blood Oxygenation Level Dependent (BOLD) signal, the ASL signal is a more direct measurement of neuronal activity. However, ASL data have a lower signal-to-noise ratio (SNR) and poorer resolution, both in time and space. In this work, we thus aim at taking advantage of the physiological link between the hemodynamic (venous) and perfusion (arterial) components in the ASL signal to improve the estimation of the impulse responses of the neurovascular system. In a Bayesian framework, a linearization of this link is injected as prior information to temporally regularize the regionwise estimation of the perfusion response function while enabling the joint detection of brain activity elicited by stimuli delivered along a fast event-related paradigm. All the parameters of interest in space and time, as well as hyperparameters, are computed in the posterior mean sense after convergence of a hybrid Metropolis-Gibbs sampler. In this way, we aim at providing clinically relevant perfusion characteristics for the analysis of ASL data in low-SNR conditions. This work has been done by Aina Frau (PhD student) and Thomas Vincent (postdoc fellow) under the joint supervision of Florence Forbes (INRIA Grenoble) and Philippe Ciuciu (CEA & INRIA Saclay).

### Estimation of fractal connectivity

**Irène Gannaz (INSA Lyon)**

A challenge in imaging neuroscience is to characterize the brain organization through the integration of interactions between segregated areas. One way to estimate functional connectivity consists in estimating correlations between pairs of measurements of neuronal activity. The aim of the present work is to take into account the long-range dependence properties of the recordings. Fractal connectivity can statistically be defined as the spectral correlation between long-memory processes over a range of low frequencies. It can be seen as the asymptotic limit of Fourier and wavelet correlations at low frequencies. Fractal connectivity thus corresponds to the "structural" or long-term covariation between the processes. We first introduce a semi-parametric multivariate model, defining fractal connectivity for a large class of multivariate time series. This model includes multivariate Brownian motion and fractionally integrated processes. We propose an estimation of the long-memory parameters and of the fractal connectivity, based on the Whittle approximation and on a wavelet representation of the time series. We establish the asymptotic optimality of the estimators, and a simulation study confirms their satisfactory behaviour on finite samples. Finally, we propose an application to the estimation of a human brain functional network based on MEG data sets. Our study highlights the benefits of multivariate analysis, namely improved efficiency in the estimation of dependence parameters and of long-term correlations.

### Hierarchical Bayesian Inference of Mixed-Modality Brain Imaging for Clinical Diagnostics

**Mark Girolami (University of Warwick)**

The promise of brain imaging as a general clinical diagnostic remains just that. This talk will present a recent study assessing the statistical importance of fusing a range of diverse imaging modalities in assessing early onset of Parkinsonian-type diseases. A hierarchical Bayesian structure to integrate and assess the importance of the various modalities is developed, and the issues related to efficient inference in this model are investigated. This is an ongoing study with clinical neurologists.

Download slides: Hierarchical Bayesian Inference of Mixed-Modality Brain Imaging for Clinical Diagnostics

### How much stats does it take to look at the brain at a millisecond time-scale with MEG and EEG?

**Alexandre Gramfort (Telecom Paristech)**

Electroencephalography (EEG) and Magnetoencephalography (MEG) are noninvasive techniques that allow imaging of the active brain at a millisecond time scale. Yet to do so, challenging computational and statistical problems need to be solved. In this talk I will first review the physics behind MEG/EEG measurements before diving into two statistical problems: the estimation of the noise covariance used for prewhitening, and the localization of active sources in the brain. The latter problem is a high-dimensional regression problem where the target variables are multivariate time series. I will detail recent contributions using sparsity-promoting regularizations and time-frequency representations.

Download slides: How much stats does it take to look at the brain at a millisecond time-scale with MEG and EEG?
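As a toy illustration of the first problem, prewhitening with an estimated noise covariance can be sketched as follows; the sensor count and the simulated covariance are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(2)
n_sensors, n_times = 8, 5_000

# simulate spatially correlated sensor noise (illustrative covariance)
B = 0.3 * rng.normal(size=(n_sensors, n_sensors)) + np.eye(n_sensors)
noise = np.linalg.cholesky(B @ B.T) @ rng.normal(size=(n_sensors, n_times))

# empirical noise covariance from a "baseline" (stimulus-free) segment
C_hat = noise @ noise.T / n_times

# prewhitening operator: inverse Cholesky factor of the noise covariance
W = np.linalg.inv(np.linalg.cholesky(C_hat))
white = W @ noise

# after whitening, the empirical noise covariance is the identity
C_white = white @ white.T / n_times
```

Applying the same `W` to evoked data equalizes the noise across sensors, so that subsequent source estimation is not dominated by the noisiest channels.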

### Observing the brain in the wild -- in need for large scale collaborations to move forward

**Michael Hanke (University of Magdeburg)**

Prolonged complex naturalistic stimulation is arguably more likely to elicit brain responses that are representative of naturally occurring brain states and dynamics than artificial, highly controlled experiments with a limited number of simplified conditions. If we want to know how the brain works, we need to study it while it does what it can do best: process vast amounts of multi-sensory input, effortlessly determine what is important, and trigger the right actions. The catch is, of course, that without properly designed experiments many of the standard statistical analysis approaches are no longer applicable, as they often rely on multiple repetitions or assumptions of particular distributions. Solutions to this problem are more flexible analysis strategies that can handle complexity in a single dataset, or large amounts of data that enable aggregation across the enormous variety of brain processes. I claim that current neuroimaging research reality hinders progress on both aspects. While there is a lot of neuroscientific data being collected, only a minuscule portion of it is accessible for any kind of aggregation or meta-analysis. This situation seriously inhibits inter-disciplinary contributions -- a potent source of novel approaches to look at brain data -- as scientists from other disciplines (statistics, engineering, machine learning, data mining) cannot easily access neuroimaging datasets that are both relevant to their field and have the potential to move neuroimaging research forward. In order to explore this potential, we have started an experiment on a de-centralized, distributed collaboration on neuroimaging data analysis. The concept of this project is to provide a rich and unique dataset to encourage scientists with various backgrounds to infer as much as possible about the nature of the processes in the human brain. Anybody can participate without a formal agenda or consortium.
We published a dataset that has the potential to garner the attention of researchers working in diverse fields of science, within and outside the neuroimaging domain. It is a large (more than 300 GB of raw and readily pre-processed data), state-of-the-art, high-resolution, ultra-high-field 7-Tesla fMRI dataset with simultaneous physiological measurements recorded during a 2-hour quasi-natural stimulation via a Hollywood audio-movie. As such, this dataset may be the largest consecutive sample of natural language processing that is publicly available today. Functional brain response data for 20 participants are accompanied by a multitude of structural/anatomical data (sub-millimeter T1w and T2w, DTI, SWI, angiography) and a dedicated measurement of technical noise during the functional scans. All data are released into the public domain in standard open-source data formats, and a reference implementation for data access is made available to streamline workflows for scientists without prior experience with neuroimaging data. An effort was made to describe the dataset in enough detail so that it will be usable for scientists without strict neuroimaging training. The dataset has been publicly available since January 2014 at http://www.studyforrest.org. A detailed data description was published in Hanke, M., Baumgartner, F.J., Ibe, P., Kaule, F.R., Pollmann, S., Speck, O., Zinke, W. & Stadler, J. (in press). A high-resolution 7-Tesla fMRI dataset from complex natural stimulation with an audio movie. Nature Scientific Data. In my presentation I will report on the progress of this experiment. I will give an overview of data use, published and preliminary results, as well as challenges imposed by the complex nature of these data. Moreover, I will discuss what we have learnt from attempting to engage in inter-disciplinary mass-collaboration in this uni-lateral fashion.

### Natural Image Statistics

**Aapo Hyvärinen (University of Helsinki)**

A fundamental question in visual neuroscience is to understand the principles that determine the various stages of visual processing in the brain. That is: why are the receptive fields and response properties of visual neurons as they are? A modern approach to this problem emphasizes the importance of adaptation to the statistics of ecologically valid input (natural images). The problem is closely related to the engineering problem of finding a good low-level representation of images. In this talk I will review work on natural image statistics and the resulting functional explanations of the properties of visual neurons. The models start with sparse coding or independent component analysis, proceed to non-Gaussian two-layer models, and finally arrive at multi-layer models related to the fashionable "deep learning".

Download slides: Natural Image Statistics

### Bayesian Methods in Neuroimaging

**Timothy D. Johnson (University of Michigan)**

Bayesian methods have a long and intimate history with image analysis, dating back at least to the seminal paper by Geman and Geman (1984) and perhaps even a decade earlier with Besag (1974). Two primary advantages of Bayesian methods over frequentist or maximum-likelihood methods for image analysis are the ease with which prior information can be incorporated into the models and the ease with which spatial and temporal correlation can be handled. The primary disadvantages are computational cost and the lack of general software packages that can handle the massive image data currently being collected. In this talk I will present several examples of Bayesian image analyses and highlight their benefits. I will then discuss several recent advances in Bayesian computation that show promise in breaking the computational bottleneck, including recent Monte Carlo simulation methods and approximation methods for the joint posterior.

Download slides: Bayesian Methods in Neuroimaging
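In the spirit of Geman and Geman (1984), a minimal Bayesian image analysis can be sketched as a single-site Gibbs sampler for binary image denoising under an Ising prior; the image size, noise level, and coupling strength below are illustrative assumptions, not values from the talk:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 32

# toy binary "image": a bright square, observed through 20% label flips
truth = np.zeros((N, N), dtype=int)
truth[8:24, 8:24] = 1
obs = np.where(rng.random((N, N)) < 0.2, 1 - truth, truth)

beta = 1.5                              # Ising smoothing strength (assumed)
loglik = np.log(np.array([[0.8, 0.2],   # log p(observed | true label)
                          [0.2, 0.8]]))

# single-site Gibbs sampler for the posterior over the clean image
x = obs.copy()
for _ in range(20):                     # full sweeps over the image
    for i in range(N):
        for j in range(N):
            nb = [x[a, b]
                  for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                  if 0 <= a < N and 0 <= b < N]
            # unnormalized log posterior for each candidate label
            lp = [beta * sum(v == c for v in nb) + loglik[obs[i, j], c]
                  for c in (0, 1)]
            p1 = 1.0 / (1.0 + np.exp(lp[0] - lp[1]))
            x[i, j] = int(rng.random() < p1)
```

A posterior sample `x` removes most of the isolated label flips because the Ising prior rewards agreement with the 4-neighbourhood while the likelihood keeps the sample close to the data.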

### Deep neural nets elucidate hierarchical visual processing

**Patrick Mineault (McGill)**

Neurons in intermediate and high-level visual areas hierarchically re-encode the visual input into ever more abstract representations which support high-level behaviours (DiCarlo & Cox 2007). Systems identification can help us understand this process by identifying the computations performed by these neurons. When low-level stages in a hierarchical computation are well understood, we can estimate the relationship between a neuron and its most proximal input - the previous area in the hierarchy (Cadieu et al. 2008; Mineault et al. 2012). When low-level stages are poorly characterized, however, systems identification becomes more challenging. To address this, we used deep feedforward neural networks (Bengio et al. 2006) to find a nonlinear hierarchical transformation of the input which linearizes the relationship between the input and the output of a set of neurons. We choose a convolutional transformation followed by a nonlinearity to approximate the local receptive fields and spiking nonlinearities of neurons. We show in simulations and with recorded neural data that it is possible to learn, via stochastic gradient descent, an effective representation of the input in a standard systems identification paradigm (Marmarelis & Marmarelis 1976) - e.g. a simple-cell-like representation from the output of complex cells. By stacking multiple layers of this transformation, we can learn ever more complex representations of the input in a greedy fashion. In an application to a neural dataset of V2 neurons (David et al. 2010), we show that the proposed method is much more effective than shallow methods in predicting responses to a validation dataset, and that it recovers a number of suspected receptive field properties of V2 neurons. Since the proposed method can be extended to arbitrary depth, it holds promise in characterizing neural computations at the highest levels of the visual system.

### Towards a Multi-Subject Analysis of Neural Connectivity

**Chris Oates (Warwick)**

Refs:

Oates CJ, Costa L, Nichols T (2014) Towards a Multi-Subject Analysis of Neural Connectivity. Neural Computation (to appear). [arxiv:1404.1239]

Oates CJ, Smith JQ, Mukherjee S, Cussens J (2014) Exact Estimation of Multiple Directed Acyclic Graphs. In Submission. [arxiv:1404.1238]

Download slides: Towards a Multi-Subject Analysis of Neural Connectivity

### Multivariate time series in electroencephalography

**Sofia Olhede (UCL)**

Electroencephalography recordings are measurements of the electrical activity of the brain, taken at the scalp. Generally, multiple such series are recorded and their behaviour in response to sensory stimulation is studied. Because subjects are exposed to different stimulus intensities and modalities, the observations are inherently nonstationary. One of the most important tasks for an applied statistician is to balance the use of degrees of freedom, especially in heterogeneous populations. I will discuss how time-frequency methods can be used to extract important time-localised information, and the importance of correct normalisation (within and across subjects) in this setting.

### Population Level Models of Dynamical Systems

**Will Penny (UCL)**

In this talk I will describe two multivariate dynamical systems models of use to imaging neuroscience. The first operates at a fast time scale and describes the evolution of event-related activity underlying working memory, as measured using MEG. The system is modelled by describing the underlying neuronal sources using a phase/amplitude representation. The second operates at a slow time scale and describes the evolution of gray-matter densities underlying normal ageing and dementia, as measured using MRI. For both approaches we use a mixed effects generative model in which subject-specific dynamics are sampled from a population level model. This approach helps avoid the local minima previously encountered in single-subject dynamical models of MEG. It also allows the use of sparsely sampled time series at the individual subject level, which is especially important for longitudinal MRI as we have many subjects, but each is scanned at only a few time points. Statistical inference is implemented using gradient-based MCMC and we improve the efficiency of model estimation by computing gradients using an adjoint method. The broader vision of this work is that aberrant synaptic plasticity operating at the short time scales of memory encoding leads to the molecular and systems level changes underlying neurodegenerative disease at longer time scales.

### Quantification of noise in MR experiments

**Joerg Polzehl (Weierstrass Institute Berlin)**

We present a novel method for local estimation of the noise level in magnetic resonance images in the presence of a signal. The procedure uses a multi-scale approach to adaptively identify local neighbourhoods with a similar data distribution, and exploits a maximum-likelihood estimator for the local noise level. The information assessed by this method is essential for correct modelling of diffusion magnetic resonance experiments as well as for adequate preprocessing. The validity of the method is evaluated on repeated diffusion data of a phantom and on simulated data, and the results are compared to other noise estimation methods. We illustrate the gain from using the method in data enhancement and modelling of a high-resolution diffusion dataset.

Download slides: Quantification of noise in MR experiments

### Partial volume estimation in brain MRI - revisiting the mixel model

**Alexis Roche (EPFL)**

Conventional magnetic resonance imaging (MRI) based brain morphometry methods rest upon image tissue classification models that ignore, or do not fully account for, the mixing of several tissues within voxels, a problem known as partial voluming. This may lead to inaccurate estimation of both local tissue concentrations and regional tissue volumes, and may impede challenging applications such as the detection of focal atrophy patterns relating to early-stage progression of particular forms of dementia. While it was shown two decades ago that maximum-likelihood partial volume estimation from single-channel MR images is an ill-posed problem [1], the neuroimaging community has mainly resorted to finite Gaussian mixture modeling approaches for tissue classification (possibly using Markov random field priors), thereby resolving ill-posedness at the expense of neglecting partial volume effects. Owing to the necessity of incorporating strong prior knowledge for the estimation of plausible tissue concentration maps, we propose to regularize the partial volume maximum-likelihood estimation problem using a Bayesian approach that assigns priors to both voxelwise tissue concentrations and image appearance parameters. We further demonstrate an associated maximum a posteriori (MAP) tracking algorithm that essentially uses sequential quadratic programming and works reasonably fast compared to conventional tissue classification methods. Our initial experiments show that global and local brain atrophy measures estimated using the proposed algorithm correlate better with age and disease than those obtained using conventional finite mixture modeling approaches or ad-hoc methods such as the fuzzy c-means algorithm. [1] Choi et al, IEEE Trans. Medical Imaging 10(3), 1991.

### On Firing Rate Estimation for Dependent Interspike Intervals

**Laura Sacerdote (University of Turin)**

Time-varying external inputs determine a time-dependent neuronal instantaneous firing rate, but a time-dependent instantaneous firing rate may also arise from dependencies between successive inter-spike intervals (ISIs). We show that in this second case, the instantaneous firing rate does not reveal the existence of the ISI dependencies; hence the conditional firing rate should be introduced. Existing estimators for the conditional firing rate require knowledge of the ISI distribution, which is rarely available for observed data. We propose a non-parametric estimator of the conditional instantaneous firing rate for Markov, stationary and ergodic ISIs. An algorithm to check the reliability of the proposed estimator is introduced and its consistency properties are proved. The method is applied to data obtained from a stochastic two-compartment model and to experimental data.

### Mutual Information: estimation and application to neural data

**Roberta Sirovich (University of Turin)**

In the past few decades there has been a strong increase in the popularity of information-theoretic analysis of neural data. Information quantities have been used in several directions, such as learning about the signal from the output spike train, but also to quantify dependencies among the involved units. In particular, we are interested in mutual information, a measure of the linear and nonlinear dependencies among random variables. This approach seems to be very promising in many applications in neuroscience. From a statistical point of view, the direct estimation of mutual information presents difficulties that increase with the dimension of the problem. In [1] we proposed a new non-parametric estimator that exploits the link between mutual information and the entropy of a suitably transformed sample. After illustrating some features of the new statistical procedure, we discuss some possible applications in neuroscience. [1] Giraudo MT, Sacerdote L, Sirovich R (2013) Non-parametric Estimation of Mutual Information through the Entropy of the Linkage, Entropy 15(12), 5154-5177.

Download slides: Mutual Information: estimation and application to neural data
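For intuition, a naive plug-in estimator of mutual information (histogram-based, not the linkage-entropy estimator of [1]) can be checked against the closed form for a correlated Gaussian pair, where MI = -0.5 * log(1 - rho^2) nats; all sample sizes and bin counts below are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(3)
n, rho = 200_000, 0.8

# correlated Gaussian pair with known mutual information
x = rng.normal(size=n)
y = rho * x + np.sqrt(1 - rho ** 2) * rng.normal(size=n)

def mutual_info_hist(x, y, bins=40):
    """Plug-in MI estimate (in nats) from a 2-D histogram."""
    counts, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = counts / counts.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0) on empty cells
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

mi_hat = mutual_info_hist(x, y)
mi_true = -0.5 * np.log(1 - rho ** 2)     # analytic value, about 0.51 nats
```

The plug-in estimate is close to the analytic value here, but its bias grows quickly with dimension and bin count, which is exactly the difficulty that motivates better estimators such as the one in [1].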

### Sequential Monte Carlo samplers for a conditionally linear problem in magneto/electro-encephalography

**Alberto Sorrentino (University of Genoa)**

Magneto/Electro-encephalography (M/EEG) are powerful tools that record the magnetic field / electric potential generated by brain activity with millisecond resolution. However, estimation of the spatio-temporal distribution of neural currents from M/EEG data is an ill-posed problem, due to the non-identifiability of the model. We adopt the Bayesian approach and make use of a multi-dipole model, where the neural current is approximated by a small set of point-like currents (current dipoles), each one characterized by a location and a dipole moment. We consider the problem of estimating the number of dipoles and their parameters either from a single spatial distribution of M/EEG data, or from a time series, under the assumption that the number of sources and their locations do not change in time. We exploit the linearity with respect to the dipole moment, and set up a variable-dimension model and a Sequential Monte Carlo sampler (SMC, Del Moral et al., 2006) to approximate the marginal posterior distribution for the non-linear variables, while the conditionally Gaussian posterior for the dipole moments is computed analytically. As the only time-varying variables are the linear ones, the computational cost of the algorithm does not depend on the length of the time series. We apply the method to both synthetic and experimental data to show that it can effectively recover neural sources with high accuracy. A comparison with a full SMC (Sorrentino et al., 2014), sampling the whole posterior distribution, shows that exploitation of the linear substructure indeed leads to a reduced Monte Carlo variance of the estimators. References: Del Moral et al. (2006), Journal of the Royal Statistical Society B 68: 411-436; Sorrentino et al. (2014), Inverse Problems 30: 045010.

### High-resolution diffusion MRI by msPOAS

**Karsten Tabelow (Weierstrass Institute)**

In this talk we present msPOAS, a new method for adaptive smoothing of diffusion magnetic resonance imaging data. The procedure is based on the propagation-separation approach and uses the geometry of the measurement space of (voxel) positions and (gradient) orientations to reduce noise in the measured image volumes. We will elaborate on the principles of the algorithm and show applications to high-resolution diffusion MRI data.

Download slides: High-resolution diffusion MRI by msPOAS

### Nonlinear approaches to neural system identification

**Lucas Theis (University of Tübingen)**

Due to their conceptual and computational simplicity, generalized linear models (GLMs) represent a popular choice for the probabilistic characterization of neural spike responses. However, their limited flexibility necessitates choosing an appropriate feature space to model nonlinear behavior, which can be difficult in practice. I will present nonlinear extensions to generalized linear models which are able to extract nonlinear features from data automatically. Despite losing global convergence guarantees, these models are able to learn complex stimulus-response relationships with simple off-the-shelf optimization routines and can outperform typical GLMs by large margins. I will further show how they can be used to improve spike extraction from two-photon calcium images and discuss the quantification of the quality of spike train predictions.
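As a generic toy illustration of the idea (not Theis's model; all names and hyper-parameters are made up), one can replace the fixed feature space of a Poisson GLM with a single layer of rectified features whose weights are learned jointly with the readout by gradient descent:

```python
import numpy as np

def poisson_nll(rate, y):
    """Mean Poisson negative log-likelihood (up to the log y! constant)."""
    return np.mean(rate - y * np.log(rate + 1e-12))

def fit_nonlinear_glm(X, y, n_hidden=8, lr=0.02, n_iter=3000, seed=0):
    """Poisson regression with learned rectified features.

    Rather than hand-picking a nonlinear feature space, the feature
    weights W are trained jointly with the readout (a, b) by full-batch
    gradient descent: the global convexity of a GLM is lost, but the
    features adapt to the data."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = 0.1 * rng.standard_normal((d, n_hidden))
    a = 0.1 * rng.standard_normal(n_hidden)
    b = 0.0
    for _ in range(n_iter):
        H = np.maximum(X @ W, 0.0)        # learned rectified features
        rate = np.exp(H @ a + b)          # canonical exponential link
        err = (rate - y) / n              # gradient of the NLL w.r.t. eta
        a -= lr * (H.T @ err)
        b -= lr * err.sum()
        W -= lr * X.T @ ((err[:, None] * a) * (H > 0))
    return W, a, b
```

Because rectified units can reproduce linear features, such a model contains the plain GLM as a special case while also capturing nonlinear stimulus-response relationships.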

Download slides: Nonlinear approaches to neural system identification

### Opportunities and Challenges in EEG-based Assessment of Cognitive Status in Severe Brain Injury

**Jonathan D. Victor (Cornell) Nicolas D. Schiff**

Severe brain injury presents an immense burden to affected individuals, their families, and society. While many individuals eventually recover some level of function, they typically have overwhelming motor disability. This confounds the determination of cognitive capacities via standard behavioral means, and motivates the development of assessment strategies that bypass the motor system, such as functional brain imaging and electroencephalography (EEG). EEG is an especially attractive approach because it can capture events at behaviorally relevant timescales of less than one second, it is widely available, and measurements can be made over a prolonged period of time. The latter consideration is especially important for assessing patients with severe chronic brain injury because their level of arousal can fluctuate substantially and unpredictably. Nevertheless, developing EEG-based paradigms to assess cognitive function presents challenges. Some of these are generic: electrical signals recorded on the scalp constitute a spatially averaged mixture of the activity of large and heterogeneous populations of neurons, and inferring the sources of these signals is an ill-posed problem. Scalp-recorded signals invariably contain artifacts, due both to other bioelectric sources (such as muscle activity) and to environmental sources; this problem is exacerbated in this subject population, as these patients are unable to cooperate and may be in an electrically noisy environment. Moreover, artifacts may have complex dynamics and covariation across time, as they may be coupled to the level of arousal or to environmental events. Finally, the EEG is intrinsically multivariate: it is a broadband signal recorded at dozens of scalp locations. Since the particular dynamical features of interest may not be known in advance, or even be predictable from normal subjects, there is the potential for a massive multiple-comparisons problem.
While practical strategies exist for meeting each of these challenges, there is much room for improvement, and improvements will directly translate into more precise and reliable evaluation tools.
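One standard guard against the massive multiple-comparisons problem mentioned above is false-discovery-rate control. The Benjamini-Hochberg step-up procedure, sketched here as a generic illustration (it is not claimed to be the approach used in this talk), decides how many of a set of per-channel or per-feature p-values may be declared discoveries:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns a boolean mask of discoveries controlling the false discovery
    rate at level q: reject the k smallest p-values, where k is the largest
    rank with p_(k) <= k * q / m."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    k = int(np.max(np.nonzero(below)[0])) + 1 if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask
```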

Download slides: Opportunities and Challenges in EEG-based Assessment of Cognitive Status in Severe Brain Injury

### Statistical Pitfalls in Cognitive Neuroscience

**Eric-Jan Wagenmakers (University of Amsterdam)**

In this presentation I discuss three statistical pitfalls that are particularly relevant for cognitive neuroscience. The first pitfall concerns the fact that the difference between significant and not significant is itself not necessarily significant (i.e., the imager's fallacy). The second pitfall concerns the misinterpretation of the p-value as evidence against the null hypothesis; specifically, I will show that when p is about .05, the evidence against the null is anecdotal at best. The third pitfall is perhaps most serious, and it concerns the presentation of exploratory analyses as confirmatory. All three pitfalls can be avoided, but it requires that cognitive neuroscientists change the way they design their experiments and analyze their data.
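The claim about p ≈ .05 can be made concrete with the well-known upper bound on the Bayes factor, BF10 ≤ 1/(-e p ln p) for p < 1/e (Sellke, Bayarri & Berger, 2001). The computation below is a generic illustration, not taken from the slides:

```python
import math

def bayes_factor_bound(p):
    """Upper bound on the Bayes factor for H1 over H0 given a p-value:
    BF10 <= 1 / (-e * p * ln p), valid for p < 1/e
    (Sellke, Bayarri & Berger, 2001)."""
    return 1.0 / (-math.e * p * math.log(p))

print(round(bayes_factor_bound(0.05), 2))  # about 2.46: anecdotal evidence at best
```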

Download slides: Statistical Pitfalls in Cognitive Neuroscience (powerpoint presentation)

### ABC and the statistical challenges of big simulation

**Richard Wilkinson (University of Nottingham)**

'Big data' has been the focus of much recent research, asking how we can learn when datasets are so large that traditional methods of analysis break down. In this talk, I will discuss the complementary challenge presented by 'big simulation': how can we analyse simulators that are so complex that traditional statistical methodology cannot be used? I will describe and review a class of algorithms known as approximate Bayesian computation (ABC) methods, which have been developed to fit complex simulators to data (calibration). ABC methods have rapidly become popular in the biological sciences over the past decade. The simplest form of the algorithm is very easy to implement and can nearly always be applied, allowing us (in theory) to fit any simulator to data. For complex simulators, in practice we have to use more efficient (and more complex) versions of ABC in order to do the analysis. I will review the main approaches taken to implementing ABC for expensive simulators, and outline some recent work that uses Gaussian process emulators of the simulator in order to enable inference for genuinely expensive stochastic simulators.
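The simplest form of the algorithm mentioned above, rejection ABC, fits in a few lines. This toy calibrates the mean of a Gaussian simulator from an observed sample mean and is purely illustrative; the names, tolerance, and prior are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

def abc_rejection(simulate, observed, prior_sample, eps, n_draws):
    """Basic rejection ABC: draw a parameter from the prior, run the
    simulator, and keep the draw whenever the simulated summary statistic
    falls within eps of the observed one."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if abs(simulate(theta) - observed) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy calibration: recover the mean of a Gaussian simulator from an
# observed sample mean of 2.0.
post = abc_rejection(
    simulate=lambda th: rng.normal(th, 1.0, size=50).mean(),
    observed=2.0,
    prior_sample=lambda: rng.uniform(-10.0, 10.0),
    eps=0.2,
    n_draws=20000,
)
```

The accepted draws approximate the posterior; the low acceptance rate of this naive scheme is exactly why the more efficient variants and emulator-based approaches discussed in the talk are needed for expensive simulators.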

Download slides: ABC and the statistical challenges of big simulation

### Localisation microscopy with quantum dots using non-negative matrix factorisation

**Chris Williams (University of Edinburgh)**

We propose non-negative matrix factorisation (NMF) to model a noisy dataset of highly overlapping fluorophores with intermittent intensities. We can recover images of individual sources from the optimised model, despite their high mutual overlap in the original data. This allows us to consider blinking quantum dots as bright and stable fluorophores for localisation microscopy. We compare the NMF results to the CSSTORM, 3B and bSOFI techniques. Joint work with Ondrej Mandula, Ivana Sumanovac Sestak and Rainer Heintzmann.
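A minimal version of the NMF model can be written with the classical Lee-Seung multiplicative updates. This generic sketch (not the authors' pipeline) treats each movie frame as a nonnegative mixture of per-source images, so that the blinking of the quantum dots provides the temporal diversity needed to separate overlapping sources:

```python
import numpy as np

def nmf(V, k, n_iter=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for V ~ W H under Frobenius loss.

    V : (n_frames, n_pixels) movie, one flattened image per row.
    Rows of H are per-source images; columns of W are per-frame source
    intensities, whose fluctuations (blinking) make the factors identifiable."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ (H @ H.T) + eps)
    return W, H
```

The multiplicative form keeps both factors nonnegative throughout, which is what lets the rows of `H` be interpreted directly as images of individual sources.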

Download slides: Localisation microscopy with quantum dots using non-negative matrix factorisation