
Current Seminar Offerings

Further details and publications on these topics can be found in the Research Interests section.


Chaos Communication Modelling


This talk will deal with the communications-engineering research area of chaos-based communications systems - systems in which message bits are carried and spread by discrete-time chaotic waves rather than periodic waves. Such systems have potential for security, capacity and resistance to interference. Development of the area depends heavily on the statistical and dynamical aspects of chaotic processes and their interplay with stochastic channel noise and interference. While the area is very active among several international groups of mathematically minded electronics researchers, it faces challenges which are most suitably approached with mathematical and statistical expertise. The main theoretical aspects are those of bit estimation and bit error, both highly statistical. Previous engineering approaches ignored some key points and gave unsatisfactory approximations. Work on the chaos-shift keying system by the speaker and collaborators has produced exact results which give considerable insight of engineering value. Latest work involves developments to laser-based shift-keying communications systems and the analysis of very long sequences of data collected over very short time periods. Chaos-based communications is one area in nonlinear engineering where statisticians can make a difference.
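
As an illustration of the statistical flavour of the problem, the following minimal sketch simulates an antipodal chaos-shift keying link and estimates its bit error rate by Monte Carlo. The logistic-map carrier, the coherent correlator receiver and the Gaussian channel are illustrative assumptions, not the specific systems analysed in the talk.

import numpy as np

rng = np.random.default_rng(0)

def logistic_segment(x0, n):
    """Iterate the logistic map x -> 4x(1-x) to get an n-point chaotic segment."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])
    return x - 0.5  # centre so the segment has roughly zero mean

def csk_bit_error_rate(n_bits=2000, spread=16, noise_sd=0.3):
    """Monte Carlo bit error rate for antipodal CSK with a coherent correlator receiver."""
    errors = 0
    for _ in range(n_bits):
        bit = rng.integers(0, 2)                                 # message bit
        x = logistic_segment(rng.uniform(0.01, 0.99), spread)    # chaotic carrier segment
        tx = x if bit == 1 else -x                               # antipodal keying
        rx = tx + rng.normal(0.0, noise_sd, spread)              # additive Gaussian channel noise
        decision = 1 if np.dot(rx, x) > 0 else 0                 # correlate against the reference segment
        errors += (decision != bit)
    return errors / n_bits

print(csk_bit_error_rate())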

This topic can be covered at an introductory level for statistical audiences or at a level which assumes knowledge of the area and which goes more into current issues and advances.


Statistical Analysis in Financial Time Series

This topic, currently being developed, is concerned with graphics and modelling for volatility in time series. A general modelling basis for stationary time series allowing changing level and volatility is used to suggest graphics which explore the existence of volatility, its dependence on earlier values in the time series, and the possible forms of that dependence. The importance of prior decorrelation is explored. Illustrations use financial series, while simulations are employed for validation purposes. The well-known signature of volatility, the clustering of oppositely signed extremes, is captured by the plot of the squared deautocorrelated values and their autocorrelations. This prompts investigation of the explicit model followed by the squared values when the unsquared deautocorrelated values follow GARCH models. In fact, the squared values follow ARMA models, except that their innovations are themselves GARCH-like, volatile and dependent, as seen in examples from financial series. A final point concerns the usefulness of linear dependency in financial series, however slight, and motivates modifying the iconic ARCH and GARCH volatility models to include linear dependence in a natural way suggested by the earlier volatility developments.
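
As a small numerical illustration of the volatility signature referred to above, the sketch below uses a simulated GARCH(1,1) series in place of a real financial series (the parameter values and function names are illustrative only): the autocorrelations of the series itself are near zero, while those of its squares are clearly positive and slowly decaying.

import numpy as np

rng = np.random.default_rng(1)

def simulate_garch11(n, omega=0.05, alpha=0.1, beta=0.85):
    """Simulate GARCH(1,1): e_t = sigma_t z_t, sigma_t^2 = omega + alpha e_{t-1}^2 + beta sigma_{t-1}^2."""
    e = np.zeros(n)
    sigma2 = omega / (1.0 - alpha - beta)      # start at the unconditional variance
    for t in range(n):
        e[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + alpha * e[t] ** 2 + beta * sigma2
    return e

def sample_acf(x, max_lag=10):
    """Sample autocorrelations of a series at lags 1..max_lag."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

e = simulate_garch11(5000)
print("ACF of series: ", np.round(sample_acf(e), 3))        # near zero: no linear dependence
print("ACF of squares:", np.round(sample_acf(e ** 2), 3))   # positive, slowly decaying: volatility clustering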


Statistical Aspects of Chaos


An approach to understanding the statistical aspects of deterministic chaotic map sequences is developed, based on extending the Perron-Frobenius theory of invariant distributions to nonlinear dependence in the sequences, as followed by Kohda, and identifying Kohda's commonly satisfied but little-known equi-distribution property. Whilst complete independence can never be obtained, a form of deterministic independence can be given. Nonlinear dependence will be seen through mean-centred quadratic correlations, an old nonlinear time series idea, but one where the previous lack of mean-centring has produced misleading information. The idea of chaotic synchronization will then be introduced, whereby two chaotic sequences each driven by a third chaotic sequence may converge to each other, a phenomenon of opposite character to that of chaotic divergence; there will be extensions of earlier work to the statistical aspects of bivariate chaos, along with the relevant bivariate dynamical aspects. These areas emanate from chaos-based communications. Other topics, also with communications motivations, are those of binary and discrete chaos, where, perhaps surprisingly, independent sequences can be produced.
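
The mean-centred quadratic correlation idea can be illustrated on the fully chaotic logistic map, as in the sketch below; the choice of map, lag and function names are illustrative, not the specific constructions of the talk. For this map the ordinary autocorrelations vanish at all positive lags, yet at lag one the quadratic correlation is exactly -1, since x_{t+1} = 1 - 4(x_t - 1/2)^2 is a linear function of the squared centred value.

import numpy as np

rng = np.random.default_rng(2)

def logistic_orbit(x0, n, burn=1000):
    """Orbit of the logistic map x -> 4x(1-x) after burn-in, sampling the invariant (arcsine) law."""
    x = x0
    for _ in range(burn):
        x = 4.0 * x * (1.0 - x)
    orbit = np.empty(n)
    for i in range(n):
        orbit[i] = x
        x = 4.0 * x * (1.0 - x)
    return orbit

def quadratic_correlation(x, lag):
    """Mean-centred quadratic correlation: corr of x_{t+lag} with (x_t - mean)^2, centring BEFORE squaring."""
    past_sq = (x[:-lag] - x.mean()) ** 2
    future = x[lag:]
    a = past_sq - past_sq.mean()
    b = future - future.mean()
    return np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b))

x = logistic_orbit(rng.uniform(0.01, 0.99), 100_000)
print("linear autocorrelation, lag 1:", round(np.corrcoef(x[:-1], x[1:])[0, 1], 4))  # near zero
print("quadratic correlation, lag 1: ", round(quadratic_correlation(x, 1), 4))       # close to -1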


Local Influence Regression Diagnostics for Goodness of Model Fit and Model Test


Local influence diagnostics for linear and generalized linear models are usually formulated in terms of Cook's likelihood distance. This measure is concerned with parameter estimates after model or data perturbations; the non-perturbed likelihood is evaluated at the perturbed estimates as the basis of the measure. By contrast, here the local influence approach is extended to likelihood-based measures for goodness of model fit, in the form of the deviance and perturbed deviance, and to influence on model test results, restricted, for brevity, to the test of a null model. It is claimed that the distinction made resolves some conceptual questions still remaining over the original focus on parameter estimates. Whilst local influence points to the case directions of greatest local influence, there is still a need to employ actual perturbations in such directions and to calibrate them statistically. It is suggested that this be done by choosing perturbations along a line on the deviance or test statistic perturbation surface and calibrating the actual influence by the resulting perturbed significance level. For Cook's distance measure of case deletion, this is analogous in its assessment to parameter estimate movement in relation to an original parameter confidence region. The approach is exemplified with the standard Normal linear regression model, and by developments for binomial regression models where appropriate local perturbation schemes are suggested.
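
A minimal numerical sketch of the calibration idea for the Normal linear regression case is given below: a single case is down-weighted along its perturbation direction, and the unperturbed deviance evaluated at the perturbed estimates is tracked together with the perturbed significance level of the test of the null (intercept-only) model. The data, the single-case perturbation direction and the grid of perturbation sizes are illustrative assumptions, not the talk's examples.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Illustrative data: simple Normal linear regression with one planted influential case.
n = 40
x = np.linspace(0, 1, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)
y[-1] += 3.0                                   # case 40 made influential
X = np.column_stack([np.ones(n), x])

def wls_fit(X, y, w):
    """Weighted least-squares estimates under case weights w (w = 1 is the unperturbed fit)."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

def deviance(X, y, beta):
    """Normal-model deviance (residual sum of squares), evaluated with unperturbed weights."""
    r = y - X @ beta
    return float(r @ r)

rss0 = float(((y - y.mean()) ** 2).sum())      # deviance of the null (intercept-only) model

# Perturb along the direction of case 40: w = 1 except w_40 = 1 - delta (delta = 1 is full deletion).
for delta in (0.0, 0.5, 1.0):
    w = np.ones(n)
    w[-1] = 1.0 - delta
    beta = wls_fit(X, y, w)
    dev = deviance(X, y, beta)                 # unperturbed deviance at the perturbed estimates
    F = (rss0 - dev) / (dev / (n - 2))         # test of the null model under the perturbation
    p = stats.f.sf(F, 1, n - 2)                # perturbed significance level
    print(f"delta={delta:.1f}  deviance={dev:.2f}  p-value={p:.2e}")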


Masking and Cook's Distance


Cook's distance is a well-known statistic in regression diagnostics for assessing parameter estimate influence by case deletion. Initial remarks on some lesser-known aspects in the wider context of influence will form the introduction. Since it is concerned with cases individually, it can miss the influential effects of pairs and, more generally, groups of cases; such difficulties have been referred to as masking, although the definition has been left rather open. Two approaches will be mentioned, one in terms of the more established joint influence and the arguably preferable one in terms of the notion of conditional influence, conditional on the previous deletion of cases. In respect of the former, a new version of Cook's distance appropriate for replicated data will be shown, and also one for 'oppositely' replicated data. These yield some intuition on the distorting effects of joint influence relative to individual influence. Masking will be defined in terms of conditional influence and Cook's distance, and the definition will indicate the circumstances in which it can arise. Exemplification with a constructed data set and a reported data set will be given. Further work on goodness-of-fit and testing influence may be mentioned.
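
The constructed illustration below sketches the masking phenomenon using ordinary single-case Cook's distances and a conditional Cook's distance computed after a prior deletion; the planted pair of similar outliers and the specific numbers are hypothetical, not the data sets cited in the talk.

import numpy as np

rng = np.random.default_rng(4)

def cooks_distance(X, y):
    """Single-case Cook's distances for an ordinary least-squares fit."""
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X, X.T)          # hat matrix
    h = np.diag(H)
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    r = y - X @ beta
    s2 = r @ r / (n - p)
    return (r ** 2 / (p * s2)) * h / (1 - h) ** 2

# Two jointly placed, similar high-leverage outliers that mask each other.
n = 30
x = np.concatenate([np.linspace(0, 1, n - 2), [2.0, 2.02]])
y = 1.0 + 2.0 * x + rng.normal(0, 0.3, n)
y[-2:] += 4.0                                       # a pair of similar outliers
X = np.column_stack([np.ones(n), x])

d_full = cooks_distance(X, y)
print("individual Cook's D of case 29:", round(d_full[-2], 2))
print("individual Cook's D of case 30:", round(d_full[-1], 2))

# Conditional influence: delete case 30 first, then recompute case 29's Cook's distance.
keep = np.arange(n) != n - 1
d_cond = cooks_distance(X[keep], y[keep])
print("conditional Cook's D of case 29 (after deleting case 30):", round(d_cond[-1], 2))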


Engine Mapping Experiments: Regression Based on Spark Sweeps


Engine mapping is the term used in the automotive industry for modelling engine outputs in terms of engine inputs. Such relationships are required by electronic engine controllers to provide optimum fuel economy within legal restrictions on exhaust gas emissions and within the operational limits of the engine. The current methods offer considerable scope for improvement, and a new approach has been developed. The key idea is to consider the problem in two stages. The first stage is concerned with response curves, which reflect the way the data are collected. The second stage involves informed multivariate regression modelling of key engineering quantities extracted from these curves. This division of the problem allows both input from the engineering base and the effective use of statistical modelling and a variety of diagnostics; it produces models with much improved predictive performance. The approach is outlined and then illustrated on data from a designed experiment carried out during the course of the work. A number of novel statistical features are involved, some of which are still being investigated. The topic has not previously been subject to detailed study in either the statistics or engineering communities. The original investigations were carried out for The Ford Motor Company and involved the PhD work of Tim Holliday, supervised by Tony Lawrance (Birmingham University) and Tim Davis (Ford Motor Company), and the collaboration of the Power Train Matching Group at Ford's Dunton Research and Engineering Centre.
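
The two-stage idea can be sketched as follows; the quadratic torque-versus-spark curve, the extracted quantities (spark advance at peak torque and the peak torque itself) and the operating-point variables are illustrative assumptions rather than the actual models developed for Ford.

import numpy as np

rng = np.random.default_rng(5)

def fit_spark_sweep(spark, torque):
    """Stage 1: fit a quadratic torque-vs-spark curve and extract its peak (spark at maximum, peak torque)."""
    c2, c1, c0 = np.polyfit(spark, torque, 2)       # torque ~ c2*s^2 + c1*s + c0
    peak_spark = -c1 / (2.0 * c2)                   # spark advance at the curve maximum
    peak_torque = c0 - c1 ** 2 / (4.0 * c2)
    return peak_spark, peak_torque

# Hypothetical sweeps: one sweep per (speed, load) operating point of a designed experiment.
operating_points = [(1500, 0.3), (1500, 0.6), (3000, 0.3), (3000, 0.6)]
features = []
for speed, load in operating_points:
    spark = np.linspace(10, 40, 8)                  # spark advance grid for this sweep
    true_peak_spark = 20 + 0.004 * speed + 10 * load            # made-up ground truth
    torque = 80 * load - 0.02 * (spark - true_peak_spark) ** 2 + rng.normal(0, 0.5, spark.size)
    features.append(fit_spark_sweep(spark, torque))

# Stage 2: regress the extracted curve features on the operating conditions.
Z = np.column_stack([np.ones(len(operating_points)),
                     [s for s, _ in operating_points],
                     [l for _, l in operating_points]])
peak_spark, peak_torque = np.array(features).T
coefs, *_ = np.linalg.lstsq(Z, peak_spark, rcond=None)
print("stage-2 model of peak-torque spark on speed and load:", np.round(coefs, 3))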


Analysis of Road Distress Data


The talk describes an analysis of pavement condition data collected by the Transport Road Research Laboratory at two experimental road sites in England over the period 1960-1985; these were measurements of Benkelman Beam deflections, together with records of traffic loading, taken at 6- or 12-month intervals. The analysis investigates the deflection trend as a function of road base material and thickness. The deflection trend was represented by a negative exponential curve form. Engineering aspects of the curve form are mathematically extracted and statistically analysed. The results focus on the dependency of deflection progression on both road base material and thickness, which are shown to act either jointly or singly, depending on the engineering characteristic. In particular, the work exemplifies an approach to the analysis of the deflection progression trend as a function of traffic loading and pavement construction characteristics, such as base material and thickness. This makes possible the comparative evaluation of base materials and thicknesses. In making the comparisons, several aspects of deflection behaviour were identified, the main ones being the eventual level of deflection (DEL), the drop in deflection over the entire period (DDR) and the traffic loading up to the onset of stable deflection (TST). The idea of an initial settling period was proposed, with its deflection drop (DSP) and traffic loading (TSP); further, irregularities in the progressions suggested flexing of the surfaces, which was captured by deflection residual variability (DRV). A variety of engineering conclusions were obtained from the analysis. The work is intended as a methodological guide to the area, and definitive conclusions on specific materials and thickness values would need wider justification over many sites. The talk is based on joint work with Professor H Kerali of Birmingham's Engineering School.
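
To make the curve-based quantities concrete, the sketch below fits a negative exponential deflection trend to hypothetical deflection-versus-traffic readings and extracts rough analogues of DEL, DDR and DRV; the exact functional form, parameter values and simulated data are illustrative assumptions, not the TRRL site data.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(6)

def deflection_trend(traffic, eventual_level, drop, rate):
    """Negative exponential trend: deflection falls from (eventual_level + drop) towards eventual_level."""
    return eventual_level + drop * np.exp(-rate * traffic)

# Hypothetical site data: deflection measurements against cumulative traffic loading.
traffic = np.linspace(0, 10, 40)                        # e.g. millions of standard axles
true = deflection_trend(traffic, 0.4, 0.6, 0.5)
deflection = true + rng.normal(0, 0.02, traffic.size)   # noisy Benkelman Beam readings

params, _ = curve_fit(deflection_trend, traffic, deflection, p0=(0.5, 0.5, 1.0))
DEL = params[0]                                                            # eventual level of deflection
DDR = deflection_trend(traffic[0], *params) - deflection_trend(traffic[-1], *params)  # drop over the period
DRV = np.std(deflection - deflection_trend(traffic, *params))              # residual variability about the trend
print(f"eventual deflection level DEL = {DEL:.3f}")
print(f"drop in deflection over the period DDR = {DDR:.3f}")
print(f"deflection residual variability DRV = {DRV:.3f}")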
