
Current PhD and EngD Projects

Amar Dhokia, High-fidelity smell in virtual environments

Virtual simulation today still lacks the means to combine all sensory stimuli effectively into a realistic virtual experience. Although novel attempts have been made to integrate olfaction into virtual environments, a paradigm shift that overcomes the problems currently associated with incorporating this sense has yet to occur. The aim of this research is to create an olfaction virtualisation framework independent of how olfactory perception works. The work can be broken into stages. First is the capture stage, which requires an effective way to capture smells and to analyse and quantify them at a chemical level. The next stage requires a mechanism by which the reproduced odours can be presented to users in the most perceptually authentic way, i.e. accurate in timing, in concentration, and in the character of the smell itself. The final stage requires a feedback loop to control the odour presentation dynamically and to change its characteristics as needed, since environments are rarely static.
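
As a concrete illustration of the final stage, the sketch below shows a minimal proportional feedback loop in Python that nudges an odour emitter towards a target concentration. The OdourSensor and OdourEmitter classes and the control_step function are hypothetical placeholders invented for illustration; they are not part of the project or of any real device API.

    # Hypothetical sketch of the feedback stage: a proportional controller that
    # nudges an odour emitter towards a target concentration. OdourSensor and
    # OdourEmitter are illustrative stand-ins, not real devices or APIs.
    from dataclasses import dataclass

    @dataclass
    class OdourSensor:
        """Stand-in for a chemical sensor reporting concentration in ppm."""
        concentration: float = 0.0

        def read(self) -> float:
            return self.concentration

    @dataclass
    class OdourEmitter:
        """Stand-in for a delivery device driven by a non-negative emission rate."""
        rate: float = 0.0

        def set_rate(self, rate: float) -> None:
            self.rate = max(0.0, rate)

    def control_step(sensor: OdourSensor, emitter: OdourEmitter,
                     target_ppm: float, gain: float = 0.5) -> None:
        """One loop iteration: emit more when below target, less when above."""
        error = target_ppm - sensor.read()
        emitter.set_rate(emitter.rate + gain * error)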

Rossella Suma, PhD Project

This PhD project aims to test whether HDR presentation methods allow faster and more accurate perception of psychologically relevant stimuli with an emphasis on face processing. The work is also likely to provide new insights into how humans perceive faces and which elements are relevant for emotion recognition.

Completed Doctoral Projects (PhD)

Martin Kolar, High Quality Texture Synthesis

This research investigated the quality of offline texture synthesis, introduced a new method for texture synthesis, and made advances in quality assessment. The following were developed: an improved patch-based algorithm for parallel texture synthesis, a new texture dataset, and a user study that yielded statistically significant conclusions about state-of-the-art texture synthesis algorithms.
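
For background, the following is a minimal Python/NumPy sketch of a classic patch-based synthesiser in the spirit of image quilting, without the seam optimisation step. It illustrates the family of algorithms the thesis builds on rather than the improved parallel algorithm itself, and all names and parameters are illustrative.

    # A simplified patch-based texture synthesiser: tiles the output with
    # exemplar patches, choosing each patch so its overlap region best matches
    # the pixels already placed. This sketches the general approach only.
    import numpy as np

    def synthesise(exemplar: np.ndarray, out_size: int, patch: int = 32,
                   overlap: int = 8, candidates: int = 200, seed: int = 0) -> np.ndarray:
        rng = np.random.default_rng(seed)
        h, w = exemplar.shape[:2]
        step = patch - overlap
        out = np.zeros((out_size, out_size, exemplar.shape[2]), dtype=exemplar.dtype)
        for y in range(0, out_size - patch + 1, step):
            for x in range(0, out_size - patch + 1, step):
                best, best_err = None, np.inf
                for _ in range(candidates):
                    sy = int(rng.integers(0, h - patch + 1))
                    sx = int(rng.integers(0, w - patch + 1))
                    cand = exemplar[sy:sy + patch, sx:sx + patch]
                    err = 0.0
                    if x > 0:  # left overlap against already-synthesised pixels
                        err += np.sum((cand[:, :overlap].astype(float) -
                                       out[y:y + patch, x:x + overlap].astype(float)) ** 2)
                    if y > 0:  # top overlap against already-synthesised pixels
                        err += np.sum((cand[:overlap, :].astype(float) -
                                       out[y:y + overlap, x:x + patch].astype(float)) ** 2)
                    if err < best_err:
                        best, best_err = cand, err
                out[y:y + patch, x:x + patch] = best
                # A full method would also blend or min-cut along the overlap seam.
        return out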

Stratos Doukakis, Resource allocation for Multi-Sensory Virtual Environments

Fidelity is of key importance if multi-sensory Virtual Environments (VEs) are to be used as authentic representations of real environments. Simulating the multitude of senses that comprise the human sensory system is computationally challenging, and with limited computational resources it is essential to distribute them carefully in order to deliver the best possible perceptual experience. Based on a series of subjective experiments, estimation models were proposed and validated across a range of resource budgets and scenarios.
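
As a toy illustration of the general idea, the sketch below splits a fixed per-frame compute budget across sensory modalities in proportion to importance weights. In the actual work such weights would come from the subjective experiments; the numbers here are made-up placeholders.

    # Hypothetical budget split: allocate frame time across modalities in
    # proportion to perceptual-importance weights. All numbers are placeholders.
    def allocate_budget(total_ms: float, weights: dict[str, float]) -> dict[str, float]:
        total_weight = sum(weights.values())
        return {sense: total_ms * w / total_weight for sense, w in weights.items()}

    # Example: a 16.6 ms frame budget split across three modalities.
    shares = allocate_budget(16.6, {"visual": 0.6, "audio": 0.25, "olfactory": 0.15})
    print(shares)  # {'visual': ~9.96, 'audio': ~4.15, 'olfactory': ~2.49}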

Pinar Satilmis, High Fidelity Sky Models

Light sources are an important part of physically-based rendering when accurate imagery is required. High-fidelity models of sky illumination are essential when virtual environments are lit by the sky, as is the case in most outdoor scenarios. The complex nature of sky lighting makes it difficult to model real skies accurately. Current solutions are either analytic, which becomes computationally expensive for complex models, or based on captured data, which is impractical to acquire and difficult to use because of temporal inconsistencies in the captured content. This research advances the state of the art in sky lighting by addressing these problems with novel sky illumination methods that are accurate, practical and flexible. Two methods are presented: the first focuses on clear-sky lighting and the second deals with illumination from cloudy skies.
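
For context on the analytic side, the sketch below evaluates the Perez all-weather luminance distribution, the kind of formulation underlying classic clear-sky models such as Preetham et al. It is background only, not the method developed in this thesis, and the coefficients are left as inputs (in a full model they would be derived from atmospheric turbidity).

    # Perez-style relative sky luminance: theta is the zenith angle of the sky
    # element, gamma its angular distance from the sun (both in radians).
    # A, B, C, D, E are model coefficients, normally derived from turbidity.
    import math

    def perez(theta: float, gamma: float,
              A: float, B: float, C: float, D: float, E: float) -> float:
        return ((1.0 + A * math.exp(B / max(math.cos(theta), 1e-4))) *
                (1.0 + C * math.exp(D * gamma) + E * math.cos(gamma) ** 2))

    def relative_luminance(theta: float, gamma: float, theta_sun: float,
                           coeffs: tuple) -> float:
        # Normalise by the zenith value: at the zenith theta = 0 and the angle
        # to the sun equals the solar zenith angle theta_sun.
        return perez(theta, gamma, *coeffs) / perez(0.0, theta_sun, *coeffs)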

Ratnajit Mukherjee

This PhD evaluated the acceptability of HDR video compared with traditional video, investigated existing HDR video compression algorithms, and finally proposed a novel compression algorithm that delivers superior HDR video quality at low transmission cost.
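
As general HDR-video background (not the compression algorithm proposed in the thesis), the sketch below implements the SMPTE ST 2084 "PQ" inverse EOTF, a perceptual transfer function widely used to map absolute HDR luminance to code values before compression.

    # PQ (SMPTE ST 2084) inverse EOTF: maps absolute luminance in cd/m^2 to a
    # perceptually uniform signal in [0, 1], ready for quantisation and coding.
    m1 = 2610 / 16384          # 0.1593017578125
    m2 = 2523 / 4096 * 128     # 78.84375
    c1 = 3424 / 4096           # 0.8359375
    c2 = 2413 / 4096 * 32      # 18.8515625
    c3 = 2392 / 4096 * 32      # 18.6875

    def pq_encode(luminance_nits: float) -> float:
        y = max(0.0, min(luminance_nits / 10000.0, 1.0))
        y_m1 = y ** m1
        return ((c1 + c2 * y_m1) / (1.0 + c3 * y_m1)) ** m2

    # Typical diffuse white (~100 cd/m^2) lands near half the code range.
    print(round(pq_encode(100.0), 3))  # ~0.508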