
Music

I have a long-standing interest in music, spanning playing and production, research into algorithmic music, undergraduate research projects and interdisciplinary teaching.

Sonification-for-music

Sonification is the process of turning data into a non-speech audible signal to help convey information or monitor a data stream. For instance, NASA has used sonification to study the solar wind. I'm interested in using non-musical data such as text or images to form musical sound. There is a lot of interest in algorithmic composition, i.e. the production of music without human intervention. For example, one can use evolutionary algorithms to mutate a random sequence towards a particular style of music: Melomics is a nice example. Of course, the "evolutionary fitness" of each generation is defined against a human standard (genre patterns). In sonification-for-music, the algorithm's role is that of a "translator", converting data to music according to some rules. The composer can both tweak the rules and choose the data set on which they operate.
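As an illustration of the "translator" idea, here is a minimal sketch (not the project's actual code) that maps an arbitrary numeric data stream onto MIDI pitches in a pentatonic scale; the choice of scale, register and fixed rhythm are assumptions standing in for the composer's rules, and the sample values are made up.

```python
# Minimal sonification-for-music sketch: map a numeric data stream to MIDI pitches.
# The pentatonic scale, base pitch and two-octave range are illustrative rules only.

PENTATONIC = [0, 2, 4, 7, 9]  # scale degrees, in semitones above the tonic

def sonify(data, tonic=60, octaves=2):
    """Scale each value onto a pentatonic pitch set spanning `octaves` octaves."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1  # avoid division by zero for constant data
    degrees = [d + 12 * o for o in range(octaves) for d in PENTATONIC]
    notes = []
    for x in data:
        idx = round((x - lo) / span * (len(degrees) - 1))
        notes.append(tonic + degrees[idx])
    return notes  # MIDI note numbers, one per data point

if __name__ == "__main__":
    sample_stream = [310, 420, 395, 610, 580, 455, 330]  # made-up data values
    print(sonify(sample_stream))
```

Changing the rules (the scale, the mapping function) or the data set changes the resulting music, which is exactly the space the composer explores.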

Text to Music is based on work by three University of Warwick Physics students: Alex Simpson, John Bamping and Joe Marsh Rossney. They explored the generation of music directly from text, using phonemes (the basic sounds of a natural language) as an intermediary. To our knowledge, this approach had not been tried before. The project was supported by IATL via the Science of Music interdisciplinary module. Some samples of work produced in this phase of the project are available on Soundcloud.
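A rough sketch of the phoneme-as-intermediary idea follows; the tiny hand-written phoneme dictionary and the phoneme-to-pitch table are placeholders for illustration, not the students' actual lexicon or mapping.

```python
# Sketch of text -> phonemes -> pitches. The phoneme dictionary and the
# phoneme-to-scale-degree table below are illustrative placeholders only.

PHONEMES = {            # tiny hand-written lookup, stand-in for a real lexicon
    "music": ["M", "Y", "UW", "Z", "IH", "K"],
    "text":  ["T", "EH", "K", "S", "T"],
}

PITCH_OF = {            # assumed mapping from phoneme to scale degree
    "M": 0, "Y": 2, "UW": 4, "Z": 5, "IH": 7, "K": 9,
    "T": 0, "EH": 4, "S": 7,
}

def text_to_notes(text, tonic=60):
    """Convert words to MIDI pitches via their phoneme sequences."""
    notes = []
    for word in text.lower().split():
        for ph in PHONEMES.get(word, []):
            notes.append(tonic + PITCH_OF.get(ph, 0))
        notes.append(None)  # None marks a rest at each word boundary
    return notes

print(text_to_notes("text music"))
```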

SAMI is an earlier project in which we converted images to music.

Outreach

Music is a great way into physics, and I enjoy using it for science outreach.

Production

I have been involved in music recording since the days of the cassette 4-track!