FAST Live Demonstrations @Ideas Unwrapped

FAST Live Demonstrations @Ideas Unwrapped, Thursday 26 April, Queen Mary University of London, Mile End

FAST IMPACt is a five-year EPSRC project that brings the latest technologies to bear on the entire recorded music industry, end to end, from producer to consumer. It aims to make the production process more fruitful, the consumption process more engaging, and delivery and intermediation more automated and robust. The Principal Investigator is Professor Mark Sandler, Director of the Centre for Digital Music, School of EECS. The other two academic partners in the project are the University of Oxford and the University of Nottingham. BBC R&D, Abbey Road Red, Universität Erlangen-Nürnberg and other industry partners are also involved.

Part of Ideas Unwrapped, the annual Queen Mary event for staff and students, the FAST show-and-tell session will take place on Thursday 26 April, 11:15–13:15, in the Performance Lab of the Engineering Building at Queen Mary, Mile End. It will consist of a number of live demonstrations showcasing some of the project’s most exciting research on semantic web and audio technologies. The session will be preceded by a talk in the People’s Palace by the project’s Director, Professor Mark Sandler, introducing the FAST project.

Talk: ‘FAST Forward’ 
Mark Sandler (Centre for Digital Music)

Music’s changing fast: FAST’s changing music! The FAST EPSRC Programme Grant, led from Queen Mary, is looking at new technologies to disrupt the recorded music industry. As well as work by the Queen Mary team, we will showcase some of the work from our partners at Nottingham and Oxford, and we hope that by the end attendees will have some idea of what we mean by Signal Processing and the Semantic Web. Even if they don’t, they’ll preview some cool new ideas, apps and technology that the team will be showcasing to industry at Abbey Road Studios in the autumn.

List of projects and presenters on show in the FAST session:

1) FXive: A Web Platform for Procedural Sound Synthesis
Parham Bahadoran and Adan Benito (Centre for Digital Music, EECS)

FXive demonstrates a new way to perform sound design. High-quality, artistic sound effects can be achieved with lightweight, versatile sound synthesis models. Such models do not rely on stored samples, and they provide a rich range of sounds that can be shaped at the point of creation.
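
To make the idea concrete, here is a minimal, self-contained sketch of sample-free procedural sound design (not FXive’s actual engine): a wind-like effect produced by shaping white noise with a slowly modulated low-pass filter. All parameter choices are illustrative.

```python
# A wind-like sound effect synthesised from scratch: the only "source
# material" is white noise, shaped by a time-varying low-pass filter.
import numpy as np
from scipy.signal import butter, lfilter
from scipy.io import wavfile

SR = 44100
DURATION = 4.0
n = int(SR * DURATION)

noise = np.random.randn(n)

# Slowly sweep the filter cutoff (a 0.25 Hz LFO) to mimic gusts.
t = np.arange(n) / SR
cutoff = 400 + 300 * np.sin(2 * np.pi * 0.25 * t)   # 100-700 Hz

# Filter in short blocks, carrying filter state across block boundaries,
# so the cutoff can change over time.
out = np.zeros(n)
zi = np.zeros(2)                      # state for a 2nd-order filter
block = 1024
for start in range(0, n, block):
    stop = min(start + block, n)
    b, a = butter(2, cutoff[start] / (SR / 2), btype="low")
    out[start:stop], zi = lfilter(b, a, noise[start:stop], zi=zi)

out /= np.max(np.abs(out))            # normalise to [-1, 1]
wavfile.write("wind.wav", SR, (out * 32767).astype(np.int16))
```

Because nothing is pre-recorded, every parameter (gust rate, cutoff range, duration) can be changed at the point of creation, which is exactly the property the demo exploits.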

2) Signal Processing Methods for Source Separation in Music Production
Delia Fano Yela (Centre for Digital Music, EECS)
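
The listing carries no abstract for this demo, so as a hedged, generic illustration of what a signal-processing approach to separation can look like (not necessarily the method on show), here is harmonic/percussive separation by median-filtering a spectrogram, after Fitzgerald (2010).

```python
# Harmonic/percussive separation by median filtering: harmonic energy is
# smooth across time, percussive energy is smooth across frequency.
import numpy as np
from scipy.io import wavfile
from scipy.ndimage import median_filter
from scipy.signal import stft, istft

sr, x = wavfile.read("mix.wav")            # hypothetical input file
x = x.astype(np.float64) / 32768.0         # assumes 16-bit PCM
if x.ndim > 1:
    x = x.mean(axis=1)                     # mono for simplicity

f, t, X = stft(x, fs=sr, nperseg=2048)
mag = np.abs(X)

H = median_filter(mag, size=(1, 17))       # median across time frames
P = median_filter(mag, size=(17, 1))       # median across frequency bins

# Soft (Wiener-like) mask for the harmonic stream; its complement
# yields the percussive stream.
mask = H**2 / (H**2 + P**2 + 1e-10)
_, harmonic = istft(X * mask, fs=sr, nperseg=2048)
_, percussive = istft(X * (1 - mask), fs=sr, nperseg=2048)

wavfile.write("harmonic.wav", sr, harmonic.astype(np.float32))
wavfile.write("percussive.wav", sr, percussive.astype(np.float32))
```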

3) Audio Commons: An Ecosystem for bringing Creative Commons content to the creative industries
George Fazekas (Centre for Digital Music, EECS)

Audio Commons addresses the barriers that uncertain licensing, insufficient metadata and variable quality present to the use of Creative Commons content in the creative industries. Two demonstrators will be shown, developed for music production and game sound design use cases.

AudioTexture is a plugin prototype for sound texture synthesis developed by AudioGaming.

SampleSurfer provides an audio search engine that integrates instant listening, editing tools, and a transparent Creative Commons (CC) licensing process.
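
As a rough sketch of the kind of licence-aware search these tools build on, the snippet below queries Freesound, one of the Audio Commons content providers, for CC0-licensed sounds. The endpoint and parameter names follow the public Freesound APIv2 documentation as best understood here; treat them as assumptions to verify, and note this is not SampleSurfer’s own API.

```python
# Licence-filtered audio search against the Freesound APIv2 text-search
# endpoint (parameters per the public docs; verify before relying on them).
import requests

API_KEY = "YOUR_FREESOUND_API_KEY"         # hypothetical credential

resp = requests.get(
    "https://freesound.org/apiv2/search/text/",
    params={
        "query": "door slam",
        "filter": 'license:"Creative Commons 0"',  # CC0 results only
        "fields": "id,name,license,previews",
        "token": API_KEY,
    },
    timeout=10,
)
resp.raise_for_status()
for sound in resp.json()["results"]:
    print(sound["id"], sound["name"], sound["license"])
```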

4) MusicLynx
Rishi Shukla & Alo Allik (Centre for Digital Music, EECS)

MusicLynx is a web platform for music discovery that collects information from a range of online sources and reveals connections between artists. This information is used to build a network that users can explore to discover new artists and see how they are linked together.
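
The underlying idea, artists as nodes and provenance-tagged connections as edges, can be sketched in a few lines (the artist links below are invented; MusicLynx’s real graph is built from its online sources):

```python
# Toy artist-discovery graph: nodes are artists, each edge records where
# the connection came from, and discovery is a walk through the graph.
import networkx as nx

connections = [                            # invented, for illustration
    ("Miles Davis", "John Coltrane", "played together"),
    ("John Coltrane", "Alice Coltrane", "family"),
    ("Alice Coltrane", "Radiohead", "cited as influence"),
]

G = nx.Graph()
for a, b, how in connections:
    G.add_edge(a, b, source=how)

# How do two artists connect?
path = nx.shortest_path(G, "Miles Davis", "Radiohead")
for a, b in zip(path, path[1:]):
    print(f"{a} -> {b} ({G.edges[a, b]['source']})")
```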

5) Fast DJ
Florian Thalmann (Centre for Digital Music, EECS)

A minimal DJ web app that analyses music files, makes automatic transitions, and learns from the user’s taste.
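
One ingredient of such an app, a beat-aligned automatic transition, might look like the following sketch (the real Fast DJ pipeline is not described here; the file names and the fixed five-second fade are assumptions):

```python
# Beat-track two tracks with librosa, then crossfade from A to B starting
# on one of A's late beats so the transition lands on the beat grid.
import numpy as np
import librosa

SR = 22050
a, _ = librosa.load("track_a.mp3", sr=SR)   # hypothetical files
b, _ = librosa.load("track_b.mp3", sr=SR)

_, beats = librosa.beat.beat_track(y=a, sr=SR, units="samples")
start = beats[-8]                           # begin 8 beats before A ends

fade = min(len(a) - start, len(b), SR * 5)  # up to a 5-second crossfade
ramp = np.linspace(0.0, 1.0, fade)

mix = np.concatenate([
    a[:start],
    a[start:start + fade] * (1 - ramp) + b[:fade] * ramp,
    b[fade:],
])                                          # mix: A flowing into B
```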

6) Grateful Live
Thomas Wilmering (Centre for Digital Music, EECS)

A website for the exploration of recordings of Grateful Dead concerts, drawing its information from various sources on the web.

7) Numbers-into-Notes Semantic Remixer
John Pybus (Oxford e-Research Centre, University of Oxford)
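
No abstract accompanies this item here; as a loose illustration of the “numbers into notes” idea (mapping a mathematical sequence onto pitches), the sketch below folds the Fibonacci sequence into a pentatonic scale. The musical choices are this sketch’s, not the project’s.

```python
# Fold an integer sequence into a scale to get a melody (MIDI note numbers).
scale = [57, 60, 62, 64, 67]          # A minor pentatonic

def fibonacci(n):
    seq, a, b = [], 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

melody = [scale[k % len(scale)] for k in fibonacci(16)]
print(melody)                         # e.g. feed this to a MIDI synth
```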

8) The PRiSM audience perception app
Mat Willcoxson (Oxford e-Research Centre, University of Oxford)

We have developed a customisable app to obtain audience feedback during live performances, which has been used in live public experiments in Manchester and Oxford to investigate human perception of musical features. As well as supporting research, these experiments have served as public engagement events, connecting audiences with music, maths, and particular composers. (The PRiSM app has been developed in collaboration with colleagues in Oxford and at the Royal Northern College of Music, and relates to other FAST apps such as Mood Conductor.)

9) The Climb! performance and score archive – MELD and Muzicodes
Kevin Page (Oxford e-Research Centre, University of Oxford)

“Climb!” is a non-linear musical work for Disklavier and electronics in which the pianist’s progression through the piece is not predetermined, but dynamically chosen according to scored challenges and choices. The challenges are implemented using two FAST technologies — Muzicodes and MELD (Music Encoding and Linked Data) — which are also used to create an interactive archive through which recorded performances of Climb! can be explored.
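
A heavily simplified picture of such a non-linear score is a directed graph of sections, with a performance being one walk through it. In the sketch below the branch is chosen at random, standing in for the Muzicode triggers recognised in the pianist’s playing (the section names are invented):

```python
# A non-linear score as a directed graph: each section lists the sections
# the performer may branch to; a performance is one path from start to end.
import random

score = {                          # invented section names
    "start":    ["ascent-a", "ascent-b"],
    "ascent-a": ["storm"],
    "ascent-b": ["storm"],
    "storm":    ["summit"],
    "summit":   [],
}

def choose(options):
    # Stand-in for the in-performance trigger: in Climb!, a Muzicode
    # recognised in the playing selects the branch.
    return random.choice(options)

section, performance = "start", ["start"]
while score[section]:
    section = choose(score[section])
    performance.append(section)
print(" -> ".join(performance))
```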

10) 6-Channel Guitar Dataset
Johan Pauwels (Centre for Digital Music, EECS)

A demonstration of the procedure used to collect a six-channel guitar dataset, as a prerequisite for future research.

Further information:

Dr. Jasmina Bolfek-Radovani
FAST IMPACt Programme Manager
School of Electronic Engineering and Computer Science
Queen Mary University of London
Peter Landin Building
10 Godward Square, London E1 4FZ
Tel: +44 (0)20 7882 7597
j.bolfek-radovani@qmul.ac.uk
Follow us on Twitter: @semanticaudio