
Prof. Mark Sandler gives FAST talk @Ideas Unwrapped

Thursday 26 April, Queen Mary University of London, Mile End

Professor Mark Sandler gave a keynote talk, ‘FAST Forward’, on the FAST project on Thursday 26 April at Queen Mary’s annual Ideas Unwrapped event.

Talk summary:
Music’s changing fast: FAST’s changing music! The FAST EPSRC Programme Grant – Fusing Audio & Semantic Technologies for Intelligent Music Production and Consumption – led from Queen Mary is looking at new technologies to disrupt the recorded music industry. As well as the team from QM, we will showcase some of the work from partners Nottingham and Oxford, and we hope that by the end of it you’ll have some idea what we mean by Signal Processing and the Semantic Web. Even if you don’t, you’ll preview some cool new ideas, apps and technology that the team will be showcasing to industry at Abbey Road Studios in the autumn.

You can listen to Prof. Sandler’s talk online:


FAST Live Demonstrations @Ideas Unwrapped

Thursday 26 April, Queen Mary University of London, Mile End

FAST IMPACt is a five-year EPSRC project that brings the latest technologies to bear on the entire recorded music industry, end-to-end, from producer to consumer. It aims to make the production process more fruitful, the consumption process more engaging, and the delivery and intermediation more automated and robust. The Principal Investigator is Professor Mark Sandler, Director of the Centre for Digital Music, School of EECS. The other two academic partners in the project are the University of Oxford and the University of Nottingham. BBC R&D, Abbey Road RED, Universität Erlangen-Nürnberg and other industry partners are also involved.

Part of the annual Ideas Unwrapped event for Queen Mary staff and students on Thursday 26 April, the FAST show-and-tell session will take place in the Performance Lab of the Engineering Building, 11:15–13:15, Queen Mary, Mile End. It will consist of a number of live demonstrations showcasing some of the project’s most exciting research in semantic web and audio technologies. The session will be preceded by a talk by the project’s Director, Professor Mark Sandler, introducing the FAST project (People’s Palace).

Talk: ‘FAST Forward’
Mark Sandler (Centre for Digital Music)

Music’s changing fast: FAST’s changing music! The FAST EPSRC Programme Grant led from Queen Mary is looking at new technologies to disrupt the recorded music industry. As well as the team from Queen Mary, we will showcase some of the work from partners Nottingham and Oxford, and we hope that by the end of it attendees will have some idea what we mean by Signal Processing and the Semantic Web. Even if they don’t, they’ll preview some cool new ideas, apps and technology that the team will be showcasing to industry at Abbey Road Studios in the autumn.

List of projects and presenters on show in the FAST session:

1) FXive: A Web Platform for Procedural Sound Synthesis
Parham Bahadoran and Adan Benito (Centre for Digital Music, EECS)

FXive demonstrates a new way to perform sound design. High-quality, artistic sound effects can be achieved through lightweight and versatile sound synthesis models. Such models do not rely on stored samples, and provide a rich range of sounds that can be shaped at the point of creation.
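To give a flavour of what sample-free synthesis means in practice, here is a minimal, hypothetical Python sketch (not FXive’s actual code): a wind-like effect generated entirely from filtered noise, whose character is shaped by parameters at the point of creation rather than by a recording.

```python
import numpy as np
from scipy.signal import butter, lfilter
from scipy.io import wavfile

def wind_effect(duration=3.0, sr=44100, gustiness=0.5, cutoff=400.0):
    """Generate a wind-like sound from filtered white noise.

    No stored samples are used: every parameter (duration, gustiness,
    filter cutoff) shapes the sound at the point of creation.
    """
    n = int(duration * sr)
    noise = np.random.randn(n)

    # Low-pass filter the noise to soften harsh high frequencies.
    b, a = butter(2, cutoff / (sr / 2), btype="low")
    wind = lfilter(b, a, noise)

    # A slow, randomly-phased amplitude envelope simulates gusts.
    t = np.linspace(0, duration, n)
    envelope = 1.0 + gustiness * np.sin(2 * np.pi * 0.3 * t + np.random.rand())
    wind *= envelope

    # Normalise and write as 16-bit audio.
    wind = wind / np.abs(wind).max()
    wavfile.write("wind.wav", sr, (wind * 32767).astype(np.int16))

wind_effect(gustiness=0.8, cutoff=300.0)
```

Changing the parameters yields a different sound every time, which is the point: the model, not a sample library, is the asset.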

2) Signal Processing Methods for Source Separation in Music Production
Delia Fano Yela (Centre for Digital Music, EECS)

3) Audio Commons: An Ecosystem for bringing Creative Commons content to the creative industries
George Fazekas (Centre for Digital Music, EECS)

Audio Commons addresses barriers to using Creative Commons content in the creative industries presented by uncertain licensing, insufficient metadata or variation in quality. Two demonstrators, developed for music production and game sound design use cases, will be shown.

AudioTexture is a plugin prototype for sound texture synthesis developed by AudioGaming.

SampleSurfer provides an audio search engine that integrates instant listening capabilities, editing tools, and transparent Creative Commons (CC) licensing processes.

4) MusicLynx
Rishi Shukla & Alo Allik (Centre for Digital Music, EECS)

MusicLynx is a web platform for music discovery that collects information and reveals connections between artists from a range of online sources. The information is used to build a network that users can explore to discover new artists and how they are linked together.
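As a rough illustration of the underlying idea (a sketch, not MusicLynx’s actual implementation), connections gathered from several online sources can be merged into one graph that a user then explores; the artist names, sources and link types below are invented:

```python
import networkx as nx

# Hypothetical connections harvested from different online sources;
# each edge records where it came from and the type of link it asserts.
connections = [
    ("Artist A", "Artist B", {"source": "dbpedia", "relation": "same genre"}),
    ("Artist B", "Artist C", {"source": "musicbrainz", "relation": "collaborated"}),
    ("Artist A", "Artist C", {"source": "last.fm", "relation": "similar listeners"}),
]

graph = nx.Graph()
graph.add_edges_from(connections)

# A user exploring from one artist sees its neighbours and why they are linked.
for neighbour in graph.neighbors("Artist A"):
    edge = graph.edges["Artist A", neighbour]
    print(f"{neighbour}: {edge['relation']} (via {edge['source']})")
```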

5) Fast DJ
Florian Thalmann (Centre for Digital Music, EECS)

A minimal DJ web app that analyses music files, makes automatic transitions, and learns from the user’s taste.
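For a flavour of how an automatic transition can work, here is a minimal sketch (not the Fast DJ code), assuming the two tracks have already been beat-aligned: an equal-power crossfade keeps perceived loudness roughly constant as one track hands over to the next.

```python
import numpy as np

def equal_power_crossfade(track_a, track_b, fade_samples):
    """Crossfade from the end of track_a into the start of track_b.

    Equal-power gain curves (cos/sin) keep the summed power constant
    through the transition. Assumes both tracks share a sample rate
    and have been beat-aligned beforehand.
    """
    t = np.linspace(0, 1, fade_samples)
    gain_out = np.cos(t * np.pi / 2)   # outgoing track fades out
    gain_in = np.sin(t * np.pi / 2)    # incoming track fades in

    overlap = track_a[-fade_samples:] * gain_out + track_b[:fade_samples] * gain_in
    return np.concatenate([track_a[:-fade_samples], overlap, track_b[fade_samples:]])

# Two seconds of overlap at 44.1 kHz on two placeholder mono tracks.
sr = 44100
a = np.random.randn(10 * sr)
b = np.random.randn(10 * sr)
mix = equal_power_crossfade(a, b, 2 * sr)
```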

6) Grateful Live
Thomas Wilmering (Centre for Digital Music, EECS)

A website for the exploration of recordings of Grateful Dead concerts, drawing its information from various sources on the web.

7) Numbers-into-Notes Semantic Remixer
John Pybus (Oxford e-Research Centre, University of Oxford)

8) The PRiSM audience perception app
Matthew Wilcoxson (Oxford e-Research Centre, University of Oxford)

We have developed a customisable app for obtaining audience feedback during live performances, which has been used in live public experiments in Manchester and Oxford to investigate human perception of musical features. As well as supporting research, these experiments have engaged audiences with music, maths, and particular composers. (The PRiSM app has been developed in collaboration with colleagues in Oxford and the Royal Northern College of Music, and relates to other apps in FAST such as Mood Conductor.)

9) The Climb! performance and score archive – MELD and Muzicodes
Kevin Page (Oxford e-Research Centre, University of Oxford)

“Climb!” is a non-linear musical work for Disklavier and electronics in which the pianist’s progression through the piece is not predetermined, but dynamically chosen according to scored challenges and choices. The challenges are implemented using two FAST technologies — Muzicodes and MELD (Music Encoding and Linked Data) — which are also used to create an interactive archive through which recorded performances of Climb! can be explored.
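Schematically, a non-linear score of this kind can be modelled as a graph of sections in which each recognised trigger opens a branch. The sketch below is purely illustrative (the section names and codes are invented, and this is not the MELD/Muzicodes implementation):

```python
# Hypothetical non-linear score: each section lists the branches a
# recognised "muzicode" (a trigger pattern in the performance) can open.
score = {
    "base_camp": {"code_storm": "north_face", "code_sun": "south_ridge"},
    "north_face": {"code_shelter": "summit"},
    "south_ridge": {"code_goat": "summit"},
    "summit": {},
}

def perform(recognised_codes, section="base_camp"):
    """Walk the score graph, choosing branches as codes are recognised."""
    route = [section]
    for code in recognised_codes:
        branches = score[section]
        if code in branches:
            section = branches[code]
            route.append(section)
    return route

# One performance's ascent, determined by what the pianist played.
print(perform(["code_sun", "code_goat"]))  # ['base_camp', 'south_ridge', 'summit']
```

Recording the route taken by each performance is also what makes an explorable archive possible: every traversal of the graph is a distinct, comparable object.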

10) 6-Channel Guitar Dataset
Johan Pauwels (Centre for Digital Music, EECS)

A demonstration of a data collection procedure, as a prerequisite for future research.

Further information:

Dr. Jasmina Bolfek-Radovani
FAST IMPACt Programme Manager (FTE 0.7)
School of Electronic Engineering and Computer Science
Queen Mary University of London
Peter Landin Building
10 Godward Square, London E1 4FZ
Tel: +44 (0)20 7882 7597
j.bolfek-radovani@qmul.ac.uk
Follow us on Twitter: @semanticaudio

QMUL Inaugural Lecture by Prof. Josh Reiss

The Inaugural Lecture by Professor Josh Reiss, Professor in Audio Engineering

“Do you hear what I hear? The science of everyday sounds”

Tuesday 17 April, 18:30 – 19:30 hrs, Queen Mary University of London

Book here:
https://www.eventbrite.co.uk/e/do-you-hear-what-i-hear-the-science-of-everyday-sounds-tickets-43749224107

Description

The sounds around us shape our perception of the world. In films, games, music and virtual reality, we recreate those sounds or create unreal sounds to evoke emotions and capture the imagination. But there is a world of fascinating phenomena related to sound and perception that is not yet understood. If we can gain a deep understanding of how we perceive and respond to complex audio, we could not only interpret the produced content, but we could create new content of unprecedented quality and range.

This talk considers the possibilities opened up by such research. What are the limits of human hearing? Can we create a realistic virtual world without relying on recorded samples? If every sound in a major film or game soundtrack were computer-generated, could we reach a level of realism comparable to modern computer graphics? Could a robot replace the sound engineer? Investigating such questions leads to a deeper understanding of auditory perception, and has the potential to revolutionise sound design and music production. Research breakthroughs concerning such questions will be discussed, and cutting-edge technologies will be demonstrated.

Biography

Josh Reiss is a Professor of Audio Engineering with the Centre for Digital Music at Queen Mary University of London. He has published more than 200 scientific papers (including over 50 in premier journals and 4 best paper awards), and co-authored the textbook Audio Effects: Theory, Implementation and Application. His research has been featured in dozens of original articles and interviews since 2007, including Scientific American, New Scientist, The Guardian, Forbes magazine, La Presse and on BBC Radio 4, BBC World Service, Channel 4, Radio Deutsche Welle, LBC and ITN, among others. He is a former Governor of the Audio Engineering Society (AES), chair of their Publications Policy Committee, and co-chair of the Technical Committee on High-resolution Audio. His Royal Academy of Engineering Enterprise Fellowship resulted in the founding of the high-tech spin-out company LANDR, which currently has over a million and a half subscribers and is valued at over £30M. He has investigated psychoacoustics, sound synthesis, multichannel signal processing, intelligent music production, and digital audio effects. His primary focus of research, which ties together many of the above topics, is on the use of state-of-the-art signal processing techniques for professional sound engineering. He maintains a popular blog, YouTube channel and Twitter feed for scientific education and dissemination of research activities.

Palindrome Perception: Music and Maths in the Sheldonian


Music and Maths, Sheldonian Theatre in Oxford, Oxfordshire. (Photo: Matthew Wilcoxson)

On Saturday 27th January more than 800 people attended “Music and Maths”, a performance by the Oxford Philharmonic with Marcus du Sautoy, Simonyi Professor for the Public Understanding of Science and Professor of Mathematics. As well as a performance of masterworks by Mozart, Haydn and Beethoven (conducted by Marios Papadopoulos), and a fascinating discussion by Marcus du Sautoy of the numerical blueprints of celebrated scores, over 500 members of the audience took part in an experiment run by the PRiSM collaboration between the Oxford e-Research Centre’s FAST project and the Royal Northern College of Music.

Listening to Haydn’s Symphony No. 47 in G major, ‘The Palindrome’, participants used the PRiSM perception app to indicate where they perceived palindromes in the performance. This data will be analysed as part of the Oxford e-Research Centre’s ongoing study into mathematics and music, and develops out of their inaugural experiment at a performance of György Ligeti’s Fanfares at the RNCM at the launch of PRiSM last October.
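One simple way such data could be aggregated (a hypothetical sketch, not the actual analysis pipeline) is to pool the participants’ tap times and bin them into a response-density curve, whose peaks show where palindromes were most often perceived; the tap times and movement length below are invented:

```python
import numpy as np

# Hypothetical tap times (seconds from the start of the movement)
# collected from three participants via the app.
taps = [
    [42.1, 95.3, 130.8],
    [44.7, 128.2],
    [40.9, 97.0, 133.5],
]

# Pool all taps and bin them into a response-density curve.
all_taps = np.concatenate([np.asarray(t) for t in taps])
duration = 180.0                      # hypothetical movement length
bins = np.arange(0, duration + 5, 5)  # 5-second bins
density, edges = np.histogram(all_taps, bins=bins)

# The busiest bin marks where palindromes were most often perceived.
peak = edges[np.argmax(density)]
print(f"Strongest palindrome perception around {peak:.0f}-{peak + 5:.0f} s")
```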

More on PRiSM:
https://www.rncm.ac.uk/news/lord-mayor-manchester-launches-new-research-centre-prism-rncm/

Read the Oxford e-Research Centre news item:
https://www.oerc.ox.ac.uk/news/music-and-maths-sheldonian

FAST at the ‘All Your Bass’ festival in Nottingham

The Mixed Reality Lab, in collaboration with the National Video Game Arcade and the Theatre Royal and Royal Concert Hall, Nottingham, presents two exciting activities scheduled for ‘All Your Bass’, a new videogame music festival in Nottingham.
https://www.thenva.com/allyourbass

 
1 – “Climb!” for Disklavier and Electronics, performed by Anne Veinberg and Zubin Kanga.

6pm, Friday 19th January at The Royal Concert Hall, Nottingham. Free entrance.

Bringing a new perspective to the influence of videogames on classical music, composer Maria Kallionpää, the University of Nottingham’s Mixed Reality Lab and the University of Oxford’s e-Research Centre present their recent collaboration “Climb!” for Disklavier and Electronics. “Climb!” is a virtuoso piece composed for live pianist, self-playing Disklavier piano, interactive system and visuals, combining contemporary piano repertoire with elements of computer games to create a non-linear musical journey in which the pianist negotiates the ascent of a mountain. Along the way the performer encounters musical challenges that determine their route, battles through uncertain weather conditions, and comes face-to-face with animals and other obstacles that block their path.

Pianists Anne Veinberg and Zubin Kanga present two back-to-back performances of “Climb!”, each finding their own route up the mountain.

“Climb!” is supported by the EPSRC-funded FAST project (EP/L019981/1) and University of Nottingham’s Research Priority Area (RPA) Development Fund.

2 – Nott Listening: an exploratory audio walk around Nottingham’s City Centre

Times: 10am to 5pm, Friday 19th, Saturday 20th and Sunday 21st January. National Video Game Arcade. Free activity.

Specially crafted spoken stories and original music accompany you as you walk, immersing you in a rich sound world controlled by your movements and your location. View the City Centre streets through a new lens and peer into the lives of the people you meet along the way. Original texts written by members of the University of Nottingham Creative Writing Society, Jocelyn Spence and Adrian Hazzard. Original music composed by James Torselli and Adrian Hazzard.
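Location-controlled audio of this kind is often built on geofencing: the walk is divided into zones, and a cue plays when the listener enters one. The sketch below is purely illustrative (the coordinates, radii and cue names are invented, and this is not the Nott Listening implementation):

```python
import math

# Hypothetical geofenced audio zones: each pairs a city-centre location
# (latitude, longitude, radius in metres) with a story/music cue.
zones = [
    {"lat": 52.9536, "lon": -1.1505, "radius": 30.0, "cue": "market_square.mp3"},
    {"lat": 52.9530, "lon": -1.1440, "radius": 25.0, "cue": "carlton_st.mp3"},
]

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres (haversine)."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def active_cues(lat, lon):
    """Return the audio cues triggered at the walker's current position."""
    return [z["cue"] for z in zones
            if distance_m(lat, lon, z["lat"], z["lon"]) <= z["radius"]]

print(active_cues(52.9535, -1.1506))  # inside the first zone -> ['market_square.mp3']
```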

Nott Listening commences from the National Video Game Arcade, 24-32 Carlton St, Nottingham, where you can borrow the required equipment. The walk is free to do, but you will need to leave some security with us while the equipment is in your possession. We can accommodate a limited number of people at any given time, so you are welcome to just drop in, but we recommend that you book a slot in advance by calling Adrian Hazzard on 07983416504 or by emailing locativesound@gmail.com.

This work is supported by the Horizon Centre for Doctoral Training at the University of Nottingham (RCUK Grant No. EP/G037574/1) and the EPSRC-funded FAST project (EP/L019981/1).