Category Archives: News

FAST show & tell at Ideas Unwrapped, Queen Mary, 26 April

FAST Live demonstrations at the QM Ideas Unwrapped yearly event, 26 April, Queen Mary

Grateful Live demonstrator by Thomas Wilmering, C4DM, Queen Mary

FAST team members presented some of the project’s most exciting research at the yearly QM event ‘Ideas Unwrapped’, which took place on Thursday 26 April. The session also served as preparation for the FAST Industry Day at Abbey Road Studios on 25 October.

The two-hour session was attended by staff and students from Queen Mary in a relaxed and enjoyable atmosphere. Photos from the FAST event can be viewed here. The session was preceded by a talk entitled ‘FAST Forward’ by Prof. Mark Sandler introducing the project. An audio recording of the talk can be accessed here.

The following live demonstrations were on show:

1) FXive: A Web Platform for Procedural Sound Synthesis
Parham Bahadoran and Adan Benito (Centre for Digital Music, EECS)

FXive demonstrates a new way to perform sound design. High-quality, artistic sound effects can be achieved through lightweight and versatile sound synthesis models. Such models do not rely on stored samples, and provide a rich range of sounds that can be shaped at the point of creation.
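To give a flavour of what sample-free, parametric synthesis means in practice, here is a minimal Python sketch (an illustration only, not FXive’s actual implementation) that generates a wind-like sound by low-pass filtering white noise with a slowly moving cutoff. The cutoff range and ‘gustiness’ control are invented parameters standing in for the controls a sound designer would shape at the point of creation.

```python
# Minimal illustrative sketch of procedural (sample-free) sound synthesis:
# a wind-like effect built from filtered noise rather than stored recordings.
# This is NOT FXive's code; parameter names and ranges are invented.
import numpy as np
from scipy.signal import butter, lfilter
from scipy.io import wavfile

SR = 44100          # sample rate (Hz)
DURATION = 5.0      # seconds of audio to generate
GUSTINESS = 0.6     # 0..1, how strongly the wind "gusts" (hypothetical control)

t = np.linspace(0, DURATION, int(SR * DURATION), endpoint=False)
noise = np.random.randn(t.size)

# A slowly varying cutoff frequency makes the filtered noise swell and fade.
cutoff = 600 + 400 * GUSTINESS * np.sin(2 * np.pi * 0.3 * t)

out = np.zeros_like(noise)
zi = np.zeros(2)                        # filter state carried across blocks
block = 1024
for start in range(0, noise.size, block):
    stop = min(start + block, noise.size)
    b, a = butter(2, cutoff[start] / (SR / 2), btype="low")
    out[start:stop], zi = lfilter(b, a, noise[start:stop], zi=zi)

out /= np.max(np.abs(out))              # normalise to avoid clipping
wavfile.write("wind_sketch.wav", SR, (out * 32767).astype(np.int16))
```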

2) Signal Processing Methods for Source Separation in Music Production
Delia Fano Yela (Centre for Digital Music, EECS)

3) Audio Commons: An Ecosystem for bringing Creative Commons content to the creative industries
George Fazekas (Centre for Digital Music, EECS)

Audio Commons addresses the barriers to using Creative Commons content in the creative industries that arise from uncertain licensing, insufficient metadata and variation in quality. Two demonstrators developed for music production and game sound design use cases will be shown.

AudioTexture is a plugin prototype for sound texture synthesis developed by AudioGaming.

SampleSurfer provides an audio search engine that integrates instant listening capabilities, editing tools, and transparent Creative Commons (CC) licensing processes.

4) MusicLynx
Rishi Shukla (Centre for Digital Music, EECS)

MusicLynx is a web platform for music discovery that collects information and reveals connections between artists from a range of online sources. The information is used to build a network that users can explore to discover new artists and how they are linked together.
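As a rough illustration of the idea behind MusicLynx (not its actual data sources, schema or code), the short Python sketch below builds a small artist graph from hypothetical connections and then finds a path between two artists, which is essentially what a user does when exploring the network.

```python
# Illustrative sketch of an artist-connection graph (hypothetical data,
# not MusicLynx's actual sources or schema).
import networkx as nx

# Each tuple: (artist_a, artist_b, type of connection found online).
connections = [
    ("Miles Davis", "John Coltrane", "collaborated on 'Kind of Blue'"),
    ("John Coltrane", "Alice Coltrane", "family"),
    ("Alice Coltrane", "Radiohead", "cited as an influence"),
    ("Miles Davis", "Herbie Hancock", "band member"),
]

graph = nx.Graph()
for a, b, reason in connections:
    graph.add_edge(a, b, reason=reason)

# Exploring the network: how are two artists linked?
path = nx.shortest_path(graph, "Miles Davis", "Radiohead")
for a, b in zip(path, path[1:]):
    print(f"{a} -> {b}: {graph.edges[a, b]['reason']}")
```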

5) Fast DJ
Florian Thalmann (Centre for Digital Music, EECS)

A minimal DJ web app that analyzes music files, makes automatic transitions, and learns from the user’s taste.

6) Grateful Dead Live
Thomas Wilmering (Centre for Digital Music, EECS)

A website for the exploration of recordings of Grateful Dead concerts, drawing its information from various sources on the web.

7) Numbers-into-Notes Semantic Remixer
John Pybus (Oxford e-Research Centre, University of Oxford)

8) The PRiSM audience perception app
Matthew Wilcoxson (Oxford e-Research Centre, University of Oxford)

A customisable app developed to obtain audience feedback during live performances, which has been used in live public experiments in Manchester and Oxford to investigate human perception of musical features. As well as supporting research, these experiments have acted as public engagement events, connecting audiences with music, maths, and particular composers.

(The PRiSM app has been developed in collaboration with colleagues in Oxford and the Royal Northern College of Music, and relates to other apps in FAST such as Mood Conductor.)

9) The Climb! performance and score archive – MELD and Muzicodes
Kevin Page (Oxford e-Research Centre, Oxford University)  

“Climb!” is a non-linear musical work for Disklavier and electronics in which the pianist’s progression through the piece is not predetermined, but dynamically chosen according to scored challenges and choices. The challenges are implemented using two FAST technologies — Muzicodes and MELD (Music Encoding and Linked Data) — which are also used to create an interactive archive through which recorded performances of Climb! can be explored.

10) 6-Channel Guitar Dataset
Johan Pauwels (Centre for Digital Music, EECS)

A demonstration of a data collection procedure, as a prerequisite for future research.

Prof. Mark Sandler gives FAST talk @Ideas Unwrapped

Thursday 26 April, Queen Mary University of London, Mile End

Professor Mark Sandler gave a keynote talk, ‘FAST Forward’, introducing the FAST project on Thursday 26 April at Queen Mary’s yearly Ideas Unwrapped event.

Talk summary:
Music’s changing fast: FAST’s changing music! The FAST EPSRC Programme Grant – Fusing Audio & Semantic Technologies for Intelligent Music Production and Consumption –  led from Queen Mary is looking at new technologies to disrupt the recorded music industry. As well as the team from QM, we will showcase some of the work from partners Nottingham and Oxford, and we hope by the end of it, you’ll have some idea what we mean by Signal Processing and the Semantic Web. Even if you don’t, you’ll preview some cool new ideas, apps and technology that the team will be showcasing to industry at Abbey Road Studios in the Autumn.

You can listen to Prof. Sandler’s talk online:

FAST Live Demonstrations @Ideas Unwrapped

FAST Live Demonstrations @Ideas Unwrapped, Thursday 26 April, Queen Mary University of London, Mile End

FAST IMPACt is a five-year EPSRC project that brings the latest technologies to bear on the entire recorded music industry, end-to-end, from producer to consumer. It aims to make the production process more fruitful, the consumption process more engaging, and the delivery and intermediation more automated and robust. The Principal Investigator is Professor Mark Sandler, Director of the Centre for Digital Music, School of EECS. The other two academic partners in the project are the University of Oxford and the University of Nottingham. BBC R&D, Abbey Road RED, Universität Erlangen-Nürnberg and other industry partners are also involved.

Part of the yearly Ideas Unwrapped event for Queen Mary staff and students on Thursday 26 April, the FAST show & tell session will take place in the Performance Lab of the Engineering Building, 11:15 – 13:15, Queen Mary, Mile End. It will consist of a number of live demonstrations showcasing some of the project’s most exciting research on Semantic Web and audio technologies. The session will be preceded by a talk in the People’s Palace by the project’s Principal Investigator, Professor Mark Sandler, introducing the FAST project.

Talk: ‘FAST Forward’ 
Mark Sandler (Centre for Digital Music)

Music’s changing fast: FAST’s changing music! The FAST EPSRC Programme Grant led from Queen Mary is looking at new technologies to disrupt the recorded music industry. As well as the team from Queen Mary, we will showcase some of the work from partners Nottingham and Oxford, and we hope by the end of it, attendees will have some idea what we mean by Signal Processing and the Semantic Web. Even if they don’t, they’ll preview some cool new ideas, apps and technology that the team will be showcasing to industry at Abbey Road Studios in the Autumn.

List of projects and presenters on show in the FAST session:

1) FXive: A Web Platform for Procedural Sound Synthesis
Parham Bahadoran and Adan Benito (Centre for Digital Music, EECS)

FXive demonstrates a new way to perform sound design. High-quality, artistic sound effects can be achieved through lightweight and versatile sound synthesis models. Such models do not rely on stored samples, and provide a rich range of sounds that can be shaped at the point of creation.

2) Signal Processing Methods for Source Separation in Music Production
Delia Fano Yela (Centre for Digital Music, EECS)

3) Audio Commons: An Ecosystem for bringing Creative Commons content to the creative industries
George Fazekas (Centre for Digital Music, EECS)

Audio Commons addresses the barriers to using Creative Commons content in the creative industries that arise from uncertain licensing, insufficient metadata and variation in quality. Two demonstrators developed for music production and game sound design use cases will be shown.

AudioTexture is a plugin prototype for sound texture synthesis developed by AudioGaming.

SampleSurfer provides an audio search engine that integrates instant listening capabilities, editing tools, and transparent Creative Commons (CC) licensing processes.

4) MusicLynx
Rishi Shukla & Alo Allik (Centre for Digital Music, EECS)

MusicLynx is a web platform for music discovery that collects information and reveals connections between artists from a range of online sources. The information is used to build a network that users can explore to discover new artists and how they are linked together.

5) Fast DJ
Florian Thalmann (Centre for Digital Music, EECS)

 A minimal DJ web app that analyzes music files, makes automatic transitions, and learns from the user’s taste.

6) Grateful Live
Thomas Wilmering (Centre for Digital Music, EECS)

A website for the exploration of recordings of Grateful Dead concerts, drawing its information from various sources on the web.

7) Numbers-into-Notes Semantic Remixer
John Pybus (Oxford e-Research Centre, University of Oxford)

8) The PRiSM audience perception app
Matthew Wilcoxson (Oxford e-Research Centre, University of Oxford)

We have developed a customisable app to obtain audience feedback during live performances, which has been used in live public experiments in Manchester and Oxford to investigate human perception of musical features. As well as supporting research, these experiments have acted as public engagement events, connecting audiences with music, maths, and particular composers. (The PRiSM app has been developed in collaboration with colleagues in Oxford and the Royal Northern College of Music, and relates to other apps in FAST such as Mood Conductor.)

9) The Climb! performance and score archive – MELD and Muzicodes
Kevin Page (Oxford e-Research Centre, Oxford University)  

“Climb!” is a non-linear musical work for Disklavier and electronics in which the pianist’s progression through the piece is not predetermined, but dynamically chosen according to scored challenges and choices. The challenges are implemented using two FAST technologies — Muzicodes and MELD (Music Encoding and Linked Data) — which are also used to create an interactive archive through which recorded performances of Climb! can be explored.

10) 6-Channel Guitar Dataset
Johan Pauwels (Centre for Digital Music, EECS)

A demonstration of a data collection procedure, as a prerequisite for future research.

Further information:

Dr. Jasmina Bolfek-Radovani
FAST IMPACt Programme Manager (FTE 0.7)
School of Electronic Engineering and Computer Science
Queen Mary University of London
Peter Landin Building
10 Godward Square, London E1 4FZ
Tel: +44 (0)20 7882 7597
j.bolfek-radovani@qmul.ac.uk
Follow us on Twitter: @semanticaudio

QMUL Inaugural Lecture by Prof. Josh Reiss

The Inaugural Lecture by Professor Josh Reiss, Professor in Audio Engineering

“Do you hear what I hear? The science of everyday sounds”

Tuesday 17 April, 18:30 – 19:30 hrs, Queen Mary University of London

Book here:
https://www.eventbrite.co.uk/e/do-you-hear-what-i-hear-the-science-of-everyday-sounds-tickets-43749224107

Description

The sounds around us shape our perception of the world. In films, games, music and virtual reality, we recreate those sounds or create unreal sounds to evoke emotions and capture the imagination. But there is a world of fascinating phenomena related to sound and perception that is not yet understood. If we can gain a deep understanding of how we perceive and respond to complex audio, we could not only better interpret produced content, but also create new content of unprecedented quality and range.

This talk considers the possibilities opened up by such research. What are the limits of human hearing? Can we create a realistic virtual world without relying on recorded samples? If every sound in a major film or game soundtrack were computer-generated, could we reach a level of realism comparable to modern computer graphics? Could a robot replace the sound engineer? Investigating such questions leads to a deeper understanding of auditory perception, and has the potential to revolutionise sound design and music production. Research breakthroughs concerning such questions will be discussed, and cutting-edge technologies will be demonstrated.

Biography

Josh Reiss is a Professor of Audio Engineering with the Centre for Digital Music at Queen Mary University of London. He has published more than 200 scientific papers (including over 50 in premier journals and 4 best paper awards), and co-authored the textbook Audio Effects: Theory, Implementation and Application. His research has been featured in dozens of original articles and interviews since 2007, including in Scientific American, New Scientist, The Guardian, Forbes, La Presse, and on BBC Radio 4, BBC World Service, Channel 4, Radio Deutsche Welle, LBC and ITN, among others. He is a former Governor of the Audio Engineering Society (AES), chair of its Publications Policy Committee, and co-chair of the Technical Committee on High-Resolution Audio. His Royal Academy of Engineering Enterprise Fellowship resulted in the founding of the high-tech spin-out company LandR, which currently has over a million and a half subscribers and is valued at over £30M. He has investigated psychoacoustics, sound synthesis, multichannel signal processing, intelligent music production, and digital audio effects. His primary research focus, which ties together many of the above topics, is the use of state-of-the-art signal processing techniques for professional sound engineering. He maintains a popular blog, YouTube channel and Twitter feed for scientific education and dissemination of research activities.

Palindrome Perception: Music and Maths in the Sheldonian

Music and Maths, Sheldonian Theatre in Oxford, Oxfordshire. (Photo: Matthew Wilcoxson)

On Saturday 27th January more than 800 people attended “Music and Maths”, a performance by the Oxford Philharmonic with Marcus du Sautoy, Simonyi Professor for the Public Understanding of Science and Professor of Mathematics. As well as a performance of masterworks by Mozart, Haydn and Beethoven (conducted by Marios Papadopoulos), and a fascinating discussion by Marcus du Sautoy on the numerical blueprint of celebrated scores, over 500 members of the audience engaged in an experiment as part of the PRiSM collaboration between the Oxford e-Research Centre’s FAST project and the Royal Northern College of Music.

Listening to Haydn’s Symphony No. 47 in G major, ‘The Palindrome’, participants used the PRiSM perception app to indicate where they perceived palindromes in the performance. This data will be analysed as part of the Oxford e-Research Centre’s ongoing study into mathematics and music, and builds on their inaugural experiment at a performance of György Ligeti’s Fanfares at the RNCM at the launch of PRiSM last October.

More on PRiSM:
https://www.rncm.ac.uk/news/lord-mayor-manchester-launches-new-research-centre-prism-rncm/

Read the Oxford e-Research Centre news item:
https://www.oerc.ox.ac.uk/news/music-and-maths-sheldonian

FAST at the ‘All Your Bass’ festival in Nottingham

The Mixed Reality Lab, in collaboration with the National Videogame Arcade and the Theatre Royal and Royal Concert Hall, Nottingham, presents two exciting activities scheduled for ‘All Your Bass’, a new videogame music festival in Nottingham.
https://www.thenva.com/allyourbass

1 – “Climb!” for Disklavier and Electronics, performed by Anne Veinberg and Zubin Kanga.

6pm, Friday 19th January at The Royal Concert Hall, Nottingham. Free entrance.

Bringing a new perspective to the influence of videogames on classical music, composer Maria Kallionpää, the University of Nottingham’s Mixed Reality Lab and the University of Oxford’s e-Research Centre present their recent collaboration “Climb!” for Disklavier and Electronics. “Climb!” is a virtuoso piece composed for live pianist, a self-playing Disklavier piano, interactive system and visuals, which combines contemporary piano repertoire with elements of computer games to create a non-linear musical journey in which the pianist negotiates the ascent of a mountain. Along the way the performer encounters musical challenges that determine their route, battles through uncertain weather conditions, and comes face-to-face with animals and other obstacles that block their path.

Pianists Anne Veinberg and Zubin Kanga present two back-to-back performances of “Climb!”, each finding their own route up the mountain.

“Climb!” is supported by the EPSRC-funded FAST project (EP/L019981/1) and University of Nottingham’s Research Priority Area (RPA) Development Fund.

2 – Nott Listening: an exploratory audio walk around Nottingham’s City Centre

Times: 10am to 5pm, Friday 19th, Saturday 20th and Sunday 21st January. National Videogame Arcade. Free activity.

Specially crafted spoken stories and original music accompany you as you walk, immersing you in a rich sound world controlled by your movements and your location. View the City Centre streets through a new lens and peer into the lives of the people you meet along the way. Original texts written by members of the University of Nottingham Creative Writing Society, Jocelyn Spence and Adrian Hazzard. Original music composed by James Torselli and Adrian Hazzard.

Nott Listening commences from The National Videogame Arcade, 24-32 Carlton St, Nottingham, where you can borrow the required equipment. The walk is free to do, but you will need to leave some security with us while the equipment is in your possession. We can accommodate a limited number of people at any given time, so you are welcome to just drop in, but we recommend that you book a slot in advance by calling Adrian Hazzard on 07983416504 or by emailing locativesound@gmail.com.

This work is supported by the Horizon Centre for Doctoral Training at the University of Nottingham (RCUK Grant No. EP/G037574/1) and the EPSRC-funded FAST project (EP/L019981/1).

FAST C4DM team wins best paper at the ISMIR Conference, 23 – 27 Oct.

Members of the FAST IMPACt project team from C4DM, Queen Mary University of London, and the Oxford e-Research Centre, University of Oxford, presented their research at the 18th International Society for Music Information Retrieval Conference (ISMIR 2017), held in Suzhou, China, 23 – 27 October.

C4DM researchers gave several presentations at the conference. The paper “Transfer learning for music classification and regression tasks” by PhD student Keunwoo Choi, Senior Lecturer Dr George Fazekas and Professor Mark Sandler won the best paper award. In the paper, the researchers presented a transfer learning approach for music classification and regression tasks. They proposed using a pre-trained ‘convnet feature’: a feature vector formed by concatenating the activations of feature maps from multiple layers of a trained convolutional network. They showed that this convnet feature can serve as a general-purpose music representation. The full paper and abstract can be found here: https://ismir2017.smcnus.org/wp-content/uploads/2017/10/12_Paper.pdf
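In outline, the approach takes a convolutional network already trained on one music task, pools the activations of several of its layers, and concatenates the pooled activations into a single feature vector that a simple classifier can use for a new task. The Python sketch below is only an illustration of that idea; the tiny Keras CNN, random data and SVM classifier are placeholders, not the authors’ released code or exact architecture.

```python
# Sketch of the "concatenated convnet feature" idea: pool the activations of
# several layers of a (pre-)trained CNN, join them into one feature vector,
# then train a simple classifier for the target task. The tiny CNN and random
# data below stand in for the trained network and real mel-spectrograms.
import numpy as np
from tensorflow import keras
from sklearn.svm import SVC

# Placeholder CNN standing in for a network already trained on a source task.
inputs = keras.Input(shape=(96, 1366, 1))          # mel bins x frames x 1
x = inputs
taps = []                                          # layer outputs to pool later
for filters in (32, 32, 32, 32, 32):
    x = keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = keras.layers.MaxPooling2D((2, 4))(x)
    taps.append(x)
extractor = keras.Model(inputs, taps)

def convnet_feature(mel_spec):
    """Global-average-pool each layer's feature maps and concatenate them."""
    acts = extractor.predict(mel_spec[np.newaxis, ..., np.newaxis], verbose=0)
    return np.concatenate([a.mean(axis=(1, 2)).ravel() for a in acts])

# Random stand-ins for labelled spectrograms of the target task.
specs = np.random.rand(20, 96, 1366)
labels = np.random.randint(0, 2, size=20)

features = np.stack([convnet_feature(s) for s in specs])   # shape (20, 160)
clf = SVC().fit(features, labels)                          # transferred features
```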

A paper by Research Associate Dr David Weigl and Senior Researcher Dr Kevin Page on Music Encoding and Linked Data was also presented at ISMIR on 24 October. The paper, a product of the FAST IMPACt project, presents the Music Encoding and Linked Data (MELD) framework for distributed real-time annotation of digital music scores.

Further information

OeRC news article:
http://www.oerc.ox.ac.uk/news/weigl-page-paper-music-encoding-and-linked-data-be-presented-ismir

ISMIR Conference website:
https://ismir2017.smcnus.org

FAST workshop at AES New York 2017, 18 – 20 October

FAST IMPACt project members from the Centre for Digital Music (C4DM) organised a workshop on 20 October 2017 as part of AES New York 2017, in the Archiving & Restoration track: AR08 – The Music Never Stopped: The Future of the Grateful Dead Experience in the Information Age.

During this workshop Thomas Wilmering and George Fazekas (who co-chaired the event) demonstrated new ways of navigating concert recordings in which editorial metadata and semantic audio analysis are combined using Semantic Web technologies. They focussed particularly on material by the Grateful Dead and discussed opportunities and requirements with audio archivists and librarians, as well as the broader social and cultural context of how new technologies bear on music archiving and fandom. The workshop also explored the opportunities created by semantic audio technologies for experiencing live music performances and interacting with live performance and cultural archives. It included a presentation by Prof. Mark Sandler, head of C4DM, introducing challenges and opportunities in semantic audio, and C4DM members Thomas Wilmering and Ben White presented their research on navigating music archives and the creative use of archival material. A number of experts from the USA were invited to contribute in the areas of live sound reinforcement, metadata and semantic audio.

The workshop included a panel discussion moderated by George Fazekas. John Meyer, President and CEO of Meyer Sound Laboratories, discussed how the development of new audio technologies was driven by the needs of the Grateful Dead. Nicholas Meriwether, Director of the Center for Counterculture Studies in San Francisco, presented an account of the wealth of material available from the Grateful Dead and discussed challenges and opportunities in the development of music-related cultural archives during the panel discussion, together with Scott Carlson, Metadata Coordinator of Fondren Library at Rice University, and Jeremy Berg, cataloging librarian at the University of North Texas. The panel also reflected on how semantic audio technologies could bring about new ways to navigate and experience live music archives, following demonstrations by Juan Pablo Bello, Associate Professor of Music Technology and Computer Science & Engineering at New York University, and Prof. Bryan Pardo, head of the Northwestern University Interactive Audio Lab. Useful feedback on several aspects of our research in FAST was gathered from the panel as well as the audience.

Further Information:
Fazekas, G., Wilmering, T. (2017). The Music Never Stopped: The Future of the Grateful Dead Experience in the Information Age. Audio Engineering Society 143rd Convention, New York, NY, USA, October 20, 2017. (http://www.aes.org/events/143/archiving/?ID=5761)

See related publication: 
Wilmering, T., Thalmann, F., Fazekas, G., Sandler, M. B. (2017). Bridging Fan Communities and Facilitating Access to Music Archives through Semantic Audio Applications. Audio Engineering Society 143rd Convention, New York, NY, USA, October 20, 2017. (http://www.aes.org/e-lib/browse.cfm?elib=19335)

Nottingham to participate in the videogame music festival

All Your Bass: The Videogame Music Festival, Nottingham, Friday 19 and Saturday 20 January 2018

The National Videogame Arcade has just announced the launch of a new videogame music and audio festival on Friday 19 and Saturday 20 January 2018. A series of talks, concerts and more will take place across The National Videogame Arcade, Nottingham Royal Concert Hall and Antenna, with Friday’s sessions taking an industry and student focus.

The University of Nottingham project team from the Mixed Reality Lab and the composer Maria Kallionpää will participate in the festival with their collaboration ‘Climb!’.

Maria Kallionpää performing Climb!

Further information

Festival Press Release:
All Your Bass: The Videogame Music Festival

FAST IMPACt blog post:
Maria Kallionpää: “Climb!” – A Virtuoso Piece for Live Pianist, Disklavier and Interactive System (2017)

FAST residency with musician Tracy Redhead at C4DM, 17-24 Oct 2017

Between 17 and 24 October 2017 the Centre for Digital Music (C4DM) invited the composer Tracy Redhead to collaborate on FAST with postdoctoral researcher Florian Thalmann. The residency concentrated on dynamic music experiences, including small examples and a larger composition to be released next year. Additional requirements for the Semantic Player FAST demonstrator were jointly specified and explored, on both a compositional and a technical level. Tracy Redhead produced and provided musical material in the form of loops, while the C4DM team provided the framework and mobile application.

Further information:

Semantic Player:
https://www.semanticaudio.ac.uk/demonstrators/d4/

Tracy Redhead:
http://www.tracyredhead.com/