
FAST final celebration event

Friday, 1 November 2019, 1:30 pm – 5:30 pm
The Neuron pod, Whitechapel, Queen Mary University of London

The FAST partners from the Centre for Digital Music at Queen Mary, the Oxford e-Research Centre at the University of Oxford, and the Mixed Reality Lab at the University of Nottingham will come together on Friday 1 November to celebrate the outcomes of the five-year EPSRC-funded FAST project, which comes to an end in December 2019. The event will consist of a mixture of talks, industry presentations and project demos, with plenty of time for socialising.

Final Programme

1:30 – 1:40 pm: Welcome by Mark Sandler, Queen Mary (FAST Principal Investigator)

1:40 – 2:10 pm: Academic & Principal Co-Investigator Talks

From Assisting Creativity to Human-Machine Co-Creation
Dave de Roure, Oxford

The UKRI Centre for Doctoral Training in Artificial Intelligence and Music
Simon Dixon, Queen Mary

The UKRI Centre for Doctoral Training in Artificial Intelligence and Music (AIM) will train a new generation of researchers who combine state-of-the-art ability in artificial intelligence (AI), machine learning and signal processing with cross-disciplinary sensibility, to deliver groundbreaking original research and impact within the UK Creative Industries (CI) and cultural sector. Based at Queen Mary University of London (QMUL), the CDT will receive £6.2M from UKRI, £2M from QMUL and over £1M from industry partners, who include key players in the music industry: Abbey Road Studios, Apple, Deezer, Music Tribe Brands, Roli, Spotify, Steinberg, Solid State Logic and Universal Music, as well as a range of innovative SMEs. AIM will run from 2019 to 2027 and will train five annual cohorts of 12–15 PhD students, starting in September 2019.

Challenges and Opportunities of Using Audio Semantics in Music Production
George Fazekas, Queen Mary

FXive: Transformative Innovations in Sound Design
Joshua Reiss, Queen Mary

Sound effects for games, film and TV are generally sourced from large databases of pre-recorded audio samples, which are inflexible, limited and uncreative. Our alternative is procedural audio, where sounds are created using software algorithms. Supported by the FAST grant, we have launched the high-tech spin-out company FXive. FXive's transformative innovation allows all sound effects to be created in real time and with intuitive controls, without the use of recorded samples, thus creating fully immersive auditory worlds. We aim to completely replace the reliance on sample libraries with our new approach, which is more creative, more enjoyable, higher quality and more productive. Longer term, it can lead to fully computer-generated sound design, just as Toy Story revolutionised the industry with fully computer-generated animation.
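
To give a flavour of what procedural audio means in practice, the sketch below is a minimal, hypothetical illustration (it is not FXive's technology): a wind-like sound is synthesised from filtered noise under two intuitive controls, with no recorded samples involved.

```python
# A minimal, hypothetical sketch of procedural audio (not FXive's actual
# technology): a wind-like sound synthesised from filtered noise, shaped
# by two intuitive controls, with no recorded samples involved.
import numpy as np
from scipy.signal import butter, lfilter
from scipy.io import wavfile

SR = 44100  # sample rate in Hz

def wind(duration=5.0, intensity=0.5, gustiness=0.3, seed=0):
    rng = np.random.default_rng(seed)
    n = int(duration * SR)
    noise = rng.standard_normal(n)
    # Low-pass filter the noise to remove harsh high frequencies.
    b, a = butter(2, 800 / (SR / 2), btype="low")
    sig = lfilter(b, a, noise)
    # A slow random envelope simulates gusts: a few control points per
    # second, interpolated smoothly over the whole signal.
    pts = rng.standard_normal(int(duration * 4)) * gustiness + intensity
    env = np.interp(np.linspace(0, len(pts) - 1, n), np.arange(len(pts)), pts)
    sig = sig * np.clip(env, 0.0, 1.0)
    return (0.9 * sig / np.max(np.abs(sig))).astype(np.float32)

wavfile.write("wind.wav", SR, wind(intensity=0.6, gustiness=0.4))
```

Changing `intensity` and `gustiness` reshapes the sound immediately, which is the kind of intuitive, real-time control the sample-free approach makes possible.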

Post-FAST Opportunities: Musical Knowledge, Inference and Formal Methods for Semantic Deployment, Geraint Wiggins (Queen Mary)

The FAST Blues, Steve Benford (Nottingham)

 

2:15 – 3:15 pm: FAST project demonstrations

D1. Climb! Interactive Archive (Chris Greenhalgh and Adrian Hazzard, Nottingham)

D2. rCALMA (Kevin Page, Oxford)

D3. Plunderphonics (Florian Thalmann, Queen Mary)

D4. SOFA (John Pybus and Graham Klyne, Oxford)

D5. The Grateful Dead Concert Explorer (Thomas Wilmering, Queen Mary)

D6. Moodplay (tbc) (Alo Allik, Queen Mary)

3:15 – 3:45 pm: Coffee and cakes

3:45 – 4:15 pm: Industry Talk

Music Informatics in Academia and Industry — Personal Insights
Matthias Mauch, Apple

When I started working in music informatics, industry involvement was minimal. I'm going to trace my own journey, from my PhD studies at Queen Mary, via other academic jobs, to my current work in the Music Science team at Apple Media Products. All stages have had their unique fascination, and I will share some insights into the similarities and differences along the way.

4:15 – 5:00 pm: FAST project demos (cont’d)

D7. Carolan Guitar (Chris Greenhalgh and Adrian Hazzard, Nottingham)

D8. Folk Song Network Based on Different Similarity Measures (Cornelia Metzig, Queen Mary)

D9. Jam with Jamendo (Johann Pauwels, Queen Mary)

D10. Lohengrin Digital Companion (Kevin Page, Oxford)

D11. Notes and Chords (Ken O’Hanlon, Queen Mary)

D12. Numbers into Notes (Dave de Roure, Oxford)

D13. Intelligent Music Production (David Moffat, Queen Mary)

5:15 pm: End

6:00 pm: Dinner at Pizza East, 56 Shoreditch High Street, E1 6JJ

Further information

FAST demonstrators:
https://www.semanticaudio.ac.uk/demonstrators/
FAST videos:
https://www.semanticaudio.ac.uk/vimeos/
FAST Industry Day at Abbey Road Studios:
https://www.semanticaudio.ac.uk/events/fast-industry-day/

 


 

FAST C4DM PhD student organises Sound Source Separation Workshop (HOSSS)

Written by: Delia Fano Yela 

For the second year in a row, C4DM hosted the Hands-on Sound Source Separation Workshop (HOSSS) on Saturday 11 May, and it was a great success! The workshop originated in 2018 out of students' frustration at not having enough space to brainstorm at conferences, where all the relevant experts are in the same place.

 

The idea is simple: get together and spend the day actively brainstorming and hacking on source-separation topics proposed by some of us (so no keynotes, talks or posters). Everyone is there to engage with each other, and so everyone becomes much more approachable than in a traditional conference setting. This format therefore aims to inspire early-stage researchers in the field and to promote a sense of scientific community by offering a common place for discussion.

The workshop started with an ice-breaker game followed by a quick round of introductions from all participants. Then, those who wanted to propose a topic for discussion did so briefly, explaining their idea on a whiteboard; as they did, we built up a list of topics on the side. Once everyone who wanted to had had the chance to present their idea, problem or interest to the group, we proceeded to vote.

Each participant could pick three topics they would like to work on for the rest of the day. In doing so, and after a little pruning, work groups were formed. We then divided into groups and started the hands-on work. Some topics were broader than others: one group ended up brainstorming on the nature of the singing voice, while another was full-on hacking a deep clustering model for source separation, recently added to the nussl Python library by one of the participants, Prem Seetharaman from Northwestern University.

After lunch, some people changed groups. For example, after the two "Antoines" (Liutkus and Deleforge) had a productive Gaussian-processes morning with our PhD candidate Pablo Alvarado, another of our PhD students, Changhong Wang, discovered the world of median-filtering source separation, led by the hand by one of the very best, Antoine Liutkus.
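
For readers unfamiliar with the technique, here is a minimal sketch of median-filtering source separation in the classic harmonic/percussive style (after FitzGerald, 2010); it illustrates the general idea rather than any code written at the workshop.

```python
# Median-filtering harmonic/percussive separation: harmonic partials form
# horizontal ridges in a spectrogram, percussive hits vertical ones, so
# median filtering along each axis enhances one and suppresses the other.
import numpy as np
from scipy.signal import stft, istft, medfilt2d

def hpss(x, sr, n_fft=2048, kernel=17):
    _, _, X = stft(x, fs=sr, nperseg=n_fft)
    mag = np.abs(X)
    harm = medfilt2d(mag, kernel_size=(1, kernel))  # filter across time
    perc = medfilt2d(mag, kernel_size=(kernel, 1))  # filter across frequency
    # Turn the two enhanced spectrograms into soft masks and resynthesise.
    mask_h = harm / (harm + perc + 1e-10)
    mask_p = perc / (harm + perc + 1e-10)
    _, xh = istft(X * mask_h, fs=sr, nperseg=n_fft)
    _, xp = istft(X * mask_p, fs=sr, nperseg=n_fft)
    return xh, xp
```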

In the afternoon there was a really great feeling of productivity; everyone seemed to be "in the zone". However, one can only concentrate for so long, so at some point we all gathered to discuss how far we had got in our quests and to share what we had learned. We left the discussion of future avenues for the pub, where we all ended the day celebrating with our new colleagues!

FAST at the Tate Modern

The FAST project combined forces with the EPSRC PETRAS Internet of Things project in a public event at the Tate Modern on 8–9 February. The event explored how the Internet of Things is changing our lives now, and how it may influence or disrupt our futures – at home, at work and in our environment.

Professor David De Roure (University of Oxford) closed the event with a discussion about IoT, music and creativity in his talk "The making of music: creative algorithmic interventions and the imagination of Ada Lovelace", including demos of Arduino-based and algorithmically enhanced electronic instruments developed in the FAST project.

David's talk was preceded by a piece of music with an accompanying film by Alan Chamberlain (University of Nottingham), produced specially for the event alongside the live premiere of his piece "Pen Dinas in Voice" on Friday 8 February at the Bangor Music Festival. The composition used the 'Numbers into Notes' software produced by David De Roure, employing the same algorithms as the demonstrations in his talk.
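
As a toy illustration of the general idea behind 'Numbers into Notes' (a hypothetical sketch, not the actual software): generate a mathematical sequence, reduce it modulo the length of a scale, and read off pitches.

```python
# Toy sketch: map the Fibonacci sequence onto an A-minor scale.
def fibonacci(n):
    a, b = 0, 1
    for _ in range(n):
        yield a
        a, b = b, a + b

# MIDI note numbers for one octave of A natural minor (A3 to G4).
A_MINOR = [57, 59, 60, 62, 64, 65, 67]

notes = [A_MINOR[f % len(A_MINOR)] for f in fibonacci(16)]
print(notes)  # a melodic line derived purely from arithmetic
```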

The film can be seen here: https://www.youtube.com/watch?v=Pl3KrfYc3r8

Audio is available here: https://soundcloud.com/alain_du_norde/pen-dinas-in-voice

David De Roure commented, "This was an exciting and lively event, exploring our possible futures in two days of conversations with highly engaged, intrigued and insightful visitors to Tate Modern. For me it was a great opportunity to bring together two of our research projects, FAST and PETRAS, and to revisit a conversation about machines, music and creativity that has been going on since 1843… if not before".

Alan Chamberlain added: "The FAST project has been the catalyst for a range of innovative technologies and works of art. By using technologies such as 'Numbers into Notes', people are able to engage with and understand how such technologies might be used".

A news post from the University of Nottingham can be found here – “Musical Composition and its Impact on Culture and Research”.

The event was attended by over 1,300 members of the public across the two days, with lively discussion and engagement with an exciting range of demonstrations, including Living Room of the Future, Smart Utopia, Karma Kettles, Human Sensor, Coral Love Story, The Listening Wood, Tales of the Park, Data Feeders and Move for Me Baby. Further information can be found on the Tate Modern website.

The PETRAS Internet of Things (IoT) Research Hub is a consortium of nine leading UK universities working together to explore critical issues in privacy, ethics, trust, reliability, acceptability, and security.

FAST documentation film

Music’s changing fast, FAST is changing music. Our FAST documentation film is now available to watch (click on one of the two links below):

Full film VIMEO: https://player.vimeo.com/video/307448834
Full film YOUTUBE: https://youtu.be/VF4qQRcGfl8

Filmed at various locations, including the legendary Abbey Road Studios, the short film tells the FAST story. It features Mark Sandler (FAST Principal Investigator), David de Roure (Oxford Co-Investigator), Steve Benford (Nottingham Co-Investigator) and Josh Reiss (Queen Mary Co-Investigator), along with Mark d'Inverno (FAST Advisory Board chair), pianist and composer Maria Kallionpää, and artist, composer and researcher Tracy Redhead. The background music is by Tracy Redhead (also featured on our FAST LP).

More information about the FAST Industry Day:

FAST Industry Day

FAST Press release: Music’s changing FAST, FAST is changing music

Showcasing the culmination of five years of digital music research, the FAST IMPACt project (Fusing Audio and Semantic Technologies for Intelligent Music Production and Consumption), led by Queen Mary University of London, hosted an invite-only industry day at Abbey Road Studios on 25 October.

You can read our press release in full here.

FAST project member Kevin Page chairs DLfM

Seventy international researchers attended the 5th conference on Digital Libraries for Musicology (DLfM) in Paris, France, organised by the FAST project in collaboration with the Unlocking Musicology project and the IReMus centre of the CNRS (Institut de recherche en Musicologie, Centre national de la recherche scientifique).

Dr Kevin Page, senior researcher at FAST partner the University of Oxford, served as programme and general chair for the conference, working with local chairs Dr Christophe Guillotel-Nothmann and Dr Cécile Davy-Rigaux, CNRS Directrice de recherche. This year DLfM was held at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM), a world-renowned and ground-breaking centre working at the intersection of music and technology.

The programme showcased research into the application of computational and informatics approaches to the study, analysis and organisation of digital music and music-related corpora. The papers were presented across four themed sessions on Technological Advances, Digital Studies, Recognition and Encoding, and Collections, with the proceedings forming part of the ACM ICPS series and available online with open access.

The FAST project’s MELD framework featured in a paper at the conference, co-authored by FAST team member and MELD lead developer David Weigl, alongside colleagues from the University of Oxford e-Research Centre and Faculty of Music. The paper describes how MELD was used to create an interactive exhibition on the work of composer Frederick Delius (1862–1934), using Linked Data to combine digital material from the British Library, Delius Trust, and an annotated string quartet performance recorded earlier this year in Oxford. The technical innovations drive an environment for enriched engagement with musical sources and give an insight into the creative process.
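
As a rough sketch of how Linked Data can combine such material (hypothetical URIs and a simplified model, not MELD's actual schema), a Web Annotation can tie a passage of an encoded score to a digitised document held elsewhere:

```python
# Minimal Web Annotation in RDF: the annotation's target is a region of an
# encoded score, its body a digitised letter from another collection.
# All URIs below are hypothetical examples.
from rdflib import Graph, Namespace, URIRef, RDF

OA = Namespace("http://www.w3.org/ns/oa#")

g = Graph()
g.bind("oa", OA)

anno = URIRef("http://example.org/annotations/1")                 # hypothetical
score = URIRef("http://example.org/mei/quartet.mei#measure-12")   # hypothetical
letter = URIRef("http://example.org/archive/delius-letter-1916")  # hypothetical

g.add((anno, RDF.type, OA.Annotation))
g.add((anno, OA.hasTarget, score))
g.add((anno, OA.hasBody, letter))

print(g.serialize(format="turtle"))
```

Because both the score region and the archival document are addressed by URI, annotations like this can link collections held by different institutions without copying the underlying material.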