Author Archives: admin

FAST Report on FAST Industry Day @Abbey Road Studios


Introduction by Professor Mark d’Inverno

Music’s changing fast: FAST is changing music. Showcasing the culmination of five years of digital music research, the FAST IMPACt project (Fusing Audio and Semantic Technologies for Intelligent Music Production and Consumption), led by Queen Mary University of London, hosted an invite-only industry day at Abbey Road Studios on Thursday 25 October, 2–8 pm. Presented by Professor Mark Sandler, Director of the Centre for Digital Music at Queen Mary, the event showcased to artists, journalists and industry professionals the next-generation technologies that will shape the music industry – from production to consumption.


Professor Mark Sandler introducing FAST

FAST is looking at how new technologies can positively disrupt the recorded music industry. Research from across the project was presented to the audience, with work from partners at the University of Nottingham and the University of Oxford shown alongside that from Queen Mary. The aim was that, by the end of the FAST Industry Day, attendees would come away with a sense of how AI and the Semantic Web can be coupled with signal processing to overturn conventional ways of producing and consuming music. Along the way, industry attendees were able to preview new ideas, apps and technology showcased by the FAST team.


Panel session members

One hundred and twenty attendees were treated to an afternoon and evening of talks, demonstrations, the Climb! performance, and an expert panel discussion with Jon Eaves (The Rattle), Paul Sanders (state51), Peter Langley (Origin UK), Tracy Redhead (award-winning musician, composer and interactive producer, University of Newcastle, Australia), Maria Kallionpää (composer and pianist, Hong Kong Baptist University) and Mark d’Inverno (Goldsmiths), who chaired the panel. Rivka Gottlieb, harpist and music therapist, performed pieces throughout the day based on her collaboration with PI David de Roure (Oxford e-Research Centre) and the project ‘Numbers into Notes’. Other speakers included George Fazekas, who outlined the Audio Commons Initiative; Tracy Redhead and Florian Thalmann, who presented their work on semantic player technologies; and Ben White, who spoke about the Open Music Archive project (exploring the intersection between art, music and archives).

The FAST Industry Day was opened by Lord Tim Clement-Jones (Chair of Council, Queen Mary University of London) and was compered by Professor Mark d’Inverno (Professor of Computing at Goldsmiths College, London).

Below are some highlights:

Carolan Guitar: Connecting Digital to the Physical – The Carolan Guitar tells its own story. Play the guitar, contribute to its history, scan its decorative patterns and discover its story. Carolan uses a unique visual marker technology that enables the physical instrument to link to the places it’s been, the people who’ve played it and the songs it’s sung, together with deep learning techniques to improve event detection. https://carolanguitar.com

FAST DJ – FAST DJ is a web-based automatic DJ system and plugin that can be embedded into any website. It generates transitions between any pair of successive songs and uses machine learning to adapt to the user’s taste via simple interactive decisions; a toy sketch of these two ingredients appears after this list of highlights.

Grateful Dead Concert Explorer – A web service for the exploration of recordings of Grateful Dead concerts, drawing its information from various Web sources. It demonstrates how Semantic Audio and Linked Data technologies can produce an improved user experience for browsing and exploring music collections. See Thomas Wilmering explaining more about the Grateful Dead Concert Explorer: https://vimeo.com/297974486

Jam with Jamendo – Jam with Jamendo brings music learners and unsigned artists together by recommending suitable songs as new and varied practice material. In this web app, users are presented with a list of songs based on their selection of chords. They can then play along with the chord transcriptions or use the audio as backing tracks for solos and improvisations. Using AI-generated transcriptions makes it trivial to grow the underlying music catalogue without human effort. See Johan Pauwels explaining more about Jam with Jamendo: https://vimeo.com/297981584

MusicLynx – a web platform for music discovery that collects information and reveals connections between artists from a range of online sources. The information is used to build a network that users can explore to discover new artists and how they are linked together.

The SOFA Ontological Fragment Assembler – enables the combination of musical fragments – Digital Music Objects, or DMOs – into compositions, using semantic annotations to suggest compatible choices.

Numbers into Notes – experiments in algorithmic composition and the relationship between humans, machines, algorithms and creativity. See David de Roure explaining more about the research: https://vimeo.com/297989936

rCALMA Environment for Live Music Data Science – a big-data visualisation of musical key across the Live Music Archive, using Linked Data to combine programmes with audio feature analysis. See David Weigl talking about rCALMA: https://vimeo.com/297970119

Climb! Performance Archive – Climb! is a non-linear composition for Disklavier piano and electronics. This web-based resource provides a richly indexed and navigable archive of every performance of the work, allowing audiences and performers to engage with it in new ways.
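
FAST DJ's actual transition and preference-learning algorithms are not described above; the sketch below is only a toy illustration of the two ingredients mentioned in the FAST DJ entry, an equal-power crossfade between successive tracks and a preference vector nudged by the listener's like/dislike decisions. All signals, features and parameters are invented.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def crossfade(track_a, track_b, seconds=8.0):
    """Equal-power crossfade from the end of track_a into the start of track_b."""
    n = int(SR * seconds)
    t = np.linspace(0.0, 1.0, n)
    fade_out = np.cos(t * np.pi / 2.0)
    fade_in = np.sin(t * np.pi / 2.0)
    overlap = track_a[-n:] * fade_out + track_b[:n] * fade_in
    return np.concatenate([track_a[:-n], overlap, track_b[n:]])

def update_taste(taste, track_features, liked, rate=0.1):
    """Nudge a preference vector towards (liked) or away from (disliked) a track."""
    direction = 1.0 if liked else -1.0
    return taste + rate * direction * (track_features - taste)

# Toy usage: two 30-second "tracks" of noise and an invented
# two-dimensional feature space (say, tempo and energy).
rng = np.random.default_rng(0)
a = rng.uniform(-0.1, 0.1, SR * 30)
b = rng.uniform(-0.1, 0.1, SR * 30)
mix = crossfade(a, b)
taste = update_taste(np.array([0.5, 0.5]), np.array([0.8, 0.3]), liked=True)
print(mix.shape, taste)
```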

The FAST project brings together labs from three of the UK’s top universities: Queen Mary’s Centre for Digital Music, the University of Nottingham’s Mixed Reality Lab and the University of Oxford’s e-Research Centre.

More about the FAST Industry Day:
https://www.semanticaudio.ac.uk/events/fast-industry-day

Full list of FAST demonstrators:
https://www.semanticaudio.ac.uk/demonstrators/

News item on Audio Commons demonstrators at the FAST industry day:
https://www.audiocommons.org/2018/10/23/abbey-road-industry.html

FAST’ing with Robert Thomas

Over the summer Adrian Hazzard from the Mixed Reality Lab (Nottingham) spent some time with Robert Thomas. Robert is a leading music composer and experience designer with an extensive track record in creating music that adapts in real time to the listener’s situation – like a soundtrack to your life.

Robert has collaborated with Massive Attack, Imogen Heap, Hans Zimmer, Ben Burtt, Richard King, Tom Holkenborg, Carl Craig, Air, Bookashade, Jimmy Edgar, Mel Wesson, Little Boots, Chiddy Bang, Console, Sophie Barker (Zero 7) and Kirsty Hawkshaw (Opus III, Orbital, Tiesto), to name a few. He draws on a range of techniques and approaches, from adaptive systems and algorithmic, generative and stochastic composition to procedural generation and machine learning / artificial intelligence.

An ongoing thread of Adrian’s FAST research seeks to understand how artists such as Robert approach the design, composition and realisation of such adaptive audio experiences. To study these processes at first hand, Adrian invited Robert to create a prototype adaptive audio experience.

Robert, who was supported by his colleague Franky Redente, was presented with an open design brief, and after a process of discussion and research they decided to create a locative audio ‘app’ for Nottingham City Centre, where playback of the audio responds to a listener’s movements and orientation. An interesting feature of Nottingham City Centre is the network of caves that sprawls under the city’s streets. Intrigued by this, Robert used the caves as the principal design feature for the audio walk: “There must be all kinds of stories of people’s lives who lived in or above those caves which are lost. We don’t really have that much information about the caves’ stories in detail, so imagining people’s lives in there might be interesting”. Building on this narrative, Robert wanted listeners to “hear below them into the underbelly of Nottingham”. The finished design presents a mixture of sound design elements and tense, textural music that represents the dark hidden world beneath, accompanying users as they walk around the city’s streets. One element of this audio treatment is dripping water panned in 3D that acts as an audio marker. By following the spatial placement of these drips (as sensed via the mobile phone’s compass), listeners are guided along the streets to specific locales situated above a cave. Once a cave is reached, listeners are greeted with rich arrangements of music composed by Robert, which portray these ‘lost’ stories.
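
The internals of Robert's app are not documented here, but the guiding mechanism described above can be sketched in a few lines: compare the phone's compass heading with the bearing from the listener to the target cave, and pan the drip sound towards that direction. The coordinates, the equal-power panning law and the function names below are illustrative assumptions, not details taken from the actual app.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2, in degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def stereo_gains(listener_lat, listener_lon, compass_deg, cave_lat, cave_lon):
    """Equal-power left/right gains so the drip appears to come from the cave's direction."""
    target = bearing_deg(listener_lat, listener_lon, cave_lat, cave_lon)
    # Angle of the cave relative to where the listener is facing, in -180..180 degrees.
    rel = (target - compass_deg + 180.0) % 360.0 - 180.0
    # Map the relative angle to a pan position in [-1, 1] (full left .. full right).
    pan = max(-1.0, min(1.0, rel / 90.0))
    theta = (pan + 1.0) * math.pi / 4.0        # 0 .. pi/2
    return math.cos(theta), math.sin(theta)    # (left gain, right gain)

# Example: invented coordinates near Nottingham city centre, listener facing east,
# cave slightly to the north, so the drips should lean to the left channel.
left, right = stereo_gains(52.9536, -1.1505, 90.0, 52.9541, -1.1500)
print(f"left={left:.2f} right={right:.2f}")
```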

Robert and Franky visited Nottingham for a couple of days, walking the streets and visiting the caves, where they recorded the ambient sound within them (such as the sound of dripping water) and captured impulse responses of the caves’ natural reverberation. They then used these elements to drive the composition and authoring of the completed app.


Robert and Franky recording an impulse response in a Nottingham cave

While in Nottingham, Robert also visited the Mixed Reality Lab and gave a fascinating talk about his work and the challenges of music interaction design, alongside some examples of his compositional approach using Pure Data.

If you would like to find out more about Robert Thomas and his work, follow the link here to his website – http://robertthomassound.com/


Smart contracts for fair trade of music

by Panos Kudumakis & Thomas Wilmering (Centre for Digital Music, Queen Mary University of London)

The Media Value Chain Ontology (MVCO, ISO/IEC 21000-19) facilitates IP rights tracking for fair and transparent royalty payments by capturing user roles and their permissible actions on a particular IP entity. However, the widespread adoption of interactive music services (e.g., remixing, karaoke and collaborative music creation), enabled by the Interactive Music Application Format (ISO/IEC 23000-12), raises the issue of rights monitoring when audio IP entities are reused, such as tracks, or even segments of them, in new derivative works.

The Audio Value Chain Ontology (ISO/IEC 21000-19 AMD1) addresses this issue by extending MVCO functionality for describing composite IP entities in the audio domain, whereby the components of a given IP entity can be located in time and, in the case of multi-track audio, associated with specific tracks. The introduction of an additional ‘reuse’ action enables querying and granting permissions for the reuse of existing IP entities in order to create new derivative composite IP entities.

Audio Value Chain Ontology (AVCO) conceptualisation.

Furthermore, by expressing machine-readable deontic statements for permissions, obligations and prohibitions with respect to particular users and IP entities, MVCO/AVCO smart contracts could be used in conjunction with distributed ledgers, e.g., blockchains, enabling both transparency and interoperability towards the fair trade of music.
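
As a rough illustration of the kind of machine-readable rights statement involved (the namespace, class and property names below are invented stand-ins, not terms quoted from the MVCO/AVCO standards), the sketch encodes a permission to reuse a track segment as RDF triples and queries it with SPARQL using rdflib.

```python
from rdflib import Graph, Namespace, RDF

# Illustrative namespaces: not the official MVCO/AVCO URIs.
AVCO = Namespace("http://example.org/avco#")
EX = Namespace("http://example.org/music/")

g = Graph()
g.bind("avco", AVCO)

# "Alice may reuse bars 17-24 of track 1 in a new derivative work."
g.add((EX.permission1, RDF.type, AVCO.Permission))
g.add((EX.permission1, AVCO.issuedTo, EX.alice))
g.add((EX.permission1, AVCO.permitsAction, EX.reuse1))
g.add((EX.reuse1, RDF.type, AVCO.Reuse))
g.add((EX.reuse1, AVCO.actedOver, EX.track1_bars17to24))

# A smart contract or rights-monitoring service could then answer:
# "which IP entities is Alice currently permitted to reuse?"
q = """
PREFIX avco: <http://example.org/avco#>
PREFIX ex: <http://example.org/music/>
SELECT ?entity WHERE {
  ?perm a avco:Permission ;
        avco:issuedTo ex:alice ;
        avco:permitsAction ?act .
  ?act a avco:Reuse ;
       avco:actedOver ?entity .
}
"""
for row in g.query(q):
    print(row.entity)
```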

For further info please visit MPEG Developments.

Resources

  • ISO/IEC Information Technology – Multimedia Framework (MPEG-21) – Part 19: Media Value Chain Ontology AMENDMENT 1: Extensions on Time-Segments & Multi-Track Audio (ISO/IEC 21000-19/AMD 1:2018)
  • ISO/IEC Information Technology – Multimedia Framework (MPEG-21) – Part 8: Reference Software AMENDMENT 4: Media Value Chain Ontology Extensions on Time-Segments & Multi-Track Audio (ISO/IEC 21000-8/AMD 4:2018)


High-tech spin-out company FXive is formed

Research in the FAST project and several other projects has demonstrated a new way to perform sound design. High-quality, artistic sound effects can be achieved by the use of lightweight and versatile sound synthesis models. Such models do not rely on stored samples, and provide a rich range of sounds that can be shaped at the point of creation.
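
As a toy illustration of what sample-free, parametric sound design means in practice (this is not one of FXive's actual models), the sketch below synthesises a few seconds of wind-like sound by shaping filtered noise with a slowly varying gust envelope; every parameter value is an invented example.

```python
import numpy as np
import wave

SR = 44100          # sample rate (Hz)
DURATION = 4.0      # seconds of audio to generate
N = int(SR * DURATION)

rng = np.random.default_rng(0)
noise = rng.uniform(-1.0, 1.0, N)

def lowpass(x, a):
    """One-pole low-pass filter: y[i] = a * x[i] + (1 - a) * y[i - 1]."""
    y = np.empty_like(x)
    acc = 0.0
    for i, v in enumerate(x):
        acc = a * v + (1.0 - a) * acc
        y[i] = acc
    return y

# Darken the noise, then shape it with a slowly varying "gust" envelope.
wind = lowpass(noise, 0.05)
t = np.arange(N) / SR
gusts = 0.5 + 0.5 * np.sin(2 * np.pi * 0.3 * t + np.sin(2 * np.pi * 0.07 * t))
signal = wind * gusts
signal /= np.max(np.abs(signal))

with wave.open("wind.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)       # 16-bit samples
    f.setframerate(SR)
    f.writeframes((signal * 32767).astype(np.int16).tobytes())
```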

This system is now live at https://fxive.com (requires the Chrome browser), a web platform for sound effect synthesis. On July 25th, Prof. Josh Reiss and collaborators co-founded the company FXive to commercialise the technology. FXive is now seeking investment and working towards a full commercial launch. FXive will be demonstrated at the FAST Industry Day.


Centre releases Music Encoding and Linked Data software

Oxford FAST team members, researchers at the University of Oxford’s e-Research Centre, have announced version 1.0 of their Music Encoding and Linked Data (MELD) framework, a flexible software platform for research that combines digital representations of music – such as audio and notation – with contextual and interpretive knowledge in the Semantic Web.

The release of MELD represents a significant milestone in the Centre’s activities in the £5m EPSRC-funded Fusing Audio and Semantic Technologies (FAST) project, a collaboration with Queen Mary University of London and the University of Nottingham.

“MELD brings an innovative new model for combining multimedia music resources, moving beyond milliseconds and simple labels to capture meaningful associations derived from music theory,” explains Senior Researcher Kevin Page, who leads the FAST Music Flows activity and MELD research within the Centre. Dr Page, a member of the group which produced the W3C Linked Data Platform (LDP) specification, adds: “by extending standards including LDP, Web Annotations, and the Music Encoding Initiative (MEI), MELD provides a flexible, scalable core while simultaneously enabling the detailed application-specific customisations researchers and industry find valuable”.
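
To give a flavour of the kind of linking these standards enable (this is a generic W3C Web Annotation in that spirit, not an example of MELD's actual data), the hypothetical annotation below attaches an analytical note to a few measures of MEI-encoded notation and to the corresponding region of a recording; all URIs and values are invented.

```python
import json

# A hypothetical Web Annotation linking a passage of MEI notation
# to the matching stretch of an audio recording. All URIs are invented.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "motivation": "describing",
    "body": {
        "type": "TextualBody",
        "value": "Second theme, restated in the subdominant."
    },
    "target": [
        {   # the notation: specific measures within an MEI file
            "source": "https://example.org/scores/sonata.mei",
            "selector": {
                "type": "FragmentSelector",
                "value": "measureRange=24-31"
            }
        },
        {   # the audio: the corresponding time region of a recording
            "source": "https://example.org/recordings/sonata.wav",
            "selector": {
                "type": "FragmentSelector",
                "conformsTo": "http://www.w3.org/TR/media-frags/",
                "value": "t=61.2,82.9"
            }
        }
    ]
}

print(json.dumps(annotation, indent=2))
```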

Dr David Weigl, principal developer of the MELD framework, recounts that “what’s been fascinating and rewarding is the variety of research we’ve worked on. We’ve effectively created a new instrument to perform a contemporary piece of music, analysed how musicians rehearse and perform, and are now building an interface to explore historical catalogues in the British Library. It really highlights the adaptability of our approach”.

“Climb!”, a non-linear composition for Disklavier piano and electronics by Maria Kallionpää, is an example of one such application. The performance environment for Climb! was built by Nottingham’s Mixed Reality Lab in collaboration with the Oxford FAST team, and combines their Muzicodes software with MELD. Climb! will receive its next performance during our FAST Industry Day at the world-famous Abbey Road Studios, where Oxford researchers will be on hand to demonstrate and explain their innovations to artists, journalists and industry professionals.


In collaboration with colleagues from the Faculty of Music, MELD will also play a supporting role at the forthcoming Digital Delius: Unlocking Digitised Music Manuscripts event at the British Library. It is the technical foundation for an experimental digital exhibition presenting scores and sketches, early recordings, photographs, and concert programmes showcasing the music of British-born composer Frederick Delius (1862–1934). The materials are complemented by expert commentary and an interactive MELD application which situates the role of the items within the creative process.

This was also the theme of a workshop earlier in the year, when Centre researcher David Lewis used MELD to record the performance adaptations made by a student ensemble under the tutelage of the Villiers Quartet.

Professor David De Roure, Oxford FAST Investigator, summarises: “In MELD, we’ve created an implementation of Digital Music Objects, or DMOs. This next generation technology realises the FAST research vision for end-to-end semantics across all stages of the music lifecycle, bringing fantastic creative opportunities to artists, the music industry and consumers.”

MELD is open source and available on GitHub.

News Source: http://www.oerc.ox.ac.uk/news/centre-releases-music-encoding-software (13 Sept 2018)

FAST participates in BBC R&D ‘Sounds Amazing’ event

On Wednesday 2 May, FAST team members from the Centre for Digital Music at Queen Mary, Alo Allik and Josh Reiss, presented their latest research at the Sounds Amazing 2018 event at the BBC in London. The event was attended by researchers and professionals working in the field of spatial audio.


Alo Allik showing MusicLynx

Alo Allik gave a demonstration of MusicLynx, a web application for music discovery that enables users to explore an artist similarity graph constructed by linking together various open public data sources. Josh Reiss gave a demonstration of FXive, a new way to perform sound design. High quality, artistic sound effects can be achieved by the use of lightweight and versatile sound synthesis models. Such models do not rely on stored samples, and provide a rich range of sounds that can be shaped at the point of creation.


Josh Reiss showing FXive

Sounds Amazing was presented by the BBC Academy Fusion project, BBC R&D and the S3A partnership. The event built on the successful ‘Sound: Now and Next 2015’ conference from BBC R&D, and consisted of a day of talks, panels and a tech expo aimed at inspiring and informing those from production and engineering about the latest developments in the amazing world of audio.

The detailed programme for the event is available below:

Morning: Award Winning Audio Production, Commissioning and Top Tech Tips.
Matthew Postgate, BBC Chief Technology and Product Officer, who leads the BBC’s Design & Engineering division, will open the event. L.J. Rich – technology presenter (BBC Click), sound designer, inventor and NASA Datanaut – will be the host for the day.

Sounds Dramatic explores how great audio can add drama to your content, whether fact or fiction, radio, TV or emerging VR. Presentations include multi-award-winning podcast producer James Robinson on the heart-stopping drama ‘Tracks’, and the outstanding sound team of Kate Hopkins and Graham Wilde, who were behind ‘Blue Planet II’ and ‘Planet Earth II’ for BBC One.

A Sound Commission looks at what ticks the Commissioners’ boxes as Ben Chapman – Head of Digital, BBC Radio & Music; Mohit Bakaya – Commissioning Editor, Factual, Radio 4 and Zillah Watson – Commissioning Editor, Virtual Reality, BBC VR Hub share their favourite projects and preferences.

Tips On Top Tech! Ali Shah – Head of Emerging Technology & Strategic Direction at BBC and Chris Pike – Lead Audio R&D Engineer at BBC provide a rapid fire guide to the latest technology to help you save money and sound great!

12:00-13:30 Networking Lunch and Tech Expo (Delivered by the S3A project)
An opportunity to immerse yourself in futuristic sound experiences, connect with cutting edge technology experts from universities and industry – and explore exciting new partnerships. Lunch included for ticket holders, held in Media Café.

 Afternoon: Live, Immersive and Interactive – Innovative Live Performance and Immersive Production Techniques. 

Kicking the Mic! – Fusing live tap dance, looping and fully sound reactive LED dress – a short multi-sensory show by groundbreaking artist Laura Kriefman.

 Sound Bites – Immersive Masterclass – Catherine Robinson – Audio Supervisor, BBC explores the differences between 3D, binaural and surround sound and talks about her top production tips, setting up her own Binaural Studio in Wales and the developments happening there in radio, digital, live and VR.

Live and Kicking – Tom Parnell – Senior Audio Supervisor, BBC R&D joins Dr. Paul Ferguson – Associate Professor of Audio Engineering, Edinburgh Napier University and Guto Thomas – Workflow and Mobile Technology Specialist at BBC Cymru Wales for a session exploring innovation in live recording.

Sounds Academic – Professor Trevor Cox – Professor of Acoustic Engineering, University of Salford presents his experiment into object based audio by sharing a radio drama in a novel way, using mobile phones.

 Sound Bites – Breaking The Sound Barrier!  Composer and Producer Matthew Herbert – BBC Radiophonic Workshop provides us with a glimpse into the razor sharp cutting edge of audio technology and his experience of breaking new territory.

Talk To Me – Interactive Sound – Mukul Devichand – Editor, Voice at BBC introduces the world of interactive sound and its potential for broadcasters. Henry Cooke – Senior Producer / Creative Technologist in BBC R&D gives us the producer’s view of ‘The Inspection Chamber’, an original interactive audio drama. Mark Savage – Music Reporter, BBC, describes his experience of living for a month with an Apple HomePod, Amazon Echo, Google Home…. putting 7 speakers to the test and talking to himself.

FAST show & tell at Ideas Unwrapped, Queen Mary, 26 April


FAST Live demonstrations at the QM Ideas Unwrapped yearly event, 26 April, Queen Mary


Grateful Dead Live demonstrator by Thomas Wilmering, C4DM, Queen Mary

FAST team members successfully presented some of their project’s most exciting research during the yearly QM event ‘Ideas Unwrapped’, which took place on Thursday 26 April. The FAST event was a preparation for the FAST Industry Day at Abbey Road Studios on 25 October.

The two-hour session was attended by staff and students from Queen Mary in a relaxed and enjoyable atmosphere. Photos from the FAST event can be viewed here. The session was preceded by a talk entitled ‘FAST forward’ by Prof. Mark Sandler introducing the project. An audio recording of the talk can be accessed here.

The following live demonstrations were on show:

1) FXive: A Web Platform for Procedural Sound Synthesis
Parham Bahadoran and Adan Benito (Centre for Digital Music, EECS)

FXive demonstrates a new way to perform sound design. High-quality, artistic sound effects can be achieved by the use of lightweight and versatile sound synthesis models. Such models do not rely on stored samples, and provide a rich range of sounds that can be shaped at the point of creation.

2) Signal Processing Methods for Source Separation in Music Production
Delia Fano Yala (Centre for Digital Music, EECS)

3) Audio Commons: An Ecosystem for bringing Creative Commons content to the creative industries
George Fazekas (Centre for Digital Music, EECS)

Audio Commons addresses barriers to using Creative Commons content in the creative industries, such as uncertain licensing, insufficient metadata and variation in quality. Two demonstrators, developed for music production and game sound design use cases, will be shown.

AudioTexture is a plugin prototype for sound texture synthesis developed by AudioGaming.

SampleSurfer provides an audio search engine that integrates instant listening capabilities, editing tools, and transparent Creative Commons (CC) licensing processes.

4) MusicLynx
Rishi Shukla (Centre for Digital Music, EECS)

MusicLynx is a web platform for music discovery that collects information and reveals connections between artists from a range of online sources. The information is used to build a network that users can explore to discover new artists and how they are linked together.

5) Fast DJ
Florian Thalmann (Centre for Digital Music, EECS)

A minimal DJ web app that analyzes music files, makes automatic transitions, and learns from the user’s taste.

6) Grateful Dead Live
Thomas Wilmering (Centre for Digital Music, EECS)

A website for the exploration of recordings of Grateful Dead concerts, drawing its information from various sources on the web.

7) Numbers-into-Notes Semantic Remixer
John Pybus (Oxford e-Research Centre, University of Oxford)

8) The PRiSM audience perception app
Mat Willcoxson (Oxford e-Research Centre, University of Oxford)

A customisable app developed to obtain audience feedback during live performances, which has been used in live public experiments in Manchester and Oxford to investigate human perception of musical features. As well as supporting research, these have acted as public engagement events to engage audiences with music, maths, and particular composers.

(The PRiSM app has been developed in collaboration with colleagues in Oxford and the Royal Northern College of Music, and relates to other apps in FAST such as Mood Conductor.)

9) The Climb! performance and score archive – MELD and Muzicodes
Kevin Page (Oxford e-Research Centre, Oxford University)  

“Climb!” is a non-linear musical work for Disklavier and electronics in which the pianist’s progression through the piece is not predetermined, but dynamically chosen according to scored challenges and choices. The challenges are implemented using two FAST technologies, Muzicodes and MELD (Music Encoding and Linked Data), which are also used to create an interactive archive through which recorded performances of Climb! can be explored (a toy sketch of the pattern-triggering idea behind Muzicodes appears after this list).

10) 6-Channel Guitar Dataset
Johan Pauwels (Centre for Digital Music, EECS)

A demonstration of a data collection procedure, as a prerequisite for future research.
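
The Muzicodes system itself is not detailed above, so the snippet below is only a toy illustration of the pattern-triggering idea mentioned in the Climb! entry: short note patterns played by the pianist act as codes that select the next section of the piece. The motifs, section names and matching logic are invented for illustration.

```python
# A toy illustration of Muzicodes-style triggering (not the actual
# Muzicodes implementation): certain short note patterns played by the
# pianist act as "codes" that choose the next section of the piece.

CODES = {
    (60, 64, 67, 72): "ascent_a",   # C-E-G-C motif -> one route up the mountain
    (72, 67, 64, 60): "descent_b",  # the inverted motif -> an alternative route
}

def watch(note_stream):
    """Yield a section name whenever a code pattern appears in the played notes."""
    window = []
    max_len = max(len(p) for p in CODES)
    for note in note_stream:
        window.append(note)
        window = window[-max_len:]
        for pattern, section in CODES.items():
            if tuple(window[-len(pattern):]) == pattern:
                yield section

# Invented stream of MIDI note numbers containing both motifs.
played = [62, 60, 64, 67, 72, 65, 72, 67, 64, 60]
for section in watch(played):
    print("trigger:", section)
```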