FAST in conversation with Meinard Müller, International Audio Laboratories Erlangen

In August 2016, FAST interviewed Professor Meinard Müller, one of the partners on the FAST IMPACt project.


1. Could you please introduce yourself?

After studying mathematics and theoretical computer science at the University of Bonn, I moved to more applied research areas such as multimedia retrieval and signal processing. In particular, I have worked in audio processing, music information retrieval, and human motion analysis. These areas allow me to combine technical aspects from computer science and engineering with music – a beautiful and interdisciplinary domain. Since 2012, I have held a professorship at the International Audio Laboratories Erlangen (AudioLabs), a joint institution of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) and the Fraunhofer Institute for Integrated Circuits IIS. At the AudioLabs, I chair the group on Semantic Audio Processing, with a focus on music processing and music information retrieval.

2. What is your role/work within the project?

Within the FAST project, I see my main role as contributing expertise in music information retrieval and semantic audio processing. For example, in collaboration with researchers from the Centre for Digital Music (C4DM), we have developed methods for music content analysis (e.g. structure analysis, source separation, vibrato detection). We have also been working on automated procedures for linking metadata (such as symbolic score or note information) with audio content. As we are conducting joint research, another important role is to support the exchange of young researchers between the project partners. In recent years we have sent students and post-docs from the AudioLabs to the Centre for Digital Music, and vice versa.
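[Editor's note: such linking of a score to an audio recording is typically cast as an alignment problem over feature sequences. Purely as an illustration – this is not the project's actual code – the following C++ sketch computes a classic dynamic time warping (DTW) cost between two sequences of chroma-like feature vectors; the feature extraction itself is assumed to happen elsewhere.]

    #include <algorithm>
    #include <cmath>
    #include <iostream>
    #include <limits>
    #include <vector>

    using Feature = std::vector<double>;  // e.g. a 12-dimensional chroma vector

    // Cosine distance, a common local cost for chroma features.
    double localCost(const Feature& a, const Feature& b) {
        double dot = 0, na = 0, nb = 0;
        for (size_t k = 0; k < a.size(); ++k) {
            dot += a[k] * b[k];
            na += a[k] * a[k];
            nb += b[k] * b[k];
        }
        if (na == 0 || nb == 0) return 1.0;
        return 1.0 - dot / (std::sqrt(na) * std::sqrt(nb));
    }

    // Classic DTW with step sizes (1,0), (0,1), (1,1); returns the total
    // alignment cost. Backtracking through D would yield the alignment path
    // that links score positions to audio frames.
    double dtwCost(const std::vector<Feature>& X, const std::vector<Feature>& Y) {
        const size_t N = X.size(), M = Y.size();
        const double INF = std::numeric_limits<double>::infinity();
        std::vector<std::vector<double>> D(N + 1, std::vector<double>(M + 1, INF));
        D[0][0] = 0.0;
        for (size_t i = 1; i <= N; ++i)
            for (size_t j = 1; j <= M; ++j)
                D[i][j] = localCost(X[i - 1], Y[j - 1]) +
                          std::min({D[i - 1][j], D[i][j - 1], D[i - 1][j - 1]});
        return D[N][M];
    }

    int main() {
        // Toy data: a three-"note" score and a recording in which the first
        // note is held for two frames. The expected DTW cost is 0.
        std::vector<Feature> score = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
        std::vector<Feature> audio = {{1, 0, 0}, {1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
        std::cout << "DTW cost: " << dtwCost(score, audio) << "\n";
    }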

3. What, in your opinion, makes this research project different to other research projects in the same discipline?

One main goal of the FAST project is to consider the entire chain from music production to music consumption. For example, to support automated methods for content analysis, one may exploit intermediate audio sources such as multitrack recordings or additional metadata that is generated in the production cycle. This often leads to substantial improvements in the results achievable by automated analysis methods. Considering the entire music processing pipeline makes the FAST project very special within our discipline.

4. What are the research questions you find most inspiring within your area of study / field of work?

Personally, I am very interested in processing audio and music signals with regard to semantically or musically relevant patterns. Such patterns may relate to the rhythm, the tempo, or the beat of music. Or one may aim at finding and understanding harmonic or melodic patterns, certain motives, themes, or loops. Other patterns may relate to a certain timbre, instrumentation, or playing style (involving, for example, vibrato or certain ornaments). Obviously, music is extremely versatile and rich. As a result, musical objects (for example, two music recordings), although similar from a structural or semantic point of view, may reveal significant differences. Understanding these differences and identifying semantic relations despite these differences by means of automated methods are what I find inspiring and challenging research issues. These issues can be studied within music processing, but their relevance goes far beyond the music scenario.

5. What can academic research in music bring to society?

Music is a vital part of nearly every person’s life on this planet. Furthermore, musical creations and performances are amongst the most complex cultural artifacts we have as a society. Academic research in music can help us to preserve and to make our musical heritage more accessible. In particular, due to the digital revolution in music distribution and storage, we need the help of automated methods to manage, browse, and understand musical content in all its different facets.

6. Please tell us why you find it valuable, exciting, or inspiring to do academic research related to music.

As mentioned above, music is an outstanding example for studying general principles that apply to a wide range of multimedia data. First, music is a content type with many different representations, including audio recordings, symbolic scores, video material as provided by YouTube, and vast amounts of music-related metadata. Furthermore, music is rich in content and form, comprising different genres and styles – from simple, unaccompanied folk songs, to popular and jazz music, to symphonies for full orchestra. There are many different musical aspects to consider, such as rhythm, melody, harmony, and timbre – to name just a few. And finally, there is the emotional dimension of music. All these different aspects make the processing of music-related data exciting.

7. What are you working on at the moment?

My recent research interests include music processing, music information retrieval, and audio signal processing. In recent years, I have been working on various questions related to computer-assisted audio segmentation, structure analysis, music analysis, and audio source separation. In my research, I am interested in developing general strategies that exploit additional information such as sheet music or genre-specific knowledge. We develop and test the relevance of our strategies within particular case studies in collaboration with music experts. For example, at the moment we are involved in projects that deal with the harmonic analysis of Wagner's operas, the retrieval of jazz solos, the decomposition of electronic dance music, and the separation of Georgian chant music. By considering different application scenarios, we study how general signal processing and pattern matching methods can be adapted to cope with the wide range of signal characteristics encountered in music.

8. Which area of your practice do you enjoy the most?

Besides my work in music processing, I very much enjoy the combination of doing research and teaching. I am convinced that music processing serves as a beautiful and instructive application scenario for teaching general concepts of data representations and algorithms. In my experience as a lecturer in computer science and engineering, starting a lecture with music processing applications – in particular, playing music to the students – opens them up and raises their interest. This makes it much easier to get the students engaged with the mathematical theory and technical details. Mixing theory and practice by immediately applying algorithms to concrete music processing tasks helps to develop the necessary intuition behind the abstract concepts and awakens the students' fascination for the topic. My enthusiasm for research and teaching has also resulted in a recent textbook titled "Fundamentals of Music Processing" (Springer, 2015, www.music-processing.de), which reflects some of my research interests.

9. What is it that inspires you?

Because of the diversity and richness of music, music processing and music information retrieval are interdisciplinary research areas related to various disciplines, including signal processing, information retrieval, machine learning, multimedia engineering, library science, musicology, and the digital humanities. Bringing together researchers and students from a multitude of different fields is what makes our community so special. Working together with colleagues and students who love what they do (in particular, we all love the data we are dealing with) is what inspires me.

Contact Details:
Prof. Dr. Meinard Müller
Lehrstuhl für Semantische Audiosignalverarbeitung
International Audio Laboratories Erlangen
Friedrich-Alexander Universität Erlangen-Nürnberg
Am Wolfsmantel 33
91058 Erlangen, Germany
Email: meinard.mueller@audiolabs-erlangen.de

Reference:
Müller, Meinard: Fundamentals of Music Processing: Audio, Analysis, Algorithms, Applications. Springer, 2015. 483 pages, 249 illustrations (30 in color), hardcover. ISBN 978-3-319-21944-8. www.music-processing.de

FAST participates in performance and electronic music workshop at STEIM

In August, Dr Alan Chamberlain (University of Nottingham) of the FAST project attended and helped document a workshop that supported digital instrument designers in using the Bela platform, "a new embedded audio / sensor platform…which provides sub-millisecond latency between action and sound, and which replaces the need for a laptop and external microcontroller boards such as Arduino to create digital musical instruments." The instrument designers were able to create new instruments and to design and develop better ways to interact with their existing systems. The Bela platform was designed by Dr Andrew McPherson (QMUL) and was funded via Kickstarter; see http://bela.io.
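[Editor's note: to give a flavour of what workshop participants write, a Bela program is typically written in C++ against the Bela API, implementing callbacks that run directly on the board. The minimal sketch below, modelled on Bela's standard sine-tone example, is illustrative only; the gain and frequency values are arbitrary.]

    #include <Bela.h>
    #include <cmath>

    float gPhase = 0.0f;
    float gFrequency = 440.0f;  // illustrative pitch in Hz

    bool setup(BelaContext *context, void *userData) {
        return true;  // nothing to initialise in this minimal example
    }

    // render() is called repeatedly with small buffers of audio frames,
    // which is what gives Bela its very low action-to-sound latency.
    void render(BelaContext *context, void *userData) {
        for (unsigned int n = 0; n < context->audioFrames; n++) {
            float out = 0.8f * sinf(gPhase);
            gPhase += 2.0f * (float)M_PI * gFrequency / context->audioSampleRate;
            if (gPhase > 2.0f * (float)M_PI)
                gPhase -= 2.0f * (float)M_PI;
            for (unsigned int ch = 0; ch < context->audioOutChannels; ch++)
                audioWrite(context, n, ch, out);
        }
    }

    void cleanup(BelaContext *context, void *userData) {}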

Dr Chamberlain said, "Andrew asked if I was interested in studying the way that communities come together and use technologies such as the Bela platform, and I thought that this was a fantastic opportunity. Andrew and I had previously worked together on a series of workshops that enabled different groups of people to engage with the DBox (a musical instrument), and the research that we carried out was published [1] at ACM Designing Interactive Systems last year."


The workshop was held at STEIM (the STudio for Electro-Instrumental Music) in Amsterdam, a leading centre for performance and electronic music, which supports an international community of musicians, artists, performers and researchers.
http://steim.org/event/bela-workshop-call/

The workshop ran for three days, with delegates attending from across Europe. The workshop outline states: "A major goal of the workshop is to make the new instrument designs sustainable by thoroughly documenting the process of building and using the instrument. In addition to the new artefacts created in the workshop, we hope (where the designer agrees) to support the release of documentation that will allow others in the community to replicate and modify the instruments. In addition to technical help, the organiser team will help the participants document their efforts as they go along, and they will record short interviews as part of a research study on digital musical instrument sustainability."

Dr Chamberlain said, "It was a great experience to see people coming together to use these tools and it was evident that what Andrew had developed was a real benefit to the workshop attendees. Projects such as FAST enable and support researchers to work to their full potential, and work with others to help deliver tools that can have real impact for creative communities."

[1] Andrew McPherson, Alan Chamberlain, Adrian Hazzard, Sean McGrath and Steve Benford (2016). "Designing for Exploratory Play with a Hackable Digital Musical Instrument". In Proceedings of the ACM Conference on Designing Interactive Systems (DIS '16), June 4-8, 2016, Brisbane, Australia. ACM Press, pp. 1233-1245. DOI: http://dx.doi.org/10.1145/2901790.2901831

FAST presents Ada Lovelace-inspired project at MobileHCI ’16

Professor David De Roure and Pip Willcox, two Oxford members of the FAST IMPACt team, presented an Ada Lovelace-inspired paper, 'Numbers in Places: Creative Interventions in Musical Space & Time', at the Audio in Place workshop at ACM MobileHCI 2016 in Italy last September.

The aim of the Audio in Place workshop was “to explore the possibilities, issues, challenges and application of methods for understanding, creating meaning and using audio content, for users of mobile devices”. The workshop was part of MobileHCI 2016, the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services, which was held in Tuscany from 7-9 September 2016.

Now Professor De Roure and Pip Willcox (Head of the Centre for Digital Scholarship, Bodleian Libraries) are investigating what Lovelace might do today, with an 'orchestra' of microcontrollers in place of the Analytical Engine.

Arduinos (programmable microcontroller boards) designed in the Oxford e-Research Centre are used to replicate the Numbers into Notes web application as a small standalone 'music engine'. These engines can then be controlled by participants, who use infrared remote controls and proximity sensors to select and map a subset of notes to individual instruments; a sketch of the idea follows.
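[Editor's note: as a rough illustration of that idea – not the Centre's actual firmware – the following Arduino-style C++ sketch generates a Fibonacci sequence reduced modulo 24, in the spirit of Numbers into Notes, and sounds only those notes whose pitch class falls within a subset selected by a proximity reading. The pin assignments, the modulus, and the sensor mapping are all assumptions.]

    // Hypothetical pin assignments for this illustration.
    const int SPEAKER_PIN = 8;   // piezo or small speaker
    const int SENSOR_PIN = A0;   // analogue proximity sensor

    const int MOD = 24;          // two octaves of semitones

    unsigned int a = 0, b = 1;   // Fibonacci state, kept modulo MOD

    void setup() {
      pinMode(SPEAKER_PIN, OUTPUT);
    }

    void loop() {
      // Next Fibonacci number, reduced to a semitone offset in [0, MOD).
      unsigned int next = (a + b) % MOD;
      a = b;
      b = next;

      // Map the proximity reading to a pitch-class threshold: the closer
      // the hand, the larger the subset of notes that is let through.
      int threshold = map(analogRead(SENSOR_PIN), 0, 1023, 0, 12);
      if ((int)(next % 12) < threshold) {
        // Equal-tempered frequency relative to A3 (220 Hz).
        float freq = 220.0f * pow(2.0f, next / 12.0f);
        tone(SPEAKER_PIN, (unsigned int)freq, 180);  // 180 ms note
      }
      delay(250);  // simple fixed tempo
    }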


FAST represented at the Audio Mostly 2016 conference

by Sean McGrath (The University of Nottingham)

This week, FAST was well represented at Audio Mostly, a conference on interaction with sound held in cooperation with ACM. In its tenth anniversary year, the conference returned to Sweden, hosted in the beautiful city of Norrköping at the iconic Visualization Center. It ran over three days, with talks on all manner of subjects, from walking and playing to producing and consuming sound, in a wide range of settings.

FAST members presented four pieces of work, covering a range of audio-related topics from production to consumption. The work explored the role of dynamic music, tools for performance and composition, and the role of social media in production. Paper titles were as follows:

Creating, Visualizing, and Analyzing Dynamic Music Objects in the Browser with the Dymo Designer, Florian Thalmann, György Fazekas, Geraint A. Wiggins, Mark B. Sandler (Centre for Digital Music, Queen Mary University of London)

^muzicode$: Composing and Performing Musical Codes, Chris Greenhalgh, Steve Benford, Adrian Hazzard (The Mixed Reality Lab, The University of Nottingham)

Making Music Together: An Exploration of Amateur and Pro-Am Grime Music Production, Sean McGrath, Alan Chamberlain, Steve Benford (The Mixed Reality Lab, The University of Nottingham)

The Grime Scene: Social Media, Music, Creation and Consumption, Sean McGrath, Alan Chamberlain, Steve Benford (The Mixed Reality Lab, The University of Nottingham)

It was a pleasure to visit such a beautiful city. We would like to take this opportunity to thank those involved in organising the conference. The presentations were informative and the opportunity to network and discuss ongoing work in the area was wonderful.

FAST participants attend Sonar+D 2016 festival

From 16-18 June 2016, the Centre for Digital Music (QMUL), including members of the FAST project, presented a public exhibition of its research at the Sonar+D festival, a high-profile annual event in Barcelona which caters to musicians, the music technology industry, and members of the general public. C4DM was chosen by competitive application for a display booth (roughly 4m x 4m) on the exhibition floor, in a prime location near the entrance to the facility. Twelve C4DM researchers, including PhD students, postdocs, and early- and mid-career academics, attended to showcase their work.

The event was attended by thousands of people, mainly adults but also occasionally children. The booth had many visitors from both large and small businesses, including several members of the music technology company Focusrite. Several musicians who were performing and speaking at Sonar also visited the booth, including the well-known composer Brian Eno, who took an interest in several of the research projects. The event raised the public profile of C4DM and the individual projects within it.

The projects shown directly related to FAST were:

  • Bela, an open-source platform for ultra-low-latency audio and sensor processing, which launched on Kickstarter in 2016 and attracted significant interest from Sonar attendees;
  • Moodplay, a mobile phone-based system that allows users to collectively control music and lighting effects to express desired emotions;
  • MixRights, a demo of how content reuse is enabled by emerging MPEG standards, such as IM AF format for interactive music apps and MVCO ontology for IP rights tracking, driving a shift of power in the music value chain.

Other C4DM projects shown were:

  • Augmented Violin, a sensor-based extension of the traditional violin to give students constructive feedback on their playing;
  • Tape.pm, an interactive object which explores novel ways to record and share an improvisation that is created on a musical instrument;
  • Aural Character of Places, an interactive online demo of soundwalks conducted around London;
  • TouchKeys, a transformation of the piano-style keyboard into an expressive multi-touch control surface, launched on Kickstarter in 2013 and spun out into a company in 2016;
  • Collidoscope, a collaborative audio-visual musical instrument which was a viral hit online with over 10M views;
  • RTSFX (Real-Time Sound Effects), an online library of real-time synthesised (rather than sampled) sound effects for a variety of different objects and environmental sounds.

The Sonar+D exhibition generated substantial publicity for C4DM and the FAST project, with thousands of people attending the booth. The Sonar+D organisation also featured C4DM in its online media and videos, and we were interviewed for a broadcast on Spanish television. Finally, C4DM and FAST attendees at Sonar+D also had the opportunity to see other booths and talks at the event, generating new ideas and connections for future projects.