FAST report from the Sound Talking workshop, 3 November 2017, London Science Museum

On Friday 3 November, Dr Brecht De Man (Centre for Digital Music, Queen Mary University of London) and Dr Melissa Dickson (Diseases of Modern Life, University of Oxford) organised a one-day workshop at the London Science Museum on the topic of language describing sound, and sound emulating language. Titled ‘Sound Talking’, it brought together a diverse lineup of speakers around the common theme of sonic semantics. And diverse it truly was: the programme featured a neuroscientist, a historian, an acoustician, and a Grammy-winning sound engineer, among others.

The event was born from a friendship between two academics who had long assumed their work could not be more different, with music technology and the history of Victorian literature as their respective fields. Upon learning that their topics both concerned sound-related language, they set out to find more researchers from maximally different disciplines and make it a day of engaging talks.

After hosting Dr Dickson as a resident researcher earlier this year, the Science Museum generously hosted the event as well, providing a very appropriate and ‘neutral’ central London venue. The event was further supported by the Diseases of Modern Life project, funded by the European Research Council, and by the Centre for Digital Music at Queen Mary University of London.

The programme featured (in order of appearance):

  • Maria Chait, Professor of auditory cognitive neuroscience at UCL, on the auditory system as the brain’s early warning system
  • Jonathan Andrews, Reader in the history of psychiatry at Newcastle University, on the soundscape of the Bethlehem Hospital for Lunatics (‘Bedlam’)
  • Melissa Dickson, postdoctoral researcher in Victorian literature at University of Oxford, on the invention of the stethoscope and the development of an associated vocabulary
  • Mariana Lopez, Lecturer in sound production and post production at University of York, on making film accessible for visually impaired audiences through sound design
  • David M. Howard, Professor of Electronic Engineering at Royal Holloway, University of London, on the sound of voice and the voice of sound
  • Brecht De Man, postdoctoral researcher in audio engineering at Queen Mary University of London, on defining the language of music production
  • Mandy Parnell, mastering engineer at Black Saloon Studios, on the various languages of artistic direction
  • Trevor Cox, Professor of acoustic engineering at University of Salford, on categorisation of everyday sounds

In addition to this stellar speaker lineup, Aleks Kolkowski (Recording Angels) exhibited an array of historic sound-making objects, including tuning forks, listening tubes, and a monochord, and gave a live demonstration of recording onto a wax cylinder. The display was very fitting: the workshop took place, after all, in a museum where Dr Kolkowski has held a research associateship.

The full programme can be found on the event’s web page. Video proceedings of the event are forthcoming.


FAST C4DM team wins best paper at the ISMIR Conference, 23 – 27 October

Members of the FAST IMPACt project team from C4DM, Queen Mary University of London, and the Oxford e-Research Centre, University of Oxford, presented their research at the 18th International Society for Music Information Retrieval (ISMIR) conference, hosted by the National University of Singapore (Suzhou) Research Institute in Suzhou, China, 23 – 27 October.

C4DM researchers gave several presentations at the conference. The paper “Transfer learning for music classification and regression tasks” by PhD student Keunwoo Choi, Senior Lecturer Dr George Fazekas, and Professor Mark Sandler won the Best Paper Award. In the paper, the researchers present a transfer learning approach for music classification and regression tasks: they propose a pre-trained convnet feature, a feature vector formed by concatenating the activations of feature maps from multiple layers of a trained convolutional network, and show that this convnet feature can serve as a general-purpose music representation. The full paper and abstract are available here: https://ismir2017.smcnus.org/wp-content/uploads/2017/10/12_Paper.pdf
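
To make the idea concrete, here is a minimal sketch (not the authors’ code) of how such a concatenated convnet feature could be extracted in Keras; the model file and layer names below are hypothetical placeholders:

    # Sketch: pool the activations of several layers of a pre-trained
    # convnet and stack them into a single feature vector.
    import numpy as np
    import tensorflow as tf

    # Hypothetical pre-trained music tagging model and layer names.
    base = tf.keras.models.load_model("pretrained_music_tagger.h5")
    layer_names = ["conv1", "conv2", "conv3", "conv4", "conv5"]

    # One model with one output per chosen convolutional layer.
    extractor = tf.keras.Model(
        inputs=base.input,
        outputs=[base.get_layer(n).output for n in layer_names],
    )

    def convnet_feature(mel_spectrogram):
        """Concatenated feature vector for one (freq, time, channel) input."""
        activations = extractor(mel_spectrogram[np.newaxis, ...])
        # Global average pooling turns each feature map into a fixed-size vector.
        pooled = [tf.reduce_mean(a, axis=(1, 2)) for a in activations]
        return tf.concat(pooled, axis=-1).numpy().squeeze()

The resulting fixed-length vectors can then be fed to any off-the-shelf classifier or regressor for the target task.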

A paper by Research Associate Dr David Weigl and Senior Researcher Dr Kevin Page on Music Encoding and Linked Data was also presented at the conference on 24 October. The paper, a product of the FAST IMPACt project, presents the Music Encoding and Linked Data (MELD) framework for distributed real-time annotation of digital music scores.
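
As a rough illustration of the kind of Linked Data involved, the sketch below builds a simple score annotation with rdflib using the W3C Web Annotation vocabulary; all URIs are invented for illustration, and this is not MELD’s own code:

    # Hypothetical example: a Web Annotation targeting a fragment of an
    # MEI-encoded score, expressed as RDF with rdflib.
    from rdflib import RDF, Graph, Literal, Namespace, URIRef

    OA = Namespace("http://www.w3.org/ns/oa#")

    g = Graph()
    g.bind("oa", OA)

    annotation = URIRef("http://example.org/annotations/1")
    # Target: a made-up measure ID inside an MEI score document.
    target = URIRef("http://example.org/scores/sonata.mei#measure-12")

    g.add((annotation, RDF.type, OA.Annotation))
    g.add((annotation, OA.hasTarget, target))
    g.add((annotation, OA.bodyValue, Literal("Crescendo starts here")))

    print(g.serialize(format="turtle"))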

Further information

OeRC news article:
http://www.oerc.ox.ac.uk/news/weigl-page-paper-music-encoding-and-linked-data-be-presented-ismir

ISMIR Conference website:
https://ismir2017.smcnus.org

FAST workshop at AES New York 2017, 18 – 20 October

FAST IMPACt project members from the Centre for Digital Music (C4DM) organised a workshop on 20 October 2017 as part of the AES New York 2017 Archiving & Restoration track: AR08 – The Music Never Stopped: The Future of the Grateful Dead Experience in the Information Age.

During this workshop Thomas Wilmering and George Fazekas (who co-chaired the event) demonstrated new ways of navigating concert recordings, combining editorial metadata and semantic audio analysis using Semantic Web technologies. They focussed particularly on material by the Grateful Dead and discussed opportunities and requirements with audio archivists and librarians, as well as the broader social and cultural context of how new technologies bear on music archiving and fandom. The workshop also explored the opportunities semantic audio technologies create for experiencing live music performances and interacting with live performance and cultural archives. It included a presentation by Prof. Mark Sandler, head of C4DM, introducing challenges and opportunities in semantic audio, while C4DM members Thomas Wilmering and Ben White presented their research on navigating music archives and the creative use of archival material. A number of experts from the USA were invited to contribute in the areas of live sound reinforcement, metadata, and semantic audio.

The workshop included a panel discussion moderated by George Fazekas. John Meyer, President and CEO of Meyer Sound Laboratories, discussed how the development of new audio technologies was driven by the needs of the Grateful Dead. Nicholas Meriwether, Director of the Center for Counterculture Studies in San Francisco, presented an account of the wealth of material available from the Grateful Dead and discussed challenges and opportunities in the development of music-related cultural archives, together with Scott Carlson, Metadata Coordinator of Fondren Library at Rice University, and Jeremy Berg, Cataloging Librarian at the University of North Texas. The panel also reflected on how semantic audio technologies could bring about new ways to navigate and experience live music archives, after demonstrations by Juan Pablo Bello, Associate Professor of Music Technology and Computer Science & Engineering at New York University, and Prof. Bryan Pardo, head of the Northwestern University Interactive Audio Lab. Useful feedback on several aspects of our research in FAST was gathered from the panel as well as the audience.


Further Information:
Fazekas, G., Wilmering, T. (2017). The Music Never Stopped: The Future of the Grateful Dead Experience in the Information Age. Audio Engineering Society 143rd Convention, New York, NY, USA, October 20, 2017. (http://www.aes.org/events/143/archiving/?ID=5761)

See related publication: 
Wilmering, T., Thalmann, F., Fazekas, G., Sandler, M. B. (2017). Bridging Fan Communities and Facilitating Access to Music Archives through Semantic Audio Applications. Audio Engineering Society 143rd Convention, New York, NY, USA, October 20, 2017. (http://www.aes.org/e-lib/browse.cfm?elib=19335)


FAST in conversation with Joshua Reiss, Centre for Digital Music, Queen Mary University of London

  1. Could you please introduce yourself?

I am Josh Reiss. My title is Professor of Audio Engineering. I am an academic with the Centre for Digital Music at Queen Mary University of London.

  2. What’s your role/work within the project?

I am new to the project, coming in as an Investigator, mainly involved in the Production side of things, but not limited to that.

  3. Which would you say are the most exciting areas of research in music currently?

Probably algorithmic music composition. This is still very much in its infancy and has huge potential. Research in this area can also yield deep insights into creativity and high-level human perception and cognition.

Audio and music research is also being heavily impacted by other fields. Machine learning is becoming pervasive, while metamaterials and nanotechnology are affecting all transducers (loudspeakers, microphones…), enabling amazing spatial audio systems. Object-based audio is changing the broadcast world. And the path from academic research to commercial product or real-world application is rapidly accelerating.

The world of audio and music technology will look very different even just 5 years from now.

  4. What, in your opinion, makes this research project different to other research projects in the same discipline?

By this research project, I assume you mean ‘FAST IMPACt’. I like its exploratory nature, while at the same time having a strong focus on impact and dissemination. It is well-positioned to make breakthroughs.

  5. What are the research questions you find most inspiring within your area of study / field of work?

Many. But I like the questions where there is some debate, where people have asked the question but haven’t reached a consensus. And I like to work on questions that are well-defined, where you can feel that there is an answer to be found, even if it’s not a single, definitive solution.

  6. What, in your opinion, is the value of the connections established between the project research and the industry?

It’s a two-way street. Working with industry helps academic researchers realise the value of their work and gain new insights into what problems and approaches really matter. For industry, working with academia gives them a means to take risks that would not necessarily make sense if attempted internally. It helps them pursue disruptive innovations and helps ensure a certain level of rigour in their R&D.

  7. What can academic research in music bring to society?

Better music, better music technology, deeper understanding of perception, cognition and creativity. Plus, it’s a field where advances can be applied in many other domains.

  8. Please tell me why you find it valuable/exciting/inspiring to do academic research related to music.

This is probably not the answer you’re looking for, but I am inspired by the research itself, not necessarily the fact that I’m doing it with music. To quote Richard Feynman, “I’ve already got the prize. The prize is the pleasure of finding the thing out, the kick in the discovery, the observation that other people use it. Those are the real things.”

  9. Why did you choose this area of research / field of industry?

Relating to question 8, my passion is research. If you had met me as a child and asked me what I wanted to be when I grew up, I would have said ‘Scientist.’ That hasn’t really changed. If there are interesting questions and problems in an area, then I naturally become interested in it. So, I moved into this field because of a background studying nonlinear systems, which led to audio analogue-to-digital converters, which led to audio effects, which led to lots and lots of questions. But the choice was always to do research.

  10. What are you working on at the moment?

A few things. One interesting area of research is the question of whether we can hear a difference between standard CD-quality audio (44,100 samples per second, 16-bit resolution) and anything of higher quality. The evidence tentatively suggests that we can. But why? It seems doubtful that we can hear any frequencies above 20 kHz, so what is the cause of any perceived difference?
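
For context, the back-of-envelope numbers behind that question are easy to check (standard textbook formulas, not part of the interview; the variable names are arbitrary):

    # Why CD quality already seems to cover the audible range.
    fs = 44_100          # CD sample rate in Hz
    bits = 16            # CD bit depth

    nyquist = fs / 2                     # highest representable frequency
    dynamic_range = 6.02 * bits + 1.76   # ideal quantisation SNR in dB

    print(f"Nyquist frequency: {nyquist:.0f} Hz")   # 22050 Hz, above the ~20 kHz hearing limit
    print(f"Dynamic range: {dynamic_range:.1f} dB") # ~98 dB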

Another interesting one is that we can hear a difference between hot and cold water being poured. Again, why? Lots of people have offered explanations for this, but none of them hold up to some simple tests. This area of research is just for fun.

I’m also involved in a few projects that are meant to change the way people do things, their workflows, or the tools they use. Among these are automatic mixing, sound synthesis (for real-world sound design), and adaptive mixes for the hearing impaired.

  11. Which is the area of your practice you enjoy the most?

That moment in research where you realise you’ve got the breakthrough, solved the problem, built the killer demonstrator… And also just when I’m able to work on a problem by myself and make some real progress… And also watching one of the researchers working with me give a great talk, successfully defend a PhD, win an award, or anything like that.

Nottingham to participate in new videogame music festival

All Your Bass: The Videogame Music Festival, Nottingham, Friday 19 and Saturday 20 January 2018

The National Videogame Arcade has just announced the launch of a new videogame music and audio festival on Friday 19 and Saturday 20 January 2018. A series of talks, concerts and more will take place across The National Videogame Arcade, Nottingham Royal Concert Hall and Antenna, with Friday’s sessions taking an industry and student focus.

The University of Nottingham project team from the Mixed Reality Lab and the composer Maria Kallionpää will participate in the festival with their collaboration ‘Climb!’.

Maria Kallionpää performing ‘Climb!’

Further information

Festival Press Release:
All Your Bass: The Videogame Music Festival

FAST IMPACt blog post:
Maria Kallionpää: “Climb!” – A Virtuoso Piece for Live Pianist, Disklavier and Interactive System (2017)