
The Rough Mile project: a two-part location-based audio walk

by Sean McGrath, Mixed Reality Lab, University of Nottingham

The Rough Mile is funded through the EPSRC's FAST project (Fusing Audio and Semantic Technologies for Intelligent Music Production and Consumption, EPSRC EP/L019981/1) and is run through the Mixed Reality Lab at the University of Nottingham. It is a two-part, location-based audio walk that combines principles of immersive theatre with novel audio technologies, giving people the opportunity to give each other performance-based gifts based on digital music.

Listening to recorded music is generally construed as a passive act, yet it can be a passionately felt element of individual identity as well as a powerful mechanism for deepening social bonds. We see an enormous performative potential in the intensely meaningful act of listening to and sharing digital music, understood through the lens of intermedial performance. Our primary research method is exploring the dramaturgical and scenographic potentials of site-specific theatre in relation to music. Specific contexts of listening can make powerful affective connections between the music, the listener, and the multitude of memories and emotions that may be triggered by that music. These site-specific approaches are augmented by performance-based examinations of walking practices, especially Heddon and Turner’s (2012) feminist interrogation of the dérive, as music often shapes a person’s journey as much as it does any experience they arrive at.

For this project, as part of the EPSRC-funded FAST Programme Grant, we are creating a system through which people can devise 'musical experiences' anchored to a particular location or route, which they then share with others. They craft an immersive and intermedial experience drawing on music, narration, imagery, movement, and engagement with the particularities of each location, motivated by and imbued with the personal meanings and memories of both the creator of the experience and its recipient. We believe that this fluid engagement with digital media, technology, identity, and place will provide insights into the relationships between commercial music, the deeply personal significance that such music holds for individuals, the performativity inherent in the shared act of listening, and site-based intermedial performance.

We are developing a two-part, location-based audio walk performance for pairs of friends, using Professor Chris Greenhalgh's Daoplayer. Participants are drawn into a fictional world that prompts them to consider pieces of music that have personal significance in relation to their friend. Through the performance they choose songs and share the narratives behind them, which are then combined to form a new audio walk performance, this time as a gift for their friend to experience. By focusing on the twin roles of memory and imagination in the audience experience, the work turns performance practice into a means of engaging more fully with digital media, such as personal photos and music, that people invest with profound personal significance.
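For readers unfamiliar with how location-based audio pieces work in general, the sketch below illustrates the basic mechanic: audio clips are bound to geographic trigger zones, and playback fires when the listener walks into one. This is an illustrative sketch only; the zone coordinates, file names and trigger logic are invented for the example and are not Daoplayer's actual data format or API.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Each zone: (name, lat, lon, radius_m, audio_file). The coordinates are
# placeholders, not the real Nottingham route.
ZONES = [
    ("market_square", 52.9533, -1.1496, 30.0, "narration_01.mp3"),
    ("lace_market",   52.9512, -1.1441, 25.0, "song_for_a_friend.mp3"),
]

def triggered_zones(lat, lon):
    """Return the audio files whose trigger zone contains the listener."""
    return [audio for name, zlat, zlon, radius, audio in ZONES
            if haversine_m(lat, lon, zlat, zlon) <= radius]

if __name__ == "__main__":
    print(triggered_zones(52.9534, -1.1497))  # -> ['narration_01.mp3']
```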

The first part is an immersive and intermedial audio walk drawing on music, narration, imagery, movement, and engagement with the particularities of the location in central Nottingham. The second part has participants retrace their original route, this time listening to the songs chosen for them by their friend, contextualised by snippets of audio from their friend's verbal contributions. The musical gift is motivated by and imbued with the personal meanings and memories of both the giver and the receiver. We believe that this fluid engagement with digital media, technology, identity, and place will provide insights into the relationships between commercial music, the deeply personal significance that such music holds for individuals, location, movement, and performance.

British Art Show 8

by Ben White, Centre for Digital Music, Queen Mary University of London (in collaboration with Eileen Simpson)

British Art Show 8 
Talbot Rice Gallery, University of Edinburgh
13 February – 8 May 2016

Auditory Learning is part of Open Music Archive, a wider project to find, distribute and reanimate out-of-copyright music recordings. Sourcing vinyl 45 rpm records of chart hits from 1962 – the last year of commercial recordings that can be retrieved for public use until 2034, due to recent copyright revisions – and using emerging information retrieval technologies, the artists have extracted over 50,000 sounds to produce a new public sonic inventory. See the Open Music Archive website: http://www.openmusicarchive.org/auditorylearning

At Talbot Rice Gallery two new works are presented which explore the reassembly of this corpus of 1962 sounds:

Assembled Onsets features eight modified turntables and individually lathe-cut records which play a new sound work assembled from the inventory of individual notes, percussive elements and vocal phonemes. The surrounding graphic fragments recall the printed paper sleeves of 7-inch vinyl records.

Linear Search Through Feature Space is a looped audio work in which the algorithmic playback of acoustically similar sounds assembles an evolving rhythmic soundscape – a sonic journey recalling the sounds of 1962.
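The work's title points at the underlying technique: ordering sounds by proximity in an acoustic feature space. As a rough illustration of one plausible reading (the artists' actual pipeline is not documented in this post), the sketch below greedily chains each sound to its nearest unvisited neighbour so that playback drifts between acoustically similar sounds. The feature vectors here are random placeholders standing in for real timbre descriptors such as MFCCs.

```python
import numpy as np

# Placeholder features: 500 extracted sounds x 13 descriptors each.
rng = np.random.default_rng(1962)
features = rng.normal(size=(500, 13))

def similarity_chain(features, start=0):
    """Order sounds by repeatedly jumping to the nearest unvisited one."""
    remaining = set(range(len(features)))
    order, current = [start], start
    remaining.remove(start)
    while remaining:
        idx = np.array(sorted(remaining))
        dists = np.linalg.norm(features[idx] - features[current], axis=1)
        current = int(idx[np.argmin(dists)])
        order.append(current)
        remaining.remove(current)
    return order  # a playback order over the sound inventory

playlist = similarity_chain(features)
```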

Auditory Learning will change and develop throughout the exhibition's tour. It will be reassembled as part of a live event in Leeds during British Art Show 8 and Huddersfield Contemporary Music Festival, and in Southampton it will form the soundtrack for a new film produced with a group of local teenagers.

The British Art Show is widely recognised as the most ambitious and influential exhibition of contemporary British art, with artists chosen for their significant contribution over the past five years. Organised by Hayward Touring at Southbank Centre, London, and taking place every five years, it introduces a broad public to a new generation of artists.

Eileen Simpson and Ben White, Auditory Learning: Linear Search Through Feature Space (2016) excerpt.

Linked data descriptions of live performances

by Graham Klyne, Oxford e-Research Centre, University of Oxford

The FAST project is exploring the notion of Digital Music Objects to carry information about all aspects of music, from creation and recording to distribution and consumption.  Live performances place music consumers close to the performers of the music, and offer a particular experience as a starting point for exploring the output of an artist or group.

Our vision is to create a "Performance Digital Music Object" that augments the interaction between live performers and their audience, extending the experience beyond the performance itself and providing access to a range of information relating to the performance and its performers.

Working with the FAST team at Nottingham’s Mixed Reality Lab, and building on earlier work to use linked data to describe the Carolan Guitar’s story [1][2], we have been using Annalist [3][4] to create a description of a live performance given at The Maze in Nottingham by Carmina and the Phil Langran Band.

The linked data model we are developing draws upon terms from W3C PROV [6], CIDOC CRM [7], FRBRoo [8] and Music Ontology [9], and in so doing aims to situate the live performance in a context that includes the music performed, the people performing it and the time and place at which the performance took place [5].  By sharing terms with other linked data descriptions, such as the Carolan Guitar’s story, we also allow the performance (which featured the Carolan Guitar) to be connected with a wider range of topics that could be of interest to the audience present.
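To make this concrete, here is a minimal sketch (using Python's rdflib) of how a performance description might mix terms from PROV-O and the Music Ontology in the way described above. The resource URIs, the date, and the exact choice of properties are illustrative placeholders, not the project's actual Annalist data.

```python
from rdflib import Graph, Literal, Namespace, RDF, XSD

MO   = Namespace("http://purl.org/ontology/mo/")
PROV = Namespace("http://www.w3.org/ns/prov#")
EX   = Namespace("http://example.org/performances/")  # hypothetical namespace

g = Graph()
g.bind("mo", MO); g.bind("prov", PROV); g.bind("ex", EX)

gig   = EX["maze-gig"]                 # placeholder resource names
venue = EX["the-maze-nottingham"]
band  = EX["phil-langran-band"]

g.add((gig, RDF.type, MO.Performance))     # Music Ontology event type
g.add((gig, RDF.type, PROV.Activity))      # the same event as a PROV activity
g.add((gig, PROV.atLocation, venue))
g.add((gig, PROV.wasAssociatedWith, band))
g.add((gig, PROV.startedAtTime,            # placeholder date and time
       Literal("2016-03-18T20:00:00", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```

Typing the same event with terms from several ontologies, as here, is what lets the performance connect to other linked data descriptions that share those vocabularies.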

Work is in progress to allow the data created using Annalist to be used in conjunction with a user-facing web site that presents the information for audience members to access, and provides a route for audience feedback to the performers.

Creating linked data descriptions of musical objects and events is informing the development of Annalist, which is a tool for quickly creating web linked data using any combination of ontologies.  Annalist is being created as an independent open development, and is being used within the FAST project to prototype linked data models, exploring the use and expressivity of existing ontologies for describing concepts being explored by FAST.

Taking a wider view of FAST project activities, the aim is that the prototyping work with Annalist can provide a bottom-up proving of requirements coming from specific scenarios and user stories, as a complement to top-down design and adoption of ontologies, to reveal any gaps in the adopted ontologies and to guide the design of additional vocabulary terms to fill these gaps.  Annalist may also help to improve time to deployment of applications that depend on newly adopted ontologies by providing rapid prototype tools for data entry and management.

[1] S. Benford, A. Hazzard, A. Chamberlain, K. Glover, C. Greenhalgh, L. Xu, M. Hoare and D. Darzentas. 2016. Accountable Artefacts: The Case of the Carolan Guitar. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). ACM, New York, NY, USA, 1163-1175. DOI=http://dx.doi.org/10.1145/2858036.2858306
[2] http://fast-project.annalist.net/annalist/c/Carolan_Guitar/
[3] G. Klyne, C. Willoughby, K.R. Page. Annalist: A practical tool for creating, managing and sharing evolving linked data. Linked Data on the Web Workshop 2016. To appear in CEUR Workshop Proceedings (CEUR-WS.org) Vol-1593.
[4] http://annalist.net/
[5] http://fast-project.annalist.net/annalist/c/Performances/
[6] T. Lebo et al. PROV-O: the PROV ontology. W3C Recommendation. W3C, Apr. 2013. url: http://www.w3.org/TR/prov-o/
[7] P. Le Bœuf, M. Doerr, et al. Definition of the CIDOC conceptual reference model, version V6.2. Tech. rep. International Council of Museums, May 2015. url: http://www.cidoc-crm.org/docs/cidoc_crm_version_6.2.pdf
[8] C. Bekiari, M. Doerr, and P. Le Bœuf, eds. FRBR – object-oriented definition and mapping to FRBRer (version 1.0). May 2009. url: http://www.cidoc-crm.org/docs/frbr_oo/frbr_docs/FRBRoo_V1.0_draft__2009_may_.pdf
[9] http://musicontology.com

External Artist Programme: Mixed Reality Lab collaboration with Ron Herrema

by Sean McGrath, Mixed Reality Laboratory, University of Nottingham

As part of the FAST project, the MRL is hosting a number of artists in residence. One of those collaborations, part of the partnership with B3 Media, is with the artist Ron Herrema. In collaboration with the MRL, Ron is developing an iPad application, Infinity: a graphically and sonically generative application offering an audiovisual experience similar to that of Ron's previous creation for the iPhone, Dancing Wu Wei, which is freely downloadable from the App Store.


Figure 1 – Infinity app visuals

The app uses touch interaction to modify the generated audio and visual elements and to create an interactive experience. Infinity explores the concept of user experience from multiple perspectives: as a piece of art, as a tool for contemplation, and as a tool for engagement. Ron is currently undertaking a body of work which focuses on a human-centred design methodology. He is iterating through designs based on feedback from five core users, who are using the app in a variety of contexts. The feedback from these users is driving further iterative developments in the application.
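As a toy illustration of the general pattern at work in such apps (mapping touch position onto generative parameters), consider the sketch below. The parameter names, ranges and mapping are invented for illustration; this is not how Infinity is actually implemented.

```python
# Map normalised touch coordinates onto synthesis and rendering parameters.
# Parameter choices (pitch range, brightness range) are hypothetical.

def touch_to_params(x, y, width, height):
    """Map a touch point to (pitch_hz, brightness) parameters."""
    u, v = x / width, 1.0 - y / height    # normalise; invert y so up = more
    pitch_hz = 110.0 * (2.0 ** (u * 3))   # 110 Hz .. 880 Hz across the screen
    brightness = 0.2 + 0.8 * v            # visual brightness in [0.2, 1.0]
    return pitch_hz, brightness

print(touch_to_params(512, 200, 1024, 768))
```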


Figure 2 – Visuals changing over time

The work so far has highlighted a number of key themes of interest. These are as follows:

  • Immersion/Engagement
  • Interactivity/gestures – Orientation
  • Control and emergent flow – Discovering interactive elements
  • Scheduling and the relevance of context
  • Metaphors of use

The next stage of the work involves hosting a workshop. During the course of this workshop, participants will be observed using the application and then asked a series of questions relating to their experiences. The workshop will inform future design and development changes in the application.

The application will shortly be entering the beta testing phase. If you are interested in beta testing the application and offering feedback to the research team then feel free to contact sean.mcgrath@nottingham.ac.uk with an expression of interest.

Related links:
https://vimeo.com/100254134
http://ronherrema.net

FAST in full force at the ACM’s annual Computer-Human Interaction conference

by Steve Benford, Mixed Reality Lab, University of Nottingham and Mathieu Barthet, Centre for Digital Music, Queen Mary University of London

The FAST project was in full force at the ACM's annual Computer-Human Interaction conference – widely known as CHI – in San Jose in May. CHI is the leading conference for research into interfaces and interaction and was attended by over 3,000 delegates this year.

First up for FAST was our work on the Carolan guitar as an example of an 'accountable artefact': a physical object that becomes associated with a growing digital record as it is passed among different custodians over its lifetime, and that can be interrogated to tell various stories of provenance, use or personal meaning. Our research paper on Carolan was well received, winning a "Best of CHI" honorable mention (awarded to the top 5% of all submissions) and also being nominated for Best Art Paper (the first year for this category).

We also presented Carolan as an exhibit at Interactivity, CHI's hands-on programme of exhibits. Our stand was popular, enjoying a constant stream of visitors, including several who were recorded playing the guitar (see our blog post at https://carolanguitar.com/2016/05/13/56-chi/).

FAST also contributed a paper on brain-controlled interfaces for entertainment, specifically a study of the design and experience of our associate artist (and PhD student) Richard Ramchurn’s unique movie #Scanners. This paper introduces an unusual taxonomy for designing brain-computer control of digital media in which participants’ extent of voluntary control, as well as their conscious awareness of this control, varies throughout an experience. We were excited to win the Best Art Paper award for this paper, adding to FAST’s haul of certificates.

Richard, Matthew and the team also exhibited #Scanners at CHI Interactivity, attracting great interest and invitations to stage the work elsewhere.

Several FAST demonstrators were presented in our review paper "Crossroads: Interactive Music Systems Transforming Performance, Production and Listening" (http://bit.ly/crossroadsMusicHCI) at CHI 2016's Music and HCI workshop, which gathered international experts and pioneers in the field of Music Interaction. The workshop, which proved very successful, will lead to the publication of a book aiming to reflect on the latest research in Music and HCI and to strengthen the dialogue between the music interaction community and the wider HCI community.

Open Symphony team at CHI 2016

FAST innovated during CHI's Interactivity session, being the only project to propose a series of live interactive music performances during which conference attendees became not only listeners but genuinely engaged participants in the music creation process. This was orchestrated by the Open Symphony team (Yongmeng Wu, Leshao Zhang, Kate Hayes, Mathieu Barthet), who showcased their participatory music system enabling audience members to generate live graphic scores for directed improvisations using mobile phones (http://bit.ly/opensymphonyCHI2016). The project led to a fruitful collaboration with talented local musicians from the University of California, Berkeley and its Center for New Music and Audio Technologies (CNMAT), who rehearsed the pieces prior to the conference. The Open Symphony musical and technical system sparked the interest of many CHI attendees during the exhibition, and the performances welcomed several hundred participants who actively engaged with the music. This was a genuine way not only to demonstrate new music composition and listening paradigms through computer-supported cooperative music creation, but also to reach a wider audience and collaborate with local performers; indeed, one element of the term 'Open' relates to the participatory nature of the project.
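As a hypothetical sketch of the participatory pattern described above, the snippet below aggregates audience votes for a playing 'mode' into a single instruction per round, the kind of decision that could then be rendered into a live graphic score. The mode names and voting scheme are invented for illustration and are not Open Symphony's actual protocol.

```python
from collections import Counter

# Audience members vote on their phones for a playing "mode"; the majority
# choice per round drives the graphic score. Modes are hypothetical.
MODES = ["drone", "rhythmic", "melodic", "silence"]

def round_winner(votes):
    """Return the mode with the most valid votes this round (ties: first seen)."""
    counts = Counter(v for v in votes if v in MODES)
    return counts.most_common(1)[0][0] if counts else "silence"

print(round_winner(["drone", "melodic", "melodic", "rhythmic", "melodic"]))
# -> 'melodic'
```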

FACT – Carolan guitar blog post (52)

by Steven Benford, Mixed Reality Lab, University of Nottingham

The turning of the year is always a time for reflection and for thinking of friends and family. So first of all, many thanks to everyone who met and played Carolan in 2015 and contributed to its growing story. It’s certainly been an eventful year. Looking back to the now distant Summer, one key event that we haven’t covered to date was Carolan’s Open Mic session at Foundation for Art and Creative Technology (FACT) in Liverpool.

Performing Data

This was part of a wider exhibition titled Performing Data, which presented a series of artistic explorations of how data can be incorporated into live performance and featured a series of thematically-linked artworks.

Open Mic

Alongside these works, Carolan hosted an Open Mic session that explored how data from its historical record might be interleaved with live performance. This featured an array of talented singers and players from around Liverpool including:

Jo Bywater

Jo Bywater's latest EP, Chasing Tales, has been described as 'delicious, laid back folk-tinged Americana' and 'unorthodox, uncompromising, simply brilliant'. Over the last few years she has been establishing her place in Blues, Americana, Folk and Roots music and has been compared to Janis Joplin, Alanis Morissette and Ani DiFranco. Chasing Tales was voted winner of the FATEA Awards 'Best EP/Mini Album/Single of 2013' and in November 2014 Jo won the Judges' Award for her song 'Riches to Rags' at the Liverpool Acoustic Songwriting Challenge. In August 2015 Jo was a finalist in the 'Bluebird at the Bluecoat' Songwriting Competition and was invited to attend a songwriting masterclass with Nashville songwriters and Bob Harris.

Thomas McConnell

Since March 2012, when Thomas first started out on Liverpool's open-mic scene, he has toured the UK with Glenn Tilbrook, China Crisis and Ian McNabb. He has also opened for Squeeze, Difford & Tilbrook, Ian McCulloch & Ian Broudie, Steve Cradock and The Pretty Things. His eclectic sound is influenced by the rock and pop of the 20th century, in particular Paul McCartney. He plays all his own instruments and writes and arranges every part on his recordings.

Louise Quasie-Wood

Louise is a singer-songwriter from West Yorkshire who is studying Music at the University of Liverpool.

Little Rivers

Little Rivers is the project of Belfast native Callum Cairns. After he spontaneously released the 'We, I' EP in late 2012 with no prior recordings or gigs, a storm of critical interest quickly followed. Support slots with artists such as Soak and Rams' Pocket Radio unveiled delicate and emotionally-drenched songs that quietly stunned audiences around Ireland over a set of sporadic live dates.

All We Are

All We Are are a global gathering made up of Ireland's Richard O'Flynn (drums, vocals), Norway's Guro Gikling (bass, vocals) and Brazil's Luis Santos (guitar, vocals), who met at university in Liverpool. A total democracy, the band writes all music and lyrics together. Inspired by a shared love of hip-hop and soul music, Guro calls their sound "psychedelic boogie".

Interleaving live performance with recordings from the archive

The frequent changeover of artists that is part and parcel of an Open Mic session presented an opportunity to explore a format in which live performances were interleaved with recordings recalled from Carolan's archive, creating something of a conversation between live and recorded materials. As with previous events, this required us to create a new mapping between Carolan and various digital materials specific to this event, in this case connecting our guitar to the programme for the event, performers' bios and websites, and the playlist of curated videos. Building on the examples from recent Carolan Guitar blog posts 50 and 51, this further cements the idea that the mapping between Carolan and its digital record needs to be both tailorable and highly contextual.

At the same time, our open mic session generated a suite of new recordings to be added to Carolan’s archive and possibly incorporated into future events, for example open mic sessions in other venues and cities. Here’s Jo Bywater playing her composition Riches to Rags. Jo describes how the song was inspired by a painting. In a similar vein, she commented on the storytelling potential of Carolan. “Every now and then a really unique experience pops up and this was one of them. The guitar itself is very beautifully made and a real talking point. I’m also a big fan of stories, so the fact that the guitar has travelled and been played by many, as well as logging all the details within it at a scan is all amazingly interesting. A network and a story. It was great to meet Steve and have the opportunity to ask geeky questions about the build and the fret markers.”

As a final note, we also created a series of drinks coasters decorated with Carolan's patterns as a way of projecting the instrument's presence around the venue, so that audience members could scan the guitar from afar to learn more about its history and our performers.

So many thanks to all our players and also to FACT for hosting us. And thanks once again to everyone who has contributed to the project this past year. We wish you all a very happy and musical New Year!

Report on the Linked Music Hackathon, 9 October 2015, London

by Kevin Page, Oxford e-Research Centre, University of Oxford

On Friday 9th October over 25 academics and developers gathered for a Linked Music Hackathon, the culminating event of the Semantic Linking of BBC Radio (SLoBR) project led by Dr Kevin Page from the Oxford e-Research Centre. The project, funded by the EPSRC Semantic Media Network, has applied Linked Data and Semantic Web technologies to combine cultural heritage and media industry sources for the benefit of academic and commercial research; this research will be developed and extended within the Fusing Audio and Semantic Technologies (FAST) project.

Attendees were able to use data produced by SLoBR and more than 20 other Linked Data music sources to produce new mashups prototyped and presented on the same day. A variety of hacks were shown at the end of the event with the winning project “Geobrowsing using RISM” chosen by popular vote. A Radio 1 team were also in attendance to interview attendees for the BBC’s “Make It Digital” campaign.

The hackathon was hosted at Goldsmiths, University of London, by colleagues from SLoBR and assisted by the Transforming Musicology project. Organisers and participants from the FAST project included David De Roure, Graham Klyne, Kevin Page, John Pybus, and David Weigl.

Carolan Guitar Acquires Siblings

by Steve Benford, Mixed Reality Lab, University of Nottingham

The Nottingham project team has been developing a guitar called the Carolan guitar, or Carolan (named after the legendary composer Turlough O'Carolan, the last of the great blind Irish harpers, an itinerant musician who roamed Ireland at the turn of the 18th century composing and playing beautiful Celtic tunes). Like its namesake, Carolan is a roving bard: a performer that passes from place to place, learning tunes, songs and stories as it goes and sharing them with the people it encounters along the way. This is possible because of a unique technology that hides digital codes within the decorative patterns adorning the instrument. These act somewhat like QR codes in the sense that you can point a phone or tablet at them to access or upload information via the Internet. Unlike QR codes, however, they are aesthetically beautiful and form a natural part of the instrument's decoration. This new and unusual technology enables our guitar to build and share a 'digital footprint' throughout its lifetime, but in a way that resonates with both the aesthetic of an acoustic guitar and the craft of traditional luthiery.

We now bring you some exciting news: Carolan has acquired some siblings, although you might be hard pressed to spot the family resemblance. The Carolan team (http://carolanguitar.com/2014/07/25/team/) has been collaborating with Andrew McPherson and his group at the Augmented Instruments Laboratory at Queen Mary to explore how other musical instruments might be decorated with their life stories. This collaboration, part of the FAST project, is aimed at exploring new forms of digital music object.

Andrew’s lab creates new musical instruments and also augments traditional ones with sensors, new types of sound production and new ways of interacting. A nice example of Andrew’s previous work was to augment traditional piano keyboards with capacitance sensors so that pianists could naturally create vibratos, bends and other effects by moving their fingers on individual keys – an idea called TouchKeys. See: https://youtu.be/6fhmIqKHGs8

In a quite different vein, Andrew's recent work has been exploring the design of 'hackable instruments': new forms of electronic instrument that, while superficially simple, can be opened up and radically modified – 'hacked' – by musicians to create highly personalised instruments and performances. An early example is the D-Box, an electronic instrument that can easily be built from scratch and whose innards can be messed with in all sorts of ways without frying the circuitry. See: https://youtu.be/JOAO-EUtrGQ

This inherent hackability may lead to individual D-Boxes becoming associated with rich histories of how they have been modified and used to create particular sounds and performances. This prompted the Carolan team to ask whether we might decorate D-Boxes with interactive patterns and so enable their owners to document and recall the unique history of hacks associated with each instrument.

In January 2015, we hosted a FAST project workshop at the Mixed Reality Lab to build some D-Boxes and design some Artcodes (the new and more friendly name for Aestheticodes) that might be used to decorate them. First we assembled our D-Boxes …


FAST participants assembling D-boxes

Then we spent some time sketching Artcode decorations for them …


Participants sketching Artcode decorations for D-boxes

After the workshop, Adrian from the Carolan team designed a series of Artcode D-Box logos, a distinct one for each of six instruments. These were laser-etched and cut into the wooden sides of our D-boxes to create a family of instruments that can be scanned with a phone or tablet to read and update their individual blogs.


The family of six D-Boxes, each inlaid with its own distinct Artcode

We are looking forward to releasing our D-Boxes into the wild to see where they might go, how they will be hacked and also how their interactive decorations might help maintain their histories.


Perhaps Carolan might even get to meet its new siblings? Anyone for a D-Box and acoustic guitar jam?

The Open Multitrack Testbed

by Brecht De Man, C4DM, Queen Mary University of London

Many types of audio research rely on multitrack recorded or otherwise generated audio (and sometimes mixes thereof) for analysis or for demonstration of algorithms. In this context, a 'mix' denotes a summation of processed versions of (a subset of) these tracks, which can itself be processed as well. The availability of this type of data is of vital importance to many researchers, but it is also useful for budding mix engineers looking for practice material, for audio educators and developers, and for musicians or creative professionals in need of accompanying music or other audio in which some tracks can be disabled.
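That definition is compact enough to state in code. The sketch below treats 'processing' as simple per-track and master gains, standing in for arbitrary effect chains; it is a minimal illustration of the definition, not the testbed's own tooling.

```python
import numpy as np

def mix(tracks, gains, master_gain=0.8):
    """A mix: a processed (here, scaled) sum of individually processed tracks.

    tracks: list of equal-length mono float arrays; gains: one gain per track.
    """
    processed = [g * t for g, t in zip(gains, tracks)]
    return master_gain * np.sum(processed, axis=0)

# Two one-second 44.1 kHz test tones standing in for recorded tracks.
sr = 44100
t = np.arange(sr) / sr
tracks = [np.sin(2 * np.pi * 220 * t), np.sin(2 * np.pi * 330 * t)]
y = mix(tracks, gains=[0.7, 0.5])
```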

Among the types of research that require or could benefit from a large number of audio tracks, mixes and/or processing parameters, are analysis of production practices, source separation, automatic mixing, automatic multitrack segmentation, applications of masking and other auditory phenomena, and others that we haven’t thought of yet.

Existing online resources of multitrack audio content have a relatively low number of songs, show little variation in content, contain content whose use is restricted by copyright, provide little to no metadata, rarely include mixed versions with their parameter settings, and/or lack facilities to search the content against specific criteria.

For a multitrack audio resource to be useful for the wider research community, it should be highly diverse in terms of genre, instrumentation, and quality, so that sufficient data is available for most applications. Where training on large datasets is needed, such as with machine learning applications, a large number of audio samples is especially critical. Data that can be shared without limits, on account of a Creative Commons or similar license, facilitates collaboration, reproducibility and demonstration of research and even allows it to be used in commercial settings, making the testbed appealing to a larger audience. Moreover, reliable metadata can serve as a ground truth that is necessary for applications such as instrument identification, where the algorithm’s output needs to be compared to the ‘actual’ instrument. Providing this data makes the testbed an attractive resource for training or testing such algorithms as it obviates the need for manual annotation of the audio, which can be particularly tedious if the number of files becomes large. Similarly, for the testbed to be highly usable it is mandatory that the desired type of data can be easily retrieved by filtering or searches pertaining to this metadata.


For this reason, we present a testbed that

  • can host a large amount of data;
  • supports a variety of data of varying type and quality, including raw tracks, stems, mixes (plural), and digital audio workstation (DAW) files;
  • contains data under Creative Commons license or similar (including those allowing commercial use);
  • offers the possibility to add a wide range of meaningful metadata;
  • comes with a semantic database to easily browse, filter and search based on all metadata fields (see the query sketch below).
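As an illustration of the kind of metadata-driven retrieval such a semantic database enables, the sketch below issues a SPARQL query from Python. The endpoint URL and vocabulary terms are hypothetical placeholders, not the testbed's published schema; the query asks for all drum stems belonging to rock songs.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical endpoint and schema, for illustration only.
sparql = SPARQLWrapper("http://example.org/multitrack/sparql")
sparql.setQuery("""
    PREFIX mt: <http://example.org/multitrack/schema#>
    SELECT ?track ?song WHERE {
        ?song  mt:genre      "rock" .
        ?track mt:partOf     ?song ;
               mt:instrument "drums" .
    }
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

# Print the matching tracks and the songs they belong to.
for row in results["results"]["bindings"]:
    print(row["track"]["value"], row["song"]["value"])
```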

The testbed can be accessed here.