by Matthew Wilcoxson, Oxford e-Research Centre, University of Oxford
We are exploring the idea of a Performance Digital Music Object (DMO). An object of this kind would contain information associated with a particular performance, or a group of performances taking place at a single event. For example, it could serve as a souvenir that you receive after a live concert, containing (or linking to) the recording, analysis, video and social media reactions from the event.
An initial aim of this work is a DMO which contains only a list of linked data sources, not the data itself. When accessed, it would request the data and generate a human-readable view of it. Initial efforts centre around automatically creating webpages based on linked data and a minimal specification of how that data is best displayed.
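As a rough illustration of this idea, a minimal DMO might look something like the following sketch. All property names, URLs and roles here are hypothetical placeholders, not part of any published ontology; the point is only that the object carries links to data sources rather than the data itself.

```javascript
// Hypothetical sketch of a minimal Performance DMO: a list of linked
// data sources rather than the data itself. Every identifier and URL
// below is illustrative only.
const performanceDMO = {
  "@id": "http://example.org/dmo/concert-2017-05-01",
  "@type": "PerformanceDMO",
  sources: [
    { role: "recording",     endpoint: "http://example.org/sparql", graph: "http://example.org/graphs/audio" },
    { role: "transcription", endpoint: "http://example.org/sparql", graph: "http://example.org/graphs/notes" },
    { role: "socialMedia",   endpoint: "http://example.org/sparql", graph: "http://example.org/graphs/tweets" }
  ]
};

// On access, a viewer would dereference each source and render it;
// the DMO itself stays small because it holds only links.
function listSourceRoles(dmo) {
  return dmo.sources.map(function (s) { return s.role; });
}
```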
Early prototypes of this webpage generation use a SPARQL server. Data is requested by queries which return the main entity’s literal objects, its linked entities, and other entities directly linked to it. By utilising the additional information within these connected entities we can create a practical, enhanced view of the main entity. For example, when viewing an entity representing a music group we can enhance it with the group’s events, members, and so on.
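The general shape of such a query can be sketched as below. This is not the project's actual query, just a minimal one-hop pattern assembled in JavaScript (matching the NodeJS setup described later): the first triple pattern returns the main entity's literals and links, and the OPTIONAL block pulls in the properties of each directly linked entity.

```javascript
// Sketch of the kind of query used to fetch a main entity together
// with its directly linked entities and their own properties (one
// extra hop). A real implementation would send this string to the
// SPARQL endpoint over HTTP.
function buildEntityQuery(entityUri) {
  return [
    "SELECT ?p ?o ?p2 ?o2 WHERE {",
    "  <" + entityUri + "> ?p ?o .",   // literals and links of the main entity
    "  OPTIONAL { ?o ?p2 ?o2 . }",     // properties of each linked entity
    "}"
  ].join("\n");
}
```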
To represent this data in an engaging way, suitable view templates are selected for different parts of the whole view by matching a combination of the main entity’s type with the predicate and type (or type hierarchy) of each linked entity. By utilising entity type hierarchies we should be able to avoid specifying a view for every entity type, as a parent type’s view should be compatible with its child types. For example, a view template designed for an Agent (e.g. https://www.w3.org/ns/prov#Agent) could also be used for a Person (e.g. http://xmlns.com/foaf/0.1/Person). In this case, however, predicates unique to the child type would not be shown.
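The fallback behaviour described above can be sketched as follows. The hierarchy here is a hand-built map for illustration only; in practice the parent relationships would come from the ontology itself (e.g. via rdfs:subClassOf), and the template names are invented.

```javascript
// Hypothetical, hand-coded fragment of a type hierarchy: Person is a
// kind of Agent. Real hierarchies would be read from the ontology.
const parentType = {
  "http://xmlns.com/foaf/0.1/Person": "https://www.w3.org/ns/prov#Agent"
};

// Registered templates: only Agent has one, so Person must fall back.
const templates = {
  "https://www.w3.org/ns/prov#Agent": "agent-view.html"
};

function selectTemplate(type) {
  let t = type;
  while (t !== undefined) {
    if (templates[t]) return templates[t];  // most specific match wins
    t = parentType[t];                      // otherwise walk up the hierarchy
  }
  return "generic-view.html";               // last-resort default view
}
```

With this fallback, a Person entity is rendered with the Agent template, at the cost noted above: predicates specific to Person are not displayed.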
A further aim is to receive and display real-time updates, for example reading automatic music transcriptions (see http://www.semanticaudio.ac.uk/blog/can-a-computer-tell-me-what-notes-i-play-music-transcription-in-the-studio/) generated during an ongoing live performance. This work is still to be done, but it is expected to operate in much the same way as the retrieval and display of static data, with the system polling a SPARQL endpoint for updates. If any processing of the raw data is needed, this is assumed to have happened before the data reaches the SPARQL endpoint.
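Since this part is still to be done, the following is only a sketch of how such a polling step might look. The query function is injected (standing in for an HTTP request to the SPARQL endpoint), and `timestamp` is an assumed field marking when each transcription result was added.

```javascript
// One polling step: ask the endpoint for results newer than lastSeen,
// re-render if anything arrived, and return the new high-water mark.
// queryFn is a placeholder for a real SPARQL-over-HTTP request.
function checkForUpdates(queryFn, lastSeen, onUpdate) {
  const results = queryFn(lastSeen);            // e.g. "notes added since lastSeen"
  if (results.length > 0) {
    onUpdate(results);                          // re-render the affected view
    return results[results.length - 1].timestamp;
  }
  return lastSeen;                              // nothing new; keep the old mark
}

// A running system would call this on a timer, e.g.:
//   setInterval(() => { lastSeen = checkForUpdates(query, lastSeen, render); }, 2000);
```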
Our data currently comes from Annalist, a generic data store created by Graham Klyne which produces JSON-LD (see http://www.semanticaudio.ac.uk/blog/linked-data-descriptions-of-live-performances/). The data is indexed into our SPARQL server (currently Apache Jena’s Fuseki) and queried from there via a NodeJS server. Final versions should be generic enough to use any SPARQL endpoint.
With future research it should be possible to create a view template ontology and separate view template specifications, meaning the specifications could be located anywhere on the internet and be requested and reused in the same way one might request data from any data source.