At the recent Audio Mostly conference, the C4DM team presented their Moodplay project: “Moodplay: An Interactive Mood‐Based Musical Experience”. (Paper reference: Barthet, M., Fazekas, G., Allik, A., Sandler, M., “Moodplay: An Interactive Mood‐Based Musical Experience”, Proc. ACM Audio Mostly, Greece, 2015.)
The full conference programme can be viewed here.
The theme of this year’s conference was “Sound, Semantics and Social Interaction”. As the organisers explained in their Call for Papers: “the conference aims at confronting issues related to audio design, semantic processing and interaction that can be part of enhanced multimodal HMI in the social media landscape. It also attempts to investigate their role and involvement in the deployment of innovative web and multimedia semantic services as part of the transition to the Web3.0 era. A representative theme example that can be drawn here is that audio content production, music clips recording, editing, and sound design processes can be collaboratively applied within the social media networking context, while registering content shares and tagging with user feedback for semantically enhanced interaction. Moreover, besides technology-oriented approaches to the conference theme, submissions that refer to interdisciplinary work in this domain are strongly encouraged, including i) psychological research on the influence of auditory cues in shaping human multimodal perception, semantic processing, emotional affect, and social interaction within rich context environments, and ii) presentation of new practices of using sound as a medium for enhancing social interaction in new media artworks, and soundscapes.”
The team’s research is also one of the FAST Demonstrator projects. The team previously presented Moodplay at the Digital Shoreditch Festival in May 2015. A one-minute video can be viewed below: