23-26 August

Queen Mary, London, UK

Augmented and Participatory Sound and Music Experiences


Workshop Program

Audio Mostly Workshops have limited places. Tickets will be available at the registration desk on a first-come, first-served basis, free of charge to Audio Mostly delegates.

Friday 25th Aug

Joint Pure Data Workshop
Hosted by: Andy Farnell and Kosmas Giannoutakis

This workshop is composed of the two following parts:

  • Zero to Hero Pure Data Workshop (2 Hours)
    Hosted by: Andy Farnell

This two-hour workshop begins with a brief, highly accelerated ("zero to hero") Pure Data course, including an overview discussion of application possibilities such as LibPd, ZenGarden, Arduino and Raspberry Pi (a minimal LibPd sketch is given after this list). The second part will be an introduction to Andy's style of procedural audio: making sound effects and sonification sources in the language.

Pure Data is a visual environment for programming sound. Andy uses it for sound design and for teaching, and finds it a very useful tool for students of all ages and abilities: it allows ideas to be mocked up very quickly, faster than in programming languages such as C++. Andy is the author of the excellent book Designing Sound, available from MIT Press and other online distributors; it is a great way to get into Pure Data and programmatic sound design.

Participants need to provide their own laptops with Pd-vanilla already installed, and are encouraged to bring their own headphones. Pd-vanilla is available as a free download at http://puredata.info/downloads/pure-data.

  • Composing Recurrent Neural Topologies as Generative Music Systems (1 Hour)
    Hosted by: Kosmas Giannoutakis

In this workshop the generative music capabilities of artificial recurrent neural networks will be explored using RNMN (Recurrent Neural Music Networks), an abstractions library for the Pure Data programming environment. The library provides the basic building blocks, neurons and synapses, which can be connected arbitrarily, easily and conveniently, to create compound topologies. The framework allows real-time signal processing, which permits direct interaction with the topologies and the quick development of musical intuition. The basic principles of the framework will be explained and the construction of some basic topologies demonstrated; a conceptual sketch of a single recurrent neuron is given below, after this list. Participants are required to bring a laptop with a built-in microphone, headphones, and Pure Data installed (Pd-vanilla 0.47.1 is recommended). Experience with visual programming is a plus but not a necessary prerequisite.
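
As a taste of the kind of embedding LibPd makes possible, here is a minimal sketch of a C/C++ host that loads a Pd patch and renders one block of audio offline. The patch name "test.pd" and the single-block offline rendering are illustrative assumptions, not part of the workshop material.

    // Minimal libpd host (sketch): load a Pd patch, render one block.
    // "test.pd" is a hypothetical patch name used for illustration.
    #include <stdio.h>
    #include "z_libpd.h"

    int main(void) {
        libpd_init();
        libpd_init_audio(0, 2, 44100);    // no inputs, stereo out, 44.1 kHz

        // Turn DSP on: the equivalent of ticking "compute audio" in Pd.
        libpd_start_message(1);
        libpd_add_float(1.0f);
        libpd_finish_message("pd", "dsp");

        if (libpd_openfile("test.pd", ".") == NULL)
            return 1;                     // patch not found

        // One tick = one Pd block (64 frames per channel by default).
        float in[64], out[64 * 2];
        libpd_process_float(1, in, out);
        printf("first output sample: %f\n", out[0]);
        return 0;
    }

For readers unfamiliar with the neuron-and-synapse metaphor used above, the following conceptual C++ sketch models a single recurrent "neuron" as a leaky integrator with a weighted input synapse, ticked once per audio sample. It only illustrates the building-block idea; it does not use or reproduce the actual RNMN abstractions.

    // Conceptual sketch of one recurrent neuron (NOT the RNMN API):
    // a leaky integrator with an input synapse, processed per sample.
    #include <cmath>
    #include <cstdio>

    struct Neuron {
        float state = 0.0f;   // recurrent state (self-connection)
        float leak  = 0.99f;  // how slowly the state decays
        float gain  = 0.5f;   // input synapse weight

        // One sample in, one sample out; tanh keeps output bounded.
        float tick(float input) {
            state = leak * state + gain * input;
            return std::tanh(state);
        }
    };

    int main() {
        Neuron n;
        for (int i = 0; i < 8; ++i)                       // impulse response
            std::printf("%f\n", n.tick(i == 0 ? 1.0f : 0.0f));
        return 0;
    }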

Making High-Performance Interactive Audio Systems with Bela and Pure Data
Hosted By: Giulio Moro and Robert Jack

This hands-on workshop introduces participants to Bela, an embedded platform for ultra-low latency audio and sensor processing. We will present the hardware and software features of Bela through a tutorial that gets participants started developing interactive music projects. Bela projects can be developed in C/C++ or Pure Data (Pd), and the platform features an on-board browser-based IDE for getting started quickly. This workshop will focus specifically on using Pd with Bela to create interactive audio projects.
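
As an indication of what C/C++ development on the platform looks like, here is a minimal sketch of a Bela render.cpp that outputs a sine tone on all audio outputs, following Bela's documented setup/render/cleanup pattern. The 440 Hz frequency and output gain are arbitrary choices for illustration.

    // Minimal Bela render.cpp (sketch): a 440 Hz sine on every output.
    #include <Bela.h>
    #include <cmath>

    float gPhase = 0.0f;

    bool setup(BelaContext *context, void *userData) {
        return true;  // nothing to initialise in this sketch
    }

    void render(BelaContext *context, void *userData) {
        for (unsigned int n = 0; n < context->audioFrames; n++) {
            float out = 0.2f * sinf(gPhase);
            gPhase += 2.0f * (float)M_PI * 440.0f / context->audioSampleRate;
            if (gPhase > 2.0f * (float)M_PI)
                gPhase -= 2.0f * (float)M_PI;
            for (unsigned int ch = 0; ch < context->audioOutChannels; ch++)
                audioWrite(context, n, ch, out);
        }
    }

    void cleanup(BelaContext *context, void *userData) {}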

Saturday 26th Aug – all day

Interaction, Instruments and Performance: HCI and the Design of Future Music Technologies
Hosted By: Alan Chamberlain, Xenia Pestova, Mads Bødker, Maria Kallionpää and David De Roure

This workshop examines the interplay between people, musical instruments, performance and technology. Now, more than ever, technology is enabling us to augment the body, develop new ways to play and perform, and augment existing instruments so that they span the physical and digital realms. By bringing together performers, artists, designers and researchers, we aim to develop new understandings of how we might design new performance technologies.

Participants will be actively encouraged to participate, engaging with other workshop attendees to explore concepts such as augmentation, physicality, data, improvisation, provenance, curation, context and temporality, and the ways these might be employed and unpacked in relation to both performing with, and understanding interaction with, new performance-based technologies.

Workshop candidates are requested to send a paper (4-8 pages in the Extended Abstracts template, landscape) based on their research topic to the workshop organizers. This may also be a position piece or a demonstrator that can be used in the field by workshop participants; equally, audio pieces relating to the theme of the workshop may be submitted. Full information and the call for contributions for this workshop are available here.

Participants will be chosen based on the relevance of their work and its interest to other workshop participants. Due to the limited time, there will be a mixture of presentations, performances and demos. At least one author of each accepted paper needs to register for the workshop and for the conference itself.

Designing Sounds In The Cloud
Hosted by: Visda Goudarzi, Mathieu Barthet, György Fazekas, Francisco Bernardo, Rebecca Fiebrink, Michael Zbyszynski and Chunghsin Yeh

Current machine-based sound design tools typically operate disconnected from the cloud and are not adapted for participatory creation. This prevents them from exploiting the vast amount of digital audio content available online, as well as novel web-based sensor and machine learning technologies for musical expression and creative collaboration.

In this workshop, open to anyone with an interest in sound, participants will be introduced to active listening and to novel web-based frameworks for sound design and musical interaction from two ongoing European Horizon 2020 projects, Audio Commons and Rapid-Mix. The Audio Commons initiative aims to promote the use of open audio content by providing a digital ecosystem that connects content providers and creative end-users with a range of retrieval and production services. The Rapid-Mix project focuses on multimodal and procedural interaction, leveraging rich sensing capabilities, interactive machine learning, and embodied ways to interact with sound.

After the technology presentations, participants will be invited to form teams and join a sound walk to encourage awareness in listening to sounds and to discuss collaborative opportunities. Each team will then be invited to produce a sonic/musical artefact (e.g. a piece, performance or web-based app) in a participatory way and to provide feedback on the proposed technologies. Outcomes will be captured in short videos made by participants, which will be presented back to the group. The workshop will be facilitated by members of the University of Music and Performing Arts (Graz, Austria), Queen Mary University of London (London, UK), Goldsmiths, University of London (London, UK), and the company AudioGaming (Toulouse, France).
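
As a rough illustration of what a retrieval service in such an ecosystem looks like from code, the sketch below performs an HTTP text search against Freesound's public API using libcurl. Freesound is used here only as an assumed example of a content provider; the endpoint, query parameter and API-token requirement are assumptions about that service, not anything specified by the workshop.

    // Sketch: text search against a cloud audio content provider.
    // "YOUR_API_KEY" is a placeholder; Freesound issues per-user tokens.
    #include <curl/curl.h>
    #include <iostream>
    #include <string>

    static size_t collect(char *data, size_t size, size_t nmemb, void *out) {
        static_cast<std::string *>(out)->append(data, size * nmemb);
        return size * nmemb;  // tell libcurl the whole chunk was consumed
    }

    int main() {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (!curl) return 1;

        std::string body;
        curl_easy_setopt(curl, CURLOPT_URL,
            "https://freesound.org/apiv2/search/text/"
            "?query=rain&token=YOUR_API_KEY");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);

        CURLcode res = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        if (res != CURLE_OK) return 1;

        std::cout << body.substr(0, 200) << "\n";  // JSON results, truncated
        return 0;
    }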