IEM / University of Music and Performing Arts, Graz, Austria

Audio Mostly 2020 in extended realities

15th to 17th September 2020


Workshop Program

On 14th September 2020, we offer a full-day workshop program. If you plan to attend, please select your workshop upon registration. There are three workshops to choose from, held in two parallel sessions.


Into Reality: Extending the reach of audio archives and audio archival content CANCELLED!
Laurence Cliff

Time: 10:00 - 12:00 CEST. Duration: 2 hours. Maximum participants: 12.

This workshop is designed to investigate new possibilities for interfacing with digital audio archives and digital audio archival content. It aims to present, discuss and creatively explore with participants how the communicative footprint of recently digitized audio archival recordings can be extended, in order to promote exploration of and engagement with this type of material and its holding collections. Through the presentation, discussion and creation of a selection of innovative concepts, it intends to promote new thinking around what an audio archival interface could look, sound or be like. As a starting point, initial themes will include possible virtual reality, augmented reality, audio augmented reality and mixed reality solutions, along with innovative web, mobile, graphical audio-first and audio-only concepts. The workshop will also explore how audio archive and archival content, subject and context could be utilised in the construction of new interfacing experiences.


How to create interactive virtual acoustic environments in TASCAR?
Joanna Luberadzka, Giso Grimm and Volker Hohmann

Time: 13:00 - 13:45 CEST. Duration: 45 minutes. Maximum participants: 20.

In this workshop we will introduce the Toolbox for Acoustic Scene Creation And Rendering (TASCAR) and its applications in hearing research. We will provide a step-by-step tutorial illustrating how TASCAR can be used in this context: how to define a virtual acoustic scene with dynamic objects, and how to render it via headphones or loudspeakers. We will demonstrate the possibilities for interactively controlling parameters of the objects in the scene, e.g. the listening position.
We will also show how to couple the acoustic objects with a game engine for visual rendering. Finally, we will show how to access and modify the scene from Matlab, to enable adaptive measurement procedures and to generate realistic but controllable sound material for evaluating the performance of signal processing algorithms.
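Interactive control of scene parameters of the kind described above is commonly done by sending OSC (Open Sound Control) messages over UDP. As a rough illustration of the idea only (not the workshop's actual material), the sketch below hand-encodes a minimal OSC packet using just the Python standard library; the address "/scene/listener/pos" and port 9877 are placeholder assumptions and would have to be matched to the actual scene definition and the OSC port configured in TASCAR.

```python
import socket
import struct

def osc_message(address: str, *floats: float) -> bytes:
    """Encode a minimal OSC message carrying 32-bit float arguments."""
    def pad(b: bytes) -> bytes:
        # OSC strings are NUL-terminated and padded to a multiple of 4 bytes.
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode()) + pad(("," + "f" * len(floats)).encode())
    for f in floats:
        msg += struct.pack(">f", f)  # arguments are big-endian float32
    return msg

# Hypothetical address and port -- check your scene definition and the
# TASCAR documentation for the real OSC path of the object to control.
packet = osc_message("/scene/listener/pos", 1.0, 0.0, 0.0)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 9877))
```

Because UDP is connectionless, such a message can be fired at every update of, say, a head tracker, without any session handshake.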


Low delay network audio for remote collaboration in hearing research and music applications
Giso Grimm, Joanna Luberadzka, Hendrik Kayser and Volker Hohmann

Time: 14:00 - 14:45 CEST. Duration: 45 minutes. Maximum participants: 20.

This workshop will show how to install and use the remote collaboration device 'ovbox', built upon the tool 'zita-njbridge'. The ovbox is a remote collaboration box developed by the ensemble ORLANDOviols, primarily to allow rehearsals during the lockdowns caused by the COVID-19 pandemic. The box is built entirely upon open-source software and open or standardized hardware. At the University of Oldenburg, the project was extended towards low-delay real-time streaming in hearing experiments, e.g., rendering complex virtual acoustic environments on a powerful PC in the lab and streaming the signals to the user via a device with calibrated reproduction hardware. Low-delay transmission of head movements for head-tracked rendering is also possible. The workshop will also cover the setup of session management servers, as well as systems for firewall and NAT traversal.


Human-Sound Interaction (HSI) workshop
Balandino Di Donato, Tychonas Michailidis and Alessio Gabriele

Part 1: Time: 9:00 - 12:00 CEST. Duration: 3 hours. Maximum participants: unlimited.
Part 2: Time: 13:00 - 17:00 CEST. Duration: 4 hours. Maximum participants: unlimited.

The design of interactions with sound and audio processes is a central activity in the creation of a performance, an installation, a virtual sound environment, or an interface for musical expression. Interaction design is often dictated by the interface, without taking into account human factors and our diverse abilities to perceive sound and interface affordances.
The Human-Sound Interaction (HSI) workshop is a full-day workshop that investigates principles of interaction design with sound, placing the human at the centre of the design process in a collaborative, interactive, inclusive and diverse environment.
This workshop will look at Human-Centered Interaction Design (HCID) aspects that determine the realisation and appreciation of musical works (installations, compositions and performances), interfaces for sound design and musical expression, sonic aspects of virtual environments (VR/AR/MR), and interactive audiovisual dance performances.
Participants are invited to a 3-hour interactive session with hands-on design activities, and a 4-hour extended-abstract presentation session.

Detailed information can be found here: https://balandinodidonato.com/hsi/audiomostly2020/