University of Nottingham, Nottingham, UK

Audio Mostly 2019: A Journey in Sound
18th to 20th September 2019

Conference Program

Tuesday 17th
Time Activity Details Location
9:00 Registration     Music Department - Foyer (University Park, University of Nottingham), Building No. 33 – Map

Room locations:
Music Department Foyer (Room A0b) – Map 1 – Registration & Refreshments
Rehearsal Hall (Room A42) – Map 1 – Demos & Posters
Djanogly Recital Hall – Map 1 – Musical Performances
Arts Centre Lecture Theatre (Room A30) – Map 2 – Oral Presentations
09:30 - 5:00 Workshops

- Sonic Interaction in Intelligent Cars

- Bela Workshop – Paper Sensors, 2–5pm. To register, email Mads Bødker – mb.digi@cbs.dk

Evening Evening get-together (t.b.c.)

 

Wednesday 18th
Time Activity Details Location
8:00 Registration     Music Department - Foyer (University Park, University of Nottingham), Building No. 33 – Map

Room locations:
Music Department Foyer (Room A0b) – Map 1 – Registration & Refreshments
Rehearsal Hall (Room A42) – Map 1 – Demos & Posters
Djanogly Recital Hall – Map 1 – Musical Performances
Arts Centre Lecture Theatre (Room A30) – Map 2 – Oral Presentations

9:30 Welcome     Arts Centre Lecture Theatre
9:45 Keynote Andrew McPherson Technological and Cultural Values in Digital Musical Instrument Design

Every year, many new musical instruments are created in research and industry. New instruments are often promoted for technical novelty, range of sonic or expressive capabilities, or accessibility to novice players. However, most new instruments drop out of regular use after just a few years, while classic acoustic and electronic designs remain ubiquitous in many styles of music, highlighting the central role of human factors in determining instrument uptake.

This talk queries the broader context of why we build new musical instruments and examines some of the values we embed into them. The aesthetic context in which an instrument is created will strongly influence its design, regardless of what technology is used. At the same time, our tools and materials are not aesthetically neutral: they contain subtle assumptions about the form and structure of music, and they make certain design choices easier or more apparent than others. This talk will consider several examples illustrating these technical and cultural influences, concluding with open questions and reflections for creators seeking to engage with the human factors of new music technologies.
Arts Centre Lecture Theatre
10:45 Coffee / Tea Break     Foyer
11:15 Oral Session 1

Sonic Journeys through Space: Audio Augmented Reality and Sonic Atmospheres

Session Chair: Maria Kallionpää (Hong Kong Baptist University, Hong Kong)

 

Arts Centre Lecture Theatre
[1] 11:15 - Michael Krzyzaniak, Philip Jackson and David Frohlich

Six Types of Audio That DEFY Reality! (A Taxonomy of Audio Augmented Reality with Examples)

In this paper we examine how the term ‘Audio Augmented Reality’ (AAR) is used in the literature, and how the concept is used in practice. In particular, AAR seems to refer to a variety of closely related concepts. In order to gain a deeper understanding of disparate work surrounding AAR, we present a taxonomy of these concepts and highlight both canonical examples in each category, as well as edge cases that help define the category boundaries.

[2] 11:40 - Laurence Cliffe, James Mansell, Joanne Cormac, Christopher Greenhalgh and Adrian Hazzard

The Audible Artefact: Promoting Cultural Exploration and Engagement with Audio Augmented Reality

This paper introduces two ongoing projects where audio augmented reality is implemented as a means of engaging museum and gallery visitors with audio archive material and associated objects, artworks and artefacts. It outlines some of the issues surrounding the presentation and engagement with sound based material within the context of the cultural institution, discusses some previous and related work on approaches to the cultural application of audio augmented reality, and describes the research approach and methodology currently engaged with in developing an increased understanding in this area. Additionally, it discusses the project within the context of related cultural and sound studies literature, presents some initial conclusions as a result of a practice-based approach, and outlines the next steps for the project.

[3] 12:05 - Inês Salselas and Rui Penha

The Role of Sound in Inducing Storytelling in Immersive Environments

Sound design has been a fundamental component of audiovisual storytelling. However, with technological developments things are rapidly changing. More sensory information is available and, at the same time, the user is gaining agency upon the narrative, being offered the possibility of navigating or making other decisions. These new characteristics of immersive environments bring new challenges to storytelling in interactive narratives and require new strategies and techniques for audiovisual narrative progression. Can technology offer an immersive environment where the user has the sensation of agency, of choice, where her actions are not mediated by evident controls but subliminally induced in a way that it is ensured that a narrative is being followed? Can sound be a subliminal element that induces attentional focus on the most relevant elements for the narrative, inducing storytelling and biasing search in an audiovisual immersive environment? Herein, we present a literature review that has been guided by this prospect. With these questions in view, we present our exploration process in finding possible answers and potential solution paths. We point out that consistency, in terms of coherence across sensory modalities and emotional matching, may be a critical aspect.

[4] 12:30 - Elio Toppano, Sveva Toppano and Alessandro Basiaco

Moving Across Sonic Atmospheres

The concept of sonic atmosphere has become the focus of an increasing amount of attention in both academic and public forums, but scholars have developed diverging and overlapping definitions of the concept which threatens to inhibit our progress in understanding atmospheric phenomena. This paper draws on recent developments in the field of New Aesthetics and New Phenomenology. In particular, the research work highlights the role a sonic atmosphere has as a backdrop of the acoustic environment and the soundscape, and explores the relationships existing among these concepts. This provides us with a reference framework for studying movement through and between sonic atmospheres and to understand the possible relationships unfolding between an individual's mood and the affective tonality and affordances of a sonic space. A case study exemplifies the application of the proposed conceptual framework in the field of urban design.

13:00 Lunch      
14:00 Oral Session 2

Gaming Audio

Session Chair: Juan Martinez-Avila (MRL, Nottingham, UK)

 

Arts Centre Lecture Theatre
[5] 14:00 - Michael Urbanek and Florian Güldenpfennig

Celebrating 20 Years of Computer-based Audio Gaming

We look back on two decades of academic research on audio games. During this time, a substantial amount of research has explored many facets of this special genre of computer games. However, despite many publications, there is a lack of review papers, which help delineate this growing research field. For this reason, we take one step back and investigate 20 years of audio game research by synthesizing a literature review adopting grounded theory methods. The resulting research map provides an overview of efforts into audio games with a special focus on how to design for audio games. We observed three important trends or tensions in audio game research. Firstly, audio games research depended heavily on technological advancements during the last two decades. Secondly, most studies about audio games were conducted with novices to audio games in lab situations, that is, based on artificial situations and not on real gamers and their genuine experience. Thirdly, the audio game design process per se has been greatly neglected in the literature so far. We conclude the paper by discussing design or research implications.

[6] 14:25 - Katja Rogers and Michael Weber

Audio Habits and Motivations in Video Game Players

Game music is increasingly being explored in terms of empirical effects on players, but we know very little about how players actually perceive and use background music in games, and why. We conducted a survey (N=737) to gain an understanding of players' in-the-wild audio habits and motivations, which can inform future research and industry practices regarding game audio design. The results indicate a wide variance in players' estimation of the importance of music in games, and provide evidence for a substantial number of players who do not listen to games' provided background music, often in favour of additional/parallel media usage. Based on these findings, we discuss implications for game audio design, as well as future research directions.

[7] 14:50 - Adrian Hazzard and Chris Greenhalgh

Adaptive Musical Soundtracks: From In-Game to on the Street

We describe the challenges of creating location-based adaptive musical soundtracks, which we try to address via the iterative development of daoplayer, a tool to create such media experiences. Daoplayer is broadly modelled on the functionality of computer game middleware, functionality we argue is missing from most tools freely available for location-based experiences. We chart the development progress and use in practice, identifying the emerging challenges and outlining our responses to these along the way. Our focus is broad, taking in technical concepts and infrastructure, in addition to the creative concerns of the composer. We reveal that our tool was largely successful in delivering fine grained ‘musical’ soundtracks, which on the one hand highlights the similarities between computer game soundtracks and location-based soundtracks, but on the other hand we note a number of key distinctions.

15:15 Coffee / Tea Break      
15:40 Oral Session 3

Journeys with Sound and Data

Session Chair: Jo Cormac (University of Nottingham, UK)

Arts Centre Lecture Theatre
[8] 15:40 - Emma Young, Alan Marsden and Paul Coulton

Making the Invisible, Audible: Sonifying Qualitative Data

We describe how Embosonic Sketching, a novel approach to qualitative data sonification, was employed in the design of a sound art installation. The method was conceived to minimise designer bias in the creation of sound artworks which seek to faithfully represent the lived experience of a sample group. The ‘Her[sonifications]’ project was driven by the notion that much exists beyond sight, yet remains undiscovered, and explored bringing the unseen to light through the transformative medium of sound. We ran a series of workshop and focus group sessions to engage with women from a range of backgrounds including artists, writers, designers and researchers to explore how we could harness the emotive qualities of sound to communicate the visceral experience of womanhood. We contribute insight into how Embosonic Sketching can be employed to enrich the process of qualitative data collection in group-based settings; to produce sound sketches to inform the sound design process; and most significantly, to create self-actualised sonifications of visceral and experiential phenomena. Our findings demonstrate that Embosonic Sketching is a useful and accessible tool for exchanging ideas about sound and promotes heightened participant engagement with participatory design processes.

[9] 16:05 - Iain Emsley, David de Roure, Pip Willcox and Alan Chamberlain

Performing Shakespeare: From Symbolic Notation to Sonification

We present an ongoing project using Joshua Steele’s symbolic notation to represent prosody in Eighteenth-Century dramatic performances. We discuss the sonification of the original notation to simulate the work and how it can be used to support other experiments. Drawing on two experimental models and their different methodologies, we consider how the digitised version relates to the historical work and the challenges that they bring. Having outlined the framework and challenges for marking up the current work using semantic web technology, such as the PROV ontology, we demonstrate a notebook tool that links user annotations to the generated audio model, storing revisions and edits for re-use. This project demonstrates sonification’s use as an experimental Humanities tool and as a way of thinking about historical prosody models.

[10] 16:30 - David L Page

Music & Sound-tracks of our Everyday Lives

The original aim of this professional Doctor of Creative Industries (DCI) Research Project was to investigate music–making practice and Self as a practitioner in the process of creating and producing a DIY music artifact. Specifically: to investigate why I as the practitioner felt a connection with one form of music-making (acoustic instrument-based), and not a connection with another form of music-making (digital virtual-based). As a phenomenologist, I situated Self into this auto-ethnographic study in the dual roles of researcher and practitioner; developing first-person narratives of my personal journey, critical reflection and reflexive practice. The holistic and multi-dimensional nature of this research has provided rich and nuanced data, illuminating the co-constituted nature of Self, interpreting meaning, and practice. In particular, the research study contextualises contemporary DIY creative practice relative to three interdependent tenets: music & sound-making practice, meaning-making and Self-making, where these tenets are understood in terms of hybridity, agency and subjectivity.

17:00 Opening Reception Demos, Posters, Industry Stalls   Rehearsal Hall
    [1] Poster 1 - Darrell Gibson and Richard Polfreman

A Journey in (Interpolated) Sound: Impact of Different Visualizations in Graphical Interpolators

Graphical interpolation systems provide a simple mechanism for the control of sound synthesis systems by providing a level of abstraction above the parameters of the synthesis engine, allowing users to explore different sounds without awareness of the synthesis details. While a number of graphical interpolator systems have been developed over many years, with a variety of user-interface designs, few have been subject to user-evaluations. We present the testing and evaluation of alternative visualizations for a graphical interpolator in order to establish if the visual feedback provided through the interface aids the identification of sounds with the system. Typically, interpolator systems present the user with a two-dimensional graphical pane where synthesizer presets can be located. Moving an interpolation point cursor within the pane will then calculate new parameter values, based on the interpolation model, cursor position and the relative locations of the presets, generating new sounds. These systems therefore supply users with two sensory modalities in the form of sonic output and visual feedback from the interface. Our testing aimed to study how users interact with interpolation systems and make journeys through the interpolated sounds defined by the space, in order to better understand the design considerations for graphical interpolators. Furthermore, the testing examined if, when different levels of visual feedback are provided to the user, this aids the discovery of new sounds. The testing took the form of comparing the users’ mouse traces, showing the journey they made through the interpolated sounds when different visual interfaces were used. In addition, Null Hypothesis Significance Testing (NHST) was undertaken to examine if there is a significant difference in the way users interact with different interfaces. Sixteen participants took part in the user testing and a summary of the results is presented, showing that the visuals provide users with additional cues that lead to better interaction with the interpolators.
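
As an illustrative aside, not taken from the paper (which does not specify its interpolation model): a common model for such preset panes is inverse-distance weighting, where the cursor blends presets in proportion to their proximity. A minimal Python sketch with hypothetical preset positions and parameter values:

```python
import numpy as np

def interpolate_presets(cursor, preset_positions, preset_params, power=2.0, eps=1e-9):
    """Inverse-distance-weighted blend of synthesizer presets.

    cursor           -- (x, y) position of the interpolation point in the pane
    preset_positions -- (N, 2) array of preset locations
    preset_params    -- (N, P) array, one row of synth parameters per preset
    Returns a (P,) vector of interpolated parameter values.
    """
    d = np.linalg.norm(preset_positions - np.asarray(cursor), axis=1)
    if np.any(d < eps):                  # cursor sits exactly on a preset
        return preset_params[np.argmin(d)]
    w = 1.0 / d ** power                 # closer presets get more weight
    w /= w.sum()
    return w @ preset_params

# Example: three presets at the corners of the pane, four parameters each
positions = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
params = np.array([[0.1, 0.8, 0.3, 0.0],
                   [0.9, 0.2, 0.5, 1.0],
                   [0.4, 0.4, 0.9, 0.5]])
print(interpolate_presets((0.6, 0.4), positions, params))
```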

 
    [2] Poster 2 - Andrew Thompson and Gyorgy Fazekas

A Model-View-Update framework for Interactive Web Audio Applications

We present the Flow framework, a front-end framework for interactive Web applications built on the Web Audio API. It encourages a purely declarative approach to application design by providing a number of abstractions for the creation of HTML, audio processing graphs, and event listeners. In doing so we place the burden of tracking and managing state solely on the framework rather than the developer. We introduce the Model-View-Update architecture and apply it to audio application design. The MVU architecture is built on the unidirectional flow of data through pure functions, pushing side effects onto the framework's runtime. Extending the concept of a View to encompass both the visual and audio output of an application is at the heart of the Flow framework. The implementation of a virtual audio graph is central to Flow's guarantees of purity, describing audio nodes and parameters as simple JSON. We present a polyphonic synthesiser as a demonstration of the framework. Future plans for the framework include a robust plug-in system to add support for third-party audio nodes and browser APIs such as Web Sockets, a time travelling debugger to replay sequences of messages to the runtime, and a bespoke programming language that better aligns with Flow's functional influences.

 
    [3] Poster 3 - Etienne Richan and Jean Rouat

A Study Comparing Shape, Colour and Texture as Visual Labels in Audio Sample Browsers

Searching through vast libraries of sound samples can be a daunting and time-consuming task. Modern audio sample browsers use mappings between acoustic properties and visual attributes to visually differentiate displayed items. There are few studies focused on the effect of these mappings on the time it takes to search for a specific sample. We designed a study to compare using shape, colour and texture as visual labels in a known-item search task. We describe the motivation and implementation of the study, then present a preliminary analysis of results. We find that shape and colour outperform texture. Based on these results we propose modifications to the study and avenues for further analysis.

 
    [4] Poster 4 - Luca Andrea Ludovico

A Web Prototype to Teach Music and Computational Thinking Through Building Blocks

This paper presents the recent evolution of a Web prototype originally conceived to teach music and computational thinking to preschool and primary school learners through a gamification approach. The software tool, called Legato, is based on the metaphor of building blocks, whose characteristics (e.g., position in space, shape, and color) can be associated with basic music parameters (e.g., pitch, rhythm, and timbre). Legato is a Web app written using standard languages, such as HTML5, CSS and JavaScript; besides, it adopts the Web MIDI API to produce sounds. The prototype is made publicly available for evaluation and use in an educational context.

 
    [5] Poster 5 - Yesid Ospitia, Jose Ramon Beltran, Cecilia Sanz and Sandra Baldassarri

Dimensional Emotion Prediction Through Low-Level Musical Features

This article presents the design of a prediction system for musical pieces according to the emotions perceived by listeners. The main objective is to determine an optimal solution that maximizes the success rate of predictions using a machine learning technique. For the training process, a data set of 1802 sound files previously annotated in a dimensional emotional model with arousal and valence ratings is used; each sound file has 260 low-level features obtained from a dynamic audio feature extraction process. The classification process must predict approximate values, calculating a valence/arousal (V/A) coordinate for each song. These results will be used to carry out the emotional classification of the music in the near future.

 
    [6] Poster 6 - Jeevan Nayal, Abhishek Joshi and Bijendra Kumar

Emotion Recognition in Songs via Bayesian Deep Learning

In an era where enormous amounts of data are generated every moment through multimedia and the Internet, songs are no exception. New songs are released through the Internet and make their way into digital music libraries. However, music information retrieval on these platforms can be really challenging, and in particular, the task of recognizing musical emotion is a popular research area. In this paper, we propose a novel method to recognize the emotion implicit in songs. To the best of our knowledge, ours is the first attempt to address emotion recognition with a Bayesian deep learning technique. We obtain spectrograms from the audio to leverage both the time and frequency information and classify them with a Bayesian Convolutional Neural Network (CNN). We demonstrate this approach by evaluating it on a benchmark dataset and achieve improved performance over traditional machine learning methods that have been used in the past for this task. Further, we provide a thorough analysis of our proposed approach and perform a statistical significance test comparing the proposed model against the baseline.
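
The abstract does not say which Bayesian approximation is used; one common, lightweight option is Monte Carlo dropout, where dropout stays active at inference time and repeated stochastic forward passes over the spectrogram approximate a predictive distribution. A hedged PyTorch sketch (layer sizes and the class count are illustrative assumptions, not the authors' model):

```python
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    """Small CNN over mel-spectrograms; the dropout layers double as the
    approximate posterior when kept active at inference time."""
    def __init__(self, n_classes=4, p=0.3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), nn.Dropout2d(p),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), nn.Dropout2d(p),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Sequential(nn.Flatten(), nn.Dropout(p), nn.Linear(32, n_classes))

    def forward(self, x):
        return self.classifier(self.features(x))

def mc_predict(model, x, n_samples=30):
    """Monte Carlo dropout: average class probabilities over stochastic passes;
    their spread gives a rough per-class uncertainty estimate."""
    model.train()                        # keep dropout active at inference
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)

# Example: a batch of 8 single-channel 128x431 mel-spectrograms
mean, std = mc_predict(EmotionCNN(), torch.randn(8, 1, 128, 431))
print(mean.shape, std.shape)
```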

 
    [7] Poster 7 - Dalia Senvaityte, Johan Pauwels and Mark Sandler

Guitar String Separation Using Non-Negative Matrix Factorization and Factor Deconvolution

Guitar string separation is a novel and complicated task. Guitar notes are not pure steady-state signals, hence, we hypothesize that neither Non-Negative Matrix Factorization (NMF) nor Non-Negative Matrix Factor Deconvolution (NMFD) are optimal for separating them. Therefore, we separate steady-state and transient parts using Harmonic-Percussive Separation (HPS) as a preprocessing step. Then, we use NMF for factorizing the harmonic part and NMFD for deconvolving the percussive part. We make use of a hexaphonic guitar dataset which allows for objective evaluation. In addition, we compare several types of time-frequency mask and introduce an intuitive way to combine a binary mask with a ratio mask. We show that the HPS mask type has an effect on source estimation. Our proposed method achieved results comparable to NMF without HPS. Finally, we show that the optimal mask at the final separation stage depends on the estimation algorithm.
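
As a rough sketch of the pipeline described (harmonic-percussive separation as preprocessing, then factorization), and not the authors' implementation, the harmonic branch could look like the following with librosa and scikit-learn; the NMFD step for the percussive part is omitted and the file name is a placeholder:

```python
import librosa
import numpy as np
from sklearn.decomposition import NMF

y, sr = librosa.load("guitar_phrase.wav", sr=44100)       # placeholder recording

# 1. Harmonic-percussive separation as a preprocessing step
y_harm, y_perc = librosa.effects.hpss(y)

# 2. NMF on the magnitude spectrogram of the harmonic part:
#    W holds spectral templates (ideally one per string), H their activations
D = librosa.stft(y_harm, n_fft=2048, hop_length=512)
S = np.abs(D)
nmf = NMF(n_components=6, init="nndsvd", max_iter=400)     # six strings, six templates
W = nmf.fit_transform(S)                                   # (freq_bins, 6)
H = nmf.components_                                        # (6, frames)

# 3. Soft (ratio) mask for one component, applied to the complex spectrogram
k = 0
mask = (W[:, [k]] @ H[[k], :]) / (W @ H + 1e-9)
y_k = librosa.istft(mask * D, hop_length=512)              # estimate of component k
```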

 
    [8] Poster 8 - Luca Turchet and Mathieu Barthet

Haptification of Performer Control Gestures in Live Electronic Music Performance

In this paper, we introduce musical haptic wearables for audiences (MHWAs) which provide sensing and haptic stimulation technologies for networked musical interaction using wireless connectivity. We report on a concert experiment during which audience members could experience vibro-tactile feedback mapped to the control gestures of two electronic music performers. Preliminary results suggest that MHWAs may increase the audience's understanding of the musical expression and the presence of the performers when the tempo is slow while no significant effects were found at fast tempi. Participants' comments also indicate that vibro-tactile feedback related to musical attributes such as beat could enrich some aspects of the live music experience.

 
    [9] Poster 9 - Luca Turchet

Interactive Sonification and the IoT: The Case of Smart Sonic Shoes for Clinical Applications

To date, little attention has been devoted by the research community to applications of the Internet of Things (IoT) paradigm to the field of interactive sonification. The IoT has the potential to facilitate the emergence of novel forms of interactive sonifications that are the result of shared control of the sonification system by both the user performing the gestures locally to the system itself, and one or more remote users. This can for instance impact therapies based on auditory feedback where the control of the sound generation may be shared by patients and doctors remotely connected. This paper describes a prototype of connected shoes for interactive sonification that can be remotely controlled and can collect data about the gait of a walker. The system targets primarily clinical applications where sound stimuli are utilized to help guide and improve walking actions of patients with motor impairments.

 
    [10] Poster 10 - Stine Lundgaard, Peter Axel Nielsen and Jesper Kjeldskov

Interaction Design for Domestic Sound Zones

Sound zone systems have actively been developed for more than two decades. Building on this, we explore four different interaction design approaches for domestic sound zone systems: Tangible representation, light projection, familiar objects, and handhelds. These four approaches were conceived through a scenario-based workshop with HCI and IS experts responding to the functional challenges, opportunities, and requirements of interactive sound zones. The work presented in this paper contributes to development of interaction design for sound zone systems which is an essential parallel to the technical development.

 
    [11] Poster 11 - David Alexander and Jack Armitage

LiveCore: Increasing Liveness in a Low-Level Dataflow Programming Environment

Liveness is an important factor in live coding, but work on liveness frequently focuses on high-level, textual environments. While these environments offer manifold abstraction capabilities, users of low-level dataflow programming environments could also benefit from increased liveness. In this work we introduce LiveCore: a macro library for the low-level dataflow environment Reaktor Core enabling live coding. LiveCore manages program state at audio rates and provides a suite of modules for musical pattern generation, sequencing and synthesis. LiveCore increases liveness in Reaktor Core from an editable dataflow program, to one with continuous audio suitable for musical performance. We reflect on the design process to discuss the qualitative differences of liveness in low-level dataflow programming, compared with other forms of live coding. We suggest that live coding in a low-level dataflow environment provides a uniquely immediate experience for the performer.

 
    [12] Poster 12 - Sara Nielsen, Lars Bo Larsen, Kashmiri Stec and Adèle Simon

Mental Models of Loudspeaker Directivity

We carry out a study to investigate how naïve (non-audio-experts) users understand the concept of sound directivity. This was done in the context of loudspeaker reproduction of sound fields with the purpose being to discover how such acoustical phenomena can best be explained to users. We investigated the mental models of 20 participants via an interview-based approach, in which we asked participants to draw and explain how they understood directivity, only providing them with minimal prior information. The interviews and drawings were analysed and mental models were extracted. Our analysis showed the models could be categorized into three General Mental Model Types (GMMTs): Direction of Sound, Area with Sound, and Sound Waves. These GMMTs were then used to build a 3-level combined model that also contains observations of what each GMMT is suitable for when trying to e.g. explain or illustrate loudspeaker directivity to non-expert users. These guidelines can be useful for designing visual representations to help explain loudspeaker directivity to non-expert users, and could be used for visualising further complex acoustical concepts.

 
    [13] Poster 13 - Signe Lund Mathiesen, Derek Victor Byrne and Qian Janice Wang

Sonic Mug: A Sonic Seasoning System

This paper outlines the development of an in-progress prototype system that explores the interplay between sonic interaction and eating activities. The music-playing mug prototype is designed as a physical interface which aligns the user’s senses with the act of drinking. Drinking from the mug involves multiple senses, including tactile interaction with the mug, gustatory stimuli from the beverage, and by engaging with the sonic mug, the user becomes attentive towards the onset of the sound when drinking, thereby involving the sense of hearing as well. The system is being developed as an experiential piece which allows the user to explore the nature of multisensory perception and to experience how what we taste can be influenced by what we listen to. An initial pilot study was carried out, revealing a relationship between sound liking and taste evaluation, in addition to certain design challenges to be addressed in subsequent iterations. In this paper, we discuss these issues and propose new directions for the development of the prototype.

 
    [14] Poster 14 - Trevor Hunter, Peter Worthy, Ben Matthews and Stephen Viller

Using Participatory Design in the Development of a New Musical Interface: Understanding Musician’s Needs beyond Usability

The design of New Musical Interfaces often involves an intersection of disciplines: New Musical Interface (NMI) design, Human-Computer Interaction (HCI) and Interaction Design (IxD). This intersection has brought into NMI design approaches and philosophies arising from the focus of HCI and IxD on human-centred design. Not unexpectedly, a number of challenges and questions have arisen, along with a significant tension related to the nature of user involvement in the design process. In this project, an approach informed by participatory design was adopted where musicians became integral to the design process from start to finish. Through this process a deeper understanding of the needs of musicians beyond usability considerations was developed. Whilst this understanding is bounded by the specifics of the design context, this project is submitted as a case study that may provide some guidance in the processes adopted by others when designing NMIs.

 
    [1] Demo 1 - Michael Urbanek, Michael Habiger and Florian Güldenpfennig

Creating Audio Games Online with a Browser-Based Editor

Play has been identified as a fundamental human desire (see, e.g., Huizinga's seminal work on ''Homo Ludens''). To little surprise then, people have also used sound in play and to create games. Since the advent of the personal computer, the genre of audio games invites sighted and visually impaired people alike to play interactive computer games solely based on sound renderings. While audio games are popular, especially among blind people, there is a lack of development tools to support audio game design and to foster further growth of this genre. For this reason, we demonstrate a browser-based audio game editor that we have developed over the last year or so, drawing on the experience and needs of seven long-term audio gamers. To the best of our knowledge, it is the first application or tool of its kind. Its key features are easy usage (including instant game play and sound rendering) and open source development to increase sustainability and possible impact.

 
    [2] Demo 2 - Jonathan Weinel

Cyberdream VR: Visualizing Rave Music and Vaporwave in Virtual Reality

Cyberdream VR is a short artistic virtual reality (VR) experience, which is based on the concept of visualizing the imaginative worlds suggested by rave music and vaporwave as symbolic, spatial, virtual environments. The piece is conceptualized as a virtual hallucination through the broken techno-utopias of cyberspace. Aesthetically, the work adapts the forms of 1990s VJ performance, demoscene animations, and the visual language of rave fliers from this era, constructing these forms as virtual spaces that the user can enter into through VR. Drawing upon the Internet-borne music subculture vaporwave, the piece also deconstructs the visual language of 1990s techno-utopian computer culture. By transporting the user into the imaginative worlds suggested by rave music and vaporwave, Cyberdream VR more broadly indicates a possible approach to visualizing music that prioritizes symbolic representation. In the future, this approach could be applied for other types of music and yield new transformative approaches to music visualization that may eventually be automated.

 
    [3] Demo 3 - Luke Skarth-Hayley and Julie Greensmith

Demonstrating Customisation Markup in the Siren Songs Sonification System

Live and retrospective analysis of network traffic is a challenging task for IT Departments and Network Operating Centres. Examining graphs and log files is time-consuming, and alerts can be missed. Network sonification provides an ‘at-a-listen’ state of the network that, during normal operation, is easy to ignore and non-fatiguing for listeners, and during attacks surfaces the type of attack with a unique sound signature. Prior research into network sonification lacks a formal grammar and aesthetic considerations. This demo paper presents the technical details of Siren Songs, a network sonification system which translates network traffic and attacks into a customisable MIDI output that can feed into any MIDI instrument or Digital Audio Workstation (DAW). The related research provoked interesting questions about the aural bandwidth of listeners, and the system as presented enables bespoke sonifications for future investigation of aural bandwidth in network monitoring.

 
    [4] Demo 4 - Daniel Mayer

PbindFx – An Interface for Sequencing Effect Graphs in the SuperCollider Audio Programming Language

The class PbindFx, contained in miSCellaneous_lib, an extension of SuperCollider (SC), enables the sequencing of arbitrary audio effect (fx) graphs and corresponding parameter sequences. In this context an fx graph means a graph of SC SynthDefs, i.e. synthesis and/or processing instruments, which themselves are naturally defined by graph structures. Therefore PbindFx can be understood as a flexible meta-synthesis tool, including the possibility of realtime replacement of all of its constitutive elements. When applied to the sequencing of short events, unusual variants of granular synthesis can be achieved. This special application has been demonstrated in my artistic research project kitchen studies, which is documented in the artistic research database Research Catalogue; miSCellaneous_lib includes the source code of the fixed media piece of the same name.

 
    [5] Demo 5 - Trevor Hunter, Peter Worthy, Ben Matthews and Stephen Viller

Soundscape: Participatory Design of an Interface for Musical Expression

The design of New Musical Interfaces (NMIs) has been informed by principles and methods core to Human-Computer Interaction. Largely, the field of NMIs remains focused on usability as it evaluates and designs interfaces. Both fields, however, recognize but still wrestle with the need to adopt a more humanistic approach to both design and evaluation. In this project a design process informed by participatory design principles was followed. Through the discussions and evaluations within that process, the detail of musicians' needs and values emerged, indicating a need for the field to step beyond usability. Soundscape, the resulting NMI, has value as a physical embodiment of the range of needs and values of those musicians in its design as an interface to support creativity. Further, it is hoped that through demonstration to the NMI design community, critical discourse around humanistic design considerations will be supported.

 
    [6] Demo 6 - Laura Boffi

The First Experience Prototype of The Storytellers Project

This paper reports on the first experience prototype of The Storytellers project, a remote reading-aloud service for children. The interactive interface for children, called the Storybell, is described, along with a low-fidelity prototype of it that was developed so that children and readers could run a first remote reading session.

 
    [7] Demo 7 - George Meikle

ScreenPlay: A Topic-Theory-Inspired Interactive System

In many ways, topic theory forms the foundation of human perception of emotion through music. Accordingly, it affords great potential for creative exploitation within human-computer interaction (HCI) in music. ScreenPlay is an interactive computer music system (ICMS) that implements topic theory as part of its approach to facilitating intuitive and engaging interactive musical experiences with the hope that the emotional manipulation of music results in more meaningful interactions for novice users whilst simultaneously posing an intriguing compositional/performative paradigm for experienced musicians.

 
    [8] Demo 8 - Serge Bulat

Inkblot

"INKBLOT" is an electroacoustic music experience by Serge Bulat, designed to demonstrate one's ability to reinterpret data, perceiving reality as a "personal construct". Similar to the psychological inkblot test, the audio piece serves as a trigger for imagination; expecting an association, thought or feeling to take place. By adding a sense, the test takes one step further in exploring the territories of the Mind. The success of the test depends solely on the testee, based on the idea that the participant is both the experiment and the experimenter. Stimulated by both visuals and sound, the subjects are invited for a"self-diagnosis", formed through the sensory experience. Participants are welcomed to compare their reading and interpretations with the source material, to reinforce the experiment. Described by the artist as "listening parties for the thinkers", "INKBLOT" is aimed to bring back the wonder of sound, interactivity and conceptualism in music.

 
    [9] Demo 9 - Laurence Cliffe, James Mansell, Joanne Cormac, Christopher Greenhalgh and Adrian Hazzard

The Audible Artefact

This demo introduces two ongoing projects where audio augmented reality is implemented as a means of engaging museum and gallery visitors with audio archive material and associated objects, artworks and artefacts. It outlines some of the issues surrounding the presentation and engagement with sound based material within the context of the cultural institution, discusses some previous and related work on approaches to the cultural application of audio augmented reality, and describes the research approach and methodology currently engaged with in developing an increased understanding in this area. Additionally, it discusses the project within the context of related cultural and sound studies literature, presents some initial conclusions as a result of a practice-based approach, and outlines the next steps for the project.

 
    [10] Demo 10 - Jacob Harrison, Robert Jack and Andrew McPherson

The Strummi: The Design of a Guitar-like Research Product

We present the Strummi: a digital musical instrument (DMI) designed to emulate aspects of guitar playing. The Strummi uses a technique of audio-rate excitation of the Karplus-Strong plucked-string algorithm to afford a rich and expressive interaction with a digital string instrument. We have designed several versions of the instrument for use in a variety of user studies and real-world settings. In this paper, we provide an overview of the key technical features of the Strummi and explain the motivations behind our design decisions in relation to two concepts from design research: technology probes and research products. We aim to show that DMI design and research can benefit from a consideration of these concepts through an account of the ways in which the Strummi has been used as both a research product and a musical instrument in its own right.
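
For readers unfamiliar with the Karplus-Strong plucked-string algorithm that the Strummi excites at audio rate: in its textbook form it is a noise-filled delay line with an averaging low-pass filter in the feedback path. A minimal sketch of that textbook form, not the Strummi's implementation:

```python
import numpy as np

def karplus_strong(freq, duration, sr=44100, damping=0.996):
    """Textbook Karplus-Strong: a noise burst circulating in a delay line,
    low-pass filtered by averaging adjacent samples on each pass."""
    n_samples = int(sr * duration)
    delay = int(round(sr / freq))              # delay length sets the pitch
    buf = np.random.uniform(-1, 1, delay)      # the "pluck": a burst of noise
    out = np.zeros(n_samples)
    for i in range(n_samples):
        out[i] = buf[i % delay]
        # two-point average acts as a low-pass filter; damping shortens the decay
        buf[i % delay] = damping * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
    return out

tone = karplus_strong(196.0, 2.0)              # roughly a guitar G3 string
```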

 
19:00 Conference Reception & Concert
Rikard Lindell: Poietic the becoming
Chris Rhodes: Viano
Canny: Phases
Gerry Brazell: Flowchart Mat
Cecilia Suhr: I, You, We
Thomas Harris: Genesis
Biagio Francia: The Economy Experience
Nicola Fumo Frattegiani: Polvere near
Djanogly Recital Hall
20:30 End of day      

 

Thursday 19th
Time Activity Details Location
8:00 Registration Opens     Foyer
9:00 Keynote Rebecca Stewart Listening To The Material Turn

Projected futures of ubiquitous and pervasive computing promised invisible technologies that blended into our surroundings. Instead we have a world inhabited by an Internet of Things where technology still holds very real and tangible forms. But why is this relevant to audio developers and researchers? Audio has been recognised as an important modality for interacting with these Things — see the collections of plastic boxes containing speakers and microphone arrays living on bookshelves around the world. However, voice agents and assistants are certainly not the sole example of audio interaction within embedded computing devices. Digital musical interfaces have also been stepping away from the virtual and exploring physical forms while other artistic fields such as theatre and sculpture are looking towards broadening their own creative palettes with digital audio technologies. Therefore materiality is playing a larger role in interactive audio systems, which is a part of a larger movement of a ‘material-turn’ in human-computer interaction. As audio engineers, if we want to stay relevant and influence how audio is used in these spaces, then we need to actively engage with physical materials and those who work primarily with them. This talk will look at the specific materials of paper and textiles in order to explore methods of connecting digital audio to material practices.
Arts Centre Lecture Theatre
10:00 Coffee / Tea Break     Foyer
10:30 Oral Session 4 Journeys in Sound Design and Manipulation

Session Chair: Stine Schmieg Lundgaard (Aalborg University, Denmark)

Arts Centre Lecture Theatre
[11] 10:30 - Stuart Cunningham, Harrison Ridley, Jonathan Weinel and Rich Picking

Audio Emotion Recognition using Machine Learning to support Sound Design

In recent years, the field of Music Emotion Recognition has become established. Less attention has been directed towards the counterpart domain of Audio Emotion Recognition, which focuses upon detection of emotional stimuli resulting from non-musical sound. By better understanding how sounds provoke emotional responses in an audience it may be possible to enhance the work of sound designers. The work in this paper uses the International Affective Digital Sounds set. Audio features are extracted and used as the input to two machine-learning approaches: regression modelling and artificial neural networks, in order to predict the emotional dimensions of arousal and valence. It is found that shallow neural networks perform better than a range of regression models. Consistent with existing research in emotion recognition, prediction of arousal is more reliable than that of valence. Several extensions of this research are discussed, including work related to improving data sets as well as the modelling processes.
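
As a hedged sketch of the modelling step described, arousal and valence can be predicted from extracted audio features with a shallow neural network such as scikit-learn's MLPRegressor; the random matrices below merely stand in for the IADS-derived features and ratings:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score

# X: (n_sounds, n_features) audio features, y: (n_sounds, 2) arousal/valence ratings
rng = np.random.default_rng(0)
X, y = rng.normal(size=(160, 40)), rng.uniform(1, 9, size=(160, 2))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# A shallow network: one small hidden layer, with features standardised first
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("arousal R^2:", r2_score(y_te[:, 0], pred[:, 0]))
print("valence R^2:", r2_score(y_te[:, 1], pred[:, 1]))
```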

[12] 10:55 - Gary Bromham, David Moffat, Mathieu Barthet, Anne Danielsen and György Fazekas

The Impact of Audio Effects Processing on the Perception of Brightness and Warmth

It is not uncommon to hear musicians and audio engineers speak of ‘warmth’ and ‘brightness’ when describing analog technologies such as vintage mixing consoles, multitrack tape machines, and valve compressors. What is perhaps less common is hearing these terms used in association with retro digital technology. A question exists as to how much the low bit rate and low-grade conversion quality contribute to the overall brightness or warmth of a sound when processed with audio effects simulating early sampling technologies. These two dimensions of timbre are notoriously difficult to define and more importantly, measure. We present a subjective user study of brightness and warmth, where a series of audio examples are processed with different audio effects. 26 participants rated the perceived level of brightness and warmth of various instrumental sequences for 5 different audio effects including bit depth reduction, compression and equalisation. Results show that 8 bit reduction tends to increase brightness and decrease warmth whereas 12 bit reduction tends to do the opposite, although this is very much dependent on the instrument. Interestingly, the most significant brightness changes, due to bit reduction, were obtained for bass sounds. For comparison purposes, instrument phrases were also processed with both an analogue compressor and an equalisation plugin to see if any subjective difference was noticed when simulating sonic characteristics that might be associated with warmth. Greater significance was observed when the sound excerpts were processed with the plugins being used to simulate the effects of bit depth reduction.
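
For context on the effect under study: bit depth reduction re-quantises the signal to fewer amplitude steps, adding quantisation distortion. A toy sketch of the 8-bit and 12-bit conditions (not the plugin used in the experiment):

```python
import numpy as np

def reduce_bit_depth(x, bits):
    """Re-quantise a float signal in [-1, 1] to the given bit depth."""
    levels = 2 ** (bits - 1)                 # signed quantisation steps
    return np.round(x * levels) / levels     # coarser steps add quantisation distortion

sr = 44100
t = np.linspace(0, 1.0, sr, endpoint=False)
bass = 0.8 * np.sin(2 * np.pi * 55 * t)      # a simple A1 bass tone

bass_8bit = reduce_bit_depth(bass, 8)        # the condition that tended to raise brightness
bass_12bit = reduce_bit_depth(bass, 12)      # the condition that tended toward warmth
```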

[13] 11:20 - Feng Su and Chris Joslin

Toward Generating Realistic Sounds for Soft Bodies: A Review

Generating realistic sounds for soft bodies is a challenging task due to the complexity of the interactions. Therefore, automatic audio generation based on a procedural approach has become an attractive method for digital synthesis of soft-body sounds. In this paper, we present a comprehensive review in the field of procedural audio, with a special focus on synthesizing the sound of soft bodies. We first introduce the concept of procedural sound generation, including its advantages and challenges for soft-body sound synthesis. Next, we review the state-of-the-art in rigid/non-rigid-body sound synthesis techniques for computer animations and games. Thirdly, we summarize and survey a taxonomy of existing synthesis methods from the previous literature by analyzing their benefits and drawbacks for generating soft-body sounds. These methods include modal synthesis, sound texture modeling, motion-driven sound synthesis, wavelet tree learning, granular synthesis, and concatenative sound synthesis. Last but not least, we discuss several possible directions for future research in procedural soft-body sound synthesis.

[14] 11:45 - Thomas Graham, Thor Magnusson, Chinmay Rajguru, Arash Pouryazdan, Alex Jacobs and Gianluca Memoli

Composing Spatial Soundscapes Using Acoustic Metasurfaces

In this work, we explore the use of acoustic metamaterials in delivering spatially significant acoustic experiences. In particular, we discuss a user study run in a space where a dedicated composition is played through a metamaterial "prism". Results show users perceive sound to be louder in the direction determined by the metamaterial, depending on its frequency. This demonstrates how an acoustic metamaterial prism, in combination with an electronic composer, may be used to deliver different sound messages to different parts of an audience, even with a single speaker. We underpin our conclusions with user observations and heuristic considerations on possible application scenarios.

12:15 Lunch + Posters / Demos     Rehearsal Hall
13:30 Oral Session 5 Transporting Audiences with New Modes of Sonic Engagement

Session Chair: Trevor Hunter (University of Queensland, Australia)

Arts Centre Lecture Theatre
[15] 13:30 - Dirk Vander Wilt and Morwaread Farbood

Automating Audio Description for Live Theater: Using Reference Recordings to Trigger Descriptive Tracks in Real Time

Audio description is an accessibility service used by blind or visually impaired individuals. Often accompanying movies, television shows, and other visual art forms, the service provides spoken descriptions of visual content, allowing people with vision loss the ability to access information that sighted people obtain visually. At live theatrical events, audio description provides spoken descriptions of scenes, characters, props, and other visual elements that those with vision loss may otherwise find inaccessible. In this paper we present a method for deploying pre-recorded audio description in a live musical theater environment. This method uses a reference recording and an online time warping algorithm to align audio descriptions with live performances, including a process for handling unexpected interruptions. A software implementation that is integrated into an existing theatrical workflow is also described. This system is used in two evaluation experiments that show the method successfully aligns multiple recordings of works of musical theater in order to automatically trigger pre-recorded, descriptive audio in real time.
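
The system described runs an online time warping algorithm incrementally during the show; as a simplified offline illustration of the same alignment idea, dynamic time warping over chroma features can map cue times authored against a reference recording onto a second recording. A librosa sketch with placeholder file names:

```python
import librosa
import numpy as np

hop = 2048
ref, sr = librosa.load("reference_performance.wav", sr=22050)
live, _ = librosa.load("other_performance.wav", sr=22050)

# Chroma features are fairly robust to timbre and level differences between performances
C_ref = librosa.feature.chroma_cqt(y=ref, sr=sr, hop_length=hop)
C_live = librosa.feature.chroma_cqt(y=live, sr=sr, hop_length=hop)

# DTW gives a warping path of (reference frame, live frame) index pairs
_, wp = librosa.sequence.dtw(X=C_ref, Y=C_live, metric="cosine")
wp = wp[::-1]                                    # path is returned end-to-start

def live_time_for_ref_time(t_ref):
    """Map a cue time in the reference recording onto the other recording."""
    f_ref = int(t_ref * sr / hop)
    idx = min(np.searchsorted(wp[:, 0], f_ref), len(wp) - 1)
    return wp[idx, 1] * hop / sr

# e.g. a description cue authored at 83.0 s of the reference
print(live_time_for_ref_time(83.0))
```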

 

[16] 13:55 - Annaliese Micallef Grimaud and Tuomas Eerola

EmoteControl: A System for Live-Manipulation of Emotional Cues in Music

Numerous computer systems have been designed for music emotion research, aiming to identify how different structural and expressive cues of a musical piece affect the emotion conveyed by the music and perceived by the listener. However, most systems are either offline systems, which work by pre-rendering different variations of the music, or real-time systems, which focus mostly on expressive cues. This paper presents a new system called EmoteControl, which allows changes to be made to both structural and expressive cues (tempo, pitch, dynamics, articulation, brightness, and mode) of a musical piece while it plays in real-time. First, a brief overview of previous computer systems will be given, followed by a detailed explanation of EmoteControl’s interface design and structure. A smaller-scale integrated version of the system will also be described, and specifications for the music inputted in the system will be outlined. Interface limitations will be noted, and user evaluation cases for both versions of the interface will be discussed. A demonstration of the interface will be presented, featuring specifically composed musical pieces.

[17] 14:20 - Luca Turchet, Travis West and Marcelo M. Wanderley

Smart Mandolin and Musical Haptic Gilet: Effects of Vibro-Tactile Stimuli During Live Music Performance

In this paper we investigate the role of haptic stimuli in affecting the perception of live music. We designed a study where a smart mandolin performer played live for audience members wearing a gilet-based musical haptic wearable, which provided vibro-tactile sensations in response to the performed music. Six performances were conducted, each of which involved audiences of two people for a total of twelve participants. Results showed that the audio-haptic experience was not homogeneous across participants, who could be grouped as those appreciative of the vibrations and those less appreciative of them. The causes for a lack of appreciation of the haptic experience were mainly identified as the sensation of unpleasantness caused by the vibrations in certain parts of the body and a lack of comprehension of the relation between what was felt and what was heard. Based on the reported results, we offer suggestions for practitioners interested in designing wearables for enriching the musical experience of audiences of live music via the sense of touch. Such suggestions point towards the need for mechanisms of personalization, systems able to minimize the latency between the sound and the vibrations, and a time of adaptation to the vibrations.

14:45 Coffee / Tea Break     Foyer

15:15 Acousmatic Concert
Kinnersley: Metro Faces Petals
Avantaggiato: Atlas of Uncertainty
Peralta: Soundscape graphology Listening Room
Mayer: Kitchen Studies Listening Room
Lin: Entre le son et la lumière
Serafin: 1962
Djanogly Recital Hall

16:00 Oral Session 6

Sonic Journeys in Time: Rhythm and Perception

Session Chair: Inês Salselas (FEUP, Portugal)

Arts Centre Lecture Theatre

[18] 16:00 - Filippo Carnovalini and Antonio Rodà

A Real-Time Tempo and Meter Tracking System for Rhythmic Improvisation

 

Music is a form of expression that often requires interaction between players. If one wishes to interact in such a musical way with a computer, it is necessary for the machine to be able to interpret the input given by the human to find its musical meaning. In this work, we propose a system capable of detecting basic rhythmic features that can allow an application to synchronize its output with the rhythm given by the user, without having any prior agreement or requirement on the possible input. The system is described in detail and an evaluation is given through simulation using quantitative metrics. The evaluation shows that the system can detect tempo and meter consistently under certain settings and could be a solid base for further developments leading to a system robust to rhythmically changing inputs.
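
As a much simplified illustration of the task (not the proposed system), tempo can be read from inter-onset intervals of the user's input and a crude meter guess made from the regularity of accents; every heuristic and threshold below is an illustrative assumption:

```python
import numpy as np

def estimate_tempo(onsets):
    """Tempo in BPM from the median inter-onset interval (onset times in seconds)."""
    beat = float(np.median(np.diff(np.asarray(onsets))))
    return 60.0 / beat, beat

def estimate_meter(onsets, velocities, beat, candidates=(2, 3, 4)):
    """Crude meter guess: pick the grouping of beats whose positions show the most
    consistent accent (velocity) pattern."""
    positions = np.rint((np.asarray(onsets) - onsets[0]) / beat).astype(int)
    v = np.asarray(velocities, dtype=float)
    spread = {}
    for m in candidates:
        means = [v[positions % m == p].mean() for p in range(m)]
        spread[m] = np.std(means)        # large spread -> a regular accent every m beats
    return max(spread, key=spread.get)

# A waltz-like input at 120 BPM: a louder hit on every third beat
taps = np.arange(12) * 0.5
vels = np.tile([100, 60, 60], 4)
bpm, beat = estimate_tempo(taps)
print(bpm, estimate_meter(taps, vels, beat))   # -> 120.0 3
```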

[19] 16:25 - Fred Bruford, Mathieu Barthet, Skot McDonald and Mark Sandler

Modelling Musical Similarity for Drum Patterns: A Perceptual Evaluation

Computational models of similarity for drum kit patterns are an important enabling factor in many intelligent music production systems. In this paper, we carry out a perceptual study to evaluate the performance of a number of state-of-the-art models for estimating similarity of drum patterns. 24 listeners rated similarity between 80 pairs of drum patterns covering a range of styles. We find that many of the models perform well, especially those using density-based features, and a more simplistic rhythm-pattern distance. However, many of the most perceptually important factors reported by listeners (such as swing, genre and style, instrument distribution) are not adequately accounted for. We also introduce a velocity transform method to better incorporate variable onset intensity into rhythm similarity analysis. Inter-rater agreement analysis shows that models are also limited somewhat by individual perceptual differences. These findings will inform future research into improved approaches to drum pattern similarity modelling that integrate existing features with new features modelling a wider range of characteristics.

[20] 16:50 - Nathan Renney and Benedict Gaster

Digital Expression and Representation of Rhythm

 

Music provides a means to explore time by sequencing musical events in a seemingly endless and expressive way. This potential often far exceeds the ability of digital systems to enable composers and performers to explore musical time, perhaps due to the influence of Western music on implementation or maybe due to the challenges involved in the notation of music itself. In this paper we look at ways to explore time within a musical context, looking to create tangible examples and methods for exploring complex rhythmic relationships using digital systems. We draw on the approach for describing sequences in terms of cycles, inspired by the live coding language Tidal Cycles. A simple Domain Specific Language (DSL) is described, in order to realize a Digital Musical Instrument (DMI) that facilitates performing with polyrhythm in an intuitive and tactile way. This highlights the use of DSLs for the design of DMIs. Further, an abstraction for representing sequences of musical events on a digital system is provided, which facilitates complex rhythmic relationships (namely, polyrhythm and polymeter) and extends to handle modulation of time itself.

[21] 17:15 - Neil McGuiness and Chris Nash

The Pulse: Embedded Beat Sensing Using Physical Data

 

This research investigates the utility of drum vibration data as the input to beat-tracking algorithms. This approach presents a novel alternative to using either audio signals or MIDI/Virtual Score representations as a means of tempo following and subsequent tempo control. A prototype system, The Pulse, has been developed as a proof of concept for this approach. The system comprises one or more sensors connected to a microcontroller which runs beat detection and tempo tracking algorithms in real-time. The paper discusses scenarios where this approach benefits over existing approaches. As a means of quantitative evaluation, a methodology to compare the functionality of this sensor-based system to that of a contemporary audio signal-based system was also created in the form of a user study, which was conducted and whose results are analysed here. The conclusion of this project asserts that low-cost sensors, attached to either instruments or performers themselves during a live performance, can be reliably used to detect the percussive onsets required by beat-tracking systems, and that the performance and accuracy of the prototype system is comparable with existing, audio-only systems.
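
As a hedged sketch of the sensor-side idea (not the authors' firmware), percussive onsets can be picked from a vibration stream with an envelope threshold and refractory period, and a running tempo taken from recent inter-onset intervals:

```python
import numpy as np

def detect_onsets(signal, sr, threshold=0.2, refractory=0.05):
    """Onset times (s) where the rectified signal first crosses the threshold,
    ignoring re-triggers inside the refractory window."""
    env = np.abs(signal)
    onsets, last = [], -np.inf
    for i, v in enumerate(env):
        t = i / sr
        if v >= threshold and (t - last) >= refractory:
            onsets.append(t)
            last = t
    return np.array(onsets)

def running_tempo(onset_times, window=4):
    """Tempo (BPM) from the median of the last few inter-onset intervals."""
    if len(onset_times) < 2:
        return None
    ioi = np.diff(onset_times[-(window + 1):])
    return 60.0 / np.median(ioi)

# Synthetic 'vibration' stream: decaying bursts every 0.5 s (120 BPM) plus sensor noise
sr = 1000
t = np.arange(0, 4, 1 / sr)
sig = 0.02 * np.random.randn(len(t))
for hit in np.arange(0, 4, 0.5):
    idx = (t >= hit) & (t < hit + 0.1)
    sig[idx] += np.exp(-40 * (t[idx] - hit))

onsets = detect_onsets(sig, sr)
print(len(onsets), running_tempo(onsets))      # -> 8 hits, ~120 BPM
```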

 

17:45 Make way to meal venue

18:30 Conference Meal     Peggy’s Skylight – https://www.peggysskylight.co.uk/

23:59 End of day

 

Friday 20th
Time Activity Details Location
8:30 Registration Opens     MD - Foyer
9:30 Oral Session 7

Heads up for Auditory Display

Session Chair: Dalia Senvaityte (Queen Mary University of London, UK)

Arts Centre Lecture Theatre

[22] 09:30 - Michael Iber, Patrik Lechner, Christian Jandl, Manuel Mader and Michael Reichmann

Auditory Augmented Reality for Cyber Physical Production Systems

We describe a proof-of-concept approach to the sonification of estimated operation states of 3D printing processes. The results of this study form the basis for the development of an “intelligent” noise protection headphone as part of Cyber Physical Production Systems, which provides auditorily augmented information to machine operators and enables radio communication between them. Further application areas are implementations in control rooms (equipped with multichannel loudspeaker systems) and utilization for training purposes. The focus of our research lies on situation-specific acoustic processing of conditioned machine sounds and operation-related data with high information content, considering the often highly auditorily influenced working knowledge of skilled workers. As a proof of concept, the data stream of error probability estimations regarding partly manipulated 3D printing processes was mapped to three sonification models, giving evidence about momentary operation states. The neural network applied achieves high accuracy (>93%) in error estimation, distinguishing between normal and manipulated operation states. None of the manipulated states could be identified by listening. An auditory augmentation, or sonification, of these error estimations therefore provides a considerable benefit to process monitoring.

[22] 09:55 - Marian Weger and Robert Hoeldrich

A hear-through system for plausible auditory contrast enhancement

In many of our everyday and professional routines, we rely on knowledge we gather from the auditory feedback of physical interactions. In an attempt to facilitate some of these listening practices (particularly percussion), we introduce a hear-through system for intra-stimulus Auditory Contrast Enhancement (ACE) in real time. Plausible spectral ACE is achieved by adopting the neural mechanism of lateral inhibition. Additional decay prolongation facilitates pitch perception. Perceptual plausibility of the augmented auditory feedback from the observer-perspective is investigated in an experiment with auditory-visual stimuli. Measured plausibility forms material-specific patterns depending on spectral dynamics and decay prolongation.
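The lateral-inhibition principle can be sketched as boosting each spectral bin relative to the average of its neighbours, so spectral peaks stand out more. The sketch below is a loose offline illustration, not the authors' real-time hear-through system; the window width and strength are arbitrary.

```python
# Loose sketch of spectral contrast enhancement via lateral inhibition:
# each bin is boosted relative to its local spectral neighbourhood.

import numpy as np

def enhance_contrast(magnitude_spectrum, inhibition_width=5, strength=1.0):
    """Subtract a local (lateral) average from each bin and re-add the result,
    emphasising peaks over their spectral neighbourhood."""
    kernel = np.ones(inhibition_width) / inhibition_width
    local_avg = np.convolve(magnitude_spectrum, kernel, mode="same")
    enhanced = magnitude_spectrum + strength * (magnitude_spectrum - local_avg)
    return np.clip(enhanced, 0.0, None)

# Example: a noisy spectrum with two peaks becomes more clearly bimodal.
bins = np.linspace(0, 1, 256)
spectrum = (np.exp(-((bins - 0.3) ** 2) / 0.001)
            + 0.6 * np.exp(-((bins - 0.7) ** 2) / 0.001)
            + 0.1 * np.random.rand(256))
print(enhance_contrast(spectrum).max() > spectrum.max())  # peaks are amplified
```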

[23] 10:20 - Ronan O'Dea, Rokaia Jedir and Flaithri Neff

Auditory Distraction in HCI: Towards a Framework for the Design of Hierarchically-Graded Auditory Notifications

This paper discusses the hierarchical structure of auditory distractors based on two human systems responsible for distinct pre-attentive processes: 1. the auditory perceptual system and 2. the working memory (WM) system. Specifically, the authors propose accounting for WM function and capacity when designing auditory notifications for multimodal applications, due to the interaction between certain auditory attention mechanisms and WM. A review of literature concerning WM disruption caused by auditory streams, as well as reference to relevant ISO (International Organization for Standardization) standards, is also presented.

[24] 10:45 - Lars Engeln and Rainer Groh

CoHEARence - a qualitative User-(Pre-)Test on Resynthesized Shapes for coherent visual Sounddesign.

To achieve an intuitive way of designing sound, visual approaches to synthesis or sound collages are used. In order to create a coherent workflow between the visuals and the resulting audio, the stimuli should be matched to each other. Therefore, during spectral synthesis and editing, the sound is designed in a visualization of the frequency domain. In this work, a qualitative user pre-test is introduced, which is intended to show whether an intuitive understanding leads from the shape to the sound, that is, whether there is a connection between the visual shape and the subsequent auditory impression.
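One way to picture the shape-to-sound direction examined in the pre-test is to treat a drawn contour as a magnitude spectrum and resynthesize it with random phase. The sketch below is a generic spectral-resynthesis illustration, not the authors' tool.

```python
# Minimal sketch of resynthesizing a drawn shape as sound: the contour is
# interpreted as a magnitude envelope over frequency and inverted via the FFT.

import numpy as np

def resynthesize_shape(shape, sr=16000, duration=1.0):
    """Interpret `shape` (values in [0, 1]) as a magnitude envelope over
    frequency and return a time-domain signal via the inverse FFT."""
    n = int(sr * duration)
    n_bins = n // 2 + 1
    magnitude = np.interp(np.linspace(0, 1, n_bins),
                          np.linspace(0, 1, len(shape)), shape)
    phase = np.random.uniform(0, 2 * np.pi, n_bins)
    spectrum = magnitude * np.exp(1j * phase)
    signal = np.fft.irfft(spectrum, n=n)
    return signal / (np.max(np.abs(signal)) + 1e-9)

# A "two-bump" drawn shape yields a sound with two broad spectral regions.
shape = np.concatenate([np.hanning(32), np.zeros(32), 0.5 * np.hanning(32)])
audio = resynthesize_shape(shape)
print(audio.shape)
```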

11:15

Coffee / Tea Break

MD - Foyer

11:40

Oral Session 8

From Acoustic to Digital: Interacting with Instruments

Chair: Rikard Lindell (Mälardalen University, Sweden)

Arts Centre Lecture Theatre

[25] 11:40 - Keisuke Shiro, Ryotaro Miura, Changyo Han and Jun Rekimoto

An Intuitive Interface for Digital Synthesizer by Pseudo-intention Learning

Digital musical instruments are essential technologies in modern musical composition and performance. However, the interface of the synthesizer is not intuitive enough and requires extra knowledge because of its many parameters. To address this problem, we propose pseudo-intention learning: a novel data collection method for supervised learning in musical instrument development. Pseudo-intention learning collects a data set of paired target tones and inputs performed by the user. We developed a conversion framework that reflects the composer's intention by combining a standard convolutional neural network with pseudo-intention learning. As a proof of concept, we constructed an interface that can freely manipulate the sound source of a digital snare drum and demonstrated its effectiveness in a pilot study. We confirmed that the tone parameters generated by our system reflected the user's intention. We also discuss applying this method to richer musical expression.
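The data-collection idea can be sketched as storing pairs of (performed input, intended tone) and fitting a mapping over them. In the sketch below a nearest-neighbour lookup stands in for the paper's convolutional network, and the feature and parameter dimensions are invented for the example.

```python
# Sketch of pseudo-intention-style paired data collection: each example pairs
# the input the user performed with the tone they intended, and a model maps
# new inputs to tone parameters. Nearest-neighbour here is only a stand-in.

import numpy as np

class PseudoIntentionMapper:
    def __init__(self):
        self.inputs, self.targets = [], []

    def add_pair(self, user_input_features, target_tone_params):
        """Record one (performed input, intended tone) training pair."""
        self.inputs.append(np.asarray(user_input_features, dtype=float))
        self.targets.append(np.asarray(target_tone_params, dtype=float))

    def predict(self, user_input_features):
        """Return the tone parameters whose paired input is closest."""
        x = np.asarray(user_input_features, dtype=float)
        dists = [np.linalg.norm(x - xi) for xi in self.inputs]
        return self.targets[int(np.argmin(dists))]

# Toy example: 2-D input features mapped to 2-D snare tone parameters.
mapper = PseudoIntentionMapper()
mapper.add_pair([0.1, 0.9], [200.0, 0.3])   # soft hit  -> low-pitched, short decay
mapper.add_pair([0.8, 0.2], [450.0, 0.8])   # hard hit  -> bright, long decay
print(mapper.predict([0.7, 0.3]))           # -> [450.0, 0.8]
```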

[26] 12:05 - Juan Pablo Martinez Avila, Adrian Hazzard, Chris Greenhalgh and Steve Benford

Augmenting Guitars for Performance Preparation

 

A substantial number of Digital Musical Instruments (DMIs) are built upon existing musical instruments by digitally and physically intervening in their design and functionality to augment their sonic and expressive capabilities. These are commonly known as Augmented Musical Instruments (AMIs). In this paper we survey different degrees of invasiveness and transformation within augmentations made to musical instruments across research and commercial settings. We also observe a common design rationale among various AMI projects, where augmentations are intended to support the performer's interaction and expression with the instrument. Consequently, we put forward a series of minimally-invasive supportive guitar-based AMI designs that emerge from observational studies with a community of practicing musicians preparing to perform; these studies reveal different types of physical encumbrance that arise from the introduction of additional resources beyond the instrument. We then reflect on such designs and discuss how both academic and commercially-developed DMI technologies may be employed to facilitate the design of supportive AMIs.

[27] 12:30 - Jack Armitage and Andrew McPherson

Bricolage in a hybrid digital lutherie context: a workshop study

 

Interaction design research typically differentiates processes involving hardware and software tools as being led by tinkering and play, versus engineering and conceptualisation. Increasingly however, embedded maker tools and platforms require hybridisation of these processes. In the domain of digital musical instrument (DMI) design, we were motivated to explore the tensions of such a hybrid process. We designed a workshop where groups of DMI designers were given the same partly-finished instrument consisting of four microphones exciting four vibrating string models. Their task was to refine this simple instrument to their liking for one hour using Pure Data software. All groups sought to use the microphone signals to control the instrument's behaviour in rich and complex ways, but found even apparently simple mappings difficult to realise within the time constraint. We describe the difficulties they encountered and discuss emergent issues with tinkering in and with software. We conclude with further questions and suggestions for designers and technologists regarding embedded DMI design processes and tools.

[28] 12:55 - Alejandro Delgado Luezas, Skot McDonald, Ning Xu and Mark Sandler

A New Dataset for Amateur Vocal Percussion Analysis

 

The imitation of percussive instruments via the human voice is a natural way for us to communicate rhythmic ideas and, for this reason, it attracts the interest of music makers. Specifically, automatically mapping these vocal imitations to the instruments they emulate would allow creators to prototype rhythms realistically and more quickly. The contribution of this study is two-fold. Firstly, a new Amateur Vocal Percussion (AVP) dataset is introduced to investigate how people with little or no experience in beatboxing approach the task of vocal percussion. The end goal of this analysis is to help mapping algorithms generalise better between subjects and achieve higher performance. The dataset comprises a total of 9780 utterances recorded by 28 participants with fully annotated onsets and labels (kick drum, snare drum, closed hi-hat and open hi-hat). Secondly, we conducted baseline experiments on audio onset detection with the recorded dataset, comparing the performance of four state-of-the-art algorithms in a vocal percussion context.
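A baseline in the spirit of the paper's onset-detection experiments might look like the sketch below, which detects onsets with librosa and scores them against annotated onset times with a ±50 ms tolerance. The file names are placeholders and this is not the authors' evaluation code.

```python
# Illustrative onset-detection baseline and evaluation for vocal percussion.
# File paths are placeholders; the matching tolerance is an assumption.

import numpy as np
import librosa

def onset_f_measure(reference, estimated, tolerance=0.05):
    """F-measure with greedy one-to-one matching within +/- `tolerance` seconds."""
    reference = sorted(reference)
    estimated = sorted(estimated)
    used, matched = set(), 0
    for r in reference:
        for j, e in enumerate(estimated):
            if j not in used and abs(r - e) <= tolerance:
                used.add(j)
                matched += 1
                break
    precision = matched / len(estimated) if estimated else 0.0
    recall = matched / len(reference) if reference else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

y, sr = librosa.load("utterance.wav", sr=None)              # placeholder recording
estimated = librosa.onset.onset_detect(y=y, sr=sr, units="time")
reference = np.atleast_1d(np.loadtxt("utterance_onsets.txt"))  # placeholder annotations
print("F-measure:", onset_f_measure(list(reference), list(estimated)))
```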

13:30

Lunch / Closing Session / Concert

MD - Recital Hall
