© Sacha Krstulović
Technology and Innovation Leader at Music Tribe (UK)
Keynote title: DSP as a Service
The current geopolitical situation is prompting the music equipment industry to find solutions to work around component shortages, to define new services and business models around remote music creation, and generally to keep and grow music-making as a top-choice activity for the general public.
Dr. Sacha Krstulović is the Head of AI Research at Music Tribe, a major manufacturer of audio equipment and the holding company of iconic brands such as Midas, Behringer, TC Electronic, Aston Microphones and more. Prior to that, he was Director of Research at Audio Analytic, a Cambridge-based AI startup, where he drove forward a new type of AI technology that allows machines to hear sounds. Earlier, he was a Senior Research Engineer at Nuance's Advanced Speech Group (Nuance ASG), where he worked on pushing the limits of large-scale speech recognition services such as Voicemail-to-Text and voice-based mobile assistants (Apple Siri-type services). Before that, he was a Research Engineer at Toshiba Research Europe Ltd., developing novel text-to-speech synthesis approaches able to learn expressive speech acoustics from data. Sacha is the author and co-author of three book chapters, several international patents and several articles in international journals and conferences, mostly revolving around machine learning applied to speech and audio processing. At Music Tribe, Sacha and his research team are focusing their research interests on using AI, machine learning and advanced DSP to lower the barriers to making music: AI empowers, musicians create.
© Braden Kowitz
Director of Stanford University's Center for Computer Research in Music and Acoustics (USA)
Keynote title: Delayscapes
Musical interactions take place with sound propagation delay and with other qualities of delay that can be attributed to human factors. The speed of sound determines the time it takes for sound to travel from source to receiver; the timing dynamics of musical production are less well understood. Feedback loops are the topic of this talk, namely cases in which an outgoing sound or musical "message" makes a round trip, recirculating between two endpoints. Common examples are acoustical environments, whose echoes result from passive loops between reflecting walls, and the active loops that can arise within these rooms when two musicians play together. Internet "rooms" provide places for remote musical interaction with analogous qualities. Delayscapes in the acoustical worlds of air and the internet are compared in terms of the physical round trips of sounds and the differences in interaction.
Chris Chafe is a composer, improviser, and cellist, developing much of his music alongside computer-based research. He is Director of Stanford University's Center for Computer Research in Music and Acoustics (CCRMA). In 2019, he was International Visiting Research Scholar at the Peter Wall Institute for Advanced Studies at the University of British Columbia, Visiting Professor at the Politecnico di Torino, and Edgard-Varèse Guest Professor at the Technical University of Berlin. At IRCAM (Paris) and The Banff Centre (Alberta), he has pursued methods for digital synthesis, music performance and real-time internet collaboration. CCRMA's JackTrip project involves live concertizing with musicians the world over.
© Martin Lifka
Chief Research and Innovation Officer at St. Pölten University of Applied Sciences (Austria)
Keynote title: Audio Mostly!? – What we could actually learn from the digital evolution of music
The rapid progress of digital (audio) technologies has not only completely changed music production and the music industry, but has also suspended some basic principles of sound, influenced sonic perception and changed human listening skills. For instance, the transience of sonic energy, the impossibility of identical repetition, the irreversible principle of cause and effect and the need for dynamic processes have lost importance or even validity: “Music is no longer about time, place, occasion” (Bill Drummond, 2010). Though we still can’t touch the sound itself, it has somehow become a durable medium with almost unlimited access at any time and any place, and “recorded music somehow reduced everything to one genre“ (Bill Drummond, 2010). There is no doubt that we have to re-think the framework of sonic experience and (digital) perception, leading to changing roles and new potentials of sound in the digital age. The music industry was one of the first sectors to be affected by the digital transformation. An analysis of the digital evolution of music could therefore also provide important lessons learned and relevant insights into the ongoing change processes and diverse challenges of our societies in the digital age.
Hannes Raffaseder is internationally active as an award-winning composer and sound artist. His music has been performed in renowned concert halls and at media art festivals. He has more than 20 years of teaching experience in media technology and has been responsible for several research projects dealing primarily with sonic perception and the effects of sound in (digital) media. His more than 40 (scientific) publications as a co-author also include the textbook Audiodesign, published in its second edition by Hanser-Verlag in 2010. In addition to his artistic career, Hannes was a committed professor of media technology and audio design, academic director of a master's degree in digital media technology and founding director of the Institute for Creative\Media/Technology. Since 2019, Hannes has been a member of the executive board of the St. Pölten University of Applied Sciences as Chief Research & Innovation Officer. Also, since 2020, he has acted as lead coordinator of E³UDRES², the Engaged and Entrepreneurial European University as Driver for European Smart and Sustainable Regions.
© multisensory experience lab
Professor at the Department of Architecture, Design and Media Technology at Aalborg University (Denmark)
Keynote title: Multisensory experiences for health and culture
In this talk I will present different research projects we are currently involved in at the Multisensory Experience Lab at Aalborg University in Copenhagen. Specifically, I will focus on the collaboration with the Center for Hearing and Balance at Rigshospitalet in Denmark, using technologies to help hearing-impaired individuals train their listening skills, and the collaboration with the Danish Music Museum, part of the National Museum, to metaphorically take the musical instruments out of the glass cabinet and bring them to life.
Stefania Serafin is Professor of Sonic Interaction Design at Aalborg University in Copenhagen and, together with Rolf Nordahl, leader of the Multisensory Experience Lab. She is the President of the Sound and Music Computing Association, project leader of the Nordic Sound and Music Computing network and lead of the Sound and Music Computing Master's programme at Aalborg University. Stefania received her PhD, entitled “The sound of friction: computer models, playability and musical applications”, from Stanford University in 2004, supervised by Professor Julius O. Smith III. Her research on sonic interaction design and sound for virtual and augmented reality, with applications in health and culture, can be found here: tinyurl.com/35wjk3jn