Trento, Italy

Sonic experiences in the era of the Internet of Sounds

1-3 September 2021


Keynote Speakers

Prof. Bob Sturm

KTH Royal Institute of Technology, Division of speech, music and hearing


Keynote title: Music at the Frontiers of Artificial Creativity and Criticism

I present an overview of the 5-year research project (commenced Oct. 2020), "Music at the Frontiers of Artificial Creativity and Criticism" (MUSAiC, ERC-2019-COG no. 864189, https://musaiclab.wordpress.com). MUSAiC will analyse, criticise and fundamentally broaden the transformation of three interrelated music practices by artificial intelligence (Ai): 1) listening, 2) composition and performance, and 3) analysis and criticism. For each practice, and grounded in traditional music (e.g., Irish and Swedish), MUSAiC will document and critique the impacts of and ethical issues surrounding Ai, e.g., recommendation, generation and performance synthesis, not to mention frictions around the participation of something divorced from the cultural and historical contexts of music, e.g., national and social identity. What kinds of threats does Ai music generation pose to musicians and traditions? How and to what extent does bias manifest in a listener’s analysis and criticism of music created by or with machines? How can ethical considerations be folded into the engineering and application of these Ai systems? When an Ai system generates a billion tunes, how can one navigate them efficiently? Does the world even need a billion more tunes!? Is this even something to celebrate? I will illustrate my talk with outcomes of my poetic research (https://tunesfromtheaifrontiers.wordpress.com), in which I am engaging with traditional music practice through Ai.

Biography

Bob Sturm is Associate Professor of Computer Science at the KTH Royal Institute of Technology, Stockholm, Sweden. He has degrees in physics, music, multimedia, and engineering, and specializes in signal processing and machine learning applied to music data. He currently leads the MUSAiC project funded by the European Research Council (https://musaiclab.wordpress.com), and is probably best known for his work on horses (https://ieeexplore.ieee.org/document/6847693), the GTZAN dataset (https://arxiv.org/abs/1306.1461), and playing Ai-generated folk music on his accordion (https://tunesfromtheaifrontiers.wordpress.com).


Prof. Paola Cesari

University of Verona, Department of Neurosciences, Biomedicine and Movement Sciences


Keynote title: Sound in action

Sounds are ubiquitous in our living environment. They can stem from different sources, be they biological organisms or mere physical objects. We can detect and select sounds, distill information from them, and use it to perform appropriate actions in the environment. Here, I would like to share ideas from the motor behavior domain and raise questions about the role sounds might have in providing information to guide our motor system, and in turn ask whether the motor system might influence the perception of sound. What are the cortical correlates of audio-motor and audiovisual integration? Can the intention behind another agent’s action be detected through auditory information alone? And, in turn, is this information used to prospectively guide movements? The aim is to unravel the interaction between the human auditory and motor systems, considered from different perspectives.

Biography

Paola Cesari is an Associate Professor at the Department of Neurosciences, Biomedicine and Movement Sciences at the University of Verona. After receiving her bachelor’s degree in Movement Science from the University of Bologna, she moved to the United States, first as a visiting scholar at the University of Pittsburgh, and then received a PhD in Motor Control and Learning at Penn State University. Her current lines of research are sensory-motor control and visuo-motor imagery, action recognition, and motor representation and performance. The overall goal of her work is to improve the basic scientific understanding of the learning and control mechanisms underlying skilled movement and to understand the ability to recognize and predict human behavior through action observation.

Prof. Marianna Obrist

University College London, UCL Interaction Center


Keynote title: Multisensory Experiences: Beyond Audio-Visual Interfaces

Multisensory experiences, that is, experiences that involve more than one of our senses, are part of our everyday life. We often tend to take them for granted, at least when our senses function normally (e.g., normal sight) or are corrected to normal (e.g., with glasses). However, closer inspection of even the most mundane experience reveals the remarkable sensory world in which we live. While we have built tools, experiences and computing systems that play to the human advantages of hearing and sight (e.g., signage, modes of communication, visual and musical arts, theatre, cinema and media), we have long neglected the opportunities around touch, taste, or smell as interface/interaction modalities. In this talk, I will share my vision for the future of computing and what role touch, taste, and smell can play in it, enriching the audio-visual design space.

Biography

Marianna Obrist is Professor of Multisensory Interfaces at UCL (University College London), Department of Computer Science, and Deputy Director (Digital Health) of the UCL Institute of Healthcare Engineering. Her research ambition is to establish touch, taste, and smell as interaction modalities in human-computer interaction (HCI), spanning a range of application scenarios, from immersive VR experiences to automotive and health/wellbeing uses. Before joining UCL, Marianna was Professor of Multisensory Experiences at the School of Engineering and Informatics at the University of Sussex, and Marie Curie Fellow at Newcastle University. Marianna is an inaugural member of the ACM Future of Computing Academy and was selected as a Young Scientist in 2017 and 2018 to attend the World Economic Forum (WEF) in the People’s Republic of China. She is co-founder of OWidgets LTD, a university spin-out enabling the design of novel olfactory experiences; the company’s smell delivery technology was exhibited twice at the WEF in Davos, in 2019 and 2020. She is a Visiting Professor at the Material Science Research Centre at the Royal College of Art in London and was a Visiting Professor at the HCI Engineering Group at MIT CSAIL in summer 2019. Most recently, she published a book, ‘Multisensory Experiences: Where the Senses Meet Technology’. More information: https://multi-sensory.info/