Audio Mostly

Keynotes

Rebecca Fiebrink

“How can machine learning support human musical practices?”

Abstract: It’s 2017, and machine learning suddenly seems to be everywhere: playing Go, driving cars, serving us targeted advertising. Machine learning can compose new folk tunes and synthesise new sounds.
What does this mean for those of us who compose new music or create new interactions with sound? What does our future hold, besides sitting at home all day listening to algorithmically generated music after robots take our jobs?
In this talk, I will challenge you to consider how we can instead use machine learning to better support fundamentally human creative activities. For instance, machine learning can aid human designers engaged in rapid prototyping and refinement of new interactions with sound and media. Machine learning can support greater embodied engagement in the design of those interactions, and it can enable more people to participate in the creation and customisation of new technologies. Furthermore, machine learning is leading to new types of human creative practices with computationally infused media, in which people act not only as designers and implementers, but also as explorers, curators, and co-creators.
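To make the idea of interactive machine learning for sound-interaction design concrete, the following is a minimal sketch in the spirit of Wekinator-style workflows; it is an illustrative example only, not Wekinator's actual implementation, and the feature and parameter names are hypothetical. A designer demonstrates a few (controller input, desired synthesis parameter) pairs, a small regression model learns the mapping, and live input is then mapped to sound parameters so the mapping can be tried, corrected, and retrained quickly.

    # Illustrative sketch of interactive machine learning for mapping
    # controller input to synthesis parameters (assumptions: hypothetical
    # data and parameter names; not Wekinator's real code).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Demonstrated examples: 2-D controller position -> [filter cutoff (Hz), gain]
    X_demo = np.array([[0.1, 0.2], [0.8, 0.3], [0.5, 0.9]])
    y_demo = np.array([[200.0, 0.2], [2000.0, 0.8], [900.0, 0.5]])

    # Train quickly on the small demonstration set so the designer can
    # immediately audition the mapping, then add examples and retrain.
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X_demo, y_demo)

    # At run time, each new controller reading is mapped to synth parameters.
    live_input = np.array([[0.6, 0.4]])
    cutoff, gain = model.predict(live_input)[0]
    print(f"cutoff={cutoff:.1f} Hz, gain={gain:.2f}")

The point of the sketch is the workflow rather than the model: training happens on a handful of designer-supplied examples, so prototyping and refinement stay fast and embodied rather than requiring large datasets.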

Bio: Dr. Rebecca Fiebrink is a Senior Lecturer at Goldsmiths, University of London. Her research focuses on designing new ways for humans to interact with computers in creative practice, including on the use of machine learning as a creative tool. Fiebrink is the developer of the Wekinator system for real-time interactive machine learning, and the creator of a MOOC titled “Machine Learning for Artists and Musicians,” which launched in 2016 on the Kadenze platform. She was previously an Assistant Professor at Princeton University, where she co-directed the Princeton Laptop Orchestra. She has worked with companies including Microsoft Research, Sun Microsystems Research Labs, Imagine Research, and Smule, where she helped to build the #1 iTunes app “I am T-Pain.” She holds a PhD in Computer Science from Princeton University.


Andy Farnell

“New Sounds – War stories from the Avant-garde and how to join the resistance”

Bio: Andy Farnell is a computer scientist from the UK specialising in audio digital signal processing and synthesis. He is the author of “Designing Sound”, and his original research and design work establishes the emerging field of Procedural Audio. As well as consulting for pioneering game and audio technology companies, he teaches widely as a guest lecturer and visiting professor at several European institutions. Andy is a long-time advocate of free and open-source software, good educational opportunities, and access to enabling tools and knowledge for all.