Audio Mostly

Keynotes

Rebecca Fiebrink

“How can machine learning support human musical practices?”

Abstract: It’s 2017, and machine learning suddenly seems to be everywhere: playing Go, driving cars, serving us targeted advertising. Machine learning can compose new folk tunes and synthesise new sounds.

What does this mean for those of us who compose new music or create new interactions with sound? What does our future hold, besides sitting at home all day listening to algorithmically generated music after robots take our jobs?
In this talk, I will challenge you to consider how we can instead use machine learning to better support fundamentally human creative activities. For instance, machine learning can aid human designers engaged in rapid prototyping and refinement of new interactions with sound and media. Machine learning can support greater embodied engagement in the design of those interactions, and it can enable more people to participate in the creation and customisation of new technologies. Furthermore, machine learning is leading to new types of human creative practice with computationally infused media, in which people act not only as designers and implementers, but also as explorers, curators, and co-creators.

Bio: Dr. Rebecca Fiebrink is a Senior Lecturer at Goldsmiths, University of London. Her research focuses on designing new ways for humans to interact with computers in creative practice, including on the use of machine learning as a creative tool. Fiebrink is the developer of the Wekinator system for real-time interactive machine learning, and the creator of a MOOC titled “Machine Learning for Artists and Musicians,” which launched in 2016 on the Kadenze platform. She was previously an Assistant Professor at Princeton University, where she co-directed the Princeton Laptop Orchestra. She has worked with companies including Microsoft Research, Sun Microsystems Research Labs, Imagine Research, and Smule, where she helped to build the #1 iTunes app “I am T-Pain.” She holds a PhD in Computer Science from Princeton University.


Andy Farnell

“New Sounds – War stories from the Avant-garde and how to join the resistance”

Abstract: Originally formulated as an essay titled “In defence of art in music technology”, this talk sees Andy Farnell assess the transition away from human-centred values in the audio arts, and ask what hopeful responses are possible as we approach the end of capitalist realism. The main ideas are summarised in this keynote, which draws on the changes he has seen over the last 30 years in music technology.

We are always looking for new sounds, and the audio arts seem exceptionally fertile and resistant to cookie-cutter culture. Audio is perhaps the last stronghold of unconstrained artistic expression, and students bring a particular passion to finding their own space. Sound is broad, encompassing a whole human faculty and a mode of thinking that McLuhan identifies as Auditory Space. Being unlike the visual, symbolic thinking that dominates our culture, audio attracts many non-conformists and challenging thinkers who thrive within a possibility-oriented (as opposed to problem-centred) world. The biographies of many spirited audio pioneers, from Bell to Bose, show this magic.

All research and original art depend upon certain enabling freedoms: the freedom to read, process, and publish ideas, to teach, to have universal access to knowledge, to share, and to speak freely, as all indigenous aural traditions have for millennia. This year the keys to the open web were handed over to the “Big 5” media powers in the form of HTML5 Digital Rights Management, giving them another asset in a war against general-purpose computing and human expression, and ultimately the power to control not only what is popular in digital art, but what is even possible.

By changing academic practice, supporting free and open source software, and encouraging sharing and reproducible research, we continue a positive fight against an anodyne, controlled monoculture. Join the resistance and find out how, as an audio person, you can be part of the solution, not part of the problem.

Bio: Andy Farnell is a computer scientist from the UK specialising in audio digital signal processing and synthesis. Author of “Designing Sound”, his original research and design work establishes the emerging field of Procedural Audio. As well as consulting for pioneering game and audio technology companies, he teaches widely as a guest lecturer and visiting professor at several European institutions. Andy is a long-time advocate of free and open source software, good educational opportunities, and access to enabling tools and knowledge for all.