“How can machine learning support human musical practices?”
Abstract: It’s 2017, and machine learning seems to suddenly be everywhere: playing Go, driving cars, serving us targeted advertising. Machine learning can compose new folk tunes and synthesise new sounds.
Bio: Dr. Rebecca Fiebrink is a Senior Lecturer at Goldsmiths, University of London. Her research focuses on designing new ways for humans to interact with computers in creative practice, including the use of machine learning as a creative tool. Fiebrink is the developer of the Wekinator system for real-time interactive machine learning, and the creator of a MOOC titled “Machine Learning for Artists and Musicians,” which launched in 2016 on the Kadenze platform. She was previously an Assistant Professor at Princeton University, where she co-directed the Princeton Laptop Orchestra. She has worked with companies including Microsoft Research, Sun Microsystems Research Labs, Imagine Research, and Smule, where she helped to build the #1 iTunes app “I am T-Pain.” She holds a PhD in Computer Science from Princeton University.
“New Sounds – War stories from the Avant-garde and how to join the resistance”
Bio: Andy Farnell is a computer scientist from the UK specialising in audio digital signal processing and synthesis. Author of “Designing Sound,” his original research and design work has helped establish the emerging field of Procedural Audio. As well as consulting for pioneering game and audio technology companies, he teaches widely as a guest lecturer and visiting professor at several European institutions. Andy is a long-time advocate of free open source software, good educational opportunities, and access to enabling tools and knowledge for all.